Questions related to Mathematics
One of the central themes in the philosophy of formal sciences (or mathematics) is the debate between realism (sometimes misnamed Platonism) and nominalism (also called "anti-realism"), which has different versions.
In my opinion, what is decisive in this regard is the position adopted on the question of whether objects postulated by the theories of the formal sciences (such as the arithmetic of natural numbers) have some mode of existence independently of the language that we humans use to refer to them; that is, independently of linguistic representations and theories. The affirmative answer assumes that things like numbers or the golden ratio are genuine discoveries, while the negative one understands that numbers are not discoveries but human inventions, they are not entities but mere referents of a language whose postulation has been useful for various purposes.
However, I cannot see how an anti-realist or nominalist position can respond to these two realist arguments in the philosophy of mathematics. First, if numbers have no existence independent of language, how can one explain the metaphysical difference, which we call numerical, at a time before humans existed, when at t0 a certain space-time region contained what we call two dinosaurs and at t1 what we call three dinosaurs? That seems to be a real metaphysical difference in the sense in which we use the word "numerical", and it does not even require human language, which suggests that number, quantity, etc., seem to be included in the very idea of an individual entity.
Secondly, if the so-called golden ratio (also represented as the golden number and related to the Fibonacci sequence) is a human invention, how can it be explained that this relationship appears in various manifestations of nature, such as the shells of certain mollusks, the florets of sunflowers, waves, the structure of galaxies, the spiral of DNA, etc.? That seems to be a discovery and not an invention, a genuine mathematical discovery. And if it is, it seems to be something like a universal of which those examples are particular cases, perhaps in a Platonic-like sense, which suggests that mathematical entities express characteristics of the spatio-temporal world. However, this form of mathematical realism does not seem compatible with the version that maintains that the entities mathematical theories talk about exist outside of spacetime. That is to say, if mathematical objects bear to physical and natural objects the relationship that the golden ratio bears to the examples mentioned, then it seems that there must be a true geometry and that, ultimately, mathematical entities are not as far outside of space-time as has been suggested. After all, not everything that exists in spacetime has to be material, as the social sciences, which refer to norms, values and attitudes that are not, well know. (I apologize for using a translator. Thank you.)
There are billions of black holes. Each black hole is said to contain a singular point of infinite density and curvature. How could our finite cosmos contain so many infinities? If those singular points are mere mathematical artifacts, there would be no problem. But if they physically exist, our cosmos would harbor an enormous number of infinities, one for each parent black hole! If we summed up those infinities, the universe would have infinite density!
Could you recommend courses, papers, books or websites about modeling language and formalization?
Thank you for your attention and valuable support.
Have you ever wondered about using dimensional analysis in mathematics, as we do in physics?
For example, the Pythagorean formula is a^2 + b^2 = c^2,
which relates the surface areas of squares resting on the different sides of a right-angled triangle.
Therefore, based on a simple dimensional analysis, we may conclude:
a^2 + b^2 could NOT be equal to c^3, due to a conflict of dimensions.
This is a simple example. How about using dimensional analysis in other mathematical problems?
Please feel free to share your ideas and comments.
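To make the idea concrete, here is a minimal sketch (an illustrative helper of my own, not any standard library) that tracks the length-dimension exponent of each term and rejects additions that mix dimensions, exactly the check that rules out a^2 + b^2 = c^3:

```python
from fractions import Fraction

# Illustrative helper (an assumption of this sketch, not a standard API):
# each quantity carries an exponent of the length dimension L.
class Dim:
    def __init__(self, exponent):
        self.exponent = Fraction(exponent)

    def __mul__(self, other):
        return Dim(self.exponent + other.exponent)

    def __pow__(self, k):
        return Dim(self.exponent * k)

    def __add__(self, other):
        # Addition is only meaningful between terms of equal dimension.
        if self.exponent != other.exponent:
            raise ValueError(
                f"dimension mismatch: L^{self.exponent} + L^{other.exponent}")
        return Dim(self.exponent)

length = Dim(1)                  # a, b, c are all lengths (L^1)

lhs = length**2 + length**2      # a^2 + b^2 is L^2: accepted
try:
    bad = length**2 + length**3  # a^2 + c^3 mixes L^2 and L^3
except ValueError as e:
    print("rejected:", e)
```

The same bookkeeping extends to products and rational powers, which is most of what a dimensional check in a mathematical identity needs.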
Please, I need books that teach in depth the mathematics of neural networks, and perhaps suggest possible improvements that could be made to the methodology.
I am thinking of the vector as a point in multidimensional space. The Mean would be the location of a vector point with the minimum squared distances from all of the other vector points in the sample. Similarly, the Median would be the location of the vector point with the minimum absolute distance from all the other vector points.
Conventional thinking would have me calculate the Mean vector as the vector formed from the arithmetic mean of all the vector elements. However, there is a problem with this method. If we are working with a set of unit vectors the result of this method would not be a unit vector. So conventional thinking would have me normalize the result into a unit vector. But how would that method apply to other, non-unit, vectors? Should we divide by the arithmetic mean of the vector magnitudes? When calculating the Median, should we divide by the median of the vector magnitudes?
Do these methods produce a result that is mathematically correct? If not, what is the correct method?
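For what it's worth, the two notions described above can be computed directly. A hedged sketch: the componentwise mean minimizes the total squared distance, while the point minimizing the total Euclidean (absolute) distance is the geometric median, found here with Weiszfeld's iteration; neither result is automatically a unit vector.

```python
import numpy as np

def mean_point(points):
    # Componentwise mean: minimizes the sum of squared distances.
    return points.mean(axis=0)

def geometric_median(points, iters=200, eps=1e-12):
    # Weiszfeld's iteration: minimizes the sum of Euclidean distances.
    y = points.mean(axis=0)                   # start from the mean
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        d = np.where(d < eps, eps, d)         # guard against division by zero
        w = 1.0 / d
        y_new = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

# Made-up sample: four unit vectors plus one outlier at (4, 0).
pts = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [4.0, 0.0]])
print(mean_point(pts))          # pulled toward the outlier
print(geometric_median(pts))    # more robust to the outlier
```

Projecting either point back onto the unit sphere is then a separate modeling choice (giving a mean direction), not an automatic consequence of these definitions, and the same goes for dividing by a mean or median of the magnitudes.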
I have data in which the relationship between two parameters seems to fit a model that has two oblique asymptotes. Does anyone have any idea about what type of function I should use? Please find attached a screenshot of the data. I appreciate any help.
While working with both software packages, after loading the training and validation data for predicting a single output from several input variables (say 10), the software delivers an explicit mathematical equation for future prediction of the specific parameter, but it skips some of the input variables (say 2 or 3, or maybe more). What criteria do these software packages use behind the scenes to pick the most influential parameters while building a mathematical predictive model?
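I can't speak for the specific packages, but many such tools apply some form of feature selection: score each input's relevance to the output and drop the weakly scoring ones. A deliberately simple sketch of the idea (correlation ranking; the data and the "true" inputs 0, 3 and 7 are made up for the demo, and real tools use more elaborate criteria such as stepwise regression or sparsity penalties):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
# Assumed ground truth for the demo: y depends only on inputs 0, 3 and 7.
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.8 * X[:, 7] + 0.1 * rng.normal(size=n)

# Score each input by absolute Pearson correlation with the output.
scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(p)])
keep = np.argsort(scores)[::-1][:3]        # keep the 3 best-scoring inputs
print(sorted(keep.tolist()))               # likely recovers 0, 3 and 7
```

A tool built this way would report an equation in the kept variables only, which matches the behavior you describe: the dropped inputs simply did not improve its fitting criterion enough to survive.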
Could you recommend papers, books or websites about mathematical foundations of artificial intelligence?
Thank you for your attention and valuable support.
Currently the only proof of Fermat's Last Theorem is very complex and certainly not the proof that Fermat had in mind.
I wonder if it is possible to use a method that drastically simplifies Wiles's proof, a theory that has received many honors from the entire mathematical community.
Can you please explain how to solve the governing equation to obtain the frequencies, so they can be compared with the ANSYS results?
What are the practical applications of the special functions of mathematics in the oil and gas industry and related fields? Thank you.
Knowing the orthometric height, latitude and longitude of one point, and the reduced level, latitude and longitude of a second point, what is the mathematical expression for the orthometric correction to be applied to the reduced level of the second point, to obtain its corresponding orthometric height?
I mean something strictly mathematical and not an algorithmic routine.
The function f(n) produces
I need the function f(n) to remove the zeros and produce:
1, 2, 5, 1, 3, 8, 9, ...
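A closed-form answer would require knowing f itself, but as a hedged algorithmic sketch (the stand-in f below, including where its zeros fall, is my assumption): define a new function g that walks f and returns the k-th nonzero value.

```python
def f(n):
    # Placeholder for the real f (not shown in the question); the zero
    # positions here are made up for the demo.
    data = [1, 0, 2, 0, 5, 0, 1, 3, 0, 8, 9]
    return data[n]

def g(k):
    # Return the k-th nonzero value of f (0-indexed), skipping zeros.
    count = -1
    n = 0
    while True:
        try:
            v = f(n)
        except IndexError:       # ran past the end of this finite demo f
            return None
        if v != 0:
            count += 1
            if count == k:
                return v
        n += 1

print([g(k) for k in range(7)])   # -> [1, 2, 5, 1, 3, 8, 9]
```

A strictly non-algorithmic formula for g would need an explicit expression for how many zeros f produces before its n-th term, which depends on the definition of f.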
How to linearize any of these surface functions (separately) near the origin?
I have attached the statement of the question, both as a screenshot, and as well as a PDF, for your perusal. Thank you.
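Since the attached functions aren't reproduced here, a hedged sketch with a made-up example surface: linearizing near the origin means keeping the first-order Taylor terms f(0,0) + f_x(0,0)·x + f_y(0,0)·y, which a computer algebra system can do directly.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(x) * sp.exp(y) + x * y**2    # placeholder surface (assumption)

# First-order Taylor expansion of f about (0, 0).
linear = (f.subs({x: 0, y: 0})
          + sp.diff(f, x).subs({x: 0, y: 0}) * x
          + sp.diff(f, y).subs({x: 0, y: 0}) * y)
print(sp.simplify(linear))              # -> x for this example
```

The same three-term recipe applies to each of your surfaces separately; only the partial derivatives at the origin change.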
International exchanges are inevitable in order to develop our projects and to ensure a sufficient critical base for the research. This confronts us with the problem of translating ideas, concepts and results that have developed in our local working language. As we know, English nowadays plays the role of the pivotal language in most conferences and publications. My intention is not to argue with this position -- a pivotal language is needed -- but to understand what are the main problems raised by writing and communicating in a language that is not the one in which the work is done.
English speakers themselves must question the meaning of words, sometimes neologisms, used by a non-English speaker. Of course, what is at stake is not the words but the meaning they convey. These issues are being addressed in the study of learning mathematics in a second language, or in the study of the variety and variability of teachers' vocabularies in different languages.
As researchers, the issue is somewhat different. In particular, we must coin words and expressions to name phenomena or concepts in our own working language and then face the challenge of translating them, or we must understand words and expressions specific to the domain that come from another cultural and linguistic environment, sometimes via the pivotal language.
I am preparing a short essay on these issues. I will appreciate your contributions, hence my questions:
Do you have examples to share or any particular experience? What do you think about the reasons for these difficulties and the impact they may have on your own communication?
It is a new type of mathematical explanation about the origin/existence of the universe. It is based on mathematical interactions of dimensional symmetries and fundamental technical aspects of the universe.
Can you accept (agree with) this solution?
(+0-0)^6 = (+1-(-1))^3 x (+0.0-0.0)^3
It is at the foundation of my research. Hint: "(a+b)^2 = a^2 +2ab +b^2"
My research is now public as a preprint. I would like to invite you to review it as well.
Hello. I am struggling with a problem. I can measure two ratios of three independent normal random variables with nonzero means and variances: Z1=V1/V0, Z2=V2/V0, where V0~N(m0,s0), V1~N(m1,s1), V2~N(m2,s2). These are measurements of the speeds of a vehicle. Now I need to estimate the means and the variances of these ratios. We can see that the ratio follows a Cauchy-like distribution, which has no mean or variance, but it has analogs in the form of location and scale. Are there mathematical relations between mean and location, or between variance and scale? Can we approximate a Cauchy distribution by a normal one? I have heard that if we limit the estimated value we can obtain a mean and variance.
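A hedged numerical sketch of the situation (all parameter values made up): simulating the ratio shows why robust summaries, median for location and IQR for scale, behave well even where the sample mean and variance are unstable.

```python
import numpy as np

rng = np.random.default_rng(1)
m0, s0, m1, s1 = 10.0, 1.0, 20.0, 2.0     # assumed example parameters
v0 = rng.normal(m0, s0, size=200_000)
v1 = rng.normal(m1, s1, size=200_000)
z1 = v1 / v0                               # the measured ratio Z1

location = np.median(z1)                   # robust analog of the mean
q75, q25 = np.percentile(z1, [75, 25])
scale = (q75 - q25) / 1.349                # ~ sigma if z1 were normal
print(location, scale)                     # location near m1/m0 = 2.0
```

When m0/s0 is large, so V0 is rarely near zero, the ratio is approximately normal with mean near m1/m0, and the location/scale pair plays the role of mean/standard deviation; this is also the regime in which truncating the estimate recovers usable moments.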
I am searching for automated tools that can calculate the actual number of consumed cycles, or the number of logical and mathematical operations, for a piece of code thousands of lines long.
I would like to know how, when we change the direction of the magnetic field, the refractive indices for left and right circularly polarized light are exchanged. Can someone show how that happens mathematically?
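As one hedged illustration (written for a magnetized cold plasma; other Faraday-active media have an analogous sign structure), the refractive indices of the two circular polarizations depend on the component of B along the propagation direction through the cyclotron frequency, so reversing the field swaps them:

```latex
% Magnetized cold plasma, propagation along B (illustrative case):
n_{\pm}^{2} = 1 - \frac{\omega_{p}^{2}}{\omega\left(\omega \mp \omega_{c}\right)},
\qquad
\omega_{c} = \frac{e B_{\parallel}}{m_{e}} \ (\text{SI units}).
% Reversing the field sends B_{\parallel} \to -B_{\parallel}, i.e.
% \omega_{c} \to -\omega_{c}, so n_{+} and n_{-} exchange roles.
```

The mechanism is that the electrons' circular motion around B breaks the symmetry between the two circular polarizations, and the sign of that circulation is fixed by the field direction.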
The master Paul Erdős said, "Mathematics may not be ready for such problems."
Terence Tao recently proposed a new and advanced approach to this conjecture and concluded: "Almost all orbits of the Collatz map attain almost bounded values".
The Collatz conjecture is infamous and very hard to solve.
Take any positive integer: if it is even, divide it by 2; if it is odd, multiply the number by 3 and add 1. Whatever the answer, repeat the same operations on the result.
Suppose the number is 5; then the sequence is: 5, 16, 8, 4, 2, 1, 4, 2, 1.
Suppose the number is 7; then the sequence is: 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1, 4, 2, 1.
The conjecture has been verified by computer for numbers as big as 2^68, and it holds trivially for all the powers of 2. This is easily checked: 128, 64, 32, 16, 8, 4, 2, 1, 4, 2, 1.
How does any positive integer reach some power of 2, so as to enter the loop 4, 2, 1?
We claim that any odd positive integer N has a special number, a multiple of N, such that when the operation 3n+1 is performed on that multiple, it leads to some power of 2.
N = 1 gives the special multiple 5 = 5 × 1.
N = 3 gives the special multiple 21 = 3 × 7.
N = 5 gives the special multiple 85 = 17 × 5.
The set {1, 5, 21, 85, 341, ...} is called the set of Collatz numbers.
So we can claim that the Collatz conjecture is almost solved.
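A small sketch that checks the concrete claims above: sample trajectories do reach the 4, 2, 1 loop, and a single 3n+1 step sends each "Collatz number" (4^k − 1)/3 straight to a power of 2. (Of course, this verifies examples only; it proves nothing about all integers.)

```python
def collatz_trajectory(n):
    # Iterate the Collatz map until the first time we hit 1.
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(collatz_trajectory(5))    # [5, 16, 8, 4, 2, 1]
print(collatz_trajectory(7))    # reaches 1, passing through 52, 40, ...

# The listed "Collatz numbers" are (4^k - 1)/3: 1, 5, 21, 85, 341, ...
collatz_numbers = [(4**k - 1) // 3 for k in range(1, 6)]
print(collatz_numbers)
# x is a power of 2 iff x & (x - 1) == 0; here x = 3c + 1, x - 1 = 3c.
print(all((3 * c + 1) & (3 * c) == 0 for c in collatz_numbers))  # True
```

The hard part the conjecture leaves open is showing that an arbitrary trajectory actually reaches one of these favorable values.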
Pursuing research in mathematical science through physical problem-solving is an art. More students of science and engineering can be attracted to learn mathematics in the process and apply the mathematical tools in solving real-world problems.
Fermat's last theorem was finally solved by Wiles using mathematical tools that were wholly unavailable to Fermat.
Do you believe
A) That we have actually not solved Fermat's theorem the way it was supposed to be solved, and that we must still look for Fermat's original solution, still undiscovered,
B) That Fermat actually made a mistake, and that his 'wonderful' proof -which he did not have the necessary space to fully set forth - was in fact mistaken or flawed, and that we were obsessed for centuries with his last "theorem" when in fact he himself had not really proved it at all?
Why does the absorbance vary in a linear fashion, whereas the transmittance varies in an exponential fashion?
Can this be explained mathematically/intuitively or are these results based on experimentation?
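A hedged sketch of the standard Beer–Lambert reasoning: each thin layer absorbs a fixed fraction of the light entering it, so transmittance decays exponentially with concentration (or path length), T = 10^(−εlc), while absorbance A = −log10(T) = εlc is linear by construction. Numerically (εl = 1.0 is an arbitrary assumed value):

```python
epsilon_l = 1.0                      # assumed value of epsilon * l
for c in [0.0, 0.5, 1.0, 1.5, 2.0]:
    A = epsilon_l * c                # absorbance: linear in c
    T = 10 ** (-A)                   # transmittance: exponential in c
    print(f"c={c:3.1f}  A={A:4.2f}  T={T:6.4f}")
# Equal steps in c ADD a constant to A but MULTIPLY T by a constant
# factor (here 10**-0.5 per step).
```

So the linear/exponential split is a mathematical consequence of defining absorbance as the negative logarithm of transmittance; the underlying exponential attenuation law itself is grounded in experiment (and in the thin-layer argument above).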
How can I prove mathematically that an optimal feasible solution is found at a boundary point?
I also want to ask why the feasible region must be a convex set. Is there any mathematical proof of this?
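A hedged proof sketch for the linear-programming case, where the feasible region is an intersection of half-spaces Ax ≤ b: convexity is one line, and a linear objective attains its optimum at an extreme point because its value at any feasible point is a weighted average of its values at the vertices.

```latex
% Convexity: if A x_1 \le b and A x_2 \le b, then for \lambda \in [0,1]
A\bigl(\lambda x_1 + (1-\lambda)x_2\bigr)
  = \lambda A x_1 + (1-\lambda) A x_2
  \le \lambda b + (1-\lambda) b = b .
% Vertex optimality (bounded polytope): any feasible x is a convex
% combination of the vertices v_i, i.e. x = \sum_i \mu_i v_i with
% \mu_i \ge 0 and \sum_i \mu_i = 1, so
c^{\top} x = \sum_i \mu_i \, c^{\top} v_i \le \max_i \, c^{\top} v_i ,
% hence some vertex does at least as well as x.
```

For general convex feasible regions the same conclusion follows from the maximum of a linear function over a compact convex set being attained at an extreme point (a consequence of the Krein–Milman theorem).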
I am a homeschooling parent who came to realize that my young children (in grades 1 and 3 this year) are into mathematics for its beauty. They are amazed at the fact that adding two whole numbers in any order yields the same result (commutativity), grouping when adding may be helpful (associativity), that for particular numbers n, x, and y, we have that n = x^2 = y^3: 1 = 1^2 = 1^3; 64 = 8^2 = 4^3, etc. They are mesmerized by the fact that 1 = 1, 3 = 1 + 2, 6 = 1 + 2 + 3, 10 = 1 + 2 + 3 + 4, ...; special sequences of numbers such as (in this case) triangular numbers (which they know as "step shapes"). They don't take for granted that 1n = n for any integer (I don't think they are ready for real numbers in general). They can see that division is akin to factoring: 12/4 = 3 because 3 and 4 are factors of 12 (though they are not too up on the vocabulary yet)... They can see informally that a - b = - (b - a): 8 - 5 = 3 implies that 5 - 8 = -3.
I hope I am fortunate enough to see them through their entire grade school through high school learning.
My concern is: why do educators/teachers and parents debate so much about basic skills versus critical reasoning, as if these things, removed from the context of the ACTUAL beautiful ideas in mathematics, will pull school children into understanding mathematics any more than knowing words alone is enough to make a great writer? Why do we think either (a) that it's all basic skills, or (b) that it's word problems about so-called everyday stuff that are more social studies than mathematics? How about focusing on mathematics as a subject the way we do Language Arts? We don't reduce Language Arts to writing letters and business contracts in which grammar and syntax are important. We want children to enjoy the beauty of language; shouldn't we do the same with mathematics?
At this moment, I have a nonlinear dataset of X and Y values. I want to come up with a nonlinear equation that best represents these values, without using a moving-average filter.
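Without seeing the data, one generic hedged option is a least-squares fit of an explicit model, here a cubic polynomial on synthetic, made-up data, which yields an equation rather than a smoothed series:

```python
import numpy as np

# Synthetic stand-in data (assumption for the demo): a noisy cubic.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 4.0, 60)
y = 0.5 * x**3 - 2.0 * x**2 + x + 1.0 + rng.normal(0.0, 0.05, x.size)

# Least-squares fit of y ~ c3*x^3 + c2*x^2 + c1*x + c0.
coeffs = np.polyfit(x, y, deg=3)
print(np.round(coeffs, 2))       # close to [0.5, -2.0, 1.0, 1.0]
```

If your data show asymptotes, saturation or other non-polynomial behavior, the same least-squares idea applies with a tailored model (e.g. a rational function) via `scipy.optimize.curve_fit`.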
I have a problem. Measurements show the opposite of what convention assumes. It occurred in geotechnics, but could affect all material modelling branches.
I tested soil specimens. Convention interprets materials as things where deformation is created (output) and force is applied (input). So, our task is: decode how much deformation a certain loading (force) will generate.
After 6 years of testing, I noticed the convention is misleading. Reaction force behaves as a function of deformation, not the other way round. The stiffness hysteresis loop's shape, size and position stabilize within deformation amplitudes. You can control the shape, size and position of the stiffness loops using the deformation amplitude. All applied deformation values have finite answers, unlike the "infinite displacement" paradox...
It's a big problem. All software is designed to model deformation as a function of force applied. But in reality, force (reaction) behaves as a function of deformation. It could be we are stuck in a paradigm, where deformation is modelled as a function of force. But in reality, the reaction force is a function of deformation. F=f(U) not U=f(F).
The observations (empirical evidence) pointed me to an abandoned theory from 40 years ago (strain-space plasticity, by P. J. Yoder, 1980). His theory seems to be not only compatible with the observed physical properties, but also compatible with GPU-parallel computation (there were no GPUs in the '80s, so "parallel spring systems" in FEM caught no one's attention).
So, we have something that is both:
1. Potentially more physically correct
2. For the first time, it makes elasto-plastic FEM supercomputer-compatible.
I am stuck building robots for industrial projects at the faculty, for tests meant to provide "quick profit" to the faculty. "Fundamental" research is not funded. I tried applying for a radical-research EU grant... the topic is way too radical for them.
All observations were made in my spare time: evenings, weekends, at times using life savings... I tried showing the test results to renowned experts. They become red in the face, angry, and say "I have not seen anything like it". After an hour of questions, they find no flaws in the testing machines. And... leave, never to be heard from again.
The theory of P. J. Yoder was defended in public defenses multiple times in the '80s. It seems "mathematically equivalent", as in: proven able to do "the same" as the convention does, without anyone ever testing what such a "reversal of coordinate space" (strain instead of stress envelopes) would imply for the interpretation of material properties. No one found flaws in it mathematically; no one ever proved it wrong. But... it was forgotten, ignored and abandoned.
I tried asking industries for opinion too. Industry asks for code compatible with existing software (return of investment). And I alone can not code a full software package. Frankly, I would rather keep testing, try to prove my assumptions wrong. But the more I test, the more anomalies and paradoxes are observed, exposed and resolved on the topic..
What is the "antidote" in such a situation? Tests show the convention wrong; nobody finds any mistakes; and that leads to silence and being ignored.
I have encountered a problem with the Zernike index while studying Zernike polynomials. The problem is mathematically equivalent to the following:
For each given n, let m = -n, -n+2, -n+4, ..., n.
Given j (for example, j = 16), how do I calculate n and m?
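Assuming the OSA/ANSI single-index convention j = (n(n+2) + m)/2, which matches the m-range you describe (Noll's indexing uses different bookkeeping), n and m can be recovered in closed form; a sketch:

```python
import math

def osa_index_to_nm(j):
    # Row n holds the indices j = n(n+1)/2, ..., n(n+3)/2, so n is the
    # smallest integer with n(n+3)/2 >= j, given by the ceiling below.
    n = math.ceil((-3 + math.sqrt(9 + 8 * j)) / 2)
    m = 2 * j - n * (n + 2)      # invert j = (n(n+2) + m) / 2
    return n, m

print(osa_index_to_nm(16))       # -> (5, -3)
n, m = osa_index_to_nm(16)
print((n * (n + 2) + m) // 2)    # round trip -> 16
```

If your source actually uses Noll or fringe indexing, the same solve-for-the-row idea applies, but the formulas differ.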
I was working to check the validity of a mathematical equation. In doing so I obtained a large experimental data set (>50,000 values). As the mathematical equation gives only a single value, I was wondering whether there is any way to compare that data set to the equation. Based on that comparison, I want to assign a constant that, when applied (+, -, ×, /) to the equation, yields values that match the experimental data set. There could be more than one constant, as the range of the data set is quite large compared to the values obtained empirically through the equation.
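One hedged way to frame the comparison: treat the constant as a parameter and choose the value minimizing the squared error between the adjusted equation value and the data. For a single model value v, the least-squares additive constant is mean(data) − v and the multiplicative one is mean(data)/v. A tiny sketch with made-up numbers:

```python
import statistics

model_value = 4.0                        # single value from the equation
data = [7.1, 8.3, 7.8, 8.9, 7.9]         # stand-in experimental values

c_add = statistics.mean(data) - model_value   # best c for model + c
c_mul = statistics.mean(data) / model_value   # best c for model * c
print(c_add, c_mul)
```

If the data's spread is large relative to the model value, report the residual spread (e.g. the standard deviation of data − c·model) alongside the constant, since a single constant may then be a poor summary and piecewise constants over sub-ranges may be warranted.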
I have a basic mathematical question.
I designed a controller that can guarantee that a variable z remains zero, or bounded, for all time, where z = s - r. From z being zero or bounded, can I conclude that s and r are also bounded?
Any help will be appreciated.
I want to solve a partial differential equation in x and t (space and time). As far as I know, one of the most useful methods for solving PDEs is separation of variables; well-explained examples of this method are the wave equation, the heat equation, the diffusion equation, and so on.
The wave equation is U_tt = C^2 U_xx;
in other words, the second derivative of displacement with respect to time equals the second derivative with respect to space, multiplied by a constant (or vice versa).
However, my equation is not like that: the derivatives are multiplied by each other. For example: U_xx = (1 + U_x) * U_tt.
I am wondering how to solve this equation.
I would be thankful to hear any ideas.
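Since the nonlinearity Uxx = (1 + Ux) Utt rules out simple separation of variables, one hedged fallback is a direct explicit finite-difference march in time, rearranging the equation as Utt = Uxx / (1 + Ux). A minimal sketch with made-up initial and boundary data and no formal stability analysis:

```python
import numpy as np

nx, nt = 101, 400
dx, dt = 1.0 / (nx - 1), 0.002           # dt << dx keeps the march stable here
x = np.linspace(0.0, 1.0, nx)

u_prev = 0.05 * np.sin(np.pi * x)        # assumed U(x, 0): small initial bump
u_curr = u_prev.copy()                   # assumed U_t(x, 0) = 0

for _ in range(nt):
    # Central differences for U_xx and U_x at interior points.
    uxx = (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]) / dx**2
    ux = (u_curr[2:] - u_curr[:-2]) / (2 * dx)
    utt = uxx / (1.0 + ux)               # the PDE, solved for U_tt
    u_next = u_curr.copy()
    u_next[1:-1] = 2 * u_curr[1:-1] - u_prev[1:-1] + dt**2 * utt
    u_next[0] = u_next[-1] = 0.0         # assumed fixed ends
    u_prev, u_curr = u_curr, u_next

print(float(np.max(np.abs(u_curr))))     # stays finite for this mild setup
```

For analysis rather than numerics, this equation is quasilinear and amenable to the method of characteristics or a hodograph-type transformation; the scheme above is only a quick way to see solution behavior, and 1 + Ux must stay positive for it to make sense.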
How can we determine the mathematical properties of a high-dimensional benchmark problem?
Take, for example, the n-dimensional Rosenbrock function. It is a unimodal function with localized behaviour in 2 dimensions (we can tell this from the 2D plot). However, if the dimension is greater than 3, the function shows multimodal behaviour. In short, the mathematical characteristics may change as the dimension changes. How can we determine the mathematical characteristics of a function in different dimensions using a small number of analysis results obtained in the design space?
The n-dimensional Rosenbrock function is shared in the link below.
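For reference, here is the n-dimensional Rosenbrock function with a quick evaluation at two standard points; one cheap, hedged way to probe dimension-dependent structure is to run local searches from many random starts in each dimension and count the distinct optima found.

```python
import numpy as np

def rosenbrock(x):
    # f(x) = sum_{i=1}^{n-1} [ 100 (x_{i+1} - x_i^2)^2 + (1 - x_i)^2 ]
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

print(rosenbrock(np.ones(10)))    # 0.0: global minimum at (1, ..., 1), any n
print(rosenbrock(np.zeros(10)))   # 9.0: each of the 9 terms contributes 1
```

For 4 ≤ n ≤ 7 the function is reported to have a second local minimum near (−1, 1, ..., 1), which is one concrete way its character shifts with dimension; counting how many random-start local searches end near each candidate point is a small-budget empirical test of exactly that.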
The French mathematician Pierre de Fermat (1601-1665), conjectured that the equation
x^n +y^n =z^n has no solution in positive integers x, y and z if n is a positive integer >= 3.
He wrote in the margin of his personal copy of Bachet's translation of Diophantus' Arithmetica: "I have discovered a truly marvellous demonstration of this proposition which this margin is too narrow to contain."
Many researchers believe that Fermat did not find a demonstration of his proposition, but some others think there was a proof and that Fermat's claim was right.
The search for solutions of the equation x^n + y^n = z^n split into two directions.
The first was oriented toward finding a solution for a specific value of the exponent n; the second, more general, toward a solution for any value of the exponent n.
- The Babylonians studied the equation x^2 + y^2 = z^2 and found the solution (3, 4, 5).
- The Arabic mathematician al-Khazin studied the equation x^3 + y^3 = z^3 in the 10th century, and his work is mentioned in a philosophical book by Avicenna in the 11th century.
- A defective proof of FLT was given before 972 by the Arab mathematician al-Khujandi.
- The Arab mathematician Mohamed Beha Eddin ben Alhossain (1547-1622) listed, among the problems remaining unsolved from former times, that of dividing a cube into two cubes (see the image of the Arabic manuscript from the British Museum; problem No. 4, in red, at line 8 from the top).
- Fermat (1601-1665), Euler (1707-1783) and Dirichlet (around 1825) solved the equation for n = 4, 3 and 5 respectively.
- In 1753, Leonhard Euler presented a proof that x^3 + y^3 = z^3 has no solution in positive integers.
- Fermat found a proof for x^4 + y^4 = z^4 using his famous "infinite descent". This method combines proof by contradiction with proof by backward induction.
- Dirichlet (in 1825) solved the equation x^5+y^5=z^5 .
- Sophie Germain (in 1823) obtained a general result for primes p such that 2p + 1 is also prime:
let p be such a prime; then x^p + y^p = z^p has no solution in positive integers x, y, z with p not dividing xyz.
- In the 19th century, E. Kummer continued the work of Gauss and innovated by using the numbers of cyclotomic fields, introducing the concept of "ideal prime factors".
-Andrew Wiles, a professor at Princeton University, provided an indirect proof of Fermat’s Conjecture in two articles published in the May 1995 issue of Annals of Mathematics.
Andrew Wiles solved a high-level problem about modular forms and elliptic curves, and a consequence is a proof of FLT. Thanks to the results of Andrew Wiles, we know that Fermat's Last Theorem is true.
I think this opens a space for mathematicians to search for proofs of FLT comprehensible to a normal student of mathematics, and perhaps to find new concepts or ideas. This result should imply a direct proof of FLT.
In this paper, I would like to suggest a direct proof of FLT using elementary mathematical concepts (parity of numbers, forward and backward induction, ...) and tools of Fermat's era.
This direct proof is comprehensible to a student of mathematics and to lovers of mathematics.
Ablation studies are quite common in the machine learning literature, especially in work on neural networks and deep learning. When a new architecture or method is proposed, the authors often perform an ablation study in which they remove the sub-components of their method that they feel are important, one at a time, and study how the performance of the method changes, to learn the actual importance of each sub-component.
I find this approach quite useful for research on optimization algorithms, specifically heuristic algorithms, whose properties may not be easy, or even possible, to prove mathematically.
However, I have only seen ablation studies, or the term "ablation study" (it might be called something else in other fields), in machine learning research. Would it be appropriate to include them in papers on mathematical optimization algorithms, or in other areas of research in general?
Physical insight can suggest a principle which in turn leads to mathematical modeling. Examples might include Newton extending the falling of an apple towards the Earth's center to the Moon falling around the Earth, and Einstein's free-falling elevator compared to the effect of gravity.
But mathematics can also give a powerful clue to a physical principle. Well known examples include Planck's 1900 or so work on energy packets implying discontinuous amounts of energy, and Dirac’s prediction of the positron.
From 2005 to 2008 studying networks, I found that the mean path length successfully scaled a lexical network, and the mathematical result implied some physical principles about the distribution of energy in a network, which by a circuitous route led to the idea of the principle of dimensional capacity around 2019.
History, consistent with my own experience, suggests physical principles can imply the mathematics and vice versa.
Your views? Do you have examples of each?
Can anyone recommend a journal, published in the German language, in the fields of mathematics and programming, that is indexed in Scopus Q1?
I am interested in test anxiety and would like to dig into the seminal meta-analysis by Ray Hembree published in 1988. Does anyone have access to the bibliography file containing the list of studies included in the meta-analysis?
Any help would be greatly appreciated!
Meta-analysis (I have access)
Hembree, R. (1988). Correlates, causes, effects, and treatment of test anxiety. Review of Educational Research, 58(1). https://doi.org/10.3102/00346543058001047
Bibliography (I do not have access)
Hembree, R. (1987). A bibliography of research on academic test anxiety. Adrian, MI: Adrian College, Department of Mathematics. (ERIC Document Reproduction Service No. ED 281 779)
We substitute y = e^(rx) when solving constant-coefficient linear differential equations. What is the reason behind this? Is there any proof?
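A hedged sketch of the usual justification: e^{rx} is an eigenfunction of d/dx, so any constant-coefficient operator maps it to a scalar multiple of itself, and the substitution converts the differential equation into a polynomial (characteristic) equation:

```latex
% Try y = e^{rx} in a y'' + b y' + c y = 0 (a, b, c constant):
\begin{aligned}
y' &= r e^{rx}, \qquad y'' = r^{2} e^{rx}, \\
a y'' + b y' + c y &= \bigl(a r^{2} + b r + c\bigr) e^{rx} = 0 .
\end{aligned}
% Since e^{rx} \neq 0, r must solve the characteristic equation
% a r^{2} + b r + c = 0, and linearity lets us superpose the
% resulting solutions (with x e^{rx} handling repeated roots).
```

Completeness, i.e. that these exponentials span all solutions, follows from the existence-uniqueness theorem: an n-th order linear ODE has an n-dimensional solution space, and the n independent exponential (or polynomial-times-exponential) solutions fill it.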
I have dug into a theory established by D. Loeb of the University of Bordeaux, and found an interesting way to represent the things you have (a classical set) and the things you do not have and might need, want, or be ready to acquire (a new form of set with a negative number of elements).
This fits very well into the extension of matrices of sets which I needed to develop; see why and how in the references below.
I would be thrilled to know if you have use cases where this model, of classical sets of what you have and new negative sets of what you do not have, may help.
Please share your input!
Presentation Matrices of Sets, complete tutorial with use cases
D. Loeb, Sets with a negative number of elements, Advances in Mathematics, 91 (1992), 64-74. https://core.ac.uk/download/pdf/82474206.pdf
Was Heisenberg a Third-Rate Natural Philosopher because he denied the reality of micro-objects that cannot be tracked by humans? Has this misled physics for 100+ years?
Surely, because we have created a whole civilization from the manipulation of electrons, especially digital electronics, "LOOKING" at an object is NOT a requirement for its existence? Electrons interact with everything, so surely that is quite sufficient, eh?
Heisenberg was educated in the classical philosophy of Aristotelian classicism in the archaic German education system. He failed to think for himself, substituting a Platonic idealist view of mathematics as superior to our imaginative/operational view of reality.
Some, including Morris Kline in a book dealing with uncertainty in the foundations of mathematics, indicate that each definition in mathematics references one or more other definitions, but that this leads backwards to undefined terms, which must exist. Perhaps there are other problems: whatever you are defining may not even exist.
Given the historical roots of mathematics, it is not so clear. Mathematics ought to be based also on other disciplines: counting, land measurement and so on.
Hence definitions may better hinge on a descriptive, intuitive account that spares us the work of looking up every other definition made in mathematics. This is a great barrier to understanding a lot of work these days: the huge number of definitions one must absorb.
Such an account would best guide our intuition in seeing whether the results are logical. But common practice these days is far from that: straight into having to prove everything, and having to know everything about very narrow specialties, meaningless to most outsiders.
What is your opinion? Should we reform this?
The temptation, in all the remodeling of teacher-training study plans that I have experienced, when faced with a strong new point of interest, has been to introduce a subject that caters for it. Thus environmental education, for example, has been in the curriculum in Spain for more than 25 years.
Now the SDGs are the focus of attention, and the university is committed to developing them. The challenge is to do it from within the established curricular subjects.
Educational mathematics is developed in three or four subjects of each training curriculum, and it has to embrace both developing the teacher's competence to teach school mathematics and the purpose of working on the SDGs. How can we do it?
I post this publication
The 3x + 1 Problem and its Generalizations
by Jeffrey C. Lagarias
The American Mathematical Monthly, Volume 92, 1985 - Issue 1, pp. 3 - 23
and the following introductory video by Veritasium (which in my view is extremely well produced; in particular, the adopted graphics are superlative)
as I consider the links between this mathematical problem (the Collatz Conjecture) to physics to be noteworthy.
Here we discuss one of the famous unsolved problems in mathematics, the Riemann hypothesis. We construct a student's vision of this hypothesis, and we ask questions that may be of help to researchers and scientists.
I am getting immersed in the Kervaire-Milnor formula, each term of which connects with a different field of mathematics: "The number of differentiable structures on the (4k − 1)-dimensional sphere is given by a quantity that is the product of three quantities: elementary factor × "non-J classes" × numerator of B_{2k}/2k." (Information read in https://people.math.harvard.edu/~mazur/papers/slides.Bartlett.pdf)
My research is focused on how to connect the Turán moments and the coefficients of the Jensen polynomials for the entire function Xi, as I have noticed some valuable results. I would like to find more articles or information about the Bernoulli numbers as they appear in topics of topology, which could involve the Turán moments and Jensen coefficients as well.
Thanks in advance!
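For experimenting with the Bernoulli factor in the Kervaire-Milnor product, a small sketch computing the numerators of B_{2k}/2k for the first few k (the famous 691 enters at k = 6):

```python
from sympy import bernoulli, fraction

# Numerators of B_{2k}/(2k), the Bernoulli factor in the
# Kervaire-Milnor product, for k = 1..7.
nums = []
for k in range(1, 8):
    num, den = fraction(bernoulli(2 * k) / (2 * k))
    nums.append(int(abs(num)))
print(nums)   # [1, 1, 1, 1, 1, 691, 1]
```

The same routine gives quick numerical data for matching against the Turán moments or Jensen coefficients you are tabulating, though any deeper connection is of course something to establish, not assume.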
Is there a free academic online resource that can be used to look up and reference basic concepts and theorems in mathematics? I am essentially looking for a mathematical analogue of the Stanford Encyclopedia of Philosophy or the APA Dictionary of Psychology that can be cited in thesis work and the like, and that is suitable for researchers from other fields. (So basic information, not cutting-edge detailed research or whole discussions of current questions.)
That is, it should be trustworthy and considered "respectable" enough to be cited in academic work. This would rule out Wikipedia - even though its articles on the subject are usually trustworthy - or things like github. But it should also be a freely accessible source. (Alternatively, if publishing sites like Springer host something similar which is accessible through university subscriptions, that might work for my purpose, too.)
I know you can often find the correct answers to basic questions by googling, but the issue is it's a bit difficult to gauge sources and content as a non-mathematician. Thanks in advance!
I want to compute Ornstein-Uhlenbeck models using the OUwie() function in R. For this I'm using a set of 100 phylogenetic trees, so that each tree goes into one OUwie model, resulting in 100 models.
The output of the OUwie gives estimates for a trait optimum and according SE.
Now I want to describe this output over all 100 trees. For the estimates it's easy to simply give the average value, but as the standard error depends on the sample, I'm not sure if I can give the mean, or if I should give a range or if there is another (mathematically correct) way to communicate the information I get out of these models.
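A hedged option, borrowed from multiple-imputation practice (Rubin's rules), treats the 100 trees as repeated analyses and combines within-model and between-model uncertainty, rather than averaging the SEs directly; whether your tree sample satisfies the assumptions behind this pooling is something to check for your data:

```latex
% K = 100 models, each giving an estimate \hat\theta_k with standard
% error SE_k:
\bar\theta = \frac{1}{K}\sum_{k=1}^{K}\hat\theta_k, \qquad
W = \frac{1}{K}\sum_{k=1}^{K} SE_k^{2}, \qquad
B = \frac{1}{K-1}\sum_{k=1}^{K}\bigl(\hat\theta_k - \bar\theta\bigr)^{2},
\qquad
SE_{\text{total}} = \sqrt{\,W + \Bigl(1 + \tfrac{1}{K}\Bigr) B\,}.
```

Reporting W and B separately is also informative here, since B quantifies exactly the phylogenetic-uncertainty component that averaging the SEs would hide.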
We use ALEKS-PPL for placement and co-requisite support for our math courses. Our institution is interested in using something similar for English composition: placement + adaptive learning co-requisite support. What is out there for us to review?
The purpose is to find out which pedagogical skills mathematics teachers use to identify and address students' mathematics anxiety.
I recently saw a question regarding the relationship between language and mathematical learning, but am interested in learning more about the opposite.
Can anyone recommend relevant readings that explore the relationship between mathematical ability/maths learning and language acquisition? I am primarily interested in second/foreign language acquisition, but I am also interested in first language acquisition and its relation to mathematics.
Good research is based on good relationship between the mentor or supervisor and the scholar. What are the qualities a supervisor or mentor must have to have a healthy and friendly environment in the laboratory?
In mathematics, some proofs are not convincing, since an assumption fails within the proof.
If equality is changed to "approximately equal to", then the proof becomes more nearly perfect, but uniqueness cannot be guaranteed.
On page 291 of Introduction to Real Analysis, Fourth Edition, by Bartle and Sherbert, the proof of the uniqueness theorem is explained.
That proof is not perfect.
Reason: initially, epsilon is assumed to be positive, and so not equal to zero. Before the conclusion of the proof, epsilon is treated as zero and the two limits are written as equal.
The equality cannot hold, since epsilon is not zero; one can only write that the two limits are approximately equal.
Epsilon being arbitrary never implies that epsilon is zero.
I hope the authors and other mathematicians will notice this error and change it in new editions.
What research is there (if any), about gifted students frequently getting easy questions on academic tests/exams INCORRECT and harder more challenging ones CORRECT – particularly in mathematics? Does anyone have any empirical evidence as to why this might occur? Thank you.
Every nonnegative integer can be expressed as a sum of three triangular numbers.
Given two integers N = a + b + c and M = x + y + z, how can we express the product N*M as a sum of three triangular numbers? Example:
11 = 0 + 1 + 10 and 13 = 0 + 3 + 10; the product 11*13 = 143 = 10 + 28 + 105.
How can we write the product N*M = F(a,b,c,x,y,z) + G(a,b,c,x,y,z) + H(a,b,c,x,y,z) as a sum of three triangular numbers?
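I don't know a closed-form F, G, H, but a brute-force search confirms that representations exist for any given product and can generate data for hunting such a formula; a sketch:

```python
def three_triangulars(n):
    # Find one way to write n as a sum of three triangular numbers
    # T_i = i(i+1)/2 (guaranteed to exist by Gauss's Eureka theorem).
    tris = []
    i = 0
    while i * (i + 1) // 2 <= n:
        tris.append(i * (i + 1) // 2)
        i += 1
    tri_set = set(tris)
    for a in tris:
        for b in tris:
            if a + b > n:
                break
            if n - a - b in tri_set:
                return (a, b, n - a - b)
    return None

print(three_triangulars(143))   # some triple summing to 143;
                                # (10, 28, 105) from the example is another
```

Note that representations are generally not unique, so a formula F, G, H would have to select one particular representation of N*M; tabulating all representations for small N, M is one way to look for a pattern tied to (a, b, c, x, y, z).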