Questions related to Mathematics
Publish your scientific paper for free
Dear Researchers and postgraduate students
The Iraqi Journal for Computer Science and Mathematics (IJCSM), issued by the College of Education, Al-Iraqia University, Iraq, invites you to publish your scientific papers — original research articles, short papers, long papers, and review papers — in the next issue, free of charge. The journal does not require any publication fee or article processing charge, and all accepted papers in Volume 3, Issue 1 are published for free.
1 -Publication fee: free
2- Frequency: 2 issues per year
3- Subject: computer science, mathematics, information technology, computer engineering and any related fields
4- ISSN: 2788-7421
5- Published by: Department of Computer Science, College of Education, Al-Iraqia University, Iraq
6- Contact: email 1: firstname.lastname@example.org, email 2: email@example.com
Managing Editor: Mohammad Aljanabi
Derivative operators with fractal dimension and fractional order have been developed only relatively recently, and they appear to be powerful mathematical tools for modelling complex real-world phenomena. From a mathematical point of view, what are some advantages of the fractal-fractional derivative in real-life applications?
After studying Estonian together with my wife, I recently came to wonder whether language-learning skill and intelligence are closely linked. My wife learns really fast and sees patterns where I do not. My question is: in your opinion, what is the link between language-learning skill and intelligence? Am I the least intelligent person in our family, or is there hope for me as well? Or perhaps it is my fate to stand by, watch my wife succeed, and bury myself under a thick blanket of jealousy :-)
The French mathematician Pierre de Fermat (1601-1665) conjectured that the equation
x^n + y^n = z^n has no solution in positive integers x, y and z if n is a positive integer >= 3.
He wrote in the margin of his personal copy of Bachet's translation of Diophantus' Arithmetica: "I have discovered a truly marvellous demonstration of this proposition that this margin is too narrow to contain."
Many researchers believe that Fermat did not actually find a proof of his proposition, but others think a proof existed and that Fermat's claim was right.
The search for a solution of the equation x^n + y^n = z^n split into two directions.
The first sought a solution for a specific value of the exponent n; the second, more general, sought a solution for any value of the exponent n.
- The Babylonians (c. 570-495 BC) studied the equation x^2 + y^2 = z^2 and found the solution (3, 4, 5).
- The Arabic mathematician Al-Khazin studied the equation x^3 + y^3 = z^3 in the 10th century, and his work is mentioned in a philosophical book by Avicenna in the 11th century.
- A defective proof of FLT was given before 972 by the Arab mathematician Alkhodjandi.
- The Arab mathematician Mohamed Beha Eddin ben Alhossain (1547-1622) listed, among the problems remaining unsolved from former times, that of dividing a cube into two cubes (see the image of the Arabic manuscript from the British Museum, Problem No. 4, in red, at line 8 from the top).
- Fermat (1601-1665), Euler (1707-1783) and Dirichlet (around 1825) settled the cases n = 4, n = 3 and n = 5 respectively.
- In 1753, Leonhard Euler presented a proof that x^3 + y^3 = z^3 has no solution in positive integers.
- Fermat proved the case x^4 + y^4 = z^4 using his famous "infinite descent". This method combines proof by contradiction with backward induction.
- Dirichlet (in 1825) solved the equation x^5 + y^5 = z^5.
- Sophie Germain (in 1823) proved a general result for primes p such that 2p + 1 is also prime:
For such primes p, the equation x^p + y^p = z^p has no solution in positive integers x, y, z with p not dividing xyz (the "first case" of FLT).
- In the 19th century, E. Kummer continued the work of Gauss, working with the numbers of cyclotomic fields and introducing the concept of "ideal prime factors".
- Andrew Wiles, a professor at Princeton University, provided an indirect proof of Fermat's conjecture in two articles published in the May 1995 issue of the Annals of Mathematics.
Wiles solved a deep problem about modular forms and elliptic curves, and FLT follows as a consequence. Thanks to his results, we know that Fermat's Last Theorem is true.
I think this opens a space for mathematicians to search for proofs of FLT comprehensible to an ordinary mathematics student, and perhaps to find new concepts or ideas along the way. Such work could lead to a direct proof of FLT.
In this paper, I would like to suggest a direct proof of FLT using mathematical concepts (parity of numbers, forward and backward induction, ...) and tools of Fermat's era.
This direct proof is comprehensible to an ordinary student and to mathematics lovers.
Ablation studies are quite common in the machine learning literature, especially in work on neural networks and deep learning. When a new architecture or method is proposed, the authors often perform an ablation study: they remove the sub-components of their method that they believe are important, one at a time, and study how the performance of the method changes, in order to learn the actual contribution of each sub-component.
I find this approach quite useful for research on optimization algorithms, specifically heuristic algorithms, whose properties may not be easy or even possible to prove mathematically.
However, I have only seen ablation studies, or the term "ablation study" (it might be called something else in other fields), in machine learning research. Would it be appropriate to include one in papers on mathematical optimization algorithms, or in other areas of research in general?
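The procedure itself is generic and transfers directly to heuristic optimizers. A minimal sketch of the loop described above (the component names and the `run_solver` callback here are hypothetical placeholders, not a standard API):

```python
def ablation_study(run_solver, components, trials=10):
    """Disable each optional component in turn and compare the mean
    objective value against the full method (lower = better)."""
    def mean_score(disabled):
        return sum(run_solver(disabled=disabled) for _ in range(trials)) / trials

    baseline = mean_score(frozenset())
    deltas = {}
    for comp in components:
        # positive delta: removing the component hurt, so it was contributing
        deltas[comp] = mean_score(frozenset({comp})) - baseline
    return baseline, deltas
```

Averaging over several trials matters for stochastic heuristics, since a single run can misattribute noise to a component.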
Physical insight can suggest a principle which in turn leads to mathematical modeling. Examples might include Newton extending the falling of an apple toward the Earth's center to the Moon falling around the Earth, and Einstein comparing a freely falling elevator with the effect of gravity.
But mathematics can also give a powerful clue to a physical principle. Well-known examples include Planck's work around 1900 on energy packets, implying discontinuous amounts of energy, and Dirac's prediction of the positron.
From 2005 to 2008, while studying networks, I found that the mean path length successfully scaled a lexical network; the mathematical result implied some physical principles about the distribution of energy in a network, which by a circuitous route led, around 2019, to the idea of the principle of dimensional capacity.
History, consistent with my own experience, suggests physical principles can imply the mathematics and vice versa.
Your views? Do you have examples of each?
Can anyone suggest a journal, published in German, in the fields of mathematics and programming, that is indexed in Scopus Q1?
I am interested in test anxiety and would like to dig into the seminal meta-analysis by Ray Hembree published in 1988. Does anyone have access to the bibliography file containing the list of studies included in the meta-analysis?
Any help would be greatly appreciated!
Meta-analysis (I have access)
Hembree, R. (1988). Correlates, Causes, Effects, and Treatment of Test Anxiety. In Review of Educational Research (Vol. 58, Issue 1). https://doi.org/10.3102/00346543058001047
Bibliography (I do not have access)
Hembree, R. (1987). A bibliography of research on academic test anxiety. Adrian, MI: Adrian College, Department of Mathematics. (ERIC Document Reproduction Service No. ED 281 779)
Was Heisenberg a third-rate natural philosopher because he denied the reality of micro-objects that cannot be tracked by humans? Has this misled physics for 100+ years?
Surely, because we have built an entire civilization on the manipulation of electrons, especially digital electronics, "LOOKING" at an object is NOT a requirement for its existence? Electrons interact with everything, so surely that is quite sufficient, eh?
Heisenberg was educated in the classical philosophy of Aristotelian classicism in the archaic German education system. He failed to think for himself, substituting a Platonic idealist view of mathematics as superior to our imaginative/operational view of reality.
Some, including Morris Kline in a book dealing with uncertainty in the foundations of mathematics,
have pointed out that each definition in mathematics references one or more other definitions,
but that this chain leads backwards to undefined terms, which must exist. Perhaps there are other problems too: whatever you are defining may not even exist.
Given the historical roots of mathematics, it is not so clear that it should rest on definitions alone; it ought also to be grounded in other practices such as counting, land measurement and so on.
Hence definitions might better hinge on a descriptive, intuitive account that spares us the work of looking up every other definition in mathematics. The huge number of definitions one must absorb is a great barrier to understanding much current work.
Such an account would better guide our intuition in judging whether results are logical. But common practice these days is far from that: straight into having to prove everything, and having to know everything about very narrow specialties that are meaningless to most outsiders.
What is your opinion? Should we reform this?
In every remodeling of teacher-training study plans that I have experienced, the temptation, when faced with a strong new point of interest, has been to introduce a subject that caters for it. Thus environmental education, for example, has been in the Spanish curriculum for more than 25 years.
Now the SDGs are the focus of attention, and the university is committed to developing them; the challenge is to do so from within the established curricular subjects.
Mathematics education is developed across three or four subjects of each training curriculum, and it has to cover developing the teacher's competence to teach school mathematics. If we add the purpose of working on the SDGs, how can we do it?
I am posting this publication:
The 3x + 1 Problem and its Generalizations
by Jeffrey C. Lagarias
The American Mathematical Monthly, Volume 92, Issue 1 (1985), pp. 3-23,
and the following introductory video by Veritasium (which in my view is extremely well produced; in particular, the graphics are superlative),
as I consider the links between this mathematical problem (the Collatz conjecture) and physics to be noteworthy.
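For readers new to the problem: the conjecture states that iterating the map below from any positive integer eventually reaches 1. A minimal sketch:

```python
def collatz_trajectory(n):
    """Iterate the 3x+1 map (halve if even, 3n+1 if odd) until reaching 1."""
    trajectory = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        trajectory.append(n)
    return trajectory
```

Starting from 27, for example, the trajectory climbs as high as 9,232 before collapsing to 1 — the erratic, turbulence-like behaviour that invites the physical analogies discussed in the video.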
Here we discuss one of the most famous unsolved problems in mathematics, the Riemann hypothesis. We present a student's perspective on this hypothesis and raise some questions that may be of help to researchers and scientists.
I am getting immersed in the Kervaire-Milnor formula, each term of which connects with a different field of mathematics: "The number of differential structures on the (4k − 1)-dimensional sphere is given by a quantity that is the product of three quantities: elementary factor × 'non-J classes' × numerator of B_{2k}/2k." (Information read in https://people.math.harvard.edu/~mazur/papers/slides.Bartlett.pdf)
My research focuses on connecting the Turán moments and the coefficients of the Jensen polynomials for the entire function Xi, where I have noticed some valuable results. I would appreciate more articles or information about the Bernoulli numbers as they appear in topology, which could involve the Turán moments and Jensen coefficients as well.
Thanks in advance!
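As a side note, the Bernoulli numbers entering that formula are easy to generate exactly from the standard recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0 with B_0 = 1; a small sketch using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """First Bernoulli numbers B_0..B_n (convention B_1 = -1/2),
    via the recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1)
    return B
```

For instance, B_12 = −691/2730, whose numerator 691 is the prime that famously surfaces in this corner of topology and in the theory of modular forms.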
Is there a free academic online resource that can be used to look up and reference basic concepts and theorems in mathematics? I am essentially looking for a mathematical analogue of the Stanford Encyclopedia of Philosophy or the APA Dictionary of Psychology: something that can be cited in thesis work and the like, and that is suitable for researchers from other fields. (So basic information, not cutting-edge detailed research or whole discussions of current questions.)
That is, it should be trustworthy and considered "respectable" enough to be cited in academic work. This would rule out Wikipedia - even though its articles on the subject are usually trustworthy - or things like github. But it should also be a freely accessible source. (Alternatively, if publishing sites like Springer host something similar which is accessible through university subscriptions, that might work for my purpose, too.)
I know you can often find correct answers to basic questions by googling, but the issue is that it is a bit difficult to gauge sources and content as a non-mathematician. Thanks in advance!
I want to fit Ornstein-Uhlenbeck models using the OUwie() function in R. For this I am using a set of 100 phylogenetic trees, so that each tree goes into one OUwie model, resulting in 100 models.
The output of OUwie gives estimates for a trait optimum and the corresponding SE.
Now I want to summarize this output over all 100 trees. For the estimates it is easy to simply give the average value; but since the standard error depends on the sample, I am not sure whether I can give the mean, or whether I should give a range, or whether there is another (mathematically correct) way to communicate the information I get out of these models.
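One defensible option, borrowed from multiple-imputation practice (Rubin's rules), is to combine the within-model variance (the squared SEs) with the between-model variance of the estimates across trees; this is only a suggestion, not something OUwie itself provides. A sketch in Python for illustration:

```python
import math

def pool_over_models(estimates, std_errors):
    """Combine per-model estimates and SEs across m models (Rubin's rules):
    total variance = mean within-model variance
                   + (1 + 1/m) * between-model variance of the estimates."""
    m = len(estimates)
    pooled = sum(estimates) / m
    within = sum(se ** 2 for se in std_errors) / m
    between = sum((q - pooled) ** 2 for q in estimates) / (m - 1)
    return pooled, math.sqrt(within + (1 + 1 / m) * between)
```

The appeal over simply averaging the SEs is that the pooled uncertainty also reflects how much the optimum estimate varies from tree to tree, i.e. the phylogenetic uncertainty itself.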
I am a homeschooling parent who came to realize that my young children (in grades 1 and 3 this year) are into mathematics for its beauty. They are amazed at the fact that adding two whole numbers in any order yields the same result (commutativity), that grouping when adding may be helpful (associativity), and that for particular numbers n, x, and y, we have n = x^2 = y^3: 1 = 1^2 = 1^3; 64 = 8^2 = 4^3, etc. They are mesmerized by the fact that 1 = 1, 3 = 1 + 2, 6 = 1 + 2 + 3, 10 = 1 + 2 + 3 + 4, ...; special sequences of numbers such as (in this case) triangular numbers (which they know as "step shapes"). They don't take for granted that 1 × n = n for any integer (I don't think they are ready for real numbers in general). They can see that division is akin to factoring: 12/4 = 3 because 3 and 4 are factors of 12 (though they are not too up on the vocabulary yet). They can see informally that a - b = -(b - a): 8 - 5 = 3 implies that 5 - 8 = -3.
I hope I am fortunate enough to see them through their entire grade school through high school learning.
My concern is: why do educators/teachers and parents debate so much about basic skills versus critical reasoning, as if either of these, removed from the context of the ACTUAL beautiful ideas of mathematics, will pull school children into understanding mathematics any more than knowing words alone is enough to make a great writer? Why do we think either (a) that it is all basic skills, or (b) that it is word problems about so-called everyday stuff, which are more social studies than mathematics? How about focusing on mathematics as a subject, the way we do Language Arts? We don't reduce Language Arts to writing letters and business contracts in which grammar and syntax are important; we want children to enjoy the beauty of language. Shouldn't we do the same with mathematics?
We use ALEKS-PPL for placement and co-requisite support for our math courses. Our institution is interested in using something similar for English composition: placement + adaptive learning co-requisite support. What is out there for us to review?
In mathematics, some proofs are not convincing, because an assumption made in the proof fails.
If equality is changed to "approximately equal to", the proof becomes more accurate, but uniqueness can no longer be guaranteed.
On page 291 of Introduction to Real Analysis, Fourth Edition, by Bartle and Sherbert, a proof of the uniqueness theorem is given.
In my view, that proof is not perfect.
Reason: initially epsilon is assumed to be positive, and hence not equal to zero. Yet before the conclusion of the proof, epsilon is treated as zero and the two limits are written as equal.
Since epsilon is not zero, the equality cannot hold; one can only write that the two limits are approximately equal.
Epsilon being arbitrary never implies that epsilon is zero.
I hope the authors and other mathematicians will notice this point and address it in new editions.
The purpose is to find out which pedagogical skills mathematics teachers use to identify and address students' mathematics anxiety.
We substitute y = e^{rx} while solving constant-coefficient linear differential equations. What is the reason behind this? Is there any proof?
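A short justification: exponentials are eigenfunctions of the differentiation operator, so the substitution turns a constant-coefficient differential equation into an algebraic one. For the second-order case:

```latex
\[
a y'' + b y' + c y = 0, \qquad y = e^{rx}
\;\Longrightarrow\;
\bigl(a r^{2} + b r + c\bigr) e^{rx} = 0 .
\]
Since $e^{rx} \neq 0$ for all $x$, this forces the characteristic equation
\[
a r^{2} + b r + c = 0 .
\]
```

Each root r of the characteristic polynomial yields a solution e^{rx}, a repeated root contributes x e^{rx}, and one can show (via the Wronskian) that these solutions form a basis of the full solution space, so nothing is lost by trying y = e^{rx} first.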
I recently saw a question regarding the relationship between language and mathematical learning, but am interested in learning more about the opposite.
Can anyone recommend relevant readings that explore the relationship between mathematical ability/maths learning and language acquisition? I am primarily interested in second/foreign language acquisition, but am also interested in first language acquisition and its relation to mathematics.
What research is there (if any) about gifted students frequently getting easy questions on academic tests/exams INCORRECT and harder, more challenging ones CORRECT — particularly in mathematics? Does anyone have any empirical evidence as to why this might occur? Thank you.
Every nonnegative integer can be expressed as a sum of three triangular numbers (Gauss's "Eureka" theorem).
Given two integers N = a + b + c and M = x + y + z (with a, b, c, x, y, z triangular), how can one express the product N·M as a sum of three triangular numbers? Example:
11 = 0 + 1 + 10 and 13 = 0 + 3 + 10; the product 11 × 13 = 143 = 10 + 28 + 105.
How can we write the product N·M = F(a,b,c,x,y,z) + G(a,b,c,x,y,z) + H(a,b,c,x,y,z) as a sum of three triangular numbers?
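No closed formula is offered here, but a brute-force search makes it easy to experiment with candidate identities F, G, H; a sketch:

```python
def three_triangular(n):
    """Brute-force a representation n = T_a + T_b + T_c with triangular
    numbers T_k = k(k+1)/2 (Gauss's theorem guarantees one exists)."""
    tris = []
    k = 0
    while k * (k + 1) // 2 <= n:
        tris.append(k * (k + 1) // 2)
        k += 1
    tri_set = set(tris)
    for a in tris:
        for b in tris:          # tris is sorted, so we can stop early
            if a + b > n:
                break
            if n - a - b in tri_set:
                return a, b, n - a - b
    return None
```

Note that a number t is triangular exactly when 8t + 1 is a perfect square, which is a convenient check when testing candidate formulas.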
I have dug into a theory established by D. Loeb from the University of Bordeaux, and found an interesting way to represent the things you have (classical sets) and the things you do not have and might need, want, or be ready to acquire (a new form of set with a negative number of elements).
This fits very well into the extension of matrices of sets which I needed to develop; see why and how in the references below.
I would be thrilled to know if you have use cases where this model — classical sets of what you have, and new negative sets of what you do not have — may help.
Please share your input!
Presentation Matrices of Sets, complete tutorial with use cases
D. Loeb, Sets with a negative number of elements, Advances in Mathematics, 91 (1992), 64-74. https://core.ac.uk/download/pdf/82474206.pdf
In a mathematical analysis of piezoelectric models, we use the terms "the electrically open and short circuit case" for which we have some mathematical definition as well.
What is the physical significance of these boundary conditions in making sensors and other devices?
It is well established that nonlinear evolution equations (NLEEs) are widely used as the leading mathematical equations for describing physical problems in many branches of physics. But they are very difficult to solve. Recently, many researchers have proposed various solution methods. Unfortunately, most of these methods introduce additional free parameters. In my opinion, solutions of NLEEs with additional free parameters are not useful for further verification in the laboratory. That is why my discussion topic is: "What is the best solution method for nonlinear evolution equations?"
Although Bhaskara II, in his Siddhanta Shiromani, criticized a Buddhist school of astrology which held that the Earth is moving, I have not seen any mention of texts of a Buddhist astrological or mathematical tradition in recent studies. Vedic and Jain sources are well known.
Are there any primary textual sources on Buddhist astrology or mathematics?
Dear experts and colleagues,
Hello, all! I recently received a reviewer's comment stating that the proposed network measures from graph-theoretical analyses could be correlated by mathematical definition.
So I ran the correlational analyses, and global efficiency, characteristic path length, and mean clustering coefficient (global measures) turned out to be correlated with each other.
I understand that global efficiency is inversely related to characteristic path length, but I am quite confused about how characteristic path length is inversely correlated with mean clustering coefficient, while mean clustering coefficient is positively correlated with global efficiency.
Or does my data suggest something is wrong?
Briefly, my study is a neurological study using MR images (diffusion tensor imaging) to explore structural networks.
Any feedback and discussion are welcomed here!
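The definitional coupling is visible if you write the measures out: global efficiency averages 1/d over node pairs while characteristic path length averages d, so they are mechanically inversely related; and in many real (e.g. small-world) brain networks, denser local connectivity tends both to raise clustering and to shorten paths, which can produce the third correlation empirically rather than by definition. A self-contained sketch of the three measures (for connected, unweighted graphs given as adjacency dicts):

```python
from collections import deque
from itertools import combinations

def bfs_dists(adj, source):
    """Shortest-path lengths from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def pairwise_dists(adj):
    return [bfs_dists(adj, u)[v] for u, v in combinations(adj, 2)]

def characteristic_path_length(adj):
    d = pairwise_dists(adj)
    return sum(d) / len(d)          # mean of d over node pairs

def global_efficiency(adj):
    d = pairwise_dists(adj)
    return sum(1 / x for x in d) / len(d)  # mean of 1/d over node pairs

def mean_clustering(adj):
    coeffs = []
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)
```

Comparing the middle two functions shows why the reviewer's point holds regardless of your data: they average the same quantity and its reciprocal.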
Hi, I have to compute the inverse of a square matrix (20 × 20) in Excel. To check that the calculation is right, I know that multiplying the matrix by its inverse should yield an identity matrix, which has a diagonal of ones while all other cells are zeros. But when I multiply the matrix by the inverse, the result is a matrix with a diagonal of ones where not all the other cells are zeros (some have values like 1, 2, etc.). Is this OK, or what is likely my mistake? Thanks in advance.
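With floating-point arithmetic the off-diagonal entries of A·A⁻¹ are normally tiny (on the order of 1e-12), not exactly zero; entries as large as 1 or 2 suggest that the matrix is nearly singular (ill-conditioned) or that the Excel ranges are misaligned. A pure-Python Gauss-Jordan sketch shows the expected behaviour:

```python
def mat_inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination with
    partial pivoting, working on the augmented matrix [A | I]."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

For a well-conditioned matrix the off-diagonal residues of `matmul(A, mat_inverse(A))` stay far below 1e-9. If Excel's MMULT of MINVERSE gives residues of order 1, check MDETERM: a determinant very close to zero means the inverse is numerically unreliable.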
In classical times there was a concept of pure science that was produced entirely in the intellect.
More recently, sciences were developed by testing the intellectual product against empirical data. These sciences are not regarded as pure.
Mathematics, often regarded as a pure science, was for most of history based on postulates of geometry that could not be proven. Then came relativity and other geometries. In the past century there was considerable effort to reformulate mathematics on the firmer basis of axiomatic set theory. Math is now regarded as somewhat purer than before, while some countries have produced two generations of graduating students who are unable to do simple arithmetic.
Fortunately I had some excellent teachers who explained the two systems and why both were needed. Other teachers presented Gödel's incompleteness theorems.
In academic settings there seems to be a difference of opinions about whether or not math is a science, and whether or not it is pure.
Is Mathematics A Science?
About two years ago, I submitted a manuscript to a reputed journal. After a couple of months of peer review, the response was that a major revision had been requested. I made the necessary adjustments and resubmitted. The journal's editor then responded that my manuscript required minor revision; the decision was "Revise for Editor Only", and he stated that the revision should be quick and would not undergo the entire review process. Again I made the required edits to make the manuscript acceptable for publication, and resubmitted it. It has now been 120 days and the status is still "With Editor". I did send two emails to the editorial team asking for a status update; their response to both was the same, saying that they had contacted the editor to accelerate the process.
Dear readers, I need a piece of advice: what should I do as a next step?
Thank you in advance,
I am combining fish abundance and biomass data collected from five locations which have utilised different transect dimensions:
15 x 4 m (60 m2; from one location)
30 x 5 m (150 m2; from three locations)
50 x 4 m (200 m2; from one location)
When pooling data collected from transects of differing sizes, should you standardise your data down (to the smallest dimensions) or up (to the largest)? Or in this case, perhaps standardise according to the most commonly used dimensions (30 x 5 m/150 m2)?
To me it seems as though there would be no difference, but I was wondering if there was some sort of mathematical explanation for why one way might be preferable over the others?
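Your intuition is right that the reference area is arbitrary: converting a count to a density is a linear rescaling, so standardising to 60, 150 or 200 m² changes every value by the same constant factor and leaves all relative comparisons untouched. A sketch:

```python
def standardise_count(count, area_m2, ref_area_m2=150.0):
    """Rescale a raw transect count to a common reference area
    (equivalently: density per m2 times the reference area)."""
    return count * ref_area_m2 / area_m2
```

What does matter statistically is that counts from small transects are noisier per unit area than counts from large ones, so if you later model the data, an offset or weighting for the original survey area is preferable to pretending all standardised values are equally precise.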
The VPL formula can be given as in the attached image.
Can anyone please explain whether the value of the term (Pmin − P) in the formula is in degrees or radians?
In engineering design there are usually only a few data points, or only low-order moments, so it is valuable to fit a relatively accurate probability density function to guide the design. What methods are there for fitting probability density functions from small amounts of data or from low-order statistical moments?
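The simplest family of such methods is moment matching: pick a parametric family with as many parameters as you have moments and solve for the parameters. For example, matching a Gamma(k, θ) density to a given mean and variance (the Pearson system and maximum-entropy densities are the heavier-duty relatives of this idea, using third and fourth moments as well):

```python
def gamma_from_moments(mean, var):
    """Method of moments for a Gamma(k, theta) density:
    mean = k * theta, variance = k * theta**2."""
    theta = var / mean
    k = mean / theta  # equivalently mean**2 / var
    return k, theta
```

With only two moments the family choice itself encodes your prior knowledge (positivity, skewness), so it is worth comparing two or three candidate families against whatever data points exist.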
Hello. I want to ask how I can translate the weightings of a "yes" or "no" criterion into numbers in AHP calculations.
For example, my goal is to select the most suitable construction-material vendor (alternative) according to the set criteria, including "ASTM Standards" along with 4 other criteria.
This criterion means: "Does the material vendor supply the material according to the construction standard ASTM, or not?" So the answer given for this by the alternative (vendor) would be a "Yes" or a "No", not a value.
First, I sent a questionnaire to the experts asking for pairwise comparison weightings of each criterion versus the others, and I got a great response with a consistency of 0.9. Acceptable!
But when I asked the vendors to provide their responses, they put numerical values on the first four criteria, and I built the comparison matrix of the vendors (alternatives) for each criterion using simple division (the value of one vendor divided by that of the second vendor). This gives a consistency index of exactly zero, because I derived the alternatives' weightings from exact numerical ratios. I then get numerical weightings for ranking, so it works fine for me.
But when it comes to the 5th criterion (ASTM Standards), the responses are in "Yes" or "No" terms! My question is: what numerical value should I give to a Yes and a No so that I can obtain their comparison by dividing one by the other?
For instance, if I use Yes = 1 and No = 0, the calculation breaks down, because anything divided by zero gives infinity and it does not proceed further.
If I use a scale of 1 to 9, saying Yes = 9 and No = 1, it gives me good results. But I have no particular reason to use 1 and 9; it could be any numerical value, say 1000 for Yes and 1 for No.
Please suggest a reasonable value, with a citation if possible.
Thanks in advance.
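Whatever numeric pair you choose (9:1 is a common pragmatic choice because it is the extreme of Saaty's 1-9 scale, but treating it that way is a modelling assumption, not a prescribed rule), you can check its effect on the priority vector directly. A sketch that builds a ratio comparison matrix from positive scores and extracts weights by power iteration:

```python
def ahp_priority_vector(scores, iterations=50):
    """Build a pairwise ratio matrix A[i][j] = s_i / s_j from positive
    scores and extract the priority vector by power iteration
    (principal eigenvector, normalized to sum to 1)."""
    n = len(scores)
    A = [[scores[i] / scores[j] for j in range(n)] for i in range(n)]
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w
```

Because a ratio matrix built from scores is perfectly consistent, the resulting weights are simply the normalized scores, so the Yes:No ratio you pick directly sets the weight gap between compliant and non-compliant vendors. Sensitivity-checking the final ranking under, say, 5:1 and 9:1 is a defensible way to justify the choice.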
Sometimes we use the term "zero time" in a formulation, but are we sure it is really "0"? Maybe it is 0.000...1. And is there a "zero" time at all (can we stop time)? Similarly, we sometimes say v = 0; are we sure?
On the other hand:
1/0 = infinity. Well then, what is "infinity"? How does it work in all the other equations?
infinity - infinity = 0?
1 + infinity = infinity?
What if, instead of zero, we use the closest thing to it: the monad, the basic thing that constitutes the universe and everything in it? Gottfried Leibniz, in his essay "Monadology," suggested that the fundamental unit of all things is the monad. He intended the monad to have some of the attributes of the atom, but with important differences. The monads Leibniz proposed are indivisible, indissoluble, and have no extension or shape, yet from them all things were made. He called them "the true atoms of nature." At the same time, each monad mirrors the universe. If we use the monad instead of zero, every equation works.
I think science says "everything originated from a basic thing."
Boolean logic and ZFC are constructed from fundamentally different foundations. Modern (20th-century) mathematics appears to be dominated by the ZFC approach. I am not sure it is understood that the two approaches are incompatible. Has this incompatibility been studied?
I have performed cropping attacks in visual (pictorial) form, but I want to do this test in mathematical form; i.e., I want to calculate the values of a cropping-attack analysis in MATLAB.
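The usual quantitative companions to a visual check are pixel-level metrics such as MSE/PSNR (and, for the extracted watermark, normalized correlation). A language-neutral sketch in Python of PSNR after simulating a crop by zeroing pixels — in MATLAB the direct analogues are the Image Processing Toolbox functions `immse` and `psnr`:

```python
import math

def psnr(original, attacked, peak=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, attacked)) / len(original)
    return float('inf') if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

def crop_attack(pixels, start, end):
    """Simulate a cropping attack by zeroing pixels[start:end]."""
    out = list(pixels)
    out[start:end] = [0] * (end - start)
    return out
```

Reporting PSNR (attacked vs. original image) together with the normalized correlation of the extracted watermark, for several crop percentages, is the standard way such tables appear in watermarking papers.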
I am trying to write the motivation section for the linearization of a nonlinear mathematical formulation. I claim that the linearized model is simpler and easier to solve, but of course I need a reference for that! Any suggested book chapters or articles that discuss this point, please?
Thanks in advance...
STEM was the main theme of the ASTE 2019 international conference, with at least 8 posters, 27 oral presentations and 3 workshops promoting STEM classrooms, STEM instruction/teaching, STEM lessons, STEM summer camps, STEM clubs and STEM schools, without providing a conceptualization or operational definition of what STEM is. Some presentations advocated the integration of the disciplines, but the examples provided were mainly "inquiry" and "engineering design" practices that in fact did not differ from the kind of poorly conceptualized, epistemologically incongruent hands-on/minds-off classroom activities.
It is therefore worth considering:
(1) Why do we call it STEM if it does not differ from practices applied for decades (e.g., inquiry, hands-on activities)?
(2) What benefits (if any) can this STEM-ification mindset/trend bring to science education and its research?
Dear colleagues, please point me to materials about the mathematics of Dropout regularization in ANNs. I understand perfectly well how this algorithm works in practice, but I don't quite understand the mathematics.
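The core of the mathematics is small: each unit is multiplied by an independent Bernoulli mask (kept with probability 1 − p), and with "inverted" dropout the survivors are divided by 1 − p so that the expected activation is unchanged, E[x̃_i] = x_i. A sketch:

```python
import random

def inverted_dropout(x, p_drop, training=True):
    """Zero each unit with probability p_drop; scale survivors by
    1/(1 - p_drop) so the expected activation equals the input."""
    if not training or p_drop == 0.0:
        return list(x)
    keep = 1.0 - p_drop
    return [xi / keep if random.random() < keep else 0.0 for xi in x]
```

At test time the layer is the identity, which is exactly why the inverted scaling is done during training. The deeper results (dropout as an approximate ensemble over 2^n thinned networks, and its interpretation as adaptive L2 regularization for linear models) build on this expectation argument.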
I know that an infant's brain can repair itself when damaged but why doesn't the same happen in adults after stroke or brain injuries?
I would really appreciate it if anyone could give me some idea of mathematical techniques for developing a correlation for a dependent variable W, which has to be defined as W = f(x, y, z), where the individual functions are power laws: W = a·x^b, W = c·y^d and W = e·z^f.
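A standard way to combine such single-variable power laws is to posit a multiplicative model W = K·x^b·y^d·z^f; taking logarithms makes it linear, ln W = ln K + b ln x + d ln y + f ln z, which can be fitted by ordinary least squares. A self-contained sketch (the multiplicative form itself is an assumption that should be checked against your data):

```python
import math

def fit_power_law(xs, ys, zs, ws):
    """Fit W = K * x**b * y**d * z**f by least squares on logarithms."""
    rows = [[1.0, math.log(x), math.log(y), math.log(z)]
            for x, y, z in zip(xs, ys, zs)]
    t = [math.log(w) for w in ws]
    n = 4
    # normal equations (R^T R) beta = R^T t, solved by Gauss-Jordan
    RtR = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    Rtt = [sum(r[i] * ti for r, ti in zip(rows, t)) for i in range(n)]
    M = [RtR[i] + [Rtt[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col:
                fac = M[r][col]
                M[r] = [a - fac * b for a, b in zip(M[r], M[col])]
    beta = [M[i][n] for i in range(n)]
    return math.exp(beta[0]), beta[1], beta[2], beta[3]
```

A consistency check on the assumption: if the multiplicative form holds, the exponents b, d, f recovered from the joint fit should match those from the single-variable fits W = a·x^b, etc.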
In mathematics we generally work with groups whose elements are numbers; generally, we work with constants.
For example: for any x from R, every element we choose from R is a constant; the same holds for the complex numbers, where any z from C is a constant.
My question is: can we find a group (or groups) whose elements are variables rather than constants? For example, a group G such that for any x from G, x is not a constant but another variable?
And if such a thing exists, can you give me examples?
Thank you!
While working in both software packages, after loading the training and validation data for predicting a single output from several input variables (say 10), the software skips some of the inputs and delivers an explicit mathematical equation for future prediction of the specific parameter, omitting some of the input variables (say 2 or 3, or maybe more). What criteria do these software packages use internally for picking the most influential parameters when building a mathematical predictive model?
Austrian-born mathematician, logician, and philosopher Kurt Gödel produced in 1931 one of the most stunning intellectual achievements in history. His shocking incompleteness theorems, published when he was just 25, proved that within any consistent axiomatic mathematical system rich enough to express arithmetic, there are propositions that cannot be proved or disproved from the axioms of the system. Such a system cannot be both complete and consistent.
Understanding Gödel's proof requires advanced knowledge of symbolic logic, as well as Hilbert's and Peano's mathematics. Hilbert's Program was a proposal by the German mathematician David Hilbert to ground all existing theories in a finite, complete set of axioms, and to provide a proof that these axioms were consistent. Ultimately, the consistency of all of mathematics would be reduced to basic arithmetic. Gödel's 1931 paper proved that Hilbert's Program is unattainable.
The book Gödel’s Proof by Ernest Nagel and James Newman provides a readable and accessible explanation of the main ideas and broad implications of Gödel's discovery.
Mathematicians, scholars and non-specialist readers are invited to offer their interpretations of Gödel's theory.
When I observed mathematics education among preservice teachers at my university, I found that what students learn in their mathematics classes does not match what is needed in school mathematics classrooms. That led me to initiate design research on a Hypothetical Learning Trajectory containing a sequence of learning processes related to elementary school mathematics. I am considering starting with observations in elementary math classes to identify misconceptions, then jointly reviewing the issues on campus, conducting literature reviews, designing lessons that address the identified problems, implementing them in class, and evaluating the results. The final result would be a Local Instructional Theory on learning about teaching mathematics, to help prospective teachers overcome misconceptions that occur in elementary mathematics classes. I would welcome many suggestions for carrying out this research.
In 2010, Dr. Khmelnik found a suitable method for solving the Navier-Stokes equations and published his results in a book. In 2021 the sixth edition of his book was released; it is attached to this question for downloading. It is worth mentioning here that the Clay Mathematics Institute has included the problem of solving the Navier-Stokes equations in its list of seven important millennium problems. Why are the Navier-Stokes equations so important?
According to the resources, two algorithms are used in HOMER software: a grid search algorithm and a derivative-free algorithm.
How can I find the mathematical formulas behind these algorithms?
If you know a way or a source, please help.
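HOMER's internal implementation is proprietary, so its exact formulas are not published; however, the generic grid-search idea its documentation describes can be sketched as follows (a minimal illustration, not HOMER's actual code; the toy cost function and grids are made up):

```python
# Hedged sketch of a generic grid search: evaluate a cost function over the
# Cartesian product of per-variable candidate grids and keep the minimum.
from itertools import product

def grid_search(cost, grids):
    """Return the best grid point and its cost over the product of grids."""
    best = min(product(*grids), key=lambda point: cost(*point))
    return best, cost(*best)

# Toy sizing problem: minimize (pv - 3)^2 + (battery - 2)^2 over a small grid.
point, value = grid_search(lambda pv, batt: (pv - 3) ** 2 + (batt - 2) ** 2,
                           [[0, 1, 2, 3, 4], [0, 1, 2, 3]])
print(point, value)  # (3, 2) 0
```

Derivative-free optimizers (the other family HOMER cites) instead refine candidate points without gradients, e.g. by pattern search.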
Hello! I have an exercise that I have been struggling with for a week. I have a 2D flow with an inlet velocity of 0.01 m/s, and I want to calculate the numerical values of the velocity as a function of the x and y directions, U(x, y), in the water-flow domain. For example, how can I calculate the velocity at the distances x1 and x2 (as in the figure) as a function of x and y? Please note that the flow is incompressible and fully developed after it passes the entrance length. The height of the domain is 0.02 m and the length is 1 m. You can see the figure below. Thank you in advance; I will appreciate your help!
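If the flow is laminar, steady, and fully developed (i.e., x1 and x2 lie past the entrance length), the velocity no longer depends on x and the analytical plane-Poiseuille profile applies. A minimal sketch, assuming the stated height H = 0.02 m and mean velocity 0.01 m/s:

```python
# Hedged sketch: fully developed laminar flow between parallel plates
# (plane Poiseuille). The profile is parabolic in y and independent of x.
H = 0.02      # channel height [m]
U_IN = 0.01   # uniform inlet velocity = mean velocity [m/s]

def u_developed(y):
    """Streamwise velocity at height y (0 <= y <= H), past the entrance length."""
    return 6.0 * U_IN * (y / H) * (1.0 - y / H)

print(u_developed(H / 2))   # centerline velocity, 1.5 * U_IN
print(u_developed(0.0))     # no-slip wall
```

Within the entrance length the profile still depends on x, so there the velocity must come from your numerical (CFD) solution itself.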
Using the BTE, how do we obtain mobilities and conductivity? More calculation-oriented (mathematical) answers would be helpful.
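In the relaxation-time approximation, solving the linearized BTE for a parabolic band reduces to the Drude-like expressions σ = n e²τ/m* and μ = eτ/m*. A minimal numerical sketch (the τ, m*, and n values below are assumed purely for illustration, not material data):

```python
# Hedged sketch: relaxation-time-approximation (RTA) results from the BTE.
# With a constant relaxation time tau and a parabolic band, the linearized
# BTE yields the Drude-like formulas implemented below.
E_CHARGE = 1.602176634e-19    # elementary charge [C]
M_E = 9.1093837015e-31        # electron rest mass [kg]

def mobility(tau, m_eff):
    """Carrier mobility mu = e * tau / m*  [m^2/(V s)]."""
    return E_CHARGE * tau / m_eff

def conductivity(n, tau, m_eff):
    """Conductivity sigma = n * e * mu  [S/m] for carrier density n [m^-3]."""
    return n * E_CHARGE * mobility(tau, m_eff)

# Assumed illustration values: tau = 1e-14 s, m* = 0.2 m_e, n = 1e24 m^-3.
mu = mobility(1e-14, 0.2 * M_E)
sigma = conductivity(1e24, 1e-14, 0.2 * M_E)
```

Energy-dependent relaxation times require averaging τ(E) over the distribution function, but the structure of the result is the same.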
Physical reality can be observed. At least part of the structure and behavior of physical reality is perceivable, and humans can communicate about these experiences. Curious humans want to comprehend these perceptions. Humans have designed linguistic tools to communicate about the perceived structure and behavior of physical reality, and with these tools they have constructed structures and models of mechanisms that might explain the structures and mechanisms that physical reality exposes. Some of these structures and models of mechanisms appear to be successful. People discuss the success of these approaches and call this activity exact science. Other humans discuss this activity and call themselves philosophers.
Humans are interested in the structure and mechanisms of physical reality because this knowledge helps them survive as individuals and as communities. Part of the exact sciences is formed by mathematics. Mathematics contains structures and models of mechanisms that are not directly derived from perceptions of physical reality; these concepts are derived from abstract foundations. Examples are empty space and point-like objects. Scientists use these concepts to construct vector spaces, number systems, and coordinate systems, and they apply these higher-level concepts to construct a model of their living space. Philosophers will immediately point out that it is impossible to prove that these models are correct. However, these models feature structure and behavior: the more the structure and behavior of the models agree with the perceived structures and behavior, the larger the chance that the model fits reality. Since reality appears to be very complicated, there is little chance that good correspondence will ever end the dispute.
One of the aspects of the dispute concerns what the best inroad will be for comprehending most of the structure and the mechanisms of physical reality. That is the background of the posed question.
The US Fed's balance sheet has recently doubled from $4T to $8T (see the monthly chart) – not to mention it was around $800B in 2008. It seems to me that this is highly inflating the housing market, pushing housing prices higher as well as inflation rates, largely without taking economic production into consideration. The Fed refers to the situation as "transitory".
I am sure there should be a solid mathematical rationale behind it. Is this parabolic Quantitative Easing approach a mathematically and/or economically sound strategy, considering the current economic situation? Are there alternative strategies? What are the upsides and downsides?
A somewhat outdated visualization of the components of the Fed's balance sheet:
Dear colleagues, I would like to know whether there is a mathematical equation or model for calculating the number of Soxhlet cycles as a function of the following parameters:
- the type of solvent
- the volume of the extractor
- the temperature of the solvent heating
If so, how can it be applied, and what is its uncertainty?
Thank you very much.
Which mathematical transforms, such as the Fourier, Hilbert, or Laplace transforms, are used for analysis and study in the field of music acoustics?
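As a concrete example on the Fourier side, a discrete Fourier transform recovers the fundamental frequency of a sampled tone. A minimal sketch (the sampling rate and note are assumed; real analyses would use an FFT library such as numpy.fft):

```python
# Hedged sketch: find the dominant frequency of a synthetic tone with a
# naive DFT (the discrete form of the Fourier transform).
import cmath
import math

FS = 8000   # sampling rate [Hz] (assumed)
N = 400     # number of samples -> 20 Hz bin resolution
tone = [math.sin(2 * math.pi * 440 * n / FS) for n in range(N)]  # A4 = 440 Hz

def dft_magnitudes(x):
    """Magnitudes of the first half of the DFT spectrum (naive O(N^2) form)."""
    n_samples = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_samples)
                    for n in range(n_samples)))
            for k in range(n_samples // 2)]

mags = dft_magnitudes(tone)
peak_bin = max(range(len(mags)), key=mags.__getitem__)
peak_hz = peak_bin * FS / N
print(peak_hz)  # 440.0
```

The same spectrum also exposes the harmonics that determine an instrument's timbre, which is why Fourier methods dominate music acoustics.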
Please look at the mathematical construction of the theta and zeta functions in the representation of the winding of a sphere. In this paper, based on the representation of the winding of a sphere, we construct a proof of the Riemann hypothesis on the nontrivial zeros. Could you check this proof?
Preprint On the winding of a sphere
Preprint On the winding of a sphere (in Russian)
I need to find a method of proving a mathematical equation given in the attached file.
The equation involves a Bessel function.
I would be thankful for your help and advice.
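Since the attached equation isn't visible here, a useful general first step is to sanity-check a Bessel identity numerically before attempting a proof. A minimal sketch using the power series for integer-order J_n, illustrated on the standard recurrence J_{n-1}(x) + J_{n+1}(x) = (2n/x) J_n(x) (which may or may not be the identity in your file):

```python
# Hedged sketch: verify a Bessel-function identity numerically with a
# truncated power series for J_n (integer order n >= 0).
from math import factorial

def J(n, x, terms=30):
    """Bessel function of the first kind via its power series."""
    return sum((-1) ** k / (factorial(k) * factorial(k + n)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

x, n = 2.5, 3
lhs = J(n - 1, x) + J(n + 1, x)
rhs = (2 * n / x) * J(n, x)
print(abs(lhs - rhs))  # agreement to machine precision
```

If the numerical check passes for many (n, x), a proof typically proceeds from the series definition or from the generating function of J_n.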
Dimensional rules for Homogeneous Diophantine equations
A Diophantine equation is an algebraic equation in several unknowns with integer coefficients, whose solutions are required to be whole numbers; the only solutions of interest are integers.
In mathematics and physics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it or the minimum number of sides to build a physical area or volume.
To build a geometrical rectangle (or square) we require 2 orthogonal sides (x, y are independent indeterminates): area = x*y (or area = x*x).
To compute a numerical square (a square number) we require 2 independent numbers p and q that are coprime, with gcd(p, q) = 1.
Let us introduce a dimension for the nature of numbers, such as I (integer), R (real), C (complex), and so on.
n^2 = p^2 +q^2 ; 5^2=3^2 +4^2
Dimensional analysis gives for n^2= p^2 +q^2
I*I = I*I + I*I; the sum of two squares may equal a square under certain conditions.
Then the equation n^2 = p^2 + q^2 may have integer solutions.
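The claim that n^2 = p^2 + q^2 admits integer solutions is easy to confirm by brute force; a minimal sketch enumerating the primitive Pythagorean triples up to a small bound:

```python
# Hedged sketch: brute-force search for primitive solutions of n^2 = p^2 + q^2,
# i.e. Pythagorean triples (p, q, n) with gcd(p, q) = 1.
from math import gcd, isqrt

def primitive_triples(limit):
    """Yield (p, q, n) with p <= q, gcd(p, q) = 1 and p^2 + q^2 = n^2, n <= limit."""
    for n in range(2, limit + 1):
        for p in range(1, isqrt(n * n // 2) + 1):
            q2 = n * n - p * p
            q = isqrt(q2)
            if q * q == q2 and gcd(p, q) == 1:
                yield (p, q, n)

print(list(primitive_triples(13)))  # [(3, 4, 5), (5, 12, 13)]
```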
A dimension is anything we can independently adjust, it is based on some geometric or algebraic definition.
In any correct mathematical equation representing a relation between physical quantities or mathematical objects (length, area, volume, ..., simplex), the dimensions of all terms must be the same on both sides. Terms separated by '+' or '-' must have the same dimensions.
The degree of freedom is the number of free components: how many components need to be known before the "object" is fully determined?
- a point in the plane has two degrees of freedom : its two coordinates;
- a rectangle has two degrees of freedom: its two sides (length and breadth);
- A point in 3-dimensional space has three degrees of freedom because three coordinates are needed to determine the position of the point.
Law of Degree of Freedom
The degree of freedom of a Diophantine equation is the number of parameters of each side that may vary independently. The degree of freedom must be the same on both sides.
- The equation n^2= p^2 +q^2
The LHS is a square number with two degrees of freedom (its two multipliers, or components).
The RHS is a sum of two squares with two degrees of freedom, p^2 and q^2, which vary independently.
The degree of freedom is defined as the dimension of a mathematical object. When the degree of freedom is used instead of dimension, this usually means that the object is only implicitly defined.
Attempt to answer to Hilbert 10th problem
Hilbert's Tenth Problem Given a Diophantine equation with any number of unknown quantities and with rational integral numerical coefficients: To devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers.
The problem was solved in 1970 by Yuri Matiyasevich, who proved in his doctoral dissertation that a general algorithm for solving all Diophantine equations cannot exist.
I think the concept of dimensional analysis is an attempt to answer Hilbert's tenth problem for homogeneous Diophantine equations. It works very well and allows one to determine whether a homogeneous Diophantine equation has integer solutions, although it remains to compute those solutions using modular arithmetic or algebraic methods. The most important thing is to answer Yes/No and affirm whether or not integer solutions exist.
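As an illustration of the modular-arithmetic step, residues can settle the Yes/No question for some homogeneous equations. For example, squares are 0 or 1 (mod 4), so any solution of x^2 + y^2 = 3z^2 must have x, y, z all even mod 4, and infinite descent then leaves only the trivial solution. A minimal sketch (this example equation is mine, not from the text above):

```python
# Hedged sketch: rule out primitive integer solutions of x^2 + y^2 = 3 z^2
# by checking every residue class mod 4. Squares mod 4 are only 0 or 1,
# so the congruence forces x, y, z all even; infinite descent then shows
# the only integer solution is x = y = z = 0.
sols_mod4 = [(x, y, z)
             for x in range(4) for y in range(4) for z in range(4)
             if (x * x + y * y - 3 * z * z) % 4 == 0]
all_even = all(x % 2 == 0 and y % 2 == 0 and z % 2 == 0
               for (x, y, z) in sols_mod4)
print(all_even)  # True
```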
I want to center (subtract the mean from) my data in order to lower my intercept (as close to zero as possible). However, since I also have categorical data, which I don't know how to center, I can't seem to lower my intercept.
The thing is that my dummy variables drive up the intercept, and I don't know how to fix the issue.
When I enter only continuous variables that are centered, the intercept is close to zero.
Thank you in advance!
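One way to see why dummies raise the intercept and what centering them does: with a raw 0/1 dummy the intercept is the reference-group mean, while with a mean-centered dummy it becomes the grand mean of y. A minimal sketch with made-up data and a one-predictor OLS fit:

```python
# Hedged sketch: effect of centering a dummy variable on the OLS intercept.
from statistics import mean

y = [3.0, 4.0, 5.0, 7.0, 8.0, 9.0]
d = [0, 0, 0, 1, 1, 1]  # 0/1 dummy: group membership

def ols(x, y):
    """Simple-regression slope and intercept by the closed-form OLS formulas."""
    mx, my = mean(x), mean(y)
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    return my - b1 * mx, b1  # (intercept, slope)

b0_raw, _ = ols(d, y)                     # intercept = reference-group mean, 4.0
d_centered = [di - mean(d) for di in d]   # dummy minus its mean (here 0.5)
b0_centered, _ = ols(d_centered, y)       # intercept = grand mean of y, 6.0
```

Centering a dummy is legitimate mathematically, but the intercept then loses its "reference group" reading; an alternative with the same effect is effect coding (-1/+1 or -0.5/+0.5) instead of 0/1.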