Questions related to Mathematics
Mathematics Teacher Educators' (MTEs') best practices.
I'm interested in the research literature on Mathematics Teacher Educators' (MTEs') best practices, especially MTEs' practices for teaching problem solving.
There is valid curiosity as to whether such modifications might exist in nature.
There is research on proving mathematical results that shed light on quantum theory (QT) in a more operational manner, motivated by quantum information theory.
There are at least two strands in this debated issue.
**Scientific continuity as related to scientific change
Shapere & Kuhn are representatives. The (alleged, by Shapere) problem of "incommensurability" (Kuhn, 1960s) attempts to explain scientific change in terms of concepts of meaning and reference. Another way is through the concept of "reasons" and the issues reasons raise.
The Galilean paradigm broke meaning continuity with the Aristotelian one and is incommensurable with it, i.e., no comparison can be made between the two.
**Scientific continuity as an independent area
A more standard way: it considers factors such as mathematical continuity and causal continuity. GR, for example, deviated somewhat causally from Newtonian gravity, but maintained mathematical continuity.
I have worked with Dr. M. Azzedine (France), a member of the MAA, and proved the unsolved Beal's Conjecture, FLT directly, the Collatz Conjecture, and Goldbach's Conjecture (all proofs in one article). The article will be published in April.
Can I expect any career or financial benefits from that work? I have also developed Mathematical Theology, to bring mathematics into humanity. I have investigated the theses of some university-level mathematics professors from Kerala State, India.
What are the expectations of the person who proves the Collatz Conjecture? I am an ordinary research scholar from a government college in Tamil Nadu, India.
I have 'N' inputs (which correspond to temperatures) for calculating a specific output parameter. I'm getting 'N' output data points based on the inputs.
However, my goal is to select an optimum number from all the output data and use it in another calculation.
'N' number of input data --> Output parameter calculation --> Identification of an optimized output parameter --> Use that value for another calculation.
How can I find an optimal value among the 'N' output data? Can we employ any algorithm or process?
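Since the question leaves the selection criterion open, here is a minimal sketch of the pipeline described above. The output model and the "closest to a target" objective are placeholders (assumptions), to be replaced by the actual calculation and optimality criterion:

```python
# Sketch: pick an "optimal" output among N computed outputs.
# compute_output and the target objective below are hypothetical
# placeholders; substitute your own output calculation and criterion.

def compute_output(temperature):
    # hypothetical output model
    return 2.0 * temperature + 5.0

def select_optimum(inputs, objective):
    # evaluate every input, then keep the output minimizing the objective
    outputs = [compute_output(t) for t in inputs]
    return min(outputs, key=objective)

temperatures = [10.0, 25.0, 40.0, 55.0]   # the N inputs
target = 60.0                             # assumed design target
best = select_optimum(temperatures, lambda y: abs(y - target))
print(best)   # the output closest to the target, here 55.0
```

Any scalar objective (maximum, minimum, distance to a target, a cost function) fits the same pattern; for large or continuous input spaces, the exhaustive loop is replaced by an optimization algorithm.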
We consider that R~Q is valid for certain areas and not for others -- as in [1,2] and in the RG question:
See [Box 1], in the REFERENCES, taken from that previous question, where the discrete Weyl-Heisenberg group is explained in quantum information.
That said, the consequences, both illustrating and limiting the equivalence (~) relationship in a wider scope, are:
1. Differential and Integral Calculus (analysis) [1,2] work for R and Q, but are absolutely precise for the sets Q and Q*;
2. The Fourier transform (FT~FFT) works in the set Q*, contrary to the well-known assumption of using the set R over an infinite and "continuous" range of frequencies;
3. Existence of the new set Q*, allowing all calculations to be expressed numerically in the set Q*, absolutely accurately, as opposed to complex numbers (the sets C or G), which cannot even be calculated numerically;
4. Deprecation of the sets C and G (complex numbers) in numerical analysis; the equation x^2+1=0, and any other, is physically visualized and solved in the set Q*;
5. New applications in Physics [6-7];
6. Faster numerical calculations [8,9], without hopeless approximations to the set R;
7. The problem of closure is solved in physics and mathematics;
 curvature represented by the inverse of the second derivative, instead of the second differential, when the latter is not possible, as in the Schrödinger equation;
 According to [Box 1], in the REFERENCES, one cannot use mathematical real numbers (the set R) in a "continuous" Weyl-Heisenberg group in quantum information, but must use a discrete Weyl-Heisenberg group, which uses the set Q or Q*; no infinitesimals are used;
 As provided by AO in this question, consider fractal functions. Fractal functions form the basis of a constructive approximation theory for non-differentiable functions, where "non-differentiable" is tainted by imposing a mythical continuity in mathematics [cf. 1] in the set R, but links to Q and Q*;
 As provided by RS in this question, consider Bayesian probability. The Bayesian methodology of updating beliefs (conditional probability) is cast in mathematical real numbers R, but links to Q and Q*;
 The sets R, Q, and Q* look similar in a graph where the spacing between the points is small enough [1,2]; and
 particle diffraction uses stimulated emission with the set Q*, not waves using the set R, as explained in .
[+14] open in this question.
In addition, considering the wider scope of points 1 ff. above, can one consider R~Q in many cases, as evidenced by those examples? But in some cases R cannot be used at all, even approximately; one has to use the sets Q or Q*.
The main consequences: nature is ontically digital, quantum. Otherwise, R~Q is an approximation that allows using R. Every effect in physical reality can be accomplished better with Q or Q* only; none requires R, C, or G. Also, infinitesimals do not have to be used at all.
This also answers affirmatively two previous questions, confirming [BOX 1], and .
What is your qualified opinion?
[Box 1] Although Weyl's training was in these mythical aspects, the infinitesimal transformation and the Lie algebra, he saw an application of groups in the many-electron atom, which must have a finite number of equations. The discrete Weyl-Heisenberg group comes from these discrete observations, does not use infinitesimal transformations at all, and has finite-dimensional representations. Similarly, this is the same as someone trained in traditional infinitesimal calculus starting to use rational numbers in calculus, with DDF . The similar previous training applies in both fields, from a "continuous" field to a discrete, quantum field. In that sense, R~Q and R~Q*; the results are the same formulas -- but now absolutely accurate. This allows results foreseen in Chapter 7 of , and other new results in physics, but now on firm, not self-contradictory, mathematical "continuous" ground .
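The discrete Weyl-Heisenberg group mentioned in [Box 1] can be made concrete with the standard clock and shift matrices in a finite dimension d, which satisfy Weyl's commutation relation ZX = ωXZ with ω = e^(2πi/d). A small self-contained sketch (this is the standard textbook construction, not taken from the cited preprints):

```python
import cmath

d = 3
w = cmath.exp(2j * cmath.pi / d)   # primitive d-th root of unity

# shift matrix X and clock matrix Z: generators of the discrete
# Weyl-Heisenberg group in dimension d (finite, no infinitesimals)
X = [[1 if (i - j) % d == 1 else 0 for j in range(d)] for i in range(d)]
Z = [[w**i if i == j else 0 for j in range(d)] for i in range(d)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]

ZX = matmul(Z, X)
XZ = matmul(X, Z)
# Weyl's commutation relation: ZX = w * XZ, entry by entry
ok = all(abs(ZX[i][j] - w * XZ[i][j]) < 1e-12
         for i in range(d) for j in range(d))
print(ok)
```

The group generated by X, Z, and the phases ω^k is finite, with finite-dimensional representations, in line with the "finite number of equations" remark in the box.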
Preprint FT ~ FFT
[6] Yuri Igorevich Ozhigov, "Constructive Physics (Physics Research and Technology)." Nova Science Pub Inc; UK ed., ISBN 1612095534, 2011.
 in print, see also .
[11] in print, see .
It finally has occurred to me that there is a similarity between i = √-1 and √2. Each is a linearized representation of an essentially quadratic value. We use the former in complex numbers and include the latter in the real number system as an irrational number. Each has proved valuable and is part of accepted mathematics. However, an irrational number does not exist as a linear value, because it is indeterminate; that is what a non-ending, non-repeating decimal means: it can never fully exist. Perhaps we need an irrational number system, as well as a complex number system, to be rigorous.
The sense of this observation is that some values are essentially quadratic. An example is the Schrödinger Equation which enables use of a linearized version of a particle wave function to calculate the probability of some future particle position, but only after multiplying the result by its complex conjugate to produce a real value. Complex number space is used to derive the result which must be made real to be real, i.e., a fundamentally quadratic value has been calculated using a linearized representation of it in complex number space.
Were we to consider √-1 and √2 as similarly non-rational, we might find a companion space with √2 scaling, joining the complex number space with √-1 scaling, along a normal axis. For example, development of the algebraic numbers a + b√2 could include coordinate points with a stretched normal axis (Harris Hancock, Foundations of the Theory of Algebraic Numbers).
A three-space with Rational – Irrational – Imaginary axes would clarify that linearization requires a closing operation to restore the result to the Rational number axis, where reality resides.
[Note: most people do not think like I do, and almost everyone is happy about that: please read openly, exploringly, as if there might be something here. (Yes, my request is based on experience!) Tens of thousands of pages in physics and mathematics literature from popular exposition to journal article lie behind this inquiry, should you wish to consider that.]
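The "closing operation" described above for the Schrödinger case can be shown in two lines: a complex amplitude only becomes a measurable (real) quantity after multiplication by its conjugate. The plane-wave amplitude below is a toy example, chosen purely for illustration:

```python
import cmath

# toy amplitude: a plane wave psi = exp(i k x) / 2, so |psi|^2 = 1/4
x, k = 0.7, 3.0
psi = cmath.exp(1j * k * x) / 2.0

# the "closing operation": psi* . psi collapses the complex
# (linearized) representation back to a real, quadratic value
prob = (psi.conjugate() * psi).real

print(prob)   # 0.25, a real number, although psi itself is complex
```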
Could any expert try to examine the new interesting methodology for multi-objective optimization?
A brand-new conception of preferable probability and its evaluation has been created; the book, entitled "Probability-Based Multi-Objective Optimization for Material Selection", was published by Springer. It opens a new way for multi-objective orthogonal experimental design, uniform experimental design, response surface design, robust design, etc.
It is a rational approach without personal or other subjective coefficients, and it is available at https://link.springer.com/book/9789811933509.
Perception is not the ultimate guide to knowledge; but, as Galileo captured the actual and the empirical, not necessarily the real, similar concerns arise.
In general, the repercussions of reduction arise because what is actual, i.e., the final instantiation of an underlying process, is not the whole story. Further omissions come from the empirical approach, since the senses are not always valid projectors of the actual.
The Galilean approach has yielded a framework that empowered our comprehension and our ability to define/describe phenomena in the realm of the actual and the empirical. His treatise should not be considered more than this, i.e., as describing the nature of the real and its dynamics.
The reduction of change to motion has been noted, but little has been argued about its shortfalls in epistemic practice. This reduction is part of the reduction of the real to the actual, since it omits any need to refer to the real to make its claims functional. It also removes philosophical or anthropocentric notions of growth and ultimate ends, which is good in one sense but, from a pure "reductionist shortfalls" point of view, is still a problem of domain restriction.
The description of motion with mathematics is another neglected point. Motion could be described qualitatively or conceptually, but such a framework has not been devised.
This question discusses the YES answer. We don't need the √-1.
The complex numbers, using rational numbers (i.e., the Gauss set G) or mathematical real-numbers (the set R), are artificial. Can they be avoided?
Math cannot be just in one's head, as explained in .
To realize the YES answer, one must advance beyond current knowledge, and this may sound strange. But every path in a complex space must begin and end in a rational number -- anything that can be measured, or produced, must be a rational number. Complex numbers are not needed physically, as numbers. But in algebra they are useful.
The YES answer can improve the efficiency of using numbers in calculations, although it is less advantageous in algebraic calculations, as in the well-known Gauss identity.
For example, in the FFT, there is no need to compute complex functions or trigonometric functions.
This may lead to further improvement in computation time over the FFT, already providing orders of magnitude improvement in computation time over FT with mathematical real-numbers. Both the FT and the FFT are revealed to be equivalent -- see .
I detail this in  for comments. Maybe one can build a faster FFT (an FFFT)?
The answer may also support further advances into quantum computing.
Preprint FT ~ FFT
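For orientation only: the general idea of a Fourier-type transform computed with real arithmetic alone can be illustrated with the discrete Hartley transform, a standard real-valued relative of the DFT. This is a generic textbook sketch, not the construction from the preprint above:

```python
import math, cmath

def dht(x):
    # discrete Hartley transform: only real arithmetic (cos + sin kernel)
    N = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * k * n / N)
                        + math.sin(2 * math.pi * k * n / N))
                for n in range(N)) for k in range(N)]

def dft(x):
    # naive complex DFT, used here only for comparison
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

x = [1.0, 2.0, 0.5, -1.0, 3.0, 0.0]
H = dht(x)
X = dft(x)
# identity H[k] = Re(X[k]) - Im(X[k]): for real input, the real-valued
# transform carries the same information as the complex one
print(all(abs(H[k] - (X[k].real - X[k].imag)) < 1e-9 for k in range(len(x))))
```

Like the FFT, the Hartley transform admits a fast O(N log N) algorithm, with no complex numbers at any step.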
We assume that in general probability and statistics belong to physics rather than mathematics.
The Normal/Gaussian Distribution:
can be derived from the universal laws of physics, for a given number n of randomly chosen data points, in less than five minutes.
Accuracy increases as n increases.
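The claim that accuracy increases with n can at least be illustrated by simulation: sample means of a non-normal source tighten around the true mean as n grows, approaching the Gaussian limit of the central limit theorem. The uniform source below is an arbitrary choice for the sketch:

```python
import random, statistics

random.seed(0)

def sample_mean(n):
    # mean of n draws from a (deliberately non-normal) uniform source
    return sum(random.random() for _ in range(n)) / n

# the spread of the sample-mean distribution shrinks like 1/sqrt(n)
spread_small = statistics.pstdev(sample_mean(5)   for _ in range(2000))
spread_large = statistics.pstdev(sample_mean(500) for _ in range(2000))
print(spread_small, spread_large)   # the second is markedly smaller
```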
Mathematics abstracted and idealized concrete mathematics, exemplified in Euclid’s The Elements. Religion around the same time or earlier, abstracted the concrete representation of deities. Are there similarities in the problem solving approaches?
I have a system of non-linear differential equations that describes the behaviour of certain cancer cells.
I am looking for help identifying the equilibrium points and eigenvalues of this model, in order to determine the type of bifurcation present.
Thanks in advance.
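Since the actual model is not given, here is the standard recipe sketched on a made-up two-equation toy system (an assumption, not the cancer model): find a point where all right-hand sides vanish, evaluate the Jacobian there, and read off its eigenvalues (for a 2x2 Jacobian, directly from the trace and determinant):

```python
import cmath

# toy system (placeholder): x' = x(1 - x - y), y' = y(x - 0.5)
def f(x, y):
    return x * (1 - x - y), y * (x - 0.5)

# one equilibrium of the toy system, found by hand: f(0.5, 0.5) = (0, 0)
xe, ye = 0.5, 0.5

# Jacobian entries, by analytic differentiation of the toy system
a11 = 1 - 2 * xe - ye   # d(f1)/dx
a12 = -xe               # d(f1)/dy
a21 = ye                # d(f2)/dx
a22 = xe - 0.5          # d(f2)/dy

# eigenvalues of a 2x2 matrix from its trace and determinant
tr = a11 + a22
det = a11 * a22 - a12 * a21
disc = cmath.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
print(lam1, lam2)   # a complex pair with negative real part: a stable spiral
```

For five equations, the same steps apply with a 5x5 Jacobian; the eigenvalues then come from a numerical eigensolver rather than the quadratic formula, and tracking how they cross the imaginary axis as a parameter varies identifies the bifurcation type.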
Mathematically, it is posited that a cosmic or local black-hole singularity must someday become of infinite density and zero size. But this is unimaginable. If infinite-density stuff could exist, it should already have existed.
Hence, in my opinion, these kinds of mathematical necessities are the limiting cases of physics. IS THIS NOT THE STARTING POINT FOR DETERMINING WHERE MATHEMATICS AND PHYSICAL SCIENCE MUST PART WAYS?
In my current work on the theory of hyperbolic functions, I, as a completely extraneous observer of the turbulent debates relating to the subtlest intricacies of the Special Theory of Relativity (SRT), have drawn attention to the fact that hyperbolic functions are most used not in constructing bridges or aqueduct arches, or in describing complex cases of X-ray diffraction, but in those sections of SRT that are related to the name of Professor Minkowski. Since my personal interest in SRT is essentially limited to the correct application of hyperbolic functions when describing moving physical realities, I would be grateful to the experts in the field of SRT for the most concise explanation of the deep essence of the theory of the space-time patterns of the reality surrounding me.
Naturally, my question in no way implies a translation into human language of the lecture of the Creator of the Theory, the honour of acquaintance with which belonged in 1908 to the academic/medical community of the city of Cologne and its surroundings. My level of development and my agreeableness have ensured that I not only managed to read independently the text underlying the concept of the « Minkowski four-dimensional continuum », but also to formulate my question as follows:
Which of the options I propose is the most concise (i.e. non-emotional-linguistic) explanation of the essence of Minkowski’s theory:
1. The consequence of any relative movement of massive physical objects is that we are all bound to suffer the same fate as the dinosaurs and mammoths, i.e. extinction.
2. Understanding/describing the spatial movements of physical objects described by a mathematical expression of the type a^2-b^2=const implies acquiring practical skills in constructing the second-order curves called «hyperbolas».
3. All of us, including those who are in a state of careless ignorance, are compelled to exist in curved space.
4. Everything in our lives is relative, and only the interval between physical events is constant.
5. Products of the form ct (or zct), where c is the speed of light and z is some dimensionless mathematical quantity/number, symbolize not a segment of three-dimensional space, but a time interval (or time?) t between uniquely defined events.
6. The electromagnetic radiation generated by a moving massive object always propagates in a direction orthogonal to the velocity vector of the moving object.
Of course, I will be grateful for any adjustments to my options, or expert’s own formulations that have either eluded my attention or whose substance is far beyond my level of mathematical or general development.
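Option 4 above (the constancy of the interval) can be checked mechanically with the hyperbolic functions themselves: a Lorentz boost of rapidity φ mixes ct and x through cosh and sinh, and the quantity (ct)² − x² survives unchanged, precisely because cosh²φ − sinh²φ = 1. A minimal sketch with arbitrary sample values:

```python
import math

def boost(ct, x, phi):
    # Lorentz boost written with hyperbolic functions (rapidity phi)
    ct2 = ct * math.cosh(phi) - x * math.sinh(phi)
    x2  = x * math.cosh(phi) - ct * math.sinh(phi)
    return ct2, x2

ct, x = 5.0, 3.0                 # arbitrary event coordinates
interval = ct**2 - x**2          # the a^2 - b^2 = const of option 2
ct2, x2 = boost(ct, x, 0.8)      # an arbitrary rapidity
print(interval, ct2**2 - x2**2)  # the same value in both frames
```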
Our answer is YES. This question captures the reason for change: to help us improve. We, and mathematics, need to consider that reality is quantum [1-2], ontologically.
This affects both the microscopic (e.g., atoms) and the macroscopic (e.g., collective effects, like superconductivity, waves, and lasers).
Reality is thus not continuous, incremental, or happenstance.
That is why everything blocks, goes against, a change -- until it occurs, suddenly, taking everyone to a new and better level. This is History. It is not a surprise ... We are in a long evolution ...
As a consequence, tri-state logic, e.g., does not have to be used in hardware, just in design. Intel Corporation can realize this and become more competitive. This is due to many factors, including 1^n = 1 and 0^n = 0, favoring Boolean sets in calculations.
This question is now CLOSED. Focusing on the discrete Weyl-Heisenberg group, as motivated by SN, this question has been expanded in a new question, where it was answered with YES in 12+ areas:
If anyone knows of a conference on mathematics education to be held in Europe between April 2023 and March 2024, especially on mathematics education for elementary and junior high school students, please let me know.
As for the contents, it is even better if they involve mathematics textbooks, STEAM education, mathematics classes, and computers.
Right now, in 2022, we can read with perfect understanding mathematical articles and books
written a century ago. It is indeed remarkable how the way we do mathematics has stabilised.
The difference between the mathematics of 1922 and 2022 is small compared to that between the mathematics of 1922 and 1822.
Looking beyond classical ZFC-based mathematics, a tremendous amount of effort has been put
into formalising all areas of mathematics within the framework of program-language implementations (for instance Coq, Agda) of the univalent extension of dependent type theory (homotopy type theory).
But Coq and Agda are complex programs which depend on other programs (OCaml and Haskell) and frameworks (for instance operating systems and C libraries) to function. In the future if we have new CPU architectures then
Coq and Agda would have to be compiled again. OCaml and Haskell would have to be compiled again.
Both software and operating systems are rapidly changing and have always been so. What is here today is deprecated tomorrow.
My question is: what guarantee do we have that the huge libraries of the current formal mathematics projects in Agda, Coq or other languages will still be relevant or even "runnable" (for instance type-checkable) without having to resort to emulators and computer archaeology 10, 20, 50 or 100 years from now ?
10 years from now will Agda be backwards compatible enough to still recognise
current Agda files ?
Have there been any organised efforts to guarantee permanent backward compatibility for all future versions of Agda and Coq ? Or OCaml and Haskell ?
Perhaps the formal mathematics project should be carried out within a meta-programming language: a simpler, more abstract framework (with a uniform syntax), comprehensible at once to logicians, mathematicians, and programmers, which can be converted automatically into the latest version of Agda or Coq?
When I try to remotely access the Scopus database by logging in with my institution ID, it keeps bringing me back to the Scopus preview. I tried clearing the cache, reinstalling the browser, using another internet connection, etc., but none of it works. As you can see in the image, it keeps showing the Scopus preview.
How do I write formally, within the context of mathematics, the following: given two series S1 and S2, whose difference comes from a proved identity that is true, and the result of this subtraction is a known finite (real) number, the two series S1 and S2 are necessarily convergent, because the difference could not be divergent, as that would contradict the convergence of the result? I need that statement in a pure mathematical form (I am an engineer).
"Given S1 - S2 = c, if c is a finite real number, and the expression S1 - S2 = c comes from a valid deduction, then S1 and S2 are both necessarily convergent."
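One hedged formalization is sketched below. Note that, strictly, convergence of the series of differences alone does not force each series to converge (e.g. taking S1 = S2 = Σ 1/n gives S1 − S2 = 0 with both divergent), so the usual statement adds the hypothesis that at least one of the two converges:

```latex
Let $S_1=\sum_{n\ge 1} a_n$ and $S_2=\sum_{n\ge 1} b_n$, and suppose that
$\sum_{n\ge 1} (a_n-b_n)=c$ with $c\in\mathbb{R}$, i.e.\ the series of
differences converges. If, in addition, $S_2$ converges, then $S_1$
converges and $S_1=c+S_2$, since
$\sum_{n\ge 1} a_n=\sum_{n\ge 1}\bigl[(a_n-b_n)+b_n\bigr]$
is a sum of two convergent series.
```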
I'm trying to write a Java program that will solve a system of ordinary differential equations using the fourth-order Runge-Kutta (RK4) technique. I need to solve a system of five equations, which are not linear.
I also need to determine all of the equilibrium solutions of this system of differential equations.
Can someone help me? Thank you in advance.
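A generic RK4 step for a system can be sketched as below (in Python for brevity; it ports line by line to Java, with `y` as a `double[]`). The two-equation harmonic oscillator is an assumed demo system; replace `f` with the five nonlinear right-hand sides:

```python
def rk4_step(f, t, y, h):
    # one classical fourth-order Runge-Kutta step for y' = f(t, y),
    # where y is a list, so the system can have any size
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h   * ki for yi, ki in zip(y, k3)])
    return [yi + h/6 * (a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# demo system (an assumption): y0' = y1, y1' = -y0, a harmonic oscillator
def f(t, y):
    return [y[1], -y[0]]

t, y, h = 0.0, [1.0, 0.0], 0.01
while t < 6.283185307179586:      # integrate over one period, 2*pi
    y = rk4_step(f, t, y, h)
    t += h
print(y)   # close to the initial state [1, 0] after a full period
```

Equilibrium solutions are a separate task: solve f(t, y) = 0 (a nonlinear algebraic system), typically with a multivariate Newton iteration started from many initial guesses to collect all roots.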
What are the implications in mathematics if the irrationality measure bound of π is proved to be less than or equal to 2.5?
How can the number π be understood within this context?
I am working on meta-heuristic optimization algorithms. I would like to perform image segmentation using Otsu's method and my algorithm. I do not understand how to use a meta-heuristic in image segmentation. Please help me in this regard; I am from a maths background. If anybody has MATLAB code for this, please share it with me. I will be grateful to you.
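For orientation: Otsu's method simply maximizes the between-class variance over candidate thresholds. In the exhaustive version below that maximization is a plain loop; a meta-heuristic (PSO, GA, or your own algorithm) replaces the loop when the search space becomes large, e.g. in multilevel thresholding with several thresholds at once. A minimal single-threshold sketch in Python (not MATLAB) on a made-up pixel list:

```python
def otsu_threshold(pixels):
    # maximize the between-class variance w0*w1*(mu0 - mu1)^2 over thresholds
    best_t, best_var = None, -1.0
    n = len(pixels)
    for t in sorted(set(pixels))[:-1]:       # candidate thresholds
        low  = [p for p in pixels if p <= t]
        high = [p for p in pixels if p > t]
        w0, w1 = len(low) / n, len(high) / n
        mu0 = sum(low) / len(low)
        mu1 = sum(high) / len(high)
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

pixels = [0, 0, 1, 1, 2, 8, 9, 9, 10, 10]    # toy "image" intensity data
t = otsu_threshold(pixels)
print(t)   # separates the dark cluster {0, 1, 2} from the bright one
```

A meta-heuristic would treat the threshold vector as the candidate solution and the between-class variance as the fitness function to maximize; nothing else in the formulation changes.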
Irrational numbers are uncomputable, with probability one. In that numerical sense, they do not belong to nature. Animals cannot calculate them, nor can humans or machines.
But algebra can deal with irrational numbers. Algebra deals with unknowns and indeterminates, exactly.
Would this mean that a simple bee or fish can do algebra? No; it means, given the simple expression of their brains, that a higher entity is able to command them to do algebra. The same holds for humans and machines. We must also be able to do quantum computing, and beyond, in that way.
Thus, no one (animals, humans, extraterrestrials in the NASA search, and machines) is limited by their expressions, and all obey a higher entity, commanding through a network from the top down -- which entity we call God, and Jesus called Father.
This means that God holds all the dice. That also means that we can learn by mimicking nature. Even a wasp can teach us the medicinal properties of a passion fruit flower to lower aggression. Animals, no surprise, can self-medicate, knowing no biology or chemistry.
There is, then, no "personal" sense of algebra. It is just a combination of arithmetic operations. There is no "algebra in my sense" -- there is only one sense, the one mathematical sense that has made sense physically, for ages. I do not feel free to change it, and did not.
But we can reveal new facets of it. In that, we have already revealed several exact algebraic expressions for irrational numbers. Of course, the task is not even enumerable, but it is worth compiling, for the weary traveler. Any suggestions are welcome.
I'm interested in the intersection of mathematics and social sciences, and I'm looking for expert opinions on ethical content in mathematical history.
I am looking for mathematical formulas that calculate the rigid-body movement of an element based on the nodal displacements. Can anyone give a brief explanation and recommend some materials to read? Thanks a lot.
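One standard small-displacement recipe in 2D: the rigid translation is the mean nodal displacement, and the rigid rotation follows from a least-squares fit of u_i ≈ t + θ k×(p_i − c), giving θ = Σ (p_i − c)×(u_i − t) / Σ |p_i − c|². A sketch where the element geometry and the imposed motion are made up, so the recovery can be checked:

```python
# Recover rigid-body translation and (small) rotation from 2D nodal
# displacements. The quad element's node coordinates are an assumption.
nodes = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]

# synthesize displacements from a known rigid motion, to test the recovery
t_true, theta_true = (0.3, -0.1), 0.02
cx = sum(x for x, _ in nodes) / len(nodes)     # element centroid
cy = sum(y for _, y in nodes) / len(nodes)
disp = [(t_true[0] - theta_true * (y - cy),
         t_true[1] + theta_true * (x - cx)) for x, y in nodes]

# translation = mean nodal displacement
tx = sum(u for u, _ in disp) / len(disp)
ty = sum(v for _, v in disp) / len(disp)

# rotation = sum of cross products (p_i - c) x (u_i - t), normalized
num = sum((x - cx) * (v - ty) - (y - cy) * (u - tx)
          for (x, y), (u, v) in zip(nodes, disp))
den = sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in nodes)
theta = num / den
print((tx, ty), theta)   # recovers (0.3, -0.1) and 0.02
```

Subtracting this rigid part from the nodal displacements leaves the deforming part of the motion; for finite rotations or 3D, the same idea generalizes to a best-fit rotation matrix (a small orthogonal Procrustes problem).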
How long does it take for a journal indexed in the Emerging Sources Citation Index to get an Impact Factor? What is the future of journals indexed in the Emerging Sources Citation Index?
Lockdowns due to the COVID pandemic in the last three years (2020-22) have played a significant role in the spread of online classrooms using applications like Zoom, MS Teams, Webex, and Google Meet. While a substantial number of students were happy to complete their semester classes on time without being hampered by the lockdowns, thanks to the online classrooms, there is also a notable number of students and parents who complained that online classrooms have drastically harmed students' academic performance.
Overall, I would like to leave this as an open-ended question. Dear researchers, what do you think of online classrooms? Are they an advantage for students, or a disadvantage?
My Awesomest Network, I am starting my Ph.D. studies and I have some questions and doubts concerning it. Could I write them down here, please? The first of them is: how can I join disciplines such as sociology, management, economics, mathematics, informatics, and other similar fields, to make a complex, holistic, interdisciplinary analysis and a coherent study of the indicated fields? I personally think that linking aspects of artificial intelligence and computational social sciences would be an interesting area of consideration. What are your opinions?
Thank You very much for all in advance
I am a postgraduate student, presently writing my thesis in the department of curriculum and instructional design.
What is infinity? Does it have any value? Must there be an end, or is it just that our thoughts cannot imagine that there is no end to infinity? Aren't all things part of infinity? We too? Is God an infinity that cannot be imagined but felt? Is infinity an energy that binds us and all things (reality and thought) together? Is there a physical explanation for infinity? Is the limit (infinity - 1) or (-infinity + 1) legitimate, or just a need to calculate it mathematically?
We see many theories in physics, mathematics, etc. becoming extremely axiomatic and rigorous. But are there comparisons between mathematics, physics, and philosophy? Can the primitive notions (categories) and axioms of mathematics, physics, and philosophy converge? Can they possess a common set of primitive notions, from which the respective primitive notions and axioms of mathematics, physics, and philosophy may be derived?
I suppose so; it is true that physics is a special case of mathematics.
In physics, the existence and the uniqueness of the solution are ensured, whereas in mathematics, which is the general case of physics, the existence and the uniqueness of the solution come before everything else.
Assume we have a program with different instructions. Due to some limitations in the field, it is not possible to test all the instructions. Instead, assume we have tested 4 instructions and calculated their rank for a particular problem.
the rank of Instruction 1 = 0.52
the rank of Instruction 2 = 0.23
the rank of Instruction 3 = 0.41
the rank of Instruction 4 = 0.19
Then we calculated the similarity between the tested instructions using cosine similarity (after converting the instructions from text form to vectors- machine learning instruction embedding).
Question: is it possible to create a mathematical formula, considering the values of the ranks and the similarity between instructions, so that, given an un-tested instruction, it is possible to calculate, estimate, or predict the rank of the new un-tested instruction based on its similarity with a tested instruction?
For example, we measure the similarity between instruction 5 and instruction 1. Is it possible to calculate the rank of instruction 5 based on its similarity with instruction 1? Is it possible to create a model or mathematical formula? If yes, then how?
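One simple candidate formula, without claiming it is *the* right model: treat the tested instructions as anchors and predict a similarity-weighted average of their ranks (a nearest-neighbour regression on the embedding). The instruction vectors below are made up; the ranks are the four from the question:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def predict_rank(new_vec, tested):
    # rank(new) = sum(sim_i * rank_i) / sum(sim_i)
    sims = [(cosine(new_vec, v), r) for v, r in tested]
    total = sum(s for s, _ in sims)
    return sum(s * r for s, r in sims) / total

# hypothetical embeddings paired with the ranks from the question
tested = [([1.0, 0.2, 0.0], 0.52),
          ([0.0, 1.0, 0.1], 0.23),
          ([0.5, 0.5, 0.5], 0.41),
          ([0.1, 0.0, 1.0], 0.19)]

inst5 = [0.9, 0.3, 0.1]            # hypothetical un-tested instruction 5
pred = predict_rank(inst5, tested)
print(pred)                        # lands between the known ranks
```

Whether this estimate is any good is an empirical question: it assumes rank varies smoothly with the embedding, which should be validated by holding out one tested instruction and checking the prediction error.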
I need a good weather probability calculator. I would like to calculate the probability of, e.g., a temperature 10 degrees Celsius above the average on a given day. Does anybody have good research/formulas?
Which distribution is assumed in the probability calculation? The normal one?
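Under the normal assumption the question raises, the exceedance probability has a closed form via the error function. The mean and standard deviation below are placeholder values; in practice they come from the station's climatology for that calendar day:

```python
import math

def prob_above(x, mu, sigma):
    # P(T > x) for T ~ Normal(mu, sigma), via the Gaussian CDF:
    # P(T > x) = 0.5 * (1 - erf((x - mu) / (sigma * sqrt(2))))
    return 0.5 * (1 - math.erf((x - mu) / (sigma * math.sqrt(2))))

mu, sigma = 15.0, 5.0                  # assumed daily mean and spread, Celsius
p = prob_above(mu + 10.0, mu, sigma)   # 10 degrees above the average
print(p)                               # about 0.023 for this 2-sigma excursion
```

Daily temperatures are often close to normal but can be skewed; a quick histogram or normality check of the historical data for that day of year tells you whether the Gaussian assumption is adequate.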
Hi all, I want to solve a system of simultaneous equations in which some equations are cubic and some are quadratic. How does one solve such a system? The solution should consist of combinations of all solutions, i.e., the positive as well as the negative ones.
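One workable route when an equation is easily solved for one variable: substitute it into the other to get a single polynomial, then bracket every real root (positive and negative) with a sign-change scan and refine by bisection. Demo on an assumed system x² + y² = 1, y = x³ (not your actual equations):

```python
def g(x):
    # after substituting y = x^3 into x^2 + y^2 = 1: x^6 + x^2 - 1 = 0
    return x**6 + x**2 - 1

def bisect(f, a, b, tol=1e-12):
    # refine a bracketed root [a, b] with f(a), f(b) of opposite signs
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

# scan a grid for sign changes, catching negative AND positive roots
roots = []
step = 0.05
x = -2.0
while x < 2.0:
    if g(x) * g(x + step) < 0:
        roots.append(bisect(g, x, x + step))
    x += step

solutions = [(r, r**3) for r in roots]   # back-substitute y = x^3
print(solutions)   # two real solutions, one with negative x
```

When no variable can be eliminated by hand, the general tools are polynomial-system methods (resultants or Gröbner bases, e.g. via a computer algebra system), or multivariate Newton iterations from many starting points.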
Physics is a game of looking at physical phenomena, analyzing how they change with a hypothetical yet mathematical arrow of time in 3D space, namely by plotting those phenomena on a mathematical grid model (typically Cartesian), assuming that physical phenomena can be plotted as points, and then arriving at a theory describing the phenomenon or phenomena under examination. The success of those physical models (mathematical descriptions of physical phenomena) lies in predicting new phenomena: taking that mathematics and predicting how the math of one phenomenon can link with the math of another, without any prior research experience of that connection, based on the presumption that the initial mathematical model of the phenomena holds.
Everyone in physics, professional and amateur, appears to be doing this.
Does anyone see a problem with that process, and if so what problems do you see?
Is the dimension of space, such as a point in space, a physical thing? Is the dimension of time, such as a moment in time, a physical thing? Can a moment in time and a point of space exist as dimensions in the absence of what is perceived as being physical?
I am thinking of a vector as a point in multidimensional space. The mean would be the location of the vector point with the minimum squared distances from all of the other vector points in the sample. Similarly, the median would be the location of the vector point with the minimum absolute distance from all of the other vector points.
Conventional thinking would have me calculate the mean vector as the vector formed from the arithmetic mean of all the vector elements. However, there is a problem with this method: if we are working with a set of unit vectors, the result would not be a unit vector. So conventional thinking would have me normalize the result into a unit vector. But how would that method apply to other, non-unit, vectors? Should we divide by the arithmetic mean of the vector magnitudes? When calculating the median, should we divide by the median of the vector magnitudes?
Do these methods produce a result that is mathematically correct? If not, what is the correct method?
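Both definitions in the first paragraph can be computed directly: the minimizer of summed squared distances is exactly the componentwise mean, while the minimizer of summed absolute distances (the geometric median) has no closed form and is usually found with Weiszfeld's iteration. A sketch on made-up 2D points:

```python
import math

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]

def mean_vector(pts):
    # componentwise average: minimizes the sum of squared distances
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(len(pts[0])))

def geometric_median(pts, iters=200):
    # Weiszfeld's algorithm: iteratively re-weight by inverse distance;
    # minimizes the sum of absolute (Euclidean) distances
    y = mean_vector(pts)
    for _ in range(iters):
        d = [max(math.dist(y, p), 1e-12) for p in pts]   # guard divide-by-0
        w = sum(1 / di for di in d)
        y = tuple(sum(p[i] / di for p, di in zip(pts, d)) / w
                  for i in range(len(y)))
    return y

def summed_dist(y, pts):
    return sum(math.dist(y, p) for p in pts)

m = mean_vector(points)
g = geometric_median(points)
# the geometric median never does worse on summed absolute distance
print(summed_dist(g, points) <= summed_dist(m, points) + 1e-9)
```

For unit vectors specifically, normalizing the mean gives the standard directional-statistics "mean direction"; it is the answer to a constrained problem (minimize over the unit sphere), which is why it differs from the unconstrained componentwise mean.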
Any idea why the solution of the attached equation is always zero at r=0? It seems simple at first look; however, when you start solving it, you will see a black-hole-like sink which makes the solution zero at r=0 (it should not be). I used the separation-of-variables method; I would be happy if you suggested another method or discussed the reasons.
I also attached the graph of the solution, showing the black hole-like sink.
I have no mathematical experience and no statistician to help me.
Is it possible to decompose a conditional probability of three or more elements (i.e., events) into conditional probabilities of only two elements, or the marginal probability of one element? Knowing this decomposition would help to solve higher-order Markov chains mathematically. I also know that the decomposition works if we add an assumption of conditional independence.
To make it concrete here is a negative example:
Notice that the RHS still contains a conditional probability with three elements P(a,b│c).
Assuming conditional independence given c, we have P(a,b│c)=P(a│c)∙P(b│c). Thus, the conditional probability decomposition becomes
My question is whether this type of decomposition of a conditional probability into one or two elements is possible without making an assumption. If it is a really unsolvable problem, then at least we know that the assumption of conditional independence is a must.
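The role of the assumption can be checked numerically: build a toy joint distribution that *is* conditionally independent given c, and verify that P(a,b|c), computed from the joint with no assumption, factors into P(a|c)P(b|c). Perturbing the joint breaks the identity, which is exactly why the assumption cannot be dropped in general. The probability tables below are arbitrary made-up numbers:

```python
from itertools import product

# P(c), P(a|c), P(b|c) for binary a, b, c -- arbitrary made-up tables
Pc = {0: 0.4, 1: 0.6}
Pa_c = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}   # Pa_c[c][a]
Pb_c = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.9, 1: 0.1}}   # Pb_c[c][b]

# joint built under conditional independence of a and b given c
joint = {(a, b, c): Pc[c] * Pa_c[c][a] * Pb_c[c][b]
         for a, b, c in product((0, 1), repeat=3)}

def cond_ab_given_c(a, b, c):
    # P(a,b|c) computed straight from the joint, no assumption used here
    pc = sum(joint[(x, y, c)] for x in (0, 1) for y in (0, 1))
    return joint[(a, b, c)] / pc

ok = all(abs(cond_ab_given_c(a, b, c) - Pa_c[c][a] * Pb_c[c][b]) < 1e-12
         for a, b, c in product((0, 1), repeat=3))
print(ok)   # True: P(a,b|c) = P(a|c) P(b|c) holds under the assumption
```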
During the lecture, the lecturer mentioned the properties of frequentist estimators, as follows:
Unbiasedness is only one of the frequentist properties — arguably, the most compelling from a frequentist perspective and possibly one of the easiest to verify empirically (and, often, analytically).
There are however many others, including:
1. Bias-variance trade-off: we would consider optimal an estimator with little (or no) bias, but we would also value one with small variance (i.e., more precision in the estimate). So, when choosing between two estimators, we may prefer one with very little bias and small variance over one that is unbiased but with large variance;
2. Consistency: we would like an estimator to become more and more precise and less and less biased as we collect more data (technically, when n → ∞).
3. Efficiency: as the sample size incrases indefinitely (n → ∞), we expect an estimator to become increasingly precise (i.e. its variance to reduce to 0, in the limit).
Why do frequentist estimators have these kinds of properties, and can we prove them? I think these properties can be applied to many other statistical approaches.
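Two of the listed properties can be illustrated by simulation. A minimal sketch (the estimators, distribution, and sample sizes are my own illustrative choices, not from the lecture): the maximum-likelihood variance estimator divides by n and is biased but consistent, while the sample variance divides by n-1 and is unbiased:

```python
import random
import statistics

# Illustrative simulation: estimating the variance of draws from N(0, 2^2),
# so the true variance is 4. The MLE divides by n (biased but consistent);
# the sample variance divides by n - 1 (unbiased).

random.seed(42)
TRUE_VAR = 4.0

def mle_var(xs):
    m = statistics.fmean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)   # E = (n-1)/n * sigma^2

def unbiased_var(xs):
    return statistics.variance(xs)                    # divides by n - 1

def mean_estimate(estimator, n, reps=20000):
    """Average the estimator over many repeated samples of size n."""
    return statistics.fmean(
        estimator([random.gauss(0, 2) for _ in range(n)]) for _ in range(reps))

m5 = mean_estimate(mle_var, 5)                 # roughly (4/5) * 4 = 3.2: biased
u5 = mean_estimate(unbiased_var, 5)            # roughly 4.0: unbiased
m200 = mean_estimate(mle_var, 200, reps=2000)  # roughly 4.0: bias vanishes
print(m5, u5, m200)
```

The proofs behind these simulations are standard frequentist results: unbiasedness of the n-1 estimator follows from E[Σ(x-x̄)²] = (n-1)σ², and consistency of the MLE from the law of large numbers.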
Could you provide the detailed mathematical calculation for the measurement of uranium, thorium and potassium together with their daughter progenies?
For correlation analysis between body composition variables and blood hormone levels,
I have data from two different blood analysis methods but the same units; ECLIA & CLIA.
I wonder whether there is any statistical or mathematical error if I run the correlation analysis using the data together.
If they cannot be used together directly, is there any suggestion for combining these data for statistical analysis?
Thank you very much.
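Before pooling the two methods' data, a common preliminary step is a method-agreement analysis rather than correlation alone. A minimal sketch (all measurement values are invented; ECLIA/CLIA here are just labels for two noisy readings of the same quantity) computing the between-method correlation plus the Bland-Altman bias and limits of agreement:

```python
import numpy as np

# Invented example data: the same hormone measured by two assay methods.
rng = np.random.default_rng(1)
truth = rng.uniform(5.0, 25.0, size=40)            # underlying hormone level
eclia = truth + rng.normal(0.0, 0.8, size=40)      # method 1: noise only
clia = 1.05 * truth + 0.3 + rng.normal(0.0, 0.8, size=40)  # method 2: slight bias

# Correlation between the two methods.
r = np.corrcoef(eclia, clia)[0, 1]

# Bland-Altman agreement: mean difference (systematic bias) and
# 95% limits of agreement.
diff = clia - eclia
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
print(f"r = {r:.3f}, bias = {bias:.2f}, LoA = ({loa[0]:.2f}, {loa[1]:.2f})")
```

High correlation alone does not establish agreement: in this synthetic example r is very high even though one method systematically reads higher, which only the Bland-Altman bias reveals.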
Famous mathematicians have so far failed to prove the Riemann Hypothesis, even though the Clay Mathematics Institute offers a prize of one million dollars for a proof.
A proof of the Riemann Hypothesis would allow us to understand better the distribution of prime numbers among all numbers, and would also allow its official application in quantum physics. However, many famous scientists still refuse the use of the Riemann Hypothesis in quantum physics, as I read in an article in Quanta Magazine.
Why is this hypothesis so difficult to prove? Is the zeta extension really useful for physics, and especially for quantum physics? Are quantum physicists using the wrong mathematical tools when applying the Riemann Hypothesis? Is the Riemann Hypothesis announcing "the schism" between abstract mathematics and physics? Can anyone propose a disproof of the Riemann Hypothesis based on physical facts?
Here is the link to the article by Natalie Wolchover:
The zeros of the Riemann zeta function could also be an artifact of rearrangements used when trying to compute a value of the extension, since the Lévy–Steinitz phenomenon can occur once a and b are fixed.
Suppositions or axioms should be stated before using the extension, depending on the scientific field where it is applied, and we should be sure that all possible methods (rearrangements of series terms) give the same value for a known s = a + ib.
You should also know that the Lévy–Steinitz theorem was formulated in 1905 and 1913, whereas the Riemann Hypothesis was formulated in 1859. This means that Riemann, who died in 1866, and even the famous Euler, never knew the Lévy–Steinitz theorem.
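The one-dimensional case of the rearrangement phenomenon discussed here (the Riemann rearrangement theorem, which the Lévy–Steinitz theorem generalizes to vector series) is easy to demonstrate numerically. A minimal sketch on the alternating harmonic series 1 - 1/2 + 1/3 - ... = ln 2; the greedy strategy and the target value 1.5 are arbitrary illustrative choices:

```python
from math import log

# Greedy rearrangement of the alternating harmonic series: add positive
# terms 1, 1/3, 1/5, ... while the partial sum is below the target, and
# negative terms -1/2, -1/4, ... while it is above. A conditionally
# convergent series can be rearranged to sum to any chosen target.

def rearranged_sum(target, n_terms=100_000):
    pos, neg = 1, 2    # next odd (positive) and even (negative) denominators
    s = 0.0
    for _ in range(n_terms):
        if s <= target:
            s += 1.0 / pos
            pos += 2
        else:
            s -= 1.0 / neg
            neg += 2
    return s

print(rearranged_sum(log(2)))  # close to ln 2, about 0.693
print(rearranged_sum(1.5))     # same terms in a different order: close to 1.5
```

This only illustrates why rearrangements must be handled carefully for conditionally convergent series; it says nothing by itself about the zeta function, whose analytic continuation is defined independently of any particular rearrangement.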
Everybody is eager to see the next winner of the Millennium Prize. Please share all your incomplete work on the Millennium Prize Problems of the Clay Mathematics Institute in order to collaborate on the solutions.
I am actually using a different nabla operator, which I demonstrated mathematically in my published work: "A thesis about Newtonian mechanics rotations and about differential operators".
This demonstrated differential tool makes it possible to deal differently with the Millennium Problem on the Navier-Stokes equations.
I also suggest that P = NP if the problem can be translated to a differential equation that has a solution. If your lucky friend finds and gives you an easy solution, then that solution leads directly to the general solution of the differential equation.
I will be waiting for your collaborations.
Say I have a satellite image of known dimensions. I also know the size of each pixel. The coordinates of some pixels are given to me, but not all. How can I calculate the coordinates for each pixel, using the known coordinates?
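Under simplifying assumptions (north-up image with no rotation, square pixels of known size, and at least one pixel with known coordinates), the mapping from pixel indices to map coordinates is affine. A minimal sketch; all numeric reference values are invented for illustration:

```python
# Assumptions (stated, not from the question): north-up image, no rotation,
# square pixels of known size, and one reference pixel with known map
# coordinates. All numeric values below are hypothetical.

PIXEL_SIZE = 30.0                       # metres per pixel (hypothetical)
REF_ROW, REF_COL = 100, 250             # a pixel whose coordinates are known
REF_E, REF_N = 500_000.0, 4_200_000.0   # its easting/northing (hypothetical)

def pixel_to_coords(row, col):
    """Columns increase eastward; rows increase southward (common raster order)."""
    easting = REF_E + (col - REF_COL) * PIXEL_SIZE
    northing = REF_N - (row - REF_ROW) * PIXEL_SIZE
    return easting, northing

print(pixel_to_coords(100, 250))  # the reference pixel: (500000.0, 4200000.0)
print(pixel_to_coords(101, 251))  # one pixel right/down: (500030.0, 4199970.0)
```

If the image may be rotated or the pixel size is uncertain, a common alternative is to fit the six parameters of a general affine transform by least squares from three or more pixels with known coordinates.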
Hello Friends and Colleagues,
Can anyone suggest a mathematical book that would help me build my own mathematical equations and functions? I want to convert real-life problems (natural sciences) into mathematical formulations.
Note that I have basic knowledge of mathematics.
Thanks in advance,
Applying mathematical knowledge in research models: this question has been on my mind for a long time. Can advanced mathematics and applied mathematics solve all the problems in modelling research? In particular, for the formula derivations in the theoretical part of a model, can the analytical conclusions be obtained through repeated derivation or other methods? You have probably also read some mathematics-related publications yourself, and you have to admire the mystery of mathematics.
I would like to know how to locate mathematically a damage on a blade.
Usually a frequency analysis of the blade and a comparison with a similar healthy one help to detect a defect or damage on a blade. However, how could someone locate that damage?
Thank you for your time.
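One localization idea from the vibration-based damage-detection literature is to compare mode shapes, not just natural frequencies: a local stiffness loss perturbs the mode-shape curvature mostly near the damaged section. A hedged sketch on synthetic data (the mode shape, damage position, and perturbation size are all invented for illustration):

```python
import numpy as np

# Synthetic illustration of damage localization via mode-shape curvature:
# the blade's first bending mode is sketched as sin(pi x) on a normalized
# span, and "damage" at x = 0.6 is simulated as a small localized kink.

x = np.linspace(0.0, 1.0, 201)       # normalized position along the blade
healthy = np.sin(np.pi * x)          # healthy mode shape (illustrative)
damage_loc = 0.6
bump = np.maximum(0.0, 1.0 - np.abs(x - damage_loc) / 0.05)
damaged = healthy + 0.002 * bump     # damaged mode shape with a local kink

def curvature(y, dx):
    """Second spatial derivative by repeated central differences."""
    return np.gradient(np.gradient(y, dx), dx)

dx = x[1] - x[0]
delta = np.abs(curvature(damaged, dx) - curvature(healthy, dx))
estimated = x[np.argmax(delta)]      # peak of the curvature difference
print("estimated damage location:", estimated)
```

In practice the mode shapes would come from measured or model-updated data at discrete sensor locations, and noise makes the curvature estimate delicate, but the principle (the curvature difference peaks near the damage) is the same.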
I'm struggling to understand the method followed in the following analysis. Can someone please explain how the author obtained the values of Δ_1 and K_1 that justify the analysis?
I have tried to isolate "Δ" and "K" by setting Equation (B8) equal to zero, but I have failed to obtain similar conditions.
P.S.: I'm new to mathematical modelling, so I really need to understand what's going on here. Thanks.
When solving mathematical equations or systems of equations, there are things to consider, such as the universe of discourse or the domains.
Some equations may have no solutions or may be impossible to solve at all. If one or more solutions exist for an equation or system of equations, there are regions or intervals containing the solution(s). Such considerations may be important when applying numerical methods.
What are basins of attraction?
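Regarding the last question: the basin of attraction of an attractor (a fixed point, root, or limit set) is the set of initial conditions from which the iteration converges to that attractor. A minimal sketch using Newton's method for z^3 = 1, whose three basins famously have fractal boundaries; the sample starting points are arbitrary:

```python
import numpy as np

# Newton's method for z^3 = 1 has three roots in the complex plane; the
# basin of attraction of each root is the set of starting points from
# which the iteration converges to that root.

roots = np.array([1.0 + 0.0j, -0.5 + 0.8660254j, -0.5 - 0.8660254j])

def newton_basin(z0, max_iter=50, tol=1e-8):
    """Return the index of the root Newton's iteration reaches from z0,
    or -1 if it does not converge within max_iter steps."""
    z = complex(z0)
    for _ in range(max_iter):
        if abs(z) < 1e-12:          # derivative 3z^2 vanishes at the origin
            return -1
        z = z - (z**3 - 1) / (3 * z**2)
        d = np.abs(roots - z)
        if d.min() < tol:
            return int(d.argmin())
    return -1

print(newton_basin(2.0))          # near the real root: basin of roots[0]
print(newton_basin(-1.0 + 1.0j))  # upper half-plane: basin of roots[1]
```

Coloring a grid of starting points by `newton_basin` produces the familiar Newton fractal; the fractal basin boundaries are exactly where a numerical method is sensitive to the initial guess, which is why identifying the region containing a solution matters.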
The physical constants (G, h, c, e, me, kB) can be considered fundamental only if the units they are measured in (kg, m, s, ...) are independent. However, there are anomalies occurring in certain combinations of these constants which suggest a mathematical (unit-number) relationship. By assigning a unit number θ (kg → 15, m → -13, s → -30, A → 3, K → 20) to each unit, we can define the relationship between the units.
The 2019 redefinition of the SI base units resulted in four physical constants being independently assigned exact values, which presumed the independence of their associated SI units; these anomalies, however, question this fundamental assumption. In every combination predicted by the model, they give an answer consistent with CODATA precision. Statistically, therefore, can these anomalies be dismissed as coincidence?
For convenience, the anomalies are listed on this wiki site (adapted from the article)
- Are these physical constant anomalies evidence of a mathematical relation between the SI units?
The diagram shows how the constants are related; they are solved with 2 fixed dimensionless constants (the fine-structure constant alpha, and omega) and 2 unit-dependent scalars.
Some general background to the physical constants.
I am aware of the facts that every totally bounded metric space is separable, and that a metric space is compact iff it is totally bounded and complete, but I wanted to know whether every totally bounded metric space is locally compact or not. If not, please give an example of a metric space that is totally bounded but not locally compact.
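On the final question, the answer is no, and a standard counterexample (worth checking against your own definitions) is the set of rationals in the unit interval:

```latex
% Claim: not every totally bounded metric space is locally compact.
% Counterexample: the rationals in the unit interval with the usual metric.
\[
  X = \mathbb{Q} \cap [0,1], \qquad d(x,y) = |x - y|.
\]
% X is totally bounded, as a subset of the totally bounded space [0,1].
% X is not locally compact at any point: a compact K \subseteq X is also
% compact, hence closed, in [0,1]; if K contained a set (a,b) \cap \mathbb{Q},
% then by density of the rationals and closedness K would contain all of
% [a,b], including irrationals, which is impossible. Hence compact subsets
% of X have empty interior in X, so no point of X has a compact neighborhood.
```

The same space also shows that total boundedness without completeness is compatible with failing local compactness, consistent with the compactness characterization you quote.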
Follow this question on the given link
I am trying to model a business scenario mathematically for my research paper, but I do not have the required skill set. What is a legitimate way to find and get help? Are there any online sources or paid services? Do I need to add the expert as a co-author? What types of solutions exist?
Dear Colleagues, a recent trend in Fractional Calculus is introducing more and more new fractional derivatives and integrals and considering classical equations and models with these operators. Thus, we have to think about and answer questions like “What are the fractional integrals and derivatives?”, “What are their decisive mathematical properties?”, “Which fractional operators make sense in applications, and why?”, etc. These and similar questions have remained mostly unanswered until now. To provide an independent platform for discussion of these trends in the current development of FC, the SI “Fractional Integrals and Derivatives: “True” versus “False”” (https://www.mdpi.com/journal/mathematics/special_issues/Fractional_Integrals_Derivatives2021) has been initiated. In this SI, some important papers have already been published. However, you are welcome to share your viewpoint with the scientific community. Contributions to this SI devoted both to new fractional integrals and derivatives and their justification, and those containing constructive criticism of these concepts, are welcome.
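As a concrete anchor for the discussion of what fractional derivatives are, here is a minimal numerical sketch of one classical definition, the Grünwald-Letnikov derivative, checked against the known Riemann-Liouville closed form D^α t = t^(1-α)/Γ(2-α) for f(t) = t with lower terminal 0; the order, step size, and evaluation point are arbitrary illustrative choices:

```python
from math import gamma

# Grunwald-Letnikov approximation of the order-alpha fractional derivative.
# For f(t) = t the Riemann-Liouville derivative has the closed form
# D^alpha t = t^(1 - alpha) / Gamma(2 - alpha), which we use as a check.

def gl_derivative(f, t, alpha, h=1e-3):
    n = int(round(t / h))
    coef, acc = 1.0, f(t)             # k = 0 term has coefficient 1
    for k in range(1, n + 1):
        coef *= (k - 1 - alpha) / k   # (-1)^k * binom(alpha, k), recursively
        acc += coef * f(t - k * h)
    return acc / h**alpha

alpha = 0.5
approx = gl_derivative(lambda t: t, 1.0, alpha)
exact = 1.0 / gamma(2.0 - alpha)      # = 2/sqrt(pi) at t = 1
print(approx, exact)                  # both approximately 1.128
```

The fact that the half-derivative of t is not a polynomial already illustrates one of the decisive properties under debate: fractional operators are nonlocal, depending on the whole history of f from the lower terminal to t.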