Mathematics - Science topic
Mathematics, Pure and Applied Math
Questions related to Mathematics
Why do prime numbers have such great importance in mathematics, relative to the rest of the numbers?
The vehicle routing problem is a classical application of Operations Research.
I need to implement the same for the electric vehicle routing problem with different constraints.
I want to understand the mathematics behind this. The journal articles available discuss different applications without saying much about the mathematics.
Any book, foundational research paper, or PhD/M.Tech thesis would be helpful.
Thanks in advance.
QM is the ultimate realist's utilization of the powerful differential equations, because the integer options and necessities of the solutions correspond to nature's quanta.
The same can be said for GR, whose differential manifolds, an advanced concept or branch of mathematics, have a realistic implementation in nature-compatible motional geodesics.
A century later, no new such feats have been achieved, making one wonder whether the limit of heuristic mathematical supplementation in powerful ways towards realist results in physics has been reached.
Why are numbers and shapes so exact? ‘One’, ‘two’, ‘point’, ‘line’, etc. are all exact. But irrational numbers are not so. The operations on these notions are also intended to be exact. If notions like ‘one’, ‘two’, ‘point’, ‘line’, etc. are defined to be so exact, then it is not by virtue of the exactness of these substantive notions, but instead, due to their being adjectival: ‘being a unity’, ‘being two unities’, ‘being a non-extended shape’, etc. A quality cannot be exact, but may be defined to be exact. It is in terms of the exactness attributed to these notions by definition that the adjectives ‘one’, ‘two’, ‘point’, ‘line’, etc. are exact. This is why the impossibility of fixing these (and other) substantive notions as exact escapes our attention. If in fact these are inexact, then there is justification for the inexactness of irrational numbers too. If numbers and shapes are in fact inexact, then not only irrational numbers but all mathematical structures should remain inexact except for their having been defined as exact. Thus, mathematical structures, in all their detail, are a species of qualities, namely, quantitative qualities.
a) Are there any hard and fast rules for review deadlines (yes or no?), and b) is there some obligation on the Editorial Board's side to give authors a first answer with the internal number of an article, if the article is submitted via email (yes or no?)?
Thanks for the input. https://clarivate.com/contact-us/
Applying mathematical knowledge in research models: This question has been in my mind for a long time. Can advanced mathematics and applied mathematics solve all the problems in modeling research? Especially for the formula derivation in the theoretical model part, can the analytical conclusion be obtained through multiple derivations or other methods? I have also read some mathematics-related publications, and one has to admire the mystery of mathematics.
Which areas in mathematics education are currently trending?
As it is not possible to show mathematical expressions here I am attaching link to the question.
Your expertise in determining and comprehending the boundaries of integration within the Delta function's tantalizing grip will be treasured beyond measure.
An attempt to extrapolate reality
Our answer is a competitive YES. However, universities face the laissez-faire of old staff.
This reference must be included:
Gerck, E. “Algorithms for Quantum Computation: Derivatives of Discontinuous Functions.” Mathematics 2023, 11, 68. https://doi.org/10.3390/math11010068
announcing quantum computing on a physical basis, deprecating infinitesimals, epsilon-deltas, continuity, limits, mathematical real-numbers, imaginary numbers, and more, making calculus middle-school easy and with the same formulas.
Otherwise, difficulties and obsolescence follow. A hopeless scenario; no argument is possible against facts.
What is your qualified opinion? Must one self-study? A free PDF is currently available at my profile at RG.
Hello,
in an article I found the following sentence in the abstract:
"The results suggested that (1) Time 1 mathematics self-concept had significant effects on Time 2 mathematics school engagement at between-group and within-group levels; and (2) Time 2 mathematics school engagement played a partial mediating role between Time 1 mathematics self-concept and Time 2 mathematics achievement at the within-group level."
What is the meaning of the within-group-level and between-group-level in this context?
The article I am referring to is:
Xia, Z., Yang, F., Praschan, K., & Xu, Q. (2021). The formation and influence mechanism of mathematics self-concept of left-behind children in mainland China. Current Psychology, 40(11), 5567–5586. https://doi.org/10.1007/s12144-019-00495-4
An attempt to extrapolate reality
I know that δ(f(x)) = ∑ᵢ δ(x−xᵢ)/|f′(xᵢ)|, where the xᵢ are the simple roots of f. What will the expression be if "f" is a function of two variables, i.e. δ(f(x,y)) = ?
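A standard distributional generalization (stated as a sketch, assuming ∇f ≠ 0 on the zero set): δ(f(x,y)) acts as a line delta concentrated on the curve C = {(x,y) : f(x,y) = 0}, weighted by the gradient magnitude,
\iint g(x,y)\, \delta(f(x,y))\, dx\, dy \;=\; \int_{C} \frac{g(x,y)}{\lVert \nabla f(x,y) \rVert}\, ds .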
I'm using target encoding in my work, and I'd like to understand why it's effective from a mathematical point of view.
Intuitively, my understanding is that it allows you to encode the past with the future. I can see why that's effective, and also why it could cause target leakage. However, I can't find a good mathematical explanation for its effectiveness/ issues.
Does anyone know the answer, or have a link to a resource they'd be willing to share?
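For concreteness, here is a minimal Python/pandas sketch of smoothed mean target encoding and an out-of-fold variant (the column names, the smoothing parameter m and the K-fold scheme are illustrative choices, not a reference implementation); the out-of-fold version is the usual guard against the leakage mentioned above:

import pandas as pd
from sklearn.model_selection import KFold

def target_encode(df, cat_col, target_col, m=10.0):
    # Smoothed mean encoding: shrink each category mean toward the global mean.
    # Larger m means more shrinkage for rare categories.
    global_mean = df[target_col].mean()
    stats = df.groupby(cat_col)[target_col].agg(['mean', 'count'])
    smooth = (stats['count'] * stats['mean'] + m * global_mean) / (stats['count'] + m)
    return df[cat_col].map(smooth)

def target_encode_oof(df, cat_col, target_col, n_splits=5, m=10.0):
    # Out-of-fold encoding: each row is encoded with statistics computed
    # without that row's fold, which is the standard guard against target leakage.
    encoded = pd.Series(index=df.index, dtype=float)
    for train_idx, valid_idx in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(df):
        train = df.iloc[train_idx]
        global_mean = train[target_col].mean()
        stats = train.groupby(cat_col)[target_col].agg(['mean', 'count'])
        smooth = (stats['count'] * stats['mean'] + m * global_mean) / (stats['count'] + m)
        encoded.iloc[valid_idx] = df.iloc[valid_idx][cat_col].map(smooth).fillna(global_mean).values
    return encoded

The mathematical intuition is that the encoded value is a shrinkage estimate of E[y | category]: rare categories are pulled toward the global mean instead of memorizing their own targets, which is exactly where the leakage problem would otherwise bite.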
We assume that the Lagrange multipliers originally introduced in the Boltzmann-Einstein model to derive the Gaussian distribution are just a mathematical trick to compensate for the lack of true definition of probability in unified 4D space.
The derivation of the Boltzmann distribution for the energy distribution of identical but distinguishable classical particles can be obtained in a mathematical approach [1] or equivalently via a statistical approach [2] where the Lagrange multipliers are completely ignored.
[1] Rainer Müller, "The Boltzmann factor: a simplified derivation."
[2] I. Abbas, "Statistical integration."
Hello!
I am curious: can anyone guide me on how to calculate the amount of hydrogen stored in the metal hydride during the absorption process, both in wt.% and in grams, and how much energy is released during absorption?
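As a hedged sketch of the usual bookkeeping (symbols are generic; conventions differ on whether the hydrogen mass is included in the denominator, and the absorption enthalpy ΔH_abs is material-specific and must come from PCT / van 't Hoff data for your hydride):
\mathrm{wt\%} = \frac{m_{H_2}}{m_{alloy} + m_{H_2}} \times 100, \qquad
n_{H_2} = \frac{\Delta p\, V_{gas}}{Z R T}\ \ (\text{Sieverts-type volumetric measurement}), \qquad
m_{H_2} = n_{H_2}\, M_{H_2},
\qquad Q_{released} \approx n_{H_2}\, \lvert \Delta H_{abs} \rvert .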
Mathematics and theoretical physics are currently searching for answers to this particular question and two other related questions that make up three of the most persistent questions:
i- Do probabilities and statistics belong to physics or mathematics?
ii- Followed by the related question, does nature operate in 3D geometry plus time as an external controller or more specifically, does it operate in the inseparable 4D unit space where time is woven?
iii-Lagrange multipliers: Is it just a classic mathematical trick that we can do without?
We assume the answers to these questions are all interconnected, but how?
I am trying to understand the mathematical relations that compute the residual entropy of methane in REFPROP. I have tried to replicate the graphs in "Entropy Scaling of Alkanes II" by Ian Bell using the mathematical expressions in the paper and have been unsuccessful so far.
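In case it helps, the generic route in Helmholtz-energy-explicit equations of state of the kind REFPROP uses (reduced Helmholtz energy α = α⁰ + αʳ, with τ = T_c/T and δ = ρ/ρ_c) is the standard Span–Wagner-type expression below; this is the general formulation, not a claim about the specific implementation in that paper:
\frac{s^{res}(T,\rho)}{R} \;=\; \tau \left( \frac{\partial \alpha^{r}}{\partial \tau} \right)_{\delta} - \alpha^{r}(\tau,\delta),
and the entropy-scaling papers typically work with the dimensionless quantity s⁺ = −s^res/R.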
Can someone please provide more insight into the mathematical formulation of a frequency-constrained UC model comprising synchronous sources as well as non-synchronous sources? Also, what would the associated MILP code in GAMS look like?
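As a hedged sketch (notation is mine, not from any particular model): the piece that makes a UC model "frequency constrained" is usually a post-contingency RoCoF / nadir requirement written so that it is linear in the commitment binaries u_i. With ΔP the largest credible loss, f_0 the nominal frequency, and H_i, S_i the inertia constant and rating of synchronous unit i,
\frac{\Delta P\, f_0}{2 \sum_i H_i S_i u_i} \le \mathrm{RoCoF}_{max}
\quad \Longleftrightarrow \quad
\sum_i H_i S_i u_i \;\ge\; \frac{\Delta P\, f_0}{2\, \mathrm{RoCoF}_{max}},
with non-synchronous sources entering through fast frequency response terms and a (typically piecewise-linearized) nadir constraint. The right-hand inequality is linear in the u_i, so in GAMS it is just an ordinary equation summed over the set of units inside the MILP; I am not quoting any specific model's code.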
The error of building a physical world on intuitive feelings about fundamental concepts such as space and time occurred during Newton's creation of Newtonian mechanics. Of course, this mistake had to be made, so that man would not be deprived of the numerous gifts of technology resulting from this science! But when the world showed another face of itself at very small and very large scales, this theory, along with its error, could do nothing.
When Newton had those ideas about space and time (of course, maybe he knew and had no choice), he built a mathematical system for his thoughts, differential and integral calculus! Mathematics resulting from his thoughts was a systematic continuation of his thoughts, with the same assumptions about space and time. That mathematics could not show him the right way to know the real world! Because the world was not Newtonian! Today, many pages in modern physics are created based on new assumptions of space and time and other seemingly obvious variables!
Now, why do we think that these pages of current mathematics necessarily lead to the correct knowledge of the world! Can we finally identify the world, as it is, by adopting appropriate and correct assumptions?!
Apart from the mathematical systems that confirm human feelings and perceptive sensors, there are countless mathematical systems that do not confirm these sensors and our sensory data! A question arises, are the worlds that these mathematical systems evoke are real? So in this way, there are countless worlds that can be realized with their respective physics. Can multiple universes be concluded from this point of view?
Don't we see that only one of these possible worlds is felt by our body?! Why? Have we created mathematics to suit our feelings in the beginning?! And now, in modern physics and the maturation of our powers of understanding, we have created mathematical systems that fit our dreams about the world!? Which of these mathematical devices is actually true about the world and has been realized?! If all of them have come true! So there is no single and objective world and everyone experiences their own world! If only one of these mathematical systems has been realized, how is this system the best?!
If the worlds created by these countless mathematical systems are not real, why do they exist in the human mind?!
The last question is, does the tangibleness of some of these mathematical systems for human senses, and the intangibleness of most of them, indicate the separation of the observable and hidden worlds?!
Recently I've discussed this topic with a tautologist researcher, a follower of Quine. The denial of the capacity of deductive logic to generate new knowledge implies that no deductive result in mathematics really increases our knowledge.
The tautological nature of deduction seems to lead to this conclusion. In my opinion some sort of logical omniscience is involved in that position.
So the questions would be:
- Is the set of theorems that follow logically from a set A of axioms, "implicit" knowledge? if so, what would be the proper difference between "implicit" and "explicit" knowledge?
- If we embrace the idea that no new knowledge comes from deduction, what is the precise meaning of "new" in this context?
- How do you avoid the problem of logic omniscience?
Thanks beforehand for your insights.
Are there certain methods, for instance T-tests or ANOVAs, for certain ways a survey question is asked?
Actually, I am working on the modeling of path loss between the coordinator and the sensor nodes of a BAN network. My objective is to make a performance comparison between the CM3A model of the IEEE 802.15.6 standard and a loss model that I have implemented mathematically.
So, in your experience, how can I implement these two path loss models? Do I have to define both path loss equations under the Wireless Channel model? Or do I create and implement a specific module for each path loss model under Castalia (like the wireless channel module) and then call it from the omnetpp.ini configuration file?
You will find attached the two models in a figure.
Thanks in advance
Many people believe that x-t spacetime is separable and that describing x-t as an inseparable unit block is only essential when the speed of the object concerned approaches that of light.
This is the most common error in mathematics as I understand it.
The universe has been expanding since the Big Bang at almost the speed of light, and this may be the reason why the results of classical mathematics fail and become less accurate than those of the stochastic B-matrix (or any other suitable matrix), even in the simplest situations such as double and triple integration.
I believe that it is common knowledge that mathematics and its applications cannot directly prove Causality. What are the bases of the problem of incompatibility of physical causality with mathematics and its applications in the sciences and in philosophy?
The main but very general explanation could be that mathematics and mathematical explanations are not directly about the world, but are applicable to the world to a great extent.
Hence, mathematical explanations can at the most only show the general ways of movement of the processes and not demonstrate whether the ways of the cosmos are by causation, what the internal constitution of every part of it is, etc. Even when some very minute physical process is mathematized, the results are general, and not specific of the details of the internal constitution of that process.
No science and philosophy can start without admitting that the cosmos exists. If it exists, it is not nothing, not vacuum. Non-vacuous existence means that the existents are non-vacuously extended. This means that they have parts. Every part has parts too, ad libitum, because each part is extended and non-infinitesimal. Hence, each part is relatively discrete, not mathematically discrete.
None of the parts of any physical existent is an infinitesimal. They can be near-infinitesimal. This character of existents is Extension, a Category directly implied by the To Be of Reality-in-total.
Similarly, any extended being’s parts -- however near-infinitesimal -- are active, moving. This implies that every part has some (finite) impact on some others, not on infinite others. This character of existents is Change.
No other implication of To Be is so primary as these two (Extension-Change) and directly derivable from To Be. Hence, they are exhaustive of To Be.
Existence in Extension-Change is what we call Causality. If anything is existent, it is causal – hence Universal Causality is the trans-scientific and physical-ontological Law of all existents.
By the very concept of finite Extension-Change-wise existence, it becomes clear that no finite space-time is absolutely dense with existents. Hence, existents cannot be mathematically continuous. Since there is continuous (but finite and not discrete) change (transfer of impact), no existent can be mathematically absolutely continuous or discrete in its parts or in connection with others.
Can logic show the necessity of all existents as being causal? We have already discussed how, ontologically, the very concept of To Be implies Extension-Change and thus also Universal Causality.
WHAT ABOUT THE ABILITY OR NOT OF LOGIC TO CONCLUDE TO UNIVERSAL CAUSALITY?
In my argument above and elsewhere showing Extension-Change as the very exhaustive meaning of To Be, I have used mostly only the first principles of ordinary logic, namely, Identity, Non-contradiction, and Excluded Middle, and then argued that Extension-Change-wise existence is nothing but Universal Causality, if everything existing is non-vacuous in existence.
For example, does everything exist or not? If yes, let us call it non-vacuous existence. Hence, Extension as the first major implication of To Be. Non-vacuous means extended, because if not extended, the existent is vacuous. If extended, everything has parts.
The point of addition now has been Change, which makes the description physical. It is, so to say, from experience. Thereafter I move to the meaning of Change basically as motion or impact.
Naturally, everything in Extension must effect impacts. Everything has further parts. Hence, by implication from Change, everything causes changes by impacts. Thus, we conclude that Extension-Change-wise existence is Universal Causality. It is thus natural to claim that this is a pre-scientific Law of Existence.
In such foundational questions like To Be and its implications, we need to use the first principles of logic, because these are the foundational notions of all science and no other derivative logical procedure comes in as handy. In short, logic with its fundamental principles can help derive Universal Causality. Thus, Causality is more primary to experience than the primitive notions of mathematics.
Extension-Change, and Universal Causality derived by their amalgamation, are the most fundamental Metaphysical, Physical-ontological Categories. Since these are the direct and exhaustive implications of To Be, all philosophy and science are based on them.
The congruent number problem has been a fascinating topic in number theory for centuries, and it continues to inspire research and exploration today. The problem asks whether a given positive integer can be the area of a right-angled triangle with rational sides. While this problem has been extensively studied, it is not yet fully understood, and mathematicians continue to search for new insights and solutions.
In recent years, there has been increasing interest in generalizing the congruent number problem to other mathematical objects. Some examples of such generalizations include the elliptic curve congruent number problem, which asks for the existence of rational points on certain elliptic curves related to congruent numbers, and the theta-congruent number problem as a variant, which considers the possibility of finding fixed-angled triangles with rational sides.
However, it is worth noting that not all generalizations of the congruent number problem are equally fruitful or meaningful. For example, one might consider generalizing the problem to arbitrary objects, but such a generalization would likely be too broad to be useful in practice.
Therefore, the natural question arises: what is the most fruitful and meaningful generalization of the congruent number problem to other mathematical objects? Any ideas are welcome.
Here are some articles:
M. Fujiwara, "θ-congruent numbers," in: Number Theory (Eger, 1996), de Gruyter, Berlin, 1998, pp. 235–241.
Tsubasa Ochiai, "New generalizations of congruent numbers," DOI: 10.1016/j.jnt.2018.05.003.
Larry Rolen, "A generalization of the congruent number problem."
Is the Arabic book about the congruent number problem cited correctly in the references? If anyone has any idea where I can find the Arabic version, it will be helpful. The link to the book is https://www.qdl.qa/العربية/archive/81055/vdc_100025652531.0x000005.
EDIT1:
I will present a family of elliptic curves in the same spirit as the congruent number elliptic curves.
This family exhibits similar patterns as the congruent number elliptic curves, including the property that the integer is still "congruent" if we take its square-free part, and there is evidence for a connection between congruence and positive rank (as seen in the congruent cases of $n=5,6,7$).
Insistence on mathematical continuity in nature is a mere idealization. It expects nature to obey our idealization. This is what happens in all physical and cosmological (and of course other) sciences as long as they use mathematical idealizations to represent existent objects / processes.
But mathematically following nature in whatever it is in its part-processes is a different procedure in science and philosophy (and even in the arts and humanities). This theoretical attitude accepts the existence of processual entities as what they are.
This theoretical attitude accepts in a highly generalized manner that
(1) mathematical continuity (in any theory and in terms of any amount of axiomatization of physical theories) is not totally realizable in nature as a whole and in its parts: because the necessity of mathematical approval in such a cosmology falls short miserably,
(2) absolute discreteness (even QM type, based on the Planck constant) in the physical cosmos (not in non-quantifiable “possible worlds”) and its parts is a mere commonsense compartmentalization from the "epistemology of piecemeal thinking": because the aspect of the causally processual connection between any two quanta is logically and mathematically alienated in the physical theory of Planck’s constant, and
(3) hence, the only viable and thus the most reasonably generalizable manner of being of the physical cosmos and of biological entities is that of CAUSAL CONTINUITY BETWEEN PARTIALLY DISCRETE PROCESSUAL OBJECTS.
PHYSICS and COSMOLOGY even today tend to make the cosmos mathematically either continuous or defectively discrete or statistically oriented to merely epistemically probabilistic decisions and determinations.
Can anyone suggest here the existence of a different sort of physics and cosmology that one may have witnessed until today? A topology and mereology of CAUSAL CONTINUITY BETWEEN PARTIALLY DISCRETE PROCESSUAL OBJECTS, fully free of discreteness-oriented category theory and functional analysis, is yet to be born.
Hence, causality in its deep roots in the very concept of To Be is alien to physics and cosmology till today.
I read in a maths popularization book by Steven Strogatz that 1+3=4, 1+3+5=9, 1+3+5+7=16, and so on; what would be the hypothesis when trying to prove this striking 'fact'?
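The 'hypothesis' to prove is that the sum of the first n odd numbers equals n²; a one-line induction makes it precise:
\sum_{k=1}^{n} (2k-1) = n^2, \qquad \text{base case } 1 = 1^2, \qquad \text{step } \sum_{k=1}^{n+1} (2k-1) = n^2 + (2n+1) = (n+1)^2 .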
Our response is YES. Quantum computing has arrived, as an expression of that.
Numbers do obey a physical law. Peter Shor of the Massachusetts Institute of Technology was the first to say it, in 1994 [cf. 1], in modern times. It is a wormhole connecting physics with mathematics, and it has existed even before the Earth existed.
So-called "pure" mathematics is, after all, governed by objective laws. The Max Planck Institute of Quantum Optics (MPQ) showed the mathematical basis by recognizing the differentiation of discontinuous functions [1, 2, 3], in 1982.
This denies any type of square-root of a negative number [4] -- a.k.a. an imaginary number -- rational or continuous.
Complex numbers, of any type, are not objective and are not part of a quantum description, as said first by Erwin Schrödinger (1926) --
yet,
cryogenic behemoth quantum machines (see figure) consider a "complex qubit" -- two objective impossibilities. They are just poor physics and expensive analog experiments in these pioneering times.
Quantum computing is ... natural. Atoms do it all the time, and the human brain (based on +4 quantum properties of numbers).
Each point, in a quantum reality, is a point ... not continuous. So, reality is grainy, everywhere. Ontically.
To imagine a continuous point is to imagine a "mathematical paint" without atoms. Take a good microscope ... atoms appear!
The atoms, an objective reality, imply a graininess. This quantum description includes at least, necessarily (Einstein, 1917), three logical states -- with stimulated emission, absorption, and emission. Further states are possible, as in measured superradiance.
Mathematical complex numbers or mathematical real-numbers do not describe objective reality. They are continuous, without atoms. Poor math and poor physics.
It is easy to see that multiplication or division "infests" the real part with the imaginary part, and in calculating modulus -- e.g., in the polar representation as well as in the (x,y) rectangular representation. The Euler identity is a fiction, as it trigonometrically mixes types ... avoid it. The FFT will no longer have to use it, and FT=FFT.
The complex number system "infests" the real part with the imaginary part, even for Gaussian numbers, and this is well-known in third-degree polynomials.
Complex numbers, of any type, must be deprecated, they do not represent an objective sense. They should not "infest" quantum computing.
Quantum computing is better without complex numbers. Software makes R, C = Q --> B = {0,1}.
What is your qualified opinion?
REFERENCES
[1] https://www.mdpi.com/2227-7390/11/1/68
[3] Physical Review A: Atomic, Molecular, and Optical Physics 26(1), June 1982.

Physics
The physicist betting that space-time isn't quantum after all
Most experts think we have to tweak general relativity to fit with quantum theory. Physicist Jonathan Oppenheim isn't so sure, which is why he’s made a 5000:1 bet that gravity isn’t a quantum force
By Joshua Howgego
13 March 2023
JONATHAN OPPENHEIM likes the occasional flutter, but the object of his interest is a little more rarefied than horse racing or the one-armed bandit. A quantum physicist at University College London, Oppenheim likes to make bets on the fundamental nature of reality – and his latest concerns space-time itself.
The two great theories of physics are fundamentally at odds. In one corner, you have general relativity, which says that gravity is the result of mass warping space-time, envisaged as a kind of stretchy sheet. In the other, there is quantum theory, which explains the subatomic world and holds that all matter and energy comes in tiny, discrete chunks. Put them together and you could describe much of reality. The only problem is that you can’t put them together: the grainy mathematics of quantum theory and the smooth description of space-time don’t mesh.
Most physicists reckon the solution is to “quantise” gravity, or to show how space-time comes in tiny quanta, like the three other forces of nature. In effect, that means tweaking general relativity so it fits into the quantum mould, a task that has occupied researchers for almost a century already. But Oppenheim wonders if this assumption might be mistaken, which is why he made a 5000:1 bet that space-time isn’t ultimately quantum.
Hi, my name is Debi. I am in my second month as a master's student majoring in Mathematics Education at Yogyakarta State University. Please give me advice on current trending topics in mathematics education, especially learning media for mathematics and the psychology of learning mathematics. Could you share how these topics are treated in your country or at your university? Thank you so much.
This is actually a trivial question and I'm just being mischievous.
It turns on the shades of meaning of both "idea" and "exist."
Mathematically, a concept exists whether anyone has happened upon it or not. (A meaningless attempt at a concept is not a concept).
When first thought about by an actual brain of any kind, a concept acquires its first glimmer of existence as in the real world.
Physics is a science of representations, with mathematical aspects foremost among them, and not of naked correlations and parameter analysis.
It also has competent conceptualizations and ingenious principles.
Even the innocent-seeming uniform motion is a representational scheme for motions under the theory of kinematics. (Representations are separate from reality but are an invaluable part of scientific inferring, predicting, explaining, etc.) For example, heat is represented as a flow between subsystems. Representations change; e.g., Einstein found the curved-spacetime representation for gravitational phenomena.
Physics is also the science of cosmology. It has no meaning if it bypasses the universe, i.e. the sum of subsystems. This discipline has problems because we cannot take ourselves out of it and study it, but physics has tools for this (QM) or theoretical approximations (a more cognitively open consideration of the concept of boundary conditions).
Is it possible to mathematically calculate K-40 from K total determined by ICP MS in sediment samples?
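A minimal sketch, assuming natural isotopic abundance (the usual assumption for sediments): ICP-MS gives total K, and K-40 follows from its fixed natural abundance of 0.0117 at.% (about 1.2 × 10⁻⁴ by mass), with the activity obtained from the decay constant:
m_{K\text{-}40} \approx 1.2 \times 10^{-4}\, m_{K,total}, \qquad
A = \lambda N = \frac{\ln 2}{t_{1/2}} \cdot \frac{m_{K\text{-}40}\, N_A}{39.96\ \text{g/mol}} \approx 31\ \text{Bq per gram of natural K} \quad (t_{1/2} \approx 1.25 \times 10^{9}\ \text{yr}).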
We assume that this statement is false, but one of the most common mathematical errors.
So a question arises: what is the importance of the LHS diagonal?
I have two networks, and wish to get them to dynamically interact with one another, yet retain modularity.
Category theory is a branch of mathematics that deals with the abstract structure of mathematical concepts and their relationships. While category theory has been applied to various areas of physics, such as quantum mechanics and general relativity, it is currently not clear whether it could serve as the language of a metatheory unifying the description of the laws of physics.
There are several challenges to using category theory as the language of a metatheory for physics. One challenge is that category theory is a highly abstract and general framework, and it is not yet clear how to connect it to the specific details of physical systems and their behaviour. Another challenge is that category theory is still an active area of research, and there are many open questions and debates about how to apply it to different areas of mathematics and science.
Despite these challenges, there are some researchers who believe that category theory could play a role in developing a metatheory for physics. For example, some have proposed that category theory could be used to describe the relationships between different physical theories and to unify them into a single framework. Others have suggested that category theory could be used to study the relationship between space and time in a more unified and conceptual way.
I am very interested in your experiences, opinions and ideas.
Is it mathematically justified to place negative and positive numbers on the same plane?
This question discusses the YES answer. We don't need the √-1.
The complex numbers, using rational numbers (i.e., the Gauss set G) or mathematical real-numbers (the set R), are artificial. Can they be avoided?
Math cannot be in one's head, as [1] explains.
To realize the YES answer, one must advance over current knowledge, and this may sound strange. But every path in a complex space must begin and end in a rational number -- anything that can be measured, or produced, must be a rational number. Complex numbers are not needed, physically, as a number. But, in algebra, they are useful.
The YES answer can improve the efficiency in using numbers in calculations, although it is less advantageous in algebra calculations, like in the well-known Gauss identity.
For example, in the FFT [2], there is no need to compute complex functions, or trigonometric functions.
This may lead to further improvement in computation time over the FFT, already providing orders of magnitude improvement in computation time over FT with mathematical real-numbers. Both the FT and the FFT are revealed to be equivalent -- see [2].
I detail this in [3] for comments. Maybe one can build a faster FFT (or, FFFT)?
The answer may also consider further advances into quantum computing?
[2] Preprint: FT = FFT
[3] Preprint: The quantum set Q*
I noticed that in some very bad models of neural networks, the value of R² (coefficient of determination) can be negative. That is, the model is so bad that the mean of the data is better than the model.
In linear regression models, the multiple correlation coefficient (R) can be calculated using the root of R². However, this is not possible for a model of neural networks that presents a negative R². In that case, is R mathematically undefined?
I tried calculating the correlation between y and y_pred (Pearson), but it is mathematically undefined (division by zero). I am attaching the values.
Obs.: The question is about artificial neural networks.
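A minimal numerical illustration of both points (NumPy/SciPy, synthetic data): R² is negative whenever the model's squared error exceeds that of the constant mean predictor, and Pearson's r is a 0/0 whenever the predictions have zero variance, e.g. a collapsed network that outputs a constant:

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
y = rng.normal(size=100)

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# A "model" worse than the mean: predictions anti-correlated with y.
y_bad = -y
print(r2(y, y_bad))        # negative R^2 (roughly -3): worse than predicting the mean

# A collapsed model: constant output -> zero variance in y_pred.
y_const = np.full_like(y, y.mean())
print(r2(y, y_const))      # exactly 0: no better than the mean
# pearsonr(y, y_const)     # undefined (0/0): correlation needs non-zero variance in both inputs

So in the negative-R² case, R = sqrt(R²) is simply not defined as a real number, and the usual identity R² = r²(y, ŷ) only holds for linear regression with an intercept fitted by least squares, not for an arbitrary neural network.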
1 - Prof. Tegmark of MIT hypothesizes that the universe is not merely described by mathematics but IS mathematics.
2 - The Riemann hypothesis applies to the mathematical universe’s space-time, and says its infinite "nontrivial zeros" lie on the vertical line of the complex number plane (on the y-axis of Wick rotation).
3 - Implying infinity=zero, there's no distance in time or space - making superluminal and time travel feasible.
4 - Besides Mobius strips, topological propulsion uses holographic-universe theory to delete the 3rd dimension (and thus distance).
5 - Relationships between living organisms can be explained with scientifically applied mathematics instead of origin of species by biological evolution.
6 - Wick rotation - represented by a circle where the x- and y-axes intersect at its centre, and where real and imaginary numbers rotate counterclockwise between 4 quadrants - introduces the possibility of interaction of the x-axis' ordinary matter and energy with the y-axis' dark matter and dark energy.

Theoretical and computational physics provide the vision and the mathematical and computational framework for understanding and extending the knowledge of particles, forces, space-time, and the universe. A thriving theory program is essential to support current experiments and to identify new directions for high energy physics. Theoretical physicists provide a great deal of assistance to the Energy, Intensity, and Cosmic Frontiers with the in-depth understanding of the underlying theory behind experiments and interpreting the outcomes in context of the theory. Advanced computing tools are necessary for designing, operating, and interpreting experiments and to perform sophisticated scientific simulations that enable discovery in the science drivers and the three experimental frontiers.
source: HEP Theoretical and Computationa... | U.S. DOE Office of Science (SC) (osti.gov)
Irrational numbers are uncomputable with probability one. In that sense, numerically, they do not belong to nature. Animals cannot calculate them, nor can humans or machines.
But algebra can deal with irrational numbers. Algebra deals with unknowns and indeterminates, exactly.
This would mean that a simple bee or fish can do algebra? No, this means, given the simple expression of their brains, that a higher entity is able to command them to do algebra. The same for humans and machines. We must be able also to do quantum computing, and beyond, also that way.
Thus, no one (animals, humans, extraterrestrials in the NASA search, and machines) is limited by their expressions, and all obey a higher entity, commanding through a network from the top down -- which entity we call God, and Jesus called Father.
This means that God holds all the dice. That also means that we can learn by mimicking nature. Even a wasp can teach us the medicinal properties of a passion fruit flower to lower aggression. Animals, no surprise, can self-medicate, knowing no biology or chemistry.
There is, then, no “personal” sense of algebra. It is just a combination of arithmetic operations. There is no “algebra in my sense” -- there is only one sense, the one mathematical sense that has made sense physically, for ages. I do not feel free to change it, and did not.
But we can reveal new facets of it. In that, we have already revealed several exact algebraic expressions for irrational numbers. Of course, the task is not even enumerable, but it is worth compiling, for the weary traveler. Any suggestions are welcome.
What does this (± 0.06) mean, and how can I calculate it mathematically?
All attempts to prove the Riemann Hypothesis on the zeta function must fail.
There are no zeros of the so-called function of a complex argument.
A function of two different units, f(x, y), only has values for a third unit
z [z = f(x, y)]
if the values of the variables x and y are combined by an algebraic rule.
So it should have been done for the complex argument Riemann used.
But there is no such combination. So Riemann only did a 'scaling', in which both parts of the complex number stay separate.
The second part of the refutation comes from showing that the received expert opinion in mathematics is wrong. This concerns the false use of 'imaginary' and 'prefixed multiplication'.
What are the properties of transversal risks in networks? Happy for applied examples and diffusion properties.
Project Name - Improving Achievement and Attitude through Co-operative learning in F. Y. B. Sc. Mathematics Class
What is missing is an exact definition of probability that would contain time as a dimensionless quantity woven into a 3D geometric physical space.
It should be mentioned that the current definition of probability as the relative frequency of successful trials is primitive and contains no time.
On the other hand, the quantum mechanical definition of the probability density as,
p(r,t) = ψ*(r,t)·ψ(r,t),
which introduces time via the system's destination time and not from its start time is of limited usefulness and leads to unnecessary complications.
It's just a sarcastic definition.
It should be mentioned that a preliminary definition of the probability function of space and time proposed in the Cairo technique led to revolutionary solutions of time-dependent partial differential equations, integration and differentiation, special functions such as the Gamma function, etc. without the use of mathematics.
Hello
I have an Excel file containing weather data for Missouri in the U.S. The data start on 25 July and end on 9 September 2014. For each day, about 21 records were taken (within 6 hours of solar noon).
How can I make a Type99 source file from this Excel file? I have already studied the mathematical reference in the TRNSYS help, but it was not very helpful. Thanks.
The Gamma function,
Γ(n) = ∫_0^∞ x^(n−1) e^(−x) dx,
is of great mathematical and physical importance.
It can be calculated without numerical integration (for practical purposes) via its mathematical and physical properties:
i- The minimum of Gamma occurs at x = 1.4616321, and the corresponding value of Gamma(x) is 0.8856032.
ii- Gamma(1) = Gamma(2) = 1.
iii- Gamma(x) = (x−1)!
A simple preliminary approach that gives the value of Gamma(x) with an error of less than 0.001 is the second-order polynomial expression for the factorial x!,
x! ≈ 1 − 0.46163·x + 0.46163·x², for x in [0, 1].
For example, this gives
10.5! ≈ 11,877,478 vs. the value 11,899,423.084 given by numerical integration,
and Gamma(1.4616) ≈ 0.88527 vs. 0.8856032.
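A small Python check of this polynomial route (my own wrapper around the posted coefficients, so treat it as a sketch): the polynomial approximates t! on [0, 1], the recurrence Γ(z) = (z−1)Γ(z−1) extends it upward, and the quoted 11,899,423.084 corresponds to 10.5! = Γ(11.5) in the Γ(x) = (x−1)! convention used above.

import math

def factorial_poly(t):
    # Posted second-order polynomial for t! on [0, 1].
    return 1.0 - 0.46163 * t + 0.46163 * t * t

def gamma_approx(z):
    # Gamma(z) for z >= 1: reduce to [1, 2] with Gamma(z) = (z - 1) * Gamma(z - 1),
    # then apply Gamma(z) = (z - 1)! via the polynomial above.
    if z > 2.0:
        return (z - 1.0) * gamma_approx(z - 1.0)
    return factorial_poly(z - 1.0)

for z in (1.4616321, 2.0, 11.5):
    print(z, gamma_approx(z), math.gamma(z))
# 1.4616321 -> ~0.8853  vs 0.8856032 (the posted minimum)
# 11.5      -> ~1.188e7 vs 11899423.08... (i.e. 10.5!)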
There are a few points to consider in this issue.
Points pro the current emphasis:
1. Math is the backbone of a physical theory. Good representations and good quantities of a theory and its phenomena, but bad math, make for a bad theory.
2. There is general skepticism about reconsidering the role of the mathematized approach in physics Masters syllabi, or upgrading the role of literature and essays.
3. Humans communicate, learn, think and develop constructs via language.
Arguments con:
1. Math is the element of a theory, and of the "physics product", that is responsible for precision and prediction. Though indispensable, it exists in the minds of individuals and functions in parallel with conceptions and physical arguments.
2. Not all models in physics are mathematical. Some are conceptual.
3. Formulating solutions to physics problems via mathematical techniques and methods is the definition of mathematical physics. However, this is only a certain percentage of the domain of skills, yet the syllabus focuses 100% on it.
Dear professors and students, greetings and courtesy. I wanted to know if the real numbers are the largest and the last set of numbers that exist, or if there are sets or sets of numbers that are larger than that, but maybe they have not been discovered yet? Which is true? If it is the last set of numbers that exists, what theorem proves the non-existence of a set of numbers greater than it? And if there is a larger set than that, in terms of the history of mathematics, by obtaining the answer to which mathematical problem, it was proved that the obtained answer is not closed with respect to the set of complex numbers and belongs to a larger set? Thank you very much
As the concept comes from the Bernoulli numbers and different branches of mathematics, I have recently considered the importance of introducing the same concept, 'the unity of mathematics', within the context of the Bernoulli numbers and some special series (the Flint Hills and Cookson Hills series). I believe in the scenario of defining a balanced relationship between the effect of the Bernoulli numbers and these series whose convergence is hard to establish.
I am pointing out this potential link.
For a general conclusion about what I consider the concept of 'unity' should be, via the Bernoulli numbers and the Flint Hills series, just pay attention to this screenshot:
DOI: 10.13140/RG.2.2.16745.98402
Fermat's last theorem was finally solved by Wiles using mathematical tools that were wholly unavailable to Fermat.
Do you believe
A) That we have actually not solved Fermat's theorem the way it was supposed to be solved, and that we must still look for Fermat's original solution, still undiscovered,
or
B) That Fermat actually made a mistake, and that his 'wonderful' proof -which he did not have the necessary space to fully set forth - was in fact mistaken or flawed, and that we were obsessed for centuries with his last "theorem" when in fact he himself had not really proved it at all?
Mathematics Teacher Educators (MTEs) best practices.
I'm interested in research literature about Mathematics Teacher Educators' (MTEs) best practices, especially MTEs' practices for teaching how to solve problems.
Thank you.
Good day, Dear Colleagues!
Anyone interested in discussing this topic?
How can I define histogram bins with a well-defined mathematical expression, especially one driven by the data points x_i, i = 1,…,n, and the range or any other well-defined measure of the dataset?
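Two widely used closed-form rules, plus how to obtain them directly from NumPy (a minimal sketch; `data` is a placeholder array): Sturges uses only n, k = ⌈log₂ n⌉ + 1, while Freedman–Diaconis uses the interquartile range, h = 2·IQR·n^(−1/3), and then k = ⌈(x_max − x_min)/h⌉.

import numpy as np

def fd_bin_width(data):
    # Freedman-Diaconis: h = 2 * IQR * n^(-1/3), robust to outliers.
    data = np.asarray(data)
    q75, q25 = np.percentile(data, [75, 25])
    return 2.0 * (q75 - q25) * data.size ** (-1.0 / 3.0)

def sturges_bin_count(data):
    # Sturges: k = ceil(log2(n)) + 1, reasonable for roughly normal data.
    return int(np.ceil(np.log2(len(data)))) + 1

data = np.random.default_rng(0).normal(size=1000)          # placeholder dataset
edges_fd = np.histogram_bin_edges(data, bins="fd")         # Freedman-Diaconis
edges_sturges = np.histogram_bin_edges(data, bins="sturges")
print(len(edges_fd) - 1, len(edges_sturges) - 1, fd_bin_width(data))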
Kindly share with me any details of Scopus indexed Mathematics conferences in India.
Physics continues a tradition of assessment in graduate programs based on final exams in the form of mathematized exercises, with no conceptual questions or essays.
This fulfils the aim of mastering the demanding nomenclature of the domain. Given the slow progress in the field in recent decades this might be a good alternative, but there are also pedagogical reasons.
This form of assessment is extreme and outdated. It has further disadvantages:
** Students do not develop critical research skills such as literature analysis and research.
** Certain skills for a future researcher are not tested, e.g. the ability to combine research from different sources, to think critically about competing theses or theories, and to discern gaps in current research.
** A mixed approach would ensure all aims.
I have Expi293 cell cultures (suspension). After counting them, the density turned out to be 4.26 × 10^6 cells/mL. I need to split them. Starting from this concentration, how can I obtain 30 mL of cell suspension at a density of 0.25 × 10^6 cells/mL? How many mL of cell suspension (the one at 4.26 × 10^6 cells/mL) and how many mL of medium do I need? I still have difficulty understanding which kind of mathematical calculation I need to use. Could you explain it to me?
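The calculation is the standard dilution relation C₁V₁ = C₂V₂; a worked sketch with the numbers given:
V_1 = \frac{C_2 V_2}{C_1} = \frac{0.25 \times 10^{6}\ \text{cells/mL} \times 30\ \text{mL}}{4.26 \times 10^{6}\ \text{cells/mL}} \approx 1.76\ \text{mL of suspension}, \qquad V_{medium} = 30 - 1.76 \approx 28.2\ \text{mL}.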
Thermal stresses in applied mathematics
Suggest some of the best topics for experiential learning.
I have N inputs (which correspond to temperatures) used to calculate a specific output parameter. I obtain N output values based on these inputs.
However, my goal is to select an optimum value from all the output data and use it in another calculation.
N input data --> output parameter calculation --> identification of an optimized output parameter --> use of that value in another calculation.
How can I find an optimal value among the N output values? Can we employ any algorithm or process?
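If "optimum" means the output closest to a target value (or simply the minimum or maximum), the selection step reduces to an argmin; here is a minimal NumPy sketch with hypothetical names (`temperatures`, `outputs`, `target`), since the actual optimality criterion has to come from your application:

import numpy as np

# Hypothetical example: N temperature inputs and the resulting output values.
temperatures = np.linspace(300.0, 400.0, 11)          # N inputs (placeholder)
outputs = 0.01 * (temperatures - 355.0) ** 2 + 2.0    # placeholder output model

# Criterion 1: simple extremum.
i_min = np.argmin(outputs)                            # index of the smallest output

# Criterion 2: value closest to a desired target.
target = 2.5                                          # placeholder target
i_best = np.argmin(np.abs(outputs - target))

optimal_value = outputs[i_best]                       # feed this into the next calculation
print(temperatures[i_best], optimal_value)

If the optimum must instead satisfy constraints or trade off several objectives, the same idea generalizes to scoring each output with a penalty function and taking the argmin of that score.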