Science topics: Mathematics
Mathematics - Science topic
Mathematics, Pure and Applied Math
Questions related to Mathematics
  • asked a question related to Mathematics
Question
3 answers
I published a presentation, "Three-Body Problem", in which new equations for the interaction of celestial bodies and a solution method are proposed.
If you look on the Internet, it seems that everyone is enthusiastically looking for a solution to the three-body problem. And then someone exclaims: "I found it!" "And around, silence, taken as a basis," as Vaenga sings in Russian. It turns out that for those who search passionately, the process of searching itself is what matters. It cannot be stopped. One must constantly experience the drama of chaos in the Universe and take pride in belonging to those physicists who tirelessly prove that there is no other World, because it follows from a mathematical solution.
Yes, the World is mathematically substantiated; mathematics is the project, the tool, and the material for creating the World. But mathematics can also be a mere game for the mind. One needs to find the mathematics that describes the project of the Universe. Mathematics that proves there is no Building cannot be the project of the Universe.
It turns out that many seekers like mathematical games. I appeal to the seekers of truth.
Solomon Khmelnik
Relevant answer
Answer
The basis for constructing the equations of motion of celestial bodies was laid by the great Lagrange (I found this on the Internet). As a result of the development of his idea, equations appeared from which it follows that the motion of celestial bodies is a non-repeating, chaotic motion. This is interpreted as a great achievement of modern physics. I did not find on the Internet the one who first said:
A) "From the fact that I, a great scientist, was unable to find a solution, it follows that nature is a fool and does not understand what it is doing."
I, not a great scientist at all, would have said otherwise:
B) "From a solution proving that nature is a fool, it follows that the fool is the scientist who drew such a conclusion."
I am surprised by the friendly chorus of praise for statement A). There are still many problems ahead for which a solution will not be found. We must look for solutions that explain the behavior of nature - this is the purpose of Science.
  • asked a question related to Mathematics
Question
1 answer
It is based on the theory of probability: probability and mathematical statistics are used to analyze data and draw conclusions. Mathematical tools in statistics include statistical analysis, hypothesis testing, probability distributions, etc. These tools are used to analyze samples and to predict the likely behavior of societies and of statistical events in general.
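A minimal sketch of one of the tools mentioned (hypothesis testing) may make this concrete; the sample data and the hypothesized mean below are invented purely for illustration, and the test is a plain one-sample z-test written with only the Python standard library:

```python
import math
from statistics import mean, stdev

def z_test_one_sample(sample, mu0):
    """Two-sided one-sample z-test against the hypothesized mean mu0.

    Uses the sample standard deviation as an estimate of sigma, so this
    is only a reasonable approximation for moderately large samples.
    """
    n = len(sample)
    z = (mean(sample) - mu0) / (stdev(sample) / math.sqrt(n))
    # Two-sided p-value from the standard normal CDF (via the error function).
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Invented example data: does this sample contradict a population mean of 100?
sample = [102, 98, 101, 105, 99, 103, 100, 104, 97, 106]
z, p = z_test_one_sample(sample, mu0=100)
```

With a p-value well above the usual 0.05 threshold, such a sample would not reject the hypothesis; the point is only to show how probability distributions underlie the decision.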
Relevant answer
Answer
In addition to a mathematical basis, there are philosophical foundations. A good statistician may know formulas and equations (mathematics), but there's also a need to know "why" they matter and how to interpret results in real-world contexts (philosophy).
  • asked a question related to Mathematics
Question
9 answers
Does Every Mathematical Framework Correspond to a Physical Reality? The Limits of Mathematical Pluralism in Physics
Introduction
Physics has long been intertwined with mathematics as its primary tool for modeling nature. However, a fundamental question arises:
  • Does every possible mathematical framework correspond to a physical reality, or is our universe governed by only a limited set of mathematical structures?
This question challenges the assumption that any mathematical construct must necessarily describe a real physical system. If we take a purely mathematical perspective, an infinite number of logically consistent mathematical structures can be conceived. Yet, why does our physical reality seem to adhere to only a few specific mathematical frameworks, such as differential geometry, group theory, and linear algebra?
Important Questions for Discussion
• Mathematical Pluralism vs. Physical Reality:
  • Are all mathematically consistent systems realizable in some physical sense, or is there a deeper reason why certain mathematical structures dominate physical theories?
  • Could there exist universes governed by entirely different mathematical rules that we cannot even conceive of within our current formalism?
• Physics as a Computationally Limited System:
  • Is our universe constrained by a specific subset of mathematical frameworks due to inherent physical principles, or is this a reflection of our cognitive limitations in developing theories?
  • Why do our fundamental laws of physics rely so heavily on certain mathematical structures while neglecting others?
• The Relationship Between Mathematics and Nature:
  • Is mathematics an inherent property of nature, or is it merely a tool that we impose on the physical world?
  • If every mathematical structure has an equivalent physical reality, should we expect an infinite multiverse where every possible mathematical law is realized somewhere?
• Beyond Mathematical Formalism:
  • Could there be fundamental aspects of physics that are not fully describable within any mathematical framework?
  • Does the reliance on mathematical models lead us to mistakenly attribute physical existence to purely abstract mathematical entities?
Philosophical Implications
This discussion also touches on a deeper philosophical question: Are we merely discovering the mathematical laws of an objectively real universe, or are we creating a mathematical framework that fits within the constraints of our own perception and cognition?
If mathematics is merely a tool, then our physical theories may be contingent on human cognition and not necessarily reflective of a deeper objective reality. Conversely, if mathematics is truly the "language of nature," then understanding its full structure might reveal hidden aspects of the universe yet to be discovered.
Werner Heisenberg once suggested that physics will never lead us to an objective physical reality, but rather to models that describe relationships between observable quantities. Should we accept that physics is not about describing a fundamental "truth," but rather about constructing the most effective predictive models?
Relevant answer
Answer
"Mathematics is the foundation of all exact knowledge of natural phenomena." (David Hilbert, 1900)
  • asked a question related to Mathematics
Question
1 answer
Introduction: Conceptual Remnants and the Challenge of Physical Objectivity
Physics has long been regarded as the science dedicated to uncovering the fundamental laws governing nature. However, in contemporary theoretical physics, there is an increasing reliance on mathematical models as the primary tool for understanding reality. This raises fundamental questions:
  • Is physics still unknowingly entangled in issues arising from emergent effects?
  • Could these emergent effects create a gap between physical reality and the virtual constructs generated through mathematical modeling?
Throughout the history of science, there have been instances where physicists, without fully grasping fundamental principles, formulated models that later turned out to be mere consequences of emergent effects rather than reflections of objective reality. For instance, in classical thermodynamics, macroscopic quantities such as temperature and pressure emerged as statistical descriptions of microscopic particle behavior rather than fundamental properties of nature.
The crucial question today is: Are we still facing similar emergent illusions in modern theoretical physics? Could it be that many of the sophisticated mathematical models we use are not pointing to an underlying physical reality but are merely the byproducts of our perception and modeling techniques?
Mathematical Models and Conceptual Remnants: Are We Chasing a Mirage?
Mathematics has always been an essential tool in physics, but over time, it has also shaped the way we think about physical reality. In many areas of theoretical physics, mathematical methods have advanced to a point where we may no longer be discovering physical truths but instead fine-tuning mathematical structures to fit our theoretical frameworks.
  • Has theoretical physics become a vast computational engine, focusing on adjusting relationships between mathematical variables rather than seeking an independent physical reality?
  • Could it be that many of the concepts emerging from our models are mere reflections of mathematical structures rather than objective entities in nature?
Examples of such concerns can be found in theories like string theory, where extra spatial dimensions and complex symmetry groups are introduced as necessary mathematical elements, despite lacking direct experimental verification. This raises the possibility that some of these theoretical constructs exist only because they are mathematically required to make the model internally consistent, rather than because they correspond to something physically real.
Fundamental Critique: Should We Even Be Searching for Physical Objectivity?
One of the most profound implications of this discussion is that the very question of whether physics describes "physical reality" might be fundamentally misguided.
Werner Heisenberg once argued that physics will never lead us to an understanding of an objective physical reality. Instead, what we develop are models that describe relationships between observable phenomena—without necessarily revealing the true nature of reality itself.
  • Perhaps physics should not aim to discover a reality independent of our models since every model is ultimately a mathematical structure shaped by human perception.
  • If the goal of physics is not to describe "absolute truth" but rather to create predictive models, should we then accept that we will never fully grasp "what actually exists"?
Finally: Between Computational Accuracy and Physical Reality
The final question in this discussion is: Are we still trapped in emergent effects that arise purely from our mathematical approaches rather than reflecting an objective physical reality?
  • Should physicists strive to distinguish between mathematical models and physical objectivity, or is such a distinction inherently meaningless?
  • Is the search for an independent physical reality a conceptual mistake, as Heisenberg and others have suggested?
Ultimately, this discussion seeks to examine whether physics is merely a computational framework for describing phenomena, or if we are still subconsciously searching for a physical reality that might forever remain out of reach.
Relevant answer
Answer
As the person to provide a reply to this very good observation & question, let me start out with universal truths.
Blue is a universal truth -- as long as we are all using the same color book.
When aliens arrive on planet Earth, and we ask them to declare the color of the clear sky in the daytime, then they will all pick blue from the color book we present them.
Yet the universe itself is not just blue.
So, the specific truth we found cannot declare the truth about the larger reality.
--
Gödel pronounced this some 100 years ago already. How axioms are axioms only in their specific formal systems. At the overall level, they do not hold their value to the same level as inside their formal systems.
This has extremely important consequences for how we work in Physics.
As long as we work within our own specialisms, we are good.
Yet when we step out to the overall level of reality in its totality, the universe, then the specific truths cannot be used -- other than their contributing to the whole.
The spot to go stand at that overall level, however, is available.
  • Yet we need to be careful, hesitant, to say too much.
--
Newton worked with the natural reality as he saw it. He made major strides and we are still leaning very much on all his good work (and others in his days and the following days).
Yet Einstein pulled us away from the natural reality, and many physicists do recognize how there are then two methodologies to follow: the Newtonian and the Einsteinian methodology. Most folks follow Einstein, but jump to Newtonian thinking when this is easiest to explain things.
--
This is what happened, and it can be seen in the art world as well.
Around the turn of the 20th century, physicists and artists started to work at an abstract level. Cubism, Picasso, Dalí (I cannot name them all) started to make art for art's sake.
Same in Physics. Instead of standing on top of the known facts and observations, physicists started to stand on theories.
And that is therefore not a natural representation of reality.
--
The image I have used in the past contrasts religious thinking with scientific thinking and shows an important distinction.
  • The house of religion is perfect on the outside. Yet when asked about the foundation of the house, conflicts will arise, and when asking the wrong crowd one may not even live to regret asking the question.
  • The house of science is wonderful as well, but the roof line shows imperfection. When asking about the foundation, though, physicists gladly show you how everything is put in place.
Since Newton, many have tried to put the best fitting roof on the house of science. No one succeeded because a roof built on the ground next to the house cannot be lifted off the ground. As soon as the construct is lifted, physicists come running to turn the crane's engine off because no scientific framework can leave the ground -- not even when the roof would have been a perfect fit.
In comes Einstein, and instead of constructing a roof on the ground next to the building, he talks about how the roof would be a perfect fit. He theorizes about the perfect roof.
Almost everyone ends up agreeing with Einstein that his roof is indeed a perfect fit.
So far so good.
Then, folks started building on Einstein's perfect roof.
  • His theories become the foundation of modern science.
And that is a faux pas.
Never, ever can a theory become the foundation of science.
--
Here is the mathematical angle:
1 + 1 = 2, but... it does not mean anything.
1 apple + 1 apple = 2 apples is the example to show how mathematics is correct when applied to a real scenario.
But...
1 apple + 1 orange = 2 pieces of fruit
This shows us that the human mind must be kept under control, declaring exactly what it is that it is doing.
Yes, 1 piece of fruit + 1 piece of fruit = 2 pieces of fruit, yet we stepped away from the specific truths of apples and oranges.
The resulting answer of 2 pieces of fruit is correct, but 1 apple + 1 orange cannot be added together. The human brain turns the equation around to find the truth.
The human brain is here to help us see the truth, but we can end up seeing a falsehood instead.
We embrace blue as a universal truth but we end up saying incorrectly (incompletely) that the universe is colorful.
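The apples-and-oranges arithmetic above has a direct analogue in typed programming: "addition" is only defined once both operands are deliberately lifted to a common supertype. A minimal sketch (all class and function names are invented for illustration):

```python
class Fruit:
    """Common supertype: only at this level is 'adding' (counting) defined."""
    pass

class Apple(Fruit):
    pass

class Orange(Fruit):
    pass

def count_fruit(items):
    # We may only 'add' items after deliberately stepping up from the
    # specific truths (Apple, Orange) to the general truth (Fruit).
    assert all(isinstance(item, Fruit) for item in items)
    return len(items)

basket = [Apple(), Orange()]
total = count_fruit(basket)  # 1 apple + 1 orange = 2 pieces of fruit
```

The declaration of the supertype is exactly the "declaring what it is doing" that the text demands of the human mind: the result 2 is only meaningful at the level of Fruit, not at the level of apples or oranges.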
  • asked a question related to Mathematics
Question
4 answers
I would need a detailed mathematical derivation of mimetic finite difference operators in Cartesian as well as curvilinear, in particular spherical, coordinates. I also need to know how the pole problem in spherical coordinates can be tackled when using mimetic operators. Does anybody have a hint for me?
Relevant answer
Answer
Thank you very much. Yesterday I found some articles by the same authors but with very little explanation; in this article, however, they do explain at least something. I will check this out.
  • asked a question related to Mathematics
Question
2 answers
ORCiD: 0000-0003-1871-7803
February 10, 2025
Absolute Collapse Condition
Mass Acquisition at Planck Frequency:
In Extended Classical Mechanics (ECM), any massless entity reaching the Planck frequency (fp) must acquire an effective mass (Mᵉᶠᶠ = hf/c² = 21.77 μg). This acquisition of mass is a direct consequence of ECM's mass induction principle, where increasing energy (via f) leads to mass acquisition.
Gravitational Collapse:
At the Planck scale, the induced gravitational interaction is extreme, forcing the entity into gravitational collapse. This is a direct consequence of the mass acquisition at the Planck frequency, where the gravitational effects become significant.
ECM's Mass-Induction Perspective
Apparent Mass and Effective Mass:
The apparent mass (−Mᵃᵖᵖ) of a massless entity contributes negatively to its effective mass. However, at the Planck threshold, the magnitude of the induced effective mass (|Mᵉᶠᶠ|) surpasses |−Mᵃᵖᵖ|, ensuring that the total mass is positive:
|Mᵉᶠᶠ| > |−Mᵃᵖᵖ|
This irreversible transition confirms that any entity at fp must collapse due to self-gravitation.
Implications for Massless-to-Massive Transition
Behaviour Below Planck Frequency:
Below the Planck frequency, a photon behaves as a massless entity with effective mass determined by its energy-frequency relation. However, at fp​, the gravitating mass (Mɢ​) and effective mass (Mᵉᶠᶠ) undergo a shift where induced mass dominates over negative apparent mass effects.
Planck-Scale Energy:
Planck-scale energy is not just a massive state—it is a self-gravitating mass that collapses under its own gravitational influence. This suggests that at Planck conditions, the gravitationally induced mass dominates over any negative mass contributions, maintaining a positive mass regime.
Threshold Dominance at the Planck Scale
Gravitational Mass Dominance:
At the Planck scale, gravitational mass (Mɢ) is immense due to the fundamental gravitational interaction. Since |+Mɢ| ≫ |−Mᵃᵖᵖ|, the net effective mass remains positive:
Mᵉᶠᶠ = Mɢ + (−Mᵃᵖᵖ) > 0
This suggests that at Planck conditions, the gravitationally induced mass dominates over any negative mass contributions.
Transition Scenarios for Negative Effective Mass
Conditions for Negative Effective Mass:
The condition −Mᵃᵖᵖ > Mɢ could, in principle, lead to a transition where the effective mass becomes negative. This might occur under strong antigravitational influences, possibly linked to:
• Dark energy effects in cosmic expansion.
• Exotic negative energy states in high-energy physics.
• Unstable quantum fluctuations near high-energy limits.
Linking Effective Mass to Matter Mass at Planck Scale
Matter Mass Emergence:
Since Mᵉᶠᶠ ≈ Mᴍ under these extreme conditions, this implies that matter mass emerges predominantly as a consequence of gravitational effects. This aligns with ECM’s perspective that mass is not an intrinsic property but rather a dynamic response to gravitational interactions.
Conclusion
This work on ECM provides a detailed and nuanced understanding of how gravitational interactions can induce mass in initially massless particles, leading to gravitational collapse at the Planck scale. This perspective not only aligns with fundamental principles but also offers potential explanations for cosmic-scale phenomena involving dark matter, dark energy, and exotic gravitational effects. The detailed mathematical foundations and the implications of apparent mass and effective mass in ECM further clarify how mass can dynamically shift between positive, zero, and negative values based on gravitational and antigravitational influences.
Relevant answer
Answer
Dear Mr. Ian Clague,
Thank you for your response and for referencing J. S. Farnes’ "A Unifying Theory of Dark Energy and Dark Matter." However, your comment appears to operate under assumptions that do not align with the framework and specific content of ECM as presented in this discussion.
Irrelevance of External Assertions
Your comment does not directly address or engage with the ECM framework outlined in this discussion but instead refers to an external model, suggesting an alternative premise without evaluating ECM’s treatment of the subject matter. While referencing other works can be useful in comparative discussions, an assertion such as “Negative mass can explain Dark Matter” without any engagement with the ECM-specific perspective does not constitute a meaningful counterpoint.
Misalignment with ECM's Dark Matter Interpretation
Your statement that "Negative mass can explain Dark Matter as the interaction of negative mass with positive mass" does not apply to ECM, which treats dark matter as possessing positive effective mass. ECM presents dark matter as a contributing component to the total positive matter mass of a system, alongside baryonic matter. The claim that dark matter must be explained via negative mass is inconsistent with ECM’s construct, which does not require negative mass to account for dark matter effects.
ECM’s Treatment of Negative Mass vs. Your Assertion
In ECM, negative apparent mass (−Mᵃᵖᵖ) arises as a motion-dependent or gravitationally induced property, rather than as an intrinsic mass entity. The framework does not support the notion of self-existing, freely interacting negative mass, as assumed in your reference. This distinction is critical because ECM does not describe dark matter in terms of negative mass, contrary to your assertion that "Negative mass can explain Dark Matter."
ECM’s Explanation of Dark Energy vs. Your Interpretation
Your assertion that "Dark Energy [is] the interaction of negative mass and negative mass" contradicts ECM’s position. ECM interprets dark energy as possessing negative effective mass that interacts with the total positive effective mass of ordinary and dark matter. In ECM, dark energy does not arise from negative mass interacting with itself but rather from its interaction with an overall positive matter mass distribution.
Conclusion
Your statements regarding negative mass as the explanation for dark matter and dark energy do not align with ECM’s theoretical structure. The presentation of ECM explicitly defines dark matter as a positive-mass entity and describes dark energy as having a negative effective mass interacting with positive effective mass—not through the interaction of two negative masses, as you claim.
While alternative models, such as Farnes’ theory, exist, an assertion that they necessarily override ECM’s conclusions would require a rigorous comparative analysis rather than an unqualified statement. As such, your assertions are not consistent with ECM’s framework, nor do they provide a valid refutation of its premises.
Best regards,
Soumendra Nath Thakur
  • asked a question related to Mathematics
Question
8 answers
The concept of infinity is well known in mathematics, and I have no disagreement with this. But in the real world, does anything exist beyond the finite: the number of species, the number of water molecules, or even the number of stars?
Relevant answer
Answer
Koushik Garain, the statement "If the ball stops bouncing after some time, then the number of bounces is finite" is false, because in our case the time between successive bounces tends to zero; in particular, it tends to zero as a geometric progression. This means that the total time of all but finitely many bounces of the ball is negligible; i.e., it is less than every epsilon > 0.
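The geometric-progression argument can be checked numerically: if each interval between bounces is a fixed fraction r of the previous one, then infinitely many bounces fit inside the finite total time t0/(1 - r). A small sketch with invented parameters:

```python
def total_bounce_time(t0, r, n_bounces):
    """Sum of the first n_bounces inter-bounce intervals t0 * r**k."""
    return sum(t0 * r**k for k in range(n_bounces))

t0, r = 1.0, 0.5        # first interval 1 s, each interval half the previous
limit = t0 / (1 - r)    # geometric-series limit: 2 s in total

# Even a thousand bounces never exceed the 2-second limit: the ball
# performs infinitely many bounces, yet stops bouncing after 2 seconds.
partial = total_bounce_time(t0, r, 1000)
```

This is the quantitative content of the answer: "stops bouncing after some time" and "infinitely many bounces" are compatible, because the tail of the series is smaller than any epsilon > 0.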
  • asked a question related to Mathematics
Question
1 answer
Discuss your research ideas
Relevant answer
Answer
The relationship between music and mathematics is profound and multifaceted. Here are some key points to illustrate this connection, according to several sources:
  1. Rhythm and Time Signatures: Music is structured around rhythms and time signatures, which are fundamentally mathematical. The division of beats into measures, using fractions to denote note lengths, and the patterns of rhythmic sequences all involve mathematical principles.
  2. Frequency and Pitch: The pitch of a musical note is determined by the frequency of the sound wave. The relationship between pitches on a scale is based on mathematical ratios. For example, an octave is a doubling of frequency.
  3. Harmonics and Overtones: Musical sounds are composed of fundamental frequencies and their harmonics or overtones. The harmonic series is a sequence of mathematically related frequencies influencing musical instrument timbre.
  4. Scales and Intervals: The construction of musical scales, such as the diatonic and chromatic scales, involves specific mathematical patterns. Precise frequency ratios define the intervals between notes.
  5. Tuning Systems: Various tuning systems, such as equal temperament and just intonation, are based on mathematical calculations. These systems determine how notes are spaced within an octave to create harmonious sounds.
  6. Musical Form and Structure: Compositional techniques often involve mathematical concepts. For example, symmetry, patterns, and geometric shapes are used in structuring a piece of music.
  7. Algorithmic Composition: Some modern composers use algorithms and mathematical models to create music. This approach can use fractals, probability, and other mathematical tools to generate musical ideas.
The intricate relationship between music and mathematics highlights the universality of mathematical principles and their application in creative fields.
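Several of the points above (octave doubling, intervals, tuning systems) can be illustrated in a few lines of Python; A4 = 440 Hz and 12-tone equal temperament are the standard conventions assumed here:

```python
def equal_temperament_freq(n, a4=440.0):
    """Frequency of the note n semitones above (or below) A4 in 12-TET.

    Each semitone multiplies the frequency by 2**(1/12), so 12 semitones
    double it: an octave is exactly a 2:1 frequency ratio.
    """
    return a4 * 2 ** (n / 12)

a5 = equal_temperament_freq(12)   # one octave above A4: 880 Hz
e5 = equal_temperament_freq(7)    # a perfect fifth above A4

# The tempered fifth 2**(7/12) ~ 1.4983 approximates the just ratio 3/2,
# which is the trade-off equal temperament makes against just intonation.
fifth_ratio = e5 / 440.0
```

The small gap between 2**(7/12) and 3/2 is precisely the mathematical compromise that distinguishes equal temperament from just intonation in point 5.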
  • asked a question related to Mathematics
Question
4 answers
It is well known that the Fields Medal is intended for excellent research by mathematicians under forty years old, because many mathematicians think that the main contributions in a researcher's life are obtained before forty. I do not believe so. It is true, by common experience, that students of Mathematics, who are constantly interacting with several (and sometimes very different) subjects at the same time, develop a wealth of good ideas which inspire them and lead them to new and interesting results. This interaction between different branches is expected to remain (more or less consciously) up to forty years of age. For the same reason, if necessary, any researcher, independently of his/her age, may return to studying different mathematical subjects and create new important contributions, even in his/her very specific area of research. Furthermore, it may help to overcome a blockade. It is incredible how studying different subjects again inspires you and, combined with your experience and knowledge, lets you see their contents from a new perspective, often helping in your own area of research by creating new knowledge and solving problems. This is why I believe that the career of each mathematician is always worthwhile and continuous, independently of his/her age, as demonstrated by the most senior mathematicians in all areas of research, who are living examples for us.
What is your opinion on the relationship between the age of a researcher and the quality of his/her contributions?
Thank you very much beforehand.
Relevant answer
Answer
In my personal experience, there is no direct correlation between the age of a mathematician and the quality of their research. The quality of mathematical research is directly related to the passion and devotion which the researcher has for the subject. In fact, it is high time that mathematical contributions from all age groups were equally appreciated and an environment created wherein cross-disciplinary, multi-domain research spanning all age groups is fostered.
  • asked a question related to Mathematics
Question
5 answers
Hi all,
I'm a final-semester UG B.Tech student. I want to do research in mathematics and publish a paper. Can anyone help me find problem statements for this?
Relevant answer
Answer
Dear Patil,
To write papers, you can transform some results of Topology
and Algebra to relator (generalized uniform) spaces.
With best regards,
Arpad Szaz
  • asked a question related to Mathematics
Question
14 answers
In mathematical and statistical models?
Relevant answer
Answer
A parameter is a numerical characteristic or value that describes a specific aspect of a population or a model, such as the mean or variance. It is typically fixed and unknown, and it's estimated using data from a sample.
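A small simulation can make the distinction concrete: the population mean below plays the role of a fixed (in practice unknown) parameter, while the sample mean is the statistic used to estimate it. All numbers are invented for illustration:

```python
import random
from statistics import mean

random.seed(42)  # reproducible illustration

# Parameter: a fixed (in practice unknown) characteristic of the population.
population_mean, population_sd = 170.0, 10.0

# Statistic: computed from a sample drawn from that population,
# used to estimate the parameter.
sample = [random.gauss(population_mean, population_sd) for _ in range(1000)]
estimate = mean(sample)  # should land close to the true parameter 170.0
```

Rerunning with different seeds changes the statistic from sample to sample, while the parameter stays fixed, which is exactly the distinction the answer describes.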
  • asked a question related to Mathematics
Question
2 answers
# 163
Dear Sarbast Moslem , Baris Tekin Tezel, Ayse Ovgu Kinay, Francesco Pilla
I read your paper:
A hybrid approach based on magnitude-based fuzzy analytic hierarchy process for estimating sustainable urban transport solutions
My comments:
1- In the abstract you say “The study employs the newly developed Magnitude Based Fuzzy Analytic Hierarchy Process, chosen for its accuracy and computational efficiency compared to existing methods”
Are you aware that Saaty explained that it is incorrect to apply fuzzy in AHP because it is already fuzzy?
Since when does using intuition ensure accuracy? Do you have any proof of what you say?
Sensitivity analysis does not ensure quality; what it does is show how strong a solution is.
2- On page 2 you talk about linear regression for evaluation. Linear regression is used to predict the value of a dependent variable from the value of one or more independent variables.
3- In page 3 “AHP offers mechanisms for ensuring consistency in decision-making through pairwise comparisons and sensitivity analyses”
True, AHP ensures consistency by FORCING adherence to transitivity, by imposing on the DM to change his/her estimates. What it ensures is transitivity without any mathematical foundation, just for the sake of the method. Therefore, this 'consistency' is fabricated.
Even if there is real consistency, it reflects the coherence of the DM, but it does not necessarily mean that this consistency and the weights can be applied to the real world. There is no mathematical support for this assumption, nor for assuming that the world is consistent, let alone common sense; it is convenient indeed for the method, but useless for evaluation.
4- Page 3 “On the other hand, expressing the data in the form of fuzzy numbers to better express the uncertainty in individual judgments has led to the suggestion and widespread use of fuzzy AHP (FAHP) methods, which include calculations based on fuzzy arithmetic”
5- In several parts of the paper it mentions validation of results. That is only a wish, because no MCDM method has any real yardstick to compare to. It is another and very common fallacy.
6- Page 8, Fig. 4. I understand that waiting time does not depend on speed but on bus arrival frequency (number of buses of the same route per hour). The higher the frequency, the shorter the waiting time. What role does bus speed play here? It appears that this concept does not come from transportation experts.
"Reaching the destination without shifting buses"
I guess that 'interchanging buses or routes' is more adequate.
'Need of transfer' normally refers to paying a single ticket that allows a passenger to change bus routes; that is, he/she can board another bus with the same ticket.
Your definition of 'Time availability' does not seem very coherent, because what does "Number of times that UBT is deployed??? over a route" mean?
"Limited time of use" (C4.2)???? I understand that you want to say 'Operating hours', that is, when buses start and finish running, or simply 'Scheduling'. I am afraid that your expression does not belong to the urban transportation industry.
Please do not be offended by my observations; that is not my intention. It is just that if you want readers to understand what you write, you must use the appropriate words. If not, your work risks being misunderstood and downgraded.
Why “providing new buses” is related to “comfort” in stops?
7- Page 9 “In other words, it models the state of uncertainty in the mind of the decision-maker.”
Very true, but why can those uncertainties of a MIND be translated to real life? In other words, what theorem or axiom supports that what the DM estimates can be used in the real world? It is a simple assumption that even defies common sense. In my opinion, it does not make any sense to apply fuzzy to invented values. Yes, one will get a crisp value, and what is it good for? For nothing.
These are some of my comments. Hope that they can help you
Nolberto Munier
Relevant answer
Answer
Sarbast Moslem added a reply
Dear Nolberto Munier, thank you so much for your constructive feedback,
NM-You are most welcome, and I deeply thank and appreciate your answer
SM- Fuzzy AHP is not universally superior but offers advantages in high-uncertainty contexts, here I tried to cover most of your interesting comments,
NM – AHP is universally known, I grant it, but that does not mean that it is effective; in my opinion it is unreal and flawed. What FAHP does is simply find an average value of the DM estimates. Please look at this, from fuzzy analytic hierarchy process:
Toly Chen (2020): "(FAHP) has been extensively applied to multi-criteria decision making (MCDM). However, the computational burden resulting from the calculation of fuzzy eigenvalue and eigenvector is heavy. As a result, a FAHP problem is usually solved using approximation techniques such as fuzzy geometric mean (FGM) and fuzzy extent analysis (FEA) instead of exact methods. Therefore, the FAHP results are subject to considerable inaccuracy"
So, the DM accumulates inaccuracy on uncertainty. Your opinion please.
SM- 1. Saaty’s critique of Fuzzy AHP
Saaty argued that AHP inherently accounts for uncertainty through its 1–9 scale and consistency checks. However, fuzzy AHP proponents contend that fuzzy sets better model linguistic ambiguity (e.g., "somewhat important") and granular uncertainty. Empirical studies (e.g., Kahraman et al., 2003; Bozbura et al., 2007) demonstrate fuzzy AHP’s effectiveness in complex, uncertain contexts. Sensitivity analysis here tests robustness, not "accuracy," but computational efficiency can be shown via reduced pairwise comparisons or faster convergence in fuzzy logic frameworks.
NM- You are using the right words “Saaty argued” for its table and consistency checks.
I agree that fuzzy models linguistic ambiguity better, but just for the benefit of the DM, not for the project. Now I ask you: why must the measure of the DM's coherence affect the problem?
Another question: how do Kahraman and Bozbura demonstrate that fuzzy AHP is effective if they do not have a yardstick to measure it? You do not need mathematics for this, only common sense and reasoning, which, as you probably noticed, is the way I analyze a method in discussions.
Regarding SA you are correct; its purpose is to measure the strength of the best solution, and this is very useful indeed. But may I remind you that SA in AHP consists in selecting, as the most important criterion to vary, the one with the highest weight, without any mathematical proof of that, all based on intuition? In addition, by selecting only one criterion and keeping the others constant (ceteris paribus), the DM is applying an incorrect procedure.
SM- "Saaty argued that AHP inherently accounts for uncertainty through its 1–9 scale and consistency checks."
NM- Now, why must estimates from a DM be consistent? In my opinion, it is due to the way the eigenvalue (EV) method can be applied. If you have an inconsistent matrix, the EV method delivers the weakest weights; therefore it is only a convenient feature for AHP.
SM- 2. Linear Regression for Evaluation
Linear regression may evaluate relationships between variables (e.g., ridership vs. service quality). If the paper uses it to predict outcomes (e.g., demand), this aligns with standard practice. If used to assess criteria weights, clarification is needed, as AHP/FAHP is better suited for weighting.
NM- I do not follow your reasoning. Linear regression is a mathematical process, while AHP is not.
SM-3. Consistency in AHP
AHP enforces transitivity (if A > B and B > C, then A > C) via the consistency ratio (CR), flagging illogical judgments. While CR ensures internal coherence, it does not guarantee real-world validity. This is a limitation of any preference-based method. However, consistency checks reduce arbitrary biases, making weights more reliable for stakeholders.
NM- You used the correct words: "enforces transitivity". Do you think it is natural that a formula disavows the estimate of a DM? At least you recognize that it does not guarantee real-world validity, something that Saaty also said. I do not think that there is any MCDM method that models reality in its integrity, but rational methods like PROMETHEE, ELECTRE, TOPSIS, VIKOR use reasoning, analysis, experience, research, consultation, something that AHP ignores.
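For readers following this exchange, the eigenvalue weighting and consistency ratio being debated can be reproduced in a few lines. This is a generic textbook illustration, not material from the paper under discussion: the 3x3 judgment matrix and the random-index value are my own stand-ins.

```python
import numpy as np

# A reciprocal pairwise comparison matrix on Saaty's 1-9 scale (invented example)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# Weights = normalized principal (Perron) eigenvector
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
lam_max = eigvals.real[k]
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Saaty's consistency index and ratio (RI = 0.58 is the tabulated value for n = 3)
n = A.shape[0]
ci = (lam_max - n) / (n - 1)
ri = {3: 0.58}[n]
cr = ci / ri  # CR < 0.1 is the conventional acceptance threshold
```

Note that the check only measures internal coherence of the judgments, which is exactly Munier's objection: a perfectly consistent matrix can still be disconnected from reality.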
SM- 4. Fuzzy Numbers in FAHP
Fuzzy numbers explicitly model uncertainty (e.g., "between moderate and strong preference"). While defuzzification produces crisp outputs, the process preserves uncertainty ranges, offering richer insights than deterministic AHP.
NM- Fuzzy logic only finds the average of the DM estimates. And where do the uncertainty ranges come from?
Do you know that there are rational MCDM methods that can determine them, and in published papers with real-life examples?
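Munier's remark that fuzzy logic "only finds the average" can be seen directly in the common centroid defuzzification rule for a triangular fuzzy number (l, m, u), which reduces to an arithmetic mean. A minimal sketch with invented judgment values:

```python
def centroid_defuzzify(tfn):
    """Centroid of a triangular fuzzy number (l, m, u) -> crisp value."""
    l, m, u = tfn
    return (l + m + u) / 3.0

# A DM's "between moderate and strong" judgment modeled as a triangular fuzzy number
crisp = centroid_defuzzify((2.0, 3.0, 7.0))  # -> 4.0
```

Whether that crisp average carries more real-world meaning than the original point estimate is exactly the question under debate in this thread.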
SM- 5. Validation in MCDM
Validation in MCDM often relies on expert consensus, sensitivity analysis, or benchmarking against historical decisions. While no universal "yardstick" exists, practical applicability and stakeholder acceptance validate results.
NM- Consensus is not validation. The first implies an agreement of intervening parties, each one yielding on some issue and gaining on another. Validation is to compare a ranking from analytical tools with the true, existing ranking, which is completely unknown and not available to any MCDM method. Therefore, validation is impossible. Just think: if the DM knew the true results, applying MCDM would be unnecessary.
Validation is not related to sensitivity analysis (SA), which tries to determine the strength of the best alternative when criteria (not a single criterion, in most cases) are increased/decreased. If the best alternative depends on, say, three criteria, and for each one there is a reasonable allowed variation gap, the DM can decide that said alternative is strong, but this is not validation.
SM- 6. Transportation Terminology
· Waiting Time: Corrected—waiting time depends on frequency, not speed. Speed affects in-vehicle travel time
NM- Agreed. I think that I made a mistake, sorry for that. Waiting time also depends on other aspects such as capacity, benches, information screens, punctuality of buses, and weather (temperature: in some Canadian cities heaters are installed, while in Dubai air conditioning is a must). Therefore, it is not that simple.
· Transfer Need: Revised to "interchanging buses/routes."
· Time Availability: Clarified as "service frequency" (buses/hour).
· Limited Time of Use: Adjusted to "operating hours" (e.g., 6 AM–10 PM).
· Comfort in Stops: "Providing new buses" may improve seating/shelters at stops, indirectly enhancing comfort.
NM- Providing new, more comfortable buses is comfort on board. Seating in shelters provides comfort on land between buses; stops must provide comfort too, such as: easy boarding for the elderly and people in wheelchairs, enough seats, a screen signaling the next arrival at a station, comfortable temperature, information, cleanliness, etc.
SM- 7. Fuzzy Logic and Real-World Relevance
Fuzzy logic is axiomatically grounded in Zadeh’s theory (1965) to model human reasoning under uncertainty. While fuzzy outputs are approximations, they reflect realistic trade-offs in ambiguous contexts (e.g., public transit preferences). The value lies in structuring qualitative judgments, not precise predictions.
NM - Agreed on fuzzy logic. Do you know that Saaty said that fuzzy should not be used in AHP because the method is already fuzzy? In my opinion, fuzzy relies on estimated geometrical forms to determine memberships, using, for instance, three invented values. Therefore, it can correctly average the DM's coherence, but it is not related to reality, which is what is really important.
Dear Szabolcs Duleba,
NM- It would be very productive if Mr. Duleba could collaborate on this important subject.
  • asked a question related to Mathematics
Question
3 answers
Mathematics and statistics have been used to develop encryption techniques that protect against cyber threats and ensure the security of information and data. Mathematics is also used to generate complex passwords and to develop machine learning models that detect cyber attacks and threats in networks. In addition, various mathematical concepts such as algebra, geometry, statistics, and numerical analysis are used in specific areas such as security software development, software bug checking, databases, and complex networks.
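As a toy illustration of the number theory behind such encryption techniques, here is textbook RSA with deliberately tiny primes (completely insecure, for exposition only):

```python
# Textbook RSA key generation with tiny primes (never use sizes like this in practice)
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi

message = 65                   # a plaintext encoded as an integer < n
cipher = pow(message, e, n)    # encryption: m^e mod n
plain = pow(cipher, d, n)      # decryption: c^d mod n, recovers the message
```

The whole scheme rests on modular arithmetic: decryption works because e·d ≡ 1 (mod φ(n)), while security rests on the difficulty of factoring n for large primes.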
Relevant answer
Answer
It seems obvious, but many people still think that mathematics is not necessary ;)
  • asked a question related to Mathematics
Question
6 answers
A control volume is a mathematical construct used in the process of creating mathematical models of physical processes.
We assume that the control volume in R^4 space (3D+t external control) is incomplete and almost useless for dealing with complex physical and mathematical problems.
On the other hand, the 4D x-t unitary control volume called Cairo techniques and B matrix space is the complete universal space adequate for the description of classical and quantum physics situations as well as mathematical events.
In a way, this is the correct approach to the unified field theory (energy density).
Relevant answer
Answer
This is a brief response, first to shed light on the question and its answer, and second to thank our Algerian collaborator friend for his brilliant analysis.
This motivates me to answer them step by step.
1- (Your approach suggests a new perspective on control volumes and their role in modeling physical systems, particularly in the context of unified field theory and quantum mechanics. The idea of extending the concept of control volume from R^4 (3D space + time) to a more structured 4D x-t unit control volume (called Cairo techniques and B matrix space) implies a richer mathematical framework to describe classical and quantum phenomena.)
*This is not just an approach but a new, comprehensive, well-established theory (started in 2020) that replaces current incomplete mathematics and physics in R^4 space (3D + t as external controller).
The theory of Cairo techniques and B-matrix chains in a 4D x-t unit control volume provides a richer mathematical framework for describing classical and quantum physics, in addition to mathematical integration, summation of geometric series, and derivation of physical statistical distributions such as the Gaussian and the Planck distribution of blackbody radiation, etc.
All the above sub-theories are explained in detail by the author in more than 100 articles published on ResearchGate and in the IJISRT journal.
2-(Some key points that could be explored further:
Completeness and universality: Why is the traditional control volume R^4 considered incomplete? What specific characteristics make the Cairo techniques and B matrix space a "complete universal space"?)
** A. Einstein is considered the pioneer of the 4D x-t unit space in his theories of SR and GR
He discarded the space of R^4 because he considered it incomplete.
Unfortunately, the giant Einstein did not find the time or the tools (computers) to move forward and correctly define time.
3-(Energy Density and Unified Field Theory: How does this new control volume relate to energy density formulations? Does it provide a new metric or transformation that unifies classical and quantum descriptions?
Mathematical structure: Is the B matrix space a new type of tensor space or an extension of known structures such as Hilbert space, spin networks or twistor theory? How does it handle symmetries, conservation laws and gauge invariance?
Your proposal seems to hint at a deeper mathematical model for space-time interactions.)
*** Our universe is made up of space, time and energy density.
The transition matrix B and its product matrices D and E are all doubly symmetric matrices.
There is no deeper mathematical model for space-time interactions, but time itself is considered a dimensionless integer woven into the three-geometry x-y-z space.
4-(If you have any references or equations related to Cairo techniques and B matrix space, I would love to dig into the details!)
***There are no references to the Cairo Techniques and Matrix Space B, except for the Cairo Techniques and Matrix Space B themselves.
4D-x-t space is completely new and never known before.
  • asked a question related to Mathematics
Question
7 answers
My answer is negative and thoroughly substantiated via 2 points.
1) The easier part (lesser limiting factor): he has to comprehend the approach used in physics thinking and epistemology (i.e. working with hypotheses instead of etiological thinking, refraining from teleological inquiries, etc.), the importance of relying on maths, the relevance of equations, etc. Not easy, but it can be accomplished to a large degree by serious commitment and authentic interest.
2) Ease with representations (geometrical, model-wise, etc.) of physical systems and working cognitively on that level, abstract applied mathematical thinking (this may not be easy even for mathematicians), etc. This is something that, in my opinion, requires an inborn trait.
Relevant answer
Answer
I would add that, besides a proper definition of "good" at something, it is also necessary to define "physics" for that matter. Both may vary to a large extent.
Put it another way: I know quite a few people who are bad at physics but who nevertheless do physics (theoretically or practically). An average human, sufficiently motivated, can become better at physics than many of them.
The notion of "good in physics" sometimes is understood as "largely accepted by physics academic society", which is completely wrong. The obstacles to accept somebody come from exactly this category of bad in physics but having scientific career, for which any new scientist makes competition.
On the other hand, there are people who achieved great results without a conventional education in their field (e.g. Albert Einstein, Steve Jobs, Elon Musk).
So, my answer is: "Probably yes (see the first paragraph)". Another question would be: "Is it worth it to become good at physics/a physicist given enough effort and commitment?" The answer to this question is up to each individual person.
  • asked a question related to Mathematics
Question
3 answers
Discuss it
Relevant answer
Answer
Most physical and chemical formulas come from deterministic mathematical models.
  • asked a question related to Mathematics
Question
5 answers
I need CBSE India 10th board examination data for the mathematics subject.
Relevant answer
Answer
Sorry to hear it, Durgesh, but at least you found your way to an answer, albeit not the one you must have hoped for.
  • asked a question related to Mathematics
Question
3 answers
In the attached paper titled "Approximate expressions for BER performance in downlink mmWave MU-MIMO hybrid systems with different data detection approaches", the mathematical operator ℝ{.} is used with a minus sign in Equation (14). Can anybody help explain the meaning of this operator? Why is a minus sign used?
Relevant answer
Answer
IYH Dear Neeraj Sharma I am happy to see you are really trying to understand this paper.
OK, to your questions:
Q1. Signal Power and Noise Power in Eq. 15
Signal Power: The signal power in Eq. 15 is represented by the term
P_signal = (Δ_ij^(k))^H A_k^(-1) Δ_ij^(k)
  • Δ_ij^(k) represents the signal difference.
  • A_k^(-1) is the inverse of the detection matrix, which accounts for the channel effects.
This term captures the energy of the signal difference between the two possible transmitted symbols after passing through the effective channel and the receiver's processing. This is crucial because the receiver uses this energy to distinguish between different symbols. A higher signal power generally means a stronger signal, which is easier to detect and less likely to be confused with other symbols
Noise Power: The noise power in Eq. 15 is represented by the term
P_noise = (Δ_ij^(k))^H A_k^(-1) K_k A_k^(-1) Δ_ij^(k)
  • K_k represents the noise covariance matrix.
  • The structure of this term indicates how the noise interacts with the signal difference through the channel.
This term accounts for the variance of the noise and interference, weighted by the detection matrix A_k and the signal difference Δ_ij^(k). Noise power represents the strength of the unwanted signals that corrupt the transmitted signal. A lower noise power means less interference, making it easier for the receiver to correctly identify the transmitted symbol.
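As a numerical sanity check of these two quadratic forms, the sketch below uses invented stand-ins for Δ_ij^(k), A_k and K_k (random data, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
# Hypothetical stand-ins: a complex signal difference, an SPD detection matrix,
# and a white-noise covariance (all invented for illustration)
delta = rng.standard_normal(m) + 1j * rng.standard_normal(m)   # Δ_ij^(k)
G = rng.standard_normal((m, m))
A = G @ G.T + 4.0 * np.eye(m)                                  # A_k (SPD)
K = 0.1 * np.eye(m)                                            # K_k
A_inv = np.linalg.inv(A)

# The two quadratic forms from the text; both are real and positive for SPD matrices
p_signal = (delta.conj() @ A_inv @ delta).real
p_noise = (delta.conj() @ A_inv @ K @ A_inv @ delta).real
```

With white noise (K_k proportional to the identity), P_noise collapses to a scaled version of the same quadratic form with A_k^(-2), which makes the role of the detection matrix easy to see.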
Q2. Interpretation of Eq. (13)
Eq. (13) represents the condition for a pairwise error event for user k. It states that an error occurs when the squared Euclidean distance between the received signal y_k and the estimated signal B_k s_i is greater than the squared Euclidean distance between the received signal y_k and the estimated signal B_k s_j, for j ≠ i. This condition is the basis for calculating the pairwise error probability (PEP), which is a fundamental building block for estimating the overall Bit Error Rate (BER).
The squared Euclidean distance ||A_k^(1/2) (y_k − B_k s_i)||^2 represents the distance between the received signal and the expected signal for a particular symbol. The receiver uses these distances to decide which symbol was most likely transmitted. The symbol with the smallest distance is chosen as the most likely transmitted symbol.
What motivates this? Defining this error event condition is essential because it allows us to quantify the likelihood of errors in the communication system. By understanding when errors occur, we can design better systems to minimize these errors.
Lastly, the decision metric used by the receiver is based on these distances. The receiver chooses the symbol that minimizes the squared Euclidean distance, which is a common and effective method for symbol detection in noisy channels.
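That minimum-distance decision rule can be sketched generically; the effective channel B, the received vector y and the candidate symbol list below are hypothetical placeholders, not quantities from the paper:

```python
import numpy as np

def detect(y, B, symbols):
    """Pick the candidate symbol vector s minimizing the squared distance ||y - B s||^2."""
    d2 = [float(np.linalg.norm(y - B @ s) ** 2) for s in symbols]
    return symbols[int(np.argmin(d2))]

# Toy usage: identity channel, two BPSK-like candidate vectors
B = np.eye(2)
candidates = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]
decision = detect(np.array([0.9, 1.2]), B, candidates)  # closest to [1, 1]
```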
Q3. Explanation of Eq. (14) and its leading to Eq. (15)
Gaussian Approximation: Eq. (14) assumes that the real part −ℜ{(Δ_ii^(k) − Δ_ij^(k))^H A_k^(−1) ñ_k} is approximately a Gaussian random variable with a mean of 0 and a variance of 0.5·(Δ_ij^(k))^H A_k^(−1) K_k A_k^(−1) Δ_ij^(k).
This approximation is valid due to the Central Limit Theorem, which states that the sum of a large number of independent random variables tends to be Gaussian. In communication systems, the noise and interference from multiple sources can often be modeled as Gaussian, especially when the number of users and interfering signals is large.
Using the Gaussian approximation, the pairwise error probability (PEP) can be calculated using the Q-function, which gives the probability that a standard normal random variable will take a value greater than a certain threshold. The argument of the Q-function is the square root of the ratio of the signal power to the noise power. This is a standard tool in communication theory for calculating error probabilities.
Before using the Q-function, the Gaussian random variable is standardized by subtracting its mean (which is 0) and dividing by its standard deviation. This results in a standard normal random variable with mean 0 and variance 1. Standardization is necessary because the Q-function is defined for standard normal variables. This step ensures that the random variable fits the form required by the Q-function.
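A minimal sketch of the resulting PEP computation, assuming (as the text above states) that the Q-function argument is the square root of the signal-to-noise power ratio:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x), computed via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pep(p_signal, p_noise):
    """Pairwise error probability approximation: Q(sqrt(P_signal / P_noise))."""
    return q_func(math.sqrt(p_signal / p_noise))
```

As expected, the PEP decreases monotonically as the signal power grows relative to the noise power.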
Q4. Applicability of Eq. (17) for SU-MIMO by Setting K=1
That's a very insightful question! You're right to consider how the expressions might simplify for the Single-User MIMO (SU-MIMO) case.
Setting K=1 in Eq. (17) would make it applicable for the SU-MIMO (Single User Multiple Input Multiple Output) case, as it would consider only one user in the system.
However, the equation is derived for the MU-MIMO (Multiple User Multiple Input Multiple Output) case, and setting K=1 would not necessarily provide the correct BER performance for the SU-MIMO case, as the interference terms would not be present.
While setting K=1 simplifies the expression, it does not directly make it applicable to a general SU-MIMO case.
  1. The Meaning of B_k and K_k: In the original MU-MIMO context, B_k and K_k are the effective channel matrix and noise-plus-interference covariance matrix, respectively, for user k. In the SU-MIMO case, there is no inter-user interference. Therefore, K_1 should only represent the noise covariance matrix, and B_1 should represent the effective channel matrix for the single user.
  2. The Meaning of A_k: The matrix A_k is related to the data detection approach. In the MU-MIMO case, the data detection approach needs to deal with inter-user interference. In the SU-MIMO case, the data detection approach only needs to deal with noise. Therefore, the matrix A_1 should be chosen accordingly.
  3. The Meaning of M: In the MU-MIMO case, M is the number of possible transmitted symbol vectors for each user. In the SU-MIMO case, M is the number of possible transmitted symbol vectors for the single user.
  4. The Meaning of c: The constant c is defined as c = (K·M·log_2(M))^(-1). In the SU-MIMO case, c should be defined as c = (M·log_2(M))^(-1).
In practice, when you analyze an SU-MIMO system, you would typically derive the BER expression directly for the SU-MIMO case, taking into account the specific channel, precoder, combiner, and data detection approach. You would not typically start with the MU-MIMO expression and then try to adapt it.
Q5. Connection between Q, M and the modulation technique
Another crucial point, well done! There's a very direct and important connection.
Briefly, the data vector s belongs to a constellation Q, which is the set of all possible symbols that can be transmitted. In Eq. (17), M is defined as the number of possible transmitted symbol vectors for the users. The number of possible transmitted symbol vectors M is directly related to the size of the constellation Q and the number of streams being transmitted. The modulation technique determines the constellation Q, and therefore indirectly influences the number of possible transmitted symbol vectors M.
You're correct that in 16-QAM modulation, 16 possible combinations of bits are generated. This means that the constellation Q for 16-QAM has 16 symbols. If you are transmitting a single stream using 16-QAM, then M = 16. If you are transmitting two streams using 16-QAM, then M = 16 × 16 = 256.
  • asked a question related to Mathematics
Question
3 answers
Hi dear professors. I want to share with you a little arithmetic formula, in order to hear your views on its usefulness and meaning. This is the formula, as in the attached picture: 1 + 8·E^2 ≠ 3·F^2, where E and F can be any integers.
The proof is easy in the article of this link:
Relevant answer
Answer
You can see that this is true by working modulo 4, I think. The LHS will be congruent to 1, but the RHS to either 0 or 3.
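The modulo-4 argument is easy to check by brute force over a finite range of integers (a sanity check, not a proof for all integers):

```python
def lhs(e):
    return 1 + 8 * e * e   # ≡ 1 (mod 4), since 8·e² is a multiple of 4

def rhs(f):
    return 3 * f * f       # ≡ 0 or 3 (mod 4), since f² ≡ 0 or 1 (mod 4)

# The two sides land in disjoint residue classes mod 4, so they can never be equal
assert {lhs(e) % 4 for e in range(-100, 101)} == {1}
assert {rhs(f) % 4 for f in range(-100, 101)} <= {0, 3}
assert all(lhs(e) != rhs(f) for e in range(-50, 51) for f in range(-50, 51))
```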
  • asked a question related to Mathematics
Question
2 answers
Under the theme “Mathematics for All, Mathematics as a Path to Development,” one of the best examples of math's impact on daily life is its application in medical imaging, such as CT and CBCT scanners.
How do these devices work?
Mathematics is at the heart of it! When X-rays are emitted from a source, they pass through tissues and are captured by detectors. Along their path, these rays undergo absorption or attenuation, which depends on the properties of the tissue they traverse. Each ray's path represents a unique equation, where the unknowns are the attenuation values of the points it passes through.
As the source rotates around the object, it generates multiple paths, creating a system of n equations with n unknowns. Sophisticated software solves this complex system using advanced algorithms, such as the Radon Transform or iterative methods, to reconstruct the internal structure of the object as a detailed 3D image.
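A toy version of that system of equations can be set up and solved in a few lines. The 2x2 "phantom", the ray geometry and the attenuation values below are invented for illustration; real scanners involve far larger systems and typically use the Radon transform or iterative solvers:

```python
import numpy as np

# Four unknown attenuation coefficients of a tiny 2x2 "phantom"
mu_true = np.array([0.2, 0.5, 0.1, 0.4])

# Each row marks which pixels a straight ray crosses (unit path lengths):
# two horizontal rays, two vertical rays, and one diagonal ray
R = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 1]], dtype=float)

p = R @ mu_true  # measured projections: the line integrals seen by the detectors

# Reconstruct the attenuations by solving the (overdetermined) linear system
mu_hat, *_ = np.linalg.lstsq(R, p, rcond=None)
```

Because the five rays make the system full column rank, the least-squares solution recovers the four attenuation values exactly; this is the essence of what reconstruction algorithms do at scale.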
How does this contribute to development?
Mathematics is the backbone of such groundbreaking technologies, enabling precise diagnostics and effective treatment planning. This practical application of equations demonstrates how math can transform our world.
Now it’s your turn:
What other applications of mathematics in medicine and engineering do you think are underexplored? How can we further harness the power of math as a tool for development?
Let’s explore the incredible role of mathematics in everyday life together in this discussion!
Relevant answer
Answer
Follower@
  • asked a question related to Mathematics
Question
12 answers
Answer briefly
Relevant answer
Answer
Mathematics is the abstract study of numbers, structures, and patterns, while statistics focuses on data collection, analysis, interpretation, and inference to make decisions under uncertainty.
  • asked a question related to Mathematics
Question
4 answers
I want a qualitative scale that contains questions that reveal the mathematics teacher's perception of beauty and simplicity in mathematics?
Relevant answer
Answer
To create a qualitative scale that reveals mathematics teachers’ perceptions of beauty and simplicity in mathematics, consider including open-ended questions such as:
1.How do you define beauty in mathematics?
2.Can you provide an example of a mathematical concept you find particularly elegant or simple?
3.In what ways do you think simplicity enhances understanding in mathematics?
4.Describe a moment when you experienced beauty in a mathematical solution or proof.
5.How does your perception of beauty in mathematics influence your teaching practices?
These questions will facilitate deep reflections on the aesthetic aspects of mathematics and their pedagogical implications.
  • asked a question related to Mathematics
Question
1 answer
Conventional fragility is defined as the probability of failure. Based on concise mathematics, it is found that if fragility is the probability of collapse, then the design curve is the probability of survival. The sum of these two is equal to 1. Consequently, if a member (structure) is designed based on a given curve, then its fragility of collapse is also known!
Scale the horizontal axis of a fragility curve (s) of a structure between 0 and 1. Then:
what is the probability of collapse at s=0.5?
what is the probability of survival at s=0.5?
Do you agree with the above findings? Why?
Relevant answer
Answer
Semantically, collapse and survival do look like mutually exclusive and exhaustive events, thus indeed their probabilities should add up to one (complementarity). Unless of course there were alternative ways of interpreting "collapse" and/or "survival", but if they are both in the same context, I don't see how.
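The complementarity can be illustrated with a hypothetical lognormal fragility curve (a common parametric choice, not one implied by the question); the median demand is placed at s = 0.5 here purely so that the question's s = 0.5 point is easy to read off:

```python
import math

def fragility(s, median=0.5, beta=0.4):
    """Hypothetical lognormal fragility: P(collapse) at normalized demand level s."""
    if s <= 0.0:
        return 0.0
    return 0.5 * (1.0 + math.erf(math.log(s / median) / (beta * math.sqrt(2.0))))

def survival(s, median=0.5, beta=0.4):
    """Complement of the fragility curve: P(survival) = 1 - P(collapse)."""
    return 1.0 - fragility(s, median, beta)
```

With this (assumed) median, fragility(0.5) = survival(0.5) = 0.5, and the two curves sum to 1 at every s, which is the complementarity described above.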
  • asked a question related to Mathematics
Question
3 answers
Can the physical reality be represented mathematically?
Well, actual physics can be represented mathematically with the Basic Systemic Unit, based on Euler's relation, whose most remarkable property of remaining the same in spite of change permits deducing the fundamental equations of physics, such as:
* that of the pendulum, a real harmonic oscillator
* that of the gravitational field, including that of the planet Mercury obtained by Einstein, but in this case obtained with a mathematical tool not as complicated as tensor analysis
* those of SR, in another approach in which linear motion is just a special case of the more general solution obtained with the BSU concept, in which covariance is included as a consequence of the isomorphic property of Euler's relation mentioned above; and finally
* Schrödinger's wave equation
For those interested in the way all this is obtained you can see my papers:
QUANTUM PHYSICS
A QUANTUM THEORY OF GRAVITATION
SPECIAL RELATIVITY WITH ANOTHER APPROACH
that I really hope will contribute to overcoming the great crisis of physics, caused by the great incompatibility between QM and GR.
So yes, actual physics can be represented mathematically in a really coherent way, but for that a real paradigm shift is necessary.
Edgar Paternina
retired electrical engineer
Relevant answer
Answer
Thank you for sending your question. From what you wrote, I think you are right.
The article is translated into English:
I haven't uploaded it yet, but if you're interested I'll send it to you.
In China, a simple experiment was conducted that confirms what was imagined. The following video was made before the experiments were conducted:
And there are more things that could be done.
Regards,
Laszlo
  • asked a question related to Mathematics
Question
14 answers
Einstein overcomplicated the theory of special and general relativity simply because he did not define time correctly.
A complete universal or physical space is a space where the Cartesian coordinates x, y, z are mutually orthogonal (independent) and time t is orthogonal to x, y, z.
Once found, this space would be able to solve almost all problems of classical and quantum physics as well as most of mathematics without discontinuities [A*].
Note that R^4 mathematical spaces such as Minkowski, Hilbert, Riemann, etc. are all incomplete.
Schrödinger space may or may not be complete.
Heisenberg matrix space is neither statistical nor complete.
All the above mathematical constructions are not complete spaces in the sense that they do not satisfy the A* condition.
In conclusion, although Einstein pioneered the 4-dimensional unitary x-t space, he missed the correct definition of time.
Universal time t* must be redefined as an inseparable dimensionless integer woven into a 3D geometric space.
Here, universal time t* = Ndt* where N is the dimensionless integer of iterations or the number of steps/jumps dt*.
Finally, it should be clarified that the purpose of this article is not to underestimate Einstein's great achievements in theoretical physics such as the photoelectric effect equation, the Einstein Bose equation, the laser equation, etc. but only to discuss and explain the main aspects and flaws of his theory of relativity, if any.
Relevant answer
Dear, nothing in science is FINAL; hence science is called a SELF-CORRECTING subject. It means there is little possibility that even Albert Einstein fully understood the theory of relativity.
  • asked a question related to Mathematics
Question
1 answer
Computational topology of solitons
The well-established research area of algebraic topology is currently going interdisciplinary with computer science in many directions. Topological Data Analysis gives new opportunities in visualization for modeling and special mapping. Studies of the metrics used, and of simplicial complexes, are reliable groundwork for future results in mathematics. Today, machine learning is, on one side, a tool for analysis in topology optimization, topological persistence, and optimal homology problems; on the other side, topological features in machine learning are a new area of research: topological layers in neural networks, topological autoencoders, and topological analysis for the evaluation of generative adversarial networks are general aspects of topological machine learning. From a practical point of view, results in this area are important for research on solitary-like waves, biomedical image analysis, neuroscience, physics, and many other fields. This gives us the opportunity to establish and scale up an interdisciplinary team of researchers to apply for funding for fundamental science research in this interdisciplinary field.
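As one concrete, self-contained example of the persistence computations mentioned above, here is a sketch of 0-dimensional persistence (connected components) of a Vietoris-Rips filtration, computed with a union-find sweep over edges sorted by length. Real projects would use a dedicated library such as GUDHI or Ripser; this is only to show the idea:

```python
import math
from itertools import combinations

def h0_persistence(points):
    """Birth-death intervals of connected components (H0) in a Vietoris-Rips filtration."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Sweep edges by increasing length; each union kills one component
    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(len(points)), 2))
    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            bars.append((0.0, d))      # a component born at scale 0 dies at scale d
    bars.append((0.0, math.inf))       # one component persists at all scales
    return bars

# Toy usage: two nearby points and one far point give one short bar and one long bar
bars = h0_persistence([(0.0, 0.0), (0.0, 1.0), (5.0, 0.0)])
```

Long bars correspond to robust topological features of the point cloud, which is the signal that TDA pipelines feed into downstream machine learning models.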
Relevant answer
Answer
Dear Dr. Galina Momcheva
I am a PhD candidate at the Department of Mathematics of the University of Rajshahi. My current research interest is Topological Data Analysis and its applications, which is quite parallel to the project you have mentioned here. I have been working on this topic since 2021 and have completed 2 projects as a research assistant. Please visit my ResearchGate profile for a glance at my research outputs. Nowadays, I am very interested in developing a TDA-based ML/DL model to introduce a new framework for data analysis in different fields of interest.
I have noticed that the job recruitment is currently closed. However, I am interested in continuing my research as a postdoc fellow in a project similar to this one. Please feel free to email me at mbshiraj@gmail.com.
  • asked a question related to Mathematics
Question
8 answers
In a project involving analysis of log-linear outcomes, I have not found the solution to this problem. (log is the natural logarithm)
I assume it is simple, but I am out of clues, and I hope someone more mathematically proficient can help.
Relevant answer
Answer
Jan Ivanouw To solve the equation log(m) = a + b·x for x, follow these steps:
Given Equation:
log(m) = a + b·x
Step 1: Isolate the term containing x
log(m) − a = b·x
Step 2: Solve for x
Divide both sides by b (assuming b ≠ 0):
x = [log(m) − a]/b
Final Solution:
x = (1/b)·log(m) − a/b
  1. The base of the logarithm matters: if it is a common logarithm (log₁₀), the equation is interpreted as above; if it is a natural logarithm (ln, base e), the same steps apply.
  2. The equation is only valid if m > 0, as the logarithm is undefined for zero and negative values of m. Regards--Ijaz
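The rearrangement is easy to check numerically; here is a minimal sketch in Python, using the natural log as the question specifies (the function names are mine, for illustration only):

```python
import math

def solve_for_x(m, a, b):
    """Solve log(m) = a + b*x for x, assuming b != 0 and m > 0."""
    if b == 0:
        raise ValueError("b must be nonzero")
    if m <= 0:
        raise ValueError("m must be positive")
    return (math.log(m) - a) / b

def solve_for_m(x, a, b):
    """Invert the relation: m = exp(a + b*x)."""
    return math.exp(a + b * x)

# Round trip: build m from a known x, then recover x.
a, b, x = 1.5, 2.0, 0.75
m = solve_for_m(x, a, b)
print(solve_for_x(m, a, b))  # ≈ 0.75
```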
  • asked a question related to Mathematics
Question
1 answer
Yes, it is the beauty of math in number theory.
Relevant answer
Answer
2025 = 45² = 27² + 36² = 5² + 20² + 40²
2025 = 5²·9² = 3²·15²
Your statement 3 is the sum of 8 squares...
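These identities are easy to verify mechanically, for example in Python:

```python
# Check the square decompositions of 2025 stated above.
assert 2025 == 45**2
assert 2025 == 27**2 + 36**2
assert 2025 == 5**2 + 20**2 + 40**2
assert 2025 == 5**2 * 9**2 == 3**2 * 15**2
print("all identities hold")
```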
  • asked a question related to Mathematics
Question
5 answers
Nominations are expected to open in the early part of the year for the Breakthrough Prize in Fundamental Physics. Historically nominations are accepted from early/mid-January to the end of March, for the following year's award.
Historically, the foundation has also had a partnership with ResearchGate:
The foundation also awards major prizes for Life Sciences and for Mathematics, and has further prizes specific to younger researchers.
So who would you nominate?
Relevant answer
Answer
Dear Bernart Berndt Barkholz ,
Unfortunately, awards are usually used to intellectually manipulate communities! It's nice to see you again! Just two days ago, I was thinking about you; where did you disappear to? So telepathy works. How can we explain this physically?
Dear Eric Baird ,
Do you think our young people are capable of overcoming the nonsense they receive from our education systems?
If a young person breaks out of the vicious circle, they will be fired from their job!
Most talented young people who want to stay in research have to accept the false narrative!
Times are changing! The collective West has lost its way! The Global South has 'already' advanced!
Regards,
Laszlo
  • asked a question related to Mathematics
Question
37 answers
Differential Propositional Calculus • Overview
❝The most fundamental concept in cybernetics is that of “difference”, either that two things are recognisably different or that one thing has changed with time.❞
— W. Ross Ashby • An Introduction to Cybernetics
Differential logic is the component of logic whose object is the description of variation — the aspects of change, difference, distribution, and diversity — in universes of discourse subject to logical description. To the extent a logical inquiry makes use of a formal system, its differential component treats the use of a differential logical calculus — a formal system with the expressive capacity to describe change and diversity in logical universes of discourse.
In accord with the strategy of approaching logical systems in stages, first gaining a foothold in propositional logic and advancing on those grounds, we may set our first stepping stones toward differential logic in “differential propositional calculi” — propositional calculi extended by sets of terms for describing aspects of change and difference, for example, processes taking place in a universe of discourse or transformations mapping a source universe to a target universe.
What follows is the outline of a sketch on differential propositional calculus intended as an intuitive introduction to the larger subject of differential logic, which amounts in turn to my best effort so far at dealing with the ancient and persistent problems of treating diversity and mutability in logical terms.
Note. I'll give just the links to the main topic heads below. Please follow the link at the top of the page for the full outline.
Part 1 —
Casual Introduction
Cactus Calculus
Part 2 —
Formal Development
Elementary Notions
Special Classes of Propositions
Differential Extensions
Appendices —
References —
Relevant answer
Answer
Differential Propositional Calculus • 37
Foreshadowing Transformations • Extensions and Projections of Discourse —
❝And, despite the care which she took to look behind her at every moment, she failed to see a shadow which followed her like her own shadow, which stopped when she stopped, which started again when she did, and which made no more noise than a well‑conducted shadow should.❞
— Gaston Leroux • The Phantom of the Opera
Many times in our discussion we have occasion to place one universe of discourse in the context of a larger universe of discourse. An embedding of the type [†X†] → [†Y†] is implied any time we make use of one basis †X† which happens to be included in another basis †Y†. When discussing differential relations we usually have in mind that the extended alphabet ‡Y‡ has a special construction or a specific lexical relation with respect to the initial alphabet ‡X‡, one marked by characteristic types of accents, indices, or inflected forms.
Resources —
Differential Logic and Dynamic Systems
Differential Logic • Foreshadowing Transformations
  • asked a question related to Mathematics
Question
84 answers
It is provable that quantum mechanics, quantum field theory, and general relativity violate the axioms of the mathematics used to create them. This means that none of these theories has a mechanism by which the processes they describe can be feasible in a way that is consistent with the rules used to develop the math on which they are based. Thus, these theories and mathematics cannot both be true. This is proven, with a $500 reward for disproving it (details in the link). So I can prove that the above-mentioned theories are mathematical nonsense, and I produce a theory that makes the same predictions without the logical mistakes. https://theframeworkofeverything.com/
Relevant answer
Answer
Juan Weisz I am always in a learner role. You are welcome to point out mistakes in my work as it would be appreciated. Thank you
  • asked a question related to Mathematics
Question
2 answers
Within a specific problem, without the whole picture?
Relevant answer
Answer
1. Algorithm Design and Analysis:
  • Time and Space Complexity: Mathematicians and computer scientists analyze algorithms to determine their efficiency in terms of time and space resources. This involves using techniques like asymptotic analysis (Big O notation) to identify the best algorithms for specific tasks.
  • Graph Theory: Graph theory provides mathematical tools to model and analyze networks, which are essential in many programming applications, from social networks to transportation systems. Optimizing graph-based algorithms often involves finding shortest paths, maximum flows, or minimum spanning trees.
2. Machine Learning and Artificial Intelligence:
  • Optimization Algorithms: Machine learning algorithms, such as gradient descent and stochastic gradient descent, rely on mathematical optimization techniques to minimize error functions and find optimal parameter values.
  • Statistical Modeling: Statistical models, like linear regression and logistic regression, are used to analyze data and make predictions. These models often involve solving optimization problems to find the best-fitting parameters.
3. Game Development:
  • Physics Engines: Physics engines simulate real-world physical phenomena, such as gravity, collisions, and fluid dynamics. These simulations often rely on mathematical models and numerical methods to optimize performance and accuracy.
  • Pathfinding Algorithms: Pathfinding algorithms, like A* search, are used to find the shortest or most efficient path between two points in a game world. These algorithms often involve mathematical techniques like graph theory and heuristic functions.
4. Computer Graphics:
  • Ray Tracing: Ray tracing is a rendering technique that simulates the behavior of light to create realistic images. It involves solving complex mathematical equations to determine the color and intensity of light rays as they interact with surfaces.
  • 3D Modeling: 3D modeling relies on mathematical concepts like linear algebra and geometry to represent and manipulate 3D objects.
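As a small illustration of the optimization techniques mentioned above, here is a minimal gradient descent sketch (the objective function, step size, and iteration count are illustrative choices of mine, not from any particular library):

```python
# Minimal gradient descent: minimize f(x) = (x - 3)^2 via its gradient.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step against the gradient
    return x

# f'(x) = 2*(x - 3); the minimum of f is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 6))  # ≈ 3.0
```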
  • asked a question related to Mathematics
Question
1 answer
I am currently working on optimizing our inventory management system and need to calculate the monthly safety stock for various SKUs. I have already generated weekly safety stock values based on historical data and lead times. However, I need to adjust these values for a monthly period considering several factors:
1. SKU Contribution Ratio: This ratio indicates the importance of each SKU. A higher ratio means the SKU is more critical and should have a higher safety stock.
2. CCF Factor: This factor reflects our past ability to fulfill orders based on historical order and invoice data.
3. Monthly Stock Reduction Percentage: This percentage shows how much stock is typically left at the end of each month. If this value is 100% for four consecutive months, it indicates no need to keep that much inventory for the respective SKU. Conversely, if the values are decreasing, it suggests that the safety stock has been used and needs to be adjusted.
Given these factors, I need to determine a safety factor for the month, which will be used to adjust the weekly safety stock values to monthly values.
Could you suggest scientific methodologies or models that can effectively integrate these factors to calculate the monthly safety stock?
Relevant answer
Answer
Hi Sachin,
Use the formula: Adjusted SS = Weekly Safety Stock × SKU Contribution Ratio.
Later, do adjustments with the CCF: Adjusted SS = Adjusted SS × CCF (update the algorithm repeatedly);
the CCF factor accounts for variability and uncertainty in order fulfilment.
Then calculate the monthly value: Monthly SS = Adjusted Safety Stock (with CCF) × Monthly Stock Reduction Percentage.
Hope this will help you.
Regards,
IH
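A minimal sketch of this adjustment chain, under the assumption (mine, not a standard formula) that the three factors are applied multiplicatively to the weekly value:

```python
# Hypothetical multiplicative adjustment of weekly safety stock to a
# monthly figure, following the three factors described in the question.
def monthly_safety_stock(weekly_ss, contribution_ratio, ccf, monthly_reduction_pct):
    adjusted = weekly_ss * contribution_ratio   # weight by SKU importance
    adjusted *= ccf                             # correct for past fulfilment ability
    adjusted *= monthly_reduction_pct           # scale by observed month-end usage
    return adjusted

print(monthly_safety_stock(100, 1.2, 0.9, 0.8))  # ≈ 86.4
```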
  • asked a question related to Mathematics
Question
1 answer
Please help me find a theory that supports my study of storytelling in teaching mathematics.
Relevant answer
Answer
Keith Devlin has written a whole book ('The Math Gene') to illustrate the point that story-telling makes learning mathematics easier, and that if used well, story telling can make everyone very talented at understanding and doing math.
  • asked a question related to Mathematics
Question
12 answers
Famous mathematicians are failing each day to prove the Riemann Hypothesis, even though the Clay Mathematics Institute offers a prize of one million dollars for the proof.
The proof of Riemann's Hypothesis would allow us to understand better the distribution of prime numbers between all numbers and would also allow its official application in Quantics. However, many famous scientists still refuse the use of Riemann's Hypothesis in Quantics as I read in an article of Quanta Magazine.
Why is this Hypothesis so difficult to prove? And is the Zeta extension really useful for Physics and especially for Quantics ? Are Quantics scientists using the wrong mathematical tools when applying Riemann's Hypothesis ? Is Riemann's Hypothesis announcing "the schism" between abstract mathematics and Physics ? Can anyone propose a disproof of Riemann's Hypothesis based on Physics facts?
Here is the link to the article of Natalie Wolchover:
The zeros of the Riemann zeta function can also be caused by the use of rearrangements when trying to find an image by the extension since the Lévy–Steinitz theorem can happen when fixing a and b.
Suppositions or axioms should be made before trying to use the extension depending on the scientific field where it is demanded, and we should be sure if all the possible methods (rearrangements of series terms) can give the same image for a known s=a+ib.
You should also know that the Lévy–Steinitz theorem was formulated in 1905 and 1913, whereas the Riemann Hypothesis was formulated in 1859. This means that Riemann, who died in 1866, and even the famous Euler, never knew the Lévy–Steinitz theorem.
Relevant answer
  • asked a question related to Mathematics
Question
17 answers
Differential Logic • 1
Introduction —
Differential logic is the component of logic whose object is the description of variation — focusing on the aspects of change, difference, distribution, and diversity — in universes of discourse subject to logical description. A definition that broad naturally incorporates any study of variation by way of mathematical models, but differential logic is especially charged with the qualitative aspects of variation pervading or preceding quantitative models. To the extent a logical inquiry makes use of a formal system, its differential component governs the use of a “differential logical calculus”, that is, a formal system with the expressive capacity to describe change and diversity in logical universes of discourse.
Simple examples of differential logical calculi are furnished by “differential propositional calculi”. A differential propositional calculus is a propositional calculus extended by a set of terms for describing aspects of change and difference, for example, processes taking place in a universe of discourse or transformations mapping a source universe to a target universe. Such a calculus augments ordinary propositional calculus in the same way the differential calculus of Leibniz and Newton augments the analytic geometry of Descartes.
Resources —
Logic Syllabus
Survey of Differential Logic
Relevant answer
Answer
Differential Logic • 18
Tangent and Remainder Maps —
If we follow the classical line which singles out linear functions as ideals of simplicity then we may complete the analytic series of the proposition f = pq : X → B in the following way.
The next venn diagram shows the differential proposition df = d(pq) : EX → B we get by extracting the linear approximation to the difference map Df = D(pq) : EX → B at each cell or point of the universe X. What results is the logical analogue of what would ordinarily be called “the differential” of pq but since the adjective “differential” is being attached to just about everything in sight the alternative name “tangent map” is commonly used for df whenever it's necessary to single it out.
Tangent Map d(pq) : EX → B
To be clear about what's being indicated here, it's a visual way of summarizing the following data.
d(pq)
= p ∙ q ∙ (dp , dq)
+ p ∙ (q) ∙ dq
+ (p) ∙ q ∙ dp
+ (p) ∙ (q) ∙ 0
To understand the extended interpretations, that is, the conjunctions of basic and differential features which are being indicated here, it may help to note the following equivalences.
• (dp , dq) = dp ∙ (dq) + (dp) ∙ dq
• dp = dp ∙ dq + dp ∙ (dq)
• dq = dp ∙ dq + (dp) ∙ dq
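These equivalences can be checked mechanically by reading (x, y) as exclusive disjunction, (x) as negation, ∙ as conjunction, and + as inclusive disjunction:

```python
from itertools import product

# Verify the three equivalences over all Boolean assignments of dp, dq.
for dp, dq in product([False, True], repeat=2):
    xor = dp != dq
    assert xor == ((dp and not dq) or (not dp and dq))  # (dp, dq) = dp∙(dq) + (dp)∙dq
    assert dp == ((dp and dq) or (dp and not dq))       # dp = dp∙dq + dp∙(dq)
    assert dq == ((dp and dq) or (not dp and dq))       # dq = dp∙dq + (dp)∙dq
print("all three equivalences hold")
```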
Capping the analysis of the proposition pq in terms of succeeding orders of linear propositions, the final venn diagram of the series shows the “remainder map” r(pq) : EX → B, which happens to be linear in pairs of variables.
Remainder r(pq) : EX → B
Reading the arrows off the map produces the following data.
r(pq)
= p ∙ q ∙ dp ∙ dq
+ p ∙ (q) ∙ dp ∙ dq
+ (p) ∙ q ∙ dp ∙ dq
+ (p) ∙ (q) ∙ dp ∙ dq
In short, r(pq) is a constant field, having the value dp ∙ dq at each cell.
Resources —
Logic Syllabus
Survey of Differential Logic
  • asked a question related to Mathematics
Question
1 answer
Program Description:
A program that converts mathematical equations from PDF files into editable equations within Word documents. The program relies on Optical Character Recognition (OCR) technology for mathematical equations, ensuring accuracy in retrieving symbols and mathematical formulas. It allows users to easily edit the equations directly in Word and provides support for various mathematical writing formats, such as LaTeX or MathType.
Program Features:
Accurate Conversion: Can read complex mathematical equations from PDF files.
Word Integration: Offers direct import options into Word documents.
Mathematical Format Support: Supports multiple formats such as MathML and LaTeX.
User-Friendly Interface: A simple design suitable for researchers and students.
Multi-Platform Compatibility: Works on operating systems like Windows and macOS.
Examples of programs that may meet this description include:
Mathpix Snip
InftyReader
You can try one of them to find the best solution for your needs.
Relevant answer
Answer
Try using mathpix, it does it so well
  • asked a question related to Mathematics
Question
2 answers
I have started an investigation into the use of AI for teaching mathematics and physics.
In this framework, I would welcome any insights and previous findings.
Please send me similar studies.
Thank you in advance
Relevant answer
Answer
I've been thinking about this subject and would like to pursue it as a future project:
  1. Personalized & Adaptive Mathematical Learning
  2. Virtual Tutoring along with automated grading
  • asked a question related to Mathematics
Question
1 answer
What about generating 3D shapes in different ways: GANs, mathematics with Python, LLMs, or LSTMs? And what are the related works on this?
Relevant answer
Answer
I think these are the steps involved in using an LLM:
  • Craft a detailed prompt describing the 3D shape.
  • Input it to an LLM like Gemini.
  • The model outputs code, say a Python script.
  • Execute the generated code to create the 3D shape.
  • The generated 3D shape may require additional refinement using 3D modeling software.
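As a sketch of the "mathematics with Python" route, a parametric surface can be generated directly; here a torus (the radii and resolution are arbitrary illustrative values):

```python
import numpy as np

# Generate a torus as a parametric surface: points (x, y, z) for
# angles u (around the tube's axis) and v (around the tube itself).
def torus_points(R=2.0, r=0.5, n=50):
    u, v = np.meshgrid(np.linspace(0, 2*np.pi, n), np.linspace(0, 2*np.pi, n))
    x = (R + r*np.cos(v)) * np.cos(u)
    y = (R + r*np.cos(v)) * np.sin(u)
    z = r*np.sin(v)
    return np.stack([x, y, z], axis=-1)

pts = torus_points()
print(pts.shape)  # (50, 50, 3)
```

The resulting array can be fed to any mesh or plotting library for rendering.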
  • asked a question related to Mathematics
Question
12 answers
Dear Esteemed Colleagues,
I hope this message finds you well. I am writing to invite your review and insights on what I believe to be a significant development in our understanding of the Riemann Hypothesis. After extensive work, I have arrived at a novel proof for the hypothesis, using a generalization of the integral test applicable to non-monotone series, as outlined in the attached document.
As a lead AI specialist at Microsoft, specializing in math-based AI, I have employed both traditional mathematical techniques and AI-based verification algorithms to rigorously validate the logical steps and conclusions drawn in this proof. The AI models have thoroughly checked the derivations, ensuring consistency in the logic and approach.
The essence of my proof hinges on an approximation for the zeta function that results in an error-free evaluation of its imaginary part at $x = \frac{1}{2}$, confirming this as the minimal point for both the real and imaginary components. I am confident that this new method is a significant step forward and stands up to scrutiny, but as always, peer review is a cornerstone of mathematical progress.
I warmly invite your feedback, comments, and any questions you may have regarding the methods or conclusions. I fully stand by this work and look forward to a robust, respectful discussion of the implications it carries. My goal is not to offend or overstate the findings but to contribute meaningfully to this ongoing conversation in the mathematical community.
Thank you for your time and consideration. I look forward to your responses and the productive discussions that follow.
Sincerely,
Rajah Iyer
Lead AI Specialist, Microsoft
Relevant answer
Answer
I was briefly reviewing your proof and noticed something unusual in this part.
  • asked a question related to Mathematics
Question
3 answers
Can anyone answer my question?
Relevant answer
Answer
The most important biodiversity measures in field studies are species richness (the count of distinct species), species evenness (distribution uniformity of species), and species diversity indices like Shannon and Simpson indices. Mathematically, species richness is a simple count, while evenness ratios measure uniformity. Diversity indices, like Shannon's, are logarithmic calculations of proportional abundance, whereas Simpson’s index focuses on dominance, calculating the probability of two randomly selected individuals belonging to the same species.
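A minimal sketch of the two indices in Python (natural log for Shannon, as is common; the function names are mine):

```python
import math

def shannon(counts):
    """Shannon diversity index H' = -sum(p_i * ln(p_i))."""
    n = sum(counts)
    return -sum((c/n) * math.log(c/n) for c in counts if c > 0)

def simpson(counts):
    """Simpson's index: probability two random individuals share a species."""
    n = sum(counts)
    return sum((c/n)**2 for c in counts)

counts = [10, 10, 10, 10]         # four equally abundant species
print(round(shannon(counts), 4))  # ln(4) ≈ 1.3863
print(simpson(counts))            # 0.25
```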
  • asked a question related to Mathematics
Question
1 answer
I am interested in the study of visual subcompetence in education, specifically how visual tools and technologies can be integrated into the educational process to enhance the development of professional competencies in future teachers, particularly in mathematics education.
I am looking for research and definitions that highlight and specify the concept of visual subcompetence in education. Specifically, I am interested in how visual subcompetence is distinguished as part of the broader professional competence, particularly in the context of mathematics teacher education.
Relevant answer
Answer
Consider examining a greater number of case studies regarding visual subcompetence in education.
  • asked a question related to Mathematics
Question
2 answers
Can you suggest any study that uses Ethnographic Research design?
Relevant answer
Answer
Marjun Abear A notable study using ethnographic research design in teaching and learning is Paul Willis' "Learning to Labor" (1977). Willis observed working-class boys in a British school to explore how their social interactions and cultural attitudes shaped their educational experiences and future job prospects. This ethnography highlights how education can reinforce class inequalities, providing deep insights into the relationship between culture, learning, and social reproduction.
  • asked a question related to Mathematics
Question
60 answers
I apologize to you all! The question was asked incorrectly—my mistake. Now everything is correct:
In a circle with center O, chords AB and CD are drawn, intersecting at point P.
In each segment of the circle, other circles are inscribed with corresponding centers O_1; O_2; O_3; O_4.
Find the measure of angle ∠O_1 PO_2.
Relevant answer
Answer
  • asked a question related to Mathematics
Question
1 answer
Can you explain the mathematical principles behind the Proof of Stake (PoS) algorithm, including how validator selection probabilities, stake adjustments, and reward calculations are determined?
Relevant answer
Answer
Dear Hiba, you can look up the references I put far below or search Wikipedia if you have not done so; I am not sure something is there. Here is what I can summarize; first, let's break down the mathematical principles behind the Proof of Stake (PoS) algorithm as I understood them from the existing literature:
1. Validator Selection Probabilities
In PoS, validators are chosen to create new blocks based on the amount of cryptocurrency they hold and are willing to “stake” as collateral. The selection process is typically pseudo-random and influenced by several factors:
  • Stake Amount: The more coins a validator stakes, the higher their chances of being selected. Mathematically, if a validator i stakes S_i coins out of a total staked amount S_total, their probability P_i of being selected is: P_i = S_i / S_total
  • Coin Age: Some PoS systems also consider the age of the staked coins. The longer the coins have been staked, the higher the chances of selection. This can be represented as: P_i = (S_i × A_i) / Σ_{j=1}^{N} (S_j × A_j), where A_i is the age of the coins staked by validator i.
  • Randomization: To prevent predictability and enhance security, a randomization factor is often introduced. This can be achieved through a hash function or a random number generator.
2. Stake Adjustments
Stake adjustments occur when validators add or remove their staked coins. The total stake S_total is updated accordingly, which in turn affects the selection probabilities. If a validator adds ΔS coins to their stake, their new stake S_i′ becomes:
S_i′ = S_i + ΔS
The new total stake S_total′ is:
S_total′ = S_total + ΔS
3. Reward Calculations
Validators receive rewards for creating new blocks, which are typically proportional to their stake. The reward R_i for validator i can be calculated as:
R_i = R_total × S_i / S_total
where R_total is the total reward distributed for the block.
Some PoS systems also include penalties for malicious behavior or downtime, which can reduce the rewards or even the staked amount.
Example
Let’s consider a simple example with three validators:
  • Validator A stakes 40 coins.
  • Validator B stakes 30 coins.
  • Validator C stakes 30 coins.
The total stake S_total is 100 coins. The selection probabilities are:
  • P_A = 40/100 = 0.4
  • P_B = 30/100 = 0.3
  • P_C = 30/100 = 0.3
If the total reward for a block is 10 coins, the rewards are:
  • R_A = 10 × 0.4 = 4 coins
  • R_B = 10 × 0.3 = 3 coins
  • R_C = 10 × 0.3 = 3 coins
Hope you will find this quite helpful.
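The worked example can be reproduced directly from the formulas above:

```python
# Selection probabilities and block rewards for the three-validator example.
stakes = {"A": 40, "B": 30, "C": 30}
total = sum(stakes.values())

probs = {v: s / total for v, s in stakes.items()}
print(probs)    # {'A': 0.4, 'B': 0.3, 'C': 0.3}

block_reward = 10
rewards = {v: block_reward * p for v, p in probs.items()}
print(rewards)  # {'A': 4.0, 'B': 3.0, 'C': 3.0}
```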
  • asked a question related to Mathematics
Question
3 answers
An explanation of how to implement green education in mathematics for children
Relevant answer
Answer
Applying green education in mathematics for children involves integrating environmental themes and sustainability principles into math lessons. Here are some strategies to make this connection:
1. **Use Environmental Data in Math Problems**
- **Real-world examples**: Incorporate environmental statistics, such as data on pollution, recycling rates, and energy consumption, into math problems. This not only teaches mathematical concepts like percentages, averages, and data analysis but also raises awareness about environmental issues.
- **Hands-on projects**: Have children collect local environmental data (e.g., water usage, electricity consumption) and analyze it to learn about graphing, patterns, and calculations.
2. **Explore Geometry Through Nature**
- **Shapes in nature**: Use examples from nature like leaves, flowers, and snowflakes to teach geometric concepts like symmetry, fractals, and patterns.
- **Eco-friendly architecture**: Introduce geometric principles through sustainable design, such as how solar panels are angled to maximize sunlight or how certain shapes reduce waste in construction.
3. **Problem Solving with Environmental Impact**
- **Sustainability challenges**: Set up problem-solving activities where students must calculate the environmental impact of various actions. For instance, ask them to calculate the savings in resources when using recycled paper versus new paper.
- **Optimization tasks**: Use problems that involve optimizing energy use or waste reduction, showing how math can help create more sustainable solutions.
4. **Promote Critical Thinking on Environmental Issues**
- **Math and decision making**: Present scenarios where students need to make environmentally conscious decisions, such as calculating carbon footprints for different transportation methods or comparing the efficiency of renewable vs. non-renewable energy sources.
- **Game theory and resource use**: Introduce simple concepts of game theory or optimization to help children think about resource allocation and how different decisions impact the environment.
5. **Project-Based Learning with a Green Focus**
- **Eco-friendly projects**: Encourage students to work on projects like creating a garden, where they can use math for measurement, planning, and budgeting. This not only teaches practical math but also instills responsibility for the environment.
- **Sustainable design challenges**: Have students design eco-friendly solutions like a rainwater collection system, where they calculate the volume of water that can be saved based on local rainfall data.
6. **Use Visual and Interactive Tools**
- **Green apps and games**: Use interactive math apps and games that focus on environmental topics. For instance, apps that simulate resource management or renewable energy can teach math concepts while promoting green education.
- **Field trips and nature walks**: Incorporate math lessons into outdoor activities, where children measure plant growth, calculate the height of trees, or estimate the number of species in a given area.
7. **Introduce Mathematical Concepts Through Climate Change**
- **Climate data analysis**: Analyze real-world data on climate change, like global temperature rise or CO2 emissions. This fosters an understanding of trends and how math can model and predict future changes.
- **Carbon footprint calculation**: Teach students how to calculate their own carbon footprint using math, helping them understand the impact of their actions and encouraging more sustainable behavior.
By integrating green education into math, children not only gain math skills but also learn to think critically about environmental issues and sustainability, which can inspire them to take positive actions for the planet.
  • asked a question related to Mathematics
Question
3 answers
Hello,
I am currently working on a research project on the use of mathematical optimization to determine the optimal policy rate in monetary policy. I would like to know whether there are recent research works or specific models that have addressed this subject. In addition, I am looking for advice on how to structure my model and choose relevant variables for this type of analysis. Any suggestion of reading or expertise would be greatly appreciated.
Thank you in advance for your help
Relevant answer
Answer
Research on the use of mathematical optimization to determine the policy rate includes models such as the Taylor Rule, which sets the policy rate based on inflation and output gaps, and dynamic stochastic general equilibrium (DSGE) models that incorporate optimization techniques to evaluate the impacts of monetary policy. Other studies utilize linear programming and mixed-integer optimization methods to analyze trade-offs in policy decisions and macroeconomic stability. These models help central banks effectively balance inflation control and economic growth.
  • asked a question related to Mathematics
Question
4 answers
As an academic working and pursuing a PhD degree in Egypt, both in private and public universities respectively, I wanted to put forward a simple question:
What is the role of universities, and other academic institutions, today? Was there ever a time where universities were agents of revolutionary action and change, or was it only a subject of the overall consumerist system?
We can go back as far as Ancient Egyptian times, when scribes and priests were taught writing, mathematics, and the documentation of daily exchanges, all the way to today's era of digital globalization and mass education, where the knowledge production process has become more of a virtual canvas than actual knowledge. Has knowledge ever served its purpose? Have academic institutions, and of course academic scholars, ever delivered the true purpose of education?
Has education's main purpose always been, and does it remain, the economic prosperity of certain classes, and hence socio-economic segregation?
Relevant answer
Answer
Today's global societies are very competitive in many ways; as a result, this trend has a ripple effect on global educational institutions as well. Specifically speaking, without an MA/MS degree, one cannot compete for a decent job in the professional-level job market. Thus, universities are driven to restructure their institutions to meet this demand.
  • asked a question related to Mathematics
Question
6 answers
Scientists believe theories must be proven by experiments. Does their faith in the existence of objective reality mean they are classical scientists who reject quantum mechanics' statements that observers and the observed are permanently and inextricably united? In this case, scientists would unavoidably and unconsciously influence every experiment and form of mathematics. In the end, they may be unavoidably and unconsciously influencing the universe which is the home of all experiments and all mathematics.
Relevant answer
Answer
Dear colleagues,
QM experiments, and probably even higher-level systems, are definitely proving to be affected by observers; e.g., see the research of Dean Radin on the deviation of the mean value of quantum random number generators and rest of his research. 
On the other hand, large systems are often in a state of decoherence, and hence, quantum effects have no impact on the behavior of such macroscopic objects and processes. The line between those two extreme cases is blurry and constantly shifting. 
What is astounding is that the bulk of research confirming that consciousness is impacting reality is constantly growing. It has far-reaching consequences. One of the most profound impacts is our innate ability to alter our well-being and health and even heal from serious diseases. 
A list of important publications describing quantum biology functioning follows. This research has gained impetus in the last couple of years. According to my understanding, from this research, we can start to understand the principles of coupling between consciousness and quantum systems outcomes. 
What is your take on this exciting area of research?
References:
[1] Madl, P.; Renati, P. Quantum Electrodynamics Coherence and Hormesis: Foundations of Quantum Biology. Int. J. Mol. Sci. 2023, 24, 14003. https://doi.org/10.3390/ijms241814003
[2] Lewis Grozinger, Martyn Amos, Pablo Carbonell, Thomas E. Gorochowski, Diego A. Oyarzún, Harold Fellermann, Ruud Stoof, Paolo Zuliani, Huseyin Tas & Angel Goñi-Moreno: Pathways to cellular supremacy in biocomputing, Nature Communications 10(1) (2019),
DOI: 10.1038/s41467-019-13232-z
[3] Michael P. Robertson & Gerald F. Joyce: The Origins of the RNA World, Cold Spring Harb Perspect Biol 2012;4:a003608, DOI: 10.1101/cshperspect.a003608
  • asked a question related to Mathematics
Question
4 answers
There exists a neural network model designed to predict a specific output, detailed in a published article. The model comprises 14 inputs, each normalized with minimum and maximum parameters specified for normalization. It incorporates six hidden layers, with the article providing the neural network's weight parameters from the input to the hidden layers, along with biases. Similarly, the parameters from the output layer to the hidden layers, including biases, are also documented.
The primary inquiry revolves around extracting the mathematical equation suitable for implementation in Excel or Python to facilitate output prediction.
Relevant answer
Answer
A late reply. I hope this helps you!
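For illustration, here is a minimal NumPy sketch of such a forward pass. The min-max input normalization follows the question; the tanh hidden activation and the linear output layer are assumptions that must be replaced by whatever the article actually specifies:

```python
import numpy as np

def predict(x, x_min, x_max, weights, biases, hidden_act=np.tanh):
    """Forward pass of a feed-forward network published as weight/bias tables.

    weights[l] and biases[l] map layer l to layer l+1.
    Assumptions (check against the article): min-max normalization of the
    14 inputs to [0, 1], the same activation on every hidden layer,
    and a linear output layer.
    """
    a = (np.asarray(x) - x_min) / (x_max - x_min)  # normalize raw inputs
    for l, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b
        # hidden activation on all but the last layer; linear output
        a = hidden_act(z) if l < len(weights) - 1 else z
    return a
```

With the published weight and bias tables loaded into `weights` and `biases` (one matrix and one vector per layer), this is exactly the closed-form equation a = f(W·a + b) applied layer by layer, which can equally be transcribed into Excel cell formulas.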
  • asked a question related to Mathematics
Question
7 answers
It seems common to combine basic observations to create new observables, which are then used for PPP and other applications. Basic observations such as pseudorange and carrier-phase measurements are real measurements from GNSS. These real observations are combined to create entirely new observables that are not direct, physical, or real. Remarkably, these new observables solve real problems such as PPP (e.g., the ionosphere-free combination).
  • What is the theory behind this?
  • Is there a similar approach in other scientific fields, or a simple analogous explanation?
  • Could you direct me to resources such as videos or literature?
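As a concrete illustration of the idea, the ionosphere-free combination works because the first-order ionospheric delay scales with 1/f², so a suitable weighted difference of two frequencies cancels it. A minimal sketch in Python (GPS L1/L2 carrier frequencies assumed as defaults):

```python
def ionosphere_free(p1, p2, f1=1575.42e6, f2=1227.60e6):
    """First-order ionosphere-free combination of dual-frequency
    pseudoranges (defaults: GPS L1/L2 carrier frequencies in Hz).

    The ionospheric delay scales as 1/f^2, so the weighted difference
    (f1^2*P1 - f2^2*P2) / (f1^2 - f2^2) cancels it to first order,
    at the cost of amplified measurement noise.
    """
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# Synthetic check: same geometric range, frequency-dependent delay
rho, i1 = 20200e3, 5.0                       # metres
p1 = rho + i1                                # delay on f1
p2 = rho + i1 * (1575.42e6 / 1227.60e6)**2   # same delay scaled to f2
assert abs(ionosphere_free(p1, p2) - rho) < 1e-6
```

The same weighting applies to carrier-phase observations, with the extra complication that the combined ambiguity is no longer an integer.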
Relevant answer
Answer
Furthermore, once a satellite is locked for recording the phases, a counter starts and adds "1" whenever a whole carrier-wave cycle has passed, based on the time one wavelength corresponds to. So the unknown ambiguity for a satellite remains constant (associated with the instant when the lock starts) as long as that satellite stays locked without interruption. If there is an interruption, a different ambiguity has to be resolved in the estimation.
  • asked a question related to Mathematics
Question
9 answers
In triangle ∆ABC (with ∠C = 90°), the angle CBA is equal to 2α.
A line AD is drawn to the leg BC at an angle α (∠BAD = α).
The length of the hypotenuse is 6, and the segment CD is equal to 3.
Find the measure of the angle α.
This problem can be solved using three methods: trigonometric, algebraic, and geometric. I suggest you find the geometric method of solution!
Relevant answer
Answer
After a trigonometric solution (a pretty tedious route, not worth publishing) I found a geometric way, the same as Dinu Teodorescu's, but in a somewhat different order:
If one adds to the picture by Liudmyla Hetmanenko the midpoint O of AB, then the following becomes clear:
Property 1. AO = CO, which implies ∠CAO = ∠ACO = 2α.
Property 2. CD = CO, which implies that D and O lie on a circle with center C.
Property 3. ∠DBO = α = 0.5∠DCO, which implies that D, O and B lie on a circle with center C, which in turn implies CB = CD, which means that
3α = π/4, hence α = 15°.
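For completeness, the trigonometric route reduces (with hypotenuse 6 and CD = 3, and with ∠ADC = 3α as the exterior angle of triangle ABD) to the equation tan 3α = AC/CD = 2 sin 2α, and a quick numerical check confirms α = 15° satisfies it:

```python
import math

def residual(alpha):
    # Setup: hypotenuse AB = 6, CD = 3, angle B = 2a, angle BAD = a.
    # Then AC = 6*sin(2a) and angle ADC = 3a (exterior angle of triangle ABD),
    # so tan(3a) = AC / CD = 2*sin(2a); the residual vanishes at the solution.
    return math.tan(3 * alpha) - 2 * math.sin(2 * alpha)

# alpha = 15 degrees: tan 45 = 1 = 2*sin 30
assert abs(residual(math.radians(15))) < 1e-12
```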
  • asked a question related to Mathematics
Question
14 answers
A minion is a low-level official protecting a bureaucracy from challengers.
A Kuhnian minion (after Thomas Kuhn's Structure of Scientific Revolutions) is a low-power scientist who dismisses any challenge to the existing paradigm.
A paradigm is a truth structure that partitions scientific statements as either true to the paradigm or false.
Recently, I posted a question on Physics Stack Exchange that serves as a summary of the elastic-string paradigm. My question was: "Is it possible there can be a non-Fourier model of string vibration? Is there an exact solution?"
To explain, I asked if they knew the Hamiltonian equation for string vibration. They did not agree it must exist. I pointed out there are problems with the elastic model of vibration: it has two degrees of freedom, and its unsolvable equations of motion can only be approximated by numerical methods. I said elasticity makes superposition the fourth Newtonian law. How can a string vibrate in an infinite number of modes without violating energy conservation?
Here are some comments I got in response:
“What does string is not Fourier mean? – Qmechanic
“ ‘String modes cannot superimpose!’ Yet, empirically, they do.” – John Doty
“ A string has an infinite number of degrees of freedom, since it can be modeled as a continuous medium. If you manage to force only the first harmonic, the dynamics of the system only involve the first harmonic and it’s a standing wave: this solution does depend on time, being (time dependence in the amplitude of the sine). No 4th Newton’s law. I didn’t get the question about Hamilton equation.
“What do you mean with ‘archaic model’? Can I ask you what’s your background that makes you do this sentence? Physics, Math, Engineering? You postulate nothing here. You have continuum mechanics here. You have PDEs under the assumption of continuum only. You have exact solutions in simple problems, you have numerical methods approximating and solving exact equations. And trust me: this is how the branch of physics used in many engineering fields, from mechanical, to civil, to aerospace engineering.” – basics
I want to show the rigid versus elastic dichotomy goes back to the calculus wars. Quoting here from Euler and Modern Science, published by the Mathematical Association of America:
"We now turn to the most famous disagreement between Euler and d’Alembert … over the particular problem of the theory of elasticity concerning a string whose transverse vibrations are expressed through second-order partial differential equations of a hyperbolic type later called the wave equation. The problem had long been of interest to mathematicians. The first approach worthy of note was proposed by B. Taylor, … A decisive step forward was made by d’Alembert in … the differential equation for the vibrations, its general solution in the form of two “arbitrary functions” arrived at by means original with d’Alembert, and a method of determining these functions from any prescribed initial and boundary conditions.”
[Editorial Note: The boundary conditions were taken to be the string endpoints. The use of the word hyperbolic is, I believe, a clear reference to Taylor's string. A string with constant curvature can only have one mathematical form, the cycloid, which is defined by the hyperbolic cosh x function. The cosh x function is the only class of solutions allowed if the string cannot elongate. The Taylor/Euler–d'Alembert dispute was whether the string is trigonometric or hyperbolic.]
Continuing the quote from Euler and Modern Science:
"The most crucial issue dividing d’Alembert and Euler in connection with the vibrating string problem was the compass of the class of functions admissible as solutions of the wave equation, and the boundary problems of mathematical physics generally, D’Alembert regarded it as essential that the admissible initial conditions obey stringent restrictions or, more explicitly, that the functions giving the initial shape and speed of the string should over the whole length of the string be representable by a single analytical expression … and furthermore be twice continuously differentiable (in our terminology). He considered the method invalid otherwise.
"However, Euler was of a different opinion … maintaining that for the purposes of physics it is essential to relax these restrictions: the class of admissible functions or, equivalently, curves should include any curve that one might imagine traced out by a “free motion of the hand”…Although in such cases the analytic method is inapplicable, Euler proposed a geometric construction for obtain the shape of the string at any instant. …
Bernoulli proposed finding a solution by the method of superimposition of simple trigonometric functions, i.e. using trigonometric series, or, as we would now say, Fourier series. Although Daniel Bernoulli’s idea was extremely fruitful—in other hands--, he proved unable to develop it further.
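For reference, the Bernoulli/Fourier superposition solution under discussion can be written down directly. A minimal sketch in Python, assuming unit length and wave speed, fixed ends, and hypothetical mode amplitudes:

```python
import math

def string_displacement(x, t, L=1.0, c=1.0, b=(1.0, 0.0, 0.3)):
    """Bernoulli/Fourier solution of the wave equation u_tt = c^2 u_xx
    with fixed ends u(0, t) = u(L, t) = 0:

        u(x, t) = sum_n b_n sin(n*pi*x/L) cos(n*pi*c*t/L)

    b holds the mode amplitudes b_1, b_2, ... (hypothetical values here).
    """
    return sum(bn * math.sin((n + 1) * math.pi * x / L)
               * math.cos((n + 1) * math.pi * c * t / L)
               for n, bn in enumerate(b))

# The fixed ends stay at rest at every instant
assert abs(string_displacement(0.0, 0.37)) < 1e-12
assert abs(string_displacement(1.0, 0.37)) < 1e-12
```

Whether this class of solutions is the right physical model of a string is precisely what the question disputes; the sketch only states the standard textbook form.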
Another example is Euler's manifold of musical key and pitch values as a torus. To be fair, Euler did not assert the torus; he only drew a network showing that key and pitch can move independently. This was before Möbius's classification theorem.
My point is that it should be clear the musical key and pitch do not have different centers of harmonic motion. But in my experience, the minions will not allow Euler to be challenged by someone like me. Never mind that Euler's theory of music was crackpot!
Relevant answer
Answer
Physics Stack Exchange is not peer review, it is sneer review. I showed them their answers are not correct, but I am shut out.
  • asked a question related to Mathematics
Question
15 answers
The need of a paradigm shift in physics
Is it possible in a world as fragmented as ours to present a new concept of Unity in which Science, Philosophy and Spirituality or Ontology can be conceived working in Complete Harmony?
In this respect the late Thomas S. Kuhn wrote in his
The Structure of Scientific Revolutions
"Today research in parts of philosophy, psychology, linguistic, and even art history, all converge to suggest that the traditional paradigm is somehow askew. That failure to fit is also increasingly apparent by the historical study of science to which most of our attention is necessarily directed here."
And even the father of Quantum Physics complained strongly in his 1952 colloquia, when he wrote:
"Let me say at the outset, that in this speech, I am opposing not a few special statements claims of quantum mechanics held today, I am opposing its basic views that has been shaped 25 years ago, when Max Born put forward his probability interpretation, which was accepted by almost everybody. It has been worked out in great detail to form a scheme of admirable logical consistency which has since been inculcated in all young students of theoretical physics."
Where is the source of this so-called "crisis of physics"?
Certainly the great incompatibility between General Relativity and Quantum Mechanics is, in a certain sense, one of the reasons for that great crisis, and it shows clearly the real need for a paradigm shift.
As one who comes from the Judeo-Christian tradition, that need for a real paradigm shift was of course a real need for me too. Philosophers such as Teilhard de Chardin, Henri Bergson, Charles Peirce and Ken Wilber all worked for it!
Ken Wilber said the goal of postmodernity should be the integration of the Big Three: Science, Philosophy and Spirituality. And a scientist, Eric J. Lerner, in his The Big Bang Never Happened, shows clearly how a paradigm shift in cosmology is a real need too.
My work on that need started in 1968, when I found for the first time an equation that has been called the most beautiful equation of mathematics: Euler's relation, found by him in 1745 when working with infinite series. It was this equation that led me in 1991 to define what I now call a Basic Systemic Unit, which has the remarkable property of remaining the same in spite of change, exactly the definition of a quantum as given by professor Art Hobson in his book Tales of the Quantum, and what the University of Ottawa found when working with that strange concept that frightened Einstein, entanglement, which seemed to violate Special Relativity.
Where is the real cause of the incompatibility between GR and QM?
For GR, tensor analysis was used, a mathematical tool based on real numbers, and with it came the need to solve ten functions representing the gravitational field:
"Thus, according to the general theory of relativity, gravitation occupies an exceptional position with regards to other forces, particularly the electromagnetic forces, since the ten functions representing the gravitational field at the same time define the metrical properties of the space measured."
THE FOUNDATION OF THE GENERAL THEORY OF RELATIVITY
By A. Einstein
Well, the point is that in the metric that defines GR, time is just another variable, just like space, and so it has the same symmetry properties, to the point that it can take both signs, positive and negative, so that time travel could be conceived of just like space travel, in any direction. In fact Stephen Hawking, in his A Briefer History of Time, writes:
"It is possible to travel to the future. That is, relativity shows that it is possible to create a time machine that will jump you forward in time." (page 105)
This is exactly the point that has turned physics into some sort of metaphysics, and so created the great crisis of physics. While QM is based on the complex Schrödinger wave equation, that is, on complex numbers, in which the symbol √(−1) serves to separate two different orders of reality, such as time and space, GR is based on real numbers alone.
The Basic Systemic Unit concept, based on Euler's relation, is in fact the definition of a quantum, and as such it can be used to deduce all the fundamental equations of physics, as can be seen in my paper... resolving in this way that great crisis of physics.
Quantum Physics
Edgar Paternina
retired electrical engineer
Relevant answer
Answer
In fact in EE, in power systems, when dealing with three-phase systems we reduce them to a one-phase equivalent, and for the power system to work properly in steady state the three phases must be balanced to avoid blackouts.
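The balanced three-phase remark can be illustrated with the same Euler relation discussed in the question, e^{jθ} = cos θ + j sin θ: three equal phasors spaced 120° apart sum to zero, which is why the analysis reduces to a single-phase equivalent. A minimal sketch:

```python
import cmath
import math

# Euler's relation e^{j*theta} = cos(theta) + j*sin(theta) gives each phase
# voltage as a complex phasor; magnitudes in per-unit.
V = 1.0
phases = [V * cmath.exp(1j * math.radians(a)) for a in (0.0, -120.0, -240.0)]

# A balanced three-phase set sums to zero in steady state
# (no neutral current), so one phase represents the whole system.
assert abs(sum(phases)) < 1e-12
```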
  • asked a question related to Mathematics
Question
6 answers
I have been seeing and following a lot of work on these topics, it even seems that there are more results on them than on the corresponding classical topics, particularly on general topology.
What could be the cause of such results?
Relevant answer
Answer
Dear Colleagues,
If U and E are fixed sets, A is a subset of E, and F is a function from A to the power set of U, then F itself should be called a soft set over U and E, instead of the pair (F, A).
Thus, since set-valued functions can be identified with relations, a soft set over U and E is actually a relation from E to U, that is, a subset of the product set of E and U.
Therefore, several definitions and theorems on soft sets, consuming a lot of pages, are superfluous. Moreover, notation and terminology can be simplified.
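The identification sketched above can be made concrete: a soft set given as F : A → P(U) corresponds to the relation {(e, u) : u ∈ F(e)} ⊆ E × U. A minimal sketch in Python (the parameter and house names are hypothetical):

```python
def soft_set_to_relation(F):
    """Convert a soft set, given as a mapping e -> F(e) (a subset of U),
    into the corresponding relation: the set of pairs (e, u) with u in F(e)."""
    return {(e, u) for e, Fe in F.items() for u in Fe}

# Hypothetical example: parameters describing houses h1..h3
F = {"cheap": {"h1", "h2"}, "wooden": {"h2", "h3"}}
R = soft_set_to_relation(F)
assert R == {("cheap", "h1"), ("cheap", "h2"),
             ("wooden", "h2"), ("wooden", "h3")}
```

Under this view, operations on soft sets become the familiar operations on relations (union, intersection, composition), which is the simplification the answer argues for.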
  • asked a question related to Mathematics
Question
23 answers
Relevant answer
Answer
<<Einstein's Geometrical versus Feynman's Quantum-Field Approaches to Gravity Physics>>
If we turn to the already mentioned simplification of space in the form of a helix on a cylinder, then gravity is the force generated by the limit cycle that tightens the pitch of the helix to zero at the point where the helix degenerates into a circle. As for quantized fields, these are limit cycles in the dual space, so they are not responsible for gravity.
  • asked a question related to Mathematics
Question
6 answers
Has our mathematical knowledge progressed as much as contemporary science?
1- Consider a rectangle in the second dimension; its components are lines, and its geometric characteristics are perimeter and area.
2- Consider a cube in the third dimension; its components are planes, and its geometric characteristics are area and volume.
3- When this figure is transferred to the 4th dimension, what are its components called, and what are its geometric characteristics called? With the transfer to the 5th and higher dimensions, our mathematics has nothing to say. A rectangle is just a simple shape; what about complex geometric shapes?
According to new physical theories such as string theory, we need to study different dimensions.
Relevant answer
Answer
Dear Yousef, we cannot give "names" to each dimension n > 3, because we would need an infinite number of names! (How would we name the cube in a space with 357 dimensions?) If n > 3, it is sufficient to add the prefix "hyper", and every mathematician will understand the sense correctly.
The best-known description is in dimension n = 3. We have the cube, having as faces 6 bounded pieces of planes (that is, 6 equal squares situated in 6 different planes).
The analogue of the cube in dimension n = 2 is the square (not the rectangle), having as "faces" 4 equal segments situated on 4 different lines.
The analogue of the cube in every n-dimensional space R^n with n > 3 is called a hypercube.
The hypercube in 4 dimensions has equal cubes as faces, and each such face is situated in a 3-dimensional space R^3.
The hypercube in 5 dimensions has equal hypercubes from R^4 as faces, and so on.
No contradiction, all clear!
The same holds for the sphere: sphere in 3 dimensions, circle in 2 dimensions, hypersphere in every dimension n > 3. Here the equations defining all these mathematical objects are obviously similar.
Hypercubes and hyperspheres have hypervolumes!
So, to study string theory efficiently and seriously, you need more and more advanced mathematics!
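The pattern described in the answer can be checked combinatorially: an n-cube has 2^n vertices and 2n facets, each facet an (n−1)-cube, and side s gives hypervolume s^n. A minimal sketch:

```python
def hypercube_properties(n, side=1.0):
    """Basic combinatorics of the n-dimensional hypercube."""
    return {
        "vertices": 2 ** n,
        "facets": 2 * n,          # each facet is an (n-1)-cube
        "hypervolume": side ** n,
    }

assert hypercube_properties(2)["facets"] == 4   # square: 4 edges
assert hypercube_properties(3)["facets"] == 6   # cube: 6 squares
assert hypercube_properties(4)["facets"] == 8   # tesseract: 8 cubic cells
```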
  • asked a question related to Mathematics
Question
4 answers
Modifying the original Feistel structure will it be feasible to design a lightweight and robust encryption algorithm. Somehow changing the structure's original flow and adding some mathematical functions there. I welcome everyone's view.
Relevant answer
Answer
Yes, it is indeed feasible to design a lightweight algorithm based on the Feistel structure. The Feistel network is a popular symmetric structure used in many modern cryptographic algorithms, such as DES (Data Encryption Standard). The design of a lightweight Feistel-based algorithm can effectively balance security and efficiency, making it suitable for environments with constrained resources, such as IoT devices and resource-limited systems.
Key Considerations for Designing a Lightweight Feistel-Based Algorithm
Feistel Structure Basics:
The Feistel structure divides the data into two halves and applies a series of rounds where the right half is modified using a function (often called the round function) combined with a subkey derived from the main key.
The left and right halves are then swapped after each round, employing the same round function iteratively over several rounds.
Lightweight Design Goals:
Reduced Resource Usage: The algorithm should minimize memory and processing requirements, which are crucial in lightweight applications.
Efficient Implementation: It should have efficient implementations in hardware (e.g., FPGAs, ASICs) as well as software (e.g., microcontrollers).
Security: While optimizing for lightweight design, the algorithm must maintain a sufficient level of security against common attacks (such as differential and linear cryptanalysis).
Steps in Designing a Lightweight Feistel Algorithm
Key Design Choices:
Number of Rounds: Determine the optimal number of rounds needed to achieve desired security without excessive computational cost. For lightweight applications, 4 to 8 rounds may be sufficient.
Block Size: Choose a block size that is suitable for the intended application. Smaller block sizes (e.g., 64 or 128 bits) may be appropriate for constrained environments.
Key Size: Develop a flexible key size that provides adequate security while keeping the implementation lightweight. A key size between 80 and 128 bits is commonly used for lightweight designs.
Round Function Design:
Simplicity and Efficiency: The round function should be computationally efficient, possibly utilizing modular arithmetic or simple logical operations (AND, OR, XOR) to enhance speed and reduce footprint.
Subkey Generation: Efficient and secure key scheduling is essential to generate round keys from the primary key, ensuring that each round has a unique key.
Attack Resistance:
Differential and Linear Cryptanalysis: Analyze the design for vulnerabilities to these forms of attacks. The choice of S-boxes in the round function can significantly enhance resistance.
Avalanche Effect: Ensure that a small change in the input or the key results in a significant change in the output.
Performance Optimization:
Implementation Flexibility: Design the algorithm to allow for easy adaptation for different platforms (hardware vs. software) to maximize performance.
Minimalistic Approach: Reduce unnecessary complexity in the algorithm to lower resource consumption, focusing only on essential components.
Example Lightweight Feistel Structure
While developing a specific algorithm, you could consider a structure similar to the following:
function LightweightFeistelEncrypt(plaintext, key):
    Split plaintext into left (L0) and right (R0)
    For i from 1 to n (number of rounds):
        Ri = Li−1 XOR F(Ri−1, Ki)
        Li = Ri−1
    return (Ln, Rn)

function F(input, k):
    // Simple round function using lightweight operations,
    // e.g. small S-boxes and XOR operations
    return output
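The pseudocode above can be turned into a runnable toy. The XOR-and-rotate round function below is purely illustrative (not a vetted cipher); it assumes a 16-bit block split into two 8-bit halves and one byte-sized key per round. Decryption runs the same structure with the round keys reversed, which is the defining property of a Feistel network:

```python
def rotl8(x, r):
    """Rotate an 8-bit value left by r bits."""
    return ((x << r) | (x >> (8 - r))) & 0xFF

def round_fn(x, k):
    # Toy round function (illustrative only): XOR with the round key,
    # then a fixed rotation. A real design would use S-boxes here.
    return rotl8(x ^ k, 3)

def feistel_encrypt(block16, round_keys):
    """Encrypt a 16-bit block: (L, R) -> (R, L XOR F(R, k)) each round."""
    L, R = block16 >> 8, block16 & 0xFF
    for k in round_keys:
        L, R = R, L ^ round_fn(R, k)
    return (L << 8) | R

def feistel_decrypt(block16, round_keys):
    """Invert feistel_encrypt by applying the round keys in reverse."""
    L, R = block16 >> 8, block16 & 0xFF
    for k in reversed(round_keys):
        L, R = R ^ round_fn(L, k), L
    return (L << 8) | R

keys = [0x1A, 0x2B, 0x3C, 0x4D]   # hypothetical round keys
ct = feistel_encrypt(0xBEEF, keys)
assert feistel_decrypt(ct, keys) == 0xBEEF
```

Note that decryption never needs the inverse of the round function, only the reversed key schedule; this is why even non-invertible round functions are usable, a key advantage of the Feistel structure in lightweight designs.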