Mathematics - Science topic
Mathematics, Pure and Applied Math
Questions related to Mathematics
I published a presentation, "Three-Body Problem", in which new equations for the interaction of celestial bodies and a solution method are proposed.
If you look on the Internet, it seems that everyone is enthusiastically looking for a solution to the three-body problem. And then someone exclaims: "I found it!" "And around, the silence taken as a basis," as Vaenga sings in Russian. It turns out that for those who are passionately searching, it is the process of searching that matters. It cannot be stopped. You must constantly experience the drama of chaos in the Universe and be proud of belonging to those physicists who tirelessly prove that there is no other World, because it follows from the mathematical solution.
Yes, the World is mathematically substantiated: mathematics is a project, a tool, and the material for creating the World. But mathematics can also be just a game for the mind. You need to find the mathematics that describes the project of the Universe. Mathematics that proves there is no Building cannot be the project of the Universe.
It turns out that many seekers like mathematical games. I appeal to seekers of truth.
Solomon Khmelnik
Introduction: Conceptual Remnants and the Challenge of Physical Objectivity
Physics has long been regarded as the science dedicated to uncovering the fundamental laws governing nature. However, in contemporary theoretical physics, there is an increasing reliance on mathematical models as the primary tool for understanding reality. This raises fundamental questions:
- Is physics still unknowingly entangled in issues arising from emergent effects?
- Could these emergent effects create a gap between physical reality and the virtual constructs generated through mathematical modeling?
Throughout the history of science, there have been instances where physicists, without fully grasping fundamental principles, formulated models that later turned out to be mere consequences of emergent effects rather than reflections of objective reality. For instance, in classical thermodynamics, macroscopic quantities such as temperature and pressure emerged as statistical descriptions of microscopic particle behavior rather than fundamental properties of nature.
The crucial question today is: Are we still facing similar emergent illusions in modern theoretical physics? Could it be that many of the sophisticated mathematical models we use are not pointing to an underlying physical reality but are merely the byproducts of our perception and modeling techniques?
Mathematical Models and Conceptual Remnants: Are We Chasing a Mirage?
Mathematics has always been an essential tool in physics, but over time, it has also shaped the way we think about physical reality. In many areas of theoretical physics, mathematical methods have advanced to a point where we may no longer be discovering physical truths but instead fine-tuning mathematical structures to fit our theoretical frameworks.
- Has theoretical physics become a vast computational engine, focusing on adjusting relationships between mathematical variables rather than seeking an independent physical reality?
- Could it be that many of the concepts emerging from our models are mere reflections of mathematical structures rather than objective entities in nature?
Examples of such concerns can be found in theories like string theory, where extra spatial dimensions and complex symmetry groups are introduced as necessary mathematical elements, despite lacking direct experimental verification. This raises the possibility that some of these theoretical constructs exist only because they are mathematically required to make the model internally consistent, rather than because they correspond to something physically real.
Fundamental Critique: Should We Even Be Searching for Physical Objectivity?
One of the most profound implications of this discussion is that the very question of whether physics describes "physical reality" might be fundamentally misguided.
Werner Heisenberg once argued that physics will never lead us to an understanding of an objective physical reality. Instead, what we develop are models that describe relationships between observable phenomena—without necessarily revealing the true nature of reality itself.
- Perhaps physics should not aim to discover a reality independent of our models since every model is ultimately a mathematical structure shaped by human perception.
- If the goal of physics is not to describe "absolute truth" but rather to create predictive models, should we then accept that we will never fully grasp "what actually exists"?
Finally: Between Computational Accuracy and Physical Reality
The final question in this discussion is: Are we still trapped in emergent effects that arise purely from our mathematical approaches rather than reflecting an objective physical reality?
- Should physicists strive to distinguish between mathematical models and physical objectivity, or is such a distinction inherently meaningless?
- Is the search for an independent physical reality a conceptual mistake, as Heisenberg and others have suggested?
Ultimately, this discussion seeks to examine whether physics is merely a computational framework for describing phenomena, or if we are still subconsciously searching for a physical reality that might forever remain out of reach.
Does Every Mathematical Framework Correspond to a Physical Reality? The Limits of Mathematical Pluralism in Physics
Introduction
Physics has long been intertwined with mathematics as its primary tool for modeling nature. However, a fundamental question arises:
- Does every possible mathematical framework correspond to a physical reality, or is our universe governed by only a limited set of mathematical structures?
This question challenges the assumption that any mathematical construct must necessarily describe a real physical system. If we take a purely mathematical perspective, an infinite number of logically consistent mathematical structures can be conceived. Yet, why does our physical reality seem to adhere to only a few specific mathematical frameworks, such as differential geometry, group theory, and linear algebra?
Important Questions for Discussion
Mathematical Pluralism vs. Physical Reality:
- Are all mathematically consistent systems realizable in some physical sense, or is there a deeper reason why certain mathematical structures dominate physical theories?
- Could there exist universes governed by entirely different mathematical rules that we cannot even conceive of within our current formalism?
Physics as a Computationally Limited System:
- Is our universe constrained by a specific subset of mathematical frameworks due to inherent physical principles, or is this a reflection of our cognitive limitations in developing theories?
- Why do our fundamental laws of physics rely so heavily on certain mathematical structures while neglecting others?
The Relationship Between Mathematics and Nature:
- Is mathematics an inherent property of nature, or is it merely a tool that we impose on the physical world?
- If every mathematical structure has an equivalent physical reality, should we expect an infinite multiverse where every possible mathematical law is realized somewhere?
Beyond Mathematical Formalism:
- Could there be fundamental aspects of physics that are not fully describable within any mathematical framework?
- Does the reliance on mathematical models lead us to mistakenly attribute physical existence to purely abstract mathematical entities?
Philosophical Implications
This discussion also touches on a deeper philosophical question:
Are we merely discovering the mathematical laws of an objectively real universe, or are we creating a mathematical framework that fits within the constraints of our own perception and cognition?
If mathematics is merely a tool, then our physical theories may be contingent on human cognition and not necessarily reflective of a deeper objective reality. Conversely, if mathematics is truly the "language of nature," then understanding its full structure might reveal hidden aspects of the universe yet to be discovered.
Werner Heisenberg once suggested that physics will never lead us to an objective physical reality, but rather to models that describe relationships between observable quantities. Should we accept that physics is not about describing a fundamental "truth," but rather about constructing the most effective predictive models?
I would need a detailed mathematical derivation of mimetic finite difference operators in Cartesian as well as curvilinear, in particular spherical, coordinates. I also need to know how the pole problem in spherical coordinates can be tackled when using mimetic operators. Does anybody have a hint for me?
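For what it is worth, here is a minimal 1D staggered-grid sketch of the mimetic (support-operator) idea in Cartesian coordinates: a divergence acting on face values and a gradient acting on cell-centred values, built so that a discrete integration-by-parts (summation-by-parts) identity holds exactly, which is the defining mimetic property. The grid size and test fields are arbitrary; the curvilinear and pole-treatment parts of the question are not addressed here.

```python
import numpy as np

# 1D mimetic (support-operator) sketch on [0, 1]:
# cell centres x_i carry the scalar p, faces x_{i+1/2} carry the flux u.
N = 8
h = 1.0 / N
xc = (np.arange(N) + 0.5) * h          # N cell centres
xf = np.arange(N + 1) * h              # N+1 faces

# Divergence: faces -> cells, (D u)_i = (u_{i+1/2} - u_{i-1/2}) / h
D = (np.eye(N, N + 1, k=1) - np.eye(N, N + 1)) / h

# Gradient: cells -> interior faces, (G p)_{i+1/2} = (p_{i+1} - p_i) / h
G = (np.eye(N - 1, N, k=1) - np.eye(N - 1, N)) / h

# Arbitrary smooth test fields
p = np.sin(np.pi * xc)
u = np.cos(2 * np.pi * xf)

# Discrete divergence theorem (summation by parts):
#   h * p.(D u) + h * (G p).u_interior  =  p_N u_right - p_1 u_left
lhs = h * p @ (D @ u) + h * (G @ p) @ u[1:-1]
rhs = p[-1] * u[-1] - p[0] * u[0]
print(lhs, rhs)   # the two numbers agree to machine precision
```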
ORCiD: 0000-0003-1871-7803
February 10, 2025
Absolute Collapse Condition
Mass Acquisition at Planck Frequency:
In Extended Classical Mechanics (ECM), any massless entity reaching the Planck frequency (fp) must acquire an effective mass (Mᵉᶠᶠ = hf/c² = 21.77 μg). This acquisition of mass is a direct consequence of ECM's mass induction principle, where increasing energy (via f) leads to mass acquisition.
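As a quick arithmetic cross-check of the quoted figure: 21.77 μg is numerically the Planck mass sqrt(ħc/G), and Mᵉᶠᶠ = hf/c² reproduces it provided fp is taken as E_P/h rather than 1/t_P. A minimal sketch, with that definition of fp stated as an explicit assumption:

```python
import math

h    = 6.62607015e-34      # Planck constant, J s
hbar = h / (2 * math.pi)   # reduced Planck constant, J s
c    = 2.99792458e8        # speed of light, m/s
G    = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2

m_planck = math.sqrt(hbar * c / G)   # Planck mass, kg
f_p = m_planck * c**2 / h            # ASSUMPTION: fp defined as E_P / h
m_eff = h * f_p / c**2               # Meff = h f / c^2 as in the post

print(f"Planck mass   : {m_planck * 1e9:.2f} micrograms")   # ~21.76 micrograms
print(f"Meff = hf/c^2 : {m_eff * 1e9:.2f} micrograms")      # same value by construction
```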
Gravitational Collapse:
At the Planck scale, the induced gravitational interaction is extreme, forcing the entity into gravitational collapse. This is a direct consequence of the mass acquisition at the Planck frequency, where the gravitational effects become significant.
ECM's Mass-Induction Perspective
Apparent Mass and Effective Mass:
The apparent mass (−Mᵃᵖᵖ) of a massless entity contributes negatively to its effective mass. However, at the Planck threshold, the magnitude of the induced effective mass (|Mᵉᶠᶠ|) surpasses |−Mᵃᵖᵖ|, ensuring that the total mass is positive:
|Mᵉᶠᶠ| > |−Mᵃᵖᵖ|
This irreversible transition confirms that any entity at fp must collapse due to self-gravitation.
Implications for Massless-to-Massive Transition
Behaviour Below Planck Frequency:
Below the Planck frequency, a photon behaves as a massless entity with effective mass determined by its energy-frequency relation. However, at fp, the gravitating mass (Mɢ) and effective mass (Mᵉᶠᶠ) undergo a shift where induced mass dominates over negative apparent mass effects.
Planck-Scale Energy:
Planck-scale energy is not just a massive state—it is a self-gravitating mass that collapses under its own gravitational influence. This suggests that at Planck conditions, the gravitationally induced mass dominates over any negative mass contributions, maintaining a positive mass regime.
Threshold Dominance at the Planck Scale
Gravitational Mass Dominance:
At the Planck scale, gravitational mass (Mɢ) is immense due to the fundamental gravitational interaction. Since |+Mɢ| ≫|−Mᵃᵖᵖ|, the net effective mass remains positive:
Mᵉᶠᶠ = Mɢ + (−Mᵃᵖᵖ) ≈ +Mɢ
This suggests that at Planck conditions, the gravitationally induced mass dominates over any negative mass contributions.
Transition Scenarios for Negative Effective Mass
Conditions for Negative Effective Mass:
The condition −Mᵃᵖᵖ > Mɢ could, in principle, lead to a transition where the effective mass becomes negative. This might occur under strong antigravitational influences, possibly linked to:
• Dark energy effects in cosmic expansion.
• Exotic negative energy states in high-energy physics.
• Unstable quantum fluctuations near high-energy limits.
Linking Effective Mass to Matter Mass at Planck Scale
Matter Mass Emergence:
Since Mᵉᶠᶠ ≈ Mᴍ under these extreme conditions, it follows that matter mass emerges predominantly as a consequence of gravitational effects. This aligns with ECM’s perspective that mass is not an intrinsic property but rather a dynamic response to gravitational interactions.
Conclusion
This work on ECM provides a detailed and nuanced understanding of how gravitational interactions can induce mass in initially massless particles, leading to gravitational collapse at the Planck scale. This perspective not only aligns with fundamental principles but also offers potential explanations for cosmic-scale phenomena involving dark matter, dark energy, and exotic gravitational effects. The detailed mathematical foundations and the implications of apparent mass and effective mass in ECM further clarify how mass can dynamically shift between positive, zero, and negative values based on gravitational and antigravitational influences.
The concept of infinity is well known in mathematics, and I have no disagreement with it. But in the real world, for things like the number of species, the number of water molecules, or even the number of stars, does anything exist that is beyond finite?
It is well known that the Fields Medal is intended to reward excellent research by mathematicians under forty years old, because many mathematicians think that the main contributions in a researcher's life are made before forty. I do not believe so. It is true, by common experience, that students of mathematics, who are constantly interacting with several (and sometimes very different) subjects at the same time, develop a wealth of good ideas which inspire them and lead them to obtain new and interesting results. This interaction between different branches is expected to remain (more or less consciously) up to forty years old. For the same reason, if necessary, any researcher, independently of his/her age, may return to study different mathematical subjects and create new important contributions, even in his/her very specific area of research. Furthermore, it may help to overcome a blockade. It is remarkable that studying different subjects again inspires you and, combined with your experience and knowledge, lets you see the contents of these subjects from a new perspective, often helping in your own area of research, creating new knowledge and solving problems. This is why I believe that the career of each mathematician is always worthwhile and continuous, independently of his/her age, as demonstrated by many senior mathematicians in all areas of research, who are living examples for us.
What is your opinion on the relationship between the age of a researcher and the quality of his/her contributions?
Thank you very much beforehand.
Hi all
I'm a final-semester UG B.Tech student. I want to do research in mathematics and publish a paper. Can anyone help me find problem statements for the same?
# 163
Dear Sarbast Moslem, Baris Tekin Tezel, Ayse Ovgu Kinay, Francesco Pilla
I read your paper:
A hybrid approach based on magnitude-based fuzzy analytic hierarchy process for estimating sustainable urban transport solutions
My comments:
1- In the abstract you say “The study employs the newly developed Magnitude Based Fuzzy Analytic Hierarchy Process, chosen for its accuracy and computational efficiency compared to existing methods”
Are you aware that Saaty explained that it is incorrect to apply fuzzy in AHP because it is already fuzzy?
Since when does using intuition ensure accuracy? Do you have any proof of what you say?
Sensitivity analysis does not ensure quality; what it does is show how strong a solution is.
2- On page 2 you talk about linear regression for evaluation. Linear regression is used to predict the value of a dependent variable based on the value of one or more independent variables.
3- On page 3: “AHP offers mechanisms for ensuring consistency in decision-making through pairwise comparisons and sensitivity analyses”
True, AHP ensures consistency by FORCING adherence to transitivity, by requiring the DM to change his/her estimates. What it ensures is transitivity without any mathematical foundation, just for the sake of the method. Therefore, this ‘consistency’ is fabricated.
Even if there is real consistency, it reflects the coherence of the DM, but it does not necessarily mean that this consistency and the resulting weights can be applied to the real world. There is no mathematical support for this assumption, nor for assuming that the world is consistent, let alone common sense; it is convenient indeed for the method, but useless for evaluation.
4- Page 3 “On the other hand, expressing the data in the form of fuzzy numbers to better express the uncertainty in individual judgments has led to the suggestion and widespread use of fuzzy AHP (FAHP) methods, which include calculations based on fuzzy arithmetic”
5- In several parts the paper mentions validation of results. That is only a wish, because no MCDM method has any real yardstick to compare against. It is another, and very common, fallacy.
6- Page 8, Fig. 4. I understand that waiting time does not depend on speed but on the frequency of bus arrivals (number of buses of the same route per hour). The higher the frequency, the shorter the waiting time. What role does bus speed play here? It appears that this concept does not come from transportation experts.
“Reaching to the destination without shifting buses”
I guess that “interchanging buses or routes” is more adequate.
“Need of transfer” normally refers to paying a single ticket that allows a passenger to change bus routes, that is, he/she can board another bus with the same ticket.
Your definition of “Time availability” does not seem very coherent, because what does “Number of times that UBT is deployed over a route” mean?
“Limited time of use” (C4.2)? I understand that you mean ‘Operating hours’, that is, when buses start and finish running, or simply ‘Scheduling’. I am afraid that your expression does not belong to the urban transportation industry.
Please do not be offended by my observations; that is not my intention. It is only that if you want readers to understand what you write, you must use the appropriate words. If not, your work risks being misunderstood and downgraded.
Why is “providing new buses” related to “comfort” at stops?
7- Page 9 “In other words, it models the state of uncertainty in the mind of the decision-maker.”
Very true, but why can those uncertainties of a MIND be translated to real life? In other words, what theorem or axiom supports the claim that what the DM estimates can be used in the real world? It is a simple assumption that even defies common sense. In my opinion, it does not make any sense to apply fuzzy to invented values. Yes, one will get a crisp value, and what is it good for? For nothing.
These are some of my comments. Hope that they can help you
Nolberto Munier
Mathematics and statistics have been used to develop encryption techniques used to protect against cyber threats and ensure the security of information and data. Mathematics is also used to generate complex passwords and develop machine learning models to detect cyber attacks and threats in networks. In addition, various mathematical concepts such as algebra, geometry, statistics, and numerical analysis are used in specific areas such as security software development, software bug checking, databases, and complex networks.
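As a toy illustration of the kind of mathematics involved (modular arithmetic in key agreement), here is a Diffie-Hellman-style exchange with deliberately tiny, insecure parameters; real systems use primes of thousands of bits and vetted cryptographic libraries.

```python
import random

# Toy Diffie-Hellman key agreement with deliberately tiny parameters (textbook example).
p, g = 23, 5                      # small public prime and generator

a = random.randrange(2, p - 1)    # Alice's private key
b = random.randrange(2, p - 1)    # Bob's private key

A = pow(g, a, p)                  # Alice's public value
B = pow(g, b, p)                  # Bob's public value

# Both sides derive the same shared secret without ever transmitting a or b.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
print("shared secret:", shared_alice)
```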
A control volume is a mathematical construct used in the process of creating mathematical models of physical processes.
We assume that the control volume in R^4 space (3D+t external control) is incomplete and almost useless for dealing with complex physical and mathematical problems.
On the other hand, the 4D x-t unitary control volume called Cairo techniques and B matrix space is the complete universal space adequate for the description of classical and quantum physics situations as well as mathematical events.
In a way, this is the correct approach to the unified field theory (energy density).
My answer is negative and thoroughly substantiated via two points.
1) The easier part (the lesser limiting factor): he has to comprehend the approach used in physics thinking and epistemology (i.e. working with hypotheses instead of etiological thinking, refraining from teleological inquiries, etc.), the importance of relying on maths, the relevance of equations, etc. Not easy, but it can be accomplished to a large degree by serious commitment and authentic interest.
2) Ease with representations (geometrical, model-wise, etc.) of physical systems and working cognitively on that level, abstract applicational mathematical thinking (this may not be easy even for mathematicians), etc. This is something that requires, in my opinion, an in-born trait.
I need CBSE India 10th board examination data for the mathematics subject.
In the attached paper titled "Approximate expressions for BER performance in downlink mmWave MU-MIMO hybrid systems with different data detection approaches", the mathematical operator ℝ{.} is used with a minus sign in Equation (14). Can anybody help explain the meaning of this operator? Why is a minus sign used?
Hi dear professors. I want to share with you a little arithmetic formula in order to learn, in your view, its usefulness and meaning. This is the formula, as in the attached picture: 1 + 8*E^2 ≠ 3*F^2, where E and F can be any integers.
The proof is easy to follow in the article at this link:
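Since the linked proof is not reproduced here, a quick brute-force sanity check of the statement over a small range of integers may be useful (one can also see it by reducing modulo 8: 1 + 8E² ≡ 1, while 3F² ≡ 0, 3, or 4 mod 8). A minimal sketch:

```python
# Brute-force check that 1 + 8*E**2 == 3*F**2 has no solutions for small integers.
solutions = [
    (E, F)
    for E in range(-200, 201)
    for F in range(-200, 201)
    if 1 + 8 * E**2 == 3 * F**2
]
print(solutions)   # expected: []  (consistent with the claim 1 + 8E^2 != 3F^2)
```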

Under the theme “Mathematics for All, Mathematics as a Path to Development,” one of the best examples of math's impact on daily life is its application in medical imaging, such as CT and CBCT scanners.
How do these devices work?
Mathematics is at the heart of it! When X-rays are emitted from a source, they pass through tissues and are captured by detectors. Along their path, these rays undergo absorption or attenuation, which depends on the properties of the tissue they traverse. Each ray's path represents a unique equation, where the unknowns are the attenuation values of the points it passes through.
As the source rotates around the object, it generates multiple paths, creating a system of n equations with n unknowns. Sophisticated software solves this complex system using advanced algorithms, such as the Radon Transform or iterative methods, to reconstruct the internal structure of the object as a detailed 3D image.
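As a toy illustration of the iterative reconstruction idea described above (not of any particular scanner's software), the Kaczmarz method, which underlies algebraic reconstruction techniques (ART), recovers the unknown attenuation values x from ray-sum equations a_i·x = b_i by repeatedly projecting the estimate onto each equation. The 3x3 "image" and ray geometry below are made-up toy data.

```python
import numpy as np

# Toy "image": 9 unknown attenuation values on a 3x3 grid, flattened to a vector.
x_true = np.array([1.0, 0.2, 0.0,
                   0.5, 2.0, 0.3,
                   0.0, 0.1, 1.5])

# Each row of A is one ray: 1 where the ray crosses a pixel, 0 elsewhere.
# Here: 3 horizontal rays, 3 vertical rays, 2 diagonal rays (made-up geometry).
A = np.array([
    [1, 1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 1, 0, 0, 1],
    [1, 0, 0, 0, 1, 0, 0, 0, 1],
    [0, 0, 1, 0, 1, 0, 1, 0, 0],
], dtype=float)
b = A @ x_true                      # measured ray sums (total attenuation per ray)

# Kaczmarz / ART iteration: project the estimate onto one equation at a time.
x = np.zeros(9)
for sweep in range(200):
    for a_i, b_i in zip(A, b):
        x += (b_i - a_i @ x) / (a_i @ a_i) * a_i

print(np.round(x, 3))   # approaches the minimum-norm solution consistent with the rays
```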
How does this contribute to development?
Mathematics is the backbone of such groundbreaking technologies, enabling precise diagnostics and effective treatment planning. This practical application of equations demonstrates how math can transform our world.
Now it’s your turn:
What other applications of mathematics in medicine and engineering do you think are underexplored? How can we further harness the power of math as a tool for development?
Let’s explore the incredible role of mathematics in everyday life together in this discussion!
I want a qualitative scale that contains questions revealing mathematics teachers' perceptions of beauty and simplicity in mathematics.
Conventional fragility is defined as a probability of failure. Based on concise mathematics, it is found that if fragility is the probability of collapse, then the design curve is the probability of survival. The sum of these two is equal to 1. Consequently, if a member (structure) is designed based on a given curve, then its fragility (probability of collapse) is also known!
Scale the horizontal axis s of a fragility curve of a structure between 0 and 1. Then:
what is the probability of collapse at s = 0.5?
what is the probability of survival at s = 0.5?
Do you agree with the above findings? Why?
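A minimal numeric sketch of the complementarity described above, assuming a lognormal fragility curve with an illustrative median theta = 0.5 and dispersion beta = 0.4 on the rescaled axis (both parameter values are assumptions for illustration only):

```python
import math

def fragility(s, theta=0.5, beta=0.4):
    """Lognormal fragility: probability of collapse at demand level s (0 < s <= 1)."""
    z = math.log(s / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

s = 0.5
p_collapse = fragility(s)
p_survival = 1.0 - p_collapse
print(p_collapse, p_survival, p_collapse + p_survival)
# At s = theta = 0.5 the two probabilities are each 0.5, and they always sum to 1.
```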
Can the physical reality be represented mathematically?
Well, actual physics can be represented mathematically with the Basic Systemic Unit, based on Euler's relation, with its most remarkable property of remaining the same in spite of change, which permits deducing the fundamental equations of physics, such as:
* that of the pendulum, a real harmonic oscillator
* that of the gravitational field, including that of the planet Mercury obtained by Einstein, but in this case obtained with a mathematical tool not nearly as complicated as the tensor analysis used there
* those of SR in another approach, in which linear motion is just a special case of the more general solution obtained with the BSU concept, in which covariance is included as a consequence of the isomorphic property of Euler's relation mentioned above, and finally
* Schrödinger’s wave equation
For those interested in the way all this is obtained you can see my papers:
QUANTUM PHYSICS
https://www.researchgate.net/publication/384190006_Quantum_Physics
A QUANTUM THEORY OF GRAVITATION
https://www.researchgate.net/publication/385592651_A_Quantum_Theory_of_GravitationNewpdf
SPECIAL RELATIVITY WITH ANOTHER APPROACH
https://www.researchgate.net/publication/382357270_Special_Relativity_Another_Approach
which I really hope will contribute to overcoming the great crisis of physics caused by the great incompatibility between QM and GR.
So yes, actual physics can be represented mathematically in a really coherent way, but for that it is necessary to make a real paradigm shift.
Edgar Paternina
retired electrical engineer
Einstein overcomplicated the theory of special and general relativity simply because he did not define time correctly.
A complete universal or physical space is a space where the Cartesian coordinates x, y, z are mutually orthogonal (independent) and time t is orthogonal to x, y, z.
Once found, this space would be able to solve almost all problems of classical and quantum physics as well as most of mathematics without discontinuities [A*].
Note that R^4 mathematical spaces such as Minkowski, Hilbert, Riemann, etc., are all incomplete.
Schrödinger space may or may not be complete.
Heisenberg matrix space is neither statistical nor complete.
All the above mathematical constructions are not complete spaces in the sense that they do not satisfy the A* condition.
In conclusion, although Einstein pioneered the 4-dimensional unitary x-t space, he missed the correct definition of time.
Universal time t* must be redefined as an inseparable dimensionless integer woven into a 3D geometric space.
Here, universal time t* = Ndt* where N is the dimensionless integer of iterations or the number of steps/jumps dt*.
Finally, it should be clarified that the purpose of this article is not to underestimate Einstein's great achievements in theoretical physics, such as the photoelectric effect equation, the Bose-Einstein equation, the laser equations, etc., but only to discuss and explain the main aspects and flaws, if any, of his theory of relativity.
Computational topology of solitons
The well-established research area of algebraic topology is currently going interdisciplinary with computer science in many directions. Topological Data Analysis opens new opportunities in visualization for modeling and special mappings. Studies of the metrics used and of simplicial complexes are promising for future results in this area of mathematics.
Today, machine learning is, on one side, a tool for analysis in topology optimization, topological persistence, and optimal homology problems; on the other side, topological features in machine learning are a new area of research, with topological layers in neural networks, topological autoencoders, and topological analysis for the evaluation of generative adversarial networks being general aspects of topological machine learning.
From a practical point of view, results in this area are important for research on solitary-like waves, biomedical image analysis, neuroscience, physics, and many other fields.
That gives us the opportunity to establish and scale up an interdisciplinary team of researchers to apply for funding for fundamental science research in an interdisciplinary field.
More Info: https://euraxess.ec.europa.eu/jobs/249043
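To make the "topological persistence" part concrete, here is a self-contained sketch of 0-dimensional persistent homology (connected components of a Vietoris-Rips filtration) computed with a union-find structure, with no TDA library required. The sampled point cloud (a noisy circle) is arbitrary illustrative data.

```python
import numpy as np

def h0_persistence(points):
    """0-dimensional persistence: birth = 0 for every point, death = the edge
    length at which its component merges into another one."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))

    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri          # merge: one component dies at scale d
            deaths.append(d)
    return [(0.0, d) for d in deaths]  # the one immortal component is omitted

# Illustrative data: 30 noisy samples of a circle.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 30)
cloud = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(30, 2))
print(h0_persistence(cloud)[:5])     # (birth, death) pairs of short-lived components
```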
In a project involving the analysis of log-linear outcomes, I have not found the solution to this problem. (log is the natural logarithm)
I assume it is simple, but I am out of clues, and I hope someone more mathematically proficient can help.
Nominations are expected to open in the early part of the year for the Breakthrough Prize in Fundamental Physics. Historically nominations are accepted from early/mid-January to the end of March, for the following year's award.
Historically, the foundation has also had a partnership with ResearchGate:
The foundation also awards major prizes for Life Sciences and for Mathematics, and has further prizes specific to younger researchers.
So who would you nominate?
Differential Propositional Calculus • Overview
❝The most fundamental concept in cybernetics is that of “difference”, either that two things are recognisably different or that one thing has changed with time.❞
— W. Ross Ashby • An Introduction to Cybernetics
Differential logic is the component of logic whose object is the description of variation — the aspects of change, difference, distribution, and diversity — in universes of discourse subject to logical description. To the extent a logical inquiry makes use of a formal system, its differential component treats the use of a differential logical calculus — a formal system with the expressive capacity to describe change and diversity in logical universes of discourse.
In accord with the strategy of approaching logical systems in stages, first gaining a foothold in propositional logic and advancing on those grounds, we may set our first stepping stones toward differential logic in “differential propositional calculi” — propositional calculi extended by sets of terms for describing aspects of change and difference, for example, processes taking place in a universe of discourse or transformations mapping a source universe to a target universe.
What follows is the outline of a sketch on differential propositional calculus intended as an intuitive introduction to the larger subject of differential logic, which amounts in turn to my best effort so far at dealing with the ancient and persistent problems of treating diversity and mutability in logical terms.
Note. I'll give just the links to the main topic heads below. Please follow the link at the top of the page for the full outline.
Part 1 —
Casual Introduction
Cactus Calculus
Part 2 —
Formal Development
Elementary Notions
Special Classes of Propositions
Differential Extensions
• https://oeis.org/wiki/Differential_Propositional_Calculus_%E2%80%A2_Part_2#Differential_Extensions
Appendices —
References —
It is provable that quantum mechanics, quantum field theory, and general relativity violate the axioms of the mathematics used to create them. This means that none of these theories has a mechanism for the processes they describe to be feasible in a way that is consistent with the rules used to develop the math on which they are based. Thus, these theories and mathematics cannot both be true. This is proven, with a $500 reward for disproving it (details in the link). So I can prove that the above-mentioned theories are mathematical nonsense, and I produce a theory that makes the same predictions without the logical mistakes. https://theframeworkofeverything.com/
Within a specific problem, without the whole picture?
I am currently working on optimizing our inventory management system and need to calculate the monthly safety stock for various SKUs. I have already generated weekly safety stock values based on historical data and lead times. However, I need to adjust these values for a monthly period considering several factors:
1. SKU Contribution Ratio: This ratio indicates the importance of each SKU. A higher ratio means the SKU is more critical and should have a higher safety stock.
2. CCF Factor: This factor reflects our past ability to fulfill orders based on historical order and invoice data.
3. Monthly Stock Reduction Percentage: This percentage shows how much stock is typically left at the end of each month. If this value is 100% for four consecutive months, it indicates no need to keep that much inventory for the respective SKU. Conversely, if the values are decreasing, it suggests that the safety stock has been used and needs to be adjusted.
Given these factors, I need to determine a safety factor for the month, which will be used to adjust the weekly safety stock values to monthly values.
Could you suggest scientific methodologies or models that can effectively integrate these factors to calculate the monthly safety stock?
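This is not a standard methodology, but one purely illustrative way to combine the three factors is sketched below: period conversion via the usual square-root-of-time scaling of safety stock, followed by multiplicative adjustments for the SKU contribution ratio, the CCF, and the observed monthly stock-reduction pattern. Every weight and functional form here is an assumption to be calibrated or replaced, not an established formula.

```python
import math

def monthly_safety_stock(weekly_ss, contribution_ratio, ccf, reduction_history,
                         weeks_per_month=4.33):
    """Illustrative conversion of a weekly safety stock to a monthly figure.

    weekly_ss          : safety stock computed on a weekly basis
    contribution_ratio : 0..1, importance of the SKU (higher -> more buffer)
    ccf                : 0..1, historical order-fulfilment capability
    reduction_history  : recent end-of-month leftover-stock fractions, e.g. [1.0, 1.0, 0.8, 0.6]
    """
    # Period conversion: demand variability scales with the square root of time.
    base_monthly = weekly_ss * math.sqrt(weeks_per_month)

    # Assumed adjustments: more critical SKUs and weaker fulfilment history get more buffer.
    criticality = 0.5 + contribution_ratio          # 0.5 .. 1.5
    fulfilment = 2.0 - ccf                          # 1.0 .. 2.0 (low CCF -> larger buffer)

    # Assumed usage signal: if stock is consistently left over, shrink the buffer;
    # if the leftover fraction is trending down, keep the buffer as is.
    avg_leftover = sum(reduction_history) / len(reduction_history)
    usage = 0.7 if avg_leftover >= 1.0 else 1.0     # 30% cut when nothing is ever used

    return base_monthly * criticality * fulfilment * usage

print(monthly_safety_stock(weekly_ss=120, contribution_ratio=0.8,
                           ccf=0.9, reduction_history=[1.0, 0.9, 0.75, 0.6]))
```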

Please help me find a theory that supports my study about storytelling in teaching mathematics.
Famous mathematicians are failing every day to prove the Riemann Hypothesis, even though the Clay Mathematics Institute offers a prize of one million dollars for a proof.
A proof of the Riemann Hypothesis would allow us to better understand the distribution of prime numbers among all numbers and would also allow its official application in quantum physics. However, many famous scientists still refuse the use of the Riemann Hypothesis in quantum physics, as I read in an article in Quanta Magazine.
Why is this hypothesis so difficult to prove? Is the zeta extension really useful for physics, and especially for quantum physics? Are quantum scientists using the wrong mathematical tools when applying the Riemann Hypothesis? Is the Riemann Hypothesis announcing "the schism" between abstract mathematics and physics? Can anyone propose a disproof of the Riemann Hypothesis based on physical facts?
Here is the link to the article by Natalie Wolchover:
Zeros of the Riemann zeta function could also arise from the use of rearrangements when trying to compute an image under the extension, since the Lévy–Steinitz phenomenon can occur when fixing a and b.
Suppositions or axioms should be made before trying to use the extension, depending on the scientific field where it is needed, and we should be sure whether all the possible methods (rearrangements of series terms) give the same image for a known s = a + ib.
You should also know that the Lévy–Steinitz theorem was formulated in 1905 and 1913, whereas the Riemann Hypothesis was formulated in 1859. This means that Riemann, who died in 1866, and even the famous Euler, never knew the Lévy–Steinitz theorem.
Differential Logic • 1
Introduction —
Differential logic is the component of logic whose object is the description of variation — focusing on the aspects of change, difference, distribution, and diversity — in universes of discourse subject to logical description. A definition that broad naturally incorporates any study of variation by way of mathematical models, but differential logic is especially charged with the qualitative aspects of variation pervading or preceding quantitative models. To the extent a logical inquiry makes use of a formal system, its differential component governs the use of a “differential logical calculus”, that is, a formal system with the expressive capacity to describe change and diversity in logical universes of discourse.
Simple examples of differential logical calculi are furnished by “differential propositional calculi”. A differential propositional calculus is a propositional calculus extended by a set of terms for describing aspects of change and difference, for example, processes taking place in a universe of discourse or transformations mapping a source universe to a target universe. Such a calculus augments ordinary propositional calculus in the same way the differential calculus of Leibniz and Newton augments the analytic geometry of Descartes.
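For readers who prefer a computational handle, here is a small sketch of one common formulation of the difference operator on boolean propositions: the enlargement Ef(x, dx) = f(x XOR dx) and the difference Df = Ef XOR f, tabulated over a two-variable universe of discourse. The example proposition f(p, q) = p AND q is an arbitrary choice for illustration.

```python
from itertools import product

def enlarge(f):
    """Enlargement operator: Ef(x, dx) = f(x XOR dx), componentwise over the variables."""
    return lambda x, dx: f(tuple(xi ^ dxi for xi, dxi in zip(x, dx)))

def difference(f):
    """Difference operator: Df(x, dx) = Ef(x, dx) XOR f(x)."""
    Ef = enlarge(f)
    return lambda x, dx: Ef(x, dx) ^ f(x)

# Example proposition over the universe of discourse [p, q]:  f = p AND q.
f = lambda x: x[0] & x[1]
Df = difference(f)

# Tabulate Df over all combinations of (p, q) and the differential features (dp, dq).
for x in product((0, 1), repeat=2):
    for dx in product((0, 1), repeat=2):
        print(f"p,q={x}  dp,dq={dx}  Df={Df(x, dx)}")
```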
Resources —
Logic Syllabus
Survey of Differential Logic
Program Description:
A program that converts mathematical equations from PDF files into editable equations within Word documents. The program relies on Optical Character Recognition (OCR) technology for mathematical equations, ensuring accuracy in retrieving symbols and mathematical formulas. It allows users to easily edit the equations directly in Word and provides support for various mathematical writing formats, such as LaTeX or MathType.
Program Features:
Accurate Conversion: Can read complex mathematical equations from PDF files.
Word Integration: Offers direct import options into Word documents.
Mathematical Format Support: Supports multiple formats such as MathML and LaTeX.
User-Friendly Interface: A simple design suitable for researchers and students.
Multi-Platform Compatibility: Works on operating systems like Windows and macOS.
Examples of programs that may meet this description include:
Mathpix Snip
InftyReader
You can try one of them to find the best solution for your needs.
I have started an investigation into the use of AI for teaching mathematics and physics.
In this framework, I would appreciate any insights and previous findings.
Please send me similar studies.
Thank you in advance.
What about generating 3D shapes in different ways: GANs, mathematics with Python, LLMs, or LSTMs? And what related work exists on this?
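On the "mathematics with Python" option, a purely parametric surface is the simplest baseline before trying GANs or sequence models: sample a closed-form shape such as a torus and export the points. The radii and sampling density below are arbitrary choices.

```python
import numpy as np

def torus_point_cloud(R=2.0, r=0.7, n_u=64, n_v=32):
    """Generate a 3D point cloud of a torus from its parametric equations."""
    u = np.linspace(0, 2 * np.pi, n_u, endpoint=False)
    v = np.linspace(0, 2 * np.pi, n_v, endpoint=False)
    u, v = np.meshgrid(u, v)
    x = (R + r * np.cos(v)) * np.cos(u)
    y = (R + r * np.cos(v)) * np.sin(u)
    z = r * np.sin(v)
    return np.stack([x.ravel(), y.ravel(), z.ravel()], axis=1)

points = torus_point_cloud()
print(points.shape)   # (2048, 3) points ready for meshing, plotting, or use as ML input
```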
Dear Esteemed Colleagues,
I hope this message finds you well. I am writing to invite your review and insights on what I believe to be a significant development in our understanding of the Riemann Hypothesis. After extensive work, I have arrived at a novel proof for the hypothesis, using a generalization of the integral test applicable to non-monotone series, as outlined in the attached document.
As a lead AI specialist at Microsoft, specializing in math-based AI, I have employed both traditional mathematical techniques and AI-based verification algorithms to rigorously validate the logical steps and conclusions drawn in this proof. The AI models have thoroughly checked the derivations, ensuring consistency in the logic and approach.
The essence of my proof hinges on an approximation for the zeta function that results in an error-free evaluation of its imaginary part at $x = \frac{1}{2}$, confirming this as the minimal point for both the real and imaginary components. I am confident that this new method is a significant step forward and stands up to scrutiny, but as always, peer review is a cornerstone of mathematical progress.
I warmly invite your feedback, comments, and any questions you may have regarding the methods or conclusions. I fully stand by this work and look forward to a robust, respectful discussion of the implications it carries. My goal is not to offend or overstate the findings but to contribute meaningfully to this ongoing conversation in the mathematical community.
Thank you for your time and consideration. I look forward to your responses and the productive discussions that follow.
Sincerely,
Rajah Iyer
Lead AI Specialist, Microsoft
I am interested in the study of visual subcompetence in education, specifically how visual tools and technologies can be integrated into the educational process to enhance the development of professional competencies in future teachers, particularly in mathematics education.
I am looking for research and definitions that highlight and specify the concept of visual subcompetence in education. Specifically, I am interested in how visual subcompetence is distinguished as part of the broader professional competence, particularly in the context of mathematics teacher education.
Can you suggest any study that uses Ethnographic Research design?
I apologize to you all! The question was asked incorrectly—my mistake. Now everything is correct:
In a circle with center O, chords AB and CD are drawn, intersecting at point P.
In each segment of the circle, other circles are inscribed with corresponding centers O_1; O_2; O_3; O_4.
Find the measure of angle ∠O_1 PO_2.

Can you explain the mathematical principles behind the Proof of Stake (PoS) algorithm, including how validator selection probabilities, stake adjustments, and reward calculations are determined?
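At its core, stake-weighted validator selection is sampling with probabilities proportional to stake, P(i) = stake_i / Σ stake_j, and rewards are commonly distributed in proportion to stake as well; the exact adjustment and reward rules differ from protocol to protocol. A minimal sketch under those simplifying assumptions (the example stakes and the flat per-epoch reward are arbitrary):

```python
import random

stakes = {"alice": 1000.0, "bob": 250.0, "carol": 750.0}

def select_validator(stakes):
    """Pick a validator with probability proportional to its stake."""
    names = list(stakes)
    return random.choices(names, weights=[stakes[n] for n in names], k=1)[0]

def distribute_rewards(stakes, total_reward):
    """Split a block reward in proportion to stake and add it to each balance."""
    total = sum(stakes.values())
    for name in stakes:
        stakes[name] += total_reward * stakes[name] / total

# Simulate a few epochs: selection frequency tracks the stake distribution.
counts = {n: 0 for n in stakes}
for _ in range(10_000):
    counts[select_validator(stakes)] += 1
print(counts)                                    # roughly 50% / 12.5% / 37.5%

distribute_rewards(stakes, total_reward=100.0)   # assumed flat per-epoch reward
print(stakes)
```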
An explanation of how to implement green education in the subject of mathematics for children.
Hello,
I am currently working on a research project on the use of mathematical optimization to determine the optimal policy rate in monetary policy. I would like to know whether there are recent research works or specific models that have addressed this topic. I am also looking for advice on how to structure my model and choose relevant variables for this type of analysis. Any suggestions for reading or expertise would be greatly appreciated.
Thank you in advance for your help.
As an academic working and pursuing a PhD degree in Egypt, both in private and public universities respectively, I wanted to put forward a simple question:
What is the role of universities, and other academic institutions, today? Was there ever a time when universities were agents of revolutionary action and change, or were they only a subject of the overall consumerist system?
We can take many steps back to Ancient Egyptian times, when scribes and priests were taught writing, mathematics, and the documentation of daily exchanges, all the way to today's era of digital globalization and mass education, where the knowledge production process has become more of a virtual canvas than actual knowledge. Has knowledge ever served its purpose? Have academic institutions, and of course academic scholars, ever delivered the true purpose of education?
Was, and still is, education's sole main purpose the economic prosperity of certain classes, and hence socio-economic segregation?
Scientists believe theories must be proven by experiments. Does their faith in the existence of objective reality mean they are classical scientists who reject quantum mechanics' statements that observers and the observed are permanently and inextricably united? In this case, scientists would unavoidably and unconsciously influence every experiment and form of mathematics. In the end, they may be unavoidably and unconsciously influencing the universe which is the home of all experiments and all mathematics.
There exists a neural network model designed to predict a specific output, detailed in a published article. The model comprises 14 inputs, each normalized with minimum and maximum parameters specified for normalization. It incorporates six hidden layers, with the article providing the neural network's weight parameters from the input to the hidden layers, along with biases. Similarly, the parameters from the output layer to the hidden layers, including biases, are also documented.
The primary inquiry revolves around extracting the mathematical equation suitable for implementation in Excel or Python to facilitate output prediction.
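In general terms (without the specific weights from that article), the "equation" of such a network is a chain of matrix multiplications and activation functions applied to the min-max-normalized inputs. A sketch of the forward pass in Python follows; the layer widths, the tanh activation, and the random placeholder weights are assumptions to be replaced by the values the article actually tabulates:

```python
import numpy as np

def min_max_normalize(x, x_min, x_max):
    """Scale each of the 14 inputs to [0, 1] using the published min/max parameters."""
    return (x - x_min) / (x_max - x_min)

def forward(x, weights, biases, out_weight, out_bias):
    """Forward pass: hidden layers with tanh (assumed), linear output layer."""
    a = x
    for W, b in zip(weights, biases):      # six hidden layers: a = tanh(W a + b)
        a = np.tanh(W @ a + b)
    return out_weight @ a + out_bias       # single predicted output

# Shapes for illustration only: 14 inputs, six hidden layers of 10 neurons, 1 output.
rng = np.random.default_rng(0)
sizes = [14] + [10] * 6
weights = [rng.normal(size=(sizes[i + 1], sizes[i])) for i in range(6)]
biases = [rng.normal(size=sizes[i + 1]) for i in range(6)]
out_weight = rng.normal(size=(1, 10))
out_bias = rng.normal(size=1)

x_raw = rng.uniform(0, 100, size=14)                      # example raw input vector
x = min_max_normalize(x_raw, x_min=0.0, x_max=100.0)      # per-feature min/max in practice
print(forward(x, weights, biases, out_weight, out_bias))  # predicted (still normalized) output
```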
It seems it is common to combine basic observations to create new observables, which are then used for PPP and other applications. Basic observations such as pseudorange and carrier-phase observations are real measurements from GNSS. These real observations are combined to create entirely new observables which are not direct, physical, and real. Amazingly, these new observables solve real problems such as PPP (e.g., the ionosphere-free combination).
- What is the theory behind this?
- Any similar approach like this in other scientific field or any simple analogous explanation?
- You could direct me to resources such as videos, or literature.
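As a concrete instance of the combination idea: the first-order ionospheric delay scales as 1/f², so a frequency-weighted difference of the two pseudoranges cancels it. For GPS L1/L2 the standard ionosphere-free combination is P_IF = (f1²·P1 - f2²·P2)/(f1² - f2²). A small sketch with made-up numbers:

```python
# Ionosphere-free pseudorange combination for GPS L1/L2 (frequencies in Hz).
f1 = 1575.42e6
f2 = 1227.60e6

def ionosphere_free(p1, p2):
    """Combine L1/L2 pseudoranges so the first-order ionospheric delay cancels."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# Made-up example: a 22,000 km range with a frequency-dependent ionospheric delay.
rho = 22_000_000.0                 # metres (geometry, clocks, troposphere lumped together)
iono_l1 = 5.0                      # metres of ionospheric delay on L1 (illustrative)
p1 = rho + iono_l1
p2 = rho + iono_l1 * (f1 / f2)**2  # the delay scales with 1/f^2

print(ionosphere_free(p1, p2))     # recovers ~rho: the ionospheric term is eliminated
```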
In triangle ∆ABC (with ∠C = 90°), the angle CBA is equal to 2α.
A line AD is drawn to the leg BC at an angle α (∠BAD = α).
The length of the hypotenuse is 6, and the segment CD is equal to 3.
Find the measure of the angle α.
This problem can be solved using three methods: trigonometric, algebraic, and geometric. I suggest you find the geometric method of solution!
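For anyone who wants to check their answer numerically: assuming D lies on segment BC, the exterior-angle relation in triangle ABD gives ∠ADC = 3α, so AC = CD·tan 3α, while in the large right triangle AC = 6·sin 2α; with CD = 3 this reduces to 2·sin 2α = tan 3α, whose root on (0°, 30°) the sketch below finds by bisection.

```python
import math

def g(alpha_deg):
    a = math.radians(alpha_deg)
    return 2 * math.sin(2 * a) - math.tan(3 * a)

# Bisection on (1, 29) degrees, where tan(3*alpha) stays finite and g changes sign.
lo, hi = 1.0, 29.0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
print((lo + hi) / 2)   # -> 15.0 degrees
```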

A minion is a low-level official protecting a bureaucracy from challengers.
A Kuhnian minion (after Thomas Kuhn's Structure of Scientific Revolutions) is a low-power scientist who dismisses any challenge to existing paradigm.
A paradigm is a truth structure that partitions scientific statements as true to the paradigm or false.
Recently, I posted a question on Physics Stack Exchange that serves as a summary of the elastic string paradigm. My question was: “Is it possible there can be a non-Fourier model of string vibration? Is there an exact solution?”
To explain, I asked if they knew the Hamiltonian equation for string vibration. They did not agree it must exist. I pointed out there are problems with the elastic model of vibration, with its two degrees of freedom and unsolvable equations of motion that can only be approximated by numerical methods. I said elasticity makes superposition the 4th Newtonian law. How can a string vibrate in an infinite number of modes without violating energy conservation?
Here are some comments I got in response:
“What does string is not Fourier mean? – Qmechanic
“ ‘String modes cannot superimpose!’ Yet, empirically, they do.” – John Doty
“ A string has an infinite number of degrees of freedom, since it can be modeled as a continuous medium. If you manage to force only the first harmonic, the dynamics of the system only involve the first harmonic and it’s a standing wave: this solution does depend on time, being (time dependence in the amplitude of the sine). No 4th Newton’s law. I didn’t get the question about Hamilton equation.
“What do you mean with ‘archaic model’? Can I ask you what’s your background that makes you do this sentence? Physics, Math, Engineering? You postulate nothing here. You have continuum mechanics here. You have PDEs under the assumption of continuum only. You have exact solutions in simple problems, you have numerical methods approximating and solving exact equations. And trust me: this is how the branch of physics used in many engineering fields, from mechanical, to civil, to aerospace engineering.” – basics
I want to show the rigid versus elastic dichotomy goes back to the calculus wars. Quoting here from Euler and Modern Science, published by the Mathematical Association of America:
"We now turn to the most famous disagreement between Euler and d’Alembert … over the particular problem of the theory of elasticity concerning a string whose transverse vibrations are expressed through second-order partial differential equations of a hyperbolic type later called the wave equation. The problem had long been of interest to mathematicians. The first approach worthy of note was proposed by B. Taylor, … A decisive step forward was made by d’Alembert in … the differential equation for the vibrations, its general solution in the form of two “arbitrary functions” arrived at by means original with d’Alembert, and a method of determining these functions from any prescribed initial and boundary conditions.”
[Editorial Note: The boundary conditions were taken to be the string endpoints. The use of the word hyperbolic is, I believe, a clear reference to Taylor’s string. A string with constant curvature can only have one mathematical form, which is the cycloid, which is defined by the hyperbolic cosh x function. The cosh x function is the only class of solutions allowed if the string cannot elongate. The Taylor/Euler-d’Alembert dispute was whether the string is trigonometric or hyperbolic.]
Continuing the quote from Euler and Modern Science:
"The most crucial issue dividing d’Alembert and Euler in connection with the vibrating string problem was the compass of the class of functions admissible as solutions of the wave equation, and the boundary problems of mathematical physics generally, D’Alembert regarded it as essential that the admissible initial conditions obey stringent restrictions or, more explicitly, that the functions giving the initial shape and speed of the string should over the whole length of the string be representable by a single analytical expression … and furthermore be twice continuously differentiable (in our terminology). He considered the method invalid otherwise.
"However, Euler was of a different opinion … maintaining that for the purposes of physics it is essential to relax these restrictions: the class of admissible functions or, equivalently, curves should include any curve that one might imagine traced out by a “free motion of the hand”…Although in such cases the analytic method is inapplicable, Euler proposed a geometric construction for obtain the shape of the string at any instant. …
Bernoulli proposed finding a solution by the method of superimposition of simple trigonometric functions, i.e. using trigonometric series, or, as we would now say, Fourier series. Although Daniel Bernoulli’s idea was extremely fruitful—in other hands--, he proved unable to develop it further.
Another example is Euler's manifold of the musical key and pitch values as a torus. To be fair, Euler did not assert the torus but only drew a network showing that the Key and Pitch can move independently. This was before Möbius's classification theorem.
My point is that it should be clear the musical key and pitch do not have different centers of harmonic motion. But in my experience, the minions will not allow Euler to be challenged by someone like me. Never mind that Euler's theory of music was crackpot!
The need of a paradigm shift in physics
Is it possible in a world as fragmented as ours to present a new concept of Unity in which Science, Philosophy and Spirituality or Ontology can be conceived working in Complete Harmony?
In this respect the late Thomas S. Kuhn wrote in his
The Structure of Scientific Revolutions
"Today research in parts of philosophy, psychology, linguistic, and even art history, all converge to suggest that the traditional paradigm is somehow askew. That failure to fit is also increasingly apparent by the historical study of science to which most of our attention is necessarily directed here."
And even the father of Quantum Physics complained strongly in his 1952 colloquia, when he wrote:
"Let me say at the outset, that in this speech, I am opposing not a few special statements claims of quantum mechanics held today, I am opposing its basic views that has been shaped 25 years ago, when Max Born put forward his probability interpretation, which was accepted by almost everybody. It has been worked out in great detail to form a scheme of admirable logical consistency which has since been inculcated in all young students of theoretical physics."
Where is the source of this "crisis of physics", as it has been called?
Certainly the great incompatibility between General Relativity and Quantum Mechanics is, in a certain sense, one of the reasons for that great crisis, and it clearly shows the real need for a paradigm shift.
As someone who comes from the Judeo-Christian tradition, that need for a real paradigm shift was of course a real need for me too. Philosophers such as Teilhard de Chardin, Henri Bergson, Charles Peirce and Ken Wilber all worked for it!
Ken Wilber said that the goal of postmodernity should be the integration of the Big Three: Science, Philosophy and Spirituality; and a scientist such as Eric J. Lerner, in his The Big Bang Never Happened, shows clearly how a paradigm shift in cosmology is a real need too.
My work on that need started in 1968, when I found for the first time an equation that has been declared the most beautiful equation of mathematics: Euler's relation, found by him in 1745 when working with infinite series. It was this equation that led me, in 1991, to define what I now call a Basic Systemic Unit, which has the most remarkable property of remaining the same in spite of change, exactly the definition of a quantum given by professor Art Hobson in his book Tales of the Quantum, and which the University of Ottawa encountered when working with that strange concept that frightened Einstein, entanglement, which seemed to violate Special Relativity.
Where is the real cause of the incompatibility between GR and QM?
For GR Tensor Analysis was used, a mathematical tool based on real numbers, and with it there was the need to solve ten functions representing the gravitational field:
"Thus, according to the general theory of relativity, gravitation occupies an exceptional position with regards to other forces, particularly the electromagnetic forces, since the ten functions representing the gravitational field at the same time define the metrical properties of the space measured."
THE FOUNDATION OF THE GENERAL THEORY OF RELATIVITY
By A. Einstein
Well, the point is that, in the metric that defines GR, time is just another variable, just like space, and so it has the same symmetric properties, to the point that it can take both signs, positive and negative, so time travel could be conceived just like space travel, in any direction; in fact Stephen Hawking, in his A BRIEFER HISTORY OF TIME, writes:
"It is possible to travel to the future. That is, relativity shows that it is possible to create a time machine that will jump you forward in time." Page 105
This is exactly the point that has turned physics into some sort of metaphysics and so created the great crisis of physics. While QM is based on Schrödinger's complex wave equation, that is, on complex numbers, in which the symbol sqrt(-1) serves to separate two different orders of reality, such as Time and Space, GR is based on real numbers alone.
The Basic Systemic Unit concept, based on Euler's relation, is in fact the definition of a quantum, and so it can be used to deduce all the fundamental equations of physics, as can be seen in my paper... resolving in this way that great crisis of physics.
Quantum Physics
Edgar Paternina
retired electrical engineer
I have been seeing and following a lot of work on these topics; it even seems that there are more results on them than on the corresponding classical topics, particularly in general topology.
What could be the cause of such results?
Has our mathematical knowledge progressed as much as contemporary science?
1- Assume a rectangle in the second dimension; this rectangle's components are lines. Its geometric characteristics are perimeter and area.
2- Assume a cube in the third dimension. Its components are planes. Its geometric characteristics are area and volume.
3- When this figure is transferred to the 4th dimension, what are its components called, and what are its geometric characteristics called? And with the transfer to the 5th and higher dimensions, our mathematics has nothing to say. A rectangle is just a simple shape; what about complex geometric shapes?
According to new physical theories such as string theory, we need to study different dimensions.
By modifying the original Feistel structure, would it be feasible to design a lightweight and robust encryption algorithm, somehow changing the structure's original flow and adding some mathematical functions there? I welcome everyone's views.
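For concreteness, here is a generic (toy) Feistel skeleton showing where a modified round function or an altered data flow would plug in; the round function below (a small mix of rotation, addition, and XOR) and the 32-bit half size are placeholders, not a vetted cipher.

```python
MASK32 = 0xFFFFFFFF

def round_function(half, key):
    """Placeholder round function: rotate, add key, XOR; swap in your own design here."""
    rotated = ((half << 5) | (half >> 27)) & MASK32
    return ((rotated + key) & MASK32) ^ (half >> 3)

def feistel_encrypt(left, right, round_keys):
    for k in round_keys:                      # classic Feistel flow: L' = R, R' = L ^ F(R, k)
        left, right = right, left ^ round_function(right, k)
    return left, right

def feistel_decrypt(left, right, round_keys):
    for k in reversed(round_keys):            # decryption reverses the rounds
        left, right = right ^ round_function(left, k), left
    return left, right

keys = [0x0F0F0F0F, 0x12345678, 0x9ABCDEF0, 0x0BADF00D]   # toy round keys
ct = feistel_encrypt(0xDEADBEEF, 0xCAFEBABE, keys)
pt = feistel_decrypt(*ct, keys)
print(hex(ct[0]), hex(ct[1]))
print(hex(pt[0]), hex(pt[1]))                 # recovers 0xdeadbeef 0xcafebabe
```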