
# Mathematics - Science topic

Mathematics, Pure and Applied Math
Questions related to Mathematics
• asked a question related to Mathematics
Question
In mathematics, many authors work in the area of integer sequences, Fibonacci polynomials, Perrin sequences, and so on.
What are the current research topics in this subject?
Please suggest some research topics related to the Fibonacci sequence.
James Jamesfathiaraj : You started the following post/discussion thread on June 01, 2023, and I quote verbatim:
"In mathematics, many authors working in the area of integer sequence, fibonacci polynomial, perin sequence......
Now what is the current research topics in this subject?.
I request , suggest some research topics which is related to Fibonacci sequence."
The topic that I am about to suggest to you may not necessarily be related to the Fibonacci sequence, but I think you can try looking up recursively enumerable sets. Indeed, the problem of determining whether a specific rational number greater than unity is an abundancy index (i.e. a ratio of the form sigma(m)/m, for some positive integer m, where sigma(m) is the classical sum of divisors of m) or an abundancy outlaw (see Holdener and Stanton's paper [https://www.researchgate.net/publication/264925121] published in JIS [2007]) is equivalent to finding out whether the set of abundancy indices (or the set of abundancy outlaws, for that matter) is a recursive set. (You can also refer to the following paper: https://cs.uwaterloo.ca/journals/JIS/VOL23/Holdener/holdener4.pdf, where it is mentioned that: "In the early 1970’s, [C. W.] Anderson conjectured that the set Image(I) [the set of abundancy indices] is a recursive set, meaning that there exists a recursive algorithm that can be employed to determine whether or not a given rational number k/m is an outlaw. This remains an open problem today.")
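For concreteness, the abundancy index sigma(m)/m mentioned above is easy to compute exactly; here is a minimal sketch using exact rational arithmetic (the function names are my own, not from the cited papers):

```python
# Computing the abundancy index sigma(m)/m as an exact rational number.
from fractions import Fraction

def sigma(m: int) -> int:
    """Classical sum of divisors of m, by trial division up to sqrt(m)."""
    total = 0
    d = 1
    while d * d <= m:
        if m % d == 0:
            total += d
            if d != m // d:        # avoid double-counting a square divisor
                total += m // d
        d += 1
    return total

def abundancy_index(m: int) -> Fraction:
    """Exact abundancy index sigma(m)/m."""
    return Fraction(sigma(m), m)

# Perfect numbers have abundancy index exactly 2:
print(abundancy_index(6))    # 2
print(abundancy_index(28))   # 2
print(abundancy_index(12))   # 7/3
```

Deciding which rationals greater than 1 *occur* as such indices (versus being outlaws) is the open recursiveness question quoted above; the code only evaluates the index for a given m.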
If you find this particular topic interesting, then you may contact me privately via e-mail (it is indicated in this recent paper of mine: https://www.researchgate.net/publication/370979239), and we can collaborate.
• asked a question related to Mathematics
Question
I am doing a project on plastic biodegradation by G. mellonella larvae. I am just getting into this field and I want to know how I can determine the biodegradation. Do I have to use some mathematical formula? Thank you very much.
I am working with expanded polystyrene. I understand, thank you very much.
• asked a question related to Mathematics
Question
From Newton's Metaphysics to Einstein's Theology!
The crisis in modern theoretical physics and cosmology has its root in their use, along with theology, as a ruling-class tool since medieval Europe. The Copernican revolution overthrowing the geocentric cosmology of theology led to unprecedented social and scientific developments in history. But Isaac Newton’s mathematical-idealism-based and one-sided theory of universal gravitational attraction, in essence, restored the idealist geocentric cosmology, undermining the Copernican revolution. Albert Einstein’s theories of relativity, proposed since the turn of the 20th century, reinforced Newtonian mathematical idealism in modern theoretical physics and cosmology, exacerbating the crisis and hampering further progress. Moreover, the recognition of the quantum world - a fundamentally unintuitive new realm of objective reality, which is in conflict with the prevailing causality-based epistemology - requires a rethink of the philosophical foundation of theoretical physics and cosmology in particular and of natural science in general.
Today we demand of physics some understanding of existence itself.
John Wheeler
(9 Jul 1911 - 13 Apr 2008)
Quoted in Denis Brian, The Voice Of Genius: Conversations with Nobel Scientists and Other Luminaries, 127
The only thing harder to understand than a law of statistical origin would be a law that is not of statistical origin, for then there would be no way for it—or its progenitor principles—to come into being. On the other hand, when we view each of the laws of physics—and no laws are more magnificent in scope or better tested—as at bottom statistical in character, then we are at last able to forego the idea of a law that endures from everlasting to everlasting.
— John Wheeler
In 'Law without Law' (1979), in John Archibald Wheeler and Wojciech Hubert Zurek (eds.), Quantum Theory and Measurement(1983), 203.
No theory of physics that deals only with physics will ever explain physics. I believe that as we go on trying to understand the universe, we are at the same time trying to understand man.
— John Wheeler
In The Intellectual Digest (June 1973), as quoted and cited in Mark Chandos, 'Philosophical Essay: Story Theory", Kosmoautikon: Exodus From Sapiens (2015)
——————————
In the world of science, Jehovah was superseded by Copernicus, Galileo, and Kepler. All that God told Moses, admitting the entire account to be true, is dust and ashes compared to the discoveries of Descartes, Laplace, and Humboldt. In matters of fact, the bible has ceased to be regarded as a standard. Science has succeeded in breaking the chains of theology. A few years ago, Science endeavored to show that it was not inconsistent with the bible. The tables have been turned, and now, Religion is endeavoring to prove that the bible is not inconsistent with Science. The standard has been changed. Robert G. Ingersoll, Some Mistakes of Moses
——————
• asked a question related to Mathematics
Question
Should we calculate it by an experimental test on the target organism, or should we find it mathematically?
co-toxicity factor = (O − E) × 100 / E
where
O is the observed % mortality of the combined plant extracts, and
E is the expected % mortality.
In the context of the co-toxicity factor formula, the term "expected mortality" refers to the predicted or estimated mortality rate of an organism under the combined effects of multiple toxic substances. The co-toxicity factor formula is used to assess the combined toxicity of different substances on an organism, taking into account their toxicities.
To calculate the expected mortality using the co-toxicity factor formula, you typically follow these steps:
1. Determine the individual toxicity values: Obtain the toxicity values or toxicological data for each of the substances of interest. This could be in the form of lethal concentration (LC50) or lethal dose (LD50) values, which represent the concentration or dose at which 50% mortality is expected.
2. Calculate the co-toxicity factor: Calculate the co-toxicity factor for each substance by dividing the concentration or dose of the substance by its individual toxicity value. This step normalizes the concentration or dose of each substance with respect to its toxicity.
3. Calculate the expected mortality: Sum up the co-toxicity factors for all the substances. The resulting value represents the expected mortality of the organism under the combined effects of the substances.
It's important to note that the co-toxicity factor formula is a simplified approach to assess combined toxicity and may not account for all possible interactions between substances. The formula assumes an additive or independent effect of the substances. In reality, interactions between substances can be more complex, including synergistic (enhanced) or antagonistic (reduced) effects.
Furthermore, the specific formula or equation used for calculating the co-toxicity factor may vary depending on the context, study design, and toxicological data available. It is essential to consult relevant literature, regulatory guidelines, or expert advice to ensure the appropriate use of the co-toxicity factor formula and interpretation of the results in your specific research or assessment.
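As a minimal numerical sketch of the formula in the question: the definition of E is study-specific (the steps above describe one toxicity-normalized variant); the code below *assumes* the common independent-action (Abbott/Bliss-style) choice E = A + B − AB/100 for two agents, which may differ from the convention in your literature.

```python
# co-toxicity factor = (O - E) * 100 / E, with O the observed % mortality
# of the combination and E the expected % mortality.
# ASSUMPTION: E from independent action of two agents with individual
# mortalities a_pct and b_pct (in %); other studies define E differently.

def expected_mortality(a_pct: float, b_pct: float) -> float:
    """Expected combined % mortality under assumed independent action."""
    return a_pct + b_pct - a_pct * b_pct / 100.0

def co_toxicity_factor(observed_pct: float, expected_pct: float) -> float:
    """Positive values suggest synergism, negative values antagonism."""
    return (observed_pct - expected_pct) * 100.0 / expected_pct

e = expected_mortality(40.0, 30.0)   # 58.0 % expected
ctf = co_toxicity_factor(70.0, e)    # ~20.7 -> apparent synergism
```

In practice O comes from the bioassay on the target organism, so the factor combines an experimental value (O) with a calculated one (E).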
• asked a question related to Mathematics
Question
Hello everyone,
I would like to know the relationship between the temperature of the glass (viscosity) and the radius of the glass fiber.
Thank you.
The relationship between the temperature of the glass and the diameter of the glass fiber depends on several factors, notably the composition of the glass and the specific manufacturing process. There is no single general mathematical formula that can describe this relationship for all types of glass.
However, for certain types of glass fiber, a commonly used approximation is the linear thermal expansion formula. This formula relates the change in a material's diameter to its change in temperature. The general formula is:
ΔD = α * D * ΔT
where:
• ΔD is the change in diameter of the glass fiber,
• α is the linear thermal expansion coefficient of the glass,
• D is the initial diameter of the glass fiber, and
• ΔT is the change in temperature.
The linear thermal expansion coefficient (α) is a property specific to the glass-fiber material and can vary with its composition. It is generally expressed in units of 1/°C or 1/K.
It is important to note that this formula is an approximation and may not be exact for all types of glass or over all temperature ranges. For precise results, it is recommended to consult the technical specifications of the specific glass fiber you are using, or to refer to the data provided by the manufacturer.
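A one-line numerical sketch of ΔD = α·D·ΔT; the coefficient and diameter below are illustrative assumptions (roughly a typical silicate glass and a 125 µm fiber), not manufacturer data:

```python
# Linear thermal expansion of a fiber diameter: Delta_D = alpha * D * Delta_T.
# ASSUMED values for illustration only; use the manufacturer's alpha.

ALPHA_GLASS = 5e-6          # 1/K, assumed expansion coefficient
D0 = 125e-6                 # m, assumed initial fiber diameter (125 um)

def diameter_change(alpha: float, d0: float, delta_t: float) -> float:
    """Change in diameter for a temperature change delta_t (in K)."""
    return alpha * d0 * delta_t

dd = diameter_change(ALPHA_GLASS, D0, 100.0)   # 6.25e-8 m, i.e. 62.5 nm
```

Note how small the effect is: a 100 K change alters the diameter by tens of nanometres, which is why drawing-process parameters (viscosity, draw tension) dominate the final radius rather than expansion.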
• asked a question related to Mathematics
Question
- What is the relationship between the scientific understanding of the world and the reality of nature? It may be said that the real world is much richer in structure than the results of the physical and mathematical models developed for it. These models take one or a few limited angles of view on the natural phenomenon in question; inventing a complete theory whose results are correct from any angle may be the dream theory, a "Theory of Everything"!
- Is there an unknown form of mathematics, not yet discovered, that could solve all the problems of a theory of everything?
- Is it necessary to change physicists' conceptual view of the theory of everything, so that this new view can include new concepts for problem solving?
- Is there a mathematical system with a distinct ability to represent the maximum possible states of the world?
- Is it possible to imagine that the world is like a carpet with infinite texture, whose colors and patterns are determined by scientists with their theories about the world? And are we looking for the most realistic pattern and design for the world's carpet?
The relationship between scientific understanding and the reality of nature is a complex and ongoing philosophical debate. Science aims to provide explanations and models that describe and predict the behavior of the natural world. However, it is important to recognize that scientific models are simplifications and abstractions of reality, and they can never fully capture the intricacies and complexities of the real world.
1. Unknown Form of Mathematics: It is possible that there may be undiscovered mathematical frameworks or tools that could help in developing a comprehensive theory of everything. The search for such frameworks is an active area of research in theoretical physics and mathematics. However, it is challenging to predict what form this new mathematics might take or whether it exists at all.
2. Changing Conceptual View: Scientists continually refine their conceptual frameworks and theories as new evidence and insights emerge. The quest for a theory of everything may require physicists to adopt new conceptual viewpoints and frameworks to address the outstanding challenges. This flexibility allows for the incorporation of new concepts and approaches in problem-solving.
3. Mathematical System Representing Possible States: Mathematics is a powerful tool for representing and describing the natural world, but it is not clear if there exists a single mathematical system that can encompass all possible states of the world. Different branches of mathematics are often used to describe specific phenomena or aspects of reality. The search for a comprehensive mathematical framework that can represent all possible states of the world is an ongoing pursuit.
4. The World as a Carpet: The analogy of the world as a carpet, with scientists searching for the most realistic pattern and design, can be seen as a metaphorical representation of the scientific endeavor. Scientists develop theories and models that attempt to capture the underlying patterns and principles governing the natural world. However, it is important to remember that scientific theories are human constructs that are constantly refined and updated as our understanding deepens.
In summary, the relationship between scientific understanding and the reality of nature is complex and evolving. Scientific models and theories aim to capture aspects of reality, but they are limited abstractions. The search for a theory of everything requires ongoing exploration, including potential changes in conceptual viewpoints, the possibility of undiscovered mathematical frameworks, and the continuous refinement of scientific models to approach a more complete understanding of the world.
• asked a question related to Mathematics
Question
Can third-grade teachers' responses help a researcher measure students' creativity? The students are aged 8-11 years.
It is possible for a mathematics class teacher to report on the creativity of students in third grade. However, it is important to clarify what is meant by "creativity" and to use a valid and reliable measure to assess it. In general, creativity can be defined as the ability to generate novel and useful ideas, and it can be assessed through a variety of measures such as divergent thinking tasks, creative problem-solving tasks, and self-assessment scales. It is also important to consider the limitations of relying solely on teacher reports to assess creativity, as teachers may have biases and may not always have a complete picture of a student's creative abilities. Therefore, a more comprehensive approach that includes multiple sources of information would be preferable.
• asked a question related to Mathematics
Question
Our department is offering an elective on Fluid Mechanics in daily life. The course is supposed to be more of a physical treatment of the fluid phenomena rather than mathematical. I would like some recommendations for books on the subject which are light and speak about fluid physics from a physical and application based perspective
Filippo Maria Denaro wrote, "I don’t think that any fluid dynamics course can be done without some fundamental math.." I completely agree. Among the many books on Fluid Mechanics, I find that the one that best embraces the physical point of view of the discipline is that of Guyon, E., Hulin, J. P., Petit, L., & de Gennes, P. G. (2001). Physical hydrodynamics, EDP sciences, whose first edition (2001) was prefaced by Pierre Gilles de Gennes (Nobel Prize for Physics) who did not lack praise for the particularity of the integration of physics in the very conception of the book. The book is in its third edition and has more than 750 citations.
Guyon, E., Hulin, J. P., & Petit, L. (2021). Hydrodynamique physique. EDP sciences.
Even if the book is in French, I think it would be interesting to consult it if only to have an idea of its construction
• asked a question related to Mathematics
Question
hello
How can I determine the tortuosity factor in a porous material with simple mathematical formula?
What about using the fractal dimension (box-counting) as a measure? Here is a paper on tortuosity.
Regards,
Joachim
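A minimal sketch of the box-counting measure suggested above, assuming the pore structure is available as a binary 2D image (the straight-line example at the end is only a sanity check of the estimator; relating the resulting dimension to a tortuosity factor is itself an assumption that depends on the material model):

```python
# Box-counting estimate of the fractal dimension of a binary 2D image:
# count occupied boxes at several scales, then fit the log-log slope.
import numpy as np

def box_count(img: np.ndarray, size: int) -> int:
    """Number of size x size boxes containing at least one occupied pixel."""
    n = 0
    for i in range(0, img.shape[0], size):
        for j in range(0, img.shape[1], size):
            if img[i:i + size, j:j + size].any():
                n += 1
    return n

def box_dimension(img: np.ndarray, sizes=(1, 2, 4, 8, 16)) -> float:
    """Fractal (box-counting) dimension from a log-log linear fit."""
    counts = [box_count(img, s) for s in sizes]
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

line = np.zeros((64, 64), dtype=bool)
line[32, :] = True                      # a straight line has dimension ~1
print(round(box_dimension(line), 2))    # 1.0
```

A tortuous pore path would yield a dimension between 1 and 2; how that number maps onto the tortuosity factor in your transport model should be taken from the paper mentioned above rather than from this sketch.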
• asked a question related to Mathematics
Question
I know that δ(f(x)) = ∑ᵢ δ(x−xᵢ)/|f′(xᵢ)|, summing over the zeros xᵢ of f. What will be the expression if "f" is a function of two variables, i.e. δ(f(x,y)) = ?
K. Kassner You are right.
The method that I proposed may not work for all functions f(x,y), especially if they are continuous and do not have isolated zeros. In that case, one might try to separate the integrals or use a coordinate transformation as you suggested in your comment. For example, if we use polar coordinates $(r,\theta)$, then we have
$$\delta(f(r,\theta))=\frac{\delta(r-r(\theta))}{\left|\partial f/\partial r\right|_{r=r(\theta)}}$$
where $r(\theta)$ is the zero of $f(r,\theta)$ as a function of $r$ for a fixed $\theta$. This can be seen by using the Jacobian of the transformation and the scaling property of the delta function.
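More generally (a standard distributional identity, stated here for reference): when the zero set of $f$ is a smooth curve on which $\nabla f\neq 0$, the two-variable analogue of $\delta(f(x))=\sum_i \delta(x-x_i)/|f'(x_i)|$ is a line integral over that curve, weighted by $1/|\nabla f|$:

$$\iint \delta(f(x,y))\,g(x,y)\,dx\,dy=\int_{\{f=0\}}\frac{g(x,y)}{|\nabla f(x,y)|}\,d\ell,$$

where $d\ell$ is the arc-length element along the zero set. The polar-coordinate formula above is the special case in which the curve is parametrized as $r=r(\theta)$.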
• asked a question related to Mathematics
Question
Why do prime numbers have such great importance in mathematics relative to the rest of the numbers?
I have worked for a considerable time on prime numbers, but they seem to obey no rules at all, and they extend without bound in both the negative and positive directions!
• asked a question related to Mathematics
Question
The vehicle routing problem is a classical application case of operations research.
I need to implement it as an electric vehicle routing problem with different constraints.
I want to understand the mathematics behind this. The available journals discuss different applications without saying much about the mathematics.
Any book, basic research paper, or PhD/M.Tech thesis would help.
There is a vast literature on exact and heuristic approaches to vehicle routing problems. (You are looking at several thousand journal articles!)
If you are interested in exact approaches, then you need to be familiar with the following:
(i) The basics of integer programming, including the branch-and-bound method and cutting-plane methods.
(ii) The basics of computational complexity, including the concept of polynomial-time algorithms, pseudo-polynomial-time algorithms and NP-completeness.
(iii) Elementary graph theory (nodes, edges, arcs, and so on).
It also helps to know a bit about:
(iv) Dynamic programming.
(v) Lagrangian relaxation.
(vi) The branch-and-cut method, which combines branch-and-bound with strong cutting planes from polyhedral studies.
(vii) The branch-and-price method, which combines branch-and-bound with Dantzig-Wolfe decomposition and dynamic programming.
A good place to start is the book "The Vehicle Routing Problem", edited by Toth and Vigo. There is also "The Vehicle Routing Problem: Latest Advances and New Challenges", edited by Golden et al.
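None of the exact methods listed above fit in a few lines, but a tiny heuristic baseline can make the problem concrete before you dive into branch-and-cut or branch-and-price. The sketch below is a hypothetical nearest-neighbour route builder over a distance matrix (single vehicle, no capacity or charging constraints), not a method from the cited books:

```python
# Minimal nearest-neighbour heuristic for a single-vehicle routing tour.
# ASSUMPTION: symmetric distance matrix, one vehicle, no side constraints;
# real (E)VRP models add capacities, time windows, charging stops, etc.

def nearest_neighbour_route(dist, depot=0):
    """Greedy tour visiting every node once, starting and ending at depot."""
    n = len(dist)
    unvisited = set(range(n)) - {depot}
    route, current = [depot], depot
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[current][j])  # closest next stop
        route.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    route.append(depot)          # return to the depot
    return route

dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(nearest_neighbour_route(dist))   # [0, 1, 3, 2, 0]
```

The exact approaches (branch-and-bound, branch-and-cut, branch-and-price) instead model the tour as an integer program and prove optimality, which is where the mathematics in items (i)-(vii) comes in.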
• asked a question related to Mathematics
Question
QM is the ultimate realist utilization of powerful differential equations, because the integer options and necessities of the solutions correspond to nature's quanta.
The same can be said for GR, whose differential manifolds, an advanced concept or branch of mathematics, have a realistic implementation in nature-compatible motional geodesics.
One century later, no new such feats have been achieved, which makes one wonder whether the limit of heuristic mathematical supplementation, in powerful ways, toward realist results in physics has been reached.
Yazen Alawaideh hello,
I am not talking about the limit of mathematics or physics, but about the limit of combining them to create wonders such as these two theories. Despite their issues, they are the lighthouses of physics.
Still, maybe they are the last expression of genius children of that marriage, as after 30 years and tens of millions of dollars of research, no unified or underlying theory with better or more fundamental mathematics has been found.
• asked a question related to Mathematics
Question
Applying mathematical knowledge in research models: this question has been on my mind for a long time. Can advanced mathematics and applied mathematics solve all the problems in modeling research? In particular, for the formula derivations in the theoretical part of a model, can the analytical conclusions be obtained through multiple derivations or other methods? I have read some mathematics-related publications myself, and one has to admire the mystery of mathematics.
We all know that mathematics includes reading, writing, and arithmetic, and that it starts with every action of our life; as such, it shapes our performance and image in every part of our life. Some years back I expressed my views in this area, which I submit herewith for your kind perusal.
In my early days, students interested in mathematics and scoring full marks could do their mathematics work while listening to music or a song; before or during homework they had formed the habit of reading a lesson or a topic of interest, and after working in this way they did justice to the subject of mathematics.
This is my personal opinion.
• asked a question related to Mathematics
Question
Which areas in mathematics education is trending currently
As an AI language model, I can provide an overview of some current research trends in mathematics education up until my knowledge cutoff in September 2021. However, please note that the field of research is constantly evolving, and there may have been new developments since then. Here are a few prevalent research trends in mathematics education:
1. Technology Integration: Researchers are exploring the effective use of technology in mathematics education. This includes studying the impact of digital tools, interactive software, simulations, and online resources on teaching and learning mathematics. Additionally, there is a focus on designing technology-enhanced learning environments and investigating the role of technology in fostering conceptual understanding and problem-solving skills.
2. Problem-Solving and Mathematical Thinking: There is an emphasis on promoting problem-solving skills and mathematical thinking among students. Researchers are investigating instructional strategies and interventions that help students develop problem-solving abilities, reasoning skills, and a deep conceptual understanding of mathematical concepts. This includes exploring the use of open-ended problems, mathematical modeling, and real-world contexts to engage students in authentic mathematical experiences.
3. Learning Trajectories and Progressions: Research in this area focuses on understanding the developmental progression of mathematical concepts and skills. Learning trajectories provide a framework for mapping out the sequence of learning in different mathematical domains and identifying the key milestones along the way. By understanding how students progress through these trajectories, researchers aim to develop effective instructional approaches and interventions that cater to students' diverse learning needs.
4. Assessment and Feedback: There is ongoing research on developing innovative assessment methods and providing effective feedback in mathematics education. This includes investigating formative assessment strategies, computer-based assessments, and alternative approaches to evaluating mathematical competencies. Researchers are also exploring the role of feedback in enhancing students' learning and understanding of mathematics.
5. Equity and Access: Mathematics education research is increasingly focusing on issues of equity, diversity, and inclusion. Researchers are examining the factors that contribute to achievement gaps among different student populations and investigating strategies to promote equitable mathematics learning experiences. This includes exploring culturally responsive teaching practices, addressing stereotype threats, and promoting access to high-quality mathematics education for all students.
These research trends highlight some of the current areas of focus in mathematics education. However, it is essential to note that the field is dynamic, and new trends may have emerged since my knowledge cutoff. For the most up-to-date information, it is advisable to consult recent academic journals and conferences in the field of mathematics education.
• asked a question related to Mathematics
Question
As it is not possible to show mathematical expressions here I am attaching link to the question.
Your expertise in determining and comprehending the boundaries of integration within the Delta function's tantalizing grip will be treasured beyond measure.
Use δ(a−√(x²+y²)) = (a/√(a²−x²)) [δ(y−√(a²−x²)) + δ(y+√(a²−x²))] to do the integral over y. Then the integral over x remains, and its integration interval is [−a, a].
The general recipe is to transform the δ function δ(a-g(y)) into a sum of δ functions δ(y-yk), where yk are the zeros of g(y)-a. Each term acquires a denominator |g'(yk)| in the process.
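The recipe above is easy to check numerically by replacing the delta function with a narrow Gaussian; this sketch (my own illustration, not from the thread) verifies ∫ δ(a−g(y)) h(y) dy ≈ Σₖ h(yₖ)/|g′(yₖ)| for g(y) = y², a = 4, whose zeros are yₖ = ±2 with |g′(yₖ)| = 4:

```python
# Numerical sanity check of delta(a - g(y)) = sum_k delta(y - y_k)/|g'(y_k)|.
import numpy as np

def delta_approx(t, eps=1e-3):
    """Narrow normalized Gaussian approximating the Dirac delta."""
    return np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

y = np.linspace(-5.0, 5.0, 1_000_001)
dy = y[1] - y[0]
h = np.cos(y)                                  # arbitrary smooth test function
lhs = np.sum(delta_approx(4.0 - y**2) * h) * dy   # integral with g(y) = y^2, a = 4
rhs = (np.cos(2.0) + np.cos(-2.0)) / 4.0          # sum over zeros, weight 1/|g'|
print(abs(lhs - rhs) < 1e-4)                   # True
```

The |g′(yₖ)| denominator is exactly the factor that the Gaussian picks up when its argument is stretched near each zero, which is the content of the recipe.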
• asked a question related to Mathematics
Question
An attempt to extrapolate reality
Both the textbooks and most secondary-school maths teachers are not sufficient to excite students. Some of the US textbooks are really extraordinarily attractive, yet such books are still far from common in many other countries apart from Europe!?
• asked a question related to Mathematics
Question
Our answer is a competitive YES. However, universities face the laissez-faire of old staff.
This reference must be included:
Gerck, E. “Algorithms for Quantum Computation: Derivatives of Discontinuous Functions.” Mathematics 2023, 11, 68. https://doi.org/10.3390/math1101006
announcing quantum computing on a physical basis, deprecating infinitesimals, epsilon-deltas, continuity, limits, mathematical real-numbers, imaginary numbers, and more, making calculus middle-school easy and with the same formulas.
Otherwise, difficulties and obsolescence follow. A hopeless scenario: no argument is possible against facts.
What is your qualified opinion? Must one self-study? A free PDF is currently available at my profile at RG.
Physics confirming math, or denying it. Time for colleges to catch up and be competitive.
• asked a question related to Mathematics
Question
Hello,
in an article I found the following sentence in the abstract:
"The results suggested that (1) Time 1 mathematics self-concept had significant effects on Time 2 mathematics school engagement at between-group and within-group levels; and (2) Time 2 mathematics school engagement played a partial mediating role between Time 1 mathematics self-concept and Time 2 mathematics achievement at the within-group level."
What is the meaning of the within-group-level and between-group-level in this context?
The article I am referring to is:
Xia, Z., Yang, F., Praschan, K., & Xu, Q. (2021). The formation and influence mechanism of mathematics self-concept of left-behind children in mainland China. Current Psychology, 40(11), 5567–5586. https://doi.org/10.1007/s12144-019-00495-4
Hi Max,
I quickly went through the paper, and I understand your confusion. In study 2, there are actually two within-components or, if you will, a three-level design (time within students within classes).
After having read through the paper, I am quite convinced that the authors refer to between-groups when they mean between-classroom effects, e.g., average classroom math self-concept differs with respect to some predictor at the classroom level. In other words, by within-group effects the authors refer to the individual (student) level. While this makes complete sense in study 1, it is slightly confusing in study 2 in my opinion because of the longitudinal design.
Hope this helps!
• asked a question related to Mathematics
Question
An attempt to extrapolate reality
There are many reasons why students may have a negative attitude towards mathematics. Some possible reasons include:
Lack of confidence: Students may feel that they are not good at mathematics and may believe that they are incapable of doing well in the subject.
Difficulty with abstract concepts: Mathematics involves working with abstract concepts, which can be difficult for some students to understand.
Negative experiences: Students may have had negative experiences with mathematics in the past, such as poor grades or negative feedback from teachers or peers.
Lack of engagement: Some students may find mathematics boring or irrelevant to their lives, which can lead to disengagement and a negative attitude towards the subject.
Cultural stereotypes: Some students may hold negative cultural stereotypes about mathematics, such as the belief that it is a subject for boys or that only extremely intelligent people can excel in the subject.
Teacher quality: The quality of instruction in mathematics can vary widely, and students who have had poor teachers may develop a negative attitude towards the subject as a result.
Addressing these issues and finding ways to engage students and help them develop a more positive attitude towards the subject can be a challenge, but it is an important one for educators to tackle.
• asked a question related to Mathematics
Question
I'm using target encoding in my work, and I'd like to understand why it's effective from a mathematical point of view.
Intuitively, my understanding is that it allows you to encode the past with the future. I can see why that's effective, and also why it could cause target leakage. However, I can't find a good mathematical explanation for its effectiveness/ issues.
Does anyone know the answer, or have a link to a resource they'd be willing to share?
Target encoding is a technique used in machine learning to encode categorical variables as numerical values based on the target variable. The idea is to use the target variable to create a new feature for each unique category in the categorical variable, where the value of the feature is the average value of the target variable for that category.
From a mathematical point of view, target encoding can be effective because it can help capture the relationship between the categorical variable and the target variable. By encoding the categorical variable based on the target variable, the resulting numerical values can provide a more informative representation of the categorical variable, which can help improve the performance of the machine learning model.
One way to think about this is in terms of information theory. The target variable provides information about the relationship between the categorical variable and the target variable. By encoding the categorical variable based on the target variable, we are effectively incorporating this information into the feature representation. This can help improve the model's ability to learn patterns and make accurate predictions.
However, target encoding can also be prone to target leakage, where the encoded feature incorporates information from the target variable that would not be available at prediction time. This can lead to overfitting and poor generalization performance. To mitigate this issue, it is important to use proper cross-validation techniques and to ensure that the encoding is done using only information that would be available at prediction time.
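To make the leakage point concrete, here is a minimal sketch of mean target encoding with a leave-one-out correction: each row is encoded from the *other* rows' targets only, so no row's own label leaks into its feature (the function and names are my own illustration, not a standard library API):

```python
# Leave-one-out mean target encoding for one categorical column.
# Each row's encoding excludes its own target, mitigating target leakage;
# singleton categories fall back to the global prior.
from collections import defaultdict

def target_encode_loo(categories, targets, prior=None):
    sums = defaultdict(float)
    counts = defaultdict(int)
    for c, t in zip(categories, targets):
        sums[c] += t
        counts[c] += 1
    if prior is None:
        prior = sum(targets) / len(targets)   # global mean fallback
    encoded = []
    for c, t in zip(categories, targets):
        if counts[c] > 1:
            # category mean computed from the OTHER rows of this category
            encoded.append((sums[c] - t) / (counts[c] - 1))
        else:
            encoded.append(prior)             # no other rows to learn from
    return encoded

cats = ["a", "a", "a", "b", "b", "c"]
ys   = [ 1,   0,   1,   0,   0,   1 ]
print(target_encode_loo(cats, ys))   # [0.5, 1.0, 0.5, 0.0, 0.0, 0.5]
```

In practice the same idea is usually applied out-of-fold within cross-validation, and a smoothing prior is blended in for rare categories; both serve the mitigation goal described above.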
• asked a question related to Mathematics
Question
We assume that the Lagrange multipliers originally introduced in the Boltzmann-Einstein model to derive the Gaussian distribution are just a mathematical trick to compensate for the lack of true definition of probability in unified 4D space.
The derivation of the Boltzmann distribution for the energy distribution of identical but distinguishable classical particles can be obtained in a mathematical approach [1] or equivalently via a statistical approach [2] where the Lagrange multipliers are completely ignored.
1- The Boltzmann factor: a simplified derivation, Rainer Muller.
2- Statistical integration, I. Abbas.
However, there are still many people who claim that
i- without the Lagrange multipliers, all economic theory, and some finance applications as well, would be in trouble in finite- or infinite-dimensional spaces.
ii- while the use of Lagrange multipliers may not be the only way to derive the Boltzmann distribution, it is a well-established and useful technique that should not be dismissed as a mere "trick".
iii- without the L.G. all economic theory, and some finance applications as well, are gonna be in trouble in finite or infinite dimensional spaces.
iv- LG are used in various fields, including physics, economics, and optimization. They are used to optimize a function subject to a set of constraints, by introducing additional parameters (the Lagrange multipliers) that allow the constraints to be incorporated into the objective function.
In the context of statistical mechanics, Lagrange multipliers are used to enforce the constraints on the total energy, volume, and number of particles in a system, while maximizing the entropy.
While it's true that the use of Lagrange multipliers is a mathematical technique, it's not just a "trick" that can be ignored. The Lagrange multipliers are necessary to incorporate the constraints into the optimization problem and obtain the correct solution. Without the Lagrange multipliers, the constraints would not be taken into account, and the resulting distribution would not accurately reflect the physical behavior of the system.
We assume that the fears or claims i-iv will not happen because LG constraints will be incorporated in adequate statistical theory.
In brief, Lagrange multipliers are just a classic mathematical trick that we can do without.
Detailed answers to this question and to other related questions, such as the numerical statistical solution of double and triple integration as well as the time-dependent PDE, are all explained in references [2,3], where the modern definition of probability in the transition matrix B interconnects the three topics. Imagination is the first important common factor in mathematics, physics and especially in the probability of transition in unitary 4D space; it is incorporated in B-matrix statistical chains to numerically solve single, double, and triple (hypercube) integrals as well as time-dependent PDEs [2,3].
Ref:
1-The Boltzmann factor: a simplified derivation
Rainer Muller
2-I.M.Abbas, How Nature Works in Four-Dimensional Space: The Untold Complex Story, Researchgate, May 2023.
3-I.M.Abbas, How Nature Works in Four-Dimensional Space: The Untold Complex Story, IJISRT review, May 2023.
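For readers weighing claims i-iv, the standard maximum-entropy derivation under discussion can be stated in a few lines (this is the textbook construction with two multipliers, not the B-matrix alternative of refs. [2,3]):

```latex
\text{Maximize } S = -\sum_i p_i \ln p_i
\quad\text{subject to}\quad \sum_i p_i = 1, \qquad \sum_i p_i E_i = U.

\mathcal{L} = -\sum_i p_i \ln p_i \;-\; \alpha\Big(\sum_i p_i - 1\Big) \;-\; \beta\Big(\sum_i p_i E_i - U\Big)

\frac{\partial \mathcal{L}}{\partial p_i} = -\ln p_i - 1 - \alpha - \beta E_i = 0
\;\Longrightarrow\;
p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i}, \qquad \beta = \frac{1}{k_B T}.
```

Whether these multipliers are indispensable, or merely one convenient route to the same exponential factor, is exactly what the question asks.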
• asked a question related to Mathematics
Question
Hello!
I am curious, can anyone guide me on how to calculate the amount of hydrogen stored in the metal hydride during the absorption process, both in wt% and in grams, and how much energy is released during absorption?
Hello
To explain my answer, here is the reasoning that guided it:
It is difficult to know in which form the hydrogen is present in the metal, that is to say, whether it is in hydride, atomic or molecular form. The next difficulty is to measure its concentration. I approached these questions by studying the adsorption of incondensable gases on atomically clean metals (Ta and Al(111)) at very low H2 partial pressure, of the order of 10^-3 Pascal, and at room temperature. While the residence time of the molecules should be extremely short, of the order of 10^-10 seconds or less, it is much larger and depends on the polarity of the molecules and on the presence of defects, impurities and dislocations. These defects create local variations of the crystal field and internal stresses, while the adsorbed molecules create image charges that modify the electronic and vibrational structure at the surface of the solid. This is verified by electron spectrometry and is expressed by the dielectric function. Even at very low coverage, the adsorbed molecules become unstable due to the relaxation of internal stresses of entropic origin. This scheme parallels that of the effects of adsorbed charges on insulators; it is expressed by an equation of state and describes the surface-barrier deformation by defects and the resulting physical/chemical adsorption and ion-diffusion processes. On the basis of these elements, a hydrogen concentration of the order of ppm would not be measurable by weight but would be fatal to cohesion. One can imagine several experiments to verify this scheme.
Have a nice day
Claude
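Complementing the answer above: the quantities the question asks for follow from simple bookkeeping once the absorbed amount is known, e.g. from the pressure drop in a calibrated gas reservoir. The Python sketch below is a back-of-the-envelope version; the reservoir numbers are invented, and |ΔH| ≈ 30 kJ/mol H2 is only a typical order of magnitude for an AB5-type hydride such as LaNi5, so substitute the measured enthalpy of your alloy.

```python
R = 8.314  # gas constant, J/(mol K)

def absorbed_h2_mol(p_drop_pa, volume_m3, temp_k):
    """Moles of H2 absorbed, inferred from the pressure drop in the
    gas reservoir (ideal-gas assumption)."""
    return p_drop_pa * volume_m3 / (R * temp_k)

def hydrogen_mass_g(n_h2):
    return n_h2 * 2.016  # molar mass of H2, g/mol

def storage_wt_percent(m_h_g, m_alloy_g):
    return 100.0 * m_h_g / (m_alloy_g + m_h_g)

def heat_released_kj(n_h2, dh_kj_per_mol=30.0):
    # |dH| ~ 30 kJ/mol H2 is an illustrative placeholder; use the
    # measured absorption enthalpy of your specific alloy.
    return n_h2 * dh_kj_per_mol

# Example: 2 bar pressure drop in a 0.5 L reservoir at 298 K, 10 g of alloy
n = absorbed_h2_mol(2.0e5, 0.5e-3, 298.0)
m_h = hydrogen_mass_g(n)
print(round(m_h, 3), "g H2,",
      round(storage_wt_percent(m_h, 10.0), 2), "wt%,",
      round(heat_released_kj(n), 2), "kJ released")
```

In practice the pressure drop (or mass-flow integral) comes from a Sievert-type apparatus, and the energy release is measured calorimetrically or taken from the van 't Hoff plot of the alloy.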
• asked a question related to Mathematics
Question
Mathematics and theoretical physics are currently searching for answers to this particular question and two other related questions that make up three of the most persistent questions:
i- Do probabilities and statistics belong to physics or mathematics?
ii- Followed by the related question, does nature operate in 3D geometry plus time as an external controller or more specifically, does it operate in the inseparable 4D unit space where time is woven?
iii-Lagrange multipliers: Is it just a classic mathematical trick that we can do without?
We assume the answers to these questions are all interconnected, but how?
One could invoke that energy subdivides between systems by scaling them with parameters, same way as Lagrange multipliers are used in variational problems. I just read my latest article, where I did not do that, but instead used an entire gradient, same way as entire time differentiation. It would result in the same traveling waves, with one more variable, probably, and need not be reduced to obtain the phenomenon?
• asked a question related to Mathematics
Question
Can someone please provide more insight into the mathematical formulation of a frequency-constrained UC model comprising both synchronous and non-synchronous sources? Also, what would be the associated MILP code in GAMS?
The previous paragraph outlines a step-by-step process for using Mixed Integer Linear Programming (MILP) to solve the frequency-constrained unit commitment problem in power system operation. The process involves formulating the objective function and constraints, solving the MILP problem using a suitable solver, validating the solution, and implementing the optimal unit commitment schedule in the power system operation. The main constraints include power balance, generator capacity, minimum up and down time, ramping, and frequency constraints. The solution obtained should satisfy all the constraints and be physically feasible.
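This is not GAMS code and not a full frequency-constrained formulation, but a tiny brute-force Python sketch (with invented unit data) that makes the constraint structure explicit before it is written as a MILP. The reserve-headroom requirement below is only a crude stand-in for a proper frequency-nadir/RoCoF constraint:

```python
from itertools import product

# Illustrative toy data (hypothetical numbers): two units, three hours.
load    = [120, 180, 110]   # MW demand per hour
reserve = [20, 20, 20]      # MW committed headroom required (crude
                            # proxy for a frequency/reserve constraint)
units = [  # (pmin, pmax, marginal cost $/MWh, startup cost $)
    (40, 150, 20.0, 500.0),
    (30, 100, 35.0, 200.0),
]

def dispatch_cost(on, t):
    """Cheapest feasible dispatch of the committed units in hour t,
    or None if balance/capacity/reserve is violated."""
    committed = [u for u, flag in zip(units, on) if flag]
    cap = sum(u[1] for u in committed)
    floor = sum(u[0] for u in committed)
    if cap < load[t] + reserve[t] or floor > load[t]:
        return None
    remaining = load[t] - floor
    cost = sum(u[0] * u[2] for u in committed)   # must-run minimums
    for u in sorted(committed, key=lambda u: u[2]):  # merit order
        take = min(remaining, u[1] - u[0])
        cost += take * u[2]
        remaining -= take
    return cost

best = None
for plan in product([0, 1], repeat=len(units) * len(load)):
    sched = [plan[t * len(units):(t + 1) * len(units)]
             for t in range(len(load))]
    total, feasible, prev = 0.0, True, (0,) * len(units)
    for t, on in enumerate(sched):
        c = dispatch_cost(on, t)
        if c is None:
            feasible = False
            break
        total += c + sum(u[3] for u, o, p in zip(units, on, prev)
                         if o and not p)          # startup costs
        prev = on
    if feasible and (best is None or total < best[0]):
        best = (total, sched)

print(best)
```

A real MILP replaces the enumeration with binary commitment variables, continuous dispatch variables, and linear(ized) frequency constraints; in GAMS these become `binary variable u(g,t)` plus equations for each constraint family above, handed to a solver such as CPLEX.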
• asked a question related to Mathematics
Question
The error of building a physical world on the intuitive feel of fundamental concepts such as space and time occurred during Newton's creation of Newtonian mechanics. Of course, this mistake had to be made, so that man would not be deprived of the numerous gifts of technology resulting from this science! But when the world showed another face of itself at very small and very large scales, this theory, carrying that error, could do nothing.
When Newton had those ideas about space and time (of course, maybe he knew and had no choice), he built a mathematical system for his thoughts: differential and integral calculus! The mathematics resulting from his thoughts was a systematic continuation of them, with the same assumptions about space and time. That mathematics could not show him the right way to know the real world, because the world was not Newtonian! Today, many pages in modern physics are created based on new assumptions about space and time and other seemingly obvious variables!
Now, why do we think that these pages of current mathematics necessarily lead to correct knowledge of the world? Can we finally identify the world, as it is, by adopting appropriate and correct assumptions?!
Good question. I suggest that the background structure for science research, which includes quantum mechanics, is consciousness. Yet consciousness is deleted from mainstream science; hence, for such research, there is no background structure. Two papers in the journal Communicative and Integrative Biology discuss this situation: "Omni-local consciousness" and "The two principles that shape scientific research".
• asked a question related to Mathematics
Question
Are there certain methods, for instance T-tests or ANOVAs, for certain ways a survey question is asked?
There are several mathematical and statistical methods used to quantify and discuss survey questions. Here are some of the most common ones:
1. Descriptive statistics: Descriptive statistics can be used to summarize the data from survey questions. For example, you can calculate measures of central tendency (such as the mean, median, and mode) and measures of variability (such as the range, standard deviation, and variance) to describe the distribution of responses.
2. Cross-tabulation: Cross-tabulation (also known as contingency tables or pivot tables) can be used to examine the relationship between two or more survey questions. It allows you to see how the responses to one question vary with the responses to another question.
3. Chi-square test: The chi-square test can be used to determine whether there is a significant association between two categorical variables. It can be used to test whether the responses to one survey question depend on the responses to another survey question.
4. T-tests and ANOVA: T-tests and analysis of variance (ANOVA) can be used to compare the means of two or more groups on a single survey question. They can be used to test whether there are significant differences in the responses to a survey question between different groups (such as men and women, or different age groups).
5. Regression analysis: Regression analysis can be used to examine the relationship between one or more independent variables and a dependent variable. It can be used to test whether there is a significant linear relationship between a survey question and other variables, such as demographic variables or other survey questions.
6. Factor analysis: Factor analysis can be used to identify underlying factors or dimensions that explain the pattern of responses to multiple survey questions. It can be used to group survey questions that measure similar concepts or to identify unique factors that explain variation in the responses.
These are just a few examples of the mathematical and statistical methods that can be used to quantify and discuss survey questions. The choice of method will depend on the research question, the type of data collected, and the level of analysis needed.
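As a concrete instance of method 3 above, the chi-square test of independence can be computed by hand for a survey cross-tabulation; the 2×2 table below is hypothetical:

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a 2D contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = rows[i] * cols[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts:        Yes  No
table = [[30, 20],          # men
         [10, 40]]          # women
stat = chi_square_stat(table)
# df = (2-1)*(2-1) = 1; the 5% critical value for df=1 is 3.841
print(round(stat, 3), "significant" if stat > 3.841 else "not significant")
```

For real analyses, `scipy.stats.chi2_contingency` performs the same computation and also returns the p-value and expected frequencies.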
• asked a question related to Mathematics
Question
Actually, I am working on the modeling of path loss between the coordinator and the sensor nodes of a BAN network. My objective is to make a performance comparison between the CM3A model of the IEEE 802.15.6 standard and a loss model that I have implemented mathematically.
So, according to your respectful experience, how can I implement these two path loss models? Do I have to define both path loss equations under the Wireless Channel model? Or do I create and implement for each path loss model a specific module under Castalia (like the wireless channel module) and after I call it from the omnet.ini file (configuration file) ?
You will find attached the two models in a figure.
Not sure if I understood your question properly, but here is an alternative solution to obtain the path loss value:
There is a software: "NYUSIM"
You can actually get the path loss comparisons by running the simulation. This software can simulate up to 100 GHz. All you have to do is to insert the appropriate simulation data in terms of your desired outcome.
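For reference, both models mentioned in the question share the generic log-distance form PL(d) = a·log10(d) + b + N used by the IEEE 802.15.6 CM3 body-surface models, which is easy to prototype before porting it into Castalia's wireless channel module. The coefficients below are placeholders, not the standard's fitted CM3A values:

```python
import math
import random

def path_loss_db(d_mm, a=19.2, b=3.38, sigma=4.4, rng=None):
    """Generic IEEE 802.15.6-style log-distance path loss in dB:
    PL(d) = a*log10(d) + b + N, with N ~ Gaussian(0, sigma) shadowing.
    The default (a, b, sigma) are illustrative placeholders; substitute
    the fitted CM3A parameters or your own measured model."""
    shadow = (rng or random).gauss(0.0, sigma)
    return a * math.log10(d_mm) + b + shadow

# Deterministic check with shadowing disabled:
print(path_loss_db(100, a=20.0, b=10.0, sigma=0.0))  # 20*log10(100)+10 = 50 dB
```

In Castalia, the cleanest route is usually to implement each model as a variant of the wireless channel module and select it from the configuration file, so both models run under identical node placements and traffic.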
• asked a question related to Mathematics
Question
Many people believe that x-t spacetime is separable and that describing x-t as an inseparable unit block is only essential when the speed of the object concerned approaches that of light.
This is the most common error in mathematics as I understand it.
The universe has been expanding since the time of the Big Bang at almost the speed of light and this may be the reason why the results of classical mathematics fail and become less accurate than those of the stochastic B-matrix ( or any other suitable matrix) even in the simplest situations like double integration and triple integration.
CASE A:
To begin with, let us admit that nature does not see with our eyes and does not think with our brains. We try to understand how nature performs its own resolutions in the space-time x-t as an inseparable unit block.
However, B-matrix statistical chains (or any other suitable stochastic chains) can answer this question and demonstrate, in a way, how nature works:
i- nature sees the curve as a trapezoidal area.
ii- nature sees the square as a cube or a cuboid volume.
iii- nature sees the cube as a 4D hypercube and evaluates its volume as L^4.
In all hypotheses i-iii, time t is the additional dimension.
This is the reason why the current definition of double and triple integration is incomplete.
In brief, time is part of an inseparable block of space-time, and geometric space is the other part of that block. In other words, you can perform integration using the x-t space-time unit with wide applicability and excellent speed and accuracy. On the other hand, the classical mathematical methods of integration in the geometric Cartesian space x alone can still be applied, but only in special cases, and their results can be expected to be only a limit of success.
1- It is important to understand that mathematics is only a tool for quantitatively describing physical phenomena; it cannot replace physical understanding.
2- It is claimed that mathematics is the language of physics, but the reverse is also true: physics can be the language of mathematics, as in the case of numerical integration and the derivation of the normal/Gaussian distribution law via the statistical B-matrix chains.
However, in a revolutionary technique, chains of B matrices are used to numerically solve double and triple integrals as well as the general case of time-dependent partial differential equations with arbitrary Dirichlet boundary conditions and arbitrary initial conditions.
At first,
I=∫∫∫ f(x,y,z) dxdydz
which has been defined as the limit of the sum of the products f(x,y,z) dxdydz for infinitesimal dx, dy, dz, is completely ignored in numerical statistical methods, as if it never existed. It is obvious that the new B-matrix technique ignores the classical 3D integration I=∫∫∫ f(x,y,z) dxdydz.
We concentrate below on some results in the field of numerical integration via the theory of the matrix B, which in itself is not complicated but rather long.
------------
7 Free Nodes:
Single finite Integral
I=∫ f(x) dx ... for  a<=x<=b
Briefly, we arrive at,
The statistical integration formula for 7 nodes is given by,
I = 6h/77 (6Y1 + 11Y2 + 14Y3 + 15Y4 + 14Y5 + 11Y6 + 6Y7)
which is the statistical equivalent of Simpson's rule for 7 nodes.
Now consider the special case,
I=∫ y dx from x=2 to x=8 where y=X^2.
That is,
X = 2 3   4  5   6   7  8
Y = 4 9 16 25 36 49 64
Numerical result via the trapezoidal rule,
It = Y1/2 + Y2 + Y3 + Y4 + Y5 + Y6 + Y7/2
   = 2 + 9 + 16 + 25 + 36 + 49 + 32 = 169 square units.
Analytic integration expression I = x^3/3, evaluated from x=2 to x=8,
Ia = (512 - 8)/3 = 168 square units.
Finally, the statistical integration formula for 7 nodes gives,
Is = 6h/77 (6*4 + 11*9 + 14*16 + 15*25 + 14*36 + 11*49 + 6*64)
Is = 167.455 square units. This means that statistical integration is quite fast and accurate.
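The three Case A numbers can be reproduced in a few lines of Python (y = x^2 sampled at x = 2..8 with h = 1; the 7-node weights are taken verbatim from the formula above):

```python
ys = [x * x for x in range(2, 9)]            # [4, 9, 16, 25, 36, 49, 64]
h = 1

trapezoid = h * (ys[0] / 2 + sum(ys[1:-1]) + ys[-1] / 2)
analytic = (8 ** 3 - 2 ** 3) / 3             # antiderivative x^3/3
weights = [6, 11, 14, 15, 14, 11, 6]         # the quoted 7-node rule
seven_node = 6 * h / 77 * sum(w * y for w, y in zip(weights, ys))

print(trapezoid, analytic, round(seven_node, 3))  # 169.0 168.0 167.455
```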
CASE B:
-----------
Double finite Integral  I=∫∫ f(x,y) dx dy... for the domain  a<=x<=b and  c<=y<=d
If we introduce a specific example without loss of generality where ,
the function Z=f(x,y) is defined as,
Z(x,y) = x^2*y^2 + x^3 . . . . . (1)
defined on the rectangular domain [abcd],
1<=x<=3 and 1<=y<=3 . . . Domain D(1)
The process of double numerical integration (I),
I=∫∫ f(x,y) dxdy
on the D1 domain can be achieved via three different approaches, namely,
1- analytically (a),
Ia = (x^3/3 * y^2 + x^4/4) + (x^2 * y^3/3 + x^3) . . . (2)
2- the double Simpson rule (ds),
Ids = h^3.(16f((b+a)/2,(d+c)/2) + 4f((b+a)/2,d) + 4f((b+a)/2,c) + 4f(b,(d+c)/2) + 4f(a,(d+c)/2) + f(b,d) + f(b,c) + f(a,d) + f(a,c))/36 . . . . (3)
3- the statistical integration formula via the Cairo technique (ct),
Ict = 9h^3/29.5 (2.75Z(1,1) + 3.5Z(1,2) + 2.75Z(1,3) + 3.5Z(2,1) + 4.5Z(2,2) + 3.5Z(2,3) + 2.75Z(3,1) + 3.5Z(3,2) + 2.75Z(3,3)) . . . . (4)
where h is the equidistant interval on the x and y axes.
The numerical results are as follows,
i- Ia = 227.25
ii- Ids = 226.5
iii- Ict = 227.035, which is the most accurate.
CASE C:
----------------
Triple finite Integral and Hypercube
I=∫∫∫ f(x,y,z) dx dy dz ... for the domain a<=x<=b, c<=y<=d and e<=z<=f.
Again, we present below the supposed nature matrix of the triple integration on the 3D cube abcdefgh, divided into 27 equidistant nodes (1,1,1 & 2,1,1 . . . & 3,3,3),
I=∫∫∫ W(x,y,z) dxdydz
on the cube domain.
I = 27h^4/59 (2.555W(1,1,1) + 3.13W(1,2,1) + 2.555W(1,3,1) + 3.13W(2,1,1) + 3.876W(2,2,1) + 3.13W(2,3,1) + 2.555W(3,1,1) + 3.13W(3,2,1) + 2.555W(3,3,1) . . . etc.)
The question arises, why are statistical forms of integration faster and more accurate than mathematical forms?
We assume that the answer is inherent in the processes of integration, whether they belong to the 3D geometric space or the unitary x-t space.
• asked a question related to Mathematics
Question
The congruent number problem has been a fascinating topic in number theory for centuries, and it continues to inspire research and exploration today. The problem asks whether a given positive integer can be the area of a right-angled triangle with rational sides. While this problem has been extensively studied, it is not yet fully understood, and mathematicians continue to search for new insights and solutions.
In recent years, there has been increasing interest in generalizing the congruent number problem to other mathematical objects. Some examples of such generalizations include the elliptic curve congruent number problem, which asks for the existence of rational points on certain elliptic curves related to congruent numbers, and the theta-congruent number problem as a variant, which considers the possibility of finding fixed-angled triangles with rational sides.
However, it is worth noting that not all generalizations of the congruent number problem are equally fruitful or meaningful. For example, one might consider generalizing the problem to arbitrary objects, but such a generalization would likely be too broad to be useful in practice.
Therefore, the natural question arises: what is the most fruitful and meaningful generalization of the congruent number problem to other mathematical objects? Any ideas are welcome.
here some articles
M. Fujiwara, θ-congruent numbers, in: Number Theory, Eger, 1996, de Gruyter, Berlin, 1998,pp. 235–241.
New generalizations of congruent numbers
Tsubasa Ochiai
DOI:10.1016/j.jnt.2018.05.003
A GENERALIZATION OF THE CONGRUENT NUMBER PROBLEM
LARRY ROLEN
Is the Arabic book about the congruent number problem cited correctly in the references? If anyone has any idea where I can find the Arabic version, it will be helpful. The link to the book is https://www.qdl.qa/العربية/archive/81055/vdc_100025652531.0x000005.
EDIT1:
I will present a family of elliptic curves in the same spirit as the congruent number elliptic curves.
This family exhibits similar patterns as the congruent number elliptic curves, including the property that the integer is still "congruent" if we take its square-free part, and there is evidence for a connection between congruence and positive rank (as seen in the congruent cases of $n=5,6,7$).
Thank you, Irshad Ayoob . I need to rephrase my question. What I mean by a generalization of the congruent number is as follows: A congruent number is related to the area of a right triangle or, simply, to the Diophantine equation a^2 + b^2 = c^2. An integer n is congruent if 2n = ab. Historically, this was not the first definition of a congruent number. Instead, in Arab manuscripts, n is congruent if the two Diophantine equations v^2 - n = u^2 and v^2 + n = w^2 have simultaneously a solution. By the way, this is equivalent to the well-known definition of a congruent number today, which is linked to the right triangle.
Now, my remark is about the degree two Diophantine equation. For example, let's take a^2 + 2b^2 = c^2. We know that if this Diophantine equation (or any other degree two of the form ra^2 + sb^2 = t*c^2, where a, b, c, r, s, t are integers) has a non-trivial solution, it will have an infinite number of solutions. So, in the case of the Pythagorean triple, we have the definition of the congruent number. But for the other equations, what is the correct definition of a congruent number?
• asked a question related to Mathematics
Question
MATHEMATICS VS. CAUSALITY:
A SYSTEMIC RECONCILIATION
Raphael Neelamkavil, Ph.D., Dr. phil.
1. Preface on the Use of Complex Language
2. Prelude on the Pre-Scientific Principle of Causality
3. Mathematical “Continuity and Discreteness” Vs. Causal Continuity
4. Mathematics and Logic within Causal Metaphysics
5. Mathematics, Causality, and Contemporary Philosophical Schools
1. Preface on the Use of Complex Language
First of all, a cautious justification is in place about the complexity one may experience in the formulations below: When I publish anything, the readers have the right to ask me constantly for further justifications of my arguments and claims. And if I have the right to anticipate some such possible questions and arguments, I will naturally attempt to be as detailed and systemic as possible in my formulation of each sentence here and now. A sentence is merely a part of the formulated text. After reading each sentence, you may pose me questions, which certainly cannot all be answered well within the sentences or soon after the sentences in question, because justification is a long process.
Hence, my sentences may tend to be systemically complex. A serious reader will not find these arguments getting too complex, because such a person has further unanswered questions. We do not purposely make anything complex. Our characterizations of meanings in mathematics, physics, philosophy, and logic can be complex and prohibitive for some. But would we all accuse these disciplines or the readers if the readers find them all complex and difficult? In that case, I could be excused too. I do not intentionally create a complex state of affairs in these few pages; but there are complexities here too. I express my helplessness in case any one finds these statements complex.
The languages of both science and philosophy tend to be complex and exact. This, nevertheless, should be tolerated provided the purpose is understood and practiced by both the authors and the readers. Ordinary language has its worth and power. If I give a lecture, I do not always use so formal a language as when I write, because I am there to re-clarify.
But the Wittgensteinian obsession with “ordinary” language does not make him use an ordinary language in his own works. Nor does the Fregean phobia about it save him from falling into the same ordinary-language naïveté of choosing concrete and denotative equivalence between terms and their reference-objects without a complex ontology behind them. I attempt to explain the complex ontology behind the notions that I use.
2. Prelude on the Pre-Scientific Principle of Causality
Which are the ultimate conditions implied by the notion of existence (To Be), without which conditions implied nothing exists, and without which sort of existents nothing can be discoursed? Anything exists non-vacuously. This implies that existents are inevitably in Extension (having parts, each of which is further extended and not vacuous). The parts will naturally have some contact with a finite number of others. That is, everything is in Change (impacting some other extended existents).
Anything without these two characteristics cannot exist. If not in Change, how can something exist in the state of Extension alone? And if not in Extension, how can something exist in the state of Change alone? Hence, Extension-Change are two fundamental ontological categories of all existence and the only two exhaustive implications of To Be. Any unit of causation with one causal aspect and one effect aspect is termed a process.
These conditions are ultimate in the sense that they are implied by To Be, not as the secondary conditions for anything to fulfil after its existence. Thus, “To Be” is not merely of one specific existent, but of all existents. Hence, Extension-Change are the implications of the To Be of Reality-in-total. Physical entities obey these implications. Hence, they must be the foundations of physics and all other sciences. Theoretical foundations, procedures, and conclusions based on these implications in the sciences and philosophy, I hold, are wise enough.
Extension-Change-wise existence is what we understand as Causality: extended existents and their parts exert impacts on other extended existents. Every part of existents does it. That is, if anything exists, it is in Causation. This is the principle of Universal Causality. In short, Causality is not a matter to be decided in science – whether there is Causality or not in any process under experiment and in all existents is a matter for philosophy to decide, because philosophy tends to study all existents. Science can ask only whether there occurs any specific sort of causation or not, because each science has its own restricted viewpoint of questions and experiments and in some cases also restrictions in the object set.
Thus, statistically mathematical causality is not a decision as to whether there is causation or not in the object set. It is not a different sort of causation, but a measure of the extent of determination of special causes that we have made at a given time. Even the allegedly “non-causal” quantum-mechanical constituent processes are mathematically and statistically circumscribed measuremental concepts from the results of Extended-Changing existents and, ipso facto, the realities behind these statistical measurements are in Extension-Change if they are physically existent.
Space is the measured shape of Extension; time is that of Change. Therefore, space and time are epistemic categories. How then can statistical causality based only on measuremental data be causality at all, if the causes are all in Extension-Change and if Universal Causality is already the pre-scientific Law under which all other laws appear? No part of an existent is non-extended and non-changing. One unit of cause and effect may be called a process. Every existent and its parts are processual.
And how can a so-called random cause be a cause, except when the randomness is the extent of our measuremental reach of the cause, which already is causal because of its Extension-Change-wise existence? Extension and Change are the very exhaustive meanings of To Be, and hence I call them the highest Categories of metaphysics, physical ontology, physics, and all science. Not merely philosophy but also science must obey these two Categories.
In short, everything existent is causal. Hence, Universal Causality is the highest pre-scientific Law, second conceptually only to Extension-Change and third to Existence / To Be. Natural laws are merely derivative. Since Extension-Change-wise existence is the same as Universal Causality, scientific laws are derived from Universal Causality, and not vice versa. Today the sciences attempt to derive causality from the various scientific laws! The relevance of metaphysics / physical ontology for the sciences is clear from the above.
Existents have some Activity and Stability. This is a fully physical fact. These two Categories may be shown to be subservient to Extension-Change and Causality. Pure vacuum (non-existence) is absence of Activity and Stability. Thus, entities, irreducibly, are active-stable processes in Extension-Change. Physical entities / processes possess finite Activity and Stability. Activity and Stability together belong to Extension; and Activity and Stability together belong to Change too.
That is, Stability is neither merely about space nor about Extension. Activity is neither merely about time nor about Change. There is a unique reason for this. There is no absolute stability nor absolute activity in the physical world. Hence, Activity is finite, which is by Extended-Changing processes; and Stability is finite, which is also by Extended-Changing processes. But the tradition still seems to parallelise Stability and Activity with space and time respectively. We consider Activity and Stability as sub-Categories, because they are based on Extension-Change, which together add up to Universal Causality; and each unit of cause and effect is a process.
These are not Categories that belong to merely imaginary counterfactual situations. The Categories of Extension-Change and their sub-formulations are all about existents. There can be counterfactuals that signify cases pertaining to existent processes. But separating these cases from some of the useless logical talk, as in linguistic-analytically tending logic, philosophy, and philosophy of science, is near to impossible.
Today physics and the various sciences do at times something like the said absence of separation of counterfactual cases from actual in that they indulge in particularistically defined terms and procedures, by blindly thinking that counterfactuals can directly represent the physical processes under inquiry. Concerning mathematical applications too, the majority attitude among scientists is that they are somehow free from the physical world.
Hence, without a very general physical ontology of Categories that are applicable to all existent processes and without deriving the mathematical foundations from these Categories, the sciences and mathematics are in gross handicap. Mathematics is no exception in its applicability to physical sciences. Moreover, pure mathematics too needs the hand of Extension and Change, since these are part of the ontological universals, form their reflections in mind and language, etc., thus giving rise to mathematics.
The exactness within complexity that could be expected of any discourse based on the Categorial implications of To Be can only be such that (1) the denotative terms ‘Extension’ and ‘Change’ may or may not remain the same, (2) but the two dimensions of Extension and Change – that are their aspects in ontological universals – would be safeguarded both physical-ontologically and scientifically.
That is, definitional flexibility and openness towards re-deepening, re-generalizing, re-sharpening, etc. may even change the very denotative terms, but the essential Categorial features within the definitions (1) will differ only meagrely, and (2) will normally be completely the same.
3. Mathematical “Continuity and Discreteness” Vs. Causal “Continuity”
The best examples for the above are mathematical continuity and discreteness that are being attributed blindly to physical processes due to the physical absolutization of mathematical requirements. But physical processes are continuous and discrete only in their Causality. This is nothing but Extension-Change-wise discrete causal continuity. At any time, causation is present in anything, hence there is causal continuity. This is finite causation and hence effects finite continuity and finite discreteness. But this is different from absolute mathematical continuity and discreteness.
I believe that it is common knowledge that mathematics and its applications cannot prove Causality directly. What are the bases of the problem of incompatibility of physical causality within mathematics and its applications in the sciences and in philosophy? The main but general explanation could be that mathematical explanations are not directly about the world but are applicable to the world to a great extent.
It is good to note that mathematics proceeds as a separate science as if its “objects” were existent, though in fact they are non-existent and different from those of any other science – thus making mathematics an abstract science in the theoretical aspects of its rational effectiveness. Hence, mathematical explanations can at most only show the ways of movement of processes, and cannot demonstrate whether the ways of the cosmos are by causation.
Moreover, the basic notions of mathematics (number, number systems, points, shapes, operations, structures, etc.) are all universals / universal qualities / ontological universals that belong to groups of existent things that are irreducibly Extension-Change-type processes. (See below.)
Thus, mathematical notions have their origin in ontological universals and their reflections in mind (connotative universals) and in language (denotative universals). The basic nature of these universals is ‘quantitatively qualitative’. We shall not discuss this aspect here at length.
No science and philosophy can start without admitting that the cosmos exists. If it exists, it is not nothing, not non-entity, not vacuum. Non-vacuous existence means that the existents are non-vacuously extended. This means they have parts. Every part has parts too, ad libitum, because each part is extended. None of the parts is an infinitesimal. They can be near-infinitesimal. This character of existents is Extension, a Category directly implied by To Be.
Similarly, any extended being’s parts are active, moving. This implies that every part has impact on some others, not on infinite others. This character of existents is Change. No other implication of To Be is so primary as these. Hence, they are exhaustive of the concept of To Be, which belongs to Reality-in-total. These arguments show us the way to conceive the meaning of causal continuity.
Existence in Extension-Change is what we call Causality. If anything is existent, it is causal – hence Universal Causality is the trans-science physical-ontological Law of all existents. By the very concept of finite Extension-Change-wise existence, it becomes clear that no finite space-time is absolutely dense with existents. In fact, space-time is no ontological affair, but only epistemological, and existent processes need measurementally accessible finite space for Change. Hence, existents cannot be mathematically continuous. Since there is Change and transfer of impact, no existent can be absolutely discrete in its parts or in connection with others.
Can logic show the necessity of all existents to be causal? We have already discussed how, ontologically, the very concept of To Be implies Extension-Change and thus also Universal Causality. Logic can only be instrumental in this.
What about the ability or not of logic to conclude to Universal Causality? In my arguments above and elsewhere showing Extension-Change as the very exhaustive meaning of To Be, I have used mostly only the first principles of ordinary logic, namely, Identity, Contradiction, and Excluded Middle, and then argued that Extension-Change-wise existence is nothing but Universal Causality if everything existing is non-vacuous in existence.
For example, does everything exist or not? If yes, let us call it non-vacuous existence. Hence, Extension is the first major implication of To Be. Non-vacuous means extended, because if not extended the existent is vacuous. If extended, everything has parts. Having parts implies distances, however minute, between all the near-infinitesimal parts of any existent process. In this sense, the basic logical laws do help conclude the causal nature of existents.
Change now comes in as a point of addition. It is, so to say, from experience. But this need not exactly be an addition. If existents have parts (i.e., if they are in Extension), the parts’ mutual difference already implies the possibility of contact between parts. Thus, I am empowered to move to the meaning of Change basically as motion or impact. Naturally, everything in Extension must effect impacts.
Everything has further parts. Hence, by implication from Change and the need for there to be contacts between every near-infinitesimal set of parts of existents, everything causes changes by impacts. In the physical world this is by finite impact formation. Hence, nothing can exist as an infinitesimal. Leibniz’s monads have no significance in the real world.
Thus, we conclude that Extension-Change-wise existence is Universal Causality, and every actor in causation is a real, extended existent – not a non-extended existent, as energy particles seem to have been considered and are even today thought to be, due to their unit-shape yielded merely for the sake of mathematical applications. It is thus natural to claim that Causality is a pre-scientific Law of Existence, where existents are all inwardly and outwardly in Change, i.e., in impact formation – otherwise, the concept of Change would lose meaning.
In such foundational questions like To Be and its implications, the first principles of logic must be used, because these are the foundational notions of all science and no other derivative logical procedure comes in as handy. In short, logic with its fundamental principles can help derive Universal Causality. Thus, Causality (Extension-Change) is more primary to experience than the primitive notions of mathematics. But the applicability of these three logical Laws is not guaranteed so well in arguments using derivative, less categorial, sorts of concepts.
I suggest that the crux of the problem of mathematics and causality is the dichotomy between mathematical continuity and mathematical discreteness on the one hand, and, on the other, the impossibility of applying either of them directly to the data collected / collectible / interpretable from some layers of the phenomena, which are themselves from some layers of the object-process in question. Not recognizing the presence of such stratificational debilitation of epistemic directness is an epistemological foolishness. Science and philosophy, in my opinion, are victims of this. Thus, for example, Bayesian statistical theory recognizes only a statistical membrane between reality and data!
Here I point at the avoidance of the problem of stratificational debilitation of epistemic directness, by the centuries of epistemological foolishness, by reason of the forgetfulness of the ontological and epistemological relevance of expressions like ‘from some layers of data from some layers of phenomena from some layers of the reality’.
This is the point at which it is time to recognize the gross violence against natural reason behind phrases and statements involving ‘data from observation’, ‘data from phenomena’, ‘data from nature / reality’, etc., used without the epistemological and ontological sharpness, in both science and philosophy, to accept these basic facts of nature. As we all know, this state of affairs has become nearly irredeemable in the sciences today.
The whole of what we used to call space is not filled with matter-energy. Hence, if causal continuity between partially discrete “processual” objects is the case, then the data collected / collectible cannot be the very processual objects and hence cannot provide all knowledge about the processual objects. But mathematics and all other research methodologies are based on human experience and thought based on experience.
This theoretical attitude facilitates and accepts in a highly generalized manner the following three points:
(1) Mathematical continuity (in any theory and in terms of any amount of axiomatization of logical, mathematical, physical, biological, social, and linguistic theories) is totally non-realizable in nature as a whole and in its parts: because (a) the necessity of mathematical approval of any sort of causality in the sciences and by means of its systemic physical ontology falls miserably short in actuality, and (b) logical continuity of any kind does not automatically make linguistically or mathematically symbolized representational activity adequate enough to represent the processual nature of entities as derived from data.
(2) The concept of absolute discreteness in nature, which, as of today, is ultimately of the quantum-mechanical type based on Planck’s constant, continues to be a mathematical and partial misfit in the physical cosmos and its parts, (a) if there exist other universes that may causally determine the constant differently at their specific expansion and/or contraction phases, and (b) if there are an infinite number of such finite-content universes.
The case may not of course be so problematic in non-quantifiable “possible worlds” due to their absolute causal disconnection or their predominant tendency to causal disconnection, but this is a mere common-sense, merely mathematical, compartmentalization: because (a) the aspect of the causally processual connection between any two quanta is logically and mathematically alienated in the physical theory of Planck’s constant, and (b) the possible worlds have only a non-causal existence, and hence, anything may be determined in this world as a constant, and an infinite number of possible universes may be posited without any causal objection!
It is usually not kept in mind here by physicists that the epistemology of unit-based thinking – of course, based on quantum physics or not – is implied by the almost unconscious tendency of symbolic activity of body-minds. This need not have anything to do with a physics that produces laws for all existent universes.
(3) The only viable and thus the most reasonably generalizable manner of being of the physical cosmos and of biological entities is that of existence in an Extended (having parts) and Changing manner (extended entities and their parts impacting a finite number of other existents and their parts in a finite quantity and in a finite duration). Existence in the Extension-Change-wise manner is nothing but causal activity.
Thus, insofar as everything is existent, every existent is causal. There is no time (i.e., no minute measuremental iota of Change) wherein such causal manner of existing ceases in any existent. This is causal continuity between partially discrete processual objects. This is not mathematizable in a discrete manner. The concept of geometrical and number-theoretic continuity may apply. But if there are other universes, the Planck constant of proportionality that determines the proportion of content of discreteness may change in the others. This is not previsioned in terrestrially planned physics.
The attitude of treating everything as causal may also be characterized by the self-aware symbolic activity by symbolic activity itself, in which certain instances of causation are avoided or enhanced, all decrementally or incrementally as the case may be, but not absolutely. This, at the most, is what may be called freedom.
It is fully causal – need not be sensed as causal within a specific set of parameters, but as causal within the context of Reality-in-total. But the whole three millennia of psychological and religious (contemplative) tradition of basing freedom merely on awareness intensity, and not on love – this is a despicable state of affairs, on which a book-length treatise is necessary.
Physics and cosmology even today tend to make the cosmos either (1) mathematically presupposedly continuous, or (2) discrete with defectively ideal mathematical status for causal continuity and with perfectly geometrical ideal status for specific beings, or (3) statistically indeterministic, thus being compelled to consider everything as partially causal, or even non-causal in the interpretation of statistics’ orientation to epistemically logical decisions and determinations based on data. If this has not been the case, can anyone suggest proofs for an alleged existence of a different sort of physics and cosmology until today?
The statistician does not even realize (1) that Universal Causality is already granted by the very existence of anything, and (2) that what they call non-causality is merely not being the cause, or not having been discovered as the cause, of a specific set of selected data or processes. Such non-causality is not with respect to all existents. Quantum physics, statistical physics, and cosmology are replete with examples of this empirical and technocratic treachery against the notion of science.
A topological and mereologically clean physical ontology of causal continuity between partially discrete processual objects, fully free of absolutely continuity-oriented or absolutely discreteness-oriented category theory, geometry, topology, functional analysis, set theory, and logic, is yet to be born. Hence, the fundamentality of Universal Causality in its deep roots in the very concept of the To Be (namely, in the physical-ontological Categories of Extension and Change) of all physically and non-vacuously existent processes remains alien to physics and cosmology until today.
Non-integer rational numbers are not the direct notion of anything existent. Even a part of a unit process has the attribute ‘unity’ in all the senses in which any other object possesses it. For this reason, natural numbers have Categorial priority over rational numbers, because natural numbers are more directly related to ontological universals than other sorts of numbers are. Complex numbers, for example, form the most general number system with respect to their mathematically defined sub-systems, but this does not mean that they are more primary in the metaphysics of ontological universals, since the primary mode of numerically quantitative qualities / universals is that of natural numbers.
4. Mathematics and Logic within Causal Metaphysics
Hence, it is important to define the limits of the applicability of mathematics to physics, which uses physical data (under the species of the various layers of their origin). This is the only way to approximate beyond the data and beyond the methodologically derived conclusions from the data. How and on what levels this is to be done is a matter to be discussed separately.
The same may be said also about logic and language. Logic is the broader rational picture of mathematics. Language is the symbolic manner of application of both logic and its quantitatively qualitative version, namely, mathematics, with respect to specific fields of inquiry. Here I do not explicitly discuss ordinary conversation, literature, etc.
We may do well to instantiate logic as the formulated picture of reason. But human reason is limited to the procedures of reasoning by brains. What exactly is the reason that existent physical processes constantly undergo? How to get at conclusions based on this reason of nature – by using our brain’s reasoning – and thus transcend at least to some extent the limitations set by data and methods in our brain’s reasoning?
If we may call the universal reason of Reality-in-total by a name, it is nothing but Universal Causality. It is possible to demonstrate that Universal Causality is a trans-physical, trans-scientific Law of Existence. This argument needs clarity. How to demonstrate this as the case? This has been done in an elementary fashion in the above, but more of it is not to be part of this discussion.
Insistence on mathematical continuity in nature is a mere idealization. It expects nature to obey our merely epistemic sort of idealizations, that is, in ideal cases based mostly on the brain-interpreted concepts from some layers of data, which are from some layers of phenomena, which are from some layers of the reality under observation. Some of the best examples in science are the suppositions that virtual worlds are existent worlds, dark energy is a kind of propagative energy, zero-value cosmic vacuum can create an infinite number of universes, etc.
The processes outside are vaguely presented primarily by the processes themselves, but highly indirectly, in a natural manner. This is represented by the epistemic / cognitive activity within the brain in a natural manner (by the connotative universals in the mind as reflections of the ontological universals in groups of object processes), and then idealized via concepts expressed in words, connectives, and sentences (not merely linguistic but also mathematical, computerized, etc.) by the symbolizing human tendency (thus creating denotative universals in words) to capture the whole of the object by use of a part of the human body-mind.
The symbolizing activity is based on data, but the data are not all we have as end results. We can mentally recreate the idealized results behind the multitude of ontological, connotative, and denotative universals as existents.
As the procedural aftermath of this, virtual worlds begin to “exist”, dark energy begins to “propagate”, and zero-value cosmic vacuum “creates” universes. Even kinetic and potential energies are treated as propagative energies existent outside of material bodies and supposed to be totally different from material bodies. These are mere theoretically interim arrangements in the absence of direct certainty for the existence or not of unobservables.
Insistence on mathematical continuity in nature as a natural conclusion by the application of mathematics to nature is what happens in all physical and cosmological (and of course other) sciences insofar as they use mathematical idealizations to represent existent objects and processes and extrapolate further beyond them. Mathematical idealizations are another version of linguistic symbolization and idealization.
Logic and its direct quantitatively qualitative expression as found in mathematics are, of course, powerful tools. But, as being part of the denotative function of symbolic language, they are tendentially idealizational. By use of the same symbolizing tendency, it is perhaps possible to a certain extent to de-idealize the side-effects of the same symbols in the language, logic, and mathematics being used in order to symbolically idealize representations.
Merely mathematically following physical nature in whatever it is in its part-processes is a debilitating procedure in science and philosophy (and even in the arts and humanities), if this procedure is not de-idealized effectively. If this is possible at least to a small and humble extent, why not do it? Our language, logic, and mathematics do their functions well, although they too are equally unable to capture the whole of Reality in whatever it is, wholly or in parts, far beyond the data and their interpretations! Why not de-idealize the side-effects of mathematics too?
This theoretical attitude of partially de-idealizing the effects of human symbolizing activity by use of the same symbolic activity accepts the existence of processual entities as whatever they are. This is what I call ontological commitment – of course, different from and more generalized than those of Quine and others. Perhaps such a generalization can give a slightly better concept of reality than is possible by the normally non-self-aware symbolic activity in language, logic, and mathematics.
5. Mathematics, Causality, and Contemporary Philosophical Schools
With respect to what we have been discussing, linguistic philosophy – and even its more recent causalist child, namely, dispositionalist causal ontology – has even today the following characteristics:
(1) They attribute an even now overly discrete nature to “entities” to the extent of their causal separateness from others while considering them as entities. The ontological notion of an object, or even of an event in its unity, in analytic philosophy, and in particular in modal ontology, forecloses consideration of the process nature of each such unity within, on a par with interactions of such units with one another (David Lewis, Parts of Classes, p. vii). This is done without ever attempting to touch the deeply Platonic (better, geometrically atomistic) shades of common-sense Aristotelianism, Thomism, Newtonianism, Modernism, quantum physics, etc., and without reconciling the diametrically opposite geometrical tendency to make every physical representation continuous.
(2) They are logically comatose about the impossibility of the exactly referential definitional approach to the processual demands of existent physical objects without first analyzing and resolving the metaphysical implications of existent objects, namely, being irreducibly in finite Extension and Change and thus in continuous Universal Causality in finite extents at any given moment.
(3) They are unable to get at the causally fully continuous (neither mathematically continuous nor geometrically discontinuous) nature of the physical-ontologically “partially discrete” processual objects in the physical world, also because they have misunderstood the discreteness of processual objects (including quanta) within stipulated periods as typically universalizable due to their pragmatic approach in physics and involvement of the notion of continuity of time.
Phenomenology has done a lot to show the conceptual structures of ordinary reasoning, physical reasoning, mathematical and logical thinking, and reasoning in the human sciences. But due to its lack of commitment to building a physical ontology of the cosmos and due to its purpose as a research methodology, phenomenology has failed to an extent to show the nature of causal continuity (instead of mathematical continuity) in physically existent, processually discrete, objects in nature.
Hermeneutics has merely followed the human-scientific interpretative aspect of Husserlian phenomenology and projected it as a method. Hence, it was no contender to accomplish the said feat.
Postmodern philosophies qualified all science and philosophy as being perniciously cursed to be “modernistic” – by thus monsterizing all compartmentalization, rules, laws, axiomatization, discovery of regularities in nature, logical rigidity, and even metaphysical grounding as insurmountable curses of the human project of knowing and as a synonym for all that are unapproachable in science and thought. The linguistic-analytic philosophy in later Wittgenstein too was no exception to this nature of postmodern philosophies – a matter that many Wittgenstein followers do not notice. Take a look at the first few pages of Wittgenstein’s Philosophical Investigations, and the matter will be more than clear.
The philosophies of the sciences seem today to follow the beaten paths of extreme pragmatism in linguistic-analytic philosophy, physics, mathematics, and logic, which lack a foundational concept of causally concrete and processual physical existence.
Hence, it is useful for the growth of science, philosophy, and humanities alike to research into the causal continuity between partially discrete “processual” objects and forget about absolute mathematical continuity or discontinuity in nature. Mathematics and the physical universe are to be reconciled in order to mutually delimit them in terms of the causal continuity between partially discrete processual objects.
Bibliography
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
The view that humans have of the world is influenced by the cognitive method that has been formed. All the fields of knowledge and science that you mentioned have together made up a cognitive method in the current time, which identifies the world with this dominant method!
We know only within the frame in which our thinking is located! To look at the world in a different way and to adopt a different method, one must work outside all these fields, with a method that penetrates to the cause of phenomena! Is there a mathematics and logic whose resulting physical and cosmological models have the least distance from the principles of the actual phenomena of the world?! Our language is unable to express what is actually happening! Maybe a world like David Bohm's holographic world is the solution to our problem! In fact, an important part of the truth of the world is hidden in the hidden world – the part that we need in order to understand the world's phenomena! Or is the invention of a new logic, mathematics, and scientific-epistemological language needed?! In my opinion, the phenomenological method will say nothing about the causal layers of this world! It helps us to work without knowing the world and gives us a way of living and of using most of the capacities of this world without knowing it. Previously, Newton claimed a method for understanding physics with wrong assumptions about space and time, which lasted for a while! Those assumptions were based on the sensibility and philosophy of that period, which were also manifested in the language of the time! It cannot be said that such wrong assumptions have now stopped! And maybe the current path of epistemology and cosmology, with incorrect and limiting assumptions, will not bring us to true knowledge of the world.....
• asked a question related to Mathematics
Question
Insistence on mathematical continuity in nature is a mere idealization. It expects nature to obey our idealization. This is what happens in all physical and cosmological (and of course other) sciences as long as they use mathematical idealizations to represent existent objects / processes.
But mathematically following nature in whatever it is in its part-processes is a different procedure in science and philosophy (and even in the arts and humanities). This theoretical attitude accepts the existence of processual entities as what they are.
This theoretical attitude accepts in a highly generalized manner that
(1) mathematical continuity (in any theory and in terms of any amount of axiomatization of physical theories) is not totally realizable in nature as a whole and in its parts: because the necessity of mathematical approval in such a cosmology falls short miserably,
(2) absolute discreteness (even QM type, based on the Planck constant) in the physical cosmos (not in non-quantifiable “possible worlds”) and its parts is a mere commonsense compartmentalization from the "epistemology of piecemeal thinking": because the aspect of the causally processual connection between any two quanta is logically and mathematically alienated in the physical theory of Planck’s constant, and
(3) hence, the only viable and thus the most reasonably generalizable manner of being of the physical cosmos and of biological entities is that of CAUSAL CONTINUITY BETWEEN PARTIALLY DISCRETE PROCESSUAL OBJECTS.
PHYSICS and COSMOLOGY even today tend to make the cosmos mathematically either continuous or defectively discrete or statistically oriented to merely epistemically probabilistic decisions and determinations.
Can anyone suggest here the existence of a different sort of physics and cosmology that one may have witnessed until today? A topology and mereology of CAUSAL CONTINUITY BETWEEN PARTIALLY DISCRETE PROCESSUAL OBJECTS, fully free of discreteness-oriented category theory and functional analysis, is yet to be born.
Hence, causality in its deep roots in the very concept of To Be is alien to physics and cosmology till today.
Bibliography
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
Humans in this age, by inventing and using mathematical models and trying to match them with natural phenomena, are free to know the world. The mentioned models have their own logic and causality! And experience has shown that they do not have a significant relationship with nature! And there will always be an inevitable distance between our models of nature, the motivation to reduce this distance may be another request for scientific efforts!
• asked a question related to Mathematics
Question
I ‘read’ in a maths popularization book by Steven Strogatz that 1+3=4, 1+3+5=9, 1+3+5+7=16, and so on; which would be the hypothesis when trying to demonstrate this striking ‘fact’?
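One natural candidate for the hypothesis to be demonstrated (by induction, say) is the closed form: the sum of the first n odd numbers equals n². A minimal Python sketch checking this reading of Strogatz's pattern:

```python
# Hypothesis suggested by 1 = 1, 1+3 = 4, 1+3+5 = 9, 1+3+5+7 = 16, ...:
#   1 + 3 + 5 + ... + (2n - 1) = n^2.
# The induction step: adding the next odd number (2n + 1) to n^2
# gives n^2 + 2n + 1 = (n + 1)^2.

def sum_of_first_odds(n: int) -> int:
    """Sum of the first n odd numbers, computed directly."""
    return sum(2 * k - 1 for k in range(1, n + 1))

# Check the closed form for small n.
for n in range(1, 11):
    assert sum_of_first_odds(n) == n ** 2

print([sum_of_first_odds(n) for n in range(1, 6)])  # [1, 4, 9, 16, 25]
```

Of course a computational check only supports the hypothesis; the induction argument in the comments is what demonstrates it.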
In his famous ‘Principia’, Newton uses mathematical considerations to demonstrate the conservation of angular momentum; in the ‘first’ triangle one of the sides is the velocity – its value, so a number – and the other two sides are the ‘distances’ at t and t+1, respectively; now, if the areas are the same, could we say that these two sides express the flow of time?
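With the caveat that this is a modern vector reconstruction rather than Newton's own geometric notation, the equal-areas argument the question refers to (Principia, Book I, Proposition 1) can be sketched as follows; the side built from the velocity is really the displacement v·Δt, which is where the elapsed time enters the triangle:

```latex
% Sketch of Newton's equal-areas argument in modern notation.
% S is the centre of force, \mathbf{r} the position, \mathbf{v} the
% velocity. In each equal interval \Delta t the radius vector sweeps
% a triangle of area \tfrac12\,|\mathbf{r} \times \mathbf{v}\,\Delta t|,
% and the impulse toward S at each step leaves that area unchanged, so
\[
  \frac{dA}{dt}
  = \frac{1}{2}\,\bigl|\mathbf{r} \times \mathbf{v}\bigr|
  = \frac{L}{2m}
  = \text{const.}
\]
% The triangle side of length |\mathbf{v}|\,\Delta t is velocity times
% elapsed time; in that precise sense the sides at t and t + \Delta t
% do encode the flow of time.
```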
• asked a question related to Mathematics
Question
Physics
The physicist betting that space-time isn't quantum after all
Most experts think we have to tweak general relativity to fit with quantum theory. Physicist Jonathan Oppenheim isn't so sure, which is why he’s made a 5000:1 bet that gravity isn’t a quantum force
By Joshua Howgego
13 March 2023
JONATHAN OPPENHEIM likes the occasional flutter, but the object of his interest is a little more rarefied than horse racing or the one-armed bandit. A quantum physicist at University College London, Oppenheim likes to make bets on the fundamental nature of reality – and his latest concerns space-time itself.
The two great theories of physics are fundamentally at odds. In one corner, you have general relativity, which says that gravity is the result of mass warping space-time, envisaged as a kind of stretchy sheet. In the other, there is quantum theory, which explains the subatomic world and holds that all matter and energy comes in tiny, discrete chunks. Put them together and you could describe much of reality. The only problem is that you can’t put them together: the grainy mathematics of quantum theory and the smooth description of space-time don’t mesh.
Most physicists reckon the solution is to “quantise” gravity, or to show how space-time comes in tiny quanta, like the three other forces of nature. In effect, that means tweaking general relativity so it fits into the quantum mould, a task that has occupied researchers for almost a century already. But Oppenheim wonders if this assumption might be mistaken, which is why he made a 5000:1 bet that space-time isn’t ultimately quantum.
The quantum experiment that could prove reality doesn't exist
................................................................................
A special spammer has again appeared after the SS post above, in this case again aiming to place his “last post in the thread”. Such threads are indicated in the rather useful RG “– Science topic” options below threads’ questions – examples here being the “Space Time”, “Quantum”, “QUANTA”, and “Mathematics – Science topic” options – and his aim is to replace really scientific answers, which this spammer does thoroughly in many threads.
Since he has no real notion of what physics [and the other sciences, though] is, beyond knowing that there are some terribly scientific words in physics, his posts are senseless sets of such words – as that post is; and, at that, he recommended the replaced SS post, seemingly knowing that his recommendation of any post means that the post is some trash.
As to
“…Quantum mechanics is but a language, and like a language it contains contradictory propositions. (These propositions can be precise and "work", but that's not the point.) …”
- quantum mechanics is a theory just as classical mechanics quite equally is. And it indeed has a lot of really fundamental flaws, again, many of the same flaws that classical mechanics has also.
These flaws exist because in mainstream philosophy and sciences, including physics, all really fundamental phenomena/notions – first of all, in physics, "Matter" (and so everything in Matter, i.e. "particles", "fields", etc.), "Consciousness", "Space", "Time", "Energy", "Information" – are fundamentally completely transcendent/uncertain/irrational,
- so in every case when mainstream physics addresses some really fundamental problem, the results quite obligatorily, logically, are nothing else than some transcendent, fantastic mental constructions; in both cases
- when some authors attempt to solve some directly fundamental problems, and some fantastic, fundamentally non-testable-experimentally theories (say, "string" ones) appear in physics, and
- when a theory describes experimentally observed material objects/systems/events/effects – in this case, say, QED is fitted to experiments, but for that really completely ad hoc, and really wrong, mathematical tricks are used; etc.
Real fundamental physics development can be possible only provided that the fundamental phenomena/notions above are scientifically defined, which is possible, and is done, only in the framework of the 2007 Shevchenko-Tokarevsky "The Information as Absolute" conception (see the recent version of the basic paper),
- and practically for sure, in many cases, based on the SS&VT Tokarevsky's informational physical model, which is based on the conception; the two main papers are
https://www.researchgate.net/publication/355361749_The_informational_physical_model_and_fundamental_problems_in_physics, where by now more than 30 fundamental physical problems are either solved or essentially clarified,
including [mostly in the "Conclusion" section of the last link] a discussion of the fundamental flaws of both mechanics.
However that
“….General relativity is a genuine theory. Both can't be unified until we have a proper quantum theory...”
- is fundamentally incorrect. In the GR the author addressed the fundamental phenomena/notions above, which were completely transcendent for him, and so postulated for these phenomena really completely transcendent properties that are fundamentally non-adequate to reality; etc.; for more, see the SS post of 5 days ago in https://www.researchgate.net/post/Do_you_think_that_general_relativity_needs_modifications_or_it_is_a_perfect_theory/138 ,
- here only note that Gravity really is, of course, fundamentally nothing else than some fundamental Nature force, as the Electric, Strong/Nuclear, and Weak Forces are; and with a well non-zero probability it acts as shown in the SS&VT initial models of the Forces – see https://www.researchgate.net/publication/365437307_The_informational_model_-_Gravity_and_Electric_Forces, and
Cheers
• asked a question related to Mathematics
Question
This is actually a trivial question and I'm just being mischievous.
It turns on the shades of meaning of both "idea" and "exist."
Mathematically, a concept exists whether anyone has happened upon it or not. (A meaningless attempt at a concept is not a concept).
When first thought about by an actual brain of any kind, a concept acquires its first glimmer of existence as in the real world.
I will attempt to provide an answer by recalling a fascinating lecture given by Professor Enrico Bellone. In a letter dated May 7th, 1952, addressed to his friend Maurice Solovine, Einstein shared a drawing that summarized his ideas on the subject. The drawing features a horizontal line labeled E, which represents immediate experiences or the empirical basis, and a vertical line labeled A, which represents the axioms underlying theories. Einstein argued that there is no logical process that allows us to derive axioms from experiences; instead, it requires an intuitive, extralogical, and psychological leap. Once we have intuited the axioms, we can deduce special statements S1, S2... by assuming their truthfulness and then comparing them with experience. According to Einstein, the crucial level lies in the axioms, and therefore there is no distinction between science and philosophy, but rather a single set of concepts. He also maintained that the theoretical principles of scientific theories are fictional, and any attempt to deduce ideas and laws from elementary experiences is doomed to fail. So, what is science according to Albert Einstein? Einstein believed that all the games are played at the top of the drawing, where we jump from one idea to another, from one theory to another, and where we model nature because we have categories of ideas that are fairly standard. In this context, Einstein praised the great philosophers he admired.
• asked a question related to Mathematics
Question
Is it possible to mathematically calculate K-40 from K total determined by ICP MS in sediment samples?
There are three (commonly encountered) isotopes of K: 39K, 40K and 41K. ICP-MS does not measure "K total", but either 39K (~93.3% of K) or 41K (~6.7% of K). It also doesn't measure 40K (0.01%) as there is too much interference from 40Ar used to generate the plasma of the ICP.
So you could potentially measure 39K and calculate that if this is 93.2581% of total K, then 40K is 0.0117% of the total.
Of course, you are assuming
• you have taken a representative sample of sediment
• you extracted 100% of the K from the sediment
• the 3 isotopes are equally extracted by your process
• the ICP-MS is accurately measuring the 39K
• the 40K is actually present in the ratio 0.0117 : 93.2581 in the sediment
Perhaps you need to find a non-ICP method to quantify the K isotopes.
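The natural-abundance arithmetic described above can be sketched in a few lines of Python. This is a minimal sketch; the function name and the example input are illustrative, and all the caveats in the bullet list above still apply:

```python
# Estimate K-40 from a measured K-39 amount using natural isotopic
# abundances (assumption: no isotopic fractionation, as cautioned above).

K39_ABUNDANCE = 93.2581  # atom % of natural K
K40_ABUNDANCE = 0.0117   # atom % of natural K

def k40_from_k39(k39_measured):
    """Scale a K-39 measurement to the expected K-40 amount."""
    return k39_measured * K40_ABUNDANCE / K39_ABUNDANCE

# Example: 1000 units of K-39 implies roughly 0.125 units of K-40.
print(round(k40_from_k39(1000.0), 3))
```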
• asked a question related to Mathematics
Question
Physics is a science of representations, with mathematical aspects in them, foremost, and not of naked correlations and parameter analysis.
It also has competent conceptualizations and ingenious principles.
Even the innocent-seeming uniform motion is a representational scheme for motions under the theory of kinematics. (Representations are separate from reality but are an invaluable part of scientific inferring, predicting, explaining, etc.) E.g., heat is represented as a flow between subsystems. Representations change: e.g., Einstein found the curved-spacetime one for gravitational phenomena.
Physics is also the science of cosmology. It has no meaning if it bypasses the universe, i.e. the sum of subsystems. This discipline has problems because we cannot take ourselves out of it and study it, but physics has tools for this (QM) or theoretical approximations (a more cognitively open treatment of the concept of boundary conditions).
If you want to know about the big things that are the sum of how many things act, you need to know how those many things act. The following are just a few concepts that need changing, and my proofs of the changes:
The proofs that 4 constants are changed by relativistic velocities (PDF) will pop up when the following link is left-clicked:
https://drive.google.com/file/d/1e1ExWG-VyTR8PAPxU5OnfSzd86uj59nh/view?usp=share_link
A link to prerequisite proofs that all Doppler shifts change time and distance (axial, gravitational and transverse, not just the transverse):
https://drive.google.com/file/d/1vGRBH1AgUOCP8_zp7fKxBTMPg-YP_-uh/view?usp=share_link
A link to a proof of a version of the Schrodinger equation for relativistic velocities:
https://drive.google.com/file/d/1kh2d4fYFOd8rbS6tgyUbTA5zDZW-aUNH/view?usp=share_link
I hope you can make use of the above. Samuel Lewis Reich (sLrch53@gmail.com)
• asked a question related to Mathematics
Question
We assume that this statement is false, but it is one of the most common mathematical errors.
So a question arises: what is the importance of the LHS diagonal?
Important or not? As a question it is too general, no definite answer could be given.
• asked a question related to Mathematics
Question
I have two networks, and wish to get them to dynamically interact with one another, yet retain modularity.
Hello Faizan Rao, to dynamically join two separate database schemas while retaining modularity, you can use a mathematical layer such as a view or a stored procedure to perform the interaction. This method allows you to keep the database schemas separate while still enabling them to communicate and share data.
Here's a step-by-step guide on how to do this:
Identify the shared data: Determine the common data points between the two schemas that need to be combined or interact with each other. This could be a common key, attribute, or any other data that can be used to relate the two schemas.
Create a view or stored procedure: Depending on your database system (MySQL, PostgreSQL, SQL Server, etc.), you can create a view or a stored procedure that performs the necessary calculations or data manipulations. This will act as the mathematical layer between the two schemas. This layer will query data from both schemas and perform the required calculations, transformations, or aggregations.
For example, in SQL Server, you can create a view like this:
CREATE VIEW CombinedData AS
-- List the columns explicitly: "SELECT a.*, b.*" would repeat CommonKey,
-- and a view cannot contain two columns with the same name.
-- (DataColumn1 and DataColumn2 are placeholders for your actual columns.)
SELECT a.CommonKey, a.DataColumn1, b.DataColumn2
FROM Schema1.Table1 a
JOIN Schema2.Table2 b
ON a.CommonKey = b.CommonKey;
In this example, Schema1.Table1 and Schema2.Table2 represent tables from the two separate schemas, and CommonKey is the column used to relate the data.
Access the view or stored procedure: Whenever you need to access the combined data, you can query the view or execute the stored procedure to fetch the results. This ensures that the two schemas remain separate, but the data can be combined and accessed dynamically as needed.
Maintain modularity: By using a view or stored procedure, you can keep the two schemas separate and modular. When updates are needed, you can make changes to the individual schemas without affecting the other. The view or stored procedure can then be updated to accommodate the changes. A mathematical layer in the form of a view or a stored procedure can help you dynamically join two separate database schemas while retaining modularity. This method ensures that the schemas remain independent and can be maintained separately, while still allowing for the interaction and sharing of data between them.
Regards,
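As a runnable illustration of the view-as-interaction-layer pattern described above, here is a minimal sketch using Python's built-in sqlite3 module, which can ATTACH a second database as a separate schema. All schema, table, and column names are illustrative, not taken from the SQL Server example; note that SQLite requires a TEMP view to join across attached databases:

```python
# Two separate schemas joined through a view, so each schema stays modular.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS other")  # second, separate schema

conn.execute("CREATE TABLE main.Table1 (CommonKey INTEGER, a_val TEXT)")
conn.execute("CREATE TABLE other.Table2 (CommonKey INTEGER, b_val TEXT)")
conn.execute("INSERT INTO main.Table1 VALUES (1, 'alpha'), (2, 'beta')")
conn.execute("INSERT INTO other.Table2 VALUES (1, 'X'), (3, 'Y')")

# A TEMP view may reference any attached database; ordinary views cannot.
conn.execute("""
    CREATE TEMP VIEW CombinedData AS
    SELECT t1.CommonKey, t1.a_val, t2.b_val
    FROM main.Table1 t1
    JOIN other.Table2 t2 ON t1.CommonKey = t2.CommonKey
""")

rows = conn.execute("SELECT * FROM CombinedData").fetchall()
print(rows)  # only the key present in both schemas appears
```

Either schema can now be changed independently; only the view definition needs updating when the shared key or columns change.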
• asked a question related to Mathematics
Question
Category theory is a branch of mathematics that deals with the abstract structure of mathematical concepts and their relationships. While category theory has been applied to various areas of physics, such as quantum mechanics and general relativity, it is currently not clear whether it could serve as the language of a metatheory unifying the description of the laws of physics.
There are several challenges to using category theory as the language of a metatheory for physics. One challenge is that category theory is a highly abstract and general framework, and it is not yet clear how to connect it to the specific details of physical systems and their behaviour. Another challenge is that category theory is still an active area of research, and there are many open questions and debates about how to apply it to different areas of mathematics and science.
Despite these challenges, there are some researchers who believe that category theory could play a role in developing a metatheory for physics. For example, some have proposed that category theory could be used to describe the relationships between different physical theories and to unify them into a single framework. Others have suggested that category theory could be used to study the relationship between space and time in a more unified and conceptual way.
I am very interested in your experiences, opinions and ideas.
I believe your sentiment is that Category Theory is so general that, if applied to physics, it might provide an insight into the overarching principles of the science. But Category Theory is not going to give you any insight that you do not already possess. It might provide a convenient notation for expressing that insight. Using Category Theory to navigate physics without a physical understanding would be like sailing a yacht without a keel.
• asked a question related to Mathematics
Question
Is it mathematically justified to place negative and positive numbers on the same plane?
Goes like this:
e^(2πi) = 1
e^(iπ) = −1
Consequently: −2π = π,
and that's precisely what I was trying to say.
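The two identities quoted in this thread can be checked numerically with Python's cmath module; the apparent paradox comes from treating the complex logarithm as single-valued:

```python
# Numerically checking the Euler identities quoted above.
import cmath

print(cmath.exp(2j * cmath.pi))   # approximately (1+0j)
print(cmath.exp(1j * cmath.pi))   # approximately (-1+0j)

# Equating exponents across these identities is invalid because the
# complex exponential is periodic: exp(z) = exp(z + 2*pi*1j*k) for any
# integer k, so the complex logarithm is multivalued.
print(cmath.log(1))               # principal branch: 0j
```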
• asked a question related to Mathematics
Question
This question discusses the YES answer. We don't need the √-1.
The complex numbers, using rational numbers (i.e., the Gauss set G) or mathematical real-numbers (the set R), are artificial. Can they be avoided?
Math cannot be in one's head, as [1] explains.
To realize the YES answer, one must advance over current knowledge, and it may sound strange. But every path in a complex space must begin and end in a rational number – anything that can be measured, or produced, must be a rational number. Complex numbers are not needed, physically, as numbers. But in algebra they are useful.
The YES answer can improve the efficiency in using numbers in calculations, although it is less advantageous in algebra calculations, like in the well-known Gauss identity.
For example, in the FFT [2], there is no need to compute complex functions, or trigonometric functions.
This may lead to further improvement in computation time over the FFT, already providing orders of magnitude improvement in computation time over FT with mathematical real-numbers. Both the FT and the FFT are revealed to be equivalent -- see [2].
I detail this in [3] for comments. Maybe one can build a faster FFT (or, FFFT)?
[2] Preprint: FT = FFT
The form z=a+ib is called the rectangular coordinate form of a complex number, that humans have fancied to exist for more than 500 years.
We are showing that is an illusion, see [1].
Quantum mechanics does not, contrary to popular belief, include anything imaginary. All results and probabilities are rational numbers, as we used and published (see ResearchGate) since 1978, see [1].
Everything that is measured or can be constructed is then a rational number, a member of the set Q.
This connects in a 1:1 mapping (isomorphism) to the set Z. From there, one can take out negative numbers and 0, and through an easy isomorphism, connect to the set N and to the set B^n, where B={0,1}.
We reach the domain of digital computers in B={0,1}. That is all a digital computer needs to process -- the set B={0,1}, addition, and encoding, see [1].
The numbers satisfy 0^n = 0 and 1^n = 1. There is no need to calculate trigonometric functions, analysis (calculus), or other functions. Mathematics can end in middle school. We can all follow computers!
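The Z-to-N step in the chain of mappings described above is the standard interleaving bijection; here is a small sketch of it (the function names are mine):

```python
# The standard bijection between the integers Z and the naturals N:
# 0, -1, 1, -2, 2, ...  maps to  0, 1, 2, 3, 4, ...

def z_to_n(z):
    """Map an integer to a unique natural number."""
    return 2 * z - 1 if z > 0 else -2 * z

def n_to_z(n):
    """Inverse map: odd naturals come from positives, even from non-positives."""
    return (n + 1) // 2 if n % 2 else -(n // 2)

# Round-trip check over a small window of integers.
print(all(n_to_z(z_to_n(z)) == z for z in range(-5, 6)))
```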
REFERENCES
[1] Search online.
• asked a question related to Mathematics
Question
@Juan Weisz
the incentive here is that e^(2πi) = 1 (a consequence of Euler's formula)
• asked a question related to Mathematics
Question
I noticed that in some very bad models of neural networks, the value of R² (coefficient of determination) can be negative. That is, the model is so bad that the mean of the data is better than the model.
In linear regression models, the multiple correlation coefficient (R) can be calculated using the root of R². However, this is not possible for a model of neural networks that presents a negative R². In that case, is R mathematically undefined?
I tried calculating the correlation y and y_pred (Pearson), but it is mathematically undefined (division by zero). I am attaching the values.
Obs.: The question is about artificial neural networks.
Raid, apologies here's the attachment. David Booth
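Both observations in the question can be reproduced with toy numbers (illustrative values, not the attached data):

```python
# R^2 can be negative for a bad model, and Pearson r is undefined when
# one series has zero variance (division by zero in the denominator).
import math

def r_squared(y, y_pred):
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_pred))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot

y     = [1.0, 2.0, 3.0, 4.0]
y_bad = [4.0, 4.0, 4.0, 4.0]   # constant, far-off predictions

print(r_squared(y, y_bad))      # negative: the model is worse than the mean

# Pearson r needs a nonzero standard deviation in BOTH series; for the
# constant predictions above the denominator is zero, so r is undefined,
# and taking sqrt(R^2) is meaningless when R^2 < 0.
std_pred = math.sqrt(sum((p - 4.0) ** 2 for p in y_bad) / len(y_bad))
print(std_pred)                 # 0.0, hence division by zero in r
```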
• asked a question related to Mathematics
Question
1 - Prof. Tegmark of MIT hypothesizes that the universe is not merely described by mathematics but IS mathematics.
2 - The Riemann hypothesis applies to the mathematical universe’s space-time, and says its infinite "nontrivial zeros" lie on the vertical line of the complex number plane (on the y-axis of Wick rotation).
3 - Implying infinity=zero, there's no distance in time or space - making superluminal and time travel feasible.
4 - Besides Mobius strips, topological propulsion uses holographic-universe theory to delete the 3rd dimension (and thus distance).
5 - Relationships between living organisms can be explained with scientifically applied mathematics instead of origin of species by biological evolution.
6 - Wick rotation - represented by a circle where the x- and y-axes intersect at its centre, and where real and imaginary numbers rotate counterclockwise between 4 quadrants - introduces the possibility of interaction of the x-axis' ordinary matter and energy with the y-axis' dark matter and dark energy.
An equivalent formulation of the Riemann hypothesis. See formula (3.22) in the
• asked a question related to Mathematics
Question
Theoretical and computational physics provide the vision and the mathematical and computational framework for understanding and extending the knowledge of particles, forces, space-time, and the universe. A thriving theory program is essential to support current experiments and to identify new directions for high energy physics. Theoretical physicists provide a great deal of assistance to the Energy, Intensity, and Cosmic Frontiers with the in-depth understanding of the underlying theory behind experiments and interpreting the outcomes in context of the theory. Advanced computing tools are necessary for designing, operating, and interpreting experiments and to perform sophisticated scientific simulations that enable discovery in the science drivers and the three experimental frontiers.
source: HEP Theoretical and Computationa... | U.S. DOE Office of Science (SC) (osti.gov)
Physics, mathematical, and computational sciences have contributed to the betterment of mankind and continue to push innovation and research today because they provide fundamental frameworks for understanding the natural world, developing new technologies, and solving real-world problems.
Physics, for example, provides a fundamental understanding of the laws of nature that govern the behavior of matter and energy, from the smallest particles to the largest structures in the universe. This understanding has led to the development of technologies such as lasers, semiconductors, and superconductors, which have revolutionized communication, computing, and energy production.
Mathematics provides the language and tools for describing the structure of the natural world and for solving problems across a wide range of fields, from engineering to economics. Mathematical models and simulations allow scientists and engineers to study complex systems and make predictions about their behavior, leading to new discoveries and innovations.
Computational science, which combines mathematics, computer science, and domain-specific knowledge, has become increasingly important in recent years due to the explosion of data and the growing complexity of problems in many fields. Computational tools and algorithms are used to simulate physical processes, analyze large data sets, and develop new materials and drugs.
Overall, physics, mathematical, and computational sciences continue to play a critical role in driving innovation and advancing knowledge in many fields, making them essential for the betterment of mankind.
• asked a question related to Mathematics
Question
What does this mean (± 0.06), and how can I calculate it mathematically?
Standard error = s / √n
where:
• s: sample standard deviation
• n: sample size
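A minimal sketch of that formula in Python (the data values are illustrative):

```python
# Mean plus/minus standard error, computed with the standard library only.
import math

def standard_error(xs):
    n = len(xs)
    mean = sum(xs) / n
    # sample standard deviation (n - 1 in the denominator)
    s = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return s / math.sqrt(n)

data = [5.1, 5.0, 5.2, 4.9, 5.3]
print(f"{sum(data) / len(data):.2f} ± {standard_error(data):.2f}")
```

A reported value like "5.10 ± 0.07" is then the sample mean followed by this standard error.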
• asked a question related to Mathematics
Question
All tests attempting a proof of the Riemann Hypothesis on the zeta function must fail.
There are no zeros of the so-called function of a complex argument.
A function on two different units f(x, y) only has values for the third unit
z [z = f(x, y)]
if the values of the variables x and y are combined by an algebraic rule.
So it should be done for the complex argument Riemann used.
But there is no such combination. So Riemann only did a 'scaling', where both parts of the complex number stay separate.
The second part of the refutation comes from showing wrong expert opinion in mathematics, concerning the false use of 'imaginary' and 'prefixed multiplication'.
The imaginary compartmentalization does not remain of the same dimension all throughout the processualization of complex functions. Even self-determination is ensconced in a sphere where the focus is not on mathematical rigor but rather on collecting some bits of data on the functionals [of the originary function] on the very powerful machinery of manifolds and "post-Newtonian calculus". The systematics of functional differential equations does not have a continuously differentiable solution for every value of the parameter, say, a.
• asked a question related to Mathematics
Question
What are the properties of transversal risks in networks? Happy for applied examples and diffusion properties.
Transversal risks in networks are risks that cross multiple nodes or elements of a network, rather than being confined to a single node. These risks can have a significant impact on the network as a whole and can be difficult to manage and mitigate.
Some properties of transversal risks in networks include:
1. Diffusion: Transversal risks have the potential to spread rapidly through a network, affecting multiple nodes and elements. The speed and extent of diffusion depend on factors such as the topology of the network, the connectivity between nodes, and the nature of the risk itself.
2. Interconnectivity: Transversal risks often arise from the interconnectivity between nodes or elements in a network. The risk can be amplified when the interconnectivity is high, and nodes or elements are highly dependent on one another.
3. Cascading effects: Transversal risks can trigger cascading effects, leading to a chain reaction of failures or disruptions across the network. These cascading effects can be difficult to predict and control.
4. Non-linearity: Transversal risks often exhibit non-linear behavior, meaning that the impact of the risk is not proportional to the size or severity of the risk. Small disruptions can lead to large and unexpected consequences.
Applied examples of transversal risks in networks include:
1. Cybersecurity: Cyber attacks can spread through computer networks, affecting multiple nodes and elements. A single attack can lead to cascading effects, disrupting critical systems and services.
2. Supply chain disruptions: Disruptions in one part of a supply chain can affect multiple nodes and elements downstream, leading to inventory shortages, delays, and other disruptions.
3. Financial contagion: Financial risks can spread through interconnected financial institutions, leading to a systemic crisis that affects the broader economy.
4. Disease outbreaks: Diseases can spread through social networks, leading to large-scale epidemics that affect multiple regions and populations.
Overall, the properties of transversal risks in networks highlight the importance of understanding the interconnectivity and complexity of modern systems and networks. Effective management of transversal risks requires a holistic approach that considers the entire network and its interdependencies.
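The diffusion and cascading-effect properties above can be illustrated with a toy cascade on a small graph. The network and the everything-reachable-fails assumption are illustrative simplifications; real cascade models usually add thresholds or probabilities:

```python
# A toy failure cascade: a single failed node propagates to every node
# it is connected to, directly or indirectly (breadth-first spread).
from collections import deque

graph = {                       # adjacency list of a small network
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def cascade(graph, seed):
    """Return the set of nodes reached by a failure starting at `seed`."""
    failed = {seed}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in failed:
                failed.add(neighbour)
                queue.append(neighbour)
    return failed

# A single failure at A reaches every node: a transversal risk.
print(sorted(cascade(graph, "A")))
```

In a connected network the cascade is total, which is exactly why interconnectivity amplifies transversal risk.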
• asked a question related to Mathematics
Question
Project Name - Improving Achievement and Attitude through Co-operative learning in F. Y. B. Sc. Mathematics Class
• asked a question related to Mathematics
Question
What is missing is an exact definition of probability that would contain time as a dimensionless quantity woven into a 3D geometric physical space.
It should be mentioned that the current definition of probability as the relative frequency of successful trials is primitive and contains no time.
On the other hand, the quantum mechanical definition of the probability density as,
p(r,t) = ψ*(r,t)·ψ(r,t),
which introduces time via the system's destination time and not from its start time is of limited usefulness and leads to unnecessary complications.
It's just a sarcastic definition.
It should be mentioned that a preliminary definition of the probability function of space and time proposed in the Cairo technique led to revolutionary solutions of time-dependent partial differential equations, integration and differentiation, special functions such as the Gamma function, etc. without the use of mathematics.
The theory of interacting gases, such as the electrons in solids, is still in its infancy. Many areas of physics need more development. Cosmology is still under uncertainties. Particle physics is hard, and in its infancy.
• asked a question related to Mathematics
Question
Hello
I have an Excel file containing weather data of Missouri in U.S. The data starts from 25th July and ends on 9th September in 2014. For each day, almost 21 times data has been recorded (6 hours within solar noon time).
How can I make a type99 source file using this Excel file? I already have studied mathematical reference of Trnsys help, but that was not very helpful. Thanks
No, unfortunately I didn't find any solution for this problem.
• asked a question related to Mathematics
Question
The Gamma function,
G(n) = Integral from 0 to infinity of [x^(n-1) e^(-x)] dx
is of the great mathematical and physical importance.
It can be calculated without numerical integration (for practical purposes) via its mathematical and physical properties:
i-minimum of Gamma occurs at x = 1.4616321 and the corresponding value of Gamma(x) is 0.8856032.
ii-Gamma(1.)=Gamma(2.)=1.
iii-Gamma(x) = (x-1)!
A simple preliminary approach that gives the value of Gamma(x) with an error less than 0.001 is the second-order polynomial expression for the factorial x,
(1.-0.46163*x+0.46163*x*x),x element of [0,1].
For example, this gives:
G(10.5) = 11877478
vs. the value of 11899423.084 given by numerical integration,
and Gamma(1.4616) = 0.88527 vs. 0.8856032.
The Gamma function G(n) is well defined for any positive value of n,
G(n) = Integral from 0 to infinity of [x^(n-1) e^(-x)] dx . . . . . (1)
Needless to say, it is of great mathematical and physical importance.
However, for practical purposes, it can be calculated using an adequate closed-form polynomial without going through complicated numerical integration.
The required closed form solution must retain its mathematical and physical properties, namely:
i-minimum of Gamma occurs at x = 1.4616321 and the corresponding value of Gamma(x) is 0.8856032.
ii-Gamma(1.)=Gamma(2.)=1.
iii-Gamma(x) = (x-1)!
iv-The recurrence relation, Gamma(x) = (x-1)·Gamma(x-1)
We propose a simple preliminary approach that gives the required value of Gamma(x) with an error less than 0.001. The proposed preliminary approach must satisfy conditions i-iv,
but since the factorial function x! is not yet defined for negative numbers (x<0), we divide the entire positive x-space into three intervals as follows:
a) x element of ]0, 1]
Here, the proposed second-order polynomial expression for the Gamma function is G(x)=F(x-1) where F(x) is the factorial function x!. F is approximated by,
F(x)=(1.-0.46163*x+0.46163*x*x) . . . . . . . (2)
x element of [0,1].
b) x element of [1,2]
The Gamma function is approximated via the expression,
G(x) = Done(x) + 0.3333/x^1.5 . . . . . . . . . . (3)
where (1/3)·1/(x·sqrt(x)) is a correction factor.
c) x element of [2, infinity[
We can here use the expression (4) supplemented by the expression (2) for the remaining fraction,
G(x)=F(x-1) . . . . . . . . . . . . . . . . . . . . .. . . . (4)
Equations 2, 3 and 4 were implemented in a simple algorithm which produced the required numerical results.
Table I presents some examples of numerical results of the proposed technique compared to those of the numerical tables obtained by numerical integration of Eq. 1.
Table I. Results of the proposed method vs. those of the numerical tables obtained by numerical integration of Eq. 1.
x                          10.5            1.4616       0.5        0.25
G(x), proposed technique   11877478        0.88527      1.82646    3.5798
G(x), numerical tables     11899423.084    0.8856032    1.7738     3.6534
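The quoted polynomial and recurrence can be checked against math.gamma from Python's standard library. The argument-reduction loop below is my own reading of step (c); the polynomial coefficients are from the post:

```python
# Comparing the post's quadratic factorial approximation, extended by the
# recurrence Gamma(x) = (x-1)*Gamma(x-1), against math.gamma.
import math

def f_approx(x):
    """The post's second-order polynomial for x! on [0, 1]."""
    return 1.0 - 0.46163 * x + 0.46163 * x * x

def gamma_approx(x):
    """Gamma(x) for x >= 1 via Gamma(x) = (x-1)! and argument reduction."""
    result = 1.0
    while x > 2.0:               # reduce into [1, 2] using the recurrence
        x -= 1.0
        result *= x
    return result * f_approx(x - 1.0)

for x in (1.4616, 2.5, 10.5):
    print(x, gamma_approx(x), math.gamma(x))
```

The relative error stays at roughly the 0.1 to 1 percent level, consistent with the roughly 10^-3 accuracy of the base polynomial.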
• asked a question related to Mathematics
Question
There are a few points to consider in this issue.
Points pro the current emphasis
1. Math is the backbone of a physical theory. Good representation and good quantities of a theory and its phenomena, but bad math, make for a bad theory.
2. There is a general skepticism about reconsidering the role of the mathematized approach in physics Masters syllabi / upgrading the role of literature/essays.
3. Humans communicate, learn, think & develop constructs via language.
Arguments con
1. Math is the element in a theory and the "physics product" that is responsible for precision & prediction. Indispensable though it is, it exists in the minds of some individuals and functions, as well, in parallel with conception and physical arguments.
2. Not all models in physics are mathematical. Some are conceptual.
3. Formulating solutions to physics problems via math techniques and methods is the definition of mathematical physics. However, this is a certain % of the domain of skills,
but the syllabus focuses 100% on this.
In general physics is the science that observe a phenomenon, gives it a name and describes its properties. And because we can only observe phenomena – humans are phenomena too – we actually describe the mutual relations between the phenomena. The relations are expressed with the help of measurement units (SI system) so we don’t have to compare multiple phenomena to get an answer. We just use the units as a “measuring interface” between all the distinguishable phenomena.
Now where is the math?
When I was young engineers had books full of formulas. To calculate steam turbines, electric motors, mechanical constructions made of concrete, steel, wood, etc., etc. Nobody said that these formulas are mathematical formulas. Because we only need arithmetic to calculate the mutual relations, not math. Math is something else.
So what is math? The problem is that nobody knows the real answer. The ancient Greek philosophers had the opinion that our universe is a mathematical existence. That means that the primary properties of our universe are dynamical geometrical properties. An idea that shows some similarity with the general concept in modern Quantum Field Theory (there are no particles, there are only fields). Unfortunately, in physics we can only determine the mutual relations between the phenomena; we don't know "what's inside". So the ancient Greek philosophers may be right, but modern physics is still too limited to make convincing statements about the subject. And what is worse, in mathematics there is also the search for a unifying theory… For example, probability theory has no mathematical foundation; it is an empirical theory.
Studying theoretical physics does not have much to do with pure math itself; in physics we use math as "arithmetic" tools. However, in pure mathematical physics the situation is different. Mathematical physics uses mathematical models to "simulate" physical reality. In other words, they build up physical reality "from scratch", and the model is constructed with the help of "facts" that originate from physics, mathematics and philosophy. Because before we can construct a model we have to create some kind of conceptual framework.
An example... There is a nice video about mathematical physics, the Causal Dynamical Triangulations approach (https://www.youtube.com/watch?v=PRyo_ee2r0U). They use the Planck units and Einstein's curved spacetime (therefore the math is directly related to the Ricci scalar curvature in Riemannian geometry, because they have to implement the universal scalar field, the Higgs field, in the model). So if this is the way you want to do physics, try to become a mathematician too.
With kind regards, Sydney
• asked a question related to Mathematics
Question
Dear professors and students, greetings and courtesy. I wanted to know whether the real numbers form the largest and last set of numbers that exists, or whether there are sets of numbers larger than it that perhaps have not been discovered yet. Which is true? If it is the last set of numbers that exists, what theorem proves the non-existence of a larger set of numbers? And if there is a larger set, then, in terms of the history of mathematics, by solving which mathematical problem was it proved that the answer obtained is not closed with respect to the set of complex numbers and belongs to a larger set? Thank you very much.
First, what do you mean by a number? If you mean a set of things that includes, say, the rational numbers and extends the addition and multiplication operations in a way that is consistent with the usual rules (e.g., associativity: a+(b+c)=(a+b)+c and a*(b*c)=(a*b)*c; distributivity: a*(b+c)=a*b+a*c; zeros: 0+a=a, 0*a=0; commutativity: a+b=b+a, a*b=b*a; unit/identity: 1*a=a; inverses: every a has a b=-a where a+b=0, and every nonzero a has a b=1/a where a*b=1), then we can make a claim that the complex numbers are the biggest class of "numbers".
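The axioms listed above can be spot-checked numerically. A minimal sketch (not part of the original answer) using Python's built-in complex type; the sample values are chosen as dyadic rationals so the checks are exact in floating point, except the multiplicative inverse, which is checked up to rounding:

```python
# Spot-check the listed field axioms for complex numbers.
a, b, c = complex(2, 3), complex(-1, 4), complex(0.5, -2)

assert a + (b + c) == (a + b) + c          # associativity of +
assert a * (b * c) == (a * b) * c          # associativity of *
assert a * (b + c) == a * b + a * c        # distributivity
assert a + b == b + a and a * b == b * a   # commutativity
assert 0 + a == a and 1 * a == a           # identities
assert a + (-a) == 0                       # additive inverse
assert abs(a * (1 / a) - 1) < 1e-12        # multiplicative inverse (up to rounding)
```

Of course, a finite check is only an illustration; the point of the answer is that these laws hold for all complex numbers, and that no strictly larger "number system" keeps all of them.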
Actually, you can go further by constructing a non-Archimedean set of "real numbers" which extends real numbers and is still an ordered field, and then have an extended set of complex numbers of the form a+i.b where a, b are these extended real numbers.
Non-Archimedean means that there are "numbers" a, b > 0 where a/b is larger than any whole number 1, 2, 3, ...
But if you keep the regular real numbers and are instead willing to give up something else, such as commutativity of multiplication (a*b=b*a), then there are the quaternions, discovered/invented by William Rowan Hamilton in the 19th century. These have the form a+b.i+c.j+d.k, where a, b, c, d are real numbers. The essential properties of the symbols i, j, k are that i*i = j*j = k*k = -1, and i*j = -j*i = k, j*k = -k*j = i, and k*i = -i*k = j. Quaternions form a division ring.
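Hamilton's rules above determine the full product of two quaternions. A small illustrative sketch (my own, not from the answer), representing a quaternion a+b.i+c.j+d.k as a tuple (a, b, c, d) and expanding the product by the stated rules:

```python
def qmul(p, q):
    """Hamilton product of two quaternions given as (a, b, c, d) tuples."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,   # real part
            a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i component
            a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j component
            a1*d2 + b1*c2 - c1*b2 + d1*a2)   # k component

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

assert qmul(i, i) == (-1, 0, 0, 0)   # i*i = -1
assert qmul(i, j) == k               # i*j = k
assert qmul(j, i) == (0, 0, 0, -1)   # j*i = -k: multiplication is not commutative
```

The last two assertions make the loss of commutativity concrete: i*j and j*i differ by a sign.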
• asked a question related to Mathematics
Question
As the concept arises from the Bernoulli numbers and different branches of mathematics, I have recently considered the importance of introducing the concept of 'the unity of mathematics' within the context of the Bernoulli numbers and some special series of hard convergence (the Flint Hills and Cookson Hills series). I believe a balanced relationship can be defined between the Bernoulli numbers and these series, and I am pointing out this potential link.
For a general conclusion about what I consider the concept of 'unity' relating the Bernoulli numbers and the Flint Hills series should be, see this screenshot:
DOI: 10.13140/RG.2.2.16745.98402
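For readers unfamiliar with the series mentioned in the question: the Flint Hills series is the sum of 1/(n^3 sin^2 n) over n = 1, 2, 3, ..., and whether it converges is an open problem (it depends on how closely integers can approach multiples of pi). A small sketch, mine rather than the questioner's, for computing partial sums:

```python
import math

def flint_hills_partial(N):
    """Partial sum of the Flint Hills series, sum_{n=1}^{N} 1/(n^3 * sin(n)^2).
    Convergence of the full series is an open problem."""
    return sum(1.0 / (n**3 * math.sin(n)**2) for n in range(1, N + 1))

s100 = flint_hills_partial(100)
s1000 = flint_hills_partial(1000)
```

Since every term is positive, the partial sums increase monotonically; the open question is whether rare integers n lying extremely close to a multiple of pi make individual terms large enough to destroy convergence.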
I don't see very well how you could achieve the "unity of mathematics" based on series/sequences alone. For example, geometry would remain apart. You must pursue some more modest goal.
• asked a question related to Mathematics
Question
Which software is best for making high-quality graphs? Origin or Excel? Thank you
Origin is better for professional use.
• asked a question related to Mathematics
Question
Mathematics Teacher Educators (MTEs) best practices.
I'm interested in research literature about Mathematics Teacher Educators' (MTEs') best practices, especially MTEs' practices for teaching problem solving.
thank you.
Problem-solving techniques and thinking-aloud strategies are helpful for mathematics teaching.
• asked a question related to Mathematics
Question
Good day, Dear Colleagues!
Anyone interested in discussing this topic?
The tandem of Mathematics and Computer Science is vital for the development of AI. Mathematics provides the theoretical foundation, while Computer Science provides the practical tools to implement AI algorithms. By combining these two disciplines, researchers and practitioners can create AI systems that can perform increasingly complex tasks, and help solve some of the most challenging problems facing our society.
• asked a question related to Mathematics
Question
How can I define histogram bins with a well-defined mathematical expression, derived from the data points x_i, i=1,...,n, the range, or any other well-defined measures of the dataset?
Dear Dr Babura,
I think that you mean how Sturges came up with his rule. Part of your question is explained in the following link.
Please let me know if this is what you are looking for.
Best wishes.
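The rule mentioned above can be stated explicitly: Sturges' rule sets the number of bins to k = 1 + ceil(log2(n)) for n data points, and the bin width then follows from the range. A minimal sketch of this (the function name and the sample data are my own, for illustration):

```python
import math

def sturges_bins(data):
    """Number of histogram bins by Sturges' rule, k = 1 + ceil(log2(n)),
    and the resulting uniform bin width (max - min) / k."""
    n = len(data)
    k = 1 + math.ceil(math.log2(n))
    width = (max(data) - min(data)) / k
    return k, width

data = list(range(100))           # 100 sample points spanning [0, 99]
k, width = sturges_bins(data)     # k = 1 + ceil(log2(100)) = 8, width = 99/8
```

Note that Sturges' rule is derived under an assumption of roughly normal data; other well-defined rules (e.g. those based on the interquartile range) may suit skewed datasets better.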
• asked a question related to Mathematics
Question
Kindly share with me any details of Scopus indexed Mathematics conferences in India.
See attached file
• asked a question related to Mathematics
Question
Physics continues a tradition of assessment in graduate programs based on final exams in the form of mathematized exercises, with no conceptual questions or essays.
This fulfils the aim of mastering the demanding nomenclature of the domain. Given the slow progress in the field over recent decades this might be a good alternative, but there are also pedagogical reasons against it.
This form of assessment is extreme and outdated. It has further disadvantages:
** Students do not develop critical research skills such as literature analysis and research.
** Certain skills needed by a future researcher are not tested, e.g. the ability to combine research from different sources, to think critically about competing theses or theories, and to discern gaps in current research.
** A mixed approach should ensure all aims.