
# Mathematics - Science topic

Mathematics, Pure and Applied Math
Questions related to Mathematics
• asked a question related to Mathematics
Question
Is it logical to assume that the probability created by nature produces symmetry?
And if this is true, is anti-symmetry just a mathematical tool that can be misleading in specific situations?
Well, given a new (and very recent) research finding
"that specific circular ribonucleic acids (RNAs) can stick to DNA in cells and cause mutations that result in cancer." - a quote from:
how should this finding be compared with a statement like "nature itself operates ... and produces a kind of symmetry"? What symmetry is seen here when the result is cancer mutations - technically arising from, and the result of, randomness in nature?
• asked a question related to Mathematics
Question
Mathematical Generalities: ‘Number’ may be termed as a general term, but real numbers, a sub-set of numbers, is sub-general. Clearly, it is a quality: “having one member, having two members, etc.”; and here one, two, etc., when taken as nominatives, lose their significance, and are based primarily only on the adjectival use. Hence the justification for the adjectival (qualitative) primacy of numbers as universals. While defining one kind of ‘general’ another sort of ‘general’ may naturally be involved in the definition, insofar as they pertain to an existent process and not when otherwise.
Why are numbers and shapes so exact? ‘One’, ‘two’, ‘point’, ‘line’, etc. are all exact. The operations on these notions are also intended to be exact. But irrational numbers are not so exact in measurement. If notions like ‘one’, ‘two’, ‘point’, ‘line’, etc. are defined to be so exact, then it is not by virtue of the exactness of these substantive notions, but instead, due to their being defined as exact. Their adjectival natures: ‘being a unity’, ‘being two unities’, ‘being a non-extended shape’, etc., are not so exact.
A quality cannot be exact, but may be defined to be exact. It is in terms of the exactness attributed to these notions by definition that the adjectives ‘one’, ‘two’, ‘point’, ‘line’, etc. are exact. This is why the impossibility of fixing these (and other) substantive notions as exact misses our attention. If in fact these are inexact, then there is justification for the inexactness of irrational, transcendental, and other numbers too.
If numbers and shapes are in fact inexact, then not only irrational numbers, transcendental numbers, etc., but all exact numbers and the mathematical structures should remain inexact if they have not been defined as exact. And what if, behind the exact definitions of exact numbers, there are no exact universals, i.e., quantitative qualities? If the formation of numbers is by reference to experience (i.e., not from the absolute vacuum of non-experience), their formation is with respect to the quantitatively qualitative and thus inexact ontological universals of oneness, two-ness, point, line, etc.
Thus, mathematical structures, in all their detail, are a species of qualities, namely, quantitative qualities, defined to be exact and not naturally exact. Quantitative qualities are ontological universals, with their own connotative and denotative versions.
Natural numbers, therefore, are the origin of primitive mathematical experience, although complex numbers may be more general than all others in a purely mathematical manner of definition.
Alun Wyn-Jones, this is wonderful. I can experience the relevance of the contrast you brought in. Congrats!
I add to it Hugh Everett III's conclusion when applying QM to cosmology, where he says that it is better to accept it because the mathematics says so. I have not cited the exact words.
Here he exhibits a sort of blind adoration of mathematics. My attempt nowadays has also been to find out where, in mathematical applications, the math can be trusted as a sufficient or even the best guidepost, and where it must be limited. But then the question arises whether we can suggest some areas where the math should not be trusted!
The bafflement continues....
• asked a question related to Mathematics
Question
The view that the mathematical training of physicists, even leading ones, is insufficient keeps being confirmed. V. A. Fock and N. N. Bogolyubov made precisely such complaints about Landau. Mathematics does not reduce to a set of formulas and the solving of equations. A formulation of the principle of equivalence between a gravitational field and an accelerated frame of reference, based on the standard mathematical epsilon-delta method, is proposed (draft).
Another article
Sorry to hear this is happening in Russia. Education in the West is in free fall. It's happening in the West because academics are no longer a goal. Why is it happening there?
• asked a question related to Mathematics
Question
In mathematics, many authors work in the area of integer sequences: Fibonacci polynomials, the Perrin sequence, and so on.
What are the current research topics in this subject?
I would appreciate suggestions for research topics related to the Fibonacci sequence.
The general theory may be in automorphic forms.
There are subclasses of matrices which, multiplied together, are both commutative and form-conserving. For example,

[ x  -y ]
[ y   x ]

is isomorphic to the complex numbers. The matrix

[ x  0 ]
[ y  x ]

if recursively multiplied with itself, exhibits the derivative of x^N on the off-diagonal as the multiplier of y; it is isomorphic to the ring of dual numbers. Certain interesting identities are obtained from the form conservation when multiplying these matrices, with different indices, together: the determinant of the result is the product of the determinants (and a product of eigenvalues). In some sense the numbers are the eigenvalues, for example (x+iy) and (x-iy) in the complex case.
Starting from certain values one may find a finite-step repetition of values, giving finite matrix groups - for example, substitutions in linear fractional transformations representable as matrices. Either that, or very strange fractal forms.
But it is better to ask mathematicians to go beyond this on questions of interest in the theory.
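As a quick sanity check on the claims above, here is a minimal NumPy sketch (variable and function names are mine) showing that the first matrix form multiplies exactly like complex numbers, that determinants multiply, and that repeated multiplication of the second (dual-number) form produces the derivative of x^N as the multiplier of y:

```python
import numpy as np

def complex_matrix(x, y):
    # 2x2 real matrix representing the complex number x + iy
    return np.array([[x, -y],
                     [y,  x]], float)

def dual_matrix(x, y):
    # 2x2 real matrix representing the dual number x + y*eps (eps^2 = 0)
    return np.array([[x, 0.0],
                     [y,  x]], float)

# The complex form is commutative and form-conserving:
A = complex_matrix(1.0, 2.0)   # represents 1 + 2i
B = complex_matrix(3.0, -1.0)  # represents 3 - i
AB = A @ B
z = (1 + 2j) * (3 - 1j)        # = 5 + 5j
assert np.allclose(AB, complex_matrix(z.real, z.imag))
assert np.allclose(A @ B, B @ A)
# Determinant of the product is the product of determinants (|z|^2 = x^2 + y^2):
assert np.isclose(np.linalg.det(AB), np.linalg.det(A) * np.linalg.det(B))

# Repeated multiplication of the dual form: (x + y*eps)^N = x^N + N*x^(N-1)*y*eps,
# so the derivative of x^N appears on the off-diagonal as the coefficient of y.
x, y, N = 2.0, 1.0, 5
M = np.linalg.matrix_power(dual_matrix(x, y), N)
assert np.isclose(M[0, 0], x**N)               # x^5 = 32
assert np.isclose(M[1, 0], N * x**(N - 1) * y)  # 5*x^4 = 80
```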
• asked a question related to Mathematics
Question
From Newton's Metaphysics to Einstein's Theology!
The crisis in modern theoretical physics and cosmology has its roots in its use, together with theology, as a ruling-class tool since medieval Europe. The Copernican revolution overthrowing the geocentric cosmology of theology led to unprecedented social and scientific developments in history. But Isaac Newton’s mathematical-idealism-based and one-sided theory of universal gravitational attraction, in essence, restored the idealist geocentric cosmology, undermining the Copernican revolution. Albert Einstein’s theories of relativity, proposed since the turn of the 20th century, reinforced Newtonian mathematical idealism in modern theoretical physics and cosmology, exacerbating the crisis and hampering further progress. Moreover, the recognition of the quantum world - a fundamentally unintuitive new realm of objective reality, which is in conflict with the prevailing causality-based epistemology - requires a rethink of the philosophical foundation of theoretical physics and cosmology in particular and of natural science in general.
I am in support of Abdul Malek. His scientific views are sound with respect to natural dialectics and physics.
Yet if anyone believes that the earth rotates, surely he will hold that its motion is natural, not violent.
Nicolaus Copernicus
• asked a question related to Mathematics
Question
Should we calculate it by an experimental test on the target organism, or should we find it mathematically?
co-toxicity factor = (O − E) × 100 / E
where
O is the observed % mortality of the combined plant extracts, and
E is the expected % mortality.
In the context of the co-toxicity factor formula, the term "expected mortality" refers to the predicted or estimated mortality rate of an organism under the combined effects of multiple toxic substances. The co-toxicity factor formula is used to assess the combined toxicity of different substances on an organism, taking into account their toxicities.
To calculate the expected mortality using the co-toxicity factor formula, you typically follow these steps:
1. Determine the individual toxicity values: Obtain the toxicity values or toxicological data for each of the substances of interest. This could be in the form of lethal concentration (LC50) or lethal dose (LD50) values, which represent the concentration or dose at which 50% mortality is expected.
2. Calculate the co-toxicity factor: Calculate the co-toxicity factor for each substance by dividing the concentration or dose of the substance by its individual toxicity value. This step involves normalizing the concentration or dose of each substance concerning its toxicity.
3. Calculate the expected mortality: Sum up the co-toxicity factors for all the substances. The resulting value represents the expected mortality of the organism under the combined effects of the substances.
It's important to note that the co-toxicity factor formula is a simplified approach to assess combined toxicity and may not account for all possible interactions between substances. The formula assumes an additive or independent effect of the substances. In reality, interactions between substances can be more complex, including synergistic (enhanced) or antagonistic (reduced) effects.
Furthermore, the specific formula or equation used for calculating the co-toxicity factor may vary depending on the context, study design, and toxicological data available. It is essential to consult relevant literature, regulatory guidelines, or expert advice to ensure the appropriate use of the co-toxicity factor formula and interpretation of the results in your specific research or assessment.
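A minimal sketch of the calculation, assuming the common additive (Abbott/Colby-style) formula for the expected mortality of a two-extract mixture; the exact expectation formula depends on your study design, and the mortality values below are purely hypothetical:

```python
def expected_mortality(m_a, m_b):
    """Expected % mortality of a combination under independent (additive)
    action - an Abbott/Colby-style formula (assumption; other designs
    use other expectation formulas)."""
    return m_a + m_b * (1.0 - m_a / 100.0)

def co_toxicity_factor(observed, expected):
    """Co-toxicity factor = (O - E) * 100 / E, as given in the question."""
    return (observed - expected) * 100.0 / expected

# Hypothetical example: extract A alone kills 30%, extract B alone kills 20%.
E = expected_mortality(30.0, 20.0)   # 30 + 20*(1 - 0.3) = 44.0
O = 55.0                             # observed % mortality of the combination
f = co_toxicity_factor(O, E)         # (55 - 44)*100/44 = 25.0
assert abs(E - 44.0) < 1e-9
assert abs(f - 25.0) < 1e-9
# A clearly positive factor is often read as synergism and a clearly
# negative one as antagonism, though the cut-off convention varies.
```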
• asked a question related to Mathematics
Question
Hello everyone,
I would like to know the relation between the temperature of the glass (viscosity) and the radius of the glass fibre.
Thank you.
The relation between the glass temperature and the diameter of the glass fibre depends on several factors, notably the composition of the glass and the specific manufacturing process. There is no single general mathematical formula that can describe this relation for all types of glass.
However, for some types of glass fibre, a commonly used approximation is the linear thermal expansion formula. It relates the change in diameter of a material to its change in temperature. The general formula is:
ΔD = α · D · ΔT
where:
• ΔD is the change in diameter of the glass fibre,
• α is the linear thermal expansion coefficient of the glass,
• D is the initial diameter of the glass fibre, and
• ΔT is the change in temperature.
The linear thermal expansion coefficient (α) is a property specific to the glass-fibre material and can vary with its composition. It is generally expressed in units of 1/°C or 1/K.
It is important to note that this formula is an approximation and may not be exact for all types of glass or over all temperature ranges. For precise results, it is recommended to consult the technical specifications of the specific glass-fibre material you are using, or to refer to the data provided by the manufacturer.
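A minimal numerical sketch of ΔD = α · D · ΔT; the values of α, D, and ΔT below are illustrative assumptions only, not data for any particular glass:

```python
def diameter_change(alpha, d_initial, delta_t):
    # Delta_D = alpha * D * Delta_T  (linear thermal expansion approximation)
    return alpha * d_initial * delta_t

# Illustrative values only: alpha for a silicate glass is typically on the
# order of 5e-6 to 9e-6 per kelvin; consult the manufacturer's data sheet.
alpha = 8.5e-6        # linear expansion coefficient, 1/K (assumed)
d0 = 125e-6           # initial fibre diameter in metres (assumed)
dT = 300.0            # temperature rise in K (assumed)
dd = diameter_change(alpha, d0, dT)   # ~3.19e-7 m, i.e. a fraction of a micron
assert abs(dd - 8.5e-6 * 125e-6 * 300.0) < 1e-15
```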
• asked a question related to Mathematics
Question
- What is the relationship between the scientific understanding of the world and the reality of nature? It may be said that the real world is much richer in structure than the results of the physical and mathematical models that have been developed for it. In these models there are only one or a few limited angles of view on the natural phenomenon in question; inventing a complete theory, whose results are correct from any angle, may be the dream theory of a "Theory of Everything"!
- Is there an unknown form of mathematics that has not yet been found to solve all the problems of a theory of everything?
- Is it necessary to change the conceptual view of physicists on the subject of the theory of everything? So that this new look can include new concepts for problem solving?
- Is there a mathematical system, which has a distinct ability to represent the maximum possible states of the world!?
- Is it possible to imagine that the world is like a carpet that has infinite texture, but its colors and roles are determined by scientists with their theories about the world?! And are we looking for the most realistic pattern and design for the world's carpet?
The relationship between scientific understanding and the reality of nature is a complex and ongoing philosophical debate. Science aims to provide explanations and models that describe and predict the behavior of the natural world. However, it is important to recognize that scientific models are simplifications and abstractions of reality, and they can never fully capture the intricacies and complexities of the real world.
1. Unknown Form of Mathematics: It is possible that there may be undiscovered mathematical frameworks or tools that could help in developing a comprehensive theory of everything. The search for such frameworks is an active area of research in theoretical physics and mathematics. However, it is challenging to predict what form this new mathematics might take or whether it exists at all.
2. Changing Conceptual View: Scientists continually refine their conceptual frameworks and theories as new evidence and insights emerge. The quest for a theory of everything may require physicists to adopt new conceptual viewpoints and frameworks to address the outstanding challenges. This flexibility allows for the incorporation of new concepts and approaches in problem-solving.
3. Mathematical System Representing Possible States: Mathematics is a powerful tool for representing and describing the natural world, but it is not clear if there exists a single mathematical system that can encompass all possible states of the world. Different branches of mathematics are often used to describe specific phenomena or aspects of reality. The search for a comprehensive mathematical framework that can represent all possible states of the world is an ongoing pursuit.
4. The world as a Carpet: The analogy of the world is like a carpet with scientists searching for the most realistic pattern and design can be seen as a metaphorical representation of the scientific endeavor. Scientists develop theories and models that attempt to capture the underlying patterns and principles governing the natural world. However, it is important to remember that scientific theories are human constructs that are constantly refined and updated as our understanding deepens.
In summary, the relationship between scientific understanding and the reality of nature is complex and evolving. Scientific models and theories aim to capture aspects of reality, but they are limited abstractions. The search for a theory of everything requires ongoing exploration, including potential changes in conceptual viewpoints, the possibility of undiscovered mathematical frameworks, and the continuous refinement of scientific models to approach a more complete understanding of the world.
• asked a question related to Mathematics
Question
Can third-grade teachers' responses help a researcher measure students' creativity? The students are aged 8-11 years.
It is possible for a mathematics class teacher to report on the creativity of students in third grade. However, it is important to clarify what is meant by "creativity" and to use a valid and reliable measure to assess it. In general, creativity can be defined as the ability to generate novel and useful ideas, and it can be assessed through a variety of measures such as divergent thinking tasks, creative problem-solving tasks, and self-assessment scales. It is also important to consider the limitations of relying solely on teacher reports to assess creativity, as teachers may have biases and may not always have a complete picture of a student's creative abilities. Therefore, a more comprehensive approach that includes multiple sources of information would be preferable.
• asked a question related to Mathematics
Question
Our department is offering an elective on Fluid Mechanics in daily life. The course is supposed to be more of a physical treatment of the fluid phenomena rather than mathematical. I would like some recommendations for books on the subject which are light and speak about fluid physics from a physical and application based perspective
Filippo Maria Denaro wrote, "I don’t think that any fluid dynamics course can be done without some fundamental math.." I completely agree. Among the many books on Fluid Mechanics, I find that the one that best embraces the physical point of view of the discipline is that of Guyon, E., Hulin, J. P., Petit, L., & de Gennes, P. G. (2001). Physical hydrodynamics, EDP sciences, whose first edition (2001) was prefaced by Pierre Gilles de Gennes (Nobel Prize for Physics) who did not lack praise for the particularity of the integration of physics in the very conception of the book. The book is in its third edition and has more than 750 citations.
Guyon, E., Hulin, J. P., & Petit, L. (2021). Hydrodynamique physique. EDP sciences.
Even if the book is in French, I think it would be interesting to consult it, if only to get an idea of its construction.
• asked a question related to Mathematics
Question
This question is dedicated only to sharing important research of OTHER RESEARCHERS (not our own) about complex systems, self-organization, emergence, self-repair, self-assembly, and other exciting phenomena observed in complex systems.
Please keep in mind that each contribution should promote complex systems and help others to understand them in the context of any scientific field. We can educate each other in this way.
Experiments, simulations, and theoretical results are equally important.
Links to videos and animations will help everyone to understand the given phenomenon under study quickly and efficiently.
Viscoelastic microfluidics: progress and challenges.
Zhou, J. and Papautsky, I.
Microsyst Nanoeng 6, 113 (2020).
Abstract:
The manipulation of cells and particles suspended in viscoelastic fluids in microchannels has drawn increasing attention, in part due to the ability for single-stream three-dimensional focusing in simple channel geometries. Improvement in the understanding of non-Newtonian effects on particle dynamics has led to expanding exploration of focusing and sorting particles and cells using viscoelastic microfluidics. Multiple factors, such as the driving forces arising from fluid elasticity and inertia, the effect of fluid rheology, the physical properties of particles and cells, and channel geometry, actively interact and compete together to govern the intricate migration behavior of particles and cells in microchannels. Here, we review the viscoelastic fluid physics and the hydrodynamic forces in such flows and identify three pairs of competing forces/effects that collectively govern viscoelastic migration. We discuss migration dynamics, focusing positions, numerical simulations, and recent progress in viscoelastic microfluidic applications as well as the remaining challenges. Finally, we hope that an improved understanding of viscoelastic flows in microfluidics can lead to increased sophistication of microfluidic platforms in clinical diagnostics and biomedical research.
###
Without a proper understanding of viscoelastic liquids, which are omnipresent in all biology, our biological description of living systems remains incomplete.
Just one example: the flow of blood through micro-vessels makes use of the viscoelastic properties of this liquid. Advances in this area can help us better understand the very mechanisms of blood clotting, organ infarctions, and many other health issues.
• asked a question related to Mathematics
Question
hello
How can I determine the tortuosity factor of a porous material with a simple mathematical formula?
What about using the fractal dimension (box-counting) as a measure? Here is a paper on tortuosity.
Regards,
Joachim
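A minimal sketch of the box-counting estimate suggested above, assuming a 2D binary image of the structure (NumPy array); the sanity check uses a filled square, whose box-counting dimension should come out close to 2:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a 2D binary mask:
    count occupied boxes N(s) at each box size s, then fit the slope of
    log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # trim so the image tiles exactly into s x s boxes
        trimmed = mask[: h - h % s, : w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s,
                                trimmed.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled 2D region should give dimension ~2.
mask = np.ones((64, 64), dtype=bool)
d = box_counting_dimension(mask)
assert 1.9 < d < 2.1
```

For a real porous sample, `mask` would be the segmented pore phase of a micrograph or tomography slice; relating this dimension to a tortuosity factor still requires a model or calibration.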
• asked a question related to Mathematics
Question
I know that δ(f(x)) = ∑ᵢ δ(x − xᵢ)/|f′(xᵢ)|. What will the expression be if f is a function of two variables, i.e., δ(f(x,y)) = ?
K. Kassner You are right.
The method that I proposed may not work for all functions f(x,y), especially if they are continuous and do not have isolated zeros. In that case, one might try to separate the integrals or use a coordinate transformation as you suggested in your comment. For example, if we use polar coordinates (r,θ), then we have
δ(f(r,θ)) = (1/r) δ(r − r(θ))
where r(θ) is the zero of f(r,θ) as a function of r for fixed θ. This can be seen by using the Jacobian of the transformation and the property of the delta function.
• asked a question related to Mathematics
Question
a) Are there any hard-and-fast rules for review deadlines (yes or no)? And b) is there any obligation on the Editorial Board's side to give authors a first answer with the internal number of an article, if the article is submitted via email (yes or no)?
Thanks for the input. https://clarivate.com/contact-us/
Thanks. I think in exactly the same way as you!
Sincerely,
Sergey
• asked a question related to Mathematics
Question
Why do prime numbers have such great importance in mathematics, relative to the rest of the numbers?
Prime numbers have great importance in mathematics due to their unique properties and their fundamental role in number theory. Here are a few reasons why prime numbers hold significance:
1. Building blocks of numbers: Every positive integer greater than 1 can be expressed as a product of prime numbers in a unique way, known as the prime factorization. This property is known as the fundamental theorem of arithmetic. It means that prime numbers are the building blocks of all other numbers, making them fundamental in understanding the structure of numbers.
2. Divisibility and factors: Prime numbers play a crucial role in determining divisibility and factors of a number. A prime number has only two distinct positive divisors: 1 and itself. This property makes prime numbers essential in understanding and analyzing the factors and divisors of any given number.
3. Cryptography: Prime numbers find extensive applications in modern cryptography algorithms. Public-key encryption systems like the RSA algorithm heavily rely on the difficulty of factoring large composite numbers into their prime factors. The security of such systems is rooted in the use of prime numbers.
4. Distribution of primes: The study of prime numbers involves analyzing their distribution throughout the number line. The prime number theorem, proven by mathematicians Jacques Hadamard and Charles Jean de la Vallée Poussin independently in 1896, provides an estimation of the number of primes below a given value. Investigating the distribution of primes leads to a deeper understanding of the overall structure of the number system.
5. Unsolved problems: Prime numbers are at the center of many unsolved problems in mathematics. The most famous among them is the Riemann Hypothesis, which deals with the distribution of prime numbers. Numerous other conjectures and problems related to primes continue to captivate mathematicians, highlighting their ongoing importance in the field.
6. Mathematical proofs: Prime numbers often serve as key elements in mathematical proofs. They can be used to establish important theorems and propositions in various branches of mathematics, including algebra, number theory, and geometry. Prime numbers provide a foundation for rigorous mathematical reasoning.
In summary, prime numbers hold immense importance in mathematics as they form the basis for number theory, cryptography, and many other mathematical concepts. Their unique properties and distribution patterns make them invaluable in understanding the structure of numbers and solving mathematical problems.
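As a small illustration of point 1 (the fundamental theorem of arithmetic), here is a minimal trial-division factorization sketch:

```python
def prime_factors(n):
    """Trial-division prime factorization, illustrating the fundamental
    theorem of arithmetic: every integer > 1 factors uniquely into primes
    (up to the order of the factors)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime as often as it occurs
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

assert prime_factors(360) == [2, 2, 2, 3, 3, 5]   # 360 = 2^3 * 3^2 * 5
assert prime_factors(97) == [97]                  # 97 is prime
```

Trial division is fine for small numbers; the difficulty of doing this for very large numbers is exactly what the cryptographic applications in point 3 rely on.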
• asked a question related to Mathematics
Question
The vehicle routing problem is a classical application of operations research.
I need to implement it as an electric vehicle routing problem with different constraints.
I want to understand the mathematics behind this. The available journals discuss different applications without saying much about the mathematics.
Any book, basic research paper, or PhD/M.Tech thesis would do the needful.
There is a vast literature on exact and heuristic approaches to vehicle routing problems. (You are looking at several thousand journal articles!)
If you are interested in exact approaches, then you need to be familiar with the following:
(i) The basics of integer programming, including the branch-and-bound method and cutting-plane methods.
(ii) The basics of computational complexity, including the concept of polynomial-time algorithms, pseudo-polynomial-time algorithms and NP-completeness.
(iii) Elementary graph theory (nodes, edges, arcs, and so on).
It also helps to know a bit about:
(iv) Dynamic programming.
(v) Lagrangian relaxation.
(vi) The branch-and-cut method, which combines branch-and-bound with strong cutting planes from polyhedral studies.
(vii) The branch-and-price method, which combines branch-and-bound with Dantzig-Wolfe decomposition and dynamic programming.
A good place to start is the book "The Vehicle Routing Problem", edited by Toth and Vigo. There is also "The Vehicle Routing Problem: Latest Advances and New Challenges", edited by Golden et al.
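For intuition only (this is a toy heuristic, not one of the exact approaches listed above), here is a minimal nearest-neighbour construction for a small capacitated instance; the instance data are entirely hypothetical:

```python
import math

def nearest_neighbour_routes(depot, customers, capacity, demand):
    """Toy capacitated-VRP heuristic: repeatedly drive to the nearest
    unvisited customer that still fits in the vehicle, returning to the
    depot when nothing more fits. Assumes every single demand <= capacity.
    Exact methods (branch-and-cut, branch-and-price) replace this greedy
    rule with optimization over an integer programming formulation."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    unvisited = set(customers)
    routes = []
    while unvisited:
        route, load, pos = [], 0.0, depot
        while True:
            feasible = [c for c in unvisited if load + demand[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: dist(pos, c))
            route.append(nxt)
            load += demand[nxt]
            unvisited.discard(nxt)
            pos = nxt
        routes.append(route)
    return routes

# Hypothetical instance: 4 customers, vehicle capacity 10.
depot = (0.0, 0.0)
customers = [(1.0, 0.0), (2.0, 0.0), (0.0, 5.0), (0.0, 6.0)]
demand = {customers[0]: 4, customers[1]: 4, customers[2]: 6, customers[3]: 6}
routes = nearest_neighbour_routes(depot, customers, 10, demand)
assert sum(len(r) for r in routes) == 4                       # all served
assert all(sum(demand[c] for c in r) <= 10 for r in routes)   # capacity held
```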
• asked a question related to Mathematics
Question
QM is the ultimate realist's utilization of powerful differential equations, because the integer options and necessities of its solutions correspond to nature's quanta.
The same can be said of GR, whose differential manifolds, an advanced concept or branch of mathematics, have a realistic implementation in nature-compatible motional geodesics.
One century later, no new such feats have been possible, making one wonder whether the limit of heuristic mathematical supplementation, in powerful ways, toward realist results in physics has been reached.
The short answer is No. And QM is much more than an application of the theory of differential equations, as is GR. The resolution of spacetime singularities-that are predicted by GR, isn't a mathematical issue, it's a physical issue. It will be the discovery of what are the appropriate physical degrees of freedom that will indicate what is the mathematical framework that is appropriate for describing them.
QM doesn't have any particular outstanding mathematical issues-it's quantum field theory that poses mathematical challenges.
• asked a question related to Mathematics
Question
Why are numbers and shapes so exact? ‘One’, ‘two’, ‘point’, ‘line’, etc. are all exact. But irrational numbers are not so. The operations on these notions are also intended to be exact. If notions like ‘one’, ‘two’, ‘point’, ‘line’, etc. are defined to be so exact, then it is not by virtue of the exactness of these substantive notions, but instead, due to their being adjectival: ‘being a unity’, ‘being two unities’, ‘being a non-extended shape’, etc. A quality cannot be exact, but may be defined to be exact. It is in terms of the exactness attributed to these notions by definition that the adjectives ‘one’, ‘two’, ‘point’, ‘line’, etc. are exact. This is why the impossibility of fixing these (and other) substantive notions as exact misses our attention. If in fact these are inexact, then there is justification for the inexactness of irrational numbers too. If numbers and shapes are in fact inexact, then not only irrational numbers but all mathematical structures should remain inexact except for their having been defined as exact. Thus, mathematical structures, in all their detail, are a species of qualities, namely, quantitative qualities.
I have revised the lead text (Mathematics and Causality: A Systemic Reconciliation). Now it has become 8 pages in A-4. See the link below.
I believe your questions may be answered here in my humble manner. Further questions and suggestions are most welcome. Thanks.
• asked a question related to Mathematics
Question
Applying mathematical knowledge in research models: this question has been on my mind for a long time. Can advanced and applied mathematics solve all the problems in modeling research? Especially for the formula derivations in the theoretical part of a model: can the analytical conclusions be obtained through multiple derivations or other methods? I have also read some mathematics-related publications myself, and one has to admire the mystery of mathematics.
We all know that mathematics includes reading, writing, and arithmetic, and that it begins with every action of our life; as such, it shapes our performance and image in every part of our life. Some years back I expressed my views in this area, which I submit herewith for your kind perusal.
In my early days, students interested in mathematics who scored full marks could do their mathematics work while listening to music or a song; before or during homework they had formed the habit of reading either a lesson or a topic of interest, and after carrying out their work they used to do justice to the subject of mathematics.
This is my personal opinion.
• asked a question related to Mathematics
Question
Which areas in mathematics education are currently trending?
As an AI language model, I can provide an overview of some current research trends in mathematics education up until my knowledge cutoff in September 2021. However, please note that the field of research is constantly evolving, and there may have been new developments since then. Here are a few prevalent research trends in mathematics education:
1. Technology Integration: Researchers are exploring the effective use of technology in mathematics education. This includes studying the impact of digital tools, interactive software, simulations, and online resources on teaching and learning mathematics. Additionally, there is a focus on designing technology-enhanced learning environments and investigating the role of technology in fostering conceptual understanding and problem-solving skills.
2. Problem-Solving and Mathematical Thinking: There is an emphasis on promoting problem-solving skills and mathematical thinking among students. Researchers are investigating instructional strategies and interventions that help students develop problem-solving abilities, reasoning skills, and a deep conceptual understanding of mathematical concepts. This includes exploring the use of open-ended problems, mathematical modeling, and real-world contexts to engage students in authentic mathematical experiences.
3. Learning Trajectories and Progressions: Research in this area focuses on understanding the developmental progression of mathematical concepts and skills. Learning trajectories provide a framework for mapping out the sequence of learning in different mathematical domains and identifying the key milestones along the way. By understanding how students progress through these trajectories, researchers aim to develop effective instructional approaches and interventions that cater to students' diverse learning needs.
4. Assessment and Feedback: There is ongoing research on developing innovative assessment methods and providing effective feedback in mathematics education. This includes investigating formative assessment strategies, computer-based assessments, and alternative approaches to evaluating mathematical competencies. Researchers are also exploring the role of feedback in enhancing students' learning and understanding of mathematics.
5. Equity and Access: Mathematics education research is increasingly focusing on issues of equity, diversity, and inclusion. Researchers are examining the factors that contribute to achievement gaps among different student populations and investigating strategies to promote equitable mathematics learning experiences. This includes exploring culturally responsive teaching practices, addressing stereotype threats, and promoting access to high-quality mathematics education for all students.
These research trends highlight some of the current areas of focus in mathematics education. However, it is essential to note that the field is dynamic, and new trends may have emerged since my knowledge cutoff. For the most up-to-date information, it is advisable to consult recent academic journals and conferences in the field of mathematics education.
• asked a question related to Mathematics
Question
As it is not possible to show mathematical expressions here I am attaching link to the question.
Your expertise in determining and comprehending the boundaries of integration within the Delta function's tantalizing grip will be treasured beyond measure.
Use δ(a − √(x²+y²)) = (a/√(a²−x²))·[δ(y − √(a²−x²)) + δ(y + √(a²−x²))] to do the integral over y. Then the integral over x remains, and its integration interval is [−a, a].
The general recipe is to transform the δ function δ(a-g(y)) into a sum of δ functions δ(y-yk), where yk are the zeros of g(y)-a. Each term acquires a denominator |g'(yk)| in the process.
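A quick numerical sanity check of that recipe (the values a = 2, x₀ = 1, the narrow-Gaussian delta approximation, and the grid are my own illustrative choices, not from the thread): the integral of δ(a − g(y)) over y should equal the sum of 1/|g′(y_k)| over the zeros.

```python
import numpy as np

# Check: ∫ δ(a - g(y)) dy = Σ_k 1/|g'(y_k)| for g(y) = sqrt(x0^2 + y^2),
# approximating the delta by a narrow Gaussian. a, x0 are illustrative.
def delta_eps(t, eps=1e-3):
    return np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

a, x0 = 2.0, 1.0
y = np.linspace(-5.0, 5.0, 2_000_001)
g = np.sqrt(x0**2 + y**2)

numeric = np.sum(delta_eps(a - g)) * (y[1] - y[0])   # Riemann sum

# zeros y_k = ±sqrt(a^2 - x0^2); |g'(y_k)| = |y_k|/a, so each zero contributes a/|y_k|
analytic = 2 * a / np.sqrt(a**2 - x0**2)

print(numeric, analytic)   # both ≈ 2.3094
```

The two factors of a/√(a²−x²) in the quoted identity are exactly the 1/|g′(y_k)| denominators of the general recipe.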
• asked a question related to Mathematics
Question
An attempt to extrapolate reality
Neither textbooks nor most secondary-school mathematics teachers manage to excite students. Some US textbooks are extraordinarily attractive, yet in many countries outside Europe the textbooks remain very conventional.
• asked a question related to Mathematics
Question
Our answer is a competitive YES. However, universities face the laissez-faire of old staff.
This reference must be included:
Gerck, E. “Algorithms for Quantum Computation: Derivatives of Discontinuous Functions.” Mathematics 2023, 11, 68. https://doi.org/10.3390/math1101006, 2023.
announcing quantum computing on a physical basis, deprecating infinitesimals, epsilon-deltas, continuity, limits, mathematical real numbers, imaginary numbers, and more, making calculus middle-school easy and with the same formulas.
Otherwise, difficulties and obsolescence follow. A hopeless scenario: no argument is possible against facts.
What is your qualified opinion? Must one self-study? A free PDF is currently available at my profile on RG.
Physics confirming math, or denying it. Time for colleges to catch up and be competitive.
• asked a question related to Mathematics
Question
Hello,
in an article I found the following sentence in the abstract:
"The results suggested that (1) Time 1 mathematics self-concept had significant effects on Time 2 mathematics school engagement at between-group and within-group levels; and (2) Time 2 mathematics school engagement played a partial mediating role between Time 1 mathematics self-concept and Time 2 mathematics achievement at the within-group level."
What is the meaning of the within-group-level and between-group-level in this context?
The article I am referring to is:
Xia, Z., Yang, F., Praschan, K., & Xu, Q. (2021). The formation and influence mechanism of mathematics self-concept of left-behind children in mainland China. Current Psychology, 40(11), 5567–5586. https://doi.org/10.1007/s12144-019-00495-4
Hi Max,
I quickly went through the paper, and I understand your confusion. In study 2, there are actually two within-components or, if you will, a three-level design (time within students within classes).
After having read through the paper, I am quite convinced that the authors say between-group when they mean between-classroom effects, e.g., average classroom math self-concept differing with respect to some predictor on the classroom level. By within-group effects, the authors accordingly refer to the individual (student) level. While this makes complete sense in Study 1, it is slightly confusing in Study 2, in my opinion, because of the longitudinal design.
Hope this helps!
• asked a question related to Mathematics
Question
An attempt to extrapolate reality
There are many reasons why students may have a negative attitude towards mathematics. Some possible reasons include:
Lack of confidence: Students may feel that they are not good at mathematics and may believe that they are incapable of doing well in the subject.
Difficulty with abstract concepts: Mathematics involves working with abstract concepts, which can be difficult for some students to understand.
Negative experiences: Students may have had negative experiences with mathematics in the past, such as poor grades or negative feedback from teachers or peers.
Lack of engagement: Some students may find mathematics boring or irrelevant to their lives, which can lead to disengagement and a negative attitude towards the subject.
Cultural stereotypes: Some students may hold negative cultural stereotypes about mathematics, such as the belief that it is a subject for boys or that only extremely intelligent people can excel in the subject.
Teacher quality: The quality of instruction in mathematics can vary widely, and students who have had poor teachers may develop a negative attitude towards the subject as a result.
Addressing these issues and finding ways to engage students and help them develop a more positive attitude towards the subject can be a challenge, but it is an important one for educators to tackle.
• asked a question related to Mathematics
Question
I'm using target encoding in my work, and I'd like to understand why it's effective from a mathematical point of view.
Intuitively, my understanding is that it allows you to encode the past with the future. I can see why that's effective, and also why it could cause target leakage. However, I can't find a good mathematical explanation for its effectiveness/ issues.
Does anyone know the answer, or have a link to a resource they'd be willing to share?
Target encoding is a technique used in machine learning to encode categorical variables as numerical values based on the target variable. The idea is to use the target variable to create a new feature for each unique category in the categorical variable, where the value of the feature is the average value of the target variable for that category.
From a mathematical point of view, target encoding can be effective because it can help capture the relationship between the categorical variable and the target variable. By encoding the categorical variable based on the target variable, the resulting numerical values can provide a more informative representation of the categorical variable, which can help improve the performance of the machine learning model.
One way to think about this is in terms of information theory. The target variable provides information about the relationship between the categorical variable and the target variable. By encoding the categorical variable based on the target variable, we are effectively incorporating this information into the feature representation. This can help improve the model's ability to learn patterns and make accurate predictions.
However, target encoding can also be prone to target leakage, where the encoded feature incorporates information from the target variable that would not be available at prediction time. This can lead to overfitting and poor generalization performance. To mitigate this issue, it is important to use proper cross-validation techniques and to ensure that the encoding is done using only information that would be available at prediction time.
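As a concrete sketch of the usual leakage mitigation described above (out-of-fold means plus a smoothing prior; the data, column names, and the smoothing constant m are invented for illustration):

```python
import pandas as pd

# Minimal sketch of mean target encoding with a smoothing prior and an
# out-of-fold scheme to limit target leakage. Data and names are illustrative.
df = pd.DataFrame({
    "city": ["a", "a", "a", "b", "b", "c", "c", "c", "c", "b"],
    "y":    [1,   0,   1,   0,   0,   1,   1,   0,   1,   0],
})

global_mean = df["y"].mean()
m = 5.0  # smoothing strength: pulls rare categories toward the global mean

def smoothed_means(train):
    stats = train.groupby("city")["y"].agg(["mean", "count"])
    return (stats["count"] * stats["mean"] + m * global_mean) / (stats["count"] + m)

# out-of-fold: encode each row using only the *other* folds' targets
df["city_te"] = 0.0
k = 2
for fold in range(k):
    val_idx = df.index[df.index % k == fold]
    enc = smoothed_means(df.drop(val_idx))
    df.loc[val_idx, "city_te"] = df.loc[val_idx, "city"].map(enc).fillna(global_mean)

print(df)
```

The out-of-fold split is what removes the row's own target from its encoding; the smoothing term keeps rare categories from memorizing a handful of labels.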
• asked a question related to Mathematics
Question
We assume that the Lagrange multipliers originally introduced in the Boltzmann-Einstein model to derive the Gaussian distribution are just a mathematical trick to compensate for the lack of true definition of probability in unified 4D space.
The derivation of the Boltzmann distribution for the energy distribution of identical but distinguishable classical particles can be obtained in a mathematical approach [1] or equivalently via a statistical approach [2] where the Lagrange multipliers are completely ignored.
1. R. Muller, "The Boltzmann factor: a simplified derivation."
2. I. Abbas, "Statistical integration."
However, there are still many people who claim that:
i- without the Lagrange multipliers, all economic theory, and some finance applications as well, would be in trouble in finite- or infinite-dimensional spaces;
ii- while the use of Lagrange multipliers may not be the only way to derive the Boltzmann distribution, it is a well-established and useful technique that should not be dismissed as a mere "trick";
iii- Lagrange multipliers are used in various fields, including physics, economics, and optimization, to optimize a function subject to a set of constraints, by introducing additional parameters (the multipliers) that allow the constraints to be incorporated into the objective function.
In the context of statistical mechanics, Lagrange multipliers are used to enforce the constraints on the total energy, volume, and number of particles in a system while maximizing the entropy. On this view, the multipliers are not just a "trick" that can be ignored: they are necessary to incorporate the constraints into the optimization problem and obtain the correct solution. Without them, the constraints would not be taken into account, and the resulting distribution would not accurately reflect the physical behavior of the system.
We assume that these fears will not materialize, because the constraints handled by Lagrange multipliers can be incorporated into an adequate statistical theory.
In brief, Lagrange multipliers are just a classic mathematical trick that we can do without.
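For reference, the textbook constrained-entropy derivation that the discussion turns on can be stated compactly (standard material, not specific to refs [1-3]):

```latex
\text{maximize } S = -\sum_i p_i \ln p_i
\quad\text{subject to}\quad \sum_i p_i = 1,\qquad \sum_i p_i E_i = \langle E\rangle .
\qquad
\mathcal{L} = -\sum_i p_i \ln p_i
  - \alpha\Big(\sum_i p_i - 1\Big)
  - \beta\Big(\sum_i p_i E_i - \langle E\rangle\Big),
\qquad
\frac{\partial\mathcal{L}}{\partial p_i}
  = -\ln p_i - 1 - \alpha - \beta E_i = 0
\;\Longrightarrow\;
p_i = \frac{e^{-\beta E_i}}{Z},\qquad Z = \sum_i e^{-\beta E_i}.
```

Whether the same distribution can be reached without the multipliers α, β is exactly the point at issue in the question above.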
Detailed answers to this question, and to related questions such as the numerical statistical solution of double and triple integration as well as time-dependent PDEs, are given in references [2,3], where the modern definition of probability in the transition matrix B interconnects the three topics. The probability of transition in unitary 4D space is incorporated into B-matrix statistical chains to numerically solve single, double, and triple (hypercube) integrals as well as time-dependent PDEs [2,3].
References:
1. R. Muller, "The Boltzmann factor: a simplified derivation."
2. I. M. Abbas, "How Nature Works in Four-Dimensional Space: The Untold Complex Story," ResearchGate, May 2023.
3. I. M. Abbas, "How Nature Works in Four-Dimensional Space: The Untold Complex Story," IJISRT review, May 2023.
• asked a question related to Mathematics
Question
Hello!
I am curious , can anyone guide me how we can calculate the amount of hydrogen is stored in the metal hydride during the absorption process both in %wt. an in grams and how much energy is released during absorption
Hello
To explain my answer, I propose what guided it.
It is difficult to know in which form the hydrogen is in the metal, that is to say whether it is in hydride, atomic, or molecular form. The next difficulty is to measure its concentration. I approached these questions by studying the adsorption of non-condensable gases on atomically clean metals (Ta and Al(111)) at very low H2 partial pressure, of the order of 10^-3 Pascal, and at room temperature.
While the residence time of the molecules should be extremely short, of the order of 10^-10 seconds or less, it is in fact much larger and depends on the polarity of the molecules and on the presence of defects, impurities, and dislocations. These defects create local variations of the crystal field and internal stresses, while the adsorbed molecules create image charges that modify the electronic and vibrational structure at the surface of the solid. This is verified by electron spectrometry and is expressed by the dielectric function.
Even at very low coverage, the adsorbed molecules become unstable due to the relaxation of internal stresses of entropic origin. This scheme parallels that of the effects of adsorbed charges on insulators; it is expressed by an equation of state and describes the surface-barrier deformation by defects and the resulting physical/chemical adsorption and ion-diffusion processes. On the basis of these elements, a hydrogen concentration of the order of ppm would not be measurable by weight but would be fatal to the cohesion. One can imagine several experiments to verify this scheme.
Have a nice day
Claude
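To make the original question concrete, here is a hedged back-of-envelope calculation. Everything in it is an illustrative assumption (LaNi5 absorbing to LaNi5H6, a formation enthalpy of about −30.8 kJ/mol H2, 100 g of alloy), not data from this thread:

```python
# Hedged back-of-envelope: hydrogen capacity and absorption heat for a metal hydride.
# Example system (LaNi5 -> LaNi5H6, dH ~ -30.8 kJ/mol H2) is an illustrative assumption.
M_H = 1.008                       # g/mol atomic hydrogen
M_LaNi5 = 138.905 + 5 * 58.693   # g/mol host alloy
n_H = 6                           # H atoms absorbed per formula unit

m_alloy = 100.0                   # g of alloy, illustrative
mol_alloy = m_alloy / M_LaNi5
m_H2_stored = mol_alloy * n_H * M_H            # grams of hydrogen stored

wt_pct = 100 * m_H2_stored / (m_alloy + m_H2_stored)

dH = -30.8e3                      # J per mol H2 (exothermic on absorption)
heat_released = -dH * mol_alloy * n_H / 2      # J; n_H/2 mol H2 per mol alloy

print(f"{m_H2_stored:.2f} g H, {wt_pct:.2f} wt%, {heat_released/1000:.1f} kJ released")
```

The same three steps (moles of alloy, grams of H from the H/M ratio, and moles of H2 times the formation enthalpy) apply to any hydride once its composition and ΔH are known.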
• asked a question related to Mathematics
Question
Mathematics and theoretical physics are currently searching for answers to this particular question and two other related questions that make up three of the most persistent questions:
i- Do probabilities and statistics belong to physics or mathematics?
ii- Followed by the related question, does nature operate in 3D geometry plus time as an external controller or more specifically, does it operate in the inseparable 4D unit space where time is woven?
iii- Lagrange multipliers: are they just a classic mathematical trick that we can do without?
We assume the answers to these questions are all interconnected, but how?
One could argue that energy subdivides between systems by scaling them with parameters, the same way Lagrange multipliers are used in variational problems. I just reread my latest article, where I did not do that, but instead used an entire gradient, the same way as entire time differentiation. It would result in the same traveling waves, with one more variable, probably, and need not be reduced to obtain the phenomenon?
• asked a question related to Mathematics
Question
I am trying to understand the mathematical relations that compute the residual entropy of methane in REFPROP. I have tried to replicate the graphs in "Entropy Scaling of Alkanes II" by Ian Bell using the mathematical expressions in the paper and have been unsuccessful so far.
The whole point of "residual entropy" (and residual enthalpy) is that it is the part that is *not* ideal gas, so using an ideal-gas relationship will provide no insight. Most textbooks provide an integral with respect to pressure, ∫(...)dP, which is useless because equations of state are never in this form. We use integration by parts to convert this to an integral with respect to specific volume. Put the equation of state into this equation and you have the residuals.
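As a minimal sketch of that recipe (deliberately using the simple van der Waals equation rather than the multiparameter REFPROP equation of state, with methane-ish critical constants as illustrative inputs): the volume integral gives A_res = −RT ln(1 − b/v) − a/v, and s_res = −(∂A_res/∂T)_v.

```python
import math

# Residual (departure) entropy at fixed T, v for the van der Waals EOS --
# a deliberately simple stand-in for the multiparameter EOS inside REFPROP.
R = 8.314462618  # J/(mol K)

def vdw_ab(Tc, pc):
    # critical-point relations for the vdW constants
    a = 27 * R**2 * Tc**2 / (64 * pc)
    b = R * Tc / (8 * pc)
    return a, b

def s_res_vdw(v, b):
    # s_res = -(dA_res/dT)_v with A_res = -R T ln(1 - b/v) - a/v (a drops out)
    return R * math.log(1 - b / v)

# methane-ish critical constants (illustrative): Tc = 190.6 K, pc = 4.60 MPa
a, b = vdw_ab(190.6, 4.60e6)
print(s_res_vdw(1e-3, b))   # J/(mol K), negative: the dense fluid is more ordered
```

Note that s_res here depends only on v (through b), which is a limitation of vdW; the multiparameter forms used in Bell's paper retain a full T, ρ dependence.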
• asked a question related to Mathematics
Question
Can someone please provide more insight on the mathematical formulation of a frequency constrained UC model, comprising of only synchronous sources, as well as non-synchronous sources. Also what would be the associated MILP code in GAMS?
A standard approach is as follows: formulate the objective function and constraints of the frequency-constrained unit commitment problem as a Mixed Integer Linear Program (MILP), solve it with a suitable solver, validate the solution, and implement the optimal commitment schedule in power-system operation. The main constraints include power balance, generator capacity, minimum up and down times, ramping, and frequency constraints. The solution obtained should satisfy all the constraints and be physically feasible.
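Since the thread gives no GAMS code, here is a deliberately tiny pure-Python sketch of the ingredients (brute-force enumeration instead of a MILP solver; all generator data, the demand, and the inertia floor are invented). It shows power balance, capacity limits, and a crude inertia-based frequency constraint interacting, with the non-synchronous unit contributing no inertia:

```python
from itertools import product

# Toy frequency-constrained unit commitment solved by brute-force enumeration.
# All numbers are invented for illustration; a real model would be a MILP
# (e.g. in GAMS) with up/down-time and ramping constraints as well.
gens = {          # name: (p_min MW, p_max MW, cost $/MWh, inertia constant H)
    "coal": (50.0, 150.0, 20.0, 5.0),
    "gas":  (20.0, 100.0, 35.0, 4.0),
    "wind": ( 0.0,  80.0,  5.0, 0.0),   # non-synchronous: no inertia
}
demand = 180.0
min_inertia = 4.0   # crude frequency constraint: capacity-weighted inertia floor

best = None
for status in product([0, 1], repeat=len(gens)):
    on = dict(zip(gens, status))
    cap = sum(gens[g][1] for g in gens if on[g])
    low = sum(gens[g][0] for g in gens if on[g])
    if not (low <= demand <= cap):
        continue  # power balance cannot be met
    inertia = sum(gens[g][3] * gens[g][1] for g in gens if on[g]) / cap
    if inertia < min_inertia:
        continue  # frequency (inertia) floor violated
    rest, cost = demand, 0.0
    for g in sorted((g for g in gens if on[g]), key=lambda g: gens[g][2]):
        p = min(gens[g][1], max(gens[g][0], rest))  # dispatch in merit order
        rest -= p
        cost += gens[g][2] * p
    if abs(rest) < 1e-9 and (best is None or cost < best[0]):
        best = (cost, on)

print(best)   # cheapest schedule satisfying balance + inertia floor
```

In the feasible least-cost schedule the wind unit stays off: committing it would dilute the capacity-weighted inertia below the floor, which is precisely the effect a frequency-constrained UC formulation captures.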
• asked a question related to Mathematics
Question
The error of building a physical world on intuitive feelings about fundamental concepts such as space and time occurred during Newton's creation of Newtonian mechanics. Of course, this mistake had to be made, so that man would not be deprived of the numerous gifts of technology resulting from this science! But when the world showed another face of itself at very small and very large scales, this theory, along with its error, could do nothing.
When Newton had those ideas about space and time (or perhaps he knew better and had no choice), he built a mathematical system for his thoughts: differential and integral calculus! The mathematics resulting from his thoughts was a systematic continuation of them, with the same assumptions about space and time. That mathematics could not show him the right way to know the real world, because the world was not Newtonian! Today, many pages in modern physics are created based on new assumptions about space, time, and other seemingly obvious variables!
Now, why do we think that these pages of current mathematics necessarily lead to correct knowledge of the world? Can we finally identify the world, as it is, by adopting appropriate and correct assumptions?
Good question. I suggest that the background structure for science research, which includes quantum mechanics, is consciousness. Yet consciousness is deleted from mainstream science; hence, for such research, there is no background structure. Two papers in the journal Communicative and Integrative Biology discuss this situation: "Omni-local consciousness" and "The two principles that shape scientific research."
• asked a question related to Mathematics
Question
Apart from the mathematical systems that agree with human feelings and perceptual sensors, there are countless mathematical systems that do not agree with these sensors and our sensory data! A question arises: are the worlds that these mathematical systems evoke real? If so, there are countless worlds that can be realized, each with its respective physics. Can multiple universes be concluded from this point of view?
Don't we see that only one of these possible worlds is felt by our body?! Why? Have we created mathematics to suit our feelings in the beginning?! And now, in modern physics and the maturation of our powers of understanding, we have created mathematical systems that fit our dreams about the world!? Which of these mathematical devices is actually true about the world and has been realized?! If all of them have come true! So there is no single and objective world and everyone experiences their own world! If only one of these mathematical systems has been realized, how is this system the best?!
If the worlds created by these countless mathematical systems are not real, why do they exist in the human mind?!
The last question is, does the tangibleness of some of these mathematical systems for human senses, and the intangibleness of most of them, indicate the separation of the observable and hidden worlds?!
Dear Seyed Mohammad Mousavi, you are right to question this. Unfortunately, we have been accustomed for centuries to using this man-made, one-dimensional, static mathematics and bending it onto a nature that is changing constantly in three dimensions. For the same reason, none of our equations has ever fit perfectly: Newton's gravity, Einstein's, Maxwell's, Lorentz's... all misguided and not working. My theories on the universe address this phenomenon; read my articles.
Regards
• asked a question related to Mathematics
Question
Recently I discussed this topic with a tautologist researcher, a follower of Quine. The denial of the capacity of deductive logic to generate new knowledge implies that deductive results in mathematics won't really increase our knowledge.
The tautological nature of deduction seems to lead to this conclusion. In my opinion, some sort of logical omniscience is involved in that position.
So the questions would be:
• Is the set of theorems that follow logically from a set A of axioms, "implicit" knowledge? if so, what would be the proper difference between "implicit" and "explicit" knowledge?
• If we embrace the idea that no new knowledge comes from deduction, what is the precise meaning of "new" in this context?
• How do you avoid the problem of logic omniscience?
In my case I find the use of the term 'implicit' very problematic. In my opinion it is a case of language abuse, because the term 'implicit' is normally used to refer to things that are known but not said. For example, if I say "buy a car", it is implicit that you will buy it with money. I don't have to say "buy a car with money", because that is already known and need not be made explicit. So the original meaning of 'implicit' is an implied idea that does not need to be said.
However, used in the way of Lakatos, we would have to say that the abc conjecture is implicit in the ZFC axioms, not because it is known at all, but because its truth or falsehood is a possible conclusion of a deduction. In my opinion this is nonsense, and it introduces the questionable idea that un-deduced conclusions are already known, in some way, before the deduction is actually performed.
Therefore I do not think that all logically implied conclusions fit the concept of the implicit.
• asked a question related to Mathematics
Question
Are there certain methods, for instance T-tests or ANOVAs, for certain ways a survey question is asked?
There are several mathematical and statistical methods used to quantify and discuss survey questions. Here are some of the most common ones:
1. Descriptive statistics: Descriptive statistics can be used to summarize the data from survey questions. For example, you can calculate measures of central tendency (such as the mean, median, and mode) and measures of variability (such as the range, standard deviation, and variance) to describe the distribution of responses.
2. Cross-tabulation: Cross-tabulation (also known as contingency tables or pivot tables) can be used to examine the relationship between two or more survey questions. It allows you to see how the responses to one question vary with the responses to another question.
3. Chi-square test: The chi-square test can be used to determine whether there is a significant association between two categorical variables. It can be used to test whether the responses to one survey question depend on the responses to another survey question.
4. T-tests and ANOVA: T-tests and analysis of variance (ANOVA) can be used to compare the means of two or more groups on a single survey question. They can be used to test whether there are significant differences in the responses to a survey question between different groups (such as men and women, or different age groups).
5. Regression analysis: Regression analysis can be used to examine the relationship between one or more independent variables and a dependent variable. It can be used to test whether there is a significant linear relationship between a survey question and other variables, such as demographic variables or other survey questions.
6. Factor analysis: Factor analysis can be used to identify underlying factors or dimensions that explain the pattern of responses to multiple survey questions. It can be used to group survey questions that measure similar concepts or to identify unique factors that explain variation in the responses.
These are just a few examples of the mathematical and statistical methods that can be used to quantify and discuss survey questions. The choice of method will depend on the research question, the type of data collected, and the level of analysis needed.
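As a worked instance of item 4 above (invented Likert-style data for two groups; standard library only, where in practice scipy.stats.ttest_ind with equal_var=False would be used):

```python
import statistics as st
import math

# Minimal sketch: Welch's t-test comparing Likert-scale responses from two
# groups. The data are invented for illustration.
men   = [4, 5, 3, 4, 2, 5, 4, 3]
women = [3, 2, 4, 3, 3, 2, 4, 2]

m1, m2 = st.mean(men), st.mean(women)
v1, v2 = st.variance(men), st.variance(women)   # sample variances
n1, n2 = len(men), len(women)

se = math.sqrt(v1 / n1 + v2 / n2)               # standard error of the difference
t = (m1 - m2) / se

# Welch-Satterthwaite degrees of freedom
df = (v1/n1 + v2/n2)**2 / ((v1/n1)**2/(n1-1) + (v2/n2)**2/(n2-1))
print(f"t = {t:.3f}, df = {df:.1f}")
```

The resulting t statistic is then compared against the t distribution with df degrees of freedom to get a p-value; Welch's variant is preferred over the classical pooled t-test when the two groups' variances may differ.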
• asked a question related to Mathematics
Question
Actually, I am working on the modeling of path loss between the coordinator and the sensor nodes of a BAN network. My objective is to make a performance comparison between the CM3A model of the IEEE 802.15.6 standard and a loss model that I have implemented mathematically.
So, according to your respected experience, how can I implement these two path-loss models? Do I have to define both path-loss equations in the Wireless Channel model? Or do I create a specific module under Castalia for each path-loss model (like the wireless channel module) and then call it from the omnetpp.ini (configuration) file?
You will find attached the two models in a figure.
Not sure I understood your question properly, but here is an alternative way to obtain the path-loss value: there is a piece of software called NYUSIM. You can get path-loss comparisons by running its simulation; it can simulate up to 100 GHz. All you have to do is insert the appropriate simulation data for your desired outcome.
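Independent of NYUSIM, a generic log-distance path-loss model of the kind both formulas in the attached figure build on can be sketched as follows (PL0, the exponent n, the shadowing sigma, and the reference distance d0 are placeholder values, not the IEEE 802.15.6 CM3A parameters):

```python
import math
import random

# Illustrative log-distance path-loss model with log-normal shadowing.
# PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma; all parameters are placeholders.
def path_loss_db(d, d0=0.1, PL0=35.0, n=3.1, sigma=6.0, rng=random.Random(42)):
    return PL0 + 10 * n * math.log10(d / d0) + rng.gauss(0, sigma)

for d in (0.1, 0.5, 1.0):
    print(f"{d:.1f} m: {path_loss_db(d):.1f} dB")
```

In a Castalia-style setup, each such model would compute the attenuation the wireless channel module applies per transmitter-receiver pair, so comparing two models means swapping this function while keeping the rest of the simulation fixed.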
• asked a question related to Mathematics
Question
Many people believe that x-t spacetime is separable and that describing x-t as an inseparable unit block is only essential when the speed of the object concerned approaches that of light.
This is the most common error in mathematics as I understand it.
The universe has been expanding since the time of the Big Bang at almost the speed of light and this may be the reason why the results of classical mathematics fail and become less accurate than those of the stochastic B-matrix ( or any other suitable matrix) even in the simplest situations like double integration and triple integration.
CASE A: To begin with, let's admit that nature does not see with our eyes and does not think with our brains. We try to understand how nature performs its own resolutions in the space-time x-t as an inseparable unit block.
B-matrix statistical chains (or any other suitable stochastic chains) can answer this question and demonstrate, in a way, how nature works:
i- nature sees the curve as a trapezoidal area.
ii- nature sees the square as a cube or a cuboid volume.
iii- nature sees the cube as a 4D hypercube and evaluates its volume as L^4.
In all hypotheses i-iii, time t is the additional dimension. This is the reason why the current definition of double and triple integration is incomplete.
In brief, time is part of an inseparable block of space-time, and geometric space is the other part of that block. Integration can therefore be performed in the x-t space-time unit with wide applicability and excellent speed and accuracy. The classical mathematical methods of integration in the geometric Cartesian space x alone can still be applied, but only in special cases, and their results can be expected to be only a limit of success.
1- It is important to understand that mathematics is only a tool for quantitatively describing physical phenomena; it cannot replace physical understanding.
2- It is claimed that mathematics is the language of physics, but the reverse is also true: physics can be the language of mathematics, as in the case of numerical integration and the derivation of the normal/Gaussian distribution law via the statistical B-matrix chains.
In this technique, chains of B matrices are used to numerically solve double and triple integrals as well as the general case of time-dependent partial differential equations with arbitrary Dirichlet boundary conditions and arbitrary initial conditions. The classical definition
I = ∫∫∫ f(x,y,z) dxdydz,
understood as the limit of the sum of the products f(x,y,z) dxdydz for infinitesimal dx, dy, dz, is completely ignored in these numerical statistical methods, as if it never existed. We concentrate below on some results in the field of numerical integration via the theory of the B matrix, which in itself is not complicated but rather long.
------------
7 Free Nodes:
Single finite Integral
I=∫ f(x) dx ... for  a<=x<=b
Briefly, we arrive at,
The statistical integration formula for 7 nodes is given by,
I=6h/77(6.Y1 +11.Y2 + 14.Y3+15.Y4 +14.Y5 + 11.Y6 + 6.Y7)
which is the statistical equivalence of Simpson's rule for 7 nodes.
Now consider the special case,
I=∫ y dx from x=2 to x=8 where y=X^2.
That is,
X = 2 3   4  5   6   7  8
Y = 4 9 16 25 36 49 64
Numerical result via the trapezoidal rule:
It = Y1/2 + Y2 + Y3 + Y4 + Y5 + Y6 + Y7/2 = 2 + 9 + 16 + 25 + 36 + 49 + 32 = 169 square units.
Analytic integration: I = x^3/3, so
Ia = (512 - 8)/3 = 168 square units.
Finally, the statistical integration formula for 7 nodes is given by,
Is=6 h/77 (6*4 +11* 9 + 14* 16+15* 25 +14* 36 + 11*49 + 6*64)
Is = 167.455 square units. This means that statistical integration is quite fast and accurate.
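The arithmetic above can be checked in a few lines (the weights are copied from the 7-node formula quoted above):

```python
# Cross-check of the quoted 7-node statistical rule against the trapezoidal
# rule and the exact integral of y = x^2 on [2, 8].
xs = list(range(2, 9))           # 7 equidistant nodes, h = 1
ys = [x * x for x in xs]
h = 1.0

trap = h * (ys[0] / 2 + sum(ys[1:-1]) + ys[-1] / 2)

w = [6, 11, 14, 15, 14, 11, 6]   # weights from the quoted formula
stat = 6 * h / 77 * sum(wi * yi for wi, yi in zip(w, ys))

exact = (8**3 - 2**3) / 3
print(trap, round(stat, 3), exact)   # 169.0, 167.455, 168.0
```

On this cubic-free integrand the statistical rule lands within about 0.3% of the exact value, between the trapezoidal result and the analytic one.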
CASE B:
-----------
Double finite Integral  I=∫∫ f(x,y) dx dy... for the domain  a<=x<=b and  c<=y<=d
If we introduce a specific example without loss of generality where ,
the function Z=f(x,y) is defined as,
Z(x,y) = x^2 y^2 + x^3 . . . . . (1)
defined on the rectangular domain [abcd],
1 <= x <= 3 and 1 <= y <= 3 . . . Domain D(1)
The process of double numerical integration (I),
I=∫∫ f(x,y) dxdy
on the D1 domain can be achieved via three different approaches, namely,
1-analytically (a),
Ia=(x^3/3 *y^2 +x^4/4) + (x^2*y^3/3+x^3) . . . (2)
2-Rule of the Double Sympson (ds),
Ids = (h^2/9) [16 f((a+b)/2, (c+d)/2) + 4 f((a+b)/2, d) + 4 f((a+b)/2, c) + 4 f(b, (c+d)/2) + 4 f(a, (c+d)/2) + f(b,d) + f(b,c) + f(a,d) + f(a,c)] . . . . (3)
3- The statistical integration formula via the Cairo technique (ct),
Ict = 9h^3/29.5 [2.75 Z(1,1) + 3.5 Z(1,2) + 2.75 Z(1,3) + 3.5 Z(2,1) + 4.5 Z(2,2) + 3.5 Z(2,3) + 2.75 Z(3,1) + 3.5 Z(3,2) + 2.75 Z(3,3)] . . . . (4)
where h is the equidistant interval on the x and y axes.
The numerical results are as follows,
i- Ia =227.25
ii-Ids=226.5
iii-I ct=227.035 ..which is the most accurate.
CASE C:
----------------
Triple finite Integral and Hypercube
I = ∫∫∫ f(x,y,z) dx dy dz ... for the domain a <= x <= b, c <= y <= d, and e <= z <= f.
Again, we present the supposed nature matrix of the triple integration on the 3D cube abcdefgh, divided into 27 equidistant nodes (1,1,1; 2,1,1; ... ; 3,3,3):
I=∫∫∫ W(x,y,z) dxdydz
on the cube domain.
I = 27h^4/59 [2.555 W(1,1,1) + 3.13 W(1,2,1) + 2.555 W(1,3,1) + 3.13 W(2,1,1) + 3.876 W(2,2,1) + 3.13 W(2,3,1) + 2.555 W(3,1,1) + 2.555 W(3,2,1) + 3.13 W(3,3,1) . . . etc.]
The question arises, why are statistical forms of integration faster and more accurate than mathematical forms?
We assume that the answer is inherent in the processes of integration, whether they belong to the 3D geometric space or the unitary x-t space.
• asked a question related to Mathematics
Question
I believe that it is common knowledge that mathematics and its applications cannot directly prove Causality. What are the bases of the problem of incompatibility of physical causality with mathematics and its applications in the sciences and in philosophy?
The main but very general explanation could be that mathematics and mathematical explanations are not directly about the world, but are applicable to the world to a great extent.
Hence, mathematical explanations can at the most only show the general ways of movement of the processes and not demonstrate whether the ways of the cosmos are by causation, what the internal constitution of every part of it is, etc. Even when some very minute physical process is mathematized, the results are general, and not specific of the details of the internal constitution of that process.
No science and philosophy can start without admitting that the cosmos exists. If it exists, it is not nothing, not vacuum. Non-vacuous existence means that the existents are non-vacuously extended. This means that they have parts. Every part has parts too, ad libitum, because each part is extended and non-infinitesimal. Hence, each part is relatively discrete, not mathematically discrete.
None of the parts of any physical existent is an infinitesimal. They can be near-infinitesimal. This character of existents is Extension, a Category directly implied by the To Be of Reality-in-total.
Similarly, any extended being’s parts -- however near-infinitesimal -- are active, moving. This implies that every part has some (finite) impact on some others, not on infinite others. This character of existents is Change.
No other implication of To Be is so primary as these two (Extension-Change) and directly derivable from To Be. Hence, they are exhaustive of To Be.
Existence in Extension-Change is what we call Causality. If anything is existent, it is causal – hence Universal Causality is the trans-scientific and physical-ontological Law of all existents.
By the very concept of finite Extension-Change-wise existence, it becomes clear that no finite space-time is absolutely dense with existents. Hence, existents cannot be mathematically continuous. Since there is continuous (but finite and not discrete) change (transfer of impact), no existent can be mathematically absolutely continuous or discrete in its parts or in connection with others.
Can logic show the necessity of all existents as being causal? We have already discussed how, ontologically, the very concept of To Be implies Extension-Change and thus also Universal Causality.
WHAT ABOUT THE ABILITY OR NOT OF LOGIC TO CONCLUDE TO UNIVERSAL CAUSALITY?
In my argument above and elsewhere showing Extension-Change as the very exhaustive meaning of To Be, I have used mostly only the first principles of ordinary logic, namely, Identity, Non-contradiction, and Excluded Middle, and then argued that Extension-Change-wise existence is nothing but Universal Causality, if everything existing is non-vacuous in existence.
For example, does everything exist or not? If yes, let us call it non-vacuous existence. Hence, Extension as the first major implication of To Be. Non-vacuous means extended, because if not extended, the existent is vacuous. If extended, everything has parts.
The point of addition now has been Change, which makes the description physical. It is, so to say, from experience. Thereafter I move to the meaning of Change basically as motion or impact.
Naturally, everything in Extension must effect impacts. Everything has further parts. Hence, by implication from Change, everything causes changes by impacts. Thus, we conclude that Extension-Change-wise existence is Universal Causality. It is thus natural to claim that this is a pre-scientific Law of Existence.
In such foundational questions like To Be and its implications, we need to use the first principles of logic, because these are the foundational notions of all science and no other derivative logical procedure comes in as handy. In short, logic with its fundamental principles can help derive Universal Causality. Thus, Causality is more primary to experience than the primitive notions of mathematics.
Extension-Change, and Universal Causality derived by their amalgamation, are the most fundamental Metaphysical / Physical-ontological Categories. Since these are the direct and exhaustive implications of To Be, all philosophy and science are based on them.
The Irrefutable Argument for Universal Causality. Any Opposing Position?
• asked a question related to Mathematics
Question
The congruent number problem has been a fascinating topic in number theory for centuries, and it continues to inspire research and exploration today. The problem asks whether a given positive integer can be the area of a right-angled triangle with rational sides. While this problem has been extensively studied, it is not yet fully understood, and mathematicians continue to search for new insights and solutions.
In recent years, there has been increasing interest in generalizing the congruent number problem to other mathematical objects. Some examples of such generalizations include the elliptic curve congruent number problem, which asks for the existence of rational points on certain elliptic curves related to congruent numbers, and the theta-congruent number problem as a variant, which considers the possibility of finding fixed-angled triangles with rational sides.
However, it is worth noting that not all generalizations of the congruent number problem are equally fruitful or meaningful. For example, one might consider generalizing the problem to arbitrary objects, but such a generalization would likely be too broad to be useful in practice.
Therefore, the natural question arises: what is the most fruitful and meaningful generalization of the congruent number problem to other mathematical objects? Any ideas are welcome.
Here are some articles:
M. Fujiwara, “θ-congruent numbers”, in: Number Theory (Eger, 1996), de Gruyter, Berlin, 1998, pp. 235–241.
Tsubasa Ochiai, “New generalizations of congruent numbers”, DOI: 10.1016/j.jnt.2018.05.003.
Larry Rolen, “A generalization of the congruent number problem”.
Is the Arabic book about the congruent number problem cited correctly in the references? If anyone has any idea where I can find the Arabic version, it will be helpful. The link to the book is https://www.qdl.qa/العربية/archive/81055/vdc_100025652531.0x000005.
EDIT1:
I will present a family of elliptic curves in the same spirit as the congruent number elliptic curves.
This family exhibits patterns similar to those of the congruent number elliptic curves, including the property that the integer is still "congruent" if we take its square-free part, and there is evidence for a connection between congruence and positive rank (as seen in the congruent cases of $n=5,6,7$).
Thank you, Irshad Ayoob . I need to rephrase my question. What I mean by a generalization of the congruent number is as follows: A congruent number is related to the area of a right triangle or, simply, to the Diophantine equation a^2 + b^2 = c^2. An integer n is congruent if 2n = ab. Historically, this was not the first definition of a congruent number. Instead, in Arab manuscripts, n is congruent if the two Diophantine equations v^2 - n = u^2 and v^2 + n = w^2 have simultaneously a solution. By the way, this is equivalent to the well-known definition of a congruent number today, which is linked to the right triangle.
Now, my remark is about the degree two Diophantine equation. For example, let's take a^2 + 2b^2 = c^2. We know that if this Diophantine equation (or any other degree two of the form ra^2 + sb^2 = t*c^2, where a, b, c, r, s, t are integers) has a non-trivial solution, it will have an infinite number of solutions. So, in the case of the Pythagorean triple, we have the definition of the congruent number. But for the other equations, what is the correct definition of a congruent number?
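The equivalence mentioned above between the Arab-manuscript definition (v^2 - n and v^2 + n both squares) and the right-triangle definition can be made concrete with exact rational arithmetic. The witness map v = c/2 below is the classical one; the 3-4-5 triangle of area n = 6 is used purely as an illustration.

```python
from fractions import Fraction

def arab_criterion_witness(a, b, c):
    """Given a rational right triangle a^2 + b^2 = c^2 with area n = ab/2,
    v = c/2 satisfies v^2 - n = ((a-b)/2)^2 and v^2 + n = ((a+b)/2)^2,
    exhibiting the equivalence of the two classical definitions of a
    congruent number."""
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    assert a * a + b * b == c * c, "not a right triangle"
    n = a * b / 2
    v = c / 2
    u = abs(a - b) / 2
    w = (a + b) / 2
    assert v * v - n == u * u and v * v + n == w * w
    return n, v, u, w

# For the 3-4-5 triangle: n = 6, v = 5/2, since
# (5/2)^2 - 6 = (1/2)^2 and (5/2)^2 + 6 = (7/2)^2.
print(arab_criterion_witness(3, 4, 5))
```

The same identity read backwards (a = w - u, b = w + u, c = 2v) recovers the triangle from an arithmetic progression of three squares, which is one reason the two historical definitions coincide.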
• asked a question related to Mathematics
Question
MATHEMATICS VS. CAUSALITY:
A SYSTEMIC RECONCILIATION
Raphael Neelamkavil, Ph.D., Dr. phil.
1. Preface on the Use of Complex Language
2. Prelude on the Pre-Scientific Principle of Causality
3. Mathematical “Continuity and Discreteness” Vs. Causal Continuity
4. Mathematics and Logic within Causal Metaphysics
5. Mathematics, Causality, and Contemporary Philosophical Schools
1. Preface on the Use of Complex Language
First of all, a cautious justification is in place about the complexity one may experience in the formulations below: When I publish anything, the readers have the right to ask me constantly for further justifications of my arguments and claims. And if I have the right to anticipate some such possible questions and arguments, I will naturally attempt to be as detailed and systemic as possible in my formulation of each sentence here and now. A sentence is merely a part of the formulated text. After reading each sentence, you may pose me questions, which certainly cannot all be answered well within the sentences or soon after the sentences in question, because justification is a long process.
Hence, my sentences may tend to be systemically complex. A serious reader will not find these arguments getting too complex, because such a person has further unanswered questions. We do not purposely make anything complex. Our characterizations of meanings in mathematics, physics, philosophy, and logic can be complex and prohibitive for some. But would we all accuse these disciplines or the readers if the readers find them all complex and difficult? In that case, I could be excused too. I do not intentionally create a complex state of affairs in these few pages; but there are complexities here too. I express my helplessness in case any one finds these statements complex.
The languages of both science and philosophy tend to be complex and exact. This, nevertheless, should be tolerated provided the purpose is understood and practiced by both the authors and the readers. Ordinary language has its worth and power. If I give a lecture, I do not always use as formal a language as when I write, because I am there to re-clarify.
But the Wittgensteinian obsession with “ordinary” language does not make him use an ordinary language in his own works. Nor does the Fregean phobia about it save him from falling into the same ordinary-language naïveté of choosing concrete and denotative equivalence between terms and their reference-objects without a complex ontology behind them. I attempt to explain the complex ontology behind the notions that I use.
2. Prelude on the Pre-Scientific Principle of Causality
Which are the ultimate conditions implied by the notion of existence (To Be), without which conditions implied nothing exists, and without which sort of existents nothing can be discoursed? Anything exists non-vacuously. This implies that existents are inevitably in Extension (having parts, each of which is further extended and not vacuous). The parts will naturally have some contact with a finite number of others. That is, everything is in Change (impacting some other extended existents).
Anything without these two characteristics cannot exist. If not in Change, how can something exist in the state of Extension alone? And if not in Extension, how can something exist in the state of Change alone? Hence, Extension-Change are two fundamental ontological categories of all existence and the only two exhaustive implications of To Be. Any unit of causation with one causal aspect and one effect aspect is termed a process.
These conditions are ultimate in the sense that they are implied by To Be, not as the secondary conditions for anything to fulfil after its existence. Thus, “To Be” is not merely of one specific existent, but of all existents. Hence, Extension-Change are the implications of the To Be of Reality-in-total. Physical entities obey these implications. Hence, they must be the foundations of physics and all other sciences. Theoretical foundations, procedures, and conclusions based on these implications in the sciences and philosophy, I hold, are wise enough.
Extension-Change-wise existence is what we understand as Causality: extended existents and their parts exert impacts on other extended existents. Every part of existents does it. That is, if anything exists, it is in Causation. This is the principle of Universal Causality. In short, Causality is not a matter to be decided in science – whether there is Causality or not in any process under experiment and in all existents is a matter for philosophy to decide, because philosophy tends to study all existents. Science can ask only whether there occurs any specific sort of causation or not, because each science has its own restricted viewpoint of questions and experiments and in some cases also restrictions in the object set.
Thus, statistically mathematical causality is not a decision as to whether there is causation or not in the object set. It is not a different sort of causation, but a measure of the extent of determination of special causes that we have made at a given time. Even the allegedly “non-causal” quantum-mechanical constituent processes are mathematically and statistically circumscribed measuremental concepts from the results of Extended-Changing existents and, ipso facto, the realities behind these statistical measurements are in Extension-Change if they are physically existent.
Space is the measured shape of Extension; time is that of Change. Therefore, space and time are epistemic categories. How then can statistical causality based only on measuremental data be causality at all, if the causes are all in Extension-Change and if Universal Causality is already the pre-scientific Law under which all other laws appear? No part of an existent is non-extended and non-changing. One unit of cause and effect may be called a process. Every existent and its parts are processual.
And how can a so-called random cause be a cause, except when the randomness is the extent of our measuremental reach of the cause, which already is causal because of its Extension-Change-wise existence? Extension and Change are the very exhaustive meanings of To Be, and hence I call them the highest Categories of metaphysics, physical ontology, physics, and all science. Not merely philosophy but also science must obey these two Categories.
In short, everything existent is causal. Hence, Universal Causality is the highest pre-scientific Law, second conceptually only to Extension-Change and third to Existence / To Be. Natural laws are merely derivative. Since Extension-Change-wise existence is the same as Universal Causality, scientific laws are derived from Universal Causality, and not vice versa. Today the sciences attempt to derive causality from the various scientific laws! The relevance of metaphysics / physical ontology for the sciences is clear from the above.
Existents have some Activity and Stability. This is a fully physical fact. These two Categories may be shown to be subservient to Extension-Change and Causality. Pure vacuum (non-existence) is absence of Activity and Stability. Thus, entities, irreducibly, are active-stable processes in Extension-Change. Physical entities / processes possess finite Activity and Stability. Activity and Stability together belong to Extension; and Activity and Stability together belong to Change too.
That is, Stability is neither merely about space nor about Extension. Activity is neither merely about time nor about Change. There is a unique reason for this. There is no absolute stability nor absolute activity in the physical world. Hence, Activity is finite, which is by Extended-Changing processes; and Stability is finite, which is also by Extended-Changing processes. But the tradition still seems to parallelise Stability and Activity with space and time respectively. We consider Activity and Stability as sub-Categories, because they are based on Extension-Change, which together add up to Universal Causality; and each unit of cause and effect is a process.
These are not Categories that belong to merely imaginary counterfactual situations. The Categories of Extension-Change and their sub-formulations are all about existents. There can be counterfactuals that signify cases that appertain existent processes. But separating these cases from some of the useless logical talk as in linguistic-analytically tending logic, philosophy, and philosophy of science is near to impossible.
Today physics and the various sciences do at times something like the said absence of separation of counterfactual cases from actual in that they indulge in particularistically defined terms and procedures, by blindly thinking that counterfactuals can directly represent the physical processes under inquiry. Concerning mathematical applications too, the majority attitude among scientists is that they are somehow free from the physical world.
Hence, without a very general physical ontology of Categories that are applicable to all existent processes and without deriving the mathematical foundations from these Categories, the sciences and mathematics are in gross handicap. Mathematics is no exception in its applicability to physical sciences. Moreover, pure mathematics too needs the hand of Extension and Change, since these are part of the ontological universals, form their reflections in mind and language, etc., thus giving rise to mathematics.
The exactness within complexity that could be expected of any discourse based on the Categorial implications of To Be can only be such that (1) the denotative terms ‘Extension’ and ‘Change’ may or may not remain the same, (2) but the two dimensions of Extension and Change – that are their aspects in ontological universals – would be safeguarded both physical-ontologically and scientifically.
That is, definitional flexibility and openness towards re-deepening, re-generalizing, re-sharpening, etc. may even change the very denotative terms, but the essential Categorial features within the definitions (1) will differ only meagrely, and (2) will normally be completely the same.
3. Mathematical “Continuity and Discreteness” Vs. Causal “Continuity”
The best examples for the above are mathematical continuity and discreteness that are being attributed blindly to physical processes due to the physical absolutization of mathematical requirements. But physical processes are continuous and discrete only in their Causality. This is nothing but Extension-Change-wise discrete causal continuity. At any time, causation is present in anything, hence there is causal continuity. This is finite causation and hence effects finite continuity and finite discreteness. But this is different from absolute mathematical continuity and discreteness.
I believe that it is common knowledge that mathematics and its applications cannot prove Causality directly. What are the bases of the problem of incompatibility of physical causality within mathematics and its applications in the sciences and in philosophy? The main but general explanation could be that mathematical explanations are not directly about the world but are applicable to the world to a great extent.
It is good to note that mathematics is treated as a separate science, as if its “objects” were existent, although in fact they are non-existent and different from those of any other science – thus turning mathematics into an abstract science in the theoretical aspects of its rational effectiveness. Hence, mathematical explanations can at the most only show the ways of movement of the processes and not demonstrate whether the ways of the cosmos are by causation.
Moreover, the basic notions of mathematics (number, number systems, points, shapes, operations, structures, etc.) are all universals / universal qualities / ontological universals that belong to groups of existent things that are irreducibly Extension-Change-type processes. (See below.)
Thus, mathematical notions have their origin in ontological universals and their reflections in mind (connotative universals) and in language (denotative universals). The basic nature of these universals is ‘quantitatively qualitative’. We shall not discuss this aspect here at length.
No science and philosophy can start without admitting that the cosmos exists. If it exists, it is not nothing, not non-entity, not vacuum. Non-vacuous existence means that the existents are non-vacuously extended. This means they have parts. Every part has parts too, ad libitum, because each part is extended. None of the parts is an infinitesimal. They can be near-infinitesimal. This character of existents is Extension, a Category directly implied by To Be.
Similarly, any extended being’s parts are active, moving. This implies that every part has impact on some others, not on infinite others. This character of existents is Change. No other implication of To Be is so primary as these. Hence, they are exhaustive of the concept of To Be, which belongs to Reality-in-total. These arguments show us the way to conceive the meaning of causal continuity.
Existence in Extension-Change is what we call Causality. If anything is existent, it is causal – hence Universal Causality is the trans-science physical-ontological Law of all existents. By the very concept of finite Extension-Change-wise existence, it becomes clear that no finite space-time is absolutely dense with existents. In fact, space-time is no ontological affair, but only epistemological, and existent processes need measurementally accessible finite space for Change. Hence, existents cannot be mathematically continuous. Since there is Change and transfer of impact, no existent can be absolutely discrete in its parts or in connection with others.
Can logic show the necessity of all existents to be causal? We have already discussed how, ontologically, the very concept of To Be implies Extension-Change and thus also Universal Causality. Logic can only be instrumental in this.
What about the ability or not of logic to conclude to Universal Causality? In my arguments above and elsewhere showing Extension-Change as the very exhaustive meaning of To Be, I have used mostly only the first principles of ordinary logic, namely, Identity, Non-contradiction, and Excluded Middle, and then argued that Extension-Change-wise existence is nothing but Universal Causality if everything existing is non-vacuous in existence.
For example, does everything exist or not? If yes, let us call it non-vacuous existence. Hence, Extension is the first major implication of To Be. Non-vacuous means extended, because if not extended the existent is vacuous. If extended, everything has parts. Having parts implies distances, however minute, between all the near-infinitesimal parts of any existent process. In this sense, the basic logical laws do help conclude the causal nature of existents.
A point of addition now has been Change. It is, so to say, from experience. But this need not exactly mean an addition. If existents have parts (i.e., if they are in Extension), the parts’ mutual difference already implies the possibility of contact between parts. Thus, I am empowered to move to the meaning of Change basically as motion or impact. Naturally, everything in Extension must effect impacts.
Everything has further parts. Hence, by implication from Change and the need for there to be contacts between every near-infinitesimal set of parts of existents, everything causes changes by impacts. In the physical world this is by finite impact formation. Hence, nothing can exist as an infinitesimal. Leibniz’s monads have no significance in the real world.
Thus, we conclude that Extension-Change-wise existence is Universal Causality, and every actor in causation is a real existent, not a non-extended existent, as energy particles seem to have been considered and are even today thought to be, due to their unit-shape yielded merely for the sake of mathematical applications. It is thus natural to claim that Causality is a pre-scientific Law of Existence, where existents are all inwardly and outwardly in Change, i.e., in impact formation – otherwise, the concept of Change would lose meaning.
In such foundational questions like To Be and its implications, the first principles of logic must be used, because these are the foundational notions of all science and no other derivative logical procedure comes in as handy. In short, logic with its fundamental principles can help derive Universal Causality. Thus, Causality (Extension-Change) is more primary to experience than the primitive notions of mathematics. But the applicability of these three logical Laws is not guaranteed so well in arguments using derivative, less categorial, sorts of concepts.
I suggest that the crux of the problem of mathematics and causality is the dichotomy between mathematical continuity and mathematical discreteness on the one hand and the incompatibility of applying any of them directly on the data collected / collectible / interpretable from some layers of the phenomena which are from some layers of the object-process in question. Not recognizing the presence of such stratificational debilitation of epistemic directness is an epistemological foolishness. Science and philosophy, in my opinion, are victims of this. Thus, for example, the Bayesian statistical theory recognizes only a statistical membrane between reality and data!
Here I point at the avoidance of the problem of stratificational debilitation of epistemic directness, by the centuries of epistemological foolishness, by reason of the forgetfulness of the ontological and epistemological relevance of expressions like ‘from some layers of data from some layers of phenomena from some layers of the reality’.
This is the point at which it is time to recognize the gross violence against natural reason behind phrases and statements involving ‘data from observation’, ‘data from phenomena’, ‘data from nature / reality’ etc., without epistemological and ontological sharpness in both science and philosophy to accept these basic facts of nature. As we all know, this state of affairs has gone irredeemable in the sciences today.
The whole of what we used to call space is not filled with matter-energy. Hence, if causal continuity between partially discrete “processual” objects is the case, then the data collected / collectible cannot be the very processual objects and hence cannot provide all knowledge about the processual objects. But mathematics and all other research methodologies are based on human experience and thought based on experience.
This theoretical attitude facilitates and accepts in a highly generalized manner the following three points:
(1) Mathematical continuity (in any theory and in terms of any amount of axiomatization of logical, mathematical, physical, biological, social, and linguistic theories) is totally non-realizable in nature as a whole and in its parts: because (a) the necessity of mathematical approval of any sort of causality in the sciences and by means of its systemic physical ontology falls short miserably in actuality, and (b) the logical continuity of any kind does not automatically make linguistically or mathematically symbolized activity of representation adequate enough to represent the processual nature of entities as derived from data.
(2) The concept of absolute discreteness in nature, which, as of today, is ultimately of the quantum-mechanical type based on Planck’s constant, continues to be a mathematical and partial misfit in the physical cosmos and its parts, (a) if there exist other universes that may causally determine the constant differently at their specific expansion and/or contraction phases, and (b) if there are an infinite number of such finite-content universes.
The case may not of course be so problematic in non-quantifiable “possible worlds” due to their absolute causal disconnection or their predominant tendency to causal disconnection, but this is a mere common-sense, merely mathematical, compartmentalization: because (a) the aspect of the causally processual connection between any two quanta is logically and mathematically alienated in the physical theory of Planck’s constant, and (b) the possible worlds have only a non-causal existence, and hence, anything may be determined in this world as a constant, and an infinite number of possible universes may be posited without any causal objection!
It is usually not kept in mind here by physicists that the epistemology of unit-based thinking – of course, based on quantum physics or not – is implied by the almost unconscious tendency of symbolic activity of body-minds. This need not have anything to do with a physics that produces laws for all existent universes.
(3) The only viable and thus the most reasonably generalizable manner of being of the physical cosmos and of biological entities is that of existence in an Extended (having parts) and Changing manner (extended entities and their parts impacting a finite number of other existents and their parts in a finite quantity and in a finite duration). Existence in the Extension-Change-wise manner is nothing but causal activity.
Thus, insofar as everything is existent, every existent is causal. There is no time (i.e., no minute measuremental iota of Change) wherein such causal manner of existing ceases in any existent. This is causal continuity between partially discrete processual objects. This is not mathematizable in a discrete manner. The concept of geometrical and number-theoretic continuity may apply. But if there are other universes, the Planck constant of proportionality that determines the proportion of content of discreteness may change in the others. This is not previsioned in terrestrially planned physics.
The attitude of treating everything as causal may also be characterized by the self-aware symbolic activity by symbolic activity itself, in which certain instances of causation are avoided or enhanced, all decrementally or incrementally as the case may be, but not absolutely. This, at the most, is what may be called freedom.
It is fully causal – need not be sensed as causal within a specific set of parameters, but as causal within the context of Reality-in-total. But the whole three millennia of psychological and religious (contemplative) tradition of basing freedom merely on awareness intensity, and not on love – this is a despicable state of affairs, on which a book-length treatise is necessary.
Physics and cosmology even today tend to make the cosmos either (1) mathematically presupposedly continuous, or (2) discrete with defectively ideal mathematical status for causal continuity and with perfectly geometrical ideal status for specific beings, or (3) statistically indeterministic, thus being compelled to consider everything as partially causal, or even non-causal in the interpretation of statistics’ orientation to epistemically logical decisions and determinations based on data. If this has not been the case, can anyone suggest proofs for an alleged existence of a different sort of physics and cosmology until today?
The statistician does not even realize (1) that Universal Causality is already granted by the very existence of anything, and (2) that what they call non-causality is merely the not being the cause, or not having been discovered as the cause, of a specific set of selected data or processes. Such non-causality is not with respect to all existents. Quantum physics, statistical physics, and cosmology are replete with examples for this empirical and technocratic treachery of the notion of science.
A topologically and mereologically clean physical ontology of causal continuity between partially discrete processual objects, fully free of absolutely continuity-oriented or absolutely discreteness-oriented category theory, geometry, topology, functional analysis, set theory, and logic, is yet to be born. Hence, the fundamentality of Universal Causality in its deep roots in the very concept of the To Be (namely, in the physical-ontological Categories of Extension and Change) of all physically and non-vacuously existent processes remains alien to physics and cosmology until today.
Non-integer rational numbers are not the direct notion of anything existent. Even a part of a unit process has the attribute ‘unity’ in all the senses in which any other object possesses it. For this reason, natural numbers have Categorial priority over rational numbers, because natural numbers are more directly related to ontological universals than other sorts of numbers are. Complex numbers, for example, are the most general number system with respect to their mathematically defined sub-systems, but this does not mean that they are more primary in the metaphysics of ontological universals, since the primary mode of numerically quantitative qualities / universals is that of natural numbers.
4. Mathematics and Logic within Causal Metaphysics
Hence, it is important to define the limits of applicability of mathematics to a physics that uses physical data (under the species of the various layers of their origin). This is the only way to approximate what lies beyond the data and beyond the methodologically derived conclusions. How and on what levels this is to be done is a matter to be discussed separately.
The same may be said also about logic and language. Logic is the broader rational picture of mathematics. Language is the symbolic manner of application of both logic and its quantitatively qualitative version, namely, mathematics, with respect to specific fields of inquiry. Here I do not explicitly discuss ordinary conversation, literature, etc.
We may do well to instantiate logic as the formulated picture of reason. But human reason is limited to the procedures of reasoning performed by brains. What exactly is the reason that existent physical processes constantly undergo? How can we reach conclusions based on this reason of nature – by using our brain's reasoning – and thus transcend, at least to some extent, the limitations set by data and methods on our brain's reasoning?
If we may call the universal reason of Reality-in-total by a name, it is nothing but Universal Causality. It is possible to demonstrate that Universal Causality is a trans-physical, trans-scientific Law of Existence. This argument needs clarity. How is this to be demonstrated? It has been done in an elementary fashion above, but more of it is not to be part of this discussion.
Insistence on mathematical continuity in nature is a mere idealization. It expects nature to obey our merely epistemic sort of idealizations, that is, in ideal cases based mostly on the brain-interpreted concepts from some layers of data, which are from some layers of phenomena, which are from some layers of the reality under observation. Some of the best examples in science are the suppositions that virtual worlds are existent worlds, dark energy is a kind of propagative energy, zero-value cosmic vacuum can create an infinite number of universes, etc.
The processes outside are vaguely presented primarily by the processes themselves, but highly indirectly, in a natural manner. This is represented by the epistemic / cognitive activity within the brain in a natural manner (by the connotative universals in the mind as reflections of the ontological universals in groups of object processes), and then idealized via concepts expressed in words, connectives, and sentences (not merely linguistic but also mathematical, computerized, etc.) by the symbolizing human tendency (thus creating denotative universals in words) to capture the whole of the object by use of a part of the human body-mind.
The symbolizing activity is based on data, but the data are not all we have as end results. We can mentally recreate the idealized results behind the multitude of ontological, connotative, and denotative universals as existents.
As the procedural aftermath of this, virtual worlds begin to “exist”, dark energy begins to “propagate”, and the zero-value cosmic vacuum “creates” universes. Even kinetic and potential energies are treated as propagative energies existent outside of material bodies and supposed to be totally different from material bodies. These are mere theoretically interim arrangements in the absence of direct certainty about the existence or non-existence of unobservables.
Insistence on mathematical continuity in nature as a natural conclusion by the application of mathematics to nature is what happens in all physical and cosmological (and of course other) sciences insofar as they use mathematical idealizations to represent existent objects and processes and extrapolate further beyond them. Mathematical idealizations are another version of linguistic symbolization and idealization.
Logic and its direct quantitatively qualitative expression as found in mathematics are, of course, powerful tools. But, as being part of the denotative function of symbolic language, they are tendentially idealizational. By use of the same symbolizing tendency, it is perhaps possible to a certain extent to de-idealize the side-effects of the same symbols in the language, logic, and mathematics being used in order to symbolically idealize representations.
Merely mathematically following physical nature in whatever it is in its part-processes is a debilitating procedure in science and philosophy (and even in the arts and humanities), if this procedure is not de-idealized effectively. If this is possible at least to a small and humble extent, why not do it? Our language, logic, and mathematics do their functions well, although they too are equally unable to capture the whole of Reality in whatever it is, wholly or in parts, far beyond the data and their interpretations! Why not de-idealize the side-effects of mathematics too?
This theoretical attitude of partially de-idealizing the effects of human symbolizing activity by use of the same symbolic activity accepts the existence of processual entities as whatever they are. This is what I call ontological commitment – of course, different from and more generalized than those of Quine and others. Perhaps such a generalization can give a slightly better concept of reality than is possible by the normally non-self-aware symbolic activity in language, logic, and mathematics.
5. Mathematics, Causality, and Contemporary Philosophical Schools
With respect to what we have been discussing, linguistic philosophy, and even its more recent causalist child, namely dispositionalist causal ontology, have even today the following characteristics:
(1) They attribute an even now overly discrete nature to “entities” to the extent of their causal separateness from others while considering them as entities. The ontological notion of an object, or even of an event, in its unity in analytic philosophy, and in particular in modal ontology, forecloses consideration of the process nature of each such unity within, on a par with the interactions of such units with one another. (David Lewis, Parts of Classes, p. vii) This is done without ever attempting to touch the deeply Platonic (better, geometrically atomistic) shades of common-sense Aristotelianism, Thomism, Newtonianism, Modernism, Quantum Physics, etc., and without reconciling the diametrically opposite geometrical tendency to make every physical representation continuous.
(2) They are logically comatose about the impossibility of the exactly referential definitional approach to the processual demands of existent physical objects without first analyzing and resolving the metaphysical implications of existent objects, namely, being irreducibly in finite Extension and Change and thus in continuous Universal Causality in finite extents at any given moment.
(3) They are unable to get at the causally fully continuous (neither mathematically continuous nor geometrically discontinuous) nature of the physical-ontologically “partially discrete” processual objects in the physical world, also because they have misunderstood the discreteness of processual objects (including quanta) within stipulated periods as typically universalizable due to their pragmatic approach in physics and involvement of the notion of continuity of time.
Phenomenology has done a lot to show the conceptual structures of ordinary reasoning, physical reasoning, mathematical and logical thinking, and reasoning in the human sciences. But due to its lack of commitment to building a physical ontology of the cosmos and due to its purpose as a research methodology, phenomenology has failed to an extent to show the nature of causal continuity (instead of mathematical continuity) in physically existent, processually discrete, objects in nature.
Hermeneutics has merely followed the human-scientific interpretive aspect of Husserlian phenomenology and projected it as a method. Hence, it was no contender to accomplish the said feat.
Postmodern philosophies qualified all science and philosophy as being perniciously cursed to be “modernistic” – thus monsterizing all compartmentalization, rules, laws, axiomatization, discovery of regularities in nature, logical rigidity, and even metaphysical grounding as insurmountable curses of the human project of knowing and as synonyms for all that is unapproachable in science and thought. The linguistic-analytic philosophy of the later Wittgenstein, too, was no exception to this nature of postmodern philosophies – a matter that many Wittgenstein followers do not notice. Take a look at the first few pages of Wittgenstein’s Philosophical Investigations, and the matter will be more than clear.
The philosophies of the sciences seem today to follow the beaten paths of extreme pragmatism in linguistic-analytic philosophy, physics, mathematics, and logic, which lack a foundational concept of causally concrete and processual physical existence.
Hence, it is useful for the growth of science, philosophy, and humanities alike to research into the causal continuity between partially discrete “processual” objects and forget about absolute mathematical continuity or discontinuity in nature. Mathematics and the physical universe are to be reconciled in order to mutually delimit them in terms of the causal continuity between partially discrete processual objects.
The view that humans have of the world is influenced by the cognitive method that has been formed. All the fields of knowledge and science that you mentioned have together made up the cognitive method of the current time, which identifies the world with this dominant method!
We know only the frame in which our thinking is located! To look at the world in a different way and to adopt a different method, one must work outside all these fields, with a method that penetrates to the cause of phenomena! Is there a mathematics and logic whose resulting physical and cosmological models have the least distance from the principle of the actual phenomena of the world?! Our language is unable to express what is actually happening! Maybe a world like David Bohm's holographic world is the solution to our problem! In fact, an important part of the truth of the world is hidden in the hidden order – the part that we need in order to understand the world's phenomena! Or is the invention of a new logic, mathematics, and scientific-epistemological language needed?! In my opinion, the phenomenological method will say nothing about the causal layers of this world! The phenomenological method helps us to work without knowing the world and gives us a way of living and of using most of the capacities of this world without knowing it. Previously, Newton claimed a method for understanding physics with wrong assumptions about space and time, which lasted for a while! Those assumptions were based on the sensibility and philosophy of that period, which were also manifested in the language of the time! It cannot be said that these wrong assumptions have now stopped! And maybe the current path of epistemology and cosmology, with incorrect and limiting assumptions, will not bring us to true knowledge of the world.....
• asked a question related to Mathematics
Question
Insistence on mathematical continuity in nature is a mere idealization. It expects nature to obey our idealization. This is what happens in all physical and cosmological (and of course other) sciences as long as they use mathematical idealizations to represent existent objects / processes.
But mathematically following nature in whatever it is in its part-processes is a different procedure in science and philosophy (and even in the arts and humanities). This theoretical attitude accepts the existence of processual entities as what they are.
This theoretical attitude accepts in a highly generalized manner that
(1) mathematical continuity (in any theory and in terms of any amount of axiomatization of physical theories) is not totally realizable in nature as a whole and in its parts: because the necessity of mathematical approval in such a cosmology falls short miserably,
(2) absolute discreteness (even QM type, based on the Planck constant) in the physical cosmos (not in non-quantifiable “possible worlds”) and its parts is a mere commonsense compartmentalization from the "epistemology of piecemeal thinking": because the aspect of the causally processual connection between any two quanta is logically and mathematically alienated in the physical theory of Planck’s constant, and
(3) hence, the only viable and thus the most reasonably generalizable manner of being of the physical cosmos and of biological entities is that of CAUSAL CONTINUITY BETWEEN PARTIALLY DISCRETE PROCESSUAL OBJECTS.
PHYSICS and COSMOLOGY even today tend to make the cosmos mathematically either continuous or defectively discrete or statistically oriented to merely epistemically probabilistic decisions and determinations.
Can anyone suggest here the existence of a different sort of physics and cosmology that one may have witnessed until today? A topology and mereology of CAUSAL CONTINUITY BETWEEN PARTIALLY DISCRETE PROCESSUAL OBJECTS, fully free of discreteness-oriented category theory and functional analysis, is yet to be born.
Hence, causality in its deep roots in the very concept of To Be is alien to physics and cosmology till today.
Humans in this age, by inventing and using mathematical models and trying to match them with natural phenomena, try to know the world. The mentioned models have their own logic and causality! And experience has shown that they do not have a complete correspondence with nature! There will always be an inevitable distance between our models and nature; the motivation to reduce this distance may be yet another spur to scientific effort!
• asked a question related to Mathematics
Question
I read, in a math popularization book by Steven Strogatz, that 1+3=4, 1+3+5=9, 1+3+5+7=16, and so on; what would be the hypothesis to prove when trying to demonstrate this striking 'fact'?
In his famous 'Principia', Newton uses mathematical considerations to demonstrate the conservation of angular momentum. In the 'first' triangle, one of the sides is the velocity (its value, so a number), and the other two sides are the 'distances' at t and t+1, respectively. Now, if the areas are the same, could we say that these two sides express the flow of time?
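The hypothesis behind the Strogatz pattern above can be stated precisely: the sum of the first n odd numbers equals n². A minimal sketch in Python (the function name is my own) that checks this claim:

```python
# Hypothesis behind 1+3=4, 1+3+5=9, 1+3+5+7=16, ...:
# 1 + 3 + ... + (2n - 1) = n**2 for every n >= 1.
def sum_first_n_odds(n: int) -> int:
    """Return the sum 1 + 3 + ... + (2n - 1)."""
    return sum(2 * k - 1 for k in range(1, n + 1))

# Check the cases quoted from the book, and many more.
for n in range(1, 100):
    assert sum_first_n_odds(n) == n ** 2

print([sum_first_n_odds(n) for n in (2, 3, 4)])  # [4, 9, 16]
```

A short induction proof then follows the same shape: if the first n odds sum to n², adding the next odd number (2n+1) gives n² + 2n + 1 = (n+1)².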
• asked a question related to Mathematics
Question
Our response is YES. Quantum computing has arrived, as an expression of that.
Numbers do obey a physical law. Peter Shor, of the Massachusetts Institute of Technology, was the first to say it in modern times, in 1994 [cf. 1]. It is a wormhole connecting physics with mathematics, and it existed even before the Earth existed.
So-called "pure" mathematics is, after all, governed by objective laws. The Max Planck Institute of Quantum Optics (MPQ) showed the mathematical basis by recognizing the differentiation of discontinuous functions [1, 2, 3], in 1982.
This denies any type of square-root of a negative number [4] -- a.k.a. an imaginary number -- rational or continuous.
Complex numbers, of any type, are not objective and are not part of a quantum description, as said first by Erwin Schrödinger (1926) --
yet,
cryogenic behemoth quantum machines (see figure) consider a "complex qubit" -- two objective impossibilities. They are just poor physics and expensive analog experiments in these pioneering times.
Quantum computing is ... natural. Atoms do it all the time, and so does the human brain (based on +4 quantum properties of numbers).
Each point, in a quantum reality, is a point ... not continuous. So, reality is grainy, everywhere. Ontically.
To imagine a continuous point is to imagine a "mathematical paint" without atoms. Take a good microscope ... atoms appear!
The atoms, an objective reality, imply a graininess. This quantum description includes at least, necessarily (Einstein, 1917), three logical states -- with stimulated emission, absorption, and emission. Further states are possible, as in measured superradiance.
Mathematical complex numbers or mathematical real-numbers do not describe objective reality. They are continuous, without atoms. Poor math and poor physics.
It is easy to see that multiplication or division "infests" the real part with the imaginary part, and in calculating modulus -- e.g., in the polar representation as well as in the (x,y) rectangular representation. The Euler identity is a fiction, as it trigonometrically mixes types ... avoid it. The FFT will no longer have to use it, and FT=FFT.
The complex number system "infests" the real part with the imaginary part, even for Gaussian numbers, and this is well-known in third-degree polynomials.
Complex numbers, of any type, must be deprecated, they do not represent an objective sense. They should not "infest" quantum computing.
Quantum computing is better without complex numbers. Software makes R,C=Q --> B={0,1}.
REFERENCES
[1] DOI /2227-7390/11/1/68
[3] June 1982, Physical review A, Atomic, molecular, and optical physics 26:1(1).
Can numbers obey a physical law?
==============================
The situation is exactly the opposite: physical laws obey numbers. Otherwise, you would have to believe certain pseudo-scientific statements that somehow caught my eye, such as that a few hundred million years ago the number pi was exactly 3. Like, the Earth was closer to the Sun ... You are saying something similar here. (But not about Betelgeuse...)
Speaking of prime numbers. In the picture, the spider has 8 legs. If you do not notice 3 legs, then yes! - Exactly 5!
• asked a question related to Mathematics
Question
Physics
The physicist betting that space-time isn't quantum after all
Most experts think we have to tweak general relativity to fit with quantum theory. Physicist Jonathan Oppenheim isn't so sure, which is why he’s made a 5000:1 bet that gravity isn’t a quantum force
By Joshua Howgego
13 March 2023
[Illustration: Nabil Nezzar]
JONATHAN OPPENHEIM likes the occasional flutter, but the object of his interest is a little more rarefied than horse racing or the one-armed bandit. A quantum physicist at University College London, Oppenheim likes to make bets on the fundamental nature of reality – and his latest concerns space-time itself.
The two great theories of physics are fundamentally at odds. In one corner, you have general relativity, which says that gravity is the result of mass warping space-time, envisaged as a kind of stretchy sheet. In the other, there is quantum theory, which explains the subatomic world and holds that all matter and energy comes in tiny, discrete chunks. Put them together and you could describe much of reality. The only problem is that you can’t put them together: the grainy mathematics of quantum theory and the smooth description of space-time don’t mesh.
Most physicists reckon the solution is to “quantise” gravity, or to show how space-time comes in tiny quanta, like the three other forces of nature. In effect, that means tweaking general relativity so it fits into the quantum mould, a task that has occupied researchers for almost a century already. But Oppenheim wonders if this assumption might be mistaken, which is why he made a 5000:1 bet that space-time isn’t ultimately quantum.
................................................................................
A special spammer has again appeared after the SS post above, in this case again aiming to place his “last post in the thread” under the rather useful RG “– Science topic” options that are indicated below threads’ questions; examples here are the “Space Time”, “Quantum”, “QUANTA”, and “Mathematics – Science topic” options. He does this thoroughly in many threads, aiming to displace really scientific answers.
Since he has essentially no idea of what physics [and, though, other sciences] is, besides that there are some terribly scientific words in physics, his posts are senseless sets of such words – as that post is; and, at that, he recommended the replaced SS post, seemingly knowing that his recommendation of any post means that the post is some trash.
As to
“…Quantum mechanics is but a language, and like a language it contains contradictory propositions. (These propositions can be precise and "work", but that's not the point.) …”
- quantum mechanics is a theory just as classical mechanics equally is. And it indeed has a lot of really fundamental flaws – again, many of the same flaws that classical mechanics also has.
These flaws exist because, in mainstream philosophy and the sciences, including physics, all really fundamental phenomena/notions – first of all, in physics, “Matter” (and so everything in Matter, i.e. “particles”, “fields”, etc.), “Consciousness”, “Space”, “Time”, “Energy”, “Information” – are fundamentally completely transcendent/uncertain/irrational,
- so in every case, when mainstream physics addresses some really fundamental problem, the results quite obligatorily are nothing else than transcendent, fantastic mental constructions; in both cases:
- when some authors attempt to solve directly fundamental problems, and fantastic, fundamentally experimentally non-testable theories – say, “string” ones – appear in physics, and
- when a theory describes experimentally observed material objects/systems/events/effects – in this case, say, QED – the theory is fitted to experiments, but by really completely ad hoc – and really wrong – mathematical tricks; etc.
Real development of fundamental physics can be possible only provided that the fundamental phenomena/notions above are scientifically defined, which is possible, and is done, only in the framework of the 2007 Shevchenko-Tokarevsky “The Information as Absolute” conception; for the recent version of the basic paper see
- and practically for sure in many cases on the basis of the SS&VT informational physical model, which is based on that conception; the two main papers are
https://www.researchgate.net/publication/355361749_The_informational_physical_model_and_fundamental_problems_in_physics, where by now more than 30 fundamental physical problems are either solved or essentially clarified.
In the last link [mostly the section “Conclusion”], the fundamental flaws of both mechanics are also discussed.
However that
“….General relativity is a genuine theory. Both can't be unified until we have a proper quantum theory...”
- is fundamentally incorrect. In the GR its author addressed the fundamental phenomena/notions above, which were completely transcendent for him, and so postulated for these phenomena really completely transcendent properties that are fundamentally non-adequate to reality; etc.; for more, see the SS post of 5 days ago in https://www.researchgate.net/post/Do_you_think_that_general_relativity_needs_modifications_or_it_is_a_perfect_theory/138 ,
- here only note that Gravity really is, of course, fundamentally nothing else than some fundamental Nature force, as the Electric, Strong/Nuclear, and Weak Forces are; and with a well non-zero probability it acts as is shown in the SS&VT initial models of the Forces – see https://www.researchgate.net/publication/365437307_The_informational_model_-_Gravity_and_Electric_Forces, and
Cheers
• asked a question related to Mathematics
Question
Hi, my name is Debi. I'm a master's student in Mathematics Education at Yogyakarta State University, in my second month. Please give me advice on current trend topics in mathematics education, especially learning media for math and the psychology of learning math. Would you share how it is in your country or at your university? Thank you so much.
1. Logic in school mathematics: norm and reality.
2. Why do we need proofs in mathematics teaching?
3. Proof and understanding in mathematics.
4. Free and bound variables in mathematics and school mathematics.
5. After all, what does it mean to solve an equation?
6. What is inequality and what does it mean to solve inequality?
7. What is a definition and how should it be applied.
8. Teaching mathematics: proving VS storytelling.
9. The concept of teaching mathematics (according to A. Naziev).
10. Correction of errors in traditional formulations of Vieta's theorem and related theorems.
11. Moral education: in search of lost mathematics.
12. Name, denotation and sense in mathematics and teaching it.
13. Semantic reading in teaching mathematics.
14. Teaching mathematics on the base of axioms, definitions, and theorems.
15. Stereometric problems on combinations of figures.
16. Solving cross-section problems with assistance of LaTeX.
17. Solving problems on the representation of shadows using LaTeX.
18. Representation of combinations of spatial figures by means of LaTeX.
19. Evidence-based solution of problems on the transformation of graphs of functions.
20. Quantifiers in mathematics and mathematics teaching.
• asked a question related to Mathematics
Question
This is actually a trivial question and I'm just being mischievous.
It turns on the shades of meaning of both "idea" and "exist."
Mathematically, a concept exists whether anyone has happened upon it or not. (A meaningless attempt at a concept is not a concept).
When first thought about by an actual brain of any kind, a concept acquires its first glimmer of existence as in the real world.
Mathematically speaking, whether an 'idea' exists before it can be 'had', relates to the question of whether mathematical concepts exist a priori, leaving them to be 'discovered', or whether the mind must piece them together, leaving them to be 'invented'.
For those interested in the distinction 'invention' and 'discovery' in this context, Jacques Hadamard has published interesting views to this matter in his booklet "The Psychology of Invention in the Mathematical Field" (1954).
Thanks Karl Sipfle for this non-trivial question.
• asked a question related to Mathematics
Question
Physics is a science of representations, with mathematical aspects in them, foremost, and not of naked correlations and parameter analysis.
It also has competent conceptualizations and genius principles.
Even the innocent search for uniform motion is a representational scheme for motions under the theory of kinematics. (Representations are separate from reality but are an invaluable part of scientific inferring, predicting, explaining, etc.) E.g., heat is represented as a flow between subsystems. Representations change; e.g., Einstein found the curved-spacetime representation for gravity phenomena.
Physics is also the science of cosmology. It has no meaning if it bypasses the universe, i.e. the sum of subsystems. This discipline has problems because we cannot take ourselves out of it and study it, but physics has tools for this (QM) or theoretical approximations (a more cognitively open consideration of the concept of boundary conditions).
If you want to know about the big things that are the sum of how many things act, you need to know how those many things act. The following are just a few concepts that need changing, and my proofs of the changes:
The proofs that 4 constants are changed by relativistic velocities (PDF):
https://drive.google.com/file/d/1e1ExWG-VyTR8PAPxU5OnfSzd86uj59nh/view?usp=share_link
Prerequisite proofs that all Doppler shifts change time and distance (axial, gravitational, and transverse, not just the transverse):
https://drive.google.com/file/d/1vGRBH1AgUOCP8_zp7fKxBTMPg-YP_-uh/view?usp=share_link
Proof of a version of the Schrodinger equation for relativistic velocities:
https://drive.google.com/file/d/1kh2d4fYFOd8rbS6tgyUbTA5zDZW-aUNH/view?usp=share_link
I hope you can make use of the above. Samuel Lewis Reich (sLrch53@gmail.com)
• asked a question related to Mathematics
Question
Is it possible to mathematically calculate K-40 from K total determined by ICP MS in sediment samples?
See the Wikipedia article on potassium-40 for its natural isotopic abundance.
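If only the isotopic arithmetic is needed, the usual shortcut is to assume natural isotopic abundance (ICP-MS on total K cannot resolve isotopic anomalies at this level). A back-of-the-envelope sketch in Python; the constants are standard reference values and the function names are mine, and this assumes no isotopic fractionation in the sediment:

```python
import math

# Standard reference values (assumptions, not from the question).
ABUNDANCE_K40 = 1.17e-4            # atom fraction of K-40 in natural K
M_K = 39.0983                      # average molar mass of natural K (g/mol)
M_K40 = 39.9640                    # molar mass of K-40 (g/mol)
AVOGADRO = 6.02214076e23           # atoms per mole
HALF_LIFE_S = 1.248e9 * 3.1557e7   # K-40 half-life (~1.248e9 yr) in seconds

def k40_mass(total_k_g: float) -> float:
    """Grams of K-40 in a sample containing total_k_g grams of natural K."""
    return total_k_g * ABUNDANCE_K40 * M_K40 / M_K

def k40_activity_bq(total_k_g: float) -> float:
    """Activity in Bq of the K-40 contained in total_k_g grams of natural K."""
    n_atoms = total_k_g / M_K * AVOGADRO * ABUNDANCE_K40
    decay_const = math.log(2) / HALF_LIFE_S   # lambda = ln(2) / T_half
    return n_atoms * decay_const

# 1 g of natural potassium carries roughly 0.12 mg of K-40 and ~31.7 Bq.
print(k40_mass(1.0), k40_activity_bq(1.0))
```

So, yes: multiply the ICP-MS total-K mass by the natural K-40 abundance (as a mass fraction, about 0.0120%), and the specific activity follows from the decay constant.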
• asked a question related to Mathematics
Question
We assume that this statement is false, but it is one of the most common mathematical errors.
So a question arises: what is the importance of the LHS diagonal?
Important or not? As a question it is too general; no definite answer can be given.
• asked a question related to Mathematics
Question
I have two networks, and wish to get them to dynamically interact with one another, yet retain modularity.
Hello Faizan Rao, to dynamically join two separate database schemas while retaining modularity, you can use a mathematical layer such as a view or a stored procedure to perform the interaction. This method allows you to keep the database schemas separate while still enabling them to communicate and share data.
Here's a step-by-step guide on how to do this:
Identify the shared data: Determine the common data points between the two schemas that need to be combined or interact with each other. This could be a common key, attribute, or any other data that can be used to relate the two schemas.
Create a view or stored procedure: Depending on your database system (MySQL, PostgreSQL, SQL Server, etc.), you can create a view or a stored procedure that performs the necessary calculations or data manipulations. This will act as the mathematical layer between the two schemas. This layer will query data from both schemas and perform the required calculations, transformations, or aggregations.
For example, in SQL Server, you can create a view like this:
CREATE VIEW CombinedData AS
SELECT a.CommonKey, a.Value1, b.Value2 -- list columns explicitly: "a.*, b.*" would duplicate CommonKey, which a view does not allow (Value1/Value2 stand for whatever columns you need)
FROM Schema1.Table1 AS a
JOIN Schema2.Table2 AS b
ON a.CommonKey = b.CommonKey;
In this example, Schema1.Table1 and Schema2.Table2 represent tables from the two separate schemas, and CommonKey is the column used to relate the data.
Access the view or stored procedure: Whenever you need to access the combined data, you can query the view or execute the stored procedure to fetch the results. This ensures that the two schemas remain separate, but the data can be combined and accessed dynamically as needed.
Maintain modularity: By using a view or stored procedure, you can keep the two schemas separate and modular. When updates are needed, you can make changes to the individual schemas without affecting the other. The view or stored procedure can then be updated to accommodate the changes. A mathematical layer in the form of a view or a stored procedure can help you dynamically join two separate database schemas while retaining modularity. This method ensures that the schemas remain independent and can be maintained separately, while still allowing for the interaction and sharing of data between them.
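The pattern above can be demonstrated end-to-end in a self-contained miniature. This sketch uses Python's built-in sqlite3, where ATTACH plays the role of the second schema; the table and column names are illustrative, not taken from the question:

```python
import sqlite3

# Two "schemas": the main database and an attached one.
con = sqlite3.connect(":memory:")
con.execute("ATTACH DATABASE ':memory:' AS schema2")

# Each schema keeps its own table (modularity is preserved).
con.execute("CREATE TABLE main.table1 (common_key INTEGER, a_value TEXT)")
con.execute("CREATE TABLE schema2.table2 (common_key INTEGER, b_value TEXT)")
con.execute("INSERT INTO main.table1 VALUES (1, 'alpha')")
con.execute("INSERT INTO schema2.table2 VALUES (1, 'beta')")

# The view is the intermediate layer joining the two schemas.
# (A TEMP view is used because SQLite only lets temporary views
# reference objects across attached databases.)
con.execute("""
    CREATE TEMP VIEW combined_data AS
    SELECT t1.common_key, t1.a_value, t2.b_value
    FROM main.table1 AS t1
    JOIN schema2.table2 AS t2 ON t1.common_key = t2.common_key
""")

print(con.execute("SELECT * FROM combined_data").fetchall())  # [(1, 'alpha', 'beta')]
```

Either table can be changed independently; only the view definition needs updating when the shared key or exposed columns change.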
Regards,
• asked a question related to Mathematics
Question
Category theory is a branch of mathematics that deals with the abstract structure of mathematical concepts and their relationships. While category theory has been applied to various areas of physics, such as quantum mechanics and general relativity, it is currently not clear whether it could serve as the language of a metatheory unifying the description of the laws of physics.
There are several challenges to using category theory as the language of a metatheory for physics. One challenge is that category theory is a highly abstract and general framework, and it is not yet clear how to connect it to the specific details of physical systems and their behaviour. Another challenge is that category theory is still an active area of research, and there are many open questions and debates about how to apply it to different areas of mathematics and science.
Despite these challenges, there are some researchers who believe that category theory could play a role in developing a metatheory for physics. For example, some have proposed that category theory could be used to describe the relationships between different physical theories and to unify them into a single framework. Others have suggested that category theory could be used to study the relationship between space and time in a more unified and conceptual way.
I am very interested in your experiences, opinions and ideas.
I believe your sentiment is that Category Theory is so general that it might if applied to physics provide an insight into the overarching principles of the science. But Category Theory is not going to give you any insight that you do not already possess. It might provide a convenient notation for expressing that insight. Using Category Theory to navigate physics without a physical understanding would be like sailing a yacht without a keel.
• asked a question related to Mathematics
Question
Is it mathematically justified to place negative and positive numbers on the same plane?
Goes like this:
e^(2πi) = 1
e^(iπ) = -1
Consequently: -2π = π
and that's precisely what I was trying to say.
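For what it's worth, the two identities above can be checked numerically; a minimal sketch (the modular comparison at the end is my addition, not from the thread):

```python
import cmath
import math

# Euler's formula: e^(i*theta) = cos(theta) + i*sin(theta),
# so e^(2*pi*i) = 1 and e^(i*pi) = -1.
z1 = cmath.exp(2j * math.pi)
z2 = cmath.exp(1j * math.pi)

print(abs(z1 - 1))  # ~0 (floating-point rounding only)
print(abs(z2 + 1))  # ~0

# Equal values of e^(i*theta) only force the angles to agree modulo 2*pi,
# not as real numbers:
theta1, theta2 = -2 * math.pi, 0.0
same_mod_2pi = math.isclose((theta1 - theta2) % (2 * math.pi), 0.0, abs_tol=1e-9)
print(same_mod_2pi)  # True
```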
• asked a question related to Mathematics
Question
This question discusses the YES answer. We don't need the √-1.
The complex numbers, using rational numbers (i.e., the Gauss set G) or mathematical real-numbers (the set R), are artificial. Can they be avoided?
Math cannot be in one's head, as [1] explains.
To realize the YES answer, one must advance over current knowledge, and may sound strange. But, every path in a complex space must begin and end in a rational number -- anything that can be measured, or produced, must be a rational number. Complex numbers are not needed, physically, as a number. But, in algebra, they are useful.
The YES answer can improve the efficiency in using numbers in calculations, although it is less advantageous in algebra calculations, like in the well-known Gauss identity.
For example, in the FFT [2], there is no need to compute complex functions, or trigonometric functions.
This may lead to further improvement in computation time over the FFT, already providing orders of magnitude improvement in computation time over FT with mathematical real-numbers. Both the FT and the FFT are revealed to be equivalent -- see [2].
I detail this in [3] for comments. Maybe one can build a faster FFT (or, FFFT)?
[2] Preprint: FT = FFT
The form z=a+ib is called the rectangular coordinate form of a complex number, that humans have fancied to exist for more than 500 years.
We are showing that is an illusion, see [1].
Quantum mechanics does not, contrary to popular belief, include anything imaginary. All results and probabilities are rational numbers, as we used and published (see ResearchGate) since 1978, see [1].
Everything that is measured or can be constructed is then a rational number, a member of the set Q.
This connects in a 1:1 mapping (isomorphism) to the set Z. From there, one can take out negative numbers and 0, and through an easy isomorphism, connect to the set N and to the set B^n, where B={0,1}.
We reach the domain of digital computers in B={0,1}. That is all a digital computer needs to process -- the set B={0,1}, addition, and encoding, see [1].
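The chain of mappings described above (Z to N, then N to B^n) can be made explicit. A minimal sketch; the particular pairing function used here is one standard choice, not taken from the post:

```python
# Explicit bijections behind the chain Z -> N -> B^n:
# fold the integers onto the naturals, then encode each natural in binary.

def z_to_n(z: int) -> int:
    """Standard bijection Z -> N: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * z if z >= 0 else -2 * z - 1

def n_to_z(n: int) -> int:
    """Inverse bijection N -> Z."""
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

def n_to_bits(n: int) -> str:
    """Encode a natural number over B = {0, 1}."""
    return bin(n)[2:]

print([z_to_n(z) for z in (0, -1, 1, -2, 2)])  # [0, 1, 2, 3, 4]
print(all(n_to_z(z_to_n(z)) == z for z in range(-100, 100)))  # True
print(n_to_bits(z_to_n(-3)))  # '101', since z_to_n(-3) = 5
```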
Note that 0^n = 0 and 1^n = 1. There is no need to calculate trigonometric functions, analysis (calculus), or other functions. Mathematics can end in middle school. We can all follow computers!
REFERENCES
[1] Search online.
• asked a question related to Mathematics
Question
@Juan Weisz
The incentive here is that e^(2πi) = 1 (Euler's formula).
• asked a question related to Mathematics
Question
I noticed that in some very bad models of neural networks, the value of R² (coefficient of determination) can be negative. That is, the model is so bad that the mean of the data is better than the model.
In linear regression models, the multiple correlation coefficient (R) can be calculated as the square root of R². However, this is not possible for a neural-network model with a negative R². In that case, is R mathematically undefined?
I tried calculating the Pearson correlation between y and y_pred, but it is mathematically undefined (division by zero). I am attaching the values.
Obs.: The question is about artificial neural networks.
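Both effects can be reproduced in a few lines; a minimal numeric sketch with invented data (not the attached values):

```python
import math

def r_squared(y, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y) / len(y)
    ss_tot = sum((v - mean) ** 2 for v in y)
    ss_res = sum((v - p) ** 2 for v, p in zip(y, y_pred))
    return 1 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0]
bad_pred = [10.0, 10.0, 10.0, 10.0]  # constant prediction, far from the data

r2 = r_squared(y, bad_pred)
print(r2)  # negative: the mean of y predicts better than the model

# Pearson correlation needs a nonzero standard deviation on both sides;
# a constant prediction makes the denominator zero, so r is undefined.
mean_pred = sum(bad_pred) / len(bad_pred)
std_pred = math.sqrt(sum((p - mean_pred) ** 2 for p in bad_pred))
print(std_pred)  # 0.0 -> division by zero in Pearson's formula
```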
Raid, apologies; here's the attachment. David Booth
• asked a question related to Mathematics
Question
1 - Prof. Tegmark of MIT hypothesizes that the universe is not merely described by mathematics but IS mathematics.
2 - The Riemann hypothesis applies to the mathematical universe’s space-time, and says its infinite "nontrivial zeros" lie on the vertical line of the complex number plane (on the y-axis of Wick rotation).
3 - This implies infinity = zero: there is no distance in time or space, making superluminal travel and time travel feasible.
4 - Besides Möbius strips, topological propulsion uses holographic-universe theory to delete the 3rd dimension (and thus distance).
5 - Relationships between living organisms can be explained with scientifically applied mathematics instead of origin of species by biological evolution.
6 - Wick rotation - represented by a circle where the x- and y-axes intersect at its centre, and where real and imaginary numbers rotate counterclockwise between 4 quadrants - introduces the possibility of interaction of the x-axis' ordinary matter and energy with the y-axis' dark matter and dark energy.
An equivalent formulation of the Riemann hypothesis. See formula (3.22) in the
• asked a question related to Mathematics
Question
Theoretical and computational physics provide the vision and the mathematical and computational framework for understanding and extending the knowledge of particles, forces, space-time, and the universe. A thriving theory program is essential to support current experiments and to identify new directions for high energy physics. Theoretical physicists provide a great deal of assistance to the Energy, Intensity, and Cosmic Frontiers with the in-depth understanding of the underlying theory behind experiments and interpreting the outcomes in context of the theory. Advanced computing tools are necessary for designing, operating, and interpreting experiments and to perform sophisticated scientific simulations that enable discovery in the science drivers and the three experimental frontiers.
source: HEP Theoretical and Computationa... | U.S. DOE Office of Science (SC) (osti.gov)
Physics, mathematical, and computational sciences have contributed to the betterment of mankind and continue to push innovation and research today because they provide fundamental frameworks for understanding the natural world, developing new technologies, and solving real-world problems.
Physics, for example, provides a fundamental understanding of the laws of nature that govern the behavior of matter and energy, from the smallest particles to the largest structures in the universe. This understanding has led to the development of technologies such as lasers, semiconductors, and superconductors, which have revolutionized communication, computing, and energy production.
Mathematics provides the language and tools for describing the structure of the natural world and for solving problems across a wide range of fields, from engineering to economics. Mathematical models and simulations allow scientists and engineers to study complex systems and make predictions about their behavior, leading to new discoveries and innovations.
Computational science, which combines mathematics, computer science, and domain-specific knowledge, has become increasingly important in recent years due to the explosion of data and the growing complexity of problems in many fields. Computational tools and algorithms are used to simulate physical processes, analyze large data sets, and develop new materials and drugs.
Overall, physics, mathematical, and computational sciences continue to play a critical role in driving innovation and advancing knowledge in many fields, making them essential for the betterment of mankind.
• asked a question related to Mathematics
Question
Irrational numbers are uncomputable with probability one. In that numerical sense, they do not belong to nature: animals cannot calculate them, nor can humans or machines.
But algebra can deal with irrational numbers. Algebra deals with unknowns and indeterminates, exactly.
This would mean that a simple bee or fish can do algebra? No, this means, given the simple expression of their brains, that a higher entity is able to command them to do algebra. The same for humans and machines. We must be able also to do quantum computing, and beyond, also that way.
Thus, no one (animals, humans, extraterrestrials in the NASA search, and machines) is limited by their expressions, and all obey a higher entity, commanding through a network from the top down -- which entity we call God, and Jesus called Father.
This means that God holds all the dice. That also means that we can learn by mimicking nature. Even a wasp can teach us the medicinal properties of a passion fruit flower to lower aggression. Animals, no surprise, can self-medicate, knowing no biology or chemistry.
There is, then, no “personal” sense of algebra. It is just a combination of arithmetic operations. There is no “algebra in my sense” -- there is only one sense, the one mathematical sense that has made sense physically, for ages. I do not feel free to change it, and did not.
But we can reveal new facets of it. In that, we have already revealed several exact algebraic expressions for irrational numbers. Of course, the task is not even enumerable, but it is worth compiling, for the weary traveler. Any suggestions are welcome.
@Ed Gerck
Irrational numbers are uncomputable with probability one
====================================================
There was a link to your own thread given by you. This thread gives your erroneous statement from the very beginning, namely: "Irrational numbers are uncomputable with probability one".
Please agree, dear Professor Ed G., that any irrational number is calculated with 100% accuracy, with probability 1, to p-1 digits whenever you write down p digits. Thus, if you write the root of 2 with one digit before the point and three digits after it, sqrt(2) = 1.414..., you have written 4 digits, and 100% accuracy with probability 1 is guaranteed for 3 of them, i.e., 1.41 -- and so on, for any number of digits...
As for some kind of all-digits "full notation", as you would like to see it: such a representation of irrational numbers is not possible.
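This point is easy to check directly; a minimal sketch using Python's arbitrary-precision decimals (the precision and truncation lengths are chosen arbitrarily):

```python
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 30
approx = Decimal(2).sqrt()  # 30 significant digits of sqrt(2)
print(str(approx)[:5])      # '1.414'

# Every finite truncation is exactly a rational number...
truncated = Fraction(str(approx)[:5])
print(truncated)            # 1414/1000 in lowest terms: 707/500

# ...but its square is never exactly 2, for any finite number of digits.
print(truncated ** 2 == 2)  # False
```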
If you point out my mistake to me, I will be grateful.
Greetings,
SPK
• asked a question related to Mathematics
Question
What does (± 0.06) mean, and how can I calculate it mathematically?
Standard error = s / √n
where:
• s: sample standard deviation
• n: sample size
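Numerically, with invented sample values (any real data would do):

```python
import math
import statistics

# Standard error of the mean: SE = s / sqrt(n), where s is the sample
# standard deviation and n the sample size. Values below are made up.
sample = [5.1, 4.9, 5.3, 5.0, 4.7, 5.2]

s = statistics.stdev(sample)  # sample (n-1) standard deviation
n = len(sample)
se = s / math.sqrt(n)

print(round(se, 3))  # 0.088
```

A result is then typically reported as mean ± SE, here about 5.03 ± 0.09; the ± 0.06 in your table is the same kind of quantity for that data set.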
• asked a question related to Mathematics
Question
All attempts at a proof of the Riemann Hypothesis on the zeta function must fail.
There are no zeros of the so-called function of a complex argument.
A function of two different quantities, z = f(x, y), only has values for the third quantity z if the values of the variables x and y are combined by an algebraic rule.
So it should be for the complex argument Riemann used.
But there is no such combination. So Riemann only did a 'scaling', in which both parts of the complex number stay separate.
The second part of the refutation consists in showing a wrong expert opinion in mathematics, concerning the false use of 'imaginary' and of 'prefixed multiplication'.