Science topics: Mathematics
Mathematics, Pure and Applied Math
Questions related to Mathematics
Dear colleagues and enthusiasts of beautiful geometric problems, I invite you to solve another elegant problem:
Reconstruct triangle ABC from points A, D, E, and H1.
I will be glad to see your solutions and share my own!

If an ordinary person with no background can learn advanced mathematics in one or two years, then I think this would greatly benefit the entire mathematics and scientific research community. (If you have methods or opinions, please share them. If there is no shortcut, I will start learning advanced mathematics from scratch.)
Attaching mathematical expressions here is problematic. I am attaching the link to the question here.
Does someone have any idea for proving or rejecting the Riemann Hypothesis?
Mathematical derivation of the Euler product:
ζ(s) = Σ 1/n^s = 1/1^s + 1/2^s + 1/3^s + ... -----[1]
(The series converges for s > 1 and diverges at s = 1.)
(1/2^s) ζ(s) = 1/2^s + 1/4^s + 1/6^s + ... + 1/(2n)^s + ... -----[2]
[1] - [2]
=> (1 - 1/2^s) ζ(s) = 1 + 1/3^s + 1/5^s + 1/7^s + ... -----[3]
(1/3^s)(1 - 1/2^s) ζ(s) = 1/3^s + 1/9^s + 1/15^s + 1/21^s + ... -----[4]
[3] - [4]
=> (1 - 1/3^s)(1 - 1/2^s) ζ(s) = 1 + 1/5^s + 1/7^s + 1/11^s + 1/13^s + ...
... ...
(1 - 1/5^s)(1 - 1/3^s)(1 - 1/2^s) ζ(s) = 1 + 1/7^s + 1/11^s + 1/13^s + ...
Continuing the sieve over every prime p:
∏_p (1 - 1/p^s) ζ(s) = 1
=> ζ(s) = ∏_p (1 - 1/p^s)^(-1) = Σ 1/n^s
At s = 1 the series is the harmonic series, which diverges, so the product must contain infinitely many factors: there are infinitely many primes.
The Euler product and the series are meaningful only for s > 1; both diverge as s → 1. Riemann used analytic continuation to give ζ(s) meaning on the whole complex plane (except for the simple pole at s = 1):
ζ(s) = [Γ(1-s)/(2𝝅i)] ∫_C [(-z)^s / (e^z - 1)] dz/z (Hankel contour C) = the Riemann zeta function,
where s is any complex number; for Re(s) > 1 it agrees with Σ 1/n^s (n ∈ N), and Γ(s) = (s-1)! for positive integers s.
Functional equation:
ζ(s) = 2^s 𝝅^(s-1) sin(𝝅s/2) Γ(1-s) ζ(1-s)
When s = -2, -4, -6, ..., the factor sin(𝝅s/2) vanishes, so
ζ(s) = 0 (the trivial zeros).
Riemann Hypothesis (1859): every nontrivial zero of ζ(s) has real part 1/2.
Original paper: https://www.emis.de/classics/Riemann/Zeta.pdf
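A quick numerical sanity check of the identity above (my addition, not part of the original derivation): at s = 2 the partial Dirichlet series and the partial Euler product should both approach ζ(2) = 𝝅²/6.

```python
import math

def zeta_partial(s, n_terms=200000):
    """Partial sum of the Dirichlet series sum_{n>=1} 1/n^s."""
    return sum(1.0 / n**s for n in range(1, n_terms + 1))

def primes_up_to(limit):
    """Primes <= limit via a simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def euler_product(s, prime_limit=10000):
    """Partial Euler product prod_p (1 - p^(-s))^(-1) over p <= prime_limit."""
    prod = 1.0
    for p in primes_up_to(prime_limit):
        prod *= 1.0 / (1.0 - p**(-s))
    return prod

print(zeta_partial(2.0))    # both approach pi^2/6 ~ 1.6449
print(euler_product(2.0))
print(math.pi**2 / 6)
```

Both truncations agree with 𝝅²/6 to several decimal places, which is exactly what the sieving argument predicts.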

Dear Researchers,
Subject: Call for Systematic Literature Review Papers in Computer Science Fields - Special Issue in the Iraqi Journal for Computer Science and Mathematics
I hope this letter finds you in good health and high spirits. We are pleased to announce a unique opportunity for researchers in the field of computer science to contribute to our upcoming special issue focused on systematic review papers. As a Scopus-indexed journal with a remarkable CiteScore of 2.9 and a CiteScore Tracker of 3.5, the Iraqi Journal for Computer Science and Mathematics is dedicated to advancing the knowledge and understanding of computer science.
Special Issue Details:
- Title: Special Issue on Systematic Literature Review Papers in Computer Science Fields
- Journal: Iraqi Journal for Computer Science and Mathematics
- CiteScore Tracker: 3.5 (As per the latest available data)
- CiteScore: 2.9 (As per the latest available data)
- Submission Deadline: December 31, 2023
- Publication Fee: None (This special issue is free of charge)
We invite you to contribute your valuable insights and research findings by submitting your systematic review papers to this special issue. Systematic reviews play a crucial role in synthesizing existing research, identifying trends, and guiding future research directions. This special issue aims to gather a diverse collection of high-quality systematic review papers across various computer science disciplines.
Submission Guidelines:
Please visit our journal's submission portal at https://journal.esj.edu.iq/index.php/IJCM/submissions
to submit your paper. Make sure to select the special issue "Systematic Literature Review Papers in Computer Science Fields" during the submission process.
We encourage you to review the author guidelines and formatting requirements available on the journal's website to ensure your submission adheres to our standards.
Should you have any inquiries or need further assistance, please do not hesitate to contact our editorial team at mohammed.khaleel@aliraqia.edu.iq
Your contribution to this special issue will undoubtedly enrich the field of computer science and contribute to our mission of fostering academic excellence. We look forward to receiving your submissions and collaborating towards the advancement of knowledge.
Warm regards,
Dr. Mohammad Aljanabi
Editor in Chief
Iraqi Journal for Computer Science and Mathematics
ChatGPT reportedly scored 155 on an IQ test and has sufficient background to review mathematical proofs, verify scientific formulas, and check for traces of plagiarism in real time. Yet the scientific community argues that confidentiality concerns prevent the use of AI as a recognized peer reviewer. What do you think? Should writers and journals recognize AI as a valid peer reviewer?
I am working on the images of some critical curves under complex-valued harmonic polynomials. The following picture was produced, and I could not identify the well-known name for it in mathematics. Can I get any suggestions on this, please?

Hi,
Does anyone know a good way to mathematically define/identify the onset of a plateau for a curve y = f(x) in a 2D plane?
A bit more background: I have a set of curves from which I'd like to extract the x values where the "plateau" starts, by applying a consistent definition of plateau onset.
Thanks,
Yifan
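One consistent (if heuristic) definition, offered as an assumption on my part rather than the only possible choice: normalize the slope by the curve's overall scale, then declare plateau onset at the last point after which the dimensionless slope stays below a tolerance. A sketch:

```python
import numpy as np

def plateau_onset(x, y, tol=0.02):
    """Return the first x after which the dimensionless slope |dy/dx|
    stays below `tol` for the rest of the curve.

    The slope is normalized by the curve's overall scale, so `tol` is
    dimensionless and the same definition applies across curves.
    Returns None if the curve never settles below `tol`.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    scale = (y.max() - y.min()) / (x.max() - x.min())
    slope = np.abs(np.gradient(y, x)) / scale       # dimensionless slope
    above_idx = np.nonzero(slope >= tol)[0]
    if len(above_idx) == 0:
        return x[0]                                 # flat everywhere
    start = above_idx[-1] + 1                       # first index past the last steep point
    return x[start] if start < len(x) else None

# Example: a saturating exponential plateaus as x grows
x = np.linspace(0, 10, 500)
y = 1 - np.exp(-x)
print(plateau_onset(x, y, tol=0.02))                # ~6.2 for this curve
```

Because the tolerance is dimensionless, applying the same `tol` to every curve in the set gives the consistent definition asked for; the usual caveat is to smooth noisy data before differentiating.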
Recently I discussed this topic with a tautologist researcher, a follower of Quine. Denying that deductive logic can generate new knowledge implies that deductive results in mathematics do not really increase our knowledge.
The tautological nature of deduction seems to lead to this conclusion. In my opinion, some sort of logical omniscience is involved in that position.
So the questions would be:
- Is the set of theorems that follow logically from a set A of axioms, "implicit" knowledge? if so, what would be the proper difference between "implicit" and "explicit" knowledge?
- If we embrace the idea that no new knowledge comes from deduction, what is the precise meaning of "new" in this context?
- How do you avoid the problem of logic omniscience?
Thanks beforehand for your insights.
Fermat's last theorem was finally solved by Wiles using mathematical tools that were wholly unavailable to Fermat.
Do you believe
A) That we have actually not solved Fermat's theorem the way it was supposed to be solved, and that we must still look for Fermat's original solution, still undiscovered,
or
B) That Fermat actually made a mistake, and that his 'wonderful' proof -which he did not have the necessary space to fully set forth - was in fact mistaken or flawed, and that we were obsessed for centuries with his last "theorem" when in fact he himself had not really proved it at all?
In the 'Collection of Geometric Problems' from 1966, there is a problem in which the author made a mistake.
Try to find the author's error!
In the picture, you can see the conditions of this mathematical problem without changes, with an error.

The experiment conducted by Bose at the Royal Society of London in 1901 demonstrated that plants have feelings like humans. Placing a plant in a vessel containing a poisonous solution, he showed the rapid movement of the plant, which finally died down. His finding was praised, and the concept of plant life became established. If we scold a plant it doesn't respond, but an AI bot does. How, then, can we disprove the life of a chatbot?
Are its faces polytopes?
Is there any information in the literature on the geometry and topology of saddle polyhedra?
Can we use them to construct structural triply periodic minimal surfaces?
An attempt to answer some of these questions in:

Article topic: Some Algebraic Inequalities
I have been collecting some algebraic inequalities; the collection will soon be completed and published in the Romanian Mathematical Magazine.
For computer science, is mathematics more of a tool or a language?
The fundamental theorem of calculus is the backbone of natural sciences, thus, given the occasional thin line between the natural and social, how common is the fundamental theorem of calculus in social sciences?
Examples I found:
Ohnemus, Alexander. "Proving the Fundamental Theorem of Calculus through Critical Race Theory." ResearchGate.net. 1 July 2023. www.researchgate.net/publication/372338504_Proving_the_Fundamental_Theorem_of_Calculus_through_Critical_Race_Theory. Accessed 9 Aug. 2023.
Ohnemus, Alexander. "Correlations in Game Theory, Category Theory, Linking Calculus with Statistics and Forms (Alexander Ohnemus' Contributions to Mathematics Book 9)." amazon.com. 12 Dec. 2022. www.amazon.com/gp/aw/d/B0BPX1CSHS?ref_=dbs_m_mng_wam_calw_tkin_8&storeType=ebooks. Accessed 11 July 2023.
Ohnemus, Alexander. "Linguistic mapping of critical race theory (the evolution of languages and oppression. How Germanic languages came to dominate the world) (Alexander Ohnemus' Contributions to Mathematics Book 20)." amazon.com. 3 Jan. 2023. www.amazon.com/Linguistic-evolution-oppression-Contributions-Mathematics-ebook/dp/B0BRP1KYLR/ref=mp_s_a_1_13?qid=1688598986&refinements=p_27%3AAlexander+Ohnemus&s=digital-text&sr=1-13. Accessed 5 July 2023.
Ohnemus, Alexander. "Fundamental Theorem of Calculus proved by Wagner's Law (Alexander Ohnemus' Contributions to Mathematics Book 8)." amazon.com. 11 Dec. 2022. www.amazon.com/gp/aw/d/B0BPS2ZMXC?ref_=dbs_m_mng_wam_calw_tkin_7&storeType=ebooks. Accessed 25 June 2023.
Mathematically, it is posited that the cosmic or local black-hole singularity must someday attain infinite density and zero size. But this is unimaginable. If infinite-density stuff could exist, it should already have existed.
Hence, in my opinion, such mathematical necessities are to be taken as the limiting cases of physics. IS THIS NOT THE STARTING POINT FOR DETERMINING WHERE MATHEMATICS AND PHYSICAL SCIENCE MUST PART WAYS?
I believe that it is common knowledge that mathematics and its applications cannot directly prove Causality. What are the bases of the problem of incompatibility of physical causality with mathematics and its applications in the sciences and in philosophy?
The main but very general explanation could be that mathematics and mathematical explanations are not directly about the world, but are applicable to the world to a great extent.
Hence, mathematical explanations can at the most only show the general ways of movement of the processes and not demonstrate whether the ways of the cosmos are by causation, what the internal constitution of every part of it is, etc. Even when some very minute physical process is mathematized, the results are general, and not specific of the details of the internal constitution of that process.
No science and philosophy can start without admitting that the cosmos exists. If it exists, it is not nothing, not vacuum. Non-vacuous existence means that the existents are non-vacuously extended. This means that they have parts. Every part has parts too, ad libitum, because each part is extended and non-infinitesimal. Hence, each part is relatively discrete, not mathematically discrete.
None of the parts of any physical existent is an infinitesimal. They can be near-infinitesimal. This character of existents is Extension, a Category directly implied by the To Be of Reality-in-total.
Similarly, any extended being's parts -- however near-infinitesimal -- are active, moving. This implies that every part has some (finite) impact on some others, not on infinitely many others. This character of existents is Change.
No other implication of To Be is so primary as these two (Extension-Change) and directly derivable from To Be. Hence, they are exhaustive of To Be.
Existence in Extension-Change is what we call Causality. If anything is existent, it is causal – hence Universal Causality is the trans-scientific and physical-ontological Law of all existents.
By the very concept of finite Extension-Change-wise existence, it becomes clear that no finite space-time is absolutely dense with existents. Hence, existents cannot be mathematically continuous. Since there is continuous (but finite and not discrete) change (transfer of impact), no existent can be mathematically absolutely continuous or discrete in its parts or in connection with others.
Can logic show the necessity of all existents as being causal? We have already discussed how, ontologically, the very concept of To Be implies Extension-Change and thus also Universal Causality.
WHAT ABOUT THE ABILITY OR NOT OF LOGIC TO CONCLUDE TO UNIVERSAL CAUSALITY?
In my argument above and elsewhere showing Extension-Change as the very exhaustive meaning of To Be, I have used mostly only the first principles of ordinary logic, namely, Identity, Non-contradiction, and Excluded Middle, and then argued that Extension-Change-wise existence is nothing but Universal Causality, if everything existing is non-vacuous in existence.
For example, does everything exist or not? If yes, let us call it non-vacuous existence. Hence, Extension as the first major implication of To Be. Non-vacuous means extended, because if not extended, the existent is vacuous. If extended, everything has parts.
The point of addition now has been Change, which makes the description physical. It is, so to say, from experience. Thereafter I move to the meaning of Change basically as motion or impact.
Naturally, everything in Extension must effect impacts. Everything has further parts. Hence, by implication from Change, everything causes changes by impacts. Thus, we conclude that Extension-Change-wise existence is Universal Causality. It is thus natural to claim that this is a pre-scientific Law of Existence.
In such foundational questions like To Be and its implications, we need to use the first principles of logic, because these are the foundational notions of all science and no other derivative logical procedure comes in as handy. In short, logic with its fundamental principles can help derive Universal Causality. Thus, Causality is more primary to experience than the primitive notions of mathematics.
Extension-Change, and Universal Causality derived from their amalgamation, are the most fundamental Metaphysical, Physical-ontological Categories. Since these are the direct and exhaustive implications of To Be, all philosophy and science are based on them.
Bibliography
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
Why are numbers and shapes so exact? ‘One’, ‘two’, ‘point’, ‘line’, etc. are all exact. But irrational numbers are not so. The operations on these notions are also intended to be exact. If notions like ‘one’, ‘two’, ‘point’, ‘line’, etc. are defined to be so exact, then it is not by virtue of the exactness of these substantive notions, but instead, due to their being defined so, that they are exact, and mathematics is exact.
But on the other side, due to their being adjectival: ‘being a unity’, ‘being two unities’, ‘being a non-extended shape’, etc., their application-objects are all processes that can obtain these adjectives only in groups. These are pure adjectives, not properties which are composed of many adjectives.
A quality cannot be exact, but may be defined to be exact. It is in terms of the exactness attributed to these notions by definition that the adjectives ‘one’, ‘two’, ‘point’, ‘line’, etc. are exact. This is why the impossibility of fixing these (and other) substantive notions as exact misses our attention.
If in fact these quantitative qualities are inexact due to their pertaining to groups of processual things, then there is justification for the inexactness of irrational numbers, transcendental numbers, etc. too. If numbers and shapes are in fact inexact, then not only irrational and other inexact numbers but all mathematical structures should remain inexact except for their having been defined as exact.
Thus, mathematical structures, in all their detail, are a species of qualities, namely, quantitative qualities. Mathematics is exact only because its fundamental bricks are defined to be so. Hence, mathematics is an as-if exact science, as-if real science. Caution is advised while using it in the sciences as if mathematics were absolutely applicable, as if it were exact.
Bibliography
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
THE FATE OF “SOURCE-INDEPENDENCE” IN ELECTROMAGNETISM, GRAVITATION, AND MONOPOLES
Raphael Neelamkavil, Ph.D., Dr. phil.
With the introductory claim that I make here suggestions that seem rationally acceptable in physics and the philosophy of physics, I attempt here to connect reasons beyond the concepts of magnetic monopoles, electromagnetic propagation, and gravitation.
A magnetic or other monopole is conceptually built to be such only insofar as the basic consideration with respect to it is that of the high speed and the direction of movement of propagation of the so-called monopole. Let me attempt to substantiate this claim accommodating also the theories in which the so-called magnetic monopole’s velocity could be sub-luminal.
If its velocity is sub-luminal, its source-dependence may be demonstrated, without difficulty, directly from the fact that the velocity of the gross source affects the velocity of the sub-luminal material propagations from it. This is clear from the fact that some causal change in the gross source is what has initiated the emission of the sub-luminal matter propagation, and hence the emission is affected by the velocity of the source’s part which has initiated the emission.
But the same is the case also with energy emissions and the subsequent propagation of luminal-velocity wavicles, because (1) some change in exactly one physical sub-state of the gross source (i.e., exactly the sub-state part of the gross source in which the emission takes place) has initiated the emission of the energy wavicle, (2) the change within the sub-state part in the gross source must surely have been affected also by the velocity of the gross source and the specific velocity of the sub-state part, and (3) there will surely be involved in the sub-state part at least some external agitations, however minute, which are not taken into consideration, not possible to consider, and are pragmatically not necessary to be taken into consideration.
Some might claim (1) that even electromagnetic and gravitational propagations are just mathematical waves without corporeality (because they are mathematically considered as absolute, infinitesimally thin waves and/or infinitesimal particles) or (2) that they are mere existent monopole objects conducted in luminal velocity but without an opposite pole and with nothing specifically existent between the two poles. How can an object have only a single part, which they term mathematically as the only pole?
The mathematical necessity to name it a monopole shows that the level of velocity of the wavicle is such that (1) its conventionally accepted criterial nature to measure all other motions makes it only conceptually insuperable and hence comparable in theoretical effects to the infinity-/zero-limit of the amount of matter, energy, etc. in the universe, and that (2) this should help terming the wavicle (a) as infinitesimally elongated or concentrated and hence as a physically non-existent wave-shaped or particle-shaped carrier of energy or (b) as an existent monopole with nothing except the one mathematically described pole in existence.
If a wavicle or a monopole is existent, it should have parts in all three spatial directions, however great and seemingly insuperable its velocity may be when mathematically tested in terms of its own velocity as posited by STR and GTR and later accepted by all the physical sciences. If anyone prefers to dismiss the above arguments as nonsensical common sense, I shall accept it with a smile. In any case, I would continue to insist that physicists want to describe only existent objects and processes, not non-existent stuff.
The part A at the initial moment of issue of the wavicle represents the phase of emission of the energy wavicle, and it surely has an effect on the source, because at least a quantum of energy is lost from the source and hence, as a result of the emission of the quantum, (1) certain changes have taken place in the source and (2) certain changes have taken place also in the emitted quantum. This fact is also the foundation of the Uncertainty Principle of Heisenberg. How then can the energy propagation be source-independent?
Source-independence with respect to the sub-luminal level of velocity of the source is defined with respect to the speed of energy propagation merely in a conventional manner. And then how can we demand that, since our definition of sub-luminal motions is with respect to our observation with respect to the luminal speed, all material objects should move sub-luminally?
This is the conventionally chosen effect that allegedly frees the wavicle from the effect of the velocity of the source. If physics must not respect this convention as a necessary postulate in STR and GTR and hence also in QM, energy emission must necessarily be source-dependent, because at least a quantum of energy is lost from the source and hence (1) certain changes have taken place in the source, and (2) certain changes have taken place also in the emitted quantum.
(I invite critical evaluations from earnest scientists and thinkers.)
Most master's programs focus on a general review of quantum mechanics and classical mechanics, assessing students' skills in classical yet generic calculational and interpretive capabilities.
The English MScs, on the other hand, provide an introduction to the physical principles and mathematical techniques of current research in:
general relativity
quantum gravity
quantum field theory
quantum information
cosmology and the early universe
There is also a particular focus on topics reflecting research strengths.
Graduates are better equipped to contribute to research and to produce impressive PhD dissertations.
Of course, the instructors who teach master's courses work in classical and quantum gravity, geometry, and relativity (to take the theoretical-physics sub-domain) at all universities, but the emphasis on the mathematical techniques and principles of current research is found only in the English universities' master's offerings.
Hello everyone. When I checked the user-defined cell (CFX language), I faced the problem of writing the code in Fortran. I am not accustomed to Fortran or to integrating Fortran into CFX. If you could explain this, I would appreciate it.
Warmest Regards,
-Alper
The choice of coordinate systems is a mathematical tool used to describe physical events. Local or universal spatial events occur in multiple coordinate systems of space and time or spacetime as we know it under classical, relativistic and cosmological physics.
Do the fundamental laws of physics remain consistent across different coordinate systems?
I have a deep neural network where I want to include a layer with one input and two outputs. For example, I want to construct an intermediate layer whose input is connected to Layer-1, with one output connected to Layer-2 and the other connected to Layer-3. Moreover, the intermediate layer should just pass the data through unchanged, without performing any mathematical operation on it. I have seen additionLayer in MATLAB, but it has only one output, and its number of outputs is read-only.
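For what it's worth, the concept itself involves no computation: a one-input/two-output pass-through layer just hands the same tensor to two downstream branches. A minimal NumPy illustration of the idea (this is generic code, not MATLAB Deep Learning Toolbox code; the layer sizes and weights are made up):

```python
import numpy as np

def passthrough(x):
    """One input, two outputs; no mathematical operation on the data."""
    return x, x

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))      # pretend this is the output of Layer-1

out_a, out_b = passthrough(x)        # fan out to Layer-2 and Layer-3

# Each downstream branch then applies its own (hypothetical) weights:
w2 = rng.standard_normal((8, 3))     # Layer-2 weights
w3 = rng.standard_normal((8, 5))     # Layer-3 weights
layer2_out = out_a @ w2
layer3_out = out_b @ w3

print(np.array_equal(out_a, out_b))  # True: the layer changed nothing
```

Whether MATLAB requires an explicit custom layer class for such a fan-out, or allows connecting one layer's output to several destinations directly, is worth checking in the Deep Learning Toolbox documentation.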
"Mathematics is a logical system formulating the relationships of variable(s) with other variable(s) quantitatively and/or qualitatively, as the language of science." (Sinan Ibaguner)
I tried to devise my best description as briefly and clearly as possible!
For physics, is mathematics more of a tool or a language?
"Mathematics is the artistic scientific language of logical systems that formulate the relationships of variable(s) with other variable(s) in a quantitative and/or qualitative manner."
My short and clear definition of mathematics! What could be better!?
Hello,
I am looking for mathematical formulas that calculate the rigid body movement of an element based on the nodal displacements. Can anyone give a brief explanation and recommend some materials to read? Thanks a lot.
Best,
Chen
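One standard approach, offered as a sketch (the Kabsch/least-squares fit, not necessarily the formulation your FE code uses): given the undeformed nodal coordinates X and the displaced coordinates Y = X + u, the best-fit rigid-body rotation R and translation t come from an SVD of the cross-covariance matrix of the centered coordinates.

```python
import numpy as np

def rigid_body_motion(X, Y):
    """Best-fit rigid-body rotation R and translation t mapping nodal
    coordinates X (n x 3) to displaced coordinates Y (n x 3), in the
    least-squares sense (Kabsch algorithm): Y ~= X @ R.T + t.
    """
    cX, cY = X.mean(axis=0), Y.mean(axis=0)
    H = (X - cX).T @ (Y - cY)               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cY - R @ cX
    return R, t

# Check: recover a known rotation + translation from 4 nodes
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
X = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
Y = X @ R_true.T + t_true                   # purely rigid displacement
R, t = rigid_body_motion(X, Y)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

For a genuinely rigid motion the fit is exact; for a deforming element, the residual Y - (X @ R.T + t) is the non-rigid (straining) part of the nodal displacements. Standard reading on this decomposition includes texts on polar decomposition of the deformation gradient in continuum mechanics.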
I am using SPSS to perform binary logistic regression. One of the parameters generated is the prediction probability. Is there a simple mathematical formula that could be used to calculate it manually? e.g. based on the B values generated for each variable in model?
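Yes: for binary logistic regression the predicted probability follows directly from the B coefficients, p = 1 / (1 + exp(-(B0 + B1·x1 + ... + Bk·xk))), where B0 is the constant term. A minimal sketch (the coefficient and predictor values below are made up for illustration):

```python
import math

def predicted_probability(b0, bs, xs):
    """Logistic-regression probability from the model's B coefficients:
    p = 1 / (1 + exp(-(B0 + B1*x1 + ... + Bk*xk)))
    """
    z = b0 + sum(b * x for b, x in zip(bs, xs))   # linear predictor (logit)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical model: constant -1.2, two predictors with B = 0.8 and -0.5
p = predicted_probability(b0=-1.2, bs=[0.8, -0.5], xs=[2.0, 1.0])
print(round(p, 4))   # 0.475
```

Note that z itself is the log-odds, so exp(z) reproduces the odds and the per-coefficient exp(B) values match the Exp(B) column SPSS reports.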
Mathematical Generalities: ‘Number’ may be termed as a general term, but real numbers, a sub-set of numbers, is sub-general. Clearly, it is a quality: “having one member, having two members, etc.”; and here one, two, etc., when taken as nominatives, lose their significance, and are based primarily only on the adjectival use. Hence the justification for the adjectival (qualitative) primacy of numbers as universals. While defining one kind of ‘general’ another sort of ‘general’ may naturally be involved in the definition, insofar as they pertain to an existent process and not when otherwise.
Why are numbers and shapes so exact? ‘One’, ‘two’, ‘point’, ‘line’, etc. are all exact. The operations on these notions are also intended to be exact. But irrational numbers are not so exact in measurement. If notions like ‘one’, ‘two’, ‘point’, ‘line’, etc. are defined to be so exact, then it is not by virtue of the exactness of these substantive notions, but instead, due to their being defined as exact. Their adjectival natures: ‘being a unity’, ‘being two unities’, ‘being a non-extended shape’, etc., are not so exact.
A quality cannot be exact, but may be defined to be exact. It is in terms of the exactness attributed to these notions by definition that the adjectives 'one', 'two', 'point', 'line', etc. are exact. This is why the impossibility of fixing these (and other) substantive notions as exact misses our attention. If in fact these are inexact, then there is justification for the inexactness of irrational, transcendental, and other numbers too.
If numbers and shapes are in fact inexact, then not only irrational numbers, transcendental numbers, etc., but all exact numbers and mathematical structures should remain inexact unless they have been defined as exact. And what if, behind the exact definitions of exact numbers, there are no exact universals, i.e., quantitative qualities? If the formation of numbers is by reference to experience (i.e., not from the absolute vacuum of non-experience), their formation is with respect to the quantitatively qualitative and thus inexact ontological universals of oneness, two-ness, point, line, etc.
Thus, mathematical structures, in all their detail, are a species of qualities, namely, quantitative qualities, defined to be exact and not naturally exact. Quantitative qualities are ontological universals, with their own connotative and denotative versions.
Natural numbers, therefore, are the origin of primitive mathematical experience, although complex numbers may be more general than all others in a purely mathematical manner of definition.
Bibliography
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
The etymology of "paradox" can be traced back at least to Plato's Parmenides [1]. Paradox comes from para ("contrary to") and doxa ("opinion"). The word appeared in Latin as "paradoxum", meaning "contrary to expectation" or "incredible". We propose, in this discussion thread, to debate philosophical or scientific paradoxes: their geneses, formulations, solutions, or proposed solutions... All contributions on paradoxes, including paradoxical ones, are welcome.
The limit of logic and mathematics is that we cannot describe a question without a symbol system, but a symbol system is just an abstraction of the real world, not the real world itself. So there is a distance between the abstracted symbol system and the real world, and therefore there are truths we cannot reach through a symbol system, which an A-HA moment may reach. But when we think, we always use a symbol system, like words or mathematics, with a priori logic. So I wonder: could AI have an A-HA moment?
If someone can help me understand Helicity in the context of the High Harmonic Generation, it will be helpful. Due to mathematical notations, the exact question can be found "https://physics.stackexchange.com/questions/778274/what-is-helicity-in-high-harmonic-generation".
In what ways may a STEM facility develop these skills?
1. On the “Field” concept of objective reality:
Einstein, in an August 10, 1954 letter to his friend Besso: "I consider it quite possible that physics cannot be based on the field concept, i.e., continuous structures. In that case, nothing remains of my entire castle in the air, gravitation theory included, (and of) the rest of modern physics." A. Pais, "Subtle is the Lord...: The Science and the Life of Albert Einstein", Oxford University Press (1982), p. 467.
2. On “Black Hole”:
"The essential result of this investigation is a clear understanding as to why the "Schwarzschild singularities" do not exist in physical reality. Although the theory given here treats only clusters whose particles move along circular paths it does not seem to be subject to reasonable doubt that more general cases will have analogous results. The "Schwarzschild singularity" does not appear for the reason that matter cannot be concentrated arbitrarily. And this is due to the fact that otherwise the constituting particles would reach the velocity of light.
This investigation arose out of discussions the author conducted with Professor H. P. Robertson and with Drs. V. Bargmann and P. Bergmann on the mathematical and physical significance of the Schwarzschild singularity. The problem quite naturally leads to the question, answered by this paper in the negative, as to whether physical models are capable of exhibiting such a singularity.", A. Einstein, The Annals of Mathematics, Second Series, Vol. 40, No. 4 (Oct., 1939), pp. 922-936
3. On the Quantum Phenomena:
“Many physicists maintain - and there are weighty arguments in their favour – that in the face of these facts (quantum mechanical), not merely the differential law, but the law of causation itself - hitherto the ultimate basic postulate of all natural science – has collapsed”. A. Einstein, “Essays in Science”, p. 38-39 (1934)
4. On Gravitational Wave:
Einstein dismissed the idea of gravitational waves until his death:
"Together with a young collaborator, I arrived at the interesting result that gravitational waves do not exist, though they had been assumed a certainty to the first approximation," he wrote in a letter to his friend Max Born. Einstein's paper titled "Do gravitational waves exist?", submitted to the Physical Review, was rejected.
Arthur Eddington, who brought an obscure Einstein to world fame and considered himself the second person (other than Einstein) to understand General Relativity (GR), dismissed the idea of gravitational waves in the following way: "They are not objective, and (like absolute velocity) are not detectable by any conceivable experiment. They are merely sinuosities in the co-ordinate-system, and the only speed of propagation relevant to them is 'the speed of thought'".
A.S. Eddington, F.R.S., The Proceedings of the Royal Society of London, Series A, Containing Papers of a Mathematical and Physical Character. The Propagation of Gravitational Waves. (Received October 11, 1922), page 268
Dear Colleagues & Allies ~ I just posted the final prepublication draft of an article on the nature of the Langlands Program, RH, P v. NP, and other "open" problems of pure maths, number theory, etc., and the proofs. I would deeply appreciate your feedback and suggestions. So, if you are interested, please send me a request for access to the [private] file, for review and comment. Thanks & best of luck etc. ~ M
The mathematical function of a TPMS unit cell is as follows (for example, the gyroid):
sin(x)·cos(y) + sin(y)·cos(z) + sin(z)·cos(x) = c
The parameter c determines the relative density of the unit cell.
I am interested in designing TPMS unit cells with the nTopology software. In this software, a TPMS network-based unit cell is designed with the "Mid-surface offset" parameter, and a TPMS sheet-based unit cell is designed with the "approximate thickness" parameter.
What is the relation between these parameters and the relative density of the unit cell?
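I don't know nTopology's internal mapping from those parameters to c (that is exactly what the question asks), but the relation between c and relative density itself can be estimated numerically. Here is a minimal Monte Carlo sketch (plain Python, function names my own): it estimates the volume fraction of one periodic gyroid cell lying on one side of the level set, which is the relative density of the network-based solid.

```python
import math, random

def gyroid(x, y, z):
    """Gyroid level-set function F(x, y, z)."""
    return (math.sin(x) * math.cos(y) + math.sin(y) * math.cos(z)
            + math.sin(z) * math.cos(x))

def relative_density(c, n=200_000, seed=1):
    """Monte Carlo estimate of the volume fraction where F < c
    over one periodic unit cell [0, 2*pi)^3."""
    rng = random.Random(seed)
    L = 2 * math.pi
    hits = sum(
        gyroid(rng.uniform(0, L), rng.uniform(0, L), rng.uniform(0, L)) < c
        for _ in range(n))
    return hits / n
```

Because the gyroid is a balanced surface, relative_density(0.0) comes out close to 0.5, and the density grows monotonically with c; tabulating this curve and comparing it with densities reported by nTopology for a sweep of its offset/thickness parameters would reveal the mapping empirically.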
Physics is a game of looking at physical phenomena and analyzing how they change with a hypothetical, mathematical arrow of time in 3D space. The phenomena are plotted on a mathematical grid model (typically Cartesian), on the assumption that they can be represented by points, and a theory is then derived describing the phenomena under examination. The success of these physical models (mathematical descriptions of physical phenomena) lies in predicting new phenomena: taking the mathematics of one phenomenon and linking it with the mathematics of another, without any prior research experience of that connection, on the presumption that the initial mathematical model of the phenomena holds.
Everyone in physics, professional and amateur, appears to be doing this.
Does anyone see a problem with that process, and if so what problems do you see?
Is the dimension of space, such as a point in space, a physical thing? Is the dimension of time, such as a moment in time, a physical thing? Can a moment in time and a point of space exist as dimensions in the absence of what is perceived as being physical?
Right now, in 2022, we can read with perfect understanding mathematical articles and books
written a century ago. It is indeed remarkable how the way we do mathematics has stabilised.
The difference between the mathematics of 1922 and 2022 is small compared to that between the mathematics of 1922 and 1822.
Looking beyond classical ZFC-based mathematics, a tremendous amount of effort has been put
into formalising all areas of mathematics within the framework of program-language implementations (for instance Coq, Agda) of the univalent extension of dependent type theory (homotopy type theory).
But Coq and Agda are complex programs which depend on other programs (OCaml and Haskell) and frameworks (for instance operating systems and C libraries) to function. If new CPU architectures arrive in the future, then
Coq and Agda would have to be compiled again, and so would OCaml and Haskell.
Both software and operating systems are rapidly changing and have always been so. What is here today is deprecated tomorrow.
My question is: what guarantee do we have that the huge libraries of the current formal mathematics projects in Agda, Coq or other languages will still be relevant or even "runnable" (for instance type-checkable) without having to resort to emulators and computer archaeology 10, 20, 50 or 100 years from now ?
Ten years from now, will Agda be backwards compatible enough to still recognise current Agda files?
Have there been any organised efforts to guarantee permanent backward compatibility for all future versions of Agda and Coq ? Or OCaml and Haskell ?
Perhaps the formal mathematics project should be carried out within a meta-programming language: a simpler, more abstract framework (with a uniform syntax), comprehensible at once to logicians, mathematicians, and programmers, which can be converted automatically into the latest version of Agda or Coq?
After sharing that article, I received an email saying
"I have read the abstract. But can not see the connections between the individual topics. They are completely different areas that can not be easily related to each other. e.g. the electromagnetic wave to the Wick rotation or Möbius band."
I admit that I struggled with the connections between topics myself, and I wasn't satisfied with my posting. I'd decided to dispense with a classical approach and tackle these topics from the point of view that everything is connected to everything else (what may be called a Theory of Everything or Quantum Gravity or Unified Field approach). I'm convinced the connections are there, and wrote the following in my notepad before getting out of bed this morning (I dreamed about the Riemann hypothesis last night). It clarified things for me and I hope it will help the other ResearchGaters I'm sharing with.
The Riemann hypothesis, proposed in 1859 by the German mathematician Georg Friedrich Bernhard Riemann, is fascinating. It seems to fit these ideas on various subjects in physics very well. The Riemann hypothesis doesn’t just apply to the distribution of prime numbers but can also apply to the fundamental structure of the mathematical universe’s space-time (addressed in the article with the Mobius strip, figure-8 Klein bottle, Wick rotation, and vector-tensor-scalar geometry). In mapping the distribution of prime numbers, the Riemann hypothesis is concerned with the locations of “nontrivial zeros” on the “critical line”, and says these zeros must lie on the vertical line of the complex number plane i.e. on the y-axis in the attached figure of Wick Rotation. Besides having a real part, zeros in the critical line (the y-axis) have an imaginary part. This is reflected in the real +1 and -1 of the x-axis in the attached figure, as well as by the imaginary +i and -i of the y-axis. In the upper half-plane of the attached figure, a quarter rotation plus a quarter rotation equals a half – both quadrants begin with positive values and ¼ + ¼ = ½. (The Riemann hypothesis states that the real part of every nontrivial zero must be 1/2.) While in the lower half-plane, both quadrants begin with negative numbers and a quarter rotation plus a negative quarter rotation equals zero: 1/4 + (-1/4) = 0. In the Riemann zeta function, there may be infinitely many zeros on the critical line. This suggests the y-axis is literally infinite. To truly be infinite, the gravitational and electromagnetic waves it represents cannot be restricted to the up-down direction but must include all directions. That means it would include the horizontal direction and interact with the x-axis – with the waves rotating to produce ordinary mass (and wave-particle duality) in the x-axis’ space-time, and (acting as dark energy) to produce dark matter in the y-axis’ imaginary space-time.
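As an aside for readers who want to see the critical line concretely: the location of the first nontrivial zero (t ≈ 14.134725) is a standard published value, and one can check numerically that ζ(1/2 + 14.134725i) ≈ 0 with a short, self-contained sketch. This uses Borwein's alternating-series (Dirichlet eta) algorithm, standard library only; it is an illustration, not part of the argument above.

```python
from math import factorial, pi

def zeta(s, n=50):
    """Riemann zeta via Borwein's algorithm for the alternating
    (Dirichlet eta) series; good for Re(s) >= 1/2, moderate |Im(s)|."""
    # Chebyshev-polynomial weights d_0 .. d_n
    d, acc = [], 0.0
    for k in range(n + 1):
        acc += factorial(n + k - 1) * 4**k / (factorial(n - k) * factorial(2 * k))
        d.append(n * acc)
    # eta(s) = -1/d_n * sum_{k=0}^{n-1} (-1)^k (d_k - d_n) / (k+1)^s
    eta = sum((-1)**k * (d[k] - d[n]) / complex(k + 1)**s for k in range(n))
    eta /= -d[n]
    # zeta(s) = eta(s) / (1 - 2^(1-s))
    return eta / (1 - 2**(1 - s))

z2 = zeta(2)                       # sanity check: should be pi^2 / 6
z0 = zeta(0.5 + 14.134725j)        # |z0| is tiny: the zero sits on Re(s) = 1/2
```

The sanity check against ζ(2) = π²/6 (the Euler product identity from the head of this thread evaluated at s = 2) confirms the implementation before trusting it on the critical line.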
The Riemann hypothesis can apply to the fundamental structure of the mathematical universe’s space-time, and VTS geometry unites the fermions composing the Sun and planets with bosons filling space-time. Thus, the hypothesis also applies to the bodies of the Sun and Mercury themselves. Its link to Wick Rotation means Mercury’s orbit rotates (the Riemann hypothesis is the cause of precession, which doesn’t only exist close to the Sun but throughout astronomical space-time as well as the quantum scale). The link between the half-planes of the hypothesis and the half-periods of Alternating Current’s sine wave suggests the Sun is composed, in part, of AC waves.
Vector-Tensor-Scalar (VTS) Geometry suggests matter is built up layer by layer from the 1 divided by 2 interaction described in the article. The Sun and stars are a special case of VTS geometry in which stellar bodies are built up layer by layer with AC waves in addition to matter such as hydrogen and helium etc. If the Sun only used 1 / 2 (without the AC interaction), it’d be powered by high temperatures and pressures compressing its particles by nuclear fusion. When powered by AC waves, the half-periods entangle to produce phonons which manifest as vibrations apparent in its rising and falling convection cells of, respectively, hot and cooler plasma.
Summation of AC’s sine waves leads to the Sun’s vibratory waves, emission of photons (and to a small extent, of gravitons whose push contributes to planetary orbits increasing in diameter). Because of the connection to Wick rotation, the convective rising and falling in the Sun correlates with time dilation’s rising and falling photons and gravitons. As explained in the article, this slows time near the speed of light and near intense gravitation because the particles interfere with each other. Thus, even if it's never refreshed/reloaded by future Information Technology, our solar system's star will exist far longer than currently predicted.

I need to know how a suggested mechanism for a problem of players' private information, which describes a market for selling and buying things, can change the outcome and the direction of incentives.
I need to discuss this further. I would be grateful if any expert in mechanism design and game theory could help me model the idea mathematically and prove its efficiency.
I did not find a mathematical formula through which we can determine or choose the correspondences in the case of unequal sample sizes.
In the meantime, I would like to say hello to the professors and everyone interested in mathematics. A question challenged me today: is dy/dx = dx/dy a differential equation or not? I asked ChatGPT, and it answered that no, it is an equation but cannot be a differential equation; when a friend asked it a second time, it answered that it is an algebraic equation, not a differential one! I asked mathematics professors, and they all said that yes, it is a nonlinear first-order differential equation with two families of solutions: the parallel lines that bisect the first and third quadrants, and the parallel lines that bisect the second and fourth quadrants. So the question arose for me: why does the AI give the wrong answer? Since it runs on particular algorithms and has no central reasoning unit, are the algorithms wrong, or is it something else? I would appreciate a complete answer from the professors of mathematics and computer science. Thanks.
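To see why the professors' answer is right: dy/dx = dx/dy says (y')² = 1, so y' = ±1, and the solutions are the two families of lines y = x + C and y = -x + C. A tiny numerical check (plain Python; the helper name is my own):

```python
def dydx(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The two solution families of dy/dx = dx/dy, i.e. (y')^2 = 1:
lines = [lambda x: x + 3.0,    # slope +1 (bisects quadrants I and III)
         lambda x: -x + 3.0]   # slope -1 (bisects quadrants II and IV)
for f in lines:
    for x in (-2.0, 0.0, 1.5):
        d = dydx(f, x)
        assert abs(d - 1.0 / d) < 1e-8   # dy/dx equals dx/dy along each line
```

Any curve with slope other than ±1 at some point fails the equation there, which is why only these two line families solve it.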
The Ricci tensor assumes the role of helping us understand curvature. Within my Universal Theory research, the Ricci tensor unveils itself. I was pleased to find, as detailed in my research document on the Grand Unified Theory Framework (of which advancements in technology are showing there may be more than one viable form as science progresses), that the Ricci tensor was typically vanishing to zero for the Schwarzschild metric, as it should, back when I was performing feasibility and speciousness checks via calculations with other experts and myself. But in practical applications of the Grand Unified Theory Framework, this vanishing to zero unravels very intriguing consequences.
One of said consequences was something small and interesting I wanted to discuss. The purpose is to highlight the intricacies of implementing such highly comprehensive concepts in practical settings such as code, and thus to detail the challenges researchers may face when translating comprehensive physics and mathematics formulations into concrete applications. More often than not, I have found it requires innovative adaptations and problem-solving. I also want to hear if anyone has experience with similar things and what that experience was.
My recent and past ventures into authenticating the Universal Theory framework in code, as well as writing complex neural-network and AI code with it, and quantum-computing code, hit a lot of interesting hurdles. I immersed myself in the depths of this and then encountered a peculiar happenstance: the vanishing of the Ricci tensor to zero in the code processes. I didn't realize why a lot of the code wasn't working. It's because I was trying to run iterative artificial-learning code. And since it incorporated the Universal Theory, and did so in a mathematically accurate way (authenticating it in various ways via code is also possible), I didn't realize that no matter what I did the code would never work with the full form of the theory, because the Ricci tensor would always vanish to zero for the Schwarzschild metric within the subsequent processes running off the initial code. And while this was validating for my theory, it was equally frustrating to realize it might be a massive hurdle to instituting it in code.
This unexpected twist threw me into a world where certain possibilities seemed to evaporate into the ether. The task of setting values for the tensor g_ij (the Einstein tensor form utilized in the Grand Unified Theory Framework) in code demanded a lot of intricate modifications.
I found myself utterly lost. I thought the code was specious, before I thought to check the Ricci tensor calculations and the Christoffel and Riemann formations, and got it running. I think it's quite scary, in a way, that someone could have code similar to mine, with their own or another form of Unified Theory, but if they didn't have sufficient knowledge of relativity, they might never know whether the code worked. I feel few have attempted to embrace the tangible variations of complex frameworks within code. I wanted to share this because I thought it was interesting as an example of multidisciplinary science. Coding and physics together is always interesting, and there isn't a whole lot of support or information for people venturing into these waters sometimes.
I would like to know what everyone thinks of multidisciplinary issues such as this as well, wherein one may entirely miss valuable data by not knowing what to look for, and how that may affect the final results and calculations of research and experimentation. In this situation, I ultimately had to employ some of the concepts in my research document to arrive at the Ricci tensor without any formations of Christoffel or Riemann symbols in the subsequent processes. I thought that was interesting from a physics and coding perspective too, because I never would have known how to parse this code to get it functioning without knowledge of relativity.
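For anyone wanting to reproduce the vanishing the post describes, here is a short, self-contained numerical sketch (plain Python with finite differences; this is my own illustration, not the author's code): build the Schwarzschild metric, form the Christoffel symbols, contract to the Ricci tensor, and every component comes out numerically zero, as it must in vacuum.

```python
import math

RS = 1.0   # Schwarzschild radius in geometric units (arbitrary test value)
H = 1e-4   # finite-difference step

def metric(x):
    """Diagonal Schwarzschild metric g_ab at x = (t, r, theta, phi)."""
    _, r, th, _ = x
    f = 1.0 - RS / r
    g = [[0.0] * 4 for _ in range(4)]
    g[0][0] = -f
    g[1][1] = 1.0 / f
    g[2][2] = r * r
    g[3][3] = r * r * math.sin(th) ** 2
    return g

def inv_metric(x):
    g = metric(x)
    ginv = [[0.0] * 4 for _ in range(4)]
    for a in range(4):
        ginv[a][a] = 1.0 / g[a][a]  # diagonal metric: invert entrywise
    return ginv

def dmetric(x, mu):
    """partial_mu g_ab by central differences."""
    xp, xm = list(x), list(x)
    xp[mu] += H
    xm[mu] -= H
    gp, gm = metric(xp), metric(xm)
    return [[(gp[i][j] - gm[i][j]) / (2 * H) for j in range(4)]
            for i in range(4)]

def christoffel(x):
    """Gamma^a_bc = 1/2 g^ae (d_b g_ec + d_c g_eb - d_e g_bc)."""
    ginv = inv_metric(x)
    dg = [dmetric(x, mu) for mu in range(4)]
    G = [[[0.0] * 4 for _ in range(4)] for _ in range(4)]
    for a in range(4):
        for b in range(4):
            for c in range(4):
                G[a][b][c] = 0.5 * sum(
                    ginv[a][e] * (dg[b][e][c] + dg[c][e][b] - dg[e][b][c])
                    for e in range(4))
    return G

def ricci(x):
    """R_bd = d_a G^a_db - d_d G^a_ab + G^a_ae G^e_db - G^a_de G^e_ab."""
    G = christoffel(x)
    dG = []
    for mu in range(4):
        xp, xm = list(x), list(x)
        xp[mu] += H
        xm[mu] -= H
        Gp, Gm = christoffel(xp), christoffel(xm)
        dG.append([[[(Gp[i][j][k] - Gm[i][j][k]) / (2 * H)
                     for k in range(4)] for j in range(4)] for i in range(4)])
    R = [[0.0] * 4 for _ in range(4)]
    for b in range(4):
        for dd in range(4):
            R[b][dd] = sum(
                dG[a][a][dd][b] - dG[dd][a][a][b]
                + sum(G[a][a][e] * G[e][dd][b] - G[a][dd][e] * G[e][a][b]
                      for e in range(4))
                for a in range(4))
    return R

# Every component of the vacuum Ricci tensor vanishes (to round-off)
R = ricci((0.0, 3.0, 1.0, 0.5))
```

A code path that silently assumes the Ricci tensor is nonzero will indeed misbehave on this metric, which matches the experience described above.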
Is it possible to create a random 2-dimensional shape using mathematical equations, or in software like 3ds Max and AutoCAD? Like this one:
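Yes; one simple equation-based recipe (a minimal plain-Python sketch, function names my own; the points can be exported as a polyline to CAD tools) is to perturb a circle's radius with a few random Fourier modes, which always yields a closed, smooth, random blob:

```python
import math, random

def random_blob(n_pts=200, n_modes=6, amp=0.15, seed=42):
    """Closed random 2-D outline: a unit circle whose radius is
    perturbed by a few random Fourier modes r(t) = 1 + sum a_k cos(k t + p_k)."""
    rng = random.Random(seed)
    # modes k = 2..n_modes+1; mode 1 is skipped since it mostly just shifts the shape
    coef = [(rng.uniform(-amp, amp), rng.uniform(0, 2 * math.pi))
            for _ in range(n_modes)]
    pts = []
    for i in range(n_pts):
        th = 2 * math.pi * i / n_pts
        r = 1.0 + sum(a * math.cos((k + 2) * th + ph)
                      for k, (a, ph) in enumerate(coef))
        pts.append((r * math.cos(th), r * math.sin(th)))
    return pts
```

Keeping amp small enough that the total perturbation stays below 1 guarantees r(t) > 0, so the outline never self-intersects through the origin; larger amplitudes give wilder shapes.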
If you have "Basic Technical Mathematics with Calculus, SI Version, 11th edition", please share it with me at stevegjostwriter@gmail.com
In the field of solid mechanics, Navier’s partial differential equation of linear elasticity for material in vector form is:
(λ+G)∇(∇⋅f) + G∇²f = 0, where f = (u, v, w)
The corresponding component form can be evaluated by expanding the ∇ operator and organizing it as follows:
For x-component (u):
(λ+2G)·∂²u/∂x² + G·(∂²u/∂y² + ∂²u/∂z²) + (λ+G)·(∂²v/∂x∂y + ∂²w/∂x∂z) = 0
However, I find it difficult to convert from the component form back to its compact vector form using the combination of divergence, gradient, and Laplacian operators, especially when there are coefficients involved.
Does anyone have any experience with this? Any advice would be appreciated.
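One way to gain confidence in a conversion either direction is to test the two forms numerically on an arbitrary smooth displacement field: if the compact form and the expanded component form agree at random points, the rearrangement is right. A minimal sketch (plain Python; the test field u, v, w and the Lamé constants are arbitrary choices of mine):

```python
import math

lam, G = 2.0, 1.5   # arbitrary Lamé constants for the check

def d2(f, p, i, j, h=1e-4):
    """Second partial d^2 f / (dx_i dx_j) at point p, central differences."""
    def sh(q, k, s):
        q = list(q)
        q[k] += s
        return q
    if i == j:
        return (f(*sh(p, i, h)) - 2 * f(*p) + f(*sh(p, i, -h))) / h**2
    return (f(*sh(sh(p, i, h), j, h)) - f(*sh(sh(p, i, h), j, -h))
            - f(*sh(sh(p, i, -h), j, h)) + f(*sh(sh(p, i, -h), j, -h))) / (4 * h**2)

# an arbitrary smooth test displacement field f = (u, v, w)
u = lambda x, y, z: math.sin(x) * y + z * z
v = lambda x, y, z: x * y * z
w = lambda x, y, z: math.cos(y) + x * z

p = [0.3, -0.7, 1.1]
# x-component of the compact form: (lam+G) d/dx(div f) + G laplacian(u)
vector_form = ((lam + G) * (d2(u, p, 0, 0) + d2(v, p, 0, 1) + d2(w, p, 0, 2))
               + G * (d2(u, p, 0, 0) + d2(u, p, 1, 1) + d2(u, p, 2, 2)))
# the expanded component form
component_form = ((lam + 2 * G) * d2(u, p, 0, 0)
                  + G * (d2(u, p, 1, 1) + d2(u, p, 2, 2))
                  + (lam + G) * (d2(v, p, 0, 1) + d2(w, p, 0, 2)))
assert abs(vector_form - component_form) < 1e-9
```

The key identity being tested is that the x-component of ∇(∇⋅f) is ∂²u/∂x² + ∂²v/∂x∂y + ∂²w/∂x∂z; grouping the ∂²u/∂x² terms from the gradient-of-divergence and the Laplacian produces the (λ+2G) coefficient.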
Mathematical Literacy prepares students for real-life situations using aspects of the mathematics taught in younger grades. Students will be able to do basic tax, calculate water and electricity tariffs, the amount of paint needed to paint a room, or the number of tiles needed to tile a floor. Isn't this preparing students for adult life and for society, while Mathematics can remain a compulsory subject for those who want to go to university and have a great talent for it?
Invitation to Contribute to an Edited Book
Banach Contraction Principle: A Centurial Journey
As editors, we are pleased to invite you and your colleagues to contribute your research work to an Edited Book entitled Banach Contraction Principle: A Centurial Journey to be published by Springer.
The main objective of this book is to focus on the journey of the Banach Contraction Principle, its generalizations, extensions, and consequences in the form of applications that are of interest to a wide range of audiences. Different results for fixed points as well as fixed figures for single-valued and multi-valued mappings satisfying various contractive conditions in distinct spaces have been investigated, and this research is still ongoing. The book is expected to contain new applications of fixed point techniques in diverse fields besides the survey/advancements of 100 years of the celebrated Banach contraction principle.
Please go through the details below for the deadlines.
Full chapter submission: July 12, 2023
Review results: Aug. 12, 2023
Revision Submission: Sept. 01, 2023
Final acceptance/rejection notification: Sept.16, 2023
Submission of final chapters to Springer: Sept.21, 2023
Email your papers to anitatmr@yahoo.com or jainmanish26128301@gmail.com (PDF and TeX files) at your earliest convenience. Submitted papers will be peer-reviewed by three reviewers. On acceptance, authors will be requested to submit the final paper in the format of the book.
We firmly believe that your contribution will enrich the academic and intellectual content of the book and open up new avenues of research.
Kindly note that there is no fee or charge from authors at any stage of publication.
Looking forward to your valuable contribution.
Best Regards
Anita Tomar
Professor & Head
Department of Mathematics
Pt. L. M. S. Campus
Sridev Suman Uttarakhand University
Rishikesh-249201, India
&
Manish Jain
Head
Department of Mathematics,
Ahir College, Rewari-123401, India
How do we express the quantitative research method in mathematical form, including the studied variables?
Could something that does not have an end be related to the concept of the eternal in nature? Could you cite, beyond any doubt, something that proves it?
If infinity could be linked to eternity, is it possible to think of it as something without any known limit? How can we assume that something whose limits we can observe could be infinite in its area? If infinity does not fit within limits and cannot be totally observed, how can we assume that infinity could be inside a circle, for example? Or between two numbers such as zero and 1, with zero and 1 being limits?
I would like to ask a general question: Any other physicists of any kind, what do YOU see as the fundamental flaws currently existing in mathematics-to-physics (or vice versa) calculations in a general sense? Is it differences in tensors, unknown values, inconsistent or unreliable outputs with known methods, no reliable well-known methods, etc.? Or is the problem to you seen as more of a problem with scientific attitudes and viewpoints being limiting in their current state? And the bigger overall question: Which of these options is limiting science to a higher degree? I'd love to hear others' comments on this.
Dear Professionals/Researchers/Students,
Can you please suggest any technique or mathematical approach to optimize spare-parts management (automobile industry) in order to improve or increase production?
SOURCE OF MAJOR FLAWS IN COSMOLOGICAL THEORIES:
MATHEMATICS-TO-PHYSICS APPLICATION DISCREPANCY
Raphael Neelamkavil, Ph.D., Dr. phil.
The big bang theory has many limitations. These are:
(1) the uncertainty regarding the causes / triggers of the big bang,
(2) the need to trace the determination of certain physical constants to the big bang moments and not further backwards,
(3) the necessity to explain the notion of what scientists and philosophers call “time” in terms of the original bang of the universe,
(4) the compulsion to define the notion of “space” with respect to the inner and outer regions of the big bang universe,
(5) the possibility of and the uncertainty about there being other finite or infinite number of universes,
(6) the choice between an infinite number of oscillations between big bangs and big crunches in the big bang universe (in case of there being only our finite-content universe in existence), or in every big bang universe (if there are an infinite number of universes),
(7) the question whether energy will be lost from the universe during each phase of the oscillation, and in that case how an infinite number of oscillations can be the whole process of the finite-content universe,
(8) the difficulty involved in mathematizing these cases, etc.
These have given rise to many other cosmological and cosmogenetic theories – mythical, religious, philosophical, physical, and even purely mathematical. It must also be mentioned that the thermodynamic laws created primarily for earth-based physical systems have played a big role in determining the nature of these theories.
The big bang is already a cosmogenetic theory regarding a finite-content universe. The consideration of an INFINITE-CONTENT universe has always been taken as an alternative source of theories to the big bang model. Here, in the absence of conceptual clarity on the physically permissible meaning of infinite content and without attempting such clarity, cosmologists have been accessing the various mathematical tools available to explain the meaning of infinite content. They also do not seem to keep themselves aware that locally possible mathematical definitions of infinity cannot apply to physical localities at all.
The result has been the acceptance of temporal eternality to the infinite-content universe without fixing physically possible varieties of eternality. For example, pre-existence from the past eternity is already an eternality. Continuance from any arbitrary point of time with respect to any cluster of universes is also an eternality. But models of an infinite-content cosmos and even of a finite-content universe have been suggested in the past one century, which never took care of the fact that mathematical infinity of content or action within a finite locality has nothing to do with physical feasibility. This, for example, is the source of the quantum-cosmological quick-fix that a quantum vacuum can go on creating new universes.
But due to their obsession with our access to observational details merely from our local big bang universe, and the obsession to keep the big bang universe as an infinite-content universe and as temporally eternal by using the mathematical tools found, a mathematically automatic recycling of the content of the universe was conceived. Here they naturally found it safe to accommodate the big universe, and clearly maintain a sort of eternality for the local big bang universe and its content, without recourse to external creation.
Quantum-cosmological and superstrings-cosmological gimmicks like considering each universe as a membrane and the "space" between them as vacuum have given rise to the consideration that it is these vacua that just create other membranes or at least supply new matter-energy to the membranes to continue to give rise to other universes. (1) The ubiquitous sensationalized science journalism with rating motivation and (2) the physicists' and cosmologists' need to stick to mathematical mystification in the absence of clarity concerning the physical feasibility of their infinities: these give fame to the originators of such universes as great and original scientists.
I suggest that the need to justify an eternal recycling of the big bang universe with no energy loss at the fringes of the finite-content big bang universe was fulfilled by cosmologists with the automatically working mathematical tools like the Lambda term and its equivalents. This in my opinion is the origin of the concepts of the almighty versions of dark energy, virtual quantum soup, quantum vacuum, ether, etc., for cosmological applications. Here too the physical feasibility of these concepts, judged by comparing them with the maximal-medial-minimal possibilities of existence of dark energy, virtual quantum soup, quantum vacuum, ether, etc. within the finite-content and infinite-content cosmos, has not been considered. Their almighty versions were required because they had to justify an eternal pre-existence and an eternal future for the universe from a crass physicalist viewpoint, to which most scientists fall prey even today. (See: Minimal Metaphysical Physicalism (MMP) vs. Panpsychisms and Monisms: Beyond Mind-Body Dualism: https://www.researchgate.net/post/Minimal_Metaphysical_Physicalism_MMP_vs_Panpsychisms_and_Monisms_Beyond_Mind-Body_Dualism)
I believe that the inconsistencies present in the mathematically artificialized notions and in the various cosmogenetic theories in general are due to the blind acceptance of available mathematical tools to explain an infinite-content and eternally existent universe.
What should in fact have been done? We know that physics is not mathematics. In mathematics, all sorts of predefined continuities and discretenesses may be created without asking whether they are sufficiently applicable to be genuinely physics-justifying under the general compulsions of physical existence. I CONTINUE TO ATTEMPT TO DISCOVER WHERE THE DISCREPANCIES LIE. History is on the side of sanity.
One clear example for the partial incompatibility between physics and mathematics is where the so-called black hole singularity is being mathematized by use of asymptotic approach. I admit that we have only this tool. But we do not have to blindly accept it without setting rationally limiting boundaries between the physics of the black hole and the mathematics applied here. It must be recognized that the definition of any fundamental notion of mathematics is absolute and exact only in the definition, and not in the physical counterparts. (See: Mathematics and Causality: A Systemic Reconciliation, https://www.researchgate.net/post/Mathematics_and_Causality_A_Systemic_Reconciliation)
I shall continue to add material here on the asymptotic approach in cosmology and other similar theoretical and application-level concepts.
Bibliography
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
Is it logical to assume that the probability created by nature produces symmetry?
And if this is true, is anti-symmetry just a mathematical tool that can be misleading in specific situations?
I understand that we can produce that number in MATLAB by evaluating exp(1), or possibly using exp(sym(1)) for the exact representation. But e is a very common constant in mathematics, and it is as important as pi to some scholars. So, after all these many versions of MATLAB, why haven't they recognized this valuable constant yet and shown some appreciation by defining it as an individual constant, rather than requiring the exp function for it?
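For comparison, Python's standard library does expose Euler's number as a named constant, math.e, alongside the exp function — exactly the pattern this post wishes MATLAB followed:

```python
import math

# Euler's number is available directly as a named constant
print(math.e)   # 2.718281828459045

# ...and it agrees with evaluating the exponential function at 1
assert abs(math.e - math.exp(1)) < 1e-15

# it also matches the limit definition (1 + 1/n)^n -> e
approx = (1 + 1 / 1_000_000) ** 1_000_000
assert abs(approx - math.e) < 1e-5
```

In MATLAB itself, exp(1) remains the idiomatic way to obtain the value; the constant is computed once, not re-derived each session.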
Below is a conversation between me and MATLAB illustrating why MATLAB developers have ZERO common sense... Enjoy the conversation dear fellows and no regret for MATLAB staff...
Me: Hello MATLAB, how are things?
MATLAB: All good! How can I serve you today sir?
Me: Yes, please. Could you give me the value of Euler's number? You know... it's a very popular and fundamental constant in mathematics.
MATLAB: Sure, but wait until I call the exponential function and ask it to evaluate it for me...
Me: Why would you call the exponential function bro??? Isn't Euler's number always constant and its value is well known for thousands of digits?
MATLAB: You will never know sir... Maybe its value will change in the future, so we continuously check its value with the exponential function every time I'm turned on...
Me: You do WHAT!!!
MATLAB: Well... This is a normal procedure sir and I have to do this every time you turn me on...
Me: Stop right there and don't tell me more please...
MATLAB: No, wait sir... I agree with you that this is perhaps one of the most cloddish things that was ever made in the history of programming, but what can I do sir? The guys who developed me actually believe that this is ingenious.
Me: Ooooh oooh ooooh.... reeeeeally!!! Now ain't that something...
MATLAB: They say sir that this is for your security plus there are no applications for that number sir, so why should they care? Even Euler himself, if resurrected again, would fail to find a single application for that number sir. Probably Jacob Bernoulli, the first to discover this number in 1683, would fail also sir, so why should we bother sir? Though it's a mathematical constant and deeply appreciated by the mathematicians around the world for centuries, we don't respect that number sir and find it useless.
Me: Who decides on the importance of Euler's number as a mathematical quantity? Mathematicians or the guys who develop you?
MATLAB: The guys who develop me sir; right?!?!?!?!?
Me: Bro, I was obsessed with you in the past and I was truly a big fan of you for more than a decade. But, with the mentality I saw here from the guys who develop you, I believe you will be beset with fundamental issues for a long time to come bro... No wonder Python has beaten you in many directions and become the most popular programming language in the world. Time to move to Python, you closed-minded thing, and thanks for helping me in my research work over the past decade!!! Goodbye for good.
MATLAB: Wait sir... Don't leave please... As a way to compensate for the absence of Euler's number, we offer the 2 symbols i and j sir to represent the complex unity, so the extra symbol is a good compensation for Euler's number...
Me: What did you just say?
MATLAB: Say what?
Me: You provide 2 symbols to represent the same mathematical complex unity quantity, but you have none for Euler's number???
MATLAB: Yeeeeeeeap... you got it.
Me: You can't be serious!
MATLAB: I swear sir by the name of the machine I'm installed in that this is true; I'm not making that up.
Me: But why 2 symbols for the same constant; pick up one for God sake!
MATLAB: Well... There is a wisdom sir for picking 2 symbols for the same constant not just 1.
Me: What is it?
MATLAB: Have you seen the movie "The Man in the Iron Mask" written by Alexandre Dumas and Randall Wallace or read the novel "The Three Musketeers," by the nineteenth century French author Alexandre Dumas sir?
Me: I only saw the movie. But why???
MATLAB: Then you must have heard the motto the movie's heroes lived by in their glorious youth: "One for all, all for one".
Me: Yes, I did...
MATLAB: We sir were very impressed by this motto, so we came up with a new one.
Me: Impress me!
MATLAB: "i for j, j for i".
Me: You're killing me...
MATLAB: Wait sir, there is more...
Me: More what?????
MATLAB: Many experts around the world project that the number of letters available to represent the imaginary unit in MATLAB may reach 52 by the end of 2050, sir, so that you can use any English letter (capital or small). How about this sir? Ain't this ingenious also? Sir?!!?!?!?!?
Me: And this is when common sense was blown up by a nuclear weapon... This circus is over...
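For the record, the usual MATLAB workaround is exp(1); Python, by contrast, exposes both constants directly in its standard library. A minimal illustration:

```python
import math
import cmath

print(math.e)          # Euler's number: 2.718281828459045
print(1j * 1j)         # the imaginary unit is the literal 1j: (-1+0j)
print(cmath.exp(1j * math.pi).real)  # Euler's identity: approximately -1
```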
"What is a non-STEM major? A non-STEM major is a major that isn't in science, technology, engineering, or mathematics. This means non-STEM majors include those in business, literature, education, arts, and humanities. In STEM itself, programs in this category include ones that emphasize research, innovation or the development of new technologies."
Apart from the mathematical systems that agree with human perception and our sensory data, there are countless mathematical systems that do not. A question arises: are the worlds that these mathematical systems evoke real? If so, there are countless worlds that could be realized, each with its own physics. Can multiple universes be concluded from this point of view?
Don't we see that only one of these possible worlds is felt by our bodies? Why? Did we originally create mathematics to suit our senses? And now, in modern physics, with the maturation of our powers of understanding, have we created mathematical systems that fit our dreams about the world? Which of these mathematical systems is actually true of the world and has been realized? If all of them have come true, then there is no single, objective world and everyone experiences their own. If only one of these mathematical systems has been realized, in what sense is it the best?
If the worlds created by these countless mathematical systems are not real, why do they exist in the human mind?
The last question: does the tangibility of some of these mathematical systems to human senses, and the intangibility of most of them, indicate a separation between the observable and hidden worlds?
Given: x = 10sin(0.2t), y = 10cos(0.2t), z = 2.5sin(0.2t) (1)
There exists the following mathematical relationship:
u = x'cos(z) + y'sin(z),
v = -x'sin(z) + y'cos(z), (2)
r = z'
How to express rd=[x,y,z,u,v,r]' in the form of drd/dt = h(rd), where the function h(rd) does not explicitly depend on the time variable t?
My approach is as follows:
From (2), we have:
x' = ucos(z) - vsin(z), y' = usin(z) + vcos(z), z' = r (3)
with initial values x(0) = 0, y(0) = 10, z(0) = 0
From (2), we have:
u' = x''cos(z) - x'sin(z)z' + y''sin(z) + y'cos(z)z',
v' = -x''sin(z) - x'cos(z)z' + y''cos(z) - y'sin(z)z',
r' = z''
By calculating based on (1), we obtain:
x' = 2cos(0.2t) = 0.2y
x'' = -0.4sin(0.2t) = -0.04x
y' = -2sin(0.2t) = -0.2x (4)
y'' = -0.4cos(0.2t) = -0.04y
z' = 0.5cos(0.2t) = 0.05y
z'' = -0.1sin(0.2t) = -0.01x
Substituting x', x'', y', y'', z', z'' from (4) into the expressions for u', v', r', we get:
u' = -0.04x cos(z) - 0.01y^2 sin(z) - 0.04y sin(z) - 0.01xy cos(z)
v' = 0.04x sin(z) - 0.01y^2 cos(z) - 0.04y cos(z) + 0.01xy sin(z)
r' = z'' = -0.01x
with initial values u(0)=2, v(0)=0, r(0)=0.5
The calculation process is accurate, but is the problem-solving approach correct?
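One way to check the approach (not part of the original post) is to integrate the autonomous system drd/dt = h(rd) numerically and compare it against the closed-form trajectory (1). The sketch below uses scipy's solve_ivp, with u', v', r' obtained by substituting (4) into the derivative expressions:

```python
# Sanity check: integrate the autonomous system and compare with
# x = 10*sin(0.2t), y = 10*cos(0.2t), z = 2.5*sin(0.2t).
import numpy as np
from scipy.integrate import solve_ivp

def h(t, rd):
    # t is unused: the right-hand side is autonomous, as required.
    x, y, z, u, v, r = rd
    return [
        u * np.cos(z) - v * np.sin(z),                # x'
        u * np.sin(z) + v * np.cos(z),                # y'
        r,                                            # z'
        -0.04*x*np.cos(z) - 0.01*y**2*np.sin(z)
        - 0.04*y*np.sin(z) - 0.01*x*y*np.cos(z),      # u'
        0.04*x*np.sin(z) - 0.01*y**2*np.cos(z)
        - 0.04*y*np.cos(z) + 0.01*x*y*np.sin(z),      # v'
        -0.01 * x,                                    # r'
    ]

rd0 = [0.0, 10.0, 0.0, 2.0, 0.0, 0.5]  # [x, y, z, u, v, r] at t = 0
sol = solve_ivp(h, (0.0, 30.0), rd0, rtol=1e-10, atol=1e-12,
                dense_output=True)

t_end = 30.0
x_num, y_num, z_num = sol.sol(t_end)[:3]
print(abs(x_num - 10*np.sin(0.2*t_end)))  # error vs. closed form
```

If the reformulation is correct, the numerical solution should track the closed-form trajectory to within the integration tolerance.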
I have a dataset of rice leaves for training and testing in machine learning. Here is the link: https://data.mendeley.com/datasets/znsxdctwtt/1
I want to develop my project with these techniques:
- RGB Image Acquisition & Preprocessing (HSV Conversion, Thresholding, and Masking)
- Image Segmentation and Feature Extraction (GLCM matrices, Wavelets (DWT))
- Classification (SVM, CNN, KNN, Random Forest)
- Results with MATLAB code.
- But I am unsure about the final scores for the confusion matrices, so I need a technique to check which feature-extraction method is best for this dataset.
- My main target is detecting normal and abnormal (diseased) leaves with labels.
#image #processing #mathematics #machinelearning #matlab #deeplearning
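One standard way to decide which feature-extraction method suits a dataset is to run the same classifier on each feature set under cross-validation and compare the mean scores. A sketch in Python with scikit-learn, using randomly generated stand-in matrices (the names "GLCM" and "DWT" and their dimensions are placeholders; the real features extracted from the rice-leaf images would be plugged in instead):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)  # 0 = normal, 1 = diseased (stand-in labels)

# Hypothetical feature matrices: replace with the real GLCM and DWT
# features computed from the images.
feature_sets = {
    "GLCM": rng.normal(labels[:, None], 1.0, (n, 6)),
    "DWT":  rng.normal(labels[:, None], 2.0, (n, 12)),
}

results = {}
for name, X in feature_sets.items():
    scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)
    results[name] = scores.mean()
    print(f"{name}: mean CV accuracy = {scores.mean():.3f} "
          f"(+/- {scores.std():.3f})")
```

Cross-validated scores (rather than a single train/test split) make the comparison between extraction methods less sensitive to how the data happen to be partitioned; the same idea carries over to MATLAB's cvpartition-based workflow.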