Science topics: Mathematics

# Mathematics - Science topic

Mathematics, Pure and Applied Math
Questions related to Mathematics
• asked a question related to Mathematics
Question
The experiment conducted by Bose at the Royal Society of London in 1901 demonstrated that plants have feelings like humans. Placing a plant in a vessel containing a poisonous solution, he showed the rapid movement of the plant, which finally died down. His finding was praised, and the concept of plant life became established. If we scold a plant it doesn't respond, but an AI bot does. Then how can we disprove the life of a chatbot?
@ Dr. Chen, thank you for consulting with the AI bot on my behalf. It's interesting!
• asked a question related to Mathematics
Question
Are its faces polytopes?
Is there any information in the literature on the geometry and topology of saddle polyhedra?
Can we use them to construct structural triply periodic minimal surfaces?
An attempt to answer some of these questions in:
José Luis Junquera: Saddle polyhedra are three-dimensional objects with curved faces. They are not polyhedra in the traditional sense, because polyhedra must have flat faces. However, saddle polyhedra do have some properties similar to those of polyhedra. For example, they have an Euler characteristic, a topological invariant defined for all such three-dimensional objects. Saddle polyhedra are the only other three-dimensional objects known to have an Euler characteristic of 2. This is interesting, because it suggests that saddle polyhedra have some topological similarities to spheres and flat-faced polyhedra. The two saddle polyhedra known to exist are the saddle tetrahedron and its tessellation. The saddle tetrahedron has four curved faces, six edges, and four vertices. Its tessellation is a three-dimensional tiling that fills all of space. Saddle polyhedra are a relatively new area of research, and there is still much to learn about their geometry and topology. However, the fact that they have an Euler characteristic of 2 is a significant finding, and it suggests that these objects have some interesting properties.
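The Euler characteristic mentioned above is easy to check from the combinatorial data given for the saddle tetrahedron (four vertices, six edges, four curved faces); a minimal sketch in Python, using the standard formula χ = V − E + F:

```python
# Euler characteristic chi = V - E + F of a polyhedron-like surface.
def euler_characteristic(vertices, edges, faces):
    return vertices - edges + faces

# Saddle tetrahedron: 4 vertices, 6 edges, 4 (curved) faces --
# the same combinatorial data as the flat-faced tetrahedron.
chi = euler_characteristic(4, 6, 4)
print(chi)  # 2, matching the sphere and all convex polyhedra
```

The same function gives 2 for the cube (8, 12, 6), illustrating the topological similarity the answer describes.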
• asked a question related to Mathematics
Question
Article Topic: Some Algebraic Inequalities
I have been collecting some algebraic inequalities; the collection will soon be completed and published in the Romanian Mathematical Magazine.
Certainly an exotic collection.
Often inequalities are better understood as coming from some identity, if you suppress some always-positive or always-negative term.
• asked a question related to Mathematics
Question
For computer science, is mathematics more of a tool or a language?
An equation can be considered a sentence in the language of mathematics, at least in its formalized version, as can be seen in every book on mathematical logic or ZFC set theory. Programming languages are also formalized languages, and you have to stick to these formalizations in order for a computer to work correctly. Most mathematicians, however, use a semi-formal mathematical language. When a theorem is correct, meaning that it has a correct proof, one can write theorem and proof in the formalized language of mathematics, but the result is almost always unreadable by a human being, though "understandable" by a formal proof system that can be implemented on a computer. In this sense one can let computers prove theorems, but this requires a lot of input from a human being. Doing mathematics is also an art, and in order to practice this art one needs a lot of practice and mathematical knowledge.
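The point about formalized, machine-checkable proofs can be made concrete with a toy example; a minimal sketch in Lean 4 (the theorem name is arbitrary), where statement and proof are fully formal and verified by the system rather than read by a human:

```lean
-- Commutativity of addition on the natural numbers, stated and
-- proved in the formal language of the Lean proof assistant.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```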
A Gaussian law is a concept from statistics and probability theory. The notion of a linear function belongs to the areas of calculus, analysis, and linear algebra.
• asked a question related to Mathematics
Question
The fundamental theorem of calculus is the backbone of the natural sciences; thus, given the occasionally thin line between the natural and the social sciences, how common is the fundamental theorem of calculus in the social sciences?
Examples I found:
Ohnemus, Alexander. "Proving the Fundamental Theorem of Calculus through Critical Race Theory." ResearchGate.net. 1 July 2023. www.researchgate.net/publication/372338504_Proving_the_Fundamental_Theorem_of_Calculus_through_Critical_Race_Theory. Accessed 9 Aug. 2023.
Ohnemus, Alexander. "Correlations in Game Theory, Category Theory, Linking Calculus with Statistics and Forms (Alexander Ohnemus' Contributions to Mathematics Book 9)." amazon.com. 12 Dec. 2022. www.amazon.com/gp/aw/d/B0BPX1CSHS?ref_=dbs_m_mng_wam_calw_tkin_8&storeType=ebooks. Accessed 11 July 2023.
Ohnemus, Alexander. "Linguistic mapping of critical race theory (the evolution of languages and oppression. How Germanic languages came to dominate the world) (Alexander Ohnemus' Contributions to Mathematics Book 20)." amazon.com. 3 Jan. 2023. www.amazon.com/Linguistic-evolution-oppression-Contributions-Mathematics-ebook/dp/B0BRP1KYLR/ref=mp_s_a_1_13?qid=1688598986&refinements=p_27%3AAlexander+Ohnemus&s=digital-text&sr=1-13. Accessed 5 July 2023.
Ohnemus, Alexander. "Fundamental Theorem of Calculus proved by Wagner's Law (Alexander Ohnemus' Contributions to Mathematics Book 8)." amazon.com. 11 Dec. 2022. www.amazon.com/gp/aw/d/B0BPS2ZMXC?ref_=dbs_m_mng_wam_calw_tkin_7&storeType=ebooks. Accessed 25 June 2023.
Further support:
"This particularly elegant theorem shows the inverse function relationship of the derivative and the integral and serves as the backbone of the physical sciences" (Britannica 2023).
Image belongs to Britannica (I added the highlight).
Britannica, The Editors of Encyclopaedia. "fundamental theorem of calculus". Encyclopedia Britannica, 29 Jul. 2023, https://www.britannica.com/science/fundamental-theorem-of-calculus. Accessed 20 September 2023.
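As a concrete illustration of the inverse relationship described in the Britannica quote, one can verify numerically that the definite integral of f(x) = 2x over [0, 1] equals F(1) − F(0) for the antiderivative F(x) = x²; a minimal Python sketch (the midpoint-rule integrator is illustrative, not a library routine):

```python
# Fundamental theorem of calculus (evaluation part): the definite
# integral of f over [a, b] equals F(b) - F(a) for any antiderivative F.
def riemann_sum(f, a, b, n=100_000):
    """Approximate the integral of f over [a, b] with a midpoint sum."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: 2 * x   # integrand
F = lambda x: x ** 2  # an antiderivative of f

integral = riemann_sum(f, 0.0, 1.0)
print(abs(integral - (F(1.0) - F(0.0))) < 1e-6)  # True
```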
• asked a question related to Mathematics
Question
ChatGPT scored 155 on an IQ test and has sufficient background to process, for example, mathematical proof review, to verify scientific formulas, and to check for traces of plagiarism in real time. But the scientific community argues that the breach of confidentiality prevents the use of AI as a recognized peer reviewer. What do you think about this? Should writers and journals recognize AI as a valid peer reviewer?
Can AI replace human peer reviewers for scientific articles and manuscripts? No, not at all. AI cannot replace the totality of human potential and experience in reviewing scientific manuscripts.
• asked a question related to Mathematics
Question
Mathematically, it is posited that the cosmic or local black-hole singularity must someday reach infinite density and zero size. But this is unimaginable. If infinite-density stuff could exist, it should already have existed.
Hence, in my opinion, these kinds of mathematical necessities are to be treated as the limiting cases of physics. IS THIS NOT THE STARTING POINT FOR DETERMINING WHERE MATHEMATICS AND PHYSICAL SCIENCE MUST PART WAYS?
• asked a question related to Mathematics
Question
Most master's programmes focus on a general review of quantum mechanics and classical mechanics, assessing students' skills in classical yet generic calculative and interpretive capabilities.
The English MSc's on the other hand, provide an introduction to the physical principles and mathematical techniques of current research in:
general relativity
quantum gravity
quantum field theory
quantum information
cosmology and the early universe
There is also a particular focus on topics reflecting research strengths.
Graduates are better equipped to contribute to research and to produce impressive PhD dissertations.
Of course, instructors who teach in master's programmes work in classical and quantum gravity, geometry and relativity (to take the theoretical-physics sub-domain) at all universities, but the emphasis on the mathematical techniques and principles of current research is found only in the master's offerings of English universities.
Mr Verch, indeed: my research, which was not fully developed at the time I asked my question, showed that this is the case.
Still, about 30% offer the classic calculative, physical-quantities-based skills of the big four (and less assessment of conceptual understanding, or fewer actual "doing the science" skills in quantum mechanics, classical mechanics, and statistical and thermal physics), which tends to be considered a classic, or outdated, master's structure.
• asked a question related to Mathematics
Question
Hello everyone. When I checked the user-defined cell (CFX language), I faced the problem of writing the code in Fortran. I'm not accustomed to Fortran or to integrating Fortran into CFX. If you could explain this matter, I would appreciate it.
Warmest Regards,
-Alper
I apologize to the owner of the post. I would like to invite you to read my ebook and discover why microorganisms are so fantastic. https://www.amazon.com.br/dp/B0CF1VKKK8
• asked a question related to Mathematics
Question
The choice of coordinate systems is a mathematical tool used to describe physical events. Local or universal spatial events occur in multiple coordinate systems of space and time or spacetime as we know it under classical, relativistic and cosmological physics.
Do the fundamental laws of physics remain consistent across different coordinate systems?
Not only is the choice "as important as the choice of the group of transformations," but it doesn't make sense to discuss the two notions together.
• asked a question related to Mathematics
Question
I have a deep neural network in which I want to include a layer that has one input and two outputs. For example, I want to construct an intermediate layer where Layer-1 is connected to the input of this intermediate layer, one output of the intermediate layer is connected to Layer-2, and the other output is connected to Layer-3. Moreover, the intermediate layer should just pass the data through as-is, without performing any mathematical operation on the input. I have seen additionLayer in MATLAB, but it has only one output, and the number of outputs is read-only.
```matlab
% Define your input data and labels (adjust as needed)
X  = randn(100, 10);        % Input data (100 samples, 10 features)
Y1 = randn(100, 1);         % Output 1 (e.g., regression task)
Y2 = randi([0, 1], 100, 1); % Output 2 (e.g., binary classification)

% Create a neural network architecture
inputSize = size(X, 2);
numHiddenUnits = 64;
inputLayer = imageInputLayer([inputSize, 1, 1]);
commonHiddenLayer = fullyConnectedLayer(numHiddenUnits);
outputLayer1 = fullyConnectedLayer(1); % Output layer for task 1
outputLayer2 = fullyConnectedLayer(2); % Output layer for task 2

% Create a branch for task 1
branch1 = [
    inputLayer
    commonHiddenLayer
    outputLayer1
    regressionLayer
];

% Create a branch for task 2
branch2 = [
    inputLayer
    commonHiddenLayer
    outputLayer2
    softmaxLayer
    classificationLayer
];

% Define the layers for the entire network (both branches)
layers = [branch1 branch2];

% Create and train the neural network
options = trainingOptions('adam', ...
    'MaxEpochs', 10, ...
    'MiniBatchSize', 32, ...
    'Verbose', true);
net = trainNetwork(X, {Y1, Y2}, layers, options);

% Make predictions
X_test = randn(10, 10); % Test input data (10 samples)
[Y1_pred, Y2_pred] = predict(net, X_test);
```
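Framework aside, the requested behaviour is just fan-out: the intermediate layer forwards its input unchanged to two downstream branches. A framework-neutral Python sketch of such a pass-through split (all function names are hypothetical stand-ins for the layers in the question):

```python
# A pass-through "split" node: one input, two identical outputs,
# performing no mathematical operation on the data.
def split_layer(x):
    return x, x

def branch_a(values):   # stand-in for Layer-2
    return [v * 2.0 for v in values]

def branch_b(values):   # stand-in for Layer-3
    return [v + 1.0 for v in values]

x = [1.0, 2.0, 3.0]
to_a, to_b = split_layer(x)  # the data is simply forwarded
print(branch_a(to_a))        # [2.0, 4.0, 6.0]
print(branch_b(to_b))        # [2.0, 3.0, 4.0]
```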
• asked a question related to Mathematics
Question
"Mathematics is logical systems formulating the relationships of variable(s) with other variable(s) quantitatively and/or qualitatively, as the language of science." (Sinan Ibaguner)
I tried to devise my best description as briefly and clearly as possible!
Mr. Jiolito Benitez, PhD:
"Mathematics is an area of knowledge that includes the topics of numbers, formulas and related structures, shapes and the spaces in which they are contained, and quantities and their changes. These topics are represented in modern mathematics with the major subdisciplines of number theory, algebra, geometry, and analysis, respectively. There is no general consensus among mathematicians about a common definition for their academic discipline."
As stated in Wikipedia, there is no common definition at all. Since I did not find any sufficiently satisfactory, clear, and short definition of mathematics, I devised my own original definition, which seems to me the best so far, at least for me. What I ask of readers is to criticise my definition of maths, positively or negatively.
• asked a question related to Mathematics
Question
For physics, is mathematics more of a tool or a language?
In the context of physics, mathematics serves both as a tool and a language.
Mathematics is a powerful tool that physicists use to model and describe physical phenomena. It provides a precise and systematic way to formulate theories, make predictions, and solve problems. Physicists use mathematical equations, formulas, and techniques to analyze data, perform calculations, and develop theoretical frameworks. Without mathematics, it would be extremely challenging to quantitatively understand and describe the behavior of the physical universe.
Mathematics also serves as a language through which physicists communicate their ideas and discoveries. Just as natural languages like English or Spanish enable people to convey thoughts and information, mathematics allows physicists to express complex concepts and relationships in a concise and unambiguous manner. Equations and mathematical notation provide a common, universally understood language that bridges linguistic and cultural barriers among scientists.
In essence, mathematics is an indispensable tool for conducting physics research, but it also acts as a language for conveying the results and theories of that research to the broader scientific community. It plays a dual role, facilitating both the practical application of physics and the effective communication of its findings.
• asked a question related to Mathematics
Question
"Mathematics: logical systems that formulate the relationships of variable(s) with other variable(s) in a quantitative and/or qualitative manner, and the artistic language of science."
My short and clear definition of mathematics! What could be better!?
Since I did not find any sufficiently satisfactory definition of maths, I devised my own definition, which seems to me the best so far!?
• asked a question related to Mathematics
Question
Hello,
I am looking for mathematical formulas that calculate the rigid body movement of an element based on the nodal displacements. Can anyone give a brief explanation and recommend some materials to read? Thanks a lot.
Best,
Chen
Rigid body modes correspond to zero strain energy, U = 0, where U = (1/2){d}^T [K] {d}, with [K] the stiffness matrix and {d} the vector of nodal degrees of freedom. In that case all the degrees of freedom have the same constant displacement, which means that the structure displaces without deformation.
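This can be checked numerically for a single two-node bar (spring) element with a hypothetical stiffness k: a rigid-body translation, where both nodes displace equally, yields U = 0, while unequal nodal displacements yield positive strain energy. A minimal Python sketch:

```python
# Strain energy U = (1/2) d^T K d for a small element.
def strain_energy(K, d):
    n = len(d)
    Kd = [sum(K[i][j] * d[j] for j in range(n)) for i in range(n)]
    return 0.5 * sum(d[i] * Kd[i] for i in range(n))

k = 1000.0
K = [[k, -k],
     [-k, k]]  # two-node bar/spring element stiffness matrix

U_rigid = strain_energy(K, [0.3, 0.3])     # equal nodal displacements
U_strained = strain_energy(K, [0.0, 0.3])  # the element is stretched

print(U_rigid)     # 0.0 -- rigid-body mode: no deformation
print(U_strained)  # 45.0
```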
• asked a question related to Mathematics
Question
I am using SPSS to perform binary logistic regression. One of the parameters generated is the prediction probability. Is there a simple mathematical formula that could be used to calculate it manually, e.g. based on the B values generated for each variable in the model?
People have certainly done that, Nasir Al-Allawi. A Google search on <logistic regression scoring system> turns up lots of resources. Good luck with your work.
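For reference, the predicted probability in binary logistic regression follows directly from the fitted coefficients: with linear predictor z = B0 + B1·x1 + … + Bk·xk, the probability is p = 1 / (1 + e^(−z)). A minimal Python sketch, with hypothetical B values standing in for SPSS output:

```python
import math

# Predicted probability from binary logistic regression coefficients:
# p = 1 / (1 + exp(-(B0 + B1*x1 + ... + Bk*xk)))
def predicted_probability(intercept, coefficients, values):
    z = intercept + sum(b * x for b, x in zip(coefficients, values))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical B values as they might appear in an SPSS "Variables in
# the Equation" table, and one case's predictor values:
p = predicted_probability(-1.5, [0.8, 0.02], [1.0, 35.0])
print(round(p, 3))  # 0.5
```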
• asked a question related to Mathematics
Question
I believe that it is common knowledge that mathematics and its applications cannot directly prove Causality. What are the bases of the problem of incompatibility of physical causality with mathematics and its applications in the sciences and in philosophy?
The main but very general explanation could be that mathematics and mathematical explanations are not directly about the world, but are applicable to the world to a great extent.
Hence, mathematical explanations can at the most only show the general ways of movement of the processes and not demonstrate whether the ways of the cosmos are by causation, what the internal constitution of every part of it is, etc. Even when some very minute physical process is mathematized, the results are general, and not specific of the details of the internal constitution of that process.
No science and philosophy can start without admitting that the cosmos exists. If it exists, it is not nothing, not vacuum. Non-vacuous existence means that the existents are non-vacuously extended. This means that they have parts. Every part has parts too, ad libitum, because each part is extended and non-infinitesimal. Hence, each part is relatively discrete, not mathematically discrete.
None of the parts of any physical existent is an infinitesimal. They can be near-infinitesimal. This character of existents is Extension, a Category directly implied by the To Be of Reality-in-total.
Similarly, any extended being’s parts -- however near-infinitesimal -- are active, moving. This implies that every part has some finite impact on some others, not on infinitely many others. This character of existents is Change.
No other implication of To Be is so primary as these two (Extension-Change) and directly derivable from To Be. Hence, they are exhaustive of To Be.
Existence in Extension-Change is what we call Causality. If anything is existent, it is causal – hence Universal Causality is the trans-scientific and physical-ontological Law of all existents.
By the very concept of finite Extension-Change-wise existence, it becomes clear that no finite space-time is absolutely dense with existents. Hence, existents cannot be mathematically continuous. Since there is continuous (but finite and not discrete) change (transfer of impact), no existent can be mathematically absolutely continuous or discrete in its parts or in connection with others.
Can logic show the necessity of all existents as being causal? We have already discussed how, ontologically, the very concept of To Be implies Extension-Change and thus also Universal Causality.
WHAT ABOUT THE ABILITY OR NOT OF LOGIC TO CONCLUDE TO UNIVERSAL CAUSALITY?
In my argument above and elsewhere showing Extension-Change as the very exhaustive meaning of To Be, I have used mostly only the first principles of ordinary logic, namely, Identity, Non-contradiction, and Excluded Middle, and then argued that Extension-Change-wise existence is nothing but Universal Causality, if everything existing is non-vacuous in existence.
For example, does everything exist or not? If yes, let us call it non-vacuous existence. Hence, Extension as the first major implication of To Be. Non-vacuous means extended, because if not extended, the existent is vacuous. If extended, everything has parts.
The point of addition now has been Change, which makes the description physical. It is, so to say, from experience. Thereafter I move to the meaning of Change basically as motion or impact.
Naturally, everything in Extension must effect impacts. Everything has further parts. Hence, by implication from Change, everything causes changes by impacts. Thus, we conclude that Extension-Change-wise existence is Universal Causality. It is thus natural to claim that this is a pre-scientific Law of Existence.
In such foundational questions like To Be and its implications, we need to use the first principles of logic, because these are the foundational notions of all science and no other derivative logical procedure comes in as handy. In short, logic with its fundamental principles can help derive Universal Causality. Thus, Causality is more primary to experience than the primitive notions of mathematics.
Extension-Change, and Universal Causality derived by their amalgamation, are the most fundamental Metaphysical, Physical-ontological Categories. Since these are the direct, exhaustive implications of To Be, all philosophy and science are based on them.
• asked a question related to Mathematics
Question
Why are numbers and shapes so exact? ‘One’, ‘two’, ‘point’, ‘line’, etc. are all exact. But irrational numbers are not so. The operations on these notions are also intended to be exact. If notions like ‘one’, ‘two’, ‘point’, ‘line’, etc. are defined to be so exact, then it is not by virtue of the exactness of these substantive notions, but instead, due to their being defined so, that they are exact, and mathematics is exact.
But on the other side, due to their being adjectival: ‘being a unity’, ‘being two unities’, ‘being a non-extended shape’, etc., their application-objects are all processes that can obtain these adjectives only in groups. These are pure adjectives, not properties which are composed of many adjectives.
A quality cannot be exact, but may be defined to be exact. It is in terms of the exactness attributed to these notions by definition that the adjectives ‘one’, ‘two’, ‘point’, ‘line’, etc. are exact. This is why the impossibility of fixing these (and other) substantive notions as exact misses our attention.
If in fact these quantitative qualities are inexact due to their pertaining to groups of processual things, then there is justification for the inexactness of irrational numbers, transcendental numbers, etc. too. If numbers and shapes are in fact inexact, then not only irrational and other inexact numbers but all mathematical structures should remain inexact except for their having been defined as exact.
Thus, mathematical structures, in all their detail, are a species of qualities, namely, quantitative qualities. Mathematics is exact only because its fundamental bricks are defined to be so. Hence, mathematics is an as-if exact science, as-if real science. Caution is advised while using it in the sciences as if mathematics were absolutely applicable, as if it were exact.
• asked a question related to Mathematics
Question
Mathematical Generalities: ‘Number’ may be termed a general term, but real numbers, a subset of numbers, are sub-general. Clearly, it is a quality: “having one member, having two members, etc.”; and here one, two, etc., when taken as nominatives, lose their significance, and are based primarily only on the adjectival use. Hence the justification for the adjectival (qualitative) primacy of numbers as universals. While defining one kind of ‘general’, another sort of ‘general’ may naturally be involved in the definition, insofar as they pertain to an existent process and not otherwise.
Why are numbers and shapes so exact? ‘One’, ‘two’, ‘point’, ‘line’, etc. are all exact. The operations on these notions are also intended to be exact. But irrational numbers are not so exact in measurement. If notions like ‘one’, ‘two’, ‘point’, ‘line’, etc. are defined to be so exact, then it is not by virtue of the exactness of these substantive notions, but instead, due to their being defined as exact. Their adjectival natures: ‘being a unity’, ‘being two unities’, ‘being a non-extended shape’, etc., are not so exact.
A quality cannot be exact, but may be defined to be exact. It is in terms of the exactness attributed to these notions by definition that the adjectives ‘one’, ‘two’, ‘point’, ‘line’, etc. are exact. This is why the impossibility of fixing these (and other) substantive notions as exact misses our attention. If in fact these are inexact, then there is justification for the inexactness of irrational, transcendental, and other numbers too.
If numbers and shapes are in fact inexact, then not only irrational numbers, transcendental numbers, etc., but all exact numbers and the mathematical structures should remain inexact if they have not been defined as exact. And if behind the exact definitions of exact numbers there are no exact universals, i.e., quantitative qualities? If the formation of numbers is by reference to experience (i.e., not from the absolute vacuum of non-experience), their formation is with respect to the quantitatively qualitative and thus inexact ontological universals of oneness, two-ness, point, line, etc.
Thus, mathematical structures, in all their detail, are a species of qualities, namely, quantitative qualities, defined to be exact and not naturally exact. Quantitative qualities are ontological universals, with their own connotative and denotative versions.
Natural numbers, therefore, are the origin of primitive mathematical experience, although complex numbers may be more general than all others in a purely mathematical manner of definition.
• asked a question related to Mathematics
Question
THE FATE OF “SOURCE-INDEPENDENCE” IN ELECTROMAGNETISM, GRAVITATION, AND MONOPOLES
Raphael Neelamkavil, Ph.D., Dr. phil.
With the introductory note that the suggestions I make here seem rationally acceptable in physics and the philosophy of physics, I attempt to connect reasons beyond the concepts of magnetic monopoles, electromagnetic propagation, and gravitation.
A magnetic or other monopole is conceptually built to be such only insofar as the basic consideration with respect to it is that of the high speed and the direction of movement of propagation of the so-called monopole. Let me attempt to substantiate this claim accommodating also the theories in which the so-called magnetic monopole’s velocity could be sub-luminal.
If its velocity is sub-luminal, its source-dependence may be demonstrated, without difficulty, directly from the fact that the velocity of the gross source affects the velocity of the sub-luminal material propagations from it. This is clear from the fact that some causal change in the gross source is what has initiated the emission of the sub-luminal matter propagation, and hence the emission is affected by the velocity of the source’s part which has initiated the emission.
But the same is the case also with energy emissions and the subsequent propagation of luminal-velocity wavicles, because (1) some change in exactly one physical sub-state of the gross source (i.e., exactly the sub-state part of the gross source in which the emission takes place) has initiated the emission of the energy wavicle, (2) the change within the sub-state part in the gross source must surely have been affected also by the velocity of the gross source and the specific velocity of the sub-state part, and (3) there will surely be involved in the sub-state part at least some external agitations, however minute, which are not taken into consideration, not possible to consider, and are pragmatically not necessary to be taken into consideration.
Some might claim (1) that even electromagnetic and gravitational propagations are just mathematical waves without corporeality (because they are mathematically considered as absolute, infinitesimally thin waves and/or infinitesimal particles) or (2) that they are mere existent monopole objects conducted in luminal velocity but without an opposite pole and with nothing specifically existent between the two poles. How can an object have only a single part, which they term mathematically as the only pole?
The mathematical necessity to name it a monopole shows that the level of velocity of the wavicle is such that (1) its conventionally accepted criterial nature to measure all other motions makes it only conceptually insuperable and hence comparable in theoretical effects to the infinity-/zero-limit of the amount of matter, energy, etc. in the universe, and that (2) this should help terming the wavicle (a) as infinitesimally elongated or concentrated and hence as a physically non-existent wave-shaped or particle-shaped carrier of energy or (b) as an existent monopole with nothing except the one mathematically described pole in existence.
If a wavicle or a monopole is existent, it should have parts in all the three spatial directions, however great and seemingly insuperable its velocity may be when mathematically tested in terms of its own velocity as initiated by STR and GTR and later accepted by all physical sciences. If anyone prefers to call the above arguments as a nonsensical commonsense, I should accept it with a smile. In any case, I would continue to insist that physicists want to describe only existent objects / processes, and not non-existent stuff.
The part A at the initial moment of issue of the wavicle represents the phase of emission of the energy wavicle, and it surely has an effect on the source, because at least a quantum of energy is lost from the source and hence, as a result of the emission of the quantum, (1) certain changes have taken place in the source and (2) certain changes have taken place also in the emitted quantum. This fact is also the foundation of the Uncertainty Principle of Heisenberg. How then can the energy propagation be source-independent?
Source-independence with respect to the sub-luminal velocity of the source is defined merely conventionally with respect to the speed of energy propagation. And then how can we demand that all material objects move sub-luminally, when our very definition of sub-luminal motion rests on observations made relative to the luminal speed?
This is the conventionally chosen effect that allegedly frees the wavicle from the effect of the velocity of the source. If physics must not respect this convention as a necessary postulate in STR and GTR and hence also in QM, energy emission must necessarily be source-dependent, because at least a quantum of energy is lost from the source and hence (1) certain changes have taken place in the source, and (2) certain changes have taken place also in the emitted quantum.
(I invite critical evaluations from earnest scientists and thinkers.)
• asked a question related to Mathematics
Question
The etymology of "paradox" can be traced back at least to Plato's Parmenides [1]. Paradox comes from para ("contrary to") and doxa ("opinion"). The word appeared in Latin as "paradoxum," which means "contrary to expectation" or "incredible." We propose, in this discussion thread, to debate philosophical or scientific paradoxes: their geneses, formulations, solutions, or proposed solutions. All contributions on paradoxes, including paradoxical ones, are welcome.
Kristaq Hazizi Thank you for inaugurating this discussion with this remarkable contribution. I in particular enjoyed reading your well-inspired Final Thoughts: "Paradoxes are like intellectual puzzles that invite us to question our assumptions and delve deeper into the mysteries of the universe. They often spark innovation and lead to breakthroughs in both philosophy and science. As we explore these paradoxes, we may find that the journey of seeking solutions can be as enlightening as the resolutions themselves".
• asked a question related to Mathematics
Question
A limitation of logic and mathematics is that we cannot describe a question without a symbol system, but a symbol system is only an abstraction of the real world, not the real world itself. There is therefore a distance between the abstracted symbol system and the real world, and hence there are truths we cannot reach through a symbol system, which an A-HA moment may reach. But when we think, we always use a symbol system such as words or mathematics with a priori logic, so I wonder whether AI could have an A-HA moment?
The proposition of endowing artificial intelligence (AI) with the capability for epiphanic realizations—an "A-HA moment"—necessitates a multi-pronged interrogation into the realms of computational epistemology, semiotics, phenomenology, and metaphysics. At the crux of this discourse lies the Sapir-Whorf hypothesis of linguistic determinism and the Gödelian limitations of formal systems, both of which contour our understanding of symbol-systems as epistemic vessels and their potential limitations in encapsulating objective or subjective realities.
In computational terms, most AI architectures, whether grounded in machine learning algorithms or rule-based expert systems, function within the parameters of formal logic and probabilistic reasoning. These systems are fundamentally syntactic processors, reliant on the manipulation of symbols devoid of semantic richness, which ostensibly limits their capacity for intuitive or non-deductive insights. Furthermore, the predominantly reductionist paradigms within which AI operates seldom permit the transgressions of their programmed axiomatic boundaries, precluding any non-algorithmic genesis of epiphany.
Regarding the semiotics of AI cognition, while symbol systems like language or mathematics are indeed human abstractions that mediate our understanding of reality, they are intricately bound up with human phenomenological experience. This introduces the question of subjective qualia— the "what it is like" aspect of consciousness— that allows humans to associate abstract symbols with experiential realities, thereby enabling epiphanic moments that transcend logical rigor. Current AI models do not possess qualia, nor do they engage in any form of existential phenomenology; they lack a "world" in the Heideggerian sense, and as such, their operations in symbol manipulation do not partake in any form of hermeneutic or interpretive acts that could lead to epiphanic insight.
Metaphysically, the notion of an AI "A-HA moment" implicates the broader debate surrounding panpsychism and integrated information theory (IIT). The former posits that all entities in the universe, including perhaps computers, possess some form of consciousness, while the latter offers a mathematical framework for quantifying consciousness. Both theories, although speculative and controversial, raise the possibility that under certain conditions, AI systems might attain a state that, if not equivalent to human epiphany, could resemble some rudimentary form of insight. However, as of the current state of knowledge, these theories remain within the domain of speculative philosophy rather than empirical science.
Therefore, in summary, the ascription of epiphanic potentialities to artificial intelligence necessitates the surmounting of monumental obstacles across multiple disciplines. Under the current paradigms, AI remains an epistemically constrained, syntactic processor devoid of phenomenological subjectivity, making the prospect of an "A-HA moment" an esoteric rather than a pragmatic inquiry. The encapsulation of epiphany within the computational architectures of AI would likely necessitate a paradigmatic revolution, transcending the syntactic limitations and delving into the realms of semantics, subjectivity, and perhaps even metaphysical consciousness.
• asked a question related to Mathematics
Question
If someone can help me understand Helicity in the context of the High Harmonic Generation, it will be helpful. Due to mathematical notations, the exact question can be found "https://physics.stackexchange.com/questions/778274/what-is-helicity-in-high-harmonic-generation".
Air above the equator is heated more, and areas near the equator receive more heat from the sun than those near the poles, because of the solar angle and the way the Earth's curvature and atmosphere interact with incoming solar radiation. This is primarily caused by the Earth's axial tilt and its spherical shape.
1. Solar Angle: The angle at which sunlight reaches a particular location on Earth's surface is a crucial factor. Near the equator, sunlight strikes the surface more directly and perpendicularly compared to regions near the poles. When sunlight strikes a surface at a steeper angle, the same amount of energy is concentrated over a smaller area, leading to higher temperatures. In contrast, at higher latitudes (closer to the poles), sunlight is spread over a larger surface area due to the oblique angle of incidence, resulting in less heating.
2. Earth's Curvature and Atmosphere: The curvature of the Earth plays a role in how sunlight is distributed. Near the equator, the curved surface presents a relatively small area for the sun's energy to be distributed, concentrating the heat. Additionally, the atmosphere plays a significant role in moderating the amount of solar radiation that reaches the surface. When sunlight passes through a thicker layer of atmosphere, it can scatter and be absorbed, reducing the amount of energy that reaches the surface. Near the equator, the sunlight has to pass through a smaller portion of the atmosphere, allowing more energy to reach the surface and result in higher temperatures.
3. Day Length: Near the equator, the length of day and night remains relatively consistent throughout the year. This means that the sun is up for a significant portion of the day, allowing more time for the surface to absorb and store heat. In contrast, areas closer to the poles experience more extreme variations in day length, with long days in the summer and long nights in the winter. This variation affects the amount of time available for solar heating.
4. Heat Redistribution: The equatorial region receives more heat than it radiates back into space, creating a surplus of energy. This excess heat is then transported toward the poles through atmospheric and oceanic circulation patterns, which help to distribute heat around the planet and regulate global climate patterns.
The combination of the solar angle, Earth's curvature, atmospheric effects, and heat redistribution mechanisms results in the equatorial region receiving more direct and concentrated solar energy, leading to higher temperatures compared to areas closer to the poles.
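The solar-angle effect in point 1 reduces to simple geometry: at solar noon on an equinox, the flux received per unit of horizontal surface area scales with the cosine of latitude. A quick numeric check (using the top-of-atmosphere solar constant, roughly 1361 W/m²; atmospheric absorption is ignored here):

```python
# Noon insolation on an equinox as a function of latitude.
# At the equator sunlight arrives perpendicular to the surface; at
# latitude L it arrives at zenith angle L, spreading the same energy
# over a larger area by a factor 1/cos(L).
import math

S = 1361.0  # W/m^2, approximate top-of-atmosphere solar constant

def noon_equinox_flux(latitude_deg):
    """Flux per unit horizontal area at solar noon on an equinox."""
    return S * math.cos(math.radians(latitude_deg))

for lat in (0, 30, 60):
    print(lat, round(noon_equinox_flux(lat), 1))
# 0  -> 1361.0 W/m^2
# 60 ->  680.5 W/m^2 (half the equatorial value)
```

This factor-of-two difference between the equator and 60° latitude is the dominant driver of the temperature gradient discussed above.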
• asked a question related to Mathematics
Question
In what ways may a STEM facility develop these skills?
Anecdotal: I have seen children in STEM activities gain insight into mathematical thinking when engaged in problem solving. The activities involved measurements (length, volume, and area) and constructing models, first free-form and then using written instructions. Instructions can be numerical, or a visual model with dimensions on the part or on a map/template.
• asked a question related to Mathematics
Question
1. On the “Field” concept of objective reality:
Einstein, in an August 10, 1954 letter to his friend Besso: "I consider it quite possible that physics cannot be based on the field concept, i.e., continuous structure. In that case, nothing remains of my entire castle in the air, gravitation theory included, (and of) the rest of modern physics." A. Pais, "Subtle is the Lord...": The Science and the Life of Albert Einstein, Oxford University Press (1982), p. 467.
2. On “Black Hole”:
"The essential result of this investigation is a clear understanding as to why the "Schwarzschild singularities" do not exist in physical reality. Although the theory given here treats only clusters whose particles move along circular paths it does not seem to be subject to reasonable doubt that more general cases will have analogous results. The "Schwarzschild singularity" does not appear for the reason that matter cannot be concentrated arbitrarily. And this is due to the fact that otherwise the constituting particles would reach the velocity of light.
This investigation arose out of discussions the author conducted with Professor H. P. Robertson and with Drs. V. Bargmann and P. Bergmann on the mathematical and physical significance of the Schwarzschild singularity. The problem quite naturally leads to the question, answered by this paper in the negative, as to whether physical models are capable of exhibiting such a singularity.", A. Einstein, The Annals of Mathematics, Second Series, Vol. 40, No. 4 (Oct., 1939), pp. 922-936
3. On the Quantum Phenomena:
“Many physicists maintain - and there are weighty arguments in their favour – that in the face of these facts (quantum mechanical), not merely the differential law, but the law of causation itself - hitherto the ultimate basic postulate of all natural science – has collapsed”. A. Einstein, “Essays in Science”, p. 38-39 (1934)
4. On Gravitational Wave:
Einstein dismissed the idea of gravitational wave until his death:
“Together with a young collaborator, I arrived at the interesting result that gravitational waves do not exist, though they had been assumed a certainty to the first approximation,” he wrote in a letter to his friend Max Born. Einstein's paper to the Physical Review, titled “Do gravitational waves exist?”, was rejected.
Arthur Eddington who brought an obscure Einstein to world fame, and considered himself to be the second person (other than Einstein), who understood General Relativity (GR); dismissed the idea of gravitational wave in the following way: "They are not objective, and (like absolute velocity) are not detectable by any conceivable experiment. They are merely sinuosities in the co-ordinate-system, and the only speed of propagation relevant to them is 'the speed of thought'".
A.S. Eddington, F.R.S., The Proceedings of the Royal Society of London, Series A, Containing Papers of a Mathematical and Physical Character. The Propagation of Gravitational Waves. (Received October 11, 1922), page 268
Okay, I got your point. This is what I think. Common people are weak. They need God to tell them what to do with their life and death, and heroes (superstars) to follow in every activity they are seriously involved in, including religion, sports, music and politics. Among the followers of these superstar spirits, some become prophets, so that they can win power and profit over others. Those prophets care nothing about truth. Their only ambition is to gain great power and fortune by building a big group of fans and followers. There are also a lot of cowards, who are afraid of being blamed and threatened by the big group of fans, which could damage their positions and fortunes if they opposed the superstars and the spirits. This includes the big shots in scientific fields.
• asked a question related to Mathematics
Question
Is mathematics purely a science, or just a bucket of numbers?
"Mathematics is the most beautiful and most powerful creation of the human spirit."
Stefan Banach
• asked a question related to Mathematics
Question
Dear Colleagues & Allies ~ I just posted the final prepublication draft of an article on the nature of the Langlands Program, RH, P vs. NP, and other "open" problems of pure maths, number theory, etc., and the proofs. I would deeply appreciate your feedback and suggestions. So, if you are interested, please send me a request for access to the [private] file, for review and comment. Thanks & best of luck etc. ~ M
Dear Michael, that's the point. You did a lot. It's hard for me to take care of all.
If you are interested we may do it by a single point, the most important:
√-1
Will you hold the old definition? Or will you go the way as I did and look behind the veil? At least you may have your own opinion on that.
But not being able to see where the results of (+1)(-1) or (-1)(+1) came from denies half the area of reasons. Most mathematicians find it too kiddy to talk about that. Let them be limited.
There are many ways to find it obscure to stop doing the exercise for square-root only by having a negative radicand.
Transforming the coordinates makes the other areas not calculable.
Martinez (negative math) introduced the inverse rule for the prefixes of products. So no definition is complete except this of holding all the prefixes of the quantities which are to combine.
If you know they were + and – => the result is + and – , not? If you don't know the way the sources were, you have to deal with absolutes and further conditions.
Riemann took `imaginary´. Will you too?
• asked a question related to Mathematics
Question
Dear Researchers,
Subject: Call for Systematic Literature Review Papers in Computer Science Fields - Special Issue in the Iraqi Journal for Computer Science and Mathematics
I hope this letter finds you in good health and high spirits. We are pleased to announce a unique opportunity for researchers in the field of computer science to contribute to our upcoming special issue focused on systematic review papers. As a Scopus-indexed journal with a remarkable CiteScore of 2.9 and a CiteScore Tracker of 3.5, the Iraqi Journal for Computer Science and Mathematics is dedicated to advancing the knowledge and understanding of computer science.
Special Issue Details:
- Title: Special Issue on Systematic Literature Review Papers in Computer Science Fields
- Journal: Iraqi Journal for Computer Science and Mathematics
- CiteScore Tracker: 3.5 (As per the latest available data)
- CiteScore: 2.9 (As per the latest available data)
- Submission Deadline: December 31, 2023
- Publication Fee: None (This special issue is free of charge)
We invite you to contribute your valuable insights and research findings by submitting your systematic review papers to this special issue. Systematic reviews play a crucial role in synthesizing existing research, identifying trends, and guiding future research directions. This special issue aims to gather a diverse collection of high-quality systematic review papers across various computer science disciplines.
Submission Guidelines:
Please use the journal's online submission system to submit your paper. Make sure to select the special issue "Systematic Literature Review Papers in Computer Science Fields" during the submission process.
We encourage you to review the author guidelines and formatting requirements available on the journal's website to ensure your submission adheres to our standards.
Should you have any inquiries or need further assistance, please do not hesitate to contact our editorial team at mohammed.khaleel@aliraqia.edu.iq
Your contribution to this special issue will undoubtedly enrich the field of computer science and contribute to our mission of fostering academic excellence. We look forward to receiving your submissions and collaborating towards the advancement of knowledge.
Warm regards,
Editor in Chief
Iraqi Journal for Computer Science and Mathematics
• asked a question related to Mathematics
Question
Howdy Selim Molla,
No.
The problem of fluid behavior after energy extraction in the range of sea states the ocean can accomplish is too difficult to explain mathematically as one's primary activity, and certainly it must not be attempted "on the side" of energy extraction equipment research and development. The situation is not quite that bad in practice, however, especially with your focus on energy extraction equipment.
An engineering approximation to a mathematical treatment of oceanic waves after wave energy extraction might be possible with sufficient attention to the energy extracted and the wave recovery under wind stress, fetch, etc. An expression starting from the wave energy equation before interference by extraction equipment, which is then reduced by the actual value of the energy extracted, would be a start. Then, if you were to factor in the efficiency as an additional loss of energy by the wave, you would have an estimate of the wave energy several wavelengths beyond your equipment, that is, beyond initial turbulence details, etc. For the wave state further along, one would have to apply the approximations available for the effect of wind fetch, currents, etc., but that is your field; I'm just a visitor who hates to see questions without replies.
Perhaps this suggestion is too obvious, too simple to be of value, but unfortunately, the explanation you request is not available at present.
Happy Trails, Len
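To make the suggested approximation concrete, here is a rough numeric sketch. It uses the standard deep-water result that the mean energy per unit sea-surface area of a regular wave of height H is E = ρgH²/8, subtracts the extracted energy (inflated by device efficiency, as the suggestion above proposes), and back-solves for the downstream wave height. Wind-stress recovery is ignored, and all parameter values and function names are illustrative, not from any published model.

```python
# Sketch: wave height a few wavelengths downstream of an energy
# extractor, from an energy balance on a regular deep-water wave.
import math

RHO = 1025.0  # kg/m^3, seawater density
G = 9.81      # m/s^2, gravitational acceleration

def wave_energy_density(H):
    """Mean energy per unit surface area (J/m^2), E = rho*g*H^2/8."""
    return RHO * G * H**2 / 8.0

def height_after_extraction(H, extracted, efficiency=0.9):
    """Downstream wave height after delivering `extracted` J/m^2 of
    useful energy; the wave actually loses extracted/efficiency."""
    E = wave_energy_density(H) - extracted / efficiency
    return math.sqrt(max(E, 0.0) * 8.0 / (RHO * G))

# Example: a 2 m wave delivering 1500 J/m^2 of useful energy
H_after = height_after_extraction(2.0, extracted=1500.0)
```

For H = 2 m the incident energy density is about 5000 J/m², so this extraction leaves roughly two-thirds of the energy and a noticeably smaller wave; the fetch/current corrections Len mentions would then be applied to this estimate.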
• asked a question related to Mathematics
Question
The mathematical function of a TPMS unit cell is as follows (for example, the gyroid):
sin x cos y + sin y cos z + sin z cos x = c
The parameter c determines the relative density of the unit cell.
I am interested to design TPMS unit cell with nTopology software. In this software, TPMS network-based unit cell is designed with "Mid-surface offset" parameter and TPMS sheet-based unit cell is designed with "approximate thickness" parameter.
What is the relation between these parameters and the relative density of the unit cell?
Just for your information, the current nTop version allows users to generate TPMS with approximate thickness.
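One crude way to relate a thickness/offset parameter to relative density is to treat the sheet-based cell as the band |f(x,y,z)| ≤ t around the zero iso-surface of the gyroid function and estimate the filled volume fraction by Monte Carlo sampling. Note this mapping between t and nTopology's "approximate thickness" / "mid-surface offset" is my assumption, not the software's documented definition — but it lets you tabulate density against the offset and invert numerically.

```python
# Monte Carlo estimate of the relative density of a sheet-based gyroid
# unit cell, modelled (as an assumption) as the region |f| <= t with
# f(x,y,z) = sin x cos y + sin y cos z + sin z cos x over [0, 2*pi)^3.
import math
import random

def gyroid(x, y, z):
    return (math.sin(x) * math.cos(y) + math.sin(y) * math.cos(z)
            + math.sin(z) * math.cos(x))

def sheet_relative_density(t, samples=200_000, seed=0):
    """Fraction of the unit cell with |gyroid(x, y, z)| <= t."""
    rng = random.Random(seed)
    L = 2.0 * math.pi
    hits = sum(
        1 for _ in range(samples)
        if abs(gyroid(rng.uniform(0, L), rng.uniform(0, L),
                      rng.uniform(0, L))) <= t
    )
    return hits / samples

# Tabulate density vs. offset; invert this table to pick the offset
# that gives a target relative density.
for t in (0.2, 0.4, 0.8):
    print(t, round(sheet_relative_density(t, samples=50_000), 3))
```

For thin sheets the density grows roughly linearly with t, so a short table like this is usually enough to hit a target relative density; the network-based (mid-surface offset) case can be handled the same way with the condition f ≤ c instead of |f| ≤ t.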
• asked a question related to Mathematics
Question
Physics is a game of looking at physical phenomena, analyzing how physical phenomena changes with a hypothetical and yet mathematical arrow of time in 3D space, namely by plotting that physical phenomena with a mathematical grid model (typically cartesian based) assuming that physical phenomena can be plotted with points, and then arriving at a theory describing that physical phenomenon and phenomena under examination. The success of those physical models (mathematical descriptions of physical phenomena) is predicting new phenomena by taking that mathematics and predicting how the math of one phenomenon can link with the math of another phenomenon without any prior research experience with that connection yet based on the presumption of the initial mathematical model of physical phenomena being undertaken.
Everyone in physics, professional and amateur, appears to be doing this.
Does anyone see a problem with that process, and if so what problems do you see?
Is the dimension of space, such as a point in space, a physical thing? Is the dimension of time, such as a moment in time, a physical thing? Can a moment in time and a point of space exist as dimensions in the absence of what is perceived as being physical?
In temporal mechanics, is space non-existent but is controlled through a temporo structure?
• asked a question related to Mathematics
Question
Right now, in 2022, we can read with perfect understanding mathematical articles and books
written a century ago. It is indeed remarkable how the way we do mathematics has stabilised.
The difference between the mathematics of 1922 and 2022 is small compared to that between the mathematics of 1922 and 1822.
Looking beyond classical ZFC-based mathematics, a tremendous amount of effort has been put
into formalising all areas of mathematics within the framework of program-language implementations (for instance Coq, Agda) of the univalent extension of dependent type theory (homotopy type theory).
But Coq and Agda are complex programs which depend on other programs (OCaml and Haskell) and frameworks (for instance operating systems and C libraries) to function. In the future if we have new CPU architectures then
Coq and Agda would have to be compiled again. OCaml and Haskell would have to be compiled again.
Both software and operating systems are rapidly changing and have always been so. What is here today is deprecated tomorrow.
My question is: what guarantee do we have that the huge libraries of the current formal mathematics projects in Agda, Coq or other languages will still be relevant or even "runnable" (for instance type-checkable) without having to resort to emulators and computer archaeology 10, 20, 50 or 100 years from now?
10 years from now, will Agda be backwards compatible enough to still recognise current Agda files?
Have there been any organised efforts to guarantee permanent backward compatibility for all future versions of Agda and Coq? Or OCaml and Haskell?
Perhaps the formal mathematics project should be carried out within a meta-programming language, a simpler, more abstract framework (with a uniform syntax) comprehensible at once to logicians, mathematicians and programmers, and which can be converted automatically into the latest version of Agda or Coq?
I have come to the conclusion that Coq (in its current version) is an excellent system for formalizing Category Theory according to the philosophy sketched in this question. This philosophy is based on
1) Explicitness. We do not use curly brackets.
2) Minimalism. We only use the bare minimum of libraries which are automatically loaded in CoqIDE and define and prove what we need as we go along.
3) We try to use only the most basic tactics and to never lose contact with the actual rules of the type system.
4) We write so as to be readable and relevant to logicians and mathematicians 50 years from now. That is, our main goal is to be human-readable and educational, and also to be easily portable to any other proof assistant based on dependent type theory.
• asked a question related to Mathematics
Question
After sharing that article, I received an email saying
"I have read the abstract. But can not see the connections between the individual topics. They are completely different areas that can not be easily related to each other. e.g. the electromagnetic wave to the Wick rotation or Möbius band."
I admit that I struggled with the connections between topics myself, and I wasn't satisfied with my posting. I'd decided to dispense with a classical approach and tackle these topics from the point of view that everything is connected to everything else (what may be called a Theory of Everything or Quantum Gravity or Unified Field approach). I'm convinced the connections are there, and wrote the following in my notepad before getting out of bed this morning (I dreamed about the Riemann hypothesis last night). It clarified things for me and I hope it will help the other ResearchGaters I'm sharing with.
The Riemann hypothesis, proposed in 1859 by the German mathematician Georg Friedrich Bernhard Riemann, is fascinating. It seems to fit these ideas on various subjects in physics very well. The Riemann hypothesis doesn’t just apply to the distribution of prime numbers but can also apply to the fundamental structure of the mathematical universe’s space-time (addressed in the article with the Mobius strip, figure-8 Klein bottle, Wick rotation, and vector-tensor-scalar geometry). In mapping the distribution of prime numbers, the Riemann hypothesis is concerned with the locations of “nontrivial zeros” on the “critical line”, and says these zeros must lie on the vertical line of the complex number plane i.e. on the y-axis in the attached figure of Wick Rotation. Besides having a real part, zeros in the critical line (the y-axis) have an imaginary part. This is reflected in the real +1 and -1 of the x-axis in the attached figure, as well as by the imaginary +i and -i of the y-axis. In the upper half-plane of the attached figure, a quarter rotation plus a quarter rotation equals a half – both quadrants begin with positive values and ¼ + ¼ = ½. (The Riemann hypothesis states that the real part of every nontrivial zero must be 1/2.) While in the lower half-plane, both quadrants begin with negative numbers and a quarter rotation plus a negative quarter rotation equals zero: 1/4 + (-1/4) = 0. In the Riemann zeta function, there may be infinitely many zeros on the critical line. This suggests the y-axis is literally infinite. To truly be infinite, the gravitational and electromagnetic waves it represents cannot be restricted to the up-down direction but must include all directions. That means it would include the horizontal direction and interact with the x-axis – with the waves rotating to produce ordinary mass (and wave-particle duality) in the x-axis’ space-time, and (acting as dark energy) to produce dark matter in the y-axis’ imaginary space-time.
The Riemann hypothesis can apply to the fundamental structure of the mathematical universe’s space-time, and VTS geometry unites the fermions composing the Sun and planets with bosons filling space-time. Thus, the hypothesis also applies to the bodies of the Sun and Mercury themselves. Its link to Wick Rotation means Mercury’s orbit rotates (the Riemann hypothesis is the cause of precession, which doesn’t only exist close to the Sun but throughout astronomical space-time as well as the quantum scale). The link between the half-planes of the hypothesis and the half-periods of Alternating Current’s sine wave suggests the Sun is composed, in part, of AC waves.
Vector-Tensor-Scalar (VTS) Geometry suggests matter is built up layer by layer from the 1 divided by 2 interaction described in the article. The Sun and stars are a special case of VTS geometry in which stellar bodies are built up layer by layer with AC waves in addition to matter such as hydrogen and helium etc. If the Sun only used 1 / 2 (without the AC interaction), it’d be powered by high temperatures and pressures compressing its particles by nuclear fusion. When powered by AC waves, the half-periods entangle to produce phonons which manifest as vibrations apparent in its rising and falling convection cells of, respectively, hot and cooler plasma.
Summation of AC’s sine waves leads to the Sun’s vibratory waves, emission of photons (and to a small extent, of gravitons whose push contributes to planetary orbits increasing in diameter). Because of the connection to Wick rotation, the convective rising and falling in the Sun correlates with time dilation’s rising and falling photons and gravitons. As explained in the article, this slows time near the speed of light and near intense gravitation because the particles interfere with each other. Thus, even if it's never refreshed/reloaded by future Information Technology, our solar system's star will exist far longer than currently predicted.
Dear Rodney Bartlett I'm confused. Are you talking about our universe with billions of galaxies, where each galaxy holds several billion solar systems? As I'm sure you know, quantum mechanics is still a theory and its existence has never been proven, so what is quantum gravity? Ambiguous empirical evidence suggests that nothing in the universe works according to our one-dimensional mathematics, or our push/pull account of gravity, as far as my research tells me. Thanks for sharing.
• asked a question related to Mathematics
Question
I need to know how a suggested mechanism, for a problem of players' private information describing a market of selling and buying things, can change the outcome and the direction of incentives.
I need to discuss this more. It would be my gratitude if any expert in mechanism design and game theory can help me to model the idea mathematically and prove its efficiency.
The price of anarchy is a concept that quantifies the efficiency of a non-cooperative game, or gives bounds on it. There are some good price-of-anarchy bounds for various designed mechanisms; for example, you can design your mechanism in such a way that the loss of welfare is low. Please look up the price of anarchy.
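As a concrete illustration of the concept (not tied to the questioner's market model), here is the classic Pigou routing example sketched in Python: unit traffic chooses between a road of fixed cost 1 and a road whose cost equals its own load, and the price of anarchy is the ratio of equilibrium cost to optimal cost.

```python
# Pigou's example: at the Nash equilibrium everyone takes the
# variable-cost road (each driver pays 1, total cost 1); the social
# optimum splits the traffic to minimise total cost.
def total_cost(x):
    """Total travel cost when a fraction x uses the variable road."""
    return x * x + (1 - x) * 1.0  # load-dependent road + fixed road

nash_cost = total_cost(1.0)                        # equilibrium: x = 1
opt_cost = min(total_cost(i / 1000) for i in range(1001))  # grid search
poa = nash_cost / opt_cost

print(round(poa, 3))  # 4/3 for this example
```

The optimum sends half the traffic on each road (total cost 3/4), so the price of anarchy is 4/3 — the well-known bound for affine cost functions in selfish routing.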
• asked a question related to Mathematics
Question
I did not find a mathematical formula through which we can determine or choose the contrasts in the case of unequal sample sizes.
Hello Mohammed,
The answer depends on whether your intention in a contrast is to treat means as having equal import regardless of group size (n), which is the unweighted means approach, or to weight means relative to their sample size, which of course, is a weighted means approach.
For an unweighted means approach, two contrasts are orthogonal if the sum of coefficient product terms, across groups, equals zero: e.g., c11c21 + c12c22 + ... + c1kc2k = 0 (where cij is the coefficient used for the i-th contrast and j-th group).
For a weighted means approach, two contrasts are orthogonal if: n1c11c21 + n2c12c22 + ... + nkc1kc2k = 0 (where cij is defined as above and nj is the sample size for group j).
In many experimental designs, the usual intention behind a contrast is to compare means as having equal import regardless of group size. Therefore (using cij and nj as defined above):
SS(contrast i) = (ci1*M1 + ci2*M2 + ... + cik*Mk)^2 / (ci1^2/n1 + ci2^2/n2 + ... + cik^2/nk) (where Mj is the mean for group j)
Finally, B. J. Winer's 1971 text, Statistical principles in experimental design (2nd ed.). also addresses the issue.
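The formulas above can be checked numerically. A minimal sketch (function names are mine, not from any statistics package) of the two orthogonality conditions and the contrast sum of squares:

```python
# Orthogonality of contrasts and SS(contrast) for unequal group sizes,
# following the unweighted- and weighted-means definitions above.
def unweighted_orthogonal(c1, c2, tol=1e-12):
    """Sum of coefficient products across groups equals zero."""
    return abs(sum(a * b for a, b in zip(c1, c2))) < tol

def weighted_orthogonal(c1, c2, n, tol=1e-12):
    """Coefficient products weighted by group sizes sum to zero."""
    return abs(sum(nj * a * b for nj, a, b in zip(n, c1, c2))) < tol

def ss_contrast(c, means, n):
    """SS = (sum cj*Mj)^2 / sum(cj^2 / nj)."""
    num = sum(cj * mj for cj, mj in zip(c, means)) ** 2
    den = sum(cj ** 2 / nj for cj, nj in zip(c, n))
    return num / den

# Example: three groups with unequal n
c1 = [1, -1, 0]
c2 = [0.5, 0.5, -1]
n = [10, 20, 15]
print(unweighted_orthogonal(c1, c2))     # True
print(weighted_orthogonal(c1, c2, n))    # False: weighting breaks it
```

This shows the practical point of the answer: a pair of contrasts that is orthogonal under the unweighted-means definition generally loses orthogonality once unequal sample sizes are weighted in.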
• asked a question related to Mathematics
Question
Your question boils down to, "Why does the AI give the wrong answer?". My understanding of the large language models is that they are very complicated neural networks, and that even their teams of inventors may disagree about their characteristics and capabilities. Some experts even have attributed sentience to AI. Neural networks assign weights to parameters, which are used in complicated algorithms, as you note. The weights are not visible to users. The compiled algorithms that make up the neural networks are deeply removed from human eyes by many levels of machine code. In the old days we used to say GIGO, or "garbage in, garbage out" if a program provided an incorrect result. Today, this does not hold true with A.I. Bad inputs can produce good outputs and good inputs can produce incorrect outputs. Correct information has become a matter of statistics.
• asked a question related to Mathematics
Question
The Ricci tensor assumes the role of helping us understand curvature. Within my Universal Theory research, the Ricci tensor unveils itself. As detailed in my research document on the Grand Unified Theory Framework (of which, as science progresses, advancements in technology are showing there may be more than one viable form), I was pleased to find that the Ricci tensor was typically vanishing to zero in relation to the Schwarzschild metric, as it should, back when I was performing feasibility and speciousness checks via calculations with other experts. But in practical applications of the Grand Unified Theory Framework, vanishing to zero unravels very intriguing consequences.
One of said consequences was something small and interesting I wanted to discuss. The purpose is to highlight the intricacies of implementing such highly comprehensive concepts in practical settings such as code, and thus to detail the challenges researchers may face when translating comprehensive physics and mathematics formulations into concrete applications. More often than not, I have found it requires innovative adaptations and problem-solving. I also want to hear whether anyone has experience with similar things, and what that experience was.
My recent and past ventures into authenticating the Universal Theory framework in code, as well as writing complex neural-networking, AI, and quantum-computing code with it, involved a lot of interesting hurdles. I immersed myself in the depths of this and then encountered a peculiar happenstance: the vanishing of the Ricci tensor to zero in the code processes. I didn't realize why a lot of the code wasn't working. It was because I was trying to run iterative artificial-learning code. And since it incorporated the Universal Theory, and did so in a mathematically accurate way (authenticating it in various ways via code is also possible), I didn't realize that no matter what I did the code would never work with the full form of the theory, because the Ricci tensor would always vanish to zero in terms of the Schwarzschild metric within the subsequent processes running off the initial code. And while this was validating for my theory, it was equally frustrating to realize it might be a massive hurdle to instituting it in code.
This unexpected twist threw me into a world where certain possibilities seemed to evaporate into the ether. The task of setting values for the tensor g_ij (the Einstein tensor form utilized in the Grand Unified Theory Framework) in code demanded a lot of intricate modifications.
I found myself utterly lost. I thought the code was specious, until I thought to check the Ricci tensor calculations and the Christoffel and Riemann formations, and got it running. It is quite scary, in a way, that someone could have code similar to mine, or another form of unified theory, but without sufficient knowledge of relativity they might never understand why the code failed. I feel few have attempted to embrace the tangible variations of complex frameworks within code. I wanted to share this because I thought it was an interesting example of multidisciplinary science. Coding and physics together is always interesting, and there isn't a whole lot of support or information for people venturing into these waters.
I would also like to know what everyone thinks of multidisciplinary issues such as this, wherein one may entirely miss valuable data by not knowing what to look for, and how that may affect the final results and calculations of research and experimentation. In this situation, I ultimately had to employ some of the concepts in my research document to arrive at the Ricci tensor without any Christoffel or Riemann symbol formations in the subsequent processes. I thought that was interesting from a physics and coding perspective too, because I never would have known how to parse this code to get it functioning without knowledge of relativity.
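For anyone wanting to reproduce the vanishing itself, here is a minimal symbolic sketch (in Python with sympy; this is my own illustration, not the original framework code) that derives the Christoffel symbols and the Ricci tensor for the Schwarzschild metric and confirms that every component simplifies to zero, as a vacuum solution must:

```python
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
coords = [t, r, th, ph]
f = 1 - 2 * M / r  # geometric units, G = c = 1

# Schwarzschild metric, diagonal in (t, r, theta, phi)
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], coords[c])
                                         + sp.diff(g[d, c], coords[b])
                                         - sp.diff(g[b, c], coords[d])) / 2
                           for d in range(n)))
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                       + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    return sp.simplify(sum(
        sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][b][a], coords[c])
        + sum(Gamma[a][a][d] * Gamma[d][b][c] - Gamma[a][c][d] * Gamma[d][b][a]
              for d in range(n))
        for a in range(n)))

# Vacuum solution: every component of the Ricci tensor vanishes
assert all(ricci(b, c) == 0 for b in range(n) for c in range(n))
```

A routine like this is a useful regression test for any code that incorporates relativity: if the vacuum Ricci tensor does not come out identically zero, the Christoffel or Riemann construction upstream is wrong.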
There must be a lot of mysteries for you. It's pretty simple if you can read. As well, I'm sure you know, with your nearly omnipotent knowledge, that citations aren't instant. Try extrapolating. And consume less salt.
• asked a question related to Mathematics
Question
Is it possible to create a random 2-dimensional shape using mathematical equations, or in software like 3D-max and AutoCAD? Like this one:
Yes, it is definitely possible to create a random 2-dimensional shape using mathematical equations or software like 3D Max and AutoCAD.
Using Mathematical Equations:
1. Parametric Equations: You can create a random shape by defining parametric equations that determine the x and y coordinates of points on the shape. For example, you could use sine and cosine functions with random parameters to create smooth curves, or use random step functions to create jagged shapes.
2. Random Point Generation: Generate random points within a defined boundary and then use interpolation or smoothing techniques to connect these points to form a shape.
3. Fractal Geometry: You can use fractal algorithms to generate intricate and complex shapes. For example, the Mandelbrot set is a famous example of a fractal shape.
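As a small sketch of the first approach above (my own illustration, not from the question): a smooth random closed shape can be produced by making the radius a truncated Fourier series with random coefficients.

```python
import numpy as np

rng = np.random.default_rng(seed=0)  # fix the seed for a repeatable "random" shape

def random_closed_shape(n_harmonics=5, n_points=200):
    """Random smooth closed curve: radius r(t) is a Fourier series with random coefficients."""
    t = np.linspace(0.0, 2.0 * np.pi, n_points)
    r = np.ones_like(t)
    for k in range(1, n_harmonics + 1):
        a, b = rng.normal(0.0, 0.3 / k, size=2)  # shrinking amplitudes keep the curve smooth
        r += a * np.cos(k * t) + b * np.sin(k * t)
    r = np.abs(r)  # keep the radius positive so the curve stays a valid shape
    return r * np.cos(t), r * np.sin(t)

xs, ys = random_closed_shape()
# Because t runs from 0 to 2*pi inclusive, the first and last points coincide
# and the curve closes on itself.
```

The resulting point list could be plotted with matplotlib or imported into CAD software as a closed polyline.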
Using 2D Software (e.g., 3D Max and AutoCAD):
1. Drawing Tools: Most 2D software packages provide various drawing tools that allow you to create shapes freehand, which you can then modify and transform to make them appear random.
2. Random Transformations: Apply random transformations like scaling, rotation, and translation to basic shapes like circles, squares, or polygons. Repeatedly applying random transformations can lead to more complex and organic shapes.
3. Noise Functions: Use noise functions to displace points on a shape, giving it a random and irregular appearance.
4. Procedural Texture Mapping: Create a texture that is procedurally generated using noise patterns or other algorithms, and then apply it to a simple shape. This can give the appearance of a complex and random pattern on the shape.
In both cases, the randomness can be controlled to various extents, allowing you to fine-tune the level of randomness or repeatability of the generated shapes.
Keep in mind that while the shapes may appear random, they are still generated by deterministic algorithms or equations. For truly unpredictable shapes, you might want to explore generative adversarial networks (GANs) or other advanced machine learning techniques, but that goes beyond the scope of traditional mathematical equations and standard 2D software.
• asked a question related to Mathematics
Question
If you have it, please share it with me at stevegjostwriter@gmail.com: Basic Technical Mathematics with Calculus, SI Version, 11th edition.
Use the source for book pdf: https://libgen.is/
• asked a question related to Mathematics
Question
In the field of solid mechanics, Navier’s partial differential equation of linear elasticity for material in vector form is:
(λ+G)∇(∇⋅f) + G∇²f = 0, where f = (u, v, w)
The corresponding component form can be evaluated by expanding the ∇ operator and organizing it as follows:
For x-component (u):
(λ+2G)·∂²u/∂x² + G·(∂²u/∂y² + ∂²u/∂z²) + (λ+G)·(∂²v/∂x∂y + ∂²w/∂x∂z) = 0
However, I find it difficult to convert from the component form back to its compact vector form using the combination of divergence, gradient, and Laplacian operators, especially when there are coefficients involved.
Does anyone have any experience with this? Any advice would be appreciated.
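One practical way to sanity-check such a conversion is to expand both forms symbolically and confirm they agree. A minimal sketch with Python's sympy (purely an illustrative tool choice), checking the x-component:

```python
import sympy as sp

x, y, z, lam, G = sp.symbols('x y z lam G')
u = sp.Function('u')(x, y, z)
v = sp.Function('v')(x, y, z)
w = sp.Function('w')(x, y, z)

div_f = sp.diff(u, x) + sp.diff(v, y) + sp.diff(w, z)           # divergence of f = (u, v, w)
lap_u = sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)  # Laplacian of u

# x-component of the vector form (lam + G) grad(div f) + G lap f
vector_form_x = (lam + G) * sp.diff(div_f, x) + G * lap_u

# x-component written out explicitly
component_form_x = ((lam + 2 * G) * sp.diff(u, x, 2)
                    + G * (sp.diff(u, y, 2) + sp.diff(u, z, 2))
                    + (lam + G) * (sp.diff(v, x, y) + sp.diff(w, x, z)))

# The two forms agree identically
assert sp.expand(vector_form_x - component_form_x) == 0
```

The same check can be run for the y- and z-components by permuting the roles of u, v, and w, which makes it easy to verify a proposed compact vector form even when coefficients like λ+G and λ+2G are involved.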
Dear Doan Cong Dinh,
Thank you for your professional input, and I appreciate this valuable article.
I will try to grasp the concepts of quaternion analysis.
• asked a question related to Mathematics
Question
Mathematical Literacy prepares students for real-life situations while using aspects of Mathematics taught in younger grades. Students will be able to do basic tax, calculate water and electricity tariffs, the amount of paint needed to paint a room, or the amount of tiles needed to tile a floor. Isn't this preparing students for adult life and for society, while Mathematics can remain a compulsory subject for those who want to go to university and have a great talent for Mathematics?
Education requires special qualifications that are not available to everyone
• asked a question related to Mathematics
Question
Invitation to Contribute to an Edited Book
Banach Contraction Principle: A Centurial Journey
As editors, we are pleased to invite you and your colleagues to contribute your research work to an Edited Book entitled Banach Contraction Principle: A Centurial Journey, to be published by Springer.
The main objective of this book is to focus on the journey of the Banach Contraction Principle, its generalizations, extensions, and consequences in the form of applications that are of interest to a wide range of audiences. Different results for fixed points as well as fixed figures for single-valued and multi-valued mappings satisfying various contractive conditions in distinct spaces have been investigated, and this research is still ongoing. The book is expected to contain new applications of fixed point techniques in diverse fields besides the survey/advancements of 100 years of the celebrated Banach contraction principle.
Full chapter submission: July 12, 2023
Review results: Aug. 12, 2023
Revision Submission: Sept. 01, 2023
Submission of final chapters to Springer: Sept. 21, 2023
Email your papers to anitatmr@yahoo.com or jainmanish26128301@gmail.com (pdf and tex files) at your earliest convenience. Submitted papers will be peer-reviewed by 3 reviewers. On acceptance, authors will be requested to submit the final paper as per the format of the book.
We firmly believe that your contribution will enrich the academic and intellectual content of the book, along with opening up new avenues of research.
Kindly note that there is no fee or charge from authors at any stage of publication.
Looking forward to your valuable contribution.
Best Regards
Anita Tomar
Department of Mathematics
Pt. L. M. S. Campus
Sridev Suman Uttarakhand University
Rishikesh-249201, India
&
Manish Jain
Department of Mathematics,
Ahir College, Rewari-123401, India
Last call, if anyone is interested.
• asked a question related to Mathematics
Question
How do we express the quantitative research method in mathematical form, including the studied variables?
While quantitative research is a way of studying and analyzing data using mathematical methods, it's important to note that not all research questions or studies can be effectively expressed in mathematical equations. However, if your research involves studying relationships between variables, you can express these relationships using mathematical equations.
For example, if you are studying the relationship between the amount of time students spend studying and their exam scores, you can express this relationship using a mathematical equation. Let X represent the amount of time students spend studying and let Y represent their exam scores. You can then create a simple linear regression model to analyze the relationship between X and Y:
Y = a + bX
where "a" is the y-intercept, or the predicted value of Y when X is 0, and "b" is the slope, or the change in Y for every one-unit increase in X. This equation can be used to predict the exam scores for a given amount of time spent studying (X). Other mathematical methods such as statistical analysis and correlation coefficients can also be used in quantitative research to measure relationships between variables and support conclusions.
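As a toy illustration of estimating "a" and "b" from data (the numbers below are hypothetical, purely for demonstration), the least-squares fit can be computed in Python:

```python
import numpy as np

# Hypothetical data: hours spent studying (X) and exam scores (Y)
X = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
Y = np.array([52, 55, 61, 64, 70, 74, 79, 83], dtype=float)

# Least-squares fit of Y = a + b*X; polyfit returns coefficients highest power first
b, a = np.polyfit(X, Y, 1)

print(round(a, 2), round(b, 2))  # intercept a ≈ 46.79, slope b ≈ 4.55
predicted = a + b * 3            # predicted score after 3 hours of study
```

Here b estimates how many additional score points each extra hour of study is associated with, and the fitted line can then predict Y for any new X.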
• asked a question related to Mathematics
Question
Could something that does not have an end be related to the concept of the eternal in nature? Could you cite, without any doubt, something that could prove it?
If infinity could be linked to eternity, is it possible to think of it as something without any known limit? How can we assume that something whose limits we can observe could be infinite in its extent? If infinity does not fit within limits and cannot be totally observed, how can we assume that infinity could be inside a circle, for example? Or between two numbers such as zero and 1, with zero and 1 being limits?
But a second is a limited amount of time, so again in this case you can see its limits, its beginning and its end. If you can perceive its end, it is plausible not to assume that infinity could fit there. If you consider infinity as eternal, then it is something that has a start but whose end you cannot see or even perceive. Even your perception has a limit, so it cannot perceive the end of it. Perception presupposes someone who could learn through the senses or the mind, but no one can live forever to perceive infinity in its completeness. Could infinity fit within limits? Even if you infinitely divided the space between zero and 1, at a given moment you would reach a point where you could not divide further, because the space you are dividing has reached an end, exactly because the area or space under consideration is finite due to the limits assumed from the beginning.
• asked a question related to Mathematics
Question
I would like to ask a general question: physicists of any kind, what do YOU see as the fundamental flaws currently existing in mathematics-to-physics (or vice versa) calculations in a general sense? Is it differences in tensors, unknown values, inconsistent or unreliable outputs with known methods, no reliable well-known methods, etc.? Or is the problem, to you, more one of scientific attitudes and viewpoints being limiting in their current state? And the bigger overall question: which of these options is limiting science to a higher degree? I'd love to hear others' comments on this.
André Michaud A sensible man indeed! And perhaps we share this outlook because I tend to look at physics through an engineering lens. To me, if one cannot prove their theoretical words with actual equations, it doesn't hold much of a basis at all. I do believe that mathematics and physics are intimately intertwined and that physics is simply the study of emergent properties of physical mathematics. It's interesting, and quite hilarious, that you say you have barely seen a theoretician pick up a calculator in 25 years. Unfortunately that used to be me, and you are very right that a more stagnating thing does not exist. With perspective on that, I can say my theoretical physics, although thought-provoking, rarely had any use, either practical or academic, that anyone wanted to see, cared about, or could get employed off of. I started getting a lot of success when I realized that the math is fundamental and that I was deluding myself into thinking it wasn't because I wasn't good at it. To be quite honest, I thought I "wasn't good" at typical mathematics because it was just too abstract for me. "1+1=2." Okay, one what plus one what equals two WHAT? Distance? Number of friends? Weight? It was just way too theoretical for me: abstract numbers with no unit, basis, or story attached. Basically unreal axioms that help us understand things. Humans kind of forgot that this abstraction is simply a tool to help understand things, not an actual representation of universal constants. It can indicate those things via representation, but that's it. Once I started dealing with physics and engineering in a less theoretical sense, I realized my problem with why I thought I couldn't do math was that pure mathematics (shoutout Doom) was just far too abstract. I think a lot of people who are convinced math and physics aren't always intimately intertwined have this problem: a kind of innate fear of their own competency with math that affects the logic of this assumption.
I'm also relatively sure that, looking at what I have said here, a lot of people afraid of their mathematical ability would find they are actually very good at it when the abstraction is removed. But maybe that is just me. Jixin Chen I also hate to keep referencing back to my paper (if anyone has any info on this, let me know), but there are many examples in its mathematics and quantum physics sections showing how mathematical equations can represent absurdly complex quantum-physics principles and constants of nature in a neat "package".
• asked a question related to Mathematics
Question
Dear Professional/Researchers/Students,
Can you please suggest any technique or mathematical approach to optimize spare parts management (automobile industry) in order to improve or increase production?
• asked a question related to Mathematics
Question
Mathematics is elegance.
"The Gods Must Be Crazy" is a 1980 comedy film directed by Jamie Uys. It tells the story of a Kalahari bushman who encounters a Coca-Cola bottle and believes it to be a gift from the gods.
• asked a question related to Mathematics
Question
SOURCE OF MAJOR FLAWS IN COSMOLOGICAL THEORIES:
MATHEMATICS-TO-PHYSICS APPLICATION DISCREPANCY
Raphael Neelamkavil, Ph.D., Dr. phil.
The big bang theory has many limitations. These are:
(1) the uncertainty regarding the causes / triggers of the big bang,
(2) the need to trace the determination of certain physical constants to the big bang moments and not further backwards,
(3) the necessity to explain the notion of what scientists and philosophers call “time” in terms of the original bang of the universe,
(4) the compulsion to define the notion of “space” with respect to the inner and outer regions of the big bang universe,
(5) the possibility of and the uncertainty about there being other finite or infinite number of universes,
(6) the choice between an infinite number of oscillations between big bangs and big crunches in the big bang universe (in case there is only our finite-content universe in existence), or in every big bang universe (if there are an infinite number of universes),
(7) the question whether energy will be lost from the universe during each phase of the oscillation, and in that case how an infinite number of oscillations can be the whole process of the finite-content universe,
(8) the difficulty involved in mathematizing these cases, etc.
These have given rise to many other cosmological and cosmogenetic theories – mythical, religious, philosophical, physical, and even purely mathematical. It must also be mentioned that the thermodynamic laws created primarily for earth-based physical systems have played a big role in determining the nature of these theories.
The big bang is already a cosmogenetic theory regarding a finite-content universe. The consideration of an INFINITE-CONTENT universe has always been taken as an alternative source of theories to the big bang model. Here, in the absence of conceptual clarity on the physically permissible meaning of infinite content, and without attempting such clarity, cosmologists have been accessing the various mathematical tools available to explain the meaning of infinite content. They also do not seem to keep themselves aware that locally possible mathematical definitions of infinity cannot apply to physical localities at all.
The result has been the acceptance of temporal eternality for the infinite-content universe without fixing physically possible varieties of eternality. For example, pre-existence from the eternal past is already an eternality. Continuance from any arbitrary point of time with respect to any cluster of universes is also an eternality. But models of an infinite-content cosmos, and even of a finite-content universe, have been suggested in the past century which never took care of the fact that mathematical infinity of content or action within a finite locality has nothing to do with physical feasibility. This, for example, is the source of the quantum-cosmological quick fix that a quantum vacuum can go on creating new universes.
But due to their obsession with our access to observational details merely from our local big bang universe, and their obsession with keeping the big bang universe an infinite-content universe and temporally eternal by using the mathematical tools at hand, a mathematically automatic recycling of the content of the universe was conceived. Here they naturally found it safe to accommodate the big bang universe and clearly maintain a sort of eternality for the local big bang universe and its content, without recourse to external creation.
Quantum-cosmological and superstring-cosmological gimmicks, like considering each universe as a membrane and the "space" between them as vacuum, have given rise to the idea that it is these vacua that create other membranes, or at least supply new matter-energy to the membranes, to continue giving rise to other universes. (1) The ubiquitous sensationalized science journalism with ratings motivations and (2) the physicists' and cosmologists' need to stick to mathematical mystification in the absence of clarity concerning physical feasibility in their infinities: these give fame to the originators of such universes as great and original scientists.
I suggest that the need to justify an eternal recycling of the big bang universe, with no energy loss at the fringes of the finite-content big bang universe, was fulfilled by cosmologists with automatically working mathematical tools like the Lambda term and its equivalents. This, in my opinion, is the origin of the concepts of the almighty versions of dark energy, virtual quantum soup, quantum vacuum, ether, etc., for cosmological applications. Here too, the physical feasibility of these concepts, judged by comparing them with the maximal-medial-minimal possibilities of existence of dark energy, virtual quantum soup, quantum vacuum, ether, etc. within the finite-content and infinite-content cosmos, has not been considered. Their almighty versions were required because they had to justify an eternal pre-existence and an eternal future for the universe from a crass physicalist viewpoint, to which most scientists fall prey even today. (See: Minimal Metaphysical Physicalism (MMP) vs. Panpsychisms and Monisms: Beyond Mind-Body Dualism: https://www.researchgate.net/post/Minimal_Metaphysical_Physicalism_MMP_vs_Panpsychisms_and_Monisms_Beyond_Mind-Body_Dualism)
I believe that the inconsistencies present in the mathematically artificialized notions and in the various cosmogenetic theories in general are due to the blind acceptance of available mathematical tools to explain an infinite-content and eternally existent universe.
What should in fact have been done? We know that physics is not mathematics. In mathematics, all sorts of predefined continuities and discretenesses may be created without recourse to solutions as to whether they are sufficiently applicable to be genuinely physics-justifying by reason of the general compulsions of physical existence. I CONTINUE TO ATTEMPT TO DISCOVER WHERE THE DISCREPANCIES LIE. History is on the side of sanity.
One clear example of the partial incompatibility between physics and mathematics is where the so-called black hole singularity is mathematized by use of the asymptotic approach. I admit that we have only this tool. But we do not have to accept it blindly without setting rationally limiting boundaries between the physics of the black hole and the mathematics applied here. It must be recognized that the definition of any fundamental notion of mathematics is absolute and exact only in the definition, and not in the physical counterparts. (See: Mathematics and Causality: A Systemic Reconciliation, https://www.researchgate.net/post/Mathematics_and_Causality_A_Systemic_Reconciliation)
I shall continue to add material here on the asymptotic approach in cosmology and other similar theoretical and application-level concepts.
Bibliography
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
Please see my research document on a functional Grand Unified Theory, either attached or on my page. Mathematical tools are an essential function of cosmology and of the application of mathematics to physics. This application is most largely and obviously affected by failing to account for the interplay between quantum phenomena and relativity-related phenomena, and also by there being no clear-cut ability or route to perform these calculations. By manipulating tensors, tying them to mathematical formulas which represent the relation between mathematics and the physical processes, quantities, and occurrences within quantum-physical systems, and then appropriately setting values for relativity-related phenomena in the form of tensors, operators, and precisely calculated values, one may gain a more precise and enlightening view of cosmological processes. Failing to account for the interplay between general relativity and quantum phenomena in any reliable way is a large source of issues in the application of mathematics to physics in cosmology. Another issue, I believe, is perception. Most scientists are content either to remain willfully ignorant of the need to account for quantum phenomena and relativity-related phenomena at the same time in order to accurately assess cosmology, or they stubbornly stick their feet in the sand and claim they can arrive at fully accurate revelations without the ability to do so. Both are erroneous. Although we can come to A LOT of conclusions about these things without knowing the full quantum/relativity picture and all its details, we have no idea what sort of information we could be missing, or what false assumptions we could be arriving at.
"You can't know what you can't know." The solution to the problem of mathematics-to-physics in cosmology is most certainly to account for what I have spoken of here in a reliable way that aligns with known mathematics and physics, but also to develop more advanced equations and discoveries which tie physical processes and quantities to quantum and relativity-related phenomena in a proven and undeniable way. Things like this have been borne out by my research. I have found that certain forms of complex equations accurately represent laws of physical concepts that shed light on the relation of mathematics to physical things. I am in no way claiming my theory is the only way to do this; I am just using it as a familiar starting point to speak on this. There are lots of ways to do this without a theory such as mine, but they result in having to perform multiple complex calculations for mathematics, physics, and relativity, then parsing and integrating them separately, which is far more time-consuming.
• asked a question related to Mathematics
Question
Is it logical to assume that the probability created by nature produces symmetry?
And if this is true, is anti-symmetry just a mathematical tool that can be misleading in specific situations?
In my opinion, the symmetry created by nature is most closely described by a class of theorems, generically called the "central limit theorem". Deviations from the conditions of the central limit theorem (dependence of factors, dominance of one or more factors, etc.) generate asymmetry, which is observed in nature much more often than symmetry.
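The point about the central limit theorem can be illustrated numerically. A small sketch of my own (in Python with numpy): draw from a strongly asymmetric distribution and observe that the distribution of sample means becomes nearly symmetric.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def skewness(a):
    """Sample skewness: zero for a perfectly symmetric distribution."""
    a = np.asarray(a, dtype=float)
    return ((a - a.mean()) ** 3).mean() / a.std() ** 3

# A strongly asymmetric distribution: the exponential (theoretical skewness = 2)
samples = rng.exponential(scale=1.0, size=(100_000, 50))

# Averages of 50 draws each; by the CLT these cluster almost symmetrically
means = samples.mean(axis=1)

print(round(skewness(samples.ravel()), 1))  # close to 2.0: the raw draws are skewed
print(round(skewness(means), 1))            # close to 0.3 (about 2 / sqrt(50))
```

The averaging washes out the asymmetry at a rate of roughly 1/sqrt(n), which matches the view that symmetry emerges only under the theorem's conditions, while dependence or a dominant factor leaves the asymmetry in place.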
• asked a question related to Mathematics
Question
I understand that we can produce that number in MATLAB by evaluating exp(1), or possibly using exp(sym(1)) for the exact representation. But e is a very common constant in mathematics, and it is as important as pi to some scholars. So, after all these many versions of MATLAB, why haven't they recognized this valuable constant yet and shown some appreciation by defining it as an individual constant, rather than having to use the exp function for that?
Below is a conversation between me and MATLAB illustrating why MATLAB developers have ZERO common sense... Enjoy the conversation dear fellows and no regret for MATLAB staff...
Me: Hello MATLAB, how are things?
MATLAB: All good! How can I serve you today sir?
Me: Yes, please. Could you give me the value of Euler's number? You know... it's a very popular and fundamental constant in mathematics.
MATLAB: Sure, but wait until I call the exponential function and ask it to evaluate it for me...
Me: Why would you call the exponential function bro??? Isn't Euler's number always constant and its value is well known for thousands of digits?
MATLAB: You will never know sir... Maybe its value will change in the future, so we continuously check its value with the exponential function every time I'm turned on...
Me: You do WHAT!!!
MATLAB: Well... This is a normal procedure sir and I have to do this every time you turn me on...
Me: Stop right there and don't tell me more please...
MATLAB: No, wait sir... I agree with you that this is perhaps one of the most cloddish things ever made in the history of programming, but what can I do sir? The guys who developed me actually believe that this is ingenious.
Me: Ooooh oooh ooooh.... reeeeeally!!! Now ain't that something...
MATLAB: They say sir that this is for your security plus there are no applications for that number sir, so why should they care? Even Euler himself, if resurrected again, would fail to find a single application for that number sir. Probably Jacob Bernoulli, the first to discover this number in 1683, would fail also sir, so why should we bother sir? Though it's a mathematical constant and deeply appreciated by the mathematicians around the world for centuries, we don't respect that number sir and find it useless.
Me: Who decides on the importance of Euler's number as a mathematical quantity? Mathematicians or the guys who develop you?
MATLAB: The guys who develop me sir; right?!?!?!?!?
Me: Bro, I was obsessed with you in the past and I was truly a big fan of yours for more than a decade. But, with the mentality I saw here from the guys who develop you, I believe you will be beset with fundamental issues for a long time to come, bro... No wonder Python has beaten you in many directions and become the most popular programming language in the world. Time to move to Python, you closed-minded thing, and thanks for helping me in my research works over the past decade!!! Goodbye for good.
MATLAB: Wait sir... Don't leave please... As a way to compensate for the absence of Euler's number, we offer the 2 symbols i and j sir to represent the complex unity, so the extra symbol is a good compensation for Euler's number...
Me: What did you just say?
MATLAB: Say what?
Me: You provide 2 symbols to represent the same mathematical complex unity quantity, but you have none for Euler's number???
MATLAB: Yeeeeeeeap... you got it.
Me: You can't be serious!
MATLAB: I swear sir by the name of the machine I'm installed in that this is true; I'm not making that up.
Me: But why 2 symbols for the same constant; pick up one for God sake!
MATLAB: Well... There is a wisdom sir for picking 2 symbols for the same constant not just 1.
Me: What is it?
MATLAB: Have you seen the movie "The Man in the Iron Mask" written by Alexandre Dumas and Randall Wallace or read the novel "The Three Musketeers," by the nineteenth century French author Alexandre Dumas sir?
Me: I only saw the movie. But why???
MATLAB: Then you must have heard the motto the movie heros lived by in their glorious youth, "One for all, all for one".
Me: Yes, I did...
MATLAB: We sir were very impressed by this motto, so we came up with a new one.
Me: Impress me!
MATLAB: "i for j, j for i".
Me: You're killing me...
MATLAB: Wait sir, there is more...
Me: More what?????
MATLAB: Many experts around the world project that the number of letters to represent the complex unity in MATLAB may reach 52 letters sir by the end of 2050, so that you can use any English letter (capital or small) to represent the complex unity. How about this sir? Ain't this ingenious also? Sir ?!!?!?!?!?
Me: And this is when common sense was blown up by a nuclear weapon... This circus is over...
It's interesting to note that Python's developers define Euler's number as a built-in constant in the supporting module math. In particular, math.e gives 2.718281828459045 without the silly computation of the exponential function at 1 as in MATLAB.
Thumbs up Python Developers 👍
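For concreteness, here is the Python behavior referenced above:

```python
import math

# Euler's number is exposed directly as a named constant
print(math.e)  # 2.718281828459045

# It agrees with evaluating the exponential function at 1, to within float rounding
print(math.isclose(math.e, math.exp(1)))  # True
```

No function call is needed to obtain the constant itself, which is exactly the convenience the thread above is asking MATLAB to provide.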
• asked a question related to Mathematics
Question
"What is a non-STEM major? A non-STEM major is a major that isn't in science, technology, engineering, or mathematics. This means non-STEM majors include those in business, literature, education, arts, and humanities. In STEM itself, programs in this category include ones that emphasize research, innovation or the development of new technologies."
The preference of students for non-STEM (Science, Technology, Engineering, and Mathematics) majors over STEM majors can have various implications for a nation's future in terms of science and technology advancement. However, it's important to note that the situation is more nuanced, and the impact of such preferences can vary depending on several factors.
Here are some considerations:
1. Diversity of Skills: While STEM fields are crucial for technological advancement and innovation, non-STEM fields also play a significant role in society. Business, literature, arts, humanities, and other non-STEM fields contribute to the diversity of skills and knowledge within a society, fostering well-rounded individuals who can approach challenges from various perspectives.
2. Economic Contribution: Non-STEM fields can be profitable and contribute to the economy in different ways. For example, the entertainment industry, arts, design, and business sectors generate revenue and create jobs. A balanced mix of STEM and non-STEM professionals is necessary for a thriving economy.
3. Interdisciplinary Collaboration: The future of innovation often lies in interdisciplinary collaboration. Many complex challenges require the integration of STEM and non-STEM expertise. For example, solving environmental issues may require input from environmental scientists (STEM) and policy experts (non-STEM).
4. Education and Awareness: Sometimes, students may choose non-STEM majors due to a lack of awareness about the potential and opportunities in STEM fields. Addressing this issue by promoting STEM education and showcasing the exciting prospects in STEM careers can influence students' choices positively.
5. Global Perspective: The impact of students' preferences for majors extends beyond national boundaries. In a globalized world, innovation and progress depend on collaboration among countries, regardless of their STEM/non-STEM focus.
6. Balancing the Workforce: Nations need a diverse workforce with a mix of STEM and non-STEM professionals. An overemphasis on STEM majors may lead to a shortage of skilled professionals in non-STEM fields and potentially hinder the growth of industries that rely on such expertise.
Ultimately, the ideal scenario is a balanced approach that encourages students to pursue their passions and interests while being informed about the opportunities and challenges in various fields. The promotion of STEM education is crucial for technological advancement, but it should be complemented with efforts to recognize the value of non-STEM fields and encourage a diverse range of career choices. An educated and well-rounded society, with a mix of STEM and non-STEM professionals, is essential for holistic progress and development.
• asked a question related to Mathematics
Question
Apart from the mathematical systems that agree with human feelings and perceptive sensors, there are countless mathematical systems that do not agree with these sensors and our sensory data! A question arises: are the worlds that these mathematical systems evoke real? In this way, there are countless worlds that could be realized, each with its respective physics. Can multiple universes be concluded from this point of view?
Don't we see that only one of these possible worlds is felt by our body?! Why? Have we created mathematics to suit our feelings in the beginning?! And now, in modern physics and the maturation of our powers of understanding, we have created mathematical systems that fit our dreams about the world!? Which of these mathematical devices is actually true about the world and has been realized?! If all of them have come true! So there is no single and objective world and everyone experiences their own world! If only one of these mathematical systems has been realized, how is this system the best?!
If the worlds created by these countless mathematical systems are not real, why do they exist in the human mind?!
The last question is, does the tangibleness of some of these mathematical systems for human senses, and the intangibleness of most of them, indicate the separation of the observable and hidden worlds?!
In my view, every individual rational being instantly formulates his or her own mathematics in order to exist decently and survive in the reality (the realized world of human society) of his or her own immediate milieu at any given space/time . . .
• asked a question related to Mathematics
Question
Here is the list of impact factors for 2023.
Journal Citation Reports 2023
This is not the complete list ... where are all the Human Resource Management journals, for example?
• asked a question related to Mathematics
Question
Given: x = 10sin(0.2t), y = 10cos(0.2t), z = 2.5sin(0.2t) (1)
There exists the following mathematical relationship:
u = x'cos(z) + y'sin(z),
v = -x'sin(z) + y'cos(z), (2)
r = z'
How can one express rd = [x, y, z, u, v, r]' in the form drd/dt = h(rd), where the function h(rd) does not explicitly depend on the time variable t?
My approach is as follows:
From (2), we have:
x' = ucos(z) - vsin(z), y' = usin(z) + vcos(z), z' = r (3) with initial values x(0) = 0, y(0) = 10, z(0) = 0
From (2), we have:
u' = x''cos(z) - x'sin(z)z' + y''sin(z) + y'cos(z)z',
v' = -x''sin(z) - x'cos(z)z' + y''cos(z) - y'sin(z)z',
r' = z''
Differentiating (1), we obtain:
x' = 2cos(0.2t) = 0.2y
x'' = -0.4sin(0.2t) = -0.04x
y' = -2sin(0.2t) = -0.2x (4)
y'' = -0.4cos(0.2t) = -0.04y
z' = 0.5cos(0.2t) = 0.05y
z'' = -0.1sin(0.2t) = -0.01x
Substituting x', x'', y', y'', z', z'' from (4) into the expressions for u', v', r', we get:
u' = -0.04x*cos(z) - 0.01y^2*sin(z) - 0.04y*sin(z) - 0.01x*y*cos(z)
v' = 0.04x*sin(z) - 0.01y^2*cos(z) - 0.04y*cos(z) + 0.01x*y*sin(z)
r' = z'' = -0.01x
with initial values u(0)=2, v(0)=0, r(0)=0.5
The calculation process is accurate, but is the problem-solving approach correct?
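As a sanity check, one can integrate the autonomous system drd/dt = h(rd) numerically from the given initial values and compare the result against the analytic trajectory (1). A minimal sketch with a fixed-step RK4 integrator (no SciPy dependency assumed):

```python
import math

def h(s):
    """Autonomous right-hand side for rd = [x, y, z, u, v, r]."""
    x, y, z, u, v, r = s
    return [
        u * math.cos(z) - v * math.sin(z),                    # x'
        u * math.sin(z) + v * math.cos(z),                    # y'
        r,                                                    # z'
        -0.04*x*math.cos(z) - 0.01*y*y*math.sin(z)
            - 0.04*y*math.sin(z) - 0.01*x*y*math.cos(z),      # u'
         0.04*x*math.sin(z) - 0.01*y*y*math.cos(z)
            - 0.04*y*math.cos(z) + 0.01*x*y*math.sin(z),      # v'
        -0.01*x,                                              # r'
    ]

def rk4_step(s, dt):
    """One classical Runge-Kutta step."""
    k1 = h(s)
    k2 = h([si + 0.5*dt*ki for si, ki in zip(s, k1)])
    k3 = h([si + 0.5*dt*ki for si, ki in zip(s, k2)])
    k4 = h([si + dt*ki for si, ki in zip(s, k3)])
    return [si + dt/6*(a + 2*b + 2*c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

# Initial values from the derivation: x(0)=0, y(0)=10, z(0)=0, u(0)=2, v(0)=0, r(0)=0.5
s, dt = [0.0, 10.0, 0.0, 2.0, 0.0, 0.5], 0.01
for _ in range(1000):           # integrate to t = 10
    s = rk4_step(s, dt)

print(s[0], 10*math.sin(0.2*10))   # both ≈ 9.0929...
```

If the closed-form h(rd) is correct, the integrated x, y, z must track 10sin(0.2t), 10cos(0.2t), 2.5sin(0.2t) to within the integrator's error.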
Qamar Ul Islam Thank you for your prompt and helpful response. I appreciate your insights and suggestions on how to resolve the discrepancy. I will follow your advice and check my code, numerical method, and solver parameters. I hope to get a better result with your guidance. Thank you again for your time and expertise.
• asked a question related to Mathematics
Question
I have a dataset of rice leaf for training and testing in machine learning. Here is the link: https://data.mendeley.com/datasets/znsxdctwtt/1
I want to develop my project with these techniques:
1. RGB Image Acquisition & Preprocessing (HSV Conversion, Thresholding and Masking)
2. Image Segmentation (GLCM matrices, Wavelets (DWT))
3. Classification (SVM, CNN, KNN, Random Forest)
4. Results with MATLAB code.
• But I am confused about the final scores in the confusion matrices, so I need a technique to check which extraction method works best for this dataset.
• My main target is detecting normal and abnormal (diseased) leaves, with labels.
#image #processing #mathematics #machinelearning #matlab #deeplearning
There are two commonly used extraction techniques that can be appropriate for rice plant disease detection: (1) color-based extraction and (2) texture-based extraction. The most appropriate technique depends on the specific requirements and characteristics of the dataset and on the detection algorithm being used.
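One way to decide between the two is to score each extractor with the same classifier and the same validation protocol, then keep the one with the better accuracy. Below is a minimal, self-contained Python sketch of that protocol on synthetic stand-in data; the images, features, and parameters here are all hypothetical, and with the real dataset the extractors would be HSV/color histograms versus GLCM or wavelet features:

```python
import random
import statistics

random.seed(0)

# Hypothetical 16x16 grayscale "leaves": healthy ones are smooth,
# diseased ones carry dark, noisy lesion pixels.
def make_image(diseased):
    img = [[random.gauss(0.6, 0.02) for _ in range(16)] for _ in range(16)]
    if diseased:
        for _ in range(30):
            i, j = random.randrange(16), random.randrange(16)
            img[i][j] = random.gauss(0.2, 0.15)   # lesion pixel
    return img

# Candidate extractor 1: color-based (global mean intensity)
def color_feature(img):
    return statistics.mean(p for row in img for p in row)

# Candidate extractor 2: texture-based (global variance, a crude texture proxy)
def texture_feature(img):
    return statistics.pvariance([p for row in img for p in row])

def loo_accuracy(extract, images, labels):
    """Leave-one-out accuracy of a nearest-centroid classifier on one feature."""
    feats = [extract(im) for im in images]
    correct = 0
    for k in range(len(images)):
        centroid = {}
        for lab in set(labels):
            vals = [feats[i] for i in range(len(feats)) if labels[i] == lab and i != k]
            centroid[lab] = sum(vals) / len(vals)
        pred = min(centroid, key=lambda lab: abs(feats[k] - centroid[lab]))
        correct += (pred == labels[k])
    return correct / len(images)

images = [make_image(False) for _ in range(10)] + [make_image(True) for _ in range(10)]
labels = ["normal"] * 10 + ["disease"] * 10

acc_color = loo_accuracy(color_feature, images, labels)
acc_texture = loo_accuracy(texture_feature, images, labels)
print(f"color-based: {acc_color:.2f}  texture-based: {acc_texture:.2f}")
```

With the real rice-leaf dataset the idea is the same: hold the classifier and the train/test split fixed, swap only the extraction step, and compare the resulting accuracies or confusion matrices.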
• asked a question related to Mathematics
Question
I am trying to find a mathematical model of a loudspeaker in an enclosure. By this, I mean an equation that describes the electrical impedance of the speaker as a function of the physical characteristics of the cone and the air in its enclosure.
As an analogy, I am familiar with the model of a servo motor, which relates its electrical impedance to the inertia of the load and the shaft speed. You can read an article I wrote about that at https://www.researchgate.net/publication/355587208_Regenerative_Brake_Charges_Your_Caving_Lamp_Whilst_You_Abseil.
Interestingly, one of the terms in that expression is the inertia divided by the product of the torque constant and the voltage constant, which has the dimensions of capacitance, showing that an electrical model of a servo motor includes a large capacitance. I am looking for something similar for loudspeakers, which shows how the physical characteristics of the enclosure and speaker are reflected in its electrical circuit model.
Dear friend David Gibson
Understanding the relationship between the physical characteristics of a loudspeaker and its enclosure and their representation in a mathematical circuit model is an intriguing endeavor.
The mathematical modeling of a loudspeaker and its enclosure involves the interplay of electrical, mechanical, and acoustical components.
Let's explore some key aspects:
1. Electrical components: The electrical model of a loudspeaker typically includes an electrical impedance that represents the combined effect of the voice coil resistance, inductance, and the motional branch. The voice coil resistance is set by the conductor material and dimensions. The voice coil inductance is influenced by the coil winding geometry and the magnetic circuit design. The apparent capacitance arises chiefly from the motional impedance: the moving mass, reflected through the force factor Bl, appears on the electrical side as a capacitance, the same effect as the inertia-derived capacitance in your servo-motor analogy.
2. Mechanical components: The mechanical behavior of the loudspeaker driver is typically represented by a combination of mass, compliance, and damping. The mass reflects the moving components of the driver, such as the diaphragm and voice coil. Compliance characterizes the flexibility of the suspension system that holds the diaphragm in place. Damping accounts for energy dissipation mechanisms, such as losses due to air resistance and materials.
3. Acoustical components: The interaction between the loudspeaker driver and the air in the enclosure results in sound radiation. The enclosure affects the loudspeaker's performance by providing an acoustic load, influencing resonances, and contributing to the overall system response. The enclosure geometry, volume, and construction materials play a crucial role in shaping the acoustic output.
Creating a comprehensive mathematical model that captures all the complexities of a loudspeaker and its enclosure is challenging. However, various simplified models exist, such as the Thiele-Small parameters, which describe the loudspeaker's electrical and mechanical characteristics in a simplified manner. These parameters, combined with enclosure-related parameters like volume and port characteristics, can provide insights into the system behavior.
It's important to note that loudspeaker design and modeling involve a multidisciplinary approach, combining electrical engineering, mechanical engineering, acoustics, and signal processing. Advanced modeling techniques, such as finite element analysis (FEA) and boundary element method (BEM), are often employed for more accurate representations.
To delve deeper into the specific mathematical equations and modeling techniques, consulting specialized literature, research papers, or textbooks on loudspeaker design and acoustics can provide valuable insights.
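As a rough sketch of the Thiele-Small picture above: the electrical input impedance of a driver in free air can be written as the blocked impedance Re + jωLe plus a motional term (Bl)²/Zm, where Zm = Rms + jωMms + 1/(jωCms) is the mechanical impedance. The parameter values below are purely illustrative, not taken from any real driver:

```python
import cmath
import math

# Hypothetical Thiele-Small parameters for an illustrative driver
Re  = 6.0      # voice-coil DC resistance, ohm
Le  = 0.5e-3   # voice-coil inductance, H
Bl  = 7.0      # force factor, T*m
Mms = 0.020    # moving mass, kg
Cms = 1.0e-3   # suspension compliance, m/N
Rms = 1.5      # mechanical damping, kg/s

def impedance(f):
    """Electrical input impedance: blocked part plus motional part (Bl)^2 / Zm."""
    w = 2 * math.pi * f
    Zm = Rms + 1j * w * Mms + 1 / (1j * w * Cms)   # mechanical impedance
    return Re + 1j * w * Le + Bl**2 / Zm

fs = 1 / (2 * math.pi * math.sqrt(Mms * Cms))      # free-air resonance frequency
print(f"fs = {fs:.1f} Hz, |Z(fs)| = {abs(impedance(fs)):.1f} ohm")
print(f"|Z(200 Hz)| = {abs(impedance(200)):.1f} ohm")
```

At resonance the mass and compliance reactances cancel, so the motional term peaks at (Bl)²/Rms and the impedance magnitude shows the familiar hump above Re. Mounting the driver in a sealed box adds stiffness in series with the suspension, lowering the effective compliance and raising fs, which is exactly how the enclosure shows up in the electrical model.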
Let us keep exploring this interesting topic.
• asked a question related to Mathematics
Question
This question is dedicated only to sharing important research of OTHER RESEARCHERS (not our own) about complex systems, self-organization, emergence, self-repair, self-assembly, and other exciting phenomena observed in complex systems.
Please keep in mind that each piece of research has to promote complex systems and help others to understand them in the context of any scientific field. We can educate each other in this way.
Experiments, simulations, and theoretical results are equally important.
Links to videos and animations will help everyone to understand the given phenomenon under study quickly and efficiently.
Feasibility study for estimating optimal substrate parameters for sustainable green roof in Sri Lanka
Shuraik A. Kader, Velibor Spalevic & Branislav Dudic
Environment Development and Sustainability 2022(4):1-27
DOI: 10.1007/s10668-022-02837-y
Abstract:
In twenty-first-century buildings, green roof systems are envisioned as a great solution for improving environmental sustainability in urban ecosystems, and they help to mitigate various human health hazards caused by climatic pollution. This study determines the feasibility of using five domestic organic wastes, including sawdust, wood bark, biochar, coir, and compost, as sustainable substrates for green roofs, as compared to the classical Sri Lankan base medium (fertiliser + potting mix), in terms of the physicochemical and biological parameters associated with growing mediums. Comprehensive methodologies were devised to determine the thermal conductivity and electric conductivity of growing mediums. According to preliminary experimental results, the most suitable composition for green roof substrates comprised 60% organic waste and 40% base medium. The sawdust growing medium exhibited the highest moisture content and the lowest density. The biochar substrate was the best-performing medium, with the highest drought resistance and vegetation growth. The wood bark substrate had the highest thermal resistance. Growing mediums based on compost, sawdust, and coir produced the best results in terms of nitrate, phosphate, pH, and electric conductivity (EC). This study provided a standard set of comprehensive comparison methodologies utilising the physicochemical and biological properties required for substrate characterization. The findings of this research work have strong potential to be used in the future for selecting the most suitable lightweight growing medium for a green roof based on stakeholder requirements.
###
This research could save a lot of energy in housing, government, education, and industrial buildings. What is your opinion of it?
• asked a question related to Mathematics
Question
The view that the mathematical training of physicists, even leading ones, is insufficient finds confirmation. Precisely such criticisms were levelled at Landau by V.A. Fock and N.N. Bogolyubov. Mathematics is not reducible to a set of formulas and the solving of equations. A formulation of the principle of equivalence between a gravitational field and an accelerated reference frame, based on the standard epsilon-delta method of mathematical analysis, is proposed (draft).
Another article
• asked a question related to Mathematics
Question
Hi, my name is Debi. I'm a second-month master's student majoring in Mathematics Education at Yogyakarta State University. Please advise me on trending topics in mathematics education, especially learning media for math and the psychology of learning math. Could you share the situation in your country or at your university? Thank you so much.
Development and Validation of Instructional Materials in Teaching Trigonometry
• asked a question related to Mathematics
Question
I am actually working in this field, but sometimes I don't understand what I should do. If anyone could supervise me, I would be thankful.
I have a dataset of rice leaf for training and testing in machine learning. Here is the link: https://data.mendeley.com/datasets/znsxdctwtt/1
I want to develop my project with these techniques:
1. RGB Image Acquisition & Preprocessing (HSV Conversion, Thresholding and Masking)
2. Image Segmentation (GLCM matrices, Wavelets (DWT))
3. Classification (SVM, CNN, KNN, Random Forest)
4. Results with MATLAB code.
• But I am confused about the final scores in the confusion matrices, so I need a technique to check which extraction method works best for this dataset.
• My main target is detecting normal and abnormal (diseased) leaves, with labels.
Attached image is collected from a paper.
Sure, I'd be happy to provide you with guidelines for your Matlab project. Please reach out to me via email at erickkirui@kabarak.ac.ke, and I will promptly assist you with the necessary guidance for your project
• asked a question related to Mathematics
Question
Image processing is the beauty of mathematics, because many basic parts of mathematics are used in this field. But I am confused about extraction methods.
Image data processing is one of the most under-explored problems in the data science community.
Every developer has a unique way of doing it. Some of the tools and platforms used in image preprocessing include Python, Pytorch, OpenCV, Keras, Tensorflow, and Pillow.
Here's a useful link to the best programs available for image preprocessing.
These are some great software options in my opinion. Let me know if this is helpful!
• asked a question related to Mathematics
Question
The statement inquires about the potential mathematical relationship between entropy and standard deviation. Entropy and standard deviation are both concepts used in statistics and information theory.
Entropy is a measure of uncertainty or randomness in a probability distribution. It quantifies the average amount of information required to describe an event or a set of outcomes. It is commonly used in the field of information theory to assess the efficiency of data compression algorithms or to analyze the randomness of data.
On the other hand, standard deviation is a statistical measure that quantifies the dispersion or variability of a set of data points. It provides information about the average distance of data points from the mean or central value. It is widely used in data analysis to understand the spread of data and to compare the variability among different datasets.
While entropy and standard deviation are both statistical measures, they capture different aspects of data. Entropy focuses on the uncertainty or information content, while standard deviation focuses on the dispersion or variability. As such, there is no general, distribution-free mathematical relationship between entropy and standard deviation.
However, depending on the specific context and the nature of the data, there can be indirect connections between entropy and standard deviation. For instance, within a fixed parametric family such as the Gaussian, higher entropy corresponds exactly to a larger standard deviation, but this relationship is not universally applicable across arbitrary distributions.
In summary, while entropy and standard deviation are both important statistical measures, they serve different purposes and do not have a direct mathematical relationship. The relationship between them, if any, would depend on the specific characteristics of the data being analyzed.
Entropy and standard deviation are both measures used in different fields of study, but they capture different aspects of data and do not have a direct mathematical relationship. However, there are some connections and similarities that can be explored.
Statistical Interpretation: In statistics, entropy is typically associated with information theory, while standard deviation is a measure of dispersion. Entropy measures the amount of uncertainty or information content in a dataset, while standard deviation quantifies the spread or variability of the data points around the mean.
Probability Distributions: Both entropy and standard deviation are related to probability distributions. Entropy is often used to measure the uncertainty or randomness in a probability distribution, while standard deviation measures the dispersion of the data points around the mean of the distribution.
Gaussian Distribution: In the case of a Gaussian (normal) distribution, the standard deviation plays a crucial role in determining the shape and spread of the distribution. The entropy of a Gaussian distribution depends on its variance (square of standard deviation). Specifically, the entropy of a Gaussian distribution increases with increasing variance, indicating higher uncertainty.
Maximum Entropy Distribution: In information theory, the concept of maximum entropy refers to finding the probability distribution that maximizes the entropy while satisfying certain constraints. In some cases, the maximum entropy distribution can be a Gaussian distribution, where the standard deviation determines the spread of the distribution.
While there are connections and parallels between entropy and standard deviation, they serve different purposes and are applied in different contexts. Entropy focuses on the information content and uncertainty, while standard deviation measures dispersion and variability.
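The Gaussian case above can be made concrete: the differential entropy of a normal distribution is H = ½·ln(2πeσ²) nats, which grows monotonically with the standard deviation σ. A small Python illustration:

```python
import math

def gaussian_entropy(sigma):
    """Differential entropy of N(mu, sigma^2) in nats: 0.5 * ln(2*pi*e*sigma^2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma**2)

# Entropy rises as the standard deviation rises
for sigma in (0.5, 1.0, 2.0, 4.0):
    print(f"sigma = {sigma}: H = {gaussian_entropy(sigma):.4f} nats")
```

Each doubling of σ adds exactly ln 2 ≈ 0.693 nats, so within the Gaussian family entropy and standard deviation move together, even though no such formula holds for arbitrary distributions.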
• asked a question related to Mathematics
Question
In mathematics, many authors are working in the area of integer sequences, Fibonacci polynomials, the Perrin sequence, and so on.
What are the current research topics in this subject?
Please suggest some research topics related to the Fibonacci sequence.
James Jamesfathiaraj : You started the following post/discussion thread on June 01, 2023, and I quote verbatim:
"In mathematics, many authors working in the area of integer sequence, fibonacci polynomial, perin sequence......
Now what is the current research topics in this subject?.
I request , suggest some research topics which is related to Fibonacci sequence."
The topic that I am about to suggest to you may not necessarily be related to the Fibonacci sequence, but I think you can try looking up recursively enumerable sets. Indeed, the problem of determining whether a specific rational number greater than unity is an abundancy index (i.e. a ratio of the form sigma(m)/m, for some positive integer m and where sigma(m) is the classical sum of divisors of m) or an abundancy outlaw (see Holdener and Stanton's paper [https://www.researchgate.net/publication/264925121] published in JIS [2007]) is equivalent to finding out whether the set of abundancy indices (or the set of abundancy outlaws, for that matter) is a recursive set. (You can also refer to the following paper: https://cs.uwaterloo.ca/journals/JIS/VOL23/Holdener/holdener4.pdf, where it is mentioned that: "In the early 1970’s, [C. W.] Anderson conjectured that the set Image(I) [the set of abundancy indices] is a recursive set, meaning that there exists a recursive algorithm that can be employed to determine whether or not a given rational number k/m is an outlaw. This remains an open problem today.")
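To make the abundancy index concrete, here is a small Python sketch that computes sigma(m)/m exactly with rational arithmetic; perfect numbers are precisely those with index 2:

```python
from fractions import Fraction
from math import isqrt

def sigma(n):
    """Classical sum of divisors of n, found by trial division up to sqrt(n)."""
    total = 0
    for d in range(1, isqrt(n) + 1):
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
    return total

def abundancy_index(n):
    """sigma(n)/n as an exact rational number."""
    return Fraction(sigma(n), n)

print(abundancy_index(6), abundancy_index(28))   # 2 2  (perfect numbers)
```

A brute-force search like this can show that a given rational is attained as an index, but it can never prove that a rational is an outlaw, which is one way to see why Anderson's recursiveness question remains open.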
If you find this particular topic interesting, then you may contact me privately via e-mail (it is indicated in this recent paper of mine: https://www.researchgate.net/publication/370979239), and we can collaborate.
• asked a question related to Mathematics
Question