Science topic

Applied Mathematics - Science topic

Applied Mathematics is a common forum for all branches of applicable Mathematics
Questions related to Applied Mathematics
  • asked a question related to Applied Mathematics
Question
2 answers
If breastfeeding correlates, parsimoniously (as the simplest explanation with the most evidence), with reduced risk of autism in infants, then it could also lower toxic masculinity in adult males, especially because pegging is already thought to lower toxic masculinity.
Works Cited
Ohnemus, Alexander. "Why Women Should Breastfeed Their Male Romantic Partners." ResearchGate.net, www.researchgate.net/publication/372401611_Why_Women_Should_Breastfeed_Their_Male_Romantic_Partners. Accessed 8 Sep. 2023.
Ohnemus, Alexander. "Erotic Lactation Reduces Toxic Masculinity Theorem 'ELRTM Theorem' (Group Theory)(Dynamic Systems)(Differential Equations) A theorem." ResearchGate.net, www.researchgate.net/publication/373448419_Erotic_Lactation_Reduces_Toxic_Masculinity_Theorem_ELRTM_Theorem_Group_TheoryDynamic_SystemsDifferential_Equations_A_theorem. Accessed 8 Sep. 2023.
Ohnemus, Alexander. "Differential Equations of Erotic Lactation (Group Theory)(Dynamic Systems)(Differential Equations)." ResearchGate.net, www.researchgate.net/publication/373711079_Differential_Equations_of_Erotic_Lactation_Group_TheoryDynamic_SystemsDifferential_Equations. Accessed 8 Sep. 2023.
Relevant answer
Answer
Well, breastfeeding should then be introduced to cure toxic masculinity in 40-year-old gamers, and perhaps even Donald Trump.
  • asked a question related to Applied Mathematics
Question
3 answers
Hello, I'm about to join a team working on auditory speech perception using iEEG. It is planned that I will use Temporal Response Function (TRF) to determine correlations between stimulus characteristics (variations in the acoustic signal envelope, for example) and characteristics of recorded neuronal activity.
I would therefore like to fully understand the different stages of data processing carried out, as well as the reasoning and hypotheses behind them.
I took a look at the article presenting the method
and I studied the matrix calculations
But several questions remain.
In particular, regarding this formula:
w = (S^T S)^-1 S^T r
where S is a matrix of dimension (T × tau) giving the characteristics of the stimulus over time (T) as a function of the different temporal windows/shifts (tau):
S =
[ s(tmin - taumin) ... s(tmin - tau) ... s(tmin - taumax) ]
[        ...              ...               ...           ]
[ s(tmax - taumin) ... s(tmax - tau) ... s(tmax - taumax) ]
and where r is a matrix of dimension (T × N) giving the recorded activity of each channel over time.
  1. Why compute S^T S? What does the product of this operation represent?
  2. Why compute (S^T S)^-1? What does this operation bring?
  3. Why compute (S^T S)^-1 S^T? What is represented in this product?
  4. And finally w = (S^T S)^-1 S^T r. What does w, of dimension tau × N, really represent?
Hypothesis: S^T S represents the "covariance" of each time window with the others (high covariance on the diagonal, because each diagonal entry is the product of identical columns; high covariance for adjacent columns, because they correspond to close time windows; and low covariance for distant columns whose time windows are far apart and therefore share little mutual information). Perhaps (S^T S)^-1 S^T (of dimension tau × T) makes it possible to obtain a representation of the stimulus according to time windows and time, but with any correlations that may exist between windows removed? However, the representation of the stimulus in this product remains very unclear to me. Finally, w may represent the weights (or correlations) of each of the N channels for the different time windows of the signal.
My incomprehension mainly concerns the representation of the stimulus by (S^T S)^-1 S^T, and I would like to better understand the reasoning behind these operations and the benefits they bring to the decoding of neural activity. I would like to thank anyone familiar with TRFs for any help they can give me. My reasoning may be wrong or incomplete; any contribution would be appreciated.
Relevant answer
Answer
Here's a follow-up, Camille.
Weight Matrix w in TRF Analysis:
The weight matrix w is a fundamental output of Temporal Response Function (TRF) analysis, providing insights into how different aspects of the stimulus relate to neural activity.
Mathematical Representation:
- Each row of w corresponds to a specific time window in the stimulus, denoted as t=1, t=2, t=3, and so on.
- Each column of w corresponds to a neural activity channel, represented as Channel 1, Channel 2, and so forth.
- The values in the weight matrix w are calculated using the formula:
w = (S^T S)^-1 S^T r
Example:
Suppose we have a simplified weight matrix w, where rows represent different time windows and columns represent neural channels:
| w(1, Channel 1)   w(1, Channel 2)   ...   w(1, Channel N) |
| w(2, Channel 1)   w(2, Channel 2)   ...   w(2, Channel N) |
| w(3, Channel 1)   w(3, Channel 2)   ...   w(3, Channel N) |
In this matrix:
- w(1, Channel 1) represents the weight or correlation between the first time window (t=1) of the stimulus and neural Channel 1.
- w(2, Channel 2) represents the weight or correlation between the second time window (t=2) of the stimulus and neural Channel 2.
- Each value w(i, j) captures how strongly a specific time window influences the activity in a particular neural channel.
Interpretation:
- Larger positive values of w indicate that a particular time window has a strong positive influence on the neural activity in a given channel.
- Smaller positive values indicate a positive but weaker influence.
- Negative values suggest a negative correlation, meaning that the time window has an inhibitory effect on neural activity in that channel.
Practical Use:
By examining the weight matrix w, researchers can pinpoint which temporal aspects of the stimulus are most relevant for explaining neural responses. This information is crucial for understanding how auditory stimuli are processed in the brain and aids in the decoding of auditory speech perception.
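As a rough illustration of how w is obtained in practice, here is a minimal NumPy sketch. The data, the toy dimensions, and the names (stim, resp, n_lags) are made up for the example and are not from the paper under discussion; real pipelines would typically also add regularization (ridge regression) rather than plain least squares.

import numpy as np

# Toy dimensions: T time samples, N channels, lags 0 .. n_lags-1 (in samples).
T, N, n_lags = 1000, 4, 20
rng = np.random.default_rng(0)
stim = rng.standard_normal(T)          # stimulus envelope s(t)
resp = rng.standard_normal((T, N))     # recorded activity r(t, channel)

# Build the lagged stimulus matrix S (T x n_lags): column k holds s(t - k).
S = np.zeros((T, n_lags))
for k in range(n_lags):
    S[k:, k] = stim[:T - k]

# w = (S^T S)^-1 S^T r, of dimension n_lags x N; lstsq solves the same
# least-squares problem without forming the inverse explicitly.
w, *_ = np.linalg.lstsq(S, resp, rcond=None)
print(w.shape)   # (n_lags, N): one temporal response function per channel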
  • asked a question related to Applied Mathematics
Question
2 answers
Hello everyone,
I am Danillo Souza, currently a post-doctoral researcher at the Basque Center for Applied Mathematics (BCAM), working in the Mathematical, Computational and Experimental Neuroscience group (MCEN). One of the challenges of my work is to derive optimal tools to extract topological and/or geometrical information from big data.
I am trying to submit a paper to arXiv and, unfortunately, an endorsement in Physics - Data Analysis and Statistics is required. I was wondering whether a researcher here could be my endorser in this area.
Thank you in advance for your efforts in trying to help me.
With kind regards,
Danillo
Danillo Barros De Souza requests your endorsement to submit an article to the physics.data-an section of arXiv. To tell us that you would (or would not) like to endorse this person, please visit the following URL: https://arxiv.org/auth/endorse?x=UOKIX3 If that URL does not work for you, please visit http://arxiv.org/auth/endorse.php and enter the following six-digit alphanumeric string: Endorsement Code: UOKIX3
Relevant answer
Answer
Publish your paper for free
_________________________
Dear Researchers and postgraduate students
MESOPOTAMIAN JOURNAL OF BIG DATA (MJBD), issued by Mesopotamian Academic Press, welcomes original research articles, short papers, long papers, and review papers for publication in its next issue. The journal does not require any publication fee or article-processing charge; all papers are published for free.
Journal info.
1- Publication fee: free
2- Frequency: 1 issue per year
3- Subject: computer science, Big Data, parallel processing, parallel computing, and any related fields
4- ISSN: 2958-6453
5- Published by: Mesopotamian Academic Press.
Managing Editor: Dr. Ahmed Ali
The journal is indexed in:
1- Crossref
2- DOAJ
3- Google Scholar
4- ResearchGate
  • asked a question related to Applied Mathematics
Question
40 answers
Mathematical Generalities: ‘Number’ may be termed a general term, but real numbers, a subset of numbers, are sub-general. Clearly, it is a quality: “having one member, having two members, etc.”; and here one, two, etc., when taken as nominatives, lose their significance, and are based primarily only on the adjectival use. Hence the justification for the adjectival (qualitative) primacy of numbers as universals. While defining one kind of ‘general’ another sort of ‘general’ may naturally be involved in the definition, insofar as they pertain to an existent process and not when otherwise.
Why are numbers and shapes so exact? ‘One’, ‘two’, ‘point’, ‘line’, etc. are all exact. The operations on these notions are also intended to be exact. But irrational numbers are not so exact in measurement. If notions like ‘one’, ‘two’, ‘point’, ‘line’, etc. are defined to be so exact, then it is not by virtue of the exactness of these substantive notions, but instead, due to their being defined as exact. Their adjectival natures: ‘being a unity’, ‘being two unities’, ‘being a non-extended shape’, etc., are not so exact.
A quality cannot be exact, but may be defined to be exact. It is in terms of the exactness attributed to these notions by definition that the adjectives ‘one’, ‘two’, ‘point’, ‘line’, etc. are exact. This is why the impossibility of fixing these (and other) substantive notions as exact miss our attention. If in fact these are inexact, then there is justification for the inexactness of irrational, transcendental, and other numbers too.
If numbers and shapes are in fact inexact, then not only irrational numbers, transcendental numbers, etc., but all exact numbers and the mathematical structures should remain inexact if they have not been defined as exact. And if behind the exact definitions of exact numbers there are no exact universals, i.e., quantitative qualities? If the formation of numbers is by reference to experience (i.e., not from the absolute vacuum of non-experience), their formation is with respect to the quantitatively qualitative and thus inexact ontological universals of oneness, two-ness, point, line, etc.
Thus, mathematical structures, in all their detail, are a species of qualities, namely, quantitative qualities, defined to be exact and not naturally exact. Quantitative qualities are ontological universals, with their own connotative and denotative versions.
Natural numbers, therefore, are the origin of primitive mathematical experience, although complex numbers may be more general than all others in a purely mathematical manner of definition.
Relevant answer
  • asked a question related to Applied Mathematics
Question
3 answers
Usually, to assess the financial performance of a co-operative society, we use ratio analysis. Are there any other tools, beyond ratio analysis, for evaluating the financial performance or growth of the organization? Please suggest some.
Relevant answer
Answer
Good day, Kannamudaiyar!
In addition to ratio analysis, some econometric and mathematical models could be used to analyze the financial statements/performances of cooperative societies, such as:
1) Time Series Analysis: cointegration test (Engle-Granger, ARDL) for long-term equilibrium, ARIMA, VAR, etc...
2) Panel Data Analysis: Fixed Effect (One-way, Two-way) Models, Random Effect Model...
3) Event Analysis: To assess the impact of specific events such as mergers, political or policy changes, etc., Dummy variable regression, ...
4) Data Envelopment Analysis (DEA): To analyze the relative efficiency of the multiple units.
... and many others.
Notably, the choice of model should correspond to the specific research queries and availability of data.
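As a minimal sketch of the time-series option above (point 1), here is one way an ARIMA model could be fitted to a financial ratio series. The statsmodels package and the yearly current-ratio figures below are assumptions for illustration only, not data from the question.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical yearly current-ratio series for a co-operative society.
ratio = np.array([1.8, 1.9, 2.1, 2.0, 2.3, 2.2, 2.4, 2.6, 2.5, 2.7])

# Fit a simple ARIMA(1, 1, 1) model and forecast the next three years.
result = ARIMA(ratio, order=(1, 1, 1)).fit()
print(result.summary())
print(result.forecast(steps=3))

Once data for several societies and several ratios are available, the same series could instead feed a cointegration test or a panel-data model.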
  • asked a question related to Applied Mathematics
Question
4 answers
Hi dears, while implementing RSA in Python I found that if p and q are large,
the decryption phase takes a long time to execute.
For example, in this code I select p = 23099, q = 23059, message = 3,
and it takes 26 minutes to decrypt the encrypted message.
So I wonder how we can select large prime numbers for RSA when decryption cannot execute in the desired time.
I therefore think that we cannot use RSA in real-time systems.
Do you agree with me?
the source code is:
from math import gcd
import time

# Define a function to perform RSA key generation.
def RSA(p: int, q: int, message: int):
    # Calculate the modulus n.
    n = p * q
    print(n)
    # Calculate the totient t.
    t = (p - 1) * (q - 1)
    start = time.time()
    # Select the public key e: the smallest integer >= 2 coprime with t.
    for i in range(2, t):
        if gcd(i, t) == 1:
            e = i
            break
    print("e =", e)
    # Select the private key d by brute-force search for the modular inverse of e.
    j = 0
    while True:
        if (j * e) % t == 1:
            d = j
            break
        j += 1
    print("d =", d)
    end = time.time()
    # print(end - start)

# RSA(p=7, q=17, message=3)
RSA(p=23099, q=23059, message=3)

# Keys produced by the call above.
d = 106518737
n = 532639841
e = 5

start = time.time()
ct = (3 ** e) % n       # encrypt the message 3
print(ct)
pt = (ct ** d) % n      # decrypt (this is the slow step)
end = time.time()
print(end - start)
print(pt)
#----------------------------------------------------
Relevant answer
Answer
Hi Mohammad,
If you're using Python, I'd suggest using pow(ct, d, n) instead of pt=(ct ** d) % n. That should improve the time. As Wim suggests, modular exponentiation improves the running time of large exponents.
Cheers
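A minimal sketch with the values already computed in the question (d = 106518737, n = 532639841, e = 5), just to show the built-in modular exponentiation at work:

import time

d, n, e = 106518737, 532639841, 5
ct = pow(3, e, n)            # encrypt the message 3

start = time.time()
pt = pow(ct, d, n)           # decrypt via square-and-multiply modular exponentiation
print(pt, "decrypted in", time.time() - start, "seconds")

# The original pt = (ct ** d) % n first builds an integer with roughly 2.5e8
# digits and only then reduces it modulo n, which is why it takes many minutes.

With pow(ct, d, n) the decryption is effectively instantaneous, so the slowness observed in the question comes from the implementation, not from RSA itself.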
  • asked a question related to Applied Mathematics
Question
8 answers
importance of applied mathematics
Relevant answer
Answer
Here is a simple example. I worked for a company that produced a product out of some aluminum bars. The bars came in certain lengths. The bars had to be cut because our product needed several different lengths. Each single unit of our product needed a certain number of bar sections of one length, a different number of bar sections at a different length, and so on. Applied math was used to figure out how to cut the bars into sections in the way that minimizes material waste.
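As a rough sketch of what such a calculation looks like (the bar length and section lengths below are made-up figures, and the full planning problem, the classic cutting-stock problem, is normally solved with integer programming rather than enumeration), one can list the feasible single-bar cutting patterns and rank them by waste:

from itertools import product

BAR_LENGTH = 600                      # stock bar length (made-up figure)
piece_lengths = [250, 175, 90]        # section lengths needed per unit (made up)

# Enumerate every feasible cutting pattern for one bar and record its offcut.
patterns = []
ranges = [range(BAR_LENGTH // L + 1) for L in piece_lengths]
for counts in product(*ranges):
    used = sum(c * L for c, L in zip(counts, piece_lengths))
    if 0 < used <= BAR_LENGTH:
        patterns.append((BAR_LENGTH - used, counts))

patterns.sort()
for waste, counts in patterns[:5]:
    print(f"cut {counts} pieces of {piece_lengths} -> waste {waste}")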
  • asked a question related to Applied Mathematics
Question
3 answers
Up to this point, I thought that when doing a dimensional analysis using the Buckingham-Pi theorem, the exponents are generally quite "simple": for example 0.5, 1, or 2.
However, I have now found a paper where the exponents of the dimensionless numbers formulated by a data-driven approach are "strange" values like 0.07 or 0.304. This seems a bit odd to me and brings me to these questions: are such exponents (still) physically meaningful? If so, in which cases does this type of exponent occur (and why)?
Thank you very much!
Relevant answer
Answer
Some of my thoughts:
In physics, theories about phase transition could give various non-integer exponents (see e.g. https://en.wikipedia.org/wiki/Ising_critical_exponents).
Sometimes, exponents can arise from a specific type of ODE (e.g. https://en.wikipedia.org/wiki/Cauchy%E2%80%93Euler_equation). For example, I am working on a simple physics model and obtained a second-order ODE. One coefficient is a definite integral of a special function (I can only calculate it numerically). Because of this coefficient, the solution of the ODE has non-integer exponents.
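As a minimal numerical sketch (the coefficients a and b below are invented for illustration): for the Cauchy-Euler equation x^2 y'' + a x y' + b y = 0, substituting y = x^r gives the indicial equation r(r - 1) + a r + b = 0, and its roots, which are generally non-integer, are exactly the exponents that appear in the solution.

import numpy as np

# Indicial equation of x^2 y'' + a x y' + b y = 0:  r^2 + (a - 1) r + b = 0.
a, b = 0.4, -1.3                 # illustrative coefficients only
r = np.roots([1.0, a - 1.0, b])
print("exponents:", r)           # typically irrational, here about 1.48 and -0.88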
  • asked a question related to Applied Mathematics
Question
132 answers
SOURCE OF MAJOR FLAWS IN COSMOLOGICAL THEORIES:
MATHEMATICS-TO-PHYSICS APPLICATION DISCREPANCY
Raphael Neelamkavil, Ph.D., Dr. phil.
The big bang theory has many limitations. These are,
(1) the uncertainty regarding the causes / triggers of the big bang,
(2) the need to trace the determination of certain physical constants to the big bang moments and not further backwards,
(3) the necessity to explain the notion of what scientists and philosophers call “time” in terms of the original bang of the universe,
(4) the compulsion to define the notion of “space” with respect to the inner and outer regions of the big bang universe,
(5) the possibility of and the uncertainty about there being other finite or infinite number of universes,
(6) the choice between an infinite number of oscillations between big bangs and big crunches in the big bang universe (in case of there being only our finite-content universe in existence), in every big bang universe (if there are an infinite number of universes),
(7) the question whether energy will be lost from the universe during each phase of the oscillation, and in that case how an infinite number of oscillations can be the whole process of the finite-content universe,
(8) the difficulty involved in mathematizing these cases, etc.
These have given rise to many other cosmological and cosmogenetic theories – mythical, religious, philosophical, physical, and even purely mathematical. It must also be mentioned that the thermodynamic laws created primarily for earth-based physical systems have played a big role in determining the nature of these theories.
The big bang is already a cosmogenetic theory regarding a finite-content universe. The consideration of an INFINITE-CONTENT universe has always been taken as an alternative source of theories to the big bang model. Here, in the absence of conceptual clarity on the physically permissible meaning of infinite content and without attempting such clarity, cosmologists have been accessing the various mathematical tools available to explain the meaning of infinite content. They also do not seem to keep themselves aware that locally possible mathematical definitions of infinity cannot apply to physical localities at all.
The result has been the acceptance of temporal eternality to the infinite-content universe without fixing physically possible varieties of eternality. For example, pre-existence from the past eternity is already an eternality. Continuance from any arbitrary point of time with respect to any cluster of universes is also an eternality. But models of an infinite-content cosmos and even of a finite-content universe have been suggested in the past century, which never took care of the fact that mathematical infinity of content or action within a finite locality has nothing to do with physical feasibility. This, for example, is the source of the quantum-cosmological quick-fix that a quantum vacuum can go on creating new universes.
But due to their obsession with our access to observational details merely from our local big bang universe, and the obsession to keep the big bang universe as an infinite-content universe and as temporally eternal by using the mathematical tools found, a mathematically automatic recycling of the content of the universe was conceived. Here they naturally found it safe to accommodate the big universe, and clearly maintain a sort of eternality for the local big bang universe and its content, without recourse to external creation.
Quantum-cosmological and superstrings-cosmological gimmicks like considering each universe as a membrane and the “space” between them as vacuum have given rise to the consideration that it is these vacua that just create other membranes or at least supply new matter-energy to the membranes to continue to give rise to other universes. (1) The ubiquitous sensationalized science journalism with rating motivation and (2) the physicists’ and cosmologists’ need to stick to mathematical mystification in the absence of clarity concerning physical feasibility in their infinities – these give fame to the originators of such universes as great and original scientists.
I suggest that the need to justify an eternal recycling of the big bang universe with no energy loss at the fringes of the finite-content big bang universe was fulfilled by cosmologists with the automatically working mathematical tools like the Lambda term and its equivalents. This in my opinion is the origin of the concepts of the almighty versions of dark energy, virtual quantum soup, quantum vacuum, ether, etc., for cosmological applications. Here too the physical feasibility of these concepts by comparing them with the maximal-medial-minimal possibilities of existence of dark energy, virtual quantum soup, quantum vacuum, ether, etc. within the finite-content and infinite-content cosmos, has not been considered. Their almighty versions were required because they had to justify an eternal pre-existence and an eternal future for the universe from a crass physicalist viewpoint, of which most scientists are prey even today. (See: Minimal Metaphysical Physicalism (MMP) vs. Panpsychisms and Monisms: Beyond Mind-Body Dualism: https://www.researchgate.net/post/Minimal_Metaphysical_Physicalism_MMP_vs_Panpsychisms_and_Monisms_Beyond_Mind-Body_Dualism)
I believe that the inconsistencies present in the mathematically artificialized notions and in the various cosmogenetic theories in general are due to the blind acceptance of available mathematical tools to explain an infinite-content and eternally existent universe.
What should in fact have been done? We know that physics is not mathematics. In mathematics all sorts of predefined continuities and discretenesses may be created without recourse to solutions as to whether they are sufficiently applicable to be genuinely physics-justifying by reason of the general compulsions of physical existence. I CONTINUE TO ATTEMPT TO DISCOVER WHERE THE DISCREPANCIES LIE. History is on the side of sanity.
One clear example for the partial incompatibility between physics and mathematics is where the so-called black hole singularity is being mathematized by use of asymptotic approach. I admit that we have only this tool. But we do not have to blindly accept it without setting rationally limiting boundaries between the physics of the black hole and the mathematics applied here. It must be recognized that the definition of any fundamental notion of mathematics is absolute and exact only in the definition, and not in the physical counterparts. (See: Mathematics and Causality: A Systemic Reconciliation, https://www.researchgate.net/post/Mathematics_and_Causality_A_Systemic_Reconciliation)
I shall continue to add material here on the asymptotic approach in cosmology and other similar theoretical and application-level concepts.
Relevant answer
Answer
Please see my research document on a functional Grand Unified Theory, either attached or on my page. Mathematical tools are an essential part of cosmology and of the application of mathematics to physics. This application is most largely and obviously affected by failing to account for the interplay between quantum phenomena and relativity-related phenomena, and also by the lack of any clear-cut ability or route to perform these calculations.
By manipulating tensors, subsequently tying them to mathematical formulas that represent the relation between mathematics and the physical processes, quantities, and occurrences within quantum-physical systems, and then appropriately setting values for relativity-related phenomena in the form of tensors, operators, and precisely calculated values, one may gain a more precise and enlightening view of cosmological processes. Failing to account for the interplay between general relativity and quantum phenomena in any reliable way is a large source of issues in the application of mathematics to physics in cosmology.
Another issue, I believe, is perception. Most scientists are content either to remain willfully ignorant of the need to account for quantum phenomena and relativity-related phenomena at the same time in order to assess cosmology accurately, or they stubbornly stick their feet in the sand and claim they can arrive at fully accurate revelations without the ability to do so. Both are erroneous. Although we can come to a lot of conclusions about those things without knowing the full quantum/relativity picture and all its details, we have no idea what sort of information we could be missing out on, or what false assumptions we could be arriving at. "You can't know what you can't know."
The solution to the problem of mathematics-to-physics in cosmology is certainly to account for what I have spoken of here in a reliable way that aligns with known mathematics and physics, but also to develop more advanced equations and discoveries which tie physical processes and quantities to quantum and relativity-related phenomena in a proven and undeniable way. Things like this have been proven by my research: I have found that certain forms of complex equations accurately represent laws of physical concepts that shed light on the relation of mathematics to physical things. I am in no way claiming my theory is the only way to do this; I am just using it as a familiar starting point. There are many ways to do this without a theory such as mine, but they result in having to perform multiple complex calculations for mathematics, physics, and relativity, parsing them, and then integrating them separately, which is far more time-consuming.
  • asked a question related to Applied Mathematics
Question
23 answers
The Nobel Prize Summit 2023: Truth, Trust and Hope has started today, 24 May 2023. The summit encourages participation. Thus, I have sent an open letter and eagerly anticipate their response. Please comment on whether the points I have made are adequate.
Open Letter to The Nobel Committee for Physics
Is There a Nobel Prize for Metaphysics?
Dear Nobel Committee for Physics,
Among the differences between an established religion, such as Roman Catholicism, and science, is the presence of a hierarchical organization in the former for defending its creed and conducting its affairs. The head of the religious institution ultimately bears responsibility for the veracity of its claims and strategic policies. This accountability was evident in historical figures like John Wycliffe, Jan Hus, and Martin Luther, who held the papacy responsible for wrong doctrines, such as the indulgence scandal during the late Middle Ages. In that context, challenging such doctrines, albeit with the anticipated risk of being burned at the stake, involved posting opposing theses on the doors of churches.
In contrast, the scientific endeavour lacks a tangible temple, and no definitive organization exists to be held accountable for possible misconducts. Science is a collective effort by scientists and scientific institutes to discover new facts within and beyond our current understanding. While scientists may occasionally flirt with science fiction, they ultimately make significant leaps in understanding the universe. However, problems arise when a branch of science is held and defended as a sacred dogma, disregarding principles such as falsifiability. This mentality can lead to a rule of pseudo-scientific oppression, similar to historical instances like the Galileo or Lysenko affairs. Within this realm, there is little chance of liberating science from science fiction. Any criticism is met with ridicule, damnation, and exclusion, reminiscent of the attitudes displayed by arrogant religious establishments during the medieval period. Unfortunately, it seems that the scientific establishment has not learned from these lessons and has failed to provide a process for dealing with these unfortunate and embarrassing scenarios. On the contrary, it is preoccupied with praising and celebrating its achievements while stubbornly closing its ears to sincere critical voices.
Allow me to illustrate my concerns through the lens of relativistic physics, a subject that has captured my interest. Initially, I was filled with excitement, recognizing the great challenges and intellectual richness that lay before me. However, as I delved deeper, I encountered several perplexing issues with no satisfactory answers provided by physicists. While the majority accepts relativity as it stands, what if one does not accept the various inherent paradoxes and seeks a deeper insight?
Gradually, I discovered that certain scientific steps are not taken correctly in this branch of science. For example, we place our trust in scientists to conduct proper analyses of experiments. Yet, I stumbled upon evidence suggesting that this trust may have been misplaced in the case of a renowned experiment that played a pivotal role in heralding relativistic physics. If this claim is indeed valid, it represents a grave concern and a significant scandal for the scientific community. To clarify my points, I wrote reports and raised my concerns. Fortunately, there are still venues outside established institutions where critical perspectives are not yet suppressed. However, the reactions I received ranged from silence to condescending remarks infused with irritation. I was met with statements like "everything has been proven many times over, what are you talking about?" or "go and find your mistake yourself." Instead of responding to my pointed questions and concerns, a professor even suggested that I should broaden my knowledge by studying various other subjects.
While we may excuse the inability of poor, uneducated peasants in the Middle Ages to scrutinize the veracity of the Church's doctrine against the Latin Bible, there is no excuse for professors of physics and mathematics to be unwilling to re-evaluate the analysis of an experiment and either refute the criticism or acknowledge an error. It raises suspicions about the reliability of science itself if, for over 125 years, the famous Michelson-Morley experiment has not been subjected to rigorous and accurate analysis.
Furthermore, I am deeply concerned that the problem has been exacerbated by certain physicists rediscovering the power and benefits of metaphysics. They have proudly replaced real experiments with thought experiments conducted with thought-equipment. Consequently, theoretical physicists find themselves compelled to shut the door on genuine scientific criticism of their enigmatic activities. Simply put, the acceptance of experiment-free science has been the root cause of all these wrongdoings.
To demonstrate the consequences of this damaging trend, I will briefly mention two more complications among many others:
1. Scientists commonly represent time with the letter 't', assuming it has dimension T, and confidently perform mathematical calculations based on this assumption. However, when it comes to relativistic physics, time is represented as 'ct' with dimension L, and any brave individual questioning this inconsistency is shunned from scientific circles and excluded from canonical publications.
2. Even after approximately 120 years, eminent physicist and Nobel Prize laureate Richard Feynman, along with various professors in highly regarded physics departments, have failed to mathematically prove what Einstein claimed in his 1905 paper. They merely copy from one another, seemingly engaged in a damage limitation exercise, producing so-called approximate results. I invite you to refer to the linked document for a detailed explanation:
I am now submitting this letter to the Nobel Committee for Physics, confident that the committee, having awarded Nobel Prizes related to relativistic physics, possesses convincing scientific answers to the specific dilemmas mentioned herein.
Yours sincerely,
Ziaedin Shafiei
Relevant answer
Answer
I looked at the link you gave which was
In that link I found the statement:
Einstein claimed that “If a unit electric point charge is in motion in an electromagnetic field, the force acting upon it is equal to the electric force which is present at the locality of the charge, and which we ascertain by transformation of the field to a system of co-ordinates at rest relatively to the electrical charge.”
I also get from the above link that you have a disagreement with the above statement. I think the confusion here is about which observer is defining the force. The electromagnetic field as transformed to coordinates at rest relative to the charge is the field needed to predict the force as seen by an observer at rest with the charge (an electric force but no magnetic force because the charge is not moving). Field transformations to other coordinate systems are needed to predict the force as seen by observers moving relative to the charge. This means that different observers (having different motions relative to each other) can see different forces even if all coordinate systems are inertial. This is in contrast to Newtonian mechanics in which the same force is seen in all inertial coordinate systems. Newtonian mechanics is wrong when applied to electromagnetic forces so we need to include things like field energy or field momentum (outside the scope of Newtonian mechanics) to obtain conservation laws.
However, I think that your complaint is not that Newtonian mechanics should be used when it isn't, but rather that special relativity is wrong. Special relativity does have limitations (when general relativity becomes an issue) but for its intended applications (i.e., when general relativity is not needed) it has done a great job of producing all of today's modern technology derived from it. In particular, the treatment of electromagnetic forces in the context of special relativity is one of the most thoroughly studied of all topics in physics.
If there was a real incompatibility between special relativity and electromagnetism, we would have known about that a long time ago. We would have known about it during the days when special relativity was first introduced and had a lot of opposition, and a lot of people searched very hard to find inconsistencies with the theory. The theory survived attacks by brilliant people searching for problems with the theory, and it will survive attacks by people that perceive it to be wrong because of their own lack of understanding.
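For what it is worth, the field transformation mentioned above can be written down concretely. Below is a minimal sketch (the field values, the 0.5c velocity, and the function name boost_fields are invented for illustration) of the standard transformation of E and B to the frame momentarily at rest with the charge, where the force on a unit charge is simply qE':

import numpy as np

c = 299_792_458.0                      # speed of light, m/s

def boost_fields(E, B, v):
    # Transform lab-frame fields E, B to the frame moving with velocity v.
    v = np.asarray(v, dtype=float)
    beta = v / c
    gamma = 1.0 / np.sqrt(1.0 - beta @ beta)
    n = v / np.linalg.norm(v)          # boost direction
    E_par, B_par = (E @ n) * n, (B @ n) * n
    E_perp, B_perp = E - E_par, B - B_par
    E_prime = E_par + gamma * (E_perp + np.cross(v, B))
    B_prime = B_par + gamma * (B_perp - np.cross(v, E) / c**2)
    return E_prime, B_prime

# Lab-frame fields and a charge moving along x at 0.5 c (illustrative numbers).
E = np.array([0.0, 1.0e3, 0.0])        # V/m
B = np.array([0.0, 0.0, 2.0e-3])       # T
v = np.array([0.5 * c, 0.0, 0.0])
E_rest, _ = boost_fields(E, B, v)
q = 1.0                                # C, unit point charge
print("force in the charge's rest frame:", q * E_rest, "N")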
  • asked a question related to Applied Mathematics
Question
4 answers
What Are Some Of The Materials That Make Your Work Easier?
Relevant answer
Answer
Dear Mr. Boakye!
I use LinkedIn, ResearchGate, Elsevier Researcher Academy, Google Search, and the BrightTALK platform; please register free of charge.
Yours sincerely, Bulcsu Szekely
  • asked a question related to Applied Mathematics
Question
5 answers
Applying mathematical knowledge in research models: this question has been in my mind for a long time. Can advanced mathematics and applied mathematics solve all the problems in modeling research? In particular, for the formula derivations in the theoretical part of a model, can the analytical conclusions be obtained through multiple derivations or other methods? I have also read some mathematics-related publications myself, and I have to admire the mystery of mathematics.
Relevant answer
Answer
We all know that mathematics includes reading, writing, and arithmetic, and that it starts with every action of our life and image; as such, it is part of our performance and image in every part of our life. Some years back I expressed my views in this area, which I submit herewith for your kind perusal.
In my early days, students interested in mathematics and scoring full marks would do their mathematics work either while listening to music or a song, or they had formed the habit, prior to or during homework, of reading a lesson or a topic of interest; after carrying out their work in this way, they did justice to the subject of mathematics.
This is my personal opinion.
  • asked a question related to Applied Mathematics
Question
1 answer
Dear Researcher,
Global Climate Models (GCMs) of the Coupled Model Intercomparison Project Phase 6 (CMIP6) are numerical models that represent the various physical systems of the Earth's climate with respect to the land surface, oceans, atmosphere, and cryosphere, and they are employed to provide likely changes in future climate projections. I wanted to know what initial and lateral boundary conditions are used while developing these models.
Sincerely,
Aman Srivastava
Relevant answer
Answer
Global Climate Models (GCMs) are complex numerical models used to simulate the Earth's climate system. The latest generation of GCMs used in the Coupled Model Intercomparison Project phase 6 (CMIP6) includes a wide range of models with varying resolutions, parameterizations, and boundary conditions. The specific initial and lateral boundary conditions used by each model depend on its design and configuration, but here are some general guidelines:
Initial Conditions:
The initial conditions of a GCM refer to the state of the climate system at the beginning of a simulation. The initial conditions typically include the atmospheric and oceanic conditions such as temperature, humidity, pressure, and winds, as well as the initial state of sea ice, land surface properties, and greenhouse gas concentrations.
For CMIP6-based GCMs, the initial conditions are usually taken from a pre-industrial control simulation, where the model is run for several hundred years without any external forcing, to allow the climate to reach a stable state.
Some models may also use observational data such as ocean temperatures and sea ice concentration to initialize their simulations.
Lateral Boundary Conditions:
The lateral boundary conditions of a GCM refer to the conditions at the edges of the model domain, where the model interacts with the outside world.
For CMIP6-based GCMs, the lateral boundary conditions are usually prescribed using outputs from other models, such as reanalysis data or output from previous versions of the same model.
In some cases, the boundary conditions may be nudged towards observed values to improve the realism of the simulation.
The specific boundary conditions used by each model depend on its design and configuration, but in general, they are chosen to ensure that the model produces realistic simulations of the global climate system.
  • asked a question related to Applied Mathematics
Question
99 answers
MATHEMATICS VS. CAUSALITY:
A SYSTEMIC RECONCILIATION
Raphael Neelamkavil, Ph.D., Dr. phil.
1. Preface on the Use of Complex Language
2. Prelude on the Pre-Scientific Principle of Causality
3. Mathematical “Continuity and Discreteness” Vs. Causal Continuity
4. Mathematics and Logic within Causal Metaphysics
5. Mathematics, Causality, and Contemporary Philosophical Schools
1. Preface on the Use of Complex Language
First of all, a cautious justification is in place about the complexity one may experience in the formulations below: When I publish anything, the readers have the right to ask me constantly for further justifications of my arguments and claims. And if I have the right to anticipate some such possible questions and arguments, I will naturally attempt to be as detailed and systemic as possible in my formulation of each sentence here and now. A sentence is merely a part of the formulated text. After reading each sentence, you may pose me questions, which certainly cannot all be answered well within the sentences or soon after the sentences in question, because justification is a long process.
Hence, my sentences may tend to be systemically complex. A serious reader will not find these arguments getting too complex, because such a person has further unanswered questions. We do not purposely make anything complex. Our characterizations of meanings in mathematics, physics, philosophy, and logic can be complex and prohibitive for some. But would we all accuse these disciplines or the readers if the readers find them all complex and difficult? In that case, I could be excused too. I do not intentionally create a complex state of affairs in these few pages; but there are complexities here too. I express my helplessness in case any one finds these statements complex.
The languages of both science and philosophy tend to be complex and exact. This, nevertheless, should be tolerated provided the purpose is understood and practiced by both the authors and the readers. Ordinary language has its worth and power. If I give a lecture, I do not always use so formal a language as when I write, because I am there to re-clarify.
But the Wittgensteinian obsession with “ordinary” language does not make him use an ordinary language in his own works. Nor does the Fregean phobia about it save him from falling into the same ordinary-language naïveté of choosing concrete and denotative equivalence between terms and their reference-objects without a complex ontology behind them. I attempt to explain the complex ontology behind the notions that I use.
2. Prelude on the Pre-Scientific Principle of Causality
Which are the ultimate conditions implied by the notion of existence (To Be), without which conditions implied nothing exists, and without which sort of existents nothing can be discoursed? Anything exists non-vacuously. This implies that existents are inevitably in Extension (having parts, each of which is further extended and not vacuous). The parts will naturally have some contact with a finite number of others. That is, everything is in Change (impacting some other extended existents).
Anything without these two characteristics cannot exist. If not in Change, how can something exist in the state of Extension alone? And if not in Extension, how can something exist in the state of Change alone? Hence, Extension-Change are two fundamental ontological categories of all existence and the only two exhaustive implications of To Be. Any unit of causation with one causal aspect and one effect aspect is termed a process.
These conditions are ultimate in the sense that they are implied by To Be, not as the secondary conditions for anything to fulfil after its existence. Thus, “To Be” is not merely of one specific existent, but of all existents. Hence, Extension-Change are the implications of the To Be of Reality-in-total. Physical entities obey these implications. Hence, they must be the foundations of physics and all other sciences. Theoretical foundations, procedures, and conclusions based on these implications in the sciences and philosophy, I hold, are wise enough.
Extension-Change-wise existence is what we understand as Causality: extended existents and their parts exert impacts on other extended existents. Every part of existents does it. That is, if anything exists, it is in Causation. This is the principle of Universal Causality. In short, Causality is not a matter to be decided in science – whether there is Causality or not in any process under experiment and in all existents is a matter for philosophy to decide, because philosophy tends to study all existents. Science can ask only whether there occurs any specific sort of causation or not, because each science has its own restricted viewpoint of questions and experiments and in some cases also restrictions in the object set.
Thus, statistically mathematical causality is not a decision as to whether there is causation or not in the object set. It is not a different sort of causation, but a measure of the extent of determination of special causes that we have made at a given time. Even the allegedly “non-causal” quantum-mechanical constituent processes are mathematically and statistically circumscribed measuremental concepts from the results of Extended-Changing existents and, ipso facto, the realities behind these statistical measurements are in Extension-Change if they are physically existent.
Space is the measured shape of Extension; time is that of Change. Therefore, space and time are epistemic categories. How then can statistical causality based only on measuremental data be causality at all, if the causes are all in Extension-Change and if Universal Causality is already the pre-scientific Law under which all other laws appear? No part of an existent is non-extended and non-changing. One unit of cause and effect may be called a process. Every existent and its parts are processual.
And how can a so-called random cause be a cause, except when the randomness is the extent of our measuremental reach of the cause, which already is causal because of its Extension-Change-wise existence? Extension and Change are the very exhaustive meanings of To Be, and hence I call them the highest Categories of metaphysics, physical ontology, physics, and all science. Not merely philosophy but also science must obey these two Categories.
In short, everything existent is causal. Hence, Universal Causality is the highest pre-scientific Law, second conceptually only to Extension-Change and third to Existence / To Be. Natural laws are merely derivative. Since Extension-Change-wise existence is the same as Universal Causality, scientific laws are derived from Universal Causality, and not vice versa. Today the sciences attempt to derive causality from the various scientific laws! The relevance of metaphysics / physical ontology for the sciences is clear from the above.
Existents have some Activity and Stability. This is a fully physical fact. These two Categories may be shown to be subservient to Extension-Change and Causality. Pure vacuum (non-existence) is absence of Activity and Stability. Thus, entities, irreducibly, are active-stable processes in Extension-Change. Physical entities / processes possess finite Activity and Stability. Activity and Stability together belong to Extension; and Activity and Stability together belong to Change too.
That is, Stability is neither merely about space nor about Extension. Activity is neither merely about time nor about Change. There is a unique reason for this. There is no absolute stability nor absolute activity in the physical world. Hence, Activity is finite, which is by Extended-Changing processes; and Stability is finite, which is also by Extended-Changing processes. But the tradition still seems to parallelise Stability and Activity with space and time respectively. We consider Activity and Stability as sub-Categories, because they are based on Extension-Change, which together add up to Universal Causality; and each unit of cause and effect is a process.
These are not Categories that belong to merely imaginary counterfactual situations. The Categories of Extension-Change and their sub-formulations are all about existents. There can be counterfactuals that signify cases that appertain existent processes. But separating these cases from some of the useless logical talk as in linguistic-analytically tending logic, philosophy, and philosophy of science is near to impossible.
Today physics and the various sciences do at times something like the said absence of separation of counterfactual cases from actual in that they indulge in particularistically defined terms and procedures, by blindly thinking that counterfactuals can directly represent the physical processes under inquiry. Concerning mathematical applications too, the majority attitude among scientists is that they are somehow free from the physical world.
Hence, without a very general physical ontology of Categories that are applicable to all existent processes and without deriving the mathematical foundations from these Categories, the sciences and mathematics are in gross handicap. Mathematics is no exception in its applicability to physical sciences. Moreover, pure mathematics too needs the hand of Extension and Change, since these are part of the ontological universals, form their reflections in mind and language, etc., thus giving rise to mathematics.
The exactness within complexity that could be expected of any discourse based on the Categorial implications of To Be can only be such that (1) the denotative terms ‘Extension’ and ‘Change’ may or may not remain the same, (2) but the two dimensions of Extension and Change – that are their aspects in ontological universals – would be safeguarded both physical-ontologically and scientifically.
That is, definitional flexibility and openness towards re-deepening, re-generalizing, re-sharpening, etc. may even change the very denotative terms, but the essential Categorial features within the definitions (1) will differ only meagrely, and (2) will normally be completely the same.
3. Mathematical “Continuity and Discreteness” Vs. Causal “Continuity”
The best examples for the above are mathematical continuity and discreteness that are being attributed blindly to physical processes due to the physical absolutization of mathematical requirements. But physical processes are continuous and discrete only in their Causality. This is nothing but Extension-Change-wise discrete causal continuity. At any time, causation is present in anything, hence there is causal continuity. This is finite causation and hence effects finite continuity and finite discreteness. But this is different from absolute mathematical continuity and discreteness.
I believe that it is common knowledge that mathematics and its applications cannot prove Causality directly. What are the bases of the problem of incompatibility of physical causality within mathematics and its applications in the sciences and in philosophy? The main but general explanation could be that mathematical explanations are not directly about the world but are applicable to the world to a great extent.
It is good to note that mathematics is a separate science as if its “objects” were existent, but in fact as non-existent and different from those of any other science – thus creating mathematics into an abstract science in its theoretical aspects of rational effectiveness. Hence, mathematical explanations can at the most only show the ways of movement of the processes and not demonstrate whether the ways of the cosmos are by causation.
Moreover, the basic notions of mathematics (number, number systems, points, shapes, operations, structures, etc.) are all universals / universal qualities / ontological universals that belong to groups of existent things that are irreducibly Extension-Change-type processes. (See below.)
Thus, mathematical notions have their origin in ontological universals and their reflections in mind (connotative universals) and in language (denotative universals). The basic nature of these universals is ‘quantitatively qualitative’. We shall not discuss this aspect here at length.
No science and philosophy can start without admitting that the cosmos exists. If it exists, it is not nothing, not non-entity, not vacuum. Non-vacuous existence means that the existents are non-vacuously extended. This means they have parts. Every part has parts too, ad libitum, because each part is extended. None of the parts is an infinitesimal. They can be near-infinitesimal. This character of existents is Extension, a Category directly implied by To Be.
Similarly, any extended being’s parts are active, moving. This implies that every part has impact on some others, not on infinite others. This character of existents is Change. No other implication of To Be is so primary as these. Hence, they are exhaustive of the concept of To Be, which belongs to Reality-in-total. These arguments show us the way to conceive the meaning of causal continuity.
Existence in Extension-Change is what we call Causality. If anything is existent, it is causal – hence Universal Causality is the trans-science physical-ontological Law of all existents. By the very concept of finite Extension-Change-wise existence, it becomes clear that no finite space-time is absolutely dense with existents. In fact, space-time is no ontological affair, but only epistemological, and existent processes need measurementally accessible finite space for Change. Hence, existents cannot be mathematically continuous. Since there is Change and transfer of impact, no existent can be absolutely discrete in its parts or in connection with others.
Can logic show the necessity of all existents to be causal? We have already discussed how, ontologically, the very concept of To Be implies Extension-Change and thus also Universal Causality. Logic can only be instrumental in this.
What about the ability or not of logic to conclude to Universal Causality? In my arguments above and elsewhere showing Extension-Change as the very exhaustive meaning of To Be, I have used mostly only the first principles of ordinary logic, namely, Identity, Contradiction, and Excluded Middle, and then argued that Extension-Change-wise existence is nothing but Universal Causality if everything existing is non-vacuous in existence.
For example, does everything exist or not? If yes, let us call it non-vacuous existence. Hence, Extension is the first major implication of To Be. Non-vacuous means extended, because if not extended the existent is vacuous. If extended, everything has parts. Having parts implies distances, however minute, between all the near-infinitesimal parts of any existent process. In this sense, the basic logical laws do help conclude the causal nature of existents.
A point of addition now has been Change. It is, so to say, from experience. But this need not exactly mean an addition. If existents have parts (i.e., if they are in Extension), the parts’ mutual difference already implies the possibility of contact between parts. Thus, I am empowered to move to the meaning of Change basically as motion or impact. Naturally, everything in Extension must effect impacts.
Everything has further parts. Hence, by implication from Change and the need for there to be contacts between every near-infinitesimal set of parts of existents, everything causes changes by impacts. In the physical world this is by finite impact formation. Hence, nothing can exist as an infinitesimal. Leibniz’s monads have no significance in the real world.
Thus, we conclude that Extension-Change-wise existence is Universal Causality, and every actor in causation is a real existent, not a non-extended existent, as energy particles seem to have been considered and are even today thought to be, due to their unit-shape yielded merely for the sake of mathematical applications. It is thus natural to claim that Causality is a pre-scientific Law of Existence, where existents are all inwardly and outwardly in Change, i.e., in impact formation – otherwise, the concept of Change would lose meaning.
In such foundational questions like To Be and its implications, the first principles of logic must be used, because these are the foundational notions of all science and no other derivative logical procedure comes in as handy. In short, logic with its fundamental principles can help derive Universal Causality. Thus, Causality (Extension-Change) is more primary to experience than the primitive notions of mathematics. But the applicability of these three logical Laws is not guaranteed so well in arguments using derivative, less categorial, sorts of concepts.
I suggest that the crux of the problem of mathematics and causality is the dichotomy between mathematical continuity and mathematical discreteness on the one hand and the incompatibility of applying any of them directly on the data collected / collectible / interpretable from some layers of the phenomena which are from some layers of the object-process in question. Not recognizing the presence of such stratificational debilitation of epistemic directness is an epistemological foolishness. Science and philosophy, in my opinion, are victims of this. Thus, for example, the Bayesian statistical theory recognizes only a statistical membrane between reality and data!
Here I point at the avoidance of the problem of stratificational debilitation of epistemic directness, by the centuries of epistemological foolishness, by reason of the forgetfulness of the ontological and epistemological relevance of expressions like ‘from some layers of data from some layers of phenomena from some layers of the reality’.
This is the point at which it is time to recognize the gross violence against natural reason behind phrases and statements involving ‘data from observation’, ‘data from phenomena’, ‘data from nature / reality’ etc., without epistemological and ontological sharpness in both science and philosophy to accept these basic facts of nature. As we all know, this state of affairs has gone irredeemable in the sciences today.
The whole of what we used to call space is not filled with matter-energy. Hence, if causal continuity between partially discrete “processual” objects is the case, then the data collected / collectible cannot be the very processual objects and hence cannot provide all knowledge about the processual objects. But mathematics and all other research methodologies are based on human experience and thought based on experience.
This theoretical attitude facilitates and accepts in a highly generalized manner the following three points:
(1) Mathematical continuity (in any theory and in terms of any amount of axiomatization of logical, mathematical, physical, biological, social, and linguistic theories) is totally non-realizable in nature as a whole and in its parts: because (a) the necessity of mathematical approval of any sort of causality in the sciences, by means of its systemic physical ontology, falls short miserably in actuality, and (b) logical continuity of any kind does not automatically make linguistically or mathematically symbolized representational activity adequate to represent the processual nature of entities as derived from data.
(2) The concept of absolute discreteness in nature, which, as of today, is ultimately of the quantum-mechanical type based on Planck’s constant, continues to be a mathematical and partial misfit in the physical cosmos and its parts, (a) if there exist other universes that may causally determine the constant differently at their specific expansion and/or contraction phases, and (b) if there are an infinite number of such finite-content universes.
The case may not of course be so problematic in non-quantifiable “possible worlds” due to their absolute causal disconnection or their predominant tendency to causal disconnection, but this is a mere common-sense, merely mathematical, compartmentalization: because (a) the aspect of the causally processual connection between any two quanta is logically and mathematically alienated in the physical theory of Planck’s constant, and (b) the possible worlds have only a non-causal existence, and hence, anything may be determined in this world as a constant, and an infinite number of possible universes may be posited without any causal objection!
It is usually not kept in mind here by physicists that the epistemology of unit-based thinking – of course, based on quantum physics or not – is implied by the almost unconscious tendency of symbolic activity of body-minds. This need not have anything to do with a physics that produces laws for all existent universes.
(3) The only viable and thus the most reasonably generalizable manner of being of the physical cosmos and of biological entities is that of existence in an Extended (having parts) and Changing manner (extended entities and their parts impacting a finite number of other existents and their parts in a finite quantity and in a finite duration). Existence in the Extension-Change-wise manner is nothing but causal activity.
Thus, insofar as everything is existent, every existent is causal. There is no time (i.e., no minute measuremental iota of Change) wherein such causal manner of existing ceases in any existent. This is causal continuity between partially discrete processual objects. This is not mathematizable in a discrete manner. The concept of geometrical and number-theoretic continuity may apply. But if there are other universes, the Planck constant of proportionality that determines the proportion of content of discreteness may change in the others. This is not previsioned in terrestrially planned physics.
The attitude of treating everything as causal may also be characterized by the self-aware symbolic activity by symbolic activity itself, in which certain instances of causation are avoided or enhanced, all decrementally or incrementally as the case may be, but not absolutely. This, at the most, is what may be called freedom.
It is fully causal – need not be sensed as causal within a specific set of parameters, but as causal within the context of Reality-in-total. But the whole three millennia of psychological and religious (contemplative) tradition of basing freedom merely on awareness intensity, and not on love – this is a despicable state of affairs, on which a book-length treatise is necessary.
Physics and cosmology even today tend to make the cosmos either (1) mathematically presupposedly continuous, or (2) discrete with defectively ideal mathematical status for causal continuity and with perfectly geometrical ideal status for specific beings, or (3) statistically indeterministic, thus being compelled to consider everything as partially causal, or even non-causal in the interpretation of statistics’ orientation to epistemically logical decisions and determinations based on data. If this has not been the case, can anyone suggest proofs for an alleged existence of a different sort of physics and cosmology until today?
The statistician does not even realize (1) that Universal Causality is already granted by the very existence of anything, and (2) that what they call non-causality is merely the not being the cause, or not having been discovered as the cause, of a specific set of selected data or processes. Such non-causality is not with respect to all existents. Quantum physics, statistical physics, and cosmology are replete with examples for this empirical and technocratic treachery of the notion of science.
A topologically and mereologically clean physical ontology of causal continuity between partially discrete processual objects, fully free of absolutely continuity-oriented or absolutely discreteness-oriented category theory, geometry, topology, functional analysis, set theory, and logic, is yet to be born. Hence, the fundamentality of Universal Causality in its deep roots in the very concept of the To Be (namely, in the physical-ontological Categories of Extension and Change) of all physically and non-vacuously existent processes is alien to physics and cosmology until today.
Non-integer rational numbers are not the direct notion of anything existent. Even a part of a unit process has the attribute ‘unity’ in all the senses in which any other object possesses it. For this reason, natural numbers have Categorial priority over rational numbers, because natural numbers are more directly related to ontological universals than other sorts of numbers are. Complex numbers, for example, are the most general number system with respect to their mathematically defined sub-systems, but this does not mean that they are more primary in the metaphysics of ontological universals, since the primary mode of numerically quantitative qualities / universals is that of natural numbers.
4. Mathematics and Logic within Causal Metaphysics
Hence, it is important to define the limits of applicability of mathematics to the physics that uses physical data (under the species of the various layers of their origin). This is the only way to approximate what lies beyond the data and beyond the methodologically derived conclusions from the data. How and on what levels this is to be done is a matter to be discussed separately.
The same may be said also about logic and language. Logic is the broader rational picture of mathematics. Language is the symbolic manner of application of both logic and its quantitatively qualitative version, namely, mathematics, with respect to specific fields of inquiry. Here I do not explicitly discuss ordinary conversation, literature, etc.
We may do well to instantiate logic as the formulated picture of reason. But human reason is limited to the procedures of reasoning by brains. What exactly is the reason that existent physical processes constantly undergo? How to get at conclusions based on this reason of nature – by using our brain’s reasoning – and thus transcend at least to some extent the limitations set by data and methods in our brain’s reasoning?
If we may call the universal reason of Reality-in-total by a name, it is nothing but Universal Causality. It is possible to demonstrate that Universal Causality is a trans-physical, trans-scientific Law of Existence. This argument needs clarity. How to demonstrate this as the case? This has been done in an elementary fashion in the above, but more of it is not to be part of this discussion.
Insistence on mathematical continuity in nature is a mere idealization. It expects nature to obey our merely epistemic sort of idealizations, that is, in ideal cases based mostly on the brain-interpreted concepts from some layers of data, which are from some layers of phenomena, which are from some layers of the reality under observation. Some of the best examples in science are the suppositions that virtual worlds are existent worlds, dark energy is a kind of propagative energy, zero-value cosmic vacuum can create an infinite number of universes, etc.
The processes outside are vaguely presented primarily by the processes themselves, but highly indirectly, in a natural manner. This is represented by the epistemic / cognitive activity within the brain in a natural manner (by the connotative universals in the mind as reflections of the ontological universals in groups of object processes), and then idealized via concepts expressed in words, connectives, and sentences (not merely linguistic but also mathematical, computerized, etc.) by the symbolizing human tendency (thus creating denotative universals in words) to capture the whole of the object by use of a part of the human body-mind.
The symbolizing activity is based on data, but the data are not all we have as end results. We can mentally recreate the idealized results behind the multitude of ontological, connotative, and denotative universals as existents.
As the procedural aftermath of this, virtual worlds begin to “exist”, dark energy begins to “propagate”, and zero-value cosmic vacuum “creates” universes. Even kinetic and potential energies are treated as propagative energies existent outside of material bodies and supposed to be totally different from material bodies. These are mere theoretically interim arrangements in the absence of direct certainty for the existence or not of unobservables.
Insistence on mathematical continuity in nature as a natural conclusion by the application of mathematics to nature is what happens in all physical and cosmological (and of course other) sciences insofar as they use mathematical idealizations to represent existent objects and processes and extrapolate further beyond them. Mathematical idealizations are another version of linguistic symbolization and idealization.
Logic and its direct quantitatively qualitative expression as found in mathematics are, of course, powerful tools. But, as being part of the denotative function of symbolic language, they are tendentially idealizational. By use of the same symbolizing tendency, it is perhaps possible to a certain extent to de-idealize the side-effects of the same symbols in the language, logic, and mathematics being used in order to symbolically idealize representations.
Merely mathematically following physical nature in whatever it is, in its part-processes, is a debilitating procedure in science and philosophy (and even in the arts and humanities), if this procedure is not de-idealized effectively. If this is possible at least to a small and humble extent, why not do it? Our language, logic, and mathematics too do their functions well, although they too are equally unable to capture the whole of Reality in whatever it is, wholly or in parts, far beyond the data and their interpretations! Why not de-idealize the side-effects of mathematics too?
This theoretical attitude of partially de-idealizing the effects of human symbolizing activity by use of the same symbolic activity accepts the existence of processual entities as whatever they are. This is what I call ontological commitment – of course, different from and more generalized than those of Quine and others. Perhaps such a generalization can give a slightly better concept of reality than is possible by the normally non-self-aware symbolic activity in language, logic, and mathematics.
5. Mathematics, Causality, and Contemporary Philosophical Schools
With respect to what we have been discussing, linguistic philosophy and even its more recent causalist child, namely, dispositionalist causal ontology, have even today the following characteristics:
(1) They attribute an even now overly discrete nature to “entities” in the extent of their causal separateness from others while considering them as entities. The ontological notion of an object or even of an event in its unity in analytic philosophy and in particular in modal ontology forecloses consideration of the process nature of each such unity within, on par with interactions of such units with one another. (David Lewis, Parts of Classes, p. vii) This is done without ever attempting to touch the deeply Platonic (better, geometrically atomistic) shades of common-sense Aristotelianism, Thomism, Newtonianism, Modernism, Quantum Physics, etc., and without reconciling the diametrically opposite geometrical tendency to make every physical representation continuous.
(2) They are logically comatose about the impossibility of the exactly referential definitional approach to the processual demands of existent physical objects without first analyzing and resolving the metaphysical implications of existent objects, namely, being irreducibly in finite Extension and Change and thus in continuous Universal Causality in finite extents at any given moment.
(3) They are unable to get at the causally fully continuous (neither mathematically continuous nor geometrically discontinuous) nature of the physical-ontologically “partially discrete” processual objects in the physical world, also because they have misunderstood the discreteness of processual objects (including quanta) within stipulated periods as typically universalizable due to their pragmatic approach in physics and involvement of the notion of continuity of time.
Phenomenology has done a lot to show the conceptual structures of ordinary reasoning, physical reasoning, mathematical and logical thinking, and reasoning in the human sciences. But due to its lack of commitment to building a physical ontology of the cosmos and due to its purpose as a research methodology, phenomenology has failed to an extent to show the nature of causal continuity (instead of mathematical continuity) in physically existent, processually discrete, objects in nature.
Hermeneutics has just followed the human-scientific interpretative aspect of Husserlian phenomenology and projected it as a method. Hence, it was no contender to accomplish the said feat.
Postmodern philosophies qualified all science and philosophy as being perniciously cursed to be “modernistic” – by thus monsterizing all compartmentalization, rules, laws, axiomatization, discovery of regularities in nature, logical rigidity, and even metaphysical grounding as insurmountable curses of the human project of knowing and as a synonym for all that are unapproachable in science and thought. The linguistic-analytic philosophy in later Wittgenstein too was no exception to this nature of postmodern philosophies – a matter that many Wittgenstein followers do not notice. Take a look at the first few pages of Wittgenstein’s Philosophical Investigations, and the matter will be more than clear.
The philosophies of the sciences seem today to follow the beaten paths of extreme pragmatism in linguistic-analytic philosophy, physics, mathematics, and logic, which lack a foundational concept of causally concrete and processual physical existence.
Hence, it is useful for the growth of science, philosophy, and humanities alike to research into the causal continuity between partially discrete “processual” objects and forget about absolute mathematical continuity or discontinuity in nature. Mathematics and the physical universe are to be reconciled in order to mutually delimit them in terms of the causal continuity between partially discrete processual objects.
Relevant answer
Answer
The view that humans have of the world is influenced by the cognitive method that has been formed. All the fields of knowledge and science that you mentioned have together made a cognitive method in the current time, and the world is identified through this dominant method!
We know only the frame in which our thinking is located! To look at the world in a different way and to adopt a different method, one must work outside all these fields, with a method that penetrates into the cause of the phenomena! Is there a mathematics and logic whose resulting physical and cosmological models have the least distance from the principle of the current phenomena in the world?! Our language is unable to express what is actually happening! Maybe a world like David Bohm's holographic world is the solution to our problem! And in fact, an important part of the truth of the world is hidden in the hidden world, the part that we need in order to understand the world's phenomena! Or is the invention of a new logic, mathematics, and scientific and epistemological language needed?! In my opinion, the result of the phenomenological method will not say anything about the causal layers of this world! The phenomenological method helps us to work without knowing the world and gives us a way of living and of using most of the capacities of this world without knowing it. Previously, Newton claimed a method for understanding physics with wrong assumptions about space and time, which lasted for a while! These assumptions were based on the feeling and philosophy of that era, which was also manifested in the language of that time! It cannot be said that such wrong assumptions have now stopped! And maybe the current path of epistemology and cosmology, with incorrect and limiting assumptions, will not bring us to the true knowledge of the world.....
  • asked a question related to Applied Mathematics
Question
4 answers
It seems that the simplest way to get both an infinite number of tessellated solids and lattices with periodic minimal surfaces in R3 consists in using a Pearce "saddle tetrahedron". The resultant convex solids:
1) Have configurations which tessellate the Euclidean space. These tessellations are not Voronoi and have curved boundaries in a bcc lattice.
2) Define minimal surfaces for any 3-dimensional quadrilateral on the external closed surface of the solid.
Is there any topological description of such solids in the literature? How can we get a Weierstrass representation of the external surface of each polyhedron? How can we get the conjugate continuous surfaces? Can we consider this to be a good design method for structural lattices?
Any comment will be welcome.
Relevant answer
Answer
Thanks a lot again. I appreciate your comments very much.
  • asked a question related to Applied Mathematics
Question
2 answers
How is the integral in equation 1 calculated to obtain equation 3 (depicted in the attached picture) using equation 2?
Relevant answer
Answer
Without seeing the attached picture and the context surrounding equations 1 and 3, it's difficult to provide a specific answer. However, I can give a general overview of how integrals can be calculated and how they might be related to other equations.
An integral is a mathematical concept that represents the area under a curve. It is often used to calculate things like displacement, velocity, and acceleration in physics, or to find the total value of a function over a specific interval. The process of calculating an integral is called integration.
There are several methods for calculating integrals, including substitution, integration by parts, and partial fraction decomposition. The specific method used depends on the complexity of the function being integrated and the available tools and techniques.
In terms of how integrals might relate to other equations, it's possible that equation 1 is a differential equation, which describes the rate of change of a variable over time. Integrating this equation might give equation 3, which represents the value of the variable at a given point in time. However, this is just one possible scenario, and without more information, it's difficult to say for sure.
In summary, integrals are a mathematical concept used to calculate the area under a curve, and there are several methods for calculating them. They can be related to other equations in a variety of ways, depending on the context and the specific equations involved.
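To make the integration-by-parts item concrete with a standard worked example (not tied to your specific equations, which I have not seen): taking u = x and dv = e^x dx gives
∫ x e^x dx = x e^x − ∫ e^x dx = (x − 1) e^x + C.
If equation 2 provides an antiderivative of this kind, equation 3 would typically follow by evaluating that antiderivative between the integration limits appearing in equation 1.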
  • asked a question related to Applied Mathematics
Question
4 answers
Why can separation of variables methods (https://en.wikipedia.org/wiki/Separation_of_variables) not be applied to Burgers' equation (https://en.wikipedia.org/wiki/Burgers%27_equation)?
Relevant answer
Answer
To solve nonlinear PDEs (like the Burgers or Burgers-Huxley equations), you can use the "Invariant Subspace Method (ISM)."
The solution obtained by ISM is expressed as a product of functions, where each function depends on only one independent variable. This factorization of the solution makes it easier to solve the PDE, as it reduces the problem to solving a system of ordinary differential equations (ODEs) instead of a complex PDE. The method involves finding a set of invariant subspaces for the given PDE. By doing this, the solution can be obtained more straightforwardly without requiring complex numerical methods or approximations.
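As a minimal worked instance of the idea (my own sketch for the viscous Burgers equation u_t + u u_x = nu * u_xx, written as u_t = F[u] with F[u] = nu * u_xx − u u_x): the subspace W = span{1, x} is invariant, since for u = C1 + C2 x one gets F[u] = −C1 C2 − C2^2 x, which again lies in W. Substituting u(x,t) = C1(t) + C2(t) x into the PDE therefore gives the ODE system
C1'(t) = −C1 C2,   C2'(t) = −C2^2,
whose solution C2 = 1/(t + k), C1 = c/(t + k) yields the exact solution u(x,t) = (x + c)/(t + k). By contrast, plugging the separation ansatz u = X(x) T(t) into Burgers' equation gives X T' + X X' T^2 = nu * X'' T, and the middle term keeps mixing x and t after division by X T, so the equation never splits into one ODE in x and one in t; that is why direct separation of variables fails while the invariant-subspace route still works.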
  • asked a question related to Applied Mathematics
Question
3 answers
A collection of solved examples in Pyomo environment (Python package)
The solved problems are mainly related to supply chain management and power systems.
Feel free to follow / branch / contribute
Relevant answer
Answer
udemy.com/course/optimization-in-python/?couponCode=36C6F6B228A087695AD9
  • asked a question related to Applied Mathematics
Question
5 answers
I have data in a 30 x 1 matrix. Is it possible to find the best optimized value using the gradient descent algorithm? If yes, please share the procedure or a link to the detailed background theory behind it. It will be helpful for me to proceed further in my research.
Relevant answer
Answer
It depends on the cost function and the model that you are using. Gradient descent will converge to the optimal value (or very close to it) of the training loss function, given a properly set learning rate, if the optimization problem is convex with respect to the parameters. That is the case for linear regression using the mean squared error loss, or logistic regression using cross entropy. For the case of neural networks with several layers and non-linearities none of these loss functions make the problem convex, therefore there is no guarantee that you will find the optimal value. The same would happen if you used logistic regression with the mean squared error instead of cross entropy.
An important thing to note is that when I talk about the optimal value, I mean the value that minimizes the loss in your training set. It is always possible to overfit, which means that you find the optimal parameters for your training set, but those parameters make inaccurate predictions on the test set.
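For a concrete illustration of the convex case, here is a minimal sketch in R (synthetic 30-point data and plain batch gradient descent on the mean-squared-error loss of a simple linear regression; the data and learning rate are made up for the example):
set.seed(1)
x <- rnorm(30)                          # 30 x 1 predictor
y <- 2 + 3 * x + rnorm(30, sd = 0.5)    # synthetic response
a <- 0; b <- 0                          # intercept and slope, initialised at 0
lr <- 0.1                               # learning rate
for (step in 1:2000) {
  r <- y - (a + b * x)                  # residuals at the current parameters
  a <- a + lr * mean(r)                 # step along the negative MSE gradient
  b <- b + lr * mean(r * x)             # (the constant factor 2 is folded into lr)
}
c(a, b)                                 # close to the closed-form least-squares fit:
coef(lm(y ~ x))
Because the MSE surface here is convex, the loop ends near the unique minimizer; with a multi-layer network the same loop would only be guaranteed to reach a stationary point that need not be the global optimum.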
  • asked a question related to Applied Mathematics
Question
3 answers
Thermal stresses in applied mathematics
Relevant answer
Answer
The new trend in thermal stress is application-oriented, especially in cloud computing and the biomedical field of research @Alim Khan
  • asked a question related to Applied Mathematics
Question
4 answers
Non similarity transformation
Relevant answer
Answer
Hi Noraihan,
From a mathematical perspective, it is just a method to reduce a system of PDEs to ODEs, i.e., to reduce the dimensionality of the problem.
  • asked a question related to Applied Mathematics
Question
18 answers
The equation dx/dt = F(x) can be linearized using the Carleman technique and solved with the linear state equation method. But for some conditions I found a proper decomposition of F(x) relating to the known logistic solution of a fundamental canonical problem.
Relevant answer
Answer
Hello. From 2016 up to today, I have finally solved any nonlinear autonomous first-order differential equation. The solution is in terms of an important analytical recursion which I will present in a later paper. Regards.
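In the meantime, a minimal illustration of the Carleman embedding for the logistic case mentioned in the question (my own sketch, not the recursion referred to above): take dx/dt = x − x^2 and define the moments y_k = x^k for k = 1, 2, 3, ... Then
dy_k/dt = k x^(k−1) dx/dt = k (x^k − x^(k+1)) = k y_k − k y_(k+1),
which is an infinite but linear system dy/dt = A y with a bidiagonal matrix A. Truncating at k = N gives a finite linear state equation whose first component approximates x(t), and the approximation can be checked against the known closed-form logistic solution x(t) = x0 e^t / (1 + x0 (e^t − 1)).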
  • asked a question related to Applied Mathematics
Question
2 answers
I have a system of non-linear differential equations that explains the behaviour of some of the cancer cells.
Looking for help identifying the equilibrium points and eigenvalues of this model in order to determine the type of bifurcation present.
Thanks in advance.
Relevant answer
Answer
Well it's a good idea to find some of them, first. The first equation implies that y=0 is an equilibrium, so a class of equilibria is of the form (x,0,z). That reduces the problem. From the last equation it then should be possible to solve for z and, from the second, for x.
Then look at the other factor of the first equation; and so on.
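In R, the numerical side of this can be sketched as follows (the three-variable model below is purely hypothetical and only stands in for the unseen cancer system; the rootSolve calls are the part to reuse):
library(rootSolve)
# hypothetical interacting-population model, NOT the asker's equations
model <- function(x) {
  c(x[1] * (1 - x[1] - 0.5 * x[2]),            # dx/dt
    x[2] * (0.4 - 0.4 * x[2] - 0.3 * x[1]),    # dy/dt
    x[3] * (0.2 - 0.2 * x[3] - 0.1 * x[1]))    # dz/dt
}
eq <- multiroot(f = model, start = c(0.5, 0.5, 0.5))$root   # Newton search for an equilibrium
J  <- gradient(f = model, x = eq)                           # numerical Jacobian at the equilibrium
eigen(J)$values    # all real parts negative -> locally asymptotically stable
Repeating this from different starting points (including ones with y = 0, as suggested above) maps out the equilibria, and tracking how the eigenvalues cross the imaginary axis as a model parameter varies is what identifies the bifurcation type.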
  • asked a question related to Applied Mathematics
Question
7 answers
I want to do a PhD on mathematical modeling of infectious diseases (e.g., Covid-19, malaria, dengue). I am also interested in pure mathematics, such as nonlinear analysis and variational inequalities, so my question is: can I find a connection between these two areas? Suggestions are needed, thank you.
Relevant answer
Answer
Nonlinear analysis and variational inequalities are mathematical tools that can be used to model and analyze a wide range of phenomena, including those related to infectious diseases. In particular, nonlinear analysis and variational inequalities can be used to study the dynamics of infectious disease outbreaks, such as the spread and control of the disease in a population.
  • asked a question related to Applied Mathematics
Question
14 answers
Can an elliptic crack (small enough to remain a single entity, with no internal pressure or shear force) inside an isotropic material (no boundary effect) be expanded in its own plane under externally applied shearing stresses only?
If yes, how did you show that? Do we have experimental evidence for the process?
Relevant answer
Answer
In order to overcome the difficulty for an elliptical crack to expand under applied shearing stresses parallel to the plane of the loop, we just analysed THE ROUGH CONOIDAL CRACK UNDER GENERAL LOADING. These types of cracks are observed in high strength broken specimens (steel, Ni superalloys ...) subjected to long life fatigue experiments.
Please refer to the "Q&A" question How to estimate the average rough conoidal crack shape (angle, height, circular basis) observed in high strength materials (steel, nickel alloys)?
  • asked a question related to Applied Mathematics
Question
8 answers
I am interested in comparing two time-varying correlation series. Is there any statistically appropriate method to make this comparison?
Thank You
  • asked a question related to Applied Mathematics
Question
3 answers
Assume we have a program with different instructions. Due to some limitations in the field, it is not possible to test all the instructions. Instead, assume we have tested 4 instructions and calculated their rank for a particular problem.
the rank of Instruction 1 = 0.52
the rank of Instruction 2 = 0.23
the rank of Instruction 3 = 0.41
the rank of Instruction 4 = 0.19
Then we calculated the similarity between the tested instructions using cosine similarity (after converting the instructions from text form to vectors- machine learning instruction embedding).
Question ... is it possible to create a mathematical formula considering the values of rank and the similarity between instructions, so that .... given an un-tested instruction ... is it possible to calculate, estimate, or predict the rank of the new un-tested instruction based on its similarity with a tested instruction?
For example, we measure the similarity between instruction 5 and instruction 1. Is it possible to calculate the rank of instruction 5 based on its similarity with instruction 1? is it possible to create a model or mathematical formula? if yes, then how?
Relevant answer
Answer
As far as I understand your problem, you first need a mathematical relation between the instructions and rank. For instance, Rank x should correspond to some instruction value as y and vice versa; it means you require a mathematical function.
So there are various methods/tools to find a suitable (as accurate as you want) mathematical function based on given discrete values, such as curve-fitting methods or the use of ML.
Further, Once you obtain the mathematical function, run your code a few times, and you will get a set for various combinations of (instruction, rank). These set values will work as the feedback for your derived function. Make changes based on the feedback, and you will get a much more accurate function.
I hope you are looking for the same.
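One simple concrete form of such a formula (a hypothetical sketch in R, with made-up similarity values) is to predict the rank of the untested instruction as a similarity-weighted average of the ranks of the tested ones, i.e. rank_new = sum(sim_i * rank_i) / sum(sim_i):
ranks <- c(i1 = 0.52, i2 = 0.23, i3 = 0.41, i4 = 0.19)   # ranks of the tested instructions
sim   <- c(i1 = 0.91, i2 = 0.35, i3 = 0.60, i4 = 0.20)   # hypothetical cosine similarities to instruction 5
sum(sim * ranks) / sum(sim)                              # predicted rank, pulled toward instruction 1
The feedback loop described above then amounts to re-fitting this weighting (or a more flexible regression from similarity to rank) as new (instruction, rank) pairs become available.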
  • asked a question related to Applied Mathematics
Question
4 answers
During the lecture, the lecturer mentioned the frequentist properties of estimators, as follows.
Unbiasedness is only one of the frequentist properties — arguably, the most compelling from a frequentist perspective and possibly one of the easiest to verify empirically (and, often, analytically).
There are however many others, including:
1. Bias-variance trade-off: we would consider as optimal an estimator with little (or no) bias; but we would also value ones with small variance (i.e. more precision in the estimate), So when choosing between two estimators, we may prefer one with very little bias and small variance to one that is unbiased but with large variance;
2. Consistency: we would like an estimator to become more and more precise and less and less biased as we collect more data (technically, when n → ∞).
3. Efficiency: as the sample size increases indefinitely (n → ∞), we expect an estimator to become increasingly precise (i.e. its variance to reduce to 0, in the limit).
Why does the frequentist approach have these kinds of properties, and can we prove it? I think these properties can be applied to many other statistical approaches.
Relevant answer
Answer
Sorry, Jianhing, but I think you have misunderstood something in the lecture. Frequentist statistics is an interpretation under which probability is assigned on the basis of many repetitions of a random experiment.
In this setting, one designs functions of the data (also called statistics) which estimate certain quantities from the data. For example, the probability p of a coin landing heads is estimated from n independent trials with the same coin by simply counting the fraction of heads. This is then an estimator for the parameter p.
Each estimator should have desirable properties, such as unbiasedness, consistency, efficiency, low variance, and so on. Not every estimator has these properties. But, in principle, one can prove whether a given estimator has these properties.
So, it is not a characteristics of frequentist statistics, but a property of an individual estimator based on frequentist statistics.
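The "one can prove it" part can also be made tangible with a small simulation in R, here for the sample mean as an estimator of a normal population mean (simulated data, my own illustration):
set.seed(1)
mu <- 5
est_n10   <- replicate(10000, mean(rnorm(10,   mean = mu)))   # estimator at n = 10
est_n1000 <- replicate(10000, mean(rnorm(1000, mean = mu)))   # estimator at n = 1000
mean(est_n10)     # about 5: unbiased already for small n
var(est_n10)      # about 1/10 = sigma^2 / n
var(est_n1000)    # about 1/1000: variance shrinks as n grows (consistency)
The analytic counterparts, E[mean] = mu and Var[mean] = sigma^2 / n, are exactly what a proof of these properties establishes for this particular estimator.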
  • asked a question related to Applied Mathematics
Question
4 answers
Please, I need, if available, some important research papers which relate the theory of dynamical systems to climate change. Also, in general, I know there are a lot of published research articles that relate dynamical systems to many applications. But are there papers that research centers and governments rely on before taking any measures? I mean, are there papers, especially on climate change and the environment, which are not only theoretical but have practical applications?
Relevant answer
Fellow researcher,
Chaos theory, which is a branch of dynamical systems, was founded on the study of the Lorenz attractor (the butterfly diagram). Edward Lorenz was a meteorologist, and this attractor was introduced by him as a consequence of his simplified mathematical model for atmospheric convection. So yes, climate study and dynamical systems have been interlinked since the beginning, and I recommend Strogatz's "Nonlinear Dynamics and Chaos" for an overview with applications to climate change, or "Nonlinear Dynamics in Weather and Climate" for a more specialized text. It is worth mentioning that climate change also involves stochastic processes, so consulting works like "Stochastic resonance in climatic change" is important for your repertoire.
I wish you fun and success in your studies.
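If you want to experiment yourself, the Lorenz system mentioned above can be integrated with a few lines of R (a minimal sketch using the deSolve package and the classical parameter values):
library(deSolve)
lorenz <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    list(c(sigma * (Y - X),        # dX/dt
           X * (rho - Z) - Y,      # dY/dt
           X * Y - beta * Z))      # dZ/dt
  })
}
out <- ode(y = c(X = 1, Y = 1, Z = 1), times = seq(0, 50, by = 0.01),
           func = lorenz, parms = c(sigma = 10, rho = 28, beta = 8/3))
plot(out)                                   # time series of X, Y, Z
plot(out[, "X"], out[, "Z"], type = "l")    # the familiar butterfly attractor
Perturbing the initial condition slightly and re-running shows the sensitive dependence on initial conditions that started the whole field.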
  • asked a question related to Applied Mathematics
Question
24 answers
I am aware of the facts that every totally bounded metric space is separable and a metric space is compact iff it is totally bounded and complete but I wanted to know, is every totally bounded metric space is locally compact or not. If not, then give an example of a metric space that is totally bounded but not locally compact.
Follow this question on the given link
Relevant answer
Answer
A metric space A is totally bounded if and only if every sequence in A has a Cauchy subsequence. Total boundedness does not imply local compactness: for example, Q ∩ [0,1] with the usual metric is totally bounded, but no point of it has a compact neighbourhood.
  • asked a question related to Applied Mathematics
Question
8 answers
A few years ago, in a conversation that remained unfinished, a statistics expert told me that percentage variables should not be taken as a response variable in an ANOVA. Does anyone know if this is true, and why?
Relevant answer
Answer
Javier Ernesto Vilaso Cadre, tests do not corroborate that a distribution is normal. They may only fail to corroborate that a distribution is not normal, and that may simply be due to the sample size. Actually, such tests only tell you whether your sample size is already large enough to see that the normal distribution model (an idealized model!) does not account for all features of a real distribution. So actually they don't give you any useful information (you may fail to see relevant discrepancies because the sample size is too small, or you may get blinded by "statistically significant" discrepancies that are irrelevant for your problem). The only sensible way is to understand the variable and have some theoretical justification of its distribution, and then to judge whether the presumed discrepancies are relevant for your problem. One may then certainly have a look at the empirical distribution of the observed data: if it screams at you that your thoughts and arguments are very likely very wrong, you may go back and refine or deepen your understanding of the data-generative process you would like to study.
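A small R illustration of the sample-size point (simulated, mildly heavy-tailed data from a t-distribution rather than a real percentage variable):
set.seed(1)
x_small <- rt(20,   df = 5)     # 20 observations, mildly heavier tails than normal
x_large <- rt(5000, df = 5)     # the same kind of deviation, large sample
shapiro.test(x_small)$p.value   # often well above 0.05: deviation not "detected"
shapiro.test(x_large)$p.value   # usually far below 0.05: the same deviation is now "significant"
The deviation from normality is the same in both cases; only the power to detect it has changed, which is exactly why the test by itself cannot tell you whether the deviation matters for your ANOVA.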
  • asked a question related to Applied Mathematics
Question
9 answers
Well,
I am a very curious person. During Covid-19 in 2020, working through coded data and taking only the last name, I noticed in my country that people with certain surnames were more likely to die than others (and this pattern has remained unchanged over time). Through mathematical ratio and proportion, inconsistencies were found by performing a "conversion" so that all surnames had the same weighting. The rest was a simple exercise in probability and statistics that revealed this controversial fact.
Of course, what I did was a shallow study, just a data mining exercise, but it has been something that caught my attention, even more so when talking to an Indian researcher who found similar patterns within his country about another disease.
In the context of pandemics (for the end of these and others that may come)
I think it would be interesting to have a line of research involving different professionals such as data scientists; statisticians/mathematicians; sociology and demographics; human sciences; biological sciences to compose a more refined study on this premise.
Some questions still remain:
What if we could have such answers? How should Research Ethics be handled? Could we warn people about care? How would people with certain last names considered at risk react? And the other way around? From a sociological point of view, could such a recommendation divide society into "superior" or "inferior" genes?
What do you think about it?
=================================
Note: Due to important personal matters I have taken a break and returned with my activities today, February 13, 2023. I am too happy to come across many interesting feedbacks.
Relevant answer
Answer
It is just coincidental
  • asked a question related to Applied Mathematics
Question
41 answers
Dear All,
I am planning to do a Ph.D. in applied mathematics but am not able to decide on the area to work in. Can anyone suggest a good option to go with?
Relevant answer
Answer
Computational Mathematics is the best option. I think.
  • asked a question related to Applied Mathematics
Question
6 answers
Dear researchers
Do you know a journal in the field of applied mathematics or chemistry-mathematics in which publication of the article is free and the outcome of the review is announced within 3 months at most?
please contact me by the following:
Thank you very much.
Best regards
Relevant answer
Answer
I suggest you look at the websites for these journals. As far as I can tell, you can choose for your article to be "subscription" and not "open access" and therefore you would not have to pay for publication. My personal strategy is to contact those journals by email to confirm that I am correct.
IMA Journal of Applied Mathematics
Journal of Applied Mathematics and Computing
There are, as you suggest, a lot of journals in this topic where you have to pay and it is really hard to find one where you do not!
  • asked a question related to Applied Mathematics
Question
7 answers
Hi
I have a huge dataset for which I'd like to assess the independence of two categorical variables (x,y) given a third categorical variable (z).
My assumption: I have to do the independence tests per each unique "z" and even if one of these experiments shows the rejection of null hypothesis (independence), it would be rejected for the whole data.
Results: I have done Chi-Sq, Chi with Yates correction, Monte Carlo and Fisher.
- Chi-Sq is not a good method for my data due to sparse contingency table
- Yates and Monte Carlo show rejection of the null hypothesis
- For Fisher, all the p values are equal to 1
1) I would like to know if there is something I'm missing or not.
2) I have already discarded the "z"s that have DOF = 0. If I keep them how could I interpret the independence?
3) Why does Fisher's test result in pval=1 all the time?
4) Any suggestion?
#### Apply Fisher exact test
fish = fisher.test(cont_table,workspace = 6e8,simulate.p.value=T)
#### Apply Chi^2 method
chi_cor = chisq.test(cont_table,correct=T); ### Yates correction of the Chi^2
chi = chisq.test(cont_table,correct=F);
chi_monte = chisq.test(cont_table,simulate.p.value=T, B=3000);
Relevant answer
Answer
Hello Masha,
Why not use the Mantel-Haenszel test across all the z-level 2x2 tables for which there is some data? This allows you to estimate the aggregate odds ratio (and its standard error), thus you can easily determine whether a confidence interval includes 1 (no difference in odds, and hence, no relationship between the two variables in each table) or not.
That seems simpler than having to run a bunch of tests, and by so doing, increase the aggregate risk of a type I error (false positive).
Good luck with your work.
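A minimal sketch of that approach in R, using a hypothetical 2 x 2 x K table (x and y cross-tabulated at each level of z; the counts below are simulated just to make the call runnable):
set.seed(1)
K   <- 5
tab <- array(rpois(4 * K, lambda = 20), dim = c(2, 2, K),
             dimnames = list(x = c("x0", "x1"), y = c("y0", "y1"), z = paste0("z", 1:K)))
mantelhaen.test(tab)    # common odds ratio across the z strata, with a confidence interval
If the confidence interval for the common odds ratio excludes 1, x and y are associated given z; strata with empty rows or columns contribute nothing to the statistic and can simply be dropped from the array.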
  • asked a question related to Applied Mathematics
Question
8 answers
Dear colleagues, we know that getting a new research paper published can be a challenge for a new researcher. It is even more challenging when considering the risk of refusal that comes from submitting a new paper to a journal that is not the right fit. We can also mention that some journals require an article processing charge (APC) but also have a policy allowing them to waive fees on request at the discretion of the editor; however, we underline that we want to publish a new research paper without an APC!
So, what do you suggest?
We are certainly grateful for your recommendations. Kind regards! ------------------------------------------------------------------------------
Abdelaziz Hellal Mohamed Boudiaf M'sila, University, Algeria.
Relevant answer
Answer
Cubo, A Mathematical Journal, is very good for these fields of mathematics and more.
  • asked a question related to Applied Mathematics
Question
13 answers
I have registered for a conference to be held in Singapore from 9-10 September 2019.
I want to ask if this event is a real event.
Name of event: International Conference on Applied Mathematics and Science (ICAMS-19) 
Organization: WRF CONFERENCE 
Date: 9th-10th SEP 2019
Best regards
Relevant answer
Answer
Hello Usama,
I would not spend any time looking at that 'conference'.
I quote from that page,
"The main focus of conference is to improve the , accelerate the translation of leading edge discovery at research level"
Ungrammatical, misspelled.
But the main issue I have with it, is that there is no topic.
What on Earth is this conference about?
An event so broad in scope cannot hope to be useful in attracting an audience that would actually be beneficial to its attendees.
The link to "Upcoming conference" does not work. The telephone number is a British one (+44) but the Whats App number is not.
If you can tell me your field of interest, I might be able to suggest some useful meetings. My background is in aerospace systems, material testing for oil & gas firms, and respiratory medical devices.
  • asked a question related to Applied Mathematics
Question
6 answers
I have previously conducted laboratory experiments on a photovoltaic panel under the influence of artificial soiling in order to be able to obtain the short circuit current and the open-circuit voltage data, which I analyzed later using statistical methods to draw a performance coefficient specific to this panel that expresses the percentage of the decrease in the power produced from the panel with the increase of accumulating dust. Are there any similar studies that relied on statistical analysis to measure this dust effect?
I hope I can find researchers interested in this line of research and that we can do joint work together!
Article link:
Relevant answer
Answer
Dear Dr Younis
Find attached:
1-(1) (PDF) Spatial Management for Solar and Wind Energy in Kuwait (researchgate.net)
2-(1) (PDF) Cost and effect of native vegetation change on aeolian sand, dust, microclimate and sustainable energy in Kuwait (researchgate.net)
regards
Ali Al-Dousari
  • asked a question related to Applied Mathematics
Question
4 answers
A tunable clock source will consist of a PLL circuit like the Si5319, configured by a microcontroller. The input frequency is fixed, e.g. 100 MHz. The user selects an output frequency with a resolution of, say, 1 Hz. The output frequency will always be lower than the input frequency.
The problem: The two registers of the PLL circuit which determine the ratio "output frequency/input frequency" are only 23 bits wide, i.e. the upper limit of both numerator and denominator is 8,388,607. As a consequence, when the user sets the frequency to x, the rational number x/10^8 has to be reduced or approximated.
If the greatest common divisor (GCD) of x and 10^8 is >= 12, then the solution is obvious. If not, the task is to find the element in the Farey sequence F_8388607 that is closest to x/10^8. This can be done by descending from the root along the left half of the Stern-Brocot tree. However, this tree, with all elements beyond F_8388607 pruned away, is far from balanced, resulting in a maximum number of descending steps in excess of 4 million; no problem on a desktop computer but a bit slow on an ordinary microcontroller.
F_8388607 has about 21×10^12 elements, so a balanced binary tree with these elements as leaves would have a depth of about 45. But since such a tree cannot be stored in the memory of a microcontroller, the numerator and denominator of the sought Farey element have to be calculated somehow during the descent. This task is basically simple in the Stern-Brocot tree, but I don't know of any solution in any other tree.
Do you know of a fast algorithm for this problem, maybe working along entirely different lines?
Many thanks in advance for any suggestions!
Relevant answer
Answer
Now I tested the idea of jumping downwards in the Stern-Brocot tree. This idea is based on the observation that long paths from the root to a Farey element always seem to contain long sequences of steps directed exclusively to the left or exclusively to the right. As long as the direction is constant, the values of the numerator and denominator which are added to the current node are also constant, obviously. Therefore, the product of the numerator with a given jump width, and likewise the product of the denominator with that jump width, can be added to the node.
In order to determine the largest possible jump width, bitwise successive approximation is used in this first approach. The result is quite satisfactory:
With an input frequency of 100 MHz, and the output frequency in the range 1 Hz to 100 MHz - 1 Hz (at the extrema, the approximation by Farey elements is poor, of course), the sum of the passes through the outer loop (movement through the tree) and through the inner loop (determining the maximum jump width) never exceeds 386. Attached is my C source. Compared to the maximum number of single steps, this is an improvement of 4 orders of magnitude.
While this approach solves my practical problem, I would still be interested in other solutions because sometimes it's amazing how a problem can be tackled in different ways.
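For comparison, the same Farey element can also be reached through the continued-fraction formulation of the problem, which jumps by whole partial quotients instead of bit-wise jump widths. A rough R sketch of that route (my own reformulation, not the C source mentioned above; since the target ratio is below 1, bounding the denominator also bounds the numerator):
best_rational <- function(num, den, maxden = 8388607) {
  p0 <- 0; q0 <- 1                   # convergent p[-2]/q[-2]
  p1 <- 1; q1 <- 0                   # convergent p[-1]/q[-1]
  n <- num; d <- den
  while (d != 0) {
    a  <- n %/% d                    # next continued-fraction digit
    p2 <- a * p1 + p0
    q2 <- a * q1 + q0
    if (q2 > maxden) {               # next convergent would exceed the 23-bit limit:
      k    <- (maxden - q0) %/% q1   # largest admissible partial digit
      semi <- c(k * p1 + p0, k * q1 + q0)    # best semiconvergent within the limit
      conv <- c(p1, q1)                      # last full convergent
      if (abs(semi[1] / semi[2] - num / den) < abs(conv[1] / conv[2] - num / den))
        return(semi) else return(conv)
    }
    p0 <- p1; q0 <- q1; p1 <- p2; q1 <- q2
    tmp <- d; d <- n %% d; n <- tmp
  }
  c(p1, q1)                          # the exact reduced fraction already fits
}
best_rational(12345677, 1e8)         # numerator/denominator for a 12.345677 MHz output
The loop runs once per continued-fraction digit, so at most a few dozen iterations even for the worst-case ratios, which should also be comfortable on a microcontroller.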
  • asked a question related to Applied Mathematics
Question
9 answers
Dear researchers
As we know, recently a new type of derivative has been introduced which depends on two parameters, the fractional order and the fractal dimension. These derivatives are called fractal-fractional derivatives and are divided into three categories with respect to the kernel: power-law type kernel, exponential decay-type kernel, and generalized Mittag-Leffler type kernel.
The power and accuracy of these operators in simulations have motivated many researchers for using them in modeling of different diseases and processes.
Is there any researchers working on these operators for working on equilibrium points, sensitivity analysis and local and global stability?
If you would like to collaborate with me, please contact me by the following:
Thank you very much.
Best regards
Sina Etemad, PhD
Relevant answer
Answer
Yes I am
  • asked a question related to Applied Mathematics
Question
8 answers
Dears,
What can you say about the journal " Italian journal of pure and applied Mathematics"?
Relevant answer
Answer
The good news is that this journal is still indexed. Your observation has to do with the fact that:
-The journal only publishes two times a year (and the first issue in 2022 is just published)
-The journal is most likely rather slow in providing the metadata for Scopus needed for proper indexing
The bad news is that when I had a closer look they, indeed as said by Karrar Q. Al-Jubouri and yourself, take a long time to publish accepted papers. So, if you can wait patiently then this is just a fine journal but with some degree of time pressure you better go for another one.
Best regards.
  • asked a question related to Applied Mathematics
Question
54 answers
I am studying integral transforms (Fourier, Laplace, etc), to apply them in physics problems. However, it is difficult to get books that have enough exercises and their answers. I have found that in particular the Russian authors have excellent books where there are a lot of exercises and their solutions.
Greetings,
Ender
  • asked a question related to Applied Mathematics
Question
10 answers
Journal of Industrial & Management Optimization (JIMO) is an open access journal. You pay a substantial amount to publish a paper. When you go to the website of its publisher, American Institute of Mathematical Sciences (AIMS Press), it seems that it is not really based in the United States. I am not sure if it is a legitimate professional organization or if it is a predatory publisher. They have a large number of open access journals. On the other hand, their handling of papers is terrible: extremely slow and low-tech, which is not typical for predatory journals. It may take 13 months to get an editorial rejection, for instance. Furthermore, they don't have an online submission system with user profiles on it, you just submit the paper on a website, and they give you a URL to check your paper's status, which makes your submission open to anyone who has the URL. It has an impact factor of 1.3, which makes me puzzled. Any comments on this organization and the journal will be appreciated.
Relevant answer
Answer
Norbert Tihanyi one little warning, if you look whether a particular journal is mentioned in the Beall’s list you should not only check the journal title in the stand-alone journal list (https://beallslist.net/standalone-journals/) but also the publisher behind it (if any). In this case the publisher is not mentioned in the Beall’s list (https://beallslist.net/). Anis Hamza I suppose you mean ISSN number, this journal with ISSN 1547-5816 and/or E-ISSN:1553-166X is mentioned in Scopus (https://www.scopus.com/sources.uri?zone=TopNavBar&origin=searchbasic) and Clarivate’s Master journal list (https://mjl.clarivate.com/home).
Back to your question, it is somewhat diffuse. There are signs that you are dealing with a questionable organization:
-Contact info renders in Google a nice residence but does not seem to correspond to an office and I quote “The American Institute of Mathematical Sciences is an international organization for the advancement and dissemination of mathematical and applied sciences.” https://www.aimsciences.org/common_news/column/aboutaims
-Both websites https://www.aimsciences.org/ and http://www.aimspress.com/ function more or less okay but not flawlessly
-The journal “Journal of Industrial & Management Optimization (JIMO)“ is somewhat vague about the APC. It positions itself as hybrid (with an APC of 1800 USD), but all papers I checked can be read as open access (although not all have a CC etc. license). It mentions something like open access for free when an agreement is signed with your institution but how much this cost is unclear
-No problem by itself but the majority of authors are from China, makes you wonder about American Institute…
-Editing is well…sober
On the other hand it looks like and I quote “AIMS is a science organization with two independent operations: AIMS Press (www.aimspress.com) and the American Institute of Mathematical Sciences (AIMS) (www.aimsciences.org ).” AIMS Press is focused on Open Access journals while the journals published by AIMS (www.aimsciences.org) are/used to be subscription-based journals. Pretty much like Springer has there BioMed Central (BMC) journal portfolio and Bentham has their Bentham Open division.
Facts are:
-AIMS ( www.aimsciences.org ), more than 20 of their journals are indexed in SCIE and indexed in Scopus as well (under the publisher’s name: American Institute of Mathematical Sciences)
-AIMS Press (www.aimspress.com ), four journals are indexed in SCIE and thus have an impact factor and 14 journals are indexed in Clarivate’s ESCI. 7 journals are indexed in Scopus.
-AIMS Press, 20 of their journals are a member of DOAJ
-Journal of Industrial & Management Optimization (JIMO) https://www.aimsciences.org/journal/1547-5816 is indexed in Clarivate’s SCIE (impact factor 1.801, see enclosed file for latest JCR Report) and Scopus indexed CiteScore 1.8 https://www.scopus.com/sourceid/12900154727.
-For the papers I checked the time between received and accepted varies between 6 and 9 months and an additional 3-4 months before publication (it is well… not fast but not unusual)
So, overall, I think that the publisher has quite some credibility and it might be worthwhile to consider.
Best regards.
  • asked a question related to Applied Mathematics
Question
15 answers
Why is a Proof to Fermat's Last Theorem so Important?
I have been observing an obsession in mathematicians, logicians, and number theorists with providing a "Proof for Fermat's Last Theorem". Many intend to publish these papers in peer-reviewed journals. Publishing your findings is good, but the problem is that a lot of the papers aimed at providing a proof for Fermat's Last Theorem are erroneous, and the authors don't seem to realize that.
So
Why is the Proof of Fermat's Last Theorem so much important that a huge chunk of mathematicians are obsessed with providing the proof and failing miserably?
What are the practical applications of this theorem?
Note: I am not against the theorem or the research that is going on around the theorem, but it seems to be an addiction. That is why I thought of asking this question.
Relevant answer
Answer
Muneeb Faiq, the situation has changed and there should be no fear when standing before FLT.
  • asked a question related to Applied Mathematics
Question
14 answers
Hello everyone,
Could you recommend courses, papers, books or websites about modeling language and formalization?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
Relevant answer
Answer
Kindly check also the following very good RG link:
  • asked a question related to Applied Mathematics
Question
1 answer
Dear colleagues.
I would like to ask, if anybody works with neural networks, to check my loop for the test sample.
I have 4 sequences (with the goal of predicting prov; monthly data, 22 data points in each sequence) and I would like to construct the forecast for each next month using a training sample size of 5 months.
It means I need to shift each time by one month with 5 elements:
train<-1:5, train<-2:6, train<-3:7, ..., train<-17:21. So I need to get 17 columns as the output result.
The loop is:
shift <- 4
number_forecasts <- 1
d <- nrow(maxmindf)
k <- number_forecasts
for (i in 1:(d - shift + 1))
{
The code:
require(quantmod)
require(nnet)
require(caret)
prov=c(25,22,47,70,59,49,29,40,49,2,6,50,84,33,25,67,89,3,4,7,8,2)
temp=c(22,23,23,23,25,29,20,27,22,23,23,23,25,29,20,27,20,30,35,50,52,20)
soil=c(676,589,536,499,429,368,370,387,400,423,676,589,536,499,429,368,370,387,400,423,600,605)
rain=c(7,8,2,8,6,5,4,9,7,8,2,8,6,5,4,9,5,6,9,2,3,4)
df=data.frame(prov,temp,soil,rain)
mydata<-df
attach(mydata)
mi<-mydata
scaleddata<-scale(mi$prov)
normalize <- function(x) {
return ((x - min(x)) / (max(x) - min(x)))
}
maxmindf <- as.data.frame(lapply(mydata, normalize))
go<-maxmindf
forecasts <- NULL
forecasts$prov <- 1:22
forecasts$predictions <- NA
forecasts <- data.frame(forecasts)
# Training and Test Data
trainset <- maxmindf()
testset <- maxmindf()
#Neural Network
library(neuralnet)
nn <- neuralnet(prov~temp+soil+rain, data=trainset, hidden=c(3,2), linear.output=FALSE, threshold=0.01)
nn$result.matrix
plot(nn)
#Test the resulting output
#Test the resulting output
temp_test <- subset(testset, select = c("temp","soil", "rain"))
head(temp_test)
nn.results <- compute(nn, temp_test)
results <- data.frame(actual = testset$prov, prediction = nn.results$net.result)
}
minval<-min(x)
maxval<-max(x)
minvec <- sapply(mydata,min)
maxvec <- sapply(mydata,max)
denormalize <- function(x,minval,maxval) {
x*(maxval-minval) + minval
}
as.data.frame(Map(denormalize,results,minvec,maxvec))
Could you please tell me what I can add in trainset and testset (using the loop) and how to display all predictions using a loop, so that the results are displayed with a shift of one and a test sample of 5?
I am very grateful for your answers
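One possible way to organise that loop (a hedged sketch only, reusing the variable names from the script above: a window of 5 months, shifted by one month, predicting the following month each time) would be:
library(neuralnet)
win  <- 5
n    <- nrow(maxmindf)                        # 22 months
pred <- rep(NA, n)                            # one-step-ahead forecasts on the original scale
for (i in 1:(n - win)) {                      # i = 1 uses months 1:5, ..., i = 17 uses months 17:21
  trainset <- maxmindf[i:(i + win - 1), ]
  testset  <- maxmindf[i + win, , drop = FALSE]
  nn  <- neuralnet(prov ~ temp + soil + rain, data = trainset,
                   hidden = c(3, 2), linear.output = FALSE,
                   threshold = 0.01, stepmax = 1e6)
  out <- compute(nn, testset[, c("temp", "soil", "rain")])$net.result
  pred[i + win] <- denormalize(out, min(mydata$prov), max(mydata$prov))
}
data.frame(month = 1:n, actual = mydata$prov, forecast = pred)
Each pass re-trains the network on the latest five months only; collecting pred gives the 17 shifted forecasts you describe (here as rows rather than columns), and the convergence settings of neuralnet may need tuning for such small windows.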
  • asked a question related to Applied Mathematics
Question
3 answers
I want to develop a hybrid SARIMA-GARCH model for forecasting monthly rainfall data. The data are split into 80% for training and 20% for testing. I initially fit a SARIMA model for rainfall and found that the residuals of the SARIMA model are heteroscedastic in nature. To capture the information left in the SARIMA residuals, a GARCH model of order (p=1, q=1) is applied to the residual part. But when the data are forecasted I am getting a constant value. I tried applying different model orders for GARCH and I still get a constant value. I have attached my code; kindly help me resolve it. Where have I made a mistake in the coding, or does some other CRAN package have to be used?
library("tseries")
library("forecast")
library("fgarch")
setwd("C:/Users/Desktop") # Setting of the work directory
data<-read.table("data.txt") # Importing data
datats<-ts(data,frequency=12,start=c(1982,4)) # Converting data set into time series
plot.ts(datats) # Plot of the data set
adf.test(datats) # Test for stationarity
diffdatats<-diff(datats,differences=1) # Differencing the series
datatsacf<-acf(datats,lag.max=12) # Obtaining the ACF plot
datapacf<-pacf(datats,lag.max=12) # Obtaining the PACF plot
auto.arima(diffdatats) # Finding the order of ARIMA model
datatsarima<-arima(diffdatats,order=c(1,0,1),include.mean=TRUE) # Fitting of ARIMA model
forearimadatats<-forecast.Arima(datatsarima,h=12) # Forecasting using ARIMA model
plot.forecast(forearimadatats) # Plot of the forecast
residualarima<-resid(datatsarima) # Obtaining residuals
archTest(residualarima,lag=12) # Test for heteroscedascity
# Fitting of ARIMA-GARCH model
garchdatats<-garchFit(formula = ~ arma(2)+garch(1, 1), data = datats, cond.dist = c("norm"), include.mean = TRUE, include.delta = NULL, include.skew = NULL, include.shape = NULL, leverage = NULL, trace = TRUE,algorithm = c("nlminb"))
# Forecasting using ARIMA-GARCH model
forecastgarch<-predict(garchdatats, n.ahead = 12, trace = FALSE, mse = c("uncond"), plot=FALSE, nx=NULL, crit_val=NULL, conf=NULL)
plot.ts(forecastgarch) # Plot of the forecast
Relevant answer
Answer
This happens at the beginning, as usual; that is how we learn. I would advise you to check your theory and code line by line. It will work for sure.
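To add a hedged sketch of one common way to combine the two components (object names follow the code in the question; this assumes the SARIMA fit supplies the conditional mean and a GARCH(1,1) fitted on its residuals supplies the conditional variance): note that the point forecast of a zero-mean GARCH residual is essentially constant by construction, so a near-constant output from the residual model alone is expected; the informative GARCH output is the forecast standard deviation, which enters as an interval around the SARIMA mean forecast rather than as a level shift.
library(forecast)
library(fGarch)
mean_fc <- forecast(datatsarima, h = 12)$mean                 # conditional mean from the (S)ARIMA fit
garch_res <- garchFit(~ garch(1, 1), data = residualarima,    # GARCH(1,1) on the ARIMA residuals
                      include.mean = FALSE, trace = FALSE)
res_fc <- predict(garch_res, n.ahead = 12)                    # meanForecast is ~0; standardDeviation is the useful part
hybrid_point <- as.numeric(mean_fc) + res_fc$meanForecast     # hybrid point forecast
hybrid_upper <- hybrid_point + 1.96 * res_fc$standardDeviation
hybrid_lower <- hybrid_point - 1.96 * res_fc$standardDeviation
cbind(hybrid_lower, hybrid_point, hybrid_upper)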
  • asked a question related to Applied Mathematics
Question
5 answers
A comprehensive way to find the concentration of arbitrary solutions would bring benefits related to health, industry, technology, and commerce. Although the Beer-Lambert law is one solution, there are cases where epsilon (the molar absorptivity) is unknown (for example, a Coca-Cola drink or a cup of coffee). In these cases, proper alternative ways of determining concentration should be suggested.
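One standard workaround when the molar absorptivity is unknown is an empirical calibration curve: measure the absorbance of several solutions of known concentration, fit the straight line that the Beer-Lambert law predicts, and invert it for the unknown sample. A minimal R sketch, with invented numbers purely for illustration:
conc_std <- c(0.5, 1.0, 2.0, 4.0, 8.0)        # known standard concentrations (e.g. g/L); invented values
abs_std  <- c(0.06, 0.12, 0.25, 0.49, 0.98)   # measured absorbances of the standards; invented values
cal <- lm(abs_std ~ conc_std)                 # Beer-Lambert: A = (epsilon * path length) * c, a straight line
abs_unknown <- 0.37                           # absorbance measured for the unknown sample
(abs_unknown - coef(cal)[1]) / coef(cal)[2]   # estimated concentration of the unknown
For strongly coloured drinks such as cola or coffee, dilution into the linear absorbance range (and possibly standard additions) would be needed, but the inversion step is the same.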
  • asked a question related to Applied Mathematics
Question
4 answers
I am trying to solve the differential equation. I was able to solve it when the function P is constant and independent of r and z, but I am not able to solve it when P is a function of r and z, or a function of r only (IMAGE 1).
Any general solution for IMAGE 2?
Kindly help me with this. Thanks
Relevant answer
Answer
Vikas Rohil sir, please help me.
  • asked a question related to Applied Mathematics
Question
5 answers
Multinomial or ordered choice: which one is applicable?
Relevant answer
Answer
A multinomial logistic regression model is best.
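As a hedged illustration only (the data frame mydata and the covariates x1 and x2 are hypothetical), both candidate specifications can be fitted in R and compared; polr assumes the response categories are ordered, while multinom does not:
library(nnet)   # multinom: multinomial (unordered) logistic regression
library(MASS)   # polr: proportional-odds (ordered) logistic regression
fit_multi <- multinom(choice ~ x1 + x2, data = mydata)
fit_ord   <- polr(factor(choice, ordered = TRUE) ~ x1 + x2, data = mydata)
AIC(fit_multi, fit_ord)   # a lower AIC, together with the substantive meaning of the categories, guides the choice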
  • asked a question related to Applied Mathematics
Question
2 answers
The complete flow equations for a third grade flow can be derived from the differential representation of the stress tensor. Has anyone ever obtained any results, experimentally or otherwise, that indicate the space-invariance (constancy) of the velocity gradient, especially for 1D shear flow in the presence of constant wall-suction velocity? Under what conditions were the results obtained?
Relevant answer
Answer
Academic resources on Fluid Mechanics are provided in the ResearchGate project:
SINGLE PHASE AND MULTIPHASE TURBULENT FLOWS (SMTF) IN NATURE AND ENGINEERING APPLICATIONS | Jamel Chahed (researchgate.net)
  • asked a question related to Applied Mathematics
Question
4 answers
Grubbs's test and Dixon's test are widely applied in the field of hydrology to detect outliers, but a drawback of these statistical tests is that they require the dataset to be approximately normally distributed. I have rainfall data for 113 years and the dataset is non-normally distributed. What statistical tests can detect outliers in non-normally distributed datasets, and what values should we substitute in place of the outliers?
Relevant answer
Answer
Hello Kabbilawsh,
If you believed your sample data accurately represented the target population, you could: (a) run a simulation study of random samples from such a population; and (b) identify exact thresholds for cases (either individual data points or sample means or medians, depending on which better fit your research situation) at whatever desired level of Type I risk you were willing to apply.
If you don't believe your sample data accurately represent the target population, you could invoke whatever distribution you believe to be plausible for the population, then proceed as above.
On the other hand, you could always construct a Chebyshev confidence interval for the mean at whatever confidence level you desired, though this would then identify thresholds beyond which no more than 100 - CI% of sample means would be expected to fall, no matter what the shape of the distribution. This, of course, would apply only to samples of 2 or more cases, not to individual scores.
Good luck with your work.
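As a minimal sketch of the distribution-free idea above, applied here for simplicity to individual observations rather than to sample means: Chebyshev's inequality guarantees that at most $1/k^2$ of any distribution lies more than $k$ standard deviations from the mean. The data vector below is a stand-in, not real rainfall data:
x <- rgamma(113, shape = 2, rate = 0.5)   # stand-in for 113 years of skewed rainfall totals
k <- sqrt(1 / 0.05)                       # at most 5% of values can lie beyond k ~ 4.47 SDs, for any distribution
lower <- mean(x) - k * sd(x)
upper <- mean(x) + k * sd(x)
which(x < lower | x > upper)              # candidate outliers, with no normality assumption
Because the bound is distribution-free it is conservative: values flagged this way are extreme under any distributional shape, which sidesteps the normality assumption behind Grubbs's and Dixon's tests.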
  • asked a question related to Applied Mathematics
Question
47 answers
Consider the powerful central role of differential equations in physics and applied mathematics.
In the theory of ordinary differential equations and in dynamical systems we generally consider smooth or C^k class solutions. In partial differential equations we consider far more general solutions, involving distributions and Sobolev spaces.
I was wondering, what are the best examples or arguments that show that restriction to the analytic case is insufficient ?
What if we consider only ODEs with analytic coefficients and only analytic solutions, and likewise for PDEs? Here by "analytic" I mean real maps which can be extended to holomorphic ones. How would this affect the practical use of differential equations in physics and science in general? Is there an example of a differential equation arising in physics (excluding quantum theory!) which only has C^k or smooth solutions and no analytic ones?
It seems we could not even have an analytic version of the theory of distributions, as there could be no test functions: there are no non-zero analytic functions with compact support.
Is Newtonian physics analytic ? Is Newton's law of gravitation only the first term in a Laurent expansion ? Can we add terms to obtain a better fit to experimental data without going relativistic ?
Maybe we can consider that the smooth category is used as a convenient approximation to the analytic category. The smooth category allows perfect locality. For instance, we can consider that a gravitational field dies off outside a finite radius.
Cosmologists usually consider space-time to be a manifold (although with possible "singularities"). Why a manifold rather than the adequate smooth analogue of an analytic space ?
Space = regular points, Matter and Energy = singular points ?
Relevant answer
Answer
For a function describing some physical property, when complex arguments and complex results are physically meaningful, then often the physics requires the function to be analytic. But if the only physically valid arguments and results are real values, then the physics only requires (infinitely) smooth functions.
For example, exp(-1/z^2) is not analytic at z=0, but exp(-1/x^2) is infinitely smooth everywhere on the real line (and so may be valid physically).
One place where this happens is in using centre manifolds to rigorously construct low-D models of high-D dynamical systems. One may start with an analytic high-D system (e.g., dx/dt=-xy, dy/dt=-y+x^2) and find that the (slow) centre manifold typically is only locally infinitely smooth, described by the divergent series (e.g., y=x^2+2x^4+12x^6+112x^8+1360x^10+... from section 4.5.2 in http://bookstore.siam.org/mm20/). Other examples show a low-D centre manifold model is often only finitely smooth in some finite domain, again despite the analyticity of the original system.
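For completeness, a standard justification of the exp(-1/x^2) example above: define $f(x)=e^{-1/x^2}$ for $x\neq 0$ and $f(0)=0$. For $x\neq 0$ every derivative has the form $f^{(n)}(x)=R_n(1/x)\,e^{-1/x^2}$ for some polynomial $R_n$, and since the exponential decays faster than any polynomial in $1/x$ grows, $f^{(n)}(x)\to 0$ as $x\to 0$; by induction $f^{(n)}(0)=0$ for every $n$. The Taylor series of $f$ at $0$ is therefore identically zero, yet $f(x)>0$ for all $x\neq 0$, so the series does not represent $f$ on any neighbourhood of $0$: $f$ is $C^\infty$ on the real line but not analytic at $0$.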
  • asked a question related to Applied Mathematics
Question
38 answers
Is the reciprocal of the inverse tangent $\frac{1}{\arctan x}$ a (logarithmically) completely monotonic function on the right-half line?
If $\frac{1}{\arctan x}$ is a (logarithmically) completely monotonic function on $(0,\infty)$, can one give an explicit expression of the measure $\mu(t)$ in the integral representation in the Bernstein--Widder theorem for $f(x)=\frac{1}{\arctan x}$?
These questions have been stated in detail at the website https://math.stackexchange.com/questions/4247090
Relevant answer
Answer
It seems that a correct proof for this question has been announced at arxiv.org/abs/2112.09960v1.
Qi’s conjecture on logarithmically complete monotonicity of the reciprocal of the inverse tangent function
  • asked a question related to Applied Mathematics
Question
5 answers
Hello Researchers,
Say that I have 'p' variables and 'm' constraint equations relating these variables. Therefore, I must have 'p - m' independent variables, and the remaining variables can be related to the independent ones through the constraint equations. Is there any rationale for selecting these 'p - m' independent variables from the available 'p' variables?
Relevant answer
Answer
Bob Senyange Sir and Victor Krasnoshchekov Sir, thank you for your comments.
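One common rationale, at least when the constraints are linear (or have been linearised), is numerical: write the constraint equations as a matrix and let column pivoting pick which m variables to eliminate, so that the eliminated block is linearly independent and well behaved; the remaining p - m columns are kept as the independent variables. A minimal R sketch with an invented constraint matrix:
A <- rbind(c(1, 1, 0, 2, 0),            # m = 2 linear constraints in p = 5 variables, A %*% x = b
           c(0, 1, 1, 0, 1))
qrA <- qr(A)                            # QR factorisation; (near-)dependent columns are pivoted to the end
r <- qrA$rank                           # number of independent constraints (here 2)
dependent   <- qrA$pivot[seq_len(r)]    # variables eliminated via the constraints
independent <- qrA$pivot[-seq_len(r)]   # the p - m variables kept as independent
dependent; independent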
  • asked a question related to Applied Mathematics
Question
5 answers
Uses in applied mathematics and computer sciences
Relevant answer
  • asked a question related to Applied Mathematics
Question
2 answers
Once I obtain the Riccati equations to solve the moment equations, I can't find the values of the constants. How can I obtain the values of these constants? Have these values already been reported for titanium dioxide?
Relevant answer
Answer
You may possibly mean the method of moments (deriving moments of mass) for solving a set of Smoluchowski coagulation equations, as described in, e.g., "A Kinetic View of Statistical Physics" (Chapter 5) by Krapivsky et al.?
You should definitely provide more details and/or the mentioned equations themselves; a lot of different expressions are called Smoluchowski equations.
  • asked a question related to Applied Mathematics
Question
3 answers
The Riemannian metric $g$ satisfies
$g(P_X Y, W) = -g(Y, P_X W)$,
where $P_X Y$ denotes the tangential part of $(\nabla_X J)Y = P_X Y + Q_X Y$.
Can that condition be imposed on a Norden manifold?
Relevant answer
Answer
Bhowmik Subrata Here $(M_{2n},\phi)$ is an almost complex manifold with Norden metric $g$; then we say that $(M_{2n},\phi,g)$ is an almost Norden manifold, and if $\phi$ is integrable, we say that $(M_{2n},\phi,g)$ is a Norden manifold. In addition, $H=\mu I$ is used mainly in statistical physics, where $H$ is a $g$-symmetric operator.
  • asked a question related to Applied Mathematics
Question
1 answer
Assuming we have a piece of C16 timber, 100x100x1000 mm, and we apply a UDL plus a point load at the mid-point (parallel to the fibre) as shown below, how much will the timber compress between the force and the concrete surface?
I have attached a sketch as well; please see below.
If you could show a detailed calculation, it would be much appreciated. Thank you!
Relevant answer
Answer
Kindly find my working. If you need more tutorials, you can follow me on ResearchGate; I can teach you for free.
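As a rough, hedged illustration only (the load value below is invented, since the actual UDL and point-load magnitudes are not restated here): for a short member loaded in compression parallel to the grain, the elastic shortening between the load and the support can be estimated as delta = F * L / (E * A). With an assumed total load F = 10 kN = 10,000 N, length L = 1000 mm, cross-section A = 100 x 100 = 10,000 mm^2, and the C16 mean modulus parallel to the grain E_0,mean = 8,000 N/mm^2, this gives delta = 10,000 * 1000 / (8,000 * 10,000) = 0.125 mm. Local bearing effects at the timber-concrete interface, and the strength verification itself, would need separate checks.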
  • asked a question related to Applied Mathematics
Question
4 answers
I am coding a multi-objective genetic algorithm. It can predict the Pareto front accurately for convex Pareto fronts of multi-objective functions, but for non-convex Pareto fronts it is not accurate, and the predicted Pareto points are clustered at the ends of the Pareto front obtained from the MATLAB genetic algorithm. Can anybody suggest some techniques to solve this problem? Thanks in advance.
The attached pdf file shows the results from different problems
Relevant answer
Answer
  • asked a question related to Applied Mathematics
Question
16 answers
This paper is a project to build a new function. I will propose a form of this function and invite people to help me develop the idea of this project; at the same time, we will try to apply this function in other sciences such as quantum mechanics, probability, and electronics…
Relevant answer
Answer
Are you sure you have defined your function correctly?