# Applied Mathematics - Science topic

Applied Mathematics is a common forum for all branches of applicable mathematics.

Questions related to Applied Mathematics

If breastfeeding correlates with reducing risks of autism in infants parsimoniously (as the most simple explanation with the most evidence) then that could also lower toxic masculinity in adult males. Especially because pegging is already thought to lower toxic masculinity.

Works Cited

Ohnemus, Alexander. "Why Women Should Breastfeed Their Male Romantic Partners." ResearchGate.net, www.researchgate.net/publication/372401611_Why_Women_Should_Breastfeed_Their_Male_Romantic_Partners. Accessed 8 Sep. 2023.

Ohnemus, Alexander. "Erotic Lactation Reduces Toxic Masculinity Theorem 'ELRTM Theorem' (Group Theory)(Dynamic Systems)(Differential Equations) A theorem." ResearchGate.net, www.researchgate.net/publication/373448419_Erotic_Lactation_Reduces_Toxic_Masculinity_Theorem_ELRTM_Theorem_Group_TheoryDynamic_SystemsDifferential_Equations_A_theorem. Accessed 8 Sep. 2023.

Ohnemus, Alexander. "Differential Equations of Erotic Lactation (Group Theory)(Dynamic Systems)(Differential Equations)." ResearchGate.net, www.researchgate.net/publication/373711079_Differential_Equations_of_Erotic_Lactation_Group_TheoryDynamic_SystemsDifferential_Equations. Accessed 8 Sep. 2023.

Hello, I'm about to join a team working on auditory speech perception using iEEG. It is planned that I will use Temporal Response Function (TRF) to determine correlations between stimulus characteristics (variations in the acoustic signal envelope, for example) and characteristics of recorded neuronal activity.

I would therefore like to fully understand the different stages of data processing carried out, as well as the reasoning and hypotheses behind them.

I took a look at the article presenting the method and I studied the matrix calculations, but several questions remain.

In particular, regarding this formula:

**w = (S^{T}S)^{-1}S^{T}r**

where S is a matrix of dimension (T × tau) presenting the characteristics of the stimulus over time (T) as a function of different temporal windows/shifts (tau):

S =
[ s(t_{min} - tau_{min}) ... s(t_{min} - tau_{max}) ]
[ ...                    ...                        ]
[ s(t_{max} - tau_{min}) ... s(t_{max} - tau_{max}) ]

and where r is a matrix of dimension (T × N) presenting the recorded activity of each channel over time.

- Why compute S^{T}S? What does the product of this operation represent?
- Why compute (S^{T}S)^{-1}? What does this operation bring?
- Why compute (S^{T}S)^{-1}S^{T}? What is represented in this product?
- And finally, w = (S^{T}S)^{-1}S^{T}r: what does w, of dimension (tau × N), really represent?
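If it helps to make the algebra concrete, here is a minimal NumPy sketch of the ordinary-least-squares estimate w = (S^{T}S)^{-1}S^{T}r on toy synthetic data. The sizes T, N, K and all variable names are illustrative assumptions, not values from the TRF paper or an actual iEEG pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: T time samples, N channels, K lags (tau = 0 .. K-1)
T, N, K = 1000, 4, 10

stim = rng.standard_normal(T)         # stimulus envelope s(t)
true_w = rng.standard_normal((K, N))  # a known TRF we will try to recover

# Build the lagged design matrix S (T x K): column k holds s(t - k)
S = np.zeros((T, K))
for k in range(K):
    S[k:, k] = stim[:T - k]

# Simulated recordings r (T x N): lagged stimulus mixed by true_w, plus noise
r = S @ true_w + 0.01 * rng.standard_normal((T, N))

# Least squares: solve (S^T S) w = S^T r instead of inverting explicitly
w = np.linalg.solve(S.T @ S, S.T @ r)  # w has shape (K, N), i.e. (tau x N)

print(np.allclose(w, true_w, atol=0.05))
```

Here S^{T}S is the (unnormalised) covariance between the lagged copies of the stimulus, and S^{T}r is the cross-covariance between each lag and each channel; solving the system, rather than using S^{T}r alone, removes the stimulus autocorrelation from the estimate.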

Hypothesis:

S^{T}S represents the "covariance" of each time window with the others: high covariance on the diagonal (product of equal columns), high covariance for adjacent columns (products of close time windows), and low covariance for distant columns whose time windows are far apart (and therefore share little mutual information).

Maybe (S^{T}S)^{-1}S^{T} (of dimension tau × T) makes it possible to obtain a representation of the stimulus according to time windows and time, but with any correlations that may exist between windows removed? However, what the stimulus looks like in this product remains very unclear to me.

Finally, w may represent the weights (or correlations) of each of the N channels for the different time windows of the signal.

My incomprehension mainly concerns the representation of the stimulus by (S^{T}S)^{-1}S^{T}, and I would like to better understand the reasoning behind these operations and the benefits they bring to the decoding of neural activity. I would like to thank anyone familiar with TRFs for any help they can give. My reasoning may be wrong or incomplete; any contribution would be appreciated.

Hello everyone,

I am Danillo Souza, and I am currently a post-doc researcher at the Basque Center for Applied Mathematics (BCAM), working in the Mathematical, Computational and Experimental Neuroscience group (MCEN). One of the challenges of my work is to derive optimal tools to extract topological and/or geometrical information from big data.

I am trying to submit a work to arXiv and unfortunately, an endorsement in Physics - Data Analysis and Statistics is required. I was wondering if some researcher could be my endorser in this area.

Beforehand, I appreciate your efforts in trying to help me.

With kind regards,

Danillo

Email: dbarros@bcamath.org

Danillo Barros De Souza requests your endorsement to submit an article
to the physics.data-an section of arXiv. To tell us that you would (or
would not) like to endorse this person, please visit the following URL:
https://arxiv.org/auth/endorse?x=UOKIX3
If that URL does not work for you, please visit
http://arxiv.org/auth/endorse.php
and enter the following six-digit alphanumeric string:
Endorsement Code: UOKIX3

*Mathematical Generalities:* 'Number' may be termed a general term, but real numbers, a sub-set of numbers, are sub-general. Clearly, number is a quality: "having one member, having two members, etc."; and here one, two, etc., when taken as nominatives, lose their significance, and are based primarily only on the adjectival use. Hence the justification for the adjectival (qualitative) primacy of numbers as universals. While defining one kind of 'general', another sort of 'general' may naturally be involved in the definition, insofar as they pertain to an existent process and not when otherwise.

Why are numbers and shapes so exact? ‘One’, ‘two’, ‘point’, ‘line’, etc. are all exact. The operations on these notions are also intended to be exact. But irrational numbers are not so exact in measurement. If notions like ‘one’, ‘two’, ‘point’, ‘line’, etc. are defined to be so exact, then it is not by virtue of the exactness of these substantive notions, but instead, due to their being defined as exact. Their adjectival natures: ‘being a unity’, ‘being two unities’, ‘being a non-extended shape’, etc., are not so exact.

A quality cannot be exact, but may be defined to be exact. It is in terms of the exactness attributed to these notions by definition that the adjectives ‘one’, ‘two’, ‘point’, ‘line’, etc. are exact. This is why the impossibility of fixing these (and other) substantive notions as exact miss our attention. If in fact these are inexact, then there is justification for the inexactness of irrational, transcendental, and other numbers too.

If numbers and shapes are in fact inexact, then not only irrational numbers, transcendental numbers, etc., but all exact numbers and the mathematical structures should remain inexact if they have not been defined as exact. And if behind the exact definitions of exact numbers there are no exact universals, i.e., quantitative qualities? If the formation of numbers is by reference to experience (i.e., not from the absolute vacuum of non-experience), their formation is with respect to the quantitatively qualitative and thus inexact ontological universals of oneness, two-ness, point, line, etc.

Thus, *mathematical structures*, in all their detail, are a species of qualities, namely, quantitative qualities, *defined to be exact and not naturally exact*. Quantitative qualities are ontological universals, with their own connotative and denotative versions. Natural numbers, therefore, are the origin of primitive mathematical experience, although complex numbers may be more general than all others in a purely mathematical manner of definition.

Bibliography

*(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology*, 647 pp., Berlin, 2018.

*(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology*, 386 pp., Frankfurt, 2015.

*(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology*, 361 pp., Frankfurt, 2014.

*(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology*, 92 pp., KDP Amazon, 2022, 2nd Edition.

*(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie*, 104 pp., KDP Amazon, 2022, 1st Edition.

Usually, to assess the financial performance of a co-operative society, we use ratio analysis. Beyond that, are there any other tools available for measuring the financial performance or growth of the organization? Please suggest some.

Hi dears, regarding an implementation of RSA in Python: I found that if p and q are large, the decryption phase takes a long time to execute.

For example, in the code below I select p=23099, q=23059, message=3, and it takes 26 minutes to decrypt the encrypted message!

So I wonder how we can select large prime numbers for RSA when it cannot execute in the desired time. I therefore think we cannot use RSA in real-time systems.

Do you agree with me?

The source code is:

```python
from math import gcd
import time

# Define a function to perform the RSA approach
def RSA(p: int, q: int, message: int):
    # calculate n
    n = p * q
    print(n)
    # calculate the totient, t
    t = (p - 1) * (q - 1)
    start = time.time()
    # select the public key e: the smallest i >= 2 coprime with t
    for i in range(2, t):
        if gcd(i, t) == 1:
            e = i
            break
    print("e =", e)
    # select the private key d: brute-force modular inverse of e mod t
    j = 0
    while True:
        if (j * e) % t == 1:
            d = j
            break
        j += 1
    print("d =", d)
    end = time.time()
    # print(end - start)

# RSA(p=7, q=17, message=3)
RSA(p=23099, q=23059, message=3)

# values found by the run above
d = 106518737
n = 532639841
e = 5

start = time.time()
ct = (3 ** e) % n   # encryption
print(ct)
pt = (ct ** d) % n  # decryption -- this is the step that takes ~26 minutes
end = time.time()
print(end - start)
print(pt)
```
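The slowness is not inherent to RSA: `(ct ** d) % n` first computes the full integer `ct ** d`, which has hundreds of millions of bits, before reducing it. Python's built-in three-argument `pow()` performs modular exponentiation (square-and-multiply, reducing modulo n at every step), and since Python 3.8 `pow(e, -1, t)` also computes the modular inverse directly, replacing the brute-force loop. A sketch with the same numbers:

```python
# Same parameters as the code above; pow(base, exp, mod) reduces modulo n
# at every squaring step, so decryption finishes in microseconds.
p, q, message = 23099, 23059, 3
n = p * q                 # 532639841
t = (p - 1) * (q - 1)
e = 5                     # same public exponent found by the loop above
d = pow(e, -1, t)         # modular inverse (Python 3.8+); d == 106518737

ct = pow(message, e, n)   # encryption
pt = pow(ct, d, n)        # decryption
print(pt)                 # 3 -- the original message
```

Real-world RSA implementations use exactly this kind of fast modular arithmetic (plus the extended Euclidean algorithm and CRT optimizations) with primes of 1024+ bits, so the timing problem here comes from the naive operators, not from the key size.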

What is the importance of applied mathematics?

Up to this point, I thought that when doing a dimensional analysis using the Buckingham Pi theorem, the exponents are generally quite "simple", for example 0.5, 1 or 2.

However, I have now found a paper where the dimensionless numbers formulated by a data-driven approach have "strange" exponents like 0.07 or 0.304. This seems a bit odd to me and brings me to these questions: are such exponents (still) physically meaningful? If so, in which cases does this type of exponent occur (and why)?
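For reference, the Pi groups of the Buckingham Pi theorem form a basis of the null space of the dimensional matrix, which is why hand-derived exponents come out as small rationals; data-driven methods typically fit exponents numerically rather than solving this exact linear system, so values like 0.07 can appear. A small sketch, using the classic pendulum example (the `nullspace` helper and the variable choice are my own illustration, not from the paper in question):

```python
from fractions import Fraction

def nullspace(rows):
    """Rational null-space basis of a small matrix via Gauss-Jordan."""
    m = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(m), len(m[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, nrows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]          # normalize pivot row
        for i in range(nrows):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(ncols) if c not in pivots]
    basis = []
    for f in free:                                   # one vector per free column
        v = [Fraction(0)] * ncols
        v[f] = Fraction(1)
        for row, pc in zip(m, pivots):
            v[pc] = -row[f]
        basis.append(v)
    return basis

# Pendulum: columns = (period t, gravity g, length l); rows = dimensions (L, T)
# t has dims L^0 T^1, g has L^1 T^-2, l has L^1 T^0
dim_matrix = [
    [0, 1, 1],   # exponents of L
    [1, -2, 0],  # exponents of T
]
pi_groups = nullspace(dim_matrix)
print(pi_groups)  # one basis vector (-2, -1, 1): Pi = l / (t^2 g), so t ~ sqrt(l/g)
```

The exponents are exact rationals because they solve an integer linear system; a regression over noisy data has no such constraint, which is one candidate explanation for the "strange" exponents.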

Thank you very much!

**SOURCE OF MAJOR FLAWS IN COSMOLOGICAL THEORIES:**

**MATHEMATICS-TO-PHYSICS APPLICATION DISCREPANCY**

**Raphael Neelamkavil, Ph.D., Dr. phil.**

The big bang theory has many limitations. These are,

(1) the uncertainty regarding the causes / triggers of the big bang,

(2) the need to trace the determination of certain physical constants to the big bang moments and not further backwards,

(3) the necessity to explain the notion of what scientists and philosophers call “time” in terms of the original bang of the universe,

(4) the compulsion to define the notion of “space” with respect to the inner and outer regions of the big bang universe,

(5) the possibility of and the uncertainty about there being other finite or infinite number of universes,

(6) the choice between an infinite number of oscillations between big bangs and big crunches in the big bang universe (in case of there being only our finite-content universe in existence), or in every big bang universe (if there are an infinite number of universes),

(7) the question whether energy will be lost from the universe during each phase of the oscillation, and in that case how an infinite number of oscillations can be the whole process of the finite-content universe,

(8) the difficulty involved in mathematizing these cases, etc.

These have given rise to many other cosmological and cosmogenetic theories – mythical, religious, philosophical, physical, and even purely mathematical. It must also be mentioned that the thermodynamic laws created primarily for earth-based physical systems have played a big role in determining the nature of these theories.

The big bang is already a cosmogenetic theory regarding a finite-content universe. The consideration of an INFINITE-CONTENT universe has always been taken as an alternative source of theories to the big bang model. Here, in the absence of conceptual clarity on the physically permissible meaning of infinite content and without attempting such clarity, cosmologists have been accessing the various mathematical tools available to explain the meaning of infinite content. They do not also seem to keep themselves aware that locally possible mathematical definitions of infinity cannot apply to physical localities at all.

The result has been the acceptance of temporal eternality for the infinite-content universe without fixing physically possible varieties of eternality. For example, pre-existence from the past eternity is already an eternality. Continuance from any arbitrary point of time with respect to any cluster of universes is also an eternality. But models of an infinite-content cosmos and even of a finite-content universe have been suggested in the past one century which never took care of the fact that mathematical infinity of content or action within a finite locality has nothing to do with physical feasibility. This, for example, is the source of the quantum-cosmological quick-fix that a quantum vacuum can go on creating new universes.

But due to their obsession with our access to observational details merely from our local big bang universe, and the obsession to keep the big bang universe as an infinite-content universe and as temporally eternal by using the mathematical tools found, a mathematically automatic recycling of the content of the universe was conceived. Here they naturally found it safe to accommodate the big universe, and clearly maintain a sort of eternality for the local big bang universe and its content, without recourse to external creation.

Quantum-cosmological and superstrings-cosmological gimmicks like considering each universe as a membrane and the "space" between them as vacuum have given rise to the consideration that it is these vacua that create other membranes, or at least supply new matter-energy to the membranes so that they continue to give rise to other universes. (1) The ubiquitous sensationalized science journalism with rating motivation and (2) the physicists' and cosmologists' need to stick to mathematical mystification in the absence of clarity concerning the physical feasibility of their infinities — these give fame to the originators of such universes as great and original scientists.

I suggest that the need to justify an eternal recycling of the big bang universe with no energy loss at the fringes of the finite-content big bang universe was fulfilled by cosmologists with the automatically working mathematical tools like the Lambda term and its equivalents. This in my opinion is the origin of the concepts of the almighty versions of dark energy, virtual quantum soup, quantum vacuum, ether, etc., for cosmological applications. Here too the physical feasibility of these concepts by comparing them with the maximal-medial-minimal possibilities of existence of dark energy, virtual quantum soup, quantum vacuum, ether, etc. within the finite-content and infinite-content cosmos, has not been considered. Their almighty versions were required because they had to justify an eternal pre-existence and an eternal future for the universe from a crass physicalist viewpoint, of which most scientists are prey even today. (See:

**Minimal Metaphysical Physicalism (MMP) vs. Panpsychisms and Monisms: Beyond Mind-Body Dualism**, https://www.researchgate.net/post/Minimal_Metaphysical_Physicalism_MMP_vs_Panpsychisms_and_Monisms_Beyond_Mind-Body_Dualism)

I believe that the inconsistencies present in the mathematically artificialized notions and in the various cosmogenetic theories in general are due to the blind acceptance of available mathematical tools to explain an infinite-content and eternally existent universe.

What should in fact have been done? We know that physics is not mathematics. In mathematics all sorts of predefined continuities and discretenesses may be created without recourse to solutions as to whether they are sufficiently applicable to be genuinely physics-justifying by reason of the general compulsions of physical existence. I CONTINUE TO ATTEMPT TO DISCOVER WHERE THE DISCREPANCIES LIE. History is on the side of sanity.

One clear example for the partial incompatibility between physics and mathematics is where the so-called black hole singularity is being mathematized by use of asymptotic approach. I admit that we have only this tool. But we do not have to blindly accept it without setting rationally limiting boundaries between the physics of the black hole and the mathematics applied here. It must be recognized that the definition of any fundamental notion of mathematics is absolute and exact only in the definition, and not in the physical counterparts. (See:

**Mathematics and Causality: A Systemic Reconciliation**, https://www.researchgate.net/post/Mathematics_and_Causality_A_Systemic_Reconciliation)

I shall continue to add material here on the asymptotic approach in cosmology and other similar theoretical and application-level concepts.


The Nobel Prize Summit 2023: Truth, Trust and Hope started today, 24 May 2023. The summit encourages participation. Thus, I have sent an open letter and eagerly anticipate a response. Please comment on whether the points I have made are adequate.

Open Letter to The Nobel Committee for Physics

Is There a Nobel Prize for Metaphysics?

Dear Nobel Committee for Physics,

Among the differences between an established religion, such as Roman Catholicism, and science, is the presence of a hierarchical organization in the former for defending its creed and conducting its affairs. The head of the religious institution ultimately bears responsibility for the veracity of its claims and strategic policies. This accountability was evident in historical figures like John Wycliffe, Jan Hus, and Martin Luther, who held the papacy responsible for wrong doctrines, such as the indulgence scandal during the late Middle Ages. In that context, challenging such doctrines, albeit with the anticipated risk of being burned at the stake, involved posting opposing theses on the doors of churches.

In contrast, the scientific endeavour lacks a tangible temple, and no definitive organization exists to be held accountable for possible misconducts. Science is a collective effort by scientists and scientific institutes to discover new facts within and beyond our current understanding. While scientists may occasionally flirt with science fiction, they ultimately make significant leaps in understanding the universe. However, problems arise when a branch of science is held and defended as a sacred dogma, disregarding principles such as falsifiability. This mentality can lead to a rule of pseudo-scientific oppression, similar to historical instances like the Galileo or Lysenko affairs. Within this realm, there is little chance of liberating science from science fiction. Any criticism is met with ridicule, damnation, and exclusion, reminiscent of the attitudes displayed by arrogant religious establishments during the medieval period. Unfortunately, it seems that the scientific establishment has not learned from these lessons and has failed to provide a process for dealing with these unfortunate and embarrassing scenarios. On the contrary, it is preoccupied with praising and celebrating its achievements while stubbornly closing its ears to sincere critical voices.

Allow me to illustrate my concerns through the lens of relativistic physics, a subject that has captured my interest. Initially, I was filled with excitement, recognizing the great challenges and intellectual richness that lay before me. However, as I delved deeper, I encountered several perplexing issues with no satisfactory answers provided by physicists. While the majority accepts relativity as it stands, what if one does not accept the various inherent paradoxes and seeks a deeper insight?

Gradually, I discovered that certain scientific steps are not taken correctly in this branch of science. For example, we place our trust in scientists to conduct proper analyses of experiments. Yet, I stumbled upon evidence suggesting that this trust may have been misplaced in the case of a renowned experiment that played a pivotal role in heralding relativistic physics. If this claim is indeed valid, it represents a grave concern and a significant scandal for the scientific community. To clarify my points, I wrote reports and raised my concerns. Fortunately, there are still venues outside established institutions where critical perspectives are not yet suppressed. However, the reactions I received ranged from silence to condescending remarks infused with irritation. I was met with statements like "everything has been proven many times over, what are you talking about?" or "go and find your mistake yourself." Instead of responding to my pointed questions and concerns, a professor even suggested that I should broaden my knowledge by studying various other subjects.

While we may excuse the inability of poor, uneducated peasants in the Middle Ages to scrutinize the veracity of the Church's doctrine against the Latin Bible, there is no excuse for professors of physics and mathematics being unwilling to re-evaluate the analysis of an experiment and either refute the criticism or acknowledge an error. It raises suspicions about the reliability of science itself if, for over 125 years, the famous Michelson-Morley experiment has not been subjected to rigorous and accurate analysis.

Furthermore, I am deeply concerned that the problem has been exacerbated by certain physicists rediscovering the power and benefits of metaphysics. They have proudly replaced real experiments with thought experiments conducted with thought-equipment. Consequently, theoretical physicists find themselves compelled to shut the door on genuine scientific criticism of their enigmatic activities. Simply put, the acceptance of experiment-free science has been the root cause of all these wrongdoings.

To demonstrate the consequences of this damaging trend, I will briefly mention two more complications among many others:

1. Scientists commonly represent time with the letter *t*, assuming it has dimension **T**, and confidently perform mathematical calculations based on this assumption. However, when it comes to relativistic physics, time is represented as *ct* with dimension **L**, and any brave individual questioning this inconsistency is shunned from scientific circles and excluded from canonical publications.

2. Even after approximately 120 years, eminent physicist and Nobel Prize laureate Richard Feynman, along with various professors in highly regarded physics departments, have failed to mathematically prove what Einstein claimed in his 1905 paper. They merely copy from one another, seemingly engaged in a damage-limitation exercise, producing so-called approximate results. I invite you to refer to the linked document for a detailed explanation:

I am now submitting this letter to the Nobel Committee for Physics, confident that the committee, having awarded Nobel Prizes related to relativistic physics, possesses convincing scientific answers to the specific dilemmas mentioned herein.

Yours sincerely,

Ziaedin Shafiei

What are some of the materials that make your work easier?

**Applying mathematical knowledge in research models:** This question has been on my mind for a long time. Can advanced mathematics and applied mathematics solve all the problems in modeling research? Especially for the formula derivations in the theoretical-model part, can the analytical conclusions be obtained through multiple derivations or other methods? I have also read some mathematics-related publications myself, and one has to admire the mystery of mathematics.

Dear Researcher,

Global Climate Models (GCMs) of Coupled Model Intercomparison Project Phase 6 (CMIP6) are numerical models that represent the various physical systems of the Earth's climate with respect to the surface of the land, oceans, atmosphere, and cryosphere, and these are employed to provide likely changes in future climate projections. I wanted to know what are the initial and lateral boundary conditions used while developing the models.

Sincerely,

Aman Srivastava

**MATHEMATICS VS. CAUSALITY:**

**A SYSTEMIC RECONCILIATION**

Raphael Neelamkavil, Ph.D., Dr. phil.

1. Preface on the Use of Complex Language

2. Prelude on the Pre-Scientific Principle of Causality

3. Mathematical “Continuity and Discreteness” Vs. Causal Continuity

4. Mathematics and Logic within Causal Metaphysics

5. Mathematics, Causality, and Contemporary Philosophical Schools

**1. Preface on the Use of Complex Language**

First of all, a cautious justification is in place about the complexity one may experience in the formulations below: When I publish anything, the readers have the right to ask me constantly for further justifications of my arguments and claims. And if I have the right to anticipate some such possible questions and arguments, I will naturally attempt to be as detailed and systemic as possible in my formulation of each sentence here and now. A sentence is merely a part of the formulated text. After reading each sentence, you may pose me questions, which certainly cannot all be answered well within the sentences or soon after the sentences in question, because justification is a long process.

Hence, my sentences may tend to be systemically complex. A serious reader will not find these arguments getting too complex, because such a person has further unanswered questions. We do not purposely make anything complex. Our characterizations of meanings in mathematics, physics, philosophy, and logic can be complex and prohibitive for some. But would we all accuse these disciplines or the readers if the readers find them all complex and difficult? In that case, I could be excused too. I do not intentionally create a complex state of affairs in these few pages; but there are complexities here too. I express my helplessness in case any one finds these statements complex.

The languages of both science and philosophy tend to be complex and exact. This, nevertheless, should be tolerated provided the purpose is understood and practiced by both the authors and the readers. Ordinary language has its worth and power. If I give a lecture, I do not always use such formal a language as when I write, because I am there to re-clarify.

But the Wittgensteinian obsession with “ordinary” language does not make him use an ordinary language in his own works. Nor does the Fregean phobia about it save him from falling into the same ordinary-language naïveté of choosing concrete and denotative equivalence between terms and their reference-objects without a complex ontology behind them. I attempt to explain the complex ontology behind the notions that I use.

**2. Prelude on the Pre-Scientific Principle of Causality**

Which are the ultimate conditions implied by the notion of existence (To Be), without which conditions implied nothing exists, and without which sort of existents nothing can be discoursed? Anything exists non-vacuously. This implies that existents are inevitably in Extension (having parts, each of which is further extended and not vacuous). The parts will naturally have some contact with a finite number of others. That is, everything is in Change (impacting some other extended existents).

Anything without these two characteristics cannot exist. If not in Change, how can something exist in the state of Extension alone? And if not in Extension, how can something exist in the state of Change alone? Hence, Extension-Change are two fundamental ontological categories of all existence and the only two exhaustive implications of To Be. Any unit of causation with one causal aspect and one effect aspect is termed a process.

These conditions are ultimate in the sense that they are implied by To Be, not as the secondary conditions for anything to fulfil after its existence. Thus, “To Be” is not merely of one specific existent, but of all existents. Hence, Extension-Change are the implications of the To Be of Reality-in-total. Physical entities obey these implications. Hence, they must be the foundations of physics and all other sciences. Theoretical foundations, procedures, and conclusions based on these implications in the sciences and philosophy, I hold, are wise enough.

Extension-Change-wise existence is what we understand as Causality: extended existents and their parts exert impacts on other extended existents. Every part of existents does it. That is, if anything exists, it is in Causation. This is the principle of Universal Causality. In short, Causality is not a matter to be decided in science – whether there is Causality or not in any process under experiment and in all existents is a matter for philosophy to decide, because philosophy tends to study all existents. Science can ask only whether there occurs any specific sort of causation or not, because each science has its own restricted viewpoint of questions and experiments and in some cases also restrictions in the object set.

Thus, statistically mathematical causality is not a decision as to whether there is causation or not in the object set. It is not a different sort of causation, but a measure of the extent of determination of special causes that we have made at a given time.

*Even the allegedly "non-causal" quantum-mechanical constituent processes* are mathematically and statistically circumscribed measuremental concepts from the results of Extended-Changing existents, and, *ipso facto*, the realities behind these statistical measurements are in Extension-Change if they are physically existent.

Space is the measured shape of Extension; time is that of Change. Therefore, space and time are epistemic categories. How then can statistical causality based only on measuremental data be causality at all, if the causes are all in Extension-Change and if Universal Causality is already the pre-scientific Law under which all other laws appear? No part of an existent is non-extended and non-changing. One unit of cause and effect may be called a process. Every existent and its parts are processual.

And how can a so-called random cause be a cause, except when the randomness is the extent of our measuremental reach of the cause, which already is causal because of its Extension-Change-wise existence? Extension and Change are the very exhaustive meanings of To Be, and hence I call them the highest Categories of metaphysics, physical ontology, physics, and all science. Not merely philosophy but also science must obey these two Categories.

In short, everything existent is causal. Hence, Universal Causality is the highest pre-scientific Law, second conceptually only to Extension-Change and third to Existence / To Be. Natural laws are merely derivative. Since Extension-Change-wise existence is the same as Universal Causality, scientific laws are derived from Universal Causality, and not *vice versa*.

*The relevance of metaphysics / physical ontology for the sciences is clear from the above.* **Today the sciences attempt to derive causality from the various scientific laws!**

Existents have some Activity and Stability. This is a fully physical fact. These two Categories may be shown to be subservient to Extension-Change and Causality. Pure vacuum (non-existence) is absence of Activity and Stability. Thus, entities, irreducibly, are active-stable processes in Extension-Change. Physical entities / processes possess finite Activity and Stability. Activity and Stability together belong to Extension; and Activity and Stability together belong to Change too.

That is, Stability is not merely about space or Extension, and Activity is not merely about time or Change. There is a unique reason for this: there is neither absolute stability nor absolute activity in the physical world. Hence, Activity is finite, which is by Extended-Changing processes; and Stability is finite, which is also by Extended-Changing processes. But the tradition still seems to parallelise Stability and Activity with space and time respectively. We consider Activity and Stability as sub-Categories, because they are based on Extension-Change, which together add up to Universal Causality; and each unit of cause and effect is a process.

These are not Categories that belong merely to imaginary counterfactual situations. The Categories of Extension-Change and their sub-formulations are all about existents. There can be counterfactuals that signify cases that appertain to existent processes. But separating these cases from the useless logical talk found in linguistic-analytically tending logic, philosophy, and philosophy of science is next to impossible.

Today physics and the various sciences at times exhibit precisely this failure to separate counterfactual cases from actual ones, in that they indulge in particularistically defined terms and procedures, blindly thinking that counterfactuals can directly represent the physical processes under inquiry. Concerning mathematical applications too, the majority attitude among scientists is that these are somehow free of the physical world.

Hence, without a very general physical ontology of Categories that are applicable to all existent processes, and without deriving the mathematical foundations from these Categories, the sciences and mathematics are gravely handicapped. Mathematics is no exception in its applicability to the physical sciences. Moreover, pure mathematics too needs the hand of Extension and Change, since these are part of the ontological universals, form their reflections in mind and language, etc., thus giving rise to mathematics.

The exactness within complexity that could be expected of any discourse based on the Categorial implications of To Be can only be such that (1) the denotative terms ‘Extension’ and ‘Change’ may or may not remain the same, (2) but the two dimensions of Extension and Change – that are their aspects in ontological universals – would be safeguarded both physical-ontologically and scientifically.

That is, definitional flexibility and openness towards re-deepening, re-generalizing, re-sharpening, etc. may even change the very denotative terms, but the essential Categorial features within the definitions (1) will differ only meagrely, and (2) will normally be completely the same.

**3. Mathematical “Continuity and Discreteness” Vs. Causal “Continuity”**

The best examples for the above are mathematical continuity and discreteness that are being attributed blindly to physical processes due to the physical absolutization of mathematical requirements. But physical processes are continuous and discrete only in their Causality. This is nothing but Extension-Change-wise discrete causal continuity. At any time, causation is present in anything, hence there is causal continuity. This is finite causation and hence effects finite continuity and finite discreteness. But this is different from absolute mathematical continuity and discreteness.

I believe that it is common knowledge that mathematics and its applications cannot prove Causality directly. What are the bases of the problem of incompatibility of physical causality within mathematics and its applications in the sciences and in philosophy? The main but general explanation could be that mathematical explanations are not directly about the world but are applicable to the world to a great extent.

It is good to note that *mathematics is a separate science as if its “objects” were existent, but in fact as non-existent and different from those of any other science – thus making mathematics an abstract science in its theoretical aspects of rational effectiveness*. Hence, mathematical explanations can at the most only show the ways of movement of the processes and not demonstrate whether the ways of the cosmos are by causation.

*Moreover, the basic notions of mathematics* (number, number systems, points, shapes, operations, structures, etc.) are all universals / universal qualities / ontological universals that belong to groups of existent things that are irreducibly Extension-Change-type processes. (See below.) Thus, mathematical notions have their origin in ontological universals and their reflections in mind (connotative universals) and in language (denotative universals). The basic nature of these universals is ‘quantitatively qualitative’. We shall not discuss this aspect here at length.

No science and philosophy can start without admitting that the cosmos exists. If it exists, it is not nothing, not non-entity, not vacuum.

*Non-vacuous existence means that the existents are non-vacuously extended*. This means they have parts. Every part has parts too, *ad libitum*, because each part is extended. None of the parts is an infinitesimal. They can be near-infinitesimal. This character of existents is Extension, a Category directly implied by To Be. Similarly, any extended being’s parts are active, moving.

*This implies that every part has impact on some others, not on infinite others*. This character of existents is Change. No other implication of To Be is so primary as these. Hence, they are exhaustive of the concept of To Be, which belongs to Reality-in-total. These arguments show us the way to conceive the meaning of causal continuity.

Existence in Extension-Change is what we call Causality. If anything is existent, it is causal – hence Universal Causality is the trans-science physical-ontological Law of all existents. By the very concept of finite Extension-Change-wise existence, it becomes clear that no finite space-time is absolutely dense with existents. In fact, space-time is no ontological affair, but only epistemological, and existent processes need measurementally accessible finite space for Change. Hence, *existents cannot be mathematically continuous*. Since there is Change and transfer of impact, no existent can be absolutely discrete in its parts or in connection with others.

Can logic show the necessity of all existents to be causal? We have already discussed how, ontologically, the very concept of To Be implies Extension-Change and thus also Universal Causality. Logic can only be instrumental in this.

What about the ability or not of logic to conclude to Universal Causality? In my arguments above and elsewhere showing Extension-Change as the very exhaustive meaning of To Be, I have used mostly only the first principles of ordinary logic, namely, Identity, Contradiction, and Excluded Middle, and then argued that *Extension-Change-wise existence is nothing but Universal Causality if everything existing is non-vacuous in existence*.

For example, does everything exist or not? If yes, let us call it non-vacuous existence. Non-vacuous means extended, because if not extended the existent is vacuous. Hence, Extension is the first major implication of To Be. If extended, everything has parts. *Having parts implies distances, however minute, between all the near-infinitesimal parts of any existent process*. In this sense, the basic logical laws do help conclude the causal nature of existents.

A point of addition now has been Change. It is, so to say, from experience. But this need not exactly mean an addition.

*If existents have parts (i.e., if they are in Extension), the parts’ mutual difference already implies the possibility of contact between parts*. Thus, I am empowered to move to the meaning of Change basically as motion or impact. Naturally, everything in Extension must effect impacts. Everything has further parts. Hence, *by implication from Change and the need for there to be contacts between every near-infinitesimal set of parts of existents, everything causes changes by impacts*. In the physical world this is by finite impact formation. Hence, *nothing can exist as an infinitesimal*. Leibniz’s monads have no significance in the real world.

Thus, we conclude that Extension-Change-wise existence is Universal Causality, and every actor in causation is a real existent, not a non-extended existent, as energy particles seem to have been considered and are even today thought to be, due to their unit-shape yielded merely for the sake of mathematical applications. It is thus natural to claim that Causality is a pre-scientific Law of Existence, where *existents are all inwardly and outwardly in Change, i.e., in impact formation – otherwise, the concept of Change would lose meaning*.

In such foundational questions as To Be and its implications, the first principles of logic must be used, because these are the foundational notions of all science and no other derivative logical procedure comes in as handy. In short, logic with its fundamental principles can help derive Universal Causality. Thus, *Causality (Extension-Change) is more primary to experience than the primitive notions of mathematics*. But the applicability of these three logical Laws is not guaranteed so well in arguments using derivative, less categorial, sorts of concepts.

I suggest that the crux of the problem of mathematics and causality is the dichotomy between mathematical continuity and mathematical discreteness on the one hand, and the incompatibility of applying either of them directly to the data collected / collectible / interpretable from some layers of the phenomena, which are from some layers of the object-process in question, on the other. Not recognizing the presence of such *stratificational debilitation of epistemic directness* is an epistemological foolishness. Science and philosophy, in my opinion, are victims of this. Thus, for example, *the Bayesian statistical theory recognizes only a statistical membrane between reality and data*!

Here I point at the avoidance of the problem of stratificational debilitation of epistemic directness, through centuries of epistemological foolishness, by reason of the forgetfulness of the ontological and epistemological relevance of expressions like ‘from some layers of data from some layers of phenomena from some layers of the reality’.

This is the point at which it is time to recognize the gross violence against natural reason behind phrases and statements involving ‘data from observation’, ‘data from phenomena’, ‘data from nature / reality’, etc., *without epistemological and ontological sharpness in both science and philosophy to accept these basic facts of nature*. As we all know, this state of affairs has become irredeemable in the sciences today.

The whole of what we used to call space is not filled with matter-energy. Hence, if causal continuity between partially discrete “processual” objects is the case, then the data collected / collectible cannot be the very processual objects and hence cannot provide all knowledge about the processual objects. But mathematics and all other research methodologies are based on human experience and thought based on experience.

This theoretical attitude facilitates and accepts in a highly generalized manner the following three points:

(1) Mathematical continuity (in any theory and in terms of any amount of axiomatization of logical, mathematical, physical, biological, social, and linguistic theories) is totally non-realizable in nature as a whole and in its parts: because (a) the necessity of mathematical approval of any sort of causality in the sciences and by means of its systemic physical ontology falls short miserably in actuality, and (b) logical continuity of any kind does not automatically make linguistically or mathematically symbolized representational activity adequate enough to represent the processual nature of entities as derived from data.

(2) The concept of absolute discreteness in nature, which, as of today, is ultimately of the quantum-mechanical type based on Planck’s constant, continues to be a mathematical and partial misfit in the physical cosmos and its parts, (a) if there exist other universes that may causally determine the constant differently at their specific expansion and/or contraction phases, and (b) if there are an infinite number of such finite-content universes.

The case may not of course be so problematic in non-quantifiable “possible worlds” due to their absolute causal disconnection or their predominant tendency to causal disconnection, but this is a mere common-sense, merely mathematical, compartmentalization: because (a) the aspect of the causally processual connection between any two quanta is logically and mathematically alienated in the physical theory of Planck’s constant, and (b) the possible worlds have only a non-causal existence, and hence, anything may be determined in this world as a constant, and an infinite number of possible universes may be posited without any causal objection!

It is usually not kept in mind here by physicists that the epistemology of unit-based thinking – of course, based on quantum physics or not – is implied by the almost unconscious tendency of symbolic activity of body-minds. This need not have anything to do with a physics that produces laws for all existent universes.

(3) The only viable and thus the most reasonably generalizable manner of being of the physical cosmos and of biological entities is that of existence in an Extended (having parts) and Changing manner (extended entities and their parts impacting a finite number of other existents and their parts in a finite quantity and in a finite duration). Existence in the Extension-Change-wise manner is nothing but causal activity.

Thus, insofar as everything is existent, every existent is causal. There is no time (i.e., no minute measuremental iota of Change) wherein such a causal manner of existing ceases in any existent. This is *causal continuity between partially discrete processual objects*. It is not mathematizable in a discrete manner. The concept of geometrical and number-theoretic continuity may apply. But if there are other universes, the Planck constant of proportionality that determines the proportion of the content of discreteness may differ in them. This is not previsioned in terrestrially planned physics.

The attitude of treating everything as causal may also be characterized by the self-aware symbolic activity by symbolic activity itself, in which certain instances of causation are avoided or enhanced, all decrementally or incrementally as the case may be, but not absolutely.

*This, at the most, is what may be called freedom*. It is fully causal – it need not be sensed as causal within a specific set of parameters, but it is causal within the context of Reality-in-total. But the whole three millennia of psychological and religious (contemplative) tradition has based freedom merely on awareness intensity, and not on love – a despicable state of affairs, on which a book-length treatise is necessary.

Physics and cosmology even today tend to make the cosmos either (1) mathematically presupposedly continuous, or (2) discrete with defectively ideal mathematical status for causal continuity and with perfectly geometrical ideal status for specific beings, or (3) statistically indeterministic, thus being compelled to consider everything as partially causal, or even non-causal in the interpretation of statistics’ orientation to epistemically logical decisions and determinations based on data. If this has not been the case, can anyone suggest proofs for an alleged existence of a different sort of physics and cosmology until today?

The statistician does not even realize (1) that Universal Causality is already granted by the very existence of anything, and (2) that what they call non-causality is merely the not being the cause, or not having been discovered as the cause, of a specific set of selected data or processes. Such non-causality is not with respect to all existents. Quantum physics, statistical physics, and cosmology are replete with examples for this empirical and technocratic treachery of the notion of science.

A topology and mereologically clean physical ontology of *causal continuity between partially discrete processual objects*, fully free of absolutely continuity-oriented or absolutely discreteness-oriented category theory, geometry, topology, functional analysis, set theory, and logic, is yet to be born. Hence, the fundamentality of Universal Causality in its deep roots in the very concept of To Be (namely, in the physical-ontological Categories of Extension and Change) of all physically and non-vacuously existent processes is alien to physics and cosmology until today.

Non-integer rational numbers are not the direct notion of anything existent. Even a part of a unit process has the attribute ‘unity’ in all the senses in which any other object possesses it. For this reason, natural numbers have Categorial priority over rational numbers, because natural numbers are more directly related to ontological universals than other sorts of numbers are. Complex numbers, for example, are the most general number system with respect to their mathematically defined sub-systems, but this does not mean that they are more primary in the metaphysics of ontological universals, since the primary mode of numerically quantitative qualities / universals is that of natural numbers.

**4. Mathematics and Logic within Causal Metaphysics**

Hence, it is important to define the limits of applicability of mathematics to the physics that use physical data (under the species of various layers of their origin). This is the only way to approximate beyond the data and the methodologically derived conclusions beyond the data. As to how and on what levels this is to be done is a matter to be discussed separately.

The same may be said also about logic and language. Logic is the broader rational picture of mathematics. Language is the symbolic manner of application of both logic and its quantitatively qualitative version, namely, mathematics, with respect to specific fields of inquiry. Here I do not explicitly discuss ordinary conversation, literature, etc.

We may do well to instantiate logic as the formulated picture of reason. But human reason is limited to the procedures of reasoning by brains. What exactly is the reason that existent physical processes constantly undergo? How to get at conclusions based on this reason of nature – by using our brain’s reasoning – and thus transcend at least to some extent the limitations set by data and methods in our brain’s reasoning?

If we may call the universal reason of Reality-in-total by a name, it is nothing but Universal Causality. It is possible to demonstrate that Universal Causality is a trans-physical, trans-scientific Law of Existence. This argument needs clarity. How to demonstrate this as the case? This has been done in an elementary fashion in the above, but more of it is not to be part of this discussion.

Insistence on mathematical continuity in nature is a mere idealization. It expects nature to obey our merely epistemic sort of idealizations, that is, in ideal cases based mostly on the brain-interpreted concepts from some layers of data, which are from some layers of phenomena, which are from some layers of the reality under observation. Some of the best examples in science are the suppositions that virtual worlds are existent worlds, dark energy is a kind of propagative energy, zero-value cosmic vacuum can create an infinite number of universes, etc.

The processes outside are vaguely presented primarily by the processes themselves, but highly indirectly, in a natural manner. This is represented by the epistemic / cognitive activity within the brain in a natural manner (by the connotative universals in the mind as reflections of the ontological universals in groups of object processes), and then idealized via concepts expressed in words, connectives, and sentences (not merely linguistic but also mathematical, computerized, etc.) by the symbolizing human tendency (thus creating denotative universals in words) to capture the whole of the object by use of a part of the human body-mind.

The symbolizing activity is based on data, but the data are not all we have as end results. We can mentally recreate the idealized results behind the multitude of ontological, connotative, and denotative universals as existents.

As the procedural aftermath of this, virtual worlds begin to “exist”, dark energy begins to “propagate”, and zero-value cosmic vacuum “creates” universes. Even kinetic and potential energies are treated as propagative energies existent outside of material bodies and supposed to be totally different from material bodies. These are mere theoretically interim arrangements in the absence of direct certainty for the existence or not of unobservables.

Insistence on mathematical continuity in nature as a natural conclusion by the application of mathematics to nature is what happens in all physical and cosmological (and of course other) sciences insofar as they use mathematical idealizations to represent existent objects and processes and extrapolate further beyond them. Mathematical idealizations are another version of linguistic symbolization and idealization.

Logic and its direct quantitatively qualitative expression as found in mathematics are, of course, powerful tools. But, as being part of the denotative function of symbolic language, they are tendentially idealizational. By use of the same symbolizing tendency, it is perhaps possible, to a certain extent, to *de-idealize* the side-effects of the same symbols in the language, logic, and mathematics being used in order to symbolically idealize representations.

Merely mathematically following physical nature in whatever it is in its part-processes is a debilitating procedure in science and philosophy (and even in the arts and humanities), if this procedure is not de-idealized effectively. *If this is possible at least to a small and humble extent, why not do it? Why not de-idealize the side-effects of mathematics too?* **Our language, logic, and mathematics too do their functions well, although they too are equally unable to capture the whole of Reality in whatever it is, wholly or in parts, far beyond the data and their interpretations!**

This theoretical attitude of partially de-idealizing the effects of human symbolizing activity by use of the same symbolic activity accepts the existence of processual entities as whatever they are. *This is what I call ontological commitment* – of course, different from and more generalized than those of Quine and others. Perhaps such a generalization can give a slightly better concept of reality than is possible by the normally non-self-aware symbolic activity in language, logic, and mathematics.

**5. Mathematics, Causality, and Contemporary Philosophical Schools**

With respect to what we have been discussing, *linguistic philosophy* and even its more recent causalist child, namely, *dispositionalist causal ontology*, have even today the following characteristics:

(1) They attribute an even now overly discrete nature to “entities” in the extent of their causal separateness from others while considering them as entities. The ontological notion of an object or even of an event in its unity in analytic philosophy, and in particular in modal ontology, forecloses consideration of the process nature of each such unity within, on par with interactions of such units with one another. (David Lewis, *Parts of Classes*, p. vii) This is done without ever attempting to touch the deeply Platonic (better, geometrically atomistic) shades of common-sense Aristotelianism, Thomism, Newtonianism, Modernism, Quantum Physics, etc., and without reconciling the diametrically opposite geometrical tendency to make every physical representation continuous.

(2) They are logically comatose about the impossibility of the exactly referential definitional approach to the processual demands of existent physical objects without first analyzing and resolving the metaphysical implications of existent objects, namely, being irreducibly in finite Extension and Change and thus in continuous Universal Causality in finite extents at any given moment.

(3) They are unable to get at the *causally fully continuous* (neither mathematically continuous nor geometrically discontinuous) nature of the physical-ontologically “partially discrete” processual objects in the physical world, also because they have misunderstood the discreteness of processual objects (including quanta) within stipulated periods as typically universalizable, due to their pragmatic approach in physics and the involvement of the notion of continuity of time.

*Phenomenology* has done a lot to show the conceptual structures of ordinary reasoning, physical reasoning, mathematical and logical thinking, and reasoning in the human sciences. But due to its lack of commitment to building a physical ontology of the cosmos and due to its purpose as a research methodology, phenomenology has failed to an extent to show the nature of causal continuity (instead of mathematical continuity) in physically existent, processually discrete objects in nature.

*Hermeneutics* has just followed the human-scientific interpretative aspect of Husserlian phenomenology and projected it as a method. Hence, it was no contender to accomplish the said feat.

*Postmodern philosophies* have qualified all science and philosophy as being perniciously cursed to be “modernistic” – thus monsterizing all compartmentalization, rules, laws, axiomatization, discovery of regularities in nature, logical rigidity, and even metaphysical grounding as insurmountable curses of the human project of knowing and as synonyms for all that is unapproachable in science and thought. The linguistic-analytic philosophy of the later Wittgenstein too was no exception to this nature of postmodern philosophies – a matter that many Wittgenstein followers do not notice. Take a look at the first few pages of Wittgenstein’s *Philosophical Investigations*, and the matter will be more than clear.

*The philosophies of the sciences* seem today to follow the beaten paths of extreme pragmatism in linguistic-analytic philosophy, physics, mathematics, and logic, which lack a *foundational concept of causally concrete and processual physical existence*.

Hence, it is useful for the growth of science, philosophy, and the humanities alike to research into the *causal continuity between partially discrete “processual” objects* and forget about absolute mathematical continuity or discontinuity in nature. Mathematics and the physical universe are to be reconciled in order to mutually delimit them in terms of the causal continuity between partially discrete processual objects.

Bibliography

*(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology*, 647 pp., Berlin, 2018.

*(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology*, 386 pp., Frankfurt, 2015.

*(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology*, 361 pp., Frankfurt, 2014.

*(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology*, 92 pp., KDP Amazon, 2022, 2nd Edition.

*(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie*, 104 pp., KDP Amazon, 2022, 1st Edition.

It seems that the most simple way to get both an infinite number of tessellated solids and lattices with periodic minimal surfaces in R3 consists in using a Pearce "saddle tetrahedron". The resultant convex solids:

1) Have configurations which tessellate the Euclidean space. These tessellations are not Voronoi and have curved boundaries in a bcc lattice.

2) Define minimal surfaces for any 3-dimensional quadrilateral on the external closed surface of the solid.

Is there any topological description of such solids in the literature? How can we get a Weierstrass representation of the external surface of each polyhedron? How can we get the conjugate continuous surfaces? Can we consider this to be a good design method for structural lattices?

Any comment will be welcome.

How is the integral in equation 1 evaluated to give equation 3, as depicted in the attached picture, using equation 2?

Why can't the separation of variables method (https://en.wikipedia.org/wiki/Separation_of_variables) be applied to Burgers' equation (https://en.wikipedia.org/wiki/Burgers%27_equation)?
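A quick way to see the obstruction is to substitute the separable ansatz u(x, t) = X(x)·T(t) into Burgers' equation u_t + u·u_x = ν·u_xx: the nonlinear term u·u_x produces a T² factor, so the x- and t-dependence cannot be isolated on opposite sides of the equation. A symbolic sketch (using sympy; the symbol names are mine):

```python
import sympy as sp

x, t, nu = sp.symbols('x t nu')
X = sp.Function('X')(x)
T = sp.Function('T')(t)

u = X * T  # separable ansatz u(x, t) = X(x) * T(t)
residual = sp.diff(u, t) + u * sp.diff(u, x) - nu * sp.diff(u, x, 2)

# residual = X*T' + X*X'*T**2 - nu*X''*T : the nonlinear middle term
# carries T**2, so dividing by X*T cannot split x-terms from t-terms.
print(sp.expand(residual))
```

For the record, the Cole–Hopf substitution u = −2ν·φ_x/φ turns the viscous Burgers equation into the heat equation, which *is* separable; that is the standard route to exact solutions.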

A collection of solved examples in Pyomo environment (Python package)

The solved problems are mainly related to supply chain management and power systems.

Feel free to follow / branch / contribute

I have data in a 30 × 1 matrix. Is it possible to find the best optimized value using the gradient descent algorithm? If yes, please share the procedure or a link to the detailed background theory behind it. It will be helpful for me to proceed further in my research.
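As a minimal sketch (with made-up data standing in for the 30 × 1 matrix): minimizing the mean squared deviation of a single parameter by plain gradient descent drives the parameter to the sample mean, which is the optimum for that particular loss.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(5.0, 2.0, size=(30, 1))  # stand-in for the 30 x 1 matrix

# minimize L(theta) = mean((theta - data)^2); dL/dtheta = 2*mean(theta - data)
theta, lr = 0.0, 0.1
for _ in range(500):
    grad = 2.0 * np.mean(theta - data)
    theta -= lr * grad

# theta has converged to the minimizer of L, i.e. the sample mean
```

For another notion of "best value" you would swap in a different loss and its gradient; the update loop itself stays the same.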

Thermal stresses in applied mathematics

Non-similarity transformation

The equation dx/dt = F(x) can be linearized using Carleman techniques and solved with the linear state equation method. But for some conditions I found a proper decomposition of F(x) relating it to the known logistic solution of a fundamental canonical problem.
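As an illustration of the Carleman idea on the logistic case: with dx/dt = x − x², the substitution yₙ = xⁿ yields the linear chain yₙ' = n·yₙ − n·yₙ₊₁, which can be truncated at order N and integrated as a linear state equation. The result tracks the known logistic solution closely for small x₀. A sketch (parameters, truncation order, and the plain Euler integrator are illustrative choices):

```python
import numpy as np

def carleman_logistic(x0, N=10, t_end=1.0, dt=1e-4):
    # Carleman embedding of dx/dt = x - x^2: with y_n = x^n,
    # dy_n/dt = n*y_n - n*y_{n+1}; truncate at order N (y_{N+1} ~ 0).
    A = np.zeros((N, N))
    for n in range(1, N + 1):
        A[n - 1, n - 1] = n
        if n < N:
            A[n - 1, n] = -n
    y = np.array([x0 ** n for n in range(1, N + 1)])
    for _ in range(round(t_end / dt)):
        y = y + dt * (A @ y)   # forward Euler on the linear system
    return y[0]               # first component approximates x(t_end)

x0, t = 0.1, 1.0
exact = x0 * np.exp(t) / (1 - x0 + x0 * np.exp(t))  # known logistic solution
approx = carleman_logistic(x0)
```

The truncation error grows with x₀ and t (the neglected mode feeds back up the chain), which is why Carleman truncations are typically trusted only on short horizons or near the origin.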

I have a system of non-linear differential equations that explains the behaviour of some of the cancer cells.

Looking for help identifying the equilibrium points and eigenvalues of this model in order to determine the type of bifurcation present.

Thanks in advance.
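Without seeing the actual equations, here is the generic recipe in sympy on a hypothetical two-species tumour–immune toy model (the model and its coefficients are made up for illustration, not the asker's system): solve F(x) = 0 for the equilibria, then evaluate the eigenvalues of the Jacobian at each one.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# hypothetical toy tumour-immune model (illustrative only):
# dx/dt = x(1 - x) - x*y   (tumour grows logistically, killed by immune cells)
# dy/dt = x*y/2 - y/5      (immune cells recruited by tumour, natural decay)
f = x * (1 - x) - x * y
g = sp.Rational(1, 2) * x * y - sp.Rational(1, 5) * y

equilibria = sp.solve([f, g], [x, y], dict=True)   # solve f = g = 0
J = sp.Matrix([f, g]).jacobian([x, y])

for eq in equilibria:
    evals = list(J.subs(eq).eigenvals())           # eigenvalues at each point
    print(eq, evals)
```

Tracking how the real parts of these eigenvalues change sign as a model parameter varies is then how one classifies the bifurcation (e.g. transcritical versus Hopf).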

I want to do a PhD on mathematical modeling of infectious diseases (e.g. Covid-19, malaria, dengue). I am also interested in pure mathematics, like Nonlinear Analysis and Variational Inequalities, so my question is: can I find any connection between these two areas? Suggestions needed, thank you.
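On the modeling side, the usual entry point is a compartmental ODE system such as SIR; the nonlinear-analysis side enters through its equilibria, stability, and threshold quantities (R₀). A minimal sketch with illustrative parameters (not tied to any particular disease):

```python
import numpy as np

# minimal SIR model: dS = -b*S*I, dI = b*S*I - g*I, dR = g*I
def sir(beta=0.3, gamma=0.1, S0=0.99, I0=0.01, days=160, dt=0.1):
    S, I, R = S0, I0, 0.0
    for _ in range(round(days / dt)):      # forward Euler time-stepping
        dS = -beta * S * I
        dI = beta * S * I - gamma * I
        dR = gamma * I
        S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
    return S, I, R

S, I, R = sir()   # here R0 = beta/gamma = 3, so a large outbreak occurs
```

Fixed-point and variational methods from nonlinear analysis can then appear naturally once one studies stability, optimal control, or equilibrium problems built on top of such models.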

Can an elliptic crack (small enough to remain a single entity, with **no internal pressure or shear force**) inside an isotropic material (no boundary effect) be expanded in its own plane under **externally applied shearing stresses** only?

If yes, how did you show that? Do we have experimental evidence for the process?

I am interested in comparing two time-varying correlation series. Is there a statistically appropriate method to make this comparison?

Thank You
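One common starting point — a sketch only, since it ignores serial dependence, which matters for time series and usually calls for HAC or bootstrap corrections — is to Fisher z-transform both correlation series (stabilizing the variance) and compare the transformed values pairwise. The two series below are random stand-ins for real data:

```python
import numpy as np

rng = np.random.default_rng(7)
# two hypothetical time-varying correlation series (stand-ins for real data)
r1 = np.clip(rng.normal(0.50, 0.10, 200), -0.99, 0.99)
r2 = np.clip(rng.normal(0.55, 0.10, 200), -0.99, 0.99)

z1, z2 = np.arctanh(r1), np.arctanh(r2)   # Fisher z-transform of correlations
d = z1 - z2
# naive paired t-statistic on the transformed differences
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))
```

If the correlations are autocorrelated (they almost always are in rolling-window estimates), the naive standard error above is too small, so treat this only as a first diagnostic.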

Assume we have a program with different instructions. Due to some limitations in the field, it is not possible to test all the instructions. Instead, assume we have tested 4 instructions and calculated their ranks for a particular problem.

the rank of Instruction 1 = 0.52

the rank of Instruction 2 = 0.23

the rank of Instruction 3 = 0.41

the rank of Instruction 4 = 0.19

Then we calculated the similarity between the tested instructions using **cosine similarity** (after converting the instructions from text form to vectors via machine-learning instruction embedding).

Question: is it possible to create a mathematical formula considering the values of the ranks and the similarity between instructions, so that, given an **un-tested instruction**, we can calculate, estimate, or predict its rank based on its similarity with a tested instruction?

For example, we measure the similarity between instruction 5 and instruction 1. Is it possible to calculate the rank of instruction 5 based on its similarity with instruction 1? Is it possible to create a model or mathematical formula? If yes, then how?
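One simple formula of the kind described is a similarity-weighted average of the known ranks (kernel-regression / kNN style). Everything below is a sketch: the embeddings are random stand-ins, and the weighting scheme is only one of many possible choices.

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical embeddings of the four tested instructions (random stand-ins)
tested_vecs = rng.normal(size=(4, 16))
tested_ranks = np.array([0.52, 0.23, 0.41, 0.19])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_rank(vec, vecs, ranks):
    """Similarity-weighted average of known ranks (kernel-regression style)."""
    sims = np.array([cosine(vec, v) for v in vecs])
    w = np.clip(sims, 0.0, None)       # ignore negatively similar items
    if w.sum() == 0.0:
        return float(ranks.mean())     # fallback: no positive similarity
    return float((w * ranks).sum() / w.sum())

new_vec = rng.normal(size=16)          # "instruction 5" embedding (made up)
pred = predict_rank(new_vec, tested_vecs, tested_ranks)
```

Whether this predicts well depends entirely on whether cosine similarity of the embeddings actually tracks rank; that has to be validated on held-out tested instructions.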

During the lecture, the lecturer mentioned the properties of Frequentist. As following

**Unbiasedness**is only one of the frequentist properties — arguably, the most compelling from a frequentist perspective and possibly one of the easiest to verify empirically (and, often, analytically).

There are however many others, including:

1. **Bias-variance trade-off**: we would consider as optimal an estimator with little (or no) bias, but we would also value one with small variance (i.e. more precision in the estimate). So, when choosing between two estimators, we may prefer one with very little bias and small variance to one that is unbiased but has large variance.

2. **Consistency**: we would like an estimator to become more and more precise and less and less biased as we collect more data (technically, as n → ∞).

3. **Efficiency**: as the sample size increases indefinitely (n → ∞), we expect an estimator to become increasingly precise (i.e. its variance to reduce to 0, in the limit).

Why does the frequentist approach have these kinds of properties, and can we prove them? I think these properties can be applied to many other statistical approaches.
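These properties can be illustrated (not proved) numerically. A minimal sketch of consistency, assuming Uniform(0,1) data and the sample mean as the estimator:

```python
# Illustrative sketch only: the mean absolute error of the sample mean
# shrinks as the sample size n grows. A simulation illustrates the
# property; proving it requires, e.g., the law of large numbers.
import random

def mean_abs_error(n, reps=2000, seed=0):
    """Average |sample mean - true mean| over many simulated samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        xs = [rng.random() for _ in range(n)]
        total += abs(sum(xs) / n - 0.5)  # true mean of Uniform(0,1) is 0.5
    return total / reps

small_n = mean_abs_error(10)
large_n = mean_abs_error(1000)
assert large_n < small_n  # more data, more precision
print(small_n > large_n)  # prints True
```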

Please, I need, if available, some important research papers relating the theory of dynamical systems to climate change. Also, in general, I know there are many published research articles relating dynamical systems to many applications. But are there papers that research centers and governments rely on before taking any measures? I mean, are there papers, especially on climate change and the environment, that are not only theoretical but have practical applications?

I am aware that every totally bounded metric space is separable, and that a metric space is compact iff it is totally bounded and complete, but I wanted to know: is every totally bounded metric space locally compact? If not, give an example of a metric space that is totally bounded but not locally compact.
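One standard counterexample, offered here as a sketch to be verified against a reference: the rationals in the unit interval.

```latex
% Sketch: X = \mathbb{Q} \cap [0,1] with the usual metric is totally
% bounded but not locally compact.
%
% Totally bounded: for every \varepsilon > 0, the finitely many balls of
% radius \varepsilon centred at 0, \varepsilon, 2\varepsilon, \dots, 1
% cover [0,1], hence X.
%
% Not locally compact: a compact neighbourhood of q \in X would contain a
% closed ball B of X; B, being a closed subset of a compact set, would be
% compact, hence complete. But every ball of X contains a sequence of
% rationals converging (in \mathbb{R}) to an irrational, i.e. a Cauchy
% sequence with no limit in X. So no point of X has a compact neighbourhood.
```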

Follow this question on the given link

A few years ago, in a conversation that remained unfinished, a statistics expert told me that percentage variables should not be taken as a response variable in an ANOVA. Does anyone know if this is true, and why?

Well,

I am a very curious person. During Covid-19 in 2020, using coded data and taking only the last name, I noticed in my country that people with certain surnames were more likely to die than others (and this pattern has remained unchanged over time). Using mathematical ratios and proportions, I found inconsistencies by performing a "conversion" so that all surnames had the same weighting. The rest was a simple exercise in probability and statistics that revealed this controversial fact.

Of course, what I did was a shallow study, just a data mining exercise, but it has been something that caught my attention, even more so when talking to an Indian researcher who found similar patterns within his country about another disease.

In the context of pandemics (both the end of this one and others that may come), I think it would be interesting to have a line of research involving different professionals (data scientists; statisticians/mathematicians; sociology and demographics; human sciences; biological sciences) to compose a more refined study of this premise.

Some questions still remain:

What if we could have such answers? How should research ethics be handled? Could we warn people about precautions? How would people with certain last names considered at risk react? And the other way around? From a sociological point of view, could such a recommendation divide society into "superior" or "inferior" genes?

What do you think about it?

=================================

Note: Due to important personal matters I took a break and returned to my activities today, February 13, 2023. I am very happy to come across so much interesting feedback.

Dear All,

I am planning to do a Ph.D. in applied mathematics but am not able to decide on the area to work in. Can anyone suggest a good option to go with?

Dear researchers

Do you know a journal in the field of applied mathematics or chemistry-mathematics in which publication of the article is free and the review decision is announced within 3 months at most?

please contact me by the following:

Thank you very much.

Best regards

Hi

I have a huge dataset for which I'd like to assess the independence of two categorical variables (x,y) given a third categorical variable (z).

My assumption: I have to run the independence test for each unique "z", and if even one of these tests rejects the null hypothesis (independence), it is rejected for the whole dataset.

Results: I have done Chi-Sq, Chi with Yates correction, Monte Carlo and Fisher.

- Chi-squared is not a good method for my data due to the sparse contingency table

- Yates and Monte Carlo show rejection of the null hypothesis

- For Fisher, all the p-values are equal to 1

1) I would like to know whether there is something I'm missing.

2) I have already discarded the "z"s that have DOF = 0. If I keep them, how should I interpret the independence?

3) Why does the Fisher test result in p-values of 1 all the time?

4) Any suggestions?

#### Apply Fisher exact test

fish <- fisher.test(cont_table, workspace = 6e8, simulate.p.value = TRUE)

#### Apply Chi^2 method

chi_cor <- chisq.test(cont_table, correct = TRUE) ### Yates correction of the Chi^2

chi <- chisq.test(cont_table, correct = FALSE)

chi_monte <- chisq.test(cont_table, simulate.p.value = TRUE, B = 3000)
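For comparison, here is a hedged sketch (pure Python, standard library only) of the per-stratum approach described above, using an exact hypergeometric enumeration for 2x2 tables; the tables are hypothetical. It also illustrates how a very sparse stratum can legitimately give a Fisher p-value of 1, because the observed table can be among the most probable tables:

```python
# Hedged sketch: two-sided Fisher exact test for a single 2x2 stratum.
# Each unique "z" would be tested with its own table.
from math import comb

def fisher_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the table [[a, b], [c, d]]."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d
    def prob(x):  # hypergeometric probability of the table with cell (1,1) = x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = prob(a)
    lo = max(0, col1 - row2)
    hi = min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

print(fisher_2x2(1, 0, 0, 1))    # sparse stratum: prints 1.0
print(fisher_2x2(10, 1, 1, 10))  # strong association: small p-value
```

An alternative to testing every stratum separately is a pooled stratified test (e.g. Cochran-Mantel-Haenszel), which avoids rejecting independence "for the whole data" on the basis of a single stratum.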

**Dear colleagues, we know that getting a new research paper published can be a challenge for a new researcher. It is even more challenging considering the risk of refusal that comes from submitting a new paper to a journal that is not the right fit. We can also mention that some journals require an article processing charge (APC) but also have a policy allowing them to waive fees on request at the discretion of the editor; however, we underline that we want to publish a new research paper without an APC!**

**So, what do you suggest?**

We are certainly grateful for your recommendations.
Kind regards!

*Abdelaziz Hellal, Mohamed Boudiaf University of M'sila, Algeria.*

I have registered for a conference to be held in Singapore from 9-10 September 2019.

I want to ask if this event is a real event.

Name of event: International Conference on Applied Mathematics and Science (ICAMS-19)

Organization: WRF CONFERENCE

Date: 9th-10th SEP 2019

Best regards

I have previously conducted laboratory experiments on a photovoltaic panel under artificial soiling in order to obtain short-circuit current and open-circuit voltage data, which I later analyzed using statistical methods to derive a performance coefficient for this panel expressing the percentage decrease in produced power as dust accumulates. Are there any similar studies that relied on statistical analysis to measure this dust effect?

I hope I can find researchers interested in this line of research and that we can do joint work together!

**Article link:**

A tunable clock source will consist of a PLL circuit like the Si5319, configured by a microcontroller. The input frequency is fixed, e.g. 100 MHz. The user selects an output frequency with a resolution of, say, 1 Hz. The output frequency will always be lower than the input frequency.

The problem: the two registers of the PLL circuit which determine the ratio "output frequency / input frequency" are only 23 bits wide, i.e. the upper limit of both numerator and denominator is 8,388,607. As a consequence, when the user sets the frequency to x, the rational number x/10^8 has to be reduced or approximated.

If the greatest common divisor (GCD) of x and 10^8 is >= 12, then the solution is obvious. If not, the task is to find the element of the Farey sequence F_8388607 that is closest to x/10^8. This can be done by descending from the root along the left half of the Stern-Brocot tree. However, this tree, with all elements beyond F_8388607 pruned away, is far from balanced, resulting in a maximum number of descending steps in excess of 4 million; no problem on a desktop computer, but a bit slow on an ordinary microcontroller.

F_8388607 has about 21*10^12 elements, so a balanced binary tree with these elements as leaves would have a depth of about 45. But since such a tree cannot be stored in the memory of a microcontroller, the numerator and denominator of the searched Farey element would have to be calculated somehow during the descent. This task is basically simple in the Stern-Brocot tree, but I don't know of any solution in any other tree.

Do you know of a fast algorithm for this problem, maybe working along entirely different lines?

Many thanks in advance for any suggestions!
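In case it helps the discussion: since the output frequency is below the input frequency, the target ratio x/10^8 is less than 1, so bounding the denominator by 8,388,607 also bounds the numerator, and finding the closest element of F_8388607 is exactly the "best rational approximation with bounded denominator" problem. The continued-fraction descent solves it in O(log N) steps by collapsing each run of same-direction Stern-Brocot steps into a single division. A sketch in Python (`Fraction.limit_denominator` implements this descent):

```python
# Sketch: closest element of the Farey sequence F_N to f_out/f_in via the
# continued-fraction (collapsed Stern-Brocot) descent.
from fractions import Fraction

N_MAX = 8_388_607          # 2**23 - 1, the PLL register limit
F_IN_HZ = 100_000_000      # fixed 100 MHz input

def pll_ratio(f_out_hz):
    """Best numerator/denominator pair for f_out/f_in, both <= N_MAX."""
    best = Fraction(f_out_hz, F_IN_HZ).limit_denominator(N_MAX)
    return best.numerator, best.denominator

num, den = pll_ratio(33_333_333)   # ask for ~33.333333 MHz
print(num, den)                    # prints: 1 3
error_hz = abs(F_IN_HZ * num / den - 33_333_333)  # about 0.33 Hz here
```

The same loop (the Euclidean algorithm plus one final comparison between the last convergent and the best semiconvergent) is integer-only and short, so it should port easily to C on a microcontroller.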

**Dear researchers**

**As we know, a new type of derivative has recently been introduced which depends on two parameters: the fractional order and the fractal dimension. These derivatives are called fractal-fractional derivatives and are divided into three categories with respect to the kernel: power-law kernel, exponential-decay kernel, and generalized Mittag-Leffler kernel.**

**The power and accuracy of these operators in simulations have motivated many researchers to use them in modeling different diseases and processes.**

**Are there any researchers working with these operators on equilibrium points, sensitivity analysis, and local and global stability?**

**If you would like to collaborate with me, please contact me by the following:**

**Thank you very much.**

**Best regards**

**Sina Etemad, PhD**

Dear all,

What can you say about the journal "Italian Journal of Pure and Applied Mathematics"?

I am studying integral transforms (Fourier, Laplace, etc.) in order to apply them to physics problems. However, it is difficult to find books that have enough exercises with answers. I have found that the Russian authors in particular have excellent books with a lot of exercises and their solutions.

Greetings,

Ender

Journal of Industrial & Management Optimization (JIMO) is an open access journal; you pay a substantial amount to publish a paper. When you go to the website of its publisher, the American Institute of Mathematical Sciences (AIMS Press), it seems that it is not really based in the United States. I am not sure whether it is a legitimate professional organization or a predatory publisher. They have a large number of open access journals. On the other hand, their handling of papers is terrible: extremely slow and low-tech, which is not typical of predatory journals. It may take 13 months to get an editorial rejection, for instance. Furthermore, they don't have an online submission system with user profiles; you just submit the paper on a website, and they give you a URL to check your paper's status, which makes your submission open to anyone who has the URL. It has an impact factor of 1.3, which puzzles me. Any comments on this organization and the journal would be appreciated.

*Why is a Proof to Fermat's Last Theorem so Important?*

*I have been observing an obsession in mathematicians, logicians and number theorists with providing a "Proof of Fermat's Last Theorem". Many intend to publish these papers in peer-reviewed journals. Publishing your findings is good, but the problem is that a lot of the papers aimed at providing a proof of Fermat's Last Theorem are erroneous, and the authors don't seem to realize that.*

*So*

*Why is the proof of Fermat's Last Theorem so important that a huge chunk of mathematicians are obsessed with providing it, and failing miserably?*

*What are the practical applications of this theorem?*

*Note: I am not against the theorem or the research that is going on around the theorem, but it seems to be an addiction. That is why I thought of asking this question.*

Hello everyone,

Could you recommend courses, papers, books or websites about modeling language and formalization?

Thank you for your attention and valuable support.

Regards,

Cecilia-Irene Loeza-Mejía

Dear collegues.

I would like to ask anybody who works with neural networks to check my loop for the test sample.

I have 4 sequences (with the goal of predicting prov; monthly data, 22 data points in each sequence) and I would like to construct the forecast for each next month using a training sample of 5 months.

It means I need to shift each time by one month, with 5 elements:

train<-1:5, train<-2:6, train<-3:7, ..., train<-17:21. So I need to get 17 columns as the output result.

The loop is:

shift <- 4

number_forecasts <- 1

d <- nrow(maxmindf)

k <- number_forecasts

for (i in 1:(d - shift - 1)) # 17 iterations: training windows 1:5, 2:6, ..., 17:21

{

The code:

require(quantmod)

require(nnet)

require(caret)

prov=c(25,22,47,70,59,49,29,40,49,2,6,50,84,33,25,67,89,3,4,7,8,2)

temp=c(22,23,23,23,25,29,20,27,22,23,23,23,25,29,20,27,20,30,35,50,52,20)

soil=c(676,589,536,499,429,368,370,387,400,423,676,589,536,499,429,368,370,387,400,423,600,605)

rain=c(7,8,2,8,6,5,4,9,7,8,2,8,6,5,4,9,5,6,9,2,3,4)

df=data.frame(prov,temp,soil,rain)

mydata<-df

attach(mydata)

mi<-mydata

scaleddata<-scale(mi$prov)

normalize <- function(x) {

return ((x - min(x)) / (max(x) - min(x)))

}

maxmindf <- as.data.frame(lapply(mydata, normalize))

go<-maxmindf

forecasts <- NULL

forecasts$prov <- 1:22

forecasts$predictions <- NA

forecasts <- data.frame(forecasts)

# Training and Test Data

trainset <- maxmindf[i:(i + shift), ] # training window: rows i to i+4

testset <- maxmindf[i + shift + 1, , drop = FALSE] # test on the following month

#Neural Network

library(neuralnet)

nn <- neuralnet(prov~temp+soil+rain, data=trainset, hidden=c(3,2), linear.output=FALSE, threshold=0.01)

nn$result.matrix

plot(nn)

#Test the resulting output

#Test the resulting output

temp_test <- subset(testset, select = c("temp","soil", "rain"))

head(temp_test)

nn.results <- compute(nn, temp_test)

results <- data.frame(actual = testset$prov, prediction = nn.results$net.result)

forecasts$predictions[i + shift + 1] <- nn.results$net.result # store this month's forecast

}

minvec <- sapply(mydata, min)

maxvec <- sapply(mydata, max)

denormalize <- function(x,minval,maxval) {

x*(maxval-minval) + minval

}

as.data.frame(Map(denormalize,results,minvec,maxvec))

Could you tell me, please, what I should put in trainset and testset (using the loop), and how to display all predictions using the loop, so that the results are produced with a shift of one month and a training window of 5?

I am very grateful for your answers

I want to develop a hybrid SARIMA-GARCH model for forecasting monthly rainfall data. The data are split 80% for training and 20% for testing. I initially fit a SARIMA model for rainfall and found that the residuals of the SARIMA model are heteroscedastic. To capture the information left in the SARIMA residuals, a GARCH model is applied to the residual part. A GARCH order of (p=1, q=1) is used. But when the data are forecasted, I get a constant value. I tried different GARCH orders and still get a constant value. I have attached my code; kindly help me resolve it. Where have I made a mistake in the coding, or does some other CRAN package have to be used?

library("tseries")

library("forecast")

library("fGarch") # note: the package name is fGarch, not fgarch

setwd("C:/Users/Desktop")

**# Setting the working directory**

data <- read.table("data.txt")

**# Importing the data**

datats <- ts(data, frequency = 12, start = c(1982, 4))

**# Converting the data set into a time series**

plot.ts(datats)

**# Plot of the data set**

adf.test(datats)

**# Test for stationarity**

diffdatats <- diff(datats, differences = 1)

**# Differencing the series**

datatsacf <- acf(datats, lag.max = 12)

**# Obtaining the ACF plot**

datapacf <- pacf(datats, lag.max = 12)

**# Obtaining the PACF plot**

auto.arima(diffdatats)

**# Finding the order of the ARIMA model**

datatsarima <- arima(diffdatats, order = c(1, 0, 1), include.mean = TRUE)

**# Fitting the ARIMA model**

forearimadatats <- forecast(datatsarima, h = 12)

**# Forecasting with the ARIMA model (forecast.Arima is defunct in current versions of the forecast package; the object name must also match the one used for plotting)**

plot(forearimadatats)

**# Plot of the ARIMA forecast**

residualarima <- resid(datatsarima)

**# Obtaining the residuals**

archTest(residualarima, lag = 12)

**# Test for heteroscedasticity (archTest is not part of fGarch; FinTS::ArchTest is one alternative)**

**# Fitting the ARIMA-GARCH model: garchFit needs both ARMA orders, e.g. arma(1, 1), not arma(2)**

garchdatats <- garchFit(formula = ~ arma(1, 1) + garch(1, 1), data = datats, cond.dist = "norm", include.mean = TRUE, trace = TRUE, algorithm = "nlminb")

**# Forecasting with the ARIMA-GARCH model**

forecastgarch <- predict(garchdatats, n.ahead = 12, trace = FALSE, mse = "uncond", plot = FALSE)

plot.ts(forecastgarch)

**# Plot of the forecast**

A comprehensive way to find the concentration of arbitrary solutions would bring benefits in health, industry, technology, and commerce. Although the Beer-Lambert law is one solution, there are cases where epsilon is unknown (for example, a Coca-Cola drink or a cup of coffee). In such cases, suitable alternative ways of determining concentration should be suggested.
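One common alternative when epsilon is unknown, sketched here with hypothetical numbers, is a calibration curve: measure absorbance for several known dilutions of the same drink, fit a straight line A = k·c, and read the unknown concentration off the fitted slope (Beer-Lambert linearity and a fixed path length are assumed):

```python
# Hedged sketch: calibration curve in place of an unknown epsilon.
# All numerical values below are hypothetical.

def fit_slope(conc, absorb):
    """Least-squares slope through the origin for A = k * c."""
    return sum(a * c for a, c in zip(absorb, conc)) / sum(c * c for c in conc)

conc = [0.1, 0.2, 0.4, 0.8]        # known concentrations (arbitrary units)
absorb = [0.12, 0.25, 0.49, 0.98]  # measured absorbances (hypothetical)
k = fit_slope(conc, absorb)
unknown_absorbance = 0.61
print(round(unknown_absorbance / k, 3))  # prints 0.498
```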

I am trying to solve the differential equation. I was able to solve it when the function P is constant and independent of r and z, but I am not able to solve it when P is a function of r and z, or of r only (IMAGE 1).

Any general solution for IMAGE 2?

Kindly help me with this. Thanks

Multinomial or ordered choice: which one is applicable?

The complete flow equations for a third grade flow can be derived from the differential representation of the stress tensor. Has anyone ever obtained any results, experimentally or otherwise, that indicate the space-invariance (constancy) of the velocity gradient, especially for 1D shear flow in the presence of constant wall-suction velocity? Under what conditions were the results obtained?

Grubbs's test and Dixon's test are widely applied in the field of hydrology to detect outliers, but the drawback of these statistical tests is that they need the dataset to be approximately normally distributed. I have rainfall data for 113 years, and the dataset is non-normally distributed. What are the statistical tests for finding outliers in non-normally distributed datasets, and what values should we use to replace the outliers?
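One distribution-free option sometimes suggested for skewed hydrological data is a median/MAD screen (the "modified z-score"). This is a sketch with made-up rainfall values; the 3.5 cutoff is a common convention rather than a rule, and replacing flagged values (instead of investigating them) should be done with caution:

```python
# Hedged sketch: outlier screen based on the median and the median
# absolute deviation (MAD), which do not assume normality.

def mad_outliers(xs, cutoff=3.5):
    s = sorted(xs)
    n = len(s)
    med = (s[n // 2] + s[(n - 1) // 2]) / 2
    dev = sorted(abs(x - med) for x in xs)
    mad = (dev[n // 2] + dev[(n - 1) // 2]) / 2
    if mad == 0:
        return []
    # 0.6745 makes the MAD comparable to a standard deviation
    return [x for x in xs if abs(0.6745 * (x - med) / mad) > cutoff]

data = [812, 930, 871, 904, 865, 2990, 888, 842]  # hypothetical annual rainfall
print(mad_outliers(data))  # prints [2990]
```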

Consider the powerful central role of differential equations in physics and applied mathematics.

In the theory of ordinary differential equations and in dynamical systems we generally consider smooth or C^k class solutions. In partial differential equations we consider far more general solutions, involving distributions and Sobolev spaces.

I was wondering, what are the best examples or arguments that show that restriction to the analytic case is insufficient ?

What if we only consider ODEs with analytic coefficients and only consider analytic solutions? And likewise for PDEs. Here by "analytic" I mean real maps which can be extended to holomorphic ones. How would this affect the practical use of differential equations in physics and science in general? Is there an example of a differential equation arising in physics (excluding quantum theory!) which only has C^k or smooth solutions and no analytic ones?

It seems we could not even have an analytic version of the theory of distributions, as there could be no test functions: there are no non-zero analytic functions with compact support.

Is Newtonian physics analytic ? Is Newton's law of gravitation only the first term in a Laurent expansion ? Can we add terms to obtain a better fit to experimental data without going relativistic ?

Maybe we can consider that the smooth category is used as a convenient approximation to the analytic category. The smooth category allows perfect locality. For instance, we can consider that a gravitational field dies off outside a finite radius.

Cosmologists usually consider space-time to be a manifold (although with possible "singularities"). Why a manifold rather than the adequate smooth analogue of an analytic space ?

Space = regular points, Matter and Energy = singular points ?

Is the reciprocal of the inverse tangent $\frac{1}{\arctan x}$ a (logarithmically) completely monotonic function on the right-half line?

If $\frac{1}{\arctan x}$ is a (logarithmically) completely monotonic function on $(0,\infty)$, can one give an explicit expression for the measure $\mu(t)$ in the integral representation in the Bernstein--Widder theorem for $f(x)=\frac{1}{\arctan x}$?

These questions are stated in detail at https://math.stackexchange.com/questions/4247090

Hello Researchers,

Say that I have **'p'** variables and **'m'** *constraint equations* between these variables. Therefore, I must have **'p - m'** independent variables, and the remaining variables can be related to the independent ones through the *constraint equations*. Is there any rationale for selecting these **'p - m'** independent variables from the available **'p'** variables?

Uses in applied mathematics and computer sciences
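On the selection question above: one common rationale, stated here only as a sketch, comes from the implicit function theorem: choose the m *dependent* variables so that the m x m submatrix of the constraint Jacobian in those columns is nonsingular (and well conditioned in practice); the pivot columns of a Gaussian elimination identify such a set, and the remaining p - m columns are the independent variables. The Jacobian below is hypothetical:

```python
# Hedged sketch: pick dependent variables as the pivot columns of the
# constraint Jacobian (partial pivoting for numerical robustness).

def pivot_columns(A):
    """Return column indices chosen as pivots (the dependent variables)."""
    m, p = len(A), len(A[0])
    M = [row[:] for row in A]
    pivots = []
    r = 0
    for _ in range(m):
        # choose the remaining column with the largest entry in row r
        col = max((c for c in range(p) if c not in pivots),
                  key=lambda c: abs(M[r][c]))
        if abs(M[r][col]) < 1e-12:
            break  # rank-deficient constraints: fewer than m pivots exist
        pivots.append(col)
        for i in range(r + 1, m):
            f = M[i][col] / M[r][col]
            for c in range(p):
                M[i][c] -= f * M[r][c]
        r += 1
    return sorted(pivots)

# m = 2 constraints in p = 4 variables (hypothetical Jacobian):
J = [[1.0, 2.0, 0.0, 1.0],
     [0.0, 1.0, 3.0, 1.0]]
print(pivot_columns(J))  # prints [1, 2]: columns 0 and 3 stay independent
```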

Once I obtain the Riccati equations to solve the moment equations, I can't find the values of the constants. How can I obtain these constants? Have their values already been reported for titanium dioxide?

The Riemannian metric g satisfies

g(P_X Y, W) = -g(Y, P_X W),

where P_X Y denotes the tangential part in (\nabla_X J) Y = P_X Y + Q_X Y.

Can we impose that condition on a Norden manifold?

Assume we have a piece of C16 timber, 100x100x1000 mm, and we apply a UDL plus a point load at its mid-point (parallel to the fibre), as shown below. How much will the timber compress between the force and the concrete surface?

I have attached a sketch as well. Please see below.

If you could show a detailed calculation, it would be much appreciated. Thank you!
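As a sketch only, under stated assumptions: the question does not give load magnitudes, so P and w below are assumed values, and E is taken as the C16 mean modulus of elasticity parallel to the grain (about 8000 N/mm^2 per EN 338). Bearing behaviour at the concrete interface and load sharing are ignored; the elastic shortening of a member compressed parallel to the grain follows delta = N·L/(E·A):

```python
# Hedged sketch of elastic shortening parallel to the grain.
A = 100.0 * 100.0   # cross-sectional area, mm^2
L = 1000.0          # member length, mm
E = 8000.0          # assumed E_0,mean for C16, N/mm^2

P = 10_000.0        # assumed point load, N (10 kN)
w = 5.0             # assumed UDL along the member, N/mm

delta_point = P * L / (E * A)        # uniform axial force: delta = N*L/(E*A)
delta_udl = w * L**2 / (2 * E * A)   # linearly varying axial force integrates to w*L^2/(2*E*A)
print(delta_point + delta_udl)       # prints 0.15625 (mm)
```

A full design check would also verify the compressive stress against the C16 strength value and any bearing (crushing) criterion at the support.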

I am coding a multi-objective genetic algorithm; it can predict the Pareto front accurately for convex Pareto fronts of multi-objective functions. But for non-convex Pareto fronts it is not accurate, and the predicted Pareto points are clustered at the ends of the Pareto front obtained from the MATLAB genetic algorithm. Can anybody suggest some techniques to solve this problem? Thanks in advance.

The attached PDF file shows the results for different problems.

*This paper is a project to build a new function. I will propose a form of this function and let people help me develop the idea of the project; at the same time, we will try to apply this function in other sciences such as quantum mechanics, probability, electronics…*