Theoretical Physics - Science topic
Questions related to Theoretical Physics
1. Grounded Physical-Ontological Categories behind Physics
Grounding can be of various levels and grades. I speak of grounding all sorts of concepts, procedural principles, procedural methods, and theories in any system of thought and science. It is unnecessary in this context to discuss the grounding of highly derivative concepts, which occur in theories much later than those that appear while founding the theories on their best-grounded foundations. I go directly to the case of what should be called the most Categorial concepts behind physics, on which physics is grounded.
These Categorial concepts cannot come merely from within physics but should be directly related to physics and facilitate it in as many of its aspects as possible. The success of foundational Categories consists in their serving to ground as many aspects as possible of the particular science or system. Concepts that are strictly and exclusively physical, or merely generally scientific, cannot serve as Categories as well as notions from beyond the science can. Evidently, this is because no scientific discipline or system can be grounded on itself and hence on its own concepts. This is clearly also part of the epistemological and ontological implications of the work of Gödel.
Grounded ontological Categories are such that they are inevitably and exhaustively grounded in the To Be of Reality-in-total, as the only exhaustive implications of To Be. All other Categories must, as far as possible, be derivative of the most primary Categories. The greater the number of Categories within the Categorial system that do not derive from the primary Categories, the worse for the self-evidence of the science or system built within it.
Grounding is exhaustive in the sense that the Categories that ground all physics need nothing other than the To Be of Reality-in-total in order to be concepts. To Be is the source of the Categories. It happens that there are two such Categories that are inevitably and exhaustively grounded. I call them Extension and Change. Clarifications of their meaning, ontological significance, and epistemological and physical implications follow.
As I said, grounding must preferably be on the surest notion of all, which is existence. I prefer to term it To Be. As far as thought, feeling, and sensation are concerned, To Be is a notion in all of them. But principally To Be must belong to the whole of Reality, and not to a few things. If anything and/or all processes of Reality are existent, then what exist are the parts of existent Reality. The first minimum guarantee hereof should be that existence is non-vacuous. Non-vacuous signifies that each existent possesses or contains whatever is possible within its existence in the given measurementally spatio-temporal context (which, as shall soon be clear, belongs ontologically to the Extension-Change-wise existence of things).
3. Definitions of Universals, Extension-Change, Causality, and Unit Process
Even the minimum realism in thought, feeling, and sensation has for its principal conditions (1) the ontological primacy of the universal qualities / natures that belong to groups of entities (processes), where the groups are also called natural kinds in the analytic tradition, and (2) the ultimate simplicity and indivisibility of the universal characteristics that pertain to all existents. Contrary to the infinite divisibility of existent matter-energy, universals as the characteristics of existent matter-energy conglomerations (of togethernesses of unit Processes) are ontologically ideal universals, and hence indivisible. These universals are ideal not because of our idealization of the characteristics, but because they are the general characteristics of the natural kinds to which each existent belongs. Thus, it is important to keep in mind that ontological universals are not our idealizations.
The properties of things are built out of these simple ontological universals in the natural kinds. The vague reflections of simple ontological universals within our minds are conceptually connotative universals, which are conceptual ideals. And their linguistic reflections in minds and all kinds of symbolic instruments are denotative universals.
Connotative and denotative universals are epistemological universals, formed epistemically from the little contact that minds have with the phenomena (“showings-themselves”) of some layers of processual objects out there. The properties of existent processual things (matter-energy particulars) are vaguely reflected in minds and languages through the connotative and denotative instrumentalization of concepts, in order to reflect the things via phenomena in terms of the data created by minds out of them. Any theory that grants ontological primacy to epistemological universals belongs to a range of theories yielding primacy to the perceiving mind over the perceived objects. This is anathema in any science or philosophy, because things are not vacua.
Non-vacuous existence implies that existents are extended. This is one of the most important characteristics of existents. Extension implies having parts, i.e., compositionality. Any extended existent's parts impart impact to some others. This is Change. Only extended existents can exert impacts on any other. As a result, the object that exerts impact receives some impact within itself, which is nothing but proof that an impact by one extended part implies movements and impact formation by its parts too, as a result of the overall impact formation in question, which contains the inner parts' impact formation within it. The latter need not always have its effects merely within the parts but may also have them outwards.
Extension and Change are the highest, deepest, and most general characteristics of all existents. Interestingly, existence in Extension-Change is itself the process that we have so far named causation. Hence, anything non-vacuously existent has Extension and Change not separately but together. This is the meaning of Universal Causality. Physics cannot dispense with this pre-scientific universal Law. No more shall quantum physicists or scientists from other disciplines tell us that quantum physics has some sort of non-causality within! Any causal unit of existents, in which the causal part and the effect part are held together, may be termed a process. Processuality is yet another important characteristic of existents, but we formulate it as the Category of Process, which represents the matter-energy units that there can be.
By this have clearly been set up four physical-ontological Categories of physics: Extension, Change, Causality, and Process. Space and time are merely epistemic categories. They cannot characterize existent processes. Ontological universals, as the characteristics of existent matter-energy conglomerations, are of togethernesses of unit Processes. Ontological universals are therefore ontologically ideal universals belonging (pertaining) to some natural kinds. The Categories as ontological universals belong to Reality-in-total, and not merely to some natural kinds.
- A perpetual motion machine is a concept of engineering and its outcome. It plays a small role in the first law of thermodynamics, but in the second law of thermodynamics perpetual motion machines have become the starting point of the theory, which greatly elevates their status. Comparing the two, one finds that the logic of the second law of thermodynamics is filled with experiential themes and lacks rational logic; it is a loss of the rational spirit of scientists.
- In practice, scientists extensively use method B in the figure to try to find a balance between theory and experiment. This kind of thing was originally invisible, but scientists have treated it as a treasure. It is quite ironic.
- What was originally a trial of the second law of thermodynamics has become a trial of scientists. I believe there will be a response from scientists.
INFINITE-ETERNAL MULTIVERSE
Raphael Neelamkavil
Ph.D. (Causality in Quantum Physics), Dr. phil. (Causality in Cosmology)
We cannot see, or predict, whether the edges of the universe have limits. But merely due to our inability to see or predict this, the universe need not be infinite in content and extent. Similarly, nor should it be taken as finite in content and extent. Any number of insistent avowals of experimental proofs becomes useless here.
Some have attempted to determine the content of the universe by first determining the geometry of the universe, depending on their determination of isotropy and anisotropy at some local region or layer of the universe. Then they formulate the separate geometry of the finite or infinite spatial or temporal extent of the universe, in order thus to indicate the matter-energy content (as finite or infinite) of the whole universe...!
Does the absence of empirical evidence mean that we should not speak of the rational cases possible in the maximal, medial, and minimal cases of the extent of time and space, and of the content of matter-energy in the universe? I do not think so. Why not treat each possible case and sub-case separately and come to conclusions not merely about the geometry, but also about the rationally most acceptable amount of content of the universe?
Many people in the physics and astrophysics of the whole cosmos often tend to insist that the mathematics used to derive conclusions from the particular / local portions of the universe to the total / overall cosmos will permit us to conclude physically to whatever the mathematics suggests. This is because they tend to equate the physics with the mathematics.
If a physical system within a controlled environment on Earth tends to reverse all the physical dispersion of matter (where no real tab can be kept on the loss of a minute amount of energy from the system), then they tend to conclude, on a cosmic scale, a theory of the energy loss at the fringes of each local universe or portion of it, namely that all energy propagated will return at some point. As a matter of course, they tend to conclude so.
If there is an internal gravitational reason for the loss of energy at the fringes from the first moment of expansion (due to whatever amount of expansion, because there is no total absence of expansion and contraction in any physical system!), then it is rationally and astrophysical-cosmologically clear that some energy will have left the system, at least at the speed of light, at the first moment of the process of expansion considered, and that this loss will continue in a more or less intensive mode.
Then many cosmologists tend to insist that our finite-content big bang universe (which either is just the totality of the cosmos that ever has existed, or is only our finite-content portion of the infinite-content cosmos) has only two options: (1) EITHER it will go on expanding eternally and become rarefied forever (in which case it could already have been so if the universe had no origin), (2) OR it will oscillate between expansion and contraction (which is the cyclic model, which too incurs a non-eternal process of rarefaction of the finite content by reason of the fringe-loss of energy).
In the latter case, all the mathematics-is-omnipotent sort of physicists just calculate the implications of their theory by depending merely on the strength of the mathematics. They say first that most of the matter-energy returns in any system, because even in the case of the entropy of a given system the loss of energy – if there is any – is not great with respect to the system.
This is very inaccurate and in any case gives rise to the prejudice that the negligible loss is zero loss for all purposes. We do not know for sure whether every energy wavicle that left the system returns. It is impossible to measure the loss so exactly. A “sufficiently accurate empirical measure” is no guarantee of total correctness. They then say that, whether there is a big bang universe or not, every system contains and preserves all the energy that it has, merely because of their presumption that matter and energy are interconvertible, and that hence all the energy that left at the outskirts should return.
Lots of geometrical restrictions are then invoked so that the fringe-lost energy may return: that the fringes are not infinite, that any universe has within itself all that its space-time has, etc. In none of these do they try to really rethink the foundations of their merely mathematical concept of spacetime with respect to the fringes of local universes, from the start to the finish of whatever amount of expansion and contraction any cosmic body should have. Consequently, they consider the local universe as a complete system – by their unresearched presumption – and presuppose that cosmic bodies within that universe can lose energy forever, which of course will remain within the system of this universe. Does anyone sense an inconsistency here?
As we all know, the second law of thermodynamics is formulated with almost closed terrestrial systems in mind, and no methods exist to perfectly measure all the minute losses of energy from within the system. If gravitation alone is involved in bringing back all the lost energy of the finite-content local universe, then the second law need not have applicability there at all!
Merely because we have formulated the physics in such a way that the second law of thermodynamics does not apply to the entropy of the outermost fringes of the local universe, we cannot insist that the energy lost at the fringes will automatically return without the agency of a later gravitational propagation. One should naturally use the wisdom that no gravitational propagation issued before or after the start of the expansion can overtake the lost (gravitational and/or non-gravitational) energy and bring it back to the centre for recycling for use in another phase of the cycle of expansion and contraction.
In order to avoid this state of affairs, many might bring up geometries and cosmological theories requiring no big bang or big crunch. But who can insist that the local universe never has any amount of expansion and contraction? Even if it does not have expansion and contraction, energy at the fringes will be propagated off. There is no special wall there (except the geometrical walls created by a few cosmologists) to block the outward-bound propagations forever!
In short, the big bang universe cannot go on eternally in existence as a conglomeration with the same amount of matter-energy. If it had to be insulated from all other possible universes outside, it was certainly not in existence from the past eternity proper to it, because the fringe-loss of energy, however minute, would have exhausted such an eternally existent finite-content universe an eternity ago. It would have had to evaporate all or most of its content; hence, if it had existed from the past eternity as the sole physical cosmos, it should already have exhausted itself.
Why is it that this our finite-content big bang (or slightly expanding) universe did not already conclude at an earlier point of time by getting fully evaporated into the outer realms, if it was not created at all and if it really existed from eternity? Hence, IF MERELY THIS FINITE-CONTENT UNIVERSE EXISTS, some sort of creation of this finite-content universe should have been the case. Hence, let us leave this possibility – for otherwise scientistic scientists would begin attacking me.
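The exhaustion argument above can be put numerically in a toy calculation (my illustration, not the author's model; the loss rate r and the threshold eps are arbitrary assumptions): if a finite content E0 loses even a minute fixed fraction r of its remaining energy per expansion-contraction cycle, the remainder E0·(1−r)^n falls below any positive threshold after finitely many cycles, so a past-eternal history without replenishment from outside is excluded.

```python
import math

def cycles_until_exhausted(E0: float, r: float, eps: float) -> int:
    """Smallest n with E0 * (1 - r)**n < eps: the finite number of
    expansion-contraction cycles after which the remaining energy
    drops below the threshold eps."""
    # Solve E0 * (1 - r)**n < eps  =>  n > log(eps / E0) / log(1 - r)
    bound = math.log(eps / E0) / math.log(1.0 - r)
    return math.floor(bound) + 1

# Even a minuscule fringe loss per cycle exhausts the content after a
# finite (if enormous) number of cycles -- never after an eternity:
n = cycles_until_exhausted(E0=1.0, r=1e-9, eps=1e-30)
print(n)  # finite, on the order of 7e10 cycles
```

However small r is chosen, the result is always a finite number; only r = 0 (strictly zero loss, which the essay argues is never the case) would permit an eternal past.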
Let’s ask: Why should only this one universe exist? The following are the only two possible sub-cases:
(1) Probably there existed, from the past eternity of each universe, an infinite number of universes bigger or smaller. These need not have an origin, since the small amount of energy that each universe loses at each of its expansion- and contraction phases will end up at some finite future in other similar universes; and perhaps this is enough for an eternal co-existence of each of them from the past eternity proper of all parts of each such universe as parts of one or many other universes within the infinite-content universe.
(2) If the one universe was the result of an instant creation or of a continuous creation of its various parts, there should be an infinite number of other universes too – because the Source should not be this same universe or the other universes, and the Source should then have the eternal ability to perform continuous creation. Moreover, the other universes in the cosmos cannot create themselves, and this big bang universe of ours cannot create itself, except if they had infinite activity within and the infinite stability proper to infinite activity.
Any number of arguments articulating a quantum vacuum creating new universes from itself will naturally involve creation only from already existent matter-energy and/or universes. Existing matter-energy – however empty or full the quantum vacuum as the supposed agent of creation is – cannot create fresh matter-energy except from within the existing matter-energy in each universe. But this is finite in amount, and cannot go on by fresh creation.
Transfer and re-formulation of matter-energy is not creation. It is just a new mixing with other matter-energy at finite distances. This activity is already included among the processes of the universe, and through it the finite-content universe/s exhaust themselves into their own outer spaces within a finite time, not permitting further prospects of new creation.
In the case of eternal and continuous creation from a Source, it must be admitted that the universe is infinite in content, that is, contains an infinite number of finite-content universes. This is because the Source cannot be this infinite-content cosmos or be part of it, and must exist continuously in the act of infinite and eternal creation.
I do not insist that the above is the case. I have presented one possible case given by any sort of open reasoning. I have elaborated all these and similar other cases of cosmogenesis in the 647 pages of my book: Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 2018, Berlin. It is the result of more than 35 years of reading, research, and cogitations – from my very school days.
I have published a short but differently argued version of it in less than 100 pages, presenting the logic of these reasonings in a more simplified manner, so that an ordinary educated person interested in cosmology can grasp the basic lines of the above book easily. It is titled: Essential Cosmology and Philosophy for All, KDP Amazon, 2022. This book is available as Kindle and Printed, for a few Euros or Dollars.
I suggest these books because I cannot write more than a few pages here….
Bibliography
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
HOW TO GROUND SCIENCE AND PHILOSOPHY TOGETHER AXIOMATICALLY?
Raphael Neelamkavil, Ph.D., Dr. phil.
We see many theories in physics, mathematics, etc. becoming extremely axiomatic and rigorous. They call themselves, or attempt to be, as quantitative as possible. But are adequate comparisons between mathematics, the physical sciences, the biological sciences, the human sciences, and philosophy – and an adequate adaptation of the axiomatic method – possible by creating a system of all exact, physical, and human sciences that depends only on quantitatively qualitative proportionalities and calls them invariables?
They cannot do well enough to explain Reality-in-total, because Reality-in-total primarily involves all sorts of ontological universals that are purely qualitative, of which some are the most fundamental, proportionality-type, quantitative invariables of all physical existents in their specificity and totality in their natural kinds. But as the inquiry comes to Reality-in-total, ontological qualitative universals must come into the picture. Hence, merely quantitative (mathematical) explanations do not exhaust the explanation of Reality-in-total.
Existence as individuals and existence in groups are not differentiable and systematizable in terms of quantitatively qualitative universals alone. Both qualitative and quantitatively qualitative universals are necessary for this. Both together are general qualities pertaining to existents in their processual aspect, not merely in their separation from each other. Therefore, the primitive notions (traditionally called Categories) of Reality-in-total must be ontological qualitative universals involving both the qualitative and quantitative aspects. The most basic of the universals that pertain properly to Reality-in-total are now to be found.
Can the primitive notions (Categories) and axioms of the said sciences converge so that the axioms of a system of Reality take shape from a set of the highest possible ontological Categories as simple sentential formulations of the Categories which directly imply existents? This must be deemed necessary for philosophy, natural sciences, and human sciences, because these deal with existents, unlike the formal sciences that deal only with the qualitatively quantitative form of arguments.
Thus, in the case of mathematics and logic there can be various sorts of quantitative and qualitative primitive notions (categories) and then axioms that use the primitive notions in a manner that adds some essential, pre-defined, operations. But the sciences and philosophy need also the existence of their object-processes. For this reason, the primitive axioms can be simple sentential formulations involving the Categories and nothing else. This is in order to avoid indirect existence statements and to involve existence in terms exclusively of the Categories.
Further, the sciences together could possess just one set of sufficiently common primitive notions of all knowledge, from which also the respective primitive notions and axioms of mathematics, logic, physical and human sciences, and philosophy may be derived. I support this view because the physical-ontological Categories involving the existence of Reality and realities, in my opinion, must be most general and fully exhaustive of the notion of To Be (existence) in a qualitatively universal manner that is applicable to all existents in their individual processual and total processual senses.
Today the nexus or interface of the sciences and philosophies is in a crisis of dichotomy between truth and reality. Most scientists, philosophers, and common people rush after “truths”. But who, in scientific and philosophical practice, wants to draw out to their possible limits the consequences of the fact that we can at most have ever better truths, and not final truths as such?
Finalized truths as such may be concluded to only in cases where there is a natural and inevitable availability of an absolute right to use the logical Laws of Identity, Contradiction, and Excluded Middle, especially in order to decide between concepts related to the existence and non-existence of anything out there.
Practically, very few may be seen generalizing upon and extrapolating from this metaphysical and logical state of affairs beyond its epistemological consequences. In the name of practicality, ever fewer academics today want to connect ever broader truths compatible with Reality-in-total by drawing from the available and imaginable commonalities of both.
The only thinkable way to accentuate the process of access to ever broader truths compatible with Reality-in-total is to look for the truest possible of all truths, with foundations on existence (nominal) / existing (gerund) / To Be (verbal). The truest are those propositions to which the Laws of Identity, Contradiction, and Excluded Middle can best be applied. The truest are generalizable and extendable not merely epistemologically, but also metaphysically, physical-ontologically, mathematically, biologically, human-scientifically, etc.
The agents that permit generalization and extrapolation are the axioms that are the tautologically sentential formulations of the most fundamental of all notions (Categories) and imply nothing but the Categories of all that exists – that too with respect to the existence of Reality-in-total. These purely physical-ontological implications of existence are what I analyze further in the present work. One may wonder how these purely metaphysical, physical-ontological axioms and their Categories can be applicable to sciences other than physics and philosophy.
My justification is as follows: Take for example the case of the commonality of foundations of mathematics, logic, the sciences, philosophy, and language. The notions that may be taken as the primitive notions of mathematics were born not from a non-existent virtual world but instead from the human capacity of spatial, temporal, quantitatively qualitative, and purely qualitative imagination.
I have already been working to show that qualitative quantitativeness – qualitative in having to do with the ontological universals of existents, expressed in terms of adjectives; quantitative in being based on spatial and temporal imagination, where, it should be kept in mind, space-time is epistemically measuremental – may be seen to be present in its elements in mathematics, logic, the sciences, philosophy, and language.
The agents I use for this are: ‘ontological universals’, ‘connotative universals’, and ‘denotative universals’. In my opinion, the physical-ontological basis of these must and can be established in terms merely of the Categories of Extension-Change, which you find being discussed briefly here.
Pitiably, most scientists and philosophers forget that following the exhaustively physical-ontological implications of To Be in the foundations of science and philosophy is the best way to approach Reality well enough in order to derive the best possible of truths and their probable derivatives. Most of them forget that we need to rush after Reality, not merely after truths and truths about specific processes.
SYMMETRY: A SUBSET OF UNIVERSAL CAUSALITY
What is the Difference between Cause and Reason?
Raphael Neelamkavil, Ph.D., Dr. phil.
1. Symmetry and Symmetry Breaking of Choice
2. Defining Causality
3. Defining Symmetry Causally
I discuss here the concept of symmetry and relate it to Universal Causality. I do not bring in the concept of Conservation here. Nor do I mention or discuss the mathematicians and physicists who deal with this concept, because such a short document cannot study their work or critique them in order to relate them to Universal Causality.
1. Symmetry and Symmetry Breaking of Choice
Suppose that, by use of a conventionally decided unit of physically causal action α (of whatever, say, a photon) from A, a choice is posed for that unit of action between two given electrons B and C. We consider B and C to be the immediate candidates for direct causal action by α, but the said causal action has not yet taken place in B or C through the external causal action α from A. Then we tend to claim that there exists a PERFECT SYMMETRY OF CHOICE between B and C for the unit of action α from A.
Whether α is from A or from anything else does not matter here. What matters is that in nature such a perfect symmetry is never the case. Suppose there is no choice for α other than that between B and C, that is, there exist only A, B, and C in the world. In that case, at some point of time in the future of the occurrence of the physically real mutual approach (causal, if A were to interact with B or C through the exertion of the causal action α) between (1) the unit of action α issuing from A and (2) any one of B and C, there occurs the causal choice between the two.
If it is possible to stipulate that A, B, and C are in motion in various directions, then there exist some other D, E, etc. in the universe, and A, B, and C have had causal interaction with many others. In that case, the decision of α for interaction with either B or C at a stipulated point of time lies in the acquisition of the knowledge as to how much A, B, and C have been causally affected by others, and over what extent of time.
This is not determinable, given that we are unable to causally contact all the agents of causal action upon A, B, and C. Our final choice will consider, at least in a percentage-wise manner, how much and how many other As, Bs, and Cs have causally influenced A, B, and C, beginning from a certain past relative point of time. But our decision is a speculation based on a few nearby-lying causal influences upon them, and is not as true as it would be if we had the whole information.
We tend to term the action that follows, with the so-called “choice” of B or C by the action potential α of A, symmetry breaking. Symmetry breaking here is nothing but the ability of any action potential α of A to affect B or C (or any other processual entity) causally – but this ability is presumed and calculated without taking, and without being able to take, into consideration all the causal antecedents of the action potential α of A and of the processual entities B, C, etc.
These causal antecedents are such that, if known fully well, the action route of the action potential α of A can be predicted without access to the notion of symmetry or symmetry breaking. Such symmetry breaking may then even be cited by some physicists as the reason for the choice. Note also that this or any other concept of symmetry and symmetry breaking is not such that all the causal antecedents in A, B, C, etc. are already summed up in it. Recall to mind here also the Bohmian notion of hidden variables. Hidden variables are not actual variables, but instead, a device to merely represent unknown and non-represented variable values.
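The point that the symmetry of choice is an artifact of hidden causal antecedents can be illustrated with a toy simulation (my sketch, not the author's formalism; the names, the one-dimensional hidden state, and the deterministic rule are invented for illustration): when the full antecedent state is accessible, the "choice" of B or C is fixed; when it is hidden, an observer records only an apparently symmetric 50/50 statistics.

```python
import random

def choose_target(antecedent_state: float) -> str:
    """Deterministic rule: the full causal history fixes which of B or C
    the action potential alpha of A acts upon."""
    return "B" if antecedent_state >= 0.0 else "C"

def observed_choice(rng: random.Random) -> str:
    """What an observer without access to the antecedents records:
    the hidden state is replaced by ignorance, so the outcome looks
    like an uncaused, symmetric choice."""
    hidden = rng.uniform(-1.0, 1.0)  # inaccessible causal antecedents
    return choose_target(hidden)

rng = random.Random(42)
outcomes = [observed_choice(rng) for _ in range(10_000)]
frac_B = outcomes.count("B") / len(outcomes)
# frac_B is close to 0.5: the "perfect symmetry" lives at the level of
# ignorance of the antecedents, while each single outcome is determined.
```

This mirrors the remark on Bohmian hidden variables above: the apparent symmetry is a statistical summary of what has been left out, not a feature of the determined individual events.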
One may argue that symmetry too is causal. The direct cause of the choice is the action α by A on B or C. But even within the notion of the direct or immediate cause, the notion of the other external and remote causes of the event – of the action potential α of A choosing B or C causally – cannot be included. That is, immediate causes do not contain within themselves all the remote past causal routes that have contributed to the choice by the action potential α of A to interact causally with B or C at a given moment.
This shows that the notions of symmetry and symmetry breaking are the results of conceptually ostracizing (or of our inability to reach and include) the past causal horizon of the causal event at discussion. Hence, these are instruments to do physics in our given context. This does not mean that science and philosophy should not recognize the universal nature of causality or that physics and philosophy should ostracize Universal Causality.
The action is physically processed in the form of a conglomeration of existent processes, whichever be the participating causal forces from within them and from outside – the latter of which normally are not being taken into consideration by the experiment and the symmetric-mathematical description, because there are limits to experimental setups and mathematical tools. But theoretically generalizing inquiry has no limits. This is why we need a theoretically generalizing notion of Universal Causality based solely on the notion of existence. The generalities in the natural kinds of physically existent processes are called ontological universals. These are not merely and exclusively in individual token entities.
In nature there are only causes, not reasons. Reasons are in human minds, and are active in two ways:
(1) In a connotative manner (i.e., consciousness notes together the generalities in processes and then concatenates the connotative universals achieved/formed within consciousness in order to facilitate concepts and their expression in statements).
(2) In a denotative manner (i.e., connotative universals are mixed with brain elements and then expressed in symbols and language, and thereafter denotative universals are concatenated in various ways in symbolically formulated statements in language, mathematics, automated intelligence, and other symbolic instruments).
Both these are aspects of the constitution of reason in their own ways as and when they have to do with reasons in consciousnesses and expressions via symbols and languages. Causes in physical processes are existent as such outside our connotative universals, connotative concatenations of connotative universals, denotative universals, and denotative concatenation of denotative universals. Reasons occur in the concatenations of connotative and denotative universals, respectively in the pure conceptual aspect of consciousness and in its symbolizing aspect in various natural and artificial languages.
The symmetry or symmetry breaking in any given case is such an explanation, a reason. It is not a cause or the cause of anything. Physicists often confuse reasons with causes. Symmetry is just one example of the instances in which this confusion occurs.
2. Defining Causality
Anything existent is in Extension, i.e., is composite and thus has a finite number of parts, none of the parts of which can be taken as an infinitesimal in any exercise of division and counting. Anything in existence is in Change, i.e., all existent processes and parts thereof make new impact-generation on other such and as a result also within itself – this is the only other aspect of composition of existents. The latter part of Change, namely, the inner and inward action as a result of the previous action, is to be recognized as an additional action.
The combined action of Extension-Change-wise existence is nothing but causation. Everything existent is in causality – hence Universal Causality. Causes are always in the Extension-Change-wise mode of being of existents. Extension and Change together are the exhaustive meaning of existence (To Be) of Reality-in-total. All Extension-Change-wise instances of existence are instances of causation.
In short, everything existent has parts (Extension), every part has parts because it is in Extension, and all of them are in their own proper action of impact-formation (Change) inwards and outwards. Extension and Change are the only two exhaustive modes of the meaning of existing non-vacuously. Every existent is thus in causal action.
Such causation is everywhere, in all existents, as the very implication of existence. Hence, Universal Causality is the principle of nature that is instantiated when the choice by a unit of processual action (α of any A) between two electrons (B, C) breaks the principle of symmetry. Symmetry breaking with respect to a preferred or prescribed sort of action should always be causal, because this event has a past causal horizon, however long. (The question as to whether the past causal horizon is physically past eternal or past finitely eternal is the cosmogenetic question. We do not treat it here.)
Symmetry and symmetry breaking are names for what may be called reasons in any case under discussion. But reasons must always be explained in terms of the causal actions within the given contexts. If physics is unable to do this in a given instance, it is not entitled to call the instance non-causal or a-causal. Nor should the gap be filled with an indescribable something called vacuum energy, ubiquitous ether, etc., which is then made to do the creation of the universe/s.
If the various laws of Conservation are considered as instances of symmetry, such symmetry is not merely of a choice of interaction, but much more, as a symmetry that may be defined as not being otherwise than what the processes involved (and thus all that exist as processes) are.
Existent processes are fundamentally in existence, which is the same as being in Extension-Change, i.e., in Causal existence; only derivatively (i.e., by preferring to involve only a few causes) from this continuously Causal existence are they in states of symmetry or symmetry breaking – whatever be the states or choice of states considered. This is done in terms of reasons, i.e., of conceptual explanation; but all of it is in fact based on the causal processes that guide everything being considered for investigation in the cosmos.
3. Defining Symmetry Causally
To be clearer about what physics does: symmetry is a mode of perceiving and explaining causal physical action quantitatively within a given limited context of causes, in which the totally causal nature of all existents is not considered as playing a direct role in the formation of the immediate causes under consideration.
Universal Causality is equivalent to non-vacuous existence, because Universal Causality, composed of Extension and Change, is the very exhaustive meaning of existence. Hence, Universal Causality is physical-ontologically more a priori than the symmetry and symmetry breaking of some select states, where the state of having two sides, aspects, choices, possibilities of action, etc. is based on the Extension-Change-wise modes of existent processes, which are in finite measures of activity and in stability in the same finite measures of activity.
The finite measures of causal action may be quantified. But this quantification in terms of any conventional mode of measurement does not represent all that the physical processes involved are in themselves, in terms of all that have causally happened in them.
Symmetry is not a matter of absolutely virtual knowledge. It is naturally based on the causal action of parts in parts of the universe and their comparability with respect to certain criteria of comparison. Various mathematical tools have come to be used to make comparisons effective and productive.
But this is not the case concerning Universal Causality, which I have defined here, because mathematical applications in physics, astrophysics, cosmology, etc. tend to forget the basic fact of the universality of Causality, which should have been dealt with in every little part of these sciences. This is the sad part of the story of Universal Causality.
Bibliography
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
THE METHODOLOGY OF CAUSAL HORIZONAL RESEARCH
Raphael Neelamkavil, Ph.D., Dr. phil.
The uninterruptedly past-existent causal influences pointing ever backward for recognition of causal pervasiveness and therefore beckoning consideration of causal pasts for achievement of rational adequacy with respect to the perceived causes of any particular process – this I call the ontological givenness of the causal horizon of anything whatever.
The whole of the past causal influences can never be fully and actually traced back from a given point of time by human intellects and instruments. But all processes are in principle ever better traceable and capable of being theoretically included in general ontological research. Such theoretical traceability of causes is rejected by QM through its probabilistic-ontological exclusion of them, despite the very finite Extension-Change distances between any two QM events and between a QM event and an arbitrarily chosen experimenter.
Due to the principle of inner-universe conservation of matter-energy, these past causal influences – the causal horizon – as influences at any time traceable to the future, are not annihilated into non-existence in the present. Therefore, they have their real significance from the past in the present of any process.
I propose therefore that a physical-ontologically and cosmologically tenable Causal Horizonal Research (CHR) [1] into inner-universe causalities traceable theoretically to the indefinite past of any process at inquiry – even in case of existence of the external originative cause of all that is physical – can yield at least a more than vague and sufficiently broad outlook at some problematic issues of causal reach in micro-physics, cosmology, physical ontology, and philosophy. That is, the status and extent of causal processes in the micro- and macro-universe, the relation of real causality with the recognition-level or calculation-level probability, randomness, chaos, catastrophe, etc. can be further elucidated and systematized by CHR.
If individual processes in the universe have had any measure of past causality active in their parts in any manner (wholly or allegedly partially), this demonstrates by definition the fact that any causal explanation of any process hints at all the processes (causal or allegedly non-causal) that are prior to a phenomenon / event / process, relative to the spatiotemporally connected posteriority of the thing being explained and the priority of the causes being generalized upon. The measure of Extension-Change (measured as space-time in science and ordinary parlance) that has already taken place is theoretically traceable.
Suppose that a certain process’s causal roots proper (or, at least what we call antecedents proper) are conceivable in principle as having been existent in the past. Then there is no reason why the experimentally and theoretically in-principle feasible extent of tracing it should obstruct us from taking at least a theoretically general look at the ontological structure of past causal (antecedent) roots, and then still farther past roots, etc.
Its ontological structure is, in general, the Extension-Change antecedent-horizon that lies always in the past direction. The need to trace causal roots is, in simple terms, the rational basis of the principle and procedure of CHR, granted that the antecedents proper of all that is today, of all that we speak of, are in fact causes. To make sure in the present context that these are causes, we wait till the end of the present work.
By positing causes as active in the past and relevantly dormant in the present, and by reason of the principle of conservation as it is active towards the future proper in all past and present processes, the proposed ontological and cosmological methodology of CHR is theoretically implementable. We want to see the extent of causality where CHR takes us to, even when we do not mention its use in the following chapters.
It will not take us to a meaningless infinite regress, since any infinite regress without the involvement of a Creator will still be with reference to causes within the universe, and such an infinite regress within the universe should naturally be physically meaningful. [2] The stage for CHR in micro-cosmology will be set by the following chapters, where causation in QM will be discussed from various angles, along with making the need for our methodology further explicit.
The above is a short introduction to CHR treated mainly in my book (2014):
CAUSAL UBIQUITY IN QUANTUM PHYSICS
[1] For detailed reflections, see Raphael Neelamkavil, “Causal Horizonal Research in Cosmology” (21-47), Journal of Dharma 34, 2 (April-June 2009).
[2] In order to circumvent infinite regress, we do not posit an unmoved creator as the final past end of any causal horizon. It is beset with metaphysical paradoxes. We keep the option of a continuously creating Divine open, but this is not needed for our more restricted methodology for physical research, namely, Causal Horizonal Research.
- η = η(T) = 1 − T1/T2 excludes volume, whereas E(V, T) and P(V, T) contain volume; calculating E(V, T) and P(V, T) using η(T) does not match the experiment. This is in line with mathematical logic: the specific scientific calculations have changed their flavor. Please refer to the following figure for details.
- η = η(T) = 1 − T1/T2 is derived for the ideal-gas formula.

The (artificial) discontinuity of the NIST thermophysical properties affects the second law of thermodynamics:
1) Scientists create Type 2 perpetual motion machines;
2) Scientists discover new laws of phase transition;
3) Scientists don't need to create a bunch of fake things for the second law of thermodynamics.
Heat is transferred from low temperature to high temperature without consuming external energy. Compared to nuclear fusion, it is a simpler and easier way to gain energy.

Comparison:
1) The first law of thermodynamics calculates the Carnot efficiency;
2) The second law of thermodynamics predicts: η = 1 − T1/T2.
Method:
1) The first law: P = P(V, T), E = E(V, T), dE = Q − W ⇒ η; the efficiency needs to be calculated and determined.
2) The second law: anti-perpetual-motion-machine reasoning, guessing ⇒ η = 1 − T1/T2.
Effect:
1) The first law: E, P, W, Q, η of the cyclic process can all be obtained;
2) The second law: only the efficiency can be obtained: η = 1 − T1/T2.
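The first-law route in the Method above can be sketched numerically. The following Python sketch is an illustrative example only: it assumes a hypothetical monatomic ideal gas as the working substance of a reversible Carnot cycle, books the heats via the first law (dE = Q − W, with dE = 0 over the full cycle), and computes the efficiency from W and Q.

```python
import math

def carnot_efficiency_first_law(T_hot, T_cold, V1=1.0, V2=2.0,
                                n=1.0, R=8.314, gamma=5.0 / 3.0):
    """Efficiency of an ideal-gas Carnot cycle from first-law bookkeeping.

    Hypothetical working substance: n moles of a monatomic ideal gas.
    """
    # Adiabats obey T * V**(gamma - 1) = const, fixing the other two volumes.
    r = (T_hot / T_cold) ** (1.0 / (gamma - 1.0))
    V3 = V2 * r                      # end of the adiabatic expansion
    V4 = V1 * r                      # start of the adiabatic compression
    # Isothermal legs: dE = 0, so Q = W = n R T ln(Vf / Vi).
    Q_in = n * R * T_hot * math.log(V2 / V1)     # heat absorbed at T_hot
    Q_out = n * R * T_cold * math.log(V3 / V4)   # heat rejected at T_cold
    W_net = Q_in - Q_out             # first law summed over the cycle
    return W_net / Q_in

print(carnot_efficiency_first_law(600.0, 300.0))  # matches 1 - 300/600 = 0.5
```

For this textbook reversible ideal-gas cycle the first-law result coincides with 1 − T1/T2; the numerical agreement is just the standard derivation and does not by itself settle the dispute raised above.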
- The uniqueness of natural science requires scientists to make choices.
- The second law of thermodynamics can only yield a single conclusion: η = 1 − T1/T2 (meaningless, since it lacks support from E, P, W, Q results), like an island in the ocean.
I believe this is really achievable using cosmological constants in combination with specific cosmological parameters to calculate theoretical physical constants! But I think the problem is very difficult because, in my opinion, it could take time to discover all the cosmological parameters!
Combine the pictures to see the logical flaws of the second law of thermodynamics and its deviations from experiment.
1) Please take a look at the first picture: compared to the first law of thermodynamics, the second law is pseudoscience. The perpetual motion machine is a result and an engineering concept, which cannot be used as the starting point of a theory (the second law).
2) In the second picture, the second law of thermodynamics is misused by scientists, indicating that the theory does not match experiment.
3) The above two explanations indicate that the second type of perpetual motion machine exists. If you are not satisfied, you can read my other discussions or articles.
4) With the second type of perpetual motion machine, the energy and environmental crises would be lifted. By using the electricity generated by perpetual motion machines to desalinate seawater, the Sahara desert would become fertile land, and there would be no food crisis. War and poverty would move away from humanity.
See picture for details
The second law of thermodynamics, no matter how powerful, must follow the laws of logic.
Einstein is one of the greatest and most admired physicists of all time. Einstein's general theory of relativity is one of the most beautiful theories in physics. However, every theory in physics has its limitations, and that should also be expected for Einstein's theory of gravity: a possible problem on small length scales is signaled by 90 years of unwavering resistance of general relativity to quantization, and a possible problem on the largest length scales is indicated by the present search for "dark energy" to explain the accelerated expansion of the universe within general relativity.
Why, then, is the curvature of spacetime so generally accepted as an ultimate truth, as the decisive origin of gravitation, both by physicists and philosophers? This seems to be a fashionable but unreflected metaphysical assumption to me.
Are there alternative theories of gravity? There are plenty of alternatives. As a consequence of the equivalence of inertial and gravitational mass, they typically involve geometry. The most natural option seems to be a gauge field theory of the Yang-Mills type with Lorentz symmetry group, which offers a unified description of all fundamental interactions and a most promising route to quantization.
I feel that metaphysical assumptions should always be justified and questioned (rather than unreflected and fashionable). How can such a healthy attitude be awakened in the context of the curvature of spacetime?
Research areas: Theoretical Physics, Philosophy of Science, Gravitation, General Relativity, Metaphysics
God said, "Let there be light."
So, did God need to use many means when He created light? Physically we have to ask, "Should all processes of light generation obey the same equation?" "Is this equation the 'God equation'?"
Regarding the types of "light sources", we categorize them according to "how the light is emitted" (the way it is emitted):
Type 0 - naturally existing light. This philosophical assumption is important. It is important because it is impossible to determine whether it is more essential that all light is produced by matter, or that all light exists naturally and is transformed into matter. Moreover, naturally existing light can provide us with an absolute spacetime background (free light has a constant speed of light, independent of the motion of the light source and independent of the observer, which is equivalent to an absolute reference system).
Type I - Orbital Electron Transition [1]: usually determines the characteristic spectra of the elements in the periodic table; these spectra are the "fingerprints" of the elements. With human intervention, coherent optical lasers can be generated. According to the assumptions of Bohr's orbital theory, the transitions are instantaneous: there is no process, and no time is required*. Therefore, they also cannot be described by specific differential equations, but only by probabilities. However, Schrödinger believed that the wave equation could give a reasonable explanation, with the transition no longer an instantaneous process but a transitional one: the wave function passes from one stable state to another, with a "superposition of states" in between [2].
Type II - Light emitted by the accelerated motion of charged particles. There are various scenarios here, and it should be emphasized that theoretically they can produce light of any wavelength, from infinitely short to infinitely long, and they are all photons. 1) Blackbody radiation [3][4]: produced by the thermal motion of charged particles [5]; it depends closely on the temperature and has a continuous spectrum in its statistical properties. This is the most ubiquitous class of light sources, ranging from stars like the Sun to the cosmic microwave background radiation [6], all of which have the same properties. 2) Radio: the most ubiquitous examples are the electromagnetic waves radiated from the antennas of devices such as wireless broadcasting, wireless communications, and radar. 3) Synchrotron radiation [7], e+e− → e+e−γ: the electromagnetic radiation emitted when charged particles travel in curved paths. 4) Bremsstrahlung [8], for example e+e− → qqg → 3 jets [11]: electromagnetic radiation produced by the acceleration, or especially the deceleration, of a charged particle passing through the electric and magnetic fields of a nucleus; continuous spectrum. 5) Cherenkov radiation [9]: light produced by charged particles when they pass through an optically transparent medium at speeds greater than the speed of light in that medium.
Type III - Particle reactions and nuclear reactions: any physical reaction process that produces photon (boson**) output. 1) Gamma decay; 2) Annihilation of particles and antiparticles when they meet [10]: this is a universal property of symmetric particles, the most typical physical reaction; 3) Various concomitant light, such as during particle collisions; 4) Transformational light output when light interacts with matter, such as Compton scattering [12].
Type IV: Various redshifts and violet shifts, changing the relative energies of light: gravitational redshift and violet shift, Doppler shift; cosmological redshift.
Type V - Virtual photons [13][14]?
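As a concrete companion to the blackbody case under Type II above, here is a small Python sketch of Planck's law. The formula is the standard one; the solar temperature value and the brute-force peak search are illustrative choices, not anything from the original text. It shows the continuous, temperature-dependent spectrum and numerically recovers Wien's displacement law, λ_max · T ≈ 2.898 × 10⁻³ m·K.

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m / s
K = 1.380649e-23     # Boltzmann constant, J / K

def planck_radiance(lam, T):
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    return (2.0 * H * C ** 2 / lam ** 5) / math.expm1(H * C / (lam * K * T))

def peak_wavelength(T, lo=100e-9, hi=3000e-9, steps=29000):
    """Brute-force scan for the wavelength that maximizes B(lambda, T)."""
    best_lam, best_b = lo, 0.0
    for i in range(steps + 1):
        lam = lo + (hi - lo) * i / steps
        b = planck_radiance(lam, T)
        if b > best_b:
            best_lam, best_b = lam, b
    return best_lam

T_sun = 5778.0  # approximate effective temperature of the Sun, K
lam_max = peak_wavelength(T_sun)
print(lam_max * T_sun)  # close to Wien's constant, ~2.898e-3 m K
```

The peak near 500 nm for the Sun illustrates why the continuous Type II spectrum is tied to temperature alone, unlike the discrete Type I "fingerprint" spectra.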
Our questions are:
Among these types of light-emitting modes, types II and IV obey Maxwell's equations, while the light-emitting processes of types I and III are not clearly explained.
We cannot know the light-emitting process, but we can be sure that the result, the final output of photons, is the same. Can we be sure that different processes produce the same photons?
Is the thing that is capable of producing light itself light? Or does it at least contain elements of light, e.g., an electric field E and a magnetic field H? If there are no elements of light in it, then how was the light created? By what means was one energy and momentum converted into another energy hν and momentum h/λ?
There is a view that "Virtual particles are indeed real particles. Quantum theory predicts that every particle spends some time as a combination of other particles in all possible ways"[15]. What then are the actual things that can fulfill this interpretation? Can it only be energy-momentum?
We believe everything needs to be described by mathematical equations (not by made-up operators). If the output of a system is the same, then the process that bridges to that output should also be the same. That is, the output equations for light are the same; whether it is a transition, an accelerated charged particle, or an annihilation process, the difference lies only in the input.
------------------------------------------------------------------------------
* Schrödinger said: the theory was silent about the periods of transition or 'quantum jumps' (as one then began to call them). Since intermediary states had to remain disallowed, one could not but regard the transition as instantaneous; but on the other hand, the radiating of a coherent wave train of 3 or 4 feet length, as it can be observed in an interferometer, would use up just about the average interval between two transitions, leaving the atom no time to 'be' in those stationary states, the only ones of which the theory gave a description.
** We know the most about photons, but not so much about the nature of W, Z, and g. Their mass and confined existence is a problem. We hope to be able to discuss this in a follow-up issue.
------------------------------------------------------------------------------
Links to related issues:
【1】"How does light know its speed and maintain that speed?”;
【2】"How do light and particles know that they are choosing the shortest path?”
【3】"light is always propagated with a definite velocity c which is independent of the state of motion of the emitting body.";
【4】“Are annihilation and pair production mutually inverse processes?”; https://www.researchgate.net/post/NO8_Are_annihilation_and_pair_production_mutually_inverse_processes;
------------------------------------------------------------------------------
Reference:
[1] Bohr, N. (1913). "On the constitution of atoms and molecules." The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 26(151): 1-25.
[2] Schrödinger, E. (1952). "Are there quantum jumps? Part I." The British Journal for the Philosophy of science 3.10 (1952): 109-123.
[3] Gearhart, C. A. (2002). "Planck, the Quantum, and the Historians." Physics in perspective 4(2): 170-215.
[4] Jain, P. and L. Sharma (1998). "The Physics of blackbody radiation: A review." Journal of Applied Science in Southern Africa 4: 80-101. 【GR@Pushpendra K. Jain】
[5] Arons, A. B. and M. Peppard (1965). "Einstein's Proposal of the Photon Concept—a Translation of the Annalen der Physik Paper of 1905." American Journal of Physics 33(5): 367-374.
[6] The PLANCK Program.
[8] Bremsstrahlung.
[9] Neutrino detection by Cherenkov radiation: Super-Kamiokande, https://www-sk.icrr.u-tokyo.ac.jp/en/sk/about/; The Jiangmen Underground Neutrino Observatory (JUNO), http://juno.ihep.cas.cn/.
[10] Li, B. A. and C. N. Yang (1989). "CY Chao, Pair creation and Pair Annihilation." International Journal of Modern Physics A 4(17): 4325-4335.
[11] Schmitz, W. (2019). Particles, Fields and Forces, Springer.
[12] Compton, A. H. (1923). "The Spectrum of Scattered X-Rays." Physical Review 22(5): 409-413.
[13] Manoukian, E. B. (2020). Transition Amplitudes and the Meaning of Virtual Particles. 100 Years of Fundamental Theoretical Physics in the Palm of Your Hand: Integrated Technical Treatment. E. B. Manoukian. Cham, Springer International Publishing: 169-175.
[14] Jaeger, G. (2021). "Exchange Forces in Particle Physics." Foundations of Physics 51(1): 13.
[15] Are virtual particles really constantly popping in and out of existence? Or are they merely a mathematical bookkeeping device for quantum mechanics? - Scientific American.
If it is so, who is responsible for this terrible downfall of the King of all sciences – the brilliant but fame-, fortune-, and fund-hungry scientists, or the invisible puppeteers behind the scenes?
What is real and rational – a universe created at a single cataclysmic instant by an omnipotent and omniscient creator ruled by deterministic causality or an infinite, eternal and ever-changing one, mediated by quantum and dialectical chance and necessity?
"After Einstein, a New Generation Tries to Create a Theory of Everything - A new generation of physicists hope to succeed where Einstein failed": Scientific American: https://www.scientificamerican.com/article/after-einstein-a-new-generation-tries-to-create-a-theory-of-everything/
"Quō Vādis Theoretical Physics and Cosmology? From Newton's Metaphysics to Einstein's Theology!"
" Ambartsumian, Arp and the Breeding Galaxies" : http://redshift.vif.com/JournalFiles/V12NO2PDF/V12N2MAL.pdf
"The Infinite - As a Hegelian Philosophical Category and Its Implication for Modern Theoretical Natural Science" - The Limits of Mathematics: http://www.e-journal.org.uk/shape/papers/Special%2064.pdf
"Philosophy of Space-Time: Whence Cometh "Matter" and "Motion"?”
Does energy have an origin or root?
When Plato talks about beauty in the "Hippias Major", he asks: "A beautiful young girl is beautiful", "A sturdy mare is beautiful", "A fine harp is beautiful", "A smooth clay pot is beautiful" ....... , So what exactly is beauty? [1]
We can likewise ask: mechanical energy is energy, heat energy is energy, electrical and magnetic energy is energy, chemical and internal energy is energy, radiant energy is energy – so what exactly is "energy"? [2]
Richard Feynman, said in his Lectures in the sixties, "It is important to realize that in physics today we have no knowledge of what energy is". Thus, Feynman introduced energy as an abstract quantity from the beginning of his university teaching [3].
However, the universal concept of energy in physics states that energy can neither be created nor destroyed, but can only be transformed. If energy cannot be destroyed, then it must be a real thing that exists, because it makes no sense to say that we cannot destroy something that does not exist. If energy can be transformed, then, in reality, it must appear in a different form. Therefore, based on this concept of energy, one can easily be led to the idea that energy is a real thing, a substance. This concept of energy is often used, for example, that energy can flow and that it can be carried, lost, stored, or added to a system [4][5].
Indeed, in the different areas of physics there is no definition of what energy is; what is consistent is only its metrics and measures. So, whether energy is a concrete Substance**, or just heat, or the capacity for doing work, or merely an abstract cause of change, was much discussed by early physicists. However, we must be clear that there is only one kind of energy, and it is called energy. It is stored in different systems, in different ways in those systems, and it is transferred by some mechanism or other from one system to another [9].
Based on a comprehensive analysis of physical interactions and chemical reaction processes, energy is considered the only thing that communicates the various phenomena. Thus, "Energism" was born* [8]. Ostwald had argued that matter and energy had a "parallel" existence; he later developed a more radical position: matter is subordinate to energy. "Energy is always stored or contained in some physical system. Therefore, we will always have to think of energy as a property of some identifiable physical system." "Ostwald regarded his Energism as the ultimate monism, a unitary 'science of science' which would bridge not only physics and chemistry, but the physical and biological sciences as well" [6]. This view expressed the idea of considering "pure energy" as a "unity" and assumed a process of energy interaction. However, because of the impossibility of determining what energy is, it was rejected by both scientific and philosophical circles as "metaphysics" and "materialism" [10].
The consistency and transitivity of energy and momentum in different physical domains have actually shown that they must be linked and bound by something fundamental. Therefore, it is necessary to re-examine the "Energism" and try to promote it.
In classical mechanics, energy and momentum are independent, and their conservation laws are likewise independent: the momentum of a particle does not involve its energy. In relativity, however, the conservation of momentum and energy cannot be dissociated: the conservation of momentum in all inertial frames requires the conservation of energy, and vice versa; space and time are frame-dependent projections of spacetime [7].
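The definite energy–momentum relationship alluded to here is the standard special-relativistic one; written out for reference (a textbook formula, not part of the original post):

```latex
% Four-momentum p^mu = (E/c, \mathbf{p}) of a particle with rest mass m:
\[
  E = \gamma m c^{2}, \qquad \mathbf{p} = \gamma m \mathbf{v}, \qquad
  \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
\]
% whose frame-invariant norm binds E and p together:
\[
  E^{2} - (\mathbf{p}c)^{2} = (m c^{2})^{2},
\]
% so that for a photon (m = 0):
\[
  E = |\mathbf{p}|\, c = h\nu, \qquad |\mathbf{p}| = h/\lambda.
\]
```

Because E and p are components of a single four-vector, a Lorentz transformation mixes them; this is precisely why their conservation laws cannot hold independently across inertial frames.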
Our questions are:
1) What is energy? Is it a fundamental thing of an entity nature**, or is it just a measure, like the property "label" of "beauty", which can be used by anything: heat, light, electricity, machinery, atomic nuclei? Do the various forms of energy express the same meaning? Can they be expressed mathematically in a uniform way? Is there a mathematical definition of "energy"? ***
2) Is the conservation of energy a universal principle? How does physics ensure this conservation?
3) Why is there a definite relationship between energy and momentum in all situations? Where are they rooted?
4) If the various forms of energy and momentum are unified, given the existence of relativity, is there any definite relationship between them and time and space?
-------------------------------------------------------------------------
* At the end of the nineteenth century, two theories were born that tried to unify the physical world: the "electromagnetic worldview" and "Energism". We believe this is the most intuitive and simple view of the world, and probably the most beautiful and correct one.
** If it is an entity, then it must still exist at absolute zero. Like the energy and momentum of the photon itself, it does not change because of the temperature, as long as it does not interact with each other.
*** We believe this is an extremely important issue, first raised by Sergey Shevchenko (https://www.researchgate.net/profile/Sergey-Shevchenko) in his reply to a question on ResearchGate; see https://www.researchgate.net/post/NO1_Three-dimensional_space_issue, SS's reply.
-------------------------------------------------------------------------
References
[1] Plato.
[2] Ostwald identified five "Arten der Energie": I. mechanical energy, II. heat, III. electrical and magnetic energy, IV. chemical and internal energy, and V. radiant energy. Each form of energy (heat, chemical, electrical, volume, etc.) is assigned an intensity. He also formulated two fundamental laws of energetics: the first expresses the conservation of energy in processes of transfer and conversion; the second explains, in terms of intensity equilibrium, what can start and stop the transfer and conversion of energy.
[3] Duit, R. (1981). "Understanding Energy as a Conserved Quantity‐‐Remarks on the Article by RU Sexl." European journal of science education 3(3): 291-301.
[4] Swackhamer, G. (2005). Cognitive resources for understanding energy.
[5] Coelho, R. L. (2014). "On the Concept of Energy: Eclecticism and Rationality." Science & Education 23(6): 1361-1380.
[6] Holt, N. R. (1970). "A note on Wilhelm Ostwald's energism." Isis 61(3): 386-389.
[7] Ashtekar, A. and V. Petkov (2014). Springer Handbook of Spacetime. Berlin, Heidelberg, Springer Berlin Heidelberg.
[8] Leegwater, A. (1986). "The development of Wilhelm Ostwald's chemical energetics." Centaurus 29(4): 314-337.
[9] Swackhamer, G. (2005). Cognitive resources for understanding energy.
[10] The two major scientific critics of Energism were Max Planck and Ernst Mach. The leading critic from the political-philosophical community was Vladimir Lenin (the founder of the Comintern), who criticized not only Ostwald but also Ernst Mach.
The original manuscript of this article was written to answer some questions asked by Zhihu users. The second half of the content was later added to the original manuscript, partly for sharing and partly to preserve some casually written views so that they are not lost; they may be useful in the future.
This problem is discussed under the topic of physics because thermodynamics describes collective behavior and is involved at all levels. As it currently stands, classical thermodynamics plus chemical thermodynamics appears fairly systematic, but it is not complete.
In comparison, thermodynamics is not as exquisite and rigorous as the other theoretical systems of physics. The physical pictures of many concepts, such as entropy, enthalpy and the other thermodynamic potentials, are still unclear. From the perspective of physical meaning, the mathematical transformations cannot plainly show what physical content is being transformed; the physical pictures of many derivatives, differentials and equations are unclear. For instance, those derivatives and differentials cannot even distinguish between energy transfer and energy conversion.
The way classical thermodynamics is conceived also differs from the other systems of physics: the "subdivision" of the internal energy is never carried out. We all know that the internal energy is the sum of the different forms of energy within a given system; since there are different forms, they should be classified, but instead they are left as an undifferentiated pile. The standard method is to see how much energy comes out through the heat-transfer path, how much can come out through the work path, and to calculate the total change. The only specially defined quantities are the parts that can be released in the form of work, called the thermodynamic potentials; when the path changes, one does not know whether they remain so.
Without even a basic classification of the internal energy, how can one discuss the conversion between its different forms?
For example, take a spontaneous chemical reaction: G decreases and S increases. From the perspective of energy conversion, however, the answer given by a professor of chemistry may not be as good as that of a liberal-arts student who studied a little in high school and has forgotten most of it. The latter will most likely say that chemical energy is converted into heat; the terms are not professional, but the meaning of the energy conversion is clear. The professor knows that G decreased, but not what the decreased G turns into; there is no complete narrative of the energy conversion.
In the entire thermodynamic theoretical system, one can hardly find the delicacy, rigor and clarity of physical picture found in the other theoretical systems of physics; instead, a philosophy of appeasement is everywhere.
Statistical physics cannot independently establish the equations relating the thermodynamic state functions; its theoretical system relies on thermodynamics and thereby inherits the problems of thermodynamic theory. Statistical physics also brings problems of its own. For instance, it cannot explain the following process: an ideal gas does work to compress a spring, and the internal energy of the gas is converted into the elastic potential energy of the spring. If such a simple, realistic problem cannot be explained, of what use are statistical ensembles, phase spaces, the Poincaré recurrence theorem and mathematical transformations?
The thermodynamic direction may currently be the last big chance in theoretical physics for a theory that can be verified or falsified, because it does not face the difficulty of other directions: one can write some dizzying mathematical equations and, a century from now, still not know whether they are right or wrong.
Thermodynamics is the grandest theoretical system in the whole of science: a scientific system of natural evolution. Although it is not yet complete and has not yet risen to the level of a fundamental theory, it provides a grand narrative of natural evolution, running through all levels.
Newton's laws, Maxwell's equations, the Schrödinger equation, Hamiltonian dynamics and so on correspond, for thermodynamics, to only one law: the first law. That law reveals the conservation of energy and its conversion relationships in collective behavior at all levels and in all processes. The direction of change described by the second law of thermodynamics has not yet been found in the dynamics of physics. Will it be? There are some clues, but nothing certain, because there is no corresponding theoretical framework.
The popular view among physicists on the conflict between the time-reversal symmetry of fundamental dynamical processes and the second law is wrong, for two reasons. 1. It confuses the relationships between the fundamental laws: the theme of those dynamical equations is the conservation of energy, which corresponds to the first law of thermodynamics, and the first law is itself symmetric under time reversal. 2. The symmetry of an equation and the symmetry of a phenomenon are two different concepts: the time-reversal symmetry of the energy-conservation equation only shows that energy is conserved in past, present and future; it does not say whether the phenomenon itself is symmetric under time reversal. Whether the fundamental dynamical processes themselves are reversible or irreversible cannot be decided by the equation of conservation of energy.
Have you noticed? In a department of chemistry, one has to face the problem of time-reversal asymmetry every day, and chemists too have a dynamics: the time-reversal-asymmetric dynamics of chemical kinetics.
On the problem of irreversibility, those seemingly delicate, rigorous, time-reversal-symmetric physical systems are completely powerless. Statistical physics is somewhat useful, for example in explaining diffusion or calculating collision numbers, but its effective range is limited by its theoretical postulates: the postulate of equal a priori probability means it is valid only for describing processes that tend toward an equal-probability distribution. Within the framework of statistical physics there is only one driving force of "change", the tendency toward an equal-probability distribution; the problem is that the driving forces of "change" in the real world around us are not limited to this one.
The second law of thermodynamics indicates that there are two different "dynamics": the time-reversal-symmetric physics shows us a world without evolution, while the time-reversal-asymmetric dynamics of chemistry shows us a different situation; whether the latter has universal significance at other levels is still unknown. From astrophysics to the macroscopic scale, and at least down to the level of elementary particles, all observed and confirmed results, without exception, strongly support the existence of time-reversal-asymmetric dynamics. Will it point to a final ending?
Let's take a look at a different thermodynamics.
The articles linked below present a new theoretical framework for thermodynamics, different from what you can find in textbooks and other articles, and provide a new starting point for the study of a series of major problems.
I am trying to learn modern physics and frequently encounter the phrase "gauge theory". I look up the definition and find that it is a theory in which the Lagrangian is invariant under a certain class of transformations. That sounds to me like using Noether's theorem to find constants of motion. I have learned enough math to know several ways of finding constants of motion. One way is Noether's theorem. Another is to find operators that commute with a Hamiltonian. A third is to derive implications directly from the given governing equations. Which, if any, of these methods is called "gauge theory"? And why?
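As a side note on the Noether's-theorem part of the question: for an ordinary (non-gauge) symmetry, the theorem can be checked numerically. A minimal sketch with illustrative numbers: the Lagrangian L = m|v|²/2 - k|q|²/2 is rotation-invariant, so Noether's theorem predicts conservation of the angular momentum Lz = m(x vy - y vx), which a symplectic integrator reproduces essentially exactly:

```python
import numpy as np

# Rotation-invariant 2D harmonic potential: Noether charge is Lz.
# All numerical parameters here are illustrative assumptions.
m, k, dt, steps = 1.0, 1.0, 1e-3, 20000

def accel(q):
    return -(k / m) * q          # central force F = -k q, parallel to q

q = np.array([1.0, 0.0])
v = np.array([0.3, 0.7])
Lz0 = m * (q[0] * v[1] - q[1] * v[0])

for _ in range(steps):            # velocity-Verlet (kick-drift-kick)
    v += 0.5 * dt * accel(q)
    q += dt * v
    v += 0.5 * dt * accel(q)

Lz1 = m * (q[0] * v[1] - q[1] * v[0])
print(Lz0, Lz1)   # identical up to round-off
```

Each kick changes v along q and each drift changes q along v, so q × v is preserved by every substep; this is the discrete shadow of the continuous Noether conservation law.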
By now, we've all realised how well GPT AI is able to find and replicate patterns in language and in 2D images. Its ability to find and interact with data patterns sometimes allows it to answer questions better than some students.
I expect that right now there will be teams training GPT installations with molecular structure and physical characteristics data to try to find candidates for new materials for high-temperature superconductors, or to find organic lattice structures with high hydrogen affinity to replace palladium for hydrogen storage cells. The financial and social rewards for success in these areas make it difficult to justify NOT trying out GPT AI.
But what about fundamental physics theory? Could AI find a solution to the current mismatch between Einstein's general relativity and quantum mechanics? Could it start to solve hard problems that have defeated mainstream academia for decades? If so, what happens to the institutions?
Further reading:
In my paper in Phys. Lett. A, vol. 68 (1978), pp. 409-411, I discussed a metric projectively related to the Friedmann/Robertson-Walker metric, with identical geodesics.
Questions: Are there other such solutions, for this case or for other conformally flat spaces? One such solution defines an infinite succession. Is there a computer program to find an infinite succession of (covariant) Einstein tensors? The change represents the eruption of matter-energy in an assumed spontaneous projective change on approach to a singularity. Physically, the change is caused by the intervention of gauge fields to avoid gravity-induced collapse.
The point is that adding to the Christoffel connection a term (identity tensor times a vector) leaves the system of geodesics unchanged and is in accord with the equivalence principle. In this way one can relate both gauge fields and quantum mechanics to general relativity. Details on request.
See also Matscience Report No. 92 (1978), paper 9, 14 pp. (www.imsc.res.in/Library): "Black Body Structure of a Black Hole"; "Lie Structure of Quasiconformal Maps in R^N"; and "Physics of String Theory in Quantum Field Theory, QM, & Optics", ed. V. V. Dodonov and V. I. Man'ko, Moscow (1990), Nova Publishers (NY), vol. 187 of the Proceedings of the Lebedev Physical Institute, Academy of Sciences of the USSR, pp. 113-116.
From Newton's Metaphysics to Einstein's Theology!
The crisis in modern theoretical physics and cosmology has its root in the use of these disciplines, along with theology, as ruling-class tools since medieval Europe. The Copernican revolution, overthrowing the geocentric cosmology of theology, led to unprecedented social and scientific developments in history. But Isaac Newton's mathematical-idealist and one-sided theory of universal gravitational attraction in essence restored the idealist geocentric cosmology, undermining the Copernican revolution. Albert Einstein's theories of relativity, proposed from the turn of the 20th century, reinforced Newtonian mathematical idealism in modern theoretical physics and cosmology, exacerbating the crisis and hampering further progress. Moreover, the recognition of the quantum world, a fundamentally unintuitive new realm of objective reality in conflict with the prevailing causality-based epistemology, requires a rethink of the philosophical foundations of theoretical physics and cosmology in particular, and of natural science in general.
Dear All,
I am an MSc student in Theoretical Physics, finishing my master's thesis on algebraic geometry over Lie algebras at Tabriz University. I am searching for a PhD position.
If anyone is interested in a PhD student, please feel free to contact me.
Thanks in advance,
Sona Samaei
I searched yesterday and, apart from hypercubes and the like, could not find any references to mathematical modeling using four dimensions other than my articles on arXiv and RG. That may explain why the role of 4/3 scaling has gone unnoticed in physics.
I think a fourth dimension does play a role in modeling:
3/4 metabolic scaling.
Peto’s paradox
Brain weight scaling
4/3 fractal envelope of Brownian motion.
Clausius's 1860 article on the mean path lengths of gas molecules.
Waterston on the energy needed to maintain a levitating elastic plane in a gravitational field (Roy. Soc. 1892 publication of an 1845 submission).
Dark energy.
Are there any others?
Several articles on RG discuss 4/3 scaling, which involves the 4th dimension, including:
Preprint Dark energy modeled by scaling
Preprint Flow as a fourth dimension
and several other RG articles back to .
Space and time being so fundamental, shouldn’t four dimensional analogues all be in plain sight?
Where are they? Has physics overlooked or missed them?
This is a follow-up to the question:
And see:
We assume that the statistical weight SW of free nodes in a geometric shape is of extreme importance in mathematics and theoretical physics, but it is still absent.
However, SW, which is a dimensionless mathematical/physical quantity attached to the importance of the position of a node, can be well defined via the normal/Gaussian distribution curve or, equivalently, via the B-matrix transition chains.
Both approaches give exactly the same result, which shows that SW is uniquely defined.
Mathematics and theoretical physics are currently searching for answers to this particular question and to two other related questions, which together make up three of the most persistent questions:
i- Do probabilities and statistics belong to physics or mathematics?
ii- A related question: does nature operate in 3D geometry plus time as an external controller or, more specifically, does it operate in the inseparable 4D unit space into which time is woven?
iii-Lagrange multipliers: Is it just a classic mathematical trick that we can do without?
We assume the answers to these questions are all interconnected, but how?
Not so simple.
First, extraordinary claims attract extraordinary skepticism and scrutiny. The least oversight can sink a theory.
Second, for all the reasons set out in Thomas Kuhn’s The Structure of Scientific Revolutions.
Textbooks, courses and curricula, including introductory courses, might require revision. Experts whose research builds on theories of dark energy that would be superseded would see their fields impaired.
Third, billions of dollars of current and future research have been allocated to dark energy, if you include proposed space missions. Jobs and careers would be impaired by a valid alternative theory.
Or not.
What do you think? What other factors would influence rejection or acceptance?
Similarly, are there books and articles on examples of generalization in physics?
In the Copenhagen interpretation of QM, the properties of a system become real only when an observer makes appropriate measurements and observations. Does this mean that the laws of QM were inconsequential about two billion years ago, when nobody was there to make observations? This problem does not arise if we adopt the statistical interpretation. However, one then has to accept that QM does not provide a theory of individual events. But what is the need for one, if the results of individual events are random owing to the microscopic nature of the systems?
Hello.
I released my thesis in Theoretical Physics two years ago, with the last version seven months ago. Almost 15,000 unique people have downloaded it, 34,000 downloads in total, more than the entire theoretical physics community. I wonder how it is possible that I have not received a single mention in another peer-reviewed paper, despite having solved, without doubt, one of the most burning questions in the field by providing a numerical proof.
What kind of people do theoretical physics? Either they don't know, which is not likely at this point (March 2023) considering the total download volume, or they do know that the answer was found and are keeping quiet, which is not likely either. What should one do in such a situation?
What is missing is an exact definition of probability that would contain time as a dimensionless quantity woven into a 3D geometric physical space.
It should be mentioned that the current definition of probability as the relative frequency of successful trials is primitive and contains no time.
On the other hand, the quantum mechanical definition of the probability density,
p(r,t) = ψ*(r,t) ψ(r,t),
which introduces time via the system's destination time rather than its start time, is of limited usefulness and leads to unnecessary complications.
It's just a sarcastic definition.
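Whatever its shortcomings, the textbook definition p = ψ*ψ does have one easily checkable property: unitary time evolution keeps the total probability equal to 1 at every time. A minimal numerical sketch for a freely spreading Gaussian wavepacket (all parameters are illustrative, natural units):

```python
import numpy as np

# |psi(x,t)|^2 for a normalized free Gaussian wavepacket, integrated over x,
# stays equal to 1 at any time t: Schrodinger evolution is unitary.
hbar, m, sigma0, t = 1.0, 1.0, 1.0, 3.0
x = np.linspace(-50.0, 50.0, 200001)
dx = x[1] - x[0]

# width of the packet at time t (standard free-particle spreading formula)
sigma_t = sigma0 * np.sqrt(1.0 + (hbar * t / (m * sigma0**2))**2)
p = np.exp(-x**2 / (2.0 * sigma_t**2)) / (np.sqrt(2.0 * np.pi) * sigma_t)

total = p.sum() * dx     # Riemann-sum approximation of the integral
print(total)             # ~1.0 for any choice of t
```

The same check with any other t gives the same total, which is exactly the "relative frequency" content of the definition, reproduced at every instant.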
It should be mentioned that a preliminary definition of the probability function of space and time proposed in the Cairo technique led to revolutionary solutions of time-dependent partial differential equations, integration and differentiation, special functions such as the Gamma function, etc. without the use of mathematics.
The topic considered here is the Klein-Gordon equation governing some scalar field amplitude, with the field amplitude defined by the property of being a solution of this equation. The original Klein-Gordon equation does not contain any gauge potentials, but a modified version of the equation (also called the Klein-Gordon equation in some books, for reasons that I do not understand) does contain a gauge potential. This gauge potential is often represented in the literature by the symbol Ai (a four-component vector). Textbooks show that if a suitable transformation is applied to the field amplitude to produce a transformed field amplitude, and another suitable transformation is applied to the gauge potential to produce a transformed gauge potential, the Lagrangian is the same function of the transformed quantities as it is of the original quantities. With these transformations collectively called a gauge transformation, we say that the Lagrangian is invariant under a gauge transformation. This statement has the appearance of being justification for the use of Noether's theorem to derive a conservation law. However, it seems to me that this appearance is an illusion. If the field amplitude and the gauge potential are both transformed, then they are both treated the same way as each other in Noether's theorem. In particular, the theorem requires both to be solutions of their respective Lagrange equations. The Lagrange equation for the field amplitude is the Klein-Gordon equation (the version that includes the gauge potential). The textbook that I am studying does not discuss this, but I worked out the Lagrange equations for the gauge potential and determined that the solution is not in general zero (zero is needed to make the Klein-Gordon equation with gauge potential reduce to the original equation). The field amplitude is required in textbooks to be a solution of its Lagrange equation (the Klein-Gordon equation).
However, the textbook that I am studying has not explained to me that the gauge potential is required to be a solution of its Lagrange equations. If this requirement is not imposed, I don’t see how any conclusions can be reached via Noether’s theorem. Is there a way to justify the use of Noether’s theorem without requiring the gauge potential to satisfy its Lagrange equation? Or, is the gauge potential required to satisfy that equation without my textbook telling me about that?
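A point that may help untangle this (standard textbook material, sketched here without derivation): the gauged Klein-Gordon Lagrangian and its local U(1) transformation are

```latex
\mathcal{L} = (D_\mu\phi)^{*}(D^\mu\phi) - m^{2}\phi^{*}\phi ,
\qquad D_\mu = \partial_\mu + i q A_\mu .
% Local U(1) gauge transformation:
\phi \to e^{-iq\alpha(x)}\,\phi ,
\qquad A_\mu \to A_\mu + \partial_\mu\alpha(x) ,
% under which the covariant derivative transforms covariantly,
D_\mu\phi \to e^{-iq\alpha(x)}\, D_\mu\phi ,
% so every term of the Lagrangian, and hence the Lagrangian itself, is unchanged.
```

For the conserved-current question: Noether's theorem in its usual form needs only the global symmetry (α constant, A_μ untouched), applied to the matter field on shell; it yields a current of the form j^μ ∝ i q [φ*(D^μφ) - ((D^μφ))* φ] (up to sign and normalization conventions) without requiring A_μ to solve any equation. If instead A_μ is treated as dynamical, its kinetic term -¼ F_{μν}F^{μν} must be added to the Lagrangian, and then A_μ does have its own Lagrange equations: the Maxwell equations with j^μ as source.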
Einstein was a revolutionary scientist, possibly the first in the several hundred years since Sir Isaac Newton discovered, and expressed in mathematical terms, the gravitational laws of motion.
Today's theoretical physics research has stalled. Most leading theorists lack the combination of traits that made these scientists great:
** ability to express new effective representations of physical systems
** physical intuition-based postulation of principles
** deep knowledge of issues without distortions
** imagination and conceptual power
sorry, this got a little bit long....
Initiating a Concise Discourse about the "Three Podkletnov Experiments"
Herein I want to reinitialize the scientific discourse.
The physicist Evgeny Podkletnov
- measured, around 1997 (!), in a special experiment,
- an unexpected result in the form of an unexpected upward-directed force
- with strange properties.
The experiment has never been reproduced,
- which may be owed to the non-standard technology needed to build parts of the experimental equipment.
In the 2020s the necessary technical equipment should be producible.
But this experiment seems to be completely forgotten in the consciousness of the scientific community.
# Useful Physical Knowledge for this discussion #
- knowledge in Experimental-Physics,
- knowledge in Theoretical Physics like: QM, Spin-Physics, Bloch-Theorem,
additionally
- Electrodynamics
- basic knowledge about General Theory of Relativity
=>--- see below for the target of this discussion ---
# Historical: negative reception in the scientific community #
In the eyes of the writer of these lines, the measured effect led to completely excessive pressure on the scientists involved. Mr. Podkletnov was not allowed to finish his experiments, and an attempt to publish the document failed owing to what I consider an unjustified overreaction of the "scientific world". Other physicists involved suffered damage to their scientific reputations.
In the end, the measurement results landed in the corner of "fringe physics".
The mistake that was made
was somehow strategic.
The authors bound the experimental measurement to a theoretical explanation
that was out of bounds for serious mainstream science.
The given hypothesis was:
"Shielding of gravity by the rotating superconducting disc (antigravity)"
-------------------------- -------------------------- --------------------------
My approach is the following:
Why can an experimental physicist not measure something unexpected,
something against mainstream knowledge, without being defamed?
From a historical point of view,
the theory of energy conservation was delayed by two decades
before it made its way into mainstream science.
The scientific community should not make the same mistake again,
even if the probability may be very small.
-------------------------- -------------------------- --------------------------
====== ====== ====== ====== ====== ====== ====== ====== ======
Description of "The 1st Podkletnov Experiment"
The unique experimental setup was based on
- a levitating, fast-rotating, multilayered, superconducting ceramic disc (YBaCuO).
- Special preparation of the (for the 1990s) large YBaCuO disc was necessary.
- The method of making the disc levitate and rotate is tricky.
Mr. Podkletnov reported the observation
- of a small, unexpected force directed toward the ceiling...
- ...within a cylindrical volume above the disc.
- The conducted measurements showed a force of between 1% and 5% of the gravitational force.
- Smoke at the rim of the cylindrical volume also behaved strangely and unexpectedly.
Seen from a science-historical view, the "only mistake" made by the scientist Podkletnov and colleagues
was to hand out an explanation directly together with the experimental results.
It was interpreted as gravitational shielding.
- => The results bred silly rumours
- about UFO physics and about applications in space technology.
====== ====== ====== ====== ====== ====== ====== ====== ======
In total, three experiments were conducted.
In the "3rd Podkletnov experiment" the layout was roughly horizontal.
Mr. Evgeny Podkletnov constructed and built a spark-generating apparatus working in a chamber at slightly reduced pressure. The observed, obviously unusual, spark stream between a superconductor and the opposite plate pointed in the horizontal direction.
A short horizontal impulse, correlated with the spark direction and timing, was observed.
The impulse was described as large enough to push a standing booklet off the table. (!!!) The apparatus was labelled an Impulse Gravity Generator (IGG) by its creator.
# Target of this discussion group #
These experiments started more than two decades ago and have been forgotten.
Herein, the writer of these lines
tries to restart a sober SCIENTIFIC discussion of the PHYSICS of the observation.
## Long-term GOALs ##
- to start a CONCISE and scientific discussion about the PHYSICS,
- to find possibilities to reproduce the three Podkletnov experiments,
- to develop a theoretical approach,
- to suggest new experimental setups.
A side task
- will be to simply collect any hypothetical reasons for the physical background and to categorize them by plausibility and experimental testability.
## My prework ##
I start this discussion group at www.researchgate.net,
as I have
- collected almost all available facts about the experiments.
Additionally, I have an initial idea/clue
- as to what _could_ have happened physically (which may turn out to be nonsense :-).
- The clue: it is NOT about GRAVITY PHYSICS.
## Rules in this Discussion ##
- Respectful, friendly and adequate communication is expected.
Not accepted:
- claims that this shall or must not be a topic of ANY discussion,
- completely nuts contributions about applications of the effect (e.g. antigravity spaceship drives).
---
Kind regards,
Dipl.-Phys. Frank Haferkorn
Our answer is YES. A new question (at https://www.researchgate.net/post/If_RQ_what_are_the_consequences/1) has been answered affirmatively, confirming the YES answer to this question, with wider evidence in more than 12 areas.
This question continues the same question from three years ago, with the same name, considering newly published evidence and results. The previous text of the question may be useful and is available here:
We can now provably include DDF [1] -- the differentiation of discontinuous functions. This is not shaky; it advances knowledge. Niels Bohr's quantum principle in physics, "all states at once", meets mathematics and quantum computing.
Without infinitesimals or epsilon-deltas, DDF is possible, allowing quantum computing [1] between discrete states, and a faster FFT [2]. The Problem of Closure was made clear in [1].
Although Weyl's training was in these mythical aspects, the infinitesimal transformation and Lie algebras [4], he saw an application of groups in the many-electron atom, which must have a finite number of equations. The discrete Weyl-Heisenberg group comes from these discrete observations and does not use infinitesimal transformations at all, having finite-dimensional representations. Similarly, this is like someone trained in traditional infinitesimal calculus starting to use rational numbers in calculus, with DDF [1]. The similar previous training applies in both fields, from a "continuous" field to a discrete, quantum field. In that sense, R~Q*; the results are the same formulas -- but now absolutely accurate.
New results have been made public [1-3] since this question was first asked three years ago, confirming the advantages of the YES answer. All computation is revealed to be exact in modular arithmetic; there is NO concept of approximation, no "environmental noise" when using it.
As a consequence of the facts in [1], no one can formalize the field of non-standard analysis based on the use of infinitesimals, or on Cauchy epsilon-deltas, in a consistent and complete way against [1], although such claims may have been made and much chalk spilled.
Some branches of mathematics will have to change. New results are promised in quantum mechanics and quantum computing.
This question is closed, affirming the YES answer.
REFERENCES
[2]
Preprint FT = FFT
[3]
Preprint The quantum set Q*
The introduction to the discussion is the 16:58-minute video by Sabine Hossenfelder. In my opinion, Sabine Hossenfelder is one of the physicists who show outstanding insight into modern theoretical physics. Her videos on Youtube.com are well argued, well founded and understandable for every theoretical physicist. So I hope that every RG member who wants to participate in the discussion about the Big-Bang hypothesis will try to communicate at the same level as Sabine Hossenfelder (I cross my fingers).
With kind regards, Sydney
Hello dear researchers.
I would like to know how to determine the number of bands of a compound.
Thanks in advance.
Consider the quantum field theory (QFT) operator (an operator for each space-time point) that the field amplitude becomes when making the transition from classical field quantities to QFT operators. We will call this the field-amplitude operator. The type of field considered is one in which the classical field amplitude evaluated at a given space-time point is a complex number instead of a real number. In the QFT description, the field amplitude is not an observable and the field-amplitude operator is not Hermitian. Can we still say that an eigenstate of this operator has a definite value of the field amplitude (equal to the eigenvalue) even when the field amplitude is not an observable and the eigenvalue is not a real number?
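One relevant, well-established fact: non-Hermitian operators can have perfectly good eigenstates with complex eigenvalues. The standard example is the annihilation operator, whose eigenstates are the coherent states. A minimal numerical sketch in a truncated Fock basis (the 30-level truncation and the value of alpha are illustrative assumptions):

```python
import numpy as np
from math import factorial

# The annihilation operator a is not Hermitian, yet it has eigenstates:
# coherent states |alpha> with complex eigenvalue alpha.
N = 30                                        # truncation (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator

alpha = 0.5 + 0.3j
n = np.arange(N)
coh = np.exp(-abs(alpha)**2 / 2) * alpha**n \
      / np.sqrt([float(factorial(k)) for k in n])

# a|alpha> = alpha|alpha>, up to (tiny) truncation error at the top level
err = np.linalg.norm(a @ coh - alpha * coh)
print(err)
```

So an eigenstate of a non-Hermitian operator can indeed carry a definite complex eigenvalue; what it lacks is the measurement interpretation that Hermiticity would provide.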
In the elementary quantum mechanics (QM) of a single particle responding to a given environment, the state of the particle can be specified by specifying a set of commuting (i.e., simultaneously knowable) observables. Examples of observables include energy and angular momentum. Although not simultaneously knowable, other examples include the three rectangular spatial coordinates and the three components of linear momentum. Each observable in QM is a real number and is an eigenvalue of some Hermitian operator. Now consider quantum field theory (QFT) which considers a field instead of a particle. First consider the classical (before introducing QFT operators) description of the state of the field at a selected point in time. This is the field amplitude at every spatial location at the selected time point. For at least some kinds of fields, the field amplitude at a given space-time point is a complex number. Now consider the QFT corresponding to the selected classical example of a field. Is the field amplitude an observable even when it is not a real number? It is not an eigenvalue of any Hermitian operator when not real. So if the field amplitude is an observable, there is no Hermitian operator associated with this observable. My guess (and my question is whether this guess is correct) is that the real and imaginary parts of the field amplitude are simultaneously knowable observables, with a Hermitian operator (assigned to each space-time point) for each. This would at least explain how the field amplitude can be an observable but not real and not have any associated Hermitian operator. Is my guess correct?
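The guess above can be probed concretely. Any operator A splits as A = X + iY with X = (A + A†)/2 and Y = (A - A†)/(2i) both Hermitian, so Hermitian "real part" and "imaginary part" operators always exist; for a field mode these are the quadratures of the annihilation operator. Note, though, that they do not commute, so they are not simultaneously knowable. A sketch in a truncated Fock basis (the truncation size is an illustrative assumption):

```python
import numpy as np

# Split a non-Hermitian operator into Hermitian "real" and "imaginary" parts:
# A = X + iY,  X = (A + A^dag)/2,  Y = (A - A^dag)/(2i).
# Example: annihilation operator a in a 6-level truncated Fock basis.
N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)

X = (a + a.conj().T) / 2.0
Y = (a - a.conj().T) / (2j)

herm_X = np.allclose(X, X.conj().T)   # True: X is Hermitian
herm_Y = np.allclose(Y, Y.conj().T)   # True: Y is Hermitian
print(herm_X, herm_Y)

# But X and Y do not commute: [X, Y] = i/2 (away from the truncation edge),
# so the two "parts" are not simultaneously measurable.
comm = X @ Y - Y @ X
print(comm[0, 0])  # ~0.5j
```

So the guess is half right: Hermitian operators for the real and imaginary parts exist, but since [X, Y] = i/2 they are conjugate observables rather than simultaneously knowable ones.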
Dear Sirs,
Below I give some very dubious speculations and cite recent theoretical articles on the question. Maybe they will promote some discussion.
1.) One can suppose that every part of our reality should be explained by some physical laws. In particular, general relativity showed that even space and time are curved and governed by physical laws. But the physical laws themselves are also a part of reality. Of course, one can say that every physical theory can only approximately describe reality. But let me suppose that there are physical laws in nature which describe the universe with zero error. Then the question arises: are the physical laws (as information) some special kind of matter described by some more general laws? Can a physical law, as information, transform into energy and mass?
2.) Besides the above logical approach, one can arrive at the same question in another way. Let us consider the transition from the macroscopic world to the atomic scale. It is well known that in quantum mechanics some physical information, or some physical laws, disappear. For example, a free particle has a momentum but not a position. The magnetic moment of a nucleus has a projection on the external magnetic field direction, but the transverse projection does not exist. So we cannot say that the nuclear magnetic moment precesses around the external magnetic field like a compass arrow in the Earth's magnetic field. A similar consideration can be made for the spin of an elementary particle.
One can hypothesize that if information is equivalent to some very small mass or energy (e.g., as shown in the next item), then it may be that some information or physical laws are lost, for example for an electron, which has an extremely small mass. This conjecture agrees with the fact that objects with mass much greater than the proton's are described by classical Newtonian physics.
But one can raise an objection to the above view: a photon has no rest mass, and the neutrino rest mass, for example, is extremely small. Despite this, they have spin and momentum just as an electron does, and this spin and momentum information is not lost. Moreover, the photon energy for long EM waves is extremely low, much less than 1 eV, while the electron rest energy is about 0.5 MeV. These facts contradict the conjecture that information transforms into energy or mass.
But there is possibly a solution to the above problem. A photon moves at the speed of light (the neutrino speed is very close to it), which is why the physical information cannot be detached and carried away from the photon (information propagates at most at the speed of light).
3.) Searching the internet I have found recent articles by Melvin M. Vopson
which propose a mass-energy-information equivalence principle and its experimental verification. As far as I know, this experimental verification has not yet been done.
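For orientation, Vopson's proposed per-bit mass follows from combining the Landauer erasure energy with E = mc². A minimal sketch of the arithmetic (the temperature of 300 K is my arbitrary choice for illustration):

```python
# Vopson's mass-energy-information estimate: erasing one bit costs at least
# the Landauer energy k_B * T * ln(2); dividing by c^2 gives the mass he
# associates with one bit of stored information.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
c = 299_792_458.0    # speed of light, m/s (exact SI value)
T = 300.0            # room temperature, K (arbitrary choice)

E_bit = k_B * T * math.log(2)   # Landauer limit, J
m_bit = E_bit / c**2            # mass of one bit, kg

print(f"E_bit = {E_bit:.3e} J")   # ~2.87e-21 J
print(f"m_bit = {m_bit:.3e} kg")  # ~3.19e-38 kg
```

The ~10^-38 kg scale per bit makes clear why a direct experimental verification is so hard.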
I would be grateful to hear your view on this subject.
How can we calculate the number of dimensions in a discrete space if we only have a complete scheme of all its points and possible transitions between them (or data about the adjacency of points)? Such a scheme can be very confusing and far from the clear two- or three-dimensional space we know. We can observe it, but it is stochastic and there are no regularities, fractals or the like in its organization. We only have access to an array of points and transitions between them.
Such computations can be resource-intensive, so I am especially looking for algorithms that can quickly approximate the dimensionality of the space based on the available data about the points of the space and their adjacencies.
I would be glad if you could help me navigate in dimensions of spaces in my computer model :-)
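One cheap estimator, offered as a sketch rather than the definitive method, is the "ball-growth" dimension: in a d-dimensional space the number of nodes N(r) within graph distance r of a point grows roughly like r^d, so the slope of log N(r) versus log r approximates d. It needs only the adjacency data and one BFS per sample point. A self-contained check on a 2D grid graph (the function names are my own):

```python
from collections import deque
import math

def grid_graph(n):
    """Adjacency dict for an n x n square lattice (a known 2D space)."""
    adj = {}
    for i in range(n):
        for j in range(n):
            adj[(i, j)] = [(i + di, j + dj)
                           for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                           if 0 <= i + di < n and 0 <= j + dj < n]
    return adj

def ball_sizes(adj, source, r_max):
    """BFS from source: N(r) = number of nodes within graph distance r."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if dist[u] >= r_max:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return [sum(1 for x in dist.values() if x <= r) for r in range(1, r_max + 1)]

def estimate_dimension(adj, source, r_min=5, r_max=20):
    """Least-squares slope of log N(r) vs log r over r_min..r_max.

    Small radii are skipped because lattice effects bias the slope there.
    """
    N = ball_sizes(adj, source, r_max)
    pts = [(math.log(r), math.log(N[r - 1])) for r in range(r_min, r_max + 1)]
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    return (sum((x - mx) * (y - my) for x, y in pts)
            / sum((x - mx) ** 2 for x, _ in pts))

adj = grid_graph(81)
d = estimate_dimension(adj, (40, 40))   # center node; boundary never reached
print(f"estimated dimension: {d:.2f}")  # close to 2 for a 2D grid
```

For a stochastic graph you would average the estimate over many randomly chosen source nodes; the BFS cost per source is O(nodes within r_max), which stays cheap even for large graphs.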
Consider the two propositions of the Kalam cosmological argument:
1. Everything that begins to exist has a cause.
2. The universe began to exist.
Both are based on assuming full knowledge of whatever exists in the world, which is obviously not entirely true. Even Big Bang cosmology relies on a primordial seed whose origin and characteristics science knows nothing about.
The attached article proposes that such deductive arguments should not be allowed in philosophy and science, as they are the tell-tale sign that humans wrongly presuppose omniscience.
Your comments are much appreciated.
Mainstream theoretical physics applies the stationary action principle to derive Lagrangian equations. This cannot explain the origin of electric charges, nor the short list of elementary particle types described in the Standard Model of experimental particle physics. See
Preprint The setbacks of theoretical physics
When studying statistical mechanics for the first time (about 5 decades ago), I learned an interesting postulate of equilibrium statistical mechanics: "The probability of a system being in a given state is the same for all states having the same energy." But I ask: why energy instead of some other quantity? When I was learning this topic, I was under the impression that the postulates of equilibrium statistical mechanics should be derivable from more fundamental laws of physics (which I had supposedly already learned before studying this topic), but the problem is that nobody had figured out how to do that derivation. If somebody figures out how to derive the postulates from more fundamental laws, we will have an answer to the question of why energy instead of some other quantity. Until then, we have to accept the postulate as a postulate instead of a derived conclusion. The question I am asking 5 decades later is: has somebody figured it out yet? I'm not an expert on statistical mechanics, so I hope the answers can be simple enough to be understood by people who are not experts.
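To make the postulate itself concrete (this illustrates what it asserts, not why it holds), consider N independent spins each contributing ±1 to the energy. The microcanonical ensemble assigns the same probability 1/Ω(E) to every configuration in the shell of energy E:

```python
# Toy illustration of the equal-a-priori-probability postulate: N two-level
# spins in a field (units where the field strength is 1). The postulate says
# every microstate in the energy shell E gets probability 1/Omega(E) --
# nothing here derives that; it only states it for a concrete system.
from itertools import product
from collections import Counter

N = 10                                   # number of spins (toy choice)
omega = Counter()                        # Omega(E): microstate count at energy E
for spins in product((-1, +1), repeat=N):
    omega[-sum(spins)] += 1              # E = -(sum of spin projections)

for E in sorted(omega):
    p = 1.0 / omega[E]                   # equal probability within the shell
    print(f"E = {E:+3d}: Omega = {omega[E]:4d}, p(each microstate) = {p:.5f}")
```

The shell counts Ω(E) are binomial coefficients here, and all the thermodynamics of the toy model follows from S(E) = kB ln Ω(E); the question of why the shells are indexed by energy rather than some other conserved quantity is exactly the one left open.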
The physical constants (G, h, c, e, me, kB, ...) can be considered fundamental only if the units they are measured in (kg, m, s, ...) are independent. However, there are anomalies occurring in certain combinations of these constants which suggest a mathematical (unit-number) relationship. By assigning a unit number to each unit, we can define a relationship between the units (kg -> 15, m -> -13, s -> -30, A -> 3, K -> 20).
For the dimensioned physical constants to be fundamental, as noted above, the units must be independent of each other; there cannot be a unit-number relationship. These anomalies, however, question that assumption: in the table below, (G, h, c, e, me, kB, ...) are reduced to the geometry of 2 dimensionless constants and 2 unit-dependent dimensioned constants linked by this unit-number relation. Every combination predicted by the model returns an answer consistent with CODATA precision. Statistically, therefore, can these anomalies be dismissed as coincidence?
For convenience, the article has been transcribed to this wiki site.
The table lists the physical constants according to their unit number θ, 2 dimensionless constants (the fine structure constant alpha and Omega), and 2 unit dependent scalars v, r (tuned to the SI units). This illustrates how we may construct physical entities from mathematical structures.
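The bookkeeping described above is easy to reproduce: θ of a constant is just the dot product of the quoted unit numbers with the constant's SI dimension exponents. The mapping below is taken directly from the question; no claim is made here about whether the resulting pattern is significant:

```python
# Unit-number arithmetic as described in the question:
# theta(constant) = sum over SI units of (unit number) * (dimension exponent).
UNIT_NUMBER = {"kg": 15, "m": -13, "s": -30, "A": 3, "K": 20}

def theta(dimensions):
    """dimensions: dict unit -> exponent, e.g. c = m s^-1 -> {'m': 1, 's': -1}."""
    return sum(UNIT_NUMBER[u] * p for u, p in dimensions.items())

constants = {
    "c":  {"m": 1, "s": -1},                    # speed of light
    "h":  {"kg": 1, "m": 2, "s": -1},           # Planck constant (J s)
    "G":  {"m": 3, "kg": -1, "s": -2},          # gravitational constant
    "e":  {"A": 1, "s": 1},                     # elementary charge
    "kB": {"kg": 1, "m": 2, "s": -2, "K": -1},  # Boltzmann constant
    "me": {"kg": 1},                            # electron mass
}

for name, dims in constants.items():
    print(f"theta({name}) = {theta(dims)}")
```

With this mapping a combination of constants is "unit-number free" exactly when the θ values of its factors cancel, which is the kind of coincidence the question asks about.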
...
Some general background to the physical constants.

I don't even know what the mathematical definition of "equal footing" is, but I do understand the meaning of the postulate (which I am not complaining about) that the laws of physics are expressible in a way that can be used by all observers. However, given this postulate that I accept until convinced otherwise, this still does not imply any equivalence between time and space. They have some similarities in the Lorentz transformation in special relativity but they also have profound differences, including:
1. The most obvious difference is human perception that perceives time differently from space.
2. On a more mathematical level, the metric tensor has one eigenvalue of one sign for the time coordinate and three eigenvalues of the opposite sign for the spatial coordinates.
3. Still using math, the time coordinate can always be used as the parameter in the parametric equations representing a particle trajectory, while other coordinates can serve this purpose only for special cases.
4. Because of the usefulness of time as a parameter (see item 3), Hamilton's equations give time a special role.
5. Constants of motion in any physics topic refer to quantities that do not change with time.
6. Getting more mathematical, but really referring to Item 5 above, the topic of field theory identifies field invariant quantities as spatial volume integrals that are constant in time.
So why are we told to treat time and space in the same way?
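Items 1 and 2 can be put side by side in a few lines of linear algebra: a Lorentz boost genuinely mixes t and x (the sense in which they are on an "equal footing"), yet the metric signature still singles out the time direction:

```python
# A boost preserves the Minkowski metric (the "equal footing" half of the
# story), while the metric's eigenvalue signs (-,+,+,+) mark time as
# different from space (the asymmetry half). Numbers are illustrative.
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # Minkowski metric, signature (-,+,+,+)
v = 0.6                                   # boost speed in units of c (arbitrary)
g = 1.0 / np.sqrt(1.0 - v**2)             # Lorentz factor
L = np.array([[g, -g * v, 0, 0],
              [-g * v, g, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])              # boost along x, mixing t and x

assert np.allclose(L.T @ eta @ L, eta)    # the boost preserves the metric
signs = np.sign(np.linalg.eigvalsh(eta))
print("metric eigenvalue signs:", signs)  # one negative (time), three positive
```

So "treat time and space in the same way" is only a statement about how they transform together under the Lorentz group, not a claim that the signature (or the other items listed) erases the distinction.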
This question is for those who have the seventh printing of Goldstein's "Classical Mechanics", so I don't have to write any equations here. The Lagrangian for electromagnetic fields (expressed in terms of scalar and vector potentials) for a given charge density and current density that creates the fields is the spatial volume integral of the Lagrangian density listed in Goldstein's book as Eq. (11-65) (page 366 in my edition of the book). Goldstein then considers the case (page 369 in my edition of the book) in which the charges and currents are carried by point charges. The charge density (for example) is taken to be a Dirac delta function of the spatial coordinates. This is utilized in the evaluation of one of the integrals used to construct the Lagrangian. This integral is the spatial volume integral of charge density multiplied by the scalar potential. What is giving me trouble is as follows.
In the discussion below, a "particle" refers to an object that is small in some sense but has a greater-than-zero size. It becomes a point as a limiting case as the size shrinks to zero. In order for the charge density of a particle, regardless of how small the particle is, to be represented by a delta function in the volume integral of charge density multiplied by potential, it is necessary for the potential to be nearly constant over distances equal to the particle size. This is true (when the particle is sufficiently small) for external potentials evaluated at the location of the particle of interest, where the external potential as seen by the particle of interest is defined to be the potential created by all particles except the particle of interest. However, total potential, which includes the potential created by the particle of interest, is not slowly varying over the dimensions of the particle of interest regardless of how small the particle is. The charge density cannot be represented by a delta function in the integral of charge density times potential, when the potential is total potential, regardless of how small the particle is. If we imagine the particles to be charged marbles (greater than zero size and having finite charge densities) the potential that should be multiplying the charge density in the integral is total potential. As the marble size shrinks to zero the potential is still total potential and the marble charge density cannot be represented by a delta function. Yet textbooks do use this representation, as if the potential is external potential instead of total potential. How do we justify replacing total potential with external potential in this integral?
I won't be surprised if the answers get into the issues of self forces (the forces producing the recoil of a particle from its own emitted electromagnetic radiation). I am happy with using the simple textbook approach and ignoring self forces if some justification can be given for replacing total potential with external potential. But without that justification being given, I don't see how the textbooks reach the conclusions they reach with or without self forces being ignored.
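The reason the replacement is delicate can be made quantitative with a standard result: for a uniformly charged sphere of radius R and charge q, the electrostatic self-energy is U = 3q²/(20πε₀R), so the integral of ρ times the total potential picks up a contribution that diverges like 1/R as the marble shrinks, while the external-potential term stays finite. A quick numerical illustration (the radii are arbitrary choices):

```python
# Self-energy of a uniformly charged sphere, U = 3 q^2 / (20 pi eps0 R),
# i.e. (3/5) * q^2 / (4 pi eps0 R) -- a standard electrostatics result.
# It diverges as R -> 0, which is why the delta-function replacement is only
# legitimate for the slowly varying EXTERNAL potential, not the total one.
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
q = 1.602176634e-19       # one elementary charge, C (exact SI value)

def self_energy(R):
    """Electrostatic self-energy of a uniformly charged sphere of radius R, J."""
    return 3 * q**2 / (20 * math.pi * eps0 * R)

for R in (1e-10, 1e-12, 1e-15):
    print(f"R = {R:.0e} m -> U_self = {self_energy(R):.3e} J")
```

Since U_self grows without bound as R shrinks, keeping the total potential in the integrand would make the point-charge Lagrangian infinite; the textbook step implicitly discards this (particle-position-independent, hence dynamically inert apart from self-force issues) self-interaction term and keeps only the external potential.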
Dear RG community members, in this thread I will discuss the similarities and differences between two marvelous superconductors:
One is the liquid isotope helium-3 (3He), which has a superfluid transition temperature of Tc ~ 2.4 mK, very close to absolute zero; it has several phases that can be described in a pressure (P) vs temperature (T) phase diagram.
Superfluid 3He was discovered by professors Lee, Osheroff, and Richardson, and it was the starting point of remarkable investigations into unconventional superconductors, which have other symmetries broken in addition to the global phase symmetry.
The other is the crystal strontium ruthenate (Sr2RuO4), a metallic oxide with a superconducting transition temperature of Tc ~ 1.5 K, where, from my particular point of view, nonmagnetic impurities play a crucial role in building up the phase diagram.
Superconductivity in Sr2RuO4 was discovered by Prof. Maeno and collaborators in 1994.
The rest of the discussion will be part of this thread.
Best Regards to All.


My understanding of the significance of Bell's inequality in quantum mechanics (QM) is as follows. The assumption of hidden variables implies an inequality called Bell's inequality. This inequality is violated not only by conventional QM theory but also by experimental data designed to test the prediction (the experimental data agree with conventional QM theory). This implies that the hidden variable assumption is wrong. But from reading Bell's paper it looks to me that the assumption proven wrong is hidden variables (without saying local or otherwise), while people smarter than me say that the assumption proven wrong is local hidden variables. I don't understand why it is only local hidden variables, instead of just hidden variables, that was proven wrong. Can somebody explain this?
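For concreteness, the quantitative content of the violation is easy to reproduce: quantum mechanics predicts singlet-state correlations E(a, b) = -cos(a - b), and with the standard CHSH analyzer angles the combination S reaches magnitude 2√2, beyond the bound |S| ≤ 2 that any local hidden-variable model must satisfy:

```python
# CHSH sketch: the quantum-mechanical singlet correlation violates the
# local-hidden-variable bound |S| <= 2, reaching the Tsirelson value 2*sqrt(2).
import math

def E(a, b):
    """Singlet-state correlation for analyzer angles a, b (radians)."""
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2          # Alice's two settings
b, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"|S| = {abs(S):.4f}  (local bound: 2, quantum maximum: "
      f"{2 * math.sqrt(2):.4f})")
```

The "local" qualifier matters because the derivation of the bound assumes the hidden variables determine each outcome using only the local analyzer setting; a nonlocal hidden-variable model (Bohmian mechanics being the standard example) can reproduce the quantum correlations, so it is not ruled out by the violation.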
How much do the existence of advanced laboratories, appropriate financial budgets, and other kinds of support for a researcher's research affect the quality and quantity of the researcher's work?
Heidegger said that philosophy is thinking. What else is philosophy? What is the ultimate aim of philosophy? Truth? Certainty? …
Heidegger said that science is knowledge. What else is science? What is the ultimate aim of science? Knowledge? Truth? Certainty? …
Scientists have been using quantum theory for almost a century now, but embarrassingly they still don’t know what it means. An informal poll taken at a 2011 conference on Quantum Physics and the Nature of Reality showed that there’s still no consensus on what quantum theory says about reality — the participants remained deeply divided about how the theory should be interpreted.
1. Bose-Einstein condensation: How do we rigorously prove the existence of Bose-Einstein condensates for general interacting systems? (Schlein, Benjamin. "Graduate Seminar on Partial Differential Equations in the Sciences – Energy, and Dynamics of Boson Systems". Hausdorff Center for Mathematics. Retrieved 23 April 2012.)
2. Scharnhorst effect: Can light signals travel slightly faster than c between two closely spaced conducting plates, exploiting the Casimir effect? (Barton, G.; Scharnhorst, K. (1993). "QED between parallel mirrors: light signals faster than c, or amplified by the vacuum". Journal of Physics A. 26 (8): 2037.)
I have to fabricate a 2D Hot electron transistor for my project.
I come mainly from a theoretical physics background, so I don't know what to search for or read to learn which parameters affect the frequency of a 2D heterostructure transistor. Can someone help me out by pointing me in the right direction?