Fundamental Physics - Science topic
Questions related to Fundamental Physics
Soumendra Nath Thakur
ORCiD: 0000-0003-1871-7803
March 16, 2025
Abstract:
Extended Classical Mechanics (ECM) refines the classical understanding of force, energy, and mass by incorporating the concept of negative apparent mass. In ECM, the effective force is determined by both observable mass and negative apparent mass, leading to a revised force equation. The framework introduces a novel energy-mass relationship where kinetic energy emerges from variations in potential energy, ensuring consistency with classical conservation laws. This study extends ECM to massless particles, demonstrating that they exhibit an effective mass governed by their negative apparent mass components. The connection between ECM’s kinetic energy formulation and the quantum mechanical energy-frequency relation establishes a fundamental link between classical and quantum descriptions of energy and mass. Furthermore, ECM naturally accounts for repulsive gravitational effects without requiring a cosmological constant, reinforcing the interpretation of negative apparent mass as a fundamental aspect of energy displacement in gravitational fields. The framework is further supported by an analogy with Archimedes’ Principle, providing an intuitive understanding of how mass-energy interactions shape particle dynamics. These findings suggest that ECM offers a predictive and self-consistent alternative to relativistic mass-energy interpretations, shedding new light on massless particle dynamics and the nature of gravitational interactions.
Keywords:
Extended Classical Mechanics (ECM), Negative Apparent Mass, Effective Mass, Energy-Mass Relationship, Kinetic Energy, Massless Particles, Quantum Energy-Frequency Relation, Archimedes’ Principle, Gravitational Interactions, Antigravity
Extended Classical Mechanics: Energy and Mass Considerations
1. Force Considerations in ECM:
The force in Extended Classical Mechanics (ECM) is determined by the interplay of observable mass and negative apparent mass. The force equation is expressed as:
F = {Mᴍ +(−Mᵃᵖᵖ)}aᵉᶠᶠ
where: Mᵉᶠᶠ = {Mᴍ +(−Mᵃᵖᵖ)}, Mᴍ ∝ 1/Mᴍ = -Mᵃᵖᵖ
Significance:
- This equation refines classical force considerations by incorporating negative apparent mass −Mᵃᵖᵖ, which emerges due to gravitational interactions and motion.
- The effective acceleration aᵉᶠᶠ adapts dynamically based on motion or gravitational conditions, ensuring consistency in ECM's mass-energy framework.
- The expression (Mᴍ ∝ 1/Mᴍ) provides a self-consistent relationship between observable mass and its apparent counterpart, reinforcing the analogy with Archimedes' principle.
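As a minimal numeric sketch of this force relation, the snippet below evaluates F = {Mᴍ + (−Mᵃᵖᵖ)}aᵉᶠᶠ; the masses and acceleration used are arbitrary placeholders for illustration, not ECM predictions:

```python
# Sketch of the ECM force relation F = {M_M + (-M_app)} * a_eff.
# All numeric values are arbitrary placeholders, not ECM predictions.

def ecm_force(m_matter: float, m_app: float, a_eff: float) -> float:
    """Force from observable mass M_M and the magnitude of the negative apparent mass M_app."""
    m_eff = m_matter + (-m_app)   # M_eff = M_M + (-M_app)
    return m_eff * a_eff

f = ecm_force(2.0, 0.5, 3.0)      # (2.0 - 0.5) * 3.0 = 4.5
```

Note that as the apparent-mass magnitude approaches the observable mass, the effective force tends to zero, matching the limiting case Mᵉᶠᶠ = 0 discussed later in section 9.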
2. Total Energy Considerations in ECM:
Total energy in ECM consists of both potential and kinetic components, adjusted for mass variations:
Eₜₒₜₐₗ = PE + KE
By incorporating the variation in potential energy:
Eₜₒₜₐₗ = (PE − ΔPE) + ΔPE
where:
- Potential Energy component: (PE − ΔPE)
- Kinetic Energy component: KE = ΔPE
Since in ECM, (ΔPE) corresponds to the energy displaced due to apparent mass effects:
Eₜₒₜₐₗ = PE + KE
⇒ (PE − ΔPE of Mᴍ) + (KE of ΔPE) ≡ (Mᴍ − 1/Mᴍ) + (-Mᵃᵖᵖ)
Here, Potential Energy Component:
(PE − ΔPE of Mᴍ) ≡ (Mᴍ − 1/Mᴍ)
This expresses how the variation in potential energy is linked to, and identified with, the corresponding mass terms.
Kinetic Energy Component:
(KE of ΔPE) ≡ (-Mᵃᵖᵖ)
This aligns with the ECM interpretation where kinetic energy arises due to negative apparent mass effects.
Significance:
- Ensures energy conservation by explicitly including mass variations.
- Demonstrates that kinetic energy naturally arises from the variation in potential energy, aligning with the effective mass formulation.
- Strengthens the analogy with fluid displacement, reinforcing the concept of negative apparent mass as a counterpart to conventional mass.
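The bookkeeping above can be checked in a few lines; the numbers are arbitrary, and the only point illustrated is that identifying KE with ΔPE leaves the total unchanged:

```python
# Sketch of the ECM energy split E_total = (PE - dPE) + dPE, with KE = dPE.
# Input values are arbitrary illustrations.

def ecm_energy_split(pe_initial: float, delta_pe: float):
    pe_residual = pe_initial - delta_pe   # (PE - dPE)
    ke = delta_pe                         # KE = dPE in ECM
    return pe_residual, ke, pe_residual + ke

pe, ke, total = ecm_energy_split(10.0, 3.0)
# total equals the initial PE, so the split conserves energy by construction
```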
3. Kinetic Energy for Massive Particles in ECM:
For massive particles, kinetic energy is derived from classical principles but adjusted for ECM considerations:
KE = ΔPE = 1/2 Mᴍv²
where:
Mᵉᶠᶠ = Mᴍ + (−Mᵃᵖᵖ)
Significance:
- Maintains compatibility with classical mechanics while integrating ECM mass variations.
- Reflects how kinetic energy is influenced by the effective mass, ensuring consistency across different gravitational regimes.
- Provides a basis for extending kinetic energy considerations to cases involving negative apparent mass.
4. Kinetic Energy for Conventionally Massless but Negative Apparent Massive Particles:
For conventionally massless particles in ECM, negative apparent mass contributes to the effective mass as follows:
Mᵉᶠᶠ = −Mᵃᵖᵖ + (−Mᵃᵖᵖ)
Since in ECM:
Mᴍ ⇒ −Mᵃᵖᵖ
it follows that:
Mᵉᶠᶠ = −2Mᵃᵖᵖ
Significance:
- Establishes that even conventionally massless particles possess an effective mass due to their negative apparent mass components.
- Provides a self-consistent framework that supports ECM's interpretation of mass-energy interactions.
- Highlights the role of negative apparent mass in governing the energetic properties of massless particles.
5. Kinetic Energy for Negative Apparent Mass Particles, Including Photons:
For negative apparent mass particles, such as photons, kinetic energy is given by:
KE = 1/2 (−2Mᵃᵖᵖ)c²
where:
v = c
Since:
ΔPE = −Mᵃᵖᵖc²
it follows that:
ΔPE/c² = −Mᵃᵖᵖ
Thus:
KE = ΔPE/c² = −Mᵃᵖᵖ
Significance:
- Establishes a direct relationship between kinetic energy and the quantum mechanical frequency relation.
- Demonstrates that photons, despite being conventionally massless, exhibit kinetic energy consistent with ECM’s negative apparent mass framework.
- Reinforces the view that negative apparent mass plays a fundamental role in governing mass-energy interactions at both classical and quantum scales.
6. ECM Kinetic Energy and Quantum Mechanical Frequency Relationship for Negative Apparent Mass Particles:
KE = ΔPE/c² = hf/c² = −Mᵃᵖᵖ
This equation establishes a direct link between the kinetic energy of a negative apparent mass particle and the quantum energy-frequency relation. The expression ensures consistency with quantum mechanical principles while reinforcing the role of negative apparent mass in energy dynamics.
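Numerically, hf/c² can be evaluated with standard constants; the chosen frequency (roughly green visible light, 5 × 10¹⁴ Hz) is an arbitrary example:

```python
# Evaluate |M_app| = h*f / c^2 from the energy-frequency relation above.
# The frequency (roughly green visible light) is an arbitrary example.

H = 6.62607015e-34    # Planck constant, J*s (CODATA 2018, exact)
C = 2.99792458e8      # speed of light, m/s (exact)

def apparent_mass_magnitude(frequency_hz: float) -> float:
    """Mass equivalent h*f/c^2 of a photon of the given frequency, in kg."""
    return H * frequency_hz / C**2

m_app = apparent_mass_magnitude(5.0e14)   # ~3.7e-36 kg
```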
7. Effective Mass and Apparent Mass in ECM:
In ECM, the Effective Mass represents the overall mass that is observed, while the Negative Apparent Mass (−Mᵃᵖᵖ) emerges due to motion or gravitational interactions. This distinction provides deeper insight into how mass behaves dynamically under varying conditions, differentiating ECM from conventional mass-energy interpretations.
8. Direct Energy-Mass Relationship in ECM:
hf/c² = −Mᵃᵖᵖ
This equation is inherently consistent with dimensional analysis, showing that negative apparent mass naturally arises from the energy-frequency relationship without requiring any extra scaling factors. This highlights ECM's compatibility with established quantum mechanical formulations and reinforces the role of negative apparent mass as an intrinsic component of energy-based mass considerations.
9. Effective Mass for Massive Particles in ECM
For a massive particle in ECM, the effective mass is given by:
Mᵉᶠᶠ = Mᴍ + (−Mᵃᵖᵖ)
where:
- Mᴍ is the conventional mass.
- −Mᵃᵖᵖ is the negative apparent mass component induced by gravitational interactions and acceleration effects.
ECM establishes the inverse proportionality of apparent mass to conventional mass:
Mᴍ ∝ 1/Mᴍ ⇒ Mᴍ = − Mᵃᵖᵖ
Thus, we obtain:
Mᵉᶠᶠ = Mᴍ − Mᴍ = 0
which represents a limiting case where effective mass cancels out under specific conditions.
10. Effective Mass for Massless Particles in Motion
For massless particles such as photons, the conventional mass is:
Mᴍ = 0
However, in ECM, massless particles exhibit an effective mass due to the interaction of negative apparent mass with energy-mass dynamics.
From ECM’s force equation for a photon in motion:
Fₚₕₒₜₒₙ = −Mᵃᵖᵖaᵉᶠᶠ
This indicates that the apparent mass governs the photon’s dynamics.
Since massless particles always move at the speed of light (v = c), ECM treats their total apparent mass contribution as doubled due to energy displacement effects (analogous to Archimedean displacement in a gravitational-energy field):
Mᵉᶠᶠ = (−Mᵃᵖᵖ) + (−Mᵃᵖᵖ) = −2Mᵃᵖᵖ
Thus, for massless particles in motion:
Mᵉᶠᶠ = −2Mᵃᵖᵖ
This confirms that even though Mᴍ = 0, the particle still possesses an effective mass purely governed by negative apparent mass interactions.
11. Archimedes’ Principle Analogy in ECM
ECM’s treatment of negative apparent mass is closely related to Archimedes’ Principle, which describes the buoyant force in a fluid medium. In classical mechanics, a submerged object experiences an upward force equal to the weight of the displaced fluid. Similarly, in ECM:
- A mass moving through a gravitational-energy field experiences an **apparent reduction** in mass due to energy displacement, akin to an object losing effective weight in a fluid.
- For massive particles, this effect reduces their observed mass through the relation:
Mᵉᶠᶠ = Mᴍ + (−Mᵃᵖᵖ)
- For massless particles, the displacement effect is **doubled**, leading to:
Mᵉᶠᶠ = −2Mᵃᵖᵖ
This is analogous to how a fully submerged object displaces its entire volume, reinforcing the interpretation that massless particles inherently interact with the surrounding energy field via their negative apparent mass component.
Physical & Theoretical Significance
(A) Massless Particles Exhibit an Effective Mass
- This challenges the traditional view that massless particles (e.g., photons) have no mass at all. ECM reveals that while they lack conventional rest mass, their motion within an energy field naturally endows them with an effective mass, explained by negative apparent mass effects.
(B) Quantum Mechanical Consistency
- The ECM kinetic energy relation aligns with quantum mechanical frequency-based energy expressions:
KE = hf/c² = −Mᵃᵖᵖ
This suggests that negative apparent mass is directly linked to the fundamental nature of wave-particle duality, reinforcing ECM’s consistency with established quantum mechanics principles.
(C) Natural Explanation for Antigravity
- The doubling of negative apparent mass for massless particles introduces a natural anti-gravity effect, distinct from the ad hoc introduction of a cosmological constant Λ in relativistic models.
- Since massless particles propagate via their effective mass Mᵉᶠᶠ = −2Mᵃᵖᵖ, ECM naturally incorporates repulsive gravitational effects without requiring modifications to spacetime geometry.
(D) Reinforcement of ECM’s Fluid Displacement Analogy
- The analogy with Archimedes’ Principle provides a strong conceptual foundation for negative apparent mass. Just as an object in a fluid experiences a buoyant force due to displaced volume, mass in ECM interacts with gravitational-energy fields via displaced potential energy, leading to apparent mass effects.
Conclusion
ECM’s interpretation of effective mass provides a self-consistent framework where both massive and massless particles exhibit observable mass variations due to negative apparent mass effects. The Archimedean displacement analogy reinforces this concept, offering an intuitive understanding of how energy-mass interactions govern particle dynamics.
This formulation provides a clear, predictive alternative to conventional relativistic models, demonstrating how massless particles still exhibit mass-like behaviour via their motion and interaction with energy fields.
12. Photon Dynamics in ECM & Archimedean Displacement Analogy
Total Energy Consideration for Photons in ECM
In ECM, the total energy of a photon is composed of:
Eₚₕₒₜₒₙ = Eᵢₙₕₑᵣₑₙₜ + E𝑔
where:
- Eᵢₙₕₑᵣₑₙₜ is the inherent energy of the photon.
- E𝑔 is the interactional energy due to gravitational effects.
When a photon is fully submerged in a gravitational field, its total energy is doubled due to its interactional energy contribution:
Eₚₕₒₜₒₙ = Eᵢₙₕₑᵣₑₙₜ + E𝑔 ⇒ 2E
This represents the energy displacement effect, aligning with ECM’s formulation that massless particles experience a doubled apparent mass contribution in motion:
Mᵉᶠᶠ = −2Mᵃᵖᵖ
Photon Escaping the Gravitational Field
As the photon escapes the gravitational field, it expends E𝑔, reducing its total energy:
Eₚₕₒₜₒₙ ⇒ Eᵢₙₕₑᵣₑₙₜ, E𝑔 ⇒ 0
Thus, once the photon is completely outside the gravitational influence:
Eₚₕₒₜₒₙ = E, E𝑔 = 0
This describes how a photon’s energy and effective mass vary dynamically with gravitational interaction, reinforcing the ECM perspective on gravitational influence on energy-mass dynamics.
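The claimed energy bookkeeping for a photon entering and leaving a gravitational field can be written out directly; the sketch below simply restates the text's two limiting cases (fully submerged: E𝑔 = Eᵢₙₕₑᵣₑₙₜ; fully escaped: E𝑔 = 0) with unit energy:

```python
# Bookkeeping sketch of the photon-energy claim above:
# total = E_inherent + E_g, with E_g = E_inherent when "fully submerged"
# and E_g = 0 once the photon has escaped the gravitational field.

def photon_total_energy(e_inherent: float, e_g: float) -> float:
    return e_inherent + e_g

E = 1.0                                   # inherent energy, arbitrary units
inside = photon_total_energy(E, E)        # fully submerged: 2E
outside = photon_total_energy(E, 0.0)     # escaped: total reduces to E
```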
Alignment with Archimedean Displacement Analogy
This ECM interpretation strongly aligns with Archimedes' Principle, where:
- A photon in a gravitational field is analogous to an object fully submerged in a fluid, experiencing an energy displacement effect.
- As the photon leaves the gravitational field, it expends its interactional energy E𝑔, similar to how an object leaving a fluid medium loses its buoyant force.
This analogy further strengthens ECM’s concept of negative apparent mass, where the gravitational interaction displaces energy similarly to how a fluid displaces volume.
Conclusion & Significance
- The ECM photon dynamics equation aligns with the Archimedean displacement analogy, reinforcing the physical reality of negative apparent mass effects.
- This provides a natural, intuitive explanation for how photons interact with gravitational fields without requiring relativistic spacetime curvature.
- It further supports the energy-mass displacement framework, demonstrating how photons dynamically exchange energy with gravitational fields while maintaining ECM’s effective mass principles.
This formulation elegantly unifies photon energy dynamics with mass-energy interactions, further validating ECM as a robust framework for fundamental physics.
13. Effective Acceleration and Apparent Mass in Massless Particles
For photons in ECM, the effective force is given by:
Fₚₕₒₜₒₙ = −Mᵉᶠᶠaᵉᶠᶠ, where aᵉᶠᶠ = 6 × 10⁸ m/s²
- Negative Apparent Mass & Acceleration:
Photons possess negative apparent mass (−Mᵃᵖᵖ), which leads to an anti-gravitational effect. Their effective acceleration (aᵉᶠᶠ) is inversely proportional to Mᵉᶠᶠ and radial distance r.
- Within a gravitational field, the photon has more interactional energy E𝑔, increasing aᵉᶠᶠ.
- Escaping the field, it expends E𝑔, reducing Mᵃᵖᵖ and lowering aᵉᶠᶠ.
- Acceleration Scaling with Gravitational Interaction:
E𝑔 ∝ 1/r
- At r₀ ⇒ E𝑔,ₘₐₓ ⇒ Maximum −Mᵃᵖᵖaᵉᶠᶠ ⇒ aᵉᶠᶠ = 2c.
- At rₘₐₓ ⇒ E𝑔 = 0 ⇒ Minimum −Mᵃᵖᵖaᵉᶠᶠ ⇒ aᵉᶠᶠ = c.
This confirms that effective acceleration (2c) is a function of gravitational interaction, not an intrinsic speed change, reinforcing ECM’s explanation of negative apparent mass dynamics.
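The text fixes only the two endpoints (aᵉᶠᶠ = 2c at r₀, and aᵉᶠᶠ → c as E𝑔 → 0). One interpolation consistent with those endpoints and with E𝑔 ∝ 1/r is aᵉᶠᶠ(r) = c(1 + r₀/r); the functional form between the endpoints is an assumption for illustration only:

```python
# One interpolation consistent with the stated endpoints:
#   a_eff = 2c at r = r0, and a_eff -> c as r -> infinity (E_g -> 0).
# The specific form a_eff(r) = c * (1 + r0/r) is an illustrative assumption;
# the text itself fixes only the two limits.

C = 2.99792458e8   # m/s

def a_eff(r: float, r0: float) -> float:
    return C * (1.0 + r0 / r)

near = a_eff(1.0, 1.0)     # at r = r0: exactly 2c
far = a_eff(1.0e9, 1.0)    # far from the field: approaches c
```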
14. Extended Classical Mechanics: Effective Acceleration, Negative Apparent Mass, and Photon Dynamics in Gravitational Fields
Analytical Description & Significance:
This paper refines and extends the framework of Extended Classical Mechanics (ECM) by establishing a comprehensive formulation for effective acceleration, negative apparent mass, and their implications for massless and massive particles under gravitational influence. The analysis revises ECM equations to incorporate Archimedes' principle as a physical analogy for negative apparent mass, clarifies the role of effective acceleration (2c) in different gravitational conditions, and demonstrates how negative apparent mass serves as a natural anti-gravity effect, contrasting with the relativistic cosmological constant (Λ).
A key highlight is the kinetic energy formulation for negative apparent mass particles, which aligns with quantum mechanical frequency relations for massless particles. This formulation provides deeper insight into how negative apparent mass influences energy and motion without requiring conventional mass assumptions.
Key Implications & Theoretical Advancements:
Refined Effective Acceleration Equation for Massless Particles:
- ECM establishes that photons, despite being massless in the conventional sense, exhibit negative apparent mass contributions, leading to an effective acceleration of aᵉᶠᶠ = 6 × 10⁸ m/s² = 2c inside gravitational fields.
- This acceleration naturally arises due to the relationship between negative apparent mass −Mᵃᵖᵖ and gravitational interaction energy E𝑔.
- The effective acceleration decreases as a photon exits the gravitational field, reaching c in free space.
Negative Apparent Mass as a Replacement for Cosmological Constant (Λ):
- Unlike Λ, which assumes a uniform energy density, negative apparent mass dynamically varies with gravitational interaction energy.
- This formulation provides a self-consistent explanation for observed cosmological effects, particularly in gravitational repulsion and expansion scenarios.
Physical Analogy with Archimedes’ Principle:
- The ECM framework aligns negative apparent mass effects with Archimedean displacement, where gravitational interaction leads to energy displacement effects analogous to buoyant forces in fluids.
- In gravitational fields, a photon's interactional energy (E𝑔) contributes to its total energy, analogous to an object submerged in a fluid experiencing an upward force.
- As the photon escapes, the loss of E𝑔 mirrors an object emerging from a fluid losing its buoyant support.
Revision in the Energy-Mass Relation for Massless Particles:
- The study revises a prior inconsistency by explicitly linking the kinetic energy of negative apparent mass particles to quantum mechanical frequency relations, ensuring consistency between ECM and established quantum principles.
Conclusion:
This research enhances ECM’s predictive power by clarifying the role of negative apparent mass in gravitational dynamics and demonstrating its relevance to photon motion, cosmological expansion, and gravitational interactions. By introducing effective acceleration (2c) as a natural consequence of gravitational interaction, ECM provides a compelling alternative to relativistic formulations, reinforcing the practical applicability of classical mechanics principles in modern physics.

Recent advancements in quantum photonics have sparked widespread interest, with headlines suggesting that scientists have achieved the impossible—freezing light. However, a deeper examination reveals that this interpretation is metaphorical rather than literal. The breakthrough in question involves engineering a supersolid state in a photonic platform, where light exhibits paradoxical properties of both superfluidity and crystalline order. This is achieved through the condensation of polaritons, hybrid quasiparticles formed by coupling photons with excitons in a gallium arsenide semiconductor. Through precise laser excitation, researchers have induced Bose-Einstein condensation (BEC), leading to a unique state where light behaves as both a fluid and a structured lattice. While this achievement challenges classical understandings of light behavior, it does not imply that photons have been halted or frozen. Instead, the experiment demonstrates an emergent quantum phase transition, limited by the transient nature of polaritons and the specific conditions required for their formation. As highlighted in my research paper (DOI: 10.13140/RG.2.2.22964.36482), these developments call for a critical reassessment of existing quantum theories and their applicability to light-matter interactions. While this work expands the boundaries of quantum physics, it remains essential to differentiate between experimental findings and oversimplified interpretations that may mislead the scientific discourse.
Challenging established theories and providing solutions to long-standing problems in physics is no small feat. It has now been proven in the latest research that the second law of thermodynamics is wrong (entropy is constant) and that the arrow of time is T-symmetric. This could have significant implications for our understanding of the universe. It genuinely changes physics as we know it; science will never be the same after these findings, which have already been published in an accredited peer-reviewed international journal (see the paper below for details).
Do you agree with the findings? The proof is simple to read, yet powerful enough to overturn the traditional laws of science. If not, please provide a reason why. We have had some very interesting discussions so far on other topics, and I want to keep this channel open, clear and omni-directional!
Sandeep
Subtitle: Will all the fundamental researchers be fired from their jobs in the future and fundamental research become obsolete?
This is a philosophical but also practical question with immediate implications to our not so far future.
The danger is that AI applications in science, like AlphaFold (Nobel Prize in Chemistry 2024), are not really predictions made by science through a full, fundamental understanding of nature's physics, mechanics, and chemistry, but brute-force, computationally smart pattern recognition: correlating known outcomes of similar input data and guessing the most likely new outcome. This is not new fundamental science or physics research, but merely an application of AI computation.
The philosophical question here is, will future scientists and human civilization using AI, continue to be motivated to do fundamental science research?
Is there really any human urge to fundamentally understand a physical phenomenon or system in order to predict its output for a specific input, if that output can be guessed empirically and statistically by an AI, far faster and more effortlessly, without any need for fundamental understanding?
This points to a blind, mutilated future for science, and to the danger of slowing down real new fundamental breakthroughs and milestones: essentially slowing human civilization's progress and evolution and demoting science to the role of a "magic oracle".
In my opinion, the use of AI in fundamental research, such as fundamental new physics research, must be regulated or excluded. Many science journals already have strict rules about the use of generative AI in submitted papers, and some disallow it entirely.
What are your opinions and thoughts?
Nominations are expected to open in the early part of the year for the Breakthrough Prize in Fundamental Physics. Historically nominations are accepted from early/mid-January to the end of March, for the following year's award.
Historically, the foundation has also had a partnership with ResearchGate.
The foundation also awards major prizes for Life Sciences and for Mathematics, and has further prizes specific to younger researchers.
So who would you nominate?
Who is really able to judge whether your theory is good or not? And what are the criteria of editors and reviewers? Is it only their experiences? Is all that matters to them the audience like on TV even if the content of the program is super empty? Is this how we are moving further and further away from fundamental physics?
Author Comment:
This study synthesizes key conclusions derived from a series of research papers on extended classical mechanics. These papers provide a fresh perspective on established experimental results, challenging traditional interpretations and highlighting potential inaccuracies in previous theoretical frameworks. Through this reinterpretation, the study aims to refine our understanding of fundamental physical phenomena, opening avenues for further exploration and validation.
Keywords: Photon dynamics, Gravitational interaction, Negative mass, Cosmic redshift, Extended classical mechanics
Reversibility of Gravitational Interaction:
A photon’s interaction with an external gravitational force is inherently reversible. The photon maintains its intrinsic momentum throughout the process and eventually resumes its original trajectory after disengaging from the gravitational field.
Intrinsic Energy (E) Preservation:
The photon's intrinsic energy E, derived from its emission source, remains unaltered despite gaining or losing energy (Eg) through gravitational interaction within a massive body's gravitational influence.
Contextual Gravitational Energy (Eg):
The gravitational interaction energy Eg is a localized phenomenon, significant only within the gravitational influence of a massive body. Beyond this influence, in regions of negligible gravity, the photon retains only its intrinsic energy E.
Cosmic Redshift and Energy Loss (ΔE):
In the context of cosmic expansion, the recession of galaxies causes a permanent loss of a photon's intrinsic energy ΔE due to the cosmological redshift. This energy loss is independent of local gravitational interactions and reflects the large-scale dynamics of the expanding universe.
Negative Apparent Mass and Antigravitational Effects:
The photon's negative apparent mass Mᵃᵖᵖ,ₚₕₒₜₒₙ generates a constant negative force −F, which manifests as an antigravitational effect. This behaviour parallels the characteristics attributed to dark energy in its capacity to resist gravitational attraction.
Wave Speed Consistency (c):
The constant negative force −F, arising from the photon's energy dynamics, ensures the photon’s ability to maintain a constant wave propagation speed c, irrespective of gravitational influences.
Negative Effective Mass:
The photon’s negative effective mass Mᵉᶠᶠ,ₚₕₒₜₒₙ allows it to exhibit properties akin to those of a negative particle. This feature contributes to its unique interaction dynamics within gravitational fields and reinforces its role in antigravitational phenomena.
Constant Effective Acceleration:
From the moment of its emission at an initial velocity of 0 m/s, the photon experiences a constant effective acceleration, quantified as aᵉᶠᶠ,ₚₕₒₜₒₙ = 6 × 10⁸ m/s². This acceleration underpins the photon's ability to achieve and sustain its characteristic speed of light (c), reinforcing its intrinsic energy and momentum dynamics.
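Taking the stated figures at face value, the time for a photon to go from 0 m/s to c under a constant aᵉᶠᶠ,ₚₕₒₜₒₙ = 6 × 10⁸ m/s² is simple arithmetic:

```python
# Arithmetic on the figures stated above: starting from rest under a constant
# effective acceleration of 6e8 m/s^2, the time to reach c is t = c / a_eff.

C = 2.99792458e8   # m/s
A_EFF = 6.0e8      # m/s^2, the value stated in the text

t_to_c = C / A_EFF   # just under 0.5 s under uniform acceleration
```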
Zero stands for emptiness, for nothing, and yet it is considered to be one of the greatest achievements of humankind. It took a long stretch of human history for it to be recognized and appreciated [1][4]. In the history of mathematics considerable confusion exists as to the origin of zero. There can be no unique answer to the query, "Who first discovered the zero?", for this may refer to any one of several related but distinct historical issues† [2]. A very explicit use of the concept of zero was made by Aristotle, who, speaking of motion in a vacuum, said "there is no ratio in which the void is exceeded by body, as there is no ratio of zero to a number” [3][2]*. He apparently recognized “the Special Status of Zero among the Natural Numbers.”
If we believe that zero is explicitly expressed mathematically, whether in number theory, algebra, or set theory, is the meaning of zero also clear and unified in the different branches of physics? Or can it have multiple meanings? Such as:
1) Annihilation —— When positive and negative particles meet [5][6], e⁺e⁻ → γ + γ′, the two charges disappear, the two masses disappear, and only the energy neither vanishes nor increases; the momentum of the two electrons, which was 0, now becomes the equal and opposite momenta of the two photons. How many kinds of zeros exist here, and what does each mean?
2) Double-slit interference —— What exactly is expressed at the dark fringes of the interference pattern in Young's double-slit experiment, and how should it actually be understood? For light waves, it can be understood as the field canceling due to destructive interference and presenting itself as zero. For single photons and single electrons [7], physics considers it a probabilistic, statistical property [12]. This means that, in practice, the field at the theoretically calculated dark fringes is likely not exactly zero‡.
3) Destructive interference —— In the Mach–Zehnder interferometer [8], there has always been the question of where the energy in the destructive-interference arm goes [9]. There seems to be an energy cancellation occurring.
4) Anti-reflection coatings —— By coating [10], the reflected waves are completely canceled out, with the purpose of increasing transmission.
5) Nodes of Standing Waves —— In an optical resonant cavity (laser resonator), "The resonator cavity's path length determines the longitudinal resonator modes, or electric field distributions which cause a standing wave in the cavity" [13]. The amplitude of the electromagnetic field at a node of the standing wave is zero, but we cannot say that the energy and momentum at this point are zero, which would violate the uncertainty principle.
6) Laser Beam Modes —— The simplest type of laser resonator modes are the Hermite-Gaussian modes, also known as transverse electromagnetic modes (TEMnm), in which the electric field profile can be approximated by the product of a Gaussian function with a Hermite polynomial; in TEMnm, n is the number of nodes in the x direction and m is the number of nodes in the y direction [14].
7) Nodes of the Wave Function —— Nodes and endpoints of the wave function Ψ in a square potential well have zero probability in quantum mechanics‡ [11].
8) Pauli exclusion principle —— Fermion wave functions are antisymmetric, Ψ(q₁,q₂) = −Ψ(q₂,q₁), so for q₁ = q₂ we get Ψ(q,q) = 0. Does a wave function of zero here mean that the "field" is not allowed to exist, or, according to the Copenhagen interpretation, that the wave function has zero probability of appearing here?
9) Photon —— zero mass, zero charge.
10) Absolute vacuum —— Can it be defined as zero-energy space?
11) Absolute temperature 0 K —— Is the entire physical world, apart from photons, defined as a zero-energy state?
12) Perfect superconductor —— "The three 'big zeros' of superconductivity (zero resistance, zero induction and zero entropy) have equal weight and grow from a single root: quantization of the angular momentum of paired electrons" [15].
13) ......
Doesn't it violate mathematical principles if we interpret the meaning of zeros in physics according to our needs? If we regard every zero as energy not existing, or not being allowed to exist there, does that mean energy must always have the same expression? Otherwise, we cannot find a unified explanation.
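For cases 2) and 3) above, the field-cancellation reading of a "zero" can be made concrete: two equal-amplitude waves π out of phase sum to zero at every instant, even though each component individually carries energy proportional to its amplitude squared. The snippet below is a minimal sketch of that superposition:

```python
# Minimal sketch of destructive interference: two equal-amplitude waves that
# are pi out of phase sum to (numerically) zero at every sampled instant,
# although each component carries energy ~ amplitude**2.

import math

def superpose(amplitude: float, phase_shift: float, t: float, omega: float = 1.0) -> float:
    w1 = amplitude * math.cos(omega * t)
    w2 = amplitude * math.cos(omega * t + phase_shift)
    return w1 + w2

samples = [superpose(1.0, math.pi, 0.1 * k) for k in range(100)]
max_field = max(abs(s) for s in samples)   # ~0: the "dark fringe" case
```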
---------------------------------------------
Notes
* Ratio was a symmetrical expression particularly favored by the ancient Greeks.
† Symbols(0,...), words (zero, null, void, empty, none, ...), etc..
‡ Note in particular that a measured probability is a statistical frequency, not an exact value. For example, a theoretical probability of 0.5 may occur in physical reality as 0.49999999999; an exact value such as 0.5 is almost never realized. This means there is no probability value that never occurs, even if the theoretical probability is 0. It is against the principles of probability to assume that a probability of zero means the event will never occur in reality.
---------------------------------------------
References
[1] Nieder, A. (2016). "Representing something out of nothing: The dawning of zero." Trends in Cognitive Sciences 20(11): 830-842.
[2] Boyer, C. B. (1944). "Zero: The symbol, the concept, the number." National Mathematics Magazine 18(8): 323-330.
[3] Aristotle, Physics.
[4] Boyer, C. B. (1944). "Zero: The symbol, the concept, the number." National Mathematics Magazine 18(8): 323-330.
[5] https://www.researchgate.net/post/NO8Are_annihilation_and_pair_production_mutually_inverse_processes
[7] Davisson, C. and L. H. Germer (1927). "Diffraction of Electrons by a Crystal of Nickel." Physical Review 30(6): 705-740.
[8] Mach, L., L. Zehnder and C. Clark (2017). The Interferometers of Zehnder and Mach.
[9] Zetie, K., S. Adams and R. Tocknell (2000). "How does a Mach-Zehnder interferometer work?" Physics Education 35(1): 46.
[11] Chen, J. (2023). From Particle-in-a-Box Thought Experiment to a Complete Quantum Theory? -Version 22.
[12] Born, M. (1955). "Statistical Interpretation of Quantum Mechanics." Science 122(3172): 675-679.
[13]
[14] "Gaussian Beam Optics." from https://experimentationlab.berkeley.edu/sites/default/files/MOT/Gaussian-Beam-Optics.pdf.
[15] Kozhevnikov, V. (2021). "Meissner Effect: History of Development and Novel Aspects." Journal of Superconductivity and Novel Magnetism 34(8): 1979-2009.
"How big is the proton?" [1] We can similarly ask, "How big is the electron?" "How big is the photon?" CODATA gives an answer [2]: the proton rms charge radius r_p = 8.41×10⁻¹⁶ m; the classical electron radius r_e = 2.82×10⁻¹⁵ m [6]. However, over a century after its discovery, the proton still keeps physicists busy understanding its basic properties: its radius, mass, stability, and the origin of its spin [1][4][7]. Physics still speaks of a 'proton-radius puzzle' [3][4], yet does not consider that the size of a photon is related to its wavelength.
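As a quick numerical check of the classical electron radius quoted above, it can be recomputed from its defining relation r_e = e²/(4πε₀ m_e c²), i.e. by equating the electrostatic self-energy of a sphere of charge e and radius r_e with the electron rest energy. A minimal sketch using CODATA 2018 values:

```python
import math

# CODATA 2018 values
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
m_e  = 9.1093837015e-31  # electron mass, kg
c    = 299792458.0       # speed of light, m/s

# Classical electron radius: electrostatic energy e^2/(4*pi*eps0*r) = m_e c^2
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(f"classical electron radius r_e = {r_e:.4e} m")  # ~2.82e-15 m
```

The result reproduces the figure cited from scienceworld.wolfram.com [6] to three significant digits.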
Geometrically the radius of a circle is clearly defined. If an elementary particle is regarded as an energy packet, which is unquestionably the case whether or not it can be described by a wavefunction, can its energy have a clear boundary like a geometric shape? The classical electron radius is clearly not a sharp conceptual boundary in the field, because the electron's electric-field energy extends without limit. When physics uses the term 'charge radius', what does it mean when mapped to geometry? If there really is a spherical charge [8][9], how is it formed and maintained*?
----------------------------------------
Notes:
*“Now if we have a sphere of charge, the electrical forces are all repulsive and an electron would tend to fly apart. Because the system has unbalanced forces, we can get all kinds of errors in the laws relating energy and momentum.” [Feynman Lecture C28]
----------------------------------------
References:
[1] Editorial. (2021). Proton puzzles. Nature Reviews Physics, 3(1), 1-1. https://doi.org/10.1038/s42254-020-00268-0
[2] Tiesinga, E. (2021). CODATA recommended values of the fundamental physical constants: 2018.
[3] Carlson, C. E. (2015). The proton radius puzzle. Progress in Particle and Nuclear Physics, 82, 59-77. https://doi.org/https://doi.org/10.1016/j.ppnp.2015.01.002
[4] Gao, H., Liu, T., Peng, C., Ye, Z., & Zhao, Z. (2015). Proton remains puzzling. The Universe, 3(2).
[5] Karr, J.-P., Marchand, D., & Voutier, E. (2020). The proton size. Nature Reviews Physics, 2(11), 601-614. https://doi.org/10.1038/s42254-020-0229-x
[6] "also called the Compton radius, by equating the electrostatic potential energy of a sphere of charge e and radius with the rest energy of the electron"; https://scienceworld.wolfram.com/physics/ElectronRadius.html
[7] Gao, H., & Vanderhaeghen, M. (2021). The proton charge radius.
[8] What is an electric charge? Can it exist apart from electrons? Would it be an effect? https://www.researchgate.net/post/NO44_What_is_an_electric_charge_Can_it_exist_apart_from_electrons_Would_it_be_an_effect ;
[9] Phenomena Related to Electric Charge,and Remembering Nobel Laureate T. D. Lee; https://www.researchgate.net/post/NO46Phenomena_Related_to_Electric_Chargeand_Remembering_Nobel_Laureate_T_D_Lee
Paradox 1 - The Laws of Physics Invalidate Themselves, When They Enter the Singularity Controlled by Themselves.
Paradox 2 - The Collapse of Matter Caused by the Law of Gravity Will Eventually Destroy the Law of Gravity.
The laws of physics dominate the structure and behavior of matter. Different levels of material structure correspond to different laws of physics. According to reductionism, when the structure of matter is reduced, the corresponding laws of physics are reduced with it. Different levels of physical laws correspond to different physical equations, many of which have singularities. Higher-level equations may be driven into their singularities by strong external conditions (pressure, temperature, etc.), producing phase transitions in which, for example, lattice or magnetic order is destroyed; in essence the higher-level equations have failed and given way to lower-level equations. There should therefore exist a lowest-level physical equation that cannot be reduced further. It would be the last line of defense after all higher-level equations have failed, and it must not be allowed to enter its singularity. This equation is the ultimate equation, and the equation corresponding to the Hawking-Penrose spacetime singularity [1] should be such an equation.
We can think of the physical equations as a description of a dynamical system because they are all direct or indirect expressions of energy-momentum quantities, and we have no evidence that it is possible to completely detach any physical parameter, macroscopic or microscopic, from the Lagrangian and Hamiltonian.
Gravitational collapse produces black holes, which contain singularities [2]. What characterizes a singularity? Any finite parameter becomes infinite after entering a spacetime singularity: information becomes infinite, energy-momentum becomes infinite, yet all material properties disappear completely. A dynamical equation transitioning from finite to infinite is impossible, because there is no infinite source of dynamics, and the uncertainty principle would also prevent this singularity from being reached*. Therefore, while there must be a singularity according to the singularity theorems, this singularity must be inaccessible, or will never be entered. Before entering such a singularity, a sufficiently long time must elapse, waiting for the conditions that would destroy it, such as the collision of two black holes.
"Most of these singularities, however, can usually be resolved by pointing out that the equations are missing some factor, or noting the physical impossibility of ever reaching the singularity point. In other words, they are probably not 'real'." [3] We believe this statement is correct. Nature will not by itself destroy the causality it has established.
-----------------------------------------------
Notes
* According to the uncertainty principle, finite energy and momentum cannot be concentrated at a single point in space-time.
-----------------------------------------------
References
[1] Hawking, S. (1966). "Singularities and the geometry of spacetime." The European Physical Journal H 39(4): 413-503.
[2] Hawking, S. W. and R. Penrose (1970). "The singularities of gravitational collapse and cosmology." Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 314(1519): 529-548.
==================================================
Supplement, 2023-01-14
Structural Logic Paradox
Russell once wrote a letter to Ludwig Wittgenstein while visiting China (1920-1921) in which he said "I am living in a Chinese house built around a courtyard*......" [1]. To a Western reader the phrase would probably mean, "I live in a house built around the back of a yard." Russell was a logician, but there is clearly a logical problem with this expression, since the yard is determined by the houses built around it, not vice versa. The same pattern appears in a very famous poem from the Tang Dynasty (618-907 AD) in China, "A Moonlit Night on the Spring River". One of its lines reads: "We do not know tonight for whom she sheds her ray, / But hear the river say to its water adieu." The problem here is that the river exists because of the water; without the water there would be no river, and hence no logic of the river saying goodbye to its water. There are, I believe, many more examples of this kind, and perhaps we can reduce these problems to a structural logic paradox†.
Ignoring such logical problems has no effect on literature, but it should become a serious issue in physics. The biggest obstacle in current physics is that we do not know the structure of elementary particles and black holes. Renormalization is an effective technique, but it delivers an alternative result that masks the internal structure and can only be considered a stopgap tool. Hawking and Penrose proved the singularity theorems, but no clear view has been developed on how to treat singularities. This scenario seems to us to be the same problem as the structural logic described above. Without black holes (and perhaps elementary particles) there would be no singularities, and (virtual) singularities accompany black holes. Given that a black hole exists together with its singularity, how can a black hole that does not collapse today because of its singularity collapse tomorrow because of the same singularity? Do yards make houses disappear? Does a river make its water disappear? This is the realistic explanation of the "paradox" in the subtitle of this question: the laws of physics do not destroy themselves.
-------------------------------------------------
Notes
* One of the typical architectural patterns in Beijing, China, is the "quadrangle": houses are built along the perimeter of a square open space, and once the houses are built, a courtyard is formed in the center. Before the houses were built it was a field, not a courtyard. The courtyard could only form after the houses were built, even though the central open space itself did not substantially change; only the concept changed.
† I hope some logician or philosopher will point out the impropriety.
-------------------------------------------------
References
[1] Monk, R. (1990). Ludwig Wittgenstein: the duty of genius. London: J. Cape. Morgan, G. (Chinese version @2011)
Please prove me right or wrong.
I have recently published a paper [1] in which I conclusively prove that the Stoney mass, invented by George Stoney in 1881 and covered by a shroud of mystery for over 140 years, does not represent any physical mass, but has a one-to-one correspondence with the electron charge. The rationale for this rather unusual claim is the effect of the deliberate choices made in establishing the SI base unit of mass (kg) and the derived unit of electric charge (coulomb: C = A·s). Mass and charge are inherently incommensurable in the SI, as well as in CGS units.
The commensurability of physical quantities may, however, depend on the definition of base units in a given system. The experimental "Rationalized Metric System" (RMS) developed in [1] eliminates the SI mass and charge units (kg and A·s, respectively), which both become derived units with dimensions of [m³ s⁻²]. The RMS ratio of the electron charge to the electron mass becomes dimensionless and equal to 2.04098×10²¹, which is the square root of the electric-to-gravitational force ratio for the electron.
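The quoted figure 2.04098×10²¹ can be checked independently of any unit system, since the ratio of the Coulomb force to the gravitational force between two electrons is dimensionless (the separation cancels). A minimal sketch with CODATA 2018 values:

```python
import math

# CODATA 2018 values
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
m_e  = 9.1093837015e-31  # electron mass, kg

# Coulomb force / gravitational force between two electrons; distance cancels
force_ratio = e**2 / (4 * math.pi * eps0 * G * m_e**2)
print(f"sqrt(F_electric / F_grav) = {math.sqrt(force_ratio):.5e}")  # ~2.04e21
```

The square root comes out near 2.04×10²¹, consistent with the charge-to-mass ratio the question attributes to the RMS.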
Although the proof is quite simple and straightforward, I have started meeting people who disagree with my claim but cannot come up with a rational argument.
I would like your opinion and arguments pro or against. This could be a rewarding scientific discussion given the importance of this claim for the history of science and beyond.
The short proof is in the attached pdf and the full context in my paper
====================================================
As a result of discussions and critical analysis, I have summarised my position in a few answers below, but I have decided to consolidate the most recent here as a supplement to the attached pdf.
I had intended to strengthen my arguments in a way that would have increased the level of complexity. However, I found a shorter proof that the Stoney mass has no independent physical existence.
Assumptions:
- Stoney defined the mass as an expression based on a pure dimensional-analysis relationship, without any implied or explicit ontological claims.
- By Buckingham's assertion, physical laws do not depend on the choice of base units.
- The system of units [m s] (RMS) can validly replace the system [kg m s As], as described in [1].
By examining different systems of units and their corresponding expressions for the Stoney mass, we can shed light on its physical existence. The CGS and SI systems both express the Stoney mass in their respective base units of mass (grams or kilograms). However, in a different system of units such as the Rationalized Metric System (RMS) [1], there are no equivalent RMS dimensional constants, as in the SI Stoney formula, to combine with the electron charge to produce a mass value. The Stoney mass expression cannot be constructed in RMS.
In simpler terms, the Stoney mass is a consequence of the arbitrarily chosen base units for mass and current (and consequently charge), leading to what is known as the incommensurability of units. This demonstrates that the Stoney mass is not observable or experimentally meaningful outside the chosen context of CGS or SI units.
Thus it is evident that the Stoney mass lacks a physical manifestation beyond its theoretical formulation in specific unit systems. It exists as an artifact of the incommensurability between the base units of mass and charge. Note that, in contrast, the SI/CGS expression for the Planck mass does not vanish under conversion to RMS units; a dimensional expression is still retained, albeit a simpler one.
When we dig deeper into the fundamental interactions and physical laws, we find no empirical evidence or measurable effects associated with the Stoney mass, reinforcing the understanding that it holds no substantial physical connotation.
The meaning of the Stoney mass in SI or CGS is the mass equivalent of the fundamental unit of electron charge, in terms of the Stoney-mass rest energy and (possibly) the equivalent finite electric-field energy of the electron.
The introduction of complex numbers into physics was at first superficial, but now they seem increasingly fundamental. Are we missing their true interpretation? What do you think?
Recently I asked a question related to QCD, and in response the reliability of QCD itself was challenged by many researchers.
It left me with the question: what exactly is fundamental in physics? Can we rely entirely on the two equations given by Einstein? If not, then what can we regard as fundamental in physics?
Should this set of Constants Originate in the Equations that Dominate the Existence and Evolution of Nature?
There are over 300 physical constants in physics [1][2]: c, h, G, e, α, m_e, m_p, θ, μ₀, g, H₀, Λ, ..., with different definitions [3], functions, and statuses; some are measured, some are derived [4], and some are conjectured [5]. There is a recursive relationship between physical constants, capable of establishing, from a few constants, the dimensions of the whole of physics [6], such as the SI units. There is a close correlation between physical constants and the laws of physics. Lévy-Leblond said that any universal fundamental constant may be described as a concept synthesizer, expressing the unification of two previously unconnected physical concepts into a single one of extended validity [7]; an example is the mass-energy equation E = mc². Physicists are skeptical that many constants are truly constant [8], even including the invariance of the speed of light. But "letting a constant vary implies replacing it by a dynamical field consistently" [9]; to avoid being trapped in a causal loop, we have to admit that there is a set of fundamental constants that are eternally invariant*.
So which physical constants are the most fundamental natural constants? Are they the ones that exhibit invariance: Lorentz invariance, gauge invariance, diffeomorphism invariance [10]? Planck's 'units of measurement' [11] combine the three constants: the Planck constant h, the speed of light c, and the gravitational constant G. "These quantities will retain their natural meaning for as long as the laws of gravity, the propagation of light in vacuum and the two principles of the theory of heat hold, and, even if measured by different intelligences and using different methods, must always remain the same." [12] This should be the most unignorable clue to the best provenance of these constants: should they be the coefficients of some extremely important equations? [13]
-------------------------------
Notes
* They are eternal and unchanging, both at the micro and macro level, at any stage of the evolution of the universe, even at the Big Bang, the Big Crash.
-------------------------------
References
[1] Particle Data Group: Zyla, P., R. Barnett, J. Beringer, O. Dahl, D. Dwyer, D. Groom, C.-J. Lin, K. Lugovsky, E. Pianori, et al. (2020). "Review of particle physics." Progress of Theoretical and Experimental Physics 2020(8): 083C01.
[2] Tiesinga, E. (2021). "CODATA recommended values of the fundamental physical constants: 2018."
[4] DuMond, J. W. (1940). "A Complete Isometric Consistency Chart for the Natural Constants e, m and h." Physical Review 58(5): 457.
[5] Carroll, S. M., W. H. Press and E. L. Turner (1992). "The cosmological constant." Annual review of astronomy and astrophysics 30: 499-542.
[6] Martin-Delgado, M. A. (2020). "The new SI and the fundamental constants of nature." European Journal of Physics 41(6): 063003.
[7] Lévy-Leblond, J.-M. (1977, 2019). "On the Conceptual Nature of the Physical Constants". The Reform of the International System of Units (SI), Philosophical, Historical and Sociological Issues.
[8] Dirac, P. A. M. (1979). "The large numbers hypothesis and the Einstein theory of gravitation " Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 365.1720: 19-30.
Webb, J., M. Murphy, V. Flambaum, V. Dzuba, J. Barrow, C. Churchill, J. Prochaska and A. Wolfe (2001). "Further evidence for cosmological evolution of the fine structure constant." Physical Review Letters 87(9): 091301.
[9] Ellis, G. F. and J.-P. Uzan (2005). "c is the speed of light, isn't it?" American journal of physics 73(3): 240-247.
[10] Utiyama, R. (1956). "Invariant theoretical interpretation of interaction." Physical Review 101(5): 1597.
Gross, D. J. (1995). "Symmetry in physics: Wigner's legacy." Physics Today 48(12): 46-50.
[11] Stoney, G. J. (1881). "LII. On the physical units of nature." The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 11(69): 381-390.
Meschini, D. (2007). "Planck-Scale Physics: Facts and Beliefs." Foundations of Science 12(4): 277-294.
[12] Robotti, N. and M. Badino (2001). "Max Planck and the 'Constants of Nature'." Annals of Science 58(2): 137-162.
Can Physical Constants Which Are Obtained with Combinations of Fundamental Physical Constants Have a More Fundamental Nature?
Planck scales (Planck's 'units of measurement') are combinations of the three physical constants h, c, G: Planck scales = f(c, h, G):
Planck time: t_P = √(ℏG/c⁵) = 5.39×10⁻⁴⁴ s ......(1)
Planck length: L_P = √(ℏG/c³) = 1.62×10⁻³⁵ m ......(2)
Planck mass: M_P = √(ℏc/G) = 2.18×10⁻⁸ kg ......(3)
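Equations (1)-(3) are straightforward to evaluate numerically. A minimal sketch using CODATA 2018 values:

```python
import math

# CODATA 2018 values
hbar = 1.054571817e-34  # reduced Planck constant, J s
c    = 299792458.0      # speed of light, m/s
G    = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2

t_P = math.sqrt(hbar * G / c**5)  # Planck time,   ~5.39e-44 s
L_P = math.sqrt(hbar * G / c**3)  # Planck length, ~1.62e-35 m
M_P = math.sqrt(hbar * c / G)     # Planck mass,   ~2.18e-8 kg
print(f"t_P = {t_P:.3e} s, L_P = {L_P:.3e} m, M_P = {M_P:.3e} kg")
```

Note that the dominant uncertainty in all three numbers comes from G, by far the least precisely measured of the three constants.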
“These quantities will retain their natural meaning for as long as the laws of gravity, the propagation of light in vacuum and the two principles of the theory of heat hold, and, even if measured by different intelligences and using different methods, must always remain the same.”[1] And because of the possible relation between Mp and the radius of the Schwarzschild black hole, the possible generalized uncertainty principle [2], makes them a dependent basis for new physics [3]. But what exactly is their natural meaning?
However, the physical constants, the speed of light, c, the Planck constant, h, and the gravitational constant, G, are clear, fundamental, and invariant.
c: bounds the relationship between space and time, with c = ΔL/Δt, and Lorentz invariance [4];
h: bounds the relationship between energy and momentum, with h = E/ν = pλ, and energy-momentum conservation [5][6];
G: bounds the relationship between space-time and energy-momentum, with the Einstein field equation G_μν = (8πG/c⁴) T_μν, and general covariance [7].
The physical constants c, h, G already determine all fundamental physical phenomena‡. So can the Planck scales, obtained by combining them, be even more fundamental than they are? Could it be that the essence of physics is (c, h, G) = f(t_P, L_P, M_P), rather than equations (1), (2), (3)? From what physical fact, or what physical imagination, are we supposed to get this notion? Having never seen such an argument, we simply take the Planck scales and use them, while still recognizing the fundamentality of c, h, G. Evidently the Planck scales are not fundamental physical constants; they can only be regarded as a kind of 'units of measurement'.
So are they a kind of parameter? According to Eqs. (1)(2)(3), t_P, L_P, M_P can be expressed directly in terms of c, h, G, and the substituted expression loses its meaning.
So are they a principle? Then what do they express? What kind of behavioral pattern do they describe? Quantum gravity takes them as a "baseline", but only in the order-of-magnitude sense, not as exact numerical values.
Thus, do the Planck time, length, and mass, determined entirely by h, c, G, really have unquestionable physical significance?
-----------------------------------------
Notes
‡ Please ignore for the moment the phenomena within the nucleus of the atom; eventually we will understand that they are still determined by these three constants.
-----------------------------------------
References
[1] Robotti, N. and M. Badino (2001). "Max Planck and the 'Constants of Nature'." Annals of Science 58(2): 137-162.
[2] Maggiore, M. (1993). A generalized uncertainty principle in quantum gravity. Physics Letters B, 304(1), 65-69. https://doi.org/https://doi.org/10.1016/0370-2693(93)91401-8
[3] Kiefer, C. (2006). Quantum gravity: general introduction and recent developments. Annalen der Physik, 518(1-2), 129-148.
[4] Einstein, A. (1905). On the electrodynamics of moving bodies. Annalen der Physik, 17(10), 891-921.
[5] Planck, M. (1900). The theory of heat radiation (1914 (Translation) ed., Vol. 144).
[6] Einstein, A. (1917). Physikalisehe Zeitschrift, xviii, p.121
[7] Petruzziello, L. (2020). A dissertation on General Covariance and its application in particle physics. Journal of Physics: Conference Series,
Is the Fine-Structure Constant the Most Fundamental Physical Constant?
The fine-structure constant is obtained when the classical Bohr atomic model is made relativistic [1][2]: α = e²/ℏc (in Gaussian units), a number whose value lies very close to 1/137. α does not correspond to any elementary physical unit, since it is dimensionless. It may also be variable [6][7]*.
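In SI units the same dimensionless number is written α = e²/(4πε₀ℏc); the Gaussian-units form e²/ℏc absorbs the 4πε₀. A minimal numerical check with CODATA 2018 values:

```python
import math

# CODATA 2018 values
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 299792458.0       # speed of light, m/s

# Fine-structure constant in SI form
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = {alpha:.9f}, 1/alpha = {1/alpha:.3f}")  # 1/alpha ~ 137.036
```

The dimensionlessness is visible in the formula itself: the units of e², ε₀, ℏ, and c cancel completely, which is precisely why α has the same value in every unit system.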
Sommerfeld introduced this number as the ratio of the "relativistic boundary momentum" p₀ = e²/c of the electron in the hydrogen atom to the first of the n "quantum momenta" p_n = nh/2π. Sommerfeld argued that α = p₀/p₁ would "play an important role in all succeeding formulas"‡ [5].
There are several usual interpretations of the significance of the fine-structure constant [3].
a) In 1916, Sommerfeld went no further than to suggest that more fundamental physical questions might be tied to this "relational quantity." In Atomic Structure and Spectral Lines, α was given a somewhat clearer interpretation as the ratio of the orbital speed of an electron "in the first Bohr orbit" of the hydrogen atom to the speed of light [5].
b) α plays an important role in the details of atomic emission, giving the spectrum a "fine structure".
c) The electrodynamic interaction was thought of as a process in which light quanta are exchanged between electrically charged particles, with the fine-structure constant recognized as a measure of the strength of this interaction [5].
d) α is a combination of the elementary charge e, Planck's constant h, and the speed of light c. These constants represent electromagnetic interaction, quantum mechanics, and relativity, respectively. Does that mean that, if G is ignored (or cancels out), α represents the complete range of physical phenomena?
Questions implicated here :
1) What does the dimensionless nature of α imply? The absence of dimension means that there is no conversion relation. Since α is a coupling between photons and electrons, is it a characterization of the consistency between photons and charges?
2) The various interpretations of α are not in conflict with each other; should they therefore be unified?
3) Is our current interpretation of α the ultimate one? Is it sufficient?
4) Is α the most fundamental physical constant**? This is similar to the Planck scales† in that both are combinations of other fundamental physical constants.
-----------------------------------
Notes
* Spatial Variation and time variability.
‡ Sommerfeld considered α "important constants of nature, characteristic of the constitution of all the elements."[4]
-----------------------------------
References
[1] Sommerfeld, A. (1916). The fine structure of Hydrogen and Hydrogen-like lines: Presented at the meeting on 8 January 1916. The European Physical Journal H (2014), 39(2), 179-204.
[2] Sommerfeld, A. (1916). Zur Quantentheorie der Spektrallinien. Annalen der Physik, 356(17), 1-94.
[3] Zhang, T. (张天蓉) (2022). The fine-structure constant (精细结构常数). https://blog.sciencenet.cn/blog-677221-1346617.html
[4] Heilbron, J. L. (1967). The Kossel-Sommerfeld theory and the ring atom. Isis, 58(4), 450-485.
[5] Eckert, M., & Märker, K. (2004). Arnold Sommerfeld. Wissenschaftlicher Briefwechsel, 2, 1919-1951.
[6] Wilczynska, M. R., Webb, J. K., Bainbridge, M., Barrow, J. D., Bosman, S. E. I., Carswell, R. F., Dąbrowski, M. P., Dumont, V., Lee, C.-C., Leite, A. C., Leszczyńska, K., Liske, J., Marosek, K., Martins, C. J. A. P., Milaković, D., Molaro, P., & Pasquini, L. (2020). Four direct measurements of the fine-structure constant 13 billion years ago. Science Advances, 6(17), eaay9672. https://doi.org/doi:10.1126/sciadv.aay9672
[7] Webb, J. K., King, J. A., Murphy, M. T., Flambaum, V. V., Carswell, R. F., & Bainbridge, M. B. (2011). Indications of a Spatial Variation of the Fine Structure Constant. Physical Review Letters, 107(19), 191101. https://doi.org/10.1103/PhysRevLett.107.191101
We are not in a position to scientifically accept five fundamental forces.
According to relativity, gravity is not considered a force. Nevertheless, scientists, including those who advocate relativity, persist in asserting that there are four fundamental forces: gravitational, electromagnetic, strong nuclear, and weak nuclear. Simply put, physicists who celebrate the triumph of relativity thereby undermine its credibility or completeness.
This raises the question: Why haven't physicists reduced the fundamental forces to three?
Is Uniqueness Their Common and Only Correct Answer?
I. We often say that xx has no physical meaning, or has physical meaning. So what is "physical meaning", and what is the meaning of "physical meaning"*?
"As far as the causality principle is concerned, if the physical quantities and their time derivatives are known in the present in any given coordinate system, then a statement will only have physical meaning if it is invariant with respect to those transformations for which the coordinates used are precisely those for which the known present values remain invariant. I claim that all assertions of this kind are uniquely determined for the future as well, i.e., that the causality principle is valid in the following formulation: From knowledge of the fourteen potentials ......, in the present all statements about them in the future follow necessarily and uniquely insofar as they have physical meaning" [1].“Hilbert's answer is based on a more precise formulation of the concept of causality that hinges on the distinction between meaningful and meaningless statements.”[2]
Hawking said [4], "I take the positivist view that a physical theory is nothing more than a mathematical model, and it is pointless to ask whether it corresponds to the real. All one can seek is that its predictions agree with its observations."
Is there no difference between physics and mathematics? We believe the difference lies in the fact that physics must have a physical meaning, whereas mathematics need not. Mathematics can be said to have a physical meaning only when it finds a corresponding expression in physics.
II. We often say, restore naturalness, preserve naturalness, the degree of unnaturalness, Higgs naturalness problem, structural naturalness, etc., so what is naturalness or unnaturalness?
“There are two fundamental concepts that enter the formulation of the naturalness criterion: symmetry and effective theories. Both concepts have played a pivotal role in the reductionist approach that has successfully led to the understanding of fundamental forces through the Standard Model. ” [6]
Judging naturalness by symmetry is a good criterion; symmetry is the only outcome of choosing stability, and nothing seems lacking there. But using effective theories as another criterion must be incomplete, because truncation obscures some of the most important details.
III. We often say that "The greatest truths are the simplest"(大道至简†), so is there a standard for judging the simplest?
"Einstein was firmly convinced that all forces must have an ultimate unified description and he even speculated on the uniqueness of this fundamental theory, whose parameters are fixed in the only possible consistent way, with no deformations allowed: 'What really interests me is whether God had any choice in the creation of the world; that is, whether the necessity of logical simplicity leaves any freedom at all' ”[6]
When God created the world, there would not have been another option. The absolute match between the physical world and the mathematical world suggests that as long as mathematics is unique, physics must be equally unique. The physical world can only be an automatic emulator of the mathematical world, similar to a cellular automaton.
It is clear that consensus is still a distant goal, and there will be no agreement on any of the following issues at this time:
1) Should there be a precise and uniform definition of having physical meaning? Does the absence of physical meaning mean that there is no corresponding physical reality?
2) Are all concepts in modern physics physically meaningful? For example, probabilistic interpretation of wave functions, superposition states, negative energy seas, spacetime singularities, finite and unbounded, and so on.
3) "Is naturalness a good guiding principle?"[3] "Does nature respect the naturalness criterion?"[6]
4) In physics, is simplicity in essence uniqueness? Is uniqueness a necessary sign of correctness‡?
---------------------------------------------------------
Notes:
* xx wrote a book, "The Meaning of Meaning", which Wittgenstein rated poorly; Russell thought otherwise and gave it a positive review instead. Wittgenstein thought Russell was trying to help sell the author and that Russell was no longer serious [5]. If one can write about the Meaning of Meaning, then one can follow with the Meaning of Meaning of Meaning. In that case, how does one ever arrive at a final meaning? It is the same as causality: there must exist an ultimate meaning that cannot be pursued any further.
‡ For example, the shortest-path principle and Einstein's field equation G_μν = k·T_μν both embody the idea that uniqueness is correctness (excluding the ultimate interpretation of space-time).
† “万物之始,大道至简,衍化至繁。”At the beginning of all things, the Tao is simple; later on, it evolves into prosperous and complexity. Similar to Leonardo Da Vinci,"Simplicity is the ultimate sophistication." However, the provenance of many of the quotes is dubious.
------------------------------
References:
[1] Rowe, D. E. (2019). Emmy Noether on energy conservation in general relativity. arXiv preprint arXiv:1912.03269.
[2] Sauer, T., & Majer, U. (2009). David Hilbert's Lectures on the Foundations of Physics 1915-1927: Relativity, Quantum Theory and Epistemology. Springer.
[3] Giudice, G. F. (2013). Naturalness after LHC8. arXiv preprint arXiv:1307.7879.
[4] Hawking, S., & Penrose, R. (2018). The nature of space and time (吴忠超,杜欣欣, Trans.; Chinese ed., Vol. 3). Princeton University Press.
[5] Monk, R. (1990). Ludwig Wittgenstein: the duty of genius. London: J. Cape. Morgan, G. (Chinese @2011)
[6] Giudice, G. F. (2008). Naturally speaking: the naturalness criterion and physics at the LHC. Perspectives on LHC physics, 155-178.
Gravitational potential originating from the distant masses of the universe is about 10^8 times larger than the Sun's gravitational potential at the Earth's distance, and yet the latter is sufficient to keep the Earth in its orbit.
It cannot be excluded that the luminal speed, according to c^2 = 2GM_u/R_u, is essentially determined and limited by the gravitational potential of the distant masses (subscript u). Notably, Einstein in 1911 found that light deflection close to the Sun results from the locally enhanced gravitational potential.
So it also cannot be excluded that the electromagnetic properties of vacuum space, according to 1/(ε_0 µ_0) = 2GM_u/R_u, are essentially determined by the gravitational potential from the distant masses.
Accidentally or not, it appears noticeable that the potential energy of a mass m at the gravitational potential of the universal masses approximately corresponds to the relativistic energy equivalent E = mc^2.
Finally, a characteristic deceleration observed on rapidly spinning rotors also indicates a possible interaction with distant masses.
I just added an answer to an older discussion:
"When it is not accidental that potential energy of a mass m at the level of local cumulative gravitational potential originating from remote masses of the universe equals E = mc^2, shouldn't it be worthwhile to reconsider Mach's principle ?"
In fact, what is a charge? This is a question that has not yet been answered in physics. But in my view the charge vibrates at the speed of light and imparts this speed c to the photons, so that they travel at the same speed, the speed of light, by a mechanism not yet known in fundamental physics. In addition, in my view, the charge quantizes the energies it provides to the photons that it "produces"!
I think this speed is quantized, but is it a constant in all the jumps of the electron?
The cumulative gravitational potential originating mainly from the outer masses of our visible universe is about 8 orders of magnitude larger than the Sun's gravitational potential at the Earth's distance, which also holds all the other planets on track. Remarkably, the potential energy of a mass m at the level of the gravitational potential originating from the masses of remote parts of our universe is of the order E = mc^2.
The potential energy of a 1 kg mass due to the Sun's gravitational potential at the Earth's position is about 10^9 J (1 GWs). The cumulative gravitational potential of all masses within the visible universe is about 10^8 times larger. At this potential, a 1 kg mass holds a potential energy of about 10^17 J, which is equivalent to E = mc^2. This may be interpreted as a strong vote in favour of Mach's Principle, telling us that certain local phenomena might be related to the background masses of the universe.
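The arithmetic behind these orders of magnitude is easy to reproduce; a minimal sketch, assuming standard round-number constants:

```python
# Rough check that the Sun's potential energy of a 1 kg mass at Earth's
# orbit is ~1e9 J, and that the claimed factor of ~1e8 lands near the
# rest energy E = m c^2. Constants are assumed round textbook values.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
r_earth = 1.496e11   # Earth-Sun distance, m
c = 2.998e8          # speed of light, m/s

phi_sun = G * M_sun / r_earth     # potential energy per kg, ~8.9e8 J
rest_energy = c**2                # rest energy per kg, ~9.0e16 J
ratio = rest_energy / phi_sun     # ~1.0e8, the factor quoted above

print(f"U_sun(1 kg)  = {phi_sun:.2e} J")
print(f"m c^2 (1 kg) = {rest_energy:.2e} J")
print(f"ratio        = {ratio:.1e}")
```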
In Special Relativity, a photon of frequency f is considered as a particle of mass m = hf/c^2 with zero proper mass. It is experimentally verified that the photon carries momentum and exerts radiation pressure on the targets it impacts.
On the other hand, in General Relativity, it is proposed that the gravitational interaction between massive objects is due to the fact that gravitational field curves space-time. It has been verified that a massive body alters the trajectory and velocity of a beam of light that interacts with its gravitational field (Shapiro effect and gravitational lens effect). Within the frame of General Relativity, these effects are explained by proposing that photons follow geodesic trajectories within curved spaces. Obviously in this framework the mass of the photons can be considered negligible. But, in General Relativity, is the photon a massless particle?
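The quantities in the question above can be made concrete; a minimal sketch, assuming an arbitrarily chosen frequency of green light:

```python
# Effective mass and momentum of a photon, using m = h f / c^2 and
# p = h f / c. The frequency is an assumed example value (green light).
h = 6.626e-34        # Planck constant, J s
c = 2.998e8          # speed of light, m/s
f = 5.45e14          # example frequency, Hz

E = h * f            # photon energy, ~3.6e-19 J
m_eff = E / c**2     # effective (relativistic) mass, ~4.0e-36 kg
p = E / c            # momentum responsible for radiation pressure

print(f"E = {E:.3e} J, m_eff = {m_eff:.3e} kg, p = {p:.3e} kg m/s")
```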
Special and general relativity, among the pillars and among the three most successful physical theories, give a prime, hierarchically high role to the properties of light, which are the starting point, the "efficient cause" in Aristotelian language, of their construction.
However, these theories, despite their continuing confirmational success, show resistance to being compatible with a large portion of the rest of physics (exceptions such as Dirac's relativistic quantum mechanics exist).
Therefore one gets to thinking that perhaps their physical motivation was more peripheral than would be required to deserve such a prime role in unification (a discipline-level, definitional trait) as is currently ascribed to them.
First of all, we must concede the "variability" of space-time and that it is a physical reality. First, this is the common ground of both the Special and the General Theory of Relativity; the main body of relativity can be trusted to be correct and has been more than adequately verified by experiments. These experiments typically include the earliest light-bending experiments [Eddington 1919], the round-the-world flight experiments [1], the GPS clock-correction experiments [2], and later the LIGO gravitational-wave-detection experiments [3], the observation of gravitational lensing phenomena [4][5], and so on. The meaning of "variable" is not necessarily the relative spacetime of SR or the curved spacetime of GR. Second, philosophically we should also recognize that spacetime cannot be just a variable background for matter, since no interaction can be separated from spacetime. Spacetime is not just a distance scale but should take on the function of transmitting interactions.
What our teachers emphasized when they talked about the difference between the applicability of SR and GR is that SR is an event in flat spacetime and GR is an event in curved spacetime. But there is only one spacetime, and for a moving electron, its SR spacetime and GR spacetime would have to be the same** if we had to consider both its SR and GR effects*.
Einstein's fondness for the concept of curved spacetime may have arisen from the intuitive nature of the geodesic concept, or perhaps from an affirmation of the maximal nature of spacetime dynamics. In any case, GR's expression of gravity in terms of the curved-spacetime concept became orthodox, though it lies beyond everyone's empirical perception. Feynman constantly questioned the notion of "spacetime curvature" and used the concept of a "measure" of spacetime in general relativity instead of "curvature" [6]. Weinberg thought that geometry might more appropriately be viewed as an analogy in GR, politely expressing his skepticism, and L. Susskind, when teaching GR, said that no one knows what four-dimensional spacetime bending looks like†. We believe that Einstein was also not a great believer in the notion of four-dimensional spacetime bending, and his subsequent repeated turn to the study of five-dimensional spacetime [7][8] does not appear to have been solely for the sake of unifying gravity with Maxwell's electromagnetic theory, but perhaps also a passing attempt to find a dimension into which the three-dimensional sphere could be embedded‡.
All of our current measurements and verifications of SR and GR spacetime do not involve true spacetime "curvature", although many methods have been proposed [9]. The LIGO gravitational-wave measurements and the gravitational redshift and violet shift can only be considered as responses to changes in the spacetime metric. This is similar to Feynman's view.
Let us assume a scenario: an electron of mass m in four-dimensional spacetime, and a stationary observer in a fifth-dimensional abstract space who keeps changing the direction and velocity of the electron's motion in four-dimensional spacetime through the fifth dimension. We ask, in the opinion of this observer:
1) Do SR spacetime and GR spacetime have to be identical?
2) Is it possible to fully express spacetime "curvature" with a spacetime metric? Excluding "twisting".
3) Is there a notion of "curvature" for one-dimensional time? In GR this is usually identified with gravitational time dilation [10]. One-dimensional space can have a concept of curvature, but in which direction? How can it avoid interfering with the other two dimensions?
--------------------------------------------------------------
Notes:
* Usually physics recognizes that GR effects are ignored because the electron mass is so small. This realization masks great problems. We are extrapolating from macroscopic manifestations to microscopic manifestations, and from manifestations abstracted as point particles at a distance to manifestations when structure exists at close range. As long as structure exists, when distance is sufficiently small, everything behaves as a distributed field. At this point, the abstract notion of force (magnitude, direction, point of action) has disappeared. For electrons, even the concept of charge disappears. Yet the concept of gravity does not necessarily disappear at this point, thus causing a reversal of the order of magnitude difference in action at very close distances.
** There is a difference between this and the state of affairs during GPS clock calibration. When doing GPS calibration, we use the ground as the reference frame. A flying satellite has an SR effect relative to the ground, but we approximate the spacetime as flat; the GR effect, on the other hand, is relative to the ground, not intrinsic to the satellite. Thus the composite calibration is the difference between the two. If one changed the scenario so that a relatively immobile space station had to be calibrated against the clock of some vehicle on the ground moving at high speed around it, then the composite calibration would be the sum of the two. Please correct me if there are problems with this scenario.
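The note above can be illustrated with the usual back-of-the-envelope GPS numbers; a sketch, assuming textbook values for Earth's GM and the GPS orbital radius:

```python
# Sketch of the GPS clock-calibration arithmetic: SR makes the satellite
# clock run slow, GR makes it run fast; the composite correction is the
# difference. Input values are assumed textbook numbers.
GM = 3.986e14        # Earth's GM, m^3/s^2
c = 2.998e8          # speed of light, m/s
r_ground = 6.371e6   # Earth radius, m
r_sat = 2.6561e7     # GPS orbital radius, m
day = 86400.0        # seconds per day

v_sat = (GM / r_sat) ** 0.5                          # ~3.9 km/s orbital speed
sr_loss = (v_sat**2 / (2 * c**2)) * day              # ~7 us/day slow (SR)
gr_gain = GM * (1/r_ground - 1/r_sat) / c**2 * day   # ~46 us/day fast (GR)
net = gr_gain - sr_loss                              # ~38 us/day composite

print(f"SR: -{sr_loss*1e6:.1f} us/day, GR: +{gr_gain*1e6:.1f} us/day, "
      f"net: +{net*1e6:.1f} us/day")
```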
† He also said, when teaching QM, that no one knows what the top and bottom spins of the electron are.
‡ Einstein says that the universe is a finite three-dimensional sphere.
--------------------------------------------------------------
References:
[1] Hafele, J. C. and R. E. Keating (1972). "Around-the-World Atomic Clocks: Observed Relativistic Time Gains." Science 177(4044): 168-170.
[2] "Relativity in GNSS"; Ashtekar, A. and V. Petkov (2014). Springer Handbook of Spacetime. Berlin, Heidelberg, Springer Berlin Heidelberg.
[3] Cahillane, C. and G. Mansell (2022). "Review of the Advanced LIGO gravitational wave observatories leading to observing run four." Galaxies 10(1): 36.
[5] Tran, K.-V. H., A. Harshan, K. Glazebrook, G. K. Vasan, T. Jones, C. Jacobs, G. G. Kacprzak, T. M. Barone, T. E. Collett and A. Gupta (2022). "The AGEL Survey: Spectroscopic Confirmation of Strong Gravitational Lenses in the DES and DECaLS Fields Selected Using Convolutional Neural Networks." The Astronomical Journal 164(4): 148.
[6] Feynman, R. P. (2005). The Feynman Lectures on Physics(II).
[7] Pais, A. (1983). The science and the life of Albert Einstein II Oxford university press.
[8] Weinberg, S. (2005). "Einstein’s Mistakes." Physics Today 58(11).
[9] Ciufolini, I. and M. Demianski (1986). "How to measure the curvature of space-time." Physical Review D 34(4): 1018.
[10] Roura, A. (2022). "Quantum probe of space-time curvature." Science 375(6577): 142-143.
If the transition is instantaneous, the moment the photon appears must be superluminal.
In quantum mechanics, Bohr's semi-classical model, Heisenberg's matrix mechanics, and Schrödinger's wave function are all able to support the assumption of energy levels in atoms and coincide with atomic spectra. These energy levels underlie the operating mode of most light sources, including lasers. This shows that the body of each theory is correct. If they are merged into one theory describing the structural picture, it must have the characteristics of all three at the same time: Bohr's ∨ Heisenberg's ∨ Schrödinger's will form the final atomic theory*.
The jump of an electron in an atom, whether in absorption or radiation, takes the form of a single photon, the smallest energy unit. For the same energy difference ΔE, the jump chooses a single photon over multiple photons of lower frequency ν, suggesting that a single-photon structure matches the atomic orbital structure more naturally**.
ΔE = hν ......(1)
ΔE = Em - En ......(2)
It is clear that without information about Em and En at the same time, generating a definite jump frequency ν is impossible. "Rutherford pointed out that if, as Bohr did, one postulates that the frequency of light ν, which an electron emits in a transition, depends on the difference between the initial energy level and the final energy level, it appears as if the electron must 'know' to what final energy level it is heading in order to emit light with the right frequency."[1]
Bohr's postulate of the energy-level difference, Eqs. (1)(2), is valid [2]. But it does not hold as an axiomatic postulate. This is not only because all possible reasons have not been ruled out. For example, one of the most important reasons is that the relationship between the "wave structure" of the electron and the electromagnetic field has not been determined†. Only if this direct relationship is established can the transition process between them be described. It is also required that the wave function and the electromagnetic field not be independent things, and that the wave function be a continuous field distribution, not a probability distribution [5]. More importantly, Eqs. (1)(2) do not fulfill the conditions of an axiomatic postulate, since they cannot simply ignore the missing information‡.
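As a worked instance of Eqs. (1)(2), one can take the Bohr levels of hydrogen (assumed textbook values; the 2 → 1 jump gives the Lyman-alpha line):

```python
# Illustration of Eqs. (1)-(2) for hydrogen in the Bohr model: the 2 -> 1
# jump energy fixes one definite photon frequency. Assumed textbook values.
h = 6.626e-34          # Planck constant, J s
eV = 1.602e-19         # joules per electronvolt
c = 2.998e8            # speed of light, m/s

def E_n(n):
    """Bohr energy level of hydrogen, in joules."""
    return -13.6 * eV / n**2

dE = E_n(2) - E_n(1)   # Eq. (2): Em - En, ~10.2 eV
nu = dE / h            # Eq. (1): ~2.47e15 Hz
lam = c / nu           # ~121.6 nm, the Lyman-alpha line

print(f"dE = {dE/eV:.1f} eV, nu = {nu:.3e} Hz, lambda = {lam*1e9:.1f} nm")
```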
Posed as a comparison, this question is the same as asking how the photon controls its speed [3] and where the photon should go next. Both are photon behaviors that must rest on a common ground.
Considering the electron transition as a source of light, it must equally be consistent with the principle of Special Relativity: the photons radiated must travel at the speed of light c, independently of the speed of the electrons††. However, if the light-emitting process is not continuous, a superluminal phenomenon occurs.
We decompose the light-emitting process into two stages. The first stage, from "nothing" to "something", is the transition stage; the second stage, from something to propagation, is the normal state. According to classical physics, if the light emission is instantaneous, i.e., it occupies no time and no space, then we can infer that the photon's passage from nothing to something is not a continuous process but one of infinite speed: the speed at which the photon is produced is infinite. We cannot believe that the speed of propagation of light is finite while the speed at which light is produced is infinite. There is no way to bridge from the infinite to the finite, and we believe this also violates the principle of the constancy of the speed of light.
There is no other choice for solving this problem. First, we must recognize that all light emission is a transitional "process" that occupies time and space, and that this transitional process must also proceed at the speed of light, regardless of the speed of the light source (and we consider all forms of light emission to be light sources). This is guaranteed by, and only by, the theory of relativity: SR matches the spacetime measure to the speed of light at any light-source speed. Second, photons cannot occur in a probabilistic manner, since probability implies independence from spacetime and leaves the infinity problem in place. Third, photons cannot be treated as point particles in this scenario; that is, the photon must have spatial extension, otherwise the transition process cannot be established. Fourth, in order to establish a continuous process of light emission, the "source" of photons, whether an accelerated electron, the "wave function" of a jumping electron, or electron-positron annihilation, must be able, with the help of space and time, to transition continuously into photons. This forces us to think about what the wave function is.
Thinking carefully about this question, maybe we can get a sense of the nature of everything, of the extensive and indispensable role of time and space.
Our questions are:
1) Regardless of which theory the solution belongs to, where does the electron get the information about the jump target? Does this mean that the wave function of the electron should span all "orbitals" of the atom at the same time?
2) If the jump is a non-time-consuming process, should it be considered a superluminal phenomenon¶ [4]?
3) If the jump is a non-time consuming process, does it conflict with the Uncertainty Principle [5]?
4) What relationship should the wave function have to the photon to ensure that it produces the right photon?
-------------------------------------------------------------------------
Notes:
* Even the theory of the atomic nucleus. After all, when the nucleus is considered as a "black box", it presents only electromagnetic and gravitational fields.
** It also limits the possibility that the photon is a mixed-wavelength structure. "Bohr noticed that a wave packet of limited extension in space and time can only be built up by the superposition of a number of elementary waves with a large range of wave numbers and frequencies" [2].
† For example, there is a direct relationship between the "electron cloud" expressed by the wave function of the hydrogen steady state, and the radiating photons. With this direct relationship, it is possible to determine the frequency information between the transition energy levels.
‡ If a theory considers information as the most fundamental constituent, then it has to be able to answer the questions involved here.
†† Why and how this independence is achieved cannot, by its very nature, be divorced from SR, but additional definitions are needed. See the separate topic.
¶ These questions would relate to the questions posed in [3][4][5].
-------------------------------------------------------------------------
References:
[1] Faye, J. (2019). "Copenhagen Interpretation of Quantum Mechanics." The Stanford Encyclopedia of Philosophy from <https://plato.stanford.edu/archives/win2019/entries/qm-copenhagen/>.
[2] Bohr, N., H. A. Kramers and J. C. Slater (1924). "LXXVI. The quantum theory of radiation." The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 47(281): 785-802. This was an important paper known as "BSK"; the principle of conservation of energy-momentum was abandoned, and only conservation of energy-momentum in the statistical sense was recognized.
[3] “How does light know its speed?”;
[4] “Should all light-emitting processes be described by the same equations?”;
[5] “Does Born's statistical interpretation of the wave function conflict with ‘the Uncertainty Principle’?” https://www.researchgate.net/post/NO13_Does_Borns_statistical_interpretation_of_the_wave_function_conflict_with_the_Uncertainty_Principle;
Quantum field theory has a named field for each particle. There is an electron field, a muon field, a Higgs field, etc. To these particle fields the four force fields are added: gravity, electromagnetism, the strong nuclear force and the weak nuclear force. Therefore, rather than nature being a marvel of simplicity, it is currently depicted as a less than elegant collage of about 17 overlapping fields. These fields have quantifiable values at points. However, the fundamental physics and structure of fields is not understood. For all the praise of quantum field theory, this is a glaring deficiency.
Therefore, do you expect that future development of physics will simplify the model of the universe down to one fundamental field with multiple resonances? Alternatively, will multiple independent fields always be required? Will we ever understand the structure of fields?
God said, "Let there be light."
So, did God need to use many means when He created light? Physically we have to ask, "Should all processes of light generation obey the same equation?" "Is this equation the 'God equation'?"
Regarding the types of "light sources", we categorize them according to "how the light is emitted" (the way it is emitted):
Type 0 - naturally existing light. This philosophical assumption is important, because it is impossible to determine which is more essential: that all light is produced by matter, or that all light exists naturally and is transformed into matter. Moreover, naturally existing light could provide us with an absolute spacetime background (free light has a constant speed, independent of the motion of the light source and independent of the observer, which is equivalent to an absolute reference system).
Type I - Orbital Electron Transition[1]: usually determines the characteristic spectra of the elements in the periodic table, they are the "fingerprints" of the elements; if there is human intervention, coherent optical lasers can be generated. According to the assumptions of Bohr's orbital theory, the transitions are instantaneous, there is no process, and no time is required*. Therefore, it also cannot be described using specific differential equations, but only by probabilities. However, Schrödinger believed that the wave equation could give a reasonable explanation, and that the transition was no longer an instantaneous process, but a transitional one. The wave function transitions from one stable state to another, with a "superposition of states" in between [2].
Type II - Light emitted by the accelerated motion of charged particles. There are various scenarios here, and it should be emphasized that theoretically they can produce light of any wavelength, infinitely short to infinitely long, and they are all photons. 1) Blackbody radiation [3][4]: produced by the thermal motion of charged particles [5]; closely dependent on temperature, with a continuous spectrum in its statistical properties. This is the most ubiquitous class of light sources, ranging from stars like the Sun to the cosmic microwave background radiation [6], all of which have the same properties. 2) Radio: the most ubiquitous examples are the electromagnetic waves radiated from antennas of devices such as wireless broadcasting, wireless communications, and radar. 3) Synchrotron radiation [7], e.g. e+e− → e+e−γ: the electromagnetic radiation emitted when charged particles travel in curved paths. 4) Bremsstrahlung [8], for example e+e− → qqg → 3 jets [11]: electromagnetic radiation produced by the acceleration, or especially the deceleration, of a charged particle passing through the electric and magnetic fields of a nucleus; continuous spectrum. 5) Cherenkov radiation [9]: light produced by charged particles when they pass through an optically transparent medium at speeds greater than the speed of light in that medium.
Type III - Particle reactions and nuclear reactions: any physical reaction process that produces photon (boson**) output. 1) Gamma decay; 2) Annihilation of particles and antiparticles when they meet [10]: a universal property of symmetric particles, the most typical physical reaction; 3) Various concomitant light, such as during particle collisions; 4) Transformational light output when light interacts with matter, such as Compton scattering [12].
Type IV - Various redshifts and violet shifts, changing the relative energy of light: gravitational redshift and violet shift; Doppler shift; cosmological redshift.
Type V - Virtual photons [13][14]?
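For the blackbody sources in Type II, the temperature dependence can be made concrete with Wien's displacement law; a sketch, assuming standard constant values:

```python
# Wien's displacement law: lambda_peak = b / T, linking a blackbody's
# temperature to the peak wavelength of its continuous spectrum.
b = 2.898e-3   # Wien displacement constant, m K

def peak_wavelength(T):
    """Peak emission wavelength (m) of a blackbody at temperature T (K)."""
    return b / T

sun = peak_wavelength(5778)    # ~5.0e-7 m: visible light
cmb = peak_wavelength(2.725)   # ~1.1e-3 m: microwave, hence "CMB"

print(f"Sun (5778 K):  {sun*1e9:.0f} nm")
print(f"CMB (2.725 K): {cmb*1e3:.2f} mm")
```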
Our questions are:
Among these types of light-emitting modes, Type II and Type IV obey Maxwell's equations, while the Type I and Type III light-emitting processes are not clearly explained.
We cannot know the light-emitting process, but we can be sure that the result, the final output of photons, is the same. Can we be sure that different processes produce the same photons?
Is the thing that is capable of producing light itself light? Or does it at least contain elements of light, e.g., an electric field E, a magnetic field H? If there are no elements of light in it, then how is the light created? By what means is one energy and momentum converted into another energy hν and momentum h/λ?
There is a view that "Virtual particles are indeed real particles. Quantum theory predicts that every particle spends some time as a combination of other particles in all possible ways"[15]. What then are the actual things that can fulfill this interpretation? Can it only be energy-momentum?
We believe everything needs to be described by mathematical equations (not made-up operators). If the output of a system is the same, then the process that produces the output should also be the same. That is, the output equations for light are the same, whether it is a transition, an accelerated charged particle, or an annihilation process; the difference is only in the input.
------------------------------------------------------------------------------
* Schrödinger said: the theory was silent about the periods of transition or 'quantum jumps' (as one then began to call them). Since intermediary states had to remain disallowed, one could not but regard the transition as instantaneous; but on the other hand, the radiating of a coherent wave train of 3 or 4 feet length, as it can be observed in an interferometer, would use up just about the average interval between two transitions, leaving the atom no time to 'be' in those stationary states, the only ones of which the theory gave a description.
** We know the most about photons, but not so much about the nature of W, Z, and g. Their mass and confined existence is a problem. We hope to be able to discuss this in a follow-up issue.
------------------------------------------------------------------------------
Links to related issues:
【1】"How does light know its speed and maintain that speed?”;
【2】"How do light and particles know that they are choosing the shortest path?”
【3】"light is always propagated with a definite velocity c which is independent of the state of motion of the emitting body.";
【4】“Are annihilation and pair production mutually inverse processes?”; https://www.researchgate.net/post/NO8_Are_annihilation_and_pair_production_mutually_inverse_processes;
------------------------------------------------------------------------------
Reference:
[1] Bohr, N. (1913). "On the constitution of atoms and molecules." The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 26(151): 1-25.
[2] Schrödinger, E. (1952). "Are there quantum jumps? Part I." The British Journal for the Philosophy of science 3.10 (1952): 109-123.
[3] Gearhart, C. A. (2002). "Planck, the Quantum, and the Historians." Physics in perspective 4(2): 170-215.
[4] Jain, P. and L. Sharma (1998). "The Physics of blackbody radiation: A review." Journal of Applied Science in Southern Africa 4: 80-101.
[5] Arons, A. B. and M. Peppard (1965). "Einstein's Proposal of the Photon Concept—a Translation of the Annalen der Physik Paper of 1905." American Journal of Physics 33(5): 367-374.
[6] The Planck Program (cosmic microwave background survey).
[8] Bremsstrahlung.
[9] Neutrino detection by Cherenkov radiation: "Super-Kamiokande." from https://www-sk.icrr.u-tokyo.ac.jp/en/sk/about/; "The Jiangmen Underground Neutrino Observatory (JUNO)." from http://juno.ihep.cas.cn/.
[10] Li, B. A. and C. N. Yang (1989). "CY Chao, Pair creation and Pair Annihilation." International Journal of Modern Physics A 4(17): 4325-4335.
[11] Schmitz, W. (2019). Particles, Fields and Forces, Springer.
[12] Compton, A. H. (1923). "The Spectrum of Scattered X-Rays." Physical Review 22(5): 409-413.
[13] Manoukian, E. B. (2020). Transition Amplitudes and the Meaning of Virtual Particles. 100 Years of Fundamental Theoretical Physics in the Palm of Your Hand: Integrated Technical Treatment. E. B. Manoukian. Cham, Springer International Publishing: 169-175.
[14] Jaeger, G. (2021). "Exchange Forces in Particle Physics." Foundations of Physics 51(1): 13.
[15] Are virtual particles really constantly popping in and out of existence? Or are they merely a mathematical bookkeeping device for quantum mechanics? - Scientific American.
Let us set a reference point very far away from the Earth which remains unmoved.
If I am at the equator and the Earth's surface starts to accelerate to escape speed, my guess is I will be shaken off the Earth. My relative speed to the material that makes up the Earth is zero near me and large relative to the material on the other side of the Earth, thus a net relative speed.
If I am at the North Pole and the Earth rotates faster, then I should keep feeling the same gravitational force. My relative speed to the Earth's material averages to zero, because the contributions at different locations cancel each other.
If I speed up and shoot off the Earth, my relative speed to the Earth is large, as Newton has told us.
Now what happens if I jump up into the air and the Earth starts rotating faster and faster? My guess is I will fall back to the ground and hit it hard. Again, my relative speed to the material that makes up the Earth is zero in this case, because the contributions cancel each other again.
My question is: does the effective force I feel from the material that makes up the Earth have any correlation with its relative speed to me or not? More particularly, relative angular momentum and gravity.
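As a purely Newtonian baseline for the equator scenario (a sketch with assumed round values, ignoring any speed-dependent gravitational effect), an object at the equator is shed once the required centripetal acceleration exceeds surface gravity:

```python
# Newtonian baseline for the spinning-Earth thought experiment: find the
# equatorial speed at which gravity can no longer supply the centripetal
# force, i.e. v = sqrt(g R), and the corresponding rotation period.
import math

g = 9.81        # surface gravity, m/s^2 (assumed round value)
R = 6.371e6     # Earth radius, m

v_shed = math.sqrt(g * R)            # ~7.9 km/s (circular-orbit speed)
period = 2 * math.pi * R / v_shed    # ~84 minutes per rotation

print(f"v_shed = {v_shed:.0f} m/s, period = {period/60:.1f} min")
```

Note that this threshold is the circular-orbit speed sqrt(gR), a factor of sqrt(2) below the escape speed.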
Does energy have an origin or root?
When Plato talks about beauty in the "Hippias Major", he asks: "A beautiful young girl is beautiful", "A sturdy mare is beautiful", "A fine harp is beautiful", "A smooth clay pot is beautiful" ....... , So what exactly is beauty? [1]
We can likewise ask: mechanical energy is energy, heat energy is energy, electrical and magnetic energy is energy, chemical and internal energy is energy, radiant energy is energy; so what exactly is "energy"? [2]
Richard Feynman, said in his Lectures in the sixties, "It is important to realize that in physics today we have no knowledge of what energy is". Thus, Feynman introduced energy as an abstract quantity from the beginning of his university teaching [3].
However, the universal concept of energy in physics states that energy can neither be created nor destroyed, but can only be transformed. If energy cannot be destroyed, then it must be a real thing that exists, because it makes no sense to say that we cannot destroy something that does not exist. If energy can be transformed, then, in reality, it must appear in a different form. Therefore, based on this concept of energy, one can easily be led to the idea that energy is a real thing, a substance. This concept of energy is often used, for example, that energy can flow and that it can be carried, lost, stored, or added to a system [4][5].
Indeed, in the different areas of physics there is no definition of what energy is; what is consistent is only its metrics and measures. So, whether energy is a concrete substance**, or is just heat, or is the capacity for doing work, or is just an abstract cause of change, was much discussed by early physicists. However, we must be clear that there is only one kind of energy, and it is called energy. It is stored in different systems, in different ways in those systems, and it is transferred by some mechanism or other from one system to another [9].
Based on a comprehensive analysis of physical interactions and chemical reaction processes, energy came to be considered the one thing common to all these phenomena. Thus "Energism" was born* [8]. Ostwald had argued that matter and energy had a "parallel" existence; he then developed a more radical position: matter is subordinate to energy. "Energy is always stored or contained in some physical system. Therefore, we will always have to think of energy as a property of some identifiable physical system." "Ostwald regarded his Energism as the ultimate monism, a unitary 'science of science' which would bridge not only physics and chemistry, but the physical and biological sciences as well" [6]. This view expressed the idea of considering "pure energy" as a "unity" and assumed a process of energy interaction. However, because of the impossibility of determining what energy is, it was rejected by both scientific and philosophical circles as "metaphysics" and "materialism" [10].
The consistency and transitivity of energy and momentum across different physical domains already show that they must be linked and bound by something fundamental. Therefore, it is necessary to re-examine "Energism" and try to advance it.
The relationship between energy and momentum, which are independent in classical mechanics, and their conservation are also independent. the momentum of the particle does not involve its energy. but In relativity, the conservations of momentum and energy cannot be dissociated. The conservation of momentum in all inertial frames requires the conservation of energy and vice versa. space and time are frame-dependent projections of spacetime[7].
Our questions are:
1) What is energy? Is it a fundamental entity**, or is it just a measure, like the property label "beauty", which can be attached to anything: heat, light, electricity, machinery, atomic nuclei? Do the various forms of energy express the same meaning? Can they be expressed mathematically in a uniform way? Is there a mathematical definition of "energy"?***
2) Is the conservation of energy a universal principle? How does physics ensure this conservation?
3) Why is there a definite relationship between energy and momentum in all situations? Where are they rooted?
4) If the various forms of energy and momentum are unified, given the existence of relativity, is there any definite relationship between them and time and space?
-------------------------------------------------------------------------
* At the end of the nineteenth century, two theories were born that tried to unify the physical world: the "electromagnetic worldview" and "Energism". We believe this is the most intuitive and simple view of the world, and probably the most beautiful and correct one.
** If energy is an entity, then it must still exist at absolute zero. Like the energy and momentum of a photon, it does not change with temperature, as long as there is no interaction.
*** We believe that this is an extremely important issue, first raised by Sergey Shevchenko (https://www.researchgate.net/profile/Sergey-Shevchenko) in his reply to a question on ResearchGate; see https://www.researchgate.net/post/NO1_Three-dimensional_space_issue (SS's reply).
-------------------------------------------------------------------------
References
[1] Plato.
[2] Ostwald identified five "Arten der Energie": I. mechanical energy, II. heat, III. electrical and magnetic energy, IV. chemical and internal energy, and V. radiant energy. Each form of energy (heat, chemical, electrical, volume, etc.) is assigned an intensity. Ostwald also formulated two fundamental laws of energetics: the first expresses the conservation of energy in the process of transfer and conversion; the second explains, in terms of intensity equilibrium, what can start and stop the transfer and conversion of energy.
[3] Duit, R. (1981). "Understanding Energy as a Conserved Quantity‐‐Remarks on the Article by RU Sexl." European journal of science education 3(3): 291-301.
[4] Swackhamer, G. (2005). Cognitive resources for understanding energy.
[5] Coelho, R. L. (2014). "On the Concept of Energy: Eclecticism and Rationality." Science & Education 23(6): 1361-1380.
[6] Holt, N. R. (1970). "A note on Wilhelm Ostwald's energism." Isis 61(3): 386-389.
[7] Ashtekar, A. and V. Petkov (2014). Springer Handbook of Spacetime. Berlin, Heidelberg, Springer Berlin Heidelberg.
[8] Leegwater, A. (1986). "The development of Wilhelm Ostwald's chemical energetics." Centaurus 29(4): 314-337.
[9] Swackhamer, G. (2005). Cognitive resources for understanding energy.
[10] The two major scientific critics of Energism were Max Planck and Ernst Mach. The leading critic from the political-philosophical side was Vladimir Lenin (founder of the Comintern), who criticized not only Ostwald but also Ernst Mach.
The still-unachieved unification of general relativity and quantum physics is a painstaking issue. Is it feasible to build a nonempty set, with a binary operation defined on it, that encompasses both theories as subsets, making it possible to join two of their most dissimilar marks, i.e., the commutativity of our macroscopic relativistic world and the non-commutativity of the quantum, microscopic world? Could the gravitational field be the physical counterpart able to throw a bridge between relativity and quantum mechanics? Is it feasible that gravity stands for an operator able to reduce the countless orthonormal bases required by quantum mechanics to just one, i.e., the relativistic basis of an observer located in a single cosmic area?
What do you think?
We assume that any N-dimensional space can have an (N-1)-dimensional "boundary". However, if the boundary is limited to points, lines, and surfaces, and not to bodies, then three-dimensional space will be the maximum spatial dimension that satisfies this condition. What is the mathematical concept involved here?
By now, we've all realised how well GPT AI is able to find and replicate patterns in language and in 2D images. Its ability to find and interact with data patterns sometimes allows it to answer questions better than some students.
I expect that right now there will be teams training GPT installations with molecular structure and physical characteristics data to try to find candidates for new materials for high-temperature superconductors, or to find organic lattice structures with high hydrogen affinity to replace palladium for hydrogen storage cells. The financial and social rewards for success in these areas make it difficult to justify NOT trying out GPT AI.
But what about fundamental physics theory? Could AI find a solution to the current mismatch between Einstein's general relativity and quantum mechanics? Could it start to solve hard problems that have defeated mainstream academia for decades? If so, what happens to the institutions?
Further reading:
According to special relativity [1], the mass of a moving object is generally considered to be a relative quantity that increases with velocity [2]: m = γm0, where γ is the relativistic factor and m0 is defined as the rest mass. The mass-energy equation E = mc^2 is a consequence of Einstein's special relativity. Einstein considered two inertial frames in relative uniform motion, with an object at rest in one frame radiating photons in two opposite directions; if the total energy of the photons is E, then in the other frame the mass of the object is seen to decrease by E/c^2, i.e., E = mc^2. He thus concluded that the mass of a body is a measure of its energy content [3].
Our question is: if there is no absolute spacetime, and the mass of any object in an inertial frame can be considered a rest mass, then, if the object arbitrarily changes its speed of motion and is able to measure itself, will there exist a minimum rest mass, i.e., a minimum energy?
[1] Einstein, A. (1905). "On the Electrodynamics of Moving Bodies."
[2] Feynman, R. P. (2005). The Feynman Lectures on Physics, Vol. I.
[3] Einstein, A. (1905). "Does the Inertia of a Body Depend upon Its Energy-Content?" Annalen der Physik 18(13): 639-641.
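As a quick numerical illustration of the relations quoted above (m = γm0 and E = mc^2), here is a minimal Python sketch; the function names are ours, purely for illustration:

```python
import math

C = 299_792_458.0  # speed of light in m/s (exact SI value)

def gamma(v):
    """Relativistic factor gamma = 1/sqrt(1 - v^2/c^2) for speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def moving_mass(m0, v):
    """Relativistic mass m = gamma * m0 for rest mass m0 in kg."""
    return gamma(v) * m0

def rest_energy(m0):
    """Rest energy E = m0 * c^2 in joules."""
    return m0 * C ** 2

# At v = 0.6c, gamma = 1/sqrt(1 - 0.36) = 1.25, so a 2 kg rest mass
# appears as 2.5 kg to the other frame.
```

For example, `moving_mass(2.0, 0.6 * C)` evaluates to 2.5, showing the velocity dependence the question refers to.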
Complex numbers are involved almost everywhere in modern physics, but the understanding of imaginary numbers has been controversial.
In fact, there was a gradual process of accepting imaginary numbers in physics. For example:
1) Weyl in establishing the Gauge field theory
After the development of quantum mechanics in 1925-26, Vladimir Fock and Fritz London independently pointed out that it was necessary to replace γ by −iħ. "Evidently, Weyl accepted the idea that γ should be imaginary, and in 1929 he published an important paper in which he explicitly defined the concept of gauge transformation in QED and showed that under such a transformation, Maxwell's theory in quantum mechanics is invariant."【Yang, C. N. (2014). "The conceptual origins of Maxwell's equations and gauge theory." Physics Today 67(11): 45.】
【Wu, T. T. and C. N. Yang (1975). "Concept of nonintegrable phase factors and global formulation of gauge fields." Physical Review D 12(12): 3845.】
2) Schrödinger when he established the quantum wave equation
In fact, Schrödinger had earlier rejected the concept of imaginary numbers.
【Yang, C. N. (1987). Square root of minus one, complex phases and Erwin Schrödinger.】
【Kwong, C. P. (2009). "The mystery of square root of minus one in quantum mechanics, and its demystification." arXiv preprint arXiv:0912.3996.】
【Karam, R. (2020). "Schrödinger's original struggles with a complex wave function." American Journal of Physics 88(6): 433-438.】
The imaginary number here is also related to the introduction of the energy and momentum operators in quantum mechanics.
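The role of the imaginary unit in the momentum operator p̂ = −iħ d/dx can be seen numerically: acting on a plane wave ψ(x) = e^{ikx}, the operator returns the real eigenvalue ħk times ψ. A minimal sketch in pure Python (central finite differences; the values of k and x are illustrative only):

```python
import cmath

HBAR = 1.0545718e-34  # reduced Planck constant, J*s

def psi(x, k):
    """Plane wave psi(x) = exp(i*k*x)."""
    return cmath.exp(1j * k * x)

def p_hat(f, x, h=1e-9):
    """Momentum operator -i*hbar*d/dx applied to f, via central difference."""
    return -1j * HBAR * (f(x + h) - f(x - h)) / (2 * h)

k = 2.0e6  # wave number in 1/m (illustrative)
x = 0.5    # position in m (illustrative)

eigen = p_hat(lambda x: psi(x, k), x)
expected = HBAR * k * psi(x, k)
# -i*hbar*dpsi/dx = hbar*k*psi: the plane wave is an eigenfunction of the
# momentum operator with a *real* eigenvalue, despite the i in the operator.
```

The factor −i is exactly what turns the imaginary derivative of e^{ikx} into a real measurable momentum, which is one standard answer to "why i?" in the literature listed below.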
Recently @Ed Gerck published an article dedicated to complex numbers:
Our question is: is there a consistent understanding of the concept of imaginary (complex) numbers in current physics? Do we need to treat imaginary numbers and complex numbers (dual numbers) as two separate concepts?
_______________________________________________________________________
Addendum, 2023-06-19
On the question of complex numbers in physics, here is some relevant literature collected in recent days.
1) Jordan, T. F. (1975). "Why −i∇ is the momentum." American Journal of Physics 43(12): 1089-1093.
2) Chen, R. L. (1989). "Derivation of the real form of Schrödinger's equation for a nonconservative system and the unique relation between Re(ψ) and Im(ψ)." Journal of Mathematical Physics 30(1): 83-86.
3) Baylis, W. E., J. Huschilt and J. Wei (1992). "Why i?" American Journal of Physics 60(9): 788-797.
4) Baylis, W. and J. Keselica (2012). "The complex algebra of physical space: a framework for relativity." Advances in Applied Clifford Algebras 22(3): 537-561.
5) Faulkner, S. (2015). "A short note on why the imaginary unit is inherent in physics." ResearchGate.
6) Faulkner, S. (2016). "How the imaginary unit is inherent in quantum indeterminacy." ResearchGate.
7) Tanguay, P. (2018). "Quantum wave function realism, time, and the imaginary unit i." ResearchGate.
8) Huang, C. H., Y.; Song, J. (2020). "General Quantum Theory No Axiom Presumption: I. Quantum Mechanics and Solutions to Crises of Origins of Both Wave-Particle Duality and the First Quantization." Preprints.org.
9) Karam, R. (2020). "Why are complex numbers needed in quantum mechanics? Some answers for the introductory level." American Journal of Physics 88(1): 39-45.
I realize that the great theories of fundamental physics are already united in their cradles. I refer here especially to general relativity, special relativity, and Planck's theory. There are certainly other fundamental theories related to these three. I therefore do not understand the controversy over the unification of the great fundamental theories, which has lasted for more than a century to date. I explain the unification through the cosmological constants. Indeed, I discovered that these theories, all without exception, use cosmological constants; that is one thing. But I also discovered that the cosmological constants are derived from each other, which guarantees the link between these theories. So, as an application of the fact that these theories are already linked in their cradles, it is fair to say that one can use, for example, the Schwarzschild radius equation to calculate the radii of the proton and the neutron, without waiting for someone, via some theory, to authorize the use of that equation.
I believe I have solved what was called the "most fundamental unsolved problem of physics" by Paul Dirac:
"The fine-structure constant [...] has no dimensions or units. It’s a pure number that shapes the universe to an astonishing degree — “a magic number that comes to us with no understanding,” as Richard Feynman described it. Paul Dirac considered the origin of the number “the most fundamental unsolved problem of physics.”"
I've worked things out in Jupyter notebook and generated a PDF version as well:
The results are quite surprising, to say the least.
Earlier work in progress:
Quantum theory started (pre-1960) with strange explanations of the mathematics and with large ensembles of particles.
Then came Aspect and entanglement (ca. 1980), with Bell's inequality requiring two or more particles.
Now the next step is to become fully "realistic" (more intuitive) by modeling causality and superluminal signal speed with the characteristics of a single particle. More "realistic" suggests an analogy with classical modeling.
Adlam, E. (2022). "Is there causation in fundamental physics? New insights from process matrices and quantum causal modeling." arXiv:2208.02721 [quant-ph].
The dimensioned physical constants (G, h, c, e, me, kB, ...) can be considered fundamental only if the units they are measured in (kg, m, s, ...) are independent. The 2019 redefinition of the SI base units assigned exact values to four physical constants, and this confirmed the independence of their associated SI units. However, anomalies occur in certain combinations of these constants which suggest a mathematical (unit-number) relationship (kg -> 15, m -> -13, s -> -30, A -> 3, K -> 20), and as these are embedded in the constants, they are easy to test; the results are consistent with CODATA precision. Statistically, therefore, can these anomalies be dismissed as coincidence?
For example, we can see how to construct the physical units kg, m, s, and A from dimensionless mathematical forms using this unit-number relationship, and this has applications to simulation-universe modelling.
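Taking the proposed assignment at face value (kg -> 15, m -> -13, s -> -30, A -> 3, K -> 20), the "unit number" of any composite constant follows by summing the assignments weighted by that constant's SI dimensional exponents. A minimal sketch of that bookkeeping (the dictionaries and function name are ours; the dimensions are the standard SI ones):

```python
# Unit-number assignment proposed in the question above
UNIT_NUMBER = {"kg": 15, "m": -13, "s": -30, "A": 3, "K": 20}

# Standard SI dimensional exponents of some constants
DIMENSIONS = {
    "c":  {"m": 1, "s": -1},                    # speed of light, m/s
    "G":  {"m": 3, "kg": -1, "s": -2},          # gravitational constant
    "h":  {"kg": 1, "m": 2, "s": -1},           # Planck constant
    "e":  {"A": 1, "s": 1},                     # elementary charge
    "kB": {"kg": 1, "m": 2, "s": -2, "K": -1},  # Boltzmann constant
}

def unit_number(name):
    """Sum the unit-number assignments weighted by dimensional exponents."""
    return sum(UNIT_NUMBER[u] * p for u, p in DIMENSIONS[name].items())

# e.g. unit_number("c") = -13 + 30 = 17
```

Whether such composite integers carry any physical meaning is exactly the statistical question being asked; the sketch only shows how the arithmetic would be tested.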
For convenience, the article has been transcribed to this wiki site.
...
Some general background to the physical constants.


Having worked on the spacetime wave theory for some time, and having recently published a preprint paper on the Space Rest Frame, I realised the full implications, which are quite shocking in a way.
The spacetime wave theory proposes that electrons, neutrons and protons are looped waves in spacetime:
The paper on the space rest frame proposes that waves in spacetime take place in the space rest frame K0:
Preprint Space Rest Frame (Dec 2021)
This then implies that the proton which is a looped wave in spacetime of three wavelengths is actually a looped wave taking place in the space rest frame and we are moving at somewhere between 150 km/sec and 350 km/sec relative to that frame of reference.
This also gives a clue as to the cause of the length contraction taking place in objects moving relative to the rest frame. The length contraction takes place in the individual particles, namely the electron, neutron and proton.
I find myself in a similar position to Ernest Rutherford when he discovered that the atom was mostly empty space. I don't expect to fall through the floor, but I might expect to suddenly fly away at around 250 km/sec. Of course, this doesn't happen because there is zero resistance to uniform motion through space and momentum is conserved.
It still seems quite a shocking realisation.
Richard
Our answer is YES. The wave-particle duality is a model proposed to explain the interference of photons, electrons, neutrons, or any matter. One deprecates a model when it is no longer needed. Therefore, we show here that the wave-particle duality is deprecated.
This offers an immediate solution for the thermal radiation of bodies, as Einstein demonstrated experimentally in 1917, in terms of ternary trees in progression, to tri-state+, using the model of GF(3^n), where the same atom can show so-called spontaneous emission, absorption, or stimulated emission, and further collective effects, in a ternary way.
Continuity or classical waves are not needed, do not fit into this philosophy, and are not representable by any edges, updating events, or sinks in an oriented-graph [1] model with stimulated emission.
However, taking into account the principle of universality in physics, the same phenomena — even a particle such as a photon or electron — can be seen, although approximately and partially, in terms of continuous waves, macroscopically. Then, the wave theory of electrons can be used in the universality limit, when collective effects can play a role, and explain superconductivity.
This resolves the apparent confusion created by the common wave-particle duality model: the ontological view now becomes more indicative of a particle in all cases, and does not depend on amplitude.
This explains both the photoelectric effect, that does not depend on the amplitude, and wave-interference, that depends on the amplitude. The ground rule is quantum, the particle -- but one apparently "sees" interference at a distance that is far enough not to distinguish individual contributions.
On deprecating the wave-particle duality, we are looking into abandoning probability in quantum mechanics. This speaks against a duality that is not based on finite integers!
Support comes from the preprint
the article published in Mathematica in 2023 at
the free PDF book
the paper-based books at
and other references, including on the new field of "quantum circuits" using the new set Q*.
REFERENCE
[1] Stephen Wolfram, "A Class of Models with the Potential to Represent Fundamental Physics." arXiv: https://arxiv.org/ftp/arxiv/papers/2004/2004.08210.pdf, 2020.
Research Proposal Investigation on mass-gravity properties of graphene anomalo...
Coherency is a major difference separating EM from gravity.
In layman's terms someone can say that gravity is degenerated incoherent quantum EM, projected on the macroscale far field.
Graphene is the simplest macroscopic quantum matter structure, and therefore, if a gravity-EM relation exists, I believe it is our best chance to find out.
The anomalous mass behavior of the fermions inside graphene may well produce anomalous gravity effects, which as far as I know have not yet been investigated by the scientific community and academia. This, I believe, is because the atomic lattice structure of graphene is coherent and not the mess --> mass :) we usually have in macroscopic matter.
In other words, graphene is the most coherent macroscopic matter we can get today.
Therefore, any EM-gravity correlation should become evident with the experiments I propose. Each layer of graphene piled on top will increase the incoherence of the film as a whole, change the wave function, and also change the gravity effect. If the piled-up number of isometric graphene layers does not produce a linear increase in the total weight of the film, then I would have proved my point of a direct relation between the gravity effect and quantum EM in matter.
Cooper-pair superconductivity coherence is, I believe, a different case from the above. At the molecular level there is still very much incoherent matter present in a superconductor, and therefore the EM coherency of charged matter will have little to no effect on the gravity effect of the total mass of the superconductor.
In my proposed experiment, my concern is whether the instrumentation used would prove adequate to reliably measure and resolve the minuscule changes in weight I expect.
Essentially, what I try to prove in my proposed experiment is that, adding on top of a single graphene sheet an identical isometric layer of graphene, and assuming that no other matter is added (i.e., clean-room vacuum conditions), the total mass will not double, because the mass value also depends on the matter incoherence within an object, which dictates the number of quantum EM interactions taking place inside it. The mass readout in the experiment will differ for different degrees of alignment achieved between the two stacked layers. I expect the experiment to produce measurable anomalous results for the first few layers, after which the results will smooth out and normalize as more layers are added and a critical value of overall matter incoherence is reached, so that W = mg again becomes a linear function, where W is the total weight of the stacked graphene film.
copyright©Emmanouil Markoulakis Hellenic Mediterranean University (HMU) 2019

The fundamental physical constants, ħ, c and G, appear to be the same everywhere in the observable universe. Observers in different gravitational potentials or with different relative velocity, encounter the same values of ħ, c and G. What enforces this uniformity? For example, angular momentum is quantized everywhere in the universe. An isolated carbon monoxide molecule (CO) never stops rotating. Even in its lowest energy state, it has ħ/2 quantized angular momentum zero-point energy causing a 57 GHz rotation. The observable CO absorption and emission frequencies are integer multiples of ħ quantized angular momentum. An isolated CO molecule cannot be forced to rotate with some non-integer angular momentum such as 0.7ħ. What enforces this?
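The integer-step structure of the CO spectrum described above can be made concrete with the standard rigid-rotor formula E_J/h = B·J(J+1), which places the allowed J -> J−1 lines at even-integer multiples of B and forbids anything like "0.7ħ". A minimal sketch, assuming the commonly quoted rotational constant B ≈ 57.6 GHz for CO (a literature value, not from this text):

```python
B_CO = 57.6e9  # rotational constant of CO in Hz (assumed literature value)

def rotor_level(j, b=B_CO):
    """Rigid-rotor energy level in frequency units: E_J / h = B * J(J+1)."""
    return b * j * (j + 1)

def transition(j_upper, b=B_CO):
    """Frequency of the allowed J -> J-1 rotational transition, i.e. 2*B*J."""
    return rotor_level(j_upper, b) - rotor_level(j_upper - 1, b)

# J = 1 -> 0 line: 2B = 115.2 GHz; successive lines step by exactly 2B,
# so only integer J (integer units of hbar) ever appear in the spectrum.
```

Non-integer angular momentum would produce a line between these steps, which is never observed; the question of *what enforces* that quantization remains.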
Even though the rates of time are different in different gravitational potentials, the locally measured speed of light is constant. What enforces a constant speed of light? It is not sufficient to mention covariance of the laws of physics without further explanation. This just gives a different name to the mysteries.
Are the natural laws imposed on the universe by an unseen internal or external entity? Do the properties of vacuum fluctuations create the fundamental physical constants? Are the physical constants the same when they are not observed?
It feels strange to have discovered a new fundamental physics discipline after a gap of a century. It is called Cryodynamics, sister of the chaos-borne deterministic Thermodynamics discovered by Yakov Sinai in 1970. It proves that Fritz Zwicky was right in 1929 with his alleged “tired light” theory.
The light traversing the cosmos hence lawfully loses energy in a distance-proportional fashion, much as Edwin Hubble tried to prove.
Such a revolutionary development is a rare event in the history of science. So the reader has every reason to be skeptical. But it is also a wonderful occasion to be one of the first who jump the new giant bandwagon. Famous cosmologist Wolfgang Rindler was the first to do so. This note is devoted to his memory.
November 26, 2019
There is an opinion that the wave-function represents the knowledge that we have about a quantum (microscopic) object. But if this object is, say, an electron, the wave-function is bent by an electric field.
In my modest opinion matter influences matter. I can't imagine how the wave-function could be influenced by fields if it were not matter too.
Has anybody another opinion?
Dear Colleagues.
The Faraday constant, as a fundamental physical quantity, has peculiar features which make it stand out among the other physical constants. According to the official documents of NIST, this constant has two values:
F = 96485.33289 ± 0.00059 C/mole and
F* = 96485.3251 ± 0.0012 C/mole.
The second value refers to the "ordinary electric current".
Is the Faraday constant constant?
One of the ways to answer this question is proposed in the works.
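One quick way to see why the coexistence of these two values is non-trivial is to compare their difference with the combined standard uncertainty. A minimal sketch using only the numbers quoted above (the check itself is ours, not from the cited works):

```python
import math

# The two NIST values quoted above, in C/mol
F1, U1 = 96485.33289, 0.00059   # Faraday constant
F2, U2 = 96485.3251, 0.0012     # value for the "ordinary electric current"

diff = F1 - F2                  # ~0.0078 C/mol
sigma = math.hypot(U1, U2)      # combined standard uncertainty
n_sigma = diff / sigma          # separation in units of sigma

# The two values differ by several combined standard uncertainties,
# which is why asking whether the Faraday constant is constant is
# not a trivial question.
```

If the two values were merely noisy measurements of one constant, a separation of a few sigma at most would be expected.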
Sincerely,
Yuriy.
According to special relativity (SR), the relative velocity between two inertial reference frames (IRF), say two spaceships, is calculated by
u = (v1 - v2) / (1 - v1v2/c^2)   (1)
where v1 and v2 are the constant velocities of the two vessels moving parallel to each other.
For low speeds, v1v2/c^2 is negligible and the formula reduces to
u = v1 - v2
But neither v1 nor v2 is supposed to be known in SR. Both can have any value between -c and +c as illustrated in Figure 1 (please see the attached file).
Not knowing the speed of each vessel means that the calculated relative speed can also be any value between -c and +c. For example:
v1 = -c     v2 = -0.6c  ==>  u = -c      (possibility 5 in Figure 1)
v1 = -0.4c  v2 = 0      ==>  u = -c/2.5  (possibility 2)
v1 = 0.2c   v2 = -0.2c  ==>  u = c/2.6   (possibility 3)
v1 = 0.4c   v2 = 0      ==>  u = c/2.5   (possibility 1)
v1 = c      v2 = 0.6c   ==>  u = c       (possibility 4)
This means that the actual relative speed between two IRFs cannot, in fact, be calculated.
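For concreteness, examples like those above can be reproduced directly from formula (1); a minimal Python sketch, working in units of c (the function name is ours):

```python
def relative_speed(v1, v2):
    """Relativistic relative velocity, Eq. (1), with speeds in units of c."""
    return (v1 - v2) / (1.0 - v1 * v2)

# Possibility 3: v1 = 0.2c, v2 = -0.2c  ->  u = 0.4/1.04, about c/2.6
u3 = relative_speed(0.2, -0.2)

# Possibility 1: v1 = 0.4c, v2 = 0  ->  u = 0.4c = c/2.5
u1 = relative_speed(0.4, 0.0)

# For |v| << c the denominator is ~1 and u ~ v1 - v2 (the Galilean limit).
```

The sketch only evaluates the formula; the question's point stands, namely that without knowing v1 and v2 individually, u is undetermined.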
To remedy this situation, it is assumed that:
1. One of the vessels in which observer number one, Bob, resides is stationary and the other vessel, Alice, is moving at the relative speed of u.
This is obviously a wrong scientific statement and in contrast with SR: only one specific possibility among countless ones is arbitrarily selected to hide the difficult situation. We should also remind ourselves of the damaging effect of this type of assumption. Scientists tried hard to discard the dominant geocentric dogma of the past, championed by the Catholic Church, and now a comparable assumption is accepted as a groundbreaking new concept.
Based on this assumption, the equation simply reduces to either u = -v2 or u = v1, depending on the observer.
2. There is a third reference frame based on which the speeds are measured.
As in the first case, we are back to Newtonian mechanics with an assumed fixed reference frame. This assumption explicitly accepts the first assumption; only then does the formula make sense. Specifically, to present SR as a scientific/quantitative theory, it is forced to accept that the frame of the observer, or a third frame, is a stationary reference frame for any measurement or analysis. Zero speed is just one convenient value among the countless possibilities that SR has introduced and then decided not to deal with.
The problem with the Einstein velocity-addition formula also applies in this case, as the assumed velocities, as well as the calculated relative velocity between Bob and Alice, depend on the relative speed of the observer.
Somehow, both conflicting cases are accepted in SR quite subjectively. In other words, SR arbitrarily benefits from classical science to bolster its own undeserved credibility, while at the same time denying it.
Is this a fair assessment?
P.S. For simplicity, only parallel motions are considered.
This question is closely related with a previous question I raised in this Forum: "What is the characteristic of matter that we refer as "electric charge"?"
As stated in my previous question, the main objective of bringing this topic to discussion is to try to understand the fundamental physical phenomena associated with the Universe we live in, where energy, matter and other key ingredients, like the Laws that govern them, which all together seem to play a harmonious role, so harmonious that even life, as we know it, can exist in this planet.
My background is from engineering. Hence, I am trying to go deep into the causes behind the effects, the physical phenomena that support the Universe as we know it, prior to go deep into complex mathematical models and formulation, which may obscure reality.
With an open mind, I try to ask questions whose answers may help us to understand the whys, rather than to prove theories and their formulations.
From our previous discussion, it became clear that mass and electric charge are two inseparable attributes of matter. Moreover, Electromagnetic (EM) fields propagate through vacuum. Hence, no physical matter is required for energy or information flow through the Universe. However, electric charges remain clustered in physical matter, i.e., they require, not vacuum, but matter.
Matter has the property of radiation. Matter under Gravitational (G) and EM fields is subjected to forces, producing movement. Radiation depends strongly on Temperature.
The absolute limit of T is 0 K (absolute zero). At this limit, particle movement stops. Magnetic fields depend on moving electric charges; as movement vanishes at this limit, magnetic fields should vanish with it. Since electric and magnetic fields are nested in each other, the electric field, and consequently the effect of EM fields (and hence radiation, too), should vanish as T approaches 0 K. Black Holes (BH) do not radiate, their temperature being close to 0 K.
Can we assume that EM fields ultimately vanish as T approaches 0 K?
Could this help explaining why protons in an atomic nucleus stay together, and are not violently scattered away from each other?
Would it be reasonable to assume that atomic nuclei are at temperatures close to 0 K, even though electrons and matter at the macroscopic level are at room temperature?
What really is the temperature of atomic nuclei? Can we measure it? Is it possible that a cloud of electrons, either orbiting the nuclei or moving as free electrons, plays a shielding role, capturing the energy associated with room temperature and preventing the nuclei from heating? Can the temperature of an atom's nucleus be close to 0 K, as in a BH?
Wikipedia describes physics (lit. 'knowledge of nature') as the natural science that studies matter, its motion and behavior through space and time, and the related entities of energy and force.
But isn't this definition a redundancy? Any visible object is made of matter, and its motion is a consequence of applied energy. We might as well say: the study of stuff that happens. But then, what does 'study' entail?
Fundamentally, ‘physics’ is a category word, and category words have inherent problems. How broad or inclusive is the category word, and is the ordinary use of the category word too restrictive?
Is biophysics a subcategory of biology? Is econophysics a subcategory of economics? If, for example, biophysics combines elements of physics and biology, does one predominate as the categorization? If, as in biophysics, econophysics, and astrophysics, there are overlapping disciplines, does the category word 'physics' give us insight into what physics studies, or obscure it?
Is defining what physics does more a problem of semantics (ascribing meaning to a category word) than of science?
Might another way of looking at it be this? Physics generally involves detecting patterns common to different phenomena, whether natural, emergent, or engineered; where possible, identifying fundamental principles and laws that model them, and using mathematical notation to describe those principles and laws; and, where possible, devising and implementing experiments to test whether hypothesized or observed patterns provide evidence for, or clues to, fundamental principles and laws.
Maybe physics more generally just involves problem solving and the collection of inferences about things that happen.
Your views?
In physics, we have a number of "fundamental" variables: force, mass, velocity, acceleration, time, position, electric field, spin, charge, etc
How do we know that we have in fact got the most compact set of variables? If we were to examine the physics textbooks of an intelligent alien civilization, could it be that they have cleverly set up their system of variables so that they don't need (say) "mass"? Maybe mass is accounted for by everything else and is hence redundant? Maybe the aliens have factored mass out of their physics and it is not needed?
Bottom line question: how do we know that each of the physical variables we commonly use are fundamental and not, in fact, redundant?
Has anyone tried to formally prove we have a non-redundant compact set?
Is this even something that is possible to prove? Is it an unprovable question to start with? How do we set about trying to prove it?
You will find an article, with more precision under my profile.
The question is non relativistic and depends only on logic.
The answer could trigger a reset of all fundamental physics and is therefore of extreme importance!
JES
What is consciousness? What do the latest neurology findings tell us about consciousness and what is it about a highly excitable piece of brain matter that gives rise to consciousness?
It has radically altered it by rehabilitating Fritz Zwicky 1929.
Hence ten Nobel medals are gone. And cheap energy for all is made possible. Provided, that is, that humankind is capable of mentally following in Yakov Sinai’s chaotic footsteps. If not, energy remains expensive and CERN remains dangerous to all: A funny time that we are living in. With the crown of Corona yet waiting to be delivered.
April 1st, 2020
Why is a complete theory of fundamental physics ignored just because it is outside the realms of quantum field theory and general relativity? It has been "marked" as a speculative alternative; it has never been studied, nor has there been any attempt to verify it. The fundamental-physics community is still in complete ignorance of the extremely successful Electrodiscrete Theory.
The Electrodiscrete Theory is not a speculative alternative, and not just a new idea in the works, but a complete theory of fundamental physics describing all our elementary particles and their interactions, including gravity. The Electrodiscrete Theory beautifully describes the patterns in nature revealed by observations. It gives a single (unified) description of nature in a relatively simple and self-consistent way. Moreover, it can calculate and it can make predictions. Then why is it ignored?
The Electrodiscrete Theory provides the complete conceptual foundation for describing nature that we are all seeking, but nobody bothers to take a look. Why?
The Electrodiscrete Theory opens new horizons. This is progress in science being held back by prejudice and a new kind of ignorance. What is wrong with the system?
Mathematics is crucial in many fields.
What are the latest trends in mathematics?
What are the recent topics and advances, and why are they important?
Please share your valuable knowledge and expertise.
A new Phenomenon in Nature: Antifriction
Otto E. Rossler
Faculty of Science, University of Tuebingen, Auf der Morgenstelle 8, 72076 Tuebingen, Germany
Abstract
A new natural phenomenon is described: Antifriction. It refers to the distance-proportional cooling suffered by a light-and-fast particle when it is injected into a cloud of randomly moving heavy-and-slow particles if the latter are attractive. The new phenomenon is dual to “dynamical friction” in which the fast-and-light particle gets heated up.
(June 27, 2006, submitted to Nature)
******
Everyone is familiar with friction. Friction brings an old car to a screeching halt if you jump on the brake. The kinetic energy of a heavy body thereby gets “dissipated” into fine motions – the heating-up of many particles in the end. (Only some cars do re-utilize their motion energy by converting it into electricity.) But there also exists a less well-known form of friction called dynamical friction. It differs from ordinary friction by its being touchless.
The standard example of dynamical friction is a heavy particle, repulsive over a short distance, injected into a dilute gas of light-and-fast other particles. The heavy particle then comes to an effective halt, for the repelled gas particles that it forced out of its way in a touchless fashion carried away some of its energy of motion, themselves getting heated up in the process – much as in ordinary friction.
In the following, it is proposed that a dual situation exists in which the opposite effect occurs: "antifriction." Antifriction arises under the same conditions as dynamical friction, except that repulsion is replaced by attraction. The fast particles then, rather than being heated up (friction), paradoxically get cooled down (antifriction). This surprising claim does not amount to an irrational perpetual-motion-like effect: the fast-and-light ("cold") particle merely imparts some of its kinetic energy onto the slow-and-heavy "hot" particles encountered.
A simplified case can be considered: A single light-and-fast particle gets injected into a cloud of many randomly moving heavy-and-slow particles of attractive type. Think of a fast space probe getting injected into a globular cluster of gravitating stars. It is bound to be slowed-down under the many grazing-type almost-encounters it suffers. The small particle will hence be “cooled” rather than heated-up as one would naively expect in analogy to the repulsive case.
The new effect is going to be demonstrated in two steps. In the first step, we return to repulsion. This case can be understood intuitively as follows: On the way towards equipartition (which characterizes the final equilibrium in the repulsive case as is well known), the light-and-fast particles – a single specimen in the present case – do predictably get heated up in their kinetic energy. In the second step, we then “translate” this result into the analogous attraction-type scenario to obtain the surprising opposite effect there.
First step: the repulsive case. Many heavy repulsive particles in random motion are assumed to be traversed by a light-and-fast particle in a grazing-type fashion. A typical case is focused on: as the light-and-fast particle starts to approach the next moving heavy repellor while leaving behind the last one at about the same distance, the new interaction partner is, with equal probability, either approaching or receding from the fast particle's momentary course. Whilst many directions of motion are possible, the transversally directed ones are the most effective, so it suffices to focus on the latter. Since the approaching and the receding course both have the same probability of occurrence, a single pair already yields the main effect: there is a net energy gain for the fast particle on average. Why?
In the approaching subcase the fast particle gains energy, and in the receding subcase it loses energy. But the two effects are not the same: the gain is larger than the loss on average if the repulsive potential is of the assumed inversely distance-proportional type. This is because in the approaching case the fast particle automatically gets moved up higher by the approaching potential hill (gaining energy) than it gets hauled down by the receding motion of the same potential hill in the departing case (losing energy). The difference is due to the potential hill's round concave form as an inverted funnel. The present "typical pair" of encounters thus enables us to predict the result well known to hold true: a time- and distance-proportional energy gain of the fast lighter particle as a consequence of the "dynamical friction" exerted by the heavy particles encountered along its way. Thus, eventually an "equipartition" of the kinetic energies applies.
Second step: the attractive case. Everything is the same as before – except that the moving potential hill has become a moving potential trough (the funnel now points downward rather than upward). The asymmetry between approach and recession is the same as before. Therefore there is a greater downwards-directed loss of energy (formerly: upwards-directed gain) in the approaching subcase than there is an upwards-directed gain of energy (formerly: downwards-directed loss) in the receding subcase. The former net gain thus is literally turned over into a net loss. With this symmetry-based new result we are finished: antifriction is dual to dynamical friction, being valid in the case of attraction just as dynamical friction is valid in the case of repulsion.
Thus a new feature of nature – antifriction – has been found. The limits of its applicability have yet to be determined. It deserves to be studied in detail – for example, by numerical simulation. It is likely to have practical implications, not only in the sky with its slowed-down space probes and redshifted photons [1], but perhaps even in automobiles and refrigerators down here on earth.
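The call for numerical study can be given a minimal starting point. The sketch below (toy model with illustrative parameters, not taken from the paper) integrates a single light test particle past one heavy, softened, attractive centre: with the centre at rest the particle's asymptotic kinetic energy is unchanged, while a transversally moving centre exchanges energy with it. This single-encounter energy exchange is the elementary mechanism behind both dynamical friction and the proposed antifriction; averaging over many random encounters would be the next step.

```python
import numpy as np

def fly_by(u, G_M=1.0, eps=0.5, v0=5.0, b=1.0, T=20.0, dt=1e-3):
    """Asymptotic kinetic-energy change of a light test particle passing a
    heavy attractive centre moving transversally with speed u (toy units)."""
    r = np.array([-v0 * T / 2.0, b])   # start far to the left, offset b
    v = np.array([v0, 0.0])

    def acc(r, t):
        c = np.array([0.0, u * t])     # position of the moving centre at time t
        d = r - c
        # softened inverse-square attraction toward the centre
        return -G_M * d / (d @ d + eps ** 2) ** 1.5

    t = -T / 2.0
    for _ in range(int(T / dt)):       # velocity-Verlet integration
        v = v + 0.5 * dt * acc(r, t)
        r = r + dt * v
        t += dt
        v = v + 0.5 * dt * acc(r, t)

    return 0.5 * (v @ v) - 0.5 * v0 ** 2

dE_static = fly_by(u=0.0)   # static centre: energy conserved up to a tiny residual
dE_moving = fly_by(u=1.0)   # moving centre: finite energy exchange with the mover
print(dE_static, dE_moving)
```

With a static potential the result is consistent with zero; with a moving one it is of order u times the deflection impulse, confirming that touchless encounters can transfer energy in either direction.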
To conclude, the fascinating phenomenon of dynamical friction – touchless friction – was shown to possess a natural “dual”: antifriction. A prototype subcase (a pair of representative encounters) was considered above in either scenario, thereby yielding the new twin result. Practical applications can be expected to be found.
I thank Guilherme Kujawski for stimulation. For J.O.R.
Added in proof: After the present paper got finished, Ramis Movassagh kindly pointed to the fact that the historically first paper on “dynamical friction,” written by Subrahmanyan Chandrasekhar [2] who also coined the term, actually describes antifriction. This fact went unnoticed because the smallest objects in the interactions considered by Chandra were fast-moving stars. Chandra’s correctly seen energy loss of these objects therefore got classified by him as a form of “friction” suffered in the interaction with the fields of other heavy moving masses. However, the energy loss found does actually represent a “cooling effect” of the type described above: antifriction. One can see this best when the cooling is exerted on a small mass (like the above-mentioned tiny space probe traversing a globular cluster of stars). While friction heats up, antifriction cools down. Thus what has been achieved above is nothing else but the re-discovery of an old result that had been interpreted as a form of “friction” even though it actually represents the first example of antifriction.
References
[1] O.E. Rossler and R. Movassagh, Bitemporal dynamic Sinai divergence: an energetic analog to Boltzmann’s entropy? Int. J. Nonlinear Sciences and Numerical Simul. 6(4), 349-350 (2005).
[2] S. Chandrasekhar, Dynamical friction. Astrophys. J. 97, 255-263 (1943).
(Remark: The present paper after not being accepted by Nature in 2006 was recently found lingering in a forgotten folder.)
See also: R. Movassagh, A time-asymmetric process in central force scatterings (submitted 4 Aug 2010, revised 5 Mar 2013, https://arxiv.org/abs/1008.0875)
Nov. 23, 2019
It is well known that a light field can be decomposed into a polarized field and an unpolarized field. But is it possible to regard this decomposition only as the sum of a linearly polarized and an unpolarized part, or of a circularly polarized and an unpolarized part? Or is it always the degree of polarization that matters, not the type of polarization?
The incredible thing about Physarum polycephalum is that, whilst being completely devoid of any nervous system whatsoever (not possessing a single neuron), it exhibits intelligent behaviours. Does its ability to intelligently solve problems suggest it must also be conscious? If you think yes, then please describe if and how its consciousness may differ (physically or qualitatively, rather than quantitatively) from the consciousness of brained organisms (e.g., humans). Does this intelligent behaviour (sans neurons) suggest that consciousness may be a universal fundamental, related more to the physical transfer or flow of information than to being (as supposed by most psychological researchers) an emergent property of processes in brain matter?
General background information:
"Physarum polycephalum has been shown to exhibit characteristics similar to those seen in single-celled creatures and eusocial insects. For example, a team of Japanese and Hungarian researchers have shown P. polycephalum can solve the Shortest path problem. When grown in a maze with oatmeal at two spots, P. polycephalum retracts from everywhere in the maze, except the shortest route connecting the two food sources.[3] When presented with more than two food sources, P. polycephalum apparently solves a more complicated transportation problem. With more than two sources, the amoeba also produces efficient networks.[4] In a 2010 paper, oatflakes were dispersed to represent Tokyo and 36 surrounding towns.[5][6] P. polycephalum created a network similar to the existing train system, and "with comparable efficiency, fault tolerance, and cost". Similar results have been shown based on road networks in the United Kingdom[7] and the Iberian peninsula (i.e., Spain and Portugal).[8] Some researchers claim that P. polycephalum is even able to solve the NP-hard Steiner minimum tree problem.[9]
P. polycephalum not only can solve these computational problems, but also exhibits some form of memory. By repeatedly making the test environment of a specimen of P. polycephalum cold and dry for 60-minute intervals, Hokkaido University biophysicists discovered that the slime mould appears to anticipate the pattern, reacting to the conditions as if expecting them even when they were not repeated for the next interval. The same anticipation appeared in tests with 30- and 90-minute intervals.[10][11]
P. polycephalum has also been shown to dynamically re-allocate to apparently maintain constant levels of different nutrients simultaneously.[12][13] In particular, specimen placed at the center of a Petri dish spatially re-allocated over combinations of food sources that each had different protein–carbohydrate ratios. After 60 hours, the slime mould area over each food source was measured. For each specimen, the results were consistent with the hypothesis that the amoeba would balance total protein and carbohydrate intake to reach particular levels that were invariant to the actual ratios presented to the slime mould.
As the slime mould does not have any nervous system that could explain these intelligent behaviours, there has been considerable interdisciplinary interest in understanding the rules that govern its behaviour [emphasis added]. Scientists are trying to model the slime mold using a number of simple, distributed rules. For example, P. polycephalum has been modeled as a set of differential equations inspired by electrical networks. This model can be shown to be able to compute shortest paths.[14] A very similar model can be shown to solve the Steiner tree problem.[9]"
source of quotation: https://en.wikipedia.org/wiki/Physarum_polycephalum
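The "electrical network" model mentioned in the quotation (a Tero-style current-reinforcement scheme; the graph, constants, and update rate below are illustrative, not taken from the cited papers) is short enough to sketch directly: tube conductivities are reinforced toward the flux they carry, and flow then concentrates on the shortest source-to-sink route, just as the organism does.

```python
import numpy as np

# Physarum solver sketch: route 0-1-3 has length 2, route 0-2-3 has length 4,
# so the conductivities of edges 0 and 1 (the short route) should survive.
edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 1.0), (2, 3, 3.0)]  # (i, j, length)
n_nodes, source, sink = 4, 0, 3
D = np.ones(len(edges))                  # initial conductivity of each tube

for _ in range(150):
    # Kirchhoff's laws: solve for node pressures given a unit source->sink flow
    A = np.zeros((n_nodes, n_nodes))
    b = np.zeros(n_nodes)
    b[source] = 1.0
    for k, (i, j, L) in enumerate(edges):
        g = D[k] / L
        A[i, i] += g; A[j, j] += g
        A[i, j] -= g; A[j, i] -= g
    A[sink, :] = 0.0; A[sink, sink] = 1.0; b[sink] = 0.0   # ground the sink
    p = np.linalg.solve(A, b)

    # reinforce each tube's conductivity toward the magnitude of its flux
    for k, (i, j, L) in enumerate(edges):
        Q = D[k] * (p[i] - p[j]) / L
        D[k] += 0.1 * (abs(Q) - D[k])

print(np.round(D, 3))   # short-route edges keep D near 1, the others decay
```

After the iteration the conductivities of the long route have collapsed, which is the discrete analogue of the maze experiment described above.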
The theory of special relativity requires that the laws of the universe be the same for objects that move with uniform velocity relative to each other; a law that changes from one frame to another is wrong. The Lorentz transformations guarantee only three transformations, of length, time, and mass, which are basic physical quantities; derived quantities can be obtained from them, covering the laws of mechanics only. In addition, the Lorentz transformation of the mass was found using the correspondence principle and not directly. If we want to obtain the Lorentz transformations of derived quantities, we must first find the Lorentz transformations of the fundamental physical quantities.
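For reference, the three transformations the question singles out can be written explicitly (standard notation, with the mass line understood in the older "relativistic mass" sense the question uses):

```latex
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
L = \frac{L_{0}}{\gamma} \ \text{(length contraction)}, \qquad
\Delta t = \gamma\,\Delta t_{0} \ \text{(time dilation)}, \qquad
m = \gamma\,m_{0} \ \text{(relativistic mass)}
```

Transformations of derived quantities then follow algebraically; for example, a momentum built from these fundamentals transforms as $p = m v = \gamma m_{0} v$.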
To what extent, are we compromising Darcy’s law, when we characterize the oil/gas flow within a petroleum reservoir?
Does the fundamental physics associated with Darcy's law not change significantly when we apply it to the above application?
Darcy’s law requires that any resistance to the flow through a porous medium should result only from the viscous stresses induced by a single-phase, laminar, steady flow of a Newtonian fluid under isothermal conditions within an inert, rigid and homogeneous porous medium.
For many years I have worked on the Navier–Stokes equations (NSE) under the assumption of incompressible flow. This assumption drives us to work with a simplified model (M = 0), based on the fact that the speed of sound diverges:
a^2 = (∂p/∂ρ)|_s → +∞
Of course, any model is an approximate interpretation of reality, but this specific mathematical modelling assumption contradicts the fundamental physical limit of the speed of light.
Despite the fact that low (but finite) Mach-number models have been developed, the M = 0 model is still widely used both in engineering aerodynamics and in basic research (instability, turbulence, etc.) in fluid dynamics.
Can we really accept the M = 0 model, which violates a fundamental physical limit? If yes, is that the result of assessed studies that used a very low but finite Mach number for comparison?
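As a back-of-the-envelope complement (perfect-gas air values assumed; the M²/2 estimate is the standard low-Mach isentropic scaling, not a result from the question above), one can see just how small the density variation is at the speeds where the M = 0 model is typically applied:

```python
import math

# Perfect-gas sound speed a = sqrt(gamma * R * T), and the classic estimate
# d_rho/rho ~ M^2 / 2 for steady, isentropic, low-Mach flow.
gamma, R, T = 1.4, 287.0, 288.15    # air at sea-level standard conditions
a = math.sqrt(gamma * R * T)        # about 340 m/s

for U in (3.4, 34.0, 102.0):        # sample flow speeds in m/s
    M = U / a
    print(f"U = {U:6.1f} m/s  M = {M:.3f}  d_rho/rho ~ {0.5 * M * M:.5f}")
```

At M = 0.1 the relative density variation is about half a percent, which is why the incompressible idealization works so well in practice even though, taken literally, it implies an infinite signal speed.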
Is there any evidence or theoretical framework to explain the values of the fundamental physical constants? In other words, could the values of the physical constants be different (contingency), or is there some physical necessity for them to be as they are? Note: this is not a metaphysical question.
The 1998 astronomical observations of SN 1A implied a (so-called) accelerating universe. It is over 20 years later and no consensus explanation exists for the 1998 observations. Despite FLRW metric, despite GR, despite QM, despite modified theories like MOND, despite other inventive approaches, still no explanation. It is hard to believe that hundreds or thousands of physicists having available a sophisticated conceptual mathematical and physics toolkit relating to cosmology, gravity, light, and mechanics are all missing how existing physics applies to explain the accelerating expansion of space. Suppose instead that all serious and plausible explanations using the existing toolkit have been made. What would that imply? Does it not imply a fundamental physical principle of the universe has been overlooked or even, not overlooked, but does not yet form part of physics knowledge? In that case, physics is looking for the unknown unknown (to borrow an expression). I suspect the unknown principle relates to dimension (dimension is fundamental and Galileo’s scaling approach in 1638 for a problem originating with the concept of dimensions --- the weight-bearing strength of animal bone — suggests fundamental features of dimension may have been overlooked, beginning then). Is there a concept gap?
Solitons are common to all of them, but we are changing the structures, which are also based on the common photonic crystal. Is there a possibility of the same kind of soliton in all three structures?
Version: 2.0
The question of the nature (or ontological status) of fundamental physical theories, such as spacetime in special and general relativity, and quantum mechanics, has each been a permanent puzzle and a source of debates. This discussion aims to resolve the issue and submit the solution to comments.
Also, when something is correct, that is a sign that it can be proved in more than one way. In support of this question, we found evidence for the same answer on the ontological status in three diverse ways.
Please see at:
DISCLAIMER: We reserve the right to improve this text. All questions, public or not, are usually to be answered here. This will help make this discussion text more complete and save that space for others; please avoid off-topic comments. References are provided by self-search. This text may be modified frequently.
It is widely seen that large-scale cosmic fluids should be treated as "viscoelastic fluids" in theoretical formulation of their stability analyses. Can anyone explain it from the viewpoint of fundamental physical insight?
How have we arrived at the conclusion that the space of our Universe is 3D (and hence that the dimensionality of spacetime is 4D)?
I suppose this is the result of our sense of vision, which is based on both of our eyes. However, the image we perceive is the result of the mind's manipulation (an illusion) of the two "images" that each of our eyes sends to our brain. This manipulation gives us the notion of depth, which is interpreted as the third dimension of space. This is why one-eyed vision (or photography, cinema, TV, ...) is actually 2D vision. In other words, when we see a 3D object and our eyes are (approximately) on a line perpendicular to the plane formed by the object's "height" and "length", our mind infers the object's "width". The photons detectable by each of our eyes were, a time t (= 10^-20 s, say) earlier, on the surface of a sphere with the eye as its center and radius t*c. As the surface of a sphere is 2D (detectable space), if we add the dimension of time (to form spacetime) we should conclude that the dimensionality of our detectable Universe is 3D (2+1) and NOT 4D (3+1).
PS (27/8/2018): Though I am aware that this opinion will provoke instinctive opposition, as it contradicts our "common sense", I will take the risk of opening the issue.
The final target is to study the fundamental physical processes involved in bubble dynamics and the phenomenon of cavitation: to develop a new bubble-dynamics CFD model for the evolution of a suspension of bubbles over a wide range of vesicularity, one that accounts for hydrodynamic interactions between bubbles while they grow, deform under shear flow, and exchange mass by diffusive coarsening. Which commercial or open-source CFD tool and which turbulence model would be the most appropriate?
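Before committing to a full CFD package, the single-bubble core of the problem can be prototyped in a few lines. The sketch below (assumed water-like constants, a polytropic gas, and no mass transfer; all parameters are illustrative) integrates the Rayleigh–Plesset equation for a micron-scale bubble released slightly off its equilibrium radius, which then oscillates near the Minnaert frequency.

```python
import math

# Rayleigh-Plesset equation for a spherical bubble in a liquid:
#   R*R'' + (3/2) R'^2 = (1/rho) [ p_g(R) - p_inf - 2 sigma/R - 4 mu R'/R ]
# with a polytropic gas inside: p_g = p_g0 * (R0/R)^(3*gamma).
rho, sigma, mu = 1000.0, 0.072, 1.0e-3   # water: density, surface tension, viscosity
p_inf, gamma = 101325.0, 1.4             # ambient pressure, polytropic exponent
R0 = 10e-6                               # equilibrium radius: 10 microns
p_g0 = p_inf + 2.0 * sigma / R0          # gas pressure balancing at R = R0

def rhs(R, Rdot):
    p_g = p_g0 * (R0 / R) ** (3.0 * gamma)
    return ((p_g - p_inf - 2.0 * sigma / R - 4.0 * mu * Rdot / R) / (rho * R)
            - 1.5 * Rdot ** 2 / R)

# classic RK4 on (R, R'), starting 10% above equilibrium with R' = 0
R, Rdot, dt = 1.1 * R0, 0.0, 1e-9
Rmin = Rmax = R
for _ in range(5000):                    # 5 microseconds of evolution
    k1v = rhs(R, Rdot);                            k1r = Rdot
    k2v = rhs(R + 0.5*dt*k1r, Rdot + 0.5*dt*k1v);  k2r = Rdot + 0.5*dt*k1v
    k3v = rhs(R + 0.5*dt*k2r, Rdot + 0.5*dt*k2v);  k3r = Rdot + 0.5*dt*k2v
    k4v = rhs(R + dt*k3r, Rdot + dt*k3v);          k4r = Rdot + dt*k3v
    R    += dt * (k1r + 2*k2r + 2*k3r + k4r) / 6.0
    Rdot += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
    Rmin, Rmax = min(Rmin, R), max(Rmax, R)
```

A full suspension model then layers bubble-bubble hydrodynamic coupling and diffusive mass exchange on top of this single-bubble dynamics, which is where a CFD framework becomes necessary.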
Mark Srednicki has claimed to demonstrate the entropy ~ area law -- https://arxiv.org/pdf/hep-th/9303048.pdf
Does anyone know of an independent verification or another demonstration of this result?
Is there a proof of this law?
It seems that our progress in standard of living over the last 500 or so years is mainly connected with different forms of energy conversion and with the discovery of newer materials for that purpose. So how are the fundamental-science projects of today (e.g. the detection of gravitational waves, neutrino observatories, etc.) going to contribute to that single-point program? Or is this a premature question?
- In the conclusion (page 14) of this paper, I suggest that “Younger physicists should also be encouraged to play a significant role in looking after and protecting our physics knowledge before they become exposed to the detrimental effects of the commercial influence on physics.”
Also in the conclusion I offer an idea on how this could be initiated. However, I imagine there are existing schemes that encourage university students and physicists to get involved in theoretical physics and the fundamentals of physics. Do you know of such schemes, and/or do you have your own suggestions in this connection?
Theme for Developing new perspectives of physics:
Let’s return to the traditional domain of original ideas and rigorous arguments of theoretical physics - “Physics with an ideas- and imagination-based ‘art’ where we’re dreaming, imagining and creating …” - (Physics: No longer a vocation? by Anita Mehta, vol 61 no. 6 Physics Today June 2008)
Currently I am beginning to work on photodiodes using wide-band-gap semiconductors such as NiO and ZnO, so I would like to study the fundamental physics of the p-n junction relevant to my topic. Could anyone please suggest some books or documents?
What is the evidence that the speed of light is constant all over the universe? Does it have the same value even in regions of the universe occupied by dark energy?
1) How can one describe short-range and long-range ferromagnetic ordering by analysing M(T, H) data?
2) Is superexchange always short-range order?
3) How does one identify the type of exchange interaction in the magnetism shown by a system?
4) Does superexchange have some relationship with magnetic parameters (such as Curie temperature, doping concentration, and carrier concentration)?
Erik Verlinde said this emergent gravity is constructed using the insights of string theory, black-hole physics, and quantum information theory (all theories that are struggling to take a breath). Our appreciation to Verlinde for his daring step of constructing emergent gravity on dead theories; we loudly take inspiration from him!
It is well known from experimental evidence that a desynchronization of clocks appears between different altitudes on Earth (simultaneity is relative). However, the simultaneity (absolute for the sky) of the sun or the moon (over millions of years, for example) is a fact.
Shouldn't the concept of relativity be questioned?
Professor Michael Longo (University of Michigan, Ann Arbor) and Professor Lior Shamir (Lawrence Technological University) have shown from experimental data that there is an asymmetry between right- and left-twisted spiral galaxies. Its value is about 7%. In the article:
ROTATING SPACE OF THE UNIVERSE, AS A SOURCE OF DARK ENERGY AND DARK MATTER
it is shown that the source of dark matter can be the kinetic energy of rotation of the space of the observed Universe. At the same time, the contribution of the Coriolis force is 6.8%, or about 7%. The close agreement between the asymmetry of right- and left-twisted spiral galaxies and the contribution of the Coriolis force to the kinetic energy of rotation of the space of the observable Universe is strong indirect evidence (from experimental data!) that the space of the observed Universe rotates.
An article in Nature, "Undecidability of the spectral gap" (arXiv:1502.04573 [quant-ph]), shows that finding the spectral gap from a complete quantum-level description of a material is undecidable (in the Turing sense). No matter how completely we can analytically describe a material on the microscopic level, we cannot predict its macroscopic behavior. The problem has been shown to be uncomputable: no algorithm can determine the spectral gap. Even if there is in principle a prediction to be made, we cannot determine what it is, since for a given program there is no general method to determine whether it halts.
Does this result eliminate once and for all the possibility of a theory of everything based on fundamental physics? Is quantum physics undecidable? Is this an epistemic result proving that undecidability places a limit on our knowledge of the world?
I have a question regarding one unusual (thought) system.
Some years ago, at a Russian forum, we discussed a thought device that, as its author claimed, can provide one-directional motion due only to internal forces. The puzzle was resolved by Kirk McDonald of Princeton University; I attach Kirk's solution. I wish to note that the author of the paradox is Georgy Ivanov, not me.
Anyway, Kirk found that there is no resulting directional force. But one puzzle of this device remains: the center of mass of the device moves (in a closed orbit) due only to internal forces. I have marked this result of McDonald's in the file.
In this connection, two questions arise:
1. Why does the center of mass move even though the total momentum is conserved?
2. If the center of mass can move, and this motion is created by internal forces, is it possible to change the design of the device to provide one-directional motion?
Formally, there are no obstacles to realizing it; the total momentum is conserved... Could someone give the answers to these questions?
This thought device does not work on the action-reaction principle, and if a similar device could be made as hardware, it could be a good prototype for an interstellar-flight thruster.
How did the pull of gravity on the planet Mercury in Einstein's spacetime picture differ in value from Newton's? Was it simply the spacetime fabric adjusting this value?
Thanks:)
Schrödinger's self-adjoint operator H is crucial for the current quantum model of the hydrogen atom: it essentially specifies the stationary states and energies. Then there is Schrödinger's unitary evolution equation, which tells how states change with time, and in this evolution equation the same operator H appears. Thus, H provides the "motionless" states, H gives the energies of these motionless states, and H is inserted into a unitary law of movement.
But this unitary evolution fails to explain or predict the physical transitions that occur between stationary states. Therefore, to fill the gap, the probabilistic interpretation of states was introduced. We then have two very different evolution laws. One is the deterministic unitary equation, and the other consists of random jumps between stationary states. The jumps openly violate the unitary evolution, and the unitary evolution does not allow the jumps. But both are simultaneously accepted by Quantism, creating a most uncomfortable state of affairs.
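For concreteness, the two competing evolution laws described above can be written out in standard textbook notation (stated here only to fix ideas, not as an endorsement of either):

```latex
% Stationary states and energies, both supplied by the self-adjoint H:
H\,\psi_n = E_n\,\psi_n
% The deterministic unitary evolution, with the same H:
i\hbar\,\frac{\partial\psi}{\partial t} = H\,\psi,
\qquad \psi(t) = e^{-iHt/\hbar}\,\psi(0)
% The probabilistic jumps (Born rule), which the unitary law does not produce:
\Pr(E_n) = \left|\langle \psi_n \mid \psi \rangle\right|^{2}
```

The tension discussed above is precisely that the second and third lines are both asserted, yet neither follows from the other.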
And what if the quantum evolution equation is plainly wrong? Perhaps there are alternative ways to use H.
Imagine a model, or theory, where the stationary states and energies remain the very same specified by H, but with a different (from the unitary) continuous evolution, and where an initial stationary state evolves in a deterministic manner into a final stationary state, with energy being continuously absorbed and radiated between the stationary energy levels. In this natural theory there is no use, nor need, for a probabilistic interpretation. The natural model for the hydrogen, comprising a space of states, energy observable and evolution equation is explained in
My question is: with this natural theory of atoms already elaborated, what are the chances of its acceptance by mainstream physics?
Professional scientists, in particular physicists and chemists, are well versed in the history of science, and modern communication hastens the diffusion of knowledge. Nevertheless, important scientific changes seem to require a lengthy process, including the disappearance of most leaders, as was noted by Max Planck: "They are not convinced, they die."
Scientists seem particularly conservative and incapable of admitting that their viewpoints are mistaken, as was the case some time ago with the flat Earth, geocentrism, phlogiston, and other scientific misconceptions.
MY EMAIL TO NSF:
My name is Andrei-Lucian Drăgoi and I am a Romanian pediatric specialist, also undertaking independent research in digital physics and informational biology. Regarding your project called "Ideas Lab: Measuring "Big G" Challenge" (which I found at this link: http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=505229&org=PHY&from=home), I want to propose a USA-Romania collaboration in this direction, based on my hypothesis that each chemical isotope may have its own "big G" imprint.
The idea is simple. Analogously to the photon, the hypothetical graviton may actually have a quantum angular momentum measured by a gravitational Planck-like quantum, which I have denoted h_eg, and a quantum G scalar G_q = f(h_eg). Although the Planck constant (h) is constant, h_eg may not be, and may show slight variability depending on many factors, including the intranuclear energetic pressures measured by the average binding energy per nucleon (E_BN) in any (quasi-)stable nucleus. I have proposed a simple first-degree function that generates a series hs_eg(E_BN) as a scalar function of E_BN, which in turn implies a series of quantum G scalars Gs_q(E_BN) = f[hs_eg(E_BN)], also a function of E_BN, as it depends on hs_eg(E_BN). In conclusion: every isotope may have its own G "imprint", and that is one possible explanation (the suspected so-called "systematic error") for the variability of the experimental G values from one team to another. I have called this hypothesis the multiple-G hypothesis (mGH), and I also propose a series of systematic experiments to verify it. As I do not work as a physicist (I am a Pediatrics specialist working in Bucharest, Romania) and only do independent research in theoretical physics, I do not have access to experimental resources, so I propose a collaboration between the USA and Romania, with experiments conducted either in the USA or in Romania (at the "Horia Hulubei" National Institute for R&D in Physics and Nuclear Engineering (IFIN-HH), Magurele, Romania: http://www.nipne.ro)
I have attached an article (in pdf format) that contains my hypothesis and its arguments (exposed in the first part of this paper): this work can also be downloaded from the link http://dragoii.com/BIDUM3.0_beta_version.pdf
My main research pages are:
Please, send me a minimal feedback to know that my message was received.
I am opened to any additional comment/suggestion/advice you may have on my idea on the big G.
===============================
THE REPLY FROM NSF:
Dear Dr. Dragoi,
Thank you for your interest in our programs. Unfortunately, NSF does not fund research groups based outside the US. Should you succeed in your goal of creating a Romanian-US collaboration, please have your American collaborators contact NSF directly.
Best regards,
Pedro Marronetti
====================================
FINAL CONCLUSION: If you are interested in this collaboration, please send feedback to dr.dragoi@yahoo.com so that we may apply to the NSF challenge by 26 October 2016 (the deadline).
I'm going to put an insulator (playdough) on some copper metal, and I was wondering how this would affect charge collection from a fundamental-physics standpoint. The free electrons (the source) would be coming from, or would already be on, the surface. I was thinking they would go around the insulator but remain on the surface. Am I correct in this assumption?
In Chapter V, of The Nature of the Physical World, Arthur Eddington, wrote as follows:
Linkage of Entropy with Becoming. When you say to yourself, “Every day I grow better and better,” science churlishly replies—
“I see no signs of it. I see you extended as a four-dimensional worm in space-time; and, although goodness is not strictly within my province, I will grant that one end of you is better than the other. But whether you grow better or worse depends on which way up I hold you. There is in your consciousness an idea of growth or ‘becoming’ which, if it is not illusory, implies that you have a label ‘This side up.’ I have searched for such a label all through the physical world and can find no trace of it, so I strongly suspect that the label is non-existent in the world of reality.”
That is the reply of science comprised in primary law. Taking account of secondary law, the reply is modified a little, though it is still none too gracious—
“I have looked again and, in the course of studying a property called entropy, I find that the physical world is marked with an arrow which may possibly be intended to indicate which way up it should be regarded. With that orientation I find that you really do grow better. Or, to speak precisely, your good end is in the part of the world with most entropy and your bad end in the part with least. Why this arrangement should be considered more creditable than that of your neighbor who has his good and bad ends the other way round, I cannot imagine.”
See:
The Cambridge philosopher Huw Price provides a very engaging contemporary discussion of this topic in the following short video of his 2011 lecture (27 min.):
This is well worth a viewing. Price has claimed that the ordinary or common-sense conception of time is "subjective" partly by including an emphatic distinction between past and future, the idea of "becoming" in time, or a notion of time "flowing." The argument arises from the temporal symmetry of the laws of fundamental physics --in some contrast and tension with the second law of thermodynamics. So we want to know if "becoming" in particular is merely "subjective," and whether this follows on the basis of fundamental physics.
Eddington, The Nature of the Physical World, Chapter V, "Becoming"
I returned to Einstein's 1907 paper and found that the final conclusion offered at the end apparently omitted one last step. Namely: the lowered value of the speed of light c of a horizontal light ray downstairs, as watched from above, is correct as an observation; only the conclusion drawn from it – that the speed of light is indeed reduced downstairs – was premature.
This is because the light ray hugging the floor downstairs is hugging a constantly receding floor despite the fact that the distance is constant.
(In the same vein, the increased speed of light of a light ray hugging the ceiling of the constantly accelerating rocketship – not mentioned by Einstein – holds true for a ceiling that is constantly approaching the lower floor despite the fact that the distance is constant.) The correctly predicted "gravitational redshift" – and the opposite blueshift in the other direction – qualify as a proof that this thinking is sound.
N.B.: The proposal is perhaps not as stupid as it sounds, because the only theory employed here is the special theory of relativity (which by definition presupposes global constancy of c). This fact was of course constantly on Einstein's mind and can explain why he fell silent on the topic of gravitation for 3½ years.
When he returned to it in mid-1911, writing the originally unfinished c-modifying equation of 1907 down explicitly, he may have been hoping in the back of his mind that someone could spot the error he still felt might be involved. It is not an error, only the omission of a final step.
Now my dear readers have the same chance of offering their help regarding my above "constant-c solution" to this conundrum of Einstein’s, which perhaps is the most important one of history.
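The frequency shifts discussed above can be put in numbers. As a minimal sketch (the function name `fractional_shift` and the choice of the Pound–Rebka tower height are my own illustration, not part of the post), the standard first-order equivalence-principle formula Δν/ν ≈ gh/c² gives:

```python
# First-order gravitational frequency shift from the equivalence principle:
# over a height difference h in a frame accelerating at g (or in a uniform
# gravitational field g), delta_nu / nu ~ g*h / c**2.

G_ACCEL = 9.81        # m/s^2, Earth's surface gravity
C = 299_792_458.0     # m/s, speed of light (exact by definition)

def fractional_shift(h_m, g=G_ACCEL, c=C):
    """Fractional frequency shift over a height difference h_m (metres)."""
    return g * h_m / c**2

# The ~22.5 m Pound-Rebka tower gives the classic shift of a few parts in 10^15:
print(f"{fractional_shift(22.5):.3e}")
```

This is the redshift/blueshift pair the post calls the "correctly predicted gravitational redshift"; the sketch says nothing about whether c itself changes, which is exactly the interpretive question the post raises.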
For example, carbon (atomic number 6, mass number 14) → nitrogen (atomic number 7, mass number 14) + 1 beta particle (electron). In this example, how does the nitrogen get another electron to neutralize its charge (number of protons = number of electrons)?
regards
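A minimal bookkeeping sketch may help (the function `beta_minus_decay` is my own illustrative name, not from the question). The emitted beta electron comes from the nucleus and escapes the atom, so the daughter atom momentarily has 7 protons but still only the parent's 6 electrons, i.e. it is an N⁺ ion; neutrality is restored only when it captures a free electron from its surroundings:

```python
# Charge bookkeeping for beta-minus decay: 14C -> 14N(+) + e(-) + antineutrino.
# Inside the nucleus a neutron converts to a proton (Z: 6 -> 7, A unchanged),
# while the atom's own electron count is untouched by the decay itself.

def beta_minus_decay(Z, A, electrons):
    """Return (Z', A', electrons', net_ion_charge) just after beta-minus decay."""
    Z_new = Z + 1                     # one extra proton
    A_new = A                         # mass number unchanged
    net_charge = Z_new - electrons    # +1: the atom is now a positive ion
    return Z_new, A_new, electrons, net_charge

Z, A, n_e, q = beta_minus_decay(6, 14, 6)
print(Z, A, n_e, q)    # the daughter is a 14N+ ion
n_e += 1               # capture of a stray electron from the medium...
print(Z - n_e)         # ...restores net charge 0: a neutral 14N atom
```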
My thesis subject is "study of ephemeral organizational phenomena inside meta-organizations".
I'm currently looking for articles that connect fundamental physics and management science.
I'm also looking for articles that discuss spacetime as a whole rather than time or space separately, mostly in management science.
If you have any suggestions about my subject, feel free to send me your advice!
Your help will be highly appreciated !
Are the fundamental physical constants rational numbers? I think it is fair to say that any measurement we can actually make yields a rational number.
Over the years, many physicists have wondered whether the fundamental constants of nature might have been different when the universe was younger. If so, the evidence ought to be out there in the cosmos where we can see distant things exactly as they were in the past.
One thing that ought to be obvious is whether a number known as the fine structure constant was different. The fine structure constant determines how strongly atoms hold onto their electrons and is an important factor in the frequencies at which atoms absorb light.
If the fine structure were different earlier in the universe, we ought to be able to see the evidence in the way distant gas clouds absorb light on its way here from even more distant objects such as quasars.
That debate pales in comparison to new claims being made about the fine structure constant. In 2010, John Webb at the University of New South Wales, one of the leading proponents of the varying-constants idea, and a few colleagues said they had new evidence from the Very Large Telescope in Chile that the fine structure constant was different when the universe was younger.
While data from the Keck telescope indicate the fine structure constant was once smaller, the data from the Very Large Telescope indicates the opposite, that the fine structure constant was once larger. That’s significant because Keck looks out into the northern hemisphere, while the VLT looks south.
This means that in one direction the fine structure constant was once smaller, and in exactly the opposite direction it was once bigger. And here we are in the middle, where the constant is what it is (about 1/137.03599…).
So, do you think that fine structure constant varies with direction in space?
For further reading on this issue, see http://www.technologyreview.com/view/420529/fine-structure-constant-varies-with-direction-in-space-says-new-data/.
Refs:
arxiv.org/abs/1008.3907: Evidence For Spatial Variation Of The Fine Structure Constant
arxiv.org/abs/1008.3957: Manifestations Of A Spatial Variation Of Fundamental Constants On Atomic Clocks, Oklo.
Included here you can also find a 2004 ApJ paper by John Bahcall on the question of a varying fine structure constant. (URL: http://www.sns.ias.edu/~jnb/Papers/Preprints/Finestructure/alpha.pdf)
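The value quoted above (about 1/137.03599…) can be recomputed directly from the defining relation α = e²/(4πε₀ħc). A minimal sketch using CODATA 2018 values (the constant names are my own):

```python
import math

# Recomputing the fine structure constant alpha = e^2 / (4*pi*eps0*hbar*c)
# from CODATA 2018 values; 1/alpha comes out near the quoted 137.03599...

e    = 1.602176634e-19      # elementary charge, C (exact since the 2019 SI)
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
hbar = 1.054571817e-34      # reduced Planck constant, J*s
c    = 299_792_458.0        # speed of light, m/s (exact)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = {alpha:.9f}, 1/alpha = {1/alpha:.5f}")
```

Because α is dimensionless, a genuine spatial variation of it, unlike a variation of a dimensionful constant, would be unambiguously measurable, which is why the Webb claim is framed in terms of α.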
Also known as the reversibility paradox, this is an objection to the effect that it should not be possible to derive an irreversible process from time-symmetric dynamics, or that there is an apparent conflict between the temporally symmetric character of fundamental physics and the temporal asymmetry of the second law.
It has sometimes been held in response to the problem that the second law is somehow "subjective" (L. Maccone) or that entropy has an "anthropomorphic" character. I quote from an older paper by E.T. Jaynes,
"After the above insistence that any demonstration of the second law must involve the entropy as measured experimentally, it may come as a shock to realize that, nevertheless, thermodynamics knows no such notion as the "entropy of a physical system." Thermodynamics does have the notion of the entropy of a thermodynamic system; but a given physical system corresponds to many thermodynamic systems" (p. 397).
The idea here is that there is no way to take account of every possible degree of freedom of a physical system within thermodynamics, and that measures of entropy depend on the relevancy of particular degrees of freedom in particular studies or projects.
Does Loschmidt's paradox tell us something of importance about the second law? What is the crucial difference between a "physical system" and a "thermodynamic system?" Does this distinction cast light on the relationship between thermodynamics and measurements of quantum systems?
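The time symmetry at the heart of the paradox can be seen in miniature (the function `verlet` is my own illustrative sketch, not from the question): integrate a simple Newtonian system forward with a time-reversible scheme, flip the velocity, integrate again for the same number of steps, and the initial state is recovered almost exactly. The microscopic dynamics itself carries no arrow of time:

```python
# Loschmidt reversibility in miniature: a harmonic oscillator integrated with
# velocity Verlet (a time-reversible scheme). Reversing the velocity and
# running the same dynamics "forward" retraces the trajectory back to the
# starting point, up to floating-point roundoff.

def verlet(x, v, steps, dt=1e-3, k=1.0, m=1.0):
    """Velocity-Verlet integration of x'' = -(k/m) x for the given steps."""
    a = -k * x / m
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt**2
        a_new = -k * x / m
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = verlet(x0, v0, 5000)      # run forward
x2, v2 = verlet(x1, -v1, 5000)     # flip velocity, run forward again
print(abs(x2 - x0), abs(v2 + v0))  # both near zero: trajectory retraced
```

The paradox, of course, is that a dilute gas of many such reversible degrees of freedom nonetheless exhibits an entropy that (for the overwhelming majority of initial conditions we actually prepare) only increases.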
Regarding our current understanding of quantum mechanics, especially the interpretation of the theory of measurements in terms of parallel universes.
Theoretical physics, quantum mechanics, Fundamental physics