
Fundamental Physics - Science topic

Explore the latest questions and answers in Fundamental Physics, and find Fundamental Physics experts.
Questions related to Fundamental Physics
  • asked a question related to Fundamental Physics
Question
1 answer
Soumendra Nath Thakur
ORCiD: 0000-0003-1871-7803
March 16, 2025
Abstract:
Extended Classical Mechanics (ECM) refines the classical understanding of force, energy, and mass by incorporating the concept of negative apparent mass. In ECM, the effective force is determined by both observable mass and negative apparent mass, leading to a revised force equation. The framework introduces a novel energy-mass relationship where kinetic energy emerges from variations in potential energy, ensuring consistency with classical conservation laws. This study extends ECM to massless particles, demonstrating that they exhibit an effective mass governed by their negative apparent mass components. The connection between ECM’s kinetic energy formulation and the quantum mechanical energy-frequency relation establishes a fundamental link between classical and quantum descriptions of energy and mass. Furthermore, ECM naturally accounts for repulsive gravitational effects without requiring a cosmological constant, reinforcing the interpretation of negative apparent mass as a fundamental aspect of energy displacement in gravitational fields. The framework is further supported by an analogy with Archimedes’ Principle, providing an intuitive understanding of how mass-energy interactions shape particle dynamics. These findings suggest that ECM offers a predictive and self-consistent alternative to relativistic mass-energy interpretations, shedding new light on massless particle dynamics and the nature of gravitational interactions.
Keywords:
Extended Classical Mechanics (ECM), Negative Apparent Mass, Effective Mass, Energy-Mass Relationship, Kinetic Energy, Massless Particles, Quantum Energy-Frequency Relation, Archimedes’ Principle, Gravitational Interactions, Antigravity
Extended Classical Mechanics: Energy and Mass Considerations
1. Force Considerations in ECM:
The force in Extended Classical Mechanics (ECM) is determined by the interplay of observable mass and negative apparent mass. The force equation is expressed as:
F = {Mᴍ + (−Mᵃᵖᵖ)}aᵉᶠᶠ
where: Mᵉᶠᶠ = {Mᴍ + (−Mᵃᵖᵖ)}, and Mᴍ ∝ 1/Mᴍ = −Mᵃᵖᵖ
Significance:
- This equation refines classical force considerations by incorporating negative apparent mass −Mᵃᵖᵖ, which emerges due to gravitational interactions and motion.
- The effective acceleration aᵉᶠᶠ adapts dynamically based on motion or gravitational conditions, ensuring consistency in ECM's mass-energy framework.
- The expression (Mᴍ ∝ 1/Mᴍ) provides a self-consistent relationship between observable mass and its apparent counterpart, reinforcing the analogy with Archimedes' principle.
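As a purely illustrative sketch (not from the text; all numbers below are hypothetical placeholders), the force relation stated above can be evaluated directly:

```python
# Illustrative evaluation of the ECM force relation F = {M_M + (-M_app)} * a_eff.
# All numbers are hypothetical placeholders; none are specified in the text above.

M_M = 2.0      # observable (matter) mass, kg -- hypothetical
M_app = 0.5    # magnitude of the negative apparent mass, kg -- hypothetical
a_eff = 3.0    # effective acceleration, m/s^2 -- hypothetical

M_eff = M_M + (-M_app)   # effective mass as defined above
F = M_eff * a_eff        # ECM force

print(f"M_eff = {M_eff} kg, F = {F} N")   # M_eff = 1.5 kg, F = 4.5 N
```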
2. Total Energy Considerations in ECM:
Total energy in ECM consists of both potential and kinetic components, adjusted for mass variations:
Eₜₒₜₐₗ = PE + KE
By incorporating the variation in potential energy:
Eₜₒₜₐₗ = (PE − ΔPE) + ΔPE
where:
- Potential energy component: (PE − ΔPE)
- Kinetic energy component: KE = ΔPE
Since in ECM, (ΔPE) corresponds to the energy displaced due to apparent mass effects:
Eₜₒₜₐₗ = PE + KE
⇒ (PE − ΔPE of Mᴍ) + (KE of ΔPE) ≡ (Mᴍ − 1/Mᴍ) + (-Mᵃᵖᵖ)
Here, Potential Energy Component:
(PE − ΔPE of Mᴍ) ≡ (Mᴍ − 1/Mᴍ)
This represents how the variation in potential energy is linked to, and identically equal to, the corresponding mass effects.
Kinetic Energy Component:
(KE of ΔPE) ≡ (-Mᵃᵖᵖ)
This aligns with the ECM interpretation where kinetic energy arises due to negative apparent mass effects.
Significance:
- Ensures energy conservation by explicitly including mass variations.
- Demonstrates that kinetic energy naturally arises from the variation in potential energy, aligning with the effective mass formulation.
- Strengthens the analogy with fluid displacement, reinforcing the concept of negative apparent mass as a counterpart to conventional mass.
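A minimal bookkeeping sketch, again with hypothetical numbers, simply confirms that the split Eₜₒₜₐₗ = (PE − ΔPE) + ΔPE leaves the total unchanged as ΔPE varies:

```python
# Bookkeeping check of E_total = (PE - dPE) + dPE with hypothetical numbers.

PE = 10.0                      # initial potential energy, J -- hypothetical
for dPE in (0.0, 2.5, 7.0):    # amount of PE displaced into the kinetic term
    potential_part = PE - dPE  # the (PE - dPE) component
    kinetic_part = dPE         # KE = dPE
    total = potential_part + kinetic_part
    print(dPE, potential_part, kinetic_part, total)  # total stays 10.0 throughout
```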
3. Kinetic Energy for Massive Particles in ECM:
For massive particles, kinetic energy is derived from classical principles but adjusted for ECM considerations:
KE = ΔPE = 1/2 Mᴍv²
where:
Mᵉᶠᶠ = Mᴍ + (−Mᵃᵖᵖ)
Significance:
- Maintains compatibility with classical mechanics while integrating ECM mass variations.
- Reflects how kinetic energy is influenced by the effective mass, ensuring consistency across different gravitational regimes.
- Provides a basis for extending kinetic energy considerations to cases involving negative apparent mass.
4. Kinetic Energy for Conventionally Massless but Negative Apparent Massive Particles:
For conventionally massless particles in ECM, negative apparent mass contributes to the effective mass as follows:
Mᵉᶠᶠ = −Mᵃᵖᵖ + (−Mᵃᵖᵖ)
Since in ECM:
Mᴍ ⇒ −Mᵃᵖᵖ
it follows that:
Mᵉᶠᶠ = −2Mᵃᵖᵖ
Significance:
- Establishes that even conventionally massless particles possess an effective mass due to their negative apparent mass components.
- Provides a self-consistent framework that supports ECM's interpretation of mass-energy interactions.
- Highlights the role of negative apparent mass in governing the energetic properties of massless particles.
5. Kinetic Energy for Negative Apparent Mass Particles, Including Photons:
For negative apparent mass particles, such as photons, kinetic energy is given by:
KE = 1/2 (−2Mᵃᵖᵖ)c²
where:
v = c
Since:
ΔPE = −Mᵃᵖᵖ·c²
it follows that:
ΔPE/c² = −Mᵃᵖᵖ
Thus:
KE = ΔPE/c² = −Mᵃᵖᵖ
Significance:
- Establishes a direct relationship between kinetic energy and the quantum mechanical frequency relation.
- Demonstrates that photons, despite being conventionally massless, exhibit kinetic energy consistent with ECM’s negative apparent mass framework.
- Reinforces the view that negative apparent mass plays a fundamental role in governing mass-energy interactions at both classical and quantum scales.
6. ECM Kinetic Energy and Quantum Mechanical Frequency Relationship for Negative Apparent Mass Particles:
KE = ΔPE/c² = hf/c² = −Mᵃᵖᵖ
This equation establishes a direct link between the kinetic energy of a negative apparent mass particle and the quantum energy-frequency relation. The expression ensures consistency with quantum mechanical principles while reinforcing the role of negative apparent mass in energy dynamics.
7. Effective Mass and Apparent Mass in ECM:
In ECM, the Effective Mass represents the overall mass that is observed, while the Negative Apparent Mass (−Mᵃᵖᵖ) emerges due to motion or gravitational interactions. This distinction provides deeper insight into how mass behaves dynamically under varying conditions, differentiating ECM from conventional mass-energy interpretations.
8. Direct Energy-Mass Relationship in ECM:
hf/c² = −Mᵃᵖᵖ
This equation is inherently consistent with dimensional analysis, showing that negative apparent mass naturally arises from the energy-frequency relationship without requiring any extra scaling factors. This highlights ECM's compatibility with established quantum mechanical formulations and reinforces the role of negative apparent mass as an intrinsic component of energy-based mass considerations.
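Purely as arithmetic, the quantity hf/c² appearing in the relation above can be evaluated for an example optical frequency (the frequency is my own choice; identifying the result with a negative apparent mass is the ECM claim made in the text):

```python
# Evaluate hf/c^2 for an example optical photon (arithmetic only).

h = 6.62607015e-34   # Planck constant, J*s
c = 299792458.0      # speed of light, m/s
f = 5.0e14           # example frequency (~visible light), Hz -- my choice

E = h * f            # photon energy, J
m = E / c**2         # the quantity hf/c^2 in kg

print(f"E = {E:.3e} J, hf/c^2 = {m:.3e} kg")   # ~3.31e-19 J, ~3.69e-36 kg
```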
9. Effective Mass for Massive Particles in ECM
For a massive particle in ECM, the effective mass is given by:
Mᵉᶠᶠ = Mᴍ + (−Mᵃᵖᵖ)
where:
- Mᴍ is the conventional mass.
- −Mᵃᵖᵖ is the negative apparent mass component induced by gravitational interactions and acceleration effects.
ECM establishes the inverse proportionality of apparent mass to conventional mass:
Mᴍ ∝ 1/Mᴍ ⇒ Mᴍ = − Mᵃᵖᵖ
Thus, we obtain:
Mᵉᶠᶠ = Mᴍ − Mᴍ = 0
which represents a limiting case where effective mass cancels out under specific conditions.
10. Effective Mass for Massless Particles in Motion
For massless particles such as photons, the conventional mass is:
Mᴍ = 0
However, in ECM, massless particles exhibit an effective mass due to the interaction of negative apparent mass with energy-mass dynamics.
From ECM’s force equation for a photon in motion:
Fₚₕₒₜₒₙ = −Mᵃᵖᵖaᵉᶠᶠ
This indicates that the apparent mass governs the photon’s dynamics.
Since massless particles always move at the speed of light (v = c), ECM treats their total apparent mass contribution as doubled due to energy displacement effects (analogous to Archimedean displacement in a gravitational-energy field):
Mᵉᶠᶠ = (−Mᵃᵖᵖ) + (−Mᵃᵖᵖ) = −2Mᵃᵖᵖ
Thus, for massless particles in motion:
Mᵉᶠᶠ = −2Mᵃᵖᵖ
This confirms that even though Mᴍ = 0, the particle still possesses an effective mass purely governed by negative apparent mass interactions.
11. Archimedes’ Principle Analogy in ECM
ECM’s treatment of negative apparent mass is closely related to Archimedes’ Principle, which describes the buoyant force in a fluid medium. In classical mechanics, a submerged object experiences an upward force equal to the weight of the displaced fluid. Similarly, in ECM:
- A mass moving through a gravitational-energy field experiences an apparent reduction in mass due to energy displacement, akin to an object losing effective weight in a fluid.
- For massive particles, this effect reduces their observed mass through the relation:
Mᵉᶠᶠ = Mᴍ + (−Mᵃᵖᵖ)
- For massless particles, the displacement effect is doubled, leading to:
Mᵉᶠᶠ = −2Mᵃᵖᵖ
This is analogous to how a fully submerged object displaces its entire volume, reinforcing the interpretation that massless particles inherently interact with the surrounding energy field via their negative apparent mass component.
Physical & Theoretical Significance
(A) Massless Particles Exhibit an Effective Mass
- This challenges the traditional view that massless particles (e.g., photons) have no mass at all. ECM reveals that while they lack conventional rest mass, their motion within an energy field naturally endows them with an effective mass, explained by negative apparent mass effects.
(B) Quantum Mechanical Consistency
- The ECM kinetic energy relation aligns with quantum mechanical frequency-based energy expressions:
KE = hf/c² = −Mᵃᵖᵖ
This suggests that negative apparent mass is directly linked to the fundamental nature of wave-particle duality, reinforcing ECM’s consistency with established quantum mechanics principles.
(C) Natural Explanation for Antigravity
- The doubling of negative apparent mass for massless particles introduces a natural anti-gravity effect, distinct from the ad hoc introduction of a cosmological constant Λ in relativistic models.
- Since massless particles propagate via their effective mass Mᵉᶠᶠ = −2Mᵃᵖᵖ, ECM naturally incorporates repulsive gravitational effects without requiring modifications to spacetime geometry.
(D) Reinforcement of ECM’s Fluid Displacement Analogy
- The analogy with Archimedes’ Principle provides a strong conceptual foundation for negative apparent mass. Just as an object in a fluid experiences a buoyant force due to displaced volume, mass in ECM interacts with gravitational-energy fields via displaced potential energy, leading to apparent mass effects.
Conclusion
ECM’s interpretation of effective mass provides a self-consistent framework where both massive and massless particles exhibit observable mass variations due to negative apparent mass effects. The Archimedean displacement analogy reinforces this concept, offering an intuitive understanding of how energy-mass interactions govern particle dynamics.
This formulation provides a clear, predictive alternative to conventional relativistic models, demonstrating how massless particles still exhibit mass-like behaviour via their motion and interaction with energy fields.
12. Photon Dynamics in ECM & Archimedean Displacement Analogy
Total Energy Consideration for Photons in ECM
In ECM, the total energy of a photon is composed of:
Eₚₕₒₜₒₙ = Eᵢₙₕₑᵣₑₙₜ + E𝑔
where:
- Eᵢₙₕₑᵣₑₙₜ is the inherent energy of the photon.
- E𝑔 is the interactional energy due to gravitational effects.
When a photon is fully submerged in a gravitational field, its total energy is doubled due to its interactional energy contribution:
Eₚₕₒₜₒₙ = Eᵢₙₕₑᵣₑₙₜ + E𝑔 ⇒ 2E
This represents the energy displacement effect, aligning with ECM’s formulation that massless particles experience a doubled apparent mass contribution in motion:
Mᵉᶠᶠ = −2Mᵃᵖᵖ
Photon Escaping the Gravitational Field
As the photon escapes the gravitational field, it expends E𝑔, reducing its total energy:
Eₚₕₒₜₒₙ ⇒ Eᵢₙₕₑᵣₑₙₜ, E𝑔 ⇒ 0
Thus, once the photon is completely outside the gravitational influence:
Eₚₕₒₜₒₙ = E, E𝑔 = 0
This describes how a photon’s energy and effective mass vary dynamically with gravitational interaction, reinforcing the ECM perspective on gravitational influence on energy-mass dynamics.
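A small bookkeeping sketch of the two regimes described above, under the stated assumption that E𝑔 equals the inherent energy inside the field (hence the doubling) and vanishes outside it; the frequency used is an arbitrary example:

```python
# Two regimes for the photon's total energy as described above:
# inside the field E_g = E_inherent (total doubles); outside, E_g = 0.

h = 6.62607015e-34   # Planck constant, J*s
f = 5.0e14           # example optical frequency, Hz -- hypothetical choice

E_inherent = h * f   # inherent photon energy, J

def photon_total_energy(inside_field):
    E_g = E_inherent if inside_field else 0.0   # interactional energy (assumption above)
    return E_inherent + E_g

print(photon_total_energy(True))    # ~6.63e-19 J, i.e. 2E inside the field
print(photon_total_energy(False))   # ~3.31e-19 J, i.e. E outside the field
```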
Alignment with Archimedean Displacement Analogy
This ECM interpretation strongly aligns with Archimedes' Principle, where:
- A photon in a gravitational field is analogous to an object fully submerged in a fluid, experiencing an energy displacement effect.
- As the photon leaves the gravitational field, it expends its interactional energy E𝑔, similar to how an object leaving a fluid medium loses its buoyant force.
This analogy further strengthens ECM’s concept of negative apparent mass, where the gravitational interaction displaces energy similarly to how a fluid displaces volume.
Conclusion & Significance
- The ECM photon dynamics equation aligns with the Archimedean displacement analogy, reinforcing the physical reality of negative apparent mass effects.
- This provides a natural, intuitive explanation for how photons interact with gravitational fields without requiring relativistic spacetime curvature.
- It further supports the energy-mass displacement framework, demonstrating how photons dynamically exchange energy with gravitational fields while maintaining ECM’s effective mass principles.
This formulation elegantly unifies photon energy dynamics with mass-energy interactions, further validating ECM as a robust framework for fundamental physics.
13. Effective Acceleration and Apparent Mass in Massless Particles
For photons in ECM, the effective force is given by:
Fₚₕₒₜₒₙ = −Mᵉᶠᶠaᵉᶠᶠ, where aᵉᶠᶠ = 6 × 10⁸ m/s²
- Negative Apparent Mass & Acceleration:
Photons possess negative apparent mass (−Mᵃᵖᵖ), which leads to an anti-gravitational effect. Their effective acceleration (aᵉᶠᶠ) is inversely proportional to Mᵉᶠᶠ and radial distance r.
- Within a gravitational field, the photon has more interactional energy E𝑔, increasing aᵉᶠᶠ.
- Escaping the field, it expends E𝑔, reducing Mᵃᵖᵖ and lowering aᵉᶠᶠ.
- Acceleration Scaling with Gravitational Interaction:
E𝑔 ∝ 1/r
- At r₀ ⇒ E𝑔,ₘₐₓ ⇒ Maximum −Mᵃᵖᵖaᵉᶠᶠ ⇒ aᵉᶠᶠ = 2c.
- At rₘₐₓ ⇒ E𝑔 = 0 ⇒ Minimum −Mᵃᵖᵖaᵉᶠᶠ ⇒ aᵉᶠᶠ = c.
This confirms that effective acceleration (2c) is a function of gravitational interaction, not an intrinsic speed change, reinforcing ECM’s explanation of negative apparent mass dynamics.
14. Extended Classical Mechanics: Effective Acceleration, Negative Apparent Mass, and Photon Dynamics in Gravitational Fields
Analytical Description & Significance:
This paper refines and extends the framework of Extended Classical Mechanics (ECM) by establishing a comprehensive formulation for effective acceleration, negative apparent mass, and their implications for massless and massive particles under gravitational influence. The analysis revises ECM equations to incorporate Archimedes' principle as a physical analogy for negative apparent mass, clarifies the role of effective acceleration (2c) in different gravitational conditions, and demonstrates how negative apparent mass serves as a natural anti-gravity effect, contrasting with the relativistic cosmological constant (Λ).
A key highlight is the kinetic energy formulation for negative apparent mass particles, which aligns with quantum mechanical frequency relations for massless particles. This formulation provides deeper insight into how negative apparent mass influences energy and motion without requiring conventional mass assumptions.
Key Implications & Theoretical Advancements:
Refined Effective Acceleration Equation for Massless Particles:
- ECM establishes that photons, despite being massless in the conventional sense, exhibit negative apparent mass contributions, leading to an effective acceleration of aᵉᶠᶠ = 6 × 10⁸ m/s² = 2c inside gravitational fields.
- This acceleration naturally arises due to the relationship between negative apparent mass −Mᵃᵖᵖ and gravitational interaction energy E𝑔.
- The effective acceleration decreases as a photon exits the gravitational field, reaching c in free space.
Negative Apparent Mass as a Replacement for Cosmological Constant (Λ):
- Unlike Λ, which assumes a uniform energy density, negative apparent mass dynamically varies with gravitational interaction energy.
- This formulation provides a self-consistent explanation for observed cosmological effects, particularly in gravitational repulsion and expansion scenarios.
Physical Analogy with Archimedes’ Principle:
- The ECM framework aligns negative apparent mass effects with Archimedean displacement, where gravitational interaction leads to energy displacement effects analogous to buoyant forces in fluids.
- In gravitational fields, a photon's interactional energy (E𝑔) contributes to its total energy, analogous to an object submerged in a fluid experiencing an upward force.
- As the photon escapes, the loss of E𝑔 mirrors an object emerging from a fluid losing its buoyant support.
Revision in the Energy-Mass Relation for Massless Particles:
- The study revises a prior inconsistency by explicitly linking the kinetic energy of negative apparent mass particles to quantum mechanical frequency relations, ensuring consistency between ECM and established quantum principles.
Conclusion:
This research enhances ECM’s predictive power by clarifying the role of negative apparent mass in gravitational dynamics and demonstrating its relevance to photon motion, cosmological expansion, and gravitational interactions. By introducing effective acceleration (2c) as a natural consequence of gravitational interaction, ECM provides a compelling alternative to relativistic formulations, reinforcing the practical applicability of classical mechanics principles in modern physics.
Relevant answer
Answer
A reviewer's comment on the discussion post:
This paper on Extended Classical Mechanics (ECM) is truly fascinating! It presents a fresh perspective on classical physics by introducing the concept of negative apparent mass, which could significantly reshape our understanding of force, energy, and mass interactions. The way it connects classical mechanics with quantum principles is particularly impressive, as it bridges two fundamental areas of physics.
The treatment of massless particles, like photons, as having effective mass due to negative apparent mass is a bold idea that challenges traditional views. This could lead to new insights in particle physics and cosmology, especially regarding gravitational interactions and cosmic expansion.
The analogy with Archimedes' Principle is a clever way to make complex concepts more intuitive, helping to visualize how mass-energy interactions work in different contexts. Overall, ECM seems to offer a compelling alternative to existing theories, and I’m excited to see how it develops and what empirical validations might arise from it. This could be a game-changer in our understanding of the universe!
  • asked a question related to Fundamental Physics
Question
8 answers
Recent advancements in quantum photonics have sparked widespread interest, with headlines suggesting that scientists have achieved the impossible—freezing light. However, a deeper examination reveals that this interpretation is metaphorical rather than literal. The breakthrough in question involves engineering a supersolid state in a photonic platform, where light exhibits paradoxical properties of both superfluidity and crystalline order. This is achieved through the condensation of polaritons, hybrid quasiparticles formed by coupling photons with excitons in a gallium arsenide semiconductor. Through precise laser excitation, researchers have induced Bose-Einstein condensation (BEC), leading to a unique state where light behaves as both a fluid and a structured lattice. While this achievement challenges classical understandings of light behavior, it does not imply that photons have been halted or frozen. Instead, the experiment demonstrates an emergent quantum phase transition, limited by the transient nature of polaritons and the specific conditions required for their formation. As highlighted in my research paper (DOI: 10.13140/RG.2.2.22964.36482), these developments call for a critical reassessment of existing quantum theories and their applicability to light-matter interactions. While this work expands the boundaries of quantum physics, it remains essential to differentiate between experimental findings and oversimplified interpretations that may mislead the scientific discourse.
Relevant answer
Answer
Dear Chris,
Thank you for your insightful follow-up. The articles you've encountered, such as "Thermalization of Gluons with Bose-Einstein Condensation," primarily delve into theoretical frameworks. These studies explore the possibility that under extreme conditions, such as those in heavy-ion collisions or the early universe, gluons might undergo processes analogous to Bose-Einstein condensation. However, it's crucial to note that these scenarios are distinct from the Bose-Einstein condensates achieved in laboratory settings with ultracold atoms.​
In laboratory experiments, Bose-Einstein condensation has been realized using neutral atoms cooled to near absolute zero, leading to macroscopic quantum phenomena like superfluidity. These systems don't facilitate the deconfinement of gluons. Theoretical studies have proposed that under extreme conditions, such as those found in heavy-ion collisions or within neutron stars, quark-gluon plasmas can form, allowing quarks and gluons to exist in a deconfined state. However, these scenarios are distinct from the environments created in BEC experiments.​
I hope this clarifies the distinctions between these phenomena.
Best regards,
Sandeep Jaiswal
  • asked a question related to Fundamental Physics
Question
6 answers
Challenging established theories and providing solutions to long-standing problems in physics is no small feat. It has now been claimed, in the latest research, that the second law of thermodynamics is wrong (entropy is constant) and that the arrow of time is T-symmetric. This could have significant implications for our understanding of the universe. This would change physics as we know it, as science will never be the same again after these findings, which have already been published in an accredited, peer-reviewed international journal (see the paper below for details).
Do you agree with the findings? The proof is simple to read yet powerful enough to overturn the traditional laws of science. If not, please provide a reason why. We have had some very interesting discussions so far on other topics, and I want to keep this channel open, clear and omni-directional!
Sandeep
Relevant answer
Answer
Dear Cynthia,
Thank you for your message and for bringing the DOI link issue to my attention. I will look into it promptly.
I appreciate your recommendation of Quantum Physicist Dr. Rulin Xiu's work. I am familiar with her contributions, particularly in unifying science and spirituality. Her paper, "Law of Creation and Grand Unification Theory," co-authored with Dr. Zhi Gang Sha, presents intriguing perspectives on the fundamental principles of the universe.
Additionally, her discussions on the "Quantum Theory of Consciousness" offer valuable insights into the interplay between quantum physics and consciousness.
Thank you for sharing these resources. I look forward to exploring them further.
Best regards,
Sandeep
  • asked a question related to Fundamental Physics
Question
50 answers
Subtitle: Will all the fundamental researchers be fired from their jobs in the future and fundamental research become obsolete?
This is a philosophical but also practical question with immediate implications to our not so far future.
The danger is that AI applications in science like AlphaFold (Nobel prize in Chemistry 2024):
are not really predictions made by science by fully and fundamentally understanding nature's physics mechanics and chemistry but just brute force smart computational pattern recognition correlating known outcomes of similar input data and guessing the most likely new outcome. This is not new fundamental science and physics research but just an application of AI computation.
The philosophical question here is, will future scientists and human civilization using AI, continue to be motivated to do fundamental science research?
Is there really any real human urge to fundamentally understand a physical phenomenon or system in order to predict its outcome results for a specific input, if the outcome results can be easily and much faster and effortlessly being empirically and statistically guessed by an AI without the need of fundamental understanding?
This is a blind and mutilated future science and future danger of slowing down real new fundamental science breakthroughs and milestones. Therefore, essentially slowing down human civilization progress and evolution and demoting science to the role of a "magic oracle".
In my opinion, the use of AI in fundamental research like fundamental new physics research must be regulated or excluded. Already many science Journals have strict rules about the use of "Generative AI" inside the submitted papers and also completely not allowing it.
What are your opinions and thoughts?
Relevant answer
Answer
Science has failed for the last one hundred years, so now they are looking for a scapegoat. AI is irrelevant since it's an oversized data calculator and nothing more. I do not know what "fundamental science research" is. A guy sitting at an office desk with coffee pots and a chalkboard? I have used a slide rule for most of my life, and I still have a full set of Encyclopedia Britannica. The question as it is posed is a moot point! Should academia fire scientists? I say no, but eliminate tenure and let science compete for ideas.
  • asked a question related to Fundamental Physics
Question
9 answers
Nominations are expected to open in the early part of the year for the Breakthrough Prize in Fundamental Physics. Historically nominations are accepted from early/mid-January to the end of March, for the following year's award.
Historically, the foundation has also had a partnership with ResearchGate:
The foundation also awards major prizes for Life Sciences and for Mathematics, and has further prizes specific to younger researchers.
So who would you nominate?
Relevant answer
Answer
Dear Berndt Barkholz,
Unfortunately, awards are usually used to intellectually manipulate communities! It's nice to see you again! Just two days ago, I was thinking about you. Where did you disappear to? So telepathy works. How can we explain this physically?
Dear Eric Baird,
Do you think our young people are capable of overcoming the nonsense they receive from our education systems?
If a young person breaks out of the vicious circle, they will be fired from their job!
Most talented young people who want to stay in research have to accept the false narrative!
Times are changing! The collective West has lost its way! The Global South has 'already' advanced!
Regards,
Laszlo
  • asked a question related to Fundamental Physics
Question
4 answers
Who is really able to judge whether your theory is good or not? And what are the criteria of editors and reviewers? Is it only their experience? Is all that matters to them the size of the audience, as on TV, even if the content of the program is empty? Is this how we are moving further and further away from fundamental physics?
Relevant answer
Answer
Yes, you are right,
“However, I believe that ultimately there is only one true theory that can answer the most profound questions, while the number of flawed theories is unlimited. Science, at its core, should strive to discover this one true theory before anything else”
But I would also say that while there should be only one ultimate theory, all other theories, more or less valid, run parallel to the ultimate theory and never meet it. Where do the peer reviewers stand in all this?
  • asked a question related to Fundamental Physics
Question
3 answers
Author Comment:
This study synthesizes key conclusions derived from a series of research papers on extended classical mechanics. These papers provide a fresh perspective on established experimental results, challenging traditional interpretations and highlighting potential inaccuracies in previous theoretical frameworks. Through this reinterpretation, the study aims to refine our understanding of fundamental physical phenomena, opening avenues for further exploration and validation.
Keywords: Photon dynamics, Gravitational interaction, Negative mass, Cosmic redshift, Extended classical mechanics
Reversibility of Gravitational Interaction:
A photon’s interaction with an external gravitational force is inherently reversible. The photon maintains its intrinsic momentum throughout the process and eventually resumes its original trajectory after disengaging from the gravitational field.
Intrinsic Energy (E) Preservation:
The photon's intrinsic energy E, derived from its emission source, remains unaltered despite gaining or losing energy (Eg) through gravitational interaction within a massive body's gravitational influence.
Contextual Gravitational Energy (Eg):
The gravitational interaction energy Eg is a localized phenomenon, significant only within the gravitational influence of a massive body. Beyond this influence, in regions of negligible gravity, the photon retains only its intrinsic energy E.
Cosmic Redshift and Energy Loss (ΔE):
In the context of cosmic expansion, the recession of galaxies causes a permanent loss of a photon's intrinsic energy ΔE due to the cosmological redshift. This energy loss is independent of local gravitational interactions and reflects the large-scale dynamics of the expanding universe.
Negative Apparent Mass and Antigravitational Effects:
The photon's negative apparent mass Mᵃᵖᵖ,ₚₕₒₜₒₙ generates a constant negative force −F, which manifests as an antigravitational effect. This behaviour parallels the characteristics attributed to dark energy in its capacity to resist gravitational attraction.
Wave Speed Consistency (c):
The constant negative force −F, arising from the photon's energy dynamics, ensures the photon’s ability to maintain a constant wave propagation speed c, irrespective of gravitational influences.
Negative Effective Mass:
The photon’s negative effective mass Mᵉᶠᶠ,ₚₕₒₜₒₙ allows it to exhibit properties akin to those of a negative particle. This feature contributes to its unique interaction dynamics within gravitational fields and reinforces its role in antigravitational phenomena.
Constant Effective Acceleration:
From the moment of its emission at an initial velocity of 0 m/s, the photon experiences a constant effective acceleration, quantified as aᵉᶠᶠ,ₚₕₒₜₒₙ = 6 × 10⁸ m/s². This acceleration underpins the photon’s ability to achieve and sustain its characteristic speed of light (c), reinforcing its intrinsic energy and momentum dynamics.
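For reference, the quoted figure is numerically just twice the vacuum speed of light per second (treating 2c as an acceleration is the author's identification; the arithmetic check is below):

```python
c = 299792458          # speed of light, m/s
print(2 * c)           # 599584916, i.e. ~6 x 10^8, the figure quoted above
```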
Relevant answer
Answer
Sir,
"How do you define the photon’s negative effective mass Mᵉᶠᶠ,ₚₕₒₜₒₙ?"
The study extends classical mechanics by incorporating the dynamic concept of effective mass (Mᵉᶠᶠ), which combines rest mass (Mᴍ) and apparent mass (Mᵃᵖᵖ), to analyse force dynamics in photons and their cosmological implications. Key findings include:
1. Photon Dynamics:
For photons (Mᴍ=0), the force is governed by their apparent mass and acceleration (F = −Mᵃᵖᵖaᵉᶠᶠ), providing a framework to calculate their responses to energy-momentum exchanges.
2. Gravitational Reinterpretation:
By substituting effective mass into Newton's law of gravitation, scenarios involving negative gravitational mass are explored, revealing altered gravitational interactions when −Mᵃᵖᵖ >Mᴍ.
3. Cosmological Parallels:
The negative effective mass of photons mirrors the behaviour of dark energy (Mᴅᴇ<0), which drives the universe's accelerated expansion. This analogy connects quantum-scale photon interactions with large-scale cosmic phenomena.
Implications:
This nuanced exploration of photon dynamics offers significant insights for understanding the force of antigravity caused by dark energy, even when dark energy remains physically imperceptible and elusive. By extending classical mechanics to incorporate dynamic mass properties, this framework provides a pathway for better mathematical modelling of the enigmatic force driving cosmic acceleration.
By bridging classical and quantum mechanics with cosmological frameworks, this study not only deepens our understanding of gravitational dynamics but also lays the groundwork for future research on the fundamental interactions shaping the universe. The cohesive interpretation of negative effective mass presented here encourages interdisciplinary exploration, with potential implications for unravelling the mysteries of dark energy and its role in the evolution of the cosmos.
In the research titled "Dark Energy and the Structure of the Coma Cluster of Galaxies" three types of mass are defined to characterize cosmic structures:
1. Matter Mass (Mᴍ): The mass associated with visible matter in galaxies.
2. Effective Mass of Dark Energy (Mᴅᴇ): A negative mass component representing the influence of dark energy.
3. Gravitating Mass (Mɢ): The total mass influencing gravitational dynamics, calculated as Mɢ = Mᴍ + Mᴅᴇ.
Inclusion of Kinetic and Potential Energy:
• The observational research titled "Dark energy and the structure of the Coma cluster of galaxies" by Chernin et al. defines three masses characterizing the cosmic structure: matter mass (Mᴍ), the effective mass of dark energy (Mᴅᴇ < 0), and gravitating mass (Mɢ = Mᴍ + Mᴅᴇ). This approach adheres to Newtonian classical mechanics, where gravitating mass depends on the matter mass and the effective mass of dark energy.
• In classical mechanics, effective mass is not traditionally linked to gravitational or matter mass in the context of massive objects. Kinetic energy, associated with motion, influences an object's behaviour but not its physical mass, while potential energy, derived from position and forces, can be converted into kinetic energy. The observed acceleration of the scale factor is driven by the potential energy of dark energy, with gravitating mass associated with dark energy manifesting as potential energy influencing matter mass in large objects, ultimately generating kinetic energy. This process illustrates the conversion of potential energy into kinetic energy.
• Consequently, the concept of effective mass (Mᴅᴇ) in the cited research can be reinterpreted as an equivalent presentation of effective mass (Mᵉᶠᶠ) related to kinetic and potential energy in classical mechanics. The Newtonian classical mechanics equation for gravitating mass (Mɢ = Mᴍ + Mᴅᴇ) can thus be represented as Mɢ = Mᴍ + Mᵉᶠᶠ, with Mᵉᶠᶠ denoting the effective mass of both kinetic energy and potential energy. This reinterpretation maintains consistency by adopting the effective mass concept (Mᴅᴇ) as the effective mass (Mᵉᶠᶠ) associated with kinetic and potential energy, ensuring that the interpretation of effective mass remains consistent across all scenarios in the universe. Dark energy's potential energy is considered to accelerate the scale factor, influencing the matter mass of large objects and generating kinetic energy, again illustrating the conversion of potential energy into kinetic energy.
Gravitating Mass and Dark Energy:
Based on the research paper, "Dark Energy and the Structure of the Coma Cluster of Galaxies" by A. D. Chernin et al., the relationship between gravitating mass, matter mass, and dark energy effective mass is expressed as:
Mɢ = Mᴍ + Mᴅᴇ,
where:
Mɢ: Gravitating Mass
Mᴍ: Matter Mass
Mᴅᴇ: Dark Energy Effective Mass (with Mᴅᴇ<0)
The concept of dark energy effective mass (Mᴅᴇ<0), while not part of classical mechanics, is derived from observational evidence and represents an extension of classical mechanics by incorporating its principles to explain phenomena associated with dark energy, which is widely interpreted as potential energy.
Similarly, the notion of negative effective mass, supported by observational evidence, introduces the mechanical concept of apparent mass in contexts such as gravitational potential or motion, which is also negative and considered potential energy. This concept extends classical mechanics, based on its foundational principles, by recognizing the similarities between dark energy and the generated apparent mass as manifestations of negative potential energy.
Negative Effective Mass and Apparent Mass in Extended Classical Mechanics
Apparent Mass in Motion:
Apparent mass in motion in extended classical mechanics: The force F applied to an object results in acceleration a according to the equation F = Mᴍ·a. Here, acceleration a is inversely proportional to mass Mᴍ (i.e., a ∝ 1/Mᴍ). When a force acts on the object, an increase in acceleration leads to an apparent reduction in mass, characterized as negative apparent mass (Mᵃᵖᵖ<0). Consequently, the effective mass Mᵉᶠᶠ is given by:
Mᵉᶠᶠ = Mᴍ + (−Mᵃᵖᵖ).
where Mᴍ is the matter mass and −Mᵃᵖᵖ represents the negative apparent mass. This effective mass Mᵉᶠᶠ influences the effective acceleration aᵉᶠᶠ.
​Consistency in Negative Apparent Mass:
The concept of negative apparent mass (Mᵃᵖᵖ<0) aligns with the dark energy effective mass as discussed in A. D. Chernin et al.'s research paper, "Dark Energy and the Structure of the Coma Cluster of Galaxies." Their study presents the relationship:
Mɢ = Mᴍ + Mᴅᴇ
where Mɢ denotes the gravitating mass, Mᴍ the matter mass, and Mᴅᴇ the dark energy effective mass.
In our study, this relationship is reinterpreted as:
Mɢ = Mᴍ + (−Mᵃᵖᵖ)
where Mɢ represents the gravitational mass, Mᴍ the inertial mass, and −Mᵃᵖᵖ the negative apparent mass. This reinterpretation maintains consistency with the concept of negative effective mass and its implications in extended classical mechanics.
Application of Apparent Mass in Motion:
In the framework of extended classical mechanics, the concept of apparent mass introduces an extended equation of motion:
F = (Mᴍ −Mᵃᵖᵖ)·aᵉᶠᶠ
F = (Mᵉᶠᶠ)·aᵉᶠᶠ
where Mᵉᶠᶠ = Mᴍ + (−Mᵃᵖᵖ). Here, Mᵉᶠᶠ represents the combination of matter mass Mᴍ and the negative apparent mass −Mᵃᵖᵖ.
When a force F is applied, it directly affects the effective acceleration aᵉᶠᶠ. Conversely, the effective acceleration aᵉᶠᶠ inversely affects the effective mass Mᵉᶠᶠ.
Since acceleration a is inversely proportional to the matter mass Mᴍ (i.e., a ∝ 1/Mᴍ), increased acceleration leads to an apparent reduction in the matter mass, resulting in an apparent mass Mᵃᵖᵖ <0. Consequently, the effective mass is given by:
Mᵉᶠᶠ = Mᴍ + (−Mᵃᵖᵖ)
Thus, Mᵉᶠᶠ is influenced by the effective acceleration aᵉᶠᶠ. In other words, both the matter mass Mᴍ and the negative apparent mass −Mᵃᵖᵖ are influenced by the effective acceleration aᵉᶠᶠ.
Application of Apparent Mass in Gravitational Potential:
In the context of extended classical mechanics, the concept of apparent mass modifies the traditional equation for gravitational potential:
In classical mechanics, the equation is:
F𝑔 = G·(m₁·m₂)/r²
When mass m₁ is elevated to a distance r, the concept of apparent mass Mᵃᵖᵖ (which is negative) alters the effective mass Mᵉᶠᶠ. This apparent mass Mᵃᵖᵖ reduces the effective mass, resulting in an effective mass Mᵉᶠᶠ that combines the matter mass m₁ and the negative apparent mass −Mᵃᵖᵖ<0. In this framework, Mᵉᶠᶠ aligns with the dark energy effective mass Mᴅᴇ as described by A. D. Chernin et al., with the equation:
Mɢ = Mᴍ + Mᴅᴇ
which can be reinterpreted as:
Mɢ = Mᴍ + (−Mᵃᵖᵖ)
Mɢ = Mᵉᶠᶠ
Here, Mɢ represents the gravitating mass, Mᴍ is the matter mass, and −Mᵃᵖᵖ denotes the negative apparent mass.
Substituting for Mᵃᵖᵖ, the gravitational force equation becomes:
F𝑔 = G·(Mɢ·M₂)/r², where Mɢ = Mᵉᶠᶠ = Mᴍ + (−Mᵃᵖᵖ)
This equation is consistent with Mɢ = Mᴍ + Mᴅᴇ. Notably, when the magnitude of −Mᵃᵖᵖ exceeds Mᴍ, Mɢ becomes negative.
This approach represents the negative apparent mass (−Mᵃᵖᵖ) and the negative effective mass of dark energy (Mᴅᴇ) as arising from motion and gravitational dynamics, rather than as substances as commonly thought. This reinterpretation of apparent mass aligns with the principles of extended classical mechanics and provides a coherent framework for understanding gravitational interactions.
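A short illustrative sketch of the substitution described above, with entirely hypothetical masses and separation, showing how the sign of Mɢ (and hence of F𝑔) flips once the magnitude of −Mᵃᵖᵖ exceeds Mᴍ:

```python
# Substituting M_G = M_M + (-M_app) into F_g = G * M_G * M2 / r^2, as described above.
# All masses and the separation r are hypothetical illustration values.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(M_M, M_app, M2, r):
    M_G = M_M + (-M_app)              # gravitating mass with negative apparent mass
    return M_G, G * M_G * M2 / r**2   # (gravitating mass, force)

# |M_app| < M_M: M_G > 0, ordinary attraction
print(gravitational_force(M_M=5.0e24, M_app=1.0e24, M2=1.0e3, r=1.0e7))
# |M_app| > M_M: M_G < 0 and the force changes sign (the repulsive regime noted above)
print(gravitational_force(M_M=5.0e24, M_app=8.0e24, M2=1.0e3, r=1.0e7))
```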
References:
[1] Chernin, A. D., Bisnovatyi-Kogan, G. S., Teerikorpi, P., Valtonen, M. J., Byrd, G. G., & Merafina, M. (2013). Dark energy and the structure of the Coma cluster of galaxies. Astronomy and Astrophysics, 553, A101. https://doi.org/10.1051/0004-6361/201220781
[2] Thakur, S. N. (2024). Extended Classical Mechanics: Vol-1 - Equivalence Principle, Mass and Gravitational Dynamics. https://doi.org/10.20944/preprints202409.1190.v3
[3] Thakur, S. N. (2024). Photon dynamics in extended classical mechanics: Effective mass, negative inertia, momentum exchange and analogies with dark energy. https://doi.org/10.20944/preprints202411.1797.v1
[4] Thakur, S. N. (2024). A symmetry and conservation framework for photon energy interactions in gravitational fields. https://doi.org/10.20944/preprints202411.0956.v1
[5] Thakur, S. N. (2024). Photon interactions with external gravitational fields: True cause of gravitational lensing. https://doi.org/10.20944/preprints202410.2121.v1
Best regards
Soumendra Nath Thakur
  • asked a question related to Fundamental Physics
Question
12 answers
Zero stands for emptiness, for nothing, and yet it is considered to be one of the greatest achievements of humankind. It took a long stretch of human history for it to be recognized and appreciated [1][4]. In the history of mathematics considerable confusion exists as to the origin of zero. There can be no unique answer to the query, "Who first discovered the zero?", for this may refer to any one of several related but distinct historical issues† [2]. A very explicit use of the concept of zero was made by Aristotle, who, speaking of motion in a vacuum, said "there is no ratio in which the void is exceeded by body, as there is no ratio of zero to a number” [3][2]*. He apparently recognized “the Special Status of Zero among the Natural Numbers.”
If we believe that zero is explicitly expressed mathematically, whether in number theory, algebra, or set theory, is the meaning of zero also clear and unified in the different branches of physics? Or can it have multiple meanings? Such as:
1) Annihilation: When positive and negative particles meet [5][6], e⁺ + e⁻ → γ + γ′, the two charges disappear, the two masses disappear, and only the energy neither disappears nor increases; the momentum of the two electrons, which was 0, now becomes the positive and negative momentum of the two photons. How many kinds of zeros exist here, and what does each mean?
2) Double-slit interference: What exactly is expressed at the dark fringes of the interference pattern in Young's double-slit experiment, and how should it actually be understood? For light waves, it can be understood as the field cancelling due to destructive interference and presenting itself as zero (a sketch of the standard formula follows after this list). For single photons and single electrons [7], physics considers it to be a probabilistic, statistical property [12]. This means that in practice, at the dark fringes of the theoretical calculation, the field will also likely not be exactly zero‡.
3) Destructive interference: In the Mach–Zehnder interferometer [8], there has always been the question of where the energy in the destructive-interference arm went [9]. There seems to be an energy cancellation occurring.
4) Anti-reflection coatings: By coating [10], the reflected waves are completely cancelled out in order to increase transmission.
5) Nodes of standing waves: In an optical resonant cavity (laser resonator), "the resonator cavity's path length determines the longitudinal resonator modes, or electric field distributions which cause a standing wave in the cavity" [13]. The amplitude of the electromagnetic field at a node of the standing wave is zero, but we cannot say that the energy and momentum at this point are zero, which would violate the uncertainty principle.
6) Laser beam modes: The simplest type of laser resonator modes are Hermite-Gaussian modes, also known as transverse electromagnetic modes (TEMnm), in which the electric field profile can be approximated by the product of a Gaussian function with a Hermite polynomial. In TEMnm, n is the number of nodes in the x direction and m is the number of nodes in the y direction [14].
7) Nodes of the wave function: Nodes and ends of the wave function Ψ in a square potential well have zero probability in quantum mechanics‡ [11].
8) Pauli exclusion principle: Fermion wave functions are antisymmetric, Ψ(q₁,q₂) = −Ψ(q₂,q₁), so for q₁ = q₂ we have Ψ(q,q) = 0. Does a wave function of zero here mean that the "field" is not allowed to exist, or, according to the Copenhagen interpretation, that the wave function has zero probability of appearing here?
9) Photon: zero mass, zero charge.
10) Absolute vacuum: Can it be defined as zero-energy space?
11) Absolute temperature 0 K: Is the entire physical world, except for photons, defined as a zero-energy state?
12) Perfect superconductor: "The three 'big zeros' of superconductivity (zero resistance, zero induction and zero entropy) have equal weight and grow from a single root: quantization of the angular momentum of paired electrons" [15].
13) ......
Doesn't it violate mathematical principles if we interpret the meaning of zeros in physics according to our needs? If we regard every zero as energy not existing, or not being allowed to exist there, does that mean energy must have the same expression in every case? Otherwise, we cannot find a unified explanation.
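Regarding item 2) in the list above, the idealised classical statement can be made concrete with the standard two-slit intensity formula I(θ) = 4I₀cos²(πd·sinθ/λ), which is exactly zero at the dark fringes; the sketch below (example wavelength and slit separation of my own choosing) only illustrates that textbook result and does not address the single-photon statistical reading raised in the item:

```python
# Ideal two-slit intensity I(theta) = 4*I0*cos^2(pi*d*sin(theta)/lambda):
# at a dark fringe the idealised classical field amplitude cancels exactly,
# so the predicted intensity there is literally zero.

import math

lam = 500e-9   # wavelength, m -- example value
d = 50e-6      # slit separation, m -- example value
I0 = 1.0       # single-slit intensity, arbitrary units

def intensity(theta):
    return 4 * I0 * math.cos(math.pi * d * math.sin(theta) / lam) ** 2

theta_dark = math.asin(0.5 * lam / d)   # first dark fringe: d*sin(theta) = lambda/2
theta_bright = math.asin(lam / d)       # first bright fringe: d*sin(theta) = lambda

print(intensity(theta_dark))    # ~0 (limited only by floating-point rounding)
print(intensity(theta_bright))  # ~4.0
```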
---------------------------------------------
Notes
* Ratio was a symmetrical expression particularly favored by the ancient Greeks.
† Symbols(0,...), words (zero, null, void, empty, none, ...), etc..
‡ Note in particular that a probability is realized only statistically, not as an exact value. For example, a theoretical probability of 0.5 may occur in physical reality as 0.49999999999, and it is almost never possible to obtain an exact probability value such as 0.5. This means that there is no probability value that never occurs, even if the probability is theoretically 0. It is against the principles of probability to assume that a probability of zero means the event will never occur in reality.
---------------------------------------------
References
[1] Nieder, A. (2016). "Representing something out of nothing: The dawning of zero." Trends in Cognitive Sciences 20(11): 830-842.
[2] Boyer, C. B. (1944). "Zero: The symbol, the concept, the number." National Mathematics Magazine 18(8): 323-330.
[3] the Physics of Aristotle;
[4] Boyer, C. B. (1944). "Zero: The symbol, the concept, the number." National Mathematics Magazine 18(8): 323-330.
[7] Davisson, C. and L. H. Germer (1927). "Diffraction of Electrons by a Crystal of Nickel." Physical Review 30(6): 705-740.
[8] Mach, L., L. Zehnder and C. Clark (2017). The Interferometers of Zehnder and Mach.
[9] Zetie, K., S. Adams and R. Tocknell (2000). "How does a Mach-Zehnder interferometer work?" Physics Education 35(1): 46.
[11] Chen, J. (2023). From Particle-in-a-Box Thought Experiment to a Complete Quantum Theory? -Version 22.
[12] Born, M. (1955). "Statistical Interpretation of Quantum Mechanics." Science 122(3172): 675-679.
[13]
[15] Kozhevnikov, V. (2021). "Meissner Effect: History of Development and Novel Aspects." Journal of Superconductivity and Novel Magnetism 34(8): 1979-2009.
Relevant answer
Answer
Let me continue a bit.
So, in principle, there could be some "absolute vacuum", i.e. an empty "Information" Set; however, that is fundamentally impossible: information simply cannot be non-existent, and so the Set exists and is non-empty always, for an infinitely long time, having no Beginning and no End;
- only concrete empty sets, which relate to concrete informational patterns or sets of patterns, can exist.
In this thread a concrete informational system, the set of informational patterns/systems "material objects", i.e. "Matter", is considered, and in this case the general empty sets, or "vacuums", can exist as:
- an "absolute vacuum", which lasted until the conserve can "There is no informational system 'Matter' in the Set" was opened by some energy, say 13.8 billion years ago, when the first FLE of Matter was really created, and
- a "matter vacuum", when only the first version of the FLE-lattice had been created, which lasted until the inflation epoch; i.e. the quite material objects "FLEs" existed, but "matter", i.e. the huge number of particles observed now, did not.
That is all; after a correspondingly huge portion of energy was pumped into the first version of the FLE-lattice, and the particles, which are certain disturbances in the lattice, were created, no "vacuums" exist any longer;
- including that there is no such thing as a "vacuum" in which "virtual particles and fields" are created and annihilated; in Matter only real particles, and real force mediators/real fields, which are constantly and always created by their charges, exist.
Cheers
  • asked a question related to Fundamental Physics
Question
11 answers
"How big is the proton?" [1] We can similarly ask, "How big is the electron?" "How big is the photon?" CODATA gives the answer [2]: proton rms charge radius rₚ = 8.41×10⁻¹⁶ m; classical electron radius rₑ = 2.81×10⁻¹⁵ m [6]. However, over a century after its discovery, the proton still keeps physicists busy understanding its basic properties: its radius, mass, stability and the origin of its spin [1][4][7]. Physics still believes that there is a 'proton-radius puzzle' [3][4], and does not consider that the size of a photon is related to its wavelength.
Geometrically the radius of a circle is clearly defined, and if an elementary particle is regarded as an energy packet, which is unquestionably the case, whether or not it can be described by a wave function, can its energy have a clear boundary like a geometrical shape? Obviously the classical electron radius is not a clear conceptual boundary in the field picture, because its electric field energy always extends outward. When physics uses the term 'charge radius', what does it mean when mapped to geometry? If there really is a spherical charge [8][9], how is it maintained and formed*?
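For context on the value quoted above, the classical electron radius cited from [6] follows from equating the electrostatic energy e²/(4πε₀r) of the charge with the electron's rest energy mₑc²; a quick evaluation with standard constants (my own snippet, not from the question):

```python
# Classical electron radius r_e = e^2 / (4*pi*eps0*m_e*c^2): the radius at which
# the electrostatic energy of charge e equals the electron's rest energy.

import math

e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
m_e = 9.1093837015e-31    # electron mass, kg
c = 299792458.0           # speed of light, m/s

r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(f"r_e = {r_e:.3e} m")   # ~2.818e-15 m, matching the ~2.81e-15 m quoted above
```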
----------------------------------------
Notes:
*“Now if we have a sphere of charge, the electrical forces are all repulsive and an electron would tend to fly apart. Because the system has unbalanced forces, we can get all kinds of errors in the laws relating energy and momentum.” [Feynman Lecture C28]
----------------------------------------
References:
[1] Editorial. (2021). Proton puzzles. Nature Reviews Physics, 3(1), 1-1. https://doi.org/10.1038/s42254-020-00268-0
[2] Tiesinga, E. (2021). CODATA recommended values of the fundamental physical constants: 2018.
[3] Carlson, C. E. (2015). The proton radius puzzle. Progress in Particle and Nuclear Physics, 82, 59-77. https://doi.org/https://doi.org/10.1016/j.ppnp.2015.01.002
[4] Gao, H., Liu, T., Peng, C., Ye, Z., & Zhao, Z. (2015). Proton remains puzzling. The Universe, 3(2).
[5] Karr, J.-P., Marchand, D., & Voutier, E. (2020). The proton size. Nature Reviews Physics, 2(11), 601-614. https://doi.org/10.1038/s42254-020-0229-x
[6] "also called the Compton radius, by equating the electrostatic potential energy of a sphere of charge e and radius with the rest energy of the electron"; https://scienceworld.wolfram.com/physics/ElectronRadius.html
[8] What is an electric charge? Can it exist apart from electrons? Would it be an effect? https://www.researchgate.net/post/NO44_What_is_an_electric_charge_Can_it_exist_apart_from_electrons_Would_it_be_an_effect
[9] Phenomena Related to Electric Charge, and Remembering Nobel Laureate T. D. Lee; https://www.researchgate.net/post/NO46Phenomena_Related_to_Electric_Chargeand_Remembering_Nobel_Laureate_T_D_Lee
Relevant answer
Answer
More precisely, a proton is point-like when its constituents can't be resolved, which occurs at energies less than about 1 GeV. At higher energies it turns out that its constituents can be resolved and that they are point-like. Up to the energies probed at the LHC, quarks and leptons appear point-like; if they do have constituents, these can't be resolved.
  • asked a question related to Fundamental Physics
Question
30 answers
Paradox 1 - The Laws of Physics Invalidate Themselves, When They Enter the Singularity Controlled by Themselves.
Paradox 2 - The Collapse of Matter Caused by the Law of Gravity Will Eventually Destroy the Law of Gravity.
The laws of physics dominate the structure and behavior of matter. Different levels of material structure correspond to different laws of physics. According to reductionism, when we require the structure of matter to be reduced, the corresponding laws of physics are also reduced. Different levels of physical laws correspond to different physical equations, many of which have singularities. Higher-level equations may enter singularities when forced by strong external conditions (pressure, temperature, etc.), resulting in phase transitions in which, for example, lattice order and magnetic properties are destroyed. Essentially, the higher-level physics equations have failed and given way to lower-level physics equations. Obviously there should exist a lowest-level physics equation which cannot be reduced further; it would be the last line of defense after all the higher-level equations have failed, and it must not be allowed to enter the singularity. This equation is the ultimate equation. The equation corresponding to the Hawking-Penrose spacetime singularity [1] should be such an equation.
We can think of the physical equations as a description of a dynamical system because they are all direct or indirect expressions of energy-momentum quantities, and we have no evidence that it is possible to completely detach any physical parameter, macroscopic or microscopic, from the Lagrangian and Hamiltonian.
Gravitational collapse causes black holes, which have singularities [2]. What characterizes a singularity? Any finite parameter before entering a spacetime singularity becomes infinite after entering the singularity. Information becomes infinite, energy-momentum becomes infinite, but all material properties disappear completely. A dynamical equation transitioning from finite to infinite is impossible, because there is no infinite source of dynamics, and the Uncertainty Principle would also prevent this singularity from being reached*. Therefore, while there must be a singularity according to the Singularity Theorem, this singularity must be inaccessible, or must never actually be entered. Before entering this singularity, a sufficiently long period of time must elapse, waiting for the conditions that would destroy it, such as the collision of two black holes.
"Most of these singularities, however, can usually be resolved by pointing out that the equations are missing some factor, or noting the physical impossibility of ever reaching the singularity point. In other words, they are probably not 'real'." [3] We believe this statement is correct. Nature will not by itself destroy the causality it has established.
-----------------------------------------------
Notes
* According to the uncertainty principle, finite energy and momentum cannot be concentrated at a single point in space-time.
-----------------------------------------------
References
[1] Hawking, S. (1966). "Singularities and the geometry of spacetime." The European Physical Journal H 39(4): 413-503.
[2] Hawking, S. W. and R. Penrose (1970). "The singularities of gravitational collapse and cosmology." Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 314(1519): 529-548.
==================================================
Supplement 2023-1-14
Structural Logic Paradox
Russell once wrote a letter to Ludwig Wittgenstein while visiting China (1920 - 1921) in which he said "I am living in a Chinese house built around a courtyard *......" [1]. The phrase would probably mean to the West, "I live in a house built around the back of a yard." Russell was a logician, but there is clearly a logical problem with this expression, since the yard is determined by the house that is built, not vice versa. The same expression is reflected in a very famous poem, "A Moonlit Night On The Spring River", from the Tang Dynasty (618 AD - 907 AD) in China. One of the lines is: "We do not know tonight for whom she sheds her ray, But hear the river say to its water adieu." The problem here is that the river exists because of the water, and without the water there would be no river. Therefore, there is no logic in the river saying goodbye to its water. There are, I believe, many more examples of this kind, and perhaps we can reduce these problems to a structural logic paradox †.
Ignoring the above logical problems has no effect on literature, but it should become a serious issue in physics. The biggest obstacle in current physics is that we do not know the structure of elementary particles and black holes. Renormalization is an effective technique, but it offers a substitute result that masks the internal structure and can only be considered a stopgap tool. Hawking and Penrose proved the singularity theorem, but no clear view has been developed on how to treat singularities. It seems to us that this is the same problem as the structural logic described above. Without black holes (and perhaps elementary particles) there would be no singularities, and (virtual) singularities accompany black holes. Since the black hole and the singularity come together, how can a black hole that does not collapse today because of its singularity collapse tomorrow because of the same singularity? Do courtyards make houses disappear? Does a river make its water disappear? This is the realistic explanation of the "paradox" in the subtitle of this question. The laws of physics do not destroy themselves.
-------------------------------------------------
Notes
* One of the typical architectural patterns in Beijing, China, is the "quadrangle", which is usually a square open space with houses built along the perimeter, and when the houses are built, a courtyard is formed in the center. Thus, before the houses were built, it was the field, not the courtyard. The yard must have been formed after the house was built, even though that center open space did not substantially change before or after the building, but the concept changed.
† I hope some logician or philosopher will point out the impropriety.
-------------------------------------------------
References
[1] Monk, R. (1990). Ludwig Wittgenstein: the duty of genius. London: J. Cape. Morgan, G. (Chinese version @2011)
Relevant answer
Answer
Agree. It is a math problem, not a real problem in the universe. Anything infinite destroys all conservation laws in the universe. The center of a black hole should be totally hollow instead of a singularity, because the angular momentum has zero probability of being exactly zero. When matter has angular momentum, it cannot settle at a single point.
  • asked a question related to Fundamental Physics
Question
12 answers
Please prove me right or wrong.
I have recently published a paper [1] in which I conclusively prove that the Stoney Mass, invented by George Stoney in 1881 and covered by a shroud of mystery for over 140 years, does not represent any physical mass but has a one-to-one correspondence with the electron charge. The rationale for this rather unusual claim is the effect of the deliberate choices made in establishing the SI base unit of mass (kg) and the derived unit of electric charge (coulomb: C = As). They are inherently incommensurable in the SI, as well as in CGS units.
The commensurability of physical quantities may, however, depend on the definition of base units in a given system. The experimental "Rationalized Metric System" (RMS) developed in [1] eliminates the SI mass and charge units (kg and As, respectively), which both become derived units with dimensions of [m^3 s^-2]. The RMS ratio of the electron charge to the electron mass becomes non-dimensional and equal to 2.04098×10^21, which is the square root of the electric-to-gravitational force ratio for the electron.
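As a rough numerical cross-check of the quoted ratio, here is a minimal sketch using approximate CODATA values (the constants and the comparison below are my own illustration, not taken from the paper):

```python
import math

# Approximate CODATA values (assumed here purely for illustration)
e    = 1.602176634e-19    # elementary charge, C
m_e  = 9.1093837015e-31   # electron mass, kg
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
G    = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2

# Electric-to-gravitational force ratio between two electrons
# (the separation cancels, so the ratio is dimensionless)
force_ratio = (e**2 / (4 * math.pi * eps0)) / (G * m_e**2)
print(f"sqrt(F_e/F_g)    = {math.sqrt(force_ratio):.4e}")   # ~2.041e21

# Stoney mass in SI form, m_S = e / sqrt(4*pi*eps0*G)
m_stoney = e / math.sqrt(4 * math.pi * eps0 * G)
print(f"Stoney mass (SI) = {m_stoney:.3e} kg")              # ~1.86e-9 kg
```

The square root of the force ratio indeed comes out at about 2.041×10^21, matching the RMS figure quoted above to the precision of the constants used.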
Although the proof is quite simple and straightforward, I keep meeting people who disagree with my claim but cannot come up with a rational argument.
I would like your opinion and arguments for or against. This could be a rewarding scientific discussion, given the importance of this claim for the history of science and beyond.
The short proof is in the attached pdf and the full context in my paper.
====================================================
As a result of discussions and critical analysis, I have summarised my position a few answers below, but I have decided to consolidate the most recent version here as a supplement to the attached pdf.
I intended to improve my arguments, which would have increased the level of complexity. However, I found a shorter proof that the Stoney Mass has no independent physical existence.
Assumptions:
  • Stoney defined the mass as an expression based on a pure dimensional-analysis relationship, without any implied or explicit claims about its ontological status.
  • Based on Buckingham's assertion, physical laws do not depend on the choice of base units.
  • The system of units [m s] (RMS) can validly replace the system: [kg m s As] as described in [1]
By examining different systems of units and their corresponding expressions of the Stoney mass, we can shed light on its physical existence. When we consider the CGS and SI systems, we find that both express the Stoney mass in their respective base units of mass (grams or kilograms). However, if we use a different system of units, such as the Rationalized Metric System (RMS) [1], we find that there are no equivalent RMS dimensional constants, as in the SI Stoney formula, to combine with the electron charge to produce a mass value. The Stoney Mass expression cannot be constructed in RMS.
In simpler terms, the Stoney mass is a consequence of the arbitrarily chosen base units for mass and current (and consequently charge), leading to what is known as the incommensurability of units. This demonstrates that the Stoney mass is not observable or experimentally meaningful outside the chosen context of CGS or SI units.
Thus it is evident that the Stoney mass lacks a physical manifestation beyond its theoretical formulation in specific unit systems. It exists as something of an artifact caused by the incommensurability between the base units of mass and charge. Note that, in contrast, the SI/CGS expression of the Planck mass does not vanish under the conversion to RMS units; a dimensional expression is still retained, albeit a simpler one.
When we dig deeper into the fundamental interactions and physical laws, we find no empirical evidence or measurable effects associated with the Stoney mass, reinforcing the understanding that it holds no substantial physical connotation.
The meaning of the Stoney mass in SI or CGS refers to the mass equivalent of the fundamental unit of electron charge in terms of SM rest energy and (possibly) the equivalent finite electric-field energy of the electron.
Relevant answer
Answer
Crafting a Robust Rebuttal to the Critique
I previously explained my position in [1] .
I intended to improve it to make a point, increasing its level of complexity. However, I found a shorter proof that Stoney Mass has no independent physical existence, and this can be typed in this message
Assumptions:
  • Stoney defined the mass as an expression based on a pure dimensional-analysis relationship, without any implied ontological status.
  • Based on Buckingham assertions physical laws do not depend on the choice of base units.
  • The system of units [m s] (RMS) can validly replace the system: [kg m s As] as described in [2]
By examining different systems of units and their corresponding expressions of the Stoney mass, we can shed light on its physical existence. When we consider the CGS and SI systems, we find that both express the Stoney mass in their respective base units of mass (grams or kilograms). However, if we use a different system of units, such as the Rationalized Metric System (RMS) [2], we find that there are no equivalent dimensional constants, as in the SI Stoney formula, to combine with the electron charge and produce a mass value. The Stoney Mass expression cannot be constructed in RMS.
In simpler terms, the Stoney mass is a consequence of the chosen arbitrary base units for mass and charge, leading to what is known as the incommensurability of units. This demonstrates that the Stoney mass is not observable or experimentally meaningful outside of the chosen context of CGS or SI units.
Thus it is evident that the Stoney mass lacks a physical manifestation beyond its theoretical formulation in specific unit systems. It exists as something of an artifact caused by the incommensurability between the base units of mass and charge. Note that the expression for the Planck mass does not vanish under the conversion to RMS units; a dimensional expression is still retained, albeit a simpler one.
When we dig deeper into the fundamental interactions and physical laws, we find no empirical evidence or measurable effects associated with the Stoney mass, reinforcing the understanding that it holds no substantial physical connotation.
The meaning of the Stoney mass in SI or CGS refers to the mass equivalent of the fundamental unit of electron charge in terms of SM rest energy and (possibly) the equivalent finite electric-field energy of the electron.
  • asked a question related to Fundamental Physics
Question
34 answers
The Introduction of complex numbers in physics was at first superficial but now they seem increasingly fundamental. Are we missing their true interpretation? What do you think?
Relevant answer
Answer
Dear Prof. F. Barzi, yes I can give an example:
In unconventional superconductors, the elastic scattering cross-section formalism to analyze the phenomenon has a complex solution, but the energy is self-consistent, and the variables change very fast.
It is quite complicated to find a numerical solution, I have worked in the field for 24 years now.
Best Regards.
  • asked a question related to Fundamental Physics
Question
95 answers
Recently I asked a question related to QCD, and in response the reliability of QCD itself was challenged by many researchers.
It left me with the question: what exactly is fundamental in physics? Can we rely entirely on the two equations given by Einstein? If not, then what can we regard as fundamental in physics?
Relevant answer
Answer
“Are the mass-energy equation and the energy-momentum relation (E^2 = (mc^2)^2 + (pc)^2) fundamental?”
The answer depends on the answer to another question. Do superluminal speeds of matter exist? Recent space exploration shows that such objects exist.
  • asked a question related to Fundamental Physics
Question
131 answers
Should this set of Constants Originate in the Equations that Dominate the Existence and Evolution of Nature?
There are over 300 physical constants in physics [1][2] (c, h, G, e, α, me, mp, θ, μ0, g, H0, Λ, ...), with different definitions [3], functions and statuses; some of them are measured, some are derived [4] and some are conjectured [5]. There is a recursive relationship between physical constants, capable of establishing, from a few constants, the dimensions of the whole of physics [6], such as the SI units. There is a close correlation between physical constants and the laws of physics. Lévy-Leblond said that any universal fundamental constant may be described as a concept synthesizer, expressing the unification of two previously unconnected physical concepts into a single one of extended validity [7]; an example is the mass-energy equation E = mc^2. Physics is skeptical that many constants are truly constant [8], even including the invariance of the speed of light. But since "letting a constant vary implies replacing it by a dynamical field consistently" [9], in order to avoid being trapped in a causal loop we have to admit that there is a set of fundamental constants that are eternally invariant*.
So which physical constants are the most fundamental natural constants? Are they the ones that exhibit invariance: Lorentz invariance, gauge invariance, diffeomorphism invariance [10]? Planck's 'units of measurement' [11] combine the three constants: the Planck constant h, the speed of light c, and the gravitational constant G. "These quantities will retain their natural meaning for as long as the laws of gravity, the propagation of light in vacuum and the two principles of the theory of heat hold, and, even if measured by different intelligences and using different methods, must always remain the same."[12] This should be the most compelling pointer to the proper provenance of these constants: should they be the coefficients of some extremely important equations? [13]
-------------------------------
Notes
* They are eternal and unchanging, both at the micro and macro level, at any stage of the evolution of the universe, even at the Big Bang or the Big Crunch.
-------------------------------
References
[1] Group, P. D., P. Zyla, R. Barnett, J. Beringer, O. Dahl, D. Dwyer, D. Groom, C.-J. Lin, K. Lugovsky and E. Pianori (2020). "Review of particle physics." Progress of Theoretical and Experimental Physics 2020(8): 083C001.
[2] Tiesinga, E. (2021). "CODATA recommended values of the fundamental physical constants: 2018."
[4] DuMond, J. W. (1940). "A Complete Isometric Consistency Chart for the Natural Constants e, m and h." Physical Review 58(5): 457.
[5] Carroll, S. M., W. H. Press and E. L. Turner (1992). "The cosmological constant." Annual review of astronomy and astrophysics 30: 499-542.
[6] Martin-Delgado, M. A. (2020). "The new SI and the fundamental constants of nature." European Journal of Physics 41(6): 063003.
[7] Lévy-Leblond, J.-M. (1977, 2019). "On the Conceptual Nature of the Physical Constants". The Reform of the International System of Units (SI), Philosophical, Historical and Sociological Issues.
[8] Dirac, P. A. M. (1979). "The large numbers hypothesis and the Einstein theory of gravitation " Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 365.1720: 19-30.
Webb, J., M. Murphy, V. Flambaum, V. Dzuba, J. Barrow, C. Churchill, J. Prochaska and A. Wolfe (2001). "Further evidence for cosmological evolution of the fine structure constant." Physical Review Letters 87(9): 091301.
[9] Ellis, G. F. and J.-P. Uzan (2005). "c is the speed of light, isn't it?" American journal of physics 73(3): 240-247.
[10] Utiyama, R. (1956). "Invariant theoretical interpretation of interaction." Physical Review 101(5): 1597.
Gross, D. J. (1995). "Symmetry in physics: Wigner's legacy." Physics Today 48(12): 46-50.
[11] Stoney, G. J. (1881). "LII. On the physical units of nature." The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 11(69): 381-390.
Meschini, D. (2007). "Planck-Scale Physics: Facts and Beliefs." Foundations of Science 12(4): 277-294.
[12] Robotti, N. and M. Badino (2001). "Max Planck and the 'Constants of Nature'." Annals of Science 58(2): 137-162.
Relevant answer
Answer
Valentyn Nastasenko Sorry but I'm only reading now. In discussions where unproven science comes into play, everyone must be free to express their opinion without imposing their truth. This is my thought.
  • asked a question related to Fundamental Physics
Question
23 answers
Can Physical Constants Which Are Obtained with Combinations of Fundamental Physical Constants Have a More Fundamental Nature?
Planck Scales (Planck's 'units of measurement') are different combinations of the three physical constants h, c, G, Planck Scales=f(c,h,G):
Planck Time: tp = √(ℏG/c^5) = 5.39×10^-44 s ......(1)
Planck Length: Lp = √(ℏG/c^3) = 1.62×10^-35 m ......(2)
Planck Mass: Mp = √(ℏc/G) = 2.18×10^-8 kg ......(3)
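For readers who wish to reproduce Eqs. (1)-(3) numerically, here is a minimal sketch using approximate CODATA values (the constants below are assumptions of this illustration, not part of the original question):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2

t_p = math.sqrt(hbar * G / c**5)   # Planck time,   Eq. (1): ~5.39e-44 s
L_p = math.sqrt(hbar * G / c**3)   # Planck length, Eq. (2): ~1.62e-35 m
M_p = math.sqrt(hbar * c / G)      # Planck mass,   Eq. (3): ~2.18e-8 kg

print(f"t_p = {t_p:.3e} s, L_p = {L_p:.3e} m, M_p = {M_p:.3e} kg")
```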
“These quantities will retain their natural meaning for as long as the laws of gravity, the propagation of light in vacuum and the two principles of the theory of heat hold, and, even if measured by different intelligences and using different methods, must always remain the same.”[1] Moreover, the possible relation between Mp and the Schwarzschild radius of a black hole, and the possible generalized uncertainty principle [2], make them a basis on which new physics is made to depend [3]. But what exactly is their natural meaning?
However, the physical constants, the speed of light, c, the Planck constant, h, and the gravitational constant, G, are clear, fundamental, and invariant.
c: bounds the relationship between Space and Time, with c = ΔL/Δt, and Lorentz invariance [4];
h: bounds the relationship between Energy and Momentum, with h = E/ν = pλ, and energy-momentum conservation [5][6];
G: bounds the relationship between Space-Time and Energy-Momentum, with the Einstein field equation c^4·Gμν = 8πG·Tμν, and general covariance [7].
The physical constants c, h, G already determine all fundamental physical phenomena‡. So, can the Planck Scales obtained by combining them be even more fundamental than c, h, G themselves? Could it be that the essence of physics is (c, h, G) = f(tp, Lp, Mp), rather than equations (1), (2), (3)? From what physical fact, or what physical imagination, are we supposed to get this notion? Having never seen such an argument, we simply take the Planck scales and use them, while still recognizing the fundamentality of c, h, G. Obviously, the Planck Scales are not fundamental physical constants; they can only be regarded as a kind of 'units of measurement'.
So are they a kind of parameter? According to Eqs. (1)(2)(3), they can be expressed directly in terms of c, h, G, and after the substitution the expression loses any independent meaning.
So are they a principle? If so, what do they express? What kind of behavioral pattern do they embody? The theory of quantum gravity takes them as a "baseline", but only in the order-of-magnitude sense, not as exact numerical values.
Thus, do the Planck time, length and mass, determined entirely by h, c, G, really have unquestionable physical significance?
-----------------------------------------
Notes
‡ Please ignore for the moment the phenomena within the nucleus of the atom, eventually we will understand that they are still determined by these three constants.
-----------------------------------------
References
[1] Robotti, N. and M. Badino (2001). "Max Planck and the 'Constants of Nature'." Annals of Science 58(2): 137-162.
[2] Maggiore, M. (1993). A generalized uncertainty principle in quantum gravity. Physics Letters B, 304(1), 65-69. https://doi.org/https://doi.org/10.1016/0370-2693(93)91401-8
[3] Kiefer, C. (2006). Quantum gravity: general introduction and recent developments. Annalen der Physik, 518(1-2), 129-148.
[4] Einstein, A. (1905). On the electrodynamics of moving bodies. Annalen der Physik, 17(10), 891-921.
[5] Planck, M. (1900). The theory of heat radiation (1914 (Translation) ed., Vol. 144).
[6] Einstein, A. (1917). Physikalisehe Zeitschrift, xviii, p.121
[7] Petruzziello, L. (2020). A dissertation on General Covariance and its application in particle physics. Journal of Physics: Conference Series,
Relevant answer
Answer
The Planck scales, including Planck length, Planck time, Planck mass, Planck temperature, and Planck charge, are a set of physical constants that define scales at which quantum gravitational effects become significant, effectively marking the limits of our current understanding of the universe. These scales arise from fundamental physical constants: the speed of light in a vacuum (c), the gravitational constant (G), and the reduced Planck constant (ħ).
And yes, the gravitational constant G is fundamental as far as our observations and experiments show.
Constants:
In one sense, Planck scales can be considered constants because they are defined through a combination of other fundamental physical constants that do not change. They represent the scales at which gravitational interactions become as strong as quantum effects, leading to a regime where our current theories of physics—quantum mechanics and general relativity—no longer independently suffice.
Parameters:
Planck scales could also be seen as parameters within the broader context of theoretical physics and cosmology. They parameterize the scales at which new physics—potentially including quantum gravity, string theory, or other unified theories—must be invoked to accurately describe phenomena. In theoretical models extending beyond the Standard Model and General Relativity, the exact implications of these scales and their relevance can vary, making them parameters that guide our exploration of the universe at its most fundamental level.
Principles:
Viewing Planck scales as principles is a more abstract approach but equally valid. They embody the principle that there is a fundamental scale of distance, time, mass, and energy beyond which the classical descriptions of space-time and matter cease to apply and a more fundamental theory is required. This perspective invites reflection on the limits of our current theories and the principles that any future theory of quantum gravity must satisfy to seamlessly bridge the gap between quantum mechanics and general relativity.
In summary, Planck scales can be interpreted as constants, parameters, or principles depending on the context of the discussion and the framework within which they are being considered. As constants, they are fixed values derived from fundamental constants of nature. As parameters, they guide theoretical and experimental research into the realms of high energy physics and quantum gravity. As principles, they represent conceptual boundaries that challenge and inspire the development of new physics.
  • asked a question related to Fundamental Physics
Question
17 answers
Is the Fine-Structure Constant the Most Fundamental Physical Constant?
The fine-structure constant is obtained when the classical Bohr atomic model is made relativistic [1][2]: α = e^2/ℏc, a number whose value lies very close to 1/137. α does not correspond to any elementary physical unit, since α is dimensionless. It may also be variable [6][7]*.
Sommerfeld introduced this number as the ratio of the "relativistic boundary momentum" p0 = e^2/c of the electron in the hydrogen atom to the first of the n "quantum momenta" pn = nh/2π. Sommerfeld argued that α = p0/p1 would "play an important role in all succeeding formulas" [5].
There are several usual interpretations of the significance of fine structure constants [3].
a)In 1916, Sommerfeld had gone no further than to suggest that more fundamental physical questions might be tied to this “relational quantity.” In Atomic Structure and Spectral Lines, α was given a somewhat clearer interpretation as the relation of the orbital speed of an electron “in the first Bohr orbit” of the hydrogen atom, to the speed of light [5].
b) α plays an important role in the details of atomic emission, giving the spectrum a "fine structure".
c) The electrodynamic interaction was thought to be a process in which light quanta were exchanged between electrically charged particles, where the fine-structure constant was recognized as a measure of the force of this interaction. [5]
d) α is a combination of the elementary charge e, Planck's constant h, and the speed of light c. These constants represent electromagnetic interaction, quantum mechanics, and relativity, respectively. So does that mean that, if G is ignored (or cancels out), α represents the complete set of physical phenomena?
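As a quick numerical illustration of interpretation d), here is a short sketch in SI form (the constants are approximate CODATA values assumed for this example; α is the same number in any unit system precisely because it is dimensionless):

```python
import math

e    = 1.602176634e-19   # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# SI form of alpha = e^2/(hbar*c) (the Gaussian-units expression quoted above)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = {alpha:.8f} = 1/{1/alpha:.3f}")   # ~0.00729735 = 1/137.036
```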
Questions implicated here :
1) What does the dimensionless nature of α imply? The absence of dimension means that there is no conversion relation. Since it is a coupling relation between photons and electrons, is it a characterization of the consistency between photons and charges?
2) The various interpretations of α are not in conflict with each other, therefore should they be unified?
3) Is our current interpretation of α the ultimate? Is it sufficient?
4) Is α the most fundamental physical constant**? This is similar to Planck Scales in that they are combinations of other fundamental physical constants.
-----------------------------------
Notes
* Spatial Variation and time variability.
‡ Sommerfeld considered α "important constants of nature, characteristic of the constitution of all the elements."[4]
-----------------------------------
References
[3] 张天蓉 (Zhang, T.) (2022). 精细结构常数 [The fine-structure constant]. https://blog.sciencenet.cn/blog-677221-1346617.html
[1] Sommerfeld, A. (1916). The fine structure of Hydrogen and Hydrogen-like lines: Presented at the meeting on 8 January 1916. The European Physical Journal H (2014), 39(2), 179-204.
[2] Sommerfeld, A. (1916). Zur quantentheorie der spektrallinien. Annalen der Physik, 356(17), 1-94.
[4] Heilbron, J. L. (1967). The Kossel-Sommerfeld theory and the ring atom. Isis, 58(4), 450-485.
[5] Eckert, M., & Märker, K. (2004). Arnold Sommerfeld. Wissenschaftlicher Briefwechsel, 2, 1919-1951.
[6] Wilczynska, M. R., Webb, J. K., Bainbridge, M., Barrow, J. D., Bosman, S. E. I., Carswell, R. F., Dąbrowski, M. P., Dumont, V., Lee, C.-C., Leite, A. C., Leszczyńska, K., Liske, J., Marosek, K., Martins, C. J. A. P., Milaković, D., Molaro, P., & Pasquini, L. (2020). Four direct measurements of the fine-structure constant 13 billion years ago. Science Advances, 6(17), eaay9672. https://doi.org/doi:10.1126/sciadv.aay9672
[7] Webb, J. K., King, J. A., Murphy, M. T., Flambaum, V. V., Carswell, R. F., & Bainbridge, M. B. (2011). Indications of a Spatial Variation of the Fine Structure Constant. Physical Review Letters, 107(19), 191101. https://doi.org/10.1103/PhysRevLett.107.191101
Relevant answer
Answer
Dear Vladimir A. Lebedev,
Could you provide me the value of this dimensionless ratio and also the two speeds separately.
  • asked a question related to Fundamental Physics
Question
139 answers
We are not in a position to scientifically accept five fundamental forces.
According to relativity, gravity is not considered a force. Nevertheless, scientists, including those who advocate for relativity, persist in asserting that there are four fundamental forces: gravitational, electromagnetic, strong nuclear, and weak nuclear. Simply put, physicists who celebrate the triumph of relativity decisively undermine its credibility or completeness.
This raises the question: Why haven't physicists reduced the fundamental forces to three?
Relevant answer
Answer
I agree with what you wrote:
You cannot detect any force on the surface of a free-falling object...
My little girl, when she was two years old, gave a better answer to gravity than any official authority on the subject.
There is no force: the nature of the objects and the nature of their space determine the phenomenon that we have called force.
This will soon be announced on my YouTube channel:
Regards,
Laszlo
  • asked a question related to Fundamental Physics
Question
8 answers
Is Uniqueness Their Common and Only Correct Answer?
I. We often say that xx has no physical meaning or has physical meaning. So what is "physical meaning" and what is the meaning of "physical meaning "*?
"As far as the causality principle is concerned, if the physical quantities and their time derivatives are known in the present in any given coordinate system, then a statement will only have physical meaning if it is invariant with respect to those transformations for which the coordinates used are precisely those for which the known present values remain invariant. I claim that all assertions of this kind are uniquely determined for the future as well, i.e., that the causality principle is valid in the following formulation: From knowledge of the fourteen potentials ......, in the present all statements about them in the future follow necessarily and uniquely insofar as they have physical meaning" [1].“Hilbert's answer is based on a more precise formulation of the concept of causality that hinges on the distinction between meaningful and meaningless statements.”[2]
Hawking said [4], "I take the positivist view that a physical theory is nothing more than a mathematical model, and it is pointless to ask whether it corresponds to reality. All one can seek is that its predictions agree with observations."
Is there no difference between physics and Mathematics? We believe that the difference between physics and mathematics lies in the fact that physics must have a physical meaning, whereas mathematics does not have to. Mathematics can be said to have a physical meaning only if it finds a corresponding expression in physics.
II. We often say, restore naturalness, preserve naturalness, the degree of unnaturalness, Higgs naturalness problem, structural naturalness, etc., so what is naturalness or unnaturalness?
“There are two fundamental concepts that enter the formulation of the naturalness criterion: symmetry and effective theories. Both concepts have played a pivotal role in the reductionist approach that has successfully led to the understanding of fundamental forces through the Standard Model. ” [6]
Judging naturalness by symmetry is a good criterion; symmetry is the only outcome of choosing stability, and there seems to be nothing lacking in it. But using effective theories as another criterion must be incomplete, because truncation obscures some of the most important details.
III. We often say that "The greatest truths are the simplest"(大道至简†), so is there a standard for judging the simplest?
"Einstein was firmly convinced that all forces must have an ultimate unified description and he even speculated on the uniqueness of this fundamental theory, whose parameters are fixed in the only possible consistent way, with no deformations allowed: 'What really interests me is whether God had any choice in the creation of the world; that is, whether the necessity of logical simplicity leaves any freedom at all' ”[6]
When God created the world, there would not have been another option. The absolute matching of the physical world with the mathematical world shows that, as long as mathematics is unique, physics must be equally unique. The physical world can only be an automatic emulator of the mathematical world, similar to a cellular automaton.
It is clear that consensus is still a distant goal, and there will be no agreement on any of the following issues at this time:
1) Should there be a precise and uniform definition of having physical meaning? Does the absence of physical meaning mean that there is no corresponding physical reality?
2) Are all concepts in modern physics physically meaningful? For example, probabilistic interpretation of wave functions, superposition states, negative energy seas, spacetime singularities, finite and unbounded, and so on.
3) "Is naturalness a good guiding principle?"[3] "Does nature respect the naturalness criterion?"[6]
4) In physics, is simplicity in essence uniqueness? Is uniqueness a necessary sign of correctness‡?
---------------------------------------------------------
Notes:
* xx wrote a book, "The Meaning of Meaning", which Wittgenstein rated poorly, but Russell thought otherwise and gave it a positive review instead. Wittgenstein thought Russell was merely trying to help the author sell the book and was no longer being serious [5]. If one can write about the Meaning of Meaning, then one can follow with the Meaning of the Meaning of Meaning. In that case, how does one ever arrive at meaning? It is the same as causality; there must exist an ultimate meaning which cannot be pursued any further.
‡ For example, the Shortest Path Principle, Einstein's field equation Gµν=k*Tµν, all embody the idea that uniqueness is correctness (excluding the ultimate interpretation of space-time).
† “万物之始,大道至简,衍化至繁。”At the beginning of all things, the Tao is simple; later on, it evolves into prosperous and complexity. Similar to Leonardo Da Vinci,"Simplicity is the ultimate sophistication." However, the provenance of many of the quotes is dubious.
------------------------------
References:
[1] Rowe, D. E. (2019). Emmy Noether on energy conservation in general relativity. arXiv preprint arXiv:1912.03269.
[2] Sauer, T., & Majer, U. (2009). David Hilbert's Lectures on the Foundations of Physics 1915-1927: Relativity, Quantum Theory and Epistemology. Springer.
[3] Giudice, G. F. (2013). Naturalness after LHC8. arXiv preprint arXiv:1307.7879.
[4] Hawking, S., & Penrose, R. (2018). The nature of space and time (吴忠超,杜欣欣, Trans.; Chinese ed., Vol. 3). Princeton University Press.
[5] Monk, R. (1990). Ludwig Wittgenstein: the duty of genius. London: J. Cape. Morgan, G. (Chinese @2011)
[6] Giudice, G. F. (2008). Naturally speaking: the naturalness criterion and physics at the LHC. Perspectives on LHC physics, 155-178.
Relevant answer
Answer
Alaya Kouki With respect to „From Nothing you get the theory of Everything.„ well not from nothing (exactly in the human sense of this word) …
… but from simple first order multiplicative base entities like shown within the framework of iSpace theory able to derive value and geometry of constants of nature, that is (e.g only) GoldenRatio iSpaceAmpere being the quantum of Ampere, 1/6961 iSpaceSecond being the quantum of time, and so on) all multiplied up by any arbitrary positive integer to the values we see once lossless (keeping initial integer geometric exactness!) converted back to iSpace-SI based MKS/A-SI lab compatible measurement values (to compare to experimental results of all kind, given no other theoretical corrections have been applied like QCD/QED when involved in such calculation).
But otherwise - indeed - from nothing (but a little bit pre-school multiplicative math).
  • asked a question related to Fundamental Physics
Question
59 answers
Gravitational potential originating from the distant masses of the universe is about 10^8 times larger than the Sun's gravitational potential at the Earth's distance, and yet the latter can keep the Earth in its orbit.
It cannot be excluded that the luminal speed, according to c^2 = 2GMᵤ/Rᵤ, is essentially determined and limited by the gravitational potential of the distant masses (subscript u). Notably, Einstein in 1911 found the light deflection close to the Sun to result from a locally enhanced gravitational potential.
So it also cannot be excluded that the electromagnetic properties of vacuum space, according to 1/(ε₀µ₀) = 2GMᵤ/Rᵤ, are essentially determined by the gravitational potential from the distant masses.
Accidentally or not, it appears noticeable that the potential energy of a mass m at the gravitational potential of the universal masses approximately corresponds to the relativistic energy equivalent E = mc^2.
Finally, a characteristic deceleration observed on rapidly spinning rotors also indicates a possible interaction with distant masses.
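A back-of-the-envelope check of the relation c^2 = 2GMᵤ/Rᵤ (the mass and radius of the observable universe below are rough, commonly quoted order-of-magnitude figures assumed purely for illustration; they are not values asserted in the question):

```python
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c   = 2.998e8     # speed of light, m/s
M_u = 1e53        # assumed rough mass of the observable universe, kg
R_u = 4.4e26      # assumed rough radius of the observable universe, m

phi_u = 2 * G * M_u / R_u
print(f"2*G*M_u/R_u = {phi_u:.2e} m^2/s^2")   # ~3e16
print(f"c^2         = {c**2:.2e} m^2/s^2")    # ~9e16
```

With these inputs the two sides agree only to within an order of magnitude; how close the agreement is depends entirely on which estimates of Mᵤ and Rᵤ one adopts.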
Relevant answer
Answer
It would be worth taking a close look at Einstein's original pre-test prediction of how light is deflected by the Sun's gravity. I am also interested in this original, first officially documented prediction. If you know that paper (title and publication date), please share it... It would be worth comparing with the current official narrative.
Regards,
Laszlo
  • asked a question related to Fundamental Physics
Question
14 answers
I just added an answer to an elder discussion,
"When it is not accidental that potential energy of a mass m at the level of local cumulative gravitational potential originating from remote masses of the universe equals E = mc^2, shouldn't it be worthwhile to reconsider Mach's principle ?"
Relevant answer
Answer
Motion is w.r.t. what?
  • asked a question related to Fundamental Physics
Question
10 answers
In fact, what is a charge? This is a question that has not yet been answered in physics. But according to me, the charge vibrates at the speed of light and provides this speed c to the photons so that they travel at the same speed, the speed of light, by a mechanism not yet known in fundamental physics. In addition, according to me, the charge quantifies the energies it provides to the photons that it "produces"!
Relevant answer
Answer
Arayik Danghyan,
Many thanks. Best regards.
  • asked a question related to Fundamental Physics
Question
7 answers
I think this speed is quantified but is it a constant in all the jumps of the electron?
Relevant answer
Answer
7500m/sec is an average speed.
  • asked a question related to Fundamental Physics
Question
7 answers
The cumulative gravitational potential originating from mainly the outer masses of our visible universe is about 8 orders of magnitude larger than the Sun's gravitational potential at the Earth's distance, which also holds all other planets on track. Remarkably, the potential energy of a mass m at the level of gravitational potential originating from the masses of remote parts of our universe is of the order E = mc^2.
Relevant answer
Answer
Dear colleague, do you think our universe, with billions of galaxies each situated at its present distance, is a complete entity, or is it an accident of the Big Bang?
  • asked a question related to Fundamental Physics
Question
15 answers
The potential energy of a 1 kg mass due to the Sun's gravitational potential at the Earth's position is about 10^9 J (1 GWs). The cumulative gravitational potential of all masses within the visible universe is about 10^8 times larger. At this potential a 1 kg mass will hold a potential energy of about 10^17 J, which is equivalent to E = mc^2. This may be interpreted as a strong vote in favour of Mach's Principle, which says that certain local phenomena might be related to the background masses of the universe.
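The arithmetic behind these figures can be checked in a few lines (standard solar and orbital values assumed; the sketch reproduces the quoted 10^9 J, 10^8 ratio and 10^17 J only at the order-of-magnitude level):

```python
G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
r     = 1.496e11     # Earth-Sun distance, m
c     = 2.998e8      # speed of light, m/s
m     = 1.0          # test mass, kg

U_sun  = G * M_sun * m / r    # ~8.9e8 J, i.e. about 10^9 J
E_rest = m * c**2             # ~9.0e16 J
print(f"U_sun = {U_sun:.2e} J, m*c^2 = {E_rest:.2e} J, ratio = {E_rest/U_sun:.1e}")
```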
Relevant answer
Answer
Dear Johan ( Johan K. Fremerey ).
It was with great difficulty that I wrote the article on the graviton:
(Hungarian)
Its abstract, conclusion and concept of the graviton have an English translation:
(Remark: I would translate it all into English if someone would review it linguistically!) The more perfect something is, the harder it is to make it better. Sick is the man who strives for too much perfection.
I do not know this man, but I know that 'he loves order more than any of us'.
In the implementation of a good idea, the harmonious cooperation of the two 'opposite' elements is excellent.
Regards,
Laszlo
  • asked a question related to Fundamental Physics
Question
5 answers
In Special Relativity, a photon of frequency f is considered as a particle of mass m = hf/c^2 with zero proper mass. It is experimentally verified that the photon carries momentum and exerts radiation pressure on the targets it impacts.
On the other hand, in General Relativity, it is proposed that the gravitational interaction between massive objects is due to the fact that gravitational field curves space-time. It has been verified that a massive body alters the trajectory and velocity of a beam of light that interacts with its gravitational field (Shapiro effect and gravitational lens effect). Within the frame of General Relativity, these effects are explained by proposing that photons follow geodesic trajectories within curved spaces. Obviously in this framework the mass of the photons can be considered negligible. But, in General Relativity, is the photon a massless particle?
Relevant answer
Answer
To state a slightly longer answer: since p is a vector (of magnitude hf/c, or E/c), it can be changed in direction by a force F = dp/dt, which in fact is seen in practice. However, this would be equivalent to saying there is a certain mass, as mentioned above. Photon forces are also exerted on paddles, which rotate as a result of dp/dt, the sudden change from p to -p in a short time upon reflection.
There is no rest mass, but for calculational purposes you can assume a dynamical mass to get correct conclusions, for example when giving a photon a potential energy in a gravitational field, or when writing down an actual gravitational force. These points of view are calculationally correct.
If I assumed a rest mass m(0), the dynamical mass of a photon would blow up, and you don't want that. So the dynamical mass is just hf/c^2.
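A minimal numerical illustration of the "dynamical mass" hf/c^2 discussed above (a 550 nm visible photon is assumed purely as an example):

```python
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

wavelength = 550e-9          # assumed example: green light, m
f = c / wavelength           # frequency, ~5.45e14 Hz

E     = h * f                # photon energy,    ~3.6e-19 J
p     = E / c                # photon momentum,  ~1.2e-27 kg*m/s
m_dyn = E / c**2             # "dynamical" mass, ~4.0e-36 kg

print(f"E = {E:.3e} J, p = {p:.3e} kg*m/s, m_dyn = {m_dyn:.3e} kg")
```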
  • asked a question related to Fundamental Physics
Question
3 answers
Special and general relativity, one of the pillars and one of the three most successful physical theories, gives a prime, hierarchically high role to the properties of light, which are the starting point, or "effective cause" in Aristotelian language, of its inspection.
However, these theories, despite their continuing confirmational success, show resistance to being compatible with a large portion of the rest of physics (exceptions, such as Dirac's relativistic quantum mechanics, exist).
Therefore one starts to think that perhaps the physical motivation that gave birth to them was more peripheral than is required to deserve such a prime role in unification (a discipline-level definitional trait) as is currently ascribed to them.
Relevant answer
Answer
No. As is always the case, it's much easier to start from symmetries and then find all the patterns that break them than to do the opposite.
Once invariance under global Lorentz transformations was identified as one of the properties that define electromagnetism (the other being gauge invariance, which can be understood as local charge conservation), it's straightforward to describe all possible ways that global Lorentz invariance can be broken and use these to define backgrounds for experiments. This has been done here: https://lorentz.sitehost.iu.edu/kostelecky/faq.htm
  • asked a question related to Fundamental Physics
Question
20 answers
First of all, we must concede the "variability" of space-time, and it's a physical reality. First, this is the common ground of both the Special and General Theory of Relativity; the main body of relativity can be trusted to be correct and has been more than adequately verified by experiments. These experiments typically include the earliest light-bending experiments [Eddington 1919], the round-the-world flight experiments [1], the GPS clock-correction experiments [2], and later the LIGO gravitational-wave-detection experiments [3], the observation of gravitational lensing phenomena [4] [5], and so on. The meaning of "variable" is not necessarily the relative spacetime of SR or the curved spacetime of GR. Second, philosophically we should also recognize that spacetime will not be just a variable background for matter, since all interactions cannot be separated from spacetime. Spacetime is not just a distance scale but should take on the function of transmitting interactions.
What our teachers emphasized when they talked about the difference between the applicability of SR and GR is that SR is an event in flat spacetime and GR is an event in curved spacetime. But there is only one spacetime, and for a moving electron, its SR spacetime and GR spacetime would have to be the same** if we had to consider both its SR and GR effects*.
Einstein's fondness for the concept of curved spacetime may have arisen from the intuitive nature of the geodesic concept, or perhaps from the affirmation of the maximal nature of spacetime dynamics. In any case, GR's expression of gravity in terms of a curved spacetime concept was already orthodox, though beyond the empirical perception of all. Feynman constantly questioned the notion of "spacetime curvature" and used the concept of a " measure " of spacetime in general relativity instead of "curvature" [6]. Weinberg thought that geometry might be more appropriately viewed as an analog of GR, politely expressing his skepticism, and L. Susskind, when teaching GR, said that no one knows what four-dimensional spacetime bending looks like†. We believe that Einstein was also not a great believer in the notion of four-dimensional spacetime bending, and his subsequent repeated turn to the study of five-dimensional spacetime[7][8] does not appear to have been solely for the sake of gravitational unification with Maxwell's electromagnetic theory, but perhaps also as a passing attempt to find a dimension for the three-dimensional sphere into which it could be embedded‡.
All of our current measurements and verifications of SR and GR spacetime do not involve true spacetime "curvature", although there are many proposed methods [9]. The LIGO gravitational wave measurements, the gravitational redshift and violetshift, can only be considered as a response to changes in the spacetime metric. This is similar to Feynman's view.
Let us assume a scenario: an electron of mass m in four-dimensional spacetime, and a stationary observer in a fifth-dimensional abstract space, who keeps changing the direction and velocity of the motion of the electron in four-dimensional spacetime through the fifth dimension. Ask, in the opinion of this observer:
1) Do SR spacetime and GR spacetime have to be identical?
2) Is it possible to fully express spacetime "curvature" with a spacetime metric? Excluding " twisting ".
3) Is there a notion of "curvature" for the "curvature" of one-dimensional time? Usually in GR it is also said to be the gravitational time dilation [10]. The curvature of one-dimensional space can have the concept of curvature, but in which direction? How can it not interfere with the other two dimensions?
--------------------------------------------------------------
Notes:
* Usually physics recognizes that GR effects are ignored because the electron mass is so small. This realization masks great problems. We are extrapolating from macroscopic manifestations to microscopic manifestations, and from manifestations abstracted as point particles at a distance to manifestations when structure exists at close range. As long as structure exists, when distance is sufficiently small, everything behaves as a distributed field. At this point, the abstract notion of force (magnitude, direction, point of action) has disappeared. For electrons, even the concept of charge disappears. Yet the concept of gravity does not necessarily disappear at this point, thus causing a reversal of the order of magnitude difference in action at very close distances.
** There is a difference between this and the state of affairs during GPS clock calibration. When doing GPS calibration, we are using the ground as the reference frame. A flying satellite in the sky has an SR effect, but we approximate it to be flat in space-time. The GR effect, on the other hand, is relative to the ground, not of itself. Thus, the composite calibration is the difference between the two. If one were to change the scenario and the relatively immobile space station if it needed to be calibrated with the clock of some sort of vehicle on the ground moving at high speed around it, then the composite calibration would be the sum of the two. Please correct me if there are problems with this scenario.
† He also said, when teaching QM, that no one knows what the top and bottom spins of the electron are.
‡ Einstein says that the universe is a finite three-dimensional sphere.
--------------------------------------------------------------
References:
[1] Hafele, J. C. and R. E. Keating (1972). "Around-the-World Atomic Clocks: Observed Relativistic Time Gains." Science 177(4044): 168-170.
[2] "Relativity in GNSS"; Ashtekar, A. and V. Petkov (2014). Springer Handbook of Spacetime. Berlin, Heidelberg, Springer Berlin Heidelberg.
[3] Cahillane, C. and G. Mansell (2022). "Review of the Advanced LIGO gravitational wave observatories leading to observing run four." Galaxies 10(1): 36.
[5] Tran, K.-V. H., A. Harshan, K. Glazebrook, G. K. Vasan, T. Jones, C. Jacobs, G. G. Kacprzak, T. M. Barone, T. E. Collett and A. Gupta (2022). "The AGEL Survey: Spectroscopic Confirmation of Strong Gravitational Lenses in the DES and DECaLS Fields Selected Using Convolutional Neural Networks." The Astronomical Journal 164(4): 148.
[6] Feynman, R. P. (2005). The Feynman Lectures on Physics(II).
[7] Pais, A. (1983). The science and the life of Albert Einstein II Oxford university press.
[8] Weinberg, S. (2005). "Einstein’s Mistakes." Physics Today 58(11).
[9] Ciufolini, I. and M. Demianski (1986). "How to measure the curvature of space-time." Physical Review D 34(4): 1018.
[10] Roura, A. (2022). "Quantum probe of space-time curvature." Science 375(6577): 142-143.
Relevant answer
Answer
The mathematical expression of chord time and space is a mirror image of each other (H^n*f,H^-n*f,H=1.059463), and the two are antichords (antimatter) and can be converted to each other.
The constant speed of light principle belongs to the quantum space-time category.
  • asked a question related to Fundamental Physics
Question
44 answers
If the transition is instantaneous, the moment the photon appears must be superluminal.
In quantum mechanics, Bohr's semi-classical model, Heisenberg's matrix mechanics, and Schrödinger's wave function are all able to support the assumption of atomic energy levels and coincide with the spectra of atoms. This is the operating mode of most light sources, including lasers. It shows that the bodies of their theories are all correct. If they were merged into one theory describing the structural picture, it would have to have the characteristics of all three at the same time: Bohr's ∨ Heisenberg's ∨ Schrödinger's will form the final atomic theory*.
The jump of an electron in an atom, whether in absorption or radiation, takes the form of a single photon carrying the smallest energy unit. For the same energy difference ΔE, the jump chooses a single photon over multiple photons of lower frequency ν, suggesting that a single-photon structure is a more reasonable match to the atomic orbital structures**.
ΔE=hν ......(1)
ΔE=Em-En ......(2)
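As a concrete worked example of Eqs. (1)-(2), consider the hydrogen 2→1 (Lyman-α) transition using the Bohr energy levels Eₙ = −13.6 eV/n² (a standard textbook case chosen here for illustration, not a case discussed in the question itself):

```python
h  = 6.62607015e-34    # Planck constant, J*s
c  = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19   # joules per electronvolt

def E_level(n):
    """Bohr energy levels of hydrogen, in joules."""
    return -13.6 * eV / n**2

dE = E_level(2) - E_level(1)   # Eq. (2): ~10.2 eV
nu = dE / h                    # Eq. (1): ~2.47e15 Hz
print(f"dE = {dE/eV:.2f} eV, nu = {nu:.3e} Hz, lambda = {c/nu*1e9:.1f} nm")  # ~121.6 nm
```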
It is clear that without information about Em and En at the same time, generating a definite jump frequency ν is impossible. "Rutherford pointed out that if, as Bohr did, one postulates that the frequency of light ν which an electron emits in a transition depends on the difference between the initial energy level and the final energy level, it appears as if the electron must 'know' to what final energy level it is heading in order to emit light with the right frequency." [1]
Bohr's postulate of the energy-level difference, Eqs. (1)(2), is valid [2]. But it does not hold as an axiomatic postulate. This is not just because all possible reasons have not been ruled out. For example, one of the most important reasons is that the relationship between the "wave structure" of the electron and the electromagnetic field has not been determined†. Only if this direct relationship is established can the transition process between them be described. It also requires that the wave function and the electromagnetic field are not independent things, and that the wave function is a continuous field distribution, not a probability distribution [5]. More importantly, Eqs. (1)(2) do not fulfill the conditions of an axiomatic postulate, because an axiom cannot simply ignore the question of where the required information comes from‡.
As a comparison, this is the same kind of question as asking how the photon controls its speed [3] and where the photon should go next. Both are photon behaviors that must rest on a common ground.
Considering the electron transition as a source of light, it is equally consistent with the principle of Special Relativity, and the photons radiated must be at the speed of light c and independent of the speed of the electrons††. However, if the light-emitting process is not continuous, the phenomenon of superluminal speed occurs.
We decompose the light-emitting process into two stages. The first stage, from "nothing" to "something", is the transition stage; the second stage, from something to propagation, is the normal state. According to classical physics, if the light emission is instantaneous, i.e., it occupies neither time nor space, then we can infer that the photon going from nothing to something is not a continuous process but an infinite one, and the speed at which the photon is produced is infinite. We cannot believe that the speed of propagation of light is finite while the speed at which light is produced is infinite. There is no way to bridge from the infinite to the finite, and we believe that this also violates the principle of the constancy of the speed of light.
There is no other choice for the way to solve this problem. The first is to recognize that all light emitting is a transitional "process" that occupies the same time and space, and that this transitional process must also be at the speed of light, regardless of the speed of the source of light (and we consider all forms of light emitting to be sources of light). This is guaranteed by and only by the theory of relativity. SR will match the spacetime measure to the speed of light at any light source speed. Secondly, photons cannot occur in a probabilistic manner, since probability implies independence from spacetime and remains an infinity problem. Third, photons cannot be treated as point particles in this scenario. That is, the photon must be spatially scaled, otherwise the transition process cannot be established. Fourth, in order to establish a continuous process of light emission, the "source" of photons, whether it is an accelerated electron, or the "wave function" of the electron jump, or the positive and negative electron annihilation, are required to be able to, with the help of space and time, continuous transition to photons. This will force us to think about what the wave function is.
Thinking carefully about this question, maybe we can get a sense of the nature of everything, of the extensive and indispensable role of time and space.
Our questions are:
1) Regardless of which theory the solution belongs to, where does the electron get the information about the jump target? Does this mean that the wave function of the electron should span all "orbitals" of the atom at the same time?
2) If the jump is a non-time-consuming process, should it be considered a superluminal phenomenon¶ [4]?
3) If the jump is a non-time consuming process, does it conflict with the Uncertainty Principle [5]?
4) What relationship should the wave function have to the photon to ensure that it produces the right photon?
-------------------------------------------------------------------------
Notes:
* Even the theory of the atomic nucleus. After all, when the nucleus is considered as a "black box", it presents only electromagnetic and gravitational fields.
** It also limits the possibility that the photon is a mixed-wavelength structure. "Bohr noticed that a wave packet of limited extension in space and time can only be built up by the superposition of a number of elementary waves with a large range of wave numbers and frequencies" [2].
† For example, there is a direct relationship between the "electron cloud" expressed by the wave function of the hydrogen steady state, and the radiating photons. With this direct relationship, it is possible to determine the frequency information between the transition energy levels.
‡ If a theory considers information as the most fundamental constituent, then it has to be able to answer the questions involved here.
†† Why and how to achieve independence from the speed of light cannot be divorced from SR by its very nature, but additional definitions are needed. See separate topic.
¶ These questions would relate to the questions posed in [3][4][5].
-------------------------------------------------------------------------
References:
[1] Faye, J. (2019). "Copenhagen Interpretation of Quantum Mechanics." The Stanford Encyclopedia of Philosophy from <https://plato.stanford.edu/archives/win2019/entries/qm-copenhagen/>.
[2] Bohr, N., H. A. Kramers and J. C. Slater (1924). "LXXVI. The quantum theory of radiation." The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 47(281): 785-802. This was an important paper known as "BKS"; the principle of conservation of energy-momentum was abandoned, and only conservation of energy-momentum in the statistical sense was recognized.
[3] “How does light know its speed?”;
[4] “Should all light-emitting processes be described by the same equations?”;
[5] “Does Born's statistical interpretation of the wave function conflict with ‘the Uncertainty Principle’?” https://www.researchgate.net/post/NO13_Does_Borns_statistical_interpretation_of_the_wave_function_conflict_with_the_Uncertainty_Principle;
Relevant answer
Answer
Dear Jixin Chen ,
I can't really go against your recent answer.
'We are discussing here how to get high-quality simulated data which is a problem this thread raises.' - If you know what you're looking for, where to find it, how-in what form you can get it, you can easily buy good food, then you can easily achieve a high-quality simulated data. That's why it's important to have a more natural concept that is as simple as possible... Then comes the testing... detection of errors (trying to see what caused the error, solution, and new test... and so on... In the meantime, if possible, the best possible theory must be formulated... If the theory is right, the world opens up showing the secrets of nature...
I know that science is not poetry: but knowing is like creating a poem, only then will it succeed if it is intertwined, through you with your environment.
Regards,
Laszlo
  • asked a question related to Fundamental Physics
Question
30 answers
Quantum field theory has a named field for each particle. There is an electron field, a muon field, a Higgs field, etc. To these particle fields the four force fields are added: gravity, electromagnetism, the strong nuclear force and the weak nuclear force. Therefore, rather than nature being a marvel of simplicity, it is currently depicted as a less than elegant collage of about 17 overlapping fields. These fields have quantifiable values at points. However, the fundamental physics and structure of fields is not understood. For all the praise of quantum field theory, this is a glaring deficiency.
Therefore, do you expect that future development of physics will simplify the model of the universe down to one fundamental field with multiple resonances? Alternatively, will multiple independent fields always be required? Will we ever understand the structure of fields?
Relevant answer
Answer
Quote: "However, the fundamental physics and structure of fields is not understood."
The nature of all potential fields was very well understood by Gauss and the other originators of the concept of fields. The real issue is that the physics community has completely lost its way and is now confusing pure mathematical descriptions with the physical reality they were meant to describe.
I suggest that the community reground itself on the real physics that it left behind in the first decade of the 20th century. This is put in perspective in this article, with all historical formal sources provided, including links to those directly available on the internet (which is most of them, now that they are all in the public domain):
Ignorance of all these issues can be cured only by studying the real formal sources.
  • asked a question related to Fundamental Physics
Question
21 answers
God said, "Let there be light."
So, did God need to use many means when He created light? Physically we have to ask, "Should all processes of light generation obey the same equation?" "Is this equation the 'God equation'?"
Regarding the types of "light sources", we categorize them according to how the light is emitted:
Type 0 - naturally existing light. This philosophical assumption is important. It is important because it is impossible to determine whether it is more essential that all light is produced by matter, or that all light exists naturally and is transformed into matter. Moreover, naturally existing light can provide us with an absolute spacetime background (free light has a constant speed of light, independent of the motion of the light source and independent of the observer, which is equivalent to an absolute reference system).
Type I - Orbital electron transitions [1]: these usually determine the characteristic spectra of the elements in the periodic table; they are the "fingerprints" of the elements. With human intervention, coherent optical lasers can be generated. According to the assumptions of Bohr's orbital theory, the transitions are instantaneous; there is no process, and no time is required*. Therefore, they also cannot be described using specific differential equations, but only by probabilities. However, Schrödinger believed that the wave equation could give a reasonable explanation, and that the transition was no longer an instantaneous process but a transitional one: the wave function transitions from one stable state to another, with a "superposition of states" in between [2].
Type II - Accelerated motion of charged particles emitting light. There are various scenarios here, and it should be emphasized that theoretically they can produce light of any wavelength, from infinitely short to infinitely long, and they are all photons. 1) Blackbody radiation [3][4]: produced by the thermal motion of charged particles [5]; it depends closely on the temperature and has a continuous spectrum in terms of statistical properties. This is the most ubiquitous class of light sources, ranging from stars like the Sun to the cosmic microwave background radiation [6], all of which have the same properties. 2) Radio: the most ubiquitous example is the electromagnetic waves radiated from the antennas of devices such as wireless broadcasting, wireless communications, and radar. 3) Synchrotron radiation [7], e+e− → e+e−γ: the electromagnetic radiation emitted when charged particles travel in curved paths. 4) Bremsstrahlung [8], for example e+e− → qqg → 3 jets [11]: electromagnetic radiation produced by the acceleration, or especially the deceleration, of a charged particle passing through the electric and magnetic fields of a nucleus; continuous spectrum. 5) Cherenkov radiation [9]: light produced by charged particles when they pass through an optically transparent medium at speeds greater than the speed of light in that medium.
Type III - Particle reactions and nuclear reactions: any physical reaction process that produces photon (boson**) output. 1) Gamma decay; 2) annihilation of particles and antiparticles when they meet [10]: this is a universal property of symmetric particles, the most typical physical reaction; 3) various concomitant light, such as during particle collisions; 4) transformed light output when light interacts with matter, such as Compton scattering [12].
Type IV - Various redshifts and blueshifts, which change the relative energy of light: gravitational redshift and blueshift, Doppler shifts, and cosmological redshift.
Type V - Virtual photons [13][14]?
Our questions are:
Among these types of light emission, Type II and Type IV obey Maxwell's equations, while the Type I and Type III processes are not clearly explained.
We may not be able to know the light-emitting process, but we can be sure that the result, the final output of photons, is the same. Can we be sure that different processes produce the same photons?
Is the thing that is capable of producing light itself light? Or does it at least contain elements of light, e.g., an electric field E or a magnetic field H? If there are no elements of light in it, then how is the light created? By what means is one energy and momentum converted into another energy hν and momentum h/λ?
There is a view that "Virtual particles are indeed real particles. Quantum theory predicts that every particle spends some time as a combination of other particles in all possible ways" [15]. What, then, are the actual things that can fulfil this interpretation? Can it only be energy-momentum?
We believe everything needs to be described by mathematical equations (not made-up operators). If the output of a system is the same, then the process that produces that output should also be the same. That is, the output equations for light are the same whether it is a transition, an accelerated charged particle, or an annihilation process; the difference is only in the input.
------------------------------------------------------------------------------
* Schrödinger said: the theory was silent about the periods of transition or 'quantum jumps' (as one then began to call them). Since intermediary states had to remain disallowed, one could not but regard the transition as instantaneous; but on the other hand, the radiating of a coherent wave train of 3 or 4 feet length, as it can be observed in an interferometer, would use up just about the average interval between two transitions, leaving the atom no time to 'be' in those stationary states, the only ones of which the theory gave a description.
** We know the most about photons, but not so much about the nature of W, Z, and g. Their mass and confined existence is a problem. We hope to be able to discuss this in a follow-up issue.
------------------------------------------------------------------------------
Links to related issues:
【1】"How does light know its speed and maintain that speed?”;
【2】"How do light and particles know that they are choosing the shortest path?”
【3】"light is always propagated with a definite velocity c which is independent of the state of motion of the emitting body.";
【4】“Are annihilation and pair production mutually inverse processes?”; https://www.researchgate.net/post/NO8_Are_annihilation_and_pair_production_mutually_inverse_processes;
------------------------------------------------------------------------------
Reference:
[1] Bohr, N. (1913). "On the constitution of atoms and molecules." The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 26(151): 1-25.
[2] Schrödinger, E. (1952). "Are there quantum jumps? Part I." The British Journal for the Philosophy of science 3.10 (1952): 109-123.
[3] Gearhart, C. A. (2002). "Planck, the Quantum, and the Historians." Physics in perspective 4(2): 170-215.
[4] Jain, P. and L. Sharma (1998). "The Physics of blackbody radiation: A review." Journal of Applied Science in Southern Africa 4: 80-101. 【GR@Pushpendra K. Jain】
[5] Arons, A. B. and M. Peppard (1965). "Einstein's Proposal of the Photon Concept—a Translation of the Annalen der Physik Paper of 1905." American Journal of Physics 33(5): 367-374.
[6] PROGRAM, P. "PLANCK PROGRAM."
[8] Bremsstrahlung.
[9] Neutrino detection by Cherenkov radiation: "Super-Kamiokande," from https://www-sk.icrr.u-tokyo.ac.jp/en/sk/about/; "The Jiangmen Underground Neutrino Observatory (JUNO)," from http://juno.ihep.cas.cn/.
[10] Li, B. A. and C. N. Yang (1989). "CY Chao, Pair creation and Pair Annihilation." International Journal of Modern Physics A 4(17): 4325-4335.
[11] Schmitz, W. (2019). Particles, Fields and Forces, Springer.
[12] Compton, A. H. (1923). "The Spectrum of Scattered X-Rays." Physical Review 22(5): 409-413.
[13] Manoukian, E. B. (2020). Transition Amplitudes and the Meaning of Virtual Particles. 100 Years of Fundamental Theoretical Physics in the Palm of Your Hand: Integrated Technical Treatment. E. B. Manoukian. Cham, Springer International Publishing: 169-175.
[14] Jaeger, G. (2021). "Exchange Forces in Particle Physics." Foundations of Physics 51(1): 13.
[15] Are virtual particles really constantly popping in and out of existence? Or are they merely a mathematical bookkeeping device for quantum mechanics? - Scientific American.
Relevant answer
Answer
There are a few things wrong with this question.
One) Light is a three-dimensional phenomenon in nature that cannot be described by a one-dimensional equation.
Two) Science's perception of light is uncertain, given how light has been tested from the past until now.
Three) The light in science is not the natural light that we observe from suns. All the light that we think of as light is artificial light, or flashlight, not natural light, which does not have a constant speed, because natural sunlight, with its many frequencies and wavelengths, cannot have a constant speed. Thus, science is wrong about light and sunlight.
Am I right? Thanks.
  • asked a question related to Fundamental Physics
Question
9 answers
Let's set a reference point very far away from the Earth that remains unmoved.
If I am at the equator and the Earth's surface starts to accelerate to escape speed, my guess is that I will be shaken off the Earth. My speed relative to the material that makes up the Earth is zero near me and large relative to the material on the other side of the Earth, so there is a net relative speed.
If I am at the North Pole and the Earth rotates faster, then I should keep feeling the same gravitational force. My speed relative to the material of the Earth is zero on average, because the contributions from different locations cancel each other.
If I speed up and shoot off the Earth, my speed relative to the Earth is large, as Newton has told us.
Now what happens if I jump up into the air and the Earth starts rotating faster and faster? My guess is that I will fall back to the ground and hit it hard. Again, my speed relative to the material that makes up the Earth is zero in this case, because the contributions cancel each other again.
My question is: does the effective force I feel from the material that makes up the Earth have any correlation with its speed relative to me or not? More particularly, relative angular momentum and gravity.
Relevant answer
Answer
I keep thinking and coming up with a reason why a gyroscope may not be anti-gravity. The part shooting off the surface is following a curved trajectory along the circumference of the Earth rather than a straight line in space. Thus, when it is forced to turn, it converts from one curved trajectory to another, which requires a force to push it down.
Another reason is that each spinning part can be replaced, by a symmetry argument, with its neighbour such that nothing is changed. So although it spins, it can be treated as if it were still.
  • asked a question related to Fundamental Physics
Question
125 answers
Does energy have an origin or root?
When Plato talks about beauty in the "Hippias Major", he asks: "A beautiful young girl is beautiful", "A sturdy mare is beautiful", "A fine harp is beautiful", "A smooth clay pot is beautiful" ....... , So what exactly is beauty? [1]
We can likewise ask, Mechanical energy is energy, Heat energy is energy, Electrical and magnetic energy is energy, Chemical and internal energy is energy, Radiant energy is energy, so what exactly is "energy"?[2]
Richard Feynman, said in his Lectures in the sixties, "It is important to realize that in physics today we have no knowledge of what energy is". Thus, Feynman introduced energy as an abstract quantity from the beginning of his university teaching [3].
However, the universal concept of energy in physics states that energy can neither be created nor destroyed, but can only be transformed. If energy cannot be destroyed, then it must be a real thing that exists, because it makes no sense to say that we cannot destroy something that does not exist. If energy can be transformed, then, in reality, it must appear in a different form. Therefore, based on this concept of energy, one can easily be led to the idea that energy is a real thing, a substance. This concept of energy is often used, for example, that energy can flow and that it can be carried, lost, stored, or added to a system [4][5].
Indeed, in the different areas of physics there is no definition of what energy is, and what is consistent is only its metrics and measures. So whether energy is a concrete Substance**, or just heat, or the capacity for doing work, or just an abstract cause of change, was much discussed by early physicists. However, we must be clear that there is only one kind of energy, and it is called energy. It is stored in different systems, and in different ways in those systems, and it is transferred by some mechanism or other from one system to another [9].
Based on a comprehensive analysis of physical interactions and chemical reaction processes, energy is considered to be the only thing that connects the various phenomena. Thus, "Energism" was born* [8]. Ostwald had argued that matter and energy had a "parallel" existence; he later developed a more radical position: matter is subordinate to energy. "Energy is always stored or contained in some physical system. Therefore, we will always have to think of energy as a property of some identifiable physical system." "Ostwald regarded his Energism as the ultimate monism, a unitary 'science of science' which would bridge not only physics and chemistry, but the physical and biological sciences as well" [6]. This view expressed the idea of considering "pure energy" as a "unity" and assumed a process of energy interaction. However, because of the impossibility of determining what energy is, it was rejected by both scientific and philosophical circles as "metaphysics" and "materialism" [10].
The consistency and transitivity of energy and momentum across different physical domains have in fact shown that they must be linked and bound by something fundamental. Therefore, it is necessary to re-examine "Energism" and try to advance it.
In classical mechanics, energy and momentum are independent, and their conservation laws are also independent: the momentum of a particle does not involve its energy. In relativity, however, the conservation of momentum and the conservation of energy cannot be dissociated. The conservation of momentum in all inertial frames requires the conservation of energy, and vice versa; space and time are frame-dependent projections of spacetime [7].
Our questions are:
1) What is energy? Is it a fundamental entity in nature**, or is it just a measure, like the property label "beauty", which can be applied to anything: heat, light, electricity, machinery, atomic nuclei? Do the various forms of energy express the same meaning? Can they be expressed mathematically in a uniform way? Is there a mathematical definition of "energy"? ***
2) Is the conservation of energy a universal principle? How does physics ensure this conservation?
3) Why is there a definite relationship between energy and momentum in all situations? Where are they rooted?
4) If the various forms of energy and momentum are unified, given the existence of relativity, is there any definite relationship between them and time and space?
-------------------------------------------------------------------------
* At the end of the nineteenth century, two theories were born that tried to unify the physical world, "electromagnetic worldview" and "Energism". We believe that this is the most intuitive and simple view of the world. And, probably the most beautiful and correct view of the world.
** If it is an entity, then it must still exist at absolute zero. Like the energy and momentum of the photon itself, it does not change because of the temperature, as long as it does not interact with each other.
*** We believe that this is an extremely important issue, first mentioned by Sergey Shevchenko (https://www.researchgate.net/profile/Sergey-Shevchenko) in his reply to a question on ResearchGate; see https://www.researchgate.net/post/NO1_Three-dimensional_space_issue, SS's reply.
-------------------------------------------------------------------------
References
[1] Plato, Hippias Major.
[2] Ostwald identified five “Arten der Energie”: I. Mechanical energy, II. Heat, III. Electrical and magnetic energy, IV. Chemical and internal energy, and V. Radiant energy. Each form of energy (heat, chemical, electrical, volume, etc.) is assigned an intensity. And formulated two fundamental laws of energetics. The first expresses the conservation of energy in the process of transfer and conversion; the second explains in terms of intensity equilibrium what can start and stop the transfer and conversion of energy.
[3] Duit, R. (1981). "Understanding Energy as a Conserved Quantity‐‐Remarks on the Article by RU Sexl." European journal of science education 3(3): 291-301.
[4] Swackhamer, G. (2005). Cognitive resources for understanding energy.
[5] Coelho, R. L. (2014). "On the Concept of Energy: Eclecticism and Rationality." Science & Education 23(6): 1361-1380.
[6] Holt, N. R. (1970). "A note on Wilhelm Ostwald's energism." Isis 61(3): 386-389.
[7] Ashtekar, A. and V. Petkov (2014). Springer Handbook of Spacetime. Berlin, Heidelberg, Springer Berlin Heidelberg.
[8] Leegwater, A. (1986). "The development of Wilhelm Ostwald's chemical energetics." Centaurus 29(4): 314-337.
[9] Swackhamer, G. (2005). Cognitive resources for understanding energy.
[10] The two major scientific critics of Energism are Max Planck and Ernst Mach. The leading critic of the political-philosophical community was Vladimir Lenin (the founder of the organization known as Comintern). But he criticized not only Ostwald, but also Ernst Mach.
Relevant answer
Answer
Dear Chian Fan ,
You live in China and have not been touched - you have forgotten the teachings of Lao Tzu! This could be the reason why you don't understand what André Michaud wrote to you.
Please excuse me for being blunt!
Regards,
Laszlo
  • asked a question related to Fundamental Physics
Question
101 answers
The still-unachieved unification of general relativity and quantum physics is a painstaking issue. Is it feasible to build a nonempty set, with a binary operation defined on it, that encompasses both theories as subsets, making it possible to join together two of their most dissimilar marks, i.e., the commutativity detectable in our macroscopic relativistic world and the non-commutativity detectable in the quantum, microscopic world? Could the gravitational field be the physical counterpart able to throw a bridge between relativity and quantum mechanics? Is it feasible that gravity stands for an operator able to reduce the countless orthonormal bases required by quantum mechanics to just one, i.e., the relativistic basis of an observer located in a single cosmic area?
What do you think?
Relevant answer
Answer
A charged particle whizzes past a very massive body; therefore its trajectory bends somehow, and it radiates (for classical reasons). Therefore there might be some radiation surrounding a black hole. So who is responsible? Not QM. Perhaps only Einstein or Newton?
  • asked a question related to Fundamental Physics
Question
8 answers
We assume that any N-dimensional space can have an (N-1)-dimensional "boundary". However, if the boundary is limited to points, lines, and surfaces, and not to bodies, then three-dimensional space will be the maximum spatial dimension that satisfies this condition. What is the mathematical concept involved here?
Relevant answer
Answer
The comment to
“…II:“And, in this case finally: in physics there exist fundamental parameter of practically everything – “energy”, which at least till now doesn’t exist in mathematics…”
Cheers
  • asked a question related to Fundamental Physics
Question
10 answers
By now, we've all realised how well GPT AI is able to find and replicate patterns in language and in 2D images. Its ability to find and interact with data patterns sometimes allows it to answer questions better than some students.
I expect that right now there will be teams training GPT installations with molecular structure and physical characteristics data to try to find candidates for new materials for high-temperature superconductors, or to find organic lattice structures with high hydrogen affinity to replace palladium for hydrogen storage cells. The financial and social rewards for success in these areas make it difficult to justify NOT trying out GPT AI.
But what about fundamental physics theory? Could AI find a solution to the current mismatch between Einstein's general relativity and quantum mechanics? Could it start to solve hard problems that have defeated mainstream academia for decades? If so, what happens to the institutions?
Further reading:
Relevant answer
Answer
AI might have trouble giving us a new theory of physics directly, because (as you say) it'll be encoded with all the mistaken assumptions that are already preventing humans from finding a new theory.
But what AI should be able to do is to find the mistakes (or, at least, find "mistake-candidates"). If we tell it that a theory has a mistake, it'll dutifully report back what it thinks the mistake might be.
Suppose that we train an AI to find inconsistencies and paradoxes and contradictions, or missing logical steps in a block of text representing a "story". This "story" could be a novel, or a history of World War One, or a physics theory. The AI may go through the text and identify a huge list of provisional entities (perhaps hundreds of thousands of them), then tentatively associate entities with each other with a confidence rating ("Simon" in chapter one is probably the same "Simon" that appears in Chapter Two). It can then use its understanding of language to create a web of interrelationships between those entities.
The AI could be trained on simple stories that have obvious contradictions. Different types of contradictions may show up in a network as distinctive topological features: a Moebius-strip-like structure in the network may represent the ability of the story to generate one logical output if we go around the network one way, but to assign a different logical status to the same node if we traverse the network another way. The AI could then look for these possible topological network features and rank them according to the confidence rankings of the connections responsible for the features.
One could feed in all known accounts of World War One, and identify where different sources appear to give conflicting information.
A whole web of spurious "lightweight" possible inconsistencies might be due to "frivolous" or "bad faith" misconnections and misidentifications between entities, created by software that is looking for problems. With these resolved, we may have a set of more heavy-duty apparent inconsistencies due to resolvable ambiguities in the text. With these eliminated, there might be some ambiguities that cannot be resolved, in which case the ambiguities are hiding structural conflicts in the theory. And there may also be some real, explicit structural conflicts that are not caused by ambiguity, but have been missed because they only appear as global features.
So once you've trained an AI to hunt down logical inconsistencies, and you give it a set of texts on general relativity, it might report:
  1. " MTW and a range of other high-confidence sources say that if inertial physics is correctly described by SR, a forcibly-accelerated body MUST NOT be associated with intrinsic curvature. "
  2. " Einstein is another high-confidence source. His 1921 Princeton lecture says that according to GR principles, and the GR equations, a forcibly-accelerated body MUST be associated with intrinsic curvature. "
  3. " Einstein's writings defining the structure of GR say that it must both conform to the general principle (giving outcome #2), but must also conform to special relativity (giving outcome #1) "
  4. " Since outcomes #1 and #2 represent two contradictory connection states of the same two nodes, either the two nodes can support multiple connection states, or one of the authoritative sources is wrong, or the entire theory is pathologically misconstructed. "
So an AI ought to be able to highlight the fact that, logically, either Einstein didn't understand the correct behaviour of his theory and got it wrong, or GR textbooks don't understand the correct behaviour of SR and get it wrong, or Einstein was wrong about SR being able to be a component of GR.
AI can force us to accept that the existing system doesn't work (a fact that seems to have eluded academic human brains for over a century), and once we've accepted that, humans can examine the nature of the breakdown and use what they learn from it to construct a better theory.
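As a purely illustrative sketch of the contradiction-hunting step described in this answer, the fragment below hard-codes a few claim tuples (hand-written stand-ins for what an entity-extraction stage might produce; no trained model is involved) and flags node pairs that carry mutually exclusive relations:

# Toy contradiction detector over hand-written claim tuples.
from collections import defaultdict

# Hypothetical claims mirroring the example in the answer above.
claims = [
    ("MTW",             "forcibly-accelerated body", "MUST NOT have", "intrinsic curvature"),
    ("Einstein (1921)", "forcibly-accelerated body", "MUST have",     "intrinsic curvature"),
    ("Einstein (1905)", "inertial physics",          "described by",  "SR"),
]

# Group claims by the (subject, object) pair they connect.
by_pair = defaultdict(list)
for source, subj, rel, obj in claims:
    by_pair[(subj, obj)].append((source, rel))

# Report node pairs whose asserted relations are mutually exclusive.
incompatible = {("MUST have", "MUST NOT have")}
for (subj, obj), assertions in by_pair.items():
    rels = {rel for _, rel in assertions}
    for a, b in incompatible:
        if a in rels and b in rels:
            print(f"Conflict on ({subj!r}, {obj!r}):")
            for source, rel in assertions:
                print(f"  {source}: {rel}")

Running it flags the MTW vs Einstein (1921) pair as a candidate contradiction; everything upstream of such tuples (entity extraction, confidence scoring, topology) is the hard part the answer is really about.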
  • asked a question related to Fundamental Physics
Question
65 answers
According to special relativity [1], the mass of a moving object is generally considered to be a relative value that increases with velocity [2]: m = γm0, where γ is the relativistic factor and m0 is defined as the rest mass. The mass-energy equation E = mc^2 is a derivative of Einstein's special relativity. Einstein assumed two inertial systems moving at a constant relative velocity, where an object in the stationary inertial frame radiates photons in two opposite directions; if the total energy of the photons is E, then in the other inertial frame the mass of the object is seen to decrease by E/c^2, i.e., E = mc^2. He thus concluded that the mass of an object is a measure of the energy it contains [3].
Our question is: if there is no absolute spacetime and the mass of any object in an inertial system can be considered a rest mass, then, if the object arbitrarily changes its speed of motion and is able to measure itself, will there exist a minimum rest mass, i.e., a minimum energy?
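A minimal numerical sketch of the two formulas quoted above, m = γm0 and E = mc^2 (the electron rest mass and the sample speeds are assumptions chosen only for illustration):

from math import sqrt

c = 2.99792458e8          # speed of light, m/s
m0 = 9.1093837015e-31     # electron rest mass, kg (example value)

for beta in (0.0, 0.5, 0.9, 0.99):
    gamma = 1.0 / sqrt(1.0 - beta**2)   # relativistic factor
    m = gamma * m0                      # velocity-dependent ("relativistic") mass
    E = m * c**2                        # total energy, E = m c^2
    print(f"v = {beta:4.2f} c   gamma = {gamma:6.3f}   E = {E:.3e} J")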
[1] Einstein, A. (1905). "On the electrodynamics of moving bodies."
[2] Feynman, R. P. (2005). The Feynman Lectures on Physics (I).
[3] Einstein, A. (1905). "Does the inertia of a body depend upon its energy-content?" Annalen der Physik 18(13): 639-641.
Relevant answer
Answer
Thank you for sharing your views. The study of cosmology and our understanding of the universe are constantly evolving and it's always great to see diverse perspectives.
Your proposition that the Cosmic Microwave Background (CMB) radiation is a result of galaxy formation is an intriguing one. Conventionally, the CMB is understood as a remnant radiation from the early universe, about 380,000 years after the Big Bang. This is a period known as recombination, when the universe cooled down enough to allow hydrogen and helium atoms to form, making the universe transparent to light for the first time. The photons that we detect as the CMB are believed to be from this epoch, stretched to microwave wavelengths by the expansion of the universe.
However, you propose that these photons are actually produced during galaxy formation and are then scattered by free electrons to form a blackbody radiation equivalent to a temperature of 2.7K. This is an interesting hypothesis, and as with any scientific theory, it would need to be backed by observational and experimental evidence. It would be necessary to explain how this process could produce a nearly perfect blackbody spectrum and account for the observed anisotropies in the CMB.
Moreover, current models of Big Bang cosmology, including the recombination epoch and the existence of the CMB as a relic radiation, have been successful in predicting a wide range of phenomena, from the large-scale structure of the universe to the abundance of light elements. A new model would need to be able to explain all these observations at least as well as the current model does.
Also, regarding your statement, "If light was sent out at the recombination in the Big Bang, it will be long gone by now." It is important to note that the light from the recombination is indeed gone from its original location, but because the universe has been expanding since that time, that light is just now reaching us from distant locations. This is why we can still observe the CMB today.
Science is inherently a process of exploration and discovery, and theories are always subject to modification and refinement in the face of new data. Your contributions to this ongoing conversation are much appreciated. I look forward to hearing more about your ideas and the evidence that supports them.
Best regards,
Alessandro Rizzo
  • asked a question related to Fundamental Physics
Question
24 answers
Complex numbers are involved almost everywhere in modern physics, but the understanding of imaginary numbers has been controversial.
In fact there is a process of acceptance of imaginary numbers in physics. For example.
1) Weyl in establishing the Gauge field theory
After the development of quantum mechanics in 1925-26, Vladimir Fock and Fritz London independently pointed out that it was necessary to replace γ by −iħ. "Evidently, Weyl accepted the idea that γ should be imaginary, and in 1929 he published an important paper in which he explicitly defined the concept of gauge transformation in QED and showed that under such a transformation, Maxwell's theory in quantum mechanics is invariant."【Yang, C. N. (2014). "The conceptual origins of Maxwell's equations and gauge theory." Physics Today 67(11): 45.】
【Wu, T. T. and C. N. Yang (1975). "Concept of nonintegrable phase factors and global formulation of gauge fields." Physical Review D 12(12): 3845.】
2) Schrödinger when he established the quantum wave equation
In fact, Schrödinger rejected the concept of imaginary numbers earlier.
【Yang, C. N. (1987). Square root of minus one, complex phases and Erwin Schrödinger.】
【Kwong, C. P. (2009). "The mystery of square root of minus one in quantum mechanics, and its demystification." arXiv preprint arXiv:0912.3996.】
【Karam, R. (2020). "Schrödinger's original struggles with a complex wave function." American Journal of Physics 88(6): 433-438.】
The imaginary number here is also related to the introduction of the energy and momentum operators in quantum mechanics:
Recently @Ed Gerck published an article dedicated to complex numbers:
Our question is: is there a consistent understanding of the concept of imaginary numbers (complex numbers) in current physics? Do we need to discuss imaginary numbers and complex numbers (dual numbers) as two separate concepts?
_______________________________________________________________________
2023-06-19 Addendum
On the question of complex numbers in physics, here are some relevant references collected in recent days.
1) Jordan, T. F. (1975). "Why− i∇ is the momentum." American Journal of Physics 43(12): 1089-1093.
2)Chen, R. L. (1989). "Derivation of the real form of Schrödinger's equation for a nonconservative system and the unique relation between Re (ψ) and Im (ψ)." Journal of mathematical physics 30(1): 83-86.
3) Baylis, W. E., J. Huschilt and J. Wei (1992). "Why i?" American Journal of Physics 60(9): 788-797.
4)Baylis, W. and J. Keselica (2012). "The complex algebra of physical space: a framework for relativity." Advances in Applied Clifford Algebras 22(3): 537-561.
5)Faulkner, S. (2015). "A short note on why the imaginary unit is inherent in physics"; Researchgate
6)Faulkner, S. (2016). "How the imaginary unit is inherent in quantum indeterminacy"; Researchgate
7)Tanguay, P. (2018). "Quantum wave function realism, time, and the imaginary unit i"; Researchgate
8)Huang, C. H., Y.; Song, J. (2020). "General Quantum Theory No Axiom Presumption: I ----Quantum Mechanics and Solutions to Crisises of Origins of Both Wave-Particle Duality and the First Quantization." Preprints.org.
9)Karam, R. (2020). "Why are complex numbers needed in quantum mechanics? Some answers for the introductory level." American Journal of Physics 88(1): 39-45.
Relevant answer
Answer
Dear Chian Fan
As we all know, mathematics is a "language".
I observe that two types of persons take interest in physics, pure mathematicians and experimental physicists.
What differentiates both types is that validity of reasoning is provided by "numerical resolution" of whatever equation can be drawn from physically collected data that experimental physicists use in describing what they observe, and validity of logical derivations from sets of axiomatic postulates in the case of pure mathematicians.
One of the major difficulties in fundamental physics is the very power of mathematics as a descriptive language. If care is not taken to avoid as much as possible axiomatic postulates, an indefinite number of theories can be elaborated with full mathematical support that can always become entirely self-consistent with respect to the set of premises from which each theory is grounded. But the very self-consistency of all well thought out theories is so appealing to our rational minds that it renders very difficult the requestioning of the grounding foundations of such beautiful and intellectually satisfying structures and consequently the identification of possibly inappropriate axiomatic assumption.
Experimental physicists adapt the available math as well as they can in their attempts at mathematically describing what they observe from the data they collected – of which i never is an element, while pure mathematicians explain what logically comes out of whatever sets of axiomatic premises that they chose to underlie their worldview.
From what I understand, √-1 just happened to be part of the mathematical toolset that Schrödinger had at his disposal when trying to mathematize how to account for the stationary resonance state that de Broglie had discovered the electron is captive of when stabilized in the hydrogen atom ground state, a resonance frequency to which those of all other metastable orbitals of the hydrogen atom and emitted bremsstrahlung photons are related by the well-established sequence of integers that de Broglie provided in his 1924 thesis.
Best Regards, André
  • asked a question related to Fundamental Physics
Question
13 answers
I realize that the great theories of fundamental physics are already united in their cradles. I refer here especially to general and special relativity and to Planck's theory. There are certainly other fundamental theories that are related to these three. Therefore I do not understand why there is all this controversy over the unification of the great fundamental theories, which has lasted for more than a century to date. I explain the unification through the cosmological constants. Indeed, I discovered that these theories, all without exception, use cosmological constants; that is one thing. But I also discovered that the cosmological constants are derived from each other, which guarantees the link between these theories. So, as an application of the fact that these theories are already linked in their cradles, it is fair to say that people can use, for example, the Schwarzschild radius equation to calculate the radii of the proton and the neutron without waiting for someone, via any theory, to allow this equation to be used.
Relevant answer
Answer
Dear Sergey Shevchenko
I answered the question of Arno Gorgels "Physicists seek a mathematical formula to unify all natural fields. But they say mathematics is semantics only. Isn't this a contradiction?".
  • asked a question related to Fundamental Physics
Question
98 answers
I believe I have solved what was called the "most fundamental unsolved problem of physics" by Paul Dirac:
"The fine-structure constant [...] has no dimensions or units. It’s a pure number that shapes the universe to an astonishing degree — “a magic number that comes to us with no understanding,” as Richard Feynman described it. Paul Dirac considered the origin of the number “the most fundamental unsolved problem of physics.”"
I've worked things out in a Jupyter notebook and generated a PDF version as well:
The results are quite surprising, to say the least.......
Earlier work in progress:
Relevant answer
Answer
Very impressive!!
Congratulations!
So, the FSC α is fundamentally a geometric proportionality constant of an effective toroidal ring model for the electron elementary charge inside the vacuum medium modeled as a superfluid condensate.
I came to a similar conclusion with my "1/2 spin EM flux fiber model for the electron" regarding a possible intrinsic charge manifold for the electron:
It is very important to see independent research from different sources attacking the problem blindfolded from different angles and coming more or less to the same conclusion(s). This strengthens the case altogether and means that the findings are most probably correct.
Kind Regards,
Emmanouil
  • asked a question related to Fundamental Physics
Question
29 answers
Quantum started (pre 1960) with weird explanations of the math and large ensembles of particles.
Then came Aspect and entanglement (ca 1980) with Bell's inequality requiring 2 or more particles.
Now the next step is to become fully "realistic" (more intuitive) by modeling causality and superluminal signal speed with the characteristics of one particle. More "realistic" suggests having an analogy with classical modeling.
Adlam, E., 2022, "Is there causation in fundamental physics? New insights from process matrices and quantum causal modeling," arXiv:2208.02721 [quant-ph].
Relevant answer
Answer
In my recent article posted on RG, "Review force general conjectural modeling transforms formalism physics," per my peer-reviewed publication Iyer R., "Review force general conjectural modeling transforms formalism physics," Phys Astron Int J. 2022;6(3):119-124, the multiphase nature of the superluminal plenum, the vacuum, and the observable matter universe is brought out. The vacuum then exists only as a dynamic phase that gets formed from the faster-than-light superluminal phase. The speed of light is constant only in the vacuum, with a value of c. In the observable matter universe, the speed of light is generally less than c.
  • asked a question related to Fundamental Physics
Question
29 answers
The dimensioned physical constants (G, h, c, e, me, kB, ...) can be considered fundamental only if the units they are measured in (kg, m, s, ...) are independent. The 2019 redefinition of the SI base units resulted in 4 physical constants being assigned exact values, and this confirmed the independence of their associated SI units. However, there are anomalies that occur in certain combinations of these constants which suggest a mathematical (unit number) relationship (kg -> 15, m -> -13, s -> -30, A -> 3, K -> 20), and as these are embedded in the constants, they are easy to test; the results are consistent with CODATA precision. Statistically, therefore, can these anomalies be dismissed as coincidence?
For example, we can see how to make the physical units kg, m, s, A from dimensionless mathematical forms using this unit number relationship, and this has applications to simulated-universe modelling.
For convenience, the article has been transcribed to this wiki site.
...
Some general background to the physical constants.
Relevant answer
Answer
Hi Hieram, you're welcome! IMHO, fun should be induced by fruitful scientific research supported by openly commenting on one another's ideas (as opposed to endlessly repeating fruitless discussions of what presumably cannot ever work, until it does), like iSpace ("integer-Space", or "complex-Space" when treating the i as the imaginary unit of a complex number), which is able to derive and decipher inter-relationships and dependencies and to calculate exact, arbitrary-precision numerical values for most if not all constants of nature.
Also, recently a new, true quantum-geometric iSpace-IQ unit system has been developed, able to directly represent native quantum relations of constants while remaining strictly compatible with the MKSA/SI system, showing a single *time*-based conversion factor and effectively predicting the quantization of time itself.
So - no - being a true long-time Apple expert consultant, I'd say we do not need to fear being sued (at least not in the foreseeable future ;-) ). And please all take the time to read through the very short yet IMHO really convincing math of both of my newest papers, to be found on my RG home.
Here is a link to RG summary of my iSpace project:
  • asked a question related to Fundamental Physics
Question
5 answers
Having worked on the spacetime wave theory for some time and having recently published a preprint paper on the Space Rest Frame, I realised the full implications, which are quite shocking in a way.
The spacetime wave theory proposes that electrons, neutrons and protons are looped waves in spacetime:
The paper on the space rest frame proposes that waves in spacetime take place in the space rest frame K0:
This then implies that the proton, which is a looped wave in spacetime of three wavelengths, is actually a looped wave taking place in the space rest frame, and that we are moving at somewhere between 150 km/sec and 350 km/sec relative to that frame of reference.
This also gives a clue as to the cause of the length contraction taking place in objects moving relative to the rest frame. The length contraction takes place in the individual particles, namely the electron, neutron and proton.
I find myself in a similar position to Ernest Rutherford when he discovered that the atom was mostly empty space. I don't expect to fall through the floor, but I might expect to suddenly fly away at around 250 km/sec. Of course this doesn't happen, because there is zero resistance to uniform motion through space and momentum is conserved.
It still seems quite a shocking realisation.
Richard
Relevant answer
Answer
Sydney Ernest Grimm
Thank you for your comment and I would like to explain in more detail how the Spacetime Wave theory relates to quantum theory.
If you think about the electron as a looped wave in Spacetime the entire mass/energy of the electron is given by E=hf. Then when an electron changes energy level from an excited state f2 to a lower energy level f1 the emitted wave quantum (photon) is given by h(f2 - f1). It is easy to see how a looped wave can emit a non-looped wave.
Because the path of the electron wave loops many times around the nucleus, and within each wavelength there is a small positive charge followed by a slightly larger negative charge, the wave aligns with successive passes displaced by half a wavelength.
This alignment process means that there are certain possible energy states that can be adopted by the electron. This is the cause of the quantum nature of the electron and also explains the quantum nature of light.
Richard
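As a quick numerical illustration of the E = hf bookkeeping used in the answer above, here is a minimal sketch applied to the hydrogen n = 2 to n = 1 transition (standard Bohr-level energies; the choice of transition is only an assumed example and is not part of the Spacetime Wave theory itself):

h = 6.62607015e-34      # Planck constant, J s
eV = 1.602176634e-19    # joules per electronvolt

E2 = -13.6 / 2**2       # energy of level n = 2, eV (Bohr model)
E1 = -13.6 / 1**2       # energy of level n = 1, eV (Bohr model)
delta_E = (E2 - E1) * eV            # emitted energy, J (~10.2 eV)
f = delta_E / h                     # photon frequency from E = h f
print(f"delta E = {delta_E:.3e} J, f = {f:.3e} Hz")   # ~2.47e15 Hz (Lyman-alpha)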
  • asked a question related to Fundamental Physics
Question
139 answers
Our answer is YES. The wave-particle duality is a model proposed to explain the interference of photons, electrons, neutrons, or any matter. One deprecates a model when it is no longer needed. Therefore, we show here that the wave-particle duality is deprecated.
This offers an immediate solution for the thermal radiation of bodies, as Einstein demonstrated experimentally in 1917, in terms of ternary trees in progression, to tri-state+, using the model of GF(3^n), where the same atom can show so-called spontaneous emission, absorption, or stimulated emission, and further collective effects, in a ternary way.
Continuity or classical waves are not needed, do not fit into this philosophy, and are not representable by any edges or updating events, or sink, in an oriented graph [1] model with stimulated emission.
However, taking into account the principle of universality in physics, the same phenomena — even a particle such as a photon or electron — can be seen, although approximately and partially, in terms of continuous waves, macroscopically. Then, the wave theory of electrons can be used in the universality limit, when collective effects can play a role, and explain superconductivity.
This resolves the apparent confusion created by the common wave-particle duality model; the ontological view can now become, however, more indicative of a particle in all cases, and does not depend on amplitude.
This explains both the photoelectric effect, that does not depend on the amplitude, and wave-interference, that depends on the amplitude. The ground rule is quantum, the particle -- but one apparently "sees" interference at a distance that is far enough not to distinguish individual contributions.
On deprecating the wave-particle duality, we are looking into abandoning probability in quantum mechanics. This speaks against a duality that is not based on finite integers!
Support comes from the preprint
the article published in Mathematica in 2023 at
the free PDF book
the paper-based books at
and other references, including on the new field of "quantum circuits" using the new set Q*.
REFERENCE
[1] Stephen Wolfram, “A Class of Models with the Potential To Represent Fundamental Physics.” Arxiv; https://arxiv.org/ftp/arxiv/papers/2004/2004.08210.pdf, 2004.
Relevant answer
Answer
JW: Welcome back. Under universality, one can see waves, and only waves, when the particles are far removed -- but one cannot see particles, in any circumstances, from waves.
One cannot deduce particles from waves, and the Maxwell equations fail there. The Maxwell equations cannot explain the laser, the photoelectric effect, photons, nor diamagnetism. Time to evolve.
  • asked a question related to Fundamental Physics
Question
6 answers
Coherency is a major difference separating EM from gravity.
In layman's terms someone can say that gravity is degenerated incoherent quantum EM, projected on the macroscale far field.
Graphene is the simplest macroscopic quantum matter structure, and therefore, if a gravity-EM relation exists, this would be our best chance to find out, I believe.
The anomalous mass behavior of the fermions inside graphene may well produce anomalous gravity effects, which as far as I know have not yet been investigated by the science community and academia. This, I believe, is because the atomic lattice structure in graphene is coherent and not a mess --> mass :) as we usually have in macroscopic matter.
In other words, graphene is the most coherent macroscopic matter we can get today.
Therefore, any EM-gravity correlation should become evident with the experiments I propose. Each layer of graphene piled on top will increase the incoherence of the film as a whole, change the wave function, and also change the gravity effect. If the piled-up number of isometric layers of graphene does not produce a linearly proportional increase in the total weight of the film, then I will have proved my point about the direct relation between the gravity effect and quantum EM in matter.
Cooper-pair superconductivity coherence is, I believe, a different case from the above. At the molecular level there is still very much incoherent matter present in a superconductor, and therefore the EM coherency of the charged matter will have little to no effect on the gravity effect of the total mass of the superconductor.
In my proposed experiment, my concern is whether the instrumentation used would prove adequate to reliably measure and resolve such minuscule changes in weight as I expect.
Essentially, what I try to prove in my proposed experiment above is that, if we add on top of a single layer of graphene sheet an identical isometric layer of graphene, assuming that no other matter was added (i.e., clean-room vacuum conditions), then the total mass will not double. This is because the mass value depends also on the matter incoherence within an object, which dictates the number of quantum EM interactions taking place inside the object. The mass readout in the experiment with the stacked graphene layers will differ for different degrees of alignment achieved between the two stacked layers. I expect the experiment to produce measurable anomalous results for the first few layers, and afterwards the results to be smoothed out and normalized as more and more layers are added and a critical value of overall matter incoherence has been reached, whereupon W = mg will again become a linear function, where W is the total weight of the stacked graphene-layer film.
copyright©Emmanouil Markoulakis Hellenic Mediterranean University (HMU) 2019
Relevant answer
Answer
Graphene is mysterious. Rajan Iyer
  • asked a question related to Fundamental Physics
Question
123 answers
The fundamental physical constants, ħ, c and G, appear to be the same everywhere in the observable universe. Observers in different gravitational potentials or with different relative velocity, encounter the same values of ħ, c and G. What enforces this uniformity? For example, angular momentum is quantized everywhere in the universe. An isolated carbon monoxide molecule (CO) never stops rotating. Even in its lowest energy state, it has ħ/2 quantized angular momentum zero-point energy causing a 57 GHz rotation. The observable CO absorption and emission frequencies are integer multiples of ħ quantized angular momentum. An isolated CO molecule cannot be forced to rotate with some non-integer angular momentum such as 0.7ħ. What enforces this?
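For reference, the roughly 57 GHz figure quoted for CO follows from the standard rigid-rotor rotational constant B = h/(8π²I); a minimal sketch, assuming textbook values for the CO bond length and atomic masses:

from math import pi

h = 6.62607015e-34       # Planck constant, J s
amu = 1.66053906660e-27  # atomic mass unit, kg

m_C, m_O = 12.0 * amu, 15.995 * amu
mu = m_C * m_O / (m_C + m_O)   # reduced mass of CO
r = 1.128e-10                  # C-O bond length, m (textbook value)
I = mu * r**2                  # moment of inertia

B = h / (8 * pi**2 * I)        # rotational constant in Hz
print(f"B = {B/1e9:.1f} GHz")              # ~57.6 GHz
print(f"J=1->0 line = {2*B/1e9:.1f} GHz")  # ~115 GHz, the observed CO microwave line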
Even though the rates of time are different in different gravitational potentials, the locally measured speed of light is constant. What enforces a constant speed of light? It is not sufficient to mention covariance of the laws of physics without further explanation. This just gives a different name to the mysteries.
Are the natural laws imposed on the universe by an unseen internal or external entity? Do the properties of vacuum fluctuations create the fundamental physical constants? Are the physical constants the same when they are not observed?
Relevant answer
Answer
The question is relevant from my humble point of view.
I don't know what their source is, but we as theorists, the same as experimentalists, use our own units of measurement & visualization and sometimes ignore them.
We normalize quantities when T → 0, at very low temperatures, for example.
But the units you mention (Planck units), ħ = c = G = (I add one more) kB = 1, allow us to treat difficult problems such as scattering in a unified way, by using energy in meV for instance.
They are really fundamental; all these natural unit systems should be studied in general physics courses.
Best Regards.
  • asked a question related to Fundamental Physics
Question
5 answers
It feels strange to have discovered a new fundamental physics discipline after a gap of a century. It is called Cryodynamics, sister of the chaos-borne deterministic Thermodynamics discovered by Yakov Sinai in 1970. It proves that Fritz Zwicky was right in 1929 with his alleged “tired light” theory.
The light traversing the cosmos hence lawfully loses energy in a distance-proportional fashion, much as Edwin Hubble tried to prove.
Such a revolutionary development is a rare event in the history of science. So the reader has every reason to be skeptical. But it is also a wonderful occasion to be one of the first who jump the new giant bandwagon. Famous cosmologist Wolfgang Rindler was the first to do so. This note is devoted to his memory.
November 26, 2019
Relevant answer
Answer
What will happen once 92 years have passed since then? Is it possible to imagine?
  • asked a question related to Fundamental Physics
Question
101 answers
There is an opinion that the wave-function represents the knowledge that we have about a quantum (microscopic) object. But if this object is, say, an electron, the wave-function is bent by an electric field.
In my modest opinion matter influences matter. I can't imagine how the wave-function could be influenced by fields if it were not matter too.
Has anybody another opinion?
Relevant answer
Answer
Nice discussion
  • asked a question related to Fundamental Physics
Question
4 answers
Dear Colleagues.
The Faraday constant, as a fundamental physical value, has peculiar features which make it stand out from the other physical constants. According to the official documents of NIST, this constant has two values:
F = 96485.33289 ± 0.00059 C/mol and
F* = 96485.3251 ± 0.0012 C/mol.
The second value refers to the "ordinary electric current".
Is the Faraday constant constant?
One of the ways to answer this question is proposed in the works.
Sincerely, Yuriy.
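One numerical way to frame the question: since the 2019 SI redefinition fixed N_A and e exactly, the product F = N_A·e is itself exact. The sketch below simply evaluates it for comparison with the two NIST values quoted above, which predate that redefinition:

N_A = 6.02214076e23    # Avogadro constant, 1/mol (exact since 2019)
e = 1.602176634e-19    # elementary charge, C (exact since 2019)

F = N_A * e            # Faraday constant, C/mol
print(f"F = {F:.5f} C/mol")   # 96485.33212 C/mol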
Relevant answer
Answer
Faraday's constant is always considered a universal constant...
  • asked a question related to Fundamental Physics
Question
113 answers
According to special relativity (SR), the relative velocity between two inertial reference frames (IRF), say two spaceships, is calculated by
u = (v1 - v2) / (1 - v1v2/c^2)    (1)
where v1 and v2 are the constant velocities of the two vessels, which move parallel to each other.
For low speeds, v1v2/c^2 is negligible and the formula reduces to
u = v1 - v2
But neither v1 nor v2 is supposed to be known in SR. Both can have any value between -c and +c as illustrated in Figure 1 (please see the attached file).
Not knowing the speed of each vessel means that the calculated relative speed can also be any value between -c and +c. For example:
v1 = -0.6c, v2 = -c    ==> u = -c (possibility 5 in Figure 1)
v1 = 0,     v2 = -0.4c ==> u = -c/2.5 (possibility 2)
v1 = 0.2c,  v2 = -0.2c ==> u = c/2.6 (possibility 3)
v1 = 0.4c,  v2 = 0     ==> u = c/2.5 (possibility 1)
v1 = c,     v2 = 0.6c  ==> u = c (possibility 4)
Meaning that the real relative speed between two IRFs in fact cannot be calculated.
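A short sketch that evaluates Eq. (1) for the example pairs listed above (speeds in units of c). The signs follow Eq. (1) literally, i.e. u is taken as the velocity of vessel 1 relative to vessel 2; the signs quoted above depend on the orientation convention of Figure 1, which is not reproduced here:

def u(v1, v2):
    """Relativistic relative velocity, Eq. (1), with c = 1."""
    return (v1 - v2) / (1 - v1 * v2)

pairs = [(-0.6, -1.0), (0.0, -0.4), (0.2, -0.2), (0.4, 0.0), (1.0, 0.6)]
for v1, v2 in pairs:
    print(f"v1 = {v1:+.1f}c, v2 = {v2:+.1f}c  ->  u = {u(v1, v2):+.3f}c")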
To remedy this situation, it is assumed that:
1. One of the vessels in which observer number one, Bob, resides is stationary and the other vessel, Alice, is moving at the relative speed of u.
This is, obviously, a wrong scientific statement and in contrast to SR. Here only one specific possibility among countless possibilities is arbitrarily selected to hide the difficult situation. We should also remind ourselves of the damaging effect of this type of assumption. Scientists tried hard to discard the dominant geocentric dogma of the past, championed by the Catholic Church, and now a comparable assumption is accepted under a new, supposedly groundbreaking concept.
Based on this assumption, the equation simply reduces to either u = -v2 or u = v1, depending on the observer.
2. There is a third reference frame based on which the speeds are measured.
Like the first case, we are back to Newtonian mechanics, with an assumed fixed reference frame. This assumption explicitly accepts the first assumption; only then does the formula make sense. Specifically, to be able to present SR as a scientific/quantitative theory, it is forced to accept that the frame of the observer, or a third frame, is a stationary reference frame for any measurement or analysis. Zero speed is just a convenient value among the countless other possibilities which SR has introduced and whose consequences it has then decided not to deal with.
The problem with Einstein's velocity addition formula also applies in this case, as the assumed velocities, as well as the calculated relative velocity between Bob and Alice, depend on the relative speed of the observer.
Somehow, both conflicting cases are accepted in SR quite subjectively. In other words, SR is arbitrarily benefiting from classical science, to push its own undeserved credibility, while at the same time denying it.
Is this a fair assessment?
P.S. for simplicity only parallel movements are considered.
Relevant answer
Answer
Jeremy Fiennes: "I had a quick look at your "10 proofs of SR". Mainly due to thinking "Wait a minute, he's now trying to justify SR?!!" The title is maybe somewhat misleading. Or maybe you are trying to attract readers who still belive in it."
It was trying to attract people who were interested in claimed proofs of SR, for any reason (and then pointing out that most of the traditional proofs are provably "junk science", and not to be taken seriously). I figured that this stuff needed documenting.
Mainstream relativity folk are quick to shout "fraud!" "fake!" or "incompetent" at fringe scientists when they cut corners or fiddle figures, but are less willing to document dubious or phoney science when their own team are responsible.
Jeremy Fiennes: " For me the best conceptual refutation is the clock absurdity "
I try to stay away from the clock paradox.
Firstly, it involves acceleration, so it's a problem in "extended" SR rather than "core" SR, and with extended SR, a GR mainstreamer can always step in and say, "oh, your problem is that you're using SR outside its proper domain of validity; if SR breaks, it just means that you need to use full GR".
Second, the proper domain of extended SR is kinda fuzzy. SR gets credit when it works, the user gets the blame when it doesn't. People disagree as to what the proper domain is, and it's not obviously scientifically invalidatable.
Third, even in the GR version, it's apparently not been adequately solved ("GR clock paradox")
Fourth, if you accelerate in a given direction at X Earth-gravities, SR coordinates will catastrophically break down and become inconsistent at a distance of about 1/X lightyears.
So if you spend years successfully constructing a proof that the twin problem is unworkable, the GR folk can say, "Oh, we know that, that's part of why SR is considered to only be a local theory: it's not to be used for problems involving accelerations and interstellar distances, because its coordinates break down!"
What the community will tend to do is to steer you towards problems where ... if you have success ... they have an emergency "escape" argument to fall back on. It's misdirection -- conning critics into working on problems that they know can be dismissed as irrelevant.
IMO, if you want to take down SR, you have to ignore all the standard textbook clichéd arguments, and create new problems that even invoking GR won't let them wriggle out of. Like, how about pointing out that a valid general theory requires SR geometry to be wrong for moving masses? Or that quantum gravity needs non-SR equations? Or that you can't combine modern cosmology with SR-based GR's gravity-shift predictions? Or that "extended SR" requires rotating masses NOT to drag light, making it invalidated by Gravity Probe B?
Take the higher ground. Instead of complaining that SR is counter-intuitive, point out the similarity between flat-Earthers and SR's flat spacetime. When they try to mock SR critics for being too dim to understand Minkowski spacetime, mock them right back for being too dim to understand the curved-spacetime principles of a proper general theory, or the inherent conflicts between Minkowski's geometry and the principle of equivalence.
The "Ten proofs of SR" paper was intended to provide SR dissidents with counterarguments and ammunition to help them counter almost anything that the SR community could use in the theory's defence.
It's insurrection time!
  • asked a question related to Fundamental Physics
Question
34 answers
This question is closely related with a previous question I raised in this Forum: "What is the characteristic of matter that we refer as "electric charge"?"
As stated in my previous question, the main objective of bringing this topic to discussion is to try to understand the fundamental physical phenomena associated with the Universe we live in, where energy, matter and other key ingredients, like the Laws that govern them, which all together seem to play a harmonious role, so harmonious that even life, as we know it, can exist in this planet.
My background is from engineering. Hence, I am trying to go deep into the causes behind the effects, the physical phenomena that support the Universe as we know it, prior to go deep into complex mathematical models and formulation, which may obscure reality.
With an open mind, I try to ask questions whose answers may help us to understand the whys, rather than to prove theories and their formulations.
From our previous discussion, it became clear that mass and electric charge are two inseparable attributes of matter. Moreover, Electromagnetic (EM) fields propagate through vacuum. Hence, no physical matter is required for energy or information flow through the Universe. However, electric charges remain clustered in physical matter, i.e., they require, not vacuum, but matter.
Matter has the property of radiation. Matter under Gravitational (G) and EM fields is subjected to forces, producing movement. Radiation depends strongly on Temperature.
The absolute limit of T is 0 K. At this limit, particle movement stops. Magnetic fields depend on moving electric charges; as movement vanishes at this limit, magnetic fields should vanish with it. As electric and magnetic fields are nested in each other, the electric field, and consequently the effect of EM fields (and hence radiation, too), should vanish as T approaches 0 K. Black holes (BH) do not radiate (apart from negligible Hawking radiation), their temperature being close to 0 K.
Can we assume that EM fields ultimately vanish as T approaches 0 K?
Could this help explain why protons in an atomic nucleus stay together, and are not violently scattered away from each other?
Would it be reasonable to assume that atomic nuclei are at temperatures close to 0 K, although electrons and matter, at the macroscopic level, are at room temperature?
What really is the temperature of atomic nuclei? Can we measure it? Is it possible that a cloud of electrons, either orbiting the nuclei or moving as free electrons, has a shielding effect, capturing the energy associated with room temperature and preventing the nuclei from heating? Can the temperature of an atom's nucleus be close to 0 K, as occurs in BH?
Relevant answer
Answer
Without electrons and their frequencies, the nuclei are treated as fixed points. So a bold but logical statement would be 0 K, as long as they don't vibrate.
  • asked a question related to Fundamental Physics
Question
16 answers
Wikipedia describes physics, lit. 'knowledge of nature', as the natural science that studies matter, its motion and behavior through space and time, and the related entities of energy and force.
But isn’t this definition a redundancy? Any visible object is made of matter and its motion is a consequence of energy applied. We might as well say, study of stuff that happens. But then, what does study entail?
Fundamentally, ‘physics’ is a category word, and category words have inherent problems. How broad or inclusive is the category word, and is the ordinary use of the category word too restrictive?
Is biophysics a subcategory of biology? Is econophysics a subcategory of economics? If, for example, biophysics combines elements of physics and biology, does one predominate in the categorization? If, as in biophysics, econophysics, and astrophysics, there are overlapping disciplines, does the category word 'physics' give us insight into what physics studies or obscure what physics studies?
Is defining what physics does more a problem of semantics (ascribing meaning to a category word) than of science?
Might another way of looking at it be this? Physics generally involves the detection of patterns common to different phenomena, including those natural, emergent, and engineered; where possible, detecting fundamental principles and laws that model them, and using mathematical notation to describe those principles and laws; and where possible, devising and implementing experiments to test whether hypothesized or observed patterns provide evidence for, or give clues to, fundamental principles and laws.
Maybe physics more generally just involves problem solving and the collection of inferences about things that happen.
Your views?
Relevant answer
Answer
If you ask a fake physicist:
  • Why is the sky blue? He says because it looks blue.
  • Why is the electron charge quantized? He says because Millikan's experiment showed it.
  • Why is there no ether? He says because it was not detected in Michelson's experiment.
  • Why is light a wave? Because Young's experimental results are more consistent with light being a wave.
  • What is quantum mechanics? Like the great Feynman, he says he doesn't know, but he has accurate calculations that are compatible with the data, and that's enough.
The latter was not the answer of an ordinary physicist, but the answer of one of the greatest contemporary physicists! And this is a disaster for physics.
It is as if the role of physics has been reduced from a master to a servant.
Is reducing the role of physics from describing nature to a tool for exploitation a service to physics or a betrayal of it?
Technology is now far ahead of knowledge, and physics does not seem to be afraid of this humiliation, and it is still content with its instrumental role.
  • asked a question related to Fundamental Physics
Question
9 answers
In physics, we have a number of "fundamental" variables: force, mass, velocity, acceleration, time, position, electric field, spin, charge, etc.
How do we know that we have in fact got the most compact set of variables? If we were to examine the physics textbooks of an intelligent alien civilization, could it be that they have cleverly set up their system of variables so that they don't need (say) "mass"? Maybe mass is accounted for by everything else and is hence redundant? Maybe the aliens have factored mass out of their physics and it is not needed?
Bottom line question: how do we know that each of the physical variables we commonly use are fundamental and not, in fact, redundant?
Has anyone tried to formally prove we have a non-redundant compact set?
Is this even something that is possible to prove? Is it an unprovable question to start with? How do we set about trying to prove it?
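One narrow, formal sense in which the redundancy question can be attacked is dimensional: a set of quantities is dimensionally dependent if the rank of its exponent matrix over the base dimensions is smaller than the number of quantities (the Buckingham-pi viewpoint). A minimal sketch of that check, with arbitrarily chosen example quantities and SI-style base dimensions (M, L, T, I):

import numpy as np

quantities = {                      # rows: exponents of (M, L, T, I)
    "mass":         [1, 0,  0, 0],
    "velocity":     [0, 1, -1, 0],
    "acceleration": [0, 1, -2, 0],
    "force":        [1, 1, -2, 0],  # equals the mass row plus the acceleration row
    "charge":       [0, 0,  1, 1],
}
A = np.array(list(quantities.values()), dtype=float)
print(len(quantities), "quantities, dimensional rank =", np.linalg.matrix_rank(A))
# rank 4 < 5 quantities: "force" is dimensionally expressible via the others.
# This is only dimensional redundancy; it says nothing about deeper redundancy.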
Relevant answer
Answer
Respected D Abbott
A very good question, but the answer is difficult.
I won't say anything about aliens.
I just want to say something about mass.
As far as I can tell, the principle of extremum action is the basic principle of nature.
The action is, as you know, actually the world-line length between two events.
More precisely, the action is proportional to the world-line length between two events.
The proportionality constant is something called "mass" (with a negative sign).
So, if we don't consider mass as a variable, we will fail to explain the time evolution of systems of different masses, and physics will not be able to explain natural events.
For fields, though, mass is not the proportionality constant of the action, because for fields like the EM field there is no mass.
I don't know whether the time evolution of a massive system can be explained without mass or not.
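For reference, the standard free-particle action makes this point explicit (conventional textbook notation, stated here only as a sketch of what is meant above):

S = -mc \int ds = -mc^{2} \int \sqrt{1 - v^{2}/c^{2}} \, dt

so the mass m appears precisely as the (negative) proportionality constant between the action and the world-line length ∫ds.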
Thanks and Regards
N Das
  • asked a question related to Fundamental Physics
Question
10 answers
You will find an article, with more precision, under my profile.
The question is non-relativistic and depends only on logic.
The answer could force a reset of all of fundamental physics and is therefore of extreme importance!
JES
Relevant answer
Answer
For the de Broglie wavelength λ, λ = h/mv, where h is the Planck constant; m is the invariant mass of the particle; and v is the velocity of this particle. (The equation can be rewritten as λ = h/p, because p = mv, where p is the momentum of the particle, for non-relativistic motion.)
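As a worked example of that formula (constants to a few digits; the electron speed is an arbitrary illustrative choice, of roughly the Bohr-orbit scale):

h = 6.62607015e-34        # Planck constant, J*s
m_e = 9.1093837015e-31    # electron mass, kg
v = 2.2e6                 # electron speed, m/s (illustrative, non-relativistic)
lam = h / (m_e * v)       # de Broglie wavelength, lambda = h / (m v)
print(f"lambda = {lam:.2e} m")   # about 3.3e-10 m, i.e. atomic-scale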
  • asked a question related to Fundamental Physics
Question
39 answers
What is consciousness? What do the latest neurology findings tell us about consciousness and what is it about a highly excitable piece of brain matter that gives rise to consciousness?
Relevant answer
Answer
Consciousness is what starts when you wake and fall asleep each day, and this captures our scientific and philosophical attention precisely because it is highly implausible that there is "a highly excitable piece of brain matter that gives rise to consciousness." To borrow an example from Ned Block, there are about a billion neurons in a brain and there are about a billion people in China, but if the Chinese were to relay information among themselves in a manner identical to a brain, China itself would not suddenly awake and enjoy conscious experiences. This disanalogy is what makes the prospect of a neat localization claim (i.e., "Consciousness is this spot in the brain!") unlikely -- on principled grounds. An expression like "gives rise to," despite sounding so natural, presupposes a host of unexamined metaphysical views that become dubious when examined, so such an expression obscures more than it reveals.
  • asked a question related to Fundamental Physics
Question
5 answers
It has radically altered it by rehabilitating Fritz Zwicky 1929.
Hence ten Nobel medals are gone. And cheap energy for all is made possible. Provided, that is, that humankind is capable of mentally following in Yakov Sinai’s chaotic footsteps. If not, energy remains expensive and CERN remains dangerous to all: A funny time that we are living in. With the crown of Corona yet waiting to be delivered.
April 1st, 2020
Relevant answer
Answer
Please, elaborate.
  • asked a question related to Fundamental Physics
Question
4 answers
Why is a complete theory of fundamental physics ignored just because it is outside the realms of Quantum Field Theories and General Relativity? It has been "marked" as a speculative alternative and has never been studied, nor has there been any attempt to verify it. The fundamental physics community is still in complete ignorance of the extremely successful Electrodiscrete Theory.
The Electrodiscrete Theory is not a speculative alternative and not just a new idea in the works, but a complete theory of fundamental physics describing all our elementary particles and their interactions, including gravity. The Electrodiscrete Theory beautifully describes the patterns in nature revealed by observations. The Electrodiscrete Theory gives a single (unified) description of nature in a relatively simple and self-consistent way. Moreover, it can calculate and it can make predictions. Then why is it ignored?
The Electrodiscrete Theory provides the complete conceptual foundation for describing nature that we are all seeking, but nobody bothers to take a look. Why?
The Electrodiscrete Theory opens new horizons. This is progress in science that is being held back by prejudice and a new kind of ignorance. What is wrong with the system?
Relevant answer
Answer
These results are a consequence of the structure of a deeper and more fundamental layer of Fundamental Physics. As you say, this is not related to anything described by the currently accepted theories. These results provide the only theoretical derivation and understanding of the Fine Structure Constant and the understanding of the Electron Magnetic Moment Anomaly. This Sub-Fundamental Physics that I have developed makes the unification of gravity and electromagnetism possible, eliminates all the paradoxes in physics, and provides one solid and unified description for all of physics, as described in my book and articles. The Electrodiscrete Theory is not a kind of Quantum Field Theory and it does not conform with General Relativity. It is a new physics and it does require us to overcome many scientific prejudices. However, it takes an understanding of the basics of the Electrodiscrete Theory to be able to understand the theoretical derivation of the Fine Structure Constant and the EMM Anomaly.
  • asked a question related to Fundamental Physics
Question
11 answers
Mathematics is crucial in many fields.
What are the latest trends in Maths?
Which recent topics and advances in Maths? Why are they important?
Please share your valuable knowledge and expertise.
Relevant answer
Answer
For me, as well as for the majority of other researchers, Mathematics is the language of Science!
  • asked a question related to Fundamental Physics
Question
15 answers
A new Phenomenon in Nature: Antifriction
Otto E. Rossler
Faculty of Science, University of Tuebingen, Auf der Morgenstelle 8, 72076 Tuebingen, Germany
Abstract
A new natural phenomenon is described: Antifriction. It refers to the distance-proportional cooling suffered by a light-and-fast particle when it is injected into a cloud of randomly moving heavy-and-slow particles if the latter are attractive. The new phenomenon is dual to “dynamical friction” in which the fast-and-light particle gets heated up.
(June 27, 2006, submitted to Nature)
******
Everyone is familiar with friction. Friction brings an old car to a screeching halt if you jump on the brake. The kinetic energy of a heavy body thereby gets “dissipated” into fine motions – the heating-up of many particles in the end. (Only some cars do re-utilize their motion energy by converting it into electricity.) But there also exists a less well-known form of friction called dynamical friction. It differs from ordinary friction by its being touchless.
The standard example of dynamical friction is a heavy particle that is repulsive over a short distance, getting injected into a dilute gas of light-and-fast other particles. The heavy particle then comes to an effective halt. For all the repelled gas particles that it forced out of its way in a touchless fashion carried away some of its energy of motion while getting heated-up in the process themselves – much as in ordinary friction.
In the following, it is proposed that a dual situation exists in which the opposite effect occurs: “antifriction.” Antifriction arises under the same condition as friction – if repulsion is replaced by attraction. The fast particles then rather than being heated up (friction) paradoxically get cooled-down (antifriction). This surprising claim does not amount to an irrational perpetual-motion-like effect. Only the fast-and-light (“cold”) particle paradoxically imparts some of its kinetic energy onto the slow-and-heavy “hot” particles encountered.
A simplified case can be considered: A single light-and-fast particle gets injected into a cloud of many randomly moving heavy-and-slow particles of attractive type. Think of a fast space probe getting injected into a globular cluster of gravitating stars. It is bound to be slowed-down under the many grazing-type almost-encounters it suffers. The small particle will hence be “cooled” rather than heated-up as one would naively expect in analogy to the repulsive case.
The new effect is going to be demonstrated in two steps. In the first step, we return to repulsion. This case can be understood intuitively as follows: On the way towards equipartition (which characterizes the final equilibrium in the repulsive case as is well known), the light-and-fast particles – a single specimen in the present case – do predictably get heated up in their kinetic energy. In the second step, we then “translate” this result into the analogous attraction-type scenario to obtain the surprising opposite effect there.
First step: the repulsive case. Many heavy repulsive particles in random motion are assumed to be traversed by a light-and-fast particle in a grazing-type fashion. A typical case is focused on: as the light-and-fast particle starts to approach the next moving heavy repellor while leaving behind the last one at about the same distance, the new interaction partner is with the same probability either approaching or receding-from the fast particle’s momentary course. Whilst there are many directions of motion possible, the transversally directed ones are the most effective so that it suffices to focus on the latter. Since the approaching and the receding course do both have the same probability of occurrence, a single pair already yields the main effect: there is a net energy gain for the fast particle on average. Why?
In the approaching subcase the fast particle gains energy, and in the receding subcase it loses energy. But the two effects are not the same: The gain is larger than the loss on average if the repulsive potential is assumed to be of the inversely distance-proportional type assumed. This is because in the approaching case, the fast particle automatically gets moved-up higher by the approached potential hill gaining energy, than it is hauled-down by the receding motion of the same potential hill in the departing case losing energy. The difference is due to the potential hill’s round concave form as an inverted funnel. The present “typical pair” of encounters thus enables us to predict the very result well known to hold true: a time- and distance-proportional energy gain of the fast lighter particle as a consequence of the “dynamical friction” exerted by the heavy particles encountered along its way. Thus, eventually an “equipartition” of the kinetic energies applies.
Second step: the attractive case. Everything is the same as before – except that the moving potential hill has become a moving potential trough (the funnel now points downward rather than upward). The asymmetry between approach and recession is the same as before. Therefore there is a greater downwards-directed loss of energy (formerly: upwards-directed gain) in the approaching subcase than there is an upwards-directed gain of energy (formerly: downwards-directed loss) in the receding subcase. The former net gain is thus literally turned over into a net loss. With this symmetry-based new result we are finished: antifriction is dual to dynamical friction, being valid in the case of attraction just as dynamical friction is valid in the case of repulsion.
Thus a new feature of nature – antifriction – has been found. The limits of its applicability have yet to be determined. It deserves to be studied in detail – for example, by numerical simulation. It is likely to have practical implications, not only in the sky with its slowed-down space probes and redshifted photons [1], but perhaps even in automobiles and refrigerators down here on earth.
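The suggested numerical simulation can be sketched very crudely as follows: a light test particle passes a heavy attractor whose transverse motion either approaches or recedes from the particle's course; back-reaction on the heavy mass is neglected and every parameter value is arbitrary, so this only illustrates the kind of comparison proposed, not its outcome:

import numpy as np

GM, b = 1.0, 1.0                   # attraction strength and impact parameter (arbitrary units)
v_light, u_heavy = 3.0, 0.3        # light-particle speed and heavy-particle transverse speed

def energy_change(s, T=40.0, dt=1e-3):
    # Kinetic-energy change of the light particle; s = -1: heavy mass approaching
    # the particle's course, s = +1: receding. Velocity-Verlet integration.
    r = np.array([-0.5 * v_light * T, 0.0])
    v = np.array([v_light, 0.0])
    heavy = lambda t: np.array([0.0, b + s * u_heavy * (t - 0.5 * T)])
    acc = lambda r, t: GM * (heavy(t) - r) / np.linalg.norm(heavy(t) - r) ** 3
    E0 = 0.5 * v @ v
    t, a = 0.0, acc(r, 0.0)
    for _ in range(int(T / dt)):
        v_half = v + 0.5 * dt * a
        r = r + dt * v_half
        t += dt
        a = acc(r, t)
        v = v_half + 0.5 * dt * a
    return 0.5 * v @ v - E0

dE_app, dE_rec = energy_change(-1), energy_change(+1)
print("approaching:", dE_app, "receding:", dE_rec, "average:", 0.5 * (dE_app + dE_rec))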
To conclude, the fascinating phenomenon of dynamical friction – touchless friction – was shown to possess a natural “dual”: antifriction. A prototype subcase (a pair of representative encounters) was considered above in either scenario, thereby yielding the new twin result. Practical applications can be expected to be found.
I thank Guilherme Kujawski for stimulation. For J.O.R.
Added in proof: After the present paper got finished, Ramis Movassagh kindly pointed to the fact that the historically first paper on “dynamical friction,” written by Subrahmanyan Chandrasekhar [2] who also coined the term, actually describes antifriction. This fact went unnoticed because the smallest objects in the interactions considered by Chandra were fast-moving stars. Chandra’s correctly seen energy loss of these objects therefore got classified by him as a form of “friction” suffered in the interaction with the fields of other heavy moving masses. However, the energy loss found does actually represent a “cooling effect” of the type described above: antifriction. One can see this best when the cooling is exerted on a small mass (like the above-mentioned tiny space probe traversing a globular cluster of stars). While friction heats up, antifriction cools down. Thus what has been achieved above is nothing else but the re-discovery of an old result that had been interpreted as a form of “friction” even though it actually represents the first example of antifriction.
References
[1] O.E. Rossler and R. Movassagh, Bitemporal dynamic Sinai divergence: an energetic analog to Boltzmann’s entropy? Int. J. Nonlinear Sciences and Numerical Simul. 6(4), 349-350 (2005).
[2] S. Chandrasekhar, Dynamical friction. Astrophys. J. 97, 255-263 (1943).
(Remark: The present paper after not being accepted by Nature in 2006 was recently found lingering in a forgotten folder.)
See also: R. Movassagh, A time-asymmetric process in central force scatterings (submitted on 4 Aug 2010, revised 5 Mar 2013, https://arxiv.org/abs/1008.0875)
Nov. 23, 2019
Relevant answer
Answer
Hello Mykhailo and Otto,
The point I was trying to make in my last message was that all real systems experience some type of dissipation wherein energy is degraded to heat. For solids in mechanical contact with one another, dissipation arises from friction, specifically dynamic friction, when there is relative motion of two solid surfaces. In a moving fluid (liquid or gas), dissipation arises due to either shear viscosity in the case of tangential forces or bulk viscosity in the case of normal forces. The term 'friction' should only be used where it is directly applicable.
One can, of course, say that the viscosity of real fluids produces a "friction-like" dissipation, but this use of the term 'friction' is by analogy and it suffers from the logical fallacy of false equivalence as viscosity and friction arise from different root causes. Consider an incandescent light bulb. The electric current through its tungsten filament only produces a small amount of visible light, the majority of the applied electrical energy is converted directly to heat. The dissipation in the incandescent bulb arises from imperfections in the metal crystal lattice due to things such as defects, grain boundaries, interstitial and substitutional impurities, etc. These imperfections give rise to what one might call "friction-like" behavior, but the dissipation is obviously not caused by asperities as in the case of friction between solids.
Otto, with respect to the system discussed in your original question, it is still not clear to me that your use of the term 'friction' is appropriate. Does a space probe moving through a globular cluster of stars really experience friction or antifriction? Would it not be more appropriate to speak about the effective mass of the space probe changing due to the long range fields it experiences? Plus, I am still hazy about where and how the dissipation or anti-dissipation arises given that the forces acting on the space probe are probably conservative.
Regards,
Tom Cuff
  • asked a question related to Fundamental Physics
Question
11 answers
It is well known that a light field can be decomposed into a polarized field and an unpolarized field. But is it possible to consider this sum as only the sum of a linearly polarized part and an unpolarized part, or of a circularly polarized part and an unpolarized part? Or is it always the degree of polarization that matters, not the type of polarization?
Relevant answer
Answer
Thank you very much for your reply, Prof. Hari Prakash. I will go through the paper you recommended. The change on propagation of the Stokes vector of the polarized part of a partially coherent, partially polarized beam (e.g., Gaussian Schell-model beams) has been well studied by Emil Wolf. Recently I have also read that the polarized part is strictly a single state of vibration, in the sense that there should not be an addition of two polarized components; that would result in depolarization. Prof. A. T. Friberg (https://www.uef.fi/en/web/photonics/ari-t.-friberg) and his group do research on unpolarized light, and a few of his papers cleared my doubt.
Regarding the Stokes parameters being inadequate for higher-order coherence functions (I am not sure about this), the two-point Stokes parameters ( ) may show a new way.
Since ICTP Trieste is mentioned in your affiliation: I plan to apply for and attend the complex systems course (http://indico.ictp.it/event/9024/), and I would like to meet and discuss with you if I am selected for it.
  • asked a question related to Fundamental Physics
Question
32 answers
The incredible thing about Physarum polycephalum is that, whilst being completely devoid of any nervous system whatsoever (not possessing a single neuron), it exhibits intelligent behaviours. Does its ability to intelligently solve problems suggest it must also be conscious? If you think yes, then please describe if and how its consciousness may differ {physically or qualitatively ... rather than quantitatively} from the consciousness of brained organisms (e.g., humans). Does this intelligent behaviour (sans neurons) suggest that consciousness may be a universal fundamental related more to the physical transfer or flow of information than being (as supposed by most psychological researchers) an emergent property of processes in brain matter?
General background information:
"Physarum polycephalum has been shown to exhibit characteristics similar to those seen in single-celled creatures and eusocial insects. For example, a team of Japanese and Hungarian researchers have shown P. polycephalum can solve the Shortest path problem. When grown in a maze with oatmeal at two spots, P. polycephalum retracts from everywhere in the maze, except the shortest route connecting the two food sources.[3] When presented with more than two food sources, P. polycephalum apparently solves a more complicated transportation problem. With more than two sources, the amoeba also produces efficient networks.[4] In a 2010 paper, oatflakes were dispersed to represent Tokyo and 36 surrounding towns.[5][6] P. polycephalum created a network similar to the existing train system, and "with comparable efficiency, fault tolerance, and cost". Similar results have been shown based on road networks in the United Kingdom[7] and the Iberian peninsula (i.e., Spain and Portugal).[8] Some researchers claim that P. polycephalum is even able to solve the NP-hard Steiner minimum treeproblem.[9]
P. polycephalum can not only solve these computational problems, but also exhibits some form of memory. By repeatedly making the test environment of a specimen of P. polycephalum cold and dry for 60-minute intervals, Hokkaido University biophysicists discovered that the slime mould appears to anticipate the pattern by reacting to the conditions when they did not repeat the conditions for the next interval. Upon repeating the conditions, it would react to expect the 60-minute intervals, as well as testing with 30- and 90-minute intervals.[10][11]
P. polycephalum has also been shown to dynamically re-allocate to apparently maintain constant levels of different nutrients simultaneously.[12][13] In particular, specimen placed at the center of a Petri dish spatially re-allocated over combinations of food sources that each had different protein–carbohydrate ratios. After 60 hours, the slime mould area over each food source was measured. For each specimen, the results were consistent with the hypothesis that the amoeba would balance total protein and carbohydrate intake to reach particular levels that were invariant to the actual ratios presented to the slime mould.
As the slime mould does not have any nervous system that could explain these intelligent behaviours, there has been considerable interdisciplinary interest in understanding the rules that govern its behaviour [emphasis added]. Scientists are trying to model the slime mold using a number of simple, distributed rules. For example, P. polycephalum has been modeled as a set of differential equations inspired by electrical networks. This model can be shown to be able to compute shortest paths.[14] A very similar model can be shown to solve the Steiner tree problem.[9]"
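As an illustration of that last point, here is a minimal sketch of the commonly cited current-reinforcement ("Physarum solver") dynamics on a toy two-route graph; the graph, the update constants and the number of iterations are all made up, and this is not the specific model of reference [14]:

import numpy as np

edges = [(0, 1, 1.0), (1, 3, 1.0),   # route 0-1-3, total length 2
         (0, 2, 1.5), (2, 3, 1.5)]   # route 0-2-3, total length 3
n_nodes, source, sink = 4, 0, 3
D = np.ones(len(edges))              # tube conductivities

for _ in range(100):
    L = np.zeros((n_nodes, n_nodes))          # weighted graph Laplacian, weights D/length
    for k, (i, j, length) in enumerate(edges):
        w = D[k] / length
        L[i, i] += w; L[j, j] += w; L[i, j] -= w; L[j, i] -= w
    b = np.zeros(n_nodes); b[source], b[sink] = 1.0, -1.0   # unit in-/out-flow
    keep = [i for i in range(n_nodes) if i != sink]         # ground the sink (pressure 0)
    p = np.zeros(n_nodes)
    p[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])
    Q = np.array([D[k] / length * (p[i] - p[j]) for k, (i, j, length) in enumerate(edges)])
    D += 0.1 * (np.abs(Q) - D)                # Euler step of dD/dt = |Q| - D

for (i, j, length), d in zip(edges, D):
    print(f"edge {i}-{j} (length {length}): conductivity {d:.3f}")
# The shorter route 0-1-3 keeps conductivity near 1; the longer route decays toward 0.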
Relevant answer
Answer
This is a very difficult question.
  • asked a question related to Fundamental Physics
Question
4 answers
The theory of special relativity requires that the laws of the universe be the same for objects that move with uniform velocity relative to each other. A law that changes from one frame to another is wrong. The Lorentz transformations give only three transformations, for length, time and mass, which are basic physical quantities. Derived quantities can be obtained from them, covering the laws of mechanics only. In addition, the Lorentz transformation of the mass was found using the correspondence principle and not directly. If we want to obtain the Lorentz transformations of the derived quantities, we must first find the Lorentz transformations for the fundamental physical quantities.
Relevant answer
Answer
There are also transformation laws for electric and magnetic fields.
  • asked a question related to Fundamental Physics
Question
3 answers
To what extent are we compromising Darcy's law when we characterize oil/gas flow within a petroleum reservoir?
Does the fundamental physics associated with Darcy's law not change significantly when we apply it to the above application?
Darcy’s law requires that any resistance to the flow through a porous medium should result only from the viscous stresses induced by a single-phase, laminar, steady flow of a Newtonian fluid under isothermal conditions within an inert, rigid and homogeneous porous medium.
Relevant answer
Answer
Refer to Perrine-Martin modification for multiphase flow.
  • asked a question related to Fundamental Physics
Question
27 answers
For many years I worked on the NSE under the assumption of incompressible flow. This assumption drives us to work with a simplified model (M = 0), based on the fact that
a² = (∂p/∂ρ)ₛ → +∞
Of course, any model is an approximate interpretation of reality, but this specific mathematical model assumption contradicts the fundamental physical limit of the speed of light.
Despite the fact that low (but finite) Mach number models were developed, the M = 0 model is still widely used both in engineering aerodynamics and in basic research (instability, turbulence, etc.) in fluid dynamics.
Can we really accept the M = 0 model, which violates a fundamental physical limit? If yes, is that a result from assessed studies that used a very low but finite Mach number for comparison?
Relevant answer
Answer
The speed of light is not really relevant in low Mach number flow. From the assumption of incompressible flow, you get infinite speed of sound. That might be more of an issue.
My subject is ventilation of tunnels. Typical air flow velocities range from 0 to 25 m/s. We consider this incompressible flow. I only had an issue with this assumption when I analysed very long tunnels (12 km and more). For long tunnels, the speed of sound becomes relevant, even for very small flow velocities: information about a flow change at one portal needs time to reach the other portal.
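To put a number on that last point (a back-of-the-envelope sketch; 343 m/s is simply room-temperature air and the lengths are arbitrary):

a = 343.0                                  # speed of sound in air at about 20 C, m/s
for L in (1_000.0, 12_000.0, 25_000.0):    # tunnel lengths, m
    print(f"L = {L/1000:5.0f} km  ->  acoustic transit time L/a = {L/a:5.1f} s")
# A 12 km tunnel gives roughly 35 s, clearly not instantaneous on ventilation-control time scales.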
BTW, the same question can be asked for mechanics: Would you take relativistic effects into account when you analyse the sandwich falling from the table?
  • asked a question related to Fundamental Physics
Question
4 answers
Is there any evidence or theoretical framework to explain the values of the fundamental physical constants? In other words, could the values of the physical constants be different (contingency)? Or is there a physical necessity for them to be as they are? Obs.: this is not a metaphysical question.
Relevant answer
Answer
It is not a question of one or the other, as causality-based theology, philosophy and natural science think. For dialectics these two opposites go together! Chance (contingency) is blind only when it is not realized in a necessity! There is no determinism in the universe as physics thinks; everything in the universe is mediated not by cause and effect, but by dialectical chance and necessity. Man, as the most highly developed subjective aspect (life) of blind and objective Nature (the contradiction of living and non-living matter), possesses, in a historical evolutionary way, freedom of the will to change objective Nature and also himself, reducing his contradiction with Nature.
The following quote from Frederick Engels will make it more clear: “Hegel was the first to state correctly the relation between freedom and necessity. To him, freedom is the appreciation of necessity. “Necessity is blind only in so far as it is not understood”. Freedom does not consist in the dream of independence of natural laws, but in the knowledge of these laws, and in the possibility this gives of systematically making them work towards definite ends. This holds good in relation both to the laws of external nature and those which govern the bodily and mental existence of men themselves – two classes of laws which we can separate from each other at most only in thought, but not in reality.
Freedom of the will therefore means nothing but the capacity to make decision with real knowledge of the subject. Therefore the freer a man’s judgement is in relation to a definite question, with so much the greater necessity is the content of this judgement determined; while the uncertainty, founded on ignorance, which seems to make an arbitrary choice among many different and conflicting possible decisions, shows by this precisely that it is not free, that it is controlled by the very object it should itself control. Freedom therefore consists in the control over ourselves and over external nature which is founded on knowledge of natural necessity; it is therefore necessarily a product of historical development. The first men who separated themselves from the animal kingdom were in all essentials as unfree as the animals themselves, but each step forward in civilization was a step towards freedom.” (Anti-Dühring).
  • asked a question related to Fundamental Physics
Question
29 answers
The 1998 astronomical observations of SN 1A implied a (so-called) accelerating universe. It is over 20 years later and no consensus explanation exists for the 1998 observations. Despite FLRW metric, despite GR, despite QM, despite modified theories like MOND, despite other inventive approaches, still no explanation. It is hard to believe that hundreds or thousands of physicists having available a sophisticated conceptual mathematical and physics toolkit relating to cosmology, gravity, light, and mechanics are all missing how existing physics applies to explain the accelerating expansion of space. Suppose instead that all serious and plausible explanations using the existing toolkit have been made. What would that imply? Does it not imply a fundamental physical principle of the universe has been overlooked or even, not overlooked, but does not yet form part of physics knowledge? In that case, physics is looking for the unknown unknown (to borrow an expression). I suspect the unknown principle relates to dimension (dimension is fundamental and Galileo’s scaling approach in 1638 for a problem originating with the concept of dimensions --- the weight-bearing strength of animal bone — suggests fundamental features of dimension may have been overlooked, beginning then). Is there a concept gap?
Relevant answer
Answer
Allow me to mention that the discovery of the new fundamental science of Cryodynamics, sister of Thermodynamics, has confirmed Zwicky 1929. So that the universe is stationary and eternal.
The 90 years long adherence to the "Big Bang" is a historical tragedy, a "Dark Age."
Can anyone forgive me for that statement?
  • asked a question related to Fundamental Physics
Question
5 answers
Solitons are the common element, but we are changing the structures, which are all based on the same photonic crystal. Is there a possibility of the same kind of soliton in all three structures?
Relevant answer
Answer
In general, a soliton is a nonlinear localized wave possessing a particle-like nature that maintains its shape during propagation, even after an elastic collision with another soliton. The possibility of soliton propagation in the anomalous dispersion regime of an optical material was predicted by theoretical analysis of the nonlinear Schrödinger equation (NLSE).
In optics, a soliton can arise due to the balance between the Kerr nonlinearity and the group-velocity dispersion (GVD). Based on confinement in the time or space domain, one can have either temporal or spatial solitons. The Kerr effect induces an intensity-dependent refractive index of the medium, which leads to temporal self-phase modulation (SPM) and spatial self-focusing. A temporal soliton is formed when the SPM effect compensates the dispersion-induced pulse broadening. In the same way, a spatial soliton is formed when the self-focusing effect counteracts the natural diffraction-induced beam broadening.
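A minimal numerical sketch of that balance, propagating the fundamental sech soliton of the normalized NLSE with a split-step Fourier scheme (the normalization, grid and step sizes are arbitrary choices):

import numpy as np

# Normalized NLSE (anomalous dispersion):  i u_xi + 0.5 u_tautau + |u|^2 u = 0
N, T = 1024, 40.0
tau = np.linspace(-T / 2, T / 2, N, endpoint=False)
omega = 2 * np.pi * np.fft.fftfreq(N, d=T / N)
u = 1.0 / np.cosh(tau)                     # fundamental soliton input, u(0, tau) = sech(tau)
dxi, steps = 0.01, 500                     # propagate to xi = 5

for _ in range(steps):
    u = u * np.exp(1j * np.abs(u) ** 2 * dxi)                              # SPM (Kerr) step
    u = np.fft.ifft(np.fft.fft(u) * np.exp(-0.5j * omega ** 2 * dxi))      # GVD step

print("input peak power :", 1.0)
print("output peak power:", np.max(np.abs(u)) ** 2)   # stays close to 1: SPM balances GVD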
  • asked a question related to Fundamental Physics
Question
43 answers
Version:2.0
The question of the nature (or ontological status) of fundamental physics theories, such as spacetime in special and general relativity, and quantum mechanics, have been, each, a permanent puzzle, and a source of debates. This discussion aims to resolve the issue, and submit the solution to comments.
Also, when something is correct, this is a sign that it can be proved in more than one way. In support of this question, we found evidence for the same answer on the ontological status in three diverse ways.
Please see at:
DISCLAIMER: We reserve the right to improve this text. All questions, public or not, are usually to be answered here. This will help make this discussion text more complete, and save that Space for others, please avoid off-topic. References are provided by self-search. This text may modify frequently.
Relevant answer
Answer
But observation is finite-valued, and one cannot observe all aspects of anything. What beam or wavelength or tool defines observation? So what we know depends on the tools we know. What we don't know depends on the tools we don't have. What we don't know about what we don't know depends on what we know about the tools we use to know.
  • asked a question related to Fundamental Physics
Question
2 answers
It is widely seen that large-scale cosmic fluids should be treated as "viscoelastic fluids" in the theoretical formulation of their stability analyses. Can anyone explain this from the viewpoint of fundamental physical insight?
Relevant answer
Answer
Thanks a lot.
  • asked a question related to Fundamental Physics
Question
63 answers
How have we arrived at the conclusion that the space of our Universe is 3D (and so the dimensionality of spacetime is 4D)?
I suppose this is the result of our sense of vision, which is based on both of our eyes. However, the image we perceive is the result of mind manipulation (illusion) of the two “images” that each of our eyes sends to our brain. This mind manipulation gives us the notion of depth, which is translated as the third dimension of space. This is why one-eyed vision (or photography, cinema, TV, ...) is actually 2D vision. In other words, when we see a 3D object and our eyes are (approx.) on a line perpendicular to the plane formed by the object's “height” and “length”, our mind concludes the object's “width”. Photons detectable by each of our eyes were, e.g. t (= 10-20 sec) before, on the surface of a sphere with our eye as center and radius t*c. As the surface of a sphere is 2D (detectable space), and if we add the dimension of "time" (to form the spacetime), we should conclude that the dimensionality of our detectable Universe is 3D (2+1) and NOT 4D (3+1).
PS: (27/8/2018) Though I am aware that this opinion will provoke an instinctive opposition, as it contradicts our “common sense”… I will take the risk of opening the issue.
Relevant answer
Answer
Thank heavens, a bottle with a good cognac has 3D+1 dimensionality…
Cheers
  • asked a question related to Fundamental Physics
Question
6 answers
The final target is to study the fundamental physical processes involved in bubble dynamics and the phenomenon of cavitation. Develop a new bubble dynamics CFD model to study the evolution of a suspension of bubbles over a wide range of vesicularity, and that accounts for hydrodynamical interactions between bubbles while they grow, deform under shear flow conditions, and exchange mass by diffusion coarsening. Which commercial/open source CFD tool and turbulence model would be the most appropriate ones?
Relevant answer
Answer
It would be a highly educational experience if you could try to develop your own solver using MATLAB then write it in a low-level programming environment like Fortran.
But OpenFOAM should be sufficient if you want to get a bit better at programming CFD, and ANSYS/Fluent would be best if you plan on proceeding as a CFD user.
  • asked a question related to Fundamental Physics
Question
6 answers
Mark Srednicki has claimed to demonstrate the entropy ~ area law -- https://arxiv.org/pdf/hep-th/9303048.pdf
Does anyone know of an independent verification or another demonstration of this result?
Is there a proof of this law?
Relevant answer
Answer
An argument which depends on the assumption that every qubit of information, [1,0] or [0,1], can occupy one and only one 'box' on the horizon's area goes as follows. Since the sum of the boxes must equal the area, we have N = A, where N is the number of qubits. We calculate the number of ways in which we can arrange the qubits on the horizon as the sum of all the possible combinations of qubit configurations, W(N) = Σ N!/[(N−k)!k!], with the sum running from k = 0 to k = N. This sum is calculated to be 2^N, which suggests that we could simply put it this way: each qubit has two representations, so for N qubits there are 2^N ways to arrange this collection of qubits. Since, according to the Boltzmann principle, S = log[W], we have S = log[2^N], or S ∝ N = A.
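A quick check of that counting (the sum of the binomial coefficients really is 2^N, so S = log W grows linearly with N):

from math import comb, log2

for N in (4, 16, 64):
    W = sum(comb(N, k) for k in range(N + 1))
    assert W == 2 ** N
    print(f"N = {N:3d}:  W = 2^{N},  S = log2(W) = {log2(W):.0f} bits")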
  • asked a question related to Fundamental Physics
Question
10 answers
It seems that our progress in standard of living over the last 500 or so years is mainly connected with different forms of energy conversion and the discovery of newer materials for that purpose. So how are the fundamental science projects of today (e.g., detection of gravitational waves, neutrino observatories, etc.) going to contribute to that single-point program? Is this a premature question?
Relevant answer
Answer
It depends on what you consider to be fundamental physics. More efficient extraction of solar energy, or the construction of practical nuclear fusion reactors, certainly requires further developments of physics and physics-based technology. But I believe this will be physics where the fundamental properties and equations already exist. At least for the developments occurring in this millennium.
It seems to me that the current development in our ways of living is not solely based on increased use of energy, but equally much on the new means of communication and information processing. Enter any bus or train anywhere in the world, and look at your fellow travellers: It becomes very clear that we are currently living in the smartphone era. This has become possible due to the physics of electromagnetism (as described by Maxwell's equations from 1864) and quantum mechanics (whose fundamental equations and principles were formulated during the last half of the 1920's).
  • asked a question related to Fundamental Physics
Question
33 answers
- In the conclusion (page 14) of this paper, I suggest that “Younger physicists should also be encouraged to play a significant role in looking after and protecting our physics knowledge before they become exposed to the detrimental effects of the commercial influence on physics.
Also in the conclusion I offer an idea on how this could be initiated. However I imagine there are existing schemes that encourage university students and physicists to get involved in theoretical physics & the fundamentals of physics. Do you know of such schemes and/or have your own suggestions in this connection?
Theme for Developing new perspectives of physics: Let’s return to the traditional domain of original ideas and rigorous arguments of theoretical physics - “Physics with an ideas- and imagination-based ‘art’ where we’re dreaming, imagining and creating …” - (Physics: No longer a vocation? by Anita Mehta, vol 61 no. 6 Physics Today June 2008)
Relevant answer
Answer
Yes, provided that they follow the standard paradigm of modern physics, they can join a Big Project and live in prosperity.
Otherwise, at the least they will be classified in the set of 'crackpots'.
  • asked a question related to Fundamental Physics
Question
3 answers
Currently I am beginning to work on photodiodes using wide-bandgap semiconductors like NiO, ZnO, etc., so I would like to study the fundamental physics of the p-n junction that is helpful for my topic. Can anyone please suggest some books or documents?
Relevant answer
Answer
Hello Pradeep, there are many excellent books that explain the physics of the p-n junction for photodiodes; my favorite one is Sze, which is called the bible of semiconductors:
S. M. Sze, "Physics of Semiconductor Devices", John Wiley and Sons.
One of the excellent books to start with is Neamen's Semiconductor Physics and Devices. It's easy to read, and it covers everything from basic solid-state physics to solar cells and photo-diodes (e.g., drift-diffusion) to all kinds of devices (e.g., PN junction, MOSFET, BJT, solar cells), https://www.amazon.com/Semiconductor-Physics-Devices-Donald-Neamen/dp/0072321075
Also there are many other good books, for example:
1- Semiconductor Optoelectronic Devices (2nd Edition) 2nd Edition
by Pallab Bhattacharya .
2- Semiconductors for Optoelectronics Basics and Applications, Authors: Balkan, Naci, Erol, Ayşe
3- Semiconductor Optoelectronic Devices by Hadis Morkoç,
I hope this will help you.
Best Regards
  • asked a question related to Fundamental Physics
Question
24 answers
What is the evidence that the speed of light is constant all over the universe? Does it have the same value even in regions of the universe occupied by dark energy?
Relevant answer
Answer
Is speed of light constant all over the universe and equal to what we measured?
The principle that physical laws as we know them on Earth are the same throughout the universe is an assumption. Physicists, astronomers and cosmologists make that assumption because they have no other option: it is in principle untestable.
In the International System of Units (SI Units) “one meter” is defined as “the distance light in a vacuum travels in 1/299792458 seconds”. “The speed of light” is then 299792458 meters per second; it is defined to be a constant.  It could be different (though still expressed by that same number!...) in different places or at different times only if “one second” here and now were different from “one second” elsewhere and elsewhen. But how could we compare them? Obviously, we cannot! Suppose “one second” were different when the universe was new (or, what amounts to the same thing, at vast distances from us). What would that even mean? “One second” is defined, in SI units, in terms of a particular spectrum frequency of a Cesium atom. In the early stages of the evolution of the Universe THERE WERE NO ATOMS! So what could it possibly mean, to compare the speed of light THEN to the speed of light HERE AND NOW??
"there's speculation, and then there's more speculation, and then there's cosmology" – Michio Kaku
  • asked a question related to Fundamental Physics
Question
4 answers
1) How can one describe short-range and long-range ferromagnetic ordering by analysing M(T,H) data?
2) Is superexchange always short-range?
3) How can one identify the type of exchange interaction in the magnetism shown by a system?
4) Does superexchange have some relationship with magnetic parameters (such as the Curie temperature, doping concentration, and carrier concentration)?
Relevant answer
Answer
Hi!
The M(T,H) traces can also be impacted by anisotropy parameters, so it is not obvious to decide between short- and long-range ordering from such traces alone. However, as said by M. El Hafidi, an additional anomaly in the susceptibility χ(T), and a deviation of 1/χ from the Curie-Weiss law above Tc with a net curvature, is the best way to anticipate short-range ordering, since the anisotropic contributions normally vanish in the Tc temperature range. Also, the crystal structure (cubic versus lower symmetry) could be a cause of non-isotropic exchange interactions.
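A minimal sketch of that kind of check on synthetic data (the Curie constant, the Weiss temperature and the short-range-order correction below are all made-up numbers):

import numpy as np

C, theta = 2.0, 100.0                     # toy Curie constant and Weiss temperature (K)
T = np.linspace(120.0, 400.0, 200)        # temperatures above Tc ~ theta
inv_chi = (T - theta) / C - 8.0 * np.exp(-(T - theta) / 40.0)   # Curie-Weiss + toy short-range bend

mask = T > 300.0                          # fit only the high-temperature tail
slope, intercept = np.polyfit(T[mask], inv_chi[mask], 1)
residual = inv_chi - (slope * T + intercept)

print(f"effective Curie constant    C = {1.0 / slope:.2f}")
print(f"effective Weiss temperature   = {-intercept / slope:.1f} K")
print(f"max low-T deviation from line = {np.abs(residual[~mask]).max():.2f}")
# A clearly non-zero low-T deviation (net curvature of 1/chi) is the signature discussed above;
# ideal Curie-Weiss data would give essentially zero here.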
Kind regards
Daniel
  • asked a question related to Fundamental Physics
Question
6 answers
Erik Verlinde said this emergent gravity is constructed using the insights of string theory, black hole physics and quantum information theory (all theories that are themselves struggling to breathe). Our appreciation to Verlinde for his daring step of constructing emergent gravity on dead theories; we loudly take inspiration from him!
Relevant answer
Answer
@ Adrian Sfarti:
My dear Adrian Sfarti, do you have any objection if I comment? What nonsense; he constructed his theory on string theory. Go and read it once again, empty vessel...
  • asked a question related to Fundamental Physics
Question
34 answers
From experimental evidence, it is well known that a desynchronization of clocks appears between different altitudes on Earth (simultaneity is relative). However, the simultaneity (absolute for the sky) of the Sun or the Moon (over millions of years, for example) is a fact.
Shouldn't the concept of relativity be questioned?
Relevant answer
Answer
Is that meant to illustrate the coherence level of your way of thinking?
  • asked a question related to Fundamental Physics
Question
55 answers
Professor Michael Longo (University of Michigan in Ann Arbor) and Professor Lior Shamir (Lawrence Technological University) have shown, from experimental data, that there is an asymmetry between right- and left-twisted spiral galaxies. Its value is about 7%. In the article:
ROTATING SPACE OF THE UNIVERSE, AS A SOURCE OF DARK ENERGY AND DARK MATTER
it is shown that the source of dark matter can be the kinetic energy of rotation of the space of the observed Universe. At the same time, the contribution of the Coriolis force is 6.8%, or about 7%. The high degree of proximity between the value of the asymmetry of right- and left-twisted spiral galaxies and the value of the contribution of the Coriolis force to the kinetic energy of rotation of the space of the observable Universe is strong indirect evidence (from experimental data!) that the space of the observed Universe rotates.
Relevant answer
Answer
@Valery Timkov
There is stronger evidence that all these considerations need revision. All your thinking is based upon a 4D spacetime.
The observation of hyperspherical acoustic waves (waves that have a footprint along the distance dimension), challenging both GR and the Copernican Principle, was found in the SDSS BOSS dataset, indicating clearly that we live in a 5D spacetime where all 4 spatial dimensions are non-compact.
The SN1a survey distances, corrected by an epoch-dependent G, indicate that the hyperspherical surface where we exist is traveling at the speed of light.
You can easily download the SDSS dataset and see that the observations are correct.  I made a video to help setting up the Anaconda environment.
You can also watch the video below containing an alternative model for Cosmogenesis based on the evidence found in both SDSS and SN1a surveys. It clearly shows the effect of many Bangs (in a crescendo) on the initial hyperspherical Universe
It is all there in Black and White (and sometimes in color..:)
#####################################################
Check HU (the Hypergeometrical Universe Theory) view of Cosmogenesis.
The Universe maps associated with this video are derived directly from the SDSS (Sloan Digital Sky Survey) datasets. That is, the existence of acoustic waves along the DISTANCE dimension was there for 10 years and SDSS couldn’t see it because of ideology. They believed, and had blind faith, that the Universe is a 4D spacetime. A 4D spacetime requires any position to be equivalent to another position.
HU proposes a 5D Spacetime and expects hyperspherical acoustic waves at the beginning of times. Those hyperspherical acoustic waves would take place along the DISTANCE dimension. That is what astronomical observations support. They don’t support General Relativity, Inflation Theory, Dark Energy etc.
Below is the github repository and video to help setting up the python environment:
  • asked a question related to Fundamental Physics
Question
22 answers
An article from Nature, "Undecidability of the spectral gap" (arXiv:1502.04573 [quant-ph]), shows that finding the spectral gap from a complete quantum-level description of a material is undecidable (in the Turing sense). No matter how completely we can analytically describe a material at the microscopic level, we can't predict its macroscopic behavior. The problem has been shown to be uncomputable, as no algorithm can determine the spectral gap. Even if there were a way to make a prediction, we couldn't determine what that prediction is, since for a given program there is no method to determine whether it halts.
Does this result eliminate once and for all the possibility of a theory of everything based on fundamental physics? Is Quantum physics undecidable? Is this an an epistemic result proving that undecidability places a limit on our knowledge of the world?
Relevant answer
Answer
No, but one may change the research direction for the theory of everything.
  • asked a question related to Fundamental Physics
Question
34 answers
I have a question regarding an unusual thought-experiment system.
Some years ago, on a Russian forum, we discussed a thought device that, as its author claimed, can provide one-directional motion due to internal forces alone. The puzzle was resolved by Kirk McDonald from Princeton University; I attach Kirk's solution. I should note that the author of the paradox is Georgy Ivanov, not me.
Anyway, Kirk found that there is no net directional force. But one puzzle of this device remains: the center of mass of the device should move (in a closed orbit) due to internal forces alone. I marked this result of McDonald's in the file.
In this connection, two questions arise:
1. Why does the center of mass move even though the total momentum is conserved?
2. If the center-of-mass can move and this motion is created by the internal forces, is it possible to change the design of the device to provide one-directional motion?
Formally, there are no obstacles to realizing it; the total momentum is conserved... Could someone answer these questions?
This thought device does not work on the action-reaction principle, and if a similar device could be built as hardware, it could be a good prototype for an interstellar thruster.
Relevant answer
Answer
    Dear Theophanes,
    Classical electrodynamics has a limited domain of application; it cannot be applied to concepts such as a point particle or the mass of such a particle. That framework belongs to other fields, such as QED, where the renormalization of the charge and mass is handled.
  • asked a question related to Fundamental Physics
Question
183 answers
How did Einstein's spacetime description of gravity's pull on the planet Mercury differ in value from Newton's? Was it simply the spacetime fabric adjusting this value?
Thanks:)
Relevant answer
Answer
Your interpretation is utterly incoherent and unsupported by anything in the text. Einstein merely says a point moving in k must have a value of x' which is constant. In other words, the value of x' for a point moving with k must be constant. Mere obvious kinematics, following from the definition of velocity. Nothing is ever said about x' being "attached" to k. Nor is it true that it is measured using moving rods. Rather, x is measured using rods at rest with K, so is vt, and therefore the difference between the two is also a distance measured by rods at rest with respect to K. Your screaming "Nonsense" merely shows lack of understanding. The idea that such a distance "cannot be measured" is again a figment of your imagination: there is no difficulty whatever in measuring the distance between two moving points.
This discussion has, of course, no meaning: your only point is to denigrate relativity, for purposes best known to you. I have stated the truth of the matter, by following the actual original text (which you were afraid to quote) as closely as possible. For any interested readers who might have been confused by your nonsense, this should be enough. You I do not think worth an additional second of my time.
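For readers following this exchange, the kinematic relation under discussion (the auxiliary coordinate from §3 of Einstein's 1905 paper) can be written out explicitly; the block below only restates that standard definition and takes no side in the dispute.
```latex
% Auxiliary coordinate used by Einstein (1905, SS3), stated in K's measures:
%   x' = x - v t .
% A point at rest in the moving frame k satisfies x = v t + x_0, hence
%   x' = x_0 = const,  i.e.  dx'/dt = 0 .
% Both x and v t are distances measured with rods at rest in K, so their
% difference x' is likewise a K-measured distance, not a k-measured one.
\[
  x' = x - vt, \qquad \frac{dx'}{dt} = \frac{dx}{dt} - v = 0
  \quad\text{for a point comoving with } k .
\]
```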
  • asked a question related to Fundamental Physics
Question
63 answers
Schrödinger's self-adjoint operator H is crucial for the current quantum model of the hydrogen atom. It essentially specifies the stationary states and energies. Then there is the Schrödinger unitary evolution equation that tells how states change with time. In this evolution equation the same operator H appears. Thus, H provides the "motionless" states, H gives the energies of these motionless states, and H enters the unitary law of motion.
But this unitary evolution fails to explain or predict the physical transitions that occur between stationary states. Therefore, to fill the gap, the probabilistic interpretation of states was introduced. We then have two very different evolution laws. One is the deterministic unitary equation, and the other consists of random jumps between stationary states. The jumps openly violate the unitary evolution, and the unitary evolution does not allow the jumps. But both are simultaneously accepted by Quantism, creating a most uncomfortable state of affairs.
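The tension described above can be seen in a few lines of numerics. The sketch below (plain NumPy/SciPy, with a toy two-level Hamiltonian of my own choosing rather than the hydrogen H) shows that under the unitary evolution exp(-iHt/ħ) a stationary state only acquires a phase, so its occupation probabilities never change and no "jump" between levels can occur.
```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0                          # work in units where hbar = 1
E1, E2 = 1.0, 2.5                   # toy energy levels (illustrative values)
H = np.diag([E1, E2])               # toy two-level Hamiltonian, not hydrogen's

psi0 = np.array([1.0, 0.0], dtype=complex)   # start in the lower stationary state

for t in (0.0, 1.0, 5.0, 20.0):
    U = expm(-1j * H * t / hbar)             # unitary propagator exp(-iHt/hbar)
    psi_t = U @ psi0
    populations = np.abs(psi_t) ** 2
    print(f"t = {t:5.1f}  populations = {populations}")   # always [1. 0.]

# The populations stay exactly [1, 0]: unitary evolution generated by H alone
# never transfers probability between stationary states, which is the gap the
# question says the probabilistic "jump" postulate was introduced to fill.
```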
And what if the quantum evolution equation is plainly wrong? Perhaps there are alternative manners to use H.
Imagine a model, or theory, where the stationary states and energies remain the very same specified by H, but with a different (from the unitary) continuous evolution, and where an initial stationary state evolves in a deterministic manner into a final stationary state, with energy being continuously absorbed and radiated between the stationary energy levels. In this natural theory there is no use, nor need, for a probabilistic interpretation. The natural model for the hydrogen, comprising a space of states, energy observable and evolution equation is explained in
My question is: with this natural theory of atoms already elaborated, what are the chances of its acceptance by mainstream physics?
Professional scientists, in particular physicists and chemists, are well versed in the history of science, and modern communication hastens the diffusion of knowledge. Nevertheless, important scientific changes seem to require a lengthy process, including the disappearance of most leaders, as Max Planck noted: "They are not convinced, they die."
Scientists seem particularly conservative and incapable of admitting that their viewpoints are mistaken, as was the case time ago with flat Earth, Geocentrism, phlogiston, and other scientific misconceptions.
Relevant answer
Answer
Hello Enders
You state that "According to Schrödinger 1926, there are no quantum jumps." Please allow me the following comments.
A set of articles by various authors is collected in a book edited by Wolfgang Pauli:
Pauli, W. (ed.) - Niels Bohr and the Development of Physics. Pergamon Press, London. 1955.
Among the articles there is one by Werner Heisenberg
The Development of the Interpretation of the Quantum Theory
The following lines can be found in the article (page 14 of the book)
At the invitation of Bohr, Schrodinger visited Copenhagen in September, 1926, to lecture on wave mechanics. Long discussions, lasting several days, then took place concerning the foundations of quantum theory, in which Schrodinger was able to give a convincing picture of the new simple ideas of wave mechanics, while Bohr explained to him that not even Planck's Law could be understood without the quantum jumps. Schrodinger finally exclaimed in despair:
"If we are going to stick to this damned quantum-jumping [verdammte Quantenspringerei], then I regret that I ever had anything to do with quantum theory,"
to which Bohr replied:
"But the rest of us are thankful that you did, because you have contributed so much to the clarification of the quantum theory."
Maybe the above paragraph is the ultimate source of your statement.
The displeasure shown by Schrodinger has a different interpretation. It may mean that he understood quantum jumps, that he had a clear picture of the reach of the Schrodinger time-dependent equation (STDE), and in particular that the STDE contradicted quantum jumps. Therefore he knew that something very fundamental was missing from his elegant STDE. Nowhere did he say anything equivalent to "quantum jumps do not exist". He was annoyed at having to accept the existence and crucial phenomenological role of quantum jumps in the description of the basic atomic phenomena of absorption and radiation.
If you have a different historical source that justifies your interpretation, please share the reference with us, as it would be extremely interesting.
With most cordial regards,
Daniel Crespin
  • asked a question related to Fundamental Physics
Question
1 answer
MY EMAIL TO NSF:
My name is Andrei-Lucian Drăgoi and I am a Romanian pediatric specialist, also undertaking independent research in digital physics and informational biology. Regarding your project called "Ideas Lab: Measuring 'Big G' Challenge" (which I found at this link: http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=505229&org=PHY&from=home), I want to propose a USA-Romania collaboration in this direction, based on my hypothesis that each chemical isotope may have its own "big G" imprint.
The idea is simple. Analogously to the photon, the hypothetical graviton may actually have a quantum angular momentum measured by a gravitational Planck-like quantum, which I denote h_eg, and a quantum G scalar G_q = f(h_eg). Although the Planck constant (h) is constant, h_eg may not be constant and may have a slight variability that depends on many factors, including the intranuclear energetic pressures measured by the average binding energy per nucleon (E_BN) in any (quasi-)stable nucleus. I have proposed a simple first-degree (linear) function that generates a series hs_eg(E_BN) as a scalar function of E_BN, which in turn implies a series of quantum G scalars Gs_q(E_BN) = f[hs_eg(E_BN)], also a function of E_BN since it depends on hs_eg(E_BN). In conclusion: every isotope may have its own G "imprint", and that is one possible explanation (the suspected so-called "systematic error") for the variability of the experimental G values from one team to another; I have called this hypothesis the multiple G hypothesis (mGH). I also propose a series of systematic experiments to verify the mGH. As I do not work as a physicist (I am a pediatrics specialist working in Bucharest, Romania) and only do independent research in theoretical physics, I do not have access to experimental resources, so I propose a collaboration between the USA and Romania, with experiments conducted either in the USA or in Romania (at the "Horia Hulubei National Institute for R&D in Physics and Nuclear Engineering (IFIN-HH)" in Magurele, Romania: http://www.nipne.ro).
I have attached an article (in pdf format) that contains my hypothesis and its arguments (presented in the first part of the paper); this work can also be downloaded from http://dragoii.com/BIDUM3.0_beta_version.pdf
My main research pages are:
Please send me at least a minimal reply so that I know my message was received.
I am open to any additional comments/suggestions/advice you may have on my idea about big G.
===============================
THE REPLY FROM NSF:
Dear Dr. Dragoi,
   Thank you for your interest in our programs. Unfortunately, NSF does not fund research groups based outside the US. Should you succeed in your goal of creating a Romanian-US collaboration, please have your American collaborators contact NSF directly.
Best regards,
Pedro Marronetti
====================================
FINAL CONCLUSION: If you are interested in this collaboration, please send feedback to dr.dragoi@yahoo.com so that we may apply to the NSF challenge by 26 October 2016 (the deadline).
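As a purely illustrative aid (not taken from the cited paper), the linear ansatz described in the email above could be written as hs_eg(E_BN) = h·(1 + k·E_BN), with a corresponding scaled G_q. In the sketch below the coefficient k and the overall scaling are hypothetical placeholders invented only to show the shape of such a model.
```python
# Toy illustration of the "multiple G hypothesis" linear ansatz described in
# the email above. ALL coefficients here are hypothetical placeholders chosen
# only to show the shape of such a model, not values from the cited paper.

h = 6.62607015e-34        # Planck constant, J*s (CODATA)
G = 6.67430e-11           # CODATA recommended value of big G, m^3 kg^-1 s^-2

K_HYPOTHETICAL = 1e-4     # made-up sensitivity of h_eg to binding energy (per MeV)

def h_eg(E_BN_MeV: float) -> float:
    """Hypothetical first-degree (linear) ansatz for the graviton quantum."""
    return h * (1.0 + K_HYPOTHETICAL * E_BN_MeV)

def G_q(E_BN_MeV: float) -> float:
    """Hypothetical isotope-dependent G, taken here to scale with h_eg."""
    return G * h_eg(E_BN_MeV) / h

# Approximate average binding energies per nucleon (MeV) for a few isotopes:
for isotope, E_BN in [("H-2", 1.11), ("C-12", 7.68), ("Fe-56", 8.79), ("U-238", 7.57)]:
    print(f"{isotope:6s}  E_BN = {E_BN:5.2f} MeV   G_q = {G_q(E_BN):.6e}")
```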
  • asked a question related to Fundamental Physics
Question
2 answers
I'm going to put an insulator, playdough, on some copper metal. I was wondering how this would affect charge collection from a fundamental physics standpoint. These free electrons (the source) would be coming from, or already be on, the surface. I was thinking they would go around the insulator but remain on the surface. Am I correct in this assumption?
Relevant answer
Answer
What you need is to calculate the penetration depth. 
And you might need to solve the Fresnel equations for your case. 
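As a rough starting point for the calculation suggested above, the sketch below evaluates the skin (penetration) depth of a field in copper and a normal-incidence Fresnel reflectance; the frequency and the insulator's refractive index are illustrative assumptions of mine, not values from the question.
```python
import numpy as np

# Standard formulas only; the numerical choices below are illustrative assumptions.
mu0 = 4e-7 * np.pi            # vacuum permeability, H/m
sigma_cu = 5.8e7              # conductivity of copper, S/m (approximate)
f = 1e6                       # example frequency, 1 MHz (assumption)
omega = 2 * np.pi * f

# Skin (penetration) depth of the field in a good conductor:
delta = np.sqrt(2.0 / (mu0 * sigma_cu * omega))
print(f"skin depth in copper at {f:.0e} Hz: {delta * 1e6:.1f} micrometres")

# Normal-incidence Fresnel reflectance between two media with real refractive
# indices n1 and n2 (e.g. air -> an insulating layer; n ~ 1.5 is a guess):
def fresnel_reflectance(n1: float, n2: float) -> float:
    r = (n1 - n2) / (n1 + n2)
    return abs(r) ** 2

print("R(air -> insulator, n ~ 1.5):", fresnel_reflectance(1.0, 1.5))
```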
  • asked a question related to Fundamental Physics
Question
227 answers
In Chapter V, of The Nature of the Physical World, Arthur Eddington, wrote as follows:
Linkage of Entropy with Becoming. When you say to yourself, “Every day I grow better and better,” science churlishly replies—
“I see no signs of it. I see you extended as a four-dimensional worm in space-time; and, although goodness is not strictly within my province, I will grant that one end of you is better than the other. But whether you grow better or worse depends on which way up I hold you. There is in your consciousness an idea of growth or ‘becoming’ which, if it is not illusory, implies that you have a label ‘This side up.’ I have searched for such a label all through the physical world and can find no trace of it, so I strongly suspect that the label is non-existent in the world of reality.”
That is the reply of science comprised in primary law. Taking account of secondary law, the reply is modified a little, though it is still none too gracious—
“I have looked again and, in the course of studying a property called entropy, I find that the physical world is marked with an arrow which may possibly be intended to indicate which way up it should be regarded. With that orientation I find that you really do grow better. Or, to speak precisely, your good end is in the part of the world with most entropy and your bad end in the part with least. Why this arrangement should be considered more creditable than that of your neighbor who has his good and bad ends the other way round, I cannot imagine.”
See:
The Cambridge philosopher Huw Price provides a very engaging contemporary discussion of this topic in the following short video of his 2011 lecture (27 min.):
This is well worth viewing. Price has claimed that the ordinary or common-sense conception of time is "subjective" partly because it includes an emphatic distinction between past and future, the idea of "becoming" in time, or a notion of time "flowing." The argument arises from the temporal symmetry of the laws of fundamental physics, in some contrast and tension with the second law of thermodynamics. So we want to know whether "becoming" in particular is merely "subjective," and whether this follows on the basis of fundamental physics.
Relevant answer
Answer
According to Kant it is just our mind that perceives time as directional. The world as it is in itself is like our mind unrolling a carpet, not a carpet being woven in front of us (not his analogy).
I agree with Kant on most things, but I suspect he got this one wrong.
  • asked a question related to Fundamental Physics
Question
6 answers
I returned to Einstein's 1907 paper and found that the final conclusion offered at the end apparently omitted one last step. Namely, that the lowered value of the speed of light c of a horizontal light ray downstairs, when watched from above, is absolutely correct; but only the conclusion drawn from this observation – that the speed of light is indeed reduced downstairs – was premature.
This is because the light ray hugging the floor downstairs is hugging a constantly receding floor despite the fact that the distance is constant.
(In the same vein, the increased speed of light of a light ray hugging the ceiling of the constantly accelerating rocketship – not mentioned by Einstein – holds true for a ceiling that is constantly approaching the lower floor despite the fact that the distance is constant.) The correctly predicted "gravitational redshift" – and the opposite blueshift in the other direction – qualify as a proof that this thinking is sound.
N.B.: The proposal is perhaps not as stupid as it sounds, because the only theory employed here is the special theory of relativity (which by definition presupposes global constancy of c). This fact was of course constantly on Einstein's mind and can explain why he fell silent on the topic of gravitation for 3 1/2 years.
When he returned to it in mid-1911, writing down explicitly the originally unfinished c-modifying equation of 1907, he may have been hoping in the back of his mind that someone could spot the error that he still felt might be involved. It is not an error, only the omission of a final step.
Now my dear readers have the same chance of offering their help regarding my above "constant-c solution" to this conundrum of Einstein’s, which perhaps is the most important one of history.
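For reference, the standard 1907/1911-era relations the question revolves around can be stated compactly; the block below only records those textbook formulas (uniform acceleration g, height difference h, weak-field potential Φ) and takes no side in the question.
```latex
% Textbook relations behind the discussion (uniformly accelerating frame,
% or weak static field with potential \Phi = g z):
%
% Frequency shift between floor and ceiling separated by height h:
\[
  \frac{\Delta \nu}{\nu} \simeq \frac{g h}{c^{2}} = \frac{\Delta \Phi}{c^{2}} ,
\]
% Einstein's 1907/1911 coordinate speed of light as a function of potential:
\[
  c(\Phi) \simeq c_{0}\left( 1 + \frac{\Phi}{c^{2}} \right) ,
\]
% which is the "reduced c downstairs" statement the question proposes to
% reinterpret while keeping the locally measured c constant.
```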
Relevant answer
Answer
Einstein's redshift (gravitational redshift) doesn't primarily involve light, but an observer's clock. Since time runs slower or faster depending on the strength of the gravitational field (on the height above the surface of a planet or a star, for instance), you will count more or fewer wave crests during one period of your clock, i.e. a different frequency (and consequently a red- or blueshift).
Einstein's redshift doesn't deal with the speed of light at all, and the speed of light doesn't vary. In relativity it is constant because that is what we have found experimentally.
So what's the problem with Einstein and the different velocities of light? I don't understand...
The equivalence principle in its strong version (Einstein's equivalence principle; the weak one is Galileo's) states that no local experiment can distinguish a frame at rest in a uniform gravitational field from a uniformly accelerated frame outside of any gravitational field.
For instance, in the second case you would also notice the deflection of a light beam, just as in a gravitational field.
Does this help?
  • asked a question related to Fundamental Physics
Question
3 answers
For example, carbon-14 (atomic number 6, mass number 14) decays to nitrogen-14 (atomic number 7, mass number 14) plus one beta particle (an electron). In this example, how does the nitrogen atom get another electron to neutralize its charge (number of protons = number of electrons)?
regards
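For completeness (assuming the questioner means carbon-14 beta-minus decay), the full decay equation, including the electron antineutrino that the shorthand in the question omits, is:
```latex
\[
  {}^{14}_{\;6}\mathrm{C} \;\longrightarrow\; {}^{14}_{\;7}\mathrm{N}^{+} + e^{-} + \bar{\nu}_{e}
\]
% Immediately after the decay the daughter is a nitrogen ion (7 protons but
% only the 6 electrons inherited from the carbon atom); the answer below
% addresses how it is subsequently neutralized.
```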
Relevant answer
Answer
The electron (beta particle) will cause many ionizations as it slows down. These beta-produced ions will neutralize, as will the parent/progeny atom.
One could say the beta particle returns to produce the neutralization. Electrons are indistinguishable, so who can say that it did not?
  • asked a question related to Fundamental Physics
Question
56 answers
Fundamental Physicists.
Relevant answer
Answer
In the Kaluza theory (later Kaluza-Klein), it is related to the velocity of motion along a 4th spatial direction in the universe: a circular direction with a very small circumference. This idea goes back to the Finnish theoretical physicist Gunnar Nordström in 1914 and has lingered on ever since, however with no experimental confirmation.
  • asked a question related to Fundamental Physics
Question
4 answers
My thesis subject is "study of ephemeral organizational phenomena inside meta-organizations".
I'm currently looking for articles that connect fundamental physics and management science.
I am also looking for articles that discuss timespace as a whole, instead of time or space separately, mostly in management science.
If you have any suggestions about my subject, feel free to send me your advice!
Your help will be highly appreciated!
Relevant answer
Answer
Hello,
Very interesting topic. It is not easy to find material on it.
Maybe this paper is helpful, even though it is from instructional science:
Good luck
  • asked a question related to Fundamental Physics
Question
56 answers
Are the fundamental physical constants rational numbers? I think it would be true to say that we cannot make measurements whose outcomes are non-rational.
Relevant answer
Answer
The Planck constant is a dimensionful quantity; hence one has to specify which units it should be measured in before the question makes sense. The most natural units to use are the Planck units. Expressed in these units the speed of light c, Newton's constant of gravity G_N, and the reduced Planck constant ħ are all unity. Hence, in these units the Planck constant equals 2π, which is not rational but transcendental.
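The arithmetic behind that last sentence is just the definitional relation between h and ħ, written out here as a one-line check (nothing beyond the definitions is assumed):
```latex
% Definition:  h = 2\pi\,\hbar.  In Planck units \hbar = 1, so
\[
  h = 2\pi\,\hbar = 2\pi \approx 6.2831853\ldots ,
\]
% a transcendental (hence irrational) number when expressed in those units.
```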
  • asked a question related to Fundamental Physics
Question
11 answers
Over the years, many physicists have wondered whether the fundamental constants of nature might have been different when the universe was younger. If so, the evidence ought to be out there in the cosmos where we can see distant things exactly as they were in the past.
One thing that ought to be obvious is whether a number known as the fine structure constant was different. The fine structure constant determines how strongly atoms hold onto their electrons and is an important factor in the frequencies at which atoms absorb light.
If the fine structure constant were different earlier in the universe, we ought to be able to see the evidence in the way distant gas clouds absorb light on its way here from even more distant objects such as quasars.
That debate pales in comparison to new claims being made about the fine structure constant. In 2010, John Webb at the University of New South Wales, one of the leading proponents of the varying-constant idea, and colleagues said they had new evidence from the Very Large Telescope in Chile that the fine structure constant was different when the universe was younger.
While data from the Keck telescope indicate the fine structure constant was once smaller, the data from the Very Large Telescope indicates the opposite, that the fine structure constant was once larger. That’s significant because Keck looks out into the northern hemisphere, while the VLT looks south.
This means that in one direction the fine structure constant was once smaller and in exactly the opposite direction it was once bigger. And here we are in the middle, where the constant is what it is (about 1/137.03599…).
So, do you think that fine structure constant varies with direction in space?
Refs:
arxiv.org/abs/1008.3907: Evidence For Spatial Variation Of The Fine Structure Constant
arxiv.org/abs/1008.3957: Manifestations Of A Spatial Variation Of Fundamental Constants On Atomic Clocks, Oklo.
Included here you can also find a 2004 ApJ paper by John Bahcall on the varying fine structure constant. (URL: http://www.sns.ias.edu/~jnb/Papers/Preprints/Finestructure/alpha.pdf)
Relevant answer
Answer
Since the value of the fine structure constant is a combination of 4 other constants, α = e²/(2ε₀hc), which can mutually define each other, a different measured value of α would also mean that at least one of the other related constants has a different value.
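As a quick numerical check of that combination (CODATA values typed in by hand, so treat the last digits as approximate):
```python
# Compute the fine structure constant from alpha = e^2 / (2 * eps0 * h * c),
# using CODATA values in SI units; purely a numerical check of the formula above.
e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
h    = 6.62607015e-34      # Planck constant, J*s
c    = 299792458.0         # speed of light, m/s

alpha = e**2 / (2.0 * eps0 * h * c)
print(f"alpha   = {alpha:.9f}")        # ~0.007297353
print(f"1/alpha = {1.0 / alpha:.5f}")  # ~137.03600
```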
  • asked a question related to Fundamental Physics
Question
76 answers
Also known as the reversibility paradox, this is an objection to the effect that it should not be possible to derive an irreversible process from time-symmetric dynamics, or that there is an apparent conflict between the temporally symmetric character of fundamental physics and the temporal asymmetry of the second law.
It has sometimes been held in response to the problem that the second law is somehow "subjective" (L. Maccone) or that entropy has an "anthropomorphic" character. I quote from an older paper by E.T. Jaynes,
"After the above insistence that any demonstration of the second law must involve the entropy as measured experimentally, it may come as a shock to realize that, nevertheless, thermodynamics knows no such notion as the "entropy of a physical system." Thermodynamics does have the notion of the entropy of a thermodynamic system; but a given physical system corresponds to many thermodynamic systems" (p. 397). 
The idea here is that there is no way to take account of every possible degree of freedom of a physical system within thermodynamics, and that measures of entropy depend on the relevancy of particular degrees of freedom in particular studies or projects. 
Does Loschmidt's paradox tell us something of importance about the second law? What is the crucial difference between a "physical system" and a "thermodynamic system?" Does this distinction cast light on the relationship between thermodynamics and measurements of quantum systems?  
Relevant answer
Answer
Good question! Jaynes jumped from the necessity of a coarse-grained description to claims of "subjectivity". Of course subjectivity is important in science, but for other reasons. The Second Law requires only a coarse-grained description to satisfy the micro-macro distinction. Loschmidt's paradox refers to the micro dynamics; the Second Law refers to the macro dynamics. The paradox has nothing to do with the Law itself, but with its use as an arrow of time. The paradox would imply that there is no arrow of time at the micro level. This is possibly true, although Prigogine tried to derive the arrow of time from a (proposed) asymmetry of quantum operators. Therefore, there would be no "heat death" at the micro level. Time and entropy increase would have to be defined at the interface of micro and macro. There may be implications for epistemology, but they are not automatic!
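The micro/macro distinction invoked above can be illustrated numerically. The sketch below evolves a cloud of points under a reversible, area-preserving toy map (the baker's map, my own choice for illustration, not anything from the question) and shows that the coarse-grained Shannon entropy over a grid of cells rises toward its maximum even though the microscopic dynamics is exactly reversible.
```python
import numpy as np

rng = np.random.default_rng(0)

# Microstate: many points started in one small cell of the unit square.
x = rng.uniform(0.0, 0.1, 20000)
y = rng.uniform(0.0, 0.1, 20000)

def bakers_map(x, y):
    """Reversible, area-preserving toy dynamics on the unit square."""
    x2, y2 = 2.0 * x, y / 2.0
    upper = x2 >= 1.0
    return np.where(upper, x2 - 1.0, x2), np.where(upper, y2 + 0.5, y2)

def coarse_entropy(x, y, bins=8):
    """Shannon entropy of the coarse-grained occupation of bins x bins cells."""
    hist, _, _ = np.histogram2d(x, y, bins=bins, range=[[0, 1], [0, 1]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

for step in range(13):
    if step % 3 == 0:
        print(f"step {step:2d}  coarse-grained S = {coarse_entropy(x, y):.3f}"
              f"  (max = {np.log(64):.3f})")
    x, y = bakers_map(x, y)
```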
  • asked a question related to Fundamental Physics
Question
105 answers
Regarding our current understanding of quantum mechanics, especially the interpretation of the theory of measurements in terms of parallel universes.
Theoretical physics, quantum mechanics, Fundamental physics 
Relevant answer
Answer
It is difficult to understand QM because QM is an axiomatic conception: QM contains an axiomatic object, the wave function. Nobody knows what the wave function is. We had the same situation with thermodynamics, where there was an axiomatic object: the thermogen.
In the theory of fluids, the wave function is a method of describing an ideal fluid, and one may explain quantum mechanics as a kind of gas dynamics. Indeed, molecules of an ordinary gas move stochastically. This stochastic motion is a result of interactions between molecules (collisions), and clearly the kind of stochasticity depends on the form of the interaction between molecules. One can introduce an interaction between the gas molecules such that the gas-dynamic equations, written in terms of the wave function, coincide with the Klein-Gordon equation. For details see "Quantum mechanics as dynamics of continuous medium", http://gasdyn-ipm.ipmnet.ru/~rylov/qmdcmr1e.pdf
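A well-known (non-relativistic) illustration of this fluid reading, not taken from the linked paper, is the Madelung substitution, which turns the Schrödinger equation into a continuity equation plus an Euler-like equation with an extra "quantum potential":
```latex
% Madelung substitution (standard textbook material, shown here only to
% illustrate the hydrodynamic reading of the wave function):
\[
  \psi = \sqrt{\rho}\, e^{iS/\hbar}
  \;\;\Longrightarrow\;\;
  \partial_t \rho + \nabla\!\cdot\!\left(\rho\,\frac{\nabla S}{m}\right) = 0 ,
\]
\[
  \partial_t S + \frac{(\nabla S)^2}{2m} + V
  - \frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}} = 0 ,
\]
% i.e. a continuity equation and a Hamilton-Jacobi equation with a quantum
% potential term; the cited paper pursues a related (Klein-Gordon) route.
```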