
# Theoretical Physics - Science topic

Explore the latest questions and answers in Theoretical Physics, and find Theoretical Physics experts.

Questions related to Theoretical Physics

I don't even know what the mathematical definition of "equal footing" is, but I do understand the meaning of the postulate (which I am not complaining about) that the laws of physics are expressible in a way that can be used by all observers. However, given this postulate that I accept until convinced otherwise, this still does not imply any equivalence between time and space. They have some similarities in the Lorentz transformation in special relativity but they also have profound differences, including:

1. The most obvious difference is human perception that perceives time differently from space.

2. On a more mathematical level, the metric tensor has one eigenvalue of one sign, associated with the time coordinate, and three eigenvalues of the opposite sign, associated with the spatial coordinates.

3. Still using math, the time coordinate can always be used as the parameter in the parametric equations representing a particle trajectory, while other coordinates can serve this purpose only for special cases.

4. Because of the usefulness of time as a parameter (see item 3), Hamilton's equations give time a special role.

5. Constants of motion in any physics topic refer to quantities that do not change with time.

6. Getting more mathematical, but really referring to Item 5 above, the topic of field theory identifies field invariant quantities as spatial volume integrals that are constant in time.

So why are we told to treat time and space in the same way?
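Item 2 in the list above can be illustrated with a trivial numerical check (my own sketch, not part of the question, using the (−,+,+,+) sign convention):

```python
# The Minkowski metric diag(-1, 1, 1, 1) has one eigenvalue of one sign
# (time) and three of the opposite sign (space). The matrix is diagonal,
# so its eigenvalues are simply its diagonal entries.
eta = [[-1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1]]

eigenvalues = [eta[i][i] for i in range(4)]
negative = sum(1 for ev in eigenvalues if ev < 0)
positive = sum(1 for ev in eigenvalues if ev > 0)
print(negative, positive)  # -> 1 3
```

The 1-versus-3 split of signs is exactly the asymmetry between time and space the item points to.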

Max Planck wrote of natural units:

*These necessarily retain their meaning for all times and for all civilizations, even extraterrestrial and non-human ones, and can therefore be designated as 'natural units'.*

1. Could the units (kg, m, s, A, K) share this numerical relationship (kg = 15, m = -13, s = -30, A = 3, K = 20)?

2. Could these **geometrical MLTA objects** (see diagram below) be natural units?

This is actually easy to test. These MLTA objects are the geometry of 2 dimensionless constants, the fine structure constant alpha = 137.035999139 and Omega = 2.0071349496, and so have numerical solutions, i.e. V = 25.3123819. We can then use a scalar v = 11843707.905 m/s such that V*v = 299792458 m/s, or v = 7359.323 miles/s giving V*v = 186282 miles/s. As the scalars have units associated (v uses m/s or miles/s), they will share the same numerical relationship (v = 17, l = -13, t = -30 ...), and so we would only need 2 scalars to define the others.

This then permits us to arrange combinations of (G, h, c, e, me, kB) in which the scalars cancel (scalars = 1). If this unit relationship is correct, the SI constants will return the same numerical solutions as the equivalent MLTA constants, for if the scalars are gone (cancelled), then the SI constants are MLTA constants (in the example, see diagram below, the 2 scalars are r, v).

Eliminating the SI numerical values from the SI constants will expose any embedded natural units, which we can use to determine what the natural units are, and 1 solution is those MLTA objects.
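The numerical claim above can be checked directly from the quoted values alone. The sketch below verifies only the arithmetic (V, v and the mile conversion are copied from the question); it says nothing about the physical interpretation:

```python
# Checking only the arithmetic quoted above: V * v should reproduce c.
V = 25.3123819        # dimensionless "velocity object" quoted in the question
v = 11843707.905      # scalar, m/s, quoted in the question
c = 299792458.0       # speed of light, m/s (exact SI value)

product = V * v
print(product)        # close to 299792458, agreeing to about 8 figures

v_miles = 7359.323    # miles/s variant quoted in the question
print(V * v_miles)    # close to the quoted 186282 miles/s
```

So the quoted numbers are internally consistent; the open questions 1-3 below concern what, if anything, that consistency implies.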

The methodology is explained here

- https://codingthecosmos.com/physical-constants-evidence-of-a-simulation-universe.html
- Do the fundamental constants embed evidence of a mathematical universe at the Planck scale? dx.doi.org/10.13140/RG.2.2.15874.15041/1
- The (scalars = 1) examples are listed here https://en.wikiversity.org/wiki/Planck_units_(geometrical)#Natural_units_MLTPA

1. Is there a logical flaw to the above?

2. Are there other arguments to support/refute this?

3. Is this evidence that ours is a mathematical (simulation) universe?

Some background.

This question is for those who have the seventh printing of Goldstein's "Classical Mechanics", so that I don't have to write any equations here. The Lagrangian for electromagnetic fields (expressed in terms of scalar and vector potentials), for a given charge density and current density that creates the fields, is the spatial volume integral of the Lagrangian density listed in Goldstein's book as Eq. (11-65) (page 366 in my edition). Goldstein then considers the case (page 369 in my edition) in which the charges and currents are carried by point charges. The charge density (for example) is taken to be a Dirac delta function of the spatial coordinates. This is utilized in the evaluation of one of the integrals used to construct the Lagrangian: the spatial volume integral of charge density multiplied by the scalar potential. What is giving me trouble is as follows.

In the discussion below, a "particle" refers to an object that is small in some sense but has a greater-than-zero size. It becomes a point as a limiting case as the size shrinks to zero. In order for the charge density of a particle, regardless of how small the particle is, to be represented by a delta function in the volume integral of charge density multiplied by potential, it is necessary for the potential to be nearly constant over distances equal to the particle size. This is true (when the particle is sufficiently small) for external potentials evaluated at the location of the particle of interest, where the external potential as seen by the particle of interest is defined to be the potential created by all particles except the particle of interest. However, total potential, which includes the potential created by the particle of interest, is not slowly varying over the dimensions of the particle of interest regardless of how small the particle is. The charge density cannot be represented by a delta function in the integral of charge density times potential, when the potential is total potential, regardless of how small the particle is. If we imagine the particles to be charged marbles (greater than zero size and having finite charge densities) the potential that should be multiplying the charge density in the integral is total potential. As the marble size shrinks to zero the potential is still total potential and the marble charge density cannot be represented by a delta function. Yet textbooks do use this representation, as if the potential is external potential instead of total potential. How do we justify replacing total potential with external potential in this integral?
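The distinction drawn here can be illustrated numerically. The sketch below is my own construction (the Gaussian width a and the quadratic test potential are arbitrary choices): for a smooth external potential, the integral of charge density times potential tends to phi_ext(0) as the particle size shrinks, which is exactly the delta-function replacement; the self-potential scales like 1/a and admits no such limit.

```python
import math

def rho(x, a):
    """Charge density of a 'marble': unit total charge, Gaussian of width a."""
    return math.exp(-x * x / (2 * a * a)) / (a * math.sqrt(2 * math.pi))

def integrate(f, lo, hi, n=40000):
    """Composite trapezoidal rule."""
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return s * h

def phi_ext(x):
    """A smooth, slowly varying 'external' potential (arbitrary test function)."""
    return 3.0 + 0.5 * x + 0.2 * x * x   # phi_ext(0) = 3.0

vals = {}
for a in (0.5, 0.1, 0.02):
    vals[a] = integrate(lambda x: rho(x, a) * phi_ext(x), -10.0, 10.0)
    print(a, vals[a])  # tends to phi_ext(0) = 3.0 as a -> 0

# The self-potential of the marble, by contrast, grows like 1/a near the
# particle, so the corresponding integral has no a-independent limit and the
# delta-function replacement fails when the potential is the total potential.
```

As a shrinks, the integral converges to phi_ext(0), which justifies the replacement rho -> q delta(x) only when the potential varies slowly over the particle size, precisely the condition the question says the total potential violates.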

I won't be surprised if the answers get into the issues of self forces (the forces producing the recoil of a particle from its own emitted electromagnetic radiation). I am happy with using the simple textbook approach and ignoring self forces if some justification can be given for replacing total potential with external potential. But without that justification being given, I don't see how the textbooks reach the conclusions they reach with or without self forces being ignored.

Dear Sirs,

Below I give some very dubious speculations and recent theoretical articles about the question. Maybe they will promote some discussion.

1.) One can suppose that every part of our reality should be explained by some physical laws. In particular, general relativity showed that even space and time are curved and governed by physical laws. But the physical laws themselves are also a part of reality. Of course, one can say that every physical theory can only approximately describe reality. But let me suppose that there are physical laws in nature which describe the universe with zero error. Then the question arises: are the physical laws (as information) some special kind of matter described by some more general laws? Could a physical law, as information, transform into energy and mass?

2.) Besides the above logical approach, one can come to the same question in another way. Let us consider the transition from the macroscopic world to the atomic scale. It is well known that in quantum mechanics some physical information, or some physical laws, disappear. For example, a free particle has a momentum but it does not have a position. The magnetic moment of a nucleus has a projection on the external magnetic field direction, but the transverse projection does not exist, so we cannot say that the nuclear magnetic moment moves around the external magnetic field like a compass arrow in the Earth's magnetic field. A similar consideration can be made for the spin of an elementary particle.

One can hypothesize that if information is equivalent to some very small mass or energy (e.g. as shown in the next item), then it may be that some information or physical laws are lost, e.g. for an electron having extremely low mass. This conjecture agrees with the fact that objects having mass much greater than the proton's are described by classical Newtonian physics.

But one can raise an objection to the above view: a photon has no rest mass and, e.g., the rest mass of the neutrino is extremely small. Despite this, they have spin and momentum like an electron, and this spin and momentum information is not lost. Moreover, the photon energy for long EM waves is extremely low, much less than 1 eV, while the electron rest energy is about 0.5 MeV. These facts contradict the conjecture that information transforms into energy or mass.

But there is possibly a solution to this problem. A photon moves at the speed of light (the neutrino speed is very near to light speed), and that is why the physical information cannot be detached and carried away from the photon (information propagates at most at light speed).

3.) Searching the internet I have found recent articles by Melvin M. Vopson

which propose a mass-energy-information equivalence principle and its experimental verification. As far as I know, this experimental verification has not yet been done.

I would be grateful to hear your view on this subject.

My understanding of the significance of Bell's inequality in quantum mechanics (QM) is as follows. The assumption of hidden variables implies an inequality called Bell's inequality. This inequality is violated not only by conventional QM theory but also by data from experiments designed to test the prediction (the experimental data agree with conventional QM theory). This implies that the hidden-variable assumption is wrong. But from reading Bell's paper it looks to me that the assumption proven wrong is hidden variables (without saying local or otherwise), while people smarter than me say that the assumption proven wrong is local hidden variables. I don't understand why it is only local hidden variables, instead of just hidden variables, that was proven wrong. Can somebody explain this?
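For reference, the size of the violation in question can be reproduced in a few lines. This sketch uses the standard textbook CHSH setup (not Bell's original inequality): the singlet-state correlation E(a,b) = -cos(a-b) with the usual angle choices gives 2*sqrt(2), above the local-hidden-variable bound of 2.

```python
import math

def E(a, b):
    """Singlet-state correlation for spin measurements along angles a and b."""
    return -math.cos(a - b)

# Standard CHSH angle choices (radians).
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, exceeding the local bound of 2
```

The bound of 2 is derived assuming *local* hidden variables, which is why its violation rules out only that class, the point the question is probing.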

Dear **RG community members**, in this thread I will discuss the similarities and differences between two marvelous superconductors.

One is the liquid isotope helium-3 (^{3}He), which has a superconducting transition temperature of T_{c} ~ 2.4 mK, very close to absolute zero; it has several phases that can be described in a pressure P vs temperature T phase diagram. Superfluid ^{3}He was discovered by professors Lee, Osheroff, and Richardson, and it was the starting point of remarkable investigations into unconventional superconductors, which have other symmetries broken in addition to the global phase symmetry.

The other is the crystal strontium ruthenate (Sr_{2}RuO_{4}), a metallic solid alloy with a superconducting transition temperature of T_{c} ~ 1.5 K, where, from my particular point of view, nonmagnetic impurities play a crucial role in building up the phase diagram. Superconductivity in Sr_{2}RuO_{4} was discovered by Prof. Maeno and collaborators in 1994.

The rest of the discussion will be part of this thread.

Best Regards to All.

How much do the existence of advanced laboratories, appropriate financial budgets, and other support for a researcher's research affect the quality and quantity of that researcher's work?

Heidegger said that philosophy is *thinking*. What else is philosophy? What is the ultimate aim of philosophy? Truth? Certainty? …

Heidegger said that science is *knowledge*. What else is science? What is the ultimate aim of science? Knowledge? Truth? Certainty? …

Scientists have been using quantum theory for almost a century now, but embarrassingly they still don’t know what it means. An informal poll taken at a 2011 conference on Quantum Physics and the Nature of Reality showed that there’s still no consensus on what quantum theory says about reality: the participants remained deeply divided about how the theory should be interpreted.

1. Bose-Einstein condensation: How do we rigorously prove the existence of Bose-Einstein condensates for general interacting systems? (Schlein, Benjamin. "Graduate Seminar on Partial Differential Equations in the Sciences – Energy, and Dynamics of Boson Systems". Hausdorff Center for Mathematics. Retrieved 23 April 2012.)

2. Scharnhorst effect: Can light signals travel slightly faster than c between two closely spaced conducting plates, exploiting the Casimir effect? (Barton, G.; Scharnhorst, K. (1993). "QED between parallel mirrors: light signals faster than c, or amplified by the vacuum". Journal of Physics A. 26 (8): 2037.)

I have to fabricate a 2D Hot electron transistor for my project.

I come mainly from a theoretical physics background, so I don't know what to search for or read to learn what parameters affect the frequency of a 2D heterostructure transistor. Can someone help me out by pointing me to the literature I should be looking at?

At the beginning of the 20th century, Newton’s second law was corrected to account for the limiting speed c and the relativistic mass. At that time there was not a clear understanding of subatomic particles, and there was little research in high energy physics.

Given that particles of matter transfer discrete amounts of energy by exchanging bosons with each other, and that energy has mass and momentum, we can re-derive the relativistic form of Newton’s second law directly from the conservation of momentum.

A quote from the book "Mathematical Notes on the Nature of Things":

-- So, conceptually, we proceed from the fact that the real Universe is a dynamic flow on a seven-dimensional sphere. Therefore a vacuum, without taking into account the evolutionary component, is a globally minimal vector field of matter accelerations, forming on the surface of the sphere $S^{7}$ a foliation $S^{1}\times S^{3}\times S^{3}$, a typical layer of which has the shape of a Clifford torus $S^{3}\times S^{3}$; taking into account the periodicity of the foliation in time, its dynamics is described by a toroidal manifold $S^{1}\times S^{1}\times S^{3}\times S^{3}$. However, since the globally minimal vector field of matter accelerations evolves to its absolutely minimal state, so that in the process of evolution the radius of one of the spheres of the Clifford torus increases and the radius of the other sphere decreases, there is no periodicity of the foliation in time; the dynamics of the vacuum foliation is described by a cylindrical manifold $\mathbb{R}^{4}\times S^{1}\times S^{3}$, and it is convenient for the observer to operate with the space $M\times S^{1}\times S^{3}$, where $M$ is Minkowski spacetime and $S^{1}\times S^{3}$ is the compact component of the vacuum foliation.

Dynamic flows on a seven-dimensional sphere that do not coincide with the globally minimal vector field, but remain locally minimal vector fields of matter accelerations, we interpret as physical fields and particles. Moreover, bosons are associated with point-like perturbations of the vacuum vector field, and fermions are associated with node-like perturbations of the vacuum vector field, that is, the current lines of fermionic vector fields have a topological feature in the form of nodes. -- (p. 16)

(PDF) Mathematical Notes on the Nature of Things (researchgate.net)

I was wondering whether surface electromagnetic waves can propagate at the interface between two dielectrics, both isotropic and homogeneous, but having different relative permittivities.

The literature shows that their characteristics must be different.

Should the scholars at RG and elsewhere be alarmed by the press reports on the influence of the unholy alliance of big money and theology on high-value scientific research, particularly on theoretical physics and cosmology?

The British newspaper **The Guardian** reports: **The MIT-Epstein debacle shows ‘the prostitution of intellectual activity’.** https://www.theguardian.com/commentisfree/2019/sep/07/jeffrey-epstein-mit-funding-tech-intellectuals

**BBC reports**:

**Big Bang and religion mixed in Cern debate**

**Big Bang: Is there room for God?**

As we all know, the wavefunction of such a particle has a certain number n of zeros due to boundary conditions. If at these points the wavefunction is zero, then, since the probability of finding the particle there is equal to the square of the wavefunction, it follows that the particle cannot ever be there. However, there is nothing physical at those points that would prevent the particle from being there at some instant.

Moreover, a wavefunction psi_n corresponds to an energy level E_n. As you change to a higher energy level, the index n grows, and we have more nodes of the wavefunction; i.e., more places where the particle cannot be. Again, there is nothing physical at these points.
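As a concrete check of the statement about nodes (a standard infinite-square-well sketch of my own, not part of the question): for psi_n(x) = sqrt(2/L) sin(n*pi*x/L), the interior nodes sit at x = kL/n and the probability density vanishes there exactly.

```python
import math

L = 1.0  # well width (arbitrary units)

def psi(n, x):
    """Stationary state of the infinite square well on [0, L]."""
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)

n = 3
nodes = [k * L / n for k in range(1, n)]  # interior nodes: L/3, 2L/3
for x in nodes:
    print(x, psi(n, x) ** 2)  # probability density numerically ~ 0 at each node
```

The n - 1 interior zeros grow with n, which is exactly the puzzle posed above: more energy brings more places where the particle is never found, with nothing physically special at those points.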

*This paper is a project to build a new function. I will propose a form of this function and let people help me develop the idea of this project; at the same time we will try to apply this function in other sciences such as quantum mechanics, probability, electronics …*

The special theory of relativity assumes spacetime is formed from fixed points, with sticks and clocks to measure length and time respectively. Electromagnetic waves are transmitted at the speed of light through this spacetime. This classical spacetime does not explain the mysteries of quantum mechanics. Do you think that maybe there is more than one spacetime?

I have several confusions about the Hall and quantum Hall effect:

1. Does the Hall/quantum Hall effect depend on the length and width of the sample?

2. Why is the integer quantum Hall effect called a one-electron phenomenon? There are many electrons occupying a single Landau level, so why a single electron?

3. Can SdH (Shubnikov-de Haas) oscillations be seen in 3D materials?

4. Suppose there is one edge channel and the corresponding resistance is h/e^2; why are different values such as h/3e^2, h/4e^2, h/5e^2 measured across contacts? How do contact leads change the exact quantization value, and how can it be calculated depending on the number of leads?

5. How can we confirm that the observed edge conductance has no bulk contribution?
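On question 4, the quantized resistances mentioned are all of the form R = h/(nu * e^2). A quick sketch with the exact SI constants (my own illustration, not part of the question):

```python
# Quantized Hall resistances h/(nu * e^2), using exact SI values.
h = 6.62607015e-34   # Planck constant, J*s (exact since the 2019 SI revision)
e = 1.602176634e-19  # elementary charge, C (exact since the 2019 SI revision)

R_K = h / e**2       # von Klitzing constant, ~25812.807 ohms
print(R_K)

for nu in (1, 3, 4, 5):                # the filling factors named in question 4
    print(nu, h / (nu * e**2))         # -> ~25812.8, 8604.3, 6453.2, 5162.6 ohms
```

The measured value between a given pair of contacts depends on which plateaus (filling factors nu) the edge channels realize, which is the lead-configuration issue the question raises.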

The Nilsson diagram is obtained by solving the Schrödinger equation. If the deformation parameters are continuous, I wonder whether the orbits should be continuous as well. If the Pauli exclusion principle is the reason, the Nilsson quantum numbers are not always equal, for example 5/2[402] and 5/2[642]; why?

Possibly: 4/3 scaling is a fundamental universal principle. Nothing underlies it. Why? It accounts for expanding cosmological space. Since 4/3 scaling brings 3 dimensional space, and hence everything else, into existence, it must be fundamental.

Can that be right? What favors and disfavors this notion?

Could anyone suggest a suitable and affordable phase noise analyzer to characterize pulsed laser sources with repetition rates around 200 MHz?

Thanks

Dear Sirs,

The elevator example in general relativity is used to show that a gravitational force and an inertial force are not distinguishable. In other words, Newton's second law is the same in the two frames: an inertial frame with a homogeneous gravitational field, and the elevator's frame without a gravitational field, which has constant acceleration with respect to the inertial frame.

But everyone knows that an inertial force is a force which does not obey Newton's third law; examples are the centrifugal and Coriolis forces in the Earth's reference frame. The gravitational force does satisfy Newton's third law, so one can conclude that the gravitational force is not inertial.

Could you clarify this apparent contradiction?

Dear Sirs,

R Feynman in his lectures, vol 1, chapter 12, Characteristics of force wrote:

"The real content of Newton’s laws is this: that the force is supposed to have some *independent properties*, in addition to the law F=ma; but the *specific* independent properties that the force has were not completely described by Newton or by anybody else, and therefore the physical law F=ma is an incomplete law."

Other researchers may consider Newton's second law as a definition of force or mass. But R. Feynman did not agree with them in the above chapter.

What is your view of Newton's second law?

Sometimes I have found that an inconsistency gives a helpful clue about how to improve a theoretical investigation. Early on I viewed mistakes as hurdles. I still think they are hurdles, but I have many times found them to be helpful. My view is that knowing mistakes are part of the process of figuring things out encourages persistence. Are there articles about the role of making mistakes in theoretical physics?

Dear Sirs,

Everyone knows the derivation of the Lorentz transformations from electromagnetic wave front propagation. But the Lorentz transformations are the basis of the general theory of mechanics. It seems to me that it would be logically correct to derive the transformations on purely mechanical grounds. But how can this be done? Mechanical (sound) waves are of course not applicable here. Or is there only a purely mathematical approach? The latter is also not good in physics. Could the transformations be derived from gravitational wave propagation? If so, is there any contradiction, given that general relativity is based on special relativity? I would be grateful for your suggestions.

I would like to work on quantum gravity, but general relativity is not complete, so I want to work on GRT. I am a beginner in this subject, and GRT fails in a few aspects. Can anyone suggest research papers to me? Please send me your answers.

Dear Sirs,

I would like to find out whether the Galilean relativity principle (which means the same form of the three Newton's laws in all inertial frames) can be derived from the three Newton's laws or from any other statements of classical mechanics.

The document: DOI: 10.13140/RG.2.1.4285.9289

Mathematically, the question is to determine all the transformations between certain coordinate systems which have a physical reality for the experimenters: each of these four-dimensional coordinate systems is formed by a Cartesian, rectangular coordinate system of a three-dimensional Euclidean physical space, and by a particular temporal parameter, qualified as Cartesian, whose construction is specified. We then obtain a group of nonlinear transformations that contains the Poincaré group and is described by about fifteen real numbers.

Interpretation:

1 / The paradox of Ehrenfest:

If the elements of a family of observers are not motionless with respect to one another, in other words if their world lines are not elements of a unique physical space, then even in the context of classical kinematics, how can they manage to put their infinitesimal rulers end to end to determine the length of a segment of a curve in their reference frame (each will naturally ask his neighbor not to move until the measurement is finished)? This is the basis of the proposed solution to the Ehrenfest paradox. Inspired by the expression of Hubble's law, every theory must provide explicit or implicit assumptions to compare "the proper distance" D (which can vary over time) separating an arbitrarily chosen experimenter P from a certain object with "the proper distance" D' separating another arbitrarily chosen experimenter P' from the same object, because it is admitted that this concept of proper distance has a physical meaning even in a non-comoving reference frame.

2 / The authorized relative motions are quantized:

I establish an Eulerian description of the construction of all the physical spaces of classical kinematics, and an Eulerian description of the construction of all the physical spaces of nature in the context of the new theory. In classical kinematics, all the authorized relative motions between observers can be described by two arbitrary functions of the universal temporal parameter (one for rotation and one for translation); in the context of the new theory, all the authorized relative motions between observers are described by at most 15 real numbers. A notion of expansion of the universe is established as a structural reality, and a rigorous formulation of the experimental law of Hubble is proposed.

Thank you.

Dear **RG community**, I have a question that probably most of you are not facing: where are the citations in RG to papers written in the former USSR? First-class Soviet journals include the Journal of Experimental and Theoretical Physics (JETP and its Letters) and Low Temperature Physics (from Kharkiv). Vzla 09/05/20

Thanks to various inputs to this thread, the topic of discussion became wider and more interesting:

*The kind of science made in the former USSR: did it become a forgotten ghost after the cold war? Did it spread all over the world? Or did former USSR scientists change their science schools for new ones?*

Regards,

Pedro L.

A. Bejan, A. Almerbati and S. Lorente have concluded that `the economies of scale phenomenon is a fundamental feature of all flow (moving) systems, animate, inanimate, and human made’ (https://doi.org/10.1063/1.4974962).

The universe’s space everywhere flows — expands — outwards from its beginning. Economies of scale appear to arise in flowing systems. Is cosmogenesis an economy of scale phenomenon for the entire universe?

Are the physics of cosmogenesis and economies of scale the same?

This question relates to my recently posted question: What are the best proofs (derivations) of Stefan’s Law?

Stefan’s Law is that E is proportional to T^4.

The standard derivation includes use of the concepts of entropy and temperature, and use of calculus.

Suppose we consider counting numbers and, in geometry, triangles, as level 1 concepts, simple and in a sense fundamental. Entropy and temperature are concepts built up from simpler ideas which historically took time to develop. Clausius’s derivation of entropy is itself complex.

The derivation of entropy in Clausius’s text, The Mechanical Theory of Heat (1867) is in the Fourth Memoir which begins at page 111 and concludes at page 135.

Why does the power relationship E proportional to T^4 need to use the concept of entropy, let alone other level 3 concepts, which takes Clausius 24 pages to develop in his aforementioned text book?

Does this reasoning validly suggest that the standard derivation of Stefan’s Law, as in Planck’s text The Theory of Heat Radiation (Masius translation) is not a minimally complex derivation?

In principle, is the standard derivation too complicated?
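One way to see where the quartic power comes from with less machinery (a numerical sketch, not a derivation meeting the question's minimality criterion, and it presupposes Planck's law rather than deriving it): with the substitution x = h*nu/(k*T), the frequency integral of Planck's spectral density factors into T^4 times the dimensionless integral of x^3/(e^x - 1), whose value is pi^4/15.

```python
import math

def planck_integrand(x):
    """Dimensionless Planck integrand, x = h*nu/(k*T)."""
    return x ** 3 / math.expm1(x)

# Composite trapezoidal rule on [~0, 50]; the integrand vanishes at both ends
# and the tail beyond x = 50 is negligible (~e^-50).
n = 200_000
a, b = 1e-9, 50.0
h = (b - a) / n
total = 0.5 * (planck_integrand(a) + planck_integrand(b))
total += sum(planck_integrand(a + i * h) for i in range(1, n))
total *= h

print(total, math.pi ** 4 / 15)  # both ~6.4939
# u(T) = (8*pi*k^4 / (h^3 * c^3)) * T^4 * total  ->  the T^4 law
```

So once Planck's law is granted, T^4 is just a change of variables; the entropy machinery the question asks about enters in the thermodynamic derivations that predate Planck's law.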

The reasoning behind this question is to align the ratios of the proton potential in energy per charge to the speed of light in distance per time. As these two dimensions appear to be related, having identical numerical values would simplify the maths when making calculations in physics.

approximately;

(938,272,310 Joules per Coulomb)/(299,792,458 meters per second)

Basically setting the units so 1 J/C = 1 m/s

For our friends in the U.S. this would be like bringing back the foot ;)
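A quick check of what the proposed rescaling amounts to (my own arithmetic on the figures quoted above): the proton's energy per charge and c differ by a factor of about 3.13, so redefining one of the units by that factor would make the two numbers coincide.

```python
# Arithmetic on the numbers quoted above: how far is the proton's
# energy-per-charge from matching c numerically?
proton_potential = 938_272_310.0   # J/C, quoted above
c = 299_792_458.0                  # m/s

factor = proton_potential / c
print(factor)  # ~3.1297: rescaling the volt (or the coulomb) by this factor
               # would make 1 J/C read numerically as 1 m/s does
```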

In my brief paper below I show why *potential* and *speed* are one and the same thing.

Working Paper: A Case for Absolute Electrical Potential

Steven

As we know, there are many papers in the literature trying to derive or explain the fine structure constant from theory. Two interesting papers are by Gilson and by Stephen Adler (see http://lss.fnal.gov/archive/1972/pub/Pub-72-059-T.pdf); other papers are mostly based on speculation or numerology.

In this regard, in December 2008 I attended a seminar at Moscow State University on the relation between fundamental constants. Since the seminar was presented in Russian, which I don't understand, I asked a friend about the presenter, and my friend said it was Prof. Anosov. I only had a glimpse of his ideas: he tried to describe the fine structure constant via Shannon entropy. I put some of his ideas in my notebook, but that notebook is now lost.

I have tried searching Google and arxiv.org to find out whether there is a paper describing a similar idea, i.e. deriving the fine structure constant from Shannon entropy, but I cannot find any. So if you know of such a paper by Anosov, or by someone else discussing the relation between the fine structure constant and Shannon entropy, please let me know. Or perhaps you can explain the basic ideas to me.

In epidemiology, earthquakes, tokamak disruptions, etc., there is the possibility of approximation with a sequence of Gaussians and with the appropriate risks (see my papers in Journal of Fusion Energy, 2015, and International Journal of Molecular and Theoretical Physics, 2017). What is the role of thresholding and bifurcations?

- Is the GHZ argument more useful than the BKS theorem, or is it only a misinterpretation of the EPR argument?

Chord language is a natural information system. Its basic forms are: chords (a quantized discrete spectrum), chord geometry (open, closed, and membrane strings), and mathematical models of chords (temperament, harmonics), often used in time (music), space (painting), life (meridians) and other chord semantic expressions; chord semantics comes from the chord spectrum, which is the manifestation of natural spirit and natural laws.

The impression from chord observation is: the language of chords is the language of time-space (life), the language of all things.

Preprint Chord Language

Preprint Chord Painting

We know that these principles apply to particles. How are they applied in a magnetic field?...

With 2 indices it's A_{ij} = (A_{ij}+A_{ji})/2 + (A_{ij}-A_{ji})/2 for GL(N).

No. At the most fundamental level, interpretation of physical theory by mathematics may not be appropriate. The reason:

"Suppose a circle of radius zero and with theta of zero radians. It is physically a point. But a point will also have a radius (however small it may be, it will have a radius). So a point is two-dimensional, or we can say one-dimensional, but we cannot say zero-dimensional. Thus a 'zero-dimensional' object cannot be interpreted by mathematics."

In physics:

"Suppose a quantum particle like a photon. Space is zero for it, so everything is zero for it. Even energy should not exist, since there is no length (one dimension) or space (three dimensions). It is a quantum particle of the EM force.

By quantum physics it can be explained as a zero-dimensional object. That means it is not even a point in space (nothing, since space is zero in the physical sense), but it has something which can be explained by quantum mechanics. Thus zero dimension can be explained by physics."

Then why is physics completely dependent on mathematics? Why does mathematics dominate in theoretical physics? Is it reasonable? It is an important point in the synchronization of QM with GR.

This question does not relate to the philosophical romanticism applied to science that had some currency in the 1800s. Roughly, it seems that scientific romanticism differed from the Enlightenment by inserting humanity into nature and seeking union via human consciousness and problem solving.

The romantic aspect of physics I allude to shares some features of the medieval tale relating to chivalry, such as Don Quixote and qualities of adventure into unknown parts remote from settled life, such as the adventures of Richard Burton, who translated the Arabian Nights.

The mystery is: how has nature contrived these things we observe?

The remoteness is that the answers may require extrapolation in size, microscopic or cosmological, or in length of time, short or long, or in eons past or yet to arrive, remote from human experience, or principles that defy and challenge human perception, such as universal gravitation, or the nature of time, curvature of space, or quantum particles.

The adventure involves all the steps to solve the problem.

It seems to me that theoretical physics is a romantic quest. If the physicist arrives at a partial or provisional understanding of some mystery, then that is a great romance.

Your view?

Will this be the final incarnation of this question?

My purpose in asking these questions is to motivate the kind of physical theory that accepts that Physical Laws are part of The Universe, as opposed to standing outside it, and that rules governing The Universe must stem from The Universe itself. Otherwise we should be asking: where do the Physical Laws come from?

Sometimes, perhaps, the largest impediment to solving a problem, in general and in physics, is the framing of the question in the context of widely accepted implicit, or unstated or assumed concepts.

If, unaware and unknowledgeable (ignorant) of those learned assumptions, one blunders into the problem, can that be an advantage, unencumbered by what the learned think they know? The more unquestioned those assumptions are, the harder it is to tackle the unsolved problem?

Or does the inquiry of the ignorant lead merely to ignorance elaborated?

Are there historical examples?

It was the day Carl Anderson discovered the particle we today know as the positron, and also the day, I suspect, that physics went off on a tangent.

About four years earlier, Dirac published a paper predicting the yet-unseen anti-matter, so when Anderson's cloud-chamber experiments found tracks that looked like electrons curving the opposite way in a magnetic field, he assumed it was the anti-matter predicted by Dirac, and consequently won the Nobel Prize for his discovery.

The problem I see is that Dirac never predicted these particles to be so rare; in fact he suggests in one of his papers that the proton might be a candidate, so clearly in his mind matter and anti-matter ought to appear in equal numbers.

*Incidentally, over the last 90 years we have never witnessed matter-creation events where matter and anti-matter did NOT appear in equal numbers.*

The assumption made by Anderson, that the positron is anti-matter, swept some major problems under the carpet, and we are now paying for it big time; think LHC, dark matter, Higgs boson, etc.

There is a better way. In Ground Potential Theory I show how Dirac's intuition was absolutely correct and explain why our world is made of protons and why the electron is the proper anti-matter. Further, I go on to calculate ground potential from first principles and show that time is nothing more than a change in electric potential.

The historical sequence of events should have been...

1) Discover that light speed is finite

2) Discover that electric potential is finite

3) Discover the laws of special relativity

4) Apply wave theory and develop quantum mechanics

Missing out on the second step left a void in our understanding which has now been pushed around for 90 years.

The standard cosmological model and the standard model of particle physics are consequently full of unicorns*.

If we want to make progress, I think we need to go back to 1932 and right the wrongs. What do you think?

Steven Sesselmann

Ref:

Article The Positive Electron

Research Ground Potential Theory

* Unicorns - Unseen inexplicable phenomena that can't be measured

PS: When, in the next 5-10 years, physics finds itself in a major crisis because the fundamental electron is gaining mass, remember that Ground Potential predicted it.

I have attached some equations that need to be solved by the Keller box method, but I am having problems with the block-elimination step, because here the momentum equation starts with f'' instead of f'''. I have also attached a sample matrix for the case where the equation starts with f'''. What will happen when it starts with f''? What will the convergence iteration count be?

Dear Sirs,

I think many of you know the idea due to Jules Henri Poincaré that the laws of physics can be formally rewritten as space-time curvature, or as a new geometry alone, without forces. This is because the laws of physics and the laws of geometry are only verified together in experiment, so we can arbitrarily choose one of them.

Do you know of any works or researchers who have realized this idea? I understand that it is just a fantasy, as it has not been proved experimentally for any force except gravitation.

Do you know of works where the three Newton's laws are rewritten as pure space-time curvature, or 5D space curvature, or the like, without FORCES? Kaluza-Klein theory covers only electricity.

Theoretical physics is often distinguished from experimental physics.

Is the philosophy of physics written by philosophers, and theoretical physics something physicists do?

What are the distinctions?

Or are there none, apart from how the author is designated?

There is theoretical evidence that this hypothesis is true; in particular, it is strongly supported by the Casimir-type electron stability mechanism suggested by Prof. Hal Puthoff in his nice work: Puthoff H.E. "Casimir vacuum energy and the semiclassical electron". Int J Theor Phys, 46, 2007, p. 3005-3008, as well as in the works by Valerii B. Morozov, 2011 Phys.-Usp. 54 371 doi:10.3367/UFNe.0181.201104c.0389

"On the question of the electromagnetic momentum of a charged body",

Rohrlich F. Self-Energy and Stability of the Classical Electron. American Journal of Physics, 28(7), 1960, p. 639-643,

Prykarpatsky A.K., Bogolubov N.N. (Jr.) On the classical Maxwell-Lorentz electrodynamics, the inertia problem and the Feynman proper time paradigm. Ukr. J. Phys. 2016, Vol. 61, No. 3, p. 187-212

and by Rodrigo Medina in the work "Radiation reaction of a classical quasi-rigid extended particle", J. Phys. A: Math. Gen. 39 (2006) 3801-3816 doi:10.1088/0305-4470/39/14/021

The last one is very instructive and also solves the well-known "4/3" problem formulated by Abraham, Lorentz and Dirac more than 100 years ago.

To me it is mostly a story.

There is, at the outset, a puzzle about some natural phenomena, perhaps encountered by inadvertence.

Then some other process exhibits a similar pattern. The question becomes: is there some reason, perhaps based on the thermodynamics of the two systems, that connects them?

This takes the curious inquirer into a conceptual forest, or overgrown garden, path obscured, looking for a common principle. When a principle is discerned, there are more questions.

Does the pattern appear elsewhere?

Is there a more fundamental principle underlying the first principle discerned?

Does a principle, even more fundamental, connect all the different phenomena sharing a kind of pattern? Does the same pattern appear but in subtle ways in other phenomena?

Can the phenomena be modeled? What assumptions are extraneous to arriving at a common model? What is the minimal set of assumptions?

Many more paths and tangles appear.

Can the winding path so obscure at the outset be reduced to a set of logical statements that resemble in their appearance mathematical deduction? Never finally, but at least provisionally?

But first, there is a story.

How do you regard physics?

One of the consequences of relativistic physics is the rejection of the well-known concepts of space and time in science, and their replacement with the new concept of Minkowski space-time, or simply space-time.

In classical mechanics, the three spatial dimensions in Cartesian coordinates are usually denoted by x, y and z. The dimensional symbol of each is L. Time is represented by

*t* with the dimensional symbol T. In relativistic physics, x, y and z are still used intact for the three spatial dimensions, but time is replaced by *ct*. This means its dimension has changed from T to L; this new time is therefore yet another spatial dimension. One thus wonders: where and what is time in space-time?

Probably due to this awkwardness, *ct* is not commonly used by physicists as the notation for time, even after more than a century since its introduction, and despite the fact that it applies to any object at any speed. The root of this manipulation of time comes directly from the Lorentz transformation equations. But what are the consequences of this change?

We are told that an observer in any inertial reference frame is allowed to consider its own frame to be stationary. However, the space-time concept tells us that if the same observer does not move at all in the same frame, he or she still moves at the new so-called time dimension with the speed of light! In fact, every object which is apparently moving at a constant speed through space is actually moving with the speed of light in space-time, divided partially in time and partially in spatial directions. The difference is that going at the speed of light in the time direction is disassociated with momentum energy but going at the fraction of that speed in the other three dimensions accumulates substantial momentum energy, reaching infinity when approaching the speed of light.
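The claim above, that every object "moves at the speed of light through space-time", can be stated precisely: with metric signature (+,-,-,-), the Minkowski norm of the four-velocity is exactly c for any sub-light speed. A minimal numerical sketch (not from the question itself; the function name is my own):

```python
import math

c = 299_792_458.0  # speed of light, m/s

def four_velocity_norm(v):
    """Minkowski norm of the four-velocity for a particle at speed v.

    With signature (+,-,-,-): u = gamma*(c, v, 0, 0), so
    sqrt(u_t^2 - u_x^2) = gamma*sqrt(c^2 - v^2) = c for any |v| < c.
    """
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    u_t, u_x = gamma * c, gamma * v
    return math.sqrt(u_t ** 2 - u_x ** 2)

# The norm is c whether the object is at rest or moving near light speed.
for v in (0.0, 0.5 * c, 0.99 * c):
    assert math.isclose(four_velocity_norm(v), c, rel_tol=1e-9)
```

The "split" between time and space motion is just how gamma redistributes the fixed total c between the u_t and u_x components.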

The Carnot cycle is a theoretical thermodynamic cycle proposed by Nicolas Léonard Sadi Carnot in 1824 and expanded by others in the 1830s and 1840s. It can be shown to be the most efficient cycle for converting a given amount of thermal energy into work or, conversely, for creating a temperature difference (e.g. refrigeration) by doing a given amount of work. One of the great virtues of the Carnot cycle is its potential applicability to any working substance. The Carnot cycle for a photon gas provides a very useful tool for illustrating the laws of thermodynamics, and it can be used to introduce the concepts of creation and annihilation of photons in an introductory physics course.
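Since the Carnot efficiency depends only on the reservoir temperatures, eta = 1 - Tc/Th, the same number applies to a photon gas as to an ideal gas. A small illustrative helper (my own naming, not from the question):

```python
def carnot_efficiency(t_hot, t_cold):
    """Carnot efficiency eta = 1 - Tc/Th (temperatures in kelvin).

    Independent of the working substance, so it holds for a photon gas
    just as for an ideal gas.
    """
    if not (0 < t_cold < t_hot):
        raise ValueError("require 0 < t_cold < t_hot")
    return 1.0 - t_cold / t_hot

# Reservoirs at 600 K and 300 K: half the heat can be converted to work.
print(carnot_efficiency(600.0, 300.0))  # 0.5
```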

Where the redshift value could be a combination of gravitational, rotational and Doppler components, matching the observed values.

If temperature is related to motion, then what is wrong with relating temperature to the time-dilation concept of relativity?

Temperature is a measure of the average translational kinetic energy associated with the disordered microscopic motion of atoms and molecules.

Thermodynamic temperature is a measure of the kinetic energy in molecules or atoms of a substance. The greater this energy, the faster the particles are moving, and the higher the reading an instrument will render. This is the method lay people most often use.

**Therefore, based on this dependence of temperature on the velocity of microscopic motion, I predict that time runs slower at greater temperature.**

**The complete mathematical derivation is given in the attached document or link. Please go through it; your valuable comments/feedback are most welcome.**

Article TEMPERATURE TIME DILATION
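The kinetic-theory relation the question relies on, (3/2) k_B T = (1/2) m <v^2>, can be made concrete by computing the root-mean-square speed at a given temperature. A quick sketch (the function and the N2 mass value are my own illustration):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def v_rms(T, m):
    """Root-mean-square speed from (3/2) k_B T = (1/2) m <v^2>."""
    return math.sqrt(3 * k_B * T / m)

# A nitrogen molecule (m ~ 4.65e-26 kg) at 300 K moves at roughly 500 m/s,
# far below c, so any velocity-based time-dilation factor is tiny.
m_N2 = 4.65e-26
print(v_rms(300.0, m_N2))
```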

Is a theoretical physicist, believing a hypothesis true, ethically obliged to advocate for it? Is there an obligation running from physicist to theory?

Is it a desire to repay in some measure society for the advantages it confers?

Does vanity seek triumph over an unsolved problem?

Does the possibility that a mind can forge ideas and advance civilization lead to a theoretician promoting a hypothesis?

Is it curiosity?

Is it career advancement?

Is it the desire to create ideas that outlast a human life?

Is it the desire to experience personal and private joy of (partly) converting confusion to knowledge, lifting the curtain?

What motivates?

In Nature in 1971, volume 233, page 357, W. H. Brock reviewed a book by Robert Fox, The Caloric Theory of Gases from Lavoisier to Regnault, published by Oxford U. The reviewer begins with this comment of Edward Frankland: “it is by no means necessary that a theory should be absolutely true in order to be a great help to the progress of science." Very nice quote. It relates to the epistemology of scientific knowledge as well as the historical progress of science. No source is given for the quote. Do you know the provenance of the quote?

Isaac Newton floated his boat in the cosmic ocean, powered by his brilliant terrestrial mechanics and the Second Law of motion aided by a "First Impulse". But this boat was launched without the balancing radar of his Third Law and against the expressed wisdom of G.W. Leibniz. The journey faltered because Newton cannot take even a single step in terrestrial Nature without his Third Law! Albert Einstein foisted a magic sail of ideal mathematics on Newton's boat that gave it unlimited motion. Where do you think this project will lead humanity?

Hi there,

My question concerns a situation in which the law of free fall and the relativity of simultaneity come into play simultaneously.

The general assumptions are as follows:

1. If we let two objects fall at the same time, they will reach the surface at the same time, regardless of their mass (although gravity has a stronger effect on larger masses, inertia is greater to the same extent).

2. The relativity of simultaneity shows impressively that different observers moving relative to each other need not agree on whether two events really happen at the same time, depending on their reference frame.

My question now is: what happens if we combine both things? A person standing in a spaceship lets two objects with different masses fall simultaneously, released by a technical apparatus (an atomic clock). In his frame of reference this person has no problem: he sees that both objects arrive at the floor at the same time.
But what does an external observer see when the spaceship passes? Does he now have the impression that the objects no longer land on the surface at the same time, even though the law of free fall implies uniform acceleration? Or must all external observers agree that both objects reach the floor at the same time, because the law of free fall cannot be circumvented? Or could the external observer observe that the person in the spaceship does not drop the objects at the same time, even though the person in the spaceship observes that they are dropped at the same time?
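The quantitative core of the question is the Lorentz transformation of a time interval, dt' = gamma*(dt - v*dx/c^2): two landing events that are simultaneous in the ship frame (dt = 0) but separated along the direction of motion (dx != 0) are not simultaneous for the external observer. A minimal sketch (function name and the 2 m separation are my own illustration):

```python
import math

c = 299_792_458.0  # speed of light, m/s

def dt_in_moving_frame(dt, dx, v):
    """Lorentz-transformed time separation: dt' = gamma * (dt - v*dx/c^2)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (dt - v * dx / c ** 2)

# Two landings simultaneous in the ship frame (dt = 0), separated by
# dx = 2 m along the direction of motion, ship speed 0.8c:
print(dt_in_moving_frame(0.0, 2.0, 0.8 * c))  # nonzero, a few nanoseconds

# Only events at the same position (dx = 0) are simultaneous for everyone.
assert dt_in_moving_frame(0.0, 0.0, 0.8 * c) == 0.0
```

So the external observer does see the landings at slightly different times, with no contradiction: simultaneity itself is frame-dependent, not the law of free fall.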

As the explanation goes, a hole is *a figment* referring to the absence of an electron after it moves to some different energy state as a result of absorbing energy of some kind. That is, I think of holes as *voids*, which are *said to* have positive charge for the sake of charge neutrality, because there once happened to be an electron at that place. But a hole doesn't have its own actual charge like any other physical charged particle (like an electron), right? Then how can we define an exciton, which is based on Coulombic forces between an electron (in the conduction band) and a hole (in the valence band), and which actually requires the presence of **two physical charges**?

The question is whether we can differentiate two procedures: a) information sent by a transmitter/emitter, and b) information received by a receiver.

If these procedures can be distinguished, then how can we exclude the possibility that the information reaches the receiver simultaneously with its emission, and that the elapsed time (according to the receiver) is due to the receiver's (in)ability to decode it?

Is there a possibility that information transfer consists of two mechanisms, one simultaneous and the other at light speed? Obviously, the latter is the time-related one.

I demonstrated that all astronomical observations refute General Relativity.

https://www.quora.com/What-scientific-ideas-or-theories-are-blocking-progress/answer/Marco-Pereira-1

Since I did that, somehow, not a single scientist has come to refute that conclusion.

Here is the argument:

Feel free to rebut it.


Bruce M. Boghosian in the November 2019 issue of Scientific American (p. 73) writes about wealth distribution. Using math and physics, it seems that a slight perturbation to a symmetric or isotropic starting point can result in inequality. Slight inequality results in increasing inequality (anisotropy) over time. These issues are also canvassed in the Growth of Oligarchy in a Yard-Sale Model of Asset Exchange by Bruce M. Boghosian, Adrian Devitt-Lee, and Hongyan Wang, arxiv 2016 and in The Affine Wealth Model by Jie Li, Bruce M. Boghosian, and Chengli Lion, arxiv 2018.

Ehud Meron in Physics Today November 2019 issue writes about Vegetation Pattern Formation (p. 31). While water distribution for a given topography may initially be isotropic, vegetation can distribute in anisotropic patterns.

Are these two instances of initial isotropic distribution leading to anisotropic patterns connected by the same physics? If so, what is the physics?

As we know, many cosmologists argue that the Universe emerged out of nothing, for example Hawking-Mlodinow (Grand Design, 2010), and Lawrence Krauss, see http://www.wall.org/~aron/blog/a-universe-from-nothing/. Most of their arguments rely on conviction that the Universe emerged out of vacuum fluctuations.

While that kind of argument may sound interesting, it is too weak an argument, in particular from the viewpoint of Quantum Field Theory. In QFT the quantum vacuum is far from the classical definition of vacuum ("nothing"); it is an active field consisting of virtual particles. Theoretically, under a special external field (such as a strong laser), those virtual particles can turn into real particles; this effect is known as the Schwinger effect. See for example the dissertation by Florian Hebenstreit at http://arxiv.org/pdf/1106.5965v1.pdf.

Of course, some cosmologists argue in favor of the so-called Cosmological Schwinger effect, which essentially says that under a strong gravitational field some virtual particles can be pushed to become real particles.

Therefore, if we want to put this idea of pair production into cosmological setting, we find at least two possibilities from QFT:

a. The universe may have begun from vacuum fluctuations, but that needs a very large laser or other external field to trigger the Schwinger effect. But then one can ask: who triggered that laser in the beginning?

b. In the beginning there could have been a strong gravitational field which triggered the Cosmological Schwinger effect. But how could that be possible, if in the beginning nothing existed, including a large gravitational field? So it seems like a tautology.

Based on the above two considerations, it seems that the idea of Hawking-Mlodinow-Krauss that the universe emerged from nothing is very weak. What do you think?

Assume that a particle consists of a photon that moves circularly in a loop, creating a standing wave obeying the rule that its closed path satisfies 2πR = nλ (1) (a resonator whose length 2πR is n times the photon wavelength). Knowing the particle's mass (experimental value), we can then calculate its radius.

Let us substitute λ = c/ν (2) into equation (1), where ν is the photon frequency. We then get 2πR = nc/ν (3). Now, if we assume that the particle's mass is of EM origin and m = E/c² (4), where E = hν (5) is the energy of the circulating photon described above, we can rewrite (4) as m = hν/c² (6), or ν = mc²/h (7) (h is the Planck constant, of course). Putting (7) into (3) gives 2πR = nc/(mc²/h) (8), or, simplifying, R = nh/(2πmc) (9).

Now let us take the proton for our considerations. Assuming n = 4 in eq. (9) and m = 1.672621637(83)×10⁻²⁷ kg (experimental value), we can calculate the proton's radius to be R = **0.84124 fm**, which agrees with the experimental value of **0.84184 ± 0.00067 fm** (the most accurate experimental value, measured in muonic hydrogen in 2010). You can read more about that theory and mechanism in my paper here:

What do you think?
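The arithmetic in eq. (9) is easy to check directly; a short sketch using the proton mass quoted in the question (constants are standard CODATA-style values):

```python
import math

h = 6.62607015e-34     # Planck constant, J*s
c = 299_792_458.0      # speed of light, m/s
m_p = 1.672621637e-27  # proton mass, kg (value used in the question)

n = 4
# Eq. (9): R = n*h / (2*pi*m*c), i.e. the closed path 2*pi*R
# equals n reduced Compton wavelengths of the particle.
R = n * h / (2 * math.pi * m_p * c)

print(R * 1e15)  # radius in femtometres, approx. 0.8412 fm
```

This reproduces the 0.84124 fm figure claimed in the question to four significant digits.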

1) There is some tradition in **philosophy of mathematics** starting in the late 19th century and culminating in the crisis of foundations at the beginning of the 20th century. Names here are Zermelo, Frege, Whitehead and Russell, Cantor, Brouwer, Hilbert, Gödel, Cavaillès, and some more. At that time mathematics was already focused on itself, separated from general rationalist philosophy and epistemology, from a philosophy of the cosmos and the spirit.

2) Stepping backwards in time, we have the great "rationalist" philosophers of the 17th, 18th and 19th centuries: Descartes, Leibniz, Malebranche, Spinoza, Hegel, proposing a global view of the universe in which the subject, trying to understand his situation, is immersed.

3) Still making a big step backwards in time, we have the philosophers of the late antiquity and the beginning of our era (Greek philosophy, Neoplatonist schools, oriental philosophies). These should not be left out from our considerations.

4) Returning to the late 20th century, we see the foundation inside mathematics (Eilenberg, Lawvere, Grothendieck, Mac Lane, ...) of **category theory**, which is in some sense a transversal theory inside mathematics. Among its basic principles are the notions of object, arrow and functor, on which adjunctions, (co-)limits, monads, and more evolved concepts are then founded.

**Do you think these principles have significance a) for science, b) for the rationalist philosophies we described before, and ultimately c) for more general philosophies of the cosmos?**

Examples: the existence of an adjunction of two functors could have a meaning in physics, for example. The existence of a natural-numbers object, known from topos theory, could have philosophical consequences (cf. Immanuel Kant, *Antinomien der reinen Vernunft*).

Is a physical basis that necessarily requires constancy of the speed of light a logical impossibility, or is the constancy of the speed of light the result of ideas not yet found or applied?

Does isotropy require constancy of the speed of light?

Jensen's inequality for concave and convex functions implies, for a logarithmic function, a maximal value when the base of the log is the system's mean. Mathematically, this would imply that the speed of light must be uniform in all directions to optimize the distribution of energy. This idea has a flaw: the creation of the universe happened considerably before mathematics, and before Jensen's inequality in 1906. Invert the conceptual reference frame and suppose that Jensen's inequality is mathematically provable in our universe because ours is exactly the type of universe that makes Jensen's inequality true in it. A mathematical argument based on Jensen's inequality thus goes around in a circle. Are there reasons, leaving aside Jensen's inequality (or even including it), that require constancy of the speed of light?
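For reference, the form of Jensen's inequality invoked above is, for the concave logarithm: mean(log x) <= log(mean x), with equality only when all values coincide. A quick numerical check (my own illustrative values):

```python
import math

# Jensen's inequality for the concave logarithm:
# mean(log(x)) <= log(mean(x)), with equality iff all x are equal.
xs = [1.0, 2.0, 4.0, 8.0]

mean_of_logs = sum(map(math.log, xs)) / len(xs)
log_of_mean = math.log(sum(xs) / len(xs))

assert mean_of_logs <= log_of_mean  # strict here, since the xs differ

# Equal values give equality:
assert math.isclose(sum(map(math.log, [3.0] * 4)) / 4, math.log(3.0))
```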

We have to expect that theoretical physics needs even more than 61 particles to describe the world of real particles and atomic nuclei. Otherwise the proof of 'quarks' at the LHC was a failure. One can justifiably claim that these particles, and possibly some others, do not exist.

In chemistry the following is valid: analysis and synthesis are the proof of the composition and structure of a chemical compound.

In physics the following is valid: particles and structures are created by theoretical considerations and proven by experiments which are evaluated in the sense of the underlying theories.

If I analyse the particle decays, nuclear fragmentations, etc., then the electron and positron emerge at least as elementary particles. Electron-positron collision experiments at various research facilities around the world have synthesized light mesons and other particles.

Experimental physics has already proven the structure of the particles and atomic nuclei, but theoretical physics persists in its theories of the day before yesterday.