Science topic

Fundamental Physics - Science topic

Explore the latest questions and answers in Fundamental Physics, and find Fundamental Physics experts.
Questions related to Fundamental Physics
  • asked a question related to Fundamental Physics
Question
29 answers
The constants (G, h, c, e, me, kB), can be considered fundamental only if the SI units they are measured in (kg, m, s ...) are independent. However, if we assign numerical values to the SI units (kg = 15, m = -13, s = -30, A = 3, K = 20), then by matching the unit numbers, we can define (and solve) the least precise (CODATA 2014) constants (G, h, e, me, kB) in terms of the 3 most precise constants (c, μ0, R) ... (diagram #1). Officially this must be just a coincidence, but the precision is difficult to ignore.
We find further anomalies to equal precision when we combine the constants (G, h, c, e, me, kB) in combinations whereby the unit numerical value sums to 0 ... (diagram #2).
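To make the bookkeeping described above concrete, here is a minimal sketch (an illustration of the arithmetic only, not an endorsement of the claim) that computes the "unit number" of each constant as the weighted sum of its SI base-unit exponents, using the weights quoted above (kg = 15, m = -13, s = -30, A = 3, K = 20). "R" is assumed here to mean the Rydberg constant and mu0 the magnetic constant; any ordinary dimensionless combination automatically has unit number 0.

```python
# Sketch of the unit-number bookkeeping described above (illustration only).
WEIGHTS = {"kg": 15, "m": -13, "s": -30, "A": 3, "K": 20}

# SI base-unit exponents of each constant (e.g. G has units m^3 kg^-1 s^-2)
DIMENSIONS = {
    "c":   {"m": 1, "s": -1},
    "G":   {"m": 3, "kg": -1, "s": -2},
    "h":   {"kg": 1, "m": 2, "s": -1},
    "e":   {"A": 1, "s": 1},
    "me":  {"kg": 1},
    "kB":  {"kg": 1, "m": 2, "s": -2, "K": -1},
    "mu0": {"kg": 1, "m": 1, "s": -2, "A": -2},
    "R":   {"m": -1},                  # Rydberg constant (assumed meaning of "R")
}

def unit_number(name):
    """Weighted sum of the SI base-unit exponents of one constant."""
    return sum(WEIGHTS[u] * p for u, p in DIMENSIONS[name].items())

def combination_number(combo):
    """Unit number of a product of constants, given as {constant: exponent}."""
    return sum(unit_number(name) * p for name, p in combo.items())

for name in DIMENSIONS:
    print(f"{name:>4}: {unit_number(name)}")

# Any ordinary dimensionless combination has unit number 0, e.g. the
# fine-structure constant alpha = e^2 * mu0 * c / (2h):
print(combination_number({"e": 2, "mu0": 1, "c": 1, "h": -1}))   # -> 0
```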
The methodology is introduced here
Is a simulation universe the best explanation for these anomalies?
Some general background to the physical constants.
Relevant answer
Answer
Hi Hieram, you're welcome! Imho fun should be induced by fruitful scientific research, supported by openly commenting on one another's ideas (as opposed to endless, fruitless discussions repeating what presumably cannot ever work, until it does), like iSpace ("integer-Space", or "complex-Space" when treating the i as the imaginary unit of a complex number), which is able to derive and decipher inter-relationships and dependencies and to calculate exact, arbitrary-precision numerical values for most, if not all, constants of nature.
Also, a new, truly quantum-geometric iSpace-IQ unit system has recently been developed, able to directly represent native quantum relations of constants while staying strictly compatible with the MKSA/SI system, showing a single *time*-based conversion factor and effectively predicting the quantization of time itself.
So - no - being a true long-time Apple expert consultant, I'd say we do not need to fear being sued (at least not in the foreseeable future ;-) ). And please, all, take the time to read through the very short yet imho really convincing math of both of my newest papers, to be found on my RG home page.
Here is a link to RG summary of my iSpace project:
  • asked a question related to Fundamental Physics
Question
5 answers
Having worked on the spacetime wave theory for some time, and having recently published a preprint paper on the Space Rest Frame, I realised the full implications, which are quite shocking in a way.
The spacetime wave theory proposes that electrons, neutrons and protons are looped waves in spacetime:
The paper on the space rest frame proposes that waves in spacetime take place in the space rest frame K0:
This then implies that the proton which is a looped wave in spacetime of three wavelengths is actually a looped wave taking place in the space rest frame and we are moving at somewhere between 150 km/sec and 350 km/sec relative to that frame of reference.
This also gives a clue as to the cause of the length contraction taking place in objects moving relative to the rest frame. The length contraction takes place in the individual particles, namely the electron, neutron and proton.
I find myself in a similar position to Ernest Rutherford when he discovered that the atom was mostly empty space. I don't expect to fall through the floor, but I might expect to suddenly fly away at around 250 km/sec. Of course this doesn't happen, because there is zero resistance to uniform motion through space and momentum is conserved.
It still seems quite a shocking realisation.
Richard
Relevant answer
Answer
Sydney Ernest Grimm Thank you for your comment and I would like to explain in more detail how the Spacetime Wave theory relates to quantum theory.
If you think about the electron as a looped wave in Spacetime the entire mass/energy of the electron is given by E=hf. Then when an electron changes energy level from an excited state f2 to a lower energy level f1 the emitted wave quantum (photon) is given by h(f2 - f1). It is easy to see how a looped wave can emit a non-looped wave.
Because the path of the electron wave loops many times around the nucleus, and within each wavelength there is a small positive charge followed by a slightly larger negative charge, the wave aligns with successive passes displaced by half a wavelength.
This alignment process means that there are certain possible energy states that can be adopted by the electron. This is the cause of the quantum nature of the electron and also explains the quantum nature of light.
Richard
  • asked a question related to Fundamental Physics
Question
137 answers
Our answer is YES. The wave-particle duality is a model proposed to explain the interference of photons, electrons, neutrons, or any matter. One deprecates a model when it is no longer needed. Therefore, we show here that the wave-particle duality is deprecated.
This offers an immediate solution for the thermal radiation of bodies, as Einstein demonstrated experimentally in 1917, in terms of ternary trees in progression, to tri-state+, using the model of GF(3^n), where the same atom can show so-called spontaneous emission, absorption, or stimulated emission, and further collective effects, in a ternary way.
Continuity or classical waves are not needed, do not fit into this philosophy, and are not representable by any edges or updating events, or sink, in an oriented graph [1] model with stimulated emission.
However, taking into account the principle of universality in physics, the same phenomena — even a particle such as a photon or electron — can be seen, although approximately and partially, in terms of continuous waves, macroscopically. Then, the wave theory of electrons can be used in the universality limit, when collective effects can play a role, and explain superconductivity.
This resolves the apparent confusion created by the common wave-particle duality model: the ontological view now becomes more indicative of a particle in all cases, and does not depend on amplitude.
This explains both the photoelectric effect, which does not depend on the amplitude, and wave interference, which does. The ground rule is quantum, the particle, but one apparently "sees" interference at a distance that is far enough not to distinguish individual contributions.
What is your informed opinion?
REFERENCE
[1] Stephen Wolfram, “A Class of Models with the Potential To Represent Fundamental Physics.” arXiv:2004.08210, 2020; https://arxiv.org/ftp/arxiv/papers/2004/2004.08210.pdf
Relevant answer
  • asked a question related to Fundamental Physics
Question
123 answers
The fundamental physical constants, ħ, c and G, appear to be the same everywhere in the observable universe. Observers in different gravitational potentials or with different relative velocity, encounter the same values of ħ, c and G. What enforces this uniformity? For example, angular momentum is quantized everywhere in the universe. An isolated carbon monoxide molecule (CO) never stops rotating. Even in its lowest energy state, it has ħ/2 quantized angular momentum zero-point energy causing a 57 GHz rotation. The observable CO absorption and emission frequencies are integer multiples of ħ quantized angular momentum. An isolated CO molecule cannot be forced to rotate with some non-integer angular momentum such as 0.7ħ. What enforces this?
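As a numeric cross-check of the CO figures quoted above, here is a minimal rigid-rotor estimate (the 1.128 Å bond length and the rigid-rotor idealisation are assumptions of this sketch, not taken from the question). In this model the ~57 GHz figure corresponds to the rotational constant B, and the allowed absorption/emission lines fall at the integer-spaced frequencies 2B(J+1), with the J = 0→1 line near the observed ~115 GHz.

```python
# Rigid-rotor check of the CO numbers quoted above (bond length assumed).
import math

h = 6.626e-34                         # Planck constant, J s
u = 1.6605e-27                        # atomic mass unit, kg
m_C, m_O = 12.0 * u, 15.995 * u
mu = m_C * m_O / (m_C + m_O)          # reduced mass of CO, kg
r = 1.128e-10                         # C-O bond length, m (assumed)
I = mu * r**2                         # moment of inertia, kg m^2

B = h / (8 * math.pi**2 * I)          # rotational constant, Hz
print(f"B          ~ {B / 1e9:.1f} GHz")      # ~ 58 GHz, the figure quoted above
print(f"J=0->1 line ~ {2 * B / 1e9:.1f} GHz") # ~ 115 GHz, the observed CO line
# Allowed rotational transitions sit at 2B(J+1), i.e. integer-spaced multiples,
# which is the quantisation the question refers to.
```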
Even though the rates of time are different in different gravitational potentials, the locally measured speed of light is constant. What enforces a constant speed of light? It is not sufficient to mention covariance of the laws of physics without further explanation. This just gives a different name to the mysteries.
Are the natural laws imposed on the universe by an unseen internal or external entity? Do the properties of vacuum fluctuations create the fundamental physical constants? Are the physical constants the same when they are not observed?
Relevant answer
Answer
Hi Dr John A. Macken . I hope the following article link could answer your question: https://arxiv.org/pdf/0905.0975.pdf
  • asked a question related to Fundamental Physics
Question
5 answers
It feels strange to have discovered a new fundamental physics discipline after a gap of a century. It is called Cryodynamics, sister of the chaos-borne deterministic Thermodynamics discovered by Yakov Sinai in 1970. It proves that Fritz Zwicky was right in 1929 with his alleged “tired light” theory.
The light traversing the cosmos hence lawfully loses energy in a distance-proportional fashion, much as Edwin Hubble tried to prove.
Such a revolutionary development is a rare event in the history of science. So the reader has every reason to be skeptical. But it is also a wonderful occasion to be one of the first who jump the new giant bandwagon. Famous cosmologist Wolfgang Rindler was the first to do so. This note is devoted to his memory.
November 26, 2019
Relevant answer
Answer
What will happen once 92 years have passed since then? Is it possible to imagine?
  • asked a question related to Fundamental Physics
Question
101 answers
There is an opinion that the wave-function represents the knowledge that we have about a quantum (microscopic) object. But if this object is, say, an electron, the wave-function is bent by an electric field.
In my modest opinion matter influences matter. I can't imagine how the wave-function could be influenced by fields if it were not matter too.
Has anybody another opinion?
Relevant answer
Answer
Nice discussion
  • asked a question related to Fundamental Physics
Question
4 answers
Dear Colleagues.
The Faraday constant, as a fundamental physical value, has peculiar features which make it stand out from the other physical constants. According to the official documents of NIST, this constant has two values:
F = 96485.33289 ± 0.00059 C/mole and
F* = 96485.3251 ± 0.0012 C/mole.
The second value refers to the "ordinary electric current".
Is the Faraday constant constant?
One of the ways to answer this question is proposed in the works.
Sincerely, Yuriy.
Relevant answer
Answer
Faraday's constant is always considered a universal constant ...
  • asked a question related to Fundamental Physics
Question
112 answers
According to special relativity (SR), the relative velocity between two inertial reference frames (IRF), say two spaceships, is calculated by
u = (v1 - v2)/(1 - v1v2/c²)      (1)
where v1 and v2 are the constant velocities of the two vessels moving parallel to each other.
For low speeds, v1v2/c² is negligible and the formula reduces to
u = v1 - v2
But neither v1 nor v2 is supposed to be known in SR. Both can have any value between -c and +c, as illustrated in Figure 1 (please see the attached file).
Not knowing the speed of each vessel means that the calculated relative speed can also be any value between -c and +c. For example:
v1 = -0.6c, v2 = -c    ==>  u = -c      (possibility 5 in Figure 1)
v1 = 0,     v2 = -0.4c ==>  u = -c/2.5  (possibility 2)
v1 = 0.2c,  v2 = -0.2c ==>  u = c/2.6   (possibility 3)
v1 = 0.4c,  v2 = 0     ==>  u = c/2.5   (possibility 1)
v1 = c,     v2 = 0.6c  ==>  u = c       (possibility 4)
Meaning that the real relative speed between two IRFs in fact cannot be calculated.
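For reference, formula (1) can be evaluated directly for any assumed pair of speeds; the minimal sketch below is added here purely for illustration (the input speeds, in units of c, are arbitrary), and it does not resolve which frame is "really" at rest.

```python
def relative_speed(v1, v2):
    """Formula (1): u = (v1 - v2) / (1 - v1*v2), all speeds as fractions of c,
    for the parallel motion considered in the question."""
    return (v1 - v2) / (1.0 - v1 * v2)

print(relative_speed(0.4, 0.0))   # 0.4c = c/2.5   (possibility 1 above)
print(relative_speed(1.0, 0.6))   # 1.0c           (possibility 4 above)
print(relative_speed(0.9, -0.9))  # ~0.994c: stays below c for any |v1|, |v2| < c
```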
To remedy this situation, it is assumed that:
1. One of the vessels in which observer number one, Bob, resides is stationary and the other vessel, Alice, is moving at the relative speed of u.
This is, obviously, a wrong scientific statement and in contrast to SR. Here only one specific possibility among countless possibilities is arbitrarily selected to hide the difficult situation. We should also remind ourselves of the damaging effect of this type of assumption. Scientists tried hard to discard the dominating geocentric dogma of the past, championed by the Catholic Church, and now a comparable assumption is accepted under a new, groundbreaking concept.
Based on this assumption, the equation simply reduces to either u = -v2 or u = v1, depending on the observer.
2. There is a third reference frame based on which the speeds are measured.
As in the first case, we are back to Newtonian mechanics with an assumed fixed reference frame. This assumption explicitly accepts the first assumption. Only then does the formula make sense. Specifically, to be able to present SR as a scientific/quantitative theory, it is forced to accept that the frame of the observer, or a third frame, is a stationary reference frame for any measurement or analysis. Zero speed is just a convenient value among countless other possibilities which SR has introduced and then has decided not to deal with the consequences of.
The problem with the Einstein velocity addition formula also applies in this case, as the assumed velocities, as well as the calculated relative velocity between Bob and Alice, depend on the relative speed of the observer.
Somehow, both conflicting cases are accepted in SR quite subjectively. In other words, SR is arbitrarily benefiting from classical science, to push its own undeserved credibility, while at the same time denying it.
Is this a fair assessment?
P.S. for simplicity only parallel movements are considered.
Relevant answer
Answer
Jeremy Fiennes: "I had a quick look at your "10 proofs of SR". Mainly due to thinking "Wait a minute, he's now trying to justify SR?!!" The title is maybe somewhat misleading. Or maybe you are trying to attract readers who still belive in it."
It was trying to attract people who were interested in claimed proofs of SR, for any reason (and then pointing out that most of the traditional proofs are provably "junk science", and not to be taken seriously). I figured that this stuff needed documenting.
Mainstream relativity folk are quick to shout "fraud!" "fake!" or "incompetent" at fringe scientists when they cut corners or fiddle figures, but are less willing to document dubious or phoney science when their own team are responsible.
Jeremy Fiennes: " For me the best conceptual refutation is the clock absurdity "
I try to stay away from the clock paradox.
Firstly, it involves acceleration, so it's a problem in "extended" SR rather than "core" SR, and with extended SR, a GR mainstreamer can always step in and say, "oh, your problem is that you're using SR outside its proper domain of validity; if SR breaks, it just means that you need to use full GR."
Second, the proper domain of extended SR is kinda fuzzy. SR gets credit when it works, the user gets the blame when it doesn't. People disagree as to what the proper domain is, and it's not obviously open to scientific invalidation.
Third, even in the GR version, it's apparently not been adequately solved (the "GR clock paradox").
Fourth, if you accelerate in a given direction at X Earth-gravities, SR coordinates will catastrophically break down and become inconsistent at a distance of about 1/X lightyears.
So if you spend years successfully constructing a proof that the twin problem is unworkable, the GR folk can say, "Oh, we know that, that's part of why SR is considered to only be a local theory: it's not to be used for problems involving accelerations and interstellar distances, because its coordinates break down!"
What the community will tend to do is to steer you towards problems where ... if you have success ... they have an emergency "escape" argument to fall back on. It's misdirection -- conning critics into working on problems that they know can be dismissed as irrelevant.
IMO, if you want to take down SR, you have to ignore all the standard textbook clichéd arguments, and create new problems that even invoking GR won't let them wriggle out of. Like, how about pointing out that a valid general theory requires SR geometry to be wrong for moving masses? Or that quantum gravity needs non-SR equations? Or that you can't combine modern cosmology with SR-based GR's gravity-shift predictions? Or that "extended SR" requires rotating masses NOT to drag light, making it invalidated by Gravity Probe B?
Take the higher ground. Instead of complaining that SR is counter-intuitive, point out the similarity between flat-Earthers and SR's flat spacetime. When they try to mock SR critics for being too dim to understand Minkowski spacetime, mock them right back for being too dim to understand the curved-spacetime principles of a proper general theory, or the inherent conflicts between Minkowski's geometry and the principle of equivalence.
The "Ten proofs of SR" paper was intended to provide SR dissidents with counterarguments and ammunition to help them counter almost anything that the SR community could use in the theory's defence.
It's insurrection time!
  • asked a question related to Fundamental Physics
Question
16 answers
Wikipedia describes Physics, lit. 'knowledge of nature', as the natural science that studies matter, its motion and behavior through space and time, and the related entities of energy and force.
But isn’t this definition a redundancy? Any visible object is made of matter and its motion is a consequence of energy applied. We might as well say, study of stuff that happens. But then, what does study entail?
Fundamentally, ‘physics’ is a category word, and category words have inherent problems. How broad or inclusive is the category word, and is the ordinary use of the category word too restrictive?
Is biophysics a subcategory of biology? Is econophysics a subcategory of economics? If, for example, biophysics combines elements of physics and biology, does one predominate as categorization? If, as in biophysics and econophysics and astrophysics there are overlapping disciplines, does the category word ‘physics’ gives us insight about what physics studies or obscure what physics studies?
Is defining what physics does more a problem of semantics (ascribing meaning to a category word) than of science?
Might another way of looking at it be this? Physics generally involves the detection of patterns common to different patterns in phenomena, including those natural, emergent, and engineered; if possible detecting fundamental principles and laws that model them, and when possible using mathematical notation to describe those principles and laws; if possible devising and implementing experiments to test whether hypothesized or observed patterns provide evidence for or give clues to fundamental principles and laws.
Maybe physics more generally just involves problem solving and the collection of inferences about things that happen.
Your views?
Relevant answer
Answer
If you ask a fake physicist:
  • Why is the sky blue? He says: because it looks blue.
  • Why is the electron charge quantized? He says: because Millikan's experiment showed it.
  • Why is there no ether? He says: because it was not shown in Michelson's experiment.
  • Why is light a wave? Because Young's test results are more consistent with light being a wave.
  • What is quantum mechanics? Like the great Feynman, he says he doesn't know, but he has accurate calculations that are compatible with the data, and that's enough.
The latter was not the answer of an ordinary physicist, but the answer of one of the greatest contemporary physicists! And this is a disaster for physics.
It is as if the role of physics has been reduced from a master to a servant.
Is reducing the role of physics from describing nature to a tool for exploitation a service to physics or a betrayal of it?
Technology is now far ahead of knowledge, and physics does not seem to be afraid of this humiliation, and it is still content with its instrumental role.
  • asked a question related to Fundamental Physics
Question
34 answers
This question is closely related with a previous question I raised in this Forum: "What is the characteristic of matter that we refer as "electric charge"?"
As stated in my previous question, the main objective of bringing this topic to discussion is to try to understand the fundamental physical phenomena associated with the Universe we live in, where energy, matter and other key ingredients, like the Laws that govern them, all together seem to play a harmonious role, so harmonious that even life, as we know it, can exist on this planet.
My background is in engineering. Hence, I am trying to go deep into the causes behind the effects, the physical phenomena that support the Universe as we know it, before going deep into complex mathematical models and formulations, which may obscure reality.
With an open mind, I try to ask questions whose answers may help us to understand the whys, rather than to prove theories and their formulations.
From our previous discussion, it became clear that mass and electric charge are two inseparable attributes of matter. Moreover, Electromagnetic (EM) fields propagate through vacuum. Hence, no physical matter is required for energy or information flow through the Universe. However, electric charges remain clustered in physical matter, i.e., they require, not vacuum, but matter.
Matter has the property of radiation. Matter under Gravitational (G) and EM fields is subjected to forces, producing movement. Radiation depends strongly on Temperature.
The absolute limit of T is 0 K (absolute zero). At this limit, particle movement stops. Magnetic fields depend on moving electric charges; as, at this limit, movement vanishes, Magnetic fields should vanish with it. As Electric and Magnetic fields are nested in each other, the Electric field, and consequently the effect of EM fields (and, hence, radiation, too), should vanish as T approaches 0 K. Black Holes (BH) do not radiate, their Temperature being close to 0 K.
Can we assume that EM fields ultimately vanish as T approaches 0 K?
Could this help explain why protons in an atomic nucleus stay together, and are not violently scattered away from each other?
Would it be reasonable to assume that atomic nuclei are at Temperatures close to 0 K, although electrons and matter, at the macroscopic level, are at Room Temperature?
What is really the Temperature of atomic nuclei? Can we measure it? Is it possible that a cloud of electrons, either orbiting the atomic nuclei or moving as free electrons, plays a shielding role, capturing the energy associated with Room Temperature and preventing the nuclei from heating? Can an atom's nucleus Temperature be close to 0 K, like it occurs in BH?
Relevant answer
Answer
Dear J.P. Teixeira,
My co-author, Eng. Robert Therriault, of the article 'Gravity a paradym shift in reasoning ' wrote to me yesterday to your question:
'the model I have for atoms is based on least stacking lattice structures . . .  it is only based on EM the attractive forces between electrons and protons in the nucleus . . . with Coulomb's law to be the basis of the understanding.'
Regards,
Laszlo
  • asked a question related to Fundamental Physics
Question
9 answers
In physics, we have a number of "fundamental" variables: force, mass, velocity, acceleration, time, position, electric field, spin, charge, etc.
How do we know that we have in fact got the most compact set of variables? If we were to examine the physics textbooks of an intelligent alien civilization, could it be that they have cleverly set up their system of variables so that they don't need (say) "mass"? Maybe mass is accounted for by everything else and is hence redundant? Maybe the aliens have factored mass out of their physics and it is not needed?
Bottom line question: how do we know that each of the physical variables we commonly use are fundamental and not, in fact, redundant?
Has anyone tried to formally prove we have a non-redundant compact set?
Is this even something that is possible to prove? Is it an unprovable question to start with? How do we set about trying to prove it?
Relevant answer
Answer
Respected D Abbott
Very good question, but the answer is difficult.
I won't say anything about aliens.
I just want to say something about mass.
As far as I can tell, the principle of extremum action is the basic principle of nature.
Action is, as you know, actually the world-line length between two events.
More precisely, the action is proportional to the world-line length between two events.
The proportionality constant is something called "mass" (with a negative sign).
So, if we do not want to consider mass as a variable, we will fail to explain the time evolution of systems of different mass, and physics will not be able to explain natural events.
Though, for fields, mass is not the proportionality constant of the action, because for fields like the EM field there is no mass.
I don't know whether the time evolution of a massive system can be explained without mass or not.
Thanks and Regards
N Das
  • asked a question related to Fundamental Physics
Question
10 answers
You will find an article, with more precision, under my profile.
The question is non-relativistic and depends only on logic.
The answer could force a reset of all fundamental physics and is therefore of extreme importance!
JES
Relevant answer
Answer
For the de Broglie wavelength λ, λ = h/mv, where h is the Planck constant; m is the invariant mass of the particle; and v is the velocity of this particle. (The equation can be rewritten as λ = h/p, because p = mv, where p is the momentum of the particle, for non-relativistic motion.)
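A quick numeric illustration of that formula (the electron speed of 10^6 m/s is just an assumed example value, well below c, so the non-relativistic form applies):

```python
# de Broglie wavelength lambda = h / (m v) for a non-relativistic electron.
h = 6.626e-34       # Planck constant, J s
m_e = 9.109e-31     # electron rest mass, kg
v = 1.0e6           # electron speed, m/s (example value, v << c)

lam = h / (m_e * v)
print(f"de Broglie wavelength ~ {lam:.2e} m")   # ~ 7.3e-10 m, i.e. ~0.7 nm
```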
  • asked a question related to Fundamental Physics
Question
39 answers
What is consciousness? What do the latest neurology findings tell us about consciousness and what is it about a highly excitable piece of brain matter that gives rise to consciousness?
Relevant answer
Answer
Consciousness is what starts when you wake and fall asleep each day, and this captures our scientific and philosophical attention precisely because it is highly implausible that there is "a highly excitable piece of brain matter that gives rise to consciousness." To borrow an example from Ned Block, there are about a billion neurons in a brain and there are about a billion people in China, but if the Chinese were to relay information among themselves in a manner identical to a brain, China itself would not suddenly awake and enjoy conscious experiences. This disanalogy is what makes the prospect of a neat localization claim (i.e., "Consciousness is this spot in the brain!") unlikely -- on principled grounds. An expression like "gives rise to," despite sounding so natural, presupposes a host of unexamined metaphysical views that become dubious when examined, so such an expression obscures more than it reveals.
  • asked a question related to Fundamental Physics
Question
5 answers
It has radically altered it by rehabilitating Fritz Zwicky 1929.
Hence ten Nobel medals are gone. And cheap energy for all is made possible. Provided, that is, that humankind is capable of mentally following in Yakov Sinai’s chaotic footsteps. If not, energy remains expensive and CERN remains dangerous to all: A funny time that we are living in. With the crown of Corona yet waiting to be delivered.
April 1st, 2020
Relevant answer
Answer
Please, elaborate.
  • asked a question related to Fundamental Physics
Question
4 answers
Why is a complete theory of fundamental physics ignored just because it is outside the realms of Quantum Field Theory and General Relativity? It has been "marked" as a speculative alternative and has never been studied, nor has there been any attempt to verify it. The fundamental physics community is still in complete ignorance of the extremely successful Electrodiscrete Theory.
The Electrodiscrete Theory is not a speculative alternative, and not just a new idea in the works, but a complete theory of fundamental physics describing all our elementary particles and their interactions, including gravity. The Electrodiscrete Theory beautifully describes the patterns in nature revealed by observations. The Electrodiscrete Theory gives a single (unified) description of nature in a relatively simple and self-consistent way. Moreover, it can calculate and it can make predictions. Then why is it ignored?
The Electrodiscrete Theory provides the complete conceptual foundation for describing nature that we are all seeking, but nobody bothers to take a look. Why?
The Electrodiscrete Theory opens new horizons. This is progress in science that is being held back by prejudice and a new kind of ignorance. What is wrong with the system?
Relevant answer
Answer
These results are in consequence of the structure of a deeper and more fundamental layer of Fundamental Physics. Like you say, this is not in relation to anything described by the currently accepted theories. These results provide the only theoretical derivation and understanding of the Fine Structure Constant and the understanding of the Electron Magnetic Moment Anomaly. This Sub-Fundamental Physics that I have developed makes the unification of gravity and electromagnetism possible, eliminates all the paradoxes in physics and provides one solid and unified description for all of physics, as described in my book and articles. The Electrodiscrete Theory is not a kind of a Quantum Field Theory and it does not conform with General Relativity. It is a new physics and it does require us to overcome many scientific prejudices. However, it takes the understanding of the basics of Electrodiscrete Theory to be able to understand the theoretical derivation of the Fine Structure Constant and the EMM Anomaly.
  • asked a question related to Fundamental Physics
Question
11 answers
Mathematics is crucial in many fields.
What are the latest trends in Maths?
Which are the recent topics and advances in Maths? Why are they important?
Please share your valuable knowledge and expertise.
Relevant answer
Answer
For me, as well as for majority of other researchers, Mathematics is the language of Science!
  • asked a question related to Fundamental Physics
Question
15 answers
A new Phenomenon in Nature: Antifriction
Otto E. Rossler
Faculty of Science, University of Tuebingen, Auf der Morgenstelle 8, 72076 Tuebingen, Germany
Abstract
A new natural phenomenon is described: Antifriction. It refers to the distance-proportional cooling suffered by a light-and-fast particle when it is injected into a cloud of randomly moving heavy-and-slow particles if the latter are attractive. The new phenomenon is dual to “dynamical friction” in which the fast-and-light particle gets heated up.
(June 27, 2006, submitted to Nature)
******
Everyone is familiar with friction. Friction brings an old car to a screeching halt if you jump on the brake. The kinetic energy of a heavy body thereby gets “dissipated” into fine motions – the heating-up of many particles in the end. (Only some cars do re-utilize their motion energy by converting it into electricity.) But there also exists a less well-known form of friction called dynamical friction. It differs from ordinary friction by its being touchless.
The standard example of dynamical friction is a heavy particle that is repulsive over a short distance, getting injected into a dilute gas of light-and-fast other particles. The heavy particle then comes to an effective halt. For all the repelled gas particles that it forced out of its way in a touchless fashion carried away some of its energy of motion while getting heated-up in the process themselves – much as in ordinary friction.
In the following, it is proposed that a dual situation exists in which the opposite effect occurs: “antifriction.” Antifriction arises under the same condition as friction – if repulsion is replaced by attraction. The fast particles then rather than being heated up (friction) paradoxically get cooled-down (antifriction). This surprising claim does not amount to an irrational perpetual-motion-like effect. Only the fast-and-light (“cold”) particle paradoxically imparts some of its kinetic energy onto the slow-and-heavy “hot” particles encountered.
A simplified case can be considered: A single light-and-fast particle gets injected into a cloud of many randomly moving heavy-and-slow particles of attractive type. Think of a fast space probe getting injected into a globular cluster of gravitating stars. It is bound to be slowed-down under the many grazing-type almost-encounters it suffers. The small particle will hence be “cooled” rather than heated-up as one would naively expect in analogy to the repulsive case.
The new effect is going to be demonstrated in two steps. In the first step, we return to repulsion. This case can be understood intuitively as follows: On the way towards equipartition (which characterizes the final equilibrium in the repulsive case as is well known), the light-and-fast particles – a single specimen in the present case – do predictably get heated up in their kinetic energy. In the second step, we then “translate” this result into the analogous attraction-type scenario to obtain the surprising opposite effect there.
First step: the repulsive case. Many heavy repulsive particles in random motion are assumed to be traversed by a light-and-fast particle in a grazing-type fashion. A typical case is focused on: as the light-and-fast particle starts to approach the next moving heavy repellor while leaving behind the last one at about the same distance, the new interaction partner is with the same probability either approaching or receding-from the fast particle’s momentary course. Whilst there are many directions of motion possible, the transversally directed ones are the most effective so that it suffices to focus on the latter. Since the approaching and the receding course do both have the same probability of occurrence, a single pair already yields the main effect: there is a net energy gain for the fast particle on average. Why?
In the approaching subcase the fast particle gains energy, and in the receding subcase it loses energy. But the two effects are not the same: The gain is larger than the loss on average if the repulsive potential is assumed to be of the inversely distance-proportional type assumed. This is because in the approaching case, the fast particle automatically gets moved-up higher by the approached potential hill gaining energy, than it is hauled-down by the receding motion of the same potential hill in the departing case losing energy. The difference is due to the potential hill’s round concave form as an inverted funnel. The present “typical pair” of encounters thus enables us to predict the very result well known to hold true: a time- and distance-proportional energy gain of the fast lighter particle as a consequence of the “dynamical friction” exerted by the heavy particles encountered along its way. Thus, eventually an “equipartition” of the kinetic energies applies.
Second step: the attractive case. Everything is the same as before – except that the moving potential hill has become a moving potential trough (the funnel now is pointing downward rather than upward). The asymmetry between approach and recession is the same as before. Therefore there is a greater downwards directed loss of energy (formerly: upwards directed gain) in the approaching subcase than there is an up-wards directed gain of energy (formerly: downwards directed loss) in the receding subcase. The former net gain thus is literally turned-over into a net loss. With this symmetry-based new result we are finished: Antifriction is dual to dynamical friction, being valid in the case of attraction just as dynamical friction is valid in the case of repulsion.
Thus a new feature of nature – antifriction – has been found. The limits of its applicability have yet to be determined. It deserves to be studied in detail – for example, by numerical simulation. It is likely to have practical implications, not only in the sky with its slowed-down space probes and redshifted photons [1], but perhaps even in automobiles and refrigerators down here on earth.
To conclude, the fascinating phenomenon of dynamical friction – touchless friction – was shown to possess a natural “dual”: antifriction. A prototype subcase (a pair of representative encounters) was considered above in either scenario, thereby yielding the new twin result. Practical applications can be expected to be found.
I thank Guilherme Kujawski for stimulation. For J.O.R.
Added in proof: After the present paper got finished, Ramis Movassagh kindly pointed to the fact that the historically first paper on “dynamical friction,” written by Subrahmanyan Chandrasekhar [2] who also coined the term, actually describes antifriction. This fact went unnoticed because the smallest objects in the interactions considered by Chandra were fast-moving stars. Chandra’s correctly seen energy loss of these objects therefore got classified by him as a form of “friction” suffered in the interaction with the fields of other heavy moving masses. However, the energy loss found does actually represent a “cooling effect” of the type described above: antifriction. One can see this best when the cooling is exerted on a small mass (like the above-mentioned tiny space probe traversing a globular cluster of stars). While friction heats up, antifriction cools down. Thus what has been achieved above is nothing else but the re-discovery of an old result that had been interpreted as a form of “friction” even though it actually represents the first example of antifriction.
References
[1] O.E. Rossler and R. Movassagh, Bitemporal dynamic Sinai divergence: an energetic analog to Boltzmann’s entropy? Int. J. Nonlinear Sciences and Numerical Simul. 6(4), 349-350 (2005).
[2] S. Chandrasekhar, Dynamical friction. Astrophys. J. 97, 255-263 (1943).
(Remark: The present paper after not being accepted by Nature in 2006 was recently found lingering in a forgotten folder.)
See also: R. Movassagh, A time-asymmetric process in central force scatterings (submitted on 4 Aug 2010, revised 5 Mar 2013, https://arxiv.org/abs/1008.0875)
Nov. 23, 2019
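As one possible starting point for the numerical study suggested in the text above, here is a self-contained toy sketch (not the author's calculation; all parameter values are arbitrary assumptions): a single light, fast test particle passes a single heavy, transversally moving attractive centre with a softened 1/r potential, and its kinetic-energy change is compared for the "approaching" and "receding" subcases. The outcome is left for the reader to inspect.

```python
import numpy as np

def energy_change(sign, v=10.0, u=1.0, GM=5.0, b=1.0,
                  x0=-20.0, dt=1e-4, t_end=4.0, soft=0.1):
    """Kinetic-energy change of the light particle after the encounter.

    sign = +1: the heavy centre initially moves toward the particle's
               straight-line path y = b; sign = -1: away from it.
    The heavy centre's motion is prescribed (it is treated as infinitely heavy)."""
    r = np.array([x0, b])            # light particle position
    w = np.array([v, 0.0])           # light particle velocity
    R = np.array([0.0, 0.0])         # heavy centre position
    U = np.array([0.0, sign * u])    # heavy centre (prescribed) velocity
    e0 = 0.5 * (w @ w)
    for _ in range(int(t_end / dt)):
        d = r - R
        w = w - GM * d / (d @ d + soft**2) ** 1.5 * dt   # semi-implicit Euler
        r = r + w * dt
        R = R + U * dt
    return 0.5 * (w @ w) - e0

dE_toward = energy_change(+1)
dE_away = energy_change(-1)
print("dE (centre moving toward the path):", dE_toward)
print("dE (centre moving away from path): ", dE_away)
print("mean of the two subcases:          ", 0.5 * (dE_toward + dE_away))
```

A fixed-step, first-order integrator is crude; anything beyond a first look should use a symplectic or adaptive scheme and average over many impact parameters, speeds and phases.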
Relevant answer
Answer
Hello Mykhailo and Otto,
The point I was trying to make in my last message was that all real systems experience some type of dissipation wherein energy is degraded to heat. For solids in mechanical contact with one another, dissipation arises from friction, specifically dynamic friction, when there is relative motion of two solid surfaces. In a moving fluid (liquid or gas), dissipation arises due to either shear viscosity in the case of tangential forces or bulk viscosity in the case of normal forces. The term 'friction' should only be used where it is directly applicable.
One can, of course, say that the viscosity of real fluids produces a "friction-like" dissipation, but this use of the term 'friction' is by analogy and it suffers from the logical fallacy of false equivalence as viscosity and friction arise from different root causes. Consider an incandescent light bulb. The electric current through its tungsten filament only produces a small amount of visible light, the majority of the applied electrical energy is converted directly to heat. The dissipation in the incandescent bulb arises from imperfections in the metal crystal lattice due to things such as defects, grain boundaries, interstitial and substitutional impurities, etc. These imperfections give rise to what one might call "friction-like" behavior, but the dissipation is obviously not caused by asperities as in the case of friction between solids.
Otto, with respect to the system discussed in your original question, it is still not clear to me that your use of the term 'friction' is appropriate. Does a space probe moving through a globular cluster of stars really experience friction or antifriction? Would it not be more appropriate to speak about the effective mass of the space probe changing due to the long range fields it experiences? Plus, I am still hazy about where and how the dissipation or anti-dissipation arises given that the forces acting on the space probe are probably conservative.
Regards,
Tom Cuff
  • asked a question related to Fundamental Physics
Question
11 answers
It is well known that a light field can be decomposed into a polarized field and an unpolarized field. But is it possible to consider this sum as only the sum of a linearly polarized and an unpolarized part, or of a circularly polarized and an unpolarized part? Or does only the degree of polarization matter, and not the type of polarization?
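For concreteness, here is a minimal sketch of the standard Stokes-vector decomposition behind the question (the numeric Stokes values are arbitrary example inputs): the unpolarized part carries no preferred state, while the state of the polarized part (linear, circular or elliptical) is whatever the components (S1, S2, S3) dictate, so only the degree of polarization is a free parameter of the split.

```python
import math

S0, S1, S2, S3 = 1.0, 0.3, 0.1, 0.4          # example Stokes vector

P = math.sqrt(S1**2 + S2**2 + S3**2) / S0    # degree of polarization
polarized = (P * S0, S1, S2, S3)             # fully polarized component
unpolarized = ((1 - P) * S0, 0.0, 0.0, 0.0)  # fully unpolarized component

print("degree of polarization:", P)
print("polarized part:  ", polarized)
print("unpolarized part:", unpolarized)
```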
Relevant answer
Answer
Thank you very much for your reply, Prof. Hari Prakash. I will go through the paper you recommended. How the Stokes vector of the polarized part of a partially coherent, partially polarized beam changes on propagation (e.g., for Gaussian Schell-model beams) has been well studied by Emil Wolf. Recently I have also read that the polarized part is strictly a single state of vibration, in the sense that there should not be an addition of two polarized components; that would result in unpolarization. Prof. A. T. Friberg ( https://www.uef.fi/en/web/photonics/ari-t.-friberg ) and his group are doing research on unpolarized light, and a few of his papers cleared my doubt.
Regarding the Stokes parameters being inadequate for higher-order coherence functions (I am not sure), the two-point Stokes parameters ( ) may show a new way.
As mentioned in your affiliation, ICTP Trieste: I plan to apply for and attend the complex systems course ( http://indico.ictp.it/event/9024/ ), and I would like to meet and discuss with you if I am selected for it.
  • asked a question related to Fundamental Physics
Question
32 answers
The incredible thing about Physarum polycephalum is that whilst being completely devoid of any nervous system whatsoever (not possessing a single neuron) it exhibits intelligent behaviours. Does its ability to intelligently solve problems suggest it must also be conscious? If you think, yes, then please describe if-and-how its consciousness may differ {physically or qualitatively ... rather than quantitatively} from the consciousness of brained organisms (e.g., humans)? Does this intelligent behaviour (sans neurons) suggest that consciousness may be a universal fundamental related more to the physical transfer or flow of information rather than being (as supposed by most psychological researchers) an emergent property of processes in brain matter?
General background information:
"Physarum polycephalum has been shown to exhibit characteristics similar to those seen in single-celled creatures and eusocial insects. For example, a team of Japanese and Hungarian researchers have shown P. polycephalum can solve the Shortest path problem. When grown in a maze with oatmeal at two spots, P. polycephalum retracts from everywhere in the maze, except the shortest route connecting the two food sources.[3] When presented with more than two food sources, P. polycephalum apparently solves a more complicated transportation problem. With more than two sources, the amoeba also produces efficient networks.[4] In a 2010 paper, oatflakes were dispersed to represent Tokyo and 36 surrounding towns.[5][6] P. polycephalum created a network similar to the existing train system, and "with comparable efficiency, fault tolerance, and cost". Similar results have been shown based on road networks in the United Kingdom[7] and the Iberian peninsula (i.e., Spain and Portugal).[8] Some researchers claim that P. polycephalum is even able to solve the NP-hard Steiner minimum treeproblem.[9]
P. polycephalum can not only solve these computational problems, but also exhibits some form of memory. By repeatedly making the test environment of a specimen of P. polycephalum cold and dry for 60-minute intervals, Hokkaido University biophysicists discovered that the slime mould appears to anticipate the pattern by reacting to the conditions when they did not repeat the conditions for the next interval. Upon repeating the conditions, it would react to expect the 60-minute intervals, as well as testing with 30- and 90-minute intervals.[10][11]
P. polycephalum has also been shown to dynamically re-allocate to apparently maintain constant levels of different nutrients simultaneously.[12][13] In particular, specimen placed at the center of a Petri dish spatially re-allocated over combinations of food sources that each had different protein–carbohydrate ratios. After 60 hours, the slime mould area over each food source was measured. For each specimen, the results were consistent with the hypothesis that the amoeba would balance total protein and carbohydrate intake to reach particular levels that were invariant to the actual ratios presented to the slime mould.
As the slime mould does not have any nervous system that could explain these intelligent behaviours, there has been considerable interdisciplinary interest in understanding the rules that govern its behaviour [emphasis added]. Scientists are trying to model the slime mold using a number of simple, distributed rules. For example, P. polycephalum has been modeled as a set of differential equations inspired by electrical networks. This model can be shown to be able to compute shortest paths.[14] A very similar model can be shown to solve the Steiner tree problem.[9]"
Relevant answer
Answer
Whoa! Hold your horses! Please, Richard Poznanski, elaborate on your forms of matter, because I think I smell a major disagreement. There is ONE form of matter as far as physics is concerned. Now, it can be in equilibrium or out of equilibrium. Active matter, i.e., matter driven by (bio)chemical reactions, is still matter, just in another physical state. A living cell has an active cytoskeleton which exhibits structure formation due to being driven. Now, human (!) consciousness will in my view turn out to be information processing among special, highly correlated neural states, to paraphrase Max Tegmark's views on consciousness. There will be no special kind of matter needed. But it will need a physics of patterns. What is the definition of Panexperiential Matter? Are we talking about panpsychism? My view on that is: mathematically nice, everything has two components. However, I STRONGLY reject that idea. In any case, the statement that Panexperiential Matter is needed for consciousness is invalid without physical proof of the theory.
Finally, what is an atomic microfeel? Can we also define that for the sake of the naive physicist? And commenting already on that expression: the problem is the use of technical terms in cloudy, ill-defined ways. The answer to the brain will not be found at the atomic level. A feeling is the result of a complex interplay between, at the very least, perception, memory and the cortex. It is, so to speak, a quale. In what sense can we speak about "micro"? Smaller length? Shorter thought? Less intense? In my view this expression is ill-defined at best, to put it diplomatically.
  • asked a question related to Fundamental Physics
Question
4 answers
The theory of special relativity requires that the laws of the universe be the same for objects that move with uniform velocity relative to each other. A law that changes from one frame to another is wrong. The Lorentz transformations do not guarantee only three transformations. These three quantities are length, time and mass, which are basic physical quantities. Derived quantities can be derived from them, covering the laws of mechanics only. In addition, the Lorentz transformation of the mass was found using the correspondence principle and not directly. If we want to obtain the Lorentz transformations of the derived quantities, we must find the Lorentz transformations for the fundamental physical quantities.
Relevant answer
Answer
There are also transformational laws for electric and magnetic fields.
  • asked a question related to Fundamental Physics
Question
3 answers
To what extent are we compromising Darcy's law when we characterize oil/gas flow within a petroleum reservoir?
Does the fundamental physics associated with Darcy's law not change significantly when we apply it to the above application?
Darcy’s law requires that any resistance to the flow through a porous medium should result only from the viscous stresses induced by a single-phase, laminar, steady flow of a Newtonian fluid under isothermal conditions within an inert, rigid and homogeneous porous medium.
Relevant answer
Answer
Refer to Perrine-Martin modification for multiphase flow.
  • asked a question related to Fundamental Physics
Question
27 answers
For many years I worked on the NSE under the assumption of incompressible flow. This assumption drives us to work with a simplified model (M = 0), based on the fact that
a² = (∂p/∂ρ)|_(s=const) → +∞.
Of course, any model is an approximate interpretation of reality, but this specific mathematical modelling assumption contradicts the fundamental physical limit of the speed of light.
Despite the fact that low (but finite) Mach number models were developed, the M = 0 model is still largely used both in engineering aerodynamics and in basic research (instability, turbulence, etc.) in fluid dynamics.
Can we really accept the M = 0 model, which violates a fundamental physical limit? If yes, is that supported by assessed studies that used a very low but finite Mach number for comparison?
Relevant answer
Answer
The speed of light is not really relevant in low Mach number flow. From the assumption of incompressible flow, you get infinite speed of sound. That might be more of an issue.
My subject is ventilation of tunnels. Typical air flow velocities range from 0 to 25m/s. We consider this incompressible flow. I only had an issue with this assumption, when I analysed very long tunnels (12km and more). For long tunnels, the speed of sound becomes relevant, even for very small flow velocities. The information of flow change at one portal needs time to reach the other portal.
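A rough number for the effect described above (assuming a speed of sound in air of about 340 m/s and the 12 km tunnel length mentioned):

```python
L_tunnel = 12_000.0   # tunnel length, m
c_sound = 340.0       # approximate speed of sound in air, m/s
print(f"acoustic transit time ~ {L_tunnel / c_sound:.0f} s")   # ~35 s portal to portal
```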
BTW, the same question can be asked for mechanics: Would you take relativistic effects into account when you analyse the sandwich falling from the table?
  • asked a question related to Fundamental Physics
Question
4 answers
Is there any evidence or theoretical framework to explain the values of the fundamental physical constants? In other words, could the values of the physical constants be different (contingency)? Or is there a physical necessity for them to be as they are? Obs.: It is not a metaphysical question.
Relevant answer
Answer
It is not a question of one or the other, as causality-based theology, philosophy and natural science think. For dialectics these two opposites go together! Chance (contingency) is blind only when it is not realized in a necessity! There is no determinism in the universe as physics thinks; everything in the universe is mediated not by cause and effect, but by dialectical chance and necessity. Man, as the highest developed subjective aspect (life) of blind and objective Nature (the contradiction of living and non-living matter), possesses, in an historical evolutionary way, freedom of the will to change objective Nature and also himself, reducing his contradiction with Nature.
The following quote from Frederick Engels will make it more clear: “Hegel was the first to state correctly the relation between freedom and necessity. To him, freedom is the appreciation of necessity. “Necessity is blind only in so far as it is not understood”. Freedom does not consist in the dream of independence of natural laws, but in the knowledge of these laws, and in the possibility this gives of systematically making them work towards definite ends. This holds good in relation both to the laws of external nature and those which govern the bodily and mental existence of men themselves – two classes of laws which we can separate from each other at most only in thought, but not in reality.
Freedom of the will therefore means nothing but the capacity to make decision with real knowledge of the subject. Therefore the freer a man’s judgement is in relation to a definite question, with so much the greater necessity is the content of this judgement determined; while the uncertainty, founded on ignorance, which seems to make an arbitrary choice among many different and conflicting possible decisions, shows by this precisely that it is not free, that it is controlled by the very object it should itself control. Freedom therefore consists in the control over ourselves and over external nature which is founded on knowledge of natural necessity; it is therefore necessarily a product of historical development. The first men who separated themselves from the animal kingdom were in all essentials as unfree as the animals themselves, but each step forward in civilization was a step towards freedom.” (Anti-Dühring).
  • asked a question related to Fundamental Physics
Question
29 answers
The 1998 astronomical observations of SN 1A implied a (so-called) accelerating universe. It is over 20 years later and no consensus explanation exists for the 1998 observations. Despite FLRW metric, despite GR, despite QM, despite modified theories like MOND, despite other inventive approaches, still no explanation. It is hard to believe that hundreds or thousands of physicists having available a sophisticated conceptual mathematical and physics toolkit relating to cosmology, gravity, light, and mechanics are all missing how existing physics applies to explain the accelerating expansion of space. Suppose instead that all serious and plausible explanations using the existing toolkit have been made. What would that imply? Does it not imply a fundamental physical principle of the universe has been overlooked or even, not overlooked, but does not yet form part of physics knowledge? In that case, physics is looking for the unknown unknown (to borrow an expression). I suspect the unknown principle relates to dimension (dimension is fundamental and Galileo’s scaling approach in 1638 for a problem originating with the concept of dimensions --- the weight-bearing strength of animal bone — suggests fundamental features of dimension may have been overlooked, beginning then). Is there a concept gap?
Relevant answer
Answer
Allow me to mention that the discovery of the new fundamental science of Cryodynamics, sister of Thermodynamics, has confirmed Zwicky 1929. So that the universe is stationary and eternal.
The 90 years long adherence to the "Big Bang" is a historical tragedy, a "Dark Age."
Can anyone forgive me for that statement?
  • asked a question related to Fundamental Physics
Question
5 answers
Solitons are the common element, but we are changing the structures, which are all based on the same photonic crystal. Is there a possibility of the same kind of soliton in all three structures?
Relevant answer
Answer
In general, a soliton is a nonlinear localized wave possessing a particle-like nature that maintains its shape during propagation, even after an elastic collision with another soliton. The possibility of soliton propagation in the anomalous dispersion regime of an optical material was predicted by analyzing the nonlinear Schrödinger equation (NLSE) theoretically.
In optics, a soliton can arise due to the balance between the Kerr nonlinear effect and the dispersion effect (GVD). Based on confinement in the time or space domain, one can have either temporal or spatial solitons. The Kerr effect induces an intensity-dependent refractive index of the medium, which leads to temporal self-phase modulation (SPM) and spatial self-focusing. A temporal soliton is formed when the SPM effect compensates the dispersion-induced pulse broadening. In the same way, a spatial soliton is formed when the self-focusing effect counteracts the natural diffraction-induced beam broadening.
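For reference, a standard form of the NLSE used in fiber optics and its fundamental (N = 1) temporal soliton, written here under the usual textbook assumptions (anomalous dispersion β₂ < 0, negligible loss); the notation is the common convention and is not taken from the answer above.

```latex
i\frac{\partial A}{\partial z}
  - \frac{\beta_2}{2}\frac{\partial^2 A}{\partial T^2}
  + \gamma \lvert A \rvert^2 A = 0,
\qquad
A(z,T) = \sqrt{P_0}\,\operatorname{sech}\!\left(\frac{T}{T_0}\right)
         e^{\,i\gamma P_0 z/2},
\qquad
N^2 = \frac{\gamma P_0 T_0^{2}}{\lvert\beta_2\rvert} = 1 .
```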
  • asked a question related to Fundamental Physics
Question
7 answers
How and why is velocity an internal property of a massive body?
Relevant answer
Answer
I would like to read your article. Preston Guynn
Thank you.
  • asked a question related to Fundamental Physics
Question
43 answers
Version:2.0
The question of the nature (or ontological status) of fundamental physics theories, such as spacetime in special and general relativity, and quantum mechanics, has been, for each, a permanent puzzle and a source of debates. This discussion aims to resolve the issue and submit the solution to comments.
Also, when something is correct, this is a sign that it can be proved in more than one way. In support of this question, we found evidence for the same answer on the ontological status in three diverse ways.
Please see at:
DISCLAIMER: We reserve the right to improve this text. All questions, public or not, are usually to be answered here. This will help make this discussion text more complete, and save that Space for others, please avoid off-topic. References are provided by self-search. This text may modify frequently.
Relevant answer
Answer
But observation is finite-valued, and one cannot observe all aspects of anything. What beam or wavelength or tool defines observation? So what we know depends on the tools we know. What we don't know depends on the tools we don't have. And what we don't know about what we don't know depends on what we know about the tools we use to know.
  • asked a question related to Fundamental Physics
Question
2 answers
It is widely seen that large-scale cosmic fluids should be treated as "viscoelastic fluids" in theoretical formulation of their stability analyses. Can anyone explain it from the viewpoint of fundamental physical insight?
Relevant answer
Answer
Thanks a lot.
  • asked a question related to Fundamental Physics
Question
57 answers
How have we arrived at the conclusion that the space of our Universe is 3D (and so the dimensionality of spacetime is 4D)?
I suppose this is the result of our sense of vision, which is based on both of our eyes. However, the image we perceive is the result of a mental manipulation (an illusion) of the two "images" that each of our eyes sends to our brain. This manipulation gives us the notion of depth, which is interpreted as the third dimension of space. This is why one-eyed vision (or photography, cinema, TV, ...) is actually 2D vision. In other words, when we see a 3D object and our eyes lie (approximately) on a line perpendicular to the plane formed by the object's "height" and "length", our mind infers the object's "width". The photons detectable by each of our eyes were, a time t (e.g. t = 10^-20 s) earlier, on the surface of a sphere with our eye as its center and radius t*c. As the surface of a sphere is 2D (the detectable space), if we add the dimension of time (to form spacetime) we should conclude that the dimensionality of our detectable Universe is 3D (2+1) and NOT 4D (3+1).
PS (27/8/2018): I am aware that this opinion will provoke an instinctive opposition, as it contradicts our "common sense"… but I will take the risk of opening the issue.
Relevant answer
Answer
Thank heavens, a bottle with a good cognac has 3D+1 dimensionality…
Cheers
  • asked a question related to Fundamental Physics
Question
6 answers
The final target is to study the fundamental physical processes involved in bubble dynamics and the phenomenon of cavitation, and to develop a new bubble-dynamics CFD model to study the evolution of a suspension of bubbles over a wide range of vesicularity, one that accounts for hydrodynamic interactions between bubbles while they grow, deform under shear flow conditions, and exchange mass by diffusion coarsening. Which commercial/open-source CFD tool and turbulence model would be the most appropriate ones?
Relevant answer
Answer
It would be a highly educational experience if you could try to develop your own solver in MATLAB and then rewrite it in a low-level programming environment like Fortran.
But OpenFOAM should be sufficient if you want to get a bit better at programming CFD, and ANSYS Fluent would be best if you plan on proceeding as a CFD user.
  • asked a question related to Fundamental Physics
Question
6 answers
Mark Srednicki has claimed to demonstrate the entropy ~ area law -- https://arxiv.org/pdf/hep-th/9303048.pdf
Does anyone know of an independent verification or another demonstration of this result?
Is there a proof of this law?
Relevant answer
Answer
An argument which depends on the assumption that every qubit of information, [1,0] or [0,1], can occupy one and only one 'box' on the horizon's area goes as follows. Since the sum of the boxes must equal the area, we have N = A, where N is the number of qubits. We calculate the number of ways in which we can arrange the qubits on the horizon as the sum of all possible combinations of qubit configurations, W(N) = Σ N!/[(N-k)!k!], with the sum running from k=0 to k=N. This sum equals 2^N, which suggests that we could simply put it this way: each qubit has two representations, so for N qubits there are 2^N ways to arrange the collection. Since, according to the Boltzmann principle, S = log[W], we have S = log[2^N], i.e. S ∝ N = A.
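A quick numerical sanity check of that counting (N is an arbitrary illustrative number of qubits, and k_B is set to 1):
```python
from math import comb, log

N = 64                                      # illustrative number of qubits ("boxes")
W = sum(comb(N, k) for k in range(N + 1))   # sum of all qubit configurations
assert W == 2 ** N                          # the binomial sum is exactly 2^N

S = log(W)                                  # Boltzmann principle S = log W (k_B = 1)
print(S, N * log(2))                        # identical: S = N log 2, i.e. S proportional to N = A
```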
  • asked a question related to Fundamental Physics
Question
98 answers
The still-unachieved unification of general relativity and quantum physics is a long-standing issue. Is it feasible to build a nonempty set, with a binary operation defined on it, that encompasses both theories as subsets, making it possible to join together two of their most dissimilar marks, i.e., the commutativity detectable in our macroscopic relativistic world and the non-commutativity detectable in the quantum, microscopic world? Could the gravitational field be the physical counterpart able to throw a bridge between relativity and quantum mechanics? Is it feasible that gravity stands for an operator able to reduce the countless orthonormal bases required by quantum mechanics to just one, i.e., the relativistic basis of an observer located in a single cosmic area?
What do you think?
Relevant answer
Answer
Continuing the discussion about non-commutativity and spacetime, see
and
More to the point, there is the issue of inflation in non-commutative space time, introduced in
  • asked a question related to Fundamental Physics
Question
10 answers
It seems that our progress in standard of living over the last 500 or so years is mainly connected with different forms of energy conversion and with the discovery of newer materials for that purpose. So how are the fundamental science projects of today (e.g. the detection of gravitational waves, neutrino observatories, etc.) going to contribute to that single-point program? Is this a premature question?
Relevant answer
Answer
It depends on what you consider to be fundamental physics. More efficient extraction of solar energy, or the construction of practical nuclear fusion reactors, certainly requires further developments of physics and physics-based technology. But I believe this will be physics where the fundamental properties and equations already exist, at least for the developments occurring in this millennium.
It seems to me that the current development in our ways of living is not solely based on increased use of energy, but equally much on new means of communication and information processing. Enter any bus or train anywhere in the world and look at your fellow travellers: it becomes very clear that we are currently living in the smartphone era. This has become possible due to the physics of electromagnetism (as described by Maxwell's equations from 1864) and quantum mechanics (whose fundamental equations and principles were formulated during the second half of the 1920s).
  • asked a question related to Fundamental Physics
Question
33 answers
- In the conclusion (page 14) of this paper, I suggest that "Younger physicists should also be encouraged to play a significant role in looking after and protecting our physics knowledge before they become exposed to the detrimental effects of the commercial influence on physics."
Also in the conclusion I offer an idea on how this could be initiated. However I imagine there are existing schemes that encourage university students and physicists to get involved in theoretical physics & the fundamentals of physics. Do you know of such schemes and/or have your own suggestions in this connection?
Theme for Developing new perspectives of physics: Let’s return to the traditional domain of original ideas and rigorous arguments of theoretical physics - “Physics with an ideas- and imagination-based ‘art’ where we’re dreaming, imagining and creating …” - (Physics: No longer a vocation? by Anita Mehta, vol 61 no. 6 Physics Today June 2008)
Relevant answer
Answer
Yes, provided that they follow the standard paradigm of modern physics, they can join a Big Project and live in prosperity.
Otherwise, at least they will be classified in the set of 'crackpots'.
  • asked a question related to Fundamental Physics
Question
3 answers
Currently I am beginning to work on photodiodes using wide-band-gap semiconductors like NiO and ZnO. I would therefore like to study the fundamental physics of the p-n junction that is helpful for my topic. Can anyone please suggest some books or documents?
Relevant answer
Answer
Hello Pradeep, there are many excellent books that explain the physics of the p-n junction for photodiodes. My favourite one is Sze, which is called the bible of semiconductors:
S. M. Sze, "Physics of Semiconductor Devices", John Wiley and Sons.
One of the excellent books to start with is Neamen's Semiconductor Physics and Devices. It's easy to read, and it covers everything from basic solid-state physics to solar cells and photo-diodes (e.g., drift-diffusion) to all kinds of devices (e.g., PN junction, MOSFET, BJT, solar cells), https://www.amazon.com/Semiconductor-Physics-Devices-Donald-Neamen/dp/0072321075
Also there are many other good books, for example:
1- Semiconductor Optoelectronic Devices (2nd Edition) 2nd Edition
by Pallab Bhattacharya .
2- Semiconductors for Optoelectronics Basics and Applications, Authors: Balkan, Naci, Erol, Ayşe
3- Semiconductor Optoelectronic Devices by Hadis Morkoç,
I hope this will help you.
Best Regards
  • asked a question related to Fundamental Physics
Question
24 answers
What is the evidence that the speed of light is constant all over the universe? Is it the same value even in places in the universe occupied by dark energy?
Relevant answer
Answer
Is the speed of light constant all over the universe and equal to what we have measured?
The principle that physical laws as we know them on Earth are the same throughout the universe is an assumption. Physicists, astronomers and cosmologists make that assumption because they have no other option: it is in principle untestable.
In the International System of Units (SI Units) “one meter” is defined as “the distance light in a vacuum travels in 1/299792458 seconds”. “The speed of light” is then 299792458 meters per second; it is defined to be a constant.  It could be different (though still expressed by that same number!...) in different places or at different times only if “one second” here and now were different from “one second” elsewhere and elsewhen. But how could we compare them? Obviously, we cannot! Suppose “one second” were different when the universe was new (or, what amounts to the same thing, at vast distances from us). What would that even mean? “One second” is defined, in SI units, in terms of a particular spectrum frequency of a Cesium atom. In the early stages of the evolution of the Universe THERE WERE NO ATOMS! So what could it possibly mean, to compare the speed of light THEN to the speed of light HERE AND NOW??
"there's speculation, and then there's more speculation, and then there's cosmology" – Michio Kaku
  • asked a question related to Fundamental Physics
Question
4 answers
1) How can one describe short-range and long-range ferromagnetic ordering by analysing M(T,H) data?
2) Is superexchange always short-range?
3) How does one identify the type of exchange interaction in the magnetism shown by a system?
4) Does superexchange have some relationship with magnetic parameters (such as Curie temperature, doping concentration, carrier concentration)?
Relevant answer
Answer
Hi!
The M(T,H) traces can also be impacted by anisotropy parameters, so it is not straightforward to decide between short- and long-range ordering from such traces alone. However, as said by M. El Hafidi, an additional anomaly in the susceptibility X(T), or a deviation of 1/X from the Curie-Weiss law above Tc with a net curvature, is the best way to anticipate short-range ordering, since the anisotropic contributions normally vanish in the Tc temperature range. Also, the crystal structure (cubic versus lower symmetry) could be a cause of non-isotropic exchange interactions.
Kind regards
Daniel
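As a practical follow-up to Daniel's suggestion of checking 1/X against the Curie-Weiss law above Tc, here is a minimal fitting sketch; the synthetic chi(T) array and the temperature range are placeholders to be replaced with the reader's own M(T,H)-derived susceptibility data.
```python
import numpy as np

# Placeholder susceptibility data well above Tc; replace with your own measurements.
T = np.linspace(320, 400, 41)          # K (assumed paramagnetic range above Tc)
C_true, theta = 1.2, 300.0             # illustrative Curie constant and Weiss temperature
chi = C_true / (T - theta)             # ideal Curie-Weiss behaviour (no short-range order)

inv_chi = 1.0 / chi
lin = np.polyfit(T, inv_chi, 1)        # Curie-Weiss fit: 1/chi = (T - theta)/C is linear in T
quad = np.polyfit(T, inv_chi, 2)       # quadratic term probes the curvature above Tc

print("C =", 1.0 / lin[0], " theta =", -lin[1] / lin[0])
print("curvature coefficient:", quad[0])   # ~0 for pure Curie-Weiss; a clear nonzero value
                                           # signals the kind of deviation discussed above
```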
  • asked a question related to Fundamental Physics
Question
6 answers
Erik Verlinde said that this emergent gravity is constructed using the insights of string theory, black hole physics and quantum information theory (all theories that are themselves struggling to breathe). Our appreciation to Verlinde for his daring step of constructing emergent gravity on dead theories; we loudly take inspiration from him...!!!
Relevant answer
Answer
@ Adrian Sfarti:
My dear Adrian Sfarti, do you have any objection if I comment? Oh, it is worthless; he constructed his theory on string theory. Go and read it once again, empty vessel...
  • asked a question related to Fundamental Physics
Question
34 answers
From experimental evidence, it is well known that a desynchronization of clocks appears between different altitudes on Earth (simultaneity is relative). However, the simultaneity (absolute for the sky) of the Sun or the Moon (over millions of years, for example) is a fact.
Shouldn't the concept of relativity be questioned?
Relevant answer
Answer
Is that meant to illustrate the coherence level of your way of thinking?
  • asked a question related to Fundamental Physics
Question
55 answers
Professor Michael Longo (University of Michigan in Ann Arbor) and Professor Lior Shamir (Lawrence Technological University) have shown, from experimental data, that there is an asymmetry between right- and left-twisted spiral galaxies. Its value is about 7%. In the article:
ROTATING SPACE OF THE UNIVERSE, AS A SOURCE OF DARK ENERGY AND DARK MATTER
it is shown that the source of dark matter can be the kinetic energy of rotation of the space of the observed Universe. At the same time, the contribution of the Coriolis force is 6.8%, or about 7%. The close agreement between the value of the asymmetry between right- and left-twisted spiral galaxies and the value of the contribution of the Coriolis force to the kinetic energy of rotation of the space of the observable Universe is strong indirect evidence (from experimental data!) that the space of the observed Universe rotates.
Relevant answer
Answer
@Valery Timkov
There is stronger evidence that all these considerations need revision. All your thinking is based upon a 4D spacetime.
The observation of hyperspherical acoustic waves (waves that have a footprint along the distance dimension), challenging both GR and the Copernican Principle, was made in the SDSS BOSS dataset, indicating clearly that we live in a 5D spacetime in which all 4 spatial dimensions are non-compact.
The SN1a survey distances, corrected by an epoch-dependent G, indicate that the hyperspherical surface where we exist is traveling at the speed of light.
You can easily download the SDSS dataset and see that the observations are correct. I made a video to help with setting up the Anaconda environment.
You can also watch the video below containing an alternative model for Cosmogenesis based on the evidence found in both SDSS and SN1a surveys. It clearly shows the effect of many Bangs (in a crescendo) on the initial hyperspherical Universe
It is all there in Black and White (and sometimes in color..:)
#####################################################
Check HU (the Hypergeometrical Universe Theory) view of Cosmogenesis.
The Universe maps associated with this video are derived directly from the SDSS (Sloan Digital Sky Survey) datasets. That is, the evidence of acoustic waves along the DISTANCE dimension was there for 10 years, and SDSS couldn't see it because of ideology. They believed, and had blind faith, that the Universe is a 4D spacetime. A 4D spacetime requires any position to be equivalent to any other position.
HU proposes a 5D Spacetime and expects hyperspherical acoustic waves at the beginning of times. Those hyperspherical acoustic waves would take place along the DISTANCE dimension. That is what astronomical observations support. They don’t support General Relativity, Inflation Theory, Dark Energy etc.
Below is the github repository and video to help setting up the python environment:
  • asked a question related to Fundamental Physics
Question
22 answers
An article in Nature, "Undecidability of the spectral gap" (arXiv:1502.04573 [quant-ph]), shows that finding the spectral gap from a complete quantum-level description of a material is undecidable (in the Turing sense). No matter how completely we can describe a material analytically on the microscopic level, we can't predict its macroscopic behavior. The problem has been shown to be uncomputable, as no algorithm can determine the spectral gap. Even if there is a way to make a prediction, we can't determine what the prediction is, since for a given program there is no method to determine whether it halts.
Does this result eliminate once and for all the possibility of a theory of everything based on fundamental physics? Is quantum physics undecidable? Is this an epistemic result proving that undecidability places a limit on our knowledge of the world?
Relevant answer
Answer
No, but one may change the research direction for the theory of everything.
  • asked a question related to Fundamental Physics
Question
34 answers
I have a question regarding an unusual (thought) system.
Some years ago, at a Russian forum, we discussed a thought device that, as its author claimed, can provide one-directional motion due only to internal forces. The puzzle was resolved by Kirk McDonald from Princeton University; I attach Kirk's solution. I wish to say that the author of the paradox is Georgy Ivanov, not me.
Anyway, Kirk found that there is no resulting directional force. But one puzzle of this device remains: the center of mass of the device moves (in a closed orbit) due only to internal forces. I marked this result of McDonald's in the file.
In this connection, two questions arise:
1. Why does the center of mass move even though the total momentum is conserved?
2. If the center of mass can move and this motion is created by internal forces, is it possible to change the design of the device to provide one-directional motion?
Formally there are no obstacles to realizing it; the total momentum is conserved... Could someone give answers to these questions?
This thought device does not work on the action-reaction principle, and if a similar device could be made as hardware, it could be a good prototype for an interstellar flight thruster.
Relevant answer
Answer
Dear Theophanes,
Classical electrodynamics has a limited domain of application, and it cannot be applied to concepts such as a point particle or the mass of such a particle. That framework belongs to other fields, such as QED, where the renormalization of the charge or the mass is handled.
  • asked a question related to Fundamental Physics
Question
183 answers
How did the pull of gravity on the planet Mercury in Einstein's spacetime differ in value from Newton's? Was it simply via the spacetime fabric adjusting this value?
Thanks:)
Relevant answer
Answer
Your interpretation is utterly incoherent and unsupported by anything in the text. Einstein merely says a point moving in k must have a value of x' which is constant. In other words, the value of x' for a point moving with k must be constant. Mere obvious kinematics, following from the definition of velocity. Nothing is ever said about x' being "attached" to k. Nor is it true that it is measured using moving rods. Rather, x is measured using rods at rest with K, so is vt, and therefore the difference between the two is also a distance measured by rods at rest with respect to K. Your screaming "Nonsense" merely shows lack of understanding. The idea that such a distance "cannot be measured" is again a figment of your imagination: there is no difficulty whatever in measuring the distance between two moving points.
This discussion has, of course, no meaning: your only point is to denigrate relativity, for purposes best known to you. I have stated the truth of the matter, by following the actual original text (which you were afraid to quote) as closely as possible. For any interested readers who might have been confused by your nonsense, this should be enough. You I do not think worth an additional second of my time.
  • asked a question related to Fundamental Physics
Question
63 answers
Schrödinger's self-adjoint operator H is crucial for the current quantum model of the hydrogen atom. It essentially specifies the stationary states and energies. Then there is the Schrödinger unitary evolution equation, which tells how states change with time. In this evolution equation the same operator H appears. Thus H provides the "motionless" states, H gives the energies of these motionless states, and H is inserted into a unitary law of motion.
But this unitary evolution fails to explain or predict the physical transitions that occur between stationary states. Therefore, to fill the gap, the probabilistic interpretation of states was introduced. We then have two very different evolution laws. One is the deterministic unitary equation, and the other consists of random jumps between stationary states. The jumps openly violate the unitary evolution, and the unitary evolution does not allow the jumps. But both are simultaneously accepted by Quantism, creating a most uncomfortable state of affairs.
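To spell out why the jumps cannot come from the unitary law itself, here is a toy numerical illustration (a two-level diagonal H with illustrative energies; hbar set to 1): under U(t) = exp(-iHt/hbar), a stationary state only picks up a phase, so its populations never change and no transition to another stationary state can occur.
```python
import numpy as np
from scipy.linalg import expm

E1, E2 = -13.6, -3.4                        # illustrative energies (eV), e.g. hydrogen n=1 and n=2
H = np.diag([E1, E2]).astype(complex)       # toy Hamiltonian, diagonal in the stationary basis
psi0 = np.array([1.0, 0.0], dtype=complex)  # start in the first stationary state
hbar = 1.0                                  # toy units

for t in (0.0, 1.0, 10.0, 100.0):
    psi_t = expm(-1j * H * t / hbar) @ psi0  # unitary evolution generated by H
    print(t, np.abs(psi_t) ** 2)             # populations stay exactly [1, 0]: no jump ever occurs
```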
And what if the quantum evolution equation is plainly wrong? Perhaps there are alternative ways to use H.
Imagine a model, or theory, where the stationary states and energies remain the very same specified by H, but with a continuous evolution different from the unitary one, in which an initial stationary state evolves in a deterministic manner into a final stationary state, with energy being continuously absorbed and radiated between the stationary energy levels. In this natural theory there is no use, and no need, for a probabilistic interpretation. The natural model for hydrogen, comprising a space of states, an energy observable and an evolution equation, is explained in
My question is: with this natural theory of atoms already elaborated, what are the chances of its acceptance by mainstream physics?
Professional scientists, in particular physicists and chemists, are well versed in the history of science, and modern communication hastens the diffusion of knowledge. Nevertheless, important scientific changes seem to require a lengthy process, including the disappearance of most leaders, as was noted by Max Planck: "They are not convinced, they die".
Scientists seem particularly conservative and incapable of admitting that their viewpoints are mistaken, as was the case time ago with flat Earth, Geocentrism, phlogiston, and other scientific misconceptions.
Relevant answer
Answer
Hello Enders
You state that "According to Schrödinger 1926, there are no quantum jumps." Please allow me the following comments.
A set of articles by various authors are collected in a book edited by Wolfgang Pauli
Pauli, W. (ed.) - Niels Bohr and the Development of Physics. Pergamon Press, London. 1955.
Among the articles there is one by Werner Heisenberg
The Development of the Interpretation of the Quantum Theory
The following lines can be found in the article (page 14 of the book)
At the invitation of Bohr, Schrodinger visited Copenhagen in September, 1926, to lecture on wave mechanics. Long discussions, lasting several days, then took place concerning the foundations of quantum theory, in which Schrodinger was able to give a convincing picture of the new simple ideas of wave mechanics, while Bohr explained to him that not even Planck's Law could be understood without the quantum jumps. Schrodinger finally exclaimed in despair:
"If we are going to stick to this damned quantum-jumping [verdammte Quantenspringerei], then I regret that I ever had anything to do with quantum theory,"
to which Bohr replied:
"But the rest of us are thankful that you did, because you have contributed so much to the clarification of the quantum theory."
Maybe the above paragraph is the ultimate source of your statement.
The displeasure shown by Schrodinger has a different interpretation. It may mean that he understood quantum jumps, that he had a clear picture of the reach of the Schrodinger time-dependent equation (STDE), and in particular that the STDE contradicted quantum jumps. Therefore he knew that something very fundamental was missing from his elegant STDE. Nowhere did he say anything equivalent to "quantum jumps do not exist". He was annoyed at having to accept the existence and crucial phenomenological role of quantum jumps in the description of the basic atomic phenomena of absorption and radiation.
If you have a different historical source to justify your interpretation, please share the reference with us, as it would be extremely interesting.
With most cordial regards,
Daniel Crespin
  • asked a question related to Fundamental Physics
Question
1 answer
MY EMAIL TO NSF:
My name is Andrei-Lucian Drăgoi and I am a Romanian pediatrician specialist, also undertaking independent research in digital physics and informational biology. Regarding your project called "Ideas Lab: Measuring "Big G" Challenge" (which I found at this link: http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=505229&org=PHY&from=home), I want to propose a USA-Romania collaboration in this direction, based on my hypothesis that each chemical isotope may have its own "big G" imprint.
The idea is simple. Analogously to the photon, the hypothetical graviton may actually have a quantum angular momentum measured by a gravitational Planck-like quantum, which I have denoted h_eg, and a quantum G scalar G_q = f(h_eg). Although the Planck constant (h) is constant, h_eg may not be constant and may have a slight variability that can depend on many factors, including the intranuclear energetic pressures measured by the average binding energy per nucleon (E_BN) in any (quasi-)stable nucleus. I have proposed a simple first-degree function that can generate a series hs_eg(E_BN) as a scalar function of E_BN, which also implies a series of quantum G scalars Gs_q(E_BN) = f[hs_eg(E_BN)], itself a function of E_BN, as it depends on hs_eg(E_BN). In conclusion: every isotope may have its own G "imprint", and that is one possible explanation (the suspected, so-called "systematic error") for the variability of the experimental G values from one team to another. I have called this hypothesis the multiple-G hypothesis (mGH). I also propose a series of systematic experiments to verify the mGH. As I don't work as a physicist (I am a Pediatrics specialist working in Bucharest, Romania) and only do independent research in theoretical physics, I don't have access to experimental resources, so I propose a collaboration between the USA and Romania, with experiments conducted either in the USA or in Romania (at the "Horia Hulubei National Institute for R&D in Physics and Nuclear Engineering (IFIN-HH)", Magurele, Romania: http://www.nipne.ro)
I have attached an article (in pdf format) that contains my hypothesis and its arguments (exposed in the first part of this paper): this work can also be downloaded from the link http://dragoii.com/BIDUM3.0_beta_version.pdf
My main research pages are:
Please send me a minimal feedback so I know that my message was received.
I am open to any additional comment/suggestion/advice you may have on my idea about big G.
===============================
THE REPLY FROM NSF:
Dear Dr. Dragoi,
   Thank you for your interest in our programs. Unfortunately, NSF does not fund research groups based outside the US. Should you succeed in your goal of creating a Romanian-US collaboration, please have your American collaborators contact NSF directly.
Best regards,
Pedro Marronetti
====================================
FINAL CONCLUSION: If you are interested in this collaboration, please send feedback to dr.dragoi@yahoo.com so that we may apply to the NSF challenge by 26 October 2016 (which is the deadline).
Relevant answer
  • asked a question related to Fundamental Physics
Question
2 answers
I'm going to put an insulator (playdough) on some copper metal. I was wondering how this would affect charge collection from a fundamental physics standpoint. These free electrons (the source) would be coming from, or already be on, the surface. I was thinking they would go around the insulator but remain on the surface. Am I correct in this assumption?
Relevant answer
Answer
What you need is to calculate the penetration depth. 
And you might need to solve the Fresnel equations for your case. 
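If by "penetration depth" the answer means the electromagnetic skin depth of the copper, a rough evaluation looks like the sketch below; the frequency is purely illustrative and the conductivity is the standard tabulated value for copper.
```python
import math

mu0   = 4 * math.pi * 1e-7     # H/m, vacuum permeability
sigma = 5.8e7                  # S/m, conductivity of copper (tabulated)
f     = 1e6                    # Hz, illustrative frequency
omega = 2 * math.pi * f

delta = math.sqrt(2 / (mu0 * sigma * omega))   # classical skin depth
print(delta * 1e6, "micrometres")              # about 66 um at 1 MHz
```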
  • asked a question related to Fundamental Physics
Question
227 answers
In Chapter V, of The Nature of the Physical World, Arthur Eddington, wrote as follows:
Linkage of Entropy with Becoming. When you say to yourself, “Every day I grow better and better,” science churlishly replies—
“I see no signs of it. I see you extended as a four-dimensional worm in space-time; and, although goodness is not strictly within my province, I will grant that one end of you is better than the other. But whether you grow better or worse depends on which way up I hold you. There is in your consciousness an idea of growth or ‘becoming’ which, if it is not illusory, implies that you have a label ‘This side up.’ I have searched for such a label all through the physical world and can find no trace of it, so I strongly suspect that the label is non-existent in the world of reality.”
That is the reply of science comprised in primary law. Taking account of secondary law, the reply is modified a little, though it is still none too gracious—
“I have looked again and, in the course of studying a property called entropy, I find that the physical world is marked with an arrow which may possibly be intended to indicate which way up it should be regarded. With that orientation I find that you really do grow better. Or, to speak precisely, your good end is in the part of the world with most entropy and your bad end in the part with least. Why this arrangement should be considered more creditable than that of your neighbor who has his good and bad ends the other way round, I cannot imagine.”
See:
The Cambridge philosopher Huw Price provides a very engaging contemporary discussion of this topic in the following short video of his 2011 lecture (27 min.):
This is well worth a viewing. Price has claimed that the ordinary or common-sense conception of time is "subjective", partly by including an emphatic distinction between past and future, the idea of "becoming" in time, or a notion of time "flowing." The argument arises from the temporal symmetry of the laws of fundamental physics, in some contrast and tension with the second law of thermodynamics. So we want to know if "becoming" in particular is merely "subjective," and whether this follows on the basis of fundamental physics.
Relevant answer
Answer
According to Kant it is just our mind that perceives time as directional. The world as it is in itself is like our mind unrolling a carpet, not a carpet being woven in front of us (not his analogy).
I agree with Kant on most things, but I suspect he got this one wrong.
  • asked a question related to Fundamental Physics
Question
6 answers
I returned to Einstein's 1907 paper and found that the final conclusion offered at the end apparently omitted one last step. Namely, the lowered value of the speed of light c of a horizontal light ray downstairs, when watched from above, is absolutely correct; only the conclusion drawn from this observation, that the speed of light is indeed reduced downstairs, was premature.
This is because the light ray hugging the floor downstairs is hugging a constantly receding floor despite the fact that the distance is constant.
(In the same vein, the increased speed of light of a light ray hugging the ceiling of the constantly accelerating rocketship – not mentioned by Einstein – holds true for a ceiling that is constantly approaching the lower floor despite the fact that the distance is constant.) The correctly predicted "gravitational redshift" – and the opposite blueshift in the other direction – qualify as a proof that this thinking is sound.
N.B.: The proposal is perhaps not as stupid as it sounds, because the theory employed here is solely the special theory of relativity (which by definition presupposes global constancy of c). This fact was of course constantly on Einstein's mind and can explain why he fell silent on the topic of gravitation for 3 ½ years.
When he returned to it in mid-1911, writing down explicitly the originally unfinished c-modifying equation of 1907, he may have been hoping in the back of his mind that someone could spot the error that he still felt might be involved. It is not an error, only the omission of a final step.
Now my dear readers have the same chance of offering their help regarding my above "constant-c solution" to this conundrum of Einstein’s, which perhaps is the most important one of history.
Relevant answer
Answer
Einstein's redshift (gravitational redshift) doesn't involve light at all, but rather an observer's clock. Since time runs slower or faster depending on the intensity of the gravitational field (on the height above the surface of a planet or a star, for instance), you will see more or fewer wave crests during the period of your clock, i.e. a different frequency (and consequently a red- or blueshift).
Einstein's redshift doesn't deal at all with the speed of light, which doesn't vary. In relativity it is constant because we have found that it is.
So what's the problem with Einstein and the different velocities of light? I don't understand...
The equivalence principle in its strong version (Einstein's equivalence principle, the weak one is that of Galileo) states that no experiment can be run to distinguish an inertial frame of reference in a gravitational field from a frame in constant acceleration outside of a gravitational field.
For instance, also in the second case you would notice a deflection of a light beam as in a gravitational field.
May this help?
  • asked a question related to Fundamental Physics
Question
3 answers
For example, carbon (atomic number 6, atomic mass 14) = nitrogen (atomic number 7, atomic mass 14) + 1 beta particle (electron). In this example, how does the nitrogen get another electron to neutralize its charge (number of protons = number of electrons)?
regards
Relevant answer
Answer
The electron (beta particle) will cause many ionizations as it slows down. These beta-produced ions will neutralize, as will the parent/progeny atom.
One could say the beta particle returns to produce the neutralization. Electrons are indistinguishable, so who can say that it did not?
  • asked a question related to Fundamental Physics
Question
56 answers
Fundamental Physicists.
Relevant answer
Answer
In the Kaluza theory (later Kaluza-Klein), it is related to the velocity of motion in a 4th spatial direction in the universe. A circular direction, with very small circumference. This is an idea which goes back to the Finnish theoretical physicist Gunnar Nordström in 1914, and has lingered on ever since. However, with no experimental confirmations.
  • asked a question related to Fundamental Physics
Question
4 answers
My thesis subject is "study of ephemeral organizational phenomena inside meta-organizations".
I'm currently looking for articles that connect fundamental physics and management science.
I am also looking for articles treating timespace as a whole instead of time or space separately, mostly in management science.
If you have any suggestions about my subject, feel free to send me your advice!
Your help will be highly appreciated !
Relevant answer
Answer
Hello,
Very interesting topic. It is not easy to find material.
Maybe this paper can help you, although it is from instructional science:
good luck
  • asked a question related to Fundamental Physics
Question
56 answers
Are the fundamental physical constants rational numbers?  I think it would be true to say we cannot make measurements that  are non-rational.
Relevant answer
Answer
The Planck constant is a dimensionful quantity; hence one has to specify which units it should be measured in before the question makes sense. The most natural units to use are the Planck units. Expressed in these units, the speed of light c, the Newton constant of gravity G_N, and the reduced Planck constant ħ are all unity. Hence, in these units the Planck constant equals 2π, which is not rational but transcendental.
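A small numerical illustration of this point (the ħ value is the approximate CODATA figure; in Planck units the unit of action is ħ itself):
```python
import math

hbar = 1.054571817e-34    # J s, reduced Planck constant (approximate CODATA value)
h = 2 * math.pi * hbar    # by definition, h = 2*pi*hbar

# In Planck units the unit of action is hbar, so the numerical value of h becomes:
print(h / hbar, 2 * math.pi)   # 6.283185..., i.e. h = 2*pi, a transcendental number
```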
  • asked a question related to Fundamental Physics
Question
11 answers
Over the years, many physicists have wondered whether the fundamental constants of nature might have been different when the universe was younger. If so, the evidence ought to be out there in the cosmos where we can see distant things exactly as they were in the past.
One thing that ought to be obvious is whether a number known as the fine structure constant was different. The fine structure constant determines how strongly atoms hold onto their electrons and is an important factor in the frequencies at which atoms absorb light.
If the fine structure were different earlier in the universe, we ought to be able to see the evidence in the way distant gas clouds absorb light on its way here from even more distant objects such as quasars.
That debate pales in comparison to new claims being made about the fine structure constant. In 2010, John Webb at the University of New South Wales, one of the leading proponents of the varying-constant idea, and a few colleagues said they had new evidence from the Very Large Telescope in Chile that the fine structure constant was different when the universe was younger.
While data from the Keck telescope indicate the fine structure constant was once smaller, the data from the Very Large Telescope indicate the opposite, that the fine structure constant was once larger. That's significant because Keck looks out into the northern hemisphere, while the VLT looks south.
This means that in one direction the fine structure constant was once smaller, and in exactly the opposite direction it was once bigger. And here we are in the middle, where the constant is what it is (about 1/137.03599…).
So, do you think that fine structure constant varies with direction in space?
Refs:
arxiv.org/abs/1008.3907: Evidence For Spatial Variation Of The Fine Structure Constant
arxiv.org/abs/1008.3957: Manifestations Of A Spatial Variation Of Fundamental Constants On Atomic Clocks, Oklo.
Included here you can also find a 2004 ApJ paper by John Bahcall, who is a proponent of varying fine structure constant. (URL: http://www.sns.ias.edu/~jnb/Papers/Preprints/Finestructure/alpha.pdf)
Relevant answer
Answer
Since the value of the fine structure constant is the resultant of a combination of 4 other constants, alpha = e^2/(2 eps_0 h c), which can mutually define each other, if alpha were found to have a different value, this would also mean that at least one of the other related constants has a different value.
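For reference, here is a quick numerical check of that combination with approximate CODATA values; changing any one of e, eps_0, h or c changes alpha accordingly.
```python
e    = 1.602176634e-19     # C,   elementary charge
eps0 = 8.8541878128e-12    # F/m, vacuum permittivity
h    = 6.62607015e-34      # J s, Planck constant
c    = 299792458.0         # m/s, speed of light

alpha = e**2 / (2 * eps0 * h * c)   # fine structure constant
print(alpha, 1 / alpha)             # ~0.0072974, ~137.036
```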
  • asked a question related to Fundamental Physics
Question
76 answers
Also known as the reversibility paradox, this is an objection to the effect that it should not be possible to derive an irreversible process from time-symmetric dynamics, or that there is an apparent conflict between the temporally symmetric character of fundamental physics and the temporal asymmetry of the second law.
It has sometimes been held in response to the problem that the second law is somehow "subjective" (L. Maccone) or that entropy has an "anthropomorphic" character. I quote from an older paper by E.T. Jaynes,
"After the above insistence that any demonstration of the second law must involve the entropy as measured experimentally, it may come as a shock to realize that, nevertheless, thermodynamics knows no such notion as the "entropy of a physical system." Thermodynamics does have the notion of the entropy of a thermodynamic system; but a given physical system corresponds to many thermodynamic systems" (p. 397). 
The idea here is that there is no way to take account of every possible degree of freedom of a physical system within thermodynamics, and that measures of entropy depend on the relevancy of particular degrees of freedom in particular studies or projects. 
Does Loschmidt's paradox tell us something of importance about the second law? What is the crucial difference between a "physical system" and a "thermodynamic system?" Does this distinction cast light on the relationship between thermodynamics and measurements of quantum systems?  
Relevant answer
Answer
Good question! Jaynes jumped from the necessity of a coarse-grained description to claims of "subjectivity". Of course subjectivity is important in science, but for other reasons. The Second Law requires only a coarse-grained description to satisfy the micro-macro distinction. Loschmidt's paradox refers to the micro dynamics. The Second Law refers to the macro dynamics. The paradox has nothing to do with the Law itself, but with its use as an arrow of time. The paradox would imply that there is no arrow of time at the micro level. This is possibly true, although Prigogine tried to insert the arrow of time from a (proposed) asymmetry of quantum operators. Therefore, there would be no "heat death" at the micro level. Time and entropy increase would have to be defined in the interface of micro and macro. There may be implications for epistemology, but they are not automatic!
  • asked a question related to Fundamental Physics
Question
105 answers
Regarding our current understanding of quantum mechanics, especially the interpretation of the theory of measurements in terms of parallel universes.
Theoretical physics, quantum mechanics, Fundamental physics 
Relevant answer
Answer
It is difficult to understand QM, because QM is an axiomatic conception: QM contains the axiomatic object, the wave function. Nobody knows what the wave function is. We had the same situation with thermodynamics, where there was an axiomatic object: the thermogen.
In the theory of fluids, the wave function is a method of describing an ideal fluid, and one may explain quantum mechanics as a kind of gas dynamics. Indeed, molecules of an ordinary gas move stochastically. This stochastic motion is a result of the interaction between molecules (collisions). It is clear that the kind of stochasticity depends on the form of the interaction between molecules. One can introduce such an interaction between the gas molecules that the gas-dynamic equations, written in terms of the wave function, coincide with the Klein-Gordon equation. See for details "Quantum mechanics as dynamics of continuous medium". http://gasdyn-ipm.ipmnet.ru/~rylov/qmdcmr1e.pdf
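For readers who want to see what "a wave function as a fluid description" looks like in formulas, the standard non-relativistic analogue is the Madelung transformation (the paper linked above treats the relativistic Klein-Gordon case; the textbook Schrödinger version is shown here only for orientation):
```latex
% Madelung (1927): write the wave function in polar form, \psi = \sqrt{\rho}\, e^{iS/\hbar},
% substitute it into the Schroedinger equation and separate real and imaginary parts:
\begin{align*}
  \partial_t \rho + \nabla\cdot\!\left(\rho\,\frac{\nabla S}{m}\right) &= 0
  && \text{(continuity equation of a fluid with velocity } \nabla S/m\text{)} \\
  \partial_t S + \frac{(\nabla S)^2}{2m} + V
    - \frac{\hbar^2}{2m}\,\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}} &= 0
  && \text{(Hamilton--Jacobi equation with an extra ``quantum potential'' term)}
\end{align*}
```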
  • asked a question related to Fundamental Physics
Question
26 answers
The Smirnov-Rueda team claimed to have measured that the speed of the bound electromagnetic field is finite but larger than the speed of light [1-3]. However, their result needs to be tested further.
A direct way to measure the speed of the electromagnetic force was presented in [4]. In this approach, three stationary charged balls or magnets (M1, M2 and M3) interact with each other; when M3 is moved, M1 and M2 will be moved by the motion of M3. If the distances between M1 and M3 and between M2 and M3 are L1 and L2 respectively, then by observing the times at which M1 and M2 start to move, the speed of the electromagnetic force can be calculated as v = (L1 - L2)/(t1 - t2), where t1 and t2 are the times at which M1 and M2 start to move, respectively.
Thus the speed of the electromagnetic force can be observed directly.
M3 can be a transformer. When its current is stopped, its magnetic field disappears. M1 and M2 are set at positions such that they will be moved by gravity as soon as the magnetic force disappears. In this case, the speed of propagation of the magnetic field is measured.
This is a simple experiment. Only three magnets (or charged balls) are needed. But to observe the times at which the magnets start to move, a high-speed camera is needed. As ΔL = L1 - L2 is on the level of about 30 cm, the timing precision needs to be better than 10^-11 seconds. However, a typical high-speed camera can resolve times with a precision of about 10^-12 seconds.
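A back-of-the-envelope check of this timing requirement (illustrative numbers only): if the disturbance propagated at c, the arrival-time difference over ΔL = 30 cm would be about 1 ns, so resolving a deviation from c indeed calls for timing one to two orders of magnitude finer, of order 10^-11 s.
```python
c  = 299792458.0      # m/s
dL = 0.30             # m, the difference L1 - L2 assumed in the proposal

dt_at_c = dL / c      # arrival-time difference if the force propagates at exactly c
print(dt_at_c)        # ~1.0e-9 s
# To distinguish a propagation speed somewhat different from c, the timing resolution
# must be much smaller than this, i.e. of the order of 1e-11 s, as stated above.
```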
This experiment is fundamental for physics. Besides the Smirnov-Rueda team's work, there is no experimental, generally accepted conclusion for the speed of the electromagnetic force. Clearly, if there were such a result, the Smirnov-Rueda team's work could not have been published.
References
[1] Kholmetskii A. L. et al., 2007, Experimental test on the applicability of the standard retardation condition to bound magnetic fields, J. Appl. Phys. 101, 023532
[2] Kholmetskii A. L., Missevitch O. V. and Smirnov-Rueda R., 2007, Measurement of propagation velocity of bound electromagnetic fields in near zone, J. Appl. Phys. 102, 013529
[3] Missevitch O. V., Kholmetskii A. L. and Smirnov-Rueda R., 2011, Anomalously small retardation of bound (force) electromagnetic fields in antenna near zone, Europhys. Lett. 93, 64004
[4] Zhu Y., 2011, Measurement of the speed of gravity, arXiv:1108.3761v8
Relevant answer
Answer
Let me interpret your "the speed of the e&m force" as "the speed of propagation of the e&m field". And the latter, I understand, is equal to c, not infinite.
  • asked a question related to Fundamental Physics
Question
10 answers
Or, for that matter, Ideal Gas pressure? It can't be gravity or weak nuclear (both too weak). It can't be electromagnetic, as neutrons can exhibit it. It can't be strong nuclear, as that is always attractive, and doesn't act between electrons, anyway. So, what's going on?
Relevant answer
Answer
One more thought. An ideal gas has no potential to give a spatial interaction within the wave functions. The properties are determined only by the statistics of the momentum (or energy) states. An ideal Fermi gas has large pressure because the momentum states fill in from low to high momentum.
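To attach numbers to the statement that an ideal Fermi gas exerts a large pressure purely through the filling of momentum states, here is a quick evaluation of the standard zero-temperature formulas, E_F = hbar^2 (3 pi^2 n)^(2/3) / (2 m) and P = (2/5) n E_F, for an illustrative electron density roughly that of a simple metal:
```python
import math

hbar = 1.054571817e-34     # J s
m_e  = 9.1093837015e-31    # kg, electron mass
n    = 8.5e28              # m^-3, illustrative conduction-electron density (roughly copper)

E_F = hbar**2 * (3 * math.pi**2 * n) ** (2 / 3) / (2 * m_e)   # Fermi energy
P   = 0.4 * n * E_F                                           # P = (2/5) n E_F, T = 0 ideal Fermi gas

print(E_F / 1.602176634e-19, "eV")   # ~7 eV
print(P / 1e9, "GPa")                # tens of GPa, with no interaction potential at all
```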
  • asked a question related to Fundamental Physics
Question
9 answers
A glance through our cosmic neck of the woods reveals that matter in the Universe is distributed in a highly structured fashion, why is it so?
Regards,
Bhushan Poojary
Relevant answer
Answer
On the fractality of the universe I have a paper in:  arXiv:0905.3966
"Fractal universe and the speed of light: Revision of the universal constants"
 Published in New Advances in Physics; journal ref: New Advances in Physics, 2(2), Sept. 2008, pp. 109-113.
This paper is also available here on RG, in my Publications.
The question of why the universe is not composed of black holes only, with all matter "collapsed" by gravitation, is explained by kinematics: the Moon does not fall onto the Earth because it is in dynamic equilibrium, as viewed from a classical mechanics point of view. And the same happens with galaxies. Nevertheless, looking at one isolated galaxy, we see that its core is a huge black hole, where some collapse has occurred to form it.
  • asked a question related to Fundamental Physics
Question
10 answers
In the attached paper from the Gauge Institute, the definition of differential in e-calculus is (see page 8):
F'(x)={f(x+e)-f(x)}/e (1)
where e is defined as an infinitesimal (i.e. it should be smaller than any positive number but greater than zero).
From the definition in (1) it should be clear that, as e approaches zero, the function f(x) is assumed to behave locally like a slope (linearly). But this assumption has problems with the real data of many phenomena: when the observation scale becomes smaller and smaller, the data behave not like a linear slope but like Brownian motion. Other applications, such as earthquake data, stock market price data, etc., indicate that each data set includes indeterminacy (I).
I just thought that perhaps we can extend the definition of differentiation to include indeterminacy (I), perhaps something like this:
F'(x)={f(x+e)+2I-f(x)}/e. (2)
The I parameter implies that the geometry of the derivative is no longer a slope. The term 2I has been introduced to include the unpredictability/indeterminacy of Brownian motion. It can be split into left and right differentiation: the left derivative will carry one I, and the right derivative will carry one I.
Another possible way is something like this:
F'(x)=(1+I).{f(x+e)-f(x)}/e (3)
Where I represents indeterminacy parameter, ranging from 0.0-0.5.
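Purely as an illustration of definition (3), and not as an endorsement of it, the modified difference quotient can be tried numerically; the step size, the indeterminacy value I and the test function below are all arbitrary choices.
```python
import numpy as np

def indeterminate_derivative(f, x, eps=1e-3, I=0.2):
    """Eq. (3) above: F'(x) = (1 + I) * (f(x + eps) - f(x)) / eps."""
    return (1 + I) * (f(x + eps) - f(x)) / eps

x = 1.0
print(np.cos(x))                                      # ordinary derivative of sin at x
print(indeterminate_derivative(np.sin, x, I=0.0))     # I = 0 recovers the usual forward difference
print(indeterminate_derivative(np.sin, x, I=0.2))     # I = 0.2 rescales the slope by 20 percent
```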
Other possible approaches may include Nelson's Internal Set Theory, Fuzzy Differential Calculus, or Nonsmooth Analysis.
My purpose is to find out how to include indeterminacy into differential operators like curl and div.
That is my idea so far, you can develop it further if you like. This idea is surely far from conclusive, it is intended to stimulate further thinking.
So do you have other ideas? Please kindly share here. Thanks
Relevant answer
Answer
Calculus ultimately relies on infinite precision, the differential being infinitesimal, i.e. the next number to zero. In many instances this causes no problems, even if it is mathematical garbage. Some authors now state that their differential is a small number, not an infinitesimal, and then proceed using the normal methods of calculus. Where Nature disagrees, we end up with contradictions like the UV catastrophe. Since we do not know when Nature will bowl us a googly, we should always check that taking the differential to be arbitrarily small actually works. This is seldom done, so there is no literature on this. There is no a priori method of introducing indeterminacy into calculus, but if indeterminacy exists in Nature, she will let us know if we ask the right questions. Planck set up his differential of action and increased the precision of his calculations; he found that when da = h he got the correct spectrum but lost it again when da < h. Thus he found the indeterminacy in his calculus. I have never seen any other method than try it and see.
  • asked a question related to Fundamental Physics
Question
11 answers
We assume an empty universe containing only a disk. There are two positions where our observers could stand: position A at the center and position B at a point on the edge of the disk. Two observers at positions A and B could determine which one is rotating around the other because of the existence of the centrifugal force, which appears ONLY at position B!! This rotation could be called absolute!
How is this compatible with the fact that space is relative?
Would there be a centrifugal force on B if the disk (and the observers) were massless?
Relevant answer
Answer
With respect to Eric Lord's answer, let us assume the same disk in an empty universe, this time equipped with a set of small rockets on opposite sides with their exhausts pointing in opposite directions. In this way the disk is gradually set in motion. Does this make the rotation more "definable"?
  • asked a question related to Fundamental Physics
Question
43 answers
I want to understand physically why the observed mass increases as the particle's speed increases to relativistic speeds. Is the potential energy of the particle affected by this increase?
Relevant answer
Answer
There isn't any relation between time and mass in special relativity: mass is a Lorentz invariant, the same in all reference frames, while time is the fourth component of the position 4-vector. Similarly, energy isn't a Lorentz invariant; it's the fourth component of the energy-momentum 4-vector. So the answer to the question is that the observed mass, which is Lorentz invariant, doesn't increase (or decrease) under Lorentz transformations. The correct formulation is that energy and momentum transform under Lorentz transformations as a 4-vector, so that E^2 - (pc)^2 = (mc^2)^2 = (E')^2 - (p'c)^2, where (E, p) and (E', p') are the 4-vectors in different frames, related through a Lorentz transformation. The expressions for the energy and the momentum, which contain the relative velocity between the two frames, express just that and nothing more: they relate the values of the energy and momentum in one frame to the values in a frame moving with velocity beta = v/c with respect to the other, which gives rise to the factor 1/sqrt(1-beta^2) = gamma.
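A small numerical check of the invariant quoted above, for an arbitrary example particle and an arbitrary boost (units with c = 1 for brevity):
```python
import math

m, p = 1.0, 2.0                      # invariant mass and momentum in the original frame (c = 1)
E = math.sqrt(m**2 + p**2)           # energy from E^2 - p^2 = m^2

beta = 0.6                           # arbitrary relative velocity between the two frames
gamma = 1 / math.sqrt(1 - beta**2)
E_p = gamma * (E - beta * p)         # Lorentz transformation of the (E, p) 4-vector
p_p = gamma * (p - beta * E)

print(E**2 - p**2, E_p**2 - p_p**2)  # both equal m^2 = 1: the mass is frame-independent
```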
  • asked a question related to Fundamental Physics
Question
22 answers
I want to know the sources of error that prevent us from using the Schrödinger equation to find the energy eigenvalues of atoms that have more than one electron. What are the best models for many-electron atoms?
Relevant answer
Answer
To my knowledge only the 2-body problem with central forces has been solved "exactly" either using classical mechanics or quantum mechanics. Systems  with 3 or more particles are solved using approximate methods.
  • asked a question related to Fundamental Physics
Question
4 answers
As ‘Big Questions’ I would like to name those which are fundamental to the physical understanding of our Universe, such as: Why is the Universe made of matter rather than antimatter? What is the nature of Dark Matter, and of Dark Energy? Is there a preferred reference frame to the Universe? How are the forces of nature, including gravity, unified?
For decades, the primary experimental tools for addressing such Big Questions were large particle accelerators. However, scaling of these facilities to higher energies and larger sizes has become increasingly difficult and expensive—and may soon be impossible.
The conjecture I would like to discuss is: What can we learn from small-scale, low-cost terrestrial experiments in which subtle signs of new physics are sought through extreme sensitivity and precision?
Searches for tiny deviations from "ordinary" physical laws can be interpreted as tests of the very structure of the physical world. Examples might include the breaking of symmetries like time-reversal symmetry, the search for a variation of the fundamental constants of nature like the fine structure constant, or searches for a deviation from the 1/r law for the gravitational potential.
Relevant answer
Answer
This is indeed an excellent question. I have some examples which could be relevant to investigations into dark matter, dark energy, cosmology and astrobiology. Neutrinos have a tiny yet non-zero mass, evident from their flavour oscillations. However, we still do not know whether their mass hierarchy is inverted or normal, and we don't have a good fix on the absolute masses of the mass eigenstates. One way of determining the mass scale would be a sensitive experiment operating at low energy which uses the mutual annihilation of pairs of nonrelativistic neutrinos mediated by leptons of the same flavour. For instance, annihilation could excite an electron orbiting an atom, closely followed by the spontaneous emission of a photon of measurable wavelength and hence energy. This interaction is essentially the time reversal of the photoneutrino process but, since the neutrinos would be nonrelativistic, the annihilation/excitation energy would be well-defined and informative. More precise measurements of neutrino flavour oscillations could also establish the existence of sterile neutrinos, which may be an important component of dark matter in galaxy clusters. I am not saying these experiments would be easy to conduct, but they are at the opposite end of the energy scale from the (still) very well-funded particle colliders. You can assess for yourself their potential importance to the 'Big Questions' by following the link provided below:
  • asked a question related to Fundamental Physics
Question
7 answers
Within an appropriately chosen coordinate system and without incorporating spatial curvature, geodetic precession of a gyroscope orbiting a spherically symmetric, spinning mass can be remoulded as a Lense-Thirring frame-dragging effect. Geodesic precession and Lense-Thirring precession can therefore be described in terms of two components of a single gravitomagnetic effect. Are de Sitter precession and frame dragging actually fundamentally different phenomena?
Relevant answer
Answer
The solution is simple: Send your keywords (de Sitter precession; Lense-Thirring precession frame-dragging) to Google to translate into your target language. Then send these Spanish / Russian / Chinese phrases to Google. (And get Google Translate to give you a rough translation)!
  • asked a question related to Fundamental Physics
Question
11 answers
Experiments in physics are becoming heavier and heavier, and require more and more people. There is a similar, although less pronounced, trend in theoretical physics. Will that change the status of the physicist? Will there still be great savants or scholars?
But on the other hand, the last particle of the standard model, the Higgs, has been observed and has validated the whole construct, so there is no more such routine work. There have not been significant advances in fundamental physics for forty years. It seems the old paradigm has been exhausted. There is still research to find supersymmetric particles and the like, but that remains speculative. Wouldn't a reorganization of the activity be necessary? Should more stress be put on individual, potentially revolutionary ideas, or, on the contrary, should the current trend be intensified in order to force our way forward?
Relevant answer
Answer
I would not be so pessimistic. Already at the end of the 19th century it was generally accepted that all the important laws of physics had been discovered and that, henceforth, research would be concerned with clearing up minor problems and particularly with improvements of method and measurement. And then we know what happened. So we are still in a kind of "punctuated equilibrium" (using the terminology of complexity theory) which could end with a jump to a new level of our understanding of Nature. But how and when is hard to speculate.
  • asked a question related to Fundamental Physics
Question
5689 answers
Herbert Dingle's argument is as follows (1950):
According to the theory, if you have two exactly similar clocks, A and B, and one is moving with respect to the other, they must work at different rates, i.e. one works more slowly than the other. But the theory also requires that you cannot distinguish which clock is the 'moving' one; it is equally true to say that A rests while B moves and that B rests while A moves. The question therefore arises: how does one determine, consistently with the theory, which clock works the more slowly? Unless the question is answerable, the theory unavoidably requires that A works more slowly than B and B more slowly than A - which it requires no super-intelligence to see is impossible. Now, clearly, a theory that requires an impossibility cannot be true, and scientific integrity requires, therefore, either that the question just posed shall be answered, or else that the theory shall be acknowledged to be false.
Relevant answer
Answer
Different observers will disagree as to which clock moves (or moves faster). Unless you bring the two clocks together, there is no observer-independent way to synchronize them. And once you do bring the two clocks together, whichever clock spent more time accelerating is the one that shows less time elapsed. This is not a counter-argument against relativity, just a trivial misunderstanding/incomplete understanding of the theory.
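A worked example of the "bring the clocks together" point above, with an idealized instantaneous turnaround and illustrative numbers: the travelling clock's elapsed proper time is the stay-at-home time divided by gamma.
```python
import math

c = 299792458.0
v = 0.8 * c                     # illustrative cruise speed of the travelling clock
T_home = 10.0                   # years elapsed on the stay-at-home clock

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
T_traveller = T_home / gamma    # proper time along the out-and-back worldline (instant turnaround)

print(gamma)                    # 1.666...
print(T_traveller)              # 6.0 years: the clock that turned around shows less elapsed time
```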
  • asked a question related to Fundamental Physics
Question
14 answers
I would like to find literature on proposals to test the Einstein-Cartan theory with laboratory experiments.
According to wikipedia "(e) [Einstein-Cartan theory] generates new predictions that can in principle validate or falsify the theory, but it cannot be validated by empirical results due to current limitations in technology."
However, I have not been able to find any paper on the subject so far.
Do any of you have some papers in mind?
Relevant answer