Science topic
Fundamental Physics - Science topic
Explore the latest questions and answers in Fundamental Physics, and find Fundamental Physics experts.
Questions related to Fundamental Physics
The constants (G, h, c, e, me, kB) can be considered fundamental only if the SI units they are measured in (kg, m, s, ...) are independent. However, if we assign numerical values to the SI units (kg = 15, m = -13, s = -30, A = 3, K = 20), then by matching these unit numbers we can define (and solve for) the least precise constants (CODATA 2014: G, h, e, me, kB) in terms of the three most precise constants (c, μ0, R) ... (diagram #1). Officially this must be just a coincidence, but the precision is difficult to ignore.
We find further anomalies of equal precision when we combine the constants (G, h, c, e, me, kB) in combinations whose unit numbers sum to 0 ... (diagram #2).
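The unit-number bookkeeping described above can be sketched in a few lines. The integer assignments to the SI base units are the ones quoted in the question, and the dimension exponents of each constant are the standard SI ones; the zero-sum product at the end is my own illustration of the arithmetic and is not claimed to be one of the combinations in diagram #2.

```python
# Sketch of the "unit number" bookkeeping described in the question.
# Base-unit assignments are the ones quoted above; dimension exponents
# of each constant are the standard SI ones.

UNIT_NUMBER = {"kg": 15, "m": -13, "s": -30, "A": 3, "K": 20}

# SI dimensions of each constant as {base unit: exponent}.
DIMENSIONS = {
    "G":  {"m": 3, "kg": -1, "s": -2},          # m^3 kg^-1 s^-2
    "h":  {"kg": 1, "m": 2, "s": -1},           # J s
    "c":  {"m": 1, "s": -1},                    # m/s
    "e":  {"A": 1, "s": 1},                     # C = A s
    "me": {"kg": 1},                            # kg
    "kB": {"kg": 1, "m": 2, "s": -2, "K": -1},  # J/K
}

def unit_number(combo):
    """Unit number of a product of constants, combo = {name: exponent}."""
    return sum(
        p * exp * UNIT_NUMBER[u]
        for name, p in combo.items()
        for u, exp in DIMENSIONS[name].items()
    )

for name in DIMENSIONS:
    print(name, unit_number({name: 1}))

# One product whose unit number happens to sum to 0 under this
# assignment, e.g. G * h * e^2 * kB (6 + 19 - 54 + 29 = 0):
print(unit_number({"G": 1, "h": 1, "e": 2, "kB": 1}))
```

Any other candidate combination can be tested the same way by passing its exponent dictionary to `unit_number`.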
The methodology is introduced here
- Are these physical constant anomalies evidence we are in a simulation? dx.doi.org/10.13140/RG.2.2.15874.15041/3
- https://en.wikiversity.org/wiki/Physical_constant_(anomaly)
Is a simulation universe the best explanation for these anomalies?
Some general background to the physical constants.
Having worked on the spacetime wave theory for some time, and having recently published a preprint paper on the Space Rest Frame, I realised the full implications, which are quite shocking in a way.
The spacetime wave theory proposes that electrons, neutrons and protons are looped waves in spacetime:
The paper on the space rest frame proposes that waves in spacetime take place in the space rest frame K0:
Preprint Space Rest Frame (Dec 2021)
This then implies that the proton, which is a looped wave in spacetime of three wavelengths, is actually a looped wave taking place in the space rest frame, and that we are moving at somewhere between 150 km/s and 350 km/s relative to that frame of reference.
This also gives a clue as to the cause of the length contraction taking place in objects moving relative to the rest frame. The length contraction takes place in the individual particles, namely the electron, neutron and proton.
I find myself in a similar position to Ernest Rutherford when he discovered that the atom was mostly empty space. I don't expect to fall through the floor, but I might expect to suddenly fly away at around 250 km/s. Of course this doesn't happen, because there is zero resistance to uniform motion through space and momentum is conserved.
It still seems quite a shocking realisation.
Richard
Our answer is YES. Wave-particle duality is a model proposed to explain the interference of photons, electrons, neutrons, or any matter. One deprecates a model when it is no longer needed; we argue here that wave-particle duality is no longer needed and can therefore be deprecated.
This offers an immediate solution for the thermal radiation of bodies, as Einstein showed in 1917, in terms of ternary trees in progression, to tri-state+, using the model of GF(3^n), where the same atom can show so-called spontaneous emission, absorption, or stimulated emission, and further collective effects, in a ternary way.
Continuity and classical waves are not needed, do not fit into this philosophy, and are not representable by edges, updating events, or sinks in an oriented-graph model [1] with stimulated emission.
However, taking into account the principle of universality in physics, the same phenomena — even a particle such as a photon or electron — can be seen, although approximately and partially, in terms of continuous waves, macroscopically. Then, the wave theory of electrons can be used in the universality limit, when collective effects can play a role, and explain superconductivity.
This resolves the apparent confusion created by the common wave-particle duality model: the ontological view now becomes indicative of a particle in all cases, and does not depend on amplitude.
This explains both the photoelectric effect, which does not depend on the amplitude, and wave interference, which does. The ground rule is quantum, the particle, but one apparently "sees" interference at a distance far enough that individual contributions cannot be distinguished.
What is your informed opinion?
REFERENCE
[1] Stephen Wolfram, “A Class of Models with the Potential To Represent Fundamental Physics.” arXiv, https://arxiv.org/ftp/arxiv/papers/2004/2004.08210.pdf, 2020.
The fundamental physical constants, ħ, c and G, appear to be the same everywhere in the observable universe. Observers in different gravitational potentials or with different relative velocity, encounter the same values of ħ, c and G. What enforces this uniformity? For example, angular momentum is quantized everywhere in the universe. An isolated carbon monoxide molecule (CO) never stops rotating. Even in its lowest energy state, it has ħ/2 quantized angular momentum zero-point energy causing a 57 GHz rotation. The observable CO absorption and emission frequencies are integer multiples of ħ quantized angular momentum. An isolated CO molecule cannot be forced to rotate with some non-integer angular momentum such as 0.7ħ. What enforces this?
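The integer spacing of the CO absorption and emission lines mentioned above can be illustrated with the rigid-rotor model; the rotational constant used below (about 57.6 GHz) is an approximate literature value, not a claim from the question.

```python
# Rigid-rotor sketch of the CO rotational ladder mentioned above.
# B is the rotational constant of CO (~57.6 GHz, an approximate
# literature value). Rotational energies are E_J = h*B*J*(J+1), so the
# allowed transition frequencies f(J -> J+1) = 2B(J+1) form an
# equally spaced ladder: angular momentum changes only in steps of hbar.

B_GHZ = 57.6  # approximate rotational constant of CO, in GHz

def transition_ghz(J):
    """Frequency of the J -> J+1 rotational transition, in GHz."""
    return 2 * B_GHZ * (J + 1)

ladder = [transition_ghz(J) for J in range(4)]
print(ladder)  # roughly 115.2, 230.4, 345.6, 460.8 GHz: equally spaced lines
```

A forbidden value such as 0.7 ħ would correspond to a frequency between the rungs of this ladder, which is never observed.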
Even though the rates of time are different in different gravitational potentials, the locally measured speed of light is constant. What enforces a constant speed of light? It is not sufficient to mention covariance of the laws of physics without further explanation. This just gives a different name to the mysteries.
Are the natural laws imposed on the universe by an unseen internal or external entity? Do the properties of vacuum fluctuations create the fundamental physical constants? Are the physical constants the same when they are not observed?
It feels strange to have discovered a new fundamental physics discipline after a gap of a century. It is called Cryodynamics, sister of the chaos-borne deterministic Thermodynamics discovered by Yakov Sinai in 1970. It proves that Fritz Zwicky was right in 1929 with his alleged “tired light” theory.
The light traversing the cosmos hence lawfully loses energy in a distance-proportional fashion, much as Edwin Hubble tried to prove.
Such a revolutionary development is a rare event in the history of science, so the reader has every reason to be skeptical. But it is also a wonderful occasion to be among the first to jump on the new giant bandwagon. The famous cosmologist Wolfgang Rindler was the first to do so. This note is devoted to his memory.
November 26, 2019
There is an opinion that the wave-function represents the knowledge that we have about a quantum (microscopic) object. But if this object is, say, an electron, the wave-function is bent by an electric field.
In my modest opinion, matter influences matter. I can't imagine how the wave-function could be influenced by fields if it were not matter too.
Does anybody have a different opinion?
Dear Colleagues.
The Faraday constant, as a fundamental physical value, has peculiar features that make it stand out from the other physical constants. According to the official documents of NIST, this constant has two values:
F = 96485.33289 ± 0.00059 C/mol and
F* = 96485.3251 ± 0.0012 C/mol.
The second value refers to the "ordinary electric current".
Is the Faraday constant constant?
One way to answer this question is proposed in the works cited.
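Independently of the works cited, one quick consistency check is available in the revised SI (2019), where the Faraday constant is fixed exactly by F = N_A·e, since both the Avogadro constant and the elementary charge are now exact defined values.

```python
# In the 2019 revised SI, the Faraday constant is fixed exactly by
# F = N_A * e, so the two experimental NIST values quoted above can
# be compared against it.

N_A = 6.02214076e23   # Avogadro constant, mol^-1 (exact since 2019)
e = 1.602176634e-19   # elementary charge, C (exact since 2019)

F = N_A * e           # C/mol, ~96485.332
print(F)

# Deviation of each quoted value from N_A * e, in units of its own
# stated uncertainty:
for value, sigma in [(96485.33289, 0.00059), (96485.3251, 0.0012)]:
    print((value - F) / sigma)
```

The script simply quantifies how far each quoted value sits from the exactly defined product; it does not by itself decide which measurement route is the more reliable.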
Sincerely,
Yuriy.
According to special relativity (SR), the relative velocity between two inertial reference frames (IRF), say two spaceships, is calculated by
u = (v1 - v2) / (1 - v1*v2/c²)    (1)
where v1 and v2 are the constant velocities of the two vessels, moving parallel to each other.
For low speeds, v1*v2/c² is negligible and the formula reduces to
u = v1 - v2.
But neither v1 nor v2 is supposed to be known in SR. Both can have any value between -c and +c, as illustrated in Figure 1 (please see the attached file).
Not knowing the speed of each vessel means that the calculated relative speed can also be any value between -c and +c. For example:
v1 = -c, v2 = -0.6c ==> u = -c (possibility 5 in Figure 1)
v1 = -0.4c, v2 = 0 ==> u = -c/2.5 (possibility 2)
v1 = 0.2c, v2 = -0.2c ==> u = c/2.6 (possibility 3)
v1 = 0.4c, v2 = 0 ==> u = c/2.5 (possibility 1)
v1 = c, v2 = 0.6c ==> u = c (possibility 4)
This means that the real relative speed between two IRFs cannot in fact be determined.
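The arithmetic of Eq. (1) can be checked with a short script, with c = 1; the sample velocity pairs below are in the spirit of the list above, and the one property that holds regardless of the unknown individual velocities is that |u| never exceeds c.

```python
# A quick numerical check of Eq. (1), with c = 1: whatever the
# individual velocities in [-c, c], the computed relative speed also
# stays within [-c, c].

def relative_speed(v1, v2, c=1.0):
    """Relativistic relative velocity of vessel 1 with respect to vessel 2."""
    return (v1 - v2) / (1 - v1 * v2 / c**2)

# Sample pairs, including the extreme ones:
pairs = [(0.4, 0.0), (-0.4, 0.0), (0.2, -0.2), (1.0, 0.6), (-1.0, -0.6)]
for v1, v2 in pairs:
    print(v1, v2, relative_speed(v1, v2))

# |u| <= c for every pair:
assert all(abs(relative_speed(v1, v2)) <= 1.0 for v1, v2 in pairs)
```

Note that many different (v1, v2) pairs map to the same u, which is the ambiguity the question is pointing at.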
To remedy this situation, it is assumed that:
1. One of the vessels in which observer number one, Bob, resides is stationary and the other vessel, Alice, is moving at the relative speed of u.
This is, obviously, a wrong scientific statement and in contrast to SR. Here only one specific possibility among countless possibilities is arbitrarily selected to hide the difficult situation. We should also remind ourselves of the damaging effect of this type of assumption. Scientists tried hard to discard the dominating geocentric dogma of the past, championed by the Catholic Church, and now a comparable assumption is accepted under a new, supposedly groundbreaking, concept.
Based on this assumption, the equation simply reduces to either u = -v2 or u = v1, depending on the observer.
2. There is a third reference frame based on which the speeds are measured.
As in the first case, we are back to Newtonian mechanics with an assumed fixed reference frame. This assumption implicitly accepts the first assumption; only then does the formula make sense. Specifically, to present SR as a scientific/quantitative theory, one is forced to accept that the frame of the observer, or a third frame, is a stationary reference frame for any measurement or analysis. Zero speed is just a convenient value among the countless other possibilities which SR has introduced and whose consequences it has then decided not to deal with.
The problem with the Einstein velocity addition formula also applies in this case, as the assumed velocities, as well as the calculated relative velocity between Bob and Alice, depend on the relative speed of the observer.
Somehow, both conflicting cases are accepted in SR quite subjectively. In other words, SR is arbitrarily benefiting from classical science, to push its own undeserved credibility, while at the same time denying it.
Is this a fair assessment?
P.S. for simplicity only parallel movements are considered.
Wikipedia describes physics (lit. 'knowledge of nature') as the natural science that studies matter, its motion and behavior through space and time, and the related entities of energy and force.
But isn’t this definition a redundancy? Any visible object is made of matter, and its motion is a consequence of energy applied. We might as well say "the study of stuff that happens". But then, what does study entail?
Fundamentally, ‘physics’ is a category word, and category words have inherent problems. How broad or inclusive is the category word, and is the ordinary use of the category word too restrictive?
Is biophysics a subcategory of biology? Is econophysics a subcategory of economics? If, for example, biophysics combines elements of physics and biology, does one predominate as the categorization? If, as in biophysics, econophysics, and astrophysics, there are overlapping disciplines, does the category word ‘physics’ give us insight into what physics studies, or obscure what physics studies?
Is defining what physics does more a problem of semantics (ascribing meaning to a category word) than of science?
Might another way of looking at it be this? Physics generally involves detecting patterns common to different phenomena, whether natural, emergent, or engineered; where possible, identifying fundamental principles and laws that model them, and expressing those principles and laws in mathematical notation; and, where possible, devising and implementing experiments to test whether hypothesized or observed patterns provide evidence for, or clues to, those fundamental principles and laws.
Maybe physics more generally just involves problem solving and the collection of inferences about things that happen.
Your views?
This question is closely related to a previous question I raised in this forum: "What is the characteristic of matter that we refer to as 'electric charge'?"
As stated in my previous question, the main objective of bringing this topic to discussion is to try to understand the fundamental physical phenomena of the Universe we live in, where energy, matter and other key ingredients, like the laws that govern them, all seem to play a harmonious role, so harmonious that even life, as we know it, can exist on this planet.
My background is in engineering. Hence, I am trying to go deep into the causes behind the effects, the physical phenomena that support the Universe as we know it, before going deep into complex mathematical models and formulations, which may obscure reality.
With an open mind, I try to ask questions whose answers may help us to understand the whys, rather than to prove theories and their formulations.
From our previous discussion, it became clear that mass and electric charge are two inseparable attributes of matter. Moreover, electromagnetic (EM) fields propagate through vacuum; hence, no physical matter is required for energy or information to flow through the Universe. However, electric charges remain clustered in physical matter, i.e., they require not vacuum but matter.
Matter has the property of radiation. Matter under Gravitational (G) and EM fields is subjected to forces, producing movement. Radiation depends strongly on Temperature.
The absolute limit of T is 0 K. At this limit, particle movement stops. Magnetic fields depend on moving electric charges; as movement vanishes at this limit, magnetic fields should vanish with it. As electric and magnetic fields are nested in each other, the electric field, and consequently the effect of EM fields (and hence radiation too), should vanish as T approaches 0 K. Black holes (BH) do not radiate, their temperature being close to 0 K.
Can we assume that EM fields ultimately vanish as T approaches 0 K?
Could this help explain why protons in an atomic nucleus stay together and are not violently scattered away from each other?
Would it be reasonable to assume that atomic nuclei are at temperatures close to 0 K, even though electrons and matter at the macroscopic level are at room temperature?
What really is the temperature of atomic nuclei? Can we measure it? Is it possible that a cloud of electrons, either orbiting the nuclei or moving as free electrons, plays a shielding role, capturing the energy associated with room temperature and preventing the nuclei from heating? Can the temperature of an atom's nucleus be close to 0 K, as in a BH?
In physics, we have a number of "fundamental" variables: force, mass, velocity, acceleration, time, position, electric field, spin, charge, etc.
How do we know that we have in fact got the most compact set of variables? If we were to examine the physics textbooks of an intelligent alien civilization, could it be that they have cleverly set up their system of variables so that they don't need, say, "mass"? Maybe mass is accounted for by everything else and is hence redundant? Maybe the aliens have factored mass out of their physics, and it is simply not needed?
Bottom line question: how do we know that each of the physical variables we commonly use are fundamental and not, in fact, redundant?
Has anyone tried to formally prove we have a non-redundant compact set?
Is this even something that is possible to prove? Is it an unprovable question to start with? How do we set about trying to prove it?
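One formal, if partial, handle on the redundancy question is dimensional analysis: write each variable's exponents over a set of base dimensions and look at the rank of the exponent matrix, the idea behind the Buckingham pi theorem. This only detects dimensional redundancy, not conceptual redundancy, so the sketch below is one possible approach rather than an answer.

```python
import numpy as np

# Dimensional-redundancy check: each row lists a variable's exponents
# over the base dimensions (M, L, T). If the matrix rank is less than
# the number of variables, at least one variable is dimensionally
# expressible through the others.

variables = {
    "mass":         [1, 0, 0],
    "acceleration": [0, 1, -2],
    "force":        [1, 1, -2],   # F = m*a: this row is the sum of the two above
    "velocity":     [0, 1, -1],
}
A = np.array(list(variables.values()), dtype=float)

rank = np.linalg.matrix_rank(A)
print(rank, "independent dimensions among", len(variables), "variables")
# rank 3 < 4: force is dimensionally redundant given mass and acceleration.
```

A hypothetical alien physics that drops "mass" would correspond to deleting a column here; whether the remaining matrix still spans every quantity they need is exactly the kind of question one could then try to prove formally.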
You will find an article with more detail under my profile.
The question is non-relativistic and depends only on logic.
The answer could force a reset of all fundamental physics and is therefore of extreme importance!
JES
What is consciousness? What do the latest neurology findings tell us about consciousness and what is it about a highly excitable piece of brain matter that gives rise to consciousness?
It has radically altered it by rehabilitating Fritz Zwicky's 1929 proposal.
Hence ten Nobel medals are gone. And cheap energy for all is made possible. Provided, that is, that humankind is capable of mentally following in Yakov Sinai’s chaotic footsteps. If not, energy remains expensive and CERN remains dangerous to all: A funny time that we are living in. With the crown of Corona yet waiting to be delivered.
April 1st, 2020
Why is a complete theory of fundamental physics ignored just because it lies outside the realms of quantum field theory and general relativity? It has been “marked” as a speculative alternative, has never been studied, and no attempt has been made to verify it. The fundamental-physics community is still in complete ignorance of the extremely successful Electrodiscrete Theory.
The Electrodiscrete Theory is not a speculative alternative, and not just a new idea in the works, but a complete theory of fundamental physics describing all our elementary particles and their interactions, including gravity. It beautifully describes the patterns in nature revealed by observations. It gives a single (unified) description of nature in a relatively simple and self-consistent way. Moreover, it can calculate, and it can make predictions. Then why is it ignored?
The Electrodiscrete Theory provides the complete conceptual foundation for describing nature that we are all seeking, but nobody bothers to take a look. Why?
The Electrodiscrete Theory opens new horizons. This is progress in science being held back by prejudice and a new kind of ignorance. What is wrong with the system?
Mathematics is crucial in many fields.
What are the latest trends in Maths?
What are the recent topics and advances in Maths, and why are they important?
Please share your valuable knowledge and expertise.
A new Phenomenon in Nature: Antifriction
Otto E. Rossler
Faculty of Science, University of Tuebingen, Auf der Morgenstelle 8, 72076 Tuebingen, Germany
Abstract
A new natural phenomenon is described: Antifriction. It refers to the distance-proportional cooling suffered by a light-and-fast particle when it is injected into a cloud of randomly moving heavy-and-slow particles if the latter are attractive. The new phenomenon is dual to “dynamical friction” in which the fast-and-light particle gets heated up.
(June 27, 2006, submitted to Nature)
******
Everyone is familiar with friction. Friction brings an old car to a screeching halt if you jump on the brake. The kinetic energy of a heavy body thereby gets “dissipated” into fine motions – the heating-up of many particles in the end. (Only some cars do re-utilize their motion energy by converting it into electricity.) But there also exists a less well-known form of friction called dynamical friction. It differs from ordinary friction by its being touchless.
The standard example of dynamical friction is a heavy particle that is repulsive over a short distance, getting injected into a dilute gas of light-and-fast other particles. The heavy particle then comes to an effective halt. For all the repelled gas particles that it forced out of its way in a touchless fashion carried away some of its energy of motion while getting heated-up in the process themselves – much as in ordinary friction.
In the following, it is proposed that a dual situation exists in which the opposite effect occurs: “antifriction.” Antifriction arises under the same conditions as dynamical friction, except that repulsion is replaced by attraction. The fast particles then, rather than being heated up (friction), paradoxically get cooled down (antifriction). This surprising claim does not amount to an irrational perpetual-motion-like effect: the fast-and-light (“cold”) particle merely imparts some of its kinetic energy onto the slow-and-heavy “hot” particles encountered.
A simplified case can be considered: a single light-and-fast particle gets injected into a cloud of many randomly moving heavy-and-slow particles of attractive type. Think of a fast space probe injected into a globular cluster of gravitating stars. It is bound to be slowed down by the many grazing-type near-encounters it suffers. The small particle will hence be “cooled” rather than heated up, as one would naively expect in analogy to the repulsive case.
The new effect is going to be demonstrated in two steps. In the first step, we return to repulsion. This case can be understood intuitively as follows: On the way towards equipartition (which characterizes the final equilibrium in the repulsive case as is well known), the light-and-fast particles – a single specimen in the present case – do predictably get heated up in their kinetic energy. In the second step, we then “translate” this result into the analogous attraction-type scenario to obtain the surprising opposite effect there.
First step: the repulsive case. Many heavy repulsive particles in random motion are assumed to be traversed by a light-and-fast particle in a grazing-type fashion. A typical case is focused on: as the light-and-fast particle starts to approach the next moving heavy repellor while leaving behind the last one at about the same distance, the new interaction partner is with the same probability either approaching or receding-from the fast particle’s momentary course. Whilst there are many directions of motion possible, the transversally directed ones are the most effective so that it suffices to focus on the latter. Since the approaching and the receding course do both have the same probability of occurrence, a single pair already yields the main effect: there is a net energy gain for the fast particle on average. Why?
In the approaching subcase the fast particle gains energy, and in the receding subcase it loses energy. But the two effects are not equal: the gain is larger than the loss on average if the repulsive potential is of the assumed inversely distance-proportional type. This is because in the approaching case the fast particle gets moved up higher by the approaching potential hill (gaining energy) than it is hauled down by the receding motion of the same hill in the departing case (losing energy). The difference is due to the potential hill’s rounded concave form as an inverted funnel. The present “typical pair” of encounters thus enables us to predict the very result well known to hold true: a time- and distance-proportional energy gain of the fast lighter particle as a consequence of the “dynamical friction” exerted by the heavy particles encountered along its way. Eventually, an “equipartition” of the kinetic energies applies.
Second step: the attractive case. Everything is the same as before, except that the moving potential hill has become a moving potential trough (the funnel now points downward rather than upward). The asymmetry between approach and recession is the same as before. Therefore there is a greater downward-directed loss of energy (formerly: upward-directed gain) in the approaching subcase than there is an upward-directed gain of energy (formerly: downward-directed loss) in the receding subcase. The former net gain is thus literally turned over into a net loss. With this symmetry-based new result we are finished: antifriction is dual to dynamical friction, being valid in the case of attraction just as dynamical friction is valid in the case of repulsion.
Thus a new feature of nature, antifriction, has been found. The limits of its applicability have yet to be determined. It deserves to be studied in detail, for example by numerical simulation. It is likely to have practical implications, not only in the sky with its slowed-down space probes and redshifted photons [1], but perhaps even in automobiles and refrigerators down here on earth.
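A minimal simulation of the kind suggested above might look as follows: one light-and-fast particle traversing a cloud of heavy attractors that drift with constant velocities. All parameters (coupling, softening, time step, cloud size) are illustrative choices of mine, not taken from the paper, and the code merely measures the kinetic-energy change over the transit rather than asserting its sign.

```python
import numpy as np

# Sketch: a light-and-fast particle injected into a cloud of
# heavy-and-slow attractors. The heavy particles are treated as moving
# potential centres (unaffected by the light one) and drift rigidly.
# All parameter values are illustrative only.

rng = np.random.default_rng(0)

N = 50         # number of heavy particles
G = 1.0        # coupling strength of the attraction
SOFT = 0.1     # softening length, avoids singular close passes
DT = 1e-3
STEPS = 5000

pos_heavy = rng.uniform(-5, 5, size=(N, 2))    # positions in a box
vel_heavy = rng.normal(0, 0.1, size=(N, 2))    # slow random drifts

x = np.array([-8.0, 0.0])   # light particle injected fast from one side
v = np.array([3.0, 0.0])

def accel(x, centers):
    """Softened attractive inverse-square acceleration on the light particle."""
    d = centers - x
    r2 = (d * d).sum(axis=1) + SOFT**2
    return (G * d / r2[:, None] ** 1.5).sum(axis=0)

def kinetic(v):
    return 0.5 * (v * v).sum()

ke_in = kinetic(v)
a = accel(x, pos_heavy)
for _ in range(STEPS):                     # leapfrog (kick-drift-kick)
    v = v + 0.5 * DT * a
    x = x + DT * v
    pos_heavy = pos_heavy + DT * vel_heavy  # attractors drift rigidly
    a = accel(x, pos_heavy)
    v = v + 0.5 * DT * a
ke_out = kinetic(v)

print(ke_in, ke_out)   # kinetic energy before and after the transit
```

Averaging the energy change over many random cloud realizations, rather than a single seeded run, would be the meaningful test of the claimed cooling.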
To conclude, the fascinating phenomenon of dynamical friction – touchless friction – was shown to possess a natural “dual”: antifriction. A prototype subcase (a pair of representative encounters) was considered above in either scenario, thereby yielding the new twin result. Practical applications can be expected to be found.
I thank Guilherme Kujawski for stimulation. For J.O.R.
Added in proof: After the present paper got finished, Ramis Movassagh kindly pointed to the fact that the historically first paper on “dynamical friction,” written by Subrahmanyan Chandrasekhar [2] who also coined the term, actually describes antifriction. This fact went unnoticed because the smallest objects in the interactions considered by Chandra were fast-moving stars. Chandra’s correctly seen energy loss of these objects therefore got classified by him as a form of “friction” suffered in the interaction with the fields of other heavy moving masses. However, the energy loss found does actually represent a “cooling effect” of the type described above: antifriction. One can see this best when the cooling is exerted on a small mass (like the above-mentioned tiny space probe traversing a globular cluster of stars). While friction heats up, antifriction cools down. Thus what has been achieved above is nothing else but the re-discovery of an old result that had been interpreted as a form of “friction” even though it actually represents the first example of antifriction.
References
[1] O.E. Rossler and R. Movassagh, Bitemporal dynamic Sinai divergence: an energetic analog to Boltzmann’s entropy? Int. J. Nonlinear Sciences and Numerical Simul. 6(4), 349-350 (2005).
[2] S. Chandrasekhar, Dynamical friction. Astrophys. J. 97, 255-263 (1943).
(Remark: The present paper after not being accepted by Nature in 2006 was recently found lingering in a forgotten folder.)
See also: R. Movassagh, A time-asymmetric process in central force scatterings (submitted 4 Aug 2010, revised 5 Mar 2013, https://arxiv.org/abs/1008.0875)
Nov. 23, 2019
It is well known that a light field can be decomposed into a polarized field and an unpolarized field. But is it possible to consider this sum as only the sum of a linearly polarized and an unpolarized part, or of a circularly polarized and an unpolarized part? Or does only the degree of polarization matter, and not the type of polarization?
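In Stokes-parameter terms the question can be made concrete: a partially polarized beam S = (S0, S1, S2, S3) splits uniquely into an unpolarized part plus a fully polarized part, and the polarized part carries both linear (S1, S2) and circular (S3) content. The numbers below are an arbitrary example of mine.

```python
import math

# Standard Stokes-vector bookkeeping: total, linear, and circular
# degrees of polarization of a beam S = (S0, S1, S2, S3).

def polarization_degrees(S0, S1, S2, S3):
    dop  = math.sqrt(S1**2 + S2**2 + S3**2) / S0   # total degree of polarization
    dolp = math.sqrt(S1**2 + S2**2) / S0           # linear content
    docp = abs(S3) / S0                            # circular content
    return dop, dolp, docp

# Example: a beam with both linear and circular polarized content.
dop, dolp, docp = polarization_degrees(1.0, 0.3, 0.4, 0.5)
print(dop, dolp, docp)

# dop**2 == dolp**2 + docp**2: when both S3 and (S1, S2) are nonzero,
# a split into "linearly polarized + unpolarized" alone, or
# "circularly polarized + unpolarized" alone, cannot reproduce the beam.
```

So the type of polarization does matter: only when S3 = 0 (or S1 = S2 = 0) does the two-term decomposition reduce to a purely linear (or purely circular) polarized part plus unpolarized light.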
The incredible thing about Physarum polycephalum is that whilst being completely devoid of any nervous system whatsoever (not possessing a single neuron) it exhibits intelligent behaviours. Does its ability to intelligently solve problems suggest it must also be conscious? If you think, yes, then please describe if-and-how its consciousness may differ {physically or qualitatively ... rather than quantitatively} from the consciousness of brained organisms (e.g., humans)? Does this intelligent behaviour (sans neurons) suggest that consciousness may be a universal fundamental related more to the physical transfer or flow of information rather than being (as supposed by most psychological researchers) an emergent property of processes in brain matter?
General background information:
"Physarum polycephalum has been shown to exhibit characteristics similar to those seen in single-celled creatures and eusocial insects. For example, a team of Japanese and Hungarian researchers have shown P. polycephalum can solve the Shortest path problem. When grown in a maze with oatmeal at two spots, P. polycephalum retracts from everywhere in the maze, except the shortest route connecting the two food sources.[3] When presented with more than two food sources, P. polycephalum apparently solves a more complicated transportation problem. With more than two sources, the amoeba also produces efficient networks.[4] In a 2010 paper, oatflakes were dispersed to represent Tokyo and 36 surrounding towns.[5][6] P. polycephalum created a network similar to the existing train system, and "with comparable efficiency, fault tolerance, and cost". Similar results have been shown based on road networks in the United Kingdom[7] and the Iberian peninsula (i.e., Spain and Portugal).[8] Some researchers claim that P. polycephalum is even able to solve the NP-hard Steiner minimum tree problem.[9]
P. polycephalum can not only solve these computational problems, but also exhibits some form of memory. By repeatedly making the test environment of a specimen of P. polycephalum cold and dry for 60-minute intervals, Hokkaido University biophysicists discovered that the slime mould appears to anticipate the pattern by reacting to the conditions when they did not repeat the conditions for the next interval. Upon repeating the conditions, it would react to expect the 60-minute intervals, as well as testing with 30- and 90-minute intervals.[10][11]
P. polycephalum has also been shown to dynamically re-allocate to apparently maintain constant levels of different nutrients simultaneously.[12][13] In particular, specimen placed at the center of a Petri dish spatially re-allocated over combinations of food sources that each had different protein–carbohydrate ratios. After 60 hours, the slime mould area over each food source was measured. For each specimen, the results were consistent with the hypothesis that the amoeba would balance total protein and carbohydrate intake to reach particular levels that were invariant to the actual ratios presented to the slime mould.
As the slime mould does not have any nervous system that could explain these intelligent behaviours, there has been considerable interdisciplinary interest in understanding the rules that govern its behaviour [emphasis added]. Scientists are trying to model the slime mold using a number of simple, distributed rules. For example, P. polycephalum has been modeled as a set of differential equations inspired by electrical networks. This model can be shown to be able to compute shortest paths.[14] A very similar model can be shown to solve the Steiner tree problem.[9]"
source of quotation: https://en.wikipedia.org/wiki/Physarum_polycephalum
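As a conventional reference point for the shortest-path behaviour described in the quotation, the classical algorithm computing the same quantity can be sketched. This is Dijkstra's algorithm, not the differential-equation slime-mould model cited in [14], and the maze graph below is a made-up example.

```python
import heapq

# Dijkstra's algorithm on a small maze-like graph: the classical way
# to compute the shortest route between two "food sources".

def dijkstra(graph, source):
    """graph: {node: {neighbour: edge_length}}. Returns shortest distances."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Two food sources A and D connected through a small "maze":
maze = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "D": 2.0},
    "C": {"A": 4.0, "D": 1.0},
    "D": {"B": 2.0, "C": 1.0},
}
print(dijkstra(maze, "A")["D"])   # 3.0, via A-B-D
```

The interesting point of the question is precisely that the slime mould arrives at the same answer with no such explicit algorithm, only local reinforcement of tubes.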
The theory of special relativity requires that the laws of the universe be the same for objects that move with uniform velocity relative to each other; a law that changes from one frame to another is wrong. The Lorentz transformations provide only three transformations, those of length, time, and mass, which are basic physical quantities. Derived quantities can be obtained from them, covering the laws of mechanics only. In addition, the Lorentz transformation of mass was found using the correspondence principle and not directly. If we want to obtain the Lorentz transformations of derived quantities, we must find the Lorentz transformations of the fundamental physical quantities.
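The three basic transformations referred to above can be written out directly; the "relativistic mass" below follows the older convention the question uses, and derived quantities (momentum, force, energy) would then be built by combining these.

```python
import math

# The three transformations of the basic quantities: length contraction,
# time dilation, and velocity-dependent (relativistic) mass.

def gamma(v, c=1.0):
    """Lorentz factor for speed v (v given in the same units as c)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def contracted_length(L0, v, c=1.0):
    return L0 / gamma(v, c)

def dilated_time(t0, v, c=1.0):
    return t0 * gamma(v, c)

def relativistic_mass(m0, v, c=1.0):
    # older "relativistic mass" convention, as used in the question
    return m0 * gamma(v, c)

v = 0.6  # in units of c; gamma = 1.25
print(gamma(v), contracted_length(1.0, v), dilated_time(1.0, v),
      relativistic_mass(1.0, v))
```

A derived quantity such as momentum then follows as p = relativistic_mass(m0, v) * v, which is the combination the question says should be traced back to the fundamental three.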
To what extent are we compromising Darcy’s law when we characterize oil/gas flow within a petroleum reservoir?
Does the fundamental physics associated with Darcy’s law not change significantly when we apply it to the above application?
Darcy’s law requires that any resistance to the flow through a porous medium should result only from the viscous stresses induced by a single-phase, laminar, steady flow of a Newtonian fluid under isothermal conditions within an inert, rigid and homogeneous porous medium.
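Under exactly those restrictions, Darcy's law reduces to a one-line calculation; the parameter values below are illustrative choices of mine, not reservoir data.

```python
# Darcy's law in the single-phase, laminar, isothermal form stated
# above: Q = k * A * dP / (mu * L). Values are illustrative only.

def darcy_flow(k, A, dP, mu, L):
    """Volumetric flow rate [m^3/s] through a homogeneous porous plug."""
    return k * A * dP / (mu * L)

k = 1e-13    # permeability, m^2 (~0.1 darcy)
A = 1.0      # cross-sectional area, m^2
dP = 1e6     # pressure drop, Pa
mu = 1e-3    # viscosity, Pa.s (water-like)
L = 10.0     # plug length, m

print(darcy_flow(k, A, dP, mu, L))   # ~1e-5 m^3/s
```

Every deviation the question worries about (multiphase flow, turbulence, non-Newtonian fluids, reactive or deformable rock) shows up as this linear relation between Q and dP breaking down.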
For many years I have worked on the NSE under the assumption of incompressible flow. This assumption drives us to work with a simplified model (M = 0), based on the fact that
a² = dp/dρ |_(s = const.) → +∞.
Of course, any model is an approximate interpretation of reality, but this specific mathematical modelling assumption contradicts the fundamental physical limit of the speed of light.
Despite the fact that low (but finite) Mach-number models have been developed, the M = 0 model is still largely used both in engineering aerodynamics and in basic research (instability, turbulence, etc.) in fluid dynamics.
Can we really accept the M = 0 model, which violates a fundamental physical limit? If yes, is that a result of assessed studies that used a very low but finite Mach number for comparison?
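For orientation, the Mach numbers of typical low-speed flows can be computed from the perfect-gas speed of sound; the M < 0.3 rule of thumb mentioned below is the usual engineering criterion, an assumption of mine rather than a claim from the question.

```python
import math

# How large is the Mach number for typical low-speed flows?
# a = sqrt(gamma * R * T) for a perfect gas; the usual engineering
# rule of thumb treats M < ~0.3 as effectively incompressible
# (density variations are of order M^2).

def speed_of_sound(T, gamma=1.4, R=287.05):
    """Speed of sound in air as a perfect gas; T in kelvin, result in m/s."""
    return math.sqrt(gamma * R * T)

a = speed_of_sound(288.15)       # sea-level standard atmosphere, ~340 m/s
for u in (1.0, 10.0, 100.0):     # m/s: from a breeze to fast duct flow
    print(u, u / a)              # Mach number stays well below 1
```

This is why the M = 0 idealization, though formally implying an infinite signal speed, remains numerically indistinguishable from a finite-Mach model for most low-speed applications.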
Is there any evidence or theoretical framework to explain the values of the fundamental physical constants? In other words, could the values of the physical constants have been different (contingency)? Or is there some physical necessity for them to be as they are? N.B.: this is not a metaphysical question.
The 1998 astronomical observations of Type Ia supernovae implied a (so-called) accelerating universe. More than 20 years later, no consensus explanation for the 1998 observations exists. Despite the FLRW metric, despite GR, despite QM, despite modified theories like MOND, despite other inventive approaches, still no explanation. It is hard to believe that hundreds or thousands of physicists, with a sophisticated conceptual, mathematical and physical toolkit relating to cosmology, gravity, light, and mechanics at their disposal, are all missing how existing physics applies to explain the accelerating expansion of space. Suppose instead that all serious and plausible explanations using the existing toolkit have been made. What would that imply? Does it not imply that a fundamental physical principle of the universe has been overlooked, or not even overlooked, but does not yet form part of physics knowledge? In that case, physics is looking for the unknown unknown (to borrow an expression). I suspect the unknown principle relates to dimension (dimension is fundamental, and Galileo's scaling approach in 1638 to a problem originating with the concept of dimensions, namely the weight-bearing strength of animal bone, suggests that fundamental features of dimension may have been overlooked, beginning then). Is there a concept gap?
Solitons are the common element, but we are changing the structures, all of which are based on the same photonic crystal. Is there a possibility of the same kind of soliton existing in all three structures?
How and why is velocity an internal property of a massive body?
The question of the nature (or ontological status) of fundamental physical theories, such as spacetime in special and general relativity, and quantum mechanics, has each been a permanent puzzle and a source of debate. This discussion aims to resolve the issue and submit the solution for comments.
Also, when something is correct, this is a sign that it can be proved in more than one way. In support of this question, we found evidence for the same answer about the ontological status in three different ways.
Please see:
DISCLAIMER: We reserve the right to improve this text. All questions, public or not, will usually be answered here. This will help make this discussion more complete; to save space for others, please avoid off-topic comments. References are provided by self-search. This text may be modified frequently.
It is widely seen that large-scale cosmic fluids should be treated as "viscoelastic fluids" in theoretical formulation of their stability analyses. Can anyone explain it from the viewpoint of fundamental physical insight?
How have we arrived at the conclusion that the space of our Universe is 3D (and thus that the dimensionality of spacetime is 4D)?
I suppose this is the result of our sense of vision, which is based on both of our eyes. However, the image we perceive is the result of a mental manipulation (an illusion) of the two “images” that each of our eyes sends to our brain. This mental manipulation gives us the notion of depth, which is interpreted as the third dimension of space. This is why one-eyed vision (or photography, cinema, TV, ...) is actually 2D vision. In other words, when we see a 3D object and our eyes lie (approximately) on a line perpendicular to the plane formed by the object's “height” and “length”, our mind infers the object's “width”. Photons detectable by each of our eyes were, a time t (= 10^{-20} sec, say) earlier, on the surface of a sphere with our eye as center and radius t*c. As the surface of a sphere is 2D (detectable space), and if we add the dimension of "time" (to form spacetime), we should conclude that the dimensionality of our detectable Universe is 3D (2+1), NOT 4D (3+1).
PS (27/8/2018): Though I am aware that this opinion will provoke instinctive opposition, as it contradicts our “common sense”, I will take the risk of opening the issue.
The final target is to study the fundamental physical processes involved in bubble dynamics and the phenomenon of cavitation. Develop a new bubble dynamics CFD model to study the evolution of a suspension of bubbles over a wide range of vesicularity, and that accounts for hydrodynamical interactions between bubbles while they grow, deform under shear flow conditions, and exchange mass by diffusion coarsening. Which commercial/open source CFD tool and turbulence model would be the most appropriate ones?
Mark Srednicki has claimed to demonstrate the entropy ~ area law -- https://arxiv.org/pdf/hep-th/9303048.pdf
Does anyone know of an independent verification or another demonstration of this result?
Is there a proof of this law?
The still-unachieved unification of general relativity and quantum physics is a painstaking issue. Is it feasible to build a nonempty set, with a binary operation defined on it, that encompasses both theories as subsets, making it possible to join two of their most dissimilar marks, i.e., the commutativity detectable in our macroscopic relativistic world and the non-commutativity detectable in the quantum, microscopic world? Could the gravitational field be the physical counterpart able to throw a bridge between relativity and quantum mechanics? Is it feasible that gravity stands for an operator able to reduce the countless orthonormal bases required by quantum mechanics to just one, i.e., the relativistic basis of an observer located in a single cosmic area?
What do you think?
It seems that our progress in standard of living over the last 500 or so years is mainly connected with different forms of energy conversion and the discovery of newer materials for that purpose. So how are the fundamental science projects of today (e.g. the detection of gravitational waves, neutrino observatories, etc.) going to contribute to that single-point program? Or is this a premature question?
- In the conclusion (page 14) of this paper, I suggest that “Younger physicists should also be encouraged to play a significant role in looking after and protecting our physics knowledge before they become exposed to the detrimental effects of the commercial influence on physics.”
Also in the conclusion I offer an idea on how this could be initiated. However, I imagine there are existing schemes that encourage university students and physicists to get involved in theoretical physics and the fundamentals of physics. Do you know of such schemes, and/or do you have your own suggestions in this connection?
Theme for Developing new perspectives of physics:
Let’s return to the traditional domain of original ideas and rigorous arguments of theoretical physics - “Physics with an ideas- and imagination-based ‘art’ where we’re dreaming, imagining and creating …” - (Physics: No longer a vocation? by Anita Mehta, vol 61 no. 6 Physics Today June 2008)
Currently I am beginning to work on photodiodes using wide-band-gap semiconductors such as NiO and ZnO, so I would like to study the fundamental physics of the p-n junction relevant to my topic. Can anyone please suggest some books or documents?
What is the evidence that the speed of light is constant all over the universe? Does it have the same value even in regions of the universe occupied by dark energy?
1) How can one describe short-range and long-range ferromagnetic ordering by analysing M(T, H) data?
2) Is superexchange always a short-range interaction?
3) How can one identify the type of exchange interaction in the magnetism shown by a system?
4) Does superexchange have some relationship with magnetic parameters (such as Curie temperature, doping concentration, carrier concentration)?
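Regarding question 1, a common first step is a Curie-Weiss fit of the high-temperature susceptibility, chi = C/(T - theta); the sign and size of theta hint at ferromagnetic versus antiferromagnetic (superexchange-like) coupling. A minimal sketch on synthetic data (the C and theta values are assumptions for illustration only):

```python
import numpy as np

# Synthetic paramagnetic-regime data obeying chi = C / (T - theta).
C_true, theta_true = 2.0, 50.0
T = np.linspace(150.0, 300.0, 50)       # well above any ordering temperature
chi = C_true / (T - theta_true)

# 1/chi is linear in T: 1/chi = T/C - theta/C, so a straight-line fit
# of the inverse susceptibility recovers both parameters.
slope, intercept = np.polyfit(T, 1.0 / chi, 1)
C_fit = 1.0 / slope
theta_fit = -intercept / slope          # positive theta: ferromagnetic-like coupling
```

On real M(T, H) data one would fit only the linear high-temperature window of 1/chi, and deviations from linearity themselves carry information about short-range correlations above the ordering temperature.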
Erik Verlinde said that this emergent gravity is constructed using insights from string theory, black hole physics and quantum information theory (all theories that are themselves struggling to breathe). One has to appreciate Verlinde's daring step of constructing emergent gravity on dead theories; we loudly take inspiration from him!
From experimental evidence, it is well known that clocks at different altitudes on Earth desynchronize (simultaneity is relative). However, for events in the sky, such as the positions of the Sun or the Moon over millions of years, simultaneity appears to be absolute.
Shouldn't the concept of relativity be questioned?
Professor Michael Longo (University of Michigan in Ann Arbor) and Professor Lior Shamir (Lawrence Technological University) have shown, on the basis of experimental data, that there is an asymmetry between right- and left-twisted spiral galaxies. Its value is about 7%. In the article:
ROTATING SPACE OF THE UNIVERSE, AS A SOURCE OF DARK ENERGY AND DARK MATTER
it is shown that the source of dark matter can be the kinetic energy of rotation of the space of the observed Universe. At the same time, the contribution of the Coriolis force is 6.8%, or about 7%. The close agreement between the value of the asymmetry between right- and left-twisted spiral galaxies and the value of the contribution of the Coriolis force to the kinetic energy of rotation of the space of the observable Universe is strong indirect evidence (from experimental data!) that the space of the observed Universe rotates.
An article in Nature, "Undecidability of the spectral gap" (arXiv:1502.04573 [quant-ph]), shows that finding the spectral gap from a complete quantum-level description of a material is undecidable (in the Turing sense). No matter how completely we can describe a material analytically at the microscopic level, we cannot predict its macroscopic behavior. The problem has been shown to be uncomputable: no algorithm can determine the spectral gap. Even if there were a way to make a prediction, we could not determine what that prediction is, since for a given program there is no general method to determine whether it halts.
Does this result eliminate once and for all the possibility of a theory of everything based on fundamental physics? Is quantum physics undecidable? Is this an epistemic result proving that undecidability places a limit on our knowledge of the world?
I have a question regarding one unusual (thought) system.
Some years ago, on a Russian forum, we discussed a thought device that, as its author claimed, can produce unidirectional motion due to internal forces alone. The puzzle was resolved by Kirk McDonald of Princeton University; I attach Kirk's solution. I should note that the author of the paradox is Georgy Ivanov, not me.
In any case, Kirk found that there is no resulting directional force. But one puzzle of this device remains: the center of mass of the device should move (in a closed orbit) due to internal forces alone. I have marked this result of McDonald's in the file.
In this connection, two questions arise:
1. Why does the center of mass move even though the total momentum is conserved?
2. If the center of mass can move, and this motion is created by internal forces, is it possible to change the design of the device to produce unidirectional motion?
Formally there are no obstacles to realizing it; the total momentum is conserved... Could someone answer these questions?
This thought device does not work on the action-reaction principle, and if a similar device could be built as hardware, it could be a good prototype for an interstellar flight thruster.
How did the gravitational pull on the planet Mercury in Einstein's spacetime picture differ in value from Newton's? Was it simply the spacetime fabric adjusting this value?
Thanks:)
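For the numbers involved, a minimal sketch of the standard general-relativistic perihelion-advance formula, delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2)), which yields the famous ~43 arcseconds per century that Newtonian gravity (with the known planets) leaves unexplained:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
C = 2.998e8            # speed of light, m/s
A_MERCURY = 5.791e10   # Mercury's semi-major axis, m
E_MERCURY = 0.2056     # Mercury's orbital eccentricity
T_MERCURY = 87.969     # Mercury's orbital period, days

# GR perihelion advance per orbit, in radians.
dphi = 6 * math.pi * G * M_SUN / (C**2 * A_MERCURY * (1 - E_MERCURY**2))

orbits_per_century = 100 * 365.25 / T_MERCURY
arcsec_per_century = dphi * (180.0 / math.pi) * 3600 * orbits_per_century  # ~43
```

So the difference is not an adjusted pull strength but an extra, cumulative rotation of the orbit's axis that falls directly out of the curved-spacetime equations of motion.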
The Schrödinger self-adjoint operator H is crucial for the current quantum model of the hydrogen atom: it essentially specifies the stationary states and their energies. Then there is the Schrödinger unitary evolution equation, which tells how states change with time, and in this evolution equation the same operator H appears. Thus H provides the "motionless" states, H gives the energies of these motionless states, and H is inserted into a unitary law of movement.
But this unitary evolution fails to explain or predict the physical transitions that occur between stationary states. Therefore, to fill the gap, the probabilistic interpretation of states was introduced. We then have two very different evolution laws: one is the deterministic unitary equation, and the other consists of random jumps between stationary states. The jumps openly violate the unitary evolution, and the unitary evolution does not allow the jumps. Yet both are simultaneously accepted by quantum theory, creating a most uncomfortable state of affairs.
And what if the quantum evolution equation is plainly wrong? Perhaps there are alternative manners to use H.
Imagine a model, or theory, where the stationary states and energies remain the very same specified by H, but with a different (from the unitary) continuous evolution, and where an initial stationary state evolves in a deterministic manner into a final stationary state, with energy being continuously absorbed and radiated between the stationary energy levels. In this natural theory there is no use, nor need, for a probabilistic interpretation. The natural model for the hydrogen, comprising a space of states, energy observable and evolution equation is explained in
My question is: with this natural theory of atoms already elaborated, what are the chances of its acceptance by mainstream physics?
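For concreteness, any such alternative evolution must reproduce the stationary energies fixed by H. A minimal sketch of the hydrogen levels and a transition energy, using the usual Rydberg value of about 13.6057 eV:

```python
# The stationary energies that H specifies for hydrogen: E_n = -R_inf / n^2.
RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy, eV

def energy_level(n):
    """Energy of the n-th stationary state, in eV."""
    return -RYDBERG_EV / n**2

def transition_energy(n_i, n_f):
    """Energy released (eV) when the electron goes from level n_i down to n_f;
    in the proposal above this would be radiated continuously, not in a jump."""
    return energy_level(n_i) - energy_level(n_f)

E_lyman_alpha = transition_energy(2, 1)   # ~10.2 eV
```

Whether the energy leaves in a discontinuous jump or a continuous slide, the observable line spectrum is fixed by these differences, which is why the alternative theory can keep the spectroscopy intact.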
Professional scientists, in particular physicists and chemists, are well versed in the history of science, and modern communication hastens the diffusion of knowledge. Nevertheless, important scientific changes seem to require a lengthy process, including the disappearance of most leaders, as Max Planck noted: "They are not convinced, they die."
Scientists seem particularly conservative and incapable of admitting that their viewpoints are mistaken, as was long ago the case with the flat Earth, geocentrism, phlogiston, and other scientific misconceptions.
MY EMAIL TO NSF:
My name is Andrei-Lucian Drăgoi and I am a Romanian pediatrician specialist, also undertaking independent research in digital physics and informational biology. Regarding your project called " Ideas Lab: Measuring "Big G" Challenge" (that I’ve found at this link: http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=505229&org=PHY&from=home), I want to propose you a USA-Romania collaboration in this direction, based on my hypothesis that each chemical isotope may have its own “big G” imprint.
The idea is simple. Analogously to the photon, the hypothetical graviton may actually have a quantum angular momentum measured by a gravitational Planck-like quantum, which I have denoted h_eg, and a quantum G scalar G_q = f(h_eg). Although the Planck constant (h) is constant, h_eg may not be: it may show slight variability depending on many factors, including the intranuclear energetic pressures measured by the average binding energy per nucleon (E_BN) in any (quasi-)stable nucleus. I have proposed a simple first-degree function that generates a series hs_eg(E_BN) as a scalar function of E_BN, which in turn implies a series of quantum G scalars Gs_q(E_BN) = f[hs_eg(E_BN)], also a function of E_BN, as it depends on hs_eg(E_BN). In conclusion: every isotope may have its own G "imprint", and that is one possible explanation (the suspected so-called "systematic error") for the variability of the experimental G values from one team to another. I have called this hypothesis the multiple-G hypothesis (mGH). I also propose a series of systematic experiments to verify the mGH. As I do not work as a physicist (I am a Pediatrics specialist working in Bucharest, Romania) and only do independent research in theoretical physics, I do not have access to experimental resources, so I propose a collaboration between the USA and Romania, with experiments conducted either in the USA or in Romania (at the "Horia Hulubei National Institute for R&D in Physics and Nuclear Engineering (IFIN-HH)", Magurele, Romania: http://www.nipne.ro).
I have attached an article (in pdf format) that contains my hypothesis and its arguments (exposed in the first part of this paper): this work can also be downloaded from the link http://dragoii.com/BIDUM3.0_beta_version.pdf
My main research pages are:
Please send me minimal feedback so that I know my message was received.
I am open to any additional comment/suggestion/advice you may have on my idea about big G.
===============================
THE REPLY FROM NSF:
Dear Dr. Dragoi,
Thank you for your interest in our programs. Unfortunately, NSF does not fund research groups based outside the US. Should you succeed in your goal of creating a Romanian-US collaboration, please have your American collaborators contact NSF directly.
Best regards,
Pedro Marronetti
====================================
FINAL CONCLUSION: If you are interested in this collaboration, please send feedback to dr.dragoi@yahoo.com so that we may apply to the NSF challenge by 26 October 2016 (the deadline).
I'm going to put an insulator, playdough, on some copper metal. I was wondering how this would affect charge collection from a fundamental physics standpoint. The free electrons (the source) would be coming from, or already be on, the surface. I was thinking they would go around the insulator but remain on the surface. Am I correct in this assumption?
In Chapter V, of The Nature of the Physical World, Arthur Eddington, wrote as follows:
Linkage of Entropy with Becoming. When you say to yourself, “Every day I grow better and better,” science churlishly replies—
“I see no signs of it. I see you extended as a four-dimensional worm in space-time; and, although goodness is not strictly within my province, I will grant that one end of you is better than the other. But whether you grow better or worse depends on which way up I hold you. There is in your consciousness an idea of growth or ‘becoming’ which, if it is not illusory, implies that you have a label ‘This side up.’ I have searched for such a label all through the physical world and can find no trace of it, so I strongly suspect that the label is non-existent in the world of reality.”
That is the reply of science comprised in primary law. Taking account of secondary law, the reply is modified a little, though it is still none too gracious—
“I have looked again and, in the course of studying a property called entropy, I find that the physical world is marked with an arrow which may possibly be intended to indicate which way up it should be regarded. With that orientation I find that you really do grow better. Or, to speak precisely, your good end is in the part of the world with most entropy and your bad end in the part with least. Why this arrangement should be considered more creditable than that of your neighbor who has his good and bad ends the other way round, I cannot imagine.”
See:
The Cambridge philosopher Huw Price provides a very engaging contemporary discussion of this topic in the following short video of his 2011 lecture (27 min.):
This is well worth a viewing. Price has claimed that the ordinary or common-sense conception of time is "subjective" partly by including an emphatic distinction between past and future, the idea of "becoming" in time, or a notion of time "flowing." The argument arises from the temporal symmetry of the laws of fundamental physics --in some contrast and tension with the second law of thermodynamics. So we want to know if "becoming" in particular is merely "subjective," and whether this follows on the basis of fundamental physics.
Eddington, Chapter V, "Becoming"
I returned to Einstein's 1907 paper and found that the final conclusion offered at the end apparently omitted one last step. Namely, that the lowered value of the speed of light c of a horizontal light ray downstairs, when watched from above, is absolutely correct; but only the conclusion drawn from this observation – that the speed of light is indeed reduced downstairs – was premature.
This is because the light ray hugging the floor downstairs is hugging a constantly receding floor despite the fact that the distance is constant.
(In the same vein, the increased speed of light of a light ray hugging the ceiling of the constantly accelerating rocketship – not mentioned by Einstein – holds true for a ceiling that is constantly approaching the lower floor despite the fact that the distance is constant.) The correctly predicted "gravitational redshift" – and the opposite blueshift in the other direction – qualify as a proof that this thinking is sound.
N.B.: The proposal is perhaps not as stupid as it sounds, because the theory employed here is solely the special theory of relativity (which by definition presupposes global constancy of c). This fact was of course constantly on Einstein's mind and can explain why he fell silent on the topic of gravitation for 3 ½ years.
When he returned to it in mid-1911, writing the originally unfinished c-modifying equation of 1907 down explicitly, he may have been hoping in the back of his mind that someone could spot the error that he still felt might be involved. It is not an error, only the omission of a final step.
Now my dear readers have the same chance of offering their help regarding my above "constant-c solution" to this conundrum of Einstein’s, which is perhaps the most important one in history.
For example, carbon-14 (atomic number 6, mass number 14) decays to nitrogen-14 (atomic number 7, mass number 14) plus one beta particle (an electron). In this example, how does the nitrogen atom get another electron to neutralize its charge (number of protons = number of electrons)?
regards
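A minimal bookkeeping sketch of the charge balance in this decay. The key point: total charge is conserved, so the daughter nitrogen is born as an N+ ion and becomes neutral only after capturing a stray electron from its surroundings:

```python
# Charge bookkeeping for 14C -> 14N + beta- (the antineutrino is neutral
# and omitted from the charge count).
protons_before, orbital_electrons = 6, 6   # a neutral carbon-14 atom
protons_after = protons_before + 1         # a neutron became a proton: nitrogen-14
emitted_betas = 1                          # the beta electron leaves the atom

# Total charge, counting the escaped beta, is still zero: charge is conserved.
total_charge_after = protons_after - orbital_electrons - emitted_betas

# Locally, though, the atom keeps only its original 6 orbital electrons,
# so it sits as a +1 ion until it captures a free electron from its environment.
ion_charge = protons_after - orbital_electrons
```

So nothing is violated: the "missing" electron is simply picked up later from the surrounding matter (which, on average, has the beta electrons to spare).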
My thesis subject is "study of ephemeral organizational phenomena inside meta-organizations".
I'm currently looking for articles that are connecting fundamental physics and management science.
I am also looking for articles that treat timespace as a whole rather than time and space separately, mostly in management science.
If you have any suggestions about my subject, feel free to send me your advice!
Your help will be highly appreciated !
Are the fundamental physical constants rational numbers? I think it would be true to say we cannot make measurements that are non-rational.
Over the years, many physicists have wondered whether the fundamental constants of nature might have been different when the universe was younger. If so, the evidence ought to be out there in the cosmos where we can see distant things exactly as they were in the past.
One thing that ought to be obvious is whether a number known as the fine structure constant was different. The fine structure constant determines how strongly atoms hold onto their electrons and is an important factor in the frequencies at which atoms absorb light.
If the fine structure were different earlier in the universe, we ought to be able to see the evidence in the way distant gas clouds absorb light on its way here from even more distant objects such as quasars.
That debate pales in comparison to new claims being made about the fine structure constant. In 2010, John Webb at the University of New South Wales, one of the leading proponents of the varying-constant idea, and a few cobbers said they had new evidence from the Very Large Telescope in Chile that the fine structure constant was different when the universe was younger.
While data from the Keck telescope indicate the fine structure constant was once smaller, the data from the Very Large Telescope indicates the opposite, that the fine structure constant was once larger. That’s significant because Keck looks out into the northern hemisphere, while the VLT looks south.
This means that in one direction the fine structure constant was once smaller, and in exactly the opposite direction it was once bigger. And here we are in the middle, where the constant is what it is (about 1/137.03599…).
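For reference, the quoted value follows from the standard definition alpha = e^2 / (4*pi*eps0*hbar*c); a minimal sketch with CODATA-style values:

```python
import math

# CODATA-style values of the constants entering the fine structure constant.
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 299792458.0          # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
inv_alpha = 1.0 / alpha   # ~137.036
```

The dipole claim amounts to saying this dimensionless combination drifts by a few parts per million across the sky, which is why it can in principle be read off from absorption-line spacings in distant gas clouds.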
So, do you think that fine structure constant varies with direction in space?
For further reading on this issue, see http://www.technologyreview.com/view/420529/fine-structure-constant-varies-with-direction-in-space-says-new-data/.
Refs:
arxiv.org/abs/1008.3907: Evidence For Spatial Variation Of The Fine Structure Constant
arxiv.org/abs/1008.3957: Manifestations Of A Spatial Variation Of Fundamental Constants On Atomic Clocks, Oklo.
Included here you can also find a 2004 ApJ paper by John Bahcall, who is a proponent of varying fine structure constant. (URL: http://www.sns.ias.edu/~jnb/Papers/Preprints/Finestructure/alpha.pdf)
Also known as the reversibility paradox, this is an objection to the effect that it should not be possible to derive an irreversible process from time-symmetric dynamics, or that there is an apparent conflict between the temporally symmetric character of fundamental physics and the temporal asymmetry of the second law.
It has sometimes been held in response to the problem that the second law is somehow "subjective" (L. Maccone) or that entropy has an "anthropomorphic" character. I quote from an older paper by E.T. Jaynes,
"After the above insistence that any demonstration of the second law must involve the entropy as measured experimentally, it may come as a shock to realize that, nevertheless, thermodynamics knows no such notion as the "entropy of a physical system." Thermodynamics does have the notion of the entropy of a thermodynamic system; but a given physical system corresponds to many thermodynamic systems" (p. 397).
The idea here is that there is no way to take account of every possible degree of freedom of a physical system within thermodynamics, and that measures of entropy depend on the relevancy of particular degrees of freedom in particular studies or projects.
Does Loschmidt's paradox tell us something of importance about the second law? What is the crucial difference between a "physical system" and a "thermodynamic system?" Does this distinction cast light on the relationship between thermodynamics and measurements of quantum systems?
Regarding our current understanding of quantum mechanics, especially the interpretation of the theory of measurements in terms of parallel universes.
Theoretical physics, quantum mechanics, Fundamental physics
The Smirnov-Rueda team claimed to have measured that the speed of the bound electromagnetic field is finite but larger than the speed of light [1-3]. However, their result needs further testing.
A direct way to measure the speed of the electromagnetic force was presented in [4]. In this scheme, three stationary charged balls or magnets (M_{1}, M_{2} and M_{3}) interact with each other; when M_{3} is moved, M_{1} and M_{2} will be moved by the motion of M_{3}. If the distances from M_{3} to M_{1} and M_{2} are L_{1} and L_{2} respectively, then by observing the times at which M_{1} and M_{2} start to move, the speed of the electromagnetic force can be calculated as v = (L_{1} - L_{2})/(t_{1} - t_{2}), where t_{1} and t_{2} are the times at which M_{1} and M_{2} start to move, respectively.
Thus, the speed of the electromagnetic force can be directly observed.
M_{3} can be a transformer. When its current is stopped, its magnetic field disappears; M_{1} and M_{2}, positioned so that gravity can move them as soon as the magnetic force vanishes, will then be set in motion by gravity. In this case, the propagation speed of the magnetic field is measured.
This is a simple experiment: only three magnets (or charged balls) are needed. But to observe the times at which the magnets start to move, a high-speed camera is needed. As ∆L = L_{1} - L_{2} is on the level of about 30 cm, the timing precision needs to be better than 10^{-11} seconds; however, a good high-speed camera can resolve times with a precision of about 10^{-12} seconds.
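A minimal sketch of the proposed extraction v = (L1 - L2)/(t1 - t2), together with the timing scale involved. The numbers are hypothetical, and for the scale estimate only I assume the disturbance propagates at c; a speed above c, as the Smirnov-Rueda claim suggests, would compress the arrival-time gap further and demand correspondingly finer resolution:

```python
c = 299_792_458.0   # speed of light, m/s

def propagation_speed(L1, L2, t1, t2):
    """Speed inferred from the arrival-time difference: v = (L1 - L2)/(t1 - t2)."""
    return (L1 - L2) / (t1 - t2)

# Hypothetical geometry: M1 at 0.60 m and M2 at 0.30 m from M3, with start
# times chosen so the disturbance travels at exactly c, as a consistency check.
v = propagation_speed(L1=0.60, L2=0.30, t1=0.60 / c, t2=0.30 / c)   # recovers c

# At speed c, a 30 cm path difference corresponds to about 1 ns between arrivals.
dt_at_c = 0.30 / c
```

The formula only uses the difference of the two distances and the two start times, so common delays (trigger latency, mechanical response of the balls) cancel, which is the main attraction of the two-target layout.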
This experiment is fundamental for physics. Apart from the Smirnov-Rueda team's work, there is no experimentally established, generally accepted conclusion about the speed of the electromagnetic force; clearly, if there were, the Smirnov-Rueda team's work could not have been published.
References
[1] Kholmetskii A. L. et al., 2007, Experimental test on the applicability of the standard retardation condition to bound magnetic fields, J. Appl. Phys. 101, 023532
[2] Kholmetskii A. L., Missevitch O. V. and Smirnov Rueda R., 2007, Measurement of propagation velocity of bound electromagnetic fields in near zone, J. Appl. Phys.102 013529
[3] Missevitch O. V., Kholmetskii A. L. and Smirnov Rueda R., 2011, Anomalously small retardation of bound (force) electromagnetic fields in antenna near zone, Europhys. Lett.93 64004
[4] Zhu Y., 2011, Measurement of the speed of gravity, arXiv: 1108.3761v8
Or, for that matter, Ideal Gas pressure? It can't be gravity or weak nuclear (both too weak). It can't be electromagnetic, as neutrons can exhibit it. It can't be strong nuclear, as that is always attractive, and doesn't act between electrons, anyway. So, what's going on?
A glance through our cosmic neck of the woods reveals that matter in the Universe is distributed in a highly structured fashion, why is it so?
Regards,
Bhushan Poojary
In the attached paper from the Gauge Institute, the definition of differential in e-calculus is (see page 8):
f'(x) = {f(x+e) - f(x)}/e (1)
where e is defined as an infinitesimal (i.e. it should be smaller than any number but greater than zero).
From definition (1) it should be clear that, as e approaches zero, the function is assumed to behave locally like a slope (linearly). But this assumption breaks down for real data from many phenomena: as the observation scale becomes smaller and smaller, the data behave not like a linear slope but like Brownian motion. Applications such as earthquake data, stock-market price data, etc. indicate that such data include an indeterminacy (I).
I just thought that perhaps we can extend the definition of differentiation to include indeterminacy (I), perhaps something like this:
f'(x) = {f(x+e) + 2I - f(x)}/e. (2)
The parameter I implies that the geometry of the differential is no longer a slope. The term 2I has been introduced to capture the unpredictability/indeterminacy of Brownian motion, and it can be split between left and right differentiation: the left differential carries one I and the right differential carries the other.
Another possible way is something like this:
f'(x) = (1+I)*{f(x+e) - f(x)}/e (3)
where I represents the indeterminacy parameter, ranging from 0.0 to 0.5.
Other possible approaches may include Nelson's Internal Set Theory, Fuzzy Differential Calculus, or Nonsmooth Analysis.
My purpose is to find out how to include indeterminacy into differential operators like curl and div.
That is my idea so far, you can develop it further if you like. This idea is surely far from conclusive, it is intended to stimulate further thinking.
So do you have other ideas? Please kindly share here. Thanks
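One way to motivate the indeterminacy term numerically: for a Brownian-like signal, the difference quotient {f(x+e) - f(x)}/e does not converge as e shrinks but grows like e^(-1/2). A minimal sketch with synthetic Gaussian increments (a modeling assumption, not data from any of the applications above):

```python
import math
import random

random.seed(0)  # deterministic run for reproducibility

def quotient_rms(e, n=10_000):
    """RMS of the difference quotient {f(x+e) - f(x)}/e when the increment
    f(x+e) - f(x) is Brownian, i.e. Gaussian with standard deviation sqrt(e)."""
    samples = (random.gauss(0.0, math.sqrt(e)) / e for _ in range(n))
    return math.sqrt(sum(s * s for s in samples) / n)

rms_coarse = quotient_rms(1e-2)   # ~ 1/sqrt(1e-2) = 10
rms_fine = quotient_rms(1e-4)     # ~ 1/sqrt(1e-4) = 100: no limit as e -> 0
```

This divergence is exactly what the I term in (2) and (3) is meant to absorb, and it is also why stochastic calculus replaces the ordinary derivative with Ito-type constructions for such signals.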
We assume an empty universe containing only a disk. There are two positions where our observers could stand: position A at the center, and position B at a point on the edge of the disk. Two observers at positions A and B could determine which one is rotating around the other because of the centrifugal force, which appears ONLY at position B! This rotation could be called absolute!
How is this independent of the fact that space is relative?
Would there be a centrifugal force on B if the disk (and the observers) were massless?
I want to understand physically why the observed mass increases as the particle's speed increases to relativistic values. Is the potential energy of the particle affected by this increase?
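A minimal numerical sketch of the effect in question; in the standard account the extra inertia gamma*m0 is accounted for by kinetic energy, not potential energy (the 0.9c speed and the electron mass below are illustrative choices):

```python
import math

c = 299_792_458.0   # speed of light, m/s

def gamma(v):
    """Lorentz factor 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

m0 = 9.109e-31                 # electron rest mass, kg (illustrative)
v = 0.9 * c
m_obs = gamma(v) * m0          # "relativistic mass": about 2.29 * m0
kinetic = (gamma(v) - 1.0) * m0 * c**2   # the energy that accounts for the increase
```

In modern usage the rest mass m0 is taken as invariant and the growth is attributed to momentum p = gamma*m0*v and energy E = gamma*m0*c^2; "relativistic mass" is just E/c^2 by another name.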
I want to know the sources of error that prevent us from using the Schrödinger equation to find the energy eigenvalues of atoms with more than one electron. What are the best models for many-electron atoms?
As ‘Big Questions’ I would like to name those which are fundamental to the physical understanding of our Universe, such as: Why is the Universe made of matter rather than antimatter? What is the nature of Dark Matter, and of Dark Energy? Is there a preferred reference frame to the Universe? How are the forces of nature, including gravity, unified?
For decades, the primary experimental tools for addressing such Big Questions were large particle accelerators. However, scaling of these facilities to higher energies and larger sizes has become increasingly difficult and expensive—and may soon be impossible.
The question I would like to discuss is: what can we learn from small-scale, low-cost terrestrial experiments in which subtle signs of new physics are sought through extreme sensitivity and precision?
Searches for tiny deviations from "ordinary" physical laws can be interpreted as tests of the very structure of the physical world. Examples include the breaking of symmetries such as time-reversal symmetry, searches for a variation of the fundamental constants of nature such as the fine-structure constant, or searches for a deviation from the 1/r law for the gravitational potential.
Within an appropriately chosen coordinate system, and without incorporating spatial curvature, the geodetic precession of a gyroscope orbiting a spherically symmetric, spinning mass can be recast as a Lense-Thirring frame-dragging effect. Geodetic precession and Lense-Thirring precession can therefore be described as two components of a single gravitomagnetic effect. Are de Sitter precession and frame dragging actually fundamentally different phenomena?
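For a sense of scale, the two precessions can be compared numerically for a gyroscope in a low polar Earth orbit, as in Gravity Probe B. A rough sketch (the constants, the circular-orbit geodetic formula, and the orbit-averaged Lense-Thirring formula are my own assumptions for illustration):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
GM_EARTH = 3.986e14  # Earth's GM, m^3/s^2
J_EARTH = 5.86e33    # Earth's spin angular momentum, kg m^2/s (approx.)
R_ORBIT = 7.02e6     # orbital radius, m (~640 km altitude)

SEC_PER_YR = 3.156e7
MAS_PER_RAD = 180.0 / math.pi * 3600.0e3  # milliarcseconds per radian

v = math.sqrt(GM_EARTH / R_ORBIT)  # circular orbital speed

# Geodetic (de Sitter) precession rate for a circular orbit:
#   Omega_geo = (3/2) * GM * v / (c^2 * r^2)
omega_geo = 1.5 * GM_EARTH * v / (C ** 2 * R_ORBIT ** 2)

# Orbit-averaged Lense-Thirring rate for a polar orbit:
#   Omega_lt = G * J / (2 * c^2 * r^3)
omega_lt = G * J_EARTH / (2 * C ** 2 * R_ORBIT ** 3)

geo_mas_yr = omega_geo * SEC_PER_YR * MAS_PER_RAD  # thousands of mas/yr
lt_mas_yr = omega_lt * SEC_PER_YR * MAS_PER_RAD    # tens of mas/yr
```

The geodetic term comes out around two orders of magnitude larger than the frame-dragging term, which is why the two effects could be separated experimentally even if one regards them as components of a single gravitomagnetic phenomenon.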
Experiments in physics are becoming ever larger and require ever more people. There is a similar, though less pronounced, trend in theoretical physics. Will this change the status of the physicist? Will there still be great savants or scholars?
But on the other hand, the last particle of the Standard Model, the Higgs boson, has been observed and has validated the whole construct, so there is no more such routine work. There have been no significant advances in fundamental physics for forty years; it seems the old paradigm has been exhausted. There is still research looking for supersymmetric particles and the like, but that remains speculative. Wouldn't a reorganization of the activity be necessary? Should more stress be put on individual, potentially revolutionary ideas, or, on the contrary, should the current trend be intensified in order to force our way forward?
Herbert Dingle's argument is as follows (1950):
According to the theory, if you have two exactly similar clocks, A and B, and one is moving with respect to the other, they must work at different rates, i.e. one works more slowly than the other. But the theory also requires that you cannot distinguish which clock is the 'moving' one; it is equally true to say that A rests while B moves and that B rests while A moves. The question therefore arises: how does one determine, consistently with the theory, which clock works the more slowly? Unless the question is answerable, the theory unavoidably requires that A works more slowly than B and B more slowly than A - which it requires no super-intelligence to see is impossible. Now, clearly, a theory that requires an impossibility cannot be true, and scientific integrity requires, therefore, either that the question just posed shall be answered, or else that the theory shall be acknowledged to be false.
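The reciprocity at the heart of Dingle's question can be checked directly from the Lorentz transformation. A minimal sketch (my own illustration, not Dingle's): each frame, using its own simultaneity convention, finds the other clock slow by the same factor γ, and the two statements do not contradict each other because they compare different pairs of events.

```python
import math

def lorentz(t, x, v):
    """Transform event (t, x) from frame S to frame S' moving at
    speed v along x (units with c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

v = 0.6  # relative speed; gamma = 1.25

# Clock B is at rest at x' = 0, i.e. it follows x = v*t in S. At S-time
# t = 10 it sits at the event (10, 6); its own reading is the S'-time
# of that event: 10 / gamma = 8. So S finds B slow.
tB, _ = lorentz(10.0, 0.6 * 10.0, v)

# Clock A is at rest at x = 0. The event 'A reads 8' is (8, 0) in S; in
# S' it occurs at t' = gamma * 8 = 10. So S' equally finds A slow.
tA_prime, _ = lorentz(8.0, 0.0, v)
```

Each "runs slow" claim pairs one clock against a *different* set of distant, synchronized clocks, and the two frames disagree about which distant events are simultaneous; the symmetry is only broken when the clocks are reunited, which forces at least one of them to change frames.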
I would like to find literature on proposals to test the Einstein-Cartan theory with laboratory experiments.
According to Wikipedia, "[Einstein-Cartan theory] generates new predictions that can in principle validate or falsify the theory, but it cannot be validated by empirical results due to current limitations in technology."
However, I have not been able to find any paper on the subject so far.
Do any of you have some papers in mind?