Theoretical Astrophysics - Science topic

Questions related to Theoretical Astrophysics
  • asked a question related to Theoretical Astrophysics
8 answers
“The Essence of ‘E’: Unveiling the Infinitely Infinite” for your consideration. Enclosed, you will find a comprehensive exploration into the enigmatic concept of “E,” a cosmic force that transcends the boundaries of finite and infinite existence.
This manuscript represents a labor of passion and dedication, offering a unique perspective on the role of “E” in the universe. From its profound cosmic order to its paradoxical nature of being both infinitesimal and infinitely powerful, this work delves deep into the heart of a concept that defies human comprehension.
The content is structured meticulously, with an abstract that provides a concise overview of the manuscript’s scope, an engaging introduction that draws the reader into the subject matter, and detailed sections that explore the mass of “E” and the cataclysmic events it undergoes. The manuscript concludes with a thought-provoking summary of our journey into the infinitely infinite.
I believe this manuscript would make a valuable addition to [Company/Organization Name]’s collection of publications, given its unique perspective and the depth of research invested in it. It has the potential to appeal to a wide audience interested in cosmology, astrophysics, and the mysteries of the universe.
I would be delighted to discuss any further steps or provide additional information as needed. I eagerly await your response.
Relevant answer
There seems to be something quite unclear about the interplay of elements belonging to different causal orders (which we may define as expressions of different, hierarchically ordered complex systems; their concept, too, is such an expression).
We cannot speak of an absolute, homogeneously present, and consistent "heat", and the same holds for "mass".
A fundamental problem therefore arises from mixing different levels of analysis, whose specific articulations tend to negate one another.
  • asked a question related to Theoretical Astrophysics
10 answers
The question is loaded and really doesn't fit into the space for questions.
My theory proposes a candidate for Dark Matter. The candidate would be antimatter traveling in a Lagging Hypersphere (a true parallel Universe made of antimatter).
HU allows for the derivation of asymmetric Gravitational attraction. It is conceivable that in early epochs, when the LH was denser, antimatter Black Holes or Stars could align themselves with matter stars in this Hypersphere. After reaching a critical mass, tunneling could elicit annihilation and a Gamma Ray Burst.
This mechanism would be consistent with the full conversion of a star's mass into gamma rays, as is observed in GRBs.
Antimatter in a LH is also consistent with Gravitational Lensing in regions where matter is not visible.
A subtle but important aspect of my theory is that those hyperspheres are traveling at the speed of light along a radial direction.
Relevant answer
Electrons have spin because they live in a 4D spatial manifold and that is one of their degrees of freedom. The summation of all spins should be zero.
In other words, one can explain why electrons have spin without disobeying the law of conservation of angular momentum.
I don't deal with the geodesics model, and thus, I don't use spacetime.
I could benefit from your explanation of what you mean by:
Observed Radial Momentum of Spacetime
By the way, you might want to watch a short video and read a short paper:
The Big Pop Cosmogenesis - replacement to the Big Bang
Big Pop Article
That way we can establish a common language.
  • asked a question related to Theoretical Astrophysics
78 answers
Consider the two propositions of the Kalam cosmological argument:
1. Everything that begins to exist has a cause.
2. The universe began to exist.
Both are based on assuming full knowledge of whatever exists in the world, which is obviously not entirely true. Even Big Bang cosmology relies on a primordial seed whose origin and characteristics science knows nothing about.
The attached article proposes that such deductive arguments should not be allowed in philosophy and science, as they are the tell-tale sign that humans wrongly presuppose omniscience.
Your comments are much appreciated.
Relevant answer
Good deductive arguments have two properties: (1) validity and (2) soundness. Validity is entirely a formal property: it says that IF the premises are true then so is the conclusion; soundness says that not only is the argument valid, but its premises ARE true. Whether the premises are indeed true may be a matter of empirical discovery or of previous deductions or definitions (including deductions or definitions in mathematics). Sometimes it's just interesting to see what else a certain assumption commits one to and deduction can answer that question and sometimes also give us a good reason for rejecting that assumption (that is the rationale for reductio ad absurdum arguments, aka indirect proofs). It helps to keep in mind that the alleged shortcoming of deduction is not an indictment of its formal nature but a matter of the "garbage in, garbage out" principle.
  • asked a question related to Theoretical Astrophysics
5 answers
While studying the longitudinal momentum distribution of a halo nucleus during a reaction in the lab frame, I want to transform this distribution into the center-of-mass (CM) frame. Is it possible to change it directly? What factor do I need to multiply or divide by so that the momentum distribution is expressed in the CM frame? Any suggestions, methods, or references?
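In the relativistic case there is no single multiplicative factor: each momentum value must be Lorentz-boosted, since the transformation mixes momentum and energy. A minimal sketch in natural units (c = 1); the mass and CM velocity below are illustrative placeholders, not values from the question:

```python
import math

def boost_longitudinal(p_lab, m, beta_cm):
    """Lorentz-boost a longitudinal momentum from the lab to the CM frame.

    p_lab   : longitudinal momentum in the lab frame (natural units, c = 1)
    m       : particle mass
    beta_cm : velocity of the CM frame as seen in the lab
    """
    gamma = 1.0 / math.sqrt(1.0 - beta_cm**2)
    energy = math.sqrt(p_lab**2 + m**2)          # on-shell energy
    return gamma * (p_lab - beta_cm * energy)    # boosted longitudinal momentum

# Transform a whole distribution: boost each sample, not the histogram itself.
m_nucleon = 0.939          # GeV, illustrative
beta = 0.5                 # illustrative CM-frame velocity
p_samples_lab = [0.0, 0.1, 0.2, 0.3]
p_samples_cm = [boost_longitudinal(p, m_nucleon, beta) for p in p_samples_lab]
```

Note that bin widths transform as well, dp'/dp = gamma*(1 - beta*p/E), so a distribution dN/dp must be reweighted by this Jacobian, not merely shifted.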
  • asked a question related to Theoretical Astrophysics
33 answers
According to Weyl and Chandrasekhar, general relativity (GR) is a triumph of speculative thought. But it is a well-known fact that GR was initiated by two analogies. Analogy is known to be a weak form of reasoning in science and philosophy. To redress the case, this type of reasoning was renamed the Equivalence Principle (EP) in relativistic physics. The renaming, however, could not hide the fact that the presented analogy was not flawless. Irrefutable disproofs were side-stepped and the analogy was instated as the seed of a new kind of physics. EP was defended by reducing the size of the lab and the duration of the experiment. This type of defence is like proponents of the flat-earth idea defending their case by reducing the patch of land under examination until their pseudo-science theory is proven.
The attached document is a short description of the EP analogies and their well-known criticisms. The document also introduces a new EP based on uniform deceleration of a spaceship in open space. This new analogy results in a different curvature of light compared to what the original EP established using uniform acceleration. The author believes that none of the conclusions from EPs should be allowed in science, as they are based on inconclusive comparison/analogy and ignore glaring flaws in the argument.
The author would like to present this new EP for discussion and criticism.
Relevant answer
I agree with all your criticism.
It would be interesting to see a new theory evolve from your insight.
  • asked a question related to Theoretical Astrophysics
15 answers
General Relativity. Can the strength of gravity reduce for dense masses?
New discussion
Is there anything in Einstein’s Field Equations that allows the strength of gravity to reduce in regions of high mass/radius ratio? It could be desirable for two reasons.
Reason 1) From Newtonian considerations, the flatness problem is equivalent to (for each mass m):
mc^2 - GMm/R = 0 (1)
G = Rc^2/M (2)
where M and R represent the mass and radius of the rest of the universe up to the Hubble radius (small numerical constants omitted for simplicity).
For a larger mass, with the self-potential energy term included,
mc^2 - GMm/R - Gm^2/r = 0 (3)
where r is the radius of mass m, leading to
G_reduced = c^2/(M/R + m/r) = G/(1 + Gm/(rc^2)) (4)
i.e. a reduction in G for masses of high m/r ratio, becoming significant as m/r approaches c^2/G.
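As a quick numerical check of equation (4), under the question's own proposal (standard constants; the neutron-star mass and radius are illustrative values):

```python
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s

def g_reduced(m, r):
    """Equation (4): proposed reduced G for a body of mass m and radius r."""
    return G / (1.0 + G * m / (r * c**2))

# Illustrative neutron-star values: m ~ 1.4 solar masses, r ~ 10 km.
m_ns = 1.4 * 1.989e30
r_ns = 1.0e4
ratio = g_reduced(m_ns, r_ns) / G
print(ratio)   # ~0.83, i.e. a reduction of roughly 17%
```

For everyday objects Gm/(rc^2) is negligible and G_reduced is essentially G; the reduction only becomes appreciable for objects whose m/r approaches the c^2/G scale.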
Reason 2)
It would allow bounces or explosions from galactic centres and avoid a situation of infinite density and pressure. It could account for the 'foam'-like large-scale structure.
It's part of a new cosmology that predicts an apparent omega(m) of between 0.25 and 0.333 and matches supernovae data without a cosmological constant.
Relevant answer
Hi John,
Instead of "reducing gravity", what about "reducing the mass of matter" once neutron degeneracy pressure is exceeded? For example, in a collapsing neutron star, what if the mass of those neutrons were to be reduced during the collapse?
Recall that a neutron is thought to get 99% of its mass from (1) a cloud of virtual gluons, and (2) the momentum of its quarks. Only 1% of the mass of a normal neutron is thought to come from its quarks interacting with a Higgs type field.
Which makes me wonder: if the quarks in each neutron can be "confined" by gravity instead of by gluons, there'd be no need for all that glue (and the associated mc^2). Also, as gravity confines the range for those quarks to move, their momentum might also be reduced (asymptotic freedom).
Which raises the question: in a collapsing neutron star, as all those neutrons become increasingly compressed by gravity, as the mass each neutron gets from gluons and momentum disappears, does this mean that the mass of a collapsing neutron star reduces as it shrinks?
  • asked a question related to Theoretical Astrophysics
86 answers
As we know, many cosmologists argue that the Universe emerged out of nothing, for example Hawking-Mlodinow (Grand Design, 2010) and Lawrence Krauss. Most of their arguments rely on the conviction that the Universe emerged out of vacuum fluctuations.
While that kind of argument may sound interesting, it is too weak an argument, in particular from the viewpoint of Quantum Field Theory. In QFT, the quantum vacuum is far from the classical definition of vacuum ("nothing"); it is an active field consisting of virtual particles. Theoretically, under a special external field (such as a strong laser), those virtual particles can turn into real particles; this effect is known as the Schwinger effect. See, for example, the dissertation by Florian Hebenstreit.
Of course, some cosmologists argue in favor of the so-called Cosmological Schwinger effect, which essentially says that under strong gravitational field some virtual particles can be pushed to become real particles.
Therefore, if we want to put this idea of pair production into cosmological setting, we find at least two possibilities from QFT:
a. The universe may have beginning from vacuum fluctuations, but it needs very large laser or other external field to trigger the Schwinger effect. But then one can ask: Who triggered that laser in the beginning?
b. In the beginning there could be strong gravitational field which triggered Cosmological Schwinger effect. But how could it be possible because in the beginning nothing exists including large gravitational field? So it seems like a tautology.
Based on the above two considerations, it seems that the idea of Hawking-Mlodinow-Krauss that the universe emerged from nothing is very weak. What do you think?
Relevant answer
A universe can be created from nothing without any external laser or strong gravitational field. In the QFT vacuum, real particles can be created and annihilated thereafter, provided their lifetime dt and energy dE satisfy the uncertainty relation, roughly dt dE ~ h, where h is the Planck constant.
Owing to the uncertainty relation, borrowing a small amount of energy (dE ~ 0) from the vacuum is allowed for a long time. According to some estimates, the total energy (including the negative gravitational energy) of our Universe is precisely zero or very close to zero. Hence, if the Universe was created as a quantum fluctuation, its lifetime can be almost infinite.
  • asked a question related to Theoretical Astrophysics
5 answers
Why should we care about axions which were not found in connection with dark matter?
They are just hypothetical particles.
Relevant answer
Dear Natalia S Duxbury,
Axions would be a good candidate for dark matter for two reasons:
they have no electric charge, and their interaction with matter is very weak. The main problem is that these purely speculative particles have never been observed.
  • asked a question related to Theoretical Astrophysics
50 answers
The obvious answer is one year as the speed of the oh-my-God (OMG) particle was calculated to be about 0.999,999,999,999,999,999,999,995,1c.
However, at this speed, from the frame of the particle, one light year distance should be reduced to almost zero according to the idea of length contraction in special relativity. Thus with no distance to travel the particle should travel the distance in no time.
What is the correct answer? About zero or one year?
It provides a brief analysis of the consequences of the Mount Washington experiment, which was conducted to test time dilation and length contraction.
Frisch, D. H.; Smith, J. H. (1963). "Measurement of the Relativistic Time Dilation Using μ-Mesons". American Journal of Physics. 31 (5): 342–355.
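The two answers are reconciled by noting that "one year" is measured in the Earth frame while "about zero" is the particle's proper time. A quick check (the velocity deficit 1 - v/c is taken from the figure quoted in the question; double precision cannot represent v/c itself, so we work with the deficit):

```python
import math

# 1 - v/c for the OMG particle, from the figure quoted above:
# v = 0.9999999999999999999999951 c  =>  deficit = 4.9e-24.
deficit = 4.9e-24

# gamma = 1/sqrt(1 - v^2/c^2) ~ 1/sqrt(2*deficit) when deficit << 1
gamma = 1.0 / math.sqrt(2.0 * deficit)

year_s = 3.156e7                  # one year in seconds
t_earth = year_s                  # Earth-frame time to cross 1 light year
t_particle = t_earth / gamma      # proper time in the particle's frame

print(gamma)        # ~3.2e11
print(t_particle)   # ~1e-4 s: "about zero" in the particle frame
```

Both statements are thus consistent: one year elapses in the Earth frame, while the length-contracted distance is crossed in roughly a tenth of a millisecond of the particle's proper time.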
Relevant answer
Dear Ziaedin,
ZS: To make the problem easier suppose the OMG particle is approaching another particle. The observer in the frame of OMG particle can assume its own frame is standing still and the other particle is approaching it. In that case it takes one year for these two particles to meet.
Slow down, what event are you timing that from? We know when they meet but you have no starting point to define a duration.
ZS: But the same observer in the frame of OMG particle can also assume the OMG particle is speeding towards the other particle and thus the distance between them vanishes to zero which consequently reduces the time of travel to zero. The same two arguments can be repeated in the frame of other particle.
Yes, or in any other frame you choose, but if you want to say "it takes one year for these two particles to meet" you need an event at each end of that period, not just the meeting.
What you are saying is a completely standard aspect and not a problem in any way. I would recommend you look up something called the "Parable of the Surveyors" to understand this better. I think it originated with John Wheeler, but you'll find several versions of it on the web.
  • asked a question related to Theoretical Astrophysics
4 answers
Hi All!! I have recently completed my PhD in theoretical astrophysics with work on accretion flow around black holes. Now I want to venture into some observational studies. I would like to know how I can make use of the VIRTUAL OBSERVATORIES to start some good quality research work in observations. I would be very glad if you could share some useful links or documents. Thanking you all...
Relevant answer
A virtual observatory provides users a platform to access large astronomical databases and user-friendly software for processing and analyzing the data. The Virtual Observatory India project is one such platform; links to other virtual observatory sites can be found there as well.
  • asked a question related to Theoretical Astrophysics
4 answers
I understand that some "Two Time" approaches work well mathematically. I understand one of these configurations is 3D space with 2D time. How does this model compare with the 3S2T model by Itzhak Bars?
Relevant answer
"Minkowski formulation itself was a mathematical model designed to emulate Einstein’s results". I always assumed Minkowski space came before Special Relativity.
  • asked a question related to Theoretical Astrophysics
12 answers
Is there any opportunity for criticizing Einstein's theory of gravity because of the present lack of a satisfactory theory of dark matter? As far as I know (and my knowledge of DM is almost null), the only Ricci-flat solutions thoroughly considered as potential sources of dark matter are black holes. What about other Ricci-flat solutions? The nicest ones are Corvino-Penrose solutions. There are also gravitational waves and transients. Transients may disappear sooner, but newer ones keep coming, so it is reasonable to assume that as time passes galaxies will have more of them at any moment. Another related question is as follows. Sometimes we see in the news that new clusters, galaxies, and other objects with a huge mass of ordinary matter, or a black hole, are being discovered. Admittedly these are drops in an ocean. Still, one can ask how such a discovery can affect the fraction of dark matter locally in a galaxy, if not globally?
Relevant answer
GR has not predicted a single galaxy's rotation (without invisible & arbitrary help) and its invisible helper has not been detected in any form after 40 yrs. It needs correction.
  • asked a question related to Theoretical Astrophysics
6 answers
The goal of this question was for me to provide a proof that the Absolute Peak Luminosity of type 1a Supernovae has a G^(-3) dependence.
The argument is correct but it seems to be too complex.
There is a simpler argument that people can understand better.  Just follow these links.
Supernova distances are mapped to the Absolute Peak Luminosity of their light profiles. This means that the only two measured values are the luminosity at the peak and the luminosity 15 days later (to measure the width).
Supernova explodes through a nuclear chain reaction:
1) C+C->Mg
2) Mg+O->Ca
3) Ca+O->Ni
4) Ni->Co->Fe 
Luminosity is equal to the number of Ni atoms decaying per second, or dNi/dt.
So the peak Luminosity is the peak dNi/dt.
There are TWO considerations that together support my approximation:
a) The detonation process accelerates reactions 2-3 (in comparison with the equilibrium rates prior to detonation).
b) The detonation process adds a delay to photon diffusion. The shock wave originating in the core will travel to the surface. When the shock wave arrives at the surface, reactions 1-3 should (in principle) stop. The ejecta (unburned residues) are then ejected, and the photons resulting from the Ni decay have to diffuse through the thick ejecta cloud.
If you look into the Light/[C]^2 curve, you will realize that it has a small delay with respect to the Light curve. The constraint of having a finite star size forces the maximum absolute peak luminosity to synchronize itself with the maximum peak Magnesium rate dMg/dt, which happens at the maximum radius. So, the physics of a finite star and a shockwave nuclear chemistry process forces the Peak Absolute Luminosity (dNi/dt) to match the maximum rate of Magnesium formation (dMg/dt). Implicit in this conclusion is the idea that the pressure and temperature jump expedites the intermediate fusions.
My contention is:
a) Light has to go through a diffusion process while traveling from the core. The motion of the detonation curve might synchronize light and [C]^2
b) The model in the python script contains a parameter associated with the light diffusional process leading to the peak luminosity.
I would love to hear about the chosen rate values (I used arbitrary values that would provide a time profile in the order of the observed ones). I would appreciate if you had better values or a model for rewriting the equations for the nuclear chain reaction.
I see the detonation process as a Mg shock wave propagating through the star. Light would follow that layer and thus be automatically synchronized with [C]^2
Under these circumstances, volumetric nuclear chemistry depicted in the python script would have to be replaced by shockwave chemistry.  That would certainly be only dependent upon the Mg content on the shockwave and thus make light be directly proportional to [C]^2!!!
In Summary:
HU sees the Supernova Light process as proportional to [C]^2. This assertion is supported by two mechanisms:
  1. Detonation temperature increase will increase the rate of equations 2-3
  2. Detonation process should be modeled as a nuclear chemistry shockwave where Mg is being consumed as fast as it is being created. Light is following this shockwave and will peak by the time the shockwave reaches the surface of the Star.  So, the shockwave mechanism ties together light diffusion and Carbon nuclear chemistry.
Since I wrote this, I followed up on my own suggestion and considered the shockwave nuclear chemistry approach. You can download all my scripts at the github below.
The shockwave model considers that the amount of light in a cell along the shockwave is the integrated light created through its evolution. It is developed as a one-dimensional process, since the observation (billions of years away from the supernova) can be construed as having contributions only from the cells along the radial line connecting us to the Supernova.
So, the model is unidimensional. That said, it contains all the physics of a tri-dimensional simple model. All rates are effective rates since during the Supernova explosion nuclear reactions are abundant (one can have tremendous variations on neutron content).
The physics is the following:
a) The White Dwarf reaches the epoch-dependent Chandrasekhar mass. Compression triggers Carbon detonation. A shockwave starts at the center of the White Dwarf.
b) That shockwave induces the 2C->Mg step. The energy released increases the local temperature and drives the second and third equations toward the formation of Ni. Ni decay releases photons.
c) Photons follow the shockwave and diffuse to the surface where we can detect them. The shockwave takes tc to reach the Chandrasekhar radius (surface of the White Dwarf).
d) Luminosity comes from the Ni decay in the volume element plus the aggregate photons traveling with the shockwave. They diffuse to the surface.
e) Two diffusion rates are considered. One for light diffusion within the Star and another for diffusion in the ejecta.
# Diffusion process with two rates 0.3 for radiation created before the shockwave
# reaches surface and 0.03 for radiation diffusion across ejecta
f) I considered tc to be 15 days; that is, it takes 15 days to reach peak luminosity. Changing this value doesn't change the picture.
g) The peak luminosity is matched to the peak Magnesium formation at t=tc or when the shockwave reaches the Star surface.
This means that Physics makes the Absolute Luminosity Peak also the peak of Magnesium formation, which takes place at the Star surface.
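The chain reactions 1-4 above, with luminosity tied to the Ni decay rate, can be sketched as a toy volumetric rate-equation model (all rate constants are arbitrary illustrative values, as in the discussion above; the shockwave chemistry would replace these volumetric rates):

```python
# Toy rate-equation sketch of the chain described above (arbitrary rates,
# chosen only to produce a rise-and-fall light curve):
#   1) C + C  -> Mg      2) Mg + O -> Ca
#   3) Ca + O -> Ni      4) Ni -> Co -> Fe (decay releases photons)
k1, k2, k3, lam = 5.0, 3.0, 3.0, 0.5   # arbitrary rates; lam = Ni decay rate
dt, steps = 1e-3, 20000                # explicit Euler integration settings

C, O, Mg, Ca, Ni = 1.0, 1.0, 0.0, 0.0, 0.0
lum = []
for _ in range(steps):
    r1 = k1 * C * C        # reaction 1: 2C -> Mg
    r2 = k2 * Mg * O       # reaction 2: Mg + O -> Ca
    r3 = k3 * Ca * O       # reaction 3: Ca + O -> Ni
    C += -2.0 * r1 * dt
    Mg += (r1 - r2) * dt
    Ca += (r2 - r3) * dt
    O += -(r2 + r3) * dt
    Ni += (r3 - lam * Ni) * dt
    lum.append(lam * Ni)   # luminosity ~ dNi/dt via the decay channel

peak_step = lum.index(max(lum))   # interior peak: rise, peak, then decay
```

With any such rates, the qualitative picture in the question holds: luminosity starts at zero, rises as Ni accumulates, and decays once the upstream fuel is exhausted.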
Relevant answer
Researchgate doesn't allow for deletion of a question, so I will just replicate a derivation of Luminosity proportional to G^(-3).
Dr. David Arnett was generous enough to point out that he didn't agree with my argument using the thermonuclear chain reactions. It is not clear that he disagreed with the final conclusion.
In any event, Dr. Arnett made an imprint on me. Like a duckling recently hatched, Dr. Arnett's work was the first one I found modeling Luminosity. So, he is like a (duckling) mother to me...:)
If, my idea were to be correct, it would probably have to be correct within the logical framework created by Dr Arnett's oeuvre.
So, I searched his work for equations for the Luminosity of Supernovae and found one from 1980. I started with equation (40) and reviewed what influence a variable G would have on it. Since the radius of this Supernova varies according to the Chandrasekhar radius, or G^(-1/2), and since epoch-dependent Supernovae are scaled-down Supernovae, their thermal energy (a volumetric integral) would scale with G^(-3/2)... Left was the calculation of the mass of the Sun within that context.
I understood the Mass of the Sun not as the mass of our current star, the Sun, but as a unit of mass (currently matching the Sun's mass) in the context of a Supernova.
The context of a Supernova means that radiative pressure outweighs gas pressure. In that regime, all the Gravitational dependence of the Sun's mass is included in its mass.
L (Sun) is proportional to Sun's mass.
The Luminosity of the star is also directly proportional to its surface area. That brings about another G^(-1). The last G^(-1/2) comes directly from an R factor. So:
Thermal energy scales with G^(-3/2)
Sun Mass has a dependence of G^(-1)
Radius scales with G^(-1/2)
The total dependence of the Supernova Luminosity is the product of all these dependences, or G^(-3).
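The combined scaling is just the sum of the exponents listed above (a check of the arithmetic only, not of the underlying physical assumptions):

```python
# Exponents of G in each factor of the Arnett-based luminosity estimate,
# as listed above; the overall scaling is the product of the powers,
# i.e. G raised to the sum of the exponents.
exponents = {
    "thermal_energy": -1.5,   # thermal energy ~ G^(-3/2)
    "sun_mass_unit": -1.0,    # Sun mass unit ~ G^(-1)
    "radius": -0.5,           # Chandrasekhar radius ~ G^(-1/2)
}
total = sum(exponents.values())
print(total)   # -3.0, i.e. Luminosity ~ G^(-3)
```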
So, I obtained the same result from my simple-minded physical reasoning using the infinitely more sophisticated work of Dr. David Arnett.
  • asked a question related to Theoretical Astrophysics
1 answer
This is a rebuttal to a reviewer's comment about my claim that the Absolute Peak Luminosity of type 1a Supernovae is proportional to G^(-3), rendering apparent SN1a distances dependent on G^(3/2).
Let me know if you disagree that I kicked this objection to the curb and thus all Supernovae distances are overestimated by G^(3/2) !!!!!  :)
REVIEWER:  I’m also skeptical that the luminosity of a SN Ia if G were different would scale as G^-3 (or M_ch^2).
Ni-56 production is not a simple rate-limited process; SNe Ia undergo a deflagration that (in most cases) transitions to a detonation. They burn about half their mass to Ni-56 (depending on when the detonation occurs). Even if Ni-56 production were a simple process, the radius (and thus the density) of the white dwarf also changes with G.
ANSWER: Firstly, let's consider the reviewer's assertion that the density of a White Dwarf also changes with G. That is incorrect. A detailed derivation was contained in the appendix and is reproduced below.
I corrected an assumption about Luminosity and Mass.  Now it is perfect...:)
This argument proves that Luminosity depends upon G^(-3), and since G is epoch-dependent in my theory and proportional to the inverse of the 4D radius of the Universe, earlier epochs had stronger Gravitation. Stronger Gravitation means weaker SN1a explosions, resulting in overestimation of distances. The farther away the SN1a, the larger the overestimation.
The distances are overestimated by G^(1.5). Once one corrects them, Inflation Theory disappears in a puff... The same goes for General Relativity and Dark Energy....:)
The argument supporting this dependence is based on the work of David Arnett about type II Supernova Luminosity. This is an estimation of the dependence of the Luminosity with G.
To extract the dependence, we force the radius of the Supernova to have a Chandrasekhar-radius dependence. We also estimated the dependence of M0 (the Sun mass) upon G. The Sun mass is a relative mass reference within the context of the Supernova mass. Supernovae occur in the regime where radiative pressure dominates (as opposed to gas pressure). Under that circumstance, the Luminosity dependence comes out as Luminosity proportional to G^(-3).
Needless to say, this derivation is trivial and consistent with Supernovae and Star models.
It takes just one page to be derived, easy as Butter.  (of course, after David Arnett did all the hard work...:)
This means that the SN1a ruler, which is the basis of Cosmology and Astrophysics would be faulty under an epoch dependent G context.  Since HU is epoch-dependent and predicts the Supernovae distances perfectly and without any parametrization, that places the Standard Model in a very precarious position.
If you add to that, HU observation of Neutronium Acoustic Oscillations or NAO... (which is what Algebraists should be saying right now...:)  I think, there is reckoning coming...
See the NAO... SDSS had this data for 10 years. Since they are basically Algebraists and see the Universe according to GR, they cannot imagine acoustic waves along distances. The Universe is supposed to be Uniform...
HU sees oscillations primarily along distances (which corresponds to cosmological angles).  There may or may not be cross-talk with the 3D angular modes.   I say that because I don't see the 150 Mpc wavelength in HU 2-point correlation.
So, how long will the Community refrain from welcoming my conclusion that there was no Big Bang (there were Many Bangs), that the Universe didn't come out of a fiery explosion, that dilation is nonsense, that vacuum fluctuations driving a Big Bang are utterly nonsense... that GR is nonsense, etc.?
Relevant answer
Below is the Hypergeometrical Universe Theory (HU) answer to the question of how environments with different Gravitation would affect the Absolute Luminosity of SN1a Supernovae.
The derivation starts with the work of David Arnett on type II Supernovae. We apply the type 1a Chandrasekhar-radius G dependence to the thermal Energy and to R0. We also scale the Sun Mass by modeling the Sun mass under different regimes: low and high radiative pressure.
This is an estimation to see if the expected dependence is consistent with astronomical observations, so the precise value is less relevant than the range.
Astronomical observations are consistent, within the logical framework of the Hypergeometrical Universe, with a G^(-3) dependence. That is exactly what one obtains using the high radiative regime to evaluate the G dependence of the mass of the Sun.
That said, irrespective of which model we used, the range varied between G^(-3.5) and G^(-3).
The relevance of this result is that in HU, G is inversely proportional to the 4D radius of the Universe.
The Standard Cosmological Model considers that the Absolute Luminosity of a SN1a is constant or standardizable (variations due to different amount of ejecta are renormalizable through the empirical methodology WLR).
HU proposes that G is epoch-dependent, which would make SN1a epoch-dependent, in such a way that farther-away SN1a had intrinsically weaker explosions and thus reduced Luminosity. Our adjustment of David Arnett's work indicated a G^(-3) Luminosity dependence.
That would result in an overestimation of the distances of Supernovae by G^(1.5). After correcting the distances, we can compare HU d(z) predictions with the astronomical observations. See the plot below:
The results below are consistent with 
  • asked a question related to Theoretical Astrophysics
11 answers
As electrons, protons and neutrons become degenerate......
Relevant answer
Pauli exclusion in some stars leads to degeneracy, and to explanations of how and why stars have the sizes, temperatures, and compositions they appear to have.
Some red giants and other large stars are said to have electron degeneracy operating in some part of the star. It means the electrons are so close together that Pauli exclusion modifies the equation of state relating temperature, pressure, and volume.
White dwarfs are thought to be electron degenerate near their centers, also leading to a different equation of state that may describe some part of, or maybe almost all of, the star.
Neutron stars are believed to be neutron degenerate, with yet another equation of state.
Black holes can be regarded as degenerate time and space, which might simply disappear if some principle related to Pauli exclusion did not prevent it. The model is made of ZPE oscillators with virtual particle pairs packed so closely together that they can't vibrate independently. 
Most of these explanations are nearly a hundred years old, and are probably not the best available now, but some of them can be found in old books written by famous pioneers in Astrophysics.
I have some arguments about black holes that can't be resolved here, and don't change the answers to your question.
Since time and space can become degenerate in a high gravity case, time and space should also become degenerate in a case of high acceleration, and maybe in cases of extremely high kinetic energy.  It is discussed in other threads.
  • asked a question related to Theoretical Astrophysics
29 answers
I have observed rotation curve data for a galaxy, and I want to know the best and simplest mathematical model for fitting the galactic rotation curve and finding the dynamical mass of the galaxy.
Your help will be appreciated.
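The simplest starting point treats each observed point as a circular orbit in equilibrium, giving the enclosed dynamical mass M(<r) = v(r)^2 r / G at each radius. A sketch with illustrative numbers (not a fit to any particular dataset):

```python
G = 6.674e-11            # m^3 kg^-1 s^-2
KPC = 3.086e19           # meters per kiloparsec
MSUN = 1.989e30          # kg

def enclosed_mass(r_kpc, v_kms):
    """Dynamical mass (in solar masses) enclosed within radius r,
    assuming circular orbits: M(<r) = v^2 r / G."""
    r = r_kpc * KPC
    v = v_kms * 1e3
    return v * v * r / G / MSUN

# Illustrative flat rotation curve: v ~ 200 km/s out to 20 kpc.
for r_kpc in (5, 10, 20):
    # For a flat curve, the enclosed mass grows linearly with radius.
    print(r_kpc, enclosed_mass(r_kpc, 200.0))
```

A flat rotation curve then implies M(<r) growing linearly with r, which is the usual motivation for adding a dark halo to the visible disk and bulge; more refined models decompose v^2 into bulge, disk, and halo contributions.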
Relevant answer
 Dear Thierry De Mees,
I have downloaded your book "Gravitomagnetism / Coriolis Gravity Theory" and will spend my winter break reading through it. As I recall, in the past, Voyager images led researchers towards the Coriolis effect as an explanation for the Great Red Spot on Jupiter. Thank you.
Terry R. Fisher
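For the original question, the simplest first estimate (a sketch assuming spherical symmetry and a measured circular speed; the Milky-Way-like numbers are illustrative only) is the enclosed dynamical mass M(<r) = v²r/G:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
KPC = 3.086e19       # m per kiloparsec

def dynamical_mass(v_kms, r_kpc):
    """Enclosed dynamical mass (solar masses) implied by a circular speed
    v (km/s) at radius r (kpc), assuming spherical symmetry: M = v^2 r / G."""
    v = v_kms * 1e3
    r = r_kpc * KPC
    return v**2 * r / G / M_SUN

# Illustrative flat rotation curve: v ~ 220 km/s out to r ~ 20 kpc
print(f"M(<20 kpc) ~ {dynamical_mass(220, 20):.2e} M_sun")
```

A flat curve makes M(<r) grow linearly with r, which is the usual argument for a dark halo; fitting real data normally means summing model curves for bulge, disk, and halo components in quadrature.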
  • asked a question related to Theoretical Astrophysics
35 answers
When Einstein's general theory of relativity predicted that light is bent in a gravitational field, Eddington verified this during a solar eclipse. He showed that some stars behind the Sun were visible, apparently shifted from their true positions by almost 2 seconds of arc.
But I've read somewhere that Newton's theory predicted a smaller angle, but not zero. How is this possible? I thought that Newton's gravity only acts on things with nonzero mass, and consequently not on photons.
Can anyone explain ?
Relevant answer
To answer this: "I thought that Newton's gravity only acts on things with non zero mass, and consequently not on photons."  -- We have universality of free fall in Newton's theory (i.e. a weak form of the equivalence principle), so the trajectory of a particle in a gravitational field does not depend on the mass of a particle. That extends to zero or negative mass particles -- they all fall alike in a gravitational field.
The question then depends only on whether the kind of entities you consider interacts gravitationally. As long as you believe with Newton that light is made up of particles, you can (and von Soldner did in 1804) predict light deflection in a gravitational field. The answer is half the value predicted by Einstein in 1915 (but Einstein predicted the same value as von Soldner in 1911, albeit based on the wave picture of light).
If you believe that light is a wave phenomenon and no particles are involved, there is no reason in the Newtonian world view to assume it interacts with a gravitational field, and then Newton's theory would predict zero deflection.
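The two predictions differ by exactly a factor of two, which a quick numerical check makes concrete (a sketch with standard constants, evaluating the deflection at the solar limb):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
R_SUN = 6.957e8      # m
ARCSEC = math.pi / (180 * 3600)   # radians per arcsecond

def deflection_arcsec(M, b, gr=True):
    """Light deflection (arcsec) by mass M (kg) at impact parameter b (m).
    gr=False gives the Newtonian / von Soldner value, half the GR one."""
    alpha = (4 if gr else 2) * G * M / (c**2 * b)
    return alpha / ARCSEC

print(f"GR:        {deflection_arcsec(M_SUN, R_SUN):.2f} arcsec")
print(f"Newtonian: {deflection_arcsec(M_SUN, R_SUN, gr=False):.2f} arcsec")
```

The GR value comes out near the famous 1.75 arcseconds at the Sun's limb; the corpuscular Newtonian value is half of that, which is what the eclipse measurements ruled out.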
  • asked a question related to Theoretical Astrophysics
3 answers
While studying the silicon crystal and its energy-band diagram, I came across electron-hole pairing, and also recombination, which results in a release of energy. The form of this energy release differs in GaAs and InP compared to Si and Ge: in GaAs and InP a photon is released, but in Si and Ge the excess energy is lost to lattice vibrations. Why can Si not release the energy in the form of a photon? What characteristic prevents GaAs from losing the energy in the form of lattice vibrations? I am curious about the cause of this distinctive difference.
Relevant answer
Thank you very much for the answers! So GaAs and InP must have direct band gaps, and Si and Ge must have indirect band gaps.
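For the direct-gap materials, the emitted photon's wavelength follows directly from the gap energy via λ = hc/E_g. A small sketch (the room-temperature gap values are approximate, assumed for illustration):

```python
# Emission wavelength corresponding to a direct band-gap energy: lambda = h*c / E_g
H = 6.62607015e-34   # J s
C = 2.998e8          # m/s
EV = 1.602176634e-19 # J per eV

def gap_wavelength_nm(E_g_eV):
    """Photon wavelength (nm) for a radiative transition across a gap E_g (eV)."""
    return H * C / (E_g_eV * EV) * 1e9

# Approximate room-temperature direct gaps: GaAs ~1.42 eV, InP ~1.34 eV
for name, gap in [("GaAs", 1.42), ("InP", 1.34)]:
    print(f"{name}: E_g = {gap} eV -> lambda ~ {gap_wavelength_nm(gap):.0f} nm")
```

Both land in the near-infrared, which is why these materials are the workhorses of LEDs and laser diodes, while indirect-gap Si needs a phonon to conserve crystal momentum and so radiates very inefficiently.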
  • asked a question related to Theoretical Astrophysics
55 answers
The surface area is well known, and the entropy is A/4 in Planck units. But what is the volume occupied by a static black hole in space? I don't find this discussed anywhere.
Relevant answer
(4/3)·π·Rs³ is a measure of its volume as seen by an outside observer: the volume of a sphere of radius Rs in Euclidean space. Christodoulou and Rovelli wrote about the "largest volume" that can be bounded by the event horizon; see arXiv:1411.2854 [gr-qc]. Even though a Schwarzschild black hole looks the same forever to an outside observer, its volume in their sense actually gets larger with time.

However, there is no unique volume that one can assign to a black hole, because 3-volumes depend on the choice of spacelike hypersurfaces. The size of a BH is related to the horizon radius, but its volume is affected by length contraction and infinite curvature, and in the Riemannian sense it is zero. Moreover, the mass of the BH does not reside inside its radius but is concentrated in the singularity, with infinite density and zero volume; the coordinate radius and the proper radius would be zero.

This is a prediction of GR as an incomplete theory, defining a singularity as a point (!) of infinite density in space. At the same time, a singularity appears as a consequence of a geodesically incomplete spacetime, so it is not a point or set of points in spacetime. The dimension or volume of a singularity does not make sense without a theory of Quantum Gravity.
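For the naive outside-observer volume mentioned above, here is a quick sketch (the 10 M_sun example is an assumed illustration, and this "volume" is coordinate-dependent, as the answer stresses):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

def schwarzschild_radius(M):
    """Horizon radius R_s = 2GM/c^2 (m) for mass M (kg)."""
    return 2 * G * M / c**2

def euclidean_volume(M):
    """Naive 'volume' (4/3) pi R_s^3 assigned by an outside observer (m^3).
    Not a proper 3-volume: it depends on the choice of spatial slicing."""
    return 4/3 * math.pi * schwarzschild_radius(M)**3

M = 10 * M_SUN
print(f"R_s ~ {schwarzschild_radius(M)/1e3:.1f} km, "
      f"V_Euclid ~ {euclidean_volume(M):.2e} m^3")
```

A 10 M_sun hole has a horizon radius of about 30 km; the Christodoulou-Rovelli interior volume, by contrast, grows without bound as the hole ages.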
  • asked a question related to Theoretical Astrophysics
5 answers
It was suggested many years ago by Jacob Bekenstein that the rotation parameter of a black hole should be quantized as in usual quantum mechanics: J=n * hbar.
Relevant answer
This would make sense only if this angular momentum, defined at the boundary, where the Kerr metric is asymptotically flat, can be identified with the angular momentum of a quantum system. This system, however, is a quantum system in flat spacetime, so the fact that its angular momentum is that of a Kerr black hole doesn't seem relevant. In fact it's the other way around, cf.
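For scale, if one takes Bekenstein's J = nħ literally for an astrophysical Kerr hole, the quantum number n is astronomically large, so the spectrum would be utterly unresolvable. A sketch (the 10 M_sun, a* = 0.5 example is an assumed illustration):

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
HBAR = 1.0546e-34    # J s
M_SUN = 1.989e30     # kg

def spin_quantum_number(M, a_star):
    """Bekenstein-style quantum number n = J/hbar for a Kerr hole of mass M (kg)
    and dimensionless spin a* = J c / (G M^2), so that J = a* G M^2 / c."""
    J = a_star * G * M**2 / c
    return J / HBAR

# Assumed example: a 10 M_sun hole spinning at half the extremal value
print(f"n ~ {spin_quantum_number(10 * M_SUN, 0.5):.2e}")
```

With n of order 10^77, neighboring angular-momentum levels differ fractionally by ~10^-77, which is one way to see why quantization of J is invisible for macroscopic black holes.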
  • asked a question related to Theoretical Astrophysics
11 answers
Assume that there exist anti-neutron stars in our galaxy. Would it be possible to distinguish a neutron star from an anti-neutron star just by means of astronomical observation data?
Relevant answer
I think we have no established theory that lets us determine which is matter and which is antimatter just by means of astronomical observation data. Of course, if an annihilation process is in progress, we can get some clues about antimatter from the emission spectrum...
  • asked a question related to Theoretical Astrophysics
8 answers
Some astrophysicists say that gravitational waves (GW) were created during the inflation phase of the universe, when the universe was 10^-36 seconds old and expanded by billions of billions of billions of times.
I understand this phenomenon was really explosive and created very violent effects, maybe including GW, but thinking that the GW would then propagate somewhere else supposes that there was a "somewhere". But the universe just expanded, increased in size; it did not grow inside something else. So these GW had nothing to propagate in, and could not reach us now.
Furthermore, if GW propagate at the speed of light in vacuum, they should be far away from us by now, because our speed is much less than theirs. The only GW we can catch from Earth should have been created much earlier than the inflation.
Is this correct?
If yes, one could then infer that the GW were the source of the inflation mechanism and somehow pushed the limits of the universe to make it grow.
Can anyone explain this? Thank you.
Relevant answer
This is very standard material, available in a number of textbooks.
The inflationary cosmological model was proposed by the particle physicist Alan Guth and others in the early 1980s. Essentially, at high energies, as particles behave as quantum fields, the contribution of a quantum field to the energy density and pressure need not be zero. In the Planck era the lowest energy state of this quantum field might have corresponded to a false vacuum, generating a cosmological constant 10^100 times larger than it is today, and resulting in a super-large repulsive force. Such a repulsive force stretches 3-D space, which doubles in volume every 10^-43 seconds. After 85 such doublings (in volume) the temperature drops from 10^32 K to near absolute zero.
  1. The Inflationary Universe: The Quest for a New Theory of Cosmic Origins, by Alan H. Guth, Perseus Books, 1997 
  2. A. H. Guth, Phys. Rev. D 23, 347 (1981)
  3. A. H. Guth, S.Y. Pi, Phys. Rev. Lett. 49, 1110 (1982)
  4. A. H. Guth, E. J. Weinberg, Nucl. Phys. B212, 321 (1983)
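The arithmetic in the answer above can be checked directly (a sketch using the answer's own figures of 85 volume-doublings, one every 10^-43 s):

```python
# Growth implied by 85 volume-doublings at one doubling per 1e-43 s
doublings = 85
dt = 1e-43                               # s per doubling (figure from the answer)

volume_factor = 2.0 ** doublings         # total volume growth
linear_factor = volume_factor ** (1/3)   # scale factor grows as V^(1/3)
elapsed = doublings * dt                 # total elapsed time

print(f"volume grows by {volume_factor:.2e}, linear scale by {linear_factor:.2e}")
print(f"total elapsed time ~ {elapsed:.1e} s")
```

So the volume grows by a factor of ~4e25 (a linear stretch of a few hundred million) in under 10^-41 s, which conveys how violent the inflationary episode is on these numbers.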
  • asked a question related to Theoretical Astrophysics
30 answers
The 'standard model' of theoretical physics says the basic forces are transmitted between bodies and detectors by particles or fields, at speeds up to the velocity of light. The LIGO gravitational signal is found to be of short duration (the light-crossing time of the emitter, perhaps) and to travel very close to light speed. Very likely these are waves on the gravitational field (Einstein), but they could still be 'gravitons'. Black holes can't emit light waves; likewise they can't emit gravity waves and cannot exert gravitational force on matter outside. So how do the proponents explain two black holes pulling on each other, orbiting in the gradually closing binary system?
Relevant answer
Philadelphia, PA
Dear Wallis,
It seems to me that your question concerns how black holes could radiate. You ask:
Black holes can't emit light waves, likewise they can't emit gravity waves and cannot exert gravitational force on matter outside. So how do the proponents explain two black holes pulling on each other, orbiting in the gradually closing binary system?
---end quotation
The key to the production of gravitational waves is not that massive objects emit gravity waves, but that the acceleration of massive objects does. Black holes, since they constitute massive objects, do curve space-time in their vicinity, however, and other objects follow the lines of curvature. In the LIGO discovery, it is the mutual acceleration of two black holes toward each other, and the consequent loss of angular momentum which is responsible for the emission of gravitational waves.
The oscillation of a charge produces electromagnetic radiation, and the concept of gravitational radiation is analogous: accelerating masses emit gravitational radiation, though in most cases, and with small masses, this is negligible. It takes the acceleration of gigantic masses to produce detectable gravitational radiation.
The hypothesis of gravitons does not really belong to the standard model of particle physics --which does not include gravity. Instead it belongs to proposals to extend the standard model to include gravitation. These proposals are much more speculative than the standard model. While gravitational waves were strongly and confidently predicted as a large-scale implication of Einstein's theory of general relativity, the concept of the graviton, as the supposed carrier of the gravitational force, is much more problematic, since this concerns extremely small scale phenomena, at or near the Planck length, and falls into the domain of proposals for theories of quantum gravity. Gravitational waves are an implication of GR, gravitons not.
In any case, if one follows the proposals for gravitons (Freeman Dyson, e.g., has expressed strong doubts that they could ever be detected), then they would arise as the quantum of the gravitational field in correspondence with gravitational waves. It is not that black holes, un-accelerated, or other massive bodies would be thought to emit gravitons; instead, like the gravitational waves, they would be emitted by accelerating masses.
The two black holes are supposed to "pull on" each other, simply by following the curved space-time in their mutual vicinity. This is in accord with GR. But as they approach each other and there is a loss of angular momentum, the local curvature is repeatedly perturbed so as to produce the expanding gravitational radiation.
H.G. Callaway
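The quadrupole formula makes the point about accelerating masses concrete: the radiated power of a circular binary can be estimated directly. A sketch (the GW150914-like masses and separation are assumed, illustrative numbers):

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

def gw_power(m1, m2, a):
    """Quadrupole-formula GW luminosity (W) of a circular binary:
    P = (32/5) G^4 (m1 m2)^2 (m1 + m2) / (c^5 a^5),
    masses in kg, orbital separation a in m."""
    return 32/5 * G**4 * (m1 * m2)**2 * (m1 + m2) / (c**5 * a**5)

# Assumed example: two 30 M_sun holes ~350 km apart (roughly GW150914 near merger)
P = gw_power(30 * M_SUN, 30 * M_SUN, 3.5e5)
print(f"P ~ {P:.1e} W")
```

The a^-5 dependence shows why only the final moments of a compact-binary inspiral are detectable: the same pair at planetary separations radiates a negligible amount.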
  • asked a question related to Theoretical Astrophysics
1 answer
Relevant answer
No, the two statements don't imply any such suggestion. 
  • asked a question related to Theoretical Astrophysics
28 answers
Every mass should (perhaps) collapse if it is enclosed in a sphere with radius less than its Schwarzschild radius. If the universe was, some time ago, smaller than its Schwarzschild radius, how could it expand beyond this?
Relevant answer
As pointed out earlier, an "intergalactic space density of DQV has a value of Planck density" is only Dr. Sorli's assumption, without any justification or the smallest hint of a physical background. It's a kind of new ether.
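Regarding the question's premise, there is a neat numerical coincidence one can verify: for a universe at critical density, the Schwarzschild radius of the mass inside the Hubble sphere equals the Hubble radius itself. A sketch (H0 = 70 km/s/Mpc is an assumed round value):

```python
import math

G = 6.674e-11                 # m^3 kg^-1 s^-2
c = 2.998e8                   # m/s
H0 = 70 * 1e3 / 3.086e22      # 70 km/s/Mpc converted to 1/s

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density
R_H = c / H0                               # Hubble radius
M = rho_crit * 4/3 * math.pi * R_H**3      # mass inside the Hubble sphere

R_s = 2 * G * M / c**2                     # Schwarzschild radius of that mass
print(f"R_H = {R_H:.3e} m, R_s = {R_s:.3e} m, ratio = {R_s/R_H:.3f}")
```

The ratio is exactly 1 by algebra (the H0 value cancels out), which is why naively applying the Schwarzschild criterion to the whole universe is misleading: the Schwarzschild solution describes a static mass in empty space, not a homogeneous expanding spacetime.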
  • asked a question related to Theoretical Astrophysics
9 answers
In the degenerate interiors of neutron stars, the equation of state is usually just density (and composition) dependent. You can express the pressure as a polytropic law of the form P ∝ ρ^α, where ρ is the density.
A stiff (or hard) equation of state is one where the pressure increases a lot for a given increase in density. Such a material would be harder to compress and offers more support against gravity. Conversely, a soft equation of state produces a smaller increase of pressure for a change in density and is easy to compress.
Suppose we use the Lagrangian density of the nucleon-meson many-body system and solve for the equation of state using different parameter sets like FSUGold, NL3, NL3*, etc. I know every parameter set has a different symmetry energy and compressibility. But what is the physics behind a stiff or soft equation of state?
Relevant answer
I should probably start by saying that the very Physics behind the occurrence of the "stiff" and "soft" equations of state is quite complicated. You can subdivide the EOS into regions: I. Nuclear saturation density; II. Supra-saturation nuclear density. Now for each case the stiffness/softness comes from two main ingredients: (a) EOS of symmetric nuclear matter; (b) EOS of pure neutron matter --> this one can be replaced by the nuclear symmetry energy. Now we can talk about the Physics behind the stiffness/softness for each case:
I. (a) The stiffness/softness is mostly controlled by the incompressibility parameter that can be more or less constrained by nuclear breathing modes. There are no big variations (for example it is well constrained to be about 200 < K < 260 MeV)
I. (b) The stiffness/softness is due to variation of the nuclear symmetry energy. The large uncertainty comes from our poor knowledge of the isovector channel of the nuclear interaction. The physics of exotic nuclei close to the dripline contains this information, but such data are not yet readily available. This is a hot topic of research.
II. (a) The stiffness/softness is controlled by the skewness parameter (third derivative of energy per nucleon). The physics of finite nuclei is not sensitive to this variation, and therefore one can have models giving ranges of the EOS with maximum NS mass from 2 to 3 solar masses.
II. (b) The density dependence of the symmetry energy at suprasaturation is basically unknown. That is why we have huge variations in the EOSs.
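The stiff/soft distinction in the question can be made concrete with the polytropic form P ∝ ρ^γ (the exponents below are illustrative, not tied to the specific parameter sets named above):

```python
# Compare pressure growth for a stiff vs soft polytrope P = K * rho**gamma
def pressure_ratio(gamma, factor=2.0):
    """Factor by which P grows when the density grows by `factor`, for P ~ rho^gamma."""
    return factor ** gamma

for gamma, label in [(4/3, "soft, e.g. a relativistic degenerate gas"),
                     (2.0, "stiff, as in some interacting-nucleon models")]:
    print(f"gamma = {gamma:.2f} ({label}): "
          f"doubling rho multiplies P by {pressure_ratio(gamma):.2f}")
```

A stiffer polytrope gains more pressure per unit of compression, resists gravity better, and therefore supports a larger maximum neutron-star mass, which is the physical content of "stiff" vs "soft".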
  • asked a question related to Theoretical Astrophysics
5 answers
I am interested by some connections between cosmology, gravity etc with life sciences, biophysics or whatever. I will be appreciative if you can give me some initial starting points with articles, textbooks etc dedicated to my interests.
Relevant answer
See arXiv:1311.6328 [quant-ph].
  • asked a question related to Theoretical Astrophysics
1 answer
We know the equilibrium conditions for a neutron star. But I do not know them in the case of a hyperon star. Are they the same or not?
Relevant answer
please have a look at the paper by
Roles of Hyperons in Neutron Stars
Shmuel Balberg, Itamar Lichtenstadt
The Racah Institute of Physics, The Hebrew University, Jerusalem 91904, Israel
Gregory. B. Cook
Center for Radiophysics and Space Research, Space Sciences Building,
Cornell University, Ithaca, NY 14853
some portions are copied for you below.
We examine the roles the presence of hyperons in the cores of neutron stars may
play in determining global properties of these stars. The study is based on estimates
that hyperons appear in neutron star matter at about twice the nuclear saturation
density, and emphasis is placed on effects that can be attributed to the general multispecies
composition of the matter, hence being only weakly dependent on the specific
modeling of strong interactions. Our analysis indicates that hyperon formation not only
softens the equation of state but also severely constrains its values at high densities.
Correspondingly, the valid range for the maximum neutron star mass is limited to
about 1.5 − 1.8 M⊙, which is a much narrower range than available when hyperon
formation is ignored. Effects concerning neutron star radii and rotational evolution are
suggested, and we demonstrate that the effect of hyperons on the equation of state
allows a reconciliation of observed pulsar glitches with a low neutron star maximum
mass. We discuss the effects hyperons may have on neutron star cooling rates, including
recent results which indicate that hyperons may also couple to a superfluid state in high
density matter. We compare nuclear matter to matter with hyperons and show that
once hyperons accumulate in neutron star matter they reduce the likelihood of a meson
condensate, but increase the susceptibility to baryon deconfinement, which could result
in a mixed baryon-quark matter phase.
Subject headings: stars: neutron — elementary particles — equation of state — stars:
1. Introduction
The existence of stable matter at supernuclear densities is unique to neutron stars. Unlike
all other physical systems in nature, where the baryonic component appears in the form of atomic
nuclei, matter in the cores of neutron stars is expected to be a homogeneous mixture of hadrons and
leptons. As a result the macroscopic features of neutron stars, including some observable quantities,
have the potential to illuminate the physics of supernuclear densities. In this sense, neutron stars
serve as cosmological laboratories for hadronic physics. A specific feature of supernuclear densities
is the possibility for new hadronic degrees of freedom to appear, in addition to neutrons and protons.
One such possible degree of freedom is the formation of hyperons - strange baryons - which is the
main subject of the present work. Other possible degrees of freedom include meson condensation
and a deconfined quark phase.
While hyperons are unstable under terrestrial conditions and decay into nucleons through the
weak interaction, the equilibrium conditions in neutron stars can make the reverse process, i.e.,
the conversion of nucleons into hyperons, energetically favorable. The appearance of hyperons in
neutron stars was first suggested by Ambartsumyan & Saakyan (1960) and has since been examined
in many works. Earlier calculations include the works of Pandharipande (1971b), Bethe & Johnson
(1974) and Moszkowski (1974), which were performed by describing the nuclear force in Schrödinger
theory. In recent years, studies of high density matter with hyperons have been performed mainly
in the framework of field theoretical models (Glendenning 1985, Weber & Weigel 1989, Knorren,
Prakash & Ellis 1995, Schaffner & Mishustin 1996, Huber at al. 1997). For a review, see Glendenning
(1996) and Prakash et al. (1997). It was also recently demonstrated that good agreement with these
models can be attained with an effective potential model (Balberg & Gal 1997).
These recent works share a wide consensus that hyperons should appear in neutron star (cold,
beta-equilibrated, neutrino-free) matter at a density of about twice the nuclear saturation density.
This consensus is attributed to the fact that all these more modern works base their estimates of
hyperon-nucleon and hyperon-hyperon interactions on the experimental constraints inferred from
hypernuclei. The fundamental qualitative result from hypernuclei experiments is that hyperon
related interactions are similar in character and in order of magnitude to nucleon-nucleon interactions.
In a broader sense, this result indicates that in high density matter, the differences between
hyperons and nucleons will be less significant than for free particles.
The aim of the present work is to examine what roles the presence of hyperons in the cores
of neutron stars may play in determining the global properties of these stars. We place special
emphasis on effects which can be attributed to the multi-species composition of the matter while
being only weakly dependent on the details of the model used to describe the underlying strong interactions.
We begin our survey in § 2 with a brief summary of the equilibrium conditions which determine
the formation and abundance of hyperon species in neutron star cores. A review of the widely
accepted results regarding hyperon formation in neutron stars is given in § 3. We devote § 4 to
an examination of the effect of hyperon formation on the equation of state of dense matter, and
the corresponding effects on the star’s global properties: maximum mass, mass-radius correlations,
rotation limits, and crustal sizes. In § 5 we discuss neutron star cooling rates, where hyperons
might play a decisive role. A discussion of the effects of hyperons on phase transitions which may
occur in high density matter is given in § 6. Conclusions and discussion are offered in § 7.
2. Equilibrium Conditions for Hyperon Formation Neutron Stars
In the following discussion we assume that the cores of neutron stars are composed of a mixture
of baryons and leptons in full beta equilibrium (thus ignoring possible meson condensation and a
deconfined quark phase - these issues will be picked up again in § 6). The procedure for solving
the equilibrium composition of such matter has been described in many works (see e.g., Glendenning
(1996) and Prakash et al. (1997) and references therein), and in essence requires chemical
equilibrium of all weak processes of the type
B1 → B2 + ℓ + ν̄ℓ ;  B2 + ℓ → B1 + νℓ , (1)
where B1 and B2 are baryons, ℓ is a lepton (electron or muon), and ν (¯ν) is its corresponding
neutrino (anti-neutrino). Charge conservation is implied in all processes, determining the legitimate
combinations of baryons which may couple together in such reactions.
Imposing all the conditions for chemical equilibrium yields the ground state composition of
beta-equilibrated high density matter. The equilibrium composition of such matter at any given
baryon density, ρB, is described by the relative fraction of each species of baryons, xBi ≡ ρBi/ρB, and leptons, xℓ ≡ ρℓ/ρB.
Evolved neutron stars can be assumed to be transparent to neutrinos on any relevant time scale
so that neutrinos are absent and µν = µν¯ = 0. All equilibrium conditions may then be summarized
by a single generic equation
µi = µn − qi µe , (2)
where µi and qi are, respectively, the chemical potential and electric charge of baryon species i,
µn is the neutron chemical potential, and µe is the electron chemical potential. Note that in the
absence of neutrinos, equilibrium requires µe = µµ. The neutron and electron chemical potentials are
constrained by the requirements of a constant total baryon number and electric charge neutrality,
Σi xBi = 1 ;  Σi qi xBi + Σℓ qℓ xℓ = 0 . (3)
The temperature range of evolved neutron stars is typically much lower than the relevant
chemical potentials of baryons and leptons at supernuclear densities. Neutron star matter is thus
commonly approximated as having zero temperature, so that the equilibrium composition and
other thermodynamic properties depend on density alone. Solving the equilibrium compositions
for a given equation of state (EOS) at various baryon densities yields the energy density and pressure
which enable the calculation of global neutron star properties.
3. Hyperon Formation in Neutron Stars
In this section we review the principal results of recent studies regarding hyperon formation in
neutron stars. The masses, along with the strangeness and isospin, of nucleons and hyperons are
given in Tab. 1. The electric charge and isospin combine in determining the exact conditions for
each hyperon species to appear in the matter. Since nuclear matter has an excess of positive charge
and negative isospin, negative charge and positive isospin are favorable along with a lower mass
for hyperon formation, and it is generally a combination of the three that determines the baryon
density at which each hyperon species appears. A quantitative examination requires, of course,
modeling of high density interactions. We begin with a brief discussion of the current experimental
and theoretical basis used in recent studies that have examined hyperon formation in neutron stars.
3.1. Experimental and Theoretical Background
The properties of high density matter chiefly depend on the nature of the strong interactions.
Quantitative analysis of the composition and physical state of neutron star matter are currently
complicated by the large uncertainties regarding strong interactions, both in terms of the difficulties
in their theoretical description and from the limited relevant experimental data. None the less,
progress in both experiment and theory have provided the basis for several recent studies of the
composition of high density matter, and in particular suggests it will include various hyperon species.
Experimental data from nuclei set some constraints on various physical quantities of nuclear
matter at the nuclear saturation density, ρ0 = 0.16 fm⁻³. Important quantities are the bulk binding
energy, the symmetry energy of non-symmetric matter (i.e., different numbers of neutrons and
protons), the nucleon effective mass in a nuclear medium, and a reasonable constraint on the compression
modulus of symmetric nuclear matter. However, at present, little can be deduced regarding
properties of matter at higher densities. Heavy ion collisions have been able to provide some information
regarding higher density nuclear matter, but the extrapolation of these experiments to
neutron star matter is questionable since they deal with hot non-equilibrated matter.
Relevant data for hyperon-nucleon and hyperon-hyperon interactions is more scarce, and relies
mainly on hypernuclei experiments (for a review of hypernuclei experiments, see Chrien & Dover
(1989), Gibson & Hungerford (1995)). In these experiments a single hyperon is formed in a nucleus,
and its binding energy is deduced from the energetics of the reaction (typically meson scattering
such as X(K−, π−)X).
There exists a large body of data for single Λ-hypernuclei, which clearly shows bound states of
a Λ hyperon in a nuclear medium. Millener, Dover & Gal (1988) used the nuclear mass dependence
of Λ levels in hypernuclei to derive the density dependence of the binding energy of a Λ hyperon in nuclear matter; at density ρ0 it is about −28 MeV, which is about one third of the equivalent value for a nucleon
in symmetric nuclear matter. The data from Σ-hypernuclei are more problematic (see below). A
few emulsion events that have been attributed to Ξ-hypernuclei seem to suggest an attractive Ξ
potential in a nuclear medium, somewhat weaker than the Λ−nuclear matter potential.
A few measured events have been attributed to the formation of double Λ hypernuclei, where two
Λ’s have been captured in a single nucleus. The decay of these hypernuclei suggests an attractive Λ−
Λ interaction potential of 4−5 MeV (Bodmer & Usmani 1987), somewhat less than the corresponding
nucleon-nucleon value of 6−7 MeV. This value of the Λ−Λ interaction is often used as the baseline
for assuming a common hyperon-hyperon potential, corresponding to a well depth for a single
hyperon in isospin-symmetric hyperon matter of -40 MeV. While this value should be taken with
a large uncertainty, the typical results regarding hyperon formation in neutron stars are generally
insensitive to the exact choice for the hyperon-hyperon interaction, as discussed below.
We emphasize again that the experimental data is far from comprehensive, and great uncertainties
still remain in the modeling of baryonic interactions. This is especially true regarding densities
greater than ρ0, where the importance of many body forces increases. Three body interactions are
used in some nuclear matter models (Wiringa, Fiks & Fabrocini 1988, Akmal, Pandharipande &
Ravenhall 1998). Many-body forces for hyperons are currently difficult to constrain from experiment
(Bodmer & Usmani 1988), although some attempts have been made on the basis of light
hypernuclei (Gibson & Hungerford 1995). Indeed, field theoretical models include a repulsive component
in the two-body interactions through the exchange of vector mesons, rather than introduce
explicit many body terms. We note that the effective equation used here is also compatible with
theoretical estimates of ΛNN forces through the repulsive terms it includes (Millener, et al. 1988).
In spite of these significant uncertainties, the qualitative conclusion that can be drawn from
hypernuclei is that hyperon-related interactions are similar both in character and in order of magnitude
to the nucleon-nucleon interactions. Thus nuclear matter models can be reasonably generalized
to include hyperons as well. In recent years this has been performed mainly with relativistic theoretical
field models, where the meson fields are explicitly included in an effective Lagrangian. A
commonly used approximation is the relativistic mean field (RMF) model following Serot & Walecka
(1980), and implemented first for multi-species matter by Glendenning (1985), and more recently
by Knorren et al. (1995) and Schaffner & Mishustin (1996) (see the recent review by Glendenning
(1996)). A related approach is the relativistic Hartree-Fock (RHF) method that is solved with
relativistic Green’s functions (Weber & Weigel 1989, Huber at al. 1997). Balberg & Gal (1997)
demonstrated that the quantitative results of field theoretical calculations can be reproduced by
an effective potential model.
The results of these works provide a wide consensus regarding the principal features of hyperon
formation in neutron star matter. This consensus is a direct consequence of incorporating
experimental data on hypernuclei (Balberg & Gal 1997). These principal features are discussed below.
3.2. Estimates for Hyperon Formation in Neutron Stars
Hyperons can form in neutron star cores when the nucleon chemical potentials grow large
enough to compensate for the mass differences between nucleons and hyperons, while the threshold
for the appearance of the hyperons is tuned by their interactions. The general trend in recent
studies of neutron star matter is that hyperons begin to appear at a density of about ρB = 2ρ0,
and that by ρB ≈ 3ρ0 hyperons sustain a significant fraction of the total baryon population. An
example of the estimates for hyperon formation in neutron star matter, as found in many works, is
displayed in Fig. 1. The equilibrium compositions - relative particle fractions xi - are plotted as a
function of the baryon density, ρB. These compositions were calculated with case 2 of the effective
equation of state detailed in the appendix, which is similar to model δ = γ =
of Balberg & Gal
(1997). Figure 1a presents the equilibrium compositions for the “classic” case of nuclear matter,
nuclear matter. In particular, they estimate the potential depth of a Λ hyperon in nuclear matter
  • asked a question related to Theoretical Astrophysics
3 answers
Synchrotron Self-Compton (SSC) Spectrum
Relevant answer
and also see:
Searching for Dark Matter Annihilation in M87
Sheetal Saxena, Dominik Elsässer, Michael Rüger, Alexander Summa, Karl Mannheim
Sep 20 2011 astro-ph.HE arXiv:1109.3810v1
  • asked a question related to Theoretical Astrophysics
14 answers
When a collapsing star explodes (a supernova), the sudden ejection of massive material around the central core disturbs space-time, leading to the emission of gravitational waves. But what will happen if the collapse goes on until a black hole is formed, without any explosion? Will there be an emission of gravitational waves due to the continuous growth in curvature because of the growing mass?
Relevant answer
No gravitational waves are radiated by a spherically symmetric system, while an axisymmetric situation can radiate.
  • asked a question related to Theoretical Astrophysics
5 answers
I am interested in finding closure relations for the BBGKY-equations and came across an old article ``On the integration of the BBGKY equations for the development of strongly nonlinear clustering in an expanding universe'' by M DAVIS, PJE PEEBLES, Astrophysical Journal Supplement Series, 1977. I am used to closures where the three-particle correlations are either neglected or are approximated by Kirkwood's superposition principle, which appears ad hoc to me and is inaccurate for certain applications.
In the article by Davis and Peebles, something I haven't seen before was used, a scaling solution where n-particle correlation functions are approximated by a simple power law. It looks like this only works if the particles are far apart from each other and if there is no characteristic length scale in the particle interactions, which is the case for gravity. This made me curious: How useful did this approach become in Astrophysics simulations, is it still used or was this a dead end? If not, has there been any recent improvements on this approach?
Has such a scaling approach to the BBGKY-hierarchy been used in other areas of physics, like plasma physics where there is also a power law (Coulomb) interaction?
Relevant answer
Thank you for your answer. If your truth would be embarrassing for me, it would also embarrass such icons of Soviet physics as Bogolubov and Landau, and I'd be totally fine with being part of that axis of shame :-) I agree that one has to be careful with ensemble averages when talking about inhomogeneous systems, for example when describing soliton-like waves in active matter (see my Phys. Rev. E from 2013, and the discussion of the Ohta paper in 2014). However, I disagree that hierarchy equations like the BBGKY hierarchy are useless in general. In your own papers, you also use such hierarchies of equations for multi-point correlation functions.
In my opinion, for the ensemble average one just has to imagine that only the proper microscopic members that describe a particular inhomogeneous state are part of the ensemble. For example, for a single soliton, only those microscopic states should be included that have the same density and momentum profile (coarse-grained over small cells in phase space) at the same time. Another argument for not dismissing BBGKY-like hierarchies is to consider the extreme case of an N-particle ensemble distribution that consists of products of delta functions at time t. That means all ensemble members have particles at the same positions with the same momenta, and the BBGKY equations would be equivalent to the evolution of a particular real system with those initial conditions.
A more pragmatic argument: I recently used the first two BBGKY equations (adapted to my system) for self-propelled particles in a weakly correlated limit where three-particle correlations are negligible compared to two-particle ones, and I found perfect quantitative agreement with particle-based direct simulations. I did not have to explicitly select the proper members of the ensemble; I used neither thermodynamics nor Gibbs distributions; I just solved the time-dependent hierarchy equations until a stationary state was reached.
Thus, if you have an alternative theory, I suggest testing it quantitatively with molecular dynamics simulations and seeing how it does. In contrast to experiments, simulations can be adjusted to mirror all the approximations made in a theory and thus allow ``comparing apples with apples''. In my opinion, the more serious problem with kinetic theory is the closure of the hierarchy. You mention in your papers that you somehow close your equations at the two-particle level, but I couldn't find any details. For self-propelled particles, there are parameter ranges where the three-particle correlations are stronger than the two-particle ones, the four-particle ones are even stronger than the three-particle ones, and so on. Thus, in this strongly correlated regime, some type of generating function should be found that contains correlations to all orders, at least approximately, or a systematic closure for all multi-particle correlations is needed. In your papers, you mention discrepancies between experiment and theory. Could those also be due to neglecting these higher multi-particle correlations or to using an inappropriate closure?
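For concreteness, here is a toy sketch (my own illustration, not the Davis & Peebles calculation itself) of the two closures discussed in this thread: the Kirkwood superposition approximation for the three-point function, combined with the scale-free power-law ansatz for the pair correlation that makes the hierarchy tractable for gravity.

```python
def g2_powerlaw(r, r0=1.0, gamma=1.8):
    """Scale-free pair correlation g2(r) = (r/r0)^(-gamma), as in gravitational clustering."""
    return (r / r0) ** (-gamma)

def g3_kirkwood(r12, r13, r23, g2):
    """Kirkwood superposition closure: g3 ~= g2(r12) * g2(r13) * g2(r23)."""
    return g2(r12) * g2(r13) * g2(r23)

# Scale invariance: rescaling all separations by s multiplies g3 by s**(-3*gamma),
# with no characteristic length entering -- the property exploited for gravity,
# where the interaction itself is a pure power law.
s = 2.0
base = g3_kirkwood(1.0, 2.0, 3.0, g2_powerlaw)
scaled = g3_kirkwood(s * 1.0, s * 2.0, s * 3.0, g2_powerlaw)
ratio = scaled / base  # equals s**(-3 * gamma) for any triangle shape
```

The sketch only illustrates why the ansatz demands a scale-free interaction: introducing any characteristic length into g2 (a core radius, a screening length as in Coulomb plasmas) breaks the simple rescaling.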
  • asked a question related to Theoretical Astrophysics
10 answers
I have seen different estimates of the photon diffusion time in different papers (30,000 to a million years), based on many different mean free paths. I want to know whether these results are just rough speculation or whether there is a way to verify them.
I would also like to know what the currently accepted diffusion time is. It would be helpful if you could give me links to some important research papers on this subject.
Relevant answer
The main source of uncertainty in the photon diffusion time is the opacity in different layers of the Sun, as inferred from various solar models. Moreover, this opacity changes as the Sun evolves, so the photon diffusion time is not constant. Convection in the solar interior further complicates the issue.
A simple random walk calculation combined with a simple density gradient model for the sun is a relatively clean calculation.
is a good source for this and is the source for the 170,000 yr timescale.
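The random-walk estimate mentioned above can be sketched in a few lines (my own back-of-the-envelope version; the assumed mean free path is what drives the spread in published numbers). A photon taking random steps of length l needs roughly N ~ (R/l)^2 steps to cross a radius R, so the escape time is t ~ N l / c = R^2 / (l c).

```python
C = 2.998e8      # speed of light, m/s
R_SUN = 6.96e8   # solar radius, m
YEAR = 3.156e7   # seconds per year

def diffusion_time_years(mean_free_path_m):
    """Random-walk photon escape time t = R^2 / (l c), for a uniform mean free path l."""
    return R_SUN ** 2 / (mean_free_path_m * C) / YEAR

# The mean free path in the solar interior is of order millimetres but varies
# strongly with depth; sweeping it over a plausible range reproduces the spread
# of published estimates (thousands to hundreds of thousands of years).
estimates = {l: diffusion_time_years(l) for l in (1e-4, 1e-3, 1e-2)}
```

A uniform mean free path is of course the crudest possible assumption; the more careful calculations integrate the opacity over a density profile, which is exactly where the model dependence enters.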
  • asked a question related to Theoretical Astrophysics
42 answers
Spacetime curvature creates a spacetime "seeing" effect, analogous to astronomical seeing. It is caused by microlensing and weak lensing, which become problematic from an ICRF stability viewpoint as astrometric VLBI and optical measurements (e.g. GAIA) move into the tens-of-microarcsecond accuracy regime. This spacetime seeing effect will create a noise floor that limits ICRF stability through apparent source position distortions and amplitude variations. It will of course have an impact on the high accuracy requirements of the Global Geodetic Observing System (GGOS), which has the objectives of 0.1 mm stability and 1 mm accuracy for the ITRF. The distribution of dark matter complicates the problem.
Relevant answer
Dear Ludwig,
to be honest, your work is beyond my scientific horizon, but thank you for sharing. I'm going to try to understand it.
  • asked a question related to Theoretical Astrophysics
7 answers
Some distant objects show a strong redshift due to expansion. If an object's motion is strongly affected by a nearby intense gravitational field, how can we distinguish the redshift from expansion from the redshift or blueshift produced by the object's fast orbital motion near that field (neglecting the gravitational redshift itself)? Are the equations of special relativity sufficient to describe this type of motion?
Relevant answer
Redshift can also originate in the following cases.
1) The distance between a source and an observer does not change, but the source moves with a velocity comparable to the speed of light. A redshift then arises in the source spectrum; it is interpreted as a result of the relativistic slowing of time at the source relative to the observer (the transverse Doppler effect).
2) A redshift arises if the observer is located in a region where the gravitational field is weaker than near the source. In classical terms, photons climb out of the gravitational potential and thereby lose energy. Redshifts in the spectra of white dwarfs are explained this way.
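Case (1) can be made quantitative with the special-relativistic Doppler formula (my own illustration): 1 + z = gamma (1 - beta cos theta), where theta is the angle between the source velocity and the direction from source to observer. At theta = 90 degrees the classical Doppler term vanishes and only time dilation remains, 1 + z = gamma, which is the transverse Doppler redshift.

```python
import math

def doppler_z(beta, theta):
    """Redshift of a source moving at speed beta*c, at angle theta to the
    source-to-observer direction: 1 + z = gamma * (1 - beta*cos(theta))."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return gamma * (1.0 - beta * math.cos(theta)) - 1.0

beta = 0.6  # gamma = 1.25
z_transverse = doppler_z(beta, math.pi / 2)  # pure time dilation: z = gamma - 1
z_receding = doppler_z(beta, math.pi)        # motion away: stronger redshift
z_approaching = doppler_z(beta, 0.0)         # motion toward: blueshift (z < 0)
```

For an object orbiting near a compact mass, theta sweeps through the orbit, so the line shifts periodically between red and blue, whereas the cosmological redshift is steady; that periodicity is one practical way to separate the two contributions.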
  • asked a question related to Theoretical Astrophysics
11 answers
A CME can lead to blackouts on Earth due to disturbances in Earth's magnetic field.
Relevant answer
Q1: Probably not.
Coronal mass ejections will perturb the net magnetic field at the Earth's surface, but we regularly expose ourselves and other creatures to rapidly varying magnetic fields (stand at an electrically powered tram station...) and more slowly changing fields (have an MRI scan).
Dedicated attempts to induce nerve currents have been made.
As I am sure you know.
Ham CLG, Engels JML, Van de Weil GT et al. Peripheral Nerve stimulation during MRI: Effects of high gradient amplitudes and switching rates. Journal of Magnetic Resonance Imaging 1997; 7(5): 933-937.
But the field strengths and magnitude of dB/dt were far larger than you would encounter from a CME-induced perturbation.
  • asked a question related to Theoretical Astrophysics
148 answers
Clearly the observable universe is big compared to the human, let alone the (sub)atomic, scale. However, we also know that the universe is very flat, which in a Friedmann-Robertson-Walker universe means that the mass density is (close to) critical. But why can't the radius of curvature (positive or negative) of the universe be, say, 10 or 20 orders of magnitude larger than the size of the observable universe, with the apparent flatness merely a consequence of our limited field of view a mere 13.7 billion years after the Big Bang?
Relevant answer
Dear Rogier,
a short answer to your question is: Yes, the Universe can be curved at super-large scales. A more detailed answer involves at least two aspects.
What do we know from observations?
From the observational point of view, we know from a detailed analysis of the temperature fluctuations of the cosmic microwave background, combined with a local measurement of the Hubble expansion rate (the precise value does not matter too much here), that the curvature scale of the observable Universe is at least one to two orders of magnitude larger than the Hubble scale (the characteristic scale of the observable Universe); i.e., curvature could still be a few per cent effect. You ask if there could be curvature at scales 10 to 20 orders of magnitude larger than the observable Universe? The answer is yes. This could be the case.
What do we believe based on theoretical ideas?
Our standard model of cosmology relies on the idea of cosmological inflation.
Inflation predicts that the observable part of the Universe is very close to being flat. It does not predict that the curvature on scales much larger than the observable domain is exactly zero. Different models of inflation make different predictions about the super-large scales. Thus the answer is again: yes, the Universe could be curved at those super-large scales. At the observable scales we expect a departure from flatness, but we expect it to be just a 0.1 per mille effect (a factor of 100 smaller than the current observational constraint).
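The relation between the curvature density parameter and the curvature radius can be written down directly (a small numerical companion sketch of my own, with a round assumed value of H0): R_curv = (c / H0) / sqrt(|Omega_k|), so a bound of a few per mille on |Omega_k| forces the curvature radius to exceed the Hubble radius by one to two orders of magnitude, while saying nothing about far larger scales.

```python
import math

C_KM_S = 2.998e5  # speed of light, km/s
H0 = 70.0         # Hubble constant, km/s/Mpc (assumed round value)

def curvature_radius_mpc(omega_k):
    """Comoving curvature radius R_curv = (c/H0) / sqrt(|Omega_k|), in Mpc."""
    return (C_KM_S / H0) / math.sqrt(abs(omega_k))

hubble_radius = C_KM_S / H0               # ~4300 Mpc, the observable scale
r_bound = curvature_radius_mpc(5e-3)      # a few-per-mille observational bound
r_inflation = curvature_radius_mpc(1e-4)  # the ~0.1 per mille inflationary expectation
```

Note that the formula only constrains curvature from within our horizon; a curvature radius 10 or 20 orders of magnitude larger than the observable universe maps to an Omega_k far below anything measurable, which is exactly the point of the answer above.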
  • asked a question related to Theoretical Astrophysics
18 answers
A preference for spiral galaxies in one sector of the sky to be left-handed or right-handed spirals has indicated a parity violating asymmetry in the overall universe and a preferred axis.
Could the large-scale magnetic field be related to the predominant left-handed neutrinos in our cosmic sphere as well?
Relevant answer
As far as I know, the handedness of spiral galaxies is determined by the citizen-science project Galaxy Zoo. Michael Longo et al. found that there seems to be an excess of left-handed spiral galaxies, which might be a sign of a rotating universe. But in the meantime it turned out that there is a psychological bias toward seeing left-handed spirals.
Further reading:
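The claimed excess can be checked with a plain binomial significance test (a sketch of the statistics only, not of the Galaxy Zoo pipeline; the counts below are made-up placeholders). Under parity symmetry, left- and right-handed spirals are equally likely, so the z-score of an observed excess says whether it could be a fluke, or, as it turned out here, a systematic classification bias masquerading as cosmology.

```python
import math

def handedness_zscore(n_left, n_right):
    """Z-score of the left-handed excess under a fair-coin (parity-symmetric)
    null hypothesis: z = (n_left - n/2) / sqrt(n/4)."""
    n = n_left + n_right
    return (n_left - n / 2.0) / math.sqrt(n / 4.0)

# Hypothetical counts: a 2% excess in 40,000 classifications looks like 4 sigma...
z_big = handedness_zscore(20400, 19600)
# ...which is precisely why an unmodelled human bias is so dangerous: the
# statistics are impeccable while the input labels are not.
z_small = handedness_zscore(5100, 4900)
```

Mirroring a random subset of the images before classification, which is what resolved the Galaxy Zoo case, turns the bias itself into something this same test can measure.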
  • asked a question related to Theoretical Astrophysics
10 answers
I am looking for a theory of the diffusion of charged particles in isotropic EM radiation (including the fully relativistic case). Landau & Lifshitz in their "Classical Theory of Fields" discuss the averaged damping force on a particle due to EM radiation (and of course there must be similar calculations in other sources), but particle diffusion by radiation (in particular lateral diffusion) is not covered there. The subject must be well studied (e.g. in astrophysics); I would appreciate a pointer to a good source.
  • asked a question related to Theoretical Astrophysics
25 answers
This is not my area of expertise, but I just stumbled across a paper on ``dark entropy'' (arXiv:0806.1277v2) and got confused by the ``holographic principle''. It sounds like it is interpreted and used as if the information contained in a volume is encoded on the enclosing surface area. I might be wrong, but I vaguely remember that the holographic principle says that the information one finds on the surface at time T is just the information from inside the volume at earlier times T' and positions X', where the difference T-T' is just the travel time of a wave going from that spot to the surface. If this is true, one would not find the entire information a volume contains at a given time written on its walls, just the information from specific space-time points. This sounds kind of trivial; am I missing something here? Is the way the holographic principle is used in this paper consistent with this picture?
Relevant answer
The total entropy of the universe is not constant and there is no heat exchange with the "outside" that we know of. The entropy of the universe in the distant past was much lower and has been increasing ever since. The second law of thermodynamics states that the entropy of an isolated system always increases. Isolated means no heat flow in or out. In fact, many physicists would say that the "arrow of time" in the universe comes from the increasing entropy.
Also, in the paper, we are clearly saying that the cosmological term (which Einstein called the cosmological "constant") is not constant, nor are Einstein's equations "accepted", they are generalized. That is the whole point of the paper.
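Returning to the holographic principle raised in the question: the non-trivial content of the "area, not volume" statement can be shown numerically (my own sketch, independent of the paper under discussion). The Bekenstein-Hawking bound S = A / (4 l_p^2), in units of k_B, scales as R^2, whereas a naive extensive entropy of matter filling the region scales as R^3, so beyond some radius the surface bound is the tighter one.

```python
import math

L_PLANCK = 1.616e-35  # Planck length, m

def holographic_bound(radius_m):
    """Maximum entropy (in units of k_B) of a sphere, from its bounding area:
    S = A / (4 * l_p^2) with A = 4 * pi * R^2."""
    area = 4.0 * math.pi * radius_m ** 2
    return area / (4.0 * L_PLANCK ** 2)

# Doubling the radius quadruples the bound (area scaling); it does not
# multiply it by eight (volume scaling) -- the counterintuitive core
# of the holographic principle.
ratio = holographic_bound(2.0) / holographic_bound(1.0)
bound_1m = holographic_bound(1.0)  # already an enormous number, ~10^70 k_B
```

This is the sense in which the principle goes beyond the light-travel-time bookkeeping described in the question: it caps the total information at a given time by the area alone.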