Questions related to Theoretical Astrophysics
I am pleased to submit “The Essence of ‘E’: Unveiling the Infinitely Infinite” for your consideration. Enclosed, you will find a comprehensive exploration of the enigmatic concept of “E,” a cosmic force that transcends the boundaries of finite and infinite existence.
This manuscript represents a labor of passion and dedication, offering a unique perspective on the role of “E” in the universe. From its profound cosmic order to its paradoxical nature of being both infinitesimal and infinitely powerful, this work delves deep into the heart of a concept that defies human comprehension.
The content is structured meticulously, with an abstract that provides a concise overview of the manuscript’s scope, an engaging introduction that draws the reader into the subject matter, and detailed sections that explore the mass of “E” and the cataclysmic events it undergoes. The manuscript concludes with a thought-provoking summary of our journey into the infinitely infinite.
I believe this manuscript would make a valuable addition to [Company/Organization Name]’s collection of publications, given its unique perspective and the depth of research invested in it. It has the potential to appeal to a wide audience interested in cosmology, astrophysics, and the mysteries of the universe.
I would be delighted to discuss any further steps or provide additional information as needed. I eagerly await your response.
The question is loaded and really doesn't fit into the space for questions.
My theory proposes a candidate for Dark Matter. The candidate would be antimatter traveling in a Lagging Hypersphere (a true parallel Universe made of antimatter).
HU allows for the derivation of asymmetric gravitational attraction. It is conceivable that in early epochs, when the LH was denser, antimatter Black Holes or Stars could align themselves with matter stars in this Hypersphere. After reaching a critical mass, tunneling could elicit annihilation and a Gamma Ray Burst.
This mechanism would be consistent with the full conversion of a star's mass into gamma rays, as is observed in GRBs.
Antimatter in a LH is also consistent with Gravitational Lensing in regions where matter is not visible.
A subtle but important aspect of my theory is that those hyperspheres are traveling at the speed of light along a radial direction.
Consider the two propositions of the Kalam cosmological argument:
1. Everything that begins to exist has a cause.
2. The universe began to exist.
Both are based on the assumption of full knowledge of whatever exists in the world, which is obviously not true. Even Big Bang cosmology relies on a primordial seed whose origin and characteristics science knows nothing about.
The attached article proposes that such deductive arguments should not be allowed in philosophy and science, as they are the tell-tale sign that humans wrongly presuppose omniscience.
Your comments are much appreciated.
While studying the longitudinal momentum distribution of a halo nucleus during a reaction in the lab frame, I want to transform this distribution into the centre-of-mass (CM) frame. Is it possible to change it directly? What factor do I need to multiply or divide by so that the momentum distribution is expressed in the CM frame? Any suggestion, method, or reference?
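Since the post asks for a concrete recipe, here is a minimal sketch of the usual approach (with assumed, illustrative mass and boost velocity): there is no single multiplicative factor, because the boost p'_z = γ(p_z − βE) is nonlinear in p_z through E; instead one boosts event by event and re-histograms, which picks up the Jacobian automatically.

```python
import numpy as np

# Sketch: boost a longitudinal momentum distribution from the lab frame
# to the CM frame, event by event. M and BETA are assumed placeholder values.
M = 10.0         # fragment mass (illustrative, natural units with c = 1)
BETA = 0.5       # CM-frame velocity in the lab (illustrative)

def boost_pz(pz, mass=M, beta=BETA):
    """p'_z = gamma * (p_z - beta * E), with E = sqrt(p_z^2 + m^2), c = 1.
    Transverse momentum is ignored for simplicity."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    energy = np.sqrt(pz**2 + mass**2)
    return gamma * (pz - beta * energy)

rng = np.random.default_rng(0)
# Lab-frame distribution centred on the momentum of a fragment at rest in the CM:
pz_lab = rng.normal(loc=BETA * M / np.sqrt(1.0 - BETA**2), scale=0.2, size=100_000)
pz_cm = boost_pz(pz_lab)
print(f"mean p_z in CM frame: {np.mean(pz_cm):.4f}")  # should be near zero
```

After the boost, the re-binned histogram of `pz_cm` is the CM-frame distribution; only in the non-relativistic limit does this reduce to a simple shift by the projectile momentum per nucleon.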
According to Weyl and Chandrasekhar, general relativity (GR) is a triumph of speculative thought. But it is a well-known fact that GR was initiated by two analogies. Analogy is known to be a weak form of reasoning in science and philosophy. To redress the case, this type of reasoning was renamed the Equivalence Principle (EP) in relativistic physics. The renaming, however, could not hide the fact that the presented analogy was not flawless. Irrefutable disproofs were side-stepped, and the analogy was instated as the seed of a new kind of physics. The EP was defended by reducing the size of the lab and the duration of the experiment. This kind of defence is like proponents of the flat-earth idea defending their case by shrinking the patch of land under examination until their pseudo-scientific theory is 'proven'.
The attached document is a short description of the EP analogies and their well-known criticisms. The document also introduces a new EP based on the uniform deceleration of a spaceship in open space. This new analogy results in a different curvature of light from what the original EP established using uniform acceleration. The author believes that none of the conclusions from EPs should be allowed in science, as they are based on inconclusive comparison/analogy and ignore glaring flaws in the argument.
The author would like to present this new EP for discussion and criticism.
General Relativity. Can the strength of gravity reduce for dense masses?
Is there anything in Einstein’s Field Equations that allows the strength of gravity to reduce for regions of high mass/radius ratio? It could be desirable for two reasons.
Reason 1) From Newtonian considerations. The flatness problem is equivalent to (for each mass m)

G m M/R ≈ m c^2, i.e. G ≈ c^2/(M/R) (1)

where M and R represent the mass and radius of the rest of the universe up to the Hubble radius. Small numerical constants are omitted for simplicity.

For a larger mass, with the self-potential energy term included, the same balance reads

G_reduced m (M/R) + G_reduced m^2/r ≈ m c^2 (2)

where r is the radius of mass m. Dividing through by m,

G_reduced (M/R + m/r) ≈ c^2 (3)

leading to

G_reduced = c^2/(M/R + m/r) = G/(1 + Gm/(rc^2)) (4)
i.e. a reduction in G for masses whose m/r ratio is high, approaching c^2/G.
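For readers who want to check the algebra in Eq. (4), a quick numerical sanity check (all input numbers are illustrative placeholders, not measured values):

```python
# Numerical sanity check of Eq. (4): the two written forms agree exactly,
# given the flatness condition G*M/R ~ c^2. All inputs are illustrative.
c = 3.0e8                      # m/s
M_over_R = 1.3483e27           # kg/m, chosen so G comes out near 6.674e-11
G = c**2 / M_over_R            # flatness condition G ~ c^2/(M/R)
m, r = 2.0e30, 7.0e8           # a solar-type mass and radius, for illustration

g1 = c**2 / (M_over_R + m / r)
g2 = G / (1.0 + G * m / (r * c**2))
print(abs(g1 - g2) / g1 < 1e-12)   # the two forms are algebraically identical
```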
It would allow bounces or explosions from galactic centres and avoid situations of infinite density and pressure. It could account for the 'foam'-like large-scale structure.
It's part of a new cosmology that predicts an apparent Omega(m) of between 0.25 and 0.333 and matches supernova data without a cosmological constant.
As we know, many cosmologists argue that the Universe emerged out of nothing, for example Hawking-Mlodinow (Grand Design, 2010), and Lawrence Krauss, see http://www.wall.org/~aron/blog/a-universe-from-nothing/. Most of their arguments rely on conviction that the Universe emerged out of vacuum fluctuations.
While that kind of argument may sound interesting, it is too weak an argument, in particular from the viewpoint of Quantum Field Theory (QFT). In QFT, the quantum vacuum is far from the classical definition of vacuum ("nothing"); it is an active field consisting of virtual particles. Theoretically, under a special external field (such as a strong laser), those virtual particles can turn into real particles; this effect is known as the Schwinger effect. See for example the dissertation by Florian Hebenstreit at http://arxiv.org/pdf/1106.5965v1.pdf.
Of course, some cosmologists argue in favor of the so-called Cosmological Schwinger effect, which essentially says that under strong gravitational field some virtual particles can be pushed to become real particles.
Therefore, if we want to put this idea of pair production into cosmological setting, we find at least two possibilities from QFT:
a. The universe may have begun from vacuum fluctuations, but a very large laser or other external field would be needed to trigger the Schwinger effect. One can then ask: who triggered that laser in the beginning?
b. In the beginning there could have been a strong gravitational field which triggered the Cosmological Schwinger effect. But how could that be possible, given that in the beginning nothing existed, including a large gravitational field? The argument seems circular.
Based on the above two considerations, it seems that the idea of Hawking-Mlodinow-Krauss that the universe emerged from nothing is very weak. What do you think?
Why should we care about axions which were not found in connection with dark matter?
They are just hypothetical particles.
The obvious answer is one year, as the speed of the Oh-My-God (OMG) particle was calculated to be about 0.9999999999999999999999951c.
However, at this speed, from the frame of the particle, the one-light-year distance should be reduced to almost zero according to length contraction in special relativity. Thus, with almost no distance to travel, the particle should cross it in almost no time.
What is the correct answer? About zero or one year?
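Both answers are correct in their respective frames: about one year of Earth time, and a tiny but nonzero proper time for the particle. A sketch of the numbers, using the quoted speed; the expansion 1 − v²/c² ≈ 2(1 − v/c) avoids the floating-point cancellation that a direct evaluation would suffer:

```python
import math

# 1 - v/c for the OMG particle; (1 - v^2/c^2) underflows double precision,
# so use 1 - v^2/c^2 = (1 - v/c)(1 + v/c) ~= 2 * (1 - v/c).
eps = 4.9e-24
gamma = 1.0 / math.sqrt(2.0 * eps)   # Lorentz factor, ~3e11
year_s = 3.156e7                     # seconds in a year

earth_frame_time = 1.0               # years, for a one-light-year trip
particle_frame_time_s = year_s / gamma
print(f"gamma ~ {gamma:.2e}; proper time ~ {particle_frame_time_s:.1e} s")
```

So the Earth-frame answer is one year, while the particle's own clock advances by roughly a tenth of a millisecond; neither answer is "the" correct one without specifying the frame.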
Please see pages 15-19 of a presentation material https://www.researchgate.net/profile/Ziaedin_Shafiei/project/Special-Relativity-
It provides a brief analysis of the consequences of the Mount Washington experiment, which was conducted to test time dilation and length contraction.
Frisch, D. H.; Smith, J. H. (1963). "Measurement of the Relativistic Time Dilation Using μ-Mesons". American Journal of Physics. 31 (5): 342–355.
Hi All!! I have recently completed my PhD in theoretical astrophysics with work on accretion flow around black holes. Now I want to venture into some observational studies. I would like to know how I can make use of the VIRTUAL OBSERVATORIES to start some good quality research work in observations. I would be very glad if you could share some useful links or documents. Thanking you all...
I understand that some "Two Time" approaches work well mathematically. I understand one of these configuration is 3D space with 2D Time. How does this model compare with the 3S2T model by Itzhak Bars?
Is there any opportunity for criticizing Einstein's theory of gravity because of the present lack of a satisfactory theory of dark matter? As far as I know (and my knowledge of DM is almost nil), the only Ricci-flat solutions thoroughly considered as potential sources of dark matter are black holes. What about other Ricci-flat solutions? The nicest ones are the Corvino-Penrose solutions. There are also gravitational waves and transients. Transients may disappear sooner, but newer ones keep coming, so it is reasonable to assume that, as time passes, galaxies will have more of them at any given moment. Another related question is as follows. Sometimes we see in the news that new clusters, galaxies, and other objects with huge masses of ordinary matter, or black holes, are discovered. Admittedly these are drops in an ocean. Still, one can ask how such a discovery can affect the inferred fraction of dark matter locally in a galaxy, if not globally.
The goal of this question was for me to provide a proof that the Absolute Peak Luminosity of type 1a Supernovae has a G^(-3) dependence.
The argument is correct but it seems to be too complex.
There is a simpler argument that people can understand better. Just follow these links.
Supernova distances are mapped to the Absolute Peak Luminosity of their light profiles. This means that the only two measured values are the luminosity at the peak and the luminosity 15 days later (to measure the width).
A Supernova explodes through a nuclear chain reaction:
Luminosity is equal to the number of Ni atoms decaying per second, dNi/dt.
So the peak Luminosity is the peak dNi/dt.
There are TWO considerations that together support my approximation:
a) The detonation process accelerates reactions 2-3 (in comparison with equilibrium rates prior to detonation).
b) The detonation process adds a delay to photon diffusion. The shock wave originating in the core travels to the surface. When the shock wave arrives at the surface, reactions 1-3 should (in principle) stop. Ejecta (unburned residues) are then ejected, and the photons resulting from the Ni decay have to diffuse through the thick ejecta cloud.
If you look at the Light/[C]^2 curve, you will notice that it has a small delay with respect to the Light curve. The constraint of having a finite star size forces the maximum absolute peak luminosity to synchronize itself with the maximum peak Magnesium rate dMg/dt, which happens at the maximum radius. So, the physics of a finite star and a shockwave nuclear chemistry process forces the Peak Absolute Luminosity (dNi/dt) to match the maximum rate of Magnesium formation (dMg/dt). Implicit in this conclusion is the idea that the pressure and temperature jump expedites the intermediate fusions.
My contention is:
a) Light has to go through a diffusion process while traveling from the core. The motion of the detonation curve might synchronize light and [C]^2
b) The model in the python script contains a parameter associated with the light diffusional process leading to the peak luminosity.
I would love to hear about the chosen rate values (I used arbitrary values that would provide a time profile in the order of the observed ones). I would appreciate if you had better values or a model for rewriting the equations for the nuclear chain reaction.
I see the detonation process as a Mg shock wave propagating through the star. Light would follow that layer and thus be automatically synchronized with [C]^2
Under these circumstances, volumetric nuclear chemistry depicted in the python script would have to be replaced by shockwave chemistry. That would certainly be only dependent upon the Mg content on the shockwave and thus make light be directly proportional to [C]^2!!!
HU sees the Supernova light process as proportional to [C]^2. This assertion is supported by two mechanisms:
- Detonation temperature increase will increase the rate of equations 2-3
- Detonation process should be modeled as a nuclear chemistry shockwave where Mg is being consumed as fast as it is being created. Light is following this shockwave and will peak by the time the shockwave reaches the surface of the Star. So, the shockwave mechanism ties together light diffusion and Carbon nuclear chemistry.
Since I wrote this, I followed up on my own suggestion and considered the shockwave nuclear chemistry approach. You can download all my scripts at the github below.
The shockwave model considers the amount of light in a cell along the shockwave to be the integrated light created through its evolution. It is developed as a one-dimensional process, since the observation (billions of light-years away from the supernova) can be construed as having contributions only from the cells along the radial line connecting us to the Supernova.
So, the model is one-dimensional. That said, it contains all the physics of a simple three-dimensional model. All rates are effective rates, since nuclear reactions are abundant during the Supernova explosion (there can be tremendous variations in neutron content).
The physics is the following:
a) The White Dwarf reaches the epoch-dependent Chandrasekhar mass. Compression triggers Carbon detonation. A shockwave starts at the center of the White Dwarf.
b) That shockwave induces the 2C->Mg step. The energy released increases the local temperature and drives the second and third equations toward the formation of Ni. Ni decay releases photons.
c) Photons follow the shockwave and diffuse to the surface where we can detect them. The shockwave takes tc to reach the Chandrasekhar radius (surface of the White Dwarf).
d) Luminosity comes from the Ni decay in each element of volume plus the aggregate photons traveling with the shockwave. They diffuse to the surface.
e) Two diffusion rates are considered. One for light diffusion within the Star and another for diffusion in the ejecta.
# Diffusion process with two rates 0.3 for radiation created before the shockwave
# reaches surface and 0.03 for radiation diffusion across ejecta
f) I considered tc to be 15 days; that is, it takes 15 days to reach peak luminosity. Changing this value doesn't change the picture.
g) The peak luminosity is matched to the peak Magnesium formation at t=tc or when the shockwave reaches the Star surface.
This means that the physics makes the Absolute Luminosity Peak coincide with the peak of Magnesium formation, which takes place at the Star surface.
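A minimal, illustrative version of the chain-reaction model described in (a)-(g) can be integrated in a few lines. This is not the author's actual script; the rate constants are arbitrary placeholders (as the post itself notes, they only set the time scale, not the qualitative shape):

```python
# Illustrative Euler integration of the chain 2C -> Mg -> ... -> Ni,
# with Ni decay supplying the photons. Rates K1..K3 are arbitrary
# placeholders that only set the time scale.
K1, K2, K3 = 5.0, 2.0, 0.5      # C burning, Mg -> Ni, Ni decay (arbitrary units)
DT, N_STEPS = 1e-3, 20_000

c_frac, mg, ni = 1.0, 0.0, 0.0  # initial composition: pure carbon
lum = []
for _ in range(N_STEPS):
    r1 = K1 * c_frac**2         # 2C -> Mg, second order in [C]
    r2 = K2 * mg                # Mg consumed toward Ni
    r3 = K3 * ni                # Ni decay: photons released at this rate
    c_frac += -2.0 * r1 * DT
    mg += (r1 - r2) * DT
    ni += (r2 - r3) * DT
    lum.append(r3)              # luminosity ~ dNi/dt

peak_step = lum.index(max(lum))
print(f"luminosity peaks at t ~ {peak_step * DT:.2f} (arbitrary units)")
```

The light curve rises while Ni accumulates and falls once the carbon fuel is exhausted, giving the delayed, single-peaked profile discussed above; the diffusion delays (0.3 and 0.03 in the original script) would be applied on top of this.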
This is a rebuttal to a reviewer's comment on my claim that the Absolute Peak Luminosity of type 1a Supernovae is proportional to G^(-3), rendering apparent SN1a distances dependent on G^(3/2).
Let me know if you disagree that I kicked this objection to the curb and thus all Supernovae distances are overestimated by G^(3/2) !!!!! :)
REVIEWER: I’m also skeptical that the luminosity of a SN Ia if G were different would scale as G^-3 (or M_ch^2).
Ni-56 production is not a simple rate-limited process; SNe Ia undergo a deflagration that (in most cases) transitions to a detonation. They burn about half their mass to Ni-56 (depending on when the detonation occurs). Even if Ni-56 production were a simple process, the radius (and thus the density) of the white dwarf also changes with G.
ANSWER: Firstly, let's consider the reviewer's assertion that the density of a White Dwarf also changes with G. That is incorrect. The detailed derivation was contained in the appendix and is reproduced below.
I corrected an assumption about Luminosity and Mass. Now it is perfect...:)
This argument proves that Luminosity depends upon G^(-3) and since G is epoch dependent in my theory and proportional to the inverse of the 4D radius of the Universe, earlier epochs had stronger Gravitation. Stronger Gravitation means weaker SN1a, resulting in overestimation of distances. The farther the SN1a is, the larger the overestimation.
The distances are overestimated by G^(1.5). Once one corrects them, Inflation Theory disappears in a puff... The same goes for General Relativity and Dark Energy....:)
The argument supporting this dependence is based on the work of David Arnett about type II Supernova Luminosity. This is an estimation of the dependence of the Luminosity with G.
To extract the dependence, we force the radius of the Supernova to have a Chandrasekhar radius dependence. We also estimated the dependence of M0 (the Sun's mass) upon G. The Sun's mass is a relative mass reference within the context of the Supernova mass. Supernovae occur in the radiation-pressure-dominated (as opposed to gas-pressure) regime. Under that circumstance, the Luminosity dependence comes out as Luminosity proportional to G^(-3).
Needless to say, this derivation is trivial and consistent with Supernovae and Star models.
It takes just one page to be derived, easy as Butter. (of course, after David Arnett did all the hard work...:)
This means that the SN1a ruler, which is the basis of Cosmology and Astrophysics would be faulty under an epoch dependent G context. Since HU is epoch-dependent and predicts the Supernovae distances perfectly and without any parametrization, that places the Standard Model in a very precarious position.
If you add to that, HU observation of Neutronium Acoustic Oscillations or NAO... (which is what Algebraists should be saying right now...:) I think, there is reckoning coming...
See the NAO... SDSS had this data for 10 years. Since they are basically Algebraists and see the Universe according to GR, they cannot imagine acoustic waves along distances. The Universe is supposed to be Uniform...
HU sees oscillations primarily along distances (which corresponds to cosmological angles). There may or may not be cross-talk with the 3D angular modes. I say that because I don't see the 150 Mpc wavelength in HU 2-point correlation.
So, how long will the Community refrain from welcoming my conclusion that there was no Big Bang (there were Many Bangs), that the Universe didn't come out of a fiery explosion, that dilation is nonsense, that vacuum fluctuations driving a Big Bang are utter nonsense, that GR is nonsense, etc.?
I have observed rotation curve data for a galaxy, and I want to know the best and simplest mathematical model for deriving the galactic rotation curve and the dynamical mass of such a galaxy.
Your help will be appreciated
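As a starting point, one simple and common choice (by no means the only one) is an empirical arctan rotation-curve model combined with the spherical dynamical-mass estimate M(<r) = v²r/G. The numbers below are illustrative, roughly Milky-Way-like, not fitted to any particular data set:

```python
import numpy as np

# Minimal sketch: arctan rotation-curve model plus spherical dynamical mass.
G = 4.30091e-6   # gravitational constant in kpc * (km/s)^2 / Msun

def v_arctan(r_kpc, v_flat, r_turn):
    """Empirical arctan rotation-curve model: rises over r_turn, flattens at v_flat."""
    return v_flat * (2.0 / np.pi) * np.arctan(r_kpc / r_turn)

def dynamical_mass(r_kpc, v_kms):
    """Enclosed mass in solar masses, spherical approximation M(<r) = v^2 r / G."""
    return v_kms**2 * r_kpc / G

r = 8.0                                   # kpc, illustrative radius
v = v_arctan(r, v_flat=220.0, r_turn=1.0) # illustrative parameters
m_dyn = dynamical_mass(r, v)
print(f"v({r} kpc) ~ {v:.0f} km/s, M(<{r} kpc) ~ {m_dyn:.2e} Msun")
```

Fitting `v_flat` and `r_turn` to the observed curve (e.g. with `scipy.optimize.curve_fit`) gives a smooth model; a proper mass decomposition into bulge, disc, and halo components is the next level of sophistication.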
When Einstein's general relativity theory predicted that light is bent in a gravitational field, Eddington verified that during a solar eclipse. He showed that some stars behind the sun were viisible, seemingly moved from their real position of almost 2 seconds of arc.
But I've read somewhere that Newton's theory predicted a smaller angle, but not zero. How is this possible? I thought that Newton's gravity only acts on things with non zero mass, and consequently not on photons.
Can anyone explain ?
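The resolution is that the Newtonian acceleration of a test particle is independent of its mass (the mass cancels), so treating light as a corpuscle moving at speed c gives a finite deflection 2GM/(c²b), exactly half the GR value 4GM/(c²b). A quick check for grazing incidence at the Sun:

```python
# Newtonian vs GR light deflection for a ray grazing the solar limb.
GM_SUN = 1.32712e20      # standard gravitational parameter of the Sun, m^3/s^2
C = 2.99792458e8         # speed of light, m/s
R_SUN = 6.957e8          # solar radius (impact parameter b), m
RAD_TO_ARCSEC = 206264.8

newton = 2.0 * GM_SUN / (C**2 * R_SUN) * RAD_TO_ARCSEC    # ~0.87"
einstein = 4.0 * GM_SUN / (C**2 * R_SUN) * RAD_TO_ARCSEC  # ~1.75"
print(f'Newtonian ~ {newton:.2f}", GR ~ {einstein:.2f}"')
```

Eddington's measurement of roughly 1.75 arcseconds favoured the GR value over the Newtonian half-value.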
While studying the silicon crystal and its energy band diagram, I came across electron-hole pairing, and also recombination, which results in a release of energy. The form of this energy release is different in GaAs and InP compared to Si and Ge: in GaAs and InP a photon is released, but in Si and Ge the excess energy is lost to lattice vibrations. Why can't Si lose the energy in the form of a photon? What characteristic prevents GaAs from losing the energy as lattice vibrations? I am curious about the cause of this distinctive difference.
The surface area is well known and the entropy is A/4 in Planck units. But what is the volume occupied by the static Black hole in space? I don't find this discussed anywhere.
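For concreteness, here is the quoted S = A/4 (in Planck units) evaluated for a solar-mass Schwarzschild hole. The volume, by contrast, depends on the choice of spatial slicing inside the horizon and has no unique value in GR, which may be why it is rarely tabulated:

```python
import math

# Horizon area and Bekenstein-Hawking entropy S/k_B = A / (4 l_p^2)
# for a solar-mass Schwarzschild black hole.
G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
HBAR = 1.055e-34       # J s
M_SUN = 1.989e30       # kg

r_s = 2.0 * G * M_SUN / C**2        # Schwarzschild radius, ~3 km
area = 4.0 * math.pi * r_s**2       # horizon area, m^2
l_p2 = HBAR * G / C**3              # Planck length squared, m^2
entropy = area / (4.0 * l_p2)       # dimensionless S/k_B, ~1e77
print(f"r_s ~ {r_s:.0f} m, S/k_B ~ {entropy:.1e}")
```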
It was suggested many years ago by Jacob Bekenstein that the rotation parameter of a black hole should be quantized as in usual quantum mechanics: J=n * hbar.
Assume that there exist anti-neutron stars in our galaxy. Would it be possible to distinguish a neutron star from an anti-neutron star just by means of astronomical observation data?
Some astrophysicists say that gravitational waves (GW) were created during the inflation phase of the universe, when the universe was 10^-36 seconds old and expanded by billions of billions of billions of times.
I understand this phenomenon was really explosive and created very violent effects, maybe including GW, but thinking that the GW would then propagate somewhere else would suppose that there was a "somewhere". But the universe just expanded, increased in size, but did not grow inside something else. So these GW had nothing to propagate in, and could not reach us now.
Furthermore, if the GW propagate at the speed of light in vacuum, they should be far away from us by now, because our speed is much less than theirs. The only GW we can catch from Earth should have been created much earlier than the inflation.
Is this correct?
If yes, one could then infer that the GW were the source of the inflation mechanism and somehow pushed the limits of the universe to make it grow.
Can anyone explain this? Thank you.
The 'standard model' of theoretical physics says the basic forces are transmitted between bodies and detectors by particles or fields, at speeds up to the velocity of light. The LIGO gravitational signal is found to be of short duration (the light-crossing time of the emitter, perhaps) and very close to light speed. It is very likely to consist of waves on the gravitational field (Einstein), but could still be 'gravitons'. Black holes can't emit light waves; likewise, they can't emit gravity waves and cannot exert gravitational force on matter outside. So how do the proponents explain two black holes pulling on each other, orbiting in a gradually closing binary system?
Every mass should (perhaps) collapse if it is enclosed in a sphere with a radius less than its Schwarzschild radius. If the universe was at some time in the past smaller than its Schwarzschild radius, how can it expand beyond it?
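A rough version of the puzzle in numbers (the mass of the observable universe is a commonly quoted order-of-magnitude assumption, not a precise value): the Schwarzschild radius of that mass is comparable to the observable universe's own size. This hints at the resolution: the Schwarzschild solution describes a static mass in asymptotically flat, empty space, which an expanding universe is not, so the collapse criterion does not directly apply.

```python
# Schwarzschild radius of the observable universe's mass (order of magnitude).
G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_UNIVERSE = 1.0e53  # kg, assumed order-of-magnitude ordinary-matter estimate
LY = 9.461e15        # metres per light-year

r_s = 2.0 * G * M_UNIVERSE / C**2
gly = r_s / (LY * 1e9)               # in billions of light-years, ~16
print(f"r_s ~ {gly:.0f} billion light-years")
```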
In the degenerate interiors of neutron stars, the equation of state is usually just density- (and composition-) dependent. You can express the pressure as a polytropic law of the form P ∝ ρ^α, where ρ is the density.
A stiff (or hard) equation of state is one where the pressure increases a lot for a given increase in density. Such a material would be harder to compress and offers more support against gravity. Conversely, a soft equation of state produces a smaller increase of pressure for a change in density and is easy to compress.
We use the Lagrangian density of the nucleon-meson many-body system and solve for the equation of state using different parameter sets such as FSUGold, NL3, NL3*, etc. I know every parameter set has a different symmetry energy and compressibility. But what is the physics behind a stiff or soft equation of state?
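The stiff/soft distinction in the polytropic form P ∝ ρ^α can be made concrete: dP/P = α dρ/ρ, so a larger α means pressure grows faster under compression and the matter resists gravity more strongly. A toy comparison (illustrative units and exponents):

```python
# Toy comparison of soft vs stiff polytropes: P = k * rho**alpha.
def pressure(rho, k, alpha):
    """Polytropic equation of state P = k * rho**alpha (illustrative units)."""
    return k * rho**alpha

for alpha in (1.5, 2.0, 3.0):   # soft -> stiff
    ratio = pressure(2.0, 1.0, alpha) / pressure(1.0, 1.0, alpha)
    print(f"alpha = {alpha}: doubling the density multiplies P by {ratio:.2f}")
```

In the relativistic mean-field context, the interplay of scalar attraction and vector repulsion in each parameter set (FSUGold, NL3, ...) fixes the effective α at high density, which is why different sets predict different maximum neutron-star masses.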
When a collapsing star explodes (supernova), the sudden ejection of a massive amount of matter around the central core disturbs space-time, leading to the emission of gravitational waves. But what happens if the collapse goes on until a black hole is formed, without any explosion? Will gravitational waves be emitted due to the continuously growing curvature caused by the growing mass?
I am interested in finding closure relations for the BBGKY-equations and came across an old article ``On the integration of the BBGKY equations for the development of strongly nonlinear clustering in an expanding universe'' by M DAVIS, PJE PEEBLES, Astrophysical Journal Supplement Series, 1977. I am used to closures where the three-particle correlations are either neglected or are approximated by Kirkwood's superposition principle, which appears ad hoc to me and is inaccurate for certain applications.
In the article by Davis and Peebles, something I hadn't seen before was used: a scaling solution in which the n-particle correlation functions are approximated by a simple power law. It looks like this only works if the particles are far apart from each other and if there is no characteristic length scale in the particle interactions, which is the case for gravity. This made me curious: how useful did this approach become in astrophysics simulations? Is it still used, or was this a dead end? If not, have there been any recent improvements on this approach?
Has such a scaling approach to the BBGKY-hierarchy been used in other areas of physics, like plasma physics where there is also a power law (Coulomb) interaction?
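For reference, the Kirkwood superposition closure mentioned above is simply a product ansatz for the three-point function. A toy version, with an assumed illustrative two-point function (real applications would use a measured or modelled g2):

```python
import numpy as np

# Toy two-point correlation: power law with a small-r cutoff (illustrative only).
def g2(r):
    return 1.0 + np.maximum(r, 0.1) ** -1.8

def g3_kirkwood(r12, r13, r23):
    """Kirkwood superposition: the three-point function approximated
    by a product of the three pair terms."""
    return g2(r12) * g2(r13) * g2(r23)

# For an equilateral triple at unit separation, g2 = 2, so the closure gives 8:
print(float(g3_kirkwood(1.0, 1.0, 1.0)))
```

The Davis-Peebles scaling ansatz replaces this ad hoc product with a self-similar power-law form for all orders, which is what makes it specific to scale-free interactions like gravity.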
I have seen different estimates of the photon diffusion time in different papers (30,000 to a million years), using many different mean free paths. I want to know whether these results are just rough estimates or whether there is a way to verify them.
I also want to know what the current standard time of diffusion is. It would be helpful if you could give me the links to some important research papers on this subject.
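Part of the spread in published values is transparent from the random-walk estimate t ≈ R²/(lc), which is linear in 1/l, while the effective photon mean free path varies by orders of magnitude between the solar core and the surface. Two assumed values of the mean free path bracket much of the quoted range:

```python
# Random-walk estimate of the photon diffusion time, t ~ R^2 / (l * c),
# for two assumed mean free paths; the real mfp varies with depth in the Sun.
C = 2.998e8          # m/s
R_SUN = 6.96e8       # m
YEAR = 3.156e7       # s

times = {mfp: R_SUN**2 / (mfp * C) / YEAR for mfp in (1e-3, 1e-4)}
for mfp, t in times.items():
    print(f"mean free path {mfp} m  ->  t ~ {t:.1e} years")
```

Detailed results come from integrating realistic opacity profiles through a solar model rather than using a single average mean free path, which is why careful papers disagree mainly in how they average.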
Spacetime curvature creates a spacetime "seeing" effect, analogous to astronomical seeing. This is due to microlensing and weak lensing, which become problematic from an ICRF stability viewpoint as astrometric VLBI and optical measurements (e.g. Gaia) move into the tens-of-microarcsecond accuracy region. This spacetime seeing effect will create a noise floor, limiting ICRF stability through apparent source position distortions and amplitude variations. This will of course have an impact on the high accuracy requirements of the Global Geodetic Observing System (GGOS), which has the objectives of 0.1 mm stability and 1 mm accuracy for the ITRF. The distribution of dark matter complicates the problem.
For some far objects there is a strong redshift due to expansion. If an object's movement is highly affected by a strong gravitational field, how can we distinguish the redshift from expansion from the redshift or blueshift produced by the object's fast rotational movement near the strong field (neglecting the redshift from the gravitational field itself)? Are the equations of special relativity enough to describe such motion?
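One relevant fact: the observed shift is a product of factors, (1 + z_obs) = (1 + z_cosmo)(1 + z_Doppler)(1 + z_grav), not a sum. Special relativity supplies the Doppler factor for the local motion, but it cannot by itself separate the factors; that requires modelling the source. A sketch with illustrative numbers:

```python
import math

def doppler_z(beta_radial):
    """Special-relativistic radial Doppler shift: 1 + z = sqrt((1+b)/(1-b))."""
    return math.sqrt((1.0 + beta_radial) / (1.0 - beta_radial)) - 1.0

z_cosmo = 0.5    # assumed expansion redshift (illustrative)
beta = 0.01      # assumed line-of-sight speed from local dynamics, v/c
z_total = (1.0 + z_cosmo) * (1.0 + doppler_z(beta)) - 1.0
print(f"z_total ~ {z_total:.3f}")  # slightly above the purely cosmological 0.5
```

In practice the local Doppler contribution is separated statistically (e.g. it broadens or tilts lines across a rotating system) rather than measured object by object.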
Clearly the observable universe is big compared to the human, let alone the (sub)atomic, scale. However, we also know that the universe is very flat, which in the Friedmann-Robertson-Walker universe means that the mass density is (close to) critical. But why can't the radius of curvature (positive or negative) of the universe be, say, 10 or 20 orders of magnitude larger than the size of the observable universe, and the apparent flatness be the consequence of our limited field of view a mere 13.7 billion years after the big bang?
A preference for spiral galaxies in one sector of the sky to be left-handed or right-handed spirals has indicated a parity violating asymmetry in the overall universe and a preferred axis.
Could the large-scale magnetic field be related to the predominant left-handed neutrinos in our cosmic sphere as well?
I am looking for a theory of the diffusion of charged particles in isotropic EM radiation (including the fully relativistic case). Landau & Lifshitz in "The Classical Theory of Fields" discuss the averaged damping force on a particle from EM radiation (and of course there must be similar calculations in other sources), but particle diffusion by radiation (in particular lateral diffusion) is not there. The subject must be well studied (e.g. in astrophysics); I would appreciate a pointer to a good source.
This is not my area of expertise, but I just stumbled across a paper on ``dark entropy'' (arXiv:0806.1277v2) and got confused by the ``Holographic Principle''. It sounds like it is interpreted and used as if the information contained in a volume is encoded in the enclosing surface area. I might be wrong, but I vaguely remember that the holographic principle says that the information one finds on the surface at time T is just the information from inside the volume at earlier times T' and positions X', where the difference T-T' is just the travel time of a wave going from that spot to the surface. If this is true, one would not find the entire information a volume contains at a given time written on its walls; it is just the information from specific space-time points. This sounds kind of trivial; am I missing something here? Is the way the holographic principle is used in this paper consistent with this picture?