Science topic

Astrophysics - Science topic

Astrophysics is the branch of astronomy that deals with the physics of the universe, including the physical properties of celestial objects, as well as their interactions and behavior.
Questions related to Astrophysics
  • asked a question related to Astrophysics
Question
1 answer
Relevant answer
Answer
No, the two statements don't imply any such suggestion. 
  • asked a question related to Astrophysics
Question
34 answers
Quasars are believed to be objects ejected from the centers of galaxies (or from black holes). Do all of them move outward, away from us, so that all of them show such high redshifts? Note that the motion of galaxies is random, yet not a single quasar exhibits a blueshift! Moreover, according to their high redshifts, all quasars are very distant. But the universe is isotropic, so our position is not preferred; why, then, do we not observe any quasar nearby? By isotropy, a distant observer should also see the quasars as very distant from him, which means they should be nearby to us. Contradiction. According to Hubble's law, if an object is bright it is nearby, and distant objects are faint. Quasars are very bright, so why shouldn't they be nearby? Why do we accept one part of Hubble's law (the high redshift of a quasar indicates that it is distant) and ignore the other part (the brightness of a quasar indicates that it is nearby)? Finally, why have our Galaxy and many other nearby galaxies not ejected quasars from their centers? Why is this job exclusive to distant galaxies? Because our Galaxy and many other nearby galaxies are inactive, say astronomers. Why are they the inactive ones among the active distant galaxies? That justification seems false. It is clear that such a paradigm is unsatisfactory and insufficient; it rests on many unjustified assumptions, contradictions and inconsistencies. The paradigm must be reconsidered and readjusted.
Relevant answer
Answer
Quasars occurred throughout the early universe, but only while there was a lot of gas around to fall into the central black holes. Most of that gas has been used up, which is why we don't see them nearby: you have to look far away to see earlier times.
They are complex structures that send jets of material out from the poles of the rotating accretion disc, and they look different if we are not in the direction of a jet.
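A quick way to see the "look far away to see earlier times" point is to tabulate lookback time and distance as a function of redshift. A minimal sketch, assuming a generic flat ΛCDM cosmology via astropy; the parameter values (H0 = 70 km/s/Mpc, Ωm = 0.3) and the redshifts are illustrative, not tied to any particular quasar sample:

```python
# Lookback time and comoving distance vs. redshift for a generic flat Lambda-CDM
# cosmology (illustrative parameters, not a fit to any data set).
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)

for z in [0.1, 0.5, 2.0, 6.0]:
    t_lb = cosmo.lookback_time(z)       # how far back in time we are seeing
    d_c = cosmo.comoving_distance(z)    # present-day distance to the source
    print(f"z = {z:4.1f}: lookback time = {t_lb:.2f}, comoving distance = {d_c:.0f}")
```

The larger the redshift, the earlier the epoch being observed, which is why quasar-rich epochs are only seen at large distances.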
  • asked a question related to Astrophysics
Question
2 answers
I am interested in parametric resonance. An introduction to this phenomenon can be found in Landau's book on classical mechanics. Can anyone point out some good reviews on the phenomenon beyond the textbook level? I am looking for some applications of this phenomenon in astrophysical systems. Any suggestions in this direction are welcome
Relevant answer
Answer
References beyond textbook level, though not necessarily in astrophysical systems:
(i) E. Kh. Akhmedov, hep-ph/9903302
(ii) E. Kh. Akhmedov et al., Nucl. Phys. B542 (1999) 3-30; hep-ph/9808270
(iii) V. Gudkov et al., Phys. Rev. C83 (2011) 025501
  • asked a question related to Astrophysics
Question
3 answers
Estimate the ratio between the temperature scale height h = T (dT/dr)^(-1) and the mean free path of a hydrogen atom in the atmosphere of a star. Is local thermodynamic equilibrium a valid assumption in the stellar atmosphere? Where does it break down?
Relevant answer
Answer
From your question it seems you are asking about the validity of describing the hydrogen in terms of its collisional relaxation. The first answer is about the radiation field, which is usually described in the language of "LTE", but you are asking about the description only of the particles in terms of the mean density ρ, the momentum density ρv, and the energy density, of which the molecular motions are described by a temperature T. You ask whether the mean free path is smaller than the temperature scale height, right? If so, the mean free path is about 1/(N_H σ), where σ is a bit more than π a_0^2, a_0 being the Bohr radius. In nearly all cases the mean free path is much smaller than the scale height, so a single temperature is a good description. For example, in the Sun N_H is maybe 10^17 cm^-3 and σ is 10^-16 cm^2, so the mean free path is about 0.1 cm. The scale height in T is enormous, of order 10^8 cm.
If you want to see an example of a breakdown of this for helium, see the C. Jordan (1975) paper in MNRAS or Proc. R. Soc. (I cannot remember which). There she shows that the mean free path of some helium atoms can exceed the very small temperature scale height (a few km) in the low-density solar transition region. This is where the number densities N are small, which increases the mean free path a lot. See also Pietarila & Judge 2004, ApJ, for a more general discussion of helium in the Sun and the solar transition region.
Hope this helps.
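The order-of-magnitude comparison in the answer can be reproduced in a few lines. A minimal sketch using the illustrative solar values quoted above (N_H ~ 10^17 cm^-3, σ ~ 10^-16 cm^2, scale height ~ 10^8 cm); these are rough inputs, not a model atmosphere:

```python
# Order-of-magnitude check: hydrogen mean free path vs. temperature scale height,
# using the rough photospheric numbers quoted in the answer above.
n_H   = 1e17    # hydrogen number density [cm^-3]
sigma = 1e-16   # collision cross section [cm^2], a bit more than pi * a_0**2
h_T   = 1e8     # temperature scale height [cm], order of magnitude

mfp = 1.0 / (n_H * sigma)   # mean free path [cm]
print(f"mean free path  ~ {mfp:.1e} cm")
print(f"scale height    ~ {h_T:.1e} cm")
print(f"ratio mfp / h_T ~ {mfp / h_T:.1e}")  # << 1: a single local temperature is a good description
```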
  • asked a question related to Astrophysics
Question
8 answers
Its basic ingredients are fields: the Standard Model includes a field for each type of elementary particle. These particles exhibit a wide variety of masses that follow no recognizable pattern, and the Standard Model has no mechanism that would account for any of these masses unless we supplement it by adding additional fields, of a type known as scalar fields. The word "scalar" means that these fields do not carry a sense of direction, unlike the electric and magnetic fields and the other fields of the Standard Model.
To complete the Standard Model, we need to confirm the existence of these scalar fields and find out how many types there are. This is a matter of discovering new elementary particles, often called Higgs particles (why?), that can be recognized as the quanta of these fields.
Relevant answer
Answer
The Standard Model is an SU(3)_C x SU(2)_L x U(1)_Y invariant gauge theory. The theory is renormalizable and anomaly-free. The anomaly-free condition means that after quantization of the classical fields, the quantized field theory maintains the symmetries of its classical counterpart.
The gauge couplings g_s, g and g' are universal, i.e. they do not differentiate between generations, and they are the same for the multiplets within each generation. The Yukawa part of the Standard Model Lagrangian has a number of free complex parameters, the Yukawa couplings. At present the theory does not have enough predictive power to fix them; they are fixed by experiment. In this sense, the Standard Model is a phenomenological model.
The Higgs boson used to be a theoretical idea (within the Standard Model) until a few years ago, but now its existence is experimentally established. This discovery has put the Standard Model on a firm theoretical footing.
  • asked a question related to Astrophysics
Question
1 answer
A massive O star has a typical luminosity of 3 x 10^39 erg s^-1, a lifetime of 3 x 10^6 yr, a stellar-wind velocity of 5000 km s^-1, and a mass-loss rate of 10^-5 M_sun yr^-1. When it ends up as a supernova, ~5 M_sun is ejected with a velocity of 5000 km s^-1. How can one estimate the contribution of these processes to the energy and the momentum of the ISM?
Relevant answer
Answer
There is a large body of literature on this. There are basically two kinds of feedback from stars to the ISM:
- chemical evolution of the ISM;
- kinematic feedback, which can involve shocks from SN explosions, although massive star formation in spiral arms alone will do this. In some 2D velocity fields there is evidence for such kinematic feedback from star formation.
If all you want is an estimate, then make a model: how many SN events or O-star formation events are there in the lifetime of a galaxy?
Note that in practice this is difficult; accretion disks and dense regions in the ISM are likely the main inhibiting factors. The ISM is not a smooth environment at all.
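For the single-star numbers quoted in the question, the energy and momentum budgets can be estimated directly. A minimal sketch, taking the wind and ejecta inputs at face value and adding the radiative output for comparison; the constants are rounded and the result is only an order-of-magnitude estimate:

```python
# Rough energy and momentum injected into the ISM by one massive star, using the
# illustrative values from the question (wind, supernova ejecta, and radiation).
YR   = 3.156e7      # seconds per year
MSUN = 1.989e33     # grams per solar mass
C    = 3.0e10       # speed of light [cm/s]

L_star   = 3e39              # luminosity [erg/s]
lifetime = 3e6 * YR          # lifetime [s]
v_wind   = 5000 * 1e5        # wind velocity [cm/s]
mdot     = 1e-5 * MSUN / YR  # mass-loss rate [g/s]
m_ej     = 5 * MSUN          # supernova ejecta mass [g]
v_ej     = 5000 * 1e5        # ejecta velocity [cm/s]

m_wind = mdot * lifetime                      # total mass carried by the wind
E_wind, p_wind = 0.5 * m_wind * v_wind**2, m_wind * v_wind
E_sn,   p_sn   = 0.5 * m_ej * v_ej**2,     m_ej * v_ej
E_rad,  p_rad  = L_star * lifetime,        L_star * lifetime / C  # radiation, for comparison

print(f"wind     : E ~ {E_wind:.1e} erg, p ~ {p_wind:.1e} g cm/s")
print(f"supernova: E ~ {E_sn:.1e} erg, p ~ {p_sn:.1e} g cm/s")
print(f"radiation: E ~ {E_rad:.1e} erg, p ~ {p_rad:.1e} g cm/s")
```

Multiplying by the number of such stars formed over a galaxy's lifetime, as suggested above, then gives the total input.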
  • asked a question related to Astrophysics
Question
83 answers
I am specifically looking for mass distribution and observed angular velocity of as many galaxies as possible.
Relevant answer
Answer
See the section "The Rotation Curves of Low Surface Brightness Galaxies" in the first attached link, and also the last link; it has loads of the data you are looking for.
Another option is to look through articles and Wikipedia and extract the data points manually, or use one of the tools available online. I have attached multiple links below.
  • asked a question related to Astrophysics
Question
5 answers
(I already solved the problem by setting the keyboard to US English; I explain in a message below.)
I'm trying to use SpekCalc to simulate an X-ray tube, but apparently it is not showing the bremsstrahlung radiation. I'm using a peak energy of 100 keV, a theta of 16 degrees, and 5 millimetres of aluminium, but I obtain the same results whatever I try. I'm running release 1.1 light for macOS, and I obtained the same results for Windows.
Note: I'm using the values Nf=2 and p=1 instead of those suggested by Poludniowski, because apparently the GUI does not accept any value smaller than 1.0. I have tried other values with similar results.
Relevant answer
Answer
I have tried the same steps but did not face the error which you specified (see attached image).
  • asked a question related to Astrophysics
Question
8 answers
Is it proven? In other words, why are stars round?
Relevant answer
Answer
Yes, the assertion is true. Topology equates any simply connected domain to a sphere. The equipotential surfaces are deformed into the special shape under consideration. The theory of homotopy proves it, together with Gauss' theorem.
  • asked a question related to Astrophysics
Question
124 answers
Cosmic Microwave Background Radiation (CMB). (Question edited March 29, 2016.)
Assume very dense proto-stellar formation prior to a cosmic inflation event, as hypothesized in the Pearlman SPIRAL cosmological-redshift hypothesis, and no ongoing cosmic expansion subsequent to that cosmic inflation event.
Could the cause of the CMB be from:
Prior to the stellar formation?
During stellar formation?
Post stellar formation?
Prior to that cosmic inflation event?
during that event?
at the very end of that event?
Under the Standard Cosmology Model (SCM), the current distance to the most distant visible galaxies is 46.5 billion light-years. Is that understanding correct?
CMB leak:
A sub-hypothesis in SPIRAL is that the CMB should 'leak', i.e. have dissipated, at 1 light-year per year in radius beyond the most distant galaxy.
If valid under the SCM, the CMB is spread over the area of a sphere with a radius of at least 59.9 billion light-years (46.5 B + 13.4 B). Is this already established, or a published hypothesis?
If the CMB does not 'leak' / spread beyond the most distant stars at 1 light-year per year, why not?
Thank you in advance for any and all proposed solutions you can think of.
Relevant answer
Answer
Hi Roger,
Yes, there is a big difference between the standard concept, commonly known as the Concordance Model (because it uses the best fit parameters and is therefore "in accordance" with the observations), and your vision of what it says. There is a section on the first page of your notes that should give you a clue to this:
"This occurred roughly 400,000 years after the Big Bang when the universe was about one eleven hundredth its present size."
The number used in the Concordance Model is 1090, so if the CMB had been emitted from material that was then 13.8 GLy away, that material would now be 1090 times farther, or 15042 GLy, from us. Those numbers are wrong; the correct values are that the material was 0.0416 GLy from us then and is 45.4 GLy from us now.
Your mistake is treating the distance as if it hadn't changed between then and now, even though your notes recognise that it does change in standard cosmology.
I've added a link to a simple calculator; you can type in the 'z' value for the redshift of any source and it will tell you lots of interesting stuff about both then and now. If you have any questions about what the Concordance Model says, that's an easy way to check. The default value is 1089, which is the CMB (one less than the ratio of expansion).
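The scaling used in the answer is just the statement that the proper distance to comoving material grows by the expansion factor. A minimal sketch of that arithmetic, using the numbers quoted above:

```python
# Proper distance to the last-scattering material, then vs. now, using the
# expansion factor quoted in the answer (a_now / a_then = 1 + z ~ 1090).
expansion_ratio = 1090       # 1 + z for the CMB, as quoted above
d_now_gly = 45.4             # present proper distance to that material [Gly]

d_then_gly = d_now_gly / expansion_ratio   # proper distance when the CMB was emitted
print(f"distance at emission ~ {d_then_gly:.4f} Gly")   # ~0.0416 Gly
print(f"distance today       = {d_now_gly} Gly")
```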
  • asked a question related to Astrophysics
Question
4 answers
Can we speak of a slight linear increase of the core density of Sun-like stars during their stay on the main sequence? This is a period in which gravitational effects are steadily balanced by the radiation pressure resulting from the progressive conversion of H into He. And would the same hold, with a bigger slope, for the later period of conversion of He into C?
Relevant answer
Answer
Dear Guibert,
The core densities of MS stars are checked observationally through apsidal-motion theory. Long-term period-variation observations of binaries in eccentric orbits reveal the observational structure constants of the component stars, which can be compared with theoretical values obtained from stellar models. As I remember, present stellar models always predict lower central condensation for MS stars. We are also currently preparing a paper on this topic. You can read the papers by Kopal 1965 and Claret & Gimenez 2010. It seems the core density has already increased before the MS stage, and it continues to increase through the evolution.
Best regards
  • asked a question related to Astrophysics
Question
1 answer
Suppose the object belongs to the high-soft regime, is mechanically powerful (cavities, shocks), and also has a bright nucleus; what does that indicate?
Relevant answer
Answer
Please read the paper "A new approach to the correction of Galilean transformation"; it may be helpful for you.
  • asked a question related to Astrophysics
Question
4 answers
Can anisotropic pressure destroy the spherical symmetry of the space-time? Can we use anisotropic pressure for a spherically symmetric star?
Relevant answer
Answer
Anisotropic pressure means that the principal stresses are not all equal. Thus I understand the original question proposed by Hasrat as equivalent to: does spherical symmetry imply that all principal stresses are equal?
The answer to the above question is: no.
Spherical symmetry has two implications:
1) There may be two different principal stresses, but not three as in the more general (non-spherical) case.
2) Neither of them depends on the adapted spherical coordinates.
Stability is not the issue here.
  • asked a question related to Astrophysics
Question
2 answers
According to Kippenhahn's diagram there is a decrease, but it is not clear whether this decrease has the general shape of a negative exponential, a power law, a linear trend, or something else. Could the shape differ as a function of the stellar mass, e.g. 10-30 solar masses compared to 1-3 solar masses?
Relevant answer
Answer
Actually, there are indications that the initial collapse only produces relatively small masses (up to about 20 Msol). According to this model the more massive stars are formed at a later stage through mergers or accretion.  A brief review can be found here:
  • asked a question related to Astrophysics
Question
3 answers
In the light that we get from stars we discern certain lines belonging to the most abundant elements in those stars. Assume that we would pass the light from a star through a prism in order to disperse the spectrum, then isolate the hydrogen line(s).
Which of the hydrogen lines is the most prominent in stars? And what are its properties: is the light in that line coherent light, or is it thermal light? Is it polarized?
Relevant answer
Answer
This would depend on the type of star. In hot stars the hydrogen in the stellar envelope is fully ionized; in cool stars it is not. Also, stars like the Sun have a hot, low-density corona where the hydrogen is ionized.
  • asked a question related to Astrophysics
Question
1 answer
We know the equilibrium condition for a neutron star, but I do not know it for the case of a hyperon star. Is it the same or not?
Relevant answer
Answer
Please have a look at the paper
"Roles of Hyperons in Neutron Stars"
by Shmuel Balberg and Itamar Lichtenstadt (The Racah Institute of Physics, The Hebrew University, Jerusalem 91904, Israel) and Gregory B. Cook (Center for Radiophysics and Space Research, Space Sciences Building, Cornell University, Ithaca, NY 14853).
Some portions are copied for you below.
ABSTRACT
We examine the roles the presence of hyperons in the cores of neutron stars may
play in determining global properties of these stars. The study is based on estimates
that hyperons appear in neutron star matter at about twice the nuclear saturation
density, and emphasis is placed on effects that can be attributed to the general multispecies
composition of the matter, hence being only weakly dependent on the specific
modeling of strong interactions. Our analysis indicates that hyperon formation not only
softens the equation of state but also severely constrains its values at high densities.
Correspondingly, the valid range for the maximum neutron star mass is limited to
about 1.5 − 1.8 M⊙, which is a much narrower range than available when hyperon
formation is ignored. Effects concerning neutron star radii and rotational evolution are
suggested, and we demonstrate that the effect of hyperons on the equation of state
allows a reconciliation of observed pulsar glitches with a low neutron star maximum
mass. We discuss the effects hyperons may have on neutron star cooling rates, including
recent results which indicate that hyperons may also couple to a superfluid state in high
density matter. We compare nuclear matter to matter with hyperons and show that
once hyperons accumulate in neutron star matter they reduce the likelihood of a meson
condensate, but increase the susceptibility to baryon deconfinement, which could result
in a mixed baryon-quark matter phase.
Subject headings: stars: neutron — elementary particles — equation of state — stars:
evolution
1. Introduction
The existence of stable matter at supernuclear densities is unique to neutron stars. Unlike
all other physical systems in nature, where the baryonic component appears in the form of atomic
nuclei, matter in the cores of neutron stars is expected to be a homogeneous mixture of hadrons and
leptons. As a result the macroscopic features of neutron stars, including some observable quantities,
have the potential to illuminate the physics of supernuclear densities. In this sense, neutron stars
serve as cosmological laboratories for hadronic physics. A specific feature of supernuclear densities
is the possibility for new hadronic degrees of freedom to appear, in addition to neutrons and protons.
One such possible degree of freedom is the formation of hyperons - strange baryons - which is the
main subject of the present work. Other possible degrees of freedom include meson condensation
and a deconfined quark phase.
While hyperons are unstable under terrestrial conditions and decay into nucleons through the
weak interaction, the equilibrium conditions in neutron stars can make the reverse process, i.e.,
the conversion of nucleons into hyperons, energetically favorable. The appearance of hyperons in
neutron stars was first suggested by Ambartsumyan & Saakyan (1960) and has since been examined
in many works. Earlier calculations include the works of Pandharipande (1971b), Bethe & Johnson
(1974) and Moszkowski (1974), which were performed by describing the nuclear force in Schrödinger
theory. In recent years, studies of high density matter with hyperons have been performed mainly
in the framework of field theoretical models (Glendenning 1985, Weber & Weigel 1989, Knorren,
Prakash & Ellis 1995, Schaffner & Mishustin 1996, Huber at al. 1997). For a review, see Glendenning
(1996) and Prakash et al. (1997). It was also recently demonstrated that good agreement with these
models can be attained with an effective potential model (Balberg & Gal 1997).
These recent works share a wide consensus that hyperons should appear in neutron star (cold,
beta-equilibrated, neutrino-free) matter at a density of about twice the nuclear saturation density.
This consensus is attributed to the fact that all these more modern works base their estimates of
hyperon-nucleon and hyperon-hyperon interactions on the experimental constraints inferred from
hypernuclei. The fundamental qualitative result from hypernuclei experiments is that hyperon
related interactions are similar in character and in order of magnitude to nucleon-nucleon interactions.
In a broader sense, this result indicates that in high density matter, the differences between
hyperons and nucleons will be less significant than for free particles.
The aim of the present work is to examine what roles the presence of hyperons in the cores
of neutron stars may play in determining the global properties of these stars. We place special
emphasis on effects which can be attributed to the multi-species composition of the matter while
being only weakly dependent on the details of the model used to describe the underlying strong
interactions.
We begin our survey in § 2 with a brief summary of the equilibrium conditions which determine
the formation and abundance of hyperon species in neutron star cores. A review of the widely
accepted results regarding hyperon formation in neutron stars is given in § 3. We devote § 4 to
an examination of the effect of hyperon formation on the equation of state of dense matter, and
the corresponding effects on the star’s global properties: maximum mass, mass-radius correlations,
rotation limits, and crustal sizes. In § 5 we discuss neutron star cooling rates, where hyperons
might play a decisive role. A discussion of the effects of hyperons on phase transitions which may
occur in high density matter is given in § 6. Conclusions and discussion are offered in § 7.
2. Equilibrium Conditions for Hyperon Formation Neutron Stars
In the following discussion we assume that the cores of neutron stars are composed of a mixture
of baryons and leptons in full beta equilibrium (thus ignoring possible meson condensation and a
deconfined quark phase - these issues will be picked up again in § 6). The procedure for solving
the equilibrium composition of such matter has been describes in many works (see e.g., Glendenning
(1996) and Prakash et al. (1997) and references therein), and in essence requires chemical
equilibrium of all weak processes of the type
B1 → B2 + ℓ + ν̄ℓ ;  B2 + ℓ → B1 + νℓ ,   (1)
where B1 and B2 are baryons, ℓ is a lepton (electron or muon), and ν (ν̄) is its corresponding
neutrino (anti-neutrino). Charge conservation is implied in all processes, determining the legitimate
combinations of baryons which may couple together in such reactions.
Imposing all the conditions for chemical equilibrium yields the ground state composition of
beta-equilibrated high density matter. The equilibrium composition of such matter at any given
baryon density, ρB, is described by the relative fraction of each species of baryons, xBi ≡ ρBi/ρB,
and leptons, xℓ ≡ ρℓ/ρB.
Evolved neutron stars can be assumed to be transparent to neutrinos on any relevant time scale
so that neutrinos are absent and µν = µν¯ = 0. All equilibrium conditions may then be summarized
by a single generic equation
µi = µn − qiµe , (2)
where µi and qi are, respectively, the chemical potential and electric charge of baryon species i,
µn is the neutron chemical potential, and µe is the electron chemical potential. Note that in the
absence of neutrinos, equilibrium requires µe =µµ. The neutron and electron chemical potentials are
constrained by the requirements of a constant total baryon number and electric charge neutrality,
Σ_i xBi = 1 ;   Σ_i qi xBi + Σ_ℓ qℓ xℓ = 0 .   (3)
The temperature range of evolved neutron stars is typically much lower than the relevant
chemical potentials of baryons and leptons at supernuclear densities. Neutron star matter is thus
commonly approximated as having zero temperature, so that the equilibrium composition and
other thermodynamic properties depend on density alone. Solving the equilibrium compositions
for a given equation of state (EOS) at various baryon densities yields the energy density and pressure
which enable the calculation of global neutron star properties.
3. Hyperon Formation in Neutron Stars
In this section we review the principal results of recent studies regarding hyperon formation in
neutron stars. The masses, along with the strangeness and isospin, of nucleons and hyperons are
given in Tab. 1. The electric charge and isospin combine in determining the exact conditions for
each hyperon species to appear in the matter. Since nuclear matter has an excess of positive charge
and negative isospin, negative charge and positive isospin are favorable along with a lower mass
for hyperon formation, and it is generally a combination of the three that determines the baryon
density at which each hyperon species appears. A quantitative examination requires, of course,
modeling of high density interactions. We begin with a brief discussion of the current experimental
and theoretical basis used in recent studies that have examined hyperon formation in neutron stars.
3.1. Experimental and Theoretical Background
The properties of high density matter chiefly depend on the nature of the strong interactions.
Quantitative analysis of the composition and physical state of neutron star matter are currently
complicated by the large uncertainties regarding strong interactions, both in terms of the difficulties
in their theoretical description and from the limited relevant experimental data. None the less,
progress in both experiment and theory have provided the basis for several recent studies of the
composition of high density matter, and in particular suggests it will include various hyperon
species.
Experimental data from nuclei set some constraints on various physical quantities of nuclear
matter at the nuclear saturation density, ρ0 = 0.16 fm^-3. Important quantities are the bulk binding
energy, the symmetry energy of non-symmetric matter (i.e., different numbers of neutrons and
protons), the nucleon effective mass in a nuclear medium, and a reasonable constraint on the compression
modulus of symmetric nuclear matter. However, at present, little can be deduced regarding
properties of matter at higher densities. Heavy ion collisions have been able to provide some information
regarding higher density nuclear matter, but the extrapolation of these experiments to
neutron star matter is questionable since they deal with hot non-equilibrated matter.
Relevant data for hyperon-nucleon and hyperon-hyperon interactions is more scarce, and relies
mainly on hypernuclei experiments (for a review of hypernuclei experiments, see Chrien & Dover
(1989), Gibson & Hungerford (1995)). In these experiments a single hyperon is formed in a nucleus,
and its binding energy is deduced from the energetics of the reaction (typically meson scattering
such as X(K−, π−)X).
There exists a large body of data for single Λ-hypernuclei, which clearly shows bound states of
a Λ hyperon in a nuclear medium. Millener, Dover & Gal (1988) used the nuclear mass dependence
of Λ levels in hypernuclei to derive the density dependence of the binding energy of a Λ hyperon in nuclear matter. In particular, they estimate the potential depth of a Λ hyperon in nuclear matter at density ρ0 to be about −28 MeV, which is about one third of the equivalent value for a nucleon
in symmetric nuclear matter. The data from Σ-hypernuclei are more problematic (see below). A
few emulsion events that have been attributed to Ξ-hypernuclei seem to suggest an attractive Ξ
potential in a nuclear medium, somewhat weaker than the Λ−nuclear matter potential.
A few measured events have been attributed to the formation of double Λ hypernuclei, where two
Λ’s have been captured in a single nucleus. The decay of these hypernuclei suggests an attractive Λ−
Λ interaction potential of 4−5 MeV (Bodmer & Usmani 1987), somewhat less than the corresponding
nucleon-nucleon value of 6−7 MeV. This value of the Λ−Λ interaction is often used as the baseline
for assuming a common hyperon-hyperon potential, corresponding to a well depth for a single
hyperon in isospin-symmetric hyperon matter of -40 MeV. While this value should be taken with
a large uncertainty, the typical results regarding hyperon formation in neutron stars are generally
insensitive to the exact choice for the hyperon-hyperon interaction, as discussed below.
We emphasize again that the experimental data is far from comprehensive, and great uncertainties
still remain in the modeling of baryonic interactions. This is especially true regarding densities
greater than ρ0, where the importance of many body forces increases. Three body interactions are
used in some nuclear matter models (Wiringa, Fiks & Fabrocini 1988, Akmal, Pandharipande &
Ravenhall 1998). Many-body forces for hyperons are currently difficult to constrain from experiment
(Bodmer & Usmani 1988), although some attempts have been made on the basis of light
hypernuclei (Gibson & Hungerford 1995). Indeed, field theoretical models include a repulsive component
in the two-body interactions through the exchange of vector mesons, rather than introduce
explicit many body terms. We note that the effective equation used here is also compatible with
theoretical estimates of ΛNN forces through the repulsive terms it includes (Millener, et al. 1988).
In spite of these significant uncertainties, the qualitative conclusion that can be drawn from
hypernuclei is that hyperon-related interactions are similar both in character and in order of magnitude
to the nucleon-nucleon interactions. Thus nuclear matter models can be reasonably generalized
to include hyperons as well. In recent years this has been performed mainly with relativistic theoretical
field models, where the meson fields are explicitly included in an effective Lagrangian. A
commonly used approximation is the relativistic mean field (RMF) model following Serot & Walecka
(1980), and implemented first for multi-species matter by Glendenning (1985), and more recently
by Knorren et al. (1995) and Schaffner & Mishustin (1996) (see the recent review by Glendenning
(1996)). A related approach is the relativistic Hartree-Fock (RHF) method that is solved with
relativistic Green’s functions (Weber & Weigel 1989, Huber at al. 1997). Balberg & Gal (1997)
demonstrated that the quantitative results of field theoretical calculations can be reproduced by
an effective potential model.
The results of these works provide a wide consensus regarding the principal features of hyperon
formation in neutron star matter. This consensus is a direct consequence of incorporating
experimental data on hypernuclei (Balberg & Gal 1997). These principal features are discussed
below.
3.2. Estimates for Hyperon Formation in Neutron Stars
Hyperons can form in neutron star cores when the nucleon chemical potentials grow large
enough to compensate for the mass differences between nucleons and hyperons, while the threshold
for the appearance of the hyperons is tuned by their interactions. The general trend in recent
studies of neutron star matter is that hyperons begin to appear at a density of about ρB = 2ρ0,
and that by ρB ≈ 3ρ0 hyperons sustain a significant fraction of the total baryon population. An
example of the estimates for hyperon formation in neutron star matter, as found in many works, is
displayed in Fig. 1. The equilibrium compositions - relative particle fractions xi - are plotted as a
function of the baryon density, ρB. These compositions were calculated with case 2 of the effective
equation of state detailed in the appendix, which is similar to model δ = γ = 5/3 of Balberg & Gal
(1997). Figure 1a presents the equilibrium compositions for the “classic” case of nuclear matter,
  • asked a question related to Astrophysics
Question
2 answers
Related to science & technology park development
Relevant answer
Answer
Hey Madelon, never came across these specific sectors. There is quite a bit of literature on business incubators, and there were a few studies comparing different types of incubators (including privately-owned ones, which could be more geared towards what you are looking for). Don't know how much literature you have covered but I would point you to: Grimaldi and Grandi 2005, Clarysse et al. 2005, Carayannis and von Zedtwitz 2005, Bruneel et al. 2012. These perhaps could point you somewhere. Best of luck!
  • asked a question related to Astrophysics
Question
12 answers
Hi, I am trying to find the formula to calculate the precision of differential photometry. IRAF or SExtractor can give magnitude errors, but these are usually instrumental errors only, ignoring the errors introduced by the different extinction behaviour of a comparison star of unknown spectral type. The possible intrinsic variation of the comparison star is usually not addressed either. Besides, bias-frame, dark-current and flat-field calibration errors also exist. However, there is still no unified formula to calculate these potential errors. I have been puzzled by the above problems and would greatly appreciate your comments and suggestions.
Relevant answer
Answer
See Broeg et al for computing an optimum artificial comparison star:
CH. BROEG, M. FERNANDEZ, and R. NEUHAUSER. "A new algorithm for differential photometry: computing an optimum artificial comparison star". Astron. Nachr. / AN 326, No. 2, 134–142 (2005) / DOI 10.1002/asna.200410350
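One common practice, independent of the algorithm in the paper above, is to report both a formal error (instrumental terms added in quadrature) and an empirical error from the scatter of a constant check star, since the latter picks up extinction mismatch and intrinsic variability that the formal budget misses. A minimal sketch; the error values and check-star measurements are made up for illustration:

```python
# Illustrative error budget for a differential magnitude (target minus comparison).
import numpy as np

sigma_target_inst = 0.012   # instrumental error of the target [mag], e.g. from SExtractor
sigma_comp_inst   = 0.008   # instrumental error of the comparison star [mag]
sigma_calib       = 0.005   # estimated bias/dark/flat calibration term [mag]

# Formal error: quadrature sum of the individual terms.
sigma_formal = np.sqrt(sigma_target_inst**2 + sigma_comp_inst**2 + sigma_calib**2)

# Empirical error: scatter of a (presumed constant) check star reduced the same way.
check_minus_comp = np.array([0.003, -0.011, 0.007, 0.002, -0.006, 0.009])  # fake data
sigma_empirical = np.std(check_minus_comp, ddof=1)

print(f"formal error    ~ {sigma_formal:.3f} mag")
print(f"empirical error ~ {sigma_empirical:.3f} mag")  # a larger value hints at unmodelled terms
```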
  • asked a question related to Astrophysics
Question
6 answers
What I mean by this question is as follows:
We know that the growth/rise phase of the solar flare components (soft X-rays, hard X-rays) is always associated with a solar type III radio burst and a concurrent radio flux density. According to some space scientists, and as we can see clearly, the type III burst extends over a longer range of solar longitude, which is often consistent with the decay phase of the low-energy (longer-wavelength soft X-ray) flare component, indicating that it extends up to several thousands of kilometres into the interplanetary medium. However, the radiated energy emitted during the decay phase is distributed over a long time, indicating that the energy carried by the shock wave is perhaps much more than the energy distributed by the decay phase of the flare component. To resolve the confusion over whether the flare decay phase or the shock wave is dominant in the interplanetary medium, it is important to determine the speed of the decay phase of the soft X-ray component. I am looking for suggestions on how to determine the speed of the soft X-ray flux intensity (W/m^2).
Relevant answer
Answer
Hi Kazi,
You need to segregate your observations to understand the type of emission (thermal vs non-thermal, for example).
Once this is done, you need to infer the particle fluxes responsible for these emissions, but it gets tricky depending on the transport model you are assuming, and there is a clear signature of return currents which will saturate your emission for large flares.
From the particle distribution functions you can infer energies and speeds, and integrate depending on the physical characteristics you are interested in.
You can review some of the papers on my page to better understand this, or follow the procedure we normally use.
I can help you with your analytical and numerical models if you want.
  • asked a question related to Astrophysics
Question
9 answers
Wormholes, within the theory of General Relativity, are microscopic and short lived, but many serious physicists have speculated openly and hoped that perhaps with negative energy (as yet undiscovered) they could be enlarged and held open longer. Kip Thorne is one of the leading names proposing such ideas. https://en.wikipedia.org/wiki/Kip_Thorne
However, astrophysicists have observed space to be essentially a flat Euclidean geometry at large scales https://en.wikipedia.org/wiki/Wilkinson_Microwave_Anisotropy_Probe . Inflation cosmology has been invoked to explain this (as well as uniformity) https://en.wikipedia.org/wiki/Inflation_(cosmology).
My question is, if space is Euclidean flat, does or does not that imply wormhole paths between two points would be longer than paths through ordinary space? I want to rule out time dilation paths, because these are already known from special relativity and are very short for the traveler, but the traveler cannot get back to the time point she started from. (Thorne suggests we could even get back to earlier time points with wormholes but I don't want to go there in this thread as there is no end to it). I am looking for space-like paths only that would take less time at velocities below the speed of light than paths through flat Euclidean space. Please, also no arguments about whether flat space conclusion is correct. This question is restricted to "what if the flat space conclusion is correct, then do wormholes, if ever practical, have a practical use for space travel."
Relevant answer
Answer
Robert did do a good job explaining. One thing to remember: the throat doesn't go through normal space but through hyperspace, and the distance is nil regardless of how far apart the mouths are in normal space. I love Stargate SG-1, but all that soaring around inside various wormhole throats is pure rubbish.
  • asked a question related to Astrophysics
Question
8 answers
In a recent paper, Yu-Gang Ma reports the detection of antihelium-4 by the RHIC-STAR team. Considering that our solar system is also composed of hydrogen and helium, would one then propose the hypothesis that antimatter forms of hydrogen and helium can also be detected in our solar system?
So what do you think? Is it possible to detect antihydrogen and antihelium in our solar system? If such a detection were confirmed, we would face the challenge of formulating a matter-antimatter model of the solar system.
Relevant answer
Answer
There is no way to detect antihelium nuclei in cosmic rays, or even in heavy-ion colliders, due to the infinitesimal survival and production probabilities, respectively. Antihydrogen nuclei, better known as antiprotons, are generated in cosmic rays (and in accelerators: CERN, Fermilab) mostly via p + A → p̄ + X reactions, and have been detected in balloon-borne experiments (BESS, HEAT) and space-based experiments (AMS, PAMELA).
  • asked a question related to Astrophysics
Question
2 answers
Consider a star with a static magnetic field (a magnetic Ap star, for example). Outside the star there is vacuum, so the field there is a potential field, curl(B) = 0. Some force-free field is constructed in the interior of the star. The normal component Bn of this force-free field is evaluated on the surface of the star. This is used as the boundary condition to construct the field outside the star, using standard potential-field theory. What's wrong with such a model for Ap stars; why is it not a counterexample to the vanishing force-free field theorem?
Relevant answer
Answer
Well, technically outside the star is not a vacuum, there is a stellar wind that exerts a force on the magnetic field. Because the magnetic field strength of a dipole field decreases faster with radius than the ram pressure of the wind, the wind always wins in the end and tears open the field lines. The result is a field that effectively becomes a monopole at long distance.
How much this influences the configuration on the surface of the star depends, of course,  on the wind-strength.
  • asked a question related to Astrophysics
Question
6 answers
Suppose one part in 10^8 of the Sun's luminosity is absorbed or isotropically scattered by grains circling the Sun. What is the total mass of such matter falling into the Sun each second?
Relevant answer
Answer
Actually, the cosmic dust would be disintegrated by the corona before ever being absorbed into the Sun. The Sun loses mass much faster than it could ever absorb it, and it loses heat energy as it disintegrates the cosmic dust. It is possible that the dust changes state and becomes a plasma, but that could be expelled before reaching the surface due to magnetic shearing and realignment. However, the question serves little purpose because any absorption is insignificantly small compared to the mass loss a star experiences. It's comparable to asking how many more miles I can get out of my car by lunging myself forward while driving.
  • asked a question related to Astrophysics
Question
7 answers
A disk-shaped rotating galaxy is seen edge-on. By Doppler-shift spectroscopic measurements we can determine the speed V with which the stars near the edge of the galaxy rotate about its center. How can one show what the mass M of the galaxy is in terms of the observed velocity?
Relevant answer
Answer
There is a tremendous discrepancy among suggested answers as to how much mass a galaxy has. And yes, it can be determined from rotation profiles, or velocity curves, as explained above.
There is a maximum measured tangential velocity, which is determined from the shifting of spectral lines and easily read off rotation profiles. Furthermore, the total luminosity of a galaxy can also be used to estimate its mass. If the distance to the galaxy is known, then its size can be determined. If the maximum tangential velocity is taken as v_max, then the "linear" density of the galaxy is v_max^2 / (2G), where G is the gravitational constant. If the overall length of the galaxy, i.e. the length of the major axis, is taken as L, then the mass of the galaxy is L * v_max^2 / (2G). This is roughly 10^11 solar masses, depending on the galaxy. It also matches the luminosity-based mass estimates and the luminosity profile as measured by Shapley.
A very good book which gives a way to determine the mass of the Andromeda galaxy is Exploration of the Universe by Abell. He finds that M31 has a mass of the order of 10^11 solar masses, as do I using the formula given, although his calculation is different, using Newtonian orbital dynamics, which is fairly straightforward. There is another book by Binney & Tremaine on galactic astronomy; however, Binney laments the lack of a complete exposition: "despite much progress, astronomers are still groping towards this goal," he writes.
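The formula in the answer is easy to evaluate. A minimal sketch with Milky-Way-like illustrative numbers (v_max ~ 220 km/s, rotation curve measured out to ~15 kpc); these inputs are examples, not measurements of any particular galaxy:

```python
# Order-of-magnitude dynamical mass of a galaxy from a flat rotation curve,
# M ~ v_max^2 * R / G (equivalent to L * v_max^2 / (2G) with L = 2R, as above).
G    = 6.674e-8     # gravitational constant [cgs]
MSUN = 1.989e33     # solar mass [g]
KPC  = 3.086e21     # kiloparsec [cm]

v_max = 220e5       # maximum tangential velocity [cm/s] (~220 km/s, illustrative)
R     = 15 * KPC    # outermost radius of the measured rotation curve

M = v_max**2 * R / G
print(f"M ~ {M / MSUN:.1e} solar masses")   # roughly 1e11 Msun for these inputs
```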
  • asked a question related to Astrophysics
Question
4 answers
Could it be expressed by Bi-metric theory ?
Relevant answer
Answer
This may give an indication that one of the possible versions of the bimetric theory of gravity is what is required.
  • asked a question related to Astrophysics
Question
4 answers
.
Relevant answer
Answer
While Naima did not specifically ask about internals, Eric's diagram is better than others I've seen at showing them.  It is also useful for answering a specific type of question ... where can something go?  The little funnels give the space of future possible positions.  I would call this a "data plot."
I've heard (or read) others, mostly teachers writing pedagogical papers, swearing by the time vs. one spatial dimension plots, but I never got anything out of one (except for Eric's just now).  You cannot get curved space that way because in one dimension, according to the conventions of differential geometry, variations in the spatial distances in one dimension can always be transformed away.
There is an interesting underlying problem with interpreting these plots as more, as a visualization of coordinate space.  The "convention" to drop the "i " from in front of "t " (adopted after I left school in the mid 70s) hides the fact that time is not a true coordinate, at least not in the same way as spatial dimensions.  Nothing can move about freely in time.  4-space is not a space.  It is 3-space with a Lorentzian relationship to time.
You can't look at a time-radius or a time-x plot and make correct inferences about the distances between points, which is an essential geometric quality written in human genes.  Therefore such plots are misleading to humans.  What you have is:
Minkowski: ds^2 = dr^2 - (c dt)^2
Euclidean: ds^2 = dx^2 + dy^2
In my view, only when one "gets over" the Minkowski space thing, is one able to actually understand and visualize space-time curvature.  Humans in their ordinary experience do not visualize time.  They visualize and see space, and motion.  Based on the number of downloads, and other feedback, the projection graphic I presented is probably the most useful graphic I've ever created on any subject.  People find they can understand it.  Especially the two interpretations of length contraction vs. space expansion. 
If you want to represent an object moving, other than in a plot, you have to resort to animation or video or some type of multi-frame drawing.  Both plots and representations have extremely valuable uses.  I don't know which Naima was looking for, but it's nice to have both.  I just wanted to clear up the difference since so many people have suffered from incorrect interpretation of the Minkowski space-time "plots."
  • asked a question related to Astrophysics
Question
3 answers
I am searching for spectra defined at least between 300 and 1100 nm with different temperatures, that are:
7250
7120
7000
6750
6659
6550
6395
6250
6170
5900
5800
5750
5686
5644
5580
5592
5580
5430
5280
5110
4940
4700
4538
4400
4275
4130
3900
3760
3625
3490
3355
3220
3085
2950
2815
2680
Where can I find them?
Relevant answer
Answer
Libraries of both real and synthetic stellar spectra are readily available online.  Since you asked about model spectra, the primary resource that springs to mind is the work of Robert Kurucz (Harvard).  Kurucz's spectra have been used by a generation of astronomers and can be found online. Many researchers have used (and extended) his codes to  generate libraries of model spectra for stars across the full ranges of temperature, luminosity, mass, metallicity etc.  
For example, "A library of high-resolution Kurucz spectra in the range λλ3000-10 000"
by Murphy & Meiksin, 2004, Monthly Notices of the Royal Astronomical Society, 351, 1430.
There are several groups working  to compute models at higher resolution, and to include more physical phenomena. 
A good example of such a comprehensive library was published by Coelho et al., 2005, Astronomy and Astrophysics, Volume 443, 735. Which you can find on the ADS at  http://adsabs.harvard.edu/abs/2005A%26A...443..735C 
The catalog can be searched via the CDS service (linked from the ADS).
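While locating the real model grids, a Planck (blackbody) curve can serve as a crude stand-in for the continuum shape at each of the temperatures listed in the question; note that a blackbody has no spectral lines, so it is only a first approximation. A minimal sketch over the 300-1100 nm window (the three temperatures used are just examples from the list):

```python
# Planck curves over 300-1100 nm as a crude stand-in for stellar continua.
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(wavelength_m, T):
    """Spectral radiance B_lambda [W m^-3 sr^-1] at temperature T [K]."""
    x = H * C / (wavelength_m * KB * T)
    return 2 * H * C**2 / wavelength_m**5 / np.expm1(x)

wavelengths = np.linspace(300e-9, 1100e-9, 801)
for T in (7250, 5800, 3490):               # a few temperatures from the list above
    B = planck(wavelengths, T)
    peak_nm = wavelengths[np.argmax(B)] * 1e9
    print(f"T = {T} K: B_lambda peaks in this window near {peak_nm:.0f} nm")
```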
  • asked a question related to Astrophysics
Question
11 answers
The observed lithium abundance is in disagreement with the standard Big Bang model. What are the possible solutions to the problem? How reliable are the observations in this case? Could there be a mechanism for destroying Li that we are not considering, or is this a beyond-Standard-Model phenomenon?
Relevant answer
Answer
Short answer for now.
The disagreement in the lithium abundance is more significant than most cosmological researchers care to admit. It is basically a factor of 2 too low, and that is significant. In the past, lithium abundances were difficult to measure, but there has been substantial improvement in measurement techniques, which makes the discrepancy real and problematic.
The attached link is a good treatment of the issue.
  • asked a question related to Astrophysics
Question
3 answers
Domain walls or boundaries are expected to form when a discrete symmetry is broken spontaneously.
REF:
Y. B. Zeldovich, I. Y. Kobzarev and L. B. Okun, Zh. Eksp. Teor. Fiz. 67 (1974) 3
[Sov. Phys. JETP 40 (1974) 1]; T. W. B. Kibble, J. Phys. A9, 1387 (1976); A. Vilenkin, Phys. Rept. 121 (1985) 263
Relevant answer
Answer
Can they scatter gravitational waves ?
See for example:
Does a domain wall emit gravitational waves? -- General-relativistic perturbative treatment
Hideo Kodama, Hideki Ishihara, Yoshihisa Fujiwara
arXiv:gr-qc/9401007
  • asked a question related to Astrophysics
Question
4 answers
I am thinking of typical Mercury and Gemini retrofire maneuvers. Mercury was nose down about 34 degrees at retrofire. Gemini was about 20 degrees nose down. Is there a convenient way for a non-mathematician to estimate the resulting orbit after such a maneuver--maybe a graph or a macro or app?
Relevant answer
Answer
John, no offence, but IMHO good old math is still required. I use this as a starting point: http://www.braeunig.us/space/. Go to the Orbital Mechanics section; there is a good example of a Hohmann transfer with a tangential impulse (when a spacecraft maneuver changes altitude and inclination). And you need to know at least the parameters of the initial orbit (apogee, perigee and inclination).
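For a retrofire burn specifically, a rough non-graphical estimate is also possible with a few lines of arithmetic: subtract the delta-v components (retrograde and downward, set by the nose-down pitch angle) from the circular orbital velocity, then get the new perigee from the orbital energy and angular momentum. A minimal sketch; the altitude, delta-v and pitch angle below are illustrative round numbers, not the actual Mercury or Gemini values:

```python
# Estimate the orbit after a retrofire burn from a circular orbit.
import math

MU  = 3.986e14     # Earth's GM [m^3 s^-2]
R_E = 6.371e6      # Earth radius [m]

alt_km, dv, pitch_deg = 160.0, 130.0, 34.0   # circular altitude [km], retro delta-v [m/s], nose-down angle [deg]

r   = R_E + alt_km * 1e3
v_c = math.sqrt(MU / r)                      # circular speed before the burn

th  = math.radians(pitch_deg)
v_t = v_c - dv * math.cos(th)                # tangential speed after the burn
v_r = -dv * math.sin(th)                     # radial (downward) speed after the burn

energy = 0.5 * (v_t**2 + v_r**2) - MU / r    # specific orbital energy
a      = -MU / (2 * energy)                  # new semi-major axis
h      = r * v_t                             # specific angular momentum
e      = math.sqrt(max(0.0, 1 - h**2 / (MU * a)))
perigee_alt_km = (a * (1 - e) - R_E) / 1e3

# A negative perigee altitude means the perigee is below the surface, i.e. reentry is assured.
print(f"perigee altitude after retrofire ~ {perigee_alt_km:.0f} km")
```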
  • asked a question related to Astrophysics
Question
5 answers
Astrophysics and Astronomy
Relevant answer
Answer
The contribution of Coriolis forces to the evolution of a system is quantified by the Rossby number (Ro), representing the ratio of the involved centrifugal to Coriolis accelerations. The smaller Ro, the greater the contribution of Coriolis forces. The recent measurement of the Rossby number by helioseismology of the solar interior turned out to be Ro < 0.01 (see the reference below), which signifies the dominant contribution of Coriolis forces to the alpha-effect of the solar dynamo. For comparison, the magnitudes Ro ~ 0.1-0.01 correspond to a large hurricane system (e.g. Katrina, 2005) in the Earth's atmosphere.
  • asked a question related to Astrophysics
Question
8 answers
Please refer me the book or any research paper from which I can find out.
Relevant answer
Answer
The astrophysical S-factor? S(E) = E exp(2πη) σ(E): the nuclear cross section with the non-nuclear energy dependence (the 1/E factor and the Coulomb-barrier penetration) taken out.
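A minimal sketch of that definition in code, using the common parameterization 2πη ≈ 31.29 Z1 Z2 (μ/E)^(1/2) with μ the reduced mass in amu and E the centre-of-mass energy in keV; the reaction and the cross-section value in the example are purely illustrative:

```python
# Astrophysical S-factor: S(E) = sigma(E) * E * exp(2*pi*eta), with eta the
# Sommerfeld parameter. Here 2*pi*eta ~ 31.29 * Z1 * Z2 * sqrt(mu[amu] / E[keV]).
import math

def s_factor(sigma_barn, E_keV, Z1, Z2, mu_amu):
    """S(E) in keV*barn, given sigma(E) in barn at centre-of-mass energy E in keV."""
    two_pi_eta = 31.29 * Z1 * Z2 * math.sqrt(mu_amu / E_keV)
    return sigma_barn * E_keV * math.exp(two_pi_eta)

# Example: a p + 7Li type reaction (Z1=1, Z2=3, mu ~ 0.875 amu) at 100 keV,
# with a made-up cross section of 1 microbarn.
print(f"S(E) ~ {s_factor(1e-6, 100.0, 1, 3, 0.875):.3e} keV barn")
```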
  • asked a question related to Astrophysics
Question
7 answers
Some research papers suggest that low-FIP (below 10 keV) elements are enhanced by a factor of 3-4 in the corona relative to the photosphere, while high-FIP (above 10 keV) elements do not show this characteristic.
Does this effect tell us anything about energy transfer from the photosphere to the corona?
Relevant answer
Answer
The question is most easily addressed using a multi fluid picture, in
which different atoms, electrons and ions separately reach local
equilibrium so that each is described by a number density, center of
mass momentum, and temperature. This works because particles of
similar mass tend to "relax" via collisions faster than those of
different masses. Conservation equations for mass, momentum and energy
< SORRY RESEARCHGATE DROPPED 90% OF MY ANSWER... grrr... >
are given by the SUM of separate equations for each species, and
coupling between different species is described by differences between
these equations (e.g. Schunck 1977). This coupling can be thought of
as "friction" between different atoms, ions, electrons. The question
of energy transport into the corona is then seen as identifying terms
essentially in the energy equation for each species, or for the total
plasma (predominantly hydrogen, protons and electrons in the Sun). We
are yet to successfully identify the precise mechanisms in the energy
equation(s) that describe coronal heating, but it appears related to
magnetic fields threading the plasma (a wealth of data since the
1960s has demonstrated this). Most models focus only on the total energy
equation. The trick with coronal heating is to transport ordered
energy into the corona and dissipate it ("friction" being a useful
concept to destroy ordered motion- such ordered motion might be bulk
flows or differences in flows between protons and electrons,
electrical currents, for example) under conditions that have very weak
dissipation (low friction) unless tiny scales can be generated.
The question of abundance differences concerns DIFFERENCES in the
multi-fluid equations. The fact that the FIP (with high FIP being >
10 eV not keV) appears important indicates that the action dictating
the abundance differences occurs in plasma where high FIP elements are
neutral, low FIP ionized. These conditions exist in the photosphere,
chromosphere (..spicules, prominences). The "friction" scales with
density. A large amount of friction (such as in the photosphere)
means that all species will tend to be strongly coupled- a single
fluid picture thus applying and abundances not changing. Thus the
photosphere is probably ruled out. Thus we look to the chromosphere to
separate elements according to FIP. Martin Laming has a model in
which the forces trying to separate different species are Alfven
waves. Myself and Hardi Peter have a review in 2000 discussing more
generally the problem of separation of species in the chromosphere.
So to answer your question, while both the coronal heating problem and
abundance issues are easily related through the multi-fluid formalism
that seems appropriate, and both depend on friction at some level or
other (friction being used both for destruction of ordered motion in
the total energy equation or in destruction of ordered motion between
different particle species), the two phenomena are of slightly
different origins and so are not necessarily coupled in a decisive
way. Only by introducing a specific model describing processes in the
governing equations can one tell if these two phenomena are really
closely coupled, or a "red herring" (false trail).
I hope this helps
Philip Judge
  • asked a question related to Astrophysics
Question
3 answers
Is it possible to source a URL or database of the available antennas in the field of millimeter and submillimeter astronomy?
I need to be able to tabulate available frequency-dependent beamsizes and antenna efficiencies.
Relevant answer
Answer
Hi Robert,
Most probably there is no such database. But the number of millimeter and especially submillimeter antennas is not so large and it's rather easy to list them. 
Igor
  • asked a question related to Astrophysics
Question
10 answers
The quantum field theoretic prediction for the vacuum energy density leads to a value for the effective cosmological constant that is incorrect by between 60 to 120 orders of magnitude. In a paper, George Ellis et al. (2010) review an old proposal of replacing Einstein's Field Equations by their trace-free part (the Trace-Free Einstein Equations), together with an independent assumption of energy--momentum conservation by matter fields. While this does not solve the fundamental issue of why the cosmological constant has the value that is observed cosmologically, it is indeed a viable theory that resolves the problem of the discrepancy between the vacuum energy density and the observed value of the cosmological constant. Therefore they confirm that no problems arise in such a scheme: hence, the Trace-Free Einstein Equations are indeed viable for cosmological and astrophysical applications. 
So what do you think? Can the Trace-Free Einstein Equations be a Viable Alternative to General Relativity? Your comments are welcome
Relevant answer
Answer
This paper by G. Ellis is published in Class.Quant.Grav. 28 (2011) 225007, you can find it on arXiv:1008.1196 and it contains lots of references to work done on this idea. The usual term for it is unimodular gravity.
  • asked a question related to Astrophysics
Question
1 answer
How do the fan and spine reconnections take place?
Relevant answer
Answer
This terminology refers to magnetic reconnection in the vicinity of a three-dimensional magnetic null point (a point in space at which the magnetic field strength is zero). Electric current sheets can form in various configurations around these points in response to different external forces, leading to different types of reconnection. A summary of the properties of these types of reconnection, as well as references to various articles with further details, can be found in the attached review.
  • asked a question related to Astrophysics
Question
3 answers
Warp field generation using metamaterials and a sub-nanoscale Casimir-cavity architecture (e.g. optimising the multiscalar geometry of warp propulsion).
Relevant answer
Answer
I can't believe NASA is funding this before having established human settlements on other solar-system bodies with a less ambitious but much better understood propulsion technology, namely nuclear power.  Warping space to any practical extent would probably require extreme energy densities that only nuclear processes - and lots of experience manipulating them - can provide.
  • asked a question related to Astrophysics
Question
6 answers
Does anyone know the status of the comparison between eigenmodes of spherical space and the CMB map? Is there any new result?
Relevant answer
Answer
@Victor: Thank you for the citations on a different way to analyse CMB data, which is not in the popular literature. I've scanned them and realized my post was off-topic. The harmonics I referred to were used to subtract 'vibrations' from the raw data. To answer your question, perhaps the papers here, up to 2011 (http://inspirehep.net/record/920248/references), might interest you, as they do me.
  • asked a question related to Astrophysics
Question
8 answers
If neutrinos travel through a higher dimension, then it may be possible that, without travelling faster than light, they can cover the same distance as light but in less time, if light travels in 4 dimensions.
Relevant answer
Answer
A case against dimensions beyond 3D + 1D enabling effectively faster-than-light travel:
Assertion: If an observable particle can travel in a dimension beyond 3D + 1D, then an observer confined to 3D + 1D could observe multiple (even infinitely many) particles at the same location, which would show properties different from those of a single particle at that location.
Argument: Consider a 4D + 1D spacetime (x, y, z, w, ict). If 'w' is a dimension independent of (x, y, z, ict), then it must accommodate matter at different locations along 'w' for matter to travel across it. For a 3D + 1D observer who sees the (x, y, z, w, ict) universe as a projection onto (x, y, z, ict), multiple higher-dimensional particles may therefore exist at the same position; in fact, infinitely many such particles may coincide there. To the 3D + 1D observer these would appear as infinitely many particles, each corresponding to a 4D + 1D particle located at a different position along the 'w' dimension.
  • asked a question related to Astrophysics
Question
2 answers
Please suggest bonding parameters and a silicon specification.
Relevant answer
Answer
For bonding, AuSn will be used, and laser lift-off will be used for sapphire removal.
Is there any specific article or journal related to this?
I would appreciate your sharing it!
  • asked a question related to Astrophysics
Question
6 answers
It would be helpful if the explanations are given for 3 categories:
1. Beginners using GUI
2. Wannabes using both GUI and TUI
3. Experts using TUI
Relevant answer
Answer
Matlab has a strong GUI, so learning and making plots is quite easy compared to gnuplot.
However, I would suggest you switch to Python. It has a dedicated library for astronomy and astrophysics called 'astropy' ( http://www.astropy.org/ ) and a separate library for plotting, 'matplotlib' (http://matplotlib.org/).
Python, being an open-source high-level language, has very good documentation and is easy to learn. The matplotlib documentation includes a tutorial for making 3D plots, and a minimal example is sketched below.
You can find numerous other examples under the 'Gallery' section of the matplotlib web page.
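To make this concrete, here is a minimal, illustrative sketch of a 3D surface plot with matplotlib; the function plotted and the grid spacing are arbitrary choices for demonstration only:

```python
# Minimal 3D surface plot with matplotlib (illustrative example).
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection on older matplotlib

# Build a simple grid and an arbitrary test surface z = sin(r)/r.
x = np.linspace(-8, 8, 200)
y = np.linspace(-8, 8, 200)
X, Y = np.meshgrid(x, y)
R = np.hypot(X, Y)
Z = np.sinc(R / np.pi)          # sin(r)/r, safely handling r = 0

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z, cmap='viridis', linewidth=0)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()
```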
  • asked a question related to Astrophysics
Question
6 answers
 Ex-AGN, Particularly Gamma Ray Study, GRB, QUASARS & Jet's
Relevant answer
Answer
Agreeing with the previous answers, I would add that different wavebands have different strengths. Gamma ray detectors are sensitive to the most extremely energetic events such as GRBs, but are very bad at localising the source; for this you need to match the event to detections at other wavelengths. X-ray telescopes are useful for following up GRBs as well as studying accreting black holes such as those in AGN. But for distant sources the angular resolution is still poor, and optical imaging is needed to identify the host. The optical benefits from the best resolution and sensitivity, but the problem in the optical and infrared is separating the emission of an AGN from that of the host galaxy, and to understand this one needs to model the broadband spectrum or multi-wavelength SED (spectral energy distribution). The radio spectrum is also useful for the study of high-energy sources since it includes synchrotron radiation from relativistic electrons that are accelerated by high-energy processes such as jets and shocks associated with SNe and AGN.
  • asked a question related to Astrophysics
Question
8 answers
The central densities that are adopted for the initial, cold, WDs in this paper, seem to be a lot lower than those quoted in most papers on the stability of WDs near the Chandrasekhar limit. These say that non-rotating WDs become unstable at 1.39Msun but at densities of 2-3E13 kg/m^3, which seems to be an order of magnitude greater than adopted in the initial configurations here. Is there a simple explanation for this? Neglect of GR?
Relevant answer
Answer
"Would you then say that the central densities associated with Carbon ignition and gravitational collapse are not the same?"
Yes, that's exactly what I'm saying.
At the densities we're concerned with, GR is negligible and would be a secondary effect even compared to Coulomb corrections. And Jeffries is right that the WDs are essentially cold. I think the problem may be one of loose use of "Chandrasekhar-mass WD" in the literature. Strictly speaking, an Mch WD cannot exist, since the Chandrasekhar mass is the limiting mass; all non-rotating WDs must therefore have a lower mass. When we say "Chandrasekhar-mass white dwarf", we always mean "near-Chandrasekhar-mass white dwarf". Why don't you try it out and integrate a hydrostatic profile using TOV? Pick a central density and see what mass you get (a minimal sketch is given below). The GR effect will certainly be small compared to metallicity effects (e.g. how much Ne22), Coulomb corrections to the equation of state (it's a plasma), or even non-zero temperature.
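As a concrete illustration of that suggestion, here is a minimal sketch that integrates Newtonian hydrostatic equilibrium (consistent with GR being negligible at these densities, as argued above) with the zero-temperature degenerate-electron equation of state. The starting radius, tolerances, and mu_e = 2 for a C/O composition are assumptions of this sketch, not values from the paper under discussion:

```python
# Newtonian hydrostatic structure of a cold white dwarf with the
# zero-temperature (Chandrasekhar) degenerate-electron equation of state.
# Illustrative sketch only: mu_e = 2 (C/O composition), cgs units throughout.
import numpy as np
from scipy.integrate import solve_ivp

G, c, h = 6.674e-8, 2.998e10, 6.626e-27
m_e, m_u, Msun = 9.109e-28, 1.661e-24, 1.989e33
mu_e = 2.0

A = np.pi * m_e**4 * c**5 / (3.0 * h**3)               # pressure scale [dyn/cm^2]
B = 8.0 * np.pi * mu_e * m_u * (m_e * c / h)**3 / 3.0  # density scale: rho = B * x^3

def rhs(r, y):
    """y = [x, m], with x = p_F/(m_e c); dP/dx = 8 A x^4 / sqrt(1 + x^2)."""
    x, m = y
    rho = B * x**3
    dPdx = 8.0 * A * x**4 / np.sqrt(1.0 + x**2)
    dxdr = -G * m * rho / (r**2 * dPdx)
    dmdr = 4.0 * np.pi * r**2 * rho
    return [dxdr, dmdr]

def wd_mass(rho_c):
    """Total mass (in Msun) for a given central density rho_c (g/cm^3)."""
    x_c = (rho_c / B)**(1.0 / 3.0)
    r0 = 1.0e4                                   # start slightly off-centre
    m0 = 4.0 / 3.0 * np.pi * r0**3 * rho_c
    surface = lambda r, y: y[0] - 1e-4 * x_c     # stop when x ~ 0 (the surface)
    surface.terminal = True
    sol = solve_ivp(rhs, (r0, 1.0e10), [x_c, m0], events=surface,
                    rtol=1e-6, atol=1e-12, max_step=1e6)
    return sol.y[1, -1] / Msun

for rho_c in [1e9, 2e9, 1e10, 1e11]:
    print(f"rho_c = {rho_c:.1e} g/cm^3  ->  M = {wd_mass(rho_c):.3f} Msun")
```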
  • asked a question related to Astrophysics
Question
3 answers
Surface energy of YFeO3 (yttrium orthoferrite).
Relevant answer
Answer
First of all, you should realize that the surface energy of solids is a function of anisotropy, especially for crystalline materials such as orthoferrites. Secondly, YFeO3 exists in two forms (hexagonal and orthorhombic) whose XRD profiles have different shapes. Accordingly, their surface energies will certainly differ, even if we only consider average values.
  • asked a question related to Astrophysics
Question
5 answers
In the PNAS edition of August 25, 2009, Temple and Smoller argued that perhaps the standard notions of dark energy and the cosmological constant are not required to explain the accelerated expansion of the universe. According to them, the key to solving that puzzle is to find expanding wave solutions of Einstein's equations.
So what do you think? Is it possible to use expanding wave solutions of Einstein's equations to explain the accelerated expansion? Your comments are welcome.
Relevant answer
Answer
Thanks Allan, for your answer. I will read your papers soon. Best wishes
  • asked a question related to Astrophysics
Question
2 answers
The Javalambre-PAU Astrophysical Survey (J-PAS) has just published its Red Book, providing all the technical and scientific details about the project. The main characteristic of J-PAS is its use of a particular set of 54 narrow-band optical filters to compute photometric redshifts for millions of galaxies spread over more than 8500 deg².
As a J-PAS member, I'd like to know your opinions about this technique and about the J-PAS survey in general.
Relevant answer
Answer
Hi Somak,
  as far as I know, there hasn't been work (within the collaboration) on specific spectral features, since BPZ uses templates. I'd say that, depending on the spectral type of the galaxies and their redshift, the most informative feature will change; using templates therefore avoids biases related to putting too much weight on specific features.
  With respect to previous multi-filter surveys, such as COMBO-17, what makes the J-PAS filter system very powerful is that the filters have been designed from scratch to maximize the photo-z precision (this is explained in Sect. 2 of the document). The result is that the filters are narrow (145 A), almost top-hat with no wings, and there are no gaps between them, covering the whole optical range.
  For any further information about J-PAS, please don't hesitate to contact me or any member of the collaboration.
  Cheers!
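As a rough illustration of how template-based photo-z estimation through a set of narrow top-hat filters works in general (this is a generic chi-square sketch, not the BPZ code; all filter centres, templates and fluxes would be supplied by the user and are purely illustrative here):

```python
# Generic template-fitting photometric-redshift sketch (not BPZ).
# Observed narrow-band fluxes are compared, via chi-square, with a
# template spectrum redshifted onto a grid and integrated through
# top-hat filter curves.  All inputs are illustrative placeholders.
import numpy as np

def tophat_flux(wave, flux, center, width=145.0):
    """Mean flux of a spectrum through a top-hat filter (wavelengths in Angstrom)."""
    mask = np.abs(wave - center) < width / 2.0
    if not np.any(mask):
        return 0.0
    return flux[mask].mean()

def photo_z(obs_flux, obs_err, filter_centers, template_wave, template_flux,
            z_grid=np.linspace(0.0, 1.5, 301)):
    """Return the redshift on z_grid that minimizes chi-square, plus the chi-square curve."""
    chi2 = np.empty_like(z_grid)
    for i, z in enumerate(z_grid):
        # Redshift the template and predict the flux in each filter.
        model = np.array([tophat_flux(template_wave * (1.0 + z), template_flux, c)
                          for c in filter_centers])
        # Free amplitude: analytically scale the template to best match the data.
        a = np.sum(obs_flux * model / obs_err**2) / np.sum(model**2 / obs_err**2)
        chi2[i] = np.sum(((obs_flux - a * model) / obs_err)**2)
    return z_grid[np.argmin(chi2)], chi2
```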
  • asked a question related to Astrophysics
Question
18 answers
I plan to conduct a research on Tardigrada survival in simulated Europa (moon) conditions. The question is:
If results will end up being significant - wouldn't it indirectly support the hypothesis of life existence possibility on Europa (in sum with results and analysis from Voyager & Galileo missions (R. Greenberg et al.)?
Relevant answer
Answer
The problem with interplanetary panspermia is that even if the hypothesis is true (let's imagine it is), it still does not answer the question of the origin of life. Some people think life came to Earth from Mars; so if we imagine that life from Earth could in turn be transported to Europa, it starts to remind me of the movie "Inception" or STI. For now we know very little about interplanetary panspermia - there is no robust evidence yet. However, NASA (correct me if I am wrong) did experiment with extremophiles, including tardigrades, and the results showed that some organisms might be able to survive in a highly hostile environment - space.
For a start, it is highly doubtful that life can exist on the surface of Europa (exposed to intense radiation, extremely low temperatures, no oxygen, etc.).
What I'd like to check is the ability of an organism to survive on Europa under the ice crust. If tardigrades die there, we probably lose the "last hope". It would at least expose a major weak point of the interplanetary panspermia possibility: if one of the "toughest" organisms cannot manage to survive, then who can? We have no clue whether there are tougher organisms on Earth or anywhere else (so far).
BUT if a tardigrade is able to survive for quite a long time, that would not only support the panspermia hypothesis but might also support the possible existence of living organisms on Europa. Analysis of Voyager and Galileo data showed relatively high activity of Europa's crust, which means that the ice on Europa is probably not as thick as people thought before and there is probably a liquid ocean underneath. The biggest planet in the solar system + the Sun + Io / Ganymede / Callisto + a mobile, relatively rapidly changing crust => strong gravitational influence + evidence => high tidal activity. Such tidal activity should produce an enormous amount of energy (heat) - maybe enough to support life. The question is how harsh the environment was throughout Europa's existence. If it was moderate, it is possible that hypothetical life on Europa did not need any panspermia. However, without evidence, all this is just a collection of theories, talk and dreams.
Sorry for spelling and weak text structure - in a rush right now. Thank you all for the replies, they are drastically helpful and give me a chance to look at the problem from different angles.
 
  • asked a question related to Astrophysics
Question
8 answers
Long ago, the perception of gravity changed from that of a kind of force to a phenomenon that actually bends space-time in its vicinity. Is there any chance that magnetism and electric forces - in fact, all four types of forces - could also be interpreted in this way?
Relevant answer
Answer
The original Kaluza-Klein theory attempted exactly this, namely using the additional degrees of freedom in 5-dimensional spacetime to describe the electromagnetic field. There have been further attempts to generalize the theory to include other forms of matter (see "Space-Time-Matter theory", advocated by Paul Wesson and colleagues).
Another approach is to generalize the metric tensor to a nonsymmetric tensor, which can be split into a symmetric and an antisymmetric part. The symmetric part would represent gravity, while the antisymmetric part would be the Maxwell tensor of electrodynamics. This is the unified field theory approach that Einstein pursued late in his life. A good (albeit hard to find) monograph on this subject is Tonnelat's "Einstein's Theory of Unified Fields".
These classical attempts to unify gravity with electromagnetism and possibly other forces are not popular these days as there is no clear connection between them and quantum field theory, which in turn forms the basis of the standard model of particle physics, which, while somewhat inelegant (18+ free parameters), has been enormously successful at predicting and explaining all known forms of matter and interactions except gravity.
  • asked a question related to Astrophysics
Question
5 answers
According to the authors, the regression method they propose takes into account the intrinsic scatter and errors in both variables. I have heard that this method does not rely on chi-square minimization, but then how does one assess whether the fit is good or not? Can I use this method to fit a model? If yes, how do I conclude whether my model fits the data correctly?
Relevant answer
Answer
Survival statistics are best coupled with the technique of maximum likelihood.
For goodness-of-model considerations, see a brief primer on maximum likelihood.
A minimal likelihood sketch for a straight-line fit with intrinsic scatter and errors in both variables is given below.
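To make the idea concrete, here is a minimal, hedged sketch (not the specific method of the paper in question) of maximizing a Gaussian likelihood for a straight-line fit y = a + b x with intrinsic scatter and measurement errors on both variables; the data arrays are placeholders generated for illustration:

```python
# Straight-line fit y = a + b*x with intrinsic scatter and errors on
# both variables, via maximum likelihood (illustrative sketch only).
# The effective variance of each point is sy^2 + b^2*sx^2 + sigma_int^2.
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, x, y, sx, sy):
    a, b, log_sint = params
    var = sy**2 + (b * sx)**2 + np.exp(2.0 * log_sint)   # effective variance
    resid = y - (a + b * x)
    return 0.5 * np.sum(resid**2 / var + np.log(2.0 * np.pi * var))

# Placeholder data: replace with your own measurements and 1-sigma errors.
rng = np.random.default_rng(0)
x_true = np.linspace(0.0, 10.0, 40)
y_true = 1.5 + 0.8 * x_true
sx = np.full_like(x_true, 0.3)
sy = np.full_like(x_true, 0.5)
x = x_true + rng.normal(0.0, sx)
y = y_true + rng.normal(0.0, 0.4, x_true.size) + rng.normal(0.0, sy)  # 0.4 = intrinsic scatter

res = minimize(neg_log_like, x0=[0.0, 1.0, np.log(0.1)],
               args=(x, y, sx, sy), method='Nelder-Mead')
a_fit, b_fit, log_sint_fit = res.x
print(f"a = {a_fit:.2f}, b = {b_fit:.2f}, intrinsic scatter = {np.exp(log_sint_fit):.2f}")
```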
  • asked a question related to Astrophysics
Question
4 answers
By empty space I mean interstellar space.
I have read en.wikipedia.org/wiki/Neutron#Production_and_sources - I am not asking about those processes.
I remember reading about this years ago but can now find no reference to this process. I have forgotten the rate, something like 1 neutron per 1 cubic km per year or per century. The process may have been stated as the "decay" of empty space to produce a neutron.
Relevant answer
Answer
Such spontaneous creation of matter (and antimatter) is postulated in the quasi-steady-state models of the Universe, like the one proposed by Fred Hoyle. This mysterious process of creation (mini big bangs?) has no support from any current model of particle production: http://en.wikipedia.org/wiki/Steady_State_theory
  • asked a question related to Astrophysics
Question
10 answers
I have seen different estimations of photon diffusion time in different papers (30000 to a million years). There are many different mean free paths used. I want to know if these results are just speculations or if there is a way to verify these results?
I also want to know what the current standard time of diffusion is. It would be helpful if you could give me the links to some important research papers on this subject.
Relevant answer
Answer
The main source of uncertainty in the photon diffusion time is the opacity in different layers of the Sun, as inferred from various solar models. This opacity also changes as the Sun evolves, so the photon diffusion time is not constant. Convection in the interior of the Sun further complicates the issue.
A simple random-walk calculation combined with a simple density-gradient model for the Sun is a relatively clean calculation (see the sketch below).
is a good source for this and is the source for the 170,000 yr timescale.
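For orientation, here is a minimal sketch of the constant-mean-free-path random-walk estimate, t ≈ R²/(l c). The mean free path used below is an assumed, order-of-magnitude value; a realistic calculation would integrate an opacity and density profile from a solar model:

```python
# Back-of-the-envelope photon diffusion time for the Sun via a random walk:
# N ~ (R/l)^2 steps of length l, so t ~ N * l / c = R^2 / (l * c).
# The mean free path is an assumed, order-of-magnitude value; realistic
# estimates integrate kappa*rho through a solar model.
R_sun = 6.96e8          # solar radius [m]
c = 3.0e8               # speed of light [m/s]
mfp = 1.0e-3            # assumed average photon mean free path [m]

t_seconds = R_sun**2 / (mfp * c)
t_years = t_seconds / 3.156e7
print(f"Diffusion time ~ {t_years:.2e} yr for a mean free path of {mfp*100:.2f} cm")
```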
  • asked a question related to Astrophysics
Question
42 answers
Spacetime curvature creates a space-time "seeing" effect, analogous to astronomical seeing. It is due to microlensing and weak lensing, which become problematic from an ICRF stability viewpoint as astrometric VLBI and optical measurements (e.g. GAIA) move into the tens-of-microarcsecond accuracy regime. This space-time seeing effect will create a noise floor, limiting ICRF stability through apparent source position distortions and amplitude variations. This will of course have an impact on the high accuracy requirements of the Global Geodetic Observing System (GGOS), which has the objectives of 0.1 mm stability and 1 mm accuracy for the ITRF. The distribution of dark matter complicates the problem.
Relevant answer
Answer
Dear Ludwig,
to be honest, your work is beyond my scientific horizon, but thank you for sharing. I'm going to try to understand.
Regards
  • asked a question related to Astrophysics
Question
13 answers
It is said that tensor perturbations of the FRW metric satisfy a wave equation, and the solutions of that wave equation are called gravitational waves. My confusion is why the word "gravity" comes into the picture. Scalar perturbations also satisfy a wave equation; why are scalar perturbations not called gravity waves?
Relevant answer
Answer
If a spherically symmetric, pulsating object could generate gravitational radiation, this would be scalar radiation. However, a mass cannot generate such radiation because of the shell theorem/Birkhoff's theorem, which states that outside a spherically symmetric object, the object's field will be identical to the field of a similar point mass located at the center.
Next, dipole radiation is produced by charge separation: for instance, an antenna can radiate electromagnetic waves if positive and negative electric charges in it are separated by a voltage. This cannot happen for gravity though, because there is no negative gravitational charge (i.e., no negative mass).
So the "simplest" gravitational mode that there is is quadrupole radiation. This radiation leaves its imprint on the cosmic microwave background in the form of tensor perturbations.
Scalar perturbations, in contrast, are not due to gravitational waves, they arise as a result of the varying distribution of mass density across the Universe.
So to sum up, the mechanisms behind the two perturbations are very different: Scalar perturbations are caused by the presence of matter with variable mass density and hence, variable gravity (but no waves/radiation!) whereas tensor perturbations are caused by freely traveling gravitational radiation (i.e., waves).
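For reference, the leading-order emission is governed by Einstein's quadrupole formula for the radiated power,

\[ P = \frac{G}{5 c^{5}} \left\langle \dddot{Q}_{ij}\, \dddot{Q}^{ij} \right\rangle , \]

where Q_{ij} is the traceless mass quadrupole moment; the absence of monopole and dipole terms in this expansion reflects exactly the Birkhoff-theorem and no-negative-mass arguments given above.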
  • asked a question related to Astrophysics
Question
6 answers
Parks et al. [2012] found that the entropy of electrons increases across the Earth's bow shock. Based on the Vlasov equation or the entropy conservation equation, the entropy of electrons should be almost unchanged if their distribution is symmetrical (e.g. Maxwellian, flat-top, etc.). What are your views on this?
Relevant answer
Answer
The answer is in the question. Firstly, most of the distributions in the magnetosphere are not Maxwellian: they are mostly loss-cone, anti-loss-cone, power-law distributions and so on. Even if a distribution is Maxwellian, it will be modified by the various particle acceleration mechanisms that draw energy from the solar wind. It is not difficult to see that the bow shock region is dynamically unbalanced, so the magnetic field pressure and the particle pressure (and hence the plasma beta) will be highly fluctuating. The entropy should therefore change, with the energy coming mainly from the solar wind and perhaps a little from the Earth's magnetic field.
  • asked a question related to Astrophysics
Question
2 answers
We get this conclusion theoretically, but how?
Relevant answer
Answer
This is not a difficult task: you simply have to take into account that some electrons also come from the ionization of helium (i.e. 2 electrons per He atom), which is the second most abundant element after hydrogen. If you assume, for instance, a fully ionized plasma with 90% H and 10% He (other elements being negligible), so that N(He)/N(H) ~ 0.1, then the ratio N(H)/N(e) is given by
N(H) / N(e) = N(H) / [N(H) + 2 N(He)] ~ 1/(1 + 0.2) ~ 0.83
and that's all.
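A one-line generalisation of this arithmetic for an arbitrary helium fraction (assuming full ionization and negligible metals, as above) might look like:

```python
# Ratio N(H)/N(e) for a fully ionized H/He plasma (metals neglected).
def h_to_electron_ratio(y_he=0.1):
    """y_he = N(He)/N(H); each He contributes 2 electrons, each H contributes 1."""
    return 1.0 / (1.0 + 2.0 * y_he)

print(h_to_electron_ratio(0.1))   # ~0.83, as in the example above
```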
  • asked a question related to Astrophysics
Question
4 answers
What happens to a simple hydrogen atom, i.e. what happens to the energy binding between the electron and the proton, from the point of view of QED, when the hydrogen atom is close to a black hole?
Relevant answer
Answer
V. Fulcoli,
Hi, I think so, although the dynamic link above does not seem to get to the correct entry - try http://en.wikipedia.org/wiki/Black_hole_firewall if that's not the same page you were linking to (the closing parenthesis is not incorporated in to the link).
Also see http://arxiv.org/abs/1207.3123 (which is I think the most recent research paper on the subject from the original proponents).
Again, much of the original discussion has been staged within the context of the entanglement of virtual particles in Hawking radiation and the proposed Information Paradox. IMO, however, these issues are trivial considerations when the principal issue is the nature of black hole formation, the accretion of external material and, most importantly, the structure of the black hole interior.
Certainly your question goes to the nature of black hole accretion. I can only guess that the binding energies between particles are released - incorporated into the black hole's mass-energy - while residual particles are expelled from the corona/event horizon. In this case, the EM charges of residual particles would be retained within the corona, likely contributing to fields that direct particles to the black hole polar jets. This is, of course, speculation on my part...
  • asked a question related to Astrophysics
Question
14 answers
Light experiences a redshift (blueshift) when it passes through a strong static gravitational field, as demonstrated by Einstein. Given the gravity-electrostatics analogy, why does light not experience the same effect when it passes through a static electric field?
Relevant answer
Answer
If there is time dilation between locations that are at different electric potentials then there seem to be two possibilities: 1) Time flows slower at more positive potentials. 2) Time flows slower at more negative potentials. Since these possibilities cannot coexist, the positive-negative symmetry in electromagnetism should be broken if there is red shift associated with static electric fields (in addition to any gravitational red shifts).
  • asked a question related to Astrophysics
Question
36 answers
"… The background radiation shows the small density variations in the early universe that would eventually cause matter to clump in some places and form voids in others. We can see the end product of this clumping in the recent universe by observing the spread of galaxy clusters across space.
"The best measurements of the cosmic background radiation came from the European Space Agency’s orbiting Planck telescope in March 2013. Galaxy-cluster measurements, on the other hand, come from various methods that include mapping the spread of mass across the universe by looking for the gravitational lensing, or warping of light, it causes. The two measurements, however, are inconsistent with one another. "We compare the universe at an early time to a later time, and we have a model that extrapolates between the two," says Richard Battye of the University of Manchester, UK, co-author of the new study1 published on 7 February in Physical Review Letters (PRL). "If you stick to the model that fits the CMB data, then number of clusters you find is a factor of two lower than you expect.""
Recent reports of X-ray signals that may signify the decay of sterile neutrinos have raised hopes for a cosmological solution, as some mix of Cold Dark Matter (CDM), Warm Dark Matter (WDM) and/or Hot Dark Matter (HDM) could fit CMB projections of galaxy cluster formations to observations since WDM and HDM would prevent structure formation at increasingly larger scales. See http://arxiv.org/abs/1308.3255 http://arxiv.org/abs/1402.2301 http://arxiv.org/abs/1402.4119 http://dx.doi.org/10.1103/PhysRevLett.112.051303 and http://dx.doi.org/10.1103/PhysRevLett.112.051302.
However, if the composition of universal mass-energy included HDM or WDM and less total CDM than now thought, how would that affect the enormous gravitational effects routinely attributed to CDM in the observed universe?
Adding HDM and/or WDM may help with LCDM problems such as 'the small scale structure problem', 'the missing satellite problem', the 'cuspy halo problem' and others, but it must do so while maintaining alignment with other observations. What other L-CDM results would be affected by the inclusion of HDM and/or WDM?
Relevant answer
Answer
I don't believe the claimed gamma ray excess from the galactic centre will turn out to be dark matter. First of all, the diffuse gamma ray background from these regions was assumed to have a certain form, which might not be right. Secondly, neutron stars (especially millisecond pulsars) generate gamma rays and it's not obvious to me that fermi would resolve these as extended sources. Thirdly, such a particle at that cross section would probably show up when looking at the supposedly dark matter dominated satellite galaxies of the Milky Way. In fact, a paper titled:
Stringent constraints on the DM annihilation cross section from subhalo searches with Fermi
suggests that such a mass and annihilation cross section via those channels is unlikely to be consistent with dark matter. More worrying perhaps is that a 35 GeV particle has a mass that allows for rather easy investigation, so the LHC would likely have produced events missing 35 GeV of energy (and lots of momentum).
From a historical perspective, how many times have people claimed to have found the dark matter?
  • asked a question related to Astrophysics
Question
68 answers
What drives plate tectonics on Earth? Earth global forces (true polar wander, ridge push, slab pull) or forces connected to Earth axial rotation (tidal friction, eötvös effect), are usually invoked to describe the driving mechanisms of plate tectonics. Some of these forces provide the required energy to move plates (4x10^18 J/yr), but always with strong assumptions. Can the dark energy be considered/added with the previous ones? Is the dark energy sufficiently strong to affect the Earth dynamics? What are the best estimates of such energy?
Relevant answer
Answer
Very unlikely. Dark energy does not interact with matter directly. Its gravitational interaction with Earth's matter is many orders of magnitude smaller than any other physical effect in such a compact body as the Earth (a rough order-of-magnitude estimate is sketched below). Moreover, dark energy is distributed isotropically in space, while plate tectonics shows apparent anisotropy.
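To put numbers on this, here is a rough back-of-the-envelope sketch comparing the total dark-energy content within Earth's volume with the ~4x10^18 J/yr quoted in the question; the dark-energy density is taken as the standard ~0.7 of the critical density, and the comparison is purely illustrative:

```python
# Order-of-magnitude comparison: dark energy contained in Earth's volume
# versus the ~4e18 J/yr quoted as needed to drive plate tectonics.
import math

rho_crit = 8.5e-27          # critical density [kg/m^3], approximate
omega_lambda = 0.7          # dark-energy fraction of the critical density
c = 3.0e8                   # speed of light [m/s]
R_earth = 6.371e6           # Earth radius [m]

u_de = omega_lambda * rho_crit * c**2          # dark-energy density [J/m^3]
V_earth = 4.0 / 3.0 * math.pi * R_earth**3     # Earth's volume [m^3]

E_de = u_de * V_earth
print(f"Dark-energy density: {u_de:.1e} J/m^3")
print(f"Dark energy within Earth's volume: {E_de:.1e} J")
print(f"Plate-tectonic energy budget: ~4e18 J/yr  ->  ratio {E_de / 4e18:.1e}")
```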
  • asked a question related to Astrophysics
Question
43 answers
Cosmological simulations predict a larger number of dwarf galaxies than are observed. This is known as the missing satellite problem or dwarf galaxy problem. The interesting thing is that these satellite galaxies of the Milky Way lie nearly on a single plane.
Relevant answer
Answer
The simplest explanation for their orbits being aligned is that they were captured in a single interaction - that they all arrived at the Milky way from the same direction at the same time. See http://www.scilogs.com/the-dark-matter-crisis/2013/01/03/andromedas-satellites-behave-as-expected-if-they-are-tidal-dwarf-galaxies/
It has also been proposed, based on dark matter simulations, that the aligned dwarf galaxy streams arrived on a cosmic web filament - a dark matter 'superhighway'...
The 'missing galaxy problem' of the LCDM cosmological model is a separate issue - it has been proposed that warm dark matter could resolve it by suppressing the formation of so many dwarf galaxies (since the cold dark matter component would then have to be reduced). This would seem to have other serious implications for the LCDM model, however... See https://www.researchgate.net/post/Can_the_Lambda-CDM_cosmological_model_survive_the_discrepancy_between_galaxy_cluster_observations_and_CMB_projections for additional references.
  • asked a question related to Astrophysics
Question
1 answer
When considering bending modes for linear polyatomic molecules containing hyperfine structure, how does increasing the vibrational state affect the various molecular parameters involved in calculating the hyperfine energy levels?
Relevant answer
Answer
An old classic attached.
Also take a look at "Microwave molecular spectra" by Gordy and Cook.
  • asked a question related to Astrophysics
Question
41 answers
SN Ia data [Riess et al., Perlmutter et al.] indicate that there is a deviation from a linear Hubble curve for data points at large distances, > 3-4 Gly. Theoretical fits are then made, based on the Friedmann equations, that include a "dark energy" term, Lambda. From this fit it is shown that the rate of expansion of the universe will lead to an accelerating universe. It is often stated that the universe is accelerating at the present time, T_0. But if one looks at the Hubble curve for only the past 2-3 billion years, the curve can be fit nearly perfectly with a straight line. This indicates to me that there is no empirical evidence that the expansion of the universe is accelerating at the present time. In other words, the Lambda-fit curve _suggests_ that the expansion is accelerating at the present time, but this acceleration is presently _not_ observed to any degree of certainty over the past few billion years.
Relevant answer
Answer
[I have attached a generic plot of the expansion of the universe in time. The Omega-m = 0.3, Omega-L = 0.7 curve is the accepted LCDM fit.] Many of the responses to my question have indicated there may be some problems with the LCDM curve fit. I would like to reiterate what my question is addressing; the plot shows clearly what I am trying to point out.
The observed expansion over the past 3-4 billion years can be fit nearly perfectly with a straight line. The "acceleration" shown by the LCDM curve fit is merely an extrapolation: we would need to wait another 2-3 billion years into the future before we could observe a non-linear expansion according to the LCDM fit. Thus, we are not observing any acceleration "now", nor will we for a long time to come. The acceleration occurring "now" as predicted by the LCDM curve fit is just that, a prediction; it is not based on empirical evidence of the recent expansion.
Viewing the expansion over the past 2-3 billion years covers 10%-20% of the age of the universe. This time range is not negligible, and therefore the significance of a linear fit over that range cannot simply be ignored. The LCDM fit "ignores" this issue by treating it as an inflection in the overall expansion curve; this is, of course, the well-known coincidence problem. So, is the expansion curve truly linear in the recent history of the universe, or are we simply witnessing a temporary inflection in an accelerating universe? In my mind that is an open question. (The sketch below puts numbers on how close the two descriptions are over the recent past.)
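To illustrate how close the LCDM scale factor is to a straight line over the last few billion years, here is a small sketch that integrates the flat-LCDM Friedmann equation (Omega_m = 0.3, Omega_L = 0.7, H0 = 70 km/s/Mpc are assumed values) and compares a(t) over the most recent 3 Gyr with a linear fit. The numbers are illustrative only, not a re-analysis of the supernova data:

```python
# Flat LCDM expansion history vs. a straight-line fit over the last 3 Gyr.
# Omega_m = 0.3, Omega_L = 0.7, H0 = 70 km/s/Mpc are assumed values.
import numpy as np
from scipy.integrate import solve_ivp

H0 = 70.0 * 1.0e3 / 3.086e22 * 3.156e7 * 1.0e9   # H0 in 1/Gyr (~0.0716)
Om, OL = 0.3, 0.7

def dadt(t, a):
    # Flat-LCDM Friedmann equation: da/dt = H0 * sqrt(Om/a + OL*a^2)
    return H0 * np.sqrt(Om / a + OL * a**2)

# Integrate forward from a small scale factor until a = 1 (today).
hit_one = lambda t, a: a[0] - 1.0
hit_one.terminal = True
sol = solve_ivp(dadt, (0.0, 30.0), [1e-4], events=hit_one,
                dense_output=True, rtol=1e-8, atol=1e-12)
t0 = sol.t_events[0][0]                     # age of the universe in this model [Gyr]

# Sample a(t) over the last 3 Gyr and fit a straight line.
t = np.linspace(t0 - 3.0, t0, 200)
a = sol.sol(t)[0]
coeff = np.polyfit(t, a, 1)
a_lin = np.polyval(coeff, t)
print(f"Age of universe: {t0:.2f} Gyr")
print(f"Max fractional deviation from linear over last 3 Gyr: "
      f"{np.max(np.abs(a - a_lin) / a):.2e}")
```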
  • asked a question related to Astrophysics
Question
1 answer
I want to know about data acquisition, control systems, and the various other systems used in astrophysical observatories.
Relevant answer
Answer
To get a virtual experience, you can visit the CLEA website. CLEA stands for Contemporary Laboratory Experiences in Astronomy. You will be able to download a good number of astronomy software packages that will turn your computer into a virtual astronomy observatory. This would be a good beginning.
  • asked a question related to Astrophysics
Question
8 answers
As far as I know, pulsar-based navigation has been intensively investigated. I wonder whether there are any other natural bodies in the universe that emit stable signals as pulsars do?
Relevant answer
Answer
It depends on what you want a reference for, time, linear momentum or angular momentum.
The Cosmic Microwave Background can be used as a velocity reference. Closer to home, the orbit of the Moon was at one time the most accurate clock available, and it, plus the orbits of the planets were used to construct, "ephemeris time". These orbits, plus the orbits of artificial satellites such as LAGEOS, are still used as references for a non-rotating frame. The best non-rotating frame is, however, derived today by VLBI from the quasars, and that is purely on the basis of their enormous distances.
Of course, the entire Doppler technique of finding exoplanets is based on stars emitting very precise spectral lines, which can be used as velocity references, good to a fraction of a meter per second.
If we could ever get a good signal to noise in gravitational radiation the normal modes of a nearby black hole could also be used as a time reference.
  • asked a question related to Astrophysics
Question
8 answers
Is the question wrong?
Relevant answer
Answer
Not sure the word "huge" is needed but the basic question of interest is whether or not most of the baryons in the Universe are inside or outside of gravitational potentials (e.g. galaxies).
Agreed that understanding how the first generation of stars formed inside galaxy halos remains largely unknown.
  • asked a question related to Astrophysics
Question
17 answers
See ARXIV guidelines on who can introduce submissions.
Relevant answer
Answer
As for protection, I am not a lawyer, and cannot discuss copyright issues. If you are worried about intellectual priority, in my opinion any of these (arxiv, vixra, RG, others) would be adequate. However
- note that intellectual priority is not the same as copyright. Many times someone comes up with an idea, only to have a later paper become the standard reference and the idea (equation, etc.) even get named after the later author. This is very common and, realistically, there is only so much you can do about it. There is, for example, no court you can go to in order to establish your priority (and attempts to go that route tend to end badly).
- Having said that, in my experience, in many cases people are too optimistic about the novelty of their ideas. It is a big world, people in many countries have been doing physics a long time, with lots of published papers, and it is rare for an idea to be truly new. That, in my opinion, doesn't matter much either, as long as the idea is appropriate for the case in hand.
So, I would recommend you worry about getting your research and your ideas in good order and publishing them as best you can, and making the best case for them you can, in print, in meetings and in places like this one. The rest is, frankly, not really under your control.
It is also my experience, by the way, that at a sufficiently advanced level physicists do not care about credentials. That does not mean they will accept any idea, but it does mean that they will accept good ideas, regardless of whom they come from. I am sure, if your ideas are good, you can find arxiv sponsors.
  • asked a question related to Astrophysics
Question
3 answers
What will be the statistical properties of the error present in the signal?
Relevant answer
Answer
It depends on how "raw" your raw data is. The number of triggers received in a given time period will be Poisson distributed. This is true for most kinds of cuts you have on your data; for example, an energy threshold. If you have downstream non-linear data processing or electronics effects, you can push this off Poisson, but it's pretty robust. Now, the measured energy is a different beast, which will be entirely detector dependent. If you have a measured energy dispersion function (ideally but rarely measured by shooting a known monoenergetic beam into the detector, and measuring the distribution of the reported energy) your on-orbit data will be the true energy spectrum convolved with your energy dispersion function.
See Gregory and Loredo, ApJ 398, 146, 1992, or Feigelson and Babu, "Promise of Bayesian Inference for Astrophysics" (a book of articles), or WF Tompkins, "Applications of LIkelihood Analysis in Gamma-Ray Astrophysics," a PhD thesis from Stanford -- pretty sure it's on arXiv.
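As a minimal illustration of the two points above (Poisson-distributed counts, and measured energies being the true spectrum convolved with an energy dispersion), here is a toy forward model; the power-law spectrum, Gaussian energy resolution, and expected counts are all made-up illustrative choices:

```python
# Toy forward model for gamma-ray counts: draw a Poisson number of events
# from a power-law spectrum, then smear each true energy with a Gaussian
# energy-dispersion function.  All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

# "True" spectrum: dN/dE ~ E^-gamma between E_min and E_max (arbitrary units).
gamma_idx = 2.0
E_min, E_max = 1.0, 100.0
expected_counts = 500.0          # exposure * integrated flux (assumed)

# Number of detected events in the observation window is Poisson distributed.
n_events = rng.poisson(expected_counts)

# Sample true energies from the power law via inverse-transform sampling.
u = rng.uniform(size=n_events)
if gamma_idx == 1.0:
    E_true = E_min * (E_max / E_min) ** u
else:
    p = 1.0 - gamma_idx
    E_true = (E_min**p + u * (E_max**p - E_min**p)) ** (1.0 / p)

# Apply a Gaussian energy dispersion with an assumed 10% resolution.
E_meas = rng.normal(E_true, 0.1 * E_true)

print(f"Detected events: {n_events}")
print(f"Mean true energy: {E_true.mean():.2f}, mean measured energy: {E_meas.mean():.2f}")
```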
  • asked a question related to Astrophysics
Question
2 answers
It is of the order of 1 m^−3 (intergalactic medium) to 10^30 m^-3 (stellar core). My calculations show that it should be of the order of 10^23 m^-3. Is my order of magnitude alright?
Relevant answer
Answer
I recommend looking through the paper "A two-temperature accretion disk model for Cygnus X-1 - Structure and spectrum" by Shapiro, S. L., Lightman, A. P., and Eardley, D. M. (http://adsabs.harvard.edu/abs/1976ApJ...204..187S).