NONPARAMETRIC DETERMINATION OF REDSHIFT EVOLUTION INDEX OF DARK ENERGY

World Scientific
Modern Physics Letters A

Abstract

We propose a nonparametric method to determine the sign of γ, the redshift evolution index of dark energy. This is important for distinguishing between positive energy models, a cosmological constant, and what are generally called ghost models. Our method is based on geometrical properties and, as far as the sign of γ is concerned, is more tolerant to uncertainties in other cosmological parameters than fitting methods. The same parametrization can also be used to determine γ and its redshift dependence by fitting. We apply this method to SNLS supernovae and to the gold sample of re-analyzed supernovae data from Riess et al. Both datasets show a strong indication of a negative γ. If this result is confirmed by more extended and precise data, many dark energy models, including a simple cosmological constant, standard quintessence models without interaction between the quintessence scalar field(s) and matter, and scaling models, are ruled out. We have also applied this method to Gurzadyan–Xue models with varying fundamental constants to demonstrate the possibility of using it to test other cosmologies.
Modern Physics Letters A, Vol. 22, No. 21 (2007) 1569–1580
© World Scientific Publishing Company
NONPARAMETRIC DETERMINATION OF REDSHIFT
EVOLUTION INDEX OF DARK ENERGY
HOURI ZIAEEPOUR
Mullard Space Science Laboratory, University College London, Holmbury St. Mary,
Dorking, Surrey, RH5 6NT, UK
hz@mssl.ucl.ac.uk
Received 13 April 2007
Revised 11 May 2007
Recent observations of Supernovae (SNe), the Cosmic Microwave Background (CMB), and Large Scale Structure (LSS) indicate that the dominant content of the Universe is a mysterious energy with an equation of state very close to that of Einstein's cosmological constant. The equation of state is defined by w, the ratio of pressure P to density ρ, w = P/ρ. For a cosmological constant, w = −1. The observed mean value of w for dark energy is very close to −1. Some of the most recent estimations of w are: from the combination of 3-year WMAP and the SuperNova Legacy Survey (SNLS), w = −0.97^{+0.07}_{−0.09} (Ref. 2); from the combination of 3-year WMAP, large scale structure, and supernova data, w = −1.06^{+0.016}_{−0.009} (Ref. 2); from the combination of the CMAGIC supernovae analysis and the baryon acoustic peak in the SDSS galaxy clustering statistics at z = 0.35 (Ref. 3), w = −1.21^{+0.15}_{−0.12}; and finally, from the baryon acoustic peak alone, w = −0.8 ± 0.18. It is evident that once one- or two-sigma uncertainties around the measured mean values are included, the range of possible values for w runs across the critical value of −1. Moreover, in all these measurements the value of w depends on other cosmological parameters
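The distinction at stake can be made concrete with a short numerical sketch. Assuming the common constant-w parametrization in which the dark-energy density scales as ρ_de ∝ (1+z)^γ with γ = 3(1+w) (an assumption made here for illustration, not necessarily the paper's exact definition of γ), the sign of γ follows directly from the quoted central values of w:

```python
# A minimal sketch, not the paper's fitting code: under the common
# parametrization rho_de(z) ∝ (1 + z)^gamma with constant w, the
# evolution index is gamma = 3 * (1 + w), so its sign separates
# quintessence-like models (w > -1), a cosmological constant (w = -1),
# and ghost/phantom-like models (w < -1).

def evolution_index(w):
    """Redshift evolution index gamma for a constant equation of state w."""
    return 3.0 * (1.0 + w)

# Central values quoted in the text (error bars omitted):
measurements = {
    "WMAP3 + SNLS": -0.97,
    "WMAP3 + LSS + SN": -1.06,
    "CMAGIC + SDSS BAO peak": -1.21,
    "BAO peak alone": -0.80,
}

for label, w in measurements.items():
    g = evolution_index(w)
    side = "gamma < 0 (ghost-like)" if g < 0 else "gamma > 0 (quintessence-like)"
    print(f"{label:24s} w = {w:+.2f} -> gamma = {g:+.2f}  {side}")
```

Since every quoted error bar straddles w = −1, the same table also shows why fitting w alone cannot fix the sign of γ, which is the motivation for the nonparametric approach.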
Preprint
After taking into account the mass loss of galaxies and stars at the cosmic scale, the speed and acceleration of the accelerating expansion of the Universe are derived from general relativity and Newtonian mechanics, respectively. The physical significance of the Hubble constant is argued to be the average of the masses ejected per second per unit mass in the observed range, and it is claimed that the accelerated expansion of the Universe does not require dark energy.
Article
In this paper, we address some of the issues raised in the literature about the conflict between a large vacuum energy density, a priori predicted by quantum field theory, and the observed dark energy which must be the energy of vacuum or include it. We present a number of arguments against this claim and in favor of a null vacuum energy. They are based on the following arguments: a new definition for the vacuum in quantum field theory as a frame-independent coherent state; results from a detailed study of condensation of scalar fields in Friedmann–Lemaître–Robertson–Walker (FLRW) background performed in a previous work; and our present knowledge about the Standard Model of particle physics. One of the predictions of these arguments is the confinement of the nonzero expectation value of the Higgs field to scales roughly comparable with the width of electroweak gauge bosons or shorter. If the observation of the Higgs by the LHC is confirmed, accumulation of relevant events and their energy dependence in the near future should allow us to measure the spatial extent of the Higgs condensate.
Article
Recent data and new data analysis methods show that most probably the parameter w in the equation of state of the dark energy is smaller than -1 at low redshifts. We briefly review some of the models with such a property and without violating null energy condition. We investigate the difference between the observables and predictions of these models, and how they can be explored to single out or constrain the origin of dark energy and its properties.
Article
We address the issue of constraining the class of f(R) gravity able to reproduce the observed cosmological acceleration, by using the so-called cosmography of the Universe. We consider a model-independent procedure to build up an f(z) series in terms of the measurable cosmographic coefficients; we therefore derive cosmological late-time bounds on f(z) and its derivatives up to the fourth order, by fitting the luminosity distance directly in terms of such coefficients. We perform a Monte Carlo analysis, by using three different statistical sets of cosmographic coefficients, in which the only assumptions are the validity of the cosmological principle and that the class of f(R) gravity reduces to ΛCDM when z≪1. We use the updated Union 2.1 compilation for supernovae Ia, the constraint on the H0 value imposed by the measurements of the Hubble Space Telescope, and the Hubble data set, with measures of H at different z. We find a statistically good agreement of the f(R) class under examination with the cosmological data; we thus propose a candidate for f(R) gravity, which is able to pass our cosmological test, reproducing the late-time acceleration in agreement with observations.
Article
We consider cosmological implications of the formula for the dark energy density derived by Gurzadyan and Xue (Refs. 1 and 2), which predicts a value fitting the observational one. Cosmological models with time-varying physical constants, namely the speed of light and the gravitational constant and/or their combinations, are considered. In one of the models, for example, the vacuum energy density induces an effective negative curvature, while another one has an unusual asymptotic behavior. This analysis also explicitly raises the issue of the meaning and content of physical units and constants in the cosmological context.
Article
We report measurements of ΩM, ΩΛ, and w from 11 supernovae (SNe) at z = 0.36-0.86 with high-quality light curves measured using WFPC2 on the Hubble Space Telescope (HST). This is an independent set of high-redshift SNe that confirms previous SN evidence for an accelerating universe. The high-quality light curves available from photometry on WFPC2 make it possible for these 11 SNe alone to provide measurements of the cosmological parameters comparable in statistical weight to the previous results. Combined with earlier Supernova Cosmology Project data, the new SNe yield a measurement of the mass density ΩM = 0.25 (statistical) ± 0.04 (identified systematics), or equivalently, a cosmological constant of ΩΛ = 0.75 (statistical) ± 0.04 (identified systematics), under the assumptions of a flat universe and that the dark energy equation-of-state parameter has a constant value w = -1. When the SN results are combined with independent flat-universe measurements of ΩM from cosmic microwave background and galaxy redshift distortion data, they provide a measurement of w = -1.05 (statistical) ± 0.09 (identified systematic), if w is assumed to be constant in time. In addition to high-precision light-curve measurements, the new data offer greatly improved color measurements of the high-redshift SNe and hence improved host galaxy extinction estimates. These extinction measurements show no anomalous negative E(B-V) at high redshift. The precision of the measurements is such that it is possible to perform a host galaxy extinction correction directly for individual SNe without any assumptions or priors on the parent E(B-V) distribution. Our cosmological fits using full extinction corrections confirm that dark energy is required with P(ΩΛ > 0) > 0.99, a result consistent with previous and current SN analyses that rely on the identification of a low-extinction subset or prior assumptions concerning the intrinsic extinction distribution.
Article
We address the issue of whether extra dimensions could have an infinite volume and yet reproduce the effects of observable four-dimensional gravity on a brane. There is no normalizable zero-mode graviton in this case, nevertheless correct Newton's law can be obtained by exchanging bulk gravitons. This can be interpreted as an exchange of a single metastable 4D graviton. Such theories have remarkable phenomenological signatures since the evolution of the Universe becomes high-dimensional at very large scales. Furthermore, the bulk supersymmetry in the infinite volume limit might be preserved while being completely broken on a brane. This gives rise to a possibility of controlling the value of the bulk cosmological constant. Unfortunately, these theories have difficulties in reproducing certain predictions of Einstein's theory related to relativistic sources. This is due to the van Dam–Veltman–Zakharov discontinuity in the propagator of a massive graviton. This suggests that all theories in which contributions to effective 4D gravity come predominantly from the bulk graviton exchange should encounter serious phenomenological difficulties.
Article
We analyze cosmological equations in the brane world scenario with one extra space-like dimension. We demonstrate that the cosmological equations can be reduced to the usual 4D Friedmann type if the bulk energy-momentum tensor is different from zero. We then generalize these equations to the case of a brane of finite thickness. We also demonstrate that when the bulk energy-momentum tensor is different from zero, the extra space-like dimension can be compactified with a single brane and show that the stability of the radius of compactification implies standard cosmology and vice versa. For a brane of finite thickness, we provide a solution such that the 4D Planck scale is related to the fundamental scale by the thickness of the brane. In this case, compactification of the extra dimension is unnecessary.
Article
We propose a non-parametric method of smoothing supernova data over redshift using a Gaussian kernel in order to reconstruct important cosmological quantities including H(z) and w(z) in a model-independent manner. This method is shown to be successful in discriminating between different models of dark energy when the quality of data is commensurate with that expected from the future Supernova Acceleration Probe (SNAP). We find that the Hubble parameter is especially well determined and useful for this purpose. The look-back time of the Universe may also be determined to a very high degree of accuracy (≲0.2 per cent) using this method. By refining the method, it is also possible to obtain reasonable bounds on the equation of state of dark energy. We explore a new diagnostic of dark energy — the ‘w-probe’— which can be calculated from the first derivative of the data. We find that this diagnostic is reconstructed extremely accurately for different reconstruction methods even if Ω0m is marginalized over. The w-probe can be used to successfully distinguish between Λ cold dark matter and other models of dark energy to a high degree of accuracy.
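The kernel-smoothing idea described in this abstract can be sketched in a few lines. The following is an illustrative reconstruction under assumed choices (smoothing in ln(1+z), a fixed kernel width, mock data), not the authors' pipeline:

```python
import numpy as np

def kernel_smooth(z_data, mu_data, z_eval, width=0.05):
    """Gaussian-kernel smoothing of distance moduli over redshift.

    Each output point is a Gaussian-weighted mean of the data in
    x = ln(1 + z); the kernel width is a free choice here, not a
    value taken from the cited paper.
    """
    x_data = np.log1p(np.asarray(z_data, dtype=float))
    mu = np.asarray(mu_data, dtype=float)
    x_eval = np.log1p(np.atleast_1d(np.asarray(z_eval, dtype=float)))
    out = np.empty(x_eval.size)
    for i, x0 in enumerate(x_eval):
        w = np.exp(-0.5 * ((x_data - x0) / width) ** 2)
        out[i] = np.dot(w, mu) / w.sum()
    return out

# Toy demonstration: a smooth mu(z) plus noise; smoothing recovers the trend.
rng = np.random.default_rng(0)
z = np.linspace(0.05, 1.0, 300)
mu_true = 5.0 * np.log10((1.0 + z) * z) + 43.0   # toy distance modulus, not a fit
mu_obs = mu_true + rng.normal(0.0, 0.15, z.size)
z_eval = np.linspace(0.3, 0.9, 40)               # interior points, away from edges
mu_smooth = kernel_smooth(z, mu_obs, z_eval)
```

The smoothed curve can then be differentiated numerically to estimate H(z), which is the step the abstract emphasizes.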
Article
We show that an interaction between dark matter and dark energy generically results in an effective dark-energy equation of state of w<-1. This arises because the interaction alters the redshift dependence of the matter density. An observer who fits the data treating the dark matter as noninteracting will infer an effective dark-energy fluid with w<-1. We argue that the model is consistent with all current observations, the tightest constraint coming from estimates of the matter density at different redshifts. Comparing the luminosity and angular-diameter distance relations with ΛCDM and phantom models, we find that the three models are degenerate within current uncertainties but likely distinguishable by the next generation of dark-energy experiments.
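The inference described in this abstract can be reproduced numerically. The sketch below is a toy illustration under assumed parameters (matter diluting as (1+z)^(3−ε) due to an interaction, a true cosmological constant), not the model of the cited paper:

```python
import math

# Toy illustration of the effect described above: if interacting dark
# matter dilutes as (1+z)^(3 - eps) instead of (1+z)^3, an observer who
# subtracts a standard (1+z)^3 matter term attributes the difference to
# dark energy and infers an effective equation of state w_eff < -1.
# All parameter values here are assumptions for illustration; the toy
# is only used at low redshift, where its effective density stays positive.

OMEGA_M, OMEGA_L, EPS = 0.3, 0.7, 0.1  # toy densities and interaction strength

def rho_de_eff(z):
    """Effective dark-energy density inferred by a 'standard-matter' observer."""
    true_matter = OMEGA_M * (1.0 + z) ** (3.0 - EPS)
    assumed_matter = OMEGA_M * (1.0 + z) ** 3.0
    return OMEGA_L + (true_matter - assumed_matter)

def w_eff(z, dz=1e-5):
    """Effective w from w = -1 + (1/3) d ln(rho_de_eff) / d ln(1+z)."""
    dlnrho = math.log(rho_de_eff(z + dz)) - math.log(rho_de_eff(z - dz))
    dlnx = math.log(1.0 + z + dz) - math.log(1.0 + z - dz)
    return -1.0 + (dlnrho / dlnx) / 3.0

print(f"w_eff(z=0.5) = {w_eff(0.5):.3f}")  # below -1, though nothing is ghost-like
```

This makes the abstract's point explicit: the inferred w_eff crosses −1 purely because the matter sector was mis-modeled, not because any field violates the null energy condition.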
Article
We present measurements of Ωm and ΩΛ from a blind analysis of 21 high-redshift supernovae using a new technique (CMAGIC) for fitting the multicolor light curves of Type Ia supernovae, first introduced by Wang and coworkers. CMAGIC takes advantage of the remarkably simple behavior of Type Ia supernovae on color-magnitude diagrams and has several advantages over current techniques based on maximum magnitudes. Among these are a reduced sensitivity to host galaxy dust extinction, a shallower luminosity-width relation, and the relative simplicity of the fitting procedure. This allows us to provide a cross-check of previous supernova cosmology results, despite the fact that current data sets were not observed in a manner optimized for CMAGIC. We describe the details of our novel blindness procedure, which is designed to prevent experimenter bias. The data are broadly consistent with the picture of an accelerating universe and agree with a flat universe within 1.7 σ, including systematics. We also compare the CMAGIC results directly with those of a maximum magnitude fit to the same supernovae, finding that CMAGIC favors more acceleration at the 1.6 σ level, including systematics and the correlation between the two measurements. A fit for w assuming a flat universe yields a value that is consistent with a cosmological constant within 1.2 σ.
Article
The two Numerical Recipes books are marvellous. The principal book, The Art of Scientific Computing, contains program listings for almost every conceivable requirement, and it also contains a well written discussion of the algorithms and the numerical methods involved. The Example Book provides a complete driving program, with helpful notes, for nearly all the routines in the principal book. The first edition of Numerical Recipes: The Art of Scientific Computing was published in 1986 in two versions, one with programs in Fortran, the other with programs in Pascal. There were subsequent versions with programs in BASIC and in C. The second, enlarged edition was published in 1992, again in two versions, one with programs in Fortran (NR(F)), the other with programs in C (NR(C)). In 1996 the authors produced Numerical Recipes in Fortran 90: The Art of Parallel Scientific Computing as a supplement, called Volume 2, with the original (Fortran) version referred to as Volume 1. Numerical Recipes in C++ (NR(C++)) is another version of the 1992 edition. The numerical recipes are also available on a CD ROM: if you want to use any of the recipes, I would strongly advise you to buy the CD ROM. The CD ROM contains the programs in all the languages. When the first edition was published I bought it, and have also bought copies of the other editions as they have appeared. Anyone involved in scientific computing ought to have a copy of at least one version of Numerical Recipes, and there also ought to be copies in every library. If you already have NR(F), should you buy the NR(C++) and, if not, which version should you buy? In the preface to Volume 2 of NR(F), the authors say 'C and C++ programmers have not been far from our minds as we have written this volume, and we think that you will find that time spent in absorbing its principal lessons will be amply repaid in the future as C and C++ eventually develop standard parallel extensions'. 
In the preface and introduction to NR(C++), the authors point out some of the problems in the use of C++ in scientific computing. I have not found any mention of parallel computing in NR(C++). Fortran has quite a lot going for it. As someone who has used it in most of its versions from Fortran II, I have seen it develop and leave behind other languages promoted by various enthusiasts: who now uses Algol or Pascal? I think it unlikely that C++ will disappear: it was devised as a systems language, and can also be used for other purposes such as scientific computing. It is possible that Fortran will disappear, but Fortran has the strengths that it can develop, that there are extensive Fortran subroutine libraries, and that it has been developed for parallel computing. To argue with programmers as to which is the best language to use is sterile. If you wish to use C++, then buy NR(C++), but you should also look at volume 2 of NR(F). If you are a Fortran programmer, then make sure you have NR(F), volumes 1 and 2. But whichever language you use, make sure you have one version or the other, and the CD ROM. The Example Book provides listings of complete programs to run nearly all the routines in NR, frequently based on cases where an analytical solution is available. It is helpful when developing a new program incorporating an unfamiliar routine to see that routine actually working, and this is what the programs in the Example Book achieve. I started teaching computational physics before Numerical Recipes was published. If I were starting again, I would make heavy use of both The Art of Scientific Computing and the Example Book. Every computational physics teaching laboratory should have both volumes: the programs in the Example Book are included on the CD ROM, but the extra commentary in the book itself is of considerable value. P Borcherds
Article
Theories with infinite volume extra dimensions open exciting opportunities for particle physics. We argued recently that along with attractive features there are phenomenological difficulties in this class of models. In fact, there is no graviton zero-mode in this case and 4D gravity is obtained by means of continuum bulk modes. These modes have additional degrees of freedom which do not decouple at low energies and lead to inconsistent predictions for light bending and the precession of Mercury's perihelion. In recent papers, [hep-th/0003020] and [hep-th/0003045], the authors made use of brane bending in order to cancel the unwanted physical polarization of gravitons. In this note we point out that this mechanism does not solve the problem since it uses a ghost which cancels the extra degrees of freedom. In order to have a consistent model the ghost should be eliminated. As soon as this is done, 4D gravity becomes unconventional and contradicts General Relativity. New mechanisms are needed to cure these models. We also comment on the possible decoupling of the ghost at large distances due to an apparent flat-5D nature of space-time and on the link between the presence of ghosts and the violation of positive-energy conditions.
Article
We suggest a mechanism by which four-dimensional Newtonian gravity emerges on a 3-brane in 5D Minkowski space with an infinite size extra dimension. The worldvolume theory gives rise to the correct 4D potential at short distances whereas at large distances the potential is that of a 5D theory. We discuss some phenomenological issues in this framework.