Article

Estimation of cosmological parameters using adaptive importance sampling

Physical Review D: Particles and Fields 03/2009; DOI: 10.1103/PhysRevD.80.023507
Source: arXiv

ABSTRACT: We present a Bayesian sampling algorithm called adaptive importance sampling, or Population Monte Carlo (PMC), whose computational workload is easily parallelizable and which thus has the potential to reduce considerably the wall-clock time required for sampling, along with providing other benefits. To assess the performance of the approach for cosmological problems, we use simulated and actual data consisting of CMB anisotropies, type Ia supernovae, and weak cosmological lensing, and compare the results to those obtained with state-of-the-art Markov Chain Monte Carlo (MCMC). For both types of data set we find comparable parameter estimates from PMC and MCMC, with the advantage of a significantly lower computational time for PMC. In the case of WMAP5 data, for example, the wall-clock time drops from several days with MCMC to a few hours with PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using it, are analysed and discussed. Comment: 17 pages, 11 figures
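For readers unfamiliar with the method, the following is a minimal, self-contained Python sketch of the PMC idea: a proposal density is repeatedly re-fitted to the importance-weighted population it generates. It is not the authors' implementation (the paper adapts a full mixture proposal); `log_posterior`, the single-Gaussian update, and all parameter names are illustrative assumptions. The likelihood evaluations inside the loop are mutually independent, which is what makes the workload easy to parallelize.

    import numpy as np

    def pmc_sample(log_posterior, mu0, cov0, n_samples=1000, n_iter=5, rng=None):
        # Minimal Population Monte Carlo sketch: adapt a single Gaussian
        # proposal by moment-matching the importance-weighted population.
        # (The paper adapts a Gaussian mixture; this is a simplification.)
        rng = np.random.default_rng() if rng is None else rng
        mu, cov = np.asarray(mu0, float), np.asarray(cov0, float)
        d = mu.size
        for _ in range(n_iter):
            # Draw a population from the current proposal q(.|mu, cov).
            x = rng.multivariate_normal(mu, cov, size=n_samples)
            diff = x - mu
            prec = np.linalg.inv(cov)
            log_q = (-0.5 * np.einsum('ij,jk,ik->i', diff, prec, diff)
                     - 0.5 * (d * np.log(2 * np.pi) + np.linalg.slogdet(cov)[1]))
            # Importance weights; these posterior calls are independent
            # and can be farmed out to many processors.
            log_w = np.array([log_posterior(xi) for xi in x]) - log_q
            w = np.exp(log_w - log_w.max())
            w /= w.sum()
            # Re-fit the proposal to the weighted population.
            mu = w @ x
            diff = x - mu
            cov = diff.T @ (w[:, None] * diff) + 1e-9 * np.eye(d)
        return x, w

Posterior means then follow as `w @ x`, and the effective sample size 1/sum(w**2) is a simple diagnostic of how well the proposal has adapted.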

  • ABSTRACT: We perform the first detailed MCMC likelihood study of the cosmological constraints expected from some of the largest ongoing and proposed cluster surveys in different wave-bands, and compare the estimates to the prevalent Fisher-matrix forecasts. Mock catalogs of cluster counts expected from the surveys eROSITA, WFXT, RCS2, DES and Planck, along with a mock dataset of follow-up mass calibrations, are analyzed for this purpose. A fair agreement between MCMC and Fisher results is found only for minimal models. In many cases, however, the marginalized constraints obtained from the Fisher and MCMC methods differ by 30-100%. The discrepancy can be alarmingly large for a time-dependent dark energy equation of state w(a): Fisher methods are seen to underestimate the constraints by as much as a factor of 4-5. Typically, Fisher estimates become less and less appropriate as one moves away from LCDM, from constant-w to varying-w dark energy cosmologies. Fisher analysis also predicts incorrect parameter degeneracies. On the mass-calibration side, a large unknown scatter about the mean mass-observable relation, and its redshift dependence, shows strong degeneracies with the cosmological parameters sigma_8 and w(a) and can degrade the cosmological constraints considerably. We find that adding mass-calibrated cluster datasets can improve the dark energy and sigma_8 constraints by factors of 2-3 over what can be obtained from CMB+SNe+BAO alone. Since the details of future cluster surveys are still being planned, we emphasize that optimal survey design should rely on MCMC analysis rather than Fisher forecasting. [abridged] (A schematic Fisher-forecast sketch for Poisson counts follows this list.)
    Journal of Cosmology and Astroparticle Physics 10/2012; 2013(02).
  • ABSTRACT: In this mini-review we first discuss why we should investigate cosmological models beyond LCDM. We then show how dark energy or modified gravity models can be described in a fluid language with the help of one background and two perturbation quantities. We review a range of dark energy models and study how they fit into this phenomenological framework, including generalizations such as phantom crossing, sound speeds different from c, and non-zero anisotropic stress, and how these effective quantities are linked to the underlying physical models. We also discuss the limits of what can be measured with cosmological data, and some challenges for the framework. (A schematic set of effective-fluid relations follows this list.)
    Comptes Rendus Physique 04/2012; 13(6-7).
  • ABSTRACT: In this paper we use adaptive importance sampling to simulate the multivariate Generalized Hyperbolic Distribution and to compute tail probabilities. Adaptive importance sampling is an extension of classical importance sampling that sequentially updates the instrumental density at each iteration. Under the sole condition that the density is known in closed form, the method can be used to sample multivariate distributions and to estimate quantities of interest. Simulation experiments and a real-data application show that, for the problem at hand, the method performs excellently, making this technique an appealing alternative to more traditional sampling algorithms, whose implementation for simulating the Generalized Hyperbolic Distribution is not straightforward. (A minimal tail-probability sketch follows this list.)
    01/2010;
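To make the Fisher/MCMC comparison in the first item concrete: for Poisson-distributed counts N_b(p) the Fisher matrix is F_ij = sum_b (dN_b/dp_i)(dN_b/dp_j)/N_b, and the forecast marginalized errors are sqrt((F^-1)_ii). The sketch below is a textbook illustration under that assumption, not the survey pipeline used in that paper; `model_counts` is a hypothetical stand-in for an expected-counts model.

    import numpy as np

    def fisher_counts(model_counts, params, step=1e-4):
        # Fisher forecast for Poisson-distributed counts N_b(p):
        #   F_ij = sum_b (dN_b/dp_i)(dN_b/dp_j) / N_b
        # model_counts(p) returns expected counts per bin (hypothetical model).
        p = np.asarray(params, float)
        n = model_counts(p)
        grads = []
        for i in range(p.size):
            # Central finite differences for dN_b/dp_i.
            dp = np.zeros_like(p)
            dp[i] = step * max(abs(p[i]), 1.0)
            grads.append((model_counts(p + dp) - model_counts(p - dp)) / (2 * dp[i]))
        G = np.array(grads)                        # shape (n_params, n_bins)
        F = G @ (G / n).T                          # F_ij = sum_b G_ib G_jb / N_b
        return np.sqrt(np.diag(np.linalg.inv(F)))  # marginalized 1-sigma errors

An MCMC analysis instead samples the exact likelihood, which is why the two can disagree once the posterior is far from Gaussian.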
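The "one background and two perturbation quantities" of the second item are commonly taken to be the equation of state, an effective sound speed, and an anisotropic stress. The LaTeX relations below are a schematic of that standard parametrization, assumed here rather than quoted from the article:

    % Schematic effective-fluid closure (standard parametrization, assumed):
    w(a) = \frac{\bar p_{\rm DE}(a)}{\bar\rho_{\rm DE}(a)},   % background
    \qquad
    \delta p_{\rm DE} = c_s^2\, \delta\rho_{\rm DE},          % sound speed
    \qquad
    \sigma_{\rm DE} \neq 0 \;\Rightarrow\; \Phi \neq \Psi.    % anisotropic stress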
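For the adaptive importance sampling application of the third item, here is a minimal univariate sketch, assuming only a closed-form, vectorized log-density (such as a generalized hyperbolic one). The Student-t instrumental density and the cross-entropy-style update are illustrative choices, not necessarily the authors' scheme.

    import numpy as np
    from scipy import stats

    def tail_probability(log_density, t, nu=4.0, mu0=0.0, scale0=1.0,
                         n=20000, n_iter=3, rng=None):
        # Self-normalized importance sampling estimate of P(X > t) for a
        # target density known in closed form (log_density must accept an
        # array). The Student-t instrumental density is re-fitted at each
        # iteration to the weighted sample restricted to the tail event
        # (a cross-entropy-style update; an illustrative choice).
        rng = np.random.default_rng() if rng is None else rng
        mu, scale = float(mu0), float(scale0)
        for _ in range(n_iter):
            x = stats.t.rvs(nu, loc=mu, scale=scale, size=n, random_state=rng)
            log_w = log_density(x) - stats.t.logpdf(x, nu, loc=mu, scale=scale)
            w = np.exp(log_w - log_w.max())
            w /= w.sum()
            wh = w * (x > t)                  # weights restricted to the tail
            if wh.sum() > 0:                  # adapt toward the tail region
                wh = wh / wh.sum()
                mu = float(wh @ x)
                scale = float(np.sqrt(wh @ (x - mu) ** 2)) + 1e-12
        return float(w @ (x > t))             # tail-probability estimate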
