Estimation of cosmological parameters using adaptive importance sampling

Physical Review D: Particles and Fields, 03/2009; DOI: 10.1103/PhysRevD.80.023507
Source: arXiv

ABSTRACT We present a Bayesian sampling algorithm called adaptive importance sampling, or Population Monte Carlo (PMC), whose computational workload is easily parallelizable and which therefore has the potential to considerably reduce the wall-clock time required for sampling, among other benefits. To assess the performance of the approach on cosmological problems, we use simulated and actual data consisting of CMB anisotropies, type-Ia supernovae, and weak cosmological lensing, and compare the results to those obtained using state-of-the-art Markov Chain Monte Carlo (MCMC). For both types of data set, we find comparable parameter estimates for PMC and MCMC, with the advantage of a significantly lower computational time for PMC. In the case of WMAP5 data, for example, the wall-clock time drops from several days for MCMC to a few hours using PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using it, are analysed and discussed. Comment: 17 pages, 11 figures
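As a rough illustration of the PMC idea described in the abstract, the following minimal Python sketch adapts a Gaussian proposal by importance-weighted moment matching on a toy two-dimensional Gaussian "posterior". The target, the single-Gaussian proposal family, and all settings are illustrative assumptions, not the paper's implementation (which uses cosmological likelihoods and richer proposals):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Toy "posterior": a standard 2-D Gaussian, standing in for a
    # cosmological likelihood times prior (purely illustrative).
    return -0.5 * np.sum(x * x, axis=-1)

def pmc(n_iter=5, n_samples=2000, dim=2):
    # Start from a deliberately wide Gaussian proposal.
    mean, cov = np.zeros(dim), 4.0 * np.eye(dim)
    for _ in range(n_iter):
        # The draws and target evaluations below are independent of one
        # another, which is what makes the workload easy to parallelize.
        samples = rng.multivariate_normal(mean, cov, size=n_samples)
        diff = samples - mean
        prec = np.linalg.inv(cov)
        log_q = -0.5 * np.einsum("ni,ij,nj->n", diff, prec, diff)
        # Importance weights ~ target / proposal; the constant log
        # normalisations cancel once the weights are normalised.
        log_w = log_target(samples) - log_q
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Adapt the proposal by moment-matching the weighted sample.
        mean = w @ samples
        diff = samples - mean
        cov = diff.T @ (w[:, None] * diff) + 1e-9 * np.eye(dim)
    return samples, w, mean, cov
```

After a few iterations the proposal's mean and covariance should approach those of the target (here zero mean and unit covariance), at which point the importance weights become nearly uniform.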

  • ABSTRACT: We use Bayesian estimation on direct T-Q-U cosmic microwave background (CMB) polarization maps to forecast errors on the tensor-to-scalar power ratio r, and hence on primordial gravitational waves, as a function of sky coverage f_sky. This map-based likelihood filters the information in pixel-pixel space into the optimal combinations needed for r detection on cut skies, providing enhanced information over a first-step linear separation into a combination of E, B, and mixed modes with the latter ignored. With current computational power, and for resolutions typical of r detection, the large matrix inversions required are accurate and fast. Our simulations explore two classes of experiments, with differing bolometric detector numbers, sensitivities, and observing strategies. One is motivated by a long-duration balloon experiment like Spider, with pixel noise fixed for a specified observing period; this analysis also applies to ground-based array experiments. We find that, in the absence of systematic effects and foregrounds, an experiment with Spider-like noise concentrating on f_sky ~ 0.02-0.2 could place a 2σ bound of r < 0.014 (~95% confidence level), which rises to 0.02 with an ℓ-dependent foreground residual left over from an assumed efficient component separation. We contrast this with a Planck-like fixed instrumental noise as f_sky varies, which gives a Galaxy-masked (f_sky = 0.75) 2σ bound of r < 0.015, rising to 0.05 with the foreground residuals. Using as the figure of merit the (marginalized) one-dimensional Shannon entropy of r, taken relative to the first 2003 WMAP CMB-only constraint, gives -2.7 bits from the 2012 WMAP9+ACT+SPT+LSS data, and forecasts of -6 bits from Spider (+ Planck); this compares with up to -11 bits for the CMBPol, COrE, and PIXIE post-Planck satellites and -13 bits for a perfectly noiseless cosmic-variance-limited experiment. We thus confirm the wisdom of the current strategy for r detection: deeply probed patches covering the f_sky minimum-error trough with balloon and ground experiments.
    The Astrophysical Journal 06/2013; 771(1):12.
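The Shannon-entropy figure of merit quoted above can be made concrete for the common case of an approximately Gaussian one-dimensional posterior, where the information gain in bits relative to a baseline reduces to log2 of the ratio of posterior widths. The widths below are made-up illustrative numbers, not values from the paper:

```python
import numpy as np

def gaussian_entropy_bits(sigma):
    # Shannon (differential) entropy of a 1-D Gaussian posterior
    # of width sigma, expressed in bits.
    return 0.5 * np.log2(2.0 * np.pi * np.e * sigma**2)

# Tightening the posterior width by a factor 2**6 gains exactly 6 bits
# relative to the baseline (hypothetical widths, for illustration only).
gain = gaussian_entropy_bits(1.0) - gaussian_entropy_bits(1.0 / 64.0)
```

In this Gaussian approximation, a quoted gain of "-6 bits" thus corresponds to a 64-fold tightening of the constraint on r.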
  • ABSTRACT: Measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurement, photometric-redshift errors, and shear calibration corrections. We first generate mock weak-lensing catalogues that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogues for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey, with a sky coverage of ~1400 deg^2, will constrain the dark-energy equation-of-state parameter to Delta w_0 ~ 0.25 with the lensing MFs alone, but biases induced by the systematics can be comparable to the 1-sigma error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics, but only if well-calibrated measurements of both the redshifts and the shapes of the source galaxies are performed. Finally, we analyse the CFHTLenS data to explore the ability of the MFs to break degeneracies between cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Omega_m0 = 0.256^{+0.054}_{-0.046}.
    The Astrophysical Journal 12/2013; 786(1).
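For readers unfamiliar with Minkowski Functionals, a minimal sketch of the three 2-D MFs (area, boundary length, and Euler characteristic) for a thresholded pixel map, computed directly from the excursion set's pixel complex, might look as follows. This is a toy implementation for intuition, not the calibrated measurement pipeline the abstract describes:

```python
import numpy as np

def minkowski_2d(mask):
    # mask: boolean excursion set (field > threshold) on a pixel grid.
    p = np.pad(mask, 1)
    # V0: area fraction of the excursion set.
    v0 = mask.mean()
    # V1 (unnormalised): boundary length in pixel-edge units, counting
    # edges between excursion and non-excursion pixels.
    perim = (p[:, 1:] ^ p[:, :-1]).sum() + (p[1:, :] ^ p[:-1, :]).sum()
    # V2: Euler characteristic of the pixel complex, chi = V - E + F,
    # where F counts pixels, E edge segments, V corner vertices.
    faces = mask.sum()
    h_edges = p[:-1, 1:-1] | p[1:, 1:-1]   # horizontal edge segments
    v_edges = p[1:-1, :-1] | p[1:-1, 1:]   # vertical edge segments
    edges = h_edges.sum() + v_edges.sum()
    corners = p[:-1, :-1] | p[:-1, 1:] | p[1:, :-1] | p[1:, 1:]
    chi = int(corners.sum()) - int(edges) + int(faces)
    return v0, int(perim), chi
```

For example, a mask with two disjoint single-pixel islands has Euler characteristic 2 (one per connected component) and a perimeter of 8 pixel edges.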
  • ABSTRACT: In the context of the halo model, the simple but powerful assumption that the number of galaxies depends only on halo mass, the halo occupation distribution (HOD) model, leads to an accurate analytic prediction of the galaxy distribution. Reciprocally, interpreting galaxy clustering with the HOD model allows us to make a direct comparison between galaxy properties and halo mass. Using accurate galaxy-clustering measurements over 133 deg^2 of the Wide component of the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS), we performed a detailed investigation of the changing relationship between galaxies and the dark matter haloes they inhabit, from z = 1.2 to the local Universe. In this paper, we present our results and their implications for galaxy evolution.
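To make the HOD assumption concrete: a widely used parametrization (the five-parameter form of Zheng et al. 2005; the parameter values below are illustrative defaults, not the fits derived from CFHTLS) writes the mean galaxy occupation as a smoothed central step plus a satellite power law:

```python
from math import erf

def n_central(log_m, log_m_min=12.0, sigma_logm=0.3):
    # Mean number of central galaxies: a smoothed step in log halo mass,
    # rising from 0 to 1 around M_min.
    return 0.5 * (1.0 + erf((log_m - log_m_min) / sigma_logm))

def n_satellite(log_m, log_m0=12.0, log_m1=13.3, alpha=1.0):
    # Mean number of satellites: a power law above a cut-off mass M0.
    m, m0, m1 = 10.0**log_m, 10.0**log_m0, 10.0**log_m1
    return (max(m - m0, 0.0) / m1) ** alpha

def n_total(log_m):
    # One common convention: satellites occupy only haloes hosting a central.
    return n_central(log_m) * (1.0 + n_satellite(log_m))
```

Fitting such a form to clustering measurements in redshift slices is what lets the galaxy-halo connection be traced as a function of cosmic time.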
