Deaglan J. Bartlett’s research while affiliated with Paris Institute of Astrophysics, French National Centre for Scientific Research and other places


Publications (41)


Figure 1. Slices of the density field in the SGX-SGY plane from −155 h⁻¹ Mpc to 155 h⁻¹ Mpc in supergalactic coordinates at z = 0 for the local Universe reconstructions used in this work (Table 1). Redder (bluer) colours correspond to overdensities (underdensities). The density fields are presented without additional smoothing. For the fields in the bottom row we resimulated the initial conditions at higher resolution. The CSiBORG suites and Courtois et al. (2023) are averaged over 20 posterior samples. The black cross marks the origin, indicating the approximate position of the Local Group.
Figure 4. Differences in logarithmic evidences Z from our flow validation model for various local Universe reconstructions (shown on the x-axis, see Table 1), compared against peculiar velocity samples (individual panels, see Table 2). Higher bars indicate a preferred model, and a bar of zero height indicates the reference (least successful) model. The logarithmic evidences are normalised with respect to the reference model as only relative differences are meaningful. Solid bars show evidences using the highest available resolution for each model, while hatched bars show evidences when all fields are smoothed to the resolution of 7.8 h⁻¹ Mpc, twice that of Courtois et al. (2023). Overall, CSiBORG2 is the preferred model while the CosmicFlows-based reconstructions (Sorce 2018 and Courtois et al. 2023) are disfavoured. Upon smoothing, the linear reconstruction of Carrick et al. (2015) becomes marginally preferred.
Figure 14. The S_8 ≡ σ_8,L √(Ω_m/0.3) parameter inferred from Carrick et al. (2015) calibrated against the joint peculiar velocity sample (LOSS+Foundation+CF4 TFR W1-band+CF4 TFR i-band). We compare to literature results using peculiar velocities (Huterer et al. 2017; Nusser 2017; Boruah et al. 2020; Said et al. 2020), weak lensing (Dark Energy Survey and KiloDegree Survey Collaboration et al. 2023; Abbott et al. 2022; Heymans et al. 2021), clustering (DESI Collaboration et al. 2024; Porredon et al. 2022), cluster abundance (Bocquet et al. 2019; Planck Collaboration et al. 2016; Abbott et al. 2020) and Planck TT,TE,EE+lowE+lensing (Planck Collaboration et al. 2020b). The errors are 1σ.
Summary of the free parameters of our model and their priors.
The Velocity Field Olympics: Assessing velocity field reconstructions with direct distance tracers
  • Preprint
  • File available

January 2025 · 2 Reads · [...] · Hélène M. Courtois

The peculiar velocity field of the local Universe provides direct insights into its matter distribution and the underlying theory of gravity, and is essential in cosmological analyses for modelling deviations from the Hubble flow. Numerous methods have been developed to reconstruct the density and velocity fields at z ≲ 0.05, typically constrained by redshift-space galaxy positions or by direct distance tracers such as the Tully-Fisher relation, the fundamental plane, or Type Ia supernovae. We introduce a validation framework to evaluate the accuracy of these reconstructions against catalogues of direct distance tracers. Our framework assesses the goodness-of-fit of each reconstruction using Bayesian evidence, residual redshift discrepancies, velocity scaling, and the need for external bulk flows. Applying this framework to a suite of reconstructions -- including those derived from the Bayesian Origin Reconstruction from Galaxies (BORG) algorithm and from linear theory -- we find that the non-linear BORG reconstruction consistently outperforms others. We highlight the utility of such a comparative approach for supernova or gravitational wave cosmological studies, where selecting an optimal peculiar velocity model is essential. Additionally, we present calibrated bulk flow curves predicted by the reconstructions and perform a density-velocity cross-correlation using a linear theory reconstruction to constrain the growth factor, yielding S_8 = 0.69 ± 0.034. This result is in significant tension with Planck but agrees with other peculiar velocity studies.
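As a rough illustration of the evidence comparison in Fig. 4, the sketch below (with invented ln Z values; none are taken from the paper) normalises logarithmic evidences to the least successful model, since only differences in ln Z are meaningful:

```python
import numpy as np

# Hypothetical log-evidences ln(Z) for several velocity-field reconstructions,
# evaluated against one peculiar-velocity catalogue (values are illustrative).
log_Z = {
    "Carrick+15 (linear)": -1510.2,
    "Sorce 2018": -1523.8,
    "Courtois+23": -1521.4,
    "CSiBORG2": -1498.6,
}

# Normalise to the least successful (reference) model, as in Fig. 4.
ref = min(log_Z.values())
for name, lnZ in sorted(log_Z.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:>22s}: Delta ln(Z) = {lnZ - ref:+.1f}")
```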


Fig. 1. Schematic illustration of the (a) COLA and (b) COCA formalism for cosmological simulations. In COLA, we solve the equations of motion in the frame of reference given by LPT, so we compute the residual ('res') between the LPT trajectory and the true position, x, of particles. In COCA, we emulate a frame of reference closer to the true trajectory by adding a ML contribution to LPT. Thus, we can solve for the (smaller) residuals between x and the emulated frame.
Fig. 2. Schematic illustration of the kick-drift-kick integration scheme employed in this study. The initial positions x_i and momenta p_i are evolved to their final values, x_f and p_f, with updates to these quantities occurring at different times. Unlike traditional kick-drift-kick integration, we choose not to evaluate forces that appear in the equations of motion at all time steps, but only at a subset (steps 8, 9 and 10 in this example). At all other kick steps, the momenta are updated according to the emulated frame of reference only.
Fig. 9. Ratio of the reduced bispectrum (Eq. (18)) between COCA simulations and the reference, as a function of the number of force evaluations, n_f. In the left panel, we plot a squeezed configuration, where the third wavenumber is k_3 = k = 9.8 × 10⁻² h Mpc⁻¹ and we vary k_1 = k_2 = k_s. In the right panel, we choose k_1 = 0.1 h Mpc⁻¹ and k_2 = 1.0 h Mpc⁻¹, and plot Q as a function of the angle between these two vectors, θ. The coloured lines represent the mean over the test set. The grey band indicates 1% agreement.
Fig. 10. Same as Fig. 8, but for the velocity-field two-point statistics. Again, we see that COCA performs much better than COLA with fewer force evaluations, and that the result converges to the truth as n_f increases.
Fig. 12. Relative performance of COCA and COLA versus an emulator of the displacement field Ψ. We compute the summary statistics outlined in Sect. 3.5 for the final (a = 1) matter density field and compare results at both the training cosmology and a mis-specified one for COCA (COLA is evaluated at the correct cosmology in both cases). Although directly emulating Ψ produces a more accurate density field than simply emulating the momentum field p (with n_f = 0), using the COCA framework (emulating the frame of reference and employing additional force evaluations) yields the best performance. Moreover, even when using a mis-specified cosmology to predict the frame of reference, COCA significantly outperforms COLA for any given number of time steps.
COmoving Computer Acceleration (COCA): N-body simulations in an emulated frame of reference

January 2025 · 3 Reads · 2 Citations

Astronomy and Astrophysics

Context. N-body simulations are computationally expensive, and machine learning (ML) based emulation techniques have thus emerged as a way to increase their speed. Surrogate models are indeed fast; however, they are limited in terms of their trustworthiness due to potentially substantial emulation errors that current approaches are not equipped to correct. Aims. To alleviate this problem, we have introduced COmoving Computer Acceleration (COCA), a hybrid framework interfacing an ML algorithm with an N-body simulator. The correct physical equations of motion are solved in an emulated frame of reference, so that any emulation error is corrected by design. Thus, we are able to find a solution for the perturbation of particle trajectories around the ML solution. This approach is computationally cheaper than obtaining the full solution and it is guaranteed to converge to the truth as the number of force evaluations is increased. Methods. Even though it is applicable to any ML algorithm and N-body simulator, we assessed this approach in the particular case of particle-mesh (PM) cosmological simulations in a frame of reference predicted by a convolutional neural network. In such cases, the time dependence is encoded as an additional input parameter to the network. Results. We find that COCA efficiently reduces emulation errors in particle trajectories, requiring far fewer force evaluations than running the corresponding simulation without ML. As a consequence, we were able to obtain accurate final density and velocity fields for a reduced computational budget. We demonstrate that this method exhibits robustness when applied to examples outside the range of the training data. When compared to the direct emulation of the Lagrangian displacement field using the same training resources, COCA's ability to correct emulation errors results in more accurate predictions. Conclusions. Therefore, COCA makes N-body simulations cheaper by skipping unnecessary force evaluations, while still solving the correct equations of motion and correcting for emulation errors made by ML.
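The kick-drift-kick scheme of Fig. 2 can be sketched in a few lines. The following is a minimal, hypothetical control-flow illustration (not the paper's actual update equations, and `emu_momentum`/`true_force` are invented stand-ins): momenta are taken from an emulated frame of reference at every kick, and the true force is evaluated only at a chosen subset of steps, where it corrects residual emulation errors.

```python
import numpy as np

def coca_kdk(x, p, n_steps, force_steps, emu_momentum, true_force, dt):
    """Minimal control-flow sketch of a COCA-style kick-drift-kick loop.
    At every kick the momenta follow the emulated frame of reference;
    the expensive true force is evaluated only at the steps listed in
    `force_steps` (cf. steps 8-10 in Fig. 2)."""
    for n in range(n_steps):
        p = emu_momentum(x, n * dt)      # kick from the emulated frame
        if n in force_steps:
            p = p + dt * true_force(x)   # occasional correction kick
        x = x + dt * p                   # drift
    return x, p

# Toy 1D usage with hypothetical stand-ins for the emulator and the force.
x0, p0 = np.zeros(8), np.zeros(8)
emu = lambda x, t: 0.1 * np.ones_like(x)   # "emulated" momenta
force = lambda x: -0.01 * x                # toy restoring force
x, p = coca_kdk(x0, p0, n_steps=10, force_steps={8, 9},
                emu_momentum=emu, true_force=force, dt=0.1)
print(x.mean(), p.mean())
```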


Figure 1. Power spectrum of the halo counts field before and after permuting voxels with approximately equal density, for halos of mass 10^13.7–10^14.0 h⁻¹ M_☉. The permuted fields have significantly greater power on large scales; thus, LIMD halo biasing cannot reproduce the distribution of halos. The mean and standard deviation are computed over 100 permutations. The vertical colored lines correspond to the wavenumbers used to characterize this discrepancy in later plots.
Figure 5. Fractional error on the power spectrum of halo counts after permuting voxels within the same density bin as a function of the number of bins. We consider halos of mass 10^13.7–10^14.0 h⁻¹ M_☉ and choose a binning scheme such that each bin contains approximately an equal number of voxels. After an initial transient behavior, we find constant biases in the power spectrum across many orders of magnitude; hence, our conclusion is robust to the choice of n_bin.
Bye-bye, Local-in-matter-density Bias: The Statistics of the Halo Field Are Poorly Determined by the Local Mass Density

December 2024 · 14 Reads · 1 Citation

The Astrophysical Journal Letters

Bias models relating the dark matter field to the spatial distribution of halos are widely used in current cosmological analyses. Many models predict halos purely from the local Eulerian matter density, yet bias models in perturbation theory require other local properties. We assess the validity of assuming that only the local dark matter density can be used to predict the number density of halos in a model-independent way and in the nonperturbative regime. Utilizing N-body simulations, we study the properties of the halo counts field after spatial voxels with near-equal dark matter density have been permuted. If local-in-matter-density (LIMD) biasing were valid, the statistical properties of the permuted and unpermuted fields would be indistinguishable since both represent equally fair draws of the stochastic biasing model. If the Lagrangian radius is greater than approximately half the voxel size and for halos less massive than ∼10¹⁵ h⁻¹ M_☉, we find the permuted halo field has a scale-dependent bias with greater than 25% more power on scales relevant for current surveys. These bias models remove small-scale power by not modeling correlations between neighboring voxels, which substantially boosts large-scale power to conserve the field’s total variance. This conclusion is robust to the choice of initial conditions and cosmology. Assuming LIMD halo biasing cannot, therefore, reproduce the distribution of halos across a large range of scales and halo masses, no matter how complex the model. One must either allow the biasing to be a function of other quantities and/or remove the assumption that neighboring voxels are statistically independent.
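The permutation test at the heart of this paper is easy to sketch. Below is a minimal, illustrative Python version (the binning scheme and the toy fields are assumptions, not the paper's exact pipeline): halo-count voxels are shuffled among voxels of near-equal matter density, which preserves any strictly local-in-matter-density statistics while destroying correlations between neighbouring voxels.

```python
import numpy as np

rng = np.random.default_rng(0)

def permute_within_density_bins(delta, halo_counts, n_bins=100):
    """Shuffle halo-count voxels among voxels of near-equal matter density.
    If halos depended only on the local density, the permuted field would
    be statistically indistinguishable from the original."""
    flat_delta = delta.ravel()
    flat_counts = halo_counts.ravel().copy()
    # Bin voxels by density so each bin holds roughly equal numbers of voxels.
    order = np.argsort(flat_delta)
    for chunk in np.array_split(order, n_bins):
        flat_counts[chunk] = rng.permutation(flat_counts[chunk])
    return flat_counts.reshape(halo_counts.shape)

# Toy example on random fields (stand-ins for simulation outputs).
delta = rng.normal(size=(64, 64, 64))
halos = rng.poisson(lam=np.exp(0.5 * delta))
permuted = permute_within_density_bins(delta, halos)
assert permuted.sum() == halos.sum()  # total halo number is conserved
```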


FIG. 2. Equation of state, w(a), for a scalar field model with m² < 0 [(V_0, m²) = (0.9725, −40.0), black solid line] alongside the corresponding best-fit CPL models when using the BAO, SNe, and CMB survey characteristics (as described in Sec. III) both individually and combined, showing that the best-fit (w_0, w_a) parameters are clearly sensitive to the survey parameters.
FIG. 5. Blue: the predicted slope w_a/(1 + w_0) of thawing quintessence models in the (w_0, w_a) plane for survey characteristics of BAO + CMB + SNe data. Red: CPL parametrization data constraints (as Fig. 1). The shaded areas represent 2σ regions for both thawing quintessence and the data.
FIG. 13. 1D posterior distribution for m² from the combination of DESI BAO data [1], Pantheon+ SNe data [3], and CMB data [5-8] (blue) and the effective prior imposed (gray).
Scant evidence for thawing quintessence

October 2024 · 32 Reads · 30 Citations

Physical Review D

New constraints on the expansion rate of the Universe seem to favor evolving dark energy in the form of thawing quintessence models, i.e., models for which a canonical, minimally coupled scalar field has, at late times, begun to evolve away from potential energy domination. We scrutinize the evidence for thawing quintessence by exploring what it predicts for the equation of state. We show that, in terms of the usual Chevallier-Polarski-Linder parameters, (w_0, w_a), thawing quintessence is, in fact, only marginally consistent with a compilation of the current data. Despite this, we embrace the possibility that thawing quintessence is dark energy and find constraints on the microphysics of this scenario. We do so in terms of the effective mass m² and energy scale V_0 of the scalar field potential. We are particularly careful to enforce uninformative, flat priors on these parameters so as to minimize their effect on the final posteriors. While the current data favors a large and negative value of m², when we compare these models to the standard ΛCDM model we find that there is scant evidence for thawing quintessence.
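For reference, the Chevallier-Polarski-Linder (CPL) parametrization used above is w(a) = w_0 + w_a(1 − a). A minimal sketch (the parameter values are illustrative, not fits from the paper): a thawing field that sits frozen at w = −1 at early times satisfies w(a→0) = w_0 + w_a = −1.

```python
import numpy as np

def w_cpl(a, w0, wa):
    """Chevallier-Polarski-Linder equation of state: w(a) = w0 + wa * (1 - a)."""
    return w0 + wa * (1.0 - a)

# A thawing field starts frozen at w = -1 (a -> 0) and evolves away at late
# times, so it lies on the line w0 + wa = -1 (illustrative values below).
a = np.linspace(0.1, 1.0, 10)
w0, wa = -0.9, -0.1
print(w_cpl(a, w0, wa))  # runs from w ~ -0.99 at a = 0.1 to w0 at a = 1
```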


syren-new: Precise formulae for the linear and nonlinear matter power spectra with massive neutrinos and dynamical dark energy

October 2024 · 16 Reads

Current and future large scale structure surveys aim to constrain the neutrino mass and the equation of state of dark energy. We aim to construct accurate and interpretable symbolic approximations to the linear and nonlinear matter power spectra as a function of cosmological parameters in extended ΛCDM models which contain massive neutrinos and non-constant equations of state for dark energy. This constitutes an extension of the syren-halofit emulators to incorporate these two effects, which we call syren-new (SYmbolic-Regression-ENhanced power spectrum emulator with NEutrinos and w_0–w_a). We also obtain a simple approximation to the derived parameter σ_8 as a function of the cosmological parameters for these models. Our results for the linear power spectrum are designed to emulate CLASS, whereas for the nonlinear case we aim to match the results of EuclidEmulator2. We compare our results to existing emulators and N-body simulations. Our analytic emulators for σ_8, the linear and nonlinear power spectra achieve root mean squared errors of 0.1%, 0.3% and 1.3%, respectively, across a wide range of cosmological parameters, redshifts and wavenumbers. We verify that emulator-related discrepancies are subdominant compared to observational errors and other modelling uncertainties when computing shear power spectra for LSST-like surveys. Our expressions have similar accuracy to existing (numerical) emulators, but are at least an order of magnitude faster, both on a CPU and GPU. Our work greatly improves the accuracy, speed and range of applicability of current symbolic approximations to the linear and nonlinear matter power spectra. We provide publicly available code for all symbolic approximations found.
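To convey what a symbolic emulator buys, here is a toy, entirely hypothetical closed-form stand-in (the functional form and coefficients are made up and are NOT the syren-new formulae): evaluating one analytic expression replaces a full Boltzmann-solver run, which is why such emulators can be orders of magnitude faster than numerical codes.

```python
import numpy as np

def sigma8_symbolic(A_s, omega_m, h, c=(0.61, 0.35, 0.8)):
    """Toy closed-form sigma_8(cosmology) with invented constants `c`,
    illustrating the *style* of a symbolic emulator: a single analytic
    formula in place of an expensive Boltzmann-solver integration."""
    return (c[0] * np.sqrt(A_s / 2.1e-9)
            * (omega_m / 0.31) ** c[1]
            * (h / 0.6766) ** c[2])

# At the pivot cosmology the made-up formula returns c[0] by construction.
print(sigma8_symbolic(A_s=2.1e-9, omega_m=0.31, h=0.6766))
```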


Figure 1. Residuals of the global performance of the model with halos in redshift space: Same as Fig. 4 but showing the residuals of each of the summary statistics. Here we show the residuals for all the 200 test simulations and color the lines by the cosmological parameter σ_8.
Figure 2. Local performance of the model with halos in real space: Comparison of the one-point statistics between the true (square markers) and mock (solid lines) halo catalogs in test simulations with different cosmologies. The left, middle, and right columns compare the number of halos, the mass of the heaviest halo, and the mass of the third heaviest halo, respectively, in 128³ voxels for each cosmology. In the top row, we show the histogram for sub-selections of voxels with dark matter density in the range 0 < δ_m < 2, while in the bottom row, we use high-density voxels with δ_m > 2. This test individually compares the performance of each of the three stages of the halo mass network (as described in § 3), demonstrating the high fidelity of the mock catalogs obtained using CHARM.
Figure 5. Global performance of the model with galaxies in redshift space: Same as Fig. 4 but for galaxies in redshift space and each curve for different cosmology uses a random set of HOD parameters.
Figure 6. Parameter inference with galaxies in redshift space: Comparison of the true values of cosmological parameters in test simulations with the predicted values and their 1σ uncertainties using simulation-based inference with the redshift-space power spectra of galaxies (P_ℓ=0, P_ℓ=2, P_ℓ=4) for k < 0.32 h/Mpc, monopole equilateral bispectra (with θ_k ∈ [0.1, 3.04] radians) at k ∈ {0.06, 0.12, 0.32} h/Mpc, and monopole first, second, and third-order wavelets.
CHARM: Creating Halos with Auto-Regressive Multi-stage networks

September 2024 · 7 Reads

To maximize the amount of information extracted from cosmological datasets, simulations that accurately represent these observations are necessary. However, traditional simulations that evolve particles under gravity by estimating particle-particle interactions (N-body simulations) are computationally expensive and prohibitive to scale to the large volumes and resolutions necessary for the upcoming datasets. Moreover, modeling the distribution of galaxies typically involves identifying virialized dark matter halos, which is also a time- and memory-consuming process for large N-body simulations, further exacerbating the computational cost. In this study, we introduce CHARM, a novel method for creating mock halo catalogs by matching the spatial, mass, and velocity statistics of halos directly from the large-scale distribution of the dark matter density field. We develop multi-stage neural spline flow-based networks to learn this mapping at redshift z = 0.5 directly with computationally cheaper low-resolution particle mesh simulations instead of relying on the high-resolution N-body simulations. We show that the mock halo catalogs and painted galaxy catalogs have the same statistical properties as obtained from N-body simulations in both real space and redshift space. Finally, we use these mock catalogs for cosmological inference using redshift-space galaxy power spectrum, bispectrum, and wavelet-based statistics using simulation-based inference, performing the first inference with accelerated forward model simulations and finding unbiased cosmological constraints with well-calibrated posteriors. The code was developed as part of the Simons Collaboration on Learning the Universe and is publicly available at https://github.com/shivampcosmo/CHARM.
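The multi-stage idea can be sketched as follows; everything here (function names, the toy "flows") is a hypothetical stand-in for the trained normalising flows, shown only to illustrate the conditional, stage-by-stage sampling of halo counts and masses from the low-resolution density field.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_halo_catalog(density_voxels, n_halo_flow, mass_flows):
    """Hypothetical sketch of CHARM-style multi-stage sampling: first draw
    the number of halos per voxel conditioned on the local matter density,
    then draw halo masses stage by stage, each stage conditioned on the
    density and the previously drawn, more massive halos."""
    n_halos = n_halo_flow(density_voxels)              # stage 1: counts
    heaviest = mass_flows[0](density_voxels, n_halos)  # stage 2: top mass
    lighter = mass_flows[1](density_voxels, heaviest)  # stage 3: next masses
    return n_halos, heaviest, lighter

# Toy stand-ins for the trained normalising flows (illustrative only).
n_halo_flow = lambda d: rng.poisson(np.exp(d))
mass_flows = [
    lambda d, n: 13.0 + 0.5 * d + 0.1 * rng.normal(size=d.shape),
    lambda d, m: m - np.abs(rng.normal(0.3, 0.1, size=d.shape)),
]
print(sample_halo_catalog(rng.normal(size=5), n_halo_flow, mass_flows))
```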



COmoving Computer Acceleration (COCA): N-body simulations in an emulated frame of reference

September 2024 · 50 Reads

N-body simulations are computationally expensive, so machine-learning (ML)-based emulation techniques have emerged as a way to increase their speed. Although fast, surrogate models have limited trustworthiness due to potentially substantial emulation errors that current approaches cannot correct for. To alleviate this problem, we introduce COmoving Computer Acceleration (COCA), a hybrid framework interfacing ML with an N-body simulator. The correct physical equations of motion are solved in an emulated frame of reference, so that any emulation error is corrected by design. This approach corresponds to solving for the perturbation of particle trajectories around the machine-learnt solution, which is computationally cheaper than obtaining the full solution, yet is guaranteed to converge to the truth as one increases the number of force evaluations. Although applicable to any ML algorithm and N-body simulator, this approach is assessed in the particular case of particle-mesh cosmological simulations in a frame of reference predicted by a convolutional neural network, where the time dependence is encoded as an additional input parameter to the network. COCA efficiently reduces emulation errors in particle trajectories, requiring far fewer force evaluations than running the corresponding simulation without ML. We obtain accurate final density and velocity fields for a reduced computational budget. We demonstrate that this method shows robustness when applied to examples outside the range of the training data. When compared to the direct emulation of the Lagrangian displacement field using the same training resources, COCA's ability to correct emulation errors results in more accurate predictions. COCA makes N-body simulations cheaper by skipping unnecessary force evaluations, while still solving the correct equations of motion and correcting for emulation errors made by ML.


Scant evidence for thawing quintessence

August 2024 · 8 Reads

New constraints on the expansion rate of the Universe seem to favor evolving dark energy in the form of thawing quintessence models, i.e., models for which a canonical, minimally coupled scalar field has, at late times, begun to evolve away from potential energy domination. We scrutinize the evidence for thawing quintessence by exploring what it predicts for the equation of state. We show that, in terms of the usual Chevallier-Polarski-Linder parameters, (w_0, w_a), thawing quintessence is, in fact, only marginally consistent with a compilation of the current data. Despite this, we embrace the possibility that thawing quintessence is dark energy and find constraints on the microphysics of this scenario. We do so in terms of the effective mass m² and energy scale V_0 of the scalar field potential. We are particularly careful to enforce uninformative, flat priors on these parameters so as to minimize their effect on the final posteriors. While the current data favors a large and negative value of m², when we compare these models to the standard ΛCDM model we find that there is scant evidence for thawing quintessence.


Statistical Patterns in the Equations of Physics and the Emergence of a Meta-Law of Nature

August 2024 · 79 Reads

Physics, as a fundamental science, aims to understand the laws of Nature and describe them in mathematical equations. While the physical reality manifests itself in a wide range of phenomena with varying levels of complexity, the equations that describe them display certain statistical regularities and patterns, which we begin to explore here. By drawing inspiration from linguistics, where Zipf's law states that the frequency of any word in a large corpus of text is roughly inversely proportional to its rank in the frequency table, we investigate whether similar patterns for the distribution of operators emerge in the equations of physics. We analyse three corpora of formulae and find, using sophisticated implicit-likelihood methods, that the frequency of operators as a function of their rank in the frequency table is best described by an exponential law with a stable exponent, in contrast with Zipf's inverse power-law. Understanding the underlying reasons behind this statistical pattern may shed light on Nature's modus operandi or reveal recurrent patterns in physicists' attempts to formalise the laws of Nature. It may also provide crucial input for symbolic regression, potentially augmenting language models to generate symbolic models for physical phenomena. By pioneering the study of statistical regularities in the equations of physics, our results open the door for a meta-law of Nature, a (probabilistic) law that all physical laws obey.
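The rank-frequency comparison described above can be reproduced in miniature. The counts below are invented, not the paper's corpora, and simple least-squares fitting stands in for the paper's implicit-likelihood analysis, but the contrast it draws (Zipf-like inverse power law versus exponential law) is the same.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative operator frequencies by rank (made-up counts).
rank = np.arange(1, 11)
freq = np.array([900, 610, 400, 270, 180, 120, 80, 55, 36, 25], dtype=float)

zipf = lambda r, a, s: a * r ** (-s)       # Zipf-like inverse power law
expo = lambda r, a, b: a * np.exp(-b * r)  # exponential law found in the paper

for name, model in [("power law", zipf), ("exponential", expo)]:
    params, _ = curve_fit(model, rank, freq, p0=(freq[0], 0.5))
    rmse = np.sqrt(np.mean((model(rank, *params) - freq) ** 2))
    print(f"{name}: params = {np.round(params, 3)}, RMSE = {rmse:.1f}")
```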


Citations (23)


... Having a high computational cost is a common limitation in numerical simulations, especially when a parameter space exploration is involved; a possible solution is the implementation of an emulator, to either reduce the number of simulations needed (e.g., Bird et al. 2019; Rogers et al. 2019; Brown et al. 2024) or to speed up the computation (e.g., Spurio Mancini et al. 2022; Branca & Pallottini 2024; Robinson et al. 2024; Bartlett et al. 2024). ...

Reference:

Stellar halos tracing the assembly of Ultra-Faint Dwarf galaxies
COmoving Computer Acceleration (COCA): N-body simulations in an emulated frame of reference

Astronomy and Astrophysics

... Desjacques et al. 2018). For example, Bartlett et al. (2024) showed that non-linear but local galaxy models are insufficient. As high-dimensional field-level inference methods incorporating non-linear structure formation and perturbative or phenomenological galaxy bias models have now become computationally feasible, the next pressing challenge involves how to integrate non-differentiable models in the data analysis. ...

Bye-bye, Local-in-matter-density Bias: The Statistics of the Halo Field Are Poorly Determined by the Local Mass Density

The Astrophysical Journal Letters

... Quintessence (−1 < α_Q < −1/3): This form describes a controlled accelerating universe expansion where energy conditions are always satisfied, i.e., P_ϕ + ρ_ϕ > 0 [60][61][62][63][64][65][66][67][68][69][70][71][72]. This usual DE form has been significantly studied in the literature in recent decades for the fascination it provokes and the realism of the models. ...

Scant evidence for thawing quintessence

Physical Review D

... Both operations were performed six times, and each candidate model produced was evaluated for consistency at a sample of domain locations. We did not investigate whether there was some optimum number with respect to overall run time, or whether the preferential search strategy made a significant impact on an appropriate measure of success rate, as discussed in Kronberger et al. (2024). However, Garbrecht et al. (2021b) showed that the preferential search strategy can improve overall run time when the evaluation and selection steps performed in crossover and mutation are faster than the main evaluation and selection step, assuming it beneficially guides the algorithm's search. ...

The Inefficiency of Genetic Programming for Symbolic Regression
  • Citing Chapter
  • September 2024

... We utilise the outputs of the CNN to train a normalising flow model (Papamakarios et al. 2021) with the Python package ltu-ili (Ho et al. 2024a, Learning the Universe Implicit Likelihood Inference). Specifically, the 32 values from the penultimate layer of the CNN are employed as inputs to train a Masked Autoregressive Flow (Papamakarios et al. 2018, MAF) for neural posterior estimation. ...

LtU-ILI: An All-in-One Framework for Implicit Inference in Astrophysics and Cosmology
  • Citing Article
  • July 2024

The Open Journal of Astrophysics

... A method for distinguishing between the non-thermal γ-ray emissions originating from astrophysical sources and those that might be caused by DM annihilation or decay within the Unresolved Gamma-Ray Background (UGRB) hinges on the concept of cross-correlating UGRB maps with various other maps that trace the underlying large-scale structure of the Universe. Such tracers include cosmic phenomena like the weak gravitational lensing effect [3][4][5][6][7][8], the clustering of galaxies [9][10][11][12][13][14][15] and galaxy clusters [16][17][18][19][20], and the lensing effect of the Cosmic Microwave Background (CMB) [21], which reflect the large-scale distribution of matter across cosmological distances (see also Refs. [22][23][24][25]). ...

Constraints on dark matter and astrophysics from tomographic γ-ray cross-correlations

Physical Review D

... (The ramp potential V_RAMP(ϕ) is represented in Fig. 10, left panel.) Taking inspiration from [77,78], we then apply SR to regularize the potential features, and to determine a smooth hilltop potential symmetric around the origin, rapidly increasing at large values of ϕ. We use the Python package PySR [79] to explore the model space under a set of constraints on the allowed functional form for the desired scalar potential. ...

Optimal inflationary potentials
  • Citing Article
  • April 2024

Physical Review D

... After the first seminal papers on this topic [5][6][7], several emulators have been produced in the literature, emulating the output of Boltzmann solvers such as CAMB [8] or CLASS [9], with applications ranging from the Cosmic Microwave Background (CMB) [10][11][12][13][14], the linear matter power spectrum [11,[15][16][17][18][19], galaxy power spectrum multipoles [17,[19][20][21][22], and the galaxy survey angular power spectrum [23][24][25][26][27][28][29]. ...

A precise symbolic emulator of the linear matter power spectrum

Astronomy and Astrophysics

... For all these conversions, we assume a flat ΛCDM with the cosmological parameters h = 0.6766, Ω_m = 0.3111, Ω_b = 0.02242/h² and n_s = 0.9665, except the amplitude of the primordial power spectrum A_s which we wish to infer. This is achieved by running a root-finding algorithm, where, for a given A_s, σ_8,NL is computed by performing the appropriate integral of the non-linear matter power spectrum, which is evaluated using the syren-new emulator (Bartlett et al. 2024; Sui et al. 2024). Once the corresponding value of A_s has been obtained, this is converted to the linear value of σ_8,L using the conversion given in Eq. 5 of Sui et al. (2024). ...

syren-halofit: A fast, interpretable, high-precision formula for the Lambda CDM nonlinear matter power spectrum

Astronomy and Astrophysics

... However, we find that these problematic cases are not encountered during our inferences. The approach of distinguishing between observed and true parameters, to which a Gaussian hyperprior is assigned, is analogous to Marginalised Normal Regression (Bartlett & Desmond 2023), which was shown to be an unbiased regression method, and also follows previous work in the field of supernova cosmology (March et al. 2011, 2014, 2018; Rubin et al. 2015, 2023). ...

Marginalised Normal Regression: Unbiased curve fitting in the presence of x-errors
  • Citing Article
  • November 2023

The Open Journal of Astrophysics