Marcelo Pereyra’s research while affiliated with Heriot-Watt University and other places


Publications (107)


Bayesian computation with generative diffusion models by Multilevel Monte Carlo
  • Article

June 2025 · 8 Reads

[...] · Marcelo Pereyra · Konstantinos Zygalakis

Generative diffusion models have recently emerged as a powerful strategy to perform stochastic sampling in Bayesian inverse problems, delivering remarkably accurate solutions for a wide range of challenging applications. However, diffusion models often require a large number of neural function evaluations per sample in order to deliver accurate posterior samples. As a result, using diffusion models as stochastic samplers for Monte Carlo integration in Bayesian computation can be highly computationally expensive, particularly in applications that require a substantial number of Monte Carlo samples for conducting uncertainty quantification analyses. This cost is especially high in large-scale inverse problems such as computational imaging, which rely on large neural networks that are expensive to evaluate. With quantitative imaging applications in mind, this paper presents a Multilevel Monte Carlo strategy that significantly reduces the cost of Bayesian computation with diffusion models. This is achieved by exploiting cost-accuracy trade-offs inherent to diffusion models to carefully couple models of different levels of accuracy in a manner that significantly reduces the overall cost of the calculation, without reducing the final accuracy. The proposed approach achieves a 4×-to-8× reduction in computational cost with respect to standard techniques across three benchmark imaging problems. This article is part of the theme issue ‘Generative modelling meets Bayesian inference: a new paradigm for inverse problems’.
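The multilevel idea described in the abstract can be illustrated on a toy problem. The sketch below is a minimal illustration, not the paper's method: it replaces the diffusion-model sampler with a simple Ornstein-Uhlenbeck Langevin diffusion, and all function names are hypothetical. The key mechanism is the same: fine and coarse discretizations are coupled through shared Brownian increments, so the telescoping correction terms have low variance and the expensive fine levels need few samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_ou_samples(level, n_paths, T=1.0):
    """Simulate the Langevin SDE dX = -X dt + sqrt(2) dW from X_0 = 0
    on a fine grid (2**level steps) and a coarse grid (2**(level-1) steps),
    driving BOTH paths with the same Brownian increments (the coupling)."""
    nf = 2 ** level
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, nf))
    xf = np.zeros(n_paths)
    for k in range(nf):
        xf += -xf * dt + np.sqrt(2.0) * dW[:, k]
    xc = np.zeros(n_paths)
    if level > 0:
        dWc = dW.reshape(n_paths, nf // 2, 2).sum(axis=2)  # merged increments
        for k in range(nf // 2):
            xc += -xc * (2 * dt) + np.sqrt(2.0) * dWc[:, k]
    return xf, xc

def mlmc(f, L=5, n0=20000):
    """Telescoping estimator E[f(X_L)] = E[f(X_0)] + sum_l E[f(X_l) - f(X_{l-1})],
    with geometrically fewer samples at the higher (more expensive) levels."""
    est = 0.0
    for level in range(L + 1):
        n = max(n0 // 2 ** level, 500)
        xf, xc = coupled_ou_samples(level, n)
        est += np.mean(f(xf)) if level == 0 else np.mean(f(xf) - f(xc))
    return est

# Second moment of X_T; the exact value for this OU process is 1 - exp(-2*T)
est = mlmc(lambda x: x ** 2)
```

Because each correction term is small and low-variance under the coupling, most samples are drawn at the cheap coarse levels; this cost-accuracy trade-off is the source of the multilevel savings reported for the imaging benchmarks.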


Generative modelling meets Bayesian inference: a new paradigm for inverse problems

June 2025 · 11 Reads

Alain Oliviero-Durmus · Yazid Janati · [...] · Sebastian Reich

This special issue addresses Bayesian inverse problems using data-driven priors derived from deep generative models (DGMs) and the convergence of generative modelling techniques and Bayesian inference methods. Conventional Bayesian priors often fail to accurately capture the properties and the underlying geometry of complex, real-world data distributions. In contrast, deep generative models (DGMs), which include generative adversarial networks (GANs), variational auto-encoders (VAEs), normalizing flows and diffusion models (DMs), have demonstrated tremendous success in capturing detailed data representations learned directly from empirical observations. As a result, these models produce priors endowed with superior accuracy, increased perceptual realism and enhanced capacities for uncertainty quantification within inverse problem contexts. This paradigm emerged in the late 2010s, when pioneering efforts were made to explicitly formulate Bayesian inverse problems using conditional Wasserstein generative adversarial networks (GANs). These advances have greatly improved methods for quantifying uncertainties, especially in large-scale imaging applications. Building on these fundamental insights, posterior sampling techniques utilizing DMs have demonstrated remarkable efficiency and robustness, highlighting their potential to effectively tackle complex and diverse inverse problems. The articles collected herein provide essential theoretical breakthroughs and significant algorithmic innovations, collectively demonstrating how deep generative priors mitigate traditional limitations and profoundly enrich both the practical applicability and theoretical foundations of Bayesian inversion. This article is part of the theme issue ‘Generative modelling meets Bayesian inference: a new paradigm for inverse problems’.


Figure 2: Example images from Oxford 102 Flower and ImageWoof experiments: true image x*, measurements y and y1, supervised and self-supervised estimates x̂_sup(y1) and x̂_self(y1).
Figure 3: Example images from Oxford 102 Flower with Poisson noise: true image x*, measurements y and y1, self-supervised estimates x̂(y1).
ImageWoof image inpainting experiment: Type I error probability and statistical power for the proposed framework implemented with the RAM [43] reconstruction network (zero-shot and with self-supervised fine-tuning).
κ for different sets of experiments, set to 1% of the true median value.
Statistical test results of supervised and unsupervised networks on Oxford Flowers and ImageWoof datasets using the equivariant bootstrap sampling method with tuned κ.
Hypothesis Testing in Imaging Inverse Problems

May 2025

·

7 Reads

This paper proposes a framework for semantic hypothesis testing tailored to imaging inverse problems. Modern imaging methods struggle to support hypothesis testing, a core component of the scientific method that is essential for the rigorous interpretation of experiments and for robust interfacing with decision-making processes. Image-based hypothesis testing is challenging for three main reasons. First, it is difficult to use a single observation to simultaneously reconstruct an image, formulate hypotheses, and quantify their statistical significance. Second, the hypotheses encountered in imaging are mostly semantic in nature, rather than quantitative statements about pixel values. Third, it is challenging to control test error probabilities because the null and alternative distributions are often unknown. Our proposed approach addresses these difficulties by leveraging concepts from self-supervised computational imaging, vision-language models, and non-parametric hypothesis testing with e-values. We demonstrate the framework through numerical experiments on image-based phenotyping, where it achieves excellent power while robustly controlling Type I errors.
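The e-value mechanism mentioned in the abstract controls Type I error through Markov's inequality: any nonnegative statistic with expectation at most 1 under the null yields a valid level-α test by rejecting when e ≥ 1/α, even when the null distribution of the statistic is otherwise unknown. A minimal sketch with a likelihood-ratio e-value (a standard textbook construction, not the paper's semantic test; the Gaussian model is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

def lr_evalue(x, mu1=1.0):
    """Likelihood-ratio e-value for H0: x_i ~ N(0,1) vs H1: x_i ~ N(mu1,1).
    Under H0 its expectation is exactly 1, so Markov's inequality gives
    P_H0(e >= 1/alpha) <= alpha, i.e. built-in Type I error control."""
    # per-observation log likelihood ratio of N(mu1,1) vs N(0,1)
    return np.exp(np.sum(mu1 * x - 0.5 * mu1 ** 2))

alpha = 0.05
# empirical Type I error under H0 and power under H1, 20 observations each
null_rej = np.mean([lr_evalue(rng.normal(0, 1, 20)) >= 1 / alpha
                    for _ in range(2000)])
alt_rej = np.mean([lr_evalue(rng.normal(1, 1, 20)) >= 1 / alpha
                   for _ in range(2000)])
```

The rejection rule e ≥ 1/α is conservative but requires no knowledge of the test statistic's null distribution, which is precisely the property the abstract exploits when the null and alternative distributions are unknown.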



Efficient Bayesian Computation Using Plug-and-Play Priors for Poisson Inverse Problems

March 2025

·

16 Reads

This paper introduces a novel plug-and-play (PnP) Langevin sampling methodology for Bayesian inference in low-photon Poisson imaging problems, a challenging class of problems with significant applications in astronomy, medicine, and biology. PnP Langevin sampling algorithms offer a powerful framework for Bayesian image restoration, enabling accurate point estimation as well as advanced inference tasks, including uncertainty quantification and visualization analyses, and empirical Bayesian inference for automatic model parameter tuning. However, existing PnP Langevin algorithms are not well-suited for low-photon Poisson imaging due to high solution uncertainty and poor regularity properties, such as exploding gradients and non-negativity constraints. To address these challenges, we propose two strategies for extending Langevin PnP sampling to Poisson imaging models: (i) an accelerated PnP Langevin method that incorporates boundary reflections and a Poisson likelihood approximation and (ii) a mirror sampling algorithm that leverages a Riemannian geometry to handle the constraints and the poor regularity of the likelihood without approximations. The effectiveness of these approaches is demonstrated through extensive numerical experiments and comparisons with state-of-the-art methods.
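The first strategy (boundary reflections) can be sketched on a scalar toy problem. The snippet below is an illustrative sketch, not the paper's algorithm: it uses a conjugate Poisson-exponential model so the posterior is a known Gamma distribution, and it omits the likelihood approximation and acceleration. It shows the core difficulty and fix: the log-likelihood gradient blows up near zero, and reflecting the Langevin iterate keeps it in the positive orthant.

```python
import numpy as np

rng = np.random.default_rng(2)

def reflected_ula(grad_log_post, x0=1.0, step=2e-3, n_iter=200000, burn=20000):
    """Unadjusted Langevin algorithm with a boundary reflection |.|
    that enforces the non-negativity constraint of Poisson models."""
    x = x0
    noise = np.sqrt(2 * step) * rng.normal(size=n_iter)  # pre-drawn kicks
    out = np.empty(n_iter - burn)
    for k in range(n_iter):
        # reflection keeps the chain in x > 0 despite the singular gradient
        x = abs(x + step * grad_log_post(x) + noise[k])
        if k >= burn:
            out[k - burn] = x
    return out

# Toy model: y ~ Poisson(x), prior x ~ Exp(1); posterior is Gamma(y+1, rate=2),
# so the posterior mean is (y+1)/2 = 3 for y = 5.
y = 5
grad = lambda x: y / max(x, 1e-8) - 2.0   # d/dx log posterior; explodes near 0
samples = reflected_ula(grad)
```

The second strategy (mirror sampling on a Riemannian geometry) removes the need for both the reflection and the gradient capping, at the price of a geometry-specific update.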


LATINO-PRO: LAtent consisTency INverse sOlver with PRompt Optimization

March 2025

·

28 Reads

Text-to-image latent diffusion models (LDMs) have recently emerged as powerful generative models with great potential for solving inverse problems in imaging. However, leveraging such models in a Plug & Play (PnP), zero-shot manner remains challenging because it requires identifying a suitable text prompt for the unknown image of interest. Moreover, existing text-to-image PnP approaches are highly computationally expensive. We address these challenges by proposing a novel PnP inference paradigm specifically designed for embedding generative models within stochastic inverse solvers, with special attention to Latent Consistency Models (LCMs), which distill LDMs into fast generators. We leverage our framework to propose the LAtent consisTency INverse sOlver (LATINO), the first zero-shot PnP framework to solve inverse problems with priors encoded by LCMs. Our conditioning mechanism avoids automatic differentiation and reaches SOTA quality in as few as 8 neural function evaluations. As a result, LATINO delivers remarkably accurate solutions and is significantly more memory- and computationally efficient than previous approaches. We then embed LATINO within an empirical Bayesian framework that automatically calibrates the text prompt from the observed measurements by marginal maximum likelihood estimation. Extensive experiments show that prompt self-calibration greatly improves estimation, allowing LATINO with PRompt Optimization to set new SOTAs in image reconstruction quality and computational efficiency.


Fig. 2: Image reconstruction results for image denoising (top) and non-blind image deblurring (bottom), using a self-supervised neural-network estimator x̂(y) and images from DIV2K.
Fig. 3: Poisson image denoising experiment: desired confidence level vs empirical coverage; the proposed self-supervised conformal prediction methods deliver prediction sets with near perfect coverage.
Self-supervised conformal prediction for uncertainty quantification in Poisson imaging problems

February 2025

·

17 Reads

Image restoration problems are often ill-posed, leading to significant uncertainty in reconstructed images. Accurately quantifying this uncertainty is essential for the reliable interpretation of reconstructed images. However, image restoration methods often lack uncertainty quantification capabilities. Conformal prediction offers a rigorous framework to augment image restoration methods with accurate uncertainty quantification estimates, but it typically requires abundant ground truth data for calibration. This paper presents a self-supervised conformal prediction method for Poisson imaging problems which leverages the Poisson Unbiased Risk Estimator (PURE) to eliminate the need for ground truth data. The resulting self-calibrating conformal prediction approach is applicable to any ill-conditioned Poisson linear imaging problem, and is particularly effective when combined with modern self-supervised image restoration techniques trained directly on measurement data. The proposed method is demonstrated through numerical experiments on image denoising and deblurring; its performance is comparable to supervised conformal prediction methods relying on ground truth data.
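The key ingredient is that the Poisson Unbiased Risk Estimator evaluates the reconstruction risk from the measurements alone, via the Poisson identity E[x_i g(y)] = E[y_i g(y − e_i)]. A minimal sketch for a linear denoiser (the denoiser f(y) = a·y and the intensity range are illustrative assumptions, not the paper's estimator, for which the identity has a closed form):

```python
import numpy as np

rng = np.random.default_rng(4)

def pure_linear(y, a):
    """Poisson Unbiased Risk Estimate for f(y) = a*y under y_i ~ Poisson(x_i).
    Term by term: E[a^2 y^2] estimates a^2 E[f-free moments], E[y(y-1)] = x^2
    replaces the inaccessible cross term x.f(y), and E[y^2 - y] = x^2
    replaces ||x||^2, so the expectation equals E||f(y) - x||^2."""
    return (a ** 2 * np.sum(y ** 2)
            - 2 * a * np.sum(y * (y - 1))
            + np.sum(y ** 2 - y))

x = rng.uniform(5, 20, 1000)         # unknown ground-truth intensities
y = rng.poisson(x).astype(float)     # Poisson measurements (all we observe)
a = 0.9
true_mse = np.sum((a * y - x) ** 2)  # needs x; only used here to check PURE
est = pure_linear(y, a)              # computed from y alone
```

In the conformal prediction pipeline, such ground-truth-free risk estimates replace the held-out calibration errors that standard (supervised) conformal prediction requires.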


Self-supervised Conformal Prediction for Uncertainty Quantification in Imaging Problems

February 2025

·

7 Reads

Most image restoration problems are ill-conditioned or ill-posed and hence involve significant uncertainty. Quantifying this uncertainty is crucial for reliably interpreting experimental results, particularly when reconstructed images inform critical decisions and science. However, most existing image restoration methods either fail to quantify uncertainty or provide estimates that are highly inaccurate. Conformal prediction has recently emerged as a flexible framework to equip any estimator with uncertainty quantification capabilities that, by construction, have nearly exact marginal coverage. To achieve this, conformal prediction relies on abundant ground truth data for calibration. However, in image restoration problems, reliable ground truth data is often expensive or impossible to acquire. Moreover, reliance on ground truth data can introduce large biases in situations of distribution shift between calibration and deployment. This paper develops a more robust approach to conformal prediction for image restoration problems by proposing a self-supervised conformal prediction method that leverages Stein's Unbiased Risk Estimator (SURE) to calibrate itself directly from the observed noisy measurements, bypassing the need for ground truth. The method is suitable for any ill-conditioned linear imaging inverse problem, and it is especially powerful when used with modern self-supervised image restoration techniques that can also be trained directly from measurement data. The proposed approach is demonstrated through numerical experiments on image denoising and deblurring, where it delivers results that are remarkably accurate and comparable to those obtained by supervised conformal prediction with ground truth data.
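The self-calibration rests on SURE: for y = x + ε with ε ~ N(0, σ²I), SURE(y) = ‖f(y) − y‖² − nσ² + 2σ² div f(y) is an unbiased estimate of the risk ‖f(y) − x‖² that never touches x. A minimal sketch for a linear shrinkage denoiser (f(y) = a·y is an illustrative assumption chosen because its divergence is a·n in closed form; it is not the paper's restoration network):

```python
import numpy as np

rng = np.random.default_rng(3)

def sure_linear(y, a, sigma):
    """Stein's Unbiased Risk Estimate for the linear denoiser f(y) = a*y:
    SURE = ||f(y)-y||^2 - n*sigma^2 + 2*sigma^2*div f(y), with div f = a*n.
    Its expectation equals E||f(y)-x||^2, computed without the ground truth x."""
    n = y.size
    return np.sum((a * y - y) ** 2) - n * sigma ** 2 + 2 * sigma ** 2 * a * n

sigma, a, n = 1.0, 0.7, 500
x = rng.normal(0, 2, n)                # unknown ground truth
y = x + sigma * rng.normal(size=n)     # noisy measurement (all we observe)
true_mse = np.sum((a * y - x) ** 2)    # needs x; only used here to check SURE
est = sure_linear(y, a, sigma)         # computed from y alone
```

For a general (nonlinear) denoiser the divergence term has no closed form and is typically approximated with a Monte Carlo trace estimate; the unbiasedness argument is unchanged.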


Statistical modelling and Bayesian inversion for a Compton imaging system: application to radioactive source localization

December 2024

·

11 Reads

This paper presents a statistical forward model for a Compton imaging system, called the Compton imager. This system, under development at the University of Illinois Urbana-Champaign, is a variant of the Compton camera with a single type of sensor that can act simultaneously as scatterer and absorber. This imager is convenient for imaging situations requiring a wide field of view. The proposed statistical forward model is then used to solve the inverse problem of estimating the location and energy of point-like sources from observed data. This inverse problem is formulated and solved in a Bayesian framework, using a Metropolis-within-Gibbs algorithm to estimate the location and an expectation-maximization algorithm to estimate the energy. This approach leads to more accurate estimation than the standard deterministic back-projection approach, with the additional benefit of uncertainty quantification in the low-photon imaging setting.
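The location step alternates single-coordinate random-walk Metropolis updates. A minimal Metropolis-within-Gibbs sketch (the 2-D Gaussian surrogate posterior and all names are illustrative assumptions; the actual target would be the Compton-imager likelihood over source positions):

```python
import numpy as np

rng = np.random.default_rng(5)

def log_post(theta):
    """Toy stand-in for the source-location posterior: an anisotropic
    Gaussian centred at a 'true' source position (2.0, -1.0)."""
    mu = np.array([2.0, -1.0])
    prec = np.array([1.0, 4.0])
    return -0.5 * np.sum(prec * (theta - mu) ** 2)

def metropolis_within_gibbs(log_post, theta0, n_iter=20000, prop_sd=1.0):
    """Sweep over coordinates, updating one at a time with a
    random-walk Metropolis accept/reject step."""
    theta = np.array(theta0, dtype=float)
    chain = np.empty((n_iter, theta.size))
    lp = log_post(theta)
    for it in range(n_iter):
        for j in range(theta.size):
            prop = theta.copy()
            prop[j] += prop_sd * rng.normal()       # perturb one coordinate
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis rule
                theta, lp = prop, lp_prop
        chain[it] = theta
    return chain

chain = metropolis_within_gibbs(log_post, [0.0, 0.0])
post_mean = chain[2000:].mean(axis=0)  # discard burn-in
```

The posterior samples give not only a point estimate of the source location but also credible regions, which is the uncertainty quantification advantage over deterministic back-projection noted in the abstract.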


Bayesian computation with generative diffusion models by Multilevel Monte Carlo

September 2024

·

65 Reads

Generative diffusion models have recently emerged as a powerful strategy to perform stochastic sampling in Bayesian inverse problems, delivering remarkably accurate solutions for a wide range of challenging applications. However, diffusion models often require a large number of neural function evaluations per sample in order to deliver accurate posterior samples. As a result, using diffusion models as stochastic samplers for Monte Carlo integration in Bayesian computation can be highly computationally expensive. This cost is especially high in large-scale inverse problems such as computational imaging, which rely on large neural networks that are expensive to evaluate. With Bayesian imaging problems in mind, this paper presents a Multilevel Monte Carlo strategy that significantly reduces the cost of Bayesian computation with diffusion models. This is achieved by exploiting cost-accuracy trade-offs inherent to diffusion models to carefully couple models of different levels of accuracy in a manner that significantly reduces the overall cost of the calculation, without reducing the final accuracy. The effectiveness of the proposed Multilevel Monte Carlo approach is demonstrated with three canonical computational imaging problems, where we observe a 4×-to-8× reduction in computational cost compared to conventional Monte Carlo averaging.


Citations (57)


... Despite remarkable progress in image reconstruction accuracy, most computational imaging methods still struggle to reliably quantify the uncertainty in their solutions. Some can deliver accurate confidence regions [36,15], but the methodology for supporting more advanced inferences such as hypothesis testing is still in its infancy. This critical methodological gap reduces the value of the reconstructed images as quantitative evidence, hindering the rigorous interpretation of scientific imaging experiments and robust interfacing of imaging pipelines with decision-making processes. ...

Reference:

Hypothesis Testing in Imaging Inverse Problems
Self-supervised Conformal Prediction for Uncertainty Quantification in Imaging Problems
  • Citing Chapter
  • May 2025

... With regards to Bayesian hypothesis testing, we note the optimization-based framework [35,40], which relies on log-concave models and hypotheses that are modelled as convex sets in X . Similar tests are considered in [14,4,26], again by modelling the hypotheses geometrically in pixel-space. Moreover, [20] tests for model misspecification in Bayesian imaging models with deep generative priors. ...

Scalable Bayesian uncertainty quantification with data-driven priors for radio interferometric imaging
  • Citing Article
  • August 2024

RAS Techniques and Instruments

... Such techniques [28,11,59] are valuable because they allow forms of inference that are not accessible with PnP techniques based on optimization [23] or denoising diffusion models [40]. For example, PnP MCMC techniques can be embedded within empirical Bayesian machinery to tackle semi-blind imaging problems [38] or to automatically optimize regularization parameters [64,61]. PnP MCMC techniques are also valuable for performing uncertainty quantification analyses [28,34]. ...

Marginal Likelihood Estimation in Semiblind Image Deconvolution: A Stochastic Approximation Approach
  • Citing Article
  • June 2024

SIAM Journal on Imaging Sciences

... Contrary to PSGLA, in MYULA, the noise is added outside the proximal operator. Recently the authors of [52] consider algorithms based on MYULA to accelerate Langevin samplers. Note that if a pre-trained denoiser is plugged in place of the proximal operator, then we recover PnP-ULA. ...
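The distinction quoted above is concrete in code: MYULA replaces the non-smooth term by its Moreau-Yosida envelope and adds the noise after the smoothed drift, whereas PSGLA applies the proximal operator last, so the noise enters its argument. A scalar sketch for the target π(x) ∝ exp(−x²/2 − |x|), where the prox of |·| is soft-thresholding (an illustrative target; step sizes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)

# Target pi(x) ∝ exp(-f(x) - g(x)), f(x) = x^2/2 smooth, g(x) = |x| non-smooth
grad_f = lambda x: x
prox_g = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)  # soft-threshold

gamma, lam, n = 0.01, 0.1, 100000
x_my = x_ps = 0.0
s_my, s_ps = [], []
for _ in range(n):
    xi1, xi2 = rng.normal(size=2)
    # MYULA: drift uses the Moreau-Yosida gradient (x - prox)/lam of g,
    # and the noise is added OUTSIDE any proximal operator
    x_my = (x_my - gamma * grad_f(x_my)
            - (gamma / lam) * (x_my - prox_g(x_my, lam))
            + np.sqrt(2 * gamma) * xi1)
    # PSGLA: the proximal operator is applied LAST, so the noise
    # enters its argument
    x_ps = prox_g(x_ps - gamma * grad_f(x_ps) + np.sqrt(2 * gamma) * xi2, gamma)
    s_my.append(x_my)
    s_ps.append(x_ps)

mean_my, mean_ps = np.mean(s_my), np.mean(s_ps)
```

Both chains target (approximately) the same symmetric distribution, but PSGLA produces exact zeros through the final soft-thresholding, a property MYULA's smoothed dynamics lack.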

Accelerated Bayesian Imaging by Relaxed Proximal-Point Langevin Sampling
  • Citing Article
  • June 2024

SIAM Journal on Imaging Sciences

... Notably, [29] embeds a DM within a PnP unadjusted Langevin algorithm (ULA) [24] that computes the posterior mean E(x|y), whereas [9] embeds a DM within a split-Gibbs sampler [58] that is equivariant to a noisy ULA [38]. Moreover, [8] and [31] consider PnP priors encoded by a normalizing flow, whereas [17] provides a general theoretical framework for using VAE and GAN priors. ...

Empirical Bayesian Imaging With Large-Scale Push-Forward Generative Priors
  • Citing Article
  • January 2024

Signal Processing Letters, IEEE

... 2. Approximation-free methods that integrate DMs with traditional posterior sampling methods: Examples include split Gibbs sampler (SGS) + DM methods [35,36,45,46], which are built upon the split Gibbs sampler for Bayesian inference [47,48], and sequential Monte Carlo (SMC) + DM methods [31][32][33][49][50][51][52][53], which combine DMs with SMC [54][55][56][57][58][59] to obtain asymptotically consistent posterior samples. ...

The Split Gibbs Sampler Revisited: Improvements to Its Algorithmic Structure and Augmented Target Distribution
  • Citing Article
  • November 2023

SIAM Journal on Imaging Sciences

... Modern Bayesian inversion methods are increasingly strongly reliant on machine learning techniques in order to leverage information that is available in the form of training data [3]. Deep generative modelling provides a highly effective approach for constructing machine-learningbased Bayesian inversion methods, both for directly modelling posterior distributions [4][5][6], as well as for constructing data-driven priors that can be combined with an explicit likelihood function derived from a physical forward model [3, §5]. ...

Learned Reconstruction Methods With Convergence Guarantees: A survey of concepts and applications
  • Citing Article
  • January 2023

IEEE Signal Processing Magazine

... Bayesian inversion has recently gained popularity as an alternative to variational inversion that allows for accurate uncertainty quantification in the estimates. Appropriate prior distributions in imaging are, e.g. total variation priors [62] (although not discretisation-invariant [45]), Cauchy [51,78] or other α-stable priors [9,79], hierarchical Horseshoe and Student's t priors [22,71,84], and machine-learning-based priors [46,47]. Image reconstruction problems are typically linear and, thus, Gaussian process priors are a natural choice [13,65,78]. ...

On Maximum a Posteriori Estimation with Plug & Play Priors and Stochastic Gradient Descent

Journal of Mathematical Imaging and Vision

... Such challenges arise in problems involving ℓ1 regularization (as in LASSO), total variation priors, and energy-based models with discontinuous potentials. There has been a vast literature on sampling from non-smooth potentials through Langevin dynamics, using either smoothing techniques such as the Moreau-Yosida envelope ([39], [4], [15]) or Gaussian smoothing ([8], [27] and [38]), or other more direct and computationally efficient methods such as [28], [25], [23], [20]. ...

A Proximal Markov Chain Monte Carlo Method for Bayesian Inference in Imaging Inverse Problems: When Langevin Meets Moreau
  • Citing Article
  • November 2022

SIAM Review