Nicolas Papadakis's research while affiliated with University of Bordeaux and other places
What is this page?
This page lists the scientific contributions of an author, who either does not have a ResearchGate profile, or has not yet added these contributions to their profile.
It was automatically created by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.
If you're a ResearchGate member, you can follow this page to keep up with this author's work.
If you are this author, and you don't want us to display this page anymore, please let us know.
Publications (59)
In this work, we present new proofs of convergence for Plug-and-Play (PnP) algorithms. PnP methods are efficient iterative algorithms for solving image inverse problems where regularization is performed by plugging a pre-trained denoiser in a proximal algorithm, such as Proximal Gradient Descent (PGD) or Douglas-Rachford Splitting (DRS). Recent res...
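To make the scheme concrete, here is a minimal NumPy sketch of PnP-PGD under simplifying assumptions: a quadratic data-fidelity term, and a plain box filter standing in for the pre-trained deep denoiser (the operators A and At, the step size tau, and the stand-in denoiser are all illustrative choices, not the paper's):

import numpy as np
from scipy.ndimage import uniform_filter

def pnp_pgd(y, A, At, denoiser, tau, n_iter=50):
    # PnP Proximal Gradient Descent: x <- D(x - tau * grad f(x)),
    # with f(x) = 0.5 * ||A x - y||^2 and the prox of the regularizer
    # replaced by the plugged-in denoiser D.
    x = At(y)  # crude initialization
    for _ in range(n_iter):
        grad = At(A(x) - y)           # gradient of the data-fidelity term
        x = denoiser(x - tau * grad)  # denoising step in place of the prox
    return x

# Toy usage: pure denoising (A = identity), box filter as stand-in denoiser.
rng = np.random.default_rng(0)
x_true = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
y = x_true + 0.1 * rng.standard_normal(x_true.shape)
x_hat = pnp_pgd(y, A=lambda v: v, At=lambda v: v,
                denoiser=lambda v: uniform_filter(v, size=3), tau=0.9)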
Plug-and-Play (PnP) methods are efficient iterative algorithms for solving ill-posed image inverse problems. PnP methods are obtained by using deep Gaussian denoisers instead of the proximal operator or the gradient-descent step within proximal algorithms. Current PnP schemes rely on data-fidelity terms that have either Lipschitz gradients or close...
This paper presents a new convergent Plug-and-Play (PnP) algorithm. PnP methods are efficient iterative algorithms for solving image inverse problems formulated as the minimization of the sum of a data-fidelity term and a regularization term. PnP methods perform regularization by plugging a pre-trained denoiser in a proximal algorithm, such as Prox...
In this paper, we propose to regularize ill-posed inverse problems using a deep hierarchical variational autoencoder (HVAE) as an image prior. The proposed method synthesizes the advantages of i) denoiser-based Plug & Play approaches and ii) generative model based approaches to inverse problems. First, we exploit VAE properties to design an effici...
This paper presents a new convergent Plug-and-Play (PnP) algorithm. PnP methods are efficient iterative algorithms for solving image inverse problems formulated as the minimization of the sum of a data-fidelity term and a regularization term. PnP methods perform regularization by plugging a pre-trained denoiser in a proximal algorithm, such as Prox...
We propose GOTEX, a general framework for texture synthesis by optimization that constrains the statistical distribution of local features. While our model encompasses several existing texture models, we focus on the case where the comparison between feature distributions relies on optimal transport distances. We show that the semi-dual formulation...
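In standard notation (our transcription, not the paper's exact display), the semi-dual formulation for a discrete feature distribution $\nu = \sum_{j=1}^J \nu_j \delta_{y_j}$ reads
$$\mathrm{OT}_c(\mu,\nu) \;=\; \max_{\psi \in \mathbb{R}^J} \;\int \psi^c(x)\, d\mu(x) \;+\; \sum_{j=1}^J \psi_j \nu_j, \qquad \psi^c(x) = \min_{j} \big( c(x,y_j) - \psi_j \big),$$
a finite-dimensional concave maximization that is well suited to stochastic gradient methods.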
Image super-resolution is a one-to-many problem, but most deep-learning-based methods only provide a single solution to this problem. In this work, we tackle the problem of diverse super-resolution by reusing VD-VAE, a state-of-the-art variational autoencoder (VAE). We find that the hierarchical latent representation learned by VD-VAE naturally s...
Plug-and-Play (PnP) methods solve ill-posed inverse problems through iterative proximal algorithms by replacing a proximal operator by a denoising operation. When applied with deep neural network denoisers, these methods have shown state-of-the-art visual performance for image restoration problems. However, their theoretical convergence analysis is...
Plug-and-Play methods constitute a class of iterative algorithms for imaging problems where regularization is performed by an off-the-shelf denoiser. Although Plug-and-Play methods can lead to tremendous visual performance for various image problems, the few existing convergence guarantees are based on unrealistic (or suboptimal) hypotheses on the...
Neural networks have revolutionized the field of data science, yielding remarkable solutions in a data-driven manner. For instance, in the field of mathematical imaging, they have surpassed traditional methods based on convex regularization. However, a fundamental theory supporting the practical applications is still in the early stages of developm...
This work addresses texture synthesis by relying on the local representation of images through their patch distributions. The main contribution is a framework that imposes the patch distributions at several scales using optimal transport. This leads to two formulations. First, a pixel-based optimization method is proposed, based on discrete optimal...
In this work, we propose a framework to learn a local regularization model for solving general image restoration problems. This regularizer is defined with a fully convolutional neural network that sees the image through a receptive field corresponding to small image patches. The regularizer is then learned as a critic between unpaired distribution...
In this work, we propose a framework to learn a local regularization model for solving general image restoration problems. This regularizer is defined with a fully convolutional neural network that sees the image through a receptive field corresponding to small image patches. The regularizer is then learned as a critic between unpaired distribution...
The use of the optimal transport cost for learning generative models has become popular with Wasserstein Generative Adversarial Networks (WGAN). Training of WGANs relies on a theoretical background: the calculation of the gradient of the optimal transport cost with respect to the generative model parameters. We first demonstrate that such a gradient may n...
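For context, the gradient at stake is usually written through an optimal Kantorovich potential $\varphi$ via an envelope-theorem argument (a standard formal identity; the paper analyzes precisely when it breaks down): for a generator $g_\theta$ pushing a latent distribution $\zeta$ forward to $\mu_\theta$,
$$\nabla_\theta\, \mathrm{OT}_c(\mu_\theta, \nu) \;=\; \mathbb{E}_{z \sim \zeta}\Big[\, \partial_\theta g_\theta(z)^\top\, \nabla \varphi\big(g_\theta(z)\big) \Big],$$
which requires the optimal potential to be suitably unique and differentiable along the generated samples.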
In this paper, we propose a framework to train a generative model for texture image synthesis from a single example. To do so, we exploit the local representation of images via the space of patches, that is, square sub-images of fixed size (e.g. $4\times 4$). Our main contribution is to consider optimal transport to enforce the multiscale patch dis...
This report presents a synthesis of the outcomes of the workshop on AI for Ocean, Atmosphere and Climate held in Brest in January 2020 in the framework of the LEFE/MANU project IA-OAC, the ANR project Melody and the AI chair Oceanix, with the additional support of Isblue. It provides a short description of the outcome of each of the 9 working groups which m...
Neural networks have revolutionized the field of data science, yielding remarkable solutions in a data-driven manner. For instance, in the field of mathematical imaging, they have surpassed traditional methods based on convex regularization. However, a fundamental theory supporting the practical applications is still in the early stages of developm...
In the near future, the Surface Water Ocean Topography (SWOT) mission will provide images of altimetric data at kilometric resolution. This unprecedented 2-dimensional data structure will allow the estimation of geostrophy-related quantities that are essential for studying the ocean surface dynamics and for data assimilation uses. To estimate these...
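For reference, the basic geostrophic balance (a textbook relation, not specific to the paper) links the surface currents to the sea surface height $\eta$ observed by the altimeter:
$$u_g = -\frac{g}{f}\,\frac{\partial \eta}{\partial y}, \qquad v_g = \frac{g}{f}\,\frac{\partial \eta}{\partial x},$$
with $g$ the gravity and $f$ the Coriolis parameter; since these quantities involve derivatives of $\eta$, estimating them from noisy kilometric-resolution images is delicate.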
This paper introduces the use of the proper generalized decomposition (PGD) method for the optical flow (OF) problem in a classical framework of Sobolev spaces, i.e., optical flow methods including a robust energy for the data fidelity term together with a quadratic penalizer for the regularization term. A mathematical study of PGD methods is first p...
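A generic instance of the energies in question (our transcription of the classical model) is
$$E(w) \;=\; \int_\Omega \rho\big( I_t + \nabla I \cdot w \big)\, dx \;+\; \alpha \int_\Omega |\nabla w|^2\, dx,$$
where $w$ is the optical flow, $\rho$ a robust penalty on the linearized brightness-constancy residual, and $\alpha > 0$ weights the quadratic regularization; the PGD method then seeks the minimizer as a sum of separable (rank-one) terms.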
An algorithm for approximating the p-Wasserstein distance between histograms defined on unstructured discrete grids is presented. It is based on the computation of a barycenter constrained to be supported on a low-dimensional subspace, which corresponds to a transshipment problem. A multi-scale strategy is also considered. The method provides spars...
We present a framework to simultaneously align and smooth data in the form of multiple point clouds sampled from unknown densities with support in a d-dimensional Euclidean space. This work is motivated by applications in bio-informatics where researchers aim to automatically normalize large datasets to compare and analyze characteristics within a...
Nonlinear eigenfunctions, induced by subgradients of one-homogeneous functionals (such as the 1-Laplacian), have been shown to be instrumental in segmentation, clustering, and image decomposition. We present a class of flows for finding such eigenfunctions, generalizing a method recently suggested by Nossek and Gilboa. We analyze the flows on grids and...
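The underlying nonlinear eigenproblem reads, for a one-homogeneous functional $J$ whose subgradient generalizes the 1-Laplacian,
$$\lambda\, u \;\in\; \partial J(u),$$
and the flows are designed so that their steady states satisfy this inclusion.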
The notion of Sinkhorn divergence has recently gained popularity in machine learning and statistics, as it makes feasible the use of smoothed optimal transportation distances for data analysis. The Sinkhorn divergence allows the fast computation of an entropically regularized Wasserstein distance between two probability distributions supported on a...
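In common notation (our transcription), writing $\mathrm{OT}_\varepsilon$ for the entropically regularized transport cost, the Sinkhorn divergence is the debiased quantity
$$S_\varepsilon(\mu,\nu) \;=\; \mathrm{OT}_\varepsilon(\mu,\nu) \;-\; \tfrac{1}{2}\,\mathrm{OT}_\varepsilon(\mu,\mu) \;-\; \tfrac{1}{2}\,\mathrm{OT}_\varepsilon(\nu,\nu),$$
which, unlike $\mathrm{OT}_\varepsilon$ itself, vanishes when $\mu = \nu$.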
This article describes a method for quickly computing the solution to the regularized optimal transport problem. It generalizes and improves upon the widely-used iterative Bregman projections algorithm (or Sinkhorn-Knopp algorithm). The idea is to overrelax the Bregman projection operators, allowing for faster convergence. In practice this correspo...
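A minimal NumPy sketch of the idea, assuming the common multiplicative (geometric) form of overrelaxation with a parameter theta in (1, 2); the paper's exact scheme, step choice, and stability safeguards may differ:

import numpy as np

def sinkhorn_overrelaxed(a, b, C, eps=0.05, theta=1.5, n_iter=500):
    # Entropic OT between histograms a and b with cost matrix C.
    # theta = 1 recovers the standard Sinkhorn-Knopp (Bregman projection)
    # updates; theta > 1 overrelaxes them (illustrative form).
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        u = u ** (1.0 - theta) * (a / (K @ v)) ** theta
        v = v ** (1.0 - theta) * (b / (K.T @ u)) ** theta
    P = u[:, None] * K * v[None, :]   # transport plan
    return P, float(np.sum(P * C))    # plan and (unregularized) cost

# Usage: two uniform histograms on a 1-D grid, squared-distance cost.
n = 50
x = np.linspace(0.0, 1.0, n)
C = (x[:, None] - x[None, :]) ** 2
a = b = np.full(n, 1.0 / n)
P, cost = sinkhorn_overrelaxed(a, b, C)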
This paper is an overview of results that have been obtained in [2] on the convex regularization of Wasserstein barycenters for random measures supported on \({\mathbb R}^{d}\). We discuss the existence and uniqueness of such barycenters for a large class of regularizing functions. A stability result of regularized barycenters in terms of Bregman dis...
This paper is concerned with the statistical analysis of data sets whose elements are random histograms. For the purpose of learning principal modes of variation from such data, we consider the issue of computing the PCA of histograms with respect to the 2-Wasserstein distance between probability measures. To this end, we propose to compare the metho...
We focus on the maximum regularization parameter for anisotropic total-variation denoising. It corresponds to the minimum value of the regularization parameter above which the solution remains constant. While this value is well known for the Lasso, such a critical value has not been investigated in detail for the total variation. Yet it is of i...
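For comparison, the Lasso case alluded to is fully explicit: for $\min_x \frac{1}{2}\|y - Ax\|^2 + \lambda \|x\|_1$, the solution is $0$ as soon as $\lambda \ge \lambda_{\max} = \|A^\top y\|_\infty$. The total-variation analogue studied here is the critical value above which the denoised image collapses to a constant (its mean).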
In this paper, we propose a new framework to remove parts of the systematic errors affecting popular restoration algorithms, with a special focus on image processing tasks. Extending ideas that emerged for $\ell_1$ regularization, we develop an approach that can help refit the results of standard methods towards the input data. Total variatio...
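The prototypical $\ell_1$ instance of such re-fitting, recalled here for context (a standard construction, not the paper's general method), re-solves the data-fidelity problem on the support detected by the biased estimate:
$$\hat{x}^{\mathrm{refit}} \in \arg\min_{x}\ \|Ax - y\|^2 \quad \text{s.t.} \quad \mathrm{supp}(x) \subseteq \mathrm{supp}(\hat{x}^{\ell_1}),$$
keeping the identified structure while restoring the amplitudes shrunk by the regularization.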
The concept of barycenter in the Wasserstein space allows one to define a notion of Fréchet mean of a set of probability measures. However, depending on the data at hand, such barycenters may be irregular. In this paper, we thus introduce a convex regularization of Wasserstein barycenters for random measures supported on ${\mathbb R}^{d}$. We prove t...
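In this setting the regularized barycenter of measures $\mu_1, \dots, \mu_n$ takes the form (our transcription)
$$\hat{\nu} \;\in\; \arg\min_{\nu}\ \frac{1}{n} \sum_{i=1}^{n} W_2^2(\mu_i, \nu) \;+\; \gamma\, E(\nu),$$
where $E$ is a convex regularizing function and $\gamma > 0$ controls the amount of smoothing.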
Optimal transportation theory is a powerful tool to deal with image interpolation. This was first investigated by Benamou and Brenier [BB00], where an algorithm based on the minimization of a kinetic energy under a conservation of mass constraint was devised. By structure, this algorithm does not preserve image regions along the optimal interpo...
Magnetic resonance (MR) guided high intensity focused ultrasound and external beam radiotherapy interventions, which we shall refer to as beam therapies/interventions, are promising techniques for the non-invasive ablation of tumours in abdominal organs. However, therapeutic energy delivery in these areas becomes challenging due to the continuous d...
Optimal transport (OT) is a major statistical tool to measure similarity between features or to match and average features. However, OT requires some relaxation and regularization to be robust to outliers. With relaxed methods, as one feature can be matched to several others, important interpolations between different features arise. This is not an i...
Eigenvalue analysis based on linear operators has been extensively used in signal and image processing to solve a variety of problems such as segmentation, dimensionality reduction and more. Recently, nonlinear spectral approaches, based on the total variation functional have been proposed. In this context, functions for which the nonlinear eigenva...
This work is about the use of regularized optimal-transport distances for convex, histogram-based image segmentation. In the considered framework, fixed exemplar histograms define a prior on the statistical features of the two regions in competition. In this paper, we investigate the use of various transport-based cost functions as discrepancy meas...
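Schematically (a generic form of such models, not the paper's exact functional), the segmentation is obtained from a relaxed partition function $u : \Omega \to [0,1]$ by minimizing
$$\min_{u \in [0,1]^{|\Omega|}}\ \mathrm{TV}(u) \;+\; \lambda\,\Big( \mathcal{D}\big(h(u), h^0_1\big) + \mathcal{D}\big(h(\mathbf{1}-u), h^0_2\big) \Big),$$
where $h(u)$ is the feature histogram of the region weighted by $u$, $h^0_1$ and $h^0_2$ are the exemplar histograms, and $\mathcal{D}$ is the (regularized optimal-transport) discrepancy under investigation.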
Bias in image restoration algorithms can hamper further analysis, typically when the intensities have a physical meaning of interest, e.g., in medical imaging. We propose to suppress a part of the bias -- the method bias -- while leaving unchanged the other unavoidable part -- the model bias. Our debiasing technique can be used for any locally aff...
This paper deals with the assimilation of image-type data. Such data, e.g. satellite images, have good properties (dense coverage in space and time), but also one crucial problem for data assimilation: they are affected by spatially correlated errors. Classical approaches in data assimilation assume uncorrelated noise, because the prope...
This book constitutes the refereed proceedings of the 5th International Conference on Scale Space and Variational Methods in Computer Vision, SSVM 2015, held in Lège-Cap Ferret, France, in May 2015. The 56 revised full papers presented were carefully reviewed and selected from 83 submissions. The papers are organized in the following topical sectio...
This paper studies the problem of color transfer between images using optimal transport techniques. While optimal transport provides a generic framework to handle statistics properly, it is also known to be sensitive to noise and outliers, and is not suitable for direct application to images without additional postprocessing regularization to remove artifacts. To tackl...
This article presents the work we have carried out in recent years on the analysis of Météosat Second Generation (MSG) images. Compared to the first generation, MSG data have a higher spatial and temporal resolution, giving access to a range of information related to the observed climate phenomena. ...
This paper is concerned with the relaxation of nonconvex functionals used in image processing. We review most of the recently introduced relaxation methods, and we propose a new convex one based on a probabilistic approach, which has the advantages of being intuitive and flexible, and of involving an algorithm without inner loops. We investigate in detai...
In this paper, we present a general convex formulation for global histogram-based binary segmentation. The model relies on a data term comparing the histograms of the regions to segment with reference histograms, as well as a TV regularization penalizing the length of the interface between the two regions. The framework is based on...
Sequential and variational assimilation methods allow tracking physical states using a dynamic prior together with external observations of the studied system. However, when dense satellite image observations are available, such approaches correct the amplitude of the different state values but do not incorporate the spatial errors of...
The Benamou and Brenier formulation of the Monge transportation problem has proven to be of great interest in image processing to compute warpings and distances between pairs of images. In some applications, however, the built-in minimization of kinetic energy does not give satisfactory results. In particular cases where some specific regions represent phys...
At the present time, the Earth is observed by dozens of satellites providing permanent information on the evolution of the atmosphere and of the ocean. This information is partly used by transforming radiances into state variables of the models and then performing a standard Data Assimilation method. But the dynamics of these images also contains an i...
Citations
... One can partially circumvent this limitation by combining a proximal algorithm with a generic weakly convex regularizer, for which the proximal operator is well-defined. The convergence to stationary points of the objective is established in [24,22] for the forward-backward splitting [5] based on the very general convergence result for functions with the Kurdyka-Łojasiewicz (KL) property given in [3]. When R is differentiable, as will be assumed in our setting, similar results can be obtained for gradient descent applied to the non-convex objective (2), see [3]. ...
... They have nice mathematical properties, since they are the Fréchet means with respect to the Wasserstein distance [3][4][5]. Their applications range from mixing textures [6,7], stippling patterns and bidirectional reflectance distribution functions [8], or color distributions and shapes [9], through averaging of sensor data [10], to Bayesian statistics [11], just to name a few. For further reading, we refer to the surveys [12,13]. ...
... The convergence proof was later provided in [1]. Under the same setting, a nonlinear power method was proposed in [6] with connections to proximal operators and neural networks. For the case when J is the total variation (TV) and H is the L_1 norm, the Rayleigh quotient (6) approximates the Cheeger cut problem [11,5]. ...
Reference: Minimizing Quotient Regularization Model
... More details of the texture synthesis can be found in [20,21]. Please note that our framework is generic and a variety of different texture synthesis techniques can be applied in our approach, such as [28][29][30]. ...
... where D is a data-fidelity term which depends on the noise model and measures how well the reconstruction fits the observation, and R is a regularizer which copes with the ill-posedness and incorporates prior information. Over the last years, learned regularizers like the total deep variation [46,47] or adversarial regularizers [55,58,63], as well as extensions of plug-and-play and unrolled methods [24,78,84,88] with learned denoisers [32,35,67,91], showed promising results; see [8,59] for an overview. Furthermore, many papers leveraged the tractability of the likelihood of normalizing flows (NFs) to learn a prior [9,30,85,86,90] or use conditional variants to learn the posterior [12,53,79,87]. They utilize the invertibility to optimize over the range of the flow together with the Gaussian assumption on the latent space. ...
... For all these results we see that for fixed ε > 0 the empirical EOT cost admits faster rates in n than the empirical unregularized OT cost. Such results are complemented by extensive research on distributional limits for the empirical OT cost at scaling rate n^{1/2} which establish the parametric rate to be sharp [5,11,34,35,36,37,44,49,59]. However, as the regularization parameter decreases to zero, the statistical error bound generally deteriorates in high dimensions, polynomially or even exponentially in ε^{-1}. ...
... In preparation for the SWOT mission, the Jet Propulsion Laboratory (JPL) developed the observation error covariance model [2,4] which served as a basis for numerous experiments aimed at assessing both the added value of the mission in monitoring the global ocean (e.g., [5][6][7][8][9]) and at dealing with spatial correlations of the errors contaminating SWOT observations (e.g., [10][11][12]). ...
... The Wasserstein barycenter problem (i.e., Example 1.4) has recently become a highly active research area due to its widespread applications in statistical inference [14,62], pattern recognition [64], image synthesis [49], clustering [73], and various other fields in machine learning. Most studies about the computation of the Wasserstein barycenter focus on the case where µ_1, ...
... The quotient minimization (6) also appears in learning parameterized regularizations [3] and filter functions [2]. In this paper, we propose a novel scheme to minimize the general model (1) based on a gradient descent flow for the Rayleigh quotient minimization [9]. We then apply the proposed algorithm to the three specific examples (L_1/L_2, L_1/S_K, and L_1/L_2 on the gradient). ...
Reference: Minimizing Quotient Regularization Model
... For other noise types, the data fidelity term is formulated differently. We give three specific signal and image processing examples that fit into our general model (1). ...
Reference: Minimizing Quotient Regularization Model