
André Jalobeanu, PhD
BayesMap Solutions

About

62 Publications
19,209 Reads
674 Citations
Introduction
My current research projects include full-waveform topographic LiDAR data processing, point cloud processing, ground filtering and gridding for DEM generation. My main area is data processing and analysis (images, signals, time series) through Bayesian inference, and one of the priorities is uncertainty computation. My research is application-inspired, and so far it has been motivated by various inverse problems in remote sensing, planetary sciences, Earth sciences and astronomy.
Additional affiliations
September 2014 - present
BayesMap Solutions, LLC
Position
  • Founder & Research Scientist
November 2012 - July 2014
University of Texas at Austin
Position
  • Research Associate
October 2006 - February 2008
University of Strasbourg
Position
  • Researcher
Description
  • Taught the M.S. astronomical image processing class for 2 years at ENSPS (engineering school)
Education
September 1996 - June 1998
University of Nice Sophia Antipolis
Field of study
  • Astronomy, Astrophysics, Image Processing and High Resolution Imaging
September 1995 - June 1996
University of Nice Sophia Antipolis
Field of study
  • Physics (Statistical Physics and Quantum Physics)

Publications (62)
Conference Paper
Full-text available
We recently developed a new point cloud registration algorithm. Compared to Iterated Closest Point (ICP) techniques, it is robust to noise and outliers, and easier to use, as it is less sensitive to initial conditions. It minimizes the entropy of the joint point cloud (including intensity attributes to help register areas with poor relief), uses a...
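As a rough illustration of the entropy idea (not the algorithm described above), the sketch below registers a 2D point cloud by minimizing a kernel-density estimate of the entropy of the merged cloud over a rigid transform; the function names, the KDE, and the Nelder-Mead optimizer are choices made for this example only.
```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import minimize

def joint_entropy(points):
    """Resubstitution estimate of differential entropy using a Gaussian KDE."""
    kde = gaussian_kde(points.T)            # points: (N, d)
    return -np.mean(np.log(kde(points.T) + 1e-12))

def apply_rigid_2d(params, pts):
    """params = (theta, tx, ty): rotate, then translate, a 2D point cloud."""
    theta, tx, ty = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return pts @ R.T + np.array([tx, ty])

def register(fixed, moving, x0=(0.0, 0.0, 0.0)):
    """Minimize the entropy of the merged cloud over a rigid 2D transform."""
    cost = lambda p: joint_entropy(np.vstack([fixed, apply_rigid_2d(p, moving)]))
    return minimize(cost, x0, method="Nelder-Mead").x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s = np.linspace(0, 3, 200)
    fixed = np.column_stack([s, np.sin(2 * s)]) + 0.02 * rng.normal(size=(200, 2))
    moving = apply_rigid_2d((0.2, 0.3, -0.1), fixed)
    print(register(fixed, moving))          # roughly the inverse of (0.2, 0.3, -0.1)
```
Intensity attributes, as mentioned in the abstract, would simply appear as extra coordinates of each point before the density estimate.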
Article
Full-text available
Is it possible to use stereo images to generate point clouds and to compute rigorous uncertainty maps? Currently, neither modern commercial photogrammetric software nor state of the art algorithms are able to provide a spatial distribution of uncertainty. In this study, we explain why this is the case, despite a high demand from the user community....
Conference Paper
Full-text available
Change detection using remote sensing has become increasingly important for characterization of natural disasters. Pre- and post-event LiDAR data can be used to identify and quantify changes. The main challenge consists of producing reliable change maps that are robust to differences in collection conditions, free of processing artifacts, and that...
Article
Full-text available
Topographic mapping is one of the main applications of airborne LiDAR. Waveform digitization and processing allow for both an improved accuracy and a higher ground detection rate compared to discrete return systems. Nevertheless, the quality of the ground peak estimation, based on last return extraction, strongly depends on the algorithm used. Best...
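For context, a bare-bones version of the last-return idea might look like the sketch below: detect the latest peak in a recorded waveform and refine its position with a single-Gaussian fit, treating it as the candidate ground echo. This only illustrates the generic approach the abstract discusses, not the estimators compared in the paper; the waveform, window size and threshold are synthetic.
```python
import numpy as np
from scipy.signal import find_peaks
from scipy.optimize import curve_fit

def gauss(t, a, mu, sigma, b):
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2) + b

def last_peak_time(t, w, min_height=None):
    """Time of the last detected echo, refined by a single-Gaussian fit."""
    peaks, _ = find_peaks(w, height=min_height)
    if len(peaks) == 0:
        return None
    i = peaks[-1]                                   # last (lowest) echo
    lo, hi = max(0, i - 10), min(len(t), i + 11)
    p0 = (w[i], t[i], 3 * (t[1] - t[0]), np.median(w))
    popt, _ = curve_fit(gauss, t[lo:hi], w[lo:hi], p0=p0, maxfev=5000)
    return popt[1]                                  # Gaussian centre = echo time

if __name__ == "__main__":
    t = np.arange(0.0, 60.0, 1.0)                   # ns
    rng = np.random.default_rng(1)
    w = gauss(t, 80, 20, 2.5, 5) + gauss(t, 40, 42, 2.0, 0) + rng.normal(0, 1, t.size)
    print(last_peak_time(t, w, min_height=10))      # ~42 ns (candidate ground echo)
```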
Article
Full-text available
Extreme precipitation events may cause flooding, slope failure, erosion, deposition, and damage to infrastructure over a regional scale, but the impacts of these events are often difficult to fully characterize. Regional‐scale landscape change occurred during an extreme rain event in June 2012 in northeastern Minnesota. Landscape change was documen...
Conference Paper
Full-text available
The Naval Postgraduate School (NPS) Remote Sensing Center (RSC) and research partners have completed a remote sensing pilot project in support of California post-earthquake-event emergency response. The project goals were to dovetail emergency management requirements with remote sensing capabilities to develop prototype map products for improved ea...
Conference Paper
Full-text available
Terrestrial LiDAR scans of building models collected with a FARO Focus3D and a RIEGL VZ-400 were used to investigate point-to-point and model-to-model LiDAR change detection. LiDAR data were scaled, decimated, and georegistered to mimic real world airborne collects. Two physical building models were used to explore various aspects of the change det...
Conference Paper
Full-text available
Change detection using remote sensing has become increasingly important for characterization of natural disasters. Pre- and post-event LiDAR data can be used to identify and quantify changes. The main challenge consists of producing reliable change maps that are robust to differences in collection conditions, free of processing artifacts, and that...
Data
Predictive vertical accuracy map (spatial distribution of DEM uncertainty) Version 3
Conference Paper
Full-text available
When both pre- and post-event LiDAR point clouds are available, change detection can be performed to identify areas that were most affected by a disaster event, and to obtain a map of quantitative changes in terms of height differences. In the case of earthquakes in built-up areas for instance, first responders can use a LiDAR change map to help pr...
Data
Full-text available
Website of the AutoProbaDTM Project (April 2010 - October 2012, research still in progress) Automated Probabilistic Digital Terrain Model generation from raw LiDAR data (DEM generation, full waveform LiDAR, Bayesian inference, uncertainty, automated mapping)
Conference Paper
Full-text available
The main objective of the AutoProbaDTM project was to develop new methods for automated probabilistic topographic map production using the latest LiDAR scanners. It included algorithmic development, implementation and validation over a 200 km² test area in continental Portugal, representing roughly 100 GB of raw data and half a billion waveforms. W...
Data
Full-text available
FCT Project PTDC/EIA-CCO/102669/2008 “AutoProbaDTM” Automated Probabilistic Digital Terrain Model generation from raw LiDAR data Keywords: DEM generation, full waveform LiDAR, Bayesian inference, uncertainty, automated mapping FINAL REPORT
Conference Paper
Full-text available
There is quite some debate in the earthquake community about the complexity of the recurrence models that should be considered to describe the recurrence of events on given faults. The null-hypothesis testing approach seems to be favored as more rigorous and conservative, in particular for hazard assessment purposes, whereas still few Bayesian app...
Conference Paper
Full-text available
MUSE, the Multi Unit Spectroscopic Explorer, is a 2nd generation integral-field spectrograph under final assembly to see first light at the Very Large Telescope in 2013. By capturing ~ 90000 optical spectra in a single exposure, MUSE represents a challenge for data reduction and analysis. We summarise here the main features of the Data Reduction Sy...
Conference Paper
Sea-cliff evolution is an important aspect to take into account in the evolution of the world's coastlines. Sea cliffs can suffer erosion induced by storm-wave incidence or by subaerial erosion, leading to retreat of the coastline. However, the sediment that comes from cliff retreat represents an important sediment source to t...
Article
Full-text available
We propose a probabilistic framework in which different types of information pertaining to the recurrence of large earthquakes on a fault can be combined, in order to constrain the parameter space of candidate recurrence models and provide the best combination of models knowing the chosen data set and priors. We use Bayesian inference for parameter...
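The basic Bayesian step on a single data type can be sketched as a grid posterior over the two parameters of a lognormal recurrence model given a handful of inter-event times. The intervals, grids and flat prior below are placeholders for illustration, not values or models from the paper.
```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical inter-event times (years); not data from the paper.
intervals = np.array([210.0, 340.0, 180.0, 400.0, 260.0])

mu_grid = np.linspace(4.5, 6.5, 201)       # log-median recurrence time
sigma_grid = np.linspace(0.1, 1.2, 111)    # log-std (aperiodicity)
M, S = np.meshgrid(mu_grid, sigma_grid, indexing="ij")

# Log-likelihood of the observed intervals for every (mu, sigma) pair.
loglik = np.zeros_like(M)
for dt in intervals:
    loglik += lognorm.logpdf(dt, s=S, scale=np.exp(M))

log_post = loglik                           # flat prior over the grid
post = np.exp(log_post - log_post.max())
post /= post.sum() * (mu_grid[1] - mu_grid[0]) * (sigma_grid[1] - sigma_grid[0])

i, j = np.unravel_index(post.argmax(), post.shape)
print("MAP median recurrence ~ %.0f years, sigma = %.2f" % (np.exp(mu_grid[i]), sigma_grid[j]))
```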
Conference Paper
Full-text available
Within the AutoProbaDTM project, we plan to develop fast and fully automated techniques to derive topographic maps and error maps from full-waveform airborne LiDAR data. A probabilistic approach is used to model surfaces and data acquisition, solve inverse problems and handle uncertainty. Bayesian inference provides a rigorous fra...
Article
New generation integral-field spectrographs (IFS) such as MUSE will soon start observing distant astronomical objects with much higher spectral and spatial resolutions than today's instruments. The new hyperspectral observations will represent a huge amount of scientific data (up to 1.2 GB for each raw MUSE acquisition) whose analysis requires the...
Article
Full-text available
The development of new data processing methods requires access to the raw data. Unfortunately, some LiDAR manufacturers do not provide information about the format, and users can only rely on proprietary software to do their processing. Even if using black boxes might be sufficient for some simple applications, it might be an impediment to scien...
Conference Paper
Full-text available
Monitoring the sediment budget of coastal systems is essential to understand the coastal equilibrium, and is an important aspect to be considered in coastal management. Thus, the identification and the quantitative evaluation of sedimentary sources and sinks are the first steps towards a better understanding of the dynamics of coastal morphology. Th...
Conference Paper
Full-text available
The main goal of the AutoProbaDTM project is to derive new methodologies to measure the topography and terrain characteristics using the latest full-waveform airborne LiDAR technology. It includes algorithmic development, implementation, and validation over a large test area. In the long run, we wish to develop techniques that are scalable and appl...
Conference Paper
Full-text available
Within the AutoProbaDTM project, we plan to develop fast and fully automated techniques to derive topographic maps from full-waveform airborne LiDAR data, based on a probabilistic approach to modelling surfaces and data acquisition, solving inverse problems and handling uncertainty. Bayesian inference provides a rigorous framework for unsupervised...
Conference Paper
Full-text available
We present a new probabilistic method for digital surface model generation from optical stereo pairs, with an expected ability to propagate errors from the data to the final result, providing spatial uncertainty estimates to be used for quantitative analysis in planetary or Earth sciences. Existing stereo-derived surfaces lack rigorous, quantitative...
Article
Full-text available
Today, the probabilistic seismic hazard assessment (PSHA) community relies on stochastic models to compute occurrence probabilities for large earthquakes. Considerable efforts have been devoted to extracting information from long catalogs of large earthquakes based on instrumental, historical, archeological and paleoseismological data. However, the...
Conference Paper
Full-text available
A new method for reconstructing digital elevation models (DEM) from optical stereo pairs is proposed. The main originality is the ability to propagate errors from the observed data to the final result, providing all the spatial accuracy estimates required for the use of topography in planetary or Earth science applications. In general, stereo-deriv...
Conference Paper
Full-text available
We propose a new method to measure changes in terrain topography from two optical stereo image pairs acquired at different dates. The main novelty is the ability to compute the spatial distribution of uncertainty, thanks to stochastic modeling and probabilistic inference. Thus, scientists will have access to quantitative error estimates of loc...
Article
Full-text available
In this paper, we propose a model-based approach for the multiresolution fusion of satellite images. Given the high-spatial-resolution panchromatic (Pan) image and the low-spatial- and high-spectral-resolution multispectral (MS) image acquired over the same geographical area, the problem is to generate a high-spatial- and high-spectral-resolution M...
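For orientation, the sketch below shows a classical high-pass-injection baseline for this Pan/MS fusion problem; it is not the model-based method proposed in the paper, and the injection gain and smoothing scale are arbitrary choices made for the example.
```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def pansharpen_hpf(ms, pan, ratio=4, sigma=2.0):
    """ms: (bands, h, w) low-resolution MS; pan: (h*ratio, w*ratio) panchromatic."""
    detail = pan - gaussian_filter(pan, sigma)          # high-frequency part of Pan
    fused = []
    for band in ms:
        up = zoom(band, ratio, order=3)                 # bicubic upsampling
        gain = band.std() / (pan.std() + 1e-12)         # crude per-band injection gain
        fused.append(up + gain * detail)
    return np.stack(fused)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pan = rng.random((256, 256))
    ms = rng.random((4, 64, 64))
    print(pansharpen_hpf(ms, pan).shape)   # (4, 256, 256)
```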
Data
Full-text available
We focus on an area comprising roughly a quarter of the 1°×1° GDEM tile due to the size of the available reference data (raster DEM provided by the Portuguese Geographic Institute, actually 15.3% of the tile area). Within this area the elevation ranges between 0 and 250 m and the slope never exceeds 25°. The differences between GDEM and reference D...
Conference Paper
Full-text available
We propose a new method for the measurement of high resolution topography from an optical stereo pair. The main contribution is the ability to propagate errors from the imperfect observed data to the final result, providing all accuracy estimates required for the use of topography in planetary or Earth science applications. Indeed, digital elevatio...
Conference Paper
Full-text available
Flood maps are usually computed by thresholding digital elevation models (DEM) without taking into account errors on the topography. Even if scientists wish to do so in the future, the only information about DEM uncertainty available now is a RMS error at best. Thus, we propose to use our recent work on uncertainty estimation, allowing us to recons...
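The underlying idea can be sketched in a few lines: given a per-cell elevation mean and standard deviation, map the probability that each cell lies below a given water level instead of applying a hard threshold. The Gaussian error model and the numbers below are illustrative assumptions, not results from the paper.
```python
import numpy as np
from scipy.stats import norm

def flood_probability(dem_mean, dem_std, water_level):
    """P(true elevation < water_level) per cell, under a Gaussian error model."""
    return norm.cdf((water_level - dem_mean) / np.maximum(dem_std, 1e-6))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dem = rng.normal(10.0, 3.0, size=(100, 100))   # per-cell mean elevation (m), synthetic
    sigma = np.full_like(dem, 0.5)                 # per-cell elevation std (m), synthetic
    p = flood_probability(dem, sigma, water_level=9.0)
    print((p > 0.5).mean())   # fraction of cells more likely flooded than not
```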
Conference Paper
Full-text available
We propose a new method for the measurement of high resolution topography from a stereo pair. The main application area is the study of planetary surfaces. Digital elevation models (DEM) computed from image pairs using state of the art algorithms usually lack quantitative error estimates. This can be a major issue when the result is used to measure...
Conference Paper
Full-text available
In this paper we propose a model based approach for multi-resolution fusion of satellite images. Given the high spatial resolution panchromatic (Pan) image and a low spatial and high spectral resolution multi-spectral (MS) image of the same geographical area, the problem is to generate a high spatial and high spectral resolution multi-spectral imag...
Conference Paper
Full-text available
We focus on a geophysical application of image processing: the measurement of high resolution ground deformation from two optical satellite images taken at different dates. Disparity maps estimated from image pairs usually lack quantitative error estimates. This is a major issue for measuring physical parameters, such as ground deformation or topog...
Article
Virtual Observatories give us access to huge amounts of image data that are often redundant. Our goal is to take advantage of this redundancy by combining images of the same field of view into a single model. To achieve this goal, we propose to develop a multi-source data fusion method that relies on probability and band-limited signal theory. The...
Conference Paper
Full-text available
Disparity maps estimated using computer vision-derived algorithms usually lack quantitative error estimates. This can be a major issue when the result is used to measure reliable physical parameters, such as topography for instance. Thus, we developed a new method to infer the dense disparity map from two images. We use a probabilistic approach in...
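As a simplified illustration of attaching uncertainty to a dense disparity map (not the probabilistic inference developed in the paper), the sketch below combines block matching, parabolic sub-pixel refinement, and a Laplace-style standard deviation derived from the curvature of the matching cost; window size and noise model are assumptions.
```python
import numpy as np
from scipy.ndimage import uniform_filter

def disparity_with_uncertainty(left, right, d_max, win=7, noise_std=1.0):
    """Integer + sub-pixel disparity and a rough per-pixel standard deviation."""
    h, w = left.shape
    costs = np.full((d_max + 1, h, w), 1e9)
    for d in range(d_max + 1):
        diff = np.full((h, w), 1e3)                       # large cost where no overlap
        diff[:, d:] = (left[:, d:] - right[:, :w - d]) ** 2
        costs[d] = uniform_filter(diff, size=win) * win * win     # windowed SSD
    d_int = np.clip(np.argmin(costs, axis=0), 1, d_max - 1)
    take = lambda idx: np.take_along_axis(costs, idx[None], axis=0)[0]
    c0, cm, cp = take(d_int), take(d_int - 1), take(d_int + 1)
    curv = np.maximum(cm - 2 * c0 + cp, 1e-6)             # discrete 2nd derivative of the cost
    d_sub = d_int + (cm - cp) / (2 * curv)                # parabolic sub-pixel refinement
    d_std = np.sqrt(4 * noise_std ** 2 / curv)            # Laplace-style uncertainty
    return d_sub, d_std

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = uniform_filter(rng.normal(size=(80, 120)), size=3)
    left = scene + 0.05 * rng.normal(size=scene.shape)
    right = np.roll(scene, -4, axis=1) + 0.05 * rng.normal(size=scene.shape)   # true disparity = 4 px
    d_sub, d_std = disparity_with_uncertainty(left, right, d_max=8, noise_std=0.05)
    print(np.median(d_sub[:, 10:-10]), np.median(d_std[:, 10:-10]))   # ~4 px, small std
```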
Article
Full-text available
When analyzing rock deformation experimental data, one deals with both uncertainty and complexity. Though each part of the problem might be simple, the relationships between them can form a complex system. This often leads to partial or only qualitative data analyses from the experimental rock mechanics community, which limits the impact of these s...
Conference Paper
Full-text available
When it comes to manipulating uncertain knowledge such as noisy observations of physical quantities, one may ask how to do it in a simple way. Processing corrupted signals or images always propagates the uncertainties from the data to the final results, whether these errors are explicitly computed or not. When such error estimates are provided, it...
Chapter
Full-text available
We propose a Bayesian approach to infer the parameters of both blur and noise in remote sensing images. The modulation transfer function (MTF) of the imaging system, including atmosphere, optics and pixel-level sampling, is modeled by a parametric function with a small number of parameters. The noise is assumed to be white, additive and Gaussian. B...
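One ingredient of such parameter estimation can be sketched simply: because the MTF suppresses nearly all scene content at the highest spatial frequencies, the white-noise level can be read off the tail of the periodogram. The cut-off frequency below is an arbitrary choice, and this is not the Bayesian estimator of the chapter.
```python
import numpy as np

def estimate_noise_std(img, freq_cut=0.45):
    """White-noise std from Fourier coefficients beyond freq_cut (cycles/pixel),
    assuming the blur has removed essentially all scene content there."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    high = np.hypot(fx, fy) > freq_cut
    # For white noise of variance s^2, E|F_k|^2 = N * s^2 (N = pixel count).
    return np.sqrt(np.mean(np.abs(F[high]) ** 2) / img.size)

if __name__ == "__main__":
    from scipy.ndimage import gaussian_filter
    rng = np.random.default_rng(0)
    blurred = gaussian_filter(rng.normal(size=(256, 256)), 3.0)
    noisy = blurred + 0.05 * rng.normal(size=blurred.shape)
    print(estimate_noise_std(noisy))   # close to the true value 0.05
```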
Article
Full-text available
We present an atlas of Hubble Space Telescope images and ground-based, long-slit, narrowband spectra centered on the 6584 Å line of [N II] and the 5007 Å line of [O III]. The spectra were obtained for a variety of slit positions across each target (as shown on the images) in an effort to account for nonspherical nebular geometries in a robust manne...
Patent
Full-text available
The invention concerns the processing of digital images captured by detection of electromagnetic waves, such as satellite pictures. The processing consists of applying a parameterizable fractal model (M) to the Fourier transforms of the pixels of the image and comparing (22) the modelled transforms thus obtained (aijq, wo) to the initial transforms (ai...
Conference Paper
Full-text available
Generative models of natural images have long been used in computer vision. However, since they only describe the statistics of 2D scenes, they fail to capture all the properties of the underlying 3D world. Even though such models are sufficient for many vision tasks, a 3D scene model is needed when it comes to inferring a 3D object or its characte...
Conference Paper
Full-text available
We present a new Bayesian vision technique that aims at recovering a shape from two or more noisy observations taken under similar lighting conditions. The shape is parametrized by a piecewise linear height field, textured by a piecewise linear irradiance field, and we assume Gaussian Markovian priors for both shape vertices and irradiance variable...
Article
Full-text available
The deconvolution of blurred and noisy satellite images is an ill-posed inverse problem, which can be regularized within a Bayesian context by using an a priori model of the reconstructed solution. Since real satellite data show spatially variant characteristics, we propose here to use an inhomogeneous model. We use the maximum likelihood estimator...
Article
In this work, we study the 3D geometry of the small bodies in our Solar System in order to derive a probabilistic model of such objects. Images taken by various spacecraft seem to exhibit a fractal behaviour, which we propose to investigate by using a multiscale approach. The idea is to look for a scale-invariant model that could simply describe t...
Article
Full-text available
The deconvolution of blurred and noisy satellite images is an ill-posed inverse problem. Direct inversion leads to unacceptable noise amplification. Usually, the problem is either regularized during the inversion process, or the noise is filtered after deconvolution and decomposition in the wavelet transform domain. Herein, we have developed the se...
Conference Paper
Full-text available
In this paper we propose a new algorithm to estimate the parameters of the noise related to the sensor and the impulse response of the optical system, from a blurred and noisy satellite or aerial image. The noise is assumed to be white, Gaussian and stationary. The blurring kernel has a parametric form and is modeled in such a way as to take into...
Article
Full-text available
The modeling of remote sensing and astrophysics applications by Markov random field (MRF) image processing techniques was discussed. The MRF method enabled the construction of an automatic image deconvolution method within a stochastic framework. The MRF models express global constraints or hypotheses in a local way. They are applied to remote se...
Article
The satellite image deconvolution problem is ill-posed and must be regularized. Herein, we use an edge-preserving regularization model using a ϕ function, involving two hyperparameters. Our goal is to estimate the optimal parameters in order to automatically reconstruct images. We propose to use the maximum-likelihood estimator (MLE), applied to th...
Conference Paper
Full-text available
In this paper we present a new deconvolution method, able to deal with noninvertible blurring functions. To avoid noise amplification, a prior model of the image to be reconstructed is used within a Bayesian framework. We use a spatially adaptive prior defined with a complex wavelet transform in order to preserve shift invariance and to better rest...
Thesis
Full-text available
Satellite or aerial images are corrupted by the optical system and the sensor. To reconstruct a good quality image from a noisy and blurred observation, one needs to perform a deconvolution. First, we recall the principles of the acquisition chain, from optics to the sensor (visible or infrared), enabling us to model the degradation of the image. I...
Conference Paper
Full-text available
In this paper, we propose to use a hidden Markov tree modeling of the complex wavelet packet transform, to capture the inter-scale dependencies of natural images. First, the observed image, blurred and noisy, is deconvolved without regularization. Then its transform is denoised within a Bayesian framework using the proposed model, whose parameters...
Conference Paper
Full-text available
The deconvolution of blurred and noisy satellite images is an ill-posed inverse problem. Donoho (1994) has proposed to deconvolve the image without regularization and to denoise the result in a wavelet basis by thresholding the transformed coefficients. We have developed a new filtering method, consisting of using a complex wavelet packet basis. He...
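A minimal version of the deconvolve-then-shrink pipeline is sketched below, with an ordinary real wavelet (db4) standing in for the complex wavelet packets used in the paper; the Gaussian blur model, the pseudo-inverse gain cap and the single global threshold are simplifications made for the example.
```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def deconvolve_then_shrink(y, psf_sigma, eps=1e-2, wavelet="db4", level=4):
    """Pseudo-inverse filter for a Gaussian blur, then wavelet soft-thresholding."""
    h, w = y.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    H = np.exp(-2 * (np.pi * psf_sigma) ** 2 * (fx ** 2 + fy ** 2))   # Gaussian MTF
    # Unregularized inversion; eps only caps the gain where H is ~0.
    x_noisy = np.real(np.fft.ifft2(np.fft.fft2(y) / np.maximum(H, eps)))
    coeffs = pywt.wavedec2(x_noisy, wavelet, level=level)
    # Robust noise estimate from the finest diagonal band, universal threshold
    # (a crude global choice; the amplified noise is actually scale-dependent).
    sigma_hat = np.median(np.abs(coeffs[-1][2])) / 0.6745
    thr = sigma_hat * np.sqrt(2 * np.log(x_noisy.size))
    shrunk = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode="soft") for d in band)
                            for band in coeffs[1:]]
    return pywt.waverec2(shrunk, wavelet)[:h, :w]   # guard against padding growth

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.zeros((128, 128)); x[32:96, 32:96] = 1.0                  # toy scene
    # Boundary effects of the non-circular blur are ignored in this demo.
    y = gaussian_filter(x, 1.5) + 0.02 * rng.normal(size=x.shape)
    x_hat = deconvolve_then_shrink(y, psf_sigma=1.5)
    print(float(np.mean((x_hat - x) ** 2)))                          # reconstruction MSE
```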
Conference Paper
Full-text available
The deconvolution of blurred and noisy satellite images is an ill-posed inverse problem, which can be regularized within a Bayesian context by using an a priori model of the reconstructed solution. Since real satellite data show spatially variant characteristics, we propose to use an inhomogeneous model. We use the maximum likelihood estimator (MLE...
Conference Paper
Full-text available
Satellite images can be corrupted by an optical blur and electronic noise. Blurring is modeled by convolution, with a known linear operator H, and the noise is assumed to be additive, white and Gaussian, with a known variance. The recovery problem is ill-posed and therefore must be regularized. Herein, we use a regularization model which introduce...
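In this setting (known H, known noise variance), the simplest regularized inversion is a Wiener-style filter, sketched below as a baseline; the quadratic prior here merely stands in for the regularization model of the paper, and the test scene is synthetic.
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regularized_deconvolve(y, psf, noise_var, prior_var=1.0):
    """Wiener-style inversion: y blurred noisy image, psf centred at pixel [0, 0]."""
    H = np.fft.fft2(psf)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var / prior_var)   # damped inverse filter
    return np.real(np.fft.ifft2(W * np.fft.fft2(y)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = gaussian_filter(rng.normal(size=(128, 128)), 3.0)       # smooth test scene
    delta = np.zeros_like(x); delta[0, 0] = 1.0
    psf = gaussian_filter(delta, 2.0, mode="wrap")              # circular Gaussian PSF
    y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(psf))) + 0.01 * rng.normal(size=x.shape)
    x_hat = regularized_deconvolve(y, psf, noise_var=0.01 ** 2, prior_var=x.var())
    print(np.mean((y - x) ** 2), np.mean((x_hat - x) ** 2))     # restoration lowers the error
```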
