Research Items (205)

- Feb 2019

A computationally efficient flow reconstruction technique is proposed, exploiting homogeneity in a given direction, to recreate three-dimensional instantaneous turbulent velocity fields from snapshots of two-dimensional planar fields. This methodology, termed 'snapshot optimisation' (SO), can help to provide 3D data sets for studies which are currently restricted by the limitations of experimental measurement techniques. The SO method aims at minimising the error between an inlet plane with a homogeneous direction and snapshots, obtained over a sufficient period of time, on the observation plane. The observations are carried out on a plane perpendicular to the inlet plane with a shared edge normal to the homogeneity direction. The method is applicable to all flows which display a direction of homogeneity, such as cylinder wake flows, channel flow, mixing layers, and (axisymmetric) jets. The ability of the method is assessed with two synthetic data sets and three experimental PIV data sets. A good reconstruction of the large-scale structures is observed for all cases. The small-scale reconstruction ability is partially limited, especially for higher-dimensional observation systems. The POD-based and averaging variants of the SO method are shown to reduce the discontinuities created by temporal mismatch in the homogeneous direction, providing a smooth velocity reconstruction. The volumetric reconstruction is seen to capture large-scale structures for the synthetic and experimental case studies. The algorithm run time is found to be on the order of a minute, providing results comparable with the reference. Such a reconstruction methodology can provide important information for data assimilation in the form of initial conditions, background conditions, and 3D observations.
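The selection step at the heart of the SO method can be sketched as follows: at each station of the observation plane, retain the inlet snapshot whose values along the shared edge best match the observed edge profile. This is a minimal illustrative sketch, not the authors' implementation; the function names and the flattened 1-D representation of the edge profiles are assumptions.

```python
def best_snapshot(inlet_series, observed_edge):
    # return the index of the inlet snapshot whose shared-edge profile
    # minimises the squared error against the observed edge profile
    def err(snap):
        return sum((a - b) ** 2 for a, b in zip(snap, observed_edge))
    return min(range(len(inlet_series)), key=lambda t: err(inlet_series[t]))

def reconstruct(inlet_series, observation_plane):
    # assign the best-matching snapshot at each station of the observation
    # plane; stacking the selected snapshots approximates the 3-D field
    return [best_snapshot(inlet_series, edge) for edge in observation_plane]
```

The POD-based and averaging variants mentioned in the abstract would replace the single best match by a smoothed combination of the closest snapshots.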

- Dec 2018

To date, no satisfactory model exists to explain the mean velocity profile within the whole turbulent layer of canonical wall-bounded flows. We propose a modification of the velocity profile expression that ensues from a recently proposed stochastic representation of fluid flow dynamics. This modeling, called modeling under location uncertainty, introduces in a rigorous way a subgrid term generalizing the eddy-viscosity assumption and an eddy-induced advection term resulting from turbulence inhomogeneity. The latter term gives rise to a theoretically well-grounded model for the transitional zone between the viscous sublayer and the turbulent sublayer. An expression for the small-scale velocity component is also provided in the viscous zone. A numerical assessment of the results is provided for turbulent boundary layer flows, pipe flows, and channel flows at various Reynolds numbers.

- Nov 2018

We present a novel motion estimation technique for image-based river velocimetry. It is based on the so-called optical flow, a well-developed method for rigid motion estimation in image sequences devised in the computer vision community. Contrary to PIV (Particle Image Velocimetry) techniques, the optical flow formulation is flexible enough to incorporate the physics equations that govern the motion of the observed quantity. Over the past years, it has been adopted by the experimental fluid dynamics community, where many new models were introduced to better represent different fluid motions (see [18] for a review). Our optical flow is based on the scalar transport equation and is augmented with a weighted diffusion term to compensate for small-scale (non-captured) contributions. Additionally, since there is no ground-truth data for this type of image sequence, we present a new evaluation method to assess the results. It is based on the trajectory reconstruction of a few Lagrangian particles of interest and a direct comparison against their manually reconstructed trajectories. The new motion estimation technique outperforms traditional optical flow and PIV-based methods.

- Nov 2018

Estimating the parameters of geophysical dynamic models is an important task in Data Assimilation (DA), the technique used for forecast initialization and reanalysis. In the past, most parameter estimation strategies were derived by state augmentation, yielding algorithms that are easy to implement but may exhibit convergence difficulties. The Expectation-Maximization (EM) algorithm is considered advantageous because it employs two iterative steps to estimate the model state and the model parameters separately. In this work, we propose a novel ensemble formulation of the Maximization step of EM that allows a direct optimal estimation of physical parameters using iterative methods for linear systems. This departs from current EM formulations, which are only capable of dealing with additive model error structures. This contribution shows how the EM technique can be used for dynamics identification problems with a model error parameterized in an arbitrarily complex form. The proposed technique is used here for the identification of stochastic subgrid terms that account for processes unresolved by a geophysical fluid model. This method, along with the augmented state technique, is evaluated for estimating such subgrid terms from high-resolution data. Compared to the augmented state technique, our method is shown to yield considerably more accurate parameters. In addition, in terms of prediction capacity, it leads to a smaller generalization error, as caused by the overfitting of the trained model on the presented data, and eventually to better forecasts.
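As a concrete illustration of the two EM steps, the sketch below estimates the drift coefficient of a scalar linear-Gaussian state-space model: the E-step runs a Kalman filter and Rauch-Tung-Striebel smoother, and the M-step has a closed form. This is the standard textbook reduction, assumed here purely for illustration; it is not the ensemble formulation of the paper.

```python
import random

def simulate(T, a, q, r, seed=0):
    # sample observations y_1..y_T from x_t = a x_{t-1} + w_t, y_t = x_t + v_t
    rng = random.Random(seed)
    x, ys = 0.0, []
    for _ in range(T):
        x = a * x + rng.gauss(0.0, q ** 0.5)
        ys.append(x + rng.gauss(0.0, r ** 0.5))
    return ys

def em_estimate_a(ys, q, r, a0=0.5, iters=50):
    # EM for the drift coefficient a, with known noise variances q and r
    T, a = len(ys), a0
    for _ in range(iters):
        # E-step, part 1: Kalman filter (predicted and filtered moments)
        xp, Pp, xf, Pf = [0.0] * T, [0.0] * T, [0.0] * T, [0.0] * T
        xprev, Pprev = 0.0, 1.0
        for t in range(T):
            xp[t], Pp[t] = a * xprev, a * a * Pprev + q
            K = Pp[t] / (Pp[t] + r)
            xf[t] = xp[t] + K * (ys[t] - xp[t])
            Pf[t] = (1.0 - K) * Pp[t]
            xprev, Pprev = xf[t], Pf[t]
        # E-step, part 2: RTS smoother, accumulating the sufficient
        # statistics E[x_{t+1} x_t] (num) and E[x_t^2] (den)
        xs, Ps = xf[:], Pf[:]
        num = den = 0.0
        for t in range(T - 2, -1, -1):
            J = Pf[t] * a / Pp[t + 1]
            xs[t] = xf[t] + J * (xs[t + 1] - xp[t + 1])
            Ps[t] = Pf[t] + J * J * (Ps[t + 1] - Pp[t + 1])
            num += J * Ps[t + 1] + xs[t + 1] * xs[t]
            den += Ps[t] + xs[t] * xs[t]
        # M-step: closed-form maximiser of the expected log-likelihood
        a = num / den
    return a
```

Separating the state estimation (E-step) from the parameter update (M-step) is what distinguishes this approach from state augmentation, where both are estimated jointly in one filter.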

- Jul 2018
- 19th International Symposium on Applications of Laser and Imaging Techniques to Fluid Mechanics

We propose a novel method for estimating motion from image sequences in order to study turbulent flows. The method models the luminance evolution between successive frames with a new stochastic scalar transport equation. Thanks to the stochastic formalism, the motion field is separated into a large-scale smooth component and a random small-scale component. This decomposition gives rise to the new stochastic transport equation, which takes into account the interactions between the observed large scales and the unresolved ones, and provides a new data term for optical flow algorithms. The aperture problem is then resolved with a local Lucas-Kanade approach. We show that the method improves motion estimation on a synthetic case. The method is then tested on real large-scale observations of a mixing layer. Two kinds of tracers are compared, and a new LED-based lighting system is employed to perform this large-scale experiment.

- Jun 2018
- 2018 Annual American Control Conference (ACC)

- Apr 2018

The focus of this paper is to perform coarse-grid large eddy simulation (LES) of cylinder wake flow at a Reynolds number (Re) of 3900 using recently developed sub-grid scale (SGS) models. As the resolution coarsens, a drop in accuracy is noted for all LES models but, more importantly, the numerical stability of classical models is called into question. The objective is to identify a statistically accurate, stable SGS model for this transitional flow at a coarse resolution. The proposed new models under location uncertainty (MULU) are applied in a deterministic coarse LES context and the statistical results are compared with variants of the Smagorinsky model and various reference data sets (both experimental and Direct Numerical Simulation (DNS)). The MULU are shown to better estimate statistics at coarse resolution (at 0.46% of the cost of a DNS) while remaining numerically stable. The performance of the MULU is studied through statistical comparisons, energy spectra, and SGS contributions. The physics behind the MULU are characterised and explored using divergence and curl functions. The additional terms present in the MULU (velocity bias) are shown to improve model performance. The spanwise periodicity observed at low Reynolds numbers is recovered at this moderate Reynolds number through the curl function, in coherence with the birth of streamwise vortices.

- Jan 2018
- Energy Minimization Methods in Computer Vision and Pattern Recognition

Using a classical example, the Lorenz-63 model, an original stochastic framework is applied to represent large-scale geophysical flow dynamics. Rigorously derived from a reformulated material derivative, the proposed framework encompasses several meaningful mechanisms for modelling geophysical flows. The slightly compressible set-up, as treated in the Boussinesq approximation, brings up a stochastic transport equation for the density and other related thermodynamical variables. Coupled to the momentum equation through a forcing term, a stochastic Lorenz-63 model is consistently derived. Based on this reformulated model, the pertinence of the large-scale stochastic approach is demonstrated against classical eddy-viscosity-based large-scale representations.

We present here a new stochastic modelling approach for the constitution of fluid flow reduced-order models. This framework introduces a spatially inhomogeneous random field to represent the unresolved small-scale velocity component. Such a decomposition of the velocity in terms of a smooth large-scale velocity component and a rough, highly oscillating component gives rise, without any supplementary assumption, to a large-scale flow dynamics that includes a modified advection term together with an inhomogeneous diffusion term. Both of those terms, related respectively to turbophoresis and mixing effects, depend on the variance of the unresolved small-scale velocity component. They bring an explicit subgrid term to the reduced system which enables us to take into account the action of the truncated modes. Besides, a decomposition of the variance tensor in terms of diffusion modes provides a meaningful statistical representation of the stationary or non-stationary structuration of the small-scale velocity and of its action on the resolved modes. This supplies a useful tool for turbulent fluid flow data analysis. We apply this methodology to circular cylinder wake flow at Reynolds numbers $Re=100$ and $Re=3900$. The finite-dimensional models of the wake flows reveal the energy and anisotropy distributions of the small-scale diffusion modes. These distributions identify critical regions where corrective advection effects, as well as structured energy dissipation effects, take place. In providing rigorously derived subgrid terms, the proposed approach yields accurate and robust temporal reconstruction of the low-dimensional models.

The models under location uncertainty recently introduced by Mémin [16] provide a new outlook on LES modelling for turbulence studies. These models are derived from stochastic conservation equations using stochastic calculus. These stochastic conservation equations are similar to the filtered Navier-Stokes equations, in which a sub-grid scale dissipation term appears. In the stochastic version, however, an extra term appears, termed the "velocity bias", which can be treated as a biasing/modification of the large-scale advection by the small scales. This velocity bias, first introduced in stochastic models by MacInnes and Bracco [14], albeit artificially, appears here automatically through a decorrelation assumption of the small scales at the resolved scale. All sub-grid contributions for the stochastic models are defined by the small-scale velocity auto-correlation tensor (a), which can be modelled through a Smagorinsky equivalency or by a local variance calculation. In this study, we have worked towards verifying the applicability and accuracy of these models in two well-studied cases, namely flow over a circular cylinder at Re ≈ 3900 and smooth channel flow at Re_τ ≈ 395. Both of these flows have been extensively studied in the literature and provide well-established data sets for model comparison. In addition, these flows display numerous important characteristics of turbulent flows that need to be captured efficiently by the model. This, combined with the associated numerical complexities, makes these the ideal flows for model study. A comparison of the models indicates a statistical improvement of the models under location uncertainty over classical deterministic models for both flows.

The PIV characterization of flows over large fields of view requires an adaptation of the motion estimation method from image sequences. The backward shift of the camera, coupled with dense scalar seeding, yields a large-scale observation of the flow, thereby producing uncertainty about the observed phenomena. By introducing a stochastic term related to this uncertainty into the observation term, we show in this paper that we can improve the accuracy of the velocity field estimated by optical flow.

Large Eddy Simulations (LES) have become commonplace in the current research scenario with increasing computational resources. However, constraints still limit the application of LES in a variety of scenarios: high Reynolds (Re) number flows, complex geometry flows, or flows involving complicated wall boundary layers. While the last scenario is limited due to physical aspects of the model, the first two can be rectified by reducing the computational cost of performing LES. An immediate foreseeable solution is to reduce the number of computational points in the simulation. However, this leads to a stark decrease in accuracy for an LES model. Complex methodologies have been developed to negate this decrease, such as hybrid RANS-LES models: using a RANS model (resp. LES model) for coarse-grid (resp. fine-grid) regions. In this study, the focus is on the physical behaviour characterisation of novel models under location uncertainty [1] in a coarse-mesh construct. Analyses and comparisons with the performances of classic LES models, which decrease in accuracy with increasingly coarse meshes, are conducted. The models under location uncertainty originate from the stochastic mass and momentum conservation equations, which are derived using stochastic calculus. Similar to the filtered NS equations for LES, the stochastic version contains a sub-grid scale dissipation term; this term is fully specified, so there is no need to rely on the additional Boussinesq viscosity assumption. It also contains a sub-grid scale velocity bias term acting on the advection component; this term is related to a phenomenon termed 'turbophoresis' in the literature and is usually not taken into account in classical sub-grid modelling. Both terms are characterised by the small-scale velocity auto-correlation, which requires modelling.
While a Smagorinsky-like model under location uncertainty (StSm) can be envisaged (through a local isotropy assumption), these models excel when a local variance-based a is used (StSp: spatial variance; StTe: temporal variance). The performance of these models is compared with the classic (Smag) and dynamic Smagorinsky (DSmag) models, and the Wall-Adaptive Local Eddy viscosity (WALE) model. Two well-studied flows, namely wake flow around a cylinder at Re = 3900 and channel flow at Re_τ = 395, are used to analyse the performance of the models at a coarse resolution, with reference statistics from [2] for the wake flow and [3] for the channel flow. The statistical correlations are shown to be better even at low resolutions for the models under location uncertainty, while the classical LES models are either inaccurate or numerically unstable. A flow with well-resolved vortices is observed with the models under location uncertainty, and they also capture the important turbulent characteristics of a given flow better than the classical models.

- Jul 2017
- IGARSS 2017 - 2017 IEEE International Geoscience and Remote Sensing Symposium

In this paper, we explore a dynamical formulation allowing the assimilation of high-resolution data in a large-scale fluid flow model. This large-scale formulation relies on a random modelling of the small-scale velocity component and takes into account the scale discrepancy between the dynamics and the observations. It introduces a subgrid stress tensor that naturally emerges from a modified Reynolds transport theorem adapted to this stochastic representation of the flow. This principle is used within a stochastic shallow water model coupled with a 4DEnVar assimilation technique to estimate both the flow initial conditions and the inhomogeneous time-varying subgrid parameters. The performance of this modelling has been assessed numerically with both synthetic and real-world data. Our strategy has been shown to be very effective in providing a more relevant prior/posterior ensemble in terms of dispersion, compared to other tests using the standard shallow water equations with no subgrid parameterization or with simple eddy viscosity models. We also compared two localization techniques. The results indicate that the localized covariance approach is more suitable for dealing with the scale-discrepancy-related errors.

In this paper, we propose a novel optical flow formulation for estimating two-dimensional velocity fields from an image sequence depicting the evolution of a passive scalar transported by a fluid flow. This motion estimator relies on a stochastic representation of the flow, which naturally incorporates a notion of uncertainty into the flow measurement. In this context, the Eulerian fluid flow velocity field is decomposed into two components: a large-scale motion field and a small-scale uncertainty component. We define the small-scale component as a random field. Subsequently, the data term of the optical flow formulation is based on a stochastic transport equation, derived from the formalism under location uncertainty proposed in Mémin (Geophys Astrophys Fluid Dyn 108(2):119-146, 2014) and Resseguier et al. (Geophys Astrophys Fluid Dyn 111(3):149-176, 2017a). In addition, a specific regularization term built from the assumption of constant kinetic energy involves the very same diffusion tensor as the one appearing in the data transport term. In contrast to classical motion estimators, this enables us to devise an optical flow method dedicated to fluid flows in which the regularization parameter now has a clear physical interpretation and can be easily estimated. Experimental evaluations are presented on both synthetic and real-world image sequences. Results and comparisons indicate very good performance of the proposed formulation for turbulent flow motion estimation.

Models under location uncertainty are derived assuming that a component of the velocity is uncorrelated in time. The material derivative is accordingly modified to include an advection correction, inhomogeneous and anisotropic diffusion terms, and a multiplicative noise contribution. This change can be consistently applied to all fluid dynamics evolution laws. This paper continues to explore the benefits of this framework and the consequences of specific scaling assumptions. Starting from a Boussinesq model under location uncertainty, a model is developed to describe a mesoscale flow subject to a strong underlying submesoscale activity. The geostrophic balance is modified, and the Quasi-Geostrophic (QG) assumptions remarkably lead to a zero Potential Vorticity (PV). The ensuing Surface Quasi-Geostrophic (SQG) model provides a simple diagnosis of warm frontolysis and cold frontogenesis.

We introduce a stochastic modelling framework for the constitution of fluid flow reduced-order models. This framework introduces a spatially inhomogeneous random field to represent the unresolved small-scale velocity component. Such a decomposition of the velocity in terms of a smooth large-scale velocity component and a rough, highly oscillating, component gives rise, without any supplementary assumption, to a large-scale flow dynamics that includes a modified advection term together with an inhomogeneous diffusion term. Both of those terms, related respectively to turbophoresis and mixing effects, depend on the variance of the unresolved small-scale velocity component. They bring to the reduced system an explicit subgrid term that enables the action of the truncated modes to be taken into account. Besides, a decomposition of the variance tensor in terms of diffusion modes provides a meaningful statistical representation of the stationary or nonstationary structuration of the small-scale velocity and of its action on the resolved modes. This supplies a useful tool for turbulent fluid flow data analysis. We apply this methodology to circular cylinder wake flow at Reynolds numbers Re = 300 and Re = 3900, respectively. The finite-dimensional models of the wake flows reveal the energy and anisotropy distributions of the small-scale diffusion modes. These distributions identify critical regions where corrective advection effects as well as structured energy dissipation effects take place. In providing rigorously derived subgrid terms, the proposed approach yields accurate and robust temporal reconstruction of the low-dimensional models.

We explore the potential of a formulation of the Navier-Stokes equations incorporating a random description of the small-scale velocity component. This model, established from a version of the Reynolds transport theorem adapted to a stochastic representation of the flow, gives rise to a large-scale description of the flow dynamics in which an anisotropic subgrid tensor, reminiscent of the Reynolds stress tensor, emerges together with a drift correction due to inhomogeneous turbulence. The corresponding subgrid model, which depends on the small-scale velocity variance, generalizes the Boussinesq eddy viscosity assumption. However, it is no longer obtained from an analogy with molecular dissipation but ensues rigorously from the random modeling of the flow. This principle allows us to propose several subgrid models defined directly on the resolved flow component. We assess and compare those models numerically on a standard Taylor-Green vortex flow at Reynolds number 1600. The numerical simulations, carried out with an accurate divergence-free scheme, outperform classical large-eddy formulations and provide a simple demonstration of the pertinence of the proposed large-scale modeling.
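For reference, the Taylor-Green initial condition used in such assessments is analytically divergence-free, a property any divergence-free scheme should preserve. A small check, assuming the standard single-mode form of the vortex on $[0, 2\pi]^3$:

```python
import math

def tgv_velocity(x, y, z):
    # standard Taylor-Green initial velocity field
    u = math.sin(x) * math.cos(y) * math.cos(z)
    v = -math.cos(x) * math.sin(y) * math.cos(z)
    w = 0.0
    return u, v, w

def tgv_divergence(x, y, z):
    # analytic divergence: du/dx + dv/dy + dw/dz
    dudx = math.cos(x) * math.cos(y) * math.cos(z)
    dvdy = -math.cos(x) * math.cos(y) * math.cos(z)
    dwdz = 0.0
    return dudx + dvdy + dwdz
```

The two nonzero derivative terms cancel pointwise, so the initial field is exactly solenoidal.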

Models under location uncertainty are derived assuming that a component of the velocity is uncorrelated in time. The material derivative is accordingly modified to include an advection correction, inhomogeneous and anisotropic diffusion terms, and a multiplicative noise contribution. In this paper, simplified geophysical dynamics are derived from a Boussinesq model under location uncertainty. Invoking the usual scaling approximations and a moderate influence of the subgrid terms, stochastic formulations are obtained for the stratified Quasi-Geostrophic (QG) and the Surface Quasi-Geostrophic (SQG) models. Based on numerical simulations, the benefits of the proposed stochastic formalism are demonstrated. A single realization of models under location uncertainty can restore small-scale structures. An ensemble of realizations further helps to assess model error prediction and outperforms perturbed deterministic models by one order of magnitude. Such a high uncertainty quantification skill is of primary interest for ensemble assimilation methods. MATLAB code examples are available online.

A stochastic flow representation is considered, with the Eulerian velocity decomposed into a smooth large-scale component and a rough small-scale turbulent component. The latter is specified as a random field uncorrelated in time. The material derivative is accordingly modified into a stochastic version that includes a drift correction, an inhomogeneous and anisotropic diffusion, and a multiplicative noise. As derived, this stochastic transport exhibits a remarkable energy conservation property for any realization. As demonstrated, this pivotal operator further provides elegant means to derive stochastic formulations of classical representations of geophysical flow dynamics.

- Sep 2016
- UK Fluids Conference 2016, 7-9 Sep 2016, Imperial College London, United Kingdom

Large Eddy Simulations (LES) are effectively used as a reduced-cost alternative to Direct Numerical Simulations (DNS). However, LES are still computationally expensive for complex flows. This brings to the forefront the concept of Coarse Large Eddy Simulations (cLES), involving coarser meshes and hence cheaper computations. An associated limitation of cLES is the accuracy and stability of the sub-grid scale (SGS) model used. This is the focus of this poster, in which several SGS models are compared for the simple case of channel flow at coarse resolution. The development of LES SGS models has been an area of scientific research for many decades, starting with Smagorinsky [6]. Recent developments in this field have produced many modern SGS models which have been shown to work better than classical models [3, 2]. In this study, the classical models, namely classic Smagorinsky and its dynamic version along with the Wall-Adaptive Local Eddy Viscosity (WALE) model, are compared with the newly developed uncertainty-based models of [3] and the implicit LES version of [2] in the context of cLES. The accuracy of the statistics is analysed by comparison with the channel flow data of [4].

- Jun 2016

In order to cope with small-scale unpredictable details of mesoscale structures in cloud-resolving models, this paper suggests processing the model outputs with a fuzzy object-oriented approach to extract and track precipitating features (associated with a higher predictability than the direct model outputs). The present approach uses the particle filter method to recognize patterns based on a predefined texture or spatial variability of the model output. This provides an ensemble of precipitating objects, which are then propagated in time using a stochastic advection-diffusion process. The method is applied to both deterministic and ensemble forecasts provided by the AROME-France convective-scale model. Specific case studies support the ability of the approach to handle precipitation of different types.

We investigate the combined use of a Kinect depth sensor and a stochastic data assimilation (DA) method to recover free-surface flows. More specifically, we use a weighted ensemble Kalman filter method to reconstruct the complete state of free-surface flows from a sequence of depth images only. This particle filter accounts for model and observation errors. The DA scheme is enhanced by the use of two observations instead of the classical single one. We evaluate the developed approach on two numerical test cases: the collapse of a water column as a toy example, and the flow in a suddenly expanding flume as a more realistic flow. The robustness of the method to depth data errors, and to initial and inflow conditions, is considered. We illustrate the interest of using two observations instead of one in the correction step, especially for unknown inflow boundary conditions. Then, the performance of the Kinect sensor in capturing temporal sequences of depth observations is investigated. Finally, the efficiency of the algorithm is qualified for a wave in a real rectangular flat-bottomed tank. It is shown that for basic initial conditions, the particle filter rapidly and remarkably reconstructs the velocity and height of the free-surface flow based on noisy measurements of the elevation alone. © 2015 The Japan Society of Fluid Mechanics and IOP Publishing Ltd.
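The correction step of such a weighted-ensemble scheme can be sketched as a particle reweighting followed by systematic resampling. In the toy below a scalar stands in for the observed depth; the actual filter operates on full shallow-water states, so this is only an illustration of the mechanism.

```python
import math
import random

def pf_correct(particles, y, obs_std, rng):
    # weight each particle by the Gaussian likelihood of the observation
    w = [math.exp(-0.5 * ((p - y) / obs_std) ** 2) for p in particles]
    total = sum(w)
    w = [wi / total for wi in w]
    # systematic resampling: one uniform offset, n evenly spaced points
    n = len(particles)
    u0 = rng.random() / n
    out, i, acc = [], 0, w[0]
    for k in range(n):
        u = u0 + k / n
        while u > acc and i < n - 1:   # advance through the cumulative weights
            i += 1
            acc += w[i]
        out.append(particles[i])
    return out
```

Systematic resampling keeps the ensemble size fixed while duplicating high-likelihood particles and discarding unlikely ones, which is what concentrates the ensemble around the observed free surface.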

- Jun 2015
- International symposium on turbulence and shear flow phenomena (TSFP-9)

This paper uses a new decomposition of the fluid velocity in terms of a large-scale component, continuous with respect to time, and a small-scale non-continuous random component. Within this general framework, a stochastic representation of the Reynolds transport theorem and of the Navier-Stokes equations can be derived, based on physical conservation laws. This physically relevant stochastic model is applied in the context of the POD-Galerkin method. In both the stochastic Navier-Stokes equation and its reduced model, a possibly time-dependent, inhomogeneous and anisotropic diffusive subgrid tensor appears naturally and generalizes classical subgrid models. We propose two ways of estimating its parametrization in the context of POD-Galerkin. This method has been shown to successfully reconstruct energetic chronos for a wake flow at Reynolds number 3900, whereas standard POD-Galerkin diverged systematically.

- Apr 2015

Ensemble-based optimal control schemes combine the components of ensemble Kalman filters and variational data assimilation (4DVar). They are attractive because they are easier to implement than 4DVar. In this paper, we evaluate a modified version of an ensemble-based optimal control strategy for image data assimilation. This modified method is assessed with a shallow water model combined with synthetic data and original, incomplete experimental depth sensor observations. This paper shows that the modified ensemble technique improves quality while reducing the computational cost.
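The core of such ensemble-based schemes is that the control increment is sought in the span of the ensemble anomalies rather than in the full state space. A reduced sketch of a single analysis step, restricted for brevity to one scalar observation (where the ensemble-space system collapses to a rank-one formula); the function name and this restriction are assumptions, not the paper's method.

```python
def envar_analysis(ensemble, y, obs_op, obs_var):
    # ensemble: list of m state vectors; y: a single scalar observation
    m = len(ensemble)
    n = len(ensemble[0])
    mean = [sum(e[i] for e in ensemble) / m for i in range(n)]
    X = [[e[i] - mean[i] for i in range(n)] for e in ensemble]  # state anomalies
    Hx = [obs_op(e) for e in ensemble]
    Hm = sum(Hx) / m
    Y = [h - Hm for h in Hx]   # observation-space anomalies
    d = y - Hm                 # innovation
    # minimise J(w) = (m-1)|w|^2/2 + (Y.w - d)^2/(2 r) in ensemble space;
    # for one scalar observation the minimiser is rank-one: w = Y d / s
    s = (m - 1) * obs_var + sum(yi * yi for yi in Y)
    w = [yi * d / s for yi in Y]
    # map the ensemble-space weights back to a state-space increment
    return [mean[i] + sum(w[j] * X[j][i] for j in range(m)) for i in range(n)]
```

Because the increment lives in the ensemble subspace, no adjoint model is needed, which is the implementation advantage over 4DVar noted above.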

- Jan 2015


In large-scale fluid dynamics systems, the velocity lives in a broad range of scales. To be able to simulate its large-scale component, the flow can be decomposed into a finite-variation process, which represents a smooth large-scale velocity component, and a martingale part, associated with the highly oscillating small-scale velocities. Within this general framework, a stochastic representation of the Navier-Stokes equations can be derived, based on physical conservation laws. In this equation, a diffusive sub-grid tensor appears naturally and generalizes classical sub-grid tensors.
Here, a dimensionally reduced large-scale simulation is performed. A Galerkin projection of our Navier-Stokes equation is done on a Proper Orthogonal Decomposition (POD) basis. In our approach to the POD, the resolved temporal modes are differentiable with respect to time, whereas the unresolved temporal modes are assumed to be decorrelated in time. The corresponding reduced stochastic model makes it possible to simulate, at low computational cost, the resolved temporal modes. It allows taking into account the possibly time-dependent, inhomogeneous and anisotropic covariance of the small-scale velocity. We propose two ways of estimating such contributions in the context of POD-Galerkin.
This method has proved successful in reconstructing energetic chronos for a wake flow at Reynolds number 3900, even with a large time step, whereas standard POD-Galerkin diverged systematically. This paper describes the principles of our stochastic Navier-Stokes equation, together with the estimation approaches elaborated for the model reduction strategy.

We present a derivation of a stochastic model of the Navier-Stokes equations that relies on a decomposition of the velocity field into a differentiable drift component and a time-uncorrelated random uncertainty term. This decomposition is reminiscent in spirit of the classical Reynolds decomposition. However, the random velocity fluctuations considered here are not differentiable with respect to time, and they must be handled through stochastic calculus. The dynamics associated with the differentiable drift component is derived from a stochastic version of the Reynolds transport theorem. It includes in its general form an uncertainty-dependent "subgrid" bulk formula that cannot be immediately related to the usual Boussinesq eddy-viscosity assumption constructed by analogy with thermal molecular agitation. This formulation, emerging from uncertainties on the fluid parcels' locations, explains from another viewpoint some subgrid eddy-diffusion models currently used in computational fluid dynamics or in the geophysical sciences, and paves the way for new large-scale flow modelling. We finally describe applications of our formalism to the derivation of stochastic versions of the shallow water equations and to the definition of reduced-order dynamical systems.
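In standard location-uncertainty notation (the symbols here are illustrative, not quoted from the paper), the decomposition described above splits the displacement of a fluid parcel as

```latex
\mathrm{d}\mathbf{X}_t \;=\; \mathbf{w}(\mathbf{X}_t,t)\,\mathrm{d}t \;+\; \boldsymbol{\sigma}(\mathbf{X}_t,t)\,\mathrm{d}\mathbf{B}_t ,
```

where \(\mathbf{w}\) is the smooth, time-differentiable drift and \(\boldsymbol{\sigma}\,\mathrm{d}\mathbf{B}_t\) is the time-uncorrelated uncertainty term; the subgrid diffusion mentioned above then involves the variance tensor \(\mathbf{a} = \boldsymbol{\sigma}\boldsymbol{\sigma}^{\mathsf{T}}\).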

This paper presents an algorithm for Monte Carlo fixed-lag smoothing in state-space models defined by a diffusion process observed through noisy discrete-time measurements. Based on a particle approximation of the filtering and smoothing distributions, the method relies on a simulation technique for conditioned diffusions. The proposed sequential smoother can be applied to general nonlinear and multidimensional models, like the ones used in environmental applications. The smoothing of a turbulent flow in a high-dimensional context is given as a practical example.
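The fixed-lag idea can be sketched on a toy 1-D linear-Gaussian state-space model (a stand-in for the diffusion models of the paper; every parameter here is illustrative): a bootstrap particle filter keeps the genealogy of its resampled particles, and the smoothed estimate at time t - L is read off that stored history.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, lag = 30, 500, 5

# Simulate x_t = 0.9 x_{t-1} + process noise, observed as y_t = x_t + noise.
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal(scale=0.5)
y = x + rng.normal(scale=0.3, size=T)

particles = rng.normal(size=N)
history = np.zeros((T, N))        # genealogy of resampled trajectories
smoothed = np.full(T, np.nan)
for t in range(T):
    particles = 0.9 * particles + rng.normal(scale=0.5, size=N)  # predict
    w = np.exp(-0.5 * ((y[t] - particles) / 0.3) ** 2)           # weight
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                             # resample
    particles = particles[idx]
    history = history[:, idx]     # keep ancestor paths consistent
    history[t] = particles
    if t >= lag:                  # fixed-lag smoothed estimate at t - lag
        smoothed[t - lag] = history[t - lag].mean()
```

The paper's contribution replaces this naive genealogy-based smoother with simulated conditioned diffusions, which mitigates the path degeneracy visible here when the lag grows.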

This article presents the work we have carried out in recent years on the analysis of Meteosat Second Generation (MSG) images. Compared to the first generation, MSG data have a higher spatial and temporal resolution, giving access to a range of information related to the observed climatic phenomena. However, recovering this physical information from the image data proves delicate, because we are confronted with structures subject to very strong deformations, sometimes observed in transparency and with variable lifetimes. Consequently, the classical tools of image analysis often prove limited and must be adapted to this specificity. We focus here on three particular applications: motion estimation, which gives access to atmospheric winds; the tracking of cloud masses, allowing, for example, the analysis of convective phenomena; and front detection, applied here to sea breezes.

- Mar 2014

In this work we explore the numerical simulation of a Navier-Stokes representation incorporating an uncertainty component on the fluid flow velocity. The uncertainty considered is formalized through a random field uncorrelated in time but correlated in space. This model enables the construction of large-scale dynamical flow models in which an anisotropic subgrid tensor, reminiscent of the Reynolds stress tensor, emerges. This subgrid model is directly related to the uncertainty variance tensor. This property allows us to propose simple models of this stress tensor that are computed directly on the resolved component. These models are assessed here on a standard Green-Taylor vortex at Reynolds number 1600 and on a Crow instability at Reynolds number 3200. We also describe an efficient divergence-free wavelet scheme for the numerical simulation of this model, and discuss the stability condition of the divergence-free wavelet-based numerical scheme used in this study.

In this work, we study ensemble-based optimal control strategies for data assimilation. Such a formulation nicely combines the ingredients of ensemble Kalman filters and variational data assimilation (4DVar). Like variational assimilation schemes, it is formulated as the minimization of an objective function; but like ensemble filters, it introduces in its objective function an empirical ensemble-based background-error covariance and works in an off-line smoothing mode rather than sequentially. These techniques have the great advantage of avoiding the introduction of tangent linear and adjoint models, which are necessary for standard incremental variational techniques. They also allow handling a time-varying background covariance matrix representing the error evolution between the estimated solution and a background solution. As this background-error covariance matrix, of reduced rank in practice, plays a key role in the variational process, our study focuses in particular on the generation of the analysis ensemble state with localization techniques. Besides, to clarify the differences between the methods and to highlight their potential pitfalls and advantages, we present key theoretical properties associated with the different choices involved in their setup. We experimentally compared the performance of several variations of an ensemble technique of interest with an incremental 4DVar method. The comparisons were carried out on the basis of a shallow water model, both with synthetic data and in a setup close to experimental conditions. The cases where the system's components are either fully or only partially observed were addressed in particular.
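The central mechanism can be sketched on a toy problem (our own illustrative setup and notation): the background covariance is replaced by the sample covariance of an ensemble of anomalies, the analysis increment is sought in the ensemble span, and the minimization reduces to a small linear system, so no adjoint model is needed.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 10, 6                      # state size, ensemble size
xb = rng.standard_normal(n)       # background state
ens = xb[:, None] + 0.1 * rng.standard_normal((n, m))
A = (ens - ens.mean(axis=1, keepdims=True)) / np.sqrt(m - 1)  # anomalies

H = np.eye(n)[:4]                 # observe the first 4 components
R = 0.05 * np.eye(4)              # observation error covariance
y = H @ xb + 0.2                  # synthetic observation (shifted background)

# Write the increment as dx = A @ w and minimise
#   J(w) = 0.5 w'w + 0.5 (d - HA w)' R^{-1} (d - HA w),  d = y - H xb,
# whose minimiser solves an m x m system in ensemble space.
HA = H @ A
d = y - H @ xb
w = np.linalg.solve(np.eye(m) + HA.T @ np.linalg.solve(R, HA),
                    HA.T @ np.linalg.solve(R, d))
xa = xb + A @ w                   # analysis state
```

In practice the rank deficiency of A (m much smaller than n) is what makes the localization techniques discussed above necessary.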


We introduce a stochastic filtering technique for the tracking of closed curves from image sequences. For that purpose, we design a continuous-time dynamics that allows us to infer inter-frame deformations. The curve is defined by an implicit level-set representation and the stochastic dynamics is expressed on the level-set function. It takes the form of a stochastic partial differential equation with a Brownian motion of low dimension. The evolution model we propose combines local photometric information, deformations induced by the curve displacement, and an uncertainty modeling of the dynamics. Specific choices of noise models and drift terms lead to an evolution law based on mean curvature, as in classic level-set methods, while other choices yield new evolution laws. The approach we propose is implemented through a particle filter, which includes color measurements characterizing the target and the background photometric probability densities respectively. The merit of this filter is demonstrated on various satellite image sequences depicting the evolution of complex geophysical flows.

- Aug 2013
- 8th International Symposium on turbulence and shear flow phenomena (TSFP8)

We present simulation results for a stochastic Navier-Stokes model that incorporates uncertainty on the fluid parcels' location. This model ensues from a decomposition of the flow into a differentiable drift component and a time-uncorrelated random uncertainty term. The dynamics associated with the drift component, derived from a stochastic version of the Reynolds transport theorem, includes in its general form an uncertainty-dependent anisotropic diffusion that cannot be immediately related to the usual eddy-viscosity assumption. The simulation we present relies on a wavelet numerical scheme and is experimented here on a Green-Taylor vortex.

A variational data assimilation technique (4DVar) was used to reconstruct turbulent flows (Gronskis et al., 2013). The problem consists in recovering a flow by modifying the initial and inflow conditions of a system composed of a DNS code coupled with noisy and possibly incomplete PIV measurements. In the present study, the ability of the technique to reconstruct the flow from gappy PIV data was investigated.

- Jul 2013
- The 10th International Symposium on Particle Image Velocimetry

- Jun 2013

A method for generating inflow conditions for direct numerical simulations (DNS) of spatially developing flows is presented. The proposed method is based on variational data assimilation and adjoint-based optimization. The estimation is conducted through an iterative process involving a forward integration of a given dynamical model followed by a backward integration of an adjoint system defined by the adjoint of the discrete scheme associated with the dynamical system. The approach's robustness is evaluated on two synthetic velocity field sequences provided by numerical simulations of a mixing layer and of a wake flow behind a cylinder. The performance of the technique is also illustrated in a real-world application using noisy large-scale PIV measurements. The method denoises experimental velocity fields and reconstructs a continuous trajectory of motion fields from discrete and unstable measurements.
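The forward/backward iteration described above can be sketched on a toy linear model (a stand-in for the DNS code; dynamics, observations, and step size are all ours): the forward sweep stores the trajectory, and a backward sweep with the transposed dynamics accumulates the gradient of the cost with respect to the initial condition.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 5, 8
M = 0.95 * np.linalg.qr(rng.standard_normal((n, n)))[0]  # stable linear dynamics

x_true = rng.standard_normal(n)
obs = [x_true]
for _ in range(T):
    obs.append(M @ obs[-1])       # noise-free observations of every state

def cost_and_grad(x0):
    # forward integration, storing the trajectory
    traj = [x0]
    for _ in range(T):
        traj.append(M @ traj[-1])
    J = 0.5 * sum(np.sum((xt - yt) ** 2) for xt, yt in zip(traj, obs))
    # backward (adjoint) integration accumulates the gradient w.r.t. x0
    lam = traj[-1] - obs[-1]
    for t in range(T - 1, -1, -1):
        lam = M.T @ lam + (traj[t] - obs[t])
    return J, lam

# a few gradient-descent iterations on the initial condition
x0 = np.zeros(n)
for _ in range(200):
    J, g = cost_and_grad(x0)
    x0 -= 0.05 * g
```

For a nonlinear model the backward sweep would use the adjoint of the linearized discrete scheme instead of `M.T`, which is exactly the structure the paper exploits.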

Based on physical laws describing the multiscale structure of turbulent flows, this paper proposes a regularizer for fluid motion estimation from an image sequence. Regularization is achieved by imposing some scale invariance property between histograms of motion increments computed at different scales. By reformulating this problem from a Bayesian perspective, an algorithm is proposed to jointly estimate motion, regularization hyperparameters, and to select the most likely physical prior among a set of models. Hyperparameter and model inference are conducted by posterior maximization, obtained by marginalizing out non--Gaussian motion variables. The Bayesian estimator is assessed on several image sequences depicting synthetic and real turbulent fluid flows. Results obtained with the proposed approach exceed the state-of-the-art results in fluid flow estimation.

Expanding the solution of an inverse problem on a wavelet basis provides several advantages. First of all, wavelet bases yield a natural and efficient multiresolution analysis, which allows defining clear optimization strategies on nested subspaces of the solution space. Besides, the continuous representation of the solution with wavelets enables analytical calculation of regularization integrals over the spatial domain. By choosing differentiable wavelets, accurate high-order derivative regularizers can be efficiently designed via the basis's mass and stiffness matrices. More importantly, differential constraints on vector solutions, such as the divergence-free constraint in physics, can be nicely handled with biorthogonal wavelet bases. This paper illustrates these advantages in the particular case of fluid flow motion estimation. Numerical results on synthetic and real images of incompressible turbulence show that divergence-free wavelets and high-order regularizers are particularly relevant in this context.
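The role of scale truncation can be illustrated with a toy 1-D Haar analysis (our own minimal stand-in for the biorthogonal bases of the paper): zeroing detail coefficients beyond a given level yields a coarse, locally polynomial approximation of the field.

```python
import numpy as np

def haar_decompose(u):
    """Recursive pairwise means (approximation) and half-differences (details)."""
    coeffs = []
    while u.size > 1:
        pairs = u.reshape(-1, 2)
        coeffs.append((pairs[:, 0] - pairs[:, 1]) / 2)  # detail coefficients
        u = pairs.mean(axis=1)                          # coarse approximation
    return u, coeffs[::-1]                              # coarsest level first

def haar_reconstruct(u, coeffs, keep):
    """Invert the transform, zeroing detail levels beyond `keep`."""
    for level, d in enumerate(coeffs):
        if level >= keep:
            d = np.zeros_like(d)        # truncate fine scales
        up = np.empty(2 * u.size)
        up[0::2] = u + d
        up[1::2] = u - d
        u = up
    return u

field = (np.sin(np.linspace(0, np.pi, 16))
         + 0.05 * np.random.default_rng(7).standard_normal(16))
approx, details = haar_decompose(field)
smooth = haar_reconstruct(approx, details, keep=2)      # coarse-scale estimate
```

In the motion estimation setting, the truncated coefficients are the unknowns of the optimization, and the nested levels provide the multiresolution schedule mentioned above.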

- Feb 2013
- Inverse Problems in Vision and 3D Tomography

We propose an algorithm that combines Proper Orthogonal Decomposition with a spectral method to analyse time series of velocity fields and extract reduced-order models of flows. The flows considered in this study are assumed to be driven by nonlinear dynamical systems exhibiting complex behavior within quasi-periodic orbits in phase space. The technique is appropriate for achieving efficient reduced-order models even in complex cases for which the flow description requires a discretization with fine spatial and temporal resolution. The proposed analysis makes it possible to decompose complex flow dynamics into modes oscillating at a single frequency. These modes are associated with different energy levels and spatial structures. The approach is illustrated using time-resolved PIV data of a cylinder wake flow at a Reynolds number of 3900.
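A minimal sketch of the POD-plus-spectral idea, on synthetic snapshots we build ourselves (two spatial patterns oscillating at known frequencies): the SVD gives the POD temporal coefficients, and their Fourier spectra isolate single-frequency components.

```python
import numpy as np

rng = np.random.default_rng(4)
n_space, n_time = 64, 256
dt = 1.0 / 256                               # one second of data
t = np.arange(n_time) * dt

# Synthetic snapshots: two spatial patterns oscillating at 10 Hz and 25 Hz.
p1, p2 = rng.standard_normal((2, n_space))
data = (np.outer(p1, np.cos(2 * np.pi * 10 * t))
        + 0.5 * np.outer(p2, np.sin(2 * np.pi * 25 * t)))

# POD by SVD: spatial modes in phi, temporal coefficients (Chronos) below.
phi, s, vt = np.linalg.svd(data, full_matrices=False)
chronos = np.diag(s) @ vt

# The spectrum of the leading temporal coefficient peaks at one frequency.
freqs = np.fft.rfftfreq(n_time, d=dt)
dominant = freqs[np.argmax(np.abs(np.fft.rfft(chronos[0])))]
```

On real PIV data the temporal coefficients mix several frequencies, which is why the paper combines the two decompositions rather than using the POD alone.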

This paper proposes a novel multi-scale fluid flow data assimilation approach, which integrates and complements the advantages of a Bayesian sequential assimilation technique, the Weighted Ensemble Kalman filter (WEnKF) [27]. The data assimilation proposed in this work incorporates measurements provided by an efficient multiscale stochastic formulation of the well-known Lucas-Kanade (LK) estimator. This estimator has the great advantage of providing uncertainties associated with the motion measurements at different scales. The proposed assimilation scheme benefits from this multi-scale uncertainty information and makes it possible to enforce a physically plausible dynamical consistency of the estimated motion fields along the image sequence. Experimental evaluations are presented on synthetic and real fluid flow sequences.


This study proposes an extension of the Weighted Ensemble Kalman filter (WEnKF) proposed by Papadakis et al. (2010) for the assimilation of image observations. The main focus of this study is a novel formulation of the weighted filter with the Ensemble Transform Kalman filter (WETKF), incorporating directly as a measurement model a non-linear image reconstruction criterion. This technique has been compared to the original WEnKF on numerical and real-world data of 2-D turbulence observed through the transport of a passive scalar. In particular, it has been applied to the reconstruction of oceanic surface current vorticity fields from sea surface temperature (SST) satellite data. The latter technique enables a consistent recovery over time of oceanic surface currents and vorticity maps in the presence of large missing-data areas and strong noise.

Compare En4DVar with a classic 4DVar method.

Results of the application of optical flow methods to eye-safe aerosol lidar images, leading to dense velocity field estimations, are presented. A formulation dedicated to fluid motion is employed, taking into account the deforming shapes and changing brightness of the flow visualization. The optical flow technique has the advantage of providing a vector at every pixel in the image, hence enabling access to improved multiscale properties. In order to assess the performance of the method, we compare the vectors with pointwise sonic anemometer measurements. Power spectra of the velocity data are also calculated to explore the spectral behavior of the technique.

In the context of tackling the ill-posed inverse problem of motion estimation from image sequences, we propose to introduce prior knowledge on flow regularity given by turbulence statistical models. Prior regularity is formalised using turbulence power laws describing the statistically self-similar structure of motion increments across scales. The motion estimation method minimises the error of an image observation model while constraining the second-order structure function to behave as a power law within a prescribed range. Thanks to a Bayesian modelling framework, the motion estimation method is able to jointly infer the most likely power law directly from image data. The method is assessed on velocity fields of 2-D or quasi-2-D flows. Estimation accuracy is first evaluated on a synthetic image sequence of homogeneous and isotropic 2-D turbulence; results obtained with this physics-based approach outperform the state of the art. The method is then used to analyse atmospheric turbulence in a real meteorological image sequence. Selecting the most likely power law model enables the recovery of physical quantities of major interest for atmospheric turbulence characterisation. In particular, from meteorological images we are able to estimate the energy and enstrophy fluxes of turbulent cascades, which are in agreement with previous in situ measurements.
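The second-order structure function constraint above can be sketched on a synthetic 1-D signal with an imposed power-law spectrum (all parameters are ours): a power law S2(l) ~ l^zeta appears as a straight line in log-log coordinates, and for a k^(-5/3) spectrum the expected exponent is 2/3.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4096

# Synthetic 1-D signal with a k^(-5/3)-like power spectrum (random phases).
k = np.fft.rfftfreq(n)
k[0] = 1.0                                   # avoid division by zero at k = 0
coeffs = np.sqrt(k ** (-5.0 / 3.0)) * (rng.standard_normal(k.size)
                                       + 1j * rng.standard_normal(k.size))
u = np.fft.irfft(coeffs)

# Second-order structure function S2(l) = <(u(x + l) - u(x))^2>.
seps = np.arange(1, 64)
s2 = np.array([np.mean((u[l:] - u[:-l]) ** 2) for l in seps])

# Fit the scaling exponent zeta of S2(l) ~ l^zeta in log-log coordinates.
zeta = np.polyfit(np.log(seps), np.log(s2), 1)[0]
```

In the estimation method, this scaling behaviour is imposed as a constraint on the motion increments rather than measured after the fact as done here.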

In this work we propose and evaluate two variational data assimilation techniques for the estimation of low-order surrogate experimental dynamical models of fluid flows. Both methods are built from optimal control recipes and rely on proper orthogonal decomposition and a Galerkin projection of the Navier-Stokes equations. The proposed techniques differ in the control variables they involve. The first one introduces a weak dynamical model defined only up to an additional time-dependent uncertainty function, whereas the second one handles a strong dynamical constraint in which the dynamical system's coefficients constitute the control variables. The two choices correspond to different approximations of the relation between the reduced basis on which the motion field is expressed and the basis components that have been neglected in the construction of the reduced-order model. The techniques have been assessed on numerical data and under real experimental conditions with noisy particle image velocimetry data.

- Jan 2012
- Proceedings of the Third international conference on Scale Space and Variational Methods in Computer Vision

In this paper, we present a stochastic interpretation of the motion estimation problem. The usual optical flow constraint equation (assuming that points keep their brightness over time), embedded for instance within a Lucas-Kanade estimator, can indeed be seen as the minimization of a stochastic process under some strong constraints. These constraints can be relaxed by imposing a weaker temporal assumption on the luminance function and by introducing anisotropic intensity-based uncertainty assumptions. The amplitudes of these uncertainties are computed jointly with the unknown velocity at each point of the image grid. We propose different versions depending on the hypotheses assumed for the luminance function. Substituting our new observation terms into a simple Lucas-Kanade estimator significantly improves the quality of the results. It also makes it possible to extract an uncertainty connected to the quality of the motion field.
Keywords: optical flow, stochastic formulation, brightness consistency assumption
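The classical Lucas-Kanade estimator that serves as the starting point above can be sketched in a few lines (single window, synthetic translated pattern; the setup is ours): it solves the optical flow constraint gx*u + gy*v + It = 0 in the least-squares sense.

```python
import numpy as np

def lucas_kanade(im0, im1):
    """Single-window Lucas-Kanade: least-squares fit of (u, v) to
    the optical flow constraint gx*u + gy*v + It = 0."""
    gy, gx = np.gradient(im0)            # spatial gradients (rows, cols)
    it = im1 - im0                       # temporal gradient
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    b = -it.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic pair: a smooth pattern translated by 0.3 pixels along x.
y, x = np.mgrid[0:32, 0:32].astype(float)
im0 = np.sin(0.3 * x) + np.cos(0.4 * y)
im1 = np.sin(0.3 * (x - 0.3)) + np.cos(0.4 * y)

u, v = lucas_kanade(im0, im1)
```

The stochastic reformulation of the paper replaces the hard brightness-constancy assumption behind `it` with a relaxed observation term whose uncertainty is estimated jointly with (u, v).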

This article describes the implementation of a simple wavelet-based optical-flow motion estimator dedicated to continuous motions such as fluid flows. The wavelet representation of the unknown velocity field is considered. This scale-space representation, associated with a simple gradient-based optimization algorithm, sets up a well-defined multiresolution framework for optical flow estimation. Moreover, a very simple closure mechanism, approximating the solution locally by high-order polynomials, is provided by truncating the wavelet basis at fine scales. The accuracy and efficiency of the proposed method are evaluated on image sequences of turbulent fluid flows.

Selecting optimal models and hyperparameters is crucial for accurate optical-flow estimation. This paper provides a solution to the problem in a generic Bayesian framework. The method is based on a conditional model linking the image intensity function, the unknown velocity field, the hyperparameters, and the prior and likelihood motion models. Inference is performed on each of the three levels of the so-defined hierarchical model by maximization of marginalized a posteriori probability distribution functions. In particular, the first level is used to achieve motion estimation in a classical a posteriori scheme. By marginalizing out the motion variable, the second level makes it possible to infer regularization coefficients and hyperparameters of the non-Gaussian M-estimators commonly used in robust statistics. The last level of the hierarchy is used for selection of the likelihood and prior motion models conditioned on the image data. The method is evaluated on image sequences of fluid flows and from the "Middlebury" database. Experiments prove that applying the proposed inference strategy yields better results than manually tuning the smoothing parameters or discontinuity-preserving cost functions of state-of-the-art methods.

In this work, we investigate the combined use of a Kinect depth sensor and a stochastic data assimilation method to recover free-surface flows. For this purpose, we first show that the Kinect is likely to capture temporal sequences of depth observations of wave-like surfaces with wavelengths and amplitudes sufficiently small to characterise medium- to large-scale flows. Then, we illustrate the ability of a stochastic data assimilation method to estimate both time-dynamic water surface elevations and velocities from sequences of synthetic depth images having characteristics close to those of the Kinect.

The goal of this article is to study the performance of pursuit algorithms when applied to the tomographic problem of particle reconstruction.


- Jul 2011
- 2011 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2011, Vancouver, BC, Canada, July 24-29, 2011

This paper presents a novel, efficient scheme for the analysis of Sea Surface Temperature (SST) ocean images. We consider the estimation of velocity fields and vorticity values from a sequence of oceanic images. The contribution of this paper lies in proposing a novel, robust, and simple approach based on the Weighted Ensemble Transform Kalman filter (WETKF) data assimilation technique for the analysis of real SST images, which may contain coastal regions or large areas of missing data due to cloud cover. The analysis of geophysical fluid flows is of the utmost importance in domains such as oceanography, hydrology, and meteorology for forecasting, climate change studies, and the monitoring of hazards or events. In all these domains, orbital or geostationary satellites provide a huge amount of image data with ever-increasing spatial and temporal resolution. For several years there has been growing interest in extracting from those images a sequence of motion fields depicting the evolution of the observed fluid flow. Compared to in situ measurement techniques supplied by dedicated probes or Lagrangian drifters, satellite images provide a much denser observation field. They offer, however, only indirect access to the physical quantities of interest, and consequently give rise to difficult inverse problems when estimating characteristic features of the flow such as velocity fields or vorticity maps. Motion estimation is classically based on the temporal conservation of luminance between two successive images and relies on additional assumptions to obtain uniqueness of the solution [1]; although these approaches provide estimates that are consistent over space, they can fail to be consistent in time and to reconstruct the dynamical evolution law accurately.

In the context of turbulent fluid motion measurement from image sequences, we propose in this paper to reverse the traditional point of view of wavelets perceived as an analyzing tool: wavelets and their properties are now considered as prior regularization models for the motion estimation problem, in order to exhibit some well-known turbulence regularities and multifractal behaviors in the reconstructed motion field.

- May 2011
- Scale Space and Variational Methods in Computer Vision - Third International Conference, SSVM 2011, Ein-Gedi, Israel, May 29 - June 2, 2011, Revised Selected Papers

Based on a wavelet expansion of the velocity field, we present a novel optical flow algorithm dedicated to the estimation of continuous motion fields such as fluid flows. This scale-space representation, associated to a simple gradient-based optimization algorithm, naturally sets up a well-defined multi-resolution analysis framework for the optical flow estimation problem, thus avoiding the common drawbacks of standard multi-resolution schemes. Moreover, wavelet properties enable the design of simple yet efficient high-order regularizers or polynomial approximations associated to a low computational complexity. Accuracy of proposed methods is assessed on challenging sequences of turbulent fluids flows.

- May 2011
- Scale Space and Variational Methods in Computer Vision - Third International Conference, SSVM 2011, Ein-Gedi, Israel, May 29 - June 2, 2011, Revised Selected Papers
- International Conference on Scale Space and Variational Methods in Computer Vision

This paper proposes a novel multi-scale fluid flow data assimilation approach, which integrates and complements the advantages of a Bayesian sequential assimilation technique, the Weighted Ensemble Kalman filter (WEnKF) [12], and an improved multiscale stochastic formulation of the Lucas-Kanade (LK) estimator. The proposed scheme makes it possible to enforce a physically plausible dynamical consistency of the estimated motion fields along the image sequence.

- May 2011
- Scale Space and Variational Methods in Computer Vision - Third International Conference, SSVM 2011, Ein-Gedi, Israel, May 29 - June 2, 2011, Revised Selected Papers
- International Conference on Scale Space and Variational Methods in Computer Vision

Selecting optimal models and hyper-parameters is crucial for accurate optic-flow estimation. This paper solves the problem in a generic variational Bayesian framework. The method is based on a conditional model linking the image intensity function, the velocity field and the hyper-parameters characterizing the motion model. Inference is performed at three levels by considering maximum a posteriori problem of marginalized probabilities. We assessed the performance of the proposed method on image sequences of fluid flows and of the "Middlebury" database. Experiments prove that applying the proposed inference strategy on very simple models yields better results than manually tuning smoothing parameters or discontinuity preserving cost functions of classical state-of-the-art methods.

Hundreds of images reach us every day from the satellites observing our biosphere. Meteorological and climate models could benefit greatly from a better exploitation of these images and of the information they contain.

- Aug 2010
- 20th International Conference on Pattern Recognition, ICPR 2010, Istanbul, Turkey, 23-26 August 2010

- Jul 2010
- Curves and Surfaces - 7th International Conference, Avignon, France, June 24-30, 2010, Revised Selected Papers

We introduce a non-linear stochastic filtering technique to track the state of a free curve from image data. The approach we propose is implemented through a particle filter, which includes color measurements characterizing the target and the background respectively. We design a continuous-time dynamics that allows us to infer inter-frame deformations. The curve is defined by an implicit level-set representation and the stochastic dynamics is expressed on the level-set function. It takes the form of a stochastic partial differential equation with a Brownian motion of low dimension. Specific noise models lead to the traditional level set evolution law based on mean curvature motions, while other forms lead to new evolution laws with different smoothing behaviors. In these evolution models, we propose to combine local photometric information, some velocity induced by the curve displacement and an uncertainty modeling of the dynamics. The associated filter capabilities are demonstrated on various sequences with highly deformable objects.

We consider a novel optical flow estimation algorithm based on a wavelet expansion of the velocity field. In particular, we propose an efficient gradient-based estimation algorithm which naturally embeds the estimation process into a multiresolution framework while avoiding most of the drawbacks common to this kind of hierarchical method. We then emphasize that the proposed methodology is well suited to the practical implementation of high-order regularizations. The power of the proposed algorithm and regularization schemes is finally assessed by simulation results on challenging image sequences of turbulent fluids.

- May 2010

In this paper, two data assimilation methods based on sequential Monte Carlo sampling are studied and compared: the ensemble Kalman filter and the particle filter. Each of these techniques has its own advantages and drawbacks. In this work, we try to get the best of each method by combining them. The proposed algorithm, called the weighted ensemble Kalman filter, consists in relying on the ensemble Kalman filter updates of samples in order to define a proposal distribution for the particle filter that depends on the history of measurements. The corresponding particle filter proves to be efficient with a small number of samples and no longer relies on the Gaussian approximations of the ensemble Kalman filter. The efficiency of the new algorithm is demonstrated both in terms of accuracy and computational load. This latter aspect is of the utmost importance in meteorology and oceanography, since in these domains data assimilation processes involve a huge number of state variables driven by highly non-linear dynamical models. Numerical experiments have been performed on different dynamical scenarios. The performance of the proposed technique has been compared to that of the ensemble Kalman filter procedure, which has been shown to provide meaningful results in the geophysical sciences.
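The ensemble Kalman filter update used above as a proposal can be sketched as a single perturbed-observation analysis step on a toy 3-D state (our own illustrative setup, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 3, 200                      # state dimension, ensemble size
truth = np.array([1.0, -2.0, 0.5])

ens = truth + rng.standard_normal((m, n))   # forecast ensemble (rows = members)
H = np.array([[1.0, 0.0, 0.0]])             # observe the first component only
r = 0.1                                     # observation error variance
y = H @ truth + rng.normal(scale=np.sqrt(r), size=1)

# Sample covariance of the forecast ensemble.
X = ens - ens.mean(axis=0)
P = X.T @ X / (m - 1)

# Kalman gain and perturbed-observation update of every member.
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r * np.eye(1))
y_pert = y + rng.normal(scale=np.sqrt(r), size=(m, 1))
analysis = ens + (y_pert - ens @ H.T) @ K.T
```

In the weighted variant, the analysis members are then treated as proposal samples and reweighted by importance weights, removing the Gaussian approximation this linear update implies.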

Variational approaches to image motion segmentation have been an active field of study in image processing and computer vision for two decades. We present a short overview of basic estimation schemes and report in more detail on recent modifications and applications to fluid flow estimation. Key properties of these approaches are illustrated by numerical examples. We outline promising research directions and point out the potential of variational techniques in combination with correlation-based PIV methods for improving the consistency of fluid flow estimation and simulation.

- Nov 2009
- IEEE 12th International Conference on Computer Vision (ICCV), 2009

Based on scaling laws describing the statistical structure of turbulent motion across scales, we propose a multiscale and non-parametric regularizer for optic-flow estimation. Regularization is achieved by constraining motion increments to behave through scales as the most likely self-similar process given some image data. In a first level of inference, the hard-constrained minimization problem is optimally solved by taking advantage of Lagrangian duality. It results in a collection of first-order regularizers acting at different scales. This estimation is non-parametric since the optimal regularization parameters at the different scales are obtained by solving the dual problem. In a second level of inference, the most likely self-similar model given the data is optimally selected by maximization of Bayesian evidence. The motion estimator accuracy is first evaluated on a synthetic image sequence of simulated bi-dimensional turbulence and then on a real meteorological image sequence. Results obtained with the proposed physics-based approach exceed the best state-of-the-art results. Furthermore, selecting from images the most evident multiscale motion model enables the recovery of physical quantities which are of major interest for turbulence characterization.

The approach we investigate for point tracking combines, within a stochastic filtering framework, a dynamic model relying on the optical flow constraint with measurements provided by a matching technique. Focusing on points belonging to regions described by a global dominant motion, the proposed tracking system is linear. Since the system depends on the images, the tracker is built from a Conditional Linear Filter, derived through the use of a conditional linear minimum-variance estimator. This conditional tracker significantly improves results in some general situations; in particular, it allows us to deal in a simple way with the tracking of points following trajectories with abrupt changes and occlusions.
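When the system matrices do not depend on the images, the Conditional Linear Filter reduces to a standard linear Kalman recursion. The sketch below shows one such generic predict/update cycle with an illustrative constant-velocity point model; it is a hedged stand-in, not the image-conditioned filter of the paper:

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One predict/update cycle of a linear Kalman filter."""
    x = F @ x                            # predict state
    P = F @ P @ F.T + Q                  # predict covariance
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)              # update with measurement z
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# constant-velocity model for a tracked point: state = [position, velocity]
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])               # only position is measured
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for z in [1.0, 2.1, 2.9, 4.0, 5.1]:      # noisy positions along a line
    x, P = kalman_step(x, P, F, Q, H, R, np.array([z]))
```

After a few noisy position measurements along a roughly unit-velocity trajectory, the filter settles near position 5 and velocity 1.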

- Aug 2009

The turbulent wake of a circular cylinder at Re = 3900 was measured by time-resolved PIV. A reduced-order dynamical system is obtained by Galerkin projection of the Navier-Stokes equations onto the basis given by the POD of the temporal sequence of velocity fields. Variational assimilation of the discrete temporal modes is carried out by controlling the coefficients of the dynamical system.
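The POD basis used for such a Galerkin projection can be computed from a snapshot matrix with a plain SVD. The sketch below (illustrative, not the authors' code; the two-mode synthetic "flow" is an assumption for demonstration) recovers the spatial modes and temporal coefficients:

```python
import numpy as np

def pod(snapshots):
    """Proper orthogonal decomposition of a snapshot matrix.

    snapshots : (n_points, n_times) array, one velocity field per column.
    Returns the temporal mean, spatial modes (columns of U), singular
    values, and temporal coefficients a such that fluctuation ≈ modes @ a.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    return mean, U, s, s[:, None] * Vt     # a_k(t) = s_k V_k(t)

x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 1, 40)
# synthetic two-mode flow: energy concentrates in the first two POD modes
snaps = (np.outer(np.sin(x), np.cos(8 * np.pi * t))
         + 0.3 * np.outer(np.sin(2 * x), np.sin(8 * np.pi * t)))
mean, modes, s, a = pod(snaps)
```

Since the synthetic field lives in a two-dimensional subspace, the singular value spectrum drops to numerical zero after the second mode, and the mean plus the modal expansion reconstructs the snapshots exactly.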

- Aug 2009
- IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2009

In this paper, Bayesian inference is used to select the most evident Gibbs prior model for motion estimation given some image sequence. The proposed method supplements the maximum a posteriori motion estimation scheme proposed in Héas et al. (2008). Indeed, in this recent work, the authors introduced a family of multiscale spatial priors in order to cure the ill-posed inverse motion estimation problem. We propose here a second level of inference where the most likely prior model is optimally chosen given the data by maximization of Bayesian evidence. Model selection and motion estimation are assessed on Meteosat Second Generation (MSG) image sequences. Selecting from images the most evident multiscale model enables the recovery of physical quantities which are of major interest for atmospheric turbulence characterization.

- Aug 2009
- IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2009

This paper is concerned with the tracking and analysis of convective cells in Meteosat Second Generation images. Due to the highly deformable nature of convective clouds, the noisy measurements obtained from image data, and the complexity of the physics governing such phenomena, the conventional computer vision tools for detecting and tracking usual (rigid) objects are not well adapted. In this paper, we address this problem using variational data assimilation tools. Such techniques enable the estimation of an unknown state function according to a given dynamical model and to noisy and incomplete measurements. The system state is composed of two curves (represented with implicit surfaces) corresponding to the whole cloud and to the coldest part (heart) of the convective system. Since no precise and manageable dynamical model exists for such phenomena, the dynamics is measured directly from the images using motion estimation techniques devoted to fluid motion.

We propose a new multiscale PIV method based on the decay of turbulent kinetic energy. The technique is based on scaling power laws describing the statistical structure of turbulence. A spatial regularization constrains the solution to behave through scales as a self-similar process, via the second-order structure function and a given power law. The actual power-law parameters, corresponding to the distribution of the turbulent kinetic energy decay, are estimated from a simple hot-wire measurement. The method is assessed in a turbulent wake flow and in grid turbulence through comparisons with HWA measurements and other PIV approaches. Results indicate that the present method is superior because it accounts for the whole dynamic range involved in the flows.
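The second-order structure function underlying such a power-law constraint can be estimated directly from a 1D velocity signal. The following sketch is illustrative only: a smooth sine stands in for real turbulence data, so the recovered small-scale exponent is the smooth-flow value S2(r) ~ r²:

```python
import numpy as np

def structure_function_2(u, seps):
    """Second-order structure function S2(r) = <(u(x+r) - u(x))^2>
    estimated from a 1D velocity signal on a uniform grid, for the
    given integer separations (in grid points)."""
    return np.array([np.mean((u[r:] - u[:-r]) ** 2) for r in seps])

x = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
u = np.sin(x)                              # smooth stand-in for a velocity signal
seps = np.array([1, 2, 4, 8])
S2 = structure_function_2(u, seps)
# log-log slope of S2 vs r gives the scaling exponent (≈ 2 for a smooth signal)
slope = np.polyfit(np.log(seps), np.log(S2), 1)[0]
```

Fitting the same slope on hot-wire data in an inertial range would instead yield an exponent close to the Kolmogorov value 2/3, which is the quantity fed into the regularizer.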

In this paper, we present a method for the temporal tracking of fluid flow velocity fields. The technique we propose is formalized within a sequential Bayesian filtering framework. The filtering model combines an Itô diffusion process coming from a stochastic formulation of the vorticity-velocity form of the Navier-Stokes equation and discrete measurements extracted from the image sequence. In order to handle a state space of reasonable dimension, the motion field is represented as a combination of adapted basis functions, derived from a discretization of the vorticity map of the fluid flow velocity field. The resulting nonlinear filtering problem is solved with the particle filter algorithm in continuous time. An adaptive dimensional reduction method is applied to the filtering technique, relying on dynamical systems theory. The efficiency of the tracking method is demonstrated on synthetic and real-world sequences.

- Jun 2009
- Scale Space and Variational Methods in Computer Vision, Second International Conference, SSVM 2009, Voss, Norway, June 1-5, 2009. Proceedings

The joint analysis of motions and deformations is crucial in a number of computer vision applications. In this paper, we introduce a non-linear stochastic filtering technique to track the state of a free curve. The approach we propose is implemented through a particle filter which includes color measurements characterizing the target and the background respectively. We design a continuous-time dynamics that allows us to infer inter-frame deformations. The curve is defined by an implicit level-set representation, and the stochastic dynamics is expressed on the level-set function. It takes the form of a stochastic differential equation with a Brownian motion of low dimension. Specific noise models lead to traditional evolution laws based on mean curvature motion, while other forms lead to new evolution laws with different smoothing behaviors. In these evolution models, we propose to combine local motion information extracted from the images with an uncertainty model of the dynamics. The associated filter we propose for curve tracking thus belongs to the family of conditional particle filters. Its capabilities are demonstrated on various sequences with highly deformable objects.

- May 2009

In this work, we report on electrical and fluid-dynamics studies of the flow induced by a sliding discharge (SD). This kind of discharge was created with a three-electrode configuration: one electrode excited with AC and the others with a negative DC voltage. The SD was activated in a quiescent fluid at atmospheric pressure. The flow field induced by the SD was analysed through measurements with Pitot probes and Schlieren Image Velocimetry. Under the conditions of our experiments, two "jet flows" blowing towards the inter-electrode space were induced from the air-exposed electrodes. As a consequence of the mutual interaction of these two flows and of the magnitude of each flow, a resulting plume-like planar jet of adjustable direction (0-180 deg) could be formed. Robust control of the axis direction of the plume could be achieved by modifying the AC voltage value.

The complexity of the laws of dynamics governing 3-D atmospheric flows, combined with incomplete and noisy observations, makes the recovery of atmospheric dynamics from satellite image sequences very difficult. In this paper, we address the challenging problem of estimating physically sound and time-consistent horizontal motion fields at various atmospheric depths for a whole image sequence. Based on a vertical decomposition of the atmosphere, we propose a dynamically consistent atmospheric motion estimator relying on a multilayer dynamic model. This estimator is based on a weak-constraint variational data assimilation scheme and is applied to noisy and incomplete pressure-difference observations derived from satellite images. The dynamic model is a simplified vorticity-divergence form of a multilayer shallow-water model. Average horizontal motion fields are estimated for each layer. The performance of the proposed technique is assessed using synthetic examples and real-world meteorological satellite image sequences. In particular, it is shown that the estimator exploits fine spatio-temporal image structures and succeeds in characterizing motion at small spatial scales.

- Jan 2009
- International Conference and Advanced School "Turbulent Mixing and Beyond"

Based on scaling laws describing the statistical structure of turbulent motion across scales, we propose a multiscale and non-parametric regularizer for the estimation of velocity fields of bidimensional or quasi-bidimensional flows from image sequences. The spatial regularization used to close the ill-posed motion estimation problem constrains motion increments to behave through scales as the most likely self-similar process given some image data. In a first level of inference, the estimation, formulated as a hard-constrained minimization problem, is optimally solved by taking advantage of Lagrangian duality. It results in a collection of first-order regularizers acting at different scales. This estimation is non-parametric since the optimal regularization parameters at the different scales are obtained by solving the dual problem. In a second level of inference, the most likely self-similar model given the data is optimally selected by maximization of Bayesian evidence. The motion estimator accuracy is first evaluated on a synthetic image sequence of simulated bidimensional turbulence and then on a real meteorological image sequence. Results obtained with the proposed physics-based approach exceed the best state-of-the-art results. Furthermore, selecting from images the most evident multiscale motion model enables the recovery of physical quantities which are of major interest for turbulence characterization.

- Nov 2008
- 15th IEEE International Conference on Image Processing (ICIP), 2008

This paper proposes a hybrid approach to estimating the 3D pose of an object. The integration of texture information based on image intensities into a more classical non-linear edge-based pose estimation has been shown to greatly increase the reliability of the tracker. In this work, we propose to exploit the data provided by an optical flow algorithm for a similar purpose. The advantage of using optical flow is that it does not require any a priori knowledge of the object's appearance. The registration of 2D and 3D cues for monocular tracking is performed by a non-linear minimization. The results obtained show that using optical flow enables robust 3D hybrid tracking even without any texture model.

In this paper, a variational technique derived from optimal control theory is used to realize a dynamically consistent motion estimation of a whole fluid image sequence. The estimation is conducted through an iterative process involving a forward integration of a given dynamical model followed by a backward integration of an adjoint evolution law. By combining physical conservation laws and image observations, a physically grounded temporal consistency is imposed and the quality of the motion estimation is significantly improved. The method is validated on two synthetic image sequences provided by numerical simulation of fluid flows and on real-world meteorological examples.
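This forward/backward iteration can be illustrated on a toy linear model: one forward integration of a periodic advection operator, one backward integration of its adjoint, and a finite-difference check of the resulting gradient. This is a minimal sketch under these toy assumptions, not the fluid-flow assimilation system itself:

```python
import numpy as np

# Forward model M = repeated circular shift (linear advection on a
# periodic grid); cost J(x0) = 0.5 * ||M x0 - y||^2.
def forward(x0, steps):
    x = x0.copy()
    for _ in range(steps):
        x = np.roll(x, 1)            # one advection step
    return x

def adjoint(lam, steps):
    for _ in range(steps):
        lam = np.roll(lam, -1)       # adjoint (transpose) of np.roll(., 1)
    return lam

def grad_J(x0, y, steps):
    misfit = forward(x0, steps) - y  # forward integration
    return adjoint(misfit, steps)    # backward integration: dJ/dx0 = M^T (M x0 - y)

rng = np.random.default_rng(2)
x0 = rng.normal(size=32)
y = rng.normal(size=32)
g = grad_J(x0, y, 5)

# finite-difference check of the first gradient component
J = lambda x: 0.5 * np.sum((forward(x, 5) - y) ** 2)
eps = 1e-6
e0 = np.zeros(32); e0[0] = 1.0
fd = (J(x0 + eps * e0) - J(x0 - eps * e0)) / (2 * eps)
```

Because the cost is exactly quadratic, the adjoint gradient and the central finite difference agree to machine precision; in a real assimilation system this check is the standard way to validate an adjoint code.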

Based on self-similar models of turbulence, we propose in this paper a multi-scale regularizer in order to provide a closure to the optic-flow estimation problem. Regularization is achieved by constraining motion increments to behave as a self-similar process. The associated constrained minimization problem results in a collection of first-order optic-flow regularizers acting at the different scales. The problem is optimally solved by taking advantage of Lagrangian duality. A further advantage of the dual formulation is that we also infer the regularization parameters. Since the self-similar model parameters observed in real cases can deviate from theory, we propose to add a Bayesian learning stage to the algorithm. The performance of the resulting optic-flow estimator is evaluated on a particle image sequence of a simulated turbulent flow. The self-similar regularizer is also assessed on a meteorological image sequence.

In this paper, we address the problem of estimating 3-D motions of a stratified atmosphere from satellite image sequences. The analysis of 3-D atmospheric fluid flows is complicated by the incomplete observation of atmospheric layers due to the sparsity of cloud systems, which makes the estimation of dense atmospheric motion fields from satellite image sequences very difficult. The recovery of the vertical component of fluid motion from a monocular sequence of image observations is a very challenging problem for which no solution exists in the literature. Based on a physically sound vertical decomposition of the atmosphere into cloud layers of different altitudes, we propose here a dense motion estimator dedicated to the extraction of 3-D wind fields characterizing the dynamics of a layered atmosphere. Wind estimation is performed over the complete 3-D space, using a multilayer model describing a stack of dynamic horizontal layers of evolving thickness, interacting at their boundaries via vertical winds. The efficiency of our approach is demonstrated on synthetic and real sequences.

- Aug 2008
- IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2008

The complexity of the dynamical laws governing 3D atmospheric flows, combined with incomplete and noisy observations, makes the recovery of atmospheric dynamics from satellite image sequences very difficult. In this paper, we address the challenging problem of estimating physically sound and time-consistent horizontal motion fields at various atmospheric depths for a whole image sequence. Based on a vertical decomposition of the atmosphere, we propose a dynamically consistent atmospheric motion estimator relying on a multi-layer dynamical model. This estimator is based on a weak-constraint variational data assimilation scheme and is applied to noisy and incomplete pressure-difference observations derived from satellite images. The dynamical model consists in a simplified vorticity-divergence form of a multi-layer shallow-water model. Average horizontal motion fields are estimated for each layer. The performance of the proposed technique is assessed on real-world meteorological satellite image sequences.
