## About

175 Publications

19,842 Reads

**How we measure 'reads':** a 'read' is counted each time someone views a publication summary (such as the title, abstract, and list of authors), clicks on a figure, or views or downloads the full-text.

17,821 Citations

## Publications

Publications (175)

The use of discrete-time stochastic parameterization to account for model error due to unresolved scales in ensemble Kalman filters is investigated by numerical experiments. The parameterization quantifies the model error and produces an improved non-Markovian forecast model, which generates high quality forecast ensembles and improves filter perfo...

We compare two approaches to the predictive modeling of dynamical systems from partial observations at discrete times. The first is continuous in time, where one uses data to infer a model in the form of stochastic differential equations, which are then discretized for numerical solution. The second is discrete in time: the model one infers is a pa...

Importance sampling algorithms are discussed in detail, with an emphasis on implicit sampling, and applied to data assimilation via particle filters. Implicit sampling makes it possible to use the data to find high-probability samples at relatively low cost, making the assimilation more efficient. A new analysis of the feasibility of data assimilat...
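
The importance-sampling step described above can be sketched in a few lines; the target and proposal densities below are hypothetical stand-ins chosen for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target (unnormalized) posterior: p(x) ∝ exp(-(x-1)^2/2), a Gaussian at 1.
# Proposal: q(x) = N(0, 2^2). The weights w ∝ p/q correct for the mismatch.
def log_p(x):
    return -0.5 * (x - 1.0) ** 2

def log_q(x):
    return -0.5 * (x / 2.0) ** 2 - np.log(2.0)

x = rng.normal(0.0, 2.0, size=200_000)      # samples drawn from the proposal
logw = log_p(x) - log_q(x)
w = np.exp(logw - logw.max())               # stabilize before normalizing
w /= w.sum()

post_mean = np.sum(w * x)                   # self-normalized estimate of E[x]
print(round(post_mean, 2))                  # close to the true posterior mean, 1.0
```

Note that only an unnormalized target density is needed: the normalizing constant cancels when the weights are normalized.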

The problem of constructing data-based, predictive, reduced models for the Kuramoto-Sivashinsky equation is considered, under circumstances where one has observation data only for a small subset of the dynamical variables. Accurate prediction is achieved by developing a discrete-time stochastic reduced system, based on a NARMAX (Nonlinear Autoregre...

Implicit sampling is a weighted sampling method that is used in data assimilation to sequentially update state estimates of a stochastic model based on noisy and incomplete data. Here we apply implicit sampling to sample the posterior probability density of parameter estimation problems. The posterior probability combines prior information about th...

Many physical systems are described by nonlinear differential equations that are too complicated to solve in full. A natural way to proceed is to divide the variables into those that are of direct interest and those that are not, formulate solvable approximate equations for the variables of greater interest, and use data and statistical methods to...


The problem of turbulent flow in pipes, although at first sight of purely engineering interest, has since the 1930s been the subject of much attention by mathematicians and physicists, including such outstanding figures as Th von Kármán, L Prandtl, and L D Landau. It has turned out that despite – or perhaps due to – the seemingly simple formulation...


Path integral control solves a class of stochastic optimal control problems with a Monte Carlo (MC) method for an associated Hamilton-Jacobi-Bellman (HJB) equation. The MC approach avoids the need for a global grid of the domain of the HJB equation and, therefore, path integral control is in principle applicable to control problems of moderate to l...

The universal (Reynolds-number-independent) von Kármán–Prandtl logarithmic law for the velocity distribution in the basic intermediate region of a turbulent shear flow is generally considered to be one of the fundamental laws of engineering science and is taught universally in fluid mechanics and hydraulics courses. We show here that this law is ba...

Polynomial chaos expansions are used to reduce the computational cost in the Bayesian solutions of inverse problems by creating a surrogate posterior that can be evaluated inexpensively. We show, by analysis and example, that when the data contain significant information beyond what is assumed in the prior, the surrogate posterior can be very diffe...


We show, using idealized models, that numerical data assimilation can be successful only if an effective dimension of the problem is not excessive. This effective dimension depends on the noise in the model and the data, and in physically reasonable problems it can be moderate even when the number of variables is huge. We then analyze several data...

Implicit sampling is a Monte Carlo (MC) method that focuses the computational effort on the region of high probability, by first locating this region via numerical optimization, and then solving random algebraic equations to explore it. Implicit sampling has been shown to be efficient in online state estimation and filtering (data assimilation) pro...

The implicit particle filter is a sequential Monte Carlo method for data assimilation. The idea is to focus the particles onto the high probability regions of the target probability density function (pdf) so that the number of particles required for a good approximation of this pdf remains manageable, even if the dimension of the state space is lar...

The Mori-Zwanzig formalism of statistical mechanics is used to estimate the uncertainty caused by underresolution in the solution of a nonlinear dynamical system. A general approach is outlined and applied to a simple example. The noise term that describes the uncertainty turns out to be neither Markovian nor Gaussian. It is argued that this is the...

In the last chapter, we showed that in many cases, the computation of properties of mechanical systems with many variables reduces to the evaluation of averages with respect to the canonical density \({e}^{-\beta H}/Z\). We now show how such calculations can be done, using the Ising model as an example.
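
Such averages with respect to the canonical density are typically computed by Metropolis Monte Carlo. A minimal sketch for a 1D Ising chain with periodic boundaries (an illustrative toy; the parameter values are arbitrary), compared against the exact 1D result:

```python
import numpy as np

rng = np.random.default_rng(1)

# Metropolis sampling of a 1D Ising chain with periodic boundaries:
# H = -J * sum_i s_i s_{i+1}; states are weighted by exp(-beta * H) / Z.
N, J, beta = 50, 1.0, 0.5
s = rng.choice([-1, 1], size=N)

def sweep(s):
    for i in rng.integers(0, N, size=N):
        # Energy change from flipping spin i (periodic neighbours)
        dE = 2.0 * J * s[i] * (s[(i - 1) % N] + s[(i + 1) % N])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]

for _ in range(500):            # burn-in
    sweep(s)

energies = []
for _ in range(4000):           # measurement sweeps
    sweep(s)
    energies.append(-J * np.sum(s * np.roll(s, 1)) / N)

e_mc = np.mean(energies)
e_exact = -J * np.tanh(beta * J)   # exact 1D energy per spin (large-N limit)
print(round(e_mc, 2), round(e_exact, 2))
```

The acceptance rule uses only the local energy change, so each update costs O(1) regardless of lattice size.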

We now turn to problems in statistical mechanics where the assumption of thermal equilibrium does not apply. In nonequilibrium problems, one should in principle solve the full Liouville equation, at least approximately. There are many situations in which one attempts to do that under different assumptions and conditions, giving rise to the Euler an...

In weather forecasts, one often hears a sentence such as, “the probability of rain tomorrow is 50 percent.” What does this mean? Something like, “if we look at all possible tomorrows, in half of them there will be rain” or “if we make the experiment of observing tomorrow, there is a quantifiable chance of having rain tomorrow, and somehow or other...

In the chapters that follow, we will provide a reasonably systematic introduction to stochastic processes; we start here by considering a particular stochastic process that is important both in the theory and in applications, together with some applications.

The goal of this chapter is to show how mechanics problems with a very large number of variables can be reduced to the solution of a single linear partial differential equation, albeit one with many independent variables. Furthermore, under conditions that define a thermal equilibrium, the solution of this partial differential equation can be writt...

In this chapter we present some of the ways in which probability can be put to use in scientific computation. We begin with a class of Monte Carlo methods (so named in honor of that town’s gambling casinos) where one evaluates a nonrandom quantity, for example a definite integral, as the expected value of a random variable.
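
The idea of evaluating a nonrandom quantity as an expected value can be illustrated with a definite integral (a toy example, not one from the text):

```python
import numpy as np

rng = np.random.default_rng(2)

# Estimate the nonrandom quantity I = ∫_0^1 x^2 dx = 1/3 as an expected
# value: I = E[X^2] for X uniform on [0, 1].
x = rng.random(1_000_000)
estimate = np.mean(x ** 2)
stderr = np.std(x ** 2, ddof=1) / np.sqrt(x.size)   # error decays like N^{-1/2}
print(round(estimate, 3), "+/-", round(stderr, 4))  # close to 1/3
```

The N^{-1/2} error rate is independent of dimension, which is why such methods pay off for high-dimensional integrals.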

There are many situations in which one needs to consider differential equations that contain a stochastic element, for example, equations in which the value of some coefficient depends on a measurement. The solution of the equation is then a function of the independent variables in the equation as well as of a point ω in some probability space; i.e...

Let V be a vector space with vectors \(u,v,w,\ldots \) and scalars \(\alpha,\beta,\ldots \). The space V is an inner product space if one has defined a function \((\cdot,\cdot )\) from \(V \times V\) to the real numbers (if the vector space is real) or to the complex numbers (if V is complex) such that for all \(u,v \in V\) and all scalars \(\alpha...

Implicit sampling is a Monte Carlo importance sampling method and we present its application to data assimilation. The basic idea is to construct an importance function such that the samples (often called particles) are guided towards the high-probability regions of the posterior pdf, which is defined jointly by the model and the data. This is done...

There are many computational tasks, in which it is necessary to sample a given probability density function (or pdf for short), i.e., to use a computer to construct a sequence of independent random vectors \(x_i\) (i = 1, 2, …), whose histogram converges to the given pdf. This can be difficult because the sample space can be huge, and more importantly,...
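
A minimal sketch of one standard way to do this, random-walk Metropolis sampling of an unnormalized pdf (the Laplace target here is an arbitrary choice for illustration, not an example from the text):

```python
import numpy as np

rng = np.random.default_rng(3)

# Random-walk Metropolis: build a chain x_1, x_2, ... whose histogram
# converges to a given (possibly unnormalized) pdf — here p(x) ∝ exp(-|x|),
# the Laplace density, whose variance is 2.
def log_p(x):
    return -abs(x)

x, chain = 0.0, []
for _ in range(200_000):
    prop = x + rng.normal(0.0, 1.5)              # symmetric proposal step
    if np.log(rng.random()) < log_p(prop) - log_p(x):
        x = prop                                 # accept; otherwise keep x
    chain.append(x)

samples = np.array(chain[20_000:])               # discard burn-in
print(round(samples.var(), 1))                   # close to the Laplace variance, 2
```

Only ratios of the target density appear, so no normalizing constant is needed; the price is that successive samples are correlated.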

The implicit particle filter is a sequential Monte Carlo method for data assimilation that guides the particles to the high-probability regions via a sequence of steps that includes minimizations. We present a new and more general derivation of this approach and extend the method to particle smoothing as well as to data assimilation for perfect mod...

Implicit sampling is a sampling scheme for particle filters, designed to move particles one-by-one so that they remain in high-probability domains. We present a new derivation of implicit sampling, as well as a new iteration method for solving the resulting algebraic equations.
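
A one-dimensional sketch of the idea, under simplifying assumptions (a hypothetical convex log-density F and a plain Newton iteration for the algebraic equation; this is an illustrative toy, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(4)

# Target p(x) ∝ exp(-F(x)) with F(x) = x^2/2 + x^4/4, so min F = 0 at x = 0.
# Each reference sample ξ ~ N(0,1) is mapped into the high-probability region
# by solving the algebraic equation F(x) = ξ^2/2 (root with the sign of ξ).
def F(x):  return 0.5 * x * x + 0.25 * x ** 4
def dF(x): return x + x ** 3

def solve(xi, iters=20):
    if xi == 0.0:
        return 0.0, 1.0
    t, x = 0.5 * xi * xi, xi          # F ≈ x^2/2 near 0, so start at x = ξ
    for _ in range(iters):
        x -= (F(x) - t) / dF(x)       # Newton; F is convex, so this converges
    return x, abs(xi) / abs(dF(x))    # weight = |dx/dξ|, since F(x) = ξ^2/2

xis = rng.normal(size=50_000)
xs, ws = np.array([solve(xi) for xi in xis]).T
ws /= ws.sum()
print(round(np.sum(ws * xs), 2))      # weighted posterior mean, ≈ 0 by symmetry
```

Because every sample lands on the level set F(x) = ξ²/2, the weights reduce to the Jacobian |dx/dξ| and stay well balanced even when the target is far from Gaussian.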

Implicit particle filtering is a sequential Monte Carlo method for data assimilation, designed to keep the number of particles manageable by focussing attention on regions of large probability. These regions are found by minimizing, for each particle, a scalar function F of the state variables. Some previous implementations of the implicit filt...

Implicit particle filters for data assimilation generate high-probability samples by representing each particle location as a separate function of a common reference variable. This representation requires that a certain underdetermined equation be solved for each particle and at each time an observation becomes available. We present a new implement...

Particle filters for data assimilation are usually presented in terms of an Itô stochastic ordinary differential equation (SODE). The task is to estimate the state a(t) of the SODE, with additional information provided by noisy observations \(b_n\), n = 1, 2, ..., of this state. In principle, the solution of this problem is known: the optimal estimate of th...

Implicit particle filters for data assimilation update the particles by first choosing probabilities and then looking for particle locations that assume them, guiding the particles one by one to the high probability domain. We provide a detailed description of these filters, with illustrative examples, together with new, more general, methods for s...

Implicit particle filters and applications

We present a general form of the iteration and interpolation process used in implicit particle filters. Implicit filters are based on a pseudo-Gaussian representation of posterior densities, and are designed to focus the particle paths so as to reduce the number of particles needed in nonlinear data assimilation. Examples are given.

We present a particle-based nonlinear filtering scheme, related to recent work on chainless Monte Carlo, designed to focus particle paths sharply so that fewer particles are required. The main features of the scheme are a representation of each new probability density function by means of a set of functions of Gaussian variables (a distinct functio...

We now turn to the statistical mechanics of systems not in equilibrium. The first few sections are devoted to special cases, which will be used to build up experience with the questions one can reasonably ask and the kinds of answer one may expect. A general formalism will follow, with applications.

We begin the discussion of statistical mechanics by a quick review of standard mechanics. Suppose we are given N particles whose position coordinates are given by a set of scalar quantities \(q_1, \ldots, q_n\). In a d-dimensional space, one needs d numbers to specify a location, so that n = Nd.

This chapter is devoted to further topics in the theory of stochastic processes and their applications. We start with a weaker definition of a stochastic process that is sufficient in the study of stationary processes. We said before that a stochastic process is a function u of both a variable ω in a probability space and a continuous parameter t,...




Particle filters for data assimilation in nonlinear problems use “particles” (replicas of the underlying system) to generate a sequence of probability density functions (pdfs) through a Bayesian process. This can be expensive because a significant number of particles has to be used to maintain accuracy. We offer here an alternative, in which the re...
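
One Bayesian update step of a standard (bootstrap) particle filter can be sketched as follows; the linear-Gaussian toy problem is chosen so the exact posterior is known (an illustration of the baseline method, not of the alternative scheme proposed in this paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# One update step of a bootstrap particle filter (a minimal toy):
# prior x ~ N(0,1); observation b = x + v with v ~ N(0,1), and b = 1 observed.
# The exact posterior is N(1/2, 1/2), so the weighted particle mean
# should land close to 0.5.
M = 100_000
particles = rng.normal(0.0, 1.0, size=M)          # sample the prior ("forecast")
b = 1.0
logw = -0.5 * (b - particles) ** 2                # log-likelihood of the data
w = np.exp(logw - logw.max())
w /= w.sum()

mean = np.sum(w * particles)                      # posterior mean estimate
# Resample to equal weights so the filter can continue to the next step
idx = rng.choice(M, size=M, p=w)
particles = particles[idx]
print(round(mean, 2))                             # ≈ 0.5
```

In higher dimensions the weights degenerate quickly, which is exactly the cost problem the abstract refers to.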

A sampling method for spin systems is presented. The spin lattice is written as the union of a nested sequence of sublattices, all but the last with conditionally independent spins, which are sampled in succession using their marginals. The marginals are computed concurrently by a fast algorithm; errors in the evaluation of the marginals are offset...

The paper describes a random-choice method, based on Glimm's (1965) work, for solving problems in gas dynamics involving flames. The method involves no numerical viscosity, is explicit, and is unconditionally stable without being unconditionally convergent. The principle of the method is developed for solving a Riemann problem, and it involves choo...

We summarize and compare our recent methods for reducing the complexity of computational problems, in particular dimensional reduction methods based on the Mori-Zwanzig formalism of statistical physics, block Monte-Carlo methods, and an averaging method for deriving an effective equation for a nonlinear wave propagation problem. We show that thei...

We sample a velocity field that has an inertial spectrum and a skewness that matches experimental data. In particular, we compute a self-consistent correction to the Kolmogorov exponent and find that for our model it is zero. We find that the higher-order structure functions diverge for orders larger than a certain threshold, as theorized in some r...

We show how to use numerical methods within the framework of successive scaling to analyse the microstructure of turbulence, in particular to find inertial range exponents and structure functions. The methods are first calibrated on the Burgers problem and are then applied to the 3D Euler equations. Known properties of low order structure functions...

The basic element of Lighthill's “sandwich model” of tropical cyclones is the existence of “ocean spray,” a layer intermediate between air and sea made up of a cloud of droplets that can be viewed as a “third fluid.” We propose a mathematical model of the flow in the ocean spray based on a semiempirical turbulence theory and demonstrate that the av...

We demonstrate using the high-quality experimental data that turbulent wall jet flows consist of two self-similar layers: a top layer and a wall layer, separated by a mixing layer where the velocity is close to maximum. The top and wall layers are significantly different from each other, and both exhibit incomplete similarity, i.e., a strong influe...

We present methods for the reduction of the complexity of computational problems, both time-dependent and stationary, together with connections to renormalization, scaling, and irreversible statistical mechanics. Most of the methods have been presented before; what is new here is the common framework which relates the several constructions to each...

We show that the inertial range spectrum of the Burgers equation has a viscosity-dependent correction at any wave number when the viscosity is small but not zero. We also calculate the spectrum of the Korteweg-deVries-Burgers equation and show that it can be partially mapped onto the inertial spectrum of a Burgers equation with a suitable effective...

We present a simple physical model of turbulent wall-bounded shear flows that reveals exactly the scaling properties we had previously obtained by similarity considerations. The significance of our results for the understanding of turbulence is pointed out.

An adaptive strategy is proposed for reducing the number of unknowns in the calculation of a proposal distribution in a sequential Monte Carlo implementation of a Bayesian filter for nonlinear dynamics. The idea is to solve only in directions in which the dynamics is expanding, found adaptively; this strategy is suggested by earlier work on optimal...

We consider traveling wave solutions of the Korteweg-deVries-Burgers equation and set up an analogy between the spatial averaging of these traveling waves and real-space renormalization for Hamiltonian systems. The result is an effective equation that reproduces means of the unaveraged, highly oscillatory, solution. The averaging enhances the appar...

Suppose one wants to approximate m components of an n-dimensional system of nonlinear differential equations (m < n) without solving the full system. In general a smaller system of m equations has right-hand-sides which depend on all of the n variables. The simplest approximation is the replacement of those right-hand-sides by their conditional expec...
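
The closure described above can be illustrated on a toy linear problem where the conditional expectation is trivial (a hypothetical example; the closure is exact here only because the system is linear):

```python
import numpy as np

rng = np.random.default_rng(6)

# Full system dx/dt = -x + y, with the unresolved variable y frozen and
# drawn from the prior N(0.5, 1). Replacing y by its expectation E[y] = 0.5
# gives the reduced equation dx/dt = -x + 0.5 for the mean of x.
x0, T, dt = 2.0, 1.0, 1e-3
steps = int(T / dt)

# Monte Carlo ensemble of the full system over draws of y
y = rng.normal(0.5, 1.0, size=20_000)
x = np.full_like(y, x0)
for _ in range(steps):
    x += dt * (-x + y)                            # forward Euler
ensemble_mean = x.mean()

# Reduced equation, using the conditional expectation in the right-hand side
xr = x0
for _ in range(steps):
    xr += dt * (-xr + 0.5)

print(round(ensemble_mean, 2), round(xr, 2))      # the two agree closely
```

For nonlinear right-hand sides the expectation does not commute with the dynamics, and the error of this first-order closure grows in time, which is what motivates the higher-order methods.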

A sequence which converges is a Cauchy sequence, although the converse is not necessarily true. If the converse is true for all Cauchy sequences in a given inner product space, then the space is called complete. We shall assume that all of the spaces we work with from now on are complete. A few more definitions from real analysis: Definition 3. An op...

Optimal prediction methods estimate the solution of nonlinear time-dependent problems when that solution is too complex to be fully resolved or when data are missing. The initial conditions for the unresolved components of the solution are drawn from a probability distribution, and their effect on a small set of variables that are actually computed...

According to a model of the turbulent boundary layer that we propose, in the absence of external turbulence the intermediate region between the viscous sublayer and the external flow consists of two sharply separated self-similar structures. The velocity distribution in these structures is described by two different scaling laws. The mean velocity...

Optimal prediction methods compensate for a lack of resolution in the numerical solution of complex problems through the use of prior statistical information. We know from previous work that in the presence of strong underresolution a good approximation needs a non-Markovian "memory", determined by an equation for the "orthogonal", i.e., unresolved...

A correlation is obtained for the drag coefficient $c_f'$ of the turbulent boundary layer as a function of the effective boundary layer Reynolds number $Re$ that we previously introduced. A comparison is performed also with another correlation for the drag coefficient as a function of the traditional Reynolds number $Re_\theta$, based on the moment...

After a short introduction to incomplete similarity and anomalous scaling, with mathematical examples, I will show that the mean velocity profile in turbulent boundary layers provides instances of these notions; as a consequence, the region between the viscous sublayer and the free stream consists of two subregions described by power laws, separate...

Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unreso...

Processing the data from a large variety of zero-pressure-gradient boundary layer flows shows that the Reynolds-number-dependent scaling law, which the present authors obtained earlier for pipes, gives an accurate description of the velocity distribution in a self-similar intermediate region adjacent to the viscous sublayer next to the wall. The ap...

In a turbulent boundary layer over a smooth flat plate with zero pressure gradient, the intermediate structure between the viscous sublayer and the free stream consists of two layers: one adjacent to the viscous sublayer and one adjacent to the free stream. When the level of turbulence in the free stream is low, the boundary between the two layers...

Optimal prediction methods compensate for a lack of resolution in the numerical solution of complex problems through the use of prior statistical information. We point out a relation between optimal prediction and the statistical mechanics of irreversible processes, and use a version of the Mori-Zwanzig formalism to produce a higher-order optimal p...

Analysis of the Stockholm group data on zero-pressure-gradient boundary flows, presented in the thesis of J. M. Österlund (http://www.mesh.kth.se/~jens/zpg/), is performed. The results of processing of all 70 mean velocity profiles are presented. It is demonstrated that, properly processed, these data lead to a conclusion opposite from...

We demonstrate that the processing of the experimental data for the average velocity profiles obtained by J. M. Österlund (www.mesh.kth.se/~jens/zpg/) presented in [1] was incorrect. Properly processed these data lead to the opposite conclusion: they confirm the Reynolds-number-dependent scaling law and disprove the conclusion that the flow...

Optimal prediction methods compensate for a lack of resolution in the numerical solution of time-dependent differential equations through the use of prior statistical information. We present a new derivation of the basic methodology, show that field-theoretical perturbation theory provides a useful device for dealing with quasi-linear problems, and...

In recent papers Benzi et al. presented experimental data and an analysis to the effect that the well-known "2/3" Kolmogorov-Obukhov exponent in the inertial range of local structure in turbulence should be corrected by a small but definitely non-zero amount. We reexamine the very same data and show that this conclusion is unjustified. The data are...

We present a status report on a discrete approach to the near-equilibrium statistical theory of three-dimensional turbulence, which generalizes earlier work by no longer requiring that the vorticity field be a union of discrete vortex filaments. The idea is to take a special limit of a dense lattice vortex system, in a way that brings out a con...


Turbulence at very large Reynolds numbers is generally considered to be one of the happier provinces of the turbulence realm, as it is widely thought that two of its results are well-established and have a chance to enter, basically untouched, into a future complete theory; these results are the von Kármán-Prandtl logarithmic law in...

The renormalization group (RNG) analysis of the Kosterlitz–Thouless (KT) phase transition is a basic paradigm in statistical physics and a well-known success of the RNG approach. It was recently shown that the derivation of the RNG parameter flow for the KT problem rests on the assumption of one-sided polarization, which is implausible and cannot b...

We analyze concepts of scaling laws for turbulence at large Reynolds numbers (Re). Instead of classical Re-independent logarithmic dependence of the average velocity on governing parameters, we propose a new version of power law and verify it on the basis of recent experimental data. Reasons for mistaking the logarithmic law are explained.

An analysis of the mean velocity profile in the intermediate region of wall-bounded turbulence shows that the well-known von Kármán-Prandtl logarithmic law of the wall must be jettisoned in favor of a power law. An analogous analysis of the local structure of turbulence shows that the Kolmogorov-Obukhov scaling of the second and third structure fun...


Turbulence at very large Reynolds numbers (often called developed turbulence) is widely considered to be one of the happier provinces of the turbulence realm, as it is widely thought that two of its basic results are well-established, and have a chance to enter, basically untouched, into a future complete theory of turbulence. These results are the...

A method is presented for computing the average solution of problems which are too complicated for adequate resolution, but where information about the statistics of the solution is available. The method involves computing average derivatives by interpolation based on linear regression, and an updating of a measure constrained by the available crud...

The classical problem of thermal explosion is modified so that the chemically active gas is not at rest but is flowing in a long cylindrical pipe. Up to a certain section the heat-conducting walls of the pipe are held at low temperature so that the reaction rate is small and there is no heat release; at that section the ambient temperature is incre...
