Article

A Central Limit Theorem for Fleming-Viot Particle Systems


Abstract

The distribution of a Markov process with killing, conditioned to be still alive at a given time, can be approximated by a Fleming-Viot type particle system. In such a system, each particle is simulated independently according to the law of the underlying Markov process, and branches onto another particle at each killing time. The consistency of this method in the large population limit was the subject of several recent articles. In the present paper, we go one step forward and prove a central limit theorem for the law of the Fleming-Viot particle system at a given time under two conditions: a "soft killing" assumption and a boundedness condition involving the "carré du champ" operator of the underlying Markov process.
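The branching mechanism described in the abstract is easy to prototype. Below is a minimal, hypothetical Python sketch (not the authors' implementation): N particles follow an Euler-discretized Brownian motion, soft killing occurs at a state-dependent rate c(x) (here c(x) = x², an arbitrary illustrative choice), and a killed particle instantly branches onto a uniformly chosen survivor. The survival probability is estimated by a commonly used formula of the form (1 - 1/N)^K, with K the number of branchings.

```python
import random, math

def fleming_viot(n_particles=200, T=1.0, dt=1e-3, kill_rate=lambda x: x * x, seed=0):
    """Minimal Fleming-Viot sketch with soft killing: N independent
    Brownian particles; a killed particle branches onto a uniformly
    chosen survivor.  Returns (final positions, estimated survival
    probability, number of branchings)."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    branchings = 0
    for _ in range(int(T / dt)):
        for i in range(n_particles):
            # Euler step of the underlying Brownian motion.
            xs[i] += math.sqrt(dt) * rng.gauss(0.0, 1.0)
            # Soft killing: kill with probability c(x) dt.
            if rng.random() < kill_rate(xs[i]) * dt:
                # Branch onto another particle chosen uniformly at random.
                j = rng.randrange(n_particles - 1)
                if j >= i:
                    j += 1
                xs[i] = xs[j]
                branchings += 1
    p_T = (1.0 - 1.0 / n_particles) ** branchings
    return xs, p_T, branchings
```

The empirical measure of the final positions approximates the law of the process conditioned to survive up to time T; the CLT of the paper concerns the fluctuations of such approximations as N grows.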


... Under very general assumptions, Villemonais [12] proves among other things that $p_T^N$ (or equivalently $e^{-N_T}$) converges in probability to $p_T$ when $N$ goes to infinity, and that $\eta_T^N$ converges in law to $\eta_T$. In [3], we went one step further and established central limit results for $\eta_T^N$ and $p_T^N$. For this, we had to make two specific assumptions. ...
... The purpose of this paper is to generalize the central limit results given in [3] for $\eta_T^N$ and $p_T^N$ under arguably minimal assumptions. In particular, it includes the case of elliptic diffusive processes killed when hitting the boundary of a given domain. ...
... In particular, it includes the case of elliptic diffusive processes killed when hitting the boundary of a given domain. This latter case is usually called "hard killing" in the literature and this kind of situation was not covered by [3]. ...
Article
Fleming-Viot type particle systems represent a classical way to approximate the distribution of a Markov process with killing, given that it is still alive at a final deterministic time. In this context, each particle evolves independently according to the law of the underlying Markov process until its killing, and then branches instantaneously on another randomly chosen particle. While the consistency of this algorithm in the large population limit has been recently studied in several articles, our purpose here is to prove Central Limit Theorems under very general assumptions. For this, we only suppose that the particle system does not explode in finite time, and that the jump and killing times have atomless distributions. In particular, this includes the case of elliptic diffusions with hard killing.
... In addition, it has been proved, in different contexts, see [81,82], that $\hat q_N$ is a consistent estimator of $q$: the convergence $\hat q_N \to q$ as $N \to \infty$ holds true, in probability. More precisely, it is proved in [82] that the estimator $\hat q_N$ satisfies a Central Limit Theorem, $\sqrt{N}\,(\hat q_N - q) \to \mathcal{N}(0, \sigma^2(\xi, q))$ as $N \to \infty$, ...
Preprint
The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events. So far those algorithms often compute probabilities rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, gaining several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein-Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
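The block-maximum idea in the abstract above can be sketched in a few lines. The following hypothetical Python snippet (an illustration, not the authors' code) splits a trajectory into blocks, counts the fraction q of blocks whose maximum exceeds the threshold, and inverts the Poisson approximation r = -Δ / log(1 - q), which stays accurate when the return time is of the order of the block duration Δ, unlike the naive r = Δ / q.

```python
import math

def return_time_block_maxima(traj, block_len, threshold, dt=1.0):
    """Block-maximum estimator for the return time of `threshold`:
    q = fraction of blocks whose maximum exceeds the threshold,
    r = -Delta / log(1 - q) with Delta the block duration."""
    n_blocks = len(traj) // block_len
    maxima = [max(traj[k * block_len:(k + 1) * block_len]) for k in range(n_blocks)]
    q = sum(1 for m in maxima if m > threshold) / n_blocks
    if q <= 0.0:
        return float("inf")  # threshold never exceeded: no estimate possible
    if q >= 1.0:
        return 0.0           # exceeded in every block: below the block resolution
    delta = block_len * dt
    return -delta / math.log(1.0 - q)
```

For a trajectory where the event occurs, say, once every other block, the estimator returns Δ / log 2 rather than the naive 2Δ, reflecting the Poisson correction for frequent exceedances.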
... For processes in a general space, many results are available. Finite time propagation of chaos is addressed in [10,25,37,17], with central limit theorems as N → ∞ in [11,20]. Then, uniform in time propagation of chaos and long-time convergence are established in [18,16,34]. ...
... Second, to bound the probability that $U(X_T^x) \geq U_3$, we consider two possible events: either the process stays during the whole interval $[0,T]$ above the energy level $U_1$ (which is unlikely because it would mean it stays in an unstable region where $\nabla U$ is non-zero) or the process goes down to $U_1$ but then climbs back in a time less than $T$ to the level $U_3$ (which is also unlikely). More precisely, for all functions $\varphi : [0,T] \to D$ such that $\varphi_t \notin D \setminus B_1$ for all $t \in [0,T]$, using (20), we have that ...
Preprint
Full-text available
We study the long-time convergence of a Fleming-Viot process, in the case where the underlying process is a metastable diffusion killed when it reaches some level set. Through a coupling argument, we establish the long-time convergence of the Fleming-Viot process toward some stationary measure at an exponential rate independent of N, the size of the system, as well as uniform in time propagation of chaos estimates.
... Additionally, the choice of the score function can be shown to have an important impact on the statistical error, see [17,141]. Moreover, it has been proved, in different contexts, see [19,28], that $\hat q_{N_c}$ is a consistent estimator of $q$: the convergence $\hat q_{N_c} \to q$ as $N_c \to \infty$ holds true, in probability. More precisely, it is proved in [28] that the estimator $\hat q_{N_c}$ satisfies a Central Limit Theorem, ...
Thesis
This thesis discusses the numerical simulation of extreme fluctuations of the drag force acting on an object immersed in a turbulent medium. Because such fluctuations are rare events, they are particularly difficult to investigate by means of direct sampling. Indeed, such an approach requires simulating the dynamics over extremely long durations. In this work an alternative route is introduced, based on rare event algorithms. The underlying idea of such algorithms is to modify the sampling statistics so as to favour rare trajectories of the dynamical system of interest. These techniques recently led to impressive results for relatively simple dynamics. However, it is not yet clear whether such algorithms are useful for complex deterministic dynamics, such as turbulent flows. This thesis focuses on the study of both the dynamics and statistics of extreme fluctuations of the drag experienced by a square cylinder mounted in a two-dimensional channel flow. This simple framework allows for very long simulations of the dynamics, thus leading to the sampling of a large number of events with an amplitude large enough that they can be considered extreme. Subsequently, the application of two different rare event algorithms is presented and discussed. In the first case, a drastic reduction of the computational cost required to sample configurations resulting in extreme fluctuations is achieved. Furthermore, several difficulties related to the flow dynamics are highlighted, paving the way to novel approaches specifically designed for turbulent flows.
... In a much more general framework, Cérou, Delyon, Guyader and Rousset [9,10] recently established a Central Limit Theorem for $\eta_T^n$, $T \in [0, +\infty)$. In the particular case of Markov chains with a finite space, and assuming for the sake of simplicity that in the Fleming-Viot particle system the particles are initially iid according to $\pi$, their asymptotic variance for $\eta_T^n$ reads ...
... Apart from the use of Proposition 2.3 made in the proof of our tightness result, we emphasise that our arguments are entirely static, in the sense that they merely involve estimates on the law of $\eta_\infty^n$, which stem from elementary manipulations of the infinitesimal generator $L^n$. At the technical level, we thereby avoid resorting to graphical constructions of the process [3,19], coupling techniques [12,11] or martingale arguments [9,10]. ...
Preprint
We consider the Fleming-Viot particle system associated with a continuous-time Markov chain in a finite space. Assuming irreducibility, it is known that the particle system possesses a unique stationary distribution, under which its empirical measure converges to the quasistationary distribution of the Markov chain. We complement this Law of Large Numbers with a Central Limit Theorem. Our proof essentially relies on elementary computations on the infinitesimal generator of the Fleming-Viot particle system, and involves the so-called $\pi$-return process in the expression of the asymptotic variance. Our work can be seen as an infinite-time version, in the setting of finite space Markov chains, of recent results by Cérou, Delyon, Guyader and Rousset [arXiv:1611.00515, arXiv:1709.06771].
... In addition, it has been proved, in different contexts, see [71,72], that $\hat q_N$ is a consistent estimator of $q$: the convergence $\hat q_N \to q$ as $N \to \infty$ holds true, in probability. More precisely, it is proved in [72] that the estimator $\hat q_N$ satisfies a Central Limit Theorem, ...
Article
Full-text available
The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events. So far those algorithms often compute probabilities rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, gaining several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein-Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
... Proof. See Lemma A.4 in [8]. ...
Preprint
Diffusion processes with small noise conditioned to reach a target set are considered. The AMS algorithm is a Monte Carlo method that is used to sample such rare events by iteratively simulating clones of the process and selecting trajectories that have reached the highest value of a so-called importance function. In this paper, the large sample size relative variance of the AMS small probability estimator is considered. The main result is a large deviations logarithmic equivalent of the latter in the small noise asymptotics, which is rigorously derived. It is given as a maximisation problem explicit in terms of the quasi-potential cost function associated with the underlying small noise large deviations. Necessary and sufficient geometric conditions ensuring the vanishing of the obtained quantity ('weak' asymptotic efficiency) are provided. Interpretations and practical consequences are discussed.
... [18], [1], [2], [21]) and also refined limit theorems such as theorems of central limit type have been obtained. Interesting recent works include [13], [24], [14], [12], [22], [4] and [26]. ...
... There has been substantial recent progress in analyzing the convergence rates of these algorithms. Cérou et al. [13] proved a Central Limit Theorem (CLT) for the law of Fleming-Viot particle systems at a given fixed time under very general assumptions. Lelievre et al. [28] obtained an infinite-time version in the setting of finite space Markov chains, extending the ideas of Del Moral and Miclo [15]. ...
Article
Full-text available
We propose two numerical schemes for approximating quasi-stationary distributions (QSD) of finite state Markov chains with absorbing states. Both schemes are described in terms of interacting chains where the interaction is given in terms of the total time occupation measure of all particles in the system and has the impact of reinforcing transitions, in an appropriate fashion, to states where the collection of particles has spent more time. The schemes can be viewed as combining the key features of the two basic simulation-based methods for approximating QSD originating from the works of Fleming and Viot (1979) and Aldous, Flannery and Palacios (1998), respectively. The key difference between the two schemes studied here is that in the first method one starts with a(n) particles at time 0 and the number of particles stays constant over time, whereas in the second method we start with one particle and at most one particle is added at each time instant in such a manner that there are a(n) particles at time n. We prove almost sure convergence to the unique QSD and establish Central Limit Theorems for the two schemes under the key assumption that a(n)=o(n). Exploratory numerical results are presented to illustrate the performance.
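To make the occupation-measure idea concrete, here is a hypothetical Python sketch in the spirit of the Aldous, Flannery and Palacios (1998) approach referenced above (an illustration, not the authors' schemes): a single chain runs until absorption, then restarts from a state drawn from its own past occupation measure; the normalized occupation counts over non-absorbing states approximate the QSD.

```python
import random

def qsd_occupation_scheme(P, absorbing, n_steps=20000, start=0, seed=0):
    """Self-interacting restart scheme: run one chain with transition
    matrix P (list of rows); on entering an absorbing state, restart from
    a state sampled from the chain's own occupation measure.  Returns the
    normalized occupation frequencies over non-absorbing states."""
    rng = random.Random(seed)
    counts = {s: 0 for s in range(len(P)) if s not in absorbing}
    x = start
    counts[x] += 1
    for _ in range(n_steps):
        # one step of the chain
        r, acc = rng.random(), 0.0
        for y, p in enumerate(P[x]):
            acc += p
            if r < acc:
                x = y
                break
        if x in absorbing:
            # restart from the occupation measure of the past trajectory
            r = rng.random() * sum(counts.values())
            for s, c in counts.items():
                r -= c
                if r < 0:
                    x = s
                    break
        counts[x] += 1
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}
```

For a symmetric two-state sub-chain with a common absorption rate, the QSD is uniform, and the occupation frequencies should approach (1/2, 1/2).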
... Fewer results address continuous-time dynamics, dating back to [11] in the filtering context, with a Feynman-Kac-based treatment provided by [16] and references therein; a survey of the filtering literature is provided by [3,Chapter 9]. In the current context, particularly relevant recent works include [8,15,17,22,50]. This literature generally considers diffusive dynamics and relies upon approximative time-discretizations of those dynamics. ...
Article
Full-text available
Large deviations for additive path functionals of stochastic processes have attracted significant research interest, in particular in the context of stochastic particle systems and statistical physics. Efficient numerical ‘cloning’ algorithms have been developed to estimate the scaled cumulant generating function, based on importance sampling via cloning of rare event trajectories. So far, attempts to study the convergence properties of these algorithms in continuous time have led to only partial results for particular cases. Adapting previous results from the literature of particle filters and sequential Monte Carlo methods, we establish a first comprehensive and fully rigorous approach to bound systematic and random errors of cloning algorithms in continuous time. To this end we develop a method to compare different algorithms for particular classes of observables, based on the martingale characterization of stochastic processes. Our results apply to a large class of jump processes on compact state space, and do not involve any time discretization in contrast to previous approaches. This provides a robust and rigorous framework that can also be used to evaluate and improve the efficiency of algorithms.
... The study of this system when the underlying Markov process X is a continuous time Markov chain in a countable state space has been initiated in [12] and followed by [1], [2], [16], [3] and [10]. We also refer the reader to [15], where general considerations on the link between the study of such systems and front propagation problems are presented, and to [7,11] where CLTs for this Fleming-Viot type process have been proved. ...
Preprint
Full-text available
We prove under mild conditions that the Fleming-Viot process selects the minimal quasi-stationary distribution for Markov processes with soft killing on non-compact state spaces. Our results are applied to multi-dimensional birth and death processes, continuous time Galton-Watson processes and diffusion processes with soft killing.
Preprint
Higher order fluctuation expansions for stochastic heat equations (SHE) with nonlinear, non-conservative and conservative noise are obtained. These Edgeworth-type expansions describe the asymptotic behavior of solutions in suitable joint scaling regimes of small noise intensity and diverging singularity. The results include both the case of the SHE with regular and irregular diffusion coefficients. In particular, this includes the correlated Dawson-Watanabe and Dean-Kawasaki SPDEs, as well as SPDEs corresponding to the Fleming-Viot and symmetric simple exclusion processes.
Article
This article studies the limit of the empirical distribution induced by a mutation-selection multi-allelic Moran model. Our results include a uniform in time bound for the propagation of chaos in $\mathbb{L}^p$ of order $1/\sqrt{N}$, and the proof of the asymptotic normality with zero mean and explicit variance, when the number of individuals tends towards infinity, for the approximation error between the empirical distribution and its limit. Additionally, we explore the interpretation of this Moran model as a particle process whose empirical probability measure approximates a quasi-stationary distribution, in the same spirit as the Fleming-Viot particle systems.
Thesis
The main goal of this thesis is to study the evolution of a multi-allelic Moran model, which is a continuous-time discrete state Markov process, inspired by biological applications. We study, among many other aspects, the relation between the Moran process, understood as an interacting particle system, and the theory of quasi-stationary distributions. More precisely, we prove the existence of a propagation of chaos phenomenon when the population size is large, and we study the quantitative control for the long time convergence to stationarity by spectral arguments. The main results are divided into three chapters. In the first chapter we show that the empirical probability measure induced by the particle system converges, when the number of particles goes to infinity, to the law of an absorbing Markov process conditioned to non-absorption. Furthermore, we establish a control on this convergence, by proving a uniform in time propagation of chaos. We also prove the asymptotic normality of the bias and we provide an explicit expression for the asymptotic variance, which is later used to define another particle system with smaller quadratic error. In the second chapter, we consider a simpler model where the state space is finite and the killing rate is uniform. In this context we find an explicit expression for the spectrum of the particle system generator in terms of the spectrum of the mutation rate matrix. Moreover, we study the ergodicity of the process and, for a particular mutation scheme, which is the parent independent mutation, we are able to prove the existence of cutoff phenomena in the total variation and chi-square distances. The third chapter is devoted to the study of a particular case, where the mutation process is driven by an asymmetric random walk on the cycle graph. We show that this model has a remarkable exact solvability, despite the fact that it is non-reversible with a non-explicit invariant distribution.
Article
Full-text available
We establish the convergences (with respect to the simulation time $t$, the number of particles $N$, and the timestep $\gamma$) of a Moran/Fleming-Viot type particle scheme toward the quasi-stationary distribution of a diffusion on the $d$-dimensional torus, killed at a smooth rate. In these conditions, quantitative bounds are obtained that, for each parameter ($t\rightarrow \infty$, $N\rightarrow \infty$ or $\gamma\rightarrow 0$), are independent from the two others.
Preprint
The goal of this article is to study the limit of the empirical distribution induced by a multi-allelic Moran model, whose dynamic is given by a continuous-time irreducible Markov chain. Throughout the paper, the mutation rate driving the mutation is assumed irreducible and the selection rates are assumed uniformly bounded. The paper is divided into two parts. The first one deals with processes with general selection rates. For this case we are able to prove the propagation of chaos in $\mathbb{L}^p$ over the compacts with speed of convergence of order $1/\sqrt{N}$. Further on, we will consider a specific type of selection that we will call "additive selection". Essentially, we assume that the selection rate can be decomposed as the sum of three elements: a term depending on the allelic type of the parent (which can be understood as selection at death), another term depending on the allelic type of the descendant (which can be understood as selection at birth) and a third term which is symmetric. Under this setting, our results include a uniform in time bound for the propagation of chaos in $\mathbb{L}^p$ of order $1/\sqrt{N}$, and an asymptotic expression for the quadratic error between the empirical distribution and its limit when both the time and the number of individuals tend towards infinity. Additionally, we explore the interpretation of the Moran model with additive selection as a particle process whose empirical distribution approximates a quasi-stationary distribution, in the same spirit as the Fleming-Viot particle systems. We then address the problem of minimising the asymptotic quadratic error, when the time and the number of particles go to infinity.
Preprint
We propose two numerical schemes for approximating quasi-stationary distributions (QSD) of finite state Markov chains with absorbing states. Both schemes are described in terms of certain interacting chains in which the interaction is given in terms of the total time occupation measure of all particles in the system and has the impact of reinforcing transitions, in an appropriate fashion, to states where the collection of particles has spent more time. The schemes can be viewed as combining the key features of the two basic simulation-based methods for approximating QSD originating from the works of Fleming and Viot (1979) and Aldous, Flannery and Palacios (1998), respectively. The key difference between the two schemes studied here is that in the first method one starts with a(n) particles at time 0 and the number of particles stays constant over time, whereas in the second method we start with one particle and at most one particle is added at each time instant in such a manner that there are a(n) particles at time n. We prove almost sure convergence to the unique QSD and establish Central Limit Theorems for the two schemes under the key assumption that a(n)=o(n). When $a(n)\sim n$, the fluctuation behavior is expected to be non-standard. Some exploratory numerical results are presented to illustrate the performance of the two approximation schemes.
Article
Full-text available
We study the existence and asymptotic properties of a conservative branching particle system driven by a diffusion with smooth coefficients for which birth and death are triggered by contact with a set. Sufficient conditions for the process to be non-explosive are given. In the Brownian motion case the domain of evolution can be non-smooth, including Lipschitz, with integrable Martin kernel. The results are valid for an arbitrary number of particles and non-uniform redistribution after branching. Additionally, with probability one, it is shown that only one ancestry line survives. In special cases, the evolution of the surviving particle is studied and for a two particle system on a half line we derive explicitly the transition function of a chain representing the position at successive branching times. Keywords: Fleming-Viot branching; Immortal particle; Martin kernel; Doeblin condition; Jump diffusion process
Article
Full-text available
Let $X$ be a random vector with distribution $\mu$ on $\mathbb{R}^d$ and $\Phi$ be a mapping from $\mathbb{R}^d$ to $\mathbb{R}$. That mapping acts as a black box, e.g., the result from some computer experiments for which no analytical expression is available. This paper presents an efficient algorithm to estimate a tail probability given a quantile or a quantile given a tail probability. The algorithm improves upon existing multilevel splitting methods and can be analyzed using Poisson process tools that lead to exact description of the distribution of the estimated probabilities and quantiles. The performance of the algorithm is demonstrated in a problem related to digital watermarking. Keywords: Monte Carlo simulation; Rare event; Metropolis-Hastings; Watermarking
Chapter
Full-text available
This paper focuses on interacting particle systems methods for solving numerically a class of Feynman-Kac formulae arising in the study of certain parabolic differential equations, physics, biology, evolutionary computing, nonlinear filtering and elsewhere. We have tried to give an “exposé” of the mathematical theory that is useful for analyzing the convergence of such genetic-type and particle approximating models including law of large numbers, large deviations principles, fluctuations and empirical process theory as well as semigroup techniques and limit theorems for processes. In addition, we investigate the delicate and probably the most important problem of the long time behavior of such interacting measure valued processes. We will show how to link this problem with the asymptotic stability of the corresponding limiting process in order to derive useful uniform convergence results with respect to the time parameter. Several variations including branching particle models with random population size will also be presented. In the last part of this work we apply these results to continuous time and discrete time filtering problems.
Article
Full-text available
We consider a strong Markov process with killing and prove an approximation method for the distribution of the process conditioned not to be killed when it is observed. The method is based on a Fleming-Viot type particle system with rebirths, whose particles evolve as independent copies of the original strong Markov process and jump onto each other instead of being killed. Our only assumption is that the number of rebirths of the Fleming-Viot type system does not explode in finite time almost surely and that the survival probability of the original process remains positive in finite time. The approximation method generalizes previous results and comes with a speed of convergence. A criterion for the non-explosion of the number of rebirths is also provided for general systems of time and environment dependent diffusion particles. This includes, but is not limited to, the case of the Fleming-Viot type system of the approximation method. The proof of the non-explosion criterion uses an original non-attainability of (0,0) result for pairs of non-negative semi-martingales with positive jumps.
Article
Full-text available
A method to generate reactive trajectories, namely equilibrium trajectories leaving a metastable state and ending in another one is proposed. The algorithm is based on simulating in parallel many copies of the system, and selecting the replicas which have reached the highest values along a chosen one-dimensional reaction coordinate. This reaction coordinate does not need to precisely describe all the metastabilities of the system for the method to give reliable results. An extension of the algorithm to compute transition times from one metastable state to another one is also presented. We demonstrate the interest of the method on two simple cases: a one-dimensional two-well potential and a two-dimensional potential exhibiting two channels to pass from one metastable state to another one.
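The selection mechanism described above (kill the replica with the lowest maximum of the reaction coordinate, rebranch it from a survivor at the level crossing) can be sketched as follows. This is a hypothetical Python illustration of an adaptive multilevel splitting step on a simple biased random walk, not the authors' algorithm; the dynamics, parameters and the estimator (1 - 1/N)^K are illustrative assumptions.

```python
import random

def simulate_path(x, rng, drift=-0.2, sigma=0.3, lo=0.0, hi=1.0):
    """Run a (hypothetical) biased Gaussian random walk from x until it
    exits [lo, hi]; return the list of positions visited."""
    path = [x]
    while lo < path[-1] < hi:
        path.append(path[-1] + drift + sigma * rng.gauss(0.0, 1.0))
    return path

def ams_probability(n_rep=100, x0=0.1, hi=1.0, seed=0):
    """Adaptive multilevel splitting sketch: repeatedly discard the
    replicas whose maximum level reached is lowest, and rebranch each
    from a surviving replica at the first point where that survivor
    exceeds the discarded level.  Estimator: (1 - 1/n)^K with K kills."""
    rng = random.Random(seed)
    paths = [simulate_path(x0, rng) for _ in range(n_rep)]
    kills = 0
    while True:
        maxes = [max(p) for p in paths]
        z = min(maxes)
        if z >= hi:                      # every replica has reached hi
            return (1.0 - 1.0 / n_rep) ** kills
        losers = [i for i, m in enumerate(maxes) if m == z]
        if len(losers) == n_rep:         # extinction: nothing to clone
            return 0.0
        for i in losers:
            j = rng.choice([k for k in range(n_rep) if k not in losers])
            # copy the survivor up to its first excursion above z,
            # then resimulate independently from that point
            cut = next(t for t, x in enumerate(paths[j]) if x > z)
            paths[i] = paths[j][:cut + 1] + simulate_path(paths[j][cut], rng)[1:]
            kills += 1
```

Here the reaction coordinate is simply the position of the walk; as the abstract notes, a crude choice of coordinate can still give reliable results, at the cost of a larger variance.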
Article
Full-text available
We consider a branching particle model in which particles move inside a Euclidean domain according to the following rules. The particles move as independent Brownian motions until one of them hits the boundary. This particle is killed but another randomly chosen particle branches into two particles, to keep the population size constant. We prove that the particle population does not approach the boundary simultaneously in a finite time in some Lipschitz domains. This is used to prove a limit theorem for the empirical distribution of the particle family.
Article
Full-text available
We analyze and simulate a two-dimensional Brownian multi-type particle system with death and branching (birth) depending on the position of particles of different types. The system is confined in the two-dimensional box, whose boundaries act as the sink of Brownian particles. The branching rate matches the death rate so that the total number of particles is kept constant. In the case of m types of particles in the rectangular box of size a, b and elongated shape a >> b we observe that the stationary distribution of particles corresponds to the m-th Laplacian eigenfunction. For smaller elongations a > b we find a configurational transition to a new limiting distribution. The ratio a/b for which the transition occurs is related to the value of the m-th eigenvalue of the Laplacian with rectangular boundaries. This work was supported in part by the NSF grant DMS 9322689 and KBN and FWPN grants.
Article
Fleming-Viot type particle systems represent a classical way to approximate the distribution of a Markov process with killing, given that it is still alive at a final deterministic time. In this context, each particle evolves independently according to the law of the underlying Markov process until its killing, and then branches instantaneously on another randomly chosen particle. While the consistency of this algorithm in the large population limit has been recently studied in several articles, our purpose here is to prove Central Limit Theorems under very general assumptions. For this, we only suppose that the particle system does not explode in finite time, and that the jump and killing times have atomless distributions. In particular, this includes the case of elliptic diffusions with hard killing.
Article
The additive-increase multiplicative-decrease (AIMD) schemes designed to control congestion in communication networks are investigated from a probabilistic point of view. Functional limit theorems for a general class of Markov processes that describe these algorithms are obtained. The asymptotic behaviour of the corresponding invariant measures is described in terms of the limiting Markov processes. For some special important cases, including TCP congestion avoidance, an important autoregressive property is proved. As a consequence, the explicit expression of the related invariant probabilities is derived. The transient behaviour of these algorithms is also analysed.
Article
A general class of non‐diffusion stochastic models is introduced with a view to providing a framework for studying optimization problems arising in queueing systems, inventory theory, resource allocation and other areas. The corresponding stochastic processes are Markov processes consisting of a mixture of deterministic motion and random jumps. Stochastic calculus for these processes is developed and a complete characterization of the extended generator is given; this is the main technical result of the paper. The relevance of the extended generator concept in applied problems is discussed and some recent results on optimal control of piecewise‐deterministic processes are described.
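A piecewise-deterministic Markov process of the kind described, deterministic motion punctuated by random jumps, can be simulated exactly between jumps. The sketch below uses an illustrative flow dx/dt = −x, a constant jump intensity, and Exp(1) jump sizes; none of these modelling choices come from the paper.

```python
import math
import random

def simulate_pdmp(x0=1.0, t_final=5.0, lam=2.0, rng=None):
    """Sketch of a piecewise-deterministic Markov process: deterministic
    flow dx/dt = -x between jumps; jumps arrive at constant intensity
    `lam` and add an Exp(1)-distributed increment."""
    rng = rng or random.Random()
    t, x = 0.0, x0
    jump_times = []
    while True:
        tau = rng.expovariate(lam)           # waiting time to the next jump
        if t + tau > t_final:
            return x * math.exp(-(t_final - t)), jump_times
        x *= math.exp(-tau)                  # follow the deterministic flow
        x += rng.expovariate(1.0)            # apply the random jump
        t += tau
        jump_times.append(t)
```

Because the flow is solved in closed form, the scheme is exact in distribution, with no time-discretization error, which is a characteristic advantage of PDMP simulation.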
Article
The estimation of rare event probability is a crucial issue in areas such as reliability, telecommunications and aircraft management. In complex systems, analytical study is out of the question and one has to resort to Monte Carlo methods. When the event is really rare, meaning a probability below 10^−9, naive Monte Carlo becomes unreasonable. A widespread alternative is multilevel splitting, but this method requires enough knowledge about the system to decide in advance where to place the levels, which is unfortunately not always possible. In this paper, we propose an adaptive algorithm to cope with this problem: the estimation is asymptotically consistent, costs only slightly more than classical multilevel splitting, and has the same efficiency in terms of asymptotic variance. In the one-dimensional case, we rigorously prove the a.s. convergence and the asymptotic normality of our estimator, with the same variance as algorithms that use fixed crossing levels. Our proofs mainly use tools from the theory of empirical processes, which seems to be quite new in the field of rare events.
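The adaptive idea, letting the data choose the levels, can be sketched on a toy one-dimensional target. The code below estimates p = P(X > q) for a standard Gaussian; the fixed-fraction selection, the order-statistic level, and the autoregressive mutation kernel are illustrative choices, not the authors' algorithm.

```python
import math
import random

def ams_gaussian_tail(q, n=500, k=50, rho=0.9, rng=None):
    """Sketch of adaptive multilevel splitting for p = P(X > q), X ~ N(0,1).
    At each stage the k lowest particles are killed, the level moves to the
    k-th smallest score, killed particles are resampled from survivors, and
    every particle is mutated by a reversible Gaussian move rejected below
    the current level."""
    rng = rng or random.Random()
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    p_hat = 1.0
    while True:
        xs.sort()
        level = xs[k - 1]              # adaptive level: k-th order statistic
        if level >= q:
            break
        p_hat *= (n - k) / n           # survival fraction at this stage
        survivors = xs[k:]
        xs = survivors + [rng.choice(survivors) for _ in range(k)]
        # autoregressive proposal preserving N(0,1); reject below the level
        for i in range(n):
            prop = rho * xs[i] + math.sqrt(1 - rho * rho) * rng.gauss(0.0, 1.0)
            if prop > level:
                xs[i] = prop
    p_hat *= sum(1 for x in xs if x > q) / n
    return p_hat
```

Unlike fixed-level splitting, no prior knowledge of good intermediate thresholds is needed: each stage discards a fixed fraction of the population and the levels emerge from the particles themselves.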
Article
We consider a system of N Brownian particles evolving independently in a domain D. As soon as one particle reaches the boundary it is killed and one of the other particles is chosen uniformly and splits into two independent particles resuming a new cycle of independent motion until the next boundary hit. We prove the hydrodynamic limit for the joint law of the empirical measure process and the average number of visits to the boundary as N approaches infinity.
Article
We consider a system {X_1, ..., X_N} of N particles in a bounded d-dimensional domain D. During periods in which none of the particles X_1, ..., X_N hit the boundary ∂D, the system behaves like N independent d-dimensional Brownian motions. When one of the particles hits the boundary ∂D, it instantaneously jumps to the site of one of the remaining N − 1 particles with probability (N − 1)^−1. For the system {X_1, ..., X_N}, the existence of an invariant measure ν has been demonstrated in Burdzy et al. [Comm Math Phys 214(3):679–703, 2000]. We provide a structural formula for this invariant measure ν in terms of the invariant measure μ of the Markov chain ξ which records the sites to which the process X := (X_1, ..., X_N) jumps after hitting the boundary ∂D^N. In addition, we characterize the asymptotic behavior of the invariant measure μ of ξ when N → ∞. Using the methods of the paper, we provide a rigorous proof of the fact that the stationary empirical measure processes (1/N) Σ_{i=1}^N δ_{X_i} converge weakly as N → ∞ to a deterministic constant motion. This motion is concentrated on the probability measure whose density with respect to the Lebesgue measure is the first eigenfunction of the Dirichlet Laplacian on D. This result can be regarded as a complement to a previous one in Grigorescu and Kang [Stoch Process Appl 110(1):111–143, 2004].
Controlled diffusion processes, volume 14 of Stochastic Modelling and Applied Probability
N.V. Krylov. Springer-Verlag, Berlin, 2009.