Article

Uniform Stability of a Particle Approximation of the Optimal Filter Derivative

(Impact Factor: 1.46). 06/2011; 53(3). DOI: 10.1137/140993703
Source: arXiv

ABSTRACT

Sequential Monte Carlo methods, also known as particle methods, are a widely
used set of computational tools for inference in non-linear non-Gaussian
state-space models. In many applications it may be necessary to compute the
sensitivity, or derivative, of the optimal filter with respect to the static
parameters of the state-space model; for instance, in order to obtain maximum
likelihood model parameters of interest, or to compute the optimal controller
in an optimal control problem. In Poyiadjis et al. [2011] an original particle
algorithm to compute the filter derivative was proposed and it was shown using
numerical examples that the particle estimate was numerically stable in the
sense that it did not deteriorate over time. In this paper we substantiate this
claim with a detailed theoretical study. Lp bounds and a central limit theorem
for this particle approximation of the filter derivative are presented. It is
further shown that under mixing conditions these Lp bounds and the asymptotic
variance characterized by the central limit theorem are uniformly bounded with
respect to the time index. We demonstrate the performance predicted by theory
with several numerical examples. We also use the particle approximation of the
filter derivative to perform online maximum likelihood parameter estimation for
a stochastic volatility model.
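To make the object of study concrete, here is a minimal sketch of the naive path-space particle estimate of the filter derivative (the score), via Fisher's identity: each particle carries the running sum of transition log-density gradients along its ancestral line. This is an illustrative toy on an assumed linear-Gaussian model, not the algorithm of Poyiadjis et al. [2011] nor the paper's stochastic volatility example; all parameter values and variable names are assumptions. This naive estimator is precisely the one whose variance degrades over time, which motivates the algorithm analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy linear-Gaussian state-space model (for illustration only):
#   x_t = phi * x_{t-1} + sigma * v_t,   y_t = x_t + tau * w_t
phi, sigma, tau = 0.8, 1.0, 0.5
T, N = 50, 500

# Simulate a data record from the model.
x, ys = 0.0, []
for _ in range(T):
    x = phi * x + sigma * rng.standard_normal()
    ys.append(x + tau * rng.standard_normal())

# Bootstrap particle filter with a path-space score accumulator:
# alpha[i] holds the sum of d/dphi log f(x_t | x_{t-1}) along the
# ancestral path of particle i (Fisher's identity).
particles = sigma * rng.standard_normal(N)
alpha = np.zeros(N)
for y in ys:
    prev = particles
    particles = phi * prev + sigma * rng.standard_normal(N)
    # Gradient of the Gaussian transition log-density w.r.t. phi.
    alpha = alpha + (particles - phi * prev) * prev / sigma**2
    logw = -0.5 * ((y - particles) / tau) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)  # multinomial resampling
    particles, alpha = particles[idx], alpha[idx]

# Particle estimate of the score d/dphi log p(y_{1:T}) at the true phi.
score_estimate = alpha.mean()
```

Resampling duplicates ancestral paths, so for large T the accumulators `alpha` share a common history and the estimate's variance grows with time; the quadratic-cost algorithm studied in the paper avoids this degeneracy.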

Available from: Pierre Del Moral
• "where {V_t}_{t ∈ N*} and {U_t}_{t ∈ N} are independent sequences of mutually independent standard Gaussian noise variables. Parameters to be estimated were θ = (φ, σ², β²), and we compared the performance of our PaRIS-based RML to that of the particle RML proposed in [5]. To get a fair comparison of the algorithms we set the number of particles used in each algorithm such that both algorithms ran in the same computational time. "
Article: Efficient parameter inference in general hidden Markov models using the filter derivatives
ABSTRACT: Estimating online the parameters of general state-space hidden Markov models is a topic of importance in many scientific and engineering disciplines. In this paper we present an online parameter estimation algorithm obtained by casting our recently proposed particle-based, rapid incremental smoother (PaRIS) into the framework of recursive maximum likelihood estimation for general hidden Markov models. Previous such particle implementations suffer from either quadratic complexity in the number of particles or from the well-known degeneracy of the genealogical particle paths. By using the computationally efficient and numerically stable PaRIS algorithm for estimating the needed prediction filter derivatives, we obtain a fast algorithm with a computational complexity that grows only linearly with the number of particles. The efficiency and stability of the proposed algorithm are illustrated in a simulation study.
Full-text · Article · Jan 2016
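In recursive maximum likelihood (RML) schemes like the one described above, the particle estimate of the filter derivative feeds a stochastic gradient ascent on the static parameter, one observation at a time. A minimal sketch of that outer update, assuming a user-supplied `grad_step` callable (a hypothetical name) that returns a particle estimate of the gradient of the one-step predictive log-likelihood:

```python
def rml_update(theta, y, t, grad_step, a=0.01, decay=0.6):
    """One recursive-ML step: theta <- theta + gamma_t * gradient estimate.

    `grad_step(theta, y)` is a hypothetical callable returning a particle
    estimate of d/dtheta log p(y_t | y_{1:t-1}). The step sizes gamma_t
    are chosen so that sum(gamma_t) diverges while sum(gamma_t^2) is
    finite (here decay in (0.5, 1]), the standard stochastic
    approximation conditions.
    """
    gamma = a / (t + 1) ** decay
    return theta + gamma * grad_step(theta, y)
```

For example, with a toy gradient `lambda th, y: y - th`, repeated calls move `theta` toward the running data level; in practice `grad_step` would wrap a particle filter-derivative recursion such as PaRIS.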
• "As a result Poyiadjis et al. (2011) introduce an alternative algorithm whose computational cost is quadratic in the number of particles, but which has better Monte Carlo properties. Del Moral et al. (2011) show that this alternative approach, under standard mixing assumptions, produces estimates of the score and observed information whose asymptotic variance only increases linearly with time. Details of this algorithm are omitted for brevity, for further details see Poyiadjis et al. (2011). "
Article: Particle Approximations of the Score and Observed Information Matrix for Parameter Estimation in State Space Models With Linear Computational Cost
ABSTRACT: Poyiadjis et al. (2011) show how particle methods can be used to estimate both the score and the observed information matrix for state-space models. These methods either suffer from a computational cost that is quadratic in the number of particles, or produce estimates whose variance increases quadratically with the amount of data. This paper introduces an alternative approach for estimating the score and information matrix, which has a computational cost that is linear in the number of particles. The method is derived using a combination of kernel density estimation to avoid the particle degeneracy that causes the quadratically increasing variance, and Rao-Blackwellisation. Crucially, we show the method is robust to the choice of bandwidth within the kernel density estimation, as it has good asymptotic properties regardless of this choice. Our estimates of the score and observed information matrix can be used within both online and batch procedures for estimating parameters for state-space models. Empirical results show improved parameter estimates compared to existing methods at a significantly reduced computational cost.
Full-text · Article · Jun 2013 · Journal of Computational and Graphical Statistics
• "The above Theorem again provides some explicit guarantees when using an ABC approximation along with SMC-based numerical methods. For example, if one considers approximating gradients in an ABC context (see [31]), then from the results of [14], one expects the variance of the SMC estimates to increase only linearly in time. Again, as time increases the ABC bias does not necessarily dominate the variance that would be present even if g_θ is evaluated (i.e. one uses SMC on the true model). "
Article: Static Parameter Estimation for ABC Approximations of Hidden Markov Models
ABSTRACT: In this article we focus on Maximum Likelihood estimation (MLE) for the static parameters of hidden Markov models (HMMs). We will consider the case where one cannot or does not want to compute the conditional likelihood density of the observation given the hidden state because of increased computational complexity or analytical intractability. Instead we will assume that one may obtain samples from this conditional likelihood and hence use approximate Bayesian computation (ABC) approximations of the original HMM. ABC approximations are biased, but the bias can be controlled to arbitrary precision via a parameter ε > 0; the bias typically goes to zero as ε → 0. We first establish that the bias in the log-likelihood and gradient of the log-likelihood of the ABC approximation, for a fixed batch of data, is no worse than O(nε), n being the number of data; hence, for computational reasons, one might expect reasonable parameter estimates using such an ABC approximation. Turning to the computational problem of estimating θ, we propose, using the ABC-sequential Monte Carlo (SMC) algorithm in Jasra et al. (2012), an approach based upon simultaneous perturbation stochastic approximation (SPSA). Our method is investigated on two numerical examples.
Full-text · Article · Oct 2012
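The SPSA component mentioned in the abstract above can be sketched independently of the ABC machinery: each iteration perturbs all coordinates of θ simultaneously and uses just two (possibly noisy) loss evaluations to form a gradient estimate, regardless of dimension. The quadratic test loss, step-size constants, and function names below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def spsa_gradient(loss, theta, c, rng):
    """Simultaneous-perturbation gradient estimate of `loss` at `theta`.

    Uses a Rademacher (+/-1) perturbation and two loss evaluations;
    `loss` may be a noisy estimate (e.g. of a negative ABC
    log-likelihood), which SPSA tolerates on average.
    """
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    diff = loss(theta + c * delta) - loss(theta - c * delta)
    return diff / (2.0 * c * delta)

rng = np.random.default_rng(1)
theta = np.array([2.0, -1.0])
for k in range(200):
    # Standard decaying gains (exponents 0.602 and 0.101 are the
    # conventional SPSA defaults, assumed here).
    a_k = 0.1 / (k + 1) ** 0.602
    c_k = 0.1 / (k + 1) ** 0.101
    g = spsa_gradient(lambda th: np.sum(th ** 2), theta, c_k, rng)
    theta = theta - a_k * g  # descend the (noisy) loss
```

On this quadratic toy loss the iterates contract toward the minimiser at the origin; in the ABC setting each `loss` call would itself be an SMC estimate.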