Article

Uniform Stability of a Particle Approximation of the Optimal Filter Derivative

SIAM Journal on Control and Optimization, 53(3). DOI: 10.1137/140993703
Source: arXiv (06/2011)

ABSTRACT: Sequential Monte Carlo methods, also known as particle methods, are a widely
used set of computational tools for inference in non-linear non-Gaussian
state-space models. In many applications it may be necessary to compute the
sensitivity, or derivative, of the optimal filter with respect to the static
parameters of the state-space model; for instance, in order to obtain maximum
likelihood model parameters of interest, or to compute the optimal controller
in an optimal control problem. In Poyiadjis et al. [2011] an original particle
algorithm to compute the filter derivative was proposed and it was shown using
numerical examples that the particle estimate was numerically stable in the
sense that it did not deteriorate over time. In this paper we substantiate this
claim with a detailed theoretical study. L_p bounds and a central limit theorem
for this particle approximation of the filter derivative are presented. It is
further shown that, under mixing conditions, these L_p bounds and the asymptotic
variance characterized by the central limit theorem are uniformly bounded with
respect to the time index. We demonstrate the performance predicted by theory
with several numerical examples. We also use the particle approximation of the
filter derivative to perform online maximum likelihood parameter estimation for
a stochastic volatility model.
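
To make the object of study concrete, the sketch below propagates a particle approximation of the filter derivative for a hypothetical scalar linear-Gaussian model and reports the resulting score estimates via Fisher's identity, in the spirit of the quadratic-cost algorithm of Poyiadjis et al. (2011) that the paper analyses. The model, the parameterisation (phi, sigma_v, sigma_w) and the function name are illustrative assumptions, not code from the paper.

    import numpy as np

    # Hypothetical scalar linear-Gaussian model, used only for illustration:
    #   x_t = phi * x_{t-1} + sigma_v * v_t,   y_t = x_t + sigma_w * w_t,
    # so only the transition density depends on the parameter phi.
    def particle_filter_derivative(y, phi, sigma_v, sigma_w, n_particles=200, seed=None):
        """Particle estimates of d/dphi log p(y_{1:t}) via Fisher's identity; a sketch
        in the spirit of the O(N^2) algorithm of Poyiadjis et al. (2011), not the
        paper's own code."""
        rng = np.random.default_rng(seed)
        N = n_particles
        x = rng.normal(0.0, 1.0, size=N)   # phi-independent prior, so initial statistics are zero
        w = np.full(N, 1.0 / N)
        alpha = np.zeros(N)                # alpha_t^i ~ E[d/dphi log p(x_{0:t}, y_{1:t}) | x_t^i, y_{1:t}]
        scores = []
        for y_t in y:
            # Bootstrap step: resample ancestors, move particles through the dynamics.
            anc = rng.choice(N, size=N, p=w)
            x_new = phi * x[anc] + sigma_v * rng.normal(size=N)
            # O(N^2) mixture update of the derivative statistics over all previous particles.
            diff = x_new[:, None] - phi * x[None, :]          # pairs (new particle i, old particle j)
            trans = np.exp(-0.5 * (diff / sigma_v) ** 2)      # unnormalised f(x_t^i | x_{t-1}^j)
            grad_log_f = diff * x[None, :] / sigma_v ** 2     # d/dphi log f(x_t^i | x_{t-1}^j)
            mix = w[None, :] * trans
            mix /= mix.sum(axis=1, keepdims=True)
            alpha = (mix * (alpha[None, :] + grad_log_f)).sum(axis=1)
            # Observation weights; g(y_t | x_t) does not depend on phi in this model.
            log_w = -0.5 * ((y_t - x_new) / sigma_w) ** 2
            w = np.exp(log_w - log_w.max())
            w /= w.sum()
            x = x_new
            scores.append(np.dot(w, alpha))                   # score estimate at time t
        return np.array(scores)

Calling particle_filter_derivative(y, phi=0.8, sigma_v=1.0, sigma_w=0.5) on a simulated observation sequence y returns the running score estimates for t = 1, ..., T.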

    • "As a result Poyiadjis et al. (2011) introduce an alternative algorithm whose computational cost is quadratic in the number of particles, but which has better Monte Carlo properties. Del Moral et al. (2011) show that this alternative approach, under standard mixing assumptions, produces estimates of the score and observed information whose asymptotic variance only increases linearly with time. Details of this algorithm are omitted for brevity, for further details see Poyiadjis et al. (2011). "
    ABSTRACT: Poyiadjis et al. (2011) show how particle methods can be used to estimate both the score and the observed information matrix for state-space models. These methods either suffer from a computational cost that is quadratic in the number of particles, or produce estimates whose variance increases quadratically with the amount of data. This paper introduces an alternative approach for estimating the score and information matrix, which has a computational cost that is linear in the number of particles. The method is derived using a combination of kernel density estimation to avoid the particle degeneracy that causes the quadratically increasing variance, and Rao-Blackwellisation. Crucially, we show the method is robust to the choice of bandwidth within the kernel density estimation, as it has good asymptotic properties regardless of this choice. Our estimates of the score and observed information matrix can be used within both online and batch procedures for estimating parameters for state-space models. Empirical results show improved parameter estimates compared to existing methods at a significantly reduced computational cost.
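
As a loose illustration of the linear-cost idea described above, the sketch below has each particle carry a score statistic that is shrunk towards the current weighted mean before propagation, which keeps the per-step cost linear in the number of particles. The shrinkage constant lam stands in for the kernel-density and bandwidth machinery of the abstract; it, the model and the function name are assumptions, not the cited paper's algorithm.

    import numpy as np

    # Same hypothetical linear-Gaussian model as above; `lam` is an assumed shrinkage constant.
    def linear_cost_score(y, phi, sigma_v, sigma_w, n_particles=500, lam=0.95, seed=None):
        """Linear-in-N score recursion: each particle carries a statistic that is shrunk
        towards the weighted mean before propagation (a loose sketch of the idea only)."""
        rng = np.random.default_rng(seed)
        N = n_particles
        x = rng.normal(0.0, 1.0, size=N)
        w = np.full(N, 1.0 / N)
        m = np.zeros(N)                       # per-particle score statistics
        scores = []
        for y_t in y:
            score_prev = np.dot(w, m)
            m_shrunk = lam * m + (1.0 - lam) * score_prev     # shrink towards the current mean
            anc = rng.choice(N, size=N, p=w)
            x_new = phi * x[anc] + sigma_v * rng.normal(size=N)
            grad_log_f = (x_new - phi * x[anc]) * x[anc] / sigma_v ** 2
            m = m_shrunk[anc] + grad_log_f    # inherit the ancestor's shrunk statistic
            log_w = -0.5 * ((y_t - x_new) / sigma_w) ** 2
            w = np.exp(log_w - log_w.max())
            w /= w.sum()
            x = x_new
            scores.append(np.dot(w, m))       # running estimate of d/dphi log p(y_{1:t})
        return np.array(scores)

With lam = 1 this collapses to the naive path-space estimate whose variance grows quadratically; shrinking with lam < 1 is what plays the stabilising role attributed to the kernel step above.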
    • "The above Theorem again provides some explicit guarantees when using an ABC approximation along with SMC-based numerical methods. For example, if one can consider approximating gradients in an ABC context (see [31]), then from the results of [14], one expects that the variance of the SMC estimates to increase only linearly in time. Again, as time increases the ABC bias does not necessarily dominate the variance that would be present even if g θ is evaluated (i.e. one uses SMC on the true model). "
    ABSTRACT: In this article we focus on maximum likelihood estimation (MLE) for the static parameters of hidden Markov models (HMMs). We consider the case where one cannot, or does not want to, compute the conditional likelihood density of the observation given the hidden state because of increased computational complexity or analytical intractability. Instead we assume that one may obtain samples from this conditional likelihood and hence use approximate Bayesian computation (ABC) approximations of the original HMM. ABC approximations are biased, but the bias can be controlled to arbitrary precision via a parameter \epsilon > 0; the bias typically goes to zero as \epsilon \searrow 0. We first establish that the bias in the log-likelihood and in the gradient of the log-likelihood of the ABC approximation, for a fixed batch of data, is no worse than \mathcal{O}(n\epsilon), n being the number of data; hence, for computational reasons, one might expect reasonable parameter estimates using such an ABC approximation. Turning to the computational problem of estimating \theta, we propose, using the ABC-sequential Monte Carlo (SMC) algorithm of Jasra et al. (2012), an approach based upon simultaneous perturbation stochastic approximation (SPSA). Our method is investigated on two numerical examples.
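
A minimal sketch of the SPSA update mentioned in this abstract is given below; loglik_hat is a placeholder for any noisy (e.g. ABC-SMC) log-likelihood estimator, and the gain constants are generic SPSA choices, not values from the cited work.

    import numpy as np

    def spsa_ascent(loglik_hat, theta0, n_iters=200, a=0.1, c=0.1, seed=None):
        """Generic SPSA ascent on a noisy log-likelihood estimator `loglik_hat`
        (a placeholder for, e.g., an ABC-SMC estimate); gain sequences follow the
        usual SPSA guidelines and are assumptions, not values from the cited work."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        for k in range(1, n_iters + 1):
            a_k = a / k ** 0.602                               # step-size sequence
            c_k = c / k ** 0.101                               # perturbation-size sequence
            delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
            # Two noisy evaluations give a simultaneous estimate of all partial derivatives.
            g_hat = (loglik_hat(theta + c_k * delta)
                     - loglik_hat(theta - c_k * delta)) / (2.0 * c_k * delta)
            theta = theta + a_k * g_hat                        # ascent: maximise the log-likelihood
        return theta

For example, spsa_ascent(lambda th: noisy_loglik(y, th), theta0=[0.0, 1.0]) would iterate towards a local maximiser, where noisy_loglik is a hypothetical stand-in for an ABC-SMC log-likelihood estimate.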