Article

Uniform Stability of a Particle Approximation of the Optimal Filter Derivative

06/2011
Source: arXiv

ABSTRACT: Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear, non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood estimates of model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011] an original particle algorithm to compute the filter derivative was proposed, and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. Lp bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that, under mixing conditions, these Lp bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
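
As a concrete, if rough, illustration of the algorithm whose stability is analysed, the following is a minimal sketch of an O(N^2) marginal particle recursion for the filter derivative in the spirit of Poyiadjis et al. [2011], applied to a toy stochastic volatility model with the derivative taken with respect to the autoregression parameter phi. The model, the parameter values, and all variable names are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (illustrative, not the authors' code) of an O(N^2)
# marginal particle recursion for the filter derivative, in the spirit of
# Poyiadjis et al. [2011], for the stochastic volatility model
#   x_t = phi * x_{t-1} + sigma * v_t,   y_t = beta * exp(x_t / 2) * e_t,
# with the score taken with respect to phi.
import numpy as np

rng = np.random.default_rng(0)
phi, sigma, beta = 0.9, 0.3, 0.7   # assumed "true" parameters
N, T = 200, 100                    # particles, time steps

# Simulate synthetic data from the model.
x, y = np.zeros(T), np.zeros(T)
x[0] = rng.normal(0.0, sigma / np.sqrt(1 - phi**2))
y[0] = beta * np.exp(x[0] / 2) * rng.normal()
for t in range(1, T):
    x[t] = phi * x[t - 1] + sigma * rng.normal()
    y[t] = beta * np.exp(x[t] / 2) * rng.normal()

def log_g(yt, xt):
    """Log observation density: N(yt; 0, beta^2 * exp(xt))."""
    var = beta**2 * np.exp(xt)
    return -0.5 * (np.log(2 * np.pi * var) + yt**2 / var)

# Particle set and per-particle score statistics alpha_t^i, approximating
# E[ d/dphi log p(x_{0:t}, y_{0:t}) | y_{0:t}, x_t = x_t^i ].
xp = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), N)
logw = log_g(y[0], xp)
w = np.exp(logw - logw.max()); w /= w.sum()
alpha = np.zeros(N)

for t in range(1, T):
    # Propose from the mixture sum_j w_j f(. | x_{t-1}^j): resample an
    # ancestor, then propagate through the transition kernel.
    idx = rng.choice(N, N, p=w)
    xp_new = phi * xp[idx] + sigma * rng.normal(size=N)
    # O(N^2) marginal update: average the inherited statistic over ALL
    # previous particles (weighted by the transition density) instead of
    # following a single ancestral path.
    diff = xp_new[:, None] - phi * xp[None, :]                  # (N, N)
    log_f = -0.5 * (np.log(2 * np.pi * sigma**2) + diff**2 / sigma**2)
    dlog_f = diff * xp[None, :] / sigma**2                      # d/dphi log f
    lw = log_f + np.log(w)[None, :]
    lw -= lw.max(axis=1, keepdims=True)
    wf = np.exp(lw); wf /= wf.sum(axis=1, keepdims=True)
    alpha = (wf * (alpha[None, :] + dlog_f)).sum(axis=1)
    xp = xp_new
    logw = log_g(y[t], xp)                                      # reweight
    w = np.exp(logw - logw.max()); w /= w.sum()

# Particle estimate of the score d/dphi log p(y_{0:T-1}) at the true phi.
print("estimated score w.r.t. phi:", np.sum(w * alpha))
```

The N-by-N step averages each inherited statistic over the whole previous particle cloud, weighted by the transition density, rather than following a single ancestral path; this marginalisation is what avoids the path degeneracy that makes the naive ancestral-path score estimate deteriorate over time, at the price of a quadratic cost per time step.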

Related publications:
    ABSTRACT: Poyiadjis et al. (2011) show how particle methods can be used to estimate both the score and the observed information matrix for state-space models. These methods either suffer from a computational cost that is quadratic in the number of particles, or produce estimates whose variance increases quadratically with the amount of data. This paper introduces an alternative approach for estimating the score and information matrix, which has a computational cost that is linear in the number of particles. The method is derived using a combination of kernel density estimation to avoid the particle degeneracy that causes the quadratically increasing variance, and Rao-Blackwellisation. Crucially, we show the method is robust to the choice of bandwidth within the kernel density estimation, as it has good asymptotic properties regardless of this choice. Our estimates of the score and observed information matrix can be used within both online and batch procedures for estimating parameters for state-space models. Empirical results show improved parameter estimates compared to existing methods at a significantly reduced computational cost.
    06/2013
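
To make the contrast with the quadratic-cost recursion sketched above concrete, here is a hedged sketch of an O(N) Rao-Blackwellised shrinkage update of the score statistics, in the spirit of the linear-cost method this abstract describes. The shrinkage parameter lam, the toy stochastic volatility model, and all names are illustrative assumptions, not the paper's code.

```python
# Hedged sketch of an O(N) Rao-Blackwellised "shrinkage" update for the
# score statistics, in the spirit of the linear-cost method described
# above; lam, the toy model, and all names are illustrative assumptions.
import numpy as np

def shrinkage_score_step(rng, xp, w, alpha, y_t, phi, sigma, beta, lam=0.95):
    """One filter step: resample, propagate, and update the per-particle
    score statistics alpha by shrinking them toward their weighted mean."""
    N = xp.shape[0]
    idx = rng.choice(N, N, p=w)                      # resample ancestors
    x_anc = xp[idx]
    xp_new = phi * x_anc + sigma * rng.normal(size=N)
    # Increment along the single ancestral path: d/dphi log f(x_t | x_{t-1}).
    dlog_f = (xp_new - phi * x_anc) * x_anc / sigma**2
    # O(N) Rao-Blackwellised shrinkage: mixing the inherited statistic with
    # the weighted mean of all statistics damps the accumulation of
    # ancestral-path noise without the O(N^2) marginalisation.
    alpha_new = lam * alpha[idx] + (1 - lam) * np.sum(w * alpha) + dlog_f
    var = beta**2 * np.exp(xp_new)                   # reweight by g(y_t | x_t)
    logw = -0.5 * (np.log(2 * np.pi * var) + y_t**2 / var)
    w_new = np.exp(logw - logw.max())
    return xp_new, w_new / w_new.sum(), alpha_new

# Toy usage on placeholder observations.
rng = np.random.default_rng(1)
phi, sigma, beta, N = 0.9, 0.3, 0.7, 500
xp = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), N)
w = np.full(N, 1.0 / N)
alpha = np.zeros(N)
for y_t in rng.normal(size=50):                      # placeholder data
    xp, w, alpha = shrinkage_score_step(rng, xp, w, alpha, y_t, phi, sigma, beta)
print("O(N) score estimate:", np.sum(w * alpha))
```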
    ABSTRACT: Sequential Monte Carlo (SMC) methods, also known as particle filters, are simulation-based recursive algorithms for the approximation of the a posteriori probability measures generated by state-space dynamical models. At any given (discrete) time t, a particle filter produces a set of samples over the state space of the system of interest. These samples are referred to as "particles" and can be used to build a discrete approximation of the a posteriori probability distribution of the state, conditional on a sequence of available observations. When new observations are collected, a recursive stochastic procedure makes it possible to update the set of particles to obtain an approximation of the new posterior. In this paper, we address the problem of constructing kernel-based estimates of the filtering probability density function. Kernel methods are the most widely employed techniques for nonparametric density estimation using i.i.d. samples, and it seems natural to investigate their performance when applied to the approximate samples produced by particle filters. Here, we show how to obtain asymptotic convergence results for the particle-kernel approximations of the filtering density and its derivatives. In particular, we find convergence rates for the approximation error that hold uniformly on the state space and guarantee that the error vanishes almost surely (a.s.) as the number of particles in the filter grows. Based on this uniform convergence result, we first show how to build continuous measures that converge a.s. (with known rate) toward the filtering measure and then address a few applications. The latter include maximum a posteriori estimation of the system state using the approximate derivatives of the posterior density and the approximation of functionals of it, e.g., Shannon's entropy.
    11/2011
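
The construction this abstract describes can be illustrated with a short sketch: a Gaussian kernel density estimate of the filtering density, together with its spatial derivative, built from a particle set. The bandwidth rule, the grid, and the placeholder particle set are illustrative assumptions.

```python
# Minimal sketch of a Gaussian kernel density estimate of the filtering
# pdf and its spatial derivative, built from a particle set; the bandwidth
# rule, grid, and placeholder particles are illustrative assumptions.
import numpy as np

def kde_and_derivative(particles, grid, h=None):
    """Kernel estimates of the filtering density and its derivative on
    `grid`, from equally weighted particles, with a Gaussian kernel."""
    n = particles.size
    if h is None:
        h = 1.06 * particles.std() * n ** (-0.2)     # Silverman-style rule
    u = (grid[:, None] - particles[None, :]) / h     # (len(grid), n)
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)     # Gaussian kernel values
    dens = k.mean(axis=1) / h                        # density estimate
    ddens = (-u * k).mean(axis=1) / h**2             # its x-derivative
    return dens, ddens

# Usage: a maximum a posteriori style state estimate is the grid point
# maximising the smoothed density.
rng = np.random.default_rng(2)
particles = rng.normal(0.5, 0.4, 1000)               # placeholder particle set
grid = np.linspace(-2.0, 3.0, 501)
dens, ddens = kde_and_derivative(particles, grid)
print("approximate MAP state:", grid[dens.argmax()])
```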
    ABSTRACT: In this article we focus on maximum likelihood estimation (MLE) for the static parameters of hidden Markov models (HMMs). We consider the case where one cannot, or does not want to, compute the conditional likelihood density of the observation given the hidden state, because of increased computational complexity or analytical intractability. Instead, we assume that one may obtain samples from this conditional likelihood and hence use approximate Bayesian computation (ABC) approximations of the original HMM. ABC approximations are biased, but the bias can be controlled to arbitrary precision via a parameter ε > 0; the bias typically goes to zero as ε decreases to zero. We first establish that the bias in the log-likelihood and in the gradient of the log-likelihood of the ABC approximation, for a fixed batch of data, is no worse than O(nε), where n is the number of data points; hence, for computational reasons, one might expect reasonable parameter estimates using such an ABC approximation. Turning to the computational problem of estimating the static parameters θ, we propose, using the ABC-sequential Monte Carlo (SMC) algorithm of Jasra et al. (2012), an approach based upon simultaneous perturbation stochastic approximation (SPSA). Our method is investigated on two numerical examples.
    10/2012
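
The SPSA ingredient of the proposed approach can be sketched as follows; noisy_loglik stands in for an ABC-SMC log-likelihood estimator, so everything here is an illustrative assumption rather than the paper's implementation.

```python
# Hedged sketch of SPSA for likelihood maximisation; noisy_loglik is a
# placeholder for an ABC-SMC log-likelihood estimator, and the gain
# exponents (0.602, 0.101) are the standard SPSA defaults.
import numpy as np

rng = np.random.default_rng(3)

def noisy_loglik(theta):
    # Placeholder surrogate: smooth, concave, plus simulation noise.
    return -np.sum((theta - np.array([0.5, -1.0])) ** 2) + 0.1 * rng.normal()

theta = np.zeros(2)
for k in range(1, 201):
    a_k = 0.1 / k ** 0.602                           # step-size sequence
    c_k = 0.1 / k ** 0.101                           # perturbation size
    delta = rng.choice([-1.0, 1.0], size=theta.size) # Rademacher directions
    # Two noisy evaluations yield a gradient estimate in every coordinate.
    g_hat = (noisy_loglik(theta + c_k * delta)
             - noisy_loglik(theta - c_k * delta)) / (2 * c_k * delta)
    theta = theta + a_k * g_hat                      # ascent step (maximise)
print("SPSA estimate of theta:", theta)
```

Only two likelihood evaluations are needed per iteration regardless of the dimension of theta, which is what makes SPSA attractive when each evaluation requires a full ABC-SMC run.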
