Adaptive rejection Metropolis sampling using Lagrange interpolation polynomials of degree 2

Department of Statistics, University of Auckland, Private Bag 92019, Auckland, New Zealand
Computational Statistics & Data Analysis, 03/2008; 52(7):3408-3423. DOI: 10.1016/j.csda.2008.01.005
Source: DBLP

ABSTRACT A crucial problem in Bayesian posterior computation is efficient sampling from a univariate distribution, e.g. a full conditional distribution in applications of the Gibbs sampler. This full conditional distribution is usually non-conjugate, algebraically complex and computationally expensive to evaluate. We propose an alternative algorithm, called ARMS2, to the widely used adaptive rejection sampling technique ARS [Gilks, W.R., Wild, P., 1992. Adaptive rejection sampling for Gibbs sampling. Applied Statistics 41 (2), 337–348; Gilks, W.R., 1992. Derivative-free adaptive rejection sampling for Gibbs sampling. In: Bernardo, J.M., Berger, J.O., Dawid, A.P., Smith, A.F.M. (Eds.), Bayesian Statistics, Vol. 4. Clarendon, Oxford, pp. 641–649] for generating a sample from univariate log-concave densities. Whereas ARS is based on sampling from piecewise exponentials, the new algorithm uses truncated normal distributions and makes use of a clever auxiliary variable technique [Damien, P., Walker, S.G., 2001. Sampling truncated normal, beta, and gamma densities. Journal of Computational and Graphical Statistics 10 (2), 206–215]. Furthermore, we extend this algorithm to deal with non-log-concave densities to provide an enhanced alternative to adaptive rejection Metropolis sampling, ARMS [Gilks, W.R., Best, N.G., Tan, K.K.C., 1995. Adaptive rejection Metropolis sampling within Gibbs sampling. Applied Statistics 44, 455–472]. The performance of ARMS and ARMS2 is compared in simulations of standard univariate distributions as well as in Gibbs sampling of a Bayesian hierarchical state-space model used for fisheries stock assessment.
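The two ingredients described in the abstract can be sketched compactly: a degree-2 Lagrange interpolation polynomial through three support points of the log-density defines, where the leading coefficient is negative, an unnormalised Gaussian piece (since the exponential of a concave quadratic is a Gaussian), and the Damien-Walker auxiliary-variable step draws from that piece restricted to an interval. The sketch below is a minimal, hypothetical illustration of just these two ingredients, not the full ARMS2 algorithm; the function names and the fixed number of Gibbs steps are our own choices.

```python
import math
import random

def lagrange_quadratic(xs, ys):
    """Coefficients (a, b, c) of the degree-2 Lagrange interpolation
    polynomial a*x^2 + b*x + c through the three points (xs[i], ys[i])."""
    x0, x1, x2 = xs
    y0, y1, y2 = ys
    d01 = (y1 - y0) / (x1 - x0)          # Newton divided differences
    d12 = (y2 - y1) / (x2 - x1)
    a = (d12 - d01) / (x2 - x0)
    b = d01 - a * (x0 + x1)
    c = y0 - d01 * x0 + a * x0 * x1
    return a, b, c

def quad_to_gaussian(a, b):
    """A quadratic log-density a*x^2 + b*x + c with a < 0 equals
    -(x - mu)^2 / (2 sigma^2) up to a constant, so its exponential is an
    unnormalised N(mu, sigma^2) density. Returns (mu, sigma)."""
    if a >= 0:
        raise ValueError("log-density piece is not concave")
    sigma2 = -1.0 / (2.0 * a)
    return b * sigma2, math.sqrt(sigma2)

def truncnorm_auxiliary(mu, sigma, lo, hi, n_steps=50, rng=random):
    """Approximate draw from N(mu, sigma^2) truncated to (lo, hi) via the
    Damien-Walker (2001) uniform auxiliary-variable Gibbs chain."""
    z = 0.5 * (lo + hi)                  # any starting point inside (lo, hi)
    for _ in range(n_steps):
        # y | z  ~  U(0, exp(-(z - mu)^2 / (2 sigma^2)))
        y = rng.random() * math.exp(-((z - mu) ** 2) / (2.0 * sigma ** 2))
        # z | y  ~  U on (lo, hi) intersected with {z : (z - mu)^2 < -2 sigma^2 log y}
        w = sigma * math.sqrt(-2.0 * math.log(y))
        a_, b_ = max(lo, mu - w), min(hi, mu + w)
        z = a_ + rng.random() * (b_ - a_)
    return z
```

For example, fitting the quadratic through three points of the standard normal log-density, (-1, -0.5), (0, 0), (1, -0.5), recovers a = -1/2 and hence the Gaussian piece N(0, 1) exactly.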

55 Reads
  • Source
    • "Iterations (while n < N): 2. Build a proposal, π_t(x), given the set S_t = {s_1, . . . , s_{m_t}}, using a convenient procedure (e.g. the ones described in [28], [30] or the simpler ones proposed in Section III-C). 3. Draw x ∼ π̃_t(x) ∝ π_t(x) and u ∼ U([0, 1]). "
  • Source
    • "(a) illustrates an example of this construction. More sophisticated approaches to build W_t(x) (e.g., using quadratic segments when possible [12] "
    ABSTRACT: Adaptive Rejection Metropolis Sampling (ARMS) is a well-known MCMC scheme for generating samples from one-dimensional target distributions. ARMS is widely used within Gibbs sampling, where automatic and fast samplers are often needed to draw from univariate full-conditional densities. In this work, we propose an alternative adaptive algorithm (IA2RMS) that overcomes the main drawback of ARMS (an incomplete adaptation of the proposal in some cases), speeding up the convergence of the chain to the target. Numerical results show that IA2RMS outperforms the standard ARMS, providing a correlation among samples close to zero. Index Terms: Monte Carlo methods, Gibbs sampler, adaptive rejection Metropolis sampling (ARMS).
    IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 01/2014
  • Source
    • ", this construction requires modifying the lines for the intervals I_0 and I_1 of Fig. 1(a) and computing the intersection point between two straight lines (see interval I_2 = (s_2, s_3] of Fig. 1(b)), to be able to draw adequately from the corresponding proposal distribution. Note that a similar procedure using pieces of quadratic functions in the log-domain (namely, pieces of truncated Gaussian densities in the pdf domain) has also been proposed in Meyer et al. (2008). 3.2 Disjoint supports and proposal changes in one interval: Gilks et al. (1995b) introduced for the ARMS algorithm the procedure to build q_{t+1}(x|S_{t+1}) described in the previous section. "
    ABSTRACT: We introduce a new class of adaptive Metropolis algorithms called adaptive sticky algorithms for efficient general-purpose simulation from a target probability distribution. The transition of the Metropolis chain is based on a multiple-try scheme and the different proposals are generated by adaptive nonparametric distributions. Our adaptation strategy uses the interpolation of support points from the past history of the chain as in the adaptive rejection Metropolis. The algorithm efficiency is strengthened by a step that controls the evolution of the set of support points. This extra stage improves the computational cost and accelerates the convergence of the proposal distribution to the target. Despite the algorithms are presented for univariate target distributions, we show that they can be easily extended to the multivariate context by a Gibbs sampling strategy. We show the ergodicity of the proposed algorithms and illustrate their efficiency and effectiveness through some simulated examples involving target distributions with complex structures.
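The loop quoted in the first citation context above (build a proposal π_t from the support set S_t, draw a candidate from it, then accept or reject) is an independence-Metropolis step whose proposal is refreshed as support points accumulate. The fragment below is a hypothetical, stripped-down sketch of such a step: the `make_proposal` interface returning a (sampler, log-pdf) pair is our own choice, and the rule of adding only rejected candidates to the support is one common adaptation strategy, not the exact rule of any of the algorithms above.

```python
import math
import random

def adaptive_im_step(x, support, log_target, make_proposal, rng=random):
    """One independence-Metropolis step with on-the-fly adaptation.
    make_proposal(support) is assumed to return (sample, logpdf): a sampler
    for the proposal pi_t built from the support points, and its log-density."""
    sample, logpdf = make_proposal(support)          # step 2: build pi_t from S_t
    cand = sample(rng)                               # step 3: draw a candidate from pi_t
    # Independence-MH ratio: f(cand) p(x) / (f(x) p(cand)), in log form.
    log_alpha = (log_target(cand) + logpdf(x)) - (log_target(x) + logpdf(cand))
    if math.log(rng.random()) < min(0.0, log_alpha):
        return cand, support                         # accepted: the chain moves
    # Rejected candidates are absorbed into the support set so that the next
    # proposal improves exactly where the current one fits the target poorly.
    return x, sorted(support + [cand])
```

Run repeatedly, the accepted states form a Markov chain with the target as its stationary distribution, while the growing support set lets successive proposals track the target more closely.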

