Introducing Monte Carlo Methods with R


ABSTRACT: Computational techniques based on simulation have now become an essential part of the statistician's toolbox. It is thus crucial to provide statisticians with a practical understanding of those methods, and there is no better way to develop intuition and skills for simulation than to use simulation to solve statistical problems. Introducing Monte Carlo Methods with R covers the main tools used in statistical simulation from a programmer's point of view, explaining the R implementation of each simulation technique and providing the output for better understanding and comparison. While this book constitutes a comprehensive treatment of simulation methods, the theoretical justification of those methods has been considerably reduced compared with Robert and Casella (2004). Similarly, the more exploratory and less stable solutions are not covered here. This book requires no preliminary exposure to the R programming language or to Monte Carlo methods, nor an advanced mathematical background. While many examples are set within a Bayesian framework, advanced expertise in Bayesian statistics is not required. The book covers basic random generation algorithms, Monte Carlo techniques for integration and optimization, convergence diagnostics, and Markov chain Monte Carlo methods, including the Metropolis-Hastings and Gibbs algorithms, as well as adaptive algorithms. All chapters include exercises, and all R programs are available as an R package called mcsm. (Solutions to the exercises proposed in this book are freely accessible at …) The book appeals to anyone with a practical interest in simulation methods but no previous exposure. It is meant to be useful for students and practitioners in areas such as statistics, signal processing, communications engineering, control theory, econometrics, finance, and more. The programming parts are introduced progressively to be accessible to any reader.
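The flavor of the book can be conveyed with a short example (a minimal sketch of basic Monte Carlo integration, written for this summary rather than taken from the book or the mcsm package): approximate the integral of exp(-x^2) over [0, 1], whose true value is about 0.7468, by averaging the integrand over uniform draws.

```r
# Plain Monte Carlo integration: estimate the integral of
# h(x) = exp(-x^2) over [0, 1] as the sample mean of h at uniform draws.
set.seed(1)                        # reproducibility
h <- function(x) exp(-x^2)
u <- runif(1e5)                    # 100,000 draws from Uniform(0, 1)
est <- mean(h(u))                  # Monte Carlo estimate (true value ~ 0.7468)
se  <- sd(h(u)) / sqrt(length(u))  # Monte Carlo standard error of the estimate
c(estimate = est, std.error = se)
```

The standard error shrinks like 1/sqrt(n), which is the basic accuracy trade-off the book's integration chapters develop in detail.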

  • Source
ABSTRACT: We introduce a multivariate hidden Markov model to jointly cluster observations with different support, i.e., circular and linear. Relying on the general projected normal distribution, our approach allows clusters with bimodal marginal distributions for the circular variable. Furthermore, we relax the independence assumption between the circular and linear components observed at the same time. Such an assumption is generally used to alleviate the computational burden involved in the parameter estimation step, but it is hard to justify in empirical applications. We carry out a simulation study using different simulation schemes to investigate model behavior, focusing on how well the hidden structure is recovered. Finally, the model is used to fit a real data example on a bivariate time series of wind velocity and direction.
  • Source
    ABSTRACT: Frequentist and likelihood methods of inference based on the multivariate skew-normal model encounter several technical difficulties with this model. In spite of the popularity of this class of densities, there are no broadly satisfactory solutions for estimation and testing problems. A general population Monte Carlo algorithm is proposed which: 1) exploits the latent structure stochastic representation of skew-normal random variables to provide a full Bayesian analysis of the model and 2) accounts for the presence of constraints in the parameter space. The proposed approach can be defined as weakly informative, since the prior distribution approximates the actual reference prior for the shape parameter vector. Results are compared with the existing classical solutions and the practical implementation of the algorithm is illustrated via a simulation study and a real data example. A generalization to the matrix variate regression model with skew-normal error is also presented.
Computational Statistics & Data Analysis, 63 (February 2013).
  • Source
ABSTRACT: In this paper we consider Markov chains associated with the Metropolis-Hastings algorithm. We propose conditions under which the sequence of successive densities of such a chain converges to the target density in total variation distance for any choice of the initial density. In particular, we prove that positivity of the target and proposal densities is enough for the chain to converge.
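As a companion to this abstract, here is a minimal random-walk Metropolis-Hastings sketch (an illustration written for this summary, not code from the paper): both the N(0, 1) target and the normal random-walk proposal are positive everywhere, matching the positivity condition that the abstract says suffices for convergence from any starting point.

```r
# Random-walk Metropolis-Hastings targeting the standard normal.
set.seed(42)
target <- function(x) exp(-x^2 / 2)    # unnormalized N(0, 1) density
n <- 10000
x <- numeric(n)
x[1] <- 5                              # deliberately remote starting value
for (t in 2:n) {
  prop <- x[t - 1] + rnorm(1)          # symmetric normal proposal
  # Acceptance ratio: proposal terms cancel because the proposal is symmetric.
  if (runif(1) < target(prop) / target(x[t - 1])) {
    x[t] <- prop                       # accept the move
  } else {
    x[t] <- x[t - 1]                   # reject: repeat the current state
  }
}
mean(x[-(1:1000)])                     # after burn-in, close to the target mean 0
```

Despite the poor starting point x[1] = 5, the chain forgets its initial density and the post-burn-in draws behave like (correlated) samples from N(0, 1).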

Full-text (2 sources) available from May 31, 2014.

Christian P. Robert