Introducing Monte Carlo Methods with R (Use R!)

01/2010; DOI: 10.1007/978-1-4419-1576-4
Source: OAI


Solutions to the exercises proposed in this book are freely accessible.

Computational techniques based on simulation have now become an essential part of the statistician's toolbox. It is thus crucial to provide statisticians with a practical understanding of those methods, and there is no better way to develop intuition and skills for simulation than to use simulation to solve statistical problems. Introducing Monte Carlo Methods with R covers the main tools used in statistical simulation from a programmer's point of view, explaining the R implementation of each simulation technique and providing the output for better understanding and comparison. While this book constitutes a comprehensive treatment of simulation methods, the theoretical justification of those methods has been considerably reduced, compared with Robert and Casella (2004). Similarly, the more exploratory and less stable solutions are not covered here. This book does not require a preliminary exposure to the R programming language or to Monte Carlo methods, nor an advanced mathematical background. While many examples are set within a Bayesian framework, advanced expertise in Bayesian statistics is not required. The book covers basic random generation algorithms, Monte Carlo techniques for integration and optimization, convergence diagnoses, Markov chain Monte Carlo methods, including Metropolis-Hastings and Gibbs algorithms, and adaptive algorithms. All chapters include exercises and all R programs are available as an R package called mcsm. The book appeals to anyone with a practical interest in simulation methods but no previous exposure. It is meant to be useful for students and practitioners in areas such as statistics, signal processing, communications engineering, control theory, econometrics, finance and more. The programming parts are introduced progressively to be accessible to any reader.
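As a taste of the Monte Carlo integration techniques the book covers, here is a minimal sketch (in Python rather than the book's R, purely for illustration; all names are illustrative and not taken from the book): estimating the integral of exp(-x^2) over [0, 1] by averaging the integrand over uniform draws.

```python
import math
import random

# Plain Monte Carlo integration: draw x ~ Uniform(0, 1) and average f(x).
# The estimate converges to the integral at rate 1/sqrt(n).
random.seed(1)
n = 100_000
draws = [math.exp(-random.random() ** 2) for _ in range(n)]
estimate = sum(draws) / n

# Monte Carlo standard error of the estimate
se = math.sqrt(sum((d - estimate) ** 2 for d in draws) / (n * (n - 1)))
```

The true value is about 0.7468; with 100,000 draws the standard error is on the order of 0.0006, so the estimate is reliable to roughly two decimal places.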


Available from: Christian P. Robert, Mar 14, 2014
  • Source
    • "Possible reasons of this are as follows. First, the likelihood distribution could be multimodal, which limits MCMC in exploring the full posterior distribution (Robert, 2009). Second, MCMC is limited in exploring all possible DT structures because of the hierarchical structure of DT models (Denison, 2002). "
    ABSTRACT: Bayesian Model Averaging (BMA), computationally feasible using Markov Chain Monte Carlo (MCMC), is a well-known method for reliable estimation of predictive distributions. The use of decision tree (DT) models for the averaging enables experts not only to estimate a predictive posterior but also to interpret models of interest and estimate the importance of predictor factors that are assumed to contribute to the prediction. The MCMC method generates parameters of DT models in order to explore their posterior distributions and to draw samples from the models. However, these samples can often over-represent DT models of an excessive size, which in real-world applications affects the results of BMA. When this happens, it is unlikely for a DT model that provides Maximum a Posteriori probability to explain the observed data with high accuracy. We propose a new technology in order to estimate and interpret predictive posteriors. In our experiments with aircraft short-term conflict alerts, we show how this technology can be used for analysing uncertainties in detections of conflicts.
    Full-text · Technical Report · Jul 2015
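The concern quoted above, that MCMC can fail to explore a multimodal posterior, is easy to reproduce with a minimal random-walk Metropolis sampler (a hypothetical Python sketch, not the paper's code): with a small step size, a chain started in one mode of a well-separated Gaussian mixture almost never visits the other.

```python
import math
import random

def log_target(x):
    # Equal-weight mixture of two unit-variance Gaussians with
    # well-separated modes at -5 and +5.
    return math.log(0.5 * math.exp(-0.5 * (x + 5) ** 2)
                    + 0.5 * math.exp(-0.5 * (x - 5) ** 2))

def metropolis(n, step, x0, seed=0):
    # Random-walk Metropolis: propose x' = x + N(0, step^2),
    # accept with probability min(1, target(x') / target(x)).
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        chain.append(x)
    return chain

chain = metropolis(20_000, step=0.5, x0=-5.0)
# With a small step, the chain essentially never crosses the deep
# valley between the modes, so the right-hand mode goes unexplored.
frac_right = sum(c > 0 for c in chain) / len(chain)
```

Under the true mixture, half the mass lies above zero, yet the chain's fraction of positive samples stays near zero: exactly the mode-trapping the excerpt describes.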
  • Source
    • "Thus, tuning the proposal distribution for all the 2K + T variables may be a hard task, so we might prefer to use an adaptive (auto-tuning) algorithm. Among the algorithms proposed in the literature, we decided to use the one of Robert and Casella (2009), page 258. In order to get parameter estimates, different strategies can be pursued. "
    ABSTRACT: We introduce a multivariate hidden Markov model to jointly cluster observations with different support, i.e. circular and linear. Relying on the general projected normal distribution, our approach allows us to have clusters with bimodal marginal distributions for the circular variable. Furthermore, we relax the independence assumption between the circular and linear components observed at the same time. Such an assumption is generally used to alleviate the computational burden involved in the parameter estimation step, but it is hard to justify in empirical applications. We carry out a simulation study using different simulation schemes to investigate model behavior, focusing on how well the hidden structure is recovered. Finally, the model is used to fit a real data example on a bivariate time series of wind velocity and direction.
    Full-text · Article · Aug 2014 · Environmetrics
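The projected normal distribution mentioned in the abstract can be simulated directly (a hypothetical Python sketch; the paper works with the general, full-covariance version): draw a bivariate normal vector and keep only its direction.

```python
import math
import random

def sample_projected_normal(mu, n, seed=0):
    """Draw n angles from a projected normal with mean vector mu and
    identity covariance: theta = atan2(y, x) for (x, y) ~ N(mu, I)."""
    rng = random.Random(seed)
    return [math.atan2(rng.gauss(mu[1], 1.0), rng.gauss(mu[0], 1.0))
            for _ in range(n)]

angles = sample_projected_normal((2.0, 0.0), 10_000)

# Circular (directional) mean: the direction of the average unit vector,
# the appropriate notion of "mean" for angles.
c = sum(math.cos(a) for a in angles) / len(angles)
s = sum(math.sin(a) for a in angles) / len(angles)
circular_mean = math.atan2(s, c)
```

With the mean vector pulled away from the origin, as here, the angular distribution is unimodal around 0; the bimodal marginals the abstract exploits arise in the general case with a non-identity covariance matrix.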
  • Source
    • "Optimization problems mostly require finding the extrema of a target cost function over a given domain, and performance depends heavily on the analytical properties of that function. Therefore, if the target function is too complex for analytical study, or if the domain is too irregular, the method of choice is the stochastic approach [55, 56]. Since visual search is a complex system under the influence of many mechanisms, it is not easy to predict the selection mechanism through an implicit and deterministic model; therefore, we developed a stochastic model based on MC simulation (To understand Monte Carlo simulation, the reader should consider the following case: a player wants to measure the area of the carpet in a 3 m × 3 m room; the player may randomly throw a button 100 times and count the number of times (k) the button falls onto the carpet. "
    ABSTRACT: Attention allows us to selectively process the vast amount of information with which we are confronted, prioritizing some aspects of information and ignoring others by focusing on a certain location or aspect of the visual scene. Selective attention is guided by two cognitive mechanisms: saliency of the image (bottom up) and endogenous mechanisms (top down). These two mechanisms interact to direct attention and plan eye movements; then, the movement profile is sent to the motor system, which must constantly update the command needed to produce the desired eye movement. A new approach is described here to study how the eye motor control could influence this selection mechanism in clinical behavior: two groups of patients (SCA2 and late onset cerebellar ataxia LOCA) with well-known problems of motor control were studied; patients performed a cognitively demanding task; the results were compared to a stochastic model based on Monte Carlo simulations and a group of healthy subjects. The analytical procedure evaluated some energy functions for understanding the process. The implemented model suggested that patients performed an optimal visual search, reducing intrinsic noise sources. Our findings theorize a strict correlation between the "optimal motor system" and the "optimal stimulus encoders."
    Full-text · Article · Feb 2014
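The carpet thought experiment in the excerpt is the classic hit-or-miss Monte Carlo estimator: the fraction of buttons landing on the carpet, times the known room area, estimates the carpet area. A sketch (hypothetical Python, with an assumed 2 m × 2 m carpet since the excerpt does not give its size):

```python
import random

def carpet_area_mc(room_side, carpet_side, n, seed=0):
    # Throw n "buttons" uniformly into the square room; a button is a
    # hit if it lands on the carpet placed in one corner. The hit rate
    # times the room area estimates the carpet area.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(0.0, room_side)
        y = rng.uniform(0.0, room_side)
        if x < carpet_side and y < carpet_side:
            hits += 1
    return room_side * room_side * hits / n

# 3 m x 3 m room from the excerpt, assumed 2 m x 2 m carpet (true area 4 m^2)
est = carpet_area_mc(3.0, 2.0, 100_000)
```

With 100 throws, as in the excerpt, the estimate is quite noisy; with 100,000 throws it lands within a few hundredths of the true 4 m², illustrating the usual 1/sqrt(n) accuracy of Monte Carlo.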