Adaptive Design Optimization: A Mutual Information-Based Approach to Model Discrimination in Cognitive Science

Department of Psychology, Ohio State University, Columbus, OH 43201, USA.
Neural Computation (Impact Factor: 2.21). 12/2009; 22(4):887-905. DOI: 10.1162/neco.2009.02-09-959
Source: PubMed


Discriminating among competing statistical models is a pressing issue for many experimentalists in the field of cognitive science. Resolving this issue begins with designing maximally informative experiments. To this end, the problem to be solved in adaptive design optimization is identifying experimental designs under which one can infer the underlying model in the fewest possible steps. When the models under consideration are nonlinear, as is often the case in cognitive science, this problem can be impossible to solve analytically without simplifying assumptions. However, as we show in this letter, a full solution can be found numerically with the help of a Bayesian computational trick derived from the statistics literature, which recasts the problem as a probability density simulation in which the optimal design is the mode of the density. We use a utility function based on mutual information and give three intuitive interpretations of the utility function in terms of Bayesian posterior estimates. As a proof of concept, we offer a simple example application to an experiment on memory retention.
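The abstract's approach can be illustrated with a minimal numerical sketch. The two retention models below (exponential vs. power decay), their fixed parameter values, the number of trials, and the grid of candidate lags are all invented for illustration; the letter itself integrates over parameter priors rather than fixing parameter values. The expected utility of a design is the mutual information between the model index and the (binomially distributed) number of recalls, and the best design is its maximizer over the grid.

```python
import math
import numpy as np

# Hypothetical retention models (parameters fixed for simplicity):
# probability of recall after a retention lag t.
models = [
    lambda t: 0.9 * np.exp(-0.2 * t),      # exponential decay
    lambda t: 0.9 * (1 + t) ** -0.5,       # power decay
]
prior = np.array([0.5, 0.5])               # p(m)
n_trials = 20                              # Bernoulli trials per mini-experiment
lags = np.linspace(0.5, 20.0, 40)          # candidate designs d (retention lags)

def binom_pmf(ks, n, p):
    return np.array([math.comb(n, k) * p**k * (1 - p) ** (n - k) for k in ks])

def expected_utility(t):
    """Mutual information between the model index m and the outcome y."""
    ks = range(n_trials + 1)
    lik = np.array([binom_pmf(ks, n_trials, m(t)) for m in models])  # p(y|m,d)
    marg = prior @ lik                                               # p(y|d)
    return float(np.sum(prior[:, None] * lik * np.log(lik / marg)))

U = [expected_utility(t) for t in lags]
best = lags[int(np.argmax(U))]
print(f"most informative lag: {best:.2f} (U = {max(U):.3f} nats)")
```

With two equiprobable models, U is bounded above by log 2 nats, and the maximizing lag is the one at which the two models' recall predictions are most distinguishable given binomial noise.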

Available from: Daniel R Cavagnaro, Jul 29, 2014
  • Source
    • "u(d, θ_m, y) = log [p(m | y, d) / p(m)], (9) which gives U(d) a similar, information-theoretic interpretation as the expected amount of information about the data-generating model that would be provided by the observation of an experimental outcome under design d (Cavagnaro et al., 2010)."
    DESCRIPTION: Working paper on testing models of temporal discounting using Adaptive Design Optimization
    Full-text · Research · Sep 2015
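The utility quoted in the excerpt above, u(d, θ_m, y) = log p(m | y, d) / p(m), is simply the log ratio of the posterior to the prior model probability after one observation. A minimal illustration, with invented likelihood values for a single observed outcome y:

```python
import numpy as np

prior = np.array([0.5, 0.5])                 # p(m), two candidate models
lik_y = np.array([0.8, 0.3])                 # hypothetical p(y | m, d) for observed y
posterior = prior * lik_y / (prior @ lik_y)  # Bayes' rule: p(m | y, d)
u = np.log(posterior / prior)                # information gained about each model
print(posterior, u)
```

The model that predicted the observed outcome better gains probability (positive u); the other loses it (negative u).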
  • Source
    • "The expression p_s(y | d) = Σ_{m=1}^{K} p_s(m) p_s(y | m, d) is the 'grand' marginal likelihood, obtained by averaging the marginal likelihood across K models weighted by the model prior p_s(m). Equation 1 is called the 'expected utility' of the design d because it measures, in an information-theoretic sense, the expected reduction in uncertainty about the true model that would be provided by observing the outcome of a mini-experiment conducted with design d (Cavagnaro et al., 2010)."
    ABSTRACT: Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models.
    Full-text · Article · Dec 2013 · Journal of Risk and Uncertainty
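The grand marginal likelihood from the excerpt above is just a prior-weighted average of per-model likelihoods, and the expected utility of a design is the mean reduction in uncertainty about the model (the expected KL divergence from prior to posterior). A sketch with invented numbers (K = 3 models, two possible outcomes):

```python
import numpy as np

p_m = np.array([0.25, 0.25, 0.5])       # model prior p_s(m) (values invented)
p_y_given_m = np.array([                # p_s(y | m, d): rows = models, cols = outcomes
    [0.7, 0.3],
    [0.5, 0.5],
    [0.2, 0.8],
])
p_y = p_m @ p_y_given_m                 # grand marginal likelihood p_s(y | d)

# Expected utility of design d: average over outcomes of the KL divergence
# between the posterior p_s(m | y, d) and the prior p_s(m).
post = p_m[:, None] * p_y_given_m / p_y
U = float(np.sum(p_y * np.sum(post * np.log(post / p_m[:, None]), axis=0)))
print(p_y, U)
```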
  • Source
    • "An information-theoretic method for model comparison was recently derived by Cavagnaro et al. (2010). Given a set of models with the i-th model having prior probability P_0(i), stimuli are chosen to maximize the mutual information between the stimulus and the model index i by minimizing the expected model-space entropy, in a manner directly analogous to information-theoretic model estimation (Paninski, 2005), except that in this case the unknown variable is a discrete model index i rather than a continuous parameter value θ."
    ABSTRACT: In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system identification paradigm where the experimental goal is to estimate and possibly compare sensory processing models. We discuss various theoretical and practical aspects of adaptive firing rate optimization, including optimization with stimulus space constraints, firing rate adaptation, and possible network constraints on the optimal stimulus. We consider the problem of system identification, and show how accurate estimation of non-linear models can be highly dependent on the stimulus set used to probe the network. We suggest that optimizing stimuli for accurate model estimation may make it possible to successfully identify non-linear models which are otherwise intractable, and summarize several recent studies of this type. Finally, we present a two-stage stimulus design procedure which combines the dual goals of model estimation and model comparison and may be especially useful for system identification experiments where the appropriate model is unknown beforehand. We propose that fast, on-line stimulus optimization enabled by increasing computer power can make it practical to move sensory neuroscience away from a descriptive paradigm and toward a new paradigm of real-time model estimation and comparison.
    Full-text · Article · Jun 2013 · Frontiers in Neural Circuits
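The entropy-minimization framing in the excerpt above is equivalent to maximizing mutual information: I(m; y | d) = H(p(m)) − E_y[H(p(m | y, d))]. A toy comparison of two hypothetical stimuli (all likelihood values invented) shows that the stimulus under which the models disagree more yields the larger expected entropy reduction:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

prior = np.array([0.5, 0.5])                    # P_0(i) over two models
# Hypothetical outcome likelihoods p(y | m, d) for two candidate stimuli.
designs = {
    "d1": np.array([[0.9, 0.1], [0.2, 0.8]]),   # models disagree strongly
    "d2": np.array([[0.6, 0.4], [0.5, 0.5]]),   # models nearly agree
}
info_gain = {}
for name, lik in designs.items():
    p_y = prior @ lik                           # marginal p(y | d)
    post = prior[:, None] * lik / p_y           # p(m | y, d)
    exp_post_H = float(sum(p_y[j] * entropy(post[:, j]) for j in range(len(p_y))))
    info_gain[name] = entropy(prior) - exp_post_H   # mutual information
print(info_gain)
```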