Philosophy and the practice of Bayesian statistics

Department of Statistics and Department of Political Science, Columbia University, New York, USA; Statistics Department, Carnegie Mellon University, Pittsburgh, USA; Santa Fe Institute, Santa Fe, USA.
British Journal of Mathematical and Statistical Psychology (Impact Factor: 1.53). 02/2012; 66(1). DOI: 10.1111/j.2044-8317.2011.02037.x
Source: PubMed

ABSTRACT: A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
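The model checking the abstract emphasizes is typically done with posterior predictive checks: simulate replicated data from the fitted model and ask whether the observed data look unusual by comparison. The following is a minimal sketch, not the authors' code; the normal model, the deliberately skewed data, and the `posterior_predictive_pvalue` helper are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: squared normal draws, so strongly right-skewed.
y = rng.normal(0, 1, size=100) ** 2

def posterior_predictive_pvalue(y, n_sims=2000):
    """Posterior predictive check for a normal model under the
    noninformative prior p(mu, sigma^2) proportional to 1/sigma^2.

    Test statistic: sample skewness. If replicated datasets from the
    fitted model are almost never as skewed as the observed data, the
    model fails the check.
    """
    n = len(y)
    ybar, s2 = y.mean(), y.var(ddof=1)

    def skewness(x):
        return np.mean((x - x.mean()) ** 3) / x.std() ** 3

    t_obs = skewness(y)
    exceed = 0
    for _ in range(n_sims):
        # Draw (sigma^2, mu) from the standard conjugate posterior.
        sigma2 = (n - 1) * s2 / rng.chisquare(n - 1)
        mu = rng.normal(ybar, np.sqrt(sigma2 / n))
        y_rep = rng.normal(mu, np.sqrt(sigma2), size=n)
        exceed += skewness(y_rep) >= t_obs
    return exceed / n_sims

p = posterior_predictive_pvalue(y)
print(p)  # a p-value near 0 means the normal model fails this check
```

A p-value near 0 or 1 flags a discrepancy between model and data; the point of the abstract is that this step sits outside Bayesian updating itself.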

    ABSTRACT: Multimodel inference accommodates uncertainty when selecting or averaging models, which seems logical and natural. However, there are costs associated with multimodel inferences, so they are not always appropriate or desirable. First, we present statistical inference in the big picture of data analysis and the deductive–inductive process of scientific discovery. Inferences on fixed states of nature, such as survey sampling methods, generally use a single model. Multimodel inferences are used primarily when modeling processes of nature, when there is no hope of knowing the true model. However, even in these cases, iterating on a single model may meet objectives without introducing additional complexity. Additionally, discovering new features in the data through model diagnostics is easier when considering a single model. There are costs for multimodel inferences, including the coding, computing, and summarization time on each model. When cost is included, a reasonable strategy may often be iterating on a single model. We recommend that researchers and managers carefully examine objectives and cost when considering multimodel inference methods. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
    Journal of Wildlife Management 05/2015; DOI:10.1002/jwmg.891 · 1.61 Impact Factor
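The model averaging discussed in the abstract above is commonly implemented with Akaike weights, which convert AIC scores for a candidate model set into relative support. A minimal sketch, assuming hypothetical AIC values for three candidate models; the `akaike_weights` helper is an illustration, not the article's code.

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights for a candidate model set: each model's relative
    support, used to average predictions across models."""
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()      # AIC differences from the best model
    w = np.exp(-0.5 * delta)     # relative likelihoods of the models
    return w / w.sum()           # normalize so the weights sum to 1

# Hypothetical AIC scores for three candidate models.
w = akaike_weights([100.0, 102.0, 110.0])
print(np.round(w, 3))
```

The cost argument in the abstract applies here: every model in the set must be coded, fitted, and summarized before its weight can be computed, whereas iterating on a single model avoids that overhead.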
    ABSTRACT: As a contribution to current debates on the ‘social life of methods’, in this article we present an ethnomethodological study of the role of understanding within statistical practice. After reviewing the empirical turn in the methods literature and the challenges to the qualitative-quantitative divide it has given rise to, we argue such case studies are relevant because they enable us to see different ways in which ‘methods’, here quantitative methods, come to have a social life – by embodying and exhibiting understanding they ‘make the social structures of everyday activities observable’ (Garfinkel, 1967: 75), thereby putting society on display. Exhibited understandings rest on distinctive lines of practical social and cultural inquiry – ethnographic ‘forays’ into the worlds of the producers and users of statistics – which are central to good statistical work but are not themselves quantitative. In highlighting these non-statistical forms of social and cultural inquiry at work in statistical practice, our case study is an addition to understandings of statistics and usefully points to ways in which studies of the social life of methods might be further developed from here.
Theory, Culture & Society 01/2015; DOI:10.1177/0263276414559058 · 1.77 Impact Factor
    ABSTRACT: In the management literature, generalizability theory (GT) has been typically used to investigate the reliability of assessment center and job performance ratings. However, the management field has yet to take full advantage of the information GT can offer regarding the reliability of measurement. It is likely that GT has not been adopted because of the complexities involved with its notation and practical application. Moreover, current methods for obtaining accurate interval estimates around estimated variance components or their reliability coefficients are not easily implementable. Alternatively, Bayesian methods provide a different method for estimating GT variance components. Bayesian methods enable management researchers to estimate the posterior distributions of each GT variance component as well as the GT reliability coefficients. From these posterior distributions, researchers can easily obtain the interval estimates for each variance component and the corresponding reliability estimates. Conducting two studies, the authors examine what priors should be used when conducting a Bayesian GT analysis and what estimates should be used to summarize a variance component's posterior distribution. Additionally, the authors find that under certain conditions, Bayesian methods perform better than frequentist methods.
    Journal of Management 01/2014; 41(2):692-717. DOI:10.1177/0149206314554215 · 6.86 Impact Factor
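The Bayesian GT analysis described above can be sketched for the simplest one-facet design (n persons each rated by k raters), where a Gibbs sampler yields posterior draws of the variance components and hence interval estimates for the generalizability coefficient. Everything here is a hypothetical illustration under assumed inverse-gamma priors and simulated data, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated one-facet G-study data: n persons, k raters each.
n, k = 50, 5
persons = rng.normal(0, 1.0, size=n)                     # true person effects
y = persons[:, None] + rng.normal(0, 1.0, size=(n, k))   # ratings with error

def gibbs_gt(y, n_iter=3000, burn=1000, a0=0.001, b0=0.001):
    """Gibbs sampler for y_ij = mu + p_i + e_ij with inverse-gamma(a0, b0)
    priors on both variance components and a flat prior on mu.
    Returns post-burn-in draws of (sigma2_person, sigma2_error)."""
    n, k = y.shape
    mu, p = y.mean(), np.zeros(n)
    s2p, s2e = 1.0, 1.0
    draws = []
    for it in range(n_iter):
        # Person effects p_i | rest: normal full conditional.
        prec = k / s2e + 1.0 / s2p
        mean = ((y - mu).sum(axis=1) / s2e) / prec
        p = rng.normal(mean, np.sqrt(1.0 / prec))
        # Grand mean mu | rest (flat prior).
        resid = y - p[:, None]
        mu = rng.normal(resid.mean(), np.sqrt(s2e / (n * k)))
        # Variance components | rest: conjugate inverse-gamma updates,
        # drawn as (rate) / Gamma(shape).
        s2p = (b0 + 0.5 * np.sum(p ** 2)) / rng.gamma(a0 + 0.5 * n)
        s2e = (b0 + 0.5 * np.sum((y - mu - p[:, None]) ** 2)) / rng.gamma(a0 + 0.5 * n * k)
        if it >= burn:
            draws.append((s2p, s2e))
    return np.array(draws)

draws = gibbs_gt(y)
s2p_d, s2e_d = draws[:, 0], draws[:, 1]
# Generalizability coefficient for the mean of k ratings, with a
# credible interval read directly off the posterior draws.
rho = s2p_d / (s2p_d + s2e_d / k)
print(np.median(s2p_d), np.median(s2e_d))
print(np.percentile(rho, [2.5, 50, 97.5]))
```

This is the payoff the abstract points to: interval estimates for the variance components and for the reliability coefficient come straight from the posterior draws, with no extra derivation.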

