James O. Berger's research while affiliated with East China Normal University and other places

Publications (177)

Article
Full-text available
Effective volcanic hazard management in regions where populations live in close proximity to persistent volcanic activity involves understanding the dynamic nature of hazards and the associated risk. Emphasis until now has been placed on identification and forecasting of the escalation phase of activity, in order to provide adequate warning of what mi...
Article
We consider the standard problem of multiple testing of normal means, obtaining Bayesian multiplicity control by assuming that the prior inclusion probability (the assumed equal prior probability that each mean is nonzero) is unknown and assigned a prior distribution. The asymptotic frequentist behavior of the Bayesian procedure is studied, as the...
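As a companion illustration (not the paper's own code), the following minimal Python sketch implements the setup the abstract describes: a spike-and-slab model for normal means in which the common prior inclusion probability is itself given a uniform prior and integrated out numerically. The slab variance tau2 and the grid resolution are illustrative choices.

```python
# Hypothetical sketch: Bayesian multiplicity control for normal means.
# x_i ~ N(mu_i, 1); mu_i = 0 with prob 1 - p, mu_i ~ N(0, tau2) with prob p;
# the inclusion probability p gets a Uniform(0, 1) prior and is integrated out.
import numpy as np
from scipy.stats import norm

def posterior_inclusion(x, tau2=4.0, grid=2000):
    p = np.linspace(1e-6, 1 - 1e-6, grid)            # grid over inclusion prob
    f0 = norm.pdf(x, scale=1.0)                      # density under mu_i = 0
    f1 = norm.pdf(x, scale=np.sqrt(1.0 + tau2))      # density with mu_i integrated out
    # mixture density of each observation, at every grid value of p
    mix = (1 - p)[None, :] * f0[:, None] + p[None, :] * f1[:, None]
    log_post = np.sum(np.log(mix), axis=0)           # log posterior of p (up to a constant)
    post_p = np.exp(log_post - log_post.max())
    post_p /= post_p.sum()
    # P(mu_i != 0 | data) = E_p[ p f1 / mix ] under the posterior of p
    w = p[None, :] * f1[:, None] / mix
    return w @ post_p

rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(95), np.full(5, 3.0)]) + rng.normal(size=100)
print(np.round(posterior_inclusion(x)[-5:], 3))      # the five shifted means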
Article
Hierarchical models are the workhorse of much of Bayesian analysis, yet there is uncertainty as to which priors to use for hyperparameters. Formal approaches to objective Bayesian analysis, such as the Jeffreys-rule approach or reference prior approach, are only implementable in simple hierarchical settings. It is thus common to use less formal appr...
Article
Full-text available
Informally, ‘information inconsistency’ is the property that has been observed in some Bayesian hypothesis testing and model selection scenarios whereby the Bayesian conclusion does not become definitive when the data seem to become definitive. An example is that, when performing a t test using standard conjugate priors, the Bayes factor of the alt...
Article
Full-text available
Researchers commonly use p-values to answer the question: How strongly does the evidence favor the alternative hypothesis relative to the null hypothesis? p-Values themselves do not directly answer this question and are often misinterpreted in ways that lead to overstating the evidence against the null hypothesis. Even in the “post p < 0.05 era,” h...
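One concrete calibration in this spirit, from Sellke, Bayarri and Berger (2001), bounds how much evidence a p-value can carry against a precise null. A small sketch (the p-values below are for illustration):

```python
# The -e*p*log(p) calibration of Sellke, Bayarri and Berger (2001): for
# p < 1/e it lower-bounds the Bayes factor in favor of the null, so
# 1 / (-e p log p) bounds how strongly the data can favor the alternative.
import math

def bf_bound(p):
    """Lower bound on the Bayes factor B(null : alternative), valid for p < 1/e."""
    if not 0 < p < 1 / math.e:
        raise ValueError("calibration requires 0 < p < 1/e")
    return -math.e * p * math.log(p)

for p in (0.05, 0.01, 0.005):
    print(f"p = {p}: evidence for H1 is at most {1 / bf_bound(p):.1f} : 1")
```

At p = 0.05 the bound is roughly 2.5 : 1, far weaker than the "1 in 20" reading the abstract warns against.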
Article
We present a new approach to model selection and Bayes factor determination, based on Laplace expansions (as in BIC), which we call Prior-based Bayes Information Criterion (PBIC). In this approach, the Laplace expansion is only done with the likelihood function, and then a suitable prior distribution is chosen to allow exact computation of the (app...
Article
The use of models to try to better understand reality is ubiquitous. Models have proven useful in testing our current understanding of reality; for instance, climate models of the 1980s were built for science discovery, to achieve a better understanding of the general dynamics of climate systems. Scientific insights often take the form of general q...
Preprint
The median probability model (MPM; Barbieri and Berger, 2004) is defined as the model consisting of those variables whose marginal posterior probability of inclusion is at least 0.5. The MPM rule yields the best single model for prediction in orthogonal and nested correlated designs. This result was originally conceived under a specific class of pr...
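The rule itself is simple to state in code; a minimal sketch with made-up inclusion probabilities:

```python
# MPM rule: keep every variable whose marginal posterior inclusion
# probability is at least 0.5 (the probabilities below are illustrative).
incl_prob = {"x1": 0.92, "x2": 0.55, "x3": 0.31, "x4": 0.08}
mpm = [v for v, q in incl_prob.items() if q >= 0.5]
print(mpm)  # ['x1', 'x2']
```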
Article
Full-text available
Gaussian stochastic process (GaSP) emulation is a powerful tool for approximating computationally intensive computer models. However, estimation of parameters in the GaSP emulator is a challenging task. No closed-form estimator is available and many numerical problems arise with standard estimates, e.g., the maximum likelihood estimator (MLE). In t...
Article
Direct coupling of computer models is often difficult for computational and logistical reasons. We propose coupling computer models by linking independently developed Gaussian process emulators (GaSPs) of these models. Linked emulators are developed that are closed form, namely normally distributed with closed form predictive mean and variance func...
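The linking construction itself is not reproduced here, but a minimal sketch of the standard GaSP predictive equations it builds on (closed-form posterior mean and variance) may help; the squared-exponential kernel, length-scale, and inputs are illustrative assumptions.

```python
# Standard Gaussian-process prediction: posterior mean k*' K^{-1} y and
# posterior variance k(x,x) - k*' K^{-1} k* (kernel and data illustrative).
import numpy as np

def sqexp(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_predict(x_train, y_train, x_new, nugget=1e-8):
    K = sqexp(x_train, x_train) + nugget * np.eye(len(x_train))
    k_star = sqexp(x_train, x_new)
    mean = k_star.T @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum("ij,ij->j", k_star, np.linalg.solve(K, k_star))
    return mean, var

x = np.linspace(0, 1, 8)
m, v = gp_predict(x, np.sin(2 * np.pi * x), np.array([0.37]))
print(m, v)
```

Because both the mean and variance are available in closed form, composing such emulators keeps the output distribution tractable, which is the property the paper exploits.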
Article
In the context of model uncertainty and selection, empirical Bayes procedures can have undesirable properties such as extreme estimates of inclusion probabilities (Scott and Berger, 2010) or inconsistency under the null model (Liang et al., 2008). To avoid these issues, we define empirical Bayes priors with constraints that ensure that the estimate...
Article
In this note, we provide critical commentary on two articles that cast doubt on the validity and implications of Birnbaum's theorem: Evans (2013) and Mayo (2014). In our view, the proof is correct and the consequences of the theorem are alive and well.
Article
Informally, "Information Inconsistency" is the property that has been observed in many Bayesian hypothesis testing and model selection procedures whereby the Bayesian conclusion does not become definitive when the data seems to become definitive. An example is that, when performing a t-test using standard conjugate priors, the Bayes factor of the a...
Article
Full-text available
We propose to change the default P-value threshold for statistical significance for claims of new discoveries from 0.05 to 0.005.
Article
We consider estimation of the parameters of a Gaussian Stochastic Process (GaSP), in the context of emulation (approximation) of computer models for which the outcomes are real-valued scalars. The main focus is on estimation of the GaSP parameters through various generalized maximum likelihood methods, mostly involving finding posterior modes; this...
Preprint
Full-text available
"We propose to change the default P-value threshold forstatistical significance for claims of new discoveries from 0.05 to 0.005."
Article
The problem of testing mutually exclusive hypotheses with dependent test statistics is considered. Bayesian and frequentist approaches to multiplicity control are studied and compared to help gain understanding as to the effect of test statistic dependence on each approach. The Bayesian approach is shown to have excellent frequentist properties and...
Article
We consider the problem of emulating (approximating) computer models (simulators) that produce massive output. The specific simulator we study is a computer model of volcanic pyroclastic flow, a single run of which produces up to 10⁹ outputs over a space–time grid of coordinates. An emulator (essentially a statistical model of the simulator—we use...
Article
Full-text available
We consider the problem of variable selection in linear models when $p$, the number of potential regressors, may exceed (and perhaps substantially) the sample size $n$ (which is possibly small).
Article
Full-text available
In volcanology, the sparsity of datasets for individual volcanoes is an important problem, which, in many cases, compromises our ability to make robust judgments about future volcanic hazards. In this contribution we develop a method for using hierarchical Bayesian analysis of global datasets to combine information across different volcanoes and to...
Article
Gaussian processes are a popular tool for nonparametric function estimation because of their flexibility and the fact that much of the ensuing computation is parametric Gaussian computation. Often, the function is known to be in a shape-constrained class, such as the class of monotonic or convex functions. Such shape constraints can be incorporated...
Article
Full-text available
Much of science is (rightly or wrongly) driven by hypothesis testing. Even in situations where the hypothesis testing paradigm is correct, the common practice of basing inferences solely on p-values has been under intense criticism for over 50 years. We propose, as an alternative, the use of the odds of a correct rejection of the null hypothesis to...
Chapter
Bayes factors are the primary tool used in Bayesian inference for hypothesis testing and model selection. They also are used by non-Bayesians in the construction of test statistics. For instance, in the testing of simple hypotheses, the Bayes factor equals the ordinary likelihood ratio. Bayesian Model Selection discusses the use of Bayes factors in...
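A toy example of the simple-vs-simple case mentioned in the abstract, where the Bayes factor is just the ordinary likelihood ratio (data values illustrative):

```python
# H0: mu = 0 vs H1: mu = 1, known unit variance; for two simple hypotheses
# the Bayes factor B01 equals the likelihood ratio.
from scipy.stats import norm

x = [0.8, 1.3, 0.2, 1.1]  # illustrative data
bf_01 = norm.pdf(x, loc=0).prod() / norm.pdf(x, loc=1).prod()
print(f"B01 = {bf_01:.3f}")
```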
Article
Full-text available
Rejoinder to Overall Objective Priors by James O. Berger, Jose M. Bernardo, Dongchu Sun [arXiv:1504.02689]
Article
Full-text available
In multi-parameter models, reference priors typically depend on the parameter or quantity of interest, and it is well known that this is necessary to produce objective posterior distributions with optimal properties. There are, however, many situations where one is simultaneously interested in all the parameters of the model or, more realistically,...
Article
Full-text available
This paper presents a novel approach to assessing the hazard threat to a locale due to a large volcanic avalanche. The methodology combines: (i) mathematical modeling of volcanic mass flows; (ii) field data of avalanche frequency, volume, and runout; (iii) large-scale numerical simulations of flow events; (iv) use of statistical methods to minimize...
Article
Model selection procedures often depend explicitly on the sample size n of the experiment. One example is the Bayesian information criterion (BIC) and another is the use of Zellner–Siow priors in Bayesian model selection. Sample size is well-defined if one has i.i.d. real observations, but is not well-defined for vector observations or in...
Article
This article discusses subgroup identification, the goal of which is to determine the heterogeneity of treatment effects across subpopulations. Searching for differences among subgroups is challenging because it is inherently a multiple testing problem with the complication that test statistics for subgroups are typically highly dependent, making s...
Article
Item response theory (IRT) models have been widely used in educational measurement testing. When there are repeated observations available for individuals through time, a dynamic structure for the latent trait of ability needs to be incorporated into the model, to accommodate changes in ability. Other complications that often arise in such settings...
Article
Full-text available
This article considers the development of objective prior distributions for discrete parameter spaces. Formal approaches to such developments, such as the reference prior approach, often result in a constant prior for a discrete parameter, which is questionable for problems that exhibit certain types of structure. To take advantage of structures, t...
Article
Bayesian nonparametric regression with dependent wavelets has dual shrinkage properties: there is shrinkage through a dependent prior put on functional differences, and shrinkage through the setting of most of the wavelet coefficients to zero through Bayesian variable selection methods. The methodology can deal with unequally spaced data and is eff...
Article
Full-text available
We describe work in progress by a collaboration of astronomers and statisticians developing a suite of Bayesian data analysis tools for extrasolar planet (exoplanet) detection, planetary orbit estimation, and adaptive scheduling of observations. Our work addresses analysis of stellar reflex motion data, where a planet is detected by observing the "...
Article
Full-text available
Recently, the RV144 randomized, double-blind, efficacy trial in Thailand reported that a prime-boost human immunodeficiency virus (HIV) vaccine regimen conferred ∼30% protection against HIV acquisition. However, different analyses seemed to give conflicting results, and a heated debate ensued as scientists and the broader public struggled with thei...
Article
This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. Our first goal is to clarify when, and how, multiplicity correction happens automatically in Bayesian analysis, and to distinguish this correction from the Bayesian Ockham's-razor effect. Our second goal is to contrast empirica...
Article
Full-text available
Templeton (1) makes a broad attack on the foundations of Bayesian statistical methods—rather than on the purely numerical technique called approximate Bayesian computation (ABC)—using incorrect arguments and selective references taken out of context. The most significant example is the argument, “The probability of the nested special case must be l...
Article
Full-text available
Several statistical agencies use, or are considering the use of, multiple imputation to limit the risk of disclosing respondents' identities or sensitive attributes in public use files. For example, agencies can release partially synthetic datasets, comprising the units originally surveyed with some values, such as sensitive values at high risk of...
Article
Full-text available
Risk assessment of rare natural hazards— such as large volcanic block and ash or pyroclastic flows— is addressed. Assessment is approached through a combination of computer modeling, statistical modeling, and extreme-event probability computation. A computer model of the natural hazard is used to provide the needed extrapolation to unseen parts of...
Article
Full-text available
The CRASH computer model simulates the effect of a vehicle colliding against different barrier types. If it accurately represents real vehicle crash-worthiness, the computer model can be of great value in various aspects of vehicle design, such as setting the timing of air bag releases. The goal of this study is to address the problem of validat...
Article
The Statistical and Applied Mathematical Sciences Institute (SAMSI) is a national institute in the USA devoted to forging a synthesis of the statistical sciences and the applied mathematical sciences with disciplinary science to confront the very hardest and most important data and model-driven scientific challenges. The vision of SAMSI is describe...
Article
Full-text available
Reference analysis produces objective Bayesian inference, in the sense that inferential statements depend only on the assumed model and the available data, and the prior distribution used to make an inference is least informative in a certain information-theoretic sense. Reference priors have been rigorously defined in specific contexts and heurist...
Article
Full-text available
We appreciate the positive general comments of most of the discussants. And, of course, we are grateful for the interesting and thought-provoking additional insights and comments that they have provided. We provide below a response to these comments. Girón and Moreno. We certainly agree with Professors Girón and Moreno on the interest in sensitivit...
Article
Full-text available
The statistical analysis of a sample taken from a finite population is a classic problem for which no generally accepted objective Bayesian results seem to exist. Bayesian solutions to this problem may be very sensitive to the choice of the prior, and there is no consensus as to the appropriate prior to use. This paper uses new developments in refe...
Article
The Poisson distribution is often used as a standard model for count data. Quite often, however, such data sets are not well fit by a Poisson model because they have more zeros than are compatible with this model. For these situations, a zero-inflated Poisson (ZIP) distribution is often proposed. This article addresses testing a Poisson versus a ZI...
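A minimal sketch of the zero-inflated Poisson model at issue (parameter values illustrative, not from the article): with probability pi the count is a structural zero, otherwise it is Poisson(lambda), and the Poisson-vs-ZIP test asks whether pi = 0 is adequate.

```python
# ZIP pmf: P(K = 0) = pi + (1 - pi) e^{-lam}; P(K = k) = (1 - pi) Poisson(k; lam)
# for k > 0 (pi and lam below are illustrative).
import numpy as np
from scipy.stats import poisson

def zip_pmf(k, lam, pi):
    k = np.asarray(k)
    pmf = (1 - pi) * poisson.pmf(k, lam)
    return np.where(k == 0, pi + pmf, pmf)

counts = np.array([0, 0, 0, 1, 0, 2, 0, 0, 3, 0])
print(zip_pmf(counts, lam=1.2, pi=0.4))
```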
Article
Objective priors for sequential experiments are considered. Common priors, such as the Jeffreys prior and the reference prior, will typically depend on the stopping rule used for the sequential experiment. New expressions for reference priors are obtained in various contexts, and computational issues involving such priors are considered.
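A classic worked instance of this stopping-rule dependence, standard in the objective-Bayes literature: the Jeffreys prior for a success probability differs between binomial sampling and negative binomial (inverse) sampling, even though the likelihoods are proportional.

```latex
\text{Binomial } (n \text{ fixed}): \quad
I(\theta) = \frac{n}{\theta(1-\theta)}
\;\Rightarrow\; \pi_J(\theta) \propto \theta^{-1/2}(1-\theta)^{-1/2},
\qquad
\text{Negative binomial (stop at the } r\text{th success)}: \quad
I(\theta) = \frac{r}{\theta^{2}(1-\theta)}
\;\Rightarrow\; \pi_J(\theta) \propto \theta^{-1}(1-\theta)^{-1/2}.
```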
Article
This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. The first goal of the paper is to clarify when, and how, multiplicity correction is automatic in Bayesian analysis, and contrast this multiplicity correction with the Bayesian Ockham's-razor effect. Secondly, we contrast empiri...
Article
In this paper, we present a framework that enables computer model evaluation oriented towards answering the question: Does the computer model adequately represent reality? The proposed validation framework is a six-step procedure based upon a mix of Bayesian statistical methodology and likelihood methodology. The methodology is particularly suite...
Article
Bayesian statistical practice makes extensive use of versions of objective Bayesian analysis. We discuss why this is so, and address some of the criticisms that have been raised concerning objective Bayesian analysis. The dangers of treating the issue too casually are also considered. In particular, we suggest that the statistical community should...
Article
Objective Bayesian inference for the multivariate normal distribution is illustrated, using different types of formal objective priors (Jeffreys, invariant, reference and matching), different modes of inference (Bayesian and frequentist), and different criteria involved in selecting optimal objective priors (ease of computation, frequentist per...
Article
There has been increased interest of late in the Bayesian approach to multiple testing (often called the multiple comparisons problem), motivated by the need to analyze DNA microarray data in which it is desired to learn which of potentially several thousand genes are activated by a particular stimulus. We study the issue of prior specification for...
Article
Full-text available
Objective Bayesian inference for the multivariate normal distribution is illustrated, using different types of formal objective priors (Jeffreys, invariant, reference and matching), different modes of inference (Bayesian and frequentist), and different criteria involved in selecting optimal objective priors (ease of computation, frequentist per...
Chapter
Cepheid variables are a class of pulsating variable stars with the useful property that their periods of variability are strongly correlated with their absolute luminosity. Once this relationship has been calibrated, knowledge of the period gives knowledge of the luminosity. This makes these stars useful as “standard candles” for estimating distanc...
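The "standard candle" calculation the chapter calibrates can be sketched in a few lines; the coefficients a and b below are placeholder values standing in for the calibrated period-luminosity relation, not the chapter's estimates.

```python
# Distance from a Cepheid: absolute magnitude M = a*log10(P) + b from the
# period-luminosity relation, then the distance modulus m - M = 5 log10(d) - 5
# (a and b are illustrative placeholders).
import math

def cepheid_distance_pc(period_days, apparent_mag, a=-2.8, b=-1.4):
    M = a * math.log10(period_days) + b        # absolute magnitude from the relation
    return 10 ** ((apparent_mag - M + 5) / 5)  # invert the distance modulus

print(f"{cepheid_distance_pc(10.0, 12.0):.0f} pc")
```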
Article
Study of the bivariate normal distribution raises the full range of issues involving objective Bayesian inference, including the different types of objective priors (e.g., Jeffreys, invariant, reference, matching), the different modes of inference (e.g., Bayesian, frequentist, fiducial), and the criteria involved in deciding on optimal objective p...
Article
A variety of pseudo-Bayes factors have been proposed, based on using part of the data to update an improper prior, and using the remainder of the data to compute the Bayes factor. A number of these approaches are of a bootstrap or cross-validation nature, with some type of average being taken over the data used for updating. Asymptotic characterist...
Article
Hierarchical modeling is wonderful and here to stay, but hyperparameter priors are often chosen in a casual fashion. Unfortunately, as the number of hyperparameters grows, the effects of casual choices can multiply, leading to considerably inferior performance. As an extreme, but not uncommon, example, use of the wrong hyperparameter priors can even...
Article
We focus on Bayesian model selection for the variable selection problem in large model spaces. The challenge is to search the huge model space adequately, while accurately approximating model posterior probabilities for the visited models. The issue of choice of prior distributions for the visited models is also important.
Article
After a general discussion of some basic issues in Bayesian model selection, we briefly review three fairly recent developments: (i) The median probability model (rather than the highest posterior probability model) is the model which is typically optimal for prediction in variable selection problems; discussion of this highlights the central role...
Article
Statistics has struggled for nearly a century over the issue of whether the Bayesian or frequentist paradigm is superior. This debate is far from over and, indeed, should continue, since there are fundamental philosophical and pedagogical issues at stake. At the methodological level, however, the debate has become considerably muted, with the recog...
Article
Calibrating and validating a traffic simulation model for use on a transportation network depend on field data that are often limited but essential for determining inputs to the model and for assessing its reliability. Quantification and systemization of the calibration/validation process expose statistical issues inherent in the use of such data....
Article
Ozonesondes collect data relevant to ozone level at various altitudes. Modeling this data involves a combination of spatial and temporal modeling. The spatial component can be conveniently modeled as a four component mixture of normal distributions. The (relatively few) parameters of this mixture can then be modeled in a time-dependent fashion, via...
Article
Full-text available
CORSIM is a large microsimulator for vehicular traffic, and is being studied with respect to its ability to successfully model and predict behavior of traffic in a 36-block section of Chicago. Inputs to the simulator include information about street configuration, driver behavior, traffic light timing, turning probabilities at each corner and distr...
Article
Stone (J. Roy. Statist. Soc. Ser. B 41 (1979) 276) showed that BIC can fail to be asymptotically consistent. Note, however, that BIC was developed as an asymptotic approximation to Bayes factors between models, and that the approximation is valid only under certain conditions. The counterexample of Stone arises in situations in which BIC is not an...
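For readers who want the connection spelled out: BIC approximates minus twice the log marginal likelihood, so BIC differences approximate log Bayes factors, but only under the asymptotic conditions the abstract refers to. The sketch below uses made-up log-likelihood values.

```python
# BIC = k*log(n) - 2*log(max likelihood); exp{(BIC_0 - BIC_1)/2} then
# approximates the Bayes factor of model 1 to model 0 when the usual
# asymptotics apply (numbers below are illustrative).
import math

def bic(loglik, k, n):
    return k * math.log(n) - 2.0 * loglik

bic0 = bic(loglik=-520.3, k=2, n=200)   # smaller nested model
bic1 = bic(loglik=-512.9, k=4, n=200)   # larger model
print(f"approx BF(1:0) = {math.exp((bic0 - bic1) / 2):.1f}")
```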
Article
Testing of a composite null hypothesis versus a composite alternative is considered when both have a related invariance structure. The goal is to develop conditional frequentist tests that allow the reporting of data-dependent error probabilities, error probabilities that have a strict frequentist interpretation and that reflect the actual amount...
Article
Full-text available
Central to several objective approaches to Bayesian model selection is the use of training samples (subsets of the data), so as to allow utilization of improper objective priors. The most common prescription for choosing training samples is to choose them to be as small as possible, subject to yielding proper posteriors; these are called minimal tr...
Article
Often the goal of model selection is to choose a model for future prediction, and it is natural to measure the accuracy of a future prediction by squared error loss. Under the Bayesian approach, it is commonly perceived that the optimal predictive model is the model with highest posterior probability, but this is not necessarily the case. In this p...
Article
Ronald Fisher advocated testing using p-values, Harold Jeffreys proposed use of objective posterior probabilities of hypotheses, and Jerzy Neyman recommended testing with fixed error probabilities. Each was quite critical of the other approaches. Most troubling for statistics and science is that the three approaches can lead to quite different prac...
Article
Confidence in computational predictions is enhanced if the potential 'error' in these predictions (the difference between the prediction and nature's outcome in the situation being simulated) can be credibly bounded. The "model-validation" process by which experimental or field results are compared to computational predictions to produce this confi...
Article
Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest p...
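A minimal sketch of the quantity whose intervals are at issue (inputs illustrative): posttest odds are pretest odds multiplied by the likelihood ratio of the test, and the positive predictive value is the corresponding probability.

```python
# Posttest probability of disease after a positive test:
# post odds = pre odds * LR+, where LR+ = sensitivity / (1 - specificity).
def posttest_probability(pretest_prob, sensitivity, specificity):
    lr_pos = sensitivity / (1.0 - specificity)   # likelihood ratio, positive test
    pre_odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = pre_odds * lr_pos
    return post_odds / (1.0 + post_odds)         # positive predictive value

print(f"{posttest_probability(0.10, sensitivity=0.9, specificity=0.95):.3f}")  # 0.667
```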
Article
Cepheid variables are a class of pulsating variable stars with the useful property that their periods of variability are strongly correlated with their absolute luminosity. Once this relationship has been calibrated, knowledge of the period gives knowledge of the luminosity. This makes these stars useful as "standard candles" for estimating distanc...
Article
Spatially varying phenomena are often modeled using Gaussian random fields, specified by their mean function and covariance function. The spatial correlation structure of these models is commonly specified to be of a certain form (e.g., spherical, power exponential, rational quadratic, or Matérn) with a small number of unknown parameters. We consi...
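As background (not the paper's code), a sketch of one covariance family named in the abstract, the Matérn correlation function; the range phi and smoothness nu below are illustrative values.

```python
# Matern correlation: C(d) = 2^{1-nu}/Gamma(nu) * (sqrt(2 nu) d/phi)^nu
#                            * K_nu(sqrt(2 nu) d/phi), with C(0) = 1.
import numpy as np
from scipy.special import gamma, kv

def matern(d, phi=1.0, nu=1.5):
    d = np.asarray(d, dtype=float)
    scaled = np.sqrt(2 * nu) * d / phi
    out = np.ones_like(d)                 # correlation 1 at distance 0
    nz = scaled > 0
    out[nz] = (2 ** (1 - nu) / gamma(nu)) * scaled[nz] ** nu * kv(nu, scaled[nz])
    return out

print(matern(np.array([0.0, 0.5, 1.0, 2.0])))
```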
Article
Testing the fit of data to a parametric model can be done by embedding the parametric model in a nonparametric alternative and computing the Bayes factor of the parametric model to the nonparametric alternative. Doing so by specifying the nonparametric alternative via a Polya tree process is particularly attractive, from both theoretical and method...
Article
Selection models are appropriate when a datum x enters the sample only with probability or weight w(x). It is typically assumed that the weight function w is monotone, but the precise functional form of the weight function is often unknown. In this paper, the Dirichlet process prior, centered on a parametric form, is used as a prior distribution on...
Article
The basics of the Bayesian approach to model selection are first presented, as well as the motivations for the Bayesian approach. We then review four methods of developing default Bayesian procedures that have undergone considerable recent development, the Conventional Prior approach, the Bayes Information Criterion, the Intrinsic Bayes Factor, and...
Article
The problem of assessing compatibility of an assumed model with the data is investigated in the situation when the assumed model has unknown parameters. The most frequently used measures of compatibility are p values, based on statistics T for which large values are deemed to indicate incompatibility of the data and the model. When the null mod...
Article
Spatially varying phenomena are often modeled using Gaussian random fields, specified by their mean function and covariance function. The spatial correlation structure of these models is commonly specified to be of a certain form (e.g., spherical, power exponential, rational quadratic, or Matérn) with a small number of unknown parameters. We consi...