James O. Berger’s research while affiliated with East China Normal University and other places


Publications (190)


Objective Bayesian Inference and Its Relationship to Frequentism
  • Chapter
  • January 2024 · 4 Reads · 2 Citations
  • James Berger · Jose Bernardo · Dongchu Sun

Seismic, Global Positioning System (GPS), and SO2 monitoring data for the period January 1, 1995–November 7, 2019. Extrusive phases and pauses are shown in red and green, respectively. Top: number of seismic events detected by the seismic system. Middle: radial ground displacement of continuous GPS (cGPS) stations MVO1 (red) and GERD (blue), smoothed with a 7-day running-mean filter; black: GPS height of HARR. Bottom: measured daily SO2 flux, filtered with a 7-day running-median filter. Green: Correlation Spectrometer (COSPEC); blue: Differential Optical Absorption Spectroscopy (DOAS); white: traverse data; red: new DOAS network (adapted from MVO activity report; MVO OFR 18-02-draft).
Cumulative number of recorded dome-collapse PDCs, beginning July 18, 1995 and ending with the last recorded PDC on March 28, 2013. Data are from FlowDat (Ogburn and Calder, 2012; Ogburn and Calder, 2017).
Solid black curve shows the forecast probability (nearly zero) of at least one PDC in s years, for 0 ≤ s ≤ 20, starting t = 7 years after the most recent eruption. Assumed activity rates are (A) λ = 1 event per year, (B) λ = 0.1 yr⁻¹ (once per decade), and (C) λ = 0.01 yr⁻¹ (once per century), on average, until the uncertain time T the eruption ends. Thin dashed black curve shows the probability of the complementary event, zero PDCs in s years, as the sum of three parts: the probability that the eruption has already paused by time t (red curve), and the probabilities that the eruption pauses during the next s years (blue dash-dot curve) or after the next s years (green dashed curve) without an intervening PDC.
Solid black curve shows the forecast probability of at least one PDC in s = 5 years, starting t = 7 years after the most recent eruption and assuming an activity rate of λ events per year (on average) until the uncertain time T when the eruption ends, plotted for 0 ≤ λ ≤ 0.5. The peak probability of P = 0.0947 is achieved at λ = 0.085, about one event per decade. Results are similar for other choices of s in the range 0 ≤ s ≤ 20 years. Other curves have the same meaning as in Figure 3.
Solid black curve shows the forecast probability of at least one PDC in s = 100 years, assuming an activity rate of λ = 0.1 event per year, starting t years after the most recent eruption for 7 ≤ t ≤ 20. Dotted red curve shows the forecast probability that the eruption has already ended by time t, rising from 67% at t = 7 years (i.e., in 2020) to 95% at t = 20 years.


Volcanic Hazard Assessment for an Eruption Hiatus, or Post-eruption Unrest Context: Modeling Continued Dome Collapse Hazards for Soufrière Hills Volcano
  • Article
  • Full-text available
  • December 2020 · 72 Reads · 7 Citations

Effective volcanic hazard management in regions where populations live in close proximity to persistent volcanic activity involves understanding the dynamic nature of hazards, and associated risk. Emphasis until now has been placed on identification and forecasting of the escalation phase of activity, in order to provide adequate warning of what might be to come. However, understanding eruption hiatus and post-eruption unrest hazards, or how to quantify residual hazard after the end of an eruption, is also important and often key to timely post-eruption recovery. Unfortunately, in many cases when the level of activity lessens, the hazards, although reduced, do not necessarily cease altogether. This is due to both the imprecise nature of determination of the “end” of an eruptive phase as well as to the possibility that post-eruption hazardous processes may continue to occur. An example of the latter is continued dome collapse hazard from lava domes which have ceased to grow, or sector collapse of parts of volcanic edifices, including lava dome complexes. We present a new probabilistic model for forecasting pyroclastic density currents (PDCs) from lava dome collapse that takes into account the heavy-tailed distribution of the lengths of eruptive phases, the periods of quiescence, and the forecast window of interest. In the hazard analysis, we also consider probabilistic scenario models describing the flow’s volume and initial direction. Further, with the use of statistical emulators, we combine these models with physics-based simulations of PDCs at Soufrière Hills Volcano to produce a series of probabilistic hazard maps for flow inundation over 5, 10, and 20 year periods. The development and application of this assessment approach is the first of its kind for the quantification of periods of diminished volcanic activity. 
As such, it offers evidence-based guidance for dome collapse hazards that can be used to inform decision-making around provisions of access and reoccupation in areas around volcanoes that are becoming less active over time.
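The core forecasting structure described above — PDCs occurring at rate λ while the eruption lasts, with an uncertain end time T drawn from a heavy-tailed distribution — can be sketched with a small Monte Carlo calculation. This is an illustrative toy, not the paper's fitted model: the Pareto duration distribution, its parameters, and the function name are assumptions made here.

```python
import numpy as np

def pdc_forecast_prob(lam=0.1, t=7.0, s=5.0, alpha=1.0, xm=0.1,
                      n=200_000, seed=0):
    """Monte Carlo sketch: P(at least one PDC in the next s years),
    forecast t years after the most recent eruption.

    Illustrative assumptions (not the paper's fitted model):
      - eruption duration T ~ Pareto(alpha) with scale xm (heavy-tailed)
      - PDCs follow a Poisson process with rate lam while the eruption lasts
    """
    rng = np.random.default_rng(seed)
    # numpy's pareto() draws Lomax samples; (1 + X) * xm is classical Pareto.
    T = xm * (1.0 + rng.pareto(alpha, size=n))
    # Erupting time that falls inside the forecast window [t, t + s].
    active = np.clip(T - t, 0.0, s)
    # P(>=1 Poisson event | T), averaged over the duration uncertainty.
    return float(np.mean(1.0 - np.exp(-lam * active)))
```

With these toy parameters most simulated eruptions have already ended by t = 7 years, so the forecast probability is small — qualitatively matching the "nearly zero" probabilities in the figures.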




Frequentist Properties of Bayesian Multiplicity Control for Multiple Testing of Normal Means
  • March 2020 · 18 Reads · 1 Citation
  • Sankhya A

We consider the standard problem of multiple testing of normal means, obtaining Bayesian multiplicity control by assuming that the prior inclusion probability (the assumed equal prior probability that each mean is nonzero) is unknown and assigned a prior distribution. The asymptotic frequentist behavior of the Bayesian procedure is studied, as the number of tests grows. Studied quantities include the false positive probability, which is shown to go to zero asymptotically. The asymptotics of a Bayesian decision-theoretic approach are also presented.
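The multiplicity-control mechanism described in the abstract — an unknown prior inclusion probability that is itself given a prior — can be sketched numerically. The spike-and-slab form, the N(0, τ²) slab, the uniform hyperprior, and all names below are illustrative choices made here, not the paper's exact setup.

```python
import numpy as np
from scipy.stats import norm

def posterior_inclusion(x, tau2=4.0, grid=2001):
    """Sketch of Bayesian multiplicity control for testing H0i: mu_i = 0.

    Illustrative model:
      x_i | mu_i ~ N(mu_i, 1);  mu_i = 0 with prob. 1-p, else mu_i ~ N(0, tau2);
      p ~ Uniform(0, 1)  -- the unknown prior inclusion probability.
    Returns P(mu_i != 0 | x) for each i, integrating p out on a grid.
    """
    p = np.linspace(1e-6, 1 - 1e-6, grid)[:, None]   # grid over p
    f0 = norm.pdf(x, 0.0, 1.0)                       # null marginal density
    f1 = norm.pdf(x, 0.0, np.sqrt(1.0 + tau2))       # slab marginal density
    mix = (1 - p) * f0 + p * f1                      # shape (grid, m)
    logpost = np.log(mix).sum(axis=1)                # log pi(p | x) + const
    w = np.exp(logpost - logpost.max())
    w /= w.sum()                                     # posterior weights on p
    incl = (p * f1) / mix                            # P(mu_i != 0 | p, x)
    return (w[:, None] * incl).sum(axis=0)           # average over p
```

With many observations near zero, the posterior on p concentrates near zero and automatically shrinks the inclusion probabilities of borderline cases: this is the multiplicity adjustment the abstract studies asymptotically.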


An objective prior for hyperparameters in normal hierarchical models
  • March 2020 · 65 Reads · 8 Citations
  • Journal of Multivariate Analysis

Hierarchical models are the workhorse of much of Bayesian analysis, yet there is uncertainty as to which priors to use for hyperparameters. Formal approaches to objective Bayesian analysis, such as the Jeffreys-rule approach or reference prior approach, are only implementable in simple hierarchical settings. It is thus common to use less formal approaches, such as utilizing formal priors from non-hierarchical models in hierarchical settings. This can be fraught with danger, however. For instance, non-hierarchical Jeffreys-rule priors for variances or covariance matrices result in improper posterior distributions if they are used at higher levels of a hierarchical model. Berger et al. (2005) approached the question of choice of hyperpriors in normal hierarchical models by looking at the frequentist notion of admissibility of resulting estimators. Hyperpriors that are ‘on the boundary of admissibility’ are sensible choices for objective priors, being as diffuse as possible without resulting in inadmissible procedures. The admissibility (and propriety) properties of a number of priors were considered in the paper, but no overall conclusion was reached as to a specific prior. In this paper, we complete the story and propose a particular objective prior for use in all normal hierarchical models, based on considerations of admissibility, ease of implementation and performance.
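The impropriety warning in the abstract can be checked numerically in the simplest normal hierarchical model, x_i | θ_i ~ N(θ_i, 1), θ_i ~ N(0, τ²). The priors compared below are illustrative stand-ins (the second is not the paper's proposed prior): the point is only that the posterior mass under the non-hierarchical Jeffreys-type prior π(τ²) ∝ 1/τ² diverges as the lower integration limit shrinks to zero, while a proper diffuse alternative stays finite.

```python
import numpy as np
from scipy.integrate import quad

def marg_lik(tau2, x):
    """Marginal likelihood of tau2 (up to a constant) after integrating
    out theta_i in x_i | theta_i ~ N(theta_i, 1), theta_i ~ N(0, tau2)."""
    v = 1.0 + tau2
    return v ** (-len(x) / 2) * np.exp(-np.sum(x ** 2) / (2 * v))

x = np.array([0.2, -0.5, 1.1, 0.3, -0.8])  # illustrative data

def mass_jeffreys(eps):
    """Posterior mass on [eps, 50] under pi(tau2) ∝ 1/tau2 (the
    non-hierarchical Jeffreys-rule prior for a variance). The integrand
    behaves like c/tau2 near 0, so the mass diverges as eps -> 0."""
    return quad(lambda t: marg_lik(t, x) / t, eps, 50.0, limit=200)[0]

def mass_shrinkage(eps):
    """Posterior mass on [eps, 50] under a proper diffuse alternative,
    pi(tau2) ∝ 1/(1 + tau2)^2 (illustrative choice only)."""
    return quad(lambda t: marg_lik(t, x) / (1 + t) ** 2, eps, 50.0, limit=200)[0]
```

Shrinking `eps` makes `mass_jeffreys` grow without bound (logarithmically), which is exactly the improper-posterior pathology the abstract warns about, while `mass_shrinkage` converges.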


Fig. 1: The Bayes factor B₁₀ based on the conjugate prior (solid line) and independence prior (dashed line) as a function of t when n = 7, ρ = 0.5, s_y² = n − 1 = 6, s₀² = s₁² = 1, for different choices of the prior degrees of freedom ν₀ and ν₁.
Limiting values of the Bayes factor for a univariate t test as |t| → ∞ for different choices of the sample size n and the correlation ρ.
Severity of information inconsistency of various priors for different hypothesis tests.
On the prevalence of information inconsistency in normal linear models
  • February 2020 · 79 Reads · 8 Citations
  • Test

Informally, ‘information inconsistency’ is the property that has been observed in some Bayesian hypothesis testing and model selection scenarios whereby the Bayesian conclusion does not become definitive when the data seem to become definitive. An example is that, when performing a t test using standard conjugate priors, the Bayes factor of the alternative hypothesis to the null hypothesis remains bounded as the t statistic grows to infinity. The goal of this paper is to thoroughly investigate information inconsistency in various Bayesian testing problems. We consider precise hypothesis tests, one-sided hypothesis tests, and multiple hypothesis tests under normal linear models with dependent observations. Standard priors are considered, such as conjugate and semi-conjugate priors, as well as variations of Zellner’s g prior (e.g., fixed g priors, mixtures of g priors, and adaptive (data-based) g priors). It is shown that information inconsistency is a widespread problem using standard priors while certain theoretically recommended priors, including scale mixtures of conjugate priors and adaptive priors, are information consistent.
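The bounded-Bayes-factor phenomenon can be reproduced directly from the closed-form Bayes factor for a normal linear model under Zellner's g prior (the standard expression given in Liang et al., 2008); the sample size and g below are illustrative choices.

```python
def gprior_bf(r2, n, p, g):
    """Bayes factor BF10 (alternative vs. intercept-only null) for a
    linear model with p predictors under Zellner's g prior:
        BF10 = (1+g)^((n-p-1)/2) * (1 + g*(1 - R^2))^(-(n-1)/2)
    As R^2 -> 1 (data become definitive), BF10 stays bounded by
    (1+g)^((n-p-1)/2) for any fixed g -- information inconsistency.
    """
    return (1 + g) ** ((n - p - 1) / 2) * (1 + g * (1 - r2)) ** (-(n - 1) / 2)
```

For example, with n = 20, p = 2, and fixed g = 20, even R² arbitrarily close to 1 cannot push the Bayes factor above 21^8.5; mixtures of g priors remove this bound, which is the contrast the paper investigates.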




Citations (78)


... In the absence of genuine prior information, or when some sort of "objectivity" of the estimation process in the field of official statistics is required, the use of formal noninformative priors must be recommended in order to provide "calibrated answers" with good frequentist properties (Berger et al. 2022). In complex designs, the derivation of the formal noninformative prior is too difficult, and approximations are necessary, as for example in Berger et al. (2020). ...

Reference:

Bayesian Ideas in Survey Sampling: The Legacy of Basu
Objective Bayesian Inference and Its Relationship to Frequentism
  • Citing Chapter
  • January 2024

... However, in most multidimensional settings, the analytical evaluations related to a reference prior appear to be quite cumbersome. Thus, some numerical algorithms are designed to do the computations in objective Bayesian inference (e.g., [34,35]), which seems unfavorable for the theoretical derivations of this paper. Additionally, in Section 4, the applications of Jeffreys priors yield an appropriate scale of the eigenvalues of the sample covariance matrix, and also reproduce the classic constant false alarm ratio (CFAR) detector in radar detection theory. ...

Objective Bayesian Analysis for the Multivariate Normal Model
  • Citing Chapter
  • July 2007

... where δ₀(·) denotes a Dirac measure with point mass at 0 and ω represents the probability that γₘ = 0. The mixing parameter ω is assigned the Jeffreys prior distribution (see, e.g., Gelman et al., 2014 [12]; Murphy, 2022 [13]; Berger et al., 2024 [18]) over the interval (0, 1), ensuring a non-informative prior that allows the model to adaptively determine which variables are important. The slab component g(· | a_γ, b_γ) is modeled as an inverse uniform distribution on R⁺, defined as ...

Objective Bayesian Inference
  • Citing Book
  • September 2023

... Since 2010, SHV has been in a prolonged repose interval, with signs of volcanic unrest that could presage renewed eruptive activity still being detected, including island-wide surface inflation, fumarole outgassing, and seismicity. However, this unusually long eruptive pause prompts questions regarding its nature (Hickey et al., 2022; Spiller et al., 2020). Inter-eruptive repose, signaling the end of the eruptive episode, and intra-eruptive repose, preceding continued eruptive activity, have vastly different hazard assessment implications. ...

Volcanic Hazard Assessment for an Eruption Hiatus, or Post-eruption Unrest Context: Modeling Continued Dome Collapse Hazards for Soufrière Hills Volcano

... While [55] argued that the median of the ŷᵢ's was a good predictor of Y₁ = y₁, perhaps thinking of absolute error on account of the asymmetry, [116] noted that in this particular instance, the mean of the ŷᵢ's was actually closer to yᵢ than the median. In effect, [116] argued that (8) ...

The Median Probability Model and Correlated Variables
  • Citing Article
  • December 2020

Bayesian Analysis

... Recently, [1] proposed the Shrinkage Inverse Wishart (SIW) prior and showed that it has excellent decision-theoretic estimation properties as well as good eigenstructure shrinkage. The prior density is given by ...

Bayesian analysis of the covariance matrix of a multivariate normal distribution with a new class of priors
  • Citing Article
  • August 2020

The Annals of Statistics

... For example, usual improper priors which are routinely used in standard statistical models are not adequate for small area estimation and more generally for hierarchical models. See, as a general reference, Berger et al. (2020), where the authors derive a proper prior on the boundary of admissibility, which is as diffuse as possible without resulting in inadmissible procedures. A more specific analysis for small area models is described in Burris and Hoff (2019), where an alternative confidence interval procedure for the area means and totals is proposed under normally distributed sampling errors. ...

An objective prior for hyperparameters in normal hierarchical models
  • Citing Article
  • March 2020

Journal of Multivariate Analysis

... Thus, the Bayes factor in (7) should also approach infinity, whereas with a fixed value of g, it approaches a constant (1 + g)^((n−p−1)/2) as f → ∞. This is referred to as the "information paradox", discussed by Liang et al. (2008), Wang and Liu (2016), and Mulder et al. (2021) for the variable selection problem in linear models. ...

On the prevalence of information inconsistency in normal linear models

Test

... This method, commonly referred to as null hypothesis significance testing (NHST), is extensively applied not only in marketing but also across multiple scientific fields, such as biomedical and social sciences (Goodman et al., 2019; Hofmann & Meyer-Nieberg, 2018; McShane et al., 2024). Given that the null hypothesis holds, the p-value quantifies the likelihood of observing an effect as extreme as or more extreme than the one measured (Benjamin & Berger, 2019; Goodman et al., 2019). ...

Three Recommendations for Improving the Use of p-Values

... Second, the UIP, ZS, and HGN priors rely on the total number of observations N from all studies in the IPD-MA. While including N in Bayesian model selection is common, with two typical examples being the Bayesian Information Criterion (BIC) and the ZS prior, Berger et al. (2014) and Bayarri et al. (2019) have proposed to use the effective sample size (TESS) instead of a simple number of observations to achieve model selection consistency. Drawing inspiration from the TESS, it becomes apparent that using N in the prior assumes that each participant contributes equally to the estimation of γ_k, overlooking differences in individuals across studies and imposing insufficient shrinkage of the estimates. ...

Prior-based Bayesian information criterion
  • Citing Article
  • March 2019

Statistical Theory and Related Fields