Comparing Bayesian Posterior Passing with Meta-analysis
Joshua Pritsker
San Diego Mesa College
Purdue University
Abstract
Brand, Ounsley, van der Post, and Morgan (2019) introduced Bayesian posterior passing as an
alternative to traditional meta-analyses. In this commentary I relate their procedure to traditional
meta-analysis, showing that posterior passing is equivalent to fixed effects meta-analysis. To
overcome the limitations of simple posterior passing, I introduce improved posterior passing
methods to account for heterogeneity and publication bias. Additionally, practical limitations of
posterior passing and the role that it can play in future research are discussed.
Keywords: Bayesian updating, posterior passing, meta-analysis
Comparing Bayesian Posterior Passing with Meta-analysis
The ability to accumulate evidence across studies is often said to be a great advantage of
Bayesian inference. This point was recently discussed by Brand, Ounsley, van der Post, and Morgan (2019), who suggested that Bayesian posterior passing alleviates the need for traditional meta-analysis. They performed a simulation study comparing posterior passing to non-cumulative analysis and to a combined analysis of the data from all studies. However, they did not provide a
formal theoretical comparison of posterior passing to traditional methods of meta-analysis. This
commentary relates posterior passing to traditional meta-analyses, allowing one to determine
the performance of posterior passing on the basis of how traditional meta-analytic methods are
known to perform. To avoid some of the pitfalls of posterior passing, I suggest improved
procedures that account for heterogeneity and publication bias. I address Brand et al.’s (2019)
suggestion that posterior passing avoids some of the problems of traditional meta-analyses, and
discuss practical limitations in using it as a replacement for traditional meta-analysis.
Posterior Passing as Meta-analysis
As Brand et al. (2019) suggest that posterior passing may replace traditional meta-analyses, one might wonder how posterior passing relates to them. Given that meta-analyses are typically based on proxy statistics and their standard errors instead of individual patient data, one might consider posterior distributions generated the same way. We can do this by using the likelihood of a statistic, such as an estimate of the parameter of interest, instead of the likelihood of the full data. Hence, ignoring the normalizing constant, we derive:
\[ f(\theta \mid x_i) \propto \pi(\theta)\, f(x_i \mid \theta) \tag{1a} \]
\[ = f(\theta \mid x_{i-1})\, f(\hat{\theta}_i \mid \theta, \mathrm{SE}_i^2) \tag{1b} \]
where \( \mathrm{SE}_i \) is the standard error for \( \hat{\theta}_i \). Typically, \( f(\hat{\theta}_i \mid \theta, \mathrm{SE}_i^2) = \phi(\hat{\theta}_i \mid \theta, \mathrm{SE}_i^2) \), where \( \phi(x \mid \mu, \sigma^2) \) is the density at \( x \) under a normal distribution with a mean of \( \mu \) and variance of \( \sigma^2 \). If \( \hat{\theta} \) and \( \mathrm{SE} \) are sufficient for \( \theta \), no information is gained by using the full data over these statistics. Now, we can expand (1) across \( k \) studies:
\[ f(\theta \mid x) \propto \pi_0(\theta) \prod_{i=1}^{k} \phi(\hat{\theta}_i \mid \theta, \mathrm{SE}_i^2) \tag{2} \]
where \( \pi_0(\theta) \) is the prior distribution used by the initial study. How does this compare to the
posterior distribution given by a traditional meta-analysis? In a fixed-effects setting (3), they are
identical, provided that the meta-analysis uses the same prior as the initial study.
\[ \hat{\theta}_i \sim \mathcal{N}(\theta, \mathrm{SE}_i^2) \tag{3a} \]
\[ f_{\text{fixed}}(\theta \mid x) \propto \pi(\theta) \prod_{i=1}^{k} \phi(\hat{\theta}_i \mid \theta, \mathrm{SE}_i^2) \tag{3b} \]
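To make this equivalence concrete, the following is a minimal numerical sketch (the estimates and standard errors are hypothetical, and normal statistic likelihoods with a flat initial prior are assumed): passing a normal posterior from study to study recovers exactly the inverse-variance weighted mean and variance of a fixed-effects meta-analysis.

```python
import numpy as np

# Hypothetical study estimates and standard errors.
theta_hat = np.array([0.42, 0.25, 0.38, 0.30])
se = np.array([0.15, 0.10, 0.20, 0.12])

# Posterior passing: each study's normal posterior becomes the next
# study's prior. A flat initial prior corresponds to zero precision.
post_mean, post_prec = 0.0, 0.0
for t, s in zip(theta_hat, se):
    prec = 1.0 / s**2
    post_mean = (post_prec * post_mean + prec * t) / (post_prec + prec)
    post_prec += prec

# Fixed-effects meta-analysis: inverse-variance weighted average.
w = 1.0 / se**2
print(post_mean, 1.0 / post_prec)                           # posterior passing
print(np.sum(w * theta_hat) / np.sum(w), 1.0 / np.sum(w))   # identical result
```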
Hence, in a random-effects setting (4), posterior passing, like (3), will underestimate the
posterior variance. This point was previously noted by Martin (2017). This equivalence also
answers some of the questions brought up by Brand et al. (2019), such as how posterior passing
would perform under publication bias. Now, it is clear that we can use existing studies on
traditional fixed-effects meta-analyses to answer these questions (e.g., Simonsohn, Nelson, &
Simmons, 2014).
\[ \mu_i \sim \mathcal{N}(\theta, \tau^2) \tag{4a} \]
\[ \hat{\theta}_i \sim \mathcal{N}(\mu_i, \mathrm{SE}_i^2) \tag{4b} \]
\[ f_{\text{random}}(\theta \mid x) \propto \pi(\theta) \prod_{i=1}^{k} \int_{\mu_i} \phi(\hat{\theta}_i \mid \mu_i, \mathrm{SE}_i^2)\, \phi(\mu_i \mid \theta, \tau^2)\, d\mu_i \tag{4c} \]
\[ = \pi(\theta) \prod_{i=1}^{k} \phi(\hat{\theta}_i \mid \theta, \mathrm{SE}_i^2 + \tau^2) \tag{4d} \]
A random-effects model is typically preferable, as the fixed-effects assumption of no between-study variance is implausible in most settings (Borenstein, Hedges, Higgins, & Rothstein, 2010).
The use of the full-data posterior by Brand et al. (2019) is inconsequential to this result. Hence,
although posterior passing will produce consistent point estimates, the posterior variance will be
underestimated.
How Can We Improve Posterior Passing?
Incorporating Random Effects
An obvious question to ask at this point is whether we can modify posterior passing to incorporate random effects. A Bayesian solution is to update \( \tau \) along with \( \theta \), modeling their joint posterior distribution. Using statistic likelihoods as in the previous section, we can derive the joint posterior update as follows:
\[ f(\theta, \tau \mid x_i) \propto \pi(\theta, \tau)\, f(x_i \mid \theta, \tau) \tag{5a} \]
\[ = f(\theta, \tau \mid x_{i-1})\, f(x_i \mid \theta, \tau) \tag{5b} \]
\[ = f(\theta, \tau \mid x_{i-1}) \int_{\mu_i} f(x_i \mid \mu_i)\, \phi(\mu_i \mid \theta, \tau^2)\, d\mu_i \tag{5c} \]
\[ = f(\theta, \tau \mid x_{i-1})\, \phi(\hat{\theta}_i \mid \theta, \mathrm{SE}_i^2 + \tau^2) \tag{5d} \]
To get the marginal posterior distribution of \( \theta \), one may integrate out \( \tau \), and vice versa.
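As a sketch of how this update could be carried out, the following grid approximation (with hypothetical estimates and arbitrarily chosen grid bounds) applies (5d) study by study and passes the joint posterior over \( (\theta, \tau) \) forward. In practice one would more likely pass MCMC samples or a parametric approximation of the joint posterior rather than a grid.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical sequence of study estimates and standard errors.
theta_hat = [0.42, 0.25, 0.38, 0.30]
se = [0.15, 0.10, 0.20, 0.12]

# Discretize (theta, tau) and start from a uniform joint prior.
theta_grid = np.linspace(-1.0, 1.5, 301)
tau_grid = np.linspace(0.0, 0.8, 161)
T, Tau = np.meshgrid(theta_grid, tau_grid, indexing="ij")
post = np.ones_like(T) / T.size

for t, s in zip(theta_hat, se):
    # Eq. (5d): once mu_i is integrated out, the statistic likelihood is
    # normal with variance SE_i^2 + tau^2.
    post *= norm.pdf(t, loc=T, scale=np.sqrt(s**2 + Tau**2))
    post /= post.sum()  # this normalized grid is the passed posterior

# Marginal posteriors: integrate out the other parameter.
theta_marginal = post.sum(axis=1)
tau_marginal = post.sum(axis=0)
print(theta_grid[np.argmax(theta_marginal)], tau_grid[np.argmax(tau_marginal)])
```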
Addressing Publication Bias
A second major issue with posterior passing is that Brand et al. (2019) provide no way to address publication bias. Without adjustment, posterior passing will perform identically to a fixed-effects meta-analysis that completely ignores publication bias. Unadjusted meta-analyses are known to perform poorly when publication bias is substantial, yielding potentially misleading results (Simonsohn, Nelson, & Simmons, 2014). This problem can be viewed as one of biased sampling; hence, the likelihood function is given by a weighted distribution, as follows (Pfeffermann, Krieger, & Rinott, 1998):
\[ f(x_i \mid \text{published}_i) = \frac{E[p \mid x_i]}{E[p]}\, f(x_i) \tag{6} \]
where \( p \) is the probability of publication. Now, we need to model \( E[p \mid x_i] \). Considering that studies are typically given a dichotomous interpretation (McShane & Gal, 2017), a realistic option is a simple step function:
\[ E[p \mid x_i] = \begin{cases} \alpha, & s(x_i) = 0 \\ \beta, & s(x_i) = 1 \end{cases} \tag{7} \]
where \( s(x_i) \) is a function that gives the interpretation of \( x_i \) in a pass/fail manner, such as whether the p-value for \( \hat{\theta}_i \) is below 0.05. However, it is unclear whether dichotomization exists to the same extent in Bayesian studies as it does in frequentist studies. In such a context one may replace (7) with a smoother model, such as a logistic one:
\[ E[p \mid x_i] = \operatorname{logistic}(\alpha + \beta\, s(x_i)) \tag{8} \]
For a review of other models that have been suggested, see Sutton, Song, Gilbody, and Abrams
(2000). In either case, the posterior update is now as follows:
\[ f(\theta, \tau, \alpha, \beta \mid x_i) \propto \pi(\theta, \tau, \alpha, \beta)\, f(x_i \mid \theta, \tau, \alpha, \beta) \tag{9a} \]
\[ \propto \pi(\theta, \tau, \alpha, \beta)\, \frac{E[p \mid x_i, \alpha, \beta]}{E[p \mid \theta, \tau, \alpha, \beta]}\, f(x_i \mid \theta, \tau) \tag{9b} \]
\[ = f(\theta, \tau, \alpha, \beta \mid x_{i-1})\, \frac{E[p \mid x_i, \alpha, \beta]}{E[p \mid \theta, \tau, \alpha, \beta]} \int_{\mu_i} f(x_i \mid \mu_i)\, \phi(\mu_i \mid \theta, \tau^2)\, d\mu_i \tag{9c} \]
\[ = f(\theta, \tau, \alpha, \beta \mid x_{i-1})\, \frac{E[p \mid x_i, \alpha, \beta]}{E[p \mid \theta, \tau, \alpha, \beta]}\, \phi(\hat{\theta}_i \mid \theta, \mathrm{SE}_i^2 + \tau^2) \tag{9d} \]
As multiple studies are needed to identify \( \tau \), \( \alpha \), and \( \beta \), inference in early studies will be highly dependent on the prior distribution for these parameters, which should not be uninformative. However, this problem dissipates as evidence for these parameters accumulates.
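As a sketch of how the bias-adjusted update (9d) could be implemented on a grid like the one above: the step model (7) is assumed with a two-sided z test at the 0.05 level as \( s(x_i) \), and \( \alpha \) and \( \beta \) are held fixed for brevity, whereas a full treatment would place them on the grid and update them jointly.

```python
import numpy as np
from scipy.stats import norm

def s(theta_hat, se):
    # Pass/fail interpretation: two-sided z test at the 0.05 level.
    return np.abs(theta_hat / se) > 1.96

def pub_prob(theta_hat, se, alpha, beta):
    # Eq. (7): E[p | x_i] under the step selection model.
    return np.where(s(theta_hat, se), beta, alpha)

def expected_pub_prob(theta, tau, se, alpha, beta, n=801):
    # E[p | theta, tau, alpha, beta]: average publication probability over
    # the marginal density of theta_hat, computed by quadrature.
    sd = np.sqrt(se**2 + tau**2)
    grid = np.linspace(theta - 6 * sd, theta + 6 * sd, n)
    return np.trapz(pub_prob(grid, se, alpha, beta) * norm.pdf(grid, theta, sd), grid)

def bias_adjusted_update(post, T, Tau, theta_hat, se, alpha=0.3, beta=0.9):
    # Eq. (9d), with alpha and beta held fixed for illustration.
    like = norm.pdf(theta_hat, loc=T, scale=np.sqrt(se**2 + Tau**2))
    num = pub_prob(theta_hat, se, alpha, beta)
    den = np.vectorize(lambda th, ta: expected_pub_prob(th, ta, se, alpha, beta))(T, Tau)
    post = post * (num / den) * like
    return post / post.sum()
```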
Including Studies Outside of the Posterior Passing Chain
In meta-analysis, extensive literature searches are conducted to avoid systematically
excluding any studies. However, it may not be obvious how one can incorporate studies outside of a posterior passing ‘chain’ (a sequence of studies in which each study’s prior is the previous study’s posterior) into our prior distribution. The same problem occurs if two studies
are done simultaneously, creating a fork in the chain. Brand et al. (2019) suggest that a normal
prior with variance representing our uncertainty and with a mean at the estimate given by the
study may be used. However, it can be difficult to determine such a variance, and this procedure
only makes sense if we are the first study in a chain aiming to get information from an unchained
study. A better answer is to include the study’s likelihood function in our posterior derivation.
Switching back to the simple fixed effects procedure for conciseness, this gives us:
\[ f(\theta \mid x_i) \propto \pi(\theta)\, f(x_{i-1} \mid \theta)\, f(x_i \mid \theta) \tag{10} \]
To include more studies, simply add more \( f(x_j \mid \theta) \) functions. This is made particularly simple by using likelihood functions based on statistics and their standard errors, as discussed previously.
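Concretely, continuing the grid sketch from the random-effects section (reusing post, T, and Tau from there), an unchained study enters as one more statistic-likelihood factor. The estimate and standard error below are hypothetical, and the random-effects likelihood (5d) is used so the snippet composes with that sketch:

```python
# Fold in an unchained study (hypothetical estimate 0.35, SE 0.18) by
# multiplying its statistic likelihood into the current posterior, as in
# eq. (10); post, T, and Tau are as in the earlier grid sketch.
post *= norm.pdf(0.35, loc=T, scale=np.sqrt(0.18**2 + Tau**2))
post /= post.sum()
```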
Further Comments and Discussion
Is Bayesianism Needed for Bayesian-style Updating?
Although posterior passing may seem like a highly Bayesian process, it can easily be converted to a pure likelihood procedure by simply ignoring the initial prior distribution. For instance, using the fixed effects procedure:
\[ L(\theta;\, x_1, \ldots, x_i) \propto L(\theta;\, x_1, \ldots, x_{i-1})\, f(x_i;\, \theta) \tag{11} \]
The interpretation of the resulting function is that it is simply the likelihood for \( \theta \) across all the
studies in the chain. Frequentists can then derive results from (11) via standard trial sequential
methods (cf. Wetterslev, Jakobsen, & Gluud, 2017), and likelihoodists can construct support
intervals. The use of a statistic likelihood serves as a counterpart to Brand et al.’s (2019) use of a normal approximation to the posterior distribution, approximating the full-data likelihood by a Gaussian function when the statistic is a maximum likelihood estimate. Hence, the utility
remains even without a Bayesian framework.
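As a minimal illustration of this likelihood-only reading (hypothetical estimates again, normal statistic likelihoods assumed), accumulating the log-likelihood across the chain and maximizing it recovers the same point estimate with no prior involved:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Hypothetical chain of study estimates and standard errors.
theta_hat = [0.42, 0.25, 0.38, 0.30]
se = [0.15, 0.10, 0.20, 0.12]

# Eq. (11): the running product of statistic likelihoods, kept on the
# log scale; no prior distribution enters at any point.
def neg_log_lik(theta):
    return -sum(norm.logpdf(t, loc=theta, scale=s) for t, s in zip(theta_hat, se))

mle = minimize_scalar(neg_log_lik, bounds=(-1.0, 2.0), method="bounded")
print(mle.x)  # matches the inverse-variance weighted mean from earlier
```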
Is Posterior Passing a Practical Replacement for Meta-analysis?
Brand et al. (2019) suggest that posterior passing can replace traditional meta-analyses.
Indeed, the improved posterior passing procedures introduced in the previous section can
compete with traditional meta-analyses, but are they practical? To match the quality of traditional
meta-analyses, one would have to meet the same conditions, including an extensive literature search and the inclusion of all available studies. Furthermore, most studies are currently done in a
frequentist manner, and splits can occur in a chain due to simultaneous studies, so in practice
each study in the chain will have to do its own mini meta-analysis. At that point, it might be
preferable to just do traditional meta-analyses.
Brand et al. (2019) also suggest that posterior passing can solve the problem of
conflicting meta-analyses by updating evidence in real time. Meta-analyses can provide
contradictory results due to a number of reasons, such as differing inclusion criteria and using
different methods. However, posterior passing appears to mitigate only those differences that arise from meta-analyses being conducted at different times. In that case, though, one would simply go with the
most recent meta-analysis. When multiple meta-analyses disagree, they often have differences in
statistical methodology or inclusion criteria. Replacing meta-analyses with posterior passing
would likely result in multiple posterior passing chains to reflect this disagreement. In fact, for
any criticism of a meta-analysis, one could seemingly make an equivalent criticism of a posterior
passing chain. Instead of having disagreeing meta-analyses, we would simply have a
number of individual studies in disagreement. One could argue that posterior passing could help
mitigate disagreements of this nature because inclusion criteria could change with any
study in the chain. However, having such fuzzy inclusion criteria is clearly undesirable as it
would lead to results with an unclear interpretation. Hence, posterior passing does not appear to
avoid the problem of conflicting meta-analyses in a desirable manner.
Alternative Roles for Posterior Passing
Even if posterior passing cannot generally replace traditional meta-analyses, it may
nonetheless be useful. With the improvements suggested above, posterior passing can replace
traditional meta-analysis in areas where meta-analyses are unlikely to produce conflicting results
in the first place. In this scenario, the advantage of posterior passing is that we get live updates
to the field’s knowledge with each new study. An alternative to creating posterior passing chains
that still utilizes the posterior passing mechanism is to use it in meta-analyses themselves. This does not solve the issue of conflicting meta-analyses, but it has practical advantages. By using the posterior
distribution of the last similar meta-analysis as a prior distribution, meta-analyses can be
performed in chunks instead of having to redo the entire meta-analysis with each update.
Similarly, instead of using posterior passing chains, studies can use posterior distributions from
meta-analyses as their priors to get accurate net effect estimates within each study. This allows
for broader conclusions than would otherwise be warranted by the study alone. Hence, although
posterior passing may have problems as a replacement for meta-analysis, it can have utility
regardless.
References
Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2010). A basic introduction to fixed-effect and random-effects models for meta-analysis. Research Synthesis Methods, 1(2), 97-111. doi:10.1002/jrsm.12
Brand, C. O., Ounsley, J. P., van der Post, D. J., & Morgan, T. J. H. (2019). Cumulative science via Bayesian posterior passing. Meta-Psychology, 3. doi:10.15626/MP.2017.840
Martin, S. (2017). Open Peer Review by Stephen Martin. Decision Letter for Brand et al.
McShane, B. B., & Gal, D. (2017). Statistical significance and the dichotomization of
evidence. Journal of the American Statistical Association, 112(519), 885-895.
doi:10.1080/01621459.2017.1289846
Pfeffermann, D., Krieger, A. M., & Rinott, Y. (1998). Parametric distributions of complex survey data under informative probability sampling. Statistica Sinica, 8, 1087-1114.
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014). p-Curve and effect size: Correcting for
publication bias using only significant results. Perspectives on Psychological Science,
9(6), 666-681.
Sutton, A. J., Song, F., Gilbody, S. M., & Abrams, K. R. (2000). Modelling publication bias in
meta-analysis: a review. Statistical Methods in Medical Research, 9(5), 421–
445. doi:10.1177/096228020000900503
Wetterslev, J., Jakobsen, J. C., & Gluud, C. (2017). Trial Sequential Analysis in systematic reviews with meta-analysis. BMC Medical Research Methodology, 17(1), 39. doi:10.1186/s12874-017-0315-7