Running head: COMPARING BAYESIAN POSTERIOR PASSING WITH META-ANALYSIS 1
Comparing Bayesian Posterior Passing with Meta-analysis
San Diego Mesa College
Brand, Ounsley, van der Post, and Morgan (2019) introduced Bayesian posterior passing as an
alternative to traditional meta-analyses. In this commentary I relate their procedure to traditional
meta-analysis, showing that posterior passing is equivalent to fixed effects meta-analysis. To
overcome the limitations of simple posterior passing, I introduce improved posterior passing
methods to account for heterogeneity and publication bias. Additionally, practical limitations of
posterior passing and the role that it can play in future research are discussed.
Keywords: Bayesian updating, posterior passing, meta-analysis
Comparing Bayesian Posterior Passing with Meta-analysis
The ability to accumulate evidence across studies is often said to be a great advantage of
Bayesian inference. This point was recently discussed by Brand, Ounsley, van der Post, and
Morgan (2019), who suggested that Bayesian posterior passing alleviates the need for traditional
meta-analysis. They performed a simulation study comparing posterior passing to non-cumulative
analysis and to a combined analysis of the data from all studies. However, they did not provide a
formal theoretical comparison of posterior passing to traditional methods of meta-analysis. This
commentary relates posterior passing to traditional meta-analyses, allowing for one to determine
the performance of posterior passing on the basis of how traditional meta-analytic methods are
known to perform. To avoid some of the pitfalls of posterior passing, I suggest improved
procedures that account for heterogeneity and publication bias. I address the suggestion of
Brand et al. (2019) that posterior passing avoids some of the problems of traditional meta-analyses,
and discuss practical limitations in using it as a replacement for traditional meta-analysis.
Posterior Passing as Meta-analysis
As Brand et al. (2019) suggest that posterior passing may replace traditional meta-
analyses, one might wonder how posterior passing relates to traditional meta-analyses. Given
that meta-analyses are typically based on proxy statistics and their standard errors instead of
individual patient data, one might consider posterior distributions generated the same way. We
can do this by using the likelihood of a statistic, such as an estimate of the parameter of interest,
instead of the likelihood of the full data. Hence, ignoring the normalizing constant, we derive:
\pi(\theta \mid \hat\theta_i) \propto \pi(\theta) f(\hat\theta_i; \theta, SE_i) \quad (1)

Where SE_i is the standard error for \hat\theta_i. Typically, f(\hat\theta_i; \theta, SE_i) = \varphi(\hat\theta_i; \theta, SE_i^2), where
\varphi(x; \mu, \sigma^2) is the density at x under a normal distribution with a mean of \mu and variance of \sigma^2.
If \hat\theta_i and SE_i are sufficient for \theta, no information is gained by using the full data over these
statistics. Now, we can expand (1) across studies:

\pi(\theta \mid \hat\theta_1, \ldots, \hat\theta_i) \propto \pi_0(\theta) \prod_{j=1}^{i} f(\hat\theta_j; \theta, SE_j) \quad (2)

Where \pi_0(\theta) is the prior distribution used by the initial study. How does this compare to the
posterior distribution given by a traditional meta-analysis? A fixed-effects meta-analysis assumes
that each estimate is drawn from

\hat\theta_i \sim \mathcal{N}(\theta, SE_i^2),

so that

f_{fixed}(\theta \mid x) \propto \pi(\theta) \prod_{j=1}^{i} \varphi(\hat\theta_j; \theta, SE_j^2) \quad (3)

In this fixed-effects setting (3), the two are identical, provided that the meta-analysis uses the
same prior as the initial study. A random-effects meta-analysis instead assumes

\mu_i \sim \mathcal{N}(\theta, \tau^2), \qquad \hat\theta_i \sim \mathcal{N}(\mu_i, SE_i^2),

so that, treating \tau as known,

f_{random}(\theta \mid x) \propto \pi(\theta) \prod_{j=1}^{i} \varphi(\hat\theta_j; \theta, \tau^2 + SE_j^2) \quad (4)

Hence, in a random-effects setting (4), posterior passing, like (3), will underestimate the
posterior variance. This point was previously noted by Martin (2017). This equivalence also
answers some of the questions brought up by Brand et al. (2019), such as how posterior passing
would perform under publication bias. Now, it is clear that we can use existing studies on
traditional fixed-effects meta-analyses to answer these questions (e.g., Simonsohn, Nelson, &
Simmons, 2014).
A random-effects model is typically preferable, as the fixed-effects assumption of no between-
study variance is implausible in most settings (Borenstein, Hedges, Higgins, & Rothstein, 2010).
The use of the full-data posterior by Brand et al. (2019) is inconsequential to this result. Hence,
although posterior passing will produce consistent point estimates, the posterior variance will be
underestimated whenever between-study heterogeneity is present.
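To make the equivalence concrete, here is a minimal numeric sketch (the effect sizes and standard errors are invented for illustration): passing a normal posterior from study to study yields exactly the same posterior as a one-shot fixed-effects combination under the same initial prior.

```python
import math

def update(prior_mean, prior_var, est, se):
    """One conjugate normal update: posterior for theta after observing
    an estimate with known standard error (fixed-effects likelihood)."""
    post_prec = 1.0 / prior_var + 1.0 / se ** 2
    post_var = 1.0 / post_prec
    post_mean = post_var * (prior_mean / prior_var + est / se ** 2)
    return post_mean, post_var

# Invented example studies: (estimate, standard error).
studies = [(0.30, 0.10), (0.18, 0.08), (0.25, 0.12)]

# Posterior passing: each study uses the previous posterior as its prior.
mean, var = 0.0, 100.0  # vague initial prior
for est, se in studies:
    mean, var = update(mean, var, est, se)

# One-shot fixed-effects combination with the same initial prior.
prec = 1.0 / 100.0 + sum(1.0 / se ** 2 for _, se in studies)
fe_var = 1.0 / prec
fe_mean = fe_var * (0.0 / 100.0 + sum(est / se ** 2 for est, se in studies))

assert math.isclose(mean, fe_mean) and math.isclose(var, fe_var)
print(mean, var)
```

Note that the sequential and one-shot posterior variances agree exactly, illustrating why posterior passing inherits fixed-effects behavior.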
How Can We Improve Posterior Passing?
Incorporating Random Effects
An obvious question to ask at this point is whether we can modify posterior passing to
incorporate random effects. A Bayesian solution is to update \tau along with \theta, modeling their joint
posterior distribution. Using statistic likelihoods as in the previous section, we can derive the
joint posterior update as follows:

\pi(\theta, \tau \mid x_1, \ldots, x_i) \propto \pi(\theta, \tau \mid x_1, \ldots, x_{i-1}) \, \varphi(\hat\theta_i; \theta, \tau^2 + SE_i^2) \quad (5)

To get the marginal posterior distribution of \theta, one may integrate out \tau, and vice versa.
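A minimal sketch of this joint update, using a grid approximation over theta and tau; the grid ranges, the flat prior, and the example estimates are all illustrative assumptions rather than part of the original procedure.

```python
import math

def normal_pdf(x, mu, var):
    """Density of a normal distribution with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Hypothetical grids for theta and tau, chosen for illustration.
thetas = [i / 100.0 for i in range(-100, 101)]   # -1.00 .. 1.00
taus = [i / 100.0 for i in range(1, 51)]         #  0.01 .. 0.50

# Flat joint prior over the grid.
post = [[1.0 for _ in taus] for _ in thetas]

studies = [(0.30, 0.10), (0.18, 0.08), (0.25, 0.12)]  # invented (est, SE)

# Sequential joint update: multiply in the marginal likelihood
# N(est; theta, tau^2 + SE^2) for each new study.
for est, se in studies:
    for i, th in enumerate(thetas):
        for j, ta in enumerate(taus):
            post[i][j] *= normal_pdf(est, th, ta ** 2 + se ** 2)

# Normalize, then marginalize out tau to get the posterior for theta.
total = sum(sum(row) for row in post)
marg_theta = [sum(row) / total for row in post]
post_mean = sum(th * w for th, w in zip(thetas, marg_theta))
print(round(post_mean, 3))
```

In practice one would use MCMC rather than a grid, but the grid makes the "integrate out tau" step explicit.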
Addressing Publication Bias
A second major issue with posterior passing is that Brand et al. (2019) provide no way to
address publication bias. Without adjustment, posterior passing will perform identically to a
fixed-effects meta-analysis that completely ignores publication bias. Unadjusted meta-analyses are
known to perform poorly when publication bias is substantial, yielding potentially misleading
results (Simonsohn, Nelson, & Simmons, 2014). This problem can be viewed as one of biased
sampling, hence the likelihood function is given by a weighted distribution as follows
(Pfeffermann, Krieger, & Rinott, 1998):

f_w(x_i \mid \theta) = \frac{E[p_i \mid x_i] \, f(x_i \mid \theta)}{E[p_i \mid \theta]} \quad (6)

Where p_i are the probabilities of publication. Now, we need to model E[p_i \mid x_i]. Considering
that studies are typically given a dichotomous interpretation (McShane & Gal, 2017), a realistic
option is a simple step function:

E[p_i \mid x_i] = \begin{cases} \alpha & \text{if } s(x_i) = 1 \\ \beta & \text{if } s(x_i) = 0 \end{cases} \quad (7)

Where s(x_i) is a function that gives the interpretation of x_i in a pass/fail manner, such as the p-
value of x_i being below 0.05. However, it is unclear if dichotomization exists to the same extent
in Bayesian studies as it does in frequentist studies. In such a context one may replace (7) with a
smoother model, such as a logistic one:

E[p_i \mid x_i] = \mathrm{logistic}(\alpha + \beta s(x_i)) \quad (8)

For a review of other models that have been suggested, see Sutton, Song, Gilbody, and Abrams
(2000). In either case, the posterior update is now as follows:

\pi(\theta, \tau, \alpha, \beta \mid x_1, \ldots, x_i) \propto \pi(\theta, \tau, \alpha, \beta \mid x_1, \ldots, x_{i-1}) \, f_w(x_i \mid \theta, \tau, \alpha, \beta) \quad (9)

As multiple studies are needed to identify \tau, \alpha, and \beta, inference in early studies will be highly
dependent on the prior distribution for these parameters, which should not be uninformative.
However, this problem dissipates as evidence for these parameters accumulates.
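A step-function selection model of this kind can be sketched numerically. Everything concrete below is an illustrative assumption: the 1.96 significance cutoff, the grid over theta, the example estimates, and the simplification of fixing tau at zero; the weighted likelihood divides by the publication probability given theta, as in a weighted-distribution likelihood.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def weighted_lik(est, se, theta, rho):
    """Likelihood of a *published* estimate under a step selection model:
    significant results are always published, others with probability rho."""
    sig = abs(est / se) > 1.96
    p_given_x = 1.0 if sig else rho
    # Probability that a study with true effect theta yields a significant estimate.
    p_sig = phi(-1.96 - theta / se) + 1 - phi(1.96 - theta / se)
    p_given_theta = p_sig + rho * (1 - p_sig)
    return p_given_x * normal_pdf(est, theta, se ** 2) / p_given_theta

thetas = [i / 100.0 for i in range(-100, 101)]
studies = [(0.30, 0.10), (0.25, 0.12)]  # invented; both "significant"

def grid_mean(rho):
    """Posterior mean for theta under a flat grid prior and selection level rho."""
    w = [1.0] * len(thetas)
    for est, se in studies:
        w = [wi * weighted_lik(est, se, th, rho) for wi, th in zip(w, thetas)]
    total = sum(w)
    return sum(th * wi for th, wi in zip(thetas, w)) / total

# Accounting for selection (rho < 1) pulls the estimate toward zero,
# relative to ignoring publication bias entirely (rho = 1).
print(grid_mean(1.0), grid_mean(0.1))
```

With rho = 1 the model reduces to the unadjusted fixed-effects posterior, which is the sense in which unadjusted posterior passing ignores publication bias.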
Including Studies Outside of the Posterior Passing Chain
In meta-analysis, extensive literature searches are conducted to avoid systematically
excluding any studies. However, it may not be obvious how one can incorporate into our prior
distribution studies outside of a posterior passing ‘chain’ (a sequence of studies where each
study’s prior is equal to the previous study’s posterior). The same problem occurs if two studies
are done simultaneously, creating a fork in the chain. Brand et al. (2019) suggest that a normal
prior with variance representing our uncertainty and with a mean at the estimate given by the
study may be used. However, it can be difficult to determine such a variance, and this procedure
only makes sense if we are the first study in a chain aiming to get information from an unchained
study. A better answer is to include the study’s likelihood function in our posterior derivation.
Switching back to the simple fixed effects procedure for conciseness, this gives us:

\pi(\theta \mid x_1, \ldots, x_i, \tilde{x}) \propto \pi(\theta \mid x_1, \ldots, x_i) \, f(\tilde{x}; \theta) \quad (10)

Where \tilde{x} denotes the data of the study outside the chain. To include more studies, simply add
more f functions. This is made particularly simple by using likelihood functions based on
statistics and their standard errors as discussed previously.
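With normal statistic likelihoods, multiplying in an unchained study's likelihood is just one extra conjugate update, and the result does not depend on where that study would have entered the chain. A small sketch with invented numbers:

```python
import math

def update(mean, var, est, se):
    """Conjugate normal update for theta given an estimate and its SE."""
    prec = 1.0 / var + 1.0 / se ** 2
    new_var = 1.0 / prec
    return new_var * (mean / var + est / se ** 2), new_var

chain = [(0.30, 0.10), (0.18, 0.08)]   # studies in the chain (invented)
outside = (0.22, 0.15)                 # a study outside the chain (invented)

# Multiply the outside study's likelihood into the current chain posterior.
mean, var = 0.0, 100.0  # vague initial prior
for est, se in chain:
    mean, var = update(mean, var, est, se)
mean_a, var_a = update(mean, var, *outside)

# Identical to the posterior we would get had the outside study
# been part of the chain from the start.
mean_b, var_b = 0.0, 100.0
for est, se in [outside] + chain:
    mean_b, var_b = update(mean_b, var_b, est, se)

assert math.isclose(mean_a, mean_b) and math.isclose(var_a, var_b)
print(mean_a, var_a)
```

The order-invariance is what makes this approach preferable to inventing a normal prior by hand: the unchained study contributes exactly its likelihood, no more and no less.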
Further Comments and Discussion
Is Bayesianism Needed for Bayesian-style Updating?
Although posterior passing may seem like a highly Bayesian process, it can easily be
converted to a pure likelihood procedure by simply ignoring the initial prior
distribution. For instance, using the fixed effects procedure:

L(\theta; x_1, \ldots, x_i) \propto L(\theta; x_1, \ldots, x_{i-1}) \, f(x_i; \theta) \quad (11)

The interpretation of the resulting function is that it is simply the likelihood for \theta across all the
studies in the chain. Frequentists can then derive results from (11) via standard trial sequential
methods (cf. Wetterslev, Jakobsen, & Gluud, 2017), and likelihoodists can construct support
intervals. The use of a statistic likelihood serves as a counterpart to the use by Brand et al.
(2019) of a normal approximation to the posterior distribution, which approximates the full-data
likelihood by a Gaussian function when the statistic is a maximum likelihood estimate. Hence, the utility
remains even without a Bayesian framework.
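As an illustration, with normal statistic likelihoods the cumulative likelihood across a chain is summarized by a precision-weighted mean and its standard error, from which a Wald interval follows; the studies below are invented, and a full trial sequential analysis would additionally adjust the interval for repeated looks at the accumulating data.

```python
import math

# Combined likelihood across the chain: with normal statistic likelihoods,
# the product is itself normal in theta, summarized by a precision-weighted
# mean and its standard error -- no prior distribution involved.
studies = [(0.30, 0.10), (0.18, 0.08), (0.25, 0.12)]  # invented (est, SE)

prec = sum(1.0 / se ** 2 for _, se in studies)
mle = sum(est / se ** 2 for est, se in studies) / prec
se_comb = math.sqrt(1.0 / prec)

# An unadjusted frequentist 95% Wald interval from the cumulative likelihood.
lo, hi = mle - 1.96 * se_comb, mle + 1.96 * se_comb
print(round(mle, 3), (round(lo, 3), round(hi, 3)))
```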
Is Posterior Passing a Practical Replacement for Meta-analysis?
Brand et al. (2019) suggest that posterior passing can replace traditional meta-analyses.
Indeed, the improved posterior passing procedures introduced in the previous section can
compete with traditional meta-analyses, but is it practical? To match the quality of traditional
meta-analyses, one would have to meet the same conditions, including extensive literature search
and the inclusion of all available studies. Furthermore, most studies are currently done in a
frequentist manner, and splits can occur in a chain due to simultaneous studies, so in practice
each study in the chain will have to do its own mini meta-analysis. At that point, it might be
preferable to just do traditional meta-analyses.
Brand et al. (2019) also suggest that posterior passing can solve the problem of
conflicting meta-analyses by updating evidence in real time. Meta-analyses can provide
contradictory results due to a number of reasons, such as differing inclusion criteria and using
different methods. However, posterior passing only appears to mitigate differences that arise
from meta-analyses being conducted at different times. In this case, though, one would simply go with the
most recent meta-analysis. When multiple meta-analyses disagree, they often have differences in
statistical methodology or inclusion criteria. Replacing meta-analyses with posterior passing
would likely result in multiple posterior passing chains to reflect this disagreement. In fact, for
any criticism of a meta-analysis, one could seemingly make an equivalent criticism of a posterior
passing chain. Instead of having disagreeing meta-analyses, we would instead simply have a
number of individual studies in disagreement. One could argue that posterior passing could help
mitigate disagreements of this nature by the fact that inclusion criteria could change with any
study in the chain. However, having such fuzzy inclusion criteria is clearly undesirable as it
would lead to results with an unclear interpretation. Hence, posterior passing does not appear to
avoid the problem of conflicting meta-analyses in a desirable manner.
Alternative Roles for Posterior Passing
Even if posterior passing cannot generally replace traditional meta-analyses, it may
nonetheless be useful. With the improvements suggested above, posterior passing can replace
traditional meta-analysis in areas where meta-analyses are unlikely to produce conflicting results
in the first place. In this scenario, the advantage of posterior passing is that we get live updates
to the field’s knowledge with each new study. An alternative to creating posterior passing chains
that still utilizes the posterior passing mechanism is to use it within meta-analyses. This does not
solve the issue of conflicting meta-analyses, but it has practical advantages. By using the posterior
distribution of the last similar meta-analysis as a prior distribution, meta-analyses can be
performed in chunks instead of having to redo the entire meta-analysis with each update.
Similarly, instead of using posterior passing chains, studies can use posterior distributions from
meta-analyses as their priors to get accurate net effect estimates within each study. This allows
for broader conclusions than would otherwise be warranted by the study alone. Hence, although
posterior passing may have problems as a replacement for meta-analysis, it can still have utility
in these alternative roles.
References
Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2010). A basic introduction to
fixed-effect and random-effects models for meta-analysis. Research Synthesis
Methods, 1(2), 97-111. doi:10.1002/jrsm.12
Brand, C. O., Ounsley, J. P., van der Post, D. J., & Morgan, T. J. H. (2019). Cumulative science
via Bayesian posterior passing. Meta-Psychology, 3. doi:10.15626/MP.2017.840
Martin, S. (2017). Open Peer Review by Stephen Martin. Decision Letter for Brand et al.
McShane, B. B., & Gal, D. (2017). Statistical significance and the dichotomization of
evidence. Journal of the American Statistical Association, 112(519), 885-895.
Pfeffermann, D., Krieger, A. M., & Rinott, Y. (1998). Parametric distributions of complex survey
data under informative probability sampling. Statistica Sinica, 1087-1114.
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014). p-Curve and effect size: Correcting for
publication bias using only significant results. Perspectives on Psychological Science,
Sutton, A. J., Song, F., Gilbody, S. M., & Abrams, K. R. (2000). Modelling publication bias in
meta-analysis: a review. Statistical Methods in Medical Research, 9(5), 421–
Wetterslev, J., Jakobsen, J. C., & Gluud, C. (2017). Trial Sequential Analysis in systematic
reviews with meta-analysis. BMC medical research methodology, 17(1), 39. doi:10.1186/