Article

Abstract

A Bayesian model may be relied on to the extent of its adequacy by minimizing the posterior expected loss raised to the power of a discounting exponent. The resulting action is minimax under broad conditions when the sample size is held fixed and the discounting exponent is infinite. On the other hand, for any finite discounting exponent, the action is Bayes when the sample size is sufficiently large. Thus, the action is Bayes when there is enough reliable information in the posterior distribution, is minimax when the posterior distribution is completely unreliable, and is a continuous blend of the two extremes otherwise.
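To make the criterion concrete, here is a minimal numerical sketch, not taken from the paper: it assumes the criterion is the posterior expectation of the loss raised to the power of the discounting exponent, evaluated over a grid of actions with a squared-error loss and an illustrative four-point posterior. With the exponent equal to 1 the minimizer is the usual Bayes action (the posterior mean), while a very large exponent pushes the minimizer toward the action with the smallest worst-case loss over the posterior's support.

```python
import numpy as np

# Illustrative four-point posterior over a scalar parameter (not from the paper).
theta = np.array([0.0, 1.0, 2.0, 5.0])
posterior = np.array([0.25, 0.25, 0.25, 0.25])

def loss(theta, action):
    return (theta - action) ** 2  # squared-error loss, assumed for illustration

def discounted_action(delta, actions=np.linspace(0.0, 5.0, 501)):
    """Return the action minimizing the posterior expectation of loss**delta."""
    risks = [np.sum(posterior * loss(theta, a) ** delta) for a in actions]
    return actions[int(np.argmin(risks))]

print(discounted_action(1.0))   # 2.0: the Bayes action (posterior mean)
print(discounted_action(50.0))  # 2.5: close to the minimax action over the posterior's support
```

On this reading, the discounting exponent interpolates between the Bayes action and the minimax action, as the abstract describes.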


... which is derived from a decision-theoretic method of moderating posterior distributions, where ∆ ≥ 1 is the degree of discounting (Bickel, 2017, Example 1). The case of no discounting (∆ = 1) then results in P^⋆_∆(y^(1)) = P(y^(1)). ...
Preprint
The probability distributions that statistical methods use to represent uncertainty fail to capture all of the uncertainty that may be relevant to decision making. A simple way to adjust probability distributions for the uncertainty not represented in their models is to average the distributions with a uniform distribution or another distribution of maximum uncertainty. A decision-theoretic framework leads to averaging the distributions by taking the means of the logit transforms of the probabilities. Unlike taking the means of the probabilities themselves, that method does not prevent convergence to the truth. The mean-logit approach to moderating distributions is applied to natural language processing performed by a deep neural network.
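As a rough sketch of the mean-logit idea (the function and the renormalization step are illustrative, not taken from the paper), a probability vector can be blended with the uniform distribution by averaging the two vectors' logit transforms and mapping back to the probability scale:

```python
import numpy as np

def logit(p):
    return np.log(p) - np.log1p(-p)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def moderate(probs):
    """Blend a probability vector with the uniform (maximum-uncertainty)
    distribution by taking the mean of their logit transforms."""
    probs = np.asarray(probs, dtype=float)
    uniform = np.full_like(probs, 1.0 / probs.size)
    blended = expit(0.5 * (logit(probs) + logit(uniform)))
    return blended / blended.sum()  # renormalization is an added assumption

print(moderate([0.98, 0.01, 0.01]))  # roughly [0.86, 0.07, 0.07]: pulled toward uniform
```

An overconfident 0.98 is pulled toward the uniform value of 1/3 on the logit scale, which moderates it without collapsing it all the way to the mean of the probabilities themselves.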
Article
Confidence intervals of divergence times and branch lengths do not reflect uncertainty about their clades or about the prior distributions and other model assumptions on which they are based. Uncertainty about the clade may be propagated to a confidence interval by multiplying its confidence level by the bootstrap proportion of the clade or by another probability that the clade is correct. (If the confidence level is 95% and the bootstrap proportion is 90%, then the uncertainty-adjusted confidence level is (0.95)(0.90) = 0.855, or about 86%.) Uncertainty about the model can be propagated to the confidence interval by reporting the union of the confidence intervals from all the plausible models. Unless there is no overlap between the confidence intervals, that results in an uncertainty-adjusted interval whose lower and upper limits are the most extreme limits across the models. The proposed methods of uncertainty quantification may be used together.
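A minimal sketch of the two adjustments described above; the function names and the interval endpoints are illustrative, not from the paper:

```python
def adjusted_confidence_level(confidence_level, clade_probability):
    """Multiply the confidence level by the probability (e.g., the bootstrap
    proportion) that the clade is correct."""
    return confidence_level * clade_probability

def union_interval(intervals):
    """Take the union of the confidence intervals from all plausible models;
    this equals (min lower limit, max upper limit) when the intervals overlap."""
    lowers, uppers = zip(*intervals)
    return min(lowers), max(uppers)

print(adjusted_confidence_level(0.95, 0.90))         # 0.855, about 86%
print(union_interval([(10.2, 12.5), (11.0, 13.1)]))  # (10.2, 13.1)
```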
Article
The probability distributions that statistical methods use to represent uncertainty fail to capture all of the uncertainty that may be relevant to decision making. A simple way to adjust probability distributions for the uncertainty not represented in their models is to average the distributions with a uniform distribution or another distribution of maximum uncertainty. A decision-theoretic framework leads to averaging the distributions by taking the means of the logit transforms of the probabilities. Unlike taking the means of the probabilities themselves, that method does not prevent convergence to the truth. The mean-logit approach to moderating distributions is applied to natural language processing performed by a deep neural network.
Article
We propose to change the default P-value threshold for statistical significance for claims of new discoveries from 0.05 to 0.005.
Article
We describe a range of routine statistical problems in which marginal posterior distributions derived from improper prior measures are found to have an unBayesian property—one that could not occur if proper prior measures were employed. This paradoxical possibility is shown to have several facets that can be successfully analysed in the framework of a general group structure. The results cast a shadow on the uncritical use of improper prior measures. A separate examination of a particular application of Fraser's structural theory shows that it is intrinsically paradoxical under marginalization.
Article
A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
Article
The following zero-sum game between nature and a statistician blends Bayesian methods with frequentist methods such as p-values and confidence intervals. Nature chooses a posterior distribution consistent with a set of possible priors. At the same time, the statistician selects a parameter distribution for inference with the goal of maximizing the minimum Kullback-Leibler information gained over a confidence distribution or other benchmark distribution. An application to testing a simple null hypothesis leads the statistician to report a posterior probability of the hypothesis that is informed by both Bayesian and frequentist methodology, each weighted according to how well the prior is known. Since neither the Bayesian approach nor the frequentist approach is entirely satisfactory in situations involving partial knowledge of the prior distribution, the proposed procedure reduces to a Bayesian method given complete knowledge of the prior, to a frequentist method given complete ignorance about the prior, and to a blend between the two methods given partial knowledge of the prior. The blended approach resembles the Bayesian method rather than the frequentist method to the precise extent that the prior is known. The problem of testing a point null hypothesis illustrates the proposed framework. The blended probability that the null hypothesis is true is equal to the p-value or a lower bound of an unknown Bayesian posterior probability, whichever is greater. Thus, given total ignorance represented by a lower bound of 0, the p-value is used instead of any Bayesian posterior probability. At the opposite extreme of a known prior, the p-value is ignored. In the intermediate case, the possible Bayesian posterior probability that is closest to the p-value is used for inference. Thus, both the Bayesian method and the frequentist method influence the inferences made.
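A minimal sketch of the reported probability described in this abstract, assuming the possible Bayesian posterior probabilities of the null hypothesis form an interval; the function name and the example numbers are illustrative, not from the paper:

```python
def blended_null_probability(p_value, lower_bound, upper_bound=1.0):
    """Return the possible posterior probability of the null hypothesis that is
    closest to the p-value: the p-value clamped to [lower_bound, upper_bound]."""
    return min(max(p_value, lower_bound), upper_bound)

print(blended_null_probability(0.03, 0.0))         # 0.03: total ignorance, the p-value is reported
print(blended_null_probability(0.03, 0.2))         # 0.2: partial knowledge, the lower bound binds
print(blended_null_probability(0.03, 0.35, 0.35))  # 0.35: known prior, the p-value is ignored
```

With an upper bound of 1, this reduces to "the p-value or the lower bound, whichever is greater," matching the rule stated in the abstract.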