David R. Bickel
University of North Carolina at Greensboro | UNCG · Informatics and Analytics

PhD
davidbickel.com

About

148 Publications
7,500 Reads
1,234 Citations
Introduction
  • More research: http://www.davidbickel.com/
  • Tweets: https://twitter.com/DavidRBickel
  • Publons: https://publons.com/researcher/4845687/david-bickel/
  • unquantified uncertainty
  • calibrated null hypothesis significance testing
  • evidence & likelihood
  • confidence distributions & other fiducial distributions
  • empirical Bayes
  • foundations of statistics
  • complexity
Additional affiliations
June 2007 - July 2020
University of Ottawa
Position
  • Professor (Associate)
Description
  • Contributions by topic: www.davidbickel.com
April 2004 - May 2007
DuPont Pioneer
Position
  • Researcher
Description
  • Confidential
April 2001 - March 2004
Medical College of Georgia
Position
  • Professor (Assistant)
Description
  • Older publications: www.davidbickel.com

Publications

Publications (148)
Article
Typical statistical methods of data analysis only handle determinate uncertainty, the type of uncertainty that can be modeled under the Bayesian or confidence theories of inference. An example of indeterminate uncertainty is uncertainty about whether the Bayesian theory or the frequentist theory is better suited to the problem at hand. Another exam...
Article
Confidence intervals of divergence times and branch lengths do not reflect uncertainty about their clades or about the prior distributions and other model assumptions on which they are based. Uncertainty about the clade may be propagated to a confidence interval by multiplying its confidence level by the bootstrap proportion of its clade or by anot...
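The propagation described above amounts to a simple product; a minimal sketch in Python, with the function name and example numbers purely illustrative:

def propagate_clade_uncertainty(confidence_level, bootstrap_proportion):
    """Downweight the confidence level of a divergence-time or branch-length
    interval by the bootstrap proportion of its clade (illustrative sketch)."""
    return confidence_level * bootstrap_proportion

# Example: a 95% interval whose clade has 80% bootstrap support
print(propagate_clade_uncertainty(0.95, 0.80))  # 0.76, an effective 76% level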
Article
A Bayesian model has two parts. The first part is a family of sampling distributions that could have generated the data. The second part of a Bayesian model is a prior distribution over the sampling distributions. Both the diagnostics used to check the model and the process of updating a failed model are widely thought to violate the standard found...
Preprint
Confidence intervals of divergence times and branch lengths do not reflect uncertainty about their clades or about the prior distributions and other model assumptions on which they are based. Uncertainty about the clade may be propagated to a confidence interval by multiplying its confidence level by the bootstrap proportion of its clade or by anot...
Preprint
Null hypothesis significance testing is generalized by controlling the Type I error rate conditional on the existence of a non-empty confidence interval. The control of that conditional error rate results in corrected p-values called c-values. A further generalization from point null hypotheses to composite hypotheses generates C-values. The framew...
Presentation
https://bityl.co/7EEH has the slides and description.
Article
Much of the blame for failed attempts to replicate reports of scientific findings has been placed on ubiquitous and persistent misinterpretations of the p value. An increasingly popular solution is to transform a two-sided p value to a lower bound on a Bayes factor. Another solution is to interpret a one-sided p value as an approximate posterior pr...
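One widely used calibration of this kind is the Sellke-Bayarri-Berger bound, -e*p*ln(p) for p < 1/e, a lower bound on the Bayes factor in favor of the null hypothesis; the sketch below illustrates that bound and is not necessarily the transformation discussed in the article:

import math

def bayes_factor_lower_bound(p):
    """Sellke-Bayarri-Berger lower bound on the Bayes factor in favor of the
    null hypothesis, valid for a two-sided p value below 1/e."""
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("the bound applies only for 0 < p < 1/e")
    return -math.e * p * math.log(p)

print(bayes_factor_lower_bound(0.05))  # about 0.41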
Article
Ensemble methods of machine learning combine neural networks or other machine learning models in order to improve predictive performance. The proposed ensemble method is based on Occam’s razor idealized as adjusting hyperprior distributions over models according to a Rényi entropy of the data distribution that corresponds to each model. The entropy...
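As a hedged illustration of the general idea, the sketch below weights candidate models by the Rényi entropy of their (discrete) data distributions; the order alpha, the exponential weighting rule, and all names are assumptions made for illustration, not the article's actual algorithm:

import numpy as np

def renyi_entropy(p, alpha=2.0):
    """Renyi entropy of order alpha for a discrete distribution p (alpha != 1)."""
    p = np.asarray(p, dtype=float)
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def entropy_based_weights(model_distributions, alpha=2.0):
    """Illustrative hyperprior weights favoring models whose data distributions
    have lower Renyi entropy, an Occam's-razor heuristic (assumed rule)."""
    entropies = np.array([renyi_entropy(p, alpha) for p in model_distributions])
    w = np.exp(-entropies)
    return w / w.sum()

# A peaked (simpler) distribution versus a uniform (more spread-out) one
print(entropy_based_weights([[0.7, 0.2, 0.1], [1/3, 1/3, 1/3]]))  # favors the first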
Article
The probability distributions that statistical methods use to represent uncertainty fail to capture all of the uncertainty that may be relevant to decision making. A simple way to adjust probability distributions for the uncertainty not represented in their models is to average the distributions with a uniform distribution or another distribution o...
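The averaging itself is a one-line mixture; a minimal sketch, assuming a discrete probability vector and an illustrative mixing weight:

import numpy as np

def blunt_distribution(probs, weight=0.5):
    """Average a probability vector with the uniform distribution to acknowledge
    uncertainty not represented in the model (illustrative mixing weight)."""
    probs = np.asarray(probs, dtype=float)
    uniform = np.full_like(probs, 1.0 / probs.size)
    return (1.0 - weight) * probs + weight * uniform

# An overconfident posterior over three hypotheses, softened toward uniform
print(blunt_distribution([0.95, 0.04, 0.01], weight=0.3))  # [0.765 0.128 0.107]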
Article
Consider a data set as a body of evidence that might confirm or disconfirm a hypothesis about a parameter value. If the posterior probability of the hypothesis is high enough, then the truth of the hypothesis is accepted for some purpose such as reporting a new discovery. In that way, the posterior probability measures the sufficiency of the eviden...
Article
Hypothesis tests are conducted not only to determine whether a null hypothesis (H0) is true but also to determine the direction or sign of an effect. A simple estimate of the posterior probability of a sign error is PSE = (1 - PH0) p/2 + PH0, depending only on a two-sided p value and PH0, an estimate of the posterior probability of H0. A convenient...
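The estimate quoted in the abstract can be computed directly; a minimal sketch:

def posterior_sign_error(p_two_sided, prob_null):
    """PSE = (1 - PH0) * p/2 + PH0, an estimate of the posterior probability of
    a sign error from a two-sided p value and an estimate PH0 of P(H0)."""
    return (1.0 - prob_null) * p_two_sided / 2.0 + prob_null

print(posterior_sign_error(0.04, 0.5))  # 0.51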
Article
While empirical Bayes methods thrive in the presence of the thousands of simultaneous hypothesis tests in genomics and other large-scale applications, significance tests and confidence intervals are considered more appropriate for small numbers of tested hypotheses. Indeed, for fewer hypotheses, there is more uncertainty in empirical Bayes estimate...
Article
A Bayesian model may be relied on to the extent of its adequacy by minimizing the posterior expected loss raised to the power of a discounting exponent. The resulting action is minimax under broad conditions when the sample size is held fixed and the discounting exponent is infinite. On the other hand, for any finite discounting exponent, the actio...
Preprint
Full-text available
Much of the blame for failed attempts to replicate reports of scientific findings has been placed on ubiquitous and persistent misinterpretations of the p value. An increasingly popular solution is to transform a two-sided p value to a lower bound on a Bayes factor. Another solution is to interpret a one-sided p value as an approximate posterior pr...
Article
Bayesian models use posterior predictive distributions to quantify the uncertainty of their predictions. Similarly, the point predictions of neural networks and other machine learning algorithms may be converted to predictive distributions by various bootstrap methods. The predictive performance of each algorithm can then be assessed by quantifying...
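One common way to make such a conversion is a residual bootstrap around the point prediction; the sketch below is a generic illustration of that approach, not necessarily one of the bootstrap methods compared in the article:

import numpy as np

def residual_bootstrap_predictive(point_pred, residuals, n_draws=1000, seed=0):
    """Turn a point prediction into an approximate predictive distribution by
    resampling held-out residuals (generic illustrative bootstrap)."""
    rng = np.random.default_rng(seed)
    return point_pred + rng.choice(residuals, size=n_draws, replace=True)

# Residuals from a validation set, then a predictive sample for a new case
residuals = np.array([-1.2, 0.3, 0.8, -0.5, 1.1, -0.2])
draws = residual_bootstrap_predictive(point_pred=10.0, residuals=residuals)
print(draws.mean(), np.quantile(draws, [0.05, 0.95]))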
Preprint
Full-text available
The probability distributions that statistical methods use to represent uncertainty fail to capture all of the uncertainty that may be relevant to decision making. A simple way to adjust probability distributions for the uncertainty not represented in their models is to average the distributions with a uniform distribution or another distribution o...
Preprint
Full-text available
Concepts from multiple testing can improve tests of single hypotheses. The proposed definition of the calibrated p value is an estimate of the local false sign rate, the posterior probability that the direction of the estimated effect is incorrect. Interpreting one-sided p values as estimates of conditional posterior probabilities, that calibrated...
Preprint
Full-text available
Ensemble methods of machine learning combine neural networks or other machine learning models in order to improve predictive performance. The proposed ensemble method is based on Occam's razor idealized as adjusting hyperprior distributions over models according to a Rényi entropy of the data distribution that corresponds to each model. The entropy...
Article
Significance testing is often criticized because p values can be low even though posterior probabilities of the null hypothesis are not low according to some Bayesian models. Those models, however, would assign low prior probabilities to the observation that the p value is sufficiently low. That conflict between the models and the data may indi...
Article
In Bayesian statistics, if the distribution of the data is unknown, then each plausible distribution of the data is indexed by a parameter value, and the prior distribution of the parameter is specified. To the extent that more complicated data distributions tend to require more coincidences for their construction than simpler data distributions, d...
Preprint
Full-text available
Bayesian models use posterior predictive distributions to quantify the uncertainty of their predictions. Similarly, the point predictions of neural networks and other machine learning algorithms may be converted to predictive distributions by various bootstrap methods. The predictive performance of each algorithm can then be assessed by quantifying...
Research
Full-text available
Review of Fraser, D. A. S. (3-TRNT-S) On evolution of statistical inference. (English summary) J. Stat. Theory Appl. 17 (2018), no. 2, 193–205. 62A01
Research
Full-text available
Review of MR3839887 Davies, Laurie (D-DUES2M) On P-values. (English summary) Statist. Sinica 28 (2018), no. 4, part 2, 2823–2840. 62A01 (62F03 62F15)
Book
Statisticians have met the need to test hundreds or thousands of genomics hypotheses simultaneously with novel empirical Bayes methods that combine advantages of traditional Bayesian and frequentist statistics. Techniques for estimating the local false discovery rate assign probabilities of differential gene expression, genetic association, etc. wi...
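For orientation, the local false discovery rate in the standard two-groups model is lfdr(z) = pi0 * f0(z) / f(z); a minimal sketch with normal densities, where the plug-in values of pi0 and the alternative spread are illustrative rather than estimated:

from scipy.stats import norm

def local_fdr(z, pi0=0.9, alt_sd=3.0):
    """Two-groups local false discovery rate for a z-statistic:
    lfdr(z) = pi0 * f0(z) / f(z), with a N(0, 1) null and an
    illustrative N(0, alt_sd^2) alternative component."""
    f0 = norm.pdf(z, 0.0, 1.0)
    f = pi0 * f0 + (1.0 - pi0) * norm.pdf(z, 0.0, alt_sd)
    return pi0 * f0 / f

print(local_fdr(3.0))  # posterior probability that a z of 3 is from the null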
Article
According to the general law of likelihood, the strength of statistical evidence for a hypothesis as opposed to its alternative is the ratio of their likelihoods, each maximized over the parameter of interest. Consider the problem of assessing the weight of evidence for each of several hypotheses. Under a realistic model with a free parameter for e...
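A minimal sketch of that ratio for a binomial success probability, with each composite hypothesis maximized over its own region (the hypotheses and data are illustrative):

from scipy.optimize import minimize_scalar
from scipy.stats import binom

def max_likelihood(successes, trials, lo, hi):
    """Maximize the binomial likelihood of theta over the interval [lo, hi]."""
    res = minimize_scalar(lambda t: -binom.pmf(successes, trials, t),
                          bounds=(lo, hi), method="bounded")
    return -res.fun

# Evidence for theta > 0.5 over theta <= 0.5 after 7 successes in 10 trials
numerator = max_likelihood(7, 10, 0.5, 1.0)
denominator = max_likelihood(7, 10, 0.0, 0.5)
print(numerator / denominator)  # about 2.3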
Article
Confidence sets, p values, maximum likelihood estimates, and other results of non-Bayesian statistical methods may be adjusted to favor sampling distributions that are simple compared to others in the parametric family. The adjustments are derived from a prior likelihood function previously used to adjust posterior distributions.
Article
The way false discovery rates (FDRs) are used in the analysis of genomics data leads to excessive false positive rates. In this sense, FDRs overcorrect for the excessive conservatism (bias toward false negatives) of methods of adjusting p values that control a family-wise error rate. Estimators of the local FDR (LFDR) are much less biased but have...
Presentation
Full-text available
Typical statistical methods of data analysis only handle determinate uncertainty, the type of uncertainty that can be modeled under the Bayesian or confidence theories of inference. An example of indeterminate uncertainty is uncertainty about whether the Bayesian theory or the frequentist theory is better suited to the problem at hand. Another exam...
Research
Full-text available
Dr. Truthlove or: how I learned to stop worrying and love Bayesian probabilities. Noûs 50 (2016), no. 4, 816-853. The author argues that certain probability distributions guarantee the satisfaction of a coherence requirement of rationality. The doxastic state of a human agent as opposed to an artificial agent assigns B to every proposition that is...
Research
Full-text available
This article introduces a frequentist formalism for combining the forecasts of multiple experts who base their forecasts on information from different sources. The authors use the formalism to provide a theoretical justification for the method of reporting a combined value that is between an average value and the extreme that is closest to the aver...
Research
Full-text available
MR3616661 62-02 60A05 62A01 62F15 Evans, Michael [Evans, Michael John] (3-TRNT-NDM) Measuring statistical evidence using relative belief. Monographs on Statistics and Applied Probability, 144. CRC Press, Boca Raton, FL, 2015. xvii+232 pp. ISBN 978-1-4822-4279-9 This creative book by Michael Evans describes not only a new way to measure the strength...
Research
Full-text available
MR3704899 62A01 Dawid, Philip [Dawid, Alexander Philip] (4-CAMB-NDM) On individual risk. (English summary) Synthese 194 (2017), no. 9, 3445-3474. What does it mean when your computer program says there is a 20% chance of rain tomorrow? Is it the proportion of times it rained in the past under similar conditions? How should you quantify the performa...
Research
Full-text available
On some principles of statistical inference. (English summary) Int. Stat. Rev. 83 (2015), no. 2, 293-308. Reid and Cox bear the standard of a broad Fisherian school of frequentist statistics embracing not only time-tested confidence intervals and p values derived from parametric models, perfected by higher-order asymptotics, but also such developm...
Research
Full-text available
The author, a chief architect of the theory of large deviations, chronicles several manifestations of entropy. It made appearances in the realms indicated by these section headings:
  • Entropy and information theory
  • Entropy and dynamical systems
  • Relative entropy and large deviations
  • Entropy and duality
  • Log Sobolev inequality
  • Gibbs states
  • ...
Research
Full-text available
MR3439649 94A17 62B10 62C10 Kelbert, M. [Kelbert, Mark Ya.] (RS-HSE-SAA); Mozgunov, P. [Mozgunov, Pavel] (RS-HSE-SAA) Asymptotic behaviour of the weighted Renyi, Tsallis and Fisher entropies in a Bayesian problem. (English summary) Eurasian Math. J. 6 (2015), no. 2, 6-17. This paper considers a weighted version of the differential entropy of the po...
Article
Frequentist methods, without the coherence guarantees of fully Bayesian methods, are known to yield self-contradictory inferences in certain settings. The framework introduced in this paper provides a simple adjustment to p values and confidence sets to ensure the mutual consistency of all inferences without sacrificing frequentist validity. Based...
Article
Occam's razor suggests assigning more prior probability to a hypothesis corresponding to a simpler distribution of data than to a hypothesis with a more complex distribution of data, other things equal. An idealization of Occam's razor in terms of the entropy of the data distributions tends to favor the null hypothesis over the alternative hypothes...
Article
Full-text available
Methods of estimating the local false discovery rate (LFDR) have been applied to different types of datasets such as high-throughput biological data, diffusion tensor imaging (DTI), and genome-wide association (GWA) studies. We present a model for LFDR estimation that incorporates a covariate into each test. Incorporating the covariates may improve...
Data
The proof of Theorem 1 proceeds by a series of lemmas. (PDF)
Article
In a genome-wide association study (GWAS), the probability that a single nucleotide polymorphism (SNP) is not associated with a disease is its local false discovery rate (LFDR). The LFDR for each SNP is relative to a reference class of SNPs. For example, the LFDR of an exonic SNP can vary widely depending on whether it is considered relative to the...
Presentation
Full-text available
Confidence sets, p values, and maximum likelihood estimates may be adjusted to favor sampling distributions that are simple compared to others in the parametric family. The adjustments are derived from a prior likelihood function previously used to adjust posterior distributions.
Research
Full-text available
Efficient estimation of the mode of continuous multivariate data. (English summary) Comput. Statist. Data Anal. 63 (2013), 148-159. To estimate the mode of a unimodal multivariate distribution, the authors propose the following algorithm. First, the data are transformed to become approximately multivariate normal by means of a transformation determ...
Article
When competing interests seek to influence a decision maker, a scientist must report a posterior probability or a Bayes factor among those consistent with the evidence. The disinterested scientist seeks to report the value that is least controversial in the sense that it is best protected from being discredited by one of the competing interests. If...
Research
Full-text available
Integrated likelihood in a finitely additive setting. (English summary) Symbolic and quantitative approaches to reasoning with uncertainty, 554-565, Lecture Notes in Comput. Sci., 5590, Lecture Notes in Artificial Intelligence, Springer, Berlin, 2009. For an observed sample of data, the likelihood function specifies the probability or probability d...
Article
Learning from model diagnostics that a prior distribution must be replaced by one that conflicts less with the data raises the question of which prior should instead be used for inference and decision. The same problem arises when a decision maker learns that one or more reliable experts express unexpected beliefs. In both cases, coherence of the s...
Article
Full-text available
The maximum entropy (ME) method is a recently-developed approach for estimating local false discovery rates (LFDR) that incorporates external information allowing assignment of a subset of tests to a category with a different prior probability of following the null hypothesis. Using this ME method, we have reanalyzed the findings from a recent larg...
Data
Scatter plot of the LFDR-ME estimates by minor allele frequency and the decrease in LFDR estimates using the ME method, when using the Enhancer Hoffman annotation. (TIF)
Data
Scatter plot of the LFDR-ME estimates by minor allele frequency and the decrease in LFDR estimates using the ME method, when using the Fetal DHS annotation. (TIF)
Data
Local false discovery rate estimates using the maximum entropy method for nine annotation categories. Columns include the SNP id (legendrs), chromosome (chr), position (pos), minor allele frequency (maf), slope coefficient (beta) and p-value (p_dgc) for association with CAD from the consortium, z-squared (z_sq), and then various LFDR estimates. The...
Article
Just as frequentist hypothesis tests have been developed to check model assumptions, prior predictive p values and other Bayesian p values check prior distributions as well as other model assumptions. These model checks not only suffer from the usual threshold dependence of p values but also from the suppression of model uncertainty in subsequent i...
Article
Empirical Bayes estimates of the local false discovery rate can reflect uncertainty about the estimated prior by supplementing their Bayesian posterior probabilities with confidence levels as posterior probabilities. This use of coherent fiducial inference with hierarchical models generates set estimators that propagate uncertainty to varying degre...
Article
Two major approaches have developed within Bayesian statistics to address uncertainty in the prior distribution and in the rest of the model. First, methods of model checking, including those assessing prior-data conflict, determine whether the posterior resulting from the model is adequate for purposes of inference and estimation or other decision...
Article
The reasoning behind uses of confidence intervals and p-values in scientific practice may be made coherent by modeling the inferring statistician or scientist as an idealized intelligent agent. With other things equal, such an agent regards a hypothesis coinciding with a confidence interval of a higher confidence level as more certain than a hypoth...
Article
The likelihood ratio measures the statistical evidence, relative to a null hypothesis, for a simple alternative hypothesis. There is no direct equivalent in the setting of composite hypotheses. We show how, by treating the parameter of interest as a random quantity, it is nevertheless possible to assess the statistical evid...
Article
Full-text available
Exercise substantially improves metabolic health, making the elicited mechanisms important targets for novel therapeutic strategies. Uncoupling protein 3 (UCP3) is a mitochondrial inner membrane protein highly selectively expressed in skeletal muscle. Here we report that moderate UCP3 overexpression (roughly 3-fold) in muscles of UCP3 transgenic (U...
Article
Multiple comparison procedures that control a family-wise error rate or false discovery rate provide an achieved error rate as the adjusted p-value or q-value for each hypothesis tested. However, since achieved error rates are not understood as probabilities that the null hypotheses are true, empirical Bayes methods have been employed to e...
Article
Full-text available
Background In investigating differentially expressed genes or other selected features, researchers conduct hypothesis tests to determine which biological categories, such as those of the Gene Ontology (GO), are enriched for the selected features. Multiple comparison procedures (MCPs) are commonly used to prevent excessive false positive rates. Trad...
Article
Many genome-wide association studies have been conducted to identify single nucleotide polymorphisms (SNPs) that are associated with particular diseases or other traits. The local false discovery rate (LFDR) estimated using semiparametric models has enjoyed success in simultaneous inference. However, semiparametric LFDR estimators can be biased bec...
Article
In the typical analysis of a data set, a single method is selected for statistical reporting even when equally applicable methods yield very different results. Examples of equally applicable methods can correspond to those of different ancillary statistics in frequentist inference and of different prior distributions in Bayesian inference. More bro...
Article
A general function to quantify the weight of evidence in a sample of data for one hypothesis over another is derived from the law of likelihood and from a statistical formalization of inference to the best explanation. For a fixed parameter of interest, the resulting weight of evidence that favors one composite hypothesis over another is the likeli...
Article
Full-text available
By representing fair betting odds according to one or more pairs of confidence set estimators, dual parameter distributions called confidence posteriors secure the coherence of actions without any prior distribution. This theory reduces to the maximization of expected utility when the pair of posteriors is induced by an exact or approximate confide...
Article
Problems involving thousands of null hypotheses have been addressed by estimating the local false discovery rate (LFDR). A previous LFDR approach to reporting point and interval estimates of an effect-size parameter uses an estimate of the prior distribution of the parameter conditional on the alternative hypothesis. That estimated prior is often u...
Article
Full-text available
Histogram-based empirical Bayes methods developed for analyzing data for large numbers of genes, SNPs, or other biological features tend to have large biases when applied to data with a smaller number of features such as genes with expression measured conventionally, proteins, and metabolites. To analyze such small-scale and medium-scale d...
Article
The normalized maximum likelihood (NML) is a recent penalized likelihood that has properties that justify defining the amount of discrimination information (DI) in the data supporting an alternative hypothesis over a null hypothesis as the logarithm of an NML ratio, namely, the alternative hypothesis NML divided by the null hypothesis NML. The resu...
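A hedged sketch of these quantities for a Bernoulli sample: the normalized maximum likelihood divides the maximized likelihood of the observed count by its sum over all possible counts, and the discrimination information is the log of the ratio of the alternative's NML to the null's NML (the parameter grid and the hypotheses below are illustrative):

import math

def nml(successes, trials, thetas):
    """Normalized maximum likelihood of the observed binomial count under a
    hypothesis represented by a grid of candidate success probabilities."""
    def max_lik(k):
        return max(math.comb(trials, k) * t**k * (1 - t)**(trials - k)
                   for t in thetas)
    return max_lik(successes) / sum(max_lik(k) for k in range(trials + 1))

# Discrimination information: log of the alternative NML over the null NML
trials, successes = 10, 9
alternative = nml(successes, trials, [i / 100 for i in range(101)])
null = nml(successes, trials, [0.5])
print(math.log(alternative / null))  # positive values favor the alternative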
Article
In statistical practice, whether a Bayesian or frequentist approach is used in inference depends not only on the availability of prior information but also on the attitude taken toward partial prior information, with frequentists tending to be more cautious than Bayesians. The proposed framework defines that attitude in terms of a specified amount...
Article
The following zero-sum game between nature and a statistician blends Bayesian methods with frequentist methods such as p-values and confidence intervals. Nature chooses a posterior distribution consistent with a set of possible priors. At the same time, the statistician selects a parameter distribution for inference with the goal of maximizing the...

Questions

Questions (3)
Question
Among CRC, Oxford University Press, and Springer, which has the best reputation for publishing credible books?
Which has the worst?
Question
I was wondering what to do with reprints, the kind printed on paper.
Any ideas?

Network

Cited By

Projects

Projects (20)
Project
Manage uncertainty not represented by probability distributions
Project
Develop statistical methods from a "modal point of view" (Chacón, 2020). Chacón, J. E. (2020) The Modal Age of Statistics. International Statistical Review, https://doi.org/10.1111/insr.12340. https://bit.ly/2J96F7V
Project
Create models of evolution as working hypotheses for the analysis of DNA and protein sequence data. Develop methods of error propagation to quantify the uncertainty in the results of such data analysis.