Project

Foundations of probability and statistics ✅

Goal: Review literature related to the foundations of probability and statistics.

Older reviews in HTML:
https://davidbickel.com/category/contributions/reviews/

The dates of posting to ResearchGate are given. These Mathematical Reviews are posted with permission from the American Mathematical Society.

Project statistics
Updates: 0 (0 new)
Recommendations: 0 (0 new)
Followers: 3 (0 new)
Reads: 114 (0 new)

Project log

David R. Bickel
added 2 research items
Review of MR3839887 Davies, Laurie(D-DUES2M) On P-values. (English summary) Statist. Sinica 28 (2018), no. 4, part 2, 2823–2840. 62A01 (62F03 62F15)
Review of Fraser, D. A. S.(3-TRNT-S) On evolution of statistical inference. (English summary) J. Stat. Theory Appl. 17 (2018), no. 2, 193–205. 62A01
David R. Bickel
added 6 research items
MR3704899 62A01 Dawid, Philip [Dawid, Alexander Philip] (4-CAMB-NDM) On individual risk. (English summary) Synthese 194 (2017), no. 9, 3445–3474.
What does it mean when your computer program says there is a 20% chance of rain tomorrow? Is it the proportion of times it rained in the past under similar conditions? How should you quantify the performance of the program's history of forecasting a 20% probability of rain the next day? One answer is to determine how close the relative frequency of rain among those tomorrows is to 20%. This paper makes a case for assessing personal or subjective probabilities by their agreement with relative frequencies of events observed after the probabilities are reported. The thesis is argued in the final two sections after thoroughly reviewing background concepts in the first seven sections. A unique aspect of the coverage of philosophical theories of probability is the recurring tension between probabilities of individual events and probabilities of classes of events. The paper surveys enumerative probability, formal probability, and metaphysical probability alongside the usual classical probability, frequency probability, personal probability, propensity or chance, and logical probability. Personal probabilities are presented as requiring assessment, in part because basing them on relative frequencies that are already observed necessarily makes them relative to a reference class, the selection of which is problematic in complex systems. The tentative nature of the paper's conclusions is fitting given the strength of arguments that proper scoring rules provide more informative assessments of reported probabilities than do observed relative frequencies [see, e.g., F. Lad, Operational subjective statistical methods, Wiley Ser. Probab. Statist., Wiley, New York, 1996].
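As a concrete illustration of the assessment question the review raises (not code from the paper), the sketch below compares stated 20% rain forecasts with the observed relative frequency of rain on those days and also computes the Brier score, a proper scoring rule of the kind alluded to in the closing remark. The forecasts and outcomes are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stated probabilities of rain and simulated outcomes (1 = rain).
forecasts = rng.choice([0.1, 0.2, 0.5, 0.8], size=1000)
outcomes = rng.binomial(1, forecasts)  # simulate a well-calibrated forecaster

# Frequency-based assessment: among days with a 20% forecast, how often did it rain?
mask = forecasts == 0.2
print("relative frequency of rain given a 20% forecast:", outcomes[mask].mean())

# Proper-scoring-rule assessment: the Brier score (lower is better) rewards
# reporting one's actual probability rather than a hedged value.
print("Brier score:", np.mean((forecasts - outcomes) ** 2))
```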
David R. Bickel
added a research item
Efficient estimation of the mode of continuous multivariate data. (English summary) Comput. Statist. Data Anal. 63 (2013), 148–159.
To estimate the mode of a unimodal multivariate distribution, the authors propose the following algorithm. First, the data are transformed to become approximately multivariate normal by means of a transformation determined by maximum likelihood estimation (MLE) of a transformation parameter jointly with the parameters of the multivariate normal distribution. Second, the resulting inverse transformation is applied to the MLE multivariate normal density function, yielding an estimate of the probability density function on the space of the original data. Third, the point at which that density function achieves its maximum is taken as the estimate of the multivariate mode. The paper features a theorem reporting the weak consistency of the estimator under the lognormality of the data. The authors cite several papers indicating the need for such multivariate mode estimation in applications. They illustrate the practical use of their estimator by applying it to climatology and handwriting data sets. Simulations indicate a large variety of distributions and dependence structures under which the proposed estimator performs substantially better than its competitors. An exception is the case of contamination with data from a distribution that has a different mode than the mode that is the target of inference.
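The following Python sketch illustrates the general idea of the three-step algorithm described above, not the authors' implementation: each coordinate is Box-Cox transformed with its parameter chosen by maximum likelihood, a multivariate normal is fitted on the transformed scale, and the mode on the original scale is found by maximizing the back-transformed density. Fitting the transformation per coordinate rather than jointly with the normal parameters is a simplification made here for brevity.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
# Simulated positive-valued bivariate data (roughly lognormal).
x = np.exp(rng.multivariate_normal([0.0, 0.5], [[0.2, 0.05], [0.05, 0.1]], size=500))

# Step 1: transform each coordinate toward normality (Box-Cox, lambda by MLE).
z = np.empty_like(x)
lams = np.empty(x.shape[1])
for j in range(x.shape[1]):
    z[:, j], lams[j] = stats.boxcox(x[:, j])

# Step 2: fit a multivariate normal on the transformed scale.
mvn = stats.multivariate_normal(z.mean(axis=0), np.cov(z, rowvar=False))

# Density on the original scale = normal density at the transformed point
# times the Jacobian of the Box-Cox map, prod_j x_j**(lambda_j - 1).
def neg_density(xx):
    zz = (xx ** lams - 1.0) / lams          # assumes each lambda_j != 0
    return -mvn.pdf(zz) * np.prod(xx ** (lams - 1.0))

# Step 3: the estimated mode maximizes that density.
res = optimize.minimize(neg_density, x0=np.median(x, axis=0),
                        bounds=[(1e-6, None)] * x.shape[1])
print("estimated multivariate mode:", res.x)
```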
David R. Bickel
added a research item
Integrated likelihood in a finitely additive setting. (English summary) Symbolic and quantitative approaches to reasoning with uncertainty, 554–565, Lecture Notes in Comput. Sci., 5590, Lecture Notes in Artificial Intelligence, Springer, Berlin, 2009.
For an observed sample of data, the likelihood function specifies the probability or probability density of that observation as a function of the parameter value. Since each simple hypothesis corresponds to a single parameter value, the likelihood of any simple hypothesis is an uncontroversial function of the data and the model. However, there is no standard definition of the likelihood of a composite hypothesis, which instead corresponds to multiple parameter values. Such a definition could be useful not only for quantifying the strength of statistical evidence in favor of composite hypotheses that are faced in both science and law, but also for likelihood-based measures of corroboration and of explanatory power for epistemological research involving Popper's critical rationalism or recent accounts of inference to the best explanation.
Interpreting the likelihood function under the coherence framework of de Finetti, this paper mathematically formulates the problem by defining the likelihood of a simple or composite hypothesis as a subjective probability of the observed data conditional on the truth of the hypothesis. In the probability theory of this framework, conditional probabilities given a hypothesis or event of probability zero are well defined, even for finite parameter sets. That differs from the familiar probability measures that Kolmogorov introduced for frequency-type probabilities, under which, in the finite case, an event can have zero probability mass only if it cannot occur. (The latter but not the former agrees in spirit with Cournot's principle that an event of infinitesimally small probability is physically impossible.) Thus, in the de Finetti framework, the likelihood function assigns a conditional probability to each simple hypothesis, whether or not its probability is zero. When the parameter set is finite, every coherent conditional probability of a sample of discrete data given a composite hypothesis is a weighted arithmetic mean of the conditional probabilities of the simple hypotheses that together constitute the composite hypothesis. In other words, the coherence constraint requires that the likelihood of a composite hypothesis be a linear combination of the likelihoods of its constituent simple hypotheses. Important special cases include the maximum and the minimum of the likelihood over the parameter set. They are made possible in the non-Kolmogorov framework by assigning zero probability to all of the simple hypotheses except those of maximum or minimum likelihood. The main result of the paper extends this result to infinite parameter sets. In general, the likelihood of a composite hypothesis is a mixture of the likelihoods of its component simple hypotheses. {For the collection containing this paper see MR2907743}
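To make the coherence constraint concrete, the toy Python sketch below (not taken from the paper) uses a binomial model with a finite parameter grid. Any coherent likelihood of the composite hypothesis θ ∈ H is a weighted mean of the simple-hypothesis likelihoods, and putting all the weight on the best- or worst-supported value in H recovers the maximum or minimum likelihood as special cases.

```python
import numpy as np
from scipy.stats import binom

# Observed data: 7 successes in 10 trials; finite grid of simple hypotheses.
n, k = 10, 7
thetas = np.linspace(0.05, 0.95, 19)
lik = binom.pmf(k, n, thetas)          # likelihood of each simple hypothesis

# Composite hypothesis H: theta >= 0.5.
H = thetas >= 0.5

# A coherent likelihood of H is a weighted arithmetic mean of the
# likelihoods of the simple hypotheses that make up H.
equal_weights = np.ones(H.sum()) / H.sum()
print("equal-weight likelihood of H:", lik[H] @ equal_weights)

# Degenerate weight vectors recover the maximum and minimum as special cases.
print("maximum likelihood over H:   ", lik[H].max())
print("minimum likelihood over H:   ", lik[H].min())
```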
David R. Bickel
added a project goal
Review literature related to the foundations of probability and statistics.