D.A. Sprott’s research while affiliated with University of Waterloo and other places


Publications (21)


Marginal and Conditional Sufficiency
  • Article
December 1975 · 18 Reads · 48 Citations · Biometrika
D. A. Sprott

The concepts of sufficiency and ancillarity are extended to marginal and conditional sufficiency in the presence of nuisance parameters. It is required that under the assumption of the absence of knowledge of the nuisance parameter β no information shall be lost in basing inferences about Θ on a statistic that is marginally or conditionally sufficient for Θ. The approach is based on the likelihood function and relates the two previously studied apparently dissimilar structures of (a) the factorization of the likelihood and (b) group sufficiency. Some examples are given and the final example illustrates a possible loss of information entailed by the use of statistics that are not marginally or conditionally sufficient.
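A minimal sketch of the factorization idea behind marginal sufficiency (the notation t, L_m, L_c is introduced here for exposition and is not taken from the paper): if a statistic t = t(x) is marginally sufficient for the parameter of interest, the likelihood splits into a factor depending only on that parameter, on which inference is based, and a remaining conditional factor involving the nuisance parameter:

```latex
L(\theta, \beta;\, x) \;\propto\; L_{m}(\theta;\, t)\,\times\, L_{c}(\beta;\, x \mid t)
```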


Statistical analysis of data bearing on the number of particles required to form a plaque

September 1974 · 8 Reads · 4 Citations · Journal of Hygiene

Methods of statistical analysis are presented for one or more dilution series experiments where the quantity of interest is the number of virus particles required to infect a cell. These methods are illustrated on several data sets drawn from the literature. Data from seven series, which have been used to support a two-particle model in the literature, are here shown to reject such a model decisively, whereas fifteen other experiments are found to be in excellent agreement with a one-particle model.
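A hedged numerical sketch of the kind of model comparison described above, not the paper's actual method or data: plaque counts at relative dose v are assumed Poisson with mean c·v^k, with k = 1 for a one-particle and k = 2 for a two-particle model, and the maximized log-likelihoods are compared.

```python
# Hedged sketch (not the paper's method or data): compare a one-particle and a
# two-particle model for plaque counts in a dilution series, assuming the count
# at relative dose v is Poisson with mean c * v**k (k = 1 or 2).
import math

def poisson_loglik(counts, doses, c, k):
    """Log-likelihood of Poisson counts with mean c * v**k at relative dose v."""
    ll = 0.0
    for y, v in zip(counts, doses):
        mu = c * v ** k
        ll += y * math.log(mu) - mu - math.lgamma(y + 1)
    return ll

def mle_c(counts, doses, k):
    """Closed-form MLE of c for fixed k: sum of counts / sum of v**k."""
    return sum(counts) / sum(v ** k for v in doses)

# Illustrative (made-up) data: relative doses and observed plaque counts.
doses = [1.0, 0.5, 0.25, 0.125]
counts = [160, 83, 37, 21]

for k in (1, 2):
    c_hat = mle_c(counts, doses, k)
    print(f"{k}-particle model: c_hat = {c_hat:.1f}, "
          f"max log-likelihood = {poisson_loglik(counts, doses, c_hat, k):.2f}")
```

Under a one-particle model the counts fall roughly in proportion to the dose, which is what the illustrative data mimic, so the k = 1 fit attains the higher maximized log-likelihood.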


Inferences about Hit Number in a Virological Model

March 1974 · 6 Reads · 15 Citations · Biometrics

Alling [1971] considers a dilution series model arising in virology and discusses the estimation of a parameter h called the hit number. His analysis is based on asymptotic properties of various estimators. The present paper develops other methods for making inferences about h. Tests for the homogeneity of data from several series and for the goodness of fit of the model are also considered. The analysis is based upon the use of sufficient and ancillary statistics to factor the joint likelihood into parts appropriate for the various types of problem considered, and is illustrative of a general approach to problems of inference.
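A hedged sketch of inference about a hit number h in this spirit (the binomial dilution model, the grid search over the dose-rate parameter, and the data below are illustrative assumptions, not taken from the paper):

```python
# Hedged sketch (illustrative model and data, not the paper's): likelihood for
# the hit number h in a dilution series, assuming a culture at relative dose v
# is infected with probability P(Poisson(lam * v) >= h), and that r of n
# cultures are infected at each dose.
import math

def p_infect(lam, v, h):
    """Probability that at least h particles reach a cell at mean dose lam * v."""
    mu = lam * v
    tail = 1.0 - sum(math.exp(-mu) * mu ** j / math.factorial(j) for j in range(h))
    return min(max(tail, 1e-12), 1.0 - 1e-12)

def loglik(h, lam, data):
    """Binomial log-likelihood over all dilutions; data = [(v, r, n), ...]."""
    return sum(r * math.log(p_infect(lam, v, h))
               + (n - r) * math.log(1.0 - p_infect(lam, v, h))
               for v, r, n in data)

# Illustrative data: (relative dose, infected cultures, cultures tested).
data = [(1.0, 19, 20), (0.3, 12, 20), (0.1, 5, 20), (0.03, 1, 20)]

for h in (1, 2, 3):
    # Crude profiling of lam on a grid; a real analysis would optimise properly.
    best = max(loglik(h, lam, data) for lam in (0.1 * i for i in range(1, 400)))
    print(f"h = {h}: profile log-likelihood = {best:.2f}")
```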


On the Logic of Tests of Significance with Special Reference to Testing the Significance of Poisson-Distributed Observations

January 1974 · 5 Reads · 6 Citations

Some logical aspects of tests of significance are illustrated using the example of equality of means of Poisson-distributed observations. Specifically, in the conventional test for the significance of a difference between two Poisson-distributed observations, the significance level is computed from the conditional distribution given their observed total. However, an unconditional test has greater power and so is sometimes advocated in place of the conditional test. The present paper argues that power considerations are not relevant in choosing between conditional and unconditional tests, and that the conditional test is the appropriate one. The example is extended to include tests of equality of means of two Poisson samples (pointing out an error in a formula that is sometimes used) and also to include tests concerning the ratio of Poisson means. There is a general discussion of significance tests with reference to the above examples.
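A minimal sketch of the conditional test described above (counts are illustrative): given Poisson counts x1 and x2 with equal means, conditioning on the total n = x1 + x2 makes x1 Binomial(n, 1/2), and an exact two-sided p-value follows.

```python
# Hedged sketch of the conditional test: under equal Poisson means, x1 given
# n = x1 + x2 is Binomial(n, 1/2); the exact two-sided p-value sums the
# probabilities of all outcomes no more probable than the one observed.
from math import comb

def conditional_poisson_test(x1, x2):
    n = x1 + x2
    pmf = [comb(n, k) * 0.5 ** n for k in range(n + 1)]
    p_obs = pmf[x1]
    return sum(p for p in pmf if p <= p_obs + 1e-12)

print(conditional_poisson_test(10, 3))  # illustrative counts 10 and 3
```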




A model to account for mortality curves of various species

December 1970 · 6 Reads · 8 Citations · Journal of Theoretical Biology

The probability function $P(T) = \theta\, e^{-\theta t/ns}\,(1 - e^{-\theta t/ns})^{n-1}\,[1 - (1 - e^{-\theta t/ns})^{n}]^{s-1}$, where θ is associated with the number of random “hits” per unit time, s is taken as 2000 and n as 2x, can simulate mortality curves for various species. This function is derived by assuming that critical gene sets exist which comprise the corresponding loci on each of the n strands, where each strand may correspond to a double helix. The number of genes on each strand is assumed to be, on average, 2000.
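A small sketch that evaluates the survival curve implied by the formula above, S(t) = [1 − (1 − e^{−θt/ns})^n]^s, whose negative time-derivative is P(T); the parameter values below are illustrative choices, not fitted values from the paper.

```python
# Hedged sketch: evaluate the survival curve S(t) = [1 - (1 - exp(-theta*t/(n*s)))**n]**s
# and the density P(T), which is its negative derivative. Parameter values are
# illustrative only, chosen so that deaths concentrate around t of roughly 80.
import math

def survival(t, theta, n, s):
    q = 1.0 - math.exp(-theta * t / (n * s))   # P(a given locus is hit by time t)
    return (1.0 - q ** n) ** s                 # no gene set has all n loci hit

def density(t, theta, n, s):
    q = 1.0 - math.exp(-theta * t / (n * s))
    return theta * math.exp(-theta * t / (n * s)) * q ** (n - 1) * (1.0 - q ** n) ** (s - 1)

theta, n, s = 16.0, 4, 2000                    # illustrative, not fitted values
for t in (20, 40, 60, 80, 100):
    print(t, round(survival(t, theta, n, s), 3), round(density(t, theta, n, s), 5))
```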


Application of Likelihood Methods to Models Involving Large Numbers of Parameters

July 1970 · 14 Reads · 291 Citations · Journal of the Royal Statistical Society Series B (Methodological)

Likelihood methods of dealing with some multiparameter problems are introduced and exemplified. Specifically, methods of eliminating nuisance parameters from the likelihood function so that inferences can be made about the parameters of interest are considered. In this regard integrated likelihoods, maximum relative likelihoods, conditional likelihoods, marginal likelihoods and second‐order likelihoods are introduced and their uses illustrated in examples. Marginal and conditional likelihoods are dependent upon factorings of the likelihood function. They are applied to the linear functional relationship and to related models and are found to give intuitively appealing results. These methods indicate that in many situations commonly encountered objective methods of eliminating unwanted parameters from the likelihood function can be adopted. This gives an alternative method of interpreting multiparameter likelihoods to that offered by the Bayesian approach.
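A hedged sketch of one of the simplest instances of nuisance-parameter elimination mentioned above (an illustration, not an example worked in the paper): for a normal sample with the variance σ² of interest and the mean μ as nuisance parameter, compare the profile (maximum relative) likelihood with the marginal likelihood based on S².

```python
# Hedged sketch: eliminate the nuisance mean mu when sigma^2 is of interest in
# a normal sample, comparing the profile likelihood with the marginal
# likelihood based on S^2. Data and grid are illustrative.
import math

def relative_likelihoods(xs, sigma2_grid):
    n = len(xs)
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)            # (n - 1) * s^2
    prof = [-0.5 * n * math.log(s2) - ss / (2 * s2) for s2 in sigma2_grid]
    marg = [-0.5 * (n - 1) * math.log(s2) - ss / (2 * s2) for s2 in sigma2_grid]
    norm = lambda lls: [math.exp(l - max(lls)) for l in lls]   # scale so max = 1
    return norm(prof), norm(marg)

xs = [4.1, 5.3, 3.8, 6.0, 5.5, 4.7]                  # illustrative data
grid = [0.1 * k for k in range(2, 41)]               # sigma^2 from 0.2 to 4.0
prof, marg = relative_likelihoods(xs, grid)
for s2, rp, rm in zip(grid, prof, marg):
    if round(10 * s2) % 5 == 0:                      # print every 0.5
        print(f"sigma^2 = {s2:.1f}: profile {rp:.3f}, marginal {rm:.3f}")
```

The two curves peak at slightly different values (SS/n versus SS/(n−1)), which is the kind of discrepancy between methods of eliminating nuisance parameters that the paper examines in more substantial models.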



Examples of Likelihoods and Comparison with Point Estimates and Large Sample Approximations

June 1969 · 13 Reads · 61 Citations

The exact likelihood functions are examined for several examples of data chosen from the literature. These are compared with the likelihoods arising from the large sample approximations and with point estimates that were actually used in the literature. It is concluded that large sample approximations (application of standard maximum likelihood theory) can be misleading for inferences and should be checked against the actual likelihood functions. Similarly, point estimates can be misleading or uninformative, and their properties such as bias, variance, etc. are relatively unimportant. Because of the availability of high-speed computers, exact methods and asymptotic comparisons are now feasible, and this should be reflected where possible in the theory and application of statistical inference.
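A hedged sketch of the kind of comparison the abstract describes: the exact log relative likelihood of a binomial proportion set against its large-sample quadratic approximation at the MLE, for a small illustrative sample where the two visibly differ.

```python
# Hedged sketch (not from the paper): exact log relative likelihood of a
# binomial proportion p versus the quadratic approximation based on the
# observed information at the MLE.
import math

y, n = 2, 10                                   # illustrative data: 2 successes in 10
p_hat = y / n
obs_info = n / (p_hat * (1 - p_hat))           # observed information at the MLE

def exact_logrel(p):
    return y * math.log(p / p_hat) + (n - y) * math.log((1 - p) / (1 - p_hat))

def quad_logrel(p):
    return -0.5 * obs_info * (p - p_hat) ** 2

for p in (0.05, 0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"p = {p:.2f}: exact {exact_logrel(p):7.3f}, quadratic {quad_logrel(p):7.3f}")
```

The asymmetry of the exact curve around the MLE is what the quadratic approximation misses in small samples, which is the point the paper illustrates on real data sets.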


Citations (12)


... Bayesian inference is a technique for updating knowledge about the parameters of a model based on new information. This process is related to the dataset D and a probabilistic model for a given distribution L(D|θ), known as the likelihood function [54,55], conditioned by the knowledge of the free parameter set θ. Our understanding of θ is quantified by the prior distribution, f (θ). These functions are connected through Bayes' theorem: ...

Reference:

Superstatistics Applied to Cucurbitaceae DNA Sequences
Application of Likelihood Methods to Models Involving Large Numbers of Parameters
  • Citing Article
  • July 1970

Journal of the Royal Statistical Society Series B (Methodological)
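For reference, the relation the excerpt above trails off into is the standard statement of Bayes' theorem (a textbook identity, not quoted from the citing paper):

```latex
f(\theta \mid D) \;=\; \frac{L(D \mid \theta)\, f(\theta)}{\int L(D \mid \theta')\, f(\theta')\, \mathrm{d}\theta'}
```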

... The importance of the likelihood function has been emphasized for at least 40 years by Fisher (1922, 1925, 1934, 1946), and more recently specific examples of its use have been given (Barnard, Jenkins & Winston, 1962; Birnbaum, 1962; Fisher, 1955). In fact there is a "likelihood school" of statistics which says that all the information of the sample relative to the population under consideration is contained within the likelihood function and inferences should be based on it (Barnard et al., 1962; Birnbaum, 1962; Fisher, 1955; Sprott, 1961). The Bayesians (Edwards, Lindman, & Savage, 1963) also emphasize the importance of the likelihood function, but the use they make of it is different from that in this paper. ...

Similarities between Likelihoods and Associated Distributions a Posteriori
  • Citing Article
  • July 1961

Journal of the Royal Statistical Society Series B (Methodological)

... where $f_c(\bar{x} \mid s^2; \mu, \sigma^2)$ is the conditional density of $\bar{X}$ given $S^2 = s^2$, which is the density of $N(\mu, \sigma^2/n)$, and $f_m(s^2; \sigma^2)$ is the marginal density of $S^2$, which is the density of $\frac{\sigma^2}{n-1}\chi^2_{n-1}$. Kalbfleisch and Sprott (1973) suggested that $f_m(s^2; \sigma^2)$ should be used to obtain inference for $\sigma^2$ because it only involves $\sigma^2$. This agrees with the standard approach that inference for $\sigma^2$ is based on the $\chi^2_{n-1}$ distribution. ...

Marginal and Conditional Likelihood
  • Citing Article
  • January 1973

... As pointed out by Rao and continually emphasized by Fisher, different situations require different approaches depending on what is known. Sprott (1967) concludes that the fiducial argument, in the cases to which it applies, produces objective probability measures of uncertainty arising solely from the observations at hand. In particular, unwanted parameters can then be integrated out, and fiducial probability provides the only non-Bayesian method where this is possible. ...

Fiducial probability
  • Citing Article
  • June 1967

Statistische Hefte

... Surprisingly, only very few authors have studied extensions of the likelihood approach to cover decision making. Besides the author (Cattaneo, 2005, 2007), only Lehmann and Romano (2005, Section 1.7, substantially unchanged since the first edition in 1959), Diehl and Sprott (1965), and Giang and Shenoy (2005) seem to have worked in this direction. However, the latter three approaches are not directly applicable to general statistical decision problems in the sense of Wald (1950), and their properties have not been investigated. ...

Die Likelihoodfunktion und ihre Verwendung beim statistischen Schluß
  • Citing Article
  • December 1965

Statistische Hefte

... A number of authors - [4], [8], [16] - criticized Fisher's fiducial approach and presented inconsistent or seemingly paradoxical results of his theory; other authors defended Fisher's position, e.g. [18], or set up a theoretical framework that restricts its application, e.g. [11], which actually seems the best way to go on. ...

Statistical estimation — Some approaches and controversies
  • Citing Article
  • December 1965

Statistische Hefte

... Conditional, marginal, and these orthogonalizable likelihoods are not always available, but one can always calculate a profile likelihood. The profile likelihood maximizes the likelihood function with respect to the nuisance parameter for each fixed value of the parameter of interest (Kalbfleisch & Sprott, 1970), where $L(\theta) = \max_{\gamma} L(\theta, \gamma) = L(\theta, \hat{\gamma}(\theta))$. ...

Application of Likelihood Methods to Models Involving Large Numbers of Parameters
  • Citing Article
  • September 1972

Journal of the Royal Statistical Society Series B (Methodological)
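A hedged sketch of computing the profile likelihood defined in the excerpt above: for each fixed value of the parameter of interest, maximise the likelihood numerically over the nuisance parameter. The normal model (theta the mean of interest, gamma the standard deviation as nuisance) and the data are illustrative assumptions, not taken from the cited work.

```python
# Hedged sketch: numerical profile likelihood L(theta) = max over gamma of
# L(theta, gamma), for a normal model with mean theta and standard deviation
# gamma as nuisance parameter. Data are illustrative.
import math
from scipy.optimize import minimize_scalar

data = [9.8, 11.2, 10.5, 12.1, 9.4, 10.9]

def negloglik(theta, gamma):
    return sum(math.log(gamma) + 0.5 * ((x - theta) / gamma) ** 2 for x in data)

def profile_loglik(theta):
    # Maximise over the nuisance parameter gamma for this fixed theta.
    res = minimize_scalar(lambda g: negloglik(theta, g),
                          bounds=(1e-3, 50.0), method="bounded")
    return -res.fun

for theta in (9.5, 10.0, 10.5, 11.0, 11.5):
    print(f"theta = {theta:.1f}: profile log-likelihood = {profile_loglik(theta):.3f}")
```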

... The problem of the elimination of nuisance parameters in statistical inference has a long history and remains a major issue. Proposals to deal with it include the marginalization of the likelihood function by integrating out the nuisance parameter ( [9][10][11]), the construction of partial likelihood functions ( [12][13][14], among others) and the consideration of conditional likelihood functions based on different notions of non-informativeness, sufficiency and ancillarity. Elimination of nuisance parameters and different notions of non-information have also been studied in more detail in [15][16][17][18], where, based on suitable statistics, the concepts of B, S and G non-information are presented. ...

Marginal and Conditional Sufficiency
  • Citing Article
  • December 1975

Biometrika

... Accelerated failure time (AFT) models were utilized to model survival over time following diagnosis (Supplementary Methods) [29]. Survival following diagnosis was modeled in relation to age group (younger-AYA, middle-AYA, and older-AYA) and time point (pre-ACA DCE vs post-ACA DCE), including an interaction between age group and time point, with adjustment for demographic covariates, race, sex, rurality, SES index, and (where available) lymphoma Ann Arbor stage. Differences in survival among age groups by time point and among time points by age group were assessed by contrasts, with Hommel-adjusted P-values. ...

Statistical analysis of data bearing on the number of particles required to form a plaque
  • Citing Article
  • September 1974

Journal of Hygiene