Anthony O'Hagan

The University of Sheffield, Sheffield, England, United Kingdom

Publications (58) · 91.06 Total Impact Points

  • Source
    ABSTRACT: We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease of use. We aim to provide guidance on analysing resource use and costs, focusing on randomised trials, although the methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods able to deal with one or two of the above data characteristics may be preferable, but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work.
    Health Economics 08/2011; 20(8):897-916. · 2.23 Impact Factor
  • Source
    Samer A Kharroubi, Anthony O'Hagan, John E Brazier
    ABSTRACT: Cost-effectiveness analysis of alternative medical treatments relies on having a measure of effectiveness, and many regard the quality adjusted life year (QALY) to be the current 'gold standard.' In order to compute QALYs, we require a suitable system for describing a person's health state, and a utility measure to value the quality of life associated with each possible state. There are a number of different health state descriptive systems, and we focus here on one known as the EQ-5D. Data for estimating utilities for different health states have a number of features that mean care is necessary in statistical modelling. There is interest in the extent to which valuations of health may differ between different countries and cultures, but few studies have compared preference values of health states obtained from different countries. This article applies a nonparametric model to estimate and compare EQ-5D health state valuation data obtained from two countries using Bayesian methods. The data are from the US and UK EQ-5D valuation studies, in which a sample of 42 states defined by the EQ-5D was valued by representative samples of the general population from each country using the time trade-off technique. We estimate a single utility function across both countries, using the data from both, which explicitly accounts for the differences between them. The article discusses the implications of these results for future applications of the EQ-5D and for further work in this field.
    Statistics in Medicine 03/2010; 29(15):1622-34. · 2.04 Impact Factor
  • A O'Hagan, M West
    edited by A. O'Hagan and M. West, 01/2010; Oxford University Press.
  • ABSTRACT: The development of a new drug is a major undertaking and it is important to consider carefully the key decisions in the development process. Decisions are made in the presence of uncertainty and outcomes such as the probability of successful drug registration depend on the clinical development programme. The Rheumatoid Arthritis Drug Development Model was developed to support key decisions for drugs in development for the treatment of rheumatoid arthritis. It is configured to simulate Phase 2b and 3 trials based on the efficacy of new drugs at the end of Phase 2a, evidence about the efficacy of existing treatments, and expert opinion regarding key safety criteria. The model evaluates the performance of different development programmes with respect to the duration of disease of the target population, Phase 2b and 3 sample sizes, the dose(s) of the experimental treatment, the choice of comparator, the duration of the Phase 2b clinical trial, the primary efficacy outcome and decision criteria for successfully passing Phases 2b and 3. It uses Bayesian clinical trial simulation to calculate the probability of successful drug registration based on the uncertainty about parameters of interest, thereby providing a more realistic assessment of the likely outcomes of individual trials and sequences of trials for the purpose of decision making. In this case study, the results show that, depending on the trial design, the new treatment has assurances of successful drug registration in the range 0.044-0.142 for an ACR20 outcome and 0.057-0.213 for an ACR50 outcome.
    Pharmaceutical Statistics 04/2009; 8(4):371-89. · 0.99 Impact Factor
  • Source
    ABSTRACT: Background: Risk sharing schemes represent an innovative and important approach to the problems of rationing and achieving cost-effectiveness in high cost or controversial health interventions. This study aimed to assess the feasibility of risk sharing schemes, looking at long term clinical outcomes, to determine the price at which high cost treatments would be acceptable to the NHS. Methods: This case study of the first NHS risk sharing scheme, a long term prospective cohort study of beta interferon and glatiramer acetate in multiple sclerosis (MS) patients in 71 specialist MS centres in UK NHS hospitals, recruited adults with relapsing forms of MS, meeting Association of British Neurologists (ABN) criteria for disease modifying therapy. Outcome measures were: success of recruitment and follow up over the first three years, analysis of baseline and initial follow up data and the prospect of estimating the long term cost-effectiveness of these treatments. Results: Centres consented 5560 patients. Of the 4240 patients who had been in the study for at least one year, annual review data were available for 3730 (88.0%). Of the patients who had been in the study for at least two years and three years, subsequent annual review data were available for 2055 (78.5%) and 265 (71.8%) patients respectively. Baseline characteristics and a small but statistically significant progression of disease were similar to those reported in previous pivotal studies. Conclusion: Successful recruitment, follow up and early data analysis suggest that risk sharing schemes should be able to deliver their objectives. However, important issues of analysis, and political and commercial conflicts of interest, still need to be addressed.
    BMC Neurology 02/2009; · 2.56 Impact Factor
  • Source
    ABSTRACT: Computer codes are used in scientific research to study and predict the behaviour of complex systems. Their run times often make uncertainty and sensitivity analyses impractical because of the thousands of runs that are conventionally required, so efficient techniques have been developed based on a statistical representation of the code. The approach is less straightforward for dynamic codes, which represent time-evolving systems. We develop a novel iterative system to build a statistical model of dynamic computer codes, which is demonstrated on a rainfall-runoff simulator. Copyright 2009, Oxford University Press.
    Biometrika 01/2009; 96(3):663-676. · 1.65 Impact Factor
  • Source
    Christopher McCabe, Karl Claxton, Anthony O'Hagan
    ABSTRACT: Pharmaceutical regulators and healthcare reimbursement authorities operate in different intellectual paradigms and adopt very different decision rules. As a result, drugs that have been licensed are often not available to all patients who could benefit because reimbursement authorities judge that the cost of therapies is greater than the health produced. This finding creates uncertainty for pharmaceutical companies planning their research and development investment, as licensing is no longer a guarantee of market access. In this study, we propose that it would be consistent with the objectives of pharmaceutical regulators to use the Net Benefit Framework of reimbursement authorities to identify those therapies that should be subject to priority review, that it is feasible to do so and that this would have several positive effects for patients, industry, and healthcare systems.
    International Journal of Technology Assessment in Health Care 02/2008; 24(2):140-5. · 1.55 Impact Factor
  • Source
    ABSTRACT: A crucial issue in the current global warming debate is the effect of vegetation and soils on carbon dioxide (CO2) concentrations in the atmosphere. Vegetation can extract CO2 through photosynthesis, but respiration, decay of soil organic matter and disturbance effects such as fire return it to the atmosphere. The balance of these processes is the net carbon flux. To estimate the biospheric carbon flux for England and Wales, we address the statistical problem of inference for the sum of multiple outputs from a complex deterministic computer code whose input parameters are uncertain. The code is a process model which simulates the carbon dynamics of vegetation and soils, including the amount of carbon that is stored as a result of photosynthesis and the amount that is returned to the atmosphere through respiration. The aggregation of outputs corresponding to multiple sites and types of vegetation in a region gives an estimate of the total carbon flux for that region over a period of time. Expert prior opinions are elicited for marginal uncertainty about the relevant input parameters and for correlations of inputs between sites. A Gaussian process model is used to build emulators of the multiple code outputs and Bayesian uncertainty analysis is then used to propagate uncertainty in the input parameters through to uncertainty on the aggregated output. Numerical results are presented for England and Wales in the year 2000. It is estimated that vegetation and soils in England and Wales constituted a net sink of 7.55 Mt C (1 Mt C = 10^12 g of carbon) in 2000, with standard deviation 0.56 Mt C resulting from the sources of uncertainty that are considered. Copyright 2008 Royal Statistical Society.
    Journal of the Royal Statistical Society Series A (Statistics in Society) 01/2008; 171(1):109-135. · 1.36 Impact Factor
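    The entry above rests on Gaussian process emulation: fit a cheap statistical surrogate to a small number of runs of an expensive simulator, then propagate input uncertainty through the surrogate. The sketch below is a generic illustration of that idea, not the paper's multi-output emulator of the carbon-dynamics model; the toy simulator, the design size and the Beta input priors are all hypothetical stand-ins.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def expensive_simulator(x):
    # Stand-in for a slow process model with two uncertain inputs.
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# A small design of simulator runs (in practice a space-filling design).
X_design = rng.uniform(0.0, 1.0, size=(30, 2))
y_design = expensive_simulator(X_design)

emulator = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[0.2, 0.2]),
    normalize_y=True,
).fit(X_design, y_design)

# Uncertainty analysis: sample inputs from their (hypothetical) priors and
# evaluate the cheap emulator instead of the expensive code.
X_prior = rng.beta(2.0, 2.0, size=(10_000, 2))
y_emulated = emulator.predict(X_prior)
print(f"output mean = {y_emulated.mean():.3f}, output sd = {y_emulated.std():.3f}")
```

    A fuller analysis would also propagate the emulator's own predictive variance, which the paper's Bayesian uncertainty analysis accounts for and this sketch ignores.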
  • Anthony O'Hagan, Matt Stevenson, Jason Madan
    ABSTRACT: Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
    Health Economics 11/2007; 16(10):1009-23. · 2.23 Impact Factor
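    The computational idea described above can be illustrated with a standard one-way random-effects ANOVA decomposition: with N outer parameter draws and n simulated patients per draw, the between-draw and within-draw mean squares separate parameter uncertainty from patient-level simulation noise. The sketch below shows that decomposition on a toy patient-level model; sample_parameters, run_patient and the willingness-to-pay threshold are hypothetical, and the formulae are the textbook ANOVA estimators rather than the authors' exact method.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_parameters():
    # Hypothetical PSA distributions for two uncertain model inputs.
    return {"p_event": rng.beta(20, 80), "cost_event": rng.gamma(4.0, 500.0)}

def run_patient(theta):
    # Hypothetical patient-level simulation returning one patient's net monetary benefit.
    event = rng.random() < theta["p_event"]
    cost = theta["cost_event"] if event else 0.0
    qaly = 0.8 - (0.3 if event else 0.0)
    return 20_000 * qaly - cost  # willingness to pay of 20,000 per QALY (hypothetical)

N, n = 200, 50  # outer parameter samples, patients simulated per sample
Y = np.empty((N, n))
for i in range(N):
    theta = sample_parameters()
    for j in range(n):
        Y[i, j] = run_patient(theta)

row_means = Y.mean(axis=1)
grand_mean = Y.mean()

# One-way ANOVA mean squares: between parameter samples vs. within (patient-level noise).
msb = n * np.sum((row_means - grand_mean) ** 2) / (N - 1)
msw = np.sum((Y - row_means[:, None]) ** 2) / (N * (n - 1))

var_params = max((msb - msw) / n, 0.0)  # variance attributable to parameter uncertainty
print(f"mean net benefit = {grand_mean:.0f}")
print(f"sd from parameter uncertainty = {np.sqrt(var_params):.0f}")
```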
  • Source
    ABSTRACT: This paper reports on the findings from applying a new approach to modelling health state valuation data. The approach applies a nonparametric model to estimate SF-6D health state utility values using Bayesian methods. The data set is the UK SF-6D valuation study where a sample of 249 states defined by the SF-6D (a derivative of the SF-36) was valued by a representative sample of the UK general population using standard gamble. The paper presents the results from applying the nonparametric model and comparing it to the original model estimated using a conventional parametric random effects model. The two models are compared theoretically and in terms of empirical performance. The paper discusses the implications of these results for future applications of the SF-6D and further work in this field.
    Journal of Health Economics 06/2007; 26(3):597-612. · 1.60 Impact Factor
  • Samer Kharroubi, John E Brazier, Anthony O'Hagan
    ABSTRACT: It has long been recognised that respondent characteristics can impact on the values they give to health states. This paper reports on the findings from applying a non-parametric approach to estimate the effects of covariates in a model of SF-6D health state values using Bayesian methods. The data set is the UK SF-6D valuation study, where a sample of 249 states defined by the SF-6D (a derivative of the SF-36) was valued by a sample of the UK general population using standard gamble. Advantages of the nonparametric model are that it can be used to predict scores in populations with different distributions of characteristics and that it allows the impact of covariates to vary by health state (whilst ensuring that full health passes through unity). The results suggest an important age effect, with sex, class, education, employment and physical functioning probably having some effect, but the remaining covariates having no discernible effect. Adjusting for covariates in the UK sample made little difference to mean health state values. The paper discusses the implications of these results for policy.
    Social Science & Medicine 04/2007; 64(6):1242-52. · 2.73 Impact Factor
  • Source
    ABSTRACT: Partial expected value of perfect information (EVPI) calculations can quantify the value of learning about particular subsets of uncertain parameters in decision models. Published case studies have used different computational approaches. This article examines the computation of partial EVPI estimates via Monte Carlo sampling algorithms. The mathematical definition shows 2 nested expectations, which must be evaluated separately because of the need to compute a maximum between them. A generalized Monte Carlo sampling algorithm uses nested simulation with an outer loop to sample parameters of interest and, conditional upon these, an inner loop to sample remaining uncertain parameters. Alternative computation methods and shortcut algorithms are discussed and mathematical conditions for their use considered. Maxima of Monte Carlo estimates of expectations are biased upward, and the authors show that the use of small samples results in biased EVPI estimates. Three case studies illustrate (1) the bias due to maximization and the inaccuracy of shortcut algorithms, (2) the inaccuracy of shortcut algorithms when correlated variables are present, and (3) their inaccuracy when there is nonlinearity in net benefit functions. If relatively small correlation or nonlinearity is present, then the shortcut algorithm can be substantially inaccurate. Empirical investigation of the numbers of Monte Carlo samples suggests that fewer samples on the outer level and more on the inner level could be efficient and that relatively small numbers of samples can sometimes be used. Several remaining areas for methodological development are set out. A wider application of partial EVPI is recommended both for greater understanding of decision uncertainty and for analyzing research priorities.
    Medical Decision Making 01/2007; 27(4):448-70. · 2.89 Impact Factor
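    The nested simulation scheme described above translates directly into code: an outer loop samples the parameters of interest, an inner loop samples the remaining parameters, and the maximum over decisions of the inner expectations is averaged over the outer loop. The sketch below uses a hypothetical two-decision net-benefit model; the functions, priors and sample sizes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)

def net_benefit(d, theta_i, theta_c):
    # Hypothetical net-benefit functions for two decision options.
    if d == 0:
        return 1000.0 + 0.0 * theta_i + 0.0 * theta_c
    return 800.0 + 300.0 * theta_i + 100.0 * theta_c

def sample_theta_i(size):
    # Parameter(s) of interest (hypothetical prior).
    return rng.normal(0.5, 0.3, size)

def sample_theta_c(size):
    # Remaining ("complementary") parameters (hypothetical prior).
    return rng.normal(0.0, 1.0, size)

decisions = (0, 1)
n_outer, n_inner = 1_000, 1_000

# Expected net benefit of the best decision under current information.
ti, tc = sample_theta_i(200_000), sample_theta_c(200_000)
enb_current = max(np.mean(net_benefit(d, ti, tc)) for d in decisions)

# Outer loop over the parameters of interest; inner loop over the rest.
outer = np.empty(n_outer)
for k in range(n_outer):
    ti_k = sample_theta_i(1)[0]
    tc_k = sample_theta_c(n_inner)
    outer[k] = max(np.mean(net_benefit(d, ti_k, tc_k)) for d in decisions)

print(f"partial EVPI for theta_i = {outer.mean() - enb_current:.1f}")
```

    As the abstract notes, a maximum of Monte Carlo estimates is biased upward, so too small an inner sample inflates the partial EVPI estimate.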
  • Source
    Jeremy E. Oakley, Anthony O'Hagan
    ABSTRACT: A key task in the elicitation of expert knowledge is to construct a distribution from the finite, and usually small, number of statements that have been elicited from the expert. These statements typically specify some quantiles or moments of the distribution. Such statements are not enough to identify the expert's probability distribution uniquely, and the usual approach is to fit some member of a convenient parametric family. There are two clear deficiencies in this solution. First, the expert's beliefs are forced to fit the parametric family. Secondly, no account is then taken of the many other possible distributions that might have fitted the elicited statements equally well. We present a nonparametric approach which tackles both of these deficiencies. We also consider the issue of the imprecision in the elicited probability judgements. Copyright 2007, Oxford University Press.
    Biometrika 01/2007; 94(2):427-441. · 1.65 Impact Factor
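    The "usual approach" that the paper identifies as deficient can be made concrete: given a handful of elicited quantiles, fit a member of a convenient parametric family by minimising the mismatch between its quantiles and the elicited ones. The sketch below does this for a hypothetical Beta family and hypothetical expert judgements; the paper's nonparametric (Gaussian process based) alternative is not reproduced here.

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical expert judgements: quartiles of an uncertain proportion.
elicited_probs = np.array([0.25, 0.50, 0.75])
elicited_quantiles = np.array([0.30, 0.40, 0.55])

def quantile_mismatch(log_params):
    a, b = np.exp(log_params)  # keep the Beta parameters positive
    return stats.beta.ppf(elicited_probs, a, b) - elicited_quantiles

fit = optimize.least_squares(quantile_mismatch, x0=np.log([2.0, 2.0]))
a, b = np.exp(fit.x)
print(f"fitted Beta({a:.2f}, {b:.2f})")
print("implied quartiles:", np.round(stats.beta.ppf(elicited_probs, a, b), 3))
```

    Three quantile statements do not determine a distribution uniquely, which is exactly the point the paper makes: many other distributions fit the same judgements equally well.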
  • Source
    ABSTRACT: see also www.hta.ac.uk
    Health technology assessment (Winchester, England) 08/2006; · 4.03 Impact Factor
  • Source
    ABSTRACT: In this paper we report the estimation of conditional logistic regression models for the Health Utilities Index Mark 2 and the SF-6D, using ordinal preference data. The results are compared to the conventional regression models estimated from standard gamble data, and to the observed mean standard gamble health state valuations. For both the HUI2 and the SF-6D, the models estimated using ordinal data are broadly comparable to the models estimated on standard gamble data and the predictive performance of these models is close to that of the standard gamble models. Our research indicates that ordinal data have the potential to provide useful insights into community health state preferences. However, important questions remain.
    Journal of Health Economics 06/2006; 25(3):418-31. · 1.60 Impact Factor
  • Source
    ABSTRACT: The process of calibrating radiocarbon determinations onto the calendar scale involves, as a first stage, the estimation of the relationship between calendar and radiocarbon ages (the radiocarbon calibration curve) from a set of available high-precision calibration data. Traditionally the radiocarbon calibration curve has been constructed by forming a weighted average of the data, and then taking the curve as the piece-wise linear function joining the resulting calibration data points. Alternative proposals for creating a calibration curve from the averaged data involve a spline or cubic interpolation, or the use of Fourier transformation and other filtering techniques, in order to obtain a smooth calibration curve. Between the various approaches, there is no consensus as to how to make use of the data in order to solve the problems related to the calibration of radiocarbon determinations.
    We propose a nonparametric Bayesian solution to the problem of the estimation of the radiocarbon calibration curve, based on a Gaussian process prior structure on the space of possible functions. Our approach is model-based, taking into account specific characteristics of the dating method, and provides a generic solution to the problem of estimating calibration curves for chronology building.
    We apply our method to the 1998 international high-precision calibration dataset, and demonstrate that our model predictions are well calibrated and have smaller variances than other methods. These data have deficiencies and complications that will only be unravelled with the publication of new data, expected in early 2005, but this analysis suggests that the nonparametric Bayesian model will allow more precise calibration of radiocarbon ages for archaeological specimens.
    Bayesian Analysis. 06/2006; 1(2).
  • Source
    ABSTRACT: Elicitation is the process of extracting expert knowledge about some unknown quantity or quantities, and formulating that information as a probability distribution. Elicitation is important in situations, such as modelling the safety of nuclear installations or assessing the risk of terrorist attacks, where expert knowledge is essentially the only source of good information. It also plays a major role in other contexts by augmenting scarce observational data, through the use of Bayesian statistical methods. However, elicitation is not a simple task, and practitioners need to be aware of a wide range of research findings in order to elicit expert judgements accurately and reliably. Uncertain Judgements introduces the area, before guiding the reader through the study of appropriate elicitation methods, illustrated by a variety of multi-disciplinary examples.
    01/2006;
  • ABSTRACT: We consider the problem of reconstructing prehistoric climates by using fossil data that have been extracted from lake sediment cores. Such reconstructions promise to provide one of the few ways to validate modern models of climate change. A hierarchical Bayesian modelling approach is presented and its use, inversely, is demonstrated in a relatively small but statistically challenging exercise: the reconstruction of prehistoric climate at Glendalough in Ireland from fossil pollen. This computationally intensive method extends current approaches by explicitly modelling uncertainty and reconstructing entire climate histories. The statistical issues that are raised relate to the use of compositional data (pollen) with covariates (climate) which are available at many modern sites but are missing for the fossil data. The compositional data arise as mixtures and the missing covariates have a temporal structure. Novel aspects of the analysis include a spatial process model for compositional data, local modelling of lattice data, the use, as a prior, of a random walk with long-tailed increments, a two-stage implementation of the Markov chain Monte Carlo approach and a fast approximate procedure for cross-validation in inverse problems. We present some details, contrasting its reconstructions with those generated by a method in use in the palaeoclimatology literature. We suggest that the method provides a basis for resolving important challenging issues in palaeoclimate research, and draw attention to several challenging statistical issues that need to be overcome.
    Journal of the Royal Statistical Society Series A (Statistics in Society) 01/2006; 169(3).
  • Source
    Samer A. Kharroubi, Anthony O'Hagan, John E. Brazier
    ABSTRACT:   A fundamental benefit that is conferred by medical treatments is to increase the health-related quality of life (HRQOL) that is experienced by patients. Various descriptive systems exist to define a patient's health state, and we address the problem of assigning an HRQOL value to any given state in such a descriptive system. Data derive from experiments in which individuals are asked to assign their personal values to various health states. We construct a Bayesian model that takes account of various important aspects of such data. Specifically, we allow for the repeated measures feature that each individual values several different states, and the fact that individuals vary markedly in their valuations, with some people consistently providing higher valuations than others. We model the relationship between HRQOL and health state nonparametrically. We illustrate our method by using data from an experiment in which 611 individuals each valued up to six states in the descriptive system known as the SF-6D system. Although the SF-6D system distinguishes 18000 different health states, only 249 of these were valued in this experiment. We provide posterior inference about the HRQOL values for all 18000 states.
    Journal of the Royal Statistical Society Series C Applied Statistics 10/2005; 54(5):879 - 895. · 1.25 Impact Factor
  • Source
    ABSTRACT: Conventional clinical trial design involves considerations of power, and sample size is typically chosen to achieve a desired power conditional on a specified treatment effect. In practice, there is considerable uncertainty about what the true underlying treatment effect may be, and so power does not give a good indication of the probability that the trial will demonstrate a positive outcome. Assurance is the unconditional probability that the trial will yield a ‘positive outcome’. A positive outcome usually means a statistically significant result, according to some standard frequentist significance test. The assurance is then the prior expectation of the power, averaged over the prior distribution for the unknown true treatment effect. We argue that assurance is an important measure of the practical utility of a proposed trial, and indeed that it will often be appropriate to choose the size of the sample (and perhaps other aspects of the design) to achieve a desired assurance, rather than to achieve a desired power conditional on an assumed treatment effect. We extend the theory of assurance to two-sided testing and equivalence trials. We also show that assurance is straightforward to compute in some simple problems of normal, binary and gamma distributed data, and that the method is not restricted to simple conjugate prior distributions for parameters. Several illustrations are given. Copyright © 2005 John Wiley & Sons, Ltd.
    Pharmaceutical Statistics 06/2005; 4(3):187 - 201. · 0.99 Impact Factor
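    Assurance, as defined above, is the prior expectation of power. For a two-arm trial with a normally distributed outcome and known standard deviation it can be approximated by averaging the conditional power over draws from the prior on the treatment effect, as in the sketch below. All numbers (sample size, standard deviation, prior) are hypothetical.

```python
import numpy as np
from scipy import stats

n_per_arm = 100          # hypothetical sample size per arm
sigma = 1.0              # assumed known standard deviation of the outcome
alpha = 0.05
z_crit = stats.norm.ppf(1.0 - alpha / 2.0)
se = sigma * np.sqrt(2.0 / n_per_arm)

def power(delta):
    # Power of the two-sided z-test for a true treatment effect delta.
    return stats.norm.cdf(delta / se - z_crit) + stats.norm.cdf(-delta / se - z_crit)

design_effect = 0.3                                         # effect assumed in a conventional power calculation
prior = np.random.default_rng(3).normal(0.3, 0.2, 100_000)  # hypothetical prior on the true effect

print(f"power at delta = {design_effect}: {power(design_effect):.3f}")
print(f"assurance (prior mean of power): {power(prior).mean():.3f}")
```

    In this illustration the assurance comes out a little below the conditional power, because the prior places weight on effects smaller than the assumed design value.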

Publication Stats

2k Citations
91.06 Total Impact Points

Institutions

  • 2000–2011
    • The University of Sheffield
      • School of Health and Related Research (ScHARR)
      • Centre of Bayesian Statistics in Health Economics (CHEBS)
      Sheffield, England, United Kingdom
  • 2008
    • University of Leeds
      Leeds, England, United Kingdom
  • 2007
    • The University of York
      • Department of Mathematics
      York, England, United Kingdom
  • 2003
    • University Medical Center Utrecht
      Utrecht, Utrecht, Netherlands