ABSTRACT: To assess the accuracy and precision of inverse probability weighted (IPW) least squares regression analysis for censored cost data.
Using the Surveillance, Epidemiology, and End Results (SEER)-Medicare database, we identified 1500 breast cancer patients who died and had complete cost information within the database. Patients were followed for up to 48 months (partitions) after diagnosis, and their actual total cost was calculated in each partition. We then simulated patterns of administrative and dropout censoring, and also added censoring to patients receiving chemotherapy to simulate comparing a newer with an older intervention. For each censoring simulation, we performed 1000 IPW regression analyses (bootstrap, sampling with replacement), calculated the average value of each coefficient in each partition, and summed the coefficients for each regression parameter to obtain the cumulative values from 1 to 48 months.
The cumulative, 48-month, average cost was $67,796 (95% confidence interval [CI] $58,454-$78,291) with no censoring, $66,313 (95% CI $54,975-$80,074) with administrative censoring, and $66,765 (95% CI $54,510-$81,843) with administrative plus dropout censoring. In multivariate analysis, chemotherapy was associated with an increased cost of $25,325 (95% CI $17,549-$32,827), compared with $28,937 (95% CI $20,510-$37,088) with administrative censoring and $29,593 (95% CI $20,564-$39,399) with administrative plus dropout censoring. Adding censoring to the chemotherapy group resulted in less accurate IPW estimates. This was ameliorated, however, by applying IPW within treatment groups.
IPW is a consistent estimator of population mean costs if the weight is correctly specified. If the censoring distribution depends on some covariates, a model that accommodates this dependency must be correctly specified in IPW to obtain accurate estimates.
Value in Health 07/2012; 15(5):656-63. DOI:10.1016/j.jval.2012.03.1388
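The weighting scheme described in the abstract above can be sketched in a few lines. This is an illustrative implementation, not the authors' code: it uses a Kaplan-Meier estimate of the censoring distribution to weight the completely observed cases, which is the simplest correctly specified weight when censoring is independent of covariates. Function names and the toy data are hypothetical.

```python
def km_censoring_survival(times, censored):
    """Kaplan-Meier estimate of the censoring survival function K(t),
    treating censoring (not death) as the event of interest."""
    surv = {}
    n_at_risk = len(times)
    k = 1.0
    for t, was_censored in sorted(zip(times, censored)):
        if was_censored:
            k *= (n_at_risk - 1) / n_at_risk
        surv[t] = k
        n_at_risk -= 1
    return surv

def ipw_mean_cost(costs, times, censored):
    """Weighted mean of completely observed costs, each case weighted by
    the inverse probability of remaining uncensored at its event time."""
    surv = km_censoring_survival(times, censored)
    num = den = 0.0
    for cost, t, was_censored in zip(costs, times, censored):
        if not was_censored:            # only complete cases contribute
            w = 1.0 / max(surv[t], 1e-12)
            num += w * cost
            den += w
    return num / den
```

With no censoring all weights equal one and the estimator reduces to the ordinary sample mean; when censoring depends on covariates (as in the chemotherapy simulation), the abstract's point is that this simple marginal weight would be misspecified and the weight model must be applied within groups or conditioned on covariates.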
ABSTRACT: Granulocyte-colony stimulating factor (G-CSF) reduces the risk of severe neutropenia associated with chemotherapy, but its cost implications following chemotherapy are unknown.
Our objective was to examine associations between G-CSF use and medical costs after initial adjuvant chemotherapy in early-stage (stage I-III) breast cancer (ESBC).
Women diagnosed with ESBC from 1999 to 2005, who had an initial course of chemotherapy beginning within 180 days of diagnosis and including ≥1 highly myelosuppressive agent, were identified from the Surveillance, Epidemiology, and End Results (SEER)-Medicare database. Medicare claims were used to describe the initial chemotherapy regimen according to the classes of agents used: anthracycline ([A]: doxorubicin or epirubicin); cyclophosphamide (C); taxane ([T]: paclitaxel or docetaxel); and fluorouracil (F). Patients were classified into four study groups according to their G-CSF use: (i) primary prophylaxis, if the first G-CSF claim was within 5 days of the start of the first chemotherapy cycle; (ii) secondary prophylaxis, if the first claim was within 5 days of the start of the second or subsequent cycles; (iii) G-CSF treatment, if the first claim occurred outside of prophylactic use; and (iv) no G-CSF. Patients were described by age, race, year of diagnosis, stage, grade, estrogen (ER) and progesterone (PR) receptor status, National Cancer Institute (NCI) Co-morbidity Index, chemotherapy regimen and G-CSF use. Total direct medical costs ($US, year 2009 values) to Medicare were estimated from 4 weeks after the last chemotherapy administration up to 48 months. Medical costs included those for ESBC treatment and all other medical services received after chemotherapy. Least squares regression, using inverse probability weighting (IPW) to account for censoring within the cohort, was used to evaluate adjusted associations between G-CSF use and costs.
A total of 7026 patients were identified, with an average age of 72 years, of whom 63% had stage II disease and 59% were ER and/or PR positive. Compared with no G-CSF, those receiving G-CSF primary prophylaxis were more likely to have stage III disease (30% vs. 16%; p < 0.0001), to be diagnosed in 2003-2005 (87% vs. 26%; p < 0.0001), and to receive dose-dense AC-T (26% vs. 1%; p < 0.0001), while they were less likely to receive an F-based regimen (12% vs. 42%; p < 0.0001). Overall, the estimated average direct medical cost over 48 months after initial chemotherapy was $US 42,628. In multivariate analysis, stage II or III diagnosis (compared with stage I), NCI Co-morbidity Index score 1 or ≥2 (compared with 0), and FAC or standard AC-T (each compared with AC) were associated with significantly higher IPW 48-month costs. Adjusting for patient demographic and clinical factors, costs in the G-CSF primary prophylaxis group were not significantly different from those not receiving primary prophylaxis (the other three study groups combined). In an analysis that included four separate study groups, G-CSF treatment was associated with significantly greater costs (incremental cost = $US 2938; 95% CI 285, 5590) than no G-CSF.
Direct medical costs after initial chemotherapy were not statistically different between those receiving G-CSF primary prophylaxis and those receiving no G-CSF, after adjusting for potential confounders.
ABSTRACT: We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work.
Health Economics 08/2011; 20(8):897-916. DOI:10.1002/hec.1653
ABSTRACT: In spatial statistics one usually assumes that observations are partial realizations of a stochastic process {Y(x): x ∈ ℝ^C}, where commonly C = 2 and the components of the location vector x are geographical coordinates. Frequently, it is assumed that Y(·) follows a Gaussian process (GP) with stationary covariance structure. In this setting the usual aim is to make spatial interpolation to unobserved locations of interest, based on observed values at monitored locations. This interpolation is heavily based on the specification of the mean and covariance structure of the GP. In environmental problems the assumption of stationary covariance structures is commonly violated due to local influences in the covariance structure of the process.
We propose models which relax the assumption of a stationary GP by accounting for covariate information in the covariance structure of the process. Usually at each location x, covariates related to Y(x) are also observed. We initially propose the use of covariates to allow the latent space model of Sampson and Guttorp to be of dimension C > 2. Then we discuss a particular case of the latent space model by using a representation projected down from C dimensions to 2 in order to model the 2D correlation structure better. Inference is performed under the Bayesian paradigm, and Markov chain Monte Carlo methods are used to obtain samples from the resultant posterior distributions under each model. As illustration of the proposed models, we analyse solar radiation in British Columbia, and mean temperature in Colorado.
ABSTRACT: Cost-effectiveness analysis of alternative medical treatments relies on having a measure of effectiveness, and many regard the quality adjusted life year (QALY) to be the current 'gold standard.' In order to compute QALYs, we require a suitable system for describing a person's health state, and a utility measure to value the quality of life associated with each possible state. There are a number of different health state descriptive systems, and we focus here on one known as the EQ-5D. Data for estimating utilities for different health states have a number of features that mean care is necessary in statistical modelling. There is interest in the extent to which valuations of health may differ between different countries and cultures, but few studies have compared preference values of health states obtained from different countries. This article applies a nonparametric model to estimate and compare EQ-5D health state valuation data obtained from two countries using Bayesian methods. The data set is the US and UK EQ-5D valuation studies, where a sample of 42 states defined by the EQ-5D was valued by representative samples of the general population from each country using the time trade-off technique. We estimate a utility function across both countries which explicitly accounts for the differences between them, and is estimated using the data from both countries. The article discusses the implications of these results for future applications of the EQ-5D and for further work in this field.
Statistics in Medicine 07/2010; 29(15):1622-34. DOI:10.1002/sim.3874
ABSTRACT: The development of a new drug is a major undertaking and it is important to consider carefully the key decisions in the development process. Decisions are made in the presence of uncertainty, and outcomes such as the probability of successful drug registration depend on the clinical development programme. The Rheumatoid Arthritis Drug Development Model was developed to support key decisions for drugs in development for the treatment of rheumatoid arthritis. It is configured to simulate Phase 2b and 3 trials based on the efficacy of new drugs at the end of Phase 2a, evidence about the efficacy of existing treatments, and expert opinion regarding key safety criteria. The model evaluates the performance of different development programmes with respect to the duration of disease of the target population, Phase 2b and 3 sample sizes, the dose(s) of the experimental treatment, the choice of comparator, the duration of the Phase 2b clinical trial, the primary efficacy outcome and decision criteria for successfully passing Phases 2b and 3. It uses Bayesian clinical trial simulation to calculate the probability of successful drug registration based on the uncertainty about parameters of interest, thereby providing a more realistic assessment of the likely outcomes of individual trials and sequences of trials for the purpose of decision making. In this case study, the results show that, depending on the trial design, the new treatment has assurances of successful drug registration in the range 0.044-0.142 for an ACR20 outcome and 0.057-0.213 for an ACR50 outcome.
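The "assurance" idea described above (the unconditional probability of trial success, averaging over parameter uncertainty) can be illustrated with a minimal sketch. This is a toy example, not the Rheumatoid Arthritis Drug Development Model: the control response rate, the prior on the treatment effect, the sample size and the decision rule are all invented numbers chosen only to show the structure of Bayesian clinical trial simulation.

```python
import random

def assurance(n_sims=2000, n_per_arm=200, seed=3):
    """Estimate assurance: draw the true effect from its (assumed)
    posterior after Phase 2a, simulate a Phase 3 responder comparison,
    and count how often the success criterion is met."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_sims):
        p_ctrl = 0.30                          # assumed control responder rate
        delta = rng.gauss(0.12, 0.06)          # uncertain true treatment effect
        p_trt = min(max(p_ctrl + delta, 0.0), 1.0)
        r_ctrl = sum(rng.random() < p_ctrl for _ in range(n_per_arm))
        r_trt = sum(rng.random() < p_trt for _ in range(n_per_arm))
        # toy decision rule: observed difference of at least 10 points
        if (r_trt - r_ctrl) / n_per_arm >= 0.10:
            successes += 1
    return successes / n_sims
```

Unlike conventional power (which conditions on a single assumed effect), the effect here is redrawn from its uncertainty distribution on every simulated trial, which is why assurance values such as the 0.044-0.142 range reported above can be much lower than nominal power.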
ABSTRACT: Computer codes are used in scientific research to study and predict the behaviour of complex systems. Their run times often make uncertainty and sensitivity analyses impractical because of the thousands of runs that are conventionally required, so efficient techniques have been developed based on a statistical representation of the code. The approach is less straightforward for dynamic codes, which represent time-evolving systems. We develop a novel iterative system to build a statistical model of dynamic computer codes, which is demonstrated on a rainfall-runoff simulator. Copyright 2009, Oxford University Press.
ABSTRACT: Background: Risk sharing schemes represent an innovative and important approach to the problems of rationing and achieving cost-effectiveness in high cost or controversial health interventions. This study aimed to assess the feasibility of risk sharing schemes, looking at long term clinical outcomes, to determine the price at which high cost treatments would be acceptable to the NHS.
Methods: This case study of the first NHS risk sharing scheme, a long term prospective cohort study of beta interferon and glatiramer acetate in multiple sclerosis (MS) patients in 71 specialist MS centres in UK NHS hospitals, recruited adults with relapsing forms of MS who met Association of British Neurologists (ABN) criteria for disease modifying therapy. Outcome measures were: success of recruitment and follow up over the first three years, analysis of baseline and initial follow up data, and the prospect of estimating the long term cost-effectiveness of these treatments.
Results: Centres consented 5560 patients. Of the 4240 patients who had been in the study for at least one year, annual review data were available for 3730 (88.0%). Of the patients who had been in the study for at least two years and three years, subsequent annual review data were available for 2055 (78.5%) and 265 (71.8%) patients respectively. Baseline characteristics and a small but statistically significant progression of disease were similar to those reported in previous pivotal studies.
Conclusion: Successful recruitment, follow up and early data analysis suggest that risk sharing schemes should be able to deliver their objectives. However, important issues of analysis, and political and commercial conflicts of interest still need to be addressed.
ABSTRACT: Pharmaceutical regulators and healthcare reimbursement authorities operate in different intellectual paradigms and adopt very different decision rules. As a result, drugs that have been licensed are often not available to all patients who could benefit because reimbursement authorities judge that the cost of therapies is greater than the health produced. This finding creates uncertainty for pharmaceutical companies planning their research and development investment, as licensing is no longer a guarantee of market access. In this study, we propose that it would be consistent with the objectives of pharmaceutical regulators to use the Net Benefit Framework of reimbursement authorities to identify those therapies that should be subject to priority review, that it is feasible to do so, and that this would have several positive effects for patients, industry, and healthcare systems.
International Journal of Technology Assessment in Health Care 02/2008; 24(2):140-5. DOI:10.1017/S0266462308080197
ABSTRACT: A crucial issue in the current global warming debate is the effect of vegetation and soils on carbon dioxide (CO₂) concentrations in the atmosphere. Vegetation can extract CO₂ through photosynthesis, but respiration, decay of soil organic matter and disturbance effects such as fire return it to the atmosphere. The balance of these processes is the net carbon flux. To estimate the biospheric carbon flux for England and Wales, we address the statistical problem of inference for the sum of multiple outputs from a complex deterministic computer code whose input parameters are uncertain. The code is a process model which simulates the carbon dynamics of vegetation and soils, including the amount of carbon that is stored as a result of photosynthesis and the amount that is returned to the atmosphere through respiration. The aggregation of outputs corresponding to multiple sites and types of vegetation in a region gives an estimate of the total carbon flux for that region over a period of time. Expert prior opinions are elicited for marginal uncertainty about the relevant input parameters and for correlations of inputs between sites. A Gaussian process model is used to build emulators of the multiple code outputs and Bayesian uncertainty analysis is then used to propagate uncertainty in the input parameters through to uncertainty on the aggregated output. Numerical results are presented for England and Wales in the year 2000. It is estimated that vegetation and soils in England and Wales constituted a net sink of 7.55 Mt C (1 Mt C = 10¹² g of carbon) in 2000, with standard deviation 0.56 Mt C resulting from the sources of uncertainty that are considered. Copyright 2008 Royal Statistical Society.
Journal of the Royal Statistical Society Series A (Statistics in Society) 01/2008; 171(1):109-135. DOI:10.1111/j.1467-985X.2007.00489.x
ABSTRACT: Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
Health Economics 10/2007; 16(10):1009-23. DOI:10.1002/hec.1199
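The analysis-of-variance idea underlying the method above can be illustrated with a toy patient-level model. This is a hedged sketch, not the paper's formulae: the model, its parameters and the sample sizes are invented. The point it demonstrates is that the raw variance of run means overstates parameter (PSA) uncertainty by the Monte Carlo noise within each run, and that this noise can be estimated and subtracted.

```python
import random

def simulate_run(theta, n_patients, rng):
    """One model run: mean outcome over n simulated patients, plus the
    within-run sample variance (the patient-level Monte Carlo noise)."""
    outcomes = [rng.gauss(theta, 1.0) for _ in range(n_patients)]
    mean = sum(outcomes) / n_patients
    var_within = sum((x - mean) ** 2 for x in outcomes) / (n_patients - 1)
    return mean, var_within

def psa_variance(n_runs=200, n_patients=100, seed=1):
    """ANOVA-corrected estimate of the variance due to input uncertainty."""
    rng = random.Random(seed)
    means, withins = [], []
    for _ in range(n_runs):
        theta = rng.gauss(0.0, 2.0)        # sampled uncertain model input
        m, v = simulate_run(theta, n_patients, rng)
        means.append(m)
        withins.append(v)
    grand = sum(means) / n_runs
    var_of_means = sum((m - grand) ** 2 for m in means) / (n_runs - 1)
    mean_within = sum(withins) / n_runs
    # Correction: between-run variance minus the expected sampling noise
    # of each run mean (within-run variance divided by patients per run).
    return var_of_means - mean_within / n_patients
```

In this toy the true input-uncertainty variance is 2.0² = 4; the corrected estimator targets it even when each run uses relatively few patients, which is the computational saving the paper formalizes.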
ABSTRACT: Partial expected value of perfect information (EVPI) calculations can quantify the value of learning about particular subsets of uncertain parameters in decision models. Published case studies have used different computational approaches. This article examines the computation of partial EVPI estimates via Monte Carlo sampling algorithms. The mathematical definition shows 2 nested expectations, which must be evaluated separately because of the need to compute a maximum between them. A generalized Monte Carlo sampling algorithm uses nested simulation with an outer loop to sample parameters of interest and, conditional upon these, an inner loop to sample remaining uncertain parameters. Alternative computation methods and shortcut algorithms are discussed and mathematical conditions for their use considered. Maxima of Monte Carlo estimates of expectations are biased upward, and the authors show that the use of small samples results in biased EVPI estimates. Three case studies illustrate 1) the bias due to maximization and the inaccuracy of shortcut algorithms, 2) the case in which correlated variables are present, and 3) the case in which there is nonlinearity in net benefit functions. If relatively small correlation or nonlinearity is present, then the shortcut algorithm can be substantially inaccurate. Empirical investigation of the numbers of Monte Carlo samples suggests that fewer samples on the outer level and more on the inner level could be efficient and that relatively small numbers of samples can sometimes be used. Several remaining areas for methodological development are set out. A wider application of partial EVPI is recommended both for greater understanding of decision uncertainty and for analyzing research priorities.
Medical Decision Making 07/2007; 27(4):448-70. DOI:10.1177/0272989X07302555
ABSTRACT: Objectives:
This article reports on the findings from applying a recently described approach to modeling health state valuation data and the impact of respondent characteristics on health state valuations. The approach applies a nonparametric model, estimated by Bayesian methods, to derive a valuation algorithm for the six-dimensional health state short form (derived from the short-form 36 health survey).
A sample of 197 states defined by the six-dimensional health state short form (derived from the short-form 36 health survey) has been valued by a representative sample of the Hong Kong general population using standard gamble. The article reports the application of the nonparametric model and compares it to the original model estimated using a conventional parametric random effects model. The two models are compared theoretically and in terms of empirical performance.
Advantages of the nonparametric model are that it can be used to predict scores in populations with different distributions of characteristics than observed in the survey sample, and that it allows the impact of respondent characteristics to vary by health state (while ensuring that full health passes through unity). The results suggest an important age effect, with sex having some effect but the remaining covariates having no discernible effect.
The nonparametric Bayesian model is argued to be more theoretically appropriate than previously used parametric models. Furthermore, it is more flexible in taking account of the impact of covariates.
Journal of Health Economics 06/2007; 26(3):597-612. DOI:10.1016/j.jhealeco.2006.09.002
ABSTRACT: It has long been recognised that respondent characteristics can impact on the values they give to health states. This paper reports on the findings from applying a non-parametric approach to estimate the covariates in a model of SF-6D health state values using Bayesian methods. The data set is the UK SF-6D valuation study, where a sample of 249 states defined by the SF-6D (a derivative of the SF-36) was valued by a sample of the UK general population using standard gamble. Advantages of the nonparametric model are that it can be used to predict scores in populations with different distributions of characteristics and that it allows for an impact to vary by health state (whilst ensuring that full health passes through unity). The results suggest an important age effect, with sex, class, education, employment and physical functioning probably having some effect, but the remaining covariates having no discernible effect. Adjusting for covariates in the UK sample made little difference to mean health state values. The paper discusses the implications of these results for policy.
Social Science & Medicine 04/2007; 64(6):1242-52. DOI:10.1016/j.socscimed.2006.10.040
ABSTRACT: A key task in the elicitation of expert knowledge is to construct a distribution from the finite, and usually small, number of statements that have been elicited from the expert. These statements typically specify some quantiles or moments of the distribution. Such statements are not enough to identify the expert's probability distribution uniquely, and the usual approach is to fit some member of a convenient parametric family. There are two clear deficiencies in this solution. First, the expert's beliefs are forced to fit the parametric family. Secondly, no account is then taken of the many other possible distributions that might have fitted the elicited statements equally well. We present a nonparametric approach which tackles both of these deficiencies. We also consider the issue of the imprecision in the elicited probability judgements. Copyright 2007, Oxford University Press.
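The "usual approach" that the abstract above criticizes, fitting a convenient parametric family to a few elicited statements, is easy to make concrete. The sketch below (a hypothetical helper, not from the paper) recovers a normal distribution exactly from an expert's elicited quartiles; the fragility the authors point out is that many non-normal distributions would satisfy the same two statements.

```python
def normal_from_quartiles(q25, q75):
    """Fit a normal distribution to elicited lower and upper quartiles.
    The 0.75 standard-normal quantile is approximately 0.6745."""
    mu = 0.5 * (q25 + q75)
    sigma = (q75 - q25) / (2 * 0.6745)
    return mu, sigma

# Illustrative elicited quartiles: the expert believes the quantity is
# equally likely to fall below 8, between 8 and 12, or above 12... wait,
# quartiles split the mass into quarters; 8 and 12 bound the middle half.
mu, sigma = normal_from_quartiles(8.0, 12.0)
```

Any symmetric pair of quartiles identifies (mu, sigma) uniquely within the normal family, but the nonparametric approach of the paper instead places a prior over the whole space of distributions consistent with the elicited statements.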
ABSTRACT: In this paper we report the estimation of conditional logistic regression models for the Health Utilities Index Mark 2 and the SF-6D, using ordinal preference data. The results are compared to the conventional regression models estimated from standard gamble data, and to the observed mean standard gamble health state valuations. For both the HUI2 and the SF-6D, the models estimated using ordinal data are broadly comparable to the models estimated on standard gamble data and the predictive performance of these models is close to that of the standard gamble models. Our research indicates that ordinal data have the potential to provide useful insights into community health state preferences. However, important questions remain.
Journal of Health Economics 06/2006; 25(3):418-31. DOI:10.1016/j.jhealeco.2005.07.008
ABSTRACT: The process of calibrating radiocarbon determinations onto the calendar scale involves, as a first stage, the estimation of the relationship between calendar and radiocarbon ages (the radiocarbon calibration curve) from a set of available high-precision calibration data. Traditionally the radiocarbon calibration curve has been constructed by forming a weighted average of the data, and then taking the curve as the piece-wise linear function joining the resulting calibration data points. Alternative proposals for creating a calibration curve from the averaged data involve a spline or cubic interpolation, or the use of Fourier transformation and other filtering techniques, in order to obtain a smooth calibration curve. Between the various approaches, there is no consensus as to how to make use of the data in order to solve the problems related to the calibration of radiocarbon determinations.

We propose a nonparametric Bayesian solution to the problem of the estimation of the radiocarbon calibration curve, based on a Gaussian process prior structure on the space of possible functions. Our approach is model-based, taking into account specific characteristics of the dating method, and provides a generic solution to the problem of estimating calibration curves for chronology building.

We apply our method to the 1998 international high-precision calibration dataset, and demonstrate that our model predictions are well calibrated and have smaller variances than other methods. These data have deficiencies and complications that will only be unravelled with the publication of new data, expected in early 2005, but this analysis suggests that the nonparametric Bayesian model will allow more precise calibration of radiocarbon ages for archaeological specimens.
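The core of the Gaussian-process approach described above, a GP prior over curves updated with noisy calibration points, can be sketched in a few lines. This is an illustrative implementation on synthetic data, not the authors' model of the 1998 dataset: the squared-exponential covariance, the hyperparameter values and the synthetic "wiggle" are all assumptions made for the example.

```python
import numpy as np

def gp_posterior(x_obs, y_obs, noise_sd, x_new, amp=20.0, length=50.0):
    """Posterior mean and sd of a GP with squared-exponential covariance,
    given observations y_obs at x_obs with known measurement error."""
    def k(a, b):
        return amp ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_obs, x_obs) + noise_sd ** 2 * np.eye(len(x_obs))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    Ks = k(x_new, x_obs)
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(k(x_new, x_new)) - np.sum(v ** 2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

# Synthetic stand-in for calibration data: the departure of radiocarbon
# age from the 1:1 calendar line, as a smooth wiggle over 500 calendar
# years, with an assumed measurement error of 5 (radiocarbon) years.
cal_age = np.linspace(0.0, 500.0, 26)
wiggle = 20.0 * np.sin(cal_age / 60.0)
mean, sd = gp_posterior(cal_age, wiggle, noise_sd=5.0, x_new=cal_age)
```

The posterior mean plays the role of the estimated calibration curve and the posterior sd quantifies its uncertainty; unlike piece-wise linear joining or spline interpolation of averaged data, the GP yields a full probability distribution over curves, which is what allows the variance comparisons reported in the abstract.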