The lag between effectiveness and cost-effectiveness evidence of new drugs. Implications for decision-making in health care.
ABSTRACT A new drug is approved for use if its efficacy and safety have been demonstrated. However, healthcare decision-makers may also require data on the cost-effectiveness of new drugs if they are to make informed decisions about their place in therapy. Cost-effectiveness evidence may lag behind effectiveness data in terms of availability. We explored the timeliness of delivering cost-effectiveness information about new drugs with established effectiveness and significant financial impact. Drugs were identified based on guidance documents and reports published by the UK National Institute for Clinical Excellence (NICE), and the following data were collected: dates of publication of the first effectiveness and cost-effectiveness evidence, methodology of the cost-effectiveness analysis, and quality scores of the clinical studies. Eighteen guidance documents on the use of new drugs/drug groups published by NICE by October 2001 covered 30 health technologies, all of which were included in the analysis. The analysis showed that the effectiveness of these technologies had been demonstrated within the previous 12 years, with only two exceptions. However, cost-effectiveness evidence had been published for only 21 (70%) of the technologies. Cost-effectiveness was estimated using models in 52.4% of cases. Good-quality effectiveness evidence lagged behind the first effectiveness evidence by a mean of 1.40 years (95% CI 0.57-2.23), while the mean lag between the first effectiveness evidence and the first cost-effectiveness publication was estimated at 3.20 years (95% CI 1.76-4.65). Cost-effectiveness evidence thus often lags behind effectiveness evidence. As a result, healthcare decision-makers are sometimes in the position of having to take decisions without adequate cost-effectiveness data at their disposal.
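The lag estimates above are reported as a mean with a 95% confidence interval over per-technology lag times. A minimal sketch of how such an interval is computed, using invented lag figures and a normal-approximation interval (a Student's t critical value would be more appropriate for small samples, but the z value keeps the sketch within the Python standard library):

```python
import math
import statistics


def mean_with_ci(values, confidence=0.95):
    """Return (mean, lower, upper) using a normal-approximation CI.

    For small samples, a Student's t critical value would be more
    appropriate; the normal z value is used here to stay within the
    standard library.
    """
    n = len(values)
    mean = statistics.mean(values)
    sem = statistics.stdev(values) / math.sqrt(n)  # standard error of the mean
    z = statistics.NormalDist().inv_cdf((1 + confidence) / 2)  # ~1.96 for 95%
    return mean, mean - z * sem, mean + z * sem


# Invented lag times (years) between first effectiveness and first
# cost-effectiveness publication for a handful of technologies.
lags = [1.5, 2.0, 3.5, 4.0]
mean, lo, hi = mean_with_ci(lags)
print(f"mean lag {mean:.2f} years (95% CI {lo:.2f}-{hi:.2f})")
```

The figures here are purely illustrative; the study's own estimates (1.40 and 3.20 years) came from the NICE-derived dataset described in the abstract.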
ABSTRACT: OBJECTIVE: To review the use of number needed to treat (NNT) and/or number needed to harm (NNH) values to determine their relevance in helping clinicians evaluate cost-effectiveness analyses (CEAs). DATA SOURCES: PubMed and EconLit were searched from 1966 to September 2012. STUDY SELECTION AND DATA EXTRACTION: Reviews, editorials, non-English-language articles, and articles that did not report NNT/NNH or cost-effectiveness ratios were excluded. CEA studies reporting cost per life-year gained, per quality-adjusted life-year (QALY), or another cost-per-effectiveness measure were included. Full texts of all included articles were reviewed for study information, including type of journal, impact factor of the journal, focus of study, data source, publication year, how NNT/NNH values were reported, and outcome measures. DATA SYNTHESIS: A total of 188 studies were initially identified, with 69 meeting our inclusion criteria. Most were published in clinician-practice-focused journals (78.3%), while 5.8% were in policy-focused journals and 15.9% in health-economics-focused journals. The majority (72.4%) of the articles were published in high-impact journals (impact factor >3.0). Many articles focused on either disease treatment (40.5%) or disease prevention (40.5%). Forty-eight percent reported NNT as part of the CEA ratio per event. Most (53.6%) articles used data from literature reviews, while 24.6% used data from randomized clinical trials and 20.3% used data from observational studies. In addition, 10% of the studies implemented modeling to perform CEA. CONCLUSIONS: CEA studies sometimes include NNT ratios. Although NNT has several limitations, clinicians often use it for decision-making, so including NNT information alongside CEA findings may help clinicians better understand and apply CEA results. Further research is needed to assess how NNT/NNH might meaningfully be incorporated into CEA publications. Annals of Pharmacotherapy 03/2013; 47(3).
DOI:10.1345/aph.1R417 · 2.92 Impact Factor
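The abstract above notes that some CEAs report NNT as part of a cost-effectiveness ratio per event. A minimal sketch of that relationship, with invented risk and cost figures: NNT is the reciprocal of the absolute risk reduction, and treating NNT patients at the incremental cost avoids one event.

```python
def number_needed_to_treat(risk_control: float, risk_treatment: float) -> float:
    """NNT is the reciprocal of the absolute risk reduction (ARR)."""
    arr = risk_control - risk_treatment
    if arr <= 0:
        raise ValueError("Treatment shows no absolute risk reduction")
    return 1.0 / arr


def cost_per_event_avoided(nnt: float, incremental_cost: float) -> float:
    """Treating NNT patients at the incremental cost avoids one event."""
    return nnt * incremental_cost


# Illustrative (invented) figures: event risk falls from 10% to 6%,
# and the new drug costs 500 more per patient than the comparator.
nnt = number_needed_to_treat(0.10, 0.06)    # ~25 patients
cost = cost_per_event_avoided(nnt, 500.0)   # ~12500 per event avoided
print(nnt, cost)
```

This per-event framing is distinct from the cost-per-QALY ratios most CEAs report, which is part of why combining the two measures in publications is debated.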
ABSTRACT: BACKGROUND AND AIMS: Little is known about the extent and nature of publication bias in economic evaluations. Our objective was to determine whether economic evaluations are subject to publication bias by considering whether economic data are as likely to be reported, and reported as promptly, as effectiveness data. METHODS: Trials that intended to conduct an economic analysis and ended before 2008 were identified in the International Standard Randomised Controlled Trial Number (ISRCTN) register; a random sample of 100 trials was retrieved. Fifty comparator trials were randomly drawn from those not identified as intending to conduct an economic study. The trial start and end dates, estimated sample size and funder type were extracted. For trials planning economic evaluations, effectiveness and economic publications were sought; publication dates and journal impact factors were extracted. Effectiveness abstracts were assessed for whether they reached a firm conclusion that one intervention was most effective. Primary investigators were contacted about reasons for non-publication of results, or reasons for differential publication strategies for effectiveness and economic results. RESULTS: Trials planning an economic study were more likely to be funded by government (p = 0.01) and larger (p = 0.003) than other trials. The trials planning an economic evaluation had a mean of 6.5 (range 2.7-13.2) years since the trial end in which to publish their results. Effectiveness results were reported by 70% of these trials, while only 43% published economic evaluations (p < 0.001). Reasons for non-publication of economic results included the intervention being ineffective and staffing issues. Funding source, time since trial end and length of study were not associated with a higher probability of publishing the economic evaluation. However, studies that were small or of unknown size were significantly less likely to publish economic evaluations than large studies (p < 0.001).
The authors' confidence in labelling one intervention clearly most effective did not affect the probability of publication. The mean time to publication was 0.7 years longer for cost-effectiveness data than for effectiveness data where both were published (p = 0.001). The median journal impact factor was 1.6 points higher for effectiveness publications than for the corresponding economic publications (p = 0.01). Reasons for publishing in different journals included editorial decision-making and the additional time that economic evaluation takes to conduct. CONCLUSIONS: Trials that intend to conduct an economic analysis are less likely to report economic data than effectiveness data. Where economic results do appear, they are published later, and in journals with lower impact factors. These results suggest that economic output may be more susceptible than effectiveness data to publication bias. Funders, grant reviewers and trialists themselves should ensure economic evaluations are prioritized and adequately staffed to avoid potential problems with bias. PharmacoEconomics 01/2013; 31(1):77-85. DOI:10.1007/s40273-012-0004-7 · 3.34 Impact Factor