ABSTRACT: Background: Clinician uncertainty persists concerning the need for antibiotic prophylaxis to prevent prosthetic joint infection (PJI) in patients undergoing dental procedures. An improved understanding of the potential clinical and economic risks and benefits of antibiotic prophylaxis will help inform the debate and facilitate the continuing evolution of clinical management guidelines for dental patients with prosthetic joints. Methods: The authors developed a Markov decision model to compare the lifetime cost-effectiveness of alternative antibiotic prophylaxis strategies for dental patients aged 65 years who had undergone total hip arthroplasty (THA). On the basis of the authors' interpretation of previous recommendations from the American Dental Association and the American Academy of Orthopaedic Surgeons, they compared the following strategies: no prophylaxis, prophylaxis for the first 2 years after arthroplasty, and lifetime prophylaxis. Results: A strategy of forgoing antibiotic prophylaxis before dental visits was cost-effective, resulting in lower lifetime accumulated costs ($11,909) and higher accumulated quality-adjusted life years (QALYs) (12.375) compared with the alternative prophylaxis strategies. Conclusions: The results of Markov decision modeling indicated that a no-prophylaxis strategy was cost-effective for dental patients who had undergone THA. These results support the findings of case-control studies and the conclusions of an American Dental Association Council on Scientific Affairs report that questioned general recommendations for antibiotic prophylaxis before dental procedures. Practical Implications: The results of cost-effectiveness decision modeling support the contention that routine antibiotic prophylaxis for dental patients with total joint arthroplasty should be reconsidered.
Article · Nov 2015 · Journal of the American Dental Association (1939)
ABSTRACT: Purpose: Gout is the most common inflammatory arthritis in the United States, and several urate-lowering treatment strategies are used to manage symptoms. The value of collecting additional information on key parameters in the cost-effectiveness of urate-lowering treatment strategies for the management of gout is unknown. We apply a meta-modeling approach to calculate the expected value of perfect information (EVPI), the expected value of partial perfect information (EVPPI), and the expected value of sample information (EVPSI) for all model parameters (e.g., utilities, efficacy, and cost).
Methods: We used a previously developed model that evaluated the cost-effectiveness of five urate-lowering strategies: no treatment, allopurinol or febuxostat only, allopurinol-febuxostat sequential therapy, and febuxostat-allopurinol sequential therapy. Health states in the model accounted for disease status: controlled, uncontrolled on medication, and uncontrolled off medication. To quantify uncertainty in the model, we conducted a probabilistic sensitivity analysis (PSA). We fit a linear regression meta-model to the dataset generated from the PSA. Conceptually similar parameters were evaluated together (e.g., utilities), since a single study is likely to inform all of these parameters. To inform future research design, we extrapolated EVPI, EVPPI, and EVPSI to a United States population level for an annual incidence of 29,376 new gout patients, assuming a decision lifetime of 10 years. Finally, we calculated the optimal sample size of a future study, assuming a patient survey would be administered during a clinical visit (fixed cost, $6,000; cost per patient, $100) to evaluate the parameter group of interest.
Results: Population EVPI varies by a decision maker's willingness to pay (WTP) per quality-adjusted life year and is $227 million at a WTP of $100,000. EVPPI is highest for utility parameters when WTP is $50,000-$100,000. Figure 1 shows the population EVPSI for the utility parameters, the cost of research, the expected net benefit of sampling (ENBS), and the optimal sample size for a survey conducted in a clinic evaluating gout patients' health utilities. Given a WTP of $100,000, the optimal sample size of a survey-based research study evaluating the health utility of gout patients is 8,600. If the cost of research doubles, the optimal sample size is 5,700.
Conclusions: Future studies should be conducted to evaluate the health utilities of gout patients.
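The per-patient and population EVPI calculations described above follow a standard recipe applied to PSA output. A minimal sketch in Python, with invented net-monetary-benefit draws standing in for the gout model's actual PSA output (only the incidence of 29,376 and the 10-year decision lifetime come from the abstract; the 3% discount rate is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical PSA output: net monetary benefit (NMB) for each of the
# five strategies across 10,000 parameter draws (illustrative numbers,
# not the gout model's actual results).
n_draws, n_strategies = 10_000, 5
nmb = rng.normal(loc=[0, 500, 800, 900, 850], scale=2_000,
                 size=(n_draws, n_strategies))

# Value of the decision under current information:
# pick the strategy with the best *expected* NMB.
enmb_current = nmb.mean(axis=0).max()

# Value of the decision under perfect information:
# for each draw, pick the best strategy for that draw, then average.
enmb_perfect = nmb.max(axis=1).mean()

evpi_per_patient = enmb_perfect - enmb_current

# Scale to the population: discount future incident cohorts over the
# 10-year decision lifetime (3% annual rate assumed here).
incidence, horizon, rate = 29_376, 10, 0.03
discounted_pop = sum(incidence / (1 + rate) ** t for t in range(horizon))
population_evpi = evpi_per_patient * discounted_pop
```

By construction the perfect-information decision can never do worse on average than the current-information decision, so the sketch always yields a nonnegative EVPI.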
ABSTRACT: An increasing proportion of breast cancer patients undergo contralateral prophylactic mastectomy (CPM) to reduce their risk of contralateral breast cancer (CBC). Our goal was to evaluate CBC risk perception changes over time among breast cancer patients.
We conducted a prospective, longitudinal study of women with newly diagnosed unilateral breast cancer. Patients completed a survey before and approximately 2 years after treatment. Survey questions used open-ended responses or 5-point Likert scale scoring (e.g., 5 = very likely, 1 = not at all likely).
A total of 74 women completed the presurgical treatment survey, and 43 completed the postsurgical treatment survey. Baseline characteristics were not significantly different between responders and nonresponders of the follow-up survey. The mean estimated 10-year risk of CBC was 35.7% on the presurgical treatment survey and 13.8% on the postsurgical treatment survey (p < 0.001). The perceived risks of developing cancer in the same breast and elsewhere in the body significantly decreased between surveys. Both CPM and non-CPM (breast-conserving surgery or unilateral mastectomy) patients' perceived risk of CBC significantly decreased from pre- to postsurgical treatment surveys. Compared with non-CPM patients, CPM patients had a significantly lower perceived 10-year risk of CBC (5.8% vs. 17.3%, p = 0.046) on postsurgical treatment surveys.
The perceived risk of CBC significantly attenuated over time for both CPM and non-CPM patients. These data emphasize the importance of early physician counseling and improvement in patient education to provide women with accurate risk information before they make surgical treatment decisions.
Article · Mar 2015 · Annals of Surgical Oncology
ABSTRACT: Clinical practice guidelines should be based on the best scientific evidence derived from systematic reviews of primary research. However, these studies often do not provide the evidence needed by guideline development groups to evaluate the tradeoffs between benefits and harms. In this article, the authors identify 4 areas where models can bridge the gaps between published evidence and the information needed for guideline development: applying new or updated information on disease risk, diagnostic test properties, and treatment efficacy; exploring a more complete array of alternative intervention strategies; assessing benefits and harms over a lifetime horizon; and projecting outcomes for the conditions for which the guideline is intended. The use of modeling as an approach to bridge these gaps (provided that the models are high-quality and adequately validated) is considered. Colorectal and breast cancer screening are used as examples to show the utility of models for these purposes. The authors propose that a modeling study is most useful when strong primary evidence is available to inform the model but critical gaps remain between the evidence and the questions that the guideline group must address. In these cases, model results have a place alongside the findings of systematic reviews to inform health care practice and policy.
Article · Dec 2014 · Annals of Internal Medicine
ABSTRACT: Researchers are actively pursuing the development of a new non-invasive test (NIT) for colorectal cancer (CRC) screening as an alternative to fecal occult blood tests (FOBTs). The majority of pilot studies focus on the detection of invasive CRC rather than precursor lesions (i.e., adenomas). We aimed to explore the relevance of adenoma detection for the viability of an NIT for CRC screening by considering a hypothetical test that does not detect adenomas beyond chance.
We used the Simulation Model of Colorectal Cancer (SimCRC) to estimate the effectiveness of CRC screening and the lifetime costs (payers' perspective) for a cohort of US 50-year-olds to whom CRC screening is offered from age 50 to 75. We compared annual screening with guaiac and immunochemical FOBTs (with sensitivities up to 70% and 24% for CRC and adenomas, respectively) to annual screening with a hypothetical NIT (sensitivity of 90% for CRC, no detection of adenomas beyond chance, and specificity and cost similar to FOBTs).
Screening with the NIT was no more effective, but was 29-44% more costly, than screening with FOBTs. The findings were robust to varying the screening interval, the NIT's sensitivity for CRC, adherence rates favoring the NIT, and the NIT's unit cost. A comparative modelling approach using a model that assumes a shorter adenoma dwell time (MISCAN-COLON) confirmed the superiority of the immunochemical FOBT over an NIT with no ability to detect adenomas.
Information on adenoma detection is crucial to determine whether a new NIT is a viable alternative to FOBTs for CRC screening. Current evidence thus lacks an important piece of information needed to identify marker candidates that hold real promise and deserve further (large-scale) evaluation.
Article · Dec 2014 · International Journal of Cancer
ABSTRACT: Purpose: Relative survival, as reported by the Surveillance, Epidemiology, and End Results (SEER) Program, represents cancer survival in the absence of other causes of death. Often, cancer Markov models have a distant metastasis state, a state not directly observed in SEER, from which cancer deaths are presumed to occur. The aim of this research is to use a novel approach to calibrate the transition probabilities to and from an unobserved state of a Markov model to fit a relative survival curve.
Methods: We modeled relative survival with a three-piece Markov model (i.e., a specific Markov chain within each specified piece) for stage 3 colorectal cancer patients. For each piece, we used a constant transition matrix with three states: 1) recurrence free, 2) metastatic recurrence, and 3) dead from cancer. We estimated the optimal cutoff time points using a Bayesian Markov chain Monte Carlo (MCMC) change-point model. This technique allowed us to estimate the time points at which the slope of the relative survival curve changes. We calibrated the transition probabilities using a previously published two-step iterative convex optimization algorithm. The dynamics of the disease can be defined as x_{t+1} = x_t M, where x_{t+1} is the state vector that results from the transformation given by the monthly transition matrix M. The matrix M is a piecewise block-diagonal matrix that includes, in each piece, a block-diagonal matrix for each Markov chain.
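The cohort dynamics x_{t+1} = x_t M can be sketched directly. In the sketch below, the transition probabilities are placeholders for illustration, not the calibrated values from the study:

```python
import numpy as np

# Illustrative monthly transition matrix for the three states described
# above (recurrence free, metastatic recurrence, dead from cancer);
# the probabilities are placeholders, not calibrated estimates.
M = np.array([
    [0.990, 0.008, 0.002],   # recurrence free
    [0.000, 0.950, 0.050],   # metastatic recurrence
    [0.000, 0.000, 1.000],   # dead from cancer (absorbing)
])

x = np.array([1.0, 0.0, 0.0])  # cohort starts recurrence free

# Dynamics: x_{t+1} = x_t M, applied month by month.
# In the piecewise model, M would be swapped for a different constant
# matrix at each estimated change point.
trace = [x]
for _ in range(60):            # 5 years of monthly cycles
    x = x @ M
    trace.append(x)

relative_survival = 1.0 - x[2]  # fraction not dead from cancer at 60 months
```

Because each row of M sums to 1, the state vector remains a probability distribution at every cycle, and the absorbing third state accumulates the cancer deaths that the relative survival curve is fit against.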
Results: We applied our method to calibrate a Markov model to fit a relative survival curve for stage 3 colorectal cancer patients younger than 75 years. We compared our piecewise calibration method to a single-piece approach (i.e., a single Markov chain). While the single-piece approach converged faster, the piecewise method improved the goodness of fit by 60%. The means of the change points estimated from the Bayesian change-point model were at months 3 and 24 (see figure). The observed relative survival curve and the piecewise and single-piece calibrated curves are shown in the figure.
Conclusions: By estimating the change points in the relative survival curve, we were able to find the optimal transition probabilities for a piecewise Markov model that allowed us to impose a particular structure defined by the progression of the disease. We propose a piecewise calibration method that produces more accurate solutions compared with a single-piece approach.
ABSTRACT: Purpose: Clinical trials often report treatment efficacy in terms of the reduction in all-cause mortality [i.e., an overall hazard ratio (OHR)], and not the reduction in disease-specific mortality [i.e., a disease-specific hazard ratio (DSHR)]. Using an OHR to reduce all-cause mortality beyond the time horizon of the clinical trial may introduce bias if the relative proportion of other-cause mortality increases with age. We aim to quantify this bias.
Methods: We simulated a hypothetical cohort of patients with a generic disease that increases the age-, sex-, and race-specific mortality rate (μASR) by a constant additive disease-specific rate (μDis). We assumed a DSHR of 0.75 (unreported) and an OHR of 0.80 (reported, derived from the DSHR and assumptions about the clinical trial population). We quantified the bias as the difference in life expectancy (LE) gains with treatment between an OHR approach that reduces all-cause mortality over a lifetime [(μASR + μDis) × OHR] and a DSHR approach that reduces only disease-specific mortality [μASR + μDis × DSHR]. We varied the starting age of the cohort from 40 to 70 years.
Results: The OHR bias increases as the DSHR decreases and with younger starting ages of the cohort. For a cohort of 60-year-old sick patients, the mortality rate under the OHR approach crosses μASR at age 90 (see figure), and the LE gain is overestimated by 0.6 years (a 3.7% increase). We also used the OHR as an estimate of the DSHR [μASR + μDis × OHR] (as the latter is often not reported). This resulted in a slight shift in the mortality rate compared with the DSHR approach (see figure), yielding an underestimation of the LE gain.
Conclusions: The use of an OHR approach to model treatment effectiveness beyond the time horizon of the trial overestimates the effectiveness of the treatment. Under an OHR approach, it is possible that sick individuals at some point will face a lower mortality rate compared with healthy individuals. We recommend either deriving a DSHR from trials and using the DSHR approach, or using the OHR as an estimate of DSHR in the model, which is a conservative assumption.
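The direction of the bias can be illustrated numerically. A toy sketch with an invented Gompertz-like background mortality and an arbitrary additive disease rate (the study's actual life tables and rates are not reproduced here; only the hazard ratios 0.75 and 0.80 come from the abstract):

```python
import math

def mu_asr(age):
    """Gompertz-like background mortality rate (illustrative parameters)."""
    return 0.0001 * math.exp(0.09 * age)

mu_dis, DSHR, OHR = 0.02, 0.75, 0.80   # mu_dis is an arbitrary toy value

def life_expectancy(start_age, rate_fn, max_age=120):
    """Discrete annual approximation: sum of survival probabilities."""
    surv, le = 1.0, 0.0
    for age in range(start_age, max_age):
        surv *= math.exp(-rate_fn(age))  # survive one year at rate_fn(age)
        le += surv
    return le

# OHR approach: scale the *total* mortality rate over the lifetime.
le_ohr = life_expectancy(60, lambda a: (mu_asr(a) + mu_dis) * OHR)
# DSHR approach: scale only the disease-specific rate.
le_dshr = life_expectancy(60, lambda a: mu_asr(a) + mu_dis * DSHR)
# Untreated, for reference.
le_none = life_expectancy(60, lambda a: mu_asr(a) + mu_dis)
```

With these toy rates the OHR approach yields a larger LE than the DSHR approach, i.e., it overstates the treatment's LE gain; and at old ages the treated sick cohort's rate (μASR + μDis) × OHR drops below the healthy rate μASR, reproducing the crossing behavior the abstract describes.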
ABSTRACT: Objective
To determine whether, given a limited budget, a state's low-income uninsured population would have greater benefit from a colorectal cancer (CRC) screening program using colonoscopy or fecal immunochemical testing (FIT).
Data Sources/Study Setting: South Carolina's low-income, uninsured population.
Study Design: Comparative effectiveness analysis using microsimulation modeling to estimate the number of individuals screened, CRC cases prevented, CRC deaths prevented, and life-years gained from a screening program using colonoscopy versus a program using annual FIT in South Carolina's low-income, uninsured population. This analysis assumed an annual budget of $1 million and a budget availability of 2 years as a base case.
Principal Findings: The annual FIT screening program resulted in nearly eight times more individuals being screened and, more important, approximately four times as many CRC deaths prevented and life-years gained than the colonoscopy screening program. Our results were robust to assumptions concerning economic perspective and the target population, and they may therefore be generalized to other states and populations.
Conclusions: A FIT screening program will prevent more CRC deaths than a colonoscopy-based program when a state's budget for CRC screening supports screening of only a fraction of the target population.
Article · Oct 2014 · Health Services Research
ABSTRACT: Background:
Contralateral prophylactic mastectomy (CPM) rates have substantially increased in recent years and may reflect an exaggerated perceived benefit from the procedure. The objective of this study was to evaluate the magnitude of the survival benefit of CPM for women with unilateral breast cancer.
We developed a Markov model to simulate survival outcomes after CPM and no CPM among women with stage I or II breast cancer without a BRCA mutation. Probabilities for developing contralateral breast cancer (CBC), dying from CBC, dying from primary breast cancer, and age-specific mortality rates were estimated from published studies. We estimated life expectancy (LE) gain, 20-year overall survival, and disease-free survival with each intervention strategy among cohorts of women defined by age, estrogen receptor (ER) status, and stage of cancer.
Predicted LE gains from CPM ranged from 0.13 to 0.59 years for women with stage I breast cancer and from 0.08 to 0.29 years for those with stage II breast cancer. Absolute 20-year survival differences ranged from 0.56% to 0.94% for women with stage I breast cancer and from 0.36% to 0.61% for women with stage II breast cancer. CPM was more beneficial among younger women and those with stage I or ER-negative breast cancer. Sensitivity analyses yielded a maximum 20-year survival difference with CPM of only 1.45%.
The absolute 20-year survival benefit from CPM was less than 1% among all age, ER status, and cancer stage groups. Estimates of LE gains and survival differences derived from decision models may provide more realistic expectations of CPM.
Article · Aug 2014 · JNCI Journal of the National Cancer Institute
ABSTRACT: Background: Harms and benefits of cancer screening depend on age and comorbid conditions, but reliable estimates are lacking. Objective: To estimate the harms and benefits of cancer screening by age and comorbid conditions to inform decisions about screening cessation. Design: Collaborative modeling with 7 cancer simulation models and common data on average and comorbid condition level-specific life expectancy. Setting: U.S. population. Patients: U.S. cohorts aged 66 to 90 years in 2010 with average health or 1 of 4 comorbid condition levels: none, mild, moderate, or severe. Intervention: Mammography, prostate-specific antigen testing, or fecal immunochemical testing. Measurements: Lifetime cancer deaths prevented and life-years gained (benefits); false-positive test results and overdiagnosed cancer cases (harms). For each comorbid condition level, we determined the age at which the harms and benefits of screening were similar to those for persons with average health screened at age 74 years. Results: Screening 1000 women with average life expectancy at age 74 years for breast cancer resulted in 79 to 96 (range across models) false-positive results, 0.5 to 0.8 overdiagnosed cancer cases, and 0.7 to 0.9 prevented cancer deaths. Although the absolute numbers of harms and benefits differed across cancer sites, the ages at which to cease screening were consistent across models and cancer sites. For persons with no, mild, moderate, and severe comorbid conditions, screening until ages 76, 74, 72, and 66 years, respectively, resulted in harms and benefits similar to those for average-health persons. Limitation: Comorbid conditions influenced only life expectancy. Conclusion: Comorbid conditions are an important determinant of the harms and benefits of screening. Estimates of screening benefits and harms by comorbid condition can inform discussions between providers and patients about personalizing screening cessation decisions.
Article · Jul 2014 · Annals of Internal Medicine
ABSTRACT: OBJECTIVE: The study objective was to evaluate and update the safety data from randomized controlled trials of tumor necrosis factor inhibitors in patients treated for rheumatoid arthritis.
METHODS: A systematic literature search was conducted from 1990 to May 2013. All included studies were randomized, double-blind, controlled trials of patients with rheumatoid arthritis that evaluated adalimumab, certolizumab pegol, etanercept, golimumab, or infliximab treatment. Serious adverse events and discontinuation rates were abstracted, and risk estimates were calculated as Peto odds ratios (ORs).
RESULTS: Forty-four randomized controlled trials involving 11,700 subjects receiving tumor necrosis factor inhibitors and 5901 subjects receiving placebo or traditional disease-modifying antirheumatic drugs were included. Tumor necrosis factor inhibitor treatment as a group was associated with a higher risk of serious infection (OR, 1.42; 95% confidence interval [CI], 1.13-1.78) and treatment discontinuation due to adverse events (OR, 1.23; 95% CI, 1.06-1.43) compared with placebo and traditional disease-modifying antirheumatic drug treatments. Specifically, patients taking adalimumab, certolizumab pegol, and infliximab had an increased risk of serious infection (OR, 1.69, 1.98, and 1.63, respectively) and an increased risk of discontinuation due to adverse events (OR, 1.38, 1.67, and 2.04, respectively). In contrast, patients taking etanercept had a decreased risk of discontinuation due to adverse events (OR, 0.72; 95% CI, 0.55-0.93). Although the ORs for malignancy varied across the different tumor necrosis factor inhibitors, none reached statistical significance.
CONCLUSIONS: These updated meta-analyses of the comparative safety of tumor necrosis factor inhibitors suggest a higher risk of serious infection associated with adalimumab, certolizumab pegol, and infliximab, which appears to contribute to higher rates of discontinuation. In contrast, etanercept use showed a lower rate of discontinuation. These data may help guide comparative clinical decision making in the management of rheumatoid arthritis.
Article · Jun 2014 · The American Journal of Medicine
ABSTRACT: Objective
To compare the cost-effectiveness of alternate treatment strategies using second-generation antipsychotics (SGAs) for patients with schizophrenia.
We developed a Markov model to estimate the costs and quality-adjusted life-years (QALYs) for different sequences of treatments for 40-year-old patients with schizophrenia. We considered first-line treatment with one of the four SGAs: olanzapine (OLZ), risperidone (RSP), quetiapine (QTP), and ziprasidone (ZSD). Patients could switch to another of these antipsychotics as second-line therapy, and only clozapine (CLZ) was allowed as third-line treatment. We derived parameter estimates from the Clinical Antipsychotic Trial of Intervention Effectiveness (CATIE) study and published sources.
The ZSD-QTP strategy (first-line treatment with ZSD, a change to QTP if ZSD is discontinued, and a switch to CLZ if QTP is discontinued) was the most costly while yielding the greatest QALYs, with an incremental cost-effectiveness ratio (ICER) of $542,500 per QALY gained compared with the ZSD-RSP strategy. However, the ZSD-RSP strategy had an ICER of $5,200 per QALY gained versus the RSP-ZSD strategy and had the greatest probability of being cost-effective given a willingness-to-pay threshold between $50,000 and $100,000 per QALY. All other treatment strategies were more costly and less effective than another strategy or a combination of other strategies. Results varied with the time horizon adopted.
The ZSD-RSP strategy was the most cost-effective at willingness-to-pay thresholds between $5,200 and $542,500 per QALY. Our results should be interpreted with caution because they are based largely on the CATIE trial, with potentially limited generalizability to all patient populations and to the doses of SGAs used in practice.
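The dominance reasoning used above (ruling out strategies that are more costly and less effective than another strategy or a combination of strategies, then computing ICERs along the remaining frontier) can be sketched as follows. The cost and QALY figures are invented so that the frontier ICERs match the reported $5,200 and $542,500 per QALY; they are not taken from the model:

```python
def efficient_frontier(strategies):
    """Return (name, cost, qaly) tuples on the cost-effectiveness frontier."""
    pts = sorted(strategies.items(), key=lambda kv: kv[1][0])  # by cost
    # Drop strongly dominated strategies: costlier but no more effective.
    frontier, best_q = [], float("-inf")
    for name, (cost, qaly) in pts:
        if qaly > best_q:
            frontier.append((name, cost, qaly))
            best_q = qaly
    # Drop strategies ruled out by extended dominance: ICERs must
    # increase moving up the frontier.
    changed = True
    while changed and len(frontier) > 2:
        changed = False
        icers = [(frontier[i + 1][1] - frontier[i][1]) /
                 (frontier[i + 1][2] - frontier[i][2])
                 for i in range(len(frontier) - 1)]
        for i in range(len(icers) - 1):
            if icers[i] > icers[i + 1]:
                del frontier[i + 1]
                changed = True
                break
    return frontier

# Hypothetical (cost, QALY) pairs chosen to reproduce the reported ICERs.
strategies = {
    "RSP-ZSD": (100_000, 10.0),
    "ZSD-RSP": (100_520, 10.1),
    "OLZ-RSP": (120_000, 10.05),   # strongly dominated by ZSD-RSP
    "ZSD-QTP": (154_770, 10.2),
}
frontier = efficient_frontier(strategies)
```

With these inputs, OLZ-RSP falls off the frontier (ZSD-RSP is both cheaper and more effective), and the surviving strategies yield ICERs of roughly $5,200 and $542,500 per QALY, mirroring the decision rule a willingness-to-pay threshold is applied to.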