ABSTRACT: A growing body of observational literature on the association between glucose-lowering treatments and all-cause mortality has been accumulating in recent years. However, many investigations present designs or analyses that inadequately address the methodological challenges involved. We conducted a systematic search with a non-systematic extension to identify observational studies published between 2000 and 2012 that evaluated the effects of glucose-lowering medications on all-cause mortality. We reviewed these studies and assessed the design and analysis methods used, with a focus on their ability to address specific methodological challenges. We described these methodological issues and their potential impact on observed associations, providing examples from the reviewed literature, and suggested possible approaches to manage these methodological challenges. We evaluated 67 publications of observational studies evaluating the association between glucose-lowering treatments and all-cause mortality. The identified methodological challenges included trade-offs associated with the outcome of all-cause mortality, incorrect temporal sequencing in administrative databases, inadequate treatment of time-varying hazards and treatment duration effects, unclear definition of the exposure risk window, improper handling of time-varying exposures, and incomplete accounting for confounding by indication. Most of these methodological challenges may be adequately addressed through the application of appropriate methods. Observational research plays an increasingly important role in assessing the clinical effects of diabetes therapy. The implementation of suitable research methods can reduce the potential for spurious findings, and thus the risk of misleading the medical community about benefits and harms of diabetes therapy.
Drug Safety 03/2015; 38(3). DOI:10.1007/s40264-015-0280-1 · 2.82 Impact Factor
ABSTRACT: Background:
Recent years have witnessed a growing body of observational literature on the association between glucose-lowering treatments and cardiovascular disease. However, many of the studies are based on designs or analyses that inadequately address the methodological challenges involved.
We reviewed recent observational literature on the association between glucose-lowering medications and cardiovascular outcomes and assessed the design and analysis methods used, with a focus on their ability to address specific methodological challenges. We describe and illustrate these methodological issues and their impact on observed associations, providing examples from the reviewed literature. We suggest approaches that may be employed to manage these methodological challenges.
From the evaluation of 81 publications of observational investigations assessing the association between glucose-lowering treatments and cardiovascular outcomes, we identified the following methodological challenges: 1) handling of temporality in administrative databases; 2) handling of risks that vary with time and treatment duration; 3) definitions of the exposure risk window; 4) handling of exposures that change over time; and 5) handling of confounding by indication. Most of these methodological challenges may be suitably addressed through application of appropriate methods.
Observational research plays an increasingly important role in the evaluation of the clinical effects of diabetes treatment. Implementation of appropriate research methods holds the promise of reducing the potential for spurious findings and the risk that the spurious findings will mislead the medical community about risks and benefits of diabetes medications.
ABSTRACT: Methods for near-real-time monitoring of new drugs in electronic healthcare data are needed.
In a novel application, we prospectively monitored ischemic, bleeding, and mortality outcomes among patients initiating prasugrel versus clopidogrel in routine care during the first 2 years following the approval of prasugrel.
Using the HealthCore Integrated Research Database, we conducted a prospective cohort study comparing prasugrel and clopidogrel initiators in the 6 months following the introduction of prasugrel and every 2 months thereafter. We identified patients who initiated antiplatelets within 14 days following discharge from hospitalizations for myocardial infarction (MI) or acute coronary syndrome. We matched patients using high-dimensional propensity scores (hd-PSs) and followed them for ischemic (i.e., MI and ischemic stroke) events, bleed (i.e., hemorrhagic stroke and gastrointestinal bleed) events, and all-cause mortality. For each outcome, we applied sequential alerting algorithms.
We identified 1,282 eligible new users of prasugrel and 8,263 eligible new users of clopidogrel between September 2009 and August 2011. In hd-PS matched cohorts, the overall MI rate difference (RD) comparing prasugrel with clopidogrel was -23.1 (95% confidence interval [CI] -62.8 to 16.7) events per 1,000 person-years; RDs were -0.5 (95% CI -12.9 to 11.9) and -2.8 (95% CI -13.2 to 7.6) for a composite bleed event outcome and death from any cause, respectively. No algorithms generated alerts for any outcomes.
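Rate differences per 1,000 person-years with normal-approximation confidence intervals, as reported above, can be sketched with a short calculation. The event counts and person-years below are hypothetical, not the study's actual data.

```python
import math

def rate_difference(events_a, py_a, events_b, py_b, per=1000.0):
    """Rate difference (per `per` person-years) between groups A and B,
    with a 95% CI using the normal approximation to the Poisson variance."""
    rate_a = events_a / py_a
    rate_b = events_b / py_b
    rd = (rate_a - rate_b) * per
    # Poisson variance of each rate is events / person-years^2
    se = math.sqrt(events_a / py_a**2 + events_b / py_b**2) * per
    return rd, rd - 1.96 * se, rd + 1.96 * se

# Hypothetical counts for two matched cohorts:
rd, lo, hi = rate_difference(30, 1200.0, 260, 8100.0)
```

A CI that spans zero, as in the study's RDs, would not by itself trigger an alerting rule.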
Near-real-time monitoring was feasible and, in contrast to the key pre-marketing trial that demonstrated the efficacy of prasugrel, did not suggest that prasugrel compared with clopidogrel was associated with an increased risk of gastrointestinal and intracranial bleeding.
Drug Safety 02/2014; 37(3). DOI:10.1007/s40264-014-0136-0 · 2.82 Impact Factor
ABSTRACT: Background:
New anticoagulants may improve health outcomes in patients with atrial fibrillation, but it is unclear whether their use is cost-effective.
Methods and results:
A Markov state transition model was created to compare 4 therapies: dabigatran 150 mg BID, apixaban 5 mg BID, rivaroxaban 20 mg QD, and warfarin therapy. The population included those with newly diagnosed atrial fibrillation who were eligible for treatment with warfarin. Compared with warfarin, apixaban, rivaroxaban, and dabigatran cost $93,063, $111,465, and $140,557 per additional quality-adjusted life year gained, respectively. At a threshold of $100,000 per quality-adjusted life year, apixaban provided the greatest absolute benefit while still being cost-effective, although warfarin would be superior if apixaban were 2% less effective than expected. Although apixaban was the optimal strategy in our base case, in probabilistic sensitivity analysis, warfarin was optimal in an equal number of iterations at a cost-effectiveness threshold of $100,000 per quality-adjusted life year.
At a standard cost-effectiveness threshold of $100,000 per quality-adjusted life year, apixaban appears to be the optimal anticoagulation strategy, but this finding is sensitive to assumptions about its efficacy and cost; in sensitivity analysis, warfarin was the optimal choice in an equal number of simulations. As a result, although all the novel oral anticoagulants produce greater quality-adjusted life expectancy than warfarin, they may not represent good value for money.
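A Markov state-transition comparison of this kind can be sketched as a small cohort simulation that accumulates discounted costs and QALYs and divides the increments. All transition probabilities, prices, and utilities below are illustrative assumptions, not the published model inputs.

```python
def run_markov(p_event, p_death, drug_cost, years=30, disc=0.03):
    """Minimal three-state (well / post-stroke / dead) Markov cohort model.
    Returns discounted lifetime cost and QALYs for a cohort of size 1."""
    U_WELL, U_STROKE = 0.8, 0.6        # assumed annual utilities
    STROKE_COST = 15000.0              # assumed annual post-stroke care cost
    well, stroke = 1.0, 0.0            # dead state is implicit
    cost = qaly = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + disc) ** t    # discount factor for cycle t
        cost += d * (well * drug_cost + stroke * (drug_cost + STROKE_COST))
        qaly += d * (well * U_WELL + stroke * U_STROKE)
        # transitions: well -> stroke or dead; post-stroke -> dead
        well, stroke = (well * (1.0 - p_event - p_death),
                        stroke * (1.0 - p_death) + well * p_event)
    return cost, qaly

# Two hypothetical strategies (event probabilities and prices are assumptions):
c_w, q_w = run_markov(p_event=0.04, p_death=0.03, drug_cost=600.0)    # "warfarin"
c_n, q_n = run_markov(p_event=0.03, p_death=0.03, drug_cost=3600.0)   # "novel agent"
icer = (c_n - c_w) / (q_n - q_w)   # incremental cost per QALY gained
```

A probabilistic sensitivity analysis, as in the study, would rerun this calculation many times with inputs drawn from their uncertainty distributions.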
ABSTRACT: Many patients with type 2 diabetes eventually require insulin, yet little is known about the patterns and quality of pharmacologic care received following insulin initiation. Guidelines from the American Diabetes Association and the European Association for the Study of Diabetes recommend that insulin secretagogues such as sulfonylureas be discontinued at the time of insulin initiation to reduce the risk of hypoglycemia, and that treatment be intensified if HbA1c levels remain above-target 3 months after insulin initiation.
To describe pharmacologic treatment patterns over time among adults initiating insulin and/or intensifying insulin treatment.
Participants: A large commercially insured population of adult patients without recorded type 1 diabetes who initiated insulin.
We evaluated changes in non-insulin antidiabetic medication use during the 120 days immediately following insulin initiation, rates of increase in insulin dose and/or dosing frequency during the 270 days following an insulin initiation treatment period of 90 days, and rates of insulin discontinuation.
Seven thousand nine hundred and thirty-two patients initiated insulin during 2003-2008, with the majority (61 %) initiating basal insulin only. Metformin (55 %), sulfonylureas (39 %), and thiazolidinediones (30 %) were commonly used prior to insulin initiation. Metformin was continued by 64 % of patients following mixed or mealtime insulin initiation; the continuation rate was nearly as high for sulfonylureas (58 %). Insulin dose and/or dosing frequency increased among 22.9 % of patients. Insulin was discontinued by 27 % of patients.
We found evidence of substantial departures from guideline-recommended pharmacotherapy. Insulin secretagogues were frequently co-prescribed with insulin. The majority of patients had no evidence of treatment intensification following insulin initiation, although this finding is difficult to interpret without HbA1c levels. While each patient's care should be individualized, our data suggest that the quality of care following insulin initiation can be improved.
Journal of General Internal Medicine 10/2013; 29(2). DOI:10.1007/s11606-013-2643-6 · 3.42 Impact Factor
ABSTRACT: Key to conducting active drug safety surveillance using longitudinal health care data is determining whether and when there is sufficient evidence to raise a safety alert. We propose to quantify the expected value of the information (VOI) to be gained through continued monitoring in terms of its potential to reduce health losses among future patients and weigh this against the health cost of exposing current patients during continued monitoring.
Objective: To apply this sequential VOI approach to monitoring the comparative safety of prasugrel vs. clopidogrel on gastrointestinal (GI) bleeding.
Methods: We calculated expected health losses assuming expected mortality, nonfatal myocardial infarction (MI), and nonfatal stroke on clopidogrel were 1.27, 5.93, and 1.14 per 100 person-years, using historical data; relative rates on prasugrel were 0.95, 0.76, and 1.02 based on trial data; and MI, stroke, and GI bleed were 9%, 25%, and 0.1% as bad as death, respectively. We assigned gamma prior distributions to the rates of bleeding on clopidogrel and prasugrel to capture baseline uncertainty; in Monte Carlo simulations, prasugrel's efficacy parameters were sampled from distributions.
Results: Treating all patients with prasugrel minimized expected health losses, resulting in 475.3 death-equivalents over 25,000 person-years of treatment. Monitoring increased expected losses by 5, and treating all patients with clopidogrel increased losses by 46.4. In Monte Carlo simulation, monitoring on average increased expected losses by 4.6, but a reduction in losses from monitoring was supported within the bounds of uncertainty (95% confidence interval, -0.6 to 11.1). Limitations: Patient heterogeneity and the possibility of updating efficacy parameters during monitoring were not incorporated.
Conclusions: The proposed approach integrates expected health harms and benefits of continued monitoring in the decision to raise a safety alert.
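The gamma-prior Monte Carlo step described in the methods can be sketched as follows. The severity weights, clopidogrel event rates, and relative rates are taken from the abstract; the gamma shape and scale parameters for the GI-bleed rates are purely illustrative stand-ins.

```python
import random

random.seed(0)

# Severity weights relative to death (from the abstract): MI 9%, stroke 25%, GI bleed 0.1%
W_MI, W_STROKE, W_GI = 0.09, 0.25, 0.001

# Clopidogrel rates per 100 person-years (abstract) and trial-based relative rates on prasugrel
MORT_C, MI_C, STROKE_C = 1.27, 5.93, 1.14
RR_MORT, RR_MI, RR_STROKE = 0.95, 0.76, 1.02

def expected_loss(gi_rate, mort, mi, stroke):
    """Expected death-equivalents per 100 person-years for one treatment arm."""
    return mort + W_MI * mi + W_STROKE * stroke + W_GI * gi_rate

losses_p, losses_c = [], []
for _ in range(10000):
    # Gamma priors on GI-bleed rates; shape/scale here are illustrative only.
    gi_c = random.gammavariate(4.0, 0.5)   # clopidogrel GI bleeds per 100 py
    gi_p = random.gammavariate(6.0, 0.5)   # prasugrel GI bleeds per 100 py
    losses_c.append(expected_loss(gi_c, MORT_C, MI_C, STROKE_C))
    losses_p.append(expected_loss(gi_p, MORT_C * RR_MORT, MI_C * RR_MI,
                                  STROKE_C * RR_STROKE))

mean_p = sum(losses_p) / len(losses_p)
mean_c = sum(losses_c) / len(losses_c)
```

Under these inputs the prasugrel arm has lower expected losses, mirroring the abstract's finding that treating all patients with prasugrel minimized expected health losses.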
Medical Decision Making 08/2013; 33(7). DOI:10.1177/0272989X13497997 · 3.24 Impact Factor
ABSTRACT: Purpose:
When using claims data, dichotomous covariates (C) are often assumed to be absent unless a claim for the condition is observed. When the length of available historical data differs among subjects, investigators must choose between using all available historical data and using data from a fixed window to assess C. Our purpose was to compare estimation under these two approaches.
We simulated cohorts of 20,000 subjects with dichotomous variables representing exposure (E), outcome (D), and a single time-invariant C, as well as varying availability of historical data. C was operationally defined under each paradigm and used to estimate the adjusted risk ratio of E on D via Mantel-Haenszel methods.
In the base case scenario, less bias and lower mean square error were observed using all available information compared with a fixed window; differences were magnified at higher modeled confounder strength. Upon introduction of an unmeasured covariate (F), the all-available approach remained less biased in most circumstances and rendered estimates that better approximated those that were adjusted for the true (modeled) value of C in all instances.
In most instances considered, operationally defining time-invariant dichotomous C based on all available historical data, rather than on data observed over a commonly shared fixed historical window, results in less biased estimates.
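The Mantel-Haenszel adjusted risk ratio used in these simulations pools stratum-specific 2x2 tables as shown below; the counts are hypothetical, constructed so the risk ratio is 2.0 within each stratum of C.

```python
def mh_risk_ratio(strata):
    """Mantel-Haenszel adjusted risk ratio across strata, each given as
    (exposed events, exposed total, unexposed events, unexposed total)."""
    num = den = 0.0
    for a, n1, b, n0 in strata:
        n = n1 + n0
        num += a * n0 / n   # exposed events weighted by unexposed share
        den += b * n1 / n   # unexposed events weighted by exposed share
    return num / den

# Hypothetical data: within each stratum of C the risk ratio is 2.0,
# but C is more common among the exposed and raises baseline risk.
strata = [
    (40, 1000, 20, 1000),   # C absent
    (90, 1000, 45, 1000),   # C present (higher baseline risk)
]
rr = mh_risk_ratio(strata)
```

Misclassifying C (for example, by assessing it over too short a window) would shift subjects between strata and bias the pooled estimate, which is the phenomenon the simulation study quantifies.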
Pharmacoepidemiology and Drug Safety 05/2013; 22(5). DOI:10.1002/pds.3434 · 2.94 Impact Factor
ABSTRACT: Objective:
To evaluate the comparative cost-effectiveness of interventions to improve adherence to evidence-based medications among postmyocardial infarction (MI) patients. Data Sources/Study Setting: Cost-effectiveness analysis. Study Design: We developed a Markov model simulating a hypothetical cohort of 65-year-old post-MI patients who were prescribed secondary prevention medications. We evaluated mailed education, disease management, polypill use, and combinations of these interventions. The analysis was performed from a societal perspective over a lifetime horizon. The main outcome was an incremental cost-effectiveness ratio (ICER) as measured by cost per quality-adjusted life year (QALY) gained.
Data Collection/Extraction Methods: Model inputs were extracted from published literature. Principal Findings: Compared with usual care, only mailed education had both improved health outcomes and reduced spending. Mailed education plus disease management, disease management, polypill use, polypill use plus mailed education, and polypill use plus disease management cost $74,600, $69,200, $133,000, $113,000, and $142,900 per QALY gained, respectively. In an incremental analysis, only mailed education had an ICER of less than $100,000 per QALY and was therefore the optimal strategy. Polypill use, particularly when combined with mailed education, could be cost effective, and potentially cost saving, if its price decreased to less than $100 per month.
Conclusions: Mailed education and a polypill, once available, may be cost-saving strategies for improving post-MI medication adherence.
Health Services Research 12/2012; 47(6). DOI:10.1111/j.1475-6773.2012.01462.x · 2.78 Impact Factor
ABSTRACT: Dabigatran, an oral thrombin inhibitor, and rivaroxaban and apixaban, oral factor Xa inhibitors, have been found to be safe and effective in reducing stroke risk in patients with atrial fibrillation. We sought to compare the efficacy and safety of the 3 new agents based on data from their published warfarin-controlled randomized trials, using the method of adjusted indirect comparisons.
We included findings from 44,535 patients enrolled in 3 trials of the efficacy of dabigatran (Randomized Evaluation of Long-Term Anticoagulation Therapy [RE-LY]), apixaban (Apixaban for Reduction in Stroke and Other Thromboembolic Events in Atrial Fibrillation [ARISTOTLE]), and rivaroxaban (Rivaroxaban Once Daily Oral Direct Factor Xa Inhibition Compared With Vitamin K Antagonism for Prevention of Stroke and Embolism Trial in Atrial Fibrillation [ROCKET-AF]), each compared with warfarin. The primary efficacy end point was stroke or systemic embolism; the safety end point we studied was major hemorrhage. To address a lack of comparability between trial populations caused by the restriction of ROCKET-AF to high-risk patients, we conducted a subgroup analysis in patients with a CHADS2 score ≥3. We found no statistically significant efficacy differences among the 3 drugs, although apixaban and dabigatran were numerically superior to rivaroxaban. Apixaban produced significantly fewer major hemorrhages than dabigatran and rivaroxaban.
An indirect comparison of new anticoagulants based on existing trial data indicates that in patients with a CHADS2 score ≥3, dabigatran 150 mg, apixaban 5 mg, and rivaroxaban 20 mg resulted in statistically similar rates of stroke and systemic embolism, but apixaban had a lower risk of major hemorrhage compared with dabigatran and rivaroxaban. Until head-to-head trials or large-scale observational studies that reflect routine use of these agents are available, such adjusted indirect comparisons based on trial data are one tool to guide initial therapeutic choices.
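The adjusted indirect comparison (Bucher) method contrasts two treatments through their common warfarin comparator on the log scale, adding the variances of the two trial estimates. A minimal sketch, with illustrative hazard ratios rather than the published subgroup estimates:

```python
import math

def indirect_hr(hr_ac, ci_ac, hr_bc, ci_bc):
    """Bucher adjusted indirect comparison of A vs B via a common comparator C.
    Inputs are hazard ratios and 95% CIs (lo, hi) from the two trials."""
    log_hr = math.log(hr_ac) - math.log(hr_bc)
    # back out standard errors from the 95% CIs on the log scale
    se_ac = (math.log(ci_ac[1]) - math.log(ci_ac[0])) / (2 * 1.96)
    se_bc = (math.log(ci_bc[1]) - math.log(ci_bc[0])) / (2 * 1.96)
    se = math.sqrt(se_ac**2 + se_bc**2)   # variances add for the indirect contrast
    return (math.exp(log_hr),
            math.exp(log_hr - 1.96 * se),
            math.exp(log_hr + 1.96 * se))

# Illustrative HRs vs warfarin (hypothetical, not the trials' reported values):
hr, lo, hi = indirect_hr(0.79, (0.66, 0.95), 0.88, (0.75, 1.03))
```

Because the indirect variance is the sum of the two trial variances, such comparisons are inherently less precise than a head-to-head trial of the same size.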
ABSTRACT: Several efforts are under way to develop and test methods for prospective drug safety monitoring using large, electronic claims databases. Prospective monitoring systems must incorporate signalling algorithms and techniques to mitigate confounding in order to minimize false positive and false negative signals due to chance and bias.
The aim of the study was to describe a prototypical targeted active safety monitoring system and apply the framework to three empirical examples.
We performed sequential, targeted safety monitoring in three known drug/adverse event (AE) pairs: (i) paroxetine/upper gastrointestinal (UGI) bleed; (ii) lisinopril/angioedema; (iii) ciprofloxacin/Achilles tendon rupture (ATR). Data on new users of the drugs of interest were extracted from the HealthCore Integrated Research Database. New users were matched by propensity score to new users of comparator drugs in each example. Analyses were conducted sequentially to emulate prospective monitoring. Two signalling rules, a maximum sequential probability ratio test and an effect estimate-based approach, were applied to sequential, matched cohorts to identify signals within the system.
Signals were identified for all three examples: paroxetine/UGI bleed in the seventh monitoring cycle, within 2 calendar years of sequential data; lisinopril/angioedema in the second cycle, within the first monitoring year; ciprofloxacin/ATR in the tenth cycle, within the fifth year.
In this proof of concept, our targeted, active monitoring system provides an alternative to systems currently in the literature. Our system employs a sequential, propensity score-matched framework and signalling rules for prospective drug safety monitoring and identified signals for all three adverse drug reactions evaluated.
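The first of the two signalling rules, the maximized sequential probability ratio test (MaxSPRT), can be sketched for Poisson-distributed counts as below. The cumulative counts at each look and the critical value are illustrative; in practice critical values come from published MaxSPRT tables calibrated to the overall alpha level and surveillance length.

```python
import math

def poisson_maxsprt_llr(observed, expected):
    """Log-likelihood ratio for the Poisson MaxSPRT: cumulative `observed`
    events among the exposed versus `expected` under the null (derived from
    the matched comparator group)."""
    if observed <= expected:
        return 0.0   # the maximized LLR is zero when observed <= expected
    return observed * math.log(observed / expected) - (observed - expected)

CRITICAL = 2.85   # illustrative critical value, not from the actual tables

# Emulated sequential looks with (cumulative observed, cumulative expected):
looks = [(3, 2.1), (7, 4.0), (14, 6.5)]
signals = [poisson_maxsprt_llr(o, e) >= CRITICAL for o, e in looks]
```

A signal is declared the first time the LLR crosses the critical value, which is how the system identified the paroxetine, lisinopril, and ciprofloxacin associations at different monitoring cycles.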
Drug Safety 04/2012; 35(5):407-16. DOI:10.2165/11594770-000000000-00000 · 2.82 Impact Factor
ABSTRACT: A previous study suggested an increased risk of preeclampsia among women treated with selective serotonin reuptake inhibitors (SSRIs). Using population-based health-care utilization databases from British Columbia (1997-2006), the authors conducted a study of 69,448 pregnancies in women with depression. They compared risk of preeclampsia in women using SSRIs, serotonin-norepinephrine reuptake inhibitors (SNRIs), or tricyclic antidepressants (TCAs) between gestational weeks 10 and 20 with risk in depressed women not using antidepressants. Among prepregnancy antidepressant users, the authors compared the risk in women who continued antidepressants between gestational weeks 10 and 24 with the risk in those who discontinued. Relative risks and 95% confidence intervals were estimated. The risk of preeclampsia in depressed women not treated with antidepressants (2.4%) was similar to that in women without depression (2.3%). Compared with women with untreated depression, women treated with SSRI, SNRI, and TCA monotherapy had adjusted relative risks of 1.22 (95% confidence interval (CI): 0.97, 1.54), 1.95 (95% CI: 1.25, 3.03), and 3.23 (95% CI: 1.87, 5.59), respectively. Within prepregnancy antidepressant users, the relative risk for preeclampsia among continuers compared with discontinuers was 1.32 (95% CI: 0.95, 1.84) for SSRI, 3.43 (95% CI: 1.77, 6.65) for SNRI, and 3.26 (95% CI: 1.04, 10.24) for TCA monotherapy. Study results suggest that women who use antidepressants during pregnancy, especially SNRIs and TCAs, have an elevated risk of preeclampsia. These associations may reflect drug effects or more severe depression.
American journal of epidemiology 03/2012; 175(10):988-97. DOI:10.1093/aje/kwr394 · 5.23 Impact Factor
ABSTRACT: Multiple studies demonstrate poor adherence to medication regimens prescribed for chronic illnesses, including osteoporosis, but few interventions have been proven to enhance adherence. We examined the effectiveness of a telephone-based counseling program rooted in motivational interviewing to improve adherence to a medication regimen for osteoporosis.
We conducted a 1-year randomized controlled clinical trial. Participants were recruited from a large pharmacy benefits program for Medicare beneficiaries. All potentially eligible individuals had been newly prescribed a medication for osteoporosis. Consenting participants were randomized to a program of telephone-based counseling (n = 1046) using a motivational interviewing framework or a control group (n = 1041) that received mailed educational materials. Medication regimen adherence was the primary outcome compared across treatment arms and was measured as the median (interquartile range) medication possession ratio, calculated as the ratio of days with filled prescriptions to total days of follow-up.
The groups were balanced at baseline, with a mean age of 78 years; 93.8% were female. In an intention-to-treat analysis, median adherence was 49% (interquartile range, 7%-88%) in the intervention arm and 41% (2%-86%) in the control arm (P = .07, Kruskal-Wallis test). There were no differences in self-reported fractures.
In this randomized controlled trial, we did not find a statistically significant improvement in adherence to an osteoporosis medication regimen using a telephonic motivational interviewing intervention.
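The medication possession ratio used as the primary outcome can be computed directly from fill dates and days supplied. A minimal sketch, with a hypothetical refill history and one common convention (overlapping supply is simply truncated rather than carried forward):

```python
from datetime import date

def medication_possession_ratio(fills, follow_up_start, follow_up_end):
    """Medication possession ratio: days covered by filled prescriptions
    divided by total days of follow-up. `fills` is a list of
    (fill_date, days_supply) tuples."""
    total_days = (follow_up_end - follow_up_start).days
    covered = set()
    for fill_date, days_supply in fills:
        for k in range(days_supply):
            d = (fill_date - follow_up_start).days + k
            if 0 <= d < total_days:   # count each follow-up day at most once
                covered.add(d)
    return len(covered) / total_days

# Hypothetical refill history over one year of follow-up:
mpr = medication_possession_ratio(
    [(date(2011, 1, 1), 90), (date(2011, 4, 15), 90), (date(2011, 9, 1), 30)],
    date(2011, 1, 1), date(2012, 1, 1),
)
```

This example yields 210 covered days out of 365, an MPR of about 0.58, in the same range as the medians reported for the two trial arms.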
Archives of internal medicine 02/2012; 172(6):477-83. DOI:10.1001/archinternmed.2011.1977 · 17.33 Impact Factor
ABSTRACT: Little is known about the use of warfarin in hemodialysis (HD) patients with atrial fibrillation (AF). We studied temporal trends of AF among older HD patients, and of warfarin use among those with AF.
We linked US Medicare and prescription claims from older patients undergoing HD in 2 Eastern US states. We established annual cohorts of prevalent HD patients; AF was ascertained from ≥2 claims (≥7 days apart) in the same year, with a diagnosis code indicating AF. Among those with AF, we defined current and past warfarin use. Demographic and clinical characteristics were also ascertained for each cohort. We used repeated-measures logistic regression to define the odds of AF and of current or past versus absence of warfarin use.
Of 6,563 unique patients, 2,185 were determined to have AF. The prevalence of AF increased from 26% in 1998 to 32% in 2005. In 2005, current warfarin use was present in 24% of AF patients and past use in 25%; 51% had no evidence of any warfarin use. No significant trends in utilization were observed from 1998 through 2005. Patients aged ≥85 years and nonwhites were less likely to have received warfarin; most comorbidities were not associated with warfarin use, although patients with past pulmonary embolism or deep venous thrombosis were more likely to have received it than those without such history.
While the prevalence of AF has been increasing among older HD patients, warfarin use was low and unchanged over time, perhaps reflecting the lack of evidence supporting its use.
Journal of nephrology 12/2011; 25(3):341-53. DOI:10.5301/jn.5000010 · 1.45 Impact Factor
ABSTRACT: Adherence to osteoporosis treatment is low. Although new therapies and behavioral interventions may improve medication adherence, questions are likely to arise regarding their cost-effectiveness.
Our objectives were to develop and validate a model to simulate the clinical outcomes and costs arising from various osteoporosis medication adherence patterns among women initiating bisphosphonate treatment and to estimate the cost-effectiveness of a hypothetical intervention to improve medication adherence.
We constructed a computer simulation using estimates of fracture rates, bisphosphonate treatment effects, costs, and utilities for health states drawn from the published literature. Probabilities of transitioning on and off treatment were estimated from administrative claims data.
Patients were women initiating bisphosphonate therapy from the general community.
We evaluated a hypothetical behavioral intervention to improve medication adherence.
Changes in 10-yr fracture rates and incremental cost-effectiveness ratios were evaluated.
A hypothetical intervention with a one-time cost of $250 and reducing bisphosphonate discontinuation by 30% had an incremental cost-effectiveness ratio (ICER) of $29,571 per quality-adjusted life year in 65-yr-old women initiating bisphosphonates. Although the ICER depended on patient age, intervention effectiveness, and intervention cost, the ICERs were less than $50,000 per quality-adjusted life year for the majority of intervention cost and effectiveness scenarios evaluated. Results were sensitive to bisphosphonate cost and effectiveness and assumptions about the rate at which intervention and treatment effects decline over time.
Our results suggest that behavioral interventions to improve osteoporosis medication adherence will likely have favorable ICERs if their efficacy can be sustained.
The Journal of Clinical Endocrinology and Metabolism 07/2011; 96(9):2762-70. DOI:10.1210/jc.2011-0575 · 6.21 Impact Factor
ABSTRACT: Sudden cardiac death constitutes the leading cause of death in patients receiving dialysis. Little is known about the trends in implantable cardioverter-defibrillator (ICD) use and the outcomes of such device placement.
Retrospective cohort study.
Setting & Participants: US long-term dialysis patients who received an ICD in 1994-2006. Predictors, Outcomes, & Measurements: ICD utilization rates and incident rates of all-cause mortality, device infections, and other device-related procedures were measured. We compared mortality between recipients and otherwise similar patients who did not receive such a device using high-dimensional propensity score matching. We also examined the associations of demographics, dialysis type, baseline comorbid conditions, cardiovascular events at the time of admission, and recent infection with the study outcomes.
9,528 patients received an ICD in 1994-2006, with >88% placed after 2000. Almost all ICD use in the 1990s was for secondary prevention; by 2006, however, half the patients received ICDs for apparent primary prevention. Mortality rates after implantation were high (448 deaths/1,000 patient-years) and most deaths were cardiovascular. Postimplantation infection rates were high, especially in the first year after implantation (988 events/1,000 patient-years), and were predicted by diabetes and recent infection. Patients receiving ICDs for secondary prevention had an overall 14% (95% CI, 9%-19%) lower mortality risk compared with propensity-matched controls, but these benefits seemed to be restricted to the early postimplantation time.
Limitations: Lack of clinical data, especially for laboratory and heart function studies; residual confounding by indication.
ICD use in dialysis patients is increasing, but rates of all-cause and cardiovascular mortality remain high in dialysis patients receiving these devices. Device infections are common, particularly in patients with recent infections. Randomized trials of ICDs are needed to determine the efficacy, safety, and risk-benefit ratio of these devices in dialysis patients.
American Journal of Kidney Diseases 06/2011; 58(3):409-17. DOI:10.1053/j.ajkd.2011.03.026 · 5.90 Impact Factor
ABSTRACT: We examined variations in fracture rates among patients initiated on antidepressant drug treatment as identified from Medicare data in two US states and assessed whether the observed variation could be explained by affinity for serotonin transport receptors. We used Cox proportional hazards models to compare fracture rates of the hip, humerus, pelvis, wrist, and a composite of these, among propensity score-matched cohorts of users of secondary amine tricyclics, tertiary amine tricyclics, selective serotonin reuptake inhibitors (SSRIs), and atypical antidepressants. As compared with secondary amine tricyclics, SSRIs showed the highest association with composite fracture rate (hazard ratio 1.30; 95% confidence interval (CI) 1.12-1.52), followed by atypical antidepressants (hazard ratio 1.12; 95% CI 0.96-1.31) and tertiary amine tricyclics (hazard ratio 1.01; 95% CI 0.87-1.18). The results were robust to sensitivity analyses. Although SSRI use was associated with the highest rate of fractures, variation in fracture risk across specific antidepressant medications did not depend on affinity for serotonin transport receptors.
ABSTRACT: To explore the "healthy user" and "healthy adherer" effects, hypothetical sources of bias thought to arise when patients who initiate and adhere to preventive therapies are more likely to engage in healthy behaviors than are other subjects.
The authors examined the association between statin initiation and adherence, and the subsequent use of preventive health services and incidence of clinical outcomes unlikely to be associated with the need for, or use of, a statin among older enrollees in two state-sponsored drug benefit programs.
After adjustment for demographic and clinical covariates, patients who initiated statin use were more likely to receive recommended preventive services than noninitiators matched on age, sex, and state (hazard ratio [HR]: 1.10, 1.06-1.14 for males, HR: 1.09, 1.07-1.11 for females) and appeared to have a lower risk of a range of adverse outcomes (HR: 0.87, 0.85-0.89) thought to be unrelated to statin use. Adherence to a statin regimen was also associated with increased rates of preventive service use and a decreased rate of adverse clinical outcomes (HR: 0.93, 0.88-0.99).
These results suggest that patients initiating and adhering to chronic preventive drug therapies are more likely to engage in other health-promoting behaviors. Failure to account for this relationship may introduce bias in any epidemiologic study evaluating the effect of a preventive therapy on clinical outcomes.
Value in Health 06/2011; 14(4):513-20. DOI:10.1016/j.jval.2010.10.033 · 3.28 Impact Factor
ABSTRACT: To examine the effect of variable selection strategies on the performance of propensity score (PS) methods in a study of statin initiation, mortality, and hip fracture assuming a true mortality reduction of < 15% and no effect on hip fracture.
We compared seniors initiating statins with seniors initiating glaucoma medications. Out of 202 covariates with a prevalence > 5%, the PS variable selection strategies included: no selection, a priori selection, factors predicting exposure, and factors predicting outcome. We estimated hazard ratios (HRs) for statin initiation on mortality and hip fracture from Cox models controlling for various PSs.
During 1 year of follow-up, 2693 of 55,610 study subjects died and 496 suffered a hip fracture. The crude HR for statin initiators was 0.64 for mortality and 0.46 for hip fracture. Adjusting for the non-parsimonious PS yielded effect estimates of 0.83 (95% CI: 0.75-0.93) and 0.72 (95% CI: 0.56-0.93). Including in the PS only covariates associated with a greater than 20% increase or reduction in outcome rates yielded effect estimates of 0.84 (95% CI: 0.75-0.94) and 0.76 (95% CI: 0.61-0.95), which were closest to the effects predicted from randomized trials.
Due to the difficulty of pre-specifying all potential confounders of an exposure-outcome association, data-driven approaches to PS variable selection may be useful. Selecting covariates strongly associated with exposure but unrelated to outcome should be avoided, because this may increase bias. Selecting variables for PS based on their association with the outcome may help to reduce such bias.
Pharmacoepidemiology and Drug Safety 06/2011; 20(6):551-9. DOI:10.1002/pds.2098 · 2.94 Impact Factor