Propensity Score-based Sensitivity Analysis Method for Uncontrolled Confounding
ABSTRACT: The authors developed a sensitivity analysis method to address the issue of uncontrolled confounding in observational studies. In this method, the authors use a 1-dimensional function of the propensity score, which they refer to as the sensitivity function (SF), to quantify the hidden bias due to unmeasured confounders. The propensity score is defined as the conditional probability of being treated given the measured covariates. The authors then construct SF-corrected inverse-probability-weighted estimators to draw inference on the causal treatment effect. This approach allows analysts to conduct a comprehensive sensitivity analysis in a straightforward manner by varying sensitivity assumptions on both the functional form and the coefficients of the 1-dimensional SF. Furthermore, 1-dimensional continuous functions can be well approximated by low-order polynomial structures (e.g., linear, quadratic). Therefore, even if the imposed SF is practically certain to be incorrect, one can still hope to obtain valuable information on treatment effects by conducting a comprehensive sensitivity analysis using polynomial SFs with varying orders and coefficients. The authors demonstrate the new method by applying it to an asthma study that evaluates the effect of clinician prescription patterns regarding inhaled corticosteroids for children with persistent asthma on selected clinical outcomes.
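The workflow described in the abstract — fit a propensity model on measured covariates, form an inverse-probability-weighted (IPW) treatment-effect estimate, then sweep an assumed sensitivity function c(e) of the propensity score — can be sketched as follows. This is a minimal illustration, not the authors' exact estimator: the simulated data, the linear SF c(e) = lam0 + lam1·e, and the simple subtractive correction are all assumptions for demonstration; the paper derives the precise form of the SF-corrected weighting.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate an observational study with an unmeasured confounder U ---
n = 5000
X = rng.normal(size=n)              # measured covariate
U = rng.normal(size=n)              # unmeasured confounder (source of hidden bias)
p_true = 1.0 / (1.0 + np.exp(-(0.5 * X + 0.5 * U)))
T = rng.binomial(1, p_true)
Y = 2.0 * T + X + U + rng.normal(size=n)   # true treatment effect = 2.0

# --- fit a logistic propensity model on the MEASURED covariate only ---
def fit_logistic(X, T, iters=25):
    """Newton-Raphson for a one-covariate logistic regression (intercept + slope)."""
    Z = np.column_stack([np.ones_like(X), X])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Z @ beta))
        grad = Z.T @ (T - p)
        hess = Z.T @ (Z * (p * (1.0 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

beta = fit_logistic(X, T)
e_hat = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * X)))

# --- standard IPW estimate of the average treatment effect ---
# Biased here, because U is omitted from the propensity model.
ate_ipw = np.mean(T * Y / e_hat) - np.mean((1 - T) * Y / (1 - e_hat))

# --- sensitivity analysis: sweep a linear SF c(e) = lam0 + lam1 * e ---
def sf_corrected(ate, e_hat, lam0, lam1):
    c = lam0 + lam1 * e_hat          # assumed hidden-bias function of the PS
    return ate - np.mean(c)          # illustrative correction, not the paper's exact form

for lam0 in (0.0, 0.5, 1.0):
    print(f"lam0={lam0:.1f}  SF-corrected ATE = {sf_corrected(ate_ipw, e_hat, lam0, 0.0):.2f}")
```

The point of the sweep is the abstract's "comprehensive sensitivity analysis": rather than trusting any single c(e), the analyst reports how the corrected effect estimate moves as the SF's order and coefficients vary over a plausible range.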
Full-text available from: Ann Chen Wu, Aug 30, 2015
- Source available from: Robert J Ursano
- "As noted in the body of the paper, this weakness was addressed in SHOS-A/B by using propensity score matching methods (Rosenbaum and Rubin, 1983) to select probability samples of soldiers from the AAS as controls with an over-sampling of AAS respondents who reported suicidal ideation. This design refinement, which allows much more sensitive comparisons of cases and controls than in previous case-control studies of suicidal behaviors (Li et al., 2011), would have been impossible in the absence of the HADS and AAS studies being carried out in parallel with SHOS-A/B. "
ABSTRACT: The Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) is a multi-component epidemiological and neurobiological study designed to generate actionable evidence-based recommendations to reduce US Army suicides and increase basic knowledge about the determinants of suicidality. This report presents an overview of the designs of the six components of the Army STARRS. These include: an integrated analysis of the Historical Administrative Data Study (HADS) designed to provide data on significant administrative predictors of suicides among the more than 1.6 million soldiers on active duty in 2004–2009; retrospective case-control studies of suicide attempts and fatalities; separate large-scale cross-sectional studies of new soldiers (i.e. those just beginning Basic Combat Training [BCT], who completed self-administered questionnaires [SAQs] and neurocognitive tests and provided blood samples) and soldiers exclusive of those in BCT (who completed SAQs); a pre-post deployment study of soldiers in three Brigade Combat Teams about to deploy to Afghanistan (who completed SAQs and provided blood samples) followed multiple times after returning from deployment; and a platform for following up Army STARRS participants who have returned to civilian life. Department of Defense/Army administrative data records are linked with SAQ data to examine prospective associations between self-reports and subsequent suicidality. The presentation closes with a discussion of the methodological advantages of cross-component coordination.
International Journal of Methods in Psychiatric Research 12/2013; 22(4):267-275. DOI:10.1002/mpr.1401
ABSTRACT: The Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) is a multi-component epidemiological and neurobiological study of unprecedented size and complexity designed to generate actionable evidence-based recommendations to reduce US Army suicides and increase basic knowledge about determinants of suicidality by carrying out coordinated component studies. A number of major logistical challenges were faced in implementing these studies. The current report presents an overview of the approaches taken to meet these challenges, with a special focus on the field procedures used to implement the component studies. As detailed in the paper, these challenges were addressed at the onset of the initiative by establishing an Executive Committee, a Data Coordination Center (the Survey Research Center [SRC] at the University of Michigan), and study-specific design and analysis teams that worked with staff on instrumentation and field procedures. SRC staff, in turn, worked with the Office of the Deputy Under Secretary of the Army (ODUSA) and local Army Points of Contact (POCs) to address logistical issues and facilitate data collection. These structures, coupled with careful fieldworker training, supervision, and piloting, contributed to the major Army STARRS data collection efforts having higher response rates than previous large-scale studies of comparable military samples. Copyright © 2013 John Wiley & Sons, Ltd.
International Journal of Methods in Psychiatric Research 12/2013; DOI:10.1002/mpr.1400
ABSTRACT: Although randomized controlled trials are considered the 'gold standard' for clinical studies, the use of exclusion criteria may impact the external validity of the results. It is unknown whether estimators of effect size are biased by excluding a portion of the target population from enrollment. We propose to use observational data to estimate the bias due to enrollment restrictions, which we term generalizability bias. In this paper, we introduce a class of estimators for the generalizability bias and use simulation to study its properties in the presence of non-constant treatment effects. We find the surprising result that our estimators can be unbiased for the true generalizability bias even when all potentially confounding variables are not measured. In addition, our proposed doubly robust estimator performs well even for mis-specified models. Copyright © 2013 John Wiley & Sons, Ltd.
Statistics in Medicine 02/2014; 32(20). DOI:10.1002/sim.5802
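The "doubly robust" property mentioned in this abstract can be illustrated with a generic augmented IPW (AIPW) estimator — this is not the paper's generalizability-bias estimator, just a standard demonstration of why double robustness tolerates mis-specified models: the estimate remains consistent if either the propensity model or the outcome model is correct. The simulated data and the deliberately wrong (constant) propensity model are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- simulate: true treatment effect = 1.5, confounding through X ---
n = 4000
X = rng.normal(size=n)
T = rng.binomial(1, 1.0 / (1.0 + np.exp(-X)))
Y = 1.5 * T + X + rng.normal(size=n)

# Deliberately MIS-SPECIFY the propensity model (a constant),
# but use correctly specified linear outcome models in each arm.
e_hat = np.full(n, T.mean())        # wrong propensity model

def ols_predict(Xa, Ya, Xall):
    """Fit intercept + slope by least squares on one arm; predict for everyone."""
    Z = np.column_stack([np.ones_like(Xa), Xa])
    b = np.linalg.lstsq(Z, Ya, rcond=None)[0]
    return b[0] + b[1] * Xall

m1 = ols_predict(X[T == 1], Y[T == 1], X)   # E[Y | X, T=1] model
m0 = ols_predict(X[T == 0], Y[T == 0], X)   # E[Y | X, T=0] model

# AIPW: outcome-model prediction plus a propensity-weighted residual term.
aipw = (np.mean(T * (Y - m1) / e_hat + m1)
        - np.mean((1 - T) * (Y - m0) / (1 - e_hat) + m0))
print(f"AIPW estimate: {aipw:.2f}  (true effect in this simulation: 1.5)")
```

Because the outcome models are correct here, the residual augmentation terms are approximately mean-zero and the estimate stays near 1.5 despite the broken propensity model; the symmetric case (correct propensity, wrong outcome model) also works, which is the sense in which such estimators "perform well even for mis-specified models."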