Case Definition and Design Sensitivity
Journal of the American Statistical Association (Impact Factor: 1.98). 12/2013; 108(504):1457-1468. DOI: 10.1080/01621459.2013.820660
In a case-referent study, cases of disease are compared to non-cases with respect to their antecedent exposure to a treatment in an effort to determine whether exposure causes some cases of the disease. Because exposure is not randomly assigned in the population, as it would be if the population were a vast randomized trial, exposed and unexposed subjects may differ prior to exposure with respect to covariates that may or may not have been measured. After controlling for measured pre-exposure differences, for instance by matching, a sensitivity analysis asks about the magnitude of bias from unmeasured covariates that would need to be present to alter the conclusions of a study that presumed matching for observed covariates removes all bias. The definition of a case of disease affects sensitivity to unmeasured bias. We explore this issue using: (i) an asymptotic tool, the design sensitivity, (ii) a simulation for finite samples, and (iii) an example. Under favorable circumstances, a narrower case definition can yield an increase in the design sensitivity, and hence an increase in the power of a sensitivity analysis. Also, we discuss an adaptive method that seeks to discover the best case definition from the data at hand while controlling for multiple testing. An implementation in R is available as SensitivityCaseControl.
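To make the idea of a sensitivity analysis concrete, here is a minimal sketch for the simplest matched design: case-referent pairs with a binary exposure, analyzed with a McNemar-type sign test. Under a Rosenbaum-style model in which hidden bias may shift the odds of exposure within a pair by at most a factor gamma, the chance that the case (rather than the referent) is the exposed member of a discordant pair is at most gamma/(1 + gamma), and an upper bound on the one-sided p-value is a binomial tail at that rate. This generic sketch is not the interface of the SensitivityCaseControl package, and the counts in the example are hypothetical.

```python
from math import comb

def sensitivity_upper_pvalue(exposed_case_pairs, discordant_pairs, gamma):
    """Upper bound on the one-sided McNemar p-value when hidden bias may
    shift the within-pair odds of exposure by at most `gamma` (gamma >= 1).

    With bias at most gamma, the probability that the case is the exposed
    member of a discordant pair is at most gamma / (1 + gamma), so the
    bound is the corresponding binomial upper tail.
    """
    p = gamma / (1.0 + gamma)
    n, t = discordant_pairs, exposed_case_pairs
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t, n + 1))

# Hypothetical study: 50 discordant pairs, case exposed in 35 of them.
print(sensitivity_upper_pvalue(35, 50, 1.0))  # gamma = 1: no hidden bias
print(sensitivity_upper_pvalue(35, 50, 2.0))  # bound allowing moderate bias
```

At gamma = 1 the bound reduces to the usual randomization p-value; the study is "sensitive" at the value of gamma where the bound first crosses the chosen significance level.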

ABSTRACT: An observational study draws inferences about treatment effects when treatments are not randomly assigned, as they would be in a randomized experiment. The naive analysis of an observational study assumes that adjustments for measured covariates suffice to remove bias from nonrandom treatment assignment. A sensitivity analysis in an observational study determines the magnitude of bias from nonrandom treatment assignment that would need to be present to alter the qualitative conclusions of the naive analysis, say leading to the acceptance of a null hypothesis rejected in the naive analysis. Observational studies vary greatly in their sensitivity to unmeasured biases, but a poor choice of test statistic can lead to an exaggerated report of sensitivity to bias. The Bahadur efficiency of a sensitivity analysis is introduced, calculated, and connected to established concepts, such as the power of a sensitivity analysis and the design sensitivity. The Bahadur slope equals zero when the sensitivity parameter equals the design sensitivity, but the Bahadur slope permits more refined distinctions. Specifically, the Bahadur relative efficiency can also compare the relative performance of two test statistics at a value of the sensitivity parameter below the minimum of their design sensitivities. Adaptive procedures that combine several tests can achieve the best design sensitivity and the best Bahadur slope of their component tests. Ultimately, in sufficiently large sample sizes, design sensitivity is more important than efficiency for the power of a sensitivity analysis, and the exponential rate at which design sensitivity overtakes efficiency is characterized.
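The power of a sensitivity analysis mentioned above can be estimated by simulation: generate data from a favorable situation (a real treatment effect, no hidden bias), run the sensitivity analysis at a fixed gamma, and record how often the bound still rejects. The sketch below does this for the sign test on matched-pair differences drawn from N(tau, 1); for that test the design sensitivity works out to p/(1 - p) with p = P(difference > 0), about 2.24 when tau = 0.5. All parameter choices here are hypothetical illustrations, not values from the paper.

```python
import random
from math import comb

def upper_pvalue(t, n, gamma):
    """Upper bound on the one-sided sign-test p-value allowing hidden
    bias of at most `gamma`: binomial tail at rate gamma / (1 + gamma)."""
    p = gamma / (1.0 + gamma)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t, n + 1))

def power_of_sensitivity_analysis(n_pairs, tau, gamma, sims=2000,
                                  alpha=0.05, seed=0):
    """Fraction of simulated studies, with pair differences ~ N(tau, 1)
    and no actual hidden bias, in which the gamma-bound still rejects."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        t = sum(1 for _ in range(n_pairs) if rng.gauss(tau, 1.0) > 0)
        if upper_pvalue(t, n_pairs, gamma) <= alpha:
            hits += 1
    return hits / sims

# With tau = 0.5, power should be high for gamma well below the design
# sensitivity (about 2.24) and collapse toward zero well above it.
print(power_of_sensitivity_analysis(200, 0.5, 1.5, sims=500))
print(power_of_sensitivity_analysis(200, 0.5, 3.0, sims=500))
```

As the sample size grows, this step from high to negligible power sharpens around the design sensitivity, which is why design sensitivity eventually dominates efficiency, as the abstract notes.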
ABSTRACT: A common practice with ordered doses of treatment and ordered responses, perhaps recorded in a contingency table with ordered rows and columns, is to cut or remove a cross from the table, leaving the outer corners (that is, the high-versus-low dose, high-versus-low response corners) and from these corners to compute a risk or odds ratio. This little-remarked but common practice seems to be motivated by the oldest and most familiar method of sensitivity analysis in observational studies, proposed by Cornfield et al. (1959), which says that to explain a population risk ratio purely as bias from an unobserved binary covariate, the prevalence ratio of the covariate must exceed the risk ratio. Quite often, the largest risk ratio, hence the one least sensitive to bias by this standard, is derived from the corners of the ordered table with the central cross removed. Obviously, the corners use only a portion of the data, so a focus on the corners has consequences for the standard error as well as for bias, but sampling variability was not a consideration in this early and familiar form of sensitivity analysis, where point estimates replaced population parameters. Here, this cross-cut analysis is examined with the aid of design sensitivity and the power of a sensitivity analysis. © 2015, The International Biometric Society.
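The cross-cut described above is simple to state in code: delete the middle row(s) and column(s) of the ordered table, keep the four corner cells, and compute their odds ratio; Cornfield's criterion then says an unobserved binary covariate can fully explain that ratio only if its prevalence ratio between dose groups exceeds it. The table below is a made-up illustration, not data from the paper.

```python
def corner_odds_ratio(table):
    """Cross-cut analysis of an ordered dose x response table:
    drop the central cross (all middle rows and columns), keep the
    four corner cells, and compute their odds ratio."""
    a, b = table[0][0], table[0][-1]     # low dose: low / high response
    c, d = table[-1][0], table[-1][-1]   # high dose: low / high response
    return (a * d) / (b * c)

# Hypothetical 3x3 table: rows = dose (low to high),
# columns = response (low to high).
table = [
    [40, 25, 10],
    [30, 30, 20],
    [10, 25, 40],
]
# Corner odds ratio (40 * 40) / (10 * 10):
print(corner_odds_ratio(table))  # 16.0
# By Cornfield et al. (1959), a binary unobserved covariate could fully
# explain this association only if its prevalence ratio across the
# extreme dose groups exceeded the observed ratio.
```

Because only four of the nine cells are used, the corner estimate trades sampling precision for reduced sensitivity to bias, which is exactly the trade-off the abstract examines via design sensitivity.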