Current issues in non-inferiority trials

Department of Biostatistics, University of Washington, Seattle, WA 98195, USA.
Statistics in Medicine 02/2008; 27(3):317-332. DOI: 10.1002/sim.2855
Source: PubMed


Non-inferiority (NI) trials enable a direct comparison of the relative benefit-to-risk profiles of an experimental intervention and a standard-of-care regimen. When the standard has clinical efficacy of substantial magnitude that has been precisely estimated, ideally from multiple adequate and well-controlled trials, and when those estimates remain relevant to the setting of the NI trial, the NI trial can provide the scientific and regulatory evidence required to reliably assess the efficacy of the new intervention. In clinical practice, considerable uncertainty remains about when such trials should be conducted, how they should be designed, what standards of trial conduct must be achieved, and how results should be interpreted. Recent examples are considered to provide insights and to highlight challenges that remain to be adequately addressed in using the NI approach to evaluate new interventions: 'imputed placebo' and 'margin'-based approaches to NI trial design; the risk of 'bio-creep' with repeated NI trials; the use of NI trials to determine whether excess safety risks can be ruled out; the higher standards of study conduct that NI trials require; and the myth that NI trials always require huge sample sizes.
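The margin-based approach mentioned in the abstract can be made concrete with a minimal sketch. All counts and the 10-percentage-point margin below are hypothetical: non-inferiority on a success-proportion endpoint is declared when the lower bound of the two-sided 95% CI for the difference (new minus standard) stays above the negative of the pre-specified margin.

```python
# Minimal sketch of a fixed-margin NI test on success proportions.
# All counts and the 0.10 margin are hypothetical illustrations.
from math import sqrt

def ni_test_proportions(x_new, n_new, x_std, n_std, margin, z=1.96):
    """Declare non-inferiority if the lower bound of the two-sided
    95% CI for (p_new - p_std) lies above -margin."""
    p_new, p_std = x_new / n_new, x_std / n_std
    diff = p_new - p_std
    se = sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
    lower = diff - z * se
    return lower > -margin, lower

# 410/500 successes on the new drug vs 420/500 on the standard
ok, lower = ni_test_proportions(410, 500, 420, 500, margin=0.10)
```

Note that the type I error control of this test rests entirely on the margin having been justified from reliable, still-relevant historical data, which is precisely the point the abstract stresses.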

    • "The methods we have described are especially appropriate for equivalence and non-inferiority trials because ITT analysis is known to be anti-conservative in such trials (Jones et al. 1996) whereas per-protocol analyses are potentially biased (Fleming, 2008). An alternative approach to handling non-adherence is the complier average causal effect (CACE; Dunn et al. 2003) model, but this is not well defined in trials comparing two active treatments and also requires adherence to be binary. "
    ABSTRACT: Background: Meta-analyses suggest that reboxetine may be less effective than other antidepressants. Such comparisons may be biased by lower adherence to reboxetine and subsequent handling of missing outcome data. This study illustrates how to adjust for differential non-adherence and hence derive an unbiased estimate of the efficacy of reboxetine compared with citalopram in primary care patients with depression. Method: A structural mean modelling (SMM) approach was used to generate adherence-adjusted estimates of the efficacy of reboxetine compared with citalopram using GENetic and clinical Predictors Of treatment response in Depression (GENPOD) trial data. Intention-to-treat (ITT) analyses were performed to compare estimates of effectiveness with results from previous meta-analyses. Results: At 6 weeks, 92% of those randomized to citalopram were still taking their medication, compared with 72% of those randomized to reboxetine. In ITT analysis, there was only weak evidence that those on reboxetine had a slightly worse outcome than those on citalopram [adjusted difference in mean Beck Depression Inventory (BDI) scores: 1.19, 95% confidence interval (CI) -0.52 to 2.90, p = 0.17]. There was no evidence of a difference in efficacy when differential non-adherence was accounted for using the SMM approach for mean BDI (-0.29, 95% CI -3.04 to 2.46, p = 0.84) or the other mental health outcomes. Conclusions: There was no evidence of a difference in the efficacy of reboxetine and citalopram when these drugs are taken and tolerated by depressed patients. The SMM approach can be implemented in standard statistical software to adjust for differential non-adherence and generate unbiased estimates of treatment efficacy for comparisons of two (or more) active interventions.
    Psychological Medicine 03/2014; 44(13):1-12. DOI: 10.1017/S0033291714000221
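The excerpt above contrasts structural mean models with the CACE model of Dunn et al., which is well defined in the classic one-active-arm setting. In that setting the simplest adherence adjustment is the Wald estimator, sketched below with hypothetical numbers; this is deliberately not the paper's SMM, which handles two active treatments, but it conveys the same intuition that non-adherence dilutes the ITT effect.

```python
# Toy Wald/CACE estimator for a treatment-vs-control trial (NOT the
# paper's SMM). Numbers below are hypothetical.
def cace_wald(itt_effect, compliance_rate):
    """Under randomization and the exclusion restriction, the ITT
    effect equals the complier effect diluted by non-compliance, so
    dividing by the compliance rate recovers the complier effect."""
    return itt_effect / compliance_rate

# Hypothetical: an ITT difference of 1.2 BDI points with 72% adherence
adjusted = cace_wald(1.2, 0.72)
```

The division makes the direction of the bias explicit: with partial adherence the ITT estimate is always closer to zero than the effect among those who actually took the drug.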
    • "Wiens and Zhao (2007) listed missing data as an important topic in noninferiority trials requiring further research. Fleming (2008) mentioned that missing data tend to reduce sensitivity of analyses to true differences, thereby biasing noninferiority trials toward a conclusion of no difference. "
    ABSTRACT: Missing data in clinical trials have been widely discussed in the literature, but issues specific to missing data in noninferiority trials have rarely been addressed. The goal of this article is to present missing data issues that are particularly important in noninferiority trials. Issues of assay sensitivity and the constancy assumption are affected by missing data. Importantly, these issues are not solved by per-protocol analyses, which remove patient data based on postrandomization criteria. We advocate collecting data to the extent possible for sensitivity analyses. We discuss some other issues that remain unresolved in assessing the impact of missing data in noninferiority trials. A simulation analysis of different strategies for assessing noninferiority in the presence of missing data is reported for a clinical trial comparing two treatments. Single imputation procedures and observed case analyses resulted in reduced power due to missing data and occasionally in inflation of the Type I error rate or bias in estimates of treatment effect. The mixed-effect model repeated measures approach controlled the Type I error rate when data are missing at random, often with higher power than the other two methods. Further work on multiple imputation procedures is desired.
    Statistics in Biopharmaceutical Research 11/2013; 5(4):383-393. DOI: 10.1080/19466315.2013.847383
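Fleming's point quoted above, that missing data bias NI trials toward "no difference", can be illustrated with a toy simulation (all numbers hypothetical). Under LOCF single imputation, dropouts in both arms contribute zero change from baseline, so each arm's mean change is dragged toward zero and the observed contrast shrinks, which flatters a truly inferior arm.

```python
# Toy simulation of LOCF dilution. True changes, dropout rate, and
# sample sizes are all hypothetical.
import random

random.seed(1)

def simulate_arm(n, true_change, dropout_rate):
    """Per-patient changes under LOCF: dropouts contribute a change
    of 0 because their baseline value is carried forward."""
    changes = []
    for _ in range(n):
        if random.random() < dropout_rate:
            changes.append(0.0)  # dropout -> baseline carried forward
        else:
            changes.append(random.gauss(true_change, 1.0))
    return changes

std = simulate_arm(500, true_change=2.0, dropout_rate=0.3)
new = simulate_arm(500, true_change=1.0, dropout_rate=0.3)
diff_locf = sum(new) / len(new) - sum(std) / len(std)
# True difference is -1.0; with 30% dropout LOCF shrinks its expected
# value to -0.7, i.e. toward the "no difference" conclusion.
```

In a superiority trial this dilution costs power; in an NI trial it works in favor of the new treatment, which is why it is anti-conservative there.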
    • "There have been a growing number of new drug applications in which the efficacy of the new treatment is established with noninferiority trials that do not contain a placebo arm. There are many issues and controversies regarding noninferiority trials (D'Agostino et al., 2003; Tsong et al., 2003; Tsong, 2007; Hung et al., 2003; Wang and Hung, 2003; Hung et al., 2007; Fleming, 2008; Hung et al., 2009; FDA, 2010). One of the issues is the dependency(ies) of noninferiority tests. "
    ABSTRACT: This article deals with the dependency(ies) of noninferiority test(s) when the two confidence interval method is employed. There are two different definitions of the two confidence interval method. One of the objectives of this article is to sort out some of the confusion in these two different definitions. In the first definition the two confidence interval method is considered as the fixed margin method that treats a noninferiority margin as a fixed constant after it is determined based on historical data. In this article the method is called the two confidence interval method with fixed margin. The issue of the dependency(ies) of noninferiority test(s) does not occur in this case. In the second definition the two confidence interval method incorporates the uncertainty associated with the estimation for the noninferiority margin. In this article the method is called the two confidence interval method with random margin. The dependency(ies) occurs, because the two confidence interval method(s) with random margin shares the same historical data. In this article we investigate how the dependency(ies) affects the unconditional and conditional across-trial type I error rates.
    Journal of Biopharmaceutical Statistics 03/2013; 23(2):307-321. DOI: 10.1080/10543406.2011.616965
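The fixed-margin ("two confidence interval") method described in this abstract can be sketched in two steps with hypothetical numbers: the margin is derived once from the historical CI for the standard-vs-placebo effect and then treated as a constant in the NI trial. The 50% effect-preservation fraction below is one common convention, not a universal rule.

```python
# Minimal sketch of the two-confidence-interval method with a FIXED
# margin. All effect sizes and standard errors are hypothetical.
from math import sqrt  # kept for symmetry with SE calculations

# Step 1: margin from historical standard-vs-placebo data
hist_effect, hist_se = 0.40, 0.08        # e.g. a risk difference
m1 = hist_effect - 1.96 * hist_se        # conservative historical effect
m2 = 0.5 * m1                            # preserve 50% of that effect

# Step 2: NI trial CI for (standard - new); m2 is now a fixed constant
trial_diff, trial_se = 0.03, 0.04
upper = trial_diff + 1.96 * trial_se
noninferior = upper < m2                 # NI if loss of effect < m2
```

In the random-margin variant the abstract analyzes, Step 1's uncertainty is carried into the test, and because multiple NI trials may reuse the same historical data, their tests become dependent; that dependency is exactly what the fixed-margin sketch above ignores.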