Noninferiority trial designs for odds ratios and risk differences

Department of Epidemiology & Biostatistics, University of California San Francisco, 185 Berry Street, Suite 5700, San Francisco, CA 94107-1762, U.S.A.
Statistics in Medicine (Impact Factor: 1.83). 04/2010; 29(9):982-93. DOI: 10.1002/sim.3846
Source: PubMed


This study presents constrained maximum likelihood derivations of the design parameters of noninferiority trials for binary outcomes with the margin defined on the odds ratio (ψ) or risk-difference (δ) scale. The derivations show that, for trials in which the group-specific response rates are equal under the point-alternative hypothesis, the common response rate, π(N), is a fixed design parameter whose value lies between the control and experimental rates hypothesized at the point-null, {π(C), π(E)}. We show that setting π(N) equal to the value of π(C) that holds under H(0) underestimates the overall sample size requirement. Given {π(C), ψ} or {π(C), δ} and the type I and II error rates, our algorithm finds clinically meaningful design values of π(N), and the corresponding minimum asymptotic sample size, N=n(E)+n(C), and optimal allocation ratio, γ=n(E)/n(C). We find that optimal allocations are increasingly imbalanced as ψ increases, with γ(ψ)<1 and γ(δ)≈1/γ(ψ), and that ranges of allocation ratios map to the minimum sample size. The latter characteristic allows trialists to consider trade-offs between optimal allocation at a smaller N and a preferred allocation at a larger N. For designs with relatively large margins (e.g. ψ>2.5), trial results that are presented on both scales will differ in power, with more power lost if the study is designed on the risk-difference scale and reported on the odds ratio scale than vice versa.
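For contrast with the constrained-ML approach above, the simple unconstrained (Wald-type) calculation that sets the common response rate π(N) equal to π(C), the very choice the abstract shows underestimates N, can be sketched as follows. The function name, defaults, and rounding are illustrative assumptions, not the authors' algorithm:

```python
from math import ceil
from statistics import NormalDist

def ni_sample_size_rd(pi_c, delta, alpha=0.025, power=0.9, gamma=1.0):
    """Per-group sample sizes (n_E, n_C) for a noninferiority trial with
    a binary outcome and margin delta on the risk-difference scale.

    Naive unconstrained formula that sets the common response rate under
    the alternative equal to pi_c; per the abstract, this underestimates
    the requirement relative to the constrained-ML design values.
    gamma is the allocation ratio n_E / n_C.
    """
    z_a = NormalDist().inv_cdf(1 - alpha)  # one-sided type I error
    z_b = NormalDist().inv_cdf(power)
    # variance of the rate difference, scaled so that Var = var / n_C
    var = pi_c * (1 - pi_c) * (1 + 1 / gamma)
    n_c = (z_a + z_b) ** 2 * var / delta ** 2
    return ceil(gamma * n_c), ceil(n_c)
```

Running, e.g., `ni_sample_size_rd(0.8, 0.1)` also illustrates the allocation result quoted in the abstract in reverse: moving gamma away from the optimum inflates the total N for a fixed margin.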

  • Article · Sep 2011 · European Journal of Cancer
  • ABSTRACT: Adjusting for covariates makes efficient use of data and can improve the precision of study results or even reduce sample sizes. There is no easy way to adjust for covariates in a non-inferiority study for which the margin is defined as a risk difference. Adjustment is straightforward on the logit scale, but reviews of clinical studies suggest that the analysis is more often conducted on the more interpretable risk-difference scale. We examined four methods that allow for adjustment on the risk-difference scale: stratified analysis with Cochran-Mantel-Haenszel (CMH) weights, binomial regression with an identity link, the use of a Taylor approximation to convert results from the logit to the risk-difference scale, and conversion of the risk-difference margin to the odds-ratio scale. These methods were compared using simulated data based on trials in HIV. We found that the CMH approach had the best trade-off between increased efficiency in the presence of predictive covariates and problems in analysis at extreme response rates. These results were shared with regulatory agencies in Europe and the USA, and the advice received is described.
    Article · Sep 2011 · Pharmaceutical Statistics
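The stratified CMH-weighted estimate on the risk-difference scale described in the abstract above can be sketched as follows; the weights w = n_E·n_C/(n_E+n_C) are the standard CMH weights for a risk difference, but the function name and data layout are illustrative, not taken from the article:

```python
def cmh_risk_difference(strata):
    """Cochran-Mantel-Haenszel weighted risk difference.

    strata: list of (x_e, n_e, x_c, n_c) tuples, one per stratum, giving
    responders and totals in the experimental and control arms.
    Illustrative helper, not code from the article.
    """
    num = den = 0.0
    for x_e, n_e, x_c, n_c in strata:
        w = n_e * n_c / (n_e + n_c)  # CMH weight for this stratum
        num += w * (x_e / n_e - x_c / n_c)
        den += w
    return num / den
```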
  • ABSTRACT: Asymptotic simultaneous lower (upper) confidence bounds for risk differences arising from comparing several treatments to a common control are constructed by inverting the maximum (minimum) of score statistics. With a few exceptions, these bounds perform better in terms of simultaneous coverage probability than procedures based on adjusted Wald methods (e.g., adding pseudo-observations), especially over relevant parts of the parameter space in superiority or inferiority studies. A further improvement is realized by using an appropriate multiplicity adjusted critical value that takes advantage of the correlation information in the score statistics estimated under the null instead of a regular plug-in estimate. Simulation results and a worked example show a gain in terms of the precision of the lower bounds and their power; however, not too much is lost when using the straightforward Sidak multiplicity adjustment when the number of comparisons is small. All methods discussed are implemented and reproducible with general and publicly available R code.
    Article · May 2012 · Computational Statistics & Data Analysis
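The core idea of the abstract above, a confidence bound obtained by inverting a score test with the restricted MLE computed under the null, can be sketched for a single comparison. The article handles several comparisons to a common control via the maximum of score statistics with a multiplicity-adjusted critical value; this single-comparison sketch, with an assumed function name and a simple numerical search rather than the closed-form restricted MLE, only illustrates the inversion step:

```python
from math import log, sqrt
from statistics import NormalDist

def score_lower_bound_rd(x1, n1, x0, n0, alpha=0.05):
    """One-sided lower confidence bound for p1 - p0 by inverting a score
    test; the restricted MLE under p1 - p0 = d is found by ternary search
    on the concave constrained log-likelihood. Illustrative sketch only."""
    z = NormalDist().inv_cdf(1 - alpha)
    p1, p0 = x1 / n1, x0 / n0

    def restricted_mle(d):
        # maximize the binomial log-likelihood over p0 subject to p1 = p0 + d
        lo, hi = max(0.0, -d) + 1e-9, min(1.0, 1.0 - d) - 1e-9
        def ll(q):
            a, b = q + d, q
            return (x1 * log(a) + (n1 - x1) * log(1 - a)
                    + x0 * log(b) + (n0 - x0) * log(1 - b))
        for _ in range(200):  # ternary search on a concave function
            m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
            if ll(m1) < ll(m2):
                lo = m1
            else:
                hi = m2
        return (lo + hi) / 2

    def zscore(d):
        q0 = restricted_mle(d)
        q1 = q0 + d
        se = sqrt(q1 * (1 - q1) / n1 + q0 * (1 - q0) / n0)
        return (p1 - p0 - d) / se

    # zscore(d) decreases in d; bisect for the d where it equals z
    lo, hi = -1.0 + 1e-6, p1 - p0
    for _ in range(60):
        mid = (lo + hi) / 2
        if zscore(mid) > z:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```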