Article

A Multi-Treatment Two Stage Adaptive Allocation for Survival Outcomes

Taylor & Francis
Communications in Statistics - Theory and Methods

Abstract

A multi-treatment two-stage adaptive allocation design is developed for survival responses. Assuming noninformative random censoring, asymptotic p-values of relevant tests of equality of treatment effects are used to derive the assignment probability of incoming second-stage subjects. Several ethical and inferential criteria of the design are studied and compared with those of an existing competitor. Applicability and performance of the proposed design are also illustrated using data arising from a real clinical trial.
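
The allocation rule summarized above can be sketched for the simplest two-treatment special case. The snippet below is an illustration, not the paper's actual multi-treatment procedure: it assumes exponential survival with independent exponential censoring, stands in a log hazard-ratio z-test (using Var(log rate) ≈ 1/events) for the paper's test of equal treatment effects, and turns the resulting one-sided p-value into the stage-two assignment probability. All function names and simulation settings are hypothetical.

```python
import math
import random

def exp_rate_mle(times, events):
    """Exponential hazard MLE under random censoring: events / total exposure."""
    d = sum(events)
    return d, d / sum(times)

def stage_two_prob(t1, e1, t2, e2):
    """Stage-2 probability of assigning treatment 1, driven by the stage-1
    one-sided p-value of a log hazard-ratio test (Var(log rate) ~ 1/events)."""
    d1, lam1 = exp_rate_mle(t1, e1)
    d2, lam2 = exp_rate_mle(t2, e2)
    z = (math.log(lam1) - math.log(lam2)) / math.sqrt(1 / d1 + 1 / d2)
    p = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # P(arm 1 hazard looks larger)
    return 1 - p  # smaller hazard on arm 1 (longer survival) -> prob near 1

def simulate_arm(hazard, n, rng):
    """Exponential lifetimes with independent exponential censoring (~23%)."""
    t = [rng.expovariate(hazard) for _ in range(n)]
    c = [rng.expovariate(0.3 * hazard) for _ in range(n)]
    return ([min(a, b) for a, b in zip(t, c)],
            [int(a <= b) for a, b in zip(t, c)])

rng = random.Random(1)
t1, e1 = simulate_arm(0.5, 50, rng)   # arm 1: truly better (lower hazard)
t2, e2 = simulate_arm(1.0, 50, rng)
pi = stage_two_prob(t1, e1, t2, e2)
arm = 1 if rng.random() < pi else 2   # one stage-2 assignment
```

The key design feature survives the simplification: the closer the stage-1 p-value is to 0 in favor of an arm, the more stage-2 patients are skewed toward that arm.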

... Additionally, we redesign the same real clinical trial (i.e., the recurrent glioblastoma trial of Batchelor et al. (2013), as considered in Bhattacharya and Shome (2019)) and study the deviations under model misspecification. After a brief discussion of the two-stage design of Bhattacharya and Shome (2019) in section 2, we evaluate the performance of the design empirically under model violations in section 3. In section 4, we discuss how model misspecification affects the performance of a real clinical trial and, lastly, we conclude the work with a relevant discussion in section 5. ...
... For the evaluation of the allocation design of Bhattacharya and Shome (2019) from the perspective of a real clinical practitioner, we consider the same real clinical trial considered in Bhattacharya and Shome (2019), that is, the recurrent glioblastoma trial of Batchelor et al. (2013), where the efficacies of cediranib 20 mg in combination with lomustine (Treatment 1), cediranib 30 mg alone (Treatment 2) and lomustine alone (Treatment 3) are evaluated. 325 patients with recurrent glioblastoma were randomly assigned using a 2:2:1 randomization to receive treatments 1, 2 and 3, respectively. ...
Article
Full-text available
Multi-treatment two-stage adaptive designs for survival responses are generally developed under different assumptions. In this work, we explore the performance of such a design when the assumptions are violated. As a choice of design, we consider a specific design of Bhattacharya and Shome (2019), which uses random censoring, exponential responses, the Koziol-Green model (Koziol and Green, 1976), etc. Several ethical and inferential criteria of the design are studied under model misspecification for different parameter configurations as well as for data arising from a real clinical trial. IJSS, Vol. 24(2) Special, December 2024, pp. 67-79.
... Although the empirical study is primarily performed under the assumption that both the survival and censoring variables follow exponential distributions, the effect of model misspecification has also been studied for Weibull and lognormal distributions. For the purpose of comparison, in addition to CR, we consider the two-stage design of Bhattacharya and Shome (2019) as another competitor, where the censoring scheme is random and the lifetime and censoring random variables are independent. In Section 4 we further redesign a real clinical trial, namely, the locally advanced nonnasopharyngeal head and neck cancer trial (Fountzilas et al. 2004), to envisage the practical applicability of the proposed procedure. ...
Article
In the context of clinical trials, a multi-treatment two-stage adaptive randomization procedure is developed for survival responses with copula-based nonrandom censoring. Because no such allocation design is available for survival responses with nonrandom censoring, this is probably the first work in this direction. After allocating the first-stage patients by complete randomization (CR), asymptotic p-values of relevant statistical tests are used to derive the allocation probabilities for the second-stage incoming patients. Several design and precision-based operating characteristics of the proposed design are studied both theoretically and empirically and compared with those of the CR procedure and the two-stage design of Bhattacharya and Shome (2019). Applicability of the proposed design is further envisaged through redesigning a real clinical trial with cancer patients.
... Consequently, we incorporate the two delay mechanisms introduced earlier in this work and simulate accordingly. The first delay mechanism is the uniform delay of Bhattacharya and Shome (2019), and the second one is the Poisson distribution-based natural delay of Hardwick et al. (2006), where the patients arrive at a rate λ per time unit. Here, we take λ = 1 and scale the entry times in such a way that the last patient enters after 55 months. ...
Article
Full-text available
Compromising between ethics and precision in the context of a multiarmed clinical trial, an optimal order adjusted response adaptive design is proposed for survival outcomes subject to independent random censoring. The operating characteristics of the proposed design and the follow-up inference are studied both theoretically as well as empirically and are compared with those of the competitors. Applicability of the developed design is further illustrated through redesigning a real clinical trial with survival responses.
Article
Full-text available
PURPOSE: A randomized, phase III, placebo-controlled, partially blinded clinical trial (REGAL [Recentin in Glioblastoma Alone and With Lomustine]) was conducted to determine the efficacy of cediranib, an oral pan-vascular endothelial growth factor (VEGF) receptor tyrosine kinase inhibitor, either as monotherapy or in combination with lomustine versus lomustine in patients with recurrent glioblastoma. PATIENTS AND METHODS: Patients (N = 325) with recurrent glioblastoma who previously received radiation and temozolomide were randomly assigned 2:2:1 to receive (1) cediranib (30 mg) monotherapy; (2) cediranib (20 mg) plus lomustine (110 mg/m(2)); (3) lomustine (110 mg/m(2)) plus a placebo. The primary end point was progression-free survival based on blinded, independent radiographic assessment of postcontrast T1-weighted and noncontrast T2-weighted magnetic resonance imaging (MRI) brain scans. RESULTS: The primary end point of progression-free survival (PFS) was not significantly different for either cediranib alone (hazard ratio [HR] = 1.05; 95% CI, 0.74 to 1.50; two-sided P = .90) or cediranib in combination with lomustine (HR = 0.76; 95% CI, 0.53 to 1.08; two-sided P = .16) versus lomustine based on independent or local review of postcontrast T1-weighted MRI. CONCLUSION: This study did not meet its primary end point of PFS prolongation with cediranib either as monotherapy or in combination with lomustine versus lomustine in patients with recurrent glioblastoma, although cediranib showed evidence of clinical activity on some secondary end points including time to deterioration in neurologic status and corticosteroid-sparing effects.
Article
Full-text available
Randomized trials with dropouts or censored data and discrete time-to-event type outcomes are frequently analyzed using the Kaplan–Meier or product limit (PL) estimation method. However, the PL method assumes that the censoring mechanism is noninformative and when this assumption is violated, the inferences may not be valid. We propose an expanded PL method using a Bayesian framework to incorporate informative censoring mechanism and perform sensitivity analysis on estimates of the cumulative incidence curves. The expanded method uses a model, which can be viewed as a pattern mixture model, where odds for having an event during the follow-up interval (t_{k-1}, t_k], conditional on being at risk at t_{k-1}, differ across the patterns of missing data. The sensitivity parameters relate the odds of an event, between subjects from a missing-data pattern with the observed subjects for each interval. The large number of the sensitivity parameters is reduced by considering them as random and assumed to follow a log-normal distribution with prespecified mean and variance. Then we vary the mean and variance to explore sensitivity of inferences. The missing at random (MAR) mechanism is a special case of the expanded model, thus allowing exploration of the sensitivity to inferences as departures from the inferences under the MAR assumption. The proposed approach is applied to data from the TRial Of Preventing HYpertension.
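
The baseline that the expanded method above generalizes is the ordinary product-limit (Kaplan–Meier) estimator under noninformative censoring. A minimal sketch on hypothetical data:

```python
def kaplan_meier(times, events):
    """Product-limit estimate of S(t): step down at each distinct event time,
    assuming noninformative censoring (events: 1 = event, 0 = censored)."""
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        at_risk = sum(1 for x in times if x >= t)
        deaths = sum(1 for x, e in zip(times, events) if x == t and e == 1)
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
    return curve

# hypothetical data: events at times 1, 2, 4; censored at 3 and 5
curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
# survival steps: 0.8 after t=1, 0.6 after t=2, 0.3 after t=4
```

Each factor (1 - d/n) is the conditional probability of surviving past an event time; under informative censoring these factors are biased, which is exactly what the pattern-mixture expansion above is built to probe.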
Book
Randomised Response-Adaptive Designs in Clinical Trials presents methods for the randomised allocation of treatments to patients in sequential clinical trials. Emphasizing the practical application of clinical trial designs, the book is designed for medical and applied statisticians, clinicians, and statisticians in training. After introducing clinical trials in drug development, the authors assess a simple adaptive design for binary responses without covariates. They discuss randomisation and covariate balance in normally distributed responses and cover many important response-adaptive designs for binary responses. The book then develops response-adaptive designs for continuous and longitudinal responses, optimum designs with covariates, and response-adaptive designs with covariates. It also covers response-adaptive designs that are derived by optimising an objective function subject to constraints on the variance of estimated parametric functions. The concluding chapter explores future directions in the development of adaptive designs.
Book
This greatly expanded second edition of Survival Analysis - A Self-Learning Text provides a highly readable description of state-of-the-art methods of analysis of survival/event-history data. This text is suitable for researchers and statisticians working in the medical and other life sciences as well as statisticians in academia who teach introductory and second-level courses on survival analysis. The second edition continues to use the unique "lecture-book" format of the first (1996) edition with the addition of three new chapters on advanced topics: Chapter 7: Parametric Models; Chapter 8: Recurrent Events; Chapter 9: Competing Risks. Also, the Computer Appendix has been revised to provide step-by-step instructions for using the computer packages STATA (Version 7.0), SAS (Version 8.2), and SPSS (Version 11.5) to carry out the procedures presented in the main text. The original six chapters have been modified slightly to expand and clarify aspects of survival analysis in response to suggestions by students, colleagues and reviewers, and to add theoretical background, particularly regarding the formulation of the (partial) likelihood functions for proportional hazards, stratified, and extended Cox regression models. David Kleinbaum is Professor of Epidemiology at the Rollins School of Public Health at Emory University, Atlanta, Georgia. Dr. Kleinbaum is internationally known for innovative textbooks and teaching on epidemiological methods, multiple linear regression, logistic regression, and survival analysis. He has provided extensive worldwide training in over 150 short courses on statistical and epidemiological methods. He is also the author of ActivEpi (2002), an interactive computer-based instructional text on fundamentals of epidemiology, which has been used in a variety of educational environments including distance learning.
Mitchel Klein is Research Assistant Professor with a joint appointment in the Department of Environmental and Occupational Health (EOH) and the Department of Epidemiology, also at the Rollins School of Public Health at Emory University. Dr. Klein is also co-author with Dr. Kleinbaum of the second edition of Logistic Regression - A Self-Learning Text (2002). He has regularly taught epidemiologic methods courses at Emory to graduate students in public health and in clinical medicine. He is responsible for the epidemiologic methods training of physicians enrolled in Emory's Master of Science in Clinical Research Program, and has collaborated with Dr. Kleinbaum both nationally and internationally in teaching several short courses on various topics in epidemiologic methods.
Article
Convex Hulls. Closure and Interior of a Set. Weierstrass's Theorem. Separation and Support of Sets. Convex Cones and Polarity. Polyhedral Sets, Extreme Points, and Extreme Directions. Linear Programming and the Simplex Method. Exercises. Notes and References.
Article
A randomized two-treatment allocation design, conducted in two stages, is proposed for a class of continuous response trials. Patients are assigned to each treatment in equal numbers in the first stage, and the p-value of a test of equality of treatment effects based on these data is used to determine the assignment probability of second-stage patients. Relevant properties of the proposed allocation design are investigated and compared with suitable competitors.
Book
Probability. Measure. Integration. Random Variables and Expected Values. Convergence of Distributions. Derivatives and Conditional Probability. Stochastic Processes. Appendix. Notes on the Problems. Bibliography. List of Symbols. Index.
Article
The bioequivalence problem is of practical importance because the approval of most generic drugs in the United States and the European Community (EC) requires the establishment of bioequivalence between the brand-name drug and the proposed generic version. The problem is theoretically interesting because it has been recognized as one for which the desired inference, instead of the usual significant difference, is practical equivalence. The concept of intersection-union tests is shown to clarify, simplify and unify bioequivalence testing. A test more powerful than the one currently specified by the FDA and EC guidelines is derived. The claim that the bioequivalence problem defined in terms of the ratio of parameters is more difficult than the problem defined in terms of the difference of parameters is refuted. The misconception that size-α bioequivalence tests generally correspond to 100(1-2α)% confidence sets is shown to lead to incorrect statistical practices, and should be abandoned. Techniques for constructing 100(1-α)% confidence sets that correspond to size-α bioequivalence tests are described. Finally, multiparameter bioequivalence problems are discussed.
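
The intersection-union logic described above is commonly instantiated as the two one-sided tests (TOST) procedure. The sketch below, under a normal approximation with entirely illustrative numbers (the margin and standard error are not regulatory constants), shows why the IUT p-value is simply the larger of the two one-sided p-values:

```python
import math

def tost_p(diff, se, margin):
    """Two one-sided tests (TOST), an intersection-union test: equivalence is
    declared only if BOTH H01: diff <= -margin and H02: diff >= +margin are
    rejected, so the IUT p-value is the larger one-sided p-value."""
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    p_lower = 1 - phi((diff + margin) / se)  # test of H01: diff <= -margin
    p_upper = phi((diff - margin) / se)      # test of H02: diff >= +margin
    return max(p_lower, p_upper)

# hypothetical numbers: estimated difference 0.05, standard error 0.04,
# equivalence margin 0.2
p = tost_p(0.05, 0.04, 0.2)      # well inside the margin: small p-value
p_far = tost_p(0.30, 0.04, 0.2)  # outside the margin: large p-value
```

Note that rejecting both one-sided nulls at level α is what corresponds to a 100(1-α)% confidence procedure here, which is the point the abstract makes against the 100(1-2α)% misconception.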
Article
In epidemiological studies, survival analyses are often carried out in order to better understand the onset of an event. The data have the particularity of being incomplete due to the different censoring phenomena. Traditional methods make the hypothesis of censoring being independent from the event, which may be a source of bias in certain pathologies. The Inverse Probability of Censoring Weighted (IPCW) method adapts the Kaplan-Meier estimators and the Cox partial likelihood method to cases, with non-independent censoring. This method uses the information resulting from censoring to modify the contribution of individuals in the estimators. This method is applied to asthma, a case in which therapists believe that patients lost to follow-up are patients who are in otherwise good health, and do not feel the necessity to consult a doctor (informative censoring).
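
The IPCW idea can be sketched in a few lines: estimate the censoring survival function G by a Kaplan-Meier fit with the event indicator flipped, then weight each observed event by 1/G(x-). With plain Kaplan-Meier censoring weights the estimate coincides with the usual product-limit value (which makes the toy example easy to check); in practice G would be modeled on covariates so that informative censoring changes the weights. All names and data below are hypothetical.

```python
def km_curve(times, flags):
    """Product-limit steps (time, value), stepping at times where flag == 1."""
    s, steps = 1.0, []
    for t in sorted(set(times)):
        at_risk = sum(1 for x in times if x >= t)
        d = sum(1 for x, f in zip(times, flags) if x == t and f == 1)
        if d:
            s *= 1 - d / at_risk
            steps.append((t, s))
    return steps

def ipcw_survival(times, events, horizon):
    """IPCW estimate of S(horizon): each observed event up to the horizon is
    weighted by 1/G(x-), where G is the censoring survival function
    (here: Kaplan-Meier with the event indicator flipped)."""
    cens = km_curve(times, [1 - e for e in events])
    def G_minus(u):  # left limit of the censoring-survival step function
        g = 1.0
        for t, val in cens:
            if t < u:
                g = val
        return g
    n = len(times)
    F = sum(1 / G_minus(x) for x, e in zip(times, events)
            if e == 1 and x <= horizon) / n
    return 1 - F

# hypothetical data: one subject censored at time 2
s3 = ipcw_survival([1, 2, 3, 4], [1, 0, 1, 1], horizon=3)
# with KM censoring weights this reproduces the product-limit value S(3) = 0.375
```

The asthma example in the abstract corresponds to replacing the KM-based G with a model in which healthy-looking patients have a higher censoring hazard, which lowers their weight contribution accordingly.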
Article
A simple cost function approach is proposed for designing an optimal clinical trial when a total of N patients with a disease are to be treated with one of two medical treatments. The cost function is constructed with but one cost, the consequences of treating a patient with the superior or inferior of the two treatments. Fixed sample size and sequential trials are considered. Minimax, maximin, and Bayesian approaches are used for determining the optimal size of a fixed sample trial and the optimal position of the boundaries of a sequential trial. Comparisons of the different approaches are made as well as comparisons of the results for the fixed and sequential plans.
Article
A clinical trial setting is considered in which two treatments are available for a particular ailment. The responses to the treatments are normally distributed with unknown means and a common known variance. A two-stage trial and a three-stage trial are studied. In the two-stage trial, patients are randomised equally in the first stage, and the better treatment at the end of this stage is used exclusively in the second stage. The three-stage trial permits a second randomised stage before a single treatment is selected. For these designs, the exact bias and variance of the estimated treatment difference at the end of the trial are derived. These quantities are also derived when there are time trends in the data. Numerical results indicate that the presence of time trends can seriously bias the estimated treatment difference and can also lead to an increase in its variance.
Book
An up-to-date approach to understanding statistical inference Statistical inference is finding useful applications in numerous fields, from sociology and econometrics to biostatistics. This volume enables professionals in these and related fields to master the concepts of statistical inference under inequality constraints and to apply the theory to problems in a variety of areas. Constrained Statistical Inference: Order, Inequality, and Shape Constraints provides a unified and up-to-date treatment of the methodology. It clearly illustrates concepts with practical examples from a variety of fields, focusing on sociology, econometrics, and biostatistics. The authors also discuss a broad range of other inequality-constrained inference problems that do not fit well in the contemplated unified framework, providing a meaningful way for readers to comprehend methodological resolutions. Chapter coverage includes: • Population means and isotonic regression • Inequality-constrained tests on normal means • Tests in general parametric models • Likelihood and alternatives • Analysis of categorical data • Inference on monotone density function, unimodal density function, shape constraints, and DMRL functions • Bayesian perspectives, including Stein's Paradox, shrinkage estimation, and decision theory.
Article
In the present work, we formulate a two-treatment single period two-stage adaptive allocation design for achieving larger allocation proportion to the better treatment arm in the course of the trial with increased precision of the parameter estimator. We examine some properties of the proposed rule and compare it with some of the existing allocation rules and report substantial gain in efficiency with a considerably larger number of allocations to the better treatment even for moderate sample sizes.
Article
A clinical trial setting is considered in which two treatments are available for a particular ailment. A two-stage trial is studied, in which patients are randomised equally in the first stage, and the better treatment at the end of this stage is used exclusively in the second stage. For exponential and Bernoulli responses, the exact bias and variance of the estimated treatment difference at the end of the trial are derived. Corresponding results for normal responses with unequal variances are also obtained, and the numerical accuracy of a normal approximation is investigated. The results indicate that the bias in estimation can be up to 25% when the size of the first stage is small, reducing to less than 7% for moderate first-stage sizes. For both exponential and Bernoulli responses, a normal approximation works well for moderate first-stage sizes, with the approximation for Bernoulli responses being slightly better.
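
The selection mechanism behind this bias is easy to probe by simulation. The sketch below is illustrative, not the authors' exact derivation: it runs the two-stage select-the-better rule on exponential responses (larger mean response taken as better, settings hypothetical) and Monte-Carlo-estimates the bias of the estimated treatment difference for a small versus a moderate first stage.

```python
import random
import statistics

def one_trial(mu1, mu2, n1, n2, rng):
    """Two-stage select-the-better trial on exponential responses: equal split
    of 2*n1 patients in stage 1, then the arm with the larger stage-1 mean
    receives all n2 stage-2 patients; returns the estimated difference."""
    a = [rng.expovariate(1 / mu1) for _ in range(n1)]
    b = [rng.expovariate(1 / mu2) for _ in range(n1)]
    if statistics.mean(a) >= statistics.mean(b):
        a += [rng.expovariate(1 / mu1) for _ in range(n2)]
    else:
        b += [rng.expovariate(1 / mu2) for _ in range(n2)]
    return statistics.mean(a) - statistics.mean(b)

def mc_bias(mu1, mu2, n1, n2, reps=4000, seed=7):
    """Monte Carlo estimate of the bias of the estimated treatment difference."""
    rng = random.Random(seed)
    est = [one_trial(mu1, mu2, n1, n2, rng) for _ in range(reps)]
    return statistics.mean(est) - (mu1 - mu2)

b_small = mc_bias(2.5, 2.0, n1=5, n2=40)   # small first stage
b_large = mc_bias(2.5, 2.0, n1=40, n2=40)  # moderate first stage
```

Conditioning the stage-2 allocation on which stage-1 mean was larger is what makes the pooled difference a biased estimator; comparing b_small and b_large mirrors the paper's small-versus-moderate first-stage contrast.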
Article
A randomized two-stage adaptive design is proposed and studied for allocation of patients to treatments and comparison in a phase III clinical trial with survival time as treatment responses. We consider the possibility of several covariates in the design and analysis. Several exact and limiting properties of the design and the follow-up inference are studied, both numerically and theoretically. The applicability of the proposed methodology is illustrated by using some real data.
Article
A randomized two-stage adaptive Bayesian design is proposed and studied for allocation and comparison in a phase III clinical trial with survival time as treatment response. Several exact and limiting properties of the design and the follow-up inference are studied, both numerically and theoretically, and are compared with a single-stage randomized procedure. The applicability of the proposed methodology is illustrated by using some real data.
Article
SUMMARY The asymptotic distributions of Cramér-von Mises type statistics based on the product-limit estimate of the distribution function of a certain class of randomly censored observations are derived; the asymptotic significance points of the statistics for various degrees of censoring are given. The statistics are also partitioned into orthogonal components in the manner of Durbin & Knott (1972). The asymptotic powers of the statistics and their components against normal mean and variance shifts, exponential scale shifts, and Weibull alternatives to exponentiality are compared. Data arising in a competing risk situation are examined, using the Cramér-von Mises statistic.
Article
In clinical studies, when censoring is caused by competing risks or patient withdrawal, there is always a concern about the validity of treatment effect estimates that are obtained under the assumption of independent censoring. Because dependent censoring is nonidentifiable without additional information, the best we can do is a sensitivity analysis to assess the changes of parameter estimates under different assumptions about the association between failure and censoring. This analysis is especially useful when knowledge about such association is available through literature review or expert opinions. In a regression analysis setting, the consequences of falsely assuming independent censoring on parameter estimates are not clear. Neither the direction nor the magnitude of the potential bias can be easily predicted. We provide an approach to do sensitivity analysis for the widely used Cox proportional hazards models. The joint distribution of the failure and censoring times is assumed to be a function of their marginal distributions. This function is called a copula. Under this assumption, we propose an iteration algorithm to estimate the regression parameters and marginal survival functions. Simulation studies show that this algorithm works well. We apply the proposed sensitivity analysis approach to the data from an AIDS clinical trial in which 27% of the patients withdrew due to toxicity or at the request of the patient or investigator.
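
The copula assumption above can be sketched concretely: tie two exponential marginals together with a Clayton copula, whose parameter θ (Kendall's tau = θ/(θ+2)) is exactly the kind of sensitivity-analysis knob the abstract describes. The sampler uses standard conditional inversion for the Clayton family; all names and settings below are illustrative.

```python
import math
import random

def clayton_pair(theta, rng):
    """One (u, v) draw from a Clayton copula by conditional inversion."""
    u, w = rng.random(), rng.random()
    v = ((w ** (-theta / (theta + 1)) - 1) * u ** (-theta) + 1) ** (-1 / theta)
    return u, v

def dependent_failure_censoring(n, lam_t, lam_c, theta, seed=0):
    """Latent (failure, censoring) times: exponential marginals whose joint
    law is a Clayton copula (theta > 0 gives positive association;
    Kendall's tau = theta / (theta + 2))."""
    rng = random.Random(seed)
    pairs = [clayton_pair(theta, rng) for _ in range(n)]
    return [(-math.log(u) / lam_t, -math.log(v) / lam_c) for u, v in pairs]

def kendall_tau(pairs):
    """Sample Kendall's tau, to check the association actually induced."""
    n, s = len(pairs), 0
    for i in range(n):
        for j in range(i + 1, n):
            prod = (pairs[i][0] - pairs[j][0]) * (pairs[i][1] - pairs[j][1])
            s += 1 if prod > 0 else -1
    return 2 * s / (n * (n - 1))

# theta = 2 targets tau = 0.5; sweeping theta over a range is the
# sensitivity analysis, since the dependence itself is not identifiable
tc = dependent_failure_censoring(500, lam_t=1.0, lam_c=0.5, theta=2.0, seed=42)
observed = [(min(t, c), int(t <= c)) for t, c in tc]  # what the trial records
```

Refitting the model of interest on `observed` for each θ in a grid, as the abstract's iteration algorithm does for Cox regression, shows how far estimates move away from the independent-censoring (θ → 0) answer.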