ABSTRACT: In this paper, a Bayesian approach is developed for simultaneously comparing
multiple experimental treatments with a common control treatment in an
exploratory clinical trial. The sample size is set to ensure that, at the end
of the study, there will be at least one treatment for which the investigators
have a strong belief that it is better than control, or else they have a strong
belief that none of the experimental treatments are substantially better than
control. This criterion bears a direct relationship with conventional
frequentist power requirements, while allowing prior opinion to feature in the
analysis with a consequent reduction in sample size. If it is concluded that at
least one of the experimental treatments shows promise, then it is envisaged
that one or more of these promising treatments will be developed further in a
definitive phase III trial. The approach is developed in the context of
normally distributed responses sharing a common standard deviation regardless
of treatment. To begin with, the standard deviation will be assumed known when
the sample size is calculated. The final analysis will not rely upon this
assumption, although the intended properties of the design may not be achieved
if the anticipated standard deviation turns out to be inappropriate. Methods
that formally allow for uncertainty about the standard deviation, expressed in
the form of a Bayesian prior, are then explored. Illustrations of the sample
sizes computed from the new method are presented, and comparisons are made with
frequentist methods devised for the same situation.
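The Bayesian ingredient of the design above can be illustrated with a toy calculation (not the authors' full multi-treatment criterion): the posterior probability that a single experimental treatment is better than control, for normally distributed responses with known common standard deviation and a conjugate normal prior on the treatment-control difference. The function name and prior parameters are illustrative assumptions.

```python
from math import sqrt
from scipy.stats import norm

def posterior_prob_better(delta_hat, n_per_arm, sigma, prior_mean=0.0, prior_sd=10.0):
    """Posterior P(delta > 0) for a treatment-vs-control mean difference delta,
    with a N(prior_mean, prior_sd^2) prior and delta_hat ~ N(delta, 2*sigma^2/n)."""
    v_data = 2 * sigma**2 / n_per_arm            # variance of the observed difference
    v_post = 1 / (1 / prior_sd**2 + 1 / v_data)  # conjugate posterior variance
    m_post = v_post * (prior_mean / prior_sd**2 + delta_hat / v_data)
    return norm.sf(0.0, loc=m_post, scale=sqrt(v_post))
```

With a very vague prior this tends to the one-sided frequentist tail probability, which is the sense in which prior opinion can only reduce the required sample size relative to a purely frequentist power requirement.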
ABSTRACT: Gaussian comparison inequalities provide a way of bounding probabilities
relating to multivariate Gaussian random vectors in terms of probabilities of
random variables with simpler correlation structures. In this paper, we
establish the partial stochastic dominance result that the cumulative
distribution function of the maximum of a multivariate normal random vector,
with positive intraclass correlation coefficient, intersects the cumulative
distribution function of a standard normal random variable at most once. This
result can be applied to the Bayesian design of a clinical trial in which
several experimental treatments are compared to a single control.
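The distribution function involved can be sketched numerically. Writing Z_i = sqrt(rho)*U + sqrt(1-rho)*V_i with independent standard normals, the CDF of the maximum under intraclass correlation reduces to a one-dimensional integral; a minimal sketch (the function name is ours):

```python
from math import sqrt
from scipy.stats import norm
from scipy.integrate import quad

def cdf_max_intraclass(x, k, rho):
    """P(max of k standard normals with common correlation rho <= x),
    via the factor representation Z_i = sqrt(rho)*U + sqrt(1-rho)*V_i."""
    f = lambda u: norm.pdf(u) * norm.cdf((x - sqrt(rho) * u) / sqrt(1 - rho)) ** k
    val, _ = quad(f, -8, 8)  # integrate out the shared factor U
    return val
```

For k = 1 this collapses to the standard normal CDF whatever the value of rho, and for rho = 0 it is simply Phi(x)^k.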
ABSTRACT: Almost all uveal melanomas showing chromosome 3 loss (i.e., monosomy 3) are fatal. Randomized clinical trials are therefore needed to evaluate various systemic adjuvant therapies. Conventional trial designs require large numbers of patients, which are difficult to achieve in a rare disease. The aim of this study was to use existing data to estimate how sample size and study duration could be reduced by selecting high-risk patients and adopting multistage trial designs.
We identified 217 patients with a monosomy 3 melanoma exceeding 15 mm in basal diameter; these patients had a median survival of 3.27 years. Several trial designs comparing overall survival were explored for such a population. A power of 0.90 to detect a hazard ratio of 0.737 was set, and recruitment of 16 patients per month was assumed.
A suitable single-stage study would require 960 patients and a duration of 76 months. A two-stage design with an interim analysis based on 852 patients after 53.3 months would have a 50% probability of stopping because no statistically significant treatment effect is seen. Encouraging but inconclusive results would require a further 108 patients and prolongation of the study to 77.2 months. A multistage design would have a 43% probability of stopping before 47 months having recruited 759 patients.
Prospects for clinical studies of systemic adjuvant therapy for uveal melanoma are enhanced by multistage trial designs enrolling only high-risk patients.
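The event counts behind survival designs of this kind are often first approximated with Schoenfeld's formula, which depends only on the hazard ratio, the error rates, and the allocation ratio. The patient numbers and durations quoted above additionally depend on accrual and follow-up assumptions, so this sketch reproduces only the required number of deaths:

```python
from math import ceil, log
from scipy.stats import norm

def schoenfeld_events(hr, alpha=0.05, power=0.90, alloc=0.5):
    """Required number of deaths under the Schoenfeld approximation for a
    two-sided level-alpha logrank test; alloc is the fraction in one arm."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(z**2 / (alloc * (1 - alloc) * log(hr) ** 2))
```

For a hazard ratio of 0.737 at two-sided 5% significance and 90% power with 1:1 allocation, this gives roughly 450 deaths; translating events into the patient totals and durations above requires the survival and recruitment assumptions stated in the abstract.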
ABSTRACT: We generalize the Dunnett test to derive efficacy and futility boundaries for a flexible multi-arm multi-stage clinical trial for a normally distributed endpoint with known variance. We show that the boundaries control the familywise error rate in the strong sense. The method is applicable for any number of treatment arms, number of stages and number of patients per treatment per stage. It can be used for a wide variety of boundary types or rules derived from α-spending functions. Additionally, we show how sample size can be computed under a least favourable configuration power requirement and derive formulae for expected sample sizes. Copyright 2012, Oxford University Press.
ABSTRACT: The Cancer Research UK study CR0720-11 is a trial to determine the tolerability and effect on survival of using two agents in combination in patients with advanced pancreatic cancer. In particular, the trial is designed first to identify the most suitable combination of doses of the two agents in terms of the incidence of dose-limiting toxicities. Then, the survival of all patients who have received that dose combination in the study so far, together with additional patients assigned to that dose combination to ensure that the total number is sufficient, will be analysed. If the survival outcomes show promise, then a definitive randomised study of that dose combination will be recommended. The first two patients in the trial will be treated with the lowest doses of each agent in combination. An adaptive Bayesian procedure based only on monotonicity constraints concerning the risks of toxicity at different dose levels will then be used to suggest dose combinations for subsequent patients. The survival analysis will concern only patients who received the chosen dose combination, and will compare observed mortality with that expected from an exponential model based on the known survival rates associated with current treatment. In this paper, the Bayesian dose-finding procedure is described and illustrated, and its properties are evaluated through simulation. Computation of the appropriate sample size for the survival investigation is also discussed.
Statistics in Medicine 04/2012; 31(18):1931-43.
ABSTRACT: The issues and dangers involved in testing multiple hypotheses are well recognised within the pharmaceutical industry. In reporting clinical trials, strenuous efforts are taken to avoid the inflation of type I error, with procedures such as the Bonferroni adjustment and its many elaborations and refinements being widely employed. Typically, such methods are conservative. They tend to be accurate if the multiple test statistics involved are mutually independent and achieve less than the type I error rate specified if these statistics are positively correlated. An alternative approach is to estimate the correlations between the test statistics and to perform a test that is conditional on those estimates being the true correlations. In this paper, we begin by assuming that test statistics are normally distributed and that their correlations are known. Under these circumstances, we explore several approaches to multiple testing, adapt them so that type I error is preserved exactly and then compare their powers over a range of true parameter values. For simplicity, the explorations are confined to the bivariate case. Having described the relative strengths and weaknesses of the approaches under study, we use simulation to assess the accuracy of the approximate theory developed when the correlations are estimated from the study data rather than being known in advance and when data are binary so that test statistics are only approximately normally distributed.
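In the bivariate case with known correlation, a critical value that preserves the type I error exactly can be found by inverting the joint rejection probability numerically; a minimal sketch (the function name is ours), which uses SciPy's bivariate normal CDF and a root finder:

```python
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

def exact_critical_value(r, alpha=0.05):
    """Critical value c with P(|Z1| > c or |Z2| > c) = alpha when (Z1, Z2)
    is bivariate normal with unit variances and known correlation r."""
    mvn = multivariate_normal(mean=[0, 0], cov=[[1, r], [r, 1]])
    def excess(c):
        # P(both statistics lie inside [-c, c]), by inclusion-exclusion on the CDF
        inside = (mvn.cdf([c, c]) - mvn.cdf([c, -c])
                  - mvn.cdf([-c, c]) + mvn.cdf([-c, -c]))
        return (1 - inside) - alpha
    return brentq(excess, 1.5, 4.0)
```

For r = 0 this recovers the Šidák value 2.236 (just below the Bonferroni value 2.241), and as r grows the critical value moves down towards the single-test value 1.960, which is the gain available from exploiting the correlation.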
ABSTRACT: A study or experiment can be described as sequential if its design includes one or more interim analyses at which it is possible to stop the study, having reached a definitive conclusion concerning the primary question of interest. The potential of the sequential study to terminate earlier than the equivalent fixed sample size study means that, typically, there are ethical and economic advantages to be gained from using a sequential design. These advantages have secured a place for the methodology in the conduct of many clinical trials of novel therapies. Recently, there has been increasing interest in pharmacogenetics: the study of how DNA variation in the human genome affects the safety and efficacy of drugs. The potential for using sequential methodology in pharmacogenetic studies is considered and the conduct of candidate gene association studies, family-based designs and genome-wide association studies within the sequential setting is explored. The objective is to provide a unified framework for the conduct of these types of studies as sequential designs and hence allow experimenters to consider using sequential methodology in their future pharmacogenetic studies.
Computational Statistics & Data Analysis 01/2012.
ABSTRACT: Many formal statistical procedures for phase I dose-finding studies have been proposed. Most concern a single novel agent available at a number of doses and administered to subjects participating in a single treatment period and returning a single binary indicator of toxicity. Such a structure is common when evaluating cytotoxic drugs for cancer. This paper concerns studies of combinations of two agents, both available at several doses. Subjects participate in one treatment period and provide two binary responses: one an indicator of benefit and the other of harm. The word 'benefit' is used loosely here: the response might be an early indicator of physiological change which, if induced in patients, is of potential therapeutic value. The context need not be oncology, but might be any study intended to meet both the phase I aim of establishing which doses are safe and the phase II goal of exploring potential therapeutic activity. A Bayesian approach is used based on an assumption of monotonicity in the relationship between the strength of the dose-combination and the distribution of the bivariate outcome. Special cases are described, and the procedure is evaluated using simulation. The parameters that define the model have immediate and simple interpretation. Graphical representations of the posterior opinions about model parameters are shown, and these can be used to inform the discussions of the trial safety committee.
Statistics in Medicine 07/2011; 30(16):1952-70.
ABSTRACT: The availability of high-resolution genetic profiling raises the possibility, during the course of a drug development program, of discovering a subset of patients at particular risk of an adverse drug reaction who might be excluded from subsequent randomization into studies and identified as unsuitable for post-licensing use. Such methods depend on the estimation of the risk of adverse drug reactions for patients with differing genetic profiles, followed by an assessment of the risks and benefits of their exposure to the drug. In this paper we explore the performance of a number of alternative statistical methods for the estimation of risk in terms of the success of the subsequent exclusion rules. The approaches were evaluated using a single-nucleotide polymorphism dataset concerning HIV patients at risk of hypersensitivity to the drug abacavir. Overall we found that a method based on the LASSO performed better than the alternatives that we studied, which included a decision-theoretic Bayesian approach, and that its performance suggested suitability for prospective implementation.
Journal of Biopharmaceutical Statistics 01/2011; 21(1):111-24.
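The LASSO idea in this setting, an L1-penalised logistic regression that shrinks most marker coefficients to exactly zero, can be sketched with scikit-learn on synthetic genotype data. Everything here (the data, the penalty strength, the two "causal" markers) is an illustrative assumption, not the paper's tuned procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(300, 50)).astype(float)  # minor-allele counts 0/1/2
logit = -2.0 + 1.5 * X[:, 0] - 1.0 * X[:, 1]          # two truly causal markers
y = rng.random(300) < 1 / (1 + np.exp(-logit))        # simulated adverse reactions

# The L1 penalty sets most of the 50 marker coefficients exactly to zero,
# leaving a sparse risk score that can drive an exclusion rule.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
n_selected = np.count_nonzero(model.coef_)
```

The nonzero coefficients define the genetic profile used to exclude high-risk patients; assessing the resulting rule is the comparison the paper carries out.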
ABSTRACT: The methodology of group sequential trials is now well established and widely implemented. The benefits of the group sequential approach are generally acknowledged, and its use, when applied properly, is accepted by researchers and regulators. This article describes how a wide range of group sequential designs can easily be implemented using two accessible SAS functions. One of these, PROBBNRM, is a standard function, while the other, SEQ, is part of the interactive matrix language of SAS, PROC IML. The account focuses on the essentials of the approach and reveals how straightforward it can be. The design of studies is described, including their evaluation in terms of the distribution of final sample size. The conduct of the interim analyses is discussed, with emphasis on the consequences of inevitable departures from the planned schedule of information accrual. The computations required for the final analysis, allowing for the sequential design, are closely related to those conducted at the design stage. Illustrative examples are given and listings of suitable SAS code are provided.
Statistical Methods in Medical Research 09/2010; 20(6):635-56.
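The operating characteristic that SEQ and PROBBNRM compute exactly can be checked by simulation in any language. A Python sketch for a two-look design with equally spaced looks (so the standardized statistics have correlation 1/sqrt(2)), using the familiar O'Brien-Fleming-type two-sided boundaries (2.797, 1.977) for overall alpha = 0.05:

```python
import numpy as np

def attained_alpha(boundaries, n_sim=500_000, seed=1):
    """Monte Carlo type I error of a two-look group sequential test with
    equally spaced looks and two-sided boundaries (b1, b2)."""
    rng = np.random.default_rng(seed)
    b1, b2 = boundaries
    z1 = rng.standard_normal(n_sim)                       # statistic at look 1
    z2 = (z1 + rng.standard_normal(n_sim)) / np.sqrt(2)   # look 2, corr 1/sqrt(2)
    reject = (np.abs(z1) > b1) | (np.abs(z2) > b2)
    return reject.mean()
```

Using the unadjusted pair (1.96, 1.96) instead inflates the overall error to roughly 0.08, which is the classic motivation for adjusted boundaries.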
ABSTRACT: Despite an enormous and growing statistical literature, formal procedures for dose-finding are only slowly being implemented in phase I clinical trials. Even in oncology and other life-threatening conditions in which a balance between efficacy and toxicity has to be struck, model-based approaches, such as the Continual Reassessment Method, have not been universally adopted. Two related concerns have limited the adoption of the new methods. One relates to doubts about the appropriateness of models assumed to link the risk of toxicity to dose, and the other is the difficulty of communicating the nature of the process to clinical investigators responsible for early phase studies. In this paper, we adopt a new Bayesian approach involving a simple model assuming only monotonicity in the dose-toxicity relationship. The parameters that define the model have immediate and simple interpretation. The approach can be applied automatically, and we present a simulation investigation of its properties when it is. More importantly, it can be used in a transparent fashion as one element in the expert consideration of what dose to administer to the next patient or group of patients. The procedure serves to summarize the opinions and the data concerning risks of a binary characterization of toxicity which can then be considered, together with additional and less tidy trial information, by the clinicians responsible for making decisions on the allocation of doses. Graphical displays of these opinions can be used to ease communication with investigators.
Statistics in Medicine 07/2010; 29(17):1808-24.
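The kind of posterior summary such a procedure can show investigators is sketched below. This toy version uses independent Beta priors at each dose and ignores the monotonicity constraint that is central to the paper, so it illustrates only the form of the output, not the method itself:

```python
from scipy.stats import beta

def prob_overdose(tox, n, target=0.30, a=1.0, b=1.0):
    """Posterior P(toxicity risk > target) at each dose, under independent
    Beta(a, b) priors; tox[i] toxicities observed among n[i] patients at dose i."""
    return [beta.sf(target, a + t, b + m - t) for t, m in zip(tox, n)]
```

A list such as prob_overdose([0, 1, 2], [3, 3, 3]) gives one number per dose that a safety committee can read directly, alongside the graphical displays the paper proposes.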
ABSTRACT: There is growing interest, especially for trials in stroke, in combining multiple endpoints in a single clinical evaluation of an experimental treatment. The endpoints might be repeated evaluations of the same characteristic or alternative measures of progress on different scales. Often they will be binary or ordinal, and those are the cases studied here. In this paper we take a direct approach to combining the univariate score statistics for comparing treatments with respect to each endpoint. The correlations between the score statistics are derived and used to allow a valid combined score test to be applied. A sample size formula is deduced and application in sequential designs is discussed. The method is compared with an alternative approach based on generalized estimating equations in an illustrative analysis and replicated simulations, and the advantages and disadvantages of the two approaches are discussed.
Statistics in Medicine 02/2010; 29(5):521-32.
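Once the correlations between the endpoint-wise statistics are available, the combination step itself is elementary: a weighted sum of the z statistics standardized by its variance under the correlation matrix. A sketch with assumed weights and correlations (the derivation of R from the score statistics is the substantive part of the paper and is not reproduced here):

```python
import numpy as np

def combined_z(z, R, w=None):
    """Combine correlated endpoint-wise z statistics into a single statistic
    w'z / sqrt(w'Rw), where R is the correlation matrix of z under the null."""
    z = np.asarray(z, float)
    w = np.ones_like(z) if w is None else np.asarray(w, float)
    return w @ z / np.sqrt(w @ R @ w)
```

Under the null hypothesis the result is standard normal, provided R is the true correlation matrix of z; with independent endpoints it reduces to the usual sum-of-z combination.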
ABSTRACT: This article concerns the identification of associations between the incidence of adverse drug reactions and features apparent from whole genome scans of patients together with the subsequent implementation of an adaptive exclusion procedure within a drug development program. Our context is not a retrospective assessment of a large and complete database: instead we are concerned with identifying such a relationship during a drug development program and the consequences for the future conduct of that program. In particular, we seek methods for identifying changes to the exclusion criteria that will prevent future patients at high risk of an adverse reaction from continuing to be recruited. We discuss the levels of evidence needed to amend an existing recruitment policy, how this can be done, and how to evaluate and revise the reformulated recruitment policy as the trials continue. The approach will be illustrated using clinical trial data to demonstrate its potential for making an immediate reduction in the incidence of adverse drug reactions.
Therapeutic Innovation and Regulatory Science 01/2010; 44(2):147-157.
ABSTRACT: Clinical research into the treatment of acute stroke is complicated, is costly, and has often been unsuccessful. Developments in imaging technology based on computed tomography and magnetic resonance imaging scans offer opportunities for screening experimental therapies during phase II testing so as to deliver only the most promising interventions to phase III. We discuss the design and the appropriate sample size for phase II studies in stroke based on lesion volume.
The relation between analyses of lesion volumes and analyses of neurologic outcomes is illustrated using data from placebo-treated patients in the Virtual International Stroke Trials Archive. The size of an effect on lesion volume that would lead to a clinically relevant treatment effect in terms of a measure such as the modified Rankin score (mRS) is found. The sample size to detect that magnitude of effect on lesion volume is then calculated. Simulation is used to evaluate different criteria for proceeding from phase II to phase III.
The odds ratios for mRS correspond roughly to the square root of the odds ratios for lesion volume, implying that, for equivalent power specifications, sample sizes based on lesion volumes should be about one fourth of those based on mRS. Relaxation of the power requirements, appropriate for phase II, leads to further sample size reductions. For example, a phase III trial comparing a novel treatment with placebo with a total sample size of 1518 patients might be motivated from a phase II trial of 126 patients comparing the same two treatment arms.
Definitive phase III trials in stroke should aim to demonstrate significant effects of treatment on clinical outcomes. However, more direct outcomes such as lesion volume can be useful in phase II for determining whether such phase III trials should be undertaken in the first place.
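The square-root relation translates into the quoted factor of four through the usual proportionality of sample size to the inverse squared log odds ratio. A worked check with a hypothetical mRS effect (the value 1.4 is illustrative only):

```python
from math import log

# If OR_lesion is roughly OR_mRS squared (the square-root relation above),
# then log OR_lesion = 2 * log OR_mRS, and with n proportional to
# 1 / (log OR)^2 at fixed alpha and power, the sample size ratio is 1/4.
or_mrs = 1.4                 # hypothetical phase III effect on mRS
or_lesion = or_mrs ** 2      # implied effect on lesion volume
ratio = (log(or_mrs) / log(or_lesion)) ** 2
```

The further reduction from roughly 380 to the quoted 126 patients comes from relaxing the power requirements for phase II, not from this relation.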
ABSTRACT: Phase II clinical trials are performed to investigate whether a novel treatment shows sufficient promise of efficacy to justify its evaluation in a subsequent definitive phase III trial, and they are often also used to select the dose to take forward. In this paper we discuss different design proposals for a phase II trial in which three active treatment doses and a placebo control are to be compared in terms of a single ordered categorical endpoint. The sample size requirements for one-stage and two-stage designs are derived, based on an approach similar to that of Dunnett. Detailed computations are prepared for an illustrative example concerning a study in stroke. Allowance for early stopping for futility is made. Simulations are used to verify that the specified type I error and power requirements are valid, despite certain approximations used in the derivation of sample size. The advantages and disadvantages of the different designs are discussed, and the scope for extending the approach to different forms of endpoint is considered.
Statistics in Medicine 01/2009; 28(5):828-47.
ABSTRACT: Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail.
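The sample size review described above amounts, in its simplest form, to re-evaluating the standard two-sample formula with the interim estimate of the standard deviation in place of the planning value. A sketch for a two-sided z-test (the function name is ours, and this ignores the two-stage adjustments the paper develops):

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(sd, delta, alpha=0.05, power=0.90):
    """Per-arm sample size for a two-sided two-sample z-test detecting a
    mean difference delta; at the interim, sd is the updated estimate."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (sd * z / delta) ** 2)
```

For example, n_per_arm(1.0, 0.5) gives 85 per arm, and doubling the standard deviation estimate roughly quadruples the requirement, which is why doubt about the standard deviation matters so much at the design stage.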
ABSTRACT: The International Citicoline Trial in acUte Stroke is a sequential phase III study of the use of the drug citicoline in the treatment of acute ischaemic stroke, which was initiated in 2006 in 56 treatment centres. The primary objective of the trial is to demonstrate improved recovery of patients randomized to citicoline relative to those randomized to placebo after 12 weeks of follow-up. The primary analysis will take the form of a global test combining the dichotomized results of assessments on three well-established scales: the Barthel Index, the modified Rankin scale and the National Institutes of Health Stroke Scale. This approach was previously used in the analysis of the influential National Institute of Neurological Disorders and Stroke trial of recombinant tissue plasminogen activator in stroke. The purpose of this paper is to describe how this trial was designed, and in particular how the simultaneous objectives were addressed of taking into account three assessment scales, performing a series of interim analyses, and conducting treatment allocation and analyses adjusted for prognostic factors, including more than 50 treatment centres.