ABSTRACT: In this paper, the authors express their views on a range of recently emerged topics related to data monitoring committees (DMCs) for adaptive trials. The topics pertain to DMC roles and responsibilities, membership, training, and communication. DMCs have been monitoring trials using the group sequential design (GSD) for over 30 years. While decisions may be more complicated with novel adaptive designs, the fundamental roles and responsibilities of a DMC remain the same: to protect patient safety and ensure the scientific integrity of the trial. It is the DMC's responsibility to recommend changes to the trial within the scope of a prespecified adaptation plan or decision criteria, and not to otherwise recommend changes to the study design except for serious safety-related concerns. Nevertheless, compared with traditional data monitoring, some additional considerations are necessary when convening DMCs for novel adaptive designs. These include the need to identify DMC members who are familiar with adaptive designs and to consider possible sponsor involvement in unique situations. The need for additional expertise among DMC members has prompted some researchers to propose alternative DMC or governance models. The authors' views on these various options are expressed in this article.
Therapeutic Innovation and Regulatory Science 07/2013; 47(4):495-502.
ABSTRACT: Traditionally, sample size considerations for phase 2 trials are based on the desired properties of the design and response information from the trials. In this article, we propose to design phase 2 trials based on program-level optimization. We present a framework to evaluate the impact that several phase 2 design features have on the probability of phase 3 success and the expected net present value of the product. These factors include the phase 2 sample size, decision rules to select a dose for phase 3 trials, and the sample size for phase 3 trials. Using neuropathic pain as an example, we use simulations to illustrate the framework and show the benefit of including these factors in the overall decision process.
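The program-level idea above — evaluating a phase 2 design by its downstream effect on phase 3 success — can be sketched with a minimal Monte Carlo simulation. All parameters below (true dose effects, go threshold, sample sizes) are illustrative assumptions, not values from the paper:

```python
import math
import random
from statistics import NormalDist

def program_success_prob(n2, n3, true_effects, go_threshold,
                         sd=1.0, alpha=0.025, sims=20_000, seed=0):
    """Monte Carlo estimate of P(phase 3 success) for a simple program:
    a phase 2 trial over several doses, a dose-selection/go rule,
    then one confirmatory phase 3 trial at the selected dose.

    true_effects: assumed true mean benefits vs placebo per dose (hypothetical).
    go_threshold: minimum observed phase 2 effect required to advance.
    """
    rng = random.Random(seed)
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    se2 = sd * math.sqrt(2 / n2)   # SE of a mean difference in phase 2
    se3 = sd * math.sqrt(2 / n3)   # SE of a mean difference in phase 3
    successes = 0
    for _ in range(sims):
        # Phase 2: observed effect per dose; pick the best-looking dose
        obs = [rng.gauss(mu, se2) for mu in true_effects]
        best = max(range(len(obs)), key=obs.__getitem__)
        if obs[best] < go_threshold:
            continue                       # no-go: program stops at phase 2
        # Phase 3: one confirmatory trial at the selected dose
        est = rng.gauss(true_effects[best], se3)
        if est / se3 > z_alpha:
            successes += 1                 # phase 3 statistically significant
    return successes / sims
```

Sweeping `n2`, `go_threshold`, and `n3` over a grid, and weighting each program outcome by its cost and revenue, gives the kind of expected-net-present-value comparison the framework describes.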
Therapeutic Innovation and Regulatory Science 07/2012; 46(4):439-454.
ABSTRACT: We introduce the idea of a design to detect signals of efficacy in early phase clinical trials. Such a design features three possible decisions: to kill the compound; to continue with staged development; or to continue with accelerated development of the compound. We describe how such studies improve the trade-off between the two errors of killing a compound with good efficacy and committing to a complete full development program for a compound that has no efficacy, and describe how they can be designed. We argue that such studies could be used to screen compounds at the proof-of-concept stage, reduce late Phase 2 attrition, and speed up the development of highly efficacious drugs.
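The three-decision structure can be sketched as two cutoffs on an efficacy z-statistic; the bounds below (0 and 2) are illustrative assumptions, not the paper's design:

```python
from statistics import NormalDist

def poc_decision(z_stat, kill_bound=0.0, accel_bound=2.0):
    """Three-outcome rule for a proof-of-concept study based on the
    observed efficacy z-statistic (hypothetical bounds)."""
    if z_stat < kill_bound:
        return "kill"
    if z_stat > accel_bound:
        return "accelerate"
    return "stage"

def zone_probs(true_z, kill_bound=0.0, accel_bound=2.0):
    """P(kill), P(stage), P(accelerate) when the z-statistic is normal
    with mean true_z (the true standardized effect) and unit variance."""
    nd = NormalDist()
    p_kill = nd.cdf(kill_bound - true_z)
    p_accel = 1.0 - nd.cdf(accel_bound - true_z)
    return p_kill, 1.0 - p_kill - p_accel, p_accel
```

Under these bounds a no-efficacy compound (`true_z = 0`) is killed half the time and accelerated only about 2% of the time, while a strong compound (`true_z = 3`) is rarely killed — illustrating how the middle "stage" zone softens the trade-off between the two errors.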
Journal of Biopharmaceutical Statistics 01/2012; 22(6):1097-108.
ABSTRACT: Subgroup analysis is an integral part of access and reimbursement dossiers, in particular health technology assessments (HTAs), and HTA recommendations are often limited to subpopulations. HTA recommendations for subpopulations are not always clear or free of controversy. In this paper, we review several HTA guidelines regarding subgroup analyses. We describe good statistical principles for subgroup analyses of clinical effectiveness to support HTAs and include case examples where HTA recommendations were given for subpopulations only. Unlike regulatory submissions, pharmaceutical statisticians in most companies have had limited involvement in the planning, design, and preparation of HTA/payer submissions. We hope to change this by highlighting how pharmaceutical statisticians should contribute to payer submissions. This includes early engagement in reimbursement strategy discussions to influence the design, analysis, and interpretation of phase III randomized clinical trials as well as meta-analyses/network meta-analyses. The focus of this paper is on subgroup analyses relating to clinical effectiveness, as we believe this is the first key step toward statistical involvement and influence in the preparation of HTA and reimbursement submissions.
ABSTRACT: The increasing prevalence of Alzheimer disease (AD) and the lack of effective agents to attenuate progression have accelerated research and development of disease-modifying (DM) therapies. The traditional parallel group design and single time point analysis used in the support of past AD drug approvals address symptomatic benefit over relatively short treatment durations. More recent trials investigating disease modification are by necessity longer in duration and require larger sample sizes. Nevertheless, trial design and analysis remain mostly unchanged and may not be adequate to meet the objective of demonstrating disease modification. The randomized start design (RSD) has been proposed as an option to study DM effects, but its application in AD trials may have been hampered by certain methodological challenges.
To address the methodological issues that have impeded more extensive use of the RSD in AD trials, and to encourage other researchers to develop novel design and analysis methodologies to better ascertain DM effects for the next generation of AD therapies, we propose a stepwise testing procedure to evaluate potential DM effects.
The Alzheimer Disease Assessment Scale-Cognitive Subscale (ADAS-cog) is used for illustration. We propose to test three hypotheses in a stepwise sequence. The three tests pertain to the treatment difference at two separate time points and a difference in the rate of change. Estimation is facilitated by the mixed-effects model for repeated measures (MMRM) approach. The required sample size is estimated using Monte Carlo simulations and by modeling ADAS-cog data from prior longitudinal AD studies.
The greatest advantage of the RSD proposed in this article is its ability to critically address the question of a DM effect. An AD trial using the new approach would be longer (12-month placebo period plus 12-month delayed-start period; 24 months in total) and would require more subjects (about 1000 subjects per arm for the non-inferiority margin chosen in the illustration). It would also require additional evaluations to estimate the rate of ADAS-cog change toward the end of the trial.
A regulatory claim of disease modification for any compound will likely require additional verification of a drug's effect on a validated biomarker of Alzheimer's pathology.
Incorporation of the RSD in AD trials is feasible. With proper trial setup and statistical procedures, this design could support the detection of a disease-modifying effect. In our opinion, a two-phase RSD with a stepwise hypothesis testing procedure could be a reasonable option for future studies.
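The Monte Carlo sample-size approach described above can be sketched in simplified form: simulate the month-24 comparison between early-start and delayed-start arms under an assumed persistent (disease-modifying) benefit, and count how often the difference is detected. The ADAS-cog parameters below are illustrative assumptions, not estimates from the prior longitudinal studies the paper models:

```python
import math
import random
from statistics import NormalDist

def power_delayed_start(n_per_arm, dm_benefit=1.5, sd=6.0,
                        alpha=0.05, sims=5000, seed=0):
    """Monte Carlo power to detect a persistent month-24 treatment
    difference between early-start and delayed-start arms.

    dm_benefit: assumed ADAS-cog points preserved by the extra 12 months
    of treatment (a DM effect the delayed-start arm never recovers).
    """
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    se = sd * math.sqrt(2 / n_per_arm)    # SE of the mean difference
    hits = 0
    for _ in range(sims):
        diff = rng.gauss(dm_benefit, se)  # simulated observed difference
        if abs(diff) / se > z_crit:
            hits += 1
    return hits / sims
```

Sweeping `n_per_arm` upward until the estimated power reaches the target (e.g., 0.9) gives the required sample size; the full stepwise procedure in the paper additionally tests a non-inferiority comparison of rates of change, which drives the larger numbers quoted above.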
ABSTRACT: There are many decision points along the product development continuum. Formal clinical milestones, such as the end of phase 1, phase 2a (proof of mechanism or proof of concept), and phase 2b provide useful decision points to critically evaluate the accumulating data. At each milestone, sound decisions begin with asking the right questions and choosing the appropriate design as well as criteria to make go/no-go decisions. It is also important that knowledge about the new investigational product, gained either directly from completed trials or indirectly from similar products for the same disorder, be systematically incorporated into the evaluation process. In this article, we look at metrics that go beyond type I and type II error rates associated with the traditional hypothesis test approach. We draw on the analogy between diagnostic tests and hypothesis tests to highlight the need for confirmation and the value of formally updating our prior belief about a compound's effect with new data. Furthermore, we show how incorporating probability distributions that characterize current evidence about the true treatment effect could help us make decisions that specifically address the need at each clinical milestone. We illustrate the above with examples.
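The idea of formally updating prior belief about a compound's effect with new data can be sketched with a conjugate normal-normal update; the prior and trial numbers in the usage comment are hypothetical, and this is one simple way to implement the update, not necessarily the paper's method:

```python
import math
from statistics import NormalDist

def update_belief(prior_mean, prior_sd, observed_effect, se_observed):
    """Conjugate normal-normal update of the belief distribution for the
    true treatment effect, given a trial estimate and its standard error."""
    w_prior = 1.0 / prior_sd**2       # precision of the prior
    w_data = 1.0 / se_observed**2     # precision of the new data
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * observed_effect)
    return post_mean, math.sqrt(post_var)

def prob_effect_exceeds(threshold, mean, sd):
    """P(true effect > threshold) under a normal belief distribution."""
    return 1.0 - NormalDist(mean, sd).cdf(threshold)

# Hypothetical example: skeptical prior centered at 0 (SD 2); a phase 2a
# trial observes an effect of 1.5 with standard error 0.8.
post_mean, post_sd = update_belief(0.0, 2.0, 1.5, 0.8)
p_positive = prob_effect_exceeds(0.0, post_mean, post_sd)
```

The posterior mean sits between the prior mean and the trial estimate, weighted by precision, and `p_positive` is the kind of milestone-specific metric — P(true effect exceeds some relevant threshold) — that goes beyond type I and type II error rates.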
Therapeutic Innovation and Regulatory Science 01/2011; 45(2):187-202.
ABSTRACT: The US Food and Drug Administration has recently released a draft guidance document on adaptive clinical trials. We comment on the document from the particular perspective of the authors as members of a PhRMA working group on this topic, which has interacted with FDA personnel on adaptive trial issues during recent years. We describe the activities and prior work of our working group, and use this as a basis to discuss the content of the guidance document as it relates to several issues of current relevance, such as data monitoring processes, adaptive dose finding, so-called seamless trial designs, and sample size reestimation.
Journal of Biopharmaceutical Statistics 11/2010; 20(6):1115-24.
ABSTRACT: The Food and Drug Administration of the United States issued a draft guidance on adaptive design clinical trials in February 2010. This draft guidance has attracted a lot of attention because of the increasing interest in adaptive trials by the pharmaceutical industry in recent years. In this paper, we report on highlights of comments collected within Pfizer on this draft guidance. In addition, we share Pfizer's internal journey to promote efficient trial designs since 2005. Adaptive designs have been part of that journey.
Journal of Biopharmaceutical Statistics 11/2010; 20(6):1143-9.
ABSTRACT: The minimum clinically important difference (MCID) between treatments is recognized as a key concept in the design and interpretation of results from a clinical trial. Yet even assuming such a difference can be derived, it is not necessarily clear how it should be used. In this paper, we consider three possible roles for the MCID: (1) using the MCID to determine the required sample size so that the trial has a prespecified statistical power to conclude a significant treatment effect when the treatment effect is equal to the MCID; (2) requiring, with high probability, that the observed treatment effect in a trial, in addition to being statistically significant, be at least as large as the MCID; (3) demonstrating via hypothesis testing that the effect of the new treatment is at least as large as the MCID. We examine the implications of these three possible roles of the MCID on sample size, expectations of a new treatment, and the chance of a successful trial. We also give our opinion on how the MCID should generally be used in the design and interpretation of results from a clinical trial.
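Role (1) above corresponds to the standard two-arm sample-size formula with the MCID as the design difference. A minimal sketch (two-sided test of means, equal allocation, known common SD; numbers purely illustrative):

```python
import math
from statistics import NormalDist

def n_per_arm(mcid, sd, alpha=0.05, power=0.9):
    """Per-arm sample size so a two-arm trial has the stated power to
    declare significance when the true effect equals the MCID."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = nd.inv_cdf(power)
    return math.ceil(2 * ((z_a + z_b) * sd / mcid) ** 2)
```

For example, an MCID of half a standard deviation at 90% power gives 85 subjects per arm. Note that under role (1), when the true effect equals the MCID the observed effect exceeds it only about half the time — which is why role (2), demanding the observed effect also clear the MCID with high probability, pushes sample sizes considerably higher.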
ABSTRACT: Patient-reported outcomes are important for assessing the effectiveness of treatments in many disease areas. For this reason, many new instruments that capture patient-reported outcomes have been developed over the past several decades. With the development of each new instrument comes the question of what constitutes a minimum clinically important difference between treatments when using the new instrument. In this paper we describe a method for estimating a minimum clinically important difference between treatments for a patient-reported outcome through a desired difference in response rates for a given definition of a responder. As well as being of interest in its own right, the use of a minimum clinically important difference on the patient-reported outcome scale is likely to lead to sample size advantages. We illustrate the method with data on neuropathic pain, both when a responder is defined by requiring at least some improvement in the Patient Global Impression of Change and when a responder is defined by existing responder definitions.
Journal of Biopharmaceutical Statistics 09/2010; 20(5):1043-54.
ABSTRACT: Experience has shown that when data are pooled from multiple studies to create an integrated summary, an analysis based on naïvely pooled data is vulnerable to the mischief of Simpson's paradox. Using the proportions of patients with a target adverse event (AE) as an example, we demonstrate the paradox's effect on both the comparison and the estimation of the proportions. While meta-analytic approaches have been recommended and are increasingly used for comparing safety data between treatments, reporting the proportion of subjects experiencing a target AE based on data from multiple studies has received little attention. In this paper, we suggest two possible approaches for reporting these cumulative proportions. In addition, we urge that regulatory guidelines on reporting such proportions be established so that risks can be communicated in a scientifically defensible and balanced manner.
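The pooling hazard described above can be shown with two invented studies (the counts are illustrative, not from the paper): the drug has the lower AE proportion within each study, yet naive pooling reverses the comparison because the studies contribute unequal numbers to each arm.

```python
from fractions import Fraction

# Two hypothetical studies; (events, n) per treatment arm.
study1 = {"drug": (1, 100), "control": (5, 200)}    # 1.0% vs 2.5%
study2 = {"drug": (30, 200), "control": (20, 100)}  # 15.0% vs 20.0%

def rate(events, n):
    """AE proportion as an exact fraction."""
    return Fraction(events, n)

# Within each study, the drug has the lower AE proportion...
for s in (study1, study2):
    assert rate(*s["drug"]) < rate(*s["control"])

# ...yet naive pooling of counts reverses the comparison:
pooled_drug = rate(1 + 30, 100 + 200)  # 31/300, about 10.3%
pooled_ctrl = rate(5 + 20, 200 + 100)  # 25/300, about 8.3%
assert pooled_drug > pooled_ctrl       # Simpson's paradox
```

The reversal arises because the high-AE study contributes most of the drug-arm patients and few control-arm patients, which is exactly why stratified or meta-analytic summaries are preferred over naive pooling.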