Christy Chuang-Stein

Pfizer Inc., New York City, New York, United States

Publications (40) · 75.46 Total Impact

  • Christy Chuang-Stein, Qi Jiang, Olga Marchenko
    Clinical Trials (London, England) 12/2014
  • Christy Chuang-Stein, Simon Kirby
    ABSTRACT: It is frequently noted that an initial clinical trial finding was not reproduced in a later trial, and this is often met with some surprise. Yet there is a relatively straightforward reason partially responsible for this observation. In this article, we examine this reason by first reviewing some findings in a recent publication in the Journal of the American Medical Association. To help explain the non-negligible chance of failing to reproduce a previous positive finding, we compare a series of trials to successive diagnostic tests used for identifying a condition. To help explain the suspicion that the treatment effect, when observed in a subsequent trial, seems to have decreased in magnitude, we draw a conceptual analogy between phase II-III development stages and interim analyses of a trial with a group sequential design. Both analogies remind us that what we observed in an early trial could be a false positive or a random high. We discuss statistical sources for these occurrences and explain why it is important for statisticians to take them into consideration when designing trials and interpreting their results. Copyright © 2014 John Wiley & Sons, Ltd.
    Pharmaceutical Statistics 09/2014 · 0.99 Impact Factor
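The "false positive or random high" point in the abstract above can be illustrated numerically. The following is a hedged sketch, not the authors' code: the effect size, sample sizes, and significance rule are all illustrative assumptions.

```python
# Simulate many small "phase 2" trials, keep only the significant
# ("positive") ones, and ask (a) how big the observed effect looks
# among those winners and (b) how often a larger follow-up trial
# reproduces the significant result.
import math
import random

random.seed(1)

def one_trial(true_effect, n_per_arm, sd=1.0):
    """Observed treatment difference and a one-sided z-test at 2.5%."""
    se = sd * math.sqrt(2.0 / n_per_arm)
    obs = random.gauss(true_effect, se)
    return obs, obs / se > 1.96

TRUE_EFFECT = 0.2            # modest true standardized effect (assumption)
N_PHASE2, N_PHASE3 = 50, 200

positives = []               # observed phase 2 effects among positive trials
replications = 0
for _ in range(20000):
    obs2, sig2 = one_trial(TRUE_EFFECT, N_PHASE2)
    if sig2:
        positives.append(obs2)
        replications += one_trial(TRUE_EFFECT, N_PHASE3)[1]

mean_positive = sum(positives) / len(positives)
prob_replicate = replications / len(positives)
print(f"mean observed effect | phase 2 positive: {mean_positive:.2f}")
print(f"P(phase 3 positive | phase 2 positive) : {prob_replicate:.2f}")
```

Under these assumptions the conditional mean effect among positive phase 2 trials is roughly 0.5, well above the true 0.2 (the "random high"), and the replication probability is only around one half, even with nothing wrong in either trial.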
  •
    ABSTRACT: 'Success' in drug development is bringing to patients a new medicine that has an acceptable benefit-risk profile and that is also cost-effective. Cost-effectiveness means that the incremental clinical benefit is deemed worth paying for by a healthcare system, and it plays an important role in enabling manufacturers to bring new medicines to patients as soon as possible following regulatory approval. Subgroup analyses are increasingly used by decision-makers when determining the cost-effectiveness of new medicines and making recommendations. This paper highlights the statistical considerations when using subgroup analyses to support cost-effectiveness for a health technology assessment. The key principles recommended for subgroup analyses supporting clinical effectiveness published by Paget et al. are evaluated with respect to subgroup analyses supporting cost-effectiveness. A health technology assessment case study is included to highlight the importance of subgroup analyses when incorporated into cost-effectiveness analyses. In summary, we recommend planning subgroup analyses for cost-effectiveness analyses early in the drug development process and adhering to good statistical principles when using subgroup analyses in this context. In particular, we consider it important to provide transparency in how subgroups are defined, to demonstrate the robustness of the subgroup results, and to quantify the uncertainty in the subgroup analyses of cost-effectiveness. Copyright © 2014 John Wiley & Sons, Ltd.
    Pharmaceutical Statistics 06/2014 · 0.99 Impact Factor
  •
    ABSTRACT: In this paper, the authors express their views on a range of topics related to data monitoring committees (DMCs) for adaptive trials that have emerged recently. The topics pertain to DMC roles and responsibilities, membership, training, and communication. DMCs have been monitoring trials using the group sequential design (GSD) for over 30 years. While decisions may be more complicated with novel adaptive designs, the fundamental roles and responsibilities of a DMC remain the same, namely, to protect patient safety and ensure the scientific integrity of the trial. It is the DMC's responsibility to recommend changes to the trial within the scope of a prespecified adaptation plan or decision criteria, and not to otherwise recommend changes to the study design except for serious safety-related concerns. Nevertheless, compared with traditional data monitoring, some additional considerations are necessary when convening DMCs for novel adaptive designs. They include the need to identify DMC members who are familiar with adaptive designs and to consider possible sponsor involvement in unique situations. The need for additional expertise in DMC members has prompted some researchers to propose alternative DMC models or alternative governance models. These options, and the authors' views on them, are discussed in this article.
    Therapeutic Innovation and Regulatory Science 07/2013; 47(4):495-502.
  • Andrew Stone, Christy Chuang-Stein
    Pharmaceutical Statistics 05/2013 · 0.99 Impact Factor
  • Therapeutic Innovation and Regulatory Science 04/2013; 48(3):316-326.
  •
    ABSTRACT: Traditionally, sample size considerations for phase 2 trials are based on the desired properties of the design and response information from the trials. In this article, we propose to design phase 2 trials based on program-level optimization. We present a framework to evaluate the impact that several phase 2 design features have on the probability of phase 3 success and the expected net present value of the product. These factors include the phase 2 sample size, decision rules to select a dose for phase 3 trials, and the sample size for phase 3 trials. Using neuropathic pain as an example, we use simulations to illustrate the framework and show the benefit of including these factors in the overall decision process.
    Therapeutic Innovation and Regulatory Science 07/2012; 46(4):439-454.
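The program-level idea in the abstract above can be sketched with a toy simulation. This is a hedged illustration with assumed numbers, not the paper's model: a go/no-go rule sits between phase 2 and phase 3, and we vary only the phase 2 sample size.

```python
# Probability of overall program success (a "go" decision followed by
# a significant phase 3 trial) as a function of the phase 2 sample size.
import math
import random

random.seed(2)

def observed_effect(true_effect, n_per_arm, sd=1.0):
    return random.gauss(true_effect, sd * math.sqrt(2.0 / n_per_arm))

def program_success(true_effect, n2, n3, go_threshold=0.15, sims=20000):
    """P(phase 2 'go' AND phase 3 significant at one-sided 2.5%)."""
    se3 = math.sqrt(2.0 / n3)
    wins = 0
    for _ in range(sims):
        if observed_effect(true_effect, n2) > go_threshold:
            wins += observed_effect(true_effect, n3) / se3 > 1.96
    return wins / sims

success_by_n2 = {}
for n2 in (30, 100, 300):
    success_by_n2[n2] = program_success(true_effect=0.2, n2=n2, n3=250)
    print(f"phase 2 n/arm = {n2:3d} -> P(program success) = {success_by_n2[n2]:.2f}")
```

A larger phase 2 trial screens more reliably, so the program-level success probability rises with the phase 2 sample size here; a full evaluation would also weigh the extra cost and time, as the paper does with expected net present value.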
  • S Kirby, J Burke, C Chuang-Stein, C Sin
    ABSTRACT: Sample size planning is an important design consideration for a phase 3 trial. In this paper, we consider how to improve this planning when using data from phase 2 trials. We use an approach based on the concept of assurance. We consider adjusting phase 2 results because of two possible sources of bias. The first source arises from selecting compounds with pre-specified favourable phase 2 results and using these favourable results as the basis of the treatment effect for phase 3 sample size planning. The second source arises from projecting the phase 2 treatment effect to the phase 3 population when this projection is optimistic because of a generally more heterogeneous patient population at the confirmatory stage. In an attempt to reduce the impact of these two sources of bias, we adjust (discount) the phase 2 estimate of treatment effect. We consider both multiplicative and additive adjustment. Following a previously proposed concept, we consider the properties of several criteria, termed launch criteria, for deciding whether or not to progress development to phase 3. We use simulations to investigate launch criteria with or without bias adjustment for the sample size calculation under various scenarios. The simulation results are supplemented with empirical evidence to support the need to discount phase 2 results when the latter are used in phase 3 planning. Finally, we offer some recommendations based on both the simulations and the empirical investigations. Copyright © 2012 John Wiley & Sons, Ltd.
    Pharmaceutical Statistics 05/2012; 11(5):373-85 · 0.99 Impact Factor
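Assurance with a multiplicative discount, as discussed above, can be sketched as follows. The numbers are illustrative assumptions, not the paper's: assurance is phase 3 power averaged over the uncertainty in the phase 2 estimate.

```python
# Assurance = P(phase 3 significant), averaging over a distribution
# for the true effect centred on a (possibly discounted) phase 2
# estimate with its standard error.
import math
import random

random.seed(3)

def assurance(est2, se2, n3_per_arm, discount=1.0, sd=1.0, sims=50000):
    """Draw the true effect from N(discount * est2, se2), then run a
    phase 3 trial with a one-sided 2.5% z-test."""
    se3 = sd * math.sqrt(2.0 / n3_per_arm)
    wins = 0
    for _ in range(sims):
        true_effect = random.gauss(discount * est2, se2)
        wins += random.gauss(true_effect, se3) / se3 > 1.96
    return wins / sims

EST2, SE2 = 0.35, 0.12          # phase 2 estimate and its standard error
assurance_by_discount = {}
for d in (1.0, 0.9, 0.8):       # no, 10%, 20% multiplicative discount
    assurance_by_discount[d] = assurance(EST2, SE2, n3_per_arm=150, discount=d)
    print(f"discount = {d:.1f} -> assurance = {assurance_by_discount[d]:.2f}")
```

Discounting the phase 2 estimate lowers the computed assurance, which in turn argues for a larger phase 3 sample size than a naive calculation based on the raw phase 2 estimate would give.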
  • Michael J Brown, Christy Chuang-Stein, Simon Kirby
    ABSTRACT: We introduce the idea of a design to detect signals of efficacy in early phase clinical trials. Such a design features three possible decisions: to kill the compound; to continue with staged development; or to continue with accelerated development of the compound. We describe how such studies improve the trade-off between the two errors of killing a compound with good efficacy and committing to a full development program for a compound that has no efficacy, and how they can be designed. We argue that such studies could be used to screen compounds at the proof-of-concept stage, reduce late Phase 2 attrition, and speed up the development of highly efficacious drugs.
    Journal of Biopharmaceutical Statistics 01/2012; 22(6):1097-108 · 0.73 Impact Factor
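A three-decision rule of the kind described above can be sketched with two thresholds on the test statistic. The thresholds, sample size, and effect sizes below are assumptions for illustration, not the paper's values.

```python
# Operating characteristics of a kill / staged / accelerate rule at
# proof of concept, under no effect and under a good effect.
import math
import random

random.seed(4)

def decide(obs_effect, se, kill_z=0.5, accel_z=2.5):
    """Map an observed effect to kill / staged / accelerate."""
    z = obs_effect / se
    if z < kill_z:
        return "kill"
    return "accelerate" if z > accel_z else "staged"

def operating_characteristics(true_effect, n_per_arm, sd=1.0, sims=20000):
    se = sd * math.sqrt(2.0 / n_per_arm)
    counts = {"kill": 0, "staged": 0, "accelerate": 0}
    for _ in range(sims):
        counts[decide(random.gauss(true_effect, se), se)] += 1
    return {k: round(v / sims, 3) for k, v in counts.items()}

oc_null = operating_characteristics(true_effect=0.0, n_per_arm=60)
oc_good = operating_characteristics(true_effect=0.4, n_per_arm=60)
print("no true effect :", oc_null)
print("good effect    :", oc_good)
```

Under these assumptions an ineffective compound is usually killed while an effective one is rarely killed and often accelerated, which is exactly the trade-off between the two errors the abstract describes.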
  •
    ABSTRACT: Subgroup analyses are an integral part of access and reimbursement dossiers, in particular health technology assessments (HTAs), and HTA recommendations are often limited to subpopulations. HTA recommendations for subpopulations are not always clear and are not without controversy. In this paper, we review several HTA guidelines regarding subgroup analyses. We describe good statistical principles for subgroup analyses of clinical effectiveness to support HTAs and include case examples where HTA recommendations were given for subpopulations only. Unlike with regulatory submissions, pharmaceutical statisticians in most companies have had limited involvement in the planning, design and preparation of HTA/payer submissions. We hope to change this by highlighting how pharmaceutical statisticians should contribute to payer submissions. This includes early engagement in reimbursement strategy discussions to influence the design, analysis and interpretation of phase III randomized clinical trials as well as meta-analyses/network meta-analyses. The focus of this paper is on subgroup analyses relating to clinical effectiveness, as we believe this is the first key step for statistical involvement and influence in the preparation of HTA and reimbursement submissions.
    Pharmaceutical Statistics 12/2011; 10(6):532-8 · 0.99 Impact Factor
  •
    ABSTRACT: The increasing prevalence of Alzheimer disease (AD) and the lack of effective agents to attenuate progression have accelerated research and development of disease-modifying (DM) therapies. The traditional parallel group design and single time point analysis used in the support of past AD drug approvals address symptomatic benefit over relatively short treatment durations. More recent trials investigating disease modification are by necessity longer in duration and require larger sample sizes. Nevertheless, trial design and analysis remain mostly unchanged and may not be adequate to meet the objective of demonstrating disease modification. The randomized start design (RSD) has been proposed as an option to study DM effects, but its application in AD trials may have been hampered by certain methodological challenges. To address the methodological issues that have impeded more extensive use of the RSD in AD trials, and to encourage other researchers to develop novel design and analysis methodologies to better ascertain DM effects for the next generation of AD therapies, we propose a stepwise testing procedure to evaluate potential DM effects of novel AD therapies. The Alzheimer Disease Assessment Scale-Cognitive Subscale (ADAS-cog) is used for illustration. We propose to test three hypotheses in a stepwise sequence. The three tests pertain to treatment differences at two separate time points and a difference in the rate of change. Estimation is facilitated by the Mixed-effects Model for Repeated Measures approach. The required sample size is estimated using Monte Carlo simulations and by modeling ADAS-cog data from prior longitudinal AD studies. The greatest advantage of the RSD proposed in this article is its ability to critically address the question of a DM effect.
The AD trial using the new approach would be longer (a 12-month placebo period plus a 12-month delayed-start period, for a total duration of 24 months) and would require more subjects (about 1000 subjects per arm for the non-inferiority margin chosen in the illustration). It would also require additional evaluations to estimate the rate of ADAS-cog change toward the end of the trial. A regulatory claim of disease modification for any compound will likely require additional verification of the drug's effect on a validated biomarker of Alzheimer's pathology. Incorporation of the RSD in AD trials is feasible. With proper trial setup and statistical procedures, this design could support the detection of a disease-modifying effect. In our opinion, a two-phase RSD with a stepwise hypothesis testing procedure could be a reasonable option for future studies.
    Clinical Trials 01/2011; 8(1):5-14 · 2.20 Impact Factor
  •
    ABSTRACT: There are many decision points along the product development continuum. Formal clinical milestones, such as the end of phase 1, phase 2a (proof of mechanism or proof of concept), and phase 2b provide useful decision points to critically evaluate the accumulating data. At each milestone, sound decisions begin with asking the right questions and choosing the appropriate design as well as criteria to make go/no-go decisions. It is also important that knowledge about the new investigational product, gained either directly from completed trials or indirectly from similar products for the same disorder, be systematically incorporated into the evaluation process. In this article, we look at metrics that go beyond type I and type II error rates associated with the traditional hypothesis test approach. We draw on the analogy between diagnostic tests and hypothesis tests to highlight the need for confirmation and the value of formally updating our prior belief about a compound's effect with new data. Furthermore, we show how incorporating probability distributions that characterize current evidence about the true treatment effect could help us make decisions that specifically address the need at each clinical milestone. We illustrate the above with examples.
    Therapeutic Innovation and Regulatory Science 01/2011; 45(2):187-202.
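The diagnostic-test analogy above has a simple quantitative core: by Bayes' rule, the probability that a compound truly works given a significant result depends on the prior belief, the power, and the false-positive rate. The priors below are illustrative assumptions.

```python
# "Positive predictive value" of a significant trial result, treating
# the hypothesis test like a diagnostic test for a true effect.
def ppv(prior, power, alpha):
    """P(truly effective | significant result)."""
    return prior * power / (prior * power + (1 - prior) * alpha)

for prior in (0.1, 0.3, 0.5):
    print(f"prior = {prior:.1f} -> PPV = {ppv(prior, power=0.8, alpha=0.025):.2f}")
```

Even with 80% power and a one-sided alpha of 2.5%, a significant result for a long-shot compound (prior 0.1) still leaves a meaningful chance of a false positive, which is why confirmation in a further trial matters.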
  •
    ABSTRACT: The US Food and Drug Administration has recently released a draft guidance document on adaptive clinical trials. We comment on the document from the particular perspective of the authors as members of a PhRMA working group on this topic, which has interacted with FDA personnel on adaptive trial issues in recent years. We describe the activities and prior work of our working group, and use this as a basis to discuss the content of the guidance document as it relates to several issues of current relevance, such as data monitoring processes, adaptive dose finding, so-called seamless trial designs, and sample size reestimation.
    Journal of Biopharmaceutical Statistics 11/2010; 20(6):1115-24 · 0.73 Impact Factor
  • Christy Chuang-Stein, Mohan Beltangady
    ABSTRACT: The Food and Drug Administration of the United States issued a draft guidance on adaptive design clinical trials in February 2010. This draft guidance has attracted a lot of attention because of the increasing interest in adaptive trials by the pharmaceutical industry in recent years. In this paper, we report on highlights of comments collected within Pfizer on this draft guidance. In addition, we share Pfizer's internal journey to promote efficient trial designs since 2005. Adaptive designs have been part of that journey.
    Journal of Biopharmaceutical Statistics 11/2010; 20(6):1143-9 · 0.73 Impact Factor
  • Christy Chuang-Stein, Simon Kirby, Ian Hirsch, Gary Atkinson
    ABSTRACT: The minimum clinically important difference (MCID) between treatments is recognized as a key concept in the design and interpretation of results from a clinical trial. Yet even assuming such a difference can be derived, it is not necessarily clear how it should be used. In this paper, we consider three possible roles for the MCID. They are: (1) using the MCID to determine the required sample size so that the trial has a pre-specified statistical power to conclude a significant treatment effect when the treatment effect is equal to the MCID; (2) requiring, with high probability, that the observed treatment effect in a trial, in addition to being statistically significant, be at least as large as the MCID; (3) demonstrating via hypothesis testing that the effect of the new treatment is at least as large as the MCID. We examine the implications of these three possible roles of the MCID for sample size, expectations of a new treatment, and the chance of a successful trial. We also give our opinion on how the MCID should generally be used in the design and interpretation of results from a clinical trial.
    Pharmaceutical Statistics 10/2010; 10(3):250-6 · 0.99 Impact Factor
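The sample-size consequences of roles (1) and (3) above can be sketched with the standard normal-approximation formula. The MCID and true effect below are assumptions on a standardized scale, chosen only for illustration.

```python
# Per-arm sample size for a two-arm trial with ~80% power and a
# two-sided 5% test, under two different roles for the MCID.
import math

def n_per_arm(delta, sd=1.0, z_alpha=1.96, z_beta=0.84):
    """Normal-approximation per-arm sample size to detect `delta`."""
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

MCID, TRUE_EFFECT = 0.3, 0.5   # assumed values (standardized scale)

# Role 1: size the trial to detect the MCID itself.
n_role1 = n_per_arm(MCID)
# Role 3: show the effect exceeds the MCID, so the detectable
# difference shrinks to (true effect - MCID).
n_role3 = n_per_arm(TRUE_EFFECT - MCID)

print(f"role 1 (power at the MCID)  : n/arm = {n_role1}")
print(f"role 3 (test effect > MCID) : n/arm = {n_role3}")
```

Role (3) more than doubles the sample size here; role (2), requiring the observed effect itself to exceed the MCID with high probability, likewise inflates the sample size relative to role (1), which is the trade-off the paper examines.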
  • Simon Kirby, Christy Chuang-Stein, Mark Morris
    ABSTRACT: Patient-reported outcomes are important for assessing the effectiveness of treatments in many disease areas. For this reason, many new instruments that capture patient-reported outcomes have been developed over the past several decades. With the development of each new instrument comes the question of what constitutes a minimum clinically important difference between treatments when using the new instrument. In this paper we describe a method for estimating a minimum clinically important difference between treatments for a patient-reported outcome through a desired difference in response rates under a given definition of a responder. As well as being of interest in its own right, the use of a minimum clinically important difference on the patient-reported outcome scale is likely to lead to sample size advantages. We illustrate the method with data on neuropathic pain, both when a responder is defined by requiring at least some improvement in the Patient Global Impression of Change and when a responder is defined by existing responder definitions.
    Journal of Biopharmaceutical Statistics 09/2010; 20(5):1043-54 · 0.73 Impact Factor
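The translation from a desired responder-rate difference to a difference on the outcome scale can be sketched under a normal-outcome model. This is a hedged illustration: the responder cutoff, outcome standard deviation, and target gain are assumptions, not the paper's data.

```python
# Back out the mean treatment difference implied by a desired gain in
# responder rate, assuming improvements are normally distributed.
from statistics import NormalDist

ND = NormalDist()
CUTOFF = 1.0   # improvement required to count as a responder (assumed)
SD = 2.0       # standard deviation of the outcome (assumed)

def responder_rate(mean_effect):
    """P(improvement > CUTOFF) when improvements are N(mean_effect, SD)."""
    return 1.0 - ND.cdf((CUTOFF - mean_effect) / SD)

target = responder_rate(0.0) + 0.15   # want 15 points more responders
lo, hi = 0.0, 5.0
for _ in range(60):                   # bisection for the implied mean
    mid = (lo + hi) / 2.0
    lo, hi = (mid, hi) if responder_rate(mid) < target else (lo, mid)

print(f"implied MCID on the outcome scale: {lo:.3f}")
```

The implied mean difference can then be used directly for sample size planning on the continuous scale, which is where the sample size advantage mentioned in the abstract comes from.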
  • Christy Chuang-Stein, Mohan Beltangady
    ABSTRACT: Experience has shown us that when data are pooled from multiple studies to create an integrated summary, an analysis based on naïvely pooled data is vulnerable to the mischief of Simpson's Paradox. Using the proportions of patients with a target adverse event (AE) as an example, we demonstrate the Paradox's effect on both the comparison and the estimation of the proportions. While meta-analytic approaches have been recommended and are increasingly used for comparing safety data between treatments, reporting proportions of subjects experiencing a target AE based on data from multiple studies has received little attention. In this paper, we suggest two possible approaches to report these cumulative proportions. In addition, we urge that regulatory guidelines on reporting such proportions be established so that risks can be communicated in a scientifically defensible and balanced manner.
    Pharmaceutical Statistics 01/2010; 10(1):3-7 · 0.99 Impact Factor
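Simpson's Paradox in pooled adverse-event proportions is easy to demonstrate with made-up counts. The studies below are invented for illustration, and the stratified summary shown is one of several possible alternatives, not necessarily the paper's.

```python
# Naive pooling of AE counts across two studies reverses the
# within-study comparison; a study-size-weighted (stratified)
# summary preserves it.
studies = [
    # (AEs on treatment, N treatment, AEs on control, N control)
    (1, 20, 10, 100),    # low-risk study : 5% vs 10% (treatment lower)
    (70, 100, 16, 20),   # high-risk study: 70% vs 80% (treatment lower)
]

# Naive pooling of the raw counts
t_pooled = sum(s[0] for s in studies) / sum(s[1] for s in studies)
c_pooled = sum(s[2] for s in studies) / sum(s[3] for s in studies)
print(f"naively pooled: treatment {t_pooled:.1%} vs control {c_pooled:.1%}")

# Stratified alternative: weight each study's arm-specific rate by
# the total study size
total_n = sum(s[1] + s[3] for s in studies)
t_strat = sum((s[1] + s[3]) / total_n * s[0] / s[1] for s in studies)
c_strat = sum((s[1] + s[3]) / total_n * s[2] / s[3] for s in studies)
print(f"stratified    : treatment {t_strat:.1%} vs control {c_strat:.1%}")
```

Naive pooling makes the treatment look far worse even though its AE rate is lower in every study, because treated patients are concentrated in the high-risk study; the stratified summary keeps the within-study direction.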
  •
    ABSTRACT: The development of novel drugs is becoming increasingly challenging, inefficient, and costly, as acknowledged by all major stakeholders of pharmaceutical products. Adaptive designs have attracted considerable attention in recent years, as they promise an increase in the efficiency of the drug development process by making better use of the observed data. The key idea of adaptive designs is to use data accumulating from an ongoing experiment to decide how to modify certain design aspects, better address the question(s) of interest, and/or adjust for incorrect assumptions. When planned carefully and applied in appropriate situations, a number of adaptive designs allow for scientifically sound conclusions: early stopping either for futility or for success, sample size reassessment, treatment selection, etc. Most current discussions regarding adaptive designs focus, however, on clinical trial applications in the (late) development phase of a novel drug. The aim of this review is to broaden this perspective and to demonstrate that adaptivity is a fundamentally important concept that can be applied at many different stages of drug discovery and development. We review the major statistical methods available for planning and analyzing adaptive designs and then move through the drug discovery and development process to identify possible opportunities for adaptivity. To illustrate the ideas, we refer to examples and case studies from the literature, where available. A brief discussion of regulatory perspectives, operational aspects, and some potential hurdles is also given. Drug Dev Res 70, 2009. © 2009 Wiley-Liss, Inc.
    Drug Development Research 02/2009; 70(3):169-190 · 0.87 Impact Factor
  • Christy Chuang-Stein, Alex Dmitrienko, Walter Offen
    Journal of Biopharmaceutical Statistics 02/2009; 19(1):14-21 · 0.73 Impact Factor
  • Paul Gallo, Christy Chuang-Stein
    Pharmaceutical Statistics 01/2009; 8(1):1-4 · 0.99 Impact Factor

Publication Stats

262 Citations
75.46 Total Impact Points


  • 2004–2014
    • Pfizer Inc.
      • Pfizer Global Research & Development
      New York City, New York, United States
  • 2006–2010
    • Novartis
      Basel, Basel-City, Switzerland
  • 2007
    • Wyeth
      New Johnsonville, Tennessee, United States