Christy Chuang-Stein

Pfizer Inc., New York, New York, United States

Publications (61) · 112.87 Total Impact

  • Statistics in Biopharmaceutical Research 09/2015; DOI:10.1080/19466315.2015.1077724 · 0.62 Impact Factor
  • Friedhelm Leverkus · Christy Chuang-Stein
    ABSTRACT: In 2010, the Federal Parliament (Bundestag) of Germany passed a new law (Arzneimittelmarktneuordnungsgesetz, AMNOG) on the regulation of medicinal products that applies to all pharmaceutical products with active ingredients launched beginning January 1, 2011. The law describes the process to determine the price at which an approved new product will be reimbursed by the statutory health insurance system. The process consists of two phases. The first phase assesses the additional benefit of the new product versus an appropriate comparator (zweckmäßige Vergleichstherapie, zVT). The second phase involves price negotiation. Focusing on the first phase, this paper investigates the requirements for the benefit assessment of a new product under this law, with special attention to the methods applied by the German authorities on issues such as the choice of the comparator, patient-relevant endpoints, subgroup analyses, extent of benefit, determination of net benefit, primary and secondary endpoints, and uncertainty of the additional benefit. We propose alternative approaches to address the requirements in some cases and invite other researchers to help develop solutions in other cases. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
    Biometrical Journal 09/2015; DOI:10.1002/bimj.201300256 · 0.95 Impact Factor
  • ABSTRACT: Clinical trials that compare strategies to optimize antibiotic use are of critical importance but are limited by competing risks that distort outcome interpretation, the complexities of noninferiority trials, large sample sizes, and inadequate evaluation of benefits and harms at the patient level. The Antibacterial Resistance Leadership Group strives to overcome these challenges through innovative trial design. Response adjusted for duration of antibiotic risk (RADAR) is a novel methodology utilizing a superiority design and a 2-step process: (1) categorizing patients into an overall clinical outcome (based on benefits and harms), and (2) ranking patients with respect to a desirability of outcome ranking (DOOR). DOORs are constructed by assigning higher ranks to patients with (1) better overall clinical outcomes and (2) shorter durations of antibiotic use for similar overall clinical outcomes. DOOR distributions are compared between antibiotic use strategies, and the probability that a randomly selected patient will have a better DOOR if assigned to the new strategy is estimated. DOOR/RADAR represents a new paradigm in assessing the risks and benefits of new strategies to optimize antibiotic use. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved.
    Clinical Infectious Diseases 06/2015; 61(5). DOI:10.1093/cid/civ495 · 8.89 Impact Factor
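    The DOOR comparison described above amounts to a pairwise win estimate (a Mann-Whitney-type probability). A minimal sketch with hypothetical patient data and a made-up 5-level outcome scale, not the trial group's actual scoring:

    ```python
    import itertools

    # (outcome category: 1 = best ... 5 = worst, days of antibiotics)
    new = [(1, 5), (1, 7), (2, 5), (2, 8), (3, 10)]   # new strategy
    ctl = [(1, 8), (2, 9), (2, 10), (3, 9), (4, 12)]  # control strategy

    def door_win(a, b):
        """1 if patient a has the better DOOR than b, 0.5 for an exact tie.
        Lower outcome category ranks higher; ties broken by shorter duration."""
        if a[0] != b[0]:
            return 1.0 if a[0] < b[0] else 0.0
        if a[1] != b[1]:
            return 1.0 if a[1] < b[1] else 0.0
        return 0.5

    wins = sum(door_win(a, b) for a, b in itertools.product(new, ctl))
    print(f"P(new-strategy patient has the better DOOR) = {wins / (len(new) * len(ctl)):.2f}")
    ```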
  • Christy Chuang-Stein · Qi Jiang · Olga Marchenko
    Clinical Trials 12/2014; 12(1). DOI:10.1177/1740774514562032 · 1.93 Impact Factor
  • ABSTRACT: Recent research has fostered new guidance on preventing and treating missing data, most notably the landmark expert panel report from the National Research Council (NRC) that was commissioned by the FDA. One of the findings from that panel was the need for better software tools to conduct missing data sensitivity analyses and frameworks for drawing inference from them. In response to the NRC recommendations, a Scientific Working Group was formed under the auspices of the Drug Information Association (DIASWG). The present paper stems from the work of the DIASWG. Specifically, the NRC panel's 18 recommendations are distilled into 3 pillars for dealing with missing data: (1) providing clearly stated objectives and causal estimands; (2) preventing as much missing data as possible; and (3) combining a sensible primary analysis with sensitivity analyses to assess the robustness of inferences to missing data assumptions. Sample data sets are used to illustrate how sensitivity analyses can be used to assess the robustness of inferences to missing data assumptions. The suite of software tools used to conduct the sensitivity analyses are freely available for public use at
    Therapeutic Innovation and Regulatory Science 09/2014; 48(1):68-80. DOI:10.1177/2168479013501310 · 0.46 Impact Factor
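    One concrete form such a sensitivity analysis can take is a tipping-point analysis. The sketch below, for a binary endpoint with hypothetical counts (it does not reproduce the working group's software tools), scans over assumed responder counts among dropouts and reports how many scenarios overturn a significant result:

    ```python
    from scipy.stats import fisher_exact

    # observed data (hypothetical): responders / completers, plus dropouts
    n_t, resp_t, miss_t = 90, 54, 10    # treatment arm
    n_c, resp_c, miss_c = 92, 38, 8     # control arm

    tipped = []
    for a in range(miss_t + 1):          # responders assumed among treatment dropouts
        for b in range(miss_c + 1):      # responders assumed among control dropouts
            table = [[resp_t + a, n_t + miss_t - resp_t - a],
                     [resp_c + b, n_c + miss_c - resp_c - b]]
            if fisher_exact(table)[1] >= 0.05:
                tipped.append((a, b))

    print(f"base case p = {fisher_exact([[resp_t, n_t - resp_t], [resp_c, n_c - resp_c]])[1]:.3f}")
    print(f"{len(tipped)} of {(miss_t + 1) * (miss_c + 1)} dropout scenarios overturn significance")
    ```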
  • Christy Chuang-Stein · Simon Kirby
    ABSTRACT: It is frequently noted that an initial clinical trial finding was not reproduced in a later trial. This is often met with some surprise. Yet there is a relatively straightforward reason partially responsible for this observation. In this article, we examine this reason by first reviewing some findings in a recent publication in the Journal of the American Medical Association. To help explain the non-negligible chance of failing to reproduce a previous positive finding, we compare a series of trials to successive diagnostic tests used for identifying a condition. To help explain the suspicion that the treatment effect, when observed in a subsequent trial, seems to have decreased in magnitude, we draw a conceptual analogy between the phase II-III development stages and interim analyses of a trial with a group sequential design. Both analogies remind us that what we observed in an early trial could be a false positive or a random high. We discuss statistical sources for these occurrences and explain why it is important for statisticians to take them into consideration when designing and interpreting trial results. Copyright © 2014 John Wiley & Sons, Ltd.
    Pharmaceutical Statistics 09/2014; 13(5). DOI:10.1002/pst.1633 · 0.83 Impact Factor
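    The diagnostic-test analogy can be made concrete with a small calculation: treating a positive trial like a positive test, the probability the compound truly works given a significant result depends on the prior probability of efficacy (assumed below), the power, and the alpha level. A sketch:

    ```python
    # Sketch of the diagnostic-test analogy: a positive trial is like a positive
    # test, so P(truly effective | significant result) depends on the prior
    # probability of efficacy (assumed below), power (sensitivity), and
    # one-sided alpha (1 - specificity).
    def ppv(prior, power=0.80, alpha=0.025):
        return prior * power / (prior * power + (1 - prior) * alpha)

    prior = 0.2   # assumed fraction of truly effective compounds entering the trial
    first = ppv(prior)
    print(f"after one positive trial: P(effective) = {first:.2f}")    # ~0.89
    print(f"after a second positive trial: P(effective) = {ppv(first):.2f}")
    ```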
  • Christine Fletcher · Christy Chuang-Stein · Marie-Ange Paget · Carol Reid · Neil Hawkins
    ABSTRACT: 'Success' in drug development is bringing to patients a new medicine that has an acceptable benefit-risk profile and that is also cost-effective. Cost-effectiveness means that the incremental clinical benefit is deemed worth paying for by a healthcare system, and it has an important role in enabling manufacturers to get new medicines to patients as soon as possible following regulatory approval. Subgroup analyses are increasingly being utilised by decision-makers in the determination of the cost-effectiveness of new medicines when making recommendations. This paper highlights the statistical considerations when using subgroup analyses to support cost-effectiveness for a health technology assessment. The key principles recommended for subgroup analyses supporting clinical effectiveness published by Paget et al. are evaluated with respect to subgroup analyses supporting cost-effectiveness. A health technology assessment case study is included to highlight the importance of subgroup analyses when incorporated into cost-effectiveness analyses. In summary, we recommend planning subgroup analyses for cost-effectiveness analyses early in the drug development process and adhering to good statistical principles when using subgroup analyses in this context. In particular, we consider it important to provide transparency in how subgroups are defined, to be able to demonstrate the robustness of the subgroup results, and to be able to quantify the uncertainty in the subgroup analyses of cost-effectiveness. Copyright © 2014 John Wiley & Sons, Ltd.
    Pharmaceutical Statistics 07/2014; 13(4). DOI:10.1002/pst.1626 · 0.83 Impact Factor
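    One way to quantify uncertainty in subgroup cost-effectiveness, as the paper recommends, is a bootstrap of incremental net monetary benefit within the subgroup. A sketch with simulated data and an assumed willingness-to-pay threshold (none of it from the paper's case study):

    ```python
    # Sketch: bootstrap uncertainty for a subgroup's incremental net monetary
    # benefit (NMB). Data are simulated and the willingness-to-pay (WTP)
    # threshold is assumed.
    import numpy as np
    rng = np.random.default_rng(7)

    n = 150   # subgroup size per arm
    q_t, c_t = rng.normal(1.20, 0.4, n), rng.normal(12000, 3000, n)  # treatment QALYs, costs
    q_c, c_c = rng.normal(1.05, 0.4, n), rng.normal(9000, 2500, n)   # control QALYs, costs
    wtp = 30000

    def inmb(qt, ct, qc, cc):
        return wtp * (qt.mean() - qc.mean()) - (ct.mean() - cc.mean())

    boots = []
    for _ in range(5000):
        i, j = rng.integers(0, n, n), rng.integers(0, n, n)   # resample each arm
        boots.append(inmb(q_t[i], c_t[i], q_c[j], c_c[j]))
    boots = np.array(boots)
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"incremental NMB = {inmb(q_t, c_t, q_c, c_c):.0f} (95% CI {lo:.0f} to {hi:.0f})")
    print(f"P(cost-effective at WTP {wtp}) = {(boots > 0).mean():.2f}")
    ```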
  • ABSTRACT: Recent research has fostered new guidance on preventing and treating missing data. This article is the consensus opinion of the Drug Information Association's Scientific Working Group on Missing Data. Common elements from recent guidance are distilled and means for putting the guidance into action are proposed. The primary goal is to maximize the proportion of patients who adhere to the protocol-specified interventions. In so doing, trial design and trial conduct should be considered. Completion rate should receive as much focus as enrollment rate, with particular attention to minimizing loss to follow-up. Whether or not follow-up data after discontinuation of the originally randomized medication and/or initiation of rescue medication contribute to the primary estimand depends on the context. In outcomes trials (intervention thought to influence the disease process), follow-up data are often included in the primary estimand, whereas in symptomatic trials (intervention alters symptom severity but does not change the underlying disease), follow-up data are often not. Regardless of scenario, the confounding influence of rescue medications can render follow-up data of little use in understanding the causal effects of the randomized interventions. A sensible primary analysis can often be formulated in the missing at random (MAR) framework. Sensitivity analyses assessing robustness to departures from MAR are crucial. Plausible sensitivity analyses can be prespecified using controlled imputation approaches, either to implement a plausibly conservative analysis or to stress-test the primary result, and used in combination with other model-based missing not at random (MNAR) approaches such as selection, shared parameter, and pattern-mixture models. The example dataset and analyses used in this article are freely available for public use at
    Statistics in Biopharmaceutical Research 12/2013; 5(4):369-382. DOI:10.1080/19466315.2013.848822 · 0.62 Impact Factor
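    A controlled-imputation stress test of the kind described can be sketched as a delta-adjustment multiple imputation pooled with Rubin's rules. The simplified example below (simulated data, a crude empirical imputation scheme, and a normal reference for the test, none of which are the working group's actual tools) shifts imputed treatment-arm values by progressively larger penalties to see when significance is lost:

    ```python
    # Sketch: delta-adjustment ("stress test") controlled imputation for a
    # continuous endpoint, pooled with Rubin's rules.
    import numpy as np
    from scipy import stats
    rng = np.random.default_rng(1)

    n = 200   # per arm; change from baseline, lower = better
    treat = rng.normal(-2.0, 6.0, n)
    ctrl = rng.normal(-0.5, 6.0, n)
    obs_t = rng.random(n) > 0.2          # ~20% dropout per arm
    obs_c = rng.random(n) > 0.2

    def mi_estimate(delta, m=100):
        """Impute dropouts from each arm's observed values; treatment-arm
        imputations are shifted by delta (delta = 0 mimics MAR)."""
        ests, wvars = [], []
        for _ in range(m):
            t, c = treat.copy(), ctrl.copy()
            t[~obs_t] = rng.choice(treat[obs_t], (~obs_t).sum()) + delta
            c[~obs_c] = rng.choice(ctrl[obs_c], (~obs_c).sum())
            ests.append(t.mean() - c.mean())
            wvars.append(t.var(ddof=1) / n + c.var(ddof=1) / n)
        # Rubin's rules: total variance = within + (1 + 1/m) * between
        total = np.mean(wvars) + (1 + 1 / m) * np.var(ests, ddof=1)
        z = np.mean(ests) / np.sqrt(total)
        return np.mean(ests), 2 * stats.norm.sf(abs(z))

    for delta in [0, 1, 2, 3, 4]:   # increasing penalty on treatment dropouts
        est, p = mi_estimate(delta)
        print(f"delta = {delta}: estimate = {est:.2f}, p = {p:.4f}")
    ```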
  • Andrew Stone · Christy Chuang-Stein
    Pharmaceutical Statistics 07/2013; 12(4). DOI:10.1002/pst.1574 · 0.83 Impact Factor
  • ABSTRACT: In this paper, the authors express their views on a range of topics related to data monitoring committees (DMCs) for adaptive trials that have emerged recently. The topics pertain to DMC roles and responsibilities, membership, training, and communication. DMCs have been monitoring trials using the group sequential design (GSD) for over 30 years. While decisions may be more complicated with novel adaptive designs, the fundamental roles and responsibilities of a DMC will remain the same, namely, to protect patient safety and ensure the scientific integrity of the trial. It will be the DMC's responsibility to recommend changes to the trial within the scope of a prespecified adaptation plan or decision criteria and not to otherwise recommend changes to the study design except for serious safety-related concerns. Nevertheless, compared with traditional data monitoring, some additional considerations are necessary when convening DMCs for novel adaptive designs. They include the need to identify DMC members who are familiar with adaptive designs and to consider possible sponsor involvement in unique situations. The need for additional expertise in DMC members has prompted some researchers to propose alternative DMC models or alternative governance models. These various options and the authors' views on them are presented in this article.
    Therapeutic Innovation and Regulatory Science 07/2013; 47(4):495-502. DOI:10.1177/2168479013486996 · 0.46 Impact Factor
  • ABSTRACT: Adaptive clinical trials require access to interim data to carry out trial modifications as allowed by a prespecified adaptation plan. A data monitoring committee (DMC) is a group of experts charged with monitoring accruing trial data to ensure the safety of trial participants; in adaptive trials, a DMC may also play a role in implementing a preplanned adaptation. In this paper, we summarize current practices and viewpoints and provide guidance on evolving issues related to the use of DMCs in adaptive trials. We describe the common types of adaptive designs and point out some DMC-related issues that are unique to this class of designs. We include 3 examples of DMCs in late-stage adaptive trials that have been implemented in practice. We advocate training opportunities for researchers who may be interested in serving on a DMC for an adaptive trial, since qualified DMC members are fundamental to the successful execution of DMC responsibilities.
    Therapeutic Innovation and Regulatory Science 04/2013; 48(3):316-326. DOI:10.1177/2168479013509805 · 0.46 Impact Factor
  • Christy Chuang-Stein · H Amy Xia
    ABSTRACT: The last 15 years have seen a substantial increase in efforts devoted to safety assessment by statisticians in the pharmaceutical industry. While some of these efforts were driven by regulations and public demand for safer products, much of the motivation came from the realization that there is a strong need for a systematic approach to safety planning, evaluation, and reporting at the program level throughout the drug development life cycle. An efficient process can help us identify safety signals early and afford us the opportunity to develop an effective risk minimization plan early in the development cycle. This awareness has led many pharmaceutical sponsors to set up internal systems and structures to effectively conduct safety assessment at all levels (patient, study, and program). In addition to process, tools have emerged that are designed to enhance data review and pattern recognition. In this paper, we describe advancements in the practice of safety assessment during the premarketing phase of drug development. In particular, we share examples of safety assessment practice at our respective companies, some of which are based on recommendations from industry-initiated working groups on best practice in recent years.
    Journal of Biopharmaceutical Statistics 01/2013; 23(1):3-25. DOI:10.1080/10543406.2013.736805 · 0.59 Impact Factor
  • Michael J Brown · Christy Chuang-Stein · Simon Kirby
    ABSTRACT: We introduce the idea of a design to detect signals of efficacy in early phase clinical trials. Such a design features three possible decisions: to kill the compound; to continue with staged development; or to continue with accelerated development of the compound. We describe how such studies improve the trade-off between the two errors of killing a compound with good efficacy and committing to a complete full development program for a compound that has no efficacy, and we show how they can be designed. We argue that such studies could be used to screen compounds at the proof-of-concept stage, reduce late phase 2 attrition, and speed up the development of highly efficacious drugs.
    Journal of Biopharmaceutical Statistics 11/2012; 22(6):1097-108. DOI:10.1080/10543406.2011.570466 · 0.59 Impact Factor
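    The operating characteristics of such a three-decision design are easy to simulate: two thresholds on the observed effect partition the outcomes into kill / staged / accelerated decisions. A sketch with hypothetical thresholds and effect sizes, not those of the paper:

    ```python
    # Sketch: operating characteristics of a three-decision signal-of-efficacy
    # design, under a normal model for the observed treatment effect.
    import numpy as np
    rng = np.random.default_rng(3)

    n, sigma = 40, 1.0                    # patients per arm
    kill_below, accel_above = 0.1, 0.5    # thresholds on the observed effect
    se = sigma * np.sqrt(2 / n)           # SE of the observed difference in means

    for true_eff in [0.0, 0.3, 0.6]:
        obs = rng.normal(true_eff, se, 100_000)
        kill = (obs < kill_below).mean()
        accel = (obs > accel_above).mean()
        print(f"true effect {true_eff}: kill = {kill:.2f}, "
              f"staged = {1 - kill - accel:.2f}, accelerate = {accel:.2f}")
    ```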
  • S Kirby · J Burke · C Chuang-Stein · C Sin
    ABSTRACT: Sample size planning is an important design consideration for a phase 3 trial. In this paper, we consider how to improve this planning when using data from phase 2 trials. We use an approach based on the concept of assurance. We consider adjusting phase 2 results because of two possible sources of bias. The first source arises from selecting compounds with pre-specified favourable phase 2 results and using these favourable results as the basis of the treatment effect for phase 3 sample size planning. The second source arises from projecting the phase 2 treatment effect to the phase 3 population when this projection is optimistic because of a generally more heterogeneous patient population at the confirmatory stage. In an attempt to reduce the impact of these two sources of bias, we adjust (discount) the phase 2 estimate of treatment effect. We consider multiplicative and additive adjustments. Following a previously proposed concept, we consider the properties of several criteria, termed launch criteria, for deciding whether or not to progress development to phase 3. We use simulations to investigate launch criteria with or without bias adjustment for the sample size calculation under various scenarios. The simulation results are supplemented with empirical evidence to support the need to discount phase 2 results when the latter are used in phase 3 planning. Finally, we offer some recommendations based on both the simulations and the empirical investigations. Copyright © 2012 John Wiley & Sons, Ltd.
    Pharmaceutical Statistics 09/2012; 11(5):373-85. DOI:10.1002/pst.1521 · 0.83 Impact Factor
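    A minimal sketch of assurance with a multiplicative discount, along the lines described (all numbers illustrative, and the simple normal model an assumption): the true effect is drawn from a distribution centered on the discounted phase 2 estimate, and phase 3 power is averaged over those draws.

    ```python
    # Sketch: assurance (unconditional probability of phase 3 success) when the
    # phase 2 estimate is discounted before planning.
    import numpy as np
    from scipy import stats
    rng = np.random.default_rng(11)

    est2, se2 = 0.35, 0.12          # phase 2 effect estimate and its standard error
    discount = 0.8                  # multiplicative adjustment
    n3, sigma = 250, 1.0            # phase 3 patients per arm
    se3 = sigma * np.sqrt(2 / n3)
    crit = stats.norm.ppf(0.975)

    # uncertainty about the true effect, centered on the discounted estimate
    true_eff = rng.normal(discount * est2, se2, 100_000)
    assurance = stats.norm.sf(crit - true_eff / se3).mean()
    print(f"assurance = {assurance:.2f}")
    print(f"power at the raw phase 2 estimate = {stats.norm.sf(crit - est2 / se3):.2f}")
    ```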
  • ABSTRACT: Traditionally, sample size considerations for phase 2 trials are based on the desired properties of the design and response information from the trials. In this article, we propose to design phase 2 trials based on program-level optimization. We present a framework to evaluate the impact that several phase 2 design features have on the probability of phase 3 success and the expected net present value of the product. These factors include the phase 2 sample size, decision rules to select a dose for phase 3 trials, and the sample size for phase 3 trials. Using neuropathic pain as an example, we use simulations to illustrate the framework and show the benefit of including these factors in the overall decision process.
    Therapeutic Innovation and Regulatory Science 07/2012; 46(4):439-454. DOI:10.1177/0092861512444031 · 0.46 Impact Factor
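    A stripped-down version of the framework can be simulated directly: for each candidate phase 2 sample size, generate the phase 2 estimate, apply a go/no-go rule, and track the probability that the program both advances and succeeds in phase 3. The effect size, decision rule, and phase 3 size below are hypothetical:

    ```python
    # Sketch: a simplified program-level view of phase 2 sample size choice.
    import numpy as np
    from scipy import stats
    rng = np.random.default_rng(5)

    true_eff, sigma = 0.3, 1.0
    go_threshold = 0.2              # advance if the observed phase 2 effect exceeds this
    n3 = 200                        # phase 3 patients per arm
    crit = stats.norm.ppf(0.975)
    p3_power = stats.norm.sf(crit - true_eff / (sigma * np.sqrt(2 / n3)))

    for n2 in [30, 60, 100, 150]:
        est2 = rng.normal(true_eff, sigma * np.sqrt(2 / n2), 200_000)
        p_go = (est2 > go_threshold).mean()
        print(f"phase 2 n/arm = {n2}: P(go) = {p_go:.2f}, "
              f"P(go and phase 3 success) = {p_go * p3_power:.2f}")
    ```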
  • Marie-Ange Paget · Christy Chuang-Stein · Christine Fletcher · Carol Reid
    ABSTRACT: Subgroup analysis is an integral part of access and reimbursement dossiers, in particular health technology assessments (HTAs), and HTA recommendations are often limited to subpopulations. HTA recommendations for subpopulations are not always clear or free of controversy. In this paper, we review several HTA guidelines regarding subgroup analyses. We describe good statistical principles for subgroup analyses of clinical effectiveness to support HTAs and include case examples where HTA recommendations were given for subpopulations only. Unlike their involvement in regulatory submissions, pharmaceutical statisticians in most companies have had limited involvement in the planning, design and preparation of HTA/payer submissions. We hope to change this by highlighting how pharmaceutical statisticians should contribute to payer submissions. This includes early engagement in reimbursement strategy discussions to influence the design, analysis and interpretation of phase III randomized clinical trials as well as meta-analyses/network meta-analyses. The focus of this paper is on subgroup analyses relating to clinical effectiveness, as we believe this is the first key step of statistical involvement and influence in the preparation of HTA and reimbursement submissions.
    Pharmaceutical Statistics 11/2011; 10(6):532-8. DOI:10.1002/pst.531 · 0.83 Impact Factor
  • Christy Chuang-Stein · Simon Kirby · Ian Hirsch · Gary Atkinson
    ABSTRACT: The minimum clinically important difference (MCID) between treatments is recognized as a key concept in the design and interpretation of results from a clinical trial. Yet even assuming such a difference can be derived, it is not necessarily clear how it should be used. In this paper, we consider three possible roles for the MCID. They are: (1) using the MCID to determine the required sample size so that the trial has a pre-specified statistical power to conclude a significant treatment effect when the treatment effect is equal to the MCID; (2) requiring, with high probability, that the observed treatment effect in a trial, in addition to being statistically significant, be at least as large as the MCID; (3) demonstrating via hypothesis testing that the effect of the new treatment is at least as large as the MCID. We examine the implications of these three possible roles of the MCID for sample size, expectations of a new treatment, and the chance of a successful trial. We also give our opinion on how the MCID should generally be used in the design and interpretation of results from a clinical trial.
    Pharmaceutical Statistics 05/2011; 10(3):250-6. DOI:10.1002/pst.459 · 0.83 Impact Factor
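    The three roles have quite different quantitative consequences, which a normal-theory back-of-the-envelope calculation makes visible (illustrative numbers; known sigma, two-sided 5% test, and 90% power are assumptions):

    ```python
    # Sketch of the three MCID roles under normal theory.
    import numpy as np
    from scipy import stats

    mcid, sigma = 0.5, 2.0
    za, zb = stats.norm.ppf(0.975), stats.norm.ppf(0.90)

    # Role 1: size the trial to detect the MCID itself
    n1 = np.ceil(2 * ((za + zb) * sigma / mcid) ** 2)
    print(f"role 1: n per arm = {n1:.0f}")

    # Role 2: require the OBSERVED effect to be >= MCID with probability 0.9;
    # the true effect must then exceed mcid + z_0.90 * SE
    se = sigma * np.sqrt(2 / n1)
    print(f"role 2: true effect needed = {mcid + zb * se:.2f} (vs MCID {mcid})")

    # Role 3: test H0: effect <= MCID, so the sample size is driven by the
    # distance between the assumed true effect and the MCID
    delta = 0.8
    n3 = np.ceil(2 * ((za + zb) * sigma / (delta - mcid)) ** 2)
    print(f"role 3: n per arm = {n3:.0f} at an assumed true effect of {delta}")
    ```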
  • ABSTRACT: There are many decision points along the product development continuum. Formal clinical milestones, such as the end of phase 1, phase 2a (proof of mechanism or proof of concept), and phase 2b, provide useful decision points to critically evaluate the accumulating data. At each milestone, sound decisions begin with asking the right questions and choosing the appropriate design as well as criteria to make go/no-go decisions. It is also important that knowledge about the new investigational product, gained either directly from completed trials or indirectly from similar products for the same disorder, be systematically incorporated into the evaluation process. In this article, we look at metrics that go beyond the type I and type II error rates associated with the traditional hypothesis test approach. We draw on the analogy between diagnostic tests and hypothesis tests to highlight the need for confirmation and the value of formally updating our prior belief about a compound's effect with new data. Furthermore, we show how incorporating probability distributions that characterize current evidence about the true treatment effect could help us make decisions that specifically address the need at each clinical milestone. We illustrate the above with examples.
    Therapeutic Innovation and Regulatory Science 03/2011; 45(2):187-202. DOI:10.1177/009286151104500213 · 0.46 Impact Factor
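    The updating idea can be illustrated with a normal-normal conjugate calculation: combine a prior on the treatment effect with the trial estimate, then read off the probability that the effect clears a decision threshold. The prior, estimate, and threshold below are hypothetical:

    ```python
    # Sketch: a normal-normal conjugate update of prior belief about the
    # treatment effect with new trial data.
    import numpy as np
    from scipy import stats

    prior_mean, prior_sd = 0.1, 0.3     # belief before the trial
    obs, obs_se = 0.35, 0.15            # trial estimate and its standard error

    w = prior_sd**2 / (prior_sd**2 + obs_se**2)   # weight given to the data
    post_mean = w * obs + (1 - w) * prior_mean
    post_sd = np.sqrt(1 / (1 / prior_sd**2 + 1 / obs_se**2))

    threshold = 0.2                     # minimum effect worth pursuing
    p_go = stats.norm.sf(threshold, post_mean, post_sd)
    print(f"posterior: {post_mean:.2f} +/- {post_sd:.2f}; P(effect > {threshold}) = {p_go:.2f}")
    ```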
  • Richard Y Zhang · Andrew C Leon · Christy Chuang-Stein · Steven J Romano
    ABSTRACT: Purpose: To address the methodological issues that have impeded more extensive use of the randomized start design (RSD) in Alzheimer's disease (AD) trials and to encourage other researchers to develop novel design and analysis methodologies to better ascertain disease-modifying (DM) effects for the next generation of AD therapies, we propose a stepwise testing procedure to evaluate potential DM effects of novel AD therapies. Methods: The Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-cog) is used for illustration. We propose to test three hypotheses in a stepwise sequence. The three tests pertain to the treatment difference at two separate time points and a difference in the rate of change. Estimation is facilitated by the mixed-effects model for repeated measures (MMRM) approach. The required sample size is estimated using Monte Carlo simulations and by modeling ADAS-cog data from prior longitudinal AD studies. Results: The greatest advantage of the RSD proposed in this article is its ability to critically address the question of a DM effect. An AD trial using the new approach would be longer (12-month placebo period plus 12-month delayed-start period; total 24-month duration) and require more subjects (about 1000 subjects per arm for the non-inferiority margin chosen in the illustration). It would also require additional evaluations to estimate the rate of ADAS-cog change toward the end of the trial. Limitations: A regulatory claim of disease modification for any compound will likely require additional verification of the drug's effect on a validated biomarker of Alzheimer's pathology. Conclusions: Incorporation of the RSD in AD trials is feasible. With a proper trial setup and statistical procedures, this design could support the detection of a disease-modifying effect. In our opinion, a two-phase RSD with a stepwise hypothesis testing procedure could be a reasonable option for future studies.
    Clinical Trials 02/2011; 8(1):5-14. DOI:10.1177/1740774510392255 · 1.93 Impact Factor
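    A simplified Monte Carlo version of the power calculation can be sketched as follows. The paper itself uses an MMRM and a formal stepwise procedure; this toy version t-tests the endpoint at months 12 and 24 and applies a crude non-inferiority check on the month 12-24 slopes, with all parameters (slopes, SDs, margin) illustrative rather than the paper's:

    ```python
    # Sketch: Monte Carlo power for a simplified two-phase randomized start
    # design (ADAS-cog change, higher = worse).
    import numpy as np
    from scipy import stats
    rng = np.random.default_rng(13)

    def rsd_power(n=1000, margin=0.10, nsim=500):
        wins = 0
        for _ in range(nsim):
            # early start: drug slope 0.4/month from month 0
            e12 = 0.4 * 12 + rng.normal(0, 6, n)
            e24 = e12 + 0.4 * 12 + rng.normal(0, 6, n)
            # delayed start: placebo slope 0.5/month to month 12, drug after
            d12 = 0.5 * 12 + rng.normal(0, 6, n)
            d24 = d12 + 0.4 * 12 + rng.normal(0, 6, n)
            p1 = stats.ttest_ind(e12, d12).pvalue          # step 1: difference at month 12
            p2 = stats.ttest_ind(e24, d24).pvalue          # step 2: difference persists at 24
            s_e, s_d = (e24 - e12) / 12, (d24 - d12) / 12  # per-patient slopes, months 12-24
            diff = s_e.mean() - s_d.mean()                 # >0 would mean the delayed arm catches up
            se = np.sqrt(s_e.var(ddof=1) / n + s_d.var(ddof=1) / n)
            ni = diff + 1.645 * se < margin                # step 3: slopes parallel within margin
            wins += (p1 < 0.05) and (p2 < 0.05) and ni
        return wins / nsim

    print(f"P(all three steps pass) = {rsd_power():.2f}")
    ```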
  • Christy Chuang-Stein · Mohan Beltangady
    ABSTRACT: Experience has shown us that when data are pooled from multiple studies to create an integrated summary, an analysis based on naïvely pooled data is vulnerable to the mischief of Simpson's Paradox. Using the proportions of patients with a target adverse event (AE) as an example, we demonstrate the Paradox's effect on both the comparison and the estimation of the proportions. While meta-analytic approaches have been recommended and are increasingly used for comparing safety data between treatments, reporting proportions of subjects experiencing a target AE based on data from multiple studies has received little attention. In this paper, we suggest two possible approaches to report these cumulative proportions. In addition, we urge that regulatory guidelines on reporting such proportions be established so that risks can be communicated in a scientifically defensible and balanced manner.
    Pharmaceutical Statistics 01/2011; 10(1):3-7. DOI:10.1002/pst.397 · 0.83 Impact Factor
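    Simpson's Paradox in pooled AE proportions is easy to reproduce with two hypothetical studies that have opposite randomization ratios: the treatment has the lower AE rate within each study, yet looks far worse in the naive pool, while a study-stratified summary avoids the reversal. A sketch (all counts invented):

    ```python
    # Sketch: Simpson's Paradox in naively pooled adverse event proportions.
    studies = [
        # (treated AEs, treated n, control AEs, control n)
        (2, 100, 12, 400),     # study 1, low-risk population: 2.0% vs 3.0%
        (144, 400, 40, 100),   # study 2, high-risk population: 36.0% vs 40.0%
    ]

    t_ae = sum(s[0] for s in studies); t_n = sum(s[1] for s in studies)
    c_ae = sum(s[2] for s in studies); c_n = sum(s[3] for s in studies)
    print(f"naive pool: treatment {t_ae / t_n:.1%} vs control {c_ae / c_n:.1%}")

    # size-weighted average of within-study risk differences (one simple
    # stratified alternative; treatment is lower in every study)
    w = [tn + cn for _, tn, _, cn in studies]
    d = [ta / tn - ca / cn for ta, tn, ca, cn in studies]
    print(f"stratified risk difference = {sum(wi * di for wi, di in zip(w, d)) / sum(w):+.1%}")
    ```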