Hughes MD, Pocock SJ. Stopping rules and estimation problems in clinical trials. Statistics in Medicine 12/1988; 7(12):1231-42. DOI: 10.1002/sim.4780071204.
Department of Clinical Epidemiology and General Practice, Royal Free Hospital School of Medicine, London, U.K.
Stopping rules in clinical trials can lead to bias in point estimation of the magnitude of treatment difference. A simulation exercise, based on estimation of the risk ratio in a typical post-myocardial infarction trial, examines the nature of this exaggeration of treatment effect under various group sequential plans and also under continuous naive monitoring for statistical significance. For a fixed treatment effect the median bias in group sequential design is small, but it is greatest for effects that the trial has reasonable power to detect. Bias is evidently greater in trials that stop early and is dramatic under naive monitoring for significance. Group sequential plans lead to a multimodal sampling distribution of treatment effect, which poses problems for incorporating their estimates into meta-analyses. By simulating a population of trials with treatment effects modelled by an underlying distribution of true risk ratios, a Bayesian method is proposed for assessing the plausible range of true treatment effect for any trial based on interim results. This approach is particularly useful for producing shrinkage of the unexpectedly large and imprecise observed treatment effects that arise in clinical trials that stop early. Its implications for trial design are discussed.
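The exaggeration under naive continuous monitoring that the abstract describes is easy to reproduce in a small simulation. The sketch below uses a simplified one-sample z-test with made-up parameters (true effect 0.15, looks every 10 observations), not the paper's actual post-myocardial-infarction risk-ratio setup: significance is checked at every look, and the effect estimate is recorded at the moment of stopping.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_trial(true_effect, max_n=500, look_every=10, z_crit=1.96):
    """Naive continuous monitoring: test a one-sample z-statistic
    (known sd = 1) after every `look_every` observations and stop
    at the first crossing of the nominal 5% boundary."""
    data = rng.normal(true_effect, 1.0, size=max_n)
    for n in range(look_every, max_n + 1, look_every):
        est = data[:n].mean()
        if abs(est) * np.sqrt(n) > z_crit:
            return est, n              # stopped at an interim look
    return data.mean(), max_n          # ran to the planned end

true_effect = 0.15
results = [run_trial(true_effect) for _ in range(2000)]
early = [est for est, n in results if n < 500]
print(f"trials stopping early:           {len(early)} / 2000")
print(f"mean estimate among early stops: {np.mean(early):.3f}")
print(f"true effect:                     {true_effect}")
```

On average the estimate recorded at stopping overstates the true effect, and the inflation is largest for trials that stop at the earliest looks, mirroring the pattern the paper reports.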
- "Ethical reasons play also a role in the decision to stop a trial, since there is a responsibility to minimize the number of subjects treated with an unsafe, ineffective or clearly inferior treatment. On the other hand, conducting an interim analysis may also have drawbacks, since immature results on small numbers of patients will provide imprecise or even biased point and interval estimates of the treatment effect, increasing the error in inferential process : when a clinical trial is closed because a treatment difference has been detected, the estimate of the magnitude of that difference will overstate the "true" value . Finally, trials stopped early are likely to be of small size, and as a consequence their results may lack both statistical precision and credibility, since medical community might remain sceptical, even in case of highly significant results. "
ABSTRACT: Although interim analysis approaches in clinical trials are widely known, information on current practice of planned monitoring is still scarce. Reports of studies rarely include details on the strategies for both data monitoring and interim analysis. The aim of this project is to investigate the forms of monitoring used in cancer clinical trials and in particular to gather information on the role of interim analyses in the data monitoring process of a clinical trial. This study focused on the prevalence of different types of interim analyses and data monitoring in cancer clinical trials.
The protocols investigated were those of cancer clinical trials included in the Italian registry of clinical trials from 2000 to 2005. Evaluation was restricted to protocols of randomised studies with a time-to-event endpoint, such as overall survival (OS) or progression-free survival (PFS). A template data extraction form was developed and tested in a pilot phase. Selection of relevant protocols and data extraction were performed independently by two evaluators, with differences in data assessment resolved by consensus with a third reviewer, referring back to the original protocol. Information was obtained on (a) general characteristics of the protocol; (b) disease localization and patient setting; (c) study design; (d) interim analyses; and (e) the data and safety monitoring committee (DSMC).
The analysis of the collected protocols reveals that 70.7% incorporate statistical interim analysis plans, but only 56% also have a DSMC and can be considered adequately planned. The most concerning cases involve the absence of any form of monitoring (20.0% of the protocols) and the planning of interim analyses without a DSMC (14.7%).
The results indicate that insufficient attention is still paid to the implementation of interim analyses.
Trials 07/2008; 9:46. DOI: 10.1186/1745-6215-9-46.
- "Whitehead (1986) evaluated the magnitude of the bias of maximum likelihood estimation approximately following the sequential probability ratio test and the triangular test, and proposed a bias-adjusted estimate. Hughes and Pocock (1988) did a simulation study based on an estimate of the risk ratio in a typical post-myocardial infarction trial to examine the nature of this bias for various group sequential plans. A Bayesian method, referred to as a shrunken estimate, was proposed to assess the true treatment effect based on interim results (Hughes and Pocock (1988), Pocock and Hughes (1989)). "
ABSTRACT: Group sequential tests have been widely used to control the type I error rate at a prespecified level in comparative clinical trials. It is well known that due to the optional sampling effect, conventional maximum likelihood estimates will exaggerate the treatment difference, and hence a bias is introduced. We consider a group sequentially monitored Brownian motion process. An analytical expression of the bias of the maximum likelihood estimate for the Brownian motion drift is derived based on the alpha spending method of Lan and DeMets (1983). Through this formula, the bias can be evaluated exactly by numerical integration. We study how the Brownian motion drift and various alpha spending functions and interim analysis patterns affect the bias. A bias adjusted estimator is described and its properties are investigated. The behavior of this estimator is studied for differing situations.
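For reference, the alpha spending functions this abstract builds on can be evaluated directly. The sketch below uses a common O'Brien-Fleming-type and Pocock-type pair of spending functions; deriving the interim boundaries implied by the spent alpha requires recursive numerical integration, which is omitted here.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

Z = 1.959964  # two-sided critical value for overall alpha = 0.05

def spend_obf(t):
    """O'Brien-Fleming-type spending: spends almost nothing early."""
    return 2.0 * (1.0 - phi(Z / math.sqrt(t)))

def spend_pocock(t, alpha=0.05):
    """Pocock-type spending: spreads alpha more evenly over time."""
    return alpha * math.log(1.0 + (math.e - 1.0) * t)

# Cumulative alpha spent at four equally spaced information fractions:
for t in (0.25, 0.50, 0.75, 1.00):
    print(f"t={t:.2f}  OBF-type: {spend_obf(t):.5f}  "
          f"Pocock-type: {spend_pocock(t):.5f}")
```

Both functions spend the full 0.05 by the end of the trial; the OBF-type function spends almost nothing at early looks, which is why it induces less estimation bias at early interim analyses than the Pocock-type function.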
Statistica Sinica 01/1999; 9(4).
Journal of the Royal Statistical Society, Series A (Statistics in Society) 01/1988; 151(3):419. DOI: 10.2307/2982993.