Book

Abstract

Now viewed as its own scientific discipline, clinical trial methodology encompasses the methods required for the protection of participants in a clinical trial and the methods necessary to provide a valid inference about the objective of the trial. Drawing from the authors’ courses on the subject as well as the first author’s more than 30 years working in the pharmaceutical industry, Clinical Trial Methodology emphasizes the importance of statistical thinking in clinical research and presents the methodology as a key component of clinical research. From ethical issues and sample size considerations to adaptive design procedures and statistical analysis, the book first covers the methodology that spans every clinical trial regardless of the area of application. Crucial to the generic drug industry, bioequivalence clinical trials are then discussed. The authors describe a parallel bioequivalence clinical trial of six formulations incorporating group sequential procedures that permit sample size re-estimation. The final chapters incorporate real-world case studies of clinical trials from the authors’ own experiences. These examples include a landmark Phase III clinical trial involving the treatment of duodenal ulcers and Phase III clinical trials that contributed to the first drug approved for the treatment of Alzheimer’s disease. Aided by the U.S. FDA, the U.S. National Institutes of Health, the pharmaceutical industry, and academia, the area of clinical trial methodology has evolved over the last six decades into a scientific discipline. This guide explores the processes essential for developing and conducting a quality clinical trial protocol and providing quality data collection, biostatistical analyses, and a clinical study report, all while maintaining the highest standards of ethics and excellence.
... 1. The RCT is conceived as a standalone definitive study (a study that is designed to provide a meaningful answer on its own); 2. It addresses a superiority question evaluating evidence of a difference (in either direction); 3. Adoption of a two parallel-group RCT design (typically 1:1 allocation); 4. Application of the Neyman-Pearson framework to calculate the sample size [2, 10-12]. This requires specification of: the primary outcome for which the required sample size is to be calculated; the target difference (specification varies according to outcome type); statistical parameters (significance level and power) and other component(s) of the sample size calculation (such as standard deviation (SD)). ...
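As a worked illustration of item 4 in the excerpt above, a minimal sketch of the conventional sample size calculation for a two parallel-group superiority trial with a continuous primary outcome is shown below; the target difference, SD, significance level and power are hypothetical placeholder values, not recommendations (Python):

import math
from scipy.stats import norm

def n_per_group(target_diff, sd, alpha=0.05, power=0.90):
    """Participants per group for a two-sided comparison of means in a two parallel-group trial."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the two-sided significance level
    z_beta = norm.ppf(power)            # quantile corresponding to the desired power
    n = 2 * ((z_alpha + z_beta) * sd / target_diff) ** 2
    return math.ceil(n)                 # round up to whole participants

# Hypothetical inputs: target difference of 5 points, SD of 12, 5% two-sided alpha, 90% power.
print(n_per_group(target_diff=5, sd=12))   # 122 per group

Doubling this figure (and inflating for expected dropout) gives the total recruitment target; the same structure applies to binary or time-to-event outcomes with the appropriate variance term.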
... For a superiority trial it is generally accepted that the target difference should be a clinically important difference [2, 10-12] or 'at least as large as the MCID [minimum clinically important difference]' [24]. The target difference in a conventional sample size calculation is not the minimum difference that can be statistically detected; statistical significance alone is not a sufficient consideration for attributing importance to a difference [2,12]. ...
Article
Full-text available
Central to the design of a randomised controlled trial is the calculation of the number of participants needed. This is typically achieved by specifying a target difference and calculating the corresponding sample size, which provides reassurance that the trial will have the required statistical power (at the planned statistical significance level) to identify whether a difference of a particular magnitude exists. Beyond pure statistical or scientific concerns, it is ethically imperative that an appropriate number of participants should be recruited. Despite the critical role of the target difference for the primary outcome in the design of randomised controlled trials, its determination has received surprisingly little attention. This article provides guidance on the specification of the target difference for the primary outcome in a sample size calculation for a two parallel-group randomised controlled trial with a superiority question. This work was part of the DELTA (Difference ELicitation in TriAls) project. Draft guidance was developed by the project steering and advisory groups utilising the results of the systematic review and surveys. Findings were circulated and presented to members of the combined group at a face-to-face meeting, along with a proposed outline of the guidance document structure, containing recommendations and reporting items for a trial protocol and report. The guidance was subsequently drafted and circulated for further comment before finalisation. Guidance on specification of a target difference in the primary outcome for a two parallel-group randomised controlled trial was produced. Additionally, a list of reporting items for protocols and trial reports was generated. Specification of the target difference for the primary outcome is a key component of a randomised controlled trial sample size calculation. There is a need for better justification of the target difference and reporting of its specification.
... The uncertainty region S \ D indicates characteristics of patients who may or may not benefit and for whom more evidence is needed. Patients in this region may be the focus of subsequent trials using enrichment designs (Peace and Chen, 2010). A sensitivity analysis of a₀ and b₀ ranging from 1 to 1/100,000 resulted in nearly identical credible subgroups. ...
... Adaptive designs (Berry et al., 2010) may be useful here. Additionally, enrichment designs (Peace and Chen, 2010) can shift the greatest power for detection to different areas of the covariate space. Trial design is a potential topic for later work. ...
Article
Many new experimental treatments benefit only a subset of the population. Identifying the baseline covariate profiles of patients who benefit from such a treatment, rather than determining whether or not the treatment has a population-level effect, can substantially lessen the risk in undertaking a clinical trial and expose fewer patients to treatments that do not benefit them. The standard analyses for identifying patient subgroups that benefit from an experimental treatment either do not account for multiplicity, or focus on testing for the presence of treatment-covariate interactions rather than the resulting individualized treatment effects. We propose a Bayesian credible subgroups method to identify two bounding subgroups for the benefiting subgroup: one for which it is likely that all members simultaneously have a treatment effect exceeding a specified threshold, and another for which it is likely that no members do. We examine frequentist properties of the credible subgroups method via simulations and illustrate the approach using data from an Alzheimer's disease treatment trial. We conclude with a discussion of the advantages and limitations of this approach to identifying patients for whom the treatment is beneficial.
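A highly simplified sketch of the credible-subgroups idea follows. It assumes posterior draws of covariate-specific treatment effects are already available (for example, from a Bayesian regression with treatment-covariate interactions), and the greedy construction, threshold and credibility level are illustrative simplifications for exposition, not the paper's exact algorithm (Python):

import numpy as np

def credible_subgroups(effect_draws, threshold=0.0, level=0.80):
    """effect_draws: posterior draws of personalised treatment effects,
    shape (n_draws, n_profiles); returns (D, S) with D a subset of S."""
    n_draws, n_profiles = effect_draws.shape
    pointwise = (effect_draws > threshold).mean(axis=0)   # P(benefit) per profile
    order = np.argsort(-pointwise)                        # most to least promising

    # Exclusive subgroup D: grow while it remains likely (>= level) that ALL
    # members simultaneously have treatment effects above the threshold.
    D = []
    for j in order:
        joint = np.all(effect_draws[:, D + [int(j)]] > threshold, axis=1).mean()
        if joint >= level:
            D.append(int(j))
        else:
            break

    # Inclusive subgroup S: exclude only profiles for which it is likely that
    # none of the excluded profiles exceed the threshold.
    excluded = []
    for j in order[::-1]:                                 # least promising first
        joint = np.all(effect_draws[:, excluded + [int(j)]] <= threshold, axis=1).mean()
        if joint >= level:
            excluded.append(int(j))
        else:
            break
    S = [j for j in range(n_profiles) if j not in excluded]
    return sorted(D), sorted(S)   # the uncertainty region is S \ D

Profiles in S \ D correspond to the uncertainty region mentioned in the excerpts above: patients who may or may not benefit and who could be the focus of a subsequent enrichment trial.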
... Specific training, legislative and governance requirements must be met by any researcher or "Trialist" conducting clinical trials [10]. Clinical Trial Networks provide leadership, funding, education and support for trials and the teams delivering them. In the UK the largest group supporting fellowships in clinical trials is the NIHR, with competition for NIHR funding and fellowships consistently high. ...
... Sequential methods require researchers to specify in advance the number of interim analyses as well as the number of participants for each stage (Schönbrodt, Wagenmakers, Zehetleitner, & Perugini, 2015). Peace and Chen (2011) explained that the sample size at each stage is simply the total sample size of a nonsequential study divided by the number of planned stages [21]. As they emphasized, sequential methods still require specification of a total (i.e., maximum) sample size. ...
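A minimal sketch of that layout follows: the maximum sample size is fixed first and split evenly across the planned stages, with an interim test after each stage. The Pocock-style boundary constant is an illustrative value for a three-stage, two-sided 0.05 design, not a computed boundary (Python):

import math

def stage_sizes(total_n, n_stages):
    """Split a pre-specified maximum sample size evenly across the planned stages."""
    per_stage = math.ceil(total_n / n_stages)
    return [per_stage] * n_stages

def interim_decision(z_statistic, boundary=2.29):
    """Pocock-type rule: stop early for efficacy if the interim z-statistic crosses a
    constant boundary; otherwise continue to the next stage. The boundary value is illustrative."""
    return "stop for efficacy" if abs(z_statistic) >= boundary else "continue"

print(stage_sizes(total_n=120, n_stages=3))   # [40, 40, 40]
print(interim_decision(z_statistic=2.5))      # stop for efficacy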
Article
The sample size necessary to obtain a desired level of statistical power depends in part on the population value of the effect size, which is, by definition, unknown. A common approach to sample-size planning uses the sample effect size from a prior study as an estimate of the population value of the effect to be detected in the future study. Although this strategy is intuitively appealing, effect-size estimates, taken at face value, are typically not accurate estimates of the population effect size because of publication bias and uncertainty. We show that the use of this approach often results in underpowered studies, sometimes to an alarming degree. We present an alternative approach that adjusts sample effect sizes for bias and uncertainty, and we demonstrate its effectiveness for several experimental designs. Furthermore, we discuss an open-source R package, BUCSS, and user-friendly Web applications that we have made available to researchers so that they can easily implement our suggested methods.
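The point about face-value effect sizes can be illustrated with a small simulation: only pilot studies that happen to reach significance are "published", so the average published effect overstates the true one, and a sample size planned from it falls short. All numerical values are arbitrary assumptions, and this illustrates the problem only; it is not the BUCSS adjustment itself (Python):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d, n_pilot = 0.30, 30          # assumed true standardized effect and per-group pilot size

published_d = []
while len(published_d) < 2000:
    a = rng.normal(true_d, 1.0, n_pilot)
    b = rng.normal(0.0, 1.0, n_pilot)
    res = stats.ttest_ind(a, b)
    if res.pvalue < 0.05 and res.statistic > 0:   # only "significant" pilot results get published
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        published_d.append((a.mean() - b.mean()) / pooled_sd)

naive_d = float(np.mean(published_d))

def n_per_group(d, alpha=0.05, power=0.80):
    """Conventional two-group sample size for a standardized difference d."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return int(np.ceil(2 * (z / d) ** 2))

print(f"true d = {true_d:.2f}, mean published d = {naive_d:.2f}")
print("n per group planned from the published d:", n_per_group(naive_d))
print("n per group actually needed for the true d:", n_per_group(true_d))

With these assumed values, the naive plan calls for far fewer participants per group than the true effect requires, which is the underpowering the authors describe.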
... The level of significance was set at 0.05. Since the power to test for interaction was lower than the power to test for main effects, the significance level was set at 0.10 for interaction terms to increase sensitivity, although at the cost of increasing the risk of type I error (false positive) [38]. The statistical analyses were carried out with Stata software, version 10.0. ...
Article
Full-text available
The objective of the present study was to investigate whether dietary patterns are associated with excess weight and abdominal obesity among young adults (23-25 years). A cross-sectional study was conducted on 2061 participants of a birth cohort from Ribeirão Preto, Brazil, started in 1978-1979. Twenty-seven subjects with caloric intake outside the ±3 standard deviation range were excluded, leaving 2034 individuals. Excess weight was defined as body mass index (BMI ≥ 25 kg/m²), abdominal obesity as waist circumference (WC > 80 cm for women; >90 cm for men) and waist/hip ratio (WHR > 0.85 for women; >0.90 for men). Poisson regression with robust variance adjustment was used to estimate the prevalence ratio (PR) adjusted for socio-demographic and lifestyle variables. Four dietary patterns were identified by principal component analysis: healthy, traditional Brazilian, bar and energy dense. In the adjusted analysis, the bar pattern was associated with a higher prevalence of excess weight (PR 1.46; 95% CI 1.23-1.73) and abdominal obesity based on WHR (PR 2.19; 95% CI 1.59-3.01). The energy-dense pattern was associated with a lower prevalence of excess weight (PR 0.73; 95% CI 0.61-0.88). Men with greater adherence to the traditional Brazilian pattern showed a lower prevalence of excess weight (PR 0.65; 95% CI 0.51-0.82), but no association was found for women. There was no association between the healthy pattern and excess weight/abdominal obesity. In this sample, the bar pattern was associated with higher prevalences of excess weight and abdominal obesity, while the energy-dense (for both genders) and traditional Brazilian (only for men) patterns were associated with lower prevalences of excess weight.
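The analysis described (Poisson regression with robust variance to estimate adjusted prevalence ratios) was run in Stata; a rough equivalent in Python with statsmodels is sketched below, using made-up column names and toy data purely to show the model form:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy stand-in for the cohort: one row per participant with a 0/1 outcome,
# a dietary-pattern score and an adjustment covariate (names are hypothetical).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "excess_weight": rng.binomial(1, 0.3, 500),
    "bar_pattern": rng.normal(size=500),
    "age": rng.normal(24, 1, 500),
})

# Poisson regression with robust (HC1 sandwich) variance for a binary outcome:
# exponentiated coefficients are interpreted as adjusted prevalence ratios.
fit = smf.glm("excess_weight ~ bar_pattern + age", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(fit.params))       # prevalence ratios
print(np.exp(fit.conf_int()))   # 95% confidence intervals on the PR scale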
... This chapter does a great job summarizing the major phases of clinical trials, the overall clinical development plan, and major aspects of clinical trials that require statistical input. Readers who wish to learn more details about the biostatistics considerations for clinical trials may also find a book-length treatment of this topic in Peace and Chen (2011). ...
Chapter
Observational studies cannot demonstrate causality. To demonstrate cause and effect requires an intervention study in which consumption of a nutrient, food or diet is altered in a controlled way and the effect on selected outcomes is measured. The study design process should include careful consideration of the hypothesis, duration, intervention, amount and mode of delivery, control and blinding, primary and secondary outcome measures (including assessment of background diet), statistical power, eligibility criteria, data-collection methodology and ways of measuring and encouraging compliance. This chapter examines the different types of intervention study and then outlines some of the key factors to consider when planning such studies. It provides an overview of the major factors involved in the planning, conducting and reporting of intervention studies and highlights that local ethical approval must be obtained, research governance procedures followed, and findings interpreted appropriately.
Article
Full-text available
The randomised controlled trial (RCT) is widely considered to be the gold standard study for comparing the effectiveness of health interventions. Central to the design and validity of an RCT is a calculation of the number of participants needed (the sample size). The value used to determine the sample size can be considered the 'target difference'. From both a scientific and an ethical standpoint, selecting an appropriate target difference is of crucial importance. Determination of the target difference, as opposed to statistical approaches to calculating the sample size, has been greatly neglected; although a variety of approaches have been proposed, the current state of the evidence is unclear. The aim was to provide an overview of the current evidence regarding specifying the target difference in an RCT sample size calculation. The specific objectives were to conduct a systematic review of methods for specifying a target difference; to evaluate current practice by surveying triallists; to develop guidance on specifying the target difference in an RCT; and to identify future research needs.

The biomedical and social science databases searched were MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, EMBASE, Cochrane Central Register of Controlled Trials (CENTRAL), Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, Education Resources Information Center (ERIC) and Scopus for in-press publications. All were searched from 1966 or the earliest date of the database coverage, and searches were undertaken between November 2010 and January 2011.

There were three interlinked components: (1) systematic review of methods for specifying a target difference for RCTs - a comprehensive search strategy involving an electronic literature search of biomedical and some non-biomedical databases and clinical trials textbooks was carried out; (2) identification of current trial practice using two surveys of triallists - members of the Society for Clinical Trials (SCT) were invited to complete an online survey and respondents were asked about their awareness and use of, and willingness to recommend, methods; one individual per triallist group [UK Clinical Research Collaboration (UKCRC)-registered Clinical Trials Units (CTUs), Medical Research Council (MRC) UK Hubs for Trials Methodology Research and National Institute for Health Research (NIHR) UK Research Design Services (RDS)] was invited to complete a survey; (3) production of a structured guidance document to aid the design of future trials - the draft guidance was developed by the project steering and advisory groups utilising the results of the systematic review and surveys.

The design was a methodological review incorporating electronic searches, review of books and guidelines, two surveys of experts (the membership of an international society and UK- and Ireland-based triallists) and development of guidance. The two surveys were sent out to the membership of the SCT and to UK- and Ireland-based triallists. The review focused on methods for specifying the target difference in an RCT and was not restricted to any type of intervention or condition. The search identified 11,485 potentially relevant studies. In total, 1434 were selected for full-text assessment and 777 were included in the review.
Seven methods to specify the target difference for an RCT were identified: anchor, distribution, health economic, opinion-seeking, pilot study, review of evidence base (RoEB) and standardised effect size (SES), each having important variations in implementation. A total of 216 of the included studies used more than one method. A total of 180 (15%) responses to the SCT survey were received, representing 13 countries. Awareness of methods ranged from 38% (n = 69) for the health economic method to 90% (n = 162) for the pilot study method. Of the 61 surveys sent out to UK triallist groups, 34 (56%) responses were received. Awareness ranged from 97% (n = 33) for the RoEB and pilot study methods to only 41% (n = 14) for the distribution method. Based on the most recent trial, all bar three groups (91%, n = 30) used a formal method. Guidance was developed on the use of each method and the reporting of the sample size calculation in a trial protocol and results paper. There is a clear need for greater use of formal methods to determine the target difference and better reporting of its specification. Raising the standard of RCT sample size calculations and the corresponding reporting of them would aid health professionals, patients, researchers and funders in judging the strength of the evidence and ensuring better use of scarce resources. The work was funded by the Medical Research Council UK and the National Institute for Health Research Joint Methodology Research programme.