Ian R White’s research while affiliated with MRC Clinical Trials Unit at UCL and other places


Publications (569)


Applying the Estimands Framework to Non‐Inferiority Trials: Guidance on Choice of Hypothetical Estimands for Non‐Adherence and Comparison of Estimation Methods
  • Article

February 2025 · Statistics in Medicine

Katy E. Morgan · Ian R. White · [...]
A common concern in non-inferiority (NI) trials is that non-adherence, due for example to poor study conduct, can make treatment arms artificially similar. Because intention-to-treat analyses can be anti-conservative in this situation, per-protocol analyses are sometimes recommended. However, such advice does not consider the estimands framework, nor the risk of bias from per-protocol analyses. We therefore sought to update this guidance using the estimands framework, and to compare estimators that improve on the performance of per-protocol analyses. We argue that the main threat to the validity of NI trials is the occurrence of "trial-specific" intercurrent events (IEs), that is, IEs which occur in a trial setting but would not occur in practice. To guard against erroneous conclusions of non-inferiority, we suggest that an estimand using a hypothetical strategy for trial-specific IEs should be employed, with the handling of other, non-trial-specific IEs chosen based on clinical considerations. We provide an overview of estimators that could be used to estimate a hypothetical estimand, including inverse probability weighting (IPW) and two instrumental variable approaches (one using an informative Bayesian prior on the effect of standard treatment, and one using a treatment-by-covariate interaction as an instrument). We compare them using simulation in the setting of all-or-nothing compliance in two active treatment arms, and conclude that both IPW and the instrumental variable method using a Bayesian prior are potentially useful approaches, with the choice between them depending on which assumptions are most plausible for a given trial.
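As a rough illustration of the IPW idea the abstract describes (this is a toy sketch, not the paper's simulation study): with all-or-nothing compliance, adherers can be reweighted by the inverse of their estimated probability of adhering given a prognostic factor, so the adherent subgroup again resembles the randomised population. All variable names and the data-generating model below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical set-up: a binary prognostic factor x drives both adherence and outcome
x = rng.binomial(1, 0.5, n)
arm = rng.binomial(1, 0.5, n)                         # randomised arm
# All-or-nothing compliance: control patients all adhere; experimental-arm
# adherence depends on x, so adherers are a selected subgroup
p_adh = np.where(arm == 1, np.where(x == 1, 0.9, 0.5), 1.0)
adhere = rng.binomial(1, p_adh)
y = 1.0 * arm + 2.0 * x + rng.normal(0, 1, n)         # true treatment effect = 1.0

# Naive per-protocol contrast among adherers is confounded by x
keep = adhere == 1
pp = y[keep & (arm == 1)].mean() - y[keep & (arm == 0)].mean()

# IPW: weight each adherer by 1 / P(adhere | arm, x), estimated per stratum
p_hat = np.zeros(n)
for a in (0, 1):
    for v in (0, 1):
        s = (arm == a) & (x == v)
        p_hat[s] = adhere[s].mean()
w = keep / p_hat


def wmean(v, wt):
    return np.sum(v * wt) / np.sum(wt)


ipw = wmean(y[arm == 1], w[arm == 1]) - wmean(y[arm == 0], w[arm == 0])
print(round(pp, 2), round(ipw, 2))  # PP is biased upward; IPW is close to 1.0
```

Here the per-protocol estimate is inflated because high-x patients are over-represented among experimental-arm adherers, while the weighted contrast recovers the hypothetical "no non-adherence" effect; the paper's instrumental variable alternatives address settings where the no-unmeasured-confounding assumption behind these weights is implausible.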


Sensitivity analysis for multivariable missing data using multiple imputation: a tutorial
  • Preprint
  • File available

February 2025

Figure 1. Schematic of a roadmap for conducting a NARFCS sensitivity analysis
Summary of variables relevant to the motivating example (n=4882)
Multiple imputation is a popular method for handling missing data, with fully conditional specification (FCS) being one of the predominant imputation approaches for multivariable missingness. Unbiased estimation with standard implementations of multiple imputation depends on assumptions concerning the missingness mechanism (e.g. that data are "missing at random"). The plausibility of these assumptions can only be assessed using subject-matter knowledge, and not data alone. It is therefore important to perform sensitivity analyses to explore the robustness of results to violations of these assumptions (e.g. if the data are in fact "missing not at random"). In this tutorial, we provide a roadmap for conducting sensitivity analysis using the Not at Random Fully Conditional Specification (NARFCS) procedure for multivariate imputation. Using a case study from the Longitudinal Study of Australian Children, we work through the steps involved, from assessing the need to perform the sensitivity analysis, and specifying the NARFCS models and sensitivity parameters, through to implementing NARFCS using FCS procedures in R and Stata.
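The core NARFCS idea is delta adjustment: impute under a missing-at-random model, then shift the imputed values by a user-specified sensitivity parameter to represent departure towards "missing not at random". The single-variable sketch below (hypothetical data and names; real NARFCS applies the offset within each conditional model of a multivariable FCS cycle in R or Stata) shows the mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical example: y depends on a fully observed x; ~30% of y is missing
x = rng.normal(0, 1, n)
y = 2.0 + 1.5 * x + rng.normal(0, 1, n)
miss = rng.random(n) < 0.3

# Step 1: fit the MAR imputation model y ~ x on the observed cases
slope, intercept = np.polyfit(x[~miss], y[~miss], 1)

# Step 2: impute, shifting imputed values by a sensitivity parameter delta.
# delta = 0 reproduces the MAR analysis; delta = -1 encodes the MNAR belief
# that missing y values are 1 unit lower than the MAR model predicts, given x
delta = -1.0
y_imp = y.copy()
y_imp[miss] = intercept + slope * x[miss] + delta + rng.normal(0, 1, miss.sum())

print(round(y[~miss].mean(), 2), round(y_imp.mean(), 2))
```

In a full sensitivity analysis this would be repeated over a plausible range of delta values (elicited from subject-matter experts) and across multiple imputations, examining how the substantive estimate changes.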


New Methods for Two-Stage Treatment Switching Estimation

February 2025 · Pharmaceutical Statistics

Treatment switching is common in randomized trials of oncology treatments. For example, control group patients may receive the experimental treatment as a subsequent therapy. One possible estimand is the effect of trial treatment if this type of switching had not occurred. Two-stage estimation is an established approach for estimating this estimand. We argue that other estimands of interest instead describe the effect of trial treatments if the proportion of patients who switched had been different, and we give precise definitions of such estimands. Motivating estimands using real-world data facilitates decision-making in universal health care systems. Focusing on estimation, we show that an alternative choice of secondary baseline, the time of first subsequent treatment, is easily defined and widely applicable, and makes alternative estimands amenable to two-stage estimation. We develop methodology using propensity scores to adjust for confounding at a secondary baseline, and a new quantile matching technique that can be used to implement any parametric form of the post-secondary-baseline survival model. Our methodology was motivated by a recent immuno-oncology trial in which a substantial proportion of control group patients subsequently received a form of immunotherapy.


Reference‐Based Multiple Imputation for Longitudinal Binary Data

January 2025 · Statistics in Medicine

Introduction: In clinical trials, a treatment policy strategy is often used to handle treatment nonadherence. However, estimation in this context is complicated when data are missing after treatment deviation. Reference-based multiple imputation has been developed for the analysis of a longitudinal continuous outcome in this setting. It has been shown that Rubin's variance estimator ensures that the proportional loss of information due to missing data is approximately the same as that seen in analysis under the missing-at-random assumption for a broad range of commonly used reference-based alternatives; that is, it is "information anchored". However, the best way to implement reference-based multiple imputation for longitudinal binary data is unclear.

Methods: We formulate and describe two algorithms for implementing reference-based multiple imputation for longitudinal binary outcome data using: (i) joint modeling with the multivariate normal distribution and an adaptive rounding algorithm, and (ii) joint modeling with a latent multivariate normal model. A simulation study was performed to compare the properties of the two methods.

Results: Across the broad range of scenarios evaluated, the latent normal approach typically gave slightly less bias; both methods provided approximately information-anchored inference. The advantage of the latent normal approach was more marked with a rarer outcome. However, both approaches may not perform satisfactorily if the outcome prevalence is very rare.

Discussion: Reference-based multiple imputation provides a practical, information-anchored tool for inference about the treatment effect for a treatment policy estimand with a longitudinal binary outcome. The latent multivariate normal model is the preferred implementation.
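The "adaptive rounding" step in the first algorithm converts normally imputed continuous values back to binary. One common adaptive rule (usually attributed to Bernaards and colleagues; treated here as an assumption about the method used) picks the threshold so that the rounded prevalence matches the mean of the continuous imputations:

```python
import numpy as np
from statistics import NormalDist


def adaptive_round(cont):
    """Round continuous imputed values to 0/1 using the adaptive threshold
    c = m - Phi^{-1}(m) * sqrt(m(1-m)), where m is the continuous mean."""
    cont = np.asarray(cont, dtype=float)
    m = float(np.clip(cont.mean(), 1e-6, 1 - 1e-6))  # guard the quantile
    cut = m - NormalDist().inv_cdf(m) * np.sqrt(m * (1 - m))
    return (cont >= cut).astype(int)


rng = np.random.default_rng(2)
# Hypothetical continuous imputations for a binary outcome with ~20% prevalence
z = rng.normal(0.2, np.sqrt(0.2 * 0.8), 10_000)
b = adaptive_round(z)
print(round(b.mean(), 2))
```

Under the normal approximation the rounded prevalence tracks the continuous mean, which is why the approach works reasonably for common outcomes but, as the abstract notes, degrades when the outcome is very rare and the normal approximation to a Bernoulli breaks down.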


Multiple Imputation for Longitudinal Data: A Tutorial

January 2025 · Statistics in Medicine

Longitudinal studies are frequently used in medical research and involve collecting repeated measures on individuals over time. Observations from the same individual are invariably correlated and thus an analytic approach that accounts for this clustering by individual is required. While almost all research suffers from missing data, this can be particularly problematic in longitudinal studies as participation often becomes harder to maintain over time. Multiple imputation (MI) is widely used to handle missing data in such studies. When using MI, it is important that the imputation model is compatible with the proposed analysis model. In a longitudinal analysis, this implies that the clustering considered in the analysis model should be reflected in the imputation process. Several MI approaches have been proposed to impute incomplete longitudinal data, such as treating repeated measurements of the same variable as distinct variables or using generalized linear mixed imputation models. However, the uptake of these methods has been limited, as they require additional data manipulation and use of advanced imputation procedures. In this tutorial, we review the available MI approaches that can be used for handling incomplete longitudinal data, including where individuals are clustered within higher‐level clusters. We illustrate implementation with replicable R and Stata code using a case study from the Childhood to Adolescence Transition Study.
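The simplest approach mentioned above, treating repeated measurements of the same variable as distinct variables, amounts to reshaping the data from long to wide before imputation. A minimal sketch with hypothetical data (the tutorial itself works in R and Stata):

```python
import numpy as np
import pandas as pd

# Hypothetical long-format data: one row per individual per wave
long = pd.DataFrame({
    "id":   [1, 1, 1, 2, 2, 2],
    "wave": [1, 2, 3, 1, 2, 3],
    "bmi":  [17.0, 18.2, np.nan, 16.5, np.nan, 19.1],
})

# Reshape to wide so each wave becomes its own column; a standard MI procedure
# can then impute bmi_2 from bmi_1 and bmi_3, preserving the within-individual
# correlation in the imputation model
wide = long.pivot(index="id", columns="wave", values="bmi")
wide.columns = [f"bmi_{w}" for w in wide.columns]
print(wide)
```

This works when measurement waves are common to all individuals; with irregular measurement times or higher-level clustering, the mixed-model imputation approaches reviewed in the tutorial are needed instead.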


Adjusting for switches to multiple treatments: Should switches be handled separately or combined?

January 2025

Treatment switching is common in randomised controlled trials (RCTs). Participants may switch onto a variety of different treatments, all of which may have different treatment effects. Adjustment analyses that target hypothetical estimands – estimating outcomes that would have been observed in the absence of treatment switching – have focused primarily on a single type of switch. In this study, we assess the performance of applications of inverse probability of censoring weights (IPCW) and two-stage estimation (TSE) which adjust for multiple switches by either (i) adjusting for each type of switching separately (‘treatments separate’) or (ii) adjusting for switches combined without differentiating between switched-to treatments (‘treatments combined’). We simulate 48 scenarios in which RCT participants may switch to multiple treatments. Switch proportions, treatment effects, number of switched-to treatments and censoring proportions were varied. Method performance measures included mean percentage bias in restricted mean survival time and the frequency of model convergence. Similar levels of bias were produced by treatments combined and treatments separate in both TSE and IPCW applications. In the scenarios examined, there was no demonstrable advantage associated with adjusting for each type of switch separately, compared with adjusting for all switches together.


A Meta-Analysis of Levofloxacin for Contacts of Multidrug-Resistant Tuberculosis

December 2024 · NEJM Evidence

Background: Data from randomized trials evaluating the effectiveness of tuberculosis (TB) preventive treatment for contacts of multidrug-resistant (MDR) TB are lacking. Two recently published randomized trials that did not achieve statistical significance provide the opportunity for a meta-analysis.

Methods: We conducted combined analyses of two phase 3 trials of levofloxacin MDR-TB preventive treatment: the Levofloxacin for the Prevention of Multidrug-Resistant Tuberculosis (VQUIN) trial and the Levofloxacin preventive treatment in children exposed to MDR-TB (TB-CHAMP) trial. Following MDR-TB household exposure, VQUIN enrolled mainly adults in Vietnam; TB-CHAMP enrolled mainly young children in South Africa. Random assignment in both trials was 1:1 at the household level to daily levofloxacin or placebo for 6 months. The primary outcome was incident TB by 54 weeks. We estimated the overall treatment effect using individual participant data meta-analysis.

Results: The VQUIN trial (n=2041) randomly assigned 1023 participants to levofloxacin and 1018 to placebo; TB-CHAMP (n=922) assigned 453 participants to levofloxacin and 469 to placebo. Median age was 40 years (interquartile range 28 to 52) in VQUIN and 2.8 years (interquartile range 1.3 to 4.2) in TB-CHAMP. Overall, 8 levofloxacin-group participants developed TB by 54 weeks versus 21 placebo-group participants; the relative difference in cumulative incidence was 0.41 (95% confidence interval [CI] 0.18 to 0.92; P=0.03). No association was observed between levofloxacin and grade 3 or above adverse events (risk ratio 1.07, 95% CI 0.70 to 1.65). Musculoskeletal events of any grade occurred more frequently in the levofloxacin group (risk ratio 6.36, 95% CI 4.30 to 9.42), but not among children under 10 years of age. Overall, four levofloxacin-group participants and three placebo-group participants had grade 3 events.

Conclusions: In this meta-analysis of two randomized trials, levofloxacin was associated with a 60% relative reduction in TB incidence among adult and child household MDR-TB contacts, but with an increased risk of musculoskeletal adverse events. (Funded by the Australian National Health and Medical Research Council, UNITAID, and others.)
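For intuition only: a naive risk ratio from the pooled 2×2 table of event counts in the abstract can be computed as below. This ignores the household-level randomisation and the IPD meta-analysis model the authors actually used, so it approximates but does not reproduce the published estimate of 0.41 (95% CI 0.18 to 0.92).

```python
import math

# Event counts pooled across VQUIN and TB-CHAMP, taken from the abstract
ev_levo, n_levo = 8, 1023 + 453
ev_plac, n_plac = 21, 1018 + 469

rr = (ev_levo / n_levo) / (ev_plac / n_plac)
# Large-sample 95% CI on the log risk-ratio scale
se = math.sqrt(1 / ev_levo - 1 / n_levo + 1 / ev_plac - 1 / n_plac)
lo, hi = (math.exp(math.log(rr) + s * 1.96 * se) for s in (-1, 1))
print(round(rr, 2), round(lo, 2), round(hi, 2))
# → 0.38 0.17 0.86 (cf. the published clustered estimate: 0.41, 0.18 to 0.92)
```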


Is inverse probability of censoring weighting a safer choice than per-protocol analysis in clinical trials?

December 2024

Deviation from the treatment strategy under investigation occurs in many clinical trials; we term this intervention deviation. Per-protocol analyses are widely adopted to estimate a hypothetical estimand in which intervention deviation does not occur. Per-protocol analysis by censoring is prone to selection bias when intervention deviation is associated with time-varying confounders that also influence counterfactual outcomes. This can be corrected by inverse probability of censoring weighting, which gives extra weight to uncensored individuals who have prognostic characteristics similar to those of censored individuals. These weights are computed by modelling selected covariates. Inverse probability of censoring weighting relies on the no-unmeasured-confounding assumption, whose plausibility is not statistically testable. Suboptimal implementation of inverse probability of censoring weighting that violates this assumption will lead to bias. In a simulation study, we evaluated the performance of per-protocol analysis and of inverse probability of censoring weighting under different implementations, to explore whether inverse probability of censoring weighting is a safe alternative to per-protocol analysis. Scenarios were designed to vary intervention deviation in one or both arms, with different prevalences, correlations between two confounders, effects of each confounder, and sample sizes. Results show that inverse probability of censoring weighting with different combinations of covariates outperforms per-protocol analysis in most scenarios, except for an unusual case in which selection bias caused by the two confounders acts in two directions and "cancels out".
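The reweighting mechanics can be sketched in a deliberately simplified single-arm example (hypothetical data, one baseline confounder, no time-varying structure): patients who deviate are artificially censored, and stabilised inverse probability of censoring weights restore the censored-out patients' contribution.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30_000

# Hypothetical sketch: L is a baseline prognostic factor; C = 1 means the
# patient deviated from the protocol and is artificially censored
L = rng.binomial(1, 0.4, n)
p_dev = np.where(L == 1, 0.5, 0.1)          # poor-prognosis patients deviate more
C = rng.binomial(1, p_dev)
y = 3.0 - 2.0 * L + rng.normal(0, 1, n)     # outcome depends on L

# Unweighted mean among the uncensored is biased: censoring selects on L
naive = y[C == 0].mean()

# Stabilised IPC weights: P(C=0) / P(C=0 | L), both estimated from the data
p_unc = 1 - np.array([C[L == 0].mean(), C[L == 1].mean()])[L]
w = (C == 0) * (1 - C.mean()) / p_unc
ipcw = np.sum(y * w) / np.sum(w)

print(round(naive, 2), round(ipcw, 2))  # naive is too high; IPCW is near 2.2
```

If an outcome-relevant confounder were omitted from the weight model, the weighted estimate would stay biased, which is the "suboptimal implementation" risk the abstract highlights.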



Estimation of Treatment Policy Estimands for Continuous Outcomes Using Off-Treatment Sequential Multiple Imputation

August 2024 · Pharmaceutical Statistics

The estimands framework outlined in ICH E9 (R1) describes the components needed to precisely define the effects to be estimated in clinical trials, including how post-baseline 'intercurrent' events (IEs) are to be handled. In late-stage clinical trials, it is common to handle IEs such as treatment discontinuation using the treatment policy strategy and to target the treatment effect on outcomes regardless of treatment discontinuation. For continuous repeated measures, this type of effect is often estimated from all observed data before and after discontinuation using either a mixed model for repeated measures (MMRM) or multiple imputation (MI) to handle any missing data. In their basic forms, both estimation methods ignore treatment discontinuation in the analysis and may therefore be biased if patient outcomes after treatment discontinuation differ from those of patients still on treatment, particularly as missing data are more common for patients who have discontinued treatment. We therefore propose and evaluate a set of MI models that can accommodate differences between outcomes before and after treatment discontinuation. The models are evaluated in the context of planning a Phase 3 trial for a respiratory disease. We show that analyses ignoring treatment discontinuation can introduce substantial bias and can sometimes underestimate variability. We also show that some of the proposed MI models can successfully correct the bias, but inevitably increase variance. We conclude that some of the proposed MI models are preferable to the traditional analysis ignoring treatment discontinuation, but the precise choice of MI model will likely depend on the trial design, the disease of interest, and the amount of observed and missing data following treatment discontinuation.


Citations (56)


... This recommendation is based on evidence from the two randomised controlled trials CHAMP and V-QUIN, a systematic review of studies on the use of preventive treatment for MDR/RR tuberculosis and studies on the programmatic feasibility and acceptability of this regimen. While the TB-CHAMP [9] and V-QUIN trials [10] did not achieve statistical significance, a meta-analysis of these trials showed a 60% relative reduction in TB incidence among adult and child household MDR tuberculosis contacts [11]. In addition to these studies, the ongoing randomised prospective PHOENIx trial (NCT03568383) aims to evaluate the efficacy of tuberculosis preventive treatment with delamanid or isoniazid in preventing tuberculosis disease in high-risk household contacts of adults with MDR/RR-tuberculosis. ...

Reference:

High risk of drug-resistant tuberculosis in IGRA-negative contacts: should preventive treatment be considered?
A Meta-Analysis of Levofloxacin for Contacts of Multidrug-Resistant Tuberculosis

NEJM Evidence

... This study included 24 articles, but several limitations could affect the reliability of the results. First, allocation concealment is critical for preventing selection bias, and a lack of transparency in this regard may reduce internal validity (Sharpe et al., 2024). Additionally, blinding was uncertain in some studies, introducing potential performance and detection biases, which could lead to an overestimation of the treatment effects (Choy et al., 2024). ...

Proactive integrated consultation-liaison psychiatry and time spent in hospital by older medical inpatients in England (The HOME Study): a multicentre, parallel-group, randomised controlled trial
  • Citing Article
  • August 2024

The Lancet Psychiatry

... ICH E9 (R1) provided minimal guidance on estimation but did emphasise the importance of assessing the sensitivity of conclusions to statistical assumptions. Regression modelling and missing data approaches such as multiple imputation and inverse probability weighting may be useful but given the complex definitions of what effect an ICH estimand targets, estimation is an evolving field with many opportunities for novel methods 16,17,18,19,20,21,22,23,24 . In particular, some estimators from causal inference are being explored 25 . ...

Estimation of Treatment Policy Estimands for Continuous Outcomes Using Off-Treatment Sequential Multiple Imputation
  • Citing Article
  • August 2024

Pharmaceutical Statistics

... Responses were recorded on a binary scale (Yes/No) to facilitate straightforward statistical analysis. [19] In addition to the survey, qualitative data were gathered through in-depth interviews with selected participants to gain deeper insights into the nuances of social barriers and the effectiveness of inclusive practices. The interviews were semi-structured, allowing for flexibility in exploring specific themes that emerged from the survey data. ...

Determining sample size in a personalized randomized controlled (PRACTical) trial
  • Citing Article
  • July 2024

Statistics in Medicine

... It can be speculated whether the increased prevalence rates were due to the societal and health-related consequences of the pandemic. In fact, several epidemiological studies reported an initial increase of depression and anxiety symptoms in the general population, particularly in the first two months of the pandemic [30]. After a peak in April and May 2020, however, depression and anxiety symptoms decreased again although the prevalence of mental health problems remained high during the entire pandemic [31]. ...

Changes in the prevalence of mental health problems during the first year of the pandemic: a systematic review and dose-response meta-analysis

BMJ Mental Health

... While we attempted to account for missing data in the explanatory variables and outcomes in the estimation procedure and the number of individuals excluded due to missingness in the exposure is small, missingness still introduces the possibility of bias into our estimates. Further, the use of a missingness indicator to handle missing data in the explanatory variables may introduce its own set of biases, although it likely offers less biased results than complete case analysis, and the assumptions for multiple imputations were not satisfied [59]. ...

Handling missing data when estimating causal effects with Targeted Maximum Likelihood Estimation
  • Citing Article
  • February 2024

American Journal of Epidemiology

... Regarding deep diving, the correlation between CWT and CNF (deep diving without fins) is 0.711 (P < 0.001). As the literature states that variables with correlations above 0.40 can be used to improve the imputation [9], it is valid to add the imputation procedure. Additionally, since STA, DYN, and CWT are highly correlated, a missing value in one discipline can be estimated by knowing the others. ...

A comparison of strategies for selecting auxiliary variables for multiple imputation
  • Citing Article
  • January 2024

Biometrical Journal

... Although working without missing values is convenient, it only produces reliable estimates in limited situations where missing values occur completely at random and only for the dependent variable. In other situations, this results in severely biased estimates, not to mention the potential waste of information in the omitted data and the low practical application of the obtained results [22]. This is of particular importance in disease prediction, where metrics obtained from the training datasets need to be applied in real-life situations [20]. ...

Missing data: Issues, concepts, methods
  • Citing Article
  • February 2024

Seminars in Orthodontics

... In contrast, standard meta-analysis allows for the comparison of results only from studies that have been directly compared in the literature. Thus, NMA can compare more than two interventions simultaneously and evaluate all treatment approaches used in a particular field at once, even if they have not been directly compared, thereby overcoming a significant limitation of standard meta-analysis [5,23,24,67,84]. These features of NMA can also facilitate timely decision-making and recommendations, saving time compared to standard meta-analysis and reducing research waste [85,86]. ...

Methodological review of NMA bias concepts provides groundwork for the development of a list of concepts for potential inclusion in a new risk of bias tool for network meta-analysis (RoB NMA Tool)

Systematic Reviews

... Second, the studies included in our analyses focused only on OS relative to first-line treatment; they did not account for the potential role of subsequent-line therapies. Although this was a study of first-line therapies, subsequent-line therapies can confound OS results, and previous studies have recommended that these therapies be reported (29,30), but these data were not available in our analysis. Third, although our evaluation of heterogeneity found that it was low for the outcome of OS, future studies should attempt to address the heterogeneity in evaluations of other effectiveness and safety outcomes. ...

How is overall survival assessed in randomised clinical trials in cancer and are subsequent treatment lines considered? A systematic review

Trials