Simon G Thompson

Goethe-Universität Frankfurt am Main, Frankfurt, Hesse, Germany

Publications (105) · 678.21 Total Impact

  •
    ABSTRACT: The value of measuring levels of glycated hemoglobin (HbA1c) for the prediction of first cardiovascular events is uncertain. To determine whether adding information on HbA1c values to conventional cardiovascular risk factors is associated with improvement in prediction of cardiovascular disease (CVD) risk. Analysis of individual-participant data available from 73 prospective studies involving 294,998 participants without a known history of diabetes mellitus or CVD at the baseline assessment. Measures of risk discrimination for CVD outcomes (eg, C-index) and reclassification (eg, net reclassification improvement) of participants across predicted 10-year risk categories of low (<5%), intermediate (5% to <7.5%), and high (≥7.5%) risk. During a median follow-up of 9.9 (interquartile range, 7.6-13.2) years, 20,840 incident fatal and nonfatal CVD outcomes (13,237 coronary heart disease and 7603 stroke outcomes) were recorded. In analyses adjusted for several conventional cardiovascular risk factors, there was an approximately J-shaped association between HbA1c values and CVD risk. The association between HbA1c values and CVD risk changed only slightly after adjustment for total cholesterol and triglyceride concentrations or estimated glomerular filtration rate, but this association attenuated somewhat after adjustment for concentrations of high-density lipoprotein cholesterol and C-reactive protein. The C-index for a CVD risk prediction model containing conventional cardiovascular risk factors alone was 0.7434 (95% CI, 0.7350 to 0.7517). The addition of information on HbA1c was associated with a C-index change of 0.0018 (0.0003 to 0.0033) and a net reclassification improvement of 0.42 (-0.63 to 1.48) for the categories of predicted 10-year CVD risk. The improvement provided by HbA1c assessment in prediction of CVD risk was equal to or better than estimated improvements for measurement of fasting, random, or postload plasma glucose levels. In a study of individuals without known CVD or diabetes, additional assessment of HbA1c values in the context of CVD risk assessment provided little incremental benefit for prediction of CVD risk.
    JAMA The Journal of the American Medical Association 03/2014; 311(12):1225-33. · 29.98 Impact Factor
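    For readers unfamiliar with the reclassification metric reported above, the sketch below illustrates the standard category-based net reclassification improvement (NRI) calculation on simulated risks, using the risk categories quoted in the abstract (<5%, 5% to <7.5%, ≥7.5%). It is an illustration only: the study's analysis accounts for censoring and uses the actual cohort data, which this toy example does not.

    ```python
    import numpy as np

    def risk_category(p, cuts=(0.05, 0.075)):
        """Map predicted 10-year risks to categories: low (<5%), intermediate (5% to <7.5%), high (>=7.5%)."""
        return np.digitize(p, cuts)

    def categorical_nri(p_old, p_new, event, cuts=(0.05, 0.075)):
        """Category-based NRI: [P(up|event) - P(down|event)] + [P(down|non-event) - P(up|non-event)]."""
        old_cat = risk_category(np.asarray(p_old), cuts)
        new_cat = risk_category(np.asarray(p_new), cuts)
        event = np.asarray(event).astype(bool)
        up = new_cat > old_cat
        down = new_cat < old_cat
        nri_events = up[event].mean() - down[event].mean()
        nri_nonevents = down[~event].mean() - up[~event].mean()
        return nri_events + nri_nonevents

    # Toy usage with simulated risks (not the study data)
    rng = np.random.default_rng(0)
    p_old = rng.uniform(0, 0.2, 1000)
    p_new = np.clip(p_old + rng.normal(0, 0.01, 1000), 0, 1)
    event = rng.random(1000) < p_old
    print(categorical_nri(p_old, p_new, event))
    ```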
  •
    ABSTRACT: Guidelines advocate changes in fatty acid consumption to promote cardiovascular health. To summarize evidence about associations between fatty acids and coronary disease. MEDLINE, Science Citation Index, and Cochrane Central Register of Controlled Trials through July 2013. Prospective, observational studies and randomized, controlled trials. Investigators extracted data about study characteristics and assessed study biases. There were 32 observational studies (530 525 participants) of fatty acids from dietary intake; 17 observational studies (25 721 participants) of fatty acid biomarkers; and 27 randomized, controlled trials (103 052 participants) of fatty acid supplementation. In observational studies, relative risks for coronary disease were 1.02 (95% CI, 0.97 to 1.07) for saturated, 0.99 (CI, 0.89 to 1.09) for monounsaturated, 0.93 (CI, 0.84 to 1.02) for long-chain ω-3 polyunsaturated, 1.01 (CI, 0.96 to 1.07) for ω-6 polyunsaturated, and 1.16 (CI, 1.06 to 1.27) for trans fatty acids when the top and bottom thirds of baseline dietary fatty acid intake were compared. Corresponding estimates for circulating fatty acids were 1.06 (CI, 0.86 to 1.30), 1.06 (CI, 0.97 to 1.17), 0.84 (CI, 0.63 to 1.11), 0.94 (CI, 0.84 to 1.06), and 1.05 (CI, 0.76 to 1.44), respectively. There was heterogeneity of the associations among individual circulating fatty acids and coronary disease. In randomized, controlled trials, relative risks for coronary disease were 0.97 (CI, 0.69 to 1.36) for α-linolenic, 0.94 (CI, 0.86 to 1.03) for long-chain ω-3 polyunsaturated, and 0.89 (CI, 0.71 to 1.12) for ω-6 polyunsaturated fatty acid supplementations. Potential biases from preferential publication and selective reporting. Current evidence does not clearly support cardiovascular guidelines that encourage high consumption of polyunsaturated fatty acids and low consumption of total saturated fats. British Heart Foundation, Medical Research Council, Cambridge National Institute for Health Research Biomedical Research Centre, and Gates Cambridge.
    Annals of internal medicine 03/2014; 160(6). · 13.98 Impact Factor
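    The pooled relative risks above come from random-effects meta-analysis. The sketch below shows a generic DerSimonian-Laird random-effects pooling of study-level log relative risks on hypothetical numbers; it is not the paper's analysis code and omits features such as subgrouping by fatty acid type.

    ```python
    import numpy as np

    def dersimonian_laird(log_rr, se):
        """Random-effects pooling of log relative risks via the DerSimonian-Laird estimator."""
        log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
        w = 1.0 / se**2                          # fixed-effect (inverse-variance) weights
        mu_fe = np.sum(w * log_rr) / np.sum(w)
        q = np.sum(w * (log_rr - mu_fe)**2)      # Cochran's Q
        k = len(log_rr)
        tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_re = 1.0 / (se**2 + tau2)              # random-effects weights
        mu_re = np.sum(w_re * log_rr) / np.sum(w_re)
        se_re = np.sqrt(1.0 / np.sum(w_re))
        return np.exp(mu_re), np.exp(mu_re - 1.96 * se_re), np.exp(mu_re + 1.96 * se_re), tau2

    # Hypothetical study-level estimates (RR and upper 95% CI limit), for illustration only
    rr = np.array([1.10, 0.95, 1.25, 1.05])
    ci_upper = np.array([1.40, 1.20, 1.60, 1.30])
    se = (np.log(ci_upper) - np.log(rr)) / 1.96
    print(dersimonian_laird(np.log(rr), se))
    ```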
  •
    ABSTRACT: Multilevel models provide a flexible modelling framework for cost-effectiveness analyses that use cluster randomised trial data. However, there is a lack of guidance on how to choose the most appropriate multilevel models. This paper illustrates an approach for deciding what level of model complexity is warranted; in particular how best to accommodate complex variance-covariance structures, right-skewed costs and missing data. Our proposed models differ according to whether or not they allow individual-level variances and correlations to differ across treatment arms or clusters and by the assumed cost distribution (Normal, Gamma, Inverse Gaussian). The models are fitted by Markov chain Monte Carlo methods. Our approach to model choice is based on four main criteria: the characteristics of the data, model pre-specification informed by the previous literature, diagnostic plots and assessment of model appropriateness. This is illustrated by re-analysing a previous cost-effectiveness analysis that uses data from a cluster randomised trial. We find that the most useful criterion for model choice was the deviance information criterion, which distinguishes amongst models with alternative variance-covariance structures, as well as between those with different cost distributions. This strategy for model choice can help cost-effectiveness analyses provide reliable inferences for policy-making when using cluster trials, including those with missing data.
    Statistical Methods in Medical Research 12/2013; · 2.36 Impact Factor
  • Stephen Burgess, Adam Butterworth, Simon G Thompson
    ABSTRACT: Genome-wide association studies, which typically report regression coefficients summarizing the associations of many genetic variants with various traits, are potentially a powerful source of data for Mendelian randomization investigations. We demonstrate how such coefficients from multiple variants can be combined in a Mendelian randomization analysis to estimate the causal effect of a risk factor on an outcome. The bias and efficiency of estimates based on summarized data are compared to those based on individual-level data in simulation studies. We investigate the impact of gene-gene interactions, linkage disequilibrium, and 'weak instruments' on these estimates. Both an inverse-variance weighted average of variant-specific associations and a likelihood-based approach for summarized data give similar estimates and precision to the two-stage least squares method for individual-level data, even when there are gene-gene interactions. However, these summarized data methods overstate precision when variants are in linkage disequilibrium. If the P-value in a linear regression of the risk factor for each variant is less than 1×10⁻⁵, then weak instrument bias will be small. We use these methods to estimate the causal effect of low-density lipoprotein cholesterol (LDL-C) on coronary artery disease using published data on five genetic variants. A 30% reduction in LDL-C is estimated to reduce coronary artery disease risk by 67% (95% CI: 54% to 76%). We conclude that Mendelian randomization investigations using summarized data from uncorrelated variants are similarly efficient to those using individual-level data, although the necessary assumptions cannot be so fully assessed.
    Genetic Epidemiology 09/2013; · 4.02 Impact Factor
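    The inverse-variance weighted combination of variant-specific associations described above can be written in a few lines. The sketch below is a minimal illustration with hypothetical summary statistics, and assumes uncorrelated variants, as the abstract notes is required.

    ```python
    import numpy as np

    def ivw_estimate(beta_x, beta_y, se_y):
        """Inverse-variance weighted Mendelian randomization estimate from summarized data.
        beta_x: variant-risk factor associations; beta_y, se_y: variant-outcome associations.
        Assumes the variants are uncorrelated (not in linkage disequilibrium)."""
        beta_x, beta_y, se_y = (np.asarray(a, float) for a in (beta_x, beta_y, se_y))
        w = beta_x**2 / se_y**2
        est = np.sum(beta_x * beta_y / se_y**2) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        return est, se

    # Hypothetical summary statistics for a few variants (illustration only)
    beta_x = [0.30, 0.15, 0.45, 0.20]   # per-allele effect on the risk factor
    beta_y = [0.09, 0.05, 0.12, 0.07]   # per-allele log odds ratio for the outcome
    se_y   = [0.02, 0.02, 0.03, 0.02]
    print(ivw_estimate(beta_x, beta_y, se_y))
    ```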
  •
    ABSTRACT: Case-cohort studies are increasingly used to quantify the association of novel factors with disease risk. Conventional measures of predictive ability need modification for this design. We show how Harrell's C-index, Royston's D, and the category-based and continuous versions of the net reclassification index (NRI) can be adapted. We simulated full cohort and case-cohort data, with sampling fractions ranging from 1% to 90%, using covariates from a cohort study of coronary heart disease, and two incidence rates. We then compared the accuracy and precision of the proposed risk prediction metrics. The C-index and D must be weighted in order to obtain unbiased results. The NRI does not need modification, provided that the relevant non-subcohort cases are excluded from the calculation. The empirical standard errors across simulations were consistent with analytical standard errors for the C-index and D but not for the NRI. Good relative efficiency of the prediction metrics was observed in our examples, provided the sampling fraction was above 40% for the C-index, 60% for D, or 30% for the NRI. Stata code is made available. Case-cohort designs can be used to provide unbiased estimates of the C-index, D measure and NRI.
    BMC Medical Research Methodology 09/2013; 13(1):113. · 2.21 Impact Factor
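    The abstract notes that the C-index must be weighted to give unbiased results under case-cohort sampling (the paper's own code is provided in Stata). The sketch below shows only the weighting idea for Harrell's C, using one common choice of weights (inverse sampling fraction for subcohort members, weight 1 for cases); the paper derives the appropriate weights and analytical standard errors, which this toy example does not reproduce.

    ```python
    import numpy as np

    def weighted_c_index(time, event, risk, weight):
        """Harrell's C-index with subject-level weights. A pair (i, j) is usable when
        i has an event and time[i] < time[j]; concordance compares predicted risks."""
        time, event, risk, weight = map(np.asarray, (time, event, risk, weight))
        num = den = 0.0
        n = len(time)
        for i in range(n):
            if not event[i]:
                continue
            for j in range(n):
                if time[i] < time[j]:
                    w = weight[i] * weight[j]
                    den += w
                    if risk[i] > risk[j]:
                        num += w
                    elif risk[i] == risk[j]:
                        num += 0.5 * w
        return num / den

    # Toy case-cohort data: cases keep weight 1, subcohort non-cases are up-weighted
    # by 1/sampling_fraction (an assumed weighting scheme for illustration)
    sampling_fraction = 0.2
    time   = [2.1, 3.5, 1.2, 4.8, 2.9, 5.0]
    event  = [1,   0,   1,   0,   1,   0]
    risk   = [0.30, 0.10, 0.40, 0.05, 0.20, 0.15]
    weight = [1, 1 / sampling_fraction, 1, 1 / sampling_fraction, 1, 1 / sampling_fraction]
    print(weighted_c_index(time, event, risk, weight))
    ```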
  • Stephen Burgess, Simon G Thompson
    ABSTRACT: An allele score is a single variable summarizing multiple genetic variants associated with a risk factor. It is calculated as the total number of risk factor-increasing alleles for an individual (unweighted score), or the sum of weights for each allele corresponding to estimated genetic effect sizes (weighted score). An allele score can be used in a Mendelian randomization analysis to estimate the causal effect of the risk factor on an outcome. Data were simulated to investigate the use of allele scores in Mendelian randomization where conventional instrumental variable techniques using multiple genetic variants demonstrate 'weak instrument' bias. The robustness of estimates using the allele score to misspecification (for example non-linearity, effect modification) and to violations of the instrumental variable assumptions was assessed. Causal estimates using a correctly specified allele score were unbiased with appropriate coverage levels. The estimates were generally robust to misspecification of the allele score, but not to instrumental variable violations, even if the majority of variants in the allele score were valid instruments. Using a weighted rather than an unweighted allele score increased power, but the increase was small when genetic variants had similar effect sizes. Naive use of the data under analysis to choose which variants to include in an allele score, or for deriving weights, resulted in substantial biases. Allele scores enable valid causal estimates with large numbers of genetic variants. The stringency of criteria for genetic variants in Mendelian randomization should be maintained for all variants in an allele score.
    International Journal of Epidemiology 08/2013; 42(4):1134-44. · 6.98 Impact Factor
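    A minimal sketch of the allele score construction and a ratio-type instrumental variable estimate follows, with hypothetical genotypes and weights. As the abstract stresses, weights should come from external data, not from the data under analysis.

    ```python
    import numpy as np

    def allele_score(genotypes, weights=None):
        """Allele score per individual: the count of risk-increasing alleles (unweighted)
        or a weighted sum using externally derived effect sizes.
        genotypes: array of shape (n_individuals, n_variants) with values 0/1/2."""
        g = np.asarray(genotypes, float)
        if weights is None:
            return g.sum(axis=1)                  # unweighted score
        return g @ np.asarray(weights, float)     # weighted score

    def iv_ratio_estimate(score, x, y):
        """Ratio-type instrumental variable estimate of the causal effect of x on y,
        using the allele score as the instrument."""
        score, x, y = (np.asarray(a, float) for a in (score, x, y))
        return np.cov(score, y)[0, 1] / np.cov(score, x)[0, 1]

    # Toy example: simulated genotypes, hypothetical external weights, and confounding
    rng = np.random.default_rng(1)
    n = 2000
    g = rng.binomial(2, 0.3, size=(n, 3))      # hypothetical genotypes (0/1/2)
    w = [0.3, 0.1, 0.2]                        # hypothetical externally derived weights
    s = allele_score(g, w)
    u = rng.normal(0, 1, n)                    # unobserved confounder
    x = s + u + rng.normal(0, 1, n)            # risk factor
    y = 0.8 * x - u + rng.normal(0, 1, n)      # outcome; true causal effect 0.8
    print(iv_ratio_estimate(s, x, y))          # near 0.8, unlike naive regression of y on x
    ```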
  •
    ABSTRACT: Within-person variability in measured values of a risk factor can bias its association with disease. We investigated the extent of regression dilution bias in calculated variables and its implications for comparing the aetiological associations of risk factors. Using a numerical illustration and repeats from 42 300 individuals (12 cohorts), we estimated regression dilution ratios (RDRs) in calculated risk factors [body-mass index (BMI), waist-to-hip ratio (WHR), and waist-to-height ratio (WHtR)] and in their components (height, weight, waist circumference, and hip circumference), assuming the long-term average exposure to be of interest. Error-corrected hazard ratios (HRs) for risk of coronary heart disease (CHD) were compared across adiposity measures per standard-deviation (SD) change in: (i) baseline and (ii) error-corrected levels. RDRs in calculated risk factors depend strongly on the RDRs, correlation, and comparative distributions of the components of these risk factors. For measures of adiposity, the RDR was lower for WHR [RDR: 0.72 (95% confidence interval 0.65-0.80)] than for either of its components [waist circumference: 0.87 (0.85-0.90); hip circumference: 0.90 (0.86-0.93)] or for BMI [0.96 (0.93-0.98)] and WHtR [0.87 (0.85-0.90)], predominantly because of the stronger correlation and more similar distributions observed between waist circumference and hip circumference than between height and weight or between waist circumference and height. Error-corrected HRs for BMI, waist circumference, WHR, and WHtR, were respectively 1.24, 1.30, 1.44, and 1.32 per SD change in baseline levels of these variables, and 1.24, 1.27, 1.35, and 1.30 per SD change in error-corrected levels. The extent of within-person variability relative to between-person variability in calculated risk factors can be considerably larger (or smaller) than in their components. Aetiological associations of risk factors should be compared through the use of error-corrected HRs per SD change in error-corrected levels of these risk factors.
    International Journal of Epidemiology 06/2013; 42(3):849-59. · 6.98 Impact Factor
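    The sketch below illustrates the regression dilution correction underlying these results in its simplest univariate form: estimate the RDR as the slope of resurvey measurements on baseline measurements, then divide the observed log hazard ratio by the RDR. Apart from the WHR RDR of 0.72 quoted above, all numbers are hypothetical, and the paper's per-SD comparisons involve further rescaling not shown here.

    ```python
    import numpy as np

    def regression_dilution_ratio(baseline, resurvey):
        """Estimate the regression dilution ratio (RDR) as the slope from regressing
        repeat (resurvey) measurements on baseline measurements."""
        baseline, resurvey = np.asarray(baseline, float), np.asarray(resurvey, float)
        return np.cov(baseline, resurvey)[0, 1] / np.var(baseline, ddof=1)

    def corrected_hr(observed_hr, rdr):
        """Simple univariate regression dilution correction:
        divide the observed log hazard ratio by the RDR."""
        return np.exp(np.log(observed_hr) / rdr)

    # Simulated repeat measurements: independent error at baseline and resurvey
    rng = np.random.default_rng(5)
    true_whr = rng.normal(0.9, 0.08, 1000)
    baseline = true_whr + rng.normal(0, 0.05, 1000)
    resurvey = true_whr + rng.normal(0, 0.05, 1000)
    print(regression_dilution_ratio(baseline, resurvey))   # ~0.08^2/(0.08^2+0.05^2) ~ 0.72

    # Hypothetical observed HR of 1.25 per unit of baseline WHR, corrected with RDR 0.72
    print(corrected_hr(1.25, 0.72))   # ~1.36 per unit of long-term average WHR
    ```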
  •
    ABSTRACT: Small abdominal aortic aneurysms (AAAs [3.0 cm-5.4 cm in diameter]) are monitored by ultrasound surveillance. The intervals between surveillance scans should be chosen to detect an expanding aneurysm prior to rupture. To limit risk of aneurysm rupture or excessive growth by optimizing ultrasound surveillance intervals. Individual patient data from studies of small AAA growth and rupture were assessed. Studies were identified for inclusion through a systematic literature search through December 2010. Study authors were contacted, which yielded 18 data sets providing repeated ultrasound measurements of AAA diameter over time in 15,471 patients. AAA diameters were analyzed using a random-effects model that allowed for between-patient variability in size and growth rate. Rupture rates were analyzed by proportional hazards regression using the modeled AAA diameter as a time-varying covariate. Predictions of the risks of exceeding 5.5-cm diameter and of rupture within given time intervals were estimated and pooled across studies by random effects meta-analysis. AAA growth and rupture rates varied considerably across studies. For each 0.5-cm increase in AAA diameter, growth rates increased on average by 0.59 mm per year (95% CI, 0.51-0.66) and rupture rates increased by a factor of 1.91 (95% CI, 1.61-2.25). For example, to control the AAA growth risk in men of exceeding 5.5 cm to below 10%, on average, a 7.4-year surveillance interval (95% CI, 6.7-8.1) is sufficient for a 3.0-cm AAA, while an 8-month interval (95% CI, 7-10) is necessary for a 5.0-cm AAA. To control the risk of rupture in men to below 1%, the corresponding estimated surveillance intervals are 8.5 years (95% CI, 7.0-10.5) and 17 months (95% CI, 14-22). In contrast to the commonly adopted surveillance intervals in current AAA screening programs, surveillance intervals of several years may be clinically acceptable for the majority of patients with small AAA.
    JAMA The Journal of the American Medical Association 02/2013; 309(8):806-13. · 29.98 Impact Factor
  •
    ABSTRACT: There is debate about the value of assessing levels of C-reactive protein (CRP) and other biomarkers of inflammation for the prediction of first cardiovascular events. We analyzed data from 52 prospective studies that included 246,669 participants without a history of cardiovascular disease to investigate the value of adding CRP or fibrinogen levels to conventional risk factors for the prediction of cardiovascular risk. We calculated measures of discrimination and reclassification during follow-up and modeled the clinical implications of initiation of statin therapy after the assessment of CRP or fibrinogen. The addition of information on high-density lipoprotein cholesterol to a prognostic model for cardiovascular disease that included age, sex, smoking status, blood pressure, history of diabetes, and total cholesterol level increased the C-index, a measure of risk discrimination, by 0.0050. The further addition to this model of information on CRP or fibrinogen increased the C-index by 0.0039 and 0.0027, respectively (P<0.001), and yielded a net reclassification improvement of 1.52% and 0.83%, respectively, for the predicted 10-year risk categories of "low" (<10%), "intermediate" (10% to <20%), and "high" (≥20%) (P<0.02 for both comparisons). We estimated that among 100,000 adults 40 years of age or older, 15,025 persons would initially be classified as being at intermediate risk for a cardiovascular event if conventional risk factors alone were used to calculate risk. Assuming that statin therapy would be initiated in accordance with Adult Treatment Panel III guidelines (i.e., for persons with a predicted risk of ≥20% and for those with certain other risk factors, such as diabetes, irrespective of their 10-year predicted risk), additional targeted assessment of CRP or fibrinogen levels in the 13,199 remaining participants at intermediate risk could help prevent approximately 30 additional cardiovascular events over the course of 10 years. In a study of people without known cardiovascular disease, we estimated that under current treatment guidelines, assessment of the CRP or fibrinogen level in people at intermediate risk for a cardiovascular event could help prevent one additional event over a period of 10 years for every 400 to 500 people screened. (Funded by the British Heart Foundation and others.).
    New England Journal of Medicine 10/2012; 367(14):1310-20. · 51.66 Impact Factor
  •
    ABSTRACT: The value of assessing various emerging lipid-related markers for prediction of first cardiovascular events is debated. To determine whether adding information on apolipoprotein B and apolipoprotein A-I, lipoprotein(a), or lipoprotein-associated phospholipase A2 to total cholesterol and high-density lipoprotein cholesterol (HDL-C) improves cardiovascular disease (CVD) risk prediction. Individual records were available for 165,544 participants without baseline CVD in 37 prospective cohorts (calendar years of recruitment: 1968-2007) with up to 15,126 incident fatal or nonfatal CVD outcomes (10,132 CHD and 4994 stroke outcomes) during a median follow-up of 10.4 years (interquartile range, 7.6-14 years). Discrimination of CVD outcomes and reclassification of participants across predicted 10-year risk categories of low (<10%), intermediate (10%-<20%), and high (≥20%) risk. The addition of information on various lipid-related markers to total cholesterol, HDL-C, and other conventional risk factors yielded improvement in the model's discrimination: C-index change, 0.0006 (95% CI, 0.0002-0.0009) for the combination of apolipoprotein B and A-I; 0.0016 (95% CI, 0.0009-0.0023) for lipoprotein(a); and 0.0018 (95% CI, 0.0010-0.0026) for lipoprotein-associated phospholipase A2 mass. Net reclassification improvements were less than 1% with the addition of each of these markers to risk scores containing conventional risk factors. We estimated that for 100,000 adults aged 40 years or older, 15,436 would be initially classified at intermediate risk using conventional risk factors alone. Additional testing with a combination of apolipoprotein B and A-I would reclassify 1.1%; lipoprotein(a), 4.1%; and lipoprotein-associated phospholipase A2 mass, 2.7% of people to a 20% or higher predicted CVD risk category and, therefore, in need of statin treatment under Adult Treatment Panel III guidelines. In a study of individuals without known CVD, the addition of information on the combination of apolipoprotein B and A-I, lipoprotein(a), or lipoprotein-associated phospholipase A2 mass to risk scores containing total cholesterol and HDL-C led to slight improvement in CVD prediction.
    JAMA The Journal of the American Medical Association 06/2012; 307(23):2499-506. · 29.98 Impact Factor
  •
    ABSTRACT: Mendelian randomisation is an epidemiological method for estimating causal associations from observational data by using genetic variants as instrumental variables. Typically the genetic variants explain only a small proportion of the variation in the risk factor of interest, and so large sample sizes are required, necessitating data from multiple sources. Meta-analysis based on individual patient data requires synthesis of studies which differ in many aspects. A proposed Bayesian framework is able to estimate a causal effect from each study, and combine these using a hierarchical model. The method is illustrated for data on C-reactive protein and coronary heart disease (CHD) from the C-reactive protein CHD Genetics Collaboration (CCGC). Studies from the CCGC differ in terms of the genetic variants measured, the study design (prospective or retrospective, population-based or case-control), whether C-reactive protein was measured, the time of C-reactive protein measurement (pre- or post-disease), and whether full or tabular data were shared. We show how these data can be combined in an efficient way to give a single estimate of causal association based on the totality of the data available. Compared to a two-stage analysis, the Bayesian method is able to incorporate data on 23% additional participants and 51% more events, leading to a 23-26% gain in efficiency.
    Statistical Methods in Medical Research 06/2012; · 2.36 Impact Factor
  •
    ABSTRACT: Carotid intima-media thickness (cIMT) is related to the risk of cardiovascular events in the general population. An association between changes in cIMT and cardiovascular risk is frequently assumed but has rarely been reported. Our aim was to test this association. We identified general population studies that assessed cIMT at least twice and followed up participants for myocardial infarction, stroke, or death. The study teams collaborated in an individual participant data meta-analysis. Excluding individuals with previous myocardial infarction or stroke, we assessed the association between cIMT progression and the risk of cardiovascular events (myocardial infarction, stroke, vascular death, or a combination of these) for each study with Cox regression. The log hazard ratios (HRs) per SD difference were pooled by random effects meta-analysis. Of 21 eligible studies, 16 with 36,984 participants were included. During a mean follow-up of 7·0 years, 1519 myocardial infarctions, 1339 strokes, and 2028 combined endpoints (myocardial infarction, stroke, vascular death) occurred. Yearly cIMT progression was derived from two ultrasound visits 2-7 years (median 4 years) apart. For mean common carotid artery intima-media thickness progression, the overall HR of the combined endpoint was 0·97 (95% CI 0·94-1·00) when adjusted for age, sex, and mean common carotid artery intima-media thickness, and 0·98 (0·95-1·01) when also adjusted for vascular risk factors. Although we detected no associations with cIMT progression in sensitivity analyses, the mean cIMT of the two ultrasound scans was positively and robustly associated with cardiovascular risk (HR for the combined endpoint 1·16, 95% CI 1·10-1·22, adjusted for age, sex, mean common carotid artery intima-media thickness progression, and vascular risk factors). In three studies including 3439 participants who had four ultrasound scans, cIMT progression did not correlate between occasions (reproducibility correlations between r=-0·06 and r=-0·02). The association between cIMT progression assessed from two ultrasound scans and cardiovascular risk in the general population remains unproven. No conclusion can be derived for the use of cIMT progression as a surrogate in clinical trials. Deutsche Forschungsgemeinschaft.
    The Lancet 04/2012; 379(9831):2053-62. · 39.06 Impact Factor
  •
    ABSTRACT: Many meta-analyses contain only a small number of studies, which makes it difficult to estimate the extent of between-study heterogeneity. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, and offers advantages over conventional random-effects meta-analysis. To assist in this, we provide empirical evidence on the likely extent of heterogeneity in particular areas of health care. Our analyses included 14 886 meta-analyses from the Cochrane Database of Systematic Reviews. We classified each meta-analysis according to the type of outcome, type of intervention comparison and medical specialty. By modelling the study data from all meta-analyses simultaneously, using the log odds ratio scale, we investigated the impact of meta-analysis characteristics on the underlying between-study heterogeneity variance. Predictive distributions were obtained for the heterogeneity expected in future meta-analyses. Between-study heterogeneity variances for meta-analyses in which the outcome was all-cause mortality were found to be on average 17% (95% CI 10-26) of variances for other outcomes. In meta-analyses comparing two active pharmacological interventions, heterogeneity was on average 75% (95% CI 58-95) of variances for non-pharmacological interventions. Meta-analysis size was found to have only a small effect on heterogeneity. Predictive distributions are presented for nine different settings, defined by type of outcome and type of intervention comparison. For example, for a planned meta-analysis comparing a pharmacological intervention against placebo or control with a subjectively measured outcome, the predictive distribution for heterogeneity is a log-normal(-2.13, 1.58²) distribution, which has a median value of 0.12. In an example of meta-analysis of six studies, incorporating external evidence led to a smaller heterogeneity estimate and a narrower confidence interval for the combined intervention effect. Meta-analysis characteristics were strongly associated with the degree of between-study heterogeneity, and predictive distributions for heterogeneity differed substantially across settings. The informative priors provided will be very beneficial in future meta-analyses including few studies.
    International Journal of Epidemiology 03/2012; 41(3):818-27. · 6.98 Impact Factor
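    The predictive distribution quoted above can be used directly as an informative prior for the between-study heterogeneity variance. The sketch below simply samples from the stated log-normal(-2.13, 1.58²) distribution and confirms its median of about 0.12; embedding it in a full Bayesian meta-analysis is not shown.

    ```python
    import numpy as np

    # Predictive distribution for tau^2 (between-study variance on the log odds ratio
    # scale) for a pharmacological vs placebo/control comparison with a subjectively
    # measured outcome, as quoted in the abstract: log-normal(-2.13, 1.58^2).
    rng = np.random.default_rng(42)
    tau2 = np.exp(rng.normal(loc=-2.13, scale=1.58, size=100_000))
    print(np.median(tau2))                    # ~ exp(-2.13) ~ 0.12, matching the quoted median
    print(np.percentile(tau2, [2.5, 97.5]))   # a rough 95% range for tau^2
    ```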
  •
    ABSTRACT: Statistical methods have been developed for cost-effectiveness analyses of cluster randomised trials (CRTs) where baseline covariates are balanced. However, CRTs may show systematic differences in individual and cluster-level covariates between the treatment groups. This paper presents three methods to adjust for imbalances in observed covariates: seemingly unrelated regression with a robust standard error, a 'two-stage' bootstrap approach combined with seemingly unrelated regression and multilevel models. We consider the methods in a cost-effectiveness analysis of a CRT with covariate imbalance, unequal cluster sizes and a prognostic relationship that varied by treatment group. The cost-effectiveness results differed according to the approach for covariate adjustment. A simulation study then assessed the relative performance of methods for addressing systematic imbalance in baseline covariates. The simulations extended the case study and considered scenarios with different levels of confounding, cluster size variation and few clusters. Performance was reported as bias, root mean squared error and CI coverage of the incremental net benefit. Even with low levels of confounding, unadjusted methods were biased, but all adjusted methods were unbiased. Multilevel models performed well across all settings, and unlike the other methods, reported CI coverage close to nominal levels even with few clusters of unequal sizes.
    Health Economics 03/2012; 21(9):1101-18. · 2.23 Impact Factor
  • Stephen Burgess, Simon G Thompson
    ABSTRACT: Causal estimates can be obtained by instrumental variable analysis using a two-stage method. However, these can be biased when the instruments are weak. We introduce a Bayesian method, which adjusts for the first-stage residuals in the second-stage regression and has much improved bias and coverage properties. In the continuous outcome case, this adjustment reduces median bias from weak instruments to close to zero. In the binary outcome case, bias from weak instruments is reduced and the estimand is changed from a marginal population-based effect to a conditional effect. The lack of distributional assumptions on the posterior distribution of the causal effect gives a better summary of uncertainty and more accurate coverage levels than methods that rely on the asymptotic distribution of the causal estimate. We discuss these properties in the context of Mendelian randomization.
    Statistics in Medicine 02/2012; 31(15):1582-600. · 2.04 Impact Factor
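    The adjustment described above, in which the first-stage residuals enter the second-stage regression, has a simple frequentist analogue (a control-function fit), sketched below on simulated data with a continuous outcome. The paper's method is Bayesian and models both stages jointly, which this sketch does not attempt.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 5000
    z = rng.binomial(2, 0.3, n)             # genetic instrument (allele count)
    u = rng.normal(0, 1, n)                 # unobserved confounder
    x = 0.25 * z + u + rng.normal(0, 1, n)  # risk factor
    y = 0.5 * x - u + rng.normal(0, 1, n)   # continuous outcome; true causal effect 0.5

    # Stage 1: regress the risk factor on the instrument, keep the residuals
    Z1 = np.column_stack([np.ones(n), z])
    g = np.linalg.lstsq(Z1, x, rcond=None)[0]
    resid = x - Z1 @ g

    # Stage 2: regress the outcome on the risk factor *and* the first-stage residuals;
    # the coefficient on x is the causal estimate (control-function adjustment)
    X2 = np.column_stack([np.ones(n), x, resid])
    b = np.linalg.lstsq(X2, y, rcond=None)[0]
    print(b[1])   # should be near the true effect 0.5
    ```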
  •
    ABSTRACT: Two-stage studies may be chosen optimally by minimising a single characteristic like the maximum sample size. However, given that an investigator will initially select a null treatment effect and the clinically relevant difference, it is better to choose a design that also considers the expected sample size for each of these values. The maximum sample size and the two expected sample sizes are here combined to produce an expected loss function to find designs that are admissible. Given the prior odds of success and the importance of the total sample size, minimising the expected loss gives the optimal design for this situation. A novel triangular graph to represent the admissible designs helps guide the decision-making process. The H₀-optimal, H₁-optimal, H₀-minimax and H₁-minimax designs are all particular cases of admissible designs. The commonly used H₀-optimal design is rarely good when allowing stopping for efficacy. Additionally, the δ-minimax design, which minimises the maximum expected sample size, is sometimes admissible under the loss function. However, the results can be varied and each situation will require the evaluation of all the admissible designs. Software to do this is provided.
    Pharmaceutical Statistics 01/2012; 11(2):91-6. · 0.99 Impact Factor
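    The sketch below illustrates the admissibility idea on hypothetical design characteristics: a design is inadmissible if another design is at least as good on maximum sample size and on expected sample size under both hypotheses, and strictly better on at least one. The expected-loss form shown is one plausible weighting of these quantities and is not necessarily the exact loss function used in the paper.

    ```python
    import numpy as np

    # Candidate two-stage designs summarized by three characteristics
    # (hypothetical numbers; in practice these follow from the stopping rules):
    # columns: maximum sample size, E(N | H0), E(N | H1)
    designs = np.array([
        [80, 52, 60],
        [90, 45, 58],
        [100, 40, 55],
        [85, 55, 62],
    ])

    def expected_loss(d, w_max, prior_h1):
        """One plausible expected loss: a weighted combination of the maximum sample
        size and the expected sample sizes under H0 and H1 (assumed form)."""
        n_max, en0, en1 = d
        return w_max * n_max + (1 - w_max) * ((1 - prior_h1) * en0 + prior_h1 * en1)

    def admissible(designs):
        """Indices of designs not dominated on all three characteristics."""
        keep = []
        for i, d in enumerate(designs):
            dominated = any(np.all(e <= d) and np.any(e < d)
                            for j, e in enumerate(designs) if j != i)
            if not dominated:
                keep.append(i)
        return keep

    print(admissible(designs))                                       # admissible designs
    losses = [expected_loss(d, w_max=0.3, prior_h1=0.5) for d in designs]
    print(int(np.argmin(losses)))                                    # optimal for these weights
    ```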
  •
    ABSTRACT: To estimate the effectiveness of routine antenatal anti-D prophylaxis for preventing sensitisation in pregnant Rhesus negative women, and to explore whether this depends on the treatment regimen adopted. Ten studies identified in a previous systematic literature search were included. Potential sources of bias were systematically identified using bias checklists, and their impact and uncertainty were quantified using expert opinion. Study results were adjusted for biases and combined, first in a random-effects meta-analysis and then in a random-effects meta-regression analysis. In a conventional meta-analysis, the pooled odds ratio for sensitisation was estimated as 0.25 (95% CI 0.18, 0.36), comparing routine antenatal anti-D prophylaxis to control, with some heterogeneity (I² = 19%). However, this naïve analysis ignores substantial differences in study quality and design. After adjusting for these, the pooled odds ratio for sensitisation was estimated as 0.31 (95% CI 0.17, 0.56), with no evidence of heterogeneity (I² = 0%). A meta-regression analysis was performed, which used the data available from the ten anti-D prophylaxis studies to inform us about the relative effectiveness of three licensed treatments. This gave an 83% probability that a dose of 1250 IU at 28 and 34 weeks is most effective and a 76% probability that a single dose of 1500 IU at 28-30 weeks is least effective. There is strong evidence for the effectiveness of routine antenatal anti-D prophylaxis for prevention of sensitisation, in support of the policy of offering routine prophylaxis to all non-sensitised pregnant Rhesus negative women. All three licensed dose regimens are expected to be effective.
    PLoS ONE 01/2012; 7(2):e30711. · 3.73 Impact Factor
  •
    BMJ (Clinical research ed.). 01/2012; 345:e7325.
  • James M S Wason, Adrian P Mander, Simon G Thompson
    ABSTRACT: Multistage designs allow considerable reductions in the expected sample size of a trial. When stopping for futility or efficacy is allowed at each stage, the expected sample size under different possible true treatment effects (δ) is of interest. The δ-minimax design is the one for which the maximum expected sample size is minimised amongst all designs that meet the types I and II error constraints. Previous work has compared a two-stage δ-minimax design with other optimal two-stage designs. Applying the δ-minimax design to designs with more than two stages was not previously considered because of computational issues. In this paper, we identify the δ-minimax designs with more than two stages through use of a novel application of simulated annealing. We compare them with other optimal multistage designs and the triangular design. We show that, as for two-stage designs, the δ-minimax design has good expected sample size properties across a broad range of treatment effects but generally has a higher maximum sample size. To overcome this drawback, we use the concept of admissible designs to find trials which balance the maximum expected sample size and maximum sample size. We show that such designs have good expected sample size properties and a reasonable maximum sample size and, thus, are very appealing for use in clinical trials.
    Statistics in Medicine 12/2011; 31(4):301-12. · 2.04 Impact Factor
  •
    ABSTRACT: Cost-effectiveness analyses (CEAs) may use data from cluster randomized trials (CRTs), where the unit of randomization is the cluster, not the individual. However, most studies use analytical methods that ignore clustering. This article compares alternative statistical methods for accommodating clustering in CEAs of CRTs. Our simulation study compared the performance of statistical methods for CEAs of CRTs with 2 treatment arms. The study considered a method that ignored clustering--seemingly unrelated regression (SUR) without a robust standard error (SE)--and 4 methods that recognized clustering--SUR and generalized estimating equations (GEEs), both with robust SE, a "2-stage" nonparametric bootstrap (TSB) with shrinkage correction, and a multilevel model (MLM). The base case assumed CRTs with moderate numbers of balanced clusters (20 per arm) and normally distributed costs. Other scenarios included CRTs with few clusters, imbalanced cluster sizes, and skewed costs. Performance was reported as bias, root mean squared error (rMSE), and confidence interval (CI) coverage for estimating incremental net benefits (INBs). We also compared the methods in a case study. Each method reported low levels of bias. Without the robust SE, SUR gave poor CI coverage (base case: 0.89 v. nominal level: 0.95). The MLM and TSB performed well in each scenario (CI coverage, 0.92-0.95). With few clusters, the GEE and SUR (with robust SE) had coverage below 0.90. In the case study, the mean INBs were similar across all methods, but ignoring clustering underestimated statistical uncertainty and the value of further research. MLMs and the TSB are appropriate analytical methods for CEAs of CRTs with the characteristics described. SUR and GEE are not recommended for studies with few clusters.
    Medical Decision Making 10/2011; 32(2):350-61. · 2.89 Impact Factor
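    The two cluster-trial cost-effectiveness entries above both find that multilevel models perform well. The sketch below computes individual net benefit at a given willingness-to-pay and fits a random-intercept model so that the treatment coefficient estimates the incremental net benefit with clustering reflected in its standard error. It uses simulated data and the statsmodels and pandas packages (assumed dependencies), and is far simpler than the bivariate cost-effect models the papers consider.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n_clusters, per_cluster, lam = 20, 30, 20000   # lam: willingness-to-pay per QALY

    rows = []
    for c in range(n_clusters):
        treat = c % 2                               # clusters randomized to arms
        cluster_effect = rng.normal(0, 0.05)
        for _ in range(per_cluster):
            qaly = 0.70 + 0.03 * treat + cluster_effect + rng.normal(0, 0.1)
            cost = 1000 + 300 * treat + rng.gamma(2, 200)
            rows.append({"cluster": c, "treat": treat,
                         "nb": lam * qaly - cost})   # individual net benefit
    df = pd.DataFrame(rows)

    # Random-intercept (multilevel) model: the coefficient on `treat` estimates the
    # incremental net benefit, with clustering reflected in its standard error.
    model = smf.mixedlm("nb ~ treat", df, groups=df["cluster"]).fit()
    print(model.params["treat"], model.bse["treat"])
    ```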

Publication Stats

12k Citations
678.21 Total Impact Points

Institutions

  • 2010–2012
    • Goethe-Universität Frankfurt am Main
      Frankfurt, Hesse, Germany
  • 2009–2012
    • London School of Hygiene and Tropical Medicine
      • Department of Health Services Research and Policy
      • Department of Medical Statistics
      London, ENG, United Kingdom
  • 2002–2012
    • University of Cambridge
      • Cambridge Institute of Public Health
      • Department of Public Health and Primary Care
      • MRC Biostatistics Unit
      Cambridge, ENG, United Kingdom
  • 2011
    • University of Oxford
      • Health Economics Research Centre (HERC)
      Oxford, ENG, United Kingdom
  • 2008–2010
    • Imperial College London
      London, England, United Kingdom
  • 2003–2010
    • Medical Research Council (UK)
      • MRC Biostatistics Unit
      London, England, United Kingdom
  • 2007–2009
    • University College London
      • Department of Primary Care and Population Health (PCPH)
      London, ENG, United Kingdom
  • 2005–2007
    • University of Bristol
      Bristol, England, United Kingdom