Statistical Methods in Medical Research (STAT METHODS MED RES)

Publisher: SAGE Publications

Journal description

Statistical Methods in Medical Research is the leading vehicle for review articles in all the main areas of medical statistics and is an essential reference for all medical statisticians. It is particularly useful for medical researchers dealing with data and provides a key resource for medical and statistical libraries, as well as pharmaceutical companies. This unique journal is devoted solely to statistics and medicine and aims to keep professionals abreast of the many powerful statistical techniques now available to the medical profession. As techniques are constantly adopted by statisticians working both inside and outside the medical environment, this review journal aims to satisfy the increasing demand for accurate and up-to-the-minute information.

Current impact factor: 2.96

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2013 / 2014 Impact Factor 2.957
2012 Impact Factor 2.364
2011 Impact Factor 2.443
2010 Impact Factor 1.768
2009 Impact Factor 2.569
2008 Impact Factor 2.177
2007 Impact Factor 1.492
2006 Impact Factor 1.377
2005 Impact Factor 1.327
2004 Impact Factor 2.583
2003 Impact Factor 1.857
2002 Impact Factor 1.553
2001 Impact Factor 1.886

Impact factor over time

(Chart: impact factor by year; values as listed in the table above.)

Additional details

5-year impact 3.14
Cited half-life 0.00
Immediacy index 0.76
Eigenfactor 0.01
Article influence 1.95
Website Statistical Methods in Medical Research website
Other titles Statistical methods in medical research (Online)
ISSN 1477-0334
OCLC 42423902
Material type Document, Periodical, Internet resource
Document type Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

SAGE Publications

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Authors retain copyright
    • Pre-print on any website
    • Author's post-print on author's personal website, departmental website, institutional website or institutional repository
    • On other repositories including PubMed Central after 12 months embargo
    • Publisher copyright and source must be acknowledged
    • Publisher's version/PDF cannot be used
    • Post-print version with changes from referees' comments can be used
    • "as published" final version with layout and copy-editing changes cannot be archived but can be used on a secure institutional intranet
  • Classification
    • green

Publications in this journal

  • Statistical Methods in Medical Research 08/2015; 24(4):399-402. DOI:10.1177/0962280214520734
  • ABSTRACT: In this article we propose a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than the traditional Bayesian sequential design, which sets equal critical values for all interim analyses. Compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is the least likely to be rejected at an early stage of the trial. Finally, we show that adding a futility stopping rule to the Bayesian sequential design can reduce both the overall type I error and the actual sample sizes. © The Author(s) 2015.
    Statistical Methods in Medical Research 07/2015; DOI:10.1177/0962280215595058
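    The alpha spending idea the design above builds on can be made concrete with the two classical spending functions such comparisons usually include. The Python sketch below is illustrative only and is not the authors' Bayesian algorithm; the four equally spaced looks are an assumed schedule. It computes the cumulative and per-look type I error spent by O'Brien-Fleming-type and Pocock-type spending functions, which shows why the O'Brien-Fleming function is described as the most conservative at early interim analyses.
      # Illustrative sketch (not the authors' Bayesian design): classical alpha
      # spending functions that split an overall two-sided type I error rate
      # across the interim analyses of a group-sequential trial.
      import numpy as np
      from scipy.stats import norm

      def obrien_fleming(t, alpha=0.05):
          """O'Brien-Fleming-type spending: alpha(t) = 2 * (1 - Phi(z_{1-alpha/2} / sqrt(t)))."""
          t = np.asarray(t, dtype=float)
          return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / np.sqrt(t)))

      def pocock(t, alpha=0.05):
          """Pocock-type spending: alpha(t) = alpha * ln(1 + (e - 1) * t)."""
          t = np.asarray(t, dtype=float)
          return alpha * np.log(1.0 + (np.e - 1.0) * t)

      looks = np.array([0.25, 0.5, 0.75, 1.0])   # assumed information fractions at four looks
      for name, spend in [("O'Brien-Fleming", obrien_fleming), ("Pocock", pocock)]:
          cumulative = spend(looks)
          per_look = np.diff(np.concatenate(([0.0], cumulative)))
          print(name, np.round(cumulative, 4), np.round(per_look, 4))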
  • ABSTRACT: Interval designs have recently attracted enormous attention due to their simplicity and desirable properties. We develop a Bayesian optimal interval design for dose finding in drug-combination trials. To determine the next dose combination based on the cumulative data, we propose an allocation rule that maximizes the posterior probability that the toxicity rate of the next dose falls inside a prespecified probability interval. The entire dose-finding procedure is nonparametric (model-free) and thus robust, and it does not require the typical "nonparametric" prephase used in model-based designs for drug-combination trials. The proposed two-dimensional interval design enjoys convergence properties for large samples. We conduct simulation studies to demonstrate the finite-sample performance of the proposed method under various scenarios and further make a modification to estimate toxicity contours by parallel dose-finding paths. Simulation results show that on average the performance of the proposed design is comparable with that of model-based designs, but it is much easier to implement. © The Author(s) 2015.
    Statistical Methods in Medical Research 07/2015; DOI:10.1177/0962280215594494
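    The allocation rule described above can be illustrated with a one-dimensional Beta-binomial calculation. In the sketch below, the Beta(1, 1) prior, the target interval (0.25, 0.35), and the candidate dose-combination data are all hypothetical choices for illustration; the published design is two-dimensional and has additional rules that are not reproduced here.
      # Minimal sketch of the interval-probability allocation idea: with a
      # Beta(1, 1) prior and binomial toxicity data, the posterior probability
      # that a dose's toxicity rate lies inside a prespecified interval is a
      # difference of Beta CDFs; the next dose maximizes that probability.
      from scipy.stats import beta

      def interval_probability(n_tox, n_treated, low=0.25, high=0.35, a0=1.0, b0=1.0):
          a, b = a0 + n_tox, b0 + n_treated - n_tox
          return beta.cdf(high, a, b) - beta.cdf(low, a, b)

      # Hypothetical cumulative (toxicities, patients) at three candidate combinations.
      candidates = {"(2, 1)": (1, 9), "(2, 2)": (3, 12), "(3, 1)": (4, 8)}
      scores = {dose: interval_probability(x, n) for dose, (x, n) in candidates.items()}
      print(scores, "->", max(scores, key=scores.get))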
  • ABSTRACT: A random effects logistic regression model is proposed for an incomplete block crossover trial comparing three treatments when the underlying patient response is dichotomous. On the basis of the conditional distributions, the conditional maximum likelihood estimator for the relative effect between treatments and its estimated asymptotic standard error are derived. An asymptotic interval estimator and an exact interval estimator are also developed. Monte Carlo simulation is used to evaluate the performance of these estimators. Both asymptotic and exact interval estimators are found to perform well in a variety of situations. When the number of patients is small, the exact interval estimator, which guarantees a coverage probability greater than or equal to the desired confidence level, can be especially useful. Data from a crossover trial comparing low and high doses of an analgesic with a placebo for the relief of pain in primary dysmenorrhea are used to illustrate the use of these estimators and the potential usefulness of the incomplete block crossover design. © The Author(s) 2015.
    Statistical Methods in Medical Research 07/2015; DOI:10.1177/0962280215595434
  • ABSTRACT: Cohorts and clinical trials with interval-censored data and clustered event times are increasingly popular designs. First, the observed outcomes cannot be considered independent, which is why random effects survival models were introduced. Second, the failure time is not known exactly but only known to have occurred within a certain interval. We propose here an extension of shared frailty models to handle simultaneously the interval censoring, the clustering, and left truncation due to delayed entry into the cohort. A simulation study to evaluate the proposed method was conducted. The estimated results are used to obtain dynamic predictions for clustered patients with interval-censored failure times and a given history. We apply our method to the Three-City study, a prospective cohort with periodic follow-up, in order to study prognostic factors of dementia. In this application, couples are natural clusters and an intra-couple correlation might be present, with a possibly increased risk of dementia for subjects whose partner has already developed incident dementia. No significant intra-couple correlation for the risk of dementia was observed before or after adjustment for covariates. We also present individual predictions of dementia, underlining the usefulness of dynamic prognostic tools that can take the clustering into account. The consideration of frailty models for interval-censored and left-truncated data permits useful analysis of very complex clustered data and could help improve estimation of the impact of proposed prognostic features in a study with clustering. We propose here a tractable model and a dynamic prediction tool that can easily be implemented using the R package Frailtypack. © The Author(s) 2015.
    Statistical Methods in Medical Research 07/2015; DOI:10.1177/0962280215594835
  • ABSTRACT: In comparative effectiveness studies of multicomponent, sequential interventions like blood product transfusion (plasma, platelets, red blood cells) for trauma and critical care patients, the timing and dynamics of treatment relative to the fragility of a patient's condition are often overlooked and underappreciated. While many hospitals have established massive transfusion protocols to ensure that physiologically optimal combinations of blood products are rapidly available, the period of time required to achieve a specified massive transfusion standard (e.g. a 1:1 or 1:2 ratio of plasma or platelets:red blood cells) has been ignored. To account for the time-varying characteristics of transfusions, we use semiparametric rate models for multivariate recurrent events to estimate blood product ratios. We use latent variables to account for multiple sources of informative censoring (early surgical or endovascular hemorrhage control procedures or death). The major advantage is that the distributions of the latent variables and the dependence structure between the multivariate recurrent events and informative censoring need not be specified. Thus, our approach is robust to complex model assumptions. We establish asymptotic properties and evaluate finite sample performance through simulations, and apply the method to data from the PRospective Observational Multicenter Major Trauma Transfusion study. © The Author(s) 2015.
    Statistical Methods in Medical Research 07/2015; DOI:10.1177/0962280215593974
  • ABSTRACT: Assessing high-sensitivity tests for mortal illness is crucial in emergency and critical care medicine. Estimating the 95% confidence interval (CI) of the likelihood ratio (LR) can be challenging when sample sensitivity is 100%. We aimed to develop, compare, and automate a bootstrapping method to estimate the negative LR CI when sample sensitivity is 100%. The lowest population sensitivity that is most likely to yield sample sensitivity 100% is located using the binomial distribution. Random binomial samples generated using this population sensitivity are then used in the LR bootstrap. A free R program, "bootLR," automates the process. Extensive simulations were performed to determine how often the LR bootstrap and comparator method 95% CIs cover the true population negative LR value. Finally, the 95% CI was compared for theoretical sample sizes and sensitivities approaching and including 100% using: (1) a technique of individual extremes, (2) SAS software based on the technique of Gart and Nam, (3) the Score CI (as implemented in StatXact, SAS, and the R PropCI package), and (4) the bootstrapping technique. The bootstrapping approach demonstrates appropriate coverage of the nominal 95% CI over a spectrum of populations and sample sizes. For a study of sample size 200 with 100 patients with disease and specificity 60%, the lowest population sensitivity with median sample sensitivity 100% is 99.31%. When all 100 patients with disease test positive, the negative LR 95% CIs are: individual extremes technique (0, 0.073), StatXact (0, 0.064), SAS Score method (0, 0.057), R PropCI (0, 0.062), and bootstrap (0, 0.048). Similar trends were observed for other sample sizes. When study samples demonstrate 100% sensitivity, available methods may yield inappropriately wide negative LR CIs. An alternative bootstrapping approach and an accompanying free open-source R package were developed to yield realistic estimates easily. This methodology and implementation are applicable to other binomial proportions with homogeneous responses. © The Author(s) 2015.
    Statistical Methods in Medical Research 07/2015; DOI:10.1177/0962280215592907
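    The two steps described above are easy to reproduce in outline. The sketch below is a simplified stand-in for the bootLR package, not its implementation: it recovers the 99.31% figure quoted in the abstract from the binomial-median argument and then bootstraps the negative likelihood ratio; the number of replicates and the random seed are arbitrary.
      # Simplified sketch: find the lowest population sensitivity whose median
      # sample sensitivity is 100%, then bootstrap the negative likelihood
      # ratio (1 - sensitivity) / specificity from binomial resamples.
      import numpy as np

      rng = np.random.default_rng(1)
      n_diseased, n_healthy = 100, 100          # study of size 200 from the abstract
      observed_specificity = 0.60

      # Sample sensitivity is 100% with probability p**n, so the median sample
      # sensitivity is 100% once p**n >= 0.5, i.e. p >= 0.5**(1/n).
      lowest_sensitivity = 0.5 ** (1.0 / n_diseased)
      print(round(lowest_sensitivity, 4))       # ~0.9931, the 99.31% quoted above

      B = 100_000
      boot_sens = rng.binomial(n_diseased, lowest_sensitivity, B) / n_diseased
      boot_spec = rng.binomial(n_healthy, observed_specificity, B) / n_healthy
      neg_lr = (1.0 - boot_sens) / boot_spec
      print(np.percentile(neg_lr, [2.5, 97.5])) # bootstrap 95% CI for the negative LR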
  • ABSTRACT: Outlier detection covers the wide range of methods aiming at identifying observations that are considered unusual. Novelty detection, on the other hand, seeks observations among newly generated test data that are exceptional compared with previously observed training data. In many applications, the general existence of novelty is of more interest than identifying the individual novel observations. For instance, in high-throughput cancer treatment screening experiments, it is meaningful to test whether any new treatment effects are seen compared with existing compounds. Here, we present hypothesis tests for such global level novelty. The problem is approached through a set of very general assumptions, which makes the approach innovative in relation to the current literature. We introduce test statistics capable of detecting novelty. They operate on local neighborhoods and their null distribution is obtained by the permutation principle. We show that they are valid and able to find different types of novelty, e.g. location and scale alternatives. The performance of the methods is assessed with simulations and with applications to real data sets. © The Author(s) 2015.
    Statistical Methods in Medical Research 07/2015; DOI:10.1177/0962280215591236
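    As a loose illustration of the permutation principle mentioned above (not the authors' neighbourhood-based statistics), the sketch below uses the mean nearest-neighbour distance from test to training observations as a global novelty statistic and obtains its null distribution by permuting the training/test labels; the simulated data and sample sizes are arbitrary.
      # Generic permutation test for "global" novelty on one-dimensional data.
      import numpy as np

      rng = np.random.default_rng(0)

      def nn_statistic(train, test):
          d = np.abs(test[:, None] - train[None, :])   # pairwise distances
          return d.min(axis=1).mean()                  # mean nearest-neighbour distance

      def novelty_pvalue(train, test, n_perm=2000):
          observed = nn_statistic(train, test)
          pooled = np.concatenate([train, test])
          n_test, count = len(test), 0
          for _ in range(n_perm):
              perm = rng.permutation(pooled)
              count += nn_statistic(perm[n_test:], perm[:n_test]) >= observed
          return (count + 1) / (n_perm + 1)

      train = rng.normal(0.0, 1.0, 200)
      print(novelty_pvalue(train, rng.normal(0.0, 1.0, 30)),   # same distribution: no novelty
            novelty_pvalue(train, rng.normal(0.0, 3.0, 30)))   # scale alternative: novelty present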
  • ABSTRACT: Generalized linear mixed models for longitudinal data assume that responses at different occasions are conditionally independent, given the random effects and covariates. Although this assumption is pivotal for consistent estimation, violation due to serial dependence is hard to assess by model elaboration. We therefore propose a targeted diagnostic test for serial dependence, called the transition model test (TMT), that is straightforward and computationally efficient to implement in standard software. The TMT is shown to have larger power than general misspecification tests. We also propose the targeted root mean squared error of approximation (TRSMEA) as a measure of the population misfit due to serial dependence. © The Author(s) 2015.
    Statistical Methods in Medical Research 06/2015; DOI:10.1177/0962280215588123
  • ABSTRACT: Hierarchical models such as the bivariate and hierarchical summary receiver operating characteristic (HSROC) models are recommended for meta-analysis of test accuracy studies. These models are challenging to fit when there are few studies and/or sparse data (for example, zero cells in contingency tables due to studies reporting 100% sensitivity or specificity); the models may not converge, or may give unreliable parameter estimates. Using simulation, we investigated the performance of seven hierarchical models incorporating increasing simplifications in scenarios designed to replicate realistic situations for meta-analysis of test accuracy studies. Performance of the models was assessed in terms of estimability (percentage of meta-analyses that successfully converged and percentage where the between-study correlation was estimable), bias, mean square error and coverage of the 95% confidence intervals. Our results indicate that simpler hierarchical models are valid in situations with few studies or sparse data. For synthesis of sensitivity and specificity, univariate random effects logistic regression models are appropriate when a bivariate model cannot be fitted. Alternatively, an HSROC model that assumes a symmetric SROC curve (by excluding the shape parameter) can be used if the HSROC model is the chosen meta-analytic approach. In the absence of heterogeneity, the fixed effect equivalents of the models can be applied. © The Author(s) 2015.
    Statistical Methods in Medical Research 06/2015; DOI:10.1177/0962280215592269
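    For the univariate synthesis of sensitivity recommended above, a rough stand-in is sketched below: a DerSimonian-Laird random-effects pooling of logit sensitivities with continuity corrections. This normal-approximation approach is not the exact binomial random-effects logistic regression the abstract refers to, and the four studies are hypothetical, but it illustrates the univariate fallback when a bivariate model will not converge.
      # DerSimonian-Laird pooling of logit sensitivity (approximation, hypothetical data).
      import numpy as np

      tp = np.array([45, 30, 18, 60])              # true positives per study
      fn = np.array([5, 4, 0, 9])                  # false negatives (one study has 100% sensitivity)

      y = np.log((tp + 0.5) / (fn + 0.5))          # continuity-corrected logit sensitivity
      v = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)      # approximate within-study variances

      w = 1.0 / v
      q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
      tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

      w_re = 1.0 / (v + tau2)                      # random-effects weights
      pooled = np.sum(w_re * y) / np.sum(w_re)
      ci = pooled + np.array([-1.96, 1.96]) / np.sqrt(np.sum(w_re))
      print(1 / (1 + np.exp(-pooled)), 1 / (1 + np.exp(-ci)))   # pooled sensitivity and 95% CI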
  • ABSTRACT: Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention and covariate-specific heterogeneity can produce biased covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals. © The Author(s) 2015.
    Statistical Methods in Medical Research 06/2015; DOI:10.1177/0962280215588224
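    The kind of heterogeneity discussed above can be seen in a small simulation. The sketch below generates longitudinal zero-inflated Poisson counts in which the random-intercept standard deviation follows a log-linear model in the treatment indicator; all parameter values are hypothetical and no model fitting is attempted.
      # Simulate zero-inflated Poisson counts with a covariate-dependent
      # random-intercept standard deviation: log(sigma_i) = gamma0 + gamma1 * treatment_i.
      import numpy as np

      rng = np.random.default_rng(42)
      n_subjects, n_visits = 200, 6
      treatment = rng.integers(0, 2, n_subjects)        # 0 = control, 1 = intervention

      gamma0, gamma1 = np.log(0.5), np.log(2.0)         # hypothetical variance-model coefficients
      sigma = np.exp(gamma0 + gamma1 * treatment)       # s.d. 0.5 (control) vs. 1.0 (treated)
      b = rng.normal(0.0, sigma)                        # subject-specific random intercepts

      pi_zero, beta0, beta1 = 0.3, 1.0, -0.4            # zero-inflation and count-part parameters
      mu = np.exp(beta0 + beta1 * treatment + b)        # conditional Poisson means

      counts = np.empty((n_subjects, n_visits), dtype=int)
      for j in range(n_visits):
          structural_zero = rng.random(n_subjects) < pi_zero
          counts[:, j] = np.where(structural_zero, 0, rng.poisson(mu))

      print((counts == 0).mean(), counts.var(axis=1)[treatment == 1].mean())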
  • ABSTRACT: We propose a cause-specific quantile residual life regression where the cause-specific quantile residual life, defined as the inverse of the cumulative incidence function of the residual life distribution of a specific type of event of interest conditional on a fixed time point, is log-linear in observable covariates. The proposed test statistic for the effects of prognostic factors does not involve estimation of the improper probability density function of the cause-specific residual life distribution under competing risks. The asymptotic distribution of the test statistic is derived. Simulation studies are performed to assess the finite sample properties of the proposed estimating equation and the test statistic. The proposed method is illustrated with a real dataset from a clinical trial on breast cancer. © The Author(s) 2015.
    Statistical Methods in Medical Research 06/2015; DOI:10.1177/0962280215592426
  • ABSTRACT: Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performance and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate its validity in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether. © The Author(s) 2015.
    Statistical Methods in Medical Research 06/2015; DOI:10.1177/0962280215592268
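    Within-cluster resampling itself is simple to sketch. The example below uses hypothetical data and a plain marginal proportion rather than the generalized linear mixed-effects models studied in the paper: cluster size is made informative by letting larger clusters have higher response propensities, and the naive pooled estimate is contrasted with the average over resamples that draw one observation per cluster.
      # Within-cluster resampling (WCR) for a marginal proportion with informative cluster size.
      import numpy as np

      rng = np.random.default_rng(7)

      # Clusters with a higher response propensity are also larger (informative size).
      clusters = []
      for _ in range(200):
          p = rng.beta(2, 2)                     # cluster-specific response probability
          size = 2 + rng.poisson(10 * p)         # larger clusters <-> higher propensity
          clusters.append(rng.random(size) < p)

      naive = np.concatenate(clusters).mean()    # pooled estimate, dominated by large clusters
      wcr = np.array([np.mean([rng.choice(c) for c in clusters]) for _ in range(2000)])
      cluster_level = np.mean([c.mean() for c in clusters])
      print(round(naive, 3), round(wcr.mean(), 3), round(cluster_level, 3))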
  • ABSTRACT: This paper presents a quasi-conditional likelihood method for the consistent estimation of both continuous and count data models with excess zeros and unobserved individual heterogeneity when the true data generating process is unknown. Monte Carlo simulation studies show that our zero-inflated quasi-conditional maximum likelihood (ZI-QCML) estimator outperforms other methods and is robust to distributional misspecifications. We apply the ZI-QCML estimator to analyze the frequency of doctor visits. © The Author(s) 2015.
    Statistical Methods in Medical Research 06/2015; DOI:10.1177/0962280215588940
  • ABSTRACT: Semicontinuous data featuring an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example is substance abuse/dependence symptom data, for which a substantial proportion of the subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions, including skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptom data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method. © The Author(s) 2015.
    Statistical Methods in Medical Research 06/2015; DOI:10.1177/0962280215590284
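    The two-part structure described above is easiest to see in simulation. The sketch below draws correlated subject-level random effects that link a logistic occurrence model (Part I) to a skew-normal intensity model on the log scale (Part II); the covariance matrix, fixed effects, and skewness parameter are hypothetical, and the Bayesian fitting used in the paper is not shown.
      # Simulate semicontinuous outcomes from a two-part model with correlated random effects.
      import numpy as np
      from scipy.stats import skewnorm

      rng = np.random.default_rng(3)
      n_subjects, n_visits = 150, 4
      re_cov = np.array([[1.0, 0.5], [0.5, 0.6]])            # correlated random effects
      b = rng.multivariate_normal([0.0, 0.0], re_cov, n_subjects)

      alpha0, beta0, sigma, shape = -0.3, 1.2, 0.8, 4.0      # hypothetical Part I / Part II parameters
      p_pos = 1.0 / (1.0 + np.exp(-(alpha0 + b[:, 0])))      # Part I: occurrence probability

      y = np.zeros((n_subjects, n_visits))
      for j in range(n_visits):
          positive = rng.random(n_subjects) < p_pos
          log_intensity = beta0 + b[:, 1] + skewnorm.rvs(shape, scale=sigma,
                                                         size=n_subjects, random_state=rng)
          y[:, j] = np.where(positive, np.exp(log_intensity), 0.0)  # Part II on the positive part

      print((y == 0).mean(), np.median(y[y > 0]))            # zero proportion, typical positive value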
  • ABSTRACT: Sample size justification is an important consideration when planning a clinical trial, not only for the main trial but also for any preliminary pilot trial. When the outcome is a continuous variable, the sample size calculation requires an accurate estimate of the standard deviation of the outcome measure. A pilot trial can be used to get an estimate of the standard deviation, which could then be used to anticipate what may be observed in the main trial. However, an important consideration is that pilot trials often estimate the standard deviation parameter imprecisely. This paper looks at how we can choose an external pilot trial sample size in order to minimise the sample size of the overall clinical trial programme, that is, the pilot and the main trial together. We present a method for calculating the optimal pilot trial sample size when the standardised effect size for the main trial is known. However, as it may not be possible to know the standardised effect size prior to the pilot trial, approximate rules are also presented. For a main trial designed with 90% power and two-sided 5% significance, we recommend pilot trial sample sizes per treatment arm of 75, 25, 15 and 10 for standardised effect sizes that are extra small (≤0.1), small (0.2), medium (0.5) or large (0.8), respectively. © The Author(s) 2015.
    Statistical Methods in Medical Research 06/2015; DOI:10.1177/0962280215588241
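    The quoted rule of thumb and the main-trial calculation it feeds into can be written down directly. In the sketch below, the stepped lookup is a literal reading of the figures quoted above (mapping intermediate effect sizes to the nearest quoted category is my own reading), and the per-arm formula is the standard normal-approximation sample size for a two-arm comparison with 90% power and two-sided 5% significance.
      # Pilot-trial rule of thumb alongside the standard main-trial per-arm sample size.
      import math
      from scipy.stats import norm

      def main_trial_n_per_arm(d, alpha=0.05, power=0.90):
          """Standard two-arm formula: n = 2 * (z_{1-alpha/2} + z_{power})**2 / d**2."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return math.ceil(2 * z ** 2 / d ** 2)

      def pilot_n_per_arm(d):
          """Stepped rule quoted in the abstract (90% power, two-sided 5% significance)."""
          if d <= 0.1:
              return 75    # extra small
          if d <= 0.2:
              return 25    # small
          if d <= 0.5:
              return 15    # medium
          return 10        # large

      for d in (0.1, 0.2, 0.5, 0.8):
          print(d, pilot_n_per_arm(d), main_trial_n_per_arm(d))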
  • ABSTRACT: Dropout is a common problem in longitudinal cohort studies and clinical trials, often raising concerns of nonignorable dropout. Selection, frailty, and mixture models have been proposed to account for potentially nonignorable missingness by relating the longitudinal outcome to time of dropout. In addition, many longitudinal studies encounter multiple types of missing data or reasons for dropout, such as loss to follow-up, disease progression, treatment modifications and death. When clinically distinct dropout reasons are present, it may be preferable to control for both dropout reason and time to gain additional clinical insights. This may be especially interesting when the dropout reason and dropout times differ by the primary exposure variable. We extend a semi-parametric varying-coefficient method for nonignorable dropout to accommodate dropout reason. We apply our method to untreated HIV-infected subjects recruited to the Acute Infection and Early Disease Research Program HIV cohort and compare longitudinal CD4+ T cell count in injection drug users to nonusers with two dropout reasons: anti-retroviral treatment initiation and loss to follow-up. © The Author(s) 2015.
    Statistical Methods in Medical Research 06/2015; DOI:10.1177/0962280215590432
  • ABSTRACT: Early phase trials of complex interventions currently focus on assessing the feasibility of a large randomised controlled trial and on conducting pilot work. Assessing the efficacy of the proposed intervention is generally discouraged, due to concerns about underpowered hypothesis testing. In contrast, early assessment of efficacy is common for drug therapies, where phase II trials are often used as a screening mechanism to identify promising treatments. In this paper, we outline the challenges encountered in extending ideas developed in the phase II drug trial literature to the complex intervention setting. The prevalence of multiple endpoints and clustering of outcome data are identified as important considerations, with implications for the timely and robust determination of optimal trial design parameters. The potential for Bayesian methods to help identify robust trial designs and optimal decision rules is also explored. © The Author(s) 2015.
    Statistical Methods in Medical Research 06/2015; DOI:10.1177/0962280215589507