Statistical Methods in Medical Research (STAT METHODS MED RES)

Publisher: SAGE Publications

Journal description

Statistical Methods in Medical Research is the leading vehicle for review articles in all the main areas of medical statistics and is an essential reference for all medical statisticians. It is particularly useful for medical researchers dealing with data and provides a key resource for medical and statistical libraries, as well as pharmaceutical companies. This unique journal is devoted solely to statistics and medicine and aims to keep professionals abreast of the many powerful statistical techniques now available to the medical profession. As techniques are constantly adopted by statisticians working both inside and outside the medical environment, this review journal aims to satisfy the increasing demand for accurate and up-to-the-minute information.

Current impact factor: 2.96

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2013/2014 Impact Factor 2.957
2012 Impact Factor 2.364
2011 Impact Factor 2.443
2010 Impact Factor 1.768
2009 Impact Factor 2.569
2008 Impact Factor 2.177

Additional details

5-year impact 3.14
Cited half-life 0.00
Immediacy index 0.76
Eigenfactor 0.01
Article influence 1.95
Website Statistical Methods in Medical Research website
Other titles Statistical methods in medical research (Online)
ISSN 1477-0334
OCLC 42423902
Material type Document, Periodical, Internet resource
Document type Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

SAGE Publications

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Authors retain copyright
    • Pre-print on any website
    • Author's post-print on author's personal website, departmental website, institutional website or institutional repository
    • On other repositories including PubMed Central after 12 months embargo
    • Publisher copyright and source must be acknowledged
    • Publisher's version/PDF cannot be used
    • Post-print version with changes arising from referees' comments can be used
    • "as published" final version with layout and copy-editing changes cannot be archived but can be used on secure institutional intranet
  • Classification
    • green

Publications in this journal

  • ABSTRACT: In multiple sclerosis, the primary clinical measure of disability level is an ordinal score, the expanded disability severity scale score. In relapsing-remitting multiple sclerosis, measures of relapse are additionally of interest. Multiple sclerosis patients are typically assessed with regard to both the expanded disability severity scale and relapse state at each follow-up visit. As both are discrete measures, the two can be viewed as jointly dependent Markov processes. One of the main goals of multiple sclerosis research is to accurately model, over time, both transitions between expanded disability severity scale states and changes in relapse state. This objective requires a number of significant modeling decisions, including whether the combination of specific disease states is warranted, and assessment of the dependence structure between the two disease processes. Historically, such decisions have often been made in an ad hoc manner without formal justification. We propose a novel use of Bayes factors and Bayesian variable selection in the assessment of jointly dependent Markov processes in multiple sclerosis. Methods are assessed using both simulated data and data collected from the Partners Multiple Sclerosis Center in Boston, MA. © The Author(s) 2015.
    Statistical Methods in Medical Research 02/2015;
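The Bayes-factor idea behind the abstract above can be illustrated on a single transition. This minimal sketch, with invented counts, compares a model with one common transition probability against a model with separate probabilities by relapse state, using beta-binomial marginal likelihoods; it is not the paper's full Markov machinery.

```python
from math import lgamma, exp

def log_marglik_beta_binom(successes, n, a=1.0, b=1.0):
    """Log marginal likelihood of an ordered Bernoulli sequence under a
    Beta(a, b) prior on the success probability."""
    return (lgamma(a + b) - lgamma(a) - lgamma(b)
            + lgamma(a + successes) + lgamma(b + n - successes)
            - lgamma(a + b + n))

# Hypothetical counts of "worsened" transitions out of one disability state,
# split by whether the patient was in relapse at the time.
worse_relapse, n_relapse = 30, 40
worse_stable, n_stable = 10, 40

# M1: relapse state irrelevant (one pooled transition probability).
log_m1 = log_marglik_beta_binom(worse_relapse + worse_stable,
                                n_relapse + n_stable)
# M2: separate transition probabilities per relapse state.
log_m2 = (log_marglik_beta_binom(worse_relapse, n_relapse)
          + log_marglik_beta_binom(worse_stable, n_stable))

bayes_factor = exp(log_m2 - log_m1)  # BF > 1 favors dependence on relapse
print(bayes_factor > 1)
```

With rates this far apart (0.75 vs. 0.25), the Bayes factor strongly favors the dependent model, formalizing the decision the abstract says was historically made ad hoc.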
  • ABSTRACT: A crossover study, also referred to as a crossover trial, is a form of longitudinal study. Subjects are randomly assigned to different arms of the study and receive different treatments sequentially. While there are many frequentist methods to analyze data from a crossover study, random effects models for longitudinal data are perhaps most naturally modeled within a Bayesian framework. In this article, we introduce a Bayesian adaptive approach to crossover studies for both efficacy and safety endpoints using Gibbs sampling. Using simulation, we find our approach can detect a true difference between two treatments with a specific false-positive rate that we can readily control via the standard equal-tail posterior credible interval. We illustrate our Bayesian approaches using real data from Johnson & Johnson Vision Care, Inc. contact lens studies. We then design a variety of Bayesian adaptive predictive probability crossover studies for single and multiple continuous efficacy endpoints, indicate their extension to binary safety endpoints, and investigate their frequentist operating characteristics via simulation. The Bayesian adaptive approach emerges as a crossover trials tool that is useful yet surprisingly overlooked to date, particularly in contact lens development. © The Author(s) 2015.
    Statistical Methods in Medical Research 02/2015;
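The equal-tail credible-interval decision rule mentioned above can be sketched as follows. As an assumption for illustration, hypothetical draws from a normal posterior stand in for actual Gibbs-sampler output, and the effect size and posterior spread are invented.

```python
import random

def equal_tail_credible_interval(draws, level=0.95):
    """Equal-tail posterior credible interval from posterior draws."""
    s = sorted(draws)
    lo = s[int((1 - level) / 2 * len(s))]
    hi = s[int((1 + level) / 2 * len(s)) - 1]
    return lo, hi

random.seed(1)
# Hypothetical posterior draws for the treatment difference delta; a real
# analysis would take these from the Gibbs sampler.
obs_diff, post_sd = 0.8, 0.3
draws = [random.gauss(obs_diff, post_sd) for _ in range(10_000)]

lo, hi = equal_tail_credible_interval(draws)
# Declare a difference only if the interval excludes 0; this is what lets
# the false-positive rate be controlled at roughly the nominal level.
print(lo > 0)
```

The design choice is that the same interval serves both estimation and the trial's go/no-go decision, which is what makes the rate readily controllable.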
  • Statistical Methods in Medical Research 02/2015;
  • ABSTRACT: Relative to a parallel-groups design, the crossover design can reduce the number of patients required or improve power when studying treatments for incurable chronic diseases. We propose using the generalized odds ratio for paired-sample data to measure the relative effects in ordinal data between treatments and between periods. We show that one can apply the commonly used asymptotic and exact test procedures for stratified analysis in epidemiology to test non-equality of treatments in ordinal data, as well as obtain asymptotic and exact interval estimators for the generalized odds ratio under a three-period crossover design. We further show that one can apply procedures for testing the homogeneity of the odds ratio under stratified sampling to examine whether there are treatment-by-period interactions. We use data taken from a three-period crossover trial studying the effects of low and high doses of an analgesic versus a placebo for the relief of pain in primary dysmenorrhea to illustrate the use of the test procedures and estimators proposed here. © The Author(s) 2015.
    Statistical Methods in Medical Research 02/2015;
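One common formulation of the generalized odds ratio for paired ordinal data is the ratio of the probability that the first treatment gives the higher score to the probability that it gives the lower score. This sketch uses that formulation with invented pain-relief scores; the paper's exact estimator may differ in detail.

```python
def generalized_odds_ratio(pairs):
    """Generalized odds ratio for paired ordinal outcomes: count of pairs
    where treatment scores higher, over count where it scores lower
    (ties dropped; undefined if no 'lower' pairs exist)."""
    higher = sum(1 for a, b in pairs if a > b)
    lower = sum(1 for a, b in pairs if a < b)
    return higher / lower

# Hypothetical relief scores (0 = none .. 3 = complete) for the same
# patients under active treatment vs. placebo.
pairs = [(3, 1), (2, 2), (2, 1), (1, 0), (0, 1), (3, 2), (2, 0), (1, 1)]
print(generalized_odds_ratio(pairs))  # 5 "higher" vs 1 "lower" -> 5.0
```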
  • Statistical Methods in Medical Research 02/2015;
  • ABSTRACT: In AIDS clinical studies, two biomarkers, HIV viral load and CD4 cell counts, play important roles. It is well known that there is an inverse relationship between the two. Nevertheless, the relationship is not constant but time-varying. The mixed-effects varying-coefficient model is capable of capturing the time-varying nature of this relationship from both the population and individual perspectives. In practice, the nucleic acid sequence-based amplification assay used to measure plasma HIV-1 RNA has a limit of detection (LOD), the CD4 cell counts are usually measured with considerable noise, and missing data often occur during treatment. Furthermore, most statistical models assume a symmetric distribution, such as the normal, for the response variables. Often, the normality assumption does not hold in practice. Therefore, it is important to account for all these factors when modeling real data. In this article, we establish a joint model that accounts for asymmetry and LOD in the response variable, and for covariate measurement error and missingness, simultaneously within the mixed-effects varying-coefficient modeling framework. A Bayesian inference procedure is developed to estimate the parameters in the joint model. The proposed model and method are applied to a real AIDS clinical study, and various comparisons of several models are performed. © The Author(s) 2015.
    Statistical Methods in Medical Research 02/2015;
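The limit-of-detection issue above means viral-load values below the assay's LOD are left-censored: only the fact that they fall below the LOD is observed. A naive common fix imputes LOD/2; the abstract's point is that the censoring should instead be modeled directly. A sketch with invented values:

```python
# Hypothetical viral-load readings (copies/mL); values under the LOD are
# left-censored -- the true value is unknown, only that it is below LOD.
LOD = 50.0
raw_viral_load = [400.0, 120.0, 30.0, 75.0, 10.0]

censored = [v < LOD for v in raw_viral_load]
# Naive LOD/2 imputation (a common ad hoc practice, not the paper's method):
naive_imputed = [v if v >= LOD else LOD / 2 for v in raw_viral_load]
print(naive_imputed)  # [400.0, 120.0, 25.0, 75.0, 25.0]
```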
  • Statistical Methods in Medical Research 02/2015;
  • ABSTRACT: Estimation of the net costs attributed to a disease or other health condition is very important for health economists and policy makers. Skewness and heteroscedasticity are well-known characteristics of cost data, making linear models generally inappropriate and dictating the use of other types of models, such as gamma regression. Additional hurdles emerge when individual-level data are not available. In this paper, we consider the latter case, where data are only available at the aggregate level, containing means and standard deviations for different strata defined by a number of demographic and clinical factors. We summarize a number of methods that can be used for this estimation, and we propose a Bayesian approach that treats the stratum-specific sample standard deviations as stochastic. We investigate the performance of two linear mixed models, comparing them with two proposed gamma regression mixed models, in analyzing simulated data generated by gamma and log-normal distributions. Our proposed Bayesian approach appears to have significant advantages for net cost estimation when only aggregate data are available. The implemented gamma models do not seem to offer the expected benefits over the linear models; however, further investigation and refinement are needed. © The Author(s) 2015.
    Statistical Methods in Medical Research 01/2015;
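One reason gamma models stay usable with aggregate data is that a gamma distribution's shape and scale can be recovered from just a stratum mean and standard deviation by matching moments (mean = shape·scale, variance = shape·scale²). A sketch with invented cost figures; the paper's Bayesian approach additionally treats the reported SDs as stochastic rather than fixed.

```python
# (mean cost, SD) per stratum, as reported in an aggregate-level table.
strata = [(1200.0, 900.0), (2500.0, 2000.0)]

params = []
for mean, sd in strata:
    shape = (mean / sd) ** 2   # equals 1/CV^2; shape < 1 => heavy right skew
    scale = sd ** 2 / mean
    params.append((shape, scale))

print(params)
```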
  • ABSTRACT: Response-adaptive randomization (RAR) benefits clinical investigators by modifying the treatment allocation probabilities to optimize the ethical, operational, or statistical performance of the trial. Delayed primary outcomes and their effect on RAR have been studied in the literature; however, the incorporation of surrogate outcomes has not been fully addressed. We explore the benefits and limitations of surrogate outcome utilization in RAR in the context of acute stroke clinical trials. We propose a novel surrogate-primary (S-P) replacement algorithm in which a patient's surrogate outcome is used in the RAR algorithm only until the primary outcome becomes available to replace it. Computer simulations investigate the effect of both the delay in obtaining the primary outcome and the underlying discrepancies between the surrogate and primary outcome distributions on complete randomization, standard RAR, and the S-P replacement algorithm. Results show that when the primary outcome is delayed, the S-P replacement algorithm reduces the variability of the treatment allocation probabilities and achieves stabilization sooner. Additionally, the benefit of the S-P replacement algorithm proved robust, in that it preserved power and reduced the expected number of failures across a variety of scenarios. © The Author(s) 2015.
    Statistical Methods in Medical Research 01/2015;
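The S-P replacement idea can be sketched with a toy RAR rule: each patient's surrogate outcome feeds the allocation update until the primary outcome arrives to replace it. The allocation rule, success rates, surrogate noise level, and delay pattern below are all invented stand-ins, not the paper's algorithm.

```python
import random

def allocation_prob_a(outcomes):
    """Toy RAR rule: allocate to arm A with probability proportional to
    smoothed estimated success rates."""
    def rate(arm):
        hits = sum(o for a, o in outcomes if a == arm)
        n = sum(1 for a, _ in outcomes if a == arm)
        return (hits + 1) / (n + 2)          # add-one smoothing
    pa, pb = rate("A"), rate("B")
    return pa / (pa + pb)

random.seed(0)
records = []   # (arm, surrogate, primary, primary_known)
for i in range(200):
    # S-P replacement: use the primary outcome once known, else the surrogate.
    obs = [(arm, prim if known else surr) for arm, surr, prim, known in records]
    arm = "A" if random.random() < allocation_prob_a(obs) else "B"
    true_rate = 0.7 if arm == "A" else 0.4   # hypothetical success rates
    primary = int(random.random() < true_rate)
    surrogate = primary if random.random() < 0.8 else 1 - primary  # noisy proxy
    records.append((arm, surrogate, primary, len(records) < 150))  # last 50 delayed

n_a = sum(1 for r in records if r[0] == "A")
print(n_a)
```

Because the surrogate is informative, the allocation probability starts drifting toward the better arm before any delayed primary outcomes arrive, which is the stabilization benefit the abstract reports.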
  • ABSTRACT: This paper is concerned with the estimation of the logarithm of disease odds (log odds) when evaluating two risk factors, whether or not interactions are present. Statisticians define interaction as a departure from an additive model on a certain scale of measurement of the outcome. Certain interactions, known as removable interactions, may be eliminated by fitting an additive model under an invertible transformation of the outcome. This can potentially provide more precise estimates of log odds than fitting a model with interaction terms. In practice, we may also encounter nonremovable interactions. The model must then include interaction terms, regardless of the choice of the scale of the outcome. However, in practical settings, we do not know at the outset whether an interaction exists, and if so, whether it is removable or nonremovable. Rather than trying to decide on significance levels to test for the existence of removable and nonremovable interactions, we develop a Bayes estimator based on a squared error loss function. We demonstrate the favorable bias-variance trade-offs of our approach using simulations, and provide empirical illustrations using data from three published endometrial cancer case-control studies. The methods are implemented in an R program and are freely available. © The Author(s) 2015.
    Statistical Methods in Medical Research 01/2015;
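A removable interaction can be shown in a toy example: if two risk factors act multiplicatively on the odds, the interaction contrast vanishes on the log-odds scale, so an additive model fits after the log transformation. The baseline odds and odds ratios below are invented.

```python
import math

# Two binary factors acting multiplicatively on the odds.
base, or1, or2 = 0.1, 2.0, 3.0
odds = {(a, b): base * or1**a * or2**b for a in (0, 1) for b in (0, 1)}
log_odds = {k: math.log(v) for k, v in odds.items()}

# Interaction contrast on the log-odds scale: zero means the interaction
# was removable by the log transformation.
interaction = ((log_odds[1, 1] - log_odds[1, 0])
               - (log_odds[0, 1] - log_odds[0, 0]))
print(abs(interaction) < 1e-9)  # True: additive on the log scale
```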
  • Statistical Methods in Medical Research 01/2015;
  • Statistical Methods in Medical Research 01/2015;
  • ABSTRACT: In this article, we present the novel approach of using a multi-state model to describe longitudinal changes in cognitive test scores. Scores are modelled according to a truncated Poisson distribution, conditional on survival to a fixed endpoint, with the Poisson mean dependent upon the baseline score and covariates. The model provides a unified treatment of the distribution of cognitive scores, taking into account baseline scores and survival. It offers a simple framework for the simultaneous estimation of the effect of covariates modulating these distributions, over different baseline scores. A distinguishing feature is that this approach permits estimation of the probabilities of transitions in different directions: improvements, declines and death. The basic model is characterised by four parameters, two of which represent cognitive transitions in survivors, both for individuals with no cognitive errors at baseline and for those with non-zero errors, within the range of test scores. The two other parameters represent corresponding likelihoods of death. The model is applied to an analysis of data from the Canadian Study of Health and Aging (1991-2001) to identify the risk of death, and of changes in cognitive function as assessed by errors in the Modified Mini-Mental State Examination. The model performance is compared with more conventional approaches, such as multivariate linear and polytomous regressions. This model can also be readily applied to a wide variety of other cognitive test scores and phenomena which change with age.
    Statistical Methods in Medical Research 09/2014;
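The truncated Poisson building block above can be written down directly: the Poisson pmf is renormalized over the bounded range of attainable scores (e.g. error counts limited by the length of the test). A sketch with an invented mean and range:

```python
import math

def truncated_poisson_pmf(k, mean, upper):
    """Poisson pmf truncated to the score range 0..upper: the ordinary pmf
    renormalized so the probabilities over 0..upper sum to one."""
    pmf = lambda j: math.exp(-mean) * mean**j / math.factorial(j)
    norm = sum(pmf(j) for j in range(upper + 1))
    return pmf(k) / norm

# Hypothetical distribution of error counts on a test with at most 10 errors.
probs = [truncated_poisson_pmf(k, 2.0, 10) for k in range(11)]
print(abs(sum(probs) - 1.0) < 1e-12)  # True: renormalized over 0..10
```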
  • ABSTRACT: Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples, which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulty uncovering such variability when dealing with multi-modal, multi-type, multivariate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions and defined in the context of data quality assessment: a global probabilistic deviation and a source probabilistic outlyingness. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of the normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the sources' PDFs. The metrics have been evaluated, and have demonstrated correct behaviour, on a simulated benchmark and on real multi-source biomedical data from the UCI Heart Disease data set. Biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
    Statistical Methods in Medical Research 08/2014;
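The Jensen-Shannon distance underlying these metrics is easy to compute for discrete PDFs. This sketch uses invented source distributions, and the mean pairwise distance here is only a crude analogue of the paper's outlyingness degree, which instead projects a simplex built from these distances.

```python
import math

def js_distance(p, q):
    """Jensen-Shannon distance (square root of the JS divergence, log base 2,
    so it lies in [0, 1]) between two discrete distributions."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return math.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

# Three hypothetical source distributions over the same outcome categories.
sources = [
    [0.2, 0.5, 0.3],
    [0.25, 0.45, 0.3],
    [0.6, 0.2, 0.2],   # an outlying source
]

# Mean JS distance of each source to the others: a simplified stand-in for
# the source outlyingness metric.
outlyingness = [
    sum(js_distance(p, q) for j, q in enumerate(sources) if j != i)
    / (len(sources) - 1)
    for i, p in enumerate(sources)
]
print(max(outlyingness) == outlyingness[2])  # True: source 3 deviates most
```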
  • ABSTRACT: Inverse probability weighted estimating equations and multiple imputation are two of the most studied frameworks for dealing with incomplete data in clinical and epidemiological research. We examine the limiting behaviour of estimators arising from inverse probability weighted estimating equations, augmented inverse probability weighted estimating equations and multiple imputation when the requisite auxiliary models are misspecified. We compute limiting values for settings involving binary responses and covariates and illustrate the effects of model misspecification using simulations based on data from a breast cancer clinical trial. We demonstrate that, even when both auxiliary models are misspecified, the asymptotic biases of double-robust augmented inverse probability weighted estimators are often smaller than the asymptotic biases of estimators arising from complete-case analyses, inverse probability weighting or multiple imputation. We further demonstrate that use of inverse probability weighting or multiple imputation with slightly misspecified auxiliary models can actually result in greater asymptotic bias than the use of naïve, complete case analyses. These asymptotic results are shown to be consistent with empirical results from simulation studies.
    Statistical Methods in Medical Research 07/2014;
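The basic contrast between a complete-case analysis and inverse probability weighting can be sketched in a toy simulation. Here the observation probabilities are known and correctly specified, so IPW removes the complete-case bias; the paper's question is what happens when that weight model is (slightly) wrong.

```python
import random

random.seed(42)

# Binary covariate X and response Y, with Y missing at random: the response
# is observed with a probability that depends on X only (invented values).
n = 50_000
data = []
for _ in range(n):
    x = random.random() < 0.5
    y = random.random() < (0.8 if x else 0.3)
    observed = random.random() < (0.9 if x else 0.5)
    data.append((x, y, observed))

# Complete-case mean of Y: biased, because observation depends on X.
cc = [y for x, y, obs in data if obs]
cc_mean = sum(cc) / len(cc)

# IPW mean: weight each complete case by the inverse of its (here, known)
# observation probability; in practice the weights come from a fitted model.
num = sum(y / (0.9 if x else 0.5) for x, y, obs in data if obs)
ipw_mean = num / n

true_mean = 0.55  # 0.5 * 0.8 + 0.5 * 0.3
print(abs(ipw_mean - true_mean) < abs(cc_mean - true_mean))  # True
```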
  • ABSTRACT: It is common practice to analyze the longitudinal data that frequently arise in medical studies using the various mixed-effects models in the literature. However, the following issues may stand out in longitudinal data analysis: (i) in clinical practice, the profile of each subject's response from a longitudinal study may follow a "broken stick"-like trajectory, indicating multiple phases of increase, decline and/or stability in response; such multiple phases (with changepoints) may be an important indicator to help quantify treatment effects and improve the management of patient care, but estimating changepoints within the various mixed-effects models is a challenge due to the complicated structures of the model formulations; (ii) an assumption of a homogeneous population may be unrealistic, obscuring important features of between-subject and within-subject variations; (iii) the normality assumption for model errors may not always give robust and reliable results, in particular if the data exhibit non-normality; and (iv) the response may be missing, and the missingness may be non-ignorable. In the literature, there has been considerable interest in accommodating heterogeneity, non-normality or missingness in such models, but relatively little work concerning all of these features simultaneously. There is a need to fill this gap, as longitudinal data do often have these characteristics. In this article, our objective is to study the simultaneous impact of these data features by developing a Bayesian mixture modeling approach based on Finite Mixture of Changepoint (piecewise) Mixed-Effects (FMCME) models with skew distributions, allowing estimation of both model parameters and class membership probabilities at the population and individual levels. Simulation studies are conducted to assess the performance of the proposed method, and an AIDS clinical data example is analyzed to demonstrate the proposed methodologies and to compare the modeling results of potential mixture models under different scenarios.
    Statistical Methods in Medical Research 07/2014;
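The "broken stick" trajectory in point (i) is a piecewise-linear mean function that changes slope at a changepoint. A minimal sketch with invented parameters (in the full model, the slopes and changepoint would be subject-level random effects):

```python
def broken_stick(t, intercept, slope1, slope2, changepoint):
    """Piecewise-linear ("broken stick") trajectory: slope1 before the
    changepoint, slope2 after, continuous at the join."""
    if t <= changepoint:
        return intercept + slope1 * t
    return intercept + slope1 * changepoint + slope2 * (t - changepoint)

# Hypothetical log viral-load trajectory: steep decline on treatment,
# then a slow rebound after the changepoint at t = 2.
traj = [broken_stick(t, 5.0, -1.0, 0.25, 2.0) for t in range(6)]
print(traj)  # [5.0, 4.0, 3.0, 3.25, 3.5, 3.75]
```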