Statistical Methods in Medical Research (STAT METHODS MED RES)
Statistical Methods in Medical Research is the leading vehicle for review articles in all the main areas of medical statistics and is an essential reference for all medical statisticians. It is particularly useful for medical researchers dealing with data and provides a key resource for medical and statistical libraries, as well as pharmaceutical companies. This unique journal is devoted solely to statistics and medicine and aims to keep professionals abreast of the many powerful statistical techniques now available to the medical profession. As techniques are constantly adopted by statisticians working both inside and outside the medical environment, this review journal aims to satisfy the increasing demand for accurate and up-to-the-minute information.
Current impact factor: 4.47
Impact Factor Rankings
| Year | Impact Factor |
|------|---------------|
| 2016 | Available summer 2017 |
| 2014 / 2015 | 4.472 |
| 2013 | 2.957 |
| 2012 | 2.364 |
| 2011 | 2.443 |
| 2010 | 1.768 |
| 2009 | 2.569 |
| 2008 | 2.177 |
| 2007 | 1.492 |
| 2006 | 1.377 |
| 2005 | 1.327 |
| 2004 | 2.583 |
| 2003 | 1.857 |
| 2002 | 1.553 |
| 2001 | 1.886 |
Impact factor over time
- Website: Statistical Methods in Medical Research website
- Other titles: Statistical methods in medical research (Online)
- Material type: Document, Periodical, Internet resource
- Document type: Internet Resource, Computer File, Journal / Magazine / Newspaper
- Author can archive a pre-print version
- Author can archive a post-print version
- Authors retain copyright
- Pre-print on any website
- Author's post-print on author's personal website, departmental website, institutional website or institutional repository
- On other repositories including PubMed Central after 12 months embargo
- Publisher copyright and source must be acknowledged
- Publisher's version/PDF cannot be used
- Post-print version with changes from referees' comments can be used
- "as published" final version with layout and copy-editing changes cannot be archived but can be used on secure institutional intranet
- Must link to publisher version with DOI
- Publisher last reviewed on 29/07/2015
Publications in this journal
- ABSTRACT: The process of screening agents one-at-a-time under the current clinical trials system suffers from several deficiencies that could be addressed in order to extend financial and patient resources. In this article, we introduce a statistical framework for designing and conducting randomized multi-arm screening platforms with binary endpoints using Bayesian modeling. In essence, the proposed platform design consolidates inter-study control arms, enables investigators to assign more new patients to novel therapies, and accommodates mid-trial modifications to the study arms that allow both dropping poorly performing agents as well as incorporating new candidate agents. When compared to sequentially conducted randomized two-arm trials, screening platform designs have the potential to yield considerable reductions in cost, alleviate the bottleneck between phase I and II, eliminate bias stemming from inter-trial heterogeneity, and control for multiplicity over a sequence of a priori planned studies. When screening five experimental agents, our results suggest that platform designs have the potential to reduce the mean total sample size by as much as 40% and boost the mean overall response rate by as much as 15%. We explain how to design and conduct platform designs to achieve the aforementioned aims and preserve desirable frequentist properties for the treatment comparisons. In addition, we demonstrate how to conduct a platform design using look-up tables that can be generated in advance of the study. The gains in efficiency facilitated by platform design could prove to be consequential in oncologic settings, wherein trials often lack a proper control, and drug development suffers from low enrollment, long inter-trial latency periods, and an unacceptably high rate of failure in phase III.
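The core Bayesian comparison in a screening platform of this kind can be sketched with Beta-Binomial posteriors. This is an illustrative sketch, not the authors' exact design: the arm names, response counts, Jeffreys prior, and the 10% futility threshold below are all hypothetical.

```python
# Hedged sketch: multi-arm screening against a shared control arm with
# binary endpoints. All counts and the drop threshold are invented.
import numpy as np
from scipy.stats import beta

control = (8, 40)  # (responses, patients) on the consolidated control arm
arms = {"A": (15, 40), "B": (9, 40), "C": (22, 40)}

def prob_beats_control(r, n, r0, n0, prior=(0.5, 0.5), draws=100_000, seed=0):
    """Monte Carlo estimate of P(p_arm > p_control) under independent
    Beta posteriors with a Jeffreys Beta(0.5, 0.5) prior."""
    rng = np.random.default_rng(seed)
    p_arm = rng.beta(prior[0] + r, prior[1] + n - r, draws)
    p_ctl = rng.beta(prior[0] + r0, prior[1] + n0 - r0, draws)
    return (p_arm > p_ctl).mean()

for name, (r, n) in arms.items():
    p = prob_beats_control(r, n, *control)
    status = "drop" if p < 0.10 else "continue"  # hypothetical futility rule
    print(f"arm {name}: P(better than control) = {p:.3f} -> {status}")
```

Because the posterior comparison depends only on the four counts, results like these can be tabulated in advance, which is the idea behind the look-up tables mentioned in the abstract.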
- ABSTRACT: Atrial fibrillation is an arrhythmic disorder where the electrical signals of the heart become irregular. The probability of atrial fibrillation (binary response) is often time varying in a structured fashion, as is the influence of associated risk factors. A generalized nonlinear mixed effects model is presented to estimate the time-related probability of atrial fibrillation using a temporal decomposition approach to reveal the pattern of the probability of atrial fibrillation and its determinants. This methodology generalizes to patient-specific analysis of longitudinal binary data with possibly time-varying effects of covariates and with different patient-specific random effects influencing different temporal phases. The motivation and application of this model is illustrated using longitudinally measured atrial fibrillation data obtained through weekly trans-telephonic monitoring from an NIH-sponsored clinical trial being conducted by the Cardiothoracic Surgery Clinical Trials Network.
- ABSTRACT: In disease mapping where predictor effects are to be modeled, it is often the case that sets of predictors are fixed, and the aim is to choose between fixed model sets. Model selection methods, both Bayesian model selection (BMS) and Bayesian model averaging (BMA), are approaches within the Bayesian paradigm for achieving this aim. In the spatial context, model selection could have a spatial component in the sense that some models may be more appropriate for certain areas of a study region than others. In this work we examine the use of spatially referenced BMA and BMS via a large-scale simulation study accompanied by a small-scale case study. Our results suggest that BMS performs well when a strong regression signature is found.
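The non-spatial core of BMS versus BMA can be sketched with BIC-based approximate posterior model probabilities: BMS keeps the single best-weighted model, BMA averages over all of them. This is a simplified sketch under made-up data; the spatial random effects central to disease mapping are omitted.

```python
# Hedged sketch: approximate posterior model probabilities via BIC.
# Data and candidate predictor sets are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=n)  # x2 is irrelevant

def bic(X_sub, y):
    """BIC of an OLS fit with intercept; lower is better."""
    Z = np.column_stack([np.ones(len(y)), X_sub])
    beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta_hat
    sigma2 = resid @ resid / len(y)
    return len(y) * np.log(sigma2) + Z.shape[1] * np.log(len(y))

models = {"x0": [0], "x0+x1": [0, 1], "x0+x1+x2": [0, 1, 2]}
bics = np.array([bic(X[:, cols], y) for cols in models.values()])
# BIC approximates -2 log marginal likelihood, so posterior model
# probabilities are roughly proportional to exp(-BIC / 2).
w = np.exp(-(bics - bics.min()) / 2)
w /= w.sum()
for name, wi in zip(models, w):
    print(f"{name}: approximate posterior probability {wi:.3f}")
```

With a strong regression signature, as in this simulated data, most of the weight concentrates on models containing the true predictors, which is consistent with the abstract's finding that BMS does well in that regime.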
- [Show abstract] [Hide abstract] ABSTRACT: Multivariate and network meta-analysis have the potential for the estimated mean of one effect to borrow strength from the data on other effects of interest. The extent of this borrowing of strength is usually assessed informally. We present new mathematical definitions of 'borrowing of strength'. Our main proposal is based on a decomposition of the score statistic, which we show can be interpreted as comparing the precision of estimates from the multivariate and univariate models. Our definition of borrowing of strength therefore emulates the usual informal assessment. We also derive a method for calculating study weights, which we embed into the same framework as our borrowing of strength statistics, so that percentage study weights can accompany the results from multivariate and network meta-analyses as they do in conventional univariate meta-analyses. Our proposals are illustrated using three meta-analyses involving correlated effects for multiple outcomes, multiple risk factor associations and multiple treatments (network meta-analysis).
- ABSTRACT: Multi-type recurrent event data occur frequently in longitudinal studies. Dependent termination may occur when the terminal time is correlated to recurrent event times. In this article, we simultaneously model the multi-type recurrent events and a dependent terminal event, both with nonparametric covariate functions modeled by B-splines. We develop a Bayesian multivariate frailty model to account for the correlation among the dependent termination and various types of recurrent events. Extensive simulation results suggest that misspecifying nonparametric covariate functions may introduce bias in parameter estimation. This method development has been motivated by and applied to the lipid-lowering trial component of the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial.
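The B-spline ingredient above amounts to representing an unknown covariate function f(x) as a design matrix of basis functions times a coefficient vector. A minimal sketch, with arbitrary knot placement and degree chosen purely for illustration:

```python
# Hedged sketch: a cubic B-spline basis for a nonparametric covariate
# function f(x) = B(x) @ coef. Knots and degree are illustrative choices.
import numpy as np
from scipy.interpolate import BSpline

k = 3                                   # cubic
interior = np.array([0.25, 0.5, 0.75])  # interior knots on [0, 1]
t = np.r_[np.zeros(k + 1), interior, np.ones(k + 1)]  # clamped knot vector
n_basis = len(t) - k - 1                # 7 basis functions here

# evaluate each basis function by giving it a unit coefficient vector
x = np.linspace(0.0, 1.0, 50)
B = np.column_stack([BSpline(t, np.eye(n_basis)[j], k)(x)
                     for j in range(n_basis)])
print(B.shape)  # one column per basis function
```

In a Bayesian frailty model like the one described, the coefficient vector multiplying this basis would get its own prior and be sampled along with the other parameters; misspecifying f(x), e.g. forcing it to be linear, is what the simulations show can bias the remaining estimates.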
- ABSTRACT: Joint mixed modeling is an attractive approach for the analysis of a scalar response measured at a primary endpoint and longitudinal measurements on a covariate. In the standard Bayesian analysis of these models, the measurement error variance and the variance/covariance of random effects are a priori modeled independently. The key point is that these variances cannot be assumed independent given the total variation in a response. This article presents a joint Bayesian analysis in which these variance terms are a priori modeled jointly. Simulations illustrate that analysis with a multivariate variance prior in general leads to reduced bias (smaller relative bias) and improved efficiency (smaller interquartile range) in the posterior inference compared with the analysis with independent variance priors.
- ABSTRACT: Asymptotic tests are commonly used for comparing two binomial proportions when the sample size is sufficiently large. However, there is no consensus on the most powerful test. In this paper, we clarify this issue by comparing the power functions of three popular asymptotic tests: Pearson's χ² test, the likelihood-ratio test and the odds-ratio based test. Considering Taylor decompositions under local alternatives, the comparisons lead to recommendations on which test to use in view of both the experimental design and the nature of the investigated signal. We show that when the design is balanced between the two binomials, the three tests are equivalent in terms of power. However, when the design is unbalanced, differences in power can be substantial and the choice of the most powerful test also depends on the value of the parameters of the two compared binomials. We further investigate situations where the two binomials are not compared directly but through tag binomials. In these cases of indirect association, we show that the differences in power between the three tests are enhanced with decreasing values of the parameters of the tag binomials. Our results are illustrated in the context of genetic epidemiology, where the analysis of genome-wide association studies provides insights regarding the low power for detecting rare variants.
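The three statistics being compared can be written down directly for a single 2×2 table. The counts below are hypothetical and the design deliberately unbalanced (100 vs. 400); under the null all three statistics are asymptotically χ² with 1 degree of freedom.

```python
# Hedged sketch of the three asymptotic statistics; counts are invented.
import numpy as np

# rows: group 1 / group 2; columns: success / failure
table = np.array([[30.0, 70.0], [160.0, 240.0]])

def pearson_chi2(t):
    """Pearson's chi-squared statistic: sum (obs - exp)^2 / exp."""
    expected = np.outer(t.sum(1), t.sum(0)) / t.sum()
    return ((t - expected) ** 2 / expected).sum()

def likelihood_ratio(t):
    """Likelihood-ratio (G) statistic: 2 * sum obs * log(obs / exp)."""
    expected = np.outer(t.sum(1), t.sum(0)) / t.sum()
    return 2.0 * (t * np.log(t / expected)).sum()

def odds_ratio_wald(t):
    """Squared Wald statistic for the log odds ratio
    (asymptotic variance = sum of reciprocal cell counts)."""
    log_or = np.log(t[0, 0] * t[1, 1] / (t[0, 1] * t[1, 0]))
    return log_or ** 2 / (1.0 / t).sum()

for name, stat in [("Pearson chi2", pearson_chi2(table)),
                   ("likelihood ratio", likelihood_ratio(table)),
                   ("odds-ratio Wald", odds_ratio_wald(table))]:
    print(f"{name}: {stat:.4f}")
```

For this table the three values are close but not identical; the paper's point is that under unbalanced designs such differences translate into real power differences.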
- ABSTRACT: Time-dependent covariates can be modeled within the Cox regression framework and can allow both proportional and nonproportional hazards for the risk factor of research interest. However, in many areas of health services research, interest centers on being able to estimate residual longevity after the occurrence of a particular event such as stroke. The survival trajectory of patients experiencing a stroke can be potentially influenced by stroke type (hemorrhagic or ischemic), time of the stroke (relative to time zero), time since the stroke occurred, or a combination of these factors. In such situations, researchers are more interested in estimating lifetime lost due to stroke rather than merely estimating the relative hazard due to stroke. To achieve this, we propose an ensemble approach using the generalized gamma distribution by means of a semi-Markov type model with an additive hazards extension. Our modeling framework allows stroke as a time-dependent covariate to affect all three parameters (location, scale, and shape) of the generalized gamma distribution. Using the concept of relative times, we answer the research question by estimating residual life lost due to ischemic and hemorrhagic stroke in the chronic dialysis population.
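The "lifetime lost" quantity can be sketched as a difference in mean residual life under two fitted generalized gamma distributions. The parameter values below are invented for illustration and are not estimates from the dialysis data; note also that scipy's `gengamma` parameterization (a, c, scale) differs from the (location, scale, shape) convention in the abstract.

```python
# Hedged sketch: mean residual life under generalized gamma survival
# models, expressing "years lost" rather than a hazard ratio.
import numpy as np
from scipy.integrate import quad
from scipy.stats import gengamma

def mean_residual_life(dist, t):
    """E[T - t | T > t] = (integral of S(u) from t to infinity) / S(t)."""
    tail, _ = quad(dist.sf, t, np.inf)
    return tail / dist.sf(t)

no_stroke = gengamma(a=2.0, c=1.0, scale=5.0)    # hypothetical baseline
post_stroke = gengamma(a=1.2, c=1.0, scale=3.0)  # hypothetical post-event

t = 2.0  # years of follow-up at the time of the comparison (made up)
lost = mean_residual_life(no_stroke, t) - mean_residual_life(post_stroke, t)
print(f"estimated residual life lost at t = {t}: {lost:.2f} years")
```

In the paper's framework the post-event distribution's parameters would additionally depend on stroke type and timing via the semi-Markov structure; this sketch only shows the residual-life calculation itself.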
- ABSTRACT: In multiple fields of study, time series measured at high frequencies are used to estimate population curves that describe the temporal evolution of some characteristic of interest. These curves are typically nonlinear, and the deviations of each series from the corresponding curve are highly autocorrelated. In this scenario, we propose a procedure to compare the response curves for different groups at specific points in time. The method involves fitting the curves, performing potentially hundreds of serially correlated tests, and appropriately adjusting the overall alpha level of the tests. Our motivating application comes from psycholinguistics and the visual world paradigm. We describe how the proposed technique can be adapted to compare fixation curves within subjects as well as between groups. Our results lead to conclusions beyond the scope of previous analyses.
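The multiplicity step, adjusting hundreds of pointwise p-values, can be sketched with Holm's step-down method. Holm is used here only as a simple stand-in; the article's adjustment accounts for serial correlation between tests and will generally differ. The pointwise p-values below are made up.

```python
# Hedged sketch: family-wise adjustment of pointwise test p-values.
import numpy as np

def holm_adjust(pvals):
    """Holm step-down adjusted p-values (controls family-wise error rate)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        # multiply the rank-th smallest p-value by (m - rank),
        # enforcing monotonicity with a running maximum
        running_max = max(running_max, (m - rank) * p[idx])
        adj[idx] = min(1.0, running_max)
    return adj

# hypothetical pointwise p-values along the time axis
pointwise = [0.001, 0.20, 0.012, 0.04, 0.65]
adjusted = holm_adjust(pointwise)
print(adjusted.round(3))
print(adjusted < 0.05)
```

With serially correlated tests, a correlation-aware adjustment is less conservative than Holm at the same family-wise error rate, which is part of the gain the procedure above is after.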
Data provided are for informational purposes only. Although carefully collected, accuracy cannot be guaranteed. The impact factor shown is a rough estimate and may not reflect the journal's actual current impact factor. Publisher conditions are provided by RoMEO. Differing provisions from the publisher's actual policy or licence agreement may be applicable.