Statistical Methods in Medical Research (STAT METHODS MED RES)

Publisher: SAGE Publications

Journal description

Statistical Methods in Medical Research is the leading vehicle for review articles in all the main areas of medical statistics and is an essential reference for all medical statisticians. It is particularly useful for medical researchers dealing with data and provides a key resource for medical and statistical libraries, as well as pharmaceutical companies. This unique journal is devoted solely to statistics and medicine and aims to keep professionals abreast of the many powerful statistical techniques now available to the medical profession. As techniques are constantly adopted by statisticians working both inside and outside the medical environment, this review journal aims to satisfy the increasing demand for accurate and up-to-the-minute information.

Current impact factor: 2.96

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2013 / 2014 Impact Factor 2.957
2012 Impact Factor 2.364
2011 Impact Factor 2.443
2010 Impact Factor 1.768
2009 Impact Factor 2.569
2008 Impact Factor 2.177
2007 Impact Factor 1.492
2006 Impact Factor 1.377
2005 Impact Factor 1.327
2004 Impact Factor 2.583
2003 Impact Factor 1.857
2002 Impact Factor 1.553
2001 Impact Factor 1.886

Additional details

5-year impact 3.14
Cited half-life 0.00
Immediacy index 0.76
Eigenfactor 0.01
Article influence 1.95
Website Statistical Methods in Medical Research website
Other titles Statistical methods in medical research (Online)
ISSN 1477-0334
OCLC 42423902
Material type Document, Periodical, Internet resource
Document type Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

SAGE Publications

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Authors retain copyright
    • Pre-print on any website
    • Author's post-print on author's personal website, departmental website, institutional website or institutional repository
    • On other repositories including PubMed Central after 12 months embargo
    • Publisher copyright and source must be acknowledged
    • Publisher's version/PDF cannot be used
    • Post-print version with changes from referees' comments can be used
    • "as published" final version with layout and copy-editing changes cannot be archived but can be used on secure institutional intranet
    • Must link to publisher version with DOI
    • Publisher last reviewed on 29/07/2015
  • Classification
    • green

Publications in this journal

  •
    ABSTRACT: Rank-based sampling designs are widely used in situations where measuring the variable of interest is costly, but a small set of sampling units can be easily and cheaply ranked prior to taking the final measurements on them. When the variable of interest is binary, a common approach for ranking the sampling units is to estimate the probabilities of success through a logistic regression model. However, this requires training samples for model fitting, and once a sampling unit has been measured, the extra rank information obtained in the ranking process is not used further in the estimation process. To address these issues, in this paper we propose to use the partially rank-ordered set sampling design with multiple concomitants. In this approach, instead of fitting a logistic regression model, a soft ranking technique is employed to obtain, for each measured unit, a vector of weights that represents the probability, or degree of belief, associated with its rank among a small set of sampling units. We construct an estimator that combines the rank information with the observed partially rank-ordered set measurements themselves. The proposed methodology is applied to a breast cancer study to estimate the proportion of patients with malignant (cancerous) breast tumours in a given population. Through extensive numerical studies, the performance of the estimator is evaluated under various concomitants with different ranking potentials (i.e. good, intermediate and bad) and tie structures among the ranks. We show that the precision of the partially rank-ordered set estimator is better than that of its counterparts under simple random sampling and ranked set sampling designs and, hence, the sample size required to achieve a desired precision is reduced. © The Author(s) 2015.
    Statistical Methods in Medical Research 08/2015; DOI:10.1177/0962280215601458
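As a rough illustration of the ranked set sampling idea behind this design (a sketch only, not the authors' partially rank-ordered set estimator; the population, concomitant, and function names are invented), judgment ranking on a cheap concomitant and measuring one unit per rank position stratifies the sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def rss_proportion(pop_y, pop_x, set_size=3, cycles=200, rng=rng):
    """Ranked set sampling estimate of a proportion: for each rank r and
    each cycle, draw a set of `set_size` units, rank them by the cheap
    concomitant x, and measure only the r-th ranked unit's binary y."""
    n = len(pop_y)
    strata_means = []
    for r in range(set_size):
        ys = []
        for _ in range(cycles):
            idx = rng.choice(n, size=set_size, replace=False)
            order = idx[np.argsort(pop_x[idx])]   # judgment ranking via x
            ys.append(pop_y[order[r]])            # measure only rank r
        strata_means.append(np.mean(ys))
    # average the judgment-stratum means to estimate the proportion
    return float(np.mean(strata_means))

# toy population: x is a noisy, cheap signal for the costly binary y
x = rng.normal(size=10_000)
y = (x + rng.normal(scale=0.5, size=10_000) > 0).astype(int)
est = rss_proportion(y, x)
```

The stratification by judgment rank is what reduces variance relative to simple random sampling when the concomitant ranks well.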
  •
    ABSTRACT: We study matched pair designs with two binary endpoints under three different approaches. Power approximations and sample size calculations are derived for each, and are facilitated by R programs. An adaptive design with sample size re-estimation is also presented. Through extensive simulations, we provide general guidelines for practitioners to choose the best approach, in terms of feasibility and robustness, according to the ranges of the parameters of interest. An application to a cancer chemotherapy trial is illustrated. © The Author(s) 2015.
    Statistical Methods in Medical Research 08/2015; DOI:10.1177/0962280215601136
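For the paired-binary setting above, the textbook normal-approximation power formula for McNemar's test addresses the same design question (this is a standard formula, not necessarily one of the paper's three approaches; parameter names are ours):

```python
import numpy as np
from scipy.stats import norm

def mcnemar_power(p10, p01, n, alpha=0.05):
    """Approximate power of McNemar's test for n matched pairs, where
    p10 and p01 are the two discordant-cell probabilities, using the
    usual normal approximation."""
    psi = p10 + p01                       # total discordant probability
    delta = p10 - p01                     # treatment difference
    za = norm.ppf(1 - alpha / 2)
    num = abs(delta) * np.sqrt(n) - za * np.sqrt(psi)
    den = np.sqrt(psi - delta**2)
    return float(norm.cdf(num / den))

power = mcnemar_power(p10=0.25, p01=0.10, n=100)
```

Inverting this formula for n gives the corresponding sample size calculation.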
  •
    ABSTRACT: Adherence to medication is often measured as a continuous outcome but analyzed as a dichotomous outcome due to a lack of appropriate tools. In this paper, we illustrate the use of temporal kernel canonical correlation analysis (tkCCA) as a method to analyze adherence measurements and symptom levels on a continuous scale. The tkCCA is a novel method developed for studying the relationship between neural signals and the hemodynamic response detected by functional MRI during spontaneous activity. Although the tkCCA is a powerful tool, it has not been utilized outside the application for which it was originally developed. In this paper, we simulate time series of symptoms and adherence levels for patients with a hypothetical brain disorder and show how the tkCCA can be used to understand the relationship between them. We also examine, via simulations, the behavior of the tkCCA under various missing value mechanisms and imputation methods. Finally, we apply the tkCCA to a real data example of psychotic symptoms and adherence levels obtained from a study of subjects with a first episode of schizophrenia, schizophreniform disorder or schizoaffective disorder. © The Author(s) 2015.
    Statistical Methods in Medical Research 08/2015; DOI:10.1177/0962280215598805
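The temporal kernel CCA itself is involved, but its core idea, correlating one series with time-lagged copies of another, can be sketched with plain linear CCA (a much-simplified stand-in; the delay, noise levels, and variable names below are invented):

```python
import numpy as np

def cca_first_corr(X, Y):
    """First canonical correlation between the column spaces of X and Y,
    computed via QR orthonormalization followed by an SVD."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(s[0])

rng = np.random.default_rng(6)
n = 500
latent = rng.normal(size=n)
# symptoms respond to adherence with a delay of 2 time steps
adherence = latent + 0.3 * rng.normal(size=n)
symptoms = np.roll(latent, 2) + 0.3 * rng.normal(size=n)

# stack lagged copies of adherence; CCA weights pick out the right lag
lags = np.column_stack([np.roll(adherence, k) for k in range(5)])
r_lagged = cca_first_corr(lags[5:], symptoms[5:, None])   # trim wraparound
r_naive = cca_first_corr(adherence[5:, None], symptoms[5:, None])
```

With the lag structure included, the leading canonical correlation recovers the delayed relationship that the contemporaneous correlation misses.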
  •
    ABSTRACT: One purpose of a longitudinal study is to gain a better understanding of how an outcome of interest changes among a given population over time. In what follows, a trajectory will be taken to mean the series of measurements of the outcome variable for an individual. Group-based trajectory modelling methods seek to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Group-based trajectory models generally assume a certain structure in the covariances between measurements, for example conditional independence, homogeneous variance between groups or stationary variance over time. Violations of these assumptions could be expected to result in poor model performance. We used simulation to investigate the effect of covariance misspecification on the misclassification of trajectories in commonly used models under a range of scenarios, defining a measure of performance relative to the ideal Bayesian correct classification rate. We found that the more complex models generally performed better over a range of scenarios. In particular, an incorrectly specified covariance matrix could significantly bias the results, whereas using a covariance matrix that was correct but more complicated than necessary incurred little cost. © The Author(s) 2015.
    Statistical Methods in Medical Research 08/2015; DOI:10.1177/0962280215598806
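A toy version of group-based trajectory clustering can be run with a Gaussian mixture over each subject's vector of repeated measurements, with scikit-learn's `covariance_type` standing in for the covariance specifications discussed above ("diag" encodes conditional independence given group; "full" is the richer specification). The two-group data are simulated purely for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# two latent trajectory groups measured at 5 time points
t = np.arange(5)
g1 = 1.0 + 0.8 * t      # rising group mean
g2 = 4.0 - 0.3 * t      # falling group mean
X = np.vstack([
    g1 + rng.normal(scale=0.5, size=(100, 5)),
    g2 + rng.normal(scale=0.5, size=(100, 5)),
])

# fit the mixture under two covariance specifications and classify
labels = {}
for cov in ("diag", "full"):
    gm = GaussianMixture(n_components=2, covariance_type=cov,
                         random_state=0).fit(X)
    labels[cov] = gm.predict(X)
```

With well-separated groups both specifications classify almost perfectly; the paper's point is about what happens when the assumed covariance structure is wrong for harder configurations.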
  •
    ABSTRACT: In longitudinal clinical trials, some subjects drop out before completing the trial, so their measurements towards the end of the trial are not obtained. Mixed-effects models for repeated measures (MMRM) analysis with an "unstructured" (UN) covariance structure is increasingly common as a primary analysis for group comparisons in these trials, and model-based covariance estimators have routinely been used for testing the group difference and estimating confidence intervals of the difference in the MMRM analysis with the UN covariance. However, the MMRM analysis with the UN covariance can lead to convergence problems in the numerical optimization, especially in trials with a small sample size. Although the so-called sandwich covariance estimator is robust to misspecification of the covariance structure, its performance deteriorates in settings with a small sample size. We investigated, through a simulation study, the performance of the sandwich covariance estimator and of the covariance estimators adjusted for small-sample bias proposed by Kauermann and Carroll (J Am Stat Assoc 2001; 96: 1387-1396) and Mancl and DeRouen (Biometrics 2001; 57: 126-134), fitting simpler covariance structures. In terms of the type 1 error rate and the coverage probability of confidence intervals, Mancl and DeRouen's covariance estimator with compound symmetry, first-order autoregressive (AR(1)), heterogeneous AR(1), and antedependence structures performed better than the original sandwich estimator and Kauermann and Carroll's estimator with these structures in scenarios where the variance increased across visits. The performance based on Mancl and DeRouen's estimator with these structures was nearly equivalent to that based on the Kenward-Roger method for adjusting the standard errors and degrees of freedom with the UN structure.
    The model-based covariance estimator with the UN structure and unadjusted degrees of freedom, which is frequently used in applications, resulted in substantial inflation of the type 1 error rate. We recommend the use of Mancl and DeRouen's estimator in MMRM analysis if the number of subjects completing the trial is (n + 5) or less, where n is the number of planned visits; otherwise, Kenward and Roger's method with the UN structure is preferable. © The Author(s) 2015.
    Statistical Methods in Medical Research 08/2015; DOI:10.1177/0962280215597938
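The Mancl-DeRouen correction is easy to state for a simple linear working model: inflate each cluster's residuals by (I − H_ii)^{-1} before forming the sandwich "meat". This is a sketch for OLS with clustered data, not the paper's MMRM setting, and all names are ours:

```python
import numpy as np

def cluster_sandwich(X, y, groups, correction="MD"):
    """Cluster-robust (sandwich) covariance for OLS, optionally with the
    Mancl-DeRouen small-sample correction that inflates each cluster's
    residuals by the inverse of (I - H_ii)."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        Xi, yi = X[groups == g], y[groups == g]
        ei = yi - Xi @ beta
        if correction == "MD":
            Hii = Xi @ bread @ Xi.T            # cluster's leverage block
            ei = np.linalg.solve(np.eye(len(ei)) - Hii, ei)
        meat += Xi.T @ np.outer(ei, ei) @ Xi
    return beta, bread @ meat @ bread

rng = np.random.default_rng(2)
groups = np.repeat(np.arange(30), 4)           # 30 subjects, 4 visits each
X = np.column_stack([np.ones(120), rng.normal(size=120)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=120)
beta, V_md = cluster_sandwich(X, y, groups)
_, V_raw = cluster_sandwich(X, y, groups, correction=None)
```

The Kauermann-Carroll variant uses (I − H_ii)^{-1/2} in place of the full inverse.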
  •
    ABSTRACT: Bayesian adaptive trials have the defining feature that the probability of randomization to a particular treatment arm can change as information becomes available as to its true worth. However, there is still a general reluctance to implement such designs in many clinical settings. One area of concern is that their frequentist operating characteristics are poor or, at least, poorly understood. We investigate the bias induced in the maximum likelihood estimate of a response probability parameter, p, for binary outcome by the process of adaptive randomization. We discover that it is small in magnitude and, under mild assumptions, can only be negative - causing one's estimate to be closer to zero on average than the truth. A simple unbiased estimator for p is obtained, but it is shown to have a large mean squared error. Two approaches are therefore explored to improve its precision based on inverse probability weighting and Rao-Blackwellization. We illustrate these estimation strategies using two well-known designs from the literature. © The Author(s) 2015.
    Statistical Methods in Medical Research 08/2015; DOI:10.1177/0962280215597716
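The negative bias from adaptive allocation can be seen in a deliberately crude two-stage design (our invented example, not one of the paper's two designs): the arm that looks better after stage 1 receives all stage-2 patients, so a lucky early success count is diluted by extra observations while an unlucky one keeps full weight:

```python
import numpy as np

rng = np.random.default_rng(3)

def one_trial(p=0.3, n1=5, n2=30, rng=rng):
    """Two-stage response-adaptive allocation: the arm with more stage-1
    successes receives all n2 stage-2 patients (ties broken by coin
    flip). Returns the MLE of p for arm A, both arms having true rate p."""
    sA = rng.binomial(n1, p)
    sB = rng.binomial(n1, p)
    a_wins = sA > sB or (sA == sB and rng.random() < 0.5)
    nA_extra = n2 if a_wins else 0
    sA2 = rng.binomial(nA_extra, p)
    return (sA + sA2) / (n1 + nA_extra)

p = 0.3
bias = np.mean([one_trial(p) for _ in range(5000)]) - p   # negative
```

Averaging over many simulated trials, the MLE sits below the true p, consistent with the sign result in the abstract; the magnitude here is exaggerated by the extreme allocation rule.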
  •
    ABSTRACT: Recently, the joint analysis of longitudinal and survival data has been an active research area. Most joint models focus on survival data with only one type of failure, and research on the joint modeling of longitudinal and competing risks survival data is sparse. Even so, many joint models for this type of data assume parametric functional forms for both the longitudinal and survival sub-models, which limits their use. Further, data features commonly observed in practice, such as an asymmetric distribution of and missingness in the response and measurement errors in covariates, need to be taken into account for reliable parameter estimation. The statistical inference is complicated when all these factors are considered simultaneously. In this article, driven by a motivating example, we assume nonparametric functional forms for the varying coefficients in both the longitudinal and competing risks survival sub-models. We propose a Bayesian nonparametric mixed-effects joint model for the analysis of longitudinal competing risks data with asymmetry, missingness, and measurement errors. Simulation studies are conducted to assess the performance of the proposed method. We apply the proposed method to an AIDS dataset and compare a few candidate models under various settings. Some interesting results are reported. © The Author(s) 2015.
    Statistical Methods in Medical Research 08/2015; DOI:10.1177/0962280215597939
  •
    ABSTRACT: For complex surveys with a binary outcome, logistic regression is widely used to model the outcome as a function of covariates. Complex survey sampling designs are typically stratified cluster samples, but consistent and asymptotically unbiased estimates of the logistic regression parameters can be obtained using weighted estimating equations (WEEs) under the naive assumption that subjects within a cluster are independent. Despite the relatively large samples typical of many complex surveys, with rare outcomes, many interaction terms, or analysis of subgroups, the logistic regression parameter estimates from WEEs can be markedly biased, just as with independent samples. In this paper, we propose bias-corrected WEEs for complex survey data. The proposed method is motivated by a study of postoperative complications in laparoscopic cystectomy, using data from the 2009 United States Nationwide Inpatient Sample complex survey of hospitals. © The Author(s) 2015.
    Statistical Methods in Medical Research 08/2015; DOI:10.1177/0962280215596550
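The naive weighted estimating equations for survey-weighted logistic regression can be sketched directly; this is the uncorrected WEE that the paper improves on, not the proposed bias correction, and the data and weights are simulated:

```python
import numpy as np

def weighted_logistic(X, y, w, n_iter=25):
    """Solve the weighted estimating equations
    sum_i w_i x_i (y_i - expit(x_i' beta)) = 0 by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))        # fitted probabilities
        score = X.T @ (w * (y - mu))                # weighted score
        info = (X * (w * mu * (1 - mu))[:, None]).T @ X
        beta = beta + np.linalg.solve(info, score)  # Newton step
    return beta

rng = np.random.default_rng(4)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true = np.array([-0.5, 1.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true)))
w = rng.uniform(0.5, 2.0, size=n)   # illustrative sampling weights
beta = weighted_logistic(X, y, w)
```

With rare outcomes or small effective samples, these estimates inherit the usual small-sample bias of logistic regression, which is what the bias-corrected WEEs target.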
  • Statistical Methods in Medical Research 08/2015; DOI:10.1177/0962280215597580
  •
    ABSTRACT: Current assessment of gene-gene interactions is typically based on separate parallel analyses, where each interaction term is tested separately, while less attention has been paid to the simultaneous estimation of interaction terms in a prediction model. As the number of interaction terms grows quickly, sparse estimation is desirable for statistical and interpretability reasons. There is a large literature on sparse estimation, but the natural hierarchy between an interaction and its corresponding main effects requires special consideration. We describe random-effect models that impose sparse estimation of interactions under both strong- and weak-hierarchy constraints. We develop an estimation procedure based on the hierarchical-likelihood argument and show that the modelling approach is equivalent to a penalty-based method, with the advantage that the models are more transparent and flexible. We compare the procedure with some standard methods in a simulation study and illustrate its application in an analysis of a gene-gene interaction model to predict body-mass index. © The Author(s) 2015.
    Statistical Methods in Medical Research 08/2015; DOI:10.1177/0962280215597261
  • Statistical Methods in Medical Research 08/2015; 24(4):399-402. DOI:10.1177/0962280214520734
  •
    ABSTRACT: In a prostate cancer study, the severity of genito-urinary (bladder) toxicity is assessed for patients who were given different doses of radiation. The ordinal responses (severity of side effects) are recorded longitudinally along with the cancer stage of each patient. Differences among patients due to time-invariant covariates are captured by the model parameters. To build a suitable framework for analysing such data, we propose a self-modeling ordinal longitudinal model in which the conditional cumulative probabilities for a category of the outcome are related to a shape-invariant model. Since patients suffering from a common disease usually exhibit a similar pattern, it is natural to build a nonlinear model that is shape invariant. The model is essentially semi-parametric, with the population time curve modeled by a penalized regression spline. A Monte Carlo expectation-maximization technique is used to estimate the parameters of the model. A simulation study is also carried out to justify the methodology. © The Author(s) 2015.
    Statistical Methods in Medical Research 07/2015; DOI:10.1177/0962280215594493
  •
    ABSTRACT: Network meta-analysis expands the scope of a conventional pairwise meta-analysis to simultaneously compare multiple treatments, synthesizing both direct and indirect information and thus strengthening inference. Since most trials compare only two treatments, a typical data set in a network meta-analysis, managed as a trial-by-treatment matrix, is extremely sparse, like an incomplete block structure with substantial missing data. Zhang et al. proposed an arm-based method accounting for correlations among different treatments within the same trial and assuming that absent arms are missing at random. However, in randomized controlled trials, nonignorable missingness (missingness not at random) may occur due to deliberate choices of treatments at the design stage. In addition, those undertaking a network meta-analysis may selectively choose treatments to include in the analysis, which may also lead to missingness not at random. In this paper, we extend our previous work to incorporate missingness not at random using selection models. The proposed method is applied to two network meta-analyses and evaluated through extensive simulation studies. We also provide, in a technical appendix, comprehensive simulation-based comparisons of a commonly used contrast-based method and the arm-based method under missing completely at random and missing at random. © The Author(s) 2015.
    Statistical Methods in Medical Research 07/2015; DOI:10.1177/0962280215596185
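The contrast-based comparator mentioned above reduces, in its simplest fixed-effect form, to weighted least squares on basic contrasts under the consistency assumption. The three-treatment network below (trials B vs A, C vs A, and C vs B, with hypothetical effect estimates and standard errors) is invented for illustration:

```python
import numpy as np

# design matrix over basic parameters (d_AB, d_AC); the C-vs-B row uses
# the consistency relation d_BC = d_AC - d_AB
X = np.array([[1.0, 0.0],     # trial 1: B vs A
              [0.0, 1.0],     # trial 2: C vs A
              [-1.0, 1.0]])   # trial 3: C vs B
y = np.array([1.0, 2.1, 0.9])       # trial-level effect estimates
se = np.array([0.2, 0.3, 0.2])      # their standard errors

# fixed-effect network meta-analysis = inverse-variance weighted LS
W = np.diag(1 / se**2)
d_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
cov = np.linalg.inv(X.T @ W @ X)
```

Each basic contrast borrows strength from the indirect path through the third treatment, which is why its variance is smaller than the direct trial alone would give; the arm-based method instead models the arms (and their absence) directly.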
  •
    ABSTRACT: In this paper, we extend the spatially explicit survival model for small area cancer data by allowing dependency between space and time and using accelerated failure time models. Spatial dependency is modeled directly in the definition of the survival, density, and hazard functions. The models are developed in the context of county level aggregated data. Two cases are considered: the first assumes that the spatial and temporal distributions are independent; the second allows for dependency between the spatial and temporal components. We apply the models to prostate cancer data from the Louisiana SEER cancer registry. © The Author(s) 2015.
    Statistical Methods in Medical Research 07/2015; DOI:10.1177/0962280215596186
  •
    ABSTRACT: In this article, we propose a Bayesian sequential design that uses alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. A sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design under different alpha spending functions through simulations, and we also compare the power of the proposed method with that of a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. Among the alpha spending functions compared, the O'Brien-Fleming function has the largest power and is the most conservative, in the sense that, at the same sample size, the null hypothesis is the least likely to be rejected at early stages of the trial. Finally, we show that adding a futility stopping rule to the Bayesian sequential design can reduce both the overall type I error and the actual sample sizes. © The Author(s) 2015.
    Statistical Methods in Medical Research 07/2015; DOI:10.1177/0962280215595058
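The O'Brien-Fleming-type spending function referenced above has a closed form in the Lan-DeMets framework: alpha(t) = 2 − 2·Phi(z_{alpha/2} / sqrt(t)) for information fraction t, which spends almost no alpha early and the full amount at t = 1:

```python
import numpy as np
from scipy.stats import norm

def obf_spending(t, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type alpha spending function:
    cumulative type I error spent at information fraction t."""
    z = norm.ppf(1 - alpha / 2)
    return 2 - 2 * norm.cdf(z / np.sqrt(np.asarray(t, dtype=float)))

fractions = np.array([0.25, 0.5, 0.75, 1.0])   # four planned looks
spent = obf_spending(fractions)                # cumulative alpha at each look
```

The increments `np.diff(spent)` (with `spent[0]` as the first increment) give the per-look alpha from which the stage-wise critical values are derived, in either the frequentist or the proposed Bayesian design.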
  •
    ABSTRACT: Interval designs have recently attracted enormous attention due to their simplicity and desirable properties. We develop a Bayesian optimal interval design for dose finding in drug-combination trials. To determine the next dose combination based on the cumulative data, we propose an allocation rule that maximizes the posterior probability that the toxicity rate of the next dose falls inside a prespecified probability interval. The entire dose-finding procedure is nonparametric (model-free), which makes it robust and also removes the need for the typical "nonparametric" prephase used in model-based designs for drug-combination trials. The proposed two-dimensional interval design enjoys convergence properties for large samples. We conduct simulation studies to demonstrate the finite-sample performance of the proposed method under various scenarios and further make a modification to estimate toxicity contours by parallel dose-finding paths. Simulation results show that, on average, the performance of the proposed design is comparable with that of model-based designs, but it is much easier to implement. © The Author(s) 2015.
    Statistical Methods in Medical Research 07/2015; DOI:10.1177/0962280215594494
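The interval-decision logic can be sketched with a Beta-binomial posterior. This is a simplified stand-in: the actual Bayesian optimal interval design derives its decision boundaries to minimize decision error, and the interval width, prior, and function names here are illustrative assumptions:

```python
from scipy.stats import beta

def interval_decision(n_tox, n_treated, target=0.3, half_width=0.05,
                      a=1.0, b=1.0):
    """Interval-design dose decision sketch: with a Beta(a, b) prior on
    the toxicity rate, report the posterior probability that the rate
    lies in (target - half_width, target + half_width), and escalate /
    stay / de-escalate by comparing the posterior mean to the interval."""
    post = beta(a + n_tox, b + n_treated - n_tox)
    p_in = post.cdf(target + half_width) - post.cdf(target - half_width)
    mean = post.mean()
    if mean < target - half_width:
        action = "escalate"
    elif mean > target + half_width:
        action = "de-escalate"
    else:
        action = "stay"
    return action, float(p_in)

# 1 toxicity in 9 patients at the current dose: posterior mean well
# below the target interval, so the rule escalates
action, p_in = interval_decision(n_tox=1, n_treated=9)
```

In the two-dimensional combination setting, the same rule is applied over the partially ordered grid of dose pairs, which is where the allocation rule in the abstract comes in.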
  •
    ABSTRACT: A random effects logistic regression model is proposed for an incomplete block crossover trial comparing three treatments when the underlying patient response is dichotomous. On the basis of the conditional distributions, the conditional maximum likelihood estimator for the relative effect between treatments and its estimated asymptotic standard error are derived. An asymptotic interval estimator and an exact interval estimator are also developed. Monte Carlo simulation is used to evaluate the performance of these estimators, and both are found to perform well in a variety of situations. When the number of patients is small, the exact interval estimator, which assures a coverage probability greater than or equal to the desired confidence level, can be especially useful. Data from a crossover trial comparing low and high doses of an analgesic with a placebo for the relief of pain in primary dysmenorrhea are used to illustrate the use of these estimators and the potential usefulness of the incomplete block crossover design. © The Author(s) 2015.
    Statistical Methods in Medical Research 07/2015; DOI:10.1177/0962280215595434