Biometrics Journal Impact Factor & Information

Publisher: Wiley, on behalf of the International Biometric Society (formerly the Biometric Society; the journal originated with the American Statistical Association's Biometrics Section)

Journal description

Biometrics is published quarterly. Its general objectives are to promote and extend the use of mathematical and statistical methods in various subject-matter disciplines, by describing and exemplifying developments in these methods and their applications in a form readily assimilable by experimenters and by those concerned primarily with the analysis of data. The journal serves as a medium for papers by both experimentalists and statisticians. Papers in the journal include authoritative expository or review articles on statistical topics, and analytical or methodological papers contributing to the planning or analysis of experiments and surveys, or to the interpretation of data. Many of the papers in Biometrics contain worked examples of the proposed statistical analyses.

Current impact factor: 1.57

Impact Factor Rankings

Year    Impact Factor
2015    Available summer 2016
2014    1.568
2013    1.521
2012    1.412
2011    1.827
2010    1.764
2009    1.867
2008    1.97
2007    1.714
2006    1.489
2005    1.602
2004    1.211
2003    1.324
2002    1.077
2001    1.081
2000    1.17
1999    1.335
1998    0.863
1997    0.938
1996    1.011
1995    1.041
1994    1.207
1993    0.97
1992    1.027

Additional details

5-year impact factor: 1.88
Cited half-life: >10.0
Immediacy index: 0.22
Eigenfactor: 0.02
Article influence: 1.57
Website: Biometrics website
Other titles: Biometrics
ISSN: 0006-341X
OCLC: 5898885
Material type: Periodical, Internet resource
Document type: Journal / Magazine / Newspaper, Internet Resource

Publisher details

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Some journals have separate policies; please check with each journal directly
    • On author's personal website, institutional repositories, arXiv, AgEcon, PhilPapers, PubMed Central, RePEc or Social Science Research Network
    • Author's pre-print may not be updated with Publisher's Version/PDF
    • Author's pre-print must acknowledge acceptance for publication
    • Non-commercial
    • Publisher's version/PDF cannot be used
    • Publisher source must be acknowledged with citation
    • Must link to publisher version with set statement (see policy)
    • This policy is an exception to the default policies of 'Wiley'
  • Classification
    • green

Publications in this journal

  • ABSTRACT: Causal mediation modeling has become a popular approach for studying the effect of an exposure on an outcome through a mediator. However, current methods are not applicable to the setting with a large number of mediators. We propose a testing procedure for mediation effects of high-dimensional continuous mediators. We characterize the marginal mediation effect, the multivariate component-wise mediation effects, and the L2 norm of the component-wise effects, and develop a Monte-Carlo procedure for evaluating their statistical significance. To accommodate the setting with a large number of mediators and a small sample size, we further propose a transformation model using the spectral decomposition. Under the transformation model, mediation effects can be estimated using a series of regression models with a univariate transformed mediator, and examined by our proposed testing procedure. Extensive simulation studies are conducted to assess the performance of our methods for continuous and dichotomous outcomes. We apply the methods to analyze genomic data investigating the effect of microRNA miR-223 on a dichotomous survival status of patients with glioblastoma multiforme (GBM). We identify nine gene ontology sets with expression values that significantly mediate the effect of miR-223 on GBM survival.
    Biometrics 09/2015; DOI:10.1111/biom.12421
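
The testing idea in the abstract above can be pictured with a small, self-contained sketch. This is not the authors' Monte-Carlo procedure or their spectral-decomposition transformation; it is a simplified stand-in that estimates component-wise mediation products alpha_j * beta_j one mediator at a time, summarizes them with an L2 norm, and uses a plain permutation null. All data, dimensions, and function names are hypothetical.

```python
# Minimal sketch: component-wise mediation products and an L2-norm statistic,
# with a simple permutation null for the exposure. Requires numpy only.
import numpy as np

def mediation_l2_stat(x, M, y):
    """L2 norm of component-wise products alpha_j * beta_j, where alpha_j is
    the slope of M_j on x and beta_j the slope of y on M_j adjusting for x
    (fitted one mediator at a time)."""
    n, p = M.shape
    prods = np.empty(p)
    X1 = np.column_stack([np.ones(n), x])
    for j in range(p):
        alpha_j = np.linalg.lstsq(X1, M[:, j], rcond=None)[0][1]
        X2 = np.column_stack([np.ones(n), M[:, j], x])
        beta_j = np.linalg.lstsq(X2, y, rcond=None)[0][1]
        prods[j] = alpha_j * beta_j
    return np.sqrt(np.sum(prods ** 2))

def permutation_pvalue(x, M, y, n_perm=200, seed=0):
    rng = np.random.default_rng(seed)
    obs = mediation_l2_stat(x, M, y)
    null = [mediation_l2_stat(rng.permutation(x), M, y) for _ in range(n_perm)]
    return (1 + sum(s >= obs for s in null)) / (n_perm + 1)

# Toy example with simulated data (hypothetical dimensions).
rng = np.random.default_rng(1)
n, p = 100, 50
x = rng.normal(size=n)
M = 0.5 * x[:, None] + rng.normal(size=(n, p))   # mediators perturbed by exposure
y = M[:, :5].sum(axis=1) + rng.normal(size=n)    # outcome driven by a few mediators
print(permutation_pvalue(x, M, y))
```
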
  • ABSTRACT: Semicontinuous data in the form of a mixture of a large portion of zero values and continuously distributed positive values frequently arise in many areas of biostatistics. This article is motivated by the analysis of relationships between disease outcomes and intakes of episodically consumed dietary components. An important aspect of studies in nutritional epidemiology is that true diet is unobservable and commonly evaluated by food frequency questionnaires with substantial measurement error. Following the regression calibration approach for measurement error correction, unknown individual intakes in the risk model are replaced by their conditional expectations given mismeasured intakes and other model covariates. Those regression calibration predictors are estimated using short-term unbiased reference measurements in a calibration substudy. Since dietary intakes are often "energy-adjusted," e.g., by using ratios of the intake of interest to total energy intake, the correct estimation of the regression calibration predictor for each energy-adjusted episodically consumed dietary component requires modeling short-term reference measurements of the component (a semicontinuous variable), and energy (a continuous variable) simultaneously in a bivariate model. In this article, we develop such a bivariate model, together with its application to regression calibration. We illustrate the new methodology using data from the NIH-AARP Diet and Health Study (Schatzkin et al., 2001, American Journal of Epidemiology 154, 1119-1125), and also evaluate its performance in a simulation study. © 2015, The International Biometric Society.
    Biometrics 08/2015; DOI:10.1111/biom.12377
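
As background for the regression-calibration step described above, here is a deliberately simplified sketch of classical linear regression calibration with a single error-prone intake, not the bivariate semicontinuous model developed in the paper. All quantities are simulated and the variable names are hypothetical.

```python
# Basic linear regression calibration: the error-prone questionnaire intake Q
# is replaced in the risk model by E[true intake | Q], estimated from unbiased
# reference measurements R in a calibration substudy. numpy only.
import numpy as np

rng = np.random.default_rng(0)
n_main, n_cal = 2000, 300

true_intake = rng.normal(5.0, 1.0, n_main + n_cal)
Q = true_intake + rng.normal(0.0, 1.5, n_main + n_cal)   # questionnaire, with error
risk_lin = -1.0 + 0.5 * true_intake
y = rng.binomial(1, 1 / (1 + np.exp(-risk_lin)))          # disease outcome

# Calibration substudy: short-term reference measurement, unbiased for truth.
cal = slice(0, n_cal)
R = true_intake[cal] + rng.normal(0.0, 0.5, n_cal)

# Step 1: calibration model E[R | Q] fitted in the substudy.
A = np.column_stack([np.ones(n_cal), Q[cal]])
gamma = np.linalg.lstsq(A, R, rcond=None)[0]

# Step 2: predicted (calibrated) intake for everyone, used in the risk model.
x_calib = gamma[0] + gamma[1] * Q

# Step 3: crude linear-probability check of attenuation vs. calibration
# (a real analysis would fit a logistic risk model).
print("naive slope (y on Q):          ", np.polyfit(Q, y, 1)[0])
print("calibrated slope (y on E[T|Q]):", np.polyfit(x_calib, y, 1)[0])
```
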
  • Biometrics 06/2015; 71(2). DOI:10.1111/biom.12321
  • Biometrics 06/2015; 71(2). DOI:10.1111/biom.12320
  • ABSTRACT: Book reviews. Editor: Taesung Park. Clinical Trials with Missing Data (M. O'Kelly and B. Ratitch), reviewed by Juwon Song; Multiple Imputation and its Application (J. R. Carpenter and M. G. Kenward), reviewed by Sohee Park; Statistical Inference on Residual Life (J.-H. Jeong), reviewed by Seungyeoun Lee.
    Biometrics 06/2015; 71(2). DOI:10.1111/biom.12319
  • ABSTRACT: This article develops a Bayesian semiparametric approach to the extended hazard model, with generalization to high-dimensional spatially grouped data. County-level spatial correlation is accommodated marginally through the normal transformation model of Li and Lin (2006, Journal of the American Statistical Association 101, 591-603), using a correlation structure implied by an intrinsic conditionally autoregressive prior. Efficient Markov chain Monte Carlo algorithms are developed, especially applicable to fitting very large, highly censored areal survival data sets. Per-variable tests for proportional hazards, accelerated failure time, and accelerated hazards are efficiently carried out with and without spatial correlation through Bayes factors. The resulting reduced, interpretable spatial models can fit significantly better than a standard additive Cox model with spatial frailties. © 2014, The International Biometric Society.
    Biometrics 12/2014; 71(2). DOI:10.1111/biom.12268
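
For reference, the extended hazard model mentioned above is commonly written in the following generic parameterization (not necessarily the article's exact notation); the listed special cases are the ones the article's Bayes-factor tests distinguish.

```latex
\lambda(t \mid \mathbf{x})
  = \lambda_0\!\left(t\, e^{\mathbf{x}^{\top}\boldsymbol{\beta}_1}\right)
    e^{\mathbf{x}^{\top}\boldsymbol{\beta}_2},
\qquad
\begin{cases}
\boldsymbol{\beta}_1 = \mathbf{0} & \text{proportional hazards (PH)},\\
\boldsymbol{\beta}_1 = \boldsymbol{\beta}_2 & \text{accelerated failure time (AFT)},\\
\boldsymbol{\beta}_2 = \mathbf{0} & \text{accelerated hazards (AH)}.
\end{cases}
```
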
  • Biometrics 12/2014; 70(4):1062-1062. DOI:10.1111/biom.12262
  • ABSTRACT: Multistate models are used to characterize individuals' natural histories through diseases with discrete states. Observational data resources based on electronic medical records pose new opportunities for studying such diseases. However, these data consist of observations of the process at discrete sampling times, which may either be pre-scheduled and non-informative, or symptom-driven and informative about an individual's underlying disease status. We have developed a novel joint observation and disease transition model for this setting. The disease process is modeled according to a latent continuous-time Markov chain; and the observation process, according to a Markov-modulated Poisson process with observation rates that depend on the individual's underlying disease status. The disease process is observed at a combination of informative and non-informative sampling times, with possible misclassification error. We demonstrate that the model is computationally tractable and devise an expectation-maximization algorithm for parameter estimation. Using simulated data, we show how estimates from our joint observation and disease transition model lead to less biased and more precise estimates of the disease rate parameters. We apply the model to a study of secondary breast cancer events, utilizing mammography and biopsy records from a sample of women with a history of primary breast cancer.
    Biometrics 10/2014; 71(1). DOI:10.1111/biom.12252
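
To make the observation scheme above concrete, the following toy simulation (not the authors' estimation code) generates a latent two-state continuous-time Markov chain, symptom-driven visits from a Poisson process whose rate depends on the latent state, pre-scheduled visits, and misclassified state readings. All rates and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

Q = np.array([[-0.3, 0.3],      # transition intensity matrix: 0 -> 1 at rate 0.3
              [ 0.1, -0.1]])    #                               1 -> 0 at rate 0.1
obs_rate = np.array([0.2, 1.5]) # symptom-driven visit rate per latent state
misclass = 0.05                 # probability of recording the wrong state
horizon = 10.0
scheduled = np.arange(1.0, horizon, 2.0)   # non-informative, pre-scheduled visits

# Simulate the latent chain by exponential holding times.
t, state = 0.0, 0
jumps = [(0.0, 0)]
while t < horizon:
    t += rng.exponential(1.0 / -Q[state, state])
    if t >= horizon:
        break
    state = 1 - state            # two states, so the next state is deterministic
    jumps.append((t, state))

def state_at(time):
    """Latent state at `time` (right-continuous)."""
    s = 0
    for jt, js in jumps:
        if jt <= time:
            s = js
    return s

# Symptom-driven observation times by thinning a Poisson process with the
# maximum rate, keeping each point with probability obs_rate[state] / max_rate.
max_rate = obs_rate.max()
cand = np.cumsum(rng.exponential(1.0 / max_rate, size=100))
informative = [c for c in cand if c < horizon
               and rng.uniform() < obs_rate[state_at(c)] / max_rate]

obs_times = np.sort(np.concatenate([scheduled, informative]))
observed = [(round(ti, 2),
             state_at(ti) if rng.uniform() > misclass else 1 - state_at(ti))
            for ti in obs_times]
print(observed)
```
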
  • ABSTRACT: Electronic health record (EHR) data are becoming an increasingly common data source for understanding clinical risk of acute events. While their longitudinal nature presents opportunities to observe changing risk over time, these analyses are complicated by the sparse and irregular measurements of many of the clinical metrics, making typical statistical methods unsuitable for these data. In this paper, we present an analytic procedure to both sample from an EHR and analyze the data to detect clinically meaningful markers of acute myocardial infarction (MI). Using an EHR from a large national dialysis organization, we abstracted the records of 64,318 individuals and identified 5,314 people who had an MI during the study period. We describe a nested case-control design to sample appropriate controls and an analytic approach using regression splines. Fitting a mixed model with truncated power splines, we perform a series of goodness-of-fit tests to determine whether any of 11 regularly collected laboratory markers are useful clinical predictors. We test the clinical utility of each marker using an independent test set. The results suggest that EHR data can be easily used to detect markers of clinically acute events. Special software or analytic tools are not needed, even with irregular EHR data.
    Biometrics 07/2014; 71(2). DOI:10.1111/biom.12283
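
The truncated power spline basis mentioned above is easy to construct directly; the sketch below builds it and fits a single simulated marker trajectory by ordinary least squares (the paper fits a mixed model across patients within a nested case-control sample, which this does not attempt). Knot locations and data are made up.

```python
import numpy as np

def truncated_power_basis(t, knots, degree=3):
    """Design matrix [1, t, ..., t^degree, (t - k1)_+^degree, ...]."""
    cols = [t ** d for d in range(degree + 1)]
    cols += [np.clip(t - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(-180, 0, size=120))          # days before the event
y = 0.02 * t + 2.0 * (t > -30) * (t + 30) / 30 + rng.normal(0, 0.5, t.size)

B = truncated_power_basis(t, knots=[-120, -60, -30])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
fitted = B @ coef
print("residual SD:", np.std(y - fitted))
```
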
  • ABSTRACT: There has been an increasing interest in the analysis of spatially distributed multivariate binary data motivated by a wide range of research problems. Two types of correlation are usually involved: the correlation between the multiple outcomes at one location, and the spatial correlation between locations for one particular outcome. Commonly used regression models consider only one type of correlation while ignoring, or modeling inappropriately, the other. To address this limitation, we adopt a Bayesian nonparametric approach to jointly modeling multivariate spatial binary data by integrating both types of correlation. A multivariate probit model is employed to link the binary outcomes to Gaussian latent variables, and Gaussian processes are applied to specify the spatially correlated random effects. We develop an efficient Markov chain Monte Carlo algorithm for the posterior computation. We illustrate the proposed model on simulation studies and a multidrug-resistant tuberculosis case study.
    Biometrics 06/2014; 70(4). DOI:10.1111/biom.12198
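
A small simulation can illustrate the two kinds of dependence modeled above: the sketch below draws bivariate binary outcomes by thresholding latent Gaussians whose spatial covariance comes from a squared-exponential Gaussian-process kernel and whose cross-outcome dependence comes from a 2x2 correlation matrix. This mimics the data structure only; it is not the authors' model or code, and all parameter values are invented.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(4)
n_loc, n_out = 100, 2

coords = rng.uniform(0, 10, size=(n_loc, 2))
K = np.exp(-cdist(coords, coords) ** 2 / (2 * 2.0 ** 2))   # spatial GP kernel
R = np.array([[1.0, 0.6], [0.6, 1.0]])                     # cross-outcome correlation

# Latent field with separable covariance R (x) K via the Kronecker product.
cov = np.kron(R, K) + 1e-8 * np.eye(n_loc * n_out)
z = rng.multivariate_normal(np.zeros(n_loc * n_out), cov).reshape(n_out, n_loc)

beta0 = np.array([-0.5, 0.3])                              # outcome-specific intercepts
y = (z + beta0[:, None] > 0).astype(int)                   # multivariate probit link
print(y.shape, y.mean(axis=1))                             # prevalence per outcome
```
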
  • ABSTRACT: Integrative genomics offers a promising approach to more powerful genetic association studies. The hope is that combining outcome and genotype data with other types of genomic information can lead to more powerful SNP detection. We present a new association test based on a statistical model that explicitly assumes that genetic variations affect the outcome through perturbing gene expression levels. It is shown analytically that the proposed approach can have more power to detect SNPs that are associated with the outcome through transcriptional regulation, compared to tests using the outcome and genotype data alone, and simulations show that our method is relatively robust to misspecification. We also provide a strategy for applying our approach to high-dimensional genomic data. We use this strategy to identify a potentially new association between a SNP and a yeast cell's response to the natural product tomatidine, which standard association analysis did not detect.
    Biometrics 06/2014; 70(4). DOI:10.1111/biom.12206
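
The modeling assumption above (genetic variation acting on the outcome through expression) is often illustrated with a two-stage "predicted expression" analysis. The sketch below is that simpler surrogate, not the likelihood-based test proposed in the paper: expression weights are learned from several cis-SNPs in a reference sample and then applied in an association sample. Sample sizes, effect sizes, and names are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_ref, n_assoc, n_snp = 200, 500, 5

def genotypes(n):
    return rng.binomial(2, 0.3, size=(n, n_snp)).astype(float)   # SNPs coded 0/1/2

w_true = np.array([0.6, 0.0, 0.4, 0.0, 0.0])   # only two SNPs regulate the gene

# Reference panel: learn expression weights by least squares.
G_ref = genotypes(n_ref)
expr_ref = G_ref @ w_true + rng.normal(size=n_ref)
w_hat, *_ = np.linalg.lstsq(G_ref, expr_ref, rcond=None)

# Association sample: outcome depends on (unobserved) expression.
G = genotypes(n_assoc)
y = 0.4 * (G @ w_true) + rng.normal(size=n_assoc)

expr_pred = G @ w_hat                           # genetically predicted expression
print("predicted-expression p =", stats.linregress(expr_pred, y).pvalue)
print("single-SNP p (SNP 1)   =", stats.linregress(G[:, 0], y).pvalue)
```
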
  • ABSTRACT: We develop a linear mixed regression model where both the response and the predictor are functions. Model parameters are estimated by maximizing the log likelihood via the ECME algorithm. The estimated variance parameters or covariance matrices are shown to be positive or positive definite at each iteration. In simulation studies, the approach outperforms alternative approaches in terms of the fitting error and the MSE of estimating the "regression coefficients."
    Biometrics 06/2014; 70(4). DOI:10.1111/biom.12207
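
A crude way to see what a function-on-function regression estimates is to discretize the integral and solve a ridge-penalized least-squares problem for the coefficient surface beta(s, t); the sketch below does exactly that. It is only a stand-in for intuition, not the mixed-model/ECME estimator of the article, and the grids, penalty, and data are arbitrary choices.

```python
# Discretized function-on-function regression: y_i(t) ~ sum_s x_i(s) beta(s, t) ds.
import numpy as np

rng = np.random.default_rng(6)
n, S, T = 80, 30, 25
s = np.linspace(0, 1, S)
t = np.linspace(0, 1, T)
ds = s[1] - s[0]

beta_true = np.outer(np.sin(np.pi * s), np.cos(np.pi * t))   # true coefficient surface
X = rng.normal(size=(n, S))                                  # predictor values on the s grid
Y = X @ beta_true * ds + 0.1 * rng.normal(size=(n, T))       # response curves on the t grid

lam = 1e-2                                                   # ridge penalty
beta_hat = np.linalg.solve(ds ** 2 * X.T @ X + lam * np.eye(S), ds * X.T @ Y)
print("max abs error in beta(s, t):", np.abs(beta_hat - beta_true).max())
```
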
  • ABSTRACT: Research in the field of nonparametric shape-constrained regression has been intensive. However, only a few publications explicitly deal with unimodality, although there is a need for such methods in applications, for example, in dose-response analysis. In this article, we propose unimodal spline regression methods that make use of Bernstein-Schoenberg splines and their shape-preservation property. To achieve unimodal and smooth solutions we use penalized splines, and extend the penalized spline approach toward penalizing against general parametric functions, instead of using just difference penalties. For tuning parameter selection under a unimodality constraint, a restricted maximum likelihood and an alternative Bayesian approach for unimodal regression are developed. We compare the proposed methodologies to other common approaches in a simulation study and apply them to a dose-response data set. All results suggest that the unimodality constraint, or the combination of unimodality and a penalty, can substantially improve estimation of the functional relationship.
    Biometrics 06/2014; 70(4). DOI:10.1111/biom.12193
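
Unimodal regression can be approximated without splines by scanning candidate mode locations and fitting isotonic (increasing, then decreasing) pieces on either side; the sketch below uses that simpler surrogate in place of the Bernstein-Schoenberg penalized splines proposed in the article, just to show what a unimodality constraint does on a toy dose-response curve.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def unimodal_fit(x, y):
    """Piecewise-isotonic unimodal fit: best increasing/decreasing split by RSS."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = None
    for m in range(2, len(x) - 1):                 # candidate split index (mode)
        up = IsotonicRegression(increasing=True).fit_transform(x[:m], y[:m])
        down = IsotonicRegression(increasing=False).fit_transform(x[m:], y[m:])
        fit = np.concatenate([up, down])
        rss = np.sum((y - fit) ** 2)
        if best is None or rss < best[0]:
            best = (rss, fit)
    return x, best[1]

# Toy dose-response example with a single interior peak.
rng = np.random.default_rng(7)
dose = np.linspace(0, 10, 60)
resp = np.exp(-(dose - 4.0) ** 2 / 4.0) + rng.normal(0, 0.1, dose.size)
x_sorted, fitted = unimodal_fit(dose, resp)
print("fitted maximum at dose", x_sorted[np.argmax(fitted)])
```
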
  • ABSTRACT: Spatially clustered data refer to high-dimensional correlated measurements collected from units or subjects that are spatially clustered. Such data arise frequently from studies in the social and health sciences. We propose a unified modeling framework, termed GeoCopula, to characterize both large-scale variation and small-scale variation for various data types, including continuous data, binary data, and count data as special cases. To overcome challenges in the estimation and inference for the model parameters, we propose an efficient composite likelihood approach in which estimation efficiency results from a construction of over-identified joint composite estimating equations. Consequently, the statistical theory for the proposed estimation is developed by extending the classical theory of the generalized method of moments. A clear advantage of the proposed estimation method is its computational feasibility. We conduct several simulation studies to assess the performance of the proposed models and estimation methods for both Gaussian and binary spatially clustered data. Results show a clear improvement in estimation efficiency over the conventional composite likelihood method. An illustrative data example is included to motivate and demonstrate the proposed method.
    Biometrics 06/2014; 70(3). DOI:10.1111/biom.12199
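
Finally, the pairwise composite likelihood idea above can be written down in a few lines for binary data with probit marginals and a Gaussian copula whose correlation decays with distance. The sketch below only evaluates such a composite log-likelihood on simulated inputs; it is far simpler than the GeoCopula construction and its over-identified estimating equations, and every parameter value is an assumption.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def pair_prob(y1, y2, mu1, mu2, rho):
    """P(Y1 = y1, Y2 = y2) under a bivariate probit with latent correlation rho."""
    s1 = 1.0 if y1 == 1 else -1.0
    s2 = 1.0 if y2 == 1 else -1.0
    cov = [[1.0, s1 * s2 * rho], [s1 * s2 * rho, 1.0]]
    return multivariate_normal.cdf([s1 * mu1, s2 * mu2], mean=[0.0, 0.0], cov=cov)

def pairwise_cl(beta0, phi, y, coords, cutoff=3.0):
    """Composite log-likelihood summed over pairs closer than `cutoff`."""
    total = 0.0
    for i in range(len(y)):
        for j in range(i + 1, len(y)):
            d = np.linalg.norm(coords[i] - coords[j])
            if d > cutoff:
                continue
            rho = np.exp(-d / phi)        # exponentially decaying copula correlation
            total += np.log(pair_prob(y[i], y[j], beta0, beta0, rho))
    return total

# Evaluate the composite likelihood on a small simulated data set
# (outcomes are placeholder Bernoulli draws, not truly spatially correlated).
rng = np.random.default_rng(8)
coords = rng.uniform(0, 10, size=(60, 2))
y = rng.binomial(1, 0.4, size=60)
print(pairwise_cl(beta0=norm.ppf(0.4), phi=2.0, y=y, coords=coords))
```
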