Biometrics (BIOMETRICS)

Publisher: American Statistical Association, Biometrics Section; Biometric Society; International Biometric Society; Wiley

Journal description

Biometrics is published quarterly. Its general objective is to promote and extend the use of mathematical and statistical methods in various subject-matter disciplines, by describing and exemplifying developments in these methods and their applications in a form readily assimilable by experimenters and by those concerned primarily with the analysis of data. The journal is a ready medium for publication of papers by both experimentalists and statisticians. Papers in the journal include authoritative expository or review articles on statistical topics, and analytical or methodological papers contributing to the planning or analysis of experiments and surveys, or to the interpretation of data. Many of the papers in Biometrics contain worked examples of the proposed statistical analyses.

Current impact factor: 1.57

Impact Factor Rankings

Year        Impact factor
2016        Available summer 2017
2014/2015   1.568
2013        1.521
2012        1.412
2011        1.827
2010        1.764
2009        1.867
2008        1.97
2007        1.714
2006        1.489
2005        1.602
2004        1.211
2003        1.324
2002        1.077
2001        1.081
2000        1.17
1999        1.335
1998        0.863
1997        0.938
1996        1.011
1995        1.041
1994        1.207
1993        0.97
1992        1.027

Impact factor over time

[Figure: impact factor plotted by year; values as tabulated above]

Additional details

5-year impact 1.88
Cited half-life >10.0
Immediacy index 0.22
Eigenfactor 0.02
Article influence 1.57
Website Biometrics website
Other titles Biometrics
ISSN 0006-341X
OCLC 5898885
Material type Periodical, Internet resource
Document type Journal / Magazine / Newspaper, Internet Resource

Publisher details

Wiley

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Some journals have separate policies; please check with each journal directly
    • On author's personal website, institutional repositories, arXiv, AgEcon, PhilPapers, PubMed Central, RePEc or Social Science Research Network
    • Author's pre-print may not be updated with Publisher's Version/PDF
    • Author's pre-print must acknowledge acceptance for publication
    • Non-commercial
    • Publisher's version/PDF cannot be used
    • Publisher source must be acknowledged with citation
    • Must link to publisher version with set statement (see policy)
    • This policy is an exception to the default policies of 'Wiley'
  • Classification
    green

Publications in this journal

  • ABSTRACT: This article presents a new approach to modeling group animal movement in continuous time. The movement of a group of animals is modeled as a multivariate Ornstein-Uhlenbeck diffusion process in a high-dimensional space. Each individual in the group is attracted to a leading point, which is generally unobserved, and the movement of the leading point is itself an Ornstein-Uhlenbeck process attracted to an unknown attractor. The Ornstein-Uhlenbeck bridge is applied to reconstruct the location of the leading point. All movement parameters are estimated by Markov chain Monte Carlo sampling, specifically a Metropolis-Hastings algorithm. We apply the method to a small group of simultaneously tracked reindeer, Rangifer tarandus tarandus, showing that the method detects dependency in movement between individuals.
    No preview · Article · Jan 2016 · Biometrics
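    As a purely illustrative aid (not the authors' code), the following sketch simulates the hierarchical Ornstein-Uhlenbeck structure described above: each animal is attracted to an unobserved leading point, and the leading point is itself an Ornstein-Uhlenbeck process attracted to a fixed attractor. All parameter values and the Euler-Maruyama discretization are assumptions made for the sketch.
```python
import numpy as np

rng = np.random.default_rng(1)
n_animals, dim = 5, 2               # a small group tracked in the plane (assumed)
n_steps, dt = 2000, 0.1             # time grid for the Euler-Maruyama approximation
beta_lead, sigma_lead = 0.05, 0.3   # mean reversion / diffusion of the leading point
beta_ind, sigma_ind = 0.5, 0.5      # mean reversion / diffusion of each individual
attractor = np.array([10.0, 10.0])  # the (in practice unknown) attractor

lead = np.zeros((n_steps, dim))             # unobserved leading point
x = np.zeros((n_steps, n_animals, dim))     # observed animal positions
for t in range(1, n_steps):
    # leading point: OU process pulled towards the attractor
    lead[t] = lead[t - 1] + beta_lead * (attractor - lead[t - 1]) * dt \
        + sigma_lead * np.sqrt(dt) * rng.standard_normal(dim)
    # each individual: OU process pulled towards the current leading point
    x[t] = x[t - 1] + beta_ind * (lead[t] - x[t - 1]) * dt \
        + sigma_ind * np.sqrt(dt) * rng.standard_normal((n_animals, dim))
```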
  • ABSTRACT: We introduce the Interval Testing Procedure (ITP), a novel inferential technique for functional data. The procedure can be used to test different functional hypotheses, e.g., distributional equality between two or more functional populations, or equality of the mean function of a functional population to a reference. The ITP involves three steps: (i) the representation of the data on a (possibly high-dimensional) functional basis; (ii) a test of each possible set of consecutive basis coefficients; and (iii) the computation of adjusted p-values associated with each basis component, by means of a new strategy proposed here. We define a new type of error control, the interval-wise control of the family-wise error rate, which is particularly suited for functional data, and we show that the ITP provides such control. A simulation study comparing the ITP with other testing procedures is reported. The ITP is then applied to the analysis of hemodynamic features associated with cerebral aneurysm pathology. The ITP is implemented in the fdatest R package.
    No preview · Article · Jan 2016 · Biometrics
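    A highly simplified sketch of the interval-wise adjustment idea (this is not the fdatest implementation, and step (i), the basis projection, is assumed to have been done already): every interval of consecutive basis coefficients is tested with a permutation test comparing two groups, and the adjusted p-value of component k is the maximum p-value over all intervals containing k.
```python
import numpy as np

def itp_adjusted_pvalues(coef_a, coef_b, n_perm=1000, seed=0):
    """coef_a, coef_b: (subjects x p) matrices of basis coefficients for two groups."""
    rng = np.random.default_rng(seed)
    n_a, p = coef_a.shape
    pooled = np.vstack([coef_a, coef_b])

    def stat(a, b, lo, hi):
        diff = a[:, lo:hi + 1].mean(axis=0) - b[:, lo:hi + 1].mean(axis=0)
        return np.sum(diff ** 2)

    # step (ii): permutation p-value for every interval of consecutive components
    pvals = np.full((p, p), np.nan)
    for lo in range(p):
        for hi in range(lo, p):
            obs = stat(coef_a, coef_b, lo, hi)
            exceed = 0
            for _ in range(n_perm):
                idx = rng.permutation(pooled.shape[0])
                exceed += stat(pooled[idx[:n_a]], pooled[idx[n_a:]], lo, hi) >= obs
            pvals[lo, hi] = (exceed + 1) / (n_perm + 1)

    # step (iii): adjusted p-value of component k = max over intervals containing k
    return np.array([max(pvals[lo, hi] for lo in range(k + 1) for hi in range(k, p))
                     for k in range(p)])
```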
  • ABSTRACT: Spatial data have become increasingly common in epidemiology and public health research thanks to advances in GIS (Geographic Information Systems) technology. In health research, for example, it is common for epidemiologists to incorporate geographically indexed data into their studies. In practice, however, spatially defined covariates are often measured with error. Naive estimators of regression coefficients are attenuated if measurement error is ignored. Moreover, classical measurement error theory is inapplicable in the context of spatial modeling because of the spatial correlation among the observations. We propose a semiparametric regression approach to obtain bias-corrected estimates of the regression parameters and derive their large-sample properties. We evaluate the performance of the proposed method through simulation studies and illustrate it with data on ischemic heart disease (IHD). Both the simulations and the practical application demonstrate that the proposed method can be effective in practice.
    No preview · Article · Jan 2016 · Biometrics
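    A toy illustration (not the authors' semiparametric estimator) of the attenuation that motivates the paper: a spatially correlated covariate is observed with error, the naive slope is biased towards zero, and the classical moment correction shown here ignores the spatial correlation among observations, which is the setting the semiparametric approach is designed to handle. The data-generating model below is an assumption for the sketch.
```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
coords = rng.uniform(0, 10, size=(n, 2))

# spatially correlated "true" exposure from an exponential covariance (assumed model)
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
x_true = rng.multivariate_normal(np.zeros(n), np.exp(-dist / 2.0))

sigma_u = 0.8
w = x_true + rng.normal(0, sigma_u, n)            # error-prone, geographically indexed covariate
y = 1.0 + 0.5 * x_true + rng.normal(0, 1.0, n)    # outcome; the true slope is 0.5

C = np.cov(w, y)
beta_naive = C[0, 1] / C[0, 0]                                 # attenuated towards zero
beta_moment = beta_naive * C[0, 0] / (C[0, 0] - sigma_u ** 2)  # classical moment correction
print(beta_naive, beta_moment)
```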
  • ABSTRACT: The twin method refers to the use of data from same-sex identical and fraternal twins to estimate the genetic and environmental contributions to a trait or outcome. The standard twin method is the variance component twin method that estimates heritability, the fraction of variance attributed to additive genetic inheritance. The latent class twin method estimates two quantities that are easier to interpret than heritability: the genetic prevalence, which is the fraction of persons in the genetic susceptibility latent class, and the heritability fraction, which is the fraction of persons in the genetic susceptibility latent class with the trait or outcome. We extend the latent class twin method in three important ways. First, we incorporate an additive genetic model to broaden the sensitivity analysis beyond the original autosomal dominant and recessive genetic models. Second, we specify a separate survival model to simplify computations and improve convergence. Third, we show how to easily adjust for covariates by extending the method of propensity scores from a treatment difference to zygosity. Applying the latent class twin method to data on breast cancer among Nordic twins, we estimated a genetic prevalence of 1%, a result with important implications for breast cancer prevention research.
    No preview · Article · Jan 2016 · Biometrics
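    A small numerical sketch of the two target quantities named above, using made-up parameter values rather than estimates from the Nordic twin data: the genetic prevalence is the fraction of persons in the genetic susceptibility class, and the heritability fraction is the fraction of that class who develop the trait.
```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
genetic_prevalence = 0.01      # P(genetic susceptibility class) -- illustrative value
heritability_fraction = 0.60   # P(trait | susceptibility class) -- illustrative value
sporadic_rate = 0.03           # P(trait | no susceptibility)    -- illustrative value

susceptible = rng.random(n) < genetic_prevalence
trait = np.where(susceptible, rng.random(n) < heritability_fraction,
                 rng.random(n) < sporadic_rate)

print(susceptible.mean())          # ~0.01: the genetic prevalence
print(trait[susceptible].mean())   # ~0.60: the heritability fraction
```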
  • ABSTRACT: Li, Fine, and Brookhart (2015) presented an extension of the two-stage least squares (2SLS) method for additive hazards models that requires the assumption that the censoring distribution is unrelated to the endogenous exposure variable. We present another extension of 2SLS that addresses this limitation.
    No preview · Article · Jan 2016 · Biometrics
  • ABSTRACT: While there are many validated prognostic classifiers used in practice, their accuracy is often modest, and heterogeneity in clinical outcomes exists within one or more risk subgroups. Newly available markers, such as genomic mutations, may be used to improve the accuracy of an existing classifier by reclassifying patients from a heterogeneous group into a higher or lower risk category. The statistical tools typically applied to develop the initial classifiers are not easily adapted to this reclassification goal. In this article, we develop a new method designed to refine an existing prognostic classifier by incorporating new markers. The two-stage algorithm, called Boomerang, first searches for modifications of the existing classifier that increase the overall predictive accuracy and then merges the resulting groups down to a prespecified number of risk groups. Resampling techniques are proposed to assess the improvement in predictive accuracy when an independent validation data set is not available. The performance of the algorithm is assessed under various simulation scenarios in which the marker frequency, degree of censoring, and total sample size are varied. The results suggest that the method selects few false positive markers and is able to improve the predictive accuracy of the classifier in many settings. Lastly, the method is illustrated on an acute myeloid leukemia data set, where a new refined classifier incorporates four new mutations into the existing three-category classifier and is validated on an independent data set.
    No preview · Article · Jan 2016 · Biometrics
  • ABSTRACT: Predicting binary events, such as newborns with large birthweight, is important for obstetricians in their attempt to reduce both maternal and fetal morbidity and mortality. Such predictions have been a challenge in obstetric practice, where longitudinal ultrasound measurements taken at multiple gestational times during pregnancy may be useful for predicting various poor pregnancy outcomes. The focus of this article is on developing a flexible class of joint models for the multivariate longitudinal ultrasound measurements that can be used for predicting a binary event at birth. A skewed multivariate random effects model is proposed for the ultrasound measurements, and the skewed generalized t-link is assumed for the link function relating the binary event and the underlying longitudinal processes. We consider a shared random effect to link the two processes together. Markov chain Monte Carlo sampling is used to carry out Bayesian posterior computation. Several variations of the proposed model are considered and compared via the deviance information criterion, the logarithm of the pseudomarginal likelihood, and a training-test set prediction paradigm. The proposed methodology is illustrated with data from the NICHD Successive Small-for-Gestational-Age Births study, a large prospective fetal growth cohort conducted in Norway and Sweden.
    No preview · Article · Jan 2016 · Biometrics

  • ABSTRACT: Identifying factors associated with increased medical cost is important for many micro- and macro-level institutions, including the national economy, public health systems, insurers, and the insured. However, assembling comprehensive national databases that include both costs and individual-level predictors can prove challenging. Alternatively, one can use data from smaller studies, with the understanding that conclusions drawn from such analyses may be limited to the participant population. At the same time, smaller clinical studies have limited follow-up, and lifetime medical cost may not be fully observed for all study participants. In this context, we develop new model selection methods and inference procedures for secondary analyses of clinical trial data when lifetime medical cost is subject to induced censoring. Our model selection methods extend the theory of penalized estimating functions to a calibration regression estimator tailored to this data type. Next, we develop a novel inference procedure for the unpenalized regression estimator using perturbation and resampling theory. We then extend this resampling plan to accommodate regularized coefficient estimation of censored lifetime medical cost and develop post-selection inference procedures for the final model. Our methods are motivated by data from Southwest Oncology Group Protocol 9509, a clinical trial of patients with advanced non-small-cell lung cancer, and our models of lifetime medical cost are specific to this population. The methods presented in this article, however, are built on rather general techniques and could be applied to larger databases as those data become available.
    No preview · Article · Dec 2015 · Biometrics

  • ABSTRACT: We show how a spatial point process, in which each point has an associated random quantitative mark, can be identified with a spatio-temporal point process specified by a conditional intensity function. For instance, the points can be tree locations, the marks can express the sizes of trees, and the conditional intensity function can describe the distribution of a tree (i.e., its location and size) conditionally on the larger trees. This enables us to construct parametric statistical models that are easily interpretable and for which maximum-likelihood-based inference is tractable.
    No preview · Article · Dec 2015 · Biometrics
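    A schematic sketch of the identification described above, with an assumed (toy) parametric form for the conditional intensity: marked points (tree locations and sizes) are ordered by decreasing size so that size plays the role of time, and each tree is evaluated conditionally on the larger trees. The normalizing integral of a full point-process likelihood is omitted here.
```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
locations = rng.uniform(0, 100, size=(n, 2))   # tree locations in a 100 x 100 plot
sizes = rng.gamma(5.0, 2.0, size=n)            # tree sizes (the quantitative marks)

order = np.argsort(-sizes)                     # largest tree first: size acts as "time"
loc, siz = locations[order], sizes[order]

def log_intensity(i, base=0.1, decay=0.05, competition=0.01):
    """Toy conditional intensity of tree i given the larger trees 0..i-1: a baseline
    reduced by competition from nearby larger trees (an assumed parametric form)."""
    if i == 0:
        return np.log(base)
    d = np.linalg.norm(loc[:i] - loc[i], axis=1)
    return np.log(base) - competition * np.sum(siz[:i] * np.exp(-decay * d))

# the "point" part of a log-likelihood; the integral term is omitted in this sketch
loglik_points = sum(log_intensity(i) for i in range(n))
```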
  • ABSTRACT: When evaluating a new therapy versus a control via a randomized, comparative clinical study or a series of trials, heterogeneity of the study patient population may call for a pre-specified, predictive enrichment procedure to identify an "enrichable" subpopulation. For patients in this subpopulation, the therapy is expected to have a desirable overall risk-benefit profile. To develop and validate such a "therapy-diagnostic co-development" strategy, a three-step procedure may be conducted with three independent data sets from a series of similar studies or a single trial. At the first stage, we create various candidate scoring systems based on the baseline information of the patients via, for example, parametric models fitted to the first data set. Each individual score reflects an anticipated average treatment difference for future patients who share similar baseline profiles; a large score indicates that these patients tend to benefit from the new therapy. At the second step, a potentially promising, enrichable subgroup is identified using the totality of evidence from these scoring systems. At the final stage, we validate the selection via two-sample inference procedures for assessing the treatment effectiveness statistically and clinically with the third data set, the so-called holdout sample. When the study size is not large, one may combine the first two steps using a "cross-training-evaluation" process. Comprehensive numerical studies are conducted to investigate the operating characteristics of the proposed method. The entire enrichment procedure is illustrated with data from a cardiovascular trial evaluating a beta-blocker versus a placebo for treating chronic heart failure patients.
    No preview · Article · Dec 2015 · Biometrics
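    A minimal end-to-end sketch of the three-step flow on simulated data, with a single linear-interaction scoring model standing in for the various candidate scoring systems of the first stage; everything here (data-generating model, score, threshold rule) is an assumption made for illustration.
```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)

def simulate(n):
    """Toy trial data: only patients with x[:, 0] > 0 benefit from the new therapy."""
    x = rng.normal(size=(n, 3))                      # baseline covariates
    trt = rng.integers(0, 2, n)                      # 1 = new therapy, 0 = control
    y = x @ np.array([0.3, -0.2, 0.1]) + trt * 0.8 * (x[:, 0] > 0) + rng.normal(size=n)
    return x, trt, y

# Step 1: build a candidate score on the first data set (anticipated treatment difference)
x1, t1, y1 = simulate(500)
m_trt = LinearRegression().fit(x1[t1 == 1], y1[t1 == 1])
m_ctl = LinearRegression().fit(x1[t1 == 0], y1[t1 == 0])

def score(x):
    return m_trt.predict(x) - m_ctl.predict(x)

# Step 2: use the second data set to choose an enrichable subgroup (here: top-scoring half)
x2, t2, y2 = simulate(500)
threshold = np.median(score(x2))

# Step 3: validate with the holdout sample via a two-sample comparison within the subgroup
x3, t3, y3 = simulate(500)
sub = score(x3) > threshold
effect = y3[sub & (t3 == 1)].mean() - y3[sub & (t3 == 0)].mean()
print(f"estimated treatment difference in the enriched subgroup: {effect:.2f}")
```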

  • ABSTRACT: Large assembled cohorts with banked biospecimens offer valuable opportunities to identify novel markers for risk prediction. When the outcome of interest is rare, an effective strategy to conserve limited biological resources while maintaining reasonable statistical power is the case-cohort (CCH) sampling design, in which expensive markers are measured on a subset of cases and controls. However, the CCH design introduces significant analytical complexity due to outcome-dependent, finite-population sampling. Current methods for analyzing CCH studies focus primarily on the estimation of simple survival models with linear effects; testing and estimation procedures that can efficiently capture complex nonlinear marker effects for CCH data remain elusive. In this article, we propose inverse probability weighted (IPW) variance component type tests for identifying important marker sets through a Cox proportional hazards kernel machine (CoxKM) regression framework previously considered for full-cohort studies (Cai et al., 2011). The optimal choice of kernel, while vitally important for attaining high power, is typically unknown for a given data set. We therefore also develop robust testing procedures that adaptively combine information from multiple kernels. The proposed IPW test statistics have complex null distributions that cannot easily be approximated explicitly. Furthermore, due to the correlation induced by CCH sampling, standard resampling methods such as the bootstrap fail to approximate the distribution correctly. We therefore propose a novel perturbation resampling scheme that can effectively recover the induced correlation structure. Results from extensive simulation studies suggest that the proposed IPW CoxKM testing procedures work well in finite samples. The proposed methods are further illustrated by application to a Danish CCH study of apolipoprotein C-III markers and the risk of coronary heart disease.
    No preview · Article · Dec 2015 · Biometrics
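    The kernel machine test and perturbation resampling are beyond a short sketch, but the inverse probability weighting idea for case-cohort data can be shown compactly. The weighting below is one standard choice for illustration, not necessarily the exact weights used in the paper: cases receive weight 1 and non-case subcohort members receive the inverse of the subcohort sampling fraction.
```python
import numpy as np

def case_cohort_weights(is_case, in_subcohort, sampling_fraction):
    """IPW weights for a case-cohort sample (boolean arrays): all cases are observed and
    get weight 1; non-case subcohort members stand in for the full cohort and get weight
    1 / sampling_fraction; subjects outside the CCH sample get weight 0."""
    w = np.zeros(len(is_case))
    w[is_case] = 1.0
    w[~is_case & in_subcohort] = 1.0 / sampling_fraction
    return w

# example: a cohort of 10 subjects, 2 cases, and a 30% subcohort
is_case = np.array([1, 0, 0, 0, 1, 0, 0, 0, 0, 0], dtype=bool)
in_subcohort = np.array([0, 1, 0, 1, 0, 0, 1, 0, 0, 0], dtype=bool)
print(case_cohort_weights(is_case, in_subcohort, sampling_fraction=0.3))
```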

  • ABSTRACT: Causal mediation modeling has become a popular approach for studying the effect of an exposure on an outcome through a mediator. However, current methods are not applicable to settings with a large number of mediators. We propose a testing procedure for mediation effects of high-dimensional continuous mediators. We characterize the marginal mediation effect, the multivariate component-wise mediation effects, and the L2 norm of the component-wise effects, and we develop a Monte Carlo procedure for evaluating their statistical significance. To accommodate settings with a large number of mediators and a small sample size, we further propose a transformation model using the spectral decomposition. Under the transformation model, mediation effects can be estimated using a series of regression models with a univariate transformed mediator and examined by our proposed testing procedure. Extensive simulation studies are conducted to assess the performance of our methods for continuous and dichotomous outcomes. We apply the methods to genomic data investigating the effect of the microRNA miR-223 on a dichotomous survival status of patients with glioblastoma multiforme (GBM). We identify nine gene ontology sets whose expression values significantly mediate the effect of miR-223 on GBM survival.
    No preview · Article · Sep 2015 · Biometrics
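    A simplified sketch of component-wise mediation effects alpha_k x beta_k and a permutation-style Monte Carlo assessment of their L2 norm. This only illustrates the general idea: it is not the spectral-decomposition transformation model proposed in the paper, and permuting the exposure is a swapped-in way to generate a reference distribution.
```python
import numpy as np

def componentwise_effects(exposure, mediators, outcome):
    """alpha_k * beta_k for each mediator k, from simple per-mediator linear regressions."""
    n, p = mediators.shape
    xc = exposure - exposure.mean()
    alpha = mediators.T @ xc / np.sum(xc ** 2)           # exposure -> mediator k
    beta = np.empty(p)
    for k in range(p):                                   # mediator k -> outcome, adjusting for exposure
        X = np.column_stack([np.ones(n), exposure, mediators[:, k]])
        beta[k] = np.linalg.lstsq(X, outcome, rcond=None)[0][2]
    return alpha * beta

def l2_mediation_test(exposure, mediators, outcome, n_mc=1000, seed=0):
    """Compare the observed L2 norm of the effects with a permutation reference distribution."""
    rng = np.random.default_rng(seed)
    obs = np.sum(componentwise_effects(exposure, mediators, outcome) ** 2)
    null = np.empty(n_mc)
    for b in range(n_mc):
        perm = rng.permutation(len(exposure))            # break the exposure-mediator link
        null[b] = np.sum(componentwise_effects(exposure[perm], mediators, outcome) ** 2)
    return obs, (np.sum(null >= obs) + 1) / (n_mc + 1)
```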
  • ABSTRACT: Semicontinuous data, in the form of a mixture of a large portion of zero values and continuously distributed positive values, frequently arise in many areas of biostatistics. This article is motivated by the analysis of relationships between disease outcomes and intakes of episodically consumed dietary components. An important aspect of studies in nutritional epidemiology is that true diet is unobservable and is commonly evaluated by food frequency questionnaires subject to substantial measurement error. Following the regression calibration approach to measurement error correction, unknown individual intakes in the risk model are replaced by their conditional expectations given the mismeasured intakes and other model covariates. These regression calibration predictors are estimated using short-term unbiased reference measurements in a calibration substudy. Since dietary intakes are often "energy-adjusted," e.g., by using ratios of the intake of interest to total energy intake, correct estimation of the regression calibration predictor for each energy-adjusted, episodically consumed dietary component requires modeling the short-term reference measurements of the component (a semicontinuous variable) and of energy (a continuous variable) simultaneously in a bivariate model. In this article, we develop such a bivariate model, together with its application to regression calibration. We illustrate the new methodology using data from the NIH-AARP Diet and Health Study (Schatzkin et al., 2001, American Journal of Epidemiology 154, 1119-1125) and evaluate its performance in a simulation study.
    No preview · Article · Aug 2015 · Biometrics
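    A generic regression calibration sketch on simulated data (not the authors' bivariate model for energy-adjusted, episodically consumed components): the unknown true intake in the risk model is replaced by its estimated conditional expectation given the error-prone questionnaire value, with the calibration equation estimated from short-term reference measurements in a substudy. All model forms and parameter values are assumptions for the sketch.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n, n_cal = 5000, 500

x = rng.gamma(4.0, 1.0, n)                          # true long-term intake (unobserved)
w = 1.0 + 0.6 * x + rng.normal(0, 1.2, n)           # questionnaire value with error
y = (rng.random(n) < 1 / (1 + np.exp(3.0 - 0.4 * x))).astype(float)  # outcome; true log-OR is 0.4

# calibration substudy: short-term reference measurements, unbiased for the true intake
cal = rng.choice(n, n_cal, replace=False)
ref = x[cal] + rng.normal(0, 0.8, n_cal)

# regression calibration predictor: estimate E[X | W] by regressing the reference on W
slope, intercept = np.polyfit(w[cal], ref, 1)
x_hat = intercept + slope * w

# fit the risk model with the mismeasured W (naive) and with the calibrated predictor
fit_naive = sm.Logit(y, sm.add_constant(w)).fit(disp=0)
fit_rc = sm.Logit(y, sm.add_constant(x_hat)).fit(disp=0)
print(fit_naive.params[1], fit_rc.params[1])        # calibrated slope should be much closer to 0.4
```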