Journal of Biopharmaceutical Statistics (J Biopharm Stat)

Publisher: Taylor & Francis

Journal description

This rapid-publication journal presents quality applications of statistics in biopharmaceutical research and development, together with expositions of statistical methodology of immediate applicability to such work, in the form of full-length and short manuscripts, review articles, selected/invited conference papers, short articles, and letters to the editor. Addressing timely and provocative topics important to the biostatistical profession, the journal covers: drug, device, and biological research and development; drug screening and drug design; assessment of pharmacological activity; pharmaceutical formulation and scale-up; preclinical safety assessment; bioavailability, bioequivalence, and pharmacokinetics; phase I, II, and III clinical development; premarket approval assessment of clinical safety; postmarketing surveillance; manufacturing and quality control; technical operations; and regulatory issues.

Current impact factor: 0.59

Impact Factor Rankings

2015 Impact Factor Available summer 2016
2014 Impact Factor 0.587
2013 Impact Factor 0.716
2012 Impact Factor 0.728
2011 Impact Factor 1.342
2010 Impact Factor 1.073
2009 Impact Factor 1.117
2008 Impact Factor 0.951
2007 Impact Factor 0.787

Additional details

5-year impact: 0.87
Cited half-life: 7.30 years
Immediacy index: 0.14
Eigenfactor: 0.00
Article influence: 0.49
Website: Journal of Biopharmaceutical Statistics website
Other titles: Journal of biopharmaceutical statistics (Online), Journal of biopharmaceutical statistics, JBS
ISSN: 1520-5711
OCLC: 39496949
Material type: Document, Periodical, Internet resource
Document type: Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

Taylor & Francis

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Some individual journals may have policies prohibiting pre-print archiving
    • On author's personal website or departmental website immediately
    • On institutional repository or subject-based repository after a 12-month embargo
    • Publisher's version/PDF cannot be used
    • On a non-profit server
    • Published source must be acknowledged
    • Must link to publisher version
    • Set statements to accompany deposits (see policy)
    • The publisher will deposit on behalf of authors in a designated institutional repository, including PubMed Central, where a deposit agreement exists with the repository
    • STM: Science, Technology and Medicine
    • Publisher last contacted on 25/03/2014
    • This policy is an exception to the default policies of 'Taylor & Francis'

Publications in this journal

  • Journal of Biopharmaceutical Statistics 10/2015; DOI:10.1080/10543406.2015.1099542
  • ABSTRACT: In a recent paper (JBS 2014, 24, 1059-70), Hung, Wang, and Yang discussed the advantages and disadvantages of the CHW statistic for adaptive sample size re-estimation. Some statements in that paper, notably on the importance of having a fixed critical value for the final analysis notwithstanding the sample size adaptation, have the potential to be misinterpreted. This letter attempts to clarify the issue.
    Journal of Biopharmaceutical Statistics 10/2015; DOI:10.1080/10543406.2015.1099541
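For readers unfamiliar with the CHW (Cui-Hung-Wang) statistic discussed in this letter, a minimal sketch follows: the stage-wise z-statistics are combined with weights fixed at the originally planned stage sizes, which is why the final critical value can remain fixed despite a sample size adaptation. The function name and all numbers below are illustrative assumptions, not taken from the paper.

```python
import math
from statistics import NormalDist

def chw_statistic(z1, z2, n1_planned, n2_planned):
    """Combine stage-wise z-statistics with weights fixed at the
    planned (pre-adaptation) stage sample sizes, CHW-style."""
    w1 = n1_planned / (n1_planned + n2_planned)
    return math.sqrt(w1) * z1 + math.sqrt(1 - w1) * z2

# Because the weights are prespecified, the final critical value is the
# same fixed normal quantile as in a non-adaptive design, regardless of
# how the second-stage sample size was actually adapted.
alpha = 0.025
crit = NormalDist().inv_cdf(1 - alpha)  # about 1.96
z = chw_statistic(z1=1.5, z2=1.8, n1_planned=100, n2_planned=100)
print(round(z, 3), z > crit)  # 2.333 True
```

Note the design choice the letter turns on: even if the realized second-stage sample size differs from `n2_planned`, the weights (and hence the critical value) do not change.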
  • ABSTRACT: Reference-based imputation (RBI) methods have been proposed as sensitivity analyses for longitudinal clinical trials with missing data. RBI methods multiply impute the missing data in the treatment group from an imputation model built on the reference (control) group data, yielding a conservative treatment-effect estimate relative to multiple imputation (MI) under the missing-at-random (MAR) assumption. However, RBI analysis based on the regular MI approach can be overly conservative because it not only discounts the treatment-effect estimate but also imposes a penalty on the variance estimate. In this paper, we investigate the statistical properties of RBI methods and propose approaches, both frequentist and Bayesian, for obtaining accurate variance estimates in the RBI analysis. Results from simulation studies and applications to longitudinal clinical trial datasets are presented.
    Journal of Biopharmaceutical Statistics 09/2015; DOI:10.1080/10543406.2015.1094810

  • Journal of Biopharmaceutical Statistics 09/2015; DOI:10.1080/10543406.2015.1092036
  • ABSTRACT: The decision on the primary endpoint in a randomized clinical trial is of paramount importance, and the combination of several endpoints may be a reasonable choice. Gómez and Lagakos (2013) developed a method that quantifies how much more efficient it can be to use a composite rather than an individual relevant endpoint. From the frequencies of the component endpoints in the control group and the relative treatment effects on each individual endpoint, the asymptotic relative efficiency (ARE) can be computed. This paper presents the applicability of the ARE method as a practical and objective tool for evaluating which components, among the plausible ones, are most efficient in the construction of the primary endpoint. The method is illustrated with two real cardiovascular clinical trials and is extended to allow for different dependence structures between the times to the individual endpoints. The influence of this choice on the recommendation of whether or not to use the composite endpoint as the primary endpoint is studied. We conclude that the choice between the composite and the relevant endpoint depends only on the frequencies of the endpoints and the relative effects of the treatment.
    Journal of Biopharmaceutical Statistics 09/2015; DOI:10.1080/10543406.2015.1094808
  • ABSTRACT: Statistical principles and the ongoing proliferation of novel statistical methodologies have dramatically improved the clinical drug development process. This journey over the last seven decades has reshaped the pharmaceutical industry and regulatory agencies, highlighted the importance of statistical thinking in drug development and decision-making, and, most importantly, improved the lives of countless patients around the world. Some significant highlights in the history of this journey are recounted here, along with some exciting opportunities that the future may hold for the science and profession of statistics.
    Journal of Biopharmaceutical Statistics 09/2015; DOI:10.1080/10543406.2015.1092035
  • ABSTRACT: The use of surrogate outcomes that predict the treatment effect on an unobserved true outcome may have substantial economic and ethical advantages by reducing the length and size of clinical trials. There has been extensive investigation of the best means of evaluating putative surrogates. We present a systematic review of the evolution of statistical methods for validating surrogates, starting from the defining paper of Prentice (1989). We highlight the fundamental differences among the current statistical evaluation approaches, discuss their advantages and disadvantages, and examine the understanding and perceptions of investigators in this area.
    Journal of Biopharmaceutical Statistics 09/2015; DOI:10.1080/10543406.2015.1094811
  • ABSTRACT: Monitoring of toxicity is often conducted in Phase II oncology trials to avoid an excessive number of toxicities if the wrong dose is chosen for Phase II. Existing stopping rules for toxicity use information from patients who have already completed follow-up. We describe a stopping rule that uses all available data to determine whether or not to stop for toxicity when follow-up for toxicity is long. We also propose an enrollment rule that prescribes the maximum number of patients that may be enrolled at any given point in the trial.
    Journal of Biopharmaceutical Statistics 09/2015; 25(6):1-9. DOI:10.1080/10543406.2015.1086779
  • ABSTRACT: Sample size estimation (SSE) is an important issue in the planning of clinical studies. While larger studies are more likely to have sufficient power, it may be unethical to expose more patients than necessary to answer a scientific question. Budget considerations may also limit the study to a size just adequate to answer the question at hand. Typically, a statistically based justification for the sample size is provided at the planning stage: the sample size is chosen under a pre-specified type I error rate, a desired power under a particular alternative, and the variability associated with the observations. In practice, nuisance parameters such as the variance are unknown, so information from a preliminary pilot study is often used to estimate them. However, a sample size calculated from an estimated nuisance parameter may not be stable. Sample size re-estimation (SSR) at an interim analysis provides an opportunity to re-evaluate these uncertainties using accrued data and to continue the trial with an updated sample size. This article evaluates a proposed SSR method based on controlling the variability of the nuisance parameter. A numerical study assesses the performance of the proposed method with respect to control of the type I error. The proposed method and concepts could be extended to SSR approaches based on other criteria, such as maintaining the effect size, achieving conditional power, or reaching a desired reproducibility probability.
    Journal of Biopharmaceutical Statistics 09/2015; DOI:10.1080/10543406.2015.1092031
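As a sketch of the kind of calculation this abstract describes (the standard textbook formula, not the authors' SSR method), the two-sample normal-approximation sample size makes explicit how the planned n depends on the type I error rate, the power, and the nuisance variance estimated from a pilot study. All numbers are hypothetical.

```python
import math
from statistics import NormalDist

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Two-sample sample size per group (normal approximation):
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided type I error
    z_b = NormalDist().inv_cdf(power)          # power under the alternative
    return math.ceil(2 * (sigma / delta) ** 2 * (z_a + z_b) ** 2)

# A pilot estimate sigma = 10 against a clinically relevant delta = 5:
print(n_per_group(10, 5))   # 63 per group
# If accrued interim data revise the variance estimate upward, the
# re-estimated sample size grows accordingly:
print(n_per_group(12, 5))   # larger n for sigma = 12
```

The instability the abstract refers to is visible here: because n scales with sigma squared, a noisy pilot estimate of sigma propagates directly into the planned sample size.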
  • ABSTRACT: Phase I trials evaluating the safety of multi-drug combinations are becoming more common in oncology. Despite the emergence of novel methodology in this area, innovative approaches are rarely used in practice. In this article, we review three methods for Phase I combination studies that are easy to understand and straightforward to implement. We demonstrate the operating characteristics of the designs through illustration in a single trial, as well as through extensive simulation studies, with the aim of increasing the use of novel approaches in Phase I combination studies. Design specifications and software capabilities are also discussed.
    Journal of Biopharmaceutical Statistics 09/2015; DOI:10.1080/10543406.2015.1092029
  • ABSTRACT: The world of medical devices, while highly diverse, is extremely innovative, and this facilitates the adoption of innovative statistical techniques. Statisticians in the Center for Devices and Radiological Health (CDRH) at the Food and Drug Administration (FDA) have provided leadership in implementing statistical innovations. The innovations discussed include: the incorporation of Bayesian methods in clinical trials; adaptive designs; the use and development of propensity score methodology in the design and analysis of non-randomized observational studies; the use of tipping-point analysis for missing data; techniques for diagnostic test evaluation; bridging studies for companion diagnostic tests; quantitative benefit-risk decisions; and patient preference studies.
    Journal of Biopharmaceutical Statistics 09/2015; DOI:10.1080/10543406.2015.1092037
  • ABSTRACT: Evaluation of safety is a critical component of drug review at the US Food and Drug Administration (FDA). Statisticians are playing an increasingly visible role in quantitative safety evaluation and regulatory decision-making. This paper reviews the history and recent events relating to quantitative drug safety evaluation at FDA. It then focuses on five active areas of quantitative drug safety evaluation and the role the Division of Biometrics VII (DBVII) plays in them: meta-analysis for safety evaluation, large safety outcome trials, post-marketing requirements, the Sentinel Initiative, and the evaluation of risk from extended/long-acting opioids. The paper focuses chiefly on developments related to quantitative drug safety evaluation, not on the many additional developments in drug safety in general.
    Journal of Biopharmaceutical Statistics 09/2015; DOI:10.1080/10543406.2015.1092026
  • ABSTRACT: This article proposes a curtailed two-stage matched-pairs design to shorten the drug development process in Phase II clinical trials with two arms, a treatment arm and a control arm, where the primary goal is to test whether the treatment is significantly better than the control. The design uses the inverse trinomial distribution to determine appropriate cut-off points for termination or continuation of the trial at each stage, and it is best suited for trials with a low success rate and a limited available sample size, such as trials involving rare forms of cancer or other uncommon diseases.
    Journal of Biopharmaceutical Statistics 09/2015; DOI:10.1080/10543406.2015.1074921
  • ABSTRACT: Current regulation for generic approval is based on the assessment of average bioequivalence. As indicated by the United States Food and Drug Administration (FDA), an approved generic drug can be used as a substitute for the innovative drug. The FDA does not state that two generic copies of the same innovative drug can be used interchangeably, even though both are bioequivalent to the same brand-name drug. In practice, bioequivalence between generic copies of an innovative drug is not required. However, as more generic drug products become available, it is a concern whether the approved generic products have the same quality and therapeutic effect as the brand-name product and whether they can be used safely and interchangeably. In this article, several criteria, including a newly proposed criterion, for assessing drug interchangeability are studied. In addition, possible study designs and power calculation for sample size under a specific design are also discussed.
    Journal of Biopharmaceutical Statistics 09/2015; DOI:10.1080/10543406.2015.1092027
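For context on the average bioequivalence standard this abstract starts from, a minimal sketch of the usual decision rule: conclude bioequivalence when the 90% confidence interval for the test/reference geometric mean ratio falls within 0.80 to 1.25. The sketch below uses a normal approximation on the log scale (the regulatory analysis uses a t-based two one-sided tests procedure), and all numbers are hypothetical.

```python
import math
from statistics import NormalDist

def abe_ci(log_ratio, se, conf=0.90):
    """Confidence interval for the geometric mean ratio, computed on
    the log scale with a normal approximation (a simplification of
    the t-based analysis) and exponentiated back."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return math.exp(log_ratio - z * se), math.exp(log_ratio + z * se)

def average_bioequivalent(log_ratio, se, lower=0.80, upper=1.25):
    """Declare average bioequivalence only if the whole 90% CI for the
    geometric mean ratio sits inside [lower, upper]."""
    lo, hi = abe_ci(log_ratio, se)
    return lower <= lo and hi <= upper

# Hypothetical study: observed ratio 0.95 with SE 0.05 on the log scale.
lo, hi = abe_ci(math.log(0.95), 0.05)
print(round(lo, 3), round(hi, 3))
print(average_bioequivalent(math.log(0.95), 0.05))  # True
```

Note this rule is a criterion on the CI, not a significance test of ratio = 1, which is why two products can each pass against the brand-name drug yet differ from each other, the interchangeability concern the article addresses.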
  • ABSTRACT: We investigated 9-year trends in statistical design and other features of Phase II oncology clinical trials published in 2005, 2010, and 2014 in five leading oncology journals: Cancer, Clinical Cancer Research, Journal of Clinical Oncology, Annals of Oncology, and Lancet Oncology. The features analyzed included cancer type, multi-center versus single-institution status, statistical design, primary endpoint, number of treatment arms, number of patients per treatment arm, whether the statistical methods were well described, whether the drug was found effective based on rigorous statistical testing of the null hypothesis, and whether the drug was recommended for future studies.
    Journal of Biopharmaceutical Statistics 09/2015; DOI:10.1080/10543406.2015.1092030
  • ABSTRACT: Many challenging statistical problems have been identified in the regulatory review of large cardiovascular (CV) clinical outcome trials and central nervous system (CNS) trials. The problems can be common to the two areas or distinct, owing to disease characteristics and differences in trial design elements such as endpoints, trial duration, and trial size. In schizophrenia trials, extensive missing data is a major problem. In Alzheimer's trials, the endpoints for assessing symptoms and the endpoints for assessing disease progression are essentially the same, so it is difficult to construct a good trial design to evaluate a test drug's ability to slow disease progression. In cardiovascular trials, reliance on a composite endpoint with a low event rate makes the trial size so large that it is infeasible to study the multiple doses necessary to find the right dose for study patients. These are just a few typical problems. In the past decade, adaptive designs have been increasingly used in these disease areas, and some challenges have arisen with that use. Based on our review experience, group sequential designs have produced many success stories in cardiovascular trials and are also increasingly used in developing treatments targeting CNS diseases. There is also a growing trend toward using more advanced unblinded adaptive designs to produce efficacy evidence. Many statistical challenges with these kinds of adaptive designs have been identified through our experience reviewing regulatory applications and are shared in this article.
    Journal of Biopharmaceutical Statistics 09/2015; DOI:10.1080/10543406.2015.1092025
  • ABSTRACT: This paper focuses on a broad class of statistical and clinical considerations related to the assessment of treatment effects across patient subgroups in late-stage clinical trials. It begins with a comprehensive review of the clinical trial literature and regulatory guidelines to help define scientifically sound approaches to evaluating subgroup effects in clinical trials. All commonly used types of subgroup analysis are considered, including different variations of prospectively defined and post-hoc subgroup investigations. In the context of confirmatory subgroup analysis, key design and analysis options are presented, including conventional and innovative trial designs that support multi-population tailoring approaches. A detailed summary of exploratory subgroup analysis (with the purpose of either consistency assessment or subgroup identification) is also provided. The paper promotes a more disciplined approach to post-hoc subgroup identification and formulates key principles that support reliable evaluation of subgroup effects in this setting.
    Journal of Biopharmaceutical Statistics 09/2015; DOI:10.1080/10543406.2015.1092033
  • ABSTRACT: The sequential parallel comparison design (SPCD) was proposed for trials with a high placebo response. In the first stage of the SPCD, subjects are randomized between placebo and active treatment. In the second stage, placebo non-responders are re-randomized between placebo and active treatment. Data from the population of "all comers" and the sub-population of placebo non-responders are then combined to yield a single p-value for the treatment comparison. The two-way enriched design (TED) is an extension of the SPCD in which active-treatment responders are also re-randomized between placebo and active treatment in Stage 2. This article investigates potential uses of the SPCD and TED in medical device trials.
    Journal of Biopharmaceutical Statistics 09/2015; DOI:10.1080/10543406.2015.1092028