Journal of Biopharmaceutical Statistics (J Biopharm Stat)

Publisher: Taylor & Francis

Journal description

This rapid-publication periodical publishes quality applications of statistics in biopharmaceutical research and development, and expositions of statistical methodology with immediate applicability to such work, in the form of full-length and short manuscripts, review articles, selected/invited conference papers, short articles, and letters to the editor. Addressing timely and provocative topics important to the biostatistical profession, the journal covers: drug, device, and biological research and development; drug screening and drug design; assessment of pharmacological activity; pharmaceutical formulation and scale-up; preclinical safety assessment; bioavailability, bioequivalence, and pharmacokinetics; phase I, II, and III clinical development; premarket approval assessment of clinical safety; postmarketing surveillance; manufacturing and quality control; technical operations; and regulatory issues.

Current impact factor: 0.72

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2013 / 2014 Impact Factor 0.716
2012 Impact Factor 0.728
2011 Impact Factor 1.342
2010 Impact Factor 1.073
2009 Impact Factor 1.117
2008 Impact Factor 0.951
2007 Impact Factor 0.787

Additional details

5-year impact 1.18
Cited half-life 5.80
Immediacy index 0.25
Eigenfactor 0.00
Article influence 0.65
Website Journal of Biopharmaceutical Statistics website
Other titles Journal of biopharmaceutical statistics (Online), Journal of biopharmaceutical statistics, JBS
ISSN 1520-5711
OCLC 39496949
Material type Document, Periodical, Internet resource
Document type Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

Taylor & Francis

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Some individual journals may have policies prohibiting pre-print archiving
    • On author's personal website or departmental website immediately
    • On institutional repository or subject-based repository after a 12-month embargo
    • Publisher's version/PDF cannot be used
    • On a non-profit server
    • Published source must be acknowledged
    • Must link to publisher version
    • Set statements to accompany deposits (see policy)
    • The publisher will deposit on behalf of authors in a designated institutional repository, including PubMed Central, where a deposit agreement exists with the repository
    • STM: Science, Technology and Medicine
    • Publisher last contacted on 25/03/2014
    • This policy is an exception to the default policies of 'Taylor & Francis'
  • Classification
    • green

Publications in this journal

  • Journal of Biopharmaceutical Statistics 04/2015; DOI:10.1080/10543406.2015.1033045
  • ABSTRACT: Clinical trials generally allow various efficacy and safety outcomes to be collected for health interventions. Benefit-risk assessment is an important issue when evaluating a new drug. Currently, there is a lack of standardized and validated benefit-risk assessment approaches in drug development due to various challenges. To quantify benefits and risks, we propose a counterfactual p-value (CP) approach. Our approach considers a spectrum of weights for weighting benefit-risk values and computes the extreme probabilities of observing the weighted benefit-risk value in one treatment group as if patients were treated in the other treatment group. The proposed approach is applicable to a single benefit and a single risk outcome as well as to the assessment of multiple benefit and risk outcomes. In addition, prior information relevant to the importance of outcomes can be incorporated in the weight schemes. The proposed counterfactual p-value plot is intuitive, with a visualized weight pattern. The average area under CP (AUCP) and the preferred probability over time are used for overall treatment comparison, and a bootstrap approach is applied for statistical inference. We assess the proposed approach using simulated data with multiple efficacy and safety endpoints and compare its performance with a stochastic multi-criteria acceptability analysis (SMAA) approach.
    Journal of Biopharmaceutical Statistics 02/2015; 25(3). DOI:10.1080/10543406.2014.921514
  • ABSTRACT: Bioequivalence trials are commonly conducted to assess therapeutic equivalence between a generic and an innovator brand formulation. In such trials, drug concentrations are obtained repeatedly over time and are summarized using a metric such as the area under the concentration versus time curve (AUC) for each subject. The usual practice is to then conduct two one-sided tests using these areas to evaluate for average bioequivalence. A major disadvantage of this approach is the loss of information encountered when ignoring the correlation structure between repeated measurements in the computation of areas. In this paper, we propose a general linear model approach that incorporates the within-subject covariance structure for making inferences on mean areas. The model-based method can be seen to arise naturally from the re-parameterization of the AUC as a linear combination of outcome means. We investigate and compare the inferential properties of our proposed method with the traditional two one-sided tests approach using Monte Carlo simulation studies. We also examine the properties of the method in the event of missing data. Simulations show that the proposed approach is a cost-effective, viable alternative to the traditional method with superior inferential properties. Inferential advantages are particularly apparent in the presence of missing data. To illustrate our approach, a real working example from an asthma study is utilized.
    Journal of Biopharmaceutical Statistics 02/2015; DOI:10.1080/10543406.2014.1000677
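    The re-parameterization noted in the abstract above rests on a standard fact: the trapezoidal-rule AUC is a fixed linear combination of the concentrations at the sampling times. A minimal sketch of that identity, using hypothetical sampling times and concentrations (not data from the paper):

      import numpy as np

      # Hypothetical sampling times in hours and one subject's concentrations.
      t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
      conc = np.array([0.0, 1.2, 2.5, 3.1, 2.0, 0.9, 0.4])

      # Trapezoidal-rule weights: AUC = sum_j w_j * C_j.
      w = np.empty_like(t)
      w[0] = (t[1] - t[0]) / 2
      w[-1] = (t[-1] - t[-2]) / 2
      w[1:-1] = (t[2:] - t[:-2]) / 2

      auc_linear = w @ conc  # AUC written as a linear combination of concentrations
      auc_trapezoid = np.sum((t[1:] - t[:-1]) * (conc[1:] + conc[:-1]) / 2)
      assert np.isclose(auc_linear, auc_trapezoid)
      print(auc_linear)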
  • ABSTRACT: Consider a trial comparing two treatments or doses A and B with a control C. Based on an unblinded interim look, a winner W between A and B will be chosen, and future patients will be randomized to W and C and compared at the end of the study. The naïve test statistic Z under this setting follows an approximate normal distribution, as shown by Lan et al. (2006) and Shun et al. (2008). Results of these two articles apply only to the fixed sample size design. With simple modifications, this manuscript extends the previous works to the group sequential setting.
    Journal of Biopharmaceutical Statistics 02/2015; 25(3). DOI:10.1080/10543406.2014.920341
  • ABSTRACT: Covariate-adjusted sensitivity analyses are proposed for missing time-to-event outcomes. The method invokes multiple imputation (MI) for the missing failure times under a variety of specifications regarding the post-withdrawal tendency to have the event of interest. With a clinical trial example, we compared methods of covariance analysis for time-to-event data, i.e., the multivariable Cox proportional hazards model and non-parametric ANCOVA, and then illustrated how to incorporate these methods into the proposed sensitivity analysis for covariate adjustment. The MI methods considered are Kaplan-Meier multiple imputation (KMMI) and covariate-adjusted and unadjusted proportional hazards multiple imputation (PHMI). The assumptions, statistical issues, and features of these methods are discussed.
    Journal of Biopharmaceutical Statistics 01/2015; DOI:10.1080/10543406.2014.1000549
  • ABSTRACT: Under the assumption of missing at random, eight confidence intervals (CIs) for the difference between two correlated proportions in the presence of incomplete paired binary data are constructed on the basis of the likelihood ratio statistic, the score statistic, the Wald-type statistic, the hybrid method incorporating the Wilson score and Agresti-Coull (AC) intervals, and the Bootstrap-resampling method. Extensive simulation studies are conducted to evaluate the performance of the presented CIs in terms of coverage probability and expected interval width. Our empirical results show that the Wilson-score-based hybrid CI and the Wald-type CI together with the constrained maximum likelihood estimates perform well for small to moderate sample sizes in the sense that (i) their empirical coverage probabilities are quite close to the pre-specified confidence level, (ii) their expected interval widths are shorter, and (iii) their ratios of the mesial non-coverage to non-coverage probabilities lie in the interval [0.4, 0.6]. An example from a neurological study is used to illustrate the proposed methodologies.
    Journal of Biopharmaceutical Statistics 01/2015; DOI:10.1080/10543406.2014.1000544
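    For orientation, the complete-data Wald interval for the difference of two correlated proportions, which the incomplete-data intervals above generalize, can be sketched as follows; the paired counts are hypothetical:

      import numpy as np
      from scipy.stats import norm

      # Hypothetical paired 2x2 counts: n10 and n01 are the discordant pairs.
      n11, n10, n01, n00 = 40, 12, 5, 43
      n = n11 + n10 + n01 + n00

      d_hat = (n10 - n01) / n  # estimated difference of the correlated proportions
      se = np.sqrt((n10 + n01) - (n10 - n01) ** 2 / n) / n  # Wald standard error

      z = norm.ppf(0.975)  # 95% two-sided confidence level
      print(f"95% Wald CI: ({d_hat - z * se:.3f}, {d_hat + z * se:.3f})")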
  • ABSTRACT: We introduce a three-parameter logistic model to analyze the dose-limiting toxicity (DLT) as a time-to-event endpoint in oncology phase I trials. In the proposed model, patients are allowed to stay on trial without the constraint of a maximum follow-up time. Our model accommodates late-onset DLT as well as early-onset DLT, both of which inform the dose recommendation. A Bayesian approach is used to incorporate prior knowledge of the test treatment into the dose recommendation. Simulation examples show that our proposed model has good operating characteristics in assessing the maximum tolerated dose (MTD).
    Journal of Biopharmaceutical Statistics 01/2015; DOI:10.1080/10543406.2014.1003433
  • ABSTRACT: Longitudinal data arise frequently in medical studies, and it is common practice to analyze such complex data with nonlinear mixed-effects (NLME) models. However, the following four issues may be critical in longitudinal data analysis. (i) A homogeneous population assumption may be unrealistic, obscuring important features of between-subject and within-subject variations; (ii) a normality assumption for model errors may not always give robust and reliable results, in particular if the data exhibit skewness; (iii) the responses may be missing and the missingness may be nonignorable; and (iv) some covariates of interest may often be measured with substantial errors. When carrying out statistical inference in such settings, it is important to account for the effects of these data features; otherwise, erroneous or even misleading results may be produced. Inferential procedures can be complicated dramatically when these four data features arise. In this article, a Bayesian joint modeling approach based on a finite mixture of NLME joint models with skew distributions is developed to study the simultaneous impact of these four data features, allowing estimates of both model parameters and class membership probabilities at the population and individual levels. A real data example is analyzed to demonstrate the proposed methodologies and to compare various potential scenario-based models with different specifications of distributions.
    Journal of Biopharmaceutical Statistics 01/2015; DOI:10.1080/10543406.2014.1000547
  • ABSTRACT: Piecewise growth models are very flexible methods for assessing distinct phases of development or progression in longitudinal data. As an extension of these models, this paper presents piecewise growth mixture Tobit models (PGMTM) for describing phasic changes of individual trajectories over time where the longitudinal data comprise a mixture of subpopulations and where left-censoring due to a lower limit of detection (LOD) is also observed. There has been relatively little work on simultaneously modeling heterogeneous growth trajectories, segmented phases of progression, and left-censoring with skewed responses. The proposed methods are illustrated using real data from an AIDS clinical study. Analysis results suggested two classes of viral load growth trajectories: Class 1 started with a decline in viral load after treatment but rebounded after the change-point; Class 2 had an initial decrease similar to Class 1 and continued a slower decrease over time.
    Journal of Biopharmaceutical Statistics 01/2015; DOI:10.1080/10543406.2014.1002363
  • ABSTRACT: A change in the presence of disease symptoms before and after a treatment is examined for statistical significance by means of the McNemar test. When two treatment groups of patients are compared, Feuer and Kessler (1989) proposed a two-sample McNemar test. In this paper, we show that this test usually inflates the type I error in hypothesis testing, and we propose a new two-sample McNemar test that is superior in terms of preserving the type I error. We also make the connection between the two-sample McNemar test and the test statistic for equal residual effects in a 2 × 2 crossover design. The limitations of the two-sample McNemar test are also discussed.
    Journal of Biopharmaceutical Statistics 01/2015; DOI:10.1080/10543406.2014.1000548
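    For context, the classical (one-sample) McNemar statistic that the two-sample versions build on depends only on the discordant pairs; a minimal sketch with hypothetical counts:

      from scipy.stats import chi2

      # Hypothetical discordant-pair counts: b = symptom present only before
      # treatment, c = symptom present only after treatment.
      b, c = 18, 7

      stat = (b - c) ** 2 / (b + c)   # McNemar chi-square statistic, 1 df
      p_value = chi2.sf(stat, df=1)
      print(f"McNemar chi-square = {stat:.2f}, p = {p_value:.4f}")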
  • ABSTRACT: From a regulatory perspective, it is important that sample size recalculation be performed such that all persons involved in the study remain blinded. The proposed method is an extension of the work by Shih and Zhao (1997) to continuous endpoints. The treatment means are constructed as convex combinations of the stratum means and then estimated by using a linear model of the stratum responses. In this paper, the properties of the proposed estimators are studied, and simulation experiments are conducted to evaluate the difference between the two estimators. The unblinded estimators of the population mean and the population variance perform better than the blinded estimators in terms of bias and MSE in most cases. Given a particular sample size, the accuracies of the blinded means and blinded variances depend on the treatment proportions in each stratum. An example of an interim analysis is given to illustrate the use of the sample size determination. The proposed sample size calculations are recommended for interim analyses to meet the CPMP requirement while retaining the blinding.
    Journal of Biopharmaceutical Statistics 01/2015; DOI:10.1080/10543406.2014.971168
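    A small illustration of the blinding issue behind such recalculations: under 1:1 allocation, the lumped (blinded) variance estimate exceeds the within-group variance by roughly a quarter of the squared mean difference. The simulation below is only a generic sketch, not the stratum-based estimator of the paper:

      import numpy as np

      rng = np.random.default_rng(0)
      n_per_arm, mu_a, mu_b, sigma = 200, 0.0, 1.0, 2.0
      a = rng.normal(mu_a, sigma, n_per_arm)
      b = rng.normal(mu_b, sigma, n_per_arm)

      # Unblinded: average of the two within-arm sample variances.
      unblinded_var = (a.var(ddof=1) + b.var(ddof=1)) / 2

      # Blinded: lumped variance ignoring treatment labels; its expectation is
      # approximately sigma^2 + (mu_b - mu_a)^2 / 4 under 1:1 allocation.
      blinded_var = np.concatenate([a, b]).var(ddof=1)

      print(f"unblinded ~ {unblinded_var:.2f}, blinded ~ {blinded_var:.2f}, "
            f"true sigma^2 = {sigma ** 2:.2f}")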
  • ABSTRACT: The paper deals with seven special issues related to the assumptions, applicability, and practical use of formulas for calculating power or sample size for comparative clinical trials with time-to-event endpoints, with particular focus on the well-known Freedman and Schoenfeld methods. All problems addressed are illustrated by numerical examples, and recommendations are given on how to deal with them in the planning of clinical trials.
    Journal of Biopharmaceutical Statistics 01/2015; DOI:10.1080/10543406.2014.1000546
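    For orientation, the two methods named above are often quoted, under 1:1 allocation and a two-sided level alpha, as formulas for the required number of events; a minimal sketch of these textbook forms (not the paper's refinements):

      import numpy as np
      from scipy.stats import norm

      def required_events(hr, alpha=0.05, power=0.80):
          """Required number of events under 1:1 allocation for hazard ratio hr."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          schoenfeld = 4 * z ** 2 / np.log(hr) ** 2
          freedman = z ** 2 * (hr + 1) ** 2 / (hr - 1) ** 2
          return schoenfeld, freedman

      s, f = required_events(hr=0.7)
      print(f"Schoenfeld: {np.ceil(s):.0f} events, Freedman: {np.ceil(f):.0f} events")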
  • ABSTRACT: For bioassay data in drug discovery and development, it is often important to test for parallelism of the mean response curves for two preparations, such as a test sample and a reference sample, in determining the potency of the test preparation relative to the reference standard. For assessing parallelism under a four-parameter logistic model, tests of the parallelism hypothesis may be conducted based on the equivalence t-test or the traditional F-test. However, bioassay data often have heterogeneous variance across dose levels. Specifically, the variance of the response may be a function of the mean, frequently modeled as a power of the mean. Therefore, in this paper we discuss estimation and tests for parallelism under the power variance function. Two examples are considered to illustrate the estimation and testing approaches described. A simulation study is also presented to compare the empirical properties of the tests under the power variance function with the results from ordinary least squares fits, which ignore the non-constant variance pattern.
    Journal of Biopharmaceutical Statistics 01/2015; DOI:10.1080/10543406.2014.1003432
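    A minimal sketch of the four-parameter logistic mean model referred to above, fitted to hypothetical dose-response data; the power-of-the-mean weighting shown is a simple stand-in for the estimation under a power variance function discussed in the paper:

      import numpy as np
      from scipy.optimize import curve_fit

      def four_pl(x, a, d, c, b):
          """Four-parameter logistic: asymptotes a (zero dose) and d, EC50 c, slope b."""
          return d + (a - d) / (1.0 + (x / c) ** b)

      # Hypothetical dose-response data.
      dose = np.array([0.1, 0.3, 1, 3, 10, 30, 100], dtype=float)
      resp = np.array([0.05, 0.12, 0.35, 0.80, 1.40, 1.75, 1.85])

      # Ordinary least squares fit, ignoring the non-constant variance.
      ols, _ = curve_fit(four_pl, dose, resp, p0=[0.0, 2.0, 5.0, 1.0])

      # Weighted re-fit with standard deviations proportional to a power of the
      # fitted mean (theta = 0.5 is an illustrative assumption).
      theta = 0.5
      sigma = np.maximum(four_pl(dose, *ols), 1e-6) ** theta
      wls, _ = curve_fit(four_pl, dose, resp, p0=ols, sigma=sigma)
      print("OLS:", ols.round(3), "WLS:", wls.round(3))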
  • ABSTRACT: This paper illustrates the use of a multi-criteria decision making approach, based on desirability functions, to identify an appropriate adjuvant composition for an influenza vaccine to be used in the elderly. The proposed adjuvant system contained two main elements: monophosphoryl lipid and α-tocopherol with squalene in an oil/water emulsion. The objective was to elicit a stronger immune response while maintaining an acceptable reactogenicity and safety profile. The study design, the statistical models, the choice of the desirability functions, the computation of the overall desirability index, and the assessment of the robustness of the ranking are all detailed in this manuscript.
    Journal of Biopharmaceutical Statistics 01/2015; DOI:10.1080/10543406.2015.1008517
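    A compact sketch of the kind of desirability index used in such rankings: each response is mapped onto [0, 1] and the overall index is the geometric mean of the individual desirabilities. The responses, limits, and targets below are hypothetical:

      import numpy as np

      def desirability_larger_is_better(y, low, target, s=1.0):
          """0 at or below `low`, 1 at or above `target`, power s in between."""
          return float(np.clip((y - low) / (target - low), 0.0, 1.0) ** s)

      def desirability_smaller_is_better(y, target, high, s=1.0):
          """1 at or below `target`, 0 at or above `high`, power s in between."""
          return float(np.clip((high - y) / (high - target), 0.0, 1.0) ** s)

      # Hypothetical formulation: immune response (larger is better) and
      # reactogenicity score (smaller is better).
      d_immuno = desirability_larger_is_better(y=3.2, low=1.0, target=4.0)
      d_reacto = desirability_smaller_is_better(y=1.4, target=1.0, high=3.0)

      overall = float(np.sqrt(d_immuno * d_reacto))  # geometric mean of two criteria
      print(f"d_immuno = {d_immuno:.2f}, d_reacto = {d_reacto:.2f}, D = {overall:.2f}")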
  • ABSTRACT: Surrogate endpoint validation for a binary surrogate endpoint and a binary true endpoint is investigated using the criteria of the proportion explained (PE) and the relative effect (RE). The concepts of generalized confidence intervals and fiducial intervals are used for computing confidence intervals for PE and RE. The numerical results indicate that the proposed confidence intervals are satisfactory in terms of coverage probability, whereas the intervals based on Fieller's theorem and the delta method fall short in this regard. Our methodology can also be applied to interval estimation problems in a causal-inference-based approach to surrogate endpoint validation.
    Journal of Biopharmaceutical Statistics 01/2015; DOI:10.1080/10543406.2015.1008516
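    For reference, with alpha denoting the estimated treatment effect on the surrogate, beta the effect on the true endpoint, and beta_S the effect on the true endpoint after adjusting for the surrogate, the two criteria are commonly defined as RE = beta / alpha and PE = (beta - beta_S) / beta. A toy computation with hypothetical coefficient estimates:

      # Hypothetical treatment-effect estimates (e.g., log odds ratios); the
      # numbers are illustrative only.
      alpha = 0.90    # effect of treatment on the surrogate endpoint
      beta = 0.60     # effect of treatment on the true endpoint
      beta_s = 0.15   # effect on the true endpoint, adjusted for the surrogate

      relative_effect = beta / alpha                  # RE
      proportion_explained = (beta - beta_s) / beta   # PE
      print(f"RE = {relative_effect:.2f}, PE = {proportion_explained:.2f}")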
  • ABSTRACT: The high consumption of psychotropic drugs is a public health problem. Rigorous statistical methods are needed to identify consumption characteristics in the post-marketing phase. Agglomerative hierarchical clustering (AHC) and latent class analysis (LCA) can both provide clusters of subjects with similar characteristics. The objective of this study was to compare these two methods in pharmacoepidemiology on several criteria: number of clusters, concordance, interpretation, and stability over time. On a data set of bromazepam consumption, the two methods show good concordance. AHC is a very stable method and provides homogeneous classes. LCA is an inferential approach and seems to allow more accurate identification of extreme deviant behaviour.
    Journal of Biopharmaceutical Statistics 12/2014; In Press. DOI:10.1080/10543406.2014.920855
  • Journal of Biopharmaceutical Statistics 11/2014; 25(1). DOI:10.1080/10543406.2015.985162
  • ABSTRACT: We propose a chi-square goodness-of-fit test for autoregressive logistic regression (ALR) models. General guidelines for a two-dimensional binning strategy are provided, which make use of two types of maximum likelihood parameter estimates. For smaller sample sizes, a bootstrap p-value procedure is discussed. Simulation studies indicate that the test procedure satisfactorily approximates the correct size and has good power for detecting model misspecification. In particular, the test is very good at detecting the need for an additional lag. An application to a dataset relating to screening patients for late-onset Alzheimer's disease is provided.
    Journal of Biopharmaceutical Statistics 05/2014; 25(1). DOI:10.1080/10543406.2014.919938