Journal of Biopharmaceutical Statistics (J Biopharm Stat)

Publisher: Taylor & Francis

Journal description

This rapid publication periodical discusses quality applications of statistics in biopharmaceutical research and development, and expositions of statistical methodology with immediate applicability to such work, in the form of full-length and short manuscripts, review articles, selected/invited conference papers, short articles, and letters to the editor. Addressing timely and provocative topics important to the biostatistical profession, the journal covers drug, device, and biological research and development; drug screening and drug design; assessment of pharmacological activity; pharmaceutical formulation and scale-up; preclinical safety assessment; bioavailability, bioequivalence, and pharmacokinetics; phase I, II, and III clinical development; premarket approval assessment of clinical safety; postmarketing surveillance; manufacturing and quality control; technical operations; and regulatory issues.

Current impact factor: 0.72

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2013 / 2014 Impact Factor 0.716
2012 Impact Factor 0.728
2011 Impact Factor 1.342
2010 Impact Factor 1.073
2009 Impact Factor 1.117
2008 Impact Factor 0.951
2007 Impact Factor 0.787

Additional details

5-year impact 1.18
Cited half-life 5.80
Immediacy index 0.25
Eigenfactor 0.00
Article influence 0.65
Website Journal of Biopharmaceutical Statistics website
Other titles Journal of biopharmaceutical statistics (Online), Journal of biopharmaceutical statistics, JBS
ISSN 1520-5711
OCLC 39496949
Material type Document, Periodical, Internet resource
Document type Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

Taylor & Francis

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Some individual journals may have policies prohibiting pre-print archiving
    • On author's personal website or departmental website immediately
    • On institutional repository or subject-based repository after a 12-month embargo
    • Publisher's version/PDF cannot be used
    • On a non-profit server
    • Published source must be acknowledged
    • Must link to publisher version
    • Set statements to accompany deposits (see policy)
    • The publisher will deposit on behalf of authors in a designated institutional repository, including PubMed Central, where a deposit agreement exists with the repository
    • STM: Science, Technology and Medicine
    • Publisher last contacted on 25/03/2014
    • This policy is an exception to the default policies of 'Taylor & Francis'
  • Classification
    • green

Publications in this journal

  • ABSTRACT: Time-to-event or dichotomous outcomes in randomized clinical trials are often analyzed with the Cox proportional hazards model or conditional logistic regression, respectively, to obtain covariate-adjusted log hazard (or odds) ratios. Nonparametric Randomization-Based Analysis of Covariance (NPANCOVA) can be applied to unadjusted log hazard (or odds) ratios estimated from a model containing treatment as the only explanatory variable. The resulting adjusted estimates are stratified population-averaged treatment effects; they require only a valid randomization to the two treatment groups and avoid key modeling assumptions (e.g., proportional hazards in the case of a Cox model) for the adjustment variables. The methodology has application in the regulatory environment, where such assumptions cannot be verified a priori. Application of the methodology is illustrated through three examples on real data from two randomized trials.
    Journal of Biopharmaceutical Statistics 06/2015; DOI:10.1080/10543406.2015.1052483
  • ABSTRACT: Historical control trials (HCTs) are frequently conducted to compare an experimental treatment with a control treatment from a previous study, when they are applicable and favored over a randomized clinical trial (RCT) due to feasibility, ethics, and cost concerns. Makuch and Simon developed a sample size formula for historical control (HC) studies with binary outcomes, assuming that the observed response rate in the HC group is the true response rate. This method was extended by Dixon and Simon to specify sample size for HC studies comparing survival outcomes. For HC studies with binary and continuous outcomes, many researchers have shown that the popular Makuch and Simon method does not preserve the nominal power and type I error, and have suggested alternative approaches. For HC studies with survival outcomes, we reveal through simulation that the conditional power and type I error over all the random realizations of the HC data have highly skewed distributions. Therefore, the sampling variability of the HC data needs to be appropriately accounted for in determining sample size. A flexible sample size formula that controls arbitrary percentiles, instead of means, of the conditional power and type I error is derived. Although an explicit sample size formula with survival outcomes is not available, the computation is straightforward. Simulations demonstrate that the proposed method preserves the operating characteristics in a more realistic scenario where the true hazard rate of the HC group is unknown. A real data application from an advanced non-small cell lung cancer (NSCLC) clinical trial is presented to illustrate sample size considerations for HC studies comparing survival outcomes.
    Journal of Biopharmaceutical Statistics 06/2015; DOI:10.1080/10543406.2015.1052495
  • ABSTRACT: Gene regulatory network (GRN) analysis evaluates the interactions between genes and looks for models to describe gene expression behavior. These models have many applications; for instance, by characterizing the gene expression mechanisms that cause certain disorders, it would be possible to target those genes to block the progression of the disease. Many biological processes are driven by nonlinear dynamic GRNs. In this article, we propose a nonparametric ordinary differential equation (ODE) model for nonlinear dynamic GRNs. Specifically, we address the following questions simultaneously: (i) extract information from noisy time-course gene expression data; (ii) model the nonlinear ODE through a nonparametric smoothing function; (iii) identify the important regulatory gene(s) through a group SCAD approach; (iv) test the robustness of the model against possible shortening of the experimental duration. We illustrate the usefulness of the model and the associated statistical methods through a simulation study and a real application example.
    Journal of Biopharmaceutical Statistics 06/2015; DOI:10.1080/10543406.2015.1052496
  • ABSTRACT: The self-controlled case series (SCCS) and self-controlled risk interval (SCRI) designs have recently become widely used in the field of post-licensure vaccine safety monitoring to detect potential elevated risks of adverse events following vaccinations. The SCRI design can be viewed as a subset of the SCCS method in that a reduced comparison time window is used for the analysis. Compared to the SCCS method, the SCRI design has less statistical power due to fewer events occurring in the shorter control interval. In this study, we derived the asymptotic relative efficiency (ARE) between these two methods to quantify this loss in power in the SCRI design. The equation is formulated as [Formula: see text] (a: control window length ratio between SCRI and SCCS designs; b: ratio of risk window length and control window length in the SCCS design; [Formula: see text]: relative risk of exposed window to control window). According to this equation, the relative efficiency declines as the ratio of control period length between SCRI and SCCS methods decreases, or with an increase in the relative risk [Formula: see text]. We provide an example utilizing data from the Vaccine Safety Datalink (VSD) to study the potential elevated risk of febrile seizure following seasonal influenza vaccine in the 2010-2011 season.
    Journal of Biopharmaceutical Statistics 06/2015; DOI:10.1080/10543406.2015.1052819
  • ABSTRACT: The current practice for designing single-arm phase II trials with time-to-event endpoints is limited to using either a test based on the maximum likelihood estimate under the exponential model or a naive approach based on dichotomizing the event time at a landmark time point. A trial designed under the exponential model may not be reliable, and the naive approach is inefficient. The modified one-sample log-rank test statistic proposed in this paper fills the void. In general, the proposed test can be used to design single-arm phase II survival trials under any parametric survival distribution. Simulation results showed that it preserves type I error well and provides adequate power for phase II cancer trial designs with time-to-event endpoints.
    Journal of Biopharmaceutical Statistics 06/2015; DOI:10.1080/10543406.2015.1052494
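    The modified statistic is not spelled out in the abstract. As background, a minimal sketch of the classical (unmodified) one-sample log-rank test against a hypothesized exponential survival curve is given below; the null hazard, simulated data, and function names are illustrative assumptions, not the paper's method.

      import numpy as np
      from scipy import stats

      def one_sample_logrank(times, events, null_hazard):
          """Classical one-sample log-rank test against an exponential null.
          times: follow-up times; events: 1 = event observed, 0 = censored;
          null_hazard: hazard rate of the hypothesized exponential survival curve."""
          observed = np.sum(events)                  # O: observed events
          expected = np.sum(null_hazard * times)     # E: cumulative hazard under the null
          z = (observed - expected) / np.sqrt(expected)
          return z, 2 * stats.norm.sf(abs(z))

      # Illustrative data: 30 patients, true hazard 0.05/month vs. a null hazard of 0.10/month
      rng = np.random.default_rng(1)
      t = rng.exponential(scale=1 / 0.05, size=30)
      follow_up = np.minimum(t, 24.0)                # administrative censoring at 24 months
      d = (t <= 24.0).astype(int)
      print(one_sample_logrank(follow_up, d, null_hazard=0.10))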
  • ABSTRACT: In the case of small samples, asymptotic confidence sets may be inaccurate, with their actual coverage probability far from the nominal confidence level. In a single framework, we consider four popular asymptotic methods of confidence estimation. These methods are based on model linearization, the F-test, the likelihood ratio test, and a nonparametric bootstrapping procedure. Next, we apply each of these methods to derive three types of confidence sets: confidence intervals, confidence regions, and pointwise confidence bands. Finally, to estimate the actual coverage of these confidence sets, we conduct a simulation study on three regression problems. A linear model and nonlinear Hill and Gompertz models are tested under different sample sizes and experimental noise levels. The simulation study comprises calculation of the actual coverage of the confidence sets over pseudo-experimental data sets for each model. For confidence intervals, metrics such as width and simultaneous coverage are also considered. Our comparison shows that the F-test and linearization methods are the most suitable for the construction of confidence intervals, the F-test for confidence regions, and linearization for pointwise confidence bands.
    Journal of Biopharmaceutical Statistics 06/2015; DOI:10.1080/10543406.2015.1052818
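    Of the four approaches compared, the nonparametric bootstrap is the easiest to sketch. Below is a minimal case-resampling bootstrap percentile interval for a Hill model; the parameterization, simulated data, and settings are illustrative assumptions rather than the authors' exact setup.

      import numpy as np
      from scipy.optimize import curve_fit

      def hill(x, emax, ec50, n):
          """Three-parameter Hill model (baseline fixed at 0 for brevity)."""
          return emax * x**n / (ec50**n + x**n)

      rng = np.random.default_rng(0)
      x = np.tile(np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0]), 3)
      y = hill(x, emax=100.0, ec50=2.0, n=1.5) + rng.normal(0.0, 5.0, size=x.size)

      theta_hat, _ = curve_fit(hill, x, y, p0=[90.0, 1.0, 1.0], maxfev=10000)

      # Nonparametric (case-resampling) bootstrap percentile intervals
      boot = []
      for _ in range(500):
          idx = rng.integers(0, x.size, size=x.size)
          try:
              b, _ = curve_fit(hill, x[idx], y[idx], p0=theta_hat, maxfev=10000)
              boot.append(b)
          except RuntimeError:
              continue                                # skip non-converged resamples
      boot = np.array(boot)
      lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
      for name, est, l, h in zip(["Emax", "EC50", "n"], theta_hat, lo, hi):
          print(f"{name}: {est:.2f}  95% bootstrap CI [{l:.2f}, {h:.2f}]")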
  • ABSTRACT: Clinical trials often involve two or more primary endpoints. However, observing or measuring high-cost endpoints often reduces the efficiency of the study because of high medical costs, highly invasive measurements, or long-term follow-up. Further, the individual powers to demonstrate the overall efficacy of a new intervention for the multiple endpoints often differ under a given sample size. We propose an efficient clinical trial design in which the sample size for each of the endpoints is individually determined, taking into consideration both the cost and the individual power for each endpoint. We compared the efficiency of the proposed design with that of the conventional design using three variables: (1) the number of participants in the study, (2) the total number of measurements for all endpoints, and (3) the cost of enrolling the participants and obtaining the measurements for all endpoints. We extended the proposed design to a group-sequential design. Numerical examples show that the proposed design can reduce unnecessary measurements and adjust the individual powers for the endpoints, especially when the individual power for one endpoint is relatively higher than that for other endpoints in a study with multiple co-primary endpoints.
    Journal of Biopharmaceutical Statistics 06/2015; DOI:10.1080/10543406.2015.1052497
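    A much-simplified illustration of the trade-off described above: compute a per-endpoint sample size from each endpoint's own effect size and power target, then compare measurement costs against a conventional design that measures every endpoint on all participants. The effect sizes, power targets, costs, and the normal-approximation formula are illustrative assumptions, not the proposed design.

      import numpy as np
      from scipy import stats

      def n_per_group(effect_size, power, alpha=0.025):
          """Per-group sample size for a one-sided two-sample z-test (normal approximation)."""
          z_a, z_b = stats.norm.ppf(1 - alpha), stats.norm.ppf(power)
          return int(np.ceil(2 * ((z_a + z_b) / effect_size) ** 2))

      # Hypothetical co-primary endpoints: (standardized effect size, target power, cost per measurement)
      endpoints = {"endpoint_A": (0.50, 0.90, 100.0),    # cheap, easy to measure
                   "endpoint_B": (0.35, 0.80, 2500.0)}   # expensive (e.g., invasive or long follow-up)

      sizes = {k: n_per_group(es, pw) for k, (es, pw, _) in endpoints.items()}
      n_max = max(sizes.values())    # conventional design measures every endpoint on all participants

      cost_conventional = sum(2 * n_max * c for _, _, c in endpoints.values())
      cost_per_endpoint = sum(2 * sizes[k] * c for k, (_, _, c) in endpoints.items())
      print(sizes, f"conventional cost={cost_conventional:,.0f}", f"per-endpoint cost={cost_per_endpoint:,.0f}")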
  • ABSTRACT: It is well known that there is a strong relationship between HIV viral load and CD4 cell counts in AIDS studies. However, the relationship between them changes during the course of treatment and may vary among individuals. During treatment, some individuals may experience terminal events such as death. Because the terminal event may be related to the individual's viral load measurements, the terminal mechanism is non-ignorable. Furthermore, there exist competing risks from multiple types of events, such as AIDS-related death and death from other causes. Most joint models for the analysis of longitudinal-survival data developed in the literature have focused on constant coefficients and assume a symmetric distribution for the endpoints, which does not meet the need to investigate the varying nature of the relationship between HIV viral load and CD4 cell counts in practice. We develop a mixed-effects varying-coefficient model with a skewed distribution, coupled with a cause-specific varying-coefficient hazard model with random effects, to handle the varying relationship between the two endpoints for longitudinal data with competing-risks survival outcomes. A fully Bayesian inference procedure is established to estimate the parameters in the joint model. The proposed method is applied to a multicenter AIDS cohort study. Various scenario-based models that account for partial data features are compared, and some interesting findings are presented.
    Journal of Biopharmaceutical Statistics 06/2015; DOI:10.1080/10543406.2015.1052493
  • ABSTRACT: In this paper, we propose an explorative statistical method for data integration. It is developed in the context of early drug development, where it enables the detection of chemical substructures and the identification of genes that mediate their association with bioactivity (BA). The core of the method is a sparse singular value decomposition for the identification of the gene set and a permutation-based method for the control of the false discovery rate. The method is illustrated using a real dataset, and its properties are empirically evaluated by means of a simulation study. Quantitative Structure Transcriptional Activity Relationship (QSTAR, www.qstar-consortium.org) is a new paradigm in early drug development that extends QSAR by considering not only data on the chemical structure of the compounds and on the compound-induced bioactivity, but also transcriptomics data (gene expression). This approach enables, for example, the detection of chemical substructures that are associated with bioactivity while, at the same time, a gene set is correlated with both these substructures and the bioactivity. Although causal associations cannot be formally concluded, these associations may suggest that the compounds act on the bioactivity through a particular genomic pathway.
    Journal of Biopharmaceutical Statistics 06/2015; DOI:10.1080/10543406.2015.1052491
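    A generic rank-one sparse SVD can be sketched with alternating updates and soft-thresholding of the gene loadings; this is a standard penalized-matrix-decomposition-style sketch and not necessarily the paper's exact algorithm. The penalty value and the simulated matrix are illustrative.

      import numpy as np

      def soft(x, lam):
          """Soft-thresholding operator used to induce sparsity."""
          return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

      def sparse_rank1_svd(X, lam_v=0.5, n_iter=100):
          """Rank-1 sparse SVD by alternating updates, with soft-thresholding on the
          right singular vector (gene loadings)."""
          u = np.linalg.svd(X, full_matrices=False)[0][:, 0]
          for _ in range(n_iter):
              v = soft(X.T @ u, lam_v)
              if np.linalg.norm(v) == 0:
                  break
              v /= np.linalg.norm(v)
              u = X @ v
              u /= np.linalg.norm(u)
          return u, v, u @ X @ v

      rng = np.random.default_rng(5)
      X = rng.normal(size=(40, 200))      # compounds x genes (illustrative scores)
      u, v, d = sparse_rank1_svd(X, lam_v=0.5)
      print(int(np.sum(v != 0)), "genes retained in the sparse loading")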
  • ABSTRACT: Many methods have been proposed to account for the potential impact of ethnic/regional factors when extrapolating results from multi-regional clinical trials (MRCTs) to targeted ethnic (TE) patients, i.e., "bridging". Most of them focus either on TE patients in the MRCT (internal bridging) or on a separate local clinical trial (LCT) (external bridging). Huang et al. (2012) integrated both bridging concepts in their method for the Simultaneous Global Drug Development Program (SGDDP), which designs both the MRCT and the LCT prospectively and combines patients in both trials by ethnic origin, i.e., TE vs. non-TE (NTE). The weighted Z test was used to combine information from TE and NTE patients to test with statistical rigor whether a new treatment is effective in the TE population. In practice, the MRCT is often completed before the LCT. Thus, to increase the power for the SGDDP and/or obtain more informative data in TE patients, we may use the final results from the MRCT to reevaluate initial assumptions (e.g., effect sizes, variances, weight) and modify the LCT accordingly. We discuss various adaptive strategies for the LCT, such as sample size reassessment, population enrichment, endpoint change, and dose adjustment. As an example, we extend a popular adaptive design method to re-estimate the sample size for the LCT and illustrate it for a normally distributed endpoint.
    Journal of Biopharmaceutical Statistics 06/2015; DOI:10.1080/10543406.2015.1052488
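    A minimal sketch of the weighted Z combination mentioned above, assuming the standard form Z = sqrt(w)*Z_TE + sqrt(1-w)*Z_NTE; the weight, the Z values, and the split of TE/NTE evidence across the MRCT and LCT are illustrative simplifications.

      import numpy as np
      from scipy import stats

      def weighted_z(z_te, z_nte, w):
          """Weighted Z combination of evidence from targeted-ethnic (TE) and
          non-TE (NTE) patients: Z = sqrt(w)*Z_TE + sqrt(1-w)*Z_NTE."""
          return np.sqrt(w) * z_te + np.sqrt(1.0 - w) * z_nte

      # Illustrative values: Z statistics from TE and NTE patients, weight w on the TE evidence
      z_te, z_nte, w = 1.50, 2.40, 0.5
      z = weighted_z(z_te, z_nte, w)
      print(f"combined Z = {z:.2f}, one-sided p = {stats.norm.sf(z):.4f}")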
  • ABSTRACT: When evaluating the usefulness of clinical information for the diagnosis of disease, multiple raters provide a diagnosis for the same set of data. These ratings provide important insights into diagnostic performance: the accuracy of each rater's diagnosis compared to the truth standard and the level of agreement among the raters. We demonstrate that the intraclass correlation coefficient (ICC) depends on the sensitivities and specificities of the raters involved in the study. Given the sensitivity and specificity of any number of raters, along with the prevalence of disease, the expected ICC can be determined.
    Journal of Biopharmaceutical Statistics 06/2015; DOI:10.1080/10543406.2015.1052490
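    A simplified two-rater version of the dependence described above, assuming the raters share a common sensitivity and specificity and rate independently given true disease status; this is a sketch of the general idea, not necessarily the paper's formula.

      def expected_icc(sens, spec, prev):
          """Expected intraclass correlation between two raters who rate independently
          given true disease status, each with the same sensitivity and specificity."""
          q = prev * sens + (1 - prev) * (1 - spec)              # marginal P(positive rating)
          joint = prev * sens**2 + (1 - prev) * (1 - spec)**2    # P(both raters positive)
          return (joint - q**2) / (q * (1 - q))

      # Example: moderately accurate raters, 20% disease prevalence
      print(expected_icc(sens=0.85, spec=0.90, prev=0.20))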
  • ABSTRACT: In both clinical development and post-marketing of a new therapy or a new treatment, the incidence of an adverse event (AE) is always a concern. When sample sizes are small, large-sample inferential approaches for an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small-sample studies and the rare-AE case, (1) the confidence level that the incidence proportion of a particular AE, p, is above or below a threshold; (2) the lower or upper bounds on p with a certain level of confidence; and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement, and the interpretation of the results is intuitive. The discussion also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions.
    Journal of Biopharmaceutical Statistics 06/2015; DOI:10.1080/10543406.2015.1052489
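    A minimal beta-binomial sketch of the quantities described above, assuming a conjugate Beta prior (uniform by default); the prior, thresholds, and counts are illustrative, not the paper's worked example.

      from scipy import stats

      def ae_posterior_summary(x, n, threshold, conf=0.95, a=1.0, b=1.0):
          """x AEs among n patients with a Beta(a, b) prior on the AE proportion p.
          Returns the posterior probability that p exceeds the threshold and a
          one-sided upper credible bound on p."""
          post = stats.beta(a + x, b + n - x)
          return post.sf(threshold), post.ppf(conf)

      def max_ae_below_threshold(n, threshold, conf=0.95, a=1.0, b=1.0):
          """Largest AE count x for which we remain at least `conf` sure that p < threshold."""
          for x in range(n + 1):
              if stats.beta(a + x, b + n - x).cdf(threshold) < conf:
                  return x - 1
          return n

      # Example: 2 AEs in 25 patients, threshold of interest p = 0.20
      print(ae_posterior_summary(x=2, n=25, threshold=0.20))
      print(max_ae_below_threshold(n=25, threshold=0.20))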
  • ABSTRACT: Many phase I trials in oncology involve multiple dose administrations to the same patient over multiple cycles, with a typical cycle lasting three weeks, about six cycles per patient, and the goal of finding the maximum tolerated dose (MTD) and studying the dose-toxicity relationship. Typically, a patient's dose is unchanged over the cycles, and the data are reduced to a binary endpoint, the occurrence of a toxicity, analyzed by considering toxicity either from the first dose only or from any cycle on study. In this paper, an alternative approach is presented that allows assessment of toxicity from each cycle and dose variation for a patient over cycles. A Markov model for the conditional probability of toxicity on any cycle, given no toxicity in previous cycles, is formulated as a function of the current and previous doses. The extra information from each cycle provides more precise estimation of the dose-toxicity relationship. Simulation results demonstrating gains in using the Markov model, as compared to analyses of a single binary outcome, are presented. Methods for utilizing the Markov model to conduct a phase I study, including choices for selecting doses for the next cycle for each patient, are developed and presented via simulation.
    Journal of Biopharmaceutical Statistics 06/2015; DOI:10.1080/10543406.2015.1052492
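    The exact form of the Markov model is not given in the abstract; a hypothetical logistic version of the cycle-specific conditional toxicity probability, together with a per-patient likelihood contribution, might look like the sketch below. All parameter values and function names are illustrative assumptions.

      import numpy as np

      def cycle_tox_prob(dose_current, dose_previous, alpha=-3.0, beta=0.6, gamma=0.2):
          """Hypothetical conditional probability of toxicity in a cycle, given no prior
          toxicity, as a logistic function of the current and previous dose."""
          eta = alpha + beta * dose_current + gamma * dose_previous
          return 1.0 / (1.0 + np.exp(-eta))

      def patient_likelihood(doses, tox_cycle=None):
          """Likelihood contribution of one patient: product of 'no toxicity' terms for the
          cycles completed without toxicity, times the toxicity probability if one occurred.
          doses: dose per cycle; tox_cycle: index of the cycle with toxicity, or None."""
          lik, prev = 1.0, 0.0
          for j, d in enumerate(doses):
              p = cycle_tox_prob(d, prev)
              if tox_cycle is not None and j == tox_cycle:
                  return lik * p
              lik *= (1.0 - p)
              prev = d
          return lik

      # Example: 6 planned cycles at dose level 3, toxicity observed in the fourth cycle
      print(patient_likelihood([3, 3, 3, 3, 3, 3], tox_cycle=3))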
  • ABSTRACT: The high consumption of psychotropic drugs is a public health problem. Rigorous statistical methods are needed to identify consumption characteristics in the post-marketing phase. Agglomerative hierarchical clustering (AHC) and latent class analysis (LCA) can both provide clusters of subjects with similar characteristics. The objective of this study was to compare these two methods in pharmacoepidemiology on several criteria: number of clusters, concordance, interpretation, and stability over time. On a data set of bromazepam consumption, the two methods show good concordance. AHC is a very stable method and provides homogeneous classes. LCA is an inferential approach and seems to identify extreme deviant behaviour more accurately.
    Journal of Biopharmaceutical Statistics 06/2015; 25:843-856. DOI:10.1080/10543406.2014.920855
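    The AHC half of the comparison is straightforward to sketch with SciPy (LCA generally requires a dedicated latent class package and is omitted here); the features, linkage method, and number of clusters are illustrative assumptions.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      # Hypothetical standardized consumption features per subject (e.g., dose, duration, refills)
      rng = np.random.default_rng(7)
      features = rng.normal(size=(300, 3))

      # Agglomerative hierarchical clustering with Ward linkage, cut into 4 clusters
      tree = linkage(features, method="ward")
      clusters = fcluster(tree, t=4, criterion="maxclust")
      print(np.bincount(clusters)[1:])   # cluster sizes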
  • ABSTRACT: Randomization tests (sometimes referred to as "re-randomization" tests) are used in clinical trials, either as an assumption-free confirmation of parametric analyses or as an independent analysis based on the principle of randomization-based inference. In the context of adaptive randomization, with either restricted or response-adaptive procedures, it is unclear how accurate such Monte Carlo approximations are, or how many Monte Carlo sequences to generate. In this paper, we describe several randomization procedures for which there is a known exact or asymptotic distribution of the randomization test. For a special class of procedures, called [Formula: see text], and binary responses, the exact test statistic has a simple closed form. For the limited subset of existing procedures with known exact and asymptotic distributions, we can use these as a benchmark for the accuracy of Monte Carlo randomization techniques. We conclude that Monte Carlo tests are very accurate and require minimal computation time. For simple tests with binary response in the class of [Formula: see text] procedures, the exact distribution provides the best test, but Monte Carlo approximations can be used when the exact distribution is difficult to compute.
    Journal of Biopharmaceutical Statistics 06/2015; DOI:10.1080/10543406.2015.1052486
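    A minimal Monte Carlo re-randomization test for a binary outcome under complete randomization, as a simple stand-in for the restricted and response-adaptive procedures studied in the paper; the data and the add-one p-value convention are illustrative.

      import numpy as np

      def monte_carlo_randomization_test(y, assignment, n_monte_carlo=10000, seed=0):
          """Monte Carlo re-randomization test under complete randomization.
          Test statistic: difference in response proportions between treatment and control."""
          rng = np.random.default_rng(seed)
          y, assignment = np.asarray(y), np.asarray(assignment)

          def stat(a):
              return y[a == 1].mean() - y[a == 0].mean()

          observed = stat(assignment)
          count = 0
          for _ in range(n_monte_carlo):
              permuted = rng.permutation(assignment)        # re-randomize under the null
              if abs(stat(permuted)) >= abs(observed):
                  count += 1
          return observed, (count + 1) / (n_monte_carlo + 1)  # add-one Monte Carlo p-value

      # Illustrative trial: 40 patients, 1 = treatment, 0 = control
      rng = np.random.default_rng(42)
      arm = rng.permutation(np.repeat([0, 1], 20))
      resp = rng.binomial(1, np.where(arm == 1, 0.6, 0.3))
      print(monte_carlo_randomization_test(resp, arm))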
  • ABSTRACT: We treat situations in which the effect of covariates on the hazard differs across subgroups of patients. To handle this situation, we consider a hybrid of the Cox model and a tree-structured model. Through simulation studies, we compared several splitting criteria for constructing this hybrid model. As a result, the criterion based on the improvement in the negative maximum partial log-likelihood obtained by splitting showed good performance in many situations. We also present results obtained by applying this tree model in an actual medical research study to show its utility.
    Journal of Biopharmaceutical Statistics 06/2015; DOI:10.1080/10543406.2015.1052485
  • Journal of Biopharmaceutical Statistics 05/2015; DOI:10.1080/10543406.2015.1052295
  • ABSTRACT: The classification scenario often requires handling more than one biomarker. The main objective of this work is to propose a Multivariate Receiver Operating Characteristic (MROC) model that linearly combines the markers to classify subjects into one of two groups and also determines an optimal cut point. Simulation studies are conducted for four sets of mean vectors and covariance matrices, and a real dataset is also used to demonstrate the proposed model. Linear and quadratic discriminant analysis have also been applied to the above data sets to illustrate the ease of the proposed model. Bootstrap estimates of the parameters of the ROC curve are also provided.
    Journal of Biopharmaceutical Statistics 05/2015; DOI:10.1080/10543406.2015.1052479
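    One well-known way to combine markers linearly under binormality is the Su-and-Liu-type combination sketched below, with an optimal cut point chosen by Youden's index on the combined score; this is an illustrative stand-in and may differ from the MROC model proposed in the paper. The simulated marker distributions are assumptions.

      import numpy as np
      from scipy import stats

      def best_linear_combination(x_healthy, x_diseased):
          """Optimal linear combination of markers under binormality:
          coefficients proportional to (S0 + S1)^{-1} (mu1 - mu0)."""
          mu0, mu1 = x_healthy.mean(axis=0), x_diseased.mean(axis=0)
          s = np.cov(x_healthy, rowvar=False) + np.cov(x_diseased, rowvar=False)
          coef = np.linalg.solve(s, mu1 - mu0)
          auc = stats.norm.cdf(np.sqrt((mu1 - mu0) @ np.linalg.solve(s, mu1 - mu0)))
          return coef, auc

      def youden_cutpoint(score_healthy, score_diseased):
          """Cut point on the combined score that maximizes Youden's J = Se + Sp - 1."""
          cuts = np.unique(np.concatenate([score_healthy, score_diseased]))
          j = [(score_diseased >= c).mean() + (score_healthy < c).mean() - 1 for c in cuts]
          return cuts[int(np.argmax(j))]

      # Illustrative bivariate markers for healthy (x0) and diseased (x1) subjects
      rng = np.random.default_rng(3)
      x0 = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=200)
      x1 = rng.multivariate_normal([1, 0.8], [[1, 0.3], [0.3, 1]], size=200)
      coef, auc = best_linear_combination(x0, x1)
      cut = youden_cutpoint(x0 @ coef, x1 @ coef)
      print(coef, auc, cut)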