Journal of Applied Statistics (J APPL STAT)

Publisher: Taylor & Francis (Routledge)

Journal description

Journal of Applied Statistics provides a forum for communication between applied statisticians and users of applied statistical techniques across a wide range of disciplines. These areas include business, computing, economics, ecology, education, management, medicine, operational research and sociology, but papers from other areas are also considered. The editorial policy is to publish rigorous but clear and accessible papers on applied techniques. Purely theoretical papers are avoided, but papers on theoretical developments that clearly demonstrate significant applied potential are welcomed. Each paper is submitted to at least two independent referees. Each issue aims for a balance of methodological innovation, thorough evaluation of existing techniques, case studies, speculative articles, book reviews and letters. In 1998, the Editor, Gopal Kanji, marked 25 years of running the Journal of Applied Statistics. Journal of Applied Statistics includes a supplement on Advances in Applied Statistics. Each annual edition of the supplement aims to provide a comprehensive and modern account of a subject at the cutting edge of applied statistics. Individual articles and entire thematic issues are invited and commissioned from authors at the forefront of their speciality, linking established themes to current and future developments.

Current impact factor: 0.42

Impact Factor Rankings

2016 Impact Factor Available summer 2017
2014 / 2015 Impact Factor 0.417
2013 Impact Factor 0.453
2012 Impact Factor 0.449
2011 Impact Factor 0.405
2010 Impact Factor 0.306
2009 Impact Factor 0.407
2008 Impact Factor 0.28
2007 Impact Factor 0.222
2006 Impact Factor 0.48
2005 Impact Factor 0.306
2004 Impact Factor 0.665
2003 Impact Factor 0.597
2002 Impact Factor 0.265
2001 Impact Factor 0.296
2000 Impact Factor 0.206
1999 Impact Factor 0.257
1998 Impact Factor 0.316
1997 Impact Factor 0.448

[Figure: impact factor over time (impact factor by year); values as in the table above]

Additional details

5-year impact 0.59
Cited half-life >10.0
Immediacy index 0.07
Eigenfactor 0.00
Article influence 0.29
Website Journal of Applied Statistics website
Other titles Journal of applied statistics (Online)
ISSN 0266-4763
OCLC 48215794
Material type Document, Periodical, Internet resource
Document type Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

Taylor & Francis (Routledge)

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Some individual journals may have policies prohibiting pre-print archiving
    • On author's personal website or departmental website immediately
    • On institutional repository or subject-based repository after a 12-month embargo
    • Publisher's version/PDF cannot be used
    • On a non-profit server
    • Published source must be acknowledged
    • Must link to publisher version
    • Set statements to accompany deposits (see policy)
    • The publisher will deposit on behalf of authors in a designated institutional repository, including PubMed Central, where a deposit agreement exists with the repository
    • STM: Science, Technology and Medicine
    • Publisher last contacted on 25/03/2014
    • This policy is an exception to the default policies of 'Taylor & Francis (Routledge)'
  • Classification
    green

Publications in this journal

  • ABSTRACT: Three new weighted rank correlation coefficients are proposed that are sensitive to agreement on both the top and the bottom rankings. The first is based on the weighted rank correlation coefficient proposed by Maturi and Abdelfattah [13: T.A. Maturi and E.H. Abdelfattah, A new weighted rank correlation, J. Math. Stat. 4 (2008), pp. 226–230. doi: 10.3844/jmssp.2008.226.230]; the second and the third are based on the order statistics and the quantiles of the Laplace distribution, respectively. The limiting distributions of the new correlation coefficients under the null hypothesis of no association between the rankings are presented, and a summary of the exact and approximate quantiles for these coefficients is provided. A simulation study compares the performance of Kendall's tau, Spearman's rho, and the new weighted rank correlation coefficients in detecting agreement on the top and the bottom rankings simultaneously. Finally, examples are given for illustration, including a real data set from financial market indices. (A generic weighted-rank-correlation sketch follows this publications list.)
    No preview · Article · Feb 2016 · Journal of Applied Statistics
  • ABSTRACT: Adaptive clinical trial designs can often improve drug-study efficiency by utilizing data obtained during the course of the trial. We present a novel Bayesian two-stage adaptive design for Phase II clinical trials with Poisson-distributed outcomes that allows person-observation-time adjustments for early termination due to either futility or efficacy. Our design is motivated by the adaptive trial of Sambucini [9: V. Sambucini, A Bayesian predictive two-stage design for Phase II clinical trials, Stat. Med. 27 (2008), pp. 1199–1224. doi: 10.1002/sim.3021], which uses binomial data. Although many frequentist and Bayesian two-stage adaptive designs for count data have been proposed in the literature, many do not allow person-time adjustments after the first stage, which limits flexibility in the study design. Our proposed design allows such flexibility by basing the second-stage person-time on the first-stage observed-count data. We demonstrate the implementation of our Bayesian predictive adaptive two-stage design using a hypothetical Phase II trial of Immune Globulin (Intravenous). (A simplified Gamma–Poisson sketch of a predictive interim decision follows this list.)
    No preview · Article · Feb 2016 · Journal of Applied Statistics
  • ABSTRACT: In medicine, there are often two diagnostic tests that serve the same purpose. Typically, one of the tests will have a lower diagnostic performance but be less invasive, easier to perform, or cheaper. Clinicians must assess the agreement between the tests while accounting for test–retest variation in both techniques. In this paper, we investigate a specific example from interventional cardiology, studying the agreement between the fractional flow reserve and the instantaneous wave-free ratio. We analyze potential definitions of the agreement (accuracy) between the two tests and compare five families of statistical estimators. We contrast their statistical behavior both theoretically and using numerical simulations. Surprisingly for clinicians, seemingly natural and equivalent definitions of the concept of agreement can lead to discordant and even nonsensical estimates.
    No preview · Article · Feb 2016 · Journal of Applied Statistics
  • ABSTRACT: This paper develops a new Bayesian approach to change-point modeling that allows the number of change-points in the observed autocorrelated time series to be unknown. The model we develop assumes that the number of change-points has a truncated Poisson distribution. A genetic algorithm is used to estimate the change-point model, which allows for structural changes with autocorrelated errors. We pay considerable attention to the construction of the autocorrelation structure within each regime and to the parameters that characterize each regime. Our techniques are found to work well in simulations with a few change-points. An empirical analysis of the annual flow of the Nile River and the monthly total energy production in South Korea yields good estimates of the structural change-points.
    No preview · Article · Feb 2016 · Journal of Applied Statistics
  • ABSTRACT: Principal component analysis (PCA) and functional principal component analysis are key tools in multivariate analysis, in particular in modelling yield curves, but little attention is given to questions of uncertainty, either in the components themselves or in derived quantities such as scores. Actuaries using PCA to model yield curves to assess interest rate risk for insurance companies are required to show any uncertainty in their calculations. Asymptotic results based on assumptions of multivariate normality are unsatisfactory for modest samples, and application of bootstrap methods is not straightforward, with the novel pitfalls of possible inversions in the order of sample components and reversals of signs. We present methods for overcoming these difficulties and discuss other potential hazards that arise. (A minimal bootstrap-PCA alignment sketch follows this list.)
    No preview · Article · Feb 2016 · Journal of Applied Statistics
  • ABSTRACT: This paper develops an objective Bayesian method for estimating the unknown parameters of the half-logistic distribution when a sample is available from a progressive Type-II censoring scheme. Noninformative priors such as the Jeffreys and reference priors are derived, and the derived priors are checked to determine whether they satisfy probability-matching criteria. Because the marginal posterior density of each parameter cannot be expressed in explicit form, the Metropolis–Hastings algorithm is applied to generate Markov chain Monte Carlo samples from the posterior density functions. Monte Carlo simulations are conducted to investigate the frequentist properties of the estimators under the noninformative priors. For illustration, a real data set is presented, and the quality of the models under the noninformative priors is evaluated through posterior predictive checking. (A simplified Metropolis–Hastings sketch follows this list.)
    No preview · Article · Jan 2016 · Journal of Applied Statistics
  • ABSTRACT: A Huberian statistical approach is applied to differentiate the gait rhythms of three neurodegenerative disorders, and a method is presented for reducing the number of parameters of an autoregressive moving average (ARMA) model of the walking signal. Gait rhythm dynamics differ between healthy controls and patients with Parkinson's disease, Huntington's disease and amyotrophic lateral sclerosis. Random variables such as the stride interval and its two sub-phases (i.e. swing and stance) exhibit great variability with natural outliers. The Huberian function, a mixture of the ℓ2 and ℓ1 norms with a low threshold γ, is used to construct new statistical indicators by deducing the corresponding skewness and kurtosis. The choice of γ is discussed to ensure consistency and convergence of a low-order ARMA estimator of the gait rhythm signal. A mathematical treatment is developed and experimental results are presented. (A sketch of the Huber function and a Huber-type location estimate follows this list.)
    No preview · Article · Jan 2016 · Journal of Applied Statistics
  • ABSTRACT: The experimental design method is a pivotal factor for the reliability of parameter estimation in the discrete choice model. The traditional orthogonal design is widely used, and several new experimental design methods, such as the D-efficient and Bayesian D-efficient designs, have been proposed recently, but insufficient empirical research has been conducted on the effectiveness of these new design methods. This study finds that the D-adoption has a statistically insignificant effect on the growth of productivity. This study is motivated by the lack of documented evidence on the effect of Chinese ESOS, and it contributes to the body of knowledge by documenting evidence on the impact of ESOS on productivity enhancement and earnings management practices. The existing literature on the productivity effect and the earnings-management effect of ESOS falls under two isolated strands of research; no documented studies have investigated these two issues simultaneously using the same dataset. As a result, the existing literature fails to identify which of these two countervailing effects of ESOS is more dominant.
    No preview · Article · Jan 2016 · Journal of Applied Statistics
  • ABSTRACT: The cumulative incidence function is of great importance in the analysis of survival data when competing risks are present. Parametric modeling of such functions, which are by nature improper, suggests the use of improper distributions. One frequently used improper distribution is that of Gompertz, which captures only monotone hazard shapes. In some applications, however, subdistribution hazard estimates have been observed with unimodal shapes. An extension to the Gompertz distribution is presented which can capture unimodal as well as monotone hazard shapes. Important properties of the proposed distribution are discussed, and the proposed distribution is used to analyze survival data from a breast cancer clinical trial. (A brief numerical note on the improper Gompertz distribution follows this list.)
    No preview · Article · Jan 2016 · Journal of Applied Statistics
  • ABSTRACT: Multilevel models have been widely applied to analyze data sets that present a hierarchical structure. In this paper we propose a generalization of normal multilevel models, named elliptical multilevel models. This proposal uses distributions in the elliptical class, which comprises all symmetric continuous distributions and includes the normal distribution as a particular case. Elliptical distributions may have lighter or heavier tails than the normal. When outlying observations are present under a normal error model, heavy-tailed error models may be applied to accommodate them. We discuss aspects of elliptical multilevel models such as maximum likelihood estimation and residual analysis for assessing the fit and the model assumptions. Finally, two motivating examples previously analyzed under normal multilevel models are reanalyzed under Student-t and power exponential multilevel models, and comparisons with the normal multilevel model are made using residual analysis.
    No preview · Article · Jan 2016 · Journal of Applied Statistics
  • ABSTRACT: In this paper, the modes of the negative binomial distribution of order k are studied. First, the method of transition probability flow graphs is introduced to handle the probability-generating function of the geometric distribution of order k, which is a special case of the negative binomial distribution of the same order; the general negative binomial distribution of order k is then investigated. By means of the probability distribution function, the mode of the geometric distribution of order k is derived, and, based on the Fibonacci and Poly-nacci sequences, the modes of the negative binomial distribution of order k are obtained in some cases for p=0.5. Finally, an application of the negative binomial distribution of order k to continuous sampling plans is given. (A short sketch computing the mode of the geometric distribution of order k numerically follows this list.)
    No preview · Article · Jan 2016 · Journal of Applied Statistics
  • ABSTRACT: The fluctuation test suggested by Hansen and Johansen [Some tests for parameter constancy in cointegrated VAR models, Econometrics J. 2 (1999), pp. 306–333] is intended to distinguish between the presence of zero and one break in cointegration relations. In this article, we provide evidence by Monte Carlo simulations that it also serves as a graphical device for detecting even multiple break locations. It suffices to consider a simplified and easy-to-implement version of the original fluctuation test. Its break detection performance depends on the sign of the change in the cointegration parameters and on the break height. The sign issue can be addressed successfully by a backward application of the test statistic. If breaks are observable, the break locations are detected at the true location on average. We apply the graphical procedure to assess the cointegration of bond yields of Spain, Italy and Portugal with German yields for the period 1995–2013, which is surprisingly supported by the trace test. However, the recursive cointegration approach shows that a stable relationship with German yields is present only for sub-periods between the introduction of the Euro and the global financial crisis, which is in line with expectations. The statistical robustness of these results is supported by a forward and backward application of the cointegration breakdown test of Andrews and Kim [Tests for cointegration breakdown over a short time period, J. Bus. Econom. Stat. 24 (2006), pp. 379–394].
    No preview · Article · Jan 2016 · Journal of Applied Statistics
  • ABSTRACT: This article provides alternative circular smoothing methods for the nonparametric estimation of periodic functions. By treating the data as 'circular', we solve the 'boundary issue' that arises in nonparametric estimation when the data are treated as 'linear'. By redefining the distance metric and the signed distance, we modify many estimators used in situations involving periodic patterns. From the perspective of nonparametric estimation of periodic functions, we present examples of the nonparametric estimation of (1) a periodic function, (2) multiple periodic functions, (3) an evolving function, (4) a periodically varying-coefficient model and (5) a generalized linear model with periodically varying coefficients. From the perspective of circular statistics, we provide alternative approaches to calculating the weighted average and evaluating the 'linear/circular–linear/circular' association and regression. Simulation studies and an empirical study of an electricity price index illustrate and compare our methods with other methods in the literature. (A minimal circular-kernel smoothing sketch follows this list.)
    No preview · Article · Jan 2016 · Journal of Applied Statistics
  • Source
    ABSTRACT: The problem motivating the paper is the quantification of students' preferences regarding teaching/coursework quality, under certain numerical restrictions, in order to build a model for identifying, assessing and monitoring the major components of overall academic quality. After reviewing the strengths and limitations of conjoint analysis and of the random coefficient regression model used in similar problems in the past, we propose a Bayesian beta regression model with a Dirichlet prior on the model coefficients. This approach not only allows the incorporation of informative priors when they are available but also provides user-friendly interfaces and direct probability interpretations for all quantities. Furthermore, it is a natural way to implement the usual constraints on the model weights/coefficients. The model was applied to data collected in 2009 and 2013 from undergraduate students at Panteion University, Athens, Greece, and besides the construction of an instrument for the assessment and monitoring of teaching quality, it provided some input for a preliminary discussion on the association between the differences in students' preferences across the two time periods and the current Greek economic and financial crisis.
    Preview · Article · Jan 2016 · Journal of Applied Statistics
  • Source

    Preview · Article · Jan 2016 · Journal of Applied Statistics
  • ABSTRACT: In this study, we consider a stochastic one-way analysis of covariance model in which the distribution of the error terms is long-tailed symmetric. Estimators of the unknown model parameters are obtained using the maximum likelihood (ML) methodology, with an iteratively reweighting algorithm used to compute the ML estimates. We also propose a new test statistic based on the ML estimators for testing linear contrasts of the treatment effects. In a simulation study, we compare the efficiencies of the traditional least-squares (LS) estimators of the model parameters with the corresponding ML estimators, and we compare the power of the test statistics based on the LS and ML estimators, respectively. A real-life example is given at the end of the study.
    No preview · Article · Dec 2015 · Journal of Applied Statistics
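
Illustrative code sketches

The short Python sketches below relate to several of the abstracts above. They are generic, simplified illustrations of the techniques named in those abstracts, not the authors' implementations; every function name, parameter value and data set in them is an assumption made only for the sake of the example.

The first sketch, for the weighted rank correlation paper, shows one generic way to weight a rank correlation so that agreement on the top and the bottom ranks counts more than agreement in the middle. The V-shaped weights are an illustrative choice, not one of the three coefficients proposed in the paper.

    # Generic weighted rank correlation emphasizing agreement on extreme ranks
    # (illustrative; not the specific coefficients proposed in the paper).
    import numpy as np
    from scipy.stats import rankdata

    def weighted_rank_corr(x, y):
        """Weighted Pearson correlation of ranks, with weights that are largest
        for the top and bottom ranks and smallest in the middle."""
        n = len(x)
        rx, ry = rankdata(x), rankdata(y)
        # V-shaped weights: emphasize both ends of the two rankings.
        w = np.abs(rx - (n + 1) / 2) + np.abs(ry - (n + 1) / 2)
        w = w / w.sum()
        mx, my = np.sum(w * rx), np.sum(w * ry)
        cov = np.sum(w * (rx - mx) * (ry - my))
        return cov / np.sqrt(np.sum(w * (rx - mx) ** 2) * np.sum(w * (ry - my) ** 2))

    rng = np.random.default_rng(0)
    x = rng.normal(size=20)
    y = x + 0.5 * rng.normal(size=20)
    print(weighted_rank_corr(x, y))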
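
For the Bayesian two-stage Poisson design paper, the sketch below works through a simplified Gamma-Poisson interim calculation: the stage-1 posterior, the negative-binomial posterior predictive for the stage-2 count, and the predictive probability of concluding efficacy. The prior, thresholds and person-time figures are illustrative assumptions, and the paper's person-time adaptation rule is not reproduced.

    # Simplified sketch of a predictive interim decision for a Poisson-rate trial.
    # Gamma(a, b) prior on the event rate lambda (rate parametrization); the prior,
    # thresholds and person-time figures below are illustrative assumptions.
    import numpy as np
    from scipy.stats import gamma, nbinom

    a, b = 0.5, 1.0          # Gamma prior: shape a, rate b
    lam0 = 1.2               # null event rate; efficacy means lambda < lam0
    y1, t1 = 8, 10.0         # stage-1 event count and person-time
    t2 = 15.0                # planned stage-2 person-time (could be adapted to y1)

    # Posterior after stage 1: Gamma(a + y1, b + t1).
    a1, b1 = a + y1, b + t1

    # Posterior predictive for the stage-2 count: negative binomial with
    # size a1 and success probability b1 / (b1 + t2).
    p = b1 / (b1 + t2)
    y2_grid = np.arange(0, 200)
    pred = nbinom.pmf(y2_grid, a1, p)

    # For each possible stage-2 count, the final posterior probability of efficacy.
    post_eff = gamma.cdf(lam0, a1 + y2_grid, scale=1.0 / (b1 + t2))

    # Predictive probability that the trial concludes efficacy at the end;
    # an interim rule might stop for futility if this is small, for efficacy if large.
    pp = np.sum(pred * (post_eff > 0.95))
    print("predictive probability of success:", pp)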
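
For the paper on uncertainty in principal components of yield curves, the sketch below illustrates the alignment step the abstract alludes to: each bootstrap replicate's components are matched to the original components and sign-flipped before percentile intervals are formed. It is a minimal sketch on synthetic data, not the authors' procedure.

    # Bootstrapping principal component loadings with alignment to guard against
    # component reordering and sign flips across bootstrap samples (illustrative only).
    import numpy as np

    def pca_loadings(X, k):
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:k]                      # rows are component loading vectors

    def aligned_bootstrap_loadings(X, k=2, B=500, seed=0):
        rng = np.random.default_rng(seed)
        ref = pca_loadings(X, k)
        boots = []
        for _ in range(B):
            Xb = X[rng.integers(0, len(X), size=len(X))]
            Vb = pca_loadings(Xb, k)
            # Match each bootstrap component to the reference component with the
            # largest absolute inner product, then fix its sign to agree with it.
            used, aligned = set(), np.empty_like(ref)
            for i, r in enumerate(ref):
                sims = [abs(r @ v) if j not in used else -np.inf for j, v in enumerate(Vb)]
                j = int(np.argmax(sims))
                used.add(j)
                aligned[i] = Vb[j] * np.sign(r @ Vb[j])
            boots.append(aligned)
        return np.array(boots)             # shape (B, k, p): basis for percentile intervals

    X = np.random.default_rng(1).normal(size=(100, 5))
    loads = aligned_bootstrap_loadings(X)
    print(np.percentile(loads[:, 0, :], [2.5, 97.5], axis=0))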
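
For the objective Bayesian half-logistic paper, the sketch below runs a random-walk Metropolis-Hastings sampler for the scale parameter. It is simplified to a complete (uncensored) sample with a 1/sigma prior, both assumptions made here for illustration; the paper works with progressively Type-II censored data and formally derived noninformative priors.

    # Random-walk Metropolis-Hastings for the scale of a half-logistic distribution,
    # on the log scale. Complete data and a 1/sigma prior are illustrative assumptions.
    import numpy as np

    def log_post(log_sigma, x):
        sigma = np.exp(log_sigma)
        loglik = np.sum(np.log(2) - np.log(sigma) - x / sigma
                        - 2 * np.log1p(np.exp(-x / sigma)))
        return loglik - np.log(sigma) + log_sigma   # 1/sigma prior plus log-transform Jacobian

    rng = np.random.default_rng(0)
    x = np.abs(rng.logistic(loc=0.0, scale=2.0, size=50))   # half-logistic sample, true sigma = 2

    log_sigma, chain = 0.0, []
    for it in range(20000):
        prop = log_sigma + 0.3 * rng.normal()               # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop, x) - log_post(log_sigma, x):
            log_sigma = prop
        chain.append(np.exp(log_sigma))

    chain = np.array(chain[5000:])                           # discard burn-in
    print("posterior mean of sigma:", chain.mean())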
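
For the gait-rhythm paper, the sketch below shows the Huber function (quadratic below the threshold gamma, linear above it) and an iteratively reweighted Huber-type location estimate on data with outliers. The paper's ARMA modelling and its Huber-based skewness and kurtosis indicators are not reproduced; the threshold and the data are illustrative.

    # Huber function and a Huber-type location estimate via iterative reweighting.
    import numpy as np

    def huber_rho(r, gamma=1.345):
        a = np.abs(r)
        return np.where(a <= gamma, 0.5 * r**2, gamma * a - 0.5 * gamma**2)

    def huber_location(x, gamma=1.345, iters=50):
        mu = np.median(x)
        s = np.median(np.abs(x - mu)) / 0.6745                        # robust scale (MAD)
        for _ in range(iters):
            r = (x - mu) / s
            w = np.where(np.abs(r) <= gamma, 1.0, gamma / np.abs(r))  # reweighting step
            mu = np.sum(w * x) / np.sum(w)
        return mu

    rng = np.random.default_rng(0)
    # Illustrative "stride intervals" with a few natural outliers.
    x = np.concatenate([rng.normal(1.1, 0.05, 95), rng.normal(2.5, 0.3, 5)])
    print("Huber location:", huber_location(x), "sample mean:", x.mean())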
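
For the cumulative incidence paper, the note below checks numerically why the Gompertz distribution with a negative shape parameter is improper, the property that makes it usable as a cumulative incidence (subdistribution) function; the proposed extension itself is not reproduced, and the parameter values are arbitrary.

    # Gompertz distribution with hazard b*exp(c*t): for c < 0 its distribution function
    # levels off below 1, i.e. it is improper, so it can serve as a cumulative
    # incidence function for a single competing risk. Illustrative numerical check.
    import numpy as np

    def gompertz_cif(t, b, c):
        # F(t) = 1 - exp(-(b/c) * (exp(c*t) - 1)); plateau below 1 when c < 0.
        return 1.0 - np.exp(-(b / c) * (np.exp(c * t) - 1.0))

    t = np.linspace(0, 50, 6)
    b, c = 0.2, -0.3
    print(gompertz_cif(t, b, c))
    print("plateau (total incidence of this cause):", 1.0 - np.exp(b / c))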
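
For the paper on modes of the negative binomial distribution of order k, the sketch below computes the probability mass function of the geometric distribution of order k (the waiting time until the first run of k consecutive successes) by stepping a small Markov chain over the current run length, and reads off the mode numerically. This is a generic computation, not the paper's flow-graph derivation.

    # Probability mass function and mode of the geometric distribution of order k.
    import numpy as np

    def geometric_order_k_pmf(k, p, n_max):
        q = 1.0 - p
        probs = np.zeros(n_max + 1)
        state = np.zeros(k)                    # state[j] = P(current success run == j, not absorbed)
        state[0] = 1.0
        for n in range(1, n_max + 1):
            probs[n] = p * state[k - 1]        # a success after a run of k-1 finishes the run
            new = np.zeros(k)
            new[0] = q * state.sum()           # a failure resets the run to length 0
            new[1:] = p * state[:-1]           # a success extends runs shorter than k-1
            state = new
        return probs

    pmf = geometric_order_k_pmf(k=3, p=0.5, n_max=200)
    print("mode:", int(np.argmax(pmf)), "captured mass:", pmf.sum())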
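
For the circular smoothing paper, the sketch below applies Nadaraya-Watson smoothing with a von Mises (circular) kernel so that observations near the period boundary borrow strength from both sides. The kernel, its concentration parameter and the simulated data are illustrative choices, not the paper's estimators.

    # Nadaraya-Watson smoothing of a periodic signal with a circular (von Mises) kernel.
    import numpy as np

    def circular_nw(theta_grid, theta_obs, y_obs, kappa=20.0):
        """Kernel weights depend only on the circular distance via cos(theta - theta_i)."""
        w = np.exp(kappa * np.cos(theta_grid[:, None] - theta_obs[None, :]))
        return (w @ y_obs) / w.sum(axis=1)

    rng = np.random.default_rng(0)
    theta = rng.uniform(0, 2 * np.pi, 200)        # e.g. time within a period mapped to [0, 2*pi)
    y = np.sin(theta) + 0.5 * np.cos(2 * theta) + 0.3 * rng.normal(size=200)

    grid = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    print(circular_nw(grid, theta, y)[:5])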