Journal of Applied Statistics (J APPL STAT)

Publisher: Taylor & Francis


Journal of Applied Statistics provides a forum for communication between applied statisticians and users of applied statistical techniques across a wide range of disciplines. These include business, computing, economics, ecology, education, management, medicine, operational research and sociology, but papers from other areas are also considered. The editorial policy is to publish rigorous but clear and accessible papers on applied techniques. Purely theoretical papers are avoided, but papers on theoretical developments that clearly demonstrate significant applied potential are welcomed. Each paper is refereed by at least two independent referees. Each issue aims for a balance of methodological innovation, thorough evaluation of existing techniques, case studies, speculative articles, book reviews and letters. In 1998, the Editor, Gopal Kanji, marked 25 years of running the Journal of Applied Statistics. The journal includes a supplement on Advances in Applied Statistics; each annual edition of the supplement aims to provide a comprehensive and modern account of a subject at the cutting edge of applied statistics. Individual articles and entire thematic issues are invited and commissioned from authors at the forefront of their specialities, linking established themes to current and future developments.

  • Website
    Journal of Applied Statistics website
  • Other titles
    Journal of applied statistics (Online)
  • Material type
    Document, Periodical, Internet resource
  • Document type
    Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

Taylor & Francis

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author cannot archive a post-print version
  • Restrictions
    • 12 month embargo for STM, Behavioural Science and Public Health Journals
    • 18 month embargo for SSH journals
  • Conditions
    • Some individual journals may have policies prohibiting pre-print archiving
    • Pre-print on author's own website, Institutional or Subject Repository
    • Post-print on author's own website, Institutional or Subject Repository
    • Publisher's version/PDF cannot be used
    • On a non-profit server
    • Published source must be acknowledged
    • Must link to publisher version
    • Set statements to accompany deposits (see policy)
    • Publisher will deposit to PMC on behalf of NIH authors.
    • STM: Science, Technology and Medicine
    • SSH: Social Science and Humanities
    • 'Taylor & Francis (Psychology Press)' is an imprint of 'Taylor & Francis'
  • Classification
    yellow

Publications in this journal

  • ABSTRACT: This paper investigates a new test for normality that is easy for biomedical researchers to understand and easy to implement in all dimensions. In terms of power against a broad range of alternatives, the new test outperforms the best-known competitors in the literature, as demonstrated by simulation results. In addition, the proposed test is illustrated using data from real biomedical studies.
    Journal of Applied Statistics 01/2014; 41(2):351-363.
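The abstract above does not specify the test statistic, but a moment-based normality check in the same spirit can be sketched in a few lines. This is an illustrative stand-in, not the paper's test: the classical Jarque-Bera statistic combines sample skewness and excess kurtosis.

```python
import numpy as np

def jarque_bera(x):
    """Moment-based normality statistic: combines sample skewness and
    excess kurtosis; approximately chi-squared with 2 df under normality."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = x - x.mean()
    s2 = np.mean(z**2)
    skew = np.mean(z**3) / s2**1.5
    kurt = np.mean(z**4) / s2**2
    return n / 6.0 * (skew**2 + (kurt - 3.0)**2 / 4.0)

rng = np.random.default_rng(0)
normal_stat = jarque_bera(rng.normal(size=5000))       # small under normality
skewed_stat = jarque_bera(rng.exponential(size=5000))  # large under a skewed alternative
```

Under normality the statistic is approximately chi-squared with 2 degrees of freedom, so values far above roughly 6 signal a departure from normality.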
  • ABSTRACT: Determining the effectiveness of different treatments from observational data, which are characterized by imbalance between groups due to lack of randomization, is challenging. Propensity matching is often used to rectify imbalances among prognostic variables, but there are no guidelines on how to appropriately analyze group-matched data when the outcome is a zero-inflated count. In addition, there is debate over whether to account for the correlation of responses induced by matching, and whether to adjust for the variables used in generating the propensity score in the final analysis. The aim of this research is to compare covariate-unadjusted and covariate-adjusted zero-inflated Poisson models that do and do not account for the correlation. A simulation study demonstrates that it is necessary to adjust for potential residual confounding, but that accounting for the correlation is less important. The methods are applied to a biomedical research data set.
    Journal of Applied Statistics 01/2014; 41(1).
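The zero-inflated Poisson outcome described above mixes structural zeros with ordinary Poisson counts. A minimal simulation (a generic illustration, not the paper's analysis; the mixing probability `pi` and rate `lam` are hypothetical values) shows why a plain Poisson model underpredicts zeros:

```python
import numpy as np

rng = np.random.default_rng(1)
n, pi, lam = 100_000, 0.3, 2.0  # pi: structural-zero probability (hypothetical values)

# Zero-inflated Poisson draw: with probability pi emit a structural zero,
# otherwise draw from Poisson(lam).
structural = rng.random(n) < pi
counts = np.where(structural, 0, rng.poisson(lam, size=n))

observed_zero_rate = np.mean(counts == 0)
zip_zero_prob = pi + (1 - pi) * np.exp(-lam)  # model-implied P(Y = 0)
poisson_zero_prob = np.exp(-lam)              # plain Poisson P(Y = 0)
```

The model-implied zero probability `pi + (1 - pi) * exp(-lam)` always exceeds the plain-Poisson value `exp(-lam)`, which is the excess-zero phenomenon the zero-inflated model is designed to capture.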
  • ABSTRACT: We investigate and develop methods for structural break detection, considering time series from thermal spraying process monitoring. Since engineers induce technical malfunctions during the processes, the time series exhibit structural breaks at known time points, giving us valuable information for the investigations. First, we consider a recently developed robust online (real-time) filtering (i.e. smoothing) procedure that comprises a test for local linearity. This test rejects when jumps and trend changes are present, so it can also be used to detect such structural breaks online. Second, based on the filtering procedure, we develop a robust method for the online detection of ongoing trends. We investigate both methods for the online detection of structural breaks through simulations and applications to the time series from the manipulated spraying processes. Third, we consider a recently developed fluctuation test for constant variances that can be applied offline, i.e. after the whole time series has been observed, to control the spraying results. Since this test is not reliable when jumps are present in the time series, we suggest a data transformation based on filtering and demonstrate that this transformation makes the test applicable.
    Journal of Applied Statistics 11/2013;
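A rolling median is one simple robust filter of the kind the abstract alludes to. The sketch below is a stand-in, not the authors' procedure: the window width, the jump threshold, and the simulated level shift are all assumptions made for illustration.

```python
import numpy as np

def rolling_median_filter(y, width=21):
    """Robust moving-median signal extraction (a simple stand-in for a
    robust online filter); the window shrinks at the series edges."""
    y = np.asarray(y, dtype=float)
    half = width // 2
    out = np.empty_like(y)
    for t in range(y.size):
        lo, hi = max(0, t - half), min(y.size, t + half + 1)
        out[t] = np.median(y[lo:hi])
    return out

rng = np.random.default_rng(2)
t = np.arange(400)
signal = np.where(t < 200, 0.0, 4.0)  # simulated level shift (break) at t = 200
y = signal + rng.normal(scale=0.5, size=t.size)

level = rolling_median_filter(y)
# Flag a break where the filtered level moves by far more than its
# sampling variability (robust scale from the median absolute deviation).
resid_scale = 1.4826 * np.median(np.abs(y - level))
jumps = np.flatnonzero(np.abs(np.diff(level)) > 5 * resid_scale / np.sqrt(21))
```

The median-based level estimate resists outliers, so a large step in the filtered series is evidence of a genuine jump rather than a single aberrant observation.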
  • ABSTRACT: Regression models play a dominant role in analyzing data sets arising from areas such as agricultural experiments, space experiments, biological experiments and financial modeling. A major step in developing a regression model is the assumption made about the distribution of the error terms. It is customary to assume that the errors follow a Gaussian distribution, but Gaussian errors have drawbacks, such as the distribution being mesokurtic with kurtosis three. In many practical situations the variables under study are not mesokurtic but platykurtic. To analyze such platykurtic variables, a two-variable regression model with new symmetrically distributed errors is developed and analyzed. The maximum likelihood (ML) estimators of the model parameters are derived, and the properties of the ML estimators under the new symmetrically distributed errors are discussed. A simulation study comparing the proposed model with the Gaussian-error model shows that the proposed model performs better when the variables are platykurtic. Some applications of the developed model are also pointed out. Keywords: Gaussian errors; maximum likelihood; new symmetric distribution; platykurtic; simulation studies
    Journal of Applied Statistics 09/2013;
  • Journal of Applied Statistics 07/2013; 40(12):2777.
  • ABSTRACT: In this article, an optimal design under the restriction of a pre-determined experimental budget is developed for the Pareto distribution when the life test is progressively group censored. We use the maximum likelihood method to obtain the point estimator of the Pareto parameter. We propose two approaches for deciding the number of test units, the number of inspections and the length of the inspection interval under a limited budget such that the asymptotic variance of the estimator of the Pareto parameter is minimized. A numerical example illustrates the proposed method, and a sensitivity analysis is also presented.
    Journal of Applied Statistics 07/2013;
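For context, the complete-sample ML estimator of the Pareto shape parameter has a closed form, which a quick simulation can verify. The paper treats the harder progressively group-censored design; the parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha_true, x_m, n = 2.5, 1.0, 50_000  # shape and known scale (hypothetical values)

# Inverse-CDF sampling: X = x_m * U**(-1/alpha) follows a Pareto(alpha, x_m) law.
x = x_m * rng.random(n) ** (-1.0 / alpha_true)

# Complete-sample ML estimator of the shape with known scale:
# alpha_hat = n / sum(log(x / x_m)).
alpha_hat = n / np.log(x / x_m).sum()
```

Under group censoring only interval counts are observed, so the likelihood involves interval probabilities rather than densities and the estimator loses this closed form, which is what motivates the paper's design optimization.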
  • ABSTRACT: In this paper, we develop a novel family of bimodal univariate distributions (also allowing for unimodal shapes) and demonstrate its use on the well-known, almost classical data set of durations and waiting times of eruptions of the Old Faithful geyser in Yellowstone Park. Specifically, we analyze the Old Faithful data set with 272 data points provided in Dekking et al. [3]. In the process, we develop a bivariate distribution using a copula technique and compare its fit to that of a mixture of bivariate normal distributions fitted to the same bivariate data set. We believe the fit analysis and comparison are primarily illustrative from an educational perspective for distribution-theory modelers, since a variety of statistical techniques are demonstrated in the process. We do not claim one model as preferred over the other.
    Journal of Applied Statistics 05/2013; 40(9):1965-1978.
  • ABSTRACT: The dates of U.S. business cycles are reported by the NBER with considerable delay, so an early indication of turning points is of particular interest. This paper proposes a novel sequential approach designed for timely signaling of these turning points using the time series of coincident economic indicators. The approach exhibits a range of theoretical optimality properties for early signaling; moreover, it is transparent and easy to implement. The empirical study evaluates the signaling ability of the proposed methodology.
    Journal of Applied Statistics 04/2013; 40(2):438-448.
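A classical CUSUM chart is one sequential detector with optimality properties of the kind mentioned above. The sketch below is generic, not the paper's procedure: the drift and threshold constants and the simulated indicator series are assumptions.

```python
import numpy as np

def cusum_alarm(x, drift=0.5, threshold=8.0):
    """One-sided CUSUM for a downward mean shift: alarm at the first time
    the cumulative downward-drift statistic exceeds the threshold."""
    s, alarm = 0.0, None
    for t, v in enumerate(x):
        s = max(0.0, s - v - drift)  # accumulates only while v runs below -drift
        if s > threshold:
            alarm = t
            break
    return alarm

rng = np.random.default_rng(4)
# Hypothetical coincident indicator: growth around +0.5 before a turning
# point at t = 150, around -1.0 afterwards (unit noise).
x = np.concatenate([rng.normal(0.5, 1.0, 150), rng.normal(-1.0, 1.0, 100)])
alarm_time = cusum_alarm(x)
```

The drift constant trades detection delay against false alarms: a larger drift ignores small dips but waits longer before flagging a genuine downturn.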
  • ABSTRACT: Agreement among raters is an important issue in medicine, as well as in education and psychology. Agreement between two raters on a nominal or ordinal rating scale has been investigated in many articles, and the multi-rater case with normally distributed ratings has also been explored at length. However, there is a lack of research on multiple raters using an ordinal rating scale. In this simulation study, several methods for analyzing rater agreement were compared, focusing on the special case of multiple raters using a bounded ordinal rating scale. The proposed agreement methods were compared across different settings. Three main ordinal-data simulation settings were used (normal, skewed and shifted data), and the methods were also applied to a real data set from dermatology. The simulation results showed that Kendall's W and the mean gamma highly overestimated the agreement in data sets with shifts in the data. ICC4 for bounded data should be avoided in agreement studies with rating scales of fewer than five categories, where this method highly overestimated the simulated agreement. The difference in bias for all methods under study, except the mean gamma and Kendall's W, decreased as the rating scale increased. The bias of ICC3 was consistent and small for nearly all simulation settings except the low-agreement setting in the shifted data. Researchers should be careful in selecting agreement methods, especially if shifts in ratings between raters exist, and may wish to apply more than one method before drawing conclusions. Keywords: agreement; multi-rater; bounded ordinal scale; normal distribution; skewed distribution
    Journal of Applied Statistics 04/2013; 40(7):1506-1519.
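Kendall's W, one of the agreement measures compared above, is straightforward to compute from a raters-by-subjects matrix. This is a minimal sketch assuming no tied ranks within a rater; the tie-correction term is omitted.

```python
import numpy as np

def kendalls_w(ratings):
    """Kendall's coefficient of concordance W for an (m raters x n subjects)
    matrix; assumes no ties within a rater (ties need a correction term)."""
    ratings = np.asarray(ratings, dtype=float)
    m, n = ratings.shape
    # Within-rater ranks 1..n (double-argsort rank trick, valid without ties).
    ranks = ratings.argsort(axis=1).argsort(axis=1) + 1
    col_sums = ranks.sum(axis=0)
    s = ((col_sums - col_sums.mean()) ** 2).sum()
    return 12.0 * s / (m**2 * (n**3 - n))

perfect = np.tile(np.arange(1, 6), (3, 1))  # 3 raters with identical rankings
w_perfect = kendalls_w(perfect)             # -> 1.0
reversed_pair = np.vstack([np.arange(1, 6), np.arange(5, 0, -1)])
w_opposite = kendalls_w(reversed_pair)      # -> 0.0 (rankings exactly reversed)
```

W ranges from 0 (no concordance) to 1 (identical rankings); with the heavily tied ratings of a bounded ordinal scale, the tie-corrected version should be used instead.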
  • Journal of Applied Statistics 04/2013;
  • ABSTRACT: In this paper, we study multi-class differential gene expression detection for microarray data. We propose a likelihood-based approach to estimating an empirical null distribution that incorporates gene interactions and provides more accurate false-positive control than the commonly used permutation-based or theoretical null distributions. We propose to rank important genes by p-values or local false discovery rates based on the estimated empirical null distribution. Through simulations and an application to lung transplant microarray data, we illustrate the competitive performance of the proposed method.
    Journal of Applied Statistics 02/2013; 40(2):347-357.
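The contrast between a theoretical N(0,1) null and an empirical null can be sketched with robust location/scale estimates. This is a simple stand-in for the paper's likelihood-based fit; the simulated z-scores and cutoffs are hypothetical.

```python
import math
import numpy as np

rng = np.random.default_rng(5)
# 10,000 null genes whose z-scores are overdispersed and slightly shifted
# (sd 1.5, mean 0.2) plus 100 truly differential genes -- hypothetical data.
z = np.concatenate([rng.normal(0.2, 1.5, 10_000), rng.normal(6.0, 1.0, 100)])

# Empirical null: estimate center and spread robustly from the bulk of the
# z-scores (median and median absolute deviation).
mu0 = np.median(z)
sigma0 = 1.4826 * np.median(np.abs(z - mu0))

def p_two_sided(zi, mu, sigma):
    """Two-sided p-value under a N(mu, sigma^2) null."""
    t = abs(zi - mu) / sigma
    return math.erfc(t / math.sqrt(2.0))

p_empirical = np.array([p_two_sided(zi, mu0, sigma0) for zi in z])
p_theoretical = np.array([p_two_sided(zi, 0.0, 1.0) for zi in z])

# The theoretical N(0,1) null calls far more truly null genes "significant".
false_calls_theoretical = np.sum(p_theoretical[:10_000] < 0.01)
false_calls_empirical = np.sum(p_empirical[:10_000] < 0.01)
```

When gene-gene correlation inflates the spread of null z-scores, the theoretical null badly miscalibrates p-values, which is the false-positive-control problem the empirical null addresses.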
  • ABSTRACT: Today's extremely large-scale genetic data present significant challenges for cluster analysis. Most existing clustering methods are built on Euclidean distance and geared toward continuous responses. They work well for clustering microarray gene expression data, for example, but often perform poorly for large-scale single nucleotide polymorphism data. In this paper, we study the penalized latent class model for clustering extremely large-scale discrete data. The model accounts for the discrete nature of the response through appropriate generalized linear models and adopts the lasso-penalized likelihood approach for simultaneous model estimation and selection of important covariates. We develop efficient numerical algorithms for model estimation based on iterative coordinate descent and extend the Expectation-Maximization algorithm to incorporate and model missing values. We use simulation studies and applications to the international HapMap single nucleotide polymorphism data to illustrate the competitive performance of the penalized latent class model.
    Journal of Applied Statistics 02/2013; 40(2):358-367.
  • ABSTRACT: Nowadays there is increasing interest in multi-point models and their applications in the Earth sciences. However, users not only ask for multi-point methods able to capture the uncertainties of complex structures and reproduce the properties of a training image; they also need quantitative tools for assessing whether a set of realizations has the required properties. Moreover, it is crucial to study the sensitivity of the realizations to the size of the data template and to analyze how fast realization-based statistics converge on average toward training-based statistics. In this paper, similarity measures and convergence indexes based on physically measurable quantities and high-order cumulants are presented. In the case study, multi-point simulations of the spatial distribution of coarse-grained limestone and calcareous rock, generated using three templates of different sizes, are compared, and convergence toward training-based statistics is analyzed for increasing numbers of realizations.
    Journal of Applied Statistics 01/2013;
  • Journal of Applied Statistics 01/2013; 40(9):2069-2086.

Related Journals