Journal of Applied Statistics Impact Factor & Information

Publisher: Taylor & Francis (Routledge)

Journal description

Journal of Applied Statistics provides a forum for communication between applied statisticians and users of applied statistical techniques across a wide range of disciplines. These areas include business, computing, economics, ecology, education, management, medicine, operational research and sociology, but papers from other areas are also considered. The editorial policy is to publish rigorous but clear and accessible papers on applied techniques. Purely theoretical papers are avoided, but papers on theoretical developments that clearly demonstrate significant applied potential are welcomed. Each paper is submitted to at least two independent referees. Each issue aims for a balance of methodological innovation, thorough evaluation of existing techniques, case studies, speculative articles, book reviews and letters. As of 1998, the Editor, Gopal Kanji, had been running the Journal of Applied Statistics for 25 years.

Journal of Applied Statistics includes a supplement on Advances in Applied Statistics. Each annual edition of the supplement aims to provide a comprehensive and modern account of a subject at the cutting edge of applied statistics. Individual articles and entire thematic issues are invited and commissioned from authors at the forefront of their speciality, linking established themes to current and future developments.

Current impact factor: 0.45

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2013 / 2014 Impact Factor 0.453
2012 Impact Factor 0.449
2011 Impact Factor 0.405
2010 Impact Factor 0.306
2009 Impact Factor 0.407
2008 Impact Factor 0.28
2007 Impact Factor 0.222
2006 Impact Factor 0.48
2005 Impact Factor 0.306
2004 Impact Factor 0.665
2003 Impact Factor 0.597
2002 Impact Factor 0.265
2001 Impact Factor 0.296
2000 Impact Factor 0.206
1999 Impact Factor 0.257
1998 Impact Factor 0.316
1997 Impact Factor 0.448

Impact factor over time

[Chart: impact factor plotted by year; the values are as tabulated above.]

Additional details

5-year impact 0.53
Cited half-life 0.00
Immediacy index 0.07
Eigenfactor 0.00
Article influence 0.32
Website Journal of Applied Statistics website
Other titles Journal of applied statistics (Online)
ISSN 0266-4763
OCLC 48215794
Material type Document, Periodical, Internet resource
Document type Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

Taylor & Francis (Routledge)

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Some individual journals may have policies prohibiting pre-print archiving
    • On author's personal website or departmental website immediately
    • On institutional repository or subject-based repository after a 12-month embargo
    • Publisher's version/PDF cannot be used
    • On a non-profit server
    • Published source must be acknowledged
    • Must link to publisher version
    • Set statements to accompany deposits (see policy)
    • The publisher will deposit on behalf of authors in a designated institutional repository, including PubMed Central, where a deposit agreement exists with the repository
    • STM: Science, Technology and Medicine
    • Publisher last contacted on 25/03/2014
    • This policy is an exception to the default policies of 'Taylor & Francis (Routledge)'
  • Classification
    • green

Publications in this journal

  • Journal of Applied Statistics 08/2015; DOI:10.1080/02664763.2015.1070803
  • Journal of Applied Statistics 08/2015; DOI:10.1080/02664763.2015.1048672
  • Journal of Applied Statistics 08/2015; DOI:10.1080/02664763.2015.1063117
  • Journal of Applied Statistics 08/2015; DOI:10.1080/02664763.2015.1070809
  • ABSTRACT: In the literature, traders are often classified into informed and uninformed, and trades from informed traders have market impacts. We investigate these trades by first establishing a scheme to identify influential trades among ordinary trades under certain criteria. The differential properties between these two types of trades are examined via four transaction states classified by the trade price, trade volume, quotes, and quoted depth. The marginal distribution of the four states and the transition probabilities between states are shown to be distinct for informed trades and ordinary liquidity trades. Furthermore, four market reaction factors are introduced and logistic regression models of the influential trades are established based on these four factors. An empirical study on high-frequency transaction data from the NYSE TAQ database shows supportive evidence of high correct classification rates for the logistic regression models.
    Journal of Applied Statistics 07/2015; 42(7). DOI:10.1080/02664763.2014.1000274
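    A minimal sketch, on simulated data, of the kind of logistic-regression classification described above; the four generic features stand in for the paper's market reaction factors, and all names and coefficients are hypothetical (the NYSE TAQ data and the authors' identification scheme are not reproduced):

```python
# Hypothetical illustration: logistic regression separating "influential"
# from ordinary trades using four stand-in market-reaction features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))              # stand-in for the four reaction factors
beta = np.array([1.2, -0.8, 0.5, 0.3])   # made-up coefficients
p = 1.0 / (1.0 + np.exp(-(X @ beta)))
y = rng.binomial(1, p)                   # 1 = influential trade, 0 = ordinary

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("correct classification rate:", accuracy_score(y_te, clf.predict(X_te)))
```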
  • ABSTRACT: This paper deals with the analysis of the Met Office Hadley Centre's sea surface temperature data set (HadSST3) using long-range dependence techniques. We incorporate linear and segmented trends using fractional integration, thus permitting long-memory behavior in the detrended series. The results indicate the existence of warming trends in the three series examined (Northern and Southern Hemispheres along with global temperatures), with orders of integration in the range (0.5, 1), implying nonstationary long-memory and mean-reverting behavior. This is innovative compared with other works that assume short-memory behavior in the detrended series. Allowing for segmented trends, two features are observed: increasing values in the degree of dependence of the series across time, and significant warming trends from 1940 onwards.
    Journal of Applied Statistics 07/2015; 42(7). DOI:10.1080/02664763.2014.1001328
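    One standard way to gauge an order of integration d such as the one discussed above is the Geweke and Porter-Hudak (GPH) log-periodogram regression. The sketch below is a generic illustration on simulated white noise, not the authors' segmented-trend procedure for HadSST3:

```python
# Sketch of a GPH log-periodogram estimate of the long-memory parameter d.
import numpy as np

def gph_estimate(x, frac=0.5):
    """Regress log periodogram on log(4*sin^2(lambda/2)) at low frequencies."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = int(n ** frac)                      # number of low frequencies used
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    fft = np.fft.fft(x - x.mean())
    I = (np.abs(fft[1:m + 1]) ** 2) / (2 * np.pi * n)   # periodogram ordinates
    X = np.log(4 * np.sin(freqs / 2) ** 2)
    Y = np.log(I)
    slope = np.polyfit(X, Y, 1)[0]
    return -slope                           # estimate of d

# Example: an I(0) white-noise series should give d close to 0.
rng = np.random.default_rng(1)
print(round(gph_estimate(rng.normal(size=2000)), 3))
```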
  • ABSTRACT: The availability of next generation sequencing (NGS) technology in today's biomedical research has provided new opportunities for the scientific discovery of genetic information. The high-throughput NGS technology, especially DNA-seq, is particularly useful in profiling a genome for the analysis of DNA copy number variants (CNVs). The read count (RC) data resulting from NGS technology are massive and information rich. How to exploit the RC data for accurate CNV detection has become a computational and statistical challenge. In this paper, we provide a statistical online change-point method to help detect CNVs in sequencing RC data. This method uses the idea of online searching for a change point (or breakpoint), with a Markov chain assumption on the breakpoint loci and an iterative computing process via a Bayesian framework. We illustrate that an online change-point detection method is particularly suitable for identifying CNVs in RC data. The algorithm is applied to the publicly available NCI-H2347 lung cancer cell line sequencing read data for locating the breakpoints. Extensive simulation studies have been carried out, and the results show the good behavior of the proposed algorithm. The algorithm is implemented in R and the code is available upon request.
    Journal of Applied Statistics 07/2015; 42(7). DOI:10.1080/02664763.2014.1001330
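    The sketch below conveys the general flavor of flagging breakpoints in a read-count track sequentially; it uses a simple CUSUM-type rule on simulated counts and is not the Bayesian Markov-chain algorithm of the paper (the baseline mean, slack and threshold are made up):

```python
# Simplified online change-point flagging on simulated read counts (CUSUM-style).
# Generic illustration only; not the paper's Bayesian algorithm.
import numpy as np

rng = np.random.default_rng(2)
rc = np.concatenate([rng.poisson(20, 300),   # baseline copy number
                     rng.poisson(35, 200),   # simulated copy-number gain
                     rng.poisson(20, 300)])  # back to baseline

target, k, h = 20.0, 2.0, 25.0   # reference mean, slack, decision threshold
s_pos = s_neg = 0.0
for i, y in enumerate(rc):
    s_pos = max(0.0, s_pos + (y - target - k))   # upward drift statistic
    s_neg = max(0.0, s_neg + (target - y - k))   # downward drift statistic
    if s_pos > h or s_neg > h:
        print("possible breakpoint near position", i)
        target = np.mean(rc[max(0, i - 10):i + 1])  # re-anchor to the local level
        s_pos = s_neg = 0.0
```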
  • ABSTRACT: In this article, we focus on estimation and testing of conditional Kendall's tau under semi-competing risks data and truncated data. We apply the inverse probability censoring weighted technique to construct an estimator of conditional Kendall's tau. This study then provides a test statistic for the null hypothesis that the conditional Kendall's tau equals zero. When two random variables are quasi-independent, the conditional Kendall's tau is zero; it therefore serves as a proxy for quasi-independence. Tsai [Testing the assumption of independence of truncation time and failure time, Biometrika 77(1) (1990), pp. 169–177] and Martin and Betensky [Testing quasi-independence of failure and truncation times via conditional Kendall's tau, J. Amer. Statist. Assoc. 100(470) (2005), pp. 484–492] considered the testing problem for quasi-independence. Via simulation studies, we compare the three test statistics for quasi-independence and examine the finite-sample performance of the proposed estimator and the suggested test statistic. Furthermore, we provide the large-sample properties of our proposed estimator. Finally, we provide two real data examples for illustration.
    Journal of Applied Statistics 07/2015; 42(7). DOI:10.1080/02664763.2015.1004624
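    For orientation only, the plain (unweighted, fully observed) sample Kendall's tau is available in SciPy; the paper's inverse-probability-of-censoring-weighted, conditional version for truncated and semi-competing-risks data is considerably more involved and is not reproduced here:

```python
# Plain Kendall's tau on simulated, fully observed data as a baseline reference.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = 0.4 * x + rng.normal(size=500)   # positively associated pair
tau, p_value = kendalltau(x, y)
print(f"tau = {tau:.3f}, p-value = {p_value:.3g}")  # tau near 0 would be consistent with quasi-independence
```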
  • ABSTRACT: It is important to detect variance heterogeneity in regression models, because efficient inference requires that heteroscedasticity be taken into consideration if it is present. For varying-coefficient partially linear regression models, however, the problem of detecting heteroscedasticity has received very little attention. In this paper, we present two classes of tests of heteroscedasticity for varying-coefficient partially linear regression models. The first test statistic is constructed based on the residuals, under the assumption that the error term follows a normal distribution. The second is motivated by the idea that testing heteroscedasticity is equivalent to testing the pseudo-residuals for a constant mean. Asymptotic normality is established with different rates corresponding to the null hypothesis of homoscedasticity and the alternative. Some Monte Carlo simulations are conducted to investigate the finite-sample performance of the proposed tests. The test methodologies are illustrated with a real data example.
    Journal of Applied Statistics 06/2015; DOI:10.1080/02664763.2015.1043623
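    As a rough analogue in the simpler linear-model setting, statsmodels provides the Breusch-Pagan residual-based test of heteroscedasticity; the paper's tests for varying-coefficient partially linear models are different and more general, so this is only a point of reference:

```python
# Breusch-Pagan heteroscedasticity test on a plain linear model (illustration only).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(4)
n = 500
x = rng.uniform(0, 2, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 + 0.8 * x, size=n)  # error variance grows with x

X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, X)
print("LM p-value:", lm_pvalue)   # a small p-value rejects homoscedasticity
```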
  • ABSTRACT: The aim of this paper is to investigate, from a generational perspective, the effect of human capital on individual earnings and earnings differences in Germany, France and Italy, three developed countries in Western Europe with similar conservative welfare regimes but with important differences in their education systems. Income inequalities between and within education levels are explored using a two-stage probit model with quantile regressions in the second stage. More precisely, drawing upon 2005 EU-SILC data, returns on schooling and experience are estimated separately for employees and self-employed full-time workers by means of Mincerian earnings equations with sample selection; the sample selection correction accounts for the potential individual self-selection into the two labour force types. Although some determinants appear to be relatively similar across countries, state-specific differentials are drawn in light of the institutional features of each national context. The study reveals how each dimension of human capital differently affects individuals’ earnings and earnings inequality and, most of all, how their impacts differ along the conditional earnings distribution and across countries. In the comparative perspective, the country's leading position in terms of the highest rewards on education also depends on which earnings distribution (employee vs. self-employed) is analysed.
    Journal of Applied Statistics 06/2015; DOI:10.1080/02664763.2015.1049518
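    A minimal quantile-regression sketch along the lines of a Mincer-type earnings equation, using statsmodels; the data and variable names are invented, and the two-stage sample-selection correction used in the paper is omitted:

```python
# Toy Mincer-style quantile regression: log earnings on schooling and experience.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 1000
df = pd.DataFrame({
    "schooling": rng.integers(8, 20, n),     # years of schooling (made up)
    "experience": rng.integers(0, 40, n),    # years of experience (made up)
})
df["log_wage"] = (1.5 + 0.08 * df["schooling"] + 0.03 * df["experience"]
                  - 0.0004 * df["experience"] ** 2 + rng.normal(0, 0.4, n))

# Returns to schooling at different points of the conditional earnings distribution.
for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("log_wage ~ schooling + experience + I(experience**2)", df).fit(q=q)
    print(q, round(fit.params["schooling"], 4))
```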
  • ABSTRACT: Biclustering is the simultaneous clustering of two related dimensions, for example, of individuals and features, or genes and experimental conditions. Very few statistical models for biclustering have been proposed in the literature. Instead, most of the research has focused on algorithms to find biclusters. The models underlying them have not received much attention. Hence, very little is known about the adequacy and limitations of the models and the efficiency of the algorithms. In this work, we shed light on the statistical models behind the algorithms. This allows us to generalize most of the known popular biclustering techniques, and to justify, and many times improve on, the algorithms used to find the biclusters. It turns out that most of the known techniques have a hidden Bayesian flavor. Therefore, we adopt a Bayesian framework to model biclustering. We propose a measure of biclustering complexity (number of biclusters and overlapping) through a penalized plaid model, and present a suitable version of the deviance information criterion to choose the number of biclusters, a problem that has not been adequately addressed yet. Our ideas are motivated by the analysis of gene expression data.
    Journal of Applied Statistics 06/2015; 42(6). DOI:10.1080/02664763.2014.999647
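    scikit-learn ships a spectral biclustering routine that can serve as a quick point of comparison for the kind of two-way clustering described above; it is not the penalized plaid / Bayesian model proposed in the paper:

```python
# Spectral biclustering on a toy gene-expression-like matrix (comparison point only).
import numpy as np
from sklearn.datasets import make_checkerboard
from sklearn.cluster import SpectralBiclustering

data, rows, cols = make_checkerboard(shape=(120, 80), n_clusters=(3, 2),
                                     noise=5, random_state=0)
model = SpectralBiclustering(n_clusters=(3, 2), random_state=0).fit(data)
print("row cluster sizes:   ", np.bincount(model.row_labels_))
print("column cluster sizes:", np.bincount(model.column_labels_))
```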
  • ABSTRACT: The goal of this study is to analyze the quality of the ratings assigned to two constructed-response questions evaluating written ability in Portuguese-language essays, from the perspective of the many-facet Rasch (MFR) model [J.M. Linacre, Many-facet Rasch Measurement, 2nd ed., MESA Press, Chicago, 1994]. The analyzed data set comes from 350 written tests with two open-item tasks, rated independently by two rater coordinators and a group of 42 raters. The MFR model analysis shows the measurement quality related to the examinees, raters, tasks and items, and to the classification scale used in the task rating process. The findings indicate significant differences amongst the rater severities and show that the raters cannot be interchanged. The results also suggest that the comparison between the two task difficulties needs further investigation. An additional study has been done on the structure of the classification scale used by each rater for each item. The result suggests some similarities amongst the tasks and a need for revision of some criteria of the rating process. Overall, the evaluation scale has been shown to be efficient for classifying the examinees.
    Journal of Applied Statistics 06/2015; DOI:10.1080/02664763.2015.1049938
  • ABSTRACT: In biomedical research, two or more biomarkers may be available for the diagnosis of a particular disease. Selecting a single biomarker that ideally discriminates a diseased group from a healthy group is a challenge in the diagnostic process. Frequently, the area under the receiver operating characteristic (ROC) curve is used as the accuracy measure for choosing the best diagnostic marker among the available markers. Some authors have tried to combine multiple markers through an optimal linear combination to increase the discriminatory power. In this paper, we propose an alternative method that combines two continuous biomarkers by direct bivariate modeling of the ROC curve under a log-normality assumption. The proposed method is applied to a simulated data set and a prostate cancer diagnostic biomarker data set.
    Journal of Applied Statistics 06/2015; DOI:10.1080/02664763.2015.1046823
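    A common baseline for combining two markers is a logistic-regression linear combination scored by the area under the ROC curve; the sketch below (on simulated markers) illustrates that baseline, not the bivariate log-normal ROC model proposed in the paper:

```python
# Combine two biomarkers with a linear (logistic) score and compare AUCs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n = 500
y = rng.binomial(1, 0.4, n)                  # 1 = diseased, 0 = healthy
m1 = rng.normal(loc=0.8 * y, scale=1.0)      # biomarker 1 (shifted in the diseased group)
m2 = rng.normal(loc=0.6 * y, scale=1.0)      # biomarker 2
X = np.column_stack([m1, m2])

score = LogisticRegression().fit(X, y).decision_function(X)  # linear combination
print("AUC marker 1 :", round(roc_auc_score(y, m1), 3))
print("AUC marker 2 :", round(roc_auc_score(y, m2), 3))
print("AUC combined :", round(roc_auc_score(y, score), 3))
```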
  • ABSTRACT: In this paper, the consequences of considering the household ‘food share’ distribution as a welfare measure, in isolation from the joint distribution of itemized budget shares, are examined through the unconditional and conditional distributions of ‘food share’, both parametrically and nonparametrically. The parametric framework uses Dirichlet and Beta distributions, while the nonparametric framework uses kernel smoothing methods. The analysis, in a three-commodity setup (‘food’, ‘durables’, ‘others’), based on household-level rural data for West Bengal, India, for the year 2009-2010, shows significant underrepresentation of households by the conventional unconditional ‘food share’ distribution in the higher range of food budget shares, which corresponds to the lower end of the income profile. This may have serious consequences for welfare measurement.
    Journal of Applied Statistics 05/2015; DOI:10.1080/02664763.2015.1049132
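    As a rough illustration of the parametric and nonparametric routes mentioned above, a budget-share variable on (0, 1) can be fitted with a Beta distribution and smoothed with a kernel density estimate; the data here are simulated stand-ins, not the West Bengal household data:

```python
# Fit a Beta distribution and a kernel density estimate to a simulated food-share variable.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
food_share = rng.beta(4, 3, size=800)            # stand-in for household food shares

a, b, loc, scale = stats.beta.fit(food_share, floc=0, fscale=1)  # parametric fit on (0, 1)
kde = stats.gaussian_kde(food_share)                              # nonparametric fit

grid = np.linspace(0.05, 0.95, 5)
print("Beta(a, b):", round(a, 2), round(b, 2))
print("KDE density at grid points:", np.round(kde(grid), 2))
```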
  • ABSTRACT: Phillips [11] provides asymptotic theory for regressions that relate nonstationary time series, including those integrated of order 1, I(1). A practical implication of the literature on spurious regression is that one cannot trust the usual confidence intervals (CIs). In the absence of prior knowledge that two series are cointegrated, it is therefore recommended that we abandon the specification in levels and work with differenced or detrended series. For situations when the specification in levels is sacrosanct, we propose new CIs based on the Maximum Entropy bootstrap explained in Vinod and López-de-Lacalle (Maximum entropy bootstrap for time series: The meboot R package, J. Statist. Softw. 29 (2009), pp. 1-19). An extensive Monte Carlo simulation shows that our proposal can provide more reliable conservative CIs than traditional and block bootstrap intervals.
    Journal of Applied Statistics 05/2015; DOI:10.1080/02664763.2015.1049939
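    The paper's intervals rely on the Maximum Entropy bootstrap (the meboot R package). As a generic illustration of resampling-based CIs for dependent data, the sketch below instead shows a plain moving-block bootstrap interval for a slope in a levels regression; it is one of the comparison methods mentioned above, not the proposed ME bootstrap:

```python
# Moving-block bootstrap CI for a slope in a levels regression (generic illustration;
# this is NOT the maximum entropy bootstrap of the meboot package).
import numpy as np

def slope(y, x):
    return np.polyfit(x, y, 1)[0]

def block_bootstrap_ci(y, x, block=25, reps=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    starts = np.arange(n - block + 1)
    est = []
    for _ in range(reps):
        idx = np.concatenate([np.arange(s, s + block)
                              for s in rng.choice(starts, size=n // block + 1)])[:n]
        est.append(slope(y[idx], x[idx]))
    return np.quantile(est, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(8)
x = np.cumsum(rng.normal(size=300))            # integrated (I(1)-like) regressor
y = 0.5 * x + np.cumsum(rng.normal(size=300))  # levels relationship with persistent errors
print("95% block-bootstrap CI for slope:", np.round(block_bootstrap_ci(y, x), 3))
```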
  • ABSTRACT: Dengue Hemorrhagic Fever (DHF) cases have become a serious problem every year in tropical countries such as Indonesia. Understanding the dynamic spread of the disease is essential in order to find an effective strategy for controlling its spread. In this study, a convolution (Poisson-lognormal) model that integrates both uncorrelated and correlated random effects was developed. A spatial-temporal convolution model was considered to accommodate both spatial and temporal variations in the disease spread dynamics. The model was applied to the DHF cases in the city of Kendari, Indonesia. DHF data for 10 districts during the period 2007-2010 were collected from the health services. The rainfall and population density data were obtained from the local offices in Kendari. The numerical experiments indicated that both rainfall and population density played an important role in the increase of DHF cases in the city of Kendari. The results suggested that DHF cases mostly occurred in January, the wet season with high rainfall, and in Kadia, the densest district in the city. As people in the city have high mobility while dengue mosquitoes tend to stay localized in their area, the best time and place for intervention are January and the district of Kadia.
    Journal of Applied Statistics 05/2015; DOI:10.1080/02664763.2015.1043863
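    A minimal Poisson count-regression sketch with rainfall and population density as covariates, on simulated data with invented coefficients; the paper's full spatial-temporal convolution model with correlated and uncorrelated random effects is well beyond this illustration:

```python
# Toy Poisson regression of monthly DHF case counts on rainfall and population density.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 480                                               # e.g. 10 districts x 48 months
rain = rng.gamma(shape=2.0, scale=50.0, size=n)       # rainfall in mm (stand-in)
density = rng.uniform(500, 5000, size=n)              # people per km^2 (stand-in)
mu = np.exp(-1.0 + 0.004 * rain + 0.0003 * density)   # made-up linear predictor
cases = rng.poisson(mu)

X = sm.add_constant(np.column_stack([rain, density]))
fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
print(fit.params)   # intercept, rainfall effect, density effect (on the log scale)
```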