Journal of Applied Statistics (J APPL STAT)

Publisher: Taylor & Francis

Description

Journal of Applied Statistics provides a forum for communication between applied statisticians and users of applied statistical techniques across a wide range of disciplines. These areas include business, computing, economics, ecology, education, management, medicine, operational research and sociology, but papers from other areas are also considered. The editorial policy is to publish rigorous but clear and accessible papers on applied techniques. Purely theoretical papers are avoided, but papers on theoretical developments which clearly demonstrate significant applied potential are welcomed. Each paper is submitted to at least two independent referees. Each issue aims for a balance of methodological innovation, thorough evaluation of existing techniques, case studies, speculative articles, book reviews and letters. In 1998 the Editor, Gopal Kanji, marked 25 years of running the Journal of Applied Statistics. The journal includes a supplement on Advances in Applied Statistics; each annual edition of the supplement aims to provide a comprehensive and modern account of a subject at the cutting edge of applied statistics. Individual articles and entire thematic issues are invited and commissioned from authors at the forefront of their speciality, linking established themes to current and future developments.

  • Impact factor
    0.45
  • 5-year impact
    0.53
  • Cited half-life
    0.00
  • Immediacy index
    0.07
  • Eigenfactor
    0.00
  • Article influence
    0.32
  • Website
    Journal of Applied Statistics website
  • Other titles
    Journal of applied statistics (Online)
  • ISSN
    0266-4763
  • OCLC
    48215794
  • Material type
    Document, Periodical, Internet resource
  • Document type
    Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

Taylor & Francis

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Some individual journals may have policies prohibiting pre-print archiving
    • On author's personal website or departmental website immediately
    • On institutional repository or subject-based repository after either a 12-month embargo for STM, Behavioural Science and Public Health journals or an 18-month embargo for SSH journals
    • Publisher's version/PDF cannot be used
    • On a non-profit server
    • Published source must be acknowledged
    • Must link to publisher version
    • Set statements to accompany deposits (see policy)
    • The publisher will deposit on behalf of authors in a designated institutional repository, including PubMed Central, where a deposit agreement exists with the repository
    • STM: Science, Technology and Medicine
    • SSH: Social Science and Humanities
    • Publisher last contacted on 25/03/2014
    • 'Taylor & Francis (Psychology Press)' is an imprint of 'Taylor & Francis'
  • Classification
    green

Publications in this journal

  • ABSTRACT: The purpose of this paper is to build a model for aggregate losses, a crucial step in evaluating premiums for health insurance systems. It aims at obtaining the predictive distribution of the aggregate loss within each age class of insured persons over the planning horizon, employing Bayesian methodology. The proposed Bayesian model is a generalization of the collective risk model, a model commonly used for analysing the risk of an insurance system. Aggregate loss prediction is based on past information on the size of losses, the number of losses and the size of the population at risk. In modelling the frequency and severity of losses, the number of losses is assumed to follow a negative binomial distribution, individual loss sizes are independent and identically distributed exponential random variables, and the number of insured persons in a finite number of possible age groups is assumed to follow a multinomial distribution. Prediction of aggregate losses is based on the Gibbs sampling algorithm, which incorporates the missing data approach.
    Journal of Applied Statistics 02/2015; 42(2).
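    As a rough illustration of the collective risk structure described above, the following Python sketch simulates the predictive aggregate loss for a single age class; it is not the authors' Gibbs sampler, and all parameter values are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical parameters for one age class (not from the paper)
      r, p = 5, 0.4           # negative binomial frequency of losses
      mean_severity = 1000.0  # exponential mean of an individual loss

      def aggregate_loss(n_sims=100_000):
          """Simulate S = sum of N iid exponential losses, N ~ NegBin(r, p)."""
          n_losses = rng.negative_binomial(r, p, size=n_sims)
          # A sum of k iid exponentials is Gamma(k, scale); guard k = 0.
          sums = rng.gamma(np.maximum(n_losses, 1), mean_severity)
          return np.where(n_losses > 0, sums, 0.0)

      losses = aggregate_loss()
      print(f"predictive mean: {losses.mean():.1f}")
      print(f"95th percentile: {np.percentile(losses, 95):.1f}")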
  • ABSTRACT: For the analysis of square contingency tables with ordered categories, this paper proposes a model which describes the structure of marginal asymmetry. The model states that, for every i, the absolute value of the logarithm of the ratio of two cumulative probabilities is constant: the probability that an observation falls in row category i or below and column category i+1 or above, and the corresponding probability that it falls in column category i or below and row category i+1 or above. We deal with the estimation problem for the model parameter and with goodness-of-fit tests. We also discuss the relationships between the model and a measure which represents the degree of departure from marginal homogeneity. Examples are given.
    Journal of Applied Statistics 02/2015; 42(2).
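    Written out, the model the abstract describes appears to be the following (a reconstruction from the verbal definition; the notation is not taken from the paper). For row variable X and column variable Y of an R x R table,

      G_1(i) = \Pr(X \le i,\; Y \ge i+1), \qquad
      G_2(i) = \Pr(Y \le i,\; X \ge i+1),
      \qquad
      \left|\,\log \frac{G_1(i)}{G_2(i)}\,\right| = \Delta
      \quad \text{for every } i = 1, \dots, R-1,

    where Delta is the single unknown model parameter.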
  • ABSTRACT: In practice we often have a number of base classification methods and are unable to determine clearly which of them is optimal in the sense of the smallest error rate. A combining method then allows us to consolidate information from multiple sources into a better classifier. I propose a different, sequential approach. Sequentiality is understood here in the sense of adding posterior probabilities to the original data set, and the data so created are used during the classification process. We combine the posterior probabilities obtained from the base classifiers using all combining methods, and finally combine these probabilities using a mean combining method. The resulting posterior probabilities are added to the original data set as additional features. In each step we change the additional probabilities to achieve the minimum error rate for the base methods. Experimental results on different data sets demonstrate that the method is efficient and that this approach outperforms the base methods, providing a reduction in the mean classification error rate.
    Journal of Applied Statistics 02/2015; 42(2).
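    A minimal sketch of the core idea of augmenting the feature set with base classifiers' out-of-fold posterior probabilities (the paper's sequential scheme is more elaborate); scikit-learn is assumed to be available, and the choice of base classifiers and data set is purely illustrative.

      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.linear_model import LogisticRegression
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_predict, cross_val_score

      X, y = load_iris(return_X_y=True)
      base = [LogisticRegression(max_iter=1000), GaussianNB()]

      # Out-of-fold posterior probabilities from each base classifier
      # become additional features for the next classification round.
      posteriors = [cross_val_predict(clf, X, y, method="predict_proba", cv=5)
                    for clf in base]
      X_aug = np.hstack([X] + posteriors)

      final = LogisticRegression(max_iter=1000)
      print("base accuracy     :", cross_val_score(base[0], X, y, cv=5).mean())
      print("augmented accuracy:", cross_val_score(final, X_aug, y, cv=5).mean())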
  • ABSTRACT: In models for predicting financial distress, ranging from traditional statistical models to artificial intelligence models, scholars have primarily paid attention to improving predictive accuracy and to making the prognostic methods more sophisticated. However, the extant models use static or short-term data rather than time-series data to draw inferences on future financial distress. If financial distress occurs at the end of a progressive process, then omitting the time series of historical financial ratios from the analysis ignores the cumulative effect of previous financial ratios on current outcomes. This study incorporates the cumulative character of financial distress by using a state space model that is able to perform long-term forecasts to dynamically predict an enterprise's financial distress. Kalman filtering is used to estimate the model parameters. Thus, the model constructed in this paper is a dynamic financial prediction model that has the benefit of forecasting over the long term. Additionally, current data are used to forecast the future annual financial position and to judge whether the enterprise will be in financial distress.
    Journal of Applied Statistics 02/2015; 42(2).
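    To make the state space machinery concrete, here is a minimal local-level Kalman filter applied to a hypothetical deteriorating financial ratio; it is a generic sketch under assumed noise variances, not the paper's model.

      import numpy as np

      def kalman_filter(y, q=0.01, r=0.1):
          """Local-level model: x_t = x_{t-1} + w_t, y_t = x_t + v_t.
          q and r are the (assumed) state and observation noise variances."""
          x, p = y[0], 1.0              # initial state estimate and variance
          states = []
          for obs in y:
              p = p + q                 # predict step
              k = p / (p + r)           # Kalman gain
              x = x + k * (obs - x)     # update with the new observation
              p = (1 - k) * p
              states.append(x)
          return np.array(states)

      # Hypothetical yearly financial ratio with a deteriorating trend
      rng = np.random.default_rng(1)
      ratio = 1.5 - 0.1 * np.arange(10) + rng.normal(0, 0.05, 10)
      smoothed = kalman_filter(ratio)
      print("last filtered state (one-step-ahead forecast):", smoothed[-1])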
  • ABSTRACT: Time-varying coefficient models with autoregressive and moving-average–generalized autoregressive conditional heteroscedasticity structure are proposed for examining the time-varying effects of risk factors in longitudinal studies. Compared with existing models in the literature, the proposed models give explicit patterns for the time-varying coefficients. Maximum likelihood and marginal likelihood (based on a Laplace approximation) are used to estimate the parameters of the proposed models. Simulation studies are conducted to evaluate the performance of these two estimation methods, measured in terms of the Kullback–Leibler divergence and the root mean square error. The marginal likelihood approach leads to more accurate parameter estimates, although it is more computationally intensive. The proposed models are applied to the Framingham Heart Study to investigate the time-varying effects of covariates on coronary heart disease incidence. The Bayesian information criterion is used for specifying the time series structures of the coefficients of the risk factors.
    Journal of Applied Statistics 02/2015; 42(2).
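    To illustrate what a time-varying coefficient with an explicit time series pattern looks like, this sketch simulates a regression coefficient that follows an AR(1) process and checks that a crude rolling-window fit tracks it; it illustrates the model class only, not the authors' estimators.

      import numpy as np

      rng = np.random.default_rng(2)
      T = 300
      phi, sigma_b = 0.95, 0.1   # assumed AR(1) dynamics for the coefficient

      # Coefficient beta_t follows an explicit AR(1) pattern over time
      beta = np.zeros(T)
      for t in range(1, T):
          beta[t] = phi * beta[t - 1] + rng.normal(0, sigma_b)

      x = rng.normal(size=T)
      y = beta * x + rng.normal(0, 0.5, size=T)  # effect of x drifts over time

      # Crude check: rolling-window OLS recovers the drifting coefficient
      window = 50
      est = [np.polyfit(x[t:t + window], y[t:t + window], 1)[0]
             for t in range(T - window)]
      print("corr(true beta, rolling estimate):",
            np.corrcoef(beta[window // 2: T - window // 2], est)[0, 1])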
  • ABSTRACT: The varying-coefficient single-index model has two distinguishing features: partially linear varying-coefficient functions and a single-index structure. This paper proposes a nonparametric method based on smoothing splines for estimating the varying-coefficient functions and an unknown link function. Moreover, the average derivative estimation method is applied to obtain the single-index parameter estimates. For interval inference, Bayesian confidence intervals are obtained based on Bayes models for the varying-coefficient functions and the link function. The performance of the proposed method is examined both through simulations and by applying it to Boston housing data.
    Journal of Applied Statistics 02/2015; 42(2).
  • ABSTRACT: We present a methodology for screening predictors that, given the response, follow a one-parameter exponential family distribution. Screening predictors can be an important step in regression when the number of predictors p is excessively large or larger than the number of observations n. We consider instances where a large number of predictors are suspected to be irrelevant, carrying no information about the response. The proposed methodology helps remove these irrelevant predictors while capturing those linearly or nonlinearly related to the response.
    Journal of Applied Statistics 02/2015; 42(2).
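    A minimal sketch of marginal screening in the spirit described above: rank each predictor by a marginal association score that can pick up nonlinear relationships, then keep only the top-ranked ones. The mutual-information score used here is an assumption for illustration, not the authors' statistic.

      import numpy as np
      from sklearn.feature_selection import mutual_info_regression

      rng = np.random.default_rng(3)
      n, p = 200, 1000               # many more predictors than observations
      X = rng.normal(size=(n, p))
      # Only the first two predictors matter: one linear, one nonlinear
      y = 2 * X[:, 0] + np.sin(3 * X[:, 1]) + rng.normal(0, 0.5, n)

      # Rank predictors by a marginal (possibly nonlinear) association score
      scores = mutual_info_regression(X, y, random_state=0)
      keep = np.argsort(scores)[::-1][:20]   # retain the top 20 predictors
      print("highest-ranked predictors:", sorted(keep[:5]))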
  • ABSTRACT: Private and common values (CVs) are the two main competing valuation models in auction theory and empirical work. In the framework of second-price auctions, we compare the empirical performance of the independent private value (IPV) model to the CV model on a number of different dimensions, both on real data from eBay coin auctions and on simulated data. Both models fit the eBay data well, with a slight edge for the CV model. However, the differences in fit between the models seem to depend to some extent on the complexity of the models. According to the log predictive score, the IPV model predicts auction prices slightly better in most auctions, while the more robust CV model is much better at predicting auction prices in more unusual auctions. In terms of posterior odds, the CV model is clearly more supported by the eBay data.
    Journal of Applied Statistics 02/2015; 42(2).
  • ABSTRACT: Finite growth mixture modeling may prove extremely useful for identifying initial pharmacotherapeutic targets for clinical intervention in chronic kidney disease. The primary goal of this research is to demonstrate and describe the process of identifying a longitudinal classification scheme to guide the timing and dose of treatment in future randomized clinical trials. After discussing the statistical architecture, we describe the model selection and fit criteria in detail before selecting our final four-class solution (BIC = 1612.577, BLRT p < .001). The first class (highly elevated group) had an average starting point of 3.969 mg/dl of phosphorus at Visit 1 and increased by 0.143 every two years until Visit 4. The second, elevated class had an average starting point of 3.460 mg/dl of phosphorus at Visit 1 and increased by 0.101 every two years until Visit 4. The normative class had an average starting point of 3.019 mg/dl of phosphorus at Visit 1 and increased by 0.099 every two years until Visit 4. Lastly, the low class had an average starting point of 2.525 mg/dl of phosphorus at Visit 1 and increased by 0.158 every two years until Visit 4. We hope that this example will spur future applications in the biomedical sciences to refine therapeutic targets and/or construct long-term risk categories.
    Journal of Applied Statistics 02/2015; 42(2).
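    For concreteness, the four reported class trajectories can be tabulated directly from the intercepts and two-year increments quoted in the abstract:

      # Predicted serum phosphorus (mg/dl) at Visits 1-4 for each class,
      # using the intercepts and two-year increments quoted in the abstract.
      classes = {
          "highly elevated": (3.969, 0.143),
          "elevated":        (3.460, 0.101),
          "normative":       (3.019, 0.099),
          "low":             (2.525, 0.158),
      }
      for name, (start, slope) in classes.items():
          trajectory = [round(start + slope * visit, 3) for visit in range(4)]
          print(f"{name:>15}: {trajectory}")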
  • ABSTRACT: We study the problem of fitting a heteroscedastic median regression model with doubly truncated data. A self-consistency equation is proposed to obtain an estimator, and we set up a least absolute deviation estimating function. We establish consistency and asymptotic normality for the case when the covariates are discrete. The finite sample performance of the proposed estimators is investigated through simulation studies. The proposed method is illustrated using the AIDS Blood Transfusion Data.
    Journal of Applied Statistics 02/2015; 42(2).
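    Least absolute deviation estimation itself is easy to illustrate; the paper's contribution is handling double truncation, which this generic sketch (on simulated, untruncated data) does not attempt.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)
      x = rng.normal(size=200)
      y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=200)  # heavy-tailed errors

      def lad_loss(beta):
          """Sum of absolute residuals for the line b0 + b1 * x."""
          return np.abs(y - beta[0] - beta[1] * x).sum()

      # Nelder-Mead handles the non-smooth absolute-value objective
      fit = minimize(lad_loss, x0=[0.0, 0.0], method="Nelder-Mead")
      print("LAD estimates:", fit.x)  # should be near (1, 2)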
  • ABSTRACT: Modeling the relationship between multiple financial markets has received a great deal of attention in both the literature and real-life applications. One state-of-the-art approach models each individual financial market by a generalized autoregressive conditional heteroskedasticity (GARCH) process and the dependence between markets by a copula, e.g. the dynamic asymmetric copula-GARCH. As an extension, we propose a dynamic double asymmetric copula (DDAC)-GARCH model to allow for the joint asymmetry caused by negative shocks as well as by the copula model. Furthermore, our model adopts a more intuitive way of constructing the sample correlation matrix, yet still satisfies the positive-definite condition found in dynamic conditional correlation-GARCH and constant conditional correlation-GARCH models. A simulation study shows the performance of the maximum likelihood estimate for the DDAC-GARCH model. As a case study, we apply the model to examine the dependence between the China and US stock markets since the 1990s. We conduct a series of likelihood ratio tests that demonstrate that our extension (dynamic double joint asymmetry) is adequate for dynamic dependence modeling. We also propose a simulation method involving the DDAC-GARCH model to estimate the value at risk (VaR) of a portfolio. Our study shows that the proposed method captures VaR much better than the well-established variance–covariance method.
    Journal of Applied Statistics 02/2015; 42(2).
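    The general simulation route to portfolio VaR can be sketched as follows; a static Gaussian copula with heavy-tailed t margins stands in for the paper's DDAC-GARCH dynamics, and every number below is an assumption.

      import numpy as np
      from scipy.stats import norm, t

      rng = np.random.default_rng(5)
      n_sims, rho = 100_000, 0.4      # assumed cross-market correlation

      # Gaussian copula: correlated normals mapped to uniforms
      z = rng.multivariate_normal([0.0, 0.0],
                                  [[1.0, rho], [rho, 1.0]], size=n_sims)
      u = norm.cdf(z)

      # Hypothetical heavy-tailed daily return margins for the two markets
      returns = t.ppf(u, df=4) * np.array([0.010, 0.008])

      portfolio = returns.mean(axis=1)        # equally weighted portfolio
      var_99 = -np.percentile(portfolio, 1)   # one-day 99% VaR
      print(f"simulated one-day 99% VaR: {var_99:.4f}")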
  • ABSTRACT: This article discusses the educational experiences of a group of French-speaking, Black, African-born students who entered Canada as refugees. They were attending a French school and were placed in a separate programme designed to meet their particular needs, given their limited language skills and level of education. Drawing on critical race theory (CRT), the article analyzes how these students' identities operated in linking their academic abilities and particular life experiences in terms of race, gender, class, language, and immigrant status. The youth identified their separate programme as a problem, seeing their placement in it as having to do with the fact that they are Black. The study provides important insights into the ways students with refugee backgrounds are being integrated into Canadian schools, and shows that, in some cases, the approach to their education operates to stream them along the lines of ethnicity, race, and life experience, the likely consequence of which is limited educational, occupational and social outcomes.
    Journal of Applied Statistics 01/2015; 18(1).
  • ABSTRACT: The social construction of human-environment relations is a central concern of an emerging tradition of research on place, which extends the so-called "discursive turn" in social psychology. This research highlights the primary role of everyday linguistic practices in the production of place meanings, challenging the prevailing tendency among environmental psychologists to treat place meanings mainly as an expression of individual cognitions. By the same token, in this article we argue that research on human-environment relations also has the potential to enrich the field of discursive psychology, tempting discursive researchers to move beyond their customary focus on verbal and written texts. Specifically, we propose an analytic framework that transcends the dualism between the material and discursive dimensions of human-environment relations. To develop this argument, we outline the novel concept of place-assemblage and illustrate its utility through an analysis of a recent conflict over a public space in Barcelona. This analysis shows how discursive constructions of the development of this public space over time were inextricably entwined with other kinds of material and embodied practices, through which place meanings were actively performed, reproduced and contested.
    Journal of Applied Statistics 01/2015; 12(1).
  • ABSTRACT: This commentary welcomes the broadening of methods and theories in psychosocial studies evident in this special issue, “Researching the Psychosocial.” Three features are highlighted: the shift to synchronous investigation from the diachronic analysis of cultural sense-making, the focus on the intertwining of affect and discourse, and the opening of new routes to exploring participants’ investments and deep attachments. These new ways of working are briefly contrasted with the turn to affect in cultural studies, traditional psychobiological approaches, fine-grain discursive psychology, and psychoanalytic psychosocial research.
    Journal of Applied Statistics 01/2015; 12(1).
  • ABSTRACT: This article outlines one tradition of qualitative research in social psychology, that of discourse analysis and discursive research. It proposes that the tradition offers an alternative conceptualisation of a psychosocial subject to accounts which draw on psychoanalytic theorising. The article reviews some of the problems around conceptualising a subject in discursive terms, then sets out some resolutions. It outlines a narrative-discursive approach to subjectivity and proposes that this is consistent with a psychosocial project to explore the person as inseparable from their social contexts. The narrative-discursive conceptualisation admits of agency and change, avoiding over-complete accounts of subjectification, while retaining the critical and political focus of the discursive tradition. It is also consistent with sociological theorisations of the subjects of late capitalism and neo-liberalism. The article discusses an example of narrative-discursive analysis from research on identities of residence and relationships to place.
    Journal of Applied Statistics 01/2015; 12(1).