Journal of the Royal Statistical Society Series A (Statistics in Society) (J R STAT SOC A STAT)

Publisher: Royal Statistical Society (Great Britain), Blackwell Publishing

Description

Datasets relating to articles published in the four series of the Journal of the Royal Statistical Society are available online. To encourage clear statistical thinking on issues of importance to society, Statistics in Society publishes original papers whose primary appeal lies in their subject matter rather than in their technical statistical content. The journal's particular focus is on statistics as applied to social issues, interpreted broadly to include all disciplines that take people as their subject matter. Thus education, sociology, medicine, psychology, the law, demography, government and politics, economics and social geography all fall within its remit. The journal welcomes contributions from workers in central and local government or in business, as well as from academics and researchers in relevant disciplines. Papers should generally have a substantial statistical component, but innovative statistical methods are not essential. Papers containing mathematical exposition are acceptable provided that it is relevant and that explanations are presented in clear English. Review papers are encouraged. The journal also welcomes relevant methodological papers with illustrative applications involving appropriate data; such papers could include discussions of methods of data collection and of ethical issues.

  • Impact factor
    1.36
  • 5-year impact
    2.29
  • Cited half-life
    0.00
  • Immediacy index
    0.50
  • Eigenfactor
    0.01
  • Article influence
    1.71
  • Website
    Journal of the Royal Statistical Society - Series A: Statistics in Society website
  • Other titles
    Journal of the Royal Statistical Society, Journal of the Royal Statistical Society. Series A, Statistics in society
  • ISSN
    0964-1998
  • OCLC
    42017027
  • Material type
    Document, Periodical, Internet resource
  • Document type
    Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

Blackwell Publishing

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author cannot archive a post-print version
  • Restrictions
    • Some journals impose embargoes, typically of 6 or 12 months and occasionally of 24 months
    • No listing of affected journals is available as yet
  • Conditions
    • See Wiley-Blackwell entry for articles after February 2007
    • Publisher version cannot be used
    • On author or institutional or subject-based server
    • Server must be non-commercial
    • Publisher copyright and source must be acknowledged with set statement ("The definitive version is available at www.blackwell-synergy.com")
    • Articles in some journals can be made Open Access on payment of additional charge
    • 'Blackwell Publishing' is an imprint of 'Wiley-Blackwell'
  • Classification
    yellow

Publications in this journal

  • ABSTRACT: The paper presents a model that can be used to identify the goal scoring ability of footballers. By decomposing the scoring process into the generation of shots and the conversion of shots to goals, abilities can be estimated from two mixed effects models. We compare several versions of our model, as a tool for predicting the number of goals that a player will score in the following season, with a naive method whereby a player's goals-per-minute ratio is assumed to be constant from one season to the next. We find that our model outperforms the naive model and that this outperformance can be attributed, in part, to the model's separating a player's ability from the chance that may have influenced his goal scoring record in the previous season.
    Journal of the Royal Statistical Society Series A (Statistics in Society) 02/2014; 177(2).
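The naive benchmark is simple enough to reproduce, and contrasting it with the paper's decomposition idea clarifies what the model buys. Below is a minimal sketch, not the authors' code: the player figures and the shot and conversion rates are hypothetical.
```python
# Sketch: the naive goals-per-minute carry-forward versus a two-stage
# "shots x conversion" decomposition. Illustrative numbers only.

def naive_forecast(goals_prev, minutes_prev, minutes_next):
    """Assume the goals-per-minute ratio is constant across seasons."""
    return goals_prev / minutes_prev * minutes_next

def two_stage_forecast(shots_per_minute, conversion_rate, minutes_next):
    """Decompose scoring into shot generation and shot-to-goal conversion."""
    return shots_per_minute * minutes_next * conversion_rate

# Hypothetical player: 12 goals in 2,700 minutes; 2.8 shots per 90 minutes
# with an 11% conversion rate; expected to play 3,000 minutes next season.
print(naive_forecast(12, 2700, 3000))            # ~13.3 goals
print(two_stage_forecast(2.8 / 90, 0.11, 3000))  # ~10.3 goals
```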
  • ABSTRACT: The paper analyses the data resulting from ‘GenitoriPiù’, the Italian campaign for newborns’ parents, and focuses on the assessment of healthcare workers’ knowledge about sudden infant death syndrome. Considering two different response sets (dichotomous and polytomous), we used a Rasch model and a logistic quantile regression to analyse which demographic and professional backgrounds influenced the degree of knowledge of this topic. Significant differences between regions are evident, and training initiatives prove effective in rectifying these differences. With regard to professional background, the best-prepared healthcare workers are paediatricians and those working in birth centres and family planning clinics.
    Journal of the Royal Statistical Society Series A (Statistics in Society) 01/2014; 177(1):63-82.
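The dichotomous Rasch model used above has a one-line functional form: the probability of a correct answer is logistic in the gap between person ability and item difficulty. A minimal sketch with hypothetical ability and difficulty values, not the GenitoriPiù data:
```python
import numpy as np

def rasch_prob(theta, b):
    """Dichotomous Rasch model: P(correct) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Hypothetical healthcare worker with ability 0.5 facing three knowledge
# items of increasing difficulty.
theta = 0.5
difficulties = np.array([-1.0, 0.0, 1.5])
print(rasch_prob(theta, difficulties))  # harder items -> lower P(correct)
```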
  • ABSTRACT: Questionnaires are important surveying tools that are used in numerous studies. Analyses of multiple-response questions are not as well established as those of single-response questions. Wang has proposed several methods for ranking responses in multiple-response questions under the frequentist set-up. However, prior information about the ranks of responses may exist in numerous situations. Therefore, establishing a methodology that combines updated survey data with past information for ranking responses is an essential issue in questionnaire data analysis. This study develops Bayesian ranking methods, based on several Bayesian multiple-testing procedures, to rank responses by controlling the posterior expected false discovery rate. Moreover, a simulation is conducted to compare these approaches, and a real data example is presented to show the effectiveness of the methods proposed.
    Journal of the Royal Statistical Society Series A (Statistics in Society) 01/2014; 177(1).
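Controlling a posterior expected false discovery rate has a standard mechanical core: sort units by their posterior probability of being null and keep the largest set whose running average stays below the target level. The sketch below illustrates that generic quantity, not the specific procedures proposed in the paper; the probabilities are hypothetical.
```python
import numpy as np

def select_by_posterior_efdr(post_null_prob, alpha=0.10):
    """Keep the largest set of responses whose average posterior null
    probability (the posterior expected FDR of the selection) is <= alpha."""
    order = np.argsort(post_null_prob)                     # most promising first
    running = np.cumsum(post_null_prob[order]) / np.arange(1, len(order) + 1)
    k = int(np.sum(running <= alpha))                      # largest admissible set
    return order[:k]

# Hypothetical posterior null probabilities for six candidate responses.
p0 = np.array([0.01, 0.02, 0.10, 0.30, 0.60, 0.90])
print(select_by_posterior_efdr(p0))  # -> indices of the selected responses
```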
  • ABSTRACT: The effect of weather on health has been widely researched, and the ability to forecast meteorological events can offer valuable insights into the effect on public health services. In addition, better predictions of hospital demand that are more sensitive to fluctuations in weather can allow hospital administrators to optimize resource allocation and service delivery. Using historical hospital admission data and several seasonal and meteorological variables for a site near the hospital, the paper develops a novel Bayesian model for short-term prediction of the numbers of admissions categorized by several factors such as age group and sex. The model proposed is extended by incorporating the inherent uncertainty in the meteorological forecasts into the predictions for the number of admissions. The methods are illustrated with admissions data obtained from two moderately large hospital trusts in Cardiff and Southampton, in the UK, each admitting about 30,000–50,000 non-elective patients every year. The Bayesian model, computed by using Markov chain Monte Carlo methods, is shown to produce more accurate predictions of the number of hospital admissions than those obtained by using a 6-week moving average method which is similar to that widely used by hospital managers. The gains are shown to be substantial during periods of rapid temperature changes, typically during the onset of cold and highly variable winter weather.
    Journal of the Royal Statistical Society Series A (Statistics in Society) 01/2014; 177(1).
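The benchmark the Bayesian model is judged against is concrete enough to state in a few lines. A minimal sketch of a 6-week moving-average baseline with hypothetical weekly counts; the paper's model itself requires MCMC and is not reproduced here.
```python
import numpy as np

def moving_average_forecast(weekly_admissions, window=6):
    """Forecast next week's admissions as the mean of the last `window` weeks."""
    return float(np.mean(weekly_admissions[-window:]))

# Hypothetical weekly non-elective admission counts for one trust.
history = [812, 798, 845, 901, 923, 957, 1010]
print(moving_average_forecast(history))  # mean of the six most recent weeks
```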
  • ABSTRACT: Antithetic time series analysis is the solution to a most perplexing problem in mathematical statistics. When a mathematical model is fitted to serially correlated data, the parameters of the model are unavoidably biased. All forecasts that are obtained from the model are likewise biased and therefore diverge, so forecast reliability worsens with the forecast horizon. It is shown that the forecast bias can be dynamically reduced. This is made possible by the entirely counterintuitive discovery of antithetic time series theory, which permits unbiased forecast error convergence to a constant, independent of the forecast origin, with the same forecast error variance in each time period.
    Journal of the Royal Statistical Society Series A (Statistics in Society) 01/2014; 177(1).
  • Journal of the Royal Statistical Society Series A (Statistics in Society) 01/2014; 177(2).
  • ABSTRACT: Professional sports facilities are among the most expensive development projects. Assessing the external effects related to them, and the channels through which these effects operate, is a challenging task. We propose a strategy for valuing the external effects that stadia deliver to their neighbourhoods on the basis of variation in property prices. Our strategy allows for unobserved spatial heterogeneity and anticipation effects, and it disentangles the stadium's function as a sports facility from its form as a physical structure that (visually) dominates the neighbourhood. We apply this strategy to two of the largest stadium projects of the past decade, the New Wembley and the Emirates Stadium in London. Our results suggest that there are positive stadium effects on property prices which are large compared with construction costs. Notable anticipation effects are found immediately following the announcement of the stadium plans. We further argue that stadium architecture plays a role in promoting positive spillovers to the neighbourhood.
    Journal of the Royal Statistical Society Series A (Statistics in Society) 01/2014; 177(1).
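The identification idea, price changes near the stadium relative to elsewhere before and after the plans become known, maps onto a familiar regression design. Below is a generic difference-in-differences layout under assumed variables, not the authors' estimator: `near`, `post_ann`, `area` and the placeholder outcome are all hypothetical.
```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
near = rng.integers(0, 2, n)          # hypothetical: within the stadium's impact radius
post_ann = rng.integers(0, 2, n)      # hypothetical: sold after plans were announced
area = rng.integers(0, 10, n)         # hypothetical coarse areas (fixed effects)
log_price = rng.normal(12.0, 0.3, n)  # placeholder outcome

X = np.column_stack([
    np.ones(n),                       # intercept
    near,                             # level difference near the stadium
    post_ann,                         # citywide shift after the announcement
    near * post_ann,                  # anticipation/treatment effect of interest
    np.eye(10)[area][:, 1:],          # area fixed-effect dummies (base category dropped)
])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(beta[3])  # coefficient on near x post-announcement
```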
  • ABSTRACT: There are several challenges to testing the effectiveness of group therapy-based interventions in alcohol and other drug use (AOD) treatment settings. Enrollment into AOD therapy groups typically occurs on an open (rolling) basis. Changes in therapy group membership induce a complex correlation structure among client outcomes, with relatively small numbers of clients attending each therapy group session. Primary outcomes are measured post-treatment, so each datum reflects the effect of all sessions attended by a client, and the number of post-treatment outcome assessments is typically very limited. The first feature of our modeling approach relaxes the assumption of independent random effects in the standard multiple membership model by employing conditional autoregression (CAR) to model correlation in random therapy group session effects associated with clients' attendance of common group therapy sessions. A second feature specifies a longitudinal growth model under which the posterior distribution of client-specific random effects, or growth parameters, is modeled non-parametrically. The Dirichlet process prior helps to overcome limitations of standard parametric growth models given limited numbers of longitudinal assessments. We motivate and illustrate our approach with a data set from a study of group cognitive behavioral therapy to reduce depressive symptoms among residential AOD treatment clients.
    Journal of the Royal Statistical Society Series A (Statistics in Society) 06/2013; 176(3).
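The first modelling feature above replaces independent session effects with a CAR structure, whose core object is simply a precision matrix built from session adjacency. A minimal sketch with a toy adjacency graph; the tau and rho values and the four-session graph are hypothetical, and the paper's exact CAR specification may differ.
```python
import numpy as np

# Toy adjacency: sessions are neighbours when they share attendees.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))          # row-sum diagonal
tau, rho = 1.0, 0.9                 # hypothetical precision and dependence

Q = tau * (D - rho * W)             # proper-CAR precision matrix (|rho| < 1)
cov = np.linalg.inv(Q)              # implied covariance across session effects
print(np.round(cov, 2))             # neighbouring sessions are positively related
```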
  • ABSTRACT: The definitive evaluation of treatment to prevent a chronic disease with low incidence in middle age, such as cancer or cardiovascular disease, requires a trial with a large sample size of perhaps 20,000 or more. To help decide whether to implement a large true endpoint trial, investigators first typically estimate the effect of treatment on a surrogate endpoint in a trial with a greatly reduced sample size of perhaps 200 subjects. If investigators reject the null hypothesis of no treatment effect in the surrogate endpoint trial, they implicitly assume that they would likely also correctly reject the null hypothesis of no treatment effect for the true endpoint. Surrogate endpoint trials are generally designed with adequate power to detect an effect of treatment on the surrogate endpoint. However, we show that a small surrogate endpoint trial is more likely than a large surrogate endpoint trial to give a misleading conclusion about the beneficial effect of treatment on the true endpoint, which can lead to a faulty (and costly) decision about implementing a large true endpoint prevention trial. If a small surrogate endpoint trial rejects the null hypothesis of no treatment effect, an intermediate-sized surrogate endpoint trial could be a useful next step in the decision-making process for launching a large true endpoint prevention trial.
    Journal of the Royal Statistical Society Series A (Statistics in Society) 02/2013; 176(2):603-608.
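The core warning, that a small trial which happens to reject can badly overstate the effect, can be illustrated with a few lines of simulation. This is a generic sketch of the selection effect under assumed numbers (a true effect of 0.2 standard deviations, 100 versus 1,000 subjects per arm, a one-sided z-test), not a reproduction of the paper's calculations.
```python
import numpy as np

rng = np.random.default_rng(1)

def effect_given_rejection(n_per_arm, true_effect=0.2, sims=20000):
    """Mean estimated effect conditional on rejecting H0, plus the power."""
    se = np.sqrt(2.0 / n_per_arm)             # SE of a mean difference, unit variance
    est = rng.normal(true_effect, se, sims)   # simulated effect estimates
    rejected = est / se > 1.96                # one-sided z-test at the 2.5% level
    return est[rejected].mean(), rejected.mean()

print(effect_given_rejection(100))    # small trial: inflated effect when it rejects
print(effect_given_rejection(1000))   # larger trial: estimate close to the truth
```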
  • ABSTRACT: We develop a spatial Poisson hurdle model to explore geographic variation in emergency department (ED) visits while accounting for zero inflation. The model consists of two components: a Bernoulli component that models the probability of any ED use (i.e., at least one ED visit per year), and a truncated Poisson component that models the number of ED visits given use. Together, these components address both the abundance of zeros and the right-skewed nature of the nonzero counts. The model has a hierarchical structure that incorporates patient- and area-level covariates, as well as spatially correlated random effects for each areal unit. Because regions with high rates of ED use are likely to have high expected counts among users, we model the spatial random effects via a bivariate conditionally autoregressive (CAR) prior, which introduces dependence between the components and provides spatial smoothing and sharing of information across neighboring regions. Using a simulation study, we show that modeling the between-component correlation reduces bias in parameter estimates. We adopt a Bayesian estimation approach, and the model can be fit using standard Bayesian software. We apply the model to a study of patient and neighborhood factors influencing emergency department use in Durham County, North Carolina.
    Journal of the Royal Statistical Society Series A (Statistics in Society) 02/2013; 176(2):389-413.
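The two-component likelihood described above is compact enough to write down directly. A minimal sketch of the Poisson hurdle log-likelihood for a single count; the p and lambda values are hypothetical, and the full model adds covariates and the bivariate CAR random effects.
```python
from math import lgamma, log, expm1

def hurdle_loglik(y, p, lam):
    """Poisson hurdle log-likelihood for one count:
    P(Y=0) = 1 - p; for k >= 1, P(Y=k) = p * Poisson(k; lam) / (1 - exp(-lam))."""
    if y == 0:
        return log(1.0 - p)
    log_trunc_pois = y * log(lam) - lam - lgamma(y + 1) - log(-expm1(-lam))
    return log(p) + log_trunc_pois

# Hypothetical values: 40% chance of any ED use; rate 2.5 among users.
print(hurdle_loglik(0, p=0.4, lam=2.5))
print(hurdle_loglik(3, p=0.4, lam=2.5))
```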
  • Journal of the Royal Statistical Society Series A (Statistics in Society) 01/2013; 176(1):1-3.
  • ABSTRACT: Different people have different definitions of fairness, some of which may be fairer than others. The paper considers how some key life chances have changed in Britain from the time when William Beveridge was a young man, to when his research assistant, Harold Wilson, was Prime Minister, and then to today. The emphasis is on inequalities in critical outcomes between different geographically defined groups. How have income and wealth inequalities altered over the course of the last century and recent decades, and how have rates of mortality varied? How do the geographies of school examination passes, university entry, employment or even changing rates of imprisonment influence our lives today? To understand changes in fairness and our fortunes better, these trends must sometimes be put in a longer historical and a wider geographical context. Sometimes it is necessary to look back a century to find inequality comparable with that of today; and to know that such inequality is not universal, the changing levels of income inequality within Britain need to be compared with trends in otherwise very similar nation states. Such comparison is essential if the argument that rising inequality is inevitable is to be countered. Precisely how fairness and fortune are measured alters whether we find them to be rising or falling. Thousands of statistics can also be dull, so graphics and some more unusual visualizations which open up the map are used to illustrate the trends that are discussed here.
    Journal of the Royal Statistical Society Series A (Statistics in Society) 01/2013; 176(1):97-128.
  • ABSTRACT: We use univariate and multivariate singular spectrum analysis to predict the inflation rate, as well as changes in the direction of the inflation time series, for the United States. We use consumer price indices and real-time chain-weighted GDP price index series in these prediction exercises. Moreover, we compare our out-of-sample, h-step-ahead moving prediction results with prediction results based on methods such as the activity-based NAIRU Phillips curve, AR(p) models, the dynamic factor model, and the random walk model, with the latter as a naive forecasting benchmark. We use short-run (quarterly) and long-run (one to six years) time windows for predictions and find that multivariate singular spectrum analysis outperforms all other competing prediction methods. We also confirm the results of earlier studies that predicting the inflation rate in the United States during the period of the 'Great Moderation' is less challenging than during the more volatile inflationary period of 1970-1985.
    Journal of the Royal Statistical Society Series A (Statistics in Society) 01/2013; 176:1-18.
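Basic univariate SSA, the building block behind the multivariate version used here, takes only a few lines: embed the series in a Hankel trajectory matrix, take an SVD, and diagonally average selected components back into a smoothed series. A minimal sketch on synthetic data; the window length and component choices are illustrative, and the paper's forecasting step (via a linear recurrence) is not shown.
```python
import numpy as np

def ssa_reconstruct(x, L, components):
    """Embed x in an L x K Hankel trajectory matrix, take an SVD, and
    diagonally average the chosen elementary components back into a series."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = sum(s[j] * np.outer(U[:, j], Vt[j]) for j in components)
    rec, counts = np.zeros(N), np.zeros(N)
    for i in range(L):                 # Hankel (anti-diagonal) averaging
        for j in range(K):
            rec[i + j] += Xr[i, j]
            counts[i + j] += 1
    return rec / counts

# Synthetic monthly-style series: trend + annual cycle + noise.
t = np.arange(120)
x = 0.05 * t + np.sin(2 * np.pi * t / 12) + np.random.default_rng(2).normal(0, 0.3, 120)
smooth = ssa_reconstruct(x, L=24, components=[0, 1, 2])
```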
  • Journal of the Royal Statistical Society Series A (Statistics in Society) 01/2013; 176(2).
  • ABSTRACT: From the extant statistical data, the paper reconstructs several episodes in the history of the Royal Mint during Isaac Newton's tenure. We discuss four types of uncertainty that are embedded in the production of coins, extending Stephen Stigler's work in several directions. The jury verdicts in the trials of the pyx for 1696–1727 allow judgement of the jury's impartiality at the trials. The verdicts, together with several remarks by Newton in his correspondence with the Treasury, allow us to estimate the standard deviation σ in the weights of individual guineas coined before and during Newton's Mastership. This parameter, in turn, permits us to estimate the amount of money that Newton saved Britain after he put a stop to the illegal practice by goldsmiths and bankers of culling heavy guineas from circulation and recoining them to their advantage; a conservative estimate of the savings to the Crown is £41,510, and possibly three times as much. The procedure by which Newton probably improved the coinage gives historical insight into how important statistical notions, standard deviation and sampling, came to the forefront in practical matters: the former as a measure of variation in the weights of coins, and the latter as a test of several coins to evaluate the quality of the entire population. Newton can be credited with the formal introduction of testing a small sample of coins, a pound in weight, in the trials of the pyx from 1707 onwards, effectively reducing the size of the admissible error. Even Newton's 'cooling law' could have been contrived for the purpose of reducing variation in the weight of coins during the initial stages of the minting process. Three open questions are posed in the conclusion.
    Journal of the Royal Statistical Society Series A (Statistics in Society) 01/2013; 176(2).
  • ABSTRACT: In recent years, survey agencies have started to collect detailed call record data, including information on the timing and outcome of each interviewer call to a household. In interview-based household surveys, such information may be used to inform effective interviewer calling behaviours, which are critical in achieving co-operation and reducing the likelihood of refusal. However, call record data can be complex and it is not always clear how best to model such data. We present a general framework for the analysis of call record data by using multilevel event history modelling. A multilevel multinomial logistic regression approach is proposed in which the different possible outcomes at each call are modelled jointly, accounting for the clustering of calls within households and interviewers. Of particular interest are the influences of time-varying characteristics on the outcome of a call. The analysis of interviewer call record data is illustrated by using paradata from several face-to-face household surveys with the aim of modelling non-response.
    Journal of the Royal Statistical Society Series A (Statistics in Society) 01/2013; 176(1):251-269.
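The joint modelling of the possible outcomes of a call rests on the multinomial logit form. A minimal sketch of the per-call outcome probabilities, ignoring the multilevel (household and interviewer) random effects for brevity; the outcome categories follow the abstract, but the covariates and coefficients are hypothetical.
```python
import numpy as np

def call_outcome_probs(x, B):
    """Multinomial logit: x is the covariate vector; B holds one coefficient
    column per non-baseline outcome ('no reply' is the baseline here)."""
    eta = np.concatenate([[0.0], x @ B])  # linear predictors, baseline fixed at 0
    w = np.exp(eta - eta.max())           # numerically stable softmax
    return w / w.sum()                    # P(no reply), P(contact), P(refusal)

x = np.array([1.0, 3.0, 1.0])             # hypothetical: intercept, call 3, evening
B = np.array([[0.2, -0.5],                # hypothetical coefficients; columns are
              [0.1, 0.05],                # contact and refusal vs. no reply
              [0.4, 0.2]])
print(call_outcome_probs(x, B))
```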