Article

Editorial Peer Reviewers' Recommendations at a General Medical Journal: Are They Reliable and Do Editors Care?

Department of Medicine, University of California Davis, Sacramento, California, United States of America.
PLoS ONE (Impact Factor: 3.23). 04/2010; 5(4):e10072. DOI: 10.1371/journal.pone.0010072
Source: PubMed

ABSTRACT

Editorial peer review is universally used but little studied. We examined the relationship between external reviewers' recommendations and the editorial outcome of manuscripts undergoing external peer-review at the Journal of General Internal Medicine (JGIM).
We examined reviewer recommendations and editors' decisions at JGIM between 2004 and 2008. For manuscripts undergoing peer review, we calculated chance-corrected agreement among reviewers on recommendations to reject versus accept or revise. Using mixed effects logistic regression models, we estimated intra-class correlation coefficients (ICC) at the reviewer and manuscript level. Finally, we examined the probability of rejection in relation to reviewer agreement and disagreement.
The 2264 manuscripts sent for external review during the study period received 5881 reviews provided by 2916 reviewers; 28% of reviews recommended rejection. Chance-corrected agreement (kappa statistic) on rejection among reviewers was 0.11 (p<0.01). In mixed effects models adjusting for study year and manuscript type, the reviewer-level ICC was 0.23 (95% confidence interval [CI], 0.19-0.29) and the manuscript-level ICC was 0.17 (95% CI, 0.12-0.22). The editors' overall rejection rate was 48%: 88% when all reviewers for a manuscript agreed on rejection (7% of manuscripts) and 20% when all reviewers agreed that the manuscript should not be rejected (48% of manuscripts) (p<0.01).
Reviewers at JGIM agreed on recommendations to reject vs. accept/revise at levels barely beyond chance, yet editors placed considerable weight on reviewers' recommendations. Efforts are needed to improve the reliability of the peer-review process while helping editors understand the limitations of reviewers' recommendations.
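
As a rough illustration of the agreement statistic described in the methods above, the Python sketch below computes chance-corrected agreement (Cohen's kappa) for a simplified case: two reviewers making binary reject vs. accept/revise recommendations on the same manuscripts. The recommendation vectors are hypothetical toy data, not the JGIM dataset, and the paper's own analysis used a multi-rater kappa plus mixed effects models rather than this two-rater form.

# Minimal sketch: chance-corrected agreement (Cohen's kappa) on binary
# reject vs. accept/revise recommendations for two reviewers.
# Toy, hypothetical data -- not the JGIM dataset.
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters on the same items."""
    n = len(ratings_a)
    assert n == len(ratings_b) and n > 0
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if the two raters labelled items independently
    # at their observed marginal rates.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    labels = set(ratings_a) | set(ratings_b)
    expected = sum((counts_a[k] / n) * (counts_b[k] / n) for k in labels)
    return (observed - expected) / (1 - expected)

# 1 = recommend rejection, 0 = accept or revise (hypothetical recommendations)
reviewer_a = [1, 0, 0, 1, 0, 0, 1, 0]
reviewer_b = [0, 0, 1, 1, 0, 1, 0, 0]
print(f"kappa = {cohen_kappa(reviewer_a, reviewer_b):.2f}")

The reviewer- and manuscript-level ICCs quoted above come from mixed effects logistic regression; in practice those would be estimated with a dedicated statistics package rather than by hand.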

CITED BY

  • "Although rejected proposals might simply be of lower quality and deserve to be stopped, tremendous unexplained variation and seeming 'noise' is the single most regular feature of scientific peer evaluations. Interrater reliability in funding decisions is routinely found to be very low (e.g., Rothwell and Martyn 2000, Bornmann and Daniel 2008, Jackson et al. 2011), with concordance sometimes 'barely beyond chance' (Kravitz et al. 2010, p. 1) and 'perilously close to rates found for Rorschach inkblot tests' (Lee 2012, p. 862). Variance among reviewers is sometimes greater than variance between submissions (Cole et al. 1981)."
    ABSTRACT: Selecting among alternative projects is a core management task in all innovating organizations. In this paper, we focus on the evaluation of frontier scientific research projects. We argue that the “intellectual distance” between the knowledge embodied in research proposals and an evaluator’s own expertise systematically relates to the evaluations given. To estimate relationships, we designed and executed a grant proposal process at a leading research university in which we randomized the assignment of evaluators and proposals to generate 2,130 evaluator–proposal pairs. We find that evaluators systematically give lower scores to research proposals that are closer to their own areas of expertise and to those that are highly novel. The patterns are consistent with biases associated with boundedly rational evaluation of new ideas. The patterns are inconsistent with intellectual distance simply contributing “noise” or being associated with private interests of evaluators. We discuss implications for policy, managerial intervention, and allocation of resources in the ongoing accumulation of scientific knowledge.
    Full-text · Article · Jan 2016 · Management Science
  • "Marsh and Ball (1989) reported a mean interrater reliability of .27 between same-manuscript peer reviews for 10 social science journals. Kravitz et al. (2010) found an interrater reliability of .17 between …"

    Preview · Article · Sep 2015
  • "In face of these problems, many suggestions have been proposed to make the peer review and editorial process more efficient and equitable [5]. In particular, the role of editors in the process of selecting and managing reviewers has been increasingly discussed [8][9][10]. The main focus of these discussions are ethical issues and general, qualitative recommendations for both the editors and the reviewers [6][7][11][12]."
    ABSTRACT: We examine selected aspects of peer review and suggest possible improvements. To this end, we analyse a dataset containing information about 300 papers submitted to the Biochemistry and Biotechnology section of the Journal of the Serbian Chemical Society. After separating the peer review process into stages that each review has to go through, we use a weighted directed graph to describe it in a probabilistic manner and test the impact of some modifications of the editorial policy on the efficiency of the whole process.
    Full-text · Article · Aug 2015
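
To make the "weighted directed graph" description in the abstract above concrete, the following Python sketch treats editorial stages as nodes and transition probabilities as edge weights. The stage names and probabilities are hypothetical placeholders, not values estimated from the Journal of the Serbian Chemical Society data.

# Minimal sketch: editorial stages as a weighted directed graph, with edge
# weights as transition probabilities between stages. All names and numbers
# here are hypothetical placeholders, not values from the cited study.
STAGES = {
    "submitted":      {"desk_reject": 0.20, "sent_to_review": 0.80},
    "sent_to_review": {"reject": 0.35, "revise": 0.50, "accept": 0.15},
    "revise":         {"reject": 0.10, "accept": 0.90},
}
TERMINAL = {"desk_reject", "reject", "accept"}

def outcome_probabilities(graph, start="submitted"):
    """Propagate probability mass from the start node to terminal outcomes."""
    pending = {start: 1.0}
    outcomes = {}
    while pending:
        node, mass = pending.popitem()
        if node in TERMINAL:
            outcomes[node] = outcomes.get(node, 0.0) + mass
            continue
        for successor, weight in graph[node].items():
            pending[successor] = pending.get(successor, 0.0) + mass * weight
    return outcomes

print(outcome_probabilities(STAGES))
# e.g. {'accept': 0.48, 'reject': 0.32, 'desk_reject': 0.2} under these weights

Propagating probability mass through such a graph gives the share of submissions ending in each terminal outcome under the assumed transition weights, which is the kind of quantity one can recompute when testing modified editorial policies.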