
Editorial Peer Reviewers' Recommendations at a General Medical Journal: Are They Reliable and Do Editors Care?

Department of Medicine, University of California Davis, Sacramento, California, United States of America.
PLoS ONE (Impact Factor: 3.53). 04/2010; 5(4):e10072. DOI: 10.1371/journal.pone.0010072
Source: PubMed

ABSTRACT: Editorial peer review is universally used but little studied. We examined the relationship between external reviewers' recommendations and the editorial outcome of manuscripts undergoing external peer review at the Journal of General Internal Medicine (JGIM).
We examined reviewer recommendations and editors' decisions at JGIM between 2004 and 2008. For manuscripts undergoing peer review, we calculated chance-corrected agreement among reviewers on recommendations to reject versus accept or revise. Using mixed-effects logistic regression models, we estimated intra-class correlation coefficients (ICC) at the reviewer and manuscript levels. Finally, we examined the probability of rejection in relation to reviewer agreement and disagreement.
The 2264 manuscripts sent for external review during the study period received 5881 reviews provided by 2916 reviewers; 28% of reviews recommended rejection. Chance-corrected agreement (kappa statistic) on rejection among reviewers was 0.11 (p<0.01). In mixed-effects models adjusting for study year and manuscript type, the reviewer-level ICC was 0.23 (95% confidence interval [CI], 0.19-0.29) and the manuscript-level ICC was 0.17 (95% CI, 0.12-0.22). The editors' overall rejection rate was 48%: 88% when all reviewers for a manuscript agreed on rejection (7% of manuscripts) and 20% when all reviewers agreed that the manuscript should not be rejected (48% of manuscripts) (p<0.01).
Reviewers at JGIM agreed on recommendations to reject vs. accept/revise at levels barely beyond chance, yet editors placed considerable weight on reviewers' recommendations. Efforts are needed to improve the reliability of the peer-review process while helping editors understand the limitations of reviewers' recommendations.


Available from: Richard L Kravitz, May 05, 2015
  •
    ABSTRACT: An important topic in the scientific publication process is how well reviewers evaluate the quality of papers and how their recommendations influence editors' decisions to accept or reject papers. Additionally, a particular concern for researchers from China and other countries with rapidly developing scientific communities is whether there are potential biases affecting their manuscripts in the review process. To address these topics, we examined 4575 manuscripts submitted to the journal Biological Conservation. For the 2093 papers sent out for review, reviewer recommendations strongly influenced the outcome of the review process. Reviewer recommendations of accept and minor revision were similar in their positive effects on editor decisions, while papers receiving at least one recommendation of reject ("the kiss of death") were almost always rejected. Papers with more consistent reviews (e.g. both reviewers recommending a major revision) had a greater chance of acceptance than did papers with more variation (e.g. minor revision and reject). We found no evidence of editor bias against papers from China; however, reviewer recommendations for papers from China showed a greater degree of agreement than did recommendations for papers from English-speaking countries (e.g. intra-class correlation of 0.25 vs. 0.55), because reviewers of papers from China often agreed that the papers should be rejected or required major revision. Reviewers from China judged papers from China more harshly than did reviewers from other countries. Our results demonstrate that the review process is not a crapshoot; reviewers are providing useful information and editors are using this information to make reasonable decisions.
    Biological Conservation 06/2015; 186. DOI:10.1016/j.biocon.2015.02.025 · 4.04 Impact Factor
  •
    ABSTRACT: In a situation where two raters are classifying a series of observations, it is useful to have an index of agreement among the raters that takes into account both the simple rate of agreement and the complexity of the rating task. Information theory provides a measure of the quantity of information in a list of classifications that can be used to produce an appropriate index of agreement. A normalized weighted mutual information index improves upon the traditional intercoder agreement index in several ways: there is no need to develop a model of error generation before use; comparison across experiments is easier; and ratings are based on the distribution of agreement across categories, not just an overall agreement level.
    Journal of Official Statistics 01/2012; 28(3):395-412. · 0.97 Impact Factor
  •
    ABSTRACT: A growing body of literature has identified potential problems that can compromise the quality, fairness, and integrity of journal peer review, including inadequate review, inconsistent reviewer reports, reviewer biases, and ethical transgressions by reviewers. We examine the evidence concerning these problems and discuss proposed reforms, including double-blind and open review. Regardless of the outcome of additional research or attempts at reforming the system, it is clear that editors are the linchpin of peer review, since they make decisions that have a significant impact on the process and its outcome. We consider some of the steps editors should take to promote quality, fairness and integrity in different stages of the peer review process and make some recommendations for editorial conduct and decision-making.
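The information-theoretic agreement index described in the Journal of Official Statistics abstract above can be sketched in simplified form. This is plain normalized mutual information between two raters' label lists, not the paper's full index: the published measure adds per-category weights that are not reproduced here, and the labels below are invented for illustration.

```python
import math
from collections import Counter

def normalized_mutual_information(a, b):
    """Agreement between two raters' label lists via mutual information,
    normalized by the geometric mean of the raters' entropies.
    Simplified sketch: the published index also weights categories."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    mi = sum((c / n) * math.log((c / n) / ((pa[x] / n) * (pb[y] / n)))
             for (x, y), c in pab.items())
    h_a = -sum((c / n) * math.log(c / n) for c in pa.values())
    h_b = -sum((c / n) * math.log(c / n) for c in pb.values())
    if h_a == 0 or h_b == 0:            # a rater used only one category
        return 1.0 if list(a) == list(b) else 0.0
    return mi / math.sqrt(h_a * h_b)

# 'r' = reject, 'a' = accept/revise (hypothetical labels)
print(round(normalized_mutual_information(list("rara"), list("rara")), 9))  # 1.0
print(round(normalized_mutual_information(list("rraa"), list("rara")), 9))  # 0.0
```

Unlike a raw percent-agreement figure, the index is 0 when the two raters' labels are statistically independent and 1 when one rater's labels fully determine the other's, which is what makes cross-experiment comparison easier.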