Improving the Peer-Review Process for Grant Applications: Reliability, Validity, Bias, and Generalizability

Department of Education, University of Oxford, United Kingdom.
American Psychologist (Impact Factor: 6.87). 05/2008; 63(3):160-8. DOI: 10.1037/0003-066X.63.3.160
Source: PubMed


Peer review is a gatekeeper, the final arbiter of what is valued in academia, but it has been criticized in relation to the traditional psychological research criteria of reliability, validity, generalizability, and potential biases. Despite a considerable literature, there is surprisingly little sound peer-review research examining these criteria or strategies for improving the process. This article summarizes the authors' research program with the Australian Research Council, which receives thousands of grant proposals from the social sciences, humanities, and sciences, with reviews by assessors from all over the world. Using multilevel cross-classified models, the authors critically evaluated peer reviews of grant applications and potential biases associated with applicants, assessors, and their interaction (e.g., age, gender, university, academic rank, research team composition, nationality, experience). Peer reviews lacked reliability, but the only major systematic bias found involved the inflated, unreliable, and invalid ratings of assessors nominated by the applicants themselves. The authors propose a new approach, the reader system, which they evaluated with psychology and education grant proposals and found to be substantially more reliable and strategically advantageous than traditional peer reviews of grant applications.
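The reliability findings above rest on decomposing rating variance across assessors of the same proposal. As a minimal illustration of the idea (a simplified sketch, not the authors' multilevel cross-classified models, which also handle unbalanced data and applicant/assessor covariates), a one-way intraclass correlation estimates single-assessor reliability from a balanced set of ratings:

```python
def icc_oneway(groups):
    """One-way random-effects ICC(1,1): single-rater reliability.

    `groups` is a list of lists, one inner list of ratings per proposal,
    each with the same number of assessors k. High between-proposal
    variance relative to within-proposal (assessor) variance gives an
    ICC near 1; disagreement among assessors drives it toward 0 or below.
    """
    n = len(groups)          # number of proposals
    k = len(groups[0])       # assessors per proposal
    grand = sum(sum(g) for g in groups) / (n * k)
    # Between-proposal mean square
    msb = k * sum((sum(g) / k - grand) ** 2 for g in groups) / (n - 1)
    # Within-proposal (between-assessor) mean square
    msw = sum((x - sum(g) / k) ** 2 for g in groups for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

For example, perfect assessor agreement (`[[1, 1], [2, 2], [3, 3]]`) yields an ICC of 1.0, while ratings that vary only within proposals yield a negative ICC, signaling reliability no better than chance.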

Available from: Upali Jayasinghe, Oct 05, 2015
    • "But peer review has also been criticized on the grounds that it imposes burden on research communities, that the selection of reviewers may introduce biases in the system, and that the reviewers' judgements may be subjective or arbitrary (Kassirer and Campion 1994; Hojat et al. 2003; Li and Agha 2015). Arbitrariness of peer review, which is the quality of accepting submitted items by chance or whim, and not by necessity or rationality, can be measured by the heterogeneity of evaluations among raters during the review process (Mutz et al. 2012; Marsh et al. 2008; Giraudeau et al. 2011). "
    ABSTRACT: The principle of peer review is central to the evaluation of research, ensuring that only high-quality items are funded or published. But peer review has also received criticism, as the selection of reviewers may introduce biases into the system. In 2014, the organizers of the "Neural Information Processing Systems" conference conducted an experiment in which 10% of submitted manuscripts (166 items) went through the review process twice. Arbitrariness was measured as the conditional probability for an accepted submission to be rejected if examined by the second committee. This probability was 60%, against a total acceptance rate of 22.5%. Here we present a Bayesian analysis of those two numbers, introducing a hidden parameter that measures the probability that a submission meets basic quality criteria. The standard quality criteria usually include novelty, clarity, reproducibility, correctness, and absence of misconduct, and are met by a large proportion of submitted items. The Bayesian estimate for the hidden parameter was 56% (95% CI: 0.34-0.83) and had a clear interpretation. The result suggests that the total acceptance rate should be increased in order to decrease arbitrariness in future review processes.
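The two numbers in this experiment can be related through a simple two-type model (an illustrative sketch, not the paper's actual Bayesian model): suppose a fraction q of submissions meets the basic quality criteria, and each committee independently accepts a qualifying paper with probability a and a non-qualifying one with probability b. Then both the acceptance rate and the arbitrariness measure follow from (q, a, b):

```python
def two_committee_stats(q, a, b):
    """Toy model of a two-committee review experiment.

    q: fraction of submissions meeting basic quality criteria
    a: P(accept | meets criteria), b: P(accept | does not)
    Committees decide independently given the paper's type.
    Returns (overall acceptance rate, arbitrariness), where
    arbitrariness = P(committee 2 rejects | committee 1 accepted).
    """
    accept = q * a + (1 - q) * b
    arbitrariness = (q * a * (1 - a) + (1 - q) * b * (1 - b)) / accept
    return accept, arbitrariness
```

Two sanity checks: if committees are perfectly discriminating (a = 1, b = 0), arbitrariness is 0; if both committees accept every paper at random with the same probability (a = b), arbitrariness equals 1 minus the acceptance rate, which is the fully arbitrary extreme the abstract's 60% figure approaches.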
    • "Concerning grant peer reviewing, one of the most frequently cited studies on gender bias, that carried out by Wennerås and Wold (1997), demonstrated that female applicants for postdoctoral fellowships at the Swedish Medical Research Council had to be 2.5 times more productive than the average male applicant in order to obtain the same peer-review rating for scientific competence. Since then, an ever-growing body of academic research has found no conclusive evidence of sex discrimination in the awarding of specific project grants (Wellcome Trust 1997; Ward and Donnelly 1998; Bornmann et al. 2007; Marsh et al. 2008). In this regard, the meta-analyses conducted by Bornmann et al. (2007) and Marsh et al. (2009), and more recently the study by Mutz et al. (2012), have all concluded that there is negligible evidence of gender bias in grant awarding programs. "
    ABSTRACT: The main objective of this paper is to study the development and growth of scientific literature on women in science and higher education. A total of 1415 articles and reviews published between 1991 and 2012 were extracted from the Thomson Reuters Web of Science database. Standard bibliometric indicators and laws (e.g. Price’s, Lotka’s, and Bradford’s laws) were applied to these data. In addition, the Gender Inequality Index (GII) was obtained for each country in order to rank them. The results suggest an upward trend not only in the number of papers but also in the number of authors per paper. However, this increase in the number of authors was not accompanied by greater international collaboration. The interest in gender differences in science extends to many authors (n = 3064), countries (n = 67), and research areas (n = 86). Data showed a high dispersion of the literature and a small set of core journals focused on the topic. Regarding the research areas, the area with the highest frequency of papers was Education and Educational Research. Finally, our results showed that countries with higher levels of inequality (higher GII values) tend to present higher relative values of scientific productivity in the field.
    Scientometrics 06/2015; 103(3). DOI:10.1007/s11192-015-1574-x · 2.18 Impact Factor
    • "RQ4: Do Westerner reviewers assign different proportions of recommendations than do non-Westerners? H4: Westerner reviewers will assign a higher proportion of acceptance and a lower proportion of rejection than non-Westerners (based on Marsh et al. 2008). "
    ABSTRACT: Numerous studies have sought to uncover violations of objectivity and impartiality in peer review; however the notion of reciprocity has been absent in much of this discussion, particularly as it relates to gendered and ethnicized behaviors of peer review. The current study addresses this gap in research by investigating patterns of reciprocity (i.e., correspondences between patterns of recommendations received by authors and patterns of recommendations given by reviewers in the same social group) by perceived gender and ethnicity of reviewers and authors for submissions to the Journal of the American Society for Information Science and Technology from June 2009 to May 2011. The degree of reciprocity for each social group was examined by employing Monte Carlo resampling to extrapolate more robust patterns from the limited data available. We found that papers with female authors received more negative reviews than reviews for male authors. Reciprocity was suggested by the fact that female reviewers gave lower reviews than male reviewers. Reciprocity was also exhibited by ethnicity, although non-Western reviewers gave disproportionately more recommendations of major revision, while non-Western authors tended to receive more outright rejections. This study provides a novel theoretical and methodological basis for future studies on reciprocity in peer review.
    Scientometrics 10/2014; 101(1):1-19. DOI:10.1007/s11192-014-1354-z · 2.18 Impact Factor
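The abstract does not spell out its Monte Carlo resampling procedure; a common variant for comparing recommendation patterns between two groups of reviewers is a two-sample permutation test, sketched below (the function name and the mean-difference statistic are illustrative assumptions, not the authors' exact method):

```python
import random

def permutation_test(scores_a, scores_b, n_iter=10000, seed=0):
    """Two-sample permutation test on the difference of group means.

    Repeatedly reshuffles the pooled scores into two groups of the
    original sizes and counts how often the shuffled mean difference
    is at least as extreme as the observed one. Returns the two-sided
    p-value estimate.
    """
    rng = random.Random(seed)
    observed = sum(scores_a) / len(scores_a) - sum(scores_b) / len(scores_b)
    pooled = list(scores_a) + list(scores_b)
    n_a = len(scores_a)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_iter
```

Applied to, say, review scores given by two reviewer groups, a small p-value indicates that the observed difference in mean recommendations is unlikely under random assignment of scores to groups.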