
Improving the Peer-Review Process for Grant Applications: Reliability, Validity, Bias, and Generalizability

Department of Education, University of Oxford, United Kingdom.
American Psychologist (Impact Factor: 6.87). 05/2008; 63(3):160-8. DOI: 10.1037/0003-066X.63.3.160
Source: PubMed

ABSTRACT

Peer review is a gatekeeper, the final arbiter of what is valued in academia, but it has been criticized in relation to traditional psychological research criteria of reliability, validity, generalizability, and potential biases. Despite a considerable literature, there is surprisingly little sound peer-review research examining these criteria or strategies for improving the process. This article summarizes the authors' research program with the Australian Research Council, which receives thousands of grant proposals from the social science, humanities, and science disciplines and reviews by assessors from all over the world. Using multilevel cross-classified models, the authors critically evaluated peer reviews of grant applications and potential biases associated with applicants, assessors, and their interaction (e.g., age, gender, university, academic rank, research team composition, nationality, experience). Peer reviews lacked reliability, but the only major systematic bias found involved the inflated, unreliable, and invalid ratings of assessors nominated by the applicants themselves. The authors propose a new approach, the reader system, which they evaluated with psychology and education grant proposals and found to be substantially more reliable and strategically advantageous than traditional peer reviews of grant applications.
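
The cross-classified structure described in the abstract (each rating nested simultaneously under an applicant and an assessor, with the two classifications crossed rather than hierarchically nested) can be illustrated with a short simulation. The sketch below is a minimal illustration using statsmodels variance components, not the authors' actual model; the data, column names, and effect sizes are hypothetical.

```python
# Minimal sketch (hypothetical data and names): a cross-classified model for
# peer-review ratings, with crossed random effects for applicants and
# assessors fitted via statsmodels variance components.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_app, n_ass, n_ratings = 60, 40, 400

df = pd.DataFrame({
    "applicant_id": rng.integers(0, n_app, n_ratings),
    "assessor_id": rng.integers(0, n_ass, n_ratings),
    # 1 = assessor nominated by the applicant (the bias examined in the article)
    "nominated": rng.integers(0, 2, n_ratings),
})
quality = rng.normal(0.0, 0.5, n_app)    # latent proposal quality
severity = rng.normal(0.0, 0.7, n_ass)   # assessor leniency/severity
df["rating"] = (
    3.5
    + quality[df["applicant_id"]]
    + severity[df["assessor_id"]]
    + 0.4 * df["nominated"]              # inflation by applicant-nominated assessors
    + rng.normal(0.0, 1.0, n_ratings)
)

# A single all-encompassing group plus two variance components yields
# crossed (non-nested) random effects for applicants and assessors.
model = smf.mixedlm(
    "rating ~ nominated",
    data=df,
    groups=np.ones(n_ratings),
    vc_formula={"applicant": "0 + C(applicant_id)",
                "assessor": "0 + C(assessor_id)"},
)
print(model.fit().summary())
# Single-rater reliability can be read off as the applicant variance component
# divided by the total (applicant + assessor + residual) variance.
```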

CITED BY
    • "Past research has argued that expert evaluation of research proposals may be shaped by any number of factors beyond the " true " quality of research, including researcher and evaluator characteristics, ties between researchers and their evaluators, proposal formats, and evaluation procedures. (See Marsh et al. 2008 and Lee et al. 2013 for comprehensive reviews and syntheses of the relevant findings.) "
    ABSTRACT: Selecting among alternative projects is a core management task in all innovating organizations. In this paper, we focus on the evaluation of frontier scientific research projects. We argue that the “intellectual distance” between the knowledge embodied in research proposals and an evaluator’s own expertise systematically relates to the evaluations given. To estimate relationships, we designed and executed a grant proposal process at a leading research university in which we randomized the assignment of evaluators and proposals to generate 2,130 evaluator–proposal pairs. We find that evaluators systematically give lower scores to research proposals that are closer to their own areas of expertise and to those that are highly novel. The patterns are consistent with biases associated with boundedly rational evaluation of new ideas. The patterns are inconsistent with intellectual distance simply contributing “noise” or being associated with private interests of evaluators. We discuss implications for policy, managerial intervention, and allocation of resources in the ongoing accumulation of scientific knowledge.
    Full-text · Article · Jan 2016 · Management Science
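
One way to make the "intellectual distance" idea in the preceding abstract concrete is to treat it as a distance between the topic profile of a proposal and that of an evaluator's prior work, with evaluators assigned to proposals at random and scores regressed on that distance. The sketch below is a hypothetical illustration of that idea only; the cosine-distance measure, the simulated data, and the effect size are assumptions, not the cited paper's actual procedure or results.

```python
# Hypothetical sketch: random evaluator-proposal assignment and a regression
# of review scores on an "intellectual distance" proxy (cosine distance
# between topic vectors). Not the cited paper's actual measure or data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_topics, n_proposals, n_evaluators, n_pairs = 10, 150, 100, 2000

proposal_topics = rng.dirichlet(np.ones(n_topics), size=n_proposals)
evaluator_topics = rng.dirichlet(np.ones(n_topics), size=n_evaluators)

# Random assignment of evaluators to proposals (with replacement).
p_idx = rng.integers(0, n_proposals, n_pairs)
e_idx = rng.integers(0, n_evaluators, n_pairs)

def cosine_distance(a, b):
    """Row-wise cosine distance between two matrices of topic vectors."""
    return 1.0 - np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))

distance = cosine_distance(proposal_topics[p_idx], evaluator_topics[e_idx])

# Simulated scores in which nearby (low-distance) proposals are scored more
# harshly, mimicking the direction of the reported finding; the slope is made up.
score = 5.0 + 1.5 * distance + rng.normal(0.0, 1.0, n_pairs)

# Ordinary least squares of score on distance recovers the positive slope.
print(sm.OLS(score, sm.add_constant(distance)).fit().summary())
```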
    • "Moreover, the 'unreliability' of peerreview applies to the natural sciences, the humanities and social sciences. For Marsh et al. (2008), lack of acceptable agreement among independent assessors is the major weakness of peerreview . Some are not surprised. "
    ABSTRACT: Peer-review is neither reliable, fair, nor a valid basis for predicting 'impact': as quality control, peer-review is not fit for purpose. Endorsing the consensus, I offer a reframing: while a normative social process, peer-review also shapes the writing of a scientific paper. In so far as 'cognition' describes enabling conditions for flexible behavior, the practices of peer-review thus constrain knowledge-making. To pursue cognitive functions of peer-review, however, manuscripts must be seen as 'symbolizations', replicable patterns that use technologically enabled activity. On this bio-cognitive view, peer-review constrains knowledge-making by writers, editors, reviewers. Authors are prompted to recursively re-aggregate symbolizations to present what are deemed acceptable knowledge claims. How, then, can recursive re-embodiment be explored? In illustration, I sketch how the paper's own content came to be re-aggregated: agonistic review drove reformatting of argument structure, changes in rhetorical ploys and careful choice of wordings. For this reason, the paper's knowledge-claims can be traced to human activity that occurs in distributed cognitive systems. Peer-review is on the frontline in the knowledge sector in that it delimits what can count as knowing. Its systemic nature is therefore crucial to not only discipline-centered 'real' science but also its 'post-academic' counterparts.
    Full-text · Article · Nov 2015 · Frontiers in Psychology
    • "But peer review has also been criticized on the grounds that it imposes burden on research communities, that the selection of reviewers may introduce biases in the system, and that the reviewers' judgements may be subjective or arbitrary (Kassirer and Campion 1994; Hojat et al. 2003; Li and Agha 2015). Arbitrariness of peer review, which is the quality of accepting submitted items by chance or whim, and not by necessity or rationality, can be measured by the heterogeneity of evaluations among raters during the review process (Mutz et al. 2012; Marsh et al. 2008; Giraudeau et al. 2011). "
    ABSTRACT: The principle of peer review is central to the evaluation of research, by ensuring that only high-quality items are funded or published. But peer review has also received criticism, as the selection of reviewers may introduce biases in the system. In 2014, the organizers of the "Neural Information Processing Systems" conference conducted an experiment in which 10% of submitted manuscripts (166 items) went through the review process twice. Arbitrariness was measured as the conditional probability for an accepted submission to get rejected if examined by the second committee. This number was equal to 60%, for a total acceptance rate equal to 22.5%. Here we present a Bayesian analysis of those two numbers, by introducing a hidden parameter which measures the probability that a submission meets basic quality criteria. The standard quality criteria usually include novelty, clarity, reproducibility, correctness and no form of misconduct, and are met by a large proportion of submitted items. The Bayesian estimate for the hidden parameter was equal to 56% (95% CI: 0.34, 0.83), and had a clear interpretation. The result suggested the total acceptance rate should be increased in order to decrease arbitrariness estimates in future review processes.
    Preview · Article · Jul 2015
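
The 56% point estimate in the preceding abstract can be reproduced with a back-of-the-envelope calculation under simplifying assumptions of my own (not necessarily the paper's full model): suppose a fraction q of submissions meets the basic quality criteria, each committee accepts a qualifying submission with probability r and never accepts a non-qualifying one, and the two committees act independently. Then the acceptance rate is q * r and the chance that a second committee rejects an already-accepted submission is 1 - r. The reported credible interval requires the paper's full Bayesian treatment, which this sketch does not attempt.

```python
# Back-of-the-envelope sketch under the simplifying assumptions stated above
# (not the paper's exact model): q = share of submissions meeting the quality
# criteria, r = probability a committee accepts a qualifying submission,
# non-qualifying submissions are never accepted, committees are independent.
#
#   acceptance rate:                      p_acc  = q * r
#   P(rejected by 2nd | accepted by 1st): p_flip = 1 - r
#
# Plugging in the two numbers quoted in the abstract recovers its 56% estimate.
p_acc = 0.225           # total acceptance rate
p_flip = 0.60           # accepted by one committee, rejected by the other

r = 1.0 - p_flip        # chance a qualifying submission is accepted
q = p_acc / r           # hidden parameter: share meeting the quality criteria

print(f"r = {r:.2f}, q = {q:.3f}")   # r = 0.40, q = 0.562
```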