Article

Improving the peer-review process for grant applications: Reliability, validity, bias, and generalizability

Department of Education, University of Oxford, United Kingdom.
American Psychologist (Impact Factor: 6.87). 05/2008; 63(3):160-8. DOI: 10.1037/0003-066X.63.3.160
Source: PubMed

ABSTRACT: Peer review is a gatekeeper, the final arbiter of what is valued in academia, but it has been criticized in relation to traditional psychological research criteria of reliability, validity, generalizability, and potential biases. Despite a considerable literature, there is surprisingly little sound peer-review research examining these criteria or strategies for improving the process. This article summarizes the authors' research program with the Australian Research Council, which receives thousands of grant proposals from the social science, humanities, and science disciplines and reviews by assessors from all over the world. Using multilevel cross-classified models, the authors critically evaluated peer reviews of grant applications and potential biases associated with applicants, assessors, and their interaction (e.g., age, gender, university, academic rank, research team composition, nationality, experience). Peer reviews lacked reliability, but the only major systematic bias found involved the inflated, unreliable, and invalid ratings of assessors nominated by the applicants themselves. The authors propose a new approach, the reader system, which they evaluated with psychology and education grant proposals and found to be substantially more reliable and strategically advantageous than traditional peer reviews of grant applications.
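
As an illustration of the cross-classified structure these analyses rest on (the notation below is an assumption for exposition, not taken from the article), the rating given by assessor j to application i can be sketched as

    y_{ij} = \beta_0 + \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_i + v_j + e_{ij},
    \qquad u_i \sim N(0, \sigma^2_{\mathrm{application}}),\;
    v_j \sim N(0, \sigma^2_{\mathrm{assessor}}),\;
    e_{ij} \sim N(0, \sigma^2_{e}),

where applications and assessors are crossed rather than nested; applicant, assessor, and applicant-by-assessor characteristics (e.g., whether the assessor was nominated by the applicant) enter x_{ij}, and the share of variance attributable to assessors rather than applications indexes how (un)reliable single-assessor ratings are.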

  • ABSTRACT: Peer review is the "gold standard" for evaluating journal and conference papers, research proposals, ongoing projects and university departments. However, it is widely believed that current systems are expensive, conservative and prone to various forms of bias. One form of bias identified in the literature is "social bias" linked to the personal attributes of authors and reviewers. To quantify the importance of this form of bias in modern peer review, we analyze three datasets providing information on the attributes of authors and reviewers and review outcomes: one from Frontiers, an open access publishing house with a novel interactive review process, and two from Spanish and international computer science conferences, which use traditional peer review. We use a random intercept model in which review outcome is the dependent variable, author and reviewer attributes are the independent variables, and bias is defined by the interaction between author and reviewer attributes. We find no evidence of bias in terms of gender, or the language or prestige of author and reviewer institutions, in any of the three datasets, but some weak evidence of regional bias in all three. Reviewer gender and the language and prestige of reviewer institutions appear to have little effect on review outcomes, but author gender and the characteristics of author institutions have large effects. The methodology used cannot determine whether these are due to objective differences in scientific merit or entrenched biases shared by all reviewers.
    (An illustrative sketch of this kind of random intercept model appears after this list.)
  • ABSTRACT: This paper discusses a scientometric-statistical model for inferring attributes of 'frontier research' in peer-reviewed research proposals submitted to the European Research Council (ERC). The first step conceptualizes and defines indicators to capture attributes of frontier research, using proposal texts as well as scientometric and bibliometric data of grant applicants. Based on the combination of indicators, the second step models the probability that a proposal is accepted and compares the model's outcome with the peer-review decision, with the goal of determining the influence of frontier research on the peer-review process. As a first attempt, we demonstrate and discuss, in a proof-of-concept approach, a data sample of about 10% of all proposals submitted to the ERC Starting Grant call (StG2009) in 2009, which shows the feasibility and usefulness of the scientometric-statistical model. Ultimately, the overall concept aims at testing new methods for monitoring the effectiveness of peer-review processes by taking a scientometric perspective on research proposals beyond publication and citation statistics.
    Introduction: Peers are central to the research community and scientific system at all stages of the publication or professional cycle (Wouters, 1997). Peer review serves as an essential mechanism for resource allocation and quality control (Bornmann, 2011), with input in both ex-ante reviews (e.g., at the funding stage, deciding what proposed research deserves to be funded through national or regional agencies or funding institutions) and in ex-post reviews (e.g., at the dissemination stage, deciding what conducted research deserves to be published). Peers are given, and take on, the challenge of determining the "best-fitting" scientific research in accord with a journal's status or a funding agency's strategy. Journals' and grant schemes' objectives are often not aligned, and consequently there are no standardized and easily transferable practices between ex-ante and ex-post reviews. While peer review is widely accepted and actively supported by the scientific community, it is not free from criticism on a number of long-standing issues (Roy, 1985; Chubin & Hackett, 1990; Chubin, 1994). Because of its central role, the monitoring of peer-review processes is important to continuously reveal to what extent goals are actually accomplished through review processes and decisions (Hojat et al., 2003; Sweizer & Collen, 1994; Bornmann & Daniel, 2008; Marsh et al., 2008).
    (An illustrative sketch of the acceptance-probability step appears after this list.)
  • ABSTRACT: We surveyed 113 astronomers and 82 psychologists active in applying for federally funded research on their grant-writing history between January 2009 and November 2012. We collected demographic data, effort levels, success rates, and perceived non-financial benefits from writing grant proposals. We find that the average proposal takes 116 PI hours and 55 CI hours to write, although time spent writing was not related to whether the grant was funded. Effort did translate into success, however, as academics who wrote more grants received more funding. Participants indicated modest non-monetary benefits from grant writing, with psychologists reporting a somewhat greater benefit overall than astronomers. These perceptions of non-financial benefits were unrelated to how many grants investigators applied for, the number of grants they received, or the amount of time they devoted to writing their proposals. We also explored the number of years an investigator can afford to apply unsuccessfully for research grants; our analyses suggest that funding rates below approximately 20%, commensurate with current NIH and NSF funding, are likely to drive at least half of the active researchers away from federally funded research. We conclude with recommendations and suggestions for individual investigators and for department heads.
    PLoS ONE 03/2015; 10(3):e0118494. DOI: 10.1371/journal.pone.0118494 · 3.53 Impact Factor
    (A back-of-the-envelope illustration of the funding-rate point appears after this list.)
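
A minimal sketch of the kind of random intercept model described in the first related abstract, fitted on synthetic data. The variable names, the numeric review-score outcome, and the gender-only attribute set are assumptions for illustration, not the study's actual data or code:

    # Hedged sketch: review score modelled with a reviewer random intercept;
    # the author x reviewer interaction term is the "social bias" estimate.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_reviewers, n_reviews = 40, 400
    reviewer = rng.integers(0, n_reviewers, n_reviews)
    reviewer_effect = rng.normal(0.0, 0.5, n_reviewers)            # random intercepts
    reviewer_female = rng.integers(0, 2, n_reviewers)[reviewer]    # reviewer attribute
    author_female = rng.integers(0, 2, n_reviews)                  # author attribute

    # Simulated scores with an author effect but no built-in interaction (no bias)
    score = (3.0 + 0.2 * author_female + reviewer_effect[reviewer]
             + rng.normal(0.0, 1.0, n_reviews))
    df = pd.DataFrame({"score": score, "reviewer": reviewer,
                       "author_female": author_female,
                       "reviewer_female": reviewer_female})

    model = smf.mixedlm("score ~ author_female * reviewer_female",
                        df, groups=df["reviewer"])
    print(model.fit().summary())

On this definition, a sizeable author_female:reviewer_female coefficient would be the kind of interaction the authors read as bias; main effects of author or reviewer attributes alone are not.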
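
A minimal sketch of the second step in the ERC-related abstract, again on synthetic data: model the probability that a proposal is accepted from a few frontier-research indicators, then compare the model's classification with the panel decision. The indicator names (novelty, citations, interdisc) and the logistic specification are assumptions for illustration, not the study's actual variables or model:

    # Hedged sketch: acceptance probability from indicators vs. panel decision
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 300
    df = pd.DataFrame({
        "novelty": rng.normal(0, 1, n),      # text-based indicator (assumed)
        "citations": rng.normal(0, 1, n),    # bibliometric indicator (assumed)
        "interdisc": rng.normal(0, 1, n),    # interdisciplinarity indicator (assumed)
    })
    # Synthetic panel decisions, loosely driven by the indicators
    linpred = -1.5 + 0.8 * df["novelty"] + 0.5 * df["citations"] + 0.3 * df["interdisc"]
    df["funded"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

    fit = smf.logit("funded ~ novelty + citations + interdisc", df).fit(disp=False)
    predicted = (fit.predict(df) > 0.5).astype(int)
    print(fit.params)
    print("model vs. panel agreement:", (predicted == df["funded"]).mean())

The comparison is diagnostic: systematic disagreement between the indicator-based model and the panel would indicate how much "frontier research", as operationalized by the indicators, actually drives the peer-review outcome.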
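
Finally, a back-of-the-envelope version of the funding-rate point in the last abstract (this is not the authors' survey analysis; the per-proposal success probability and the one-proposal-per-year rate are assumptions for illustration): if each proposal succeeds independently with probability p, the share of investigators still without a grant after k years of applying is (1 - p)^k.

    # Hedged illustration: how long until more than half of applicants hold a grant?
    def share_still_unfunded(p: float, years: int, proposals_per_year: int = 1) -> float:
        """Probability of zero funded proposals after `years` of applying."""
        return (1.0 - p) ** (proposals_per_year * years)

    for p in (0.10, 0.20, 0.30):
        k = next(y for y in range(1, 60) if share_still_unfunded(p, y) < 0.5)
        print(f"success rate {p:.0%}: half of applicants funded only after ~{k} years")

At a 20% success rate it takes roughly four years of annual applications before half of applicants hold any funding, which is consistent in spirit with the abstract's claim that rates below about 20% are likely to drive many researchers away.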
