Peer review for improving the quality of grant applications

Servizio Sovrazonale di Epidemiologia, ASL 20, Via Venezia 6, 15100 Alessandria, Piemonte, Italy.
Cochrane Database of Systematic Reviews 02/2007; (2):MR000003. DOI: 10.1002/14651858.MR000003.pub2
Source: PubMed


Background: Grant giving relies heavily on peer review to assess the quality of proposals, but evidence on the effects of these procedures is scarce.
Objectives: To estimate the effect of grant-giving peer-review processes on the importance, relevance, usefulness, soundness of methods, soundness of ethics, completeness, and accuracy of funded research.
Search strategy: Electronic database searches and citation searches; researchers in the field were contacted.
Selection criteria: Prospective or retrospective comparative studies with two or more comparison groups, assessing different interventions or one intervention against doing nothing. Interventions could concern different ways of screening, assigning, or masking submissions; different ways of eliciting opinions; or different decision-making procedures. Only original research proposals and quality outcome measures were considered.
Data collection and analysis: Studies were read, classified, and described according to their design and study question. No quantitative analysis was performed.
Main results: Ten studies were included. Two assessed the effect of different ways of screening submissions, one compared open versus blinded peer review, and three assessed the effect of different decision-making procedures. Four studies took agreement among the results of peer-review processes as their outcome measure. Screening procedures appear to have little effect on the outcome of the peer-review process. Open peer reviewers behave differently from blinded ones. Studies of decision-making procedures gave conflicting results. Agreement among reviewers, and between different ways of assigning proposals or eliciting opinions, was usually high.
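The review reports "agreement" as an outcome without naming a statistic. As a purely illustrative sketch, inter-reviewer agreement on fund/reject recommendations is often summarized with Cohen's kappa; the data below are invented, and the choice of kappa is an assumption, not something the included studies are known to have used.

```python
# Hypothetical illustration: quantifying agreement between two grant
# reviewers with Cohen's kappa. The review does not specify which
# agreement statistic the included studies actually used.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters judging the same set of items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement under independence of the two raters.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two reviewers' fund/reject recommendations on ten proposals (made up).
reviewer_1 = ["fund", "fund", "reject", "fund", "reject",
              "fund", "reject", "reject", "fund", "fund"]
reviewer_2 = ["fund", "fund", "reject", "fund", "fund",
              "fund", "reject", "reject", "reject", "fund"]

print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")  # -> 0.58
```

A kappa near 0 means agreement no better than chance, while values toward 1 indicate strong agreement beyond chance; this is one way the "usually high" agreement reported above could be quantified.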
Authors' conclusions: There is little empirical evidence on the effects of grant-giving peer review. No studies assessing the impact of peer review on the quality of funded research are currently available. Experimental studies assessing the effects of grant-giving peer review on the importance, relevance, usefulness, soundness of methods, soundness of ethics, completeness, and accuracy of funded research are urgently needed. In the meantime, practices aimed at controlling and evaluating the potentially negative effects of peer review should be implemented.

Cited by:
    • "Citations resulting from a funded application are a very limited measure of scientific impact, and a more elaborate panel of bibliometric and non-bibliometric measures will be needed to obtain a more accurate sense of how well peer review scores predict scientific impact, particularly for unfunded applications [25]–[30]. Whatever the measure(s), there is a great need for prospective validation studies of application peer review processes in order to provide a much more robust test to determine what conditions result in the most efficient and accurate peer review [9]. With the PrX program, we observed a correlation between peer review scores and bibliometric impact, which potentially can be utilized as a testing ground for such validation studies, although it is clear more retrospective data need to be gathered before a testable peer review model system, accounting for the full scoring range, can be developed. "
    ABSTRACT: There is a paucity of data in the literature concerning the validation of the grant application peer review process, which is used to help direct billions of dollars in research funds. Ultimately, this validation will hinge upon empirical data relating the output of funded projects to the predictions implicit in the overall scientific merit scores from the peer review of submitted applications. In an effort to address this need, the American Institute of Biological Sciences (AIBS) conducted a retrospective analysis of peer review data of 2,063 applications submitted to a particular research program and the bibliometric output of the resultant 227 funded projects over an 8-year period. Peer review scores associated with applications were found to be moderately correlated with the total time-adjusted citation output of funded projects, although a high degree of variability existed in the data. Analysis over time revealed that as average annual scores of all applications (both funded and unfunded) submitted to this program improved with time, the average annual citation output per application increased. Citation impact did not correlate with the amount of funds awarded per application or with the total annual programmatic budget. However, the number of funded applications per year was found to correlate well with total annual citation impact, suggesting that improving funding success rates by reducing the size of awards may be an efficient strategy to optimize the scientific impact of research program portfolios. This strategy must be weighed against the need for a balanced research portfolio and the inherent high costs of some areas of research. The relationship observed between peer review scores and bibliometric output lays the groundwork for establishing a model system for future prospective testing of the validity of peer review formats and procedures.
    PLoS ONE 09/2014; 9(9):e106474. DOI: 10.1371/journal.pone.0106474
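As a hedged sketch of the kind of retrospective analysis described above, the snippet below correlates hypothetical peer-review scores with citation counts using a Spearman rank correlation. The data, the scoring direction, and the choice of statistic are all assumptions for illustration; the AIBS study's actual data and methods are not reproduced here.

```python
# Illustrative only: correlating peer-review merit scores of funded projects
# with their citation output, in the spirit of the retrospective analysis
# cited above. All numbers are invented.
from scipy.stats import spearmanr

# Hypothetical (score, citations) pairs; lower score = better application,
# an NIH-style convention assumed here, not stated in the abstract.
scores = [1.2, 1.5, 1.8, 2.1, 2.4, 2.7, 3.0, 3.3]
citations = [140, 95, 120, 60, 75, 40, 55, 20]

rho, p_value = spearmanr(scores, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

A negative rho here (better scores go with more citations) would mirror the "moderately correlated" finding, while the scatter around the trend corresponds to the high variability the authors report.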
    • "Furthermore, it is questionable whether the peer-review process is sufficient to guarantee completeness and accuracy of funded research [35] and good reporting quality [31]. Because better structured papers that do not lack substantial information can improve readability, reviewers and readers might also benefit from author instructions that help to improve reporting quality. "
    ABSTRACT: Reporting guidelines (e.g. CONSORT) have been developed as tools to improve quality and reduce bias in reporting research findings. Trial registration has been recommended for countering selective publication. The International Committee of Medical Journal Editors (ICMJE) encourages the implementation of reporting guidelines and trial registration as uniform requirements (URM). For the last two decades, however, biased reporting and insufficient registration of clinical trials have been identified in several literature reviews and other investigations. No study has so far investigated the extent to which author instructions in psychiatry journals encourage following reporting guidelines and trial registration. Psychiatry journals were identified from the 2011 Journal Citation Report. Information given in the author instructions and during the submission procedure of all journals was assessed on whether major reporting guidelines, trial registration and the ICMJE's URM in general were mentioned and adherence recommended. We included 123 psychiatry journals (English and German language) in our analysis. A minority recommend or require 1) following the URM (21%), 2) adherence to reporting guidelines such as CONSORT, PRISMA, STROBE (23%, 7%, 4%), or 3) registration of clinical trials (34%). The subsample of the top-10 psychiatry journals (ranked by impact factor) provided much better but still improvable rates. For example, 70% of the top-10 psychiatry journals do not ask for the specific trial registration number. Under the assumption that better reported and better registered clinical research that does not lack substantial information will improve the understanding, credibility, and unbiased translation of clinical research findings, several stakeholders including readers (physicians, patients), authors, reviewers, and editors might benefit from improved author instructions in psychiatry journals. A first step of improvement would consist in requiring adherence to the broadly accepted reporting guidelines and to trial registration.
    PLoS ONE 10/2013; 8(10):e75995. DOI: 10.1371/journal.pone.0075995
    • "The existence of heterogeneity in grant application assessments by reviewers may be inherent to peer review [35] and may challenge the validity of this method of grant assessment [1], [6]. However, the impact of inter-reviewer heterogeneity on the quality and effectiveness of grant application reviews [2], [36]–[40] has rarely been investigated. Several strategies might help to reduce this heterogeneity. "
    ABSTRACT: Peer review of grant applications has been criticized as lacking reliability. Studies showing poor agreement among reviewers supported this possibility but usually focused on reviewers' scores and failed to investigate reasons for disagreement. Here, our goal was to determine how reviewers rate applications, by investigating reviewer practices and grant assessment criteria. We first collected and analyzed a convenience sample of French and international calls for proposals and assessment guidelines, from which we created an overall typology of assessment criteria comprising nine domains: relevance to the call for proposals, usefulness, originality, innovativeness, methodology, feasibility, funding, ethical aspects, and writing of the grant application. We then performed a qualitative study of reviewer practices, particularly regarding the use of assessment criteria, among reviewers of the French Academic Hospital Research Grant Agencies (Programmes Hospitaliers de Recherche Clinique, PHRCs). Semi-structured interviews and observation sessions were conducted. Both the time spent assessing each grant application and the assessment methods varied across reviewers. The assessment criteria recommended by the PHRCs were listed by all reviewers as frequently evaluated and useful. However, use of the PHRC criteria was subjective and varied across reviewers. Some reviewers gave the same weight to each assessment criterion, whereas others considered originality to be the most important criterion (12/34), followed by methodology (10/34) and feasibility (4/34). Conceivably, this variability might adversely affect the reliability of the review process, and studies evaluating this hypothesis would be of interest. Variability across reviewers may result in mistrust among grant applicants about the review process. Consequently, ensuring transparency is of the utmost importance. Consistency in the review process could also be improved by providing common definitions for each assessment criterion and uniform requirements for grant application submissions. Further research is needed to assess the feasibility and acceptability of these measures.
    PLoS ONE 09/2012; 7(9):e46054. DOI: 10.1371/journal.pone.0046054
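The finding that reviewers weight criteria differently can be made concrete with a small sketch: the same criterion scores, aggregated under two hypothetical weighting schemes, can produce different rankings of the same applications. All numbers below are invented; the PHRC study reports varying weights but no numeric scheme.

```python
# Hypothetical illustration of how reviewer-specific criterion weights can
# reorder identical grant applications. Criteria, scores, and weights are
# made up for demonstration.
criteria = ["originality", "methodology", "feasibility"]

# Raw criterion scores (0-10) for three hypothetical applications.
applications = {
    "A": {"originality": 9, "methodology": 5, "feasibility": 6},
    "B": {"originality": 6, "methodology": 9, "feasibility": 7},
    "C": {"originality": 7, "methodology": 7, "feasibility": 9},
}

# Reviewer 1 weights originality most; reviewer 2 weights methodology most.
weights = {
    "reviewer_1": {"originality": 0.6, "methodology": 0.3, "feasibility": 0.1},
    "reviewer_2": {"originality": 0.2, "methodology": 0.6, "feasibility": 0.2},
}

for reviewer, w in weights.items():
    ranked = sorted(
        applications,
        key=lambda app: sum(w[c] * applications[app][c] for c in criteria),
        reverse=True,
    )
    print(reviewer, "ranking:", ranked)
# reviewer_1 ranking: ['A', 'C', 'B']
# reviewer_2 ranking: ['B', 'C', 'A']
```

The two reviewers agree on every raw score yet disagree on which application to fund, which is exactly the kind of inter-reviewer heterogeneity the study describes.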
