On the time spent preparing grant proposals: an observational study of Australian researchers

Danielle L Herbert, Adrian G Barnett, Philip Clarke, Nicholas Graves
To cite: Herbert DL,
Barnett AG, Clarke P, et al.
On the time spent preparing
grant proposals: an
observational study of
Australian researchers. BMJ
Open 2013;3:e002800.
Prepublication history for this paper is available online. To view these files please visit the journal online.
Received 27 February 2013
Accepted 23 April 2013
This final article is available for use under the terms of the Creative Commons Attribution Non-Commercial 2.0 Licence.
School of Public Health & Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, Australia
Melbourne School of Population and Global Health, The University of Melbourne, Melbourne, Australia
Correspondence to
Dr Danielle L Herbert;
Objectives: To estimate the time spent by researchers preparing grant proposals, and to examine whether spending more time increases the chances of success.
Design: Observational study.
Setting: The National Health and Medical Research
Council (NHMRC) of Australia.
Participants: Researchers who submitted one or
more NHMRC Project Grant proposals in March 2012.
Main outcome measures: Total researcher time
spent preparing proposals; funding success as
predicted by the time spent.
Results: The NHMRC received 3727 proposals of
which 3570 were reviewed and 731 (21%) were
funded. Among our 285 participants who submitted
632 proposals, 21% were successful. Preparing a new
proposal took an average of 38 working days of
researcher time and a resubmitted proposal took 28
working days, an overall average of 34 days per
proposal. An estimated 550 working years of
researchers' time (95% CI 513 to 589) was spent
preparing the 3727 proposals, which translates into
annual salary costs of AU$66 million. More time spent
preparing a proposal did not increase the chances of
success for the lead researcher (prevalence ratio (PR)
of success for 10 day increase=0.91, 95% credible
interval 0.78 to 1.04) or other researchers (PR=0.89,
95% CI 0.67 to 1.17).
Conclusions: Considerable time is spent preparing
NHMRC Project Grant proposals. As success rates are
historically 20–25%, much of this time has no
immediate benefit to either the researcher or society,
and there are large opportunity costs in lost research
output. The application process could be shortened so
that only information relevant for peer review, not
administration, is collected. This would have little
impact on the quality of peer review and the time saved
could be reinvested into research.
Project Grants are the major source of medical research funding in Australia, accounting for around 70% of all research funds awarded by the National Health and Medical Research Council (NHMRC) in 2012.
Application numbers have steadily risen over
time making the process more competitive;
there were 1881 proposals in 2003 and 3727
in 2012, a 98% increase. For Australian
researchers, this increase in proposal
numbers has led to declining success rates
and budget cuts for successful proposals.
Project Grants aim to support single researchers or small teams of researchers for a defined project of 1–5 years. The application
process takes almost a year, and has
remained essentially the same for the last
decade. The funding round opens in
December, full proposals are submitted
Article focus
Researchers would prefer to spend less time preparing grant proposals and more time on actual research.
The time spent preparing grant proposals is thought to be large, but we do not have accurate estimates of the total time spent across Australia.
Key messages
An estimated 550 working years of researchers' time was spent preparing proposals for Australia's major health and medical funding scheme.
More time spent preparing a proposal did not increase the chances of success, and there was no agreement between the researchers' ranking of their proposals and the results from peer review.
Most researchers understand that a perfect peer-
review system is not realistic.
Strengths and limitations of this study
Our time estimates were retrospective with no
details on identifying the sections of the pro-
posal that took the most time.
We used a short survey to increase the response
rate, but this means we have limited data on the
participants and their institutions.
Many researchers were reluctant to give us their
proposal identification numbers, presumably
because of confidentiality concerns.
Herbert DL, Barnett AG, Clarke P, et al. BMJ Open 2013;3:e002800. doi:10.1136/bmjopen-2013-002800 1
Open Access Research
online in March, are assessed by two external reviewers (April–May), lead researchers provide responses to the reviewers' reports (May), and grant review panels of 10–12 experts assess and score each proposal, considering reports from two panel spokespersons and the applicants' responses to the reviewers' reports (August–September). Funding is then allocated based on a ranking determined by the score until the budget is exhausted, and the successful proposals are announced (October–November). The budget for Project Grants beginning in 2013 was AU$458 million.
The process which Australia uses, involving the assessment of full proposals, contrasts with several comparable funding bodies overseas which use staggered application processes. For example, the UK Wellcome Trust Investigator Awards first invite a research plan; shortlisted applicants are then invited to provide more information. The UK Engineering and Physical Sciences Research Council (EPSRC) has a similar staggered process for their Platform Grants, as does the USA National Science Foundation (NSF). The NSF's guidelines explain that a key reason for short-listing is reducing the wasted effort of researchers spending time preparing proposals with a low chance of success.
Despite the importance of applying for research
funding, the total time spent by researchers preparing
and submitting proposals is not known.
Guidelines on
how to effectively write grant proposals advise that they
cannot be written in a short amount of time,
but we
do not know if spending more time increases the chance
of success. A Nobel Laureate in Physics and Australian-based researcher, Professor Brian Schmidt, recently highlighted the large amount of time Australian researchers were wasting on preparing lengthy proposals for Australian Research Council funding.
We surveyed the Australian medical research commu-
nity in order to estimate their time spent preparing pro-
posals and whether spending more time increased their
chance of success. We also examined whether previous
experience with peer review improved their success.
Study design
In March 2012, Australian researchers working in health
and medicine submitted 3727 proposals to the NHMRC
Project Grant funding scheme.
We attempted to contact the lead researchers of every proposal via the offices of research of every Australian university and research institute. Of the 51 offices approached, 30 (59%) agreed to distribute an email invitation to
their researchers. There was no reminder email. Willing
researchers completed a short online survey from March
to May 2012. The funding outcomes were announced by
the NHMRC in October 2012. This study was approved
by the Queensland University of Technology Ethics
Committee (approval number 1100001472).
Survey questions
The online survey asked researchers to consider their
time spent on proposals submitted in March 2012. For
each proposal, we asked them if they were the lead
researcher and how much time they spent (in days), and
whether the proposal was new or a resubmission. We
also asked them about their previous experience with
the peer-review system as an expert panel member or
external peer reviewer, which is roughly akin to being a
peer reviewer for a journal and part of the editorial
board. We asked for their salary in order to estimate the financial costs of preparing proposals. To protect the anonymity of our participants and to minimise their time spent completing the survey, we did not ask them for extra personal details or for the name of their institution.
For researchers who submitted two or more proposals, we asked them to rank their proposals in the order of which most deserved funding. Researchers also
responded to a hypothetical scenario concerning their
desired level of reliability between two independent
peer-review panels (box 1). This was used to estimate
the desired reliability of the peer-review process. The
hypothetical numbers of 100 proposals and 20 funded
were based on a realistic NHMRC Project Grant panel.
Statistical methods
The total number of days spent preparing proposals was estimated using the following equation:

3727 × {(1 − P)[T(N, L) + (M − 1)T(N, O)] + P[T(R, L) + (M − 1)T(R, O)]}

where 3727 is the total number of proposals in 2012, P is the proportion of resubmitted proposals, T(·) is the average time spent in days for a combination of new or resubmitted (N or R) proposals and lead or other researchers (L or O), and M is the average number of researchers per proposal. This equation recognises that resubmitted proposals usually take less time than new proposals, and that lead researchers generally spend more time than the other researchers. This estimate on the scale of working days was scaled to
Box 1 Hypothetical scenario on peer-review reliability
Question: Imagine that 100 Project Grant proposals in the same
field have been reviewed by a panel of 10 experts. They selected
20 proposals for funding.
Now imagine that a second panel of 10 experts reviews the same
100 proposals and must independently decide on which 20 pro-
posals deserve funding. How many of the 20 proposals originally
selected for funding would you want to also be selected by the
second panel?
Response Options: Exactly the same 20 proposals, a difference of 1 proposal, […], 20 completely different proposals.
working years by assuming 46 working weeks per year. A bootstrap 95% CI was calculated by randomly resampling from the observed responses to capture the uncertainty in the time spent, number of researchers and proportion of resubmissions. Of the 3727 proposals submitted, 18 were subsequently withdrawn. These withdrawn proposals were included in our estimate of the total time, as this time is still valid for our aim of capturing the total researcher time spent preparing proposals across Australia.
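The estimation just described can be sketched in code. The 3727 proposals, the 46-week working year and the 34-day overall average come from the text; the helper treats P, M and the per-researcher times as inputs because their exact values are not reported here, and the bootstrap is a generic percentile sketch rather than the authors' implementation.

```python
import random

N_PROPOSALS = 3727
DAYS_PER_YEAR = 46 * 5  # 46 working weeks of 5 working days


def total_days(p_resub, m, t_nl, t_no, t_rl, t_ro):
    """3727{(1-P)[T(N,L)+(M-1)T(N,O)] + P[T(R,L)+(M-1)T(R,O)]}."""
    new = t_nl + (m - 1) * t_no
    resub = t_rl + (m - 1) * t_ro
    return N_PROPOSALS * ((1 - p_resub) * new + p_resub * resub)


# Consistency check against the headline figure: an overall average
# of 34 days per proposal implies roughly 550 working years.
years = N_PROPOSALS * 34 / DAYS_PER_YEAR
print(round(years))  # 551


def bootstrap_ci(times, n_boot=2000, seed=1):
    """Percentile bootstrap 95% CI for the mean of reported times."""
    random.seed(seed)
    means = sorted(
        sum(random.choices(times, k=len(times))) / len(times)
        for _ in range(n_boot)
    )
    return means[int(0.025 * n_boot)], means[int(0.975 * n_boot)]
```

The paper resamples jointly over times, researcher counts and the resubmission proportion; the sketch above shows the mechanism for a single quantity.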
We used logistic regression to estimate the prevalence ratio (PR) of success according to the researchers' experience and time spent on the proposal. PRs are the ratio of two probabilities, whereas odds ratios (ORs) are the ratio of two odds. Using PRs allows us to make multiplicative statements about probabilities (eg, twice as likely) that are not possible with ORs.
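A small numerical illustration of the PR/OR distinction; the probabilities below are hypothetical, not the paper's data.

```python
def prevalence_ratio(p1, p0):
    """Ratio of two probabilities: reads directly as 'X times as likely'."""
    return p1 / p0


def odds_ratio(p1, p0):
    """Ratio of two odds; drifts away from the PR as outcomes get common."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))


# A success probability of 0.30 vs 0.15 is genuinely "twice as likely":
pr = prevalence_ratio(0.30, 0.15)
odds = odds_ratio(0.30, 0.15)
print(round(pr, 2), round(odds, 2))  # 2.0 2.43
```

The OR (2.43) overstates the ratio of probabilities (2.0), which is why PRs support the plain-language statements the authors want to make.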
There were small amounts of missing data (0–7%) for the questions on researcher experience and times. These missing data were imputed using multiple imputation based on the observed responses. For example, 35% said that they had previously served on a peer-review panel, hence missing values for this question were randomly imputed as 'Yes' with probability 0.35.
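As a sketch, a single random draw along these lines might look as follows. The paper embeds the imputation in a Bayesian model, which effectively repeats such draws and propagates the uncertainty; this standalone version only illustrates the mechanism.

```python
import random


def impute_yes_no(responses, seed=42):
    """Randomly fill missing Yes/No answers, drawing 'Yes' with
    probability equal to the observed proportion of 'Yes'."""
    random.seed(seed)
    observed = [r for r in responses if r is not None]
    p_yes = observed.count("Yes") / len(observed)
    filled = [
        r if r is not None else ("Yes" if random.random() < p_yes else "No")
        for r in responses
    ]
    return filled, p_yes


# Illustrative data: 35% observed 'Yes', as in the paper's example,
# with five missing answers.
answers = ["Yes"] * 35 + ["No"] * 65 + [None] * 5
filled, p_yes = impute_yes_no(answers)
print(p_yes, None in filled)  # 0.35 False
```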
The imputation and logistic regression models were performed simultaneously using a Bayesian model, hence the final estimates of the PRs for success incorporate the uncertainty due to missing data. The model was fitted using the Bayesian WinBUGS software and the PRs are presented as means with 95% credible intervals.
We examined potential non-linear associations between time spent and success. These were a threshold beyond which more time did not increase the probability of success, log-transformed time and a quadratic association; however, we found no statistically significant associations (results not shown).
We compared the researchers' ranking of their proposals with their success or failure in the peer-review system. For each pair of proposals from the same researcher, we compared their relative low and high ranking with their funding success (yes or no). We only examined those proposals where there was a difference in success, as pairs of grants that were both failures or both successes contain no information for this analysis. We examined these results using a two-by-two table, χ²-test and κ agreement statistic.
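Using the counts later reported in table 2, the κ statistic can be reproduced directly. This is a minimal sketch of Cohen's kappa for the 2×2 table; the paper's quoted value is −0.06.

```python
def cohens_kappa(n_low_no, n_low_yes, n_high_no, n_high_yes):
    """Kappa for the ranking-vs-funding table, where 'agreement' means
    a high-ranked proposal was funded or a low-ranked one was not."""
    n = n_low_no + n_low_yes + n_high_no + n_high_yes
    p_obs = (n_low_no + n_high_yes) / n          # observed agreement
    p_low = (n_low_no + n_low_yes) / n           # row margin (low rank)
    p_no = (n_low_no + n_high_no) / n            # column margin (not funded)
    p_exp = p_low * p_no + (1 - p_low) * (1 - p_no)  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)


# Counts from table 2: low/not funded, low/funded, high/not funded,
# high/funded.
kappa = cohens_kappa(82, 92, 92, 82)
print(round(kappa, 2))  # -0.06
```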
Results

Our online survey was started by 446 researchers,
but only 285 (64%) provided us with their proposal
number(s). We needed the proposal numbers in order
to match the survey responses (completed from March
to May 2012) with the success outcomes from the
NHMRC (announced in October 2012). However, many
researchers were reluctant to give us this information.
The 285 who gave us their proposal numbers submitted
632 proposals. The funding success rate in our sample
was 21%, the same as the overall NHMRC success rate
(21%), which indicates that our sample was representative of the wider population. The NHMRC received 3727
proposals of which 3570 were reviewed and 731 were
funded, giving a success rate of 21%.
An estimated 550 working years of researchers' time was spent preparing the 3727 proposals (95% CI 513 to 589 working years). Based on the researchers' salaries, this is an estimated monetary cost of AU$66 million per year, which is 14% of the NHMRC's total funding budget. Each new proposal took an average of 38 working days of the researchers' time and resubmissions took an average of 28 working days: an overall average of 34 days per proposal. Lead researchers spent an average of 27 and 21 working days per new and resubmitted proposal, respectively, with the remaining time spent by other researchers.
More time spent on the proposal did not increase the
probability of success (table 1). Owing to concern about
a lack of power to detect an association between time
spent and success, we used a retrospective power calcula-
tion. We had a 90% power to detect an increase in the
probability of success of 0.028 for a 10-day increase in
the time spent (based on the observed times and successes of our sample). If we have missed a true association, it is likely to be smaller than a 0.028 increase in probability for 10 more days of time spent.
Experience with the peer-review system, as either an expert panel member or external peer reviewer, did increase the probability of success, but these increases were not statistically significant (table 1). Resubmitted proposals had a statistically significant lower probability of success compared with new proposals (PR 0.64, 95% CI 0.43 to 0.92).
There was no agreement between the researchers' rankings of their proposals and which ones were funded (table 2). The χ² test showed no association (p=0.34) and the κ agreement was negative (−0.06).
Table 1 Prevalence ratios of funding success by researcher experience and time spent on proposal
Researchers' experience and time PR 95% CI
Ever served on peer-review panel (Yes vs No) 1.27 0.89 to 1.74
Ever peer reviewed a proposal (Yes vs No) 1.33 0.78 to 2.22
Salary (per $5000 increase) 0.99 0.94 to 1.04
Resubmitted proposal (Yes vs No) 0.64 0.43 to 0.92
Time for lead researchers (10 day increase) 0.91 0.78 to 1.04
Time for other researchers (10 day increase) 0.89 0.67 to 1.17
95% CI, credible intervals; PR, prevalence ratio.
Researchers were willing to accept a wide range in reliability between two hypothetical peer-review processes (figure 1). The modal response was a difference of five proposals (meaning 15 the same), which is a 25% disagreement in funding between the two processes.
Discussion

Australian researchers spend an enormous amount of
time preparing grant proposals.
We estimate that the 2012 NHMRC Project Grant scheme cost 550 working years of researchers' time, which is AU$66 million in estimated salary costs. To put this quantum of resources into perspective, it exceeds the total annual staff costs at the Walter and Eliza Hall Institute (WEHI 2012, AU$61.6 million), one of Australia's major medical research institutes, which produced 284 peer-reviewed publications in 2012.
As success rates for the Project Grant scheme are historically between 20% and 25%, the majority of time spent preparing proposals is wasted, with no immediate benefit due to the failure to obtain funding. Some wasted time will be salvaged by submitting failed proposals to other funding agencies or resubmitting next year. However, resubmissions took just 10 days less on average to prepare than new submissions, and resubmissions had a 36% lower probability of success (table 1).
Spending more time on a proposal is no predictor of success (table 1), and the poor agreement between researchers' rankings and funding success (table 2) further demonstrates how hard it is to predict success and justify spending more time on proposals. These findings are consistent with previous studies on NHMRC Project Grants that have shown a high degree of variation in panel members' scores and a low correlation between the scores assigned for track record and bibliometric measures.
Underestimating time and cost
Our cost estimates are likely to underestimate the true
costs because some proposals are started but not submit-
ted, and we did not capture the time of researchers who
provided technical help or administrative staff who
helped with the submission process. Also, our estimates do not include the costs of peer review, which would be the time of 1–3 external peer reviewers per proposal and an expert panel of 10–12 senior researchers meeting for a week, as well as the administrative time of organising this peer review.
Our findings are based on retrospective self-reported times spent preparing proposals, and we could not verify these times. Our study was designed to minimise participant burden and maximise our response rate by using a short survey that maintained anonymity. Participants completed our survey soon after the NHMRC closing date for submissions, which should have reduced recall bias. At the time of completing the survey, participants did not know if their proposal had succeeded, hence our results are not biased by disgruntled researchers inflating their times. Future research could use diaries to prospectively collect the time spent preparing proposals and identify the sections of the proposal that took the most time. Future research could also examine whether preparing unsuccessful proposals provides any benefits to the researchers in terms of refining their scientific ideas.
Excessive information
Researchers would prefer to spend less time writing proposals and more time on actual research. Our results show that most researchers do not expect a perfect system (figure 1). Hence, the amount of information collected does not need to aim for the ideal system shown in figure 2. Most researchers understand that a perfect system is unachievable. The hypothetical association between the information that the system collects (which determines the time spent by researchers) and the accuracy of the system is plotted in figure 2. Underlying the figure is the notion that the marginal cost of providing more information is rising (which is consistent with our results regarding time spent on grant preparation and success) and that the marginal benefit flowing from this information in improving the
Table 2 Agreement between researchers' relative ranking of their proposals and funding success
Researchers' ranking Funding success: No Funding success: Yes
Low 82 92
High 92 82
κ agreement −0.06
Figure 1 Desired reliability of a hypothetical system (see
box 1 for hypothetical question).
ranking of proposals is declining. The standard way of optimising the amount of information collected is to equate the marginal benefits with the marginal costs, which occurs at the maximum net benefit. Beyond this point, marginal costs to the applicant outweigh the benefits even though there may still be improvements in the accuracy of ranking. One may also reach a point where the net benefits become negative, when additional information only confuses the ranking process.
Our results suggest that the current NHMRC Project Grant system collects more information than is necessary, as the association between time spent (at an individual level) and success was negative (table 1), putting it on the downward slope of figure 2. Project Grant proposals are between 80 and 120 pages long and panel members are expected to read and rank between 50 and 100 proposals. It is optimistic to expect accurate judgements in this sea of excessive information. An alternative application process is to use an initial short proposal, with shortlisted proposals being asked to provide more information that would then be used to determine funding success.
Recommendations to minimise burden
Our time estimates are comparable with two small
Australian studies on the time spent preparing proposals
for NHMRC Project Grants. In 2004, a sample of 69
researchers spent an average of 20 days per proposal.
In 2009, a sample of 42 lead researchers spent between
20 and 30 days per proposal, which, when extrapolated
to the whole of Australia, gave an estimated total prepar-
ation cost of AU$41 million.
In 2012, the Canadian
Institutes of Health Research review of their Open
Operating Grant Program included a survey of 378
researchers who spent on average 169 h (or 23 working
days at 7.5 h per day) per proposal.
In Canada, newly recommended reforms include a reduction in the amount of information submitted, to minimise the burden on applicants and peer reviewers.
A recent review of health and medical research funding in Australia recommended that the NHMRC's online application process be simplified. We not only agree but also believe that the information requested for each proposal could be reduced. This is because the key scientific information used to judge a Project Grant's worthiness is just nine pages of a proposal that runs to around 80–120 pages in total. Therefore, the proposals could easily be shortened without any impact on peer review.
The inclusion of a staged application process starting
with an expression of interest (EOI), as used in the UK
and the USA, would further minimise the burden on
researchers. If an EOI could be used to reject 30% of proposals, and assuming that an EOI takes one-quarter of the time of a full proposal to prepare, then (based on our survey) this would save 124 years of researchers' time per year. This saved time is equivalent to funding 124 new postdoctoral positions per year.
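The 124-year figure can be reproduced from the stated assumptions. The accounting choice below (counting only the rejected applicants' EOI time as a new cost, since successful applicants proceed to full proposals anyway) is our reading of the arithmetic, not spelled out in the text.

```python
TOTAL_YEARS = 550     # estimated researcher working years per funding round
REJECT_RATE = 0.30    # assumed share of proposals rejected at the EOI stage
EOI_FRACTION = 0.25   # an EOI takes one-quarter of a full proposal's time

# Each rejected proposal avoids full preparation but still incurs the
# EOI, so it saves (1 - 0.25) of its full-proposal time.
saved = TOTAL_YEARS * REJECT_RATE * (1 - EOI_FRACTION)
print(round(saved))  # 124
```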
Changes to eligibility rules for resubmitting proposals
from previous funding rounds could reduce the total
number of applications and improve success rates. The
UK proposals submitted to the EPSRC Platform Grant scheme almost halved from 2009–2010 to 2011–2012 (3379 vs 1938) and the success rate increased (30% vs 41%) after the EPSRC implemented stricter eligibility rules, including a 'repeatedly unsuccessful applicants' policy.
From our survey, the success rate for new proposals was higher than for resubmissions; therefore, limitations on the resubmission of Project Grants may reduce the time wasted preparing proposals by improving the chance of success.
The format of grant proposals could be shortened so
that only information relevant for peer review, not
administration, is collected. The administrative data
could be collected at a later date for only those proposals that were successful. Another option is to restructure the format of proposals based on the total budget, where projects with smaller budgets can submit shorter proposals. The potential savings in researchers' time are enormous, since preparing research proposals takes between 1 and 3 months in a year. If more of this time could be dedicated to actual research, then there would be more and faster medical research discoveries. Weighing down researchers in a lengthy grant proposal process is a poor use of their valuable time.
Acknowledgements The authors are grateful to the Australian researchers
who provided the survey data.
Contributors AGB, PC and NG conceived and designed the study and
analysed the data. All authors interpreted the data, drafted the article or
revised it critically for important intellectual content and approved the version
to be published. AGB is the study chief investigator and is the guarantor.
Figure 2 Hypothetical association between the information collected for peer review and the accuracy of awarding the best proposals. To draw this association, we assume that all proposals can be ranked (without ties) from the best to the worst.
Funding This work was funded by the National Health and Medical Research
Council (Project Grant number 1023735).
Competing interests DLH's salary is supported by NHMRC funding. AGB
receives funding from NHMRC and QLD Government. PC receives funding from
NHMRC, NIH and several other national and international health funding
agencies. NG receives funding from NHMRC, ARC, NIHR, QLD Government and
is the academic director of the Australian Centre for Health Services Innovation.
Ethics approval Queensland University of Technology Ethics Committee.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement No additional data are available.
References

1. National Health and Medical Research Council. Funding rate and
funding by funding scheme. Canberra: NHMRC, 2012. http://www. (accessed Nov 2012).
2. Wilkinson E. Wellcome Trust to fund people not projects. Lancet
3. Engineering and Physical Sciences Research Council. Research
proposal funding rates 2011–2012. Swindon: EPSRC, 2012. http://
FundingRates1112.pdf (accessed Jan 2013).
4. National Science Foundation. Grant proposal guide. Arlington VA:
NSF, 2011. I-3.
nsf11001/gpgprint.pdf (accessed Nov 2012).
5. Wood FQ, Meek VL, Harman G. The research grant application
process. Learning from failure? Higher Educ 1992;24:123.
6. Kreeger K. A winning proposal. Nature 2003;423:1023.
7. Schmidt B. We must rebuild our grants system. The Australian
14 November 2012. Section: Opinion.
story-e6frgcko-1226516110682 (accessed Nov 2012).
8. National Health and Medical Research Council. Project Grants for
funding commencing in 2013. Canberra: NHMRC, 2012. https://
(accessed Nov 2012).
9. Davison AC, Hinkley DV. Bootstrap methods and their application.
Cambridge: Cambridge University Press, 1997.
10. Deddens JA, Petersen MR. Approaches for estimating prevalence
ratios. Occup Environ Med 2008;65:501–6.
11. Lunn DJ, Thomas A, Best N, et al. WinBUGS: a Bayesian modelling framework: concepts, structure, and extensibility. Stat Comput 2000;10:325–37.
12. Herbert DL, Barnett AG, Graves N. Funding: Australia's grant system wastes time. Nature 2013;495:314.
13. Walter and Eliza Hall Institute of Medical Research. Annual Report
2011–2012. Melbourne: WEHI, 2012. 168.
uploads/11-12_WEHI_Annual_Report.pdf (accessed Feb 2013).
14. Graves N, Barnett AG, Clarke P. Funding grant proposals for
scientific research: retrospective analysis of scores by members of
grant review panel. BMJ 2011;343:d4797.
15. Nicol MB, Henadeera K, Butler L. NHMRC grant applications: a
comparison of track record scores allocated by grant assessors
with bibliometric analysis of publications. Med J Aust
16. Smith R. Classical peer review: an empty gun. Breast Cancer Res
2010;12(Suppl 4):S13.
17. Thomas CR, Maurice SC. Managerial economics. 9th edn. Boston:
McGraw-Hill Irwin, 2008.
18. Mow KE. Inside the black box: research grant funding and peer
review in Australian research councils. LAP Lambert Academic
Publishing, 2010, 188
19. Canadian Institutes of Health Research. Evaluation of the open
operating grant program: final report. Ontario: CIHR, 2012. http://
pdf (accessed Feb 2013).
20. Commonwealth of Australia. Strategic review of health and medical
research in Australia: better health through research. Canberra:
DOHA, 2013.
Final_Report.pdf (accessed Apr 2013).

Supplementary resources (2)

... Multiple studies have signposted that in national research grant evaluation systems certain groups of applicants often have an advantage in attracting research funding [10][11][12]. Nonobjective unfair advantages blur the connection between research excellence and funding success, and can have especially devastating consequences under extreme level of competition for funding. Most discussed characteristics of these more advantageous groups are the seniority or the age of the principal investigator, the previous research success, and also the past track record in attracting research funding. ...
... We agree with previous studies [6,8,9,11,17,40] in claiming how there is a need for research addressing how scarce research funds are allocated. As so far, the greatest share of studies addressing this questions have been conducted based on large and English based national systems like Canada [38,41,42], Australia [10,12], UK [43,44], and U.S. [6,11,14,45], or capture the recipients of elite grants like ERC [13,46,47], and the most addressed disciplines have been medical and STEM fields [8,43,44]. The scholarly community would benefit greatly by additions from smaller and non-English countries like Estonia and similar, and capturing also less studied research disciplines. ...
Full-text available
Recent data highlights the presence of luck in research grant allocations, where most vulnerable are early-career researchers. The national research funding contributes typically the greatest share of total research funding in a given country, fulfilling simultaneously the roles of promoting excellence in science, and most importantly, development of the careers of young generation of scientists. Yet, there is limited supply of studies that have investigated how do early-career researchers stand compared to advanced-career level researchers in case of a national research grant system. We analyzed the Estonian national highly competitive research grant funding across different fields of research for a ten-year-period between 2013-2022, including all the awarded grants for this period (845 grants, 658 individual principal investigators, PI). The analysis was conducted separately for early-career and advanced-career researchers. We aimed to investigate how the age, scientific productivity and the previous grant success of the PI vary across a national research system, by comparing early-and advanced-career researchers. The annual grant success rates varied between 14% and 28%, and within the discipline the success rate fluctuated across years even between 0-67%. The year-to-year fluctuations in grant success were stronger for early-career researchers. The study highlights how the seniority does not automatically deliver better research performance, at some fields, younger PIs outperform older cohorts. Also, as the size of the available annual grants fluctuates remarkably, early-career researchers are most vulnerable as they can apply for the starting grant only within a limited "time window".
... Peer review is incredibly expensive. Most of this cost is not monetary, since administrative costs have steadily decreased since (at least) the 1990s; rather, it lies in the opportunity costs of scientists who spend their time reviewing and (especially) writing grants (Herbert et al. 2013). This has been exacerbated by increasingly large labor markets and hypercompetition (Marsh, Jayasinghe and Bond 2008). ...
Despite the surging interest in introducing lottery mechanisms into decision-making procedures for science funding bodies, the discourse on funding-by-lottery remains underdeveloped and, at times, misleading. Funding-by-lottery is sometimes presented as if it were a single mechanism when, in reality, there are many funding-by-lottery mechanisms with important distinguishing features. Moreover, funding-by-lottery is sometimes portrayed as an alternative to traditional methods of peer review when peer review is still used within funding-by-lottery approaches. This obscures a proper analysis of the (hypothetical and actual) variants of funding-by-lottery and of the important differences amongst them. The goal of this article is to provide a preliminary taxonomy of funding-by-lottery variants and to evaluate how the existing evidence on peer review might lend differentiated support to these variants. Moreover, I point to gaps in the literature on peer review that must be addressed in future research. I conclude by building on the work of Avin and moving toward a more holistic evaluation of funding-by-lottery. Specifically, I consider the implications funding-by-lottery variants may have for trust and social responsibility.
... It takes at least a month to write a typical research proposal [13]. However, the time required to prepare research applications is not directly comparable across settings, owing to differences in the type of call, research field, and funding agency guidelines. ...
The goal of the research is to understand the factors that determine the success of international research proposals. For this purpose, a multi-stage study will be carried out. The study will include a systematic literature review, semi-structured interviews with European Commission evaluators, and the development of a model of success determinants. Text analysis of historical proposals will enhance knowledge of the success factors linked with the applications' discourse and language. The proposed research will complement the existing literature, as the study will be based on a comprehensive dataset covering all funded and rejected project applications under the European Union's Horizon 2020 Framework Programme. The model will support the participation of Polish higher education institutions in European funding schemes, whose acquisition is a key challenge for them, and provide an opportunity for the development of innovative research and international cooperation. It might also be useful for institutions in other countries, especially those that, like Polish institutions, have a low share of European Union funds.
... Grant success is one such important metric, and an enormous amount of time is spent developing bids in competitive processes, the vast majority of which go unfunded. A US study estimated that grant-writing activities took up an average of four and a half hours a week per faculty member (Link et al. 2008), while an Australian survey suggested it took an average of 34 working days to prepare a submission to the National Health and Medical Research Council (Herbert et al. 2013). Success rates differ, but to give a few examples: in New Zealand the Marsden Fund has a success rate of about 10% (Royal Society Te Apārangi 2020), the European Commission's flagship Horizon 2020 reported a success rate of 14% (Sohn 2019), and the Australian Research Council Future Fellowships scheme was only slightly higher at 15% (Sinclair 2021). ...
There is barely a field of academic research not subject to crisis claims. Many urban crises span careers and take significant emotional tolls. This is not due to a lack of effort. Academic productivity, as it is typically measured, is rapidly increasing and success claims commonplace. This article reflects critically upon the science-policy interface and interprets the work of Julia Kristeva to discuss the importance of creating “tiny revolts” able to rescale and reframe inquiry, and to problematise success. I argue these revolts hold potential in sustaining ourselves and others, as well as in creating new acts of critical thinking.
... Grant peer-review has been extensively criticized as time-consuming (Herbert et al., 2013), subjective (Guthrie et al., 2019), and costly, as it requires scientists to spend time evaluating project proposals rather than doing research. On the other hand, machine learning has already been used for prescreening job applications in industry (e.g., Peñalvo et al., 2018) and for finding referees for grant peer-review in academia (Cyranoski, 2019). ...
As more objections have been raised against grant peer-review for being costly and time-consuming, the legitimate question arises whether machine learning algorithms could help assess the epistemic efficiency of proposed projects. As a case study, we investigated whether project efficiency in high energy physics (HEP) can be algorithmically predicted from the data in the proposal. To analyze the potential of algorithmic prediction in HEP, we conducted a study of data on the structure (project duration, team number, and team size) and outcomes (citations per paper) of HEP experiments, with the goal of predicting their efficiency. In the first step, we assessed project efficiency using Data Envelopment Analysis (DEA) of 67 experiments conducted in the HEP laboratory Fermilab. In the second step, we employed predictive algorithms to detect which team structures maximize the epistemic performance of an expert group. For this purpose, we took the efficiency scores obtained by DEA and applied predictive algorithms (lasso and ridge linear regression, a neural network, and gradient boosted trees) to them. The results of the predictive analyses show moderately high accuracy (mean absolute error of 0.123), indicating that they could be beneficial as one step in grant review. Still, their applicability in practice should be approached with caution. Among the limitations of the algorithmic approach are the unreliability of citation patterns, unobservable variables that influence scientific success, and the potential predictability of the model.
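To make the efficiency notion above concrete: in the special case of a single input and a single output, a DEA-style (CCR) efficiency score reduces to each unit's output/input ratio scaled by the best ratio in the sample. The Python sketch below illustrates only this reduced case with invented project names and numbers; the study itself applied full DEA, with several inputs, to 67 Fermilab experiments.

```python
# Toy single-input, single-output efficiency score. With one input
# and one output, a CCR-style DEA score is each unit's output/input
# ratio divided by the best ratio among all units (so the best unit
# scores 1.0). All data below are invented for illustration.

projects = {
    # name: (team_size_input, citations_per_paper_output)
    "exp_a": (12, 30.0),
    "exp_b": (40, 50.0),
    "exp_c": (25, 75.0),
}

ratios = {name: out / inp for name, (inp, out) in projects.items()}
best = max(ratios.values())
efficiency = {name: r / best for name, r in ratios.items()}

for name, eff in sorted(efficiency.items()):
    print(f"{name}: {eff:.2f}")
```

The second step described in the abstract would then fit a penalized regression (or another predictor) to these scores using the structural features as inputs, which is omitted here.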
... Importantly, the grant application is itself peer-reviewed and selected before the study begins. One study reported that each grant proposal takes researchers an average of 34 working days [4]. If the researchers fail to obtain the grant, those 34 working days are wasted. ...
One major source of exhaustion for researchers is the redundant paperwork of three different documents for the same research plan: research papers, ethics review applications, and research grant applications. This is a wasteful and redundant process for researchers, and it has a particularly direct impact on the career development of early-career researchers. Here, we propose a trinity review system based on Registered Reports that integrates scientific, ethics, and research funding reviews. In our proposed trinity review system, scientific and ethics reviews are undertaken concurrently for a research protocol before the study is run. After the protocol is approved in principle through these review processes, a funding review takes place and the researchers begin their research. Following the experiments or surveys, the scientific review is conducted again on a completed version of the paper that includes the results and discussion (i.e., the full paper), and the full paper is published once it has passed this second review. This paper briefly outlines the trinity review process and discusses the need for and benefits of the proposed system. Although the trinity review system applies only to a few appropriate disciplines, it helps improve reproducibility and integrity.
... Completing funding applications is time consuming (Herbert et al., 2013), often stressful, and can conflict with family responsibilities (Herbert et al., 2014). It could be particularly stressful for those with career disruption, given the conflicting views about what to write and the worry that writing anything might harm their chance of winning funding. ...
Background: When researchers' careers are disrupted by life events, such as illness or childbirth, they often need to take extended time off. This creates a gap in their research output that can reduce their chances of winning funding. In Australia, applicants can disclose their career disruptions and peer reviewers are instructed to make appropriate adjustments. However, it is not clear if and how applicants use career disruption sections, how reviewers adjust, or whether they do so consistently. Methods: To examine career disruption, we surveyed the Australian health and medical research community, using both a random sample of Australian authors on PubMed and a non-random convenience sample. Results: Respondents expressed concerns that sharing information on career disruption would harm their chances of being funded, with 13% saying they had medical or social circumstances but would not include them in their application for fear of appearing 'weak'. Women were more reluctant to include disruption. There was inconsistency in how disruption was adjusted for, with less time given for depression than for caring responsibilities, and less time given to those who did not provide medical details of their disruption. Conclusions: The current system is likely not adequately adjusting for career disruption, which may help explain the ongoing funding gap for senior women in Australia. Funding: National Health and Medical Research Council Senior Research Fellowship (Barnett).
... For example, setting higher quality standards for junior researchers can be negatively perceived as 'ladder pulling' [30], while the widely held perception that open research can stifle innovation or long-held academic freedoms can make researchers at all career stages hesitant to change current practices [27,31-33]. Further, applying for grants [34-36] and teaching [37] occupy an increasing amount of work time, which means attending training, developing open research practices, or changing long-standing research routines can be costly and is therefore deprioritized. Finally, the growing literature on how to adopt open research is fast becoming overwhelming, contradictory, and mainly tailored to early career researchers [18,23,25,26]. ...
Increasingly, policies are being introduced to reward and recognise open research practices, while the adoption of such practices into research routines is being facilitated by many grassroots initiatives. However, despite this widespread endorsement and support, as well as various efforts led by early career researchers, open research is yet to be widely adopted. For open research to become the norm, initiatives should engage academics from all career stages, particularly senior academics (namely senior lecturers, readers, and professors), given their routine involvement in determining the quality of research. Senior academics, however, face unique challenges in implementing policy changes and supporting grassroots initiatives. Given that, like all researchers, senior academics are motivated by self-interest, this paper lays out three feasible steps that senior academics can take to improve the quality and productivity of their research while also engendering open research. These steps include changing (a) hiring criteria, (b) how scholarly outputs are credited, and (c) how we fund and publish in line with open research principles. The guidance we provide is accompanied by material for further reading.
Researchers are spending an increasing fraction of their time applying for funding. This raises the question of whether investments in the funding distribution system are well spent with respect to its ultimate purpose: to support research. Multiple studies suggest that the current funding system has considerable deficiencies in reliably evaluating the merit of research proposals, despite extensive efforts by applicants, grant reviewers, and decision committees. The sum of these efforts decreases the efficiency of research investments: for some funding schemes, the systemic costs of the application process as a whole may even outweigh the granted resources, a phenomenon that could be considered predatory funding. We present five recommendations to remedy this unsatisfactory situation: (1) to explicitly weigh costs against benefits before publishing a call (or applying to it); (2) to increase transparency to allow such calculations; (3) to reduce the paperwork and time expenditure required for proposal submission; (4) to remove or reduce ulterior motives for grant applications; and (5) to adopt alternative funding distribution strategies.
Receiving research grants is among the highlights of an academic career, affirming previous accomplishments and enabling new research endeavors. Much of the process of acquiring research funding, however, is among the less favored duties of many researchers: it is time-consuming, often stressful, and, in the majority of cases, unsuccessful. This resentment toward funding acquisition is backed up by empirical research: the current system for distributing research funding, via competitive calls for extensive research applications that undergo peer review, has repeatedly been shown to fail in its task of reliably ranking proposals according to their merit, while at the same time being highly inefficient. The simplest, fairest, and most broadly supported alternative would be to distribute funding more equally across researchers, e.g. by increasing universities' base funding, thereby saving considerable time that could be spent on research instead. Here, I propose how to combine such a 'funding flat rate' model, or other efficient distribution strategies, with quality control through postponed, non-competitive peer review using open science practices.
To quantify randomness and cost when choosing health and medical research projects for funding. Retrospective analysis. Grant review panels of the National Health and Medical Research Council of Australia. Panel members' scores for grant proposals submitted in 2009. The proportion of grant proposals that were always, sometimes, and never funded after accounting for random variability arising from differences in panel members' scores, and the cost effectiveness of different size assessment panels. 59% of 620 funded grants were sometimes not funded when random variability was taken into account. Only 9% (n = 255) of grant proposals were always funded, 61% (n = 1662) never funded, and 29% (n = 788) sometimes funded. The extra cost per grant effectively funded from the most effective system was $A18,541 (£11,848; €13,482; $19,343). Allocating funding for scientific research in health and medicine is costly and somewhat random. There are many useful research questions to be addressed that could improve current processes.
Recently there has been much interest in estimating the prevalence (risk, proportion or probability) ratio instead of the odds ratio, especially in occupational health studies involving common outcomes (for example, with prevalence rates above 10%). For example, if 80 out of 100 exposed subjects have a particular disease and 50 out of 100 non-exposed subjects have the disease, then the odds ratio (OR) is (80/20)/(50/50) = 4. However, the prevalence ratio (PR) is (80/100)/(50/100) = 1.6. The latter indicates that the exposed subjects are only 1.6 times as likely to have the disease as the non-exposed subjects, and this is the number in which most people would be interested. There is considerable literature on the advantages and disadvantages of OR versus PR (see Greenland [1], Stromberg [2], Axelson et al [3], and others). In this article we will review the existing methods and give examples and recommendations on how to estimate the PR. The most common method of modelling binomial (no/yes or 0/1) health outcomes today is logistic regression. In logistic regression one models the probability of the binomial outcome (Y = 1) of interest as: P(Y = 1 | X1, X2, …, Xk) = e^(Xβ)/(1 + e^(Xβ)), where Xβ = β0 + β1X1 + β2X2 + … + βkXk. Then exp(β1) = OR for a 1-unit increase in X1, adjusted for all other variables in the model. Logistic regression yields maximum likelihood estimates (MLEs) of the OR (adjusted for other covariates). If the adjusted OR is the parameter of interest, then these MLEs are generally considered the best estimators available. The adjusted OR can also be used to estimate the adjusted PR, but this should only be done for a rare disease (eg, one with a prevalence of 10% or less). This, together with the fact that …
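The 2 × 2 arithmetic in this abstract can be checked directly. A minimal Python sketch, using only the counts quoted in the text (80/100 exposed cases, 50/100 non-exposed cases):

```python
# Worked 2x2 example from the abstract: 80 of 100 exposed and
# 50 of 100 non-exposed subjects have the disease.
exposed_cases, exposed_total = 80, 100
unexposed_cases, unexposed_total = 50, 100

# Odds ratio: odds (cases / non-cases) in each group, then their ratio.
odds_exposed = exposed_cases / (exposed_total - exposed_cases)          # 80/20 = 4.0
odds_unexposed = unexposed_cases / (unexposed_total - unexposed_cases)  # 50/50 = 1.0
odds_ratio = odds_exposed / odds_unexposed                              # 4.0

# Prevalence ratio: prevalence (cases / total) in each group, then their ratio.
prevalence_exposed = exposed_cases / exposed_total            # 0.8
prevalence_unexposed = unexposed_cases / unexposed_total      # 0.5
prevalence_ratio = prevalence_exposed / prevalence_unexposed  # 1.6

print(f"OR = {odds_ratio}, PR = {prevalence_ratio}")
```

As the abstract notes, with a common outcome the OR (4) overstates the association relative to the PR (1.6), which is why the PR is usually the quantity of interest here.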
Increasing competition for federal government research funds has resulted in a large number of good projects not being funded. This situation is unlikely to change in the near future and has generated uncertainty and frustration amongst many who are dependent on external funding for their research. In this context it is particularly important that the aims of federal government funding agencies are communicated effectively and that the procedures they establish to allocate research funds are seen as credible by the academic research community. This article reports the results of a survey which investigated the research grant process from the point of view of unsuccessful applicants from four universities for the initial 1991 round of large Australian Research Council grants. The findings identify a number of limitations in the operation of the peer review mechanism as used by this Council and question the adequacy of the advice and instructions provided by the ARC to those nominated to review research proposals. The findings also raise questions concerning how the lists of external assessors are compiled as well as how these external assessors are later matched with individual applications.
WinBUGS is a fully extensible modular framework for constructing and analysing Bayesian full probability models. Models may be specified either textually via the BUGS language or pictorially using a graphical interface called DoodleBUGS. WinBUGS processes the model specification and constructs an object-oriented representation of the model. The software offers a user-interface, based on dialogue boxes and menu commands, through which the model may then be analysed using Markov chain Monte Carlo techniques. In this paper we discuss how and why various modern computing concepts, such as object-orientation and run-time linking, feature in the software's design. We also discuss how the framework may be extended. It is possible to write specific applications that form an apparently seamless interface with WinBUGS for users with specialized requirements. It is also possible to interface with WinBUGS at a lower level by incorporating new object types that may be used by WinBUGS without knowledge of the modules in which they are implemented. Neither of these types of extension requires access to, or even recompilation of, the WinBUGS source-code.
‘If peer review was a drug it would never be allowed onto the market,’ says Drummond Rennie, deputy editor of the Journal of the American Medical Association and intellectual father of the international congresses of peer review that have been held every four years since 1989. Peer review would not get onto the market because we have no convincing evidence of its benefits but a lot of evidence of its flaws. Yet, to my continuing surprise, almost no scientists know anything about the evidence on peer review. It is a process that is central to science - deciding which grant proposals will be funded, which papers will be published, who will be promoted, and who will receive a Nobel prize. We might thus expect that scientists, people who are trained to believe nothing until presented with evidence, would want to know all the evidence available on this important process. Yet not only do scientists know little about the evidence on peer review but most continue to believe in peer review, thinking it essential for the progress of science. Ironically, a faith-based rather than an evidence-based process lies at the heart of science.
Young, aspiring researchers often have to learn the hard way when it comes to writing a killer grant application. But a range of European initiatives aims to give them a helping hand. Karen Kreeger reports.
To investigate the correlation between the publication "track record" score of applicants for National Health and Medical Research Council (NHMRC) project grants and bibliometric measures of the same publication output; and to compare the publication outputs of recipients of NHMRC program grants with those of recipients under other NHMRC grant schemes. For a 15% random sample of 2000 and 2001 project grant applications, applicants' publication track record scores (assigned by grant assessors) were compared with bibliometric data relating to publications issued in the previous 6 years. Bibliometric measures included total publications, total citations, and citations per publication. The program grants scheme underwent a major revision in 2001 to better support broadly based collaborative research programs. For all successful 2001 and 2002 program grant applications, a citation analysis was undertaken, and the results were compared with citation data on NHMRC grant recipients from other funding schemes. Correlation between publication track record scores and bibliometric indicators. The correlation between mean project-grant track record scores and all bibliometric indicators was poor and below statistically significant levels. Recipients of program grants had a strong citation record compared with recipients under other NHMRC funding schemes. The poor correlation between track record scores and bibliometric measures for project grant applications suggests that factors other than publication history may influence the assignment of track record scores.