Explaining gender bias in ERC grant selection - a first exploration of
the life sciences case‡,&
Peter van den Besselaar*,#, Helene Schiffbaenker**, Ulf Sandström##, Charlie Mom#
* Network Institute, Vrije Universiteit, Boelelaan 1105, 1081 HV, Amsterdam (Netherlands)
# Teresa Mom Consultancy bv, Amsterdam (The Netherlands)
** Joanneum Research, Vienna (Austria)
## KTH, Royal Institute of Technology, Stockholm (Sweden)
To explain lower success rates of female applicants in ERC (life sciences) starting grants, we
collected data about past performance of the applicants, and we interviewed panel members
about how selection criteria are practiced in general and specifically for female vs. male
applicants. The analysis of the interviews provides empirical evidence that current evaluation
practices indeed lead to gender-biased practices and outcomes. The statistical analysis shows
– after controlling for several past performance variables – the prevalence of gender bias,
more often in favor of men than of women.
Keywords: gender bias; peer review; panel review; research grants; ERC; European Research Council; funding.
There is a longstanding discussion on whether gender bias influences grant selection
processes, and the literature shows contradicting results [1, 2, 3, 4, 5, 6, 7, 8, 9]. However,
there are three main problems with most research: (i) Most studies explain in fact only
differences between success rates of men and women. However, these success rates are only
meaningful when taking possible quality differences of male and female researchers into
account. If these exist [10, 11], gendered differences in success rate could partly or fully an
effect of those quality differences and not of gender bias. To solve this problem, we have
collected data to measure various dimensions of past performance, which are included in the
analysis. (ii) Most studies depend on information only about the successful applicants, but not
on the rejected – as the latter data are generally accessible for investigators. However, in this
study we do have the data about successful and rejected applications. (iii) Bias emerges from
the decision-making process, and this is often done at the level of review panels. In contrast,
most studies focus on a higher level of aggregation, such as the funding instrument or the
level of the discipline. We include here an initial analysis at panel level. We do detect gender
bias, in contrast to recent reviews [6, 8, 9].

‡ This work was supported by the ERC (grant 610706: GendERC project), but the funder had no influence on the design, analysis, or interpretation of the results. The work was also supported by the EC (grant 2654319: RISIS).
& Version V8, September 15, 2018

STI Conference 2018 · Leiden
We investigate the 2014 ERC Starting Grant scheme, and have access to the relevant data
about the 3,030 applicants (about 95 %) who gave informed consent. We selected this case as
it is the most prestigious grant that exists in Europe for early career researchers (up to seven
years after the PhD), and it is expected to contribute strongly to the career opportunities of
those getting the grant [12].
The starting point of the study is that, overall, female applicants have lower success rates
(grantees/applicants) than men, most obviously in the life sciences (LS) domain. Figure 1
shows the success rates in step 1 and step 2 of the evaluation process in the nine LS panels of
the StG 2014. The panel level enables us to locate gender differences more accurately, so that
potential improvements can be implemented more effectively. In this case, women have a 6 %
lower success rate in step 1 and a 2 % lower success rate in step 2, which makes an overall
difference of 3 %. Figure 1 illustrates that gender differences in success rates vary considerably
between panels. In panel LS8 (Evolutionary, population & environmental biology) women do
much better than men, but in panel LS6 (Immunity and infection) it is the opposite. Differences
also exist between step 1 and step 2 of the procedure, indicating the large influence of the
interview with the applicant, and/or a ‘gender correction’ in at least a few panels (LS1, LS3,
LS4). Due to space limitations, this cannot be discussed here.
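The step-wise success rates discussed above can be sketched as a small computation. This is a minimal illustration on invented toy records, not the ERC data:

```python
# Hypothetical toy data: one record per applicant,
# (panel, gender, passed_step1, granted). Values are invented for illustration.
applicants = [
    ("LS6", "F", True, False), ("LS6", "F", False, False), ("LS8", "F", True, True),
    ("LS6", "M", True, True),  ("LS6", "M", True, False),  ("LS8", "M", False, False),
]

def success_rate(rows, gender, stage):
    """Share of applicants of `gender` succeeding at the given stage.

    stage=2 -> passed step 1; stage=3 -> granted (overall success rate).
    """
    pool = [r for r in rows if r[1] == gender]
    passed = [r for r in pool if r[stage]]
    return len(passed) / len(pool) if pool else float("nan")
```

The overall gender gap in a panel is then simply `success_rate(rows, "F", 3) - success_rate(rows, "M", 3)`; the same function restricted to one panel's rows gives the per-panel rates plotted in Figure 1.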
Figure 1: Success rates of female and male applicants, StG 2014, LS panels
As Table 1 shows, domain and field differences exist. Overall the success rate of women is
higher than the success rate for men in physics and engineering (PE), possibly as part of
policies to increase female participation within PE. But within PE the differences are large:
women do much better in ‘fundamental constituents of matter’, but much worse in
‘mathematical foundations’. Although in the social sciences and humanities (SH) the overall
success rates are about equal, the differences between SH fields are huge: within ‘environment,
space and population’ (SH3: sociology, anthropology, education, communication) women do
twice as well as men, and within ‘markets, individuals and institutions’ (SH1: economics,
organization and management) it is the other way around. It would be interesting to find out
why these differences occur. Is this research field specific, for example is gender stereotyping
stronger in fields where ‘excellence’ plays a stronger role in the discourse such as philosophy,
mathematics, economics [13], or is this an effect of group dynamics – so more related to
personal and panel characteristics [11, 14, 15]? Studying these patterns over time may answer
these questions: if the pattern is stable over time, field characteristics are probably most
important; if not, panel characteristics may dominate.
Table 1. Difference between female and male success rates

Domain   Overall   Panel with highest ratio                               Panel with lowest ratio
LS*      – 27 %    Evolutionary, population & environm. biol. (+ 109 %)   Immunity and infection (– 70 %)
PE**     + 11 %    Fundamental constituents of matter (+ 94 %)            Mathematical foundations (– 70 %)
SH***    9 %       Environment, space and population (+ 108 %)            Markets, individuals and institutions (– 81 %)
All      – 6 %#

* Life sciences; ** Physics & Engineering; *** Social Sciences & Humanities. Panel names from 2014.
# Read as: the success rate of women is 6 % lower than the success rate of men (all applicants).
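Following the table footnote, the percentages in Table 1 read as relative differences between the female and the male success rate. A minimal sketch of that reading (our interpretation of the footnote, not a formula given in the paper):

```python
def relative_gap(rate_f: float, rate_m: float) -> float:
    """Female-to-male success-rate ratio minus one, in percent.

    +109 means women's success rate is about 2.09x men's ("twice as well");
    -70 means women's rate is 0.30x men's; -6 matches the table footnote.
    """
    return 100.0 * (rate_f / rate_m - 1.0)
```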
We used a series of interviews to investigate the grant selection process and the possibility of
bias entering into it. The panel processes are only weakly formalized, as are the criteria
deployed by the panelists. The Council has implemented two principles: (i) the only criterion
that should count is the excellence of the project and of the investigator; (ii) panels consist of
excellent researchers in their respective fields, who should therefore decide among themselves
what excellent applications are. But what is ‘independence’, and how can a panel member or a
reviewer recognize it? And what is the ‘ability to do groundbreaking research’? Is that having
published in Nature, having a very highly cited paper, or something else? The interviews show
that this in fact results in considerable uncertainty and in differences between panelists. For
example, reviewers express doubts about the criteria deployed and the need for clearer, more
operational criteria for ‘excellence’. As one panelist put it:
They give you very general guidelines like the scientific quality, the quality of the
researcher, the originality of the proposal, and so on, typical of all projects. In those
projects that are so related to your field of expertise you don’t even need it because
you appreciate them immediately. The problem comes when the projects are far from
your field of expertise, then you have to be very objective in your criteria, so I have
prepared a list of things I should not be forgetting.
This uncertainty and the well-known group dynamics that occur in panels [11, 14] open up
the possibility of bias entering the evaluation and selection, which is strongly reinforced by
the high time pressure the panels are confronted with. But if bias is possible, does it also
occur? To provisionally answer this question, we use the following statistical analysis.
Approach, data & methods
Given this, we aim to predict the applicants’ scores and application success, using a set of
independent variables related to performance (productivity, impact, previous grants, quality of
the collaboration network) and to the person (age, nationality, research field, and of course
gender). As decision-making on grants is done in panels, the effect of the panel is considered
too, through an informal multi-level approach.
The following data were collected; for each source we indicate which variables were
extracted. As the data came in many formats, considerable technical work was needed to
extract and integrate the required data (using the SMS platform):
- Age, gender, date of PhD, nationality, field of research: from an administrative
file of the ERC.
- Earlier and current other grants: manually extracted from the CVs.
- Collaboration network: semi-automatic extraction of organizations from the CVs
- Quality of the network: semi-automatic linking of organization names with the
data in the Leiden Ranking; manual search for comparable scores of those
organizations not in the Leiden Ranking.
- Host institution: from an administrative file of the ERC. For the quality of the host
institution we use the extended 2015 Leiden Ranking scores.
- Productivity, impact: downloaded from the Web of Science, with manual author
disambiguation. We then calculated a series of bibliometric variables, such as the
number of publications, the number of fractionally counted publications, the number
of citations, the number of citations within a three-year window, the share of top cited
papers (1%, 5%, 10%, 25% and 50%), the number of top 10% highly cited papers
(the size-dependent variant), the average number of coauthors, and the average
number of international coauthors [22].
- Organizational proximity (cronyism): From the applicants’ data and the panelists’
data we extracted the links between applicants and panel members in terms of
belonging to the same organization [9].1
- Panel review scores of the applications: from an administrative file of the ERC.
- Decision: from an administrative file of the ERC
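Two of the bibliometric variables listed above – fractionally counted publications and the size-dependent number of top-10% papers – can be sketched as follows. The records and the top-10% citation threshold below are hypothetical, not taken from the dataset:

```python
# Hypothetical publication records: (n_authors, citations, field_top10_threshold).
# The threshold would normally come from field-normalized reference values.
pubs = [(3, 40, 25), (5, 10, 25), (1, 30, 25)]

def fractional_count(pubs):
    """Fractional counting: each paper contributes 1/n_authors."""
    return sum(1.0 / n_authors for n_authors, _, _ in pubs)

def top10_count(pubs):
    """Size-dependent variant: the NUMBER of papers at or above the field's
    top-10% citation threshold (not the share of such papers)."""
    return sum(1 for _, cites, threshold in pubs if cites >= threshold)
```

The distinction matters for the analysis: the share of top-cited papers is size-independent, while the count used here grows with output, which the paper argues is the more valid choice (see footnote 2 below Table 2).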
We currently have a stratified dataset of about 1,742 applicants, evenly distributed over the
five scores given by the panels: A-granted, A-not-granted, B-step2, B-step1, C. We plan to
collect the bibliometric data for the remaining 1,288 applicants in the future, so the results
here are to some extent preliminary. The unique nature of our data is that we can combine
(advanced) bibliometric indicators with a large set of other variables. These data enable
several interesting analyses. For example, one may analyze whether organizational proximity
(cronyism) [16] or cognitive proximity [17, 18] has an effect on grant success. One may also
study whether language use in review reports reveals the nature of the decision-making
process [19, 20], and more specifically whether language use shows gender bias [21].
Due to space limits, we restrict the analysis to the life sciences. Firstly, we deploy ordinal
regression for the LS applicants in order to estimate the effect of gender on the decision, after
controlling for several quality (past performance) variables, the quality of the network, and
organizational proximity. Secondly, we move to the second level and compare the panels,
running a similar regression at the level of the individual panels.
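Ordinal regression over the five ordered outcome categories (C < B-step1 < B-step2 < A-not-granted < A-granted) is typically a proportional-odds logit model. The sketch below illustrates that model family generically; it is not the authors' estimation code, and the cutpoints and coefficient are invented:

```python
import math

def ordinal_probs(x_beta: float, cutpoints):
    """Proportional-odds (ordinal logit) category probabilities.

    P(Y <= k) = logistic(c_k - x*beta), with increasing cutpoints c_k.
    With 4 cutpoints this yields 5 ordered categories, matching the
    C < B-step1 < B-step2 < A-not-granted < A-granted scale.
    """
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(c - x_beta) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]
```

A negative gender coefficient in x*beta shifts probability mass toward the lower score categories for women, which is how a "0.35 points lower" effect on the score scale manifests in this model family.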
Results: Life sciences
We used the bibliometric indicators mentioned above, the variables on the quality of the
network and of the host institution, and the number of grants the applicant had already
acquired. We also included whether a panel member is at the host institution of the applicant,
and gender. Running an ordinal regression, and after stepwise manual removal of variables
that did not contribute, eight variables remained in the model, resulting in a pseudo R-square
(Nagelkerke) of 0.308. Table 2 shows the result.
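The Nagelkerke pseudo R-square reported here is the Cox-Snell R-square rescaled so that its maximum is 1. A minimal sketch from the null- and full-model log-likelihoods (the numbers in the example are illustrative, not the paper's):

```python
import math

def nagelkerke_r2(ll_null: float, ll_full: float, n: int) -> float:
    """Nagelkerke pseudo R-square: Cox-Snell R2 divided by its maximum.

    ll_null: log-likelihood of the intercept-only model
    ll_full: log-likelihood of the fitted model
    n: number of observations
    """
    cox_snell = 1.0 - math.exp(2.0 * (ll_null - ll_full) / n)
    max_r2 = 1.0 - math.exp(2.0 * ll_null / n)
    return cox_snell / max_r2
```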
Factors that help to get a better score are papers in high-impact journals, the quality of the
network (measured as the median ranking of the organizations in the applicant’s network),
the average number of international coauthors, and the number of top 10% most cited papers2
(fractionally counted). The average number of coauthors has a negative effect, as it may
suggest a lower level of independence. Finally, we do find effects of sexism and nepotism:
women score some 0.35 points lower than men (on a five-point scale), and when the panel
includes a member from the applicant’s proposed host institution, this gives almost a 0.6
point bonus.

1 We also started to analyze the role of cognitive bias, but at the moment we only have data for a few panels. We
therefore do not include this variable here [17, 18].
2 This is the size-dependent variant, which we consider more valid than the share of top-cited papers.
Table 2: Score by performance, organizational proximity and gender

Variables in the model: number of highly cited (top 10%) papers; journal impact (NJCS);
number of earlier grants; quality of network; average number of co-authors; average number
of international co-authors; nearby panelist; female versus male.
Ordinal regression; link function: logit. Pseudo R-square (Nagelkerke) = 0.308.
Bootstrapped: 2,000 samples; 95 % confidence intervals.
[Coefficient estimates, standard errors, and confidence intervals are not recoverable from this version of the text.]
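The bootstrapped confidence intervals reported with Table 2 follow the usual percentile scheme (2,000 resamples, 95 % interval). A generic sketch of that scheme, not the authors' actual code:

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for statistic `stat` over `data`.

    Resamples the data with replacement n_boot times, recomputes the
    statistic each time, and returns the empirical (alpha/2, 1-alpha/2)
    percentiles of those replicates.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility of the sketch
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

In the regression setting, `stat` would refit the model on the resampled applicants and return one coefficient, so the interval reflects sampling variability of that coefficient.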
This means that from a performance perspective, only one variable plays a role (the number
of top cited papers). The other variables that influence the score are reputation-based (journal
impact; organization ranking) and network-based (the number of (international) co-authors).
Also, the number of earlier grants has a positive effect on the score; these grants can partly be
considered performance, but at least partly also reputation-related. Finally, we find two bias
factors: after controlling for the performance and reputation variables, sexism and cronyism
still have an effect on the scores the applicants get.3
Results: Life science panels
As grant decision-making takes place at the level of panels, and different social dynamics may
play out in different panels, one may expect the level of bias to differ between panels. We
therefore repeated the analysis for the nine individual panels, each representing one or more
specific disciplines within the life sciences. However, as at panel level the number of granted
applicants is low (typically about 11 out of about 100 applicants), the number of variables
that can be included is smaller, and variables that are significant at the LS domain level are
no longer significant at panel level. Nevertheless, the variables have overall the same effect in
the panel models as for the domain as a whole. In Table 3 we show the sign of each variable
in each of the panel regressions, using the same variables as for the life sciences as a whole.
Most have the expected effect, but some do not; we will address this after having collected
and cleaned the data for all applicants.
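Reducing the nine per-panel regressions to a sign overview, as Table 3 does, is a small post-processing step. The per-panel coefficients below are invented placeholders, only illustrating the reduction:

```python
# Hypothetical coefficients from separate per-panel regressions (invented values).
panel_coefs = {
    "LS6": {"female_vs_male": -0.4, "nearby_panelist": 0.5},
    "LS8": {"female_vs_male": 0.2, "nearby_panelist": 0.6},
}

def sign_table(panel_coefs):
    """Collapse per-panel regression coefficients to their signs (Table 3 style)."""
    return {panel: {var: "+" if beta > 0 else "-" for var, beta in coefs.items()}
            for panel, coefs in panel_coefs.items()}
```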
Table 3: Sign of regression coefficients at panel level

Variables (one column per LS panel): number of highly cited (top 10%) papers; journal
impact (NJCS); number of earlier grants; quality of network; average number of co-authors;
average number of international co-authors; nearby panelist; female versus male.
[The per-panel signs are not recoverable from this version of the text.]
3 The results concerning cronyism (or nepotism) confirm the follow-up study on the Swedish MRC
reported in [23].
Interestingly, gender bias in favor of men appears in six of the nine panels, which together
cover 78 % of the female applicants in the life sciences. The other three panels show bias in
favor of women, but cover only 22 % of the female applicants. This needs further analysis,
but gender bias may be related to the share of women in a field.
Conclusions and further work
Using data for 80% of the applicants, we have shown that gender bias occurs in the life
sciences, but not in all parts of the field in the same way. In most panels we find bias against
women, but in three panels it is the opposite. However, the first set of panels includes almost
80 % of all female LS applicants. We also found that gender bias and success-rate differences
are not the same thing: in one third of the panels, the sign of the gender bias differs from the
sign of the success-rate difference. For example, in panel 9 the success rate of women is
higher than that of men, but there is still gender bias in favor of men after controlling for the
performance of applicants. This means that without gender bias, women’s success rate would
in fact have been even higher.
This analysis covers only life sciences, but we are also analyzing the other domains: social
sciences and humanities, and physics and engineering. These fields are not only different in
terms of gender success rates, but we also expect differences in gender bias.
Panels play an important role; therefore we will also add characteristics of the panel to the
model. Which panel characteristics lead to gender bias? For example, we found a negative
correlation between the number of female panel members and the female success rate (not
discussed in this paper).
Finally, if one understands the dynamics of gender bias, the next question is how to reduce it.
That is crucial, as the type of grants we study here have strong career implications [15, 16].
[1] Ahlqvist, V., Andersson, J., Söderqvist, L., Tumpane J. (2015) A gender neutral process?
– A qualitative study of the evaluation of research grant applications 2014, Swedish
Research Council, Stockholm.
[2] Beck, R, Halloin, V. (2017) Gender and research funding success: Case of the Belgian
F.R.S.-FNRS. Research Evaluation 26(2), 115–123.
[3] Böhmer, S., Hornbostel, S., Meuser, M. (2008) Postdocs in Deutschland: Evaluation des
Emmy Noether-Programms, iFQ-Working Paper No. 3, Bonn.
[4] Bornmann, L., Mutz, R., Daniel, H.D. (2007) Gender differences in grant peer review: a
meta-analysis, Journal of Informetrics 1, No. 3, pp. 226–238.
[5] Bornmann, L., Mutz, R., Daniel, H.D. (2008) How to detect indications of potential
sources of bias in peer review: A generalized latent variable modeling approach
exemplified by a gender study, Journal of Informetrics 2, 280– 287.
[6] Ceci, S.J., Ginther, D.K., Kahn, S., Williams, W.M. (2014) Women in academic science: a
changing landscape. Psychological Science in the Public Interest 15 (3): 75-141.
[7] Van der Lee, R., Ellemers, N. (2015) Gender contributes to personal research funding
success in The Netherlands. PNAS 112, 12349-12353.
[8] Marsh, H. W., Jayasinghe, U. W., Bond, N. W. (2011) Gender differences in peer reviews
of grant applications: A substantive-methodological synergy in support of the null
hypothesis model, Journal of Informetrics 5, 167–180.
[9] Williams, W.M., Ceci, S.J. (2011) Understanding current causes of women’s underrepresentation
in science. PNAS 108, 8, 3157-3162.
[10] Van den Besselaar P, Sandström U (2017) Vicious circles of gender bias, lower positions
and lower impact: gender differences in scholarly productivity and impact. PLoS ONE
12(8): e0183301.
[11] Lamont M (2009) How professors think. Harvard University Press
[12] Van den Besselaar P, Sandström U (2015) Early career grants, performance and careers;
a study of predictive validity in grant decisions. Journal of Informetrics 9 826-838
[13] Leslie SJ, Cimpian A, Meyer M, Freeland E (2015) Expectations of brilliance underlie
gender distributions across academic disciplines. Science 347, 6216, January 6.
[14] Van Arensbergen P, Van der Weijden I, Van den Besselaar P (2014) The selection of
talent as a group process; a literature review on the dynamics of decision-making in
grant panels. Research Evaluation 23 4:298-311
[15] Olbrecht M, Bornmann L. (2010) Panel peer review of grant applications: What do we
know from research in social psychology on judgment and decision-making in groups?
Research Evaluation 19 293–304.
[16] Mom C, Van den Besselaar P. Does institutional proximity affect grant application
success? Paper presented at the PEERE Conference, Rome, March 2018
[17] Van den Besselaar P, Sandström U (2017) Influence of cognitive distance on grant
decisions, Proceedings STI conference 2017, Paris.
[18] Sandström U & Van den Besselaar P, The effect of cognitive distance on gender bias in
grant decisions. Paper presented at the PEERE Conference, Rome, March 2018
[19] Van den Besselaar P, Stout L, Gou X (2016) Predicting panel scores by linguistic analysis.
In: Ismael Rafols et al. (eds), Peripheries, Frontiers and Beyond; Proceedings STI 2016.
[20] Van den Besselaar P, Sandström U, Schiffbaenker H (2018) Using linguistic analysis of
peer review reports to study panel processes, Scientometrics (2018)
[21] Van den Besselaar P, Sandström U, Schiffbaenker H, Gendered language in applications
and reviews: a linguistic analysis (In preparation)
[22] Sandström, U., Wold, A. (2015) Centres of excellence: reward for gender or top-level
research? In: Thinking Ahead: research, funding and the future. RJ Yearbook 2015/2016.
[23] Sandström U. & Hällsten M. (2008). Persistent nepotism in peer review. Scientometrics
74 (2) 175-189.
... A recent empirical study of 1742 European Research Council life science proposals did find evidence of gender bias in some parts of the field after controlling for facets of past performance, and that review panel characteristics may have a role in that bias (Van den Besselaar, Schiffbaenker, Sandström, & Mom, 2018). Using roughly the same data, showed that probability of success also increases if a near-by panelist is from the same institution as the applicant. ...
Conference Paper
Full-text available
Indicators that could predict the success or failure of 3459 research proposals are identified and evaluated. The sample was highly homogeneous (all proposals were from one medical school and submitted to one funding agency) but heterogeneous within this context (all types of NIH proposals are included). The most important exogenous indicator was whether the PI had a backlog of proposal opportunities. Gender and race had no statistically significant impact. Only one of the six linguistic indicators (derived from the research strategy section of these proposals) was a significant predictor of proposal success.
... To date, little is known about the success determinants of international research funding proposals, in Poland especially. Literature, mainly devoted to the statistics of funded projects, deals with the Matthew effect in science, the hypothesis that outstanding scientists and/or outstanding research institutions have an advantage in competing for funding [27], [31] and is primarily concerned with the bias of review panels [3], [26], [27]. The topic appears mainly in grey literature, websites and training materials. ...
Conference Paper
Full-text available
The goal of the research is to understand the factors that determine the success of international research proposals. For this purpose, a multi-stage study will be carried out. The study will include a systematic literature review, semi-structured interviews with European Commission evaluators, and the development of the model of success determinants. Text analysis of historical proposals will enhance the knowledge of the success factors linked with applications' discourse and language. The proposed research will complement the existing literature as the study will be based on a comprehensive dataset covering all funded and rejected project applications under the European Union's Horizon 2020 Framework Program. The model will support the participation of Polish higher education institutions in European funds, whose acquisition is a key challenge for them, and provide an opportunity for the development of innovative research and international cooperation. It might also be useful for institutions in other countries, especially those that similarly to Polish institutions have a low share in acquiring European Union funds.
Full-text available
Kernwoorden: diversiteit en inclusie, exclusie, hoger onderwijs, intersectionaliteit, ongelijkheidspraktijken D iversity and inclusion have a prominent place on the agenda of higher education institutions. As higher education institutions strive for equal opportunities, they increasingly develop diversity policies. In order to have effective policies, knowledge about inequalities in higher education institutions is crucial. Yet, this knowledge has not previously been brought together into a coherent overview. The aim of this review is therefore to provide a coherent overview of inequalities in higher education by using the concept of inequality practices, so that (1) knowledge gaps for further research can be identified and (2) recommendations can be made for more effective diversity interventions. We identify fourteen inequality practices along the analytical distinction of numbers, institutions , and knowledge (production). Our review shows that the (in)visibility of difference plays a central role in experiencing inequalities. Different social groups experience different inequality practices depending on the (in) visibility of their identity aspects. Yet, precisely because of the (in)visibility of identity aspects they might also experience different consequences of the same inequality practices. This difference in consequences can also be explained by intersectionality. Finally, we provide recommendations to become more inclusive in diversity research and practices by (specifically) paying attention to invisible differences (sexuality, gender identity, disability) and intersectionality. Abstract Naar het inclusiever (her)maken van het hoger onderwijs: een review naar ongelijkheidspraktijken
Bornmann and Marewski (2019) have adapted the concept of fast-and-frugal heuristics to scientometrics in order to study and guide the application of bibliometrics in research evaluation. Bibliometrics-based heuristics (BBHs) are simple decision strategies for evaluative purposes based on bibliometric indicators. One aim of the heuristics research program is to develop methods for studying the use of BBHs in research evaluation. Many deans probably evaluate rough performance differences between researchers in their departments based on h index values. Bornmann, Ganser, Tekles, and Leydesdorff (2020) developed the Stata command h_index and R package hindex which can be deployed in a fast and frugal way to decide on the following question: can the h index be used to compare all researchers in a university department, or are the citation cultures so different between sub-groups in the department that not all researchers can be compared with one another? The command and package can be used for simulations that might answer the question before extensive processes of data collection start. If the citation cultures are very different in the sub-groups, the researchers should be compared with field-normalized indicators (instead of the h index). This paper shows how the h_index command and hindex package can be employed for the decision on the h index use in the BBH.
This Element describes for the first time the database of peer review reports at PLOS ONE, the largest scientific journal in the world, to which the authors had unique access. Specifically, this Element presents the background contexts and histories of peer review, the data-handling sensitivities of this type of research, the typical properties of reports in the journal to which the authors had access, a taxonomy of the reports, and their sentiment arcs. This unique work thereby yields a compelling and unprecedented set of insights into the evolving state of peer review in the twenty-first century, at a crucial political moment for the transformation of science. It also, though, presents a study in radicalism and the ways in which PLOS's vision for science can be said to have effected change in the ultra-conservative contemporary university. This title is also available as Open Access on Cambridge Core.
Despite slow ongoing progress in increasing the representation of women in academia, women remain significantly under‐represented at senior levels, in particular in the natural sciences and engineering. Not infrequently, this is downplayed by bringing forth arguments such as inherent biological differences between genders, that current policies are adequate to address the issue, or by deflecting this as being “not my problem” among other examples. In this piece we present scientific evidence that counters these claims, as well as a best‐practice example, Genie, from Chalmers University of Technology, where one of the authors is currently employed. We also highlight particular challenges caused by the current COVID‐19 pandemic. Finally, we conclude by proposing some possible solutions to the situation and emphasize that we need to all do our part, to ensure that the next generation of academics experience a more diverse, inclusive, and equitable working environment. Diversity in academia: Data shows that there are few female faculty in academia today, with the percentage decreasing the higher the rank. There are many common responses to gender equality efforts that hamper progress of women in academia. Herein, we counteract such comments and discuss possible solutions along with concerns about how the COVID‐19 pandemic may affect diversity in academia.
Full-text available
Assessing the success and performance of researchers is a difficult task, as their grant output is influenced by a series of factors, including seniority, gender and geographical location of their host institution. In order to assess the effects of these factors, we analysed the publication and citation outputs, using Scopus and Web of Science, and the collaboration networks of European Research Council (ERC) starting (junior) and advanced (senior) grantees. For this study, we used a cohort of 355 grantees from the Life Sciences domain of years 2007-09. While senior grantees had overall greater publication output, junior grantees had a significantly greater pre-post grant award increase in their overall number of publications and in those on which they had last authorship. The collaboration networks size and the number of sub-communities increased for all grantees, although more pronounced for juniors, as they departed from smaller and more compact pre-award co-authorship networks. Both junior and senior grantees increased the size of the community within which they were collaborating in the post-award period. Pre-post grant award performance of grantees was not related to gender, although male junior grantees had more publications than female grantees before and after the grant award. Junior grantees located in lower research-performing countries published less and had less diverse collaboration networks than their peers located in higher research-performing countries. Our study suggests that research environment has greater influence on post-grant award publications than gender especially for junior grantees. Also, collaboration networks may be a useful complement to publication and citation outputs for assessing post-grant research performance, especially for grantees who already have a high publication output and who get highly competitive grants such as those from ERC.
Full-text available
Peer and panel review are the dominant forms of grant decision-making, despite its serious weaknesses as shown by many studies. This paper contributes to the understanding of the grant selection process through a linguistic analysis of the review reports. We reconstruct in that way several aspects of the evaluation and selection process: what dimensions of the proposal are discussed during the process and how, and what distinguishes between the successful and non-successful applications? We combine the linguistic findings with interviews with panel members and with bibliometric performance scores of applicants. The former gives the context, and the latter helps to interpret the linguistic findings. The analysis shows that the performance of the applicant and the content of the proposed study are assessed with the same categories, suggesting that the panelists actually do not make a difference between past performance and promising new research ideas. The analysis also suggests that the panels focus on rejecting the applications by searching for weak points, and not on finding the high-risk/high-gain groundbreaking ideas that may be in the proposal. This may easily result in sub-optimal selections, in low predictive validity, and in bias.
The selection of grant applications is generally based on peer and panel review, but as many studies have shown, the outcome of this process depends not only on scientific merit or excellence, but also on social factors and on the way the decision-making process is organized. A major criticism of the peer review process is that it is inherently conservative, with panel members inclined to select applications that are in line with their own theoretical perspective. In this paper we define 'cognitive distance' and operationalize it. We apply the concept and investigate whether it influences the probability of getting funded.
It is often argued that female researchers publish on average less than male researchers do, but that male- and female-authored papers have equal impact. In this paper we try to better understand this phenomenon by (i) comparing the share of male and female researchers within different productivity classes, and (ii) comparing productivity while controlling for a series of relevant covariates. The study is based on a disambiguated Swedish author dataset, consisting of 47,000 researchers and their WoS publications during the period 2008-2011, with citations until 2015. As the analysis shows, in order to have impact, quantity makes a difference for male and female researchers alike—but women are vastly underrepresented in the group of most productive researchers. We discuss and test several possible explanations of this finding, using data on personal characteristics from several Swedish universities. Gender differences in age, authorship position, and academic rank explain a considerable part of the productivity differences.
The main rationale behind career grants is helping top talent develop into the next generation of leading scientists. Does career grant competition result in the selection of the best young talents? In this paper we investigate whether the selected applicants indeed perform at the expected excellent level – something that is hardly investigated in the research literature. We investigate the predictive validity of grant decision-making, using a sample of 260 early career grant applications in three social science fields. We measure the output and impact of the applicants about ten years after the application to find out whether the selected researchers perform ex post better than the non-successful ones. Overall, we find that predictive validity is low to moderate when comparing grantees with all non-successful applicants. Comparing grantees with the best-performing non-successful applicants, predictive validity is absent. This implies that the common belief that peers in selection panels are good at recognizing outstanding talents is incorrect. We also investigate the effects of the grants on careers and show that recipients of the grants do have better careers than the non-granted applicants. This makes the observed lack of predictive validity even more problematic.
Much has been written in the past two decades about women in academic science careers, but this literature is contradictory. Many analyses have revealed a level playing field, with men and women faring equally, whereas other analyses have suggested numerous areas in which the playing field is not level. The only widely-agreed-upon conclusion is that women are underrepresented in college majors, graduate school programs, and the professoriate in those fields that are the most mathematically intensive, such as geoscience, engineering, economics, mathematics/computer science, and the physical sciences. In other scientific fields (psychology, life science, social science), women are found in much higher percentages. In this monograph, we undertake extensive life-course analyses comparing the trajectories of women and men in math-intensive fields with those of their counterparts in non-math-intensive fields in which women are close to parity with or even exceed the number of men. We begin by examining early-childhood differences in spatial processing and follow this through quantitative performance in middle childhood and adolescence, including high school coursework. We then focus on the transition of the sexes from high school to college major, then to graduate school, and, finally, to careers in academic science. The results of our myriad analyses reveal that early sex differences in spatial and mathematical reasoning need not stem from biological bases, that the gap between average female and male math ability is narrowing (suggesting strong environmental influences), and that sex differences in math ability at the right tail show variation over time and across nationalities, ethnicities, and other factors, indicating that the ratio of males to females at the right tail can and does change. 
We find that gender differences in attitudes toward and expectations about math careers and ability (controlling for actual ability) are evident by kindergarten and increase thereafter, leading to lower female propensities to major in math-intensive subjects in college but higher female propensities to major in non-math-intensive sciences, with overall science, technology, engineering, and mathematics (STEM) majors at 50% female for more than a decade. Post-college, although men with majors in math-intensive subjects have historically chosen and completed PhDs in these fields more often than women, the gap has recently narrowed by two thirds; among non-math-intensive STEM majors, women are more likely than men to go into health and other people-related occupations instead of pursuing PhDs. Importantly, of those who obtain doctorates in math-intensive fields, men and women entering the professoriate have equivalent access to tenure-track academic jobs in science, and they persist and are remunerated at comparable rates—with some caveats that we discuss. The transition from graduate programs to assistant professorships shows more pipeline leakage in the fields in which women are already very prevalent (psychology, life science, social science) than in the math-intensive fields in which they are underrepresented but in which the number of females holding assistant professorships is at least commensurate with (if not greater than) that of males. That is, invitations to interview for tenure-track positions in math-intensive fields—as well as actual employment offers—reveal that female PhD applicants fare at least as well as their male counterparts in math-intensive fields. Along these same lines, our analyses reveal that manuscript reviewing and grant funding are gender neutral: Male and female authors and principal investigators are equally likely to have their manuscripts accepted by journal editors and their grants funded, with only very occasional exceptions. 
There are no compelling sex differences in hours worked or average citations per publication, but there is an overall male advantage in productivity. We attempt to reconcile these results amid the disparate claims made regarding their causes, examining sex differences in citations, hours worked, and interests. We conclude by suggesting that although in the past, gender discrimination was an important cause of women’s underrepresentation in scientific academic careers, this claim has continued to be invoked after it has ceased being a valid cause of women’s underrepresentation in math-intensive fields. Consequently, current barriers to women’s full participation in mathematically intensive academic science fields are rooted in pre-college factors and the subsequent likelihood of majoring in these fields, and future research should focus on these barriers rather than misdirecting attention toward historical barriers that no longer account for women’s underrepresentation in academic science.
Peer review serves a gatekeeper role, as the final arbiter of what is valued in academia, but is widely criticized in terms of potential biases—particularly in relation to gender. In this substantive-methodological synergy, we demonstrate methodological and multilevel statistical approaches to testing a null hypothesis model of the effect of researcher gender on peer reviews of grant proposals, based on 10,023 reviews by 6,233 external assessors of 2,331 proposals from social science, humanities, and science disciplines. Utilizing multilevel cross-classified models, we show support for the null hypothesis model positing that researcher gender has no significant effect on proposal outcomes. Furthermore, these non-effects of gender generalize over assessor gender (contrary to a matching hypothesis), discipline, assessors chosen by the researchers themselves compared to those chosen by the funding agency, and country of the assessor. Given the large, diverse sample, the powerful statistical analyses, and support for generalizability, these results – coupled with findings from previous research – offer strong support for the null hypothesis model of no gender differences in peer reviews of grant proposals.
Women's participation and attitudes to talent. Some scientific disciplines have lower percentages of women in academia than others. Leslie et al. hypothesized that general attitudes about a discipline would reflect the representation of women in that field (see the Perspective by Penner). Surveys revealed that some fields are believed to require attributes such as brilliance and genius, whereas other fields are believed to require more empathy or hard work. In fields where people thought that raw talent was required, academic departments had lower percentages of women. Science, this issue p. 262; see also p. 234.
Nepotism or cronyism is an important issue, as the expectation is that grants are given to the best researchers and not to socially, organizationally, or topically close applicants. In this exploratory paper, we investigate the effect of organizational proximity (defined as the applicant having the same current and/or future institutional affiliation as one of the panelists) on the probability of getting a grant. We additionally analyze various aspects of this form of nepotism: can it be explained by performance differences, who gains from it, and does it have a gender dimension? We find that the probability of getting funded increases significantly for applicants who have a nearby panelist. At the same time, the effect differs between disciplines and countries, and men profit more from it than women do.
The influence of gender on the outcome of research evaluation activities and access to research funding has been heavily debated in recent decades. In this study, data from 6,393 applications submitted between 2011 and 2015 to the Belgian funding agency Fonds de la Recherche Scientifique - FNRS (F.R.S.-FNRS) were statistically analysed to highlight any possible effect of gender on success rates. Results show no significant influence of gender on success rates or on the likelihood of getting funding for most of the funding schemes we analysed. Research credit (RC) was the only scheme in which the gender and success variables were statistically dependent, although the mean success rates of male and female applicants were not significantly different. Average grades given by remote reviewers to male applicants were significantly higher for RC applications. Among RC applications, the difference in success rates was highest in Humanities and Social Sciences, followed by Exact and Natural Sciences, and finally Life and Health Sciences. Proportions of male researchers who apply were shown to be higher for most of the funding schemes analysed, mainly for grant applications (such as RC) where only tenured researchers are allowed to apply. Taken together, our results show that access to F.R.S.-FNRS funding is not gender-dependent for the majority of the funding schemes, except one in which men represent the vast majority of the applicants. Reasons that could explain this statistical dependence are under investigation and may lie in the lower grading of women by remote reviewers.
Significance: Women remain underrepresented in academia as they continue to face a leadership gap, salary gap, and funding gap. Closing the funding gap is of particular importance, because this may directly retain women in academia and foster the closing of other gaps. In this study, we examined the grant funding rates of a national full population of early career scientists. Our results reveal gender bias favoring male applicants over female applicants in the prioritization of their “quality of researcher” (but not “quality of proposal”) evaluations and success rates, as well as in the language use in instructional and evaluation materials. This work illuminates how and when the funding gap and the subsequent underrepresentation of women in academia are perpetuated.
The allocation of resources for scientific research is determined by panel peer review. To make funding recommendations, reviewers convene to evaluate the quality of grant applications. Many studies in social psychology have investigated which (undesired) phenomena (such as groupthink, motivation losses, and group polarization) can occur in group judgment and decision-making. In research on peer review, however, these phenomena have not yet been examined. This article describes the peer review panel with the help of features used in social psychology to characterize groups (such as group entitativity and group task) and presents phenomena from social psychological research that can have an (undesired) effect on the judgment of panel groups. Measures to counteract these phenomena are discussed, and the necessity of research in this area is pointed out.