Do interests affect grant application success?
The role of organizational proximity [1]

Charlie S Mom*, Peter van den Besselaar** [2]

* TMC, Amsterdam
charlie@teresamom.com
** Vrije Universiteit Amsterdam, the Netherlands
p.a.a.vanden.besselaar@vu.nl
Abstract: Bias in grant allocation is a critical issue, as the expectation is that grants are given to the best researchers, and not to applicants who are socially, organizationally, or topic-wise close to the decision-makers. In this paper, we investigate the effect of organizational proximity, defined as the applicant sharing an affiliation with one of the panel members (a near-by panelist), on the probability of getting a grant. This study is based on one of the most prominent grant schemes in Europe, with overall excellent scientists as panel members. Various aspects of this organizational proximity are analyzed: Who gains from it? Does it have a gender dimension? Is it bias, or can it be explained by performance differences?
We do find that the probability of getting funded increases significantly for those who apply in a panel that includes a member from the institution where the applicant has agreed to use the grant. At the same time, the effect differs between disciplines and countries, and men profit more from it than women do. Finally, depending on how one defines what counts as the best researchers, the near-by panelist effect can be interpreted as preferential attachment (quality links to quality) or as bias and particularism.
Keywords: Interest representation; conflict of interest; cronyism; nepotism; particularism;
favoritism; proximity; grant selection; peer review; research funding; European Research
Council; ERC.
[1] This paper is a result of the GendERC project, which was supported by the ERC (grant 610706). The authors thank Ulf Sandstrom (FPS, Stockholm) for comments on a previous version, and Lucia Polo Alvarez (Tecnalia, Bilbao, Spain) for collecting data about the institutional affiliations of the panel members. Reviewers and participants of the 1st PEERE conference (Rome, March 2018) provided useful comments that helped to improve the paper, as did the reviewers for the STI 2018 conference in Leiden, The Netherlands. Frédérique Sachwald (OST, HCERES, Paris) was helpful in explaining the complex French situation. In version V2 the theoretical section was rewritten. Version V3 contains some minor editorial changes.
[2] Corresponding author.
Introduction
From a Mertonian perspective, grant and career decisions in academia should be based on merit (Merton 1942). Deviation from this principle results in particularism or favoritism, which means that factors other than scholarly qualifications and performance play a role in the decision, such as gender, age, nationality, or characteristics like social, professional, or disciplinary network relations. In those cases, not the best and most qualified get the job or the grant, but someone who is in some dimension close to (the preferences of) the decision-makers.
Many studies have addressed the question whether grants are given to the best applicants, showing that granted applicants on average have higher past performance than non-granted applicants (Van Leeuwen & Moed 2012). But when the granted applicants were compared with a similarly large group of best-performing rejected applicants, the picture changed: the non-granted were on average at least as good as the granted, resulting in high numbers of false positives and false negatives (Van den Besselaar & Leydesdorff 2009; Bornmann et al. 2010; Hornbostel et al. 2009). Furthermore, studies on the predictive validity of grant decisions also suggest that merit is not the main (or only) criterion for awarding grants (Van den Besselaar & Sandström 2015; Gallo et al. 2016; Wang et al. 2019). If rewarding merit is not the modus operandi of panel review, then what are the mechanisms?
Particularism in grant allocation
Cole (1992) proposed a useful differentiation of the concept of particularism. On the one hand, he distinguishes what could be called social particularism: selection processes based on social relations such as friendship, membership of political parties, being colleagues (cronyism), or on family ties (nepotism). This type of particularism is clearly against the Mertonian principle of universalism, although even here some reservation is needed. For example, if the science system is highly stratified and top performers are concentrated in a small number of universities, attributed merit and organizational membership may strongly correlate.
The second type distinguished by Cole is cognitive particularism, which may be a necessary characteristic of science. Selecting someone for an academic position or a research grant because the applicant is in cognitive terms close to the decision-makers makes sense, as those who decide are most likely convinced that their own discipline and own research line are very important. As uncertainty prevails at the research front, there are no 'objective' criteria to
assess merit detached from preferences for specific approaches, topical fields, and disciplines. But here too, reservations emerge, as systematic bias may come in, e.g., when those in the higher positions do not cover innovative and new interdisciplinary developments.
We distinguish various types of social particularism, based on different mechanisms. The general characteristic is that all these mechanisms rest on some form of proximity between the persons involved. Cronyism literally points at an exchange relationship, e.g., one gives a research grant to someone and gets loyalty in return. Nepotism refers to giving favors based on family relations. But other similarities between applicant and decision-maker may also play a role, such as gender: predominantly male decision-makers select men more often than women for research grants. Such gender bias may be based on explicit opinions about differences between men and women, but it can also be unintentional and based on implicit gender stereotypes. Another form of particularism is organizational proximity, which is bias emerging from organizational interest representation. A panel member may specifically support applicants that bring the grant (and the related prestige) to the panel member's university, even when the panel member does not have any personal connection to the applicant. Especially in the case of prestigious and large grants, the organization's reputation benefits from receiving many of them. This paper focuses on the effect of particularism in the form of organizational interest representation. [3]
That these variants of particularism should be avoided is not contested. However, Cole (1992) already noticed that the boundaries are rather fuzzy. It could be the case that there is not only a friendship relation, but also a relation based on scientific reputation between an applicant and a panel member. Can we then empirically distinguish whether a decision was influenced by the one or by the other relation? Although Cole (1992) suggests that it is hardly possible to disentangle the two relations, we will try to do so in the empirical part of the paper.
Cognitive proximity is the other form of particularism. Panel members may support applicants from their own field or specialty over better-qualified applicants from other fields, even without knowing the applicants personally. As Cole already discussed, the question arises whether this is particularism that should be avoided, or whether cognitive proximity is rooted in a basic characteristic of science: there is often disagreement between scientists about what the important problems are, and about the best ways to answer those questions.

[3] This paper focuses on particularism in grant allocation, but particularism may also play a role in career decisions. The same mechanisms may be at work, such as nepotism, gender bias, or cognitive distance. It goes beyond the scope of this article to discuss the literature on bias in career decisions. Studies showing gender bias in career decisions are available for several countries, such as - without claiming to be exhaustive - the Netherlands (Van den Brink 2010), Italy (Allesina 2011, but contested by Abramo et al. 2014a; Abramo et al. 2014b, 2015), and Spain (Zinovyeva & Bagues 2012; 2015). But other studies argue the opposite.
Another relevant strand of theorizing comes from psychology. Thorngate et al. (2009) provide an interesting overview of what social psychology research has shown about merit and bias in small-group decision making (Olbrecht & Bornmann 2010; Van Arensbergen et al. 2014). Overwhelming evidence shows that neutral decision making is very hard to achieve, if possible at all.
Research on particularism in grant allocation started back in the 1970s. Pfeffer et al. (1976) found that the distribution of NSF social science funding over universities correlated strongly with the number of panel members from those universities. This was stronger in fields with high uncertainty about what the important research directions are than in fields with low cognitive uncertainty. The same effect was found in other countries, like Sweden (Sandstrom 2012), Korea (Jang et al. 2017), and Canada (Tamblyn et al. 2018): “One possible reason is that reviewers vote favorably for applicants from the same institution, even if they have never met them and would therefore not be in conflict” (Tamblyn et al. 2018).
A study of the NSF procedures found that reviewers and panel members did not favor proposals that came from applicants from their own state or region (Cole et al. 1981). The well-known study by Wennerås and Wold (1997) suggested that grant decision making in a Swedish council suffered from considerable gender bias and nepotism. Ten years later, Sandström & Hällsten (2008) replicated this study and found a similar level of nepotism, but no gender bias. Several studies indicate that panel members have higher success rates (Abrams 1991; Viner et al. 2004; Moed 2015), and Chinese studies showed the effect of connections to the research bureaucracy on getting research funding (Zhang et al. 2020). Van den Besselaar (2012) showed that those who belong to the inner circle of a council (a much broader group than only the panel members) have a significantly higher number of applications and an equally higher number of grants compared to applicants who are not involved in the (net)work of the council. This effect holds if one controls for past performance. This higher success (not success rates) may be based on being better informed about funding possibilities than applicants who are more at a distance. When one is better informed, one can make better decisions about when to apply (Van den Besselaar 2012; Bagues et al. 2019).
In order to avoid particularism, research councils have established conflict of interest (CoI) protocols, which generally means that if a panel member has too close a social or professional relation with an applicant, that panel member has to leave the panel meeting when the specific grant application is discussed, or is not allowed on the panel at all. Whether this is an effective procedure has been questioned (Gallo et al. 2016), and, despite the CoI regulations, applicants with a link to a panel member seem to profit from that anyhow (Abdoul et al. 2012).
An important problem, related to the distinction made by Cole (1992), is whether relations between applicants and panelists can and should be avoided at all, as one would expect the best (granted) applicants to have connections to panel members, who should also be excellent researchers representing excellent research environments (Billig & Jacobsson 2009). This argument does not always hold, as panel members are not always excellent researchers (Abrams 1991; Sandstrom 2012), although in other cases they are, as in the case studied here (Van den Besselaar et al. 2015). This is a strong argument against interpreting a close relation between an applicant and panelists in terms of particularism, and we will come back to this argument in an empirical way when analyzing our case below.
The case
We investigate here the 2014 ERC Starting Grant. Applicants have to indicate in which research organization they will do the research when winning the grant. This means they can move with the grant from their affiliation when applying (the 'home organization' in ERC terminology) to another institution (the 'host organization'); of course, they can also stay, and then the host organization is the same as the home organization. Two mechanisms may be relevant. A panelist from the home affiliation of the applicant may be inclined to assess the application more negatively than justified by its quality, as he/she may see his/her organization losing a possible grant. A panelist from the host organization may, in contrast, be more positive than justified by the quality, as his/her organization potentially wins a grant. These mechanisms cannot be understood as personal loyalty, but as representing organizational interest, and through that one's own interest. As said, when a panel member is in the same organization (e.g., university) as the applicant, there is a conflict of interest and the panelist or reviewer should leave the room when the application is discussed. Even if one assumes that this happens, it remains an open question whether it has the intended effect.
In this paper, we investigate the effect of organizational proximity on the success of grant applications, as an example of one of the various proximity relations that can exist between applicants on the one hand and peer reviewers and panel members on the other. This study differs from earlier studies as it is much broader in scope, and the case is one of the most prestigious grant schemes that currently exist, with overall excellent scientists as panel members (Van den Besselaar & Sandström 2017). One would expect that if we find particularism in some form even here, it may exist everywhere.
Data and method
We define organizational proximity in the following way: an applicant is related to the same organization as at least one panel member. This relation can take two forms: an applicant either works at the same organization as one of the panelists, or a panelist is affiliated with the organization to which the applicant will move when receiving the grant. In line with the definition of the council, we define the organization at an aggregated level, e.g., universities or national public research organizations like the Max Planck Gesellschaft in Germany, INRA in France, or CSIC in Spain. A first complicating factor is that research labs belonging to such PROs may be affiliated with a university, which increasingly seems to be the case. For example, several applicants employed at a CNRS institute also have a university affiliation, and the specific location where the applicant works may be a university institute. In other countries such double affiliations exist as well, e.g., in the Netherlands, where the institutes of the Netherlands Royal Academy of Science and of the Netherlands research council NWO also have strong relations with universities. Here we use the affiliation mentioned in the applicants' CVs and on the panelists' websites. We use links at the level of the primary organizational affiliation, and not at the level of sub-units. This level of aggregation may be too high for studying nepotism, as that refers to a personal link between a panel member and an applicant. But for studying organizational interest representation, this is adequate, as the personal relation does not need to play a role. A second complicating issue, especially in France, is the recent mergers within the university system, which sometimes makes attribution problematic. This was solved by searching the web for the correct affiliation in the period we study.
The following data were accessible for this case study. Panelists' names, countries, and research fields could be found on the council's website. Using this information, we identified
the panelists' affiliations through searching the web for open CV data and home pages. A different strategy was followed for the applicant affiliations, as for these the council supplied the affiliation and the email address at the time of applying. In cases where the email address was non-specific (e.g., a Gmail address), we used the CV of the applicant. In some cases the CV was ambiguous, and that was solved by searching the web for the required information. In the case under study, applicants have to specify where they will use the grant: the so-called host institution. Data about the intended host institution could be found in the applications as supplied by the council. Bibliometric data were retrieved from WoS and from Scopus. Data on earlier grants and on the network of the applicants were extracted from the CVs of the applicants.
The host institution may be the same as the applicant's current affiliation (the home institution), but that is not necessarily the case. To compute proximity, we compared (i) the applicants' home institutes with the panelists' home institutes and (ii) the applicants' host institutes with the panelists' home institutes. By doing so, one can distinguish several forms of a near-by panelist:
- no near-by panelists (proximity-0)
- a panelist from the home institution of the applicant (proximity-1)
- a panelist from the applicant's intended host institution (proximity-2)
- a panelist from the home and from the host institution of the applicant (proximity-3)
The latter means in almost all cases that the applicants do not change institutions but plan to
use the grant in the home institution. In only 3 cases, proximity 3 reflects a mobile applicant
in combination with two different near-by panelists, one from the home and the other from
the host institution. Proximity groups 1 and 2 are both very small. We report their size in the
results section, but exclude them from the rest of the analysis.
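To make the classification concrete, the following minimal sketch (in Python, not the authors' actual code) shows how an applicant could be assigned to one of the four proximity types; the function and argument names are hypothetical, and panel affiliations are assumed to be available as a set per panel.

def proximity_type(home_inst, host_inst, panelist_insts):
    """Return the proximity type (0-3) for one applicant.

    home_inst      -- applicant's affiliation at the time of applying
    host_inst      -- institution where the applicant intends to use the grant
    panelist_insts -- set of affiliations of the members of the applicant's panel
    """
    home_match = home_inst in panelist_insts
    host_match = host_inst in panelist_insts
    if home_match and host_match:
        return 3  # panelist(s) from both the home and the host institution
    if host_match:
        return 2  # panelist from the intended host institution only
    if home_match:
        return 1  # panelist from the home institution only
    return 0      # no near-by panelist

# A non-mobile applicant whose institution also has a member on the panel:
print(proximity_type("Univ A", "Univ A", {"Univ A", "Univ B"}))  # prints 3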
Of the 3207 applicants of the 2014 ERC Starting Grant, 3030 signed the informed consent
form. We checked whether the non-participating applicants (N=177) affect our findings, and
that is not the case (Annex A1).
After comparing the success rate by proximity type, we analyze who profits from proximity.
We do that at the country level, at the organization level, and at the individual level, where
we specifically compare men and women applicants. In the analysis at the country level, we
firstly exclude those countries that do not have any proximity relations in our sample. If there
are no proximity relations, then the question of the effect of proximity relations on success
rates makes no sense. Secondly, we also exclude countries with fewer than 50 applicants. We do this because in those cases, success rates change sharply with just one more or one fewer successful applicant.
Applicants from non-EU or non-associated countries (group-4 countries) are a special case, as prox-3 cannot occur: these applicants cannot stay in their home institution, as it is outside the EU and associated countries, and they therefore have to move. This group of countries covers 143 applicants, of which 23 were successful, so the success rate (16.4%) is higher than average. This set consists of two different subgroups, as the successful applicants all come from the U.S., Canada, and Australia (and, in 2014, Switzerland), whereas applicants from other non-EU countries are never successful. Furthermore, all women and most men among the successful applicants in this group are EU nationals returning to Europe, mostly from the U.S. Due to these peculiarities, and to the fact that prox-3, which is the dominant pattern in the rest of the data, does not exist for this group, we exclude group-4 countries from the current analysis.
Lastly, the Swiss situation warrants mentioning. In 2014, Swiss organizations could not act as a host for successful applicants, as a consequence of the referendum that closed the Swiss borders for several groups of EU citizens. Although this was retroactively changed (in September 2014), it was too late for applicants to select a Swiss organization as host. As a consequence, Switzerland is treated here as a group-4 country. However, we did find two applicants who (according to the data we received from the ERC) were granted a StG but moved to Switzerland. According to their homepages, they apparently got a different (replacement) grant through an ERC-SNF collaboration. We keep these cases in the analysis, since we are interested in the decision making by the panels, and a panel did decide to award grants to these applicants.
Research questions
We combine success rates with the proximity data, and then answer the following questions:
(i) Is the success rate different for the different proximity-types distinguished above?
(ii) Do the three domains (Life Sciences, Physics and Engineering, Social Sciences and
Humanities) show the same pattern?
(iii) Who profits from proximity? Firstly, this will be analyzed at the individual level, in terms of gender differences. Secondly, it will be analyzed at the level of organizations: does the ranking of universities correlate with profiting from organizational proximity? Thirdly, we investigate whether the host countries differ in profiting from organizational proximity.
(iv) Can we explain the different success rates in terms other than particularism: are the organizations that win most from proximity simply better, and therefore both providing more panelists and attracting better and more successful applicants?
Findings
Individual level
Type-1 and type-2 proximity occur only a few times (Table 1), and therefore one case more or fewer would strongly change the effect expressed as a percentage. We therefore do not include prox-1 and prox-2 in the analysis. This is different for prox-3, where we have enough cases for further analysis. If an applicant makes clear in the application that he/she will remain in the same organization when receiving the grant, and there is a panel member from that organization, that panel member has an interest in the success of the applicant. The data in Table 1 suggest that this interest may have an effect on the panel decisions, as the success rate of prox-3 cases is more than 40% higher than average. The question to be answered is why this is the case: are the prox-3 cases simply the better applicants, or is it the effect of interest representation? We address this issue below.
Table 1: Overall success rate versus success rate with a near-by panelist

Proximity type       Applicants   Success   Success rate
Proximity 0                2558       280          10.9%
Proximity 3                 274        45          16.4%
All in sample              2832       325          11.5%

Excluded groups
Proximity 1                  31         3           9.7%
Proximity 2                  22         2           9.1%
Group-4 countries           145        23          15.9%
No consent                  177        22          12.4%
All applicants             3207       375          11.7%

* All cases except the non-response who did not get a grant, and except group-4 countries.
** Numbers are too small for reliable interpretation.
The funding instrument under study has a two-step procedure, with a first selection in which 75% of the applicants are rejected, and a second selection in which about half of the remaining applicants receive a grant. Table 2 shows that the nearby panelist effect is strongest in the first step, but also has some effect in Step 2.
Table 2: Success rate by near-by panelist (by step)

prox     total   to step 2      SR   Prox-3 vs Prox-0   granted      SR   Prox-3 vs Prox-0
0         2558         635   24.8%                          280   44.1%
3          274          91   33.2%               134%        45   49.5%               112%
Total     2832         726   25.6%                          325   44.8%
Domain level
The next question is whether domain differences occur, due to differences between disciplinary cultures. Table 3 summarizes the findings. In the life sciences, the pattern is the strongest and in line with the overall pattern: there, the success rate for the prox-3 group is twice as high as the success rate for the prox-0 group. Within the Physics and Engineering Sciences, the effect of panel member proximity seems absent. Within the Social Sciences and Humanities, panel member proximity shows a similar effect as in the Life Sciences, although the effect is smaller: a 40% increase in success rate. As the number of prox-1 and prox-2 cases is low, and by domain even lower, they are not included in Table 3.
Table 3: Success rate with a near-by panelist versus overall success rate: domain differences

           ALL                              Prox 0                           Prox 3
Domain       N   success   success rate       N   success   success rate       N   success   success rate
LS         899       122         13.57%     805        99         12.30%      94        23         24.47%
PE        1269       143         11.27%    1144       128         11.19%     125        15         12.00%
SH         664        60          9.04%     609        53          8.70%      55         7         12.73%
ALL       2832       325         11.48%    2558       280         10.95%     274        45         16.42%
These results suggest that particularism does play a role in panel decisions in two of the three domains. However, it could also mean that the concentration of talent (panelists and applicants) is already substantial, and that excellent applicants are therefore in the same organizational environments as the excellent panelists. If concentration of talent were the explanation, one would expect to find the effect more uniformly across all fields. [4] As we find significant differences between the fields, the findings seem to point at particularism, with substantial field differences.
It has been suggested that differences between the fields could be explained by competition: stiffer competition would lead to more unethical behavior, such as cronyism in grant decisions. More specifically, the higher competition in the life sciences could be the cause of the more substantial near-by panelist effect. However, our data do not support this explanation. In fact, competition in the LS domain is the lowest, as demonstrated by the higher success rate in the life sciences than in the two other domains. Another explanation relates to differences in the level of dependence and uncertainty between disciplines (Whitley 2000). Lower codification, meaning that there is less agreement about what good science is and in what direction a discipline is moving, would open up decision-making to nepotism. Testing this, however, requires analysis at an even lower level of aggregation, for which we currently lack the data.
Who profits?
The next question is: who profits from the higher success rate for prox-3? We answer this question at the level of countries, of organizations, and of individuals.
Country differences
To start with the first, one can distinguish between countries functioning as the home country [5] (where the applicants are working at the moment of applying) and countries functioning as the host country (where applicants plan to spend their grant). As the host countries profit, we will focus on those.
For the descriptive statistics, we only include countries with more than 50 applicants. Using this set of countries [6], we find a strong correlation between the number of applicants and the number of successful applications as a host (r = 0.80), and a moderately strong correlation between the number of applicants and the number of proximity relations (r = 0.59). This is not surprising, as more applicants indicate a bigger science system, and thus more successful applicants and more panelists. There is also a strong correlation between the number of successful applicants and the number of proximity relations (r = 0.82). However, we find no correlation (r = -0.07) between the number of applicants and the success rate, showing that the bigger systems are not outperforming the smaller ones.

[4] The issue of concentration of talent will be addressed below.
[5] This refers to residence, not to nationality.
[6] The following fourteen countries are included: Austria, Belgium, Denmark, Finland, France, Germany, Israel, Italy, Netherlands, Norway, Portugal, Spain, Sweden, UK.
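As a minimal sketch (not the authors' code), the country-level correlations could be computed from applicant-level data roughly as follows; the column names are assumptions.

import pandas as pd

def country_correlations(df: pd.DataFrame) -> pd.DataFrame:
    # df: one row per applicant, with hypothetical columns 'host_country',
    # 'granted' (0/1) and 'prox3' (0/1: near-by panelist from the host institution).
    per_country = df.groupby("host_country").agg(
        applicants=("granted", "size"),
        successes=("granted", "sum"),
        proximity_relations=("prox3", "sum"),
    )
    per_country = per_country[per_country["applicants"] >= 50]  # drop small countries
    per_country["success_rate"] = per_country["successes"] / per_country["applicants"]
    # Pairwise Pearson correlations between the country-level counts and rates
    return per_country.corr()

The reported values (r = 0.80, 0.59, 0.82, and -0.07) would correspond to off-diagonal entries of such a correlation matrix.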
In Table 4, we show the success percentages of the applicants by host country, which may profit from the near-by panelist phenomenon. For this analysis, we only retain those countries that have at least 50 applicants and at least one proximity relation. Some countries show a much higher success rate for the group of applicants with a near-by panel member [7] than their overall success rate: very strongly Finland, but also Sweden, Italy, the UK, Germany, and Spain. This 'profit score' is in the last column of Table 4, where one sees that, e.g., the success rate of the UK within the group of applicants with a relation to a nearby panelist is about twice as high (1.8) as the overall success rate of applicants that have a UK-based host organization. For Israel, France, and Denmark no nearby panelist effect was found, and the Netherlands, Belgium, Austria, Portugal, and Norway show the opposite pattern and have a success rate lower than average for the near-by category.
Table 4: Near-by panelist advantage by country*

Country        Number of    Number    Host success rate              SPR/SR
               applicants   success   all (SR)   proximity (SRP)
Finland               103         7       6.8%             28.6%        4.2
Sweden                116         5       4.3%             10.0%        2.3
Italy                 364        15       4.1%              8.3%        2.0
UK                    539        62      11.5%             20.6%        1.8
Germany               395        65      16.5%             27.3%        1.7
Spain                 269        22       8.2%             11.1%        1.4
Israel                 85        22      25.9%             28.6%        1.1
Denmark                82        13      15.9%             15.4%        1.0
France                244        48      19.7%             18.6%        0.9
Netherlands           208        41      19.7%              8.0%        0.4
Belgium               101        10       9.9%              0.0%        0.0
Austria                82        13      15.9%              0.0%        0.0
Portugal               79         6       7.6%              0.0%        0.0
Norway                 53         6      11.3%              0.0%        0.0

* Not included are countries without proximity relations or less than 50 applicants.
[7] This is not the nationality or residence of the applicant, but the country of the organization where the applicant intends to use the grant.
Interestingly, there is a moderate negative correlation (r = -0.36) between the profit a country has from proximity relations (SPR/SR) and the overall success rate of a country. This shows that, to some extent, the lower a country's overall success rate, the higher the benefit of proximity.
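The per-country 'profit score' SPR/SR of Table 4, and its correlation with the overall success rate, could be computed along the following lines (again a sketch with assumed column names, not the authors' code).

import pandas as pd

def profit_scores(df: pd.DataFrame) -> pd.DataFrame:
    # df: one row per applicant, with hypothetical columns 'host_country',
    # 'granted' (0/1) and 'prox3' (0/1).
    per_country = df.groupby("host_country").agg(
        applicants=("granted", "size"),
        sr=("granted", "mean"),  # overall host success rate (SR)
    )
    srp = (df[df["prox3"] == 1]
           .groupby("host_country")["granted"].mean()
           .rename("srp"))       # success rate of the near-by panelist group (SRP)
    out = per_country.join(srp, how="inner")      # countries with at least one proximity relation
    out = out[out["applicants"] >= 50]            # and with at least 50 applicants
    out["spr_over_sr"] = out["srp"] / out["sr"]   # the 'profit score' in Table 4
    return out

# e.g. profit_scores(df)[["spr_over_sr", "sr"]].corr() would give the reported
# moderate negative correlation (r = -0.36) on the underlying data.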
Organizational differences
Does the nearby panelist effect occur more in low-ranked than in highly ranked organizations? Figure 1 shows that the median ranking of those granted with a nearby panelist relation is higher (median difference = 0.201, p = 0.156) than that of those granted without a nearby panelist relation, and also higher (0.256, p = 0.000) than that of the step-2 non-granted applicants. The mean values differ as well: 0.382 (p = 0.064) and 0.509 (p = 0.026), respectively. These findings suggest that highly ranked organizations profit more from organizational proximity than lower-ranked organizations do. This could support the argument that the nearby panelist effect is not so much particularism and bias, but the effect of preferential attachment: better organizations have more panelists and attract better applicants. We will test this below in the section on organizational proximity and performance.
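The paper does not state which tests produced the reported p-values; as a sketch of one plausible way to compare two groups on the host-organization ranking, a Mann-Whitney U test (for the median difference) and a Welch t-test (for the mean difference) are assumed below.

import numpy as np
from scipy import stats

def compare_rankings(granted_nearby, other_group):
    # Both arguments: arrays of host-organization ranking scores for two groups.
    _, p_median = stats.mannwhitneyu(granted_nearby, other_group, alternative="two-sided")
    _, p_mean = stats.ttest_ind(granted_nearby, other_group, equal_var=False)
    return {
        "median_difference": float(np.median(granted_nearby) - np.median(other_group)),
        "p_median": float(p_median),
        "mean_difference": float(np.mean(granted_nearby) - np.mean(other_group)),
        "p_mean": float(p_mean),
    }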
Figure 1: Median host organization ranking by proximity relation
Gender differences
Table 5 shows that more men than women are involved in organizational proximity relations. Overall, male applicants are 1.4 times more likely than female applicants to be in such a proximity relation. This differs by domain: for the life sciences, physics and engineering, and the social sciences and humanities, the ratios are 1.37, 1.21, and 1.73, respectively.
Table 5: Distribution of proximity by applicants' gender and domain

Domain   Gender     All   Proximity    Share
LS       men        481          63   13.10%
         women      324          31    9.57%
PE       men        858          98   11.42%
         women      286          27    9.44%
SH       men        318          36   11.32%
         women      291          19    6.53%
All      men       1657         197   11.89%
         women      901          77    8.55%
Table 6 shows that when there is organizational proximity, men overall profit somewhat more from prox-3 than women do. Men more often have a prox-3 relation than women, and although for both men and women the chance of getting the grant increases when there is a proximity relation, the increase for men is higher: organizational proximity seems to add to gender bias in grant allocation. However, the effect differs by domain. In the Life Sciences men profit slightly more than women from proximity, in the Social Sciences and Humanities men profit much more, and the pattern is exactly the opposite in Physics and Engineering, where women profit more than men do.
Table 6: Gender distribution of proximity by field (domain) and success

Field   Sex      Total success         Success with proximity   Ratio*
LS      women      41    11.55%           6    19.35%             1.68
        men        81    14.89%          17    26.98%             1.81
PE      women      38    12.14%           5    18.52%             1.53
        men       105    10.98%          10    10.20%             0.93
SH      women      28     9.03%           0     0.00%             0.00
        men        32     9.04%           7    19.44%             2.15
All     women     107    10.94%          11    14.29%             1.31
        men       218    11.76%          34    17.26%             1.47

* Ratio between the success rate with and without proximity.
Organizational proximity and performance
Can the organizational proximity effect also be explained in a different way, without referring
to particularism? The obvious alternative explanation is to take into account the performance
of the host institutes and of the applicants. The hypothesis would be that the excellent
applicants gravitate towards excellent organizations (Billig & Jacobsson 2009), and those
excellent organizations are more likely to be present in the ERC panels than less excellent
research organizations. In that case, the correlation between near-by panelist relations and
grant success would be due to a confounding variable: excellence.
We test this by comparing the group of successful applicants with proximity-3 (Granted-nearby) with three other groups in terms of their scores on a few indicators, which are defined in Annex A2. We distinguish two types of indicators (Annex A3): (i) performance indicators, and (ii) prestige indicators. We compare the Granted-nearby with the group of granted applicants without a proximity relation (Granted-other), and with a group of excellent non-granted applicants, which is defined in two ways:
- the group that was not successful in the final phase of the procedure (2Non-granted);
- the non-granted with the highest performance score: the best of the rest (BotR). This BotR group is selected per panel and is equally large as the 2Non-granted set; the 'best' is defined in terms of absolute impact (see the sketch below).
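A minimal sketch of how the BotR group could be selected per panel is given here; the column names are assumptions, and the exact selection procedure may differ from the authors'.

import pandas as pd

def best_of_the_rest(df: pd.DataFrame) -> pd.DataFrame:
    # df: one row per applicant, with hypothetical columns 'panel',
    # 'granted' (0/1), 'reached_step2' (0/1) and 'total_impact'.
    rejected = df[df["granted"] == 0]
    step2_rejected = rejected[rejected["reached_step2"] == 1]
    parts = []
    for panel, group in rejected.groupby("panel"):
        n = int((step2_rejected["panel"] == panel).sum())  # size of the 2Non-granted set in this panel
        parts.append(group.nlargest(n, "total_impact"))    # 'best' = highest absolute impact
    return pd.concat(parts)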
The scores of these four groups are shown in Figure 2. The Granted-nearby applicants score clearly better on the two reputation indicators: on the impact of the journals in which they have published, and on network quality in terms of the median ranking of the organizations found in the CVs of the applicants. The next highest on these two variables are the other granted applicants, and then the Step-2 non-granted. However, on the performance variable total impact, the 'Best of the Rest' scores much higher than the other three groups, and on the total grants variable the BotR scores as high as the Granted-nearby group. These findings suggest that one cannot explain the nearby panelist effect as preferential attachment based on excellent performance, as the Granted-nearby applicants on average perform worse than the three other groups. The Granted-nearby group, however, does score higher on the reputation indicators, so if it is a form of 'preferential attachment' and not of interest representation, it is reputation-based and not performance-based.
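The paper states only that all variables in Figure 2 are normalized at panel level. A minimal sketch, under the assumption of a within-panel z-score normalization followed by group means, could look as follows (indicator and column names are hypothetical).

import pandas as pd

INDICATORS = ["total_impact", "total_grants", "journal_impact",
              "network_quality", "n_coauthors"]   # assumed column names

def group_scores(df: pd.DataFrame) -> pd.DataFrame:
    # df: one row per applicant, with a 'panel' column, a 'group' column
    # (Granted-nearby / Granted-other / 2Non-granted / BotR) and the indicators.
    z = df.copy()
    for col in INDICATORS:
        z[col] = df.groupby("panel")[col].transform(
            lambda x: (x - x.mean()) / x.std(ddof=0)   # panel-level normalization
        )
    return z.groupby("group")[INDICATORS].mean()       # one value per group and indicator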
Figure 2: Scores on performance and reputation indicators (total impact, total grants, journal impact, network quality, number of co-authors) for four groups of applicants (Granted-nearby, Granted-other, Step-2 non-granted, Best of the Rest). All variables normalized at panel level.
Conclusions and further research
Our study suggests that having a nearby panel member does affect the grant decision process. Those with a near-by panel member from the host institution have an overall much higher success rate than average, and the difference is substantial: 50% higher. We found that in PE the effect does not exist, but in LS the probability of getting funded is twice as high with a nearby panelist relation. SH is in between these two.
Men profit somewhat more from a near-by panelist than women do. Interestingly, the exception is the Physics and Engineering domain, where women profit much more from a nearby panelist. This leads to interesting new questions: where do these domain differences come from? The field-based gender differences show a pattern that needs further exploration: the higher the share of women in a domain, the more men profit from the near-by panelist relation, and vice versa. In SSH, with almost 50% women applicants, men profit much more than women from proximity. In LS, with somewhat more than a third women applicants, men and women profit about equally. And finally, within PE, with about 25% women applicants, women profit much more than men do.
The hypothesis was formulated that the differences between the fields may be related to, e.g., the level of competition. However, whereas the nearby panelist effect in the Life Sciences is much stronger than in the other fields, competition there is lower, as the success rate is higher
in the LS than in the other domains. Other field characteristics may also play a role, such as the levels of uncertainty and dependence within a field (Whitley 2000). This could not be tested in the current study.
The country differences are also substantial. Some countries profit greatly from the near-by panelist cases, in other countries there is no effect, and in yet other countries the success rate of the near-by panelist cases is much lower than the average success rate. It remains an open question why these differences occur. It may be a random pattern, where countries profit in some years but not in others. If there were a stable pattern, the question arises why some countries profit more than others from organizational proximity. Answering this would require repeating the study for more years.
Aside from the proximity effect at the country level, we also showed that the chances for non-EU nationals who do not already reside in the EU are zero. Understanding this observation would also require further research. Almost all successful applicants from outside the EU are EU citizens.
Finally, the question was asked whether the observed advantage for applicants with a near-by panelist is the result of interest representation, or of concentration of excellence. We tested the alternative hypothesis that the near-by panelist advantage reflects the concentration of the most excellent researchers in the most excellent organizations. We showed that the grantees with a near-by panelist have a much lower performance level than the grantees without a near-by panelist, and the difference is even larger with the highest-performing non-grantees. On the other hand, the proximity-3 grantees scored substantially higher on reputation indicators. This suggests that reputation is (i) a vital asset in science, but (ii) not necessarily founded in performance. The conclusion on whether the nearby panelist effect is interest representation or a concentration of excellence depends on how one understands excellence: as based on performance (as the authors of this paper are inclined to do) or as based on reputation.
Some limitations should be mentioned, which also point to directions for future research. (i) Although several variables are included to measure excellence (impact, earlier grants, top journals, quality of the network), excellence (and more generally quality) has more dimensions that may play a role in the grant decision making, such as independence (Van den Besselaar & Sandström 2019), which is explicitly mentioned by the council. (ii) Other relational characteristics, such as cognitive proximity (Sandström & Van den Besselaar 2018), may also play a role. (iii) We could not observe the panels in their selection activities, but that would be crucial for understanding how the nearby panelist effect is produced. (iv) More work is needed on the methodological problem of identifying organizational affiliations and testing whether double affiliations have an effect on the results.
It is important for the science system, for the applicants, and also for the research councils to understand if and where particularism creeps into the grant selection procedures. Access to more data is a requirement for this. One would need data for more years and for several funding instruments to be able to investigate whether our findings can be generalized beyond the single case studied here.
References
Abdoul H, Perrey C, Tubach F, Amiel P, Durand-Zaleski I, Alberti C (2012). Non-Financial
Conflicts of Interest in Academic Grant Evaluation: A Qualitative Study of Multiple
Stakeholders in France. PLoS ONE 7(4): e35247. doi:10.1371/journal.pone.0035247
Abramo G, D'Angelo C A, Rosati F (2014a). Relatives in the same university faculty: Nepotism or merit? Scientometrics 101: 737-749.
Abramo G, D'Angelo C A, Rosati F (2014b). Career advancement and scientific performance in universities. Scientometrics 98: 891-907.
Abramo G, D'Angelo C A, Rosati F (2015). The determinants of academic career advancement: Evidence from Italy. Science and Public Policy 42: 761-774.
Abrams PA (1991). The predictive ability of peer review of grant proposals: the case of
ecology and the US National Science Foundation. Social Studies of Science 21:111-132
Allesina S (2011). Measuring nepotism through shared last names: The case of Italian academia. PLoS ONE 6 (8): e21160.
Bagues M, Sylos-Labini M, Zinovyeva N (2019). Connections in scientific committees and
applicants’ self-selection: Evidence from a natural randomized experiment. Labour
Economics 58, 81-97. [Earlier version: Bagues M & Sylos-Labini M & Zinovyeva N
(2015). "Connections in Scientific Committees and Applicants' Self-Selection: Evidence
from a Natural Randomized Experiment," IZA Discussion Papers 9594, Institute of Labor
Economics (IZA).]
Billig H & Jacobsson C (2009). The Swedish Research Council welcomes debate on
openness and competition [In Swedish: Vetenskapsrådet välkomnar debatt om öppenhet
och konkurrens]. Läkartidningen/Swedish Medical Journal, 109(39), debate section.
[available at https://lakartidningen.se/debatt-och-brev/2009/09/vetenskapsradet-
valkomnar-debatt-om-oppenhet-och-konkurrens/]
Bellow A (2005). In Praise of Nepotism: A Natural History. Doubleday: New York.
Bornmann L, Leydesdorff L, Van den Besselaar P (2010). A meta-evaluation of scientific
research proposals: Different ways of comparing rejected to awarded applications. Journal
of Informetrics 4 (3): 211-220.
Cicchetti DV (1991). The reliability of peer review for manuscript and grant submissions: A cross-disciplinary investigation. Behavioral and Brain Sciences 14 (1): 119-135.
Chubin DE, Hackett EJ (1990). Peerless Science: Peer Review and U.S. Science Policy.
SUNY Press: Albany.
Cole, S (1992). Making Science: Between Nature and Society. Harvard University Press:
Cambridge.
Cole S, Rubin L, Cole JR (1977). Peer review and the support of science. Scientific American 237 (4): 34-41.
Cole S, Cole JR, Simon GA (1981). Chance and consensus in peer review. Science 214 (4523): 881-886.
Fisman R, Shi J, Wang YX, Xu R (2018). Social Ties and Favoritism in Chinese Science.
Journal of Political Economy 126 (3): 1134-1171
Gallo SA, Glisson SR (2018). External Tests of Peer Review Validity Via Impact Measures,
Frontiers in Research Metrics and Analytics, 23 August, doi: 10.3389/frma.2018.00022.
Gallo SA, Lemaster M, Glisson SR (2016). Frequency and type of Conflicts of Interest in the
peer review of basic Biomedical research funding applications: self-reporting versus
manual detection. Science & Engineering Ethics 22: 189-197.
Jang D, Doh S, Kang GM, Han DS (2017). Impact of Alumni Connections on Peer Review Ratings and Selection Success Rate in National Research Funding. Science, Technology & Human Values 42 (1): 116-143.
Li D (2017). Expertise versus bias in evaluation: evidence from the NIH. American Economic
Journal: Applied Economics 9:60-92.
Long JS, Fox MF (1995). Scientific careers: Universalism and particularism. Annual Review of Sociology 21: 45-71.
Merton R (1942). The normative structure of science. In: RK Merton, The sociology of
science. University of Chicago Press 1973.
Moed H (2005). Citation Analysis in Research Evaluation. Springer: Dordrecht.
Nybom T (1997). Kunskap-Politik-Samhälle: essäer om kunskapssyn, universitet och
forskningspolitik 1900-2000. Stockholm: Arete.
Sandström U (2012). Vetting the panel members [In Swedish: En granskning av granskarna: hur bra är beredningsgrupperna?]. Forskning om forskning 2/2012 (revised version 2015). [available at https://www.forskningspolitik.se/files/dokument/fof-1-2015-report.pdf]
Sandström U & Hällsten M (2008). Persistent nepotism in peer-review. Scientometrics 74 (2): 175-189.
Tamblyn R, Girard N, Qian CJ, Hanley J (2018). Assessment of potential bias in research grant peer review in Canada. CMAJ 190: E489-E499. doi: 10.1503/cmaj.170901
Van den Besselaar P (2012). Grant committee membership: service or self-service? Journal of Informetrics 6: 580-585.
Van den Besselaar et al. (2015). Deliverable 4, GendERC Project.
Van den Besselaar P & Leydesdorff L (2009). Past performance, peer review, and project selection: a case study in the social and behavioral sciences. Research Evaluation 18 (4): 273-288.
Van den Besselaar P & Sandström U (2017). Influence of cognitive distance on grant
decisions. Proceedings STI Conference 2017.
Van den Besselaar P & Sandström U (2019). Measuring researcher independence using
bibliometric data: A proposal for a new performance indicator. PLoS ONE 14 (3), e0202712.
Van den Brink M, Benschop Y & Jansen W (2010). Transparency in academic recruitment: A problematic tool for gender equality? Organization Studies 31: 1459-1483.
Van Leeuwen T & Moed H (2012). Funding decisions, peer review, and scientific excellence in physical sciences, chemistry and geosciences. Research Evaluation 21: 189-198.
Wennerås C & Wold A (1997). Nepotism and sexism in peer review. Nature 387 (6631):
341-343.
Van den Brink M (2010). Behind the Scenes of Science: Gender practices in the recruitment and selection of professors in the Netherlands. Pallas Publications: Amsterdam.
Viner N, Powell P, Green R (2004) Institutionalized biases in the award of research grants: a
preliminary analysis revisiting the principle of accumulative advantage. Research Policy
33: 443-454.
Whitley R (2000). The Intellectual and Social Organization of the Sciences, Second Edition.
Oxford University Press: Oxford & New York.
Zhang GP, Xiong LB, Wang X, Dong JN, Duan HB (2020). Artificial selection versus natural selection: Which causes the Matthew effect of science funding allocation in China? Science and Public Policy 47 (3): 434-445.
Zinovyeva N & Bagues M (2012). The role of connections in academic promotions. SSRN Electronic Journal, DOI: 10.2139/ssrn.2136888. [Also published as Zinovyeva N & Bagues M (2015). The role of connections in academic promotions. American Economic Journal: Applied Economics 7 (2): 264-292.]