2 Lost in Quantification: Scholars and the Politics of Bibliometrics
Lynn P. Nygaard and Rocco Bellanova
As scholarship becomes increasingly globalized, bibliometric systems for
quantifying research productivity have become increasingly relevant to
academia (Gingras, 2014; Hicks et al., 2015; Lillis & Curry, 2010; Pansu
et al., 2013). Bibliometric indicators are used to convert information about
research activity (primarily publications and citations) into numbers that,
in their apparent neutrality, seem to transcend linguistic and cultural
(including disciplinary) boundaries. Developed as a way to study academic
publication and citation patterns statistically, bibliometrics were originally
used mostly for research purposes – to substantiate claims about who
produces what and under which circumstances (De Bellis, 2014). Today,
however, bibliometrics are most familiar to scholars as evaluative devices
(see, e.g. Pansu et al., 2013). Bibliometric indicators are used to assess research
performance, not only at the institutional level (to distribute funding to
universities, rank journals and so on) but also increasingly at the individual
level in the context of hiring or promotion decisions (Gingras, 2014). For
those asked to evaluate the work of scholars from different countries,
disciplines or contexts, the promise of simple indicators that would enable
comparisons based on seemingly objective metrics – applicable beyond
disciplinary and linguistic boundaries – can be enticing. The question
is whether current bibliometric indicators are as neutral as they seem –
whether they satisfactorily ‘quantify’ scholarly activity and whether the
resulting number means the same thing to scholars across different settings
(Gingras, 2014).
The aim of this chapter is to look at bibliometrics as a specific instance
of quantification and thus – as with any other form of quantification –
as a form of governing things and people. The politics of bibliometrics
deserve to be unpacked because even with the best intentions, developers
of bibliometric indicators must make non-trivial decisions about how to
measure things that are notoriously difficult to quantify (De Bellis, 2014).
These decisions require answering important questions such as: What does
‘productivity’ entail? What should count and how should it be counted?
(Gingras, 2014). Bibliometrics play a key role in conceptualizing scientific
research as a ‘knowledge economy’ by attempting to measure how
effectively resources are converted into output (publications and patents,
but also citations, which often stand in for prestige; see, e.g. Gutwirth
and Christiaens [2015]) and then using these measurements to justify, for
example, the allocation of funding (Gingras, 2014). Their evaluative role
also can encourage researchers and institutes to adopt a set of practices that
generate the highest number of publications, citations or patents – drawing
attention away from activities that are not captured by these indicators
(such as student supervision, mentoring, teaching, administrative work,
etc.) (Elzinga, 2010; Gingras, 2014).
This single-minded focus on productivity not only threatens to erode
the diversity in scientific practices but can also systematically disadvantage
scholars whose practices differ from the norms established by the indicators.
For example, if a bibliometric indicator captures only (or primarily) English
language outputs because the database used to calculate the indicator is
limited to mostly English language publications, then scholars who do not
publish in English may not only receive a lower numerical assessment, but
their works also risk becoming less visible on the international academic
landscape (Pansu et al., 2013). And researchers who decide to publish in
English to gain entrance to high-ranking journals may feel pressured to
focus on research topics that appeal to Anglo-Saxon audiences at the risk of
losing local knowledge (Gingras, 2014; Lillis & Curry, 2010).
In line with other critical voices (see, e.g. Hug et al., 2014; Pansu et
al., 2013), we are also skeptical about the widespread use of bibliometrics
because money and power are involved in their construction and use.
Not only are bibliometric indicators used to allocate funding in research-
producing settings and to support hiring and promotion, but bibliometrics themselves are in many cases an international business.
Notwithstanding the rise of alternative models of metrics, the most-used
citations-based indicators are computed using large-scale databases owned
and run by private companies (Pansu et al., 2013: 26ff). Thomson Reuters owns the Science Citation Index, the Social Sciences Citation Index and the Arts and Humanities Citation Index, created and managed by the Institute
for Scientific Information (ISI) since the 1960s and accessed through the
platform Web of Science (WoS). The other two systems were released
in 2004: Scopus, run by Elsevier; and Google Scholar (GS), created by
Google. These databases stand as the main tools for both scholarly bibliographic searching and citation indexing, and they can be considered global sources: Their stated ambition is to include a significant amount of the
worldwide academic output; their focus is not limited to a given field of
scientific knowledge; they are used and recognized by such diverse users
as researchers, funding agencies and performance evaluators; and they
are all commercial services provided by private companies with a global
outreach. The data from these databases are used, for example, to generate
the impact factor of journals (calculated using ISI indexes) and to create
international rankings of universities, such as the QS World University
Rankings or the Times Higher Education World University Rankings, which
both use Scopus to calculate citations per university (see QS, 2016; Times
Higher Education, 2015). Yet, the ‘global’ character of these databases has
been questioned (Lillis & Curry, 2010), with a study by Mongeon and Paul-Hus (2016) showing systematic gaps in the linguistic and disciplinary coverage of both the WoS and Scopus (e.g. the underrepresentation of journals not published in English and of journals in the social sciences and humanities).
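As a point of reference for how citation data feed into such journal-level metrics, the standard two-year journal impact factor can be written as follows (this is the widely known definition, not a formula developed in this chapter):

```latex
\mathrm{JIF}_{y}(J) =
\frac{\text{citations received in year } y \text{ by items that journal } J \text{ published in years } y-1 \text{ and } y-2}
     {\text{number of citable items that } J \text{ published in years } y-1 \text{ and } y-2}
```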
Unpacking the politics of bibliometrics involves looking at the choices
made in developing an indicator or database and the consequences of those
choices for various groups of researchers, keeping in mind the money and
power at stake. It also involves taking a look at what kinds of innovations
or adjustments are made to indicators over time: Bibliometrics are produced
through more or less complex algorithms, where adjustments are common
(e.g. more data are introduced as input, further computing power becomes
available or a different output is sought). Gillespie (2014: 178), for example,
notes that ‘algorithms can easily, instantly, radically, and invisibly be
changed. While major upgrades may happen only on occasion, algorithms
are regularly being “tweaked”’. These ‘tweaks’ are made for a reason; thus,
exploring these reasons for adjustments – as well as looking at who wins
and loses when they are implemented – also sheds light on the politics of
bibliometrics as a tool for governing scholarly practices. In this chapter,
therefore, we explore the ‘political’ moments of bibliometric design to
illustrate how the development of bibliometric indicators is not merely
technical but also based on choices that communicate power through their
social impacts.
The chapter is organized as follows: First, we present our theoretical
perspective, which draws from the traditions of science and technology
studies (STS) and academic literacies theory to conceptualize academic
publishing as a social practice where technologies (such as bibliometric
databases and algorithms) play a key role in articulating the values that
underlie scholarly production. This perspective sheds light on how power
is communicated through the creation of metrics – how measuring a
phenomenon turns into defining it and thus how some groups can become
marginalized. We illustrate this perspective by describing some of the
dilemmas developers can face when constructing a bibliometric indicator
that is intended to work fairly across different academic contexts. We then
take a closer look at examples of two kinds of metrics to illustrate how
the politics of bibliometrics work in practice: GS as an example focused on
citations where technological innovations set it apart from its competitors,
and the Norwegian Publication Indicator (NPI), an output-based indicator
for performance-based funding of research-producing institutions
in Norway. We demonstrate how each of these examples represents
innovations that are meant to improve fairness yet do not fundamentally
challenge the underlying notions of impact, quality and productivity that
give primacy to the natural sciences and English language publications
and thus marginalize scholars in both the geolinguistic periphery (Lillis &
Curry, 2010) and the social sciences and humanities. We conclude with some
thoughts about the importance of maintaining a critical stance about what
goes into the construction of bibliometric indicators, how they are used and
what academia stands to lose from their widespread (and uncritical) use.
Theoretical perspective: Academic publishing as a social practice and the communication of power
Measuring research performance requires making assumptions about
quantification as a process of measuring and academic publishing as
an object to be measured. Combining notions from STS and academic
literacies theories enables us to construct a framework for unpacking these
assumptions. First, the field of STS examines how concrete practices and
technologies shape, and are shaped by, science. In this perspective, the how
of bibliometrics matters – how indicators are constructed and used and
how quantification takes place (Bowker & Leigh Star, 1999; Desrosières,
2014). STS-inspired research challenges assumptions that quantification
is a straightforward and objective activity that mirrors reality (Espeland
& Stevens, 2008). STS invites us to take seriously ‘technicalities’ as key
elements of scientific and social life and offers conceptual tools to apprehend
their social dimensions. A central idea in STS is that quantification and
measurement are not synonymous: With quantification, what is at stake is
translating something non-numerical into numbers, while measurement
operates with something already in numerical form (Desrosières, 2014).
Quantification thus requires the crucial first step of translation: the
definition of socio-technical conventions concerning what can be counted
and what values should be assigned to what is being counted. This step
leads to commensuration, that is, ‘the valuation or measuring of different
objects with a common metric’ (Espeland & Stevens, 2008: 408). The act of
deciding what should be counted in bibliometric indicators and what values
should be assigned does not merely describe scientific production but also
interrogates the value of scientific work: it may confirm, reinforce, question
or deny it (Pansu et al., 2013). In other words, any form of classification
legitimizes some types of output and delegitimizes others (Gruber,
2014). And as individual academics shape their behavior in response to
such classifications (Michels & Schmoch, 2014), the use of bibliometric
indicators for evaluation does not simply ‘objectively measure’ academic
activity, but rather sets standards for desired behavior – whether intended
or not (Gingras, 2014; Gutwirth & Christiaens, 2015). Thus, bibliometrics
move from describing productivity to co-defining it, creating an idealized
image of scholarly activity (Gruber, 2014).
Academic literacies theory complements this perspective by drawing
attention to the complex nature of academic writing. Instead of seeing
academic writing as a monolithic practice, it conceptualizes academic
writing as a situated social activity, where what are considered acceptable
practices may change depending on the purpose, audience and context of
the writing (Barton et al., 2000; Lea & Street, 2006; Lillis & Curry, 2010).
Academic literacies theory acknowledges that academic contexts vary
considerably (in terms of disciplines, methodological traditions, orientation
to academic versus applied output, languages and research subject matter),
which means that individual writers may experience tensions between
conflicting institutional expectations as well as their own identities
as writers (Curry & Lillis, 2014; Nygaard, 2017). While a main focus of
academic literacies research has been on student writing, conceptualizing
academic writing as a situated social practice has clear implications for
understanding faculty publishing (Curry & Lillis, 2014). Viewing academic
writing and publishing as social practices that differ across contexts
challenges assumptions that academic publishing is a function of hours
spent on research; that the writing process is the same for academics across
various contexts; that the output of this process will be a product easily
identifiable as an academic publication; that this product will be simple
to include in the body of data that the metric draws from; and that, once
published, the product will be cited by other scholars in accordance with its
quality. The academic literacies perspective implies that, depending on the
setting, authors face a number of decisions about which genre to produce,
how to conceptualize quality (including where to submit a publication),
whether to collaborate and with whom and to what extent to prioritize
producing a publication over other pressing tasks that researchers regularly
face (Nygaard, 2017).
The main ideas from STS and academic literacies that inform our
perspective on the politics of bibliometrics can be summarized as follows:
Decisions made by the developers of bibliometric indicators about how
to quantify research productivity inevitably advantage some researchers
more than others because researchers follow different patterns in their
behaviors – including what they produce, how they collaborate, where they
publish and how they use citations – based on discipline and geographical
region. Because bibliometric indicators are associated with explicit or
implicit rewards, they thus perform power and can inform behavior (if
scholars strategically act to improve their productivity in bibliometric
terms, see Michels and Schmoch [2014]) or marginalize scholars in certain
groups, or both.
Diverse academic practices and design dilemmas
To flesh out this theoretical perspective and illustrate its practical
relevance, in this section we briefly describe the kinds of decisions
a developer might make about how to quantify journal articles in a
bibliometric indicator: first translating the concept of ‘journal article’ into
something that can be counted and then counting it. To function well as
an indicator, a metric needs to include zero items that fail to meet the
defined criteria and all items that do meet the criteria (Hicks et al., 2015).
Journal articles, generally considered the gold standard of publishing, are
not easy to measure on either of these counts because it is not always clear
when something can be considered a legitimate journal. Relevant criteria
include peer-review procedures, sponsors of the journal, quality assurance
mechanisms, types of articles, original articles or translations, etc. Erring
on the side of inclusivity might allow non-peer-reviewed or duplicate works
to be reported as original research (but see discussion in Lillis and Curry
[2010] about ‘equivalent publishing’ by multilingual scholars), while erring
on the side of caution and rigor may exclude some items meeting the criteria.
Once the developer has decided what constitutes a relevant journal article, the next decision is whether all journal articles should be counted equally. If some are assumed to be of higher quality than others,
how is quality to be recognized and how much value should be assigned to
different degrees of quality (Walters, 2014)? If quality is associated with
citations (either through the impact factor of the journal or the number of
times the article has been cited), how are different groups’ systematically
different citation practices accounted for? Not only do the significance and usage of citations vary between disciplines (Hyland, 1999; Walters, 2014),
but there is also evidence that scholars in the geolinguistic periphery or semi-
periphery are cited less often simply because of their geolinguistic location
and that citations from journals based in the geolinguistic periphery are
less valued than citations from journals in the core (Lillis et al., 2010).
The next question that the developer faces is whether the metric will
value journal articles in relation to other academic outputs. Monographs
and book chapters, for example, are common outputs, but their prevalence
varies across disciplines: scholars in the humanities produce the most books
relative to the other disciplines, while researchers in the natural sciences
produce the fewest (Aagaard et al., 2015; Piro et al., 2013). Metrics that
assign value only to journal articles disadvantage groups that publish a
broader range of genres. If the developer aims for a complex metric that
also includes outputs other than articles, the question becomes how to
count them relative to journal articles. Designing a metric means assigning a single numerical value to each genre, even though the relative worth of genres differs across disciplines: in some fields, a book chapter might count almost as much as a journal article, whereas in others it may be seen as having little value. No matter
what value is chosen, it may disadvantage at least some fields or disciplines,
making cross-disciplinary comparison of research performance difficult.
Finally, the developer faces the problem of how to count co-authorship
in terms of allocating full or partial credit per publication. This decision
could advantage one discipline over another because scholars in the natural
sciences (and quantitative social sciences) tend to work in teams, while
scholars in the humanities (and qualitative social sciences) often work alone
or in pairs (Aagaard et al., 2015; Hug et al., 2014). Choosing to fractionalize
favors solo publishing, while using whole counts inflates the productivity
of those working in teams.
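The consequences of this choice can be made concrete with a minimal sketch (the publication records below are hypothetical and not drawn from any of the systems discussed in this chapter): whole counting credits every co-author with a full publication, while simple fractional counting divides one credit among them.

```python
# Hypothetical records: the number of authors on each publication
# produced by two equally prolific scholars.
publications = {
    "solo_scholar": [1, 1, 1],   # always publishes alone
    "team_scholar": [4, 4, 4],   # always publishes in four-person teams
}

def whole_count(author_counts):
    # Whole counting: each publication counts as 1, however many co-authors.
    return len(author_counts)

def fractional_count(author_counts):
    # Fractional counting: each publication is split evenly among co-authors.
    return sum(1 / n for n in author_counts)

for scholar, author_counts in publications.items():
    print(scholar, whole_count(author_counts), round(fractional_count(author_counts), 2))
# solo_scholar  3  3.0
# team_scholar  3  0.75  <- the counting rule, not the output, decides who
#                           appears more 'productive'
```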
These dilemmas show how the developer of a bibliometric system
does not simply describe productivity but also defines it. The resulting
indicator specifies what is considered legitimate scholarly activity and
what is not. Those whose normal scholarly practices include a substantial
proportion of outputs delegitimized by the metrics developer (e.g. journal
articles in journals in languages other than English or publications for non-
academic audiences) risk becoming marginalized unless they improve their
productivity scores, which could reduce diversity in scholarly practices.
In this way, the widespread use of bibliometrics can govern behavior
(Michels & Schmoch, 2014). Attaching funding or other rewards to these
bibliometric indicators communicates this governing power all the more
strongly (see articles in the special issue of Language Policy edited by Curry
and Lillis [2013]).
Below, we explore different aspects of the politics of bibliometrics by
taking a closer look at two examples: GS, a global citation database; and the
NPI, a national output-based productivity indicator used for performance-
based funding. Both examples represent design innovations in bibliometrics
that could, in theory, benefit scholars in the periphery by being more
inclusive in comparison to other systems.
Google Scholar: A global citation database
To understand the overall functioning of the mainstream bibliometric
systems (GS, WoS and Scopus), it is worth breaking them down into
their main elements: databases, algorithms and interfaces. In particular,
approaching databases and algorithms ‘as analytically distinct’ (Gillespie,
2014: 169) permits us to better understand how these systems follow
the two-step process of quantification. The first step is the creation of
conventions to translate things into computable data, which is followed by
the measurement of the obtained data. Compared to the WoS and Scopus
databases, GS applies a different rationale for deciding what counts. In GS,
the scientific nature of the works indexed is presumed a priori instead of
having to be evaluated through publication in prescribed channels, as with
WoS or Scopus. According to Delgado López-Cózar et al. (2014: 447), ‘GS
automatically retrieves, indexes, and stores any type of scientific material
uploaded by an author without any previous external control.’ Indeed, when
it comes to defining what is scholarly output, GS addresses webmasters
rather than authors or journal editors, stating that
the content hosted on your website must consist primarily of scholarly
articles – journal papers, conference papers, technical reports, or their
drafts, dissertations, pre-prints, post-prints, or abstracts. Content
such as news or magazine articles, book reviews, and editorials is not
appropriate for Google Scholar. (Google, 2016)
In contrast, for WoS and Scopus, the worth of a scientific outlet must be
proven using several quantitative and qualitative criteria. In the case of
Scopus, the criteria for the indexation of a new journal range from the ‘type
of peer review’ to ‘publishing regularity’ and from having an International
Standard Serial Number to having titles and abstracts in English regardless
of the language of the journal (Elsevier, 2016). WoS and Scopus work
with ever-increasing but strictly bounded databases, and their gatekeeping
policies represent assumptions about what constitutes good research
practice (for a critique of these criteria, see Pansu et al. [2013: 94–96]). GS,
on the other hand, draws on a potentially unbounded database (i.e. the
Internet), where the patrolling of what is considered scientific is left to other
actors (e.g. publishers and university repositories). As noted, for Google,
the key actors are the webmasters of scientific repositories, who ensure
that publications are made readable by Google robot crawlers (Google,
2016). In other words, journals and other academic repositories must adopt
specific technical formats in order to be read by the Google machines and
computed. Ultimately, this approach is supposed to let scientific worth
speak for itself: From a scientific point of view, it is not inclusion in the
database that matters, but rather the citation patterns generated by a work
that will show its quality.
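To illustrate what such specific technical formats can look like in practice, the sketch below renders the kind of bibliographic meta tags that Google's inclusion guidelines describe for repository pages (Google, 2016); the tag names follow the Highwire Press convention, and the article details are invented for the example rather than taken from any real repository.

```python
# Illustrative only: generate Highwire-style <meta> tags of the kind a
# crawler can parse to identify a scholarly article. The metadata values
# below are hypothetical.
article = {
    "citation_title": "An Example Article on the Politics of Bibliometrics",
    "citation_author": ["Doe, Jane", "Roe, Richard"],
    "citation_publication_date": "2017",
    "citation_journal_title": "Example Journal of Science Studies",
    "citation_pdf_url": "https://repository.example.org/articles/123.pdf",
}

def render_meta_tags(metadata):
    """Turn a metadata dict into HTML <meta> tags, one tag per value."""
    tags = []
    for name, value in metadata.items():
        values = value if isinstance(value, list) else [value]
        for v in values:
            tags.append(f'<meta name="{name}" content="{v}">')
    return "\n".join(tags)

print(render_meta_tags(article))
```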
GS is also a ‘freely available’ service (Bar-Ilan, 2008: 257). There is
no fee for carrying out searches or calculating commonly used metrics.
Authors can create and to some extent manage their online profile. Based
on the information provided at registration, GS offers a series of results
(e.g. publications attributed to them). Updates to the author’s profile are
automatically implemented or submitted to the authors for review to
increase the precision of the information obtained by the robots. Authors
can calculate their metrics and decide whether to make their profile public
(Google, 2016). Yet, according to critics, the user-friendliness of GS has
contributed to reinforcing the abuse of bibliometrics rather than challenging
the role they play in evaluating scholarly activity (Delgado López-Cózar et
al., 2014; Gingras, 2014).
Because the citation-focused conventions used by GS are more inclusive
than the approaches of Scopus or the WoS, scholars publishing in languages
other than English and/or working in the social sciences and humanities
may find more of their publications and citations on GS than in the WoS
or Scopus (Harzing & Alakangas, 2016). However, GS appears to err on the
side of too much openness, potentially allowing the inclusion of items that
do not meet the criteria, such as non-peer-reviewed works or duplicates (e.g.
Delgado López-Cózar et al., 2014). We argue that GS’s approach does not
challenge the notion of citations as a key measure of impact or quality, but
actually boosts this idea by widening the pool of citations from which to
draw. Nor does GS critically engage with the reality that citation practices
differ across disciplines and geolinguistic settings. Several researchers
(e.g. Bar-Ilan, 2008; Delgado López-Cózar et al., 2014; Gingras, 2014) have
highlighted GS’s limitations. But comparing the functioning of GS to
that of WoS and Scopus invites an unpacking of the ‘black-box evaluation
machine[s]’ (Hicks et al., 2015: 430), which, we argue, opens a space to more
radically question the implications of adopting citation measurement as
the gold standard of what matters in scientific practice.
The Norwegian Publication Indicator: An output-based indicator of productivity
The purpose of the NPI is to quantify original research output at the
institutional level. As a national metric, it stands in contrast to panel-
based peer review models, such as the Research Assessment Exercise in the
United Kingdom (subsequently replaced by the metric-oriented Research
Excellence Framework; see Schneider, 2009). In the NPI, data are harvested
systematically from indexes such as WoS but also supplemented through
regular mandatory institutional reporting (Norwegian Association of
Higher Education Institutions, 2004). The indicator is designed to help
distribute national government funds to research-producing institutions,
and as such it functions as an incentive that rewards desired publishing
behavior with points that convert to funding (Norwegian Association of
Higher Education Institutions, 2004; Sivertsen, 2010). Schneider argues
that when the indicator was introduced in 2004, it was ‘novel and
innovative’ in attempting to acknowledge different publication patterns
across disciplines by taking into account not only journal articles but also
books and chapters (Schneider, 2009: 8; see also Aagaard et al., 2015).
The NPI works by assigning points to publications affiliated with each
research institute, weighting each publication based on its genre and where
it is published (Sivertsen, 2010) (see Table 2.1). ‘Level 1’ channels comprise
all academic presses and journals (in any language) that meet the basic
criteria, including having an ISSN and established peer-review procedures.
To avoid having scholars maximize their points by submitting only to ‘easy’
journals or presses (Schneider, 2009), about 20% of the journals in each
field (and some presses) have been identified as top-tier (‘Level 2’); articles
published in Level 2 journals or presses receive additional points. Discipline-
based evaluation boards determine which presses and journals are Level 1
or Level 2, based on a combination of factors (including impact factor and
reputation within a discipline). However, the conflicts of interest and ‘horse trading’ that have ensued from this classification process have been a major target of criticism (Aagaard et al., 2015: 112).
Although the NPI includes books and book chapters, it gives primacy to
the journal article. As Table 2.1 shows, this discrepancy is most evident in
the relative increase in points between Level 1 and Level 2, where journal
articles triple in value, but books and book chapters do not even double.
And although Norwegian language journals are given credit, they are
almost completely absent from the more prestigious Level 2 lists – because
‘international’ is a key criterion for being considered as Level 2. Indeed,
no Norwegian publishing companies – including the top-ranked university
presses – are categorized as Level 2 (Brock-Utne, 2007). Adjustments are
constantly being made to which journals qualify to be top tier (Sivertsen,
2010), but no effort is being made to change the two-level model, despite
the desire by some scholars for more levels (asserting that two levels do
not represent a sufficient degree of differentiation between journals in
their field). Other scholars suggest eliminating these levels because the
distinction between Level 1 and Level 2 sometimes appears arbitrary (see,
e.g. Rice, 2013).
Adjustment has occurred, however, in the way the system awards
points to co-authorship. Fractionalization was included from the beginning
because of the distortion inherent in counting a co-authored work multiple
times (once for each co-author) (Kyvik, 2003). Rather than each co-author
(or each institute) being able to claim one whole output, they are each given
a fraction of the available points. From 2015, fractionalization is no longer
based on a simple fraction but rather a square root (e.g. with four authors, each
would get 0.50 points rather than 0.25) in order to increase field neutrality
(i.e. ‘fairness’) by ‘correcting’ for the version that favored solo publications
(Aagaard et al., 2015; Kunnskapsdepartementet, 2015: 60). While authors in
the social sciences and humanities also gain by co-authorship being given greater value, the natural sciences gain significantly more – as was intended, according to the Ministry of Education and Research in its proposed 2016 national budget for higher education (Kunnskapsdepartementet, 2015).

Table 2.1 NPI points by publication category and level of publication channel

Category                                Level 1    Level 2
Journal article                            1          3
Monograph                                  5          8
Book chapter or article in anthology       0.7        1
The second adjustment made to the Norwegian model from 2015 is
an increased incentive for scholars to cooperate internationally. When one
co-author is ‘international’ (with an institutional affiliation outside
Norway, regardless of the nationality of the author), the total point sum is
multiplied by 1.3. An additional incentive for international collaboration is
likely to further marginalize those who publish in Norwegian. It is unlikely
that international collaborating partners are able to write sufficiently well
in Norwegian, unless the partner is another Norwegian who is affiliated
with an institute outside of Norway. The effect of this adjustment, if the
incentives work as intended, will be an increase in English-language publications, which may entail a reduction in Norwegian-language publications (Ossenblok et al., 2012).
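To make the interaction of these rules concrete, the following sketch computes points for a single publication; it assumes the Table 2.1 values, a per-author share equal to the square root of the simple author fraction (as the four-author example above suggests) and the 1.3 multiplier for an international co-author. It illustrates the logic described in this chapter and is not the NPI's official calculation.

```python
import math

# Points per publication category, by channel level (Table 2.1).
POINTS = {
    "journal_article": {1: 1.0, 2: 3.0},
    "monograph": {1: 5.0, 2: 8.0},
    "book_chapter": {1: 0.7, 2: 1.0},
}

def author_points(category, level, n_authors, international_coauthor=False):
    """Sketch of the post-2015 logic: square-root fractionalization of the
    author share, plus a 1.3 multiplier when a co-author is affiliated with
    an institution outside Norway. Illustrative only."""
    base = POINTS[category][level]
    share = math.sqrt(1 / n_authors)      # e.g. four authors -> 0.5, not 0.25
    bonus = 1.3 if international_coauthor else 1.0
    return base * share * bonus

# A Level 1 journal article with four authors, one of them based abroad:
print(round(author_points("journal_article", 1, 4, True), 2))   # 0.65
# The same article written alone and published in a Level 2 journal:
print(round(author_points("journal_article", 2, 1), 2))         # 3.0
```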
Conclusion: What is Lost in Quantification
In this chapter, we have described what we call the ‘politics of
bibliometrics’. We have argued that because bibliometrics appear in
numerical form – as a measure of citations or productivity – they give an impression of neutrality and objectivity, although they are highly normative in their construction and usage. Bibliometrics heavily influence decisions that matter: whom to hire, promote or fund. The ability
of bibliometric indicators to legitimize some scholarly practices and
delegitimize others affects not just individual scholars or institutions, but
also the diversity of publication practices (if scholars limit their publishing
to only things that count and avoid, for example, targeting non-academic
audiences). And if scholars in the margins shift their publication language
to English and accordingly adapt the focus of their work to suit Anglo-
Saxon journals, we risk losing local knowledges (Brock Utne, 2007; Gingras,
2014; Lillis & Curry, 2013).
Central to our discussion has been a focus on the choices made in the
construction of metrics, particularly with respect to how scholarly practices
are categorized. The nature of categorizing inevitably ‘valorizes some point
of view and silences another’ (Bowker & Leigh Star, 1999: 5–6). As such,
developing bibliometrics becomes a choice about what is to be valorized
– what can be counted as scholarly. Both GS and the Norwegian model
represent innovations with respect to inclusion of scholars in the periphery,
but they also highlight the difficulties of quantification: translating notions of ‘scholarship,’ ‘quality’ and ‘impact’ into metrics that can be used in
the same way across disciplines and geographical contexts. Compared to
similar databases, GS applies a more inclusive rationale of ‘what counts’,
but it does not contest the assumed role that citations play in scholarship
or take into account how citation practices may differ across settings. The
Norwegian Publication Indicator also represents an innovation in attaining
‘field neutrality,’ but it has received criticism for its non-transparent way
of determining which channels should qualify as top tier (Aagaard et al.,
2015); the difficulty of attaining field neutrality is highlighted in the recent
adjustments made to the mode of fractionalization to redirect more funding to the natural sciences (Sandström & Sandström, 2009). Perhaps of greater concern with respect to what might be lost is the adjustment that adds points for international
collaboration, which provides an additional incentive to publish in English
rather than Norwegian.
Unpacking the politics of bibliometric systems by examining how they
categorize and quantify scholarly activity, how they communicate power
and what kind of impact they have on scholarly publishing practices,
particularly in global terms, helps open up bibliometrics to a debate in which
more scholars may feel confident and compelled to participate. While the
need for bibliometric indicators is likely to persist in academia, our hope
is that bibliometrics will not be used uncritically, and that scholars and
policymakers will become increasingly aware of what may be lost in these
quantification processes.
Acknowledgments
The authors would like to thank the editors for their comments as well
as the participants in the research seminar at the Centre de Recherche en
Science Politique (CReSPo) of the Université Saint-Louis, Bruxelles, for
their feedback on a preliminary version of this chapter. Lynn P. Nygaard’s
research has been supported by PRIO and the Research Council of Norway.
Rocco Bellanova’s research has been carried out with the support of
the following projects: Actions de recherche concertées (ARC) – ‘Why
Regulate? Regulation, De-Regulation and Legitimacy of the EU’ (funded by
the Communauté française de Belgique); and NordSTEVA – ‘Nordic Centre
for Security Technologies and Societal Values’ (funded by NordFORSK).
References
Aagaard, K., Bloch, C. and Schneider, J.W. (2015) Impacts of performance-based research
funding systems: The case of the Norwegian Publication Indicator. Research
Evaluation 24, 106–117.
Bar-Ilan, J. (2008) Which h-index? A comparison of WoS, Scopus and Google Scholar.
Scientometrics 74 (2), 257–271.
Barton, D., Hamilton, M. and Ivanic, R. (eds) (2000) Situated Literacies: Reading and
Writing in Context. London: Routledge.
Bowker, G.C. and Leigh Star, S. (1999) Sorting Things Out: Classification and Its Consequences.
Cambridge, MA: The MIT Press.
Brock-Utne, B. (2007, Summer) Is Norwegian threatened as an academic language?
International Higher Education 15–16.
Curry, M.J. and Lillis, T. (2014) Strategies and tactics in academic knowledge production
by multilingual scholars. Education Policy Analysis Archives 22 (32), 1–23. See http://
dx.doi.org/10.14507/epaa.v22n32.2014.
De Bellis, N. (2014) History and evolution of (biblio)metrics. In B. Cronin and C.R.
Sugimoto (eds) Beyond Bibliometrics: Harnessing Multidimensional Indicators of Scholarly
Impact (pp. 23–44). Cambridge, MA: The MIT Press.
Delgado López-Cózar, E., Robinson-García, N. and Torres-Salinas, D. (2014) The
Google scholar experiment: How to index false papers and manipulate bibliometric
indicators. Journal of the Association for Information Science and Technology 65 (3),
446–454.
Desrosières, A. (2014) Le gouvernement de la cité néolibérale: quand la quantification
rétroagit sur les acteurs [The governing of the neoliberal city: When quantification
has a feedback effect on the actors]. In A. Desrosières (ed.) Prouver et gouverner. Une
analyse politique des statistiques publiques [Proving and Governing. A Political Analysis of
Public Statistics] (pp. 33–59). Paris: La Découverte.
Elsevier (2016) Content policy and selection. See https://www.elsevier.com/solutions/
scopus/content/content-policy-and-selection (accessed 17 April 2017).
Elzinga, A. (2010) New public management, science policy and the orchestration of
university research: Academic science the loser. The Journal for Transdisciplinary
Research in Southern Africa 6 (2), 307–332.
Espeland, W.N. and Stevens, M.L. (2008) A sociology of quantification. Archives
Européennes de Sociologie: European Journal of Sociology 49 (3), 401–436.
Gillespie, T. (2014) The relevance of algorithms. In T. Gillespie, P.J. Boczkowski and
K.A. Foot (eds) Media Technologies. Essays on Communication, Materiality, and Society
(pp. 167–193). Cambridge, MA: The MIT Press.
Gingras, Y. (2014) Les dérives de l’évaluation de la recherche. Du bon usage de la bibliométrie.
Paris: Raisons d’Agir. (Published by MIT Press [2016] as Bibliometrics and Research
Evaluation: Uses and Abuses.)
Google (2016) Inclusion guidelines for webmasters. See https://scholar.google.com/intl/
en/scholar/inclusion.html#overview (accessed 17 April 2017).
Gruber, T. (2014) Academic sell-out: How an obsession with metrics and rankings is
damaging academia. Journal of Marketing for Higher Education 24 (2), 165–177.
Gutwirth, S. and Christiaens, T. (2015) Les sciences et leurs problèmes: La fraude
scientifique, un moyen de diversion? [Sciences and their problems: Scientific fraud, a
diversion?]. Revue Interdisciplinaire d’Études Juridiques 74 (1), 21–49.
Harzing, A.-W. and Alakangas, S. (2016) Google Scholar, Scopus and the Web of Science:
A longitudinal and cross-disciplinary comparison. Scientometrics 106 (2), 787–804.
Hicks, D., Wouters, P., Waltman, L., de Rijcke, S. and Rafols, I. (2015) Bibliometrics: The
Leiden Manifesto for research metrics. Nature 520 (7548), 429–431.
Hug, S.E., Ochsner, M. and Daniel, H.-D. (2014) A framework to explore and develop
criteria for assessing research quality in the humanities. International Journal for
Education Law and Policy 10 (1), 55–64.
Hyland, K. (1999) Academic attribution: Citation and the construction of disciplinary
knowledge. Applied Linguistics 20 (3), 341–367.
Kunnskapsdepartementet (2015) Orientering om forslag til statsbudsjettet 2016 for
universitet og høgskolar [Orientation on the proposed national budget 2016
for universities and colleges]. See https://www.regjeringen.no/globalassets/
departementene/kd/orientering-om-forslag-til-statsbudsjettet-2016-uh.pdf#page
=60 (accessed 17 April 2017).
Kyvik, S. (2003) Changing trends in publishing behaviour among university faculty,
1980–2000. Scientometrics 58 (1), 35–48.
Lea, M.R. and Street, B.V. (2006) The ‘academic literacies’ model: Theory and
applications. Theory Into Practice 45 (4), 368–377.
Lillis, T. and Curry, M.J. (2010) Academic Writing in a Global Context: The Politics and
Practices of Publishing in English. London: Routledge.
Lillis, T. and Curry, M.J. (2013) English, scientific publishing and participation in
the global knowledge economy. In E.J. Erling and P. Seargeant (eds) English and
Development: Policy, Pedagogy, and Globalization (pp. 220–242). Bristol: Multilingual
Matters.
Lillis, T., Hewings, A., Vladimirou, D. and Curry, M.J. (2010) The geolinguistics of
English as an academic lingua franca: Citation practices across English-medium
national and English-medium international journals. International Journal of Applied
Linguistics 20 (1), 111–135.
Michels, C. and Schmoch, U. (2014) Impact of bibliometric studies on the publication
behavior of authors. Scientometrics 98 (1), 369–385.
Mongeon, P. and Paul-Hus, A. (2016) The journal coverage of Web of Science and Scopus:
A comparative analysis. Scientometrics 106 (1), 213–228.
Norwegian Association of Higher Education Institutions (2004) A Bibliometric Model
for Performance-based Budgeting of Research Institutions. See http://www.uhr.no/
documents/Rapport_fra_UHR_prosjektet_4_11_engCJS_endelig_versjon_av_hele_
oversettelsen.pdf (accessed 17 April 2017).
Nygaard, L.P. (2017) Publishing and perishing: An academic literacies framework for
investigating research productivity. Studies in Higher Education 24 (3), 519–532.
Ossenblok, T., Engels, T. and Sivertsen, G. (2012) The representation of the social
sciences and humanities in the Web of Science: A comparison of publication patterns
and incentive structures in Flanders and Norway (2005–9). Research Evaluation 21
(4), 280–290.
Pansu, P., Dubois, N. and Beauvois, J.-L. (2013) Dis-moi qui te cite et je saurai ce que tu vaux.
Que mesure vraiment la bibliométrie? [Tell Me Who You Cite, and I will Know Your Value.
What do Bibliometrics Really Measure?]. Grenoble: Presses Universitaires de Grenoble.
Piro, F.N., Aksnes, D.W. and Rørstad, K. (2013) A macro analysis of productivity
differences across fields: Challenges of measurement of scientific publishing. Journal
of the American Society for Information Science and Technology 64 (2), 307–320.
QS (2016) QS World University Rankings: Methodology. See http://www.topuniversities.
com/qs-world-university-rankings/methodology (accessed 17 April 2017).
Rice, C. (2013, Nov. 5) Do you make these 6 mistakes? A funding scheme that turns
professors into typing monkeys. Science in Balance. Blog post. See http://curt-rice.
com/2013/11/05/do-you-make-these-6-mistakes-a-funding-scheme-that-turns-
professors-into-typing-monkeys/ (accessed 17 April 2017).
Sandström, U. and Sandström, E. (2009) The field factor: Towards a metric for academic
institutions. Research Evaluation 18 (3), 243–250.
Schneider, J.W. (2009) An outline of the bibliometric indicator used for performance-
based funding of research institutions in Norway. European Political Science 8,
364–378.
Sivertsen, G. (2010) A performance indicator based on complete data for the scientific
publication output at research institutions. ISSI Newsletter 6 (1), 22 –28.
Times Higher Education (2015) World University Rankings 2015–2016 methodology. See
https://www.timeshighereducation.com/news/ranking-methodology-2016 (accessed
17 April 2017).
Walters, W.H. (2014) Do article influence scores overestimate the citation impact of
social science journals in subfields that are related to higher-impact natural science
disciplines? Journal of Informetrics 8 (2), 421–430.