2 Lost in Quantification: Scholars and the Politics of Bibliometrics
Lynn P. Nygaard and Rocco Bellanova
As scholarship becomes increasingly globalized, bibliometric systems for
quantifying research productivity have become increasingly relevant to
academia (Gingras, 2014; Hicks et al., 2015; Lillis & Curry, 2010; Pansu
et al., 2013). Bibliometric indicators are used to convert information about
research activity (primarily publications and citations) into numbers that,
in their apparent neutrality, seem to transcend linguistic and cultural
(including disciplinary) boundaries. Developed as a way to study academic
publication and citation patterns statistically, bibliometrics were originally
used mostly for research purposes to substantiate claims about who
produces what and under which circumstances (De Bellis, 2014). Today,
however, bibliometrics are most familiar to scholars as evaluative devices
(see, e.g. Pansu et al., 2013). Bibliometric indicators are used to assess research
performance, not only at the institutional level (to distribute funding to
universities, rank journals and so on) but also increasingly at the individual
level in the context of hiring or promotion decisions (Gingras, 2014). For
those asked to evaluate the work of scholars from different countries,
disciplines or contexts, the promise of simple indicators that would enable
comparisons based on seemingly objective metrics – applicable beyond
disciplinary and linguistic boundaries – can be enticing. The question
is whether current bibliometric indicators are as neutral as they seem –
whether they satisfactorily ‘quantify’ scholarly activity and whether the
resulting number means the same thing to scholars across different settings
(Gingras, 2014).
The aim of this chapter is to look at bibliometrics as a specific instance
of quantification and thus – as with any other form of quantification –
as a form of governing things and people. The politics of bibliometrics
deserve to be unpacked because even with the best intentions, developers
of bibliometric indicators must make non-trivial decisions about how to
measure things that are notoriously difficult to quantify (De Bellis, 2014).
These decisions require answering important questions such as: What does
‘productivity’ entail? What should count and how should it be counted?
(Gingras, 2014). Bibliometrics play a key role in conceptualizing scientific
research as a knowledge economy by attempting to measure how
effectively resources are converted into output (publications and patents,
but also citations, which often stand in for prestige; see, e.g. Gutwirth
and Christiaens [2015]) and then using these measurements to justify, for
example, the allocation of funding (Gingras, 2014). Their evaluative role
can also encourage researchers and institutes to adopt a set of practices that
generate the highest number of publications, citations or patents – drawing
attention away from activities that are not captured by these indicators
(such as student supervision, mentoring, teaching, administrative work,
etc.) (Elzinga, 2010; Gingras, 2014).
This single-minded focus on productivity not only threatens to erode
the diversity in scientific practices but can also systematically disadvantage
scholars whose practices differ from the norms established by the indicators.
For example, if a bibliometric indicator captures only (or primarily) English
language outputs because the database used to calculate the indicator is
limited to mostly English language publications, then scholars who do not
publish in English may not only receive a lower numerical assessment, but
their works risk also becoming less visible on the international academic
landscape (Pansu et al., 2013). And researchers who decide to publish in
English to gain entrance to high-ranking journals may feel pressured to
focus on research topics that appeal to Anglo-Saxon audiences at the risk of
losing local knowledge (Gingras, 2014; Lillis & Curry, 2010).
In line with other critical voices (see, e.g. Hug et al., 2014; Pansu et
al., 2013), we are also skeptical about the widespread use of bibliometrics
because money and power are involved in their construction and use.
Not only are bibliometric indicators used to allocate funding in research-
producing settings and to support hiring and promotion, but bibliometrics
are also in many cases an international business in themselves.
Notwithstanding the rise of alternative models of metrics, the most-used
citation-based indicators are computed using large-scale databases owned and run by private companies (Pansu et al., 2013: 26ff). Thomson Reuters owns the Science Citation Index, the Social Sciences Citation Index and the Arts and Humanities Citation Index, created and managed by the Institute
for Scientific Information (ISI) since the 1960s and accessed through the
platform Web of Science (WoS). The other two systems were released
in 2004: Scopus, run by Elsevier; and Google Scholar (GS), created by
Google. These databases stand as the main tools used in both scholarly
bibliographic searches and citation indexes and can be considered global
sources: Their stated ambition is to include a significant amount of the
worldwide academic output; their focus is not limited to a given field of
scientific knowledge; they are used and recognized by such diverse users
as researchers, funding agencies and performance evaluators; and they
are all commercial services provided by private companies with a global
outreach. The data from these databases are used, for example, to generate
the impact factor of journals (calculated using ISI indexes) and to create
international rankings of universities, such as the QS World University
Rankings or the Times Higher Education World University Rankings, which
both use Scopus to calculate citations per university (see QS, 2016; Times
Higher Education, 2015). Yet, the ‘global’ character of these databases has
been questioned (Lillis & Curry, 2010), with a study by Mongeon and Paul-
Hus (2016) showing that both the WoS and Scopus systematically underrepresent certain kinds of journals in their linguistic and disciplinary coverage (e.g. journals not published in English, and journals in the fields of the social sciences and humanities).
Unpacking the politics of bibliometrics involves looking at the choices
made in developing an indicator or database and the consequences of those
choices for various groups of researchers, keeping in mind the money and
power at stake. It also involves taking a look at what kinds of innovations
or adjustments are made to indicators over time: Bibliometrics are produced
through more or less complex algorithms, where adjustments are common
(e.g. more data are introduced as input, further computing power becomes
available or a different output is sought). Gillespie (2014: 178), for example,
notes that ‘algorithms can easily, instantly, radically, and invisibly be
changed. While major upgrades may happen only on occasion, algorithms
are regularly being “tweaked”’. These ‘tweaks’ are made for a reason; thus,
exploring these reasons for adjustments – as well as looking at who wins
and loses when they are implemented – also sheds light on the politics of
bibliometrics as a tool for governing scholarly practices. In this chapter,
therefore, we explore the ‘political’ moments of bibliometric design to
illustrate how the development of bibliometric indicators is not merely
technical but also based on choices that communicate power through their
social impacts.
The chapter is organized as follows: First, we present our theoretical
perspective, which draws from the traditions of science and technology
studies (STS) and academic literacies theory to conceptualize academic
publishing as a social practice where technologies (such as bibliometric
databases and algorithms) play a key role in articulating the values that
underlie scholarly production. This perspective sheds light on how power
is communicated through the creation of metrics – how measuring a
phenomenon turns into defining it and thus how some groups can become
marginalized. We illustrate this perspective by describing some of the
dilemmas developers can face when constructing a bibliometric indicator
that is intended to work fairly across different academic contexts. We then
take a closer look at examples of two kinds of metrics to illustrate how
the politics of bibliometrics work in practice: GS as an example focused on
citations where technological innovations set it apart from its competitors,
and the Norwegian Publication Indicator (NPI), an output-based indicator
for performance-based funding of research-producing institutions
in Norway. We demonstrate how each of these examples represents
innovations that are meant to improve fairness yet do not fundamentally
challenge the underlying notions of impact, quality and productivity that
give primacy to the natural sciences and English language publications
and thus marginalize scholars in both the geolinguistic periphery (Lillis &
Curry, 2010) and the social sciences and humanities. We conclude with some
thoughts about the importance of maintaining a critical stance about what
goes into the construction of bibliometric indicators, how they are used and
what academia stands to lose from their widespread (and uncritical) use.
Theoretical perspective: Academic publishing as a social practice and the communication of power
Measuring research performance requires making assumptions about
quantification as a process of measuring and academic publishing as
an object to be measured. Combining notions from STS and academic
literacies theories enables us to construct a framework for unpacking these
assumptions. First, the field of STS examines how concrete practices and
technologies shape, and are shaped by, science. In this perspective, the how
of bibliometrics matters – how indicators are constructed and used and
how quantification takes place (Bowker & Leigh Star, 1999; Desrosières,
2014). STS-inspired research challenges assumptions that quantification
is a straightforward and objective activity that mirrors reality (Espeland
& Stevens, 2008). STS invites us to take seriously ‘technicalities’ as key
elements of scientific and social life and offers conceptual tools to apprehend
their social dimensions. A central idea in STS is that quantification and
measurement are not synonymous: With quantification, what is at stake is
translating something non-numerical into numbers, while measurement
operates with something already in numerical form (Desrosières, 2014).
Quantification thus requires the crucial first step of translation: the
definition of socio-technical conventions concerning what can be counted
and what values should be assigned to what is being counted. This step
leads to commensuration, that is, ‘the valuation or measuring of different
objects with a common metric’ (Espeland & Stevens, 2008: 408). The act of
deciding what should be counted in bibliometric indicators and what values
should be assigned does not merely describe scientific production but also
interrogates the value of scientific work: it may confirm, reinforce, question
or deny it (Pansu et al., 2013). In other words, any form of classification
legitimizes some types of output and delegitimizes others (Gruber,
2014). And as individual academics shape their behavior in response to
such classifications (Michels & Schmoch, 2014), the use of bibliometric
indicators for evaluation does not simply ‘objectively measure’ academic
activity, but rather sets standards for desired behavior – whether intended
or not (Gingras, 2014; Gutwirth & Christiaens, 2015). Thus, bibliometrics
move from describing productivity to co-defining it, creating an idealized
image of scholarly activity (Gruber, 2014).
Academic literacies theory complements this perspective by drawing
attention to the complex nature of academic writing. Instead of seeing
academic writing as a monolithic practice, it conceptualizes academic
writing as a situated social activity, where what are considered acceptable
practices may change depending on the purpose, audience and context of
the writing (Barton et al., 2000; Lea & Street, 2006; Lillis & Curry, 2010).
Academic literacies theory acknowledges that academic contexts vary
considerably (in terms of disciplines, methodological traditions, orientation
to academic versus applied output, languages and research subject matter),
which means that individual writers may experience tensions between
conflicting institutional expectations as well as their own identities
as writers (Curry & Lillis, 2014; Nygaard, 2017). While a main focus of
academic literacies research has been on student writing, conceptualizing
academic writing as a situated social practice has clear implications for
understanding faculty publishing (Curry & Lillis, 2014). Viewing academic
writing and publishing as social practices that differ across contexts
challenges assumptions that academic publishing is a function of hours
spent on research; that the writing process is the same for academics across
various contexts; that the output of this process will be a product easily
identifiable as an academic publication; that this product will be simple
to include in the body of data that the metric draws from; and that, once
published, the product will be cited by other scholars in accordance with its
quality. The academic literacies perspective implies that, depending on the
setting, authors face a number of decisions about which genre to produce,
how to conceptualize quality (including where to submit a publication),
whether to collaborate and with whom and to what extent to prioritize
producing a publication over other pressing tasks that researchers regularly
face (Nygaard, 2017).
The main ideas from STS and academic literacies that inform our
perspective on the politics of bibliometrics can be summarized as follows:
Decisions made by the developers of bibliometric indicators about how
to quantify research productivity inevitably advantage some researchers
more than others because researchers follow different patterns in their
behaviors – including what they produce, how they collaborate, where they
publish and how they use citations – based on discipline and geographical
region. Because bibliometric indicators are associated with explicit or
implicit rewards, they thus perform power and can inform behavior (if
scholars strategically act to improve their productivity in bibliometric
terms, see Michels and Schmoch [2014]) or marginalize scholars in certain
groups, or both.
Diverse academic practices and design dilemmas
To flesh out this theoretical perspective and illustrate its practical
relevance, in this section we briefly describe the kinds of decisions
a developer might make about how to quantify journal articles in a
bibliometric indicator: first translating the concept of ‘journal article’ into
something that can be counted and then counting it. To function well as
an indicator, a metric needs to include zero items that fail to meet the
defined criteria and all items that do meet the criteria (Hicks et al., 2015).
Journal articles, generally considered the gold standard of publishing, are
not easy to measure on either of these counts because it is not always clear
when something can be considered a legitimate journal. Relevant criteria
include peer-review procedures, sponsors of the journal, quality assurance
mechanisms, types of articles, original articles or translations, etc. Erring
on the side of inclusivity might allow non-peer-reviewed or duplicate works
to be reported as original research (but see discussion in Lillis and Curry
[2010] about ‘equivalent publishing’ by multilingual scholars), while erring
on the side of caution and rigor may exclude some items meeting the criteria.
Once the developer has decided what constitutes a relevant journal
article, the decision about whether all journal articles should be counted
equally must be made. If some are assumed to be higher quality than others,
how is quality to be recognized and how much value should be assigned to
different degrees of quality (Walters, 2014)? If quality is associated with
citations (either through the impact factor of the journal or the number of
times the article has been cited), how are different groups’ systematically
different citation practices accounted for? The significance and usage of citations not only vary between disciplines (Hyland, 1999; Walters, 2014),
but there is also evidence that scholars in the geolinguistic periphery or semi-
periphery are cited less often simply because of their geolinguistic location
and that citations from journals based in the geolinguistic periphery are
less valued than citations from journals in the core (Lillis et al., 2010).
The next question that the developer faces is whether the metric will value academic outputs other than journal articles. Monographs
and book chapters, for example, are common outputs, but their prevalence
varies across disciplines: scholars in the humanities produce the most books
relative to the other disciplines, while researchers in the natural sciences
produce the fewest (Aagaard et al., 2015; Piro et al., 2013). Metrics that
assign value only to journal articles disadvantage groups that publish a
broader range of genres. If the developer aims for a complex metric that
also includes outputs other than articles, the question becomes how to
count them relative to journal articles. Designing a metric means assigning a single numerical value to each genre, even though the relative value of genres differs across disciplines; in some fields, book chapters might count almost as much as a journal article, whereas in others they may be seen as having little value. No matter
what value is chosen, it may disadvantage at least some fields or disciplines,
making cross-disciplinary comparison of research performance difficult.
Finally, the developer faces the problem of how to count co-authorship
in terms of allocating full or partial credit per publication. This decision
could advantage one discipline over another because scholars in the natural
sciences (and quantitative social sciences) tend to work in teams, while
scholars in the humanities (and qualitative social sciences) often work alone
or in pairs (Aagaard et al., 2015; Hug et al., 2014). Choosing to fractionalize
favors solo publishing, while using whole counts inflates the productivity
of those working in teams.
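The arithmetic behind this trade-off is simple but consequential. The following sketch, written in Python with hypothetical publication data, shows how whole counting and fractional counting distribute credit differently between a solo author and the members of a five-person team; it illustrates the general dilemma rather than any particular indicator's implementation.

```python
# Hypothetical publication records: one solo-authored paper and one team paper.
publications = [
    {"authors": ["Solo"]},
    {"authors": ["A", "B", "C", "D", "E"]},
]

def credit(pubs, scheme="whole"):
    """Sum per-author credit under either whole or fractional counting."""
    totals = {}
    for pub in pubs:
        n = len(pub["authors"])
        share = 1.0 if scheme == "whole" else 1.0 / n   # fractional: split one point
        for author in pub["authors"]:
            totals[author] = totals.get(author, 0.0) + share
    return totals

print(credit(publications, "whole"))       # every co-author is credited with a full paper
print(credit(publications, "fractional"))  # Solo keeps 1.0; each team member gets 0.2
```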
These dilemmas show how the developer of a bibliometric system
does not simply describe productivity but also defines it. The resulting
indicator specifies what is considered legitimate scholarly activity and
what is not. Those whose normal scholarly practices include a substantial
proportion of outputs delegitimized by the metric's developer (e.g. articles in journals published in languages other than English, or publications for non-academic audiences) risk becoming marginalized unless they adapt their practices to improve their productivity scores, which could reduce diversity in scholarly practices.
In this way, the widespread use of bibliometrics can govern behavior
(Michels & Schmoch, 2014). Attaching funding or other rewards to these
bibliometric indicators communicates this governing power all the more
strongly (see articles in the special issue of Language Policy edited by Curry
and Lillis [2013]).
Below, we explore different aspects of the politics of bibliometrics by
taking a closer look at two examples: GS, a global citation database; and the
NPI, a national output-based productivity indicator used for performance-
based funding. Both examples represent design innovations in bibliometrics
that could, in theory, benefit scholars in the periphery by being more
inclusive in comparison to other systems.
Google Scholar: A global citation database
To understand the overall functioning of the mainstream bibliometric
systems (GS, WoS and Scopus), it is worth breaking them down into
their main elements: databases, algorithms and interfaces. In particular,
approaching databases and algorithms ‘as analytically distinct’ (Gillespie,
2014: 169) permits us to better understand how these systems follow
the two-step process of quantification. The first step is the creation of
conventions to translate things into computable data, which is followed by
the measurement of the obtained data. Compared to the WoS and Scopus
databases, GS applies a different rationale for deciding what counts. In GS,
the scientific nature of the works indexed is presumed a priori instead of
having to be evaluated through publication in prescribed channels, as with
WoS or Scopus. According to Delgado López-Cózar et al. (2014: 447), ‘GS
automatically retrieves, indexes, and stores any type of scientific material
uploaded by an author without any previous external control.’ Indeed, when
it comes to defining what is scholarly output, GS addresses webmasters
rather than authors or journal editors, stating that
the content hosted on your website must consist primarily of scholarly
articles – journal papers, conference papers, technical reports, or their
drafts, dissertations, pre-prints, post-prints, or abstracts. Content
such as news or magazine articles, book reviews, and editorials is not
appropriate for Google Scholar. (Google, 2016)
In contrast, for WoS and Scopus, the worth of a scientific outlet must be
proven using several quantitative and qualitative criteria. In the case of
Scopus, the criteria for the indexation of a new journal range from the ‘type
of peer review’ to ‘publishing regularity’ and from having an International
Standard Serial Number to having titles and abstracts in English regardless
of the language of the journal (Elsevier, 2016). WoS and Scopus work
with ever-increasing but strictly bounded databases, and their gatekeeping
policies represent assumptions about what constitutes good research
practice (for a critique of these criteria, see Pansu et al. [2013: 94–96]). GS,
on the other hand, draws on a potentially unbounded database (i.e. the
Internet), where the patrolling of what is considered scientific is left to other
actors (e.g. publishers and university repositories). As noted, for Google,
the key actors are the webmasters of scientific repositories, who ensure
that publications are made readable by Google robot crawlers (Google,
2016). In other words, journals and other academic repositories must adopt
specific technical formats in order to be read by the Google machines and
computed. Ultimately, this approach is supposed to let scientific worth
speak for itself: From a scientific point of view, it is not inclusion in the
database that matters, but rather the citation patterns generated by a work
that will show its quality.
GS is also a ‘freely available’ service (Bar-Ilan, 2008: 257). There is
no fee for carrying out searches or calculating commonly used metrics.
Authors can create and to some extent manage their online profile. Based
on the information provided at registration, GS returns a set of results (e.g. publications attributed to the author). Updates to the profile are either implemented automatically or submitted to the author for review, in order to increase the precision of the information obtained by the robots. Authors
can calculate their metrics and decide whether to make their profile public
(Google, 2016). Yet, according to critics, the user-friendliness of GS has
contributed to reinforcing the abuse of bibliometrics rather than challenging
the role they play in evaluating scholarly activity (Delgado López-Cózar et
al., 2014; Gingras, 2014).
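One of the metrics commonly calculated from such profiles is the h-index (compared across databases by Bar-Ilan, 2008). As a minimal illustration of how little is involved in the calculation, the following sketch computes an h-index from a hypothetical list of citation counts; it is our own illustration, not Google Scholar's actual implementation.

```python
# Minimal sketch of the h-index: the largest h such that at least h publications
# have each received at least h citations. The citation counts below are hypothetical.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank      # this paper still has at least `rank` citations
        else:
            break         # remaining papers have too few citations to raise h
    return h

print(h_index([25, 8, 5, 4, 3, 1, 0]))  # -> 4: four papers with at least 4 citations each
```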
Because the citation-focused conventions used by GS are more inclusive
than the approaches of Scopus or the WoS, scholars publishing in languages
other than English and/or working in the social sciences and humanities
may find more of their publications and citations on GS than in the WoS
or Scopus (Harzing & Alakangas, 2016). However, GS appears to err on the
side of too much openness, potentially allowing the inclusion of items that
do not meet the criteria, such as non-peer-reviewed works or duplicates (e.g.
Delgado López-Cózar et al., 2014). We argue that GS’s approach does not
challenge the notion of citations as a key measure of impact or quality, but
actually boosts this idea by widening the pool of citations from which to
draw. Nor does GS critically engage with the reality that citation practices
differ across disciplines and geolinguistic settings. Several researchers
(e.g. Bar-Ilan, 2008; Delgado López-Cózar et al., 2014; Gingras, 2014) have
highlighted GS’s limitations. But comparing the functioning of GS to
that of WoS and Scopus invites an unpacking of the ‘black-box evaluation
machine[s]’ (Hicks et al., 2015: 430), which, we argue, opens a space to more
radically question the implications of adopting citation measurement as
the gold standard of what matters in scientific practice.
The Norwegian Publication Indicator: An output-based indicator of productivity
The purpose of the NPI is to quantify original research output at the
institutional level. As a national metric, it stands in contrast to panel-
based peer review models, such as the Research Assessment Exercise in the
United Kingdom (subsequently replaced by the metric-oriented Research
Excellence Framework; see Schneider, 2009). In the NPI, data are harvested
systematically from indexes such as WoS but also supplemented through
regular mandatory institutional reporting (Norwegian Association of
Higher Education Institutions, 2004). The indicator is designed to help
distribute national government funds to research-producing institutions,
and as such it functions as an incentive that rewards desired publishing
behavior with points that convert to funding (Norwegian Association of
Higher Education Institutions, 2004; Sivertsen, 2010). Schneider argues
that when the indicator was introduced in 2004, it was ‘novel and
innovative’ in attempting to acknowledge different publications patterns
across disciplines by taking into account not only journal articles but also
books and chapters (Schneider, 2009: 8; see also Aagaard et al., 2015).
The NPI works by assigning points to publications affiliated with each
research institute, weighting each publication based on its genre and where
it is published (Sivertsen, 2010) (see Table 2.1). ‘Level 1’ channels comprise
all academic presses and journals (in any language) that meet the basic
criteria, including having an ISSN and established peer-review procedures.
To avoid having scholars maximize their points by submitting only to 'easy'
journals or presses (Schneider, 2009), about 20% of the journals in each
field (and some presses) have been identified as top-tier (‘Level 2’); articles
published in Level 2 journals or presses receive additional points. Discipline-
based evaluation boards determine which presses and journals are Level 1
or Level 2, based on a combination of factors (including impact factor and
reputation within a discipline). However, the conflicts of interest and 'horse trading' that have ensued from this classification process have been a major target of criticism (Aagaard et al., 2015: 112).
Although the NPI includes books and book chapters, it gives primacy to
the journal article. As Table 2.1 shows, this discrepancy is most evident in
the relative increase in points between Level 1 and Level 2, where journal
articles triple in value, but books and book chapters do not even double.
And although Norwegian language journals are given credit, they are
almost completely absent from the more prestigious Level 2 lists – because
‘international’ is a key criterion for being considered as Level 2. Indeed,
no Norwegian publishing companies – including the top-ranked university
presses – are categorized as Level 2 (Brock-Utne, 2007). Adjustments are
constantly being made to which journals qualify to be top tier (Sivertsen,
2010), but no effort is being made to change the two-level model, despite
the desire by some scholars for more levels (asserting that two levels do
not represent a sufficient degree of differentiation between journals in
their field). Other scholars suggest eliminating these levels because the
distinction between Level 1 and Level 2 sometimes appears arbitrary (see,
e.g. Rice, 2013).
Adjustment has occurred, however, in the way the system awards
points to co-authorship. Fractionalization was included from the beginning
because of the distortion inherent in counting a co-authored work multiple
times (once for each co-author) (Kyvik, 2003). Rather than each co-author
(or each institute) being able to claim one whole output, they are each given
a fraction of the available points. From 2015, fractionalization is no longer
based on a simple fraction but rather a square root (e.g. with four authors, each
would get 0.50 points rather than 0.25) in order to increase field neutrality
(i.e. ‘fairness’) by ‘correcting’ for the version that favored solo publications
(Aagaard et al., 2015; Kunnskapsdepartementet, 2015: 60). While authors in
the social sciences and humanities also gain by co-authorship being given greater value, the natural sciences gain significantly more – as was intended, according to the Ministry of Education and Research in its proposed 2016 national budget for higher education (Kunnskapsdepartementet, 2015).

Table 2.1 NPI points by publication category and level of publication channel

Category                                 Level 1   Level 2
Journal article                          1         3
Monograph                                5         8
Book chapter or article in anthology     0.7       1

The second adjustment made to the Norwegian model from 2015 is an increased incentive for scholars to cooperate internationally. When one
co-author is ‘international’ (with an institutional affiliation outside
Norway, regardless of the nationality of the author), the total point sum is
multiplied by 1.3. An additional incentive for international collaboration is
likely to further marginalize those who publish in Norwegian. It is unlikely
that international collaborating partners are able to write sufficiently well
in Norwegian, unless the partner is another Norwegian who is affiliated
with an institute outside of Norway. The effect of this adjustment, if the
incentives work as intended, will be an increase in English publication,
which may entail a reduction in Norwegian publications (Ossenblok et al., 2012).
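To make the mechanics concrete, the following sketch implements the point calculation as it is described in this section: base points taken from Table 2.1, square-root fractionalization of the institution's share of authorship, and a 1.3 multiplier for international co-authorship. The function and its data structures are our own illustration in Python, written under the assumptions stated above, not the official implementation of the NPI.

```python
import math

# Illustrative sketch of the post-2015 NPI point calculation described above.
# Base points follow Table 2.1; the function and its signature are hypothetical.
BASE_POINTS = {
    ("journal article", 1): 1.0, ("journal article", 2): 3.0,
    ("monograph", 1): 5.0,       ("monograph", 2): 8.0,
    ("book chapter", 1): 0.7,    ("book chapter", 2): 1.0,
}

def npi_points(category, level, n_authors, n_local_authors, international=False):
    """Points credited to one institution for a single publication."""
    base = BASE_POINTS[(category, level)]
    # Square-root fractionalization (from 2015): the square root of the
    # institution's share of authorship replaces the simple fraction.
    share = math.sqrt(n_local_authors / n_authors)
    points = base * share
    if international:
        # Bonus for having at least one co-author affiliated outside Norway.
        points *= 1.3
    return points

# A Level 2 journal article with four authors, one at the reporting institution:
print(npi_points("journal article", 2, n_authors=4, n_local_authors=1))   # -> 1.5
print(npi_points("journal article", 2, 4, 1, international=True))         # -> 1.95
```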
Conclusion: What is Lost in Quantification
In this chapter, we have described what we call the politics of
bibliometrics. We have argued that because bibliometrics appear in
numerical form – as a measure of citations or productivity – they give the
impression of neutrality and objectivity, although they are highly normative in their construction and usage. Bibliometrics heavily influence consequential decisions about whom to hire, promote or fund. The ability
of bibliometric indicators to legitimize some scholarly practices and
delegitimize others affects not just individual scholars or institutions, but
also the diversity of publication practices (if scholars limit their publishing
to only things that count and avoid, for example, targeting non-academic
audiences). And if scholars in the margins shift their publication language
to English and accordingly adapt the focus of their work to suit Anglo-
Saxon journals, we risk losing local knowledges (Brock-Utne, 2007; Gingras,
2014; Lillis & Curry, 2013).
Central to our discussion has been a focus on the choices made in the
construction of metrics, particularly with respect to how scholarly practices
are categorized. The nature of categorizing inevitably ‘valorizes some point
of view and silences another’ (Bowker & Leigh Star, 1999: 5–6). As such,
developing bibliometrics becomes a choice about what is to be valorized
– what can be counted as scholarly. Both GS and the Norwegian model
represent innovations with respect to inclusion of scholars in the periphery,
but they also highlight the difficulties of quantification, translating notions
of ‘scholarship,’ ‘quality’ and ‘impact’ into metrics that can be used in
the same way across disciplines and geographical contexts. Compared to
similar databases, GS applies a more inclusive rationale of ‘what counts’,
but it does not contest the assumed role that citations play in scholarship
or take into account how citation practices may differ across settings. The
Norwegian Publication Indicator also represents an innovation in attaining
‘field neutrality,’ but it has received criticism for its non-transparent way
of determining which channels should qualify as top tier (Aagaard et al.,
2015); the difficulty of attaining field neutrality is highlighted in the recent
adjustments made to the mode of fractionalization to redirect more funding to the natural sciences (Sandström & Sandström, 2009). Perhaps of greater concern for what might be lost is the adjustment adding points for international collaboration, which provides an additional incentive to publish in English
rather than Norwegian.
Unpacking the politics of bibliometric systems by examining how they
categorize and quantify scholarly activity, how they communicate power
and what kind of impact they have on scholarly publishing practices,
particularly in global terms, helps open up bibliometrics to a debate in which
more scholars may feel confident and compelled to participate. While the
need for bibliometric indicators is likely to persist in academia, our hope
is that bibliometrics will not be used uncritically, and that scholars and
policymakers will become increasingly aware of what may be lost in these
quantification processes.
Acknowledgments
The authors would like to thank the editors for their comments as well
as the participants in the research seminar at the Centre de Recherche en
Science Politique (CReSPo) of the Université Saint-Louis, Bruxelles, for
their feedback on a preliminary version of this chapter. Lynn P. Nygaard’s
research has been supported by PRIO and the Research Council of Norway.
Rocco Bellanova’s research has been carried out with the support of
the following projects: Actions de recherche concertées (ARC) – ‘Why
Regulate? Regulation, De-Regulation and Legitimacy of the EU’ (funded by
the Communauté française de Belgique); and NordSTEVA – ‘Nordic Centre
for Security Technologies and Societal Values’ (funded by NordFORSK).
References
Aagaard, K., Bloch, C. and Schneider, J.W. (2015) Impacts of performance-based research
funding systems: The case of the Norwegian Publication Indicator. Research
Evaluation 24, 106–117.
Bar-Ilan, J. (2008) Which h-index? A comparison of WoS, Scopus and Google Scholar.
Scientometrics 74 (2), 257–271.
Barton, D., Hamilton, M. and Ivanic, R. (eds) (2000) Situated Literacies: Reading and
Writing in Context. London: Routledge.
Bowker, G.C. and Leigh Star, S. (1999) Sorting Things Out: Classification and Its Consequences.
Cambridge, MA: The MIT Press.
Brock-Utne, B. (2007, Summer) Is Norwegian threatened as an academic language?
International Higher Education 15–16.
Curry, M.J. and Lillis, T. (2014) Strategies and tactics in academic knowledge production
by multilingual scholars. Education Policy Analysis Archives 22 (32), 1–23. See http://
dx.doi.org/10.14507/epaa.v22n32.2014.
De Bellis, N. (2014) History and evolution of (biblio)metrics. In B. Cronin and C.R.
Sugimoto (eds) Beyond Bibliometrics: Harnessing Multidimensional Indicators of Scholarly
Impact (pp. 23–44). Cambridge, MA: The MIT Press.
Delgado López-Cózar, E., Robinson-García, N. and Torres-Salinas, D. (2014) The
Google scholar experiment: How to index false papers and manipulate bibliometric
indicators. Journal of the Association for Information Science and Technology 65 (3),
446–454.
Desrosières, A. (2014) Le gouvernement de la cité néolibérale: quand la quantification
rétroagit sur les acteurs [The governing of the neoliberal city: When quantification
has a feedback effect on the actors]. In A. Desrosières (ed.) Prouver et gouverner. Une
analyse politique des statistiques publiques [Proving and Governing. A Political Analysis of
Public Statistics] (pp. 33–59). Paris: La Découverte.
Elsevier (2016) Content policy and selection. See https://www.elsevier.com/solutions/
scopus/content/content-policy-and-selection (accessed 17 April 2017).
Elzinga, A. (2010) New public management, science policy and the orchestration of
university research: Academic science the loser. The Journal for Transdisciplinary
Research in Southern Africa 6 (2), 307–332.
Espeland, W.N. and Stevens, M.L. (2008) A sociology of quantification. Archives
Européennes de Sociologie: European Journal of Sociology 49 (3), 401–436.
Gillespie, T. (2014) The relevance of algorithms. In T. Gillespie, P.J. Boczkowski and
K.A. Foot (eds) Media Technologies. Essays on Communication, Materiality, and Society
(pp. 167–193). Cambridge, MA: The MIT Press.
Gingras, Y. (2014) Les dérives de l’évaluation de la recherche. Du bon usage de la bibliométrie.
Paris: Raisons d'Agir. (Published by MIT Press [2016] as Bibliometrics and Research
Evaluation: Uses and Abuses.)
Google (2016) Inclusion guidelines for webmasters. See https://scholar.google.com/intl/
en/scholar/inclusion.html#overview (accessed 17 April 2017).
Gruber, T. (2014) Academic sell-out: How an obsession with metrics and rankings is
damaging academia. Journal of Marketing for Higher Education 24 (2), 165–177.
Gutwirth, S. and Christiaens, T. (2015) Les sciences et leurs problèmes: La fraude
scientifique, un moyen de diversion? [Sciences and their problems: Scientific fraud, a
diversion?]. Revue Interdisciplinaire d’Études Juridiques 74 (1), 21–49.
Harzing, A.-W. and Alakangas, S. (2016) Google Scholar, Scopus and the Web of Science:
A longitudinal and cross-disciplinary comparison. Scientometrics 106 (2), 787–804.
Hicks, D., Wouters, P., Waltman, L., de Rijcke, S. and Rafols, I. (2015) Bibliometrics: The
Leiden Manifesto for research metrics. Nature 520 (7548), 429–431.
Hug, S.E., Ochsner, M. and Daniel, H.-D. (2014) A framework to explore and develop
criteria for assessing research quality in the humanities. International Journal for
Education Law and Policy 10 (1), 55–64.
Hyland, K. (1999) Academic attribution: Citation and the construction of disciplinary
knowledge. Applied Linguistics 20 (3), 341–367.
Kunnskapsdepartementet (2015) Orientering om forslag til statsbudsjettet 2016 for
universitet og høgskolar [Orientation on the proposed national budget 2016
for universities and colleges]. See https://www.regjeringen.no/globalassets/
departementene/kd/orientering-om-forslag-til-statsbudsjettet-2016-uh.pdf#page
=60 (accessed 17 April 2017).
Kyvik, S. (2003) Changing trends in publishing behaviour among university faculty,
1980–2000. Scientometrics 58 (1), 35–48.
Lea, M.R. and Street, B.V. (2006) The ‘academic literacies’ model: Theory and
applications. Theory Into Practice 45 (4), 368–377.
Lillis, T. and Curry, M.J. (2010) Academic Writing in a Global Context: The Politics and
Practices of Publishing in English. London: Routledge.
Lillis, T. and Curry, M.J. (2013) English, scientific publishing and participation in
the global knowledge economy. In E.J. Erling and P. Seargeant (eds) English and
Development: Policy, Pedagogy, and Globalization (pp. 220–242). Bristol: Multilingual
Matters.
Lillis, T., Hewings, A., Vladimirou, D. and Curry, M.J. (2010) The geolinguistics of
English as an academic lingua franca: Citation practices across English-medium
national and English-medium international journals. International Journal of Applied
Linguistics 20 (1), 111–135.
Michels, C. and Schmoch, U. (2014) Impact of bibliometric studies on the publication
behavior of authors. Scientometrics 98 (1), 369–385.
Mongeon, P. and Paul-Hus, A. (2016) The journal coverage of Web of Science and Scopus:
A comparative analysis. Scientometrics 106 (1), 213–228.
Norwegian Association of Higher Education Institutions (2004) A Bibliometric Model
for Performance-based Budgeting of Research Institutions. See http://www.uhr.no/
documents/Rapport_fra_UHR_prosjektet_4_11_engCJS_endelig_versjon_av_hele_
oversettelsen.pdf (accessed 17 April 2017).
Nygaard, L.P. (2017) Publishing and perishing: An academic literacies framework for
investigating research productivity. Studies in Higher Education 42 (3), 519–532.
Ossenblok, T., Engels, T. and Sivertsen, G. (2012) The representation of the social
sciences and humanities in the Web of Science: A comparison of publication patterns
and incentive structures in Flanders and Norway (2005–9). Research Evaluation 21
(4), 280–290.
Pansu, P., Dubois, N. and Beauvois, J.-L. (2013) Dis-moi qui te cite et je saurai ce que tu vaux.
Que mesure vraiment la bibliométrie? [Tell Me Who You Cite, and I will Know Your Value.
What do Bibliometrics Really Measure?]. Grenoble: Presses Universitaires de Grenoble.
Piro, F.N., Aksnes, D.W. and Rørstad, K. (2013) A macro analysis of productivity
differences across fields: Challenges of measurement of scientific publishing. Journal
of the American Society for Information Science and Technology 64 (2), 307–320.
QS (2016) QS World University Rankings: Methodology. See http://www.topuniversities.
com/qs-world-university-rankings/methodology (accessed 17 April 2017).
Rice, C. (2013, Nov. 5) Do you make these 6 mistakes? A funding scheme that turns
professors into typing monkeys. Science in Balance. Blog post. See http://curt-rice.
com/2013/11/05/do-you-make-these-6-mistakes-a-funding-scheme-that-turns-
professors-into-typing-monkeys/ (accessed 17 April 2017).
Sandström, U. and Sandström, E. (2009) The field factor: Towards a metric for academic
institutions. Research Evaluation 18 (3), 243–250.
Schneider, J.W. (2009) An outline of the bibliometric indicator used for performance-
based funding of research institutions in Norway. European Political Science 8,
364–378.
Sivertsen, G. (2010) A performance indicator based on complete data for the scientific
publication output at research institutions. ISSI Newsletter 6 (1), 22–28.
Times Higher Education (2015) World University Rankings 2015–2016 methodology. See
https://www.timeshighereducation.com/news/ranking-methodology-2016 (accessed
17 April 2017).
Walters, W.H. (2014) Do article influence scores overestimate the citation impact of
social science journals in subfields that are related to higher-impact natural science
disciplines? Journal of Informetrics 8 (2), 421–430.
... Gruber noted that although the original purpose of journal metrics was for libraries to assess which journals to subscribe to, it is now for university management to measure scholarly output for recruitment and promotion purposes, and to assess funding applications (Curry, 2012;Nygaard & Bellanova, 2018). Furthermore, Atkinson-Grosjean (2006) stressed the negative effect on interdisciplinary research collaboration when funds are allocated according to purely quantifiable measures of scholarly output as researchers become less willing to collaborate in research with those with less output for fear of losing their funding. ...
... In terms of publication practices, Nygaard and Bellanova (2018) observed that the propensity to evaluate scholars primarily by means of metrics forces them to publish only in mainstream, highly-cited fields and gear their work, mostly in English, to "Anglo-Saxon audiences at the risk of losing local knowledge" (p. 24). ...
... The risks of overcompetitiveness as reported by John and Theron suggested the negative impact felt on interdisciplinary collaboration from evaluation metrics. This potential danger inherent in metrics of scholarly publication discussed in our CAE contributes to shifts in research foci toward more cited fields and delegitimization of local themes (Nygaard & Bellanova, 2018). ...
Article
Full-text available
Despite increasing demands to publish in English, publishing in private publishing houses' small number of prestige journals remains a benchmark of journal and manuscript quality. How such journals have responded to increasing demand for English language publication has been well-documented. However, the perspectives of editors working in non-prestige journals not affiliated with large, private publishing houses remain underrepresented, particularly concerning academic editorial work. To better present a diversity of editors' perceptions, this collaborative autoethnography explored the views of five applied linguistics and TESOL journal editors working in journals unaffiliated with private publishing houses. Issues explored included our respective journals' struggle to compete, such as in bibliometric assessment and maintaining quality review processes. Our explorative narratives of editorial perceptions revealed issues internal and external to journal editorial practice. Internally, 'quality' in blind and non-blind reviewing, evaluation criteria, reviewer bias, and field-specific norms of academic writing were problematized. Externally, issues of open access, author publication fees, bibliometric indexing, and our journals' positionings in their fields were raised. We believe that sharing our views through this collaborative narrativization can help broaden understanding of editorial practices and, by highlighting issues of interest to editors more broadly, can help to foster a sense of common purpose. http://www.englishscholarsbeyondborders.org/wp-content/uploads/2021/10/Adamson-et-al.pdf
... 'Excellence' may mean different things depending on whether it is in the context of ranking a university, evaluating the performance of a department within a university, distributing funding, or making hiring decisions. And measuring excellence means making difficult decisions about how to identify the discreet components of excellence and translate them into practices that can be counted (Nygaard and Bellanova 2018;De Bellis 2014). ...
... The answers to these questions matter because any decision about what to include (or not include) in such a metric will, perhaps unintentionally, legitimize some types of output and delegitimize othersthus not only measuring productivity, but also defining it and reifying notions of excellence (Moore et al. 2017;Nygaard and Bellanova 2018). For example, if only academic publications are counted, then the production of popular scientific output, or output targeted specifically at stakeholders outside academia, might be seen as less legitimate, less 'excellent'. ...
Article
Full-text available
As the importance of 'excellence' increases in higher education, so too does the importance of indicators to measure research productivity. We examine how such indicators might disproportionately benefit men by analysing extent to which the separate components of the Norwegian Publication Indicator (NPI), a bibliometric model used to distribute performance-based funding to research institutions, might amplify existing gender gaps in productivity. Drawing from Norwegian bibliometric data for 43,500 individuals, we find that each element of the indicator (weighting based on publication type, publication channel, and international collaboration, as well as fractionalization of co-authorship) has a small, but cumulative effect resulting in women on average receiving 10 per cent fewer publication points than men per publication. In other words, we see a gender gap that is not only caused by a difference in the level of production but is also amplified by the value ascribed to each publication.
... Though, as Swales (2019) points out (see also Monteiro & Hirano, 2020), there are centres within peripheries and peripheries in centres, disadvantages appear particularly acute for plurilingual EAL scholars working from regions outside traditional centres of knowledge production (Bennett, 2014;Englander & Corcoran, 2019;Hanauer, Sheridan, & Englander, 2019). Recognition of the global, neoliberal, asymmetrical market of knowledge production in which these scientists work is a necessary precursor to better understanding institutional policies and pedagogies as well as individual scientists' beliefs and practices (Gotti, 2020;Lillis & Curry, 2010;Nygaard & Bellanova, 2017). ...
Article
Research-intensive universities in the global peripheries have begun to mount English for research publication purposes (ERPP) initiatives to increase plurilingual scholars’ publishing success. Though research into pedagogical initiatives is still limited, investigations of such programs can provide researchers with a greater understanding of the broader experiences and perspectives of scholars as well as the potential impact of interventions on course participants’ scholarly writing. Answering the call for more longitudinal work in ERPP, this article outlines a small-scale, qualitative investigation of the perceived impact of an intensive ERPP course at a Mexican university on two environmental scientists’ research writing five years following course completion. Data analysis included systematic review of participant CVs, as well as semi-structured interviews with two plurilingual EAL scientists and two ERPP practitioners connected to the ERPP course. Employing a critical plurilingual lens, this article discusses findings that not only outline the perceived impact of the intervention on these scientists’ research writing at different stages of their academic trajectories, but also highlight the plurilingual nature of their evolving scholarly practices. The article culminates with data-driven suggestions for plurilingual conceptualization and enactment of scholarly writing pedagogies, policies, and research agendas.
... Though, as Swales (2019) points out (see also Monteiro & Hirano, 2020), there are centres within peripheries and peripheries in centres, disadvantages appear particularly acute for plurilingual EAL scholars working from regions outside traditional centres of knowledge production (Bennett, 2014;Englander & Corcoran, 2019;Hanauer, Sheridan, & Englander, 2019). Recognition of the global, neoliberal, asymmetrical market of knowledge production in which these scientists work is a necessary precursor to better understanding institutional policies and pedagogies as well as individual scientists' beliefs and practices (Gotti, 2020;Lillis & Curry, 2010;Nygaard & Bellanova, 2017). ...
Article
Full-text available
Research-intensive universities in the global peripheries have begun to mount English for research publication purposes (ERPP) initiatives to increase plurilingual scholars’ publishing success. Though research into pedagogical initiatives is still limited, investigations of such programs can provide researchers with a greater understanding of the broader experiences and perspectives of scholars as well as the potential impact of interventions on course participants’ scholarly writing. Answering the call for more longitudinal work in ERPP, this article outlines a small-scale, qualitative investigation of the perceived impact of an intensive ERPP course at a Mexican university on two environmental scientists’ research writing five years following course completion. Data analysis included systematic review of participant CVs, as well as semi-structured interviews with two plurilingual EAL scientists and two ERPP practitioners connected to the ERPP course. Employing a critical plurilingual lens, this article discusses findings that not only outline the perceived impact of the intervention on these scientists’ research writing at different stages of their academic trajectories, but also highlight the plurilingual nature of their evolving scholarly practices. The article culminates with data-driven suggestions for plurilingual conceptualization and enactment of scholarly writing pedagogies, policies, and research agendas.
... Citation indexing paved the way for evaluative bibliometrics, whose centrality has long been controversial in the academic world (see e.g. European Commission, 2019; Nygaard & Bellanova, 2017;Crane & Glozer, 2022). Despite long-held concerns, research evaluators and managers, such as funders and university hiring committees, continue to use metrics to quickly approximate research and researcher quality, and the commercial academic publishing industry continues to develop evaluative metrics and analytics as next generation products. ...
Article
Full-text available
Google Scholar has become an important player in the scholarly economy. Whereas typical academic publishers sell bibliometrics, analytics and ranking products, Alphabet, through Google Scholar, provides “free” tools for academic search and scholarly evaluation that have made it central to academic practice. Leveraging political imperatives for open access publishing, Google Scholar has managed to intermediate data flows between researchers, research managers and repositories, and built its system of citation counting into a unit of value that coordinates the scholarly economy. At the same time, Google Scholar’s user-friendly but opaque tools undermine certain academic norms, especially around academic autonomy and the academy’s capacity to understand how it evaluates itself.
... Precise numbers for academic publishing across all languages are hard to obtain because of how numbers are tallied. Research articles tend to be more systematically counted because they are more visibly included in evaluation metrics, while practitioner-oriented and other types of articles, book chapters, and books are less consistently tallied. 2 In addition, variations in how citation indexes or journal directories tally publications can result in very different pictures of global knowledge production being created (Nygaard & Bellanova, 2018). ...
Article
We are living in an era characterized by multilingualism, global mobility, superdiversity (Blommaert, 2010), and digital communications. Mobility and multilingualism, however, have long characterized most geolinguistic contexts, including those where monolingual ideologies have influenced the formation of contemporary nation states (Cenoz, 2013). As language is a pillar of both curriculum and instruction, in many academic spaces around the world efforts are on the rise to acknowledge the colonial origins of English, decenter the dominance of Standard English(es), and decolonize knowledge production (e.g., Bhambra et al., 2018; de Sousa Santos, 2017). Additionally, many ‘inner circle’ (Kachru, 2001) Anglophone contexts have long witnessed the centrifugal forces of multilingualism. Yet what prevails in institutional academic contexts is a centripetal pull toward what has been captured in phrases such as ‘linguistic mononormativity’ (Blommaert & Horner, 2017) or ‘Anglonormativity’ (McKinney, 2017). Nowhere is this pull more evident than in the sphere of writing for publication, relentlessly construed as an ‘English Only’ space, as exemplified in Elnathan's (2021) claim in the journal Nature : ‘English is the international language of science, for better or for worse.’
... And nobody is more acutely impacted by the dominance of English than plurilingual EAL scientists ( Flowerdew, 2019 ;McKinley and Rose, 2018 ;Politzer-Ahles et al., 2016 ). Thus conversations about international scientifi c communication should be carried out with a recognition of the global, neoliberal, asymmetrical market of knowledge production in which scientists work ( Demeter, 2019 ;Lillis and Curry, 2010 ;Nygaard and Bellanova, 2017 ). In this chapter, we consider challenges that are most acute or amplifi ed for plurilingual EAL scientists, be they in natural or social science disciplines, while refl ecting upon pedagogical interventions that might e ectively and equitably support them. ...
Chapter
In this chapter, pedagogies for supporting scientists who publish research in English as an additional language (EAL) and live outside “centre” countries are examined. We first draw attention to the burgeoning field of English for Research Publication Purposes (ERPP), where research into challenges of global scientists is rapidly expanding, highlighting the particular strengths, needs, and challenges of EAL scientists. Next, we present contrasting pedagogies for supporting EAL scientists, drawing a distinction between “critical” and “pragmatic” approaches. Drawing on the extant literature from the fields of applied linguistics, writing studies, and education—including our research into the experiences of Latin American scientists—we then present an adaptable pedagogical approach that challenges monolingual ideologies and practices in global science writing, adjudication, and support. We conclude this chapter by presenting a set of principles that promote greater equity and diversity in global scientific knowledge production by supporting EAL scientists in targeted ways that recognize both their strengths as plurilingual communicators as well as the distinct challenges they face in a complex, metric-heavy science world where English is privileged. APA Citation: Corcoran, J. N. & Englander, K. (2021). Pedagogies for Supporting Global Scientists’ Research Writing. In C. Hanganu-Bresch, M. Zerbe, G. Cutrufello, & S. Maci (Eds.) Handbook of Scientific Communication. (pp. 348 – 358) London: Routledge.
... The expression ‘publish or perish’, coined by Wilson (1942), has been used to describe the pressure endured by scientists to publish their work quickly and continuously in order to advance in their careers (Garfield, 1996). With the establishment of English as a global language of science, the expression has been adapted to ‘publish in English or perish’ (Curry & Lillis, 2004; Flowerdew, 2008, 2013; Nygaard & Bellanova, 2018). It should be noted, however, that while multilingual academics have to cope with the unfair pressure to publish in English, it is difficult to deny that the use of one global scientific language helps in networking and the exchange of ideas among academics, allowing for transnational scientific exchanges and collaborations. ...
Article
This paper examines the context of scholarly knowledge production and dissemination in Brazil by comparing the publishing practices in both Portuguese and in English of Brazilian scholars who hold a research grant, across eight fields of knowledge. Data consists of 1,874 Curricula Vitae and the analysis focused on the language, number, and genres of publications over a three-year period (2014 to 2016). The study revealed a clear contrast regarding the more frequent use of English by researchers in the ‘harder’ sciences and the preference for Portuguese by those in the ‘softer’ sciences. The results also suggested an interconnection in which scholars who published the most tended to adopt English. Multiple factors involved in the genre and language choices made by academics were analysed, such as characteristics of the work produced by each disciplinary community, the audience of the research, the type of language used, and the need to obtain research funding. This investigation can potentially inform policies and investments in Brazilian higher education and research to provide continued support specific to the needs of different disciplinary communities, as well as foster the inclusion of multilingual scholars who do not have English as their first language in the global arena of knowledge production and dissemination.
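Computationally, the CV-based tallying described in this study amounts to grouping publications by field and language and comparing the shares. The sketch below illustrates the idea with invented records; the field names, language codes, and counts are placeholders, not data from the study.

```python
# Toy sketch of tallying publication language by field from CV-style records.
# All records are invented for illustration; they do not reproduce the study's data.
from collections import Counter, defaultdict

records = [  # (field, language) pairs extracted from CVs
    ("Physics", "en"), ("Physics", "en"), ("Physics", "pt"),
    ("Linguistics", "pt"), ("Linguistics", "pt"), ("Linguistics", "en"),
]

by_field = defaultdict(Counter)
for field, lang in records:
    by_field[field][lang] += 1

for field, langs in by_field.items():
    total = sum(langs.values())
    en_share = langs["en"] / total
    print(f"{field:12s} English share: {en_share:.0%} (n={total})")
```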
Article
This review discursively addresses questions about (1) what digital genres are, in the context of genre theory and social practices, and (2) what the impact of these new-media genres may be on how we theorize and analyze genre, engage in genre-informed teaching, and, more generally, produce and interact with genre-mediated information.
Article
Concomitant with the increased pressure on scholars around the globe to publish in top-tier indexed English-language scholarly journals, the Indonesian government has imposed a strict policy obliging local scholars to publish in such journals. This policy has serious ramifications for the academic promotions, tenure, research grants and allowances of these scholars. Yet, as English has become the privileged language of global academic publication, the policy risks reinforcing linguistic hegemony in knowledge production and dissemination. Drawing upon in-depth interviews with two Indonesian professors who have ample experience in writing and publishing in the field of linguistics, this study seeks to identify the strategies they employed to de-westernize hegemonic knowledge in global academic publishing. In so doing, the article further contributes to debates over the politics of knowledge production and dissemination amid the intellectual hegemony of knowledge in academic publication.
Article
This article aims to provide a systematic and comprehensive comparison of the coverage of the three major bibliometric databases: Google Scholar, Scopus and the Web of Science. Based on a sample of 146 senior academics in five broad disciplinary areas, we therefore provide both a longitudinal and a cross-disciplinary comparison of the three databases. Our longitudinal comparison of eight data points between 2013 and 2015 shows a consistent and reasonably stable quarterly growth for both publications and citations across the three databases. This suggests that all three databases provide sufficient stability of coverage to be used for more detailed cross-disciplinary comparisons. Our cross-disciplinary comparison of the three databases includes four key research metrics (publications, citations, h-index, and hI,annual, an annualised individual h-index) and five major disciplines (Humanities, Social Sciences, Engineering, Sciences and Life Sciences). We show that both the data source and the specific metrics used change the conclusions that can be drawn from cross-disciplinary comparisons.
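The metrics compared in this study can be made concrete with a short sketch. The h-index definition below is the standard one; the hI,annual computation follows the commonly cited recipe (citations normalised by author count, the h-index of those normalised counts, divided by academic age) and should be read as an assumption rather than any database's own implementation. Function names and example figures are illustrative.

```python
# Minimal sketch of the h-index and an annualised individual h-index (hI,annual),
# under the assumptions stated above.

def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def hi_annual(citations, n_authors, career_years):
    """Assumed recipe: per-author-normalised citations -> h-index -> divided by
    the number of years since the first publication."""
    normalised = [c / a for c, a in zip(citations, n_authors)]
    return h_index(normalised) / max(career_years, 1)

# Example: five papers with citation and author counts, over a 10-year career.
cites = [120, 45, 30, 8, 2]
authors = [4, 2, 1, 3, 2]
print(h_index(cites))                 # -> 4
print(hi_annual(cites, authors, 10))  # -> 0.3
```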
Article
Bibliometric methods are used in multiple fields for a variety of purposes, namely for research evaluation. Most bibliometric analyses have in common their data sources: Thomson Reuters' Web of Science (WoS) and Elsevier's Scopus. This research compares the journal coverage of both databases in terms of fields, countries and languages, using Ulrich's extensive periodical directory as a base for comparison. Results indicate that the use of either WoS or Scopus for research evaluation may introduce biases that favor Natural Sciences and Engineering as well as Biomedical Research to the detriment of Social Sciences and Arts and Humanities. Similarly, English-language journals are overrepresented to the detriment of other languages. While both databases share these biases, their coverage differs substantially. As a consequence, the results of bibliometric analyses may vary depending on the database used.
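The coverage comparison described here reduces to simple proportions: for each field (and, analogously, each language or country), the share of journals listed in a master directory that each database indexes. The sketch below illustrates the calculation with invented counts; the field names and figures are placeholders, not results from the study.

```python
# Illustrative coverage-share calculation. The "ulrichs" totals stand in for a
# master journal directory; all numbers are invented for demonstration.

ulrichs = {"Biomedical Research": 10000, "Social Sciences": 9000,
           "Arts & Humanities": 7000}
indexed = {
    "WoS":    {"Biomedical Research": 4500, "Social Sciences": 2200,
               "Arts & Humanities": 1100},
    "Scopus": {"Biomedical Research": 6000, "Social Sciences": 3300,
               "Arts & Humanities": 1500},
}

for db, counts in indexed.items():
    for field, total in ulrichs.items():
        share = counts[field] / total  # share of the directory's journals indexed
        print(f"{db:6s} {field:20s} {share:.1%}")
```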
Article
The current discourse on research productivity (how much peer-reviewed academic output is published by faculty) is dominated by quantitative research on individual and institutional traits; implicit assumptions are that academic writing is a predominately cognitive activity, and that lack of productivity represents some kind of deficiency. Introducing the academic literacies approach to this debate brings issues of identity, multiple communities, and different institutional expectations (at the local, national, and international levels) to the foreground. I argue that academics often juggle competing demands that create various sites of negotiation in the production of academic writing: the results of these negotiations can have a direct impact on what kind of research output is produced, and how much it ‘counts’. Drawing from research on the Peace Research Institute Oslo (PRIO), this article demonstrates how a theoretical framework based on academic literacies can be used to investigate research productivity outcomes in specific academic settings.
Article
In the process of developing new tools for measuring and assessing research quality in the humanities, many challenges emerge, such as technical problems (e.g., building publication databases, capturing social impact) and scholars’ opposition to measuring research performance. This paper focuses on scholars’ opposition and presents the four most crucial objections of humanities scholars regarding the measurement and assessment of research quality. We suggest a framework to explore and develop quality criteria and indicators that considers scholars’ objections. Finally, we outline an empirical procedure, including Repertory Grid interviews and a Delphi survey, to implement the framework.
Article
There has been a growing use of performance-based research funding systems (PRFS) as a policy tool. With the introduction of the Publication Indicator in 2004, Norway joined this international trend in which the allocation of basic funds is increasingly linked to performance indicators. The purpose of this article is to present and discuss the main results of a recent evaluation of the Norwegian Publication Indicator, which examines the Indicator’s impact on publishing patterns, its properties, and how it has functioned in practice. This includes both a broad range of potential effects such as the Indicator’s impact on the quantity and the quality of publications, Norwegian language publishing, and length of articles and monographs. It also includes an examination of properties such as the Indicator’s legitimacy and transparency, how it functions as a measure of research performance across different fields, its use as a management tool, and how the system is organized and administrated in practice. In examining these questions, the article draws on a number of different data sources, including large-scale surveys of both researchers and research managers, multilevel case studies, and bibliometric analysis. The article concludes with a discussion of the implications of the analysis both for further development of the Norwegian Model and for PRFS in general.
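To make concrete how a publication indicator of this kind links output to the allocation of funds, the sketch below converts a single publication into points. The weights and the square-root author fraction follow commonly cited descriptions of the Norwegian model, but they are assumptions here and may not match the current official rules.

```python
# Rough sketch of publication points in a performance-based funding indicator.
# Weights and the square-root author-fraction rule are assumed, not authoritative.
import math

WEIGHTS = {  # (publication type, channel level) -> points
    ("article", 1): 1.0, ("article", 2): 3.0,
    ("chapter", 1): 0.7, ("chapter", 2): 1.0,
    ("monograph", 1): 5.0, ("monograph", 2): 8.0,
}

def publication_points(pub_type, level, inst_authors, total_authors):
    """Points credited to one institution for one publication."""
    fraction = inst_authors / total_authors
    return WEIGHTS[(pub_type, level)] * math.sqrt(fraction)

# Example: a level-2 journal article with 1 of 4 authors at the institution.
print(round(publication_points("article", 2, 1, 4), 2))  # -> 1.5
```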