Validation of Journal Impact Metrics of Web of Science and Scopus
Syed Rahmatullah Shah
University of Boras, Sweden
University of the Punjab, Lahore, Pakistan
Email: rahmatgee@yahoo.com
Khalid Mahmood
University of the Punjab, Lahore, Pakistan
Email: khalid.im@pu.edu.pk
Citation based metrics are widely used to assess the impact of
research published in journals. This paper presents the results of a
research study to verify the accuracy of data and calculations of
journal impact metrics presented in Web of Science (WoS) and
Scopus in the case of three journals of information and library
science. Data collected from the websites of the journals were compared with the data in the
two citation extended databases. The study manually calculated the Journal
Impact Factor (JIF) and the Impact per Publication (IPP) in accordance with
the formulas given by the databases. Data were also collected from Google Scholar
to draw a comparison. The study found discrepancies between the two sets of data and
bibliometric values, i.e., the systematic values presented in WoS and Scopus and
those calculated in this study. The commercial databases presented inflated measures
based on fabricated or erroneous data. The study is of practical importance to
researchers, universities, and research financing bodies that rely on these
bibliometric indicators for performance measurement, assessment,
and evaluation of research quality and of researchers.
Keywords: Citation analysis; Journal rankings; Journal Impact Factor; Impact per
Publication; Scholarly communication; Bibliometrics.
INTRODUCTION
Citations are a valuable source for researchers, librarians, publishers and
scientific and academic organizations. They use citations as a measure for quality of
research output and evaluation of a research journal (Moed, 2005). Researchers use
citations to look into the flow and development of ideas in their research. They
check accuracy, originality, authenticity, influence, and other relevant facts about
ideas related to their own studies (Garfield, 1964; Salton, 1963). Researchers
strengthen their ideas on the basis of citations, highlighting existing research
and the gaps to be filled by their own studies (Moed, 2005). Citations
also serve the purpose of lending intellectual credits to the real contributors in
research. They also safeguard the rights of the researcher who originally initiated or
developed an idea (Day, 2014; Merton, 1957). Librarians have a long history of
using citations as a tool in making comparisons of two or more published journals
covering the same discipline or subject category. Using citations in this way helps them
make the best use of limited financial resources: librarians draw on citations to compare
journals and decide which, from the wide variety available in a particular subject, should
be acquired (Moed, 2005).
Publishers started to issue citation data, much like their product
catalogues, to help librarians make quick decisions. Citation data
produced by publishers then came into use among researchers, opening new horizons
for both publishers and researchers (De Bellis, 2014). Information and
communication technologies, such as internet and web technologies, added value
to production and utilization of citations data. Various reference and citation
extended databases, such as Web of Science, Scopus and Google Scholar, emerged
to facilitate researchers and librarians. Rather than limiting their role to citation data,
these web-based automated systems introduced a number of other metadata-related
solutions such as research impact metrics and indices. The use of these
numeric measures of research impact drew the attention of research financing
authorities, administration of universities and research organizations, research
funding, awards and reward councils, selection boards, appointing authorities, and
others of similar character and role. These new beneficiaries used citation data
as a tool for measuring researchers' performance, for assessing research, and for
evaluating journals (Cronin, 2014).
In spite of the continued, widespread use of citation-based measures, there is
plenty of literature that criticizes the application of such metrics to evaluate the
quality of research. In addition to failing to distinguish positive from negative
citations, the use of citations has serious disadvantages for researchers, publishers,
institutions, and research itself (Wouters, 2014). Researchers face the stress of
publishing more research articles as proof of their performance. Their financial
benefits, such as increments, awards, job tenures, new appointments, and
promotions, are unduly linked to these citation based measures (De Bellis, 2014;
Wouters, 2014). The final effect manifests itself in the form of researchers’
employing smart tactics to counter citation issues at the cost of research and
knowledge (Wouters, 2014). The research publications industry faces issues related
to franchising and monopolizing trends (Cronin, 2014).
Editors of research journals are forced to publish research in a strategic way.
Their efforts at survival and promotion open them up to biases and to a questionable
publication system. Academically focused institutions lag behind in securing
competitive public funds; therefore, institutions increasingly become
research focused to strengthen their position in the race for research and
development funding (Wouters, 2014). Citation impact and other metrics
are calculated on the basis of ‘citable items’ in Web of Science (WoS) and ‘citable
documents’ in Scopus (Nelhans, 2014). However, neither empirical studies nor the
research literature support publication counts, citation counts, or calculations based
on these numbers as a suitable tool for measuring the quality of research, the
performance of researchers, or the resulting financial benefits.
Many earlier studies compared features offered by various citation extended
databases (Bergman, 2012). There are also studies about the practical utility of
these databases for a single information source (Bar-Ilan, 2010), and about the
practical aspects of individual databases. These studies discussed various policy and
methodological issues relevant to impact measures, assuming ideal conduct on the part
of the citation databases. Some
authors highlighted misconduct in these databases (Seglen, 1997). However,
researchers have rarely endeavoured to validate the treatment of data by these
databases, or to demonstrate errors or malpractice on their part, empirically and
in a transparent and verifiable way. The present study is an attempt to fill this gap
in research literature.
Key Concepts
Web of Science quality measures. Journal Impact Factor (JIF) and 5-year JIF
are popular quality measures in Web of Science (WoS). These measures are based
on counts of citations received in the preceding years. These citation
counts are limited to journals indexed in Web of Science, regardless of whether the
citations come from good or poor quality journals ("The Thomson Reuters Impact
Factor," 1994). The Journal Impact Factor (JIF) is calculated as follows:
2014 Impact Factor of a journal = A/B
Numerator A = number of times all items published in that journal in 2012 and 2013
were cited by WoS-indexed publications in 2014
Denominator B = number of 'citable items' published by that journal in
2012 and 2013 ("The Thomson Reuters Impact Factor," 1994).
Similarly, the 5-year Journal Impact Factor is calculated as:
5-year Impact Factor of a journal in 2014 = a/b
Numerator a = number of citations in 2014 to articles published in 2009-2013
Denominator b = number of articles published in 2009-2013 ("The
Thomson Reuters Impact Factor," 1994).
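Both ratios reduce to citations in the census year divided by items in the publication window. A minimal sketch of this arithmetic follows (Python, with illustrative numbers only; the function and variable names are ours, not Thomson Reuters'):

```python
def impact_factor(citations_in_census_year: int, citable_items_in_window: int) -> float:
    """Cites-per-item ratio used by both the JIF and the 5-year JIF."""
    return citations_in_census_year / citable_items_in_window

# JIF for 2014: citations in 2014 to items published in 2012-2013,
# divided by the citable items published in 2012-2013 (illustrative numbers).
jif_2014 = impact_factor(citations_in_census_year=150, citable_items_in_window=60)    # 2.5

# 5-year JIF for 2014: citations in 2014 to articles published in 2009-2013,
# divided by the articles published in 2009-2013 (illustrative numbers).
jif5_2014 = impact_factor(citations_in_census_year=400, citable_items_in_window=180)  # about 2.22

print(jif_2014, jif5_2014)
```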
Scopus quality measure. Impact per Publication (IPP) is one of the popular
quality measures in Scopus. It forms the basis of the Source Normalized Impact
per Paper (SNIP), which is used as an alternative to the WoS Journal Impact Factor
(Leydesdorff & Opthof, 2010). It is the ratio of citations to the number of published
papers within the Scopus indexed publications ("About Impact per Publication
(IPP)," 2015). The formula for calculating IPP is given below:
IPP for the year 2014 = X/Y
Numerator X = number of citations in 2014 to citable documents published in 2011-2013
Denominator Y = number of citable documents published in 2011-2013.
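A corresponding sketch for the IPP ratio (again with illustrative numbers and our own names; this is not Elsevier's implementation):

```python
def impact_per_publication(citations_2014_to_3y_docs: int, citable_docs_3y: int) -> float:
    """IPP for 2014: citations in 2014 to citable documents published in 2011-2013,
    divided by the number of citable documents published in 2011-2013."""
    return citations_2014_to_3y_docs / citable_docs_3y

print(impact_per_publication(citations_2014_to_3y_docs=90, citable_docs_3y=75))  # 1.2
```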
Citable item/citable document. 'Citable item' in WoS and 'citable document'
in Scopus serve the same purpose in the two databases. WoS considers articles
(research articles) and reviews as citable items, so the Journal Citation Reports (JCR)
of WoS count only articles and reviews. Editorials, letters, news items, and
meeting abstracts are excluded from the JIF calculations because they are not
generally cited ("Journal Citation Reports," 2012). Scopus additionally includes conference
papers; therefore, articles, reviews, and conference
papers are citable documents in Scopus ("Journal Rankings," 2015).
LITERATURE REVIEW
Many studies mentioned systematic misconduct and unethical practices on
the part of reference and citation extended databases. Seglen (1997) pointed out
the wrongful inclusion of citations to non-citable items in measuring the impact factor
of research journals in JCR. He also mentioned an undue favour given to the literature of
some disciplines in measuring the JIF, in addition to biases of language and the
over-representation of American literature. The PLoS Medicine Editors ("The Impact Factor
Game," 2006) described impact factor calculation as an unscientific, arbitrary,
and hidden process. This process left enough room for editors to decrease the
number of citable items, which ultimately increased the impact factor of the journal.
These editors contended that Thomson Reuters was not accountable to anybody for
these manipulations in its completely non-transparent system. The editors
stated, “during discussions with Thomson Scientific... it became clear that the
process of determining a journal’s impact factor is unscientific and arbitrary... we
came to realize that Thomson Scientific has no explicit process for deciding which
articles other than original research articles it deems as citable. We conclude that
science is currently rated by a process that is itself unscientific, subjective, and
secretive” (p. 707). Carrió (2008) also pointed out that decisions about citable items,
made from hidden data, were at the discretion of Thomson Reuters' officials.
Rossner, Van Epps and Hill (2007) contacted Thomson Scientific to inquire
about a discrepancy between the data for a particular journal available in Web of Science
and the data used for calculating that journal's impact factor. They failed to
access the actual data used for the impact factor. They concluded that scientists
should not rely on a measure which was based on hidden data—in contrast to the
basic principles of scientific inquiry. Binswanger (2014) was of the view that “a de
facto monopoly for the calculation of impact factors... enables Thomson Scientific
to sell its secretly fabricated Impact Factors to academic institutions at a high price”
(p. 61). Brumback (2009) opined that “scientists should be outraged that the worth
of science is being measured by a secretive proprietary metric that as often
destroys as much as it aids careers and scientific initiatives” (p. 932).
Monastersky (2005) pointed out unethical practices by editors to increase the
impact factor of their journals. He stated that, in addition to editors' undue
managerial tactics, Thomson Reuters' management team modified the numerator and
denominator values when calculating the impact factor. Published citable items are put
into non-citable document categories, which reduces the denominator and
increases the impact. If any of these documents is cited, its citation is still added to
the numerator, which increases the impact factor further. Thus, both an
increase in the number of citations and a decrease in the number of citable items raise
the impact factor of a research journal. Many researchers have repeatedly raised their voices against this
erroneous and unethical practice. (A considerable number of representative papers
include Brumback, 2008; Campbell, 2008; Chew, Villanueva, & Van Der Weyden,
2007; Dong, Loh, & Mondry, 2005; Falagas & Alexiou, 2008; Frandsen, 2008; Glänzel
& Moed, 2002; Jasco, 2001; Kumar, 2010 Law, 2012; Martin, 2016; Moed, Van
Leeuwen, & Reedijk, 1999; Rousseau, 2012; Sevinc, 2004; Simons, 2008; Smart,
2015; Van Leeuwen, Moed, & Reedijk, 1999; Whitehouse, 2001; Wolthoff, Lee, &
Ghohestani, 2011; Zupanc, 2014).
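A simple hypothetical example illustrates the arithmetic behind this concern: a journal receiving 100 citations to 50 citable items has an impact factor of 100/50 = 2.0; if 10 of those items are reclassified as non-citable while the citations to them continue to be counted, the same journal scores 100/40 = 2.5.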
We could find three studies that tried to audit the values of JCR impact
factor. Golubic, Rudes, Kovacic, Marusic, and Marusic (2008) collected article and
citation data from Web of Science for four journals from different disciplines,
including Nature, and compared it with the number of citations and citable articles
in JCR. They found that “items classified as non-citable items by WoS, and thus not
included in the denominator of the IF equation, received a significant number of
citations, which are included in the numerator of the IF equation” (p. 45). When
they put their data into the impact factor formula, the values decreased for all high-
ranked and middle-ranked journals (between 12.2% and 32.2%).
Wu, Fu and Rousseau (2008) used data collected from WoS to
predict 2007 impact factors (IFs) for several journals, such as Nature, Science,
Learned Publishing, and some library and information science journals. In most
cases they found lower values for the calculated impact factor than those
officially released by JCR. Law and Li (2015) selected three journals in the field of
tourism and compared the number of citable articles given in JCR with that on the
publisher's website (Sciencedirect.com). They found that JCR used a smaller number
of citable articles for the calculation of impact factors than the actual
number.
The discrepancies are likely due to the differences in data used. Another
possibility for the discrepancy is that ScienceDirect used a categorization that
is different from that used by Thomson Reuters, and that Thomson Reuters
used a subjective and inconsistent way of categorization. Drawing on the
findings of this study, Thomson Reuters could, and probably should, publish
their categorization approach to make their IFs more credible (p. 21).
STATEMENT OF THE PROBLEM
Citation based quantitative metrics are widely used as surrogates for
determining the quality of research published in journals. A large number of
previous researchers have found errors and malpractices used to manipulate the
calculation of these measures in order to project the journals as carriers of good
quality research. Journal editors and the staff of citation extended databases have
been involved in this unethical practice. However, very few studies have audited the
values of the impact measures released by these databases with the help of independent
data.
This study empirically validates research impact measures presented by Web
of Science and Scopus. It investigates the authenticity, reliability, and
trustworthiness of the quality measures. This research is an attempt to check and
highlight, in a transparent as well as in a verifiable way, if there is any systematic
misconduct in research impact measures presented by these two reference and
citation extended databases. The primary research question addressed in the study
was whether the data and calculations of journal impact metrics are accurately
presented in WoS and Scopus in the case of three journals of information and
library science.
METHODOLOGY
In order to validate the journal quality measures provided by citation
extended databases, we decided to compare the values with those calculated
manually by us. We selected two databases, i.e., WoS and Scopus, and three
research journals for this study. The journals were selected from WoS because its
coverage of journal titles is more limited than that of Scopus. The subject category
'Information Science and Library Science' was selected from the Web of Science (WoS)
JCR index. Eighty-seven research journals were indexed in this category. Three
research journals were selected – one with the highest rank position, MIS Quarterly
(USA, ISSN: 0276-7783, JCR rank: 1) and two from lower rank positions, Library and
Information Science (Japan, ISSN: 0373-4447, JCR rank: 69) and Malaysian Journal of
Library and Information Science (Malaysia, ISSN: 1394-6234, JCR rank: 71).
Statistical data regarding quality measures were collected from citation databases.
Data regarding citable items/documents were collected manually from official
websites of respective journals, and data regarding citations were manually
counted from the respective citing databases – WoS and Scopus. We used Microsoft
Excel to calculate our own Journal Impact Factor, 5-year Journal Impact Factor, and
Impact per Publication (IPP).
In addition to the comparison of WoS and Scopus, we collected five years of citation
data from Google Scholar, following the pattern of WoS and Scopus, using the
Publish or Perish software (Harzing, 2007). Finally, the three quality measures were
calculated on the basis of Google Scholar data, but using the formulas of WoS and
Scopus.
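The recalculation and comparison step can be summarised in a short sketch (Python rather than the Excel sheets actually used; the journal name and counts below are placeholders, not the study's data):

```python
def ratio(citations: int, items: int) -> float:
    """Cites-per-item ratio underlying the JIF, 5-year JIF, and IPP."""
    return citations / items

# Placeholder counts: citable items were counted from the journal's website,
# citations were counted manually in the citing database (WoS or Scopus).
manual_counts = {
    "Example Journal": {"citations": 12, "citable_items": 40, "official_value": 0.50},
}

for journal, d in manual_counts.items():
    recalculated = ratio(d["citations"], d["citable_items"])
    print(f'{journal}: recalculated = {recalculated:.3f}, '
          f'official = {d["official_value"]:.3f}, '
          f'difference = {recalculated - d["official_value"]:+.3f}')
```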
RESULTS & DISCUSSION
Journal Impact Factor (JIF) and 5-year JIF values for the three journals as per Journal
Citation Reports (JCR) are shown in Table 1. These calculations are claimed to be the
output of specific software used by Thomson Reuters; the data are therefore systematically
generated and the results are based on that data set. Table 2 presents the manual
calculations of JIF and 5-year JIF for the same journals. Although the results
in Table 1 and Table 2 are for the same journals and the same time period,
there are notable variations both in the data involved in
calculating the JIF and in the final metrics.
Table 1
Systematic data from Journal Citation Reports (JCR) 2014

Journal                                                 Total cites   Citable items    JIF     5-Year JIF
Library and Information Science                               19             4        0.278      0.173
Malaysian Journal of Library and Information Science          90            20        0.238      0.455
MIS Quarterly                                               9,600            54        5.311      8.490
It was observed that none of these three journals had any missing issue in
the years under study; missing data therefore cannot be assumed to be the reason for the
variations in the data sets and the resulting metrics. The first issue concerns the simple
calculation of JIF from the values given in Table 1: these calculations are wrong for
all the journals (for instance, the 19 total cites and 4 citable items listed for Library
and Information Science do not yield the reported JIF of 0.278). Secondly, in JIF
calculations, any increase in citations (the numerator) and any decrease in citable items
(the denominator) raise the JIF and 5-year JIF scores. The total cites and citable items
in Tables 1 and 2 are significantly different. For instance, Library and Information Science
is a semi-annual journal that published four issues in two years (2012-2013). The
systematic data in Table 1 show only four citable items with 19 citations across all WoS
indexed journals in 2014, whereas the calculations for this study, shown in Table 2, found
18 citable items with just one citation across all WoS indexed journals in 2014. These
variations in the data sets completely change the calculated JIF. The difference between
a JIF of 0.056 and one of 0.278 (roughly a five-fold increase) in a social sciences subject
cannot be justified by the WoS quality measures. The situation is the same for the other
two journals. Furthermore, Table 1 presents a very low number of citable items and a very
high number of citations for all journals as compared to Table 2, and the result is the
inflated JIF scores presented by JCR.
The values of Impact per Publication (IPP) as per the data provided by Scopus are
given in Table 3. These calculations are the output of the software used by
Elsevier for the Scopus quality measures. Scopus provides raw data on its website for
the calculation of journal and publication impact measures. As explained by Scopus,
these citation and document data are updated periodically.
Table 2
Empirical data collected from journal websites and WoS

Journal                                                 Cites 2012   Cites 2013   5-yr cites (2009-2013)   Citable items (2Y) B   Citable items (5Y) b   JIF = A/B           5-year JIF = a/b
Library and Information Science                               0            1                4                       18                     52           1/18 = 0.056        4/52 = 0.077
Malaysian Journal of Library and Information Science          5            4               44                       42                    112           9/42 = 0.214        44/112 = 0.393
MIS Quarterly                                                432          213            1,900                      122                    243           645/122 = 5.286     1,900/243 = 7.819
Table 3 presents the data as of the June 24, 2015 update ("Compare Journals,"
2015). Table 4 gives the manual calculations of the Scopus quality indicator, Impact
per Publication (IPP), using the same method as Scopus. The results in Table 3 differ
from the results in Table 4, mirroring the situation observed above for the WoS
quality measures.
Table 3
Systematic data from Scopus ("Compare Journals," 2015)

Journal                                                 Total Doc. 2014   Total Doc. (3Y)   Citable Doc. (3Y) = Y   Total Cites (3Y) = X   IPP = X/Y
Library and Information Science                                16                67                  62                      12              0.117
Malaysian Journal of Library and Information Science           20                70                  70                      51              0.614
MIS Quarterly                                                    6               178                 171                   2,059              7.228
Scopus calculates Impact per Publication (IPP) as the ratio of three years of
citations to the number of citable documents. Table 3 shows that the systematic
calculations from the data given through official resources are wrong for all the
journals under study (for instance, the 12 cites and 62 citable documents listed for
Library and Information Science do not yield the reported IPP of 0.117). Comparing the
systematic results with the manual results, there are differences between the official
Scopus values and the manual calculations. For instance, citations in 2014, across all
Scopus indexed journals, to the three-year documents (2011-2013) of Library and
Information Science were 12 according to official Scopus resources but only three
according to the manual count. Moreover, the citable documents in the three-year period
for this journal were 62 according to official Scopus data but 27 according to the manual
count. A similar situation emerged for the other journals. Although the problem of
inflated IPP is not seen in Scopus, one cannot depend on these erroneous calculations.
Table 4
Empirical data from journal websites and Scopus

Journal                                                 Cites 2011   Cites 2012   Cites 2013   Citable Docs. 2011   Citable Docs. 2012   Citable Docs. 2013   IPP = X/Y
Library and Information Science                               2            0            1              9                    7                   11           3/27 = 0.111
Malaysian Journal of Library and Information Science         36           11            6             28                   20                   22           53/70 = 0.757
MIS Quarterly                                                744          777          507             48                   61                   61           2,028/170 = 11.929
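The manual IPP values in Table 4 can be reproduced directly from its yearly counts; a minimal sketch (Python, with our own variable names):

```python
# Yearly counts from Table 4: citations in 2014 to documents published in
# 2011-2013, and citable documents published in 2011-2013.
table4 = {
    "Library and Information Science": ([2, 0, 1], [9, 7, 11]),
    "Malaysian Journal of Library and Information Science": ([36, 11, 6], [28, 20, 22]),
    "MIS Quarterly": ([744, 777, 507], [48, 61, 61]),
}

for journal, (cites, docs) in table4.items():
    ipp = sum(cites) / sum(docs)
    print(f"{journal}: IPP = {sum(cites)}/{sum(docs)} = {ipp:.3f}")
# Prints 3/27 = 0.111, 53/70 = 0.757, and 2028/170 = 11.929, matching Table 4.
```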
Table 5
Empirical data from Google Scholar

                                                              Citable items/Doc.                        Citations
Journal                                                 2009   2010   2011   2012   2013     2009   2010   2011   2012   2013
Library and Information Science                          14     11      9      7     11        7      1      3      0      2
Malaysian Journal of Library and Information Science     18     24     28     20     22       73     52     63     31     15
MIS Quarterly                                             36     37     48     61     61      952  1,372  1,546  1,403    808
Data in Table 5 came from Google Scholar and the official websites of the respective
journals. The number of citations was calculated using the Publish or Perish
software (Harzing, 2007). JIF, 5-year JIF, and IPP were then calculated with the
WoS and Scopus formulas; the results are presented in Table 6.
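As a check on these values, the following sketch recomputes the three metrics for Library and Information Science from the Table 5 counts, on the assumption that each 'Citations' column in Table 5 gives the total Google Scholar citations to the items published in that year:

```python
# Table 5 counts for Library and Information Science, by publication year.
citable_items = {2009: 14, 2010: 11, 2011: 9, 2012: 7, 2013: 11}
citations     = {2009: 7,  2010: 1,  2011: 3, 2012: 0, 2013: 2}

def window_ratio(first_year: int, last_year: int) -> float:
    """Sum of citations over sum of citable items for a window of publication years."""
    years = range(first_year, last_year + 1)
    return sum(citations[y] for y in years) / sum(citable_items[y] for y in years)

print(round(window_ratio(2012, 2013), 3))  # JIF-style window:    2/18 = 0.111
print(round(window_ratio(2009, 2013), 3))  # 5-year window:      13/52 = 0.250
print(round(window_ratio(2011, 2013), 3))  # IPP-style window:    5/27 = 0.185
```

These ratios reproduce the 0.111, 0.250, and 0.185 reported for this journal in Table 6.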
Unlike WoS and Scopus, Google Scholar has wider coverage of documents and also
counts some types of sources that are out of scope for both WoS and Scopus. The three
journal quality metrics based on Google Scholar data (Table 6) therefore show how
broader document coverage, and the larger citation counts it produces over a given
period, affects these measures. It was observed that, in contrast to the official data
sets and results (Tables 1 and 3), the manual calculations (Tables 2 and 4) are broadly
similar to the results produced mechanically from Google Scholar.
If we take Library and Information Science as an example, its WoS JIF is 0.278
(Table 1). This impact factor cannot be justified even on the basis of Google
Scholar data, which count all possible citations from a broader sphere than WoS;
even then, the impact factor of Library and Information Science is much lower (i.e.,
0.111). By contrast, the manual calculation of the impact factor for Library and
Information Science in this study gives a JIF of 0.056, which is closer to the JIF based
on Google Scholar data. The result is similar for the Scopus-based and manual IPP
calculations for all journals.
Table 6
JIF and IPP scores based on Google Scholar data

Journal                                                   JIF      5-year JIF     IPP
Library and Information Science                          0.111       0.250       0.185
Malaysian Journal of Library and Information Science     1.045       2.089       1.557
MIS Quarterly                                            18.123      25.025      53.671
The data from Web of Science (Tables 1 and 2) make it clear that, for all three
research journals, the given number of citations (Table 1) is much higher than the
actual number of citations (Table 2). Similarly, the given number of citable items
is significantly lower than the actual number of citable items. Hence, the impact
factor scores are inflated. The Scopus-based data show that the given number of
citations for each of these journals (Table 3) was higher than the actual number of
citations (Table 4), and the given number of citable documents (Table 3) is lower
than the actual number of citable documents (Table 4). The results have been
manipulated in the same manner. These findings indicate that the quality metrics of
Web of Science and Scopus are fabricated rather than being tools for the impartial
calculation and presentation of facts, as is generally assumed in the research community.
The findings of the present study are in conformity with those of Golubic et
al. (2008), Law and Li (2015), and Wu, Fu and Rousseau (2008), namely that WoS
manipulates data to show higher impact factor values for journals. The calculation of
journal quality metrics based on data from comparatively new citation
extended databases, i.e., Scopus and Google Scholar, is a unique strength of this
study. This study strengthens the conclusions of previous studies (PLoS
Medicine Editors, 2006; Rossner, Van Epps, & Hill, 2007; Seglen, 1997) and confirms
that Thomson Reuters still continues its practice of manipulating citation data.
Although the staff of Thomson Reuters claimed that the impact factor was accurate and
consistent “due to its concentration on a simple calculation based on data that are
fully visible in the Web of Science” (McVeigh & Mann, 2009, p. 1109), the
findings of previous studies as well as the present study disprove this statement.
Discrepancies have also been found in the Scopus calculations. This study proves
that the use of fake citation numbers is a common practice in impact factor
calculation, based on illogical, unethical, and unscientific practices. Editorial material
is usually undervalued and treated as non-citable for the purposes of the denominator,
while all citations to this material are counted in the numerator. A simple solution to
this discrepancy is to include editorial material fully in research assessment procedures,
as suggested by Van Leeuwen, Costas, Calero-Medina, and Visser (2013).
Issues such as discretionary decisions, undisclosed data, and non-replicable
calculations must not be acceptable to stakeholders. Web of Science and Scopus
have published their calculation criteria and the document types they
use. What is citable and what is not citable is decided within research itself. The
document types that WoS or Scopus count can be delimited using the search options
available on the websites of both databases, and impact factors or
impact per publication can be calculated by anybody. The claimed impact
factor system is therefore transparent in itself, but the declared impact factor values
are problematic and can be contested, or challenged in a proper way, wherever they
are of serious concern.
Conclusion, Limitations, Implications and Recommendations for Future Research
This study has some theoretical and practical implications. On the theoretical
side, it will stimulate further research regarding the assessment, evaluation, and quality
measurement of research. Likewise, this study may help draw researchers' attention to
their exploitation in the name of quality scores, high productivity, and brand-oriented
or franchised publications. It may also help to highlight efforts at business promotion,
and industrialized thinking about research, rather than the promotion of real knowledge
and science for real development. Practically, this research may help librarians, policy
makers, information analysts, bibliometricians, and researchers to find their way in
contributing knowledge rather than being drawn into marketing and publicity scenarios
designed by the corporate sector of the publishing industry. This study will also
stimulate further research exploring contradictions in the policies and practices of
prominent actors such as Thomson Reuters and Elsevier. It is also suggested that this
study should be replicated with larger sets of journals in other subject areas.
Bibliometric indicators are of a high value for research and for the scientific
contribution to knowledge. Reference and citation extended databases have an
added value to the research process. Unfortunately, bibliometric indicators have
been used as performative measures and evaluation tools by the administration of
academia and research financing bodies over the last decade. The research
community has been badly affected by these misleading impact metrics.
The drive for publishing and for productivity with high impact scores has diverted
researchers' attention from contributing to knowledge. They have shifted their focus
from knowledge to tactical productivity in order to cope with, and counter, the awards,
rewards, and promotion systems that have unduly emerged around these metrics. This
shift in researchers' objectives has, in turn, greatly promoted the publishing industry.
The unavailability of more appropriate measures of one's performance and of the value
of research lends support to the existing numeric impact system. To guard against
exploitation through numerical impact measures, it is a challenge and a research task
for scholars to come up with a justifiable, reliable, consistent, and transparent system
for evaluating researchers' performance, as well as the qualitative value of one's
scientific contribution in a particular field, discipline, or research area.
In the present study, the manually counted numbers of citations and citable
items/documents differ from the numbers presented by WoS and by Scopus. It was
not clear which articles, reviews, and conference papers were not counted as citable
items in these two citation databases. It was also unclear why other items or documents
that the research community regards as citable were excluded by these databases. Both
WoS and Scopus continuously add journals to their indexes; the inclusion of any new
journal in the WoS or Scopus system therefore changes the data set and the results
presented in this study. Furthermore, because it forms the denominator, the number of
citable items/documents has a considerable effect on the final results. A general
understanding of articles, reviews, and conference papers was therefore adopted in this
research, and the authors used their own subjective judgment in deciding what counted
as a citable item. This is a limitation of the study. Another limitation is that the study
was restricted to only three journals; for generalization of the results, more studies
with larger sets of journals are needed.
REFERENCES
Bar-Ilan, J. (2010). Citations to the “Introduction to Informetrics” indexed by WOS,
Scopus and Google Scholar. Scientometrics, 82(3), 495-506.
Bergman, E. M. (2012). Finding citations to social work literature: The relative
benefits of using Web of Science, Scopus, or Google Scholar. The Journal of
Academic Librarianship, 38(6), 370-379.
Binswanger, M. (2014). Excellence by nonsense: The competition for publications in
modern science. In S. Bartling, & S. Friesike (Eds.), Opening Science (pp. 49-
72). Springer International.
Cronin, B. (2014). Scholars and scripts, spoors and scores. In B. Cronin, & C. R.
Sugimoto (Eds.), Beyond bibliometrics: Harnessing multidimensional
indicators of scholarly impact (pp. 3-21). MIT Press.
Brumback, R. A. (2008). Worshiping false idols: The impact factor dilemma. Journal
of Child Neurology, 23(4), 365-367.
Brumback, R. A. (2009). Impact Factor: Let's be unreasonable! Epidemiology, 20(6),
932-933.
Campbell, P. (2008). Escape from the Impact Factor. Ethics in Science and
Environmental Politics, 8(1), 5-6.
Carrió, I. (2008). Of impact, metrics and ethics. European Journal of Nuclear
Medicine and Molecular Imaging, 35(6), 1049-1050.
Chew, M., Villanueva, E. V., & Van Der Weyden, M. B. (2007). Life and times of the
Impact Factor: Retrospective analysis of trends for seven medical journals
(1994-2005) and their editors' views. Journal of the Royal Society of
Medicine, 100(3), 142-150.
Day, R. E. (2014). The data - It is me! In B. Cronin, & C. R. Sugimoto (Eds.), Beyond
bibliometrics: Harnessing multidimensional indicators of scholarly impact (pp.
67-84). MIT Press.
De Bellis, N. (2014). History and evaluation of (biblio)metrics. In B. Cronin, & C. R.
Sugimoto (Eds.), Beyond Bibliometrics: Harnessing multidimensional
indicators of scholarly impact (pp. 23-44). MIT Press.
Dong, P., Loh, M., & Mondry, A. (2005). The "impact factor" revisited. Biomedical
Digital Libraries, 2(7), 1-8.
Elsevier. (2015). About Impact per Publication (IPP). Retrieved August 23, 2015, from
http://www.journalmetrics.com/ipp.php
Elsevier. (2015). Compare journals. Retrieved August 23, 2015, from
http://www-scopus-com.lib.costello.pub.hb.se/source/eval.url
Elsevier. (2015). Journal Rankings. Retrieved August 23, 2015, from
http://www.scimagojr.com/journalrank.php
Falagas, M. E., & Alexiou, V. G. (2008). The top-ten in Journal Impact Factor
manipulation. Archivum Immunologiae et Therapiae Experimentalis, 56(4),
223-226.
Frandsen, T. F. (2008). On the ratio of citable versus non-citable items in economics
journals. Scientometrics, 74(3), 439-451.
Garfield, E. (1964). Science Citation Index - A new dimension in indexing. Science,
144(3619), 649-654.
Glänzel, W., & Moed, H. F. (2002). Journal impact measures in bibliometric
research. Scientometrics, 53(2), 171-193.
Golubic, R., Rudes, M., Kovacic, N., Marusic, M., & Marusic, A. (2008). Calculating
Impact Factor: How bibliometrical classification of journal items affects the
impact factor of large and small journals. Science and Engineering Ethics,
14(1), 41-49.
Harzing, A. W. (2007). Publish or Perish. Retrieved August 23, 2015, from
http://www.harzing.com/pop.htm
Jasco, P. (2001). A deficiency in the algorithm for calculating the Impact Factor of
scholarly journals: The Journal Impact Factor. Cortex, 37(4), 590-594.
Kumar, M. (2010). The import of the Impact Factor: Fallacies of citation-dependent
scientometry. Bulletin of the Royal College of Surgeons of England, 92(1), 26-
30.
Law, R. (2012). The usefulness of Impact Factors to tourism journals. Annals of
Tourism Research, 39(3), 1722-1724.
Law, R., & Li, G. (2015). Accuracy of Impact Factors in tourism journals. Annals of
Tourism Research, 50, 19-21.
Leydesdorff, L., & Opthof, T. (2010). Scopus's Source Normalized Impact Per Paper
(SNIP) versus a Journal Impact Factor based on fractional counting of
citations. Journal of the American Society for Information Science and
Technology, 61(11), 2365-2369.
Martin, B. R. (2016). Editors' JIF-boosting stratagems-Which are appropriate and
which not? Research Policy, 45(1), 1-7.
McVeigh, M. E., & Mann, S. J. (2009). The Journal Impact Factor denominator:
Defining citable (counted) items. JAMA, 302(10), 1107-1109.
Merton, R. K. (1957). Priorities in scientific discovery: A chapter in the sociology of
science. American Sociological Review, 22, 635-659.
Moed, H. F. (2005). Citation Analysis in research evaluation (Vol. 9). Springer
Science and Business Media.
Moed, H. F., Van Leeuwen, T. N., & Reedijk, J. (1999). Towards appropriate
indicators of journal impact. Scientometrics, 46(3), 575-589.
Monastersky, R. (2005). The number that's devouring science. Chronicle of Higher
Education, 52(8), 14.
Nelhans, G. (2014). Qualitative scientometrics? Proceedings of the 35th IATUL
Conference. The International Association of Scientific and Technological
University Libraries (IATUL).
Rossner, M., Van Epps, H., & Hill, E. (2007). Show me the data. The Journal of Cell
Biology, 179(6), 1091-1092.
Rousseau, R. (2012). Updating the Journal Impact Factor or total overhaul?
Scientometrics, 92(2), 413-417.
Salton, G. (1963). Associative document retrieval techniques using bibliographic
information. Journal of the ACM (JACM), 10(4), 440-457.
Seglen, P. O. (1997). Why the Impact Factor of journals should not be used for
evaluating research. BMJ, 314(7079), 498-502.
Sevinc, A. (2004). Manipulating Impact Factor: An unethical issue or an editor's
choice. Swiss Medical Weekly, 134(27-28), 410.
Simons, K. (2008). The misused Impact Factor. Science, 322(5899), 165-165.
Smart, P. (2015). Is the Impact Factor the only game in town? The Annals of the
Royal College of Surgeons of England, 97(6), 405-408.
The PLoS Medicine Editors. (2006). The Impact Factor game. PLoS Medicine, 3(6), e291.
Thomson Reuters. (1994, June 20). The Thomson Reuters Impact Factor. Retrieved
August 23, 2015, from http://wokinfo.com/essays/impact-factor/
Thomson Reuters. (2012). Journal Citation Reports. Retrieved August 23, 2015, from
http://admin-apps.webofknowledge.com/JCR/help/h_sourcedata.htm
Van Leeuwen, T. N., Moed, H. F., & Reedijk, J. (1999). Critical comments on Institute
for Scientific Information Impact Factors: A sample of inorganic molecular
chemistry journals. Journal of Information Science, 25(6), 489-498.
Van Leeuwen, T., Costas, R., Calero-Medina, C., & Visser, M. (2013). The role of
editorial material in bibliometric research performance assessment.
Scientometrics, 95(2), 817-828.
Whitehouse, G. H. (2001). Citation rates and Impact Factors: Should they matter?
The British Journal of Radiology, 74(877), 1-3.
Wolthoff, A., Lee, Y., & Ghohestani, R. F. (2011). Comprehensive Citation Factor: A
novel method in ranking medical journals. European Journal of Dermatology,
21(4), 495-500.
Wouters, P. (2014). The citation: From culture to infrastructure. In B. Cronin, & C. R.
Sugimoto (Eds.), Beyond bibliometrics: Harnessing multidimensional
indicators of scholarly impact. (pp. 47-66). MIT Press.
Wu, X. F., Fu, Q., & Rousseau, R. (2008). On indexing in the Web of Science and
predicting Journal Impact Factor. Journal of Zhejiang University SCIENCE B,
9(7), 582-590.
Zupanc, G. K. (2014). Impact beyond the Impact Factor. Journal of Comparative
Physiology A, 200(2), 113-116.