Vol.18, No.2 Shah & Mahmood (2016)
Validation of Journal Impact Metrics of Web of Science and Scopus
Syed Rahmatullah Shah
University of Boras, Sweden
University of the Punjab, Lahore, Pakistan
Khalid Mahmood
University of the Punjab, Lahore, Pakistan
Citation based metrics are widely used to assess the impact of
research published in journals. This paper presents the results of a
research study to verify the accuracy of the data and calculations behind the
journal impact metrics presented in Web of Science (WoS) and
Scopus, in the case of three journals of information and library
science. Data collected from the websites of the journals were compared with
those of the two citation extended databases. The study manually calculated the
Journal Impact Factor (JIF) and the Impact per Publication (IPP) in accordance
with the formulas given by the databases. Data were also collected from Google
Scholar to draw a comparison. The study found discrepancies between the two sets
of data and bibliometric values, i.e., the systematic values presented in WoS
and Scopus and those calculated in this study. The commercial databases
presented inflated measures based on fabricated or erroneous data. The study is
of practical importance to researchers, universities and research financing
bodies that consider these bibliometric indicators a good tool for measuring
the performance and quality of research and researchers.
Keywords: Citation analysis; Journal rankings; Journal Impact Factor; Impact per
Publication; Scholarly communication; Bibliometrics.
Citations are a valuable source for researchers, librarians, publishers and
scientific and academic organizations. They use citations as a measure for quality of
research output and evaluation of a research journal (Moed, 2005). Researchers use
citations to look into the flow and development of ideas in their research. They
check accuracy, originality, authenticity, influence, and other relevant facts about
ideas related to their own studies (Garfield, 1964; Salton, 1963). Researchers
strengthen their ideas on the basis of citations to highlight the existing research
and the gaps to be filled by their own studies (Moed, 2005). Citations
also serve the purpose of lending intellectual credits to the real contributors in
research. They also safeguard the rights of the researcher who originally initiated or
developed an idea (Day, 2014; Merton, 1957). Librarians have a long history of
using citations as a tool in making comparisons of two or more published journals
covering the same discipline or subject category. Citations help librarians make
the best use of limited financial resources; librarians therefore use citations to
compare journals and decide which should be acquired from the wide variety
available in a particular subject (Moed, 2005).
Publishers started to publish citation data, much like their product
catalogues, to help librarians make quick decisions. Citation data produced by
publishers then came into use by researchers, which opened new horizons
for both publishers and researchers (De Bellis, 2014). Information and
communication technologies, such as internet and web technologies, added value
to production and utilization of citations data. Various reference and citation
extended databases, such as Web of Science, Scopus and Google Scholar, emerged
to facilitate researchers and librarians. Rather than limiting their role to
citation data, these web-based automated systems introduced a number of other
metadata related solutions, such as research impact metrics and indices. The use of these
numeric measures of research impact drew the attention of research financing
authorities, administration of universities and research organizations, research
funding, awards and reward councils, selection boards, appointing authorities, and
others with similar roles. These new beneficiaries used citation data
as a tool to measure researchers' performance, to assess research, and
to evaluate journals (Blaise, 2014).
Despite the widespread use of citation based measures, there is
plenty of literature that criticizes the application of such metrics to evaluate the
quality of research. In addition to their failure to discriminate between positive
and negative citations, the use of citations has serious disadvantages for
researchers, publishers, institutions, and research itself (Wouters, 2014).
Researchers face the stress of
publishing more research articles as proof of their performance. Their financial
benefits, such as increments, awards, job tenures, new appointments, and
promotions, are unduly linked to these citation based measures (De Bellis, 2014;
Wouters, 2014). The final effect manifests itself in the form of researchers’
employing smart tactics to counter citation issues at the cost of research and
knowledge (Wouters, 2014). The research publications industry faces issues related
to franchising and monopolizing trends (Blaise, 2014).
Editors of research journals are forced to publish research in a strategic way.
Their survival and promotional efforts open them up to biases and a questionable
publication system of research. Academic focused institutions lag behind in securing
competitive public funds. Therefore, academic institutions increasingly become
research focused to strengthen their position in a race of research and
development fund competitions (Wouters, 2014). Citation impact and other metrics
are calculated on the basis of ‘citable items’ in Web of Science (WoS) and ‘citable
documents’ in Scopus (Nelhans, 2014). However, no empirical study in the
research literature has supported publication counts, citation counts, or
calculations based on these numbers as a suitable tool for measuring the
quality of research, the performance of researchers, or the resulting financial benefits.
Many earlier studies compared features offered by various citation extended
databases (Bergman, 2012). There are also studies about the practical utility of
these databases as a single information source (Bar-Ilan, 2010). Some studies
also discussed practical aspects of a single database. The aforementioned studies
discussed various policy and methodological issues that were relevant to impact
measures in ideal circumstances on the part of these citation databases. Some
authors highlighted misconduct in these databases (Seglen, 1997). However, the
researchers rarely endeavoured to validate the treatment of data by these
databases and prove errors or malpractices on the part of databases empirically and
in a transparent and verifiable way. The present study is an attempt to fill this gap
in research literature.
Key Concepts
Web of science quality measures. Journal Impact Factor (JIF) and 5-year JIF
are popular quality measures in Web of Science (WoS). These measures are based
on the calculations of a number of citations in the preceding years. These citation
calculations are limited to journals indexed in Web of Science, irrespective of the
citations of articles in good or poor quality journals ("The Thomson Reuters Impact
Factor," 1994). Journal Impact Factor (JIF) is calculated as under:
2014 Impact factor of journal = A/B
Numerator = A = Number of times all items published in that journal in 2012
Vol.18, No.2 Shah & Mahmood (2016)
and 2013 cited by WoS indexed publications in 2014
Denominator = B = Number of ‘citable items’ published by that journal in
2012 and 2013 ("The Thomson Reuters Impact Factor," 1994).
Similarly, the 5-year Journal Impact Factor is calculated as:
5-year Impact Factor of a journal in 2014 = a/b
Numerator = a = Number of citations in 2014 to articles published in 2009-2013
Denominator = b = Number of articles published in 2009-2013 ("The
Thomson Reuters Impact Factor," 1994).
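Both WoS formulas reduce to a simple ratio of citations to citable items. As an illustrative sketch (the function names and sample counts below are ours, not Thomson Reuters'):

```python
def journal_impact_factor(cites_2014_to_2012_13, citable_items_2012_13):
    """2-year JIF: citations in 2014 to items published in 2012-2013,
    divided by 'citable items' published in 2012-2013 (A/B)."""
    return cites_2014_to_2012_13 / citable_items_2012_13

def five_year_jif(cites_2014_to_2009_13, articles_2009_13):
    """5-year JIF: the same ratio over a five-year publication window (a/b)."""
    return cites_2014_to_2009_13 / articles_2009_13

# Hypothetical counts: 150 citations to 100 citable items gives a JIF of 1.5
print(journal_impact_factor(150, 100))  # 1.5
```

The arithmetic itself is trivial; as the study argues, the contested part is which items are counted in A and B.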
Scopus quality measure. Impact per Publication (IPP) is one of the popular
quality measures in Scopus. It leads to the calculation of Source Normalized Impact
per Paper (SNIP) which is used as an alternative to the WoS Journal Impact Factor
(Leydesdorff & Opthof, 2010). It is a ratio of citations to the number of published
papers within the Scopus indexed publications ("About Impact per Publication
(IPP)," 2015). Formula for calculation of IPP is given below:
IPP for the year 2014 = X/Y
Numerator = X = Number of citations in 2014 to citable items published in 2011-2013
Denominator = Y = Number of citable items published in 2011-2013.
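The IPP calculation is again a plain ratio, this time over a three-year window. A minimal sketch under the definitions above (names and sample counts are ours):

```python
def impact_per_publication(cites_2014_to_2011_13, citable_docs_2011_13):
    """IPP: citations in 2014 to citable documents published in 2011-2013,
    divided by the number of citable documents published in 2011-2013 (X/Y)."""
    return cites_2014_to_2011_13 / citable_docs_2011_13

# Hypothetical counts: 30 citations to 60 citable documents gives an IPP of 0.5
print(impact_per_publication(30, 60))  # 0.5
```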
Citable item/citable document. ‘Citable item’ in WoS and ‘Citable document’
in Scopus serve the same purpose in two databases. The WoS considers articles
(research articles) and reviews as citable items. Journal Citation Reports (JCR) of
WoS considers only articles and reviews. Editorials, letters, news items, and
meeting abstracts are excluded from the JIF calculations because they are not
generally cited ("Journal Citation Reports," 2012). Scopus additionally includes
conference papers among citable documents; therefore, articles, reviews, and
conference papers are the citable documents in Scopus ("Journal Rankings," 2015).
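The difference between the two definitions can be expressed as a simple filter over document types. The type labels below follow the definitions above; the sample document list is hypothetical:

```python
# Document types counted as citable under each database's stated rules.
WOS_CITABLE = {"article", "review"}
SCOPUS_CITABLE = {"article", "review", "conference paper"}

def count_citable(doc_types, citable_set):
    """Count documents whose type is citable for the given database."""
    return sum(1 for t in doc_types if t in citable_set)

# A hypothetical journal volume with mixed document types:
docs = ["article", "editorial", "review", "letter", "conference paper", "article"]
print(count_citable(docs, WOS_CITABLE))     # 3 (the two articles and the review)
print(count_citable(docs, SCOPUS_CITABLE))  # 4 (also the conference paper)
```

Because this count is the denominator of JIF and IPP, any discretion in assigning document types directly changes the resulting metric.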
Many studies mentioned systematic misconduct and unethical practices on
the part of reference and citation extended databases. Seglen (1997) pointed out
the wrong inclusion of citations to non-citable items in measuring the impact factor
of research journals in JCR. He also mentioned an undue favour to the literature of
particular disciplines in measuring JIF, in addition to biases of language and the
projection of American literature. The PLoS Medicine Editors ("The Impact Factor
Game," 2006) identified that impact factor calculations were unscientific, arbitrary,
and a hidden process. This process left enough room for editors to decrease the
number of citable items, which ultimately increased the impact factor of the journal.
These editors contested that Thomson Reuters was not accountable to anybody for
these manipulations in their completely non-transparent system. The editors
stated, “during discussions with Thomson Scientific... it became clear that the
process of determining a journal’s impact factor is unscientific and arbitrary... we
came to realize that Thomson Scientific has no explicit process for deciding which
articles other than original research articles it deems as citable. We conclude that
science is currently rated by a process that is itself unscientific, subjective, and
secretive” (p. 707). Carrió (2008) also pointed out that decisions about citable
items, made from hidden data, were at the discretion of Thomson Reuters’ officials.
Rossner, Van Epps and Hill (2007) contacted Thomson Scientific to inquire
about the discrepancy between the data of a particular journal available in Web of
Science and the data used for calculating the impact factor of that journal. They
failed to access the actual data used for the impact factor. They concluded that
scientists should not rely on a measure based on hidden data, in contrast to the
basic principles of scientific inquiry. Binswanger (2014) was of the view that “a de
facto monopoly for the calculation of impact factors... enables Thomson Scientific
to sell its secretly fabricated Impact Factors to academic institutions at a high price”
(p. 61). Brumback (2009) opined that “scientists should be outraged that the worth
of science is being measured by a secretive proprietary metric that as often
destroys as much as it aids careers and scientific initiatives” (p. 932).
Monastersky (2005) pointed out unethical practices by editors to increase the
impact factor of their journals. He stated that, in addition to editors’ undue
managerial tactics, Thomson Reuters’ management team modified the numerator and
denominator values in calculating the impact factor. Published citable items are put
into non-citable document categories, which reduces the denominator value and
increases impact. If any of these documents is cited, its citation is still added to
the numerator value, which results in an increase of the impact factor. Thus, both an
increase in the number of citations and a decrease in citable items inflate the
impact factor of a research journal. Many researchers have repeatedly raised their voices against this
erroneous and unethical practice. (A considerable number of representative papers
include Brumback, 2008; Campbell, 2008; Chew, Villanueva, & Van Der Weyden,
2007; Dong, Loh, & Mondry, 2005; Falagas & Alexiou, 2008; Frandsen, 2008; Glänzel
& Moed, 2002; Jasco, 2001; Kumar, 2010; Law, 2012; Martin, 2016; Moed, Van
Leeuwen, & Reedijk, 1999; Rousseau, 2012; Sevinc, 2004; Simons, 2008; Smart,
2015; Van Leeuwen, Moed, & Reedijk, 1999; Whitehouse, 2001; Wolthoff, Lee, &
Ghohestani, 2011; Zupanc, 2014).
We could find three studies that tried to audit the values of the JCR impact
factor. Golubic, Rudes, Kovacic, Marusic, and Marusic (2008) collected article and
citation data from Web of Science for four journals from different disciplines,
including Nature, and compared it with the number of citations and citable articles
in JCR. They found that “items classified as non-citable items by WoS, and thus not
included in the denominator of the IF equation, received a significant number of
citations, which are included in the numerator of the IF equation” (p. 45). When
they put their data into the impact factor formula the values decreased for all high-
ranked and middle-ranked journals (between 12.2% and 32.2%).
Wu, Fu and Rousseau (2008) calculated data collected from WoS and
predicted 2007 impact factors (IFs) for several journals, such as Nature, Science,
Learned Publishing and some library and information science journals. In most
cases they found lower values of the calculated impact factor than those
officially released by JCR. Law and Li (2015) selected three journals in the field of
tourism and compared the number of citable articles given in JCR with those on the
publisher’s website. They found that JCR used a smaller number of citable articles
for the calculation of impact factors than the actual count:
The discrepancies are likely due to the differences in data used. Another
possibility for the discrepancy is that ScienceDirect used a categorization that
is different from that used by Thomson Reuters, and that Thomson Reuters
used a subjective and inconsistent way of categorization. Drawing on the
findings of this study, Thomson Reuters could, and probably should, publish
their categorization approach to make their IFs more credible (p. 21).
Citation based quantitative metrics are widely used as surrogates for
determining the quality of research published in journals. A large number of
previous researchers have found errors and malpractices used to manipulate the
calculation of these measures in order to project the journals as carriers of good
quality research. Journal editors and the staff of citation extended databases have
been involved in this unethical practice. However, very few studies have audited
the values of the impact measures released by these databases with the help of
independent data.
This study empirically validates research impact measures presented by Web
of Science and Scopus. It investigates the authenticity, reliability, and
trustworthiness of the quality measures. This research is an attempt to check and
highlight, in a transparent as well as in a verifiable way, if there is any systematic
misconduct in research impact measures presented by these two reference and
citation extended databases. The primary research question addressed in this study
was whether the data and calculations of journal impact metrics are accurately
presented in WoS and Scopus in the case of three journals of information and
library science.
In order to validate the journal quality measures provided by citation
extended databases, we decided to compare the values with those calculated
manually by us. We selected two databases, i.e., WoS and Scopus, and three
research journals for this study. The journals were selected from WoS because of
its limited coverage of journal titles as compared to Scopus. The subject category
‘Information Science and Library Science’ was selected from the Web of Science (WoS)
JCR index. Eighty-seven research journals were indexed in this category. Three
research journals were selected: one with the highest rank position, MIS Quarterly
(USA, ISSN: 0276-7783, JCR rank: 1) and two from lower rank positions, Library and
Information Science (Japan, ISSN: 0373-4447, JCR rank: 69) and Malaysian Journal of
Library and Information Science (Malaysia, ISSN: 1394-6234, JCR rank: 71).
Statistical data regarding quality measures were collected from citation databases.
Data regarding citable items/documents were collected manually from official
websites of respective journals, and data regarding citations were manually
counted from the respective citing databases WoS and Scopus. We used Microsoft
Excel to calculate our own Journal Impact Factor, 5-year Journal Impact Factor, and
Impact per Publication (IPP).
In addition to the comparison of WoS and Scopus, we collected citations
data for five years on the pattern of WoS and Scopus from Google Scholar by using
Publish or Perish (Harzing, 2007) software. Finally, three quality measures were
calculated on the basis of Google Scholar data, but using the formulas of WoS and
Scopus.
Journal Impact Factor (JIF) and 5-year JIF for the three journals as per the Journal
Citation Reports (JCR) are shown in table 1. These calculations are claimed to be the
output of specific software used by Thomson Reuters; the data are thus
systematically generated, and the results are based on that data set. Similarly,
manual calculations of JIF and 5-year JIF for the same journals are given in Table 2.
Although the results in Table 1 and Table 2 are for the same journals, the same time
period, and a specific number of citations, there are notable variations in the data
involved in calculating JIF and in the final metrics.
Table 1
Systematic data from Journal Citation Report (JCR) 2014
[Table values lost in extraction. Columns include Total cites, Citable items, and
5-Year JIF. Rows: Library and Information Science; Malaysian Journal of Library
and Information Science; MIS Quarterly.]
It was observed that none of these three journals had any missing issue in
the years under study. Therefore, missing data cannot be assumed to be a reason for
the variations in the data sets and results. The first issue relates to the simple
calculation of JIF from the values given in table 1: these calculations are wrong for
all the journals. Secondly, in JIF calculations, any increase in citations (the
numerator) and any decrease in citable items (the denominator) affect the results in
such a way that the JIF and 5-year JIF scores increase. Total cites and citable items in tables 1 and 2 are
significantly different. For instance, Library and Information Science is a semi-annual
journal that published four issues in two years (2012-13). Systematic data in table 1
show only four citable items having 19 citations in all WoS indexed journals in 2014.
But in the calculations for this study, as shown in table 2, there were 18 citable
items that had just one citation in all WoS indexed journals in 2014. These
variations in data sets completely changed the calculated JIF. Thus, an inflation of
JIF from 0.056 to 0.278 (about a five-fold increase) in a social sciences subject
cannot be justified by the WoS quality measures. The situation is the same
for the other two journals. Furthermore, Table 1 presented a very low number of
citable items and a very high number of citations for all journals as compared to
Table 2, and the result was inflated JIF scores presented by JCR.
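The inconsistency for Library and Information Science can be checked by plugging the figures quoted above into the JIF formula. The values below are taken from the text; note that the published JIF of 0.278 matches neither ratio:

```python
# Systematic data (per Table 1): 19 citations, 4 citable items, published JIF 0.278.
jcr_cites, jcr_citable, published_jif = 19, 4, 0.278
# Manual count in this study (per Table 2): 1 citation, 18 citable items.
manual_cites, manual_citable = 1, 18

jif_from_jcr_counts = jcr_cites / jcr_citable                     # 4.75
jif_from_manual_counts = round(manual_cites / manual_citable, 3)  # 0.056

print(jif_from_jcr_counts, jif_from_manual_counts, published_jif)
```

Neither 19/4 nor 1/18 reproduces 0.278, which is the internal inconsistency the study describes.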
The values of Impact per Publication (IPP) as per data provided by Scopus are
given in table 3. These calculations are the result of the software that is used by
Elsevier in Scopus quality measures. Scopus provides raw data on its website for
calculations of journal and publication impact measures. As explained by Scopus,
these citations and documents data are periodically updated.
Table 2
Empirical data collected from journal websites and WoS
Library and Information Science: citations in 2014 to 2012 items = 0, to 2013
items = 1 (A = 1); citations in 2014 to 2009-13 items = 4; citable items 2012-13,
B = 18; citable items 2009-13, b = 52
Malaysian Journal of Library and Information Science: citations to 2012 items = 5,
to 2013 items = 4 (A = 9); citations to 2009-13 items = 44; B = 42; b = 112
MIS Quarterly: citations to 2012 items = 432, to 2013 items = 213 (A = 645);
citations to 2009-13 items = 1900; B = 122; b = 243
Table 3 presents data as per the June 24, 2015 updates ("Compare Journals,"
2015). Further, manual calculations of the Scopus quality indicator, Impact
per Publication (IPP), using the same method as Scopus, are given in table 4. It
was observed that the results in table 3 differ from the results in table 4, similar
to the earlier situation with the WoS quality measures.
Table 3
Systematic data from Scopus ("Compare Journals," 2015)
[Table values lost in extraction. Columns: Doc. 2014; Doc. (3Y); Citable Doc.
(3Y) = Y; Total Cites (3Y) = X. Rows: Library and Information Science; Malaysian
Journal of Library and Information Science; MIS Quarterly.]
Scopus calculates Impact per Publication (IPP) as the ratio of three-year
citations to the number of citable documents. Table 3 shows that the systematic
calculations based on the data given through official resources are wrong for all
journals under study. In comparing the systematic results to the manual results,
there are differences between the Scopus official values and the manual
calculations. For
instance, citations of the year 2014 in all Scopus indexed journals from three year
documents (2011-13) of Library and Information Science Journal were 12 as per
Scopus official resources and only three as per manual calculations. Moreover,
citable documents in three year period for this journal were 62 as per Scopus
official data, but 27 as per manual calculations. A similar situation emerged for
the other journals. Although the problem of inflated IPP is not seen in Scopus, one
cannot depend on these erroneous calculations.
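For Library and Information Science, the two IPP values implied by these counts can be computed directly (figures from the text):

```python
# Scopus official counts: 12 citations in 2014 to 62 citable documents (2011-13).
official_ipp = round(12 / 62, 3)
# Manual counts in this study: 3 citations to 27 citable documents.
manual_ipp = round(3 / 27, 3)

print(official_ipp, manual_ipp)  # 0.194 0.111
```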
Table 4
Empirical data from journal websites and Scopus
[Table values lost in extraction. Columns: Total Cites (X); Total Citable Docs. (Y).
Rows: Library and Information Science; Malaysian Journal of Library and
Information Science; MIS Quarterly.]
Table 5
Empirical data from Google Scholar
[Table values lost in extraction. Columns include Citable items/Doc. Rows: Library
and Information Science; Malaysian Journal of Library and Information Science;
MIS Quarterly.]
Vol.18, No.2 Shah & Mahmood (2016)
Data in table 5 came from Google Scholar and the official websites of the
respective journals. The number of citations was calculated by using the Publish or
Perish (Harzing, 2007) software. Further, JIF, 5-year JIF and IPP were calculated
with the help of the WoS and Scopus formulas. Results are presented in table 6.
Unlike WoS and Scopus, Google Scholar has a wider coverage of documents.
This database also counts as documents some resources that are out of the scope of
both WoS and Scopus. The three journal quality metrics based on Google Scholar
data sets (table 6) present another picture of the effects of an increase in the
number of citations in a particular period of time, given its enhanced coverage of
documents. It was observed that, contrary to the official data sets and results
(tables 1 and 3), the manual calculations (tables 2 and 4) show some similarity to
the results produced from Google Scholar data.
If we take Library and Information Science as an example, its WoS JIF is 0.278
(table 1). This impact factor cannot be justified even on the basis of Google
Scholar data, which counts all possible citations from a broader sphere than WoS;
even then, the impact factor of Library and Information Science is much lower
(i.e., 0.111). Conversely, the manual calculation of the impact factor for Library
and Information Science in this study gives a JIF value of 0.056, which is closer
to the JIF from Google Scholar data. The result is similar for the IPP
calculations, from the Scopus system and manual, for all journals.
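The three JIF values for Library and Information Science quoted above can be compared directly; the manual value sits much closer to the Google Scholar value than the JCR value does:

```python
# JIF values for Library and Information Science, as quoted in the text.
jcr_jif, manual_jif, gs_jif = 0.278, 0.056, 0.111

print(round(abs(manual_jif - gs_jif), 3))  # 0.055 (manual vs Google Scholar)
print(round(abs(jcr_jif - gs_jif), 3))     # 0.167 (JCR vs Google Scholar)
```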
Table 6
JIF and IPP scores based on Google Scholar data
[Table values lost in extraction. Columns include JIF, 5-year JIF, and IPP. Rows:
Library and Information Science; Malaysian Journal of Library and Information
Science; MIS Quarterly.]
Data from Web of Science (tables 1 and 2) make it clear that in all these
three research journals, the given number of citations is much higher (table 1) than
the actual number of citations (table 2). Similarly, the given number of citable items
is significantly lower than the actual citable items. Hence, impact factor scores are
inflated. Scopus based data show that the given number of citations for each of
these journals (table 3) was more than the actual number of citations (table 4).
Also, the given number of citable documents (table 3) is less than the actual
number of citable documents (table 4). The results have been manipulated in the
same manner. These findings indicate that the quality metrics of Web of Science
and Scopus are fabricated rather than the outcome of an impartial calculation and
presentation of facts, as is generally assumed in the research community.
The findings of the present study are in conformity with those of Golubic et
al. (2008), Law and Li (2015), and Wu, Fu and Rousseau (2008), namely that WoS
manipulates data to show higher impact factor values for journals. The calculation
of journal quality metrics based on data from comparatively new citation
extended databases, i.e., Scopus and Google Scholar, is a unique strength of this
study. This study strengthens the conclusions of previous studies (PLoS
Medicine Editors, 2006; Rossner, Van Epps, & Hill, 2007; Seglen, 1997) and confirms
that Thomson Reuters still continues its practice of manipulating citation data.
Although the staff of Thomson Reuters claimed that the impact factor was accurate
and consistent “due to its concentration on a simple calculation based on data that
are fully visible in the Web of Science” (McVeigh & Mann, 2009, p. 1109), the
findings of the previous as well as the present study disprove this statement.
Discrepancies have also been found in the Scopus calculations. This study shows
that the use of fabricated citation counts is a common practice in impact factor
calculation, based on illogical, unethical and unscientific practices. Editorial
material is usually undervalued and considered non-citable for use in the
denominator of the equation; on the other hand, all citations to this material are
counted in the numerator. A simple solution to avoid this discrepancy is to include
all document types in research assessment procedures, as suggested by Van Leeuwen,
Costas, Calero-Medina, and Visser (2013).
Issues such as unchecked discretion, undisclosed criteria, and non-replicable
calculations must not be acceptable to stakeholders. Web of Science and Scopus
have published their criteria for calculations and mentioned the document types
they use. What is citable and what is not is a decision to be made in research.
Whatever WoS or Scopus consider as specified document types can be delimited using
the search options available on the websites of both databases. Impact factors and
impact per publication can be calculated by anybody. Therefore, the claimed impact
factor system is transparent in itself, but the impact factor declarations are
problematic and can be contested wherever they are of serious concern.
Conclusion, Limitations, Implications and Recommendations for Future Research
This study has some theoretical and practical implications. On the theoretical
side, it will stimulate further research regarding assessment, evaluation, and quality
measurement of research. Likewise, this study may help draw the attention of
researchers to their exploitation in the name of quality scores, high
productivity, and brand-oriented or franchised publications. It may also help to
highlight business promotion and industrialized thinking about research, as opposed
to the promotion of real knowledge and science for real development. Practically,
this research may help librarians, policy makers, information analysts,
bibliometricians, and researchers to find their way in contributing knowledge rather
than being sucked into marketing and publicity scenarios designed by the corporate
sector in the publishing industry. This study will stimulate further research to
explore contradictions in the policies and practices of prominent actors such as
the Thomson Group and Elsevier. It is also suggested that this study should
be replicated with larger sets of journals in other subject areas.
Bibliometric indicators are of a high value for research and for the scientific
contribution to knowledge. Reference and citation extended databases add
value to the research process. Unfortunately, bibliometric indicators have
been used as performative measures and evaluation tools by the administration of
academia and research financing bodies over the last decade. The research
community has been badly affected due to these misleading impact metrics.
Publishing and productivity with high impacts have diverted the attention of
researchers from contributing to knowledge. They have shifted their focus from
knowledge to tactical productivity to cope with the unduly emerged awards,
rewards, and promotion systems. This shift of objectives on the part of researchers
has greatly promoted the publishing industry. The unavailability of more
appropriate measures of performance and research value lends support to the
existing numeric impact system. To guard against exploitation by numerical impact
measures, it is a challenge and a research task for scholars to come up with a
justifiable, reliable, consistent and transparent system for the performative
evaluation of researchers, as well as the qualitative value of one’s scientific
contribution in a particular field, discipline, or research area.
In the present study, the manually counted numbers of citations and citable
items/documents differ from the numbers presented by WoS as well as Scopus.
It was not clear which articles, reviews, and conference
papers were not included as citable items on both these citation databases. It was
also unclear why other items or documents that the research community
considered as citable items / documents were excluded in these databases. Both
WoS and Scopus continuously include journals in their indexes. Therefore, inclusion
of any new journal in WoS or Scopus system changes the data set and the results
presented in this study. Furthermore, being the denominator, the number of citable
items / documents was of much importance due to their considerable impact on
the final results. Therefore, a general perception of articles, reviews, and
conference papers was adopted in this research. The authors used their own
subjective judgment in deciding what counted as a citable item. This is a
limitation of this study. Another limitation is that the study was restricted to
only three journals. For generalization of results more studies with larger sets of
journals are needed.
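The denominator's influence on the final results can be illustrated with a minimal sketch of the two metrics as the databases define them: JIF divides citations received in a year to items from the two preceding years by the citable items in those years, and IPP does the same over a three-year window. The numbers below are purely illustrative, not data from this study.

```python
def jif(citations_to_prev_2_years: int, citable_items_prev_2_years: int) -> float:
    """Journal Impact Factor for year Y: citations in Y to items
    published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return citations_to_prev_2_years / citable_items_prev_2_years

def ipp(citations_to_prev_3_years: int, citable_items_prev_3_years: int) -> float:
    """Impact per Publication for year Y: same ratio, but computed
    over a three-year window (Y-1, Y-2, Y-3)."""
    return citations_to_prev_3_years / citable_items_prev_3_years

# Hypothetical journal with 200 citations to the previous two years:
# classifying 90 rather than 100 of its documents as "citable"
# inflates the JIF, which is why the classification of citable
# items matters so much.
print(round(jif(200, 90), 3))   # → 2.222
print(round(jif(200, 100), 3))  # → 2.0
```

Because only the denominator changed between the two calls, the difference in output comes entirely from how documents are classified, mirroring the discrepancies observed between the databases and the manual counts.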
References

Bar-Ilan, J. (2010). Citations to the “Introduction to Informetrics” indexed by WOS, Scopus and Google Scholar. Scientometrics, 82(3), 495-506.
Bergman, E. M. (2012). Finding citations to social work literature: The relative
benefits of using Web of Science, Scopus, or Google Scholar. The Journal of
Academic Librarianship, 38(6), 370-379.
Binswanger, M. (2014). Excellence by nonsense: The competition for publications in
modern science. In S. Bartling, & S. Friesike (Eds.), Opening Science (pp. 49-
72). Springer International.
Cronin, B. (2014). Scholars and scripts, spoors and scores. In B. Cronin, & C. R. Sugimoto (Eds.), Beyond bibliometrics: Harnessing multidimensional indicators of scholarly impact (pp. 3-21). MIT Press.
Brumback, R. A. (2008). Worshiping false idols: The impact factor dilemma. Journal
of Child Neurology, 23(4), 365-367.
Brumback, R. A. (2009). Impact Factor: Let's be unreasonable! Epidemiology, 20(6),
Campbell, P. (2008). Escape from the Impact Factor. Ethics in Science and
Environmental Politics, 8(1), 5-6.
Carrió, I. (2008). Of impact, metrics and ethics. European Journal of Nuclear
Medicine and Molecular Imaging, 35(6), 1049-1050.
Chew, M., Villanueva, E. V., & Van Der Weyden, M. B. (2007). Life and times of the
Impact Factor: Retrospective analysis of trends for seven medical journals
(1994-2005) and their editors' views. Journal of the Royal Society of
Medicine, 100(3), 142-150.
Day, R. E. (2014). The data - It is me! In B. Cronin, & C. R. Sugimoto (Eds.), Beyond
bibliometrics: Harnessing multidimensional indicators of scholarly impact (pp.
67-84). MIT Press.
De Bellis, N. (2014). History and evolution of (biblio)metrics. In B. Cronin, & C. R.
Sugimoto (Eds.), Beyond Bibliometrics: Harnessing multidimensional
indicators of scholarly impact (pp. 23-44). MIT Press.
Dong, P., Loh, M., & Mondry, A. (2005). The "impact factor" revisited. Biomedical
Digital Libraries, 2(7), 1-8.
Elsevier. (2015). About Impact per Publication (IPP). Retrieved 08 23, 2015, from
Elsevier. (2015). Compare journals. Retrieved 08 23, 2015, from http://www-
Elsevier. (2015). Journal Rankings. Retrieved 08 23, 2015, from
Falagas, M. E., & Alexiou, V. G. (2008). The top-ten in Journal Impact Factor
manipulation. Archivum Immunologiae et Therapiae Experimentalis, 56(4),
Frandsen, T. F. (2008). On the ratio of citable versus non-citable items in economics
journals. Scientometrics, 74(3), 439-451.
Garfield, E. (1964). Science Citation Index - A new dimension in indexing. Science, 144(3619), 649-654.
Glänzel, W., & Moed, H. F. (2002). Journal impact measures in bibliometric
research. Scientometrics, 53(2), 171-193.
Golubic, R., Rudes, M., Kovacic, N., Marusic, M., & Marusic, A. (2008). Calculating
Impact Factor: How bibliometrical classification of journal items affects the
impact factor of large and small journals. Science and Engineering Ethics,
14(1), 41-49.
Harzing, A. W. (2007). Publish or perish. Retrieved 08 23, 2015, from
Jacso, P. (2001). A deficiency in the algorithm for calculating the Impact Factor of scholarly journals: The Journal Impact Factor. Cortex, 37(4), 590-594.
Kumar, M. (2010). The import of the Impact Factor: Fallacies of citation-dependent
scientometry. Bulletin of the Royal College of Surgeons of England, 92(1), 26-
Law, R. (2012). The usefulness of Impact Factors to tourism journals. Annals of
Tourism Research, 39(3), 1722-1724.
Law, R., & Li, G. (2015). Accuracy of Impact Factors in tourism journals. Annals of
Tourism Research, 50, 19-21.
Leydesdorff, L., & Opthof, T. (2010). Scopus's Source Normalized Impact Per Paper
(SNIP) versus a Journal Impact Factor based on fractional counting of
citations. Journal of the American Society for Information Science and
Technology, 61(11), 2365-2369.
Martin, B. R. (2016). Editors' JIF-boosting stratagems-Which are appropriate and
which not? Research Policy, 45(1), 1-7.
McVeigh, M. E., & Mann, S. J. (2009). The Journal Impact Factor denominator:
Defining citable (counted) items. JAMA, 302(10), 1107-1109.
Merton, R. K. (1957). Priorities in scientific discovery: A chapter in the sociology of
science. American Sociological Review, 22, 635-659.
Moed, H. F. (2005). Citation Analysis in research evaluation (Vol. 9). Springer
Science and Business Media.
Moed, H. F., Van Leeuwen, T. N., & Reedijk, J. (1999). Towards appropriate
indicators of journal impact. Scientometrics, 46(3), 575-589.
Monastersky, R. (2005). The number that's devouring science. Chronicle of Higher Education, 52(8), 14.
Nelhans, G. (2014). Qualitative scientometrics? Proceedings of the 35th IATUL
Conference. The International Association of Scientific and Technological
University Libraries (IATUL).
Rossner, M., Van Epps, H., & Hill, E. (2007). Show me the data. The Journal of Cell
Biology, 179(6), 1091-1092.
Rousseau, R. (2012). Updating the Journal Impact Factor or total overhaul?
Scientometrics, 92(2), 413-417.
Salton, G. (1963). Associative document retrieval techniques using bibliographic information. Journal of the ACM (JACM), 10(4), 440-457.
Seglen, P. O. (1997). Why the Impact Factor of journals should not be used for
evaluating research. BMJ, 314(7079), 498-502.
Sevinc, A. (2004). Manipulating Impact Factor: An unethical issue or an editor's choice? Swiss Medical Weekly, 134(27-28), 410.
Simons, K. (2008). The misused Impact Factor. Science, 322(5899), 165-165.
Smart, P. (2015). Is the Impact Factor the only game in town? The Annals of the
Royal College of Surgeons of England, 97(6), 405-408.
The PLoS Medicine Editors. (2006). The Impact Factor game. PLoS Medicine, 3(6), e291.
Thomson Reuters. (1994, 06 20). The Thomson Reuters Impact Factor. Retrieved 08 23, 2015, from
Thomson Reuters. (2012). Journal Citation Reports. Retrieved 08 23, 2015, from
Van Leeuwen, T. N., Moed, H. F., & Reedijk, J. (1999). Critical comments on Institute
for Scientific Information Impact Factors: A sample of inorganic molecular
chemistry journals. Journal of Information Science, 25(6), 489-498.
Van Leeuwen, T., Costas, R., Calero-Medina, C., & Visser, M. (2013). The role of
editorial material in bibliometric research performance assessment.
Scientometrics, 95(2), 817-828.
Whitehouse, G. H. (2001). Citation rates and Impact Factors: Should they matter?
The British Journal of Radiology, 74(877), 1-3.
Wolthoff, A., Lee, Y., & Ghohestani, R. F. (2011). Comprehensive Citation Factor: A
novel method in ranking medical journals. European Journal of Dermatology,
21(4), 495-500.
Wouters, P. (2014). The citation: From culture to infrastructure. In B. Cronin, & C. R.
Sugimoto (Eds.), Beyond bibliometrics: Harnessing multidimensional
indicators of scholarly impact. (pp. 47-66). MIT Press.
Wu, X. F., Fu, Q., & Rousseau, R. (2008). On indexing in the Web of Science and
predicting Journal Impact Factor. Journal of Zhejiang University SCIENCE B,
9(7), 582-590.
Zupanc, G. K. (2014). Impact beyond the Impact Factor. Journal of Comparative
Physiology A, 200(2), 113-116.