Article

Inconsistency in Selecting Metrics used for Bibliometric Studies

Abstract

There are several metrics available for application in bibliometrics. Some of the metrics considered here, such as publication growth, citation impact, authorship patterns, and the h-index, have been widely used to generate statistical analyses of books, articles, and other publications. One must be aware of the pros and cons of each metric used in research evaluation, and must be certain that no information is lost when data about researchers and their institutions are compressed into a tabular set of metrics. Certain metrics have been proposed for replacement by others that give a more accurate interpretation of research performance; when misused, a metric can obscure real research performance rather than reveal it.

References
Article
Full-text available
In research evaluation of single researchers, the assessment of paper and journal impact is of interest. High journal impact reflects the ability of researchers to convince strict reviewers, and high paper impact reflects the usefulness of papers for future research. In many bibliometric studies, metrics for journal and paper impact are separately presented. In this paper, we introduce two graph types which combine both metrics in a single graph. The graph types are rooted in Bland-Altman plots introduced by Bland and Altman (1986). The graphs can be used in research evaluation to visualize the performance of single researchers comprehensively.
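The paper's exact graph types are not reproduced in the abstract, but a Bland-Altman plot has a standard construction: each pair of measurements contributes one point whose x-coordinate is the pair's mean and whose y-coordinate is the pair's difference. A minimal sketch of those coordinates, with illustrative function and variable names (not taken from the paper):

```python
def bland_altman_points(metric_a, metric_b):
    # For each paired observation (e.g. a paper-impact and a journal-impact
    # score for the same publication), a Bland-Altman plot places the pair's
    # mean on the x-axis and its difference (a - b) on the y-axis.
    return [((a + b) / 2, a - b) for a, b in zip(metric_a, metric_b)]
```

Plotting these points (with horizontal lines at the mean difference and at the mean ± 1.96 standard deviations, in Bland and Altman's original usage) shows agreement between the two metrics across their common scale.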
Article
Full-text available
It is shown that the mean number of authors per paper or the proportion of the multiple-authored papers is inadequate as a measure of the degree of collaboration in a discipline. A measure which combines some of the merits of both measures is suggested and derived. This measure, called the Collaborative Coefficient, is derived for four commonly used probability distributions.
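The abstract does not reproduce the derived formula, but the Collaborative Coefficient as commonly cited (Ajiferuke, Burell and Tague) is CC = 1 − (Σ_j f_j / j) / N, where f_j is the number of papers with exactly j authors and N is the total number of papers. A sketch under that assumption:

```python
from collections import Counter

def collaborative_coefficient(authors_per_paper):
    # authors_per_paper: list with one entry per paper, the number of authors.
    # f_j = number of papers with exactly j authors; N = total papers.
    # CC = 1 - (sum over j of f_j / j) / N
    n = len(authors_per_paper)
    freq = Counter(authors_per_paper)
    return 1 - sum(f / j for j, f in freq.items()) / n
```

CC is 0 when every paper is single-authored and approaches 1 as author teams grow, which is the property that distinguishes it from a bare mean-authors-per-paper measure.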
Article
Full-text available
The growth rate of scientific publication has been studied from 1907 to 2007 using available data from a number of literature databases, including Science Citation Index (SCI) and Social Sciences Citation Index (SSCI). Traditional scientific publishing, that is publication in peer-reviewed journals, is still increasing, although there are big differences between fields. There are no indications that the growth rate has decreased in the last 50 years. At the same time, publication through new channels, for example conference proceedings, open archives and home pages, is growing fast. The growth rate for SCI up to 2007 is smaller than for comparable databases, which means that SCI was covering a decreasing part of the traditional scientific literature. There are also clear indications that the coverage by SCI is especially low in some of the scientific areas with the highest growth rates, including computer science and the engineering sciences. The role of conference proceedings, open access archives and publications published on the net is increasing, especially in scientific fields with high growth rates, but this has only partially been reflected in the databases. The new publication channels challenge the use of the big databases in measurements of scientific productivity or output and of the growth rate of science. Because of this declining coverage and this challenge, it is problematic that SCI has been, and continues to be, used as the dominant source for science indicators based on publication and citation numbers. The limited data available for the social sciences show that the growth rate in SSCI was remarkably low and indicate that the coverage of SSCI was declining over time. National Science Indicators from Thomson Reuters is based solely on SCI, SSCI and the Arts and Humanities Citation Index (AHCI); the declining coverage of the citation databases therefore calls the use of this source into question.
Article
Full-text available
Presents an outline and preliminary analysis of a group which constitutes the greater part of a single invisible college. "Invisible college" refers to the "in-group" in each of the more actively pursued and highly competitive specialties in the sciences. Group members claim to be reasonably in touch with everyone who is contributing materially to research in their area, not only nationally but internationally as well. "The implications of this study are considerable for analyzing the social life of science and the nature of collaboration and communication at the research front. . . . Perhaps the recent acceleration in the amount of multiple authorship in several regions of science is due partly to the building of a new communication mechanism deriving from the increased mobility of scientists, and partly to an effort to utilize larger and larger quantities of lower-level research manpower. If this is so, then the conventional explanation of collaboration, as the utilization of many different skills and pairs of hands to do a single job otherwise impossible to perform, is woefully inadequate and misleading."
Article
This study analyses self-citation as a strategy used by journals and authors with regard to first citations in Latin-American psychology journals between 2012 and 2016. A total of 8977 citations received by 2403 papers published in the 19 Latin-American psychology journals indexed in the 2016 WoS (included in the 2015 JCR edition) were analysed. The results indicate that first self-citations, both journal self-citations and author self-citations, have an effect on the number of citations received. Journal self-citations and first journal self-citations are more important than author self-citations for journals located in the first quartiles. The importance of the type of self-citation differs between publications, with journal self-citations showing the greatest differences between journals over the period studied. Self-consumption of information, measured by the number of articles with self-citations, varies between journals, ranging from 55.8% to 88.8%. It can be concluded that self-citations and first self-citations play an important role in the citation of works and in increasing their visibility.
Article
More than one million citations from the scientific literature have been processed by the Citation Index Project at the Institute for Scientific Information. The Project, sponsored by NSF and NIH, will be described briefly, and new methods of using citation data for evaluation of publications will be discussed. Summaries of statistical data compiled by computer methods will be given, including: frequency of citation of one journal by another; frequency of current citations to the past literature; frequency of self-citation by journals and authors; number of source citations per cited paper; number of references per source paper; and number of papers published per journal. Information scientists and research workers are encouraged to use this unique reservoir of information for additional statistics applicable to their fields of work, as a basis for comparative studies on the efficacy of various indexing techniques.
Article
The h-index is a popular bibliometric indicator for assessing individual scientists. We criticize the h-index from a theoretical point of view. We argue that for the purpose of measuring the overall scientific impact of a scientist (or some other unit of analysis) the h-index behaves in a counterintuitive way. In certain cases, the mechanism used by the h-index to aggregate publication and citation statistics into a single number leads to inconsistencies in the way in which scientists are ranked. Our conclusion is that the h-index cannot be considered an appropriate indicator of a scientist's overall scientific impact. Based on recent theoretical insights, we discuss what kind of indicators can be used as an alternative to the h-index. We pay special attention to the highly cited publications indicator. This indicator has a lot in common with the h-index, but unlike the h-index it does not produce inconsistent rankings.
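The highly cited publications indicator that the authors favour over the h-index is, in its simplest form, a count of papers whose citations meet or exceed a fixed threshold. A minimal sketch (the threshold choice is an assumption; in practice it is often set from field-normalized percentiles, which the abstract does not specify):

```python
def highly_cited_count(citations, threshold):
    # Number of publications whose citation count meets or exceeds
    # a fixed threshold -- the "highly cited publications" indicator.
    return sum(c >= threshold for c in citations)
```

Unlike the h-index, adding citations to an already-counted paper never changes the score, which is one source of the more consistent rankings the authors describe.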
Article
I propose the index h, defined as the number of papers with citation number ≥ h, as a useful index to characterize the scientific output of a researcher. Keywords: citations, impact, unbiased.
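Hirsch's definition translates directly into code: sort the citation counts in descending order and find the largest rank h at which the h-th paper still has at least h citations. A minimal sketch:

```python
def h_index(citations):
    # Largest h such that at least h papers each have >= h citations
    # (Hirsch's definition, applied to a list of per-paper citation counts).
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h
```

For example, citation counts [10, 8, 5, 4, 3] give h = 4: four papers have at least 4 citations each, but not five papers with at least 5.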
Article
I first mentioned the idea of an impact factor in Science in 1955.¹ With support from the National Institutes of Health, the experimental Genetics Citation Index was published, and that led to the 1961 publication of the Science Citation Index.² Irving H. Sher and I created the journal impact factor to help select additional source journals. To do this we simply re-sorted the author citation index into the journal citation index. From this simple exercise, we learned that initially a core group of large and highly cited journals needed to be covered in the new Science Citation Index (SCI). Consider that, in 2004, the Journal of Biological Chemistry published 6500 articles, whereas articles from the Proceedings of the National Academy of Sciences were cited more than 300 000 times that year. Smaller journals might not be selected if we rely solely on publication count,³ so we created the journal impact factor (JIF).
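The two-year journal impact factor that Garfield and Sher created divides the citations a journal receives in a given year to its previous two years' items by the number of citable items it published in those two years. A sketch of that standard calculation (function and variable names are illustrative, not from the paper):

```python
def journal_impact_factor(cites_received, items_published, year):
    # cites_received[y]: citations received in `year` to items the journal
    # published in year y. items_published[y]: citable items published in y.
    # Two-year JIF = citations to the two preceding years' items, divided
    # by the number of citable items published in those two years.
    cited = sum(cites_received.get(y, 0) for y in (year - 1, year - 2))
    items = sum(items_published.get(y, 0) for y in (year - 1, year - 2))
    return cited / items
```

Because it is a per-item average, the JIF lets a small journal whose articles are heavily cited rank alongside a large one, which is exactly the selection problem it was invented to solve.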
González-Sala, F., Osca-Lluch, J. & Haba-Osca, J. Scientometrics (2019).