Article

Lognormal distribution of citation counts is the reason for the relation between Impact Factors and Citation Success Index

Authors:
Zhesi Shen, National Science Library, Chinese Academy of Sciences, Beijing 100190, PR China
Liying Yang, National Science Library, Chinese Academy of Sciences, Beijing 100190, PR China
Jinshan Wu, School of Systems Science, Beijing Normal University, Beijing 100875, PR China
... Later, Milojević et al. (2017) defined this probability as the Citation Success Index (CSI) and found an S-shaped relation between JIF and CSI. Shen et al. (2018) found that the relationship between JIF and CSI mainly results from the lognormal distribution of citations. Another way to alleviate the skewness problem is logarithmic conversion (Lundberg, 2007). ...
... In the upgraded version, we follow the idea of the Citation Success Index (CSI) and extend the CSI to a field-normalized indicator. The original CSI, proposed to compare the citation capacity of two journals (Stringer et al., 2008; Milojević et al., 2017; Shen et al., 2018), is defined as the probability of a randomly selected paper from one journal having more citations than a randomly selected paper from the other journal. ...
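As a concrete illustration of the definition quoted above, the CSI of journal A over journal B can be estimated directly from two lists of citation counts. Below is a minimal Python sketch; the function name and toy data are ours, not from the paper, and ties are counted as half wins (one common convention, which the quoted definition leaves open):

    import numpy as np

    def citation_success_index(cites_a, cites_b, n_pairs=100_000, seed=0):
        """Monte Carlo estimate of P(random paper from A beats random
        paper from B); ties counted as half wins."""
        rng = np.random.default_rng(seed)
        a = rng.choice(cites_a, size=n_pairs)
        b = rng.choice(cites_b, size=n_pairs)
        return float(np.mean(a > b) + 0.5 * np.mean(a == b))

    # Toy data: two journals with skewed (roughly log-normal) citation counts.
    rng = np.random.default_rng(42)
    journal_a = rng.lognormal(mean=1.5, sigma=1.0, size=5000).astype(int)
    journal_b = rng.lognormal(mean=1.0, sigma=1.0, size=5000).astype(int)
    print(citation_success_index(journal_a, journal_b))  # about 0.6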
Article
Full-text available
Since its first release in 2004, the CAS Journal Ranking, a ranking system of journals based on a citation impact indicator, has been widely used both in selecting journals when submitting manuscripts and in conducting research evaluation in China. This paper introduces an upgraded version of the CAS Journal Ranking released in 2020 and the corresponding improvements. We will discuss the following improvements: (1) the CWTS paper-level classification system, a fine-grained classification system, utilized for field normalization; (2) the Field Normalized Citation Success Index (FNCSI), an indicator which is robust not only against extremely highly cited publications but also against wrongly assigned document types; and (3) document type differences. In addition, this paper will present part of the ranking results and an interpretation of the features of the FNCSI indicator. Peer Review: https://www.webofscience.com/api/gateway/wos/peer-review/10.1162/qss_a_00270
... We note that under the assumption of log-normal distribution, which often holds for many journals (Radicchi et al., 2008; Thelwall, 2016a, 2016b, 2016c), it might be possible to do so. For example, in earlier work (Shen et al., 2018), we derived an expression for the CSI in terms of only the mean and standard deviation, assuming the log-normal distribution. Therefore, in this work, we want to find similar relations for other CSI-related indicators, and if possible even for other commonly used indicators, in terms of only the mean and standard deviation. ...
... , the CSI, i.e., the probability of $z_t^i > z_r^j$, is calculated as (Shen et al., 2018), ...
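For orientation, the closed form hinted at in these excerpts follows from elementary probability under the log-normal assumption. Writing $m^i$ and $v^i$ for the mean and standard deviation of a journal's log-citations (the notation of the preprint below), if $\ln c^i \sim \mathcal{N}(m^i, (v^i)^2)$ and $\ln c^j \sim \mathcal{N}(m^j, (v^j)^2)$ are independent, then $\ln c^i - \ln c^j \sim \mathcal{N}(m^i - m^j, (v^i)^2 + (v^j)^2)$, and therefore

$$ S_j^i = P(c^i > c^j) = \Phi\left( \frac{m^i - m^j}{\sqrt{(v^i)^2 + (v^j)^2}} \right), $$

where $\Phi$ is the standard normal cumulative distribution function. This is the generic normal-difference result under the stated assumptions, and it is the kind of mean-and-SD expression the excerpts describe.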
Preprint
Two journal-level indicators, the mean ($m^i$) and the standard deviation ($v^i$), are proposed as the core indicators of each journal, and we show that several other indicators can be calculated from those two core indicators, assuming that the yearly citation counts of papers in each journal roughly follow a log-normal distribution. Those other journal-level indicators include the journal h-index, the journal one-by-one-sample comparison citation success index $S_j^i$, the journal multiple-sample $K^i$-$K^j$ comparison success rate $S_{j,K^j}^{i,K^i}$, the minimum representative sizes $\kappa_j^i$ and $\kappa_i^j$, and the average ranking of all papers in a journal within a set of journals ($R^t$). We find that those indicators are consistent with those calculated directly from the raw citation data ($C^i=\{c_1^i,c_2^i,\dots,c_{N^i}^i\},\forall i$) of the journals. In addition to its theoretical significance, the ability to estimate other indicators from the core indicators has practical implications: it enables individuals who lack access to raw citation count data to use those other indicators by means of the easily accessible core indicators.
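To illustrate the consistency claim in this abstract, one can simulate log-normal citation data and compare the CSI computed from raw counts with the CSI computed from the two core indicators alone. A minimal Python sketch with invented parameters (all names and data are hypothetical):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # Hypothetical raw data: log-citations of two journals, assumed normal.
    log_a = rng.normal(1.5, 1.0, size=20_000)
    log_b = rng.normal(1.0, 1.2, size=20_000)
    cites_a, cites_b = np.exp(log_a), np.exp(log_b)

    # CSI from raw data: exact all-pairs comparison on a subsample.
    sub_a, sub_b = cites_a[:2000], cites_b[:2000]
    csi_raw = float(np.mean(sub_a[:, None] > sub_b[None, :]))

    # CSI from the two core indicators only (mean and SD of log-citations).
    m_a, v_a = log_a.mean(), log_a.std()
    m_b, v_b = log_b.mean(), log_b.std()
    csi_core = norm.cdf((m_a - m_b) / np.hypot(v_a, v_b))

    print(f"raw: {csi_raw:.3f}  core: {csi_core:.3f}")  # should nearly agree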
... A metric for comparing the citation capacity of pairs of journals. The citation success index between two journals is defined as the probability of a randomly chosen paper from one journal having a larger citation count than a randomly chosen paper from the other journal (Milojević et al., 2017; Shen et al., 2018). • Self-citation rate. ...
... Second, not all indicators used by the CAS for establishing the List were analyzed in this study. Some of the indicators are difficult to obtain, such as rejection rates and the journal citation success index (Milojević et al., 2017; Shen et al., 2018), and these were not analyzed. In general, the limitations of our study demonstrate the need for a wider set of indicators that reflect the quality of the editorial procedures of journals (Wouters et al., 2019) as well as independent journal evaluation performed by academic communities contributing to whitelists of recommendable journals (Aksnes & Sivertsen, 2019; Zhang & Sivertsen, 2020) ...
Preprint
Full-text available
There is widespread agreement that questionable journals pose a threat to the integrity of scholarly publishing and the credibility of academic research. However, there is currently no agreed-upon definition of what constitutes a questionable journal. The characteristics of questionable journals have not been delineated, standardized, or broadly accepted. A series of policy initiatives by the central Chinese government has culminated in the Early Warning List of International Journals, released by the National Science Library of the Chinese Academy of Sciences: 65 journals that Chinese scholars should be wary of publishing in. Taking this List as a litmus test, we analyze the characteristics of each journal, focusing on a definitive set of factors that may see a journal included on the List. We not only include the factors applied by the publisher of the List, such as the article processing charges, the retraction rate, etc., but also investigate several other factors. Most of the factors are found to influence the List, while some are not. In fact, many of the journals on the List are highly ranked by impact factors. Our study aims to provide empirical information supporting global attempts to mitigate the pervasive phenomenon of questionable journals.
... The FNCSI is defined as the probability that the citation count of a random paper from a journal is larger than that of a random paper on the same topic and with the same document type from other journals (Shen et al., 2018). For journal A, the probability $P(c_a > c_o \mid a \in A, o \in O)$ that the citation count of a paper from journal A is larger than that of a random paper on the same topic and with the same document type from other journals is defined as below: ...
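A minimal Python sketch of this definition (the records, helper name, and data are hypothetical, not the official CAS implementation): for each paper of journal A, compare its citation count against the pool of papers from other journals with the same topic and document type, counting ties as half wins:

    from collections import defaultdict

    # Hypothetical records: (journal, topic, doc_type, citations).
    papers = [
        ("A", "t1", "article", 12), ("A", "t1", "article", 3),
        ("A", "t2", "review", 30),
        ("B", "t1", "article", 5), ("B", "t2", "review", 40),
        ("C", "t1", "article", 1), ("C", "t2", "review", 8),
    ]

    # Citation counts per (topic, doc_type) cell, tagged by journal.
    pool = defaultdict(list)
    for j, t, d, c in papers:
        pool[(t, d)].append((j, c))

    def fncsi(journal):
        """P(c_a > c_o | a in journal, o from other journals, matched on
        topic and document type); ties counted as half wins."""
        wins, total = 0.0, 0
        for j, t, d, c in papers:
            if j != journal:
                continue
            others = [co for jo, co in pool[(t, d)] if jo != journal]
            wins += sum(1.0 if c > co else 0.5 if c == co else 0.0
                        for co in others)
            total += len(others)
        return wins / total if total else float("nan")

    print(f"FNCSI(A) = {fncsi('A'):.3f}")  # 0.667 on this toy data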
Preprint
Full-text available
The COVID-19 pandemic resulted in a surge in the production and citation rates of related publications, and these ultra-highly cited papers posed grave challenges to journal evaluation. It is therefore important to test the performance of bibliometric indicators during the crisis and assess their ability to adapt to rapidly evolving research landscapes. The CAS Journal Ranking, one of the most widely used journal ranking systems in China, is committed to accurately revealing the average impact of journals and enhancing the robustness of evaluation results. This study focused on the response of the CAS Journal Ranking system to the ultra-highly cited papers related to COVID-19. We compared the journal impact factor (JIF), the category normalized citation impact (CNCI), and CAS's indicator, the field normalized citation success index (FNCSI), under journal-level and paper-level classification systems by assessing changes in indicator values and examining the ranking mobility of journals. The results indicate that combining the FNCSI with the CWTS paper-level classification system yields a robust indicator for coping with the challenges brought by COVID-19 papers. The combination is effective because the FNCSI measure reduces the enormous impact of COVID-19 papers, while the CWTS paper-level classification system groups the majority of COVID-19 papers into the "coronavirus" category, preventing distortion of the citation normalization of other groups. By revealing the pros and cons of the various indicators, we hope to emphasize their relative suitability and context dependence, and to inform future improvements to scientific journal evaluation systems and methodologies.
... One is the journal citation success index, a specific method for comparing journal citation impact (Milojević et al., 2017; Shen et al., 2018). We have instead applied a more widespread method, namely dividing journals into quartiles after ranking them within their disciplines by journal impact factor. ...
Article
Full-text available
On December 31, 2020, The National Science Library of the Chinese Academy of Sciences (CAS) released a list of 65 international scientific journals, all of them indexed by the Web of Science, that were said to be potentially in conflict with academic rigor. The list immediately influenced the publication patterns of Chinese researchers. A year later, on December 31, 2021, CAS released a revised and reduced list of 35 questionable journals and said the publishers in the meantime had reacted constructively and improved the procedures of their journals. This study aims to provide an understanding of the motivations and criteria behind the list, partly by reviewing Chinese policy documents and public debates, partly with a quantitative analysis to detect the relative importance of the criteria used to select the journals for the list.
... Second, this version introduces a citation success index (Milojević et al., 2017) to replace impact factors as a measure of a journal's influence in corresponding topics. The citation success index of the target journal compared with the reference journal is defined as the probability of the citation count of a randomly selected paper from the target journal being larger than that of a random paper from a reference journal (Shen et al., 2018). Third, it extends the coverage of disciplines from the natural sciences into the social sciences and includes some local journals that are not listed in the JCR but are listed in the ESCI list to support the internationalization process of domestic titles. ...
Preprint
Full-text available
Evaluating and ranking journals is a crucial part of assessing the quality of science and technology output for government agencies, research institutions, and academic researchers. As China's research output and strength continue to increase, ranking indexes abound, and journal evaluation and selection are becoming an area of wide attention. In this paper, we describe the history and main motivations behind journal evaluation activity in China. Then, we systematically introduce and compare the most influential journal lists in China. These are: the Chinese Science Citation Database (CSCD); the journal partition table (JPT); the AMI Comprehensive Evaluation Report (AMI); the Chinese STM Citation Report (CJCR); the "A Guide to the Core Journals of China" (GCJC); the Chinese Social Sciences Citation Index (CSSCI); and the World Academic Journal Clout Index (WAJCI). In addition, some other lists published by government agencies, professional associations, and universities are briefly introduced. Overall, this paper reveals the landscape of the journal evaluation system in China. The results offer a deeper understanding of the methods and practices used to evaluate academic journals in China as well as some insights into how other countries assess and rank journals.
... The first question to answer before trying to calculate the probability of publishing a breakthrough paper by citation analysis is how citations are distributed; the answer is that a large number of studies have shown that citation distributions follow lognormal functions (Redner 2005; Radicchi et al. 2008; Stringer et al. 2010; Evans et al. 2012; Wilson 2014a, 2014b; Shen et al. 2018). In addition to these mathematical studies, further support for the lognormal distribution of citations comes indirectly from studies of the skewness of these distributions. ...
Preprint
Full-text available
In research policy, effective measures that lead to improvements in the generation of knowledge must be based on reliable methods of research assessment, but for many countries and institutions this is not the case. Publication and citation analyses can be used to estimate the part played by countries and institutions in the global progress of knowledge, but a concrete method of estimation is far from evident. The challenge arises because publications that report real progress of knowledge form an extremely low proportion of all publications; in most countries and institutions such contributions appear less than once per year. One way to overcome this difficulty is to calculate probabilities instead of counting the rare events on which scientific progress is based. This study reviews and summarizes several recent publications, and adds new results that demonstrate that the citation distribution of normal publications allows the probability of the infrequent events that support the progress of knowledge to be calculated.
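As a worked example of "calculating probabilities instead of counting rare events": under a fitted log-normal citation distribution, the probability that a single paper exceeds a high citation threshold is a normal tail probability of the log. A Python sketch in which all three numbers (mu, sigma, and the threshold) are invented for illustration:

    import numpy as np
    from scipy.stats import norm

    # Assumed log-normal parameters (mean and SD of ln citations) for a
    # set of normal publications, and an assumed rare-event threshold.
    mu, sigma, c_star = 1.8, 1.1, 1000

    # P(c > c_star) = P(ln c > ln c_star) under the log-normal model.
    p = norm.sf((np.log(c_star) - mu) / sigma)
    print(f"P(c > {c_star}) = {p:.1e} per paper")
    # Expected number of such papers among N publications is simply N * p,
    # which can be well below one per year, as the abstract notes.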
Article
The Journal Impact Factor is a measure of recent attention rather than an effective measure of the long-term impact of journals. This paper mainly seeks indicators that can effectively quantify the long-term impact of a journal, with the aim of providing more useful supplementary information for journal evaluation. By examining the correlation between articles' past citations and their future citations in different time windows, we found that articles that were cited in past years also yield useful information about the future. The age characteristics of these sustained active articles in journals provide clues for establishing long-term impact metrics for journals. A new indicator, the h1-index, is proposed to extract the active articles, those with at least as many citations in the statistical year as the h1-index itself. On this basis, four indicators describing the age characteristics of active articles are proposed to quantify the long-term impact of journals. The experimental results show that these indicators correlate highly with a journal's total citations, indicating that they appropriately express the impact of the journal. Combining the average age of the active articles with the impact factors of journals, we found that some journals with short-term attraction strategies can also build long-term impact. The indicators presented in this paper that describe the long-term impact of journals will be a useful complement to journal quality assessment.
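On our reading of the description above, the h1-index is an h-type index computed on the citations received in the statistical year. A minimal Python sketch of that reading (the function name and toy data are ours):

    def h_type_index(counts):
        """Largest h such that at least h of the counts are >= h."""
        h = 0
        for i, c in enumerate(sorted(counts, reverse=True), start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    # Citations received this year by a journal's articles (toy data).
    this_year = [25, 14, 9, 7, 7, 5, 3, 2, 1, 0]
    h1 = h_type_index(this_year)
    active = [c for c in this_year if c >= h1]  # the "active" articles
    print(h1, active)  # 5 [25, 14, 9, 7, 7, 5]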
Article
Full-text available
Although the Journal Impact Factor (JIF) is widely acknowledged to be a poor indicator of the quality of individual papers, it is used routinely to evaluate research and researchers. Here, we present a simple method for generating the citation distributions that underlie JIFs. Application of this straightforward protocol reveals the full extent of the skew of distributions and variation in citations received by published papers that is characteristic of all scientific journals. Although there are differences among journals across the spectrum of JIFs, the citation distributions overlap extensively, demonstrating that the citation performance of individual papers cannot be inferred from the JIF. We propose that this methodology be adopted by all journals as a move to greater transparency, one that should help to refocus attention on individual pieces of work and counter the inappropriate usage of JIFs during the process of research assessment.
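A minimal Python sketch of the kind of distribution plot this protocol calls for: a histogram of the citations received in the JIF window by a journal's papers, with the mean (roughly the JIF) marked. The data here are simulated, not from the paper:

    import numpy as np
    import matplotlib.pyplot as plt

    # Simulated citation counts in year Y to a journal's items from
    # years Y-1 and Y-2, i.e. the papers underlying that year's JIF.
    rng = np.random.default_rng(1)
    cites = rng.lognormal(mean=1.2, sigma=1.1, size=800).astype(int)

    jif = cites.mean()  # the JIF is (roughly) this mean
    plt.hist(cites, bins=np.arange(cites.max() + 2) - 0.5, log=True)
    plt.axvline(jif, linestyle="--", label=f"mean (JIF) = {jif:.1f}")
    plt.xlabel("citations in the two-year window")
    plt.ylabel("number of papers (log scale)")
    plt.legend()
    plt.show()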
Article
In this paper we present “citation success index”, a metric for comparing the citation capacity of pairs of journals. Citation success index is the probability that a random paper in one journal has more citations than a random paper in another journal (50% means the two journals do equally well). Unlike the journal impact factor (IF), the citation success index depends on the broadness and the shape of citation distributions. Furthermore, it is insensitive to sporadic highly-cited papers that affect the IF. Nevertheless, we show, based on 16,000 journals containing ∼2.4 million articles, that the citation success index is a relatively tight function of the ratio of IFs of journals being compared. This is due to the fact that journals with the same IF have quite similar citation distributions. The citation success index grows slowly as a function of IF ratio. It is substantial (>90%) only when the ratio of IFs exceeds ∼6, whereas a factor of two difference in IF values translates into a modest advantage for the journal with higher IF (index of ∼70%). We facilitate the wider adoption of this metric by providing an online calculator that takes as input parameters only the IFs of the pair of journals.
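The numbers quoted in this abstract can be reproduced approximately from the log-normal closed form given earlier if both journals are assumed to share the same log-scale standard deviation (sigma = 1.0 below, an assumed value): in that case the IF ratio equals exp(m^i - m^j), so the CSI is a slow function of its logarithm. A Python sketch:

    import numpy as np
    from scipy.stats import norm

    def csi_from_if_ratio(ratio, sigma=1.0):
        """CSI implied by an IF ratio when both journals have log-normal
        citation distributions with the same log-scale SD sigma: the
        sigma^2/2 terms in the means cancel, so the IF ratio equals
        exp(m_i - m_j) and the normal-difference closed form applies."""
        return norm.cdf(np.log(ratio) / (sigma * np.sqrt(2)))

    for r in (1, 2, 6):
        print(f"IF ratio {r}: CSI = {csi_from_if_ratio(r):.2f}")
    # 0.50, 0.69, 0.90: close to the pattern in the abstract (a modest
    # advantage at an IF ratio of 2, about 90% only near a ratio of 6).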
Effectiveness of journal ranking schemes as a tool for locating information
Stringer, M. J., Sales-Pardo, M., & Amaral, L. A. N. (2008). Effectiveness of journal ranking schemes as a tool for locating information. PLoS ONE, 3, e1683.