Article

Sense and Nonsense About the Impact Factor


Abstract

The impact factor is based on citations of papers published by a scientific journal. It has been published since 1961 by the Institute for Scientific Information. It may be regarded as an estimate of the citation rate of a journal's papers, and the higher its value, the higher the scientific esteem of the journal. Although the impact factor was originally meant for comparison of journals, it is also used for assessment of the quality of individual papers, scientists and departments. For the latter a scientific basis is lacking, as we will demonstrate in this contribution.


... Fundamentally, a large number of articles that are cited extremely rarely, if at all, benefit from the high citation rates of a small number of citation classics: scientific standing and reputation are transferred from successful to less successful papers. From this aspect, the IF cannot be regarded as a good indicator of the scientific quality of an individual article (6,10,12,14,18,20-23,57,59,60,78,80,83,84,92,106,126-128). ...
... Seglen (20-23) and Rostami-Hodjegan & Tucker (18) refute this on the basis of empirical findings, claiming that articles are cited irrespective of the IF of the journal in which they are published. Empirical analyses by Opthof (128) suggest that the quality of an article has a greater impact on the citation rate than the IF of the journal; on the other hand, the exposure accorded to an article through publication in a highly reputed journal does contribute to its citation success (interpreted by Seglen (21) as a national bias; cf. 57). ...
... By contrast, Opthof (128), who analyzed two cardiovascular journals, concluded that "the impact factor indeed permits assessment of the quality of journals". Using a different line of reasoning, it is pointed out that the correlation between a high rejection rate of manuscripts submitted to journals and a high IF permits the IF to be interpreted as an indicator of the quality of a journal: more manuscripts are submitted to high IF journals, which can consequently select those of the highest quality and can afford a high rejection rate, thus attaining a high quality standard and even increasing it in the long term (51,65,93). ...
Article
The journal impact factor (IF), which is published annually by the Institute for Scientific Information® (USA), is now in widespread use as a scientometric parameter for the evaluation of research and researchers in Germany and other European countries. The present article subjects the IF to critical analysis. It first deals with the processes of production, transfer, and use of medical knowledge, because the IF intervenes in these processes on account of its reflexivity. Secondary effects of the IF resulting from its reflexivity are discussed with a focus on the level of the author, the journal and the medical discipline, as well as on social knowledge processes in society. In addition, the extent to which the IF is appropriate for evaluating the quality of a specific article, of a journal or of individual and collective research achievements is discussed. The present article calls for (a) research evaluation in accordance with the recommendations of the Deutsche Forschungsgemeinschaft (German Research Council, DFG) and of the Arbeitsgemeinschaft der Wissenschaftlichen Medizinischen Fachgesellschaften (Association of the Scientific Medical Societies, AWMF); and (b) closer engagement with, and organization of, medical knowledge processes.
... Because of the high volume of research on biochar published over recent years, articles containing obsolete information were manually removed from the search results, and most of the articles selected for further processing had been published within the last decade. As a parameter that is often used to reflect the rigor of a research paper, the journal's SCI impact factor was taken into consideration while filtering articles [33-35]. Although low-quality research can also be found in a journal with a high SCI impact factor, the probability of such a situation is generally low [34]. ...
Article
Full-text available
With the increasing popularity of biochar as a soil amendment worldwide in recent years, a question of concern arises as to whether the application of biochar would suppress or stimulate greenhouse gas (GHG) emissions. In this study, published data extracted from independent individual studies were systematically selected, statistically processed, graphically presented and critically analyzed to understand biochar’s influences on the emissions of CO2, CH4 and N2O—the three major GHGs emitted in agricultural fields. The results revealed not only the significant importance of biochar’s pyrolysis temperature for its impacts on GHG emissions, but also the dissimilar influences on the generations of different GHGs. The application of biochar, in general, stimulated the emissions of CO2 and CH4 to various extents. With biochar pyrolyzed under relatively lower temperatures (e.g., <500 °C), higher application rates generally resulted in more stimulated CO2 and CH4 emissions; whereas those pyrolyzed under relatively higher temperatures (e.g., >550 °C) became less stimulative (and sometimes even suppressive) for CO2 and CH4 emissions, especially when applied at higher rates. Nevertheless, the response of N2O emission to biochar application contrasted with those of CO2 and CH4. The results may contribute to better regulations for biochar application in combating GHG emissions in agriculture.
... While its calculation is simply an arithmetic mean (Pang 2019), dividing the number of citations a journal receives in a given year (numerator) by the number of papers published by that journal in the two preceding years (denominator), the JIF is probably the most controversial metric. The main reason for this is its widespread use as a proxy of research quality or research performance in the assessment of individual authors, departments or academic institutions (Adler et al. 2009; Seglen 1989); see also (Opthof 1997; Opthof and Wilde 2009) for a further discussion on the use of citation data to evaluate research. As pointed out by several colleagues, including its inventor ["used inappropriately as surrogates in evaluation exercises" (Garfield 1996)], such usage is mostly inappropriate [e.g. ...
... Simons (2008), McKiernan et al. (2019), Casadevall and Fang (2014)], as the JIF of a journal does not predict the citedness of individual articles published in the respective journal [e.g. Adler et al. (2009), Opthof (1997), Opthof et al. (2004)]. Several initiatives have been launched that discuss the potentially harmful effects of such practices and provide recommendations for the sensible and responsible use of metrics such as the JIF, most prominently the 'San Francisco Declaration on Research Assessment' (DORA), the 'Leiden Manifesto' (Hicks et al. 2015), and the 'Metric Tide' report (Wilsdon et al. 2015). ...
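For reference, the two-year JIF calculation described in the excerpt above can be written compactly as follows; the symbols C and N are introduced here only for illustration and do not appear in the cited sources:

$$\mathrm{JIF}_{z} = \frac{C_{z}(x) + C_{z}(y)}{N(x) + N(y)}$$

where $z$ is the JIF year, $x$ and $y$ are the two preceding years, $C_{z}(\cdot)$ is the number of citations received in year $z$ by items the journal published in the given year, and $N(\cdot)$ is the number of citable items the journal published in that year.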
Article
Full-text available
Skewed citation distribution is a major limitation of the Journal Impact Factor (JIF), which represents an outlier-sensitive mean citation value per journal. The present study focuses primarily on this phenomenon in the medical literature by investigating a total of n = 982 journals from two medical categories of the Journal Citation Report (JCR). In addition, the three highest-ranking journals from each JCR category were included in order to extend the analyses to non-medical journals. For the journals in these cohorts, the citation data (2018) of articles published in 2016 and 2017 classified as citable items (CI) were analysed using various descriptive approaches, including the skewness, the Gini coefficient, and the percentage of CI contributing 50% or 90% of the journal's citations. All of these measures clearly indicated an unequal, skewed distribution with highly cited articles as outliers. The %CI contributing 50% or 90% of the journal's citations was in agreement with previously published studies, with median values of 13-18% of CI generating 50% and 44-60% of CI generating 90% of the journal's citations, respectively. Replacing the mean citation values (corresponding to the JIF) with the median to represent the central tendency of the citation distributions resulted in markedly lower numerical values, ranging from -30% to -50%. Up to 39% of journals showed a median citation number of zero in one medical journal category. For the two medical cohorts, median-based journal ranking was similar to mean-based ranking, although the number of possible rank positions was reduced to 13. Correlation of mean citations with the measures of citation inequality indicated that the unequal distribution of citations per journal is more prominent, and thus more relevant, for journals with lower citation rates. By using various indicators in parallel and the hitherto probably largest journal sample, the present study provides comprehensive up-to-date results on the prevalence, extent and consequences of citation inequality across medical and all-category journals listed in the JCR.
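To make the descriptive measures named in this abstract concrete (mean vs. median, skewness, Gini coefficient, and the share of citable items accounting for 50% of citations), here is a small illustrative sketch on synthetic, right-skewed citation counts; it uses invented data, not the study's journal sample:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
# Synthetic, heavily right-skewed citation counts for one journal's citable items
citations = rng.negative_binomial(n=1, p=0.15, size=500)

mean_cites = citations.mean()        # analogous to a JIF-style mean
median_cites = np.median(citations)  # robust central tendency
skewness = skew(citations)

def gini(x):
    """Gini coefficient of a non-negative array (0 = equal, 1 = maximally unequal)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    if x.sum() == 0:
        return 0.0
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

def share_of_items_for(citations, fraction=0.5):
    """Smallest share of items that together account for `fraction` of all citations."""
    c = np.sort(citations)[::-1]
    cum = np.cumsum(c) / c.sum()
    k = np.searchsorted(cum, fraction) + 1
    return k / c.size

print(f"mean={mean_cites:.2f} median={median_cites:.1f} skewness={skewness:.2f}")
print(f"Gini={gini(citations):.2f}")
print(f"{share_of_items_for(citations, 0.5):.1%} of items generate 50% of citations")
```

On such a skewed sample the mean is pulled well above the median by a few highly cited outliers, which is exactly the effect the study quantifies for real journals.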
... Numerous scientists have criticized abuse of the impact factor as a parameter for measuring the quality of individual papers [9-13]. This criticism is primarily based on the fact that citation distributions underlying the JIF are extremely skewed. In a Gaussian (normal) distribution, the mean, median, and mode are equal. ...
... Journals with a high JIF may still publish papers that remain uncited, both during the relatively brief period after publication relevant for the calculation of the JIF 10 and over longer periods. 19 This is not to say that uncited papers should not have been published, but one may anticipate that journals with a high IF publish fewer uncited papers. ...
Article
In this article, I show that the distribution of citations to papers published by the top 30 journals in the category Cardiac & Cardiovascular Systems of the Web of Science is extremely skewed. This skewness is to the right, which means that there is a long tail of papers that are cited much more frequently than the other papers of the same journal. The consequence is that there is a large difference between the mean and the median of the citations of the papers published by the journals. I further found that there are no differences between the citation distributions of the top 4 journals European Heart Journal, Circulation, Journal of the American College of Cardiology, and Circulation Research. Despite the fact that the journal impact factor (IF) varied between 23.425 for Eur Heart J and 15.211 for Circ Res, with the other 2 journals in between, the median citation of their articles plus reviews (IF Median) was 10 for all 4 journals. Given the fact that their citation distributions were similar, it is obvious that an indicator (IF Median) that reflects this similarity must be superior to the classical journal impact factor, which may indicate a nonexistent difference. It is underscored that the IF Median is substantially lower than the journal impact factor for all 30 journals under consideration in this article. Finally, the IF Median has the additional advantage that there is no artificial ranking of 128 journals in the category but rather an attribution of journals to a limited number of classes with comparable impact.
... It expresses the impact or quality of a journal in terms of the degree to which its articles are cited in the literature. Despite criticisms of the citation practices (MacRoberts & MacRoberts, 1989) that underlie the ISI IF, and the issues that arise when it is applied in a range of domains (Moed & Leeuwen, 1995; Opthof, 1997; Reedijk, 1998), the ISI IF as an operationalization of Ig can be characterized by three main features: ...
Preprint
We generated networks of journal relationships from citation and download data, and determined journal impact rankings from these networks using a set of social network centrality metrics. The resulting journal impact rankings were compared to the ISI IF. Results indicate that, although social network metrics and ISI IF rankings deviate moderately for citation-based journal networks, they differ considerably for journal networks derived from download data. We believe the results represent a unique aspect of general journal impact that is not captured by the ISI IF. These results furthermore raise questions regarding the validity of the ISI IF as the sole assessment of journal impact, and suggest the possibility of devising impact metrics based on usage information in general.
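As a rough illustration of the approach described above (computing social-network centrality metrics on a journal network and comparing the resulting ranking with an IF-style ranking), the sketch below uses a tiny invented citation network; the journal names, edge weights and impact-factor values are placeholders, not data from the study:

```python
import networkx as nx
from scipy.stats import spearmanr

# Hypothetical directed journal citation network: an edge A -> B means
# articles in journal A cite articles in journal B (weight = citation count).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("J1", "J2", 120), ("J1", "J3", 30), ("J2", "J1", 80),
    ("J2", "J3", 60),  ("J3", "J1", 10), ("J4", "J2", 90),
    ("J4", "J1", 40),  ("J3", "J4", 5),
])

# Two centrality-based impact rankings
pagerank = nx.pagerank(G, weight="weight")
in_degree = dict(G.in_degree(weight="weight"))

# Hypothetical IF-style values for the same journals
impact_factor = {"J1": 4.2, "J2": 3.8, "J3": 1.1, "J4": 0.9}

journals = sorted(impact_factor)
rho, _ = spearmanr([pagerank[j] for j in journals],
                   [impact_factor[j] for j in journals])
print("PageRank:", {j: round(pagerank[j], 3) for j in journals})
print("Weighted in-degree:", in_degree)
print(f"Spearman rank correlation with IF: {rho:.2f}")
```

A rank correlation well below 1 would indicate, as the preprint argues for download-based networks, that the centrality-based view of impact captures something the IF does not.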
... The JIF is now commonly used to measure the impact of journals and, by extension, the impact of the articles they have published, and by even further extension the authors of these articles, their departments, their universities and even entire countries. However, the JIF has a number of undesirable properties which have been extensively discussed in the literature [2,3,4,5,6]. This has led to a situation in which most experts agree that the JIF is far from perfect. ...
Preprint
The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures has been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact. We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. Our results indicate that the notion of scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution.
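The core of the analysis described above is a principal component analysis of the rankings produced by many impact measures. The sketch below shows that kind of computation on random placeholder data (10 invented measures instead of the 39 real ones), not the preprint's actual dataset:

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
n_journals, n_measures = 200, 10          # placeholders for the real 39 measures
scores = rng.lognormal(size=(n_journals, n_measures))

# Convert raw scores to per-measure rankings, then standardize each column
ranks = np.apply_along_axis(rankdata, 0, scores)
ranks = (ranks - ranks.mean(axis=0)) / ranks.std(axis=0)

pca = PCA()
pca.fit(ranks)
explained = pca.explained_variance_ratio_

print("Variance explained by first 3 components:", explained[:3].round(2))
# Loadings show where each measure sits relative to the main components;
# a measure loading weakly on PC1 would sit at the periphery of the construct.
print("PC1 loadings per measure:", pca.components_[0].round(2))
```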
... As an example of the representation gap, journals in the Emerging Sources Citation Index for Global South publications were only assigned JIFs in 2023 (Quaderi 2023). There is also considerable criticism of its misuse as a gauge of the relative importance of individual researchers and institutions (Amin and Mabe 2003;Hecht, Hecht, and Sanberg 1998;Opthof 1997;Seglen 1997;Walter et al. 2003). Among the many concerns raised about impact factors is the opaque nature of the decision about what a "citable" article type is. ...
Article
Full-text available
Research assessment is a major driver of research behavior. The current emphasis on journal citations in a limited number of journals with an English focus has multiple effects. The need to publish in English even when it is not the local language affects the type of research undertaken and further consolidates the Global North-centric view or scientific approach. The bibliometric databases on which assessments of universities and journals are based are owned by two large corporate organizations, and this concentration of the market has in turn concentrated the research environment. Open infrastructure offers an alternative option for the research endeavor. The OAPEN online open access library and the Directory of Open Access Books form part of this infrastructure and we consider the pattern of languages present in the directories over time.
... [4-7]. Moreover, the correlation between journal rank (as measured by journal impact factor) and the methodological quality of papers published in a journal is low or even negative [8]. ...
Article
Full-text available
Researchers would be more willing to prioritize research quality over quantity if the incentive structure of the academic system aligned with this goal. The winner of a 2023 Einstein Foundation Award for Promoting Quality in Research explains how they rose to this challenge.
... Concomitantly, journal metrics were proposed, first by Eugene Garfield (Bensman 2007), and have since been broadly discussed in terms of importance, criteria, and validity by different researchers, for instance Opthof (1997), Bloch and Walter (2001), Saha et al. (2003), Campbell (2008), Bornmann et al. (2012), and Kaldas et al. (2020), among many others. ...
Article
Full-text available
The assessment of scientific articles regarding their relevance to a research portfolio is becoming increasingly important. The reason is that scientific works have been growing both quantitatively and qualitatively. Taking this scenario into account, Methodi Ordinatio was proposed, and its major contribution was the 7th phase of the methodology, the InOrdinatio. It is an equation based on the main criteria for selecting a paper related to a theme: year of publication, number of citations and the impact factor (or journal metrics). However, finding the value of these three factors might not be an easy and quick task. Therefore, in order to improve the use of Methodi Ordinatio, two other tools were developed: FInder and RankIn. Thus, the first purpose of this paper is to present these two new tools. Additionally, considering that the impact factor has been a matter of dispute in academia, the paper also proposes the use of one single indicator for the impact factor in the equation. To achieve this second purpose, a robust mathematical model was elaborated taking into consideration the other journal metrics in order to estimate the alternative impact factor to be used in InOrdinatio in case one or more papers are not indexed in any metrics. After testing with 48 statistical models and tools, the results show that this indicator is robust and trustworthy. Finally, as the third and last purpose, this paper also proposes an alternative version of the InOrdinatio calculation, which allows flexibility regarding the weight of the three criteria used in the equation.
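As a point of reference, the InOrdinatio equation mentioned above is usually reported in the Methodi Ordinatio literature in roughly the following form (quoted here from that literature in general, not from this paper, so the exact notation and weighting may differ):

$$\mathrm{InOrdinatio} = \frac{IF}{1000} + \alpha\,\bigl[10 - (\text{ResearchYear} - \text{PublishYear})\bigr] + \sum C_i$$

where $IF$ is the journal metric, $\alpha$ is a weight (typically between 1 and 10) expressing how much recency matters to the researcher, and $\sum C_i$ is the number of citations the paper has received.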
... It means that considerable time and money spent on research would not be wasted (Gerstner et al., 2017). And it means that research quality, and not the best advertising strategy (Schilhan et al., 2021), career stage (Merton, 1968), or nationality and race (Lerback et al., 2020), would influence citation counts, which are often used as a measure of scientific impact (Opthof, 1997). ...
Article
Increasing publication numbers make it difficult to keep up with knowledge evolution in a science like hydrology. Here we give recommendations to authors and journals for writing future‐proof articles that contribute to knowledge accumulation and synthesis.
... This may lead to a downward bias of the result because of this measurement error problem. Third, the impact factor may not be a good proxy of the quality of individual papers or individual scientists [40,41]. In addition, other research outcomes and proxies such as grants and citations were not included in our analysis. ...
Article
Full-text available
Background: Physicians play a unique role in scientific and clinical research, which is the cornerstone of evidence-based medical practice. In China, tertiary public hospitals link promotions and bonuses with publications. However, the weight placed on research in the clinician's evaluation process and its potential impact on clinical practice have become controversial. Despite the heated debate about physicians' role in research, there is little empirical evidence about the relationship between physicians' publications and their clinical outcomes. Method: This paper examines the association between the quantity and quality of tertiary hospitals' attending physicians' publications and inpatient readmission rates in China. We analyzed a 20% random sample of inpatient data from the Urban Employee Basic Medical Health Insurance scheme in one of the largest cities in China from January 2018 through October 2019. We assessed the relationship between the quantity and impact factor of physicians' publications and 30-day inpatient readmission rates using logistic regression. There were 111,965 hospitalizations treated by 5794 physicians in our sample. Results: Having any first-author publications was not associated with the rate of readmission. Among internists, having clinical studies published in journals with an average impact factor of 3 or above was associated with lower readmission rates (OR = 0.849; 95% CI (0.740, 0.975)), but having basic science studies published in journals with an average impact factor of 3 or above was not associated with the rate of readmission. Among surgeons, having clinical studies published in journals with an average impact factor of 3 or above was likewise associated with lower readmission rates (OR = 0.708 (0.531, 0.946)), but having basic science studies published in journals with an average impact factor of 3 or above was associated with higher readmission rates (OR = 1.230 (1.051, 1.439)).
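For readers unfamiliar with the reported statistics, the sketch below shows how odds ratios with 95% confidence intervals are typically obtained from a logistic regression of 30-day readmission on publication indicators; the variable names and data are invented and do not reproduce the study's model:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
# Hypothetical hospitalization-level data (all columns are placeholders)
df = pd.DataFrame({
    "readmit_30d": rng.binomial(1, 0.12, n),        # outcome: 30-day readmission
    "clinical_pub_if3": rng.binomial(1, 0.2, n),    # physician has clinical paper, IF >= 3
    "basic_pub_if3": rng.binomial(1, 0.15, n),      # physician has basic-science paper, IF >= 3
    "patient_age": rng.normal(60, 12, n),
})

X = sm.add_constant(df[["clinical_pub_if3", "basic_pub_if3", "patient_age"]])
model = sm.Logit(df["readmit_30d"], X).fit(disp=0)

# Odds ratios and 95% confidence intervals (exponentiated coefficients)
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "2.5%": np.exp(model.conf_int()[0]),
    "97.5%": np.exp(model.conf_int()[1]),
})
print(or_table.round(3))
```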
... These include the journal's circulation, peer-review process, the impact factor [8], and funding disclosures of the articles [4], to name a few. Many reports have been published on the science behind usage metrics [9] and on whether citations accurately reflect the quality of research articles [10]. Self-citation is also prevalent as a way to increase and promote one's own work [11]. ...
Article
Full-text available
Understanding research usage metrics among clinical psychologists in India appeared necessary for improving the quality and number of our publications. A small purposive, non-random sample is provided, giving a glimpse into a larger topic for clinical psychologists in India.
... Zaini (2009) believes that the building block of a victorious academic career is "publishing, publishing, and publishing." This is endorsed by Opthof (1997), who highlights that universities with an exemplary research reputation to maintain have a strong need for scholars who can publish research findings in high-impact, indexed journals. ...
Article
South African senior academics do not accentuate the importance of the “publish or perish” mantra as required for young emerging scholars. This continued unfair and/or unjust practice is perpetuated further by a lack of attention to the problem, including limited interest in research country-wide among some senior academics. It is in this context that, where this injustice is reported, it is often undermined, side-lined, or even critiqued. This paper revisits the various challenges faced by young emerging scholars in South African universities. Owing to the complicated nature of the conduct of research in South African universities, the author did not pin-point any university by name, as the problem is evident country-wide and has been a systemic, strategic instigation to side-line emerging scholars from producing knowledge through various methods of gate-keeping. It also delays post-graduate students in the system and discourages them from seeing the importance of continuing post-graduate studies. Afrocentricity has been deployed as a theoretical lens, together with unstructured interviews and document reviews to collect data.
... Evaluation of research articles then becomes a matter of identifying the quality journals. Examples of strategies that use this approach include citation analysis (see, for example, Garfield, 1979;Harter, 1996;Nicolaisen, 2002), journal impact factor analysis (see, for example, Opthof, 1997), approaches based on the reputation of journals (see, for example, Giles et al., 1989;Blake, 1996;Kohl & Davis, 1985), peer-review status (Lee, et al., 2002), manuscript acceptance rate (Lee, et al., 2002), indexing of the journal in established indexing or abstracting services (Gehanno & Thirion, 2000), and number of subscribers to the journal (Lee, et al., 2002). All approaches have their strengths and limitations. ...
Article
Full-text available
This paper for the Seventh International Forum on Research in School Librarianship describes a small-scale pilot study that is part of a much larger longitudinal study of “Research and Researchers in School Librarianship”. The pilot study is a preliminary attempt to address issues associated with determining the quality of the published research in the field of school librarianship. The main aims are first, to test the extent to which experienced evaluators agreed in their rankings of research articles on the basis of quality; and secondly, to investigate the ways in which experienced evaluators evaluate research articles. A qualitative, naturalistic research design is used. The data collection was still proceeding at the time the paper was being written; the conference presentation will therefore provide further information about the results of the data analysis and draw some conclusions from the analysis. However, it is already clear from the literature review that the relationship between research quality and the adoption of the results of that research in decision making is more complex than we have supposed.
... Scientometric indicators can be sorted into three major categories: journal indicators (JI), author indices (AI) and article level metrics (ALM). The most popular JIs are the frequently criticized impact factor (Garfield, 1996, 1998, 2006; Opthof, 1997; Seglen, 1997), the article influence score (Bollen, Rodriguez, & Van de Sompel, 2006), the eigenfactor, etc. (Bergstrom, West, & Wiseman, 2008; DPM, 2008). Among AIs the most simple and popular is the h-index (Hirsch, 2005, 2007), but there are also many other variants like the g-index (Egghe, 2006), A-index (Jin, 2006), R-index (Jin, Liang, Rousseau, & Egghe, 2007), etc. ...
Article
Article level scientometric indicators (ALMs) are usually of cumulative nature making articles of different age hard to compare. Here, we introduce a new ALM, the Time Debiased Significance Score (TDSS), which measures the significance of a publication based on the structure of the whole citation network and eliminates the global ageing bias in the network: older publications should not be a priori privileged or disadvantaged compared to newer ones. The TDSS is based on a modified variant of the PageRank measure, incorporating a mathematically consistent temporal detrending and ensuring a few key features: (i) the TDSS should not show any global trend as a function of the topological index (causal order in the citation network); (ii) the TDSS value of a publication should decrease as time passes (and the citation network grows) if no more citations are associated with it. The above definition is beneficial in multiple ways, including e.g. low computational complexity and weak domain dependence. Further, estimation of reliability of the TDSS and its extension to groups of items like overall score of a research group are also possible.
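The TDSS definition itself is given in the article; the sketch below only illustrates its main ingredient, a PageRank computation on a citation network, followed by a crude per-year normalization that stands in for the paper's (more sophisticated) temporal detrending. The papers, years and edges are invented:

```python
import networkx as nx
import numpy as np

# Hypothetical citation network: an edge A -> B means paper A cites paper B.
papers = {"p1": 2010, "p2": 2012, "p3": 2015, "p4": 2018, "p5": 2020}
G = nx.DiGraph()
G.add_nodes_from(papers)
G.add_edges_from([("p2", "p1"), ("p3", "p1"), ("p3", "p2"),
                  ("p4", "p2"), ("p5", "p3"), ("p5", "p4")])

raw = nx.pagerank(G)  # structural importance in the citation network

# Crude age normalization (NOT the TDSS detrending): divide by the mean
# PageRank of papers from the same publication year, so that no year is
# privileged simply because its papers have had more time to accrue citations.
year_scores = {}
for p, score in raw.items():
    year_scores.setdefault(papers[p], []).append(score)
year_means = {y: np.mean(v) for y, v in year_scores.items()}

detrended = {p: raw[p] / year_means[papers[p]] for p in papers}
print({p: round(s, 3) for p, s in sorted(detrended.items())})
```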
... 2,3 The journal IF (which includes total citations to the journal) is not necessarily representative of citations to individual articles, as these vary widely. 3,16 Garfield himself states that 'of 38 million items cited from 1900-2005, only 0.5% were cited more than 200 times. Half [of the published articles] were not cited at all.' ...
Article
Objective (1) To analyse trends in the journal impact factor (IF) of seven general medical journals ( Ann Intern Med, BMJ, CMAJ, JAMA, Lancet, Med J Aust and N Engl J Med) over 12 years; and (2) to ascertain the views of these journals’ past and present Editors on factors that had affected their journals’ IFs during their tenure, including direct editorial policies. Design Retrospective analysis of IF data from ISI Web of Knowledge Journal Citation Reports—Science Edition, 1994 to 2005, and interviews with Editors-in-Chief. Setting Medical journal publishing. Participants Ten Editors-in-Chief of the journals, except Med J Aust, who served between 1999 and 2004. Main outcome measures IFs and component numerator and denominator data for the seven general medical journals (1994 to 2005) were collected. IFs are calculated using the formula: (Citations in year z to articles published in years x and y) / (Number of citable articles published in years x and y), where z is the current year and x and y are the previous two years. Editors’ views on factors that had affected their journals’ IFs were also obtained. Results IFs generally rose over the 12-year period, with the N Engl J Med having the highest IF throughout. However, percentage rises in IF relative to the baseline year of 1994 were greatest for CMAJ (about 500%) and JAMA (260%). Numerators for most journals tended to rise over this period, while denominators tended to be stable or to fall, although not always in a linear fashion. Nine of ten eligible editors were interviewed. Possible reasons given for rises in citation counts included: active recruitment of high-impact articles by courting researchers; offering authors better services; boosting the journal's media profile; more careful article selection; and increases in article citations. Most felt that going online had not affected citations. Most had no deliberate policy to publish fewer articles (lowering the IF denominator), which was sometimes the unintended result of other editorial policies. The two Editors who deliberately published fewer articles did so as they realized IFs were important to authors. Concerns about the accuracy of ISI counting for the IF denominator prompted some to routinely check their IF data with ISI. All Editors had mixed feelings about using IFs to evaluate journals and academics, and mentioned the tension between aiming to improve IFs and ‘keeping their constituents [clinicians] happy.’ Conclusions IFs of the journals studied rose in the 12-year period due to rising numerators and/or falling denominators, to varying extents. Journal Editors perceived that this occurred for various reasons, including deliberate editorial practices. The vulnerability of the IF to editorial manipulation and Editors’ dissatisfaction with it as the sole measure of journal quality lend weight to the need for complementary measures.
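Using the formula quoted in the abstract, a small worked example with invented numbers shows how a rising numerator and a falling denominator jointly push the IF up:

$$\mathrm{IF}_{z} = \frac{\text{citations in year } z \text{ to articles from years } x, y}{\text{citable articles published in years } x, y}, \qquad \frac{3000}{1000} = 3.0 \;\longrightarrow\; \frac{3600}{900} = 4.0$$

Here a 20% rise in citations combined with a 10% drop in citable articles raises the IF by about 33%, which is why editorial decisions that affect either term can move the IF without any change in article-level citation behaviour.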
... Additionally, journals with high impact factors have higher retraction rates due to fraud and other serious forms of misconduct than journals with lower impact factors [34]. Furthermore, impact factors are not intended to serve as the primary criterion for what makes “good research,” nor are they designed to be the sole mark of “quality” of a journal [35-39]. In other words, the question of what constitutes “good research” is separate from what is, or is not, a “good journal.” ...
Article
Full-text available
Background: Journals with high impact factors (IFs) are the “coin of the realm” in many review, tenure, and promotion decisions, ipso facto, IFs influence academic authors’ views of journals and publishers. However, IFs do not evaluate how publishers interact with libraries or academic institutions. Goal: This provisional system introduces an evaluation of publishers exclusive of IF, measuring how well a publisher’s practices align with the values of libraries and public institutions of higher education (HE). Identifying publishers with similar values may help libraries and institutions make strategic decisions about resource allocation. Methods: Democratization of knowledge, information exchange, and the sustainability of scholarship were values identified to define partnership practices and develop a scoring system evaluating publishers. Then, four publishers were evaluated. A high score indicates alignment with the values of libraries and academic institutions and a strong partnership with HE. Results: Highest scores were earned by a learned society publishing two journals and a library publisher supporting over 80 open-access journals. Conclusions: Publishers, especially nonprofit publishers, could use the criteria to guide practices that align with mission-driven institutions. Institutions and libraries could use the system to identify publishers acting in good faith towards public institutions of HE.
... Slowly but steadily, these old paradigms are shifting, with open access publishing, semantically enriched content, data publication, and machine-readable metadata gaining momentum and importance [32,36]. Opposition is also growing against the use of the impact factor [8,9,23] or the h-index as metrics for assessing the participants in this publication process, and it has been shown that these metrics can be tampered with easily [1,7,28,30]. ...
Preprint
Scientific publishing is the means by which we communicate and share scientific knowledge, but this process currently often lacks transparency and machine-interpretable representations. Scientific articles are published in long coarse-grained text with complicated structures, and they are optimized for human readers and not for automated means of organization and access. Peer reviewing is the main method of quality assessment, but these peer reviews are nowadays rarely published and their own complicated structure and linking to the respective articles is not accessible. In order to address these problems and to better align scientific publishing with the principles of the Web and Linked Data, we propose here an approach to use nanopublications as a unifying model to represent in a semantic way the elements of publications, their assessments, as well as the involved processes, actors, and provenance in general. To evaluate our approach, we present a dataset of 627 nanopublications representing an interlinked network of the elements of articles (such as individual paragraphs) and their reviews (such as individual review comments). Focusing on the specific scenario of editors performing a meta-review, we introduce seven competency questions and show how they can be executed as SPARQL queries. We then present a prototype of a user interface for that scenario that shows different views on the set of review comments provided for a given manuscript, and we show in a user study that editors find the interface useful to answer their competency questions. In summary, we demonstrate that a unified and semantic publication model based on nanopublications can make scientific communication more effective and user-friendly.
... Among the wide range of products the company offers, it is their multiple uses that have favoured the near-exclusive status of the JCR, and of its JIF algorithm, as the measure of the scientific quality of journals and, by extension, of institutions and researchers. Even though many journals regretted this situation and pointed out the limits of this bibliometric tool (Opthof, 1997; Seglen, 1997), they simultaneously acknowledged that it could not be ignored. ...
Article
Against the view of bibliometrics as a unified and coherent whole, this article distinguishes three major components of the informational infrastructures that support the evaluation of scientific research: algorithms, datasets, and bibliometric tools. By retracing the genesis and success of some of these components, from 1960 to the present day, the authors identify several configurations in which the position of the dataset producers, the preferred target of the algorithms, and the form of the bibliometric tools are articulated in different ways. This history shows repeated crossings between bibliometrics and webometrics, and recalls that any evaluation necessarily rests on a definition of which peers are deemed relevant.
... The number of citations can indicate the value of an article (Garfield 1972), and the articles published in journals with high impact factors may receive more citations (Leimu & Koricheva 2005). However, publishing in a journal with a high impact factor does not ensure high annual citations (Opthof 1997). Although the number of citations increased with the journal's impact factor, our findings did not corroborate the notion of the so-called "publication effect". ...
Article
The intermediate disturbance hypothesis (IDH) suggests that the peak of species diversity occurs at intermediate-scale disturbances. The IDH has received criticism because many studies have shown that the relationship between disturbances and species diversity is generally not unimodal. We searched Web of Science for articles on the IDH to study the applications of the hypothesis in animal and plant studies. We classified the articles found into those presenting evidence in favour of and against the IDH. Furthermore, we analysed the effects of article age and of the impact factor of the journal in which an article was published on the number of citations the article received. We found that most arguments against the IDH appeared in papers on aquatic ecology and in papers published in journals with higher impact factors. Those articles were also cited more often than those presenting evidence in support of the IDH. We thus conclude that the IDH seems to be less supported in newer papers and particularly in those in the field of aquatic ecology.
... In a later paper, he claimed that the journal impact factor is not related to the quality of articles and thus should not be used for research evaluation (Seglen, 1997). Similarly, the next five root papers discussed the usability, advantages and disadvantages of the various 'impact factors', either in medicine or in general (Carmi, 1997; Garfield, 1999; Gisvold, 1999; Opthof, 1997; Smith, 1997). Contrary to Seglen's studies, Callaham et al. (2002) found that the journal impact factor is as important as other traditional measures of publication quality. ...
Article
Background: The application of bibliometrics in medicine enables one to analyse vast amounts of publications and their production patterns on macroscopic and microscopic levels. Objectives: The aim of the study was to analyse the historical perspective of research literature production regarding the application of bibliometrics in medicine. Methods: Publications related to the application of bibliometrics in medicine from 1970 to 2018 were harvested from the Scopus bibliographic database. Reference Publication Year Spectroscopy was triangulated with the VOSViewer to identify historical roots and the evolution of topics and clinical areas. Results: The search resulted in 6557 publications. The literature production trend was positive. Historical roots analysis identified 33 historical roots and 16 clinical areas where bibliometrics was applied. Discussion: The increase in productivity in the application of bibliometrics in medicine might be attributed to the increased use of quantitative metrics in research evaluation, the publish-or-perish phenomenon and the increased use of evidence-based medicine. Conclusion: The trend of the literature production was positive. Medicine was at the forefront of knowledge development in bibliometrics. Reference Publication Year Spectroscopy proved to be an accurate method which was able to identify most of the historical roots.
... The categorization of journals and their impact is a contentious subject in many disciplines, such as management (Mingers & Wilmott 2013; Singh, Haddad, & Chow, 2007), mathematics (Rousseau, 1988), psychology (Smart & Elton 1982), and various medical disciplines (e.g., Hannson, 1995; Opthof 1997; Saha, Saint, & Christakis, 2003). The issue has also exercised IS academics (e.g., Cuellar, Truex, & Takeda, 2016b; Gillenson & Stutz, 1991; Hamilton & Ives, 1980; Katerattanakul, Razi, Han, & Kam, 2005; Lowry et al., 2013; Peffers & Ya, 2003; Stewart & Cotton, 2018). ...
... Furthermore, whether citation per se is a reliable measurement of scholarly impact is still an open question (Opthof, 1997; Seglen, 1997; Harter & Nisonger, 1997). One concern is self-citation, where scholars deliberately cite the article, journal or academic institution they are affiliated with. ...
Preprint
Full-text available
The adoption of digital library services which provide users access to resources from anywhere has enabled the collection of data about the learning behavior of library patrons. Such Big Data can yield valuable insights into how learning happens and can be used to build recommendation systems for education. By their nature, such resources are interconnected by bibliometric metadata. In this paper, we develop and test methods for building graphs of research corpora accessed by patrons through a library proxy server. We provide open-source software for building and analyzing these representations and discuss the challenges of identifying and discovering metadata from sparse proxy server logs. In addition, we discuss the potential for further research in network modeling of library access records.
... The first, which dates back to the formulation of the journal impact factor in the sixties and is still predominant (Archambault and Larivière 2009; Garfield 1955, 1972; Glänzel and Moed 2002), refers to the importance of getting published in high-impact journals. Accordingly, a somewhat misplaced parallel is often drawn between impact (of the publication outlet) and quality (of the article) (Buela-Casal and Zych 2012; Monastersky 2005; Opthof 1997; Seglen 1997; Zupanc 2014). The second change, much closer in time, consists of the availability of several tools to measure, and possibly improve, the impact of research papers. ...
Article
Full-text available
The paper authored by Zong et al. (Scientometrics, 2019. https://doi.org/10.1007/s11192-019-03108-w) claims that equipping articles with a video abstract provides them with a citation advantage. Here I argue that the study above does not consider two potential confounding factors, namely the role played by self-citations and by self-selection bias. Author self-citations inflate the citation premium of the articles analyzed in the study referenced above; thus the net effect of video abstracts is lower than expected. What is more, articles with a video abstract seem to be associated with higher citations in comparison to their counterparts without the video companion due to self-selection bias. Namely, authors may be prone to include a video abstract in the articles they believe are of outstanding quality and best representative of their research activities. All this suggests that the alleged citation advantage of video abstracts is, at least, of doubtful occurrence.
... Nevertheless, there are numerous problems with the accuracy of the calculations and their ability to reflect impact (Egghe, 1988;Dellavalle, Schilling, Rodriguez, Van de Sompel, & Bollen, 2007). There are also major problems of over-interpretation, leading to inappropriate uses (Opthof, 1997;Boell & Wilson, 2010), such as those that ignore disciplinary differences. ...
Preprint
Full-text available
Reading academic publications is a key scholarly activity. Scholars accessing and recording academic publications online are producing new types of readership data. These include publisher, repository, and academic social network download statistics as well as online reference manager records. This chapter discusses the use of download and reference manager data for research evaluation and library collection development. The focus is on the validity and application of readership data as an impact indicator for academic publications across different disciplines. Mendeley is particularly promising in this regard, although these data sources are not subject to rigorous quality control and can be manipulated.
... Traditional research on academic content quality has focussed on research papers. Early studies explored research papers' quality by examining the features of journals; for example, journal reputation was analysed using citations or impact factors (Blake, 1996; Opthof, 1997). Researchers then progressed to assessing the quality of a paper based on its external features, such as author reputation and paper citations (Ugolini et al., 1997; Mukherjee, 2007). ...
Article
Full-text available
Purpose Academic social (question and answer) Q&A sites are now utilised by millions of scholars and researchers for seeking and sharing discipline-specific information. However, little is known about the factors that can affect their votes on the quality of an answer, nor how the discipline might influence these factors. The paper aims to discuss this issue. Design/methodology/approach Using 1,021 answers collected over three disciplines (library and information services, history of art, and astrophysics) in ResearchGate, statistical analysis is performed to identify the characteristics of high-quality academic answers, and comparisons were made across the three disciplines. In particular, two major categories of characteristics of the answer provider and answer content were extracted and examined. Findings The results reveal that high-quality answers on academic social Q&A sites tend to possess two characteristics: first, they are provided by scholars with higher academic reputations (e.g. more followers, etc.); and second, they provide objective information (e.g. longer answer with fewer subjective opinions). However, the impact of these factors varies across disciplines, e.g., objectivity is more favourable in physics than in other disciplines. Originality/value The study is envisioned to help academic Q&A sites to select and recommend high-quality answers across different disciplines, especially in a cold-start scenario where the answer has not received enough judgements from peers.
... We ordered them by the mean number of citations per year in the Web of Science database in the three years that followed their publication. We used this number as a surrogate for the quality of the articles (Opthof, 1997). We categorized these articles into three levels of impact (using a logarithmic scale): low (less than 0.99 citations/year), medium (between 1.00 and 9.99 citations/year) and high (more than 10.00 citations/year) impact. ...
Article
Full-text available
Tinbergen's question "What does the behavior exist for?" has contributed to the establishment of behavioral ecology. However, communication within this discipline could be impaired if one does not realize that the question may refer to distinct temporal scopes. Answering it requires specific methodological approaches for each scope: different interpretations of the question refer to different processes. Here we evaluate whether the behavioral ecology literature avoids these pitfalls. We analyze a sample of the articles related to Tinbergen's question, evaluating if they: precisely delimit the temporal scope of the question; use methodology appropriate to the temporal scope of the article; accurately define the terms used to refer to the survival value of behavior; and use the terms consistently. Additionally, we evaluate whether the citation of these articles is impaired by misinterpretations regarding the temporal scope and terms associated with the question. Of the 22 analyzed articles, three present problems in defining the time of the question, but in the other 19, methods suited to the time studied were used. Four terms (fitness, effect, adaptation, and function) were used to refer to the utility of the behavior, but only one article defined all of them. We found no communication problems in the citing process regarding the time of interest of the question and the terms used to refer to the usefulness of the behavior in the 16 analyzed citation events. Low/medium- and high-impact articles were similar in terms of the problems found. We suggest future articles should define the terms used, in order to avoid miscommunication in the field.
... Since its introduction in 1960, the impact factor has established itself among the various ranking systems that allow conclusions to be drawn about the influence of a journal, a publication, or a person (Bollen, van de Sompel, Hagberg & Chute, 2009). Nevertheless, it is not free from criticism (Müller, 2009), is at times misleading (Opthof, 1997), and appears unsuitable for evaluating individual research performance (Arbeitsgemeinschaft der Wissenschaftlichen Medizinischen Fachgesellschaften e.V., 2014). The impact factor serves to assess journals and is calculated as the quotient of the number of citations to published articles and the number of articles published within a two-year window (Müller, 2009). ...
Article
Full-text available
BACKGROUND The frequency of publications by nursing scientists from the German-speaking area in journals with a high impact factor is an indicator of the discipline's participation in the international discourse. Previous publication analyses focused on nursing science journals only and regularly found an underrepresentation of experimental studies and clinical topics. AIM To identify and analyse the number of publications by nursing scientists from Germany, Austria and German-speaking Switzerland in international high-impact journals. METHOD The Journal Citation Reports were used to identify nursing-relevant journal categories, from which the top 10% of journals for the years 2010 to 2014 were selected according to the 5-year Impact Factor. Inclusion of publications and data extraction were carried out by two independent persons. RESULTS 106,939 publications from 126 journals were screened; 100 publications were identified, with 229 contributions by 114 nursing scientists. 42% of the studies are observational and 11% are experimental. The majority of the studies are clinically oriented (55%). More than 50% were published in the past two years. CONCLUSIONS The number of publications by nursing scientists from the German-speaking countries in high-impact journals is low. There was an increase throughout the observation period. In contrast to former analyses, a higher proportion of clinical research was found.
... Thus, actual measurement of the impact factor of a journal is difficult. In another work [20], Opthof et al. observed that the impact factor of a journal does not necessarily reflect the impact of its individual publications or the scientific impact of their authors. ...
Conference Paper
Quantifying scientific productivity has traditionally been one of the major areas of research. An important component in measuring the scientific productivity of a researcher has been the number and citation count of his publications. However, because citation-based measures of scientific productivity are influenced by domain-specific factors, like the popularity of the research domain and the key topics within the domain, as well as temporal factors like aging, these measures may not suitably reflect the contribution of a researcher uniformly across all domains. In this paper, we introduce social trust on a researcher in a given domain as a measure of his scientific productivity and success. We argue that trust in a scientific domain is a social component that indicates productivity and can influence several parameters like the collaborations of researchers as well as the citations received by their publications. Unlike the citation count of publications, trust is not domain specific and hence can be used uniformly across all domains to measure scientific productivity. Our proposed measure of trust relies on a trust-based network of authors (nodes), where a link between two nodes is based on social indicators like co-authorship and citation counts. We validate the correctness as well as the effectiveness of our proposed approach empirically using the ArnetMiner dataset. Observations indicate that the proposed measure not only mitigates the aging issues prevalent in citation-based measures but can also predict the possibility of future success of researchers in terms of citation count.
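A rough sketch of the kind of author-level trust network described above, with edges weighted by co-authorship and citation counts; aggregating the network into a single trust score with PageRank is an assumption made here for illustration, not necessarily the paper's exact measure:

```python
import networkx as nx

# Hypothetical author-level trust network. Edge weights combine two social
# indicators: number of co-authored papers and citations between the authors.
coauthorships = {("A", "B"): 5, ("B", "C"): 2, ("A", "C"): 1, ("C", "D"): 3}
citations = {("A", "B"): 10, ("B", "A"): 4, ("C", "A"): 7, ("D", "C"): 2}

G = nx.DiGraph()
for (u, v), k in coauthorships.items():      # co-authorship counts both ways
    for a, b in [(u, v), (v, u)]:
        G.add_edge(a, b, weight=G.get_edge_data(a, b, {"weight": 0})["weight"] + k)
for (u, v), c in citations.items():          # directed citation counts
    G.add_edge(u, v, weight=G.get_edge_data(u, v, {"weight": 0})["weight"] + c)

# One possible way to turn the network into a per-author trust score
trust = nx.pagerank(G, weight="weight")
print({a: round(t, 3) for a, t in sorted(trust.items(), key=lambda x: -x[1])})
```

Because the score depends on the network position rather than on raw citation totals, it behaves the same way across research domains, which is the property the paper argues for.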
... However, Scimago Lab (2011) freely provides a web-based ranking tool which allows users to obtain IF information from the Scopus database. Opthof (1997) strongly criticized the use of the IF in evaluating researchers' academic work. He further contended that the IF is not a reliable tool for assessing the quality of individual papers and scientists. ...
... Also, in each cluster there was no relationship between the adjusted citation count and the adjusted impact factor. This is consistent with previous reports on cardiovascular research showing that the journal impact factor is not representative of the citation counts of its individual papers (Opthof, 1997; Opthof et al., 2004). ...
Article
Full-text available
Background: It might be difficult for clinicians and scientists to identify comprehensively the major research topics given the large number of publications. A bibliometric report that identifies the most-cited articles within the body of the relevant literature may provide insight and guidance for readers toward scientific topics that are considered important by researchers and all relevant workers of academia. To our knowledge, there is a lack of an overall evaluation of the most-cited articles and hence of a comprehensive review of major research topics in neuroscience. The present study was therefore proposed to analyze and characterize the 100 most-cited articles in neuroscience. Methods: Based on data provided by Web of Science, the 100 most-cited articles relevant to neuroscience were identified and characterized. Information was extracted for each included article on the publication year, publishing journal, impact factor, adjusted impact factor, citation count (total, normalized, and adjusted), reference list, authorship and article type. Results: The total citation count for the 100 most-cited articles ranged from 7,326 to 2,138 (mean 3087.0) and the normalized citation count ranged from 0.163 to 0.007 (mean 0.054). The majority of the 100 articles were research articles (67%) and published from 1996 to 2000 (30%). The author and journal with the largest share of these 100 articles were Stephen M. Smith (n = 6) and Science (n = 13), respectively. Among the 100 most-cited articles, 37 were interlinked via citations of one another, and they could be classified into five major topics, four of which were scientific topics, namely neurological disorders, prefrontal cortex/emotion/reward, brain network, and brain mapping. The remaining topic was methodology. Interestingly, 41 of the remaining 63 non-interlinked articles could also be categorized under the above five topics. The adjusted journal impact factor among these 100 articles did not appear to be associated with the corresponding adjusted citation count. Conclusion: The current study compiles a comprehensive list and analysis of the 100 most-cited articles relevant to neuroscience that enables the comprehensive identification and recognition of the most important and relevant research topics concerned.
... Leydesdorff and Opthof (2010), Moed et al. (2012), and Vanclay (2012) called for confidence intervals to be provided for journal metrics. Such uncertainty measures can be found, for instance, in Schubert and Glänzel (1983), Nieuwenhuysen and Rousseau (1988), Opthof (1997), Greenwood (2007), Stern (2013), and Chen, Jen, and Wu (2014). Therefore, we decided to base our decision for a draw on the official stability intervals provided by CWTS Journal Indicators. Basically, these stability intervals are based on bootstrapping and can be interpreted as 95% confidence bands, representing the range within which the journal-specific SNIP fluctuates. ...
Article
In this paper we transfer the Elo rating system, which is widely accepted in chess, sports and other disciplines, to rank scientific journals. The advantage of the Elo system is the explicit consideration of the factor time and of the history of a journal's ranking performance. Most other commonly applied rankings neglect this fact. The Elo ranking methodology can easily be applied to any metric that is published on a regular basis to rank journals. We illustrate the approach using the SNIP indicator based on citation data from Scopus. Our data set consists of more than 20,000 journals from many scientific fields for the period from 1999 to 2015. We show that the Elo approach produces similar but by no means identical rankings compared to other rankings based on the SNIP alone or the Tournament Method. The rank order for rather 'middle-class' journals, in particular, can change tremendously.
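A minimal sketch of an Elo-style update applied to journals, where each yearly 'match' between two journals is decided by their SNIP values and a draw is declared when their stability intervals overlap; the K-factor, SNIP values and intervals below are invented, simplifying assumptions rather than the paper's parameters:

```python
from itertools import combinations

def expected(r_a, r_b):
    """Expected score of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_round(ratings, snip, intervals, k=16):
    """One yearly round: every journal 'plays' every other journal once."""
    new = dict(ratings)
    for a, b in combinations(ratings, 2):
        lo_a, hi_a = intervals[a]
        lo_b, hi_b = intervals[b]
        if hi_a >= lo_b and hi_b >= lo_a:   # overlapping stability intervals -> draw
            s_a = 0.5
        else:
            s_a = 1.0 if snip[a] > snip[b] else 0.0
        new[a] += k * (s_a - expected(ratings[a], ratings[b]))
        new[b] += k * ((1.0 - s_a) - expected(ratings[b], ratings[a]))
    return new

# Made-up SNIP values and stability intervals for one year
ratings = {"J1": 1500.0, "J2": 1500.0, "J3": 1500.0}
snip = {"J1": 2.4, "J2": 1.1, "J3": 1.2}
intervals = {"J1": (2.1, 2.7), "J2": (0.9, 1.3), "J3": (1.0, 1.4)}

print(elo_round(ratings, snip, intervals))
```

Running the round repeatedly with each year's metric values is what gives the rating its memory of past performance, which is the feature the paper emphasizes over single-year rankings.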
... errors, confusions, and new developments concerning impact and the impact factor (Opthof 1997, Seglen 1997, Kurmis 2003, Decker et al. 2004), and it has already been covered in detail in the GfBS News (Schmidt 2006). Garfield himself warned against its misapplication as a quality yardstick for authors ("misuse in evaluating individuals", Garfield 1998b), even in an article in German (Garfield 1998a ...
Article
Background The simplest variables to quantify on an academic curriculum vitae are the impact factors (IFs) of journals in which articles have been published. As a result, these measures are increasingly used as part of academic staff assessment. The present study tests the hypotheses that IFs exhibit patterns that are consistent between journals of different specialties and that these IFs reflect the quality of staff academic performance. Methods The IFs of a sample of journals from each of four medical specialties—medicine, oncology, genetics, and public and occupational health—were downloaded from the Science Citation Index and compared. Overall and specialty-specific journal IF frequencies were analyzed with respect to distribution patterns, averages, and skew. Results Approximately 91% of journal IFs fell within the 0 to 5 range, with 97% being less than 10. The overall IF distribution featured a positive skew and a mean of 2.5. Separate analysis of the journal specialty subsets revealed significant differences in IF means (genetics 3.4 > oncology 3.1 > medicine 2.0 > public health 1.6; p < .006), all of which well exceeded the respective IF medians. Journals from the general medicine category exhibited both the lowest IF median (0.7) and the most positively skewed distribution. Conclusion The distribution of IFs exhibits degrees of skew, numeric average, and spread that differ significantly between journal specialty subsets. This suggests that factors other than random variations underlie much of the IF variation between specialty journals and reduces the plausibility of a reliable correlation between IFs and the quality of academic staff performance. It is concluded that a dominant emphasis on IFs in academic recruitment and promotion may select for long-term faculty characteristics other than academic quality alone.
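The conclusion turns on the fact that the IF means exceed the corresponding medians, which is the signature of a positively skewed distribution. The short illustration below uses a lognormal-shaped sample as a stand-in for an IF distribution; it is not the study's data, only a numerical reminder that positive skew pushes the mean above the median.

```python
# Illustration (not the study's data): a positively skewed sample of journal
# impact factors has a mean well above its median, as reported in the abstract.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
ifs = rng.lognormal(mean=0.3, sigma=0.8, size=1000)  # assumed IF-like sample

print(f"mean   = {ifs.mean():.2f}")
print(f"median = {np.median(ifs):.2f}")
print(f"skew   = {skew(ifs):.2f}")   # positive value -> right-skewed
print(f"share of IFs below 5: {(ifs < 5).mean():.0%}")
```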
Article
Full-text available
There is a clear systemic motive to silence and undermine the genuine voices of young academics in comprehensive South African universities. This manifests in various ways, including gate-keeping publishing techniques. Senior academics do not impress upon young emerging academics, as needed and in good time, the significance of the ‘publish or perish’ mantra. This invidious practice is further perpetuated by scholarly reports and the media alike, which intentionally pay little attention to this ongoing injustice; where it is reported, it is often given little attention, side-lined or even critiqued. This research article revisits the various challenges facing young emerging scholars in South African universities. For complicated ethical reasons, the author does not dwell on pin-pointing universities one by one, not least because the problem appears to be a country-wide systemic effort to undermine the voices of young emerging scholars who were previously marginalized and excluded by the apartheid research system. I therefore adopt Afrocentricity as a theoretical lens to challenge the perpetuation of this continued intentional and discriminatory practice against publishing while young. https://doi.org/10.19108/KOERS.86.1.2500
Chapter
Scientific publishing is the means by which we communicate and share scientific knowledge, but this process currently often lacks transparency and machine-interpretable representations. Scientific articles are published in long coarse-grained text with complicated structures, and they are optimized for human readers and not for automated means of organization and access. Peer reviewing is the main method of quality assessment, but these peer reviews are nowadays rarely published and their own complicated structure and linking to the respective articles are not accessible. In order to address these problems and to better align scientific publishing with the principles of the Web and Linked Data, we propose here an approach to use nanopublications as a unifying model to represent in a semantic way the elements of publications, their assessments, as well as the involved processes, actors, and provenance in general. To evaluate our approach, we present a dataset of 627 nanopublications representing an interlinked network of the elements of articles (such as individual paragraphs) and their reviews (such as individual review comments). Focusing on the specific scenario of editors performing a meta-review, we introduce seven competency questions and show how they can be executed as SPARQL queries. We then present a prototype of a user interface for that scenario that shows different views on the set of review comments provided for a given manuscript, and we show in a user study that editors find the interface useful to answer their competency questions. In summary, we demonstrate that a unified and semantic publication model based on nanopublications can make scientific communication more effective and user-friendly.
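The chapter answers its competency questions with SPARQL queries over nanopublication graphs. The toy sketch below shows that general pattern with rdflib; the mini-vocabulary (ex:targets, ex:severity, ex:text) is invented for illustration and is not the model used by the authors.

```python
# Toy sketch: a review comment linked to the article paragraph it targets,
# queried with SPARQL. The vocabulary (ex:...) is invented for illustration
# and is not the nanopublication model described in the chapter.
from rdflib import Graph, Namespace, Literal, URIRef

EX = Namespace("http://example.org/")
g = Graph()

paragraph = URIRef("http://example.org/article1/paragraph3")
comment = URIRef("http://example.org/review1/comment2")

g.add((comment, EX.targets, paragraph))
g.add((comment, EX.severity, Literal("major")))
g.add((comment, EX.text, Literal("The sample size justification is missing.")))

# Competency-question-style query: which paragraphs received 'major' comments?
query = """
PREFIX ex: <http://example.org/>
SELECT ?paragraph ?text WHERE {
    ?comment ex:targets ?paragraph ;
             ex:severity "major" ;
             ex:text ?text .
}
"""
for row in g.query(query):
    print(row.paragraph, "->", row.text)
```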
Article
Motivation/Background: Publishing in highly rated journals has been the primary prerequisite for hiring, appraising and promoting academics in higher institutions since the beginning of the 21st century. Lecturers have become more concerned with this than with classroom activities. This paper seeks answers to the following questions: what is the nature of impact factor publishing? Is there any relationship between impact factor policy and the development of education and scholarship in Nigerian higher institutions? Methods: Qualitative data gathering, content analysis, and the Conservative Theory of imperialism as the framework of analysis were adopted. Results: The results reveal that the impact factor is an ineffective index for academic evaluation, perpetuates academic and economic imperialism, and undermines the development of higher education and scholarship in Nigeria. Conclusions: The mechanisms of impact factor rating are entirely Western and neo-colonial, while its application as a measurement index for the evaluation of lecturers negates the goal for which it was introduced. The relevance of this conclusion for higher education in Nigeria lies in its advocacy for policy reforms and the abrogation of the orthodox impact factor policy. This supports the recommendations of some scholars for the reintroduction of the classroom performance evaluation index that was discarded in favour of the policy.
Article
Full-text available
The project to restructure the Revista Brasileira de Odontologia (RBO), begun in 2016, has been characterized by the continuous effort of its managers, the academic community and partner institutions to meet the requirements set by evaluation and indexing bodies, so as to consolidate the journal as a periodical of recognized importance and of the highest standing nationally and internationally. At present, the goal of internationalizing the RBO is close to being achieved: steps have already been taken toward indexing in databases through which it can obtain the status of an "international" publication, marking another consolidated step in the journal's progressive quality. We know that the journal must serve both the researchers who submit articles for publication and its readers. Researchers' main goals in publishing are: a) to validate results (a validation that depends on the journal's reputation); b) to reach the target audience (in general, as international as possible); and c) to gain recognition for the excellence of the work. Brazilian dentists who publish are most often affiliated with postgraduate programs, which press for better publication profiles with international visibility. In this light, the impact factor (the number of citations relative to the number of publications in the journal over the two preceding years) is relevant to the journal's reputation and can be understood as an indicator of quality, although it does not reflect the quality of an individual paper or researcher. Journals with a low impact factor suffer an exodus of researchers, including the most reputable ones; as a result, journals with a low impact factor tend toward inertia, with no means of strengthening themselves. Following this line of thought, our trajectory is guided by the pursuit of the best international indexing, which will ultimately increase the flow of publications and the internationalization of the journal.
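The editorial's parenthetical definition of the impact factor (citations relative to the number of publications in the journal over the two preceding years) corresponds to the usual two-year calculation, sketched below with invented figures.

```python
# Two-year impact factor as described in the editorial: citations received in
# a given year to items published in the two preceding years, divided by the
# number of items published in those two years. Figures are invented.
def two_year_impact_factor(citations_to_prev_two_years: int,
                           items_published_prev_two_years: int) -> float:
    return citations_to_prev_two_years / items_published_prev_two_years

# Hypothetical example: 180 citations in 2020 to the 120 articles the journal
# published in 2018-2019.
print(f"IF = {two_year_impact_factor(180, 120):.2f}")  # -> IF = 1.50
```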
Article
Full-text available
Background: Citation analysis is one of the methods frequently used for assessing research trends and the impact of published scientific literature in a particular discipline. Journals may strive to improve their metrics by choosing manuscripts and study designs that are more likely to be cited. The aim of this study was to identify the 50 most-cited articles in the field of pediatrics, analyze their study design and other characteristics, and assess the prevalence of systematic reviews among them. Methods: In December 2017, we searched Web of Science (WoS) for all articles published in the field of pediatrics. Two authors screened articles independently and included in the further analysis the 50 articles with the highest number of citations. To avoid bias in favour of papers published earlier, the citation density was calculated. We also analyzed the Journal Impact Factor (JIF) of the journals in which the citation classics were published. Results: The citation density of the top 50 cited articles in the field of pediatrics ranged from 33.16 to 432.8, with an average of 119.95. Most of the articles reported clinical science. The median 2016 JIF of the journals that published them was 6.226 (range: 2.778 to 72.406). Half of the top 10 highly cited articles in pediatrics were published in a journal with a JIF below 5. Most of the studies among the citation classics in pediatrics were cross-sectional studies (N = 22), followed by non-systematic narrative reviews (N = 10), randomized controlled trials (N = 5), cohort studies (N = 5), systematic reviews (N = 2), case-control studies (N = 2), case reports (N = 2), one study protocol and one expert opinion. Conclusion: Few randomized controlled trials and systematic reviews were among the citation classics in the field of pediatrics. Articles that use observational research methodology and are published in journals with lower impact factors can become citation classics.
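Citation density is commonly computed as total citations divided by the number of years since publication; the abstract does not give the exact formula used, so the sketch below is an assumption along those lines, with invented values.

```python
# Citation density as citations per year since publication (assumed formula;
# the abstract does not spell out the exact denominator used by the authors).
def citation_density(total_citations: int, publication_year: int,
                     census_year: int = 2017) -> float:
    years = max(census_year - publication_year, 1)  # avoid division by zero
    return total_citations / years

# Hypothetical pediatric citation classics
print(citation_density(2400, 2005))  # 200.0 citations per year
print(citation_density(680, 1997))   # 34.0 citations per year
```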
Article
Full-text available
The global standards used to evaluate research papers have become especially important in recent times, given the accelerating pace of, and intense competition in, scientific research. Ratings of scientific journals differ from one field to another: some journals have a very high rating of up to roughly 40 points, while others reach only 0.01 points, where a "point" denotes the strength of the journal, in other words its impact factor. The impact factor is not the ideal way to measure the quality of articles, but nothing better is available; its advantage is its ready availability, and it is a useful instrument of scientific assessment. Experience has shown that the best journals in each discipline are those that are most selective in accepting articles, and these are the journals with high impact factors; most of them existed before the impact factor was devised. The study reached several findings, the most important being: the SCImago Journal & Country Rank lists the largest number of journals in library and information science, reaching 205 journals in 2013, a huge increase from 89 journals in 1999, i.e. a difference of 116 journals over 14 years; this exceeds the core list of library and information science journals issued by Thomson Reuters by 142 journals, a considerable number for the discipline. Five Arab journals in library and information science (the International Journal of Libraries and Information, Library Message, World of Books, Arabic 3000, and Al-Fihrist) are the first Arab journals in the field to have applied for the Arab Impact Factor, while many other specialized journals in the field have not applied for an impact factor at all. The main recommendations of the study are: to accelerate the measurement of impact factors for Arabic journals in library and information science; to ensure that these journals have websites, whether they appear in print or electronic form; and to work toward standard methods for measuring not only the impact factor but the overall quality of a scientific journal, so that Arab journals can reach international standing. Keywords: Thomson Reuters Web of Science (ISI Web of Knowledge), SCImago Journal & Country Rank, Centre for Science and Technology Studies (CWTS) Journal Indicators, Journal Impact Factor (JIF), Global Institute for Scientific Information (GISI), Science Citation Index (SCI), Global Impact Factor (GIF), Institute for Information Resources, Citefactor.
Article
The Journal Impact Factor captures recent attention to a journal rather than its long-term impact. This paper mainly aims to find indicators that can effectively quantify the long-term impact of a journal, with the aim of providing more useful supplementary information for journal evaluation. By examining the correlation between articles' past citations and their future citations in different time windows, we found that articles cited in past years also carry useful information about future citations. The age characteristics of these sustained active articles in journals provide clues for establishing long-term impact metrics for journals. A new indicator, the h1-index, is proposed to identify the active articles, i.e. those with at least as many citations as the h1-index in the statistical year. On this basis, four indicators describing the age characteristics of active articles are proposed to quantify the long-term impact of journals. The experimental results show that these indicators correlate highly with journals' total citations, indicating that they are appropriate for expressing a journal's impact. Combining the average age of the active articles with journal impact factors, we found that some journals with short-term attraction strategies can also build long-term impact. The indicators presented in this paper, which describe the long-term impact of journals, will be a useful complement to journal quality assessment.
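Read from the abstract, the h1-index appears to be an h-type threshold computed on citations received in the statistical year: the largest h1 such that h1 articles each received at least h1 citations in that year, those articles being the "active" ones. The sketch below implements that reading, which is an interpretation rather than the authors' published definition.

```python
# h1-index as read from the abstract (an assumption about the exact definition):
# the largest h1 such that h1 articles each received at least h1 citations in
# the statistical year; those articles are the journal's "active" articles.
def h1_index(citations_in_year: list) -> int:
    counts = sorted(citations_in_year, reverse=True)
    h1 = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h1 = rank
        else:
            break
    return h1

# Hypothetical citations received in the statistical year by a journal's articles
cites = [25, 14, 9, 8, 7, 7, 5, 3, 1, 0, 0]
h1 = h1_index(cites)
active = [c for c in cites if c >= h1]
print(f"h1-index = {h1}, active articles = {len(active)}")  # h1-index = 6
```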
Article
Full-text available
Rankings for sports such as chess or table tennis are based on the so-called Elo rating system. In this paper, we apply this rating system to rank economics journals. One main advantage of the Elo ranking compared to existing ones is its explicit consideration of a journal's performance path. Another advantage is that the system can easily be applied to any journal metric that is published on a regular basis. Our application is based on data from Web of Science comprising the impact factors of 382 economics journals for the period from 1997 to 2016. The most recent Elo ranking differs considerably from other existing rankings for rather 'middle-class' journals. However, some differences also emerge for the top 30.
Article
Numerous studies published in the academic literature address the issue of journal quality assessment. However, little has been done to compare the factors that influence the perceptions of journal quality in different disciplines. From Chinese authors’ viewpoint, this study explored the factors influencing author quality perceptions of journals in computer science and technology as well as in library and information science. Our empirical findings indicate that author-perceived journal quality in these two fields is significantly positively correlated with impact factors and not statistically significantly correlated with technical delay and immediacy index. Slightly different results are also found between the two fields in terms of the effects of editor service, editorial delay and acceptance rates.
Article
This study investigates the self-citation rates of 222 Chinese journals in seven groups: 76 journals of agronomy (34.2 percent), 57 of biology (25.7 percent), 28 of environmental science and technology (12.6 percent), 15 of forestry (6.8 percent), 24 academic journals of agricultural universities (10.8 percent), 9 of aquatic sciences (4.1 percent), and 13 of animal husbandry and veterinary medicine (5.9 percent). The average self-citation rates range from 2 percent to 67 percent in 2006, 1 percent to 68 percent in 2007 and 0 percent to 67 percent in 2008. There is a significant difference in self-citation rate between most groups of journals. For all 222 journals, the self-citation rate is positively and significantly correlated with the journal's impact factor in 2006 (N = 222, R² = 0.194, P = 0.004), but not with the impact factor in 2007 (N = 222, R² = 0.114, P = 0.091) or 2008 (N = 222, R² = 0.112, P = 0.096) at the P < 0.05 level. The relationship between self-citation rate and journal impact factor is discussed.
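The reported figures are correlation coefficients between journal self-citation rate and other journal-level variables. The sketch below shows how such a test is typically run; the values are invented, and the definition of self-citation rate used here (journal self-citations divided by all citations received) is one common convention that may differ from the study's.

```python
# Pearson correlation between journal self-citation rate and impact factor.
# Values are invented; the definition of self-citation rate assumed here
# (self-citations / all citations received) is one common convention.
from scipy.stats import pearsonr

self_citation_rate = [0.02, 0.15, 0.33, 0.41, 0.08, 0.67, 0.22, 0.12]
impact_factor      = [1.8,  0.6,  0.9,  0.4,  2.1,  0.3,  1.1,  1.5]

r, p = pearsonr(self_citation_rate, impact_factor)
print(f"r = {r:.3f}, R^2 = {r * r:.3f}, p = {p:.3f}")
```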
Article
Sixty-one publications about evoked and event-related potentials (EP and ERP, respectively) in patients with severe Disorders of Consciousness (DoC) were found and analyzed from a quantitative point of view. Most studies are strongly underpowered, resulting in very broad confidence intervals (CI). The results of such studies cannot be correctly interpreted because, for example, a CI wider than 1 (in terms of Cohen's d) indicates that the real effect may be very strong, very weak, or even opposite to the reported effect. Furthermore, strong negative correlations were obtained between sample size and effect size, indicating a possible publication bias. These correlations characterized not only the total data set, but also each thematically selected subset. Minimal criteria for a strong EP/ERP study in DoC are proposed: at least 25 patients in each patient group; as reliable a diagnosis as possible; a complete report of all methodological details and all details of the results (including negative results); and the use of appropriate methods of data analysis. Only three of these studies (5%) satisfy the criteria. The limitations of the current approach are also discussed.
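The point about underpowered studies can be made concrete with the usual large-sample approximation for the standard error of Cohen's d: with the group sizes typical of DoC studies, the resulting 95% confidence interval is easily wider than 1. The sketch below uses that approximation, which is illustrative and not the procedure used in the review.

```python
# Approximate 95% confidence interval for Cohen's d using the common
# large-sample standard-error formula; group sizes are illustrative.
import math

def cohens_d_ci(d: float, n1: int, n2: int, z: float = 1.96):
    se = math.sqrt((n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2)))
    return d - z * se, d + z * se

for n in (8, 25, 100):          # patients per group
    lo, hi = cohens_d_ci(0.5, n, n)
    print(f"n = {n:>3} per group: d = 0.5, 95% CI = [{lo:.2f}, {hi:.2f}], "
          f"width = {hi - lo:.2f}")
```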
Article
The impact factor is a scientometric indicator used to measure the quality of journals and papers. However, it has limitations and cannot be the only such measure. The number of times a paper has been cited, which depends on many circumstances, should not be overrated in itself, since it does not measure the real quality of the research.
Article
A comparison is made between two types of research past performance analysis: the results of bibliometric-indicators and the results of peer judgement. This paper focuses on two case studies: the work of Dutch National Survey Committees on Chemistry and on Biology, both compared with our bibliometric results for research groups in these disciplines at the University of Leiden. The comparison reveals a serious lack of agreement between the two types of past performance analysis. This important, science-policy relevant observation is discussed in this paper.
Article
Sufficient data are available to recommend the use of the high-resolution or signal-averaged electrocardiogram in patients recovering from myocardial infarction without bundle branch block to help determine their risk for developing sustained ventricular tachyarrhythmias. However, no data are available about the extent to which pharmacological or nonpharmacological interventions in patients with late potentials have an impact on the incidence of sudden cardiac death. Therefore, controlled, prospective studies are required before this issue can be resolved. As refinements in techniques evolve, it is anticipated that the clinical value of high-resolution or signal-averaged electrocardiography will continue to increase.
Article
A cardiological ranking list was prepared based on papers published in 1981–1992. The nations studied comprised the G-7 countries, Belgium, Denmark, Finland, the Netherlands, Norway, Sweden and Switzerland. The number of citations received by these publications was checked. In general, the output and citation frequency increased over the last decade, although often only temporarily. These data were also related to population size and expenditure on research and development. They show that the United States leads research in clinical cardiology. In most G-7 nations, however, the quality and quantity of cardiological publications lag behind those of the smaller West-European countries. This may be partly due to differences in funding and/or publication in a language other than English. (Eur Heart J 1996; 17: 35–42)