
Abstract

Bibliometrics has become an integral component of quality assessment for science and funding decisions. The next challenge for scientometrics is to develop similarly reliable indicators for the social impact of research.
Accepted for publication in EMBO Reports
TITLE: Scientometrics in a changing science landscape
SUBTITLE: Bibliometrics has become an integral part of research quality evaluation
and has been changing the practice of research
By Lutz Bornmann & Loet Leydesdorff
Quality assessments permeate the entire scientific enterprise, from funding applications to
promotions, prizes and tenure. Their remit can encompass the scientific output of individual
scientists, whole departments or institutes, or even entire countries. Peer review has
traditionally been the major method used to determine the quality of scientific work, either to
arbitrate whether the work should be published in a certain journal, or to assess the quality of a
scientist’s or institution’s total research/publication output. Since the 1990s, quantitative
assessment measures in the form of indicator-supported procedures, such as bibliometrics,
have gained increasing importance, especially in budgetary decisions where numbers are
more easily compared than peer opinion, and are usually faster to produce. In particular,
quantitative procedures can provide important information for quality assessment when it
comes to comparing a large number of units, such as several research groups or universities,
as individual experts are not capable of handling so much information in a single evaluation
procedure. Thus, for example, the new UK Research Excellence Framework (REF) puts more
emphasis on bibliometric data and less on peer review than did its predecessor.
Even though bibliometrics and peer review are often thought of as alternative methods of
evaluation, their combination in what is known as informed peer review can lead to more
accurate assessments: peer reviewers can enhance their qualitative assessment on the basis of
bibliometric and other indicator-supported empirical results. This reduces the risk of
distortions and mistakes as discrepancies between the peers’ judgements and the bibliometric
evaluation become more transparent. Although this combination of peer review and
bibliometrics is regarded as the ideal method for research evaluation, the weighting of both
can differ. The German Research Foundation (DFG), for example, encourages applicants to
submit only their five most relevant publications, which is a manageable number for the
reviewers. On the other side, the Australian Research Council (ARC) and the UK REF focus
on bibliometric instruments for national evaluations to the detriment of peer review. The
weighting of the two instruments can also change over time: the new REF weights bibliometrics more heavily than did the former Research Assessment Exercise.
Bibliometrics has various advantages that make it suitable for the evaluation of research. The
most important one is that bibliometrics analyses data that concern the essence of scientific work. In virtually all research disciplines, publishing relevant research results is
crucial; results that are not published are usually of no importance. Furthermore, authors of
scientific publications have to discuss the context and implications of their research with
reference to the state of the art and appropriately cite the methods, data sets and so on that
they have used. Citations are therefore embedded in the reputation system of research, as
researchers use them to express their recognition of the influence of others’ work.
Another advantage of using bibliometrics in research evaluation is that the bibliometric data
can easily be found and accessed for a broad spectrum of disciplines using appropriate databases such as Web of Science (WoS) or Scopus. The productivity and impact even of large
research units can therefore be measured with reasonable effort. Finally, the results of
bibliometrics correlate well with other indicators of research quality, including external
funding or scientific prizes [1, 2]. Since there is now hardly any evaluation that does not count
publications and citations, bibliometrics seems to have established itself as a reliable tool in
the general assessment of research. Indeed, it would not last long if reputations and awards
based on bibliometric analyses were arbitrary or undeserved.
However, bibliometrics also has a number of disadvantages. These do not relate to its general applicability in research evaluation (this is generally no longer doubted), but to whether such an analysis is done professionally and according to standards [3], which are often known only to experts.
First, bibliometrics can only be applied to disciplines where the literature and its citations are
available from appropriate databases. While the natural sciences are well-represented in such
databases, the literature of the technical sciences, the social sciences, and the humanities (TSH) is only partly included. Bibliometrics can therefore yield only limited results for these disciplines. Google Scholar is often seen as a solution, but it is not clear what Google Scholar counts as a citation; the validity of its data is therefore not guaranteed [4].
Second, bibliometric data are numerical data with highly skewed distributions. Their
evaluation therefore requires appropriate statistical methods. For example, the arithmetic
mean is relatively inappropriate for citation analysis, since it is strongly influenced by highly
cited publications. Thus, Göttingen University in Germany achieved a good place in the
current Leiden ranking, which uses a mean-based indicator, because it could boast one
extremely highly cited publication in recent years. The Journal Impact Factor, the best-known indicator of the importance of journals, is similarly affected by this problem: since it gives the average number of citations to the papers in a journal during the preceding two years, it may be determined by a few highly cited papers and hardly at all by the mass of papers that are cited very little or not at all.
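To make the effect of skewness concrete, the following minimal Python sketch (using invented citation counts, not data from the Leiden Ranking or any journal) compares the arithmetic mean with the median for a set of papers that contains one extremely highly cited publication:

```python
import statistics

# Invented citation counts for ten papers: most are cited rarely,
# one is cited extremely often (a typical skewed distribution).
citations = [0, 1, 1, 2, 2, 3, 4, 5, 7, 950]

mean = statistics.mean(citations)      # strongly pulled up by the outlier
median = statistics.median(citations)  # barely affected by the outlier

print(f"mean citations:   {mean:.1f}")   # 97.5
print(f"median citations: {median:.1f}") # 2.5
```

A single outlier shifts the mean by almost two orders of magnitude while the median hardly moves, which is why median- and percentile-based indicators are usually recommended for citation analysis [3].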
The h index, a bibliometric indicator that is now as well known as the Journal Impact Factor, is unaffected by this problem, as it is not based on the mean. Rather, it counts the publications in a set that have received at least a minimum number of citations (namely h), so that the few very highly cited publications play only a small role in its calculation. The h index, however, has other weaknesses that make its use in research evaluation questionable: the arbitrary threshold for selecting the significant publications with at least h citations has been criticised; it could just as well be h² citations.
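As a concrete illustration (not part of the original text), the h index can be computed from a researcher’s citation counts in a few lines of Python; h is the largest number such that h publications have at least h citations each:

```python
def h_index(citations: list[int]) -> int:
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# The same invented set as above: the extreme outlier barely matters.
print(h_index([0, 1, 1, 2, 2, 3, 4, 5, 7, 950]))  # 4
print(h_index([0, 1, 1, 2, 2, 3, 4, 5, 7, 9]))    # 4
```

Replacing the paper with 950 citations by one with only 9 citations leaves h unchanged, which illustrates the robustness against extreme values described above.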
Third, citations need time to accumulate. Research evaluation on the basis of bibliometrics
can therefore say nothing about more recent publications. It has now become standard
practice in bibliometrics to allow at least three years for a reliable measurement of the impact
of publications. This disadvantage of bibliometrics is chiefly a problem with the evaluation of
institutions where the research performance of recent years is generally assessed, about which
bibliometrics (the measurement of impact based on citations) can say little. In the
assessment of recent years, one can only use bibliometric instruments to evaluate the
productivity of the researchers of an institution and their success in publishing their
manuscripts in respected journals.
Here, the most important question is how long the citation window should be to achieve
reliable and valid impact measurement. There are many examples in which the importance of research results has become apparent only decades after publication [5]. For example, the
“Shockley-Queisser limit” describes the limited efficiency of solar cells on the basis of
absorption and reemission processes. The paper initially received little attention, but it has since become one of the few highly cited papers in a field that has grown in step with rapidly expanding solar-cell and photovoltaic research.
Although such papers constitute probably only one in every 10,000 papers [5], the standard practice of using a citation window of just three years nevertheless seems too short. In
one study, of the 10% most highly cited papers identified using a 30-year window, more than 40% are excluded from this elite collection when a 3-year window is used [6]. A 20-year window still includes 92% of them, and a 10-year window yields 82% of the 30-year most highly cited papers. Based on these results, Wang recommends that researchers report the potential errors in their evaluations when using short time windows, for example in a paragraph such as: “Although a citation window of 5 years is used here, note that the Spearman correlation between these citation counts and long term (31 year) citation counts will be about 0.87. Furthermore, the potential error of using a 5-year time window will be higher for highly cited papers because papers in the top 10% most cited papers in year 5 have a 32% chance of not being in the top 10% in year 31” [6].
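The Spearman correlation quoted by Wang is a rank correlation between citation counts observed in a short window and those accumulated over the long term. A minimal sketch of how such a check could be run on one’s own data (the counts below are invented, not Wang’s) is:

```python
from scipy.stats import spearmanr

# Invented citation counts for the same ten papers after a 5-year window
# and after a 31-year window; in practice these would come from a
# bibliographic database such as WoS or Scopus.
cites_5y  = [2, 0, 5, 1, 12, 3, 7, 4, 1, 9]
cites_31y = [10, 3, 40, 2, 35, 20, 15, 60, 5, 30]

rho, p_value = spearmanr(cites_5y, cites_31y)
print(f"Spearman rank correlation between windows: {rho:.2f}")
```

A low correlation would indicate that the short window ranks papers quite differently from their eventual long-term impact.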
This tendency to focus on the citations of papers published during the last two or three years
assumes a rapid research front, as in the biomedical sciences. However, disciplines differ in
terms of the existence and speed of research fronts and their historical developments.
Baumgartner & Leydesdorff have distinguished between “transitory knowledge claims” in
research papers at the research front and “sticky knowledge claims” that may accumulate
citations during ten or even more years [7].
As bibliometrics has developed into a standard procedure in research evaluation, with both
advantages and disadvantages, a further question is now whether bibliometric measurement
and assessment is likely to change scientific practice, as fixing on particular indicators for measuring research performance generally leads to an adaptation of researchers’ behaviour.
This may well be intentional: one reason for research evaluation is to increase research
performance, namely productivity. However, there are also unintended effects. For example,
in order to achieve a desired increase in publication volume, some researchers choose a
publication strategy known as salami slicing: The results of a research project are published in
many small parts, although they could also be published in a few large papers or a single one.
This behaviour is not generally considered to help the progress of research, but it may
improve bibliometric scores.
It is also desirable for researchers to publish in respected journals. Yet since these journals
generally only publish newsworthy results or results with a high impact, a stronger focus on
respected journals in research evaluation raises the risk of scientific malpractice when results
are manipulated or falsified to satisfy this requirement. Research evaluation processes should not unreasonably encourage this behaviour; in China, for example, scientists are sometimes financially rewarded according to the Impact Factors of the journals in which they publish their papers [8].
In national scientific systems in which research evaluation or bibliometrics plays a major role, indicators are often used without sufficient knowledge of the subject. Since the demand for such figures is high and the numbers are often required quickly and cheaply, they are
sometimes produced by analysts with little understanding of bibliometrics. For example, such
amateur bibliometricians may be inclined to use the h index because it is a popular and
modern indicator that is readily available and easy to calculate. Yet these assessments often do not take into account that the h index is unsuitable for comparing researchers from different subject areas or with different academic ages. Amateur bibliometricians also often wrongly use the Journal Impact Factor to measure the impact of single pieces of work, even though it only provides information about the performance of a journal as a whole.
There is a community of professional experts in bibliometrics who develop advanced
indicators for productivity and citation impact measurements. Only experts from this
community should undertake a bibliometric study that involves comparisons across fields of
science. Such centres of professional expertise, which have generated analytical versions of the databases, include the Centre for Science and Technology Studies (CWTS) in Leiden and the Centre for Research & Development Monitoring (ECOOM) in Leuven.
Furthermore, a range of suppliers of bibliometric data, such as Elsevier or Thomson Reuters, have
developed research evaluation systems that allow decision makers to produce results about
any given research unit at the press of a button. This “desktop bibliometrics” also increases
the risk that such analyses are applied without sufficient knowledge of the subject.
Moreover, these systems often present themselves as a black box: the user does not know how the results are calculated, yet even simple indicators such as the h index can be calculated in different ways. This is why the results of such analyses do not always correspond to current standards in bibliometrics.
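One concrete source of such variation, for example, is whether self-citations are counted, which different systems and analysts handle differently; the toy example below (invented counts, not taken from any particular product) shows how the same publication set can yield two different h values depending on that choice:

```python
def h_index(citations: list[int]) -> int:
    """h = number of papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Invented counts for the same eight papers, with and without self-citations.
with_self_citations    = [12, 9, 9, 7, 5, 4, 2, 1]
without_self_citations = [10, 8, 7, 5, 4, 3, 2, 1]

print(h_index(with_self_citations))     # 5
print(h_index(without_self_citations))  # 4
```

Unless such choices are documented, two “black box” systems can legitimately report different values for the same researcher.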
Finally, bibliometrics can be applied well in the natural sciences, but its application to TSH is limited. Even when research in these disciplines is published, the publications and their citations are only poorly represented in the literature databases that can be used for bibliometrics. The differing citation culture (in particular the different average number of references per paper, and thereby the different probability of being cited) is widely regarded as the cause of the variation in citation rates between disciplines. Based on an analysis of all WoS records published in 1990,
1995, 2000, 2005, and 2010, however, Marx & Bornmann found that almost all disciplines
show similar numbers of references in the reference lists [9]. This suggests that the
comparatively low citation rates in the humanities are not so much the result of a lower average number of references per paper as of the low fraction of cited references that point to papers published in the core set of journals covered by WoS.
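The measurement behind this kind of finding can be sketched as follows (a simplified illustration with invented identifiers, not the actual method of Marx & Bornmann): for each paper, compute the share of its cited references that resolve to source items indexed in the database.

```python
def linked_reference_share(cited_refs: set[str], indexed_items: set[str]) -> float:
    """Fraction of a paper's cited references that match items indexed
    in the database (e.g. papers in the core journals covered by WoS)."""
    if not cited_refs:
        return 0.0
    return len(cited_refs & indexed_items) / len(cited_refs)

# Invented toy data: references cited by a humanities paper (many books and
# reports) versus a natural-science paper (mostly core-journal articles).
indexed = {"doi:10.1/a", "doi:10.1/b", "doi:10.1/c", "doi:10.1/d"}
humanities_refs = {"doi:10.1/a", "isbn:123", "isbn:456", "report:x", "doi:10.9/z"}
science_refs = {"doi:10.1/a", "doi:10.1/b", "doi:10.1/c", "doi:10.9/z"}

print(f"humanities paper: {linked_reference_share(humanities_refs, indexed):.0%} linked")  # 20%
print(f"science paper:    {linked_reference_share(science_refs, indexed):.0%} linked")     # 75%
```

With similar reference-list lengths but very different shares of linked references, the science paper’s references generate far more citation links inside the database, which matches the explanation given above.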
Furthermore, the research output of TSH comprises not only publications but also other products such as software and patents. These products and their citations are hardly reflected in the literature databases. Thus, for example, a large part of the publications and other research products from the TSH area are missing from the Leiden Ranking, which is based on WoS data. Even the indicator report of the German Competence Centre for Bibliometrics (KB),
which assesses German research based on bibliometric data from WoS, under-represents
publications from the TSH areas.
So far, scientometric research has developed no satisfactory solution to evaluate TSH in the
same sophisticated way that is used for the natural sciences. Various initiatives have therefore
tried to develop alternative quality criteria. For example, the cooperative project “Developing
and Testing Research Quality Criteria in the Humanities, with an emphasis on Literature
Studies and Art History” of the Universities of Zürich and Basel supplies Swiss universities with instruments to measure research performance and compare it internationally.
Until the 1990s, politicians had faith that pushing the quality of science to the highest levels
would automatically generate returns for society. Quality controls in research were primarily
concerned with the use of research for research. Triggered by the financial crisis and by
growing competition between nations, the direct societal benefits of research have moved
increasingly into the foreground of quality assessments. The state no longer has faith that
excellent research alone is automatically best for society. Basic research in particular has
become subject to scrutiny, since it is more difficult to show a link between its results and
beneficial applications. Recent years have therefore seen a tendency to implement evaluation
procedures that attempt to provide information on the societal impacts of research. For
example, applicants to the US National Science Foundation have to state what benefits their
research would bring beyond science. As part of the UK REF, British institutions also have to
provide information about the societal impacts of their research.
Evaluating the societal impacts of research does not stop at the traditional products of
research, such as prizes or publications, but includes other elements such as software, patents
or data sets. The impact itself is also measured more broadly to include effects on society and
not just on research. However, there are still no accepted standard procedures that yield
reliable and valid information. Generally, a case study is carried out in which an institution
describes one or several examples of the societal impacts of its research. The problem is that
the results of case studies cannot be generalised and compared owing to a lack of
standardisation.
So-called altmetrics (the number of page views, downloads, shares, saves, recommendations, comments and citations in Wikipedia, or data from social media platforms such as Twitter, Mendeley and CiteULike) could provide an alternative to bibliometric data. A
perceived advantage of altmetrics is the ability to provide recent data, whereas citations need
time to accumulate. Another perceived advantage is that alternative metrics can also measure
the impact of research in other sectors of society, as social media platforms are used by
individuals and institutions from many parts of society.
However, it is not clear to what extent these advantages (speed and breadth of impact) really matter. The study of altmetrics began only a few years ago and is now in a state similar to that of research into traditional metrics in the 1970s. Before alternative metrics can be applied to research evaluation, with possible effects on funding decisions or promotions, there are a number of open questions to answer. What kind of impact do the metrics measure, and among which groups of people? How reliable are the data obtained from social media platforms?
How can the manipulation of social media data by users be counteracted or prevented?
Finally, metrics need to be validated by correlating them with other indicators: is there, for
example, a connection between alternative metrics and the judgment of experts as to the
societal relevance of publications?
This new challenge of measuring the broad impact of research on society has triggered a
scientific revolution in scientometrics. This assertion is based on a fundamental change in the
taxonomy of scientometrics: productivity no longer only means publication output, and the
impact of publications can no longer be equated simply with citations. Scientometrics should therefore soon enter a phase of normal science to find answers to the questions mentioned above. Alternative indicators should be applied in research evaluation only after altmetrics has been thoroughly scrutinised in further studies.
It is clear that scientometrics has become an integral part of research evaluation and plays a
crucial role in making decisions about national research policies, funding, promotions, job
offers and so on, and thereby about the careers of scientists. Scientometrics therefore has to demonstrate that it provides reliable, transparent and relevant results, which it largely achieves with citation-based data when the analysis is done correctly. The next challenge will be to develop altmetrics to the same standards.
CONFLICT OF INTEREST: The authors declare that they have no conflict of interest.
Lutz Bornmann is at the Division for Science and Innovation Studies, Administrative
Headquarters of the Max Planck Society, Munich, Germany. Email: bornmann@gv.mpg.de
Loet Leydesdorff is at the Amsterdam School of Communication Research (ASCoR),
University of Amsterdam, The Netherlands. Email: loet@leydesdorff.net
REFERENCES
1. Diekmann, A., Näf, M., & Schubiger, M. (2012). The impact of (Thyssen)-awarded
articles in the scientific community. Kölner Zeitschrift für Soziologie und
Sozialpsychologie, 64(3), 563-581. doi: 10.1007/s11577-012-0175-4.
2. Luhmann, N. (1992). Die Wissenschaft der Gesellschaft. Frankfurt am Main,
Germany: Suhrkamp.
3. Bornmann, L., & Marx, W. (2014). How to evaluate individual researchers working in
the natural and life sciences meaningfully? A proposal of methods based on
percentiles of citations. Scientometrics, 98(1), 487-509. doi: 10.1007/s11192-013-
1161-y.
4. Bornmann, L., Marx, W., Schier, H., Rahm, E., Thor, A., & Daniel, H. D. (2009).
Convergent validity of bibliometric Google Scholar data in the field of chemistry.
Citation counts for papers that were accepted by Angewandte Chemie International
Edition or rejected but published elsewhere, using Google Scholar, Science Citation
Index, Scopus, and Chemical Abstracts. Journal of Informetrics, 3(1), 27-35. doi:
10.1016/j.joi.2008.11.001.
5. van Raan, A. F. J. (2004). Sleeping Beauties in science. Scientometrics, 59(3), 467-
472.
6. Wang, J. (2013). Citation time window choice for research impact evaluation.
Scientometrics, 94(3), 851-872. doi: 10.1007/s11192-012-0775-9.
7. Baumgartner, S. E., & Leydesdorff, L. (2014). Group-based trajectory modeling
(GBTM) of citations in scholarly literature: Dynamic qualities of “transient” and
“sticky knowledge claims”. Journal of the Association for Information Science and
Technology, 65(4), 797-811. doi: 10.1002/asi.23009.
8. Shao, J., & Shen, H. (2011). The outflow of academic papers from China: why is it happening and can it be stemmed? Learned Publishing, 24(2), 95-97.
9. Marx, W., & Bornmann, L. (2014). On the causes of subject-specific citation rates in
Web of Science. arXiv. Retrieved August 26, 2014.