Article

Correlation Between Perception-Based Journal Rankings and the Journal Impact Factor (JIF): A Systematic Review and Meta-Analysis


Abstract

This study, based on a systematic review and meta-analysis, aimed to collect and analyze evidence on the correlation between perception-based journal rankings and the most popular citation-based measure, the Thomson Reuters Journal Impact Factor (JIF). A search was conducted in the Web of Science, Scopus, and Google Scholar databases. After screening of titles, abstracts, and full texts, 18 articles were selected as eligible for review and analysis. The included studies belonged to various subject areas in the social sciences, science, and technology. The correlation coefficients reported in most of the studies were statistically significant and positive. The heterogeneity test was positive, so the random-effects method of meta-analysis was applied. The pooled correlation coefficient indicated a moderate positive relationship between the two methods of assessing the quality of academic journals. The absence of a high correlation makes decision making based on a single ranking method risky; a hybrid approach to journal assessment is therefore recommended.
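The pooling procedure the abstract describes (a Fisher z-transform of each study's correlation, a heterogeneity test, then random-effects weighting) can be sketched with the standard DerSimonian-Laird method. This is a minimal illustration only; the correlations and sample sizes in the usage example are hypothetical placeholders, not the study's data.

```python
import math

def fisher_z(r):
    """Fisher's z-transform stabilises the sampling variance of a correlation."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inv_fisher_z(z):
    """Back-transform a pooled z to the correlation scale."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

def pooled_correlation(rs, ns):
    """DerSimonian-Laird random-effects pooling of correlation coefficients.

    rs: per-study correlations (e.g., between perception rank and JIF)
    ns: per-study sample sizes (numbers of journals)
    Returns (pooled r, Cochran's Q, between-study variance tau^2).
    """
    zs = [fisher_z(r) for r in rs]
    vs = [1.0 / (n - 3) for n in ns]        # sampling variance of z per study
    ws = [1.0 / v for v in vs]              # fixed-effect (inverse-variance) weights
    z_fixed = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    # Cochran's Q: a positive heterogeneity test (Q well above df) motivates random effects
    q = sum(w * (z - z_fixed) ** 2 for w, z in zip(ws, zs))
    df = len(rs) - 1
    c = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (q - df) / c)           # between-study variance estimate
    ws_re = [1.0 / (v + tau2) for v in vs]  # random-effects weights
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    return inv_fisher_z(z_re), q, tau2

# Hypothetical example: two divergent studies produce tau^2 > 0,
# so the random-effects pooled r lies between the study estimates.
r_pooled, q, tau2 = pooled_correlation([0.2, 0.8], [50, 50])
```

With homogeneous inputs Q is near zero and the random-effects result collapses to the fixed-effect one; heterogeneous inputs yield tau^2 > 0, which is the situation the abstract reports.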


... Whilst Journal Impact Factors (JIFs) are frequently used as journal or article quality measures, this is inappropriate (Kurmis, 2003; Seglen, 1998). Nevertheless, higher-impact journals are statistically more likely to be high quality (Mahmood, 2017), and articles in higher-impact journals are statistically more likely to be influential or otherwise high quality (e.g., Waltman & Traag, 2020). Investigating factors that statistically associate with publishing in higher-impact journals may therefore help researchers to design more influential or higher-quality research. ...
... The rise of megajournals has complicated the situation by introducing a class of journal that is generalist but may have variable quality control mechanisms due to the size or fluctuating nature (for special issues) of the editorial team, or that may disregard some traditional quality criteria (Wakeling et al., 2019). Nevertheless, in some fields, recognised hierarchies of journal importance may be fairly well reflected by JIFs (Kelly et al., 2013; Mahmood, 2017; Steward & Lewis, 2010), and so the citation rate of a journal in those fields could be a reasonable indicator of its quality relative to other journals in the same field. This, in turn, may be a reasonable indicator of the quality of the individual articles in that journal, to the extent that the quality of a journal reflects the average quality of its articles. ...
... The correlation and regression results are consistent with, but do not prove, the idea that attracting highly cited team members to a research team would help to get the articles produced published in a higher impact journal. On average, higher impact journals are higher quality (Kelly et al., 2013;Mahmood, 2017;Steward & Lewis, 2010) and contain higher quality articles in some fields, so the same authorship factors may associate with the quality of an article. The associations have substantial disciplinary differences in strength, but they are present to some extent in all fields of science. ...
Article
Academic research often involves teams of experts, and it seems reasonable to believe that successful main authors or co-authors would tend to help produce better research. This article investigates an aspect of this across science with an indirect method: the extent to which the publishing record of an article’s authors associates with the citation impact of the publishing journal (as a proxy for the quality of the article). The data is based on author career publishing evidence for journal articles 2014–20 and the journals of articles published in 2017. At the Scopus broad field level, international correlations and country-specific regressions for five English-speaking nations (Australia, Ireland, New Zealand, UK and USA) suggest that first author citation impact is more important than co-author citation impact, but co-author productivity is more important than first author productivity. Moreover, author citation impact is more important than author productivity. There are disciplinary differences in the results, with first author productivity surprisingly tending to be a disadvantage in the physical sciences and life sciences, at least in the sense of associating with lower impact journals. The results are limited by the regressions only including domestic research and a lack of evidence-based cause-and-effect explanations. Nevertheless, the data suggests that impactful team members are more important than productive team members, and that whilst an impactful first author is a science-wide advantage, an experienced first author is often not.
... The rigor of journals is most often measured using subjective scholar ratings or rankings (Mahmood 2017, Miranda & Mongeau 1991, Shilbury & Rentschler 2007, Silverman et al. 2014), usually within disciplines/fields given variation in research methods, publication, and citation patterns across disciplines. Multidisciplinary journals have wider readership that often includes scholars from disciplines that can be positively biased toward their own fields, specializations, and research topics (Catling et al. 2009, Knudson & Chow 2008, Silverman et al. 2014). ...
... Journal usage/popularity has been most closely associated with average citations to published articles, using metrics like the Impact Factor, CiteScore, and Source-Normalized Impact per Paper (Bollen et al. 2009, Franceschet 2010, Knudson et al. 2024, Zhou et al. 2012). Journal prestige has also been examined using subjective scholar ratings (Bray & Major 2022, Mahmood 2017, Miranda & Mongeau 1991, Silverman et al. 2014, Tahai & Meyer 1999) but has also been estimated by various weighted citation metrics that take into account status via the structure (source of citations) of the citation network, like the Eigenfactor Score, journal h-index, and the SCImago Journal Rank (Guerrero-Bote & Moya-Anegon 2012, Roldan-Valdez et al. 2019). Scholar perceptions of rigor and prestige, however, are more stable over time than journal usage and prestige metrics (Bray & Major 2022, Knudson & Quimby 2023, Tahai & Meyer 1999). ...
Article
This study documented the indexing and citations to the first decade of articles published by the Athens Journal of Sports (AJSPO) in Google Scholar, with confirmatory secondary analysis of summary data from the Dimensions and OpenAlex database services. A wide variety of articles have been published, with most focusing on the business and behavioral/humanities aspects of sports. AJSPO has nearly complete indexing of published articles in these free database services. Article usage is strong in all three databases, with median total citations and citation rates similar to or better than new journals in the kinesiology/exercise and sport science field. AJSPO has a typical percentage of uncited articles in Google Scholar relative to many journals and contributes research primarily in the categories of the business, behavioral/humanities, and analytics/coaching aspects of sports. Keywords: bibliometrics, database, impact, knowledge, research, subject category
... Journal metrics are routinely used to bestow a purportedly more objective assessment of "core," "flagship," "luxury," or "top-tier" status (e.g., top 10%) on some journals instead of disciplinary scholar perceptions (Rowlands, 2018; Wilsdon et al., 2015); however, these efforts may not be more accurate or consistent than subjective disciplinary perceptions. A meta-analysis of the association between journal IF and subjective ratings indicated heterogeneous results with only a moderate association (pooled r = 0.486) between them (Mahmood, 2017). This agrees with a large recent analysis of national research assessment of article quality reporting weak, and never strong, associations between IF and research quality (Thelwall et al., 2023). ...
... These studies consistently report variability of scholar perceptions that is influenced by their familiarity with journals, research area, and productivity. This is similar to studies in other disciplines reporting variability of perceptions and numerous potential biases (see the meta-analysis by Mahmood, 2017). This variability reinforces impressions of the subjectivity of scholar perceptions of prestige and, along with the plethora of easily accessible journal metrics, contributes to the growing use of citation metrics in ranking journals. ...
Article
This study documented the associations and variability of four prestige metrics for kinesiology-related journals. Four prestige metrics for the year 2022 were collected from three database services for 334 kinesiology-related journals. The prestige measures were highly skewed and had large variability, with medical journals having the highest values. There were strong nonlinear and heteroscedastic associations between all four prestige indicators; however, rankings of the top 10% of journals based on these metrics showed considerable disagreement between metrics. The substantial variability of journal prestige metrics was larger than the variation previously reported for scholars' subjective perceptions of journal prestige. The prestige metrics provided inconsistent and unrealistically precise estimates of kinesiology journal prestige. Great care should be exercised in interpreting prestige metrics and scholar perceptions of journal prestige in diverse disciplines like kinesiology.
... Previous studies have mainly assessed the value of journal impact factors either from a theoretical perspective or with expert judgements of journals for a single field. A 2016 systematic review found 18 articles that had correlated JIFs with expert judgements of journal prestige in science, technology, and social science fields (Mahmood, 2017; Table 2). The sample sizes ranged from 8 to 127 journals. ...
... They align with Australian findings that expert rankings of journals positively correlate with journal impact for all 27 Scopus broad fields (Haddawy et al., 2016). They also tend to confirm numerous previous studies showing that the expert-judged value of a journal tends to correlate positively with its citation impact, including the stronger correlations for health-related fields and weaker correlations for the social sciences (Mahmood, 2017). They also confirm the relatively strong correlations between journal prestige and citation impact in psychology (Highhouse et al., 2020) and business (Walters & Markgren, 2019). ...
Article
The Journal Impact Factor and other indicators that assess the average citation rate of articles in a journal are consulted by many academics and research evaluators, despite initiatives against overreliance on them. Undermining both practices, there is limited evidence about the extent to which journal impact indicators in any field relate to human judgements about the quality of the articles published in the field’s journals. In response, we compared average citation rates of journals against expert judgements of their articles in all fields of science. We used preliminary quality scores for 96,031 articles published 2014–18 from the UK Research Excellence Framework 2021. Unexpectedly, there was a positive correlation between expert judgements of article quality and average journal citation impact in all fields of science, although very weak in many fields and never strong. The strength of the correlation varied from 0.11 to 0.43 for the 27 broad fields of Scopus. The highest correlation for the 94 Scopus narrow fields with at least 750 articles was only 0.54, for Infectious Diseases, and there was only one negative correlation, for the mixed category Computer Science (all), probably due to the mixing. The average citation impact of a Scopus-indexed journal is therefore never completely irrelevant to the quality of an article but is also never a strong indicator of article quality. Since journal citation impact can at best moderately suggest article quality it should never be relied on for this, supporting the San Francisco Declaration on Research Assessment.
... Previous studies have mainly assessed the value of journal impact factors either from a theoretical perspective or with data from a single field. A 2016 systematic review found 18 articles that had correlated JIFs with expert judgements of journal value in science, technology, and social science fields (Mahmood, 2017; Table 2). The sample sizes were small, ranging from 8 to 127 journals. ...
... They align with Australian findings that expert rankings of journals positively correlate with journal impact for all 27 Scopus broad fields (Haddawy et al., 2016). They also tend to confirm numerous previous studies showing that the expert-judged value of a journal tends to correlate positively with its citation impact, including the stronger correlations for health-related fields and weaker correlations for the social sciences (Mahmood, 2017). They also confirm the relatively strong correlations between journal prestige and citation impact in psychology (Highhouse et al., 2020) and business (Walters & Markgren, 2019). ...
Preprint
Full-text available
The Journal Impact Factor and other indicators that assess the average citation rate of articles in a journal are consulted by many academics and research evaluators, despite initiatives against overreliance on them. Nevertheless, there is limited evidence about the extent to which journal impact indicators in any field relate to human judgements about the journals or their articles. In response, we compared average citation rates of journals against expert judgements of their articles in all fields of science. We used preliminary quality scores for 96,031 articles published 2014-18 from the UK Research Excellence Framework (REF) 2021. We show that whilst there is a positive correlation between expert judgements of article quality and average journal impact in all fields of science, it is very weak in many fields and is never strong. The strength of the correlation varies from 0.11 to 0.43 for the 27 broad fields of Scopus. The highest correlation for the 94 Scopus narrow fields with at least 750 articles was only 0.54, for Infectious Diseases, and there was only one negative correlation, for the mixed category Computer Science (all). The results suggest that the average citation impact of a Scopus-indexed journal is never completely irrelevant to the quality of an article, even though it is never a strong indicator of article quality.
... Journal ranking lists are a hybrid of citation indicators and peer reviews (Mingers and Yang 2017). Within the last few decades, journal ranking lists have become vital, justified, and objective references for assessing the quality of research (Mahmood 2017). A journal ranking list is one of the most important metrics of research quality and determines the resources that accrue to researchers and their institutions (Peters et al. 2014). ...
... Journal ranking lists constitute a hybrid of citation indicators and peer reviews (Mingers and Yang 2017). Over the last few decades, journal ranking lists have become one of the most important metrics of research quality and are considered justified and objective (Mahmood 2017). Among the many journal ranking lists, four prominent ones from the Association of Business Schools (ABS, UK) (Wu et al. 2015), the Australian Business Deans Council (ABDC, Australia), the German Academic Association for Business Research (VHB, Germany), and the Comité National de la Recherche Scientifique (CNRS, France) are widely popular. ...
Article
Full-text available
Meta-syntheses from experts’ judgements and quantitative metrics are two main forms of evaluation, but both have limitations. This paper constructs a framework for mapping evaluation results between quantitative metrics and experts’ judgements so that these limitations may be addressed. In this way, the weights of metrics in quantitative evaluation are obtained objectively, and the validity of the results can be tested. Weighted average percentile (WAP) is employed to aggregate different experts’ judgements into standard WAP scores. The Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) is used to map quantitative results onto experts’ judgements, with WAP scores set equal to the final closeness coefficients generated by the TOPSIS method. Because the closeness coefficients of TOPSIS rely on the weights of the quantitative metrics, the mapping procedure is transformed into an optimization problem, and a genetic algorithm is introduced to search for the best weights. An academic journal ranking in the field of Supply Chain Management and Logistics (SCML) is used to test the validity of the mapping results. Four prominent ranking lists from the Association of Business Schools, the Australian Business Deans Council, the German Academic Association for Business Research, and the Comité National de la Recherche Scientifique were selected to represent different experts’ judgements. Twelve indices, including IF, Eigenfactor Score (ES), H-index, SCImago Journal Rank, and Source Normalized Impact per Paper (SNIP), were chosen for quantitative evaluation. The results reveal that the mapping results possess high validity: the relative error between experts’ judgements and the quantitative metrics is 43.4%, and the corresponding best weights are determined at the same time. Several interesting findings follow. First, H-index, Impact Per Publication (IPP), and SNIP play dominant roles in SCML journal quality evaluation. Second, all the metrics are positively correlated, although the correlation varies among metrics. For example, ES and NE are perfectly positively correlated with each other, yet they have the lowest correlation with the other metrics. Metrics such as IF, IFWJ, 5-year IF, and IPP are highly correlated. Third, some highly correlated metrics may perform differently in quality evaluation, such as IPP and 5-year IF. Therefore, when mapping quantitative metrics to experts’ judgements, academic fields should be treated distinctively.
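The closeness-coefficient step of TOPSIS that the abstract above relies on can be sketched as follows. This is a generic illustration assuming benefit-type criteria (higher metric values are better) and hypothetical journal data, not the paper's twelve-index dataset.

```python
import math

def topsis_closeness(matrix, weights):
    """Closeness coefficients for alternatives (rows) scored on benefit criteria (columns).

    matrix: one row per journal, one column per metric (e.g., IF, H-index, SNIP)
    weights: one weight per metric; in the paper these are tuned by a genetic algorithm
    """
    m, n = len(matrix), len(matrix[0])
    # Vector-normalise each column, then apply the metric weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) for j in range(n)]  # best value per metric
    anti = [min(v[i][j] for i in range(m)) for j in range(n)]   # worst value per metric
    closeness = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - anti[j]) ** 2 for j in range(n)))
        closeness.append(d_neg / (d_pos + d_neg))
    return closeness

# Hypothetical three-journal, two-metric example.
cc = topsis_closeness([[4.0, 20.0], [2.0, 15.0], [1.0, 5.0]], [0.6, 0.4])
```

In the paper's framework, these closeness coefficients are matched to the WAP scores aggregated from the expert lists, and the weight vector is then optimised by a genetic algorithm to minimise the mismatch.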
... Quite a few studies have evaluated the relationships between subjective assessments of reputation (or "quality") and citation impact. These investigations have been undertaken with regard to entire journals (e.g., Haddawy et al., 2016; Hall, 2011; Hodge et al., 2021; Kulczycki & Rozkosz, 2017; Mahmood, 2017; Saarela & Kärkkäinen, 2020; Walters, 2017b, 2024; Walters & Markgren, 2019), individual articles (e.g., Abramo et al., 2019; Weitzner et al., 2024), individual researchers (e.g., Derrick et al., 2011; Guba & Tsivinskaya, 2023), and institutions (e.g., Szluka et al., 2023). However, a comprehensive literature search revealed no prior studies of the impact of OA status on the relationships between expert journal ratings and journal citation metrics. ...
Article
Full-text available
Purpose: For a set of 1,561 Open Access (OA) and non-OA journals in business and economics, this study evaluates the relationships between four citation metrics—five-year Impact Factor (5IF), CiteScore, Article Influence (AI) score, and SCImago Journal Rank (SJR)—and the journal ratings assigned by expert reviewers. We expect that the OA journals will have especially high citation impact relative to their perceived quality (reputation). Design/methodology/approach: Regression is used to estimate the ratings assigned by expert reviewers for the 2021 CABS (Chartered Association of Business Schools) journal assessment exercise. The independent variables are the four citation metrics, evaluated separately, and a dummy variable representing the OA/non-OA status of each journal. Findings: Regardless of the citation metric used, OA journals in business and economics have especially high citation impact relative to their perceived quality (reputation). That is, they have especially low perceived quality (reputation) relative to their citation impact. Research limitations: These results are specific to the CABS journal ratings and the four citation metrics. However, there is strong evidence that CABS is closely related to several other expert ratings, and that 5IF, CiteScore, AI, and SJR are representative of the other citation metrics that might have been chosen. Practical implications: There are at least two possible explanations for these results: (1) expert evaluators are biased against OA journals, and (2) OA journals have especially high citation impact due to their increased accessibility. Although this study does not allow us to determine which of these explanations is supported, the results suggest that authors should consider publishing in OA journals whenever overall readership and citation impact are more important than journal reputation within a particular field.
Moreover, the OA coefficients provide a useful indicator of the extent to which anti-OA bias (or the citation advantage of OA journals) is diminishing over time. Originality/value: This is apparently the first study to investigate the impact of OA status on the relationships between expert journal ratings and journal citation metrics.
... evidence that single, journal-level citation metrics are strongly biased, inaccurate for individual articles, and associate poorly with expert, peer evaluation of research quality (Mahmood, 2017;Thelwall et al., 2023). ...
Article
Journals are often ranked by citation metrics to classify them into ordinal categories (e.g., quartiles or quintiles) of prestige. This study compared five journal metrics and their associations for the top and bottom quintiles of a large sample of kinesiology-related journals. Top and bottom quintiles (n = 70) based on the SCImago Journal Rank (SJR) for 2023 were identified. Descriptive data and associations between SJR, total documents, total citations, external citations per document, and percentage of uncited documents were calculated. All journal metrics in both quintiles were highly variable and most were positively skewed. Most median metrics differed across quintiles by a factor of 3.6 to 7.8, although median total citations to top quintile journals were 30 times larger than those to bottom quintile journals, due to more total documents in top quintile journals. Most (5 and 7, respectively) of the 10 associations between the journal metrics were significant (p < .01) for the top and bottom quintiles. Results confirm previous research that SJR is a positively skewed, highly variable, and unrealistically precise proxy estimate of the prestige of kinesiology-related journals. Several prestige metrics might be carefully interpreted to approximate general perceptions of journal prestige. Current evidence does not, however, support the precise ranking or numerical comparison of kinesiology journals into clear top-tier prestige status, given the coverage and subdisciplinary bias, skew, and variability in most journal metrics.
... This meta-analysis was carried out using the PRISMA-P approach (Shamseer et al., 2015). Plenty of studies have used PRISMA for executing systematic reviews (Ali and Warraich, 2018) and meta-analyses (Mahmood, 2017). Considering the appropriateness and wide usability of PRISMA, we also used this approach for executing the current meta-analysis regarding technology acceptance and usage in the mobile and digital libraries context using TAM. ...
Article
Full-text available
Purpose The purpose of this paper is to measure the relationship of technology acceptance model (TAM) variables (PEOU and PU) with behavioral intention (BI) and attitude in the mobile and digital libraries context. This study also examines the relationship of external variables (information quality and system quality) with TAM variables (PEOU and PU) in this context. Design/methodology/approach This meta-analysis was performed following the PRISMA-P guidelines. Four databases (Google Scholar, Web of Science, Scopus and LISTA) were used for searching, and the search was conducted according to defined criteria. Findings Findings of this study revealed a large effect size of PU and PEOU with BI. There was also a large effect size of PU and PEOU with attitude. A medium effect size was found between SysQ → PU, InfoQ → PU and SysQ → PEOU. However, there was a small effect size between InfoQ and PEOU. Originality/value To the best of the authors’ knowledge, no such study had been published at the time this meta-analysis was conducted. Hence, this study fills the literature gap. This study also confirms that TAM is a valid model for the acceptance and use of technology in the mobile and digital libraries context. Thus, the findings of the present study are helpful for developers and designers in designing and developing mobile library apps. They will also be beneficial for library authorities and system librarians in designing and developing digital libraries in academic settings.
... Although it may also be used as a publication format for systematic reviews with goals beyond treatment evaluations, its focus is on reporting reviews that evaluate the impacts of interventions (www.prisma-statement.org). The literature revealed that PRISMA was also used in previous studies for conducting systematic reviews and meta-analyses in the LIS domain (Ali and Warraich, 2018, 2023; Mahmood, 2017a, 2017b; Shahzad and Khan, 2023; Zhou et al., 2019). Hence, PRISMA was considered a suitable guideline for conducting this meta-analysis. ...
Article
Full-text available
Purpose The purpose of this study is to investigate the relationships between the information system success model constructs including information quality (IQ), system quality (SysQ) and service quality (ServQ) with user satisfaction (US) and intention to use (IU). Design/methodology/approach A meta-analysis approach was used to achieve the objectives. For this purpose, the PRISMA-P guideline was used, and a search strategy was designed to search in three indexing databases including Google Scholar, Scopus and LISTA. Findings The findings of this research revealed that IQ, SysQ and ServQ are positively related to US and IU regarding library systems. The strength of the relationship between IQ and IU, IQ and US, ServQ and US and SysQ and US was high. Originality/value This study is a unique addition to the literature, as it provides a collective and comprehensive conclusion regarding different information systems’ (ISs) success in libraries. Therefore, it fills the literature gap. The findings also work as guidelines for system developers, designers and library high-ups to consider IQ, SysQ and ServQ while designing and developing ISs for libraries.
... 3,4 Then, it is expected that the highest quality journals are those that aggregate the best scientific articles. 5 However, scientific journals reflect particularities regarding the area of knowledge, the characteristics of the research they publish, the type of scientific article, and even the research capabilities in the host country. These are important points in journal ranking, especially when comparing papers from different medical specialties published in non-English languages, and reflect the reality of research in different countries, such as Brazil. ...
Article
Full-text available
The Anais Brasileiros de Dermatologia, published since 1925, is the most influential dermatological journal in Latin America, indexed in the main international bibliographic databases, and occupies the 50th position among the 70 dermatological journals indexed in the Journal Citation Reports, in 2022. In this article, the authors present a critical analysis of its trajectory in the last decade and compare its main bibliometric indices with Brazilian medical and international dermatological journals. The journal showed consistent growth in different bibliometric indices, which indicates a successful editorial policy and greater visibility in the international scientific community, attracting foreign authors. The increases in citations received (4.1×) and in the Article Influence Score (2.9×) were more prominent than those of the main Brazilian medical and international dermatological journals. The success of Anais Brasileiros de Dermatologia in the international scientific scenario depends on an assertive editorial policy, on the prompt publication of high-quality articles, and on institutional stimulus to encourage clinical research in dermatology.
... Table 1 shows ASD studies' research areas. Lastly, the Thomson Reuters Journal Impact Factor (JIF) was used to estimate the effects of the identified publications (Mahmood, 2017). Although it is not an effective measure, it is essential for examining the evolution of research. ...
... We used PRISMA because it has been used in previous studies for conducting meta-analyses (Azevedo et al., 2023; Miao et al., 2018; Xing et al., 2018). Previous research in the LIS field has also used the PRISMA guideline for conducting meta-analyses (Mahmood, 2017) and systematic reviews (Ali and Warraich, 2018, 2021, 2022). ...
Article
Full-text available
This study aims to investigate the relationship of factors of UTAUT/UTAUT2 model with behavioral intention to acceptance and use of technology within academic and digital libraries context. These objectives were achieved by using meta-analysis approach using the Preferred Reporting Items for the Systematic Review and Meta-analysis (PRISMA-P) protocol. A comprehensive search strategy was formulated for searching relevant studies from Google Scholar, Scopus, and LISTA. Fourteen studies were selected that were published from 2008 to 2022 in different countries worldwide. The results of all the hypotheses showed a significant relationship of Performance Expectancy (PE), Effort Expectancy (EE), Social Influence (SI), Facilitating Conditions (FC), Hedonic Motivation (HM), and habit on Behavioral Intention (BI) to use technology for libraries. The magnitude of the effect size regarding the relationship of all the constructs was large. Present research identified a complete picture of studies published worldwide regarding the relationships of UTAUT/UTAUT2 constructs for acceptance of technology within the context of academic and digital libraries. Thus, this meta-analysis fills the literature gap and provides insights to reach conclusion about the relationships of UTAUT/UTAUT2 constructs. As previously published studies in this domain have provided significant/no-significant results and the strength of the relationship was also varied and these studies were lacking any collective conclusion. Findings of our study are also helpful for application developers in developing user-friendly and useful software for library services.
... We hope to make a small contribution to the development of a more relevant and democratic science. (Mahmood, 2017). Although 32 studies confirm the existence of a positive correlation between the journal impact factor and citations, three others do not (Tahamtam et al., 2016; Vessuri et al., 2014, 2015). ...
Article
Full-text available
ABSTRACT This article describes the output of Ibero-American social psychology in terms of journals and articles, and examines the bibliometric evidence, which shows an increase in production coexisting with the dominant weight of US and, to a lesser extent, European output, which together constitute the centre of the scientific system. The greater weight of US production is explained in part by the existence of more researchers in the USA than in Europe, Asia and Latin America. The article describes the bias toward rejecting output from non-English-speaking authors and from peripheral countries, as well as dissatisfaction with the current review and publication system. A collaborative Ibero-American virtual network is proposed as a path for scientific development.
... A review of previous studies on the methodologies and techniques used for the evaluation of academic journals allows us to survey several initiatives in this area, among them: the combination of citation counts with social network analysis to rank journals (Bohlin et al., 2016; Rost et al., 2017); the integration of expert judgements with bibliometric indicators (Walters & Eck, 2012); statistical approaches to compare different citation-based journal classification metrics (Haley, 2017); journal classification models based on PageRank algorithms (Yu et al., 2017); the correlation between expert-based journal rankings and the impact factor and other quantitative indicators (Huang, 2016; Mahmood, 2017); and content-based classifications (Rafols & Leydesdorff, 2009). ...
Article
Full-text available
This article analyses the impact and visibility of scholarly journals in the humanities that publish in the national languages in Finland, Norway and Spain. Three types of publishers are considered: commercial publishers, scholarly societies as publishers, and research organizations as publishers. Indicators of visibility and impact were obtained from Web of Science, SCOPUS, Google Metrics, Scimago Journal Rank and Journal Citation Report. The findings show that in Spain the categories “History and Archaeology” and “Language and Literature” account for almost 70% of the journals analysed, while the other countries offer a more homogeneous distribution. In Finland, the scholarly society publisher is predominant; in Spain, research organizations as publishers, mostly universities, have a greater weighting; while in Norway, the commercial publishers take centre stage. The results show that journals from Finland and Norway have reduced possibilities in terms of impact and visibility, since the vernacular language appeals to a smaller readership. Conversely, the Spanish journals are more attractive for indexing in commercial databases. Distribution in open access ranges from 64 to 70% in Norwegian and Finnish journals, and reaches 91% in Spanish journals. DOI coverage ranges from 31 to 41% in Nordic journals to 60% in Spanish journals, and has a widespread bearing on the citations received in all three countries (journals with DOI and open access are cited more frequently).
... This may occur from unintentional biases (familiarity-attraction, availability) or from motivations for self-enhancement. Although perception-based ratings are likely influenced by knowledge of a journal's impact factor, a recent meta-analysis across disciplines is not suggestive of extensive contamination (Mahmood, 2017). ...
Article
Full-text available
Prestigious journals are widely admired for publishing quality scholarship, yet the primary indicators of journal prestige (i.e., impact factors) do not directly assess audience admiration. Moreover, the publication landscape has changed substantially in the last 20 years, with electronic publishing changing the way we consume scientific research. Given that it has been 18 years since the publication of the last journal prestige survey of SIOP members, the authors conducted a new survey and used these results to reflect on changing practices within industrial and organizational (I-O) psychology. SIOP members (n = 557) rated the prestige and relevance of I-O and management journals. Responses were analyzed according to job setting, and were compared to a survey conducted by Zickar and Highhouse (2001) in 2000. There was considerable consistency in prestige ratings across settings (i.e., management department vs. psychology department; academic vs. applied), especially among the top journals. There was considerable variance, however, in the perceived usefulness of different journals. Results also suggested considerable consistency across the two time periods, but with some increases in prestige among OB-oriented journals. Changes in the journal landscape are discussed, including the rise of OHP as a topic of concentration in I-O. We suggest that I-O programs will continue to attract the top researchers in talent management and OHP, which should result in the use of a broader set of journals for judging I-O program impact.
... Kim et al. (2017) introduced a novel database known as Paraphrase Opinion Spam as a learning mechanism for the accurate detection of opinion fraud. Pieper (2016) used Amazon data to detect review spam, while Mahmood (2017) examined the correlation between perception-based journal rankings and citation-based measures. ...
Article
Full-text available
Astroturfing is a phenomenon in which the sponsors of fake messages or reviews are masked because their intentions are not genuine. Astroturfing reviews are intentionally crafted to influence people to take decisions for or against a target service, product, or organization. The tourism sector, which is flourishing and witnessing unprecedented growth, is affected by the activities of astroturfers. Astroturfing reviews can cause many problems for tourists who make decisions based on available online reviews, whereas authentic and genuine reviews help people make informed decisions. In this paper a Latent Dirichlet Allocation (LDA) based Group Topic-Author model is proposed for efficient discovery of social astroturfing groups within the tourism domain. An algorithm named Astroturfing Group Topic Detection (AGTD) is defined for the implementation of the proposed model. The experimental results of this study revealed the utility of the proposed system for the discovery of social astroturfing groups within the tourism domain.
Article
This study uses data for more than 3,300 business and economics journals to explore the relationships between 5 subjective (expert) journal ratings and 10 citation metrics including 5IF (5-year Impact Factor), Article Influence (AI) score, CiteScore, Eigenfactor, Impact per Publication, SJR, and SNIP. Overall, AI and SJR are the citation metrics most closely related to the expert journal ratings. Comparisons of paired citation metrics that are similar in all but a single key characteristic confirm that expert journal ratings are more closely related to size-independent citation metrics than to size-dependent metrics, more closely related to weighted metrics than to unweighted metrics, and more closely related to normalized metrics than to non-normalized metrics. These results, which are consistent across the 5 expert ratings, suggest that evaluators consider the average impact of an article in each journal rather than the total impact of the journal as a whole, that they give more credit for citations in high-impact journals than for citations in lesser journals, and that they assess each journal’s relative standing within its own field or subfield rather than its broader scholarly impact. No single citation metric is a good substitute for any of the expert ratings considered here.
Article
Purpose This study aims to assess the validity of citation metrics based on a disciplinary representative survey. Design/methodology/approach The present project compared citation rankings for individual scientists with expert judgments collected through a survey of 818 Russian sociologists. The Russian Index of Science Citation was used to construct the general population of 3,689 Russian sociologists, to whom the survey was sent by email. Regression analyses of bibliometric indicators and peer review scores were undertaken for the 723 names of scholars mentioned in the survey. Findings Findings suggest that scientometric indicators predict with significant accuracy the names of the most influential sociologists and of those scholars who are not mentioned, while they are less relevant for predicting names that received moderate attention in the survey. Originality/value This study contributes to the research on the validity of citation metrics by focusing on scientometric indicators, not limited to traditional metrics but including non-standard publication metrics and indicators of potential metric abuse. Besides, the study presents a national bibliometric data source that is especially important for non-Western higher education systems, which are less well represented in the Web of Science or Scopus.
Article
The purpose of this multi-institution study was to develop an understanding of where and how ranking lists are being used for the purpose of informing promotion and tenure decisions. Individuals were selected for this survey who were, at the time, serving in administrative positions at 115 R1 Carnegie research institutions. The survey questionnaire consisted of demographic, closed-response, and rating-scale questions designed to understand the respondents' experiences and attitudes concerning their academic department's promotion and tenure process. Results of this survey will inform librarians on practices associated with promotion and tenure involving open access publishing and the use of standardized journal lists.
Article
Purpose This study introduces a new approach, called the social structure approach, for ranking academic journals by focusing on hospitality and tourism journals; and a hybrid metric, including the combination of the journal impact factor via citations and a social network metric, called the journal knowledge domain index (JKDI). Design/methodology/approach Twenty-five hospitality and tourism journals were selected to test this approach. Collaboration-based metrics, productivity-based metrics, and network-based metrics are considered components of the social structure approach. Additionally, a hybrid metric, including the combination of the journal impact factor via citations and a social network metric, JKDI, is developed. Findings The study’s findings show that top or leading journals have a weaker position in some social structure approach metrics compared to other (or follower) journals. However, according to the JKDI, leading journals have remained constant with the other ranking studies. Practical implications The ranking of academic journals is vital for the stakeholders of academia. Consequently, the findings of this study may help stakeholders to design an optimal ranking system and formulate and implement effective research strategies for knowledge creation and dissemination. Originality/value As one of the first in the journal-ranking literature, this study has significant implications, as it introduces a new ranking approach.
Article
Full-text available
Background Optimal ranking of literature importance is vital in overcoming article overload. Existing ranking methods are typically based on raw citation counts, giving a sum of ‘inbound’ links with no consideration of citation importance. PageRank, an algorithm originally developed for ranking webpages at the search engine, Google, could potentially be adapted to bibliometrics to quantify the relative importance weightings of a citation network. This article seeks to validate such an approach on the freely available, PubMed Central open access subset (PMC-OAS) of biomedical literature. Results On-demand cloud computing infrastructure was used to extract a citation network from over 600,000 full-text PMC-OAS articles. PageRanks and citation counts were calculated for each node in this network. PageRank is highly correlated with citation count (R = 0.905, P < 0.01) and we thus validate the former as a surrogate of literature importance. Furthermore, the algorithm can be run in trivial time on cheap, commodity cluster hardware, lowering the barrier of entry for resource-limited open access organisations. Conclusions PageRank can be trivially computed on commodity cluster hardware and is linearly correlated with citation count. Given its putative benefits in quantifying relative importance, we suggest it may enrich the citation network, thereby overcoming the existing inadequacy of citation counts alone. We thus suggest PageRank as a feasible supplement to, or replacement of, existing bibliometric ranking methods.
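The adaptation described above is straightforward to prototype: PageRank is a power iteration over the citation graph, where each article passes a damped share of its score to the articles it cites. A minimal pure-Python sketch on a toy citation network (not the paper's PMC-OAS pipeline; the graph and parameters are invented for illustration):

```python
def pagerank(links, damping=0.85, iters=100):
    """Power-iteration PageRank over a directed citation graph.

    links: dict mapping each node to the list of nodes it cites.
    Returns a dict of PageRank scores summing to ~1.
    """
    nodes = set(links) | {t for ts in links.values() for t in ts}
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        # Teleportation term distributes (1 - damping) uniformly
        new = {v: (1 - damping) / n for v in nodes}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling node (cites nothing): spread its mass uniformly
                for v in nodes:
                    new[v] += damping * rank[src] / n
        rank = new
    return rank

# Toy citation network: "a" is cited by b, c, and d, so it should rank highest
scores = pagerank({"b": ["a"], "c": ["a"], "d": ["a", "b"], "a": []})
```

Unlike a raw citation count, the score a node receives here depends on the importance of its citers, which is the "citation importance" weighting the abstract contrasts with inbound-link sums.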
Article
Full-text available
Systematic reviews should build on a protocol that describes the rationale, hypothesis, and planned methods of the review; few reviews report whether a protocol exists. Detailed, well-described protocols can facilitate the understanding and appraisal of the review methods, as well as the detection of modifications to methods and selective reporting in completed reviews. We describe the development of a reporting guideline, the Preferred Reporting Items for Systematic reviews and Meta-Analyses for Protocols 2015 (PRISMA-P 2015). PRISMA-P consists of a 17-item checklist intended to facilitate the preparation and reporting of a robust protocol for the systematic review. Funders and those commissioning reviews might consider mandating the use of the checklist to facilitate the submission of relevant protocol information in funding applications. Similarly, peer reviewers and editors can use the guidance to gauge the completeness and transparency of a systematic review protocol submitted for publication in a journal or other medium.
Article
Full-text available
Background: An important attribute of the traditional impact factor is its controversial 2-year citation window. Several scholars have proposed using different citation time windows for evaluating journals, but there is no confirmation that a longer citation window is better. How do the journal evaluation effects of 3IF, 4IF, and 6IF compare with those of 2IF and 5IF? To answer these questions, we made a comparative study of impact factors with different citation time windows against the peer-reviewed scores of ophthalmologic journals indexed in the Science Citation Index Expanded (SCIE) database. Methods: The peer-reviewed scores of 28 ophthalmologic journals were obtained through a self-designed survey questionnaire. Impact factors with different citation time windows (including 2IF, 3IF, 4IF, 5IF, and 6IF) of the 28 ophthalmologic journals were computed and compared in accordance with each impact factor's definition and formula, using the citation analysis function of the Web of Science (WoS) database. An analysis of the correlation between impact factors with different citation time windows and peer-reviewed scores was carried out. Results: Although impact factor values with different citation time windows differed, there was a high level of correlation between them when evaluating journals. In the current study, for ophthalmologic journals' impact factors with different time windows in 2013, 3IF and 4IF appeared to be the ideal windows when assessed against peer-reviewed scores. In addition, the 3-year and 4-year windows were quite consistent with the cited-peak age of documents published by ophthalmologic journals.
Research limitations: Our study is based on ophthalmology journals and only analyzes impact factors with different citation time windows in 2013, so it has yet to be ascertained whether other disciplines (especially those with a later citation peak) or other years would follow the same or similar patterns. Originality/value: We designed the survey questionnaire ourselves, specifically to assess the real influence of journals, and used peer-reviewed scores to judge the journal evaluation effect of impact factors with different citation time windows. The main purpose of this study was to help researchers better understand the role of impact factors with different citation time windows in journal evaluation.
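The k-year impact factors compared above all follow the same formula: citations received in the census year to items published in the previous k years, divided by the number of citable items published in those years. A minimal sketch with made-up counts (the function name and data are my own):

```python
def impact_factor(cites_by_year, items_by_year, year, window):
    """k-year impact factor for a journal in a given census year.

    cites_by_year: {publication_year: citations received in `year`
                    to items published in that publication year}
    items_by_year: {publication_year: number of citable items}
    window: citation window k (2 for the classic JIF, 5 for 5IF, ...)
    """
    years = range(year - window, year)
    cites = sum(cites_by_year.get(y, 0) for y in years)
    items = sum(items_by_year.get(y, 0) for y in years)
    return cites / items if items else 0.0

# Hypothetical journal, census year 2013
cites = {2009: 150, 2010: 180, 2011: 220, 2012: 160}
items = {2009: 100, 2010: 100, 2011: 110, 2012: 90}
jif2 = impact_factor(cites, items, 2013, 2)  # 2IF, uses 2011-2012
jif4 = impact_factor(cites, items, 2013, 4)  # 4IF, uses 2009-2012
```

Widening the window smooths year-to-year noise but dilutes recency, which is why the study checks each window against the discipline's cited-peak age.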
Article
Full-text available
Journal ranking lists have impacted and are impacting accounting educators and accounting education researchers around the world. Nowhere is the impact positive. It ranges from slight constraints on academic freedom to admonition, censure, reduced research allowances, non-promotion, non-short-listing for jobs, increased teaching loads, and re-designation as a non-researcher, all because the chosen research specialism of someone who was vocationally motivated to become a teacher of accounting is, ironically, accounting education. University managers believe that these journal ranking lists show that accounting faculty publish top-quality research on accounting regulation, financial markets, business finance, auditing, international accounting, management accounting, taxation, accounting in society, and more, but not on what they do in their ‘day job’ – teaching accounting. These same managers also believe that the journal ranking lists indicate that accounting faculty do not publish top-quality research in accounting history and accounting systems. And they also believe that journal ranking lists show that accounting faculty write top-quality research in education, history, and systems, but only if they publish it in specialist journals that do not have the word ‘accounting’ in their title, or in mainstream journals that do. Tarring everyone with the same brush because of the journal in which they publish is inequitable. We would not allow it in other walks of life. It is time the discrimination ended.
Article
Full-text available
Many lists that purport to gauge the quality of journals in management and organization studies (MOS) are based on the judgments of experts in the field. This article develops an identity concerns model (ICM) that suggests that such judgments are likely to be shaped by the personal and social identities of evaluators. The model was tested in a study in which 168 editorial board members rated 44 MOS journals. In line with the ICM, respondents rated journal quality more highly to the extent that a given journal reflected their personal concerns (associated with having published more articles in that journal) and the concerns of a relevant ingroup (associated with membership of the journal’s editorial board or a particular disciplinary or geographical background). However, judges’ ratings of journals in which they had published were more favorable when those journals had a low-quality reputation, and their ratings of journals that reflected their geographical and disciplinary affiliations were more favorable when those journals had a high-quality reputation. The findings are thus consistent with the view that identity concerns come to the fore in journal ratings when there is either a need to protect against personal identity threat or a meaningful opportunity to promote social identity.
Article
Full-text available
In a "publish-or-perish culture", the ranking of scientific journals plays a central role in assessing performance in the current research environment. With a wide range of existing methods and approaches to deriving journal rankings, meta-rankings have gained popularity as a means of aggregating different information sources. In this paper, we propose a method to create a consensus meta-ranking using heterogeneous journal rankings. Using a parametric model for paired comparison data we estimate quality scores for 58 journals in the OR/MS community, which together with a shrinkage procedure allows for the identification of clusters of journals with similar quality. The use of paired comparisons provides a flexible framework for deriving a consensus score while eliminating the problem of data missingness.
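The parametric paired-comparison model described above is in the family of Bradley-Terry models, where journal i is ranked above journal j with probability s_i / (s_i + s_j). The sketch below fits such scores with the classic MM (minorise-maximise) update; it is a generic illustration under that assumption, not the authors' exact specification, and the win counts are invented:

```python
def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry quality scores from pairwise win counts.

    wins[i][j] = number of times item i was ranked above item j.
    Uses the standard MM update s_i = W_i / sum_j n_ij / (s_i + s_j),
    where W_i is i's total wins and n_ij the comparisons between i and j.
    Scores are normalised to sum to 1.
    """
    items = sorted(wins)
    s = {i: 1.0 for i in items}
    for _ in range(iters):
        new = {}
        for i in items:
            w_i = sum(wins[i].values())  # total wins of i
            denom = sum(
                (wins[i].get(j, 0) + wins[j].get(i, 0)) / (s[i] + s[j])
                for j in items if j != i
            )
            new[i] = w_i / denom if denom else s[i]
        total = sum(new.values())
        s = {i: v / total for i, v in new.items()}
    return s

# Toy rankings: A usually beats B and C, B usually beats C
wins = {
    "A": {"B": 8, "C": 9},
    "B": {"A": 2, "C": 7},
    "C": {"A": 1, "B": 3},
}
scores = bradley_terry(wins)
```

Because every pair of source rankings yields comparisons for only the journals both of them cover, a paired-comparison formulation naturally sidesteps the missing-data problem the abstract mentions.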
Article
Full-text available
The purpose of this article is to identify and discuss the possible uses of higher education journal rankings, and the associated advantages and disadvantages of using them. The research involved 40 individuals – lecturers, university managers, journal editors and publishers – who represented a range of stakeholders involved with research into higher education. The respondents completed an online questionnaire that consisted mainly of open questions. Although clear support for or opposition to journal rankings was split about equally, over two-thirds of the respondents reported having used or referred to a journal ranking during the previous 12 months. This suggests wide acceptance of the use of journal rankings, despite the downsides and problematic nature of these rankings being clearly recognised. It raises the question why the very diverse field of higher education does not show more resistance against the rather homogenising instrument of journal rankings.
Article
Full-text available
Background: Research rankings based on bibliometrics today dominate governance in academia and determine careers in universities. Method: Analytical approach to capture the incentives of users of rankings and of suppliers of rankings, both on an individual and an aggregate level. Result: Rankings may produce unintended negative side effects. In particular, rankings substitute the "taste for science" with a "taste for publication." We show that the usefulness of rankings rests on several important assumptions challenged by recent research. Conclusion: We suggest as alternatives careful socialization and selection of scholars, supplemented by periodic self-evaluations and awards. The aim is to encourage controversial discourses in order to contribute meaningfully to the advancement of science.
Article
Full-text available
Although journals are the primary vehicle through which social work professionals explore innovative interventions, research strategies, and policy ideas, journal quality has received little attention in the literature. This project extends a 1990 study and presents multiple measures for assessing journal quality. The primary data source is a national survey of 556 faculty from accredited schools of social work; additional data were compiled from the Social Science Citation Index (SSCI). Findings indicate that journal rankings have changed since 1990 and are considerably different from the SSCI ratings. Multiple evaluation systems are recommended for assessing social work journal quality.
Article
Full-text available
This Editorial coincides with the release of the San Francisco Declaration on Research Assessment (DORA), the outcome of a gathering of concerned scientists at the December 2012 meeting of the American Society for Cell Biology. To correct distortions in the evaluation of scientific research, DORA aims to stop the use of the "journal impact factor" in judging an individual scientist's work. The Declaration states that the impact factor must not be used as "a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions." DORA also provides a list of specific actions, targeted at improving the way scientific publications are assessed, to be taken by funding agencies, institutions, publishers, researchers, and the organizations that supply metrics. These recommendations have thus far been endorsed by more than 150 leading scientists and 75 scientific organizations, including the American Association for the Advancement of Science (the publisher of Science). Here are some reasons why:
Article
Full-text available
Instead of using citations or marketing academics' perceptual ranking of journals, this article examines the ranking of marketing journals using Australian university library holdings, in either hard copy or full-text electronic format. This measure was used as a proxy for broad-based accessibility of marketing journals. The study found that the accessibility rankings differed significantly from the most recent U.S. perceptual rankings, and it is suggested that in some situations, the accessibility ranking may be a more appropriate measure than other approaches. An examination of journal characteristics and their relationship to holdings in Australian university libraries was also undertaken. It was found that the year in which the journal started publication and its perceived importance within the United States (i.e., perceptual ranking) had a statistical impact on the proportion of Australian university libraries holding the journal.
Article
Full-text available
Attempts to assess the quality of academic publications have been increasing lately. Due to the number of existing journals, it is hard to make a representative selection and to find criteria for determining quality. Hence, questions arise, including what sort of journals are more important in terms of reputation, readership frequency, and relevance to scientific research and practice. Recent studies on journal rankings have been carried out on the basis of both objective data (citation counts) and the quality perceptions of experts. This study attempts a rating of tourism and hospitality journals among the scientific community according to the journals' readership frequency, scientific and practical relevance, overall reputation, and the importance of being published in the journals to the academic career of the respondents. Academic journals are essential for creating and disseminating knowledge. Publishing in the top-tier journals is vital to academic career advancement, particularly at research universities. The issue of journal quality has moved from the object of spirited, friendly discussions to a critical factor in promotion, tenure, and merit pay increase decisions (Soteriou, Hadjinicolas, and Patsia 1999). Therefore, studies on journal quality ranking can be of help to numerous stakeholders such as researchers, faculty and librarians, practitioners, and journal editors. In the past few years, examining and evaluating research output has become more and more important (Schlinghof and Backes-Gellner 2002; Pechlaner, Zehrer, and Abfalter 2002), leading to a wide range of methods being developed, discussed, criticized, revised, rejected, and used for the description and comparison of national, institutional, and individual research output (Backes-Gellner and Sadowski 1988). The number of articles in prestigious journals is a widely used indicator for the productivity of academic researchers.
Article
Full-text available
This study examines the question of whether the journal ranking VHB-JOURQUAL 2 can be considered a good measure of the construct “scientific quality”. Various rankings in business research provide the database for the analysis. The correlations between these rankings are used to assess the validity of VHB-JOURQUAL 2 along various validity criteria. The correlations with rankings that measure the same construct based on different methods show that VHB-JOURQUAL 2 has acceptable, but moderate, convergent validity. The validity varies considerably across disciplines, showing that the heterogeneity of business administration is not sufficiently represented by this overall ranking. The variability is related to the variation in members per discipline represented by the German Association for Business Research. Furthermore, the measure shows a weak correlation with acceptance rates as an indicator of nomological validity in some disciplines.
Article
Full-text available
In this study we investigated the measurement validity of the findings in the IS journal quality stream over the past ten years. Our evaluation applied a series of validation tests to the metrics presented in these studies using data from multiple sources. The results of our tests for content, convergent, and discriminant validity, as well as those for parallel-form, test-retest, and item-to-total reliability, were highly supportive. From these findings, we conclude that recent studies in the IS journal quality stream are credible. As such, these IS journal quality measures provide appropriate indicators of relative journal quality. This conclusion is important for both academic administrators and scientometric researchers, the latter of whom depend on journal quality measures in the evaluation of published IS research.
Article
Full-text available
This study is part of a program aimed at creating measures enabling a fairer and more complete assessment of a scholar's contribution to a field, thus bringing greater rationality and transparency to the promotion and tenure process. It finds current approaches toward the evaluation of research productivity to be simplistic, atheoretic, and biased toward reinforcing existing reputation and power structures. This study examines the use of the Hirsch family of indices, a robust and theoretically informed metric, as an addition to prior approaches to assessing the scholarly influence of IS researchers. It finds that while the top-tier journals are important indications of a scholar's impact, they are neither the only nor, indeed, the most important sources of scholarly influence. Other ranking studies, by narrowly bounding the venues included in those studies, distort the discourse and effectively privilege certain venues by declaring them to be more highly influential than warranted. The study identifies three different categories of scholars: those who publish primarily in North American journals, those who publish primarily in European journals, and a transnational set of authors who publish in both geographies. Excluding the transnational scholars, for the scholars who published in these journal sets during the period of this analysis, we find that North American scholars tend to be more influential than European scholars, on average. We attribute this difference to a difference in the publication culture of the different geographies. This study also suggests that the North American scholar list includes more of those with relatively low influence; therefore, to be a part of the top European scholar list requires a higher level of influence than to be a part of the top North American scholar list.
Article
Full-text available
Creating rankings of academic journals is an important but contentious issue. It is of especial interest in the U.K. at this time (2007) as we are only one year away from getting the results of the next Research Assessment Exercise (RAE) the importance of which, for U.K. universities, can hardly be overstated. The purpose of this paper is to present a journal ranking for business and management based on a statistical analysis of the Harzing data set which contains 13 rankings. The primary aim of the analysis is two-fold – to investigate relationships between the different rankings, including that between peer rankings and citation behaviour; and to develop a ranking based on four groups that could be useful for the RAE. Looking at the different rankings, the main conclusions are that there is in general a high degree of conformity between them as shown by a principal components analysis. Cluster analysis is used to create four groups of journals relevant to the RAE. The higher groups are found to correspond well with previous studies of top management journals and also gave, unlike them, equal coverage to all the management disciplines. The RAE Business and Management panel have a huge and unenviable task in trying to judge the quality of over 10,000 publications and they will inevitably have to resort to some standard mechanistic procedures to do so. This work will hopefully contribute by producing a ranking based on a statistical analysis of a variety of measures.European Journal of Information Systems (2007) 16, 303–316. doi:10.1057/palgrave.ejis.3000696
Article
Full-text available
Purpose – The purpose of this paper is to develop a ranking of knowledge management and intellectual capital academic journals. Design/methodology/approach – A revealed preference, also referred to as citation impact, method was utilized. Citation data were obtained from Google Scholar by using Harzing’s Publish or Perish tool. The h-index and the g-index were employed to develop a ranking list. The revealed preference method was compared to the stated preference approach, also referred to as an expert survey. A comprehensive journal ranking based on the combination of both approaches is presented. Findings – Manual re-calculation of the indices reported by Publish or Perish had no impact on the ranking list. The revealed preference and stated preference methods correlated very strongly (0.8 on average). A final aggregate journal list was developed that combined the stated and revealed preference methods.
Article
Full-text available
Purpose The purpose of this paper is to develop a global ranking of knowledge management and intellectual capital academic journals. Design/methodology/approach An online questionnaire was completed by 233 active knowledge management and intellectual capital researchers from 41 countries. Two different approaches: journal rank‐order and journal scoring method were utilized and produced similar results. Findings It was found that the top five academic journals in the field are: Journal of Knowledge Management, Journal of Intellectual Capital, Knowledge Management Research and Practice, International Journal of Knowledge Management, and The Learning Organization. It was also concluded that the major factors affecting perceptions of quality of academic journals are editor and review board reputation, inclusion in citation indexes, opinion of leading researchers, appearance in ranking lists, and citation impact. Research limitations/implications This study was the first of its kind to develop a ranking system for academic journals in the field. Such a list will be very useful for academic recruitment, as well as tenure and promotion decisions. Practical implications The findings from this study may be utilized by various practitioners including knowledge management professionals, university administrators, review committees and corporate librarians. Originality/value This paper represents the first documented attempt to develop a ranking of knowledge management and intellectual capital academic journals through a survey of field contributors.
Article
With the development of the marketing discipline, more and more marketing journals have been launched, and the relative standing of these journals has attracted wide attention, because evaluations of scholars and institutions are often based on which journals published their papers. Preference surveys and citation analysis are the two most commonly used methods for evaluating marketing journals, but both have shortcomings. This paper suggests that Social Network Analysis is an alternative method that avoids those problems. 68 marketing journals were selected and ranked using Social Network Analysis; the result was compared with previous ranking studies.
Article
Content analysis was conducted to provide a framework for studying the current state of and problems in the application of meta-analysis in the field of library and information science (LIS). The content of 35 meta-analysis application articles published in LIS-oriented journals was analyzed for their bibliometric information, reasons for conducting a meta-analysis, literature searches, criteria for selecting studies, meta-analysis procedures, quality control mechanisms, and results. Although meta-analysis appears to be underappreciated in the LIS domain, the findings demonstrate that meta-analysis holds strong prospects as an LIS research method. However, there are a number of problems that must be solved, one being the misunderstanding of meta-analysis as compared with other similar systematic review methods. Suggestions are offered for developing meta-analysis. An informed understanding of the role of the meta-analysis method in LIS will be helpful for future research and practice.
Article
Although the systematic review method has, in the past, been applied infrequently in library and information science (LIS) research, its use appears to be increasing. However, the relatively low quantity and poor quality of systematic reviews demonstrate the need for further research in this area. A critical appraisal framework is presented that can be used to guide the conduct and reporting of systematic reviews, at the same time increasing researchers and practitioners' awareness of the importance of such reviews in LIS research. Methods and tools used by scholars who have applied this method are reviewed, and criteria that are essential to achieving high quality systematic review are discussed in depth.
Article
Ranking journals is a longstanding problem and can be addressed quantitatively, qualitatively or using a combination of both approaches. In the last decades, the Impact Factor (i.e., the most known quantitative approach) has been widely questioned, and other indices have thus been developed and become popular. Previous studies have reported strengths and weaknesses of each index, and devised meta-indices to rank journals in a certain field of study. However, the proposed meta-indices exhibit some intrinsic limitations: (1) the indices to be combined are not always chosen according to well-grounded principles; (2) combination methods are usually unweighted; and (3) some of the proposed meta-indices are parametric, which requires assuming a specific underlying data distribution. We propose a data-driven methodology that linearly combines an arbitrary number of indices to produce an aggregated ranking, using different techniques from statistics and machine learning to estimate the combining weights. We additionally consider correlations between indices and meta-indices, to quantitatively evaluate their differences. Finally, we empirically show that the considered meta-indices are also robust to significant perturbations of the values of the combined indices.
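The linearly combined meta-index the abstract describes can be sketched as follows. All journal index values and the reference scores below are invented for illustration, and plain least squares stands in for the range of statistical and machine-learning weight estimators the paper actually considers:

```python
import numpy as np

# Rows: five hypothetical journals; columns: three indices
# (e.g. JIF, h-index, SJR) -- all values are illustrative.
X = np.array([
    [4.2, 55, 2.1],
    [1.1, 20, 0.6],
    [2.8, 40, 1.4],
    [0.7, 12, 0.3],
    [3.5, 48, 1.8],
])

def minmax(col):
    # Rescale each index to [0, 1] so the weights are comparable
    return (col - col.min()) / (col.max() - col.min())

Xn = np.apply_along_axis(minmax, 0, X)

# Reference quality scores (e.g. from an expert survey), used as the
# target for estimating the combining weights -- also illustrative.
y = np.array([0.95, 0.30, 0.60, 0.10, 0.80])

# Data-driven weights via least squares
w, *_ = np.linalg.lstsq(Xn, y, rcond=None)

meta_score = Xn @ w
ranking = np.argsort(-meta_score)  # journal row indices, best first
print(ranking)
```

In the paper's framing, the key point is that the weights are estimated from data rather than fixed a priori, and that different estimators (regularised regression, learning-to-rank, and so on) can be swapped in for the least-squares step.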
Article
Beginning about three years ago, the world of academic publishing has become infected by fake impact factors and misleading metrics that are launched by bogus companies. The misleading metrics and fake impact factors have damaged the prestige and reliability of scientific research and scholarly journals. This article presents the in-depth story of some of the main bogus impact factors, how they approached the academic world, and how the author identified them. Some names that they use are Universal Impact Factor (UIF), Global Impact Factor (GIF), and Citefactor, and there is even a fake Thomson Reuters company.
Article
The need to provide decision makers with information on the amount of work done by scientists has favored a large increase in the usage of citation indicators, such as the Impact Factor (IF). This ratio is defined as the relationship between the number of citations in a given year to articles published in a scientific journal in the two preceding years, and the total number of published articles in that journal over the same time period. Thus, it estimates the average number of citations per paper published in the considered journal, evaluated on a two-year interval. Much was written on the information conveyed by this indicator and on the consequences that potentially can result from its misuses. I would like to share with the readers of this magazine some of the issues associated with the usage of the IF in the research community.
Article
Purpose – Ranking relevant journals is very critical for researchers to choose their publication outlets, which can affect their research performance. In the management information systems (MIS) subject, many related studies conducted surveys as the subjective method for identifying MIS journal rankings. However, very few consider other objective methods, such as journals’ impact factors and h-indexes. The paper aims to discuss these issues. Design/methodology/approach – In this paper, the top 50 ranked journals identified by researchers’ perceptions are examined in terms of their correlation with the rankings by their impact factors and h-indexes. Moreover, a hybrid method to combine these different rankings based on the Borda count is used to produce new MIS journal rankings. Findings – The results show that there are low correlations between the subjective and objective based MIS journal rankings. In addition, the new MIS journal rankings by the Borda count approach can also be considered for future research. Originality/value – The contribution of this paper is to apply the Borda count approach to combine different MIS journal rankings produced by subjective and objective methods. The new MIS journal rankings and previous studies can be complementary to allow researchers to determine the top-ranked journals for their publication outlets.
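The Borda count used here to merge subjective and objective rankings awards positional points within each input ranking and sums them across rankings. A minimal sketch; the journal abbreviations and orderings below are invented for illustration, not taken from the paper:

```python
def borda(rankings):
    """Combine several rankings of the same journals via Borda count.

    rankings: list of lists, each an ordering of the same journals,
    best first. Each ranking awards n-1 points to its top journal,
    n-2 to the next, ..., 0 to the last; points are summed.
    """
    n = len(rankings[0])
    scores = {}
    for r in rankings:
        for pos, journal in enumerate(r):
            scores[journal] = scores.get(journal, 0) + (n - 1 - pos)
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative inputs: one perception-based and two objective orderings
survey  = ["MISQ", "ISR", "JMIS", "JAIS"]   # expert-survey order
impact  = ["ISR", "MISQ", "JAIS", "JMIS"]   # impact-factor order
h_index = ["MISQ", "ISR", "JAIS", "JMIS"]   # h-index order

print(borda([survey, impact, h_index]))
```

Because each input ranking contributes equally, the Borda count treats the subjective and objective methods as complementary rather than substitutes, which matches the paper's recommendation.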
Article
Systems of journal ranking can be arbitrary and problematic, particularly in highly specialized academic fields where journals are of limited scope and circulation. One field in which journal ranking is problematic is interior design. This article addresses existing ranking systems for academic journals and explores the limitations of these ranking systems. An attempt is made to suggest effective methods that could be developed for ranking interior design journals. The multiple factors that need to be considered in ranking journals in such a specialized field are discussed, including ranking bias, categorical specialization, and geographic and institutional variables. Implications of this review indicate that journal ranking for interior design should be considered within the context of relevant specialized journals rather than the current practice that places interior design in a broad comparative ranking system outside of its field. Multiple cross-referencing measurements, including impact factors, subject-specific citation rates, and acceptance rates, as well as prestige factors based on peer perceptual ranking of interior design journals need to be implemented. Faculty and institutional evaluations need to be based on composite factors that take into account not only a meaningful journal ranking system, but also a broader, and at the same time more accurate, metric to include the actual value of the publication itself, the focus of research, and the goals of the individual and of the institution.
Article
Purpose This is a polemical paper challenging both the principle and practice of journal ranking. In recent years academics and their institutions have become obsessive about the star‐ratings of the journals in which they publish. In the UK this is partly attributed to quinquennial reviews of university research performance though preoccupation with journal ratings has become an international phenomenon. The purpose of this paper is to examine the arguments for and against these ratings and argue that, on balance, they are having a damaging effect on the development of logistics as an academic discipline. Design/methodology/approach The arguments advanced in the paper are partly substantiated by references to the literature on the ranking of journals and development of scientific research. A comparison is made of the rating of logistics publications in different journal ranking systems. The views expressed in the paper are also based on informal discussions with numerous academics in logistics and other fields, and long experience as a researcher, reviewer and journal editor. Findings The ranking of journals gives university management a convenient method of assessing research performance across disciplines, though has several disadvantages. Among other things, it can skew the choice of research methodology, lengthen publication lead times, cause academics to be disloyal to the specialist journals in their field, favour theory over practical relevance and unfairly discriminate against relatively young disciplines such as logistics. Research evidence suggests that journal ratings are not a good proxy for the value and impact of an article. The paper aims to stimulate a debate on the pros and cons of journal rankings and encourage logistics academics to reflect on the impact of these rankings on their personal research plans and the wider development of the field. 
Research limitations/implications The review of journal ranking systems is confined to three countries, the UK, Germany and Australia. The analysis of journal ranking was also limited to 11 publications with the word logistics or supply chain management. The results of this review and analysis, however, provide sufficient evidence to support the main arguments advanced in the paper. Practical implications The paper asserts that the journal ranking system is encouraging a retreat into ivory towers where academics become more interested in impressing each other with their intellectual brilliance than in doing research that is of real value to the outside world. Originality/value Many logistics academics are concerned about the situation and trends outlined in this paper, but find it very difficult to challenge the prevailing journal ranking orthodoxy. This paper may give them greater confidence to question the value of the journal ranking systems that are increasingly dominating academic life.
Article
We analyse the Keele list of economics journals, two lists produced in Australia and the Association of Business Schools (ABS) list. Econometric analysis suggests that all the rankings respond to combinations of bibliometrics, such as ISI's Article Influence, and reward older journals. Lists produced by economists tend to reward theoretical journals and a focus on economics, whilst the ABS ranking tends to penalise an economics focus. On the basis of the regressions, we produce predicted rankings, distinguishing between journals which can be assigned to a specific category, for example 4*, and others which could lie in one of two categories.
Article
The journal impact factor is an annually calculated number for each scientific journal, based on the average number of times its articles published in the two preceding years have been cited. It was originally devised as a tool for librarians and publishers to provide information about the citation performance of a journal as a whole, but over the last few decades it has increasingly been used to assess the quality of specific articles and the research performance of individual investigators, institutions, and countries. In addition to this clear abuse of the journal impact factor, several conceptual and technical issues limit its usability as a measure of journal reputation, especially when journals are compared across different fields. An author's decision regarding the suitability of a scholarly journal for publication should, therefore, be based on the impact that this journal makes in the field of research, rather than on the journal impact factor.
Article
The ISI impact factor is widely accepted as a possible measurement of academic journal quality. However, much debate has recently surrounded this use, and several complex alternative journal impact indicators have been reported. To avoid the bias which may be caused by using a single quality indicator, ensemble of multiple indicators is a promising method for producing a more robust quality estimation. In this paper, an approach based on links between journals is proposed for the capturing and fusion of impact indicators. In particular, a number of popular indicators are combined and transformed to fused-links between academic journals, and two distance metrics: Euclidean distance and Manhattan distance are utilised to support the development and analysis of the fused-links. The approach is applied to both supervised and unsupervised learning, in an effort to estimate the impact and therefore the ranking of journals. Results of systematic experimental evaluation demonstrate that by exploiting the fused-links, simple algorithms such as K-Nearest Neighbours and K-means can perform as well as advanced techniques like support vector machines, in terms of accuracy and within-1 accuracy, while exhibiting the advantage of being more intuitive and interpretable.
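The two distance metrics the abstract names, applied to vectors of fused impact indicators, can be sketched as follows. The journal names and indicator values are invented for illustration; the paper's actual fused-links construction is more involved:

```python
import math

# Each journal is represented by a vector of impact indicators
# (e.g. JIF, SNIP, SJR) -- all values here are illustrative.
indicators = {
    "Journal A": [3.2, 1.4, 1.9],
    "Journal B": [3.0, 1.3, 1.7],
    "Journal C": [0.8, 0.5, 0.4],
}

def euclidean(u, v):
    # Straight-line distance between two indicator vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def manhattan(u, v):
    # Sum of absolute coordinate differences
    return sum(abs(a - b) for a, b in zip(u, v))

# Journals with similar indicator profiles end up close together,
# which is what a K-Nearest-Neighbours or K-means step then exploits
# to estimate a journal's impact class from its neighbours.
print(euclidean(indicators["Journal A"], indicators["Journal B"]))
print(euclidean(indicators["Journal A"], indicators["Journal C"]))
```

The appeal noted in the abstract is interpretability: the distance between two journals' fused indicator vectors is directly inspectable, unlike the decision surface of a support vector machine.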
Article
Using an online survey, we asked safety researchers around the globe how they perceived the quality of a list of 35 representative safety journals. We found that the most well-respected journal by expert opinion was the Journal of Loss Prevention in the Process Industries. However, taking both the respondents’ results and the citation-based results into consideration, the Journal of Hazardous Materials is the most influential journal, followed by Reliability Engineering and System Safety, Risk Analysis, Accident Analysis and Prevention and Safety Science.
Article
Highlights ► We conducted an exhaustive comparison of OM journal rankings based on impact factors versus other ranking methods ► Impact factors are useful metrics to rank OM journals ► Impact factor rankings alone are not a replacement for survey‐based, citation‐based, or author‐based methods ► Impact factors evaluate OM journal quality from another perspective and can be used with other methods to rank OM journals ► Impact factors are likely to shape and influence future perception of OM journal quality This paper investigates impact factor as a metric for ranking the quality of journal outlets for operations management (OM) research. We review all prior studies that assessed journal outlets for OM research and compare all previous OM journal quality rankings to rankings based on impact factors. We find that rankings based on impact factors that use data from different time periods are highly correlated and provide similar rankings of journals using either two‐year or five‐year assessment periods, either with or without self‐citations. However, some individual journals have large rank changes using different impact factor specifications. We also find that OM journal rankings based on impact factors are only moderately correlated with journal quality rankings previously determined using other methods, and the agreement among these other methods in ranking the quality of OM journals is relatively modest. Thus, impact factor rankings alone are not a replacement for the assessment methods used in previous studies, but rather they evaluate OM journals from another perspective.
Article
The primary objectives of this study were to identify a set of journals that report on industrial design research and to propose quality rankings of those journals. Based on an online survey, design journals were assessed in terms of two quality metrics: popularity and indexed average rank position. We find that both general and specialized design journals are highly valued and that geographic origin and academic background can be related with journal rankings. The results of the study offer a guide to both evaluators and those evaluated when judging or selecting research outlets.
Article
A Web-based survey of faculty at all ACSP schools is used to assess the value peers place on various journals. The results of the survey show that two journals—the Journal of the American Planning Association and the Journal of Planning Education and Research—dominate all others in importance. The authors analyze the survey data to identify how the relative valuation of journals differs by individual faculty or institutional characteristics. The authors then test whether the journals’ importance by peer judgment is related to journal impact factors. The results demonstrate clearly there is no correlation between the two.
Article
Currently the Journal Impact Factor (JIF) attracts considerable attention as a component in the evaluation of the quality of research in and between institutions. This paper reports on a questionnaire study of the publishing behaviour and researchers' preferences for seeking new knowledge information and the possible influence of JIF on these variables. 54 Danish medical researchers active in the field of diabetes research took part. We asked the researchers to prioritise a series of scientific journals with respect to which journals they prefer for publishing research and gaining new knowledge. In addition we requested the researchers to indicate whether or not the JIF of the prioritised journals has had any influence on these decisions. Furthermore we explored the perception of the researchers as to what degree the JIF could be considered a reliable, stable or objective measure for determining the scientific quality of journals. Moreover we asked the researchers to judge the applicability of JIF as a measure for research evaluation. One remarkable result is that approximately 80% of the researchers share the opinion that JIF does indeed have an influence on which journals they would prefer for publishing. As such we found a statistically significant correlation between how the researchers ranked the journals and the JIF of the ranked journals. Another notable result is that no significant correlation exists between journals where the researchers actually have published papers and journals in which they would prefer to publish in the future, measured by JIF. This could be taken as an indicator of the actual motivational influence on the publication behaviour of the researchers. That is, the impact factor actually works in our case. It seems that the researchers find it fair and reliable to use the Journal Impact Factor for research evaluation purposes.
Article
The purpose of this study is to offer a comprehensive assessment of journal standings in Marketing from two perspectives. The discipline perspective of rankings is obtained from a collection of published journal ranking studies during the past 15 years. The studies in the published ranking stream are assessed for reliability by examining internal correlations within the set. Aggregate rankings are presented from the published ranking stream, as well as from the two predominant ranking approaches used in these studies (opinion surveys and citation analyses). A new data source for journal rankings is introduced—the actual in-house target journal lists used by a sample of Association to Advance Collegiate Schools of Business (AACSB)-accredited schools to evaluate faculty research, representing an institutional perspective. The aggregate journal rankings from these lists are presented, as well as the rankings in two subsegments of the sample (US/non-US and doctoral/nondoctoral). The publications from the discipline perspective are compared to data from the in-house target journal lists actually used by AACSB-accredited schools. A full set of rankings across both data sets (school lists and the published article stream) is presented and differences are discussed.
Article
One hundred eighty-nine academics rated 54 journals concerned with the behavioral aspects of management. Journals were evaluated in regard to the quality of research they published. Results of respondents' ratings are compared to the Social Science Citation Index and earlier studies evaluating managerial journals. In addition, biases hypothesized to affect these ratings are analyzed. Although findings are generally consistent with earlier research, this survey is unique in its focus on the behavioral aspects of management, the large number of rated journals, and its analysis of differences in ratings.
Article
The channels for knowledge generation and dissemination in the business disciplines are many: presenting research at conferences, writing books, distributing working papers, offering insights in society newsletters, giving invited talks, publishing studies in academic journals, and many other venues, including even blogs and perhaps Facebook®. But the most important venue is probably published research in "top-level" academic journals. In the discipline of Operations Management, many studies and lists have been published that attempt to determine which of these journals are supposedly the "top" according to either citation analyses, the opinion of recognized experts, author affiliations, bibliometric studies, and other approaches. These lists may then, in turn, be used in different degrees to evaluate research. However, what really counts is what the academic institutions actually use for guidance in evaluating faculty research. Based on a new source of ranking data from AACSB-accredited schools, we compare published journal-ranking studies against that of academe to determine the degree to which the studies reflect academic "reality". We present rankings of OM journals based on this new source of data and on an aggregate of the stream of published studies, and evaluate their consistency.
Article
With an increased pressure to publish in internationally highly regarded journals, faculty evaluations frequently depend on journal rankings. Nonetheless, debates about journal rankings frequently arise since they do not take into account the underlying diversity of the finance research community. Therefore this study examines how contextual factors such as a researcher's geographical origin, research interests, seniority and journal affiliation may influence their journal quality perceptions and readership patterns. Our analysis is based on a worldwide sample of 862 finance academics where the perceived journal quality is measured across a number of dimensions, including journal familiarity, average rank position, percent of respondents who classify a journal as top tier, and readership. The results support that while there is remarkable consistency in identifying the top journals, for the remaining journals a significant variation on journal quality perceptions exists based on a researcher's geographic origin, research interests, seniority and journal affiliation.
Article
To determine the degree of correlation among journal citation indices that reflect the average number of citations per article, the most recent journal ratings were downloaded from the websites publishing four journal citation indices: the Institute for Scientific Information’s journal impact factor index, Eigenfactor’s article influence index, SCImago’s journal rank index and Scopus’ trend line index. Correlations were determined for each pair of indices, using ratings from all journals that could be identified as having been rated on both indices. Correlations between the six possible pairings of the four indices were tested with Spearman’s rho. Within each of the six possible pairings, the prevalence of identifiable errors was examined in a random selection of 10 journals and among the 10 most discordantly ranked journals on the two indices. The number of journals that could be matched within each pair of indices ranged from 1,857 to 6,508. Paired ratings for all journals showed strong to very strong correlations, with Spearman’s rho values ranging from 0.61 to 0.89, all statistically significant.
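Spearman's rho, the statistic used throughout this comparison, is the Pearson correlation of the rank-transformed values. A minimal tie-free version; the JIF and SJR figures below are invented purely for illustration:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation (simple version, assumes no ties)."""
    def ranks(v):
        # Rank 1 = smallest value
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    # Classic shortcut formula: rho = 1 - 6 * sum(d^2) / (n(n^2 - 1))
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical JIF and SJR values for six journals
jif = [4.2, 1.1, 2.8, 0.7, 3.5, 1.9]
sjr = [2.1, 0.5, 1.6, 0.4, 1.4, 0.9]
print(round(spearman_rho(jif, sjr), 3))
```

Because it operates on ranks rather than raw values, rho is the natural choice for comparing indices whose scales differ as radically as JIF and article influence scores do.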
Article
The purpose of this study is to: (1) develop a ranking of peer-reviewed AI journals; (2) compare the consistency of journal rankings developed with two dominant ranking techniques, expert surveys and journal impact measures; and (3) investigate the consistency of journal ranking scores assigned by different categories of expert judges. The ranking was constructed based on the survey of 873 active AI researchers who ranked the overall quality of 182 peer-reviewed AI journals. It is concluded that expert surveys and citation impact journal ranking methods cannot be used as substitutes. Instead, they should be used as complementary approaches. The key problem of the expert survey ranking technique is that in their ranking decisions, respondents are strongly influenced by their current research interests. As a result, their scores merely reflect their present research preferences rather than an objective assessment of each journal's quality. In addition, the application of the expert survey method favors journals that publish more articles per year.
Article
This study presents a ranking of 182 academic journals in the field of artificial intelligence. For this, the revealed preference approach, also referred to as a citation impact method, was utilized to collect data from Google Scholar. This list was developed based on three relatively novel indices: h-index, g-index, and hc-index. These indices correlated almost perfectly with one another (ranging from 0.97 to 0.99), and they correlated strongly with Thomson's Journal Impact Factors (ranging from 0.64 to 0.69). It was concluded that journal longevity (years in print) is an important but not the only factor affecting an outlet's ranking position. Inclusion in Thomson's Journal Citation Reports is a must for a journal to be identified as a leading A+ or A level outlet. However, coverage by Thomson does not guarantee a high citation impact of an outlet. The presented list may be utilized by scholars who want to demonstrate their research output, various academic committees, librarians and administrators who are not familiar with the AI research domain.
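The h-index and g-index underlying this ranking are computable directly from a journal's per-article citation counts. A minimal sketch with invented citation data:

```python
def h_index(citations):
    # h = largest h such that h articles each have >= h citations
    c = sorted(citations, reverse=True)
    return sum(1 for i, v in enumerate(c, start=1) if v >= i)

def g_index(citations):
    # g = largest g such that the top g articles together
    # have at least g^2 citations
    c = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, v in enumerate(c, start=1):
        total += v
        if total >= i * i:
            g = i
    return g

# Illustrative citation counts for one journal's articles
cites = [25, 18, 12, 9, 7, 3, 2, 1, 0]
print(h_index(cites), g_index(cites))
```

The g-index always equals or exceeds the h-index because it credits the full citation weight of a journal's most-cited articles, which is why the two correlate so strongly in the study while still ordering some journals differently.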