Conference: Learning from the past & charting the future of the discipline. 14th Americas Conference on Information Systems, AMCIS 2008, Toronto, Ontario, Canada, August 14-17, 2008
Abstract
This study examines the use of journal rankings and proposes a new method of measuring IS journal impact based on the Hirsch family of indices (Hirsch 2005; Sidiropoulos et al. 2006). Journal rankings are a very important exercise in academia since they impact tenure and promotion decisions. Current methods employed to rank journal influence are shown to be subjective. We propose that the Hirsch Index (2005) and Contemporary Hirsch Index (Sidiropoulos et al. 2006) based on data from Publish or Perish be adopted as a more objective journal ranking method. To demonstrate the results of using this methodology, it is applied to the “pure MIS” journals ranked by Rainer and Miller (2005). The authors find substantial differences between the scholar rankings and those obtained using the Hirsch family of indices. They also find that the contemporary Hirsch Index allows researchers to identify journals that are rising or declining in influence.
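To make the proposed method concrete, here is a minimal Python sketch of the two indices the study applies to journals; the function names, the parameter defaults (gamma = 4, delta = 1, the values usually cited for the contemporary h-index), and the toy data are our own illustration, not code from the paper, which drew its citation data from Publish or Perish.

```python
from datetime import date

def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def hc_index(articles, now_year=None, gamma=4, delta=1):
    """Contemporary h-index (Sidiropoulos et al. 2006): each article's citation
    count is age-discounted, score = gamma * citations / (now - year + 1)**delta,
    and an h-style cutoff is then applied to the discounted scores."""
    now_year = now_year or date.today().year
    scores = sorted(
        (gamma * c / (now_year - year + 1) ** delta for year, c in articles),
        reverse=True,
    )
    hc = 0
    for i, s in enumerate(scores, start=1):
        if s >= i:
            hc = i
        else:
            break
    return hc

# Toy example: (publication_year, citations) for each article in one journal.
articles = [(2001, 150), (2004, 60), (2006, 30), (2007, 12), (2007, 3)]
print(h_index([c for _, c in articles]))   # -> 4
print(hc_index(articles, now_year=2008))   # -> 5: the age discount favours recent papers
```

Because the hc-index discounts older citations, a journal whose recent articles are quickly taken up can outrank one coasting on an old citation stock, which is what lets the method flag journals that are rising or declining in influence.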
... Bontis and Serenko (2009) apply citations as a science management tool for assessing the progress and development of IS based on the relationships among core IS journals. Truex et al. (2008), Truex et al. (2009) and Cuellar et al. (2008) introduce the h-index family of measures of scholarly achievement to the IS field, which incorporate both publication counts and the number of citations those publications garner. The results suggest the IS field lags behind other disciplines in applying scientometrics for linking and verifying research, as well as for mapping its research activities in comparison with other disciplines. ...
... Major findings:
Bontis and Serenko (2009): Ranked list of knowledge management journals using the h-index and g-index.
Cooper et al. (1993): A stable core of IS journals identified using multiple citation measures.
Cuellar et al. (2008): The h-family of indices provides more information about journal rankings.
Long et al. (2009): Affiliation impacts research productivity more than where the researcher obtained the Ph.D.; the productivity of IS researchers does not follow Lotka's Law.
Mingers and Harzing (2007): Ranked list of IS journals based on a statistical analysis of multiple rankings, including citation-based studies.
Nerur et al. (2005): Ranked list of IS journals using citation data.
Polites and Watson (2009): Ranked list of IS journals using SNA; a science management tool for defining relationships between IS journals.
Serenko et al. (2010): Ranking of the most productive countries, institutions, practitioners and authors in KM using various citation indices, Lotka's Law and the Yule-Simon Law.
Truex et al. (2008): Ranked list of IS authors using the h-indices. ...
... Major findings:
Cooper et al. (1993): The IS field has a stable core of influential journals.
Cuellar et al. (2008): Impact and influence of IS assessed using the h-index.
Grover et al. (2006a, b): The IS field has significant impact and influence on other fields.
Long et al. (2009): Institution or department prestige affects the productivity and quality of graduates' research; productivity is impacted more by affiliation than origin.
Moody et al. (2010): An evaluation of the most influential sources for IS.
Nerur et al. (2005): Influence of IS journals on journals from other fields. ...
Although scientometrics is seeing increasing use in Information Systems (IS) research, in particular for evaluating research efforts and measuring scholarly influence, scientometric IS studies have historically focused primarily on ranking authors, journals, or institutions. Notwithstanding the usefulness of ranking studies for evaluating the productivity of the IS field's formal communication channels and its scholars, the IS field has yet to exploit the full potential that scientometrics offers, especially towards its progress as a discipline. This study contributes by raising the discourse surrounding the value of scientometric research in IS and proposes a framework that uncovers the multi-dimensional bases for citation behaviour and their epistemological implications for the creation, transfer, and growth of IS knowledge. Having identified 112 empirical research evaluation studies in IS, we select 44 substantive scientometric IS studies for in-depth content analysis. The findings from this review allow us to map an engaging future for scientometric research, especially towards enhancing the IS field's conceptual and theoretical development.
... In other publications, we have proposed more open and measurable criteria for judging scholarly influence: 'ideational influence' and 'social influence' (Cuellar, Takeda, & Truex, 2008; Takeda, et al., 2010; Truex, et al., 2011; Truex III, et al., 2009). These criteria could be considered to be in line with the 'fitness for use' or 'user based' definition of quality. ...
... "Although the top-tier journals have increased the number of articles published in recent years, the number of articles being published in the top-tier journals compared relatively with the number of scholars seeking publication has declined, thus creating an academic environment where top-tier publications have become harder and harder to produce." (Cuellar, et al., 2008) In the information systems (IS) discipline this discourse on 'what counts' for P&T and credibility as a researcher tracks previous discourses in which IS scholars were concerned with: 1) the relative paucity of IS specialty academic journals outlets and 2) the question of the independence of the field as a separate discipline in its own right (Katerattanakul, et al., 2006;Larsen & Levine, 2005;Straub, 2006;Wade, Biehl, & Kim, 2006). The former was an expression of concern that too many IS scholars were seeking access to relatively few extant IS journals or IS reference discipline outlets. ...
... Scarcity may now be maintained by the way we 'count' journal hits or admit published papers to the list of 'counted hits' (Ashkanasy, 2007; Cronin, 2001; Cuellar, et al., 2008; Culnan, 1986; Katerattanakul & Han, 2003; Straub & Anderson, 2010). With the maturing of the field, expectations of what constitutes methodological quality, the development of theory and standards of narrative development have become higher. ...
This research describes the state of the current process for the evaluation of scholarly output, a process that is highly dependent on the reputation of a set of journals that are thought to be guarantors of scholarly quality. We review the current system of evaluating scholarly output and describe its virtues and shortcomings. Then, based on Habermas (1984) and Mingers and Walsham (2010), we propose an improvement to the system with the goal of establishing a more open and democratic discourse. We review previous work on the construct "scholarly influence" and supplement the constructs of ideational influence and social influence with venue influence, creating a composite measure for scholarly influence that builds on and improves the current regimen for evaluating the work of scholars. We draw from past works on communicative theory and democratic discourse to propose a system that has greater transparency, more equal access, open participation, increased truthfulness, and lower power differences.
... We therefore acquired the list of authors who had published in two representative European IS journals: the European Journal of Information Systems and the Information Systems Journal. These two journals appear in the Rainer and Miller (2005) and Cuellar, Takeda, and Truex (2008) journal rankings as the highest ranked European IS journals and are consistently ranked among the highest of the European IS journals in other studies. They are not, of course, the only European journals. ...
... From previous studies (Takeda and Cuellar, 2008; Cuellar, Takeda, and Truex, 2008), it is clear that MISQ is the most influential IS journal and that ISR and JMIS are next in influence within the IS community. However, this study demonstrates that while MISQ, ISR, and the IS articles from Management Science are vehicles that convey a scholar's influence in the IS community, they are not the only or even the most important vehicles of influence. ...
This study examines the use of the Hirsch family of indices to assess the scholarly influence of IS researchers. It finds that while the top tier journals are important indications of a scholar's impact, they are neither the only nor indeed the most important sources of scholarly influence. In effect, other ranking studies, by narrowly bounding the venues they include, privilege certain venues by declaring them more highly influential than they prove to be once broader measures of scholarly impact are included. Such studies distort the discourse. For instance, contrary to the common view that to be influential one must publish in a very limited set of US journals, our results show that scholars published in top tier European IS journals are of similar influence to authors publishing in the MIS Quarterly, ISR and Management Science, even though they do not publish in those venues.
... This stream, the so-called Lotkaian scientometrics, has led more recently to the development of what has been termed the Hirsch, or h-family, statistics. The h-indices balance the productivity of the scholar against the citations to those publications thus providing a metric that demonstrates both the productivity and uptake of the scholar's publications (Hirsch, 2005; Cuellar et al., 2008). The application of the Hirsch statistics to the assessment of scholarly influence in the IS discipline has been advanced elsewhere (Serenko & Bontis, 2009; Truex et al., 2009) and the calculation of the indices is described there. ...
... A Kleinian analysis of extant approaches to evaluating scholarly endeavor In conducting a program of research exploring the nature of scholarly influence, initiated for the Klein Festschrift in 2007, we have grown to believe (cf., principle 1 – taking a value position) that the characteristics of the present academic discourse about scholarly influence and the evaluation of researchers and of journal outlets have become systematically distorted in several important ways (cf., principle 3 – revealing and challenging prevailing beliefs and social practices). We articulate and support this position through the lens of the ideal speech situation or democratic discourse (cf., principle 1 – using core CST concepts) and only have space in this article to describe the arguments we have fleshed out in other works published, in press, and under review (Cuellar et al., 2008; Truex et al., 2009; Takeda, 2011). First, parties do not have equal access to participation. ...
Heinz Klein was a fine scholar and mentor whose work and life have inspired us to explore the notion of ‘scholarly influence’ which we cast as ‘ideational’ and ‘social influence’. We adopt a portfolio of measures approach, using the Hirsch family of statistics to assess ideational influence and Social Network Analysis centrality measures for social influence to profile Heinz Klein's contribution to information systems (IS) research. The results show that Heinz was highly influential in both ideational terms (a significant body of citations) and social terms (he is close to the heart of the IS research community). Reflecting on the major research themes and scholarly values espoused by Klein we define a ‘Kleinian view of IS research’, grounded in Habermas’ Theory of Communicative Action, and use that to frame four affirmative propositions to address what we observe to be a distortion and attenuation of the academic discourse on the evaluation of scholarly production. This paper argues that focus should be shifted from the venue of publication of the research to the uptake of the ideas contained in it, thus increasing the openness of the discourse, participation in the discourse, truthfulness, and reduction of the inequities in power distribution within academia.
... In the IS field, scientometric studies are based on a variety of theoretical groundings and methodological approaches and apply various degrees of scientific rigor (Truex III et al. 2009). The ideational influence construct has been operationalized in the literature by means of citation analysis and the Hirsch family of indices (Cuellar et al. 2008; Truex III et al. 2009; Truex III et al. 2011). The use of the Hirsch index has been shown to be useful in evaluating scholars and superior to other measures such as the impact factor (Mingers 2009; Mingers et al. 2012). ...
How the evaluation of research is conducted has significant effects on the field in terms of what work is done, how it is done and who is rewarded. This paper expands on Cuellar et al. (2016) by providing an extended description and critique of the existing method and an overview of its proposed replacement, the Scholarly Capital Model. It shows that the existing method, counting papers in ranked journals, rests on an under-theorized base, uses systematically distorted data and has deleterious effects on the field. The paper then overviews the Scholarly Capital Model and shows how it can be used to evaluate research regardless of the type of institution.
... Ideational influence has been operationalized by means of the Hirsch family of indices (Egghe, 2006; Hirsch, 2005; Sidiropoulos, Katsaros, & Manolopoulos, 2006) for both scholars and journals (Cuellar, Takeda, & Truex, 2008; Truex III et al., 2009). Three of the Hirsch family ...
... The h-index can also be used to measure the impact of journals, as it can be applied to any collection of cited papers (Braun, Glänzel, & Schubert, 2006; Schubert & Glänzel, 2007; Xu, Liu, & Mingers, 2015). Studies have been carried out in several disciplines: marketing (Moussa & Touzani, 2010), economics (Harzing & Van der Wal, 2009), information systems (Cuellar, Takeda, & Truex, 2008) and business (Mingers et al., 2012). The advantages and disadvantages of the h-index for journals are the same as for the h-index generally, but it is particularly the case that it is not normalised for different disciplines, and it is also strongly affected by the number of papers published. ...
Scientometrics is the study of the quantitative aspects of the process of science as a communication system. It is centrally, but not only, concerned with the analysis of citations in the academic literature. In recent years it has come to play a major role in the measurement and evaluation of research performance. In this review we consider: the historical development of scientometrics, sources of citation data, citation metrics and the "laws" of scientometrics, normalisation, journal impact factors and other journal metrics, visualising and mapping science, evaluation and policy, and future developments.
... According to the g-index, when all articles of an outlet are 'ranked in decreasing order of the number of citations that they received, the g-index is the (unique) largest number such that the top g articles received (together) at least g² citations' [Egghe, (2006), p.131]. Both indices have already been employed in journal ranking projects (Cuellar et al., 2008; Harzing and van der Wal, 2008b; Tol, 2008). The contemporary h-index (referred to as the hc-index), suggested by Sidiropoulos et al. (2007), takes into account the age of each article. ...
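As a small illustration of the quoted definition, here is a Python sketch of the g-index (our own, for exposition; the citation counts are invented):

```python
def g_index(citations):
    """g-index (Egghe 2006): the largest g such that the g most cited
    articles together received at least g**2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(ranked, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

# Invented counts: the cumulative sums are 150, 210, 240, 252, 255, each at
# least i**2 for i = 1..5, so g is capped here by the number of articles.
print(g_index([150, 60, 30, 12, 3]))  # -> 5
```

Unlike the h-index, the g-index lets a few heavily cited articles keep raising the score, so it rewards outlets with occasional blockbuster papers.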
The purpose of this investigation is to develop a ranking of academic business ethics journals. For this, a revealed preference approach, also known as a citation impact method, was employed. The citation data were generated by using Google Scholar; h-index, g-index and hc-index were utilised to obtain a ranking. It was observed that the scores of these three indices correlated almost perfectly. This study also demonstrates that business ethics is a well-established discipline that should have its own set of recognised academic outlets.
... While the h-index has been applied to journals in the past (Bornmann et al. 2009; Braun et al. 2006; Cuellar et al. 2008; Serenko et al. 2009), there have been objections to the native use of the h-index for the evaluation of the impact of journals. Molinari and Molinari (2008) argue that h will only increase during a scientist's productive career and therefore should only be used to ...
... No restriction on the discipline was placed, and the 'Lookup Direct' function was applied to retrieve the latest data directly from Google. Overall, this approach is consistent with those utilized in previous journal ranking projects (Cuellar, Takeda, & Truex, 2008; Serenko & Bontis, 2009a). Table 1 presents the final journal ranking list. ...
This study presents a ranking of 182 academic journals in the field of artificial intelligence. For this, the revealed preference approach, also referred to as a citation impact method, was utilized to collect data from Google Scholar. The list was developed based on three relatively novel indices: h-index, g-index, and hc-index. These indices correlated almost perfectly with one another (ranging from 0.97 to 0.99), and they correlated strongly with Thomson's Journal Impact Factors (ranging from 0.64 to 0.69). It was concluded that journal longevity (years in print) is an important but not the only factor affecting an outlet's ranking position. Inclusion in Thomson's Journal Citation Reports is a must for a journal to be identified as a leading A+ or A level outlet. However, coverage by Thomson does not guarantee a high citation impact of an outlet. The presented list may be utilized by scholars who want to demonstrate their research output, and by various academic committees, librarians and administrators who are not familiar with the AI research domain.
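For readers who want to reproduce this kind of agreement check, the sketch below computes pairwise Spearman rank correlations with SciPy; the index scores are invented placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical h-, g-, and hc-index scores for the same seven journals.
h_scores = np.array([34, 28, 25, 19, 15, 12, 9])
g_scores = np.array([55, 47, 40, 30, 22, 18, 13])
hc_scores = np.array([21, 19, 15, 13, 10, 7, 6])

# Values of rho near 1.0 mean the indices induce nearly the same ordering.
pairs = [("h vs g", h_scores, g_scores),
         ("h vs hc", h_scores, hc_scores),
         ("g vs hc", g_scores, hc_scores)]
for label, a, b in pairs:
    rho, _ = spearmanr(a, b)
    print(f"{label}: rho = {rho:.2f}")
```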
Assessing the research capital that a scholar has accrued is an essential task for academic administrators, funding agencies, and promotion and tenure committees worldwide. Scholars have criticized the existing methodology of counting papers in ranked journals and made calls to replace it (Adler & Harzing, 2009; Singh, Haddad, & Chow, 2007). In its place, some have called for assessing the uptake of a scholar's work instead of assessing "quality" (Truex, Cuellar, Takeda, & Vidgen, 2011a). In this paper we identify three dimensions of scholarly capital as part of a scholarly capital model (SCM): ideational influence (who uses one's work?), connectedness (with whom does one work?), and venue representation (where does one publish one's work?). We develop measurement models for the three dimensions of scholarly capital and test the relationships in a path model. We show how one might use the measures to evaluate scholarly research activity.
This study is part of a program aimed at creating measures enabling a fairer and more complete assessment of a scholar's contribution to a field, thus bringing greater rationality and transparency to the promotion and tenure process. It finds current approaches toward the evaluation of research productivity to be simplistic, atheoretic, and biased toward reinforcing existing reputation and power structures. This study examines the use of the Hirsch family of indices, a robust and theoretically informed set of metrics, as an addition to prior approaches to assessing the scholarly influence of IS researchers. It finds that while the top tier journals are important indications of a scholar's impact, they are neither the only nor, indeed, the most important sources of scholarly influence. Other ranking studies, by narrowly bounding the venues included in those studies, distort the discourse and effectively privilege certain venues by declaring them to be more highly influential than warranted. The study identifies three different categories of scholars: those who publish primarily in North American journals, those who publish primarily in European journals, and a transnational set of authors who publish in both geographies. Excluding the transnational scholars, for the scholars who published in these journal sets during the period of this analysis, we find that North American scholars tend to be more influential than European scholars, on average. We attribute this difference to a difference in the publication culture of the different geographies. This study also suggests that the top European scholar list includes fewer of those with relatively low influence. Therefore, to be a part of the top European scholar list requires a higher level of influence than to be a part of the top North American scholar list.
The influence and impact of a variety of journals publishing information systems (IS) research is determined using journal citation reports. The data suggest there is consistency in several of the top-rated journals, but wide variety in others. Some IS journals appear to have rankings that contradict their 'true' value to other researchers. For some of the best journals, the ranking is relatively consistent, which is comforting, but for some of the mid-ranked journals the picture is not very clear.
Summary The h-index (or Hirsch-index) was defined by Hirsch in 2005 as the number h such that, for a general group of papers, h papers received at least h citations while the other papers received no more than h citations. This definition is extended here to the general framework of Information Production Processes (IPPs), using a source-item terminology. It is further shown that in each practical situation an IPP always has a unique h-index. In Lotkaian systems h = T^(1/α), where T is the total number of sources and α is the Lotka exponent. The relation between h and the total number of items is highlighted.
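The Lotkaian result h = T^(1/α) follows from a short rank-frequency argument; the derivation below is our own paraphrase of the standard reasoning, not a quotation from the paper.

```latex
% Zipf (rank-frequency) form of a Lotkaian system with exponent \alpha > 1:
\[ g(r) = \frac{B}{r^{\beta}}, \qquad \beta = \frac{1}{\alpha - 1} . \]
% The h-index is the fixed point of g:
\[ g(h) = h \;\Longrightarrow\; \frac{B}{h^{\beta}} = h \;\Longrightarrow\; h = B^{1/(\beta + 1)} . \]
% The least productive of the T sources has one item, so g(T) = 1 and B = T^{\beta}, giving
\[ h = T^{\beta/(\beta + 1)} = T^{1/\alpha} . \]
```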
Summary In this short paper I propose a combination of qualitative and quantitative criteria to classify the quality, talent and creative thinking of the scientists of the "hard", medical and biological sciences. The rationale for the proposed classification is to focus on the impact and overall achievements of each individual scientist and on how he is perceived by his own community. This new method is probably more complete than any other form of traditional judgment of a scientist's achievements and reputation, and may be useful for funding agencies, editors of scientific journals, science academies, universities, and research laboratories.
We extend the pioneering work of J. E. Hirsch, the inventor of the h-index, by proposing a simple and seemingly robust approach for comparing the scientific productivity and visibility of institutions. Our main findings are that i) while the h-index is a sensible criterion for comparing scientists within a given field, it does not directly extend to rank institutions of disparate sizes and journals; ii) however, the h-index, which always increases with paper population, has a universal growth rate for large numbers of papers; iii) thus the h-index of a large population of papers can be decomposed into the product of an impact index and a factor depending on the population size; iv) as a complement to the h-index, this new impact index provides an interesting way to compare the scientific production of institutions (universities, laboratories or journals).
In this paper we present characteristics of the statistical correlation between the Hirsch (h-) index and several standard bibliometric indicators, as well as with the results of peer review judgment. We use the results of a large evaluation study of 147 university chemistry research groups in the Netherlands covering the work of about 700 senior researchers during the period 1991-2000. Thus, we deal with research groups rather than individual scientists, as we consider the research group as the most important work floor unit in research, particularly in the natural sciences. Furthermore, we restrict the citation period to a three-year window instead of life time counts in order to focus on the impact of recent work and thus on current research performance. Results show that the h-index and our bibliometric crown indicator both relate in a quite comparable way with peer judgments. But for smaller groups in fields with less heavy citation traffic the crown indicator appears to be a more appropriate measure of research performance.
This article calculates probabilities for the occurrence of different types of papers such as genius papers, basic papers, ordinary papers or insignificant papers. The basis of these calculations is the formulae for the cumulative nth citation distribution, being the cumulative distribution of times at which articles receive their nth (n = 1, 2, 3, ...) citation. These formulae (proved in previous papers) are extended to allow for different aging rates of the papers. These new results are then used to define different importance classes of papers according to the different values of n, as a function of time t. Examples are given in case of a classification into four parts: genius papers, basic papers, ordinary papers and (almost) insignificant papers. The fact that, in these examples, the size of each class is inversely related to the importance of the journals in this class is proved in a general mathematical context in which we have an arbitrary number of classes and where the threshold values of n in each class are defined according to the natural law of Weber-Fechner.
The present study examines one of the fundamental aspects of author co-citation analysis (ACA): the way co-citation counts are defined. Co-citation counting provides the data on which all subsequent statistical analyses and mappings are based, and we compare ACA results based on two different types of co-citation counting: on the one hand, the traditional type that only counts the first one among a cited work’s authors, and on the other hand, a simplified approach to all-author co-citation counting that takes into account the first five authors of a cited work. Results indicate that the picture produced through this simplified all-author co-citation counting contains author groups that are more coherent, and is therefore considerably clearer. However, this picture represents fewer specialties in the research field being studied than that produced through the traditional first-author co-citation counting when the same number of top-ranked authors is selected and analyzed. Reasons for these effects are discussed. Variations of counting more than first authors are compared.
Three hundred and sixty-four information systems faculty responded to a questionnaire rating 51 journals and 13 conferences associated with the information systems field. In addition to rating the value of the outlets, faculty were asked to state whether a journal was published primarily to disseminate information systems research or not. Relative rankings for each journal and conference were determined. As the third in a series of studies, comparisons were made between these findings and those of previous ones. The overall stability in the rankings of journals and conferences was also identified. A few journals and conferences were rated and ranked for the first time. Furthermore, a significant increase in the ratings of “pure” information systems journals was noted.
The identification of the small subset of indexes to quickly reference articles in the top IS journals is discussed. The increased availability of online indexes has been a great time-saver for busy researchers, provided they know which indexes will produce the best results. A list of seven possible online indexes was provided that can be used by the researchers to find articles in the top 25 IS journals. Searching the right combination of these indexes can save researchers a great deal of time in locating the best articles from the best journals.
This article focuses on information systems (IS) journals. As the pressure for scientifically rigorous and relevant research mounts, authors need to identify the outlets with the highest visibility for their work, and readers need to know which publications offer the best sources of informed IS research. Increasingly, academics and institutions all over the world attach significant importance to journal rankings for promotion, tenure and assessment purposes. In particular, U.S. college and university faculty promotion and tenure decisions are made on the basis of academic research output in top tier journals. Furthermore, the research assessment exercise in Great Britain ranks university departments for the purpose of distributing government research funds. This process measures research excellence by assessing where faculty publish, taking into account the respective journal standing. There is evidence to suggest that universities on both sides of the Atlantic increasingly use journal lists for internal assessment and promotion purposes.