This material is presented to ensure timely dissemination of scholarly and technical work.
Copyright and all rights therein are retained by authors or by other copyright holders. All
persons copying this information are expected to adhere to the terms and constraints
invoked by each author's copyright. In most cases, these works may not be reposted
without the explicit permission of the copyright holder.
This is a copyrighted publication of MIS Quarterly, and is held by the Regents of the
University of Minnesota for the Management Information Systems Research Center
(MISRC). More information can be found at:
http://www.misq.org/roadmap/copyright.html
This version of the referenced work is the post-print of the article, and should not be
redistributed or cited without permission of the authors.
The current reference for this work is as follows:
Paul Benjamin Lowry, Gregory D. Moody, James Gaskin, Dennis F. Galletta,
Sean Humpherys, Jordan B. Barlow, and David W. Wilson (2012). “Evaluating
journal quality and the Association for Information Systems (AIS) Senior
Scholars’ journal basket via bibliometric measures: Do expert journal assessments
add value?” MIS Quarterly (forthcoming, accepted 07-Dec-2012).
If you have any questions and/or would like copies of other articles I’ve published, please
email me at Paul.Lowry.PhD@gmail.com, and I’d be happy to help. My vita can be
found at http://www.cb.cityu.edu.hk/staff/pblowry
Alternatively, I have an online system that you can use to request any of my published or
forthcoming articles. To go to this system, click on the following link:
https://seanacademic.qualtrics.com/SE/?SID=SV_7WCaP0V7FA0GWWx
Evaluating Journal Quality and the Association for Information Systems (AIS) Senior Scholars’
Journal Basket via Bibliometric Measures: Do Expert Journal Assessments Add Value?
Paul Benjamin Lowry
Department of Information Systems
College of Business, City University of Hong Kong
Paul.Lowry.PhD@gmail.com
Gregory D. Moody
Department of Management, Entrepreneurship and Technology
Lee Business School, University of Nevada, Las Vegas
greg.moody@unlv.edu
James Gaskin
Information Systems Department
Marriott School of Management, Brigham Young University
james.eric.gaskin@gmail.com
Dennis F. Galletta
Decision, Operations, and Information Technology
Katz School of Business, University of Pittsburgh
galletta@katz.pitt.edu
Sean L. Humpherys
Computer Information & Decision Management
College of Business, West Texas A&M University
shumpherys@wtamu.edu
Jordan B. Barlow
Operations & Decision Technologies
Kelley School of Business, Indiana University
jordy.barlow@gmail.com
David W. Wilson
Management Information Systems Department
Eller College of Management, University of Arizona
davewilsonphd@gmail.com
ACCEPTED AT
MISQ
BY SENIOR EDITOR DETMAR STRAUB IN MARCH OF 2013.
THIS IS THE WORKING PAPER VERSION THAT IMMEDIATELY PRECEDES
GALLEYS MAKING THE PAPER READY FOR PUBLICATION.
BIOGRAPHIES
Paul Benjamin Lowry is an Associate Professor of IS and Associate Director of the
MBA Programme at the City University of Hong Kong. He received his Ph.D. in MIS from the
University of Arizona. He has published articles in MISQ, JMIS, JAIS, IJHCS, JASIST, ISJ, EJIS,
CACM, Information Sciences, DSS, IEEETSMC, IEEETPC, SGR, Expert Systems with
Applications, Computers & Security, and others. He serves as an AE at MISQ (regular guest),
EJIS, I&M, ECRA, and CAIS. He has also served as an ICIS track co-chair. His research interests
include behavioral information security (e.g., protection motivation, accountability, whistle-
blowing, compliance, deception, privacy), human-computer interaction (e.g., trust, culture,
intrinsic motivations), e-commerce, and scientometrics of IS research.
Gregory D. Moody holds two Doctorates in Information Systems from the University of
Pittsburgh and the University of Oulu. He is currently an Assistant Professor at the University
of Nevada, Las Vegas. He has a Master’s of Information Systems Management from the Marriott
School, Brigham Young University, where he was also enrolled in the Information Systems
Ph.D. Preparation Program. He has published or has articles accepted in MISQ, JMIS, JASIST,
JISE, and CAIS. His interests include IS security and privacy, e-business (electronic markets,
trust) and human-computer interaction (Web site browsing, entertainment). He is currently the
AIS SIGHCI Newsletter Editor.
James Gaskin is an Assistant Professor of IS at Brigham Young
University, Utah. He received his Ph.D. in MIS from Case Western Reserve University. He has
articles published or accepted in MISQ, JAIS, and CAIS, among others. His research interests include
organizational genetics, human-computer interaction, team leadership and success, innovation in
project-based design work, and research and teaching methodologies. James advises scholars and
practitioners worldwide through correspondence stemming from his statistics videos and wiki
(Gaskination on YouTube, and StatWiki).
Dennis F. Galletta (PhD, University of Minnesota) is an AIS Fellow and Professor at the
Katz Business School, University of Pittsburgh, where he also serves as Director of Doctoral
Programs. He has published in journals such as Management Science, ISR, JMIS, EJIS, and
JAIS, and has served on the editorial boards of Journal of MIS, MISQ, ISR, and JAIS. He won an MISQ
“Developmental Associate Editor” award (2006). He has served as both program and general chair/co-
chair of ICIS and AMCIS, as well as co-chair of the ICIS Doctoral Consortium. He served as
ICIS Treasurer, AIS President, and AIS Council Member. He is founding co-Editor in Chief of
AIS Transactions on HCI. He also established the AIS concept of Special Interest Groups in
2000.
Sean Humpherys is an assistant professor at West Texas A&M, Department of
Computer Information and Decision Management. He received his Ph.D. in MIS from the
University of Arizona. He has published in MISQ, DSS, IEEE Transactions on Professional
Communication, and CAIS. He has been a researcher for three Centers of Excellence and has
received funding from the Department of Homeland Security, NSF, Center for Identification
Technology Research, National Center for Border Security and Immigration, and
PricewaterhouseCoopers. His research interests include decision support systems for fraud
detection, credibility assessments, rapid screening for hostile intent, human-computer
interaction, machine learning algorithms, data mining, computational linguistics, tenure and
promotion, and the scientific process.
Jordan B. Barlow is a doctoral student at the Kelley School of Business, Indiana
University. He is a graduate of the Masters of Information Systems program at Brigham Young
University where he was enrolled in the Information Systems Ph.D. Preparation Program. His
research interests include collaboration, CMC, virtual teams, and behavioral IT security. He has
published in CAIS and presented his work at conferences such as the ACM Conference on
Computer Supported Cooperative Work and Social Computing (CSCW).
David W. Wilson is a doctoral student at the University of Arizona in the Center for the
Management of Information. He holds a Masters of Information Systems Management from the
Marriott School of Management, Brigham Young University, where he completed the
Information Systems Ph.D. Preparation Program. His research has appeared or is forthcoming in
MIS Quarterly and CAIS, and at conferences including ICIS, AMCIS, and HICSS. His research
interests include online identity, information privacy, and human-computer interaction.
ACKNOWLEDGEMENTS
We appreciate partial funding for this project from City University of Hong Kong UCG Grant
#7200256. We are indebted to Detmar Straub, the AE, and the reviewers for their rigorous
oversight and feedback that resulted in a much improved manuscript. We appreciate reviews and
feedback on previous versions of this work from Ritu Agarwal, Joey George, Mike Denny,
Laura Rawlins, and Leslie Willcocks. We also appreciate contributions to earlier portions of this
research from Jared VanderHorst.
Evaluating Journal Quality and the Association for Information Systems (AIS) Senior
Scholars’ Journal Basket via Bibliometric Measures: Do Expert Journal Assessments Add
Value?
ABSTRACT
Information systems (IS) journal rankings and ratings help scholars focus their publishing efforts
and are widely used surrogates for judging the quality of research. Over the years, numerous
approaches have been used to rank IS journals, approaches such as citation metrics, school lists,
acceptance rates, and expert assessments. However, the results of these approaches often conflict
due to a host of validity concerns. In the current scientometric study, we make significant strides
toward correcting for these limitations in the ranking of mainstream IS journals. We compare
expert rankings to bibliometric measures such as the ISI Impact Factor, the h-index, and social
network analysis metrics. Among other findings, we conclude that bibliometric measures provide
very similar results to expert-based methods in determining a tiered structure of IS journals,
thereby suggesting that bibliometrics can be a complete, a less expensive, and a more efficient
substitute for expert assessment. We also find strong support for seven of the eight journals in
the Association for Information Systems (AIS) Senior Scholars’ “basket” of journals. A cluster
analysis of our results indicates a two-tiered separation in the quality of the highest quality IS
journals, with MISQ, ISR, and JMIS belonging, in that order, to the highest A+ tier. Journal
quality metrics fit nicely into the sociology of science literature and can be useful in models that
attempt to explain how knowledge disseminates through scientific communities.
KEYWORDS
Information systems journal rankings, scientometrics, bibliometrics, journal quality, SenS-6,
SenS-8, self-citation, Impact Factor, h-index, social network analysis, expert opinion, composite
ranking or rating, AIS Senior Scholars basket of journals, nomologies for dissemination of
scientific knowledge
INTRODUCTION
As a scientific discipline, Information Systems (IS) defines itself in large part by the
academic journals it produces. This is so because peer-reviewed journals serve as the primary
outlet for research findings and academic discussion. Rainer and Miller (2005) assert that a
journal’s importance to a discipline “naturally leads to the question of relative academic quality
[of its journals]” (p. 92). Lewis et al. (2007) argue for the importance of rigorous research regarding
journal quality and rankings: “Scientometric studies form a vital line of inquiry to facilitate the
ongoing evaluation and improvement of an academic discipline.” In particular, Straub (2006)
notes that scientometric research is concerned with “the legitimacy in a field and how it is
established” (p. 242) and lauds the inherent value of these self-studies to the development and
progress of the IS field.
Discussion of relative journal quality in a discipline must be continual, relevant, and
rigorous in order to inform and convince internal and external stakeholders (Straub 2006).
Timely discussion regarding the rigor and scope of a field’s top journals also helps to educate
stakeholders outside the discipline (e.g., college deans, P&T Committees, external reviewers,
etc.). This issue is particularly relevant in IS because of some misconceptions regarding the
quality of IS journals. For example, in December 2011, the Financial Times expanded their list
of top business journals, which increased the number of top journals for virtually every business
discipline except IS. Valacich et al. (2006) and Kozar et al. (2006) earlier confirmed this
disparity among business disciplines regarding elite publishing opportunities, concluding that
most other business areas have significantly more elite publishing opportunities than IS
researchers.
Dennis et al. (2006) identified a serious problem with what business schools might
consider to be top IS journals. They discovered that few tenured IS researchers publish in elite
journals as defined by one of the commonly accepted business-school journal lists promoted in
Trieschmann et al. (2000), which includes only MISQ¹ and ISR from the IS discipline (this is also
true of other top business journal lists from Financial Times, Business Week, and UT-Dallas).
Among tenured IS faculty, only 0.8 percent in the US and 0.7 percent worldwide published in
MISQ and ISR. However, Dennis et al. (2006) state that 86 percent of the 49 institutions they
studied expected three or more elite publications for tenure. In a separate survey of 375 IS
faculty, 55 percent of the respondents reported that to qualify for tenure, researchers had to
publish in top-tier journals (Galletta 2010). As a result of these pressures and disparities, IS
faculty face more difficulty in meeting tenure requirements than those in other business
disciplines, which then further affects the IS field (Dean et al. 2011; Dennis et al. 2006; Valacich
et al. 2006).
A key problem with the well-publicized lists that drive research behavior and rewards in
business schools is that they are created by external organizations serving non-academic agendas.
The process used to create these lists is not scientific; that is, it lacks an open, peer-reviewed, intellectual
process that uses empirical evidence to determine what constitutes a top journal. Based on such
lists, the longstanding tradition in some North American business schools is that only MISQ and
ISR are considered top-tier journals (Dennis et al. 2006). Not surprisingly, several elite
institutions in Europe and Asia have followed suit, considering only MISQ and ISR to be top IS
journals. If a greater number of top IS journals actually exist than this perception allows, then the
pervading bias will continue to have an unfair and detrimental effect on the global IS field
because North American business schools have a disproportionately heavy influence on global
rankings and accreditation standards.

¹ For brevity, we abbreviate all journal names in this paper with their common abbreviations. The journals’ full names, with additional publication information, are cross-referenced in Appendix C.
Several recent IS studies have highlighted these inequities as they play out among
internal and external stakeholders (Dean et al. 2011; Dennis et al. 2006; Kozar et al. 2006;
Valacich et al. 2006). In response to this issue, the Association for Information Systems (AIS)
Senior Scholars publicly endorsed a basket of six plus two² top IS journals (hereafter, the SenS-6³)
(Saunders et al. 2007) and then at their meeting in December 2011 decided to include all
eight of those IS journals in a single list (hereafter, the SenS-8) (AIS 2011).⁴
We believe that the proposed SenS-8 could win broader acceptance outside the IS
community with sound empirical evidence supporting the claims. The IS community has already
empirically demonstrated that it has fewer publishing opportunities in the top tier than other
disciplines (Dennis et al. 2006; Kozar et al. 2006; Valacich et al. 2006); however, to date,
empirical evidence to convince business school deans and other key policy-makers and
constituencies that there are other elite journals beyond MISQ and ISR in the field has not been
proffered. Statements by the AIS Senior Scholars alone are unlikely to provide a compelling case
that the IS field has more than two top journals. Hard empirical evidence is pivotal to reify the
SenS-8.
To provide such hard evidence, this paper employs a repeatable and multi-faceted
methodology. Rigorous, evidence-based assessment can enable the IS discipline to make
stronger arguments for the actual quality of its journals, whether it has zero, one, two, three, or
a more numerous but still manageable set of premier journals. Additionally, although both
opinion-based expert assessments of journals and bibliometric approaches have contributed to
past assessments of journal quality, we aim to show that a multi-faceted bibliometric approach
can effectively replace extensive and costly expert-opinion surveys of the IS academic
community (e.g., Lowry et al. 2004). Indeed, bibliometric measures can assess the quality of
journals more easily and objectively, thereby enabling regular updates and easier replication for
purposes of measurement validity.

² The additional two journals were said to be of comparable quality, but were placed into a second group because the Senior Scholars believed that a list of eight might be too long to be considered by some outside stakeholders.

³ Though the AIS supports the Senior Scholars Forum, the SenS-6 and SenS-8 baskets are official recommendations of the Senior Scholars, rather than the AIS itself.

⁴ Although this thoughtful recommendation carries strong merit within a major part of the IS community, broadening the basket from six to eight has not been without its own controversy.
Scientometric approaches involving bibliometrics have long been key to addressing
publishing and journal quality issues in other research fields. Notably Science and Nature have
published scientometric articles supported by bibliometrics (e.g., Acuna et al. 2012b; Wilhite and
Fong 2012). Similarly, top business journals have published scientometric articles that provide
persuasive evidence of journal quality and other related issues of significance to business fields.
Examples include Trieschmann et al. (2000) in the Academy of Management Journal; Walsh
(2011) in the Academy of Management Review; Chen and Huang (2007) in Journal of Corporate
Finance; Bonner et al. (2006) in Accounting, Organizations, and Society; and Nerur et al. (2008)
in Strategic Management Journal. Scientometric work in the MIS field focusing on issues related
to journal quality was initiated many years ago in MIS Quarterly by Culnan and Swanson (1987;
1986), and more recent papers have been published in this venue by Dennis et al. (2006) and
Dean et al. (2011) as well as in Information Systems Research by Valacich et al. (2006).
As further motivation for our scientometric approach, we first outline methodological
issues not adequately addressed by existing IS-ranking studies. Then we explain our approach
and address these controversies through an analysis of the largest and most diverse data
collection effort to date. Next, we compare the results of bibliometric methods to those of expert
opinions, including the SenS-8. We conclude by examining the unique contributions of this
approach as compared to past approaches and providing recommendations for the IS field based
on the implications.
METHODOLOGICAL ISSUES WITH RANKINGS APPROACHES
The question of how to determine the relative quality of IS journals has been the subject
of healthy debate for many years (e.g., Dean et al. 2011; Ferratt et al. 2007; Katerattanakul and
Han 2003; Lowry et al. 2004; Rainer Jr. and Miller 2005; Straub and Anderson 2010). Despite
this vibrant research stream, limitations and biases of existing approaches hamper reliable and
consistent ranking of IS journals. Appendix A summarizes the three major approaches to
assessing journal quality, along with their strengths and weaknesses. Based on a review of extant
IS scientometrics studies, four key issues would seem to be preeminent.
Issue 1: Should Non-IS Journals and Practitioner Magazines be Bundled with Pure IS Journals?
Prior studies have sporadically ranked purely IS journals against non-IS journals and
practitioner magazines. By including such disparate outlets in the journal basket under scrutiny,
these studies add noise that undermines the validity of the rankings (Lewis et al. 2007),
particularly to external audiences within the business school. Thus, previous studies perpetuate
an “apples-to-oranges” mixed comparison in journal rankings. Moreover, the opinions of the
larger IS field regarding top IS journals are systematically different from, and inappropriately mixed
with, the opinions of much smaller groups of researchers who publish in journals outside IS. Such
mixed approaches can lead to misleading results that undermine the face validity of these studies
because they do not account for the different missions of various journal types (Adler and
Bartholomew 1992). For example, one study (Peffers and Ya 2003) included JACM (an elite CS
journal), AMR, and ASQ in their list of journals, yet these were ranked below several non-IS-specific
practitioner magazines and IS journals such as The DATABASE for Advances in
Information Systems, CAIS, and JCIS. Another study (Rainer Jr. and Miller 2005) ranked some
practitioner magazines (e.g., CACM, IEEE Software) above leading academic journals such as
JACM, ASQ, AMJ, Organization Science, and AMR.
Issue 2: Should Diverse Global Opinions be Used to Rank Journals?
The second issue raises the question of geographic diversity of perspectives in rankings.
IS scholars continually call for scientometric studies that are global in scope and that represent
the general IS discipline not just North American academics (Baskerville and Wood-Harper
1998; Dean et al. 2011; Katerattanakul and Han 2003; Lowry et al. 2004). However, the majority
of extant studies have focused on North America (e.g., Dean et al. 2011). This issue is
increasingly salient because IS scholars engage in global collaboration with colleagues, and
researchers and institutions in different world regions use journal-ranking studies in distinct ways
(Baskerville and Wood-Harper 1998; Iivari 2008; Willcocks et al. 2008).
Similarly, past journal ranking studies generally assume that participants are
homogeneous in experience, attitude, research purpose, and type of institution (Baskerville 2008;
Özbilgin 2009). However, scientometric research in other fields shows that perceptions of
journal quality can be affected by geography (Galliers and Meadows 2003; Sellers et al. 2004;
van Dalen and Henkens 2001), type of institution (Axarloglou and Theoharakis 2003; Svensson
and Wood 2006), academic level (Axarloglou and Theoharakis 2003; Sellers et al. 2004), and an
individual’s educational training (e.g., IS Ph.D. vs. non-IS Ph.D.) (Axarloglou and Theoharakis
2003; Sellers et al. 2004). To date, with the exception of two studies in which global regions
were considered (Lowry et al. 2004; Mylonopoulos and Theoharakis 2001), global IS journal
rankings have not addressed these demographic factors.
Issue 3: Should Expert Opinion and Bibliometrics Be Used Together?
Third, extant studies of IS journal rankings have used a one-dimensional measurement
approach and focused solely on expert opinion or bibliometrics, but never both. Recent
discussion in our field brings this practice into question (Straub and Anderson 2010).
Furthermore, scientometrics studies in other leading academic fields use both of these
approaches to provide what is purported to be a more balanced assessment of journal quality
(e.g., Allen et al. 2009; Butler 2008; Harnad 2008; Harvey et al. 2007; Mingers and Harzing
2007).
What exactly is the problem? Surveying scholars for their opinions is costly. It requires a
huge scholarly effort, and it raises an assortment of validity and measurement issues that are not
easily resolved. Thus, it would be beneficial if a bibliometric approach could be devised that
would yield the same results as expert assessments.
Issue 4: Does the SenS-8 Basket of Journals Well Represent the Top IS Journals?
Finally, can we find reasonable evidence to reify or contest the Senior Scholars’ basket of
top IS journals? A lively debate on the assessment of IS journal quality was initiated in 2007 by
the AIS Senior Scholars, who recommended the aforementioned “basket” of six plus two
excellent journals. This basket, supported by 72% of researchers surveyed by Galletta (2010),
included MISQ, ISR, JMIS, EJIS, ISJ, and JAIS (Saunders et al. 2007). Although JIT and JSIS
were characterized as two additional journals that would not reduce the quality of the list, most
researchers referred to the basket of six. To encourage equal treatment of the journals by the IS
community, the Senior Scholars specifically avoided rank-ordering the journals. Aiming to
reduce confusion from the “six plus two” approach of the SenS-6 list, and to recognize the two
journals that were, in essence, not being given equal consideration, the Senior Scholars combined
all of those journals into a single, official basket of eight “excellent” journals (SenS-8) in
December 2011. Strikingly, to date, no research has provided external empirical validation of the
global IS academic community’s assessment of this recommendation: whether the included
journals are truly the top eight journals in IS and whether they should or should not be rank-
ordered.
METHODOLOGIES BY ISSUE
The goal of our study was to conduct the largest and most rigorous expert-based ranking
study to date and then compare the results to bibliometric methods on the same IS journal set. If
the results are statistically equivalent, then one can conceivably replace the other. If not, a more
complicated, balanced methodology would need to be developed, similar to what has been done
in other business fields (e.g., Allen et al. 2009; Butler 2008; Harnad 2008; Harvey et al. 2007;
Mingers and Harzing 2007). The remainder of this section describes our methodologies and
design choices, organized by the four issues that drive this paper.
Addressing Issue 1 by Ranking Only Academic IS Journals
All extant IS journal rankings studies, except one portion of the Peffers and Ya (2003)
study, rank IS journals, non-IS journals, and practitioner magazines together (see Tables B.1 and
B.2 in Appendix B). Although several previous studies questioned the practice of including non-
IS journals in the rankings (Chua et al. 2003; Katerattanakul and Han 2003; Lewis et al. 2007;
Peffers and Ya 2003), most of these studies still rank some (or many) journals that are not,
strictly speaking, IS journals. We break with this practice by specifically including only
academic IS journals, in part because citation analysis is more valid when comparing journals
within the same discipline (Harvey et al. 2007; Leydesdorff 2008).
What then constitutes an “IS journal”? Noting that no definitive criteria exist, Lewis et al.
(2007) call for an empirically validated set of such criteria. Our response to this challenge was to
adopt a verifiable means of determining IS-centricity. First, similar to Lowry et al. (2004), we
focused on identifying and ranking the best IS journals. Consequently, we began with a list of
journals (IS and non-IS) that were ranked in all previous IS journal rankings (see Table B.1).
Then, we evaluated the editorial mission and stated goals of the supporting organization for
every journal on that list, which in most cases provided a clear answer regarding whether a
journal was primarily an IS journal. In the few cases where this distinction was unclear, we
systematically considered the research foci, educational training, and departmental affiliation of
the editors and editorial boards of the journals in question. If only a small minority of a journal’s
editors were IS academics residing in IS departments, then we did not include the journal in our
study (e.g., AMJ). Two hundred IS academics then reviewed our list of proposed IS journals to
ensure that none were missing or listed in error.⁵ We further validated these decisions and added
a few more suggested journals based on this preliminary test (see Table B.1). All IS journals that
were initially considered for this study are listed in Appendix C.

⁵ These scholars were randomly targeted from the larger pool of respondents to our expert assessment research instrument.
Addressing Issue 2 by Using the Most Global, Diverse Sample to Date
The IS academic community continually presses for more global representation in IS
journal rankings (Baskerville and Wood-Harper 1998; Dean et al. 2011; Katerattanakul and Han
2003; Lowry et al. 2004), yet only two studies have addressed this need (i.e., Lowry et al. 2004;
Mylonopoulos and Theoharakis 2001). A diverse sample is thus needed to reflect today’s global
IS community and to answer such calls (Baskerville 2008; Gallivan and Benbunan-Fich 2007;
Katerattanakul and Han 2003; Özbilgin 2009).
To rigorously approach this issue, we first sought to reach the entire IS global academic
community via population oversampling (see Appendix D for details). The goal was to reach not
only elite researchers at elite institutions, but also to include all IS academics in all AIS world
regions. We estimate that our survey reached a maximum of approximately 8350 eligible
respondents. The 2816 responses that we received therefore represent at least a 33.7 percent
response rate from international IS academics. Accordingly, this participation rate is the
largest international participation in an IS journal study to date.
Further, we collected demographics such as type of institution⁶ (Carnegie Foundation 2010;
Dean et al. 2011; Hendrix 2009), academic position, and educational training of the respondents
as controls to determine if such factors make any difference. Of the 2816 responses, 2420 were
complete and usable and 2280 provided optional demographic information, as summarized in
Table 1. To provide meaningful analysis by world region, we asked all respondents to state the
country of their primary institution. For the first time in such a study, almost half of the
respondents were from outside North America, thereby providing the most internationally
diverse response to date for this type of study.⁷
Table 1. Respondent Demographics (n = 2280)
AIS region: Region 1: The Americas, 51.5%; Region 2: Europe, Africa, Other, 28.7%; Region 3: Asia and Australia, 19.8%
Ph.D. training: Information Systems, 65.4%; CS or Engineering, 14.1%; Non-IS business, 11.1%; Behavioral Science, 3.3%; Other, 6.0%
Professorial status: Assistant (or Lecturer)⁸, 27.7%; Associate (or Senior Lecturer), 34.0%; Full, 30.7%; Advanced doctoral candidate, 6.5%; Other or no response, 1.1%
Institution type (based on Carnegie Classification of Institutions): Research University with very high research activity (RU/VH), 40.0%; Research University with high research activity (RU/H), 20.0%; Doctoral Research University/Master’s level university, 19.7%; Undergraduate Teaching-oriented University, 11.5%; Other and no response, 8.8%

⁶ We based the institution-type categorization on those used by the 2005 version of the Carnegie Classification of Institutions of Higher Learning™ (Carnegie Foundation 2010), used to classify institutions in North America based on the primary purpose of the institution (e.g., research-intensive vs. undergraduate teaching). We used this classification because of its transparency, simplicity, and similar use in previous studies (Dean et al. 2011; Hendrix 2009). Rather than use all the classifications, we reduced these to five basic types.

⁷ Whereas the authors were ready and willing to conduct non-response bias tests on our final sample, the SE did not feel that these tests would yield greater confidence in the representativeness of the sample. First, it is nowhere clear what the population of IS academics is or how one would gain access to it. Second, the choice to “over” sample certain regions was based on the typically lower response rates that these areas have demonstrated in the past, and this complicates the representativeness issue. Overall, the SE felt that the sampling frame was reasonable and that the realized sample was sufficient to draw credible inferences about journal quality.

⁸ The titles “Lecturer” and “Senior Lecturer” are used in many schools in Europe and Australia and are roughly equivalent to “Assistant Professor” and “Associate Professor” in university systems in Asia and North America. These titles must not be confused with the roles of “instructor” or “adjunct” or “clinical” in Asian and North American systems; these titles involve professors in a teaching-focused role in a non-tenured status.
Addressing Issue 3 by Comparing Bibliometrics and Expert Opinions
Expert assessment and bibliometric approaches could each make unique contributions in
assessing journal quality; they could also show offsetting limitations that would lead to possibly
different conclusions. To determine the extent to which expert opinions are redundant, we
compare them to bibliometric measures such as ISI Impact Factor metrics, social network
analysis (SNA) metrics derived from the ISI citations database, and the h-index (Hirsch 2005)
and its derivatives (Egghe 2006; Sidiropoulos et al. 2007; Zhang 2009), which are calculated
using Google Scholar™. Then, we collected bibliometric data for the top 40 journals emerging
from the expert survey.
In terms of ISI Impact Factor metrics, we used the standard ISI Impact Factor but also
considered the five-year impact factor, impact factor without journal self-citation, and five-year
article influence. Because several top IS journals were not indexed by Thomson Reuters, and to
account for any potential systematic error introduced by the ISI impact factor, one of our
bibliometric measures is based on the h-index (Hirsch 2005). Unlike the ISI impact factor, the h-
index is calculated from Google Scholar citations data, which allows us to include data for all top
IS journals. Because the h-index has known shortcomings (Straub and Anderson 2010; Truex et
al. 2009), we also chose to use three variants of the h-index designed to address these specific
weaknesses: the hc-index, the g-index, and the e-index. The h-index and its variants are based on
an entirely different formula than that of ISI, and thus can possibly account for other factors of
quality that are not captured through ISI measures (Sidiropoulos et al. 2007). We calculated all
measures related to the h-index systematically using Harzing’s Publish or Perish™ bibliometrics
software version 3.2.4150 (Harzing 2011). Appendix E provides detailed definitions of these
metrics.
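To make these definitions concrete, the following sketch (ours, not part of the study, which used Harzing’s Publish or Perish™ on Google Scholar data) computes the h-index and the variants named above from a list of per-article citation counts. The citation counts and years are fabricated, and the hc-index uses the conventional contemporary weighting of Sidiropoulos et al. (2007) with gamma = 4 and delta = 1.

# Illustrative only: h-index and variants from per-article citation counts.
import math
from datetime import date

def h_index(citations):
    """Largest h such that h articles each have at least h citations (Hirsch 2005)."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the g most-cited articles together have at least g^2 citations (Egghe 2006)."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c
        if total >= rank ** 2:
            g = rank
    return g

def e_index(citations):
    """Square root of the h-core's citations in excess of h^2 (Zhang 2009)."""
    ranked = sorted(citations, reverse=True)
    h = h_index(citations)
    return math.sqrt(sum(ranked[:h]) - h ** 2)

def hc_index(citations, years, gamma=4, delta=1, now=None):
    """Contemporary h-index: age-weight each article's citations, then apply the usual h rule (Sidiropoulos et al. 2007)."""
    now = now or date.today().year
    weighted = sorted((gamma * c / (now - y + 1) ** delta for c, y in zip(citations, years)), reverse=True)
    return sum(1 for rank, w in enumerate(weighted, start=1) if w >= rank)

# Fabricated citation counts and publication years for a hypothetical journal.
cites = [120, 95, 60, 33, 20, 12, 7, 3, 1, 0]
years = [2005, 2006, 2007, 2008, 2008, 2009, 2009, 2010, 2010, 2011]
print(h_index(cites), g_index(cites), round(e_index(cites), 2), hc_index(cites, years, now=2011))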
The third group of metrics was created through social network analysis (SNA) using the
citation data available through the ISI database. Our SNA included only the 21 IS journals from
our dataset that have Thomson-Reuters impact factor scores. The analysis measured the extent to
which articles within a journal cite an article in another journal. Polites and Watson (2009) used
SNA to demonstrate journal centrality and influence within and across disciplines. In line with
their research, we used three measures of node (journal) prestige and centrality: Freeman degree,
Bonacich power index, and information centrality. Notably, “Freeman degree prestige is
commonly used for determining journal rankings (though not generally referred to by this name).
The Bonacich power index provides more insight regarding degree prestige because it is capable
of discriminating between citations received from more popular journals vs. less popular
journals, based on their respective degree scores” (Polites and Watson 2009, p. 603). These
scores represent the citation pattern of articles among journals and the pattern that is formed
within this network structure. Articles that are cited more heavily do not bias this index; rather,
the index is based on overall patterns and the manner in which the journal relates to all other
journals within the dataset. We weighted these three standardized measures equally to form a
single SNA score.
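As an illustration of how such network measures can be derived (a minimal sketch with a fabricated toy citation matrix, not the UCINET-style analysis the study relied on), the following computes weighted degree prestige and Bonacich power centrality from a journal-to-journal citation matrix and averages their z-scores into a single SNA score; information centrality is omitted here for brevity.

import numpy as np

def degree_prestige(A):
    """Weighted in-degree: citations received from the other journals (self-citations excluded)."""
    S = A - np.diag(np.diag(A))
    return S.sum(axis=0)

def bonacich_power(A, beta=0.075):
    """Bonacich (1987) power centrality: c = alpha * (I - beta*A)^-1 * A * 1, with alpha
    scaled so the squared scores sum to the number of journals. beta must be smaller in
    magnitude than the reciprocal of the largest eigenvalue of the citation matrix."""
    n = A.shape[0]
    S = A - np.diag(np.diag(A))
    raw = np.linalg.solve(np.eye(n) - beta * S, S @ np.ones(n))
    return np.sqrt(n / np.sum(raw ** 2)) * raw

def zscore(x):
    return (x - x.mean()) / x.std()

# Fabricated 4-journal citation matrix: A[i, j] = citations from journal i to journal j.
A = np.array([[0, 3, 1, 0],
              [2, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

sna_score = (zscore(degree_prestige(A)) + zscore(bonacich_power(A))) / 2
print(np.round(sna_score, 3))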
An advantage of our proposed method is that journal composite scores can be recalculated in a
straightforward, repeatable fashion with updated ISI and h-index data. Table F.1 in Appendix F summarizes all bibliometric
scores for each of the 21 journals. Table 2 provides a summary of factors used for the composite
bibliometric measure.
Table 2. Summary of Factors Used for the Composite Measure
Bibliometric factor | Baseline case (MISQ) | Brief description
Expert assessment | 100% region 1; 100% region 2; 100% region 3 | Percentage of experts who assessed the journal as a tier-1 journal, factoring in the best-case tier-1 journal; only considered by region for each journal
2010 ISI Impact Factor | 4.49 | Citation impact of journal for 2010 (based on 2008-2009 data released in summer 2011)
2010 5-year ISI impact factor | 9.21 | 5-year citation impact of journal for 2010 (based on 2005-2009 data released in summer 2011)
2010 ISI impact factor eliminating journal self-citation | 3.97 | Citation impact of journal for 2010 removing self-citations to the journal (based on 2008-2009 data released in summer 2011)
2010 Article Influence™ | 2.89 | Standardized average influence of a journal’s articles over the first five years after publication for 2010 (based on 2005-2009 data released in summer 2011)
2011 h-index | 198.00 | Alternate citation impact factor based on the latest Google Scholar™ data, August 2011; the number of the last citation-rank-ordered article whose ranking is lower than or equal to the number of citations received (Hirsch 2005)
2011 hc-index | 103.00 | Adjusted h-index that ascribes more weight to recently published articles than older articles as a solution to the time-in-print bias (Sidiropoulos et al. 2007); based on the latest Google Scholar™ data, August 2011
2011 g-index | 169.00 | Adjusted h-index that ascribes more weight to highly influential articles (Egghe 2006); based on the latest Google Scholar™ data, August 2011
2011 e-index | 272.12 | A metric that is complementary to the h-index, accounting for differences in citation patterns among journals with the same or similar h-index score (Zhang 2009); based on the latest Google Scholar™ data, August 2011
SNA: Freeman degree | 56.219 | A localized, within-network measure of the number of direct relationships for a given journal (Freeman 1979).
SNA: Bonacich power (β = .075) | 6.175 | A localized, within-network degree measure for a journal’s power, based on the power of other journals to which it is connected (Bonacich 1987).
SNA: Information centrality | 1.149 | A measure of all paths between pairs of journals, including the strength of ties between journals (Porta et al. 2006; Stephenson and Zelen 1989).
Addressing Issue 4 by Aggregating the Indices and Considering Self-Citation Practices
Issue 4 raises the question of which journals are the top journals in the IS field. To be
able to address issue 4 fully, we aggregate the measures described previously, calculate
sensitivity analysis of various weighting schemes, and perform cluster analysis to discern tiers of
journals according to the measures. As part of our analysis, we also consider the issue of niche
behavior and self-citation practices that are critical considerations for any journal-ranking
endeavor.
Self-citation practices can be useful in assessing journal quality. Reasonable levels of
self-citation are acceptable and expected, of course, but IS scholars generally agree that coercive
self-citation is simply unethical and therefore unacceptable for top journals (Crews et al. 2009;
Gray 2009; Straub and Anderson 2009). A recent study in Science showed that coercive self-citation⁹
is practiced more frequently in business disciplines than in other social science fields
(Wilhite and Fong 2012). With a few IS journals being accused of coercive self-
citation (Wilhite and Fong 2012), the IS field was unfortunately highlighted as one of the
“offending” business disciplines. The issue of coercive self-citation continues to be a problem
because authors tend to obey such requests from editors. In Bornmann and Daniel’s (2008) study,
many authors concurred that several citations in their work were non-essential to the article but
were required by editors simply to approve the work for publication.

⁹ Coercive self-citation refers to “requests that (i) give no indication that the manuscript was lacking in attribution; (ii) make no suggestion as to specific articles, authors, or a body of work requiring review; and (iii) only guide authors to add citations from the editor’s journal” (Wilhite and Fong 2012, p. 542).
A special issue of CAIS discussed the problem of coercive self-citation. Some of the
articles in the issue examined citation patterns of IS and business journals while others discussed
the ethical implications of self-citation (Gray 2009). Still other articles presented arguments for
why editors might request additional citations to the journal in which an author was submitting
his/her work. However, virtually all agreed that coercive self-citation is a practice that should not
be tolerated in IS journals.
Scientometric evidence also suggests that top journals do not need to engage in coercive
self-citation or game the system in order to boost journal impact factors (Straub and Anderson
2009). Instead, journals having high scientometric impact without forcing self-citation have a
natural, strong influence on other leading journals and related leading conferences because their
content is engaging and noteworthy. That is, because research in top journals tends to be
interesting and compelling, it often initiates related discussions in other top journals and
conferences (Straub and Anderson 2009).
Even when self-citations are ethical and appropriate, a disproportionately high number of
self-citations can indicate that a journal is likely not a mainstream journal but is instead
demonstrating niche behavior, which is subtly different from actually being a niche journal.
Niche journals are narrower in their appeal and often serve focused research communities. Niche
journals are characterized by a large number of ethical self-citations (Romano Jr. 2009; Trkman
2009) because to continue a research stream in a niche area, one often must refer to previous
work in the same journal.
Operating with coercive practices and operating as a niche journal are suggested here
only as two possible reasons for unusually high journal self-citation, which do not necessarily
represent an exhaustive list. It is also possible that self-citation has voluntary, cultural, or topical
origins. Voluntary self-citation might occur when many of a journal’s authors believe they need
to self-cite to increase their chances of acceptance based on what they believe to be unwritten
rules of the journal. High self-citation could be inspired by a form of selfish benevolence, in
which a community of interconnected authors acts to “help” a journal (and indirectly themselves)
to rise in stature with more citations. Cultural self-citation might also occur when many authors
conform to abundant examples of self-citation in that journal, either consciously or
unconsciously. Topical self-citation occurs when a journal becomes “known” for a highly
specific topic or publishes a debate on a particular topic.
Consequently, we do not assert or imply that any particular journal in our analysis is
practicing coercion or is a niche journal. Further study would be required to uncover a more
comprehensive list of reasons for excessive self-citation practices, and to judge each journal by
examining the best evidence that could become available. Instead, our judgment in this study is
that if a journal has more self-citations than meaningful external citations, then, rather than
demonstrating the characteristics of a mainstream “excellent” journal, it instead exhibits “niche”
behavior. Our key operating assumption is that mainstream journals should garner both external
and self-citations without a disproportionate amount of self-citations.
Journal self-citation, particularly the coercive form, is one reason some IS scholars
caution against relying on raw bibliometric measures to assess journal and article quality (e.g.,
Sarkis 2009; Trkman 2009). Given this discussion and the intense external scrutiny of IS journals
(e.g., Wilhite and Fong 2012), we decided to aggregate and cluster using a variety of measures,
and to augment that analysis by also providing segmentation of the IS journal basket, including
the consideration of niche behavior. The important task at this juncture was to assess self-citation
rates in a systematic and unbiased manner.
Multiple articles in the CAIS special issue used ISI Journal Citation Report data to
identify journals with significant self-citation rates (e.g., Li 2009; Straub and Anderson 2009).
However, we found that ISI impact factor data are inadequate for short-term criteria because the
data lag by more than a couple of years and cover too large a span of time. These data also do
not account for the IS field’s leading conferences (i.e., the AIS conferences and HICSS), which
are important in demonstrating emerging scholarly discussions. Thus, we created two new
measures based on Google Scholar citation data for all 21 IS journals indexed in the ISI between
January 2011 and July 2012 and categorized the citations into seven groups (e.g., self-citations,
citations in top IS journals, etc.). Appendix D details the collection and categorization procedures
for this data.
We term the first measure the short-term self-citation percentage: the number of
self-citations divided by total citations in a recent 1.5-year period (January 2011 through July 2012). The
1.5-year period was selected in alignment with the recommendations from 26 editors of prestigious
marketing journals to monitor short-term self-citation ratios (Lynch 2012).¹⁰ This period
also gives newer journals with a shorter publication record a fairer comparison to more
established journals. Across the 21 top IS journals we targeted, the average short-term self-citation
percentage was 14.3 percent, with an average of 6.1 percent for the SenS-6 and with
none of the SenS-6 in the double-digit range. Thus, we applied the simple heuristic that to be
considered a mainstream IS journal, the short-term self-citation percentage must be in single
digits. Fifteen journals met this criterion, whereas six did not (see Appendix F).
¹⁰ In reaction to the Wilhite and Fong (2012) article exposing the self-citation practices of specific journals, 26 editors of prestigious journals identified solutions to the problems. They sent letters to more than 600 business school deans asking that research articles be judged based on individual merit rather than on the impact factor of the publishing journal and that vigilance be given to identify surges in self-citation ratios (Lynch 2012).
We term the second new measure the short-term IS influence ratio: the total of a
journal’s quality IS citations (citations in IS ISI journals plus top AIS-affiliated and HICSS
conference citations) divided by the journal’s total self-citations in the 1.5-year period. This
second new influence measure answers the question: “With whom is a journal having a scholarly
conversation and to what degree does it positively impact the broader IS scientific community?”
We included top IS conferences because articles from mainstream IS journals are likely to
influence top IS conference articles more strongly than niche journals. That is, mainstream
journals would be expected to create a near-immediate “buzz” with some of their findings and,
thus, to create new scientific conversations in meaningful venues for IS scholars. Niche journals
should generally create less of a “buzz” in the short-term and, thus, would likely be cited for
other reasons and with a delayed effect, such as for a passing reference in a literature review and
less so for strong theoretical support or theory building.
Not surprisingly, the SenS-6 journals all had influence ratios greater than 1:1 (quality IS
citations to self-citations). Thus, our filtering benchmark for this ratio is that a mainstream IS
journal should have a quality citations to self-citations ratio of 1:1 (100%) or higher. Essentially,
journals with high ratios are having meaningful discourses with others in the IS academic
community more than with themselves or non-IS communities; those with low ratios are
conversing with themselves or non-IS communities more than they are with the IS community.
This ratio allows us to focus on mainstream journals and journals that are central in their
influence on the IS community. In total, 15 journals met this criterion, whereas 6 did not (again
see Appendix F).
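A minimal sketch of these two screening measures follows; the function names and citation tallies are ours and purely illustrative, standing in for counts gathered over the January 2011 through July 2012 window.

def short_term_self_citation_pct(self_cites, total_cites):
    """Self-citations as a percentage of all citations in the 1.5-year window."""
    return 100.0 * self_cites / total_cites

def short_term_is_influence_ratio(is_journal_cites, ais_hicss_cites, self_cites):
    """Quality IS citations (ISI-indexed IS journals plus AIS/HICSS conferences) per self-citation."""
    return (is_journal_cites + ais_hicss_cites) / self_cites

# Hypothetical journal: 40 self-citations out of 620 total citations, with 95 citations
# from ISI-indexed IS journals and 30 from AIS conferences and HICSS in the window.
pct = short_term_self_citation_pct(40, 620)           # about 6.5 percent
ratio = short_term_is_influence_ratio(95, 30, 40)     # about 3.1
print(round(pct, 1), round(ratio, 2), pct < 10 and ratio >= 1.0)   # passes both screens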
DATA ANALYSIS
Analyzing Issue 1 by Filtering Out Non-IS Journals
In terms of issue 1, we explained earlier in the methods section why only IS-mission-
specific journals should be ranked. Our first analytical step was thus to create a filtered list of the
IS journals by eliminating any non-IS journals and IS journals that were not highly ranked. All
subsequent analysis was conducted beginning with the list of the highest-ranked journals in
Appendix C.
Analyzing Issues 2 through 4 Using the Composite Measures and Weighting Schemes
Because the analyses for issues 2 through 4 are intertwined, these issues are presented in
the same section. Specifically, we first determined which journal had the best score for each
bibliometric factor and considered this score to be the baseline (100%) score against which we
compared all other journals. MISQ had the best score for every factor, and thus became the
baseline journal against which we compared all others.¹¹
Next, we calculated a z-score for each
journal on each factor (based on the baseline score for the factor), which we then multiplied by
the composite score’s weight for the factor. The composite score was simply the weighted
average of the z-scores for all the journal-ranking factors. However, determining an unbiased
weighting structure is untenable because each measure of quality suffers from potential biases
and limitations and the overall “true score” remains unknown. Accordingly, we weighted these
composites using four different weighting schemes in a sensitivity analysis, as outlined in the
next section. We used the sum of ranks for each of the journals across all weighting schemes to
arrive at the final composite rankings. Using this method reduces error associated with any one
weighting scheme and more closely approximates the overall “true score” of a journal’s relative
quality, as in classical measurement theory for reliability of multiple measures.
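The composite scoring and rank-sum procedure can be sketched as follows. This is our own illustration, not the study’s code: the journal names are placeholders and the factor values are fabricated.

import numpy as np

journals = ["J1", "J2", "J3", "J4"]                    # placeholder names, not real titles
# Rows = journals; columns = composite citation score, composite h-type index score, SNA score.
factors = np.array([[4.5, 180.0, 2.1],
                    [3.1, 150.0, 1.8],
                    [2.2, 120.0, 1.2],
                    [1.0,  60.0, 0.5]])
schemes = np.array([[1/3, 1/3, 1/3],                   # the four weighting alternatives
                    [0.25, 0.25, 0.50],
                    [0.25, 0.50, 0.25],
                    [0.50, 0.25, 0.25]])

z = (factors - factors.mean(axis=0)) / factors.std(axis=0)   # z-score each factor across journals
rank_sums = np.zeros(len(journals))
for w in schemes:
    composite = z @ w                                  # weighted average of z-scores per journal
    ranks = composite.argsort()[::-1].argsort() + 1    # rank 1 = highest composite score
    rank_sums += ranks

order = np.argsort(rank_sums)                          # lowest rank sum = highest relative quality
print([(journals[i], int(rank_sums[i])) for i in order])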
Using our composite bibliometric ranking methodology, we created weighted rankings
(see Table 3) for the 21 IS journals for which ISI data was available.¹² We also conducted a
sensitivity analysis to demonstrate that ranking results were not primarily an artifact of the
weighting scheme chosen and to eliminate errors related to a particular weighting scheme. To
establish this statistically, we conducted a series of nonparametric paired-sample Wilcoxon
signed-rank tests for the four unique combinations of paired weighting schemes. We found no
statistical differences in any of the pairings, thereby indicating that each weighting scheme
produced approximately the same statistical result (see Table F.2 in Appendix F for more
details).
Table 3. Sensitivity Analysis Weighting Schemes
Approach | Composite citation scores | Composite h-type index scores | SNA
Alternative 1 | 33% | 33% | 33%
Alternative 2 | 25% | 25% | 50%
Alternative 3 | 25% | 50% | 25%
Alternative 4 | 50% | 25% | 25%

¹¹ One exception is that JAIS tied MISQ for top rank in terms of Bonacich Power.

¹² Our focus here is to evaluate journal quality for the highest quality IS journals. Because ISI citations data were only available for the 21 journals listed in Table 4, we were forced to exclude 19 of the 40 journals for which we collected expert assessment from our final weighted ranking. Analysis of separate h-index citations data from Google Scholar indicates that the journals that did not have ISI citations data generally had lower h-index results and, thus, were generally less cited. Hence, this is additional proof that the 21 journals best represent the current best IS journals.
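A sketch of the pairwise Wilcoxon signed-rank comparison described above; the composite scores below are fabricated for six hypothetical journals, and the loop simply enumerates every pairing of the four schemes.

from itertools import combinations
from scipy.stats import wilcoxon

# Fabricated composite scores for the same six journals under each weighting scheme.
scheme_scores = {
    "alt1": [2.10, 1.45, 0.90, 0.30, -0.55, -1.20],
    "alt2": [2.05, 1.50, 0.85, 0.28, -0.50, -1.18],
    "alt3": [2.12, 1.40, 0.95, 0.33, -0.58, -1.25],
    "alt4": [2.08, 1.48, 0.88, 0.25, -0.52, -1.15],
}

# Paired-sample Wilcoxon signed-rank test for each pairing of schemes; large p-values
# indicate that the schemes produce statistically indistinguishable results.
for a, b in combinations(scheme_scores, 2):
    stat, p = wilcoxon(scheme_scores[a], scheme_scores[b])
    print(f"{a} vs {b}: W = {stat:.1f}, p = {p:.3f}")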
This sensitivity analysis approach also demonstrates that the results were not skewed to
provide the best results for MISQ; MISQ simply had strong results, regardless of the
measurement approach and weighting that was applied. MISQ was ranked first using all four
weighting approaches, and thus received a score of four (1st × 4 = 4); JAIS was ranked sixth three
times and fifth once, and thus received a score of 23 (6th × 3 = 18, plus 5th × 1 = 5, for a total of 23). Table F.3
shows the results of the rank-sum approach. All rankings tables highlight the SenS-8 journals in
grey.
This sensitivity analysis demonstrates that the top three journals (MISQ #1, ISR #2, and
JMIS #3) were completely unaffected by the changes in weights and always appeared in the
same rank order, regardless of the approach used. The next four journals shifted positions
between ranks four and seven, depending on the weighting scheme used. The next three journals
shifted positions between ranks eight and ten, depending on the weighting scheme used. Four
subsequent groups thus emerge in the rankings. The closest approximation of the true value of
relative journal quality occurs by summing the ranks of each weighting approach.
Because expert assessment is the only factor that changed according to region and other
demographic data, we then performed various analyses using only the expert assessment data to
better ascertain the degree to which world region and other demographics influence IS journal
rankings. Specifically, we assessed the top IS journals as ranked by world region, academic type,
Ph.D. training, and type of university and found little to no variation among the rankings. This
finding indicates a consensus of IS researchers across world regions in terms of ranking IS
journals.
Because of the homogeneity in worldwide opinions, we further assessed whether
comparing overall expert opinion to a single composite bibliometric measure made any
difference. We found that overall expert assessment of top journals appeared to follow the
bibliometric assessment quite closely in a paired-sample Wilcoxon signed-rank test between the
overall z-scores of the expert rankings and the average z-scores of the composite bibliometric
rankings (null hypothesis of no difference had a p-value of 0.958). This result indicates that
collecting expert opinion on IS journals yields rankings that are very similar to those derived
from bibliometrics (see Appendix G). In sum, data analysis establishes that composite
bibliometrics is effective in ranking top IS journals and that expert opinion provides only
redundant information. Similarly, because of the homogeneity of worldwide opinions with
respect to ranking IS journals, we conclude that rankings by world region do not add value.
Addressing Issue 4 by Analyzing Journals With and Without Niche Behavior
Based on the approach outlined in the methods section, Table F.4 summarizes the results
of our screening criteria for niche versus mainstream journals. If we exclude journals that exhibit
(for whatever reason) niche behavior due to aberrant citation patterns, there remain 14 IS
journals that are eligible to be further considered in the IS basket of journals (listed in
alphabetical order): ECRA, EJIS, ISJ, ISM, ISR, JAIS, JCIS, JDM, JGIM, JMIS, JOCEC, JSIS,
MISQ, and MISQe. The journals that were excluded for having an excessively high short-term
self-citation percentage and/or an excessively low short-term IS influence ratio were (listed in
alphabetical order) DSS, I&M, IJEC, ISF, IT&M, JIT, and WIRT.
Given that these results partially conflict with the SenS-8 recommendation, we performed
further tests to ensure that the exclusion of the targeted journals was valid. The average short-
term self-citation ratio among the 21 investigated IS journals is 8.1%, excluding JIT and IT&M,
which had outlier self-citation ratios of 77.8% and 68.9% respectively. Table F.4 displays the
short-term self-citation ratios for each journal. To identify journals that may be characterized as
having high short-term self-citation ratios, we performed a k-means cluster analysis (n=21) using
only short-term self-citation ratios as the clustering variable (Appendix H describes the
assumptions of our cluster analyses). Centroids for the three clusters of short-term self-citation
ratio are 4.5%, 21.4%, and 73.4%. Cluster 1 (i.e., journals with low short-term self-citation
ratios) includes ECRA, EJIS, I&M, ISJ, ISM, ISR, JAIS, JCIS, JDM, JGIM, JMIS, JOCEC, JSIS,
MISQ, and MISQE. Cluster 2 (i.e., journals with high short-term self-citation ratios) includes
DSS, IJEC, ISF, and WIRT. Cluster 3 (i.e., extremely high ratios) includes IT&M and JIT.
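A minimal sketch of this one-variable clustering step, assuming scikit-learn and illustrative ratios (only the two outliers match the values reported above), is:

```python
# k-means (k=3) on short-term self-citation ratios alone; the resulting centroids
# separate low, high, and extremely high self-citation groups.
import numpy as np
from sklearn.cluster import KMeans

ratios = np.array([0.03, 0.04, 0.05, 0.06, 0.08, 0.18, 0.22, 0.25, 0.689, 0.778])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(ratios.reshape(-1, 1))

for k in range(3):
    print(k, ratios[labels == k].mean())  # centroid of each self-citation cluster
```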
The k-means cluster analysis places I&M right near the border between clusters 1 (low
ratios) and 2 (high ratios). The distance to cluster 1's centroid for I&M is 5.4%, and the distance
from cluster 2's centroid for DSS is 5.6%. Consequently, these two journals constitute the
border between Cluster 1 (low) and Cluster 2 (high). Although I&M is on the border of being
clustered with higher self-citing journals, I&M was assigned to the niche citation pattern group
because of its very low short-term IS influence ratio.
Given that the remaining 14 journals consistently rank in our sensitivity analysis in
certain positions whereas others’ positions vary but still rank within consistent groups, we
performed an additional cluster analysis of these journals to determine if natural clusters exist.
We first used the Caliński-Harabasz method to determine the optimal number of groups (Caliński and
Harabasz 1974). The results of this test indicated that the optimal solution was four groups (pseudo F
= 38.57). We then used the common centroid linkage method to determine the cluster assignment
for each journal (n=14).
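A minimal sketch of these two steps, under assumed inputs (a hypothetical journals-by-metrics matrix X) and using scipy and scikit-learn rather than our exact software, is:

```python
# Choose the number of clusters with the Caliński-Harabasz pseudo-F, then assign
# journals to clusters with centroid linkage. X is a placeholder feature matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(0)
X = rng.normal(size=(14, 5))  # 14 journals x 5 standardized bibliometric scores (hypothetical)

Z = linkage(X, method="centroid")
for k in range(2, 7):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(k, calinski_harabasz_score(X, labels))  # retain the k with the largest pseudo-F
```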
The second of the four clusters included only one journal, JAIS, but a three-cluster
solution placed JAIS among others in the second tier. Whereas the four-cluster solution suggests
that JAIS stands in the ordered position of the fourth best journal in the field, the prior sensitivity
analyses suggest an unordered second tier of IS journals. For this reason, we combined JAIS, as
the sole representative of the second cluster, with the third cluster.
Figures 1a and 1b respectively depict the clustering of the top IS journals considering,
and not considering, niche behavior.
Figure 1a: Results of Cluster Analysis of Top IS Journals, for the Entire
Sample of 21 Journals
Figure 1b: Results of Cluster Analysis of Top IS Journals, Excluding those
Exhibiting Niche Behavior
To validate our recommended clustering approach further, we used an alternative method
to determine the sensitivity of the tiers: whether, based on high short-term self-citation or low
IS influence ratios, the excluded journals would change the constitution of the top two tiers. In
this alternative analysis, rather than excluding journals on the basis of the cutoffs or self-citation
cluster results, we included all journals and adjusted the z-scores of each bibliometric score using
the following formula: original z-score × (1 − short-term self-citation rate) × (short-term IS
influence ratio), with the IS-influence ratio being capped at 1. Then, we ran the cluster analysis
(n=21) and found that MISQ, ISR, and JMIS remained in the top, ordered tier; JAIS, JSIS, EJIS,
and ISJ remained in the second, unordered tier; and none of the previously excluded journals
were clustered in the top two tiers.
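A minimal sketch of this adjustment, applied to hypothetical values, is:

```python
# Discount a journal's bibliometric z-score by its short-term self-citation rate and by
# its short-term IS influence ratio (capped at 1) before re-running the cluster analysis.
def adjusted_z(z, self_citation_rate, is_influence_ratio):
    return z * (1 - self_citation_rate) * min(is_influence_ratio, 1.0)

print(adjusted_z(2.0, 0.05, 1.3))  # low self-citation, strong IS influence -> 1.9
print(adjusted_z(2.0, 0.40, 0.5))  # heavy self-citation, weak IS influence  -> 0.6
```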
We then conducted a similar analysis (n=21) using the following formula: original z-
score × short-term IS influence ratio, capped at one. In this solution, the same set of seven
journals (MISQ, ISR, JMIS, JAIS, JSIS, EJIS, and ISJ) comprised the top two tiers, with none of
the previously excluded journals being clustered in the top tiers. Thus, using a variety of
techniques to account for short-term self-citation and IS influence, we conclude that our two-tier
clustering approach most accurately represents the top (A-level) mainstream IS journals.
DISCUSSION
Journal rankings are a practical necessity in academia where perfect measures of
scholastic quality are elusive and difficult, if not impossible, to attain. Yet, to be useful and
reliable, journal rankings need to be updated periodically to reflect the changing nature of the
discipline. Traditionally, the challenge with periodic/regular updates of such rankings is the
resource-intensive nature of collecting subjective expert opinion on journal quality.
To this point, we established that expert opinion adds no significant value to readily
calculable bibliometrics when the rankings objective is to identify top, mainstream IS journals.
In this paper, we developed a robust composite bibliometric measure of journal quality using ISI,
h-indices, and SNA metrics to establish the current rankings of top IS journals. Because niche
journal properties and potential self-citation abuse can undermine legitimate comparison, we also
included a comparative analysis that used conservative filtering measures to ensure that no
candidate journals had high self-citation patterns and low short-term external influence.
We trust that the measurement approach advocated here has been shown to be sound. By
comparison, one common belief is that journal acceptance rates serve as acceptable surrogates
for journal quality (e.g., Cabell and English 2004). Although this might seem logical on the
surface, Lewis et al. (2007) found that the acceptance rates of journals do not correlate
significantly with other measures of journal quality (e.g., journal rankings studies). In contrast,
using a sample of target journal lists from IS doctoral programs reveals that journal rankings
studies largely correlate with measures of journal quality. Lewis et al. (2007) further support the
validity of IS journal rankings studies by demonstrating that the rankings studies in their sample
constitute valid measurements of journal quality, with acceptable content validity, construct
validity, and reliability. Their contributions to the IS
scientometric literature provide evidence that journal quality can, in fact, be approximated using
repeatable, similarly structured rankings; for this reason, they call for more consistency in the
rankings methods employed in the discipline. Our paper advances this goal by providing a highly
repeatable, maximally independent rankings method that can be periodically updated as the IS
discipline continues to evolve.
Contributions to the IS Field
The most important outcome of this study is that expert opinion on top IS journals is
statistically indistinguishable from external bibliometrics. This indicates that
collecting expert opinion is no longer useful or necessary. Instead, the IS field should adopt our
composite bibliometrics rankings approach that can more easily and more frequently rank IS
journals. In doing so, it would be helpful to use the simple filtering guidelines we offer to help
prevent gaming self-citations and exclude niche journals from broader, mainstream journal
rankings.
The development and use of a consistent rankings approach are particularly useful for the
IS field, a young, growing, and dynamic area of inquiry in which journal quality is continually
changing and improving over time. Thus, an easily replicable rankings approach can help keep
such rankings current by recognizing newer, high-quality journals and changes in journal quality
over time (Allen et al. 2009). As an example, eight years after its review in Lowry et al. (2004),
JAIS has risen even higher in its ranking and is now consistently found in the top seven in every
category and form of analysis. Meanwhile, other IS journals that have recently been added to the
ISI Impact Factor index, with newly released or soon-to-be-released Impact Factors, are also
improving in quality. This trend is promising for the IS field because the rigorous process of
being selected for inclusion in the ISI Impact Factor index is a sign of quality that should help
attract even better articles to be published in the journals. IS journals that were recently added or
will shortly be added to the ISI Impact Factor index include ISF (added in 2005); JDM (added in
2006); ECRA and JGIM (added in 2007); JAIS (added in 2008); IT&M (added in 2009); MISQE,
JCIS, and WIRT (added in 2010); and DATABASE, EM, IT&P, ITD, and I&O (added in 2011).
Recommendations
Based on our results, we offer two main recommendations to the IS community: First, we
recommend further revision to the SenS-8 recommendation. Second, we recommend working to
improve the ISI impact factors of existing IS journals.
Recommendation 1: SenS-8 Might Require Adjustment
Our cluster analysis demonstrates that the top-tiered IS journals are MISQ, ISR, and JMIS
in that specific rank order, with a gap following that tier. This important finding establishes that
the widespread tradition of only ranking one or two journals in the highest tier puts IS
researchers at an unnecessary disadvantage, and is highly problematic. Our results largely
support the recommendations of the AIS Senior Scholars in terms of the SenS-8, but with hard
empirical evidence regarding their actual quality. Apart from validating MISQ, ISR, and JMIS as
the top tiered (i.e., “A+”) IS journals, we found evidence that EJIS, ISJ, JAIS, and JSIS occupy
the next tier (i.e., “A”) of the highest quality IS journals. A key concern is that we provided
bibliometric evidence that JIT does not presently exhibit the self-citation rates and IS community
influence of a top, mainstream IS journal.13 However, the remainder of its bibliometrics indicates
that JIT would belong in the second cluster if short-term citation measures were not considered.
Moreover, cluster analysis using the SNA metrics indicates that the second tier, unlike the first
tier, has no natural rank order, with the exception of JAIS perhaps being of higher quality than
the other three (having originally been in its own cluster). Hence, for enhanced external validity
and greater latitude in institutional application, the verifiable assertion is that the two
clusters together form a Select-7 (MISQ, ISR, JMIS, EJIS, ISJ, JAIS, and JSIS), with no implied
rank order beyond MISQ, ISR, JMIS, and JAIS. Research-intensive institutions might refer to the
first three journals as “A+” journals and the rest as “A” journals.
13
JIT had the highest short-term self-citation percentage and the lowest short-term IS influence ratio of all 21 IS
journals with ISI bibliometrics. This disparity occurred because, during the 1.5 years used for the short-term
measures, JIT had a high volume of research commentaries that cited each other and that were scarcely cited outside
of JIT at the time. However, even after removing those commentaries from the comparison, the self-citation statistics
remain extremely high, at 40% for this period.
The Senior Scholars agreed at their ICIS Shanghai meeting in 2011 to reevaluate the list
every 5 years, which would see this exercise occurring in the year 2016. If self-citation patterns
at JIT and other excluded journals dropped below the thresholds for what we have defined as
niche behavior, then some of these journals could very well find themselves admitted to the
SenS listing.
Recommendation 2: Improve Impact Factors and Citation Practices of Existing Journals
A clear trend is that scholars in all fields are increasingly targeting journals with ISI
Impact Factors and eschewing journals without them. Without an ISI Impact Factor similar to
other top journals in a particular discipline, it is increasingly difficult to convince colleagues
outside one’s discipline that the journal is of high quality. Accordingly, senior faculty routinely
advise Ph.D. students and young faculty to publish in top journals that are recognized by experts
in their fields and that have substantial ISI Impact Factors. The potential
downside of this trend to the broader IS field (and to science in general) is that several strong IS
journals exist that have not yet earned ISI Impact Factor status, despite having an arguably strong
impact on the field and on science in general. This is particularly true of niche journals that enjoy
high-quality editorial boards, engage in rigorous peer-review, exhibit quality articles, rank highly
in expert rankings, and demonstrate reasonable impact on metrics outside of the ISI Impact
Factor (e.g., the h-index).
The broader, less selective h-indices can assess virtually all journals for their impact
because Google Scholar indexes virtually every publication, online or in print, whereas
Thomson Reuters indexes only journals that meet multiple quality criteria. Thus, the pragmatic
solution to this dilemma is for such journals to develop and establish a sufficiently consistent,
high-quality citation history (for example, using the h-index and other key metrics considered
by Thomson Reuters) so that these journals can be included in the ISI Impact Factor index.
Otherwise, and unfortunately, without being indexed by ISI, these journals are likely to be
undervalued by those who are not familiar with them, particularly by scholars outside IS.
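For readers unfamiliar with the metric, a minimal sketch of the h-index computation (with illustrative citation counts, e.g., as they might be gathered from Google Scholar) is:

```python
# h-index: the largest h such that h publications each have at least h citations.
def h_index(citations):
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([45, 30, 22, 10, 9, 6, 3, 1]))  # -> 6
```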
Because Thomson Reuters requires several years of demonstrated quality and impact to
approve a journal for an ISI Impact Factor, we recommend that the IS field be more cautious
about continually launching new journals. Such journals face increasing difficulty in gaining
traction as high-quality outlets, unless they are strategically sponsored by elite academic
organizations (e.g., AIS, ACM, IEEE, INFORMS) and engage elite editorial boards.
We therefore recommend that the IS community instead focus on developing and
expanding existing journals (e.g., more print space), improving their publication quality, and
scrutinizing their self-citation practices. Considering the trend toward electronic publication (e.g., JAIS,
CAIS, AIS THCI, and JITTA), rapid publication (e.g., ACM TMIS, CAIS, AIS THCI), and the
ability of journals to expand the number of issues and pages with demand (e.g., recently seen
with MISQ, ISR, and JMIS), substantial new space exists for exciting articles to appear in high-
quality IS journals. Furthermore, with more focus and investment, other existing high-quality IS
journals can continue to ascend in quality. Important to this cause, coercive self-citation should
no longer be a practice of any IS journal; if it does occur, we agree that such a journal should be
publicly censured by the AIS Senior Scholars (cf. Clarke et al. 2009) or the AIS.
Toward Theoretical Development of Dissemination of Knowledge Nomologies
Although there is practical value in establishing the quality of a discipline’s journals, the
primary contribution of this study to science lies in its methods and measures. Advancing how we measure a single
construct like journal quality should be increasingly important to scholars who are interested in
studying the nomologies in which this construct appears. One of these research domains is
the dissemination of scientific knowledge; one phenomenon that clearly needs to be studied is
how research is communicated from scholar to scholar, as in journals, conference proceedings, or
books (Straub 2006), and results in high-impact research. Impactful research will be received
favorably by colleagues, reviewers, and readers, perhaps receiving nominations for or winning
awards (Campbell et al. 1982; Daft et al. 1987). Such research is generally perceived to be novel,
creative, or admired for pushing boundaries and assumptions (Daft et al. 1987; Davis 1971;
Straub and Anderson 2009). Impactful research is also generally highly cited, and is
subsequently used as a basis for theoretical advancements within or outside one’s research
discipline (Daft et al. 1987; Karuga et al. 2007). Significant research can often be leveraged to
resolve pressing problems of practice, thereby resulting in consulting opportunities or
influencing educational curricula (Daft et al. 1987; Davis 1971; Straub and Anderson 2009).
Many factors have been proposed to influence the quantity or attributes of a researcher’s impact.
Here, and as summarized in Figure 2, we propose a preliminary model of what most
likely influences a researcher’s impact. Individual, intrinsic factors relate to characteristics of the
researcher, such as proactivity or confidence (Daft et al. 1987; Judge et al. 2009; Seibert et al.
1999), intrinsic motivation (Grant 2008), cognitive or mental ability (Dreher and Bretz 1991;
Judge et al. 2009), creativity (Daft et al. 1987), and past productivity or experience (Acuna et al.
2012a; Pfeffer and Langton 1993; Williamson and Cable 2003). Another set of predictors of a
researcher’s research output relate to his or her social network. This set includes factors such as
the number or diversity of network connections (Konrad and Pfeffer 1990; Wolff and Moser
2009), the productivity or standing of the researcher’s advisor or co-authors (Acuna et al. 2012a;
Williamson and Cable 2003), and the extent to which a researcher engages with the community
by attending and presenting at conferences (Williamson and Cable 2003).
Figure 2. Model Depicting the Mediating Effect of Journal Quality on the Relationship
Between Researcher’s Output and Researcher’s Impact.
Environmental factors extrinsic to the researcher can also affect research output
negatively or positively. These include academic origin and affiliations (Acuna et al. 2012a;
Long et al. 1998), salary and other incentive systems (Pfeffer and Langton 1993), and the size or
research orientation of the researcher’s institution (Long et al. 1998; Pfeffer and Langton 1993;
Williamson and Cable 2003). Productivity can be impacted by research-related resources and
grants (Acuna et al. 2012a), and by departmental norms, policies, and cultures (Konrad and
Pfeffer 1990; Maslach and Leiter 2008; Pfeffer and Langton 1993; Williamson and Cable 2003).
All these factors, and certainly others, likely influence researcher output. Key to
researcher output is the number of academic publications (journals, proceedings, books) a
researcher produces. However, research output also includes key complementary factors, such as
the number of Ph.D. students supervised, size and number of research grants awarded, patents,
and technology artifacts.
Publication quality plays a pivotal role in terms of the extent to which a researcher’s
output (most commonly seen in academic journal articles) will become influential and have a
lasting impact. We conceptualize publication quality as moderating the relationship
between raw research output and the subsequent influence of that research. Publication quality
includes the quality of journals, conferences, and books in which an academic publishes. The
highest quality outlets will facilitate broader dissemination, recognition, and influence of the
products of researchers’ efforts. High-quality journals and proceedings tend to be more highly
read and cited as well as generally more authoritative on the topics addressed in their published
articles. Crucially, publication quality is also found in articles themselves and is often expressed
in terms of rigor, relevance, and novelty.
By assessing the quality of journals using the algorithm presented in this paper, journals
can monitor their long-term efforts to improve their quality and thereby further enhance the value
of research output. As Daft et al. (1987) emphasize, quantity without quality is like faith without
works. Evaluations of researcher impact are not complete without a measure of quality to qualify
the quantity of output. The primary contribution of this paper is the operationalization of journal
quality with a relatively straightforward set of measurements, which not only aids in journal
assessment, but also facilitates broader assessments of a researcher’s impact.
Conclusion
We argue in this paper that solid scientometrics can establish a baseline indicator of the
quality of one’s research record. It is well to remember, though, that journal quality (as a
surrogate for research quality) is not the only way this can be done. Many other meaningful
approaches can demonstrate the quality of a research record that do not involve scientometrics
and, therefore, we recognize that their absence from the current study is a limitation. We hope
that in improving the scientometric evaluation of IS journals, we are not encouraging the field to
downplay critical qualitative indicators of research quality. Instead, because journal rankings will
always be a component of assessment, we conducted this study with the intention of bringing
about useful improvement in the method of ranking IS journals and of providing necessary, hard
evidence to strengthen the external case for the quality of IS journals.
REFERENCES
Acuna, D.E., Allesina, S., and Kording, K.P. 2012a. "Future Impact: Predicting Scientific
Success," Nature (489:7415), pp. 201-202.
Acuna, D.E., Allesina, S., and Kording, K.P. 2012b. "Predicting Scientific Success," Nature
(489:13 September), pp. 201-202.
Adler, N.J., and Bartholomew, S. 1992. "Academic and Professional Communities of Discourse:
Generating Knowledge on Transnational Human Resource Management," Journal of
International Business Studies (23:3), pp. 551-569.
AIS 2011. "Senior Scholars' Basket of Journals," Association for Information Systems, retrieved:
September 19, retrieved from
http://home.aisnet.org/displaycommon.cfm?an=1&subarticlenbr=346.
Allen, L., Jones, C., Dolby, K., Lynn, D., and Walport, M. 2009. "Looking for Landmarks: The
Role of Expert Review and Bibliometric Analysis in Evaluating Scientific Publication
Outputs," PloS ONE (4:6).
Axarloglou, K., and Theoharakis, V. 2003. "Diversity in Economics: An Analysis of Journal
Quality Perceptions," Journal of the European Economic Association (1:6), pp. 1402-
1423.
Bar-Ilan, J. 2008. "Informetrics at the Beginning of the 21st Century--a Review," Journal of
Informetrics (2:1), pp. 1-52.
Baskerville, R. 2008. "For Better or Worse: How We Apply Journal Ranking Lists," European
Journal of Information Systems (17:2), pp. 156-157.
Baskerville, R., and Wood-Harper, A.T. 1998. "Diversity in Information Systems Action
Research Methods," Journal of the Operational Research Society (7:2), pp. 90-107.
Bonacich, P. 1987. "Power and Centrality: A Family of Measures," American Journal of
Sociology (92:5), pp. 1170-1182.
Bonner, S.E., Hesford, J.W., Van der Stede, W.A., and Young, S.M. 2006. "The Most Influential
Journals in Academic Accounting," Accounting, Organizations, and Society (31:7), pp.
663-685.
Bornmann, L., and Daniel, H.D. 2009. "The State of H Index Research: Is the H Index the Ideal
Way to Measure Research Performance?," EMBO Reports (10:1), pp. 2-6.
Bornmann, L., Mutz, R., and Daniel, H.D. 2008. "Are There Better Indices for Evaluation
Purposes Than the H Index? A Comparison of Nine Different Variants of the H Index
Using Data from Biomedicine," Journal of the American Society for Information Science
and Technology (59:5), pp. 830-837.
Butler, L. 2008. "Using a Balanced Approach to Bibliometrics: Quantitative Performance
Measures in the Australian Research Quality Framework," Ethics in Science and
Environmental Politics (8:1), pp. 83-92.
Cabell, D., and English, D.L. 2004. Cabell's Directory of Publishing Opportunities in
Management, (9th ed.), Beaumont, TX, USA, Cabell Publishing.
Caliński, T., and Harabasz, J. 1974. "A Dendrite Method for Cluster Analysis," Communications
in Statistics-theory and Methods (3:1), pp. 1-27.
Campbell, D.T. 1960. "Recommendations for APA Test Standards Regarding Construct, Trait,
Discriminant Validity," American Psychologist (15:August), pp. 546-553.
Campbell, D.T., and Fiske, D.W. 1959. "Convergent and Discriminant Validation by the
Multitrait-Multimethod Matrix," Psychological Bulletin (56:2), pp. 81-105.
Campbell, J.P., Daft, R.L., and Hulin, C.L. 1982. What to Study: Generating and Developing
Research Questions, Beverly Hills, CA, USA, Sage Publications.
Carnegie Foundation 2010. "The Carnegie Classification of Institutions of Higher Education,"
retrieved: January 1, 2011, retrieved from http://classifications.carnegiefoundation.org/.
Chen, C.R., and Huang, Y. 2007. "Author Affiliation Index, Finance Journal Ranking, and the
Pattern of Authorship," Journal of Corporate Finance (13:5), pp. 1008-1026.
Chua, C., Cao, L., Cousins, K., and Straub, D.W. 2003. "Measuring Researcher-Production in
Information Systems," Journal of the Association for Information Systems (3:4), pp. 145-
215.
Clarke, R., Davison, R., and Beath, C.M. 2009. "Journal Self-Citation XI: Regulation of ‘Journal
Self-Referencing’–the Substantive Role of the AIS Code of Research Conduct,"
Communications of the Association for Information Systems (25:1), pp. 91-96.
Crews, J.M., McLeod, A., and Simkin, M.G. 2009. "Journal Self-Citation XII: The Ethics of
Forced Journal Citations," Communications of the Association for Information Systems
(25:1), pp. 97-110.
Culnan, M.J. 1987. "Mapping the Intellectual Structure of MIS," MIS Quarterly (11:3), pp. 340-
353.
Culnan, M.J., and Swanson, E.B. 1986. "Research in Management Information Systems, 1980-
1984: Points of Work and Reference," MIS Quarterly (10:3), pp. 289-302.
Daft, R.L., Griffin, R.W., and Yates, V. 1987. "Retrospective Accounts of Research Factors
Associated with Significant and Not-So-Significant Research Outcomes," The Academy
of Management Journal (30:4), pp. 763-785.
Davis, M.S. 1971. "That's Interesting: Towards a Phenomenology of Sociology and a Sociology
of Phenomenology," Philosophy of Social Science (1:4), pp. 309-344.
Dean, D.L., Lowry, P.B., and Humpherys, S.L. 2011. "Profiling the Research Productivity of
Tenured Information Systems Faculty at U.S. Institutions," MIS Quarterly (35:1), pp. 1-
15.
Dennis, A., Valacich, J., Fuller, M., and Schneider, C. 2006. "Empirical Benchmarks for
Promotion and Tenure in Information Systems," MIS Quarterly (30:1), pp. 1-13.
Dolnicar, S. 2002. "A Review of Unquestioned Standards in Using Cluster Analysis for Data-
Driven Market Segmentation," The Australian and New Zealand Marketing Academy
Conference 2002 (ANZMAC 2002), Deakin University, Melbourne, Australia, December
2-4.
Dreher, G.F., and Bretz, R.D. 1991. "Cognitive Ability and Career Attainment: Moderating
Effects of Early Career Success," Journal of Applied Psychology (76:3), pp. 392-397.
Egghe, L. 2006. "Theory and Practice of the G-Index," Scientometrics (69:1), pp. 131-152.
Ferratt, T.W., Gorman, M.F., Kanet, J.J., and Salisbury, W.D. 2007. "IS Journal Quality
Assessment Using the Author Affiliation Index," Communications of the Association for
Information Systems (19:1), pp. 710-724.
Fersht, A. 2009. "The Most Influential Journals: Impact Factor and Eigenfactor," Proceedings of
the National Academy of Sciences of the United States (106:17), pp. 6883-6884.
Freeman, L.C. 1979. "Centrality in Social Networks: Conceptual Clarification," Social Networks.
(1:1979), pp. 215-239.
Galletta, D. 2010. "The Senior Scholars’ Basket of Journals. Panel Presentation," in: International
Conference of Information Systems (ICIS) 2010, AIS, St. Louis, Missouri, USA.
Galliers, R.D., and Meadows, M. 2003. "A Discipline Divided: Globalization and Parochialism
in Information Systems Research," Communications of the Association for Information
Systems (11), pp. 108-117.
Gallivan, M., and Benbunan-Fich, R. 2007. "Analyzing IS Research Productivity: An Inclusive
Approach to Global IS Scholarship," European Journal of Information Systems (16:1),
pp. 36-53.
Garfield, E. 2005. "The Agony and the Ecstasy--the History and the Meaning of the Journal
Impact Factor," International Congress on Peer Review and Biomedical Publication,
Chicago, IL, USA, September 16.
González-Pereira, B., Guerrero-Bote, V., and Moya-Anegón, F. 2010. "A New Approach to the
Metric of Journals' Scientific Prestige: The Sjr Indicator," Journal of Informetrics (4:3),
pp. 379-391.
Grant, A.M. 2008. "Does Intrinsic Motivation Fuel the Prosocial Fire? Motivational Synergy in
Predicting Persistence, Performance, and Productivity," Journal of Applied Psychology
(93:1), pp. 48-58.
Gray, P. 2009. "Journal Self-Citation I: The Overview of the Journal Self-Citation Papers–the
Wisdom of the IS Crowd," Communications of the Association for Information Systems
(25:1), pp. 1-10.
Hair Jr., J.F., Black, W.C., Babin, B.J., and Anderson, R.E. 2009. Multivariate Data Analysis,
(7th ed.), Prentice Hall.
Hamilton, S., and Ives, B. 1980. "Communications of MIS Research: An Analysis of Journal
Stratifications," First International Conference on Information Systems, Philadelphia, PA,
USA, December 8-10, pp. 220-232.
Hardgrave, B.C., and Walstrom, K.A. 1997. "Forums for MIS Scholars," Communications of the
ACM (40:11), pp. 119-124.
Harnad, S. 2008. "Validating Research Performance Metrics against Peer Rankings," Ethics in
Science and Environmental Politics (8:11), pp. 103-107.
Harvey, C., Morris, H., and Kelly, A. 2007. "Academic Journal Quality Guide: Journals
Classified by Field and Rank," The Association of Business Schools, pp. 1-2.
Harzing, A.-W. 2011. "Publish or Perish," Tarma Software Research Pty Ltd., retrieved: August
19, 2011, retrieved from www.harzing.com.
Hendrix, D. 2009. "Institutional Self-Citation Rates: A Three Year Study of Universities in the
United States," Scientometrics (81:2), pp. 321-331.
Hirsch, J. 2005. "An Index to Quantify an Individual's Scientific Research Output," Proceedings
of the National Academy of Sciences of the United States of America (102:46), pp. 16569-
16572.
Iivari, J. 2008. "Expert Evaluation vs. Bibliometric Evaluation: Experiences from Finland,"
European Journal of Information Systems (17:2), pp. 169-173.
Judge, T.A., Hurst, C., and Simon, L.S. 2009. "Does It Pay to Be Smart, Attractive, or Confident
(or All Three)? Relationships among General Mental Ability, Physical Attractiveness,
Core Self-Evaluations, and Income," Journal of Applied Psychology (94:3), pp. 742-755.
Karuga, G.G., Lowry, P.B., and Richardson, V.J. 2007. "Assessing the Impact of Premier
Information Systems Research over Time," Communications of AIS (2007:19), pp. 115-
131.
Katerattanakul, P., and Han, B. 2003. "Are European IS Journals Under-Rated? An Answer Based
on Citation Analysis," European Journal of Information Systems (12:1), pp. 60-71.
Konrad, A.M., and Pfeffer, J. 1990. "Do You Get What You Deserve? Factors Affecting the
Relationship between Productivity and Pay," Administrative Science Quarterly (35:2),
pp. 258-285.
Kozar, K., Larsen, K., and Straub, D. 2006. "Leveling the Playing Field: A Comparative
Analysis of Business School Journal Productivity," Communications of the AIS (17:23),
pp. 524-538.
Lewis, B.R., Templeton, G.F., and Luo, X. 2007. "A Scientometric Investigation into the
Validity of Is Journal Quality Measures," Journal of the Association for Information
Systems (8:12), pp. 619-633.
Leydesdorff, L. 2008. "Caveats for the Use of Citation Indicators in Research and Journal
Evaluations," Journal of the American Society for Information Science and Technology
(59:2), pp. 278-287.
Li, E.Y. 2009. "Journal Self-Citation III: Exploring the Self-Citation Patterns in MIS Journals,"
Communications of the Association for Information Systems (25:1), pp. 21-32.
Long, R.G., Bowers, W.P., Barnett, T., and White, M.C. 1998. "Research Productivity of
Graduates in Management: Effects of Academic Origin and Academic Affiliation,"
Academy of Management Journal (41:6), pp. 704-714.
Lowry, P.B., Romans, D., and Curtis, A. 2004. "Global Journal Prestige and Supporting
Disciplines: A Scientometric Study of Information Systems Journals," Journal of the
Association for Information Systems (5:2), pp. 29-80.
Lynch, J.G. 2012. "Business Journals Combat Coercive Citation," Science (335:March), pp.
1169-a.
Maslach, C., and Leiter, M.P. 2008. "Early Predictors of Job Burnout and Engagement," Journal
of Applied Psychology (93:3), pp. 498-512.
McVeigh, M.E. 2004. "Open Access Journals in the ISI Citation Databases: Analysis of Impact
Factors and Citation Patterns," retrieved: January 1, 2011, retrieved from
http://science.thomsonreuters.com/m/pdfs/openaccesscitations2.pdf.
Meho, L. 2007. "The Rise and Rise of Citation Analysis," Physics World (20:1), pp. 32-36.
Miller, C.W. 2006. "Superiority of the H-Index over the Impact Factor for Physics,"
ArXiv:physics/0608183v1, retrieved: January 1, 2011, retrieved from
http://arxiv.org/PS_cache/physics/pdf/0608/0608183v1.pdf.
Mingers, J., and Harzing, A. 2007. "Ranking Journals in Business and Management: A Statistical
Analysis of the Harzing Data Set," European Journal of Information Systems (16:4), pp.
303-316.
Mylonopoulos, N.A., and Theoharakis, V. 2001. "On Site: Global Perceptions of IS Journals,"
Communications of the ACM (44:9), pp. 29-33.
Nerur, S.P., Rasheed, A.A., and Natarajan, V. 2008. "The Intellectual Structure of the Strategic
Management Field: An Author Co-Citation Analysis," Strategic Management Journal
(29:3), pp. 319-336.
Özbilgin, M. 2009. "From Journal Rankings to Making Sense of the World," The Academy of
Management Learning and Education (8:1), pp. 113-121.
Peffers, K., and Ya, T. 2003. "Identifying and Evaluating the Universe of Outlets for Information
Systems Research: Ranking the Journals," Journal of Information Technology Theory
and Application (5:1), pp. 63-84.
Pfeffer, J., and Langton, N. 1993. "The Effect of Wage Dispersion on Satisfaction, Productivity,
and Working Collaboratively: Evidence from College and University Faculty,"
Administrative Science Quarterly (38:3), pp. 382-407.
Polites, G.L., and Watson, R.T. 2009. "Using Social Network Analysis to Analyze Relationships
among Is Journals," Journal of the Association for Information Systems (10:8), p 2.
Porta, S., Crucitti, P., and Latora, V. 2006. "The Network Analysis of Urban Streets: A Primal
Approach," Environment and Planning B: Planning and Design (33:5), pp. 705-725.
Rainer Jr., R.K., and Miller, M.D. 2005. "Examining Differences across Journal Rankings,"
Communications of the ACM (48:2), pp. 91-94.
Romano Jr., N.C. 2009. "Journal Self-Citation V: Coercive Journal Self-Citation–Manipulations
to Increase Impact Factors May Do More Harm Than Good in the Long Run,"
Communications of the Association for Information Systems (25:1), pp. 41-56.
Sarkis, J. 2009. "Journal Self-Citation XVII: Editorial Self-Citation Requests–a Commentary,"
Communications of the Association for Information Systems (25:1), pp. 141-148.
Sarle, W.S., and Kuo, A.-H. 1993. "The MODECLUS Procedure, SAS Technical Report P-256," SAS
Institute Inc., Cary, NC, USA.
SAS 1999. "Introduction to Clustering Procedures: The Number of Clusters," retrieved:
November 28, 2012, retrieved from http://v8doc.sas.com/sashtml/stat/chap8/sect10.htm.
Saunders, C., Avison, D., Davis, G., Ein-Dor, P., Galletta, D., Hirschheim, R., and Straub, D.
2007. "AIS Senior Scholars Forum Subcommittee on Journals," retrieved: January 1,
2011, retrieved from
http://home.aisnet.org/associations/7499/files/Senior%20Scholars%20Letter.pdf.
Seibert, S.E., Crant, J.M., and Kraimer, M.L. 1999. "Proactive Personality and Career Success,"
Journal of Applied Psychology (84:3), pp. 416-427.
Sellers, S.L., Perry, R., Mathiesen, S.G., and Smith, T. 2004. "Evaluation of Social Work Journal
Quality: Citation Versus Reputation Approaches," Journal of Social Work (40:1), pp.
143-160.
Sidiropoulos, A., Katsaros, D., and Manolopoulos, Y. 2007. "Generalized Hirsch H-Index for
Disclosing Latent Facts in Citation Networks," Scientometrics (72:2), pp. 253-280.
Sombatsompop, N., and Markpin, T. 2005. "Making an Equality of ISI Impact Factors for
Different Subject Fields," Journal of the American Society for Information Science and
Technology (56:7), pp. 676-683.
Stephenson, K., and Zelen, M. 1989. "Rethinking Centrality: Methods and Examples," Social
Networks (11:1), pp. 1-37.
Straub, D., and Anderson, C. 2009. "Journal Self-Citation VI: Forced Journal Self-Citation--
Common, Appropriate, Ethical?," Communications of the Association for Information
Systems (25:1), pp. 57-66.
Straub, D., and Anderson, C. 2010. "Journal Quality and Citations: Common Metrics and
Considerations About Their Use," MIS Quarterly (34:1), pp. iii-xii.
Straub, D., Boudreau, M.-C., and Gefen, D. 2004. "Validation Guidelines for IS Positivist
Research," Communications of the AIS (13:24), pp. 380-427.
Straub, D.W. 2006. "The Value of Scientometric Studies: An Introduction to a Debate on IS as a
Reference Discipline," Journal of the Association for Information Systems (7:5), pp. 241-
246.
Svensson, G., and Wood, G. 2006. "The Pareto Plus Syndrome in Top Marketing Journals:
Research and Journal Criteria," European Business Review (18:6), pp. 457-467.
Trieschmann, J.S., Dennis, A.R., Northcraft, G.B., and Niemi, A.W. 2000. "Serving Multiple
Constituencies in the Business School: MBA Program Versus Research Performance,"
Academy of Management Journal (43:6), pp. 1130-1141.
Trkman, P. 2009. "Journal Self-Citation XX: Citations and the Question of Fit," Communications
of the Association for Information Systems (25:1), pp. 165-170.
Truex, D., Cuellar, M., and Takeda, H. 2009. "Assessing Scholarly Influence: Using the Hirsch
Indices to Reframe the Discourse," Journal of the Association for Information Systems
(10:7), pp. 560-594.
Valacich, J.S., Fuller, M.A., Schneider, C., and Dennis, A.R. 2006. "Publication Opportunities in
Premier Business Outlets: How Level Is the Playing Field?," Information Systems
Research (17:2), pp. 107-125.
van Dalen, H.P., and Henkens, K. 2001. "What Makes a Scientific Article Influential? The Case
of Demographers," Scientometrics (50:3), pp. 455-482.
Walsh, J.P. 2011. "Presidential Address: Embracing the Sacred in Our Secular Scholarly World,"
Academy of Management Review (36:2), pp. 215-235.
Walstrom, K., and Hardgrave, B. 2001. "Forums for Information Systems Scholars," Information
& Management (39:1), pp. 117-124.
Whitman, M., Hendrickson, A., and Townsend, A. 1999. "Research Commentary. Academic
Rewards for Teaching, Research and Service: Data and Discourse," Information Systems
Research (10:2), pp. 99-109.
Wilhite, A.W., and Fong, E.A. 2012. "Coercive Citation in Academic Publishing," Science
(335:February), pp. 542-543.
Willcocks, L., Whitley, E., and Avgerou, C. 2008. "The Ranking of Top IS Journals: A
Perspective from the London School of Economics," European Journal of Information
Systems (17:2), pp. 163-168.
Williamson, I.O., and Cable, D.M. 2003. "Predicting Early Career Research Productivity: The
Case of Management Faculty," Journal of Organizational Behavior (24:1), pp. 25-44.
Wolff, H.-G., and Moser, K. 2009. "Effects of Networking on Career Success: A Longitudinal
Study," Journal of Applied Psychology (94:1), pp. 196-206.
Zhang, C.T. 2009. "The E-Index, Complementing the H-Index for Excess Citations," PloS ONE
(4:5), p e5429.
APPENDIX A. JOURNAL QUALITY RANKING METHODS
Consistent with Straub and Anderson (2010), we recognize that a journal’s quality and a journal’s impact, reputation, and
influence are not necessarily equivalent. Similarly, an underlying nomology likely exists, one that is largely unknown and
unresearched, such that key factors of quality (e.g., rigor of review process, caution with respect to editorial oversight, accuracy
of content, etc.) are what predict journal impact or influence (Straub and Anderson 2010). However, due to the complex and
unknown nature of this nomology, and following extant practice in scientometrics research, we follow Straub and Anderson
(2010) in simply equating journal quality with journal impact and reputation for pragmatic purposes.
On this basis, we categorize the various methods of assessing journal quality through this lens into three methodological approaches:
expert assessment, citation analyses, and non-validated approaches. We review these approaches to better establish the
foundation for our choice to combine bibliometrics with expert assessment, rather than rely on only one method, as is extant
practice in the IS discipline.
Approach 1: Bibliometric Methods for Assessing Journal Quality
Bibliometric journal-ranking methods typically use citation analysis of a journal’s articles to assess the journal’s overall
contribution to science and, subsequently, use this contribution as a surrogate for journal quality (Straub and Anderson 2010). For
convenience, such methods typically limit the citation window to two or three years after the article’s publication (Allen et al.
2009; Fersht 2009; González-Pereira et al. 2010); however, more recently citation methods have considered longer windows such
as five years (Straub and Anderson 2010). The advantages of bibliometric methods include simplicity, objectivity, and
widespread use across most disciplines (McVeigh 2004; Meho 2007; Sombatsompop and Markpin 2005).
However, bibliometric journal-ranking approaches have several drawbacks. One limitation is that they require an index database,
such as Scopus™ or Thomson’s ISI Web of Knowledge™. These index databases are necessarily limited in scope—completely
excluding many journals of lesser quality or of unproven quality (i.e., newer journals) (Straub and Anderson 2010); however,
articles in these omitted journals are still cited, some heavily so (Harvey et al. 2007). Another criticism of bibliometric measures
is that a window of two or three years discounts long-term contribution (Straub and Anderson 2010). Allen et al. (2009) found
that many highly rated articles are not cited in the first three years but instead become highly cited after three years. Because of
this scope limitation, bibliometric approaches tend to downplay the long-term scientific contribution of certain articles (Allen et
al. 2009; Fersht 2009) and, consequently, downplay the contribution and subsequent judged quality of the journals in which these
deflated articles are published. For these reasons, Straub and Anderson (2010) assert that a five-year window is more appropriate
than a two-year window.
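As a minimal sketch of the windowed citation metrics discussed above (with hypothetical counts, and simplifying the ISI definition), a two-year and a five-year impact factor can be contrasted as follows:

```python
# ISI-style impact factor: citations received in year Y to items published in the prior
# `window` years, divided by the number of citable items published in those years.
def impact_factor(citations_to_prior_years, items_in_prior_years, window=2):
    return sum(citations_to_prior_years[-window:]) / sum(items_in_prior_years[-window:])

citations = [80, 120, 150, 210, 260]  # citations in year Y to each of the five prior years (oldest first)
items = [40, 42, 45, 50, 55]          # citable items published in each of those years

print(impact_factor(citations, items, window=2))  # short, two-year window
print(impact_factor(citations, items, window=5))  # longer window credits older articles' citations
```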
Other potential issues with bibliometric approaches include the following (Harvey et al. 2007): differences in how fields use
citation chains (some use lengthy chains, others favor short chains), herding (similar sets of highly cited articles are repeated for
articles in a discipline), content bias (review-oriented journals are cited more heavily than journals that publish original research),
journal editors who promote artificial journal self-citation, and differences in maturity of fields. These latter issues explain why
leading scientometrics research has recently established that bibliometrics are highly appropriate for comparing journals within a
discipline but highly inappropriate for comparing journals between disciplines (Harvey et al. 2007; Leydesdorff 2008).
We alleviate many of the above-mentioned drawbacks by using multiple bibliometrics, which approach we address in the
methodology section. Nevertheless, journal rankings experts outside the IS discipline have increasingly concluded that the best
overall approach is to combine journal bibliometrics with expert assessment of journal quality (e.g., Allen et al. 2009; Butler
2008; Harnad 2008; Harvey et al. 2007; Mingers and Harzing 2007).
Approach 2: Expert Assessment of Journal Quality
Studies using expert assessment of journal quality add important qualitative information and judgment that cannot be directly
reflected in bibliometric indicators that solely consider impact—including an expert’s knowledge of editorial practices,
familiarity with a journal’s peer-review process, judgment of the credentials of a journal’s editorial board, and so on (Straub and
Anderson 2010). The IS field uses this approach extensively (e.g., Hamilton and Ives 1980; Lowry et al. 2004; Mylonopoulos and
Theoharakis 2001; Peffers and Ya 2003). Through an extensive empirical analysis, Lewis et al. (2007) demonstrated that the best
IS journal rankings studies using expert opinion in a recent 10-year period (i.e., Hardgrave and Walstrom 1997; Lowry et al.
2004; Mylonopoulos and Theoharakis 2001; Peffers and Ya 2003; Walstrom and Hardgrave 2001; Whitman et al. 1999)
displayed a remarkable degree of measurement validity and reliability.
The greatest limitation of expert rankings is that they do not consider a journal’s actual impact on science. Accordingly,
researchers increasingly call for the combined use of bibliometrics with expert rankings. Another limitation of expert assessment
is that because the IS field is relatively new and dynamic, the quality of many of its journals is in a constant state of flux. As a
result, newer, quality journals can rise quickly in assessed reputation, as occurred with JAIS, ISJ, and EJIS (Lowry et al. 2004).
Thus, newer IS journals have been absent in most expert ranking studies, thereby making a comparison to older journals difficult.
For example, only three rankings include all the following IS journals in the same study: MISQ, ISR, JMIS, DSS, I&M, EJIS,
JAIS, and ISJ (Lowry et al. 2004; Mylonopoulos and Theoharakis 2001; Peffers and Ya 2003). An easy solution to this problem
is to conduct periodic expert-ranking studies (Lowry et al. 2004). Given the changes in the IS field and the recent controversies
regarding the AIS Senior Scholars’ recommendation of the SenS-6/SenS-8 baskets, a current assessment of expert opinion is
warranted.
Approach 3: Other Approaches
Researchers use other approaches less frequently because of issues in the designs of the approaches that lead to multiple validity
and generalizability concerns. A common but questionable practice is the use of a department- or college-specific journal
rankings list for institution-specific needs. Not surprisingly, this approach typically yields lists that are highly politicized and thus
lack validity and generalizability; such lists often conveniently focus on journals in which the work of associated senior faculty
has been published (Harvey et al. 2007).
A second recently proposed approach is to rank journals on the basis of the ranked quality of the institutions with which the
authors publishing in the journals are associated (Author Affiliation Index, or AAI) (Ferratt et al. 2007). One potential concern
regarding this approach is that it shifts too much of the quality assessment away from the quality of the journal content to the
quality of the authors’ associated institutions. The logical fallacy here should be clear: although positive correlations exist
between institution quality and article quality, a higher-quality institution does not guarantee higher-quality articles.
With AAI, it is also possible that the relationships discovered are tautological. How do we know the best schools? At least one
way is to determine the journals in which they publish. How do we know the best journals? The tautology is that the AAI method
says we know this by knowing where the best schools publish.
A final, more accepted approach is to simply average all previous journal rankings into one index (Rainer Jr. and Miller 2005).
We believe this approach can be useful for highly stable fields. However, we are concerned with the application of this averaging
approach to IS journal rankings for three reasons: (1) Virtually every IS journal rankings study to date has used a different
methodology and inclusion criteria for the selected journals and respondents (e.g., some included non-IS journals, some did not);
thus, the average is not from the same baseline conditions. (2) Most previous IS journal rankings used only North American
respondents, so the average was biased toward these respondents. (3) The IS field and its associated journals have been in a
period of rapid growth and quality improvement; thus, creating an average of rankings over a decade obfuscates contemporary
knowledge of IS journal quality.
APPENDIX B. INCLUSION/EXCLUSION DECISIONS IN FINAL ANALYSIS OF JOURNALS
Table B.1 Justification for Inclusion/Exclusion Decisions in Final Analysis of Journals
Name
Abbreviation
(Rainer Jr. and Miller 2005)
(Lowry et al. 2004)
(Katerattanakul and Han
2003)
(Peffers and Ya 2003)
(Mylonopoulos and
Theoharakis 2001)
(Whitman et al. 1999)
(Hardgrave and Walstrom
1997)
(Walstrom and Hardgrave
2001)
IS Journal?
Top-40 cut?
Justification (if applicable)
Academy of Management Journal
-
25
--
--
--
17
--
15
14
N
n/a
Primarily management
Academy of Management Review
-
32
--
--
--
22
--
19
16
N
n/a
Primarily management
ACM Computing Surveys
-
20
--
12
--
24
14
14
10
N
n/a
Primarily CS
ACM SIG Publications
-
27
--
--
--
26
33
--
--
N
n/a
Will not rank large aggregates like this
ACM Transactions on Database Systems
-
15
--
10
--
--
--
11
6
N
n/a
Primarily CS
ACM Transactions on Information Systems
-
9
--
--
39
--
--
--
--
N
n/a
Primarily CS
ACM Transactions on MIS
ACM TMIS
--
--
--
--
--
--
--
--
Y
Y
Write-in by several experts, top-40
Administrative Science Quarterly
-
24
--
--
--
21
--
16
--
N
n/a
Primarily management
African J. of Information Systems
AFJIS
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
AI Magazine
-
--
--
9
--
--
--
--
--
N
n/a
Magazine; primarily CS
AIS Transactions on HCI
AIS THCI
--
--
--
--
--
--
--
--
Y
Y
Write-in by several experts, top-40
All ACM Transactions
-
--
10
--
--
13
12
17
--
N
n/a
Will not rank large aggregates like this
All IEEE Transactions
-
--
8
--
--
6
9
12
--
N
n/a
Will not rank large aggregates like this
Australian Journal of Information Systems
AJIS
--
--
--
25
46
--
--
--
Y
Y
n/a
Business Horizons
-
--
--
--
--
--
--
--
25
N
n/a
Primarily management
California Management Review
-
--
--
--
--
--
--
--
--
N
n/a
Primarily management
China J. of Information Systems (CJIS)
CJIS
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
Communication Research
-
--
--
--
--
--
--
43
--
N
n/a
Primarily communication
Communications of the ACM
-
2
5
3
--
2
3
4
2
N
n/a
Magazine; primarily CS
Communications of the Association for Information
Systems
CAIS
23
--
--
5
18
--
--
--
Y
Y
n/a
Communications of the International Information
Management Association
CIIMA
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
Computer Decisions
-
--
--
--
--
--
--
--
27
N
n/a
Primarily CS
Computer Journal
-
--
--
25
--
50
43
--
--
N
n/a
Primarily CS
Computers and Operations Research
-
17
--
--
--
--
24
--
--
N
n/a
Primarily OR/OM
Computers in Human Behavior
-
--
--
--
--
--
--
42
--
N
n/a
Primarily HCI journal
Computer-supported cooperative work
-
--
--
--
36
--
--
--
--
N
n/a
Primarily communication
Data Management
-
--
--
--
--
--
37
--
24
N
n/a
Primarily CS
DATABASE
-
30
--
--
--
--
--
--
--
N
n/a
Primarily CS
47
Datamation
-
--
--
--
--
--
--
51
23
N
n/a
Magazine
Decision Sciences
-
7
6
--
--
8
5
6
8
N
n/a
Primarily decision science
Decision Support Systems
DSS
8
7
20
7
9
13
10
11
Y
Y
n/a
Electronic Commerce Research and Applications
ECRA
--
--
--
41
--
--
--
--
Y
Y
n/a
Electronic Government, An International Journal
(EG)
EG
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
Electronic J. of Information Systems Evaluation
EJISE
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
Electronic J. of Information Systems in Developing
Countries
EJISDC
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
Electronic Markets
EM
--
--
--
29
40
--
--
--
Y
Y
n/a
Enterprise Information Systems
EIS
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
Enterprise Modeling and Information Systems
Architectures, An International J.
EMISA
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
E-services Journal
e-SJ
--
--
--
45
--
--
--
--
Y
Y
n/a
European Journal of IS
EJIS
13
11
14
4
11
--
--
--
Y
Y
n/a
European Journal of Operations Research
-
--
--
--
--
42
--
--
--
N
n/a
Primarily OR/OM
Expert Systems Review
-
--
--
--
--
--
--
38
--
N
n/a
Primarily CS
Expert Systems with Applications
-
--
--
24
--
--
--
34
--
N
n/a
Primarily CS
Harvard Business Review
-
6
15
--
--
7
6
9
9
N
n/a
Primarily management
Human-Computer Interaction
-
--
--
7
--
32
--
23
--
N
n/a
Primarily HCI
IBM Systems Journal
-
42
--
8
--
28
--
--
--
N
n/a
Primarily CS
IEEE Computer
-
19
25
16
--
19
11
--
--
N
n/a
Magazine; primarily CS
IEEE Software
-
11
--
--
--
--
--
--
--
N
n/a
Magazine; primarily CS
IEEE Transactions on Computer
-
18
--
--
--
--
--
--
--
N
n/a
Primarily CS
IEEE Transactions on Knowledge and Data
Engineering
-
--
--
--
--
--
--
--
--
N
n/a
Primarily CS
IEEE Transactions on SE
-
10
22
5
--
--
--
7
5
N
n/a
Primarily CS
IEEE Transactions on SMC
-
14
--
--
--
--
--
--
--
N
n/a
Primarily CS
INFOR
-
--
--
--
--
--
--
37
--
N
n/a
Not in print
Information & Management
I&M
12
9
15
5
10
15
20
12
Y
Y
n/a
Information & Organization
I&O
40
20
--
28
25
--
--
--
Y
Y
n/a
Information and Software Technology
-
--
--
--
--
--
45
--
--
N
N
Primarily CS
Information Knowledge Systems Management
IKSM
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
Information Management & Computer Security
IM&CS
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
Information Processing and Management
IP&M
--
--
--
46
--
35
--
--
Y
N
Not top-40
Information Research
IR
--
--
--
43
--
--
--
--
Y
N
Not top-40
Information Resources Management Journal
IRMJ
50
--
--
11
38
31
35
--
Y
Y
n/a
Information Sciences
-
--
--
--
24
--
--
--
--
N
n/a
Primarily CS / Information Sciences
Information Systems
-
--
21
18
21
--
--
--
--
N
n/a
Primarily CS
Information Systems and eBusiness Management
ISeB
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
Information Systems Education J.
ISEJ
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
Information Systems Frontiers
ISF
--
--
--
18
--
--
--
--
Y
Y
n/a
Information Systems Journal
ISJ
36
13
17
10
16
16
--
--
Y
Y
n/a
Information Systems Management
ISM*
43
--
19
35
33
26
30
17
Y
Y
n/a
Information Systems Research
ISR
3
2
2
2
3
4
2
3
Y
Y
n/a
Information Technology & People (IT&P)
IT&P
--
--
--
15
27
--
--
--
Y
Y
n/a
Information Technology and Management (IT&M)
IT&M
--
--
--
27
--
--
--
--
Y
Y
n/a
Infosytems
-
--
--
--
--
--
--
--
26
N
n/a
Not in print
48
Interfaces (INFORMS)
-
39
--
--
--
39
20
28
19
N
n/a
Primarily OR/OM
International J. of Business Information Systems
IJBIS
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
International J. of Electronic Commerce
IJEC
--
--
--
12
23
--
--
--
Y
Y
n/a
International J. of Enterprise Information Systems
IJEIS
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
International J. of Information and Decision
Sciences
IJIDS
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
International J. of Information Management
IJIM
--
--
--
37
--
--
--
--
Y
N
Not top-40
International J. of Information System Modeling
and Design
IJISMD
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
International J. of Information Technologies and
Systems Approach
IJITSA
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
International J. of Intercultural Information
Management
IJIIM
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
International J. of Technology Management
IJTM
41
--
--
--
--
41
--
--
Y
N
Not top-40
International Journal of Human-Computer Studies
-
--
--
11
42
44
--
22
--
N
n/a
Primarily HCI
International Journal of Man-Machines Studies
-
34
--
--
--
34
25
--
--
N
n/a
Now IJHCS (HCI journal)
Issues in Information Systems
ISS
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
J. of Education for Management Information
Systems
JEMIS
38
--
--
--
--
39
--
--
Y
N
Not in print
J. of Computer Information Systems
JCIS
--
23
26
13
41
22
27
22
Y
Y
n/a
J. of Database Management
JDM
--
--
--
14
--
19
26
--
Y
Y
n/a
J. of Enterprise Information Management
JEIM
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
J. of Global Information Management
JGIM
--
--
--
19
--
--
--
--
Y
Y
n/a
J. of Global IT Management
JGITM
--
--
--
23
--
--
--
--
Y
Y
n/a
J. of Information Privacy and Security
JIPS
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
J. of Information System Security
JISS
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
J. of Information Systems and Technology
Management
JISTEM
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
J. of Information Systems Applied Research
JISAR
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
J. of Information Systems Education
JISE
33
--
--
31
--
36
41
--
Y
Y
n/a
J. of Information Technology
JIT
--
--
23
40
--
--
--
--
Y
Y
n/a
J. of Information Technology Case and Application
Research
JITCAR*
--
--
--
33
--
--
--
--
Y
Y
Write-in by several experts, top-40
J. of Information Technology for Development
ITD
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
J. of Information Technology Management
JITM
36
--
--
--
--
38
--
--
Y
Y
n/a
J. of Information Technology Theory and
Applications
JITTA
--
--
--
26
--
--
--
--
Y
Y
n/a
J. of Information, Technology, and Organizations
JITTO
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
J. of International Technology and Information
Management
JITIM
45
--
--
--
--
42
--
--
Y
Y
Write-in by several experts, top-40
J. of Management Information Systems
JMIS
5
3
--
3
4
7
5
7
Y
Y
n/a
J. of Management Systems
JMS
21
--
--
--
--
27
--
--
Y
N
Not top-40
J. of Organizational and End-User Computing
JOEUC
--
--
--
22
37
40
44
--
Y
Y
n/a
J. of Organizational Computing and Electronic
Commerce
JOCEC
--
--
--
34
31
--
--
--
Y
Y
n/a
J. of Strategic IS
JSIS
27
18
22
16
20
30
25
--
Y
Y
n/a
J. of Systems and Information Technology
JSIT
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
J. of the Association for Information Systems
JAIS
--
12
--
9
30
--
--
--
Y
Y
n/a
Journal of Computer and System Sciences
-
--
--
13
--
--
--
--
--
N
n/a
Primarily CS
Journal of Database Administration
-
22
--
--
--
--
28
--
--
N
n/a
Primarily CS
Journal of Information Management
-
27
--
--
--
--
21
--
--
N
n/a
n/a
Journal of Information Science
-
49
--
--
--
--
23
--
--
N
n/a
Primarily information science
Journal of Information Systems (Accounting)
-
44
19
--
--
35
18
39
--
N
n/a
Primarily accounting
Journal of Operations Research
-
--
--
--
--
--
--
32
--
N
n/a
Primarily OR/OM
Journal of Systems and Software
-
--
--
27
--
--
--
33
--
N
n/a
Primarily CS
Journal of the ACM
-
26
--
4
17
45
10
--
--
N
n/a
Primarily CS
Journal of the American Society for Information
Science
-
--
--
--
--
--
34
--
--
N
n/a
Primarily information science
Journal on Computing
-
--
16
--
--
--
--
--
--
N
n/a
Primarily CS
Knowledge Based Systems
-
--
--
21
--
--
--
31
--
N
n/a
Primarily CS
Management Science
-
4
4
--
--
5
2
3
4
N
n/a
Primarily management
MIS Quarterly
MISQ
1
1
1
1
1
1
1
1
Y
Y
n/a
MIS Quarterly Executive
MISQE
--
--
--
--
--
--
--
--
Y
Y
Write-in by several experts, top-40
MISQ Discovery
-
--
--
--
20
--
--
--
--
N
n/a
No longer in print
Omega
-
48
--
--
--
29
32
24
15
N
n/a
Primarily OR/OM
Operations Research
-
--
17
--
--
43
--
18
18
N
n/a
Primarily OR/OM
Organization Science
-
31
14
--
--
15
--
8
--
N
n/a
Primarily OB / management
Organizational Behavior and Human Decision Processes
-
--
--
--
--
47
--
21
--
N
n/a
Primarily OB / management
Pacific Asia J. of the Association for Information
Systems
PAJAIS
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
Review of Business Information Systems
RBIS
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
Revista Latinoamericana y del Caribe de la Asociación de Sistemas de Información
RELCASI
--
--
--
--
--
--
--
--
Y
Y
Write-in by several experts, top-40
Scandinavian J. of Information Systems
SJIS
--
--
--
--
--
--
--
--
Y
Y
Write-in by several experts, top-40
Simulation
-
--
--
--
--
--
--
45
--
N
n/a
Primarily CS
Sloan Management Review
-
16
--
--
--
12
8
13
13
N
n/a
Primarily management
Systèmes d'information et management
SIM
--
--
--
--
--
--
--
--
Y
N
Write-in; not ranked before; not top-40
The DATABASE for Advances in Information
Systems
DATABASE
35
--
--
8
14
17
29
20
Y
Y
n/a
The Information Society
-
--
--
--
49
36
--
--
--
N
n/a
Primarily OR/OM
Wirtschaftsinformatik
WIRT
--
24
--
32
--
--
--
--
Y
Y
n/a
Table B.2 Summary Statistics for Previous Rankings Studies’ Use of IS-Centric Journals
Study | IS journals ranked | Total journals ranked | IS journals as percent of total in study
Rainer Jr. and Miller (2005) | 19 | 48 | 39.6%
Lowry et al. (2004) | 12 | 25 | 48.0%
Katerattanakul and Han (2003) | 10 | 27 | 37.0%
Peffers and Ya (2003) | 36 | 45 | 80.0%
Mylonopoulos and Theoharakis (2001) | 21 | 49 | 42.9%
Whitman et al. (1999) | 20 | 43 | 46.5%
Hardgrave and Walstrom (1997) | 13 | 45 | 28.9%
Walstrom and Hardgrave (2001) | 8 | 26 | 30.8%
APPENDIX C. CONSIDERED PUBLICATIONS
Table C.1 IS-Centric Journals Considered with Publishing Information
Journal Name | Publisher | Sponsoring Organization
ACM Transactions on MIS (ACM TMIS) | ACM | ACM
African J. of Information Systems (AFJIS) | The International Center for IT and Development, College of Business, Southern University | Same as publisher
AIS Transactions on HCI (AIS THCI) | The Association for Information Systems (AIS) | Same as publisher
Australasian J. of Information Systems (AJIS) | Australasian Association for Information Systems (AAIS) through the Australian Computer Society Digital Library (ACS) | University of Canberra (UC)
China J. of Information Systems (CJIS) | School of Economics and Management, Tsinghua University, Beijing, "Information Systems Journal" | Same as publisher
Communications of the AIS (CAIS) | The Association for Information Systems (AIS) | Same as publisher
Communications of the International Information Management Association (CIIMA) | International Information Management Association, Inc. | Same as publisher
Decision Support Systems (DSS) | Elsevier | Same as publisher
Electronic Commerce Research and Applications (ECRA) | Elsevier | Same as publisher
Electronic Government, An International Journal (EG) | Inderscience Enterprises Limited | Same as publisher
Electronic J. of Information Systems Evaluation (EJISE) | Academic Conferences Limited | Same as publisher
Electronic J. of Information Systems in Developing Countries (EJISDC) | City University of Hong Kong, Erasmus University of Rotterdam, University of Nebraska, Omaha | Same as publisher
Electronic Markets (EM) | Springer | University of St. Gallen, Switzerland and the University of Leipzig, Germany
Enterprise Information Systems (EIS) | Taylor & Francis Group | Same as publisher
Enterprise Modeling and Information Systems Architectures, An International J. (EMISA) | Special Interest Group on Modeling Business Information Systems within the German Informatics Society (GI-SIGMoBIS) | Same as publisher
e-Service J. (e-SJ) | Indiana University Press | The Trustees of Indiana University
European J. of Information Systems (EJIS) | Palgrave Macmillan, a division of Macmillan Publishers Limited | Same as publisher
Information & Management (I&M) | Elsevier | Same as publisher
Information and Organization (I&O) | Elsevier | Same as publisher
Information Knowledge Systems Management (IKSM) | IOS Press | Same as publisher
Information Management & Computer Security (IM&CS) | Emerald Group Publishing Limited | Same as publisher
Information Processing & Management (IP&M) | Elsevier | Same as publisher
Information Research (IR) | Professor T.D. Wilson, Professor Emeritus of the University of Sheffield | Lund University Libraries
Information Resources Management J. (IRMJ) | IGI Global | The Information Resource Management Association (IRMA)
Information Systems and eBusiness Management (ISeB) | Springer | Same as publisher
Information Systems Education J. (ISEJ) | EDSIG, the Education Special Interest Group of AITP, the Association of Information Technology Professionals (Chicago, Illinois) | Same as publisher
Information Systems Frontiers (ISF) | Springer | Same as publisher
Information Systems J. (ISJ) | John Wiley & Sons, Inc. | Same as publisher
Information Systems Management (ISM) | Taylor & Francis Group | Same as publisher
Information Systems Research (ISR) | The Institute for Operations Research and the Management Sciences (INFORMS) | Same as publisher
Information Technology & People (IT&P) | Emerald Group Publishing Limited | Same as publisher
Information Technology and Management (IT&M) | Springer | Same as publisher
International J. of Business Information Systems (IJBIS) | Inderscience Enterprises Limited | Same as publisher
International J. of Electronic Commerce (IJEC) | M.E. Sharpe | Same as publisher
International J. of Enterprise Information Systems (IJEIS) | IGI Global | Same as publisher
International J. of Information and Decision Sciences (IJIDS) | Inderscience Enterprises Limited | Same as publisher
International J. of Information Management (IJIM) | Elsevier | Same as publisher
International J. of Information System Modeling and Design (IJISMD) | IGI Global | IRMA
International J. of Information Technologies and Systems Approach (IJITSA) | IGI Global | IRMA
International J. of Intercultural Information Management (IJIIM) | Inderscience Enterprises Limited | Same as publisher
International J. of Technology Management (IJTM) | Inderscience Enterprises Limited | Same as publisher
Issues in Information Systems (ISS) | International Association for Computer Information Systems (IACIS) | Same as publisher
J. of Computer Information Systems (JCIS) | International Association for Computer Information Systems (IACIS) | Same as publisher
J. of Database Management (JDM) | IGI Global | IRMA
J. of Enterprise Information Management (JEIM) | Emerald Group Publishing Limited | Same as publisher
J. of Global Information Management (JGIM) | IGI Global | IRMA
J. of Global Information Technology Management (JGITM) | Ivy League Publishing | Same as publisher
J. of Information Privacy and Security (JIPS) | UW-Whitewater, Global Business Resource Center | Same as publisher
J. of Information System Security (JISS) | The Information Institute | Same as publisher
J. of Information Systems and Technology Management (JISTEM) | TECSI - Laboratório de Tecnologia e Sistemas de Informação - FEA USP / TECSI - Research Lab on Information Systems and Technology, Universidade de São Paulo-USP | Same as publisher
J. of Information Systems Applied Research (JISAR) | EDSIG, the Education Special Interest Group of AITP, the Association of Information Technology Professionals (Chicago, Illinois) | Same as publisher
J. of Information Systems Education (JISE) | Education Special Interest Group (EDSIG) of the Association of Information Technology Professionals (AITP) | Same as publisher
J. of Information Technology (JIT) | Palgrave Macmillan, a division of Macmillan Publishers Limited | Same as publisher
J. of Information Technology Case and Application Research (JITCAR) | Ivy League Publishing | Same as publisher
J. of Information Technology for Development (ITD) | Taylor and Francis | College of Information Science and Technology at the University of Nebraska Omaha
J. of Information Technology Management (JITM) | Association of Management | Same as publisher
J. of Information Technology Theory and Application (JITTA) | The Association for Information Systems (AIS) | Same as publisher
J. of Information, Technology, and Organizations (JITTO) | Informing Science Institute | Same as publisher
J. of International Technology and Information Management (JITIM) | The International Information Management Association | Same as publisher
J. of Management Information Systems (JMIS) | M.E. Sharpe Inc. | Same as publisher
J. of Management Systems (JMS) | Association of Management (AoM) / International Association of Management (IAoM) | Same as publisher
J. of Organizational and End User Computing (JOEUC) | Information Resources Management Association | Same as publisher
J. of Organizational Computing and Electronic Commerce (JOCEC) | Taylor & Francis | Same as publisher
J. of Strategic Information Systems (JSIS) | Elsevier | Same as publisher
J. of Systems and Information Technology (JSIT) | Emerald Group Publishing Limited | Same as publisher
J. of the Association for Information Systems (JAIS) | The Association for Information Systems (AIS) | Same as publisher
MIS Quarterly (MISQ) | Management Information Systems Research Center (MISRC) of the University of Minnesota | Same as publisher
MIS Quarterly Executive (MISQE) | The Association for Information Systems (AIS) | Society for Information Management; MISQ, AIS, Indiana University; University of St. Gallen, City University of Hong Kong
Pacific Asia J. of the Association for Information Systems (PAJAIS) | The Association for Information Systems (AIS) | Same as publisher
Review of Business Information Systems (RBIS) | Clute Institute | Same as publisher
Revista Latinoamericana y del Caribe de la Asociación de Sistemas de Información (RELCASI) | The Association for Information Systems (AIS) | Same as publisher
Scandinavian J. of Information Systems (SJIS) | IRIS Association | The Association for Information Systems (AIS)
Systèmes d'information et management (SIM) | Editions Eska | Association Information et Management (AIM)
The DATABASE for Advances in Information Systems (DATABASE) | ACM SIGMIS | University of Memphis Management Information Systems Department
Wirtschaftsinformatik (WIRT); also published in English as Business & Information Systems Engineering | Gabler Verlag | Springer
APPENDIX D. DETAILS ON DATA COLLECTION PROCEDURES
Population Oversampling for Expert Survey Data Collection
For the expert assessment portion of our research, we designed the data collection methodology with an oversampling method
that included almost the entire population of IS academics in the world. We followed the methodology used in Lowry et al.
(2004), but included more sample sources to ensure population oversampling. Thus, we assume that our statistics are based on the
population of IS researchers, not a subsample of the population. To achieve this global representation, we first used the target
and respondent list from Lowry et al. (2004). We added to this group all faculty listed in the AIS membership directory, those
who published in the last five years in the traditionally acknowledged top-4 IS journals from previous studies (i.e., MISQ, ISR,
JMIS, and JAIS), those who attended ICIS in the last five years, and anyone listed as a member of any IS department in the world
(based on the AIS website listings).
This oversampling method resulted in 16,202 purportedly unique individuals and email addresses. An examination of the pool
revealed that many entries were duplicates (e.g., the same person with different name spellings, additional entries with various
email addresses, or multiple records for the same person representing different institutions over time). We thus eliminated 1,847
potential respondents whom we could verify as having duplicate identities. We then sent invitations to the remaining potential
14,355 respondents. Of these, 4,994 email addresses were invalid, generally for people who no longer resided at the institution
and/or had their account suspended; spam filters blocked a much smaller portion. In addition, 372 valid email addresses existed
for respondents who were on long-term leave (e.g., maternity, health, and industry) or were not otherwise available. Thus, we
estimate that our survey successfully reached 8989 unique academics.
Of the 8989 academics whom we reached, 83 noted that they were too busy or uninterested to respond; 56 noted that they were
retired and thus not eligible; and 444 noted that, although they published in IS journals or resided in IS departments, they did not
consider themselves to be IS academics but instead members of another field (thus, we eliminated them in our attempt to restrict
our sample to IS researchers). Most of these were academics in IS departments with academic training in computer science,
statistics, and operations.[14]

[14] We conducted a random audit of 300 (out of 444) of these individuals and found that 90 (30%) were listed as "IS academics" in the AIS membership directory. This result is to be expected because the IS field is an interdisciplinary field and IS academics routinely are members of related organizations such as the ACM, IEEE, and Academy of Management.
From among the 8406 remaining target respondents, we received 2816 responses. Of these responses, 139 were omitted because the respondents did not consider themselves to be active IS academics. To be conservative, we retained the 83 uninterested/busy respondents as potential respondents; thus, we estimate that our survey reached a maximum of 8350 eligible respondents (8989 − 56 − 444 − 139 = 8350), and given the 2816 responses that we received, we achieved a minimum 33.7 percent response rate from international IS academics. Accordingly, this represents the largest international participation in an IS journal study to date. We believe that 8350 is the most accurate estimate of the actual population of IS researchers in the world at the time our data was collected (i.e., 2010).
To increase the quality and validity of our results, our survey software prevented duplicate entries from the same person or same
computer, while allowing responses only from explicitly invited participants. Finally, we omitted responses for 396 people who
left portions of the survey blank without explanation. This process left 2420 responses that were used to conduct our full data
analysis. By comparison, after a similar winnowing process, Lowry et al. (2004) had 1572 responses remaining in their analysis.
Self-citation Google Scholar Data Collection
In order to better understand short-term citation activity, we identified all articles published from January 2011 through July 2012
in the 21 IS journals considered in our study, thereby resulting in 1358 articles. Using Google Scholar, we identified every article
that cited each of the identified 1358 articles, thereby resulting in 2548 citing articles. We coded each of the 2548 citing articles
into one of seven mutually exclusive categories listed below:
1. Self-cites: Citing article was published by the same journal as the cited article
2. Non-peer: Citing article was published in a non-peer-reviewed outlet or a non-journal, non-conference outlet such as dissertations, books, SSRN, Sprouts, working papers, etc.
3. AIS/HICSS Conference: Citing article was published in one of the following eight conferences:
a. HICSS (Hawaii International Conference On System Sciences)
b. AIS Conferences:
i. ICIS (International Conference on Information Systems)
ii. AMCIS (Americas Conference on Information Systems)
c. Affiliated AIS Conferences:
i. ECIS (European Conference on Information Systems)
ii. CONF-IRM (International Conference on Information Resources Management)
iii. ICMB (International Conference on Mobile Business)
iv. MCIS (Mediterranean Conference on Information Systems)
v. PACIS (Pacific-Asia Conference on Information Systems)
4. Non-AIS/HICSS Conference: Citing article was published in a conference not listed in #3, including symposiums,
workshops, and colloquiums.
5. IS ISI Journal: Citing article was published in one of the 29 IS journals indexed by the ISI in 2011: DATABASE, DSS,
ECRA, EIS, EJIS, EM, I&M, I&O, IJEC, IJIM, IJTM, ISF, ISJ, ISM, ISR, IT&M, IT&P, JAIS, JCIS, JDM, JGIM, JIT,
JMIS, JOCEC, JSIS, MISQ, MISQE, WIRT.
6. Other ISI Journal: Citing article was published in a journal indexed by the ISI, but is not one of the IS journals
referred to in #5 (e.g., Journal of Psychology).
7. Non ISI Journal: Citing article was published in any peer-reviewed journal not currently indexed by the ISI.
An error in citation counts could significantly bias results. Accordingly, we desired 100% reliability in our coding efforts. To establish 100% reliable coding, two coders were initially assigned to each of the 21 journals under review. The coders independently categorized each of the citing articles for each of the cited articles in their assigned journals. An independent third coder (the reconciliation coder) identified discrepancies between the two original coders and manually investigated each unreconciled article, following the same categorization procedures as the original coders. If the reconciliation coder's counts agreed with one of the two original coders, the agreeing counts were retained. If the three
coders’ counts disagreed, a fourth coder worked with the reconciliation coder until the discrepancy was verbally resolved.
Following this procedure ensured 100% reliability among coders. These citation counts were then used as input to calculate the
final measures.
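To illustrate how the coded categories feed the final measures, the following sketch is our illustration only; the category labels, function name, and example data are hypothetical rather than taken from our coding instrument. It tallies one journal's citing articles by category and computes the journal's self-citation share.

```python
from collections import Counter

# Hypothetical category labels mirroring the seven categories described above.
CATEGORIES = [
    "self_cite",             # 1. same journal as the cited article
    "non_peer",              # 2. dissertations, books, SSRN, working papers, etc.
    "ais_hicss_conference",  # 3. HICSS or an AIS(-affiliated) conference
    "other_conference",      # 4. any other conference, symposium, or workshop
    "is_isi_journal",        # 5. an IS journal indexed by the ISI in 2011
    "other_isi_journal",     # 6. an ISI-indexed journal outside the IS list
    "non_isi_journal",       # 7. a peer-reviewed journal not indexed by the ISI
]

def category_shares(coded_citations):
    """coded_citations: list of category labels for one journal's citing articles."""
    counts = Counter(coded_citations)
    total = sum(counts.values())
    return {cat: counts.get(cat, 0) / total for cat in CATEGORIES} if total else {}

# Made-up codes for a single journal, for illustration only:
example = ["self_cite", "is_isi_journal", "other_isi_journal", "self_cite", "non_peer"]
shares = category_shares(example)
print(f"Self-citation share: {shares['self_cite']:.0%}")  # -> 40%
```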
APPENDIX E. DETAILED DEFINITIONS OF CITATION METRICS USED
ISI Impact Factor
The Thomson Reuters ISI Impact Factor™ of a journal is the average number of citations received per paper published in that
journal during the two preceding years, accounting for the number of “citable items” published (Fersht 2009). For example, the
2010 impact factor for MISQ (released in summer 2011) is the number of citations received during 2010 by MISQ articles published in 2008 and 2009, divided by the number of "citable items" (or actual articles) the journal published during those same two years. "Citable items" are articles, proceedings, or research notes; they do not include editorials, letters, or book reviews. More specifically, the 2010
Impact Factor of a journal would be calculated in the following manner:
A = the number of times articles published in 2009 and 2008 were cited by indexed journals during 2010
B = the total number of “citable items” published by that journal in 2009 and 2008
2010 impact factor = A/B
Importantly, the 2010 Impact Factor could not be released until summer 2011 because the Impact Factor could not be calculated
until all the 2010 publications were processed by Thomson Reuters.
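As a concrete illustration of the A/B calculation above, the following minimal sketch computes a two-year impact factor; the function name and the numbers in the example are hypothetical, not MISQ's actual counts.

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year ISI-style impact factor, i.e., A / B as defined above.

    citations_to_prior_two_years: citations received during the focal year
        (e.g., 2010) by items the journal published in the two prior years
        (e.g., 2008 and 2009).
    citable_items_prior_two_years: number of citable items (articles,
        proceedings, research notes) published in those two years.
    """
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical counts: 500 citations to 100 citable items -> impact factor of 5.0
print(impact_factor(500, 100))  # -> 5.0
```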
Proponents of this measure admit that it is not perfect, but it is one of the most reliable in existence, being widely used for several
years (Garfield 2005). A significant advantage of this measure is the ability to compare journals from different fields and
disciplines fairly and consistently. A strength and a limitation of the ISI Impact Factor is that a journal has to attain a certain
threshold of citations and general publisher quality indicators to be allowed to have an ISI Impact Factor. This is useful because
having an ISI Impact Factor is an indicator of quality; unfortunately, this makes it difficult to assess the citation impact of
journals that do not have an ISI Impact Factor.
ISI five-year impact factor
The five-year impact factor is an ISI Thomson Reuters metric that uses five years of data instead of two in the standard
calculation. Thus, the 2010 Five-Year Impact Factor uses the years 2005–2009. Using this factor helps capture longer-term citation
impact.
ISI impact factor without journal self-citation
The ISI impact factor without journal self-citation is an ISI Thomson Reuters metric that is based on their Impact Factor
calculation but eliminates any self-citations from the journal in question. Specifically, any citations within any article in the
journal that refer to an article published in the same journal are eliminated. Thus, we included this metric to adjust further for any
potential differences in self-citation rates of top IS journals.
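A minimal sketch of the corresponding adjustment, assuming one simply removes journal self-citations from the numerator A defined above (the function name and numbers are hypothetical):

```python
def impact_factor_excluding_self_citations(total_citations, self_citations, citable_items):
    """Same two-year ratio as above, but with citations from the journal to its own
    articles removed from the numerator, as in the ISI metric described here."""
    return (total_citations - self_citations) / citable_items

# Hypothetical counts: 500 citations, of which 60 are journal self-citations.
print(impact_factor_excluding_self_citations(500, 60, 100))  # -> 4.4
```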
ISI five-year article influence
The Article Influence™ score is another bibliometric factor created by ISI Thomson Reuters that we adopt here. It determines the
average influence of a journal’s articles over the first five years after publication. This score is calculated by dividing a journal’s
Eigenfactor Score by the number of articles in the journal, normalized as a fraction of all articles in all indexed publications. This
measure is roughly analogous to the Five-Year Journal Impact Factor in that it is a ratio of a journal’s citation influence to the
size of the journal’s article contribution over a period of five years. The mean Article Influence Score is 1.00; thus, a score
greater than 1.00 indicates that each article in the journal has above-average influence. A score less than 1.00 indicates that each
article in the journal has below-average influence. Of course, this measure is relative to all publications indexed by Thomson
Reuters; thus, the influence is compared to the influence of other leading journals, not all journals.
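In our reading of the Eigenfactor project's documentation, the description above corresponds roughly to the following relationship; the 0.01 scaling is our assumption based on that documentation, since Eigenfactor Scores are expressed as percentages that sum to 100 across all indexed journals.

$$
AI_j \;=\; \frac{0.01 \times EF_j}{X_j},
\qquad
X_j \;=\; \frac{n_j}{\sum_k n_k},
$$

where $EF_j$ is journal $j$'s Eigenfactor Score, $n_j$ is the number of articles journal $j$ published during the five-year window, and the sum runs over all journals indexed by Thomson Reuters. Under that assumption, the article-weighted mean score across journals is 1.00, consistent with the description above.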
h-index
The h-index (Hirsch 2005) is a measure of a journal’s quality based on its most highly cited articles since inception. To compute
the h-index for a journal, all articles in the lifetime of the journal are ranked by the number of times other articles cite them. The
most-cited article receives a rank of one and the ranking number increases as the number of citations decreases. A journal with an
index of h has published h papers each of which has been cited in other papers at least h times. For example, if the fifth most
cited article of a journal has been cited at least five times (but the sixth most cited article fewer than six times), the journal has an h-index of five. If the 20th most cited article of a journal is cited at least 20 times, the h-index is 20. The advantage of the h-index over the impact factor is that higher priority is given to the quality of articles rather than solely the number of times a journal is cited (Miller 2006). A journal with many highly cited articles will have a higher h-index than a frequently cited journal whose citations are concentrated in only a few articles. This prevents bias toward journals that tend to self-cite. Moreover, the h-index is computed from Google Scholar data; thus, it can be calculated for far more journals than the ISI Impact Factor covers.
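The definition above translates directly into a simple procedure. The following sketch (ours, with hypothetical citation counts) computes the h-index from a journal's article-level citation counts:

```python
def h_index(citation_counts):
    """h-index: the largest h such that h articles each have at least h citations."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# The fifth most-cited article has at least 5 citations; the sixth has fewer than 6.
print(h_index([50, 30, 12, 8, 5, 4, 2]))  # -> 5
```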
hc-index
The hc-index is an adjusted h-index which gives more weight to recently published articles than older articles as a
solution to the time-in-print bias (Sidiropoulos et al. 2007); it is based on the latest Google Scholar™ data. The h-index has been criticized for several limitations, not all of which can be addressed in our paper because of space limitations. For a more complete treatment, see Bar-Ilan (2008), Bornmann and Daniel (2009), and Bornmann et al. (2008). We have chosen to address three
core limitations that have been noted in previous IS literature (Truex et al. 2009), and are most applicable to our
journal-level comparison (Truex et al. 2009; Zhang 2009). First, the h-index metric considers journals over their
lifetime (rather than the most recent years). As a result, journals that have been in publication for several years have
a significant citation advantage over those with a shorter history of publication (Truex et al. 2009). Further, a journal
that published several highly cited articles in the past will continue to have a large h-index even if the quality of the
journal changes. To overcome this time-in-print bias, Sidiropoulos et al. (2007) proposed a variation of the h-index that
they term the contemporary h-index, or hc-index. This metric adjusts the h-index by increasing the weight for more recently
published articles and decreasing the weight for older papers.
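As a sketch of this adjustment, based on our reading of Sidiropoulos et al. (2007) and using the parameter values they suggest (gamma = 4 and delta = 1, which should be treated as assumptions here), each article's citation count is down-weighted by its age in print and the h-style cutoff is then applied to the weighted scores:

```python
def hc_index(articles, current_year, gamma=4, delta=1):
    """Contemporary h-index (hc-index) per Sidiropoulos et al. (2007), as we read it.

    articles: list of (publication_year, citation_count) tuples.
    gamma, delta: weighting parameters; 4 and 1 are the values suggested by
    Sidiropoulos et al. and are assumptions in this sketch.
    """
    scores = [
        gamma * (current_year - year + 1) ** (-delta) * cites
        for year, cites in articles
    ]
    scores.sort(reverse=True)
    hc = 0
    for rank, score in enumerate(scores, start=1):
        if score >= rank:
            hc = rank
        else:
            break
    return hc

# Newer articles count for more: in 2012, a 2011 article with 10 citations scores
# 4 * 10 / 2 = 20, while a 2002 article with 10 citations scores about 3.6.
print(hc_index([(2011, 10), (2009, 6), (2002, 10)], current_year=2012))  # -> 3
```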
g-index
The g-index is an adjusted h-index that ascribes more weight to highly influential articles (Egghe 2006); it is based on the latest Google Scholar™ data. A second limitation of the h-index important for our consideration is its inability to recognize highly influential papers. Because the h-index is based on rank-ordered citation counts,