Toward a Conceptualization of Research Output Quality
Completed Research Paper
Michael J. Cuellar
Georgia Southern University
mcuellar@georgiasouthern.edu
Hirotoshi Takeda
Laval University
hirotoshi.takeda@fsa.ulaval.ca
Duane P. Truex
Georgia State University
dtruex@gsu.edu
Introduction
The evaluation of scholarly output is of great importance to the academic field. The results of this evaluation are used for decisions that materially affect the lives of those in academia, from promotion and tenure to grants, fellowships, and other material resources. Within the Information Systems (IS) discipline, output has traditionally been evaluated by counting publications in journal ranking lists. A key premise of this method is that the placement of an article in a particular venue is an indicator of the quality of the article. In line with this premise, a large literature stream exists to support the method, including attempts to identify high-quality journals, methods to identify the antecedents of quality, and rankings of scholars produced with this method.
However, the concept of scholarly output quality has been under-theorized (Dean et al. 2011; Locke and Lowe 2002; Straub 2008), leading to a number of criticisms, such as being politically influenced (MacDonald and Kam 2007) and producing misleading results (Singh et al. 2007; Truex III et al. 2011). In this paper, we investigate the existing IS studies on journal and researcher ranking and find that all of them engage atheoretically with quality: quality is continually referenced but never defined or seriously engaged. Quality may be the focus of the analysis of scholarly output, but the concept itself has not been developed. The lack of a definition of quality is an issue not only for IS but also for many fields, e.g. accounting, as Locke and Lowe (2002) point out.
In this paper, we develop this quality assertion. First, we review the various definitions of quality in use; then we search the IS literature for treatments of quality and develop the concept of quality as portrayed in that literature. Following a summary of the state of the literature, we proffer a reconceptualization of quality for the evaluation of academic research output and discuss the implications of this reconceptualization for academic research.
Literature Review
In this section, we first review the concept of quality from the literature. We then explore how the IS field has dealt with the concept of quality in its ranking studies. We find that the term "quality" is used in the IS literature on rankings in a commonsense and unproblematic manner; no attempt is made to conceptualize or define the notion before it is operationalized. We find a systematic attempt at a definition only in the information quality literature (Wang and Strong 1996), one that adopts the definition of quality from the product manufacturing field. In general, the notion of quality, where used or explicated at all, is ill-defined and employed in a very loose sense.
Conceptualizations of Quality from Product Manufacturing Practice
In the literature concerning "quality", essentially three different definitions of quality have been espoused, which we denote as "production" quality, "consumption" quality, and "transcendent" quality. Production quality is associated with the notion of "conformance to specification" (ASQ 2008; Crosby 1979). In other words, does the production process produce output that conforms to the specification for the product? Other ways of viewing this include "number of defects" (Motorola University 2008) or "uniformity around a target value" (Taguchi 1992). All of these definitions contain the idea of an ideal target or specification, which the production process is to achieve; the software production process likewise uses conformity to specifications. In production contexts, the goal of a quality effort is to reduce the product's variance as compared to a specified ideal or product template. For example, a manufactured item is examined by devices measuring against an exacting set of specifications: dimensional, weight, fit/finish, functional performance, response time, maximum temperature resistance, power output, etc. The closer the item comes to its specifications, the higher its quality is held to be.
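Taguchi's "uniformity around a target value" is commonly formalized as a quadratic loss function; the following is a standard textbook rendering of that idea, offered here as our own illustration rather than a formula from the ranking literature:

$$L(y) = k\,(y - m)^2, \qquad \mathbb{E}[L] = k\left[\sigma^2 + (\mu - m)^2\right]$$

where $y$ is the measured characteristic, $m$ is the target (specification) value, and $k$ is a cost constant. Averaged over a production run, the expected loss decomposes into variance and bias terms, so improving production quality means shrinking both the variance $\sigma^2$ and the offset of the process mean $\mu$ from the target.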
A second definition, consumption quality, is associated with the notion of "fitness for use" (ASQ 2008). That is, can the item as produced meet the needs of those who use it? Related interpretations of this sort include the "degree to which a set of inherent characteristics meets requirements" of a customer as determined by the customer (ISO 2005), and "what the customer gets out and is willing to pay for" (Drucker 1985). In these definitions the emphasis is on whether or not the consumer of the product can use the product and, if they use it, whether the product conforms to their needs and performance expectations. The product does not have to be used as designed; rather, it can be used in any way that the customer desires. Quality is a performative measure derived from the customer's interactions with the product, wherein the resulting fitness for use, as perceived by the customer, is the measure of the product's quality. For example, quality might be how well a customer perceives that a clothes dryer dries clothing. If it dries to the level desired by the customer, then the dryer is of high quality; if the dryer leaves the clothes wet, it would be considered of lesser quality. Of course, a customer may not know the concept of a clothes dryer and, looking at the dryer, may decide to use it as a chicken coop. If the customer decides that the dryer used as a chicken coop is rather small and the coop spins around, he may consider it to be of low quality as a chicken coop regardless of its clothes-drying potential. The consumption quality notion is reflected in a long stream of research in the information quality field starting with Wang and Strong (1996) and associates (Lee et al. 2002; Madnick et al. 2009; Strong et al. 1997; Stvilia et al. 2007; Wang 1998). Other papers in the information quality area have adopted this approach as well, e.g. Rieh (2002), who examines why users accept certain information on the web, building on Taylor (1986) and Wilson (1983), who implicitly adopt this notion of information quality.
The third approach to quality is one in which quality is not defined; rather, what is meant by the term is assumed to be understood and accepted. This non-definition of quality is identified by Garvin (1984) as the transcendent view. A transcendent view considers quality to be something that cannot be articulated; we know a quality product when we see it but find it hard to pinpoint the particular characteristics that make it a quality product:

"Quality is neither a part of mind, nor is it part of matter. It is a third entity which is independent of the two ... even though Quality cannot be defined, you know what Quality is!" (Pirsig 1974)
The transcendent view of quality is concerned with quality aesthetics and can be likened to primitive Platonic concepts such as "beauty" and "truth"; by being exposed to a succession of quality objects, we develop a sensitivity for quality. Editors may claim they recognize good research when they see it, as a result of the sensitivity they have developed through exposure to many research publications. This is indeed the defensive explanation used by many editors when challenged to defend the quality of any given work. In the transcendent sense, quality is a naïve, subjective, and path-dependent conceptualization in which the standards used to identify quality are particular to the evaluator and arise from the evaluator's experiences. Quality is path dependent, based on where one has received his or her training, one's position and research in the field, the standards by which one is evaluated, interactions with peers, and the expectations of one's employers.
Whereas the notion of quality standards has been a robust discourse in other fields for generations, notably art history and art criticism, our own field seems unaware of these other intellectual traditions. For instance, the notion of transcendental quality as a warrant for clear and universal adjudication of quality was discredited and abandoned decades ago in the field of art history and art criticism (Innis 2009; Langer 1957; Langer 2000). The transcendent view of quality is simply a naïve and indefensible notion.¹

¹ A question may be raised here as to the suitability of these kinds of definitions for the assessment of research output quality. We are not mass-producing research products; therefore research output should not be compared to an ideal archetype; academic research has to do with surpassing ideal types rather than conforming to them. In response, we might ask: what is the point of research output? We argue that it is to document an investigation into a phenomenon that has resulted in some new knowledge. In that sense, we might consider research the documentation of the process by which knowledge was developed. The investigatory process followed might then be considered an "information manufacturing" process that goes through very definite steps such as formation of the research question, literature review and analysis, design of the investigation, data collection, analysis, documentation, review, revision, and publication. We therefore consider adaptation of product-manufacturing quality assessment definitions and methodologies to the process of the development of academic knowledge appropriate.
The Approach of the IS Literature
The literature generally acknowledges that there is no accepted theory of quality for the evaluation of academic research output (Dean et al. 2011; Locke and Lowe 2002). In addition, no empirical study that we are aware of demonstrates, for example, that MISQ and ISR are the highest-quality IS journals. Instead, the literature is a series of papers that attempt to analyze what are considered to be proxies for quality.
To investigate the way the concept of research output quality has been handled in the IS literature, we reviewed papers within the IS field that attempt to assess research output quality. To collect the papers for study, we began with 39 papers in the IS literature that attempt to rank journals, scholars, or departments across the IS and related disciplines. These papers were accumulated through the authors' seven years of work in this area. This review shows the form of quality used in those studies is unanimously of the transcendent approach. Except for one paper (Clarke 2008), no attempt to define quality was made. Instead, in these papers we find the term "quality" or cognates such as "prestige," "top," "value," etc. used in an implicit and commonsense manner. Yet somehow, despite this lack of conceptual clarity, efforts have been made to define various measures for quality. Straub and Anderson (2010) seem to summarize the general approach to the topic used in the IS discipline. They suggest that journal quality is

… an assessment of journal attributes that focuses on the process of reviewing papers, the publication of papers that make significant intellectual contributions to the field, and the subsequent stature of the journal that results from the former two attributes. (p. iv)
For them, quality is an "assessment" of the processes by which the reviewing and publication of papers is performed. They do not describe the characteristic properties of quality or how we might distinguish it from other related concepts; the two processes of reviewing and publication are antecedent to the judgment of quality. In consonance with this definition, they state:

[L]et us suggest that a concept like journal quality lies almost completely in the minds of scholars because quality itself is highly abstract, .... Without clearly mapped physical markers, we can come up with a set of metrics that will approximate this construct, but never tap into it without a large dose of humility … It is not even remotely similar to the construct of something physical like ball bearing quality, where we can measure with small degrees of precision the variances of machine tools in creating the balls, their housings, and the processes that assemble these. (Straub and Anderson 2010, p. x)
Here, they make a clear statement of the transcendent view of the concept of quality that they hold. For them, quality is "almost completely in the minds of scholars." Quality is therefore not something external to viewers but rather a subjective judgment that scholars make. They explicitly reject the "conformance to specification" formulation of quality as not possible: while we can develop metrics to measure quality, we must recognize that we are not really tapping into the concept.
This use of the transcendent view of quality has resulted in issues related to assessing the quality of journals. Dennis et al. (2006) observe:
Recent studies of faculty opinions show considerable differences of opinion among researchers in different parts of the world over what constitutes the top research journals (Lowry, et al. 2004; Mylonopoulos and Theoharakis 2001; Peffers and Ya 2003; Rainer and Miller 2005; Saunders 2005b). The study by Lowry et al. (2004) shows researchers in all regions, on average, agree that the top two IS journals are MIS Quarterly (MISQ) and Information Systems Research (ISR) (respectively), but beyond this, there is less agreement. The third place journal differs by region, but even so, in each region, the third place journal is scored at less than half the quality of ISR, suggesting that the top two journals are clearly the best, with others noticeably lower in perceived quality than MISQ and ISR. The study by Mylonopoulos and Theoharakis (2001) places these two journals first and second in North America, but adds Communications of the ACM (CACM) as either first or second in other regions, with MISQ and ISR second or third. CACM publishes primarily short articles in a magazine format targeted at computer science practitioners, and is typically not considered a pure research journal. (Dennis et al. 2006, Appendix A)
Based on these issues, we rule out the transcendent approach as a valid way to proceed. In the next section we analyze quality as conformance to specification and quality as fitness for use as candidate foundations for a conceptualization of research output quality.
A Conceptualization of Research Output Quality
We use the term research output to refer to the documentation of an investigation into a phenomenon that has resulted in the generation of new knowledge. Thus research output can refer to a book, a journal article, or a conference proceeding.
Research Output Quality
Before we can address the issue of conceptualizing research output quality, we need to address why we need such a conceptualization. One might ask: we often argue about the quality of a journal, but why do we need to discuss article quality? The answer is quite simple: article quality is the basis of tenure, promotion, and funding decisions. How the articles of a scholar are perceived is, at least in part, a determiner of the scholar's receipt of various rewards. If we are to make a fair and democratic business of evaluating scholars, then we need some standard of what "quality" research is. We cannot continue to allow an atheoretical and subjective evaluation of "quality".
If, then, we are to alleviate the atheoretical use of quality within the literature and place it on a firm footing, the concept of research output quality must be defined and theorized. In this section, we provide the beginning of an effort to explore and operationalize the idea of quality scholarly output. We saw above that, of the three different conceptualizations of research quality, the IS field uses the "transcendental" approach, and we reviewed the issues associated with that approach.
We also argue that the "conformance to specification" approach is not usable: for it to be used, there must exist very detailed specifications, and such specifications do not exist. As we showed earlier with the Straub and Anderson (2010) notion, we are not dealing with quality in the sense of the quality of a ball bearing.
The foundation of the conceptualization of quality as conformance to specification is the idea of an ideal target or specification, which the production process is to achieve. The goal of a quality effort is to reduce the variance from the ideal, which presumes there exists a clear and precise notion of the "ideal" state. In manufacturing, there is a specification, which includes precise, measurable dimensions. This operationalized specification is notably missing in the field of academic research. One could argue that numerous standards have been produced for positivist research (Carte and Russell 2003; Chin et al. 2003; Gefen and Straub 2005; Klein and Kozlowski 2000; Petter et al. 2007; Straub et al. 2004), interpretivist research (Klein and Myers 1999), and critical research (Myers and Klein 2011); however, each of these proposed standards is filled with "guidelines", "rules of thumb", and other fuzzy standards that are insufficient to serve as "ideal" states or as a precisely measurable statement of quality against which research output could be compared in the same fashion as commercial production quality.
Such guidelines and "rules of thumb" are problematic for two reasons. On one hand, they are a kind of "contingency approach" (Davis 1971) to quality. On the other hand, they suggest that if the elements are assembled in some "right" order, then the designation of quality might be assigned to a work. Except that it generally does not work this way. Jones (2004) illustrates the case in an examination of papers given "best paper" awards by the IS field's top journals and conferences. His investigation compared the papers to the then-standing guidelines for how good-quality papers must present research methods; his finding was that almost none of the "best papers" met those standards. The difficulty with such lists, guidelines, and contingency approaches is that, even if such ideal specifications could be generated and agreed upon, no clear or single configuration of the elements exists that guarantees a quality publication. Thus extreme difficulties exist in creating an operationalization of this definition. The "conformance to specification" approach cannot serve as the definition for academic quality, and we move on to the second definition.
Research Output Quality as Fitness for Use
In considering research output quality as fitness for use, we must first define what use the research output will fit. The outputs of research can be used for a number of different purposes: for example, by practitioners to inform their practice and thus improve their work or output; by other researchers as the foundation for their own research, whether to replicate the study at hand, verify the findings, or extend or challenge its conclusions; for government policy formulation; and so on. For any of these purposes, the research is of high quality if it is considered good enough for the purpose to which the user wishes to put it.
A publication would be of "high quality" if the ideas contained in it are taken up by the field and inform future research or practice with its observations. Thus the point of research outputs is to report and present findings in such a way that they tell the user something new about reality, so that the user can improve their practice or extend knowledge. Quality research provides information to both practice and research that is useful for their pursuits.
If we limit our discussion of research output quality to the academic world, how can we know if a research output is useful and therefore of high quality? To answer the question, we must review the dialogic nature of academic research. Latour (1987) provides an interesting model of how this occurs. He indicates that a statement by itself is not considered true or false; rather, its fate is determined by how subsequent statements use it. These uses may be what he terms "positive modalities" or "negative modalities." Positive modalities accept the truth of the statement and move the reader toward application. Negative modalities question the statement and move the reader backward to question how the statement was generated and, in particular, the errors involved in its generation. In considering Latour's concept, we see that positive modalities consider the statement fit for use and therefore of high quality, while negative modalities question its usefulness and therefore its quality.
Given this conceptualization of scholarly output quality, we can identify the following propositions. First, fitness for use is necessarily a subjective value judgment. Without a firm specification, we have to leave the evaluation of fitness for use in the hands of the receiving scholar. While each scholar's analysis is necessarily path dependent, by looking at what the field does instead of the opinions of a few empowered experts, we can discern the opinion of the field (the crowd) about the paper. The evidence Surowiecki (2005) presents, that crowd-sourced decisions, under the right circumstances, yield superior and more accurate results than expert decision making, is being increasingly borne out in studies on crowd-sourced decision-making in many fields. Genome research (Malone et al. 2010) examines how harnessing the power of the crowd leverages new ways to express old problems and opens previously missed solution spaces. Terwiesch and Xu (2008) examine how motivating crowds to participate yields superior solution options. The areas of finance and stock picking, product innovation and design, and long-distance search research also support Surowiecki's assertion that an open, transparent, and "democratic" engagement of the crowd is wise and offers surprisingly efficient outcomes (Afuah and Tucci 2011; Jeppesen and Lakhani 2010; Poetz and Schreier 2011; von Hippel 2005).
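To make the wisdom-of-crowds argument concrete, the following is a minimal simulation sketch. It is our own illustration, not from the paper, and all numbers in it are assumptions: it shows why many independent, individually noisy judgments, once aggregated, can estimate a paper's fitness for use more accurately than a small expert panel whose members share a common bias.

```python
# A minimal simulation sketch (illustrative assumptions throughout) of the
# Surowiecki-style claim: aggregating many independent, noisy judgments can
# beat a small expert panel that shares a bias from similar training.
import random

random.seed(42)
TRUE_QUALITY = 0.70          # hypothetical "true" fitness-for-use of a paper

def judge(bias: float, noise: float) -> float:
    """One scholar's subjective, path-dependent judgment of the paper."""
    return TRUE_QUALITY + bias + random.gauss(0.0, noise)

# A large crowd: independent judgments, individually quite noisy (sd = 0.20).
crowd = [judge(bias=0.0, noise=0.20) for _ in range(500)]

# A small expert panel: less noisy individually (sd = 0.05) but sharing a
# common bias from similar life experiences (the "agency" effect below).
experts = [judge(bias=0.15, noise=0.05) for _ in range(3)]

crowd_estimate = sum(crowd) / len(crowd)
expert_estimate = sum(experts) / len(experts)

print(f"true quality      : {TRUE_QUALITY:.3f}")
print(f"crowd mean (n=500): {crowd_estimate:.3f}  error {abs(crowd_estimate - TRUE_QUALITY):.3f}")
print(f"expert mean (n=3) : {expert_estimate:.3f}  error {abs(expert_estimate - TRUE_QUALITY):.3f}")
```

The point is not the particular numbers but the structure: independent errors cancel under aggregation, whereas a bias shared by a few experts does not, which is precisely the "right circumstances" condition Surowiecki attaches to crowd superiority.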
Second, as a subjective value judgment, scholarly output quality is a construction. Following the constructivist learning model of Kolb (1984), we recognize that individual theory building is the result of an internal conversation that we have with ourselves in interaction with the world. This interaction, and our subsequent reflection upon experience, allows us to construct a worldview of how the world is and how it works. This "world" includes our view of which scholarly output is useful for our research. Thus each individual, over their experiences in doctoral studies and their subsequent research activity, builds a view of what would be useful for them in their own research, and this worldview provides a basis for the assessment of scholarly output. Again, these independent views are expressed as a field, and by examining the response of the entire field we can get a perspective on what the field perceives as fitness for use.
Third, different agencies, groups of individuals going through similar life circumstances (Archer 1995), will tend to develop similar perceptions of fitness for use. These groups may form by coalescing around a certain scholar or philosophical position. They may also come from a certain doctoral program, university department, or coalition of researchers. Whatever the source, they will tend to have similar life experiences that drive their research programs, goals, and views of what is correct research. Influenced by these groups and the experiences they share, they will tend to develop similar views of what makes certain scholarly output more fit for research than others.
Fourth, the assessment of scholarly output must be performed by as large a group from the field as possible. Because the judgment is subjective, we cannot allow small groups of scholars to make the determination for us. By using a large group, we avoid sampling errors and we harness the "wisdom of crowds", as discussed above, to determine how fit for use an example of scholarly output is.
Fifth, the process also improves on the transcendent approach by making the evaluation more open and transparent. Properly operationalized, this assessment should be fully open, a kind of democratic discourse (Truex III et al. 2011). The examination and discovery of positive and negative modalities is open to all, and assertions of quality on this basis can be easily checked.
Sixth, there is empirical support for this notion. Serenko and Dohan (2011) compared expert survey and citation impact studies. In their paper, they reported that the field of study and current research interests of respondents color their perception of the quality of a journal. This parallels a finding by Walstrom et al. (1995) that journal rankings were positively influenced by research interest, familiarity, and discipline. These findings support the notion that, in the minds of respondents, there is a correlation between which journals are useful to them and the concept of higher quality.
Finally, the concept of modalities allows us to profile the use of a work by the field. We can see where a paper was used as a mere example of something, as a proof text, or as an underlying framework. We can also see, negatively, where a paper was used as a negative example or was subjected to a fundamental critique.
Discussion
This paper has reviewed the quality literature and identified three different definitions of quality. A review of the IS literature has shown that IS has tended to use the transcendent definition of quality, which makes quality a subjective evaluation. Being subjective, there are no empirical studies that demonstrate the quality of articles or journals. In the absence of such studies, we argue that subjective evaluation persists because of a power-based negotiation within the field, which determines the papers and journals that will be considered "high quality."
In place of subjectivity, we offered a definition of research output quality based on quality as "fitness for use". Following Latour (1987), we argued that citations to papers appear as different modalities. Positive modalities, which cite the paper approvingly, move the reader toward application of the concepts of the paper, while negative modalities, those that cite the paper to take issue with it, move the reader toward contesting the findings of the paper.
For researchers, this paper marks the first attempted definition of academic research quality that we are aware of. In place of subjective definitions and surveys of opinion, we now have a conceptualization of quality, theoretically founded in the quality literature, that leads us to a more objective way to conceive of quality and to operationalize its analysis. This conceptualization is a great improvement over the subjective transcendent conceptualization used today. This research can open the determination of quality to the entire field, freed from the domination of well-meaning "senior scholars." Research output quality as "fitness for use", as determined by the actual use of the literature by the field, is emancipating in that it allows an open and objective evaluation of the quality of research.
The conceptualization of research output quality advocated here, the extent to which the field believes that published research is useful for its own studies, is itself a path-dependent, subjective value judgment. The question arises: how is this any different from, or better than, the transcendent approach utilized in the literature? As we argued above, it is superior in that, rather than placing the assessment of quality in the hands of only a few people, we base it on the judgment of large numbers in the field. As discussed above, a large number of people examining a work in terms of its suitability for their research is superior to a small number of experts in finding issues. The operationalization of this approach is much more than a simple counting of citations or "Facebook likes"; rather, it relies on sophisticated scientometric analysis such as that described in Cuellar et al. (2016). This is not to say that the larger community judgment is perfect or immune to bias, but rather that by exposing the research output to the field and then measuring what they do with it, we get to see the actual impact of the original work on the field.
A Research Agenda
As a nascent area for research, there are many aspects that require development:
Operationalization of the Concept
As discussed above, investigation is necessary to determine how these quality modalities manifest in the empirical world. Since we are talking about the use of papers by other scholars, we could argue these modalities would manifest themselves in citations. To identify these modalities in the citation of papers, we must go beyond the simple counting of citations and consider the modality of each citation. The way in which these modalities manifest themselves will vary. Positive modalities might range from heavy reliance on the concepts of the paper, for example using it as the underlying framework, as support for the citing paper's logic, or to provide concepts for the citing paper, down to an incidental citation or example. Negative modalities will be citations as bad examples or specific refutations. To operationalize quality, one would need to evaluate both the positive and negative citations. These different modalities can lead us to a typology of the usage of one paper within another.
This typology could be used in the development of analysis technology to evaluate papers: citing papers could be parsed to identify the modality of citation employed and a modality evaluation assigned, as the sketch below illustrates. Such a computation, however, does not capture the situation where a paper is not considered useful enough for a citation. We refer to this kind of modality as the negative modality of silence. Some other kind of analysis, comparing the net citation modality level for a paper against other papers, is needed to account for this silent modality. Additional work would be needed to determine how to use this methodology to evaluate scholars.
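As one illustration of what such analysis technology might look like, the following sketch encodes a hypothetical modality typology and a net modality score. The typology labels, the weights, and the keyword-based classifier are purely illustrative assumptions; the paper does not prescribe an implementation, and a real system would require full-text parsing or a trained classifier rather than keyword matching (cf. Cuellar et al. 2016).

```python
# A minimal, illustrative sketch of citation-modality scoring. The typology,
# the weights, and the naive keyword classifier are assumptions made for
# illustration only; the paper does not prescribe a specific implementation.
from dataclasses import dataclass
from enum import Enum

class Modality(Enum):
    FRAMEWORK = "underlying framework"      # strongest positive use
    SUPPORT = "support for logic/concepts"
    INCIDENTAL = "incidental example"
    NEGATIVE_EXAMPLE = "cited as bad example"
    REFUTATION = "specific refutation"      # strongest negative use

# Illustrative weights: positive modalities add, negative ones subtract.
WEIGHTS = {
    Modality.FRAMEWORK: 3.0,
    Modality.SUPPORT: 2.0,
    Modality.INCIDENTAL: 0.5,
    Modality.NEGATIVE_EXAMPLE: -1.0,
    Modality.REFUTATION: -2.0,
}

@dataclass
class Citation:
    citing_paper: str
    context: str  # the sentence(s) surrounding the citation

def classify(citation: Citation) -> Modality:
    """Toy classifier: a real system would parse the citing paper's full
    text, or use a trained model, rather than match keywords."""
    text = citation.context.lower()
    if "refute" in text or "fails" in text:
        return Modality.REFUTATION
    if "contrary" in text or "however" in text:
        return Modality.NEGATIVE_EXAMPLE
    if "building on" in text or "framework" in text:
        return Modality.FRAMEWORK
    if "consistent with" in text or "supports" in text:
        return Modality.SUPPORT
    return Modality.INCIDENTAL

def net_modality_score(citations: list[Citation]) -> float:
    """Net citation modality level for one paper; comparing this score
    across comparable papers is one way to detect the modality of silence."""
    return sum(WEIGHTS[classify(c)] for c in citations)

# Usage example with fabricated citation contexts:
cites = [
    Citation("Paper A", "Building on the framework of Smith (2001), we ..."),
    Citation("Paper B", "Our results are consistent with Smith (2001)."),
    Citation("Paper C", "However, Smith (2001) fails to account for ..."),
]
print(net_modality_score(cites))  # 3.0 + 2.0 - 2.0 = 3.0
```

Note that even this scoring cannot see the negative modality of silence directly; as argued above, that requires comparing net scores against a reference set of comparable papers.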
Antecedents of Fitness for Use
This area considers the different causes of the various modalities of quality. Why is a paper cited or not cited? How does a scholar's view of fitness for use come about? Why is a paper simply ignored rather than negatively cited? Various studies at the individual level of analysis could be done to understand the psychology of citation and of the formation of the mental construct of fitness for use, so that we can understand how a scholar forms his or her concept of what research is useful for furthering their work.
Another area is that of social antecedents. In what ways do social structures influence scholars' choices of which work to draw upon in developing their research? How do these social structures influence the development of their worldviews? In this line of research, investigations would be made to show the impact of ideological and material structures on the individual in the formation of his or her views of fitness for use.
Another implication that derives from this study is that the venue of publication should, in theory, be irrelevant. However, because of existing social structures, the uptake of a scholar's ideas is distorted by perceptions of journal quality. In an ideal world, where a paper is published would make no difference to the use of the paper by the field. However, because of distortions in the ideology of the field, different venues do convey enhancements to the perception of quality of an article and make it more likely to be cited regardless of actual quality. This area calls for sociological studies to determine the effect of publication venue, peer perception, and institutional pressures on the perception of usefulness, or on the visibility of a paper, for a scholar.
Explanatory research can also be done to determine why authors achieve the levels of quality that they do. Some of the antecedents of quality might be determined, such as social activity, publishing in highly visible venues, or citing other important work. Other antecedents might lie in doctoral training, associations, or the strength of co-authors.
Implications for Practice
For the practice of evaluating research output, this conceptualization of quality points toward changes in the methodology of evaluating scholarly output and in the promotion and tenure process, among others. In an ideal world, journal lists would not be needed. Researchers would labor to get their work vetted as best they can and then make the paper available for access by others. Other scholars would review and then adopt their ideas, or not. Those responsible for assessing the quality of research outputs would request the various scientometric statistics in order to evaluate the scholar, journal, or institution.
In the publication process, journals should migrate toward being review and repository institutions. They should function, as they do now, as volunteer review organizations that provide research and writing assistance to researchers. But rather than simply accepting or rejecting articles, they should seek to be assistance organizations, helping researchers develop their work. Ideally, this assistance could begin back in the research conceptualization phase rather than waiting for a paper to be written: journals could help researchers design their work and then provide consultation on the execution and documentation of their results. The search facilities now available, such as Google Scholar, can level the playing field so that all articles are equally accessible.
Conclusion
In this paper, we have seen that the current method of evaluating scholarly output is counting papers published in ranked journals. This methodology assumes that the publication of a paper in a particular journal is a warrant for assuming it has a certain level of quality. However, the concept of scholarly output quality has not, to this point, been conceptualized in a generally accepted way. A review of 39 papers in the IS literature dealing with scholarly output quality has shown that quality is generally used in an implicit, undefined, and unproblematic way within the literature. As many researchers have noted, this leads to contradictory and otherwise inconsistent results. To resolve this issue, this paper has proposed that scholarly output quality may be conceptualized as "fitness for use", i.e., is the paper under consideration useful for informing my research? Fitness for use deals with how the field uses scholarly output, and the field can deal with the output in positive or negative modalities.
This conceptualization of scholarly output quality is proposed as a subjective evaluation: it is constructed by individuals based on their experience; it is influenced by the different experiences of various groups, which results in differing perceptions of fitness for use; and it should be made by as large a group in the field as possible to avoid bias, thereby creating a more open and transparent method of evaluating quality.
Finally, a research agenda was outlined that includes the need to develop an efficient and effective means of operationalizing this concept for use in practice, along with research on the antecedents of fitness for use.
References
Afuah, A., and Tucci, C. 2011. "Crowdsourcing as a Solution to Distant Search." University of Michigan.
Archer, M. S. 1995. Realist Social Theory: The Morphogenetic Approach, (1st ed.). Cambridge: Cambridge University Press.
ASQ. 2008. "Glossary - Entry: Quality." Retrieved 07/20, 2008.
Carte, T. A., and Russell, C. J. 2003. "In Pursuit of Moderation: Nine Common Errors and Their Solutions," MIS Quarterly (27:3), pp. 479-501.
Chin, W. W., Marcolin, B. L., and Newsted, P. R. 2003. "A Partial Least Squares Latent Variable Modeling Approach for Measuring Interaction Effects: Results from a Monte Carlo Simulation Study and an Electronic-Mail Emotion/Adoption Study," Information Systems Research (14:2), pp. 189-217.
Clarke, R. 2008. "A Citation Analysis of Australian Information Systems Researchers: Towards a New Era?," Australasian Journal of Information Systems (15:2), pp. 35-55.
Crosby, P. 1979. Quality Is Free. New York: McGraw-Hill.
Cuellar, M. J., Takeda, H., Vidgen, R., and Truex III, D. P. 2016. "Ideational Influence, Connectedness, and Venue Representation: Making an Assessment of Scholarly Capital," Journal of the Association for Information Systems (17:1), pp. 1-28.
Davis, M. S. 1971. "That's Interesting! Towards a Phenomenology of Sociology and a Sociology of Phenomenology," Philosophy of the Social Sciences (1), pp. 309-344.
Dean, D. L., Lowry, P. B., and Humpherys, S. L. 2011. "Profiling the Research Productivity of Tenured Information Systems Faculty at U.S. Institutions," MIS Quarterly (35:1), pp. 1-15.
Dennis, A. R., Valacich, J. S., Fuller, M. A., and Schneider, C. 2006. "Research Standards for Promotion and Tenure in Information Systems," MIS Quarterly (30:1), pp. 1-12.
Drucker, P. 1985. Innovation and Entrepreneurship. Harper & Row.
Garvin, D. A. 1984. "What Does 'Product Quality' Really Mean?," MIT Sloan Management Review (Fall).
Gefen, D., and Straub, D. 2005. "A Practical Guide to Factorial Validity Using PLS-Graph: Tutorial and Annotated Example," Communications of the Association for Information Systems (16), pp. 91-109.
Innis, R. E. 2009. Susanne Langer in Focus: The Symbolic Mind. Bloomington, Indiana: Indiana University Press.
ISO. 2005. "ISO 9000:2005, Quality Management Systems -- Fundamentals and Vocabulary," in: TC 176/SC.
Jeppesen, L. B., and Lakhani, K. R. 2010. "Marginality and Problem-Solving Effectiveness in Broadcast Search," Organization Science (21), pp. 1016-1033.
Jones, M. 2004. "Debatable Advice and Inconsistent Evidence: Methodology in IS Research," in Information Systems Research: Relevant Theory and Informed Practice, B. Kaplan, D. P. Truex III, D. Wastell and T. Wood-Harper (eds.). Boston: Kluwer Academic Publishers.
Klein, H. K., and Myers, M. D. 1999. "A Set of Principles for Conducting and Evaluating Interpretive Field Studies in Information Systems," MIS Quarterly (23:1), pp. 67-94.
Klein, K. J., and Kozlowski, S. W. 2000. Multilevel Theory, Research and Methods in Organizations. San Francisco: Jossey-Bass.
Kolb, D. A. 1984. Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall, Inc.
Langer, S. K. 1957. Philosophy in a New Key: A Study in the Symbolism of Reason, Rite and Art. Cambridge: Harvard University Press.
Langer, S. K. 2000. "Language and Thought," in Language Awareness: Readings for College Writers, P. Escholtz, A. Rosa and V. Clark (eds.). Boston: Bedford/St. Martin's.
Latour, B. 1987. Science in Action. Cambridge, Massachusetts: Harvard University Press.
Lee, Y. W., Strong, D. M., Kahn, B. K., and Wang, R. Y. 2002. "AIMQ: A Methodology for Information Quality Assessment," Information & Management (40), pp. 133-146.
Locke, J., and Lowe, A. 2002. "Problematising the Construction of Journal Quality: An Engagement with the Mainstream," Accounting Forum (26:1).
MacDonald, S., and Kam, J. 2007. "Ring a Ring O' Roses: Quality Journals and Gamesmanship in Management Studies," Journal of Management Studies (44:4), pp. 640-655.
Madnick, S. E., Wang, R. Y., Lee, Y. W., and Zhu, H. 2009. "Overview and Framework for Data and Information Quality Research," ACM Journal of Data and Information Quality (1:1), pp. 2:1-2:22.
Malone, T. W., Laubacher, R., and Dellarocas, C. 2010. "Harnessing Crowds: Mapping the Genome of Collective Intelligence." Boston: MIT Sloan School of Management.
Motorola University. 2008. "What Is Six Sigma." Retrieved 07/20, 2008.
Myers, M., and Klein, H. K. 2011. "A Set of Principles for Conducting Critical Research in Information Systems," MIS Quarterly (35:1), pp. 17-36.
Petter, S., Straub, D., and Rai, A. 2007. "Specifying Formative Constructs in Information Systems Research," MIS Quarterly (31:4), pp. 623-656.
Pirsig, R. M. 1974. Zen and the Art of Motorcycle Maintenance. New York: Bantam Books.
Poetz, M. K., and Schreier, M. 2011. "The Value of Crowdsourcing: Can Users Really Compete with Professionals in Generating New Product Ideas?," Journal of Product Innovation Management.
Rieh, S. Y. 2002. "Judgment of Information Quality and Cognitive Authority in the Web," Journal of the American Society for Information Science and Technology (53:2), pp. 145-161.
Serenko, A., and Dohan, M. 2011. "Comparing the Expert Survey and Citation Impact Journal Ranking Methods: Example from the Field of Artificial Intelligence," Journal of Informetrics (5), pp. 629-648.
Singh, G., Haddad, K. M., and Chow, C. W. 2007. "Are Articles in 'Top' Management Journals Necessarily of Higher Quality?," Journal of Management Inquiry (16:4), pp. 319-331.
Straub, D. 2008. "Type II Reviewing Errors and the Search for Exciting Papers," MIS Quarterly (32:2), pp. v-x.
Straub, D., and Anderson, C. 2010. "Editor's Comments: Journal Quality and Citations: Common Metrics and Considerations for Their Use," MIS Quarterly (34:1), pp. iii-x.
Straub, D., Boudreau, M.-C., and Gefen, D. 2004. "Validation Guidelines for IS Positivist Research," Communications of the Association for Information Systems (13), pp. 380-427.
Strong, D. M., Lee, Y. W., and Wang, R. Y. 1997. "Data Quality in Context," Communications of the ACM (40:5), pp. 103-110.
Stvilia, B., Gasser, L., Twidale, M. B., and Smith, L. C. 2007. "A Framework for Information Quality Assessment," Journal of the American Society for Information Science and Technology (58:12), pp. 1720-1733.
Surowiecki, J. 2005. The Wisdom of Crowds. New York: Random House.
Taguchi, G. 1992. Taguchi on Robust Technology Development. ASME Press.
Taylor, R. S. 1986. Value-Added Processes in Information Systems. Norwood, N.J.: Ablex Publishing.
Terwiesch, C., and Xu, Y. 2008. "Innovation Contests, Open Innovation and Multi-Agent Problem Solving," Management Science (54:9), pp. 1529-1543.
Truex III, D. P., Cuellar, M. J., Vidgen, R., and Takeda, H. 2011. "Emancipating Scholars: Reconceptualizing Scholarly Output," in: The Seventh International Critical Management Studies Conference. Naples, Italy.
von Hippel, E. 2005. Democratizing Innovation. Cambridge, MA: MIT Press.
Walstrom, K. A., Hardgrave, B. C., and Wilson, R. L. 1995. "Forums for Management Information Systems Scholars," Communications of the ACM (38:3), pp. 93-107.
Wang, R. Y. 1998. "A Product Perspective on Total Data Quality Management," Communications of the ACM (41:2).
Wang, R. Y., and Strong, D. M. 1996. "Beyond Accuracy: What Data Quality Means to Data Consumers," Journal of Management Information Systems (12:4), pp. 5-33.
Wilson, P. 1983. Second-Hand Knowledge: An Inquiry into Cognitive Authority. Westport, CT: Greenwood Press.