Communications of the Association for Information Systems

Research Paper ISSN: 1529-3181
Volume 38, Paper 39, pp. 803-839, May 2016
The Integrated User Satisfaction Model: Assessing
Information Quality and System Quality as Second-
order Constructs in System Administration
Nicole Forsgren
Chef Software
USA
nicolefv@gmail.com
Alexandra Durcikova
Michael F. Price College of Business
University of Oklahoma
USA
Paul F. Clay
School of Business Administration
Fort Lewis College
USA
Xuequn Wang
School of Engineering and Information Technology
Murdoch University
Australia
Abstract:

While many studies have investigated the relationship between information systems (IS) characteristics and IS use, the results have been inconsistent. We argue that this inconsistency may be due to the modeling of information and system quality and the importance of the system usage context. We extend Wixom and Todd’s (2005) integrated model of IS satisfaction by proposing and modeling information and system quality as second-order constructs and by testing the model in the system administration context. Our findings provide support for modeling information and system quality as second-order constructs in the integrated model. Furthermore, our findings support using additional constructs, unique to the context studied, in the integrated model. We contribute to current literature by 1) enhancing the construct validity of information and system quality, which ultimately improves statistical conclusion validity and internal validity for studies that focus on information and system quality; and 2) testing the extended model in the system administration context. Our findings suggest that future research should measure information quality and system quality as second-order constructs and that including context-specific information and system characteristics provides researchers and practitioners with a better understanding of IS characteristics important in system administration.

Keywords: Information Systems (IS) Success, Information Quality, System Quality, Second-order Construct, System Administration.
This manuscript underwent editorial review. It was received 02/25/2013 and was with the authors for 8 months for 3 revisions. The
Associate Editor chose to remain anonymous.
1 Introduction
Researchers have long been interested in information systems (IS) success in various contexts and have sought to better understand how system characteristics influence systems’ quality, success, and use. In evaluating these system characteristics, studies have
generally modeled system characteristics as first-order constructs that, in turn, influence first-order quality
constructs (e.g., Nelson, Todd, & Wixom, 2005; Wixom & Todd, 2005). Such models may be appropriate
for relatively simplistic constructs; however, the theoretical definition of more complex constructs suggests
that one should employ different modeling approaches. Due to advances in path modeling techniques
(Wetzels, Odekerken-Schröder, & Van Oppen, 2009) and clearer model specification guidelines
(MacKenzie, Podsakoff, & Podsakoff, 2011), it is appropriate to rethink model specifications for
information quality and system quality as second-order constructs.
When evaluating user perceptions of information technology (e.g., DeLone & McLean, 1992), researchers
have often used theories of technology acceptance (e.g., Davis, 1989; Venkatesh, 2000) and user
satisfaction (e.g., Bailey & Pearson, 1983; Schneiderman, 1997). To leverage the strengths of these
research streams, Wixom and Todd (2005) developed an integrated user satisfaction model (hereafter WT
model). Their model provides a deeper understanding of the technology characteristics that comprise
information quality and system quality and how these characteristics lead to information and system
satisfaction and subsequent technology acceptance. However, despite the theoretical definitions
employed in WT and similar research streams, researchers have treated information quality and system
quality as relatively simplistic constructs. We believe that researchers should periodically reevaluate their
research streams’ state of affairs and assess whether the field has adopted emergent best practices or
has persisted in reapplying existing methodologies and existing measurement/modeling techniques.
Because of the integral position of information quality and system quality in the WT model, we use the WT
model to illustrate the use of modeling techniques that are more consistent with the complex theoretical
definitions of information quality and system quality.
In addition to investigating the modeling of quality constructs as second-order constructs, we conduct our
study in a new domain. Wixom and Todd (2005) originally tested the WT model in the context of data
warehousing systems, and we need to test the model in other contexts for two reasons. First, researchers
should “investigate whether there is a core set of system characteristics that apply broadly across a wide
range of systems” (Wixom & Todd, 2005, p. 100). While researchers have tested the WT model across a
variety of contexts, in some contexts, the measurement model does not demonstrate clear construct
validity1, which leads to questions regarding the existence of contextual boundary conditions in the core
WT model. A boundary condition is a condition or context in which the expected pattern of relationships is
invalid. For example, when one tests an established model in a new context, the expected relationships
among constructs may no longer hold or the measurement model may not demonstrate convergent and
discriminant validity; in such a case, the new context may be a boundary condition for the model. Second,
studies should identify and investigate any additional system characteristics specific to alternative
technology and contexts (Whetten, 1989). We answer these calls by conducting our study in a new
context: system administration.
To summarize, our study makes two main contributions to the IS field. First, by modeling information
quality and system quality more accurately as second-order constructs, we enhance the construct validity
of information and system quality, which ultimately improves statistical conclusion validity and internal
validity for studies with information quality and system quality (MacKenzie, 2003). Second, by testing the
WT model in the context of system administration, we help develop a deeper understanding of the
characteristics of information quality and system quality relevant for system admins and identify potential
system characteristics that future studies should include in the core set of system characteristics. Together, these two contributions demonstrate that, by updating our modeling techniques to model and evaluate information quality and system quality as second-order constructs, we can resolve the apparent boundary condition of the core WT information quality and system quality characteristics that appears in some contexts.
1 Based on data collected from experienced system administrators, when one models information quality and system quality as first-order constructs (consistent with WT), information quality and system quality failed to discriminate from each other. Additionally, 22 percent of the items used to measure information quality and system quality in WT were not significantly related to these constructs, which suggests the existence of a potential boundary condition in the core WT system characteristics. For more detail, see Section 7.1.
This paper proceeds as follows: in Section 2, we discuss the WT model in the context of system
administration. In Section 3, we review previous literature and describe in detail the underlying rationale to
measure information and system quality as second-order constructs. In Section 4, we discuss the context
of the empirical study and propose additional relevant system characteristics. In Section 5, we present the
research methodology used to examine the model and, in Section 6, the results. In Section 7, we
conclude the paper by discussing our results and their implications.
2 Integrated User Satisfaction Model
Researchers have proposed several models of information systems success that most often build on
DeLone and McLean’s (1992) model of information systems user satisfaction or the technology
acceptance model (Davis, 1989). To leverage the strength of both the technology acceptance and user
satisfaction theories, Wixom and Todd (2005) developed a model that links system and information
satisfaction constructs from user satisfaction literature (DeLone & McLean, 1992) with the behavioral
predictors found in the technology acceptance literature (Davis, 1989) (see Figure 1). DeLone and McLean (1992) argue that the object-based attitudes and beliefs expressed in system quality, information quality, system satisfaction, and information satisfaction, which are central to the user satisfaction literature, affect the behavioral beliefs that are central to the technology acceptance literature, which ease of use and perceived usefulness capture (Davis, 1989).
The WT integrated model suggests that external variables (e.g., IS characteristics) influence object-based
beliefs, which are operationalized as information quality and system quality. In turn, information quality
and system quality affect object-based attitudes of information satisfaction and system satisfaction. As
object-based attitudes, information and system satisfaction influence one’s behavioral beliefs of perceived
usefulness and ease of use, respectively. Behavioral beliefs then affect one’s attitudes toward using the
system. Finally, one’s attitudes influence behavior (i.e., using or not using a system). We describe these
relationships below.
Based on the theory of reasoned action (TRA) (Ajzen & Fishbein, 1980; Ajzen, Fishbein, & Zanna, 2005),
the WT model posits that behavioral beliefs affect behavioral attitude. Ease of use reflects a user’s belief
about the effortlessness of using a particular system, which contributes to their beliefs about a system’s
usefulness (Davis, 1989). Ease of use is a behavioral belief that influences a user’s behavioral attitude about using a particular system (Davis, 1989; Wixom & Todd, 2005). Perceived usefulness reflects a user’s behavioral beliefs about the impact of a particular system on their job performance, which impacts their behavioral attitude about using a particular system (Davis, 1989; Wixom & Todd, 2005). This behavioral attitude, together with perceived usefulness, affects behavioral intention (Davis, 1989; Venkatesh, Morris, Davis, & Davis, 2003).
Figure 1. Wixom and Todd’s (2005) Integrated Model with IQ and SQ as First-order Constructs
As the user satisfaction literature explains (e.g., Baroudi & Orlikowski, 1988; DeLone & McLean, 1992;
Doll & Torkzadeh, 1988; Ives, Olson, & Baroudi, 1983), information quality and system quality are users’ perceptions of a system and its information. Consistent with previous literature on user satisfaction, the
model proposes that information and system quality affect information and system satisfaction,
respectively. Wixom and Todd (2005) conceptualize information and system quality as object-based
beliefs and information and system satisfaction as object-based attitudes. This conceptualization is
consistent with attitude-behavior theories, which state that object-based beliefs affect the object-based
attitudes users have toward that system (Ajzen et al., 2005). Furthermore, object-based attitudes are
antecedents to behavioral beliefs. Although user attitude theories support these behavioral and object-
based beliefs, we believe that one can strengthen the model by carefully examining information quality
and system quality and their modeling as second-order constructs. We describe our reasoning in Section
3.
3 Information Quality and System Quality as Second-order Constructs
To better understand the modeling of information quality and system quality as latent constructs, we
conducted a thorough literature search on relevant papers published since 2000. Specifically, we
searched the EBSCO database using the key words “information quality”, “system quality”, and
“information systems success model”. This search resulted in 19 papers, which we present in Appendix A.
Three observations about information quality and system quality emerge from reviewing these 19 studies:
1) their multidimensionality, 2) the context dependence of their characteristics, and 3) the need for
second-order modeling. We discuss each observation in Sections 3.1 to 3.3.
3.1 Multidimensionality of Information Quality and System Quality
While some studies using information quality and system quality have modeled them as first-order
constructs with “global” indicators, which are influenced by a parsimonious set of system characteristics,
the majority of studies presented in Appendix A recognize the multidimensionality of both information
quality and system quality. The number of sub-dimensions for each construct ranges anywhere from two
to nine. The need for sub-dimensions arises because information quality “reflects perceptions of the
system itself and the way it delivers information” (Wixom & Todd, 2005, p. 91) and because system quality
“represent(s) user perceptions of interaction with the system over time” (Nelson et al., 2005, p. 205).
Furthermore, those system characteristics are defining characteristics for information and system quality,
and a change in just one system characteristic will likely affect information or system quality. For example,
enhancing a system’s reliability can improve system quality. On the other hand, an increase in system
reliability is not necessarily associated with an increase in system flexibility, and an increase in system
quality may not produce increases in all of the other system characteristics. While we can view each
characteristic as a separate construct, these characteristics are all essential parts of information quality or
system quality constructs at a more abstract level and represent their sub-dimensions.
3.2 Context-dependence of Characteristics
The characteristics of information quality and system quality are context dependent. That is, various
contexts may have a different set of information and system characteristics relevant to information and
system quality. In addition to generic characteristics, such as accuracy and timeliness for information
quality and reliability for system quality, new characteristics emerge. Information quality characteristics are
extended by understandability and conciseness for ERP systems (Gable et al., 2008), scope for mobile
data services (Lee, Shin, & Lee, 2009), and value-added and richness for virtual communities (Zheng,
Zhao, & Stylianou, 2013). System quality characteristics are extended by flexibility, sophistication,
integration, and customization for ERP systems (Gable, Sedera, & Chan, 2008); integration for data
warehouses (Wixom & Watson, 2001); and appearance, security, and interactivity for virtual communities
(Zheng et al., 2013). Therefore, there is a demonstrated need to continuously develop corresponding
measures of information and system quality in additional contexts. Since we did not find any study that
develops measures of information and system quality in the context of system administration, our study
can make valuable contributions to the existing literature in this context and to related contexts, such as
those of other technical professionals, those who work in high risk or complex work environments, or
those whose work is characterized by interruptions (see Section 4 below).
3.3 Need for Second-order Modeling
Based on our discussion above, we can conclude that researchers have often not defined latent constructs in a unidimensional way (MacKenzie et al., 2011); in these cases, the level of abstraction used to define the construct should guide whether one views it as unidimensional or multidimensional and whether one can better measure it as a first-order or a second-order construct (Jarvis, MacKenzie, & Podsakoff, 2003).
While we agree with the importance of each of these information and system quality characteristics, we
can improve the model specification that uses first-order constructs that are influenced by these
characteristics. According to Jarvis et al. (2003), one may specify the conceptual definitions of constructs
at a more abstract level, which includes multiple first-order constructs. Because researchers define
information quality and system quality at a more abstract level, it may be appropriate to measure them as
second-order constructs, which include first-order constructs that cover their essential aspects, such as
reliability and flexibility (Jarvis et al., 2003). One aim of measurement is to ensure that 1) all important
aspects of the conceptual definition are included, 2) the items are not contaminated by including irrelevant
items, and 3) the indicators are properly worded (e.g., unambiguous and specific) (MacKenzie, 2003).
Thus, using “global” indicators may not be appropriate because global indicators may make the question
cognitively complex, and there is no way to tell precisely how participants combine different dimensions
and generate their response to the question. For example, consider system quality, which one usually
measures as a first-order construct with the following three indicators: 1) in terms of system quality, I
would rate <the system> highly, 2) overall, <the system> is of high quality, and 3) overall, I would give the
quality of <the system> a high rating (Nelson et al., 2005; Wixom & Todd, 2005). These indicators can be
ambiguous in their definition and construction of system quality. Specifically, these items rely on an
implicit definition of quality rather than a specific one, and there is no way to know how participants come
to their responses. Thus, these first-order quality indicators may not accurately measure the relevant
characteristics that comprise information and system quality, which undermines the construct validity of
information and system quality and, in turn, weakens the statistical conclusion validity and internal validity
of studies that use these constructs.
Therefore, it may be more appropriate to remove the “overall” quality measures used to capture information quality and system quality as first-order constructs and instead measure these two constructs
as second-order constructs. First, the indicators for each information or system characteristic first-order
construct can be specific and unambiguous. Second, these information or system characteristic first-order
constructs can collectively capture all essential aspects of the conceptual domain of information quality or
system quality. Note that we do not mean to suggest that information and system quality should never be
measured as first-order constructs. Rather, we argue that developing a measurement to capture
information quality and system quality as first-order constructs can be quite challenging. Also note that we
call attention to this issue not to criticize the original research but to emphasize the challenge of
developing measures and to propose an alternative method to measure information quality and system
quality.
Using second-order constructs for information and system quality is even more desirable because of
these measures’ multidimensional nature. For example, prior research (e.g., Wixom & Todd, 2005) has
suggested system quality is influenced by four system characteristics (reliability, flexibility, integration, and
timeliness), which represent relatively unique aspects of system quality. Therefore, one may more
appropriately model information and system quality as second-order formative constructs (MacKenzie et
al., 2011).
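To make the formative logic concrete, the following sketch (in Python, with purely illustrative names and weights, not estimates from any study) shows a second-order quality composite defined by its first-order characteristics: improving one characteristic shifts the composite without implying any change in the others.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 230  # illustrative sample size, matching this study's

# Hypothetical standardized latent scores for four first-order system
# characteristics (names follow Wixom & Todd's core set).
reliability, flexibility, integration, timeliness = rng.standard_normal((4, n))
characteristics = np.column_stack([reliability, flexibility, integration, timeliness])

# Formative second-order construct: system quality is a weighted composite
# of its characteristics (arrows point from first-order to second-order).
# The weights below are illustrative only.
weights = np.array([0.40, 0.18, 0.15, 0.27])
system_quality = characteristics @ weights

# Improving reliability alone raises the composite while leaving the other
# characteristics untouched, the pattern the reliability/flexibility
# example in the text describes.
improved = characteristics.copy()
improved[:, 0] += 0.5
assert (improved @ weights).mean() > system_quality.mean()
```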
Previous research has shown the validity of modeling quality constructs at the second-order. For example,
researchers have modeled service quality as a second-order construct while examining research
questions concerned with the hotel industry (Sánchez-Hernández, Martinez-Tur, Peiró, & Ramos, 2009),
airline industry (Koufteros, Babbar, & Kaighobadi, 2009), and IS service (Kettinger, Park, & Smith, 2009).
Agarwal, Malhotra, and Bolton (2010) empirically tested and demonstrated that the second-order service
quality model was statistically superior to other, first-order service quality models used in prior literature.
Loiacono, Chen, and Goodhue (2002) used a second-order model for testing previous work with
WebQual, and confirmatory factor analysis showed that, as a second-order construct, five first-order
constructs (usefulness, response time, trust, ease of use, and entertainment) influenced WebQual
perceptions (see also Kim & Stoel, 2004). In another study, McKinney et al. (2002) decomposed website
quality into system quality and information quality. They designed and tested second-order constructs for
measuring Web-customer expectations, disconfirmation, and perceived performance regarding
information quality and system quality (McKinney, Yoon, & Zahedi, 2002). Previous studies have also
measured information quality and system quality as second-order constructs (e.g., Setia, Venkatesh, & Joglekar, 2013; Zheng et al., 2013). These studies suggest that quality in general, and information and system quality in particular, are quite complex and that modeling these constructs as second-order constructs may be more appropriate than modeling them as first-order constructs.
While previous studies have recognized the multidimensional nature of information quality and system
quality, we identified only two studies using second-order formative constructs for information or system
quality (see Setia et al., 2013; Zheng et al., 2013). However, these studies use relatively few information
and system quality characteristics (no more than three) and study a much simpler model while focusing
on DeLone and McLean’s (1992) original model of information systems success. Our review suggests that
IS research would be wise to heed methodological advances and research findings about measuring
information quality and system quality, and it may become necessary to consider including additional
characteristics specific to the context of interest (Galletta & Lederer, 1989).
4 The WT Model in the Context of System Administration
4.1 Context of Interest: System Administrators
In this section, we discuss the specific context for this study and identify relevant information and system
characteristics beyond those found in the extant literature. System admins are information technology
professionals who execute the system administration tasks for their organization even though their job
titles may not designate them as such (Sage, 2008). System admins’ responsibilities include configuring,
maintaining, troubleshooting, and backing up systems, networks, databases, and servers (Sage, 2008).
Their work provides technical services to their organizations and requires significant knowledge of
operating systems, hardware components, applications, databases, networking, and the complex
interrelationships among system components (Bailey, Kandogan, Haber, & Maglio, 2007; Barrett, Chen, &
Maglio, 2003; Barrett et al., 2004; Haber & Kandogan, 2007; Patterson et al., 2002; Sage, 2008). A single
system admin can support anywhere from a single end user to over 16,000 end users, though most
support approximately 45 (Sage, 2008). With the growing complexity of computing infrastructures (Bailey
et al., 2007; Barrett et al., 2004; Patterson et al., 2002), efficiently maintaining and managing these
systems has grown in importance (Horn, 2001). Additionally, the high costs associated with this complex
work have attracted many companies’ attention (HP, 2007; Sun Microsystems, 2006; Horn, 2001). While
the costs of actual hardware components continues to fall, the cost of administering these systems has
increased and surpassed hardware and software costs (Kephart & Chess, 2003; Patterson et al., 2002;
Horn, 2001). In summary, system admins are subject matter experts whose work is highly complex and
expensive.
In this context, companies provide system admins with tools to support their work and help them be more
effective and efficient (e.g., Horn 2001; HP, 2007; Khalifa & Davison, 2006; Lenchner et al., 2009; Sun
Microsystems, 2006). Because system admins deal mostly with software and hardware, many of their
tools are computerized, and such tools can include various information systems and software programs
(Haber & Kandogan, 2007; Velasquez & Weisband, 2008). Organizations can develop these tools in-
house or buy them from vendors or third parties (which they can then either customize or not) (Barrett et
al., 2004). When working on a task, system admins have reported selecting and using the tool that
uniquely suits the problem at hand (Velasquez & Weisband, 2008). However, research has not yet
investigated the information and system quality characteristics important in the tools that system admins
use. In Section 5, we present the unique needs of system admins with respect to information and system
quality in the WT model.
4.2 The WT Model in this Context
We extend the WT model in the context of our study, system administration, by referencing recent studies
on system admins and their work practices to gain “a better understanding among researchers, and among many system designers too, about the ‘users’ of computer systems and the settings in which they work” (Bannon, 1991, p. 25). Studies on system administration and system admins’ work practices
suggest that these users may have unique information and system needs (Barrett et al., 2004; Velasquez
& Weisband, 2008). For example, when checking on system use or system performance, a system admin
may use a reporting tool that presents the information in an easy-to-understand graphical format even if
generating these reports takes a few minutes (e.g., formatted reports such as bar charts and pie charts).
Alternatively, when monitoring system status, system admins may prefer a tool with much faster response
times and basic text output. Researchers who have analyzed system admins’ work practices have
identified the following relevant information and system characteristics (Velasquez & Weisband, 2008):
reliability, flexibility, integration, accessibility, credibility, scalability, scriptability, situation awareness,
monitoring, speed, completeness, accuracy, format, currency, logging, and verification. Consistent with
Communications of the Association for Information Systems
809
Volume 38
Paper 39
McKnight (2005), we have expanded credibility to include three dimensions: functionality, predictability,
and helpfulness. We have also operationalized situation awareness to account for current states (situation
awareness) and future states (mental model) (Hrebec & Stiber, 2001). Table 1 provides definitions of
these characteristics, which we modeled as first-order constructs in our study (see Figure 2).
Table 1. System and Information Characteristics Important to System Admins (Adapted from Velasquez & Weisband, 2008)

Information characteristics:
Completeness (1): The degree to which all necessary information is provided.
Accuracy (1): The degree to which the information is correct.
Format (1): The degree to which the information is presented well.
Logging (2): The degree to which the information reflects previous actions taken.
Verification (2): The degree to which the information echoes or repeats the outcomes of previous actions.

System characteristics:
Reliability (1): The dependability of system operation.
Flexibility (1): The way the system adapts to changing demands of the user.
Integration (1): The degree to which the system allows data to be integrated from various sources.
Accessibility (1): The ease with which information can be accessed or extracted from the system.
Credibility, functionality (2): The degree to which the functionality of the system meets the needs of the user.
Credibility, predictability (2): The degree to which the system operates in a predictable way.
Credibility, helpfulness (2): The degree to which the system’s embedded Help function is helpful.
Scalability (3): The ability of a system to scale to large or complex computing environments.
Scriptability (3): The degree to which the system provides the ability to script add-ons or automate tasks.
Situation awareness (2): The degree to which the system provides information about the overall state of the system that can be used to guide current actions.
Mental model (2): The degree to which the system provides information that can be used to anticipate future events and formulate plans.
Monitoring (3): The degree to which the system provides the ability to monitor for certain events or conditions.
Speed (3): The degree to which the system executes commands, including the speed of startup/initiation.
Timeliness (4): The degree to which the system provides timely responses to requests for information or action.

(1) Characteristics significant in Wixom and Todd (2005).
(2) New characteristics identified by Velasquez and Weisband (2008), expanded to include three dimensions of trust (functionality, predictability, and helpfulness) (McKnight, 2005) and both current/future dimensions of system state (Hrebec & Stiber, 2001).
(3) New characteristics identified by Velasquez and Weisband (2008).
(4) Characteristic not significant in Wixom and Todd (2005) but included in Velasquez and Weisband (2008).
In summary, our model (see Figure 2) tests the WT model with information and system quality re-specified
as second-order constructs in the context of system admins. This model expands the core set of system
and information quality characteristics to include those that prior research analyzing system admins’ work
practice has suggested (Velasquez & Weisband, 2008). We exclude service quality (e.g., Chen, 2010;
Negash, Ryan, & Igbaria, 2003) from the model because system admins provide rather than consume a
service. Testing this model will provide further empirical evidence on which characteristics one should
include in the core set of system characteristics and can also inform practitioners regarding which features
they should implement.
Figure 2. The WT Model in the Context of System Admins with IQ and SQ as Second-order Constructs
5 Method
5.1 Measurements
We use the Wixom and Todd (2005) instrument to measure the constructs adapted from the WT model
and adapt measures from McKnight (2005) for credibility items. We developed items to measure new
constructs (i.e., logging, verification, scalability, scriptability, situation awareness, mental model, and
monitoring) following Churchill’s (1979) methodology. We created items based on construct definitions
and components identified in the literature (e.g., Endsley, 1995; Fogg & Tseng, 1999). Next, we used a
sorting task to determine face and discriminant validity. We wrote each measurement item on a 3x5 note
card and shuffled all cards. We asked three professional system admins who did not participate in the pilot
or final survey to sort the cards into logical groups and name each group. Each system admin sorted the
items into groups and specified similar identifying terms. We measured each item on the final survey on a
five-point Likert scale (1 = strongly disagree; 5 = strongly agree). Based on the feedback from three
professional system admins, we slightly modified the wording of some items. Before implementing the survey, we pilot tested it with 24 professional system admins at a professional system administrator conference. We invited participants to email the researcher with any suggestions or modifications not
included in their survey responses.
Although organizations may play a role in selecting or purchasing tools, system admins primarily use a
self-selected suite of tools to perform their work, and system admins in the same organization and even
on the same team use different tools to perform the same tasks. Given this variability, we faced difficulty in
gathering survey responses from hundreds of system admins on one particular tool. As such, we asked
each participant to identify the tool they used most often in their jobs and complete the survey with that
one particular tool in mind. Appendix B lists the final instrument of this study.
5.2 Sampling and Data Collection
We used a Web-based survey to collect data. The population of interest was system admins of all
specialties (e.g., network, operating system, web, etc.). We posted an invitation to participate in this study
on professional system administrator association message boards. We also invited participants to refer
fellow system admins to the study. All surveys were confidential (we collected no identifying information)
and all questions were optional.
After removing incomplete responses, we had 230 fully completed surveys. A total of 517 subjects viewed
the survey and 237 completed it for a 45.8 percent completion rate. More than half of the survey
participants worked for for-profit organizations (53.5%), including those in the manufacturing, high tech,
and finance industries. The next largest number of respondents worked in academic settings and in not-
for-profit organizations or government agencies (both 16.1%). Of the survey respondents, 94.8 percent
were male and 4.3 percent were female. The age of respondents ranged from 20 to 62 with an average
age of 38.1. Participants reported working at their current organization for an average of five years
(ranging from less than one month to 35 years) and reported working as a system admin for an average of
13.8 years (ranging from one year to 32 years). Our participants’ demographics were similar to those
found in the Salary Survey for 2007 (Sage, 2008), considered the most recent and comprehensive survey
of system admin personal and work demographics. These similarities suggest the survey sample
represents the system admin population. Table 2 shows the demographics that were available for direct
comparison. In addition, we compared early and late respondents and found no significant differences,
which indicates that nonresponse bias was likely not an issue in our study.
Table 2. Study vs. 2007 SAGE Salary Survey (Sage, 2008)

Measure | Study statistics | SAGE statistics
Years’ experience (mean) | 13.83 | 9.74
Years’ experience (std. dev.) | 7.2 | 6.3
Age (mean) | 38.1 | 34 (1)
Male respondents | 94.8% | 86.6%
Female respondents | 4.3% | 13.4%
Earned undergraduate degree | 67.4% | 59.1%

(1) Approximated from available data.
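The early-versus-late respondent comparison reported above is commonly operationalized as a set of independent-samples t-tests; a minimal sketch, assuming responses sit in a CSV with a submission timestamp (the file name and all column names are hypothetical):

```python
import pandas as pd
from scipy import stats

# Hypothetical data: one row per respondent with a submission timestamp
# plus the survey variables of interest.
df = pd.read_csv("survey_responses.csv", parse_dates=["submitted_at"])
df = df.sort_values("submitted_at")

# A common operationalization: compare the earliest and latest quartiles.
q = len(df) // 4
early, late = df.iloc[:q], df.iloc[-q:]

for var in ["years_experience", "age", "intention_1"]:  # illustrative columns
    t, p = stats.ttest_ind(early[var].dropna(), late[var].dropna())
    print(f"{var}: t = {t:.2f}, p = {p:.3f}")  # p > .05: no evidence of nonresponse bias
```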
To begin our analysis, we examined whether common method bias was a concern in our study. Following
Podsakoff, MacKenzie, Lee, and Podsakoff (2003) and Lindell and Whitney (2001), we performed two
common method variance (CMV) tests. First, an exploratory factor analysis of all items extracted 17
factors explaining 78 percent of the variance with no single factor accounting for significant loading (at the
p < 0.10 level) for all items. However, because Harman’s single-factor test is known to insufficiently detect
moderate to small levels of CMV (Malhotra, Kim, & Patil, 2006), we also used the marker-variable
technique (Lindell & Whitney, 2001), which offers two alternative methods for assessing CMV. In the first
alternative, one can identify and incorporate a marker variable that is theoretically unrelated to at least one
variable in the study into the instrument. The correlation between the marker variable and the dependent
variable is a reliable estimate of CMV. In the second alternative, the second-smallest positive correlation
among the manifest variables provides a conservative estimate for CMV. Given our study design, we
employed the second method. After adjusting for the second smallest positive correlation, all significant
correlations remained significant. Based on the results of these two tests, we can conclude that CMV was
not a concern for our data set.
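Lindell and Whitney’s (2001) adjustment partials a CMV estimate r_s out of every observed correlation, r_adj = (r - r_s) / (1 - r_s), and then retests significance; a minimal sketch, assuming the manifest-variable correlation matrix is already at hand:

```python
import numpy as np
from scipy import stats

def cmv_adjusted(r: np.ndarray, n: int):
    """Lindell and Whitney's (2001) marker-variable adjustment, using the
    second-smallest positive correlation among the manifest variables as
    the conservative CMV estimate (the option this study employs)."""
    off_diag = r[np.triu_indices_from(r, k=1)]
    r_s = np.sort(off_diag[off_diag > 0])[1]       # second-smallest positive r
    r_adj = (r - r_s) / (1 - r_s)                  # partial out the CMV estimate
    with np.errstate(divide="ignore", invalid="ignore"):
        t = r_adj * np.sqrt((n - 3) / (1 - r_adj ** 2))  # diagonal is not meaningful
    p = 2 * stats.t.sf(np.abs(t), df=n - 3)        # two-tailed test, n - 3 df
    return r_adj, p
```

After adjustment, one checks whether the previously significant correlations remain significant, as the paper reports they did here.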
6 Analysis and Results
Following previous literature (Chin, 1998, 2010; Gefen, Straub, & Rigdon, 2011; Hulland, 1999), we tested
the proposed research model using partial least squares (PLS), a structural equation modeling method
suitable for complex predictive models and theory building (Barclay, Higgins, & Thompson, 1995; Chin,
1998; Lohmoller, 1989). We used SmartPLS version 2.0 (Ringle, Wende, & Will, 2005) for the analysis
and the bootstrap re-sampling method (using 1,000 samples) to determine the significance of the paths in
the structural model. PLS is a preferred analytical technique for several reasons: first, PLS is quite useful
to support an incremental study. As Chin (2010) suggests, PLS is appropriate where a researcher “builds
on a prior model by developing both new measures and structural paths” (p. 660), which is what our study
presents. By using PLS, researchers can “constrain the new construct and measures to its immediate
nomological neighborhood of constructs and avoid possible covariance-based structural equation
modeling (CBSEM) estimation bias that can be affected by minor modeling or item selection errors” (p.
660). Second, our study models IQ and SQ as second-order formative constructs. As Chin (2010) and
Gefen et al. (2011) suggest, one can easily implement formative measurement with PLS but not so easily
with CBSEM. Further, in CBSEM, researchers are limited to only second-order reflective constructs, while
PLS can easily model both second-order reflective and formative constructs (Chin, 2010; Wetzels et al.,
2009). Third, PLS is appropriate for complex models. As the number of items increases and the model
becomes more complex, researchers will likely obtain a poor model fit due to the model’s complexity
(Chin, 2010). Our study has a complex model with 82 items, and PLS is well suited to analyze our model.
Fourth, PLS works well with small-to-medium sized samples (Chin, 2010). Fifth, PLS is better suited to
predictive models than CBSEM, which focuses instead on model fit (Chin, 2010), and the model we tested
predicts IS intention. Finally, this study is an extension and comparison to a study that used PLS analysis
(Wixom & Todd, 2005), and easily interpretable comparisons to the original study suggest using PLS.
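To make the resampling step concrete, the sketch below (Python, illustrative only) bootstraps t-statistics for path estimates; an OLS coefficient stands in for a PLS path estimate, whereas SmartPLS re-estimates the entire model on each of its 1,000 resamples:

```python
import numpy as np

def bootstrap_paths(X: np.ndarray, y: np.ndarray, n_boot: int = 1000, seed: int = 0):
    """Bootstrap t-statistics for path estimates (1,000 resamples, as in the
    paper). Paths are estimated by OLS here purely for illustration."""
    rng = np.random.default_rng(seed)
    n = len(y)
    estimate = np.linalg.lstsq(X, y, rcond=None)[0]
    boots = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample respondents with replacement
        boots[b] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    t_stats = estimate / boots.std(axis=0, ddof=1)  # estimate / bootstrap std. error
    return estimate, t_stats                         # |t| > 1.96 is roughly p < .05
```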
A rule-of-thumb for PLS sample size is ten times the largest structural equation or the largest measurement equation (Barclay et al., 1995; Gefen, Straub, & Boudreau, 2000). In our case, the largest
structural equation was the system quality construct with 14 paths in the measurement model; our sample
size of 230 exceeded the minimum suggested sample size of 140.
Following recommended methods, we analyzed the data in two stages (Gefen & Straub, 2005). In the first
stage, we assessed the measurement model, and, in the second stage, we assessed the structural model
(Hulland, 1999). In assessing the measurement model, we first specified a null model with all first-order
latent variables (including first-order IQ and SQ) (Wetzels et al., 2009). We established convergent validity
by satisfying three criteria (Gefen & Straub, 2005; Hulland, 1999). First, each item loaded significantly on
its respective construct, and no item loaded on its construct below the cutoff value of 0.60 (see Appendix B, Table B2). Second, composite reliability (CR) for each construct was over 0.70 (Appendix B) (Chin, Marcolin, & Newsted, 2003). Finally, the average variance extracted (AVE) of each construct exceeded
the threshold value of 0.50 (Appendix B). Therefore, the measurement model demonstrated good
convergent validity. Discriminant validity was confirmed because the correlations between constructs were
below 0.85 (Brown, 2006) and, for each construct, the square root of AVE exceeded all correlations
between that factor and any other construct (Gefen & Straub, 2005). The measurement model
demonstrated good discriminant validity except for first-order information quality. The correlation between
information quality and information satisfaction was 0.85 (compared to 0.77 in Wixom and Todd (2005))
(Appendix C), which is too high (Brown 2006) and indicates that information quality measured as a first-
order construct may not sufficiently discriminate from related constructs. We discuss the implications of
this issue in Section 7.1.
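These convergent and discriminant validity checks reduce to simple formulas over standardized loadings and construct correlations; a minimal sketch (with illustrative loading values, not values from this study):

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum(lambda))^2 / [(sum(lambda))^2 + sum(1 - lambda^2)],
    assuming standardized loadings; the paper's threshold is 0.70."""
    s = loadings.sum()
    return float(s ** 2 / (s ** 2 + (1 - loadings ** 2).sum()))

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted = mean(lambda^2); threshold 0.50."""
    return float((loadings ** 2).mean())

def discriminant_ok(ave_values: np.ndarray, corr: np.ndarray) -> bool:
    """Fornell-Larcker check: sqrt(AVE) of each construct must exceed its
    correlations with all other constructs, and no inter-construct
    correlation should reach 0.85 (Brown, 2006)."""
    off = corr - np.diag(np.diag(corr))  # zero the diagonal
    return bool(np.all(np.sqrt(ave_values) > np.abs(off).max(axis=1))
                and np.abs(off).max() < 0.85)

# Illustrative loadings only:
lam = np.array([0.82, 0.78, 0.85])
print(composite_reliability(lam), ave(lam))
```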
We tested the structural model in four related steps. In Section 6.1 (analysis A), we present the results of the original WT model, which includes only the original WT constructs, with information quality and system quality as first-order constructs. In Section 6.2 (analysis B), we test this model (with the original WT constructs only) with information quality and system quality as second-order constructs. In Section 6.3 (analysis C), we expand analysis B by including system admin-specific information and system quality characteristics while keeping information and system quality as second-order constructs. Finally, in Section 6.4 (analysis D), we test the expanded WT model with information quality and system quality as first-order constructs.
6.1 Analysis A: WT Model (WT Constructs Only) with Information Quality and
System Quality as First-order Constructs
As a baseline, we first replicated the WT model in the system administrator context with information
quality and system quality as first-order constructs (see Figure 3). The measurement model had good
convergent validity but not good discriminant validity due to a high correlation between information quality
and information satisfaction (see Table C1). For the structural model, we examined the paths between
information characteristics and information quality and between system characteristics and system
quality. As Table 3 shows, for information characteristics, currency was not significantly related to
information quality; for system characteristics, integration was not significantly associated with system
quality. In other words, these results suggest that currency is not meaningfully related to information
quality and that integration is not meaningfully related to system quality in the context of system admins.
These results are not consistent with Wixom and Todd (2005), who found that all four information
characteristics were significantly related to information quality, that integration was significantly related to
system quality, and that timeliness was not related to system quality.
*** p < .001, ** p < .01
Figure 3. Structural Model for Analysis A
Table 3. Assessing the WT Structural Model for Information Quality and System Quality as First-order Constructs (Analysis A)

Path weights to information quality:
Accuracy: 0.397***
Completeness: 0.202***
Currency: 0.098
Format: 0.265***

Path weights to system quality:
Accessibility: 0.201*
Flexibility: 0.181**
Integration: -0.0004
Reliability: 0.404***
Timeliness: 0.219***

Note: * p < 0.05, ** p < 0.01, *** p < 0.001
Next, we analyzed the full structural model. Similar to linear regression, PLS examines the significance of
construct relationships and provides R2 measures (Gefen et al., 2000), which represent the amount of
variance explained by the independent variables. One can use path coefficients to test hypotheses and
indicate the strength and significance of relationships between constructs. Together, the R2 and the path
coefficients indicate how well the data support the hypothesized model. Figure 3 shows the results of our
analyzing the original WT structural model. Our analysis reveals that all of the paths were significant,
which reaffirms the WT model in the system administrator context. On the other hand, one should
remember that information quality as a first-order construct does not exhibit good discriminant validity in
the system administrator context. Therefore, one should interpret the results in Figure 3 with caution.
We also compared the results from Wixom and Todd (2005) to those from analysis A (see Table 4). While all path weights were significant in both studies, several differed considerably. For example, the path weight between system satisfaction and information satisfaction in Wixom and Todd (2005) was 0.50, which is much larger than that in analysis A (0.17). In contrast, some other paths in analysis A were much larger than those in WT, such as the path weight between information quality and information satisfaction (0.724 in analysis A versus 0.43 in Wixom and Todd (2005)). The R2 values were also quite different. Most importantly, the R2 values for information quality and system quality were much lower in analysis A than in Wixom and Todd (2005). Thus, we can conclude that the original information and system characteristics explain only a small portion of the variance in information quality and system quality in the system admin context and that we need additional information and system characteristics.
Table 4. Results Comparison between Wixom and Todd (2005) and Analysis A

Path weights | Wixom and Todd (2005) | Analysis A
Information quality → information satisfaction | 0.43*** | 0.724***
System quality → system satisfaction | 0.73*** | 0.830***
System satisfaction → information satisfaction | 0.50*** | 0.170***
Information satisfaction → perceived usefulness | 0.64*** | 0.525***
System satisfaction → perceived ease of use | 0.81*** | 0.563***
Perceived ease of use → perceived usefulness | 0.25*** | 0.243***
Perceived usefulness → attitude | 0.42*** | 0.574***
Perceived ease of use → attitude | 0.50*** | 0.261***
Perceived usefulness → intention | 0.47*** | 0.408***
Attitude → intention | 0.36*** | 0.336***

R2 | Wixom and Todd (2005) | Analysis A
Information quality | 0.75 | 0.572
System quality | 0.74 | 0.654
Information satisfaction | 0.71 | 0.735
System satisfaction | 0.53 | 0.735
Perceived usefulness | 0.67 | 0.453
Perceived ease of use | 0.65 | 0.317
Attitude | 0.69 | 0.542
Intention | 0.59 | 0.474

Note: ** p < 0.01, *** p < 0.001
Finally, the PLS goodness-of-fit index (GoF) for this model was 0.6813. GoF, which Tenenhaus, Amato, and Esposito (2004) proposed, represents an operational solution for evaluating the fit of the overall model. The GoF measure provides a single index for the overall prediction performance of the model2, and one computes it as the geometric mean of the average communality index and the average R2. The higher the value of GoF, the better the model’s predictive performance.

2 GoF has a different conceptual meaning from fit indices in CBSEM. Recent literature has challenged its usefulness and suggested that GoF cannot separate valid models from invalid ones (Henseler & Sarstedt, 2013). Therefore, the reader should use caution when interpreting PLS GoF statistics.
6.2 Analysis B: WT Model (Original WT Constructs Only) with Information Quality
and System Quality as Second-order Constructs
We built the second-order information quality and system quality constructs by relating them to the blocks
of the underlying first-order constructs (Wetzels et al., 2009). As Chin (2010) suggests for formative
constructs, AVE, composite reliability, and item loadings do not apply. Rather, the interpretation should be
based on the weights, which indicate the relative importance of each indicator. Table 5 shows the
correlations between the constructs. Here, first-order constructs function as formative indicators for the
corresponding second-order construct.
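Purely as an illustration of this hierarchical logic, the two-stage sketch below assigns formative weights to first-order latent scores by regression; the actual estimation follows Wetzels et al.’s (2009) repeated-indicators approach inside PLS, which this stand-in only loosely approximates:

```python
import numpy as np

def second_order_scores(first_order: np.ndarray, adjacent: np.ndarray) -> np.ndarray:
    """Rough two-stage stand-in for a second-order formative construct.

    first_order : (n, k) matrix of first-order latent variable scores
                  (e.g., the quality characteristics).
    adjacent    : (n,) scores of a neighboring construct (e.g., satisfaction)
                  used here to assign formative weights by regression.
    PLS mode-B weighting works iteratively within the whole model; this
    single regression step only approximates that behavior.
    """
    w, *_ = np.linalg.lstsq(first_order, adjacent, rcond=None)
    composite = first_order @ w
    # Standardize, as PLS does with latent variable scores.
    return (composite - composite.mean()) / composite.std(ddof=1)
```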
Table 5. Assessing the WT Structural Model for Information Quality and System Quality as Second-order Constructs (Analysis B)

Path weights to information quality:
Accuracy: 0.302***
Completeness: 0.301***
Currency: 0.339***
Format: 0.358***

Path weights to system quality:
Accessibility: 0.328***
Flexibility: 0.261***
Integration: 0.145***
Reliability: 0.333***
Timeliness: 0.285***

Correlation | Information quality | System quality
Information satisfaction | 0.68 | 0.69
System satisfaction | 0.64 | 0.77

Note: *** p < 0.001
This measurement model had good convergent and discriminant validity. The correlation between information satisfaction and information quality was 0.68. To compare the two correlations from analysis A and analysis B (0.85 and 0.68, respectively), we performed a significance test following Meng, Rosenthal, and Rubin (1992). This test indicated a significant difference between the two values (p < 0.05). As Table 5 also shows, when we modeled information quality and
system quality as second-order constructs, all of the path weights were significant, which indicates that all
of the system characteristics are relevant to information quality and system quality. Therefore, combined
with the theoretical rationale, these results support using second-order constructs to model information
quality and system quality. Figure 4 shows the results of our analyzing the structural model, and all of the
paths were significant. The GoF for analysis B (0.6964) was greater than that for analysis A (0.6813),
which further supports using second-order constructs to model information quality and system quality.
*** p < .001, ** p < .01
Figure 4. Structural Model for Analysis B
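The Meng, Rosenthal, and Rubin (1992) test used above can be reproduced as follows. The test requires the correlation between the two competing information quality scores (r12 below), which the paper does not report, so the value in the usage line is purely an assumption:

```python
import numpy as np
from scipy import stats

def meng_test(r1: float, r2: float, r12: float, n: int) -> tuple[float, float]:
    """Meng, Rosenthal, and Rubin (1992): compare two correlated correlations
    that share one variable (here, each quality score's correlation with
    information satisfaction)."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)        # Fisher z transform
    r_sq = (r1 ** 2 + r2 ** 2) / 2                 # mean squared correlation
    f = min((1 - r12) / (2 * (1 - r_sq)), 1.0)
    h = (1 - f * r_sq) / (1 - r_sq)
    z = (z1 - z2) * np.sqrt((n - 3) / (2 * (1 - r12) * h))
    p = 2 * stats.norm.sf(abs(z))                  # two-tailed p value
    return float(z), float(p)

# 0.85 and 0.68 are the reported correlations; r12 = 0.70 is illustrative only.
print(meng_test(r1=0.85, r2=0.68, r12=0.70, n=230))
```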
6.3 Analysis C: Extended WT Model with Information Quality and System Quality
as Second-order Constructs
In this model, we extended the WT model in which we modeled information quality and system quality as
second-order constructs by including additional system admin-specific constructs. Table 6 displays the
path weights between each second-order construct and its underlying first-order constructs. All of these
weights were significant, which indicates that these additional system characteristics are relevant to
information quality and system quality in the system admin context (see Appendix D for additional
analyses that support including these context-specific characteristics). Figure 5 shows the results of the
structural model, and all paths were significant. The GoF for analysis C (0.7007) was the highest of the
three models tested thus far. This result provides further support for the extended WT model with context-
specific characteristics.
Table 6. Assessing the Extended WT Structural Model for Information Quality and System Quality as Second-order Constructs (Analysis C)

Weights to information quality:
Accuracy: 0.258***
Completeness: 0.268***
Currency: 0.281***
Format: 0.299***
Logging: 0.194***
Verification: 0.187***

Weights to system quality:
Accessibility: 0.124***
Credibility (functionality): 0.160***
Credibility (help): 0.131***
Credibility (predictability): 0.132***
Flexibility: 0.097***
Integration: 0.057***
Mental model: 0.104***
Monitoring: 0.049***
Reliability: 0.123***
Scalability: 0.089***
Scriptability: 0.063***
Situation awareness: 0.129***
Speed: 0.118***
Timeliness: 0.108***

Correlation | Information quality | System quality
Information satisfaction | 0.69 | 0.74
System satisfaction | 0.63 | 0.80

Note: *** p < 0.001
*** p < .001, ** p < .01
Figure 5. Structural Model for Analysis C
6.4 Analysis D: Extended WT Model with Information Quality and System Quality as First-order Constructs

Lastly, we evaluated the extended WT model with information quality and system quality as first-order constructs. As Table 7 shows, neither information quality nor system quality exhibited good discriminant validity when modeled as first-order constructs, and several path weights were not significant.
Figure 6 shows the structural model3. The GoF for this model (0.6837) was lower than that for analysis B
(0.6964) and C (0.7007) and slightly larger than that for analysis A (0.6813), which supports not only
adding system admin-specific constructs but also modeling information quality and system quality as
second-order constructs.
Table 7. Assessing the Extended WT Structural Model for Information Quality and System Quality as First-order Constructs (Analysis D)

Path weights to information quality:
Accuracy: 0.360***
Completeness: 0.150**
Currency: 0.115
Format: 0.273***
Logging: 0.150**
Verification: 0.039

Path weights to system quality:
Accessibility: 0.101
Credibility (functionality): 0.148*
Credibility (help): 0.109*
Credibility (predictability): 0.180**
Flexibility: 0.108*
Integration: -0.072
Mental model: -0.058
Monitoring: -0.042
Reliability: 0.215*
Scalability: 0.123**
Scriptability: 0.017
Situation awareness: 0.117
Speed: 0.072
Timeliness: 0.065

Note: * p < 0.05, ** p < 0.01, *** p < 0.001
3 The results of the structural model in analysis D are exactly the same as those in analysis A: in both analyses, we model information quality and system quality as first-order constructs, so their values do not change after adding the additional system admin-specific constructs. On the other hand, the path weights between the system characteristics and information quality and system quality do change after adding these constructs; Table 7 shows the changed weights.
Figure 6. Structural Model for Analysis D (*** p < .001, ** p < .01)
To summarize, our results show that the WT model using information quality and system quality as first-
order constructs did not exhibit good discriminant validity. The same was true when we added additional
indicators unique to the system admin context. We achieved the best results when testing the extended
WT model that used both information quality and system quality as second-order constructs. This model
had not only the best discriminant validity but also the highest GoF (see Table 8 for model comparison).
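The discriminant validity assessments above rest on comparing each pair of construct correlations against the square root of the constructs' AVEs (the values reported in Appendix C). A minimal sketch of such a check, with hypothetical inputs:

```python
import numpy as np

def fornell_larcker_violations(corr, ave, names):
    """Flag construct pairs whose correlation exceeds the square root
    of either construct's AVE (Fornell-Larcker criterion)."""
    root_ave = np.sqrt(np.asarray(ave))
    flagged = []
    for i in range(len(names)):
        for j in range(i):
            if corr[i][j] > min(root_ave[i], root_ave[j]):
                flagged.append((names[i], names[j], corr[i][j]))
    return flagged

# Hypothetical three-construct example (not the study's data).
names = ["Accuracy", "Currency", "Info quality"]
ave = [0.70, 0.83, 0.92]
corr = [[1.00, 0.86, 0.66],
        [0.86, 1.00, 0.55],
        [0.66, 0.55, 1.00]]
print(fornell_larcker_violations(corr, ave, names))  # flags the Currency-Accuracy pair
```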
Table 8. Results Summary

WT model with original constructs, IQ and SQ as first-order constructs (Analysis A):
- Measurement model: good convergent validity but poor discriminant validity
- Relationship between system characteristics and quality: non-significant paths
- GoF: 0.6813

WT model with original constructs, IQ and SQ as second-order constructs (Analysis B):
- Measurement model: good convergent validity and discriminant validity
- Relationship between system characteristics and quality: all paths significant
- GoF: 0.6964

Extended model with additional constructs, IQ and SQ as first-order constructs (Analysis D):
- Measurement model: good convergent validity but poor discriminant validity
- Relationship between system characteristics and quality: non-significant paths
- GoF: 0.6837

Extended model with additional constructs, IQ and SQ as second-order constructs (Analysis C):
- Measurement model: good convergent validity and discriminant validity
- Relationship between system characteristics and quality: all paths significant
- GoF: 0.7007
7 Discussion
With this work, we contribute to the literature by extending the WT integrated model of information system
satisfaction by proposing and modeling information quality and system quality as second-order constructs
and by testing the model in the system administration context. The literature suggests that measures of
information and system quality as first-order constructs can be problematic, particularly when the level of
abstraction is not consistent with the theoretical definitions used in the research design (e.g., Jarvis et al.,
2003). Furthermore, developing indicators to precisely measure information quality and system quality as
first-order constructs can be challenging (MacKenzie et al., 2011). Therefore, modeling information quality and system quality as second-order constructs consistent with well-established conceptual definitions (cf. Nelson et al., 2005; Wixom & Todd, 2005) is more appropriate (Jarvis et al., 2003; MacKenzie, 2003; MacKenzie et al., 2011). Through comparative modeling of alternatives, we also highlight potential boundary conditions that exist when one models information and system quality as first-order constructs in new contexts. That is, when one tests an established model (such as WT) in a new context (such as system administration), the expected relationships among constructs or the measurement model itself may no longer hold; this new context would represent a boundary condition for the existing model.
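To make this concrete, the sketch below illustrates a simple two-stage scoring approach in which first-order construct scores become the indicators of a second-order construct. The data are hypothetical, and the unit-weighted scoring is an illustrative simplification of, not a substitute for, the PLS estimation used in our analyses:

```python
import pandas as pd

# Hypothetical item-level survey responses; column names follow the
# Appendix B item codes, but the values are invented for the example.
df = pd.DataFrame({
    "ACCU1": [6, 5, 7], "ACCU2": [5, 5, 6], "ACCU3": [6, 6, 7],
    "CURR1": [6, 4, 7], "CURR2": [5, 5, 6], "CURR3": [6, 5, 7],
})

first_order = {
    "Accuracy": ["ACCU1", "ACCU2", "ACCU3"],
    "Currency": ["CURR1", "CURR2", "CURR3"],
}

# Stage 1: score each first-order construct from its items.
scores = pd.DataFrame({construct: df[items].mean(axis=1)
                       for construct, items in first_order.items()})

# Stage 2: the first-order scores serve as indicators of the
# second-order information quality construct.
scores["InformationQuality"] = scores.mean(axis=1)
print(scores)
```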
We also contribute to the literature by evaluating information and system quality constructs used in studies
of information systems success, which we accomplished by examining the core set of system and information characteristics presented in the WT study (Wixom & Todd, 2005) in a new context: system administration. We also evaluated including additional characteristics relevant to the context studied (Velasquez & Weisband, 2008), following Galletta and Lederer's (1989) call for including characteristics specific to IS use in situ. Our analysis suggests that the core set of characteristics is relevant in system administration, provides support for including additional characteristics specific to the context of interest, and, thus, extends the core set to include factors that future studies should consider. When
combined, these two contributions show that updated modeling techniques and careful selection of
appropriate information and system characteristics can resolve the apparent boundary conditions in the
core WT model. We now discuss our findings for the sub-dimensions of information quality and system
quality.
All of the characteristics that previous research has proposed to influence information quality were
significant (Analysis C, Table 6). Previous studies on system admins’ work practices support the
relevance of the core information constructs of accuracy, completeness, format, and currency. In the
context of system administration, system planning, updating, and debugging are often done with only the information supplied by the system; system admins are rarely lucky enough to have a system failure be physically apparent and, thus, must rely on the accuracy of the information supplied to them (Barrett et al., 2004). Prior work has also reported that many system admins prefer to use a command issued through a
command line interface (CLI) tool that queries and returns information in real time rather than having to
refresh a graphical user interface (GUI) screen or require constant information updates, which reinforces
the importance of current information (Barrett et al., 2004; Takayama & Kandogan, 2006). In addition, we
found support for including two new constructs relevant to the context of system administration:
verification and logging. Investigations of system administrators suggest that the ability to access and review previous actions and their results is important to system admins (e.g.,
Barrett et al., 2004; Velasquez & Weisband, 2008). This information allows one to retrace the steps taken
previously and may be relevant in other contexts when tasks are interrupted, such as online shopping
(Farrimond, Knight, & Titov, 2006) and decision making (Speier, Valacich, & Vessey, 1999). Finally,
system admins have reported using tools to specify the amount and format of data output (Velasquez &
Weisband, 2008).
The significance of accuracy, completeness, currency, and format in a variety of studies and contexts
(including data warehousing, university information systems, general computer usage, end user
computing, and, now, system administration) suggests these constructs should remain in the core set of
information quality characteristics (Bailey & Pearson, 1983; Baroudi & Orlikowski, 1988; Doll & Torkzadeh,
1988; Ives et al., 1983; Rai, Lang, & Welker, 2002). While the widespread application and significance of
these four constructs may suggest that information quality has reached a relatively stable
conceptualization across contexts, the significance of logging and verification in this study suggests that
the core characteristics are not all inclusive. Furthermore, our findings suggest that we should
investigate logging and verification in contexts that experience work interruptions and that logging and
verification may be candidates for joining the core set of characteristics.
Similarly, all proposed system quality characteristics were important (analysis C, Table 6). The core
system quality constructs found significant by WT (reliability, flexibility, integration, and accessibility) were
relevant. Timeliness, which was not significant in the WT study, was also relevant. Given that a system's reliability is of the utmost importance (downtime in a large system can cost US$500,000 per hour; Patterson, 2002), the tools used to manage, configure, and monitor those systems need to be just as reliable. Flexibility is important because of the dynamic nature of the work system admins do and the
systems they manage. For example, IT security administrators have identified the ability to change output
styles or user-defined dashboards as important (Botta et al., 2007). The large number of tools used to
gather small pieces of information suggests that integration could be useful to system admins; for
example, system admins may consult system logs, notification systems, and knowledge repositories when
completing a work task. The ability to link various data sources into a single interface would provide a
streamlined information-gathering process. Accessibility to systems and the information they hold is
especially important to system admins, whose work includes documenting the infrastructure and
communicating system details with other stakeholders. Systems that provide easy access to information
enable this aspect of system admins’ work. These results provide further evidence of the applicability of
the core set of system quality characteristics in a new context.
In addition to the core characteristics, context-specific system quality characteristics (credibility, scalability,
scriptability, situation awareness, monitoring, and speed) were important, which supports previous
qualitative research of system admins’ work practices. Researchers have found a system’s credibility to
be an underlying factor in the user interface system admins choose (Takayama & Kandogan, 2006). In our
study, we expanded credibility to include three characteristics: functionality, predictability, and helpfulness.
Previous research supports these characteristics, reporting that system admins select tools that provide functions matching their work tasks and that perform in consistent, predictable ways (Velasquez & Weisband, 2008). The large size of today's computing infrastructures presents a practical need for tools
that can scale to meet the requirements of systems that continually grow and change. Because system
admins often maintain systems that support a dynamic user base with variable data requirements, they
need scalable tools. Other studies have also identified the need for such tools (e.g., Barrett et al., 2004;
Haber & Bailey, 2007) and have flagged tools that do not scale as a limitation (Verdoes, 1997).
Scriptability is a large part of the work of system administration because one does much of the work using
custom scripts and tools. The ability to script routines for oft-repeated or complex tasks is common among
system admins, and the ability to add custom functions to existing tools would greatly enhance the
usefulness of systems (Velasquez & Weisband, 2008). Researchers have identified the ability to maintain
an accurate picture of complex systems as a challenge to system admins for over a decade (Hrebec &
Stiber, 2001). The ability to maintain current, tactical information about the work environment (i.e., situation awareness) and the ability to plan for and anticipate the future environment (i.e., a mental model) are especially relevant when maintaining complex, dynamic infrastructures.
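As a concrete illustration of the scriptability characteristic discussed above, consider a minimal scripted routine that automates an oft-repeated task; the directory and retention period below are hypothetical:

```python
import time
from pathlib import Path

LOG_DIR = Path("/var/log/myapp")   # hypothetical application log directory
ARCHIVE = LOG_DIR / "archive"
MAX_AGE = 14 * 24 * 3600           # archive logs older than 14 days

def archive_old_logs():
    """Move stale log files into an archive subdirectory."""
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    cutoff = time.time() - MAX_AGE
    for log in LOG_DIR.glob("*.log"):
        if log.stat().st_mtime < cutoff:
            log.rename(ARCHIVE / log.name)

if __name__ == "__main__":
    archive_old_logs()
```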
System monitoring is a constant concern for system admins, who must ensure that storage limits and
performance thresholds are not exceeded (Forsgren Velasquez, Kim, Kersten, & Humble, 2014). Without
monitoring capabilities, system admins must check various aspects of system and subsystem state at
regular intervals as part of their system maintenance responsibilities; monitoring capabilities allow system
admins to automate alerts if thresholds are approached. Finally, the speed of start-up and execution is
important in systems that system admins use because of the high-paced environments they work in;
Velasquez and Weisband (2008) report preferences for tools that work quickly. In addition, scholars such
as Bailey and Pearson (1983), Ives et al. (1983), and Doll and Torkzadeh (1988) have included speed as
an aspect of system quality, which indicates its contextual appropriateness in a variety of settings. Speed
also presents an example of why context is important when selecting system quality characteristics:
users of data warehousing systems (the context studied in Wixom and Todd (2005)) query for complex
information analyses that can take minutes to generate, while system admins require an immediate
response from their tools as they configure and set up systems.
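A minimal sketch of the threshold-based alerting that the monitoring characteristic describes (the mount point and threshold are illustrative):

```python
import shutil

DISK_THRESHOLD = 0.90  # alert when a volume reaches 90% of capacity

def check_disk(path="/"):
    """Return the fraction of the volume in use; alert past the threshold."""
    usage = shutil.disk_usage(path)
    fraction_used = usage.used / usage.total
    if fraction_used >= DISK_THRESHOLD:
        # A real monitor would page or notify rather than print.
        print(f"ALERT: {path} is {fraction_used:.0%} full")
    return fraction_used

if __name__ == "__main__":
    check_disk()
```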
Overall, our results confirm that system administrators have specific tool needs that relate to their work
practices and environment. At the macro level, the WT integrated model was supported in the context of
system administration, which suggests that system admins have some tool-use behaviors similar to those
of computer users in previous studies. Findings also support the idea that system and information quality
characteristics should be carefully chosen based on context (Galletta & Lederer, 1989). Our results also
suggest that one may better model information quality and system quality as second-order constructs,
whose essence is captured in their first-order constructs. The fact that all of the information quality and
system quality characteristics proposed in this study were significant suggests that we need to model
these two constructs as second-order constructs. Modeling them as first-order constructs could lead to
falsely discarding characteristics that are indeed important.
8 Conclusion
8.1 Limitations
Before discussing this research’s implications, we note that it is subject to six limitations. First,
limitations regarding the survey population exist. Although we made efforts to recruit system
administrators of all specialties and experience levels, respondents may not be representative.
Additionally, while we recruited respondents through organizations with primarily North American,
Australian, and European membership, we had no way to ensure survey respondents were not from
countries with different work cultures. We minimized these challenges by examining demographic and
work information and examining the data for any outliers in response or completion time.
Second, a potential for non-response bias exists. Because system administrators self-selected into the
study and did not need to complete the survey once they began it, the respondents may not represent all
system administrators. For example, the survey took an average of 11 minutes to complete, which may
have excluded participation from extremely busy system administrators. However, variance in respondent experience level and in the tool identified for survey responses indicates heterogeneity, which suggests that the results reported here are relatively robust. In addition, we found no significant differences between early and late respondents at the 0.05 level (t-tests for continuous variables and a χ2 test for gender).
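For illustration, an early-versus-late respondent comparison of this kind can be run as below; the data are hypothetical stand-ins for our sample:

```python
import pandas as pd
from scipy import stats

# Hypothetical respondent data; 'wave' marks early vs. late respondents.
df = pd.DataFrame({
    "wave": ["early"] * 4 + ["late"] * 4,
    "experience_years": [3, 8, 12, 5, 4, 9, 11, 6],
    "gender": ["F", "M", "M", "F", "M", "M", "F", "F"],
})

early = df[df["wave"] == "early"]
late = df[df["wave"] == "late"]

# t-test for a continuous variable.
t, p_t = stats.ttest_ind(early["experience_years"], late["experience_years"])

# Chi-square test for gender.
contingency = pd.crosstab(df["wave"], df["gender"])
chi2, p_chi2, dof, _ = stats.chi2_contingency(contingency)

print(f"t-test p = {p_t:.3f}; chi-square p = {p_chi2:.3f}")
```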
Third, we need to further develop and refine our measurement items. We used previously validated
measures and newly developed measures. Most of the constructs used short scales and, though we
found the measures to be reliable and valid, they could be improved.
Fourth, we focused on system administrators and the tool they used most often in their work. The findings
may not be generalizable outside of the context of system administration or for tools used infrequently.
However, this study does provide a test of core system and information quality characteristics in a new
context and suggests new characteristics to include in future studies. If one accepts that system admins
most often use tools of their choosing as Velasquez and Weisband (2008) report, our findings may not be
generalizable to mandatory-use technologies. Also, one could consider additional constructs of interest to
system administrators. For example, Jaferian, Botta, Raja, Hawkey, & Beznosov (2008) identify design
guidelines that span general usability, technical complexity (such as providing for task prioritization and
multiple levels of information abstraction), organizational complexity (such as providing for archiving and
supporting collaboration), and task-specific guidelines (such as providing support for rehearsing and
meaningful errors). Ross, Weill, and Robertson (2006) identify different organizational operating models
that leverage enterprise architecture for organizational strategy; these models include varying levels of
business process integration and standardization, which can be reflected in the information systems
employed. Future research should investigate the importance of additional characteristics in the context of
system administration.
Fifth, we focused on end users’ perceptions of the information system being studied. However,
implementing and adopting information systems can depend on much more than the end user, even when
studying volitional use systems (Ross et al., 2006; Weill & Ross, 2004). Investigations of IS adoption and
use among system admins that explicitly include stakeholders other than the end user would offer more
insights into IS success.
Finally, our study was cross-sectional, which limits our ability to study causal effects. Future research
should investigate longitudinal patterns of system use among system admins. Even with these limitations,
the study provides insights into the work practices and tool use of a rarely studied and increasingly
important population of users.
8.2 Implications for Research
Although the limitations above suggest caution in interpreting the results, the work presents valuable
implications for future research. First, the results support our argument that one can model information
quality and system quality as second-order constructs. Consistent with prior literature (MacKenzie, 2003;
MacKenzie et al., 2011), we suggest modeling information quality and system quality as second-order
constructs. This conceptualization allows one to more accurately measure information and system quality
in each context in which one studies them. Future studies can also apply the second-order construct
model proposed in this study and examine it in additional settings. We affirm the importance of the
established core set of information and system characteristics in the system administration context, yet
they are not a comprehensive set across all settings. As researchers apply information quality and system
quality constructs to investigate additional technologies and in alternative research settings, they should
take care to evaluate the unique features of these contexts to ascertain the completeness of the relevant
system and information characteristics. While outside our scope here, future investigations may also
include the effects of context on IS adoption and use, such as the type of IT governance that guides IS
decisions (Weill & Ross, 2004), the business process being automated by the IS (Ross et al., 2006), or
the impact of IT governance frameworks on IS adoption, such as COBIT.
Second, this study provides a better understanding of system characteristics influencing information and
system quality in the context of system administration. Prior work states that system admins use a suite of
tools to do their work (Barrett et al., 2004; Haber & Bailey, 2007; Velasquez & Weisband, 2008). Future
research should examine whether our findings hold for this suite as a whole or only for particular tools in
the suite.
8.3 Implications for Practice
This study also presents important implications for practice. First, the results provide designers of system
administration tools guidance about characteristics important to their audience: system admins. For
example, accurate information was a significant indicator of information quality. Information accuracy may
be difficult to “show”, but developers can account for this need through careful programming and rigorous testing of their product, or they could provide several sources of information that can be
triangulated. Some significant aspects of system quality for system admins include reliability, credibility,
scalability, and speed. Again, tool reliability may be difficult to show, but developers can account for this
need through careful programming (e.g., being careful to avoid memory leaks, which may cause systems
to hang or crash) and rigorously testing their product in many different environments. Credible tools were
those with a good track record and known developers; companies with well-known and well-liked tools
already on the market will benefit when introducing new applications. New companies may increase the
credibility of their tools through beta testing and testimonial marketing.
Second, companies now have a way to assess information quality and system quality characteristics and
have further evidence to link those characteristics to system administrator perceptions of usefulness,
usability and, ultimately, use, which will allow companies to more objectively evaluate tools and identify
those most useful to system administrators. For example, when considering two very similar tools, the one
that is scriptable or provides an API will be better suited to system admins than one that does not.
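As a hypothetical illustration, estimated characteristic weights (such as those in Table 6) could be combined with characteristic ratings for candidate tools to produce a weighted comparison; the tools and ratings below are invented for the example:

```python
# Weights drawn from Table 6 for four system quality characteristics.
weights = {"Reliability": 0.123, "Scalability": 0.089,
           "Scriptability": 0.063, "Speed": 0.118}

# Invented 7-point ratings for two hypothetical tools.
tool_ratings = {
    "Tool A": {"Reliability": 6.5, "Scalability": 5.0,
               "Scriptability": 6.8, "Speed": 5.5},
    "Tool B": {"Reliability": 6.4, "Scalability": 5.2,
               "Scriptability": 3.1, "Speed": 5.6},
}

for tool, ratings in tool_ratings.items():
    score = sum(weights[c] * ratings[c] for c in weights)
    print(f"{tool}: weighted score = {score:.2f}")
```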
Lastly, our modeling of information quality and system quality can help designers of systems focus on the
important characteristics in a certain context. For example, measuring information quality with indicators
such as “in general, <the system> provides me with high-quality information” may not provide any useful
information to practitioners in terms of what information characteristics make a certain system have high
information quality. However, by modeling information and system quality as second-order constructs,
practitioners can understand what characteristics are important in a certain context (e.g., timeliness in the
context of this study) and can, thus, focus on these characteristics while designing systems.
8.4 Summary
While previous literature on the WT model helps explain the technology characteristics of information
quality and system quality, opportunities to improve the modeling of these constructs still exist. In this
study, we modeled information and system quality as second-order constructs and tested the WT model in
the context of system admins. Our study makes two main contributions to the literature. First, we show
that modeling information quality and system quality as second-order constructs can be more appropriate
both conceptually and statistically and may resolve apparent boundary conditions with the core WT
information and system quality constructs. Thus, we enhance the construct validity of information and
system quality. Second, by testing an extended WT model in the context of system administration, we
contribute to a deeper understanding of the characteristics of information quality and system quality that
are relevant to system admins.
References
Agarwal, J., Malhotra, N. K., & Bolton, R. N. (2010). A cross-national and cross-cultural approach to global
market segmentation: An application using consumers' perceived service quality. Journal of
International Marketing, 18(3), 18-40.
Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior. Englewood Cliffs,
NJ: Prentice-Hall.
Ajzen, I., Fishbein, M., & Zanna, M. P. (2005). The influence of attitudes on behavior. In D. Albarracin & B.
T. Johnson (Eds.), The handbook of attitudes. Mahwah, NJ: Erlbaum.
Almutairi, H., & Subramanian, G. H. (2005). An empirical application of the DeLone and McLean model in
the Kuwaiti private sector. Journal of Computer Information Systems, 45(3), 113-122.
Bailey, J., Kandogan, E., Haber, E., & Maglio, P. (2007). Activity-based management of IT service
delivery. In Proceedings of the 1st ACM Symposium on Computer Human Interaction for
Management of Information Technology.
Bailey, J., & Pearson, S. W. (1983). Development of a tool for measuring and analyzing computer user
satisfaction. Management Science, 29(5), 530-545.
Bannon, L. J. (1991). From human factors to human actors: The role of psychology and human-computer
interaction studies in system design. In J. Greenbaum & M. Kyng (Eds.), Design at work (pp. 25-
44). Hillsdale, NJ: Lawrence Erlbaum Associates.
Barclay, C., Higgins, C., & Thompson, R. (1995). The partial least squares approach to causal modeling,
personal computing adoption and use as an illustration. Technology Studies, 2(2), 285-309.
Baroudi, J. J., & Orlikowski, W. J. (1988). A short form measure of user information satisfaction: A
psychometric evaluation and notes on use. Journal of Management Information Systems, 4(4), 44-
59.
Barrett, R., Chen, Y. Y. M., & Maglio, P. (2003). System administrators are users, too: Designing
workspaces for managing internet-scale systems. In R. Barrett, M. Chen, & P. Maglio (Eds.),
Proceedings of the Conference on Human Factors in Computing Systems.
Barrett, R., Kandogan, E., Maglio, P., Haber, E., Takayama, L. A., & Prabaker, M. (2004). Field studies of
computer system administrators: Analysis of system management tools and practices. In
Proceedings of the ACM Conference on Computer Supported Cooperative Work (pp. 388-395).
Botta, D., Werlinger, R., Gagné, A., Beznosov, K., Iverson, L., Fels, S., & Fisher, B. (2007). Towards
understanding IT security professionals and their tools. In Proceedings of the Symposium on
Usable Privacy and Security (pp. 100-111).
Brown, T. A. (2006). Confirmatory Factor Analysis for Applied Research. New York, NY: Guilford Press.
Chen, C.-W. (2010). Impact of quality antecedents on taxpayer satisfaction with online tax-filing systems: An empirical study. Information & Management, 47(5-6), 308-315.
Chin, W. W. (1998). Issues and opinions on structural equation modeling. MIS Quarterly, 22(2), vii-xvi.
Chin, W. W. (2010). How to write up and report PLS analyses. In V. Esposito Vinzi, W. W. Chin, J.
Henseler, & H. Wang (Eds.), Handbook of partial least squares (pp. 655-690). Berlin, Heidelberg:
Springer.
Chin, W. W., Marcolin, B., & Newsted, P. (2003). A partial least squares latent variable modeling approach
for measuring interaction effects: Results from a Monte Carlo simulation study and an electronic-
mail emotion/adoption study. Information Systems Research, 14(2), 189-217.
Churchill, G. A. (1979). A paradigm for developing better measures of marketing constructs. Journal of
Marketing Research, 16(1), 64-73.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information
technology. MIS Quarterly, 13(3), 319-340.
DeLone, W., & McLean, E. (1992). Information systems success: The quest for the dependent variable.
Information Systems Research, 3(1), 60-95.
Doll, W., & Torkzadeh, G. (1988). The measurement of end-user computing satisfaction. MIS Quarterly,
12(2), 259-274.
Endsley, M. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1),
32-64.
Farrimond, S., Knight, R. G., & Titov, N. (2006). The effects of aging on remembering intentions:
Performance on a simulated shopping task. Applied Cognitive Psychology, 20(4), 533-555.
Fogg, B., & Tseng, H. (1999). The elements of computer credibility. In Proceedings of the ACM
Conference on Human Factors in Computing Systems (pp. 80-87).
Forsgren Velasquez, N., Kim, G., Kersten, N., & Humble, J. (2014). 2014 state of DevOps report.
PuppetLabs.
Gable, G. G., Sedera, D., & Chan, T. (2008). Re-conceptualizing information system success: The IS-
impact measurement model. Journal of the Association for Information Systems, 9(7), 377-408.
Galletta, D. F., & Lederer, A. L. (1989). Some cautions on the measurement of end user satisfaction.
Decision Sciences, 20(3), 419-439.
Gefen, D., & Straub, D. (2005). A practical guide to factorial validity using PLS-graph: Tutorial and
annotated example. Communications of the Association for Information Systems, 16, 91-109.
Gefen, D., Straub, D., & Boudreau, M. (2000). Structural equation modeling and regression: Guidelines for
research practice. Communications of the AIS, 4, 1-77.
Gefen, D., Straub, D. W., & Rigdon, E. E. (2011). An update and extension to SEM guidelines for
administrative and social science research. MIS Quarterly, 35(2), iii-xiv.
Haber, E., & Bailey, J. (2007). Design guidelines for system administration tools developed through
ethnographic field studies. In Proceedings of the Computer-Human Interaction for the Management
of Information Technology.
Haber, E., & Kandogan, E. (2007). Security administrators in the wild: Ethnographic studies of security
administrators. In Proceedings of the ACM Conference on Human Factors in Computing Systems.
Henseler, J., & Sarstedt, M. (2013). Goodness-of-fit indices for partial least squares path modeling.
Computational Statistics, 28(2), 565-580.
Horn, P. (2001). Autonomic computing: IBM's perspective on the state of information technology. IBM
Corporation.
HP. (2007). Adaptive infrastructure. Retrieved from http://h71028.www7.hp.com/enterprise/cache/5003-0-
0-0-121.aspx
Hrebec, D. G., & Stiber, M. (2001). A survey of system administrator mental models and situation
awareness. In Proceedings of the ACM Conference on Computer Personnel Research (pp. 166-
172).
Hulland, J. (1999). Use of partial least squares (PLS) in strategic management research: A review of four
recent studies. Strategic Management Journal, 20(2), 195-204.
Ives, B., Olson, M. H., & Baroudi, J. J. (1983). The measurement of user information satisfaction.
Communications of the ACM, 26(10), 785-793.
Jaferian, P., Botta, D., Raja, F., Hawkey, K., & Beznosov, K. (2008). Guidelines for designing IT security
management tools. In Proceedings of the 2nd ACM Symposium on Computer Human Interaction for
Management of Information Technology.
Jarvis, C. B., MacKenzie, S. B., & Podsakoff, P. M. (2003). A critical review of construct indicators and
measurement model misspecification in marketing and consumer research. Journal of Consumer
Research, 30(2), 199-218.
Junglas, I., Goel, L., Abraham, C., & Ives, B. (2013). The social component of information systems—how
sociability contributes to technology acceptance. Journal of the Association for Information
Systems, 14(10), 585-616.
Kephart, J., & Chess, D. (2003). The vision of autonomic computing. IEEE Computer, 36(1), 41-51.
Kettinger, W. J., Park, S., & Smith, J. (2009). Understanding the consequences of information systems
service quality on IS service reuse. Information & Management, 46(6), 335-341.
Khalifa, M., & Davison, M. (2006). SME adoption of IT: The case of electronic trading systems. IEEE
Transactions on Engineering Management, 53(2), 275-284.
Kim, H., Xu, Y., & Koh, J. (2004). A comparison of online trust building factors between potential
customers and repeat customers. Journal of the Association for Information Systems, 5(10), 392-
420.
Kim, S., & Stoel, L. (2004). Dimensional hierarchy of retail website quality. Information & Management,
41(5), 619-633.
Koufteros, X., Babbar, S., & Kaighobadi, M. (2009). A paradigm for examining second-order factor models
employing structural equation modeling. International Journal of Production Economics, 120, 633-
652.
Lee, S., Shin, B., & Lee, H. G. (2009). Understanding post-adoption usage of mobile data services: The
role of supplier-side variables. Journal of the Association for Information Systems, 10(12), 860-888.
Lenchner, J., Rosu, D., Velasquez, N. F., Guo, S., Christiance, K., DeFelice, D., Deshpande, P. M.,
Kummamuru, K., Kraus, N., Luan, L. Z., Majumdar, D., McLaughlin, M., Ofek-Koifman, S., Deepak,
P., Perng, C. -S., Roitman, H., Ward, C., & Young, J. (2009). A service delivery platform for server
management services. IBM Journal of Research and Development, 53(6), 2-17.
Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional
research designs. Journal of Applied Psychology, 86(1), 114-121.
Lohmöller, J.-B. (1989). Latent variable path modeling with partial least squares. Heidelberg, Germany: Physica-Verlag.
Loiacono, E., Chen, D., & Goodhue, D. (2002). WebQual revisited: Predicting the intent to reuse a web
site. In Proceedings of the America’s Conference on Information Systems.
MacKenzie, S. B. (2003). The dangers of poor construct conceptualization. Journal of the Academy of
Marketing Science, 31(3), 323-326.
MacKenzie, S. B., Podsakoff, P. M., & Podsakoff, N. P. (2011). Construct measurement and validation
procedures in MIS and behavioral research: Integrating new and existing techniques. MIS
Quarterly, 35(2), 293- 334.
Malhotra, N. K., Kim, S. S., & Patil, A. (2006). Common method variance in IS research: A comparison of
alternative approaches and a reanalysis of past research. Management Science, 52(12), 1865-
1883.
McKinney, V., Yoon, K., & Zahedi, F. (2002). The measurement of Web-customer satisfaction: An
expectation and disconfirmation approach. Information Systems Research, 13(3), 296-315.
McKnight, D. H. (2005). Trust in information technology. In G. B. Davis (Ed.), The Blackwell encyclopedia
of management: Management information systems (vol. 7, pp. 329-331). Malden, MA: Blackwell.
Medina, Q., & Chaparro, P. (2007). The impact of the human element in the information systems quality
for decision making and user satisfaction. Journal of Computer Information Systems, 48(2), 44-52.
Meng, X.-L., Rosenthal, R., & Rubin, D. B. (1992). Comparing correlated correlation coefficients.
Psychological Bulletin, 111(1), 172-175.
Negash, S., Ryan, T., & Igbaria, M. (2003). Quality and effectiveness in Web-based customer support
systems. Information & Management, 40(8), 757-768.
Nelson, R. R., Todd, P. A., & Wixom, B. H. (2005). Antecedents of information and system quality: An
empirical examination within the context of data warehousing. Journal of Management Information
Systems, 21(4), 199-235.
Patterson, D. (2002). A simple way to estimate the cost of downtime. In Proceedings of the Large
Installation System Administrator's Conference (pp. 185-188).
Patterson, D., Brown, A., Broadwell, P., Candea, G., Chen, M., Cutler, J., Enriquez, P., Fox, A., Kiciman,
E., Merzbacher, M., Oppenheimer, D., Sastry, N., Tetzlaff, W., Traupman, J., & Treuhaft, N. (2002).
Recovery-oriented computing (ROC): Motivation, definition, techniques, and case studies
(Technical Report UCB//CSD-02-1175). Berkeley, CA: University of California.
Pearson, A., Tadisina, S., & Griffin, C. (2012). The role of e-service quality and information quality in
creating perceived value: Antecedents to web site loyalty. Information Systems Management, 29(3),
201-215.
Podsakoff, P. M., MacKenzie, S., Lee, J., & Podsakoff, N. (2003). Common method biases in behavioral
research: A critical review of the literature and recommended remedies. Journal of Applied
Psychology, 88(5), 879-903.
Rai, A., Lang, S., & Welker, R. (2002). Assessing the validity of IS success models: An empirical test and
theoretical analysis. Information Systems Research, 13(1), 50-69.
Ringle, C., Wende, S., & Will, S. (2005). SmartPLS 2.0 (M3) beta. Hamburg: SmartPLS.
Ross, J. W., Weill, P., & Robertson, D. C. (2006). Enterprise architecture as strategy: Creating a
foundation for business execution. Boston, MA: Harvard Business Press.
Sage. (2008). SAGE annual salary survey for 2007. USENIX.
Sánchez-Hernández, R. M., Martínez-Tur, V., Peiró, J. M., & Ramos, J. (2009). Testing a hierarchical and
integrated model of quality in the service sector: Functional, relational and tangible dimensions.
Total Quality Management and Business Excellence, 20(11), 1173-1188.
Setia, P., Venkatesh, V., & Joglekar, S. (2013). Leveraging digital technologies: How information quality
leads to localized capabilities and customer service performance. MIS Quarterly, 37(2), 565-590.
Shneiderman, B. (1997). Designing the user interface: Strategies for effective human-computer
interaction. Redwood City, CA: Addison Wesley.
Speier, C., Valacich, J. S., & Vessey, I. (1999). The influence of task interruption on individual decision
making: An information overload perspective. Decision Sciences, 30(2), 337-360.
Sun Microsystems. (2006). N1 grid system. Retrieved January 2006 from http://wwws.sun.com/software/solutions/n1
Takayama, L., & Kandogan, E. (2006). Trust as an underlying factor of system administrator interface
choice. In Proceedings of the ACM Conference on Human Factors in Computing Systems (pp.
1391-1396).
Tenenhaus, M., Amato, S., & Esposito, V. V. (2004). A global goodness-of-fit index for PLS structural
equation modelling. In Proceedings of the XLII SIS Scientific Meeting (pp. 739-742).
Teo, T. S. H., Srivastava, S. C., & Jiang, L. (2008). Trust and electronic government success: An empirical
study. Journal of Management Information Systems, 25(3), 99-131.
Velasquez, N. F., & Weisband, S. P. (2008). Work practices of system administrators: Implications for tool
design. In Proceedings of the 2nd ACM Symposium on Computer Human Interaction for
Management of Information Technology.
Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating control, intrinsic motivation, and
emotion into the technology acceptance model. Information Systems Research, 11(4), 342-365.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information
technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.
Verdoes, J. A. (1997). In search of a complete and scalable systems administration suite. Paper
presented at the Large Scale System Administration of Windows NT Workshop.
Wang, Y. (2008). Assessing e-commerce systems success: A respecification and validation of the DeLone
and McLean model of IS success. Information Systems Journal, 18(5), 529-557.
Weill, P., & Ross, J. W. (2004). IT governance: How top performers manage IT decision rights for superior results. Boston, MA: Harvard Business School Press.
Wetzels, M., Odekerken-Schröder, G., & van Oppen, C. (2009). Using PLS path modeling for assessing
hierarchical construct models: Guidelines and empirical illustration. MIS Quarterly, 33(1), 177-195.
Whetten, D. A. (1989). What constitutes a good theoretical contribution? Academy of Management
Review, 14(4), 490-495.
Wixom, B. H., & Todd, P. A. (2005). A theoretical integration of user satisfaction and technology
acceptance. Information Systems Research, 16(1), 85-102.
Wixom, B. H., & Watson, H. J. (2001). An empirical investigation of the factors affecting data warehousing
success. MIS Quarterly, 25(1), 17-41.
Xu, J., Benbasat, I., & Cenfetelli, R. T. (2013). Integrating service quality with system and information
quality: An empirical test in the e-service context. MIS Quarterly, 37(3), 777-794.
Yi, M. Y., Yoon, J. J., Davis, J. M., & Lee, T. (2013). Untangling the antecedents of initial trust in Web-
based health information: The roles of argument quality, source expertise, and user perceptions of
information quality and risk. Decision Support Systems, 55(1), 284-295.
Zheng, Y., Zhao, K., & Stylianou, A. (2013). The impacts of information quality and system quality on
users' continuance intention in information-exchange virtual communities: An empirical
investigation. Decision Support Systems, 56, 513-524.
Appendix A: Recent Literature Using Information and System Quality
Table A1. Recent Literature

Almutairi & Subramanian (2005). Context: IS function in private sector organizations.
  Information quality: N/A; 1st order (2); reflective.
  System quality: N/A; 1st order (2); reflective.

Chen (2010). Context: online system for filing income taxes.
  Information quality: accuracy, timeliness, completeness; 2nd order; reflective.
  System quality: access, interactivity, ease of use; 2nd order; reflective.

Gable, Sedera, & Chan (2008). Context: enterprise system.
  Information quality: availability, usability, understandability, relevance, format, conciseness; 1st order; formative.
  System quality: ease of use, ease of learning, user requirement, system features, system accuracy, flexibility, sophistication, integration, customization; 1st order; formative.

Junglas, Goel, Abraham, & Ives (2013). Context: Second Life.
  Information quality: N/A; 1st order (3); reflective.
  System quality: N/A; 1st order (3); reflective.

Kim, Xu, & Koh (2004). Context: Internet stores.
  Information quality: N/A; 1st order (2); reflective.
  System quality: N/A; 1st order (2); reflective.

Lee et al. (2009). Context: mobile data services.
  Information quality: relevance, timeliness, scope; 2nd order; reflective.
  System quality: access, usability, navigation; 2nd order; reflective.

Medina & Chaparro (2007). Context: scholar control information system of higher education institutions.
  Information quality: opportune, updated, useful, exact, complete, relevant; 1st order; reflective.
  System quality: easy to use, exact, operational, efficiency, adaptable, friendly; 1st order; reflective.

Negash et al. (2003). Context: Web-based customer support systems.
  Information quality: informativeness, entertainment; 2nd order; reflective.
  System quality: interactivity, access; 2nd order; reflective.

Nelson et al. (2005). Context: data warehouse.
  Information quality: completeness, accuracy, format, currency; 1st order; reflective.
  System quality: reliability, flexibility, accessibility, response time, integration; 1st order; reflective.

Pearson, Tadisina, & Griffin (2012). Context: e-commerce web sites.
  Information quality: relevance, understandability, reliability, adequacy, scope, usefulness; 2nd order (1); reflective.

Rai et al. (2002). Context: integrated student information systems.
  Information quality: N/A; 1st order (2); reflective.

Setia et al. (2013). Context: customer service units' digital design.
  Information quality: completeness, accuracy, format, currency; 2nd order; formative.

Teo, Srivastava, & Jiang (2008). Context: e-government web sites.
  Information quality: N/A; 1st order (2); reflective.
  System quality: N/A; 1st order (2); reflective.

Wang (2008). Context: e-commerce applications.
  Information quality: content, accuracy, timeliness; 1st order; reflective.
  System quality: ease of use; 1st order; reflective.

Wixom & Todd (2005). Context: data warehouse.
  Information quality: completeness, accuracy, format, currency; 1st order; reflective.
  System quality: reliability, flexibility, integration, accessibility, timeliness; 1st order; reflective.

Wixom & Watson (2001). Context: data warehouse.
  Information quality: accuracy, comprehensiveness, consistency, completeness; 1st order; reflective.
  System quality: flexibility, integration; 1st order; reflective.

Xu, Benbasat, & Cenfetelli (2013). Context: consumer websites.
  Information quality: completeness, accuracy, format, currency; 1st order; reflective.
  System quality: reliability, flexibility, accessibility, timeliness; 1st order; reflective.

Yi, Yoon, Davis, & Lee (2013). Context: web sites with health information.
  Information quality: currency, accuracy, relevancy, usefulness, completeness; 1st order; reflective.

Zheng, Zhao, & Stylianou (2013). Context: virtual community.
  Information quality: reliability, objectivity, value-added, timeliness, richness, format; 2nd order; formative.
  System quality: navigation, accessibility, appearance, security, interactivity; 2nd order; formative.

Notes: (1) Average scores are used for each sub-dimension. (2) Specific items are used, but the authors did not specify sub-dimensions. (3) "Overall" items are used.
Communications of the Association for Information Systems
831
Volume 38
Paper 39
Appendix B: Survey Instrument
Table B1. Survey Instrument
Construct and item
Accuracy
ACCU1: ____ produces correct information.
ACCU2: There are few errors in the information I obtain from ____.
ACCU3: The information provided by ____ is accurate.
Completeness
CMP1: ____ provides me with a complete set of information.
CMP2: ____ produces comprehensive information.
CMP3: ____ provides me with all the information I need.
Currency
CURR1: ____ provides me with the most recent information.
CURR2: ____ produces the most current information.
CURR3: The information from ____ is always up to date.
Format
FMT1: The information provided by _____ is well laid out.
FMT2: The information provided by _____ is clearly presented on the screen.
FMT3: The information provided by _____ is well formatted.
Logging
LOG1: ____ keeps track of previous actions so I can retrace my steps later.
LOG2: ____ allows me to see and review previous actions.
LOG3: ____ logs previous actions.
Verification
VERI1: ____ allows me to see and review the outcomes of previous actions.
VERI2: ____ makes the outcomes of previous actions available to me.
VERI3: I can access the outcomes of previous commands or tasks using ____.
Information quality
IQ1: Overall, I would give the information from ____ high marks.
IQ2: Overall, I would give the information provided by ____ a high rating in terms of quality.
IQ3: In general, ____ provides me with high-quality information.
Accessibility
ACCS1: ____ allows information to be readily accessible to me.
ACCS2: ____ makes information very accessible.
ACCS3: ____ makes information easy to access.
Flexibility
FLEX1: _____ can be adapted to meet a variety of needs.
FLEX2: _____ can flexibly adjust to new demands or conditions.
FLEX3: _____ is versatile in addressing needs as they arise.
Integration
INTE1: ____ effectively integrates data from different areas of the company.
INTE2: ____ pulls together information that used to come from different places in the company.
INTE3: ____ effectively combines data from different areas of the company.
Reliability
REL1: ____ operates reliably.
REL2: ____ performs reliably.
REL3: The operation of ____ is dependable.
Timeliness
TIM1: It takes too long for ____ to respond to my requests (RC).
TIM2: ____ provides information in a timely fashion.
TIM3: ____ returns answers to my requests quickly.
Credibility/trust (functionality)
FUNC1: ____ has the functionality that I need.
FUNC2: ____ has the features required for my work activities.
FUNC3: ____ has the ability to do what I want it to do.
FUNC4: ____ has the overall capabilities I need.
Credibility/trust (helpfulness)
HELP1: The help function of ____ provides competent guidance (as needed).
HELP2: The help function of ____ provides whatever help I need.
HELP3: The help function of ____ provides very sensible and effective advice, if needed.
HELP4: The help function supplies my need for help through a help function.
Credibility/trust (predictability)
PRED1: ____ behaves in a predictable way.
PRED2: I can forecast in advance how ____ will work.
PRED3: ____ functions in the same way each time I use it.
PRED4: As a work tool, ____ is very predictable.
Mental model
MM1: ____ provides me with information necessary to predict future system state.
MM2: ____ provides me with information necessary to plan future actions on the system.
MM3: ____ helps me understand my system so that I can anticipate future events.
MM4: ____ helps me “picture” my system in my head so that I can formulate plans.
Monitoring
MON1: ____ allows me to monitor system state.
MON2: ____ provides monitoring capabilities.
MON3: I can monitor my system using ____.
Situation awareness
SA1: ____ helps me to understand the current state of my environment.
SA2: ____ helps me build a mental map of my current system.
SA3: ____ provides information that helps me understand how my current system is operating.
SA4: ____ helps me build a mental map of the system that I can use when I troubleshoot.
SA5: ____ provides information that helps me respond to emergencies in my system.
Scalability
SCAL1: _____ can be used in both simple and complex environments.
SCAL2: _____ is scalable.
SCAL3: _____ can be used in both small and large environments.
Scriptability
SCRIPT1: ____ allows me to automate processes.
SCRIPT2: ____ supports scripting.
SCRIPT3: ____ is programmable.
Speed
SPEED1: _____ executes commands quickly.
SPEED2: ____ starts up rapidly.
SPEED3: ____ initializes swiftly.
System quality
SQ1: In terms of system quality, I would rate ____ highly.
SQ2: Overall, ____ is of high quality.
SQ3: Overall, I would give the quality of ____ a high rating.
Information satisfaction
ISAT1: Overall, the information I get from ____ is very satisfying.
ISAT2: I am very satisfied with the information I receive from ____.
System satisfaction
SSAT1: All things considered, I am very satisfied with _____.
SSAT2: Overall, my interaction with _____ is very satisfying.
Usefulness
USE1: Using _____ improves my ability to make good decisions.
USE2: ____ allows me to get my work done more quickly.
USE3: Using ____ enhances my effectiveness on the job.
Ease of use
EOU1: ____ is easy to use.
EOU2: It is easy to get ____ to do what I want it to do.
EOU3: ____ is easy to operate.
Attitude
ATT1: Using ____ is (not enjoyable/ very enjoyable).
ATT2: Overall, using ____ is a (unpleasant/pleasant) experience.
ATT3: My attitude toward using ____ is (very unfavorable/very favorable).
Intention
INT1: I intend to use ____ as a routine part of my job over the next year.
INT2: I intend to use ____ at every opportunity over the next year.
INT3: I plan to increase my use of ____ over the next year.
Table B2. Descriptive Statistics and Item Loadings

Items are reported as mean / standard deviation / loading; AVE and CR are reported once per construct.

ACCS (AVE 0.848, CR 0.944): ACCS1 6.11/1.020/0.908; ACCS2 5.83/1.293/0.957; ACCS3 5.68/1.325/0.897
ACCU (AVE 0.699, CR 0.870): ACCU1 6.07/0.987/0.904; ACCU2 5.62/1.510/0.639; ACCU3 6.13/0.877/0.929
ATT (AVE 0.794, CR 0.920): ATT1 5.42/1.247/0.878; ATT2 5.55/1.161/0.932; ATT3 6.06/1.054/0.861
CMP (AVE 0.789, CR 0.917): CMP1 5.11/1.686/0.937; CMP2 5.31/1.571/0.928; CMP3 4.50/1.826/0.791
FUNC (AVE 0.822, CR 0.949): FUNC1 5.72/1.126/0.889; FUNC2 5.90/0.991/0.911; FUNC3 5.88/1.049/0.916; FUNC4 5.76/1.109/0.911
HELP (AVE 0.843, CR 0.955): HELP1 4.33/1.681/0.923; HELP2 4.11/1.652/0.934; HELP3 4.03/1.620/0.939; HELP4 4.15/1.691/0.875
PRED (AVE 0.730, CR 0.915): PRED1 6.24/0.798/0.905; PRED2 6.00/1.068/0.797; PRED3 6.29/0.807/0.805; PRED4 6.25/0.845/0.904
CURR (AVE 0.834, CR 0.940): CURR1 6.05/1.044/0.950; CURR2 6.05/1.018/0.948; CURR3 5.83/1.190/0.849
EOU (AVE 0.793, CR 0.920): EOU1 5.27/1.485/0.879; EOU2 5.50/1.294/0.889; EOU3 5.48/1.379/0.904
FLEX (AVE 0.892, CR 0.961): FLEX1 6.07/1.333/0.920; FLEX2 5.80/1.543/0.957; FLEX3 5.78/1.449/0.956
FMT (AVE 0.841, CR 0.941): FMT1 5.24/1.436/0.920; FMT2 5.24/1.399/0.944; FMT3 5.46/1.323/0.885
INT (AVE 0.643, CR 0.842): INT1 6.50/0.758/0.793; INT2 5.94/1.289/0.880; INT3 5.31/1.404/0.723
INTEG (AVE 0.930, CR 0.975): INTEG1 4.41/1.949/0.945; INTEG2 4.44/1.990/0.971; INTEG3 4.38/1.972/0.976
ISAT (AVE 0.941, CR 0.969): ISAT1 5.58/1.271/0.969; ISAT2 5.60/1.287/0.971
IQ (AVE 0.918, CR 0.971): IQ1 5.78/1.177/0.953; IQ2 5.88/1.142/0.959; IQ3 5.86/1.159/0.962
LOG (AVE 0.827, CR 0.933): LOG1 4.45/2.061/0.954; LOG2 4.57/2.018/0.961; LOG3 4.78/1.977/0.799
MM (AVE 0.811, CR 0.944): MM1 4.59/1.765/0.911; MM2 4.80/1.764/0.933; MM3 4.77/1.683/0.944; MM4 4.95/1.560/0.805
MON (AVE 0.875, CR 0.953): MON1 5.46/1.709/0.939; MON2 4.98/2.013/0.900; MON3 5.25/1.939/0.960
REL (AVE 0.894, CR 0.962): REL1 6.27/0.913/0.952; REL2 6.21/0.911/0.958; REL3 6.21/0.916/0.926
SA (AVE 0.674, CR 0.911): SA1 6.07/1.200/0.848; SA2 5.49/1.526/0.818; SA3 5.84/1.287/0.834; SA4 5.46/1.449/0.845; SA5 5.81/1.376/0.754
SCAL (AVE 0.726, CR 0.88): SCAL1 6.08/1.199/0.904; SCAL2 5.87/1.290/0.747; SCAL3 6.20/1.131/0.896
SCRIPT (AVE 0.781, CR 0.914): SCRIPT1 5.79/1.654/0.872; SCRIPT2 5.93/1.483/0.913; SCRIPT3 5.65/1.640/0.864
SPEED (AVE 0.892, CR 0.961): SPEED1 5.64/1.329/0.910; SPEED2 5.75/1.366/0.953; SPEED3 5.68/1.383/0.969
SSAT (AVE 0.939, CR 0.968): SSAT1 6.07/1.113/0.970; SSAT2 5.89/1.220/0.968
SQ (AVE 0.947, CR 0.982): SQ1 6.14/1.054/0.967; SQ2 6.16/1.043/0.976; SQ3 6.15/1.051/0.977
TIM (AVE 0.775, CR 0.912): TIM1 5.34/1.697/0.817; TIM2 5.73/1.081/0.898; TIM3 5.57/1.349/0.926
USE (AVE 0.757, CR 0.903): USE1 5.64/1.343/0.794; USE2 5.98/1.231/0.911; USE3 6.25/1.004/0.902
VERI (AVE 0.847, CR 0.943): VERI1 4.28/1.914/0.906; VERI2 4.34/1.862/0.914; VERI3 4.13/1.951/0.939
Appendix C: Factor Correlations and Square Root of AVE on Diagonal
Table C1. Factor Correlations and Square Root of AVE on Diagonal

The lower-triangular matrix is reported in two panels: columns 1-14 and columns 15-28. The square root of each construct's AVE appears on the diagonal.

Constructs | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14
1. Access | 0.92
2. Accuracy | 0.52 | 0.83
3. Attitude | 0.54 | 0.45 | 0.89
4. Completeness | 0.38 | 0.44 | 0.47 | 0.89
5. Credibility (functionality) | 0.54 | 0.45 | 0.60 | 0.46 | 0.91
6. Credibility (help) | 0.48 | 0.31 | 0.45 | 0.42 | 0.51 | 0.92
7. Credibility (predictability) | 0.58 | 0.52 | 0.51 | 0.28 | 0.53 | 0.31 | 0.85
8. Currency | 0.44 | 0.63 | 0.40 | 0.48 | 0.39 | 0.24 | 0.42 | 0.92
9. Ease of use | 0.61 | 0.31 | 0.54 | 0.34 | 0.50 | 0.46 | 0.41 | 0.35 | 0.89
10. Flexibility | 0.40 | 0.42 | 0.46 | 0.29 | 0.45 | 0.30 | 0.33 | 0.29 | 0.21 | 0.94
11. Format | 0.64 | 0.46 | 0.43 | 0.35 | 0.46 | 0.49 | 0.38 | 0.39 | 0.55 | 0.28 | 0.92
12. Info quality (first-order) | 0.65 | 0.66 | 0.58 | 0.51 | 0.61 | 0.48 | 0.62 | 0.55 | 0.48 | 0.44 | 0.56 | 0.96
13. Information satisfaction | 0.64 | 0.57 | 0.59 | 0.47 | 0.60 | 0.51 | 0.56 | 0.42 | 0.46 | 0.40 | 0.62 | 0.85 | 0.97
14. Integration | 0.28 | 0.16 | 0.28 | 0.26 | 0.40 | 0.32 | 0.05 | 0.15 | 0.16 | 0.41 | 0.24 | 0.24 | 0.25 | 0.96
15. Intent | 0.47 | 0.43 | 0.62 | 0.38 | 0.54 | 0.28 | 0.45 | 0.44 | 0.39 | 0.43 | 0.36 | 0.54 | 0.52 | 0.32
16. Logging | 0.22 | 0.25 | 0.30 | 0.31 | 0.38 | 0.22 | 0.21 | 0.14 | 0.13 | 0.43 | 0.08 | 0.35 | 0.31 | 0.32
17. Mental model | 0.45 | 0.26 | 0.39 | 0.27 | 0.35 | 0.26 | 0.35 | 0.26 | 0.30 | 0.35 | 0.38 | 0.40 | 0.46 | 0.28
18. Monitor | 0.30 | 0.10 | 0.10 | 0.17 | 0.09 | 0.26 | 0.14 | 0.11 | 0.13 | 0.22 | 0.27 | 0.17 | 0.29 | 0.20
19. Reliability | 0.62 | 0.53 | 0.50 | 0.32 | 0.51 | 0.37 | 0.71 | 0.35 | 0.43 | 0.34 | 0.36 | 0.58 | 0.54 | 0.02
20. Scalability | 0.44 | 0.43 | 0.49 | 0.25 | 0.49 | 0.36 | 0.49 | 0.41 | 0.27 | 0.47 | 0.32 | 0.52 | 0.46 | 0.20
21. Scriptability | 0.18 | 0.24 | 0.37 | 0.25 | 0.40 | 0.25 | 0.19 | 0.15 | 0.01 | 0.59 | 0.03 | 0.30 | 0.29 | 0.44
22. Situation awareness | 0.51 | 0.37 | 0.44 | 0.30 | 0.37 | 0.29 | 0.49 | 0.33 | 0.30 | 0.34 | 0.41 | 0.50 | 0.51 | 0.26
23. Speed | 0.61 | 0.42 | 0.42 | 0.33 | 0.45 | 0.43 | 0.57 | 0.39 | 0.45 | 0.34 | 0.36 | 0.49 | 0.48 | 0.10
24. Sys quality (first-order) | 0.65 | 0.59 | 0.63 | 0.35 | 0.63 | 0.49 | 0.71 | 0.44 | 0.46 | 0.48 | 0.46 | 0.71 | 0.65 | 0.16
25. System satisfaction | 0.66 | 0.54 | 0.70 | 0.45 | 0.71 | 0.54 | 0.63 | 0.47 | 0.56 | 0.47 | 0.51 | 0.74 | 0.70 | 0.22
26. Timeliness | 0.58 | 0.49 | 0.50 | 0.36 | 0.49 | 0.41 | 0.59 | 0.42 | 0.47 | 0.35 | 0.41 | 0.61 | 0.57 | 0.10
27. Usefulness | 0.59 | 0.50 | 0.70 | 0.34 | 0.62 | 0.39 | 0.53 | 0.37 | 0.49 | 0.55 | 0.48 | 0.65 | 0.64 | 0.36
28. Verification | 0.17 | 0.18 | 0.27 | 0.28 | 0.37 | 0.25 | 0.10 | 0.16 | 0.19 | 0.30 | 0.17 | 0.29 | 0.26 | 0.35

Constructs | 15 | 16 | 17 | 18 | 19
15. Intent | 0.80
16. Logging | 0.30 | 0.91
17. Mental model | 0.40 | 0.24 | 0.90
18. Monitor | 0.09 | -0.01 | 0.39 | 0.93
19. Reliability | 0.34 | 0.31 | 0.28 | 0.13