Communications of the Association for Information Systems
Research Paper  DOI: 10.17705/1CAIS.044XX  ISSN: 1529-3181
Volume 44, Paper XX, pp. 1-60, August 2021
ADDRESSING CHANGE TRAJECTORIES AND
RECIPROCAL RELATIONSHIPS: A LONGITUDINAL
METHOD FOR INFORMATION SYSTEMS
RESEARCH
Youyou Tao
Information Systems and Business Analytics
Loyola Marymount University
Youyou.tao@lmu.edu
Abhay Nath Mishra
Information Systems and Business Analytics
Iowa State University
abhay@iastate.edu
Katherine Masyn
Population Health Sciences
Georgia State University
kmasyn@gsu.edu
Mark Keil
Computer Information Systems
Georgia State University
mkeil@gsu.edu
Abstract:
This paper makes a focused methodological contribution to the information systems (IS) literature by introducing a
bivariate dynamic latent difference score model (BDLDSM) to simultaneously model change trajectories, dynamic
relationships, and potential feedback loops between predictor and outcome variables for longitudinal data analysis. It
will be most relevant for research that aims to use longitudinal data to explore longitudinal theories related to change.
Commonly used longitudinal methods in IS research (linear unobserved effects panel data models, structural equation
modeling (SEM), and random coefficient models) largely miss the opportunity to explore rate of change, dynamic
relationships, and potential feedback loops between predictor and outcome variables while incorporating change
trajectories, which are critical for longitudinal theory development. Latent growth models help address change
trajectories, but still prevent researchers from using longitudinal data more thoroughly. For instance, these models
cannot be used for examining dynamic relationships or feedback loops. BDLDSM allows IS researchers to analyze
change trajectories, understand rate of change in variables, examine dynamic relationships between variables over
time, and test for feedback loops between predictor and outcome variables. The use of this methodology has the
potential to advance theoretical development by enabling researchers to exploit longitudinal data to test change-related
hypotheses and predictions rigorously. We describe the key aspects of various longitudinal techniques, provide an
illustration of BDLDSM on a healthcare panel dataset, discuss how BDLDSM addresses the limitations of other
methods, and provide a step-by-step guide, including Mplus code, to develop and conduct BDLDSM analyses.
Keywords: Bivariate Dynamic Latent Difference Score Model, Latent Growth Model, Longitudinal Research,
Measurement Invariance, Structural Equation Modeling, Health IT
1 Introduction
In recent years, scholars in multiple disciplines have strongly recommended longitudinal theorizing and data
analysis (Bolander, Dugan, & Jones, 2017; Kher & Serva, 2014; Ployhart & Vandenberg, 2010; Zheng,
Pavlou, & Gu, 2014). Buoyed by the increasing availability of public and private longitudinal datasets,
researchers in information systems (IS) have been using longitudinal data analysis techniques to examine
phenomena that evolve over time. Commonly employed techniques include linear unobserved effects panel
data models (e.g., fixed/random effects models), structural equation modeling (SEM), and random-
coefficient models (Zheng et al., 2014). Although these methods have enabled researchers to move beyond
cross-sectional, single-point analyses, they suffer from two major drawbacks. First, these methods fail to
incorporate change trajectories (time-dependent changes in variables between repeated measurements
across time) in predictor and/or outcome variables, despite the fact that IS phenomena often involve
constantly changing variables in dynamic relationships. For example, when users start using an IT
application for task accomplishment, the number of IT features used could possibly change over time. This
change trajectory in the number of IT features used needs to be taken into consideration when examining
the impact of IT feature use on task performance (Benlian, 2015). Traditional panel data analyses cannot
support trajectory change assessment adequately. For example, in fixed-effects models, variation over time
is absorbed by the time fixed effects and is treated as incidental fluctuation (Wooldridge, 2010; Zheng,
Pavlou, & Gu, 2014). This model cannot be applied to assess trajectory changes of predictor and outcome
variables, or to incorporate such trajectory changes in model assessment. Research examining change
trajectories in variables, however, is important in theory building, both to understand change patterns and
to explore the dynamic longitudinal relationships among variables. This is because a number of IS theories,
such as Information Technology (IT) adoption theories, technology diffusion theories, and information
processing theories, are rooted in the change patterns of variables and their longitudinal relationships over
time (Zheng et al., 2014).
Latent growth modeling (LGM), which was recently introduced in the IS literature (Bala & Venkatesh, 2013;
Benlian, 2015; Serva, Kher, & Laurenceau, 2011; Söllner, Pavlou, & Leimeister, 2016; Zheng et al., 2014),
addresses the first drawback. It enables the examination of change trajectories in variables and has been
used to model how the change process evolves (Zheng et al., 2014). LGM, however, does not address the
second drawback of traditional models, namely their inability to examine feedback loops between predictor
and outcome variables over time. A feedback loop captures the causality between two variables with
reciprocal causal links (Fang, Lim, Qian, & Feng, 2018). We define a positive feedback loop as one that has
the tendency to reinforce the initial action, and a negative feedback loop as one that has the tendency to
oppose the initial action (de Gooyert, 2019). Feedback loop consideration is relevant in many IS
phenomena. For example, while the IT business value literature has established that IT investments can
improve firm performance, recent research suggests that such improvements also lead to subsequent IT
investments, which suggests a positive feedback loop between IT investment and firm performance (Baker,
Song, & Jones, 2017). Yet, neither traditional panel data models nor LGM can examine whether a positive
feedback loop exists between IT investment and firm performance in a single model while incorporating
change trajectories in variables. As a result, our understanding of the relationship between IT investment
and firm performance has been limited to a unidirectional view, while there might be more subtle and
complex two-way causal interactions between these two variables over time.
This study introduces a more comprehensive and advanced method, a bivariate dynamic latent difference
score model (BDLDSM), also known as a latent change score model, to study how relationships between
the predictor variable and the outcome variable evolve over time. BDLDSM addresses both of the limitations
discussed above. Digital phenomena where longitudinal data can be brought to bear to examine dynamic
and reciprocal relationships between variables abound. BDLDSM enables IS researchers to examine these
phenomena and facilitates longitudinal theory extension and development. Specifically, BDLDSM enables
IS researchers to (1) understand the rate of change in a variable over time, (2) examine constructs from a
reciprocal, longitudinal development perspective, (3) gain a nuanced understanding of the dynamic
longitudinal relationship between the predictor and outcome variables while addressing reverse causality,
and (4) examine feedback loops between variables. We discuss each of these advantages next.
First, BDLDSM enables IS researchers to gain a comprehensive understanding of the overall rate of change
in variables as the outcome of interest by allowing them to identify the sources of the change. The method
enables researchers to ascertain whether the overall rate of change in the outcome variable comes from constant
change over time, is proportional to the level of the outcome variable at the previous time point, or is influenced
by the level of the predictor at the previous time point.
Second, BDLDSM enables IS researchers to examine traditionally static constructs from a reciprocal,
longitudinal development perspective, which may lead to considerable theory extension. For example, trust
is commonly treated as a static concept in IS research due to methodological limitations (Serva et al.,
2011; Söllner et al., 2016; Zheng et al., 2014), but the perceptions of trust may evolve over time. Further, it
may have a reciprocal relationship with other constructs. Serva et al. (2011) introduced a scenario for
longitudinal designs in which researchers usually study the initial trust of users when they first contract with
e-vendors. However, the perception of trust may evolve over time, as the users’ relationship develops with
e-vendors. Thus, it is important to study the change trajectory of the perception of trust over time. BDLDSM
can be applied to study the change trajectory of trust, and how trust dynamically impacts other constructs,
such as transaction intentions, while accounting for change trajectories. In addition, BDLDSM can be used
to examine the reciprocal nature of trust. For example, Serva, Fuller, and Mayer (2005) investigated the
reciprocal trust between interacting teams. BDLDSM can be applied to further examine how the trust from
one party changes over time when this party observes the actions of the other party and reconsiders its
subsequent trust-related attitudes and behaviors (Serva et al., 2005).
Third, BDLDSM enables IS researchers to gain a nuanced understanding of the dynamic longitudinal
relationship between the predictor and outcome variables, which cannot be resolved by other longitudinal
research models, including LGM. For example, using LGM, Zheng et al. (2014) investigated the longitudinal
relationship between the weekly word of mouth (WOM) volume and the weekly sales rank. Using BDLDSM,
IS researchers can answer the following type of research question: Does weekly WOM volume positively or
negatively affect the subsequent change of weekly sales rank while accounting for the reverse causality
from weekly sales rank to WOM volume and further accounting for the weekly sales from the previous week
and the constant change of weekly sales over the course of the study?
Fourth, BDLDSM enables IS researchers to examine feedback loops between variables. This method can
contribute to the ongoing discussion in the IS literature about the nature of causal relationships between IT
investments, use, and performance by emphasizing the feedback loops between these variables. For
example, IT use among individuals, groups, organizations, and countries and the relationship of such use
with various economic, social, cognitive, and other outcomes is an established area of inquiry in the IS
literature. BDLDSM allows researchers to test the potential feedback loops between IT use and these
outcomes. By facilitating the analysis of such data, BDLDSM can help IS researchers disentangle the true
nature of the relationship between IT use and outcomes. Results obtained from these analyses can spur
longitudinal theory creation and subsequent testing.
In this research, we illustrate the application of BDLDSM by investigating the longitudinal relationship
between health information technology (HIT) applications and hospital performance. Extant research largely
relies on a static framework, despite using panel data methods, to study the relationship between HIT
implementation and hospital performance. Such a static framework may not be able to reveal the dynamic
relationship between HIT and hospital performance. Further, very few prior studies that applied a dynamic
framework have examined the influence of trajectory changes and potential feedback loops between HIT
and hospital performance (e.g., Menon and Kohli 2013). Considering trajectory changes for both HIT
implementation and hospital performance is vital when examining dynamic lead-lag association. A dynamic
lead-lag association examines how the levels of the predictor variable temporally precede and lead changes
in the outcome variable (Grimm, An, McArdle, Zonderman, & Resnick, 2012). Overlooking these trajectory
changes can impact the significance levels and the directions of the effects of HIT implementation on
hospital performance. Further, HIT implementation and hospital performance may develop feedback loops
over time. For example, an increased HIT implementation level may lead to hospital performance
improvement and that improved hospital performance may further lead to a higher level of HIT
implementation. Thus, we plan to extend the current literature that studies HIT impact on healthcare
performance to provide further empirical tests while accounting for reverse causality and change trajectory.
In this study, to illustrate the application of BDLDSM, we focus on one hospital performance measure,
experiential quality, which evaluates patients’ perceptions of the quality of care they receive at a hospital
based on their interactions with healthcare providers (Angst, Devaraj, & D'Arcy, 2012; Chandrasekaran,
Senot, & Boyer, 2012; Pye, Rai, & Baird, 2014; Senot, Chandrasekaran, Ward, Tucker, & Moffatt-Bruce,
2016; Sharma et al., 2016).
Our study contributes to the IS literature in two major ways. First, to the best of our knowledge, this is the
first paper in the IS field that introduces BDLDSM, which is an emerging methodological approach that is
ideally suited to understand the overall rate of change in variables and to study dynamic, longitudinal
relationships between variables, while incorporating their change trajectories. Despite BDLDSM’s significant
potential for confirming longitudinal theoretical models, it has not, to our knowledge, been applied in the IS
literature. We demonstrate that BDLDSM can be used to examine research questions for which other
existing, widely used methods are inadequate such as theorizing longitudinal change and examining
feedback loops between predictor and outcome variables. Our paper aids longitudinal theory development
by enabling IS researches to theorize and test forms of changes (e.g., linear or nonlinear), levels of
changes (e.g., within units change, between units change, or both), and dynamic longitudinal relationships
in both descriptive and explanatory longitudinal research. Second, to the best of our knowledge, the
interplay between HIT implementation levels and hospital performance over time has not been previously
studied by incorporating the growth rate of HIT implementation levels and hospital performance variables
within a dynamic framework to incorporate the dynamic effects. Our paper is the first to evaluate this
interplay over time by incorporating the change trajectories of HIT implementation levels and hospital
performance variables. Our paper not only extends the current HIT value literature by examining HIT impact
on experiential quality from a dynamic and nonlinear perspective, but also provides insights regarding the
nonlinear change trajectories of HIT implementation levels and the potential feedback loop between HIT
implementation levels and hospital performance. We provide well-documented Mplus code with a
covariance matrix (see Appendices E and F) that IS researchers can easily adapt for their own uses, as well as a
bibliography that points to foundational references on BDLDSM.
2 Literature Review
We conducted a systematic review of published longitudinal research in the top IS journals from 2004-2018
(15 years). Our review reveals that longitudinal research methods used in IS research have largely ignored
change trajectories in variables over time. We review these methods and then discuss LGM, which is
a research method that incorporates change trajectories in variables over time but fails to examine the
dynamic lead-lag association or feedback loops between variables.
2.1 Review of Longitudinal Research in Information Systems
We begin our analysis by reviewing longitudinal research published in the “Senior Scholars' Basket of
Journals” because these journals are accepted in the IS community as top journals, and Management
Science (MS) because it is a highly-rated general-purpose journal, where IS colleagues regularly publish
their work (specifically, the journals we reviewed are the European Journal of Information Systems (EJIS),
Information Systems Journal (ISJ), Information Systems Research (ISR), Journal of the Association for
Information Systems (JAIS), Journal of Information Technology (JIT), Journal of Management Information
Systems (JMIS), MIS Quarterly (MISQ), Journal of Strategic Information Systems (JSIS), and Management
Science (MS); only those papers that were accepted by the IS department at MS were included in this analysis;
however, BDLDSM has not been used in papers published in other departments by IS scholars).
We identified 190 articles involving longitudinal research that applied quantitative methods
between 2004 and 2018. The most common analysis techniques were linear unobserved effects panel data
models (66 papers), SEM (36 papers), random coefficient models (11 papers), and other regression models
(e.g., ordinary least squares (OLS), Negative Binomial (NB), Difference-in-Difference (DID)) (15 papers)
(see appendix A for more details on the search process, search result summary, and the articles that were
identified for each data analysis method). Below, we discuss the advantages and disadvantages of the two
most frequently applied analysis techniques.
The most commonly applied longitudinal analysis technique in the IS field is the linear unobserved effects
panel data model. Two commonly used linear unobserved effects models are the fixed effects model and
the random effects model. In the fixed effects model, the unobserved effects are allowed to arbitrarily
correlate with the predictors. In the random effects model, the unobserved effects are not allowed to
arbitrarily correlate with the predictors. Random effects models are also used when the effects of time-invariant
predictors are important (e.g., Langer, Slaughter, & Mukhopadhyay, 2014).
Both fixed and random effects models make strict exogeneity assumptions wherein predictors in each period
are expected to be uncorrelated with the idiosyncratic error in each period. The assumption no longer holds
if a lagged dependent variable is one of the predictors. In recent years, researchers have published a
number of papers that apply dynamic panel models to address this issue (Aral, Brynjolfsson, & Van Alstyne,
2012; Bhargava & Mishra, 2014; Menon & Kohli, 2013; Tambe & Hitt, 2012). A special type of dynamic
panel models that can be applied in a system of equations is called a panel vector autoregressive model
(PVAR). In recent years, a small number of IS studies have used a PVAR model to examine the relationship
between a system of interdependent variables (Adomavicius, Bockstedt, & Gupta, 2012; Chen, De, & Hu,
2015; Dewan & Ramaprasad, 2014; Thies, Wessel, & Benlian, 2016).
In summary, fixed and random effects models can test if a relationship exists between predictor and
outcome variables over time; dynamic panel models can account for dynamic outcome variables and
predictors that are not strictly exogenous; and PVAR can be used for a system of interdependent variables
to address autocorrelations and joint endogeneity. None of these approaches, however, can be applied to
model change trajectories or capture dynamic relations between two variables over time.
SEM, which is a multivariate technique that analyzes causal relationships among latent variables (Bollen,
2011), is the second most common longitudinal analysis technique in the IS literature. Researchers have
typically collected the predictor and outcome variables at different time points to study adoption, system
use, and post-adoption impacts (Sun, 2013; Sykes, Venkatesh, & Gosain, 2009; Venkatesh, Thong, Chan,
Hu, & Brown, 2011; Venkatesh, Zhang, & Sykes, 2011). Although separation of predictor and outcome
variables, such that predictors precede outcomes, establishes temporal precedence, it does not lend itself
to tracking changes in variables over time. To examine the change trajectory of the predictor and outcome
variables, IS researchers apply LGM (Bala & Venkatesh, 2013; Benlian, 2015; Serva et al., 2011), discussed
in the next section.
Finally, prior research in IS has also used random coefficient models, other regression models (e.g., OLS,
NB, DID models), survival models, and ANOVA (and ANOVA-like) techniques to analyze longitudinal
datasets. Please see Appendix A for the list of papers that applied these methods.
2.2 Review of LGM Research in Information Systems
To date, the use of LGM in the IS field has been limited (Li, Fang, Wang, & Lim, 2015; Serva et al., 2011;
Zheng et al., 2014). A major advantage of LGM over traditional SEM is that it offers precise information on
longitudinal change trajectories in variables over time (Benlian, 2015; Zheng et al., 2014), which is important
from a theoretical perspective. It is important to note that traditional SEM models are based on cross-
sectional analysis and do not account for longitudinal relationships (Zheng et al., 2014). Zheng et al. (2014)
have discussed the importance of introducing LGM in the IS field from both theoretical and practical
perspectives and provided analysis guidelines to help IS researchers better describe, measure, analyze,
and theorize longitudinal change. A few researchers in the IS field have applied LGM in their research. For
instance, Bala and Venkatesh (2013) employed LGM to develop a job characteristic change model during
an enterprise system implementation. Benlian (2015) adopted LGM and tested three functional forms of
change in IT usage. LGM models provide researchers with a dynamic view of interactions between predictor
and outcome variables over time.
Despite its benefits, LGM has two significant limitations. First, LGM cannot uncover the feedback loop
between the predictor and the outcome variable over time. However, it is important that IS researchers be
equipped with an analytical technique that has this capability. For instance, in the IT business value
research, the possibility of a positive feedback loop between IT investment and a firm’s productivity over
time is widely discussed (Baker et al., 2017), but without the use of a dynamic and reciprocal analysis
framework, researchers have not been able to examine this feedback loop. Reciprocal favors between
buyers and sellers in the online marketplace (Ou, Pavlou, & Davison, 2014) or reciprocity norms within a
dyadic relationship in knowledge exchange (Beck, Pahlke, & Seebach, 2014) are other areas where
dynamic reciprocal feedback may be relevant but has not been tested. Clearly, the ability to examine
feedback loops and reciprocal behaviors can help IS researchers to explore and understand dynamics more
fully across a variety of different contexts.
Second, LGM only captures static or time-invariant associations between variables (Grimm et al., 2016).
This static association cannot be used to examine effects related to subsequent changes. This limitation
may lead to an inadequate development of dynamic change theories. For instance, Zheng et al. (2014) use
LGM to examine the relationship between WOM communication and book sales over time. They find a
negative correlation between the slope of WOM communication and the slope of Amazon sales rank,
indicating that products with a slower growth of WOM communication tend to exhibit a faster decrease in
sales compared to other products. This association is a static, between-person association. The framework
does not unveil dynamic lead-lag associations between the predictor variable and the outcome variable;
LGM cannot be used to examine if levels in WOM communication precede subsequent changes in the sales
rank, and thus cannot be used to conclude that a slower growth of WOM communication is predicted to
yield a faster decrease in book sales.
To uncover the feedback loops between variables over time and to examine the dynamic association
between the predictor and outcome variables over time, we need to extend our current understanding of
LGM. Thus, we introduce an advanced dynamic LGM, BDLDSM, which is a proper subset of LGM, which is
itself a proper subset of SEM.
3 BDLDSM Model
3.1 The Value of and Need for BDLDSM in the IS Field
Contemporary research on longitudinal data analysis is shifting its focus toward tracking change trajectories
over time; as such, it calls for methods that combine features of existing techniques to analyze longitudinal
data more rigorously, answer new research questions, test change-related hypotheses, and promote time-
related theory development (McArdle, 2009). BDLDSM combines the features of LGM, cross-lagged, and
autoregressive models (Eschleman & LaHuis, 2014; McArdle, 2009). LGM provides information about how
growth in variables is related over time and answers research questions that focus on change from starting
point to finishing point (O'Rourke, 2016). BDLDSM not only answers research questions that LGM answers,
but also more involved and nuanced ones. Using BDLDSM, researchers can model the change process
(change in one variable from time t-1 to time t) by incorporating both growth change components that
represent the average change during the study time period and a proportional change component that
represents the variable level at time t-1 (Rudd & Yates, 2020). Cross-lagged models can be applied to
assess directional and reciprocal influences on intra-unit changes between predictor and outcome variables
over time (Rudd & Yates, 2020). However, unlike BDLDSM, cross-lagged models use covariances but not
mean structures, and thus cannot be applied to model growth over time. Autoregressive models can be
applied to assess the effect of the previous value but cannot be applied to model within-unit changes.
BDLDSM allows researchers to model complex change trajectories (incorporating both within-unit change
that measures the trajectory change of each individual unit and between-unit change that assesses how
individual units vary in their trajectories) (Rudd & Yates, 2020). It provides information about dynamic
relations between variables and enables modeling patterns of change by incorporating both growth change
components and a proportional change component (McArdle, 2009).
BDLDSM has been applied in several disciplines, including education, sociology, and psychology to study
the dynamic interplay between the predictor and outcome variables. For example, Grimm et al. (2016) used
BDLDSM to examine the dynamic lead-lag relationship between children’s mathematics ability and their
visual motor integration. Grimm (2007) employed BDLDSM to examine how the change of depression over
time can be predicted by previous academic achievement scores, and vice versa. In the psychology
literature, Sbarra and Allen (2009) used BDLDSM to study developmental issues related to sleep and mood
disturbances, while Kim and Deater-Deckard (2011) studied developmental issues related to dynamic
changes in anger and to externalizing and internalizing problems.
Having been established as a robust method in other fields in recent years, BDLDSM offers IS scholars an
opportunity to model the change between two time points for several measurement waves, analyze
longitudinal association between two variables, and advance longitudinal theorizing. BDLDSM can be used
to unpack research questions that cannot be answered by traditional longitudinal models. For example,
while traditional longitudinal models may conclude that a predictor variable has a positive influence on an
outcome variable, the result may merely suggest an average upward trajectory. In fact, the outcome variable
may drop during early stages and then increase rapidly to overcome the earlier disadvantage. Further, the
change in the predictor variable level itself may be driven by the change in the outcome variable.
Disentangling the driving force between the predictor variable and the outcome variable and investigating
their dynamic relationships in a more nuanced way are of interest to both IS researchers and practitioners
who want to gain a better understanding of the longitudinal effects in real-world phenomena. For example,
in section 4, we demonstrate the use of BDLDSM in probing the relationship between HIT implementation
level and an important measure of hospital performance, experiential quality, by asking the following
research questions: 1) What is the nature of the change process in experiential quality variable (change in
experiential quality from time t-1 to time t)? 2) What is the dynamic relationship between HIT implementation
level and experiential quality variables across time? Specifically, what is the best model for explaining the
relationship between them? 3) Is there a feedback loop between HIT implementation level and experiential
quality variables? Using BDLDSM, IS researchers can answer similar questions in their own studies in a
variety of contexts.
3.2 A Brief Introduction to BDLDSM
Since BDLDSM needs to fit the latent difference score (LDS) framework, it requires a few assumptions
regarding observed data and latent variables in the LDS model: 1) change in the model applies only to the
latent variables (true scores), where true scores and errors are separated at each time point; 2) the change
function does not vary across individuals over time, although the constant change (growth) factors may vary across
individuals; 3) the time interval between each set of latent variables is equal to the time interval between
every other set of latent variables in the model; 4) difference equations, which approximate differential
equations, are applied to represent change; and 5) means, variances, and covariances of observed variables
over time are given a restrictive structure in order to fit SEM frameworks (Hamagami & McArdle, 2007;
O'Rourke, 2016). Also, similar to other longitudinal models, in BDLDSM, measurement invariance needs to
hold over time to make sure the same constructs were measured over time (Kim, Wang, & Liu, 2020;
McArdle, 2009).
For BDLDSM, the specification of the LDS must account for measurement error and time-specific, construct-
irrelevant variance in the observed scores at each time point (McArdle, 2009). Below, we specify each
observed repeated measure as a function of a true score and an unobserved random error:

Y[t]i = y[t]i + ey[t]i (1)

X[t]i = x[t]i + ex[t]i (2)

where Y[t]i and X[t]i are the observed scores, y[t]i and x[t]i are true scores, and ey[t]i and ex[t]i are the
corresponding measurement errors at time t for the individual unit i. We then specify latent difference scores
Δy[t]i and Δx[t]i as the differences between the true scores at time t and t-1 in the individual unit i. The
resulting equations are:

y[t]i = y[t-1]i + Δy[t]i (3)

Δy[t]i = y[t]i - y[t-1]i (4)

x[t]i = x[t-1]i + Δx[t]i (5)

Δx[t]i = x[t]i - x[t-1]i (6)

where Δy[t]i and Δx[t]i are true change scores for the individual unit i from time t-1 to time t, y[t]i and x[t]i are
true scores for the individual unit i at time t, and y[t-1]i and x[t-1]i are the true scores for the individual unit
i at time t-1.
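To make the notation concrete with a purely hypothetical example, suppose a unit's latent true scores are y[1]i = 2.0 and y[2]i = 2.6. By equation (4), the latent change score is Δy[2]i = 2.6 - 2.0 = 0.6, and equation (3) simply recovers the later true score as y[2]i = 2.0 + 0.6 = 2.6; the model below explains where that 0.6 of change comes from.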
The trajectory of each set of change scores over time is parameterized using a random slope factor, with
loadings adjustable to reflect linear or nonlinear trajectories. The latent change score at each time period is
then a function of the random slope factor as well as prior levels of both y and x. The following two equations
represent models that have linear change trajectories:

Δy[t]i = gi + βy·y[t-1]i + γx·x[t-1]i (7)

Δx[t]i = ji + βx·x[t-1]i + γy·y[t-1]i (8)

where gi and ji are constant growth factors, which measure the stable, constant change (rate of growth)
over the course of the study; βy and βx, called proportional change parameters, are within-variable
proportional changes where the predicted changes are proportional to the level of the variable at time t-1;
and γx and γy are coupling parameters that specify cross-variable effects which determine how changes in
one variable from time t-1 to time t are predicted by the level of the other variable at time t-1. We can infer
that changes in y[t]i and x[t]i for the individual unit i from time t-1 to t come from three sources: the constant
growth factors (g and j), within-variable proportional effects (βy and βx), and cross-variable coupling effects
(γx and γy). In other words, the changes in y and x from time t-1 to time t are functions of three components:
constant change over the course of the study, proportional effect, and coupling effect. To account for the
nonlinear trajectory of change scores, we first need to specify growth models based on latent change scores,
which require the first derivative of the functional form of change with respect to time. For example, if the
growth factor follows a cubic form with respect to time t, we can assume the following cubic growth model:

y[t]i = y0i + y1i·t + y2i·t² + y3i·t³ (9)

The first derivative of (9) can be written as:

dy[t]i/dt = y1i + 2·y2i·t + 3·y3i·t² (10)

We then incorporate the derivative function into the bivariate latent change score framework:

Δy[t]i = y1i + 2·y2i·(t-1) + 3·y3i·(t-1)² + βy·y[t-1]i + γx·x[t-1]i (11)

where y1i, y2i, and y3i are latent growth factors for the latent change scores: y1i is the constant growth
factor (same as gi), y2i is the linear growth factor, and y3i is the quadratic growth factor. βy represents the within-
variable proportional change parameter and γx represents the cross-variable coupling parameter.
Corresponding derivative and change score model equations can be written for growth models with other
functional forms. Further, although not explored in this paper, BDLDSM can be extended to explore group
differences in relationships and to study how changes proceed in different subgroups with multilevel
modeling. A step-by-step guide to help researchers develop and conduct BDLDSM analyses is provided in
appendix B.
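To make the specification concrete, the sketch below shows one way the bivariate dual change score structure of equations (1) through (8) could be written in Mplus. It is an illustrative sketch only, not the authors' Appendix F code: the data file, the variable names (y1-y4 for the outcome and x1-x4 for the predictor, four waves each), and the parameter labels are all hypothetical, and the constant change factors are given the linear (all-ones) loadings of equations (7)-(8); nonlinear change functions such as equations (9) to (11) would alter the growth factors and their loadings accordingly.

TITLE:   Bivariate dual change score model with coupling (illustrative sketch);
DATA:    FILE IS example.dat;                ! hypothetical data file
VARIABLE:
  NAMES ARE id y1-y4 x1-x4;
  USEVARIABLES ARE y1-y4 x1-x4;
ANALYSIS:
  ESTIMATOR = ML;
MODEL:
  ! Latent true scores behind each observed score (equations 1-2)
  ly1 BY y1@1;  ly2 BY y2@1;  ly3 BY y3@1;  ly4 BY y4@1;
  lx1 BY x1@1;  lx2 BY x2@1;  lx3 BY x3@1;  lx4 BY x4@1;
  [y1-y4@0];  [x1-x4@0];                     ! observed intercepts fixed at 0
  y1-y4 (resy);                              ! equal measurement error variances for y
  x1-x4 (resx);                              ! equal measurement error variances for x
  ! Each true score equals the prior true score plus a latent change score (equations 3-6)
  ly2 ON ly1@1;  ly3 ON ly2@1;  ly4 ON ly3@1;
  lx2 ON lx1@1;  lx3 ON lx2@1;  lx4 ON lx3@1;
  dy2 BY ly2@1;  dy3 BY ly3@1;  dy4 BY ly4@1;
  dx2 BY lx2@1;  dx3 BY lx3@1;  dx4 BY lx4@1;
  ly2-ly4@0;  lx2-lx4@0;  [ly2-ly4@0];  [lx2-lx4@0];
  dy2-dy4@0;  dx2-dx4@0;  [dy2-dy4@0];  [dx2-dx4@0];
  ! Constant change factors g and j load on every change score (equations 7-8)
  g BY dy2-dy4@1;
  j BY dx2-dx4@1;
  [ly1];  [lx1];  [g];  [j];                 ! means of initial levels and constant change factors
  ly1 WITH lx1 g j;
  lx1 WITH g j;
  g WITH j;
  ! Proportional change (self-feedback) parameters, held equal over time
  dy2 ON ly1 (by);
  dy3 ON ly2 (by);
  dy4 ON ly3 (by);
  dx2 ON lx1 (bx);
  dx3 ON lx2 (bx);
  dx4 ON lx3 (bx);
  ! Coupling parameters: effect of the prior level of the other variable on the change
  dy2 ON lx1 (gx);
  dy3 ON lx2 (gx);
  dy4 ON lx3 (gx);
  dx2 ON ly1 (gy);
  dx3 ON ly2 (gy);
  dx4 ON ly3 (gy);
OUTPUT:  SAMPSTAT STANDARDIZED;

Fixing the coupling labels (gx, gy) at zero instead of estimating them would yield a no-coupling variant, which is how nested coupling structures can be compared with chi-square difference tests.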
4 An Application of BDLDSM
We now illustrate the application of BDLDSM to examine the dynamic, longitudinal relationship between
HIT implementation and experiential quality. A synthesis of research suggests that the current literature has
yet to sufficiently explore if a positive or negative feedback loop exists between HIT implementation levels
and experiential quality. For example, if there is a positive feedback loop between HIT implementation levels
and experiential quality, it may be that an increased HIT implementation level drives experiential quality
improvement and that hospitals with improved experiential quality are more likely to adopt additional HIT.
Accordingly, we chose to use BDLDSM because it enables us to tease out complex and potentially
reciprocal associations between HIT implementation and experiential quality.
4.1 Sample and Data Collection
We use data from three sources for this study. First, to obtain experiential quality data, we use the Hospital
Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey data collected annually for
2008–2013. This dataset records patients’ perceptions of the quality of care they received during their
inpatient hospital stays. Second, to obtain HIT implementation data, we use IT supplement files from
American Hospital Association (AHA) collected annually for 2008-2012. The AHA IT supplement database
is a hospital-level database containing HIT implementation-level information on three hospital IT functions:
electronic clinical documentation (ECD), computerized provider order entry (CPOE), and decision support
(DS). Third, to obtain hospital characteristics data, we use AHA’s annual survey dataset for 2008–2013.
The AHA survey dataset provides hospital demographics, organization structure, and operational and
financial information. After mapping these three datasets, our resulting dataset is an unbalanced panel data
set including 791 hospital-level observations from seven U.S. states (California, Florida, Maryland, North
Carolina, New York, New Jersey, and Washington) with five waves of HIT
implementation data and six waves of experiential quality data.
Experiential quality measures healthcare providers’ ability to engage in meaningful communications with
the patients (Angst et al., 2012; Pye et al., 2014; Senot et al., 2016). We used communication score to
measure experiential quality. This score is obtained by averaging respondent answers to four topics in the
HCAHPS survey. In keeping with prior research (Senot et al. 2016), we applied a logit transformation on the
computed average score to meet the assumption of normality (note that the functional form of the change
process, as well as the parameter estimates related to the dual change processes, would not necessarily hold
on the untransformed scale of the communication score). The following equation gives the
communication score, with i as the individual hospitals measured in year t and Qit as the average score for the
four communication items:

Communicationit = ln(Qit / (1 - Qit)) (12)
More details about the measurement of communication score can be found in appendix C, part I.
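As an illustration, if the averaged item score were read into Mplus as a proportion between 0 and 1 under the hypothetical variable name q, the transformation in equation (12) could be computed with the DEFINE command, where LOG() is the natural logarithm; the new variable would then be listed last on USEVARIABLES.

DEFINE:
  ! Logit transformation of the average communication score (equation 12); q assumed in (0, 1)
  comm = LOG(q / (1 - q));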
To assess the levels of HIT function implementation at each wave, we used a total of 16 items to create
three HIT constructs: ECD enables care providers to access and record patient information; CPOE allows
physicians to give instructions and order medicines and procedures; and DS supports decision making by
giving care providers access to information that helps them accurately diagnose patient conditions, consult
the latest evidence, and provide patient-specific care. We decided to form factor scores for three HIT
constructs instead of modeling these constructs directly as latent variables for two reasons. First, the
psychometric properties of these items and constructs have been established in the literature, and the
reliability and validity of the 16 HIT measurement items used in this paper have already been verified (Adler-
Milstein, Everson, & Lee, 2015; Ayabakan, Bardhan, Zheng, & Kirksey, 2017; Everson, Lee, & Friedman,
2014). Second, forming factor scores for the three HIT constructs gives us better control over model
identification in a complex BDLDSM model. More details about these measurements can be found in
appendix C, part II.
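For readers unfamiliar with this step, the sketch below shows one generic way to estimate a single-factor measurement model and export factor scores in Mplus; the construct, item names (ecd1-ecd5), and file names are hypothetical and do not reproduce the measurement models documented in appendix C.

TITLE:    One-factor model for an HIT construct with saved factor scores (illustrative);
DATA:     FILE IS hit_items.dat;             ! hypothetical data file
VARIABLE:
  NAMES ARE id ecd1-ecd5;
  USEVARIABLES ARE ecd1-ecd5;
  IDVARIABLE IS id;
MODEL:
  ecd BY ecd1-ecd5;                          ! single-factor measurement model
SAVEDATA:
  FILE IS ecd_fscores.dat;                   ! hypothetical output file
  SAVE = FSCORES;                            ! writes estimated factor scores per hospital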
To account for other factors that may influence communication score and HIT implementation level, we
included five control variables: hospital bed size, profit status, teaching status, dummy variables for state
effect, and market competition. We obtained hospital bed size, profit status, teaching status variables from
AHA survey datasets. We measured market competition using the Herfindahl-Hirschman index (HHI). For
a focal hospital, we operationalized market competition at the hospital referral region (HRR) level and
aggregated hospitals into HRRs.
It is important to mention that our datasets included missing responses from some hospitals in some years.
Specifically, not all hospitals provide details on HIT implementation variables (the AHA IT survey) and
communication scores (the HCAHPS survey) every year. Since the official documentation and data for the
AHA IT and HCAHPS surveys provide no evidence that the propensity of missingness on these surveys
depends on the HIT implementation levels and communication score, respectively, we rule out the missing not
at random (MNAR) assumption. We further examine whether the missing data satisfy the missing at random
(MAR) assumption or the missing completely at random (MCAR) assumption. Our analysis of the dataset
shows that for-profit hospitals are more likely than not-for-profit hospitals to have missing HIT
implementation data and that hospitals with fewer beds are more likely to have missing communication scores
than hospitals with more beds. Since MAR allows missingness to depend on other observed variables (e.g., a
hospital's profit status and size in our case), missing data in the dataset are assumed to occur at
random under the MAR assumption. Thus, full information maximum likelihood (FIML), a commonly applied
method that handles the incompleteness in longitudinal studies, is applied to estimate the BDLDSM models
using all available information in the presence of missing data (Grimm, 2007; Grimm, Ram, & Estabrook,
2017; Klopack & Wickrama, 2020; McArdle & Hamagami, 2001).
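In Mplus, FIML estimation under MAR requires no special model statements: with maximum likelihood estimation and a declared missing-value flag, all available observations are used by default. A minimal sketch of the relevant input lines follows; the missing-data code (-999) is an assumption about how the raw file is coded, not a value from our data.

VARIABLE:
  MISSING ARE ALL (-999);     ! assumed missing-data code in the merged panel file
ANALYSIS:
  ESTIMATOR = ML;             ! ML with declared missing data uses all available information (FIML)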
4.2 Data Analysis and Results
We propose a three-step process to develop and conduct the BDLDSM analysis. This three-step process
is described in further detail in Appendix B. The first step is to test measurement invariance to establish
whether the same constructs were measured over time. We began our test by confirming configural
invariance, metric invariance, and scalar invariance of the three-factor structure for HIT (see appendix D).
Configural invariance is to test whether the same items measure the constructs across time; metric
invariance is to test whether the factor loadings of the items that measure the constructs are equivalent
across time; and scalar invariance tests whether the items’ intercepts are equivalent across time. Upon
confirmation, we subsequently computed the resultant factor scores for each hospital at each time point and
used them as the HIT implementation variables in all the longitudinal models.
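The fragment below illustrates, in hedged form, how metric invariance can be imposed in Mplus for a construct measured with three items at two waves: corresponding loadings share a label and are thereby constrained equal, configural invariance frees those labels, and scalar invariance additionally equates the item intercepts. The item names (ecd1_t1 through ecd3_t2) are hypothetical, and the fragment is not the authors' appendix D syntax.

MODEL:
  ! Wave 1 and wave 2 factors; first loading fixed at 1 by default
  ! Shared labels (L2, L3) impose metric invariance across waves
  ecdt1 BY ecd1_t1
           ecd2_t1 (L2)
           ecd3_t1 (L3);
  ecdt2 BY ecd1_t2
           ecd2_t2 (L2)
           ecd3_t2 (L3);
  ! Residuals of the same item allowed to covary across waves
  ecd1_t1 WITH ecd1_t2;
  ecd2_t1 WITH ecd2_t2;
  ecd3_t1 WITH ecd3_t2;
  ! Scalar invariance (not shown) would add, e.g.: [ecd2_t1] (I2); and [ecd2_t2] (I2);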
Modeling Growth Trajectories for the Predictor and Outcome Variable
The second step in our analysis involved modeling growth trajectories for the predictor and outcome variable
to determine the proper functional form of change for each. Researchers need to appropriately choose the
best-fit change trajectory functions before implementing BDLDSM to ensure accurate representation of the
dynamic associations between the predictor variable and the outcome variable.
We assume that HIT implementation levels increase in a nonlinear manner over time for two theoretical
reasons. First, from a resource-based perspective, hospitals need to change their current clinical processes
to fit adopted technologies with existing resources and processes. This transition process takes time
(Atasoy, Chen, & Ganju, 2018). Second, from the business value of IT perspective, there is a learning curve
associated with the use of technologies in hospitals during the first few years after the adoption. Based on
these two reasons, hospitals may adopt technologies at a relatively slow rate in the first few years but at a
faster rate during subsequent years. In other words, technology implementation levels in hospitals may grow
with a positive accelerating rate over time to cope with the learning curve associated with the use of
technologies.
We next discuss the growth rate of communication score. As we described in section 4.1, the communication
score is captured by the HCAHPS survey. Results of the HCAHPS survey began to be reported on the
Hospital Compare website in 2008. Public reporting of patient hospital experience, including communication
with healthcare providers, allows patients to compare and choose better-performing hospitals. Thus, at the
early stage of public reporting, hospitals would apply a variety of methods to monitor and improve
communication quality (Elliott et al., 2015). Yet, according to the law of diminishing returns, there may be a
diminishing value of successive interventions to further improve communication quality. Thus, we assume
that the growth rate of communication score may diminish over time. Accordingly, we assume that
communication score may increase in a nonlinear manner over time as well.
We examined how the average trajectories of HIT and communication score variables change over time.
Figure 1 illustrates the average growth trajectories of ECD, CPOE, DS, and communication score,
respectively. Although the average change trajectory plots can be useful graphical summaries, they may
not reflect the shape of the individual hospital trajectories. To explore the functional form of intra-hospital
change over time, we conducted an extensive descriptive analysis to understand the nature and
idiosyncrasies of each hospital’s temporal pattern of growth (Singer & Willett, 2003). We began with a simple
graphic visualization by examining randomly selected arrays of individual hospitals’ empirical growth plots.
We then used a nonparametric approach for smoothing each hospital’s temporal idiosyncrasies without
imposing a specific functional form. We present examples of selected Lowess Smoothing plots of ECD,
CPOE, DS, and communication score in Appendix E. From Figure 1 and Appendix E, we notice that the
trajectories of ECD, CPOE, DS, and communication score variables suggest the possibility of non-linearity.
Hence, we fit the variables to three types of growth models (a linear growth model, a quadratic growth
model, and a cubic growth model) to identify the best functional form of change (see Appendix B, step 2
for more details on modeling growth trajectories for the predictor and outcome variables). There was no
need to test a no-growth model because all variable trajectories were clearly increasing over time.
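As an illustration of this second step, the input below sketches the quadratic growth model for one HIT variable measured over the five waves using Mplus growth-model shorthand; the file and variable names are hypothetical (waves 1-5 stand for 2008-2012), and the linear and cubic competitors differ only in dropping the quadratic factor or adding a cubic one.

TITLE:    Quadratic latent growth model for one HIT variable (illustrative);
DATA:     FILE IS hit_panel.dat;             ! hypothetical data file
VARIABLE:
  NAMES ARE id cpoe1-cpoe5;                  ! waves 1-5 correspond to years 2008-2012
  USEVARIABLES ARE cpoe1-cpoe5;
  MISSING ARE ALL (-999);                    ! assumed missing-data code
MODEL:
  ! i = intercept, s = linear slope, q = quadratic slope; time scores 0-4
  i s q | cpoe1@0 cpoe2@1 cpoe3@2 cpoe4@3 cpoe5@4;
OUTPUT:   SAMPSTAT STANDARDIZED;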
Figure 1. Growth Trajectory of ECD, CPOE, DS, and Communication Score
To systematically test which model provided the best fit, we applied chi-square difference tests to evaluate
comparative fit in a pairwise fashion. Since chi-square is sensitive to sample size, four additional fit indices
were also examined to assess model fit: the root mean square error of approximation (RMSEA), the
comparative fit index (CFI), Tucker-Lewis index (TLI), and the standardized root mean square residual
(SRMR). The most commonly used criteria for fit statistics include RMSEA < 0.08, CFI > 0.95, TLI > 0.95,
and SRMR < 0.08 (Hu & Bentler, 1999; Zheng et al., 2014). Table 1 shows fit statistics for the linear growth
model, quadratic growth model, and cubic growth model for each HIT and communication score variables.
From Table 1, we find that the best fitting HIT models, quadratic growth models, show adequate model fit
and are selected by the strict chi-square difference test, whereas the worse fitting linear growth models do
not. We also note that even though linear and quadratic models for communication scores provide
reasonable model fit, the cubic model provides the best fitting index and is selected by the strict chi-square
difference test. Thus, we conclude that quadratic growth models provide the best fit for the three HIT
variables (ECD, CPOE, and DS), whereas the cubic growth model provides the best fit statistics for
communication score.
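The chi-square difference test underlying these comparisons is straightforward to reproduce from Table 1. Taking the communication score models as a worked example, Δχ² = 126.86 - 69.60 = 57.26 with ΔDF = 16 - 12 = 4 for M1 versus M2, which is significant at p < 0.001 and favors the quadratic over the linear model; the M2 versus M3 comparison (Δχ² = 69.60 - 34.97 = 34.63 with ΔDF = 12 - 7 = 5) is likewise significant and favors the cubic model.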
Table 1. Model Comparison of Change Form

Variable | Model | χ2 (DF) | Model Comparison | Δχ2 | ΔDF | RMSEA [90% C.I.] | CFI | TLI | SRMR | Best Model
ECD | M1: Linear Growth | 92.16 (10) | | | | 0.100 [0.082, 0.119] | 0.871 | 0.871 | 0.089 | M2: Quadratic Growth
ECD | M2: Quadratic Growth | 8.54 (6) | M1 vs. M2 | 83.61*** | 4 | 0.023 [0.000, 0.054] | 0.997 | 0.995 | 0.020 |
ECD | M3: Cubic Growth | 3.64 (1) | M1 vs. M3 | 88.51*** | 9 | 0.057 [0.000, 0.124] | 0.997 | 0.970 | 0.011 |
 | | | M2 vs. M3 | 4.9 | 5 | | | | |
CPOE | M1: Linear Growth | 111.95 (10) | | | | 0.111 [0.093, 0.130] | 0.811 | 0.811 | 0.095 | M2: Quadratic Growth
CPOE | M2: Quadratic Growth | 11.08 (6) | M1 vs. M2 | 100.87*** | 4 | 0.032 [0.000, 0.061] | 0.992 | 0.987 | 0.025 |
CPOE | M3: Cubic Growth | 2.01 (1) | M1 vs. M3 | 109.94*** | 9 | 0.035 [0.000, 0.107] | 0.998 | 0.985 | 0.009 |
 | | | M2 vs. M3 | 9.07 | 5 | | | | |
DS | M1: Linear Growth | 74.91 (10) | | | | 0.089 [0.071, 0.108] | 0.878 | 0.878 | 0.073 | M2: Quadratic Growth
DS | M2: Quadratic Growth | 4.16 (6) | M1 vs. M2 | 70.75*** | 4 | 0.000 [0.000, 0.036] | 1.000 | 1.000 | 0.017 |
DS | M3: Cubic Growth | 2.16 (1) | M1 vs. M3 | 72.74*** | 9 | 0.038 [0.000, 0.109] | 0.998 | 0.982 | 0.010 |
 | | | M2 vs. M3 | 1.99 | 5 | | | | |
Communication Score | M1: Linear Growth | 126.86 (16) | | | | 0.093 [0.078, 0.109] | 0.969 | 0.970 | 0.047 | M3: Cubic Growth
Communication Score | M2: Quadratic Growth | 69.60 (12) | M1 vs. M2 | 57.26*** | 4 | 0.078 [0.060, 0.096] | 0.990 | 0.987 | 0.045 |
Communication Score | M3: Cubic Growth | 34.97 (7) | M1 vs. M3 | 91.89*** | 9 | 0.071 [0.048, 0.095] | 0.995 | 0.990 | 0.033 |
 | | | M2 vs. M3 | 34.63*** | 5 | | | | |
Note: *** p<0.001, ** p<0.01, * p<0.05, + p<0.1
Finding the Best BDLDSM Model and Interpreting the Results
The third step is to find the best BDLDSM model to test the impact of HIT implementation level on
communication score. We evaluated three models to test dynamic relationships between HIT
implementation and communication score: Model 1, no coupling effects; Model 2, a coupling effect from
HIT implementation to the change of communication score (ΔCommunication); and Model 3, full coupling effects.
We use Model 1 to test if there exists a dynamic association between HIT implementation and
communication score, Model 2 to examine if HIT implementation is a leading indicator (predictor of
subsequent changes) of communication score, and Model 3 to estimate if there is a feedback loop between
HIT implementation and communication. We mapped time-invariant covariates (hospital bed size, profit
status, teaching status, market competition, and state effect) as predictors of the intercept and growth
factors for CPOE, ECD, DS, and communication score. Because the three HIT variables (CPOE, ECD, and
DS) have the best fit statistics in the quadratic models and communication score measure has the best fit
statistics in the cubic growth model, we fit the former using BDLDSM with the first derivative of the quadratic
growth function and the latter using the first derivative of the cubic growth function (see equations (9) to (11)).
Table 2 shows the chi-square model comparison among the three models for each pair of the predictor
variable and the outcome variable and the standard SEM fit indices (RMSEA, CFI, TLI, and SRMR) for each
model. According to the chi-square difference test and the fit indices, the model with the coupling effects
from CPOE to the ΔCommunication has good overall fit and best represent the dynamic association
Communications of the Association for Information Systems
13
Volume 44
10.17705/1CAIS.044XX
Paper XX
between communication and CPOE.
5
The full coupling models have good overall fit and best represented
the dynamic associations between communication and ECD and between communication and DS.
6
Table 2. Model Comparison of BDLDSM (Nonlinear Change Function)

Pairs of DV and IV | Model | χ2 (DF) | Model Comparison | Δχ2 | ΔDF | RMSEA [90% C.I.] | CFI | TLI | SRMR | Best Model
CPOE and Communication | M1: No coupling | 141.648 (77) | | | | 0.033 [0.024, 0.041] | 0.991 | 0.980 | 0.016 | M2: IV to ΔDV
CPOE and Communication | M2: IV to ΔDV | 115.523 (76) | M1 vs. M2 | 26.125*** | 1 | 0.026 [0.016, 0.035] | 0.994 | 0.988 | 0.016 |
CPOE and Communication | M3: Full coupling | 114.047 (75) | M1 vs. M3 | 27.601*** | 2 | 0.026 [0.015, 0.035] | 0.994 | 0.988 | 0.016 |
 | | | M2 vs. M3 | 1.476 | 1 | | | | |
ECD and Communication | M1: No coupling | 132.445 (77) | | | | 0.030 [0.021, 0.039] | 0.992 | 0.984 | 0.016 | M3: Full coupling
ECD and Communication | M2: IV to ΔDV | 112.999 (76) | M1 vs. M2 | 19.446*** | 1 | 0.025 [0.014, 0.034] | 0.995 | 0.989 | 0.016 |
ECD and Communication | M3: Full coupling | 106.505 (75) | M1 vs. M3 | 25.94*** | 2 | 0.023 [0.012, 0.033] | 0.996 | 0.990 | 0.015 |
 | | | M2 vs. M3 | 6.494*** | 1 | | | | |
DS and Communication | M1: No coupling | 133.79 (77) | | | | 0.031 [0.022, 0.039] | 0.992 | 0.983 | 0.017 | M3: Full coupling
DS and Communication | M2: IV to ΔDV | 104.476 (76) | M1 vs. M2 | 29.314*** | 1 | 0.022 [0.010, 0.031] | 0.996 | 0.991 | 0.017 |
DS and Communication | M3: Full coupling | 98.962 (75) | M1 vs. M3 | 34.828*** | 2 | 0.020 [0.006, 0.030] | 0.997 | 0.992 | 0.016 |
 | | | M2 vs. M3 | 5.514* | 1 | | | | |
Note: (1) M1: BDLDSM with no coupling effects; M2: BDLDSM with a coupling effect from HIT implementation to the change of
communication score (ΔCommunication); M3: BDLDSM with full coupling effects. (2) All control variables, including hospital bed
size, profit status, teaching status, state effect, and market competition, are included. (3) *** p<0.001, ** p<0.01, * p<0.05, + p<0.1
The BDLDSM with a coupling effect from CPOE to the change of communication score (ΔCommunication)
can be written as:

ΔCommunication[t]i = cs1i + 2·cs2i·(t-1) + 3·cs3i·(t-1)² + βc·Communication[t-1]i + γIT·CPOE[t-1]i (13)

ΔCPOE[t]i = ITs1i + 2·ITs2i·(t-1) + βIT·CPOE[t-1]i (14)

where cs1i and ITs1i represent the constant growth factors, cs2i and ITs2i represent the linear growth factors,
and cs3i represents the quadratic growth factor. Since all control variables including hospital bed size, profit status,
teaching status, state effect, and market competition were modeled as time-invariant, we regressed the
growth factors on these five control variables. βc and βIT are the self-feedback coefficients, which capture
proportional change, that is, the effect of the same variable at the previous occasion on the change; γIT is the
coupling coefficient, representing the coupling effect from CPOE implementation level to ΔCommunication.
Next, we present the BDLDSM with full coupling, which accounts for coupling effects in both directions, i.e.,
from ECD or DS implementation level to the change of communication and vice versa. The latent change
equations to examine the relationship between ECD and communication score can be written as:

ΔCommunication[t]i = cs1i + 2·cs2i·(t-1) + 3·cs3i·(t-1)² + βc·Communication[t-1]i + γIT·ECD[t-1]i (15)

ΔECD[t]i = ITs1i + 2·ITs2i·(t-1) + βIT·ECD[t-1]i + γc·Communication[t-1]i (16)
The latent change equations to examine the relationship between DS and communication score can be
written as:

ΔCommunication[t]i = cs1i + 2·cs2i·(t-1) + 3·cs3i·(t-1)² + βc·Communication[t-1]i + γIT·DS[t-1]i (17)

ΔDS[t]i = ITs1i + 2·ITs2i·(t-1) + βIT·DS[t-1]i + γc·Communication[t-1]i (18)

where γIT is the coupling coefficient, representing the coupling effect from ECD or DS implementation level
to the change of communication score (ΔCommunication), and γc is the coupling coefficient, representing the
coupling effect from communication score to the change of ECD or DS implementation level (ΔECD or
ΔDS). Other parameters have the same interpretations as in previous equations (equations 13 and 14).
A path diagram of this bivariate dynamic latent difference score model with full coupling is illustrated in
Figure 2, and the definitions of the parameters are presented in Table 3. Time-invariant covariates including
hospital bed size, profit status, teaching status, state effect, and market competition are included as
predictors of the growth factors, although we do not show this in Figure 2. The unlabeled paths are fixed at
1 and residual variances on all the endogenous latent variables are fixed at 0. The other BDLDSM models
can be adapted from this full-coupling path diagram. For example, the path diagram for BDLDSM with a coupling effect from CPOE to the change of communication score (ΔCommunication) does not have the paths from communication score to the change of HIT implementation levels (ΔHIT) and thus does not include the coupling coefficient γc.
Figure 2. Path Diagram of a Bivariate Dynamic Latent Difference Score Model with Full Coupling
Table 3. Definition of Parameters of the BDLDSM Model in Figure 2
Parameter | Definition
c[t] (in circles) | Latent true scores for communication score at year t (from year 2008 to 2013)
IT[t] (in circles) | Latent true scores for HIT implementation levels at year t (from year 2008 to 2012)
C[t] (in rectangles) | Observed scores for communication score at year t (from year 2008 to 2013)
IT[t] (in rectangles) | Observed scores for HIT implementation levels at year t (from year 2008 to 2012)
ec[t] | Measurement error for communication score at year t (from year 2008 to 2013)
eIT[t] | Measurement error for HIT implementation levels at year t (from year 2008 to 2012)
Δc[t] | Latent change scores in communication score for each repeated assessment
ΔIT[t] | Latent change scores in HIT implementation levels for each repeated assessment
γIT | Coupling coefficient that represents the coupling effect from HIT implementation levels to the change of communication score (ΔCommunication)
γc | Coupling coefficient that represents the coupling effect from communication score to the change of HIT implementation levels (ΔHIT)
βc | Self-feedback coefficient that captures proportional change of communication score at the previous occasion
βIT | Self-feedback coefficient that captures proportional change of HIT implementation levels at the previous occasion
c0, cs | The intercept and the slope of latent communication score and change scores, respectively. The slope component of communication score (cs) includes the constant growth factor (sc1), the linear growth factor (sc2), and the quadratic growth factor (sc3).
IT0, ITs | Intercept and the slope component of latent HIT implementation levels and change scores, respectively. The slope component of HIT implementation levels (ITs) includes the constant growth factor (sIT1) and the linear growth factor (sIT2).
μc0, μcs | Intercept mean and slope mean for communication score, respectively
μIT0, μITs | Intercept mean and slope mean for HIT implementation, respectively
σ2ec, σ2eIT | Residual variance for communication score and HIT implementation levels, respectively
σ2c0, σ2IT0 | Variance of initial conditions for communication score and HIT implementation levels, respectively
σ2cs, σ2ITs | Variance of slopes for communication score and HIT implementation levels, respectively
σc0IT0 | Covariance of initial conditions of communication score and HIT implementation levels
σc0cs | Covariance of initial conditions and slope of communication score
σIT0ITs | Covariance of initial conditions and slope of HIT implementation levels
σcsITs | Covariance of slopes of communication score and HIT implementation levels
σc0ITs | Covariance of initial conditions of communication score and slope of HIT implementation levels
σcsIT0 | Covariance of initial conditions of HIT implementation levels and slope of communication score
σeCeIT08, σeCeIT09, σeCeIT10, σeCeIT11, σeCeIT12 | Covariance of residuals of communication score and HIT implementation levels for years 2008, 2009, 2010, 2011, and 2012, respectively
Lastly, we present the parameter estimates in Table 4 and interpret the results for the best fitting BDLDSM
models. Model 1 shows the reported parameters for the relationship that is best represented by the model
with the coupling effects from CPOE to the ΔCommunication. Models 2 and 3 in Table 4 show the
parameters for the two relationships that were best represented by full coupling effect model (i.e., ECD and
Communication and DS and Communication).
Table 4. Model Estimation of BDLDSM with Full Coupling Effect
Parameter | Model 1: CPOE and Communication | Model 2: ECD and Communication | Model 3: DS and Communication
Dynamic Coefficients |  |  |
Proportion βc | -0.965*** (0.112) | -1.248*** (0.147) | -1.303*** (0.152)
Coupling γIT | -0.519 (0.394) | -0.251** (0.091) | -0.362* (0.154)
Proportion βIT | 0.427 (0.407) | 0.841+ (0.496) | 0.733 (0.485)
Coupling γc |  | 2.025* (0.933) | 1.721 (1.083)
Latent Means |  |  |
 | 9.343*** (1.188) | 12.139*** (1.387) | 12.583*** (1.489)
 | 0.693+ (0.357) | 0.467*** (0.085) | 0.573*** (0.122)
 | 0.091 (0.069) | 0.037** (0.014) | 0.052* (0.023)
 | 2.038** (0.716) | -15.837+ (8.086) | -13.26 (9.291)
 | 0.234 (0.283) | -0.474 (0.418) | -0.375 (0.496)
Note: (1) *** p<0.001, ** p<0.01, * p<0.05, + p<0.1 (2) All control variables including hospital bed size, profit status,
teaching status, state effect, and market competition are included (3) Standard errors in parentheses
The change of communication score from one year to the next (ΔCommunication) has three sources: 1) a constant change of growth in the level of communication score each year, which includes the constant growth factor (sc1), the linear growth factor (sc2), and the quadratic growth factor (sc3); 2) the prior state of communication score (c[t − 1]); and 3) the prior state of HIT implementation levels (IT[t − 1]). The constant growth factor (sc1) is positively significant in models 1, 2, and 3, and the linear growth factor (sc2) and the quadratic growth factor (sc3) are positively significant in models 2 and 3, indicating that the change of communication score is positive across the course of this study. The proportional change effect (βc) is negatively significant in models 1, 2, and 3, indicating that a higher level of communication score is associated with a slower subsequent increase in communication score. The coupling effect of γIT is not statistically significant in Model 1, suggesting that CPOE is not a significant leading indicator of subsequent changes in communication score. The coupling effects of γIT are negatively significant in models 2 and 3, indicating that an increased implementation level of ECD or DS is a negative leading indicator of the subsequent change in communication score. In other words, hospitals that have a high ECD or DS implementation level in the current year tend to show less positive changes in communication score.
In conclusion, the change in communication score from one year to the next (ΔCommunication) is positively
predicted by constant change of growth in the level of communication score each year, negatively predicted
by communication score from the previous year, and negatively predicted by ECD and DS implementation
levels. In other words, hospitals would expect a constant increase in communication scores across the
years. One plausible explanation of this steady increase over time is that communication scores may be
enhanced based upon repeated feedback from patients (Senot et al. 2016; Sharma et al. 2016). If the
current year’s communication score is high, hospitals may experience a slower subsequent change
(increase) in communication score. Hospitals with a higher implementation level of ECD or DS in the current
year may also experience a slower subsequent change (increase) in communication score.
We next analyze the dynamic lead-lag associations between HIT implementation levels and communication scores to assess if there are feedback loops between these variables. We noticed that the coupling effect of γc is insignificant in Model 3, indicating that the communication score is not a leading indicator for the subsequent changes in DS implementation. We also noticed that the coupling effect of γc is significant in Model 2, implying that an increased communication score leads to a higher subsequent change in ECD implementation level. As we discussed in the previous paragraph, the coupling effect of γIT is negatively significant in Model 2 as well. This results in a dynamic process in which the implementation level of ECD tends to affect changes in communication score in a negative manner and communication score tends to affect changes in the implementation level of ECD in a positive manner. That is to say, there is a feedback loop between ECD and communication score where hospitals with higher implementation levels of ECD in the current year may experience a slower subsequent increase in communication score
and hospitals with higher communication score in the current year tend to show more positive changes in
ECD implementation level.
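A simplified numerical illustration helps make this loop concrete. Suppressing the growth-factor and control terms and plugging in the Model 2 estimates from Table 4, the two coupled change equations are approximately:

ΔCommunication[t] ≈ (growth terms) − 1.248·Communication[t − 1] − 0.251·ECD[t − 1]
ΔECD[t] ≈ (growth terms) + 0.841·ECD[t − 1] + 2.025·Communication[t − 1]

A higher ECD implementation level in year t − 1 thus pulls down the next year's change in communication score, while a higher communication score in year t − 1 pushes up the next year's change in ECD implementation, which is the feedback loop described above.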
Our examination of the dynamic relationship between HIT implementation level and experiential quality
variables across time reveals no effect of prior CPOE implementation levels on changes in communication
score. According to prior research, it is possible that CPOE facilitates the communication process by
supporting task execution and clinical workflow, and at the same time, it also introduces errors in the
communication process by misrepresenting communication as information transfer rather than interactive
sense-making (Coiera, Ash, & Berg, 2016; Queenan, Angst, & Devaraj, 2011).
We also find that an increased DS implementation level in the current year is predicted to decrease the
subsequent change of communication score. The reason may be that hospitals have to go through an
adaptation process in an adjustment period before the benefits from advanced clinical HIT, such as DS, can
be fully realized. During this adjustment period, healthcare providers may need to spend an increased
amount of time and energy to learn the new and sophisticated technologies and to adjust to the new clinical
routines in the presence of the patients (Sharma et al. 2016). Thus, there may be a reduction in
communication score in hospitals as a result of this adaptation process.
We identified one feedback loop between ECD implementation and communication score. Increased ECD
implementation is a leading indicator for a slower subsequent increase in communication score, and
increased communication score is a leading indicator for more positive changes in the subsequent ECD
implementation level. This result may reveal the two-sided effects of ECD. For example, the use of ECD
may yield less communication with patients, which might be more efficient but results in less satisfaction
from patients’ side due to lack of personalized interaction with care providers. Prior research indicates that
adopting higher levels of HIT can shift healthcare providers’ attention to standardized aspects of healthcare
delivery and away from communication-related activities because during care delivery, completing all clinical
tasks may take precedence over listening to the patients (Chandrasekaran et al. 2012). On the other hand,
increased communication score may indicate that, while patients are more satisfied because they interact
more with a care provider, a hospital may perceive that as a weakness in efficiency and consequently
implement ECD at a higher level.
Very few prior studies have analyzed how HIT implementation levels impact experiential quality, and none
has done so after accounting for reverse causality and incorporating trajectory change. This may explain
why some prior studies reported findings that appear to contradict the results we obtained. For example,
Sharma et al. (2016) found two different types of HIT jointly enhanced experiential quality, and Queenan et
al. (2011) found that CPOE use was positively related to experiential quality. To the best of our knowledge,
our study is the first to investigate how experiential quality impacts HIT implementation levels, and the
first to unveil the dynamic process between HIT implementation levels and experiential quality.
5 Discussion
5.1 Key Contributions
This study makes two major contributions to the IS literature. First, we show how BDLDSM can be used to
model the change process (change in one variable from time t-1 to time t) and to examine the dynamic
relationships between variables, thus enhancing the ability of IS researchers to develop and test longitudinal
theories of various phenomena. Our work provides the first demonstration in the IS literature of quantitatively
studying feedback loops between the predictor and outcome variables over time. We also offer detailed
guidelines for researchers to examine change as an outcome and to test the dynamic relationship between
the predictor variable and the outcome variable, while simultaneously considering the functional forms of
change. Further, our study presents the first description in the IS field of how to incorporate functional forms
of change in both the predictor and outcome variables in a BDLDSM, which facilitates theory development
relating to change. BDLDSM is equipped to assess the form of change (e.g., linear or nonlinear), the level
of change (e.g., within-unit change, between-unit change, or both), and dynamic longitudinal
relationships (Ployhart & Vandenberg, 2010). IS researchers can apply the form of change to
develop descriptive longitudinal research, which illustrates how a phenomenon changes over time. IS
researchers can also use the level of change and dynamic associations in explanatory longitudinal
research, which explains how the level of change in a predictor variable affects the subsequent change in
the outcome variable over time (Ployhart & Vandenberg, 2010). Both descriptive and explanatory
longitudinal research can be extended to explore and examine longitudinal dynamic relationships in the IS
field.
From the HIT value perspective, we extend the current literature that studies HIT impact on experiential
quality to include a dynamic and nonlinear perspective. We find that HIT implementation levels increase in
a quadratic way over time, and communication score grows with cubic trajectories over time. This suggests
the need for researchers to examine the impact of HIT on communication score using a
model that incorporates nonlinear functional forms of change for both the HIT and communication score
variables.
Further, we tested dynamic lead-lag relationships between three HIT functions and experiential quality using
BDLDSM and obtained a more comprehensive understanding of the rate of change in communication score.
Our results suggest that hospitals would expect a constant increase in communication scores across the
course of this study; however, this constant change is limited by the communication score and
implementation levels of ECD or DS at the preceding time point. We also identified a negative feedback
loop between ECD implementation level and communication score, indicating that hospitals with higher
implementation levels of ECD in the current year may experience a slower subsequent increase in
communication score and hospitals with higher communication score in the current year tend to show more
positive changes in ECD implementation level. However, we did not find a dynamic lead-lag relationship
between CPOE and communication score. A plausible explanation is that a learning curve may exist
between the CPOE implementation and communication score improvement, and the impact of CPOE may
take a longer duration to manifest. The insights from this study have significant implications for decision
makers in hospitals as well. In particular, managers need to be aware of the dynamic relationship between
HIT implementation levels and communication score to better allocate HIT resources. In order to help
facilitate the use of this method, we have provided the MPlus code in Appendix F, covariance matrix, mean,
and sample size in Appendix G, and the bibliography section for BDLDSM in Appendix H.
5.2 The Choice of Statistical Techniques in Longitudinal Research
Based on our review of the longitudinal analysis techniques frequently applied in the IS field between 2004 and 2018 (see the literature review section) and the advantages of BDLDSM in facilitating longitudinal theory extension and development, we have developed guidelines for IS researchers to determine which statistical
techniques to use when conducting longitudinal research, especially when examining the nature of change
in variables across time points. Table 5 compares the data requirements and the characteristics of the
longitudinal models mentioned in this paper. To determine which statistical techniques to use, we offer the
following four guidelines. First, researchers should identify the role of time in the theory-building process
and ensure that their design and analysis align with the theory (George & Jones, 2000; Mitchell & James,
2001). If researchers want to address the time lag between the predictor variable X and the outcome variable
Y for causal inference, they can use SEM, a linear unobserved effects model, or a random-coefficients
model. If they want to incorporate the trajectory change of X or Y in the longitudinal model, they can use
random-coefficients model, LGM, or BDLDSM. If researchers want to examine the dynamic associations or
feedback loop between X and Y, they can consider applying BDLDSM. If researchers want to decompose
the dynamic effect of variables, they need to use BDLDSM. For instance, changes in the outcome variable
may be influenced by the prior level of the outcome variable, the prior level of the predictor variable, and the
overall trajectory change of the outcome variable over time. If researchers want to study other aspects of
time, such as frequency, cycles, intensity, and duration, they can use other specific analysis techniques to
examine the role of time. For example, if researchers want to study when events occur by using time duration
as an outcome, they can use survival analysis techniques.
Second, researchers should consider how many waves of repeated measures they have collected. While
linear unobserved effects models and random-coefficients models need at least two time points of repeated
measures, LGM and BDLDSM need at least three time points of data (Zheng et al., 2014). At least three
waves of data are needed to identify and conceptualize the trajectory of change (Bala & Venkatesh, 2013;
Chan, 1998) and to distinguish nonlinearities in LGM and BDLDSM (Raudenbush, 2001).
Third, researchers should consider the hypotheses underlying the model of change and choose an analysis
technique accordingly (Ferrer & McArdle, 2003). For example, if identifying growth in each variable is
important for the hypothesis testing and can be detected in the data, LGM is preferred. If the overall rate of
change at each measurement and the dynamic relations between variables over time are the outcomes of
interest, BDLDSM is preferred. If identifying growth or change in variables is not important to the hypotheses
or the theory, researchers do not need to use growth analysis techniques.
Fourth, researchers should consider whether they need to test multilevel hypotheses. If so, they need to
use either a random-coefficients model or multi-level SEM/LGM/BDLDSM. Linear unobserved effects
models are not well suited for such investigations.
Table 5: Summary of Longitudinal Methods and Data Length Requirements
 | SEM |  | LGM | BDLDSM | Linear Unobserved Effects Model | Random Coefficient Model
Time periods | 1 | >=2 | >=3 | >=3 | >=2 | >=2
Within-unit Change | No | No | Yes | Yes | Yes | Yes
Between-unit Change | No | Yes | Yes | Yes | Yes | Yes
Change of Trajectory | No | No | Yes | Yes | No | Yes
Dynamic Lead-Lag Relationship | No | No | No | Yes | No | No
Dynamic Effect Identification | No | No | No | Yes | No | No
Feedback Loop (X<->Y) | No | No | No | Yes | No | No
5.3 Limitations and Conclusions
BDLDSM has its limitations. First, the causal lag examined in BDLDSM may be limited by the data sample
in terms of measurement resolution and sample size (Sbarra & Allen, 2009). For example, we used one-
year spacing between measurements, but it is likely that the causal lag between HIT and communication
score may be shorter or longer. If the true causal lag is shorter than the measurement interval, the parameter estimates will tend to be inflated (Sbarra & Allen, 2009). If the true causal lag is longer than the measurement interval, BDLDSM may be able to address it by modifying the corresponding specifications, but only if the lag corresponds to exactly two or more time units (e.g., a two-year lag). Additionally, in our
analysis, we noticed that some parameters (e.g., coupling effects parameter) are not significant at the 0.05
level even though the selected model suggested significant parameters. One of the reasons might be that
our sample size is not large enough to show all statistically significant coupling parameters. Researchers
need to have an adequate sample size to identify change trajectory and reliably estimate growth models
such as BDLDSM (Curran, Obeidat, & Losardo 2010). Determining an adequate sample size depends on a
few factors, including the complexity of the model, the size of measurement errors, effect size, attrition, and
the number and spacing of measurement occasions (Grimm et al., 2012). Consequently, researchers should
take the causal lag and sample size into consideration when using BDLDSM. Future research may collect
data using a higher or lower resolution of the measurement and an adequate sample size. This could help
researchers test the causal lag with different time spacing between measurements and compare the fits of
various models with data to identify the best model.
Second, BDLDSM’s complexity may lead to difficulties in interpreting results. Researchers need to not only
explain the form of change for both predictor and outcome variables but also interpret the various BDLDSM
parameters. Also, given the model’s complexity, it is difficult to use graphs to illustrate BDLDSM.
Consequently, we suggest that researchers use this model only if they want to probe the dynamic interplay
between variables over time. Further, BDLDSM cannot reveal the underlying mechanism of the result. For
example, in the illustration, we find that a higher level of communication score leads to a slower subsequent
change (increase) in communication score. However, we are not sure what leads to this effect. Further
research is needed to unveil the underlying mechanisms behind the BDLDSM results. Moreover, applying
the BDLDSM model without theoretical support in model development may result in overfitting problems.
We suggest researchers follow the principle of parsimony and apply BDLDSM with both theoretical and
empirical support.
Third, BDLDSM can only analyze two repeatedly measured variables in the model. Further, BDLDSM
excludes additional confounding or interacting constructs from the model. Thus, we need to explain the
model with caution and limit the conclusion to the variables studied as well as the studied timespan (Grimm
et al., 2017). For example, the demonstrated example in this paper examined the longitudinal relationship
between HIT implementation and experiential quality during the observation period.
Although we employed BDLDSM in the context of HIT implementation and experiential quality, the method
can be easily applied to other areas of interest to IS researchers. We have provided a few example
application domains in the introduction section. We believe that the generalized method we introduce in this
paper is agnostic to application context and can be used by researchers to simultaneously account for
change trajectories, model the change process, and test for dynamic lead-lag associations and feedback
loops between predictor and outcome variables. To our knowledge, this is the first time that BDLDSM has
been introduced to the IS literature. It is our hope that IS researchers will use this method to examine new
phenomena using newly collected data and revisit older phenomena by reanalyzing already collected data
to advance longitudinal data analysis and theorizing.
References
Adler-Milstein, J., Everson, J., & Lee, S. Y. D. (2015). EHR adoption and hospital performance: Time-related
effects. Health Services Research, 50(6), 1751-1771.
Adomavicius, G., Bockstedt, J., & Gupta, A. (2012). Modeling supply-side dynamics of IT components,
products, and infrastructure: An empirical analysis using vector autoregression. Information Systems
Research, 23(2), 397-417.
Angst, C. M., Devaraj, S., & D'Arcy, J. (2012). Dual role of IT-assisted communication in patient care: a
validated structure-process-outcome framework. Journal of Management Information Systems,
29(2), 257-292.
Aral, S., Brynjolfsson, E., & Van Alstyne, M. (2012). Information, technology, and information worker
productivity. Information Systems Research, 23(3), 849-867.
Atasoy, H., Chen, P.-y., & Ganju, K. (2018). The spillover effects of health IT investments on regional
healthcare costs. Management Science, 64(6), 2515-2534.
Ayabakan, S., Bardhan, I., Zheng, Z. E., & Kirksey, K. (2017). The impact of health information sharing on
duplicate testing. MIS Quarterly, 41(4), 1083-1103.
Baker, J., Song, J., & Jones, D. R. (2017). Closing the loop: Empirical evidence for a positive feedback
model of IT business value creation. The Journal of Strategic Information Systems, 26(2), 142-160.
Bala, H., & Venkatesh, V. (2013). Changes in employees' job characteristics during an enterprise system
implementation: a latent growth modeling perspective. MIS Quarterly, 37(4), 1113-1140.
Beck, R., Pahlke, I., & Seebach, C. (2014). Knowledge exchange and symbolic action in social media-
enabled electronic networks of practice: a multilevel perspective on knowledge seekers and
contributors. MIS Quarterly, 38(4), 1245-1269.
Benlian, A. (2015). IT feature use over time and its impact on individual task performance. Journal of the
Association for Information Systems, 16(3), 144-173.
Bhargava, H. K., & Mishra, A. N. (2014). Electronic medical records and physician productivity: Evidence
from panel data analysis. Management Science, 60(10), 2543-2562.
Bolander, W., Dugan, R., & Jones, E. (2017). Time, change, and longitudinally emergent conditions:
understanding and applying longitudinal growth modeling in sales research. Journal of Personal
Selling & Sales Management, 37(2), 153-159.
Bollen, K. A. (2011). Evaluating effect, composite, and causal indicators in structural equation models. MIS
Quarterly, 35(2), 359-372.
Chandrasekaran, A., Senot, C., & Boyer, K. K. (2012). Process management impact on clinical and
experiential quality: Managing tensions between safe and patient-centered healthcare. Manufacturing
& Service Operations Management, 14(4), 548-566.
Chen, H., De, P., & Hu, Y. J. (2015). IT-enabled broadcasting in social media: An empirical study of artists’
activities and music sales. Information Systems Research, 26(3), 513-531.
Coiera, E., Ash, J., & Berg, M. (2016). The unintended consequences of health information technology
revisited. Yearbook of Medical Informatics, 25(1), 163-169.
Curran, P. J., Obeidat, K., & Losardo, D. (2010). Twelve frequently asked questions about growth curve
modeling. Journal of Cognition and Development, 11(2), 121-136.
Dewan, S., & Ramaprasad, J. (2014). Social media, traditional media, and music sales. MIS Quarterly,
38(1), 101-121.
de Gooyert, V. (2019). Developing dynamic organizational theories; three system dynamics based research
strategies. Quality & Quantity, 53(2), 653-666.
Fang, Y., Lim, K. H., Qian, Y., & Feng, B. (2018). System dynamics modeling for information systems
research: Theory development and practical application. MIS Quarterly, 42(4), 1303-1329.
Ferrer, E., & McArdle, J. (2003). Alternative structural models for multivariate longitudinal data analysis.
Structural Equation Modeling, 10(4), 493-524.
Elliott, M. N., Cohea, C. W., Lehrman, W. G., Goldstein, E. H., Cleary, P. D., Giordano, L. A., . . .
Zaslavsky, A. M. (2015). Accelerating improvement and narrowing gaps: Trends in patients'
experiences with hospital care reflected in HCAHPS public reporting. Health Services Research,
50(6), 1850-1867.
Eschleman, K. J., & LaHuis, D. (2014). Advancing occupational stress and health research and
interventions using latent difference score modeling. International Journal of Stress Management,
21(1), 112-136.
Everson, J., Lee, S.-Y. D., & Friedman, C. P. (2014). Reliability and validity of the American Hospital
Association's national longitudinal survey of health information technology adoption. Journal of the
American Medical Informatics Association, 21(e2), e257-e263.
George, J. M., & Jones, G. R. (2000). The role of time in theory and theory building. Journal of Management,
26(4), 657-684.
Grimm, K. J. (2007). Multivariate longitudinal methods for studying developmental relationships between
depression and academic achievement. International Journal of Behavioral Development, 31(4), 328-
339.
Grimm, K. J., An, Y., McArdle, J. J., Zonderman, A. B., & Resnick, S. M. (2012). Recent changes leading
to subsequent changes: Extensions of multivariate latent difference score models. Structural
Equation Modeling: A Multidisciplinary Journal, 19(2), 268-292.
Grimm, K. J., Mazza, G. L., & Mazzocco, M. M. (2016). Advances in methods for assessing longitudinal
change. Educational Psychologist, 51(3-4), 342-353.
Grimm, K. J., Ram, N., & Estabrook, R. (2017). Growth modeling: structural equation and multilevel
modeling approaches. New York: The Guilford Press.
Hamagami, F., & McArdle, J. J. (2007). Dynamic extensions of latent difference score models. In S.M.
Boker & M. J. Wenger (Eds.), Data Analytic Techniques for Dynamical Systems (pp. 47-85).
Mahwah, NJ: Erlbaum.
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional
criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55.
Kher, H. V., & Serva, M. A. (2014). Changing the way we study change: Advocating longitudinal research
in MIS. ACM SIGMIS Database: the DATABASE for Advances in Information Systems, 45(2), 45-60.
Kim, J., & DeaterDeckard, K. (2011). Dynamic changes in anger, externalizing and internalizing problems:
Attention and regulation. Journal of Child Psychology and Psychiatry, 52(2), 156-166.
Kim, E., Wang, Y., & Liu, S. (2020). The Impact of Measurement Noninvariance on Latent Change Score
Modeling: A Monte Carlo Simulation Study. Structural Equation Modeling: A Multidisciplinary Journal,
1-13.
Klopack, E. T., & Wickrama, K. (2020). Modeling Latent Change Score Analysis and Extensions in Mplus:
A Practical Guide for Researchers. Structural Equation Modeling: A Multidisciplinary Journal, 27(1),
97-110.
Langer, N., Slaughter, S. A., & Mukhopadhyay, T. (2014). Project managers' practical intelligence and
project performance in software offshore outsourcing: A field study. Information Systems Research,
25(2), 364-384.
Li, H., Fang, Y., Wang, Y., & Lim, K. H. (2015). Understanding competitive action repertoire, strategic group
and performance of e-marketplace sellers: A latent growth modeling approach. Paper presented at
the Thirty Sixth International Conference on Information Systems, Fort Worth, TX.
McArdle, J. J. (2009). Latent variable modeling of differences and changes with longitudinal data. Annual
Review of Psychology, 60, 577-605.
McArdle, J. J., & Hamagami, F. (2001). Latent difference score structural models for linear dynamic analyses
with incomplete longitudinal data. In New Methods for the Analysis of Change. (pp. 139-175).
Washington, DC, US: American Psychological Association.
Menon, N. M., & Kohli, R. (2013). Blunting Damocles' sword: A longitudinal model of healthcare IT impact
on malpractice insurance premium and quality of patient care. Information Systems Research, 24(4),
918-932.
Mitchell, T. R., & James, L. R. (2001). Building better theory: Time and the specification of when things
happen. Academy of Management Review, 26(4), 530-547.
O'Rourke, H. P. (2016). Time Metric in Latent Difference Score Models: Doctoral dissertation, Arizona
State University.
Ou, C. X., Pavlou, P. A., & Davison, R. (2014). Swift guanxi in online marketplaces: The role of computer-
mediated communication technologies. MIS Quarterly, 38(1), 209-230.
Queenan, C. C., Angst, C. M., & Devaraj, S. (2011). Doctors’ orders––If they’re electronic, do they improve
patient satisfaction? A complements/substitutes perspective. Journal of Operations Management,
29(7-8), 639-649.
Ployhart, R. E., & Vandenberg, R. J. (2010). Longitudinal research: The theory, design, and analysis of
change. Journal of Management, 36(1), 94-120.
Pye, J., Rai, A., & Baird, A. (2014). Health information technology in US hospitals: How much, how fast? In
35th International Conference on Information Systems: Building a Better World Through Information
Systems, ICIS 2014.
Raudenbush, S. W. (2001). Comparing personal trajectories and drawing causal inferences from
longitudinal data. Annual Review of Psychology, 52(1), 501-525.
Rudd, K. L., & Yates, T. M. (2020). A latent change score approach to understanding dynamic autonomic
coordination. Psychophysiology, 57(11), e13648.
Sbarra, D. A., & Allen, J. J. (2009). Decomposing depression: On the prospective and reciprocal dynamics
of mood and sleep disturbances. Journal of Abnormal Psychology, 118(1), 171-182.
Senot, C., Chandrasekaran, A., Ward, P. T., Tucker, A. L., & Moffatt-Bruce, S. (2016). The impact of
combining conformance and experiential quality on hospitals’ readmissions and cost performance.
Management Science, 62(3), 829-848.
Serva, M. A., Fuller, M. A., & Mayer, R. C. (2005). The reciprocal nature of trust: A longitudinal study of
interacting teams. Journal of Organizational Behavior, 26(6), 625-648.
Serva, M. A., Kher, H., & Laurenceau, J.-P. (2011). Using latent growth modeling to understand longitudinal
effects in MIS theory: A primer. Communications of the Association for Information Systems, 28(1),
213-232.
Sharma, L., Chandrasekaran, A., Boyer, K. K., & McDermott, C. M. (2016). The impact of Health Information
Technology bundles on Hospital performance: An econometric study. Journal of Operations
Management, 41(1), 25-41.
Singer, J. D., & Willett, J. B. (2003). Applied Longitudinal Data Analysis: Modeling Change And Event
Occurrence. Oxford University Press.
Söllner, M., Pavlou, P., & Leimeister, J. M. (2016). Understanding the development of trust: Comparing trust
in the IT artifact and trust in the provider. Paper presented at the Academy of Management
Proceedings.
Sun, H. (2013). A longitudinal study of herd behavior in the adoption and continued use of technology. MIS
Quarterly, 37(4), 1013-1041.
Sykes, T. A., Venkatesh, V., & Gosain, S. (2009). Model of acceptance with peer support: a social network
perspective to understand employees' system use. MIS Quarterly, 33(2), 371-393.
Tambe, P., & Hitt, L. M. (2012). The productivity of information technology investments: New evidence from
IT labor data. Information Systems Research, 23(3), 599-617.
Thies, F., Wessel, M., & Benlian, A. (2016). Effects of social interaction dynamics on platforms. Journal of
Management Information Systems, 33(3), 843-873.
Venkatesh, V., Thong, J. Y., Chan, F. K., Hu, P. J. H., & Brown, S. A. (2011). Extending the twostage
information systems continuance model: Incorporating UTAUT predictors and the role of context.
Information Systems Journal, 21(6), 527-555.
Venkatesh, V., Zhang, X., & Sykes, T. A. (2011). “Doctors do too little technology”: A longitudinal field study
of an electronic healthcare system implementation. Information Systems Research, 22(3), 523-546.
Wooldridge, J. M. (2010). Econometric analysis of cross section and panel data (2 ed.). Cambridge, MA:
MIT Press.
Zheng, Z., Pavlou, P. A., & Gu, B. (2014). Latent growth modeling for information systems: Theoretical
extensions and practical applications. Information Systems Research, 25(3), 547-568.
Appendix A: Review of Longitudinal Papers in the IS Field
We started the search process with two keywords, “longitudinal” and “panel”, used on the ISI Web of Science
database. We identified 178 quantitative papers that employed longitudinal data sets. We also identified 12
papers that employed longitudinal datasets but are not in the search result because they do not have
“longitudinal” or “panel” in their title, abstract, or keywords. Thus, the resulting number of papers in our
analysis is 190.
The majority of these papers were published in ISR (61 papers), MISQ (52 papers), JMIS (25 papers), and
MS (23 papers). We found that 20 papers were published between 2004 and 2008, 68 between 2009 and
2013, and 102 between 2014 and 2018, suggesting an increased interest in quantitative longitudinal
research in recent years. We coded the papers based on time span of collected data and analysis
techniques. The time span of collected data ranged from 75 minutes to 87 years. An analysis of the methods
used in these papers yielded interesting patterns. We present the detailed paper
information for each data analysis method in the following table:
Data Analysis Method
IS Papers from 2004-2018
Linear
Unobserved
Effects Model
(Fixed Effects and
Random Effects
Models)
X. Li and Wu (2018); Kumar, Qiu, and Kumar (2018); Pan, Huang, and Gopal (2018); Burtch,
Carnahan, and Greenwood (2018); Foerderer, Kude, Mithas, and Heinzl (2018); Adjerid,
Adler-Milstein, and Angst (2018); J. Yan, Leidner, and Benbya (2018); Gong, Hong, and
Zentner (2018); Müller, Fay, and vom Brocke (2018); S. F. Lu, Rui, and Seidmann (2018);
Bavafa, Hitt, and Terwiesch (2018); Atasoy, Chen, and Ganju (2018); P. Huang, Tafti, and
Mithas (2018) ; Hong and Pavlou (2017); Pang (2017); Kwon, Oh, and Kim (2017); Baker,
Song, and Jones (2017); Z. Li and Agarwal (2017); N. Huang, Hong, and Burtch (2017); Lin,
Goh, and Heng (2017); Cavusoglu, Phan, Cavusoglu, and Airoldi (2016); K. Kim, Gopal, and
Hoberg (2016); Kwon, So, Han, and Oh (2016); Atasoy, Banker, and Pavlou (2016); Yin, Mitra,
and Zhang (2016); Luo, Fan, and Zhang (2016); Pang, Tafti, and Krishnan (2016); Parker,
Ramdas, and Savva (2016); P. Huang and Zhang (2016); J. Chan, Ghose, and Seamans
(2016); S. H. Kim, Mukhopadhyay, and Kraut (2016); Driouchi, Wang, and Driouchi (2015);
Yaraghi, Du, Sharman, Gopal, and Ramesh (2015); L. Yan, Peng, and Tan (2015); Y. Liu and
Aron (2015); Mani and Barua (2015); Lin and Heng (2015); Qiu, Tang, and Whinston (2015);
Khansa, Ma, Liginlal, and Kim (2015); Dong and Wu (2015); Salge, Kohli, and Barrett (2015);
Tambe and Hitt (2014); Langer, Slaughter, and Mukhopadhyay (2014); Parker and Weber
(2014); Belo, Ferreira, and Telang (2014); Bhargava and Mishra (2014); Mehra, Langer,
Bapna, and Gopal (2014); Menon and Kohli (2013); Dedrick, Kraemer, and Shih (2013); Lim,
Stratopoulos, and Wirjanto (2013); Tafti, Mithas, and Krishnan (2013); Kleis, Chwelos,
Ramirez, and Cockburn (2012); Butler and Wang (2012); Aral, Brynjolfsson, and Van Alstyne
(2012); Tambe and Hitt (2012); Gu, Park, and Konana (2012); Chang and Gurbaxani (2012);
Xiaoquan Zhang and Wang (2012); Soper, Demirkan, Goul, and St Louis (2012); Altinkemer,
Ozcelik, and Ozdemir (2011); Ghose and Han (2011); Chellappa, Sambamurthy, and Saraf
(2010); Chellappa and Saraf (2010); Pathak, Garfinkel, Gopal, Venkatesan, and Yin (2010);
Ghose (2009); Hahn (2009);
SEM/PLS
Bala and Bhagwatwar (2018); Wu, Guo, Choi, and Chang (2017); Xiaojun Zhang and
Venkatesh (2017); Sykes and Venkatesh (2017); Venkatesh, Windeler, Bartol, and Williamson
(2017); Steinbart, Keith, and Babb (2016); Sun and Fang (2016); Bhattacherjee and Lin
(2015); Barnett, Pearson, Pearson, and Kellermanns (2015); Hu, Kettinger, and Poston
(2015); Sykes (2015); Boss, Galletta, Lowry, Moody, and Polak (2015); Bhattacherjee and
Park (2014); Tsai and Bagozzi (2014); Ou, Pavlou, and Davison (2014); Venkatesh and Sykes
(2013); Sun (2013); Venkatesh and Windeler (2012); Goh and Wasko (2012); Venkatesh,
Thong, Chan, Hu, and Brown (2011); Venkatesh, Zhang, and Sykes (2011); Hsieh, Rai, and
Xu (2011); Tallon (2010); Chengalur-Smith, Sidorova, and Daniel (2010); D. J. Kim, Ferrin,
and Rao (2009); Sykes, Venkatesh, and Gosain (2009); S. S. Kim (2009); Venkatesh, Brown,
Maruping, and Bala (2008); Lam and Lee (2006); Venkatesh and Agarwal (2006); Pavlou and
Fygenson (2006); S. S. Kim, Malhotra, and Narasimhan (2005); S. S. Kim and Malhotra
(2005); Pavlou and Gefen (2004); Jarvenpaa, Shaw, and Staples (2004); Bhattacherjee and
Premkumar (2004)
Random-
Coefficient
Models
Corey M. Angst, Wowak, Handley, and Kelley (2017); Xiaojun Zhang and Venkatesh (2017);
Venkatesh, Rai, Sykes, and Aljafari (2016); X. Ma, Kim, and Kim (2014); X. Ma, Khansa, Deng,
and Kim (2013); Setia, Rajagopalan, Sambamurthy, and Calantone (2012); Sasidharan,
Santhanam, Brass, and Sambamurthy (2012); Ko and Dennis (2011); Ying Lu and
Ramamurthy (2010); Goes, Karuga, and Tripathi (2010); Rai, Maruping, and Venkatesh
(2009)
Other Regression
Models (e.g.,
OLS, NB, DID
models)
Daniel, Midha, Bhattacherhjee, and Singh (2018); Gómez, Salazar, and Vargas (2017);
Ayabakan, Bardhan, Zheng, and Kirksey (2017); Saunders and Brynjolfsson (2016); Qiu et al.
(2015); Rai, Arikan, Pye, and Tiwana (2015); Veiga, Keupp, Floyd, and Kellermanns (2014);
C. Z. Liu, Au, and Choi (2014); Im, Grover, and Teng (2013); Wang, Meister, and Gray (2013);
K. Han, Kauffman, and Nault (2011); Kleis et al. (2012); Ghose and Yao (2011); Gao, Gopal,
and Agarwal (2010); Park, Shin, and Sanders (2007)
Survival Analysis
Kanat, Hong, and Raghu (2018); Dewan, Ho, and Ramaprasad (2017); P. Huang and Zhang
(2016); Yaraghi et al. (2015); Joseph, Ang, and Slaughter (2015); Scherer, Wünderlich, and
von Wangenheim (2015); C. Zhang, Hahn, and De (2013); S. Li, Shang, and Slaughter (2010);
Jeyaraj, Raiser, Chowa, and Griggs (2009); Miller and Tucker (2009); Susarla and Barua
(2011); Bhattacharjee, Gopal, Lertwachara, Marsden, and Telang (2007);
Latent Growth
Model
Benlian (2015); Zheng, Pavlou, and Gu (2014); Bala and Venkatesh (2013)
Dynamic Panel
Data Model
Pang et al. (2016); Bhargava and Mishra (2014); Bapna, Langer, Mehra, Gopal, and Gupta
(2013); Menon and Kohli (2013); Aral et al. (2012); Butler and Wang (2012); Tambe and Hitt
(2012)
Panel Vector
Autoregressive
Model
Thies, Wessel, and Benlian (2018); Thies, Wessel, and Benlian (2016); H. Chen, De, and Hu
(2015); Dewan and Ramaprasad (2014); Adomavicius, Bockstedt, and Gupta (2012)
ANOVA/MANOVA
/T-test
Du, Das, Gopal, and Ramesh (2014); Gupta and Bostrom (2013); Cotteleer and Bendoly
(2006); Venkatesh and Ramesh (2006); Willcoxson and Chatham (2004)
Social Network
Analysis
B. Zhang, Pavlou, and Krishnan (2018); Wu et al. (2017); Xiaojun Zhang and Venkatesh
(2017); Sykes and Venkatesh (2017); Sykes et al. (2009); Vidgen, Henneberg, and Naudé
(2007)
Other Methods
Not Listed Above
T. H. Kim, Wimble, and Sambamurthy (2018); Wright (2018); W. Chen, Wei, and Zhu (2018);
Corey M Angst, Block, D'Arcy, and Kelley (2017); Trantopoulos, von Krogh, Wallin, and
Woerter (2017); Yingda Lu, Singh, and Sun (2017); Goode, Hoehle, Venkatesh, and Brown
(2017); Venkatesh, Thong, Chan, and Hu (2016); Gómez, Salazar, and Vargas (2016);
Susarla, Oh, and Tan (2016); Ramasubbu and Kemerer (2016); S. P. Han, Park, and Oh
(2016); Srivastava, Teo, and Devaraj (2016); L. Ma, Krishnan, and Montgomery (2015); Yeow
and Goh (2015); Singh, Sahoo, and Mukhopadhyay (2014); Singh et al. (2014); Peng, Dey,
and Lahiri (2014); Pang, Tafti, and Krishnan (2014); Chang and Gurbaxani (2013); Burtch,
Ghose, and Wattal (2013); Bang, Lee, Han, Hwang, and Ahn (2013); K. Han and Mithas
(2013); Soper et al. (2012); Langer, Forman, Kekre, and Sun (2012); Deng and Chi (2012);
Gao and Hitt (2012); Xue, Ray, and Sambamurthy (2012); Joseph, Boh, Ang, and Slaughter
(2012); Aron, Dutta, Janakiraman, and Pathak (2011); Ransbotham and Kane (2011); Singh,
Tan, and Mookerjee (2011); Sawyer, Guinan, and Cooprider (2010); Gnyawali, Fan, and
Penner (2010); Morris and Venkatesh (2010); Vitari and Ravarini (2009); Du, Geng, Gopal,
Ramesh, and Whinston (2008); He, Butler, and King (2007); Roberts, Hann, and Slaughter
(2006); Johnson, Moe, Fader, Bellman, and Lohse (2004)
Appendix B: Step-by-Step Guide for BDLDSM Analysis
We propose a three-step process to develop and conduct the BDLDSM analysis.
Step 1: Establish Measurement Invariance over Time
This step is a prerequisite to latent growth or change model analysis because we must ensure that the same
construct is measured using the same metric with the same precision at each wave (Bala & Venkatesh,
2013; Benlian, 2015; Grimm, Ram, & Estabrook, 2017; McArdle, 2009). Measurement invariance allows the
interpretation of growth trajectories in direct, meaningful ways and ensures that observed changes reflect
changes in individual units, but not changes in the measurement (Chan, 1998; Grimm et al., 2017). There
are four levels of measurement invariance: configural invariance (whether the same items measure the
constructs across time); metric invariance (whether the factor loadings of the items that measure the
constructs are equivalent across time); scalar invariance (whether the items’ intercepts are equivalent
across time); and strict invariance (whether the residual variances are equal across data waves) (Chen,
2007). In practice, the strict invariance test is not recommended because this criterion is too stringent and is difficult
to establish. Thus, we recommend testing configural invariance, metric invariance, and scalar invariance to
establish measurement invariance.
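As an illustration, the sketch below shows how metric invariance across two waves might be specified in Mplus. The item names (x11-x23), the data file, and the use of only two waves are hypothetical simplifications, not the measures used in our illustration; configural invariance corresponds to the same model without the equality labels (l2, l3), and scalar invariance additionally constrains the item intercepts to be equal across waves.

TITLE: Longitudinal metric invariance across two waves (sketch);
DATA: FILE = items.dat;                       ! hypothetical data file
VARIABLE: NAMES = x11 x12 x13 x21 x22 x23;    ! three items at waves 1 and 2
MODEL:
  ! Wave-specific factors; same-item loadings labeled equal (metric invariance)
  f1 BY x11
        x12 (l2)
        x13 (l3);
  f2 BY x21
        x22 (l2)
        x23 (l3);
  ! Residuals of the same item are allowed to covary across waves
  x11 WITH x21;
  x12 WITH x22;
  x13 WITH x23;
OUTPUT: SAMPSTAT;

The configural, metric, and scalar models can then be compared with nested model tests to establish the level of invariance supported by the data.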
Step 2: Modeling Growth Trajectories for the Predictor and Outcome Variables
We examine the predictor and outcome variables to determine the nature of their growth trajectories.
Common models used for this include the no-change model, linear change model, and nonlinear change
model. We then compare these models using the chi-square difference test to identify the growth model
with the best fit.
In the following paragraphs, we first introduce the no-growth model and then present linear growth and
nonlinear growth models for this purpose. We adapted the equations here from Grimm et al. (2017).
The no-growth models have only one latent variable (the intercept), which represents the overall level of
variables over time. All the no-growth and growth models are two-level models: level-1 is the individual level,
while level-2 is the sample level, that is, the level for the entire sample. This two-level model not only allows
individual scores to change over time, but also allows change among individual units.
We model the level-1 (individual) equation for the no-growth model as follows:

y[t]i = b0i + e[t]i

where y[t]i is the repeatedly measured variable at time t for individual unit i, b0i is the random intercept, or the predicted score for individual unit i when t = 0, and e[t]i is the time-dependent residual.
We model the level-2 (sample) equation by specifying the random intercept, b0i, with a sample-level mean for the intercept (the fixed effect), μ0, and an individual deviation from the sample-level mean, d0i:

b0i = μ0 + d0i

Combining level-1 and level-2 equations, we get the following complete no-growth model equation:

y[t]i = μ0 + d0i + e[t]i
Unlike the no-growth models, which have only one latent variable (the intercept), the linear growth model has two latent variables: the intercept, b0i, and the linear rate of change, or random slope, b1i.
We model the level-1 linear growth model as

y[t]i = b0i + b1i·t + e[t]i

where y[t]i is the repeatedly measured variable at time t for individual unit i, b0i is the random intercept, or the predicted score for individual unit i when t = 0, b1i is the linear rate of change (linear slope) for individual unit i, and e[t]i is the time-dependent residual.
Besides specifying the random intercept, we also need to specify the linear slope for the level-2 linear growth equation, where μ1 is the sample-level mean for the linear slope and d1i is the individual deviation from the sample-level mean:

b1i = μ1 + d1i
Combining level-1 and level-2 equations, we get the following complete linear growth model equation:

y[t]i = (μ0 + d0i) + (μ1 + d1i)·t + e[t]i
However, if the variables are measured over a relatively long period, we will likely detect some degree of
nonlinearity in their trajectories, meaning that the variables will likely change at different rates. To measure
the nonlinear functional forms of change, we can apply different nonlinear growth models. There are two
major types of nonlinear growth models. The first comprises growth models with nonlinearity in time; in these
models, changes depend only on the known time assessment. The second type comprises growth models
with nonlinearity in parameters, in which changes depend on unknown entities (Grimm et al. 2016).
Examples of growth models with nonlinearity in time are quadratic and cubic models, which account for
nonlinearity by adding a quadratic term of time (in the quadratic model) and both a quadratic term and a
cubic term of time (in the cubic model); and spline models, which allow for separate growth models for
distinct spans of time. Examples of growth models with nonlinearity in parameters are the Jenss-Bayley
growth model, which combines linear and exponential trajectories, and the latent basis growth model, which
allows free factor loadings of time. Here, we introduce only the growth models with nonlinearity in time, such
as quadratic and cubic growth models.
We specify the level-1 quadratic growth model with three latent variables: the intercept, b0i; the linear rate of change, b1i; and the quadratic rate of change, b2i:

y[t]i = b0i + b1i·t + b2i·t² + e[t]i

The level-2 equation for the quadratic slope, b2i, is written as

b2i = μ2 + d2i

where μ2 is the sample-level mean for the quadratic slope and d2i is the individual deviation from the sample-level mean of the quadratic slope.
Combining level-1 and level-2 equations, we get the following complete quadratic growth model equation:
y[t]i = (μ0 + d0i) + (μ1 + d1i)·t + (μ2 + d2i)·t² + e[t]i
Similarly, we can specify the level-1 cubic growth model as

y[t]i = b0i + b1i·t + b2i·t² + b3i·t³ + e[t]i

where b3i is the cubic rate of change for individual unit i.
The level-2 equation for the cubic slope is

b3i = μ3 + d3i

where μ3 is the sample-level mean for the cubic slope and d3i is the individual deviation from the sample-level mean of the cubic slope. Combining level-1 and level-2 equations, the cubic growth model can be specified as

y[t]i = (μ0 + d0i) + (μ1 + d1i)·t + (μ2 + d2i)·t² + (μ3 + d3i)·t³ + e[t]i
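A minimal Mplus sketch of these trajectory models is shown below, assuming five equally spaced waves with hypothetical variable and file names; the active specification is the linear model, and the commented lines show the no-growth, quadratic, and cubic alternatives.

TITLE: Growth trajectory models for one variable (sketch);
DATA: FILE = comm.dat;                    ! hypothetical data file
VARIABLE: NAMES = y1-y5;                  ! five equally spaced waves
MODEL:
  ! Linear growth: intercept (i) and slope (s); time scores 0-4 fix the loadings
  i s | y1@0 y2@1 y3@2 y4@3 y5@4;
  ! No-growth alternative: a single intercept factor
  !   i BY y1-y5@1; [y1-y5@0]; [i];
  ! Quadratic alternative: add a third growth factor
  !   i s q | y1@0 y2@1 y3@2 y4@3 y5@4;
  ! Cubic alternative with explicit loadings (time, time squared, time cubed)
  !   i BY y1-y5@1;
  !   s BY y1@0 y2@1 y3@2 y4@3 y5@4;
  !   q BY y1@0 y2@1 y3@4 y4@9 y5@16;
  !   c BY y1@0 y2@1 y3@8 y4@27 y5@64;
  !   [y1-y5@0]; [i s q c];
OUTPUT: SAMPSTAT;

Each specification can be fit in turn, and the chi-square difference test described above can be used to choose the best-fitting functional form.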
To incorporate the above growth models into a structural equation modeling framework, we fitted growth models with latent variables for the intercept and slope to represent the change:

y_i = Λ·η_i + e_i

where y_i is a T × 1 vector of the repeatedly measured observed scores for individual unit i; T represents the number of repeated assessments based on the selected time metric; Λ is a T × R matrix of factor loadings defining the latent growth factors; R is the number of growth factors (R = 1 for the no-growth model, R = 2 for the linear growth model, R = 3 for the quadratic growth model, and R = 4 for the cubic growth model); and η_i is an R × 1 vector of the factor scores for the individual unit i. For example, the linear growth model has two factor scores: b0i is the intercept factor score, and b1i is the linear factor score. In addition to the intercept and linear factor scores, the quadratic growth model has b2i as the quadratic factor score, and the cubic growth model has both the quadratic factor score, b2i, and the cubic factor score, b3i. e_i is a T × 1 vector of time-specific residuals for the individual unit i. Figures B1-B3 display the path diagrams for the linear, quadratic, and
cubic growth models. In Figures B1-B3, y1 to y5 represent the measurement of y in five different time periods, and the numbers on the arrows are the default fixed time score loadings. The numbers in the paths represent time values that remain constant for the intercept (b0), change linearly for the linear factor score (b1), change quadratically for the quadratic factor score (b2), and change in a cubic way for the cubic factor score (b3).
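For example, assuming the default time scores 0, 1, 2, 3, and 4 for the five waves shown in Figure B1, the factor loading matrix for the linear growth model (R = 2) and the corresponding factor score vector are:

          | 1  0 |
          | 1  1 |
    Λ  =  | 1  2 | ,    η_i = (b0i, b1i)'
          | 1  3 |
          | 1  4 |

The first column (all 1s) carries the intercept, and the second column carries the linear time scores; the quadratic and cubic models add columns of squared and cubed time scores, respectively.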
Figure B1. Linear Growth Model
Figure B2. Quadratic Growth Model
Figure B3. Cubic Growth Model
Step 3: Finding the Best BDLDSM Model and Interpreting the Results
Once the growth trajectory models are established in step 2, we incorporate the functional forms of change
for the predictor variable and the outcome variable into the BDLDSM model.
To better understand the dynamic relationship between predictor (X) and outcome (Y) variables, we can
examine four BDLDSM models:
(1) BDLDSM with no coupling effects
(2) BDLDSM with a coupling effect from X to the change of Y (ΔY)
(3) BDLDSM with a coupling effect from Y to the change of X (ΔX)
(4) BDLDSM with full coupling effects, including coupling effects from X to ΔY and Y to ΔX
The current literature proposes two model selection approaches to identify the best representation of the dynamic associations between the predictor (X) and outcome (Y) variables. The first, and more widely applied, approach is to compare the various BDLDSM models by beginning with the no-coupling model, examining the improvement in model fit (change in chi-square relative to the change in estimated parameters) for the two one-directional coupling models (from X to ΔY and from Y to ΔX), and then comparing the improvement in fit of these two coupling models with the full coupling model (Grimm et al., 2016; Grimm et al., 2017; Rudd & Yates, 2020; Sbarra & Allen, 2009). A limited number of studies have followed the other approach, which is to present the parameter estimates of the full coupling model directly and then evaluate the dynamic associations between the predictor and outcome variables based on the significance of the coupling parameters (Arias et al., 2020; Eschleman & LaHuis, 2014).
Although there is no single agreed-upon approach for model comparison, in this paper we follow the first approach for model selection because it provides statistical evidence on whether adding coupling parameter(s) yields a significant improvement in overall fit. From a theory development perspective, however, we suggest testing and comparing only the no-coupling model, the coupling effect from X to ΔY model, and the full coupling model (models 1, 2, and 4). We then compare these three models using the chi-square difference test and four additional model fit indices (RMSEA, CFI, TLI, SRMR) to select the model with the best specification. We next estimate the parameters in the best-fitting model and interpret the results.
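Because these models are nested, the comparison rests on the chi-square difference (likelihood ratio) test, Δχ² = χ²(restricted model) − χ²(less restricted model) with Δdf = df(restricted) − df(less restricted). As a purely hypothetical illustration (not results from our data): if the no-coupling model yielded χ² = 910 with df = 62 and the model adding the X to ΔY coupling yielded χ² = 898 with df = 61, then Δχ² = 12 with Δdf = 1, which exceeds the .05 critical value of 3.84, so the added coupling parameter would significantly improve model fit.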
Appendix C: Measures
Part I. Survey Items for Experiential Quality (Source: HCAHPS Survey)
The core of the HCAHPS survey comprises 21 items measuring a patient's perception of his/her experience during a hospital stay. These items encompass 11 key topics that relate to communication with doctors, communication with nurses, responsiveness of hospital staff, pain management, communication about medicines, discharge information, cleanliness of the hospital environment, quietness of the hospital environment, transition of care, hospital rating, and willingness to recommend the hospital. A random sample of recently discharged patients (between 48 hours and 6 weeks after discharge) from each hospital is asked to complete this survey. The patient-level data are later aggregated at the hospital level by the Centers for Medicare & Medicaid Services (CMS) and published on the Hospital Compare website. Following CMS guidelines, only experiential quality items based on a sample of more than 100 respondents were included in our study.
For this study, we selected four topics related to communication. Each topic is a composite constructed from two or three survey items. We present the topics and their items in the following list, with items formatted in italics:
Communication
(1) How often did nurses communicate well with patients?
During this hospital stay:
How often did nurses treat you with courtesy and respect?
How often did nurses listen carefully to you?
How often did nurses explain things in a way you could understand?
(2) How often did doctors communicate well with patients?
During this hospital stay:
How often did doctors treat you with courtesy and respect?
How often did doctors listen carefully to you?
How often did doctors explain things in a way you could understand?
(3) How often did staff explain about medicines before giving them to patients?
Before giving you any new medicine:
How often did hospital staff tell you what the medicine was for?
How often did hospital staff describe possible side effects in a way you could understand?
(4) Were patients given information about what to do during their recovery at home? (Yes/No)
During this hospital stay:
Did hospital staff talk with you about whether you would have the help you needed when you left the
hospital?
Did you get information in writing about what symptoms or health problems to look out for after you left the
hospital?
The response categories for the questions in topics (1)-(3) are "Never/Sometimes", "Usually", and "Always", and the response categories for the questions in topic (4) are "Yes" and "No". For topics (1) to (3), we used the sum of the percentages of respondents who answered "Always" and "Usually", and for topic (4), the percentage of patients who answered "Yes", to measure the communication score. We then calculated the average of these four items for further analysis.
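Stated compactly (in our notation), with P_k^{c} denoting the hospital-level percentage of respondents choosing response category c for topic k, the communication score is:

\text{Comm} = \frac{1}{4}\Bigl[\sum_{k=1}^{3}\bigl(P_k^{\text{Always}} + P_k^{\text{Usually}}\bigr) + P_4^{\text{Yes}}\Bigr].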
Part II. IT Items Scale (Source: AHA IT Supplement Files)
HIT implementation is measured by a six-point scheme as follows:
1 = Fully implemented across all units
2 = Fully implemented in at least one unit
3 = Beginning to implement in at least one unit
4 = Have resources to implement in the next year
5 = Do not have resources but considering implementing
6 = Not in place and not considering implementing
Although the original items were measured on a six-point ordinal scale, we recoded each item on a four-point scale so that a single lowest category reflects all forms of non-implementation. The resulting ordered IT implementation scheme is: 0 (no implementation), 1 (beginning to implement in at least one unit), 2 (fully implemented in at least one unit), and 3 (fully implemented across all units), with full implementation indicating that the IT functionality has completely replaced paper records. Descriptive statistics and correlations among the HIT variables are reported in Table C1.
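For illustration, the recoding just described could be expressed in Mplus DEFINE syntax along the following lines (a hypothetical sketch only; the raw item name nursenotes_raw is illustrative, and the actual recoding need not have been performed in Mplus):

DEFINE:
! Collapse the three non-implementation categories (4-6 on the original scale) into 0 and
! reverse the remaining categories so that higher values indicate greater implementation
IF (nursenotes_raw GE 4) THEN nursenotes = 0;   ! no implementation
IF (nursenotes_raw EQ 3) THEN nursenotes = 1;   ! beginning to implement in at least one unit
IF (nursenotes_raw EQ 2) THEN nursenotes = 2;   ! fully implemented in at least one unit
IF (nursenotes_raw EQ 1) THEN nursenotes = 3;   ! fully implemented across all units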
Table C1: Descriptive Statistics and Pairwise Correlations among HIT Variables

#   HIT Category  Item Name                      Mean  SD    1      2      3      4      5      6      7      8      9      10     11     12     13     14     15
1   ECD           Nursing Notes                  2.10  1.13
2                 Problem lists                  1.63  1.27  0.752
3                 Medication lists               2.26  1.10  0.818  0.776
4                 Discharge summaries            2.11  1.19  0.735  0.697  0.759
5                 Advanced directives            1.85  1.33  0.721  0.684  0.744  0.669
6   CPOE          Laboratory tests               1.42  1.28  0.555  0.526  0.573  0.514  0.505
7                 Radiology tests                1.41  1.28  0.556  0.527  0.574  0.515  0.506  0.912
8                 Medications                    1.35  1.27  0.563  0.534  0.581  0.522  0.512  0.924  0.926
9                 Consultation requests          1.24  1.27  0.569  0.540  0.588  0.528  0.518  0.935  0.937  0.949
10                Nursing orders                 1.42  1.29  0.558  0.530  0.577  0.518  0.508  0.917  0.918  0.930  0.941
11  DS            Clinical guidelines            1.31  1.28  0.592  0.562  0.612  0.549  0.539  0.688  0.689  0.698  0.707  0.693
12                Clinical reminders             1.42  1.31  0.630  0.598  0.651  0.585  0.574  0.580  0.581  0.588  0.595  0.584  0.892
13                Drug allergy alerts            2.20  1.17  0.672  0.638  0.694  0.623  0.611  0.618  0.619  0.627  0.635  0.622  0.794  0.845
14                Drug_drug interaction alerts   2.19  1.16  0.675  0.641  0.698  0.626  0.615  0.621  0.623  0.630  0.638  0.625  0.798  0.849  0.905
15                Drug_Lab interaction alerts    1.82  1.30  0.642  0.609  0.663  0.596  0.584  0.591  0.592  0.599  0.606  0.595  0.759  0.807  0.861  0.865
16                Drug dosing support            1.75  1.30  0.637  0.604  0.658  0.591  0.580  0.586  0.587  0.595  0.602  0.590  0.753  0.801  0.854  0.858  0.816
Appendix D: Tests of Measurement Invariance for IT Factors
To establish measurement invariance, we estimated and compared three models: configural model, metric
model, and scalar model (Chen, 2007; Cheung & Rensvold, 2002). In these models, we progressively added
constraints and compared their fits to assess configural invariance, metric invariance, and scalar invariance
for the three HIT constructs: ECD, CPOE, and DS. Details are below:
Model 1 (Configural Model): In this model, we freely estimate both the factor loadings and the item intercepts (the levels of the items) at each time point to assess configural invariance.
Model 2 (Metric Model): In this model, we constrain the factor loadings of the same items to be equal across time points to test metric invariance.
Model 3 (Scalar Model): In this model, we constrain both the factor loadings and the item intercepts of the same items to be equal across time points to test scalar invariance.
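For illustration, equality constraints of this kind are typically imposed in Mplus through shared parameter labels. The fragment below is a hedged sketch for the ECD factor at two of the time points only; the item variable names (e.g., notes08, problist08) are hypothetical, and the loading of the first indicator is fixed at 1 by Mplus defaults for identification.

! Metric model sketch: loadings of the same item share a label (L2-L5) across years
ECD08 BY notes08
      problist08 (L2)
      medlist08 (L3)
      dischsum08 (L4)
      advdir08 (L5);
ECD09 BY notes09
      problist09 (L2)
      medlist09 (L3)
      dischsum09 (L4)
      advdir09 (L5);
! The scalar model additionally labels the item intercepts, e.g., [problist08] (I2); [problist09] (I2);
! The configural model omits all labels, so loadings and intercepts are freely estimated over time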
Table D1 reports the fit statistics for Models 1 to 3 for each HIT factor. Model 1 has acceptable fit statistics across all HIT factors, indicating that configural invariance is established for all three HIT factors. We then compare Model 2 (metric model) with Model 1 (configural model) to assess metric invariance and compare Model 3 (scalar model) with Model 2 (metric model) to assess scalar invariance. We adopted the change-in-CFI criterion for nested models (ΔCFI ≥ -0.01, i.e., a decrease in CFI of no more than 0.01) to evaluate metric and scalar invariance because this criterion is independent of model complexity and sample size and is commonly applied by scholars (Chen, 2007; Cheung & Rensvold, 2002). As shown in Table D1, the ΔCFI values for the nested model comparisons all fall within this threshold, suggesting that the null hypothesis of invariance should not be rejected. Thus, both metric and scalar invariance are established, in addition to configural invariance. Since we did not find a meaningful reduction in fit from Model 1 to Model 3 for any factor, we chose the most parsimonious model (Model 3) for ECD, CPOE, and DS for further analysis.
Table D1: Establishing Measurement Invariance

                             χ2     DF   RMSEA  CFI    Model Comparison  ΔCFI    TLI    SRMR
Factor 1 (ECD)
Model 1: Configural Model    372    215  0.030  0.989                            0.985  0.048
Model 2: Metric Model        504    231  0.038  0.981  M2 vs. M1         -0.008  0.975  0.060
Model 3: Scalar Model        654    287  0.039  0.974  M3 vs. M2         -0.007  0.973  0.061
Factor 2 (CPOE)
Model 1: Configural Model    350    215  0.028  0.999                            0.999  0.025
Model 2: Metric Model        575    231  0.043  0.998  M2 vs. M1         -0.001  0.997  0.026
Model 3: Scalar Model        778    287  0.046  0.997  M3 vs. M2         -0.001  0.997  0.029
Factor 3 (DS)
Model 1: Configural Model    1140   335  0.054  0.980                            0.974  0.066
Model 2: Metric Model        1171   355  0.053  0.979  M2 vs. M1         -0.001  0.975  0.068
Model 3: Scalar Model        1305   423  0.050  0.978  M3 vs. M2         -0.001  0.977  0.068
The resulting factor loadings across different time periods are presented below in Table D2.
Table D2: Item Loadings across Time Periods

HIT Factor  Item Name                     2008   2009   2010   2011   2012
ECD         Nursing Notes                 0.906  0.921  0.926  0.935  0.953
            Problem lists                 0.766  0.796  0.807  0.825  0.868
            Medication lists              0.810  0.836  0.845  0.861  0.897
            Discharge summaries           0.727  0.760  0.772  0.793  0.841
            Advanced directives           0.688  0.724  0.736  0.759  0.812
CPOE        Laboratory tests              0.985  0.978  0.977  0.978  0.987
            Radiology tests               0.988  0.984  0.983  0.984  0.980
            Medications                   0.986  0.981  0.980  0.980  0.995
            Consultation requests         0.957  0.940  0.937  0.939  0.982
            Nursing orders                0.966  0.953  0.950  0.952  0.969
DS          Clinical guidelines           0.858  0.868  0.870  0.902  0.883
            Clinical reminders            0.872  0.881  0.883  0.912  0.895
            Drug allergy alerts           0.964  0.967  0.968  0.977  0.972
            Drug_drug interaction alerts  0.977  0.979  0.980  0.985  0.982
            Drug_Lab interaction alerts   0.866  0.875  0.877  0.908  0.890
            Drug dosing support           0.852  0.862  0.865  0.898  0.878
Appendix E: Selected Lowess Smoothing Plots of ECD, CPOE, DS, and
Communication Score
Figure E1. Selected Lowess Smoothing Plots of ECD, CPOE, DS, and Communication Score
Note: We randomly selected 100 hospitals from our sample and plotted Lowess smoothing curves for them; for space considerations, we report the plots for only four hospitals in Appendix E.
Appendix F. Mplus 7 Code to Conduct BDLDSM Analysis
We provide sample Mplus 7 code for estimating the BDLDSM model with full coupling effects between communication and ECD. The code is similar for the other HIT variables.
!Import Data
Data:
!! If input is raw data, use the following statement:
file is dv_iv_control_state.txt;
!! If input is observed covariance matrix and means, add the following two statements:
!! TYPE = MEANS COVARIANCE;
!! NOBSERVATIONS = 791;
Variable:
! Describe Variables
! comm08-comm13 are the outcome variable (communication score) measured in 2008-2013
! ecd08-ecd12 are the predictor variable (electronic clinical documentation) measured in 2008-2012
! market, logbed, noprofit, and teaching are control variables: market competition,
! hospital bed size (logged), profit status, and teaching status
! hospst1-hospst6 capture state effects
names are
id comm08 comm09 comm10 comm11 comm12 comm13
ecd08 ecd09 ecd10 ecd11 ecd12
market logbed noprofit teaching
hospst1 hospst2 hospst3
hospst4 hospst5 hospst6;
usevar =
comm08 comm09 comm10 comm11 comm12 comm13
ecd08 ecd09 ecd10 ecd11 ecd12
market logbed noprofit
teaching hospst1 hospst2
hospst3 hospst4 hospst5 hospst6;
! Missing values are coded as -99
missing are all (-99);
DEFINE:
!! Rescale using the DEFINE command. Both the communication score and the electronic clinical
!! documentation implementation levels are multiplied by 10 for further analysis. Multiplying by 10
!! does not affect the model fit but increases the variances, which eases estimation.
!! If the input data are a covariance matrix and means, skip the DEFINE step.
comm08 = comm08 * 10;
comm09 = comm09 * 10;
comm10 = comm10 * 10;
comm11 = comm11 * 10;
comm12 = comm12 * 10;
comm13 = comm13 * 10;
ecd08 = ecd08 * 10;
ecd09 = ecd09 * 10;
ecd10 = ecd10 * 10;
ecd11 = ecd11 * 10;
ecd12 = ecd12 * 10;
!!Describe Analysis methods
ANALYSIS:
TYPE= MEANSTRUCTURE;
COVERAGE=0;
processors = 40;
MODEL:
! Use BY command to indicate which latent variables are measured by which items
! An asterisk (*) followed by a number provides a starting value to aid model estimation
! The starting values used in the BDLDSM model are the automatic starting values provided by
! Mplus for the univariate latent change score models. We first estimated one latent change score
! model for the communication score (with a cubic change form) and one for ECD implementation
! (with a quadratic change form) to obtain the automatically generated starting values, and then
! applied those starting values in this BDLDSM model.
! Specify latent true scores ly1-ly6
! Factor loadings for ly1- ly6 are fixed at 1
ly1 BY comm08@1;
ly2 BY comm09@1;
ly3 BY comm10@1;
ly4 BY comm11@1;
ly5 BY comm12@1;
ly6 BY comm13@1;
! dy represents latent change scores ∆y (the change of communication score)
! Factor loadings for dy2- dy6 are fixed at 1
dy2 BY ly2@1;
dy3 BY ly3@1;
dy4 BY ly4@1;
dy5 BY ly5@1;
dy6 BY ly6@1;
! b_2yi represents constant change factor
! Factor loadings for b_2yi are fixed at 1
b_2yi BY dy2@1;
b_2yi BY dy3@1;
b_2yi BY dy4@1;
b_2yi BY dy5@1;
b_2yi BY dy6@1;
! b_3yi represents the linear growth factor for the communication score
! Factor loadings for b_3yi change linearly and are multiplied by 2
! Factor loadings: -2*2, -1*2, 0, 1*2, 2*2
b_3yi BY dy2@-4;
b_3yi BY dy3@-2;
b_3yi BY dy4@0;
b_3yi BY dy5@2;
b_3yi BY dy6@4;
! b_4yi represents the quadratic growth factor for the communication score
! Factor loadings for b_4yi change quadratically and are multiplied by 3
! Factor loadings: 4*3, 1*3, 0, 1*3, 4*3
b_4yi BY dy2@12;
b_4yi BY dy3@3;
b_4yi BY dy4@0;
b_4yi BY dy5@3;
b_4yi BY dy6@12;
! Autoregressions
ly2 ON ly1@1;
ly3 ON ly2@1;
ly4 ON ly3@1;
ly5 ON ly4@1;
ly6 ON ly5@1;
! Proportional Effects
! pi_y represents the proportional change parameter for communication score
dy2 ON ly1*-1.22029 (pi_y);
dy3 ON ly2*-1.22029 (pi_y);
dy4 ON ly3*-1.22029 (pi_y);
dy5 ON ly4*-1.22029 (pi_y);
dy6 ON ly5*-1.22029 (pi_y);
! Covariance
ly1 WITH b_2yi*7.50249;
ly1 WITH b_3yi*-0.50301;
ly1 WITH b_4yi*0.10612;
b_2yi WITH b_3yi*-0.10361;
b_2yi WITH b_4yi*-0.02776;
b_3yi WITH b_4yi*-0.01400;
! Specify the intercepts
[ comm08@0 ];
[ comm09@0 ];
[ comm10@0 ];
[ comm11@0 ];
[ comm12@0 ];
[ comm13@0 ];
[ ly1*8.77831 ];
[ ly2@0 ];
[ ly3@0 ];
[ ly4@0 ];
[ ly5@0 ];
[ ly6@0 ];
[ dy2@0 ];
[ dy3@0 ];
[ dy4@0 ];
[ dy5@0 ];
[ dy6@0 ];
[ b_2yi*11.60411 ];
[ b_3yi*0.22889 ];
[ b_4yi*0.01956 ];
! Label for the residual variance for communication scores is sigma2_u
comm08*0.34152 (sigma2_u);
comm09*0.34152 (sigma2_u);
comm10*0.34152 (sigma2_u);
comm11*0.34152 (sigma2_u);
comm12*0.34152 (sigma2_u);
comm13*0.34152 (sigma2_u);
! Specify Residual Variances
ly1*9.01256;
ly2@0;
ly3@0;
ly4@0;
ly5@0;
ly6@0;
dy2@0;
dy3@0;
dy4@0;
dy5@0;
dy6@0;
b_2yi*9.47906;
b_3yi*0.09324;
b_4yi*0.00795;
! Specify latent true scores lx1-lx5
! Factor loadings for lx1- lx5 are fixed at 1
lx1 BY ecd08@1;
lx2 BY ecd09@1;
lx3 BY ecd10@1;
lx4 BY ecd11@1;
lx5 BY ecd12@1;
! dx represents latent change scores ∆x (the change of ECD)
! Factor loadings for dx2- dx5 are fixed at 1
dx2 BY lx2@1;
dx3 BY lx3@1;
dx4 BY lx4@1;
dx5 BY lx5@1;
! b_2xi represents constant change factor for ECD
! Factor loadings for b_2xi are fixed at 1
b_2xi BY dx2@1;
b_2xi BY dx3@1;
b_2xi BY dx4@1;
b_2xi BY dx5@1;
! b_3xi represents the linear growth factor for ECD
! Factor loadings for b_3xi change linearly and are multiplied by 2
! Factor loadings: -1*2, 0, 1*2, 2*2
b_3xi BY dx2@-2;
b_3xi BY dx3@0;
b_3xi BY dx4@2;
b_3xi BY dx5@4;
! Autoregressions
lx2 ON lx1@1;
lx3 ON lx2@1;
lx4 ON lx3@1;
lx5 ON lx4@1;
! Proportional Effects
! pi_x represents the proportional change parameter for ECD
dx2 ON lx1*0.72088 (pi_x);
dx3 ON lx2*0.72088 (pi_x);
dx4 ON lx3*0.72088 (pi_x);
dx5 ON lx4*0.72088 (pi_x);
! Covariance
lx1 WITH b_2xi*-23.40836;
lx1 WITH b_3xi*0.66594;
b_2xi WITH b_3xi*2.26084;
! Specify the intercepts
[ ecd08@0 ];
[ ecd09@0 ];
[ ecd10@0 ];
[ ecd11@0 ];
[ ecd12@0 ];
[ lx1*-2.67705 ];
[ lx2@0 ];
[ lx3@0 ];
[ lx4@0 ];
[ lx5@0 ];
[ dx2@0 ];
[ dx3@0 ];
[ dx4@0 ];
[ dx5@0 ];
[ b_2xi*3.13733 ];
[ b_3xi*0.35770 ];
! Label for the residual variance for ECD is sigma2_s
ecd08*17.07641 (sigma2_s);
ecd09*17.07641 (sigma2_s);
ecd10*17.07641 (sigma2_s);
ecd11*17.07641 (sigma2_s);
ecd12*17.07641 (sigma2_s);
! Specify Residual Variances
lx1*29.81328;
lx2@0;
lx3@0;
lx4@0;
lx5@0;
dx2@0;
dx3@0;
dx4@0;
dx5@0;
b_2xi*24.77468;
b_3xi*2.83590;
! Include the time-invariant control variables in the model and regress the growth factors on them
ly1 b_2yi b_3yi b_4yi on market logbed
noprofit teaching hospst1 hospst2
hospst3 hospst4 hospst5 hospst6;
lx1 b_2xi b_3xi on market logbed
noprofit teaching hospst1 hospst2
hospst3 hospst4 hospst5 hospst6;
! Bivariate Information
ly1 WITH lx1*-4.27001;
ly1 WITH b_2xi*-1.73666;
ly1 WITH b_3xi*0.26032;
b_2yi WITH b_3xi;
lx1 WITH b_2yi*-2.97771;
lx1 WITH b_3yi*0.30969;
lx1 WITH b_4yi*-0.13135;
b_2xi WITH b_2yi*-1.90278;
b_2xi WITH b_3yi;
b_2xi WITH b_4yi;
b_3xi WITH b_3yi*-0.07306;
b_3xi WITH b_4yi;
! Covariances between the communication score and ECD at each time point, constrained to be
! equal across time via the common label sigma_su
comm08 WITH ecd08 (sigma_su); comm09 WITH ecd09 (sigma_su);
comm10 WITH ecd10 (sigma_su); comm11 WITH ecd11 (sigma_su);
comm12 WITH ecd12 (sigma_su);
! Communication Score -> ΔECD
! Coupling parameters from Communication Score to ΔECD are specified and labeled as delta_x
dx2 ON ly1 (delta_x); dx3 ON ly2 (delta_x);
dx4 ON ly3 (delta_x); dx5 ON ly4 (delta_x);
! ECD -> ΔCommunication Score
! Coupling parameters from ECD to ΔCommunication Score are specified and labeled as delta_y
dy2 ON lx1 (delta_y); dy3 ON lx2 (delta_y);
dy4 ON lx3 (delta_y); dy5 ON lx4 (delta_y); dy6 ON lx5 (delta_y);
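! To estimate the reduced models from Step 3, one can omit the corresponding coupling statements:
! model (1), no coupling: omit both the delta_x and delta_y statements above;
! model (2), X -> ΔY only: keep the delta_y statements and omit the delta_x statements;
! model (3), Y -> ΔX only: keep the delta_x statements and omit the delta_y statements.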
plot:
type = plot3;
series = comm08 comm09 comm10 comm11 comm12 comm13 (*);
Output:
patterns tech1 residual fsdet stdyx tech4
modindices sampstat svalues;
Appendix G. Sample Size, Mean, and Covariance Matrix
The covariance matrix is also downloadable from this address:
https://drive.google.com/file/d/174HTqIld6JGJ9CYmsMhTi7CYIyACe9dN/view?usp=sharing
Variable    Sample Size  Mean     COMM08   COMM09   COMM10   COMM11
COMM08      770          8.556    8.156
COMM09      767          9.171    6.003    6.029
COMM10      765          9.534    5.569    5.139    5.804
COMM11      763          9.994    5.269    4.831    5.120    5.592
COMM12      755          10.579   5.063    4.573    4.863    5.003
COMM13      740          10.898   4.729    4.271    4.434    4.565
ECD08       399          -2.188   -0.894   0.445    1.177    0.890
ECD09       519          -1.823   -1.012   -0.150   1.072    1.251
ECD10       526          -1.327   -2.004   -0.221   0.454    0.631
ECD11       481          0.830    -3.245   -1.232   -0.285   0.118
ECD12       522          3.314    -1.852   -0.423   0.881    0.952
CPOE08      399          -2.216   -0.519   -0.061   -0.009   -0.522
CPOE09      519          -1.467   -1.995   -1.418   -0.758   -0.666
CPOE10      526          -0.697   -3.011   -1.493   -0.926   -0.817
CPOE11      481          1.718    -2.869   -1.227   -0.793   -0.712
CPOE12      522          5.002    -2.550   -0.798   0.058    0.505
DS08        399          -2.037   -1.729   -0.148   0.427    0.549
DS09        519          -2.013   -1.471   -0.674   0.644    0.963
DS10        526          -1.269   -3.099   -0.733   0.044    0.359
DS11        481          1.101    -3.249   -1.583   -0.878   -0.161
DS12        522          3.565    -2.508   -0.893   0.262    0.451
MARKET      791          0.172    0.130    0.101    0.099    0.103
LOGBED      791          5.254    -0.872   -0.719   -0.717   -0.748
NOPROFIT    791          0.666    0.274    0.149    0.137    0.125
TEACHING    791          0.105    -0.062   -0.058   -0.076   -0.068
State=CA    791          0.076    -0.030   -0.038   -0.040   -0.038
State=FL    791          0.056    0.024    -0.006   -0.008   -0.009
State=MD    791          0.118    0.368    0.318    0.314    0.311
State=NC    791          0.180    -0.214   -0.062   -0.056   -0.061
State=NJ    791          0.075    0.132    0.099    0.112    0.111
State=NY    791          0.302    -0.257   -0.262   -0.213   -0.199
COMM12
COMM13
ECD08
ECD09
ECD10
ECD11
COMM12
5.597
COMM13
4.843
5.261
ECD08
0.772
0.527
32.856
ECD09
0.852
0.492
23.3
39.894
ECD10
0.514