HOW DO INDIVIDUALS INTERPRET MULTIPLE
CONCEPTUAL MODELS? A THEORY OF
COMBINED ONTOLOGICAL COMPLETENESS
AND OVERLAP
Jan Recker
Faculty of Management, Economics and Social
Sciences
University of Cologne
Germany 50923
and
School of Management
Queensland University of Technology
Australia 4000
e-mail: jan.recker@wiso.uni-koeln.de;
j.recker@qut.edu.au
Peter Green
School of Accountancy
Queensland University of Technology
Australia 4000
e-mail: p.green@qut.edu.au
Manuscript accepted for publication in the Journal of the Association for Information Systems
as a Research Article.
ABSTRACT
When analyzing or designing information systems, users often work with multiple conceptual
models because each model articulates a different, partial aspect of a real-world domain.
However, the available research in this area has largely studied the use of single modeling
artefacts only. We develop new theory about interpreting multiple conceptual models that
details propositions for evaluation of individuals’ selection, understanding, and perceived
usefulness of multiple conceptual models. We detail several implications of our theory
development for empirical research on conceptual modeling. We also outline practical
contributions for the design of conceptual models and for choosing models for systems
analysis and design tasks. Finally, to stimulate research that builds on our theory, we illustrate
procedures for enacting our theory and discuss a range of empirically relevant boundary
conditions.
Keywords
Conceptual modeling, representation theory, combined ontological completeness, ontological
overlap, model interpretation, model selection, domain understanding, perceived usefulness.
INTRODUCTION
When analyzing or designing information systems (IS), professionals such as process analysts,
systems designers, and software developers frequently develop and use representations of the
relevant features of a real-world domain that the IS is intended to support. These
representations, called conceptual models, describe someone’s or some group’s understanding
of a real-world domain and the relevant features or phenomena in that domain (Wand and
Weber, 2002).
Conceptual models are developed using grammars—that is, sets of constructs and the rules by
which to combine them (Wand and Weber, 2002). The traditional focus of the academic
literature on conceptual modeling has been on how the quality of grammars and models might
be evaluated and improved (Burton-Jones et al., 2009; Siau and Rossi, 2011). However, the
academic literature is inconsistent with practice. IS professionals typically do not use just one
conceptual modeling grammar, let alone one conceptual model, in their analysis and design
tasks. As we will show below, they use multiple models, often designed with different grammars,
in their systems analysis and design practices. Yet, the literature offers no comprehensive
theory to explain how practitioners work with these models or to answer questions such
as: which models do they choose as a representation to help them in an analysis or design
task? How do they read multiple models together? Which models are useful together and which
are not?
In this paper, we develop a new theory with which to analyze and explain the interpretation of
multiple conceptual models in combination. We generate research models that specify detailed
propositions regarding three important decisions when interpreting multiple models: selecting
which models to use, determining how much domain understanding can be generated from
multiple models, and explaining the perceived usefulness of conceptual model combinations.
Because our aim is to invite future empirical research on conceptual modeling on the basis of
theory, we also provide an illustration of the procedures with which the theory can be applied,
and we discuss moderator variables that might be relevant in empirical research designs to
establish boundary conditions.
In terms of contributions, to the best of our knowledge, this paper is the first paper to propose a
theory to explain the interpretation of multiple conceptual models that is anchored in properties
of the models themselves. This issue is important because users who model system requirements
typically use many different models to make the requirements clear. When other stakeholders who
are trying to get a sense of the system examine these models in combination, the combination of
models, and the symbolic constructs within them, may fail to provide the full picture because a
critical concept is not represented, or may confuse the stakeholders because different symbols in
different models actually represent the same real-world concept. Our work attempts to determine
clearly and specifically the conditions under which any set of conceptual models can be viewed by
stakeholders to minimize omitted or confused meaning. Moreover, while theoretical, our work also
informs practice. By clarifying
how multiple conceptual models might be meaningfully combined, we provide guidance to
modeling practitioners on which model combinations are better for understanding domains,
which are worse, and which will not adequately cover a given domain. Our work also informs model
designers about model combinations they can create that will likely be of most benefit to future
users.
We make one note about the theory development reported in this paper. The theory we produce
is not grounded or derived inductively from data (Urquhart and Fernandez, 2013). Rather, it
largely builds on established theory (Wand and Weber, 1990; 1993; 1995) and it derives new
logic deductively from the premises of that theory. However, our theorizing also uses both
formal and informal empirical data sources as the empirical matters that inspire our
problematization of both the reported phenomena and the available literature that purportedly
explains them (Alvesson and Kärreman, 2007, p. 1265). We rely on our own observations of
conceptual modelling practices that we have gathered in many years of field work (e.g., Green
et al., 2011; Jabbari Sabegh and Recker, 2017) as an informal source of theorizing (Fiske,
2004). We also rely on published facts and cases of the same practices that inspired us to
attempt to provide an explanation of what we believe is a real-world puzzle (Byron and
Thatcher, 2016, p. 4) that is only seemingly innocuous: how do analysts and designers interpret
multiple models? Noting the observations, we engaged in thought experiments seeking
explanations that could satisfy our curiosity (Corley and Gioia, 2011). This paper reports the
outcomes from this process.
We proceed as follows: first, we review the background relevant to our theorizing, in particular
the available empirical evidence on the use of multiple conceptual models and the available
theory base in conceptual modeling research. Then we formulate our new theory and develop
its key propositions in three research models. Next, we discuss the scope and contributions of
our theory and propose a range of implications.
BACKGROUND
Two streams of literature inform our theory development. One is the literature that covers the
research area of conceptual modeling as a whole. We review this literature first because it helps
to position our theory development within the contributions of existing research. Second, we
review empirical knowledge on the use of (multiple) conceptual models specifically, because our
theory development addresses challenges that stem from an inconsistency between available
theoretical knowledge and reported practice when dealing with multiple models.
Research on conceptual modeling
Conceptual modeling concerns the development and use of representations of relevant features
of a real-world domain that an IS is intended to support (Wand and Weber, 2002). It is an active
research area in information systems research with contributions consistently appearing in our
top journals (e.g., Parsons, 2011; Recker, 2013; Bera et al., 2014; Clarke et al., 2016; Khatri
and Vessey, 2016; Lukyanenko et al., 2017). Two broad research streams can be distinguished:
First, there is a stream of research on the design of representations of relevant features of a
real-world domain through the use of conceptual modeling grammars and methods. For
example, several studies have focused on how a conceptual model can be created, especially
how to design a “better” model (e.g., Gemino and Wand, 2005; Shanks et al., 2008; Recker,
2013; Clarke et al., 2016). Studies have also demonstrated how the representation that a
conceptual model offers can be augmented through design features like colors (Masri et al.,
2008), text (Gemino and Parker, 2009), and other customizations (Samuel et al., 2015). Finally,
some studies have examined how practitioners use methods or grammars for the design of
conceptual models (Dawson and Swatman, 1999; Purao et al., 2002; Recker et al., 2010).
Second, there is a stream of research on how previously built conceptual models are used for
purposes of problem solving or decision making. Much of this research has focused on how
users understand a conceptual model in its entirety (Figl et al., 2013; Bera et al., 2014), or
various elements within it (Bodart et al., 2001; Parsons, 2011). Fewer studies have focused on
how practitioners interpret a conceptual model for specific analysis and design tasks (Bowen et
al., 2006; Allen and Parsons, 2010; Figl and Recker, 2016).
With our theory, we contribute to the second broad stream of research on conceptual modeling.
Our explicit focus on multiple models, rather than just one model, is a key extension to the
conceptual modeling literature. Common among the studies in either stream is a focus on a
single artefact – one model or grammar. When studies have employed multiple models or
grammars, they were usually compared to evaluate which one was “better”.
Empirical evidence on the use of multiple conceptual models
The starting point for our investigation is an inconsistency between conceptual modeling
practice and academic theory. It is common for practitioners to use several conceptual models
when analyzing or designing information systems, and they appear to be equipped with some
intuition about which models can be combined in purposeful ways. For example, entity-
relationship models (Chen, 1976) describe real-world domains in terms of the entities that make
up a domain, the attributes that characterize these entities, and the relationships that may exist
between the entities. Process models like the Business Process Model and Notation (BPMN,
OMG, 2011) describe real-world domains in terms of events that occur and the sequences of
activities that are triggered and executed in response to these events. It seems intuitive that
entity-relationship models differ from BPMN models: one addresses form and substance, and
the other, behavior and change (Burton-Jones and Weber, 2014). It also seems logical that both
substance and change are important to understand when one examines what an information
system represents and what it is meant to do. This logic is also evident when one considers
prominent conceptual modeling methods. For instance, UML features fourteen grammars (and
rules for construction) to describe structure, behavior, and interactions of a system from a
variety of perspectives (Fowler, 2004). Other longstanding methodologies, such as Multiview
(Avison and Wood-Harper, 1986), have promoted multiple models for thirty years.
Not surprisingly, evidence from both surveys and case studies shows that practitioners indeed
often work with multiple models, often constructed with different grammars
(Dobing and Parsons, 2008; Petre, 2013; Jabbari Sabegh and Recker, 2017). Moreover,
evidence suggests that they do so not to substitute models but to combine them. For example,
ninety percent of UML practitioners reportedly work with at least two different models in at least
a third of their projects, and nearly three-quarters use at least two of the models in two-thirds of
their projects (Dobing and Parsons, 2008, p. 6). Similarly, Grossman et al. (2005, p. 393) report
that over 60% of their surveyed UML users worked at least with use case, class, sequence,
statechart, and activity diagrams. Petre (2013) reported that many professional software engineers
used UML models selectively and even integrated into their work other models, such as those
developed using DFD, ERD, BPMN and other grammars (p. 728). In a similar manner, case
studies of model-driven engineering practices illustrate how practitioners use multiple types of
models as means of reference and communication during systems development (e.g., Cherubini
et al., 2007, p. 561). Other cases detail the issues users encounter when working with multiple
models (Baker et al., 2005, pp. 483-487).
Of course, one might believe that the use of multiple models during systems analysis and
design is no longer current or relevant, but this assertion seems incorrect. While certainly
modern approaches to systems development such as agile have gained popularity (Conboy,
2009), this situation does not mean that model-based systems development practices have
disappeared. For example, model-driven engineering is practiced widely in many industries and
across organizations both large and small (e.g., Mohagheghi et al., 2013; Hutchinson et al.,
2014; Whittle et al., 2014). In these projects, a wide variety of modeling techniques and models
are reportedly in use (e.g., Grossman et al., 2005; Petre, 2013). Moreover, a recent interview
study in 2017 showed that all interviewed IS practitioners used more than one type of
conceptual model in their systems analysis and design tasks (Jabbari Sabegh and Recker,
2017). Clearly, in practice, models developed with various grammars appear to be used in
combination. Yet, in the literature, studies with an explicit focus on multiple models are sparse.
Table 1 summarizes literature that explicitly focuses on combinations of conceptual models.
Table 1. Literature on the Use of Combinations of Conceptual Models

Kim et al. (2000)
Object of study: Usability of multiple models as part of a system-development methodology.
Summary of research: The research examined representation aids that assist users in using multiple models to solve problems during systems development. It shows that visual cues and contextual information in multiple models assist users in searching for related information and developing hypotheses about the target system.
Implications for this paper: This study proposed an alternative theory with potential for conjunction: the study examined external aids that are not inherent to the models themselves and that may interact with the explanatory mechanisms we develop.

Siau and Lee (2004)
Object of study: Interpretation of class diagrams and use case diagrams in UML.
Summary of research: The research showed that use case diagrams and class diagrams depict different aspects of a problem domain. To users, the models appear to have very little overlap in the information captured, and both are perceived as necessary in requirements analysis.
Implications for this paper: This study suggested two relevant properties of model combinations: that they do not overlap and that they are complementary. However, the study did not identify from where complementarity or overlap in the models would stem.

Dobing and Parsons (2008)
Object of study: Use of UML diagram types.
Summary of research: Modeling practitioners use multiple types of UML diagrams in most projects. More than 50% of users report that they use five or more types of diagrams in at least a third of their software development projects.
Implications for this paper: This study established the ecological validity of our theory, that is, that practitioners use multiple models in combination.

Gemino and Parker (2009)
Object of study: Interpretation of textual use cases with use case models.
Summary of research: The research shows that participants who receive supporting diagrams develop higher levels of domain understanding than they did with a textual use case description alone.
Implications for this paper: This study indicated that benefits may accrue from multiple models that are redundant: use case diagrams aided the text by displaying the same information in a different way.

Jabbari Sabegh and Recker (2017)
Object of study: The use of multiple conceptual models during systems analysis and design.
Summary of research: The research interviews contemporary systems analysis and design practitioners to establish how and why multiple models are used in practice.
Implications for this paper: This study showed that the use of multiple models during systems analysis and design remains current. It also suggested that the selection and use of multiple models can be influenced by many different factors. However, the study did not offer a theory to explain the findings.
We highlight two main points about the literature summarized in Table 1. On the one hand, the
few empirical studies on multiple conceptual models mention findings like:
- “the information depicted by the two diagram types is sufficiently different and not
overlapping” (Siau and Lee, 2004, p. 235);
- “integration of information from the multiple perspectives was indeed necessary to
thoroughly understand the business case” (Kim et al., 2000, p. 289); and
- “while the use case diagram does not seem to add new information […,] [it] helps users
better understand sets of use cases” […] “Use cases augmented with a use case diagram
provides a more effective communication of system information than use cases alone”
(Gemino and Parker, 2009, p. 15 & 16).
- “all of our interviewees (15 out of 15) reportedly used more than one type of models in their
design and analysis tasks. […] multiple interrelated models were used to represent different
aspects of the system” (Jabbari Sabegh and Recker, 2017, p. 64 & 66).
These passages make three important points. First, they suggest that multiple models are
frequently used (Jabbari Sabegh and Recker, 2017) because a potential benefit of multiple
models is that they maximize the amount of information about a real-world domain. Second, this
effect is not unequivocal, however, as models have to be “sufficiently different” (Siau and Lee,
2004, p. 235). Third, models that do not contain different, complementary information may still
offer benefits (Gemino and Parker, 2009), likely because they establish “correspondence”
between the representations (Jabbari Sabegh and Recker, 2017, p. 69).
On the other hand, none of the studies we found provided explanatory mechanisms about these
effects that were rooted in the models themselves; none of the research focused on the
artefacts. For example, Kim et al. (2000) demonstrated the benefits of external aids for
understanding multiple models, such as visual cues to aid the transition between diagrams, or a
context diagram to position the relative importance of individual items. There is no theory about
which attributes inherent to conceptual models explain how best to combine them for use.
We draw three primary conclusions from this literature review:
1. There is evidence that suggests that practitioners work with multiple models (Grossman et
al., 2005; Dobing and Parsons, 2008; Mohagheghi et al., 2013), and that practitioners prefer
having them (Whittle et al., 2014), and obtain benefits from them (Siau and Lee, 2004;
Gemino and Parker, 2009). However, we do not yet fully understand how and why this is the case.
2. We do not yet fully understand which properties of the models themselves would make them
more or less appropriate for combination.
3. While extant studies have indicated that “representation” aspects or attributes of models
may matter, it is not yet clear which of these properties dominates.
In the theory development that follows, we develop answers to these challenges. To provide a
plausible basis for assumptions we require, we build on representation theory, which was
originally formulated by Wand and Weber (1990; 1993; 1995), and which has become a central
theory researchers have used to make predictions about conceptual modeling (Moody, 2009;
Siau and Rossi, 2011). Because we draw on it extensively, we provide a brief description in
Appendix A. A more detailed account of representation theory as it currently stands, its origins
and development over time, is provided in Burton-Jones et al. (2017).
THEORY DEVELOPMENT
In describing the development of our new theory about the interpretation of multiple conceptual
models, we follow three main steps (Whetten, 1989; Weber, 2012). First, we introduce the
constructs that conceptualize our independent variable: conceptual model combinations.
Second, we present constructs that conceptualize our main dependent variable: the
interpretation of multiple models. Third, we develop propositions that describe the associations
between these constructs. Table A1 summarizes relevant construct definitions.
Completeness and Overlap of Conceptual Model Combinations
We start by illustrating combined ontological completeness and ontological overlap in
conceptual models (Figure 1). In Figure 1, the large circle describes a set of real-world
phenomena that is to be represented. The representations required to develop a faithful (i.e.,
clear, complete, and accurate, see Weber, 1997, p. 83) description of these phenomena are
indicated by the black dots, which symbolize the different ontological constructs required (e.g.,
Which things are of relevance? What are their properties? Which events occur that change the
states of these things?). The two shaded circles describe the level of representation of these
phenomena achieved in two conceptual models: A and B. Each model provides some partial
representation of the phenomena; that is, each model has some level of ontological completeness.
Figure 1: Illustration of Combined Ontological Completeness and Overlap of Two Conceptual Models. In Analogy to Weber (1997, p. 102). [The figure shows two shaded circles, the completeness of the domain representation of model A and the completeness of the domain representation of model B, overlapping within a larger circle that denotes the set of real-world phenomena to be represented. The intersection of the two circles marks the ontological overlap between models A and B; the area of the larger circle left uncovered marks the remaining construct deficit of the combined representation.]
To describe the combination of models (A and B), we define two constructs:
1. Combined ontological completeness: the level of representational coverage a set of multiple
models provides about some real-world phenomena. The level of combined ontological
completeness is defined as the sum of ontological construct representations available in
each of the models (Figure 1).
2. Ontological overlap: the set of redundant representations across a set of models, that is, the
extent to which a (partial) representation of some real-world phenomena in one model is
already available in another model. The level of ontological overlap is defined as the sum of
ontological construct representations shared between the models (Figure 1).
Figure 1 also illustrates the notion of remaining construct deficit, which clarifies the level of
combined ontological completeness that is achievable. Models are created using grammars that
provide constructs that describe the semantics of real-world phenomena. However, because, as
detailed in Appendix A, no available conceptual modeling grammar is complete, no single
grammar offers constructs to develop a full representation of real-world phenomena. All grammars
therefore have a representational limit defined by their extent of construct deficit; this limit is
the maximal ontological completeness (MOC) offered by a grammar. Consequently, any
one model is, at best, maximally ontologically complete (but not fully complete). In fact, models
are often less than maximally complete, as most do not contain all grammatical constructs –
only a small subset (zur Muehlen and Recker, 2008). In other words, the actual level of
completeness of a conceptual model is often less than its potential level of completeness.
Theoretically, two or more models in combination could achieve a full representation of all
required real-world phenomena, but the level of combined ontological completeness depends
on selecting models for combination that maximize the completeness of representation. The
combined ontological completeness of the models is also constrained by the maximal
ontological completeness of the grammars used to create the models: for example, no number
of BPMN models, however large, will ever offer a full representation of a real-world domain
because the BPMN grammar is ontologically incomplete (Recker et al., 2010).
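To make these two constructs concrete, the following Python sketch offers one possible, purely illustrative operationalization (our own, not part of the original formalism; the ontological construct labels and the models’ coverages are hypothetical). It scores combined ontological completeness and ontological overlap as set-based proportions over the constructs required to represent a domain.

# Illustrative sketch only: combined ontological completeness and ontological
# overlap expressed as set-based proportions. Construct labels are hypothetical
# stand-ins for the constructs a full ontological analysis would identify.

DOMAIN_CONSTRUCTS = {"thing", "property", "state", "event",
                     "transformation", "system", "lawful_state_space"}

def combined_completeness(models, domain=DOMAIN_CONSTRUCTS):
    """Share of required constructs represented by at least one model in the set."""
    covered = set().union(*models.values()) & domain
    return len(covered) / len(domain)

def ontological_overlap(models, domain=DOMAIN_CONSTRUCTS):
    """Share of required constructs represented redundantly in two or more models."""
    coverage = [m & domain for m in models.values()]
    shared = {c for i, a in enumerate(coverage)
                for b in coverage[i + 1:] for c in a & b}
    return len(shared) / len(domain)

# Hypothetical combination: a structure-oriented model and a behavior-oriented model.
combination = {
    "ER model":   {"thing", "property", "lawful_state_space"},
    "BPMN model": {"thing", "event", "transformation", "state"},
}
print(combined_completeness(combination))  # 6/7: high combined completeness
print(ontological_overlap(combination))    # 1/7: both models represent "thing"

Because no grammar is ontologically complete, the union computed in combined_completeness can never exceed the joint maximal ontological completeness of the grammars used, mirroring the constraint discussed above.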
Interpretation of Conceptual Model Combinations
Next, we describe the phenomenon that our theory purports to explain: the interpretation of
multiple conceptual models.
Interpretation of one or more conceptual models is primarily a task of model readers (sometimes
also called model interpreters, see Gemino and Wand, 2004), that is, those users who during
analysis and design engage in solving problems and making decisions with the use of
previously built conceptual models, as opposed to those who develop conceptual models (i.e.,
model creators or simply modelers, see Gemino and Wand, 2004).1 Addressing model readers
is important because system failures often stem from communication failures between analysts
and users in the early stages of system analysis and design (Lauesen and Vinter, 2001).
How do individuals interpret—“read”—conceptual models? The answer to this question is not
straightforward. Research has established the purposes of interpreting conceptual models, such
as supporting communication between developers and users, helping analysts understand a
domain, providing input for systems design processes, and documenting requirements for future
reference (Kung and Sølvberg, 1986; Wand and Weber, 2002). It has also examined intended
or reported benefits from interpreting conceptual models, such as input to organizational
redesign and improved documentation of operational processes (e.g., Indulska et al., 2009).
Another set of studies has identified the extent of conceptual model interpretation by
practitioners in tasks such as database design and management, software development,
business process improvement, or enterprise architecture design (Davies et al., 2006; Fettke,
2009).
These findings demonstrate that the purposes and application areas of conceptual models are
varied. However, independent of purpose and application, all uses of conceptual models
involve interpreting their content (Burton-Jones and Meso, 2008) – and it remains unclear what
interpreting conceptual models as an act involves. Burton-Jones et al. (2009, p. 498) suggest
that model interpretation can be examined from two perspectives: interpretational fidelity (how
faithfully—viz., completely, clearly and accurately—does the interpretation of one or more
conceptual models represent the denotational semantics in the models intended by their
creators?) and interpretational efficiency (what resources are used to interpret one or more
conceptual models?). While this distinction has been applied widely in the literature to
distinguish different outcome variables, such as scores on problem solving questions as a
measure of interpretational fidelity, or time taken to complete a problem solving task as a
measure of interpretational efficiency (e.g., Bodart et al., 2001; Gemino and Wand, 2005;
Shanks et al., 2008; Recker and Dreiling, 2011; Bera et al., 2014), it does not elaborate on the
act of interpretation per se. We take this step.
1 The premises of our theory in principle also inform the behaviors and decisions of model creators because
they, too, read and interpret conceptual models – with a view to redesigning these models or creating others.
We start by suggesting that conceptual model interpretation is a goal-directed activity that
involves the user (the subject), the model(s) (the object), and the task (the organizational action
that requires the interpretation of one or more conceptual models). We assume that (a) the user
is an individual person who interprets a conceptual model for a task2, (b) conceptual models are
tangible artifacts that provide a representation of a real-world domain that is relevant to the user
given a particular task (Wand and Weber, 1990), and (c) the task is a goal-oriented activity, so
task outcomes can be compared to predefined task requirements (Zigurs and Buckland, 1998).
This definition stresses that interpreting models occurs as part of a particular task, rather than
for its own sake—“just to read them”. A task goal might be to identify system requirements from
a domain model in order to express relevant functional requirements completely and clearly. Or,
a task goal might involve specifying database queries accurately and efficiently, which may
require the models to be not only expressive but also parsimonious (Bowen et al., 2006). Task
goals may even differ to the point at which incomplete and/or inaccurate conceptual models
may be required. Independent from the nature of the task goal, however, model interpretation is
inevitably characterized by task goals. Therefore, requirements in regard to the interpretation of
one or more conceptual models in support of the task can be defined a priori.
2 The use of conceptual models can also occur at a group level. To limit the scope of our study, we focus on the
individual level in our theory.
Moreover, in this definition, interpretation of one or more conceptual models is a means to an
end, rather than an end in itself. The tasks and task goals against which interpretational fidelity
or efficiency must be evaluated may vary, but independent of the specific tasks, the act of
interpreting one or more conceptual models always entails at least three components: selection,
action, and evaluation. The basis for these components stems from research on cognitive
representation and control of action processes (e.g., Beach and Mitchell, 1978; Locke and
Latham, 1990): people engage in behavior, driven by a mental representation that links higher-
level goals (such as those imposed by a task) to specific actions (such as the selection of
models for reading) that are instrumental in achieving these goals. In doing so, people evaluate
the performance expectancy of any object they may use in these actions, perform the actions,
and then compare the achieved performance against their expectations. When faced with
multiple options (e.g., multiple models), people perform profitability tests to compare acceptable
options (Beach and Mitchell, 1978).3
3 It should be clear from this discussion that we view interpretation of conceptual models as a primarily
rational act in which individuals make decisions in choice situations (such as selecting a set of conceptual
models for an upcoming task) by recognizing available alternatives and then balancing what they perceive to
be costs (such as ontological overlap) and outcomes (such as level of domain understanding) based on
individual preference functions (Scott, 2000).
Following this line of reasoning, we define the selection, action, and evaluation components of
conceptual model interpretation.
Selection: deciding which conceptual models to read
Prior to engaging in a task in which a user will interpret a set of available conceptual models, the
user has expectations about perceived performance gains from reading or studying the
conceptual models. These expectations are similar to those ascribed to other artefacts (e.g.,
built information systems) with which the user plans to engage in a task, such as their ability to
help the user perform tasks more quickly, improve efficiency, and increase the quality of their
work (Venkatesh and Davis, 2000; Brown et al., 2012). When a user has a variety of conceptual
models to assist in their model-based task, the user’s performance expectancies will determine
a profitability test (Beach and Mitchell, 1978) for choosing the most profitable candidate (a
specific combination of models from the given set). This test will manifest in a selection
decision about which model or model combination to read in an upcoming task. For instance, if
an analysis task involves the redesign of an organizational procedure to minimize the use of
resources, the user may select models that convey information about workflow processes and
role allocations, such as activity charts and swimlane diagrams. On the other hand, if a task
involves presenting an overview to senior managers who need to grasp a domain quickly, the
user may select models that convey only essential information on a high level of abstraction,
such as use case diagrams or class diagrams.
Action: generating domain understanding from the conceptual models
Having selected a conceptual model combination, the user engages in the model-based task
and evaluates it in terms of whether and how many performance gains stem from interpreting
the conceptual models. Conceptual models can support many tasks (e.g., systems analysis,
communication, design, project management, end-user querying, process redesign,
organizational change management) (Kung and Sølvberg, 1986; Wand and Weber, 2002; Figl
and Recker, 2016), so defining the performance gains will vary depending on the task. For
example, during database design a conceptual model might be interpreted with the goal to
identify the constraints required for SQL expressions (Bowen et al., 2006).
In any case, however, conceptual models must be interpreted in order to realize performance
gains from them (Aguirre-Urreta and Marakas, 2008, p. 12; Burton-Jones et al., 2009, p. 499).
Therefore, for all tasks, the interpretation of conceptual models necessarily and unequivocally
involves reading the model to construct knowledge about the depicted domain. Hence, one key
evaluation of performance gains must be how much domain understanding can be generated
from interpreting a conceptual model combination during a task (Gemino and Wand, 2004;
Shanks et al., 2008; Burton-Jones et al., 2009; Recker and Dreiling, 2011). Domain
understanding is generated when model readers organize and integrate the information
presented in the conceptual models with their own experience and mental models (Mayer,
1989), thereby constructing new knowledge about the elements in a real-world domain (surface
understanding) and the actual and possible relationships between these elements (deep
understanding) (Mayer, 2009). The user then applies this domain understanding in completing
the task they set out to do.
Evaluation: appraising the usefulness of the conceptual models
A key finding of the research on cognitive representation and control of action processes is that
people continuously update the mental representations that govern their actions as part of a
progressing decision (e.g., Beach and Mitchell, 1978; Beach, 1993). They reflect on the options
selected and the action outcomes achieved to evaluate the compatibility between the two. For
example, people update their expectations about behaviors (e.g., how they undertake a task)
based on their own past behaviors that either confirmed or disconfirmed their previous
expectations (Oliver, 1977). The adjusted perceptions then provide the basis for subsequent
behaviors (Bem, 1972). In this vein, model readers who employ a conceptual model
combination perform a cognitive appraisal, evaluating the performance gains from interpreting
the conceptual models for a particular task by reflecting on their expectations that led to the
initial profitability test and determining whether their pre-task expectations were confirmed
(Oliver, 1977; Recker, 2010). Performance gains from interpreting conceptual models depend
on the nature of the task. For instance, one might evaluate whether reading conceptual models
during a systems analysis task increased the user’s task efficiency (e.g., by comparing task
completion times). However, independent of any task-specific performance metric, performance
gains should also manifest as beliefs about the performance that results from the object in use
(Venkatesh et al., 2003), so they can be measured as the perceived usefulness of the chosen
conceptual models in supporting the task at hand. In this context, perceived usefulness can be
defined as the degree to which a model reader believes that a particular conceptual model
combination was effective in achieving the intended task objectives (Davis, 1989; Maes and
Poels, 2007).
Proposition Development
Having described the constructs in our theory, we now develop propositions that describe the
associations between the constructs. Figure 2 shows the key associations—the main
propositions—we explore in this section. The three variables visualized in the dashed part of
Figure 2, viz., environmental uncertainty, task nature and prior domain knowledge, describe
possible boundary conditions situated in the conceptual modeling context. We discuss these in
Appendix C.
Figure 2: Summary of Theory Propositions
Predicting the selection of conceptual model combinations
The selection proposition concerns which models from a set of available models with different
levels of ontological completeness users will select to complete an upcoming task. Figure 3
illustrates this proposition. It suggests that individuals will select multiple conceptual models,
if available, but only up to a point.
The literature on cognitive fit (Vessey and Galletta, 1991; Agarwal et al., 1996; Khatri et al.,
2006) suggests that individuals select models to aid their tasks based on a mental model of the
problem space they are faced with. For example, for symbolic tasks they would choose a
tabular representation of the domain (Vessey and Galletta, 1991). As we have stated earlier, the
interpretation of models is likewise characterized by task goals. Specifically, conceptual models
aid in the task of requirements analysis for some problem context. However, cognitive fit theory
implies that unless the combination of models can represent semantically all the elements of the
problem task then the mental representation of the user will be degraded and performance will
be degraded.
Therefore, notwithstanding an initial model selection, on the assumption that multiple models
about a real-world domain are available to model readers, we predict that, prima facie,
individuals will select additional models to complement their initial choice because they desire a
maximal level of ontological completeness of the combined representation of some focal real-
world problem they are presented with in some task. They do so because any one model will
have construct deficit. In other words, if available, users will likely select additional models
because they have a desire to compensate for the impoverished representation that any one
model provides (see also the discussion of adaptive behaviors by Weber, 1997, pp. 95-96). One
typical manifestation of an impoverished representation is a conceptual model that focuses on a
particular design dimension (say, data structures) but omits a different dimension (say, system
behaviour). Intuitively, these different models each provide a partial level of representation.
Combining them provides a more complete representation, which appears at face value
desirable to a user because by increasing the level of combined ontological completeness of a
domain’s representation, more information will be available for integration into a mental model of
the real-world phenomenon being represented. Therefore, we propose initially:
P1a. Given one conceptual model, users will select additional models for use in their task
such that the combined ontological completeness of the model combination will be
maximized.
Yet, as visualized in Figure 3, the selection logic will not be linear. At some point, users will stop
selecting additional models when they appear, at face value, to convey the same information,
even if the additions would provide an increased level of completeness. Our reasoning is as
follows:
Selecting additional models increases the chance that the ontological overlap between models
will increase. Overlap decreases the clarity of the combined representation because several
constructs in the set of models describe the same real-world phenomenon. This situation may
lead to confusion when users inspect the available models: users might wonder why certain
constructs appear multiple times and/or might assume that a redundant construct stands for some
other type of phenomenon (Weber, 1997, p. 99). In either case, the models will appear “complex.”
Therefore, to mitigate the anticipated additional effort that is associated with generating
understanding from conceptual models, we argue that model users will follow a law of diminishing
returns, choosing additional models that maximize the combined ontological completeness only
until they reach a bearable level of overlap. Should this level of overlap be exceeded, we predict
that users will de-select models, even at the cost of lowering the combined ontological
completeness that can be achieved. Otherwise, the domain representation achieved will be
undermined by a lack of clarity and will impose too much cognitive load when users interpret the
models, as the bearable level of combined ontological overlap is constrained by users’ cognitive
processing capacity (Miller, 1956). Because of their
limited overall cognitive capacity, users do not refer to the entire set of models at once as a
single chunk of information (Ward and Sweller, 1990). Instead, they screen each model for local
information and thus they can anticipate the cognitive demands of information processing very
quickly. Therefore, users will not select additional models even if adding another model would
increase combined ontological completeness. We propose:
P1b. Users will select additional models for use in their tasks only until their bearable level
of ontological overlap of the combined representation is reached.
Figure 3: Selection of model combinations as a function of combined ontological completeness and overlap. [The figure plots the level of combined ontological completeness (from none to high) of the selected combination against the level of ontological overlap of the models (from low to high), marking a tipping point at the bearable level of overlap.]
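A minimal sketch of the selection logic behind P1a and P1b, assuming that completeness and overlap are scored as in the earlier sketch and that each user carries an individual bearable-overlap threshold (a free parameter our theory does not quantify), could look as follows.

# Illustrative sketch of P1a/P1b: starting from one model, greedily add the model
# that most increases combined ontological completeness, but stop once the
# ontological overlap of the combination would exceed the bearable threshold.

DOMAIN = {"thing", "property", "state", "event",
          "transformation", "system", "lawful_state_space"}

def completeness(combo):
    return len(set().union(*combo.values()) & DOMAIN) / len(DOMAIN)

def overlap(combo):
    cov = [m & DOMAIN for m in combo.values()]
    shared = {c for i, a in enumerate(cov) for b in cov[i + 1:] for c in a & b}
    return len(shared) / len(DOMAIN)

def select_models(available, initial, bearable_overlap=0.2):
    combo = {initial: available[initial]}
    remaining = {k: v for k, v in available.items() if k != initial}
    while remaining:
        # P1a: pick the candidate that maximizes combined ontological completeness.
        best = max(remaining, key=lambda k: completeness({**combo, k: remaining[k]}))
        candidate = {**combo, best: remaining[best]}
        # P1b: stop (tipping point) if the addition brings no completeness gain
        # or pushes overlap past the bearable level.
        if completeness(candidate) <= completeness(combo) or overlap(candidate) > bearable_overlap:
            break
        combo = candidate
        remaining.pop(best)
    return list(combo)

# Hypothetical portfolio of available models.
available = {
    "ER":       {"thing", "property"},
    "BPMN":     {"thing", "event", "transformation", "state"},
    "Use case": {"thing", "event", "system"},
}
print(select_models(available, initial="ER"))
# Prints ['ER', 'BPMN']: adding "Use case" would push overlap past the bearable threshold.

The threshold of 0.2 is an arbitrary placeholder; as we note later under Implications for Research, the existence and level of such a threshold are empirical questions.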
Predicting the development of domain understanding from conceptual model combinations
The domain understanding proposition concerns which model combinations maximize model
readers’ ability to gain understanding about the real-world domain represented during their
interpretation. Figure 4 illustrates this proposition.
When reading models, users create a mental model representation of the domain based on the
information the models provide (Gemino and Wand, 2005). They identify and internalize
constructs in the model by integrating them with concepts in their mental representations of the
domain (Mayer, 1989; Pretz et al., 2003), thereby updating their existing knowledge and
constructing new knowledge. A complete mental representation of the domain is a key driver of
a user’s ability to reason about the domain in the course of problem solving (Newell and Simon,
1972).
When interpreting conceptual models, the construct deficit that is inherent in any single model is
a noted issue: users lack relevant information about a real-world domain, which diminishes the
level of understanding about a phenomenon a user can generate (e.g., Bajaj, 2004; Parsons,
2011). However, a combination of models provides more representation elements that convey
meaning about the phenomenon in a real-world domain when the combination has higher
ontological completeness than any one model alone. If such is the case, more information is
available for assimilation into users’ mental representation about the phenomenon, improving
the level of domain understanding that can be gained from the models. Therefore, we state:
P2a. Model users who read a combination of models that have a high level of combined
ontological completeness will generate higher levels of domain understanding than
will users who read a combination of models that have a low level of combined
ontological completeness.
The level of domain understanding that can be achieved from a selected set of models is
moderated by the level of ontological overlap between the models. Model combinations with
higher levels of ontological overlap introduce additional extraneous cognitive load (Sweller,
1988; Gemino and Wand, 2005) in two ways: (1) Model users must identify the overlapping
constructs. Identifying redundant constructs complicates readers’ cognitive search process (i.e.,
the process of locating visual constructs in a model and identifying relevant attributes and
relationships) (Larkin and Simon, 1987). (2) Model users must reconcile their meaning, which
adds complexity because users have to compare the semantics of constructs. The heightened
cognitive demand to understand redundant constructs across models diminishes the capacity to
absorb information and hence the ability to generate domain understanding. Therefore, we
argue that:
P2b. The positive impact of combined ontological completeness of model combinations on
users’ ability to gain domain understanding decreases as the ontological overlap in
the combination of models increases.
Figure 4: Level of domain understanding from model combinations as a function of combined ontological completeness and overlap. [The figure plots the level of domain understanding generated (from low to high) against the combined ontological completeness of model combinations (from low to high), with separate curves for model combinations without ontological overlap and for model combinations with ontological overlap.]
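One possible way to express propositions P2a and P2b as a testable functional form is a moderated specification in which combined ontological completeness (COC) raises domain understanding (DU) and ontological overlap (OO) attenuates that effect through a negative interaction term. The specification is our illustration; only the signs of the coefficients follow from the propositions:

DU = \beta_0 + \beta_1 \, COC + \beta_2 \, (COC \times OO) + \varepsilon, \quad \beta_1 > 0, \; \beta_2 < 0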
Predicting the perceived usefulness of conceptual model combinations
Our third proposition concerns model users’ evaluation of performance gains from interpreting
multiple models. Figure 5 illustrates this proposition.
We predict that users’ perceptions of model combinations’ usefulness will be more positive
when the representation’s combined ontological completeness is high. Perceived usefulness
can be understood as the degree to which a person believes that a particular model is effective
in achieving the intended task objectives (Davis, 1989; Recker et al., 2011). Evaluations of the
usefulness of conceptual models for a task depend on users’ having the necessary and
sufficient manifestations of relevant real-world phenomena explicit in a model. If there are
deficits in the desired representations, the available representation will be less effective as a
way to solve problems (Gemino and Wand, 2004). Therefore, users are unlikely to find multiple
models with impoverished quality useful (Lindland et al., 1994; Maes and Poels, 2007). We
state:
P3a. Users perceive a combination of models with a high level of combined ontological
completeness as more useful in model-interpretation tasks than a combination of
models with a lower level of combined ontological completeness.
However, model combinations with increased ontological completeness and increased
ontological overlap will be evaluated as less useful because the additional complexity of the
representation will offset the gains in representational effectiveness by requiring more cognitive
effort to reconcile the meaning conveyed (Wand and Weber, 1993). Ontologically overlapping
models add confusion, which adds complexity to the task users set out to complete. As the
extent of overlap increases, the perceived usefulness of the combination decreases. A clear
(i.e., non-overlapping) interpretation of conceptual models will allow a user to glean meaning
from the models more easily and, thus, retain cognitive capacity to complete the task at hand.
Conversely, if additional effort must be invested in interpreting the models because of high
overlap, less capacity is available for the problem-solving task. Such perceptions of effort will
undermine the perception of usefulness (Recker, 2010). We expect that the detrimental impact
of ontological overlap on perceived usefulness is stronger than the positive impact of ontological
completeness. Users deem parsimonious models more useful for their tasks than complete and
complex representations because of the computational advantage parsimonious models provide
in information processing (Larkin and Simon, 1987):
P3b. Users’ perceptions of the usefulness of conceptual model combinations decrease
as the level of ontological overlap of the combinations increases, such that the
negative effect of ontological overlap is stronger than the positive effect of combined
ontological completeness.
Figure 5: Perceived usefulness of model combinations as a function of combined ontological completeness and overlap. [The figure plots the perceived usefulness of conceptual model combinations (from low to high) against the combined ontological completeness of model combinations (from low to high), with separate curves for model combinations without ontological overlap and for model combinations with ontological overlap.]
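Analogously, one hypothetical specification for propositions P3a and P3b models perceived usefulness (PU) with a positive weight on combined ontological completeness and a negative weight on ontological overlap, constrained so that the negative effect dominates in magnitude. Again, only the signs and the inequality follow from the propositions:

PU = \gamma_0 + \gamma_1 \, COC + \gamma_2 \, OO + \varepsilon, \quad \gamma_1 > 0, \; \gamma_2 < 0, \; |\gamma_2| > \gamma_1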
DISCUSSION
Summary of the Scope and Contributions of our Theory
We developed new theory about how individuals interpret multiple conceptual models, which
posits that two attributes of model combinations, combined ontological completeness and
ontological overlap, are key determinants for their selection, understanding, and perceived
usefulness by users.
Having done so, we feel it is prudent to delineate the boundary conditions of our theorizing. We
start by identifying the scope of our theory as limited by its assumptions. We developed a
specific theory (how users interpret multiple models) from Wand and Weber’s (1995) more
general theory of information systems as representations. Therefore, like their work, our
theorizing describes a model of the artefacts that define an information system’s deep structure.
As such, it focuses on the semantics of conceptual models and grammars (Bera et al., 2014;
Clarke et al., 2016). The choice of visual syntax (e.g., a rectangle or a circle, Moody, 2009) is
not part of our theory. Also, our theory does not describe in detail the psychological, linguistic, or
cognitive processes through which users engage with conceptual models to understand real-
world domains (Truex and Baskerville, 1998; Evermann, 2005). In addition, it does not address
pragmatic factors (e.g., tasks, knowledge, external conditions) that would describe their use.
However, as we discuss in Appendix C, such factors can be brought into the theory’s focus.
A second boundary to the scope of our theory stems from its position in the stream of
conceptual modeling literature. Our theory is about the selection and interpretation of
combinations of previously built conceptual models by practitioners in analysing and
understanding systems requirements. It is not a theory of conceptual model creation even
though we believe design principles could possibly be derived from our explanations. It is also
not a theory of the use of methods and grammars for the design of conceptual models (Purao et
al., 2002).
Third, our theory of model interpretation bears some resemblance to theories of information
behaviour in general because it conceptualizes some specific aspect of “how people need,
seek, manage, give and use information in different contexts” (Fisher et al., 2005, p. xix).
However, with its focus on the role of models as representations of an information system’s
deep structure (Wand and Weber, 1995; Burton-Jones et al., 2017) it is both much narrower
than general models of information seeking (e.g., Leckie, 2005) or acquiring (e.g., Rioux, 2005),
and more specific in that it focuses on artefacts more so than a person’s information-seeking
behavior (e.g., Wilson, 1999, p. 251).
Finally, our theory is also bounded because its predictions have not yet been tested. Describing
operationalization and measurement strategies for our theory in full would require an entire
paper. However, to motivate and guide potential future empirical research to evaluate, refute,
extend or otherwise improve our theory, we provide two aids about how empirical research
procedures could be carried out. First, in Appendix B we provide an illustration of how our
theory can be applied to the analysis of multiple available models presented to an individual.
Second, in Appendix C, we provide a discussion of what we believe might be important
moderator variables (Figure 2) that should be included in an empirical research design used to
test our theory.
In evaluating the contributions our new theory offers, we consider the knowledge provided by
the extant representation theory on which it is founded and the empirical research program it
has supported to date. Our theory development may be construed as “dropping the theoretical
tools, holding the concepts lightly and updating them frequently” (Holmström and Truex, 2011),
which can best be explained by considering the two principles of maximal ontological
completeness (MOC) and minimal ontological overlap (MOO). We adapted these principles from
Green et al. (2007; 2011) and Weber (1997) by transferring their application from grammars to
models and from model design to model interpretation. This adaptation is “interesting” (Davis,
1971) because it highlights twotensions to the original concepts:
1. The tension between ontological completeness and overlap. We started our theorizing
by appropriating two established concepts: MOC and MOO (e.g., Green et al., 2007; Green
et al., 2011). In what followed, our theorizing highlighted a potential conflict between these
two notions that has not been surfaced explicitly earlier. Our theory suggests that,
sometimes, combined ontological completeness and ontological overlap may conflict. For
instance, imagine a set of models that together maximizes the representation of real-world
phenomena and also shares a large set of common representational constructs; viz., the
combination has high levels of combined ontological completeness and overlap. Then
imagine a second model combination that has a lower level of combined ontological
completeness but also a lower level of ontological overlap. Which of these two combinations
should be selected, interpreted, and deemed more useful? This question is far from trivial.
Green et al. (2011) argued that the primary principle for grammar selection as part of model
design should be MOC, but whether that is true for the selection and interpretation of models
remains in question. Could model interpretation be governed by principles of clarity (i.e.,
minimizing construct overload and/or redundancy) over completeness? There is some
evidence that suggests that the simplicity of a representation may be more useful than its
completeness. For instance, Siau and Lee (2004) showed that users preferred diagrams that
were easier to use and that such diagrams enabled them to obtain a more complete
representation. Samuel et al. (2015) also demonstrated that practitioners often rationalized
the volume of information in models to achieve a simpler, not fuller, understanding of the
relevant domain.
2. The tension between model design and interpretation: Weber (1997) and Green et al.
(2007; 2011) argued that MOC and MOO are criteria that guide designers in their choice of
grammars for model creation. We developed a theory for the choice of models for model
interpretation, but the relationship between design choices made in the creation of models
and the interpretation choices available for a user is an important one. For readers of
conceptual models, it is not the grammar and its potential maximal coverage of real-world
phenomena that matters but the actual maximal coverage of real-world phenomena that is
available in any combination of models produced by a grammar or grammars. A model’s
actual maximal coverage is limited by the number and type of constructs in a grammar. A model
designer, however, has potentially unlimited choices from which to create a complete and clear
representation of a real-world phenomenon: they may choose from available or
recommended grammars, may opt to use an alternative like free text or additional
documents adjacent to the models (Green et al., 2011), or may even alter construct
semantics or invent new semantics (Recker et al., 2010). These design choices are not
available to these models’ readers, who seek to obtain a complete and clear interpretation of
a focal real-world phenomenon. One key difference is that model designers should be able
to create representations that have a full level of ontological completeness, whereas,
provided multiple models are available, model readers can, at best, select a maximally (not
necessarily fully) ontologically complete representation.
The questions about the tension between MOC and MOO and the relationship between design and
interpretation ultimately require empirical work to resolve. Our theory offers an explanatory logic
to guide such work and identifies some of the conditions under which the relationship between
MOC and MOO can be examined. Further, Appendix C introduces three moderator variables
that may be useful to identify contexts in which the relationship between MOC and MOO differ.
Our new theory also contributes in several ways to the broader literature on conceptual
modeling. This theory is the first to analyze and explain the interpretation of conceptual models in combination. Unlike empirical accounts of the use of conceptual models broadly
(Dobing and Parsons, 2008; Petre, 2013; Jabbari Sabegh and Recker, 2017), our theory offers
principles about how and why users might select different sets of conceptual models to
complete their tasks, how much domain understanding they might generate, and how useful
they perceive sets of models to be. Also, our theory focuses on the artefacts (viz., the models)
themselves.
Implications for Research
As the focus of this paper is theory development, the implications of our research relate
primarily to future research that enacts or evaluates our theory through empirical research. We
see several ways in which our theory can be advanced.
First, our three research models suggest the presence of limits and thresholds. For example,
proposition 1 argues that users will select additional models until they reach a bearable level of
ontological overlap that is constrained by users’ cognitive processing capability. However, users’
processing capability is both volatile and contextual (Gobet and Clarkson, 2004); therefore, our
theory has no basis on which to speculate ex ante what the thresholds will be for different users,
so we cannot offer a hypothesis on this element of the proposition. Instead, the existence and
level of the threshold is an empirical question.
Second, the range of predictions could be extended beyond the three core evaluations on which
we focus. A promising direction is to develop predictions about the design of model
combinations, rather than their interpretation.
A third direction flows from a broader examination of the tasks and the associated goals for the
models interpreted during these tasks. We focused on the development of domain
understanding because any subsequent interpretation of a model for other analysis and design
tasks (say, software specification versus system configuration versus process re-design)
depends ultimately on how well individuals can understand the modeled domain (Burton-Jones
and Meso, 2008). Still, similar to existing research on using conceptual models for specific
tasks, such as the development of database queries (Bowen et al., 2006), it would be useful to
see how model combinations could assist different kinds of specific problem-solving tasks.
Fourth, we see a promising research direction in the development of appropriate measurements
for our theory. For example, because users evaluate grammars differently based on their
perceptions of the grammars' ontological completeness and clarity (Recker et al., 2011), it will be important to clarify how the perceived completeness and clarity of a set of models affect the evaluations and behaviors of the users who read them. We focused on perceived
usefulness as a performance-evaluation metric because it is a well-established measure in the
literature. Other suitable metrics include the perceived semantic quality of model combinations
(Maes and Poels, 2007) and satisfaction with models (Nolte et al., 2016), to name just two.
Fifth, our theory focuses on attributes of models as artefacts. These attributes can be combined with other elements (e.g., factors that describe the context of conceptual model interpretation) to account for variations in the theory's predictions. We provide a brief discussion of three context factors in Appendix C. A more comprehensive analysis of context will benefit from programmatic research efforts and could build on Wand and Weber's (2002) taxonomy of context factors.
Implications for Practice
Two principal implications for practice emerge from our theory. First, we developed theoretical
models that can explain and guide choices available to conceptual models’ end users when they
seek to interpret these models during systems analysis and design tasks. Our theory suggests
that two guiding principles—combined ontological completeness and ontological overlap—
inform the selection, understanding, and usefulness of multiple models. In essence, our theory
suggests that model readers should be mindful of whether they require a parsimonious or a
complete representation of the real world to complete their tasks effectively.
A second implication arises about the design of multiple conceptual models. We assumed that
the purpose for creating conceptual models is to create faithful (i.e., complete, clear, and
accurate) representations of a real-world domain. However, as we discuss in Appendix C, under
some conditions, such as environmental uncertainty, explorative tasks that involve conceptual model use, or the design of models for users with very high or very low domain knowledge, tradeoffs between completeness and clarity may need to be considered already during model design in order to make the created models fit for interpretation.
For example, designers may wish to create partially redundant representations to maximize
combined ontological completeness at the expense of ontological overlap or, conversely, create
representationally deficient conceptual models in order to maximize clarity and simplicity.
However, given our focus on model interpretation rather than creation, we have not studied
these implications, so they require further testing.
Limitations
We acknowledge two important limitations. First, we reported on theory development devoid of any systematic empirical data collection or evaluation. Our suggested logic and explanations thus remain speculative until empirical work is performed. To invite and guide such work, we provide in Appendix B an illustration of procedures for enacting our theory, and in Appendix C we discuss several important boundary conditions that might be relevant to empirical study design. We hope these materials make clear how an empirical analysis of practitioners' interpretation of multiple models could be carried out.
Second, we wish to acknowledge the subjectivity of the interpretation mappings that are required in ontological analyses of grammar constructs in conceptual models (Rosemann et al., 2009). As we illustrate in Appendix B, potential interpretation bias can be mitigated in two ways. First, similar to Recker et al. (2009), the starting point should be the published ontological analyses of the grammars used in the models. Second, like us, researchers should engage in an iterative process based on principles of dialogical reasoning and suspicion (Klein and Myers, 1999), in which multiple researchers formulate draft interpretation mappings and then question each of the suggested mappings, iterating between these two steps to tease out biases and distortions until they construct a jointly agreed result such as that shown in Table B1.
CONCLUSION
Systems analysis and design practitioners often work with multiple conceptual models, rather
than just one. We proposed a theory that can be used to examine which combinations of
conceptual models are likely to be more suitable for interpretation by model readers. Our theory offers
fellow scholars a way to generate more research on conceptual modeling as a theory-in-use
and, in turn, increase the relevance of this important traditional stream of IS research. Our own
work will test our theory empirically and refine it, and we hope that fellow scholars will join us in
this endeavor.
REFERENCES
Agarwal, R., A. P. Sinha, and M. Tanniru (1996) "Cognitive Fit in Requirements Modeling: A
Study of Object and Process Methodologies", Journal of Management Information
Systems (13)2, pp. 137-162
Aguirre-Urreta, M. I. and G. M. Marakas (2008) "Comparing Conceptual Modeling Techniques:
A Critical Review of the EER vs. OO Empirical Literature", The DATA BASE for
Advances in Information Systems (39)2, pp. 9-32
Alexander, P. A. (1992) "Domain Knowledge: Evolving Themes and Emerging Concerns",
Educational Psychologist (27)1, pp. 33-51
Allen, G. and J. Parsons (2010) "Is Query Reuse Potentially Harmful? Anchoring and
Adjustment in Adapting Existing Database Queries", Information Systems Research
(21)1, pp. 56-77
Alvesson, M. and D. Kärreman (2007) "Constructing Mystery: Empirical Matters in Theory
Development", Academy of Management Review (32)4, pp. 1265-1281
Avison, D. E. and A. T. Wood-Harper (1986) "Multiview - An Exploration in Information Systems
Development", Australian Computer Journal (18)4, pp. 174-179
Bajaj, A. (2004) "The Effect Of the Number of Concepts On the Readability of Schemas: An
Empirical Study With Data Models", Requirements Engineering (9)4, pp. 261-270
Baker, P., S. Loh, and F. Weil (2005) “Model-Driven Engineering in a Large Industrial Context
— Motorola Case Study”, in Briand, L. and C. Williams (eds.) Model Driven Engineering
Languages and Systems - MoDELS 2005, Montego Bay, Jamaica: Springer, pp. 476-
491
Beach, L. R. (1993) "Image Theory: an Alternative to Normative Decision Theory", Advances in
Consumer Research (20)1, pp. 235-238
Beach, L. R. and T. R. Mitchell (1978) "A Contingency Model for the Selection of Decision
Strategies", Academy of Management Review (3)3, pp. 439-449
Bem, D. J. (1972) “Self Perception Theory”, in Berkowitz, L. (ed.) Advances in Experimental
Social Psychology, New York, New York: Academic Press, pp. 1-62
Bera, P., A. Burton-Jones, and Y. Wand (2011) "Guidelines for Designing Visual Ontologies to
Support Knowledge Identification", MIS Quarterly (35)4, pp. 883-908
Bera, P., A. Burton-Jones, and Y. Wand (2014) "How Semantics and Pragmatics Interact in
Understanding Conceptual Models", Information Systems Research (25)2, pp. 401-419
Bodart, F., A. Patel, M. Sim, and R. Weber (2001) "Should Optional Properties Be Used in
Conceptual Modelling? A Theory and Three Empirical Tests", Information Systems
Research (12)4, pp. 384-405
Bowen, P. L., R. A. O'Farrell, and F. Rohde (2006) "Analysis of Competing Data Structures:
Does Ontological Clarity Produce Better End User Query Performance", Journal of the
Association for Information Systems (7)8, pp. 514-544
Brown, S. A., V. Venkatesh, and S. Goyal (2012) "Expectation Confirmation in Technology
Use", Information Systems Research (23)2, pp. 474-487
Bunge, M. A. (1977) Treatise on Basic Philosophy Volume 3: Ontology I - The Furniture of the
World, Dordrecht, The Netherlands: Kluwer Academic Publishers
Bunge, M. A. (1979) Treatise on Basic Philosophy Volume 4: Ontology II - A World of Systems,
Dordrecht, The Netherlands: Kluwer Academic Publishers
Burton-Jones, A. and P. Meso (2008) "The Effects of Decomposition Quality and Multiple Forms
of Information on Novices’ Understanding of a Domain from a Conceptual Model",
Journal of the Association for Information Systems (9)12, pp. 784-802
Burton-Jones, A., J. Recker, M. Indulska, P. Green, and R. Weber (2017) "Assessing
Representation Theory with a Framework for Pursuing Success and Failure", MIS
Quarterly (41)4, pp. 1307-1333
Burton-Jones, A., Y. Wand, and R. Weber (2009) "Guidelines for Empirical Evaluations of
Conceptual Modeling Grammars", Journal of the Association for Information Systems
(10)6, pp. 495-532
Burton-Jones, A. and R. Weber (2014) “Building Conceptual Modeling on the Foundation of
Ontology”, in Topi, H. and A. Tucker (eds.) Computing Handbook, Third Edition:
Information Systems and Information Technology, Boca Raton, Florida: CRC Press, pp.
15-1-15-24
Byron, K. and S. M. B. Thatcher (2016) "Editors’ Comments: “What I Know Now That I Wish I
Knew Then”—Teaching Theory and Theory Building", Academy of Management Review
(41)1, pp. 1-8
Chandler, P. and J. Sweller (1991) "Cognitive Load Theory and the Format of Instruction",
Cognition and Instruction (8)4, pp. 293-332
Chen, P. P.-S. (1976) "The Entity Relationship Model - Toward a Unified View of Data", ACM
Transactions on Database Systems (1)1, pp. 9-36
Cherubini, M., G. Venolia, R. DeLine, and A. J. Ko (2007) “Let's Go to the Whiteboard: How and
Why Software Developers Use Drawings”, in SIGCHI Conference on Human Factors in
Computing Systems, San Jose, California: ACM, pp. 557-566
Clarke, R., A. Burton-Jones, and R. Weber (2016) "On the Ontological Quality and Logical
Quality of Conceptual-Modeling Grammars: The Need for a Dual Perspective",
Information Systems Research (27)2, pp. 365-382
Conboy, K. (2009) "Agility from First Principles: Reconstructing the Concept of Agility in
Information Systems Development", Information Systems Research (20)3, pp. 329-354
Corley, K. G. and D. A. Gioia (2011) "Building Theory about Theory Building: What Constitutes a
Theoretical Contribution?", Academy of Management Review (38)1, pp. 12-32
Davies, I., P. Green, M. Rosemann, M. Indulska, and S. Gallo (2006) "How do Practitioners Use
Conceptual Modeling in Practice?", Data & Knowledge Engineering (58)3, pp. 358-380
Davis, F. D. (1989) "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of
Information Technology", MIS Quarterly (13)3, pp. 319-340
Davis, M. S. (1971) "That's Interesting: Towards a Phenomenology of Sociology and a
Sociology of Phenomenology", Philosophy of the Social Sciences (1)4, pp. 309-344
Dawson, L. and P. Swatman (1999) “The Use of Object-oriented Models in Requirements
Engineering: A Field Study”, 20th International Conference on Information Systems,
Charlotte, North Carolina: Association for Information Systems, pp. 260-273
Dess, G. G. and D. W. Beard (1984) "Dimensions of Organizational Task Environments",
Administrative Science Quarterly (29)1, pp. 52-73
Dobing, B. and J. Parsons (2008) "Dimensions of UML Diagram Use: A Survey of Practitioners",
Journal of Database Management (19)1, pp. 1-18
Edwards, J. R. and J. W. Berry (2010) "The Presence of Something or the Absence of Nothing:
Increasing Theoretical Precision in Management Research", Organizational Research
Methods (13)4, pp. 668-689
Erickson, J., K. Lyytinen, and K. Siau (2005) "Agile Modeling, Agile Software Development, and
Extreme Programming: The State of Research", Journal of Database Management
(16)4, pp. 88-100
Evermann, J. (2005) "Towards a Cognitive Foundation for Knowledge Representation",
Information Systems Journal (15)2, pp. 147-178
Evermann, J. and Y. Wand (2006) "Ontological Modeling Rules for UML: An Empirical
Assessment", Journal of Computer Information Systems (46)5, pp. 14-29
Fettke, P. (2009) "How Conceptual Modeling Is Used", Communications of the Association for
Information Systems (25)43, pp. 571-592
Fickinger, T. and J. Recker (2013) “Construct Redundancy In Process Modelling Grammars:
Improving The Explanatory Power Of Ontological Analysis”, in 21st European
Conference on Information Systems, Utrecht, The Netherlands: Association for
Information Systems,
Figl, K., J. Mendling, and M. Strembeck (2013) "The Influence of Notational Deficiencies on
Process Model Comprehension", Journal of the Association for Information Systems
(14)6, pp. 312-338
Fisher, K. E., S. Erdelez, and L. McKechnie (2005) Theories of Information Behavior, Medford,
New Jersey: Information Today
Fiske, S. T. (2004) "Mind the Gap: In Praise of Informal Sources of Formal Theory", Personality
and Social Psychology Review (8)2, pp. 132-137
Fowler, M. (2004) UML Distilled: A Brief Guide To The Standard Object Modelling Language,
3rd edition, Boston, Massachusetts: Addison-Wesley Longman
Fowler, M. and J. Highsmith (2001) "The Agile Manifesto", Software Development (9)8, pp. 28-
32
Gemino, A. and D. C. Parker (2009) "Use Case Diagrams in Support of Use Case Modeling:
Deriving Understanding from the Picture", Journal of Database Management (20)1, pp.
1-24
Gemino, A. and Y. Wand (2004) "A Framework for Empirical Evaluation of Conceptual Modeling
Techniques", Requirements Engineering (9)4, pp. 248-260
Gemino, A. and Y. Wand (2005) "Complexity and Clarity in Conceptual Modeling: Comparison
of Mandatory and Optional Properties", Data & Knowledge Engineering (55)3, pp. 301-
326
Gobet, F. and G. Clarkson (2004) "Chunks in Expert Memory: Evidence for the Magical Number
Four … or is it Two?", Memory (12)6, pp. 732-747
Gray, P. H. and W. H. Cooper (2010) "Pursuing Failure", Organizational Research Methods
(13)4, pp. 620-643
Green, P. (1996) An Ontological Analysis of ISAD Grammars in Upper CASE Tools,
Unpublished PhD Thesis, Brisbane, Australia: The University of Queensland
Green, P. and M. Rosemann (2001) "Ontological Analysis of Integrated Process Models:
Testing Hypotheses", Australasian Journal of Information Systems (9)1, pp. 30-38
Green, P., M. Rosemann, M. Indulska, and C. Manning (2007) "Candidate Interoperability
Standards: An Ontological Overlap Analysis", Data & Knowledge Engineering (62)2, pp.
274-291
Green, P., M. Rosemann, M. Indulska, and J. Recker (2011) "Complementary Use of Modeling
Grammars", Scandinavian Journal of Information Systems (23)1, pp. 59-86
Gregor, S. (2006) "The Nature of Theory in Information Systems", MIS Quarterly (30)3, pp. 611-
642
Grossman, M., J. E. Aronson, and R. V. McCarthy (2005) "Does UML Make the Grade? Insights
from the Software Development Community", Information and Software Technology
(47)6, pp. 383-397
Holmström, J. and D. P. Truex (2011) "Dropping Your Tools: Exploring When and How Theories
Can Serve as Blinders in IS Research", Communications of the Association for
Information Systems (28)19, pp. 283-294
Hutchinson, J., J. Whittle, and M. Rouncefield (2014) "Model-driven Engineering Practices in
Industry: Social, Organizational and Managerial Factors that Lead to Success or
Failure", Science of Computer Programming (89)Part B, pp. 144-161
Indulska, M., P. Green, J. Recker, and M. Rosemann (2009) “Business Process Modeling:
Perceived Benefits”, in Castano, S., U. Dayal, and A. H. F. Laender (eds.) Conceptual
Modeling - ER 2009, Gramado, Brazil: Springer, pp. 458-471
Irwin, G. and D. Turk (2005) "An Ontological Analysis of Use Case Modeling Grammar", Journal
of the Association for Information Systems (6)1, pp. 1-36
Jabbari Sabegh, M. A. and J. Recker (2017) "Combined Use of Conceptual Models in Practice:
An Exploratory Study", Journal of Database Management (28)2, pp. 56-88
Kendall, K. E. and J. E. Kendall (2008) Systems Analysis and Design, 7th edition, Upper Saddle
River, New Jersey: Prentice Hall
Khatri, V. and I. Vessey (2016) "Understanding the Role of IS and Application Domain
Knowledge on Conceptual Schema Problem Solving: A Verbal Protocol Study", Journal
of the Association for Information Systems (17)12, pp. 759-803
Khatri, V., I. Vessey, V. Ramesh, P. Clay, and P. Sung-Jin (2006) "Understanding Conceptual
Schemas: Exploring the Role of Application and IS Domain Knowledge", Information
Systems Research (17)1, pp. 81-99
Kim, J., J. Hahn, and H. Hahn (2000) "How Do We Understand a System with (So) Many
Diagrams? Cognitive Integration Processes in Diagrammatic Reasoning", Information
Systems Research (11)3, pp. 284-303
King, W. R. and J. He (2005) "Understanding the Role and Methods of Meta-Analysis in IS
Research", Communications of the Association for Information Systems (16)32, pp. 665-
686
Klein, H. K. and M. D. Myers (1999) "A Set of Principles for Conducting and Evaluating
Interpretive Field Studies in Information Systems", MIS Quarterly (23)1, pp. 67-94
Kung, C. H. and A. Sølvberg (1986) “Activity Modeling and Behavior Modeling of Information
Systems”, in Olle, T. W., H. G. Sol, and A. A. Verrijn-Stuart (eds.) Information Systems
Design Methodologies: Improving the Practice, Amsterdam: North-Holland, pp. 145-171
Larkin, J. H. and H. A. Simon (1987) "Why a Diagram Is (Sometimes) Worth Ten Thousand
Words", Cognitive Science (11)1, pp. 65-100
Lauesen, S. and O. Vinter (2001) "Preventing Requirement Defects: An Experiment in Process
Improvement", Requirements Engineering (6)1, pp. 37-50
Leckie, G. J. (2005) “General Model of the Information Seeking of Professionals”, in Fisher, K.
E., S. Erdelez, and L. McKechnie (eds.) Theories of Information Behavior, Medford, New
Jersey: Information Today, pp. 158-163
Lindland, O. I., G. Sindre, and A. Solvberg (1994) "Understanding Quality in Conceptual
Modeling", IEEE Software (11)2, pp. 42-49
Locke, E. A. and G. P. Latham (1990) A Theory of Goal Setting and Task Performance,
Englewood Cliffs, New Jersey: Prentice-Hall
Lukyanenko, R., J. Parsons, and Y. F. Wiersma (2014a) “The Impact of Conceptual Modeling
on Dataset Completeness: A Field Experiment”, in Myers, M. D. and D. W. Straub (eds.)
35th International Conference on Information Systems, Auckland, New Zealand:
Association for Information Systems,
Lukyanenko, R., J. Parsons, and Y. F. Wiersma (2014b) "The IQ of the Crowd: Understanding
and Improving Information Quality in Structured User-Generated Content", Information
Systems Research (25)4, pp. 669-689
Lukyanenko, R., J. Parsons, and Y. F. Wiersma (2016) "Emerging Problems of Data Quality in
Citizen Science", Conservation Biology (30)3, pp. 447-449
Lukyanenko, R., J. Parsons, Y. F. Wiersma, G. Wachinger, B. Huber, and R. Meldt (2017)
"Representing Crowd Knowledge: Guidelines for Conceptual Modeling of User-
generated Content", Journal of the Association for Information Systems (18)4, pp. 297-
339
Maes, A. and G. Poels (2007) "Evaluating Quality of Conceptual Modelling Scripts Based on
User Perceptions", Data & Knowledge Engineering (63)3, pp. 769-792
March, J. G. (1991) "Exploration and Exploitation in Organizational Learning", Organization
Science (2)1, pp. 71-87
Masri, K., D. C. Parker, and A. Gemino (2008) "Using Iconic Graphics in Entity-Relationship
Diagrams: The Impact on Understanding", Journal of Database Management (19)3, pp.
22-41
Mayer, R. E. (1989) "Models for Understanding", Review of Educational Research (59)1, pp. 43-
64
Mayer, R. E. (2009) Multimedia Learning, 2nd edition, Cambridge, Massachusetts: Cambridge
University Press
Mendling, J., M. Strembeck, and J. Recker (2012) "Factors of Process Model Comprehension
— Findings from a Series of Experiments", Decision Support Systems (53)1, pp. 195-
206
Miller, G. A. (1956) "The Magical Number Seven, Plus or Minus Two: Some Limits on Our
Capacity for Processing Information", Psychological Review (63)2, pp. 81-97
Milliken, F. J. (1987) "Three Types of Perceived Uncertainty About the Environment: State,
Effect, and Response Uncertainty", Academy of Management Review (12)1, pp. 133-143
Mohagheghi, P., W. Gilani, A. Stefanescu, and M. A. Fernandez (2013) "An Empirical Study of
the State of the Practice and Acceptance of Model-driven Engineering in Four Industrial
Cases", Empirical Software Engineering (18)1, pp. 89-116
Moody, D. L. (2009) "The “Physics” of Notations: Toward a Scientific Basis for Constructing
Visual Notations in Software Engineering", IEEE Transactions on Software Engineering
(35)6, pp. 756-779
Newell, A. and H. A. Simon (1972) Human Problem Solving, Englewood Cliffs, New Jersey:
Prentice-Hall
Nolte, A., E. Bernhard, J. Recker, F. Pittke, and J. Mendling (2016) "Repeated Use of Process
Models: The Impact of Artifact, Technological, and Individual Factors", Decision Support
Systems (88), pp. 98-111
Oliver, R. L. (1977) "Effect of Expectation and Disconfirmation on Postexposure Product
Evaluations - an Alternative Interpretation", Journal of Applied Psychology (62)4, pp.
480-486
OMG (2011) "Business Process Model and Notation (BPMN) - Version 2.0", Object
Management Group, http://www.omg.org/spec/BPMN/2.0 (current March 17, 2011)
Opdahl, A. L. and B. Henderson-Sellers (2002) "Ontological Evaluation of the UML Using the
Bunge-Wand-Weber Model", Software and Systems Modeling (1)1, pp. 43-67
Parsons, J. (2011) "An Experimental Study of the Effects of Representing Property Precedence
on the Comprehension of Conceptual Schemas", Journal of the Association for
Information Systems (12)6, pp. 401-422
Petre, M. (2013) “UML in Practice”, in Cheng, B. H. C. and K. Pohl (eds.) 2013 International
Conference on Software Engineering, San Francisco, California: IEEE, pp. 722-731
Pretz, J. E., A. J. Naples, and R. J. Sternberg (2003) “Recognizing, Defining, and Representing
Problems”, in Davidson, J. E. and R. J. Sternberg (eds.) The Psychology of Problem
Solving, New York, New York: Cambridge University Press, pp. 3-30
Puolamäki, K. and A. Bertone (2009) "Introduction to the Special Issue on Visual Analytics and
Knowledge Discovery", SIGKDD Explorations (11)2, pp. 3-4
Purao, S., M. Rossi, and A. Bush (2002) "Towards an Understanding of Problem and Design
Spaces during Object-oriented Systems Development", Information and Organization
(12)4, pp. 249-281
Recker, J. (2010) "Continued Use of Process Modeling Grammars: The Impact of Individual
Difference Factors", European Journal of Information Systems (19)1, pp. 76-92
Recker, J. (2013) "Empirical Investigation of the Usefulness of Gateway Constructs in Process
Models", European Journal of Information Systems (22)6, pp. 673-689
Recker, J. and A. Dreiling (2011) "The Effects of Content Presentation Format and User
Characteristics on Novice Developers’ Understanding of Process Models",
Communications of the Association for Information Systems (28)6, pp. 65-84
Recker, J., M. Indulska, M. Rosemann, and P. Green (2010) "The Ontological Deficiencies of
Process Modeling in Practice", European Journal of Information Systems (19)5, pp. 501-
525
Recker, J., M. Rosemann, P. Green, and M. Indulska (2011) "Do Ontological Deficiencies in
Modeling Grammars Matter?", MIS Quarterly (35)1, pp. 57-79
Recker, J., M. Rosemann, M. Indulska, and P. Green (2009) "Business Process Modeling: A
Comparative Analysis", Journal of the Association for Information Systems (10)4, pp.
333-363
Figl, K. and J. Recker (2016) "Process Innovation as Creative Problem-Solving: An
Experimental Study of Textual Descriptions and Diagrams", Information & Management
(53)6, pp. 767-786
Rioux, K. (2005) “Information Acquiring-and-Sharing”, in Fisher, K. E., S. Erdelez, and L.
McKechnie (eds.) Theories of Information Behavior, Medford, New Jersey: Information
Today, pp. 169-173
Rosemann, M., P. Green, and M. Indulska (2004) “A Reference Methodology for Conducting
Ontological Analyses”, in Lu, H., W. Chu, P. Atzeni, S. Zhou, and T. W. Ling (eds.)
Conceptual Modeling – ER 2004, Shanghai, China: Springer, pp. 110-121
Rosemann, M., J. Recker, P. Green, and M. Indulska (2009) "Using Ontology for the
Representational Analysis of Process Modeling Techniques", International Journal of
Business Process Integration and Management (4)4, pp. 251-265
Samuel, B. M., L. A. Watkins III, A. Ehle, and V. Khatri (2015) "Customizing the Representation
Capabilities of Process Models: Understanding the Effects of Perceived Modeling
Impediments ", IEEE Transactions on Software Engineering (41)1, pp. 19-39
Samuelson, P. A. and W. D. Nordhaus (2001) Microeconomics, 17th edition, Boston,
Massachusetts: McGraw-Hill/Irwin
Scott, J. (2000) “Rational Choice Theory”, in Browning, G., A. Halcli, and F. Webster (eds.)
Understanding Contemporary Society: Theories of the Present, Thousand Oaks,
California: Sage, pp. 126-138
Shanks, G., E. Tansley, J. Nuredini, D. Tobin, and R. Weber (2008) "Representing Part–Whole
Relations in Conceptual Modeling: An Empirical Evaluation", MIS Quarterly (32)3, pp.
553-573
Siau, K. and L. Y. Lee (2004) "Are Use Case and Class Diagrams Complementary in
Requirements Analysis: An Experimental Study on Use Case and Class Diagrams in
UML", Requirements Engineering (9)4, pp. 229-237
Siau, K. and M. Rossi (2011) "Evaluation Techniques for Systems Analysis and Design
Modelling Methods - A Review and Comparative Analysis", Information Systems Journal
(21)3, pp. 249-268
Sweller, J. (1988) "Cognitive Load During Problem Solving: Effects on Learning", Cognitive
Science (12)2, pp. 257-285
Truex, D. P. and R. Baskerville (1998) "Deep Structure or Emergence Theory: Contrasting
Theoretical Foundations for Information Systems Development", Information Systems
Journal (8)2, pp. 99-118
Tuovinen, J. E. and J. Sweller (1999) "A Comparison of Cognitive Load Associated With
Discovery Learning and Worked Examples", Journal of Educational Psychology (91)2,
pp. 334-341
Urquhart, C. and W. D. Fernandez (2013) "Using Grounded Theory Method in Information
Systems: The Researcher as Blank Slate and Other Myths", Journal of Information
Technology (28)3, pp. 224-236
Venkatesh, V. and F. D. Davis (2000) "A Theoretical Extension of the Technology Acceptance
Model: Four Longitudinal Field Studies", Management Science (46)2, pp. 186-204
Venkatesh, V., M. G. Morris, G. B. Davis, and F. D. Davis (2003) "User Acceptance of
Information Technology: Toward a Unified View", MIS Quarterly (27)3, pp. 425-478
Vessey, I. and D. F. Galletta (1991) "Cognitive Fit: An Empirical Study of Information
Acquisition", Information Systems Research (2)1, pp. 63-84
Wand, Y. and R. Weber (1989) “An Ontological Evaluation of Systems Analysis and Design
Methods”, in Falkenberg, E. D. and P. Lindgreen (eds.) Information System Concepts:
An In-depth Analysis. Proceedings of the IFIP TC 8/WG 8.1 Working Conference on
Information System Concepts, Amsterdam, The Netherlands: North Holland, pp. 79-107
Wand, Y. and R. Weber (1990) "An Ontological Model of an Information System", IEEE
Transactions on Software Engineering (16)11, pp. 1282-1292
Wand, Y. and R. Weber (1993) "On the Ontological Expressiveness of Information Systems
Analysis and Design Grammars", Journal of Information Systems (3)4, pp. 217-237
Wand, Y. and R. Weber (1995) "On the Deep Structure of Information Systems", Information
Systems Journal (5)3, pp. 203-223
Wand, Y. and R. Weber (2002) "Research Commentary: Information Systems and Conceptual
Modeling - A Research Agenda", Information Systems Research (13)4, pp. 363-376
Ward, M. and J. Sweller (1990) "Structuring Effective Worked Examples", Cognition and
Instruction (7)1, pp. 1-39
Weber, R. (1997) Ontological Foundations of Information Systems, Melbourne, Australia:
Coopers & Lybrand and the Accounting Association of Australia and New Zealand
Weber, R. (2012) "Evaluating and Developing Theories in the Information Systems Discipline",
Journal of the Association for Information Systems (13)1, pp. 1-30
Weber, R. and Y. Zhang (1996) "An Analytical Evaluation of NIAM's Grammar for Conceptual
Schema Diagrams", Information Systems Journal (6)2, pp. 147-170
Whetten, D. A. (1989) "What Constitutes a Theoretical Contribution?", Academy of Management
Review (14)4, pp. 490-495
Whiteley, D. (2013) An Introduction to Information Systems, London, England: Palgrave
Macmillan
Whittle, J., J. Hutchinson, and M. Rouncefield (2014) "The State of Practice in Model-Driven
Engineering", IEEE Software (31)3, pp. 79-85
Wilson, T. D. (1999) "Models in Information Behaviour Research", Journal of Documentation
(55)3, pp. 249-270
Wood, C., B. Sullivan, M. Iliff, D. Fink, and S. Kelling (2011) "eBird: Engaging Birders in Science
and Conservation", PLoS Biology (9)12, pp. e1001220
Xue, L., G. Ray, and B. Gu (2011) "Environmental Uncertainty and IT Infrastructure
Governance: A Curvilinear Relationship", Information Systems Research (22)2, pp. 389-
399
Yourdon, E. (1989) Modern Structured Analysis, Upper Saddle River, New Jersey: Prentice-Hall
Zhang, H., R. Kishore, R. Sharman, and R. Ramesh (2007) "Agile Integration Modeling
Language (AIML): A Conceptual Modeling Grammar for Agile Integrative Business
Information Systems", Decision Support Systems (44)1, pp. 266-284
Zigurs, I. and B. K. Buckland (1998) "A Theory of Task/Technology Fit and Group Support
Systems Effectiveness", MIS Quarterly (22)3, pp. 313-334
zur Muehlen, M. and M. Indulska (2010) "Modeling Languages for Business Processes and
Business Rules: A Representational Analysis", Information Systems (35)4, pp. 379-390
zur Muehlen, M. and J. Recker (2008) “How Much Language is Enough? Theoretical and
Practical Use of the Business Process Modeling Notation”, in Léonard, M. and Z.
Bellahsène (eds.) Advanced Information Systems Engineering - CAiSE 2008,
Montpellier, France: Springer, pp. 465-479
APPENDIX A: BACKGROUND TO REPRESENTATION THEORY AND DEFINITIONS
OF KEY CONSTRUCTS
Representation theory (Wand and Weber, 1990; 1993; 1995) addresses the question of how well
conceptual modeling grammars can generate faithful (i.e., clear, complete, and accurate, see
Weber, 1997, p. 83) representations of relevant real-world phenomena. To identify relevant
types of real-world phenomena, the theory adopts and modifies an ontological theory of the real
world proposed by Bunge (1977; 1979). Representation theory suggests a mapping between
the set of existing constructs in a conceptual modeling grammar that is available to the user to
model aspects of the real world, and the set of constructs in a benchmark ontology (such as
Bunge’s) that are required and sufficient to describe real-world phenomena.4 Based on this
mapping, Wand and Weber (1993) suggest two basic criteria: A good modeling grammar or
indeed a good conceptual model should be ontologically complete (viz., exhibit no construct
deficit) and ontologically clear (viz., exhibit no construct overload, redundancy, or excess) to
describe accurately and unambiguously all required real-world phenomena in the business domain an IS is intended to support.
4 Representation theory is not tied to a specific ontological model. While Bunge's ontological theory is often used, other benchmark ontologies could and should be used (Wand and Weber, 1993, p. 221).
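To make the mapping idea and the two criteria more concrete, the following minimal sketch (ours, not part of representation theory's original formulation) checks a purely hypothetical grammar-to-ontology mapping for the four deficiency types implied by these criteria: construct deficit, overload, redundancy, and excess. All construct names in the sketch are illustrative placeholders rather than results of any published analysis.

```python
# Illustrative sketch only: a hypothetical grammar-to-ontology mapping and
# checks for the four deficiency types named by representation theory.
BENCHMARK = {"thing", "property", "state", "event", "transformation"}  # toy benchmark ontology

# Hypothetical mapping: grammar construct -> ontological construct(s) it represents
MAPPING = {
    "entity type": {"thing"},
    "attribute": {"property"},
    "relationship": {"property"},        # second grammar construct for the same ontological construct
    "note": set(),                       # carries no ontological meaning
    "state machine": {"state", "event"}, # one grammar construct, two ontological constructs
}

def analyze(mapping, benchmark):
    covered = set().union(*mapping.values())
    return {
        "deficit": benchmark - covered,                                # incompleteness
        "excess": {g for g, o in mapping.items() if not o},            # lack of clarity
        "overload": {g for g, o in mapping.items() if len(o) > 1},     # lack of clarity
        "redundancy": {c for c in benchmark
                       if sum(c in o for o in mapping.values()) > 1},  # lack of clarity
    }

print(analyze(MAPPING, BENCHMARK))
# {'deficit': {'transformation'}, 'excess': {'note'}, 'overload': {'state machine'}, 'redundancy': {'property'}}
```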
Relevant construct definitions from Wand and Weber's theory are provided in Table A1. The theory and the literature that builds on it are reviewed by Burton-Jones et al. (2017), so we do not discuss them in detail here. However, three observations about this literature are relevant to our paper:
1. The theory has been subjected to various tests and applications (Burton-Jones et al., 2017).
However, this research has largely been on single grammars or single models, examining
questions like whether ontological deficiencies in a grammar lower perceptions of its
usefulness (Recker et al., 2011), whether ontological deficiencies in a grammar inhibit users' ability to model a particular real-world phenomenon faithfully (e.g., Bodart et al., 2001;
Shanks et al., 2008; Parsons, 2011), and whether users’ ability to understand a real-world
domain is inhibited by deficiencies in the conceptual model (e.g., Gemino and Wand, 2005;
Bowen et al., 2006; Evermann and Wand, 2006; Bera et al., 2011).
2. All theory-based evaluations of conceptual modeling grammars (e.g., UML, OML, OPM, ERD, DFD, BPMN, Petri nets, MibML, WSDL, BPEL, and others) to date have shown that no available grammar is ontologically complete (e.g., Wand and Weber, 1993; Weber and Zhang, 1996; Opdahl and Henderson-Sellers, 2002; Irwin and Turk, 2005; Green et al., 2007; Recker et al., 2009). Therefore, even users who want to create models that fully represent all aspects of a real-world phenomenon cannot do so, and no single conceptual model offers a full representation of a real-world domain. Green (1996)
and Weber (1997, pp. 100-102) therefore predicted that users would employ modeling
grammars in combination to address deficits in any one grammar. They made two
predictions. First, model designers select grammar combinations with maximal ontological
completeness, that is, a combination that minimizes total construct deficit and covers as
many aspects of the focal real-world phenomenon as possible. Second, model designers
select grammar combinations with minimal ontological overlap, that is, combinations that
minimize the grammars’ overlap of representations of real-world phenomena that can be
modelled.
Work was then conducted to develop propositions and gather data about which grammar
combinations designers might select (Green et al., 2007; zur Muehlen and Indulska, 2010;
Green et al., 2011). But this work has not been extended to examine model readers’
interpretations of combinations of models (rather than the grammars with which they have
been constructed).
3. The existing empirical work has demonstrated that ontological deficiencies can predict weaknesses in the use of conceptual models or grammars. Yet, these effects are not
uniform or consistent. On one hand, implications of ontological incompleteness appear clear:
The lack of potentially relevant information about a real-world domain diminishes the level of
understanding users can generate (e.g., Bajaj, 2004; Parsons, 2011), which often leads
them to seek workarounds and customizations (Recker et al., 2010; Green et al., 2011;
Samuel et al., 2015). They devise new grammatical constructs, access additional grammars
or “aids,” or refer to additional documentation to provide the meaning missing from the
model. On the other hand, deficiencies of ontological clarity, in particular redundancy, do not
always have clear effects (Fickinger and Recker, 2013). Construct redundancy can have
positive consequences for users (Green and Rosemann, 2001), no apparent consequences
(Recker et al., 2010), or only partially negative consequences (Recker et al., 2011).
Therefore, redundancy of representations between models may be either beneficial or detrimental.
These observations indicate at least one non-trivial dialectical logic involved in how individuals
might interpret multiple models in combination: on the one hand, it seems logical for users to
seek multiple models if these models would provide a more complete representation of a real-
world domain. On the other hand, multiple models are often at least partially redundant, as
information from one model may also be contained in a second model, so the representations
may partially overlap. It remains unclear whether this situation constitutes an advantage or a problem:
Wand and Weber’s theory predicts a lack of clarity from redundancy, but the empirical evidence
from single grammars (Recker et al., 2010; Recker et al., 2011) and even multiple grammars
(Green and Rosemann, 2001; Gemino and Parker, 2009) suggests that users may sometimes
experience benefits from redundancies. Our theory development provides an attempt to unpack
this dialectic.
Table A1. Key Construct Definitions

Representation (Wand and Weber, 1995, p. 207): A model of someone's or some group's perception of the meaning of a real-world phenomenon.
Real-world phenomenon (Weber, 1997, pp. 34, 72): The aggregation of constituent things and their properties that exist in the real world, as perceived by someone or some group.
Conceptual model (Weber, 1997, p. 75): The script (i.e., a meaningful, orderly collection of symbols) that embodies the description of the real-world phenomenon as perceived by someone or some group.
Combined ontological completeness (newly developed construct): The extent to which a conceptual model combination of two or more scripts provides a full representation of someone's or some group's perception of the meaning of some real-world phenomenon.
Ontological overlap (newly developed construct): The extent to which two or more scripts in a conceptual model combination share model constructs that provide the same representation of some real-world phenomenon.
Maximal ontological completeness (newly developed construct): The fullest level of representation of someone's or some group's perception of the meaning of some real-world phenomenon attained by one out of several combinations of conceptual models.
Minimal ontological overlap (newly developed construct): The lowest level of shared representations of someone's or some group's perception of the meaning of some real-world phenomenon in one out of several combinations of conceptual models.
Interpretation of conceptual models (newly developed construct): The extent to which reading one or more conceptual models provides an individual user with a complete, clear, and accurate understanding of the meaning of the described real-world phenomenon in a goal-directed activity.
Selection of model combination (newly developed construct): The decision to choose to employ one set of two or more conceptual models for a given task from a larger set of available conceptual models.
Domain understanding (adapted from Mayer, 2009): The new knowledge that readers generate about the elements in a real-world domain and the actual and possible relationships between these elements through the organization and integration of information content in the conceptual models that are presented to them with their own previous experience and existing mental models.
Perceived usefulness of model combination (adapted from Maes and Poels, 2007, p. 709): The degree to which a reader believes that a particular conceptual model combination was effective in achieving the intended task objectives.
APPENDIX B: ILLUSTRATING PROCEDURES FOR ENACTING THE THEORY
Here we describe procedures for applying our theory to the analysis of multiple models. To keep
this illustration simple, we use materials from an established textbook for systems analysis and
design: the High-Peak Bicycles case described by Whiteley (2013, pp. 228-263). We chose this
case simply because the textbook features a wide selection of models for this scenario.
The case describes the development of an information system to maintain records of bicycle rentals, with requirements that the system support maintaining a bike register, renting out and returning rentals, allocating bikes, processing transactions, and other functionality. We focus on
four types of models used in the case: use case, entity-relationship, data flow, and sequence
diagram (Figure B1).5
The procedure for analyzing the four models used in the High-Peak Bicycles case involves three
steps: (1) performing an interpretation mapping, (2) establishing levels of maximal ontological
completeness and minimal ontological overlap, and (3) deriving hypotheses from the analysis.
We describe each, in turn.
5 We selected these types of models because the relevant grammars have been analyzed using representation
theory (Wand and Weber, 1989; Opdahl and Henderson-Sellers, 2002; Irwin and Turk, 2005; Green et al.,
2011). We do not provide a detailed description of the grammars because that information is available in
many textbooks (e.g., Yourdon, 1989; Fowler, 2004; Whiteley, 2013).
Figure B1: Conceptual models for the High-Peak Bicycles case (Whiteley, 2013, pp. 228-263): (a) use case diagram, (b) data flow diagram, (c) entity-relationship diagram, (d) sequence diagram.
The first step, interpretation mapping (Wand and Weber, 1993, p. 221), involves matching
grammar constructs featuring in each of the models to an ontological benchmark such as
Bunge’s (1977; 1979) ontological theory. The constructs described in Bunge’s ontology as used
in representation theory are summarized by Recker et al. (2009). A detailed description of these
constructs is provided by Weber (1997). Guidelines for carrying out interpretation mappings are
also available (Rosemann et al., 2004; Rosemann et al., 2009). For the four models in the
High-Peak Bicycles case, we conducted the interpretation mapping in three steps:
1. We identified the published ontological analyses for the grammars used to create the four
diagrams. Irwin and Turk (2005) evaluated the use case modeling grammar; Wand and
Weber (1989) evaluated the data flow diagramming grammar; Green et al. (2011) evaluated
the entity-relationship modeling grammar; and Opdahl and Henderson-Sellers (2002)
evaluated sequence diagrams as part of the UML grammar.
2. For each grammar, we identified all grammar constructs included in the models shown in
Figure B1. For instance, the use case diagram in the case includes the constructs "Actor" and "Use Case" but not the constructs "System" or "Extend" (Irwin and Turk, 2005, p. 5).
3. For each construct, we reviewed the grammar mappings and corresponding mapping
rationale in the original analyses to confirm that they applied to the models in Figure B1.
This task was especially important for grammar constructs that are overloaded per the grammar specification (that is, constructs that map to at least two ontological constructs; see Wand and Weber, 1993). In such instances, we had to evaluate which meaning was ascribed to the
construct in the model in order to identify the corresponding ontological construct present in
the model. This step was important because interpretation mappings in general are not just
a 1:1 correspondence of ontological to grammatical constructs. Therefore, in any model,
overloaded grammar constructs could have more than one ontological interpretation. Still,
for validity and replicability purposes, we retained the interpretations from the literature (point 1 above) wherever possible.
Table B2 details the rationales behind each mapping, and Table B3 summarizes the mapping
results. Across the four diagrams, a total of ten distinct ontological constructs are represented.
Table B2. Ontological Evaluations of Conceptual Model Constructs in the Case

Use case diagram:
- Actor (Class): Staff and Customer are roles that describe specific types of things (e.g., humans). See Irwin and Turk (2005, p. 13).
- Use case (Transformation): Use cases describe sets of actions as mappings that will change the state of the system.
- Association (Binding mutual property): Associations draw links between actors and use cases, such as which role is authoritative for carrying out an action.
- Generalization (Excess): Generalization between use cases does not carry an ontological meaning because it violates the "kind of" relationship that can exist between things (but not between processes and changes of states). See Irwin and Turk (2005, p. 13).

Data flow diagram:
- External entity (Class): External entities represent types of a thing that share similar properties.
- Data store, e.g., "D3 customer" (State): Data stores represent information about the state of a thing (e.g., the current values of relevant variables about a customer). See Wand and Weber (1989, p. 92).
- Data flow (Event): Data flows can represent external events (e.g., rental) or internal events (e.g., non-return).
- Process (Transformation): Processes describe mappings that define how things change from one state into another.

Entity-relationship diagram:
- Entity type (Class and property): Entity types represent types of a thing that share similar properties in general (e.g., customers share the common property of being customers of a particular company). See Green et al. (2011, p. 6).
- Relationship type (Coupling and binding mutual property): Relationship types describe the binding mutual properties that couple two classes of things.
- Cardinality of relationship (State law): Cardinality constraints represent a state law that constrains the values of a binding mutual property to certain conditions.

Sequence diagram:
- Object (Thing, state, and event): Some objects (e.g., a bike or a customer) in the UML sequence diagram are actual, physical things whose properties are modified over the course of the sequence. Other objects (e.g., rent out, control) describe events that occur and lead to state changes. Finally, some objects (e.g., a rental rate) describe the current state vector of some thing (in this case, the current value for a rental rate for a particular bike).
- Object lifeline (History): The lifeline represents the history of events and state changes that occur to a thing (e.g., a bike).
- Message, return message, and self-message (Coupling and binding mutual property): Messages describe how a thing acts on another thing by changing the binding mutual property between them. See Opdahl and Henderson-Sellers (2002, p. 55). Ontologically, messages, return messages, and self-messages all denote types of binding mutual properties and are, therefore, redundant.
- Guard condition (State law): Guard conditions describe properties that restrict the functions of a mutual property between things to a lawful subset.

Legend: Each entry lists a grammar construct used in the diagrams in Figure B1, followed (in parentheses) by the ontological construct of representation theory (Weber, 1997) to which it is mapped, as defined by Recker et al. (2009, p. 361), and the rationale for the mapping.
Table B3. Summary of interpretation mapping of the four conceptual models
(an X indicates that the ontological construct is represented in the diagram, per Table B2)

Ontological construct | Use case diagram | Data flow diagram | Entity-relationship diagram | Sequence diagram
Thing | - | - | - | X
Property in general | - | - | X | -
Binding mutual property | X | - | X | X
Class | X | X | X | -
State | - | X | - | X
State law | - | - | X | X
Event | - | X | - | X
History | - | - | - | X
Coupling | - | - | X | X
Transformation | X | X | - | -
Sum | 3 out of 10 | 4 out of 10 | 5 out of 10 | 7 out of 10
The next step in applying our theory involves determining the levels of MOC and MOO of
the possible combinations of conceptual models. This is done by computing the sum of covered
ontological constructs represented in any combination (MOC) and the sum of shared ontological
constructs (MOO) in any combination. Table B4 and Table B5 summarize these results.6
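To illustrate how these levels can be computed, the following minimal sketch (ours) encodes the per-diagram ontological constructs summarized in Table B3 and derives MOC and MOO for arbitrary combinations. We operationalize MOO here as the number of ontological constructs represented in more than one model of a combination; this reading reproduces the values reported in Table B4 and Table B5.

```python
# Sketch of the MOC/MOO computation, using the per-diagram ontological
# constructs summarized in Table B3.
from itertools import combinations
from collections import Counter

DIAGRAMS = {
    "use case": {"class", "binding mutual property", "transformation"},
    "data flow": {"class", "state", "event", "transformation"},
    "entity-relationship": {"class", "property in general", "binding mutual property",
                            "coupling", "state law"},
    "sequence": {"thing", "binding mutual property", "state", "state law",
                 "event", "history", "coupling"},
}

def combined_completeness(names):
    # Number of distinct ontological constructs covered by the combination (MOC).
    return len(set().union(*(DIAGRAMS[n] for n in names)))

def ontological_overlap(names):
    # Number of ontological constructs represented in more than one model (MOO).
    counts = Counter(c for n in names for c in DIAGRAMS[n])
    return sum(1 for k in counts.values() if k > 1)

for pair in combinations(DIAGRAMS, 2):
    print(pair, combined_completeness(pair), ontological_overlap(pair))
# e.g., ('use case', 'sequence') yields MOC 9 and MOO 1, as in Table B4
```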
Table B4. MOC and MOO of Pairwise Combinations of Conceptual Models in the High-Peak Bicycles Case

Diagram type | Use case diagram | Data flow diagram | Entity-relationship diagram | Sequence diagram
Use case diagram | 0 | 2 | 2 | 1
Data flow diagram | 5 | 0 | 1 | 2
Entity-relationship diagram | 6 | 8 | 0 | 3
Sequence diagram | 9 | 9 | 9 | 0

6 In Table B4, the MOC of each combination is given in the cells below the diagonal; the MOO is given in the cells above the diagonal.
Table B5. MOC and MOO of 3-Way and 4-Way Combinations of Conceptual Models in the High-Peak Bicycles Case

Combination | MOC | MOO
Use case, data flow, and entity-relationship diagrams | 8 | 3
Use case, data flow, and sequence diagrams | 9 | 5
Use case, entity-relationship, and sequence diagrams | 10 | 4
Data flow, entity-relationship, and sequence diagrams | 10 | 6
Use case, data flow, entity-relationship, and sequence diagrams | 10 | 7
Having determined the levels of MOC and MOO of the possible combinations of conceptual
models, we can now evaluate which combinations are preferable. Because conceptual model
interpretation is a goal-directed activity occurring as part of some task, we will assume in what
follows that the task is systems analysis and design (Kendall and Kendall, 2008).
We examine pairwise combinations first. The best model combination in terms of maximal
ontological completeness is the sequence diagram with a choice of the use case, the data flow,
or the entity-relationship diagram (Table B4). All three pairs cover nine of ten ontological
constructs (Table B3). The worst combination is the use case diagram and the data flow
diagram, which covers only five ontological constructs. In terms of minimal ontological overlap,
either the combination of the entity-relationship diagram with the data flow diagram or the
combination of the sequence diagram with the use case diagram achieves an overlap of one
construct (class and binding mutual property, respectively; see Table B3). The worst
combination is the entity-relationship diagram with the sequence diagram, which shares
representations for three ontological constructs (binding mutual property, state law, and
coupling). In terms of both maximal ontological completeness and minimal ontological overlap,
Table B4 suggests that the best pairwise combination is the use case diagram and the
sequence diagram (MOC: 9, MOO: 1) because both remaining maximally ontologically complete
pairs have an overlap of two constructs.
An examination of all other potential 3-way and 4-way model combinations (Table B5) suggests that the optimal combination is the triple of the use case, entity-relationship, and sequence diagrams, because it achieves maximal ontological completeness (all ten constructs) with the lowest ontological overlap among the maximally complete combinations (four constructs). From the viewpoint of maximal ontological completeness, the worst combination is the triple of the use case, data flow, and entity-relationship diagrams (MOC: 8). From the viewpoint of minimal ontological overlap, the worst of the 3-way combinations is the triple of the data flow, entity-relationship, and sequence diagrams (MOO: 6). Note that the combination of all four models (MOC: 10, MOO: 7) is worse for interpretation than the noted optimal triple because it achieves the same level of ontological completeness while having a higher level of ontological overlap.
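The selection logic applied in this analysis can be summarized as a simple decision rule: among the candidate combinations, prefer the highest combined ontological completeness and, where completeness is tied, the lowest ontological overlap. The following minimal sketch (ours) applies that rule to the (MOC, MOO) values reported in Table B5; the combination labels are shorthand for the diagram types.

```python
# Sketch of the selection rule applied above: maximize MOC first, then minimize MOO.
# (MOC, MOO) values are taken from Table B5.
CANDIDATES = {
    ("use case", "data flow", "entity-relationship"): (8, 3),
    ("use case", "data flow", "sequence"): (9, 5),
    ("use case", "entity-relationship", "sequence"): (10, 4),
    ("data flow", "entity-relationship", "sequence"): (10, 6),
    ("use case", "data flow", "entity-relationship", "sequence"): (10, 7),
}

best = max(CANDIDATES, key=lambda c: (CANDIDATES[c][0], -CANDIDATES[c][1]))
print(best, CANDIDATES[best])
# ('use case', 'entity-relationship', 'sequence') (10, 4)
```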
The final step in applying our theory is to derive propositions about users’ selection,
understanding, and perceived usefulness of conceptual model combinations (for the task of
systems analysis and design) from the analysis summarized in Table B4 and Table B5.
Regarding proposition 1, our theory suggests that users will select from the set of diagrams in
the following order: first, the sequence diagram and the entity-relationship diagram (because
this combination has the highest increase in ontological completeness); and second, the
addition of the use case diagram (because in this combination ontological overlap increases to a
lesser extent than it does with the data flow diagram, while both additions increase ontological
completeness in the same way). Should the data flow diagram be selected in addition, the
theory suggests that users will discard this diagram from the combination in order to decrease
the level of ontological overlap.
Regarding proposition 2, our analysis suggests, first, that users will generate the highest level of
domain understanding when interpreting the triple of the use case diagram, the entity-relationship
diagram, and the sequence diagram and, second, that interpreting this triple will allow users to
generate more domain understanding than will interpreting all four diagrams together because
the triple exhibits less ontological overlap while exhibiting the same level of completeness.
Regarding proposition 3, our analysis suggests, first, that users will evaluate the triple of the use case diagram, the entity-relationship diagram, and the sequence diagram as the most useful combination. Our analysis also suggests that the perceived usefulness of all four diagrams together will be lower than the perceived usefulness of this triple. Finally, users will perceive the pair of the use case diagram and the sequence diagram as more useful than all four diagrams together. This
situation occurs because, even though the combined ontological completeness is lower in the
pair, the ontological overlap is much lower than it is between all four models. Our theory
suggests that practitioners will evaluate this pair as “useful enough” for the tasks at hand.
To summarize, we provided this procedural illustration to demonstrate how to apply our theory
to generate empirically falsifiable predictions, i.e., testable hypotheses. It was not our intent to
demonstrate that the hypotheses are unequivocally correct but rather to show that the theory we
formulated allows both for analysis (what are the ontological properties of various combinations
of conceptual models?) and explanation (which combinations do users evaluate as faithful?)
(Gregor, 2006).
APPENDIX C: ESTABLISHING BOUNDARY CONDITIONS THROUGH EMPIRICAL
RESEARCH
Recall that our theory describes a model of properties vested in conceptual models as artefacts.
This focus entails several assumptions that limit the scope of contexts in which our theory holds.
We describe several potential boundary conditions in the Discussion section of the paper.
Still, we wish to encourage researchers to seek out further boundaries, because knowing the conditions under which a theory's predictions succeed or fail, and why, allows greater faith to be placed in that theory (Gray and Cooper, 2010).
One way to tease out a theory’s boundaries further would be through the use of meta-analysis
to quantitatively assess whether a theory's assumptions and predictions hold under a wide
range of circumstances (King and He, 2005). This approach, however, requires a large sample
of empirical studies. Another way is to investigate how potential moderators affect the
associations stipulated in a theory (Edwards and Berry, 2010, p. 676). Given that our theory
has not yet been tested empirically, we find this approach more useful because contemplating
potential moderator variables could feature in the design of empirical studies to evaluate our
propositions. In what follows, we describe what we believe are three relevant variables that
should be incorporated into research designs as potential moderators, and we present
arguments for how they might influence what we regard as the principal proposition in our theory: the development of domain understanding from interpreting multiple conceptual models. We leave the exploration of variations on the other propositions to future work. Of course, our speculations remain tentative at this stage: the variables may also act as control, interaction, or mediation terms. Still, they remain important to the design of a study.
Environmental uncertainty
One assumption is central to our theory: the primary aim for the interpretation of conceptual
models in isolation or in combination is to obtain a complete, clear, and accurate representation
of the relevant real-world phenomenon (Wand and Weber, 1990; 1993). This assumption might
be challenged in some contexts, as “complete, clear and accurate” might not always be a
central aim for many tasks.7 For example, practices that involve system analysis and design
have changed. One apparent shift is the move away from legacy systems and packaged
software toward agile approaches to systems development (Fowler and Highsmith, 2001).
Agile approaches to systems development embrace readiness “to rapidly or inherently create
change, proactively or reactively embrace change, and learn from change while contributing to
perceived customer value (economy, quality, and simplicity), through its collective components
and relationships with its environment” (Conboy, 2009, p. 340). This context presents
challenges to traditional uses of conceptual modeling (Erickson et al., 2005; Zhang et al., 2007)
because the environment in which conceptual models are used is more fluid, emergent,
complex, and dynamic: models of systems are rapidly being translated into running prototypes;
new system structures and features emerge through constant and frequent feedback; and
requirements changes are embraced. In turn, analysts and designers cannot fully anticipate how
a system will evolve, what new functionality might be added, or what purposes in the real-world
domain the system might need to satisfy. Overall, the context for the use of conceptual models
in such settings is characterized by environmental uncertainty.8
7 We are grateful to the review team for alerting us to this challenge. As one reviewer put it: “For example,
incomplete and inaccurate throw-away conceptual models have their uses too.”
8 We adopted the term environmental uncertainty from studies on management and governance decisions,
which employ the term in a similar manner (Milliken, 1987; Xue et al., 2011). Our use of this term, like their
use, builds on Dess and Beard’s (1984) theory of task environments.
In task settings that are characterized by environmental uncertainty, the clarity of representation
provided by conceptual models might be more important than their completeness. If the context
in which individuals interpret representations as they perform tasks is unclear, users encounter
more stimuli to process (e.g., task, representation, changing requirements, different
stakeholders, evolving features, changing deep structure). In such situations, any way to reduce
cognitive load will help them select, process, and integrate relevant information in their mental
representations and perform their tasks (Mayer, 2009). Cognitive load in the interpretation of
conceptual models stems from the number of representational elements provided (which can be
reduced by reducing the combined ontological completeness of conceptual models) and the
representational elements’ lack of clarity (which can be increased by lowering the ontological
overlap of conceptual models). Therefore, in task contexts characterized by high environmental
uncertainty, the positive impact of a model combination’s combined ontological completeness
on users’ ability to generate domain understanding might be diminished and the negative impact
of a model combination’s ontological overlap on users’ ability to generate domain understanding
might be strengthened.
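If this boundary condition were examined empirically, the moderation could be specified directly in a regression design. The following sketch (in Python, using statsmodels) is one possible specification; the dataset, file name, and variable names (understanding, completeness, overlap, uncertainty) are hypothetical placeholders rather than measures prescribed by our theory.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical dataset: one row per participant, with measured domain
    # understanding, the combined ontological completeness and ontological
    # overlap of the interpreted model set, and rated environmental uncertainty.
    df = pd.read_csv("interpretation_study.csv")  # placeholder file name

    # In the formula syntax, "a * b" expands to both main effects plus their
    # interaction; the interaction terms carry the moderation predictions.
    model = smf.ols(
        "understanding ~ completeness * uncertainty + overlap * uncertainty",
        data=df,
    ).fit()
    print(model.summary())

A negative completeness-by-uncertainty coefficient and a negative overlap-by-uncertainty coefficient would be consistent with the weakening and strengthening effects argued above.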
Task nature
Tasks that involve the processing of information conveyed in models are also changing. For
example, an article in this journal from April 2017 discusses challenges to common conceptual
modeling assumptions that flow from organizations’ increasing reliance on externally produced
information, such as online user-generated content (Lukyanenko et al., 2017).
This and other apparent shifts in information processing tasks can be described as a move in
emphasis from exploitative to explorative tasks (March, 1991). Explorative tasks are
characterized by a search for novel and innovative ways of doing things and are
associated with experimentation, play, innovation, and/or discovery (March, 1991). In such
tasks, available information may be used for purposes other than those for which the
information was originally collected or the representation of the information was developed
(Lukyanenko et al., 2014b). For example, citizen science projects involve mining user-generated
content about some real-world domain (e.g., observations of native birds in some region) for
unanticipated, novel, and interesting insights (Wood et al., 2011). Likewise, in the organizational
redesign of operational procedures, analysts read conceptual models with the aim of finding
creative solutions for how operational processes might be improved (Figl and Recker, 2016),
without knowing ex ante what the solution might look like. The lack of predefined outcomes and
the difficulty of anticipating informational needs have important implications for conceptual
modeling: the quality of information is no longer defined only by precision, accuracy, or other
traditional metrics (Lukyanenko et al., 2016) but must include an evaluation of the ability
"to spot something interesting, unexpected, or novel" (p. 448).
It is likely that, in contexts in which the nature of the task setting moves from exploitation to
exploration, the strength of the associations of combined ontological completeness and of
ontological overlap with individuals' ability to generate domain understanding varies as well. For
example, in model interpretation settings characterized by explorative rather than exploitative
tasks, the completeness of representation that conceptual models provide might be more
important than their clarity because informational needs are difficult to anticipate and may even
be fluid. What constitutes a relevant aspect of some real-world phenomenon cannot always be
predefined. For example, Lukyanenko et al. (2014a, p. 6) observe how even small citizen
science projects concerned with conservation in local, confined geographic areas may find it
impossible to develop a classification model suitable to describe everything that might be
observed because distributions of plants and animals are simply not static.
In settings where a task is geared toward exploration, excluding such an aspect from a
representation could lead to inaccurate uses (e.g., misidentifications), failure to spot
relevant insights, and thus misinformation. Indeed, Lukyanenko et al. (2014a, p. 11) report on
several observed patterns of mis-matching and mis-classification stemming from the way the
information about the relevant phenomena (here: animal species observed by citizens) was
described in the conceptual model underlying the database structure.
Potential deficiencies in a representation’s clarity, by contrast, may not differ much between task
settings of an explorative or exploitative nature, because they can likely be mitigated. For
example, many explorative tasks such as knowledge discovery do not act on models or data
schema directly but are supported by tools that build on visual analytics (Puolamäki and
Bertone, 2009) to convey and communicate information about a real-world domain in a variety
of representation formats (e.g., static or interactive, tables versus graphs, with or without
transformations into new, semantically meaningful forms). The clarity of these representations
may vary; however, given a complete representation, it will always be possible to apply formats
that provide unambiguous, non-redundant, and non-excessive information about some real-
world phenomena, whereas no modeling tool, however clear in the meaning of its constructs,
can make an impoverished representation more complete. Therefore, in task settings of an
explorative, rather than exploitative nature, the positive impact of combined ontological
completeness of model combinations on users’ ability to generate domain understanding might
be strengthened whereas the negative impact of ontological overlap of model combinations on
users’ ability to generate domain understanding might be diminished.
It is also likely that these variations are not absolute. For example, when the demands for
combined ontological completeness increase in task settings of an explorative nature, so does
the cognitive load associated with processing the information presented to the user (Sweller,
1988). There is likely to be a tipping point in the positive impact of the combined ontological
completeness of model combinations on users' ability to generate domain understanding from a
set of models, much like the law of diminishing returns in economics (e.g., Samuelson and
Nordhaus, 2001, p. 110): as the number of representations of a focal real-world phenomenon
covered through conceptual models increases, the relative gains in domain understanding will
diminish once the bearable level of cognitive load is surpassed (Miller, 1956). This effect is
likely stronger in task settings of an explorative rather than an exploitative nature, because the
intrinsic load of these tasks is higher owing to their emphasis on discovery learning over schema
application (Tuovinen and Sweller, 1999), meaning that less information processing capacity
remains available in working memory to process the external information in the models
(Sweller, 1988; Chandler and Sweller, 1991).
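To illustrate the diminishing-returns argument, consider a deliberately simple, concave functional form in which domain understanding U depends on combined ontological completeness C as U(C) = b1*C - b2*C^2. Neither this form nor the coefficients below are prescribed by our theory; they merely make the tipping-point idea tangible.

    # Purely illustrative concave relation between combined ontological
    # completeness C and domain understanding U: U(C) = b1*C - b2*C**2.
    b1, b2 = 2.0, 1.5                # hypothetical coefficients
    tipping_point = b1 / (2 * b2)    # beyond this point marginal gains turn negative
    for c in (0.25, 0.50, 0.75, 1.00):
        u = b1 * c - b2 * c ** 2
        print(f"C={c:.2f}  U={u:.2f}")
    print(f"illustrative tipping point at C={tipping_point:.2f}")

Under the argument above, explorative tasks would correspond to a larger b2 (higher intrinsic load) and therefore an earlier tipping point.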
Prior domain knowledge
A third variable relates to the influence of individual-level attributes characterizing the model
reader. As we discuss above, our theory is not a theory of pragmatics or cognitive psychology
that would explain how individuals come to learn new knowledge from conceptual models; it
merely states why attributes of models influence users' interpretations (Shanks et al., 2008;
Bera et al., 2014; Burton-Jones et al., 2017).
Some work has demonstrated that individual-level variables influence the extent to which users
develop domain understanding from conceptual models. Among variables such as grammar
familiarity (Recker, 2010), modeling experience (Mendling et al., 2012), schema expertise
(Khatri and Vessey, 2016), and analyst role (Samuel et al., 2015), prior domain
knowledge (Khatri et al., 2006; Bera et al., 2014) stands out as the most widely studied user
characteristic in this context.
Prior domain knowledge captures the realm of knowledge an individual has about a particular
real-world domain (Alexander, 1992). It describes the mental model individuals hold about a
domain, which they use as a basis for internalizing new knowledge presented to them through
one or more conceptual models of that domain. In other words, prior domain
knowledge determines how much “new information” a model or set of models holds for the
person reading it (Mayer, 2009).
Recent evidence about the influence of prior domain knowledge on individuals’ ability to
generate domain understanding from interpreting a conceptual model suggests a nonlinear
moderation effect in the form of a downward concave curve (Bera et al., 2014): too little or too
much prior domain knowledge renders the information in a model either too complex or too
redundant if the model is not completely clear.
A similar variation may occur when individuals with very high levels of prior domain knowledge
interpret multiple conceptual models: as the level of combined ontological completeness
increases, so does the likelihood that the models add information already present in the reader's
mental model. Such non-essential, redundant information not only fails to add new knowledge
for internalization but also makes information processing more difficult, because more cognitive
effort must be devoted to identifying additional representational elements and matching them to
those already held in working memory. Ontological overlap across multiple models, by contrast,
may not affect individuals with very high levels of prior domain knowledge much, because they
can use their knowledge to overcome any such ambiguities (Bera et al., 2014).
When, on the other hand, individuals with very low levels of prior domain knowledge interpret
multiple conceptual models, higher levels of combined ontological completeness and of
ontological overlap might both have the same effect: adding extraneous load to a working
memory already operating at capacity (Miller, 1956). Readers with little to no domain knowledge
already have difficulty internalizing even a single, clear conceptual model because they “have an
insufficiently developed mental model to incorporate much meaning […] at all” (Bera et al.,
2014, p. 403). When assimilation of a relatively small set of new information fails, it is
unlikely that providing a more complete and thus larger set of new information, let alone
one that contains overlap, will be of any benefit.
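Were prior domain knowledge added to an empirical design such as the one sketched for environmental uncertainty above, the downward concave moderation could be approximated with a quadratic knowledge term. Again, the dataset and the variable names (here dk for prior domain knowledge) are hypothetical placeholders rather than measures mandated by our theory.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("interpretation_study.csv")  # placeholder, as in the earlier sketch

    # dk and its square moderate the completeness and overlap effects,
    # approximating the downward concave pattern reported by Bera et al. (2014).
    model_dk = smf.ols(
        "understanding ~ (completeness + overlap) * (dk + I(dk ** 2))",
        data=df,
    ).fit()
    print(model_dk.summary())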