This is the author’s version of a work that was
submitted/accepted for publication in the following source:
Sonnenberg, C., & vom Brocke, J. (2012). Evaluations in the
Science of the Artificial - Reconsidering the Build-Evaluate
Pattern in Design Science Research. In K. Peffers, M.
Rothenberger & B. Kuechler (Eds.), Design Science Research in
Information Systems. Advances in Theory and Practice.
Proceedings of the 7th DESRIST Conference (Vol. 7286, pp. 381-
397). Las Vegas, NV, USA: Springer Berlin / Heidelberg.
Notice: Changes introduced as a result of publishing processes
such as copy-editing and formatting may not be reflected in this
document. For a definitive version of this work, please refer to
the published source.
The final publication is available at Springer via
http://dx.doi.org/10.1007/978-3-642-29863-9_28
Evaluations in the Science of the Artificial
Reconsidering the Build-Evaluate Pattern
in Design Science Research
Christian Sonnenberg and Jan vom Brocke
University of Liechtenstein, Fuerst-Franz-Josef-Strasse 21,
9490 Vaduz, Principality of Liechtenstein
{christian.sonnenberg,jan.vom.brocke}@uni.li
Abstract. The central outcome of design science research (DSR) is prescriptive knowledge in the form of IT artifacts and recommendations. However, prescriptive knowledge is considered to have no truth value in itself. Given this assumption, the validity of DSR outcomes can only be assessed by means of descriptive knowledge to be obtained at the conclusion of a DSR process. This is reflected in the build-evaluate pattern of current DSR methodologies. Recognizing the emergent nature of IT artifacts, however, this build-evaluate pattern has unfavorable implications for the achievement of rigor within a DSR project. While it is vital in DSR to prove the usefulness of an artifact, a rigorous DSR process also requires justifying and validating the artifact design itself, even before the artifact has been put into use. This paper proposes three principles for evaluating DSR artifacts which address not only the evaluation of an artifact's usefulness but also the evaluation of the design decisions made to build the artifact. In particular, it is argued that by following these principles the prescriptive knowledge produced in DSR can be considered to have a truth-like value.
Keywords: Design science research, evaluation, design theory, epistemology
1 Introduction
Design science research (DSR) in information systems comprises two primary activities: build and evaluate [1]. Although the evaluation of DSR artifacts as well as of design processes is regarded as "crucial" [2, p. 82], much of the contemporary information systems DSR work focuses on the build activity and the creation of prescriptive knowledge in the form of IT artifacts [3]. This is consistent with the view that prescriptive knowledge is the basic outcome of DSR (cf. [4], [5]). However, the prescriptive knowledge created during the build activity is assumed to have no truth-like value [5], which raises the question of whether such knowledge is worth accumulating at all. Moreover, if prescriptive knowledge cannot be validated until it is applied in practice, a design science researcher runs the risk of devoting a significant amount of time to building insignificant solutions to practical problems.
This paper suggests, however, that prescriptive knowledge can have a truth-like value if DSR is conducted according to three principles. These principles relate to the problem of evaluating DSR artifacts and spur a reconsideration of the build-evaluate pattern incorporated in many current DSR methodologies. They are derived from work on modes of DSR inquiry [4], on design theories [6], and on evaluation patterns for DSR artifacts [7]. The paper aims to contribute to the body of knowledge on DSR methodologies by clarifying some epistemological implications of current DSR practices. Moreover, it links existing but so far isolated contributions on evaluation and theorizing in DSR with the purpose of providing guidance for design science researchers to rigorously produce valid DSR artifacts.
The paper proceeds as follows. After discussing the knowledge types involved in DSR as well as current DSR practices, the paper points to important epistemological implications of these practices. It then proposes and discusses three principles to circumvent these implications. The paper concludes with a summary and an outlook on future research.
2 Knowledge Types in DSR and Their Truth Values
IIVARI [5] made the point that design science research in IS, just like research in eco-
nomics, is basically conducted at three levels of research: (1) a conceptual level, (2) a
descriptive level, and (3) a prescriptive level. Research on each level creates different
types of knowledge having different truth values. Conceptual knowledge captures
“what things are out there” [5] in terms of concepts, constructs, conceptual frame-
works, classifications, taxonomies, or typologies. Conceptual knowledge forms the
foundations upon which both descriptive as well as prescriptive research build. De-
scriptive research is concerned with describing, understanding, and explaining how
things are out there [5] and produces descriptive knowledge in the form of observa-
tions, empirical regularities, theories, and hypotheses [5]. Prescriptive research yields
prescriptive knowledge in the form of IT artifacts (design product knowledge) and
recommendations for practice (design process knowledge) [5]. Prescriptive research is
interested in answering how one can effectively achieve specified ends [5].
Among the three knowledge types, DSR activities predominantly focus on the creation of prescriptive knowledge (cf. [2], [4], [5]). In particular, DSR essentially aims at building artifacts that have utility for practice [2]. Statements of truth in DSR therefore relate to whether an artifact is actually useful for solving a given class of practical problems. IIVARI [5] emphasizes that prescriptive knowledge has no truth or truth-like value. Ultimately, an artifact or recommendation as prescriptive knowledge has to prove its utility in practice. This evidence, however, materializes in descriptive knowledge about an artifact. According to IIVARI [5], only descriptive knowledge, i.e. observations, empirical regularities, and theories, has a truth value. As a consequence, evaluations in DSR are located at the descriptive research level and are considered not to differ much from evaluations conducted in other sciences such as the natural or human sciences (cf. [2], [5], [8]). However, the science of the artificial differs from other sciences in that it deals with analyzing phenomena (artifacts) that usually did not exist at the beginning of scientific inquiry [4]. Thus, it is questionable whether evaluations in DSR should be conducted in the same way as in the natural or human sciences. The following sections briefly outline how evaluation is considered in current DSR practices and subsequently discuss the implications of these practices with regard to achieving ‘true’ knowledge in DSR.
3 The Build-Evaluate Pattern in DSR
Although suggesting that prescriptive knowledge as the central result of DSR has no
truth value, IIVARI [5] also emphasizes that prescriptive knowledge “forms an area of
its own and cannot be reduced to the descriptive knowledge of theories and empirical
regularities” [5, p. 56]. According to his understanding, DSR is concerned with creat-
ing prescriptive knowledge that is assumed to have no truth-like value and with gath-
ering evidence through descriptive research that an artifact proves to be useful. Cur-
rent DSR methodologies reflect this sequencing of prescriptive and descriptive re-
search. In DSR terms, design science researchers conduct two high level activities:
build and evaluate [1], [3]. A prominent example of such a DSR process is provided
by PEFFERS ET AL. [9]. Their DSR methodology has been synthesized from prior DSR processes proposed in the literature and is depicted in Fig. 1.
[Figure: DSR process of Peffers et al. comprising the activities Identify Problem & Motivate (define problem, show importance), Define Objectives of a Solution (what would a better artefact accomplish?), Design & Development (artefact), Demonstration (find suitable context, use artefact to solve problem), Evaluation (observe how effective, efficient; iterate back to design), and Communication (scholarly and professional publications); the activities are grouped into a build phase and an evaluate phase.]
Fig. 1. Build-Evaluate in a representative DSR methodology (cf. [9])
What can be seen from Fig. 1, and what is also a typical assumption of other DSR processes, is that evaluation activities, and thus the articulation of truth statements about an artifact, occur ex post, i.e. after an artifact has been constructed [3]. According to the build-evaluate pattern, truth about an artifact is not known until the evaluate phase, which creates descriptive knowledge about the artifact. This also applies to DSR methodologies that envision concurrent or interwoven building and evaluation, such as Action Design Research (ADR) as proposed in [10]. Although ADR evaluation cycles appear to be much shorter than those of a DSR process according to Fig. 1, evaluations still occur ex post, i.e. after an artifact has been constructed or revised. Thus, validating design decisions and the design principles incorporated in an artifact already in the design and construction phase is not a central theme in DSR evaluations. Evaluations rather focus on proving the usefulness of an artifact and less on the artifact design itself, i.e. on an artifact's rationale and specifications, which are a constituent part of the prescriptive knowledge created in DSR.
In this regard it is interesting to note, however, that existing DSR methodologies emphasize the build activities, i.e. the actual artifact design, over evaluation activities [10]. This is consistent with what can be observed in actual DSR projects: much time is spent on designing and building an artifact, for example when building new software systems or (re-)designing business process models. Given the significant amount of time spent on building an artifact, and given that the magnitude of a design decision's impact on the applicability and usefulness of an artifact is significantly higher at design-time than at run-time, i.e. when the artifact is actually constructed and instantiated (cf. [11]), it is unsatisfying for a design science researcher to assume that the prescriptive knowledge holds no truth value.
It is the claim of this paper, however, that the evaluation of DSR artifacts should be approached differently from the study and evaluation of phenomena in the natural or human sciences. This difference emerges directly from the scope and interest of DSR, which is not to explain or predict how the world is (through observations, theories, etc.) but to shape the world by means of artifacts [5]. Moreover, as GREGOR [4] points out, the truth value of DSR knowledge cannot be evaluated in terms of traditional descriptive research, since in DSR the researcher (or practitioner) constructs the object of study himself/herself, i.e. the phenomenon under study emerges as the research proceeds. Evaluations must account for this emergent nature and for the importance of design decisions made at the build-time of an artifact. Maintaining the build-evaluate pattern embodied in current DSR methodologies has significant epistemological implications for the validity of the knowledge created while the artifact emerges. These implications are discussed in the next section.
4 Epistemological Implications of the Build-Evaluate Pattern
From a descriptive research point of view, an artifact is considered to be true if some theory, observation, or empirical regularity exists that tells how an IT artifact actually behaves, why an IT artifact exists in the world, how an IT artifact actually relates to other things in the world, or whether an artifact proved to be useful (cf. [2], [5]). However, statements of truth in DSR do not primarily relate to what is and how things are but to what could and what should be [5], and to how useful things are expected to be. This is consistent with the view of SIMON [8], who suggests that the sciences of the artificial "are concerned not with the necessary but with the contingent – not with how things are but with how they might be – in short, with design" [8, p. xii]. In this regard, GREGOR [4] argues that the study of IT artifacts by means of traditional descriptive research has to be reconsidered, both in the building and in the observation of IT artifacts, in order to accommodate the particularities of the science of the artificial [5]. Notably, the sequencing of build and evaluate activities hardly accounts for the emergent nature of IT artifacts [10].
If DSR evaluations were limited to descriptive knowledge, it would only be possible to infer ex post whether an artifact proved to be useful and why it did so. However, DSR requires IT artifacts to be built in a disciplined and "informed" way [2], [5], which necessitates making inferences about the truth contained in the prescriptive knowledge created throughout a DSR process. Therefore, it is important to infer an artifact's expected impact on the world ex ante, i.e. before the artifact has been applied to some real-world problem. A designer could refer to descriptive knowledge to justify and inform the design of a new artifact and thus ingrain descriptive truth into it. This would require the existence of kernel theories, a so-called design theory, or meta-artifacts [5], [6], [12]. Nevertheless, an IT artifact emerges throughout a DSR process. The construction of an artifact precedes the knowledge of why it works [6], and thus design decisions also relate to conceptual and mainly prescriptive knowledge of an emergent design theory. These decisions have to be justified and validated by means of evaluations long before an IT artifact has been put into use.
Ultimately, the assumption that the truth of an artifact cannot be inferred from prescriptive knowledge embodying the artifact's ideas, purpose, and structure affects the validity of the early phases of a DSR process. If prescriptive research resulted in knowledge that cannot be assumed to have a truth value, then no reasoning could be made about it. As a result, it can be questioned whether prescriptive research could be characterized as research at all, since no valid knowledge is created. Prescriptive knowledge as the major outcome of DSR would not be worth accumulating. Reusing parts of an artifact by other researchers or within other contexts might not be justifiable, since these parts are also assumed to have no truth value. In this regard, a design science researcher would hardly be able to build an artifact in a rigorous and informed way as required by DSR guidelines [2], since design decisions could not be validated until an artifact has been constructed and applied to some reality. Some might argue that the science of the artificial would then no longer be a science but rather a practice. In fact, PURAO [13] remarks that the scientific foundations underlying design research have remained largely undeveloped.
Is there a way to circumvent these epistemological implications? The key to a solution must be to acknowledge that the science of the artificial differs from the natural and human sciences and requires different modes of inquiry to reason about the truth of the knowledge created [4]. The most significant difference is that the phenomena under study cannot be assumed to exist at the outset of a DSR endeavor but emerge in the course of scientific inquiry. The next sections outline how an inquiry in DSR might be conducted in order to make truth-like statements about prescriptive knowledge while it emerges through design science research.
5 Progressing Towards a Truth – Principles for Evaluating DSR Artifacts
5.1 Three Principles for Evaluating DSR Artifacts
To demonstrate the validity of an artifact already in the design phase and to provide a rationale for design decisions, a design science researcher has to resort to a truth residing in conceptual and prescriptive knowledge, i.e. the ideas, metaphors, analogies, or other artifacts from which the artifact under study has been deduced. In order to make truth statements about an artifact, the corresponding prescriptive knowledge should be documented and accumulated in a way that allows for step-wise evaluations of the artifact as it emerges in the DSR process. In particular, such documentation should not only allow for inferences about the usefulness of an artifact but also about its expected suitability and importance as well as the validity and correctness of its design and construction. That means evaluations should also address the validation of incremental design decisions right from the start of a DSR process.
Prior work has already pointed out that evaluation in DSR may address either the artifact design (i.e. the artifact characteristics) or the actual artifact as it is used by relevant stakeholders. The former refers to ex ante evaluations occurring prior to the artifact's "construction", whereas the latter refers to ex post evaluations after an artifact has been constructed [3]. However, ex ante evaluations in DSR are usually interpreted
as a means to anticipate the effort required as well as the (economic) consequences
implied by the envisioned artifact characteristics. Ex ante evaluations thus often em-
ploy complexity or profitability measures at the outset of a DSR project (cf. [3]).
What has been neglected so far in ex ante evaluations is the emergent nature of IT
artifacts. As has been outlined above, current DSR methodologies treat the inherent
structure of an artifact, its principles of form and function, as a black box in both the
build and evaluation phase. In particular, the evaluation of design decisions made by a
researcher during the build phase is well out of scope of existing DSR methodologies.
It is the claim of this paper that the prescriptive knowledge that emerges through-
out a DSR process has a truth-like value. This implies that incremental additions
made to the prescriptive knowledge base throughout a DSR process, if evaluated and
documented in a rigorous way, can be communicated early by design science re-
searchers to interested peers or research communities. For example, a researcher
could present intermediate products of a DSR process to the research community in
order to build consensus on the relevance, novelty, and importance of a chosen prob-
lem domain, to discuss design objectives and features, to disseminate an initial blue-
print of an IT artifact spurring joint or distinct developments of artifacts for a particu-
lar problem domain, or to demonstrate that an artifact can be put into practice by
means of a prototype.
Building on prior work on DSR evaluations, this paper extends the notion of ex ante evaluations by emphasizing that, in order to achieve rigor in DSR, it is not sufficient to just let the IT artifact emerge in the build phase and then evaluate its use; rather, a design science researcher must make design decisions in a disciplined way in order to consistently and rigorously converge to a feasible and useful artifact. To this end, it is suggested that evaluations in DSR be conducted according to three principles. These principles have been synthesized and combined from prior literature ([4], [6], [7]) and are summarized in Table 1. It is held that by following these principles the unfavorable epistemological implications of the build-evaluate distinction of current DSR methodologies can be alleviated.
Table 1. DSR evaluation principles

(1) Distinction between interior and exterior modes of DSR inquiry. This principle directs the foci of evaluations on two aspects: the constituents of the artifact and the design decisions taken, as well as the usefulness of the artifact.

(2) Documentation of prescriptive knowledge as design theories. This principle requires the prescriptive knowledge to be documented in a structured way. Such documentation facilitates the communication and dissemination of the prescriptive knowledge produced within a DSR process. Moreover, it would already have a truth-like value that is worth accumulating in a DSR knowledge base.

(3) Continuous assessment of the DSR progress achieved through ex ante and ex post evaluations. This principle prompts the design researcher to conduct multiple evaluation episodes throughout a single iteration of a DSR process.
These principles are interrelated in that each principle supports the others. Their implications for DSR evaluations are explained in detail in the following sections.
5.2 Distinguishing Modes of DSR Inquiry
This principle directly addresses the implications of the build-evaluate pattern. DSR should not only describe and predict "what is" and "why it is" (descriptive knowledge produced in the evaluation phase); DSR predominantly builds IT artifacts, thereby producing prescriptive knowledge. The question is how a design science researcher might infer the truth residing in that prescriptive knowledge. GREGOR [4] proposed a framework which clarifies on a high level how knowledge creation, theory building, and thus truth assessment can be achieved in DSR (cf. Fig. 2).
[Figure: the interior mode of DSR theorizes prescriptively for artifact construction and produces prescriptive knowledge documented as a design theory (purpose/scope, form and function, justificatory knowledge, testable propositions), which is assessed through ex ante evaluation and serves to justify and inform the artifact design; the exterior mode theorizes about artifacts in use, investigates the artifact in use, and produces descriptive knowledge (observations, empirical regularities, theories) through ex post evaluation.]
Fig. 2. Modes of DSR inquiry (based on [4, p. 8])
In this work, GREGOR [4] distinguishes two separate but linked modes of research activities that particularly affect the way artifacts should be evaluated: (1) an interior mode of DSR, and (2) an exterior mode of DSR. The interior mode is concerned with producing prescriptive statements about "how artifacts can be designed, developed and brought into being" [4, p. 7, emphasis added]. The exterior mode aims "primarily at analyzing, describing and predicting what happens as artifacts exist and are used in their external environment" [4, p. 7, emphasis added]. Research in the interior mode makes use of inductive reasoning on prior descriptive or prescriptive knowledge when building an artifact. It is in this mode that prescriptive knowledge is produced. In the exterior mode, descriptive knowledge about the artifact is produced, treating the artifact more as a black box and only assessing significant design features with regard to achieving some utilitarian ends [4]. The relationships between the interior and exterior
research mode and the involved knowledge types are depicted in Fig. 2. The figure
also illustrates how the application of each of the three evaluation principles stated
above supports the creation of valid DSR knowledge.
In order to theorize in the interior mode, i.e. to add truth to prescriptive knowledge,
a design science researcher has to document the emerging IT artifact in a way that
allows for reasoning about its purpose, its rationale, its inner structure, the conditions
under which the artifact is expected to work, the steps required to actually use the
artifact in practice, or testable propositions that can be evaluated in the exterior mode.
Such prescriptive design knowledge can be documented by means of a design theory
[6]. The next section briefly outlines the anatomy of a design theory according to
GREGOR & JONES [6] and discusses how such an anatomy supports DSR evaluations.
The distinction between interior and exterior mode not only requires design
knowledge to be documented as design theories. It also widens the perspective of how
evaluations in DSR should be approached. Instead of only resorting to ex post evalua-
tions in the exterior mode (i.e. analyzing and creating descriptive knowledge), evalua-
tions should also be conducted ex ante during the build phase as part of the interior
mode. Ex ante evaluations would then refer to design theories and the progress
achieved in designing an IT artifact would be assessed by means of evaluation criteria
pertinent to different aspects of a design theory. This will also be discussed further
below.
5.3 Documentation of Cumulative Prescriptive Knowledge as Design Theories
Reasoning about IT artifacts in the interior mode, i.e. in the build phase, requires the design researcher to document prescriptive knowledge in a particular way. GREGOR & JONES [6] refer to such documentation as an (information systems) design theory (ISDT), which shows "the principles inherent in the design of an IS artifact that accomplishes some end, based on knowledge of both IT and human behavior. The ISDT allows the prescription of guidelines for further artifacts of the same type. Design theories can be about artifacts that are either products (for example, a database) or methods (for example, a prototyping methodology or an IS management strategy)" [6, p. 322].
According to [6] a design theory consists of eight components:
1. Purpose and scope (causa finalis)
2. Constructs (causa materialis)
3. Principle of form and function (causa formalis)
4. Artifact mutability
5. Testable propositions
6. Justificatory knowledge
7. Principles of implementation (causa efficiens)
8. Expository instantiation.
Some components could be specified and reasoned about right at the outset of a
DSR project, while other components are specified and reasoned about as the IT arti-
fact emerges throughout the build phase. What can be seen, however, is that docu-
menting artifacts according to the eight components readily serves to evaluate an
artifact in terms of what should be and how it would be able to shape the world.
Reference to descriptive knowledge, and thus to the exterior mode of DSR, is made through components (5), (6), and (8). Testable propositions can be investigated in ex post evaluations to create descriptive knowledge about the utility of the artifact. Justificatory knowledge serves to explain or anticipate why an artifact might work in a given context and ingrains the truth of prior knowledge. Justificatory knowledge can be of a descriptive type (theories, observations) or of a prescriptive type (other design theories that proved to be useful or principles of form and function that are reused). Expository instantiations may help to reason about an artifact's feasibility and applicability at build-time (artificial evaluation in the interior mode) or about its usefulness when applied to some reality (naturalistic evaluation in the exterior mode). The descriptive knowledge gained by evaluating instantiations in the interior mode can serve as additional justificatory knowledge for further developing the artifact in a subsequent build cycle (e.g. benchmark results).
Documenting IT artifacts as design theories is a prerequisite for enabling the interior mode of DSR and thus for creating prescriptive knowledge that ingrains a truth value. Moreover, it immediately affects the way evaluations can be conducted in DSR. The distinction of interior and exterior modes of DSR, together with a dedicated means for documenting the IT artifact, enables reasoning about the validity of the artifact ex ante, i.e. before it has been put into use. The predominant build-evaluate pattern of DSR methodologies, along with its unfavorable epistemological implications, can be reconsidered in favor of a more fine-grained consideration of research rigor in the design process. Evaluations should not only be conducted at the conclusion of a DSR project but on a continuing basis, to assess the progress achieved as the artifact emerges [3]. In this regard, principles (1) and (2) discussed above support principle (3), leading to an expansion of the common build-evaluate pattern into a design-evaluate-construct-evaluate pattern (as has also been put forward in [3]).
5.4 Continuous Assessment of the Progress Achieved in a DSR Process
By following principles (1) and (2), prescriptive knowledge in the form of design theories can be regarded as having a truth-like value. Thus, it is possible and also reasonable to evaluate the design decisions ingrained in the artifact, and not just its usefulness, by means of continuous assessments of the progress achieved in the DSR process. Two aspects are central to enabling such continuous assessment. First, evaluation criteria have to be defined in order to systematically demonstrate the progress achieved in DSR and to guide evaluation activities [14]. Second, it should be clarified how ex ante and ex post evaluations can be positioned in a DSR methodology, leading to the definition of evaluation patterns in DSR (cf. [7]).
Evaluation Criteria
Table 2 below lists the DSR evaluation criteria proposed by [1]. These criteria can be applied in ex ante and/or ex post evaluations. While this set of criteria is considered comprehensive [14], the proposed evaluation criteria are not independent of the artifact type under consideration. AIER & FISCHER [14] suggest criteria that are independent of the artifact type and that particularly apply to evaluating design theories. These criteria are: utility, internal consistency, external consistency, broad purpose and scope, simplicity, and fruitfulness of further research. Another set of evaluation criteria is proposed by ROSEMANN & VESSEY [15]. Their criteria set particularly aims at ensuring the relevance of a DSR artifact, i.e. whether an artifact is expected to be applicable in practice. The suggested criteria are: importance, suitability, and accessibility of an artifact [15]. Applicability checks in that sense are considered particularly suitable for ex ante evaluations.
Table 2. Evaluation criteria for DSR artifacts (cf. [1])
[Table: matrix indicating which of the evaluation criteria proposed in [1] apply to the artifact types construct, model, method, and instantiation.]
Depending on the type of object to be evaluated and on the point in time at which an evaluation is conducted, some criteria might reflect the progress achieved in designing an artifact better than others. To structure evaluation activities and the corresponding evaluation criteria, the concept of evaluation patterns for DSR artifacts has been proposed in [7]. The core ideas behind these patterns as well as their specifications are presented in the next section.
Evaluation Patterns
Patterns are useful to describe a good solution to a recurring problem (cf. [16], cited
in [17]). Patterns can be useful for both researchers and practitioners in that they in-
corporate “high-level solutions to classes of problems that can be converted into spe-
cific best practices” [17, p. 9]. For researchers patterns may serve to “synthesize and
capture knowledge in a given domain as well as highlight areas for future research”
[17, p. 9]. SONNENBERG & VOM BROCKE [7] introduced the concept of evaluation
patterns for DSR artifacts. Such patterns should provide design science researchers
with an orientation when configuring particular evaluation strategies. Essentially,
these patterns can be positioned within a global design-evaluate-construct-evaluate
pattern.
Fig. 3 below sketches a cyclic high level DSR process incorporating a design-
evaluate-construct-evaluate pattern. The DSR process includes the DSR activities
problem identification, design, construction, and use followed by corresponding eval-
uation activities. As can be seen, the process suggests that evaluations in DSR should
be conducted throughout the whole process. In such a process, ex ante evaluations
validate the design of an artifact and ex post evaluations validate artifact instances
and artifacts in use. In particular, ex ante evaluations are conducted before the con-
struction, ex post evaluations are conducted after the construction of any artifact [3].
[Figure: cyclic DSR process in which each activity is followed by an evaluation activity: Identify Problem, Eval 1, Design, Eval 2, Construct, Eval 3, Use, Eval 4; Eval 1 and Eval 2 are ex ante evaluations, Eval 3 and Eval 4 are ex post evaluations.]
Fig. 3. Evaluation activities within a DSR process
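For illustration only (this sketch is not part of the original paper), the cyclic design-evaluate-construct-evaluate pattern of Fig. 3 could be expressed as a simple control loop in which each DSR activity is followed by its evaluation gate; the activity and evaluation functions are placeholders that a concrete project would have to supply:

def run_dsr_cycle(identify_problem, design, construct, use,
                  eval1, eval2, eval3, eval4, max_iterations=5):
    # Eval 1 (ex ante): is the problem justified and are the design objectives sound?
    problem = identify_problem()
    if not eval1(problem):
        return None
    for _ in range(max_iterations):
        specification = design(problem)
        # Eval 2 (ex ante): validated design specification?
        if not eval2(specification):
            continue  # iterate on the design
        instance = construct(specification)
        # Eval 3 (ex post, artificial setting): proof of applicability?
        if not eval3(instance):
            continue  # feed the results back into the next design iteration
        result = use(instance)
        # Eval 4 (ex post, naturalistic setting): proof of usefulness?
        if eval4(result):
            return instance  # artifact validated in use
    return None  # no validated artifact within the iteration budget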
The evaluation activities in Fig. 3 have been given generic names. Depending on the context and the purpose of an evaluation within the DSR process, different evaluation methods and evaluation criteria can be applied in an evaluation activity [18]. Such a combination resembles best practices in the form of evaluation patterns. Design science researchers could benefit from such evaluation patterns as they would be able to disseminate their (validated) research findings already in early stages of their research. Ultimately, a design science researcher has to prove the utility of an artifact. However, even design objectives or principles of form and function, if related to a generic problem and evaluated rigorously, might already inform other researchers and thus present a useful contribution to a DSR knowledge base.
In order to formulate such evaluation patterns, it is necessary to broadly understand the purpose and scope of the individual evaluation activities of the DSR process sketched in Fig. 3. The nature of these activities, possible evaluation criteria and methods, and their significance for supporting the accumulation of (incremental) prescriptive knowledge by means of design theories are summarized in Table 3 and discussed below.
Table 3. DSR evaluation activities and evaluation criteria

Eval 1
- Input: problem statement / observation of a problem, research need, design objectives, design theory, existing solution to a practical problem
- Output (mandatory): justified problem statement, justified research gap, justified design objectives
- Eval. criteria (exemplary): applicability, suitability, importance, novelty, (economic) feasibility
- Eval. methods (exemplary): literature review, review of practitioner initiatives, expert interview, focus groups, survey

Eval 2
- Input: design specification, design objectives, stakeholders of the design specification, design tool / design methodology
- Output (mandatory): validated design specification, justified design tool / methodology
- Eval. criteria (exemplary): feasibility, accessibility, understandability, clarity, simplicity, elegance, completeness, level of detail, internal consistency, applicability, operationality
- Eval. methods (exemplary): mathematical proof, logical reasoning, demonstration, simulation, benchmarking, survey, expert interview, focus group

Eval 3
- Input: instance of an artifact (prototype)
- Output (mandatory): validated artifact instance in an artificial setting (proof of applicability)
- Eval. criteria (exemplary): feasibility, ease of use, effectiveness, efficiency, fidelity with real world phenomenon, operationality, robustness, suitability
- Eval. methods (exemplary): demonstration with prototype, experiment with prototype, experiment with system, benchmarking, survey, expert interview, focus group

Eval 4
- Input: instance of an artifact
- Output (mandatory): validated artifact instance in a naturalistic setting (proof of usefulness)
- Eval. criteria (exemplary): applicability, effectiveness, efficiency, fidelity with real world phenomenon, generality, impact on artifact environment and user, internal consistency, external consistency
- Eval. methods (exemplary): case study, field experiment, survey, expert interview, focus group
Eval1 Activity
The evaluation of the problem identification activity serves the purpose of ensuring
that a meaningful DSR problem is selected and formulated. It should be demonstrated
whether the envisioned design science research project is important for practice, is
novel and thus adds to the existing knowledge base. The Eval1 activity might have
different inputs depending on what actually triggers the interest in the DSR project
(cf. [9]). A DSR process might start with a problem observed in practice, with a re-
search need observed in the literature, with an existing artifact (design theory) which
needs refinement in a given context, or with an existing practical solution that has not
been rigorously documented or developed. Mandatory outputs of this activity are a
justified problem statement, a justified research gap, and justified design objectives
which serve as input for subsequent activities. Thus, the evaluation criteria and methods all serve to justify the engagement in a DSR project. Therefore, an evaluation pattern pertinent to the Eval1 activity could be termed "Justification", describing how
Criteria to be used here may predominantly refer to applicability checks regarding the
suitability of a design idea and the perceived importance of the problem. With regard
to developing an artifact, i.e. to specify a design theory, the Eval1 activity is con-
cerned with validating the purpose and scope as well as the constructs to be used. The
appropriateness of constructs might be justified by referring to constructs that have
been used for solving similar problems (justificatory prescriptive knowledge). An
artifact's idea could be further validated by means of descriptive justificatory
knowledge in the form of results from surveys or interviews. Moreover, a design sci-
ence researcher may already derive testable propositions at this point.
Eval2 Activity
The evaluation of the design activity's result serves the purpose of showing that an artifact design progresses towards a solution of the stated problem. Since the artifact has not yet been constructed (instantiated) and thus has not been applied to some reality, this evaluation is artificial [19]. Possible inputs to this activity are a design specification ('blueprint', initial principles of form and function), the design objectives, information on the stakeholders of the design specification, as well as the tools and methodologies used for creating the design specification. The design specification is evaluated against its correctness and completeness to assess whether the design is flawed. In particular, it should be evaluated whether the constructs used in the design specification as well as their relationships correspond to the stated design objectives. Moreover, it should be assessed whether the design specification is understandable and meaningful to all of its stakeholders (e.g. managers, IT staff). Thus, the use of particular design tools and methodologies has to be justified. Possible evaluation patterns pertinent to the validation of the design specification could be termed "demonstration" (showing analytically that an artifact behaves as intended for a single test case), "simulation", or "formal proof". With regard to the justification of the design tool or methodology, a pattern could be termed "tool evaluation". With regard to a design theory, the Eval2
activity validates the principles of form and function which have been specified dur-
ing the design activity. Moreover, a design science researcher might want to formu-
late principles of implementation. Demonstrations and simulations may result in de-
scriptive justificatory knowledge in the form of observations and empirical regulari-
ties. A formal proof may yield prescriptive justificatory knowledge in the sense that a
formal proof confirms the consistency of assumptions about “what should be”.
Eval3 Activity
This evaluation activity serves to initially demonstrate whether and how well the artifact performs while interacting with organizational elements. In this activity, some inferences on the utility of an artifact can already be made. Since this activity links ex ante and ex post evaluations, it is central for reflecting on an artifact's design and for stimulating subsequent iterations of the design activity if necessary (see feedback loop). The "realities" considered here may comprise subsets of "real tasks", "real systems", and "real users" (these "realities" have been suggested in [20]). Inputs to this activity are instantiations of artifacts ("constructed" artifacts), which should be evaluated regarding their applicability. At this point, the application context of the artifact instance tends to be artificial (in the sense of [19]) and might only prove that an instance is applicable to a task, within a system, or by a real user. The interplay of all three realities together with the artifact instance would be the focus of the Eval4 activity. Prototypes are frequently used at this stage. Besides demonstrating the applicability of an artifact instance, this evaluation activity should also prove that the artifact instance is consistent with its specification, i.e. that it ingrains the principles of form and function validated in the preceding evaluation activity Eval2. Possible evaluation patterns pertinent to the Eval3 activity could be termed "prototyping" and "experimentation". With regard to developing a design theory, this activity is concerned with validating the component "expository instantiation" as well as artifact mutability.
Moreover, evidence is gathered with regard to the ability of the artifact to behave
according to its purpose and scope.
Eval4 Activity
This evaluation activity serves to ultimately show that an artifact is both applicable
and useful in practice. Evaluations reflect the organizational context by means of all
“three realities” (real tasks, real systems, and real users). Inputs to this activity are
artifact instances that are fully embedded within the organizational context. Possible patterns pertinent to the Eval4 activity could be termed "case study", "field experiment", "survey", or "applicability check". With regard to design theories, the main focus of the Eval4 activity would be to finally validate the artifact based on the testable propositions specified in the design theory.
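For illustration only (this sketch is not part of the original paper, and all identifiers are assumptions made for this example), the exemplary criteria and methods of Table 3 could be condensed into a simple configuration against which a planned evaluation episode can be checked:

EVALUATION_ACTIVITIES = {
    "Eval1": {"time": "ex ante",
              "criteria": ["applicability", "suitability", "importance", "novelty", "feasibility"],
              "methods": ["literature review", "expert interview", "focus group", "survey"]},
    "Eval2": {"time": "ex ante",
              "criteria": ["feasibility", "completeness", "internal consistency", "understandability"],
              "methods": ["mathematical proof", "logical reasoning", "demonstration", "simulation"]},
    "Eval3": {"time": "ex post",
              "criteria": ["feasibility", "effectiveness", "efficiency", "robustness"],
              "methods": ["demonstration with prototype", "experiment", "benchmarking"]},
    "Eval4": {"time": "ex post",
              "criteria": ["applicability", "effectiveness", "generality", "external consistency"],
              "methods": ["case study", "field experiment", "survey"]},
}

def plan_episode(activity, chosen_criteria, chosen_methods):
    """Check that a planned evaluation episode only uses criteria and methods listed for the activity."""
    spec = EVALUATION_ACTIVITIES[activity]
    return (set(chosen_criteria) <= set(spec["criteria"])
            and set(chosen_methods) <= set(spec["methods"]))

# Example: an ex post Eval3 episode combining a prototype demonstration with benchmarking.
assert plan_episode("Eval3", ["effectiveness", "efficiency"],
                    ["demonstration with prototype", "benchmarking"])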
6 Conclusions
This paper suggests reconsidering the build-evaluate pattern of current DSR methodologies in favor of a more fine-grained evaluation pattern that accommodates the emergent nature of IT artifacts. To this end, three principles for DSR evaluations have been proposed that particularly support a design science researcher in making inferences about the truth contained in the prescriptive knowledge produced by individual DSR activities.
These principles have not been invented from scratch but have been synthesized from prior literature in the field and combined to fit the purpose of this paper. However, some aspects need to be explored in more detail. In particular, the definition of a comprehensive set of evaluation patterns related to the outlined evaluation activities is expected to be particularly beneficial for better guiding design science researchers and for fostering the rigor and discipline of artifact development throughout the whole DSR process. Future DSR methodologies could build on the principles put forward in this paper and verify whether they prove to be effective.
References
1. March, S.T., Smith, G.: Design and Natural Science Research on Information Technology.
Decision Support Systems, 15 (4), 251--266 (1995)
2. Hevner, A.R., March, S.T., Park, J., Ram, S.: Design Science in Information Systems Research. MIS Quarterly, 28 (1), 75--105 (2004)
3. Pries-Heje, J., Baskerville, R., Venable, J.: Strategies for Design Research Evaluation. In:
16th European Conference on Information Systems (ECIS 2008), Galway, Ireland (2008)
4. Gregor, S.: Building Theory in the Sciences of the Artificial. In Proceedings of the Interna-
tional Conference on Design Science Research in Information Systems and Technologies
(DESRIST) 2009, Malvern, PA, (2009)
5. Iivari, J.: A Paradigmatic Analysis of Information Systems As a Design Science. Scandi-
navian J. Inf. Systems 19(2), (2007)
6. Gregor, S., Jones, D.: The anatomy of a design theory. Journal of the Association for Information Systems, 8 (5), Article 2, 312--335 (2007)
7. Sonnenberg, C., vom Brocke, J.: Evaluation Patterns for Design Science Research Arte-
facts. In: Proceedings of the European Design Science Symposium 2011. CCIS, vol. 286.
Springer, Dublin, Ireland (2012)
8. Simon, H.: The sciences of the artificial, 3rd ed. MIT Press, (1996)
9. Peffers, K., Tuunanen, T., Rothenberger, M.A., Chatterjee, S.: A Design Science Research
Methodology for Information Systems Research. Journal of Management Information Sys-
tems, 24 (3), 45--77 (2007)
10. Sein, M.K., Henfridsson, O., Purao, S., Rossi, M., Lindgren, R.: Action Design Research.
MIS Quarterly, 35 (1), 37--56 (2011)
11. vom Brocke, J., Recker, J., Mendling, J.: Value-oriented process modeling: integrating fi-
nancial perspectives into business process re-design. Business Process Management Jour-
nal, 16 (2), (2010)
12. Walls, J., Widmeyer, G.R., El Sawy, O.A.: Building an Information System Design Theory for Vigilant EIS. Information Systems Research, 3 (1), 36--59 (1992)
13. Purao, S.: Design research in technology and information systems: truth or dare. Unpublished paper, School of Information Sciences and Technology, The Pennsylvania State University, University Park, PA (2002)
14. Aier, S., Fischer, C.: Criteria for Progress for Information Systems Design Theories. In-
formation Systems and E-Business Management, 9 (1), 133--172 (2011)
15. Rosemann, M., Vessey, I.: Toward Improving the Relevance of Information Systems Research to Practice: The Role of Applicability Checks. MIS Quarterly, 32 (1), 1--22 (2008)
16. Alexander, C., Ishikawa, S., Silverstein, M.: A Pattern Language. New York, Oxford Uni-
versity Press, (1977).
17. Petter, S., Khazanchi, D., Murphy, J. D.: A Design Science Based Evaluation Framework
for Patterns. The DATA BASE for Advances in Information Systems, 41 (3), 9--26, (2010)
18. Vaishnavi, V.K., Kuechler, W.: Improving and Innovating Information & Communication Technology: Design Science Research Methods and Patterns. Taylor & Francis (2008)
19. Venable, J.: A Framework for Design Science Research Activities. In Proceedings of the
2006 Information Resource Management Association Conference. Washington, DC, USA
(2006)
20. Sun, Y., Kantor, P.B.: Cross-Evaluation: A new model for information system evaluation.
Journal of the American Society for Information Science and Technology, 57 (5), 614--
628 (2006)
... The design of these proposals for improvement leads to a new iteration of the DSR design cycle. In further studies, this prescriptive knowledge can be evaluated ex-ante as suggested in the design-evaluate-construct-evaluate pattern [24] or directly be implemented into the artifact. ...
... Consequently, the evaluation time is ex-post and corresponds with evaluation activities 3 and 4 of the above evaluation process model (refer to Figure 4.1). No distinction between the two activities is possible as the GaaPAM already exists but is not yet in practical use [24]. The artifact type is both a model (framework) and a method. ...
... As the proposals constitute prescriptive knowledge about how the GaaPAM becomes a better analysis tool, they can be interpreted as design theories. Design theories elaborate how an artifact should be (better) and how it can (better) solve a practical problem [24]. Respectively, a design theory is defined as "a prescriptive theory based on theoretical underpinnings which says how a design process can be carried out in a way which is both effective and feasible" [85, p. 37]. ...
Full-text available
Thesis
Government as a Platform (GaaP) is a promising approach to the digitalization of the public sector. GaaP perceives government as an open platform on which autonomous actors inside and outside the public sector co-create public services. By establishing platform concepts, digital infrastructures of the public sector become less complex and can efficiently adapt to new user-friendly services. GaaP has been discussed in literature for several years and has been applied in many countries. However, transforming a digital infrastructure towards GaaP is challenging because no guidelines exist for the practical application of GaaP. Recent research proposed the Government as a Platform Analysis Method (GaaPAM) to address this challenge. By analyzing the platform character of a digital infrastructure, further transformation actions towards GaaP can be inferred. However, the GaaPAM is only based on literature and has not been evaluated in practice yet. Therefore, this master thesis evaluates the GaaPAM practically and designs proposals for its improvement. To achieve this research objective, we first develop an evaluation concept with suitable evaluation criteria and methods. Then, we perform three workshops to apply the GaaPAM to different digital infrastructures within the German public sector and thereby identify strengths and weaknesses of the GaaPAM. The evaluation indicates that the GaaPAM is a helpful tool for applying GaaP in practice. The analysis of the platform character was successful, and many further transformation actions could be identified in all workshops. To solve the identified weaknesses, the proposals for improvement include extending the duration of the workshops, including additional aspects in the analysis, as well as better clarifying the analysis foundations, purpose, and view on the infrastructure. This thesis contributes to theory by providing proposals for further GaaPAM design and identifying gaps in digital platforms and GaaP concepts relevant for transforming public digital infrastructures. This thesis contributes to practice by working towards a GaaPAM that is better aligned with practitioner needs and thus better supports the digital transformation of the public sector towards GaaP.
... The results in DSR may include new knowledge on artifacts and meta-artifacts for solving identified problems, as well as the artifacts embodying the new knowledge built within the DSR process. The artifacts built within the DSR process enable evaluation of the DSR results in a real-world context via experimentation in multiple phases, incrementally [20]. In this research, DSR artifacts comprise a concept and a system architecture, which is realized in an emulator. ...
... Evaluation focuses on comparing objectives of a solution to the results obtained with the developed artifact(s) in use. Different evaluation criteria can be used for performing ex-post evaluations [20]. At the end of the evaluation activity, the researcher may iterate back to the design and development activity for improving performance of the solution. ...
... In particular, data transfers from moving vehicles towards a control center (over multiple radio accesses) were emulated, and ML was utilized in decision making, regarding the most suitable path for communication. Performance of two artifact iterations was evaluated in terms of ex-post criteria [20] (feasibility, fidelity with real world phenomenon, efficiency, and operationality). ...
Full-text available
Article
Autonomous moving vehicles facilitate mining of ore in underground mines. The vehicles are usually equipped with many sensor-based devices (e.g., Lidar, video camera, proximity sensor, etc.), which enable environmental monitoring, and remote control of the vehicles at the control center. Transfer of sensor-based data from the vehicles towards the control center is challenging due to limited connectivity enabled by the multi-access technologies of the communication infrastructure (e.g., 5G, Wi-Fi) within the underground mine, and the mobility of the vehicles. This paper presents design, development, and evaluation of a concept and architecture enabling continuous machine learning (ML) for optimizing route selection of real-time streaming data in a real and emulated underground mining environment. Continuous ML refers to training and inference based on the most recently available data. Experiments in the emulator indicated that utilization of a ML-based model (based on the RandomForestRegressor) in decision making achieved ~5–13% lower one-way delay in streaming data transfers, when compared to a simpler heuristic model.
... Our primary goal was to evaluate our artifact's practical applicability and completeness with the target stakeholders (Sonnenberg and vom Brocke 2012). Following Hevner et al. (2004) and Sonnenberg and vom Brocke (2012), we evaluated the process model's comprehensibility, relevancy, usability, completeness, functionality, fit with the company, and added value. The online questionnaire thus consisted of questions to cover these assessment criteria. ...
... After developing the scoring framework, the first author performed twelve additional expert interviews from the initial pool of experts to evaluate it (Exp2,3,5,9,11,13,14,16,17,21,23,24). Again, we applied the methods and procedures from the first interviews and evaluated the scoring framework's practical applicability based on the same assessment criteria as the process model (Hevner et al. 2004;Sonnenberg and vom Brocke 2012). Thus, the interview guide consisted of questions to cover these assessment criteria. ...
Full-text available
Conference Paper
Companies must optimize their information technology (IT) project portfolio to achieve goals. However, IT projects often exceed resources and do not create their promised value, for example, because of missing structured processes and evaluation methods. Continuous IT portfolio management is thus of importance and a critical business activity to reach value-driven goals. Guided by Design Science Research with literature reviews and expert interviews, we develop, evaluate, and adjust an IT project portfolio management process model, a holistic IT project evaluation framework, and implement a decision support system prototype. Our results and findings synthesize and extend previous research and expert opinions and guide decision-makers to make more informed and objective IT project portfolio management decisions aligned with optimal value creation. Furthermore, we deduce new research opportunities for IT project portfolio management process models, decision tools, and evaluation frameworks.
... The evaluation was done via a laboratory prototype system instantiating, embodying, and demonstrating the overall system architecture we present in a laboratory setting. Accordingly, our evaluation can be considered as ex-post evaluation of the overall system architecture presented, with focus on technical feasibility and operationality as evaluation criteria [5]. The results we present add to existing knowledge base on service manufacturing and MaaS, by presenting results on design knowledge (technical system architecture), and technical feasibility (empirical evaluation of the system architecture via a laboratory pilot system) of service manufacturing and MaaS. ...
Full-text available
Article
This work considers flexible manufacturing operations based on reconfigurable robotic skills and their usage in fully automated service manufacturing. In agile and ultra-flexible manufacturing operations, where lot sizes go down to one, the setup and execution of new tasks must be instant. We extend service manufacturing towards applications of multi-purpose autonomous mobile robots. We take digital data and service-oriented approach to configure and utilize re-usable robot operations formulated as robot skills. We integrate service requests, and system and robot skill models for an easily executable manufacturing service system. We show the feasibility of our approach by experimental tests with merged indoor logistics, assembly, and finishing tasks.
... We conducted the interviews in the native language of the interview partners, while keeping the object of the interviews, i.e., the formulated DPs, in the English language in which they were communicated. Our guideline for the semi-structured interviews follows the order of the presented DPs (see Table 2) as well as our evaluation criteria, i.e., ease of use, elegance, simplicity, understandability, and completeness of our DPs (Sonnenberg & vom Brocke, 2012). ...
Conference Paper
The need for corporate decarbonization to mitigate climate change is reflected in a growing number of political measures to transparently disclose the environmental impact of corporate activities. Due to increasing reporting obligations, companies must constantly evaluate their own as well as their suppliers' products and processes with respect to emissions data. To date, guidelines on how to design a data architecture focusing on the collection, storage, transformation, distribution, and disclosure of emissions data throughout an entire company are still lacking. Working with the design science research paradigm, we develop seven design principles for an enterprise-wide emissions data architecture (EEDA). We develop and iterate these principles by performing a structured literature review and semi-structured interviews. Taking this emission-centric perspective on data architecture, we foster active engagement with a structured, enterprise-wide approach to managing emissions data and coping with the increased demand for emissions reporting.
... The need for design guidelines for information system solutions to support changing social and health services contexts calls for a simpler view of the goal: design requirements, design principles, and specific design solution features that solve the problem, followed by an evaluation ensuring that the solution solves the problem for the key stakeholders and holds the potential to contribute to organizational effectiveness (Peffers et al., 2007; Sonnenberg & Vom Brocke, 2012; Vom Brocke et al., 2020; Zschech et al., 2020). Such guidance must be grounded in the well-documented concept that social support is salutary for health and well-being (Ell, 1984). ...
Article
The digital transformation of the medical sector requires solutions that are convenient and efficient for all stakeholders while protecting patients’ sensitive data. One example that has already attracted design-oriented research are medical prescriptions. However, current implementations of electronic prescription management systems typically create centralized data silos, leaving user data vulnerable to cybersecurity incidents and impeding interoperability. Research has also proposed decentralized solutions based on blockchain technology, but privacy-related challenges have often been ignored. We conduct design science research to develop and implement a system for the exchange of electronic prescriptions that builds on two blockchains and a digital wallet app. Our solution combines the bilateral, verifiable, and privacy-focused exchange of information between doctors, patients, and pharmacies through verifiable credentials with a token-based, anonymized double-spending check. Our qualitative and quantitative evaluations as well as a security analysis suggest that this architecture can improve existing approaches to electronic prescription management by offering patients control over their data by design, a high level of security, sufficient performance and scalability, and interoperability with emerging digital identity management solutions for users, businesses, and institutions. We also derive principles on how to design decentralized, privacy-oriented information systems that require both the exchange of sensitive information and double-usage protection.
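To make the "token-based, anonymized double-spending check" mentioned above concrete, here is an illustrative sketch only, not the authors' blockchain implementation: each prescription is bound to a random one-time token, a registry stores only the token's hash (so redemptions remain anonymized), and any second redemption of the same token is rejected. The class and function names are invented for this example.

```python
# Hypothetical sketch of a token-based double-spending check; the real system
# described above uses blockchains and verifiable credentials instead.
import hashlib
import secrets

class RedemptionRegistry:
    """Stands in for the shared (e.g., on-chain) registry assumed here."""
    def __init__(self):
        self._redeemed = set()

    def redeem(self, token: str) -> bool:
        # Only the hash of the token is stored, never the token itself.
        digest = hashlib.sha256(token.encode()).hexdigest()
        if digest in self._redeemed:
            return False          # double-spend attempt: token already used
        self._redeemed.add(digest)
        return True

# A doctor issues a prescription bound to a fresh one-time token ...
token = secrets.token_hex(32)
registry = RedemptionRegistry()
assert registry.redeem(token) is True    # first dispensing at a pharmacy succeeds
assert registry.redeem(token) is False   # any further redemption is rejected
```

The design point carried over from the abstract is that double-usage protection does not require revealing patient identities: checking a hash of a one-time token suffices to detect reuse.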
Article
Design Science Research (DSR) combines quantitative and qualitative approaches for educational research. One of the critical steps of DSR is the evaluation phase. In this phase, the artifact's utility, fitness, and usefulness are noted and reviewed. Since DSR applied to health science is limited, this paper aims to present the evaluation phase of a study that developed an artifact for training student radiographers in chest pattern recognition. The artifact, which is described in detail elsewhere by Mdletshe et al. [1], was developed as a tailor-made solution in medical radiation sciences education (MRSE) using DSR. During the evaluation of the artifact, the System Usability Scale (SUS) was used for the quantitative evaluation, while the qualitative evaluation was performed using a hierarchy of qualitative criteria based on a review of multiple sources. This study demonstrated the key concepts of the DSR evaluation phase applied to health science. The presented case helps to demonstrate the implementation of the evaluation phase in a health sciences (MRSE) research project.
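For readers unfamiliar with the SUS instrument mentioned above, the standard scoring rule is simple to state in code. The sketch below applies the published SUS formula (odd-numbered items contribute the answer minus 1, even-numbered items contribute 5 minus the answer, and the sum is scaled by 2.5 to a 0–100 range); the example responses are invented.

```python
# Standard System Usability Scale (SUS) scoring; example responses are made up.
def sus_score(responses):
    """responses: list of 10 answers on a 1-5 Likert scale, in SUS item order."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects 10 responses in the range 1-5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)   # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5          # scale the 0-40 raw sum to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```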
Article
The information revolution is affecting every aspect of society, including education, resulting in a concept known as digital learning or e-learning. One problem of digital learning environments and tools is the security and privacy of the learners. Ensuring the security and privacy of digital learners remains a major challenge, leaving naive students exposed. The objective of the study was to explore the digital security and privacy culture among students and to propose a novel framework, based on the competence learning theory, to enhance it. Design Science Research (DSR) was adopted as the research paradigm, with quantitative and qualitative data gathered from the implementation of the framework. 300 students from a college of education were purposively sampled for the study. A WhatsApp message was sent to the students requesting their confidential information under pretexting and impersonation to assess how security- and privacy-conscious they were. A framework was then developed and implemented based on the theory of competence learning. The results showed high levels of vulnerability among students, at 67.67%, before the implementation of the framework in Round 1. This vulnerability was reduced to 1.67% after the implementation of the framework in Round 2 and further reduced to 0% in Round 3. The paper concluded that the students' lack of security and privacy awareness made them unconsciously incompetent (UI). Exposure to the vulnerability made them consciously incompetent (CI), and further rounds of exposure led them to become consciously competent (CC); continued exposure will eventually result in unconscious competence (UC) as students become resistant or immune to security and privacy attacks. The implication of this study is that, as more students converge in the digital space, building their immunity and resistance against cyber-attacks is critical to avoid exposure. Learning to be security-conscious should be consistent and routine so that students undergo lasting behavior changes and become resistant and immune to online attacks.
Article
Surgical process models support improving healthcare provision by facilitating communication and reasoning about processes in the medical domain. Modelling surgical processes is challenging as it requires integrating information that might be fragmented, scattered, and not process-oriented. These challenges can be faced by involving healthcare domain experts during process modelling. This paper presents ProDeM: a novel Process-Oriented Delphi Method for the systematic, asynchronous, and consensual modelling of surgical processes. ProDeM is an adaptable and flexible method that acknowledges that: (i) domain experts have busy calendars and might be geographically dispersed, and (ii) various elements of the process model need to be assessed to ensure model quality. The contribution of the paper is twofold as it outlines ProDeM, but also demonstrates its operationalisation in the context of a well-known surgical process. Besides showing the method’s feasibility in practice, we also present an evaluation of the method by the experts involved in the demonstration.
Article
The paper motivates, presents, demonstrates in use, and evaluates a methodology for conducting design science (DS) research in information systems (IS). DS is of importance in a discipline oriented to the creation of successful artifacts. Several researchers have pioneered DS research in IS, yet over the past 15 years, little DS research has been done within the discipline. The lack of a methodology to serve as a commonly accepted framework for DS research and of a template for its presentation may have contributed to its slow adoption. The design science research methodology (DSRM) presented here incorporates principles, practices, and procedures required to carry out such research and meets three objectives: it is consistent with prior literature, it provides a nominal process model for doing DS research, and it provides a mental model for presenting and evaluating DS research in IS. The DS process includes six steps: problem identification and motivation, definition of the objectives for a solution, design and development, demonstration, evaluation, and communication. We demonstrate and evaluate the methodology by presenting four case studies in terms of the DSRM, including cases that present the design of a database to support health assessment methods, a software reuse measure, an Internet video telephony application, and an IS planning method. The designed methodology effectively satisfies the three objectives and has the potential to help aid the acceptance of DS research in the IS discipline.
Conference Paper
Artefact evaluation is regarded as being crucial for Design Science Research (DSR) in order to rigorously prove an artefact’s relevance for practice. The availability of guidelines for structuring DSR processes notwithstanding, the current body of knowledge provides only rudimentary means for a design researcher to select and justify appropriate artefact evaluation strategies in a given situation. This paper proposes patterns that can be used to articulate and justify artefact evaluation strategies within DSR projects. These patterns have been synthesised from prior DSR literature concerned with evaluation strategies. They distinguish both ex ante and ex post evaluations and reflect current DSR approaches and evaluation criteria.
Article
Research in IT must address the design tasks faced by practitioners. Real problems must be properly conceptualized and represented, appropriate techniques for their solution must be constructed, and solutions must be implemented and evaluated using appropriate criteria. If significant progress is to be made, IT research must also develop an understanding of how and why IT systems work or do not work. Such an understanding must tie together natural laws governing IT systems with natural laws governing the environments in which they operate. This paper presents a two dimensional framework for research in information technology. The first dimension is based on broad types of design and natural science research activities: build, evaluate, theorize, and justify. The second dimension is based on broad types of outputs produced by design research: representational constructs, models, methods, and instantiations. We argue that both design science and natural science activities are needed to insure that IT research is both relevant and effective.
Conference Paper
This essay extends Simon's arguments in the Sciences of the Artificial to a critical examination of how theorizing in Information Technology disciplines should occur. The essay is framed around a number of fundamental questions that relate theorizing in the artificial sciences to the traditions of the philosophy of science. Theorizing in the artificial sciences is contrasted with theorizing in other branches of science and the applicability of the scientific method is questioned. The paper argues that theorizing should be considered in a holistic manner that links two modes of theorizing: an interior mode with the how of artifact construction studied and an exterior mode with the what of existing artifacts studied. Unlike some representations in the design science movement, the paper argues that the study of artifacts once constructed cannot be passed back uncritically to the methods of traditional science. Seven principles for creating knowledge in IT disciplines are derived: (i) artifact system centrality; (ii) artifact purposefulness; (iii) need for design theory; (iv) induction and abduction in theory building; (v) artifact construction as theory building; (vi) interior and exterior modes for theorizing; and (vii) issues with generality. The implicit claim is that consideration of these principles will improve knowledge creation and theorizing in design disciplines, both for design science researchers and for researchers using more traditional methods. Further, attention to these principles should lead to the creation of more useful and relevant knowledge.
Conference Paper
Seminal works in the application of design science research (DSR) in IS emphasize the importance of evaluation. However, discussion of evaluation activities and methods is limited and typically assumes an ex post perspective, in which evaluation occurs after the construction of an IS artifact. Such perspectives can assume that the evaluation is an empirical process and its methods can be selected in the same way as empirical research methods. In this paper, we analyze a broader range of evaluation strategies, which includes ex ante (prior to artifact construction) evaluation. This broader view is developed as a strategic DSR evaluation framework, which expands evaluation choices for IS DSR researchers, and also adds emphasis to strategies for evaluating design processes in addition to design products, using well-known quality criteria as an important asset. The framework encompasses both ex ante and ex post orientations as well as naturalistic settings (e.g., case studies) and artificial settings (e.g., lab experiments) for DSR evaluation. The framework proposed offers a strategic view of DSR evaluation that is useful in analyzing published studies, and also in surfacing the evaluation opportunities that present themselves to IS DSR researchers.