Thesis for the degree of Licentiate of Engineering
Towards Usable openEHR-aware
Clinical Decision Support:
A User-centered Design Approach
Hajar Kashfi
Division of Interaction Design
Department of Applied Information Technology
CHALMERS UNIVERSITY OF TECHNOLOGY
Gothenburg, Sweden 2011
Towards Usable openEHR-aware Clinical Decision Support:
A User-centered Design Approach
Hajar Kashfi
© Hajar Kashfi, 2011.
ISSN 1651-4769;7
Department of Applied Information Technology
Chalmers University of Technology
SE–412 96 Gothenburg, Sweden
Phone: +46 (0)31–772 1000
Contact information:
Hajar Kashfi
Division of Interaction Design
Department of Applied Information Technology
Chalmers University of Technology
SE–412 96 Gothenburg, Sweden
Phone: +46 (0)31–772 5407
Fax: +46 (0)31–772 3663
Email: hajar.kashfi@chalmers.se
URL: http://puix.org
Printed in Sweden
Chalmers Reproservice
Gothenburg, Sweden 2011
To my love, Mohsen
and my wonderful parents and brothers
Towards Usable openEHR-aware Clinical
Decision Support:
A User-centered Design Approach
Hajar Kashfi
Department of Applied Information Technology,
Chalmers University of Technology
Abstract
Nowadays, the use of computerized approaches to support health care processes
in order to improve quality of health care is widespread in the clinical domain.
Electronic health records (EHR) and clinical decision support (CDS) are con-
sidered to be two complementary approaches to improve quality of health care.
It is shown that EHRs are not able to improve quality of health care without
being supported by other features such as CDS. On the other hand, one of the
success factors of CDS is its integration into EHR, and since there are various
international EHR standards (such as openEHR) being developed, it is crucial
to take these standards into consideration while developing CDS.
Various clinical decision support systems (CDSS) have been developed, but unfortu-
nately only a few of them are used routinely. Two of the reasons for the
poor acceptance of CDSSs among their users, i.e. clinicians, have been shown to be
their separation from EHRs and the poor usability of their user interfaces. Besides
integration into the underlying information framework, i.e. EHR systems, consider-
ation of human-computer interaction (HCI) in designing and evaluating CDS is
one of the success factors that developers of these systems should keep in mind.
This thesis addresses the question of how usable openEHR-aware clinical deci-
sion support can be designed and developed in order to improve the quality of
health care. To answer this research question, several sub-questions were identi-
fied and investigated. This included analyzing the state of the art in two different
aspects of the design, development and evaluation of CDS, and also investigating the
application of a customized user-centered design (UCD) process in developing
openEHR-based clinical applications.
Analysis of the state of the art in the interplay between HCI and CDS, and also in the
intersection between CDS and EHR, revealed that consideration of both HCI
and the integration of CDS into EHRs is more appreciated in theory than in practice,
and that there is still a long way to go before reaching an acceptable level in these
two success factors of CDS.
Moreover, the experience in designing an openEHR-based clinical application re-
vealed that apart from the benefits offered by the openEHR approach, such as the specification
of different roles and the involvement of domain experts in defining domain concepts,
there are various shortcomings that need to be improved, for instance the limited
support for openEHR application developers. Additionally, this study revealed
that there are characteristics of the domain, tasks and users in the domain that
developers should be informed about while applying UCD methods.
Keywords: Medical informatics, Electronic health record, openEHR, Clinical
decision support system, User-centered design, Human-computer interaction,
Interaction design, Usability.
Acknowledgments
I would like to express my gratitude to all of the individuals whose support and
help has made this research a reality; especially my supervisor and advisor, Olof
Torgersson. My gratitude also extends to Göran Falkman, Mats Jontell, Marie
Lindgren, Marita Nilsson, and the members of the Clinic of Oral Medicine at
Sahlgrenska University Hospital who were involved in this project.
I also want to thank Sus Lundgren, Martin Hjulström, Erik Fagerholt and Søren
Lauesen who provided constructive feedback on the design work presented in
this thesis. I am also indebted to Marie Gustafsson Friberger who provided
valuable comments on the thesis. Moreover, I appreciate the opportunities for
helpful discussions provided by the openEHR community members, especially
Ian McNicoll, Koray Atalag, Pablo Pazo, Rong Chen, Seref Arikan, and others
whose names may have been omitted here.
I would also like to convey my gratitude to my colleagues in the Interaction
Design Division, in particular Staffan Björk and Fang Chen, and members of
the Human-Technology Design research school of which I too am a member. Of
course, I would be remiss not to mention my kind fellow graduate student, Anna
Gryszkiewicz, with whom I share a pleasant and peaceful office.
My friends know that they all hold a very special place in my heart and I am
grateful to them for all of the joy they have brought into my life over the past
few years. But my wonderful family–my parents and brothers–know that they
are and always will be a part of my heart. I hope that the realization that
their love, patience, and willingness to bear the burden of our separation, has
empowered me to make my dreams come true will allow them to look upon my
achievements here with pride.
And finally, thanks to you, Mohsen, my love, and the meaning of my life. You
are the best and happiest thing that has ever happened to me. You are not only
a marvelous life partner, but also a tremendous friend and a remarkable fellow
graduate student. I appreciate the fruitful discussions we have had together and
the feedback you provided on this thesis. I am grateful to you for being such a
cheerful and supportive soul mate through both sunshine and rain, as well as all
of the other moments that will forever remain between you and I. Thank you for
being my everything and everyone–your love is the axis of my entire being.
Hajar Kashfi
Gothenburg, May 2011
List of Appended Papers
This thesis is a summary of the following six papers. References to the papers
are made using the Roman numerals associated with the papers.
I Hajar Kashfi, “Towards Interaction Design in Clinical Decision Support
Development: A Literature Review,” submitted to International Journal
of Medical Informatics, Elsevier
II Hajar Kashfi, “The Intersection of Clinical Decision Support and Elec-
tronic Health Record: A Literature Review,” submitted to 1st International
Workshop on Interoperable Healthcare Systems (IHS2011) - Challenges,
Technologies, and Trends, Szczecin, Poland, September 19-21, 2011.
III Hajar Kashfi, “Applying a User Centered Design Methodology in a Clini-
cal Context,” in Proceedings of The 13th International Congress on Medical
Informatics (MedInfo2010), Studies in health technology and informatics,
2010 Jan ;160(Pt 2):927-31.
IV Hajar Kashfi, “Applying a User-centered Approach in Designing a Clin-
ical Decision Support System,” submitted to Computer Methods and Pro-
grams in Biomedicine, Elsevier.
V Hajar Kashfi, Olof Torgersson, “Supporting openEHR Java Desktop Ap-
plication Developers,” to appear in The XXIII International Conference
of the European Federation for Medical Informatics, Proceedings of Medi-
cal Informatics in a United and Healthy Europe (MIE2011), Oslo, Norway,
28-31 August, 2011.
VI Hajar Kashfi, Jairo Robledo Jr., “Towards a Case-Based Reasoning Method
for openEHR-Based Clinical Decision Support,” to appear in Proceed-
ings of The 3rd International Workshop on Knowledge Representation for
Health Care (KR4HC’11), Bled, Slovenia, 6 July, 2011.
List of Other Papers
I Hajar Kashfi, “An openEHR-Based Clinical Decision Support System:
A Case Study,” in The XXII International Conference Of The European
Federation For Medical Informatics, Proceedings of Medical Informatics in
a United and Healthy Europe (MIE2009), Studies in health technology and
informatics. 2009. p. 348.
II Hajar Kashfi and Olof Torgersson, “A Migration to an openEHR-Based
Clinical Application,” in The XXII International Conference Of The Eu-
ropean Federation For Medical Informatics, Proceedings of Medical Infor-
matics in a United and Healthy Europe (MIE 2009), Studies in health
technology and informatics. 2009. p. 152-6.
Contents
I Introduction
1 Introduction
1.1 Overview
2 Frame of Reference
2.1 Medical Informatics
2.2 Electronic Health Record
2.2.1 The need for Interoperability
2.3 openEHR
2.3.1 Two-level Modeling
2.3.2 Two-level Software Engineering
2.3.3 The Reference Model
2.3.4 Archetype
2.3.5 Template
2.4 Other EHR Standardization Approaches
2.4.1 The CEN/ISO EN13606 standard
2.4.2 The Governmental Computerized Patient Record Project (G-CPR)
2.4.3 Health Level 7 (HL7)
2.5 Decision Support in the Clinical Domain
2.6 Human-Computer Interaction
3 The Research Process
3.1 Conceptual framework
3.2 Research Questions
3.3 Objectives
4 Methods and Tools
4.1 Literature Review
4.2 The Research Methodology
4.2.1 Design and Creation Research Strategy
4.3 User-Centered Design Process
4.4 Assumptions
4.5 Archetype Development Process
5 Summary of the Attached Papers
5.1 Paper I
5.2 Paper II
5.3 Paper III
5.4 Paper IV
5.5 Paper V
5.6 Paper VI
6 Thesis Contributions
6.1 The Answer to RQ1
6.2 The Answer to RQ2
6.3 The Answer to RQ3
6.4 The Answer to RQ4
7 Future Work
8 Conclusion
Bibliography
II Publications
Paper I
Paper II
Paper III
Paper IV
Paper V
Paper VI
Part I
Introduction
Chapter 1
Introduction
Errors that occur in a clinical process are mostly due to the cognitive limitations of
humans, knowledge being forgotten in the health care flow, and difficulties
in clinical workflows [1, 2]. Clinicians are prone to making errors, especially
because of the limitations of working memory [3].
Various avoidable errors and adverse events in health care are documented in
the literature; in some cases these errors and events have even led to patients' deaths.
About 12 preventable deaths per million inhabitants are reported in Sweden [4].
According to the Swedish Parliament, around 100,000 patients
suffer from preventable adverse clinical events each year, and 3,000 of these adverse
events lead to patient deaths [5]. Preventable medical errors resulted in the deaths
of up to 98,000 people in the United States in 1999 [6]. Around 11% of the
patients admitted to two hospitals in London experienced adverse events, 48%
of which were preventable and 8% of which resulted in patient deaths [7].
There have been investigations into the quality of care in various countries.
The existing quality problems fall into three categories, namely “un-
deruse” (failure to provide the best expected health care service), “overuse”
(providing a health care service that is more harmful than beneficial for
the patient), and “misuse” (unsuccessful delivery of the best expected health care
service because of preventable complications), which occur in both small
and large health organizations [8].
Obviously, there are many debates surrounding the quality of health care, but
one should start with defining the meaning of “quality” in this domain. A widely
accepted and robust definition of quality is the definition developed in 1990 by
the Institute of Medicine (IOM) as “the degree to which health services for indi-
viduals and populations increase the likelihood of desired health outcomes and
are consistent with current professional knowledge” [9]. According to Graham
[10] “quality must be judged as good if care, at the time it was given, conformed
to the practice that could have been expected to achieve the best results.”
It has been demonstrated that information systems have the ability to decrease
avoidable clinical errors by supporting clinicians in the health care process, or in other
words, to improve the quality of care [11]. Medical informatics is the research
field that is dealing with this matter. Electronic health records (EHR) have been
the leading research focus in this field so far [12, 13, 14]. The EHR research field
deals with issues such as capturing, storing, retrieving and sharing patient data.
For EHRs to be able to improve quality of care, they should be supported by
clinical decision support (CDS) services [15, 14, 16], those services that aid clin-
icians in the process of decision making. Nonetheless, not all CDS developments
have led to improvements in clinical practice [17]. Hunt et al. [18] indicate that
66% of CDS implementations have led to significant improvement in health
care, while the remaining 34% did not. Various efforts have been made to identify
the reasons for the low adoption, poor acceptability and ineffectiveness of
CDS. Efforts have also been made to identify success factors in developing them
[13, 17, 19, 20, 21]. Some of those success factors are the level of integration
with clinicians’ workflow [13, 20, 17, 22, 23, 24], the degree of patient-specificity
[13, 17, 21], and availability at the point of care or timely access [13, 17]. Accord-
ingly, two of the main factors that have a direct relation to the success of CDS
are the integration of the CDS into EHR systems and the proper design of the CDS by taking
human-computer interaction (HCI) into consideration.
While there have been various recommendations regarding consideration of these
two success factors in development of CDSSs, related literature suggests that
these factors are being partially or totally ignored in many of the projects.
1.1 Overview
This thesis investigates the question of how usable openEHR-aware clinical de-
cision support can be designed and developed in order to improve the quality of
health care. In order to answer this question, several sub-problems were identified
to be investigated. The study has been carried out in oral medicine. However,
the outcome of the research should be applicable to medical informatics in gen-
eral. The structure of the thesis is as follows. The frame of reference of this
research is presented in Chapter 2. This chapter includes the definition of differ-
ent concepts basic to this research namely medical informatics, EHR, openEHR,
CDS, HCI, usability and user-centered design (UCD).
The research process and the conceptual framework are discussed in Chapter 3.
The research questions and objectives of this study are presented in this chapter
as well. Methods and tools are introduced in Chapter 4. A summary of the
included papers is given in Chapter 5, along with how they relate to each other
and to the research questions. The thesis contributions are presented in Chapter 6.
Some directions for future work are discussed in Chapter 7. Finally, a conclusion
is provided in Chapter 8.
Chapter 2
Frame of Reference
2.1 Medical Informatics
Medical informatics is defined as a scientific discipline “concerned with the sys-
tematic processing of data, information and knowledge in medicine and health
care” [25, 26]. This domain covers both “computational” and “informational”
aspects of the processes in the clinical domain [26]. Medical informatics deals
with providing solutions for problems of data, information and knowledge pro-
cessing in medicine and health care [25]. As a discipline, medical informatics
has been around for more than 50 years [12] but is still considered young, especially
compared to other medical disciplines [27]. Nowadays, new names are being suggested
for this discipline such as health informatics and clinical informatics since the
word “medical” does not cover nursing informatics, dental informatics and so on
[12].
There are various research areas in the field of medical informatics namely elec-
tronic health record (EHR) systems, information systems, decision support sys-
tems, and image and signal processing [12]. EHRs have been the leading research
focus in this field so far [12, 13, 14]. The EHR research field deals with issues
such as capturing, storing, retrieving and sharing patient data. This has led to a
number of benefits such as reduced number of transportation errors, higher leg-
ibility of reports, and avoiding redundancy [13]. These benefits indirectly affect
patient safety, health care quality and efficiency [13]. Recently, there has been
more and more interest in adoption of EHRs and developing clinical applications
based on EHRs [13, 15, 14].
The idea of benefiting from computers and information technology (IT) in the
clinical domain has been around since the 1950s [28] (or the 1960s as observed
by [13]) when there were initiatives to automate health care and to simulate
the clinical decision making by computers [13, 24]. One of the turning points
of medical informatics is considered to be around 50 years ago when in 1959,
Ledley and Lusted reported on reasoning foundations of medical diagnosis [29].
Even though the concept of automating health care and applying computers
and IT in this domain is far from new, adoption has been slow and the impact
low compared to other domains such as engineering and marketing.
In other words, health care has fallen behind other disciplines in applying
information technology to improve the processes and outcomes [13, 14].
As mentioned above, since the introduction of computers in the clinical domain
in the past decades, the main progress in this area has been in coping with
information management, i.e. adopting EHRs [13, 14, 12] rather than adopting
CDS in order to improve the decision making process. One of the aims of the
efforts in the area of EHR has been to improve quality of health care but it is
doubted whether EHRs have the ability to fulfill this goal [15]. More information
about EHRs is presented in the following section.
2.2 Electronic Health Record
The idea of computerized medical records has been around as one of the key
research areas in medical informatics for more than 20 years. Iakovidis [30]
defines an EHR as “digitally stored health care information about an individ-
ual’s lifetime with the purpose of supporting continuity of care, education and
research, and ensuring confidentiality at all times”. EHRs include the whole
range of patient-related data such as demographic information, medical history,
medications, and allergies [31].
The main aim of EHRs is to make distributed and cooperating health information
systems and health networks come true [31]. The first benefit of deploying EHRs
is that patients’ information is no longer on a piece of paper and clinicians have
access to all patients’ information when required [13].
Since the introduction of EHRs, various projects were initiated that led to de-
velopment of various commercial EHR products. Nowadays, there are more and
more EHR systems being developed. The interest is also increasing at the gov-
ernmental level in different countries such as the UK and Sweden. However, the
EHRs adoption rate is still low in community hospitals and office practices, while
higher in academic medical centers [13]. The adoption of EHRs in the
United States is reported to be at most 40% [32]. In countries that
have a national health care plan, this rate is considered to be higher [13].
Several reasons have been identified for the low adoption rate of EHRs in small
hospitals and office practices, viz. high implementation and maintenance costs,
additional time and effort and finally the difficulty in choosing among available
systems in the market due to a lack of standardization [13].
2.2.1 The need for Interoperability
To be able to fully benefit from EHRs, timely and secure access to all of
the EHR systems should be ensured, EHRs should be up-to-date and accurate
in terms of the information they contain, and they should be correctly understood
when being communicated [33]. This means that EHR systems should be inter-
operable. EHRs are stored in various formats in different products, which leads to
interoperability problems in the domain. Therefore, developing national and
international EHR standards is important to support interoperability [34].
Before going into the details of the approaches suggested to enhance the interoperability
of EHRs, it is appropriate to present a definition of interoperability. Interoperability
of health systems is defined as “the ability, facilitated by ICT applications and
systems, to exchange, understand and act on citizens/patients and other health-
related information and knowledge among linguistically and culturally disparate
health professionals, patients and other actors and organizations within and
across health system jurisdictions in a collaborative manner” [33]. Four levels of
interoperability are defined by Stroetmann [33]:
1. having no interoperability
2. technical and syntactical interoperability
3. partial semantic interoperability
4. full semantic interoperability
A challenging aim regarding EHRs has been to reach semantic interoperability in
EHR systems. Interest in this issue has recently been increasing in the EU,
with the aim of reaching semantic interoperability at regional, national and even
EU level [33].
So far, several efforts have been made to develop EHR standards in order to
structure and exchange patient information and to enable semantic interoper-
ability among medical information systems. The main approaches are as follows:
The European Committee for Standardization (CEN) Electronic Health
care Record communication standard (CEN/ISO EN13606)
The Governmental Computerized Patient Record project
The Health Level 7 (HL7) Reference Information Model and its clinical
document architecture
The GEHR approach
The openEHR approach which is a continuation to GEHR
All the above approaches focus on the technical issues related to standardized
and interoperable EHRs. More information about these approaches is provided
in the following sections. The EHR interoperability standard that is applied
in this study is openEHR. According to the openEHR website1, “the Swedish gov-
ernment has decided on the use of ISO 13606 as a base standard for national
health data communication. openEHR will be used to define clinical models,
terminology integration, and to implement 13606 in some contexts.” ISO/CEN
13606 resembles the openEHR reference model (see Section 2.3.3) in a simplified
manner [35] (ISO/CEN 13606 is explained in Section 2.4.1). This has been a
strong motivation for us to consider openEHR as our EHR approach for carrying out
this study.
1http://www.openehr.org
2.3 openEHR
openEHR has its origins in 1992 in an EU research project named Good Euro-
pean Health Record. This project was later continued under the name of Good
Electronic Health Record (GEHR) [36]. Currently the maintenance of openEHR
is done by a non-profit organization named the openEHR Foundation [35].
In the openEHR approach, clinicians are in charge of defining the specifications
of clinical knowledge to be used in information modeling. The main emphasis of
openEHR is on semantic interoperability of medical records. This approach sug-
gests a two-level architecture for clinical applications to separate knowledge and
information levels in order to overcome the problems caused by the ever-changing
nature of clinical knowledge. Patient data is stored in a generic form which is
retrievable in any system using constraints named archetypes. An archetype,
which is designed by domain experts, defines some constraints on data in terms
of types, values, relation of different items and so on. Archetypes are used for
data validation and sharing [37].
The openEHR framework consists of the reference information model (RM),
the implementation technology specification, the archetype definition language
(ADL), the open-source implementations, and an archetype repository (the repos-
itory is explained more in Section 4.5) [37]. A review of the openEHR architec-
ture is presented by Beale [37]. The key concepts of the openEHR architecture
are explained in the following sections.
2.3.1 Two-level Modeling
openEHR suggests a two-level architecture for EHR systems, and accordingly a
two-level software engineering approach for developing such systems. The key
idea in the two-level architecture is the separation of the domain knowledge level
and the information level.
The first level or the lower level is a stable reference information model. Software
and data are built from this stable object model named the openEHR Reference
Model (RM). The second or upper level provides formal definitions of the clinical
domain concepts. This reduces the dependency of the system and the underlying
data on the ever-changing clinical concepts.
2.3.2 Two-level Software Engineering
openEHR suggests a two-level software engineering approach. In this approach,
there exist different viewpoints and a separation of responsibilities in software
development (see Figure 2.1).
Figure 2.1: The openEHR two-level software engineering approach, taken from [37]
The main roles involved in the openEHR process are domain
experts, users and IT developers. The viewpoints introduced by openEHR
are the domain knowledge environment, the runtime system and the technical
development environment. The openEHR approach consists of the following
steps:
Domain specialists build reusable archetypes, templates (collections of
archetypes, see Section 2.3.5) for local use, and terminologies for general
use.
IT developers focus on generic aspects of the system such as data manage-
ment.
The user works with a GUI which is derived from the templates. Data is
generated by users via the EHR system and is validated by archetypes at
runtime.
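To make the idea of this separation more concrete, the following is a minimal, hypothetical sketch in Java. The class names and the constraint format are invented for illustration and greatly simplified; they are not part of the openEHR specifications or the openEHR Java reference implementation. The sketch only shows how data built on a small, generic information model can be validated at runtime against domain constraints that are defined and maintained separately.

```java
import java.util.Map;

// Level 1 (information level): a stable, generic element, analogous in spirit to
// the openEHR Reference Model but drastically simplified for this sketch.
record GenericElement(String path, double magnitude, String units) {}

// Level 2 (knowledge level): a domain constraint defined separately from the
// software, standing in for what an archetype would express in ADL.
record RangeConstraint(double min, double max, String units) {
    boolean accepts(GenericElement e) {
        return e.units().equals(units) && e.magnitude() >= min && e.magnitude() <= max;
    }
}

public class TwoLevelModelingSketch {
    public static void main(String[] args) {
        // Hypothetical constraints for a "blood pressure" concept, loaded at runtime
        // rather than hard-coded into the information model.
        Map<String, RangeConstraint> bloodPressureArchetype = Map.of(
                "/data/systolic",  new RangeConstraint(0, 1000, "mm[Hg]"),
                "/data/diastolic", new RangeConstraint(0, 1000, "mm[Hg]"));

        GenericElement systolic = new GenericElement("/data/systolic", 120, "mm[Hg]");

        // Runtime validation: generic data is checked against the separately
        // defined domain knowledge, keeping the software stable while the
        // clinical definitions are free to evolve.
        boolean valid = bloodPressureArchetype.get(systolic.path()).accepts(systolic);
        System.out.println("Valid against archetype-style constraint: " + valid);
    }
}
```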
2.3.3 The Reference Model
The openEHR RM is specific to the clinical domain but still includes very generic
clinical concepts such as composition, observation, evaluation, instruction and
action. Moreover, in this RM, different data types are defined such as coded
text, quantity and multimedia.
2.3.4 Archetype
In the upper level of the openEHR two-level modeling approach, domain level
definitions are expressed in the form of archetypes and templates. These definitions are
used in the EHR system at runtime (see Figure 2.2).
Figure 2.2: The openEHR archetype meta-architecture, taken from [37]
Archetypes are used to define constraints
on the generic RM. For instance, a blood pressure measurement can be defined
in form of an archetype in contrast to a more general clinical concept such as an
observation which is the focus of the RM.
All the information that is based on the RM is archetypeable, or in other words,
can be controlled by archetypes in terms of its creation, modification and so on.
Each archetype is an instance of the archetype model (AM) and is stored separately
from the data in an archetype repository. The archetype definition language
(ADL) is the language that is used to define archetypes based on the AM. It
is recommended that when possible, archetypes should be reused and/or cus-
tomized instead of being created from scratch. The relation of archetypes to the
RM is depicted in Figure 2.2.
2.3.5 Template
Templates encapsulate a group of archetypes to be applied for local use. Tem-
plates are trees of one or more archetypes and correspond to user interface
forms, printed reports or other realizations of clinical data [37]. For instance,
using a template, one can put different clinical concepts like “blood pressure
measurement” and “mouth examination” (both defined as archetypes) together
to create an output report for EHRs [37].
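As a rough illustration of this grouping idea (the class names and archetype names below are invented or simplified and do not follow the openEHR template specification), a template can be thought of as a named tree of archetypes from which user interface fields or report sections are derived:

```java
import java.util.List;

// Hypothetical sketch: a template as a named grouping of archetypes whose leaf
// nodes are turned into user interface fields (heavily simplified).
record ArchetypeNode(String archetypeId, List<String> leafNodes) {}

record Template(String name, List<ArchetypeNode> archetypes) {
    // Derive flat form-field labels from the archetypes grouped by the template.
    List<String> toFormFields() {
        return archetypes.stream()
                .flatMap(a -> a.leafNodes().stream().map(n -> a.archetypeId() + ": " + n))
                .toList();
    }
}

public class TemplateSketch {
    public static void main(String[] args) {
        Template report = new Template("dry mouth visit report", List.of(
                new ArchetypeNode("blood_pressure_measurement", List.of("systolic", "diastolic")),
                new ArchetypeNode("mouth_examination", List.of("salivary gland swelling"))));

        report.toFormFields().forEach(System.out::println);
    }
}
```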
2.4 Other EHR Standardization Approaches
Several studies have compared the main interoperability standards and specifi-
cations in the clinical domain [31, 34]. Based on a survey, Eichelberg provides
information on the most relevant EHR standards [38]. This includes the level
of interoperability provided by each standard, as well as content structure, ac-
cess services, multimedia support and security. In the following sections, a brief
introduction to the most relevant EHR standardization approaches is presented.
2.4.1 The CEN/ISO EN13606 standard
The CEN/ISO EN13606 is a European norm from CEN which is also approved
as an international ISO standard [39]. The aim of this standard is to enable
semantic interoperability in the electronic health record communication. The
CEN standard is actually a subset of the openEHR specification [34]. Like
openEHR, this standard is based on the idea of a two-level architecture (i.e. a
dual-model architecture [39]) which consists of a reference model and archetypes.
2.4.2 The Governmental Computerized Patient Record
Project (G-CPR)
G-CPR is a joint project between the US Department of Defense (DoD), the
US Department of Veterans Affairs (DVA) and the Indian Health Services (IHS)
[31]. This solution uses an object-oriented specification to enable interoperability
and is a service-oriented rather than an architecture-based solution [31].
2.4.3 Health Level 7 (HL7)
HL7 is a well-known EHR communication standard in the clinical domain [31].
According to the HL7 website2: “Health Level Seven International (HL7) is a
not-for-profit, ANSI-accredited standards developing organization dedicated to
providing a comprehensive framework and related standards for the exchange,
integration, sharing, and retrieval of electronic health information that supports
clinical practice and the management, delivery and evaluation of health services”.
HL7 version 3 presents a comprehensive Reference Information Model
(RIM) [31]. The HL7 clinical document architecture (CDA) templates are similar
to openEHR archetypes [38]. This standard provides data-level interoperability,
but functional-level interoperability is not provided [31].
2.5 Decision Support in the Clinical Domain
Clinical decision support is a sub-domain of a more general research area named
decision-making support. According to Gupta [40] “decision-making support
systems (DMSS) are Information Systems designed to interactively support all
phases of a user’s decision-making process.” There are various definitions for
CDSS and CDS in the literature, three of which are presented here:
2http://www.hl7.org/
“computer-based clinical decision support (CDS) can be defined as the use
of the computer to bring relevant knowledge to bear on the health care
and well being of a patient” [13].
“clinical decision support refers broadly to providing clinicians or patients
with clinical knowledge and patient-related information, intelligently fil-
tered, or presented at appropriate times, to enhance patient care” [14].
“clinical decision support is any EHR-related process that gives a clinician
patient-related healthcare info with the intent of making the clinician’s
decision making more efficient and better informed” [3].
CDS impacts the process of decision making about individual patients. This
support should be provided at the point of care and while the decisions are
made [24]. These systems provide support for diagnosis of diseases, prevention
of errors in the clinical process, treatment, and future evaluation of the patient.
Services supported by CDS include diagnosis, alerting, reminding, treatment
suggestions, and patient education. A CDS intervention consists of the CDS content
and the method for delivering that content.
In providing CDS, three modes of interaction between human and computer can
be defined [13]:
User in charge (users can override computer’s suggestions)
Computer in charge (any decision made by computer is expected to be
carried out by users)
Collaborative decision making (the computer controls the input, options are
provided based on the users’ entries, and the user makes the desired choice)
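The three modes can be summarized in a small, hypothetical sketch (not taken from any particular CDSS); in particular, the collaborative mode is the one where the computer narrows down the options and the clinician makes the final choice:

```java
import java.util.List;

// Hypothetical sketch contrasting the three interaction modes listed above.
public class InteractionModeSketch {

    // User in charge: the clinician may override the computer's suggestion.
    static String userInCharge(String computerSuggestion, String clinicianOverride) {
        return clinicianOverride != null ? clinicianOverride : computerSuggestion;
    }

    // Computer in charge: the computer's decision is expected to be carried out.
    static String computerInCharge(String computerDecision) {
        return computerDecision;
    }

    // Collaborative: the computer provides options based on the user's entries,
    // and the clinician makes the desired choice among them.
    static String collaborative(List<String> computerOptions, int clinicianChoice) {
        return computerOptions.get(clinicianChoice);
    }

    public static void main(String[] args) {
        System.out.println(userInCharge("suggestion A", "clinician's own decision"));
        System.out.println(computerInCharge("suggestion A"));
        System.out.println(collaborative(List.of("suggestion A", "suggestion B"), 1));
    }
}
```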
The idea of having both computers and humans in charge of the health care pro-
cess is more practical in the clinical domain than building intelligent autonomous
systems that are in charge of the decision making process [13, 24]. The latter
may work in other disciplines but is less applicable in the clinical domain. Berner
[24] discusses that CDS is not meant to come up with “the answer” but should
provide information for the user and aid him/her in making decisions.
Not all of the information in a clinician’s mind can be transferred to the computer
(the CDSS), so a clinician usually knows more about the patient. Therefore,
having a collaborative pattern in which a clinician can eliminate some of the
choices made by the computer is better [24].
In this thesis, CDSS and CDS are used interchangeably to refer to a computer
program that aids clinicians in the process of decision making, at the point of
care, and based on health information of an individual patient, by presenting
that information coupled with external knowledge in a way that is more suitable
for making decisions regarding the care process of that specific patient. The
system is not meant to make the decisions, rather it is the clinician who makes
the final decision.
Table 2.1: Methods for user-centered (human-centered) design, taken from [44]

Planning: usability planning and scoping; usability cost-benefit analysis.
Context of use: identify stakeholders; context of use analysis; survey of existing users; field study or user observation; diary keeping; task analysis.
Requirements: stakeholder analysis; user cost-benefit analysis; user requirements interview; focus groups; scenarios of use; personas; existing system or competitor analysis; task or function mapping; allocation of function; user, usability and organizational requirements.
Design: brainstorming; parallel design; design guidelines and standards; storyboarding; affinity diagram; card sorting; paper prototyping; software prototyping; Wizard-of-Oz prototyping; organizational prototyping.
Evaluation: participatory evaluation; assisted evaluation; heuristic or expert evaluation; controlled user testing; satisfaction questionnaires; assessing cognitive workload; critical incidents; post-experience interviews.
2.6 Human-Computer Interaction
Human-computer interaction (HCI) is defined as “a discipline concerned with
the design, evaluation and implementation of interactive computing systems for
human use and with the study of major phenomena surrounding them” [41].
The concept of usability is considered to be the heart of HCI [42]. Usability
is defined as “the extent to which a product can be used by specified users to
achieve specified goals with effectiveness, efficiency and satisfaction in a specified
context of use” [43]. A great deal of effort in the field of HCI is aimed at designing
and developing more usable computer systems [42].
Usability is a very important factor in designing an interactive system. If the
system is not usable enough for the intended users, it is likely that they will not
use the system very often (underuse) or will use it improperly (misuse) and will
stick to their current methods for accomplishing the tasks, both of which bring costs
to the organization or damage the reputation of the team/company that developed
the system [44]. There are benefits in designing a usable system both for the
developer team and for the customers: increased productivity, reduced errors,
reduced training and support, improved acceptance and enhanced reputation
[44].
To develop a usable interactive system, both functional requirements and non-
functional requirements, including usability requirements, should be considered.
Traditionally, the focus of software design processes has been on functional
requirements, but nowadays there are design frameworks that integrate the two
[44]. According to Shneiderman [45], usability requirements are of five types:
learnability, efficiency, memorability, errors, and satisfaction.
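As a small, hypothetical illustration of how such requirements can be made measurable in a usability test (the numbers and thresholds below are invented for the example and are not taken from this project), effectiveness, efficiency and satisfaction in the sense of the ISO 9241-11 definition quoted above can be computed from simple test data:

```java
// Hypothetical sketch: turning usability requirements into simple measures
// collected during a usability test (all values and thresholds are invented).
public class UsabilityMeasuresSketch {
    public static void main(String[] args) {
        int tasksAttempted = 10;
        int tasksCompleted = 8;            // input for effectiveness
        double totalTimeMinutes = 25.0;    // input for efficiency
        double meanSatisfaction = 4.2;     // e.g. from a 1-5 post-test questionnaire

        double effectiveness = (double) tasksCompleted / tasksAttempted; // goal achievement
        double efficiency = tasksCompleted / totalTimeMinutes;           // completed tasks per minute

        System.out.printf("Effectiveness: %.0f%%%n", effectiveness * 100);
        System.out.printf("Efficiency: %.2f tasks/min%n", efficiency);
        System.out.printf("Satisfaction: %.1f / 5%n", meanSatisfaction);

        // An acceptance criterion a project team might set for an iteration.
        boolean meetsRequirements = effectiveness >= 0.8 && meanSatisfaction >= 4.0;
        System.out.println("Meets the usability requirements: " + meetsRequirements);
    }
}
```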
By involving users in the design and development process of a system, the system
will be more usable for the intended users [46, 47, 48]. The design approach which
places emphasis on involving users in the design is called user-centered design
(UCD) or human-centered design (HCD) [46, 47]. The main focuses of UCD
are active user involvement in the design process, multidisciplinary design teams
and iterative design [44]. UCD is not a substitute for software development
methods, but a complementary process to them. The UCD process is depicted
in Figure 4.2. The process starts with planning; then context of use analysis,
requirements specification, design and evaluation are repeated until the users are
satisfied with the design and the usability requirements are achieved.
Various methods are defined to support UCD. A broad range of methods are
specified by Maguire [44]. This collection is considered to be a proper introduc-
tion of various well-known methods and their relation to different UCD steps.
The methods are summarized in Table 2.1, where the column names correspond to the
different steps in the UCD process.
Chapter 3
The Research Process
Oates [49] describes a model of the research process which consists of various
categories of methods such as research strategy, data generation methods, and
data analysis.
Oates describes how self-experiences and motivations along with the knowledge
gained from reviewing the literature in the field and being informed about the
existing gaps lead to a “conceptual framework” and “research questions” (see
Section 3.2).
A conceptual framework clarifies a researcher’s way of thinking and structuring a
research topic and the research process undertaken [49]. A conceptual framework
includes the research topic and its comprising factors, the research methodology
(a combination of strategies and methods used), the data analysis approaches,
the development methodology, and the research evaluation approach. Details
about the conceptual framework of this study are presented in Section 3.1.
In order to conduct a research study, a research strategy should be selected
(see Section 4.2.1), data generation methods should be applied in order
to gather data, and finally the data should be analyzed using qualitative or quantitative
methods. In this study, the data generation methods are those methods applied
in the system development process (see Section 4.3) along with the literature
published on the topic of interest, i.e. CDS (referred to as “documents” in
Oates’s model). Both qualitative (used as part of the system development
process) and quantitative data analysis methods are applied in this study in
order to analyze data. Oates’s model is depicted in Figure 3.1. This figure also
highlights how the process of this study fits into this model.
Figure 3.1: Model of the research process, adapted from [49]. The methods applied in this study are colored in gray.
3.1 Conceptual framework
As mentioned previously, there have been various efforts in defining CDS success
factors and development difficulties, ranging from a local and practical view
[14, 50, 51, 15] to wider national-level views that have tried to provide guidelines
or road-maps for developing more effective CDS [52, 53, 21, 54].
Accordingly, four main categories of challenges and success factors faced in design
and development of CDS can be defined:
technical issues concerning knowledge representation and reasoning and
also maintenance of knowledge
integration into EHRs, which deals with integration into the underlying
IT framework, or more specifically the EHR systems in clinical organizations.
This is a subclass of the “technical issues”, but because of its
importance, it is considered as a separate category in this thesis.
human-computer interaction issues that focus on the user interface
design of these systems and the way users interact with them. User satis-
faction, effectiveness and acceptability of CDS in a practical setting, and
involving clinicians in the design and development of CDS all belong to
this category.
cultural and organizational issues that deal with the higher level as-
pects of motivations, utilizations, monitoring and acceptance of these sys-
tems at local, national and international levels.
According to the road-map for the United States national action on clinical de-
cision support [21], to reach widespread adoption of effective CDS, it is crucial
that system developers be supported to design “easy to deploy and use” applica-
tions. It is also recommended that best practices in system development should
be disseminated so that other developers can learn from previous successful ex-
periences.
As discussed before, the ultimate goal of efforts in the area of medical informatics
is to improve the quality of care, especially by introducing and applying EHRs
and CDS. To improve the quality of health care, neither of these two concepts
is an optimal solution individually. It is shown that to improve quality of health
care, EHR systems should be supported by other services such as CDS. On the
other hand, in order to develop effective CDS and to broaden its adoption, CDS
should be integrated into the existing EHR platform in the clinical organizations
and HCI should seriously be taken into consideration in designing and developing
them.
So far, this study has been done in relation to two of the categories of challenges
and success factors in developing CDS, namely the integration of CDS into EHRs and
taking HCI into account in designing and developing CDS. Figure 3.2 depicts
the factors that comprise this study. In the following, the research questions
(Section 3.2) and objectives (Section 3.3) are presented. Chapter 4 includes
more information about the research strategy and methods applied to answer
the research questions.
Figure 3.2: In order to improve the quality of health care, focus on two areas is inevitable: adopting clinical decision support and adopting electronic health records (EHR). The development of highly acceptable clinical decision support depends on its integration into EHRs and on the consideration of human-computer interaction (HCI), with the aim of developing usable clinical decision support systems (CDSS). There are, however, other issues such as technical considerations (e.g. knowledge representation and reasoning) and the cultural and organizational aspects of adopting clinical decision support in health organizations. Among these pillars, technical issues have gained the most attention so far, but interest in other aspects has also been increasing recently. This thesis involves two aspects (i.e. the colored pillars), namely the integration of clinical decision support systems into EHRs, and taking HCI into consideration with the aim of developing a usable CDSS that is aware of an EHR standard named openEHR.
3.2 Research Questions
The aim of this study has been to answer this research question:
How can usable openEHR-aware clinical decision support be designed
and developed in order to improve the quality of health care?
In order to answer this question, several sub-problems were set to be investigated:
RQ1 Are usability of clinical decision support and methods to reach and as-
sure usability taken into consideration by developers of clinical decision
support?
RQ2 Are integration of clinical decision support into electronic health records
and adopting electronic health record standards taken into consideration
by developers of clinical decision support?
RQ3 Is the approach suggested by openEHR suitable for designing and developing
openEHR-aware clinical applications, including clinical decision support
systems?
Does the two-level software engineering approach suggested by openEHR
work in practice?
RQ4 How can current successful design and development processes such as user-
centered design be customized for designing and developing clinical appli-
cations and clinical decision support?
The question involves the following sub-problem:
How can the design and development process of an openEHR-aware
clinical application, including clinical decision support systems, be
structured with focus on human-computer interaction and involving
clinicians in the process?
RQ5 Does openEHR offer any new opportunities for clinical decision support
in terms of knowledge representation and reasoning?
The question involves the following sub-problems:
Can openEHR be used to improve the process of knowledge represen-
tation and reasoning in clinical decision support?
Can clinical decision support benefit from structured, quality vali-
dated openEHR-based electronic health records?
Is it feasible and practical to integrate clinical decision support inter-
ventions into openEHR-based electronic health records?
To answer the above research questions1, several objectives should be accom-
plished. These objectives are defined in the following section.
3.3 Objectives
To investigate openEHR and also UCD in a clinical context with the aim of
answering the research questions, it was decided to develop a CDSS for an oral
disease. It was also planned to accomplish the following objectives:
1It should be mentioned that the work presented in this thesis is actually related to RQ1-
RQ4. RQ5 is suggested as the future direction of this study.
O1 Literature reviews should be conducted in order to analyze the state of the art
in the interplay between HCI and CDS, as well as the intersection of EHR and CDS.
O2 The openEHR framework should be studied and understood. The archetype
concept, two-level modeling, and two-level software engineering suggested
by openEHR should be analyzed.
O3 A UCD process should be applied from the beginning of the project. Dif-
ferent UCD methods that are applicable for the project should be selected
for designing and developing both the CDSS and archetypes.
O4 Domain-specific information about the disease should be gathered and struc-
tured using openEHR archetypes. Additionally, as suggested by openEHR,
reusable existing archetypes should be specified and customized if appli-
cable.
O5 The strengths and shortcomings of the openEHR approach and the limita-
tions of the two-level software engineering suggested by openEHR should
be identified in order to be reconsidered in the proposed approach in this
study.
O6 The characteristics of the clinical domain, clinical tasks and clinicians that
may have an effect on the user-centered design process should be identified.
Chapter 4
Methods and Tools
In this chapter, the methods and tools applied in order to answer the research
questions (see Section 3.2) are elaborated.
4.1 Literature Review
Literature review is a research methodology that aims at summarizing the avail-
able literature on a topic and presenting an analysis of it in order to provide
a full picture of the topic [55]. In this study, literature review is used to analyze the
state of the art in two different but related topics, namely the “interplay between
HCI and CDS development” and the “intersection of CDS and EHR”. The search
strategies are discussed in more detail in Paper I and Paper II.
4.2 The Research Methodology
Oates in [49] defines research methodology in information systems as a com-
bination of “research strategy”, “design and development process” and “data
generation methods”. This is depicted in Figure 4.1.
In this study a design and creation research strategy is used as the research
strategy and a user-centered design process is used as the development process.
More information about these methods can be found in the following sections.
4.2.1 Design and Creation Research Strategy
The focus of a design and creation research strategy is on developing new soft-
ware products, i.e. artefacts [49]. In this study, the “artefact” to be developed
is a clinical decision support system for dry mouth. To understand this artefact,
a description of the disease, i.e. dry mouth, and the characteristics of the CDS
are provided in the following.
Figure 4.1: Research methodology and development methodology taken from
[49]. Oates defines research methodology in information systems as a com-
bination of “research strategy”, “design and development process” and “data
generation methods”.
A Clinical Decision Support for Dry Mouth
“Dry mouth or xerostomia is the abnormal reduction of saliva and can be a
symptom of certain diseases or be an adverse effect of certain medications”
[56]. Treatment of xerostomia is related to finding its cause(s). There are five
main categories for xerostomia namely drug-induced, disease-induced, radiation-
induced, chemotherapy-induced, and cGVHD-induced [56]. Finding cause(s) of
dry mouth is a challenge for clinicians and needs to be supported by a clinical
application.
A potential dry mouth patient should answer, or a clinician should find an answer
to various types of questions such as:
Do you need to moisten your mouth frequently or sip liquids often?
Have you noticed any swelling of your salivary glands?
Do you smoke or have you been smoking regularly?
Are you currently taking 3 drugs or more?
Have you been subjected to therapeutic radiation to your head-and-neck region?
Have you had a feeling of dry mouth daily for more than 3 months?
As in other diseases, each of these questions is considered very important in
diagnosis. They sound very straightforward, but the difficulty arises when all of
these questions must be remembered at the point of care [3], while the answers
provided by a patient should also be confirmed by a clinical examination,
e.g. checking for swollen salivary glands.
The dry mouth CDS is meant to be used in the Clinic of Oral Medicine at Sahlgren-
ska University Hospital, Gothenburg, Sweden. Since this system is going to be
used in integration with an existing clinical data entry application, i.e. MedView
[57], data entry forms are not part of the Graphical User Interface (GUI); how-
ever, users should be provided with options to edit existing data. Finally, users
need to be able to enter their own comments, including diagnoses or treatments,
into the system.
The intended decision support process includes these four main steps:
1. Presenting an overview of patient-specific information and external knowl-
edge in a way that makes decision making easier
2. Providing proper reminders and alarms
3. Helping the user in finding the cause(s) of disease based on the patient’s
medical record
4. Suggesting related materials and treatment options, patient health infor-
mation and external knowledge
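To make the intended support more tangible, the sketch below gives a deliberately simplified, hypothetical version of step 3: answers to screening questions such as those listed above are mapped onto the five xerostomia cause categories mentioned earlier. The rules are invented for illustration only; they are not the reasoning method developed in this project (a case-based approach is investigated in Paper VI), and the output is meant as a reminder of categories to consider, not a diagnosis.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of step 3: mapping answers to screening questions onto the
// five cause categories (drug-, disease-, radiation-, chemotherapy- and
// cGVHD-induced). Rules and thresholds are illustrative only.
public class DryMouthCauseSketch {

    record Answers(boolean dryDailyOverThreeMonths, int currentDrugCount,
                   boolean headNeckRadiation, boolean recentChemotherapy,
                   boolean knownCGvhd, boolean salivaryGlandSwelling) {}

    static List<String> causesToConsider(Answers a) {
        List<String> causes = new ArrayList<>();
        if (!a.dryDailyOverThreeMonths()) {
            return causes; // screening criterion not met in this simplified sketch
        }
        if (a.currentDrugCount() >= 3)  causes.add("drug-induced");
        if (a.headNeckRadiation())      causes.add("radiation-induced");
        if (a.recentChemotherapy())     causes.add("chemotherapy-induced");
        if (a.knownCGvhd())             causes.add("cGVHD-induced");
        if (a.salivaryGlandSwelling())  causes.add("disease-induced");
        return causes;
    }

    public static void main(String[] args) {
        Answers patient = new Answers(true, 4, false, false, false, true);

        // The clinician still makes the final decision; this is only a reminder.
        System.out.println("Cause categories to consider: " + causesToConsider(patient));
    }
}
```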
This study has been conducted in oral medicine; however, the outcome of the re-
search should be applicable to medical informatics in general, i.e. other diseases,
and other clinical applications.
4.3 User-Centered Design Process
UCD is a process that places emphasis on involving users in the design [46, 47].
As depicted in Figure 4.2, UCD is a circular design process. The UCD process
consists of five steps [47]. These steps are:
1. plan the human-centered process
2. understand and specify the context of use
3. specify the users and organizational requirements
4. produce design solutions
5. evaluate design against requirements
This cycle is repeated until the users are satisfied with the design and the
requirements are met in the design solution.
As recommended [58], a customized UCD process has been applied in this study.
The customization was done in order to make the process suitable for the context
and also the nature of the project, i.e. having openEHR as the underlying EHR
standard. To accomplish the UCD process, several methods were utilized, such as
prototyping, usability tests, and interviews. The UCD methods that were
applied in this project are summarized in Table 4.1 and discussed in more detail
in Paper V.
Figure 4.2: The user-centered design cycle [47, 44], see Section 4.3
Table 4.1: The UCD methods applied in the project

Context of use: identifying stakeholders; informal context of use analysis; interviews; multidisciplinary group sessions.
Requirements: interviews; personas; existing system or competitor analysis; user, usability and organizational requirements; literature study; domain concept modeling.
Design: brainstorming; design guidelines and standards; paper prototyping; software prototyping.
Evaluate: informal user evaluation; informal expert evaluations; usability test; think-aloud protocol; satisfaction questionnaires; post-experience interviews.
The work presented here is the outcome of the first three iterations of this
project. Several users and domain experts were involved in this process. Several
user interface prototypes were developed, evaluated and improved iteratively.
The characteristics of this clinical context that have an effect on applying a
UCD process were detected and analyzed. Moreover, UCD was not only applied
to reach usability in the design, but also to develop domain concept models
to create archetypes. The latter was also accomplished iteratively by involving
clinicians (see Paper III).
The Multidisciplinary Project Team
The project team included the following members: one specialist in dentistry,
who was also one of the stakeholders and initiators of the project (from now
on, we refer to this person as the main clinical partner), three computer sci-
entists, with knowledge of human-computer interaction, usability and software
engineering, and one programmer.
Users Involved
Besides our main clinical partner, who was involved in the project from the
beginning, three more specialists in dentistry and one dental hygienist were
interviewed during this study both for requirements gathering, archetypes de-
velopment, and informal evaluation of the user interface paper prototypes (from
now on, we refer to this group as expert panel). Another group of three special-
ists in dentistry and one dental hygienist were also asked to participate in the
project as test users for user interface evaluations.
4.4 Assumptions
The focus of the project is on the design and development process and on clinicians' involvement in that process. We assume that openEHR, as an interoperability standard, is acceptable for our purposes; the aim of this project is not to prove whether openEHR has been successful in achieving EHR interoperability.
The evaluation process mentioned in this thesis refers to evaluation carried out before releasing the CDS, and thus before the clinical-trial-based evaluations required to prove the reliability of a CDS prior to deployment.
Only the openEHR archetype concept is applied in this work; no templates are created. openEHR templates were left out for several reasons: their immaturity, the lack of implementations, and the fact that it was possible to develop the system without applying them.
4.5 Archetype Development Process
Domain concept modeling (information modeling) was required to understand and specify the data that needs to be gathered and presented in the CDS system. Moreover, the domain modeling process was the first step in gathering knowledge for providing CDS. The openEHR approach suggests archetype creation as a more structured way of modeling domain concepts.
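To illustrate the two-level idea behind archetypes, the following toy Java sketch contrasts a generic, stable data element (the role played by the reference model) with a clinician-authored constraint applied at runtime (the role played by an archetype). The class names (DataElement, ConceptConstraint) and the example concepts and value ranges are invented for illustration; this is not the openEHR reference model or archetype object model, only a minimal sketch of the principle that domain experts can change the concept definitions without changing the software.

import java.util.List;
import java.util.Map;

/**
 * Toy illustration of two-level modeling: the software only knows a generic
 * DataElement (level one), while the clinical concept is expressed as a
 * constraint object (level two) that can be authored and revised by domain
 * experts without recompiling the application. All names are invented.
 */
public class TwoLevelModelingSketch {

    // Level one: a generic, stable information model used by the software.
    record DataElement(String name, double value) {}

    // Level two: a clinician-authored constraint on what a valid element looks like.
    record ConceptConstraint(String name, double min, double max) {
        boolean accepts(DataElement e) {
            return e.name().equals(name) && e.value() >= min && e.value() <= max;
        }
    }

    public static void main(String[] args) {
        // Constraints playing the role of an archetype for a dry mouth examination
        // (the concept names and ranges are made up for illustration).
        Map<String, ConceptConstraint> archetype = Map.of(
                "unstimulated salivary flow (ml/min)",
                new ConceptConstraint("unstimulated salivary flow (ml/min)", 0.0, 5.0),
                "number of medications",
                new ConceptConstraint("number of medications", 0.0, 50.0));

        List<DataElement> recordedData = List.of(
                new DataElement("unstimulated salivary flow (ml/min)", 0.12),
                new DataElement("number of medications", 7.0));

        // The generic software validates data against whatever concepts the experts defined.
        for (DataElement e : recordedData) {
            ConceptConstraint c = archetype.get(e.name());
            System.out.println(e.name() + " valid: " + (c != null && c.accepts(e)));
        }
    }
}

The openEHR reference model and archetypes play these two roles at a much richer level of detail, with archetypes authored in dedicated tools such as the Archetype Editor described below.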
The archetype development was conducted in close collaboration with clinicians, i.e. experts in the domain. The development process was iterative: domain concept models were created and evaluated by the experts in several steps. More information about the process and the tools used to develop archetypes is provided in the following.
Iterative Domain Concept Modeling
The domain concept modeling started with sessions in which our main clinical
partner was asked to think about dry mouth and its related concepts and to
put as much information as possible on paper. Later, he was asked to prepare a
questionnaire based on this question. The reason for this was that the current
clinical system that clinicians in the clinic use in their everyday work is based
on the idea of clinical questionnaires. Questions on the questionnaire were then
categorized based on openEHR concepts; in other words, their logical relation,
e.g. is it related to the patient’s history or examining the patient?
Mind Mapping Diagrams
To better communicate the domain concepts when creating archetypes, simple diagrams were created based on the questionnaires and the outcome of the brainstorming sessions. For this purpose, a mind-mapping application was used so that our expert panel could easily understand and edit the created diagrams. The mind-mapping software used in this step is called XMind 1.
Evaluation
Iterative design of the domain concept model included evaluating the current model against the literature and the experts' opinions, as well as story-based assessment. The information modeling diagrams were improved several times based on the experts' opinions. Several experts were involved in this process to minimize the subjectivity of the design and to be as broad as possible in collecting knowledge. A sample mind map is depicted in Figure 4.3.
1 http://www.xmind.net
Figure 4.3: The domain concept modeling (information modeling) was done iteratively, together with experts on the disease. The XMind application was used to make the diagrams easy to understand, create, and communicate.
Figure 4.4: The Ocean Archetype Editor is a freely available tool for authoring openEHR archetypes. It is meant to be used by clinicians to define specifications of domain concepts. The tool is built on the openEHR reference model and supports the various concept types introduced by it. In this figure, a sample examination archetype is being edited. On the left side of the tool there is a tree structure of the nodes in the archetype, and several tabs support editing various aspects of the archetype, such as its definition and terminology.
Archetype Editor
For authoring archetypes, a free tool named the Ocean Archetype Editor 2 was used. The Ocean Archetype Editor is a visual tool that supports the authoring of openEHR archetypes. The editor is Unicode-enabled, so archetypes can be created in any language, including Swedish; in this project, however, the main language for creating archetypes has so far been English. The editor supports the full set of openEHR data types and saves archetypes in different formats, such as ADL and XML.
Reusing Existing Archetypes
It is recommended that, whenever possible, existing archetypes be reused and/or customized instead of being created from scratch by local developers. Accordingly, we have also tried to reuse some of the existing archetypes in this project. The openEHR community, along with other efforts, has tried to make the shareability and reusability of archetypes possible by creating an online repository of reviewed international archetypes. This repository is called the openEHR Clinical Knowledge Manager and is explained below.
2 http://www.oceaninformatics.com
Figure 4.5: The openEHR clinical knowledge manager (CKM) is a common repository of archetypes. Users interested in modeling clinical content may participate in the creation and/or enhancement of this international set of archetypes via this online repository.
The openEHR Clinical Knowledge Manager
The openEHR clinical knowledge manager (CKM) 3 is an international, online clinical knowledge resource. CKM is a library of clinical knowledge artifacts, which at the moment is limited to openEHR archetypes and templates; it is anticipated that a complementary repository for other related artifacts, such as terminology subsets, will be provided in the future. This repository provides the foundation for interoperable EHRs. The openEHR archetypes available in the CKM undergo a review and publication process in order to become accessible to others. Users interested in modeling clinical content may participate in the creation and/or enhancement of this international set of archetypes.
3 http://www.openehr.org/knowledge
Chapter 5
Summary of the Attached
Papers
In this section, a summary of the appended papers is given, along with how they relate to each other and to the research questions. Figure 5.1 depicts the different areas covered in the publications.
5.1 Paper I
A literature review of the interplay between HCI and CDS development is presented in Paper I, which is related to RQ1: Are usability of clinical decision support and methods to reach and assure usability taken into consideration by developers of clinical decision support? This paper contributes to objective O1.
The paper starts with a brief review of studies dealing with the question of which factors should be considered in the design and development of CDS for it to be acceptable and effective, in order to motivate the importance of HCI, usability and UCD in developing CDSSs. To conduct the literature review, two databases (ScienceDirect 1 and PubMed 2) were searched using boolean combinations of related keywords (usability, human-computer interaction, user-centered design, clinical decision support, medical decision support). This resulted in a total of 153 studies, of which only 17 were relevant to the review.
Various concepts such as iterative design, involving clinicians in design and eval-
uation, qualitative evaluation methods, usability and UCD were the focus of
this review. More about the findings of this literature review can be found in
Section 6.1.
1http://sciencedirect.com
2http://pubmed.org
Figure 5.1: A more empirical view of the study compared to Figure 3.2. Instead of a categorization at an abstract level (i.e. HCI issues, integration into EHRs), the realization of these aspects in the form of specific approaches (i.e. openEHR, user-centered design) is introduced here, together with how they are covered by the attached publications: applying openEHR as the underlying standard for a clinical application (Papers II, III, V); the user-centered design process in a clinical domain (Papers I, III, IV); and the state of the art in developing clinical decision support with a focus on the success factors (Papers I, II).
5.2 Paper II
Paper II includes a literature review conducted in order to answer RQ2: Are integration of clinical decision support into electronic health records and adopting electronic health record standards taken into consideration by developers of clinical decision support? This paper contributes to objectives O1 and O5.
The paper motivates the importance of integrating CDS into EHRs based on findings by other researchers in the field. It discusses how CDS and EHRs support each other's success and, ultimately, improve the quality of care. Based on a search of one database, ScienceDirect, using boolean combinations of related keywords (electronic health record, medical health record, clinical decision support, openEHR, HL7), a total of 98 studies were found, of which only 25 were relevant to the review.
In addition, since the focus of the thesis has been on openEHR, a discussion of
the causes of low adoption of the openEHR approach is presented in this paper
as well. More about the findings of this literature review and the reasons for low
adoption of openEHR can be found in Section 6.2.
5.3 Paper III
Paper III is related to RQ3: Is the approach suggested by openEHR suitable for designing and developing openEHR-aware clinical applications, including clinical decision support systems? and RQ4: How can current successful design and development processes, such as user-centered design, be customized for designing and developing clinical applications and clinical decision support? This paper contributes to objectives O2, O3, and O4.
This paper describes how a UCD approach can be used in a clinical context for
developing an openEHR-based CDSS. The paper includes a proposed customized
UCD approach along with the preliminary results of designing the GUI, domain
concept models and archetypes. Additionally, some challenges faced in adopting
openEHR are discussed in Paper III.
5.4 Paper IV
Paper IV is related to RQ4: How can current successful design and development processes, such as user-centered design, be customized for designing and developing clinical applications and clinical decision support? This paper contributes to objectives O3 and O6.
This paper reports on employing a UCD process in developing a CDSS. Paper IV can be seen as a more detailed version of Paper III, in which the focus is on the UCD process and the applied methods, while details regarding openEHR are omitted. The paper presents the results of the first three iterations of the project, including various prototypes of the system, evaluations, and analysis of the evaluation results. In addition, the characteristics of the clinical context that have an effect on applying a UCD process are identified and analyzed in the paper.
5.5 Paper V
Paper V is indirectly related to RQ3. By "indirectly" we mean that the paper does not include an answer to this question but is a more practical effort dealing with one of the weaknesses of openEHR discussed in Paper II and Paper III. In this paper, we deal with the question: how can developers of openEHR-based clinical applications connect iteratively designed and evaluated user interfaces to the underlying framework with minimum effort? This paper contributes to objectives O2 and O5.
In this paper, a framework for binding pre-designed GUIs to openEHR-based backends is proposed. The proposed framework contributes to the set of options available to developers. This approach can be useful especially for small-scale and experimental systems, as well as for systems in which the quality of the user interface is of great importance.
5.6 Paper VI
Paper VI is indirectly related to RQ5. This means that the paper does not cover the answer to the research question but includes discussions about the opportunities openEHR may provide for knowledge representation and reasoning in CDSSs.
In this paper, a software architecture for the CDSS for dry mouth is proposed. The architecture benefits from an existing openEHR framework and also from a case-based reasoning (CBR) framework. Case-based reasoning is a knowledge representation and reasoning method that has been popular in the clinical domain and, given the available domain knowledge and patient data, it seems to be a proper choice for this project as well.
The paper also includes a methodological approach to developing openEHR
archetypes. In addition, motivations for selecting the knowledge representation
and reasoning method are given in the paper.
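As a rough illustration of what the retrieve step of case-based reasoning involves, the sketch below ranks stored patient cases by a weighted attribute similarity to a new case. It is plain, self-contained Java, and the attribute names, weights and similarity measure are assumptions made purely for illustration; it does not reflect the actual CBR framework, case structure or architecture proposed in Paper VI.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

/**
 * Minimal sketch of the "retrieve" step in case-based reasoning: rank
 * previously solved cases by similarity to a new problem case. Attribute
 * names, weights and the similarity measure are illustrative assumptions.
 */
public class CaseRetrieval {

    record PatientCase(String id, Map<String, Double> attributes, String assessment) {}

    // Feature weights reflecting assumed clinical importance (illustrative only).
    private static final Map<String, Double> WEIGHTS =
            Map.of("salivaryFlow", 0.5, "medicationCount", 0.3, "age", 0.2);

    /** Weighted similarity in [0, 1]; attribute values are assumed normalized to [0, 1]. */
    static double similarity(PatientCase a, PatientCase b) {
        double score = 0.0;
        for (Map.Entry<String, Double> w : WEIGHTS.entrySet()) {
            double va = a.attributes().getOrDefault(w.getKey(), 0.0);
            double vb = b.attributes().getOrDefault(w.getKey(), 0.0);
            score += w.getValue() * (1.0 - Math.abs(va - vb));
        }
        return score;
    }

    /** Return the k stored cases most similar to the query case. */
    static List<PatientCase> retrieve(PatientCase query, List<PatientCase> caseBase, int k) {
        List<PatientCase> ranked = new ArrayList<>(caseBase);
        ranked.sort(Comparator.comparingDouble((PatientCase c) -> similarity(query, c)).reversed());
        return ranked.subList(0, Math.min(k, ranked.size()));
    }

    public static void main(String[] args) {
        List<PatientCase> caseBase = List.of(
                new PatientCase("c1",
                        Map.of("salivaryFlow", 0.1, "medicationCount", 0.8, "age", 0.7),
                        "drug-induced xerostomia"),
                new PatientCase("c2",
                        Map.of("salivaryFlow", 0.6, "medicationCount", 0.2, "age", 0.3),
                        "no significant hyposalivation"));

        PatientCase query = new PatientCase("new",
                Map.of("salivaryFlow", 0.15, "medicationCount", 0.7, "age", 0.6), null);

        // The assessments of the retrieved cases would be reused and adapted
        // in the later reuse/revise/retain steps of the CBR cycle.
        retrieve(query, caseBase, 1).forEach(c -> System.out.println(c.id() + ": " + c.assessment()));
    }
}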
Chapter 6
Thesis Contributions
The major contributions of this study are the result of accomplishing the objectives O1-O6 (see Chapter 5) and answering the research questions RQ1-RQ4. These contributions are documented as results in the attached papers. A brief summary of the contributions is provided in the following.
6.1 The Answer to RQ1
The aim of efforts in the area of CDS is to develop systems that achieve wider adoption of CDS and, accordingly, improve the quality of health care. Various studies have dealt with the question of which factors should be considered in the design and development of CDS for it to be acceptable and effective. According to these studies, the success factors of CDS can be divided into two main categories: technical and non-technical (i.e. human-related) factors. Most of these human-related factors are issues covered by the HCI discipline and related to the concept of usability. HCI suggests methods and approaches to address the human-related (i.e. user-related) factors and to assure the usability of applications.
Based on a literature review (see Paper I), one can conclude that while various researchers have introduced human-related factors as important to the success of CDSSs, HCI is still not routine practice in this field. Only 17 studies were relevant to the literature review, whereas ScienceDirect alone contains more than 100 practical studies on CDSSs. In particular, when it comes to viewing UCD as a life-long process, very few studies can be found that have covered this aspect in developing a CDSS. It was observed that some of the recommended UCD methods are not applied, or are rarely applied, in CDS development; task analysis, usability expert reviews, and heuristic evaluation are among those rarely applied methods. Finally, there are still cases in which evaluation of the system (our focus is on qualitative evaluations) is only conducted after system deployment. All in all, there is a need for further adoption of HCI (including usability) in this field.
6.2 The Answer to RQ2
Taking standards into consideration in any clinical application (and generally
any information system) is very important [14]. Since CDS operates by utilizing
both patient/organizational-specific data and clinical knowledge, it is important
to take the standards that support each of these areas into account [14].
Only 25 studies were found in ScienceDirect to have considered integration of CDS into EHRs (out of more than 100 studies that document CDS developments). For more information, please refer to Paper II. We did not find any study that reports on the implementation of a CDS applying openEHR. The only study that considers the intersection between openEHR and CDS is [59], in which the idea of integrating guideline rules into openEHR archetypes is discussed.
The selected articles were reviewed in order to find out whether they consider any
of the standards related to CDS (i.e. EHR standards, guideline representation
standards, and terminology or vocabulary standards). It was observed that the standardization of guidelines and the integration of guidelines into EHRs have been discussed in several studies [59, 60, 61].
The idea of applying standards even in EHR systems is still not mature, so it is not surprising that researchers rarely consider this in CDS development. For instance, of the 25 studies we reviewed, only 6 had considered EHR standardization (all of them applied HL7).
In conclusion, while theory supports the benefits offered by integrating CDS into EHRs, a great deal of effort is still required to reach an acceptable level of integration in practice, especially with respect to the standardization aspects of EHRs.
Moreover, if we combine these results with the results of the literature review focusing on HCI (see Section 5.1), it is apparent that only a small number of studies have considered both HCI and EHR integration while developing CDS, as depicted in Figure 6.1.
Figure 6.1: Of the studies that considered integration of CDS into EHRs, 8 (32.0%) also considered HCI in developing CDS, while 17 (68.0%) did not.
6.3 The Answer to RQ3
In this study, investigations were carried out into various aspects of develop-
ing openEHR-based applications with the focus on the design and development
“process” and with the aim of developing “usable” CDS.
As discussed in Section 2.3, openEHR suggests defining various “roles” in devel-
oping clinical applications, and to divide responsibilities among different roles.
The openEHR two-level software engineering, as might be expected, is compat-
ible with the multi-disciplinary team work suggested in UCD. The clinicians’
expertise can be used by involving them in the domain concept modeling as sug-
gested by openEHR and additionally in user interface design as recommended in
the HCI field. The two-level software engineering (see Figure 2.1) suggested by
the openEHR community is not by itself enough for developing user-friendly ap-
plications inasmuch as it does not consider the importance of involving clinicians
in designing and evaluating the GUI. To develop usable clinical applications, a
close collaboration between clinicians and IT developers is needed. Moreover,
automatic user interface generation results in interfaces with poor usability. This
is discussed further in the following.
Despite its advantages, the openEHR standard suffers from a rather low adoption rate. Some possible reasons for the low adoption are the complexity of the standard, the lack of documentation and training for developers, and the limited set of tools and frameworks available to ease application development (see Paper II). The openEHR community seems to have mostly focused on representing and modeling domain concepts and on perfecting the specifications. However, to make openEHR more practical, application developers need to be supported with APIs, frameworks and tools.
Surely, a number of application development projects exist such as the open
source health information platform [62] (OSHIP), the open EHR-Gen framework [63], GastrOS [64], and the openEHR reference framework and application [65] (opereffa). To the best of our knowledge, current openEHR frameworks and tools are based on the idea that clinicians design and create archetypes (and templates) using existing tools. A GUI, or a set of GUI artifacts, is then generated from these archetypes and templates, and improving the GUI design requires manual adjustment of the generated GUI or its style files (depicted in Figure 6.2-A). In contrast to this automatic or semi-automatic approach, there is an alternative in which no GUI is generated from the archetypes. Instead, the interface is designed by experts based on the users' requirements and is afterwards connected to the archetypes designed by domain experts (illustrated in Figure 6.2-B). Unfortunately, the current frameworks do not provide sufficient support for this approach. Therefore, we have developed an extension, a Java desktop user interface data binding layer, to one of the openEHR application development frameworks, opereffa, with the aim of supporting openEHR Java application developers who build applications according to this approach.
Figure 6.2: The two development models. In model A (left), supported by opereffa, archetypes and templates are translated into GUI artifacts that the developer then adjusts manually; in model B (right), the developer designs the GUI, which is mapped to the archetypes through data binding. In both models, the entered data is validated against AOM instances and stored as reference model (RM) objects in the openEHR-based EHR.
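To make the idea behind approach B in Figure 6.2 more concrete, the following is a minimal sketch of what such a user interface data binding layer could look like. It is plain Java with Swing and is purely illustrative: the class and method names (ArchetypeBinder, bind, collectValues) and the archetype paths are hypothetical and do not correspond to the actual opereffa extension described in Paper V.

import javax.swing.JTextField;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative sketch of approach B in Figure 6.2: a hand-designed GUI is
 * bound to openEHR archetype node paths instead of being generated from
 * the archetypes. All names and paths here are hypothetical.
 */
public class ArchetypeBinder {

    // Maps an archetype node path to the GUI widget that edits it.
    private final Map<String, JTextField> bindings = new LinkedHashMap<>();

    /** Register a binding between an archetype path and a pre-designed widget. */
    public void bind(String archetypePath, JTextField field) {
        bindings.put(archetypePath, field);
    }

    /**
     * Collect the values entered in the GUI, keyed by archetype path.
     * A real binding layer would build reference model (RM) objects and
     * validate them against the archetype (AOM instance) before storage.
     */
    public Map<String, String> collectValues() {
        Map<String, String> values = new LinkedHashMap<>();
        for (Map.Entry<String, JTextField> entry : bindings.entrySet()) {
            values.put(entry.getKey(), entry.getValue().getText());
        }
        return values;
    }

    public static void main(String[] args) {
        ArchetypeBinder binder = new ArchetypeBinder();

        // Widgets designed and evaluated iteratively, independently of the archetypes.
        JTextField symptomDuration = new JTextField("3 months");
        JTextField salivaryFlowRate = new JTextField("0.1");

        // Hypothetical archetype paths for a dry mouth examination archetype.
        binder.bind("/data/events/items[at0001]/value", symptomDuration);
        binder.bind("/data/events/items[at0002]/value", salivaryFlowRate);

        // The collected, path-keyed values would then be handed to an
        // openEHR persistence layer for validation and storage.
        System.out.println(binder.collectValues());
    }
}

The point of the sketch is only to show where such a binding sits: the GUI remains free to be designed and evaluated iteratively, while the archetype paths provide the contract with the openEHR-based backend.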
Figure 6.3: The customized user-centered design process applied in this study. A concept modeling loop is run with a domain expert panel (understand and specify the concepts; model the concepts; evaluate the models until they are agreed upon among the experts involved), and a design loop is run with an end user panel (understand and specify the requirements, considering the context; prototyping; evaluate the design until the requirements are met); the effect of changes in the models on the user interface is specified between the two loops.
6.4 The Answer to RQ4
In this study, we have applied a design and development process that combines
UCD and openEHR principles. The suggested approach considers active involve-
ment of the clinicians in design and evaluation of the archetypes, and also the
user interface. Moreover, the effect of the archetypes on the user interface has
been taken into consideration. This customized UCD approach is depicted in
Figure 6.3. The proposed UCD process is compatible with the openEHR software development approach illustrated in Figure 6.2-B (see the previous section for details). More about this UCD process can be found in Paper III.
In addition, we have tried to learn from applying UCD in a clinical context.
Characteristics of the context, users and tasks that may have an effect on apply-
ing UCD are also identified in this study. These characteristics should be taken
into consideration in design and development of clinical applications including
CDS (see Paper IV).
Chapter 7
Future Work
The main future direction of this study is to address RQ5:
Does openEHR offer any new opportunities for clinical decision sup-
port in terms of knowledge representation and reasoning?
Various sub-problems related to this RQ would be:
1. Can openEHR be used to improve the process of knowledge representation
and reasoning in clinical decision support?
2. Can clinical decision support benefit from structured, quality validated
openEHR-based electronic health records?
3. Is it feasible and practical to integrate clinical decision support interven-
tions into openEHR-based electronic health records?
Moreover, there are other aspects of the study that need further investigation:
1. What are the challenges in applying user-centered design in a clinical context, and how can these challenges be tackled?
2. Is the idea of automatic user interface generation acceptable from a human-computer interaction perspective?
Chapter 8
Conclusion
This thesis investigates the question: how can usable openEHR-aware clinical
decision support be designed and developed in order to improve the quality of
health care? In order to answer this question, several sub-problems were identi-
fied to be investigated, and accordingly, several objectives to be accomplished.
Both theoretical and empirical research strategies were used in order to address
the identified research questions (see Chapter 3). Of the five specified research
questions, four are answered in this thesis.
Analysis of the state of the art in the interplay between HCI and CDS, and also in the intersection between CDS and EHRs, revealed that consideration of both HCI and integration of CDS into EHRs is more appreciated in theory than in practice and has yet to reach an acceptable level (see Paper I and Paper II).
Moreover, the experience in designing an openEHR-based clinical application
revealed that apart from benefits offered by the openEHR approach such as
defining different roles and involvement of users in defining domain concepts,
there are various shortcomings that should be improved, for instance the insuffi-
cient support for openEHR application developers. Additionally, it was observed
that there are characteristics of the domain, tasks and users in the domain that
developers should be informed about while applying UCD methods.
Finally, several future directions of the research were presented, with a focus both on the UCD development process and on investigating openEHR in more depth (see Chapter 7).
Bibliography
[1] D. Bates and A. Gawande, “Improving safety with information technology,”
New England Journal of Medicine, vol. 348, no. 25, pp. 2526–2534, 2003.
[2] A. Kushniruk, M. Triola, B. Stein, E. Borycki, and J. Kannry, “The relationship of usability to medical error: an evaluation of errors associated with usability problems in the use of a handheld application for prescribing medications,” in Medinfo 2004: Proceedings of the 11th World Congress on Medical Informatics, vol. 107, p. 1073, IOS Press, Jan. 2004.
[3] J. Walker, E. Bieber, and F. Richards, Implementing an electronic health
record system. Springer Verlag, 2006.
[4] P. Reizenstein, “Safety problems in health care cause 100 avoidable deaths,”
Lakartidningen, vol. 84, pp. 1680–1681, 1987.
[5] H. Hoff, “Motion 2009/10: So278 Patient safety in healthcare - The Parliament.” Website, 2010. http://www.riksdagen.se/Webbnav/index.aspx?nid=410&typ=mot&rm=2009/10&bet=So278.
[6] L. Kohn, J. Corrigan, M. Donaldson, and Others, To err is human: building
a safer health system. Washington, D.C.: NATIONAL ACADEMY PRESS,
1999.
[7] C. Vincent, G. Neale, and M. Woloshynowych, “Adverse events in British
hospitals: preliminary retrospective record review,” Bmj, vol. 322, no. 7285,
p. 517, 2001.
[8] M. R. Chassin, “The Urgent Need to Improve Health Care Quality: In-
stitute of Medicine National Roundtable on Health Care Quality,” JAMA:
The Journal of the American Medical Association, vol. 280, pp. 1000–1005,
Sept. 1998.
[9] “Crossing the Quality Chasm: The IOM Health Care Quality Initiative.” Institute of Medicine (IOM) Website, 2010. http://www.iom.edu/Global/NewsQuality-Chasm-The-IOM-Health-Care-Quality-Initiative.aspx.
[10] N. Graham, Quality in health care: Theory, application, and evolution.
Aspen publishers, Inc., 1995.
[11] T. Graham, A. Kushniruk, M. Bullard, B. Holroyd, D. Meurer, and
B. Rowe, “How usability of a web-based clinical decision support system
has the potential to contribute to adverse medical events,” in AMIA An-
nual Symposium Proceedings, vol. 2008, p. 257, American Medical Infor-
matics Association, Jan. 2008.
[12] A. Hasman, “Challenges for medical informatics in the 21st century,” International journal of medical informatics, vol. 44, no. 1, pp. 1–7, 1997.
[13] R. Greenes, Clinical decision support: the road ahead. Academic Press,
2007.
[14] J. Osheroff, E. Pifer, J. Teich, D. Sittig, and R. Jenders, Improving outcomes
with clinical decision support: An implementer’s guide. HIMSS, 2005.
[15] D. F. Sittig, A. Wright, J. a. Osheroff, B. Middleton, J. M. Teich, J. S.
Ash, E. Campbell, and D. W. Bates, “Grand challenges in clinical decision
support.,” Journal of biomedical informatics, vol. 41, pp. 387–92, Apr. 2008.
[16] R. Greenes, M. Sordo, D. Zaccagnini, M. Meyer, and GJ, “Design of a
standards-based external rules engine for decision support in a variety of
application contexts: report of a feasibility study at Partners HealthCare
System,” Medinfo, 2004.
[17] K. Kawamoto, C. A. Houlihan, E. A. Balas, and D. F. Lobach, “Improving
clinical practice using clinical decision support systems: a systematic review
of trials to identify features critical to success.,” BMJ (Clinical research ed.),
vol. 330, no. 7494, p. 765, 2005.
[18] D. Hunt, R. Haynes, S. Hanna, and K. Smith, “Effects of computer-based
clinical decision support systems on physician performance and patient out-
comes: a systematic review,” Jama, vol. 280, p. 1339, Oct. 1998.
[19] J. Bennett and P. Glasziou, “Computerised reminders and feedback in med-
ication management: a systematic review of randomised controlled trials,”
Medical Journal of Australia, vol. 178, no. 5, pp. 217–222, 2003.
[20] M. Trivedi, J. Kern, A. Marcee, B. Grannemann, B. Kleiber, T. Bettinger,
K. Altshuler, and A. McClelland, “Development and Implementation of
Computerized Clinical Guidelines : Barriers and Solutions,” Methods of
information in medicine, vol. 41, no. 5, pp. 435–442, 2002.
[21] J. Osheroff, J. Teich, B. Middleton, E. Steen, A. Wright, and D. Detmer,
“A roadmap for national action on clinical decision support,” Journal of
the American medical informatics association, vol. 14, no. 2, p. 141, 2007.
[22] J. Anderson, “Increasing the acceptance of clinical Information,” MD com-
puting: computers in medical practice, vol. 16, no. 1, p. 62, 1999.
[23] T. Wetter, “Lessons learnt from bringing knowledge-based decision support
into routine use.,” Artificial intelligence in medicine, vol. 24, pp. 195–203,
Mar. 2002.
[24] E. Berner, Clinical Decision Support Systems: Theory and Practice (Health
Informatics). New York, NY 10013, USA: Springer, 2007.
[25] A. Hasman, R. Haux, and A. Albert, “A systematic view on medical informatics,” Computer methods and programs in biomedicine, vol. 51, pp. 131–9, Nov. 1996.
[26] R. Haux, “Aims and tasks of medical informatics,” International journal of
medical informatics, vol. 44, pp. 9–20; discussion 39–44, 45–52, 61–6, Mar.
1997.
[27] R. Haux, “Medical informatics: Past, present, future.,” International jour-
nal of medical informatics, vol. 79, pp. 599–610, Sept. 2010.
[28] M. Collen, “Origins of medical informatics,” Western Journal of Medicine,
vol. 145, pp. 778–785, 1986.
[29] R. S. Ledley and L. B. Lusted, “Reasoning Foundations of Medical Diag-
nosis: Symbolic logic, probability, and value theory aid our understanding
of how physicians reason,” Science, vol. 130, pp. 9–21, July 1959.
[30] I. Iakovidis, “Towards personal health record: current situation, obstacles
and trends in implementation of electronic healthcare record in Europe.,”
International journal of medical informatics, vol. 52, no. 1-3, pp. 105–15,
1998.
[31] B. Blobel, “Advanced and secure architectural EHR approaches.,” Interna-
tional journal of medical informatics, vol. 75, no. 3-4, pp. 185–90, 2006.
[32] J. Ash and D. Bates, “Factors and forces affecting EHR system adoption:
report of a 2004 ACMI discussion,” Journal of the American Medical Infor-
matics, pp. 8–12, 2005.
[33] V. Stroetmann, D. Kalra, P. Lewalle, J. Rodrigues, and KA, “Semantic
Interoperability for Better Health and Safer Health Care,” Deployment and
Research, no. January, 2009.
[34] P. Schloeffel, T. Beale, G. Hayworth, S. Heard, and H. Leslie, “The relation-
ship between CEN 13606, HL7, and openEHR,” in In Health Informatics
Conference (2006), vol. 7, p. 24, Health Informatics Society of Australia,
2006.
[35] “openEHR.” Website, 2010. http://openEHR.org.
[36] L. Bird, A. Goodchild, and Z. Tun, “Experiences with a two-level modelling
approach to electronic health records,” Journal of Research and Practice in
Information Technology, vol. 35, pp. 121–138, Apr. 2003.
[37] T. Beale and S. Heard, “openEHR Architecture Overview.” Website, 2008. http://www.openehr.org/releases/1.0.2/architecture/overview.pdf.
[38] M. Eichelberg, T. Aden, J. Riesmeier, A. Dogac, and G. B. Laleci, “A survey
and analysis of Electronic Healthcare Record standards,” ACM Computing
Surveys, vol. 37, pp. 277–315, Dec. 2005.
[39] “CEN.” Website, 2011. http://pangea.upv.es/en13606.
[40] J. Gupta, G. Forgionne, and M. Mora, Intelligent Decision-making Support
Systems: Foundations, Applications and Challenges. Springer-Verlag New
York, Inc. Secaucus, NJ, USA, 2006.
[41] T. Hewett, R. Baecker, S. Card, and T. Carey, “ACM SIGCHI Curricula
for Human-Computer Interaction,” 1996.
[42] M. G. Helander, T. K. Landauer, and P. V. Prabhu, Handbook of Human-
Computer Interaction. Elsevier Science Pub Co, Aug. 1998.
[43] ISO 9241-11, Ergonomic requirements for office work with visual display terminals (VDTs), Part 11: Guidance on usability. Geneva, Switzerland: International Organization for Standardization, 1998.
[44] M. Maguire, “Methods to support human-centred design,” International
Journal of Human-Computer Studies, vol. 55, pp. 587–634, Oct. 2001.
[45] H. Sharp, Y. Rogers, and J. Preece, Interaction Design: Beyond Human-
Computer Interaction. Wiley, 2007.
[46] K. Vredenburg, S. Isensee, and C. Righi, User-Centered Design: An Inte-
grated Approach. Prentice Hall PTR, Upper Saddle River, NJ, 2002.
[47] ISO 13407, Human-Centred Design Process for Interactive Systems. Geneva, Switzerland: International Organization for Standardization, 1999.
[48] “User-Centered Design.” Website, 2010. https://www-01.ibm.com/
software/ucd/ucd.html.
[49] B. Oates, Researching information systems and computing. Sage Publica-
tions Ltd, 2006.
[50] E. Årsand and G. Demiris, “User-centered methods for designing patient-centric self-help tools,” Informatics for Health and Social Care, vol. 33, no. 3, pp. 158–169, 2008.
[51] D. W. Bates et al., “Ten Commandments for Effective Clinical Decision Support: Making the Practice of Evidence-based Medicine a Reality,” Journal of the American Medical Informatics Association, vol. 10, pp. 523–530, 2003.
[52] K. Kawamoto and D. Lobach, “Proposal for fulfilling strategic objectives of
the US roadmap for national action on decision support through a service-
oriented architecture leveraging HL7 services,” Journal of the American
medical, pp. 146–155, 2007.
[53] I. Cho, J. Kim, J. H. Kim, H. Y. Kim, and Y. Kim, “Design and im-
plementation of a standards-based interoperable clinical decision support
architecture in the context of the Korean EHR.,” International journal of
medical informatics, vol. 9, pp. 611–622, July 2010.
[54] Y. Huang, L. Noirot, and K. Heard, “Migrating toward a next-generation
clinical decision support application: the BJC HealthCare experience,” in
AMIA Annual, pp. 344–8, Jan. 2007.
[55] H. Aveyard, Doing a literature review in health and social care: a practical
guide. Open University Press, 2007.
[56] S. Porter, “An update of the etiology and management of xerostomia,” Oral
Surgery, Oral Medicine, Oral Pathology, Oral Radiology & Endodontics,
vol. 97, pp. 28–46, Jan 2004.
[57] M. Jontell, U. Mattsson, and O. Torgersson, “MedView: an instrument
for clinical research and education in oral medicine.,” Oral surgery, oral
medicine, oral pathology, oral radiology, and endodontics, vol. 99, pp. 55–
63, January 2005.
[58] J. Gulliksen, B. Göransson, I. Boivie, S. Blomkvist, J. Persson, and Å. Cajander, “Key principles for user-centred systems design,” Behaviour & Information Technology, vol. 22, pp. 397–409, January 2003.
[59] R. Chen, P. Georgii-Hemming, and H. Åhlfeldt, “Representing a Chemotherapy Guideline Using openEHR and Rules,” Medical Informatics, pp. 653–657, 2009.
[60] S. Barretto, J. Warren, A. Goodchild, L. Bird, S. Heard, and M. Stumptner,
“Linking guidelines to Electronic Health Record design for improved chronic
disease management,” in AMIA Annual Symposium Proceedings, pp. 66–70,
American Medical Informatics Association, Jan. 2003.
[61] G. Schadow, D. C. Russler, and C. J. McDonald, “Conceptual alignment
of electronic health record data with guideline and workflow knowledge.,”
International journal of medical informatics, vol. 64, pp. 259–74, Dec. 2001.
[62] “OSHIP.” Website, 2011. http://www.oship.org/.
[63] “Open-EHR-Gen.” Website, 2011. http://code.google.com/p/open-ehr-gen-framework/.
[64] “GastrOS.” Website, 2011. http://sourceforge.net/projects/gastros.
[65] “opereffa.” Website, 2011. http://opereffa.chime.ucl.ac.uk/introduction.jsf.
Part II
Publications
Paper I
Towards Interaction Design
in Clinical Decision Support Development:
A Literature Review
Hajar Kashfi
International Journal of Medical Informatics, Elsevier.
(manuscript submitted)
Towards Interaction Design
in Clinical Decision Support Development:
A Literature Review
H. Kashfi a,1
a Department of Applied Information Technology
Chalmers University of Technology
SE–412 96 Göteborg, Sweden
Abstract
Aim: After motivating the importance of human-related factors in developing highly adoptable clinical decision support systems (CDSS) according to previous studies, this paper presents the results of a review of the published CDSS literature, with a focus on interaction design (ID) activities, which naturally deal with human-related factors in designing interactive systems. Methods: Two related databases were searched without any limitation on publication year. The search yielded a final collection of 17 studies. The relevance criteria were (i) discussing the development and/or evaluation of a CDSS and (ii) taking one or several ID activities into account. Results: It was observed that the main emphasis of the literature has so far been on evaluation after design, which is more compatible with the traditional view of human-computer interaction (HCI) than with ID. It was also observed that evaluation methods based on user participation were used more often than evaluation methods based on usability expert participation. The review results indicate a need for disseminating both the knowledge gained by experience and the existing ID knowledge among CDSS developers. Conclusion: Human-related factors are considered to play an important role in developing highly adoptable CDSSs and in overcoming the chronic problem of CDSSs not being used in practice. ID (or, more traditionally, HCI) deals with such factors with the aim of developing usable interactive products. Applying the existing ID knowledge and adopting various methods to support ID in the clinical domain, and especially in developing CDSSs, provides highly valuable opportunities for developing more usable CDSSs and clinical applications in general. Nonetheless, the adoption rate of ID activities among CDSS developers is low. Educating CDSS developers about such existing approaches and methods, together with disseminating the knowledge gained by experience in applying ID to CDSS development, are two means to improve the current situation.
Keywords: clinical decision support, clinical decision support system, user-centered design,
usability, qualitative evaluation, interactive system design, user interface evaluation,
human-computer interaction, interaction design
Corresponding author
Email address: hajar.kashfi@chalmers.se (H.Kashfi)
Preprint submitted to International Journal of Medical Informatics May 11, 2011
1. Introduction
More than 40 years of research have been spent on clinical decision support (CDS), but as many studies reveal, clinical decision support systems (CDSS) have been more appreciated in theory than in practice [1, 2]. Many of the developed CDSSs are research prototypes designed for a specific context [2, 3] and are therefore not suitable for mainstream use. Consequently, CDS still suffers from a low adoption rate [4, 5, 1, 2, 6]. Several researchers have dealt with specifying the challenges that developers of such systems face and with defining success factors. The final aim of these studies is to provide insight into developing more acceptable and adoptable CDSSs. Many of these challenges and success factors belong to the category of human-related factors and fall under the definition of usability. This has been the focus of the interaction design (ID) research field for years. ID suggests various methods to design and evaluate interactive systems, with a focus on the users of such systems, with the aim of achieving usability and user satisfaction. The interesting question is whether developers of CDSSs have applied this existing knowledge to develop more acceptable CDSSs, at a time when usability evaluation is considered crucial for developing interactive applications.
In an effort to discover the practical attitude of CDSS developers towards human-related factors and ID, this paper reviews the related literature with respect to ID and the related concepts. We will start with a review of the difficulties and success factors reported in developing CDSSs. Then, the results from the literature review are presented. We will end with an analysis of the results and suggestions for further studies.
2. Background
The issue of applying computers in clinical decision making is a challenging problem [1]. This difficulty is not limited to complex decision support processes. Even in the easiest cases, the process of designing, developing, deploying and maintaining a CDSS requires a huge amount of effort.
Several studies have been conducted in order to answer the question of which factors should be considered in the design and development of CDS for it to be acceptable and effective. A broad range of difficulties and success factors in developing CDS has been identified accordingly. A discussion of what has been observed in these studies is presented in the following; in addition, a summary of the factors reported in these studies is tabulated in Table 1.
Technical Factors
A great deal of focus is on the issues related to knowledge extraction and maintenance.
Some of the technical factors reported in these studies are: compatibility with the legacy
systems [4], knowledge extraction [7, 4, 8, 6, 9], reliability of the knowledge, and tak-
ing knowledge from reliable sources [9, 7], maintaining, improving and monitoring the
knowledge base [9, 8, 8, 4, 7, 6], and creating new types of CDS interventions [5].
One of the other technical factors observed in the literature is integration of CDS into the
systems that are used to record, organize, and retrieve digitally stored health care informa-
tion of individual patients (electronic health record (EHR) systems). Integration of CDS
into EHR systems as one of the factors that are beneficial in wider adoption of CDS, is
advocated in several studies [2, 1, 5, 4, 7, 10]. Several studies discuss that the delivery of decision support through the EHR can improve the quality of care [4, 3, 11, 12]. Overall, the EHR is considered to be a leverage for CDS [6, 1].
Efficiency
Being time efficient is considered to be one of the factors that affect the adoption of CDS [9, 8]. Efficient data entry is another factor discussed in the literature [8, 13]. One suggestion to improve efficiency has been to remove the tedious duplicate data entry required by some CDSSs [7] and to utilize existing data, for instance by integrating the CDS into EHR systems.
Users’ interaction with the system
The importance of user interface design is reported in various studies [14, 7, 5]. Consider-
ing the interaction of the user with the system is important in identifying how the system
is expected to be used [7]. There are many CDSSs that are reported to be implemented but
are not used in practice due to the poor user interface design among other factors [14].
In the ranked list of ten grand challenges in CDS development by Sittig et al. [5], the most important challenge is considered to be improving the user interface. It should be noted that clinical information systems with low usability not only fail to improve patient care and reduce clinical errors, but may even have the opposite effect [15, 16, 17, 18, 19, 20].
According to the literature, some detailed factors related to the interaction of users with CDS are as follows: being understandable and controllable [9], anticipating when and what information is needed and delivering the best available knowledge in real time at the point of care to meet that need [8, 6, 21], and changing direction rather than stopping clinicians [8].
Workflow
Besides proper design of the user interface, other human-related factors such as cultural issues, users' workflow and the complex context of use have been considered important factors in developing CDS [22, 7]. It has been observed that the success of CDS is related to its integration into the complex clinical environment [22]. CDS should fit and be integrated into the users', i.e. clinicians', workflow [8, 21, 4], especially since clinicians are resistant to changes in their workflow [8]. Integration of the CDS into both the culture and the care workflow is considered necessary [7] in order for these systems to be used optimally.
Users’ satisfaction and acceptance
As mentioned before, it has been demonstrated that CDS can have a potential influence
on health care, but there will be no effects on improvement in health care if the developed
system is not accepted by users and is not used in practice [7]. In order to design an accept-
able CDS it is required to understand clinicians’ characteristics [8]. Adoption planning of
a CDS should be done in relation to the end users [9]. It is recommended that in case of
using commercial products, the system should be customizable for local use [13]. Finally,
users should be trained properly for using the system and they should also be informed
about limitations of providing automatic CDS so that their expectations are adjusted [13].
All in all, user acceptance and satisfaction are considered to be important factors in adopt-
ing CDS [4], therefore, measuring and considering the user reaction to the CDS is crucial
in order to develop a successful application [23].
Monitoring, getting feedback and evaluating
In order to improve the CDS and also to keep the knowledge base updated, maintenance of the CDS and feedback from clinicians are needed [4, 8]. Clinicians should somehow be motivated to apply CDS [7]. The effectiveness of CDS should be evaluated by the health organizations [7]. Moreover, health organizations should monitor the usage of the system after deployment [8, 21, 13, 23, 1] to assure proper adoption of CDS by clinicians.
Wetter [9], 2002: planning in relation to the end users; time and cost efficiency; understandability and controllability; reliability of the knowledge base. 1
Trivedi et al. [23], 2002: human-related issues; organizational issues; technical issues.
Bates et al. [8], 2003: efficiency; real-time delivery of information; presenting correct information at the correct time; fitting into the user's workflow; changing direction rather than stopping; understanding clinicians' characteristics; simple interventions; efficient data entry; monitoring and getting feedback; maintaining the knowledge base.
Kawamoto et al. [21], 2005: integration of automatic decision support into the clinician's workflow; timely access to decision support at the point of care; providing actionable recommendations; monitoring the use of the system after deployment; computer-aided decision support. 1
Garg et al. [4], 2005: user acceptance; integration into the user workflow; being compatible with legacy systems; maturity of the developed system; system maintenance.
Table 1: Clinical Decision Support Challenges (non-technical and technical difficulties and success factors reported per study; continued on next page).
1 In this paper, 22 technical and non-technical factors are specified, five of which are considered to be highly correlated to the success of CDS.
Osheroff et al. [6], 2007: social and technical aspects of developing a CDSS; 2 making the best knowledge available when needed; high adoption and effective use; the natural complexity of decision making; knowledge extraction issues; continuous improvement of knowledge and CDS interventions.
Berner et al. [13], 2007: data entry process; user interface and vocabulary; motivation of use for clinicians; informing users about the limitations; user training; evaluation of the system; customizing commercial applications for local use; monitoring the usage of the system after deployment; knowledge-base creation and maintenance; a reliable knowledge base; monitoring and maintaining the knowledge base.
Sittig et al. [5], 2008: improving the effectiveness of CDS interventions; creating new interventions; disseminating existing CDS knowledge and interventions.
Cho et al. [10], 2010: defining a national CDS architecture; 3 integration into the EHR context; having access to a sustainable and robust CDS; a sharable and reusable knowledge base.
Table 1: Clinical Decision Support Challenges (continued from previous page).
2 In this paper, three pillars for increasing the widespread adoption of effective CDS are demonstrated. Several activities are also recommended considering these three pillars.
3 Cho et al. in 2010 published their experience in defining and implementing a national CDS architecture with the aim of broad adoption of CDS services in Korea. The goal of this study has been to “achieve widespread adoption of EHR by health care organizations to improve the quality, safety, and efficiency of care” and “to establish a sharable lifetime EHR system to improve the quality of care and reduce health care costs incurred by care redundancy in Korea”. They demonstrate that to reach these goals it should be assured that clinical application providers have access to a “sustainable and robust CDS” when it is needed.
Putting technical factors aside, the related literature suggests the importance of (i) understanding the users, i.e. clinicians, their needs and characteristics, and designing a system accordingly; (ii) understanding the users' workflow and designing a system that fits into it and requires minimal changes to it; (iii) the user interface and the way users interact with the system; (iv) the efficiency of the system; and (v) getting feedback from users and assuring users' satisfaction and acceptance.
The interesting point is that the aforementioned issues have not been in focus only in the CDSS field. Around 30 years ago, the need to investigate such factors in developing interactive systems in various domains resulted in the creation of a multidisciplinary field concerned with designing, evaluating and implementing usable interactive systems with a focus on their users, i.e. humans [24, 25]. This field is called human-computer interaction (HCI) and was later extended to interaction design (ID) [26]. More about interaction design is discussed in the following section.
2.1. Interaction Design
ID concerns the development of interactive systems that are usable [26]. By usable it is meant
easy to learn, effective to use and providing an enjoyable user experience [26]. User experience
is how a user feels about a specific product and includes various qualities such as satisfaction
and pleasure [26]. ID is concerned with theory, research and practice of design [26] and covers
a wider scope of issues, topics and paradigms compared to what HCI has traditionally been con-
cerned with [26]. Sharp [26] defines ID as “designing interactive products to support the way
people communicate and interact in their everyday and working lives.” ID relies on understand-
ing the people, i.e. users, what they desire to perform, i.e. tasks and goals, their characteristics
and capabilities, and the technology available. Moreover, it involves the knowledge of identify-
ing requirements and evolving them into a proper design solution [26]. The ID process involves
the following activities [26]:
“Identifying needs and establishing requirements for the user experience”
“Developing alternative designs that meet those requirements”
“Building interactive versions of the design”
“Evaluating what is being built throughout the process and the user experience it offers”
The heart of the ID process is evaluating the design solution to ensure that the product is usable. This is addressed via a user-centered design (UCD) approach [26].
2.2. User-centered Design
UCD means that in a development process, the real users of the product and their goals, not just the technology, should be considered the driving forces. As its name suggests, UCD aims to involve users throughout the design process. There are various methods and techniques that help to achieve this. Three principles are considered to be the basis for UCD. These principles are:
Early focus on users and tasks: understanding users and involving them in the design
process (see Section 2.7 and Section 2.6).
Empirical measurement: observing and measuring performance and reactions of users
when they interact with the product (see Section 2.8).
Iterative design: repeating the cycle of design, evaluate and redesign as often as required
(see Section 2.4).
UCD principles appear obvious, but they are not easy to put into practice [26]. More about the concepts related to ID and UCD can be found in the following sections.
2.3. The Life-cycle Model
To perform ID, two factors are important: (i) understanding the ID activities and (ii) specifying how the activities relate to one another and to the full development process [26]. The latter is called a life-cycle model or process model. A life-cycle model may also include when and how to move from one activity to the next, as well as a description of the deliverables of each activity [26].
2.4. Iterative Design
To design an interactive system, a clear understanding of the requirements is necessary. Nev-
ertheless, it is observed that not all of the system requirements can be found before starting the
design [27]. To overcome this problem, the idea of iterative design was introduced. The core
idea of iterative design is to have a design process that tries to overcome the problem of incom-
plete requirements specification. This is realized by cycling through the process of design and
incrementally improving the product produced in each pass.
Iterative design involves using prototypes. Prototypes are defined by Dix [27] as “artefacts that simulate or animate some but not all features of the intended system.” Prototypes are useful in discussing ideas with stakeholders and can be used as a means of communication among team members [26]. Prototyping provides cheaper and faster opportunities to evaluate a design solution. A prototype's complexity can range from a simple paper-based simulation of the system to a complex piece of software [26].
2.5. Different Types of Requirements
Sharp [26] defines a requirement as “a statement about an intended product that specifies what it should do or how it should perform”. Requirements should be specified as clearly as possible via the requirements gathering activities. There are different kinds of requirements, such as functional requirements, environmental requirements, i.e. the circumstances in which the system will be used (the context of use), usability requirements, and user characteristics [26]. Therefore, it is important to apply data gathering techniques in a way that covers this broad range of issues.
2.6. Data gathering methods
Data gathering methods are used to collect sufficient, precise and relevant data with the aim
of producing the requirements [26]. The users’ tasks, their related goals, the context in which the
tasks are performed and the rationale for the current situation are the most important information
that is needed to be collected via data gathering techniques. Data gathering techniques are also
used in evaluation processes to collect users’ reaction and performance with the prototype of
the intended system [26, 28]. Interview, questionnaire, observations, studying documentation,
and researching similar products are some of the data gathering techniques [26]. The selection of data gathering techniques should be based on the participants involved and the nature of each technique. Usually, several data gathering techniques are combined in order to triangulate the results [26].
2.6.1. Interview
Interview is a friendly and flexible data gathering technique where a goal oriented conversa-
tion is conducted with the interviewee [28, 26, 27]. In evaluation, interviewing users about their
interaction with the system provides an opportunity to gather information about the user’s expe-
rience. Various types of interviews are defined based on the amount of control the interviewer
imposes on this conversation: structured, unstructured, semi-structured and group interview [26].
2.6.2. Observation
Observation is a popular data gathering technique that captures how users accomplish a task in context as well as their actual interaction with the system [26, 27]. One problem with simple observation is that it does not provide insight into the user’s decision process and attitude while interacting with the system [27]. One approach to overcome this problem is to ask users to verbalize their thoughts; this technique is called think aloud [26, 27].
2.6.3. Questionnaire and Survey
A questionnaire is a set of pre-designed questions used for collecting demographic data and users’ opinions [27, 26]. Questionnaires are less flexible than interviews, but they can target a larger group of participants and help in gathering more precise information [27, 28]. They need less administrative time, and the analysis of the gathered information can be done more rigorously [27].
2.6.4. Usability Engineering
Usability engineering is an approach to UCD in which a usability specification is produced as part of the requirements specification. This specification includes the exact criteria, i.e. usability goals, against which the system will be evaluated. In order to set up usability goals, the relevant measurable usability characteristics of the product, i.e. usability attributes [25], need to be specified, along with a decision on how they are going to be measured.
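For illustration only, a fragment of such a usability specification might look as follows; the attributes, measuring methods and target values are hypothetical and not taken from any of the cited sources.

# Hypothetical fragment of a usability specification: each usability
# attribute is paired with a measuring method and measurable criteria.
usability_specification = [
    {
        "attribute": "efficiency",
        "measuring_method": "timed task in a usability test",
        "measure": "time to record one patient encounter (seconds)",
        "worst_case": 300,
        "planned_level": 120,
        "best_case": 60,
    },
    {
        "attribute": "learnability",
        "measuring_method": "observation of first-time use",
        "measure": "number of help requests during the first session",
        "worst_case": 10,
        "planned_level": 3,
        "best_case": 0,
    },
]

for goal in usability_specification:
    print(f"{goal['attribute']}: {goal['measure']} "
          f"(target {goal['planned_level']}, worst case {goal['worst_case']})")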
2.7. Task Analysis
In order to design interactive systems, it is crucial to have a clear understanding of the tasks the users want to perform [27]. Task analysis focuses on studying the user’s actions and/or cognitive processes in achieving tasks [29]. It deals with existing systems and procedures and investigates the existing situation [26, 27]. It also contributes to the requirements specification of the new system [27]. Task analysis involves using various data gathering techniques such as interviews and questionnaires [30].
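As an illustration only, the outcome of a task analysis can be recorded as a hierarchy of sub-tasks; the hypothetical breakdown below is invented for the example and does not come from the cited sources.

# Hypothetical hierarchical breakdown of a clinical task, recorded as a
# nested structure; printing it reproduces the task hierarchy.
task_analysis = {
    "task": "assess cardiovascular risk",
    "subtasks": [
        {"task": "collect patient history",
         "subtasks": [{"task": "ask about smoking"},
                      {"task": "ask about family history"}]},
        {"task": "record measurements",
         "subtasks": [{"task": "measure blood pressure"},
                      {"task": "order a lipid panel"}]},
        {"task": "interpret results and document the assessment"},
    ],
}

def print_tasks(node, depth=0):
    # Indentation reflects the position of each sub-task in the hierarchy.
    print("  " * depth + node["task"])
    for sub in node.get("subtasks", []):
        print_tasks(sub, depth + 1)

print_tasks(task_analysis)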
2.8. Design Evaluation
Even when UCD is applied, there is a need to assess the design solution and make sure that the requirements are met [27]. This is the aim of evaluation. It is recommended not to treat evaluation as a single phase in the design process but to carry it out throughout the system life-cycle and to feed evaluation results back into the design solution [27]. Evaluation should assess not only the functional capabilities of the system, but also the users’ interaction with the system, their experience and the impact of the interaction on them. Therefore, various aspects such as learnability (how easy the system is to learn), usability and user satisfaction should be considered in evaluation.
Various categorizations of evaluation methods are documented in the literature [25, 26, 27]. Helander [25] defines two main classes of methods. The first class includes methods that rely on the users’ participation in the evaluation; these are called empirical evaluation methods [25]. The second class does not rely on direct user involvement but on designers’ or usability experts’ judgment of the design; these methods are called usability inspection or non-empirical evaluation methods [25] (Dix [27] calls this class expert analysis methods). Another categorization [26] divides methods into three categories: usability testing, field study and analytical evaluation. In this paper, the categorization made by Helander [25] is used for referring to methods.
2.8.1. Empirical Evaluation Methods
Usability testing and field studies are two empirical evaluation approaches. Both use basic data gathering techniques (observation and interview) to gather information on how users interact with a system or how they perform a task. Their main difference lies in the settings in which the observation is performed.
Usability testing is a collection of methods used to evaluate the usability of a product. It typically focuses on how well a user can use the product to accomplish a specific task, and on the errors that occur during this process [27]. A usability test [26] involves observing a user while the user is interacting with the system. It is performed in a controlled environment, for instance a laboratory setting, which means that the test users are isolated from the usual interruptions of their everyday work.
In contrast to usability testing, a field study [26] involves observing users in their natural work settings. Field studies can be helpful in requirements gathering as well as in evaluating a system.
2.8.2. Usability Inspection Methods
There are circumstances in which developers do not easily have access to users, user participation is too expensive, or involving users takes too much time. In those situations, usability inspection methods can be used to discover design flaws since, in contrast to empirical evaluation methods, they do not require users to be present [26]. In addition, empirical evaluation methods normally require a working prototype or implementation of the system, while usability inspection methods can be performed on very early designs and prototypes. In the following, three main methods belonging to this category of evaluations are described.
Heuristic Evaluation
Dix [27] defines a heuristic as “a guideline or general principle or rule of thumb that can guide a design decision or be used to critique a decision that has already been made.” Heuristic evaluation is a method that applies a set of heuristics to discover design flaws. It can be performed early in the project, even on the design specification, and is considered to be one of the cheapest evaluation methods [27]. It is also called a resource-constrained method, since running it does not require deep knowledge of usability and ID, and it can be carried out by developers when the project team does not have access to usability experts [25].
Walkthroughs
Walkthroughs are an alternative to heuristic evaluation. Like heuristic evaluation, walkthroughs rely on predicting users’ problems without carrying out user tests [26]. A walkthrough involves walking through a specific task and discovering the usability problems encountered along the way. The focus in walkthroughs is on the learnability of the system, i.e. how easy the system is to learn [25, 26].
Usually, walkthroughs do not involve users; however, in a sub-class of walkthroughs named pluralistic walkthroughs [25, 26], users, developers and usability experts work together to identify usability problems in the design solution.
Usability-Expert Reviews
A usability-expert review is an evaluation method in which usability experts review a system design against standards and best-practice design solutions in order to identify general design flaws [25]. This method is very similar to heuristic evaluation. The main difference is that in this method, experts usually do not use heuristics to run the test; they rely instead on their own experience with systems and users to identify the source of difficulties in the GUI design [25].
2.9. User-centered Design in Developing Clinical Decision Support Systems
Involvement of end users, i.e. clinicians, in the design process and adjustment of the design based on their feedback is one way to improve the acceptance of a CDSS. This is achieved by understanding clinicians, their characteristics, their goals and the tasks that they wish to perform, and consequently by producing a usable CDSS that fits that specific clinical context.
As evident from the discussion in the previous sections, ID provides the knowledge required by CDSS developers (and generally developers of all types of products) to develop highly adoptable CDSSs. The question, however, is whether this knowledge is actually applied by developers. This has been the motivation for conducting this systematic review.
3. Search Method
A systematic review is a research methodology that aims at summarizing the available literature on a topic, analyzing it, and providing a full picture of the topic [31]. In this study, a systematic review is used to analyze the state of the art in CDSS developers’ attitudes towards ID and ID activities.
The publications were selected from two databases, namely ScienceDirect1 and PubMed2. These two databases cover the most important journals in medical informatics and ID as well as well-known conferences in medical informatics.
The search was performed using various Boolean combinations of these keywords: “medical decision support”, “clinical decision support”, “user-centered design”, usability, “user involvement”, “interaction design” and “human-computer interaction”. Then, in two steps, the abstracts and the contents of the returned results were studied and the most relevant papers were selected.
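As an illustration of how such Boolean combinations can be enumerated, the sketch below pairs each decision-support term with each ID-related term; the exact query strings submitted to the databases are not reproduced in the paper, so this is only an approximation.

# Illustrative sketch: combine each decision-support keyword with each
# ID/usability keyword into Boolean queries of the form "A" AND "B".
# The actual queries used in the review may have differed.
from itertools import product

cds_terms = ["medical decision support", "clinical decision support"]
id_terms = ["user-centered design", "usability", "user involvement",
            "interaction design", "human-computer interaction"]

queries = [f'"{cds}" AND "{idt}"' for cds, idt in product(cds_terms, id_terms)]

for query in queries:
    print(query)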
The search strategy is depicted in Figure 1. The criteria for selecting a paper were positive answers to these questions:
• Is the study discussing development and/or evaluation of a CDSS (i.e. practical science)?
• Are any of the ID concepts or activities considered in the study (i.e. ID consideration)?
The literature was reviewed to discover whether ID-related concepts, methods and approaches are documented by the authors. A list of related concepts and methods was incrementally created based on what was found in the literature (see Table 2 for the list).
4. Results
The literature was reviewed to find out which ID/HCI activities and concepts are documented
by authors. An overview of the ID principles and methods presented in these studies is shown
in Figure 4. The results from this review are presented in the following. More analysis of the
results is given in the next section.
1 http://www.sciencedirect.com
2 http://www.pubmed.gov
Figure 1: The search strategy. Primary searches: N=158; 104 excluded (no ID consideration, not practical science); abstract relevant: N=54; 43 excluded (no ID consideration, not practical science, duplicate); content relevant: N=17.
4.0.1. Importance of the User Interface
The literature indicates a general consensus that the usability of the user interface plays an important role in system acceptance and in reducing technology-induced errors [32, 33, 34, 35, 36, 15].
Figure 2: Process view (life-cycle/UCD view): 4 studies (23.5%); independent activities: 13 studies (76.5%). Only in around 24% of cases is the concept of the life-cycle of the system, and/or consideration of user-centered design as a process, reported.
Figure 3: Evaluation with user participation vs. evaluation with expert participation: only users involved in 14 studies (82.4%); both users and experts involved in 3 studies (17.6%).
4.1. Evaluation Before or After Releasing the Application
Evaluation before releasing the application is typically conducted on a prototype of the system [33, 37, 23, 34, 38, 35, 39, 40, 36, 15, 41, 42, 43]. On the other hand, there are cases in which evaluation is done after releasing the system [32, 44, 45, 46]. In some cases, the authors do not make it clear whether the evaluation was performed after release or in early stages of the project, before releasing the application [39].
4.1.1. User Involvement
As evident from the literature, involving users in evaluation (see Section 4.2) is more common than involving them in design. Very few of the studies have documented involvement of users in design (two studies [38, 35]).
4.1.2. Context of Use
Few studies documented efforts to understand clinicians, their characteristics and the context
of use [41, 42].
4.1.3. The Life-cycle Model
In contrast to most of the literature, a few studies have presented a life-cycle view of UCD [35, 36, 42]. Some researchers have concluded that using UCD as a process throughout the whole life-cycle of the system is necessary to design a usable CDS [42].
4.2. Empirical Evaluation
As evident from Table 2, empirical evaluations such as observations [32, 47, 35, 41, 15, 42, 46, 43], interviews [32, 33, 38, 39, 42], questionnaires/surveys [48, 23, 38], field studies [48, 49, 41, 42], usability tests [48, 33, 37, 34, 36, 15, 41, 42], and think aloud [33, 37, 38, 35, 15, 42] are the most popular evaluations documented in the literature.
4.3. Non-empirical Evaluation
The literature suggests that documented cases of applying non-empirical evaluation methods are much fewer than those of empirical methods. For instance, cognitive walkthrough and heuristic evaluation are each applied in only two of the studies [37, 35]. Expert review is missing in the literature.
Figure 4: Interaction design (ID) in developing clinical decision support systems (CDSS). UCD/life-cycle: 4 studies (24%); evaluation before release: 13 (76%); evaluation after release: 4 (24%); multi-disciplinary team: 5 (29%); iterative design: 11 (65%); prototyping: 13 (76%); usability test: 9 (53%); interview: 15 (88%); questionnaire: 10 (59%); survey: 7 (41%); field study: 6 (35%); observation: 8 (47%); heuristic evaluation: 2 (12%); cognitive walkthrough: 2 (12%); task analysis: 1 (6%).
4.3.1. Challenges and Benefits
The literature indicates a general consensus that applying various ID methods and the UCD process is effective, results in more usable CDSSs and will eventually increase the usage of the system [37, 38, 36, 41]. Nevertheless, some of the studies have concluded that applying UCD and some of its methods is difficult and challenging [37, 50].
4.4. Summary of Findings
• The qualitative evaluation of the system was performed after releasing the system to the users in around 24% of the studies.
• Prototyping is used in 76% of the studies.
• The concept of iterative design is considered in 65% of the studies.
• Only in 24% of the studies is the concept of the life-cycle of the system, and/or consideration of user-centered design as a process, reported (see Figure 2).
Figure 5: Evaluation of the prototype before releasing the final product: 13 studies (76.5%); evaluation after installing the product: 4 studies (23.5%).
• Evaluations based on “user participation” (i.e. empirical evaluations) are popular among developers of CDS (see Figure 3). As depicted in Figure 4, “interview” is used in 88%, “questionnaire” in 59%, “usability test” in 53% and “field study” in 35% of the studies.
• Only around 18% of the studies have applied non-empirical evaluations (heuristic evaluation and cognitive walkthrough, each in 12% of cases).
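For reference, these percentages follow directly from the counts over the 17 included studies shown in Figure 4, rounded to the nearest percent: 13/17 ≈ 76%, 11/17 ≈ 65%, 15/17 ≈ 88%, 10/17 ≈ 59%, 9/17 ≈ 53%, 6/17 ≈ 35%, 4/17 ≈ 24% and 2/17 ≈ 12%.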
5. Discussion
As evident from Table 1, the success factors of CDSSs can be divided into two main categories: technical and non-technical factors. The ID community specifies various methods that help developers address the human-related aspects of a system. The aim of this literature review was to investigate whether the existing knowledge in the ID field is applied properly by developers of CDSSs with the aim of developing more acceptable and adoptable CDS. In the following, a more detailed analysis of the literature is provided, along with discussions, to provide insight into the practical attitude towards ID in the CDS field.
5.1. Interaction Design in Practice
The search strategy in this review resulted in discovering only 17 studies, which is a small number compared to the number of CDSSs being developed and evaluated. In only one of the databases, ScienceDirect, around 100 studies were discovered that, according to their titles and abstracts, have documented developing and/or evaluating a CDSS.
The studies are distributed from 1997 to 20103. More than half of them were published after 2006, which indicates that consideration of ID/HCI in developing CDSSs has become more popular in recent years.
5.1.1. Early Focus on Users and Tasks
The first UCD principle is to have an early focus on users and tasks (see Section 2.2). This means that users should be understood and involved in the full design process from the beginning. This, however, does not mean that users should be actively involved in design or act as designers.
3 The search was carried out in October 2010, so it may not include all of the 2010 publications.
Table 2: Human-computer interaction and clinical decision support in practice. For each of the 17 reviewed studies ([32] 1997, [33] 1998, [37] 2002, [23] 2002, [34] 2003, [44] 2004, [45] 2005, [38] 2006, [35] 2007, [39] 2007, [40] 2007, [36] 2008, [15] 2008, [41] 2008, [42] 2009, [46] 2009, [43] 2010), the table indicates whether the following concepts and methods are applied: UCD, life-cycle, cognitive aspects, evaluation before release, evaluation after release, multi-disciplinary team, iterative design, usability, usability test, usability engineering, user involvement, think aloud, prototyping, interview, questionnaire, survey, walkthrough, heuristic evaluation, field study, observation, task analysis, workflow, workflow integration, and training, as well as the number of users involved in each study. Notes: in one study usability is referred to as “problem” and in another the usability test is referred to as “interview”; for one study the number of users involved was not clear in the text and was obtained from correspondence with the author; entries marked * are mentioned in the paper but not used practically.
It is enough if users are consulted when making design decisions [26]. To support the importance of understanding the users, their goals and tasks, Sharp [26] indicates: “if something is designed to support an activity with little understanding of the real work involved, it is likely to be incompatible with current practice, and users do not like to deviate from their learned habits if operating a new device with similar properties.” Various data gathering techniques as well as task analysis can be used to support this principle.
Nonetheless, the review findings demonstrate that CDSS developers have not widely upheld this principle. The data gathering methods applied in the various studies have mostly been used in evaluations rather than for gathering the different types of requirements (including understanding users, their characteristics and the context of use). Task analysis is used in only one study [36]. Finally, almost none of the studies have documented usability requirements as a type of requirement used to motivate the design and provide a basis for evaluation.
All in all, it is evident that benefiting from various data gathering techniques in order to collect different types of requirements is not considered in many of the studies, and knowledge in this area needs to be improved in the field. It is important that those aspects of the clinical context that make it different from other domains, in terms of collaborating with users and applying various evaluation or design methods, be investigated carefully. Moreover, further investigation is needed to find out why task analysis, and also formal methods for understanding the context, have not been popular in the CDS field.
5.1.2. Empirical Measurement
The second UCD principle is to observe and measure the performance and reactions of users when they interact with the system (see Section 2.2). This principle is satisfied at an acceptable level in the studies documented in the literature. Usability tests and various data gathering techniques were used to gather users’ feedback and to observe and analyze users’ interaction with the CDSS in order to discover design problems.
5.1.3. Iterative Design
The third UCD principle is to have an iterative design process (see Section 2.2). Iterative design, as mentioned in the results, has been considered in 65% of the studies. In some studies it is not clearly stated what the goal of the evaluations was and how or when the evaluation results would be used. Clearly, getting users’ feedback and carrying out evaluations is of no use if the results do not inform the design. Considering the importance of iterative design in developing usable interactive systems (making informed design improvements based on evaluations), this is an indication of a need for improvement in this area.
5.1.4. Interaction Design versus Human-computer Interaction
As mentioned before, ID focuses on the whole design process, but the focal point of most of the studies is evaluation. As evident from the reviewed literature, most of the work documented in the literature matches the more traditional pattern of HCI rather than ID. The focus of traditional HCI has mostly been on evaluation and on pointing out design flaws after design, while ID, and the UCD approach to ID, emphasize not only evaluation but also the whole design process.
Moreover, in the database searches, using the phrase “interaction design” did not result in any hits. None of the studies has actually used this term in the text. On the other hand, the phrase “human-computer interaction” is used in some of the studies. This is in line with the aforementioned observation.
5.1.5. The Life-cycle Model
As mentioned in Section 2.3, it is important to specify how the various ID activities are going to be carried out in the life-cycle of the system. There is also a need for early planning of the UCD process in order to make various important decisions, for instance how many users will be involved in the process and how their involvement is managed [29].
The literature contains very few studies that discuss the theoretical background of UCD as a design and development process that can be used in the life-cycle of the project as a complement to existing software development methods [42, 36, 35]. In addition, the literature does not include discussions of UCD planning. This is in line with our observation that the focus lies mostly on evaluation rather than on the whole design process.
5.1.6. Design Evaluation
Evaluating design solutions before releasing the application is performed in many of the studies, which shows an acceptable level of knowledge of this concept, especially in recent years (only one study [46] published after 2006 was found in which evaluation was done after releasing the system). Still, not all evaluation methods, especially non-empirical ones, have been applied often in the field.
There are costs and benefits associated with each class of evaluations. To carry out empirical evaluations, developers need access to users who are willing to participate in the evaluation process. To carry out non-empirical methods, access to usability experts, or to developers who are able to conduct the evaluation, is required. If usability experts are not available in the project, getting access to them might be costly.
Regarding the benefits linked to each type of evaluation, it is observed that on various occasions, especially in early stages of a project, non-empirical methods (or, as they are called in some literature, usability inspection methods) can be used to identify general design flaws, to check compliance with design standards or heuristics, or even to assess detailed qualities such as ease of learning [25]. On the other hand, these methods are not considered a replacement for empirical methods, especially since the general consensus is that some types of flaws in a system will only be discovered by having users interact with it [25].
All in all, it is recommended that the choice of evaluation methods be made considering the timing of the evaluation, project progress, cost and, most importantly, the goal of the evaluation. This, however, is almost absent in the literature reviewed. The goals of evaluation are also not clearly specified, and there is rarely any discussion of how the various methods were chosen in the studies. For instance, just a few cases such as [50, 41] discuss the benefits and costs associated with various methods. This could indicate the following:
• Users in this domain are easy to access and their involvement in evaluation is cheap.
• Usability experts have not been accessible in such projects and/or their involvement is costly for the project.
Even if the above statements are true, the very small number of cases of applying heuristic evaluation (where even developers with little knowledge of usability can run the test to discover design flaws) suggests a lack of knowledge in this area among CDSS developers. Also, since not all design flaws can be discovered by users (especially compliance with general design patterns), the absence of non-empirical methods can be a sign of poor knowledge of ID in this domain.
Figure 6: A comparison between evaluations based on user participation. Usability test: 9 studies (53%); interview, questionnaire or survey: 17 (100%); field study or observation: 9 (53%).
5.1.7. General Discussion
The problem of the low adoption rate of CDS has long been a concern of researchers in this field. In response to the need to answer the question of whether and how a CDSS will be used, researchers have previously studied various CDSS evaluation approaches [51, 52].
According to a literature review by Kaplan [51], very few evaluations of CDSSs can be found that focus on other aspects such as “why clinicians accept or do not accept a system” or “why they change their practice behavior after introducing the system”. In her review of the CDSS evaluation literature [51], Kaplan indicated that “in the evaluation literature, the main emphasis is on how clinical performance changes. Most studies use experimental methods or randomized controlled trials (RCT) to assess system performance or to focus on changes in clinical performance that could affect patient care.” She profiled 34 CDSS evaluation studies in her review. Of those, only 12 studies had evaluated aspects other than performance, using various qualitative evaluation techniques such as observations, interviews, field studies and so on [51].
In another related study [52], she goes a step further towards the general issue of evaluating clinical applications and calls for alternative approaches. She mentions that “unless evaluation approaches include social, organizational, cultural, cognitive, and contextual issues, they cannot answer key questions about why clinicians use or do not use an informatics application.”
While confirming Kaplan’s observations, we argue that although qualitative evaluation methods may be able to answer the aforementioned questions, they cannot increase the adoption rate of CDS or solve the problem of CDSSs not being used in practice unless they are combined with a well-informed design and development process. This type of evaluation does, of course, provide an opportunity to find out the reasons for low adoption. But to uphold the success factors of CDS documented by researchers in this field (see Section 2), activities other than evaluation alone are needed. These include understanding clinicians, their tasks and their goals, their characteristics and the context of use of CDS, as recommended in the ID field.
The next step would be to investigate existing methods and approaches and identify how they fit into the clinical domain. For instance, the challenges in applying heuristic evaluation are discussed in [50] and are related to various factors such as the complexity of the context and limitations of the method itself (for instance, the author mentions the existence of collaborative tasks in the clinical domain in which more than one user is involved in task completion, while heuristic evaluation assumes independent evaluators in the evaluation process).
In addition, considering that this literature review shows a lack of knowledge in some areas, there is a need to disseminate the existing ID knowledge, and the knowledge gained from experience in developing CDSSs with UCD, among CDSS developers.
6. Conclusion
It is shown that in the clinical domain, taking human-related (ID-related) factors into consideration is highly recommended in theory, especially in recent years.
Although, compared to previous studies [51], this review suggests that qualitative evaluations of CDSSs are gaining more interest and attention, the rate of applying various ID methods and the UCD approach is still low among CDSS developers, especially compared to other domains such as aviation or automobiles, in which usability engineering and user-centered design are routine practice [47], and even compared to the clinical domain in general. Lastly, further investigation is required to discover the reasons for this; moreover, knowledge of ID and UCD, and success stories of applying them in various projects, should be disseminated among developers of CDSSs to improve the general attitude towards ID.
7. Future Work
As a continuation of this study, the following research question should be answered: what are the challenges in applying user-centered design in a clinical context and how can these challenges be tackled? In order to answer this question, the following objectives should be accomplished:
1. conducting a literature review in order to identify “the state of the art in the intersection
between ID and clinical application development” in general
2. gathering and investigating difficulties designers and developers of clinical applications
have faced so far
3. gathering and investigating the point of view of a group of developers and clinicians who
have been involved in any user-centered design process of a clinical application
4. providing a user-centered design guideline aimed at designers and developers of clinical
applications
Acknowledgment
The author extends her gratitude to Olof Torgersson, who provided useful and detailed feed-
back on the paper.
References
[1] R. Greenes, Clinical decision support: the road ahead, Academic Press, 2007.
[2] T. Wendt, P. Knaup-Gregori, A, Decision Support in Medicine: A Survey of Problems of User Acceptance, Stud
Health Technol Inform 77 (2000) 852–856.
[3] B. Chaudhry, J. Wang, S. Wu, M. Maglione, Systematic review: impact of health information technology on quality,
efficiency, and costs of medical care, Annals of internal Med 144 (10) (2006) 742–752.
[4] A. Garg, N. Adhikari, H. McDonald, M. Rosas, Effects of computerized clinical decision support systems on
practitioner performance and patient outcomes: a systematic review, JAMA 293 (10) (2005) 1223–1238.
[5] D. F. Sittig, A. Wright, J. a. Osheroff, B. Middleton, J. M. Teich, J. S. Ash, E. Campbell, D. W. Bates,
Grand challenges in clinical decision support., Journal of biomedical informatics 41 (2) (2008) 387–92.
doi:10.1016/j.jbi.2007.09.003.
[6] J. Osheroff, J. Teich, B. Middleton, E. Steen, A. Wright, D. Detmer, A roadmap for national action on clinical decision support, Journal of the American Medical Informatics Association 14 (2) (2007) 141. doi:10.1197/jamia.M2334.
[7] E. Berner, Clinical Decision Support Systems: Theory and Practice (Health Informatics), Springer, New York, NY
10013, USA, 2007.
[8] D. Bates, G. Kuperman, S. Wang, T. Gandhi, A. Kittler, L. Volk, C. Spurr, R. Khorasani, M. Tanasijevic, B. Middleton, Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality, Journal of the American Medical Informatics Association 10 (6) (2003) 523–530. doi:10.1197/jamia.M1370.
[9] T. Wetter, Lessons learnt from bringing knowledge-based decision support into routine use., Artificial intelligence
in medicine 24 (3) (2002) 195–203.
[10] I. Cho, J. Kim, J. H. Kim, H. Y. Kim, Y. Kim, Design and implementation of a standards-based interoperable clinical
decision support architecture in the context of the Korean EHR., International journal of medical informatics 9
(2010) 611–622. doi:10.1016/j.ijmedinf.2010.06.002.
[11] D. Hunt, R. Haynes, S. Hanna, K. Smith, Effects of computer-based clinical decision support sys-
tems on physician performance and patient outcomes: a systematic review, Jama 280 (15) (1998) 1339.
doi:10.1001/jama.280.15.1339.
[12] M. Johnston, K. Langton, R. Haynes, Effects of computer-based clinical decision support systems on clinician performance and patient outcome. A critical appraisal of research, Ann Intern Med 120 (2) (1994) 135–142.
[13] E. Berner, T. J. La Lande, Overview of Clinical Decision Support Systems, Springer, 2007, pp. 3–22.
[14] A. Hasman, Challenges for medical informatics in the 21 st century, International journal of medical informatics
44 (1) (1997) 1–7.
[15] T. Graham, A. Kushniruk, M. Bullard, B. Holroyd, D. Meurer, B. Rowe, How usability of a web-based clinical
decision support system has the potential to contribute to adverse medical events, in: AMIA Annual Symposium
Proceedings, Vol. 2008, American Medical Informatics Association, 2008, p. 257.
[16] Y. Y. Han, Unexpected Increased Mortality After Implementation of a Commercially Sold Computerized Physician Order Entry System, Pediatrics 116 (2005) 1506–1512.
[17] A. Kushniruk, M. Triola, B. Stein, E. Borycki, J. Kannry, The relationship of usability to medical error: an evaluation of errors associated with usability problems in the use of a handheld application for prescribing medications, in: Medinfo 2004: Proceedings of the 11th World Congress on Medical Informatics, Vol. 107, IOS Press, 2004, p. 1073.
[18] A. Kushniruk, M. Triola, E. Borycki, B. Stein, J. Kannry, Technology induced error and usability: The relationship
between usability problems and prescription errors when using a handheld application, International Journal of
Medical Informatics 74 (7-8) (2005) 519–526. doi:10.1016/j.ijmedinf.2005.01.003.
[19] J. Saleem, E. Patterson, L. Militello, ML, Exploring barriers and facilitators to the use of computerized clinical reminders, Journal of the American Medical Informatics Association (2005) 438–447. doi:10.1197/jamia.M1777.
[20] R. Koppel, J. P. Metlay, A. Cohen, B. Abaluck, A. R. Localio, S. E. Kimmel, B. L. Strom, Role of computerized physician order entry systems in facilitating medication errors, JAMA: the journal of the American Medical Association 293 (10) (2005) 1197–203. doi:10.1001/jama.293.10.1197.
[21] K. Kawamoto, C. A. Houlihan, E. A. Balas, D. F. Lobach, Improving clinical practice using clinical decision
support systems: a systematic review of trials to identify features critical to success., BMJ (Clinical research ed.)
330 (7494) (2005) 765. doi:10.1136/bmj.38398.500764.8F.
[22] J. Anderson, Increasing the acceptance of clinical Information, MD computing: computers in medical practice
16 (1) (1999) 62.
[23] M. Trivedi, J. Kern, A. Marcee, B. Grannemann, B. Kleiber, T. Bettinger, K. Altshuler, A. McClelland, Develop-
ment and Implementation of Computerized Clinical Guidelines : Barriers and Solutions, Methods of information
in medicine 41 (5) (2002) 435–442.
[24] T. Hewett, R. Baecker, S. Card, T. Carey, ACM SIGCHI Curricula for Human-Computer Interaction (1996).
[25] M. G. Helander, T. K. Landauer, P. V. Prabhu, Handbook of Human-Computer Interaction, Elsevier Science Pub
Co, 1998.
[26] H. Sharp, Y. Rogers, J. Preece, Interaction Design: Beyond Human-Computer Interaction, Wiley, 2007.
[27] A. Dix, J. Finlay, G. Abowd, R. Beale, Human-computer interaction, Prentice-Hall, Inc., Upper Saddle River, NJ,
USA, 1997.
[28] D. Stone, C. Jarrett, M. Woodroffe, S. Minocha, User interface design and evaluation, Vol. 21, Morgan Kaufmann,
2005. doi:10.1057/palgrave.ivs.9500112.
[29] M. Maguire, Methods to support human-centred design, International Journal of Human-Computer Studies 55 (4)
(2001) 587–634. doi:10.1006/ijhc.2001.0503.
[30] A. Cooper, R. Reimann, D. Cronin, About Face 3: The Essentials of Interaction Design, Wiley, Indianapolis, IN,
USA, 2007.
[31] H. Aveyard, Doing a literature review in health and social care: a practical guide, Open University Press, 2007.
[32] B. Kaplan, R. Morelli, J. Goethe, Preliminary Findings from an Evaluation of the Acceptability of an Expert
System, in: Proceedings of the AMIA, 1997, p. 865.
[33] C. Gadd, P. Baskaran, D. Lobach, Identification of design features to enhance utilization and acceptance of systems
for Internet-based decision support at the point of care., in: Proceedings of the AMIA, 1998, pp. 91–5.
[34] C. Ying-Jui, A. T. Chirh-Yun, B. Y. Min-Li, B. Yu-Chuan, Li, Assessing the Impact of User Interfaces to the Usability of a Clinical Decision Support System, in: AMIA 2003 Symposium Proceedings, 2003, p. 808.
[35] K. Thursky, M. Mahemoff, User-centered design techniques for a computerised antibiotic decision sup-
port system in an intensive care unit, International journal of medical informatics 76 (10) (2007) 760–768.
doi:10.1016/j.ijmedinf.2006.07.011.
[36] A. Narasimhadevara, T. Radhakrishnan, B. Leung, R. Jayakumar, On designing a usable interactive system to
support transplant nursing., Journal of biomedical informatics 41 (1) (2008) 137–51. doi:10.1016/j.jbi.2007.03.006.
[37] C. Carroll, P. Marsden, P. Soden, E. Naylor, J. New, T. Dornan, Involving users in the design and usability evaluation
of a clinical decision support system., Computer methods and programs in biomedicine 69 (2) (2002) 123–135.
[38] S. J. Leslie, M. Hartswood, C. Meurig, S. P. McKee, R. Slack, R. Procter, M. a. Denvir, Clinical decision support
software for management of chronic heart failure: development and evaluation., Computers in biology and medicine
36 (5) (2006) 495–506. doi:10.1016/j.compbiomed.2005.02.002.
[39] A. Wilson, A. Duszynski, D. Turnbull, J, Investigating patients’ and general practitioners’ views of computerised
decision support software for the assessment and management of cardiovascular risk, Informatics in Primary Care
15 (2007) 33–44.
[40] D. F. Lobach, K. Kawamoto, K. J. Anstrom, M. L. Russell, P. Woods, D. Smith, Development, deployment and
usability of a point-of-care decision support system for chronic disease management using the recently-approved
HL7 decision support service standard., Studies in health technology and informatics 129 (Pt 2) (2007) 861–5.
[41] T. W. Marcy, B. Kaplan, S. W. Connolly, G. Michel, R. N. Shiffman, B. S. Flynn, Developing a decision support
system for tobacco use counselling using primary care physicians., Informatics in primary care 16 (2) (2008) 101–9.
[42] M. Peleg, A. Shachak, D. Wang, E. Karnieli, Using multi-perspective methodologies to study users’ interactions
with the prototype front end of a guideline-based decision support system for diabetic foot care., International
journal of medical informatics 78 (7) (2009) 482–93. doi:10.1016/j.ijmedinf.2009.02.008.
[43] J. Trafton, S. Martins, M. Michel, E. Lewis, Evaluation of the Acceptability and Usability of a Decision Support
System to Encourage Safe and Effective Use of Opioid Therapy for Chronic , Noncancer Pain by Primary Care
Providers, Pain Medicine 11 (2010) 575–585.
[44] A. S. Young, J. Mintz, A. N. Cohen, M. J. Chinman, A network-based system to improve care for schizophrenia: the
Medical Informatics Network Tool (MINT)., Journal of the American Medical Informatics Association : JAMIA
11 (5) (2004) 358–67. doi:10.1197/jamia.M1492.
[45] K. Zheng, R. Padman, M. P. Johnson, H. S. Diamond, Understanding technology adoption in clinical care: clinician
adoption behavior of a point-of-care reminder system., International journal of medical informatics 74 (7-8) (2005)
535–43. doi:10.1016/j.ijmedinf.2005.03.007.
[46] J. Saleem, L. Militello, N. Arbuckle, M. Flanagan, Provider Perceptions of Colorectal Cancer Screening Clinical Decision Support at Three Benchmark Institutions, in: AMIA Symposium Proceedings, 2009, pp. 558–562.
[47] J. Zhang, Human-centered computing in health information systems. Part 1: analysis and design., Journal of
biomedical informatics 38 (1) (2005) 1–3. doi:10.1016/j.jbi.2004.12.002.
[48] A. Kushniruk, V. Patel, J. Cimino, Usability testing in medical informatics: cognitive approaches to evaluation of
information, in: AMIA Annual Fall Symposium, 1997, pp. 218–222.
[49] M. Beuscart-Zephir, J. Brender, R. Beuscart, I. Menager-Depriester, Cognitive evaluation: how to assess the us-
ability of information technology in healthcare, Computer methods and programs in biomedicine 54 (1-2) (1997)
19–28.
[50] P. Edwards, K. Moloney, J. Jacko, F. Sainfort, Evaluating usability of a commercial electronic health record: A case
study, International Journal of Human-Computer Studies 66 (10) (2008) 718–728. doi:10.1016/j.ijhcs.2008.06.002.
[51] B. Kaplan, Evaluating informatics applications–clinical decision support systems literature review. (November
2001).
[52] B. Kaplan, Evaluating informatics applications–some alternative approaches: theory, social interactionism, and call
for methodological pluralism., International journal of medical informatics 64 (1) (2001) 39–56.
Paper II
The Intersection of Clinical Decision
Support and Electronic Health Record:
A Literature Review
Hajar Kashfi
1st International Workshop on Interoperable Healthcare Systems (IHS2011) -
Challenges, Technologies, and Trends, Szczecin, Poland, September 19-21, 2011.
(manuscript submitted)
The Intersection of Clinical Decision Support
and Electronic Health Record:
A Literature Review
H. Kashfi a,1
a Department of Applied Information Technology
Chalmers University of Technology
SE–412 96 Göteborg, Sweden
1 Corresponding author. Email address: hajar.kashfi@chalmers.se (H. Kashfi)
Abstract
Aim: It is observed that clinical decision support (CDS) and electronic health records (EHR)
should be integrated so that their contribution to improving the quality of health care is enhanced.
In this paper, we present results from a review on the related literature. The aim of this review
was to find out to what extent CDS developers have actually considered EHR integration in
developing CDS. We have also investigated how various clinical standards are taken into account
by CDS developers. Methods: The ScienceDirect database was searched for related studies.
The search yielded a final collection of 25 studies. Relevance criteria included (i) discussing the development of a CDS or an EHR with CDS services and (ii) taking integration of CDS into EHRs into account. Results: It was observed that there are few CDS development projects in which EHR integration is taken into account. Also, the number of studies in which various clinical standards are taken into consideration when developing CDS is surprisingly low, especially for openEHR, the EHR standard we aimed for. The reasons for the low adoption of openEHR include issues such as complex and extensive specifications, shortcomings in educational aspects, low empirical focus and low support for developers. Conclusion: There is a need for further investigation to discover the reasons why the rate of integration of EHRs and CDS is not at an optimum level and, especially, why CDS developers are not keen to adopt various clinical standards.
Keywords: Clinical decision support system, Electronic health record, clinical standards
1. Introduction
Even though more than 50 years of research have been put into the clinical decision support
(CDS) field, the adoption rate of these systems is still low [1, 2, 3, 4, 5, 6]. Various researchers
have investigated the factors that should be considered by developers of such systems in order to achieve higher adoption. One of these factors is the integration of CDS into electronic health record (EHR) systems. Different benefits are associated with the integration of CDS into EHRs. For instance, integration facilitates real-time access to the knowledge provided by CDS at the point
of care; it also eliminates tedious duplicate patient data entry, since the preexisting digital patient data in the EHR system can be utilized for the purpose of providing decision support [1, 7, 8].
The aim of this study has been to answer the following research question: is integration of clinical decision support into electronic health records taken into consideration by developers of clinical decision support? The practical science literature was reviewed not only to explore CDS developers’ attitudes towards the integration of EHR and CDS, but also to discover the status of EHR standards in this field.
The structure of the paper is as follows. We start with the background information including
the motivation for integration of CDS and EHRs in Section 2. In Section 3 the literature review
search strategy is given. The results of the review are presented in Section 4. Section 5 includes
the discussion of the findings along with our reflection on the low adoption rate of the openEHR
EHR standardization approach. Finally, we end with a conclusion and future directions of the
study in Section 6.
2. Background
The idea of computerized medical records h