Original Paper
A Clinical Reasoning Tool for Virtual Patients: Design-Based Research Study
Inga Hege1, MCompSc, MD; Andrzej A Kononowicz2, MCompSc, PhD; Martin Adler3, Dipl Inform
1Institute for Medical Education, University Hospital of LMU Munich, Muenchen, Germany
2Department of Bioinformatics and Telemedicine, Jagiellonian University Medical College, Krakow, Poland
3Instruct AG, Muenchen, Germany
Corresponding Author:
Inga Hege, MCompSc, MD
Institute for Medical Education
University Hospital of LMU Munich
Ziemssenstr. 1
Muenchen, 80336
Germany
Phone: 49 89440057211
Email: inga.hege@med.uni-muenchen.de
Abstract
Background: Clinical reasoning is a fundamental process medical students have to learn during and after medical school. Virtual
patients (VP) are a technology-enhanced learning method to teach clinical reasoning. However, VP systems do not exploit their
full potential concerning the clinical reasoning process; for example, most systems focus on the outcome and less on the process
of clinical reasoning.
Objectives: Keeping our concept grounded in a former qualitative study, we aimed to design and implement a tool to enhance
VPs with activities and feedback, which specifically foster the acquisition of clinical reasoning skills.
Methods: We designed the tool by translating elements of a conceptual clinical reasoning learning framework into software
requirements. The resulting clinical reasoning tool enables learners to build their patient’s illness script as a concept map when
they are working on a VP scenario. The student’s map is compared with the experts’ reasoning at each stage of the VP, which is
technically enabled by using Medical Subject Headings, which is a comprehensive controlled vocabulary published by the US
National Library of Medicine. The tool is implemented using Web technologies, has an open architecture that enables its integration
into various systems through an open application program interface, and is available under a Massachusetts Institute of Technology
license.
Results: We conducted usability tests following a think-aloud protocol and a pilot field study with maps created by 64 medical
students. The results show that learners interact with the tool but create fewer nodes and connections in the concept map than an
expert. Further research and usability tests are required to analyze the reasons.
Conclusions: The presented tool is a versatile, systematically developed software component that specifically supports clinical reasoning skills acquisition. It can be plugged into VP systems or used as stand-alone software in other teaching scenarios.
The modular design allows an extension with new feedback mechanisms and learning analytics algorithms.
(JMIR Med Educ 2017;3(2):e21) doi:10.2196/mededu.8100
KEYWORDS
learning; educational technology; computer-assisted instruction; clinical decision-making
Introduction
In the context of health care education, virtual patients (VPs)
are often described as interactive, computer-based programs
that simulate real-life clinical encounters [1]. The technical basis
of VPs ranges from low-interactive Web pages to high-fidelity
simulations or virtual reality scenarios. In the form of interactive
patient scenarios, they are typically used to foster clinical
reasoning skills acquisition in health care education [2,3].
Interactive patient scenarios are Web-based applications in
which a learner navigates through a VP scenario and interacts
with the VP in form of menus, questions, or decision points. A
variety of commercial and open-source VP systems, such as
CASUS, OpenLabyrinth, or i-Human are available and applied
in health care education [4]. Such systems provide tools for
educators to create VP scenarios and deliver them to their
students.
Clinical reasoning or clinical decision making encompasses the
application of knowledge to collect and integrate information
from various sources to arrive at a diagnosis and a management
plan. It is a fundamental skill health care students have to acquire
during and after their education. In addition to traditional
teaching methods, VPs offer a safe environment to practice
clinical reasoning without harming a patient and to prepare
learners for clerkships or bedside teaching [2].
However, how clinical reasoning is implemented in VPs varies
greatly, and the effect of these design variations on learning
outcomes is not yet fully understood [5]. Feedback and scoring
are often implemented quantitatively, are outcome-oriented,
and do not account for the nonlinear nature [6] of the clinical
reasoning process. More process-oriented approaches, such as
a study described by Pennaforte et al [7], often require an
instructor to be present, thus, limiting the scalability of VPs.
Additionally, VP systems do not exploit their full potential
concerning the clinical reasoning process. For example, dealing
with cognitive errors, explicit development of illness scripts
[8], or pattern recognition approaches is rarely implemented in
VP systems.
Therefore, our aim was to develop a software tool that can be
combined with VP systems, specifically supports clinical
reasoning skills acquisition, and assesses all steps of this
complex process. We will describe the main components of the
software and results of usability tests and a pilot study.
Methods
Concept Development
The concept of the tool is based on a grounded theory study,
which is an exploratory qualitative research methodology aiming
at understanding a phenomenon and developing a theory
grounded in the data [9]. We explored the process of learning
clinical reasoning based on data resources such as scientific
literature or teaching material [10]. The result of the study was
an application-oriented framework with five main categories:
psychological theories, patient-centeredness, teaching and
assessment, learner-centeredness, and context. Each category
includes subthemes, such as illness scripts, cognitive errors,
self-regulated learning, learning analytics, or cognitive load.
This framework served as a basis for developing the concept
for the software. We discussed the framework and conclusions
on how to transfer it to VPs with health care professionals,
educators, and students, and on the basis of these discussions,
we developed the functional software requirements (Table 1).
Some of the subthemes of the framework, such as
communication, emotions, or authenticity, are related to the
design of the VP itself, rather than to the clinical reasoning tool,
so they were not translated into software requirements. However,
these aspects are important for the VP design process and need
to be considered and aligned with the tool.
Design of the User Interface
Figure 1 shows a wireframe model of the clinical reasoning tool
with its main components.
For each category (ie, findings, differential diagnoses, tests, and
therapies), the learners can search for a term, and either select
one from the type-ahead list, which is based on the Medical Subject Headings (MeSH) thesaurus published by the US National Library of Medicine [12], or choose to enter their own entry. Negations can also be entered to add negative findings, such as "no fever."
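As an illustration of the type-ahead behavior, a minimal Java sketch could filter an in-memory list of MeSH terms by the typed prefix; the class and the source of the term list are assumptions for illustration, not the tool's actual implementation:

import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

public class MeshTypeAhead {

    private final List<String> meshTerms; // eg, loaded from a MeSH export file

    public MeshTypeAhead(List<String> meshTerms) {
        this.meshTerms = meshTerms;
    }

    // Returns up to maxHits MeSH terms that start with the typed prefix
    public List<String> suggest(String prefix, int maxHits) {
        String p = prefix.toLowerCase(Locale.ROOT);
        return meshTerms.stream()
                .filter(term -> term.toLowerCase(Locale.ROOT).startsWith(p))
                .limit(maxHits)
                .collect(Collectors.toList());
    }
}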
Differential diagnoses can be marked as “must-not-miss” or as
“unlikely/ruled-out” by selecting the option from a context
menu. Once the learner has entered a differential diagnosis, the
button for submitting a final diagnosis will be activated. After
clicking this button, the learner can select one or more diagnoses
from his or her differentials and submit them as final diagnoses.
All added nodes (findings, differentials, tests, and therapies)
can be deleted, moved within the box, and connected with each
other via drag&drop. For example, if a finding speaks against
or confirms a diagnosis, the learner can connect the finding with
that diagnosis. By clicking on the connection, its color (=weight)
and meaning can be changed from red—“speaks against”—to
dark blue—“highly related.” Currently, 5 different
weights/colors can be assigned to a connection. Thus, learners
build their patients’ illness script in the form of a concept map
in a step-by-step approach.
Finally, the learner's task is to compose a short summary statement, usually 2 to 3 sentences, about the VP in a text area
at the bottom of the tool’s panel. Such a summary statement is
a mental abstraction to transform relevant patient-specific details
into abstract terms, preferably using semantic qualifiers [13].
This transformation is a crucial step in the clinical reasoning
process.
With the 2 switch buttons on top, the learner can toggle the display of connections and can access an expert's map at any time.
Table 1. Overview of categories and subthemes that have been translated into software requirements, and how they have been implemented in the clinical reasoning tool.

Category: Psychological theories
Subtheme: Patient illness script
Requirements: The concept of developing an illness script is implemented as a concept map (directed weighted graph), with findings, differential diagnoses, tests, and therapy options as nodes. Relations can be visualized with connections between the nodes, which can be weighted (eg, "slightly related," "highly related").

Subtheme: Dual processing
Requirements: Learners can submit a final diagnosis anytime in the virtual patient (VP) scenario to encourage pattern recognition approaches.

Category: Patient-centeredness
Subtheme: Cognitive errors
Requirements: The final diagnosis/-es of the learner are compared with the expert's diagnoses. In case of a mismatch, the tool analyzes potential sources of errors or biases.

Category: Teaching/assessment
Subtheme: Methods
Requirements: Concept mapping as a suitable method of teaching and assessing clinical reasoning is the basis of the tool.

Subtheme: Scoring
Requirements: The nodes of the concept map are based on the Medical Subject Headings thesaurus; therefore, they can be scored by comparing them with expert nodes, including synonyms and more/less specific entries.

Category: Learner-centeredness
Subtheme: Learning analytics
Requirements: After each VP session, the learners can access a dashboard with their clustered scores, development of their performance over time/VPs, and comparison with their peers.

Subtheme: Feedback
Requirements: Both process- and outcome-oriented feedback is provided by the tool and can be accessed by the learner anytime.

Category: Context
Subtheme: Cognitive load
Requirements: In the development process, we conducted usability tests to test the general usability of the tool and specifically uncover potential improvements in terms of extraneous cognitive load [11].
Figure 1. Wireframe model of the clinical reasoning tool (right side) integrated into a virtual patient system (left side).
Technical Approach
The tool is programmed in Oracle Java, using Java Server Faces
as a framework; Hibernate, an open-source Object Relational
Mapping solution, for Java applications; and JGraphT, which
provides mathematical graph-theory objects and algorithms.
All user actions, including a time stamp and at which stage in
the VP scenario they were performed, are stored in an Oracle
database, but alternative database management systems such as
MySQL can be used as well. The client side is implemented in
dynamic hypertext markup language, including open source
libraries and frameworks such as JQuery, JSPlumb, and D3.js.
The tool is available in English, German, and Polish and can
be downloaded under a Massachusetts Institute of Technology
license [14]. Exemplary VPs are available in the VP system
CASUS [15].
Patient Illness Script Modeled as a Concept Map
Concept mapping is an approach applied in medical education
in general [16] and in clinical reasoning training and assessment
[17,18]. In the grounded theory study, which was the basis for
the development of the tool, concept mapping was identified as
a suitable method of teaching and assessing clinical reasoning
skills [10], as it reflects the nonlinear aspects of the process.
Illness scripts are mental representations, which link clinical
information about a disease, examples of that disease, and its
symptoms [8]. Illness scripts are developed by experiencing
many different patient cases. The tool uncovers the patient’s
illness script and enables the learners to build their own script
in the form of a concept map during a VP scenario. Learners
can select and connect elements of the concept map and label
the connections (Figure 2). In the back end, the concept map is
implemented as a directed weighted graph representation of the
learner’s and the expert’s maps.
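For illustration, a minimal sketch of such a graph using JGraphT (which the tool employs) might look as follows; the node labels and weight values are illustrative and not the tool's internal data model:

import org.jgrapht.graph.DefaultWeightedEdge;
import org.jgrapht.graph.SimpleDirectedWeightedGraph;

public class IllnessScriptSketch {

    public static void main(String[] args) {
        SimpleDirectedWeightedGraph<String, DefaultWeightedEdge> map =
                new SimpleDirectedWeightedGraph<>(DefaultWeightedEdge.class);

        // Nodes: one finding and two differential diagnoses
        map.addVertex("Fever");
        map.addVertex("Pneumonia");
        map.addVertex("Pulmonary embolism");

        // Weighted connections (eg, 5 = "highly related", 1 = "speaks against")
        DefaultWeightedEdge supports = map.addEdge("Fever", "Pneumonia");
        map.setEdgeWeight(supports, 5.0);

        DefaultWeightedEdge against = map.addEdge("Fever", "Pulmonary embolism");
        map.setEdgeWeight(against, 1.0);
    }
}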
Dual Processing and Cognitive Errors
Dual processing is the application of analytical and nonanalytical
reasoning [19]. Cognitive errors and biases are associated with
both approaches [20] and are an essential component of the
clinical reasoning process. We considered it as important to
allow and encourage the application of both approaches when
learners are working with a VP. Therefore, throughout a VP
scenario, learners can submit differential diagnoses as their
working or final diagnoses and assess their level of confidence
with that decision on a slider (scale from 1=“not at all confident”
to 100=“very confident”). If there is a mismatch between the
learner’s and the expert’s decisions, the software analyzes
potential cognitive errors based on the stage, identified findings,
differentials, and VPs the learner has accessed previously. The
analysis currently focuses on identifying and elaborating 5
common types of cognitive errors—premature closure,
availability bias, confirmation bias, representativeness, and base
rate neglects [20] (Table 2). To detect base rate neglect and
representativeness errors, the experts have to provide additional
information, such as disease prevalence, with their concept map.
The clinical reasoning tool then provides feedback and
explanations about the error, and the user can choose to try
again, continue the VP scenario, or get more feedback (Figure
3).
Scoring
Scoring and feedback are based on the process of building the
concept map and comparing it with an expert’s map.
Partial scores for the final diagnosis submission range between
0.5 and 0.9 (Figure 3), depending on the distance (ie, number
of edges) of the learner’s diagnosis to that of the expert’s in the
MeSH tree. The distance can be negative if the student’s final
diagnosis is more specific than the expert’s solution. For
example, if the learner has submitted the final diagnosis as
“bacterial pneumonia” and the expert has submitted
“pneumonia,” the distance between those two terms in the MeSH
hierarchy is −1. The score is then calculated by a heuristic
formula:
Score = 1 − (abs(distance) / 10)
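In Java, this heuristic can be transcribed directly; the class and method names below are illustrative:

public final class DiagnosisScoring {

    /** meshDistance: number of edges between the learner's and the expert's diagnosis
        in the MeSH tree; negative if the learner's term is more specific. */
    public static double finalDiagnosisScore(int meshDistance) {
        // eg, "bacterial pneumonia" vs "pneumonia": distance -1, score 0.9
        return 1.0 - (Math.abs(meshDistance) / 10.0);
    }
}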
All changes to the concept map at each stage of the VP scenario
are recorded, stored in a database, and scored in comparison
with the expert’s map. Because the elements of the map are
based on MeSH, we can account for synonyms or more/less
specific terms for scoring. Additionally, when the learner moves
to the next stage in the VP scenario, all nodes in each category
are scored based on the expert’s map at this stage. The heuristic
algorithm is as follows:
Overall score at stage = all scores / (correct nodes +
missed nodes) − 0.05 × addNodes
Figure 2. Screenshot of an exemplary VP and a learner's map embedded in the VP system CASUS. The switches on top allow showing/hiding all connections and the expert's map; a help page and a short introductory video are available. Diagnoses can be marked as final or working diagnoses and as must-not-miss (exclamation mark) diagnoses.
Table 2. Overview of errors that can be detected by the tool in case the learner has submitted a final diagnosis that is different from the expert's.

Type of error: Premature closure (accepting a diagnosis before it is fully confirmed)
Detection: Submission of a final diagnosis at an early stage, after which the expert has added finding(s) or tests that are connected to the final diagnosis
Data required: Findings and tests of the learner and the expert (including stage); connections to the final diagnosis of the expert; submission stage

Type of error: Availability bias (what recently has been seen is more likely to be diagnosed later on)
Detection: Learner has worked on or accessed a virtual patient with a related final diagnosis (one Medical Subject Headings hierarchy level up/down) within the last 5 days
Data required: Previously created concept maps (date of last access and final diagnoses)

Type of error: Confirmation bias (tendency to look for confirming evidence for a diagnosis)
Detection: Learner has not added disconfirming finding(s) or "speaks against" connections between disconfirming findings and the final diagnosis
Data required: Findings of the learner and the expert; connections between findings and differential diagnoses

Type of error: Representativeness (focus on prototypical features of a disease)
Detection: Learner has connected nonprototypical findings as "speaks against" findings to the correct final diagnosis
Data required: Findings of the learner and the expert; nonprototypical findings (additional information in expert map)

Type of error: Base rate neglect (ignoring the true rate of a disease)
Detection: A rare final diagnosis has been submitted instead of the more prevalent correct final diagnosis
Data required: Differential diagnoses of the learner and the expert; prevalence of diagnoses (additional information in expert map)
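As an example of how such a rule can be operationalized, the following simplified Java sketch checks the premature-closure condition from Table 2; the classes and field names are illustrative, not the tool's actual implementation:

import java.util.Collection;

public class PrematureClosureCheck {

    // One expert node (finding or test), with the stage at which it was added
    public static class ExpertNode {
        final int stageAdded;
        final boolean connectedToFinalDiagnosis;

        ExpertNode(int stageAdded, boolean connectedToFinalDiagnosis) {
            this.stageAdded = stageAdded;
            this.connectedToFinalDiagnosis = connectedToFinalDiagnosis;
        }
    }

    // True if the expert added diagnosis-related findings/tests only after the
    // stage at which the learner already submitted the final diagnosis
    public static boolean isPrematureClosure(int learnerSubmissionStage,
                                             Collection<ExpertNode> expertNodes) {
        return expertNodes.stream()
                .anyMatch(n -> n.connectedToFinalDiagnosis
                        && n.stageAdded > learnerSubmissionStage);
    }
}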
(all scores = sum of all scores of the user; correct nodes = all nodes scored ≥0.5; missed nodes = nodes added by the expert, but not by the learner at the given stage; addNodes = nodes added by the learner but not present in the expert map)
Figure 3. Flowchart of the process of submitting a final diagnosis by a learner.
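Expressed in Java, the stage-level heuristic might look as follows; the parameter names follow the legend above and are illustrative, not the tool's actual identifiers:

public final class StageScoring {

    public static double overallStageScore(double sumOfUserScores, // "all scores"
                                            int correctNodes,      // user nodes scored >= 0.5
                                            int missedNodes,       // expert nodes the learner missed
                                            int addNodes) {        // learner nodes absent from expert map
        return sumOfUserScores / (correctNodes + missedNodes) - 0.05 * addNodes;
    }
}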
The learner’s problem representation (summary statement) is
scored based on a comparison with the expert’s statement and
a list of semantic qualifiers (eg, “acute” vs “chronic”) suggested
by Connell et al [21].
The current rating algorithm counts the semantic qualifiers used
by the learner and the expert. On the basis of the assessment
rubric suggested by Smith et al [22], the score for the use of
semantic qualifiers is defined as follows:
Score 0: Less than 30% of semantic qualifiers used by the
expert
Score 1: <60% and ≥30% of semantic qualifiers used by
the expert
Score 2: ≥60% of semantic qualifiers used by the expert
The weighting of scores is based on the postencounter form
scoring model suggested by Durning et al [23].
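As an illustration, the rubric can be expressed as a small Java method that maps the share of the expert's semantic qualifiers reused by the learner to a score of 0, 1, or 2; the class, method, and parameter names are illustrative assumptions:

import java.util.Set;

public final class SummaryStatementScoring {

    public static int semanticQualifierScore(Set<String> learnerQualifiers,
                                             Set<String> expertQualifiers) {
        if (expertQualifiers.isEmpty()) {
            return 0; // no reference qualifiers to compare against
        }
        long shared = expertQualifiers.stream()
                .filter(learnerQualifiers::contains)
                .count();
        double ratio = (double) shared / expertQualifiers.size();
        if (ratio >= 0.6) return 2;
        if (ratio >= 0.3) return 1;
        return 0;
    }
}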
Learning Analytics, Feedback, and Adaptability
All scores are clustered based on a model of the clinical
reasoning process developed by Charlin et al [24] and
correspond with the concept map elements (Table 3). The scores
are presented in a student-centered dashboard after a VP scenario
has been completed.
Additionally, we implemented clusters for self-directed learning
and dual processing, which are not yet fed back to the learner.
The self-directed learning cluster is currently based on the
percentage of nodes and connections that have been added by
the learner before/without consulting the expert solution. Dual
processing considers at which stage a learner submits a final
diagnosis; that is, submitting a final diagnosis at an early stage
of the VP scenario is an indicator of a more nonanalytical
reasoning approach. In a process-oriented approach, the learners
can at any stage consult and compare their map with the expert’s
or peers’ maps. The progress of the learner is tracked not only
within a VP scenario but also throughout a VP collection; these
process data feed the learner’s dashboard, in which clustered
scores and peer scores are visualized and recommendations for
further activities are displayed.
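For illustration, the self-directed learning indicator described above could be computed as follows; the names are illustrative, not the tool's actual code:

public final class SelfDirectedLearningIndicator {

    public static double selfDirectedShare(int elementsAddedBeforeExpertView,
                                           int elementsAddedTotal) {
        // share of nodes and connections added before/without consulting the expert map
        return elementsAddedTotal == 0
                ? 0.0
                : (double) elementsAddedBeforeExpertView / elementsAddedTotal;
    }
}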
Application Program Interface to Virtual Patient
Systems
A major technical prerequisite for the implementation was the
use of the tool as a plug-in for Web-based VP systems through
an open application program interface (API).
The communication between the tool and the VP system is
required for (1) the initialization and update during the VP
session, (2) the display of performance data, and (3) a search
functionality (optional). Further details of the API are available
in the GitHub Wiki [25].
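The three interaction points can be pictured with the following hypothetical Java interface; the method names and signatures are illustrative only, and the actual API is documented in the project's GitHub wiki [25]:

public interface VpSystemAdapter {

    // (1) Initialize the tool for a VP session and update it at each stage change
    void initOrUpdateSession(String vpId, String learnerId, int stage);

    // (2) Provide clustered performance data for display in the VP system
    String fetchPerformanceData(String learnerId);

    // (3) Optional: delegate a term search (eg, a MeSH lookup) to the tool
    java.util.List<String> searchTerms(String query);
}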
For the pilot study, we integrated the clinical reasoning tool into
the linear VP system CASUS [15,26], a Web-based application
for authoring and delivering case-based learning. A CASUS
VP typically presents a patient’s story, from the first introduction
to the treatment in about 5 to 15 screen cards with a variable
combination of text elements, multimedia, and questions. The
clinical reasoning tool is displayed in an iframe in the CASUS
application; the performance data and the search functionality
are integrated in the CASUS dashboard.
Usability Testing and Implementation of a Pilot Study
During the development process, we conducted usability tests
based on a VP with a prototypical version of the tool;
participants were 2 health care students and 2 health care
professionals, who were familiar with the concept of VPs. For
the usability test, we adapted a freely available VP from the
eViP repository [27] and presented it with the prototypical
clinical reasoning tool. In total, 4 sessions with the same testing scenario were conducted by one of the authors (IH) using a think-aloud approach [28]; participants were briefed about the VP,
the prototype, and its purpose; were asked about their
expectations, before they could freely explore the VP and the
tool; and were further asked about their reactions. Finally, in a
debriefing, participants were invited to elaborate on their
impressions and suggestions for changes. All findings were
documented in field notes and subsequently discussed among
the authors. Similar structured follow-up sessions with the same
participants were held with a more advanced version of the tool
in the VP system CASUS.
From October to December 2016, we implemented a pilot field
study with an evaluation of the tool based on 3 VPs in the VP
system CASUS. The VPs were reviewed by a course instructor,
who regarded the level of difficulty as appropriate for the
learners’ level of expertise and confirmed that the VPs match
the curricular objectives.
The VPs were integrated into the VP collection of the internal
medicine/surgery course at the medical faculty of
Ludwig-Maximilian University of Munich (LMU Munich),
Germany. The 3 VPs covered the following topics:
VP 1: a 19-year-old patient with a sore throat; final
diagnosis: mononucleosis
VP 2: a 66-year-old patient with a syncope; final diagnosis:
bronchial carcinoma
VP 3: a 76-year-old patient with acute dyspnea; final
diagnosis: pulmonary embolism
In total, 107 fourth-year medical students were invited to
participate in the study as part of their regular curricular
activities. To evaluate the usability of the tool and the integration
into the VP system, we used a 5-item questionnaire (Table 4),
based on selected questions of the System Usability Scale [29].
The Web-based questionnaire was accessible after each VP
session. Participation was voluntary and anonymous.
Table 3. Description of the clusters on which the learning analytics dashboard is based.

Concept in the model by Charlin et al: Representation of the problem and determination of objectives of encounter
Cluster: Scores for adding problems/findings

Concept: Investigations
Cluster: Scores for adding tests

Concept: Therapeutic interventions
Cluster: Scores for adding therapeutic options

Concept: Categorization for the purpose of action
Cluster: Scores for generating differential diagnoses and scores for the final diagnosis

Concept: Final representation of the problem and semantic transformation
Cluster: Scores for the summary statement
Table 4. Results of the usability questionnaire (n=10), rated on a 6-point Likert scale (0=totally disagree, 5=totally agree).

Question 1: I think that I would like to use the clinical reasoning tool frequently. Mean response (minimum; maximum): 3 (0; 5)
Question 2: I found the clinical reasoning tool unnecessarily complex. Mean response: 3.2 (1; 5)
Question 3: I found the various functions in the clinical reasoning tool were well integrated. Mean response: 3.4 (2; 5)
Question 4: The clinical reasoning tool helps structuring my thoughts. Mean response: 2.8 (1; 5)
Question 5: What was good? What should be improved? 3 free text responses
Ethics Approval and Consent to Participate
The implementation of the pilot study and evaluation was
approved by the ethical committee at LMU Munich, Germany.
Results
Usability Tests
The prototype-based usability testing revealed some important
usability issues; for example, in the prototype, the concept map
elements representing the illness script were displayed in a tab
layout, thus, unintentionally suggesting an order in which the
components had to be worked on. On the basis of this finding,
we changed the layout, so that all components were visible at
once. Also, two of the participants wanted to enter a negative
finding (“no fever”), which was not possible at that time, but
was implemented in the next version of the tool. In the
follow-up usability tests with a prefinal version of the tool, we
identified minor issues, such as the display size and content of
tooltips and unclear labeling of buttons. These issues were fixed
before the start of the pilot study. The complete usability
scenario, the field notes, and a list of the detected and solved
issues can be provided on request.
Pilot Study
During the pilot field testing period from October 15, 2016 to
January 31, 2017, with the 3 VPs, 64 of the 107 students created
118 concept maps of varying complexity. This response rate is
comparable with similar VP integration scenarios [30]. During
the testing period, we constantly evaluated the usage data and
further developed the tool. For example, we noted at the
beginning of the pilot testing that learners hesitated to interact
with the tool; therefore, we further expanded and improved the
scaffolding and prompting. Overall, the learners entered 284
problems, 324 differential diagnoses, 158 tests, and 21 treatment
options, and submitted 65 final diagnoses; however, only 36
connections were drawn and 19 summary statements composed.
Table 5 shows the distribution over the 3 VPs. The questionnaire
was completed by 10 participants (Table 4); no usability issues
were reported.
Of the free-text responses, 2 reported a technical glitch, which was fixed immediately; the third response explicitly praised the idea of the clinical reasoning tool.
Table 5. Total number and average number of nodes added per virtual patient (VP) by the users. The number of nodes added by the expert for each VP is shown in parentheses.

Category | VP 1 total | VP 1 average per user (expert) | VP 2 total | VP 2 average per user (expert) | VP 3 total | VP 3 average per user (expert)
Created maps | 62 | | 24 | | 31 |
Final diagnosis submitted (percentage of created maps) | 38 (61%) | | 7 (29%) | | 20 (65%) |
Findings/problems | 159 | 2.6 (8) | 66 | 2.8 (7) | 59 | 1.9 (8)
Differential diagnoses | 163 | 2.6 (8) | 94 | 3.9 (8) | 67 | 2.2 (5)
Tests | 67 | 1.1 (5) | 50 | 2.1 (8) | 41 | 1.3 (8)
Therapies | 9 | 0.1 (1) | 4 | 0.2 (1) | 8 | 0.3 (4)
Connections | 21 | 0.3 (5) | 14 | 0.6 (8) | 1 | 0 (5)
Discussion
Overview
On the basis of a previous grounded theory exploration [10],
our aim was to develop a tool that supports the training of
clinical reasoning skills by addressing the most important steps
in the clinical reasoning process. The current version of the tool
is a good starting point from which we will continue a cyclic
process of further evaluation, adaptation, and analysis of research
studies to advance the functionalities.
The major contribution of our study is a description of an
elaborated clinical reasoning tool based on a qualitative research
study [10]. Thus, the tool reflects the current research in clinical
reasoning training by translating the outcomes of the study into
concrete software components and instructional processes.
Concept mapping as the fundamental principle of the tool has
been shown to be an effective teaching and assessment approach
in health care education (eg, [31,32]). We adapted the typically
unstructured approach of concept mapping by providing four
main components of clinical reasoning in which the learner can
add nodes: problems, differential diagnoses, tests, and therapies.
Thus, the steps of the clinical reasoning process and components
of the patient illness script are explicitly represented in the maps
to guide the learners when they are working on a VP scenario.
If learners require further support, they can consult an expert’s
concept map and compare it with their own map.
Pilot Study
The results of the pilot study show that learners interact with
the tool, but the average number of nodes added by the learners
when compared with the expert map was quite low. Potential
explanations could be technical barriers, lack of motivation, or
limited clinical reasoning abilities. Because we had tried to identify potential technical barriers in the initial usability tests, received no support requests from the learners during the pilot study, and found no relevant issues in the analysis of log files and database entries, we believe that technical barriers were not the main reason for the low number of nodes added. In 2 of the 3 VPs, in more than 60% (n=38) of the VP
sessions, the learners submitted a final diagnosis, despite the
low number of nodes added, which could indicate a tendency
of learners to focus on the outcome (ie, final diagnosis) rather
than on the process of clinical reasoning. The participants of
the pilot study were students at LMU Munich, who were familiar
with VPs since their preclinical years. However, the VPs earlier
used by the students were less demanding concerning the clinical
reasoning process. Problems and findings of the patient,
differential diagnoses, and the final diagnosis were either
directly presented in an elaborated way by the VP author, or
students had to select appropriate choices from a short list. This
simplified approach put the learners in a more passive role and emphasized the importance of the outcome rather than the process, which could have influenced students' interaction with the new tool.
Interestingly, on average, learners added slightly more problems
and differential diagnoses for VP 2, but only 29% submitted a
final diagnosis. This could indicate that VP 2 was more difficult
to solve than VP 1 and VP 3, which is also supported by the
higher average number of differential diagnoses added for these
maps. A follow-up study is necessary to further investigate the
potential effect of VP difficulty on the clinical reasoning process.
Connections between the nodes are substantial components of
meaningful concept maps and show that learners understood
the concepts and their relations [18]. In the pilot study, only a
few connections were drawn, and in the questionnaire, we saw
a tendency that the tool did not optimally support learners to
structure their thoughts. This might indicate a need for further
explanations of concept mapping and/or improvement of the
functionality. Further data collection and analysis are needed
to find out more about these aspects.
For the pilot study, we combined the tool with a type of VP in which the patient is represented by a textual description and multimedia elements. However, the tool can also be integrated into scenarios that represent the patient more authentically and in which more emphasis can be placed on the patient's emotions and on identifying problems by actively asking questions.
Examples are VPs in the format of conversational agents in
which the learner can communicate in natural language with a
VP [33] or virtual reality applications [34]. We envision that
the tool could also be used in bedside teaching scenarios—for
example, as follow-up activities after a patient encounter to help
students document their reasoning process and to discuss it with
their supervisor. However, it is important to keep in mind that
authenticity has to be balanced with both cognitive load and
level of expertise of the learner [35]. Thus, less authentic VPs
as used in the pilot study can be helpful in preparing novice
students for more complex and authentic VP scenarios and
real-life patient encounters.
Further Development
Further development of the tool will focus on implementing
machine learning approaches to advance the comparing and
scoring of the summary statements and maps.
In the current version of the tool, the learner dashboard is created
and displayed within the tool. However, to allow a full
integration into learning and teaching infrastructures, such as
learning management systems, e-portfolios, or campus
management systems, we intend to map the performance data
to xAPI [36]. xAPI offers a vocabulary to collect user
experiences from different sources and store it in a learning
record store.
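To sketch what such a mapping might look like, a completed VP session could be expressed as a minimal xAPI-style actor-verb-object statement; the learner identifier, home page, and activity URL below are placeholders, and the actual mapping would follow the xAPI specification:

public class XapiStatementSketch {

    public static String completedStatement(String learnerId, String vpUrl) {
        // Minimal actor-verb-object statement serialized as JSON text
        return "{"
                + "\"actor\": {\"account\": {\"homePage\": \"https://example.org\", \"name\": \"" + learnerId + "\"}},"
                + "\"verb\": {\"id\": \"http://adlnet.gov/expapi/verbs/completed\"},"
                + "\"object\": {\"id\": \"" + vpUrl + "\"}"
                + "}";
    }
}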
The open API allows the integration of the clinical reasoning
tool into other VP systems than CASUS. Therefore, we are
currently working on integrating the tool into the branched VP
system OpenLabyrinth [37] as part of the European project
WAVES [38].
The tool will also be used for further research studies about
clinical reasoning in VPs aiming at answering open questions
on the design of a VP to optimally foster the training of clinical
reasoning. For example, we are currently implementing a study
investigating differences in the reasoning process of undergraduate medical students when comparing outcome- and process-oriented expert feedback [39].
Although the response rate of the questionnaire was low, we
sense that learners experienced difficulties in structuring their
thoughts with the tool, which is exemplified by the very few
connections added to the concept maps. The tool was designed
based on the results of a qualitative study on the clinical
reasoning learning process and VPs [10], and students were
involved in all relevant steps in both the research and the tool
implementation process. However, despite these efforts, it seems
that the tool does not fully address the learners’ needs; an
explanation could be that the students in the pilot study were
not familiar with the principles and steps involved in the clinical
reasoning process, as this is not explicitly taught at the medical
school at LMU Munich. To address this issue, we developed a
series of short videos explaining the basic principles of the
clinical reasoning process [40], which will be integrated into
the tool for the next testing cycle. Additionally, it could be that
creating the whole map is too complex for some learners,
especially if they are not familiar with this way of thinking.
Thus, we are implementing a more adaptive approach in which
less advanced learners are guided in a step-wise approach
through the map development process, thereby reducing the
cognitive load. Depending on the level of expertise and VP
difficulty, learners will be prompted to focus on a specific task
in the map-creation process. For example, they will be provided
with all the nodes and will be asked to focus on the task of
creating relevant connections or on the identification of the
problems of the patient.
Limitations
A limitation in our usability testing approach was the low
response rate of the survey. This low rate is comparable with
the response rates of other VP courses at the medical school at
LMU, and we believe that the reason for this is survey fatigue
of the participating students; especially in the 4th year, students
are exposed to a large number of questionnaires. Furthermore,
because we used only a subset of the 10-item questionnaire, we are only able to detect usability trends. Our
intention was to achieve a higher response rate with a short
questionnaire, which turned out to be ineffective. Therefore,
we will continue further usability cycles with the full 10-item
usability questionnaire in future usage scenarios and studies to
collect more reliable and standardized data.
Conclusions
We believe that the clinical reasoning tool is a valuable addition
for Web-based VP systems; it specifically aims to support the
clinical reasoning process and includes aspects so far not
systematically included in VP systems. We recommend
combining the tool with short and carefully designed VPs to
make full use of it (see examples at [15]). Additionally, the tool
can be used independently of VPs in face-to-face teaching
scenarios—for example, to complement clinical reasoning
curricula, problem-based-learning seminars, or bedside teaching.
We believe that the outcome of our study is relevant for
educators and researchers interested in advancing the teaching
of clinical reasoning in health care professions.
Acknowledgments
The authors would like to thank all students, educators, health care professionals, and computer scientists for their valuable
feedback and input during the conceptualization, development, and testing of the software. The project receives funding from the
European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No.
654857. AK is supported by internal funds at Jagiellonian University No. K/ZDS/006367. MA is CEO of the company Instruct,
which develops and distributes the VP system CASUS that served as an exemplary integration system for the tool.
Conflicts of Interest
None declared.
References
1. Ellaway R, Candler C, Greene P, Smothers V. 2006. MedBiquitous Virtual Patient Architecture URL: http://tinyurl.com/
jpewpbt [accessed 2017-10-30] [WebCite Cache ID 6qkBNcQGi]
2. Kononowicz AA, Zary N, Edelbring S, Corral J, Hege I. Virtual patients - what are we talking about? A framework to
classify the meanings of the term in healthcare education. BMC Med Educ 2015;15:11 [FREE Full text] [doi:
10.1186/s12909-015-0296-3] [Medline: 25638167]
3. Talbot TB, Sagae K, Bruce J, Rizzo A. Sorting out the virtual patient: how to exploit artificial intelligence, game technology
and sound education practices to create engaging role-playing simulations. Int J Gaming Comput Mediat Simul 2012;4:1-19.
[doi: 10.4018/jgcms.2012070101]
4. Vpsystems. Virtual Patients (VPs) in Healthcare Education URL: http://vpsystems.virtualpatients.net/ [accessed 2017-05-26]
[WebCite Cache ID 6qkBXuAaI]
5. Cook DA, Erwin PJ, Triola MM. Computerized virtual patients in health professions education: a systematic review and
meta-analysis. Acad Med 2010 Oct;85(10):1589-1602. [doi: 10.1097/ACM.0b013e3181edfe13] [Medline: 20703150]
6. Durning SJ, Lubarsky S, Torre D, Dory V, Holmboe E. Considering “nonlinearity” across the continuum in medical education
asssessment: supporting theory, practice, and future research directions. J Contin Educ Health Prof 2015;35(3):232-243.
[doi: 10.1002/chp.21298] [Medline: 26378429]
7. Pennaforte T, Moussa A, Loye N, Charlin B, Audétat M. Exploring a new simulation approach to improve clinical reasoning
teaching and assessment: randomized trial protocol. JMIR Res Protoc 2016 Feb 17;5(1):e26 [FREE Full text] [doi:
10.2196/resprot.4938] [Medline: 26888076]
8. Lubarsky S, Dory V, Audétat MC, Custers E, Charlin B. Using script theory to cultivate illness script formation and clinical
reasoning in health professions education. Can Med Educ J 2015;6(2):e61. [Medline: 27004079]
9. Watling CJ, Lingard L. Grounded theory in medical education research: AMEE Guide No. 70. Med Teach
2012;34(10):850-861. [doi: 10.3109/0142159X.2012.704439] [Medline: 22913519]
10. Hege I, Kononowicz AA, Berman NB, Lenzer B, Kiesewetter J. Advancing clinical reasoning in virtual patients - development
and application of a conceptual framework. GMS J Med Educ 2017 (forthcoming).
11. Davids MR, Halperin ML, Chikte UM. Review: Optimising cognitive load and usability to improve the impact of e-learning
in medical education. Afr J Health Prof Educ 2015;7(2):147-152. [doi: 10.7196/AJHPE.569]
12. NCBI. Medical Subject Headings (MeSH) URL: https://www.ncbi.nlm.nih.gov/mesh [accessed 2017-05-26] [WebCite
Cache ID 6qkC09iLM]
13. Bowen JL. Educational strategies to promote clinical diagnostic reasoning. N Engl J Med 2006 Nov 23;355(21):2217-2225.
[doi: 10.1056/NEJMra054782] [Medline: 17124019]
14. GitHub. Clinical Reasoning Tool URL: https://github.com/clinReasonTool/ClinicalReasoningTool [accessed 2017-05-26]
[WebCite Cache ID 6qkC2uGEV]
15. CASUS VP system. URL: http://crt.casus.net [accessed 2017-10-30] [WebCite Cache ID 6qkCGg9ty]
16. Daley BJ, Torre DM. Concept maps in medical education: an analytical literature review. Med Educ 2010 May;44(5):440-448.
[doi: 10.1111/j.1365-2923.2010.03628.x] [Medline: 20374475]
17. Vink SC, Van Tartwijk J, Bolk J, Verloop N. Integration of clinical and basic sciences in concept maps: a mixed-method
study on teacher learning. BMC Med Educ 2015 Feb 18;15:20 [FREE Full text] [doi: 10.1186/s12909-015-0299-0] [Medline:
25884319]
18. Torre DM, Durning SJ, Daley BJ. Twelve tips for teaching with concept maps in medical education. Med Teach
2013;35(3):201-208. [doi: 10.3109/0142159X.2013.759644] [Medline: 23464896]
19. Norman G. Dual processing and diagnostic errors. Adv Health Sci Educ Theory Pract 2009 Sep;14(Suppl 1):37-49. [doi:
10.1007/s10459-009-9179-x] [Medline: 19669921]
20. Norman GR, Eva KW. Diagnostic error and clinical reasoning. Med Educ 2010 Jan;44(1):94-100. [doi:
10.1111/j.1365-2923.2009.03507.x] [Medline: 20078760]
21. Connell KJ, Bordage G, Gecht MR, Rowland C. Assessing Clinicians' Quality of Thinking and Semantic Competence: A
Training Manual. Chicago: University of Illinois and Northwestern University Medical School; 1998.
22. Smith S, Kogan JR, Berman NB, Dell MS, Brock DM, Robins LS. The development and preliminary validation of a rubric
to assess medical students' written summary statements in virtual patient cases. Acad Med 2016 Jan;91(1):94-100. [doi:
10.1097/ACM.0000000000000800] [Medline: 26726864]
23. Durning SJ, Artino A, Boulet J, La Rochelle J, Van Der Vleuten C, Arze B, et al. The feasibility, reliability, and validity
of a post-encounter form for evaluating clinical reasoning. Med Teach 2012;34(1):30-37. [doi:
10.3109/0142159X.2011.590557] [Medline: 22250673]
24. Charlin B, Lubarsky S, Millette B, Crevier F, Audétat M, Charbonneau A, et al. Clinical reasoning processes: unravelling
complexity through graphical representation. Med Educ 2012 May;46(5):454-463. [doi: 10.1111/j.1365-2923.2012.04242.x]
[Medline: 22515753]
25. Hege I. GitHub. Documentation of the interface between a VP system and the clinical reasoning tool URL: https://github.
com/clinReasonTool/ClinicalReasoningTool/wiki/API-to-virtual-patient-systems [accessed 2017-10-30] [WebCite Cache
ID 6qkC68S0N]
26. Hege I, Kononowicz AA, Pfähler M, Adler M. Implementation of the MedBiquitous Standard into the learning system
CASUS. Bio-Algorithms Med Syst 2009;5(9):51-55.
27. Electronic Virtual Patient Project (eViP) URL: http://virtualpatients.eu/ [accessed 2017-05-26] [WebCite Cache ID
6qkCJSeq7]
28. Nielsen J. Usability Engineering. Camebridge, MA: AP Professional; 1993.
29. Usability. System Usability Scale (SUS) URL: https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.
html [accessed 2017-05-26] [WebCite Cache ID 6qkCuQ70Y]
30. Hege I, Kopp V, Adler M, Radon K, Mäsch G, Lyon H, et al. Experiences with different integration strategies of case-based
e-learning. Med Teach 2007 Oct;29(8):791-797. [doi: 10.1080/01421590701589193] [Medline: 18236274]
31. Cutrer WB, Castro D, Roy KM, Turner TL. Use of an expert concept map as an advance organizer to improve understanding
of respiratory failure. Med Teach 2011;33(12):1018-1026. [doi: 10.3109/0142159X.2010.531159] [Medline: 22225439]
32. Kassab SE, Hussain S. Concept mapping assessment in a problem-based medical curriculum. Med Teach
2010;32(11):926-931. [doi: 10.3109/0142159X.2010.497824] [Medline: 21039104]
33. Lok B. Teaching communication skills with virtual humans. IEEE Comput Graph Appl 2006;26(3):10-13. [doi:
10.1109/MCG.2006.68]
34. Patel V, Aggarwal R, Taylor D, Darzi A. Implementation of virtual online patient simulation. Stud Health Technol Inform
2011;163:440-446. [Medline: 21335836]
35. Durning SJ, Dong T, Artino JA, LaRochelle J, Pangaro L, van der Vleuten C, et al. Instructional authenticity and clinical
reasoning in undergraduate medical education: a 2-year, prospective, randomized trial. Mil Med 2012 Sep;177(9 Suppl):38-43.
[Medline: 23029859]
36. Tincanapi. What is the Experience API? URL: http://tincanapi.com/overview/ [accessed 2017-05-26] [WebCite Cache ID
6qkD0vvWQ]
37. Open Labyrinth. URL: http://openlabyrinth.ca/ [accessed 2017-05-26] [WebCite Cache ID 6qkDsK1Ue]
38. WAVES (Widening Access to Virtual Educational Scenarios) project. URL: http://wavesnetwork.eu [accessed 2017-10-30]
[WebCite Cache ID 6qkDyfJYU]
39. Hege I, Kononowicz AA, Nowakowski M, Adler M. Implementation of process-oriented feedback in a clinical reasoning
tool for virtual patients. 2017 Presented at: IEEE 30th International Symposium on Computer-Based Medical Systems
(CBMS); 2017; Thessaloniki, Greece p. 22-24.
40. Youtube. Clinical Reasoning Videos URL: https://www.youtube.com/playlist?list=PL5qLyx5XrSJb_q-4Zbi2o3fw2IySw379M
[accessed 2017-09-10] [WebCite Cache ID 6tNJePF0D]
Abbreviations
API: application program interface
LMU Munich: Ludwig-Maximilian University of Munich
MeSH: Medical Subject Headings
VP: virtual patient
Edited by G Eysenbach; submitted 26.05.17; peer-reviewed by C McGrath, SY Liaw, D Davies, H Salminen; comments to author
01.08.17; revised version received 24.09.17; accepted 11.10.17; published 02.11.17
Please cite as:
Hege I, Kononowicz AA, Adler M
A Clinical Reasoning Tool for Virtual Patients: Design-Based Research Study
JMIR Med Educ 2017;3(2):e21
URL: http://mededu.jmir.org/2017/2/e21/
doi:10.2196/mededu.8100
PMID:29097355
©Inga Hege, Andrzej A Kononowicz, Martin Adler. Originally published in JMIR Medical Education (http://mededu.jmir.org),
02.11.2017. This is an open-access article distributed under the terms of the Creative Commons Attribution License
(https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information,
a link to the original publication on http://mededu.jmir.org/, as well as this copyright and license information must be included.
... To improve CR skills, Hege et al. developed a new concept mapping tool that can be integrated into VP systems (17). While the tool has already been tested in some projects with regard to improving CR in human medicine (17)(18)(19), comparable studies for veterinary medicine are not yet available. ...
... To improve CR skills, Hege et al. developed a new concept mapping tool that can be integrated into VP systems (17). While the tool has already been tested in some projects with regard to improving CR in human medicine (17)(18)(19), comparable studies for veterinary medicine are not yet available. ...
... Authors of the CASUS R platform developed a special CR tool for integration into VP systems (17). Students can document their clinical decision-making process in a concept map consisting of four fields ("Relevant findings", "Differential diagnoses", "Examination/test" and "Therapy"). ...
Article
Full-text available
To provide students of veterinary medicine with the necessary day 1 competences, e-learning offerings are increasingly used in addition to classical teaching formats such as lectures. For example, virtual patients offer the possibility of case-based, computer-assisted learning. A concept to teach and test clinical decision-making is the key feature (KF) approach. KF questions consist of three to five critical points that are crucial for the case resolution. In the current study usage, learning success, usability and acceptance of KF cases as neurological virtual patients should be determined in comparison to the long cases format. Elective courses were offered in winter term 2019/20 and summer term 2020 and a total of 38 virtual patients with neurological diseases were presented in the KF format. Eight cases were provided with a new clinical decision-making application (Clinical Reasoning Tool) and contrasted with eight other cases without the tool. In addition to the evaluation of the learning analytics (e.g., processing times, success rates), an evaluation took place after course completion. After 229 course participations (168 individual students and additional 61 with repeated participation), 199 evaluation sheets were completed. The average processing time of a long case was 53 min, while that of a KF case 17 min. 78% of the long cases and 73% of KF cases were successfully completed. The average processing time of cases with Clinical Reasoning Tool was 19 min. The success rate was 58.3 vs. 60.3% for cases without the tool. In the survey, the long cases received a ranking (1 = very good, 6 = poor) of 2.4, while KF cases received a grade of 1.6, 134 of the respondents confirmed that the casework made them feel better prepared to secure a diagnosis in a real patient. Flexibility in learning (n = 93) and practical relevance (n = 65) were the most frequently listed positive aspects. Since KF cases are short and highlight only the most important features of a patient, 30% (n = 70) of respondents expressed the desire for more specialist information. KF cases are suitable for presenting a wide range of diseases and for training students' clinical decision-making skills. The Clinical Reasoning Tool can be used for better structuring and visualizing the reasoning process.
... Les preuves de validité des différentes échelles psychométriques évaluant la perception d'apprentissage du raisonnement chez les infirmiers sont présentées dans le tableau 3. Enfin, les développements technologiques, et plus particulièrement celui de l'intelligence artificielle, vont probablement bouleverser l'évaluation du raisonnement clinique dans les prochaines années [50][51][52] . ...
... Moreover, our students perceived this training session as more engaging and providing significantly higher motivation in the SG group. These results collectively confirm that learners are very motivated to use serious games as they are more engaging, interactive and provide more continuous feedback than traditional learning methods [16,45,50,51] or e-modules [16]. Although motivation and satisfaction are complex psychological processes, the motivational effect is important in education and might be associated with better learning outcomes [8,52,53]. ...
... Ces développements technologiques vont également probablement faire évoluer la compréhension de l'apprentissage du raisonnement clinique et de son évaluation. Par exemple, Hege et al. 50 ont développé un outil traduisant des éléments d'apprentissage conceptuel du raisonnement clinique sur un logiciel. Les apprenants construisaient le script pathologique de leur patient, comme une carte conceptuelle, lorsqu'ils travaillaient sur un scénario de patient virtuel. ...
Thesis
Improving clinical reasoning (CR) is an essential challenge for the medicine of tomorrow, as it is established that its imperfect use leads to insufficient quality of care. CR is a complex cognitive process. This intellectual activity synthesizes the information obtained from the clinical situation and uses it to make a diagnostic analysis and a patient management decision, integrating prior knowledge and experience. Training in this competency is therefore essential. Improving reasoning requires knowledge of the mechanisms that constitute it, and a review of these mechanisms forms the initial part of this thesis. Simulation-based training of health professionals is becoming widespread, with the objective of "never the first time on the patient." Serious games (SG), which are rapidly expanding, represent an interesting educational tool. A literature review on the effectiveness of SGs, and more specifically in the context of CR, is also included in the initial part of this thesis. SGs are effective and can, among other things, target specific competencies, including CR. However, most studies on CR in the context of SGs remain subjective, relying on qualitative evaluations or learner self-assessments, or address only the outcome (decision making). Consequently, the educational value and the modalities of SGs in CR training remain to be explored further. The serious game LabForGames Warning was developed in the LabForSIMS simulation center for nursing students and targets the detection of patient deterioration and communication. This thesis aims to test a learning approach using simulation with serious games in order to improve CR among health professionals. A first study evaluated the validity of the serious game LabForGames Warning according to the theoretical framework proposed by Messick. It showed that scores and playing time could not differentiate the level of nurses' clinical competencies. However, validity evidence was obtained for content, response process, and internal structure. Even though this version of the game therefore cannot be used for summative assessment of students, our study shows that the game is well accepted by students and can be used for training within an educational program. A second study evaluated the effectiveness of two educational modalities for learning CR related to the detection of clinical deterioration of a patient, comparing a group of nursing students trained by simulation with LabForGames Warning with a group trained by traditional teaching. CR was assessed with script concordance tests immediately after training and 1 month later. This multicenter randomized study included 146 volunteer nursing students. No significant difference in CR was observed between training by simulation with the serious game and traditional teaching. However, satisfaction and motivation were better with simulation-based teaching. In conclusion, we first confirmed the validity of the serious game LabForGames Warning as an educational tool for formative, not summative, purposes.
Then, although no difference in CR learning was observed between training by simulation with the serious game and traditional teaching, satisfaction and motivation were better with game-based simulation teaching. Further studies are needed to clarify the modalities and educational strategies of SGs in the training of health professionals, for example the place of debriefing and the role of motivation. Indeed, rapidly expanding technological developments such as artificial intelligence will transform CR training and the educational tools available in the coming years.
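The concept-map comparison described in the citation at the top of this entry can be illustrated with a minimal sketch: a learner map and an expert map are represented as sets of MeSH-coded nodes and connections, and their overlap is scored. All class names, the example MeSH identifiers, and the overlap measure are illustrative assumptions, not the actual implementation of the tool.

```python
# Minimal sketch (assumptions only): a concept map as nodes keyed by MeSH IDs
# plus directed connections, and a simple overlap score against an expert map.
from dataclasses import dataclass, field

@dataclass
class ConceptMap:
    # nodes: MeSH descriptor IDs (hypothetical examples below)
    nodes: set[str] = field(default_factory=set)
    # connections: (source MeSH ID, target MeSH ID)
    edges: set[tuple[str, str]] = field(default_factory=set)

def overlap(student: ConceptMap, expert: ConceptMap) -> dict:
    """Share of the expert's nodes and edges that the student also mapped."""
    node_hits = student.nodes & expert.nodes
    edge_hits = student.edges & expert.edges
    return {
        "node_coverage": len(node_hits) / len(expert.nodes) if expert.nodes else 0.0,
        "edge_coverage": len(edge_hits) / len(expert.edges) if expert.edges else 0.0,
        "extra_nodes": student.nodes - expert.nodes,
    }

# Example: the student mapped fewer findings and connections than the expert.
expert = ConceptMap(nodes={"D005334", "D010146", "D003924"},
                    edges={("D005334", "D003924")})
student = ConceptMap(nodes={"D005334"}, edges=set())
print(overlap(student, expert))
```

A real feedback mechanism would additionally have to handle synonyms and the hierarchical relations between MeSH terms rather than rely on exact identifier matches.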
... The use of digital teaching methods has been recommended to address gaps in clinical reasoning skills teaching and complement traditional face-to-face methods [8,12-15]. Virtual patients, a specific type of computer program that simulates clinical scenarios, have been recommended as an effective method [9,16,17]. ...
... The think-aloud tasks involved observing students verbalizing their thoughts in real time while completing one patient case in eCREST. This method can provide insights into the clinical reasoning of medical students, as it provides access to their conscious thought processes [13,39,41,42]. A practice think-aloud task was given before the main task to ensure that students were comfortable with the process. ...
Article
Full-text available
BACKGROUND: Improving clinical reasoning skills—the thought processes used by clinicians to formulate appropriate questions and diagnoses—is essential for reducing missed diagnostic opportunities. The electronic Clinical Reasoning Educational Simulation Tool (eCREST) was developed to improve the clinical reasoning of future physicians. A feasibility trial demonstrated acceptability and potential impacts; however, the processes by which students gathered data were unknown. OBJECTIVE: This study aims to identify the data gathering patterns of final year medical students while using eCREST and how eCREST influences the patterns. METHODS: A mixed methods design was used. A trial of eCREST across 3 UK medical schools (N=148) measured the potential effects of eCREST on data gathering. A qualitative think-aloud and semistructured interview study with 16 medical students from one medical school identified 3 data gathering strategies: Thorough, Focused, and Succinct. Some had no strategy. Reanalysis of the trial data identified the prevalence of data gathering patterns and compared patterns between the intervention and control groups. Patterns were identified based on 2 variables that were measured in a patient case 1 month after the intervention: the proportion of Essential information students identified and the proportion of irrelevant information gathered (Relevant). Those who scored in the top 3 quartiles for Essential but in the lowest quartile for Relevant displayed a Thorough pattern. Those who scored in the top 3 quartiles for Relevant but in the lowest quartile for Essential displayed a Succinct pattern. Those who scored in the top 3 quartiles on both variables displayed a Focused pattern. Those whose scores were in the lowest quartile on both variables displayed a Nonspecific pattern. RESULTS: The trial results indicated that students in the intervention group were more thorough than those in the control groups when gathering data. The qualitative data identified data gathering strategies and the mechanisms by which eCREST influenced data gathering. Students reported that eCREST promoted thoroughness by prompting them to continuously reflect and allowing them to practice managing uncertainty. However, some found eCREST to be less useful, and they randomly gathered information. Reanalysis of the trial data revealed that the intervention group was significantly more likely to display a Thorough data gathering pattern than controls (21/78, 27% vs 6/70, 9%) and less likely to display a Succinct pattern (13/78, 17% vs 20/70, 29%; χ²₃=9.9; P=.02). Other patterns were similar across groups. CONCLUSIONS: Qualitative data suggested that students applied a range of data gathering strategies while using eCREST and that eCREST encouraged thoroughness by continuously prompting the students to reflect and manage their uncertainty. Trial data suggested that eCREST led students to demonstrate more Thorough data gathering patterns. Virtual patients that encourage thoroughness could help future physicians avoid missed diagnostic opportunities and enhance the delivery of clinical reasoning teaching.
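The quartile rules used to label data gathering patterns in the abstract above can be expressed as a short sketch. The function below is an assumed reading of those rules, not code from the eCREST study, and the example proportions are invented.

```python
# Sketch: classify data gathering patterns from two per-student scores, following
# the quartile rules described in the abstract (assumed thresholds, not study code).
import numpy as np

def classify(essential, relevant):
    """essential, relevant: arrays of per-student proportions in [0, 1]."""
    essential = np.asarray(essential, dtype=float)
    relevant = np.asarray(relevant, dtype=float)
    e_q1 = np.quantile(essential, 0.25)   # lowest-quartile cut-off for Essential
    r_q1 = np.quantile(relevant, 0.25)    # lowest-quartile cut-off for Relevant
    labels = []
    for e, r in zip(essential, relevant):
        if e > e_q1 and r > r_q1:
            labels.append("Focused")       # high on both variables
        elif e > e_q1:
            labels.append("Thorough")      # high Essential, low Relevant
        elif r > r_q1:
            labels.append("Succinct")      # high Relevant, low Essential
        else:
            labels.append("Nonspecific")   # low on both variables
    return labels

print(classify([0.9, 0.8, 0.3, 0.2], [0.2, 0.9, 0.8, 0.1]))
```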
... VP scenarios -with interaction as a key feature -are a suitable way to generate an authentic learner experience outside of a real environment (Chittaro & Ranon, 2007). Interactive patient scenarios are a common form of VP used to advance clinical reasoning skills in learners through interaction with a series of questions, menus or decision points (Hege et al., 2017). By engaging the learners' senses through sound, sight, and interaction, VPs can enhance information gathering (Chittaro & Ranon, 2007), a critical component of the clinical reasoning process. ...
Article
This paper reports on a longitudinal, design-based research (DBR) study to promote clinical decision making using a virtual patient (VP) simulation for emergency renal care. The VP was piloted with pharmacy students, then offered as an interprofessional learning exercise for pharmacy and medical students, before being introduced as part of the curriculum. In this paper, the DBR framework used to design, implement and evaluate the VP is described. The iterative changes made and implications for integration of the virtual patient simulation in the pharmacy curriculum are discussed.
... Educational design researchers often frame these goals within Stokes' 'Pasteur's quadrant' (Stokes, 1997, p.73). Any initial design and implementation inevitably contains flaws -indeed educational design research has been characterised as "research through mistakes" (Anderson and Shattuck, 2012, p.17), hence the requirement for an iterative approach with a commitment to multiple steps of development and refinement. In one example of design research in medical education, Hege et al. (2017) reported the development of a clinical reasoning tool using virtual patients. The authors first translated a conceptual clinical reasoning framework into software requirements, blueprinting user experience and incorporating adaptive feedback. ...
Article
Advanced radiotherapy techniques such as image-guided adaptive brachytherapy for cervical cancer improve local tumour control and reduce treatment toxicity. This benefit is critically dependent on radiotherapy targeting or "contouring" by oncologists. Numerous studies have shown considerable inter-observer contouring variation across all tumour sites, often measured in centimetres, suggesting that current methods of teaching contouring are ineffective. Moreover, assessing contouring competency is currently a subjective, time-consuming and onerous process. The aim of this programme of research is to investigate the assessment and teaching of radiotherapy contouring within an educational design research framework. The thesis reviews the limitations and challenges of current strategies to improve radiotherapy contouring and how insights from the educational literature such as cognitive load theory, deliberate practice theory, and best practices in assessment and feedback can inform and improve contouring assessment and teaching. Real-world data from two studies of online assessment and education for radiotherapy contouring, within an international clinical trial of advanced radiotherapy techniques for locally advanced cervical cancer, were analysed to substantiate the limitations of current approaches within a clinical trial setting. The thesis describes a novel low-fidelity radiotherapy contouring simulation tool developed to address some of the issues identified in the clinical studies. A detailed usability study was carried out in a small group of oncologists, which also yielded interesting insights into their clinical reasoning and self-regulation processes. The simulation was then used in three pilot studies of different types of learners (trainees and experts) and programmes (one-off workshops and longitudinal programmes) to explore its acceptability, usability and effectiveness. The thesis concludes by discussing possible approaches for the next iteration of software development and educational research, which could lead to meaningful change in the teaching and assessment of radiotherapy contouring.
... Teaching aids are available from a variety of sources as free, open-access, medical educational resources: in repositories such as MedEdPortal [19]; on websites of national and international organisations, such as the Society of Improving Diagnosis in Medicine (SIDM) [20,21]; as outcomes of consensus building initiatives, such as The UK Clinical Reasoning in Medical Education group (CReME) [7]; or as part of web-based CR development tools, e.g. [22]. Despite that, our observation after conducting the interviews is that many of our respondents were either unaware of their existence or were not prepared to use such resources in their teaching practice. ...
Article
Full-text available
Background Effective clinical reasoning is a core competency of health professionals that is necessary to assure patients’ safety. Unfortunately, adoption of longitudinal clinical reasoning curricula is still infrequent. This study explores the barriers that hinder the explicit teaching of clinical reasoning from a new international perspective. Methods The context of this study was a European project whose aim is to develop a longitudinal clinical reasoning curriculum. We collected data in semi-structured interviews with responders from several European countries who represent various health professions and have different backgrounds, roles and experience. We performed a qualitative content analysis of the gathered data and constructed a coding frame using a combined deductive/inductive approach. The identified themes were validated by parallel coding and in group discussions among project members. Results A total of 29 respondents from five European countries participated in the interviews; the majority of them represent medicine and nursing sciences. We grouped the identified barriers into eight general themes: Time, Culture, Motivation, Clinical Reasoning as a Concept, Teaching, Assessment, Infrastructure and Others. Subthemes included issues with discussing errors and providing feedback, awareness of clinical reasoning teaching methods, and tensions between the groups of professionals involved. Conclusions This study provides an in-depth analysis of the barriers that hinder the teaching of explicit clinical reasoning. The opinions are presented from the perspective of several European higher education institutions. The identified barriers are complex and should be treated holistically due to the many interconnections between the identified barriers. Progress in implementation is hampered by the presence of reciprocal causal chains that aggravate this situation. Further research could investigate the perceptual differences between health professions regarding the barriers to clinical reasoning. The collected insights on the complexity and diversity of these barriers will help when rolling out a long-term agenda for overcoming the factors that inhibit the implementation of clinical reasoning curricula.
... Consequently, to ensure the safety and quality of nursing education, there are appeals for educational reform in nursing schools. Students should have well-equipped laboratory sites where they can practice patient care before interacting with actual patients, using innovative and interactive learning strategies that strengthen their skills so that knowledge is transferred to practice in clinical application, including simulation experiences for students and game-based virtual reality [6,7]. ...
Article
Purpose: This review aimed to evaluate the effectiveness of virtual reality simulation as a teaching/learning strategy for the acquisition of clinical skills and performance, self-confidence, satisfaction, and anxiety level in nursing education. Methodology: The review followed the Preferred Reporting Items for Systematic Reviews guidelines, using the PICO model, which is based on an evidence-based practice process. A total of twenty-three studies were included, covering six themes: performance skills (n = 13), self-confidence (n = 8), satisfaction (n = 10), anxiety level (n = 3), self-efficacy (n = 4), and knowledge (n = 15). Experimental randomised controlled trials and quasi-experimental studies from 2009 to 2019, conducted in English, were included. Nursing students (n = 1797; BSN, ADN, MSc, LPN) participated. Results and conclusion: This review indicated that virtual reality simulation provides a learning strategy to acquire clinical skills, improves knowledge acquisition, increases self-confidence, self-efficacy, and satisfaction, and decreases anxiety levels among nursing students.
Article
Full-text available
Introduction: Emergencies and disasters occur in any society, and it is hospitals and their emergency department staff who must be prepared in such cases. One effective method of training medical care staff is therefore the use of simulators. However, when introducing new simulation approaches, we face many challenges. The aim of this study was to identify challenges of simulating the hospital emergency department during disasters and to provide effective solutions. Methods: This conventional (thematic) content analysis study was conducted in 2021. Participants were selected from Iranian experts using purposeful and snowball sampling methods. Data were collected using semi-structured interviews and analyzed by content analysis. Results: After analyzing the data, the challenges of simulating the hospital emergency department during disasters were identified in 2 main components and 6 perspectives: organizational components (inappropriate and aimless training methods, lack of interaction and cooperation, lack of funding, and inadequate physical space) and technological components (weak information management and lack of interdisciplinary cooperation). Solutions included management (resource support) and data sharing and exchange (infrastructure, cooperation, and coordination). Conclusion: Simulation technology can be used as a method for training and for improving the quality of health care services in emergencies. Considering that most of these challenges can be solved and require the full support of managers and policy makers, by examining these issues the supporting staff of health care centers can make a significant contribution to the advancement of education and the reduction of problems in the event of disasters.
Article
Background: Since the beginning of the COVID-19 pandemic, people have been exposed to misinformation, leading to many myths about the virus and the vaccinations against it. As this situation does not seem likely to end soon, many authorities and health organizations, including the World Health Organization (WHO), are utilizing conversational agents in their fight against it. Although the impact and usage of these novel digital strategies are noticeable, the design of the conversational agents (CA) remains key to their success. Objective: This study describes the use of design-based research for contextual conversational agent design to address vaccine hesitancy. In addition, this protocol will examine the impact of Design-Based Research (DBR) on conversational agent design to understand how this iterative process can enhance accuracy and performance. Methods: A DBR methodology will be used for this study. Each phase of analysis, design and evaluation in each design cycle informs the next one via its outcomes. An anticipated generic strategy will be formed after completing the first iteration. Across multiple research studies, frameworks and theoretical approaches are tested and evaluated through the different design cycles. User perception of the conversational agent will be collected and analysed by implementing a usability assessment during every evaluation phase using the System Usability Scale. The PARADISE (PARAdigm for Dialogue System Evaluation) method will be adopted to calculate the performance of this text-based CA. Results: Two phases of the first design cycle (design and evaluation) were completed at the time of authoring this paper (April 2022). The research team is currently reviewing the NLU (natural-language understanding) model as part of the conversation-driven development (CDD) process in preparation for the first pilot intervention, which will conclude the CA's first design cycle. In addition, conversational data will be analysed quantitatively and qualitatively as part of the reflection and revision process to inform the subsequent design cycles. This project plans for three rounds of design cycles, resulting in several studies that disseminate outcomes and conclusions. The first study, describing the entire first design cycle, is expected to be submitted for publication before the end of 2022. Conclusions: CAs constitute an innovative way of delivering health communication information. However, they are primarily used to contribute to behavioural change or to educate about health issues. Therefore, health chatbots should be carefully designed to achieve their intended outcomes. DBR can help shape a holistic understanding of the process of conversational agent conception. This protocol describes the design of VWise, a contextual conversational agent that aims to address vaccine hesitancy, using the DBR methodology. The results of this study will help identify the strengths and flaws of DBR's application to such innovative projects.
Article
Background: Learning with virtual patients is highly popular for fostering clinical reasoning in medical education. However, little learning with virtual patients is done collaboratively, despite the potential learning benefits of collaborative versus individual learning. Objective: This paper describes the implementation of student collaboration in a virtual patient platform. Our aim was to allow pairs of students to communicate remotely with each other during virtual patient learning sessions. We hypothesized that we could provide a collaborative tool that did not impair the usability of the system compared to individual learning and that this would lead to better diagnostic accuracy for the pairs of students. Methods: Implementing the collaboration tool had five steps: (1) searching for a suitable software library, (2) implementing the application programming interface, (3) performing technical adaptations to ensure high-quality connections for the users, (4) designing and developing the user interface, and (5) testing the usability of the tool in 270 virtual patient sessions. We compared dyad to individual diagnostic accuracy and usability with the 10-item System Usability Scale. Results: We recruited 137 students who worked on 6 virtual patients. Out of 270 virtual patient sessions per group (45 dyads times 6 virtual patients, and 47 students working individually times 6 virtual patients minus 2 randomly selected deleted sessions) the students made successful diagnoses in 143/270 sessions (53%, SD 26%) when working alone and 192/270 sessions (71%, SD 20%) when collaborating (P=.04, η2=0.12). A usability questionnaire given to the students who used the collaboration tool showed a usability score of 82.16 (SD 1.31), representing a B+ grade. Conclusions: The collaboration tool provides a generic approach for collaboration that can be used with most virtual patient systems. The collaboration tool helped students diagnose virtual patients and had good overall usability. More broadly, the collaboration tool will provide an array of new possibilities for researchers and medical educators alike to design courses for collaborative learning with virtual patients.
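The usability score reported above (82.16) comes from the 10-item System Usability Scale, which has a standard scoring rule: odd items contribute (response - 1), even items contribute (5 - response), and the sum is multiplied by 2.5. The sketch below applies that rule to an invented set of responses.

```python
# Standard SUS scoring: odd items contribute (response - 1), even items
# contribute (5 - response); the sum is scaled by 2.5 to a 0-100 range.

def sus_score(responses):
    """responses: list of ten Likert ratings (1-5), items 1..10 in order."""
    if len(responses) != 10:
        raise ValueError("SUS needs exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Invented example: a fairly positive questionnaire.
print(sus_score([5, 2, 4, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```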
Article
Full-text available
Background: Clinical reasoning is a complex skill students have to acquire during their education. For educators it is difficult to explain their reasoning to students, because it is partly an automatic and unconscious process. Virtual Patients (VPs) are used to support the acquisition of clinical reasoning skills in healthcare education. However, until now it remains unclear which features or settings of VPs optimally foster clinical reasoning. Therefore, our aims were to identify key concepts of the clinical reasoning process in a qualitative approach and draw conclusions on how each concept can be enhanced to advance the learning of clinical reasoning with virtual patients. Methods: We chose a grounded theory approach to identify key categories and concepts of learning clinical reasoning and develop a framework. Throughout this process, the emerging codes were discussed with a panel of interdisciplinary experts. In a second step we applied the framework to virtual patients. Results: Based on the data we identified the core category as the "multifactorial nature of learning clinical reasoning". This category is reflected in the following five main categories: Psychological Theories, Patient-centeredness, Context, Learner-centeredness, and Teaching/Assessment. Each category encompasses between four and six related concepts. Conclusions: With our approach we were able to elaborate how key categories and concepts of clinical reasoning can be applied to virtual patients. This includes aspects such as allowing learners to access a large number of VPs with adaptable levels of complexity and feedback or emphasizing dual processing, errors, and uncertainty.
Article
Full-text available
Background Script theory proposes an explanation for how information is stored in and retrieved from the human mind to influence individuals’ interpretation of events in the world. Applied to medicine, script theory focuses on knowledge organization as the foundation of clinical reasoning during patient encounters. According to script theory, medical knowledge is bundled into networks called ‘illness scripts’ that allow physicians to integrate new incoming information with existing knowledge, recognize patterns and irregularities in symptom complexes, identify similarities and differences between disease states, and make predictions about how diseases are likely to unfold. These knowledge networks become updated and refined through experience and learning. The implications of script theory on medical education are profound. Since clinician-teachers cannot simply transfer their customized collections of illness scripts into the minds of learners, they must create opportunities to help learners develop and fine-tune their own sets of scripts. In this essay, we provide a basic sketch of script theory, outline the role that illness scripts play in guiding reasoning during clinical encounters, and propose strategies for aligning teaching practices in the classroom and the clinical setting with the basic principles of script theory.
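Illness scripts are commonly described as carrying a small set of standard slots, such as enabling conditions, the fault (the underlying pathophysiological process), and its consequences. The sketch below shows one hypothetical way such a script could be represented in software; the slot names and example content are illustrative assumptions, not a structure prescribed by the article above.

```python
# Hypothetical representation of an illness script with commonly cited slots:
# enabling conditions, fault (pathophysiology), and consequences.
from dataclasses import dataclass, field

@dataclass
class IllnessScript:
    disease: str
    enabling_conditions: list[str] = field(default_factory=list)  # risk factors, context
    fault: str = ""                                               # underlying process
    consequences: list[str] = field(default_factory=list)         # signs and symptoms

script = IllnessScript(
    disease="Community-acquired pneumonia",
    enabling_conditions=["older age", "smoking"],
    fault="alveolar infection and inflammation",
    consequences=["fever", "productive cough", "crackles on auscultation"],
)
print(script.disease, script.consequences)
```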
Article
Full-text available
Background: Helping trainees develop appropriate clinical reasoning abilities is a challenging goal in an environment where clinical situations are marked by high levels of complexity and unpredictability. The benefit of simulation-based education for assessing clinical reasoning skills has rarely been reported. More specifically, it is unclear if clinical reasoning is better acquired if the instructor's input occurs entirely after the scenario or is integrated during it. Based on educational principles of the dual-process theory of clinical reasoning, a new simulation approach called simulation with iterative discussions (SID) is introduced. The instructor interrupts the flow of the scenario at three key moments of the reasoning process (data gathering, integration, and confirmation). After each stop, the scenario is continued where it was interrupted. Finally, a brief general debriefing ends the session. The System-1 process of clinical reasoning is assessed by verbalization during management of the case, and System-2 during the iterative discussions, without providing feedback. Objective: The aim of this study is to evaluate the effectiveness of Simulation with Iterative Discussions versus the classical approach to simulation in developing the reasoning skills of General Pediatrics and Neonatal-Perinatal Medicine residents. Methods: This will be a prospective, exploratory, randomized study conducted at Sainte-Justine hospital in Montreal, Qc, between January and March 2016. All post-graduate year (PGY) 1 to 6 residents will be invited to complete one SID or one classical simulation session: a 30-minute, audio-video-recorded, complex high-fidelity simulation covering a similar neonatology topic. Pre- and post-simulation questionnaires will be completed and a semistructured interview will be conducted after each simulation. Data analyses will use SPSS and NVivo software. Results: This study is in its preliminary stages and the results are expected to be made available by April 2016. Conclusions: This will be the first study to explore a new simulation approach designed to enhance clinical reasoning. By assessing reasoning processes more closely throughout a simulation session, we believe that Simulation with Iterative Discussions will be an interesting and more effective approach for students. The findings of the study will benefit medical educators, education programs, and medical students.
Article
Full-text available
Purpose: The ability to create a concise summary statement can be assessed as a marker for clinical reasoning. The authors describe the development and preliminary validation of a rubric to assess such summary statements. Method: Between November 2011 and June 2014, four researchers independently coded 50 summary statements randomly selected from a large database of medical students' summary statements in virtual patient cases to each create an assessment rubric. Through an iterative process, they created a consensus assessment rubric and applied it to 60 additional summary statements. Cronbach alpha calculations determined the internal consistency of the rubric components, intraclass correlation coefficient (ICC) calculations determined the interrater agreement, and Spearman rank-order correlations determined the correlations between rubric components. Researchers' comments describing their individual rating approaches were analyzed using content analysis. Results: The final rubric included five components: factual accuracy, appropriate narrowing of the differential diagnosis, transformation of information, use of semantic qualifiers, and a global rating. Internal consistency was acceptable (Cronbach alpha 0.771). Interrater reliability for the entire rubric was acceptable (ICC 0.891; 95% confidence interval 0.859-0.917). Spearman calculations revealed a range of correlations across cases. Content analysis of the researchers' comments indicated differences in their application of the assessment rubric. Conclusions: This rubric has potential as a tool for feedback and assessment. Opportunities for future study include establishing interrater reliability with other raters and on different cases, designing training for raters to use the tool, and assessing how feedback using this rubric affects students' clinical reasoning skills.
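The internal consistency reported above (Cronbach alpha 0.771) follows the standard formula alpha = k/(k-1) x (1 - sum of item variances / variance of the total score). A brief sketch with invented rubric scores illustrates the calculation.

```python
# Cronbach's alpha for a statements-by-components score matrix; the ratings
# below are invented for illustration only.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = rated statements, columns = rubric components."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # variance of each component
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

ratings = [[3, 2, 3, 2, 3],
           [4, 4, 3, 4, 4],
           [2, 1, 2, 2, 2],
           [5, 4, 4, 5, 5]]
print(round(cronbach_alpha(ratings), 3))
```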
Article
Full-text available
The explication of relations between clinical and basic sciences can help vertical integration in medical curricula. Concept mapping might be a useful technique for this explication. Little is known about teachers' ability regarding the articulation of integration. We therefore examined which factors affect the learning of groups of clinicians and basic scientists at different expertise levels who learn to articulate the integration of clinical and basic sciences in concept maps. After a pilot for fine-tuning group size and instructions, seven groups of expert clinicians and basic scientists and seven groups of residents with a similar disciplinary composition constructed concept maps about a clinical problem that fit their specializations. Draft and final concept maps were compared on elaborateness and articulated integration by means of t-tests. Participants completed a questionnaire on motivation and their evaluation of the instructions. ANOVAs were run to compare experts' and residents' views. Data from video tapes and notes were qualitatively analyzed. Finally, the three data sources were interpreted in coherence using Pearson's correlations and qualitative interpretation. Residents outperformed experts in learning to articulate integration, as the comparison of the draft and final versions showed. Experts were more motivated and positive about the concept mapping procedure and instructions, but this did not correlate with the extent of integration found in the concept maps. The groups differed as to communication: residents interacted from the start (asking each other for clarification), whereas experts overall only started interacting when they had to make joint decisions. Our results suggest that articulation of integration can be learned, but this learning is not related to participants' motivation or their views on the instructions. Decision making and interaction, however, do relate to the articulation of integration, and this suggests that teacher learning programs for designing integrated educational programmes should incorporate co-construction tasks. Expertise level turned out to be decisive for the level of articulation of integration, the ability to improve the articulated integration, and the cooperation pattern.
Article
Full-text available
Background: The term "virtual patients" (VPs) has been used for many years in academic publications, but its meaning varies, leading to confusion. Our aim was to investigate and categorize the use of the term "virtual patient" and then classify its use in healthcare education. Methods: A literature review was conducted to determine all articles using the term "virtual patient" in the title or abstract. These articles were categorized into: Education, Clinical Procedures, Clinical Research and E-Health. All educational articles were further classified based on a framework published by Talbot et al., which was further developed using a deductive content analysis approach. Results: 536 articles published between 1991 and December 2013 were included in the study. Of these, 330 were categorized as educational. Classifying these showed that 37% of the articles used VPs in the form of Interactive Patient Scenarios. VPs in the form of High Fidelity Software Simulations (19%) and Virtual Standardized Patients (16%) were also frequent. Less frequent were other forms, such as VP Games. Analyzing the literature across time shows an overall trend towards the use of Interactive Patient Scenarios as the predominant form of VPs in healthcare education. Conclusions: The main form of educational VPs in the literature is the Interactive Patient Scenario, despite rapid technical advances that would support more complex applications. The adapted classification provides a valuable model for VP developers and researchers in healthcare education to more clearly communicate the type of VP they are addressing, avoiding misunderstandings.
Article
Since Dr. Howard Barrows (1964) introduced the human standardized patient in 1963, there have been attempts to game a computer-based simulacrum of a patient encounter; the first being a heart attack simulation using the online PLATO system (Bitzer, 1966). With the now ubiquitous use of computers in medicine, interest and effort have been expended in the area of Virtual Patients (VPs). One problem in trying to understand VPs is that there are several quite distinct educational approaches that are all called a 'virtual patient.' This article is not a general review of virtual patients as current reviews of excellent quality exist (Poulton & Balasubramaniam, 2011; Cook & Triola, 2009). Also, research that demonstrates the efficacy of virtual patients is ample (Triola, et al., 2006). This article assesses the different kinds of things the authors call "virtual patients", which are often mutually exclusive approaches, then analyzes their interaction structure or 'game-play', and considers the best use scenarios for that design strategy. This article also explores dialogue-based conversational agents as virtual patients and the technology approaches to creating them. Finally, the authors offer a theoretical approach that synthesizes several educational approaches over the course of a medical encounter and recommend the optimal technology for the type of encounter desired.
Article
The purpose of this article is to propose new approaches to assessment that are grounded in educational theory and the concept of “nonlinearity.” The new approaches take into account related phenomena such as “uncertainty,” “ambiguity,” and “chaos.” To illustrate these approaches, we will use the example of assessment of clinical reasoning, although the principles we outline may apply equally well to assessment of other constructs in medical education. Theoretical perspectives include a discussion of script theory, assimilation theory, self-regulated learning theory, and situated cognition. Assessment examples to include script concordance testing, concept maps, self-regulated learning microanalytic technique, and work-based assessment, which parallel the above-stated theories, respectively, are also highlighted. We conclude with some practical suggestions for approaching nonlinearity.
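Script concordance testing, listed above as one assessment approach, is typically scored with the aggregate method: an examinee's answer to an item earns credit proportional to the number of panel members who chose that answer, divided by the count of the modal panel answer. The sketch below illustrates this rule; the panel data and scale are invented for illustration.

```python
# Aggregate scoring sketch for one script concordance test item: credit equals
# the number of panelists choosing the examinee's answer divided by the count
# of the most popular panel answer. Panel data below are invented.
from collections import Counter

def sct_item_score(examinee_answer, panel_answers):
    counts = Counter(panel_answers)      # e.g. answers on a -2..+2 Likert scale
    modal = max(counts.values())
    return counts.get(examinee_answer, 0) / modal

panel = [+1, +1, +1, 0, +2, +1, 0, -1, +1, 0]   # 10 panelists
print(sct_item_score(+1, panel))   # 5/5 = 1.0 (modal answer)
print(sct_item_score(0, panel))    # 3/5 = 0.6
print(sct_item_score(-2, panel))   # 0.0 (no panelist chose it)
```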