Applying a Web and Simulation-Based System for
Adaptive Competence Assessment of Spinal Anaesthesia
Cord Hockemeyer1, Alexander Nussbaumer1, Erik Lövquist2, Annette Aboulafia2,
Dorothy Breen3, George Shorten3, and Dietrich Albert1
1 Department of Psychology, University of Graz, Austria
{cord.hockemeyer,alexander.nussbaumer,dietrich.albert}
@uni-graz.at
2Interaction Design Centre, University of Limerick, Ireland
{erik.lovquist,annette.aboulafia}@ul.ie
3Health Services Executive South, Cork University Hospital, Ireland
dorothybreen@financialcontroller.ie
george.shorten@hse.ie
Abstract. The authors present an approach for implementing a system for the
assessment of medical competences using a haptic simulation device. Based on
Competence-based Knowledge Space Theory (CbKST), information on the
learners’ competences is gathered from different sources (test questions, data
from the simulator, and supervising experts’ assessments).
The envisaged architecture consists of three core modules: an LMS (Moodle) containing the user model and content objects and realising the interface between system and user, a simulator interface as a separate service connecting the LMS to the (external) simulator system, and a CbKST service offering the assessment logic and visualisations of the assessment result for learner and teacher.
Keywords: Adaptive Competence Assessment, Medical Training, Haptics,
Simulation, Moodle, Competence-based Knowledge Space Theory, Spinal
Anaesthesia.
1 Introduction
Medical training has recently been undergoing major changes worldwide. Reasons for this include new employment legislation as well as changing patient expectations, increasing awareness of quality assurance needs, and liability jurisdiction. As a result, it is becoming less acceptable for doctors in training to practise their procedural skills on patients. One solution to this is the application of advanced computer
technology (e.g., Virtual Reality or haptic devices) for simulating medical procedures
during the early practical training.
This paper describes the MedCAP [http://www.medcap.eu/] approach to integrating a haptic device and a psychological model of knowledge and competences with a state-of-the-art
learning management system. In doing so, we focus on the technical realisation of previously published conceptual ideas [1, 2].
After briefly introducing spinal anaesthesia and Competence-based Knowledge Space Theory (CbKST) in the remainder of this introduction, we give some general information on the web- and simulation-based system for spinal anaesthesia in Section 2. In Section 3, we describe the adaptation of CbKST's competence assessment procedure to the medical domain, and in Section 4 the architecture of the resulting system.
1.1 Procedure of Spinal Anaesthesia
Spinal anaesthesia involves injecting a small amount of local anaesthetic through a
needle inserted between the spinal vertebrae, below the end of the spinal cord into the
surrounding spinal fluid (Figure 1). The local anaesthetic quickly blocks the patient’s
sensations below the point of injection, providing excellent operating conditions for
surgery on the lower part of the abdomen and legs.
Fig. 1. Demonstration of a spinal anaesthetic injection
Almost always, the procedural skill of spinal anaesthesia is learned by watching a
more experienced practitioner and subsequently performing the procedure on a patient
under close supervision. Clearly, this process has disadvantages. The patient can be put at risk by having a potentially hazardous procedure performed by a trainee, and the training opportunities are limited by the type and number of patients who are suitable for spinal anaesthesia during the period of training.
1.2 Competence-based Knowledge Space Theory
Doignon and Falmagne [3, 4] developed the theory of knowledge spaces originally as
a behaviouristic approach to adaptive assessment of knowledge. The core idea is to
describe a domain of knowledge by a set of test items, and to structure this set of test
items by prerequisite relationships. They identify a learner’s knowledge state as the
subset of test items this learner is able to solve. The set of possible knowledge states –
the knowledge space – is strongly constrained by the prerequisite relationships. The
knowledge space also delineates reasonable learning paths, i.e. ways of sequencing the items to be learned such that the learner always possesses all prerequisites for the current item.
The prerequisite relationships can be visualised as a Hasse diagram. Figure 2 (left
side) shows a hypothetical Hasse diagram for a set of five items (a to e) where, e.g., a
is a prerequisite of b, and b and c are both prerequisites of e. As a consequence, a also
is a prerequisite of e. The right side of Fig. 2 shows the corresponding knowledge
space (again as a Hasse diagram) and one possible learning path through this space.
Albert and his research groups have focused on investigating the underlying
cognitive structures (see, e.g., [5]) resulting in the development of Competence-based
Knowledge Space Theory (CbKST) [6]. The basic idea here is to assume the
existence of unobservable competences underlying the observed problem solving
behaviour. These competences are again structured by prerequisite relationships.
[Fig. 2 shows the items a–e (left) and the knowledge states {a}, {c}, {a,b}, {a,c}, {a,b,c}, {a,b,d}, {a,b,c,d}, {a,b,c,e}, and {a,b,c,d,e} (right).]
Fig. 2. Hasse diagrams of a hypothetical prerequisite relation and the corresponding knowledge
space. The dashed arrows denote one possible learning path within that space.
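For illustration only (this is not code from the MedCAP system), the following Python sketch enumerates the knowledge space induced by such a prerequisite assignment. The relation is read off Fig. 2; the prerequisite of d, which is not stated in the text, is assumed here to be b.

```python
from itertools import combinations

# Items and prerequisites read off Fig. 2; the prerequisite of d is an assumption.
ITEMS = ["a", "b", "c", "d", "e"]
PREREQUISITES = {"a": set(), "b": {"a"}, "c": set(), "d": {"b"}, "e": {"b", "c"}}


def is_closed(state, prerequisites):
    """A knowledge state must contain every (direct) prerequisite of its items."""
    return all(prerequisites[item] <= state for item in state)


def knowledge_space(items, prerequisites):
    """All subsets of the item set that are closed under the prerequisites."""
    return [frozenset(subset)
            for size in range(len(items) + 1)
            for subset in combinations(items, size)
            if is_closed(set(subset), prerequisites)]


if __name__ == "__main__":
    for state in sorted(knowledge_space(ITEMS, PREREQUISITES),
                        key=lambda s: (len(s), sorted(s))):
        print("{" + ", ".join(sorted(state)) + "}")
```

Running this sketch prints the states of Fig. 2 (together with the empty state), ordered by size.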
The adaptive assessment of knowledge is a core element of knowledge space
theory [8, 9]. Ideally, the assessment procedure should start with a test item of
medium difficulty and then, depending on the learner's answer, continue with either
easier or more challenging test items. Knowledge space theory allows us to ground
this not only on some rather abstract measure of difficulty but on the concrete
prerequisites between the individual test items (or competences, respectively).
Looking at the hypothetical knowledge space in Fig. 2, a teacher might start an
assessment by asking item b. If the learner gives a correct solution, there is no need to
test item a. In case of a wrong answer, however, there would be no need to test items
d and e. On average, the teacher would have to ask 3.3 items in order to know for all
five items whether the learner can solve them or not. For larger item sets, the savings
are usually larger.
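The following sketch illustrates this deterministic questioning strategy over an explicit knowledge space; it assumes error-free answers, and `can_solve` is a hypothetical stand-in for actually querying the learner.

```python
def deterministic_assessment(space, items, can_solve):
    """Error-free adaptive questioning: repeatedly ask the item that splits the
    remaining candidate states most evenly and keep only the states that are
    consistent with the answer."""
    candidates = list(space)
    asked = 0
    while len(candidates) > 1:
        # Items that are neither contained in nor missing from *all* remaining states
        informative = [i for i in items
                       if 0 < sum(i in s for s in candidates) < len(candidates)]
        item = min(informative,
                   key=lambda i: abs(sum(i in s for s in candidates)
                                     - len(candidates) / 2))
        asked += 1
        if can_solve(item):  # hypothetical oracle standing in for the learner
            candidates = [s for s in candidates if item in s]
        else:
            candidates = [s for s in candidates if item not in s]
    return candidates[0], asked
```

For the space of Fig. 2 and, say, `can_solve = lambda i: i in {"a", "b", "c"}`, this narrows the candidate set down to the state {a, b, c} after a few questions.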
In practice, of course, the assessment procedure is non-deterministic. It maintains a likelihood distribution over the knowledge (or competence) space. After each piece of evidence, the likelihood distribution is updated using a generalised form of Bayes' theorem, i.e. the likelihood of states fitting the last piece of evidence is increased and the likelihood of states not fitting it is decreased. The
deterministic variant described in the previous paragraph is effectively a special case
of the likelihood update where the probabilities for careless errors and lucky guesses
are 0 [8]. Simulation studies have shown that the loss of efficiency (in the sense of the
number of items to be asked) by moving from the deterministic to the probabilistic
procedure is very small [9].
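The following sketch shows one possible form of such a likelihood update together with the item selection described in Section 3.1; the error and guessing rates are illustrative defaults, not values from the actual system.

```python
def mastery_probability(item, likelihood):
    """Current probability that the learner's knowledge state contains the item."""
    return sum(p for state, p in likelihood.items() if item in state)


def select_item(items, likelihood):
    """Pick the item whose estimated mastery probability is closest to 0.5,
    i.e. the item about which the system still knows least."""
    return min(items, key=lambda q: abs(mastery_probability(q, likelihood) - 0.5))


def update(likelihood, item, correct, careless=0.05, lucky=0.05):
    """Generalised Bayesian update of the distribution over knowledge states;
    `careless` and `lucky` are illustrative error and guessing rates."""
    new = {}
    for state, p in likelihood.items():
        if item in state:
            p_response = (1 - careless) if correct else careless
        else:
            p_response = lucky if correct else (1 - lucky)
        new[state] = p * p_response
    total = sum(new.values())
    return {state: p / total for state, p in new.items()}
```

Starting from, e.g., a uniform distribution over the space, `select_item` and `update` are alternated until the distribution is concentrated on (nearly) a single state.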
2 A Web and Simulation-based System for Spinal Anaesthesia
The assessment procedure consists of two separate systems for gathering information
from the assessed anaesthetist. These two systems are interlinked to create a natural
flow of the assessment procedure.
2.1 The Web-Based System
The assessment procedure utilises a problem-based learning approach, where
scenarios of patients are presented in an electronic format in the open-source learning
management system Moodle (moodle.org/). The patient scenarios are written by experienced anaesthetists. Each scenario consists of information and questions, which the assessed anaesthetist has to go through as part of the assessment procedure. The scenarios contain an extensive number of film clips and pictures, enhancing the information given in each case. All of the media were recorded in the clinical environment by participating anaesthetists.
2.2 The Haptic Simulator
The simulator uses haptic technology, which enables the user to interact with and feel
objects in virtual environments [10]. The spinal haptic device simulates the needle
insertion aspects of performing spinal anaesthesia and generates realistic sensations
and visual representations of actual patients [11]. It utilises a Phantom Desktop
[www.sensable.com] and is implemented in the haptic environments H3D API
[www.h3d.org] and Volume Haptics ToolKit (VHTK) [12].
The haptic device allows the user's movements to be tracked in real time, making it possible to incorporate metrics for the automatic assessment of performance on the virtual patient. Textures and 3D models representative of the patients in the cases are incorporated in the simulator (see Fig. 3).
The simulator assessment is directly linked to the questioning, i.e. at a
certain stage of a scenario the anaesthetist has to perform the procedure on anatomy
corresponding to the patient of that specific case.
Fig. 3. Left: the haptic device in use with the virtual environment. Right: a screen capture of a patient's back.
3 Medical Competence Assessment
3.1 Adaptive Competence Assessment
Assessment procedures on “classical” knowledge spaces (i.e. in the original
behaviouristic approach by Doignon and Falmagne) have been well investigated (see,
e.g., [7, 8]). The most common approach models a likelihood distribution over the
complete knowledge space. Test problems are selected for which the likelihood
estimate of being mastered is close to 0.5, i.e. for which the system still has little knowledge about their mastery. Depending on the learner's answer, the likelihood estimate is updated by applying a generalised version of Bayes' theorem. Simulations have shown that this procedure comes very close to obtaining a complete assessment result with minimal effort, i.e. with a minimal number of test problems posed to the learner [9].
Compared to assessment in classical knowledge space theory, the assessment of
competences in CbKST is a rather new area of research and development. A first,
straightforward approach was taken by Heller et al. [6], who suggested performing a classical assessment on the level of test items and, afterwards, mapping the resulting knowledge state to its corresponding competence state. The more recent approach of microadaptivity [13] reverses the order of these steps, i.e. it interprets the observed responses to the individual test items with respect to the underlying competences and performs the assessment procedure on the competence space. More concretely, the part of microadaptivity used here is an assignment of tested competences to each test item. Solving or failing such an item is then interpreted as positive or negative evidence, respectively, of mastering the assigned competences.
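As a rough sketch of this interpretation (our reading, with illustrative error parameters), the update below runs on the competence space and assumes that a state predicts a correct answer exactly when it contains all skills assigned to the item.

```python
def update_competence_state(likelihood, tested_skills, solved,
                            careless=0.1, lucky=0.1):
    """Interpret the outcome of one test item as evidence on the competences
    assigned to it and update the distribution over competence states."""
    new = {}
    for state, p in likelihood.items():
        predicts_success = tested_skills <= state
        if predicts_success:
            p_outcome = (1 - careless) if solved else careless
        else:
            p_outcome = lucky if solved else (1 - lucky)
        new[state] = p * p_outcome
    total = sum(new.values())
    return {state: p / total for state, p in new.items()}
```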
3.2 Different Modalities of Information Gathering
Medical assessment is performed as a mixture of examination questions, simulator work, and supervised work with patients, and this combination of different modalities provides a special challenge. A computer-based system for medical competence assessment has to be able to deal with all of the following sources of information on the candidate's competences; the sketch after this list shows how they can be funnelled into a single update.
1. The classical source is test items to be posed to the candidate. As described in
Section 3.1 above, the candidate's answers are interpreted as evidence of having or not having the competences assigned to the respective test item.
2. A first new source refers to classical ways of teaching by supervised practising.
Instead of posing questions to the candidates, an expert supervises their work on
the simulator or on a patient. Afterwards, the expert answers a questionnaire with
respect to the candidate’s competences.
3. A second new source is given by the simulator itself. In a first step, certain metrics
will be transferred from the simulator to the learning management system after
finishing the simulation. These metrics will then be interpreted with respect to the
learner’s competences. A more advanced usage of the simulator will be the aim of
future research projects.
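The sketch below (hypothetical names and signatures) indicates how all three sources can be reduced to the same kind of evidence, namely a set of skills together with a positive or negative outcome, which can then be fed into a competence-space update such as the `update_competence_state` sketch in Section 3.1.

```python
def evidence_from_test_item(item, correct, skill_assignment):
    """Source 1: a test question answered by the candidate."""
    return skill_assignment[item], correct


def evidence_from_expert_rating(skill, observed):
    """Source 2: one entry of the questionnaire filled in by the supervising expert."""
    return {skill}, observed


def evidence_from_metric(metric, value, skill_assignment, criterion):
    """Source 3: a metric transferred from the simulator after the session;
    `criterion` maps the raw value to a pass/fail judgement."""
    return skill_assignment[metric], criterion(value)
```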
This leads, of course, to changes in the assessment procedure. In particular, the selection of test problems has to be changed. Selecting the single test problem which promises to uncover maximal information on the candidate's competences is replaced by a whole set of quasi-problems given at once. While such a block of information may contain some redundancy, it will nevertheless be used completely.
Furthermore, since testing in medicine frequently follows the storyline of case
scenarios, even in the case of test items to be posed to the candidate, there can be no
arbitrary choice. Many of these test items contain information about the case that is
needed in the later course of the scenario. Therefore, instead of selecting the test item that maximises the gain of assessment information, the system can only decide whether the next item is likely to give new assessment information or whether it could be replaced by a message simply presenting the medical data contained in the item.
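One simple way to realise this decision is sketched below (our reading, with an illustrative threshold): an item is only worth posing if the system is still uncertain about the skills assigned to it; otherwise its medical content can be shown as a plain information message.

```python
def should_pose(item_skills, likelihood, threshold=0.9):
    """Return True if posing the item still promises new assessment information.
    `likelihood` is the current distribution over competence states; the
    threshold of 0.9 is illustrative only."""
    p_mastered = sum(p for state, p in likelihood.items() if item_skills <= state)
    # If mastery of the assigned skills is already (nearly) certain or (nearly)
    # excluded, the item can be replaced by an information message instead.
    return (1 - threshold) < p_mastered < threshold
```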
4 An Integrating System Architecture
The overall system consists of three main components: the Web application, the simulator (together with the simulator interface), and the CbKST service. Two groups of users (actors) work with the system, the supervisors/experts and the candidates. Furthermore, the domain model and user model needed for the competence assessment logic are stored and managed in the CbKST Web service. However, domain and user model contain only referential information; the actual problems (assessment items) and user information are stored in the Web application. This design of separating the Web application from the assessment logic follows the approach described in [14] and [15]. An illustration of this architecture is shown in Fig. 4.
The Web application guides the candidate through the scenarios and their respective problems, which are stored and managed by the Web application itself. For each problem, it asks the CbKST Web Service whether the problem should be posed or not, and it reports back the correctness of the user's answers (first assessment source). Moreover, it initiates the work on the simulator by contacting the simulator interface and by telling the user to switch to the simulator. The expert supervising the candidate also uses the Web application to fill out the questionnaire, which is likewise transmitted to the CbKST Web Service to be used in the assessment calculation (second assessment source). The result of the competence assessment procedure is presented to the candidate in a simple form as a list of available and non-available skills. For a more detailed graphical visualisation of the assessment result, the Web application provides a link which opens the respective applet, a visualisation component that is part of the CbKST Web Service.
The simulator allows the user’s actions to be constantly tracked when performing
the procedure on the virtual anatomy (third source of assessment information through
the simulator interface described below). The metrics serve as measurements of how the user is performing. Each metric can be seen as an assessment item which
tests a set of skills.
The simulator interface is the software component which connects the physical
simulator to the Web application and the CbKST service. It initiates and controls the
practice on the simulator and it reports the results of this practice to the CbKST
service.
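To give a feel for this component, the following Python sketch reports the metrics of one session. The actual interface speaks SOAP to the Axis2-based CbKST Web Service, so the endpoint, payload layout, and metric names shown here are purely hypothetical.

```python
import requests  # the real interface uses SOAP; HTTP/JSON is used here only for illustration

# Hypothetical endpoint; the actual operations of the CbKST Web Service are
# not specified in this paper.
CBKST_ENDPOINT = "https://example.org/cbkst/reportSimulatorMetrics"


def report_simulator_metrics(candidate_id, scenario_id, metrics):
    """Send the metrics recorded during one simulator session to the CbKST
    service, where each metric is treated as a quasi assessment item testing
    a set of skills."""
    payload = {
        "candidate": candidate_id,
        "scenario": scenario_id,
        "metrics": metrics,  # e.g. {"needle_depth_error_mm": 2.4, "attempts": 1}
    }
    response = requests.post(CBKST_ENDPOINT, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()
```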
The CbKST service is responsible for the logic of the competence assessment. It exposes an interface as a Web Service which can be contacted by the Web application and the simulator interface for two purposes: (i) to report the correctness of assessment items and the practice on the simulator, and (ii) to get the overall result of the assessment in terms of available skills. This service implements the algorithms for competence assessment described in Section 3, whereby the calculations are based on the CbKST assessment procedures described in Section 1.2. In contrast to the traditional algorithms, however, the optimal problem is not chosen in order to minimise the number of questions; instead, the sequence of problems is controlled by the Web application. Updating the probabilities of the possible knowledge states is nevertheless done in the traditional way. The competence state is derived by investigating the skills assigned to the questions and metrics which a learner could solve. A domain model is used which contains information about problems and metrics, skills, the assignment of skills to problems, and the prerequisites between skills. Furthermore, the CbKST service maintains a user model containing information on which problems a learner has already solved and which skills are available. Domain and user model information are stored in a database on the machine hosting the CbKST service.
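The following dataclasses sketch the information these models hold according to the description above; the field names and types are our illustration, not the service's actual database schema.

```python
from dataclasses import dataclass, field
from typing import Dict, Set


@dataclass
class DomainModel:
    """Referential information about the assessment domain."""
    skills: Set[str]
    prerequisites: Dict[str, Set[str]]      # direct prerequisites between skills
    skill_assignment: Dict[str, Set[str]]   # skills tested by each problem or metric


@dataclass
class UserModel:
    """Per-candidate information kept by the CbKST service."""
    candidate_id: str
    solved_problems: Set[str] = field(default_factory=set)
    available_skills: Set[str] = field(default_factory=set)
```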
In addition to the Web Service, the CbKST service also provides a visualisation component which gives the learner a graphical illustration of the assessment results. Following the approach described in [16], the learner is shown a skill map (the prerequisite relations between skills) in a visual form in which the result is depicted. In this way the learner can see his or her competence state in relation to the knowledge domain. This method is intended to stimulate reflection and to motivate the learner.
Fig. 4. The overall system architecture, showing the main components of the system and their interconnections as well as the users (actors) operating on the system.
The Web application is based on the popular Moodle learning management system (LMS), which is implemented in PHP and runs inside an Apache Web Server together with a MySQL database. The assessment procedure has been implemented as an extension of Moodle. The connection to the CbKST service is established using the NuSOAP PHP library (see http://sourceforge.net/projects/nusoap/). The simulator interface is implemented in Python and is invoked by the simulation to send the appropriate information to the CbKST service. The CbKST service is realised in an Apache Tomcat servlet container in order to provide the visualisation component and the Web service. The visualisation component is developed in Java and is made available as an applet. The Web service is exposed in an Apache Axis2 environment which is installed in the Apache Tomcat engine. The database for domain and user model data is located within the CbKST service. The Web application and the CbKST service are located on servers accessible over the Internet. The simulator requires a dedicated workstation connected to the Internet for accessing the Web application. However, if a haptic workstation is not available, a simplified version of the assessment procedure using only the Web application can be run on any machine with Internet access.
5 Conclusions
We have described a novel approach to competence assessment for medical trainees.
A multi-disciplinary co-operation of physicians, psychologists, and computer scientists has led to an assessment system which offers several important accomplishments. The inclusion of a haptic simulator device allows, on the one hand, the trainees' procedural and haptic abilities to be assessed without endangering patients' health. On the other hand, it also offers a solution to the restrictions imposed by new working time regulations as well as by the rare occurrence of special, complicated cases. Furthermore, the results of the research and development described herein can serve as a basis for an objective and standardised competence assessment for young doctors.
Still, much work remains to be done. One important issue is to go beyond pure testing and to extend the system so that it also supports teaching. A second issue is to extend the contents of the system so that they cover the whole field of spinal anaesthesia.
Besides that, there is also the issue of computational efficiency. The medical
domain of spinal anaesthesia seems to be less structured than other fields (e.g.,
mathematics or physics), resulting in high computational demands [17]. There are already theoretical developments on decreasing the computational demands during competence assessment [18]; however, some further research is needed in this area.
Acknowledgments
The work reported in this paper was financially supported by the European
Commission through grant no. LLP/LdV/TOI/2007/IRL-513 within the Lifelong
Learning Programme, Leonardo da Vinci sub-programme.
References
1. Albert, D., Hockemeyer, C., Kulcsar, Z., Shorten, G.: Competence Assessment for Spinal
Anaesthesia. In: Holzinger, A. (ed.) USAB 2007. LNCS vol. 4799, pp. 165-170. Springer,
Berlin (2007)
2. Zhang, D., Albert, D., Hockemeyer, C., Breen, D., Kulcsar, Z., Shorten, G., Aboulafia, A.,
Lövquist, E.: Developing competence assessment procedure for spinal anaesthesia. In:
Proceedings of the 21st IEEE International Symposium on Computer-Based Medical
Systems, pp. 397-402. IEEE Press, New York (2008).
3. Doignon, J.-P., Falmagne, J.-C.: Spaces for the assessment of knowledge. Int. J. Man-
Machine Studies 23, 175-196 (1985).
4. Doignon, J.-P., Falmagne, J.-C.: Knowledge Spaces. Springer-Verlag, Berlin (1999).
5. Albert, D., Lukas, J. (eds.): Knowledge Spaces: Theories, Empirical Research, Applications.
Lawrence Erlbaum Associates, Mahwah, NJ (1999).
6. Heller, J., Steiner, C., Hockemeyer, C., Albert, D.: Competence-based knowledge structures
for personalised learning. Int. J. E-Learning 5 (1), 75-88 (2006).
8. Doignon, J.-P.: Probabilistic assessment of knowledge. In: Albert, D. (ed.) Knowledge
Structures, pp. 1-56. Springer Verlag, New York (1994).
9. Hockemeyer, C.: A comparison of non-deterministic procedures for the adaptive assessment
of knowledge. Psych. Beitr. 44, 495-503 (2002).
10. Srinivasan, M., Basdogan, C.: Haptics in virtual environments: Taxonomy, research status,
and challenges, Computer Graphics 21(4), 393-404 (1997).
11. Lövquist, E., Kulcsár, Z., Fernström, M., Aboulafia, A., Shorten, G.: The Design of a Haptic
Simulator for Teaching and Assessing Spinal Anaesthesia. In: Proceedings of the 5th
Intuition Conference, 6-8 October, Turin, Italy (2008).
12. Lundin, K., Gudmundsson, B., Ynnerman, A.: General proxy-based haptics for volume
visualization. Proceedings of the World Haptics Conference 2005, Pisa, Italy, pp. 557–560
(2005).
13. Kickmeier-Rust, M., Albert, D., Hockemeyer, C., Augustin, T.: Not breaking the Narrative:
Individualized Competence Assessment in Educational Games. In: Remenyi, D. (ed.)
Proceedings of the European Conference on Games-based Learning (ECGBL), pp. 161-168.
Academic Conferences Limited, Reading, UK (2008).
14. Nussbaumer, A., Gütl, C., Hockemeyer, C.: A Generic Solution Approach for Integrating
Adaptivity into Web-based E-Learning Platforms. In: Proceedings of the International
Conference on Interactive Mobile and Computer Aided Learning (IMCL 2007), 18-20 April
Amman, Jordan (2007).
15. Nussbaumer, A., Gütl, C., Albert, D.: Towards a Web Service for Competence-based
Learning and Testing. In: Proceedings of the World Conference on Educational Multimedia,
Hypermedia & Telecommunications (ED-MEDIA 2007), 25-29 June, Vancouver, Canada
(2007).
16. Nussbaumer, A.: Supporting Self-Reflection through Presenting Visual Feedback of
Adaptive Assessment and Self-Evaluation Tools. In: Proceedings of the 11th International
Conference on Interactive Computer-aided Learning (ICL 2008), 24-26 September, Villach,
Austria (2008).
17. Lövquist, E., Aboulafia, A., Breen, D., Shorten, G., Zhang, D., Albert, D.: Designing a
Simulation-Supported Adaptive Assessment System for Spinal Anaesthesia. In: Proceedings
of the 11th IASTED International Conference Computers and Advanced Technology in
Education (CATE2008), pp. 316-321 (2008).
18. Augustin, T., Hockemeyer, C., Kickmeier-Rust, M., Albert, D.: Individualized Skill
Assessment in Educational Games: Basic Definitions and Mathematical Formalism.
Submitted for publication (2009).