Error mode prediction
ERIK HOLLNAGEL²*, MAGNHILD KAARSTAD² and HYUN-CHUL LEE³

²OECD Halden Reactor Project, PO Box 173, N-1751 Halden, Norway
³KAERI, HF Research Team, PO Box 105, Yusung, Taejon, Korea 305-606

*Author for correspondence. e-mail: erik.hollnagel@hrp.no
Keywords: Human erroneous actions; Performance prediction; Human Reliability
Analysis (HRA); Nuclear power plant; Man-machine interaction; Error modes.
The study of accidents ('human errors') has been dominated by efforts to develop
'error' taxonomies and 'error' models that enable the retrospective identification
of likely causes. In the field of Human Reliability Analysis (HRA) there is,
however, a significant practical need for methods that can predict the occurrence
of erroneous actions, both qualitatively and quantitatively. The present experiment
tested an approach for qualitative performance prediction based on the Cognitive
Reliability and Error Analysis Method (CREAM). Predictions of possible
erroneous actions were made for operators using different types of alarm systems.
The data were collected as part of a large-scale experiment using professional
nuclear power plant operators in a full scope simulator. The analysis showed that
the predictions were correct in more than 70% of the cases, and also that the
coverage of the predictions depended critically on the comprehensiveness of the
preceding task analysis.
1. Introduction
The study of human erroneous actions has traditionally followed two different lines
of approach. One has been concerned with the retrospective analysis of the likely
causes of erroneous actions, such as studies in 'error psychology'. This approach has
used a variety of models, stretching from classical human factors (Swain 1963,
Altman 1964) over information processing models (Norman 1981, Rouse and Rouse
1983, Reason 1990) to cognitive engineering (Woods et al. 1994) and socio-technical
models (Reason 1997). The other has been concerned with the qualitative and
quantitative prediction of possible erroneous actions, exemplified by the field of
human reliability analysis or HRA (Park 1987, Dougherty and Fragola 1988,
Kirwan 1994a). The retrospective approach has been dominated by an academic
point of view, and has hence emphasized theories, models, and experiments, while the
predictive approach has been of a more pragmatic nature, hence putting greater
emphasis on data and methods. For people faced with practical problems in system
design, operation, or maintenance, the greater need is for applicable methods rather
than elegant theories.
2. Human error analysis project (HEAP)
The OECD Halden Reactor Project has since 1994 been engaged in a long-term
effort to study human erroneous actions (HEAP). The purposes of this study are: (1)
to provide a better understanding and explicit modelling of how and why erroneous
actions occur, specifically when they involve cognitively challenging activities such as
diagnosis; and (2) to provide improved design guidance for the development of
human-machine systems that can avoid or compensate for erroneous actions. Four
initial pilot studies put the emphasis on methodological aspects, in particular the
development of methods to investigate cognitive aspects of operator performance
using real operators in difficult scenarios (Kaarstad et al. 1994, 1995, Kirwan 1994b).
(1) The objective of the first pilot study was to evaluate concurrent and
interrupted verbal protocol techniques with regard to their applicability to
identify operators' problem solving strategies, diagnostic strategy types, and
possible 'cognitive inefficiencies' while locating a fault. The main finding was
that the two methods provided different types of information about problem-
solving strategies and human erroneous actions, and the recommendation
was to use a combination of the two methods.
(2) The objective of the second pilot study was to test whether eye movement
tracking analysis was a feasible supplement to verbal protocols in studying
operators' cognition during fault-finding. The main finding was that the
analysis of eye movements helped to make the interpretation of verbal
protocols more robust and gave a better insight into the operators' problem-
solving behaviour. Wearing the eye-tracker equipment had no apparent effect
on the quality or quantity of verbal protocols.
(3) The third pilot study investigated the effects of scenario complexity on
operators' diagnostic behaviour (Follesø et al. 1995). Complexity was
assumed to be a multidimensional concept that was varied by manipulating
the number of underlying faults in three different scenarios. The main
finding was that the number of underlying faults did not by itself prove to be a
dominant complexity factor when the performance measures were the degree
of operator success in diagnosing the faults and the use of different diagnostic
strategies. This study used operators from two different operating environ-
ments, but the results showed no systematic variation with respect to either
subject pool or performance measures.
(4) The fourth pilot study looked at the quality of information provided by
different data sources. The findings were that different types of protocols
(concurrent, auto-confrontation, and expert) produced similar results for a
set of pre-defined target activities, although concurrent verbal protocols were
the richer source of data. Furthermore, it was found that concurrent verbal
protocols can effectively be used for teams as well as for single operators.
3. Performance prediction
The purpose of performance prediction is to describe how a scenario may possibly
develop, given the existing working conditions. In many cases the representation of a
scenario only provides the basic structure of the events but leaves out the detailed
conditions that may influence how an event develops. In order to make the
prediction, the scenario description must therefore be supplemented by information
about the conditions or factors that may influence the propagation of events. One of
these is the variability of human performance, which in itself depends on the general
performance conditions, including the previous developments (figure 1).

Figure 1. Basic dependencies in performance prediction.
The purpose of the prediction is to find out what may possibly happen under
given conditions, for example, if a component fails, if there is insufficient time to act,
or if a person misunderstands a procedure. In principle, the task is 'simply' to find a
path between antecedents (causes) and consequents (effects). Yet even when both the
scenario and the performance conditions have been described in sufficient detail, a
mechanical combination of taxonomic categories will soon generate so many
possibilities that the total becomes unmanageable. The focus can be improved only if
the context can be defined, because the context can be used to limit the number of
combinations that need to be investigated. Performance prediction must therefore
describe the likely context before it goes on to consider the actions that may occur.
3.1. Performance prediction in HRA
In first-generation HRA (Dougherty 1990), the accepted purpose is to calculate the
probability that a specific operator action will fail, known as a 'human error
probability'. For historical reasons, first-generation HRA adapted the event-tree
method used in technical reliability analysis. The development of operator models
was almost incidental to the HRA approach, and the models only contained enough
detail to satisfy the demands from PSA/HRA applications. Models associated with
first-generation HRA approaches were appropriate for performance prediction in
the narrow sense of supporting probability estimates for simple error modes, but
could not easily be applied to the broader type of prediction that is of interest here.
In information processing psychology, the emphasis is on the analysis of events
and the explanation of the psychological causes of erroneous actions (Norman
1988, Reason 1990). The specific paradigm for explanation, i.e. the information
processing system, is taken as a starting point and the main effort is put into
reconciling this with detailed introspective reports. The result has been a number of
rather detailed theoretical accounts, although for many of them the validity has not
been adequately established (Woods et al. 1994). While the ability to explain events
has been well developed, the information processing approaches on the whole show
little concern for performance prediction. Even in the case of the more detailed
accounts the descriptions refer to how decision making should take place, but cannot
easily be used to predict exactly how it will happen.
In a cognitive systems engineering approach, and particularly in the case of the
phenotype-genotype approach that is the basis for CREAM (Hollnagel 1993), the
emphasis is on a principled way of analysing and predicting human erroneous
actions. With respect to performance prediction, the importance of describing the
context before looking at the details of how an event may develop is emphasized. In
this respect the role of the Common Performance Conditions (CPCs) is crucial, since
these are used as a means of constraining the propagation of events, by effectively
eliminating some of the links between causes and effects.
The following provides an outline of a method for performance prediction that
was developed to be used as the qualitative part of a Human Reliability Analysis.
The method enables the analysts to achieve the following:
(1) identify the types of incorrect performance (error modes, cognitive failure
modes) that are possible for the given task or scenario; and
(2) qualitatively rank or rate the likelihood of the possible error modes, to
identify those that are the most likely to happen.
The method is a variation of the basic Cognitive Reliability and Error Analysis
Method (Hollnagel 1998). The application of CREAM enables a full quantification
in the HRA/Probabilistic Safety Assessment tradition, but this was beyond the scope
of the current experiment.
3.2. Performance prediction in HEAP
The basis for any kind of performance prediction must be a detailed description of
the situation where the performance takes place and a speci®cation of the critical
aspects of the performance, that is, the target for the predictions. In the present case
the purpose of the performance prediction was to identify the kinds of performance
failures that could occur, expressed in terms of specific error modes. The starting
point for making a prediction of this type must refer to a description of the expected
performance, since the error modes basically are deviations from the expected
performance. Some examples are task analysis, operating procedures, ideal paths or
performance time-lines, and event trees. In this experiment the authors had access to
an Operator Performance Assessment System (OPAS), which has been described by
Skraaning (1998). The basic steps of the prediction method used in the experiment
are shown in figure 2, and the most important steps are described in the text.

Figure 2. Basic steps of performance prediction.
3.2.1. Construct event sequences: Event sequences were constructed using the OPAS
approach, which decomposes the main goal into subgoals that can be defined as low-
level goals of the main goal. OPAS classifies the operator's tasks as either 'detection'
or 'operation'. Alarm recognition and information-gathering tasks belong to the
detection category, while the operator's control tasks and communication tasks belong
to operation. OPAS entails two routines for operator performance recording. One is a
check routine that records whether operators carry out the classified task, and the
other is a run-time routine where the actual run time in the scenario is recorded.
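As a minimal data sketch of this decomposition (the goal and task names are illustrative, not the actual OPAS content used in the experiment):

```python
# Sketch of an OPAS-style decomposition: a main goal broken into
# subgoal tasks, each classified as 'detection' or 'operation'.
# The content is illustrative, not the actual OPAS material.

scenario = {
    "main_goal": "handle oil in compressed air system",
    "tasks": [
        {"subgoal": "registration of alarm", "type": "detection"},
        {"subgoal": "check air system for other problems", "type": "detection"},
        {"subgoal": "bypass RH10 upper filter", "type": "operation"},
        {"subgoal": "send FO to check filter", "type": "operation"},
    ],
}

# Check routine: was each classified task carried out?
# Run-time routine: at what simulation time did it happen?
performed_at = {"registration of alarm": 18.0, "bypass RH10 upper filter": 240.5}

for task in scenario["tasks"]:
    t = performed_at.get(task["subgoal"])
    status = f"done at t={t}s" if t is not None else "not recorded"
    print(f"[{task['type']}] {task['subgoal']}: {status}")
```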
3.2.2. Describe situation (common performance conditions): The performance
conditions have traditionally been described in terms of a relatively small set of
factors that may influence performance, variously called Performance Shaping
Factors (PSF) or Performance Influencing Factors (PIF). In practice it is possible to
define a relatively small set of Common Performance Conditions (CPC) that describe
the general determinants of performance, hence the common modes for actions in a
context. The CPCs are shown in table 1, together with the basic qualitative
descriptors that are suggested to characterize the actual value for each CPC in a
given situation or scenario. The proposed CPCs are intended to have a minimum
degree of overlap, although they are not mutually independent.

Table 1. Common performance conditions.

Adequacy of organization: The quality of the roles and responsibilities of team members, additional support, communication systems, safety management system, instructions and guidelines for externally oriented activities, role of external agencies, etc.

Working conditions: The nature of the physical working conditions, such as ambient lighting, glare on screens, noise from alarms, interruptions from the task, etc.

Adequacy of MMI and operational support: The man-machine interface in general, including the information available on control panels, computerized workstations, and operational support provided by specifically designed decision aids.

Availability of procedures/plans: Procedures and plans include operating and emergency procedures, familiar patterns of response heuristics, routines, etc.

Number of simultaneous goals: The number of tasks a person is required to pursue or attend to at the same time (i.e. evaluating the effects of actions, sampling new information, assessing multiple goals, etc.). Descriptors: fewer than capacity; matching current capacity; more than capacity.

Available time: The time available to carry out a task; corresponds to how well the task execution is synchronized to the process dynamics.

Time of day: The time of day (or night) describes the time at which the task is carried out, in particular whether or not the person is adjusted to the current time (circadian rhythm). Typical examples are the effects of shift work. It is a well-established fact that the time of day has an effect on the quality of work, and that performance is less efficient if the normal circadian rhythm is disrupted.

Adequacy of training and experience: The level and quality of training provided to operators as familiarization to new technology, refreshing old skills, etc. It also refers to the level of operational experience. Descriptors: adequate, high experience; adequate, limited experience; inadequate.
The CPCs are applied, one by one, to the description of the situation or
scenario. On the basis of the available information, the analysts select the
appropriate descriptor for each CPC, and thereby produce an overall
characterization of the situation. This process can be carried out for the situation
or scenario as a whole, or for major segments or sub-parts of the scenario. The
latter may be necessary if it is expected that there will be significant differences
between segments, e.g. early in an accident and later. This process may also be
used to make clear which information is available and which is still missing. In
cases where the available information is insufficient, it may be necessary for the
analysts to make their own assumptions. In such cases the method should help to
identify and document those assumptions.
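As an illustration of this step, the following minimal sketch (not from the paper) represents a characterization as one descriptor selected per CPC; the descriptor vocabularies are abridged and partly assumed, and the segment ratings are invented:

```python
# Sketch: characterize a scenario segment by selecting one descriptor
# per CPC. The descriptor vocabularies are abridged/assumed here.

cpc_descriptors = {
    "adequacy of organization": ["efficient", "inefficient", "deficient"],
    "working conditions": ["advantageous", "compatible", "incompatible"],
    "adequacy of MMI and operational support":
        ["supportive", "acceptable", "inappropriate"],
    "availability of procedures/plans":
        ["appropriate", "acceptable", "inappropriate"],
    "number of simultaneous goals":
        ["fewer than capacity", "matching current capacity", "more than capacity"],
    "available time": ["adequate", "temporarily inadequate", "continuously inadequate"],
    "time of day": ["day-time (adjusted)", "night-time (unadjusted)"],
    "adequacy of training and experience":
        ["adequate, high experience", "adequate, limited experience", "inadequate"],
}

def characterize(ratings):
    """Check that every CPC has been given exactly one legal descriptor."""
    for cpc, legal in cpc_descriptors.items():
        assert ratings.get(cpc) in legal, f"missing or illegal descriptor for: {cpc}"
    return ratings

# Invented characterization for one scenario segment.
segment = characterize({
    "adequacy of organization": "efficient",
    "working conditions": "compatible",
    "adequacy of MMI and operational support": "acceptable",
    "availability of procedures/plans": "appropriate",
    "number of simultaneous goals": "more than capacity",
    "available time": "temporarily inadequate",
    "time of day": "day-time (adjusted)",
    "adequacy of training and experience": "adequate, high experience",
})
print(segment["available time"])
```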
3.2.3. Select performance segments: When the prediction is made for specific
segments of the scenario rather than for the scenario as a whole, the segmentation
must be done with the support of process matter experts. It is furthermore important
that the descriptions are on the same level of detail for all segments. This means that
pre-existing performance descriptions, such as procedures, cannot be used without
considering whether the information they provide is suitable for the needs of the
performance prediction. The level of detail of the descriptions should correspond to
the level of detail in the associated tables of cognitive activities and error modes, cf.
below. It thus requires some degree of familiarity with the process as well as with the
method.
One guideline for delimiting the performance segments refers to the notion of
performance variability. As mentioned above it is unrealistic to expect that an
operator will carry out a task or a procedure in precisely the way it has been described,
e.g. by the task analysis. Yet there will also be some regularities or constraints of the
process that impose an order on major scenario segments. There may be logical
reasons why things have to occur in a certain order, e.g. that identification of
alternatives must precede choice. There may also be physical or engineering reasons why
things have to be done in a certain sequence. This means that it is entirely reasonable
to expect that the performance segments will occur in a certain order, and this may in
turn be used as a guideline for determining the scope of the segments.
3.2.4. Describe actions and assign cognitive activity: The actions within a
performance segment can be described in several ways. In task analyses and
procedures the description is normally a natural language characterization of the
actions. Ideal path descriptions and event trees are both more sparse in their
descriptions, sometimes only providing a label or a short identifier such as 'Stop SI'.
For the purpose of the performance prediction a uniform level of description can
be achieved by describing each action using a set of standard categories, referred to
as the cognitive activity list. The list is derived from accumulated experience from
operator performance studies rather than from a model of operator performance,
hence it has an empirical rather than an analytical basis. The starting point is the
description of the actions as it results from, for example, the task analysis. Each
action is characterized in terms of the corresponding cognitive activity, using a table
of generic cognitive activities (Hollnagel 1998). As an example, the details for the 'Oil
in compressed air (TP) system' scenario are reproduced in table 2.

Table 2. Actions and cognitive activities for part of scenario 1. (Subgoal descriptions: registration of alarm; check air system for other problems such as oil in system; checking faulty RV10S05 valve; bypassing of RH10 upper; send FO (Field Operator) to check; TO (Turbine Operator) asks FO if filter automatics are working; send another FO to check; checking of FP-heater -> FO.)
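The assignment itself can be sketched as a simple lookup from task steps to generic cognitive activities. In the sketch below (not from the paper), the verb-to-activity mapping is invented for illustration, while the activity names follow the generic cognitive activity list in Hollnagel (1998):

```python
# Sketch: assign a generic cognitive activity to each task step by its
# leading verb. The verb-to-activity lookup is invented for illustration;
# the activity names follow the generic list in Hollnagel (1998).

COGNITIVE_ACTIVITY = {
    "register": "observe",
    "check": "verify",
    "bypass": "execute",
    "send": "co-ordinate",
    "ask": "communicate",
}

def assign_activity(action_description):
    """Return the cognitive activity matching the first word of the action."""
    first_word = action_description.lower().split()[0]
    return COGNITIVE_ACTIVITY.get(first_word, "unclassified")

for action in [
    "Register alarm",
    "Check air system for other problems",
    "Bypass RH10 upper filter",
    "Send FO to check filter",
]:
    print(f"{action} -> {assign_activity(action)}")
```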
3.2.5. Determine cognitive function: When the assignment of the cognitive activity
types has been made, it is possible to identify the likely error modes. This can
nominally be done on the level of phenotypes (manifestations), but to be really useful
for the performance prediction it should be made on the level of representative
cognitive functions. It is therefore necessary to refer to an operator model,
particularly of the cognitive functions. Fortunately, this model does not have to be
very complicated, as long as it provides an overall categorization of major cognitive
functions. This can be achieved by using the generic Simple Model of Cognition
(SMoC; Hollnagel and Cacciabue 1991), which makes a distinction among four
groups of cognitive functions, called execution, interpretation, observation and
planning. (The corresponding symbols E, I, O, and P are used in table 3.) This can be
used to define a generic mapping of the dominant cognitive function(s) for each of
the cognitive activities.
The first step in determining the likely error mode is to assign the cognitive
function that corresponds to a cognitive activity. If a cognitive activity involves more
than one cognitive function, it is necessary to choose that which is most important
given the conditions. This choice requires a good understanding of the working
conditions and the tasks, as produced by the preceding steps of the prediction
method. The assignment is illustrated in table 3.
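A minimal sketch of this first step, assuming an abridged activity-to-function table (the SMoC functions are observation (O), interpretation (I), planning (P) and execution (E); the table entries and the tie-breaking rule below are illustrative, not the method's actual mapping):

```python
# Sketch: map each cognitive activity to the SMoC functions it involves
# (O, I, P, E), then pick the single dominant one. The table entries and
# the tie-breaking rule are abridged/assumed for illustration.

ACTIVITY_FUNCTIONS = {
    "observe": ["O"],
    "identify": ["O", "I"],
    "diagnose": ["I"],
    "plan": ["P"],
    "verify": ["O", "I"],       # involves two cognitive functions
    "execute": ["E"],
    "communicate": ["E"],
}

def dominant_function(activity, preferred=None):
    """Pick the most important function; `preferred` stands in for the
    analyst's judgement of the working conditions."""
    candidates = ACTIVITY_FUNCTIONS[activity]
    if preferred in candidates:
        return preferred
    return candidates[0]

# 'verify' involves O and I; under time pressure an analyst may judge
# interpretation (I) to be the critical function.
print(dominant_function("verify", preferred="I"))
```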
3.2.6. Determine possible error mode: The second step is the determination of the
possible error mode for the cognitive activity/cognitive function. This is achieved by
using a table of the possible cognitive function failures or error modes for each of the
basic cognitive functions. The list of possible error modes refers both to observable
failures, i.e. proper error modes, and inferred failures, which are more in the nature of
causes. Examples of the former are provided by execution errors, while examples of
the latter are provided by the other categories. The list shown in table 3 is limited to
the main (cognitive) error modes. A more extensive list is found in Hollnagel (1998).

Table 3. Generic (cognitive) error modes.

Observation:
O1 Observation of wrong object. A response is given to the wrong stimulus.
O2 Wrong identification made, due to e.g. a mistaken cue or partial identification.
O3 Observation not made (i.e. omission), overlooking a signal or a measurement.

Interpretation:
I1 Faulty diagnosis, either a wrong diagnosis or an incomplete diagnosis.
I2 Decision error, either not making a decision or making a wrong or incomplete decision.
I3 Delayed interpretation, i.e. not made in time.

Planning:
P1 Priority error, as in selecting the wrong goal (intention).
P2 Inadequate plan formulated, when the plan is either incomplete or directly wrong.

Execution:
E1 Execution of wrong type performed, with regard to force, distance, speed or direction.
E2 Action performed at wrong time, either too early or too late.
E3 Action on wrong object (neighbour, similar or unrelated).
E4 Action performed out of sequence, such as repetitions, jumps, and reversals.
E5 Action missed, not performed (i.e. omission), including the omission of the last actions in a series ('undershoot').
For each cognitive function several possible error modes are defined. The analyst
must select among these the one that best matches the description of the scenario and
the performance conditions. This combination requires a thorough consideration of
the nature of the scenario, together with appropriate knowledge of the method. In
cases where a cognitive activity involves two cognitive functions, for instance 'verify',
it is necessary to select only one possible (cognitive) error mode. This must be the one
that is most probable given the performance conditions. The result of determining the
error modes is illustrated by table 4, using a segment of one of the experimental
scenarios.

Table 4. Possible error modes for scenario 1, Segment A. (Actions (OPAS): registration of alarm; check air system for other problems such as oil in the system; bypassing of RH10; send FO to check; TO asks FO if filter automatics are working; send another FO to check; FP-heater -> FO. Predicted error modes: O2 wrong identification; I1 faulty diagnosis; I1 faulty diagnosis; E1 execution of wrong type; P2 inadequate plan; E5 action missed, not performed; P2 inadequate plan; I2 decision error.)
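A sketch of this second step, using the error mode codes of table 3; the heuristic that stands in for the analyst's judgement of the performance conditions is invented for illustration:

```python
# Sketch: choose the most probable error mode for a cognitive function,
# given the characterized performance conditions. The selection heuristic
# is an illustrative stand-in for analyst judgement, not a CREAM rule.

ERROR_MODES = {
    "O": ["O1 wrong object", "O2 wrong identification", "O3 observation not made"],
    "I": ["I1 faulty diagnosis", "I2 decision error", "I3 delayed interpretation"],
    "P": ["P1 priority error", "P2 inadequate plan"],
    "E": ["E1 wrong type", "E2 wrong time", "E3 wrong object",
          "E4 out of sequence", "E5 action missed"],
}

def select_error_mode(function, conditions):
    modes = ERROR_MODES[function]
    # Illustrative heuristic: time pressure favours omissions and delays.
    if conditions.get("available time") == "temporarily inadequate":
        for mode in modes:
            if any(w in mode for w in ("not made", "delayed", "missed")):
                return mode
    return modes[0]

print(select_error_mode("E", {"available time": "temporarily inadequate"}))
# -> E5 action missed
```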
4. The experiment
The main experiment was carried out as part of a larger study of alarm systems,
which aimed to investigate different types of alarm display and different alarm
processing levels. The experimental conditions of the study were derived from a
combination of display types and alarm processing levels.
4.1. Test facility and subjects
The experiment was conducted in the HAlden Man-Machine LABoratory
(HAMMLAB) at the Halden Reactor Project. The process model is a full scope
simulation of a pressurized water reactor plant with two parallel feedwater trains,
turbines and generators. It is closely linked to the plant model used in the training
simulator at the Loviisa nuclear power station in Finland.
Plant monitoring is accomplished from two overview displays and a number of
more detailed process formats. The main control station consists of a slightly U-
shaped control desk with two rows of workstations. In this study the control station
accommodated a reactor operator on the left side and a turbine operator on the right
side. It is possible to access dedicated reference symbols in the process displays with a
mouse to obtain more detailed information. The operator may choose any display on
a given screen, thereby providing flexibility to set up displays in accordance with the
demands of the task.
The participants in this study were 12 licensed commercial power plant operators
from the Loviisa nuclear power station in Finland. Six crews of operators
participated with two operators per crew.
4.2. Scenarios and experimental conditions
A set of 16 scenarios was used in the alarm study and the experiment reported
here. All scenarios were designed and rated by subject matter experts, and
assigned to one of two groups of high and low complexity scenarios. The
experimental design is summarized in table 5. The main independent variables in
the alarm study were three kinds of alarm display types and three alarm
processing levels. Owing to limited resources (subject schedule, time requirements,
etc.), the experiment was carried out using only eight of the experimental
conditions, as shown in table 5.

Table 5. Experimental conditions in the alarm study. (The conditions combine three alarm display types (mixed, title and integrated graphics displays) with three alarm processing levels: no processing, nuisance alarms removed, and redundant alarms removed.)

Each of the eight conditions in table 5 was used twice; once with a low level of
complexity and once with a high level of complexity. There were two or more breaks
in each scenario for the purpose of performance data acquisition. The interval
between the breaks depended on the scenario context and was determined so as to
minimize the effect on operation.
4.3. Outcome of performance prediction
The classification of the Common Performance Conditions (CPC) for the experiment
is shown in table 6.

Table 6. Classification of common performance conditions.

1. Adequacy of organization: constant for all scenarios.
2. Working conditions: constant for all scenarios.
3. Adequacy of MMI and operational support: varies for all experimental conditions.
4. Availability of procedure/plan: constant for all scenarios.
5. Number of simultaneous goals: varies within each scenario.
6. Available time: varies within each scenario.
7. Time of day: constant for all scenarios.
8. Adequacy of training, experience: varies between scenarios, constant within each.
Four of the CPCs were assumed to be constant for all scenarios, with the
assignments as shown in table 6. Of the remaining four CPCs, `adequacy of training
and experience’ was assumed to vary between scenarios but to be constant within
each scenario. Two CPCs, 'number of simultaneous goals' and 'available time', were
assumed to vary within a scenario, and were analysed separately for each scenario.
Finally, 'adequacy of MMI and operational support' was assumed to vary for all
experimental conditions. After an analysis of the conditions this CPC was rated as
'acceptable' for all except condition 8 (nuisance alarms removed, integrated
graphics), for which it was rated as supportive, and condition 2 (no processing,
mixed display), for which it was rated as inappropriate (table 5).
For the purpose of the experiment it was decided to select the conditions where it
was most likely that there would be a difference with regard to performance based on
differences in alarm display. The analysis showed that CPC 3 was assumed to affect
performance differently depending on how the alarms were presented. The
experimental conditions selected for analysis were accordingly number 2 (no
processing, mixed display) and number 8 (nuisance alarms removed, integrated
display). In addition, condition number 7 (nuisance alarms removed, title display)
was used as a baseline condition for purposes of comparison.
4.4. Data collection
Data from the experiments were recorded automatically by the computer system in
HAMMLAB. The experiment log includes a record of the alarm system, all the
operators’ interaction with the simulator, and the manipulations done by the
experimental leader. The variable log includes the process parameters known to
change during the scenario. Eye movements were recorded with the ASL model
4000SU eye-tracking equipment consisting of a helmet with cameras and a visor,
control units, monitors, video players, a real time clock, and a calibration surface.
Video recordings were made with one stationary camera and with the Eye
Movement Tracking (EMT) system, together with soundtracks of the operators'
communication and an expert commentator. The expert commentators' verbal
protocols were transcribed from the videotapes to files. The files were used together
with the corresponding event and variable logs for performing the analyses.
OPAS is a performance measurement system that has a check protocol and a
run-time protocol for operator performance recording. The check protocol is used to
record whether operators carry out the tasks as they have been classified by the prior
analysis, hence to determine if the operators follow a predefined operating sequence.
The run-time protocol records the simulation time for each checked task. The
scoring is done on-line by a process expert. After the experiment, the two recorded
protocols can be compared to the simulation log or the video recording in order to
validate the protocols and calculate performance ratings.
4.5. Data analysis
OPAS was the main source for the prediction of error modes prior to the
experiment, and for analyses of the action failures that occurred during the
experiment. To evaluate the prediction quality, time windows for each scenario were
developed by a process expert. The time windows were based on the different stages
in OPAS, and for each stage one basic event was identified to determine a specific
time that all actions within a stage could be related to. Following that, each action
was described by means of its ideal time and critical time for solution, referring to a
plant safety criterion.
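The time-window scoring can be sketched as follows; the function and field names, thresholds, and sample data are hypothetical, with times taken relative to the stage's basic event as described above:

```python
# Sketch: score one observed action against its time window. Times are in
# seconds relative to the stage's basic event; the data are invented.

def score_action(observed, ideal, critical):
    """observed: time the action was performed (None = not performed);
    ideal: ideal solution time; critical: latest acceptable time with
    respect to the plant safety criterion."""
    if observed is None:
        return "not performed"
    if observed <= ideal:
        return "within ideal time"
    if observed <= critical:
        return "late, but within critical time"
    return "after critical time"

for name, t in [("registration of alarm", 20.0),
                ("bypass RH10", 95.0),
                ("send FO to check filter", None)]:
    print(f"{name}: {score_action(t, ideal=30.0, critical=90.0)}")
```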
The predictions were analysed using the time windows, the OPAS formats rated
by the process expert, simulator data and expert commentaries, with occasional use
of videos and eye movement data to clarify some situations. OPAS can be used to
determine whether the operator performed the action or not, when the action was
performed, whether the operator followed the correct sequences or not, etc.
However, OPAS does not provide information on planning activities, nor for some
of the observation and execution activities. These potential error modes were
therefore analysed using other sources of information.
To validate the classification system of the error mode predictions, the scenarios
were scored independently by two analysts. The agreement was high, with a mean of
about 72%, ranging from 53 to 88% agreement. The interscorer reliability can be
calculated using Cohen's Kappa (Breakwell et al. 1995). This compares the nominal
scales from the scoring, and takes into account the probability of the same scoring
arising by chance. The value of Cohen's Kappa was calculated to be 0.66. By chance, the
probability of two analysts agreeing on the same error mode is quite low, as there
are 13 different error modes. Even if the probability of agreeing on the same error
mode by chance was set as high as 40%, all results would still be statistically
significant (p < 0.05).
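For reference, Cohen's Kappa is kappa = (p_o - p_e)/(1 - p_e), where p_o is the observed proportion of agreement and p_e the agreement expected by chance from the two analysts' marginal distributions. A minimal computation over invented nominal scorings:

```python
# Cohen's Kappa for two analysts' nominal scorings (invented data):
# kappa = (p_o - p_e) / (1 - p_e).
from collections import Counter

def cohens_kappa(scores_a, scores_b):
    n = len(scores_a)
    p_o = sum(a == b for a, b in zip(scores_a, scores_b)) / n
    count_a, count_b = Counter(scores_a), Counter(scores_b)
    # Chance agreement from the two analysts' marginal proportions.
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

analyst_1 = ["E5", "I1", "O3", "E5", "I1", "E2", "O3", "E5"]
analyst_2 = ["E5", "I1", "O3", "E2", "I1", "E2", "O3", "I1"]
print(round(cohens_kappa(analyst_1, analyst_2), 2))  # approx. 0.67 for this data
```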
5. Results

5.1. Overall results of predictions
All predictions were analysed, and a match percentage for each scenario was
calculated. The match percentage ranged from 42 to 100%, with a mean of 67.8%.
The histogram (figure 3) shows the match percentage distribution of the different
scenarios. There were four scenarios with a match between predicted error modes and
observed error modes of between 41 and 50%, seven scenarios between 51 and 60%,
ten scenarios between 61 and 70%, eight scenarios between 71 and 80%, five
scenarios between 81 and 90%, and two scenarios between 91 and 100%. The average
match percentage of 68.6% is an acceptable result of prediction.

Figure 3. Percentage match of scenario predictions.
5.2. Detailed analysis of error modes
The data were further analysed to see the match to the predicted error modes for
each cognitive function. This showed that the overall match between predicted and
observed observation error modes was 100% . Interpretation errors had 68% match,
planning errors were not observed, and execution errors had 56% match. Table 7
shows the number and percentage of correctly predicted error modes in more detail.

Table 7. Detailed analyses of match between predicted and observed error modes. (The error modes analysed include: observation of wrong object; wrong identification made; observation not made (in time); inadequate plan formulated; execution of wrong type performed; action performed at wrong time; action on wrong object; action performed out of sequence; action missed, not performed.)
The predictions also included an evaluation of what the most likely error mode
would be for each scenario. This was based on the scorer judgements together with
the characterization made by the common performance conditions of the scenario.
The error mode judged to be the most likely for each scenario actually occurred in
72% of the cases.
5.3. Evaluation of prediction method
The lack of prediction of certain error modes (O1, O2, P1, P2, and E4) could be
related to the way the data were collected. OPAS was used to make the specific
predictions, and the construction of OPAS does not easily permit the prediction of these
error modes. The errors not observed through the prediction phase were related to
'wrong observations', 'wrong plans' or 'wrong actions'. Since OPAS only describes
the actions and observations the operator should carry out, it does not include any
planning activities, nor any possible mistakes or alternative ways to reach the goal.
The incomplete match between OPAS and the prediction model is the main reason
why the predictions did not cover all the steps in the model.
When looking at table 7, the number of predictions for the remaining error
modes was quite high, except for error modes E1 and E4. These error modes can
be said to be quite similar to the error modes that were not observed. Other data
sources could be used to resolve ambiguities in the OPAS scoring, but could not
be used to provide the missing predictions. If a more complete range of
predictions is required, the basic performance description must be refined beyond
the OPAS categories. This refinement should include an evaluation of the
appropriateness of the different error modes, thereby refining the categories. It
might also be considered whether the predictions should be made in relation to
different stages or segments of the scenario, rather than on the level of specific
actions.
6. Discussion

The main purpose of the experiment was to develop and refine a method for
predicting performance failures, expressed as the error modes that can be expected
for a speci®c task. The experiment used a method based on the principles of the
Cognitive Reliability and Error Analysis Method. The results from the data analysis
indicated that the method was reasonably precise in terms of the predictions that
were made, with an average match of 68.6%. The method was deliberately based on
rather simple assumptions about the operator's cognitive functions, and used a very
simple operator model. The experience from using the method was that it was easy to
learn and efficient in use. The analysis of a single scenario could be accomplished in
about half a day. This is important for possible future use, since a cumbersome
method is unlikely to be applied in practice.
The data analysis showed that the data collection, speci®cally the categories
that were used both to prepare the data collection and to make the actual on-line
scorings, was an important factor. Since the experiment was combined with
another study, the principles of the data collection were not ideal for the purpose
of evaluating the error prediction. The positive effect of that was that it avoided
using the same categories for observation and analysis, which to some extent
would have created an artefact. The downside was that the observations may
have failed to provide some categories of data, thereby making it impossible to
make a complete analysis. For future experiments it is recommended that the
categories for data collection are thoroughly evaluated, to support a broader
scope of operator activities. The data analysis also underlined the importance of
relating the categories directly to observable performance traits (phenotypes or
error modes).

Overall the outcome of the experiment rather strongly supports the notion that
error modes can be predicted in a qualitative sense. This confirms the basic principles
of the underlying cognitive model, as well as the appropriateness of the specific method.
Further experiments will be needed to refine and evaluate the method, thereby
improving the confidence in the model and the classification system as well as
providing an enhanced basis for safety analyses and HRA.
References

ALTMAN, J. W. 1964, Improvements needed in a central store of human performance data,
Human Factors, 6, 681-686.
BREAKWELL, G. M., HAMMOND, S. and FIFE-SCHAW, C. (eds) 1995, Research Methods in
Psychology (London: Sage).
DOUGHERTY, E. M. Jr. 1990, Human reliability analysis – where shouldst thou turn? Reliability
Engineering and System Safety, 29, 283-299.
DOUGHERTY, E. M. Jr. and FRAGOLA, J. R. 1988, Human Reliability Analysis. A Systems
Engineering Approach with Nuclear Power Plant Applications (New York: John Wiley).
FOLLESØ, K., KAARSTAD, M., DRØIVOLDSMO, A. and KIRWAN, B. 1995, Relations between task
complexity, diagnostic strategies and performance in diagnosing process disturbances,
in L. Norros (ed.), Proceedings of 5th European Conference on Cognitive Science
Approaches to Process Control, Espoo, Finland, August/September (Espoo: VTT).
HOLLNAGEL, E. 1993, Human Reliability Analysis: Context and Control (London: Academic Press).
HOLLNAGEL, E. 1998, Cognitive Reliability and Error Analysis Method (London: Elsevier).
HOLLNAGEL, E. and CACCIABUE, P. C. 1991, Cognitive modelling in system simulation.
Proceedings of Third European Conference on Cognitive Science Approaches to
Process Control, Cardiff, September.
KAARSTAD, M., FOLLESØ, K., COLLIER, S., HAULAND, G. and KIRWAN, B. 1995, Human Error –
The Second Pilot Study, HWR-421 (Halden, Norway: OECD Halden Reactor Project).
KAARSTAD, M., KIRWAN, B., FOLLESØ, K., ENDESTAD, T. and TORRALBA, B. 1994, Human
Error – The First Pilot Study, HWR-417 (Halden, Norway: OECD Halden Reactor Project).
KIRWAN, B. 1994a, A Guide to Practical Human Reliability Assessment (London: Taylor & Francis).
KIRWAN, B. 1994b, Human Error Project Experimental Programme, HWR-378 (Halden,
Norway: OECD Halden Reactor Project).
NORMAN, D. A. 1981, Categorization of action slips, Psychological Review, 88, 1 ± 15.
NORMAN, D. A. 1988, The Psychology of Everyday Things (New York: Basic Books).
PARK, K. S. 1987, Human Reliability: Analysis, Prediction, and Prevention of Human Errors
(Amsterdam: Elsevier).
REASON, J. T. 1990, Human Error (Cambridge: Cambridge University Press).
REASON, J. T. 1997, Managing the Risks of Organizational Accidents (Aldershot, UK: Ashgate).
ROUSE, W. B. and ROUSE, S. H. 1983, Analysis and classification of human error, IEEE
Transactions on Systems, Man, and Cybernetics, SMC-13, 539-549.
SKRAANING, G. 1998, The Operator Performance Assessment System (OPAS), HWR-538
(Halden, Norway: OECD Halden Reactor Project).
SWAIN, A. D. 1963, A Method for Performing a Human Factors Reliability Analysis,
Monograph SCR-685 (Albuquerque, NM: Sandia National Laboratories).
WOODS, D. D., JOHANNESEN, L. J., COOK, R. I. and SARTER, N. B. 1994, Behind Human Error:
Cognitive Systems, Computers and Hindsight (Columbus, OH: CSERIAC).