Computers & Education 49 (2007) 441–459
www.elsevier.com/locate/compedu
0360-1315/$ - see front matter © 2005 Elsevier Ltd. All rights reserved.
doi:10.1016/j.compedu.2005.09.006
The role of errors in learning computer software
Robin H. Kay *
University of Ontario Institute of Technology, 2000 Simcoe St. North, Oshawa, Ont., Canada L1H 7K4
Received 28 July 2005; received in revised form 24 August 2005; accepted 16 September 2005
Abstract
Little research has been done examining the role of errors in learning computer software. It is argued, though, that understanding the errors that people make while learning new software is important to improving instruction. The purpose of the current study was to (a) develop a meaningful and practical system for classifying computer software errors, (b) determine the relative effect of specific error types on learning, and (c) examine the impact of computer ability on error behaviour. Thirty-six adults (18 males, 18 females), representing three computer ability levels (beginner, intermediate, and advanced), volunteered to think out loud while they learned the rudimentary steps (moving the cursor, using a menu, entering data) required to use a spreadsheet software package. Classifying errors according to six basic categories (action, orientation, knowledge processing, seeking information, state, and style) proved to be useful. Errors related to knowledge processing, seeking information, and actions were observed most frequently; however, state, style, and orientation errors had the largest immediate negative impact on learning. A more detailed analysis revealed that subjects were most vulnerable when observing, trying to remember, and building mental models. The effect of errors was partially related to computer ability; however, beginner, intermediate and advanced users were remarkably similar with respect to the prevalence of errors.
© 2005 Elsevier Ltd. All rights reserved.
Keywords: Computer; error; learn; software; classification; cognitive; expert; novice
* Tel.: +1 905 721 3111 ext. 2679.
E-mail address: robin.kay@uoit.ca
1. Overview
Human error is inevitable, even when straightforward tasks are performed by experienced users
(Hollnagel, 1993; Lazonder & Van Der Meij, 1995; Virvou, 1999). While extensive research has
been done on the role of errors in high-risk domains, substantially less effort has been made in the
area of computer software. The classification rubrics for high-risk domains do not translate well to
a computer-based environment. Furthermore, most research in the computer software domain has
looked at human–computer interaction (HCI) with a focus on improving software interfaces (Car-
roll, 1990; Hourizi & Johnson, 2001; Maxion, 2005; Norman & Draper, 1986; Reason, 1990).
More research is needed on the role of errors in the learning process (Brown & Patterson, 2001;
Reason, 1990).
The purpose of this paper was to (a) develop a meaningful and practical system for classifying errors made while learning a new computer software package, (b) explore the relative effect of specific error types on learning performance, and (c) examine the impact of computer ability on error
behaviour.
2. Literature review
2.1. General research on errors
Extensive research has been done on identifying and evaluating the impact of errors in a wide
variety of domains including air traffic control (Isaac, Shorrock, & Kirwan, 2002), nuclear power
plants (Kim, Jung, & Ha, 2004), medicine (Horns & Lopper, 2002), aeronautics (Hourizi & John-
son, 2001), ATM machines (Byrne & Bovair, 1997), general safety systems (Vaurio, 2001), and
telephone operation (Gray, John, & Atwood, 1993). Typically, these domains are high risk areas
where making errors can result in serious loss of time, money or life. The principal goal of
research, then, is to identify, predict and ultimately eliminate errors (Johnson, 1999). However,
there is considerable evidence to suggest that all humans make errors, even experts (e.g., Kitajima
& Polson, 1995; Norman, 1981; Reason, 1990) in the most straightforward of tasks (Brown &
Patterson, 2001). In short, human error is inevitable (Hollnagel, 1993; Lazonder & Van Der Meij,
1995; Virvou, 1999).
2.2. Errors and human computer interaction
Research on errors in the domain of computers has focussed on system development (Johnson,
1999), software design (Smith & Harrison, 2002), operating systems (Brown & Patterson, 2001),
computer supported co-operative work environments (Trepess & Stockman, 1999), programming
(e.g., Ebrahim, 1994; Emurian, 2004), and HCI (e.g. Carroll, 1990; Norman & Draper, 1986). While
errors in most of these domains (e.g., system and software design, operating systems, and program-
ming) can result in considerable loss of time and money, errors in HCI usually present minimal
risk. Making errors while learning a computer software package can be frustrating and personally
time consuming, but is clearly less risky than a nuclear accident, an incorrect dosage of medicine,
or a computer server shut down.
R.H. Kay / Computers & Education 49 (2007) 441–459 443
The relatively low-risk HCI milieu has implications for the kind of research undertaken. Errors
are more readily accepted (Carroll, 1990; Lazonder & Van Der Meij, 1995; Norman & Draper,
1986), and the key focus of this research is to modify and improve user interfaces so that errors
can be minimized (Carroll, 1990; Hourizi & Johnson, 2001; Maxion, 2005; Norman & Draper,
1986; Reason, 1990). The ultimate goal is to design error-free software that is easy to use for every-
one (e.g., Carroll, 1990; Ebrahim, 1994; Norman & Draper, 1986).
Several researchers (e.g., Brown & Patterson, 2001; Kay, in press) have argued, though, that not
enough emphasis is being placed on the human user and learning. Virvou (1999) and Rieman,
Young, and Howes (1996) note that human reasoning is based on analogies, generalizations, and
guessing when learning new ideas and procedures. These methods work reasonably well but are
prone to errors particularly when a person is interacting with a computer – a machine that can
only interpret precise instructions. Virvou (1999) and Rieman et al.’s (1996) claims are supported
by observed error rates of 25–50% for novices (Lazonder & Van Der Meij, 1995) and 5–20% for
experienced users (Card, Moran, & Newell, 1983; Norman, 1981; Reason, 1990). Finally, Brown
and Patterson (2001) note that computer outages have remained virtually unchanged in the past
three decades in spite of improvements in software interfaces and hardware. In summary, human
error is not a problem that should be left solely to the user interface community (Brown & Patter-
son, 2001). There is a clear need for research examining the role of the human user in modifying
and reducing errors.
2.3. Classification of errors
A considerable amount of time and effort has been devoted to developing useful classification systems of errors (Emurian, 2004; Hollnagel, 2000; Hourizi & Johnson, 2001; Kitajima & Polson, 1995; Lazonder & Van Der Meij, 1995; Reason, 1990; Virvou, 1999). Reason (1990) proposed very general error types: slips or lapses, rule-based mistakes, and knowledge-based mistakes. Slips occur when a correct plan or action is executed incorrectly (e.g., typing mistake, dropping an object, tripping, mispronounced word), whereas a lapse is typically a memory error. Mistakes are based on incorrect plans or models. Rule-based mistakes occur when a user applies an incorrect set of rules to achieve an action. When a person's collection of rule-based, problem-solving routines is exhausted, he/she is forced into slow, conscious model building and can be subject to developing incorrect representations of a problem. These are known as knowledge-based errors. While this classification system has proven to be useful, Reason (1990) acknowledges that "there is no universally agreed classification of human error, nor is there any one in prospect. A taxonomy is usually made for a specific purpose, and no single schema is likely to satisfy all needs" (p. 10).
Hollnagel (1993) contends, though, that there are eight basic errors that can be used to classify any incorrect action, involving timing, duration, force, distance/speed, direction, wrong objects, and sequence. However, Hollnagel's classification rubric has been tested in only a limited range of high-risk domains.
A more specialized or domain-specific approach to error classification is supported by a number of studies offering unique error types including input and test errors while programming (Emurian, 2004), fixation (De Keyser & Javaux, 1996) and automation surprise (Hourizi & Johnson, 2001) errors experienced by pilots, post-completion errors when cards are left in ATM machines (Byrne & Bovair, 1997), social conflict errors in collaborative computer-based communities (Trepess & Stockman, 1999), shift work and medication errors in hospitals (Inoue & Koizumi, 2004), fatal errors for computer server operators (Virvou, 1999), and entanglements or combination errors committed by software users (Carroll, 1990). It would be difficult for a general model of error classification to capture these domain-specific errors. Furthermore, generalizing error categories might take away rich contextual information needed to address and rectify problem areas.
To date, no classification system of computer software errors has been developed, although HCI researchers have informally identified a number of different error types, such as fixation, slips, and mistakes (Norman & Draper, 1986), going too fast, reasoning on the basis of too little information, inappropriate use of prior knowledge, and combination errors or entanglements (Carroll, 1990). Perhaps the most significant error is the inability of a learner to observe or recognize his or her mistakes (Lazonder & Van Der Meij, 1995; Virvou, 1999; Yin, 2001).
2.4. Role of errors in learning
Very little research has been done on attempting to understand the role of errors in the learning
process (Reason, 1990). Three conclusions, noted earlier, indicate that this kind of research,
though, is important. First, errors are inevitable when humans are performing any task (Hollnagel,
1993; Lazonder & Van Der Meij, 1995; Virvou, 1999) and remarkably frequent (5–50% – Card
et al., 1983; Lazonder & Van Der Meij, 1995; Norman, 1981; Reason, 1990) in a learning situation,
particularly when it involves computers (Virvou, 1999). Second, the role of the human in the error
process needs to be studied in more detail to complement the extensive research on computer
interfaces (e.g., Brown & Patterson, 2001; Kay, in press). Third, domain-specific classification rubrics need to be developed with a focus on cognitive activity and computers.
Hollnagel (2000) offered four general learning or cognitive categories for errors: execution, interpretation, observations, and planning. While relatively untested, these categories offer a starting point with which to investigate the role of errors in learning. Additionally, Reason's (1990) rule- and knowledge-based error categories might be useful given the procedural and model building activities involved in learning computer software.
After identifying and classifying errors made while learning new software, it is equally impor-
tant to examine how users recover from errors. Novices, for example, have been reported to need
extensive, context-specific information when an error has occurred (Lazonder & Van Der Meij, 1995; Yin, 2001). Experienced users, on the other hand, have an affinity for recovering from errors quickly (Kitajima & Polson, 1995). Regardless of ability level, being forced to divert attention to "error" interruptions is common when interacting with computer software and can cause immediate short-term memory loss (Oulasvirta & Saariluoma, 2004). As well, the adequate handling of errors depends on what the users do with respect to detection, diagnosis, and correction (Lazonder & Van Der Meij, 1995). Ultimately, understanding the role of errors in learning can be instrumental to guiding effective instruction (Carroll, 1990).
2.5. Effect of ability
It is reasonable to expect that one's previous ability using computer software will affect the prevalence and impact of errors made. Experts are expected to outperform beginners in new learning
environments. In fact, expertise has been examined in a number of domains including chess (Char-
ness, 1991), physics (Anzai, 1991), medicine (Patel & Groen, 1991), motor skills in sports and dance
(Allard & Starkes, 1991), music (Sloboda, 1991), and literacy (Scardamalia & Bereiter, 1991). The
typical expertise paradigm involves comparing experts with novices on a series of tasks that experts
can do well and that novices have never tried (Ericsson & Smith, 1991). However, Reason (1990)
notes “no matter how expert people are at coping with familiar problems, their performance will
begin to approximate that of novices once their repertoire of rules has been exhausted by the
demands of a novel situation” (p. 58). The nature of expertise in using computer software has not
been examined in the literature, particularly with respect to experts attempting unfamiliar tasks.
2.6. Purpose of study
The purpose of this study was threefold. First, a formative, post-hoc analysis was done to
develop a meaningful and practical system for classifying errors specific to learning a new computer software package. Second, the relative effect of each error category on learning performance
was examined. Finally, the impact of computer ability on error behaviour was evaluated.
3. Method
3.1. Sample
The sample consisted of 36 adults (18 males, 18 females): 12 beginners, 12 intermediates, and 12
advanced users, ranging in age from 23 to 49 (M = 33.0 years), living in the greater metropolitan
Toronto area. Subjects were selected on the basis of convenience. Equal numbers of males and
females participated in each ability group. Sixteen of the subjects had obtained their Bachelor’s
degree, eighteen their Master’s degree, one a Doctoral degree, and one, a community college
diploma. Sixty-four percent (n = 23) of the sample were professionals; the remaining 36% were students (n = 13). Seventy-two percent (n = 26) of the subjects said they were regular users of
computers. All subjects voluntarily participated in the study.
3.2. Procedure
Overview. Each subject was given an ethical review form, computerized survey and interview
before attempting the main task of learning the spreadsheet software package. Note that the sur-
vey and interview data were used to determine computer ability level. Once instructed on how to
proceed, the subject was asked to think-aloud while learning the spreadsheet software for a period
of 55 min. All activities were videotaped with the camera focused on the screen. Following the
main task, a post-task interview was conducted.
Learning tasks. Spreadsheet software is used to create, manipulate and present rows and col-
umns of data. The mean pre-task score for spreadsheet skills was 13.1 (SD = 15.3) out of a total
possible score of 44. Ten of the subjects (6 advanced users, 4 intermediates) reported scores of 30
or more. None of the subjects had ever used the specific spreadsheet software package used in this
study (Lotus 1-2-3).
Subjects attempted a maximum of five spreadsheet activities arranged in ascending level of difficulty including (1) moving around the spreadsheet (screen), (2) using the command menu, (3) entering data, (4) deleting, copying, and moving data, and (5) editing. They were first asked to learn "in general" how to do activity one, namely moving around the spreadsheet. When they were confident that they had learned this activity, they were then asked to complete a series of specific tasks. All general and specific activities were done in the order presented in Appendix A. In other
words, subjects could not pick and choose what they wanted to learn.
From an initial pilot study of 10 subjects, it was determined that 50–60 min was a reasonable
amount of time for subjects with a wide range of abilities to demonstrate their ability to learn the
spreadsheet software package. Shorter time periods limited the range of activities that beginners
and intermediate subjects could complete.
In the 55-min time period allotted to learn the software in the current study, a majority of the
subjects completed all learning tasks with respect to moving around the screen (100%) and using
the command menu (78%). About two-thirds of the subjects attempted to enter data (69%), although only one-third finished (33%) all the activities in this area. Less than 15% of all subjects completed the final tasks: deleting, copying, moving, and editing data.
3.3. Data collection
Think-aloud protocols. The main focus of this study was to examine the role of errors with
respect to learning computer software. The use of think-aloud protocols (TAPs), where subjects
verbalize what comes to their mind as they are doing a task, is one promising technique for examining this kind of learning. Essentially, the think-aloud procedure offers a window into the internal talk of a
subject while he/she is learning. Ericsson and Simon (1980), in a detailed critique of TAPs, con-
clude that “verbal reports, elicited with care and interpreted with full understanding of the circum-
stances under which they were obtained, are a valuable and thoroughly reliable source of
information about cognitive processes” (p. 247).
The analyses used in this study are based on think-aloud data. Specifically, 627 learning behaviours involving errors were classified and rated according to the degree to which they influenced learning.
Presentation of TAPs. The following steps were carried out in the think-aloud procedure to
ensure high quality data:
Step 1. (Instructions) Subjects were asked to say everything they were thinking while working on
the software. Subjects were told not to plan what they were going to say.
Step 2. (Examples) Examples of thinking aloud were given, but no practice sessions were done.
Step 3. (Prompt) Subjects were told it was important that they keep talking and that if they were
silent for more than 5 s, they would be reminded to “Keep talking”.
Step 4. (Reading) Subjects were permitted to read silently, but they had to indicate what they were
reading and summarize when they had finished.
Step 5. (Giving help) If a subject was really stuck, he/she could ask for help. A minor, medium, or
major form of help would be given, depending on how “stuck” a subject was.
Step 6. (Recording of TAPs) Both thinking aloud and the computer screen were recorded using an
8mm video camera.
3.4. Data source
Independent variables. There were six principal independent variables in this study, corresponding to the six categories of errors made by subjects. Note that errors were labeled according to what learning activity a subject was doing and included errors made when subjects were (a) actively doing something (action), (b) trying to find their current location or state of progress (orientation), (c) manipulating or processing information in some way (knowledge processing), (d) seeking information, (e) fixated or committing multiple errors simultaneously (in a state), and (f) acting in a unique fashion (style). Operational definitions for these six classifications and their respective sub-categories are presented in Table 1. Note that this classification system was driven by empirical observation, not theory.
In addition, three computer ability levels were compared in this study: beginners, intermediates, and advanced users. The criteria used to determine these levels included years of experience, previous collaboration, previous learning, software experience, number of application software packages used, number of programming languages/operating systems known, and application software and programming languages known. A multivariate analysis showed that beginners had significantly lower scores than those of intermediate and advanced users (p < .005), and intermediate users had significantly lower scores than advanced users on all eight measures (p < .005).
Dependent variables. The effect of each error category was evaluated using five dependent variables: how often an error was committed (frequency), the influence the error had on learning (a score from −3 to 0; see Table 2 for rating criteria), the percentage of subjects who made an error, total error effect score, and total amount learned. Conceptually, the first three variables assessed prevalence (how often the behaviour was observed and by how many subjects) and intensity (mean influence of the learning behaviour). The fourth variable, total error effect score, was a composite of the first three variables and was calculated by multiplying the frequency with which an error occurred by the mean influence score of the error by the percentage of subjects who made the error. For example, knowledge processing errors were made 154 times, had a mean influence of −1.56 and were made by 97% of the subjects. The total error effect score, then, was −233.0 (154 × −1.56 × 0.97).
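The arithmetic behind this composite can be expressed in a few lines of Python. This is a sketch for clarity only, not the study's analysis code; the function name is mine, and the example values are the knowledge processing figures quoted above.

    # Sketch: the "total error effect" composite described in the text.
    def total_error_effect(frequency, mean_influence, pct_subjects):
        # frequency x mean influence on learning x proportion of subjects
        return frequency * mean_influence * pct_subjects

    # Knowledge processing: 154 errors, mean influence -1.56, 97% of subjects.
    print(round(total_error_effect(154, -1.56, 0.97), 1))  # -233.0, as reported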
Total amount learned was calculated by adding up the number of subgoal scores that each sub-
ject attained during the 55-min time period. For each task, a set of learning subgoals was rated according to difficulty and usefulness. For example, the task of "moving around the screen" had five possible subgoals that could be attained by a subject: using the cursor key (1 point), using the page keys (1 point), using the tab keys (1 point), using the GOTO key (2 points), and using the
End-Home keys (2 points). If a subject met each of these subgoals successfully, a score of 7 would
be given. If a subject missed the last subgoal (using the GOTO key), a score of 5 would be assigned.
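A minimal sketch of this subgoal scoring, using the "moving around the screen" task as the example; the point weights come from the text, while the data structure and function name are assumptions of mine.

    # Subgoals and point weights for "moving around the screen" (from the text).
    MOVING_SUBGOALS = {
        "cursor_key": 1, "page_keys": 1, "tab_keys": 1,
        "goto_key": 2, "end_home_keys": 2,
    }  # maximum task score: 7

    def task_score(attained):
        # Sum the weights of the subgoals the subject actually attained.
        return sum(MOVING_SUBGOALS[goal] for goal in attained)

    # All subgoals met -> 7; missing the GOTO key -> 5, as in the examples above.
    print(task_score(MOVING_SUBGOALS))                                           # 7
    print(task_score(["cursor_key", "page_keys", "tab_keys", "end_home_keys"]))  # 5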
Reliability of TAPs. Reliability and validity assessments were derived from the feedback given during the study and a post-task interview. One principal concern was whether the TAPs influenced learning. While several subjects reported that the think-aloud procedure was "weird", "frustrating" or "difficult to do", the vast majority found the process relatively unobtrusive. Almost 70% of the subjects (n = 25) felt that thinking aloud had little or no effect on their learning.

The accurate rating of the influence of an error on learning (Table 2) was critical to the reliability and validity of this study. Because of the importance of the learning influence scores, six outside raters were used to assess a 10%, stratified, random sample of the total 627 occasions when errors were made. Inter-rater agreement was calculated using Cohen's kappa (Cohen, 1960), a more conservative and robust measure of inter-rater agreement (Bakeman, 2000; Dewey, 1983). The coefficients for inter-rater agreement between the experimenter and the six external raters (within one point) were as follows: Rater 1, 0.80; Rater 2, 0.82; Rater 3, 0.95; Rater 4, 0.94; Rater 5, 0.93; Rater 6, 0.93. Coefficients of 0.90 or greater are nearly always acceptable and 0.80 or greater is acceptable in most situations, particularly for the more conservative Cohen's kappa (Lombard, Snyder-Duch, & Bracken, 2004).
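For readers who want to reproduce this kind of reliability check, a basic unweighted Cohen's kappa can be computed as below. This is a sketch under assumptions: the study scored agreement "within one point", which would require recoding rating pairs before applying the standard formula shown here, and the ratings are hypothetical, not the study's data.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        # Chance-corrected agreement for two raters over the same items.
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected agreement under independence, from each rater's marginals.
        pa, pb = Counter(rater_a), Counter(rater_b)
        expected = sum((pa[c] / n) * (pb[c] / n) for c in pa.keys() | pb.keys())
        return (observed - expected) / (1 - expected)

    # Hypothetical ratings on the -3..0 influence scale.
    experimenter = [-3, -2, -1, 0, -2, -1, -1, 0, -3, -2]
    outside_rater = [-3, -2, -1, -1, -2, -1, 0, 0, -3, -1]
    print(round(cohens_kappa(experimenter, outside_rater), 2))  # 0.59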
Table 1
Operational definitions of errors
Error type: Criteria

Action error
  Observation: Does not observe consequences when key is pressed
  Sequence: Types in/selects information in the wrong order
  Syntax: Correct idea but types in incorrect syntax
  Wrong key: Presses the wrong key

Orientation error
  General: Does not know where he/she is in the program

Knowledge processing error
  Arbitrary connection: Makes arbitrary connection between two events
  Missed connection: Misses connection between two events
  Mistaken assumption: Makes mistaken assumption
  Mental model: Misunderstanding in subject's mental model of how something worked
  Over extension: Extends concept to an area in which it does not apply
  Wrong search space: Subject chooses wrong location in which to look for information
  Too specific in focus: Subject's focus is too narrow or specific
  Misunderstands task: Subject misunderstands task in study
  Terminology: Does not understand meaning of word or phrase

Seeking information error
  Attention: Shifts attention away from current task
  Memory error: Forgets information that has been presented/read previously
  Observe: Misreads or does not see a cue or piece of information

State error
  Combination: Combination of 2 or more error types
  Fixation: (a) Repeats exact same activity at least three times when it is clear each time the activity does not work; (b) repeated activity occurs for more than 5 min with no progress made toward a solution

Style error
  Miscellaneous: Miscellaneous style (e.g., random typing or turning of pages, taking the long safe route, stalling for time)
  Pace: Doing an activity at a pace at which they miss information being presented
  Premature closure: Believes he/she has finished the task when there is more to complete
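For researchers coding think-aloud transcripts against this scheme, the taxonomy in Table 1 can be captured in a small lookup structure. A sketch only; the identifiers and tally function are mine, not part of the study.

    from collections import Counter

    # Six main categories and their subcategories, transcribed from Table 1.
    ERROR_TAXONOMY = {
        "action": {"observation", "sequence", "syntax", "wrong_key"},
        "orientation": {"general"},
        "knowledge_processing": {"arbitrary_connection", "missed_connection",
                                 "mistaken_assumption", "mental_model",
                                 "over_extension", "wrong_search_space",
                                 "too_specific_in_focus", "misunderstands_task",
                                 "terminology"},
        "seeking_information": {"attention", "memory", "observe"},
        "state": {"combination", "fixation"},
        "style": {"miscellaneous", "pace", "premature_closure"},
    }

    def tally(coded_events):
        # Count (category, subcategory) codes, rejecting codes not in the scheme.
        counts = Counter()
        for category, subcategory in coded_events:
            if subcategory not in ERROR_TAXONOMY.get(category, set()):
                raise ValueError(f"unknown code: {category}/{subcategory}")
            counts[category] += 1
        return counts

    print(tally([("action", "wrong_key"), ("state", "fixation"),
                 ("action", "syntax")]))  # Counter({'action': 2, 'state': 1})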
4. Results
4.1. Frequency of errors made
The average number of errors per subject for the 55-min learning period was 17.4 (SD = 9.4). Errors were experienced most often when subjects were seeking information (n = 170), processing knowledge (n = 154), or carrying out some action (n = 131). The most frequent subcategory errors occurred when subjects were observing either their own actions or while seeking information (n = 161), trying to remember information (n = 93), attempting to create a mental model (n = 84), or committing a combination of errors (n = 54). The frequency of each error category is presented in Table 3.
4.2. Mean influence of errors on learning
There was a significant difference among the six main error categories with respect to their immediate influence on learning (p < .001; Table 4). State (M = −1.90), orientation (M = −1.73), and knowledge processing errors (M = −1.56) were significantly more detrimental than seeking information (M = −1.17) and action errors (M = −1.10) (Scheffé post hoc analysis; p < .005; Table 4).

Subcategories with a mean influence of −1.60 or less included mental model (M = −1.67), wrong search space (M = −1.75), terminology (M = −1.64), combination (M = −2.00), and pace (M = −1.67) errors. A statistical comparison among subcategories of errors could not be done because of the small sample size (Table 5).
Table 2
Rating system for influence score
Score −3. Criteria: A significant misunderstanding or mistake is evident that is judged to use a significant amount of time. Example: Subject thinks that the software help is the main menu and spends 15 min learning to do the wrong task.

Score −2. Criteria: A significant misunderstanding or mistake which leads subject away from solving the task at hand. Example: Subject believes all commands are on the screen and does not understand that there are sub-menus. This results in some time loss and confusion.

Score −1. Criteria: Minor misconception that has little effect on the direct learning of the task at hand. Example: Subject tries HOME key, which takes him back in the wrong direction, but does not cause a big problem in terms of moving to the specified cell.

Score 0. Criteria: (a) Activity has no apparent effect on progress, OR (b) can't directly determine effect of activity, OR (c) both good and bad effects. Examples: (a) Subject tries a key and it does not work (e.g., gets beeping sound); (b) subject gets upset, but it is hard to know how it affects future actions; (c) subject moves to cell quickly, but fails to learn a better method. It is good that he completed the task, but bad that he did not learn a more efficient method.
Table 3
Frequency of errors made
Error type Frequency % of all errors
Seek information
Memory 93 15
Observe 72 11
Attention 5 1
Total 170 27
Knowledge processing
Mental models 84 13
Mistaken assumption 22 4
Terminology 11 2
Over extension 9 1
Wrong search space 8 1
Misunderstands task 6 1
Too specific in focus 5 1
Arbitrary connection 5 1
Missed connection 4 1
Total 154 25
Action
Observation 89 14
Wrong key 28 4
Syntax 7 1
Sequence 7 1
Total 131 21
State
Combination 54 9
Fixation 14 2
Total 68 11
Style
Premature closure 30 5
Pace error 15 2
Misc. style 10 2
Total 55 9
Orientation
Total 49 8
Table 4
Analysis of variance for error type as a function of mean influence on learning score

Source          Sum of squares   df    Mean square   F
Between groups       47.24         5      9.45       14.00*
Within groups       419.28       621      0.68
Total               466.52       626

* p < 0.001.
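The table's arithmetic can be double-checked in a few lines: each mean square is a sum of squares divided by its degrees of freedom, and F is the ratio of the two mean squares. A sketch, not the original analysis; the p-value step assumes SciPy is available.

    from scipy.stats import f

    ss_between, df_between = 47.24, 5      # 6 error categories -> 5 df
    ss_within, df_within = 419.28, 621     # 627 rated errors -> 626 total df

    ms_between = ss_between / df_between           # 9.45
    ms_within = ss_within / df_within              # 0.68
    f_stat = ms_between / ms_within                # ~14.0, matching Table 4
    p_value = f.sf(f_stat, df_between, df_within)  # well below .001
    print(round(f_stat, 1), p_value)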
4.3. Percentage of subjects who made errors
It is clear from Table 5 that all subjects, regardless of ability, made errors while learning. Knowledge processing, seeking information, and action errors were made by over 90% of all subjects. State (75%) and style (67%) errors were observed less often, and only half the subjects experienced orientation errors.
Table 5
Total error effect as a function of error type
a Calculated by multiplying frequency by % of subjects who made this error type by mean influence on learning.
Error type              Count   % of subjects   Mean influence (SD)   Total error effect a

Knowledge processing
  Mental model             84        78            −1.67 (0.8)            −109.4
  Mistaken assumption      22        47            −1.36 (0.6)             −14.1
  Terminology              11        28            −1.64 (0.7)              −5.1
  Wrong search space        8        19            −1.75 (0.9)              −2.7
  Over extension            9        17            −1.33 (1.0)              −2.0
  Misunderstands task       6        14            −1.33 (0.8)              −1.1
  Too specific in focus     5        14            −1.20 (0.8)              −0.8
  Arbitrary connection      5        11            −1.40 (0.6)              −0.8
  Missed connection         4        11            −1.25 (1.0)              −0.6
  Total                   154        97            −1.56 (0.8)            −233.0

Seek information
  Memory                   93        89            −1.00 (0.7)             −82.8
  Observe                  72        78            −1.39 (0.9)             −78.1
  Attention                 5        14            −1.20 (0.8)              −0.8
  Total                   170        97            −1.17 (0.8)            −192.9

Action
  Observation              89        86            −1.40 (0.7)            −107.2
  Wrong key                28        58            −0.43 (0.4)              −7.0
  Syntax                    7         8            −0.71 (0.7)              −0.4
  Sequence                  7        11            −0.29 (1.1)              −0.2
  Total                   131        94            −1.10 (0.9)            −135.5

State
  Combination              54        53            −2.00 (0.8)             −57.2
  Fixation                 14        28            −1.50 (0.8)              −5.9
  Total                    68        67            −1.90 (0.8)             −86.6

Style
  Premature closure        30        61            −1.33 (0.6)             −24.3
  Pace error               15        28            −1.67 (0.6)              −7.0
  Misc. style              10        17            −1.60 (0.5)              −2.7
  Total                    55        75            −1.47 (0.6)             −60.6

Orientation
  Total                    49        53            −1.73 (1.0)             −44.9

All errors
  Total                   627       100            −1.40 (0.9)            −877.8
With respect to specific subcategories, memory errors (89%), failing to accurately observe the consequences of one's actions (78%), inaccurate mental models (78%), and observation errors while seeking information were experienced by a majority of the subjects.
4.4. Total error effect score
Knowledge processing, seeking information, and action errors showed the highest total error effect scores, largely because these kinds of errors were made frequently by almost all subjects (Table 5). State, style, and orientation errors showed relatively low total error effect scores because they were experienced less often by fewer subjects.
4.5. Total amount learned
Only one of the six main categories, orientation errors, showed a significant correlation (r = −0.57, p < .05) with total amount learned. This result is consistent with the relatively high mean influence on learning score observed for orientation errors, but not with the low total effect score.
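The significance of a correlation of this size can be checked with the usual t transformation. A sketch under an assumption: n = 36 (all subjects) is my guess, since the paper does not state how many cases entered this calculation.

    from math import sqrt
    from scipy.stats import t

    r, n = -0.57, 36                                # n = 36 is an assumption
    t_stat = r * sqrt(n - 2) / sqrt(1 - r ** 2)     # ~ -4.05
    p_two_tailed = 2 * t.sf(abs(t_stat), df=n - 2)  # ~ .0003, consistent with p < .05
    print(round(t_stat, 2), p_two_tailed)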
4.6. Computer ability level and errors made
Frequency of errors. There were no significant differences among the beginner (M = 20.4; SD = 11.7), intermediate (M = 16.8; SD = 10.0) and advanced (M = 15.1; SD = 5.6) groups with respect to the number of errors made.
Mean influence on learning. A two-way ANOVA revealed significant differences among ability levels (p < .001), but no interaction effect between error type and ability level (Table 6). Advanced users (M = −1.15; SD = 0.86) were affected by errors significantly less than either intermediate (M = −1.40; SD = 0.81) or beginner users (M = −1.58; SD = 0.86) (Scheffé post hoc analysis, p < .05).
Orientation errors. While advanced users were clearly less affected by errors than intermediate or beginner users (Table 6), a closer examination of frequency of errors, percentage of subjects who made errors, and mean influence of errors on learning as a whole revealed notable similarities among all three ability groups with one exception: orientation errors. Advanced users committed this kind of error infrequently and recovered quickly (Table 7). This turned out to be a significant advantage, as "orientation errors" was the only category that was significantly and negatively correlated with total amount learned.

Table 6
Two-way analysis of variance for error type and ability as a function of mean influence on learning score

Source                     Sum of squares   df    Mean square   F
Ability                         11.27         2      5.63        8.52*
Error category                  32.76         5      6.55        9.90*
Ability × Error category         4.70        10      0.47        0.71
Within cells                   402.70       609      0.66
Total                          466.52       626

* p < 0.001.
5. Discussion
5.1. Classification system for computer software domain

The three error categories (slips/lapses, rule-based mistakes, and knowledge-based mistakes) proposed by Reason (1990) can be applied to a number of the error types identified in this study. Pressing the wrong key, typing in an incorrect command, and forgetting newly learned information fit into the slips/lapses category. Selecting the wrong sequence of actions, making a mistaken assumption, and over-extending a strategy align reasonably well with rule-based errors. Finally, having an incorrect mental model, misunderstanding a task, not understanding terminology, and making arbitrary connections appear to be knowledge-based errors. However, using Reason's (1990) more general categories, while parsimonious, eliminates key contextual clues about the circumstances surrounding error behaviour. In addition, combination, fixation, orientation, and style errors have no obvious place in Reason's classification rubric.
Hollnagel’s (2000) cognitive error classiWcation system (execution, interpretation, observations,
and planning) is also a reasonable model for the errors observed in this paper. There appears to be
a good Wt for action and execution errors, knowledge processing and interpretation errors, and
seeking information and observation errors. However, Hollnagel’s (1993) planning category does
not match the typical tasks performed by someone learning a new software package. Deliberate,
well-thought out actions appear to be the exception (see Kay, in press). Virvou’s (1999) approxi-
mate reasoning, trial and error, guessing paradigm is a closer match to what occurred in this study.
It is worth noting that Hollnagel’s error model, like Reason’s (1990) model, would eliminate useful
descriptive details. As well, the model fails to account for domain-speciWc errors like Wxation and
combination mistakes.
In this study, errors were organized according to what subjects were doing in the learning process when they made their mistake. This richer, purpose-focused classification system provides (a) a better understanding of the knowledge building process, and (b) specific opportunities for improving instruction. This system also proved to be consistent with errors informally observed in HCI research (e.g., Carroll, 1990; Norman & Draper, 1986): fixation, going too fast, reasoning on the basis of too little information, inappropriate use of prior knowledge, and combination errors or entanglements (Carroll, 1990).

Table 7
Frequency, percent of subjects who made error, and mean influence on learning score as a function of ability level

Error category      Frequency (B/I/A)   % of subjects (B/I/A)   Mean influence (B/I/A)
Actions             45 / 45 / 41        100 / 83 / 100          −1.24 / −1.15 / −0.88
Orientation         32 / 14 /  3         67 / 67 /  25          −1.88 / −1.57 / −1.00
Know. processing    67 / 43 / 44        100 / 92 / 100          −1.64 / −1.53 / −1.45
Seeking info        51 / 64 / 55         92 / 100 / 100         −1.27 / −1.23 / −1.00
State               33 / 21 / 14         75 / 75 / 75           −2.09 / −2.00 / −1.29
Style               17 / 14 / 24         75 / 75 / 75           −1.64 / −1.42 / −1.37

Note: B, beginner; I, intermediate; A, advanced.
6. Effect of errors on learning
The findings from this study suggest that all subjects, regardless of ability level, make errors throughout the entire computer knowledge acquisition process: when they look for useful information, when they observe the result of their keystrokes, when they attempt to develop a model to understand what they have learned, and when they make judgements about whether they have achieved their final goal. This result is consistent with claims of error inevitability (Hollnagel,
1993; Lazonder & Van Der Meij, 1995; Virvou, 1999).
The most frequent errors, experienced by over 90% of all subjects, were those related to seeking information, knowledge processing, and interacting with the software. More specifically, subjects appear most vulnerable with respect to observation, memory, and model building errors. These weak spots are indirectly supported by previous research. Lazonder and Van Der Meij (1995) noted that knowing when a mistake occurs and its exact nature can be vital to success. If a subject fails to observe what has happened (observation error), learning can be severely limited. Oulasvirta and Saariluoma (2004) add that attending to interruptions, a typical state of affairs while learning computer software, can lead to short-term memory loss (memory error). Finally, because the software in this study was new to all subjects, Reason's (1990) framework predicts that the probability of committing knowledge-based errors (model building errors) would increase.
It is worthwhile to note that the most frequent errors were not the most detrimental to learning. State, style and orientation errors, which were observed relatively infrequently, had the highest negative mean influence on learning. In other words, specific error types, even if they don't occur often, can appreciably interrupt the learning process. Virvou's (1999) "fatal" error category might be useful here. This kind of error is fatal in the sense that considerable time is lost while learning.
Orientation errors were noteworthy for two reasons. First, they were the only error type significantly and negatively correlated with learning. Second, they appeared to affect beginner and intermediate users more than advanced users. This kind of error, however, has not been emphasized in previous HCI research (e.g., Carroll, 1990; Norman & Draper, 1986). More research needs to be done on how to address this kind of problem for new users.
For the most part, errors have an immediate negative effect on learning behaviour, but are not significantly related to overall performance or total amount learned. This result may reflect the fact that errors are a natural component of learning, regardless of ability level, and that while they have an immediate negative effect, other learning behaviours (e.g., transfer knowledge; see Kay, in press) have a more significant and direct impact on overall learning performance.
6.1. Errors and computer ability
Previous expertise research suggests that advanced users would make fewer errors while learning and that the consequences of these errors would be less severe (e.g., Kitajima & Polson, 1995). The latter conclusion was supported by this study, but not the former. The reason for this discrepancy may be due to the research paradigm used. In a typical expertise research design, experts are asked to do tasks they know quite well; little if any learning is required. In this study, advanced users were asked to learn software they had never used before. In a true learning situation, it appears that subjects of all ability levels make a full range of errors. This result is consistent with Reason's (1990) proposition that more experienced users will start to look like novices when exposed to unfamiliar situations.
6.2. Suggestions for educators
An examination of the kinds of errors subjects make while learning suggests that help is needed in a variety of areas. Educators should be aware of the following:

(1) Careful observation of one's actions is critical for success.
(2) Errors due to forgetting or poor mental models were frequent. Ensuring that new learners have adequate representations of computer-based concepts might be one way of helping them avoid making costly errors.
(3) Orientation errors, although relatively infrequent, need to be addressed because they are particularly influential on immediate and overall learning. Providing new users with clear cues about where they are and what they are doing at any given moment may be important, particularly for beginners and intermediates.
(4) Subjects, regardless of computer proficiency, will have more difficulty when they become fixated on a problem or when they experience more than one error at the same time.
(5) With the exception of orientation errors, expect subjects of all ability levels to experience a full range of errors.
6.3. Future research
This study is a preliminary first step toward investigating the role of errors in learning a new software package. This research needs to be expanded in three key areas:

(1) test the classification scheme on a broader range of computer software;
(2) explore how users recover from errors; and
(3) evaluate various intervention strategies based on a well-developed error rubric.
6.4. Caveats
No research endeavor is without flaws. These factors should be considered when interpreting the results and conclusions of this study:

(1) Although over 600 learning activities were analyzed, the sample consisted of only 36 subjects, who were highly educated, and in their thirties. The results might be quite different for other populations.
(2) Only one software package was examined: spreadsheets. A variety of software packages need to be examined to increase the confidence in the results of this study.
(3) Procedural factors such as thinking aloud and the presence of an experimenter may have altered learning. Stress, for example, can increase error rates significantly (Brown & Patterson, 2001).
(4) Subjects did not choose to learn this software for a personally significant reason. Reduced motivation may have affected error behaviour (Trepess & Stockman, 1999).
(5) The think-aloud process, while fairly comprehensive, captured only a subset of subjects' thoughts during the learning process. The classification system of errors, then, is compromised because one cannot truly know what is going on in the user's mind.
6.5. Summary
A six-category classification system of errors, based on a subject's purpose or intent while learning, was effective in identifying influential behaviours in the learning process. Errors related to knowledge processing, seeking information and actions were observed most frequently; however, state, style, and orientation errors had the largest immediate impact on learning. A more detailed analysis revealed that subjects were most vulnerable when observing, trying to remember, and building mental models. The effect of errors was partially related to computer ability; however, beginner, intermediate and advanced users were remarkably similar with respect to the prevalence and impact of errors.
Appendix A. Specific spreadsheet tasks presented to subjects
General Task 1: Moving around the screen
Specific Tasks 1:
(a) Move the cursor to B5.
(b) Move the cursor to B161.
(c) Move the cursor to Z12.
(d) Move the cursor to A1.
(e) Move the cursor to HA1235.
(f) Move the cursor to the bottom left corner of the entire spreadsheet.
General Task 2: Using the command menu
Specific Tasks 2:
(a) Move to the command menu, then back to the worksheet.
(b) Move to the command: Save.
(c) Move to the command: Sort.
(d) Move to the command: Retrieve.
(e) Move to the command: Set Width.
(f) Move to the command: Currency.
General Task 3: Entering data into a cell

NAME   TELEPHONE   SEX   DATE DUE   AMOUNT
Robin  900-0100    M     07/14/92   300.12
Mary   800-0200    F     06/16/92   20046.23

Specific Tasks 3:
(a) Please start in cell A1 and enter all the information above.
(b) Centre the title SEX.
(c) Right justify the title AMOUNT.
(d) Widen the TELEPHONE column to 15 spaces.
(e) Narrow the SEX column to 5 spaces.

General Task 4: Deleting, copying, and moving data

DATA A   DATA C   DATA B   DATA D
10       1        100      11
15       2        200      22
20       3        300      33
25       4        400      44
30       5        500      55

Specific Tasks 4:
(a) In the table above, move everything in Column A to Column B.
(b) Delete Row 4.
(c) Delete Column A.
(d) Delete the numbers 300–500 in the DATA B column.
(e) Name the range of data in DATA C column. Call this range DATA C.
(f) Copy the underline under DATA A to the cells under DATA B, C and D.

General Task 5: Editing data

Cana dian
Amrican
70002
Mistake
Replace Me

Specific Tasks 5:
(a) In the table above, delete the space in Cana dian.
(b) Add an "e" to Amrican.
(c) Change 70002 to 80002.
(d) Delete the word Mistake.
(e) Replace the phrase Replace Me with the phrase New Me.
References
Allard, F., & Starkes, J. L. (1991). Motor-skill experts in sports, dances, and other domains. In K. A. Ericsson & J. Smith
(Eds.), Toward a general theory of expertise (pp. 126–152). Cambridge: Cambridge University Press.
Anzai, Y. (1991). Learning and use of representations for physics expertise. In K. A. Ericsson & J. Smith (Eds.), Toward
a general theory of expertise (pp. 64–92). Cambridge: Cambridge University Press.
Bakeman, R. (2000). Behavioral observation and coding. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 138–159). New York: Cambridge University Press.
Brown, A., & Patterson, D. A. (2001). To err is human. In Proceedings of the first workshop on evaluating and architecting system dependability, Goeteborg, Sweden.
Byrne, M. D., & Bovair, S. (1997). A working memory model of a common procedural error. Cognitive Science, 21(1),
31–61.
Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human–computer interaction. Hillsdale, NJ: L. Erlbaum.
Carroll, J. M. (1990). The Nurnberg funnel. Cambridge, MA: MIT Press.
Charness, N. (1991). Expertise in chess: the balance between knowledge and search. In K. A. Ericsson & J. Smith (Eds.),
Toward a general theory of expertise (pp. 39–63). Cambridge: Cambridge University Press.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.
De Keyser, V., & Javaux, D. (1996). Human factors in aeronautics. In F. Bodart & J. Vanderdonckt (Eds.), Proceedings of the Eurographics workshop, Design, Specification and Verification of Interactive Systems '96. Vienna (Austria): Springer-Verlag.
Dewey, M. E. (1983). Coefficients of agreement. British Journal of Psychiatry, 143, 487–489.
Ebrahim, A. (1994). Novice programmer errors: language constructs and plan composition. International Journal of
Man–Machine Studies, 41, 457–480.
Emurian, H. H. (2004). A programmed instruction tutoring system for Java: consideration of learning performance and software self-efficacy. Computers in Human Behavior, 20, 423–459.
Ericsson, A. K., & Smith, J. (1991). Prospects and limits of the empirical study of expertise: an introduction. In K. A.
Ericsson & J. Smith (Eds.), Toward a general theory of expertise (pp. 1–38). Cambridge: Cambridge University
Press.
Ericsson, K. A., & Simon, H. A. (1980). Verbal reports as data. Psychological Review, 87(3), 215–251.
Gray, W. D., John, B. E., & Atwood, M. E. (1993). Project Ernestine: validating a GOMS analysis for predicting and
explaining real-world performance. Human–Computer Interaction, 8(3), 237–309.
Hollnagel, E. (1993). The phenotype of erroneous actions. International Journal of Man–Machine Studies, 39, 1–32.
Hollnagel, E. (2000). Looking for errors of omission and commission or the hunting of the snark revisited. Reliability
Engineering and System Safety, 68, 135–145.
Horns, K. M., & Lopper, D. L. (2002). Medication errors: analysis not blame. Journal of Obstetric, Gynecologic, and
Neonatal Nursing, 31, 355–364.
Hourizi, R., & Johnson, P. (2001). In Michitaka Hirose (Ed.), Proceedings of INTERACT 2001, eighth IFIP TC.13 con-
ference on human–computer interaction, Tokyo, July 9–14. IOS Press.
Inoue, K., & Koizumi, A. (2004). Application of human reliability analysis to nursing errors in hospitals. Risk Analysis,
24(6), 1459–1473.
Isaac, A., Shorrock, S. T., & Kirwan, B. (2002). Human error in European air traffic management: The HERA project.
Reliability Engineering and System Safety, 75, 257–272.
Johnson, C. (1999). Why human error modeling has failed to help systems development. Interacting with Computers, 11,
517–524.
Kay, R. H. (in press). Learning performance and computer software: an exploration of knowledge transfer. Computers in
Human Behavior.
Kim, J. W., Jung, W., & Ha, J. (2004). AGAPE-ET: a methodology for human error analysis of emergency tasks. Risk
Analysis, 24(5), 1261–1277.
Kitajima, M., & Polson, P. G. (1995). A comprehension-based model of correct performance and errors in skilled, dis-
play-based human–computer interaction. International Journal of Computer Studies, 43, 65–99.
Lazonder, A. W., & Van Der Meij, H. (1995). Error-information in tutorial documentation: supporting users’ errors to
facilitate initial skill learning. International Journal of Computer Studies, 42, 185–206.
Lombard, M., Snyder-Duch, J., & Bracken, C. C. (2004). Practical resources for assessing and reporting intercoder reli-
ability in content analysis research projects. Retrieved September, 2004. Available from <http://www.temple.edu/
mmc/reliability>.
Maxion, R. A. (2005). Improving user-interface dependability through mitigation of human error. International Journal
of Human–Computer Studies, 63, 25–50.
Norman, D. A. (1981). Categorization of action slips. Psychological Review, 88, 1–15.
Norman, D. A., & Draper, S. W. (Eds.). (1986). User centered system design: New perspectives on human–computer inter-
action. Hillsdale, NJ: Lawrence Erlbaum Associates.
Oulasvirta, A., & Saariluoma, P. (2004). Long-term working memory and interrupting messages in human–computer interaction. Behaviour & Information Technology, 23(1), 53–64.
Patel, V. L., & Groen, G. J. (1991). The general and specific nature of medical expertise: a critical look. In K. A. Ericsson
& J. Smith (Eds.), Toward a general theory of expertise (pp. 93–125). Cambridge, NY: Cambridge University Press.
Reason, J. (1990). Human error. New York, NY: Cambridge University Press.
Rieman, J., Young, R. M., & Howes, A. (1996). A dual-space model of iteratively deepening exploratory learning. Inter-
national Journal of Computer Studies, 44, 743–775.
Scardamalia, M., & Bereiter, C. (1991). Literate expertise. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of
expertise (pp. 172–194). Cambridge, NY: Cambridge University Press.
Sloboda, J. (1991). Musical expertise. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise (pp. 153–
171). Cambridge, NY: Cambridge University Press.
Smith, S. P., & Harrison, M. D. (2002). Blending descriptive and numeric analysis in human reliability design. In P. Forbrig, B. Urban, J. Vanderdonckt, & Q. Limbourg (Eds.), Lecture Notes in Computer Science: Interactive systems: Design, specification and verification (DSVIS 2002) (pp. 223–237). Springer.
Trepess, D., & Stockman, T. (1999). A classification and analysis of erroneous actions in computer supported co-operative work environment. Interacting with Computers, 11, 611–622.
Vaurio, J. K. (2001). Modelling and quantification of dependent repeatable human errors in system analysis and risk assessment. Reliability Engineering and System Safety, 71, 179–188.
Virvou, M. (1999). Automatic reasoning and help about human errors in using an operating system. Interacting with
Computers, 11, 545–573.
Yin, L. R. (2001). Dynamic learning patterns: temporal characteristics demonstrated by the learner. Journal of Educa-
tional Multimedia and Hypermedia, 10(3), 273–284.
Article
Full-text available
Software packages incorporate operation methods that permit end-users to achieve various purposes. Although software operation learning systems (tutorial systems) show general operations, operation styles (i.e., mouse operations, shortcut key operations, keyboard operations, etc.) vary by end-user. Showing operation methods to fit the preferred operation style of the end-user improves software efficiency. Thus, tutorial systems based on end-users' operation styles would be valuable. Herein we demonstrate a method to automatically assess the operation styles of end-users using operation logs in software package. Our method realize tutorial systems tailored to end-users' operation styles.
Article
There has been a growing trend towards the use of biomass as a primary energy source, which now contributes over 54% of the European pulp and paper industry energy needs [1]. The remaining part comes from natural gas, which to a large extent serves as the major source of energy for numerous recovered fiber paper mills located in regions with limited available forest resources. The cost of producing electricity to drive paper machinery and generate heat for steam is increasing as world demand for fossil fuels increases. Additionally, recovered fiber paper mills are also significant producers of fibrous sludge and reject waste material that can contain high amounts of useful energy. Currently, a majority of these waste fractions is disposed of by landspreading, incineration, or landfill. Paper mills must also pay a gate fee to process their waste streams in this way and the result of this is a further increase in operating costs. This work has developed methods to utilize the waste fractions produced at recovered fiber paper mills for the onsite production of combined heat and power (CHP) using advanced thermal conversion methods ( pyrolysis and gasification) that are well suited to relatively small scales of throughput. The electrical power created would either be used onsite to power the paper making process or alternatively exported to the national grid, and the surplus heat created could also be used onsite or exported to a local customer. The focus of this paper is to give a general overview of the project progress so far and will present the experimental results of the most successful thermal conversion trials carried out by this work to date.
Article
Full-text available
The extent to which memory for information content is reliable, trustworthy, and accurate is crucial in the information age. Being forced to divert attention to interrupt- ing messages is common, however, and can cause memory loss. The memory effects of interrupting messages were investigated in three experiments. In Experiment 1, attending to an interrupting message decreased memory accuracy. Experiment 2, where four interrupting messages were used, replicated this result. In Experiment 3, an interrupting message was shown to be most disturbing when it was semantically very close to the main message. Drawing from a theory of long-term working memory it is argued that interrupting messages can both disrupt the active semantic elaboration of content during encoding and cause semantic interference upon retrieval. Properties of the interrupting message affect the extent and type of errors in remembering. Design implications are discussed.
Article
Full-text available
This article sought to identify learner-demonstrated patterns when undergraduate students were learning to use a computer-based presentation program in a multimedia learning environment without time constraints. Using analysis of patterns in time (APT), a methodology that can code event changes over time, the frequencies and the amounts of time of the five learning patterns were calculated. The data derived from the APT scores indicated two major findings: (a) the amount of time used by the participants ranged from 20 to 87 minutes. The amount of time spent did not predict mastery of the posttest. (b) Following through error corrections, confirming actions, and trying new steps were the patterns that did appear to predict mastery. and hand on mouse and keeping up step-by-step with the video instruction were indicators of learners' task engagement. The findings suggested that it is important to design flexible instruction to facilitate repeated, persistent, and successful practice to achieve mastery in a self-paced. computer-based learning environment, regardless of the amount of time spent.
Chapter
[Reprinted from K. A. Ericcson & J. Smith Eds. (1991) Towards a General Theory of Expertise: Prospects amd Limits. Cambridge University Press, pp 153-171.] -------------------------------- This chapter treats six connected issues of musical expertise. It examines the difficulties associated with characterising expertise in a way that offers a genuine foothold for cognitive psychology, and suggests that expertise may not, in fact, be ‘special’ in any cognitively interesting sense. It goes on to review some experimental studies of music, which suggest that most members of a culture possess tacit musical expertise, expressed in their ability to use high-level structural information in carrying out a variety of perceptual tasks. This expertise seems to be acquired through casual exposure to the musical forms and activities of the culture. The chapter then provides two detailed examples of exceptional musical expertise that apparently developed in the absence of formal instruction, suggesting that normal and ‘exceptional’ expertise may be parts of a single continuum. It finally discusses that musical expertise requires an apprehension of a structure-emotion mapping
Article
Since the early 1990s, considerable effort has been spent to understand what is meant by an “error of commission” (EOC), to complement the traditional notion of an “error of omission” (EOO). This paper argues that the EOO–EOC dyad, as an artefact of the PSA event tree, is insufficient for human reliability analysis (HRA) for several reasons: (1) EOO–EOC fail to distinguish between manifestation and cause; (2) EOO–EOC refer to classes of incorrect actions rather than to specific instances; (3) there is no unique way of classifying an event using EOO–EOC; (4) the set of error modes that cannot reasonably be classified as EOO is too diverse to fit into any single category of its own. Since the use of EOO–EOC leads to serious problems for HRA, an alternative is required. This can be found in the concept of error modes, which has a long history in risk analysis. A specific system for error mode prediction was tested in a simulator experiment. The analysis of the results showed that error modes could be qualitatively predicted with sufficient accuracy (68% correct) to propose this method as a way to determine how operator actions can fail in PSA-cum-HRA. Although this still leaves the thorny issue of quantification, a consistent prediction of error modes provides a better starting point for determining probabilities than the EOO–EOC dyad. It also opens a possibility for quantification methods where the influence of the common performance conditions is prior to and more important than individual failure rates.