CHAPTER 43: COGNITIVE TASK ANALYSIS
Richard E. Clark
University of Southern California, Los Angeles, California, USA
clark@usc.edu
David F. Feldon
University of South Carolina, Columbia, South Carolina, USA
Feldon@gwm.sc.edu
Jeroen J. G. van Merriënboer
Open University of the Netherlands, Heerlen, Netherlands
Jeroen.vanMerrienboer@ou.nl
Kenneth A. Yates
University of Southern California, Los Angeles, California, USA
kenneth.yates@usc.edu
Sean Early
University of Southern California, Los Angeles, California, USA
searly@usc.edu
Citation
Clark, R. E., Feldon, D. F., van Merriënboer, J. J. G., Yates, K. A., & Early, S. (in press for August 2007). Cognitive task analysis. In J. M. Spector, M. D. Merrill, J. J. G. van Merriënboer, & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
ABSTRACT
This chapter presents an overview of the current state of Cognitive Task Analysis (CTA)
in research and practice. CTA uses a variety of interview and observation strategies to
capture a description of the explicit and implicit knowledge that experts use to perform
complex tasks. The captured knowledge is most often transferred to training or the
development of expert systems. The first section presents descriptions of a variety of
CTA techniques, their common characteristics, and the typical strategies used to elicit
knowledge from experts and other sources. The second section describes research on the
impact of CTA and synthesizes a number of studies and reviews pertinent to issues
underlying knowledge elicitation. In the third section we discuss the integration of CTA
with training design. In the fourth section, we present a number of recommendations for
future research and conclude with general comments.
Keywords
Automated Knowledge: Knowledge about “how” to do something; with repetition it comes to operate outside of conscious awareness and executes much faster than conscious processes.
Cognitive Task Analysis: Interview and observation protocols for extracting implicit and
explicit knowledge from experts for use in instruction and expert systems.
Complex tasks: Tasks whose performance requires the integrated use of both controlled and automated knowledge and often extends over many hours or days.
Declarative Knowledge: Knowledge about “what” or “why”; hierarchically structured and formatted as propositional, episodic, or visuospatial information that is accessible in long-term memory and consciously observable in working memory.
Subject Matter Expert (SME): A person who has extensive experience that permits them to succeed rapidly and consistently at a class of tasks.
43.1 INTRODUCTION
“Cognitive Task Analysis is the extension of traditional task analysis techniques
to yield information about the knowledge, thought processes and goal structures
that underlie observable task performance. [It captures information about both…]
... overt observable behavior and the covert cognitive functions behind it [to]
form an integrated whole.” (Chipman, Schraagen, & Shalin, 2000, p. 3)
Cognitive task analysis (CTA) uses a variety of interview and observation strategies to
capture a description of the knowledge that experts use to perform complex tasks.
Complex tasks are defined as those where performance requires the integrated use of both
controlled (conscious, conceptual) and automated (unconscious, procedural or strategic)
knowledge to perform tasks that often extend over many hours or days (see van
Merriënboer, Clark, & de Croock, 2002). CTA is often only one of the strategies used to
describe the knowledge required for performance. It is a valuable approach when
advanced experts are available who reliably achieve a desired performance standard on a
target task and the goal is to capture the “cognitive” knowledge used by them (Clark &
Estes, 1999). Analysts use CTA to capture accurate and complete descriptions of
cognitive processes and decisions. The outcome is most often a description of the
performance objectives, equipment, conceptual knowledge, procedural knowledge and
performance standards used by experts as they perform a task. The descriptions are
formatted so that they can be used as records of task performance and/or to inform
novices in a way that helps them achieve the performance goal(s) in any context. CTA is
most often performed before (or as an integral part of) the design of instruction, work, job
aids and/or tests. The descriptions are then used to develop expert systems, tests to certify
job or task competence, and training for acquiring new and complex knowledge for
attainment of performance goals (Chipman, Schraagen, & Shalin, 2000; Jonassen,
Tessmer, & Hannum, 1999).
43.2 TYPES OF COGNITIVE TASK ANALYSIS CURRENTLY IN USE
Researchers have identified over 100 types of CTA methods currently in use, which can
make it difficult for the novice practitioner to choose the appropriate method (Cooke,
1994). The number and variety of CTA methods is due primarily to the diverse paths that
the development of CTA has taken. It has origins in behavioral task analysis, early work
in specifying computer system interfaces, and in military applications—each with its own
demands, uses, and research base. Over the past twenty years, CTA has been increasingly
informed by advances in cognitive science and has become an important component for
the design of systems and training in many domains. The growing body of literature
describing CTA methods, applications, and results mirrors the diverse application and
development of CTA methods. There are, however, reviews and classifications to guide
those interested in exploring and applying CTA, including a comprehensive “review of
reviews” provided by Schraagen, Chipman, and Shute (2000).
43.2.1 CTA Families
Cooke (1994) conducted one of the more extensive reviews of CTA. She identified three
broad families of techniques: (a) observation and interviews, (b) process tracing, and (c)
conceptual techniques. Observations and interviews involve watching experts and talking
with them. Process tracing techniques typically capture an expert’s performance of a
specific task via either a think-aloud protocol or subsequent recall. In contrast, conceptual
techniques produce structured, interrelated representations of relevant concepts within a
domain.
Cooke’s (1994) three families differ in terms of their specificity and formality. Generally,
observations and interviews are informal and allow knowledge elicitors much flexibility
during elicitation. Process tracing methods have more structure and specificity, although
some analysis decisions are left to the elicitor. Conceptual techniques are well-specified
and formal with few judgments on the part of the elicitor. As a further comparison, more
formal methods require greater training on the mechanisms and produce more
quantitative data compared to the informal methods, which focus on interview skills and
generate qualitative output. Because different techniques may capture different aspects
of the domain knowledge, Cooke recommends the use of multiple methods, a
recommendation often echoed throughout the CTA literature (see also Ericsson & Simon,
1993; Russo, Johnson, & Stephens, 1989; Vosniadou, 1994).
Wei and Salvendy’s (2004) review of CTA methods introduces a fourth family—formal
models—which use simulations to model tasks in the cognitive domain. Their review
further differs from others by providing practical guidelines on how to use the
classifications of CTA methods to select appropriate techniques to accomplish various
objectives. For example, one guideline suggests that when tasks or jobs do not have a
defined domain, observations and interviews are especially useful in the initial phase of
CTA to generate a more explicit context and identify boundary conditions.
43.3 VARIETIES OF CTA METHODS AND THEIR APPLICATIONS
These reviews provide a starting point to explore the numerous varieties of CTA
methods and their applications. We examine the overall CTA process and describe in
depth some methods that have particular application to instructional design. Although
there are many varieties of CTA methods, most knowledge analysts follow a five-stage
process (Chipman et al., 2000; Clark, 2006; Coffey & Hoffman, 2003; Cooke, 1994;
Crandall, Klein & Hoffman, 2006; Hoffman, Shadbolt, Burton, & Klein, 1995; Jonassen
et al., 1999). The five common steps in most of the dominant CTA methods are
performed in the following sequence:
1) Collect preliminary knowledge
2) Identify knowledge representations
3) Apply focused knowledge elicitation methods
4) Analyze and verify data acquired
5) Format results for the intended application
The following sections contain descriptions of common CTA methods and brief
explanations of each type as it is used during each stage of the general process.
43.3.1 Collect Preliminary Knowledge
In this initial stage, the analyst identifies the sequence of tasks that will become the focus
of the CTA. Analysts attempt to become generally familiar with the knowledge domain
and identify experts to participate in the knowledge elicitation process. Although
knowledge analysts and instructional developers do not need to become subject matter
experts themselves, they should be generally familiar with the content, system, or
procedures being analyzed.
If possible, two or more subject matter experts (SMEs) should be selected to participate
in the process (Chao & Salvendy, 1994; Lee & Reigeluth, 2003). Although specific
criteria for identifying experts may change depending on circumstances [1], all SMEs must
have a solid record of successful performance at the task(s) being analyzed. Experts are
most often interviewed separately to avoid premature consensus regarding the knowledge
and skills necessary for effective performance.
Techniques typically used during this phase include document analysis, observation, and
interviews (structured or unstructured). The analyst uses the results of this stage to
identify the knowledge types and structures involved in performing the tasks.
43.3.1.1 Document analysis
Analysts often begin their reviews by collecting any available written resources
describing the tasks and/or related subject matter. This can include a wide variety of
documents, including promotional literature, brochures, manuals, employee handbooks,
reports, glossaries, course texts, and existing training materials. These documents are
analyzed for orientation on the tasks, preparation of the in-depth analysis, and
confirmation of preliminary ideas (Jonassen et al., 1999). This orientation prepares
analysts for subsequent task analysis activities. For example, the information elicited
during structured interviews may be more robust when analysts are already familiar with
experts’ terminology. Documentation analysis also allows comparison of existing
materials on a procedure with accounts of expert practitioners to identify any immediate
discrepancies between doctrine and typical implementation.
43.3.1.2 Observations
Observation is one of the most frequently used and most powerful tools of knowledge
elicitation. It can be used to identify the tasks involved, possible limitations and
constraints for subsequent analysis, and available information necessary to perform the
task. It also allows analysts to compare an expert’s description of the task with actual
events. In many CTA systems, analysts will unobtrusively observe experts while they are
performing the tasks under examination to expand their understanding of the domain.
Analysts observe and record the natural conditions and actions during events that occur in
the setting (Cooke, 1994). Although definitive identification of an expert’s mental
operations cannot be accomplished through observation, analysts may note the occasions on which it seems that experts must make decisions, assess situations, or engage in analysis.

[1] See extensive discussions of appropriate definitions of expertise and criteria for identifying experts in Cooke (1992), Dawes (1994), Ericsson and Smith (1991), Glaser and Chi (1988), Mullin (1989), and Sternberg and Horvath (1998).
43.3.1.3 Unstructured interviews
“The most direct way to find out what someone knows is to ask them” (Cooke, 1999, p.
487). In addition to observation, unstructured interviews are also common early in the
CTA process to provide an overview of the domain and to raise issues and questions for
exploration in subsequent structured interviews. In some unstructured interviews, analysts do not dictate the content or sequence of the conversation. In other instances, however, they may
ask an expert to focus on a task, event, or case with instructions to “Tell me everything
you know about…”
43.3.2 Identify Knowledge Representations
Using the information collected during the preliminary stage, analysts examine each task
to identify sub-tasks and types of knowledge required to perform it. Most CTA
approaches are organized around knowledge representations appropriate for the task,
such as concept maps, flow charts, semantic nets, and so forth. These representations
provide direction and order to later stages in the CTA process because knowledge
elicitation methods map directly to knowledge types. Some are best used to elicit
procedural knowledge, while others are more successful for capturing declarative
knowledge (Chipman et al., 2000). A learning hierarchy is one example of a method to
organize the types of knowledge required to perform a task.
43.3.2.1 Learning hierarchy analysis
A learning hierarchy analysis represents skills ordered from more complex problem-solving skills at the top down to simpler forms of learning (Gagné, 1962, 1968; Jonassen et al., 1999). For example, problem solving is followed by rule learning, which is followed by concepts.
which is followed by concepts. Thus, the basic idea is that people can only learn rules if
they have already mastered prerequisite concepts necessary to learn the rules. Analyzing
a learning hierarchy begins by identifying the most complex (highest) learning outcome
and then determining the underlying skills that must be mastered to achieve the target
outcome. A hierarchy of skills is represented as a chart of tasks for each intellectual skill
that is acquired to progress to increasingly complex skills.
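To make the hierarchy concrete, the following minimal sketch (ours, not drawn from Gagné or the chapter's sources; the skill names and example domain are hypothetical) represents a learning hierarchy as a tree and derives a bottom-up teaching sequence in which prerequisite concepts precede the rules and problem-solving skills that depend on them:

```python
# A minimal sketch of a learning hierarchy as a tree. All names are
# hypothetical illustrations, not taken from a published analysis.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Skill:
    name: str
    level: str  # "problem solving", "rule", or "concept"
    prerequisites: List["Skill"] = field(default_factory=list)

def teaching_sequence(skill: Skill) -> List[str]:
    """List skills bottom-up so prerequisites precede the skills that need them."""
    ordered: List[str] = []
    def visit(s: Skill) -> None:
        for prerequisite in s.prerequisites:
            visit(prerequisite)
        if s.name not in ordered:
            ordered.append(s.name)
    visit(skill)
    return ordered

# Hypothetical hierarchy: problem solving builds on a rule, which builds on concepts.
variable = Skill("concept of a variable", "concept")
equality = Skill("concept of equality", "concept")
isolate = Skill("rule: isolate the unknown", "rule", [variable, equality])
solve = Skill("solve linear equations", "problem solving", [isolate])

print(teaching_sequence(solve))
# ['concept of a variable', 'concept of equality',
#  'rule: isolate the unknown', 'solve linear equations']
```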
The learning hierarchy constructed at this stage of the CTA process provides the guide to
structure the next stage of knowledge elicitation by identifying the information that must
be captured from the SMEs. Thus, it reflects the reiterative nature of the CTA process, in
which the details of the knowledge, skills, and cognitive strategies necessary for complex
learning are revealed, refined, and confirmed.
43.3.3 Apply Focused Knowledge Elicitation Methods
During knowledge elicitation, the analyst applies various techniques to collect the
knowledge identified in the prior stage. Past research indicates that different elicitation
methods yield different types of knowledge and that knowledge is rarely articulated
without being the focus of elicitation (Crandall, Klein & Hoffman, 2006; Hoffman,
Crandall, & Shadbolt, 1998). Analysts attempt to choose methods appropriate to the
targeted knowledge type as determined by the knowledge representations identified for
each task. Consequently, most elicitation efforts entail multiple techniques.
Among the many types of knowledge elicitation methods, variations of structured and
semi-structured interviews are most commonly involved in CTA because they are
relatively easy to use and require less training than more formal methods such as protocol
analysis (Ericsson & Simon, 1993) or the use of repertory grids (e.g., Bradshaw, Ford,
Adams-Webber, & Agnew, 1993). It is the variation in these specific techniques that
defines the major differences between specific CTA models. Although the methods may
differ in focus, they share a common purpose: capturing the conditions and cognitive processes necessary for complex problem solving. Following are descriptions of two
CTA models that have been documented to effectively elicit experts’ knowledge in a
manner that is particularly effective for instruction (Crandall & Getchell-Reiter, 1993;
Velmahos, Toutouzas, Sillin, Chan, Clark, Theodorou, & Maupin, 2004).
43.3.3.1 Concepts, Processes, and Principles (CPP; Clark, 2004, 2006)
CPP involves a multi-stage interview technique that captures the automated and
unconscious knowledge acquired by experts through experience and practice by using
multiple SMEs to describe the same procedure, followed by cycles of expert self- and
peer-review. The initial, semi-structured interview begins with a description of the CTA
process by the analyst. The SME is then asked to list or outline the performance sequence
of all key sub-tasks necessary to perform the larger task being examined. SMEs are also
asked to describe (or help the interviewer locate) at least five authentic problems that an
expert should be able to solve if they have mastered the task. Problems should range from
routine to highly complex whenever possible. The resulting sequence of tasks becomes
the outline for the training to be designed or the job description produced after the CTA is
completed. Starting with the first subtask in the sequence, the analyst asks a series of
questions to collect:
(a) the sequence of actions (or steps) necessary to complete the sub-task;
(b) the decisions that have to be made to complete the sub-task, when each must be made, the alternatives to consider, and the criteria to decide between the alternatives;
(c) all concepts, processes and principles that are the conceptual basis for the
experts’ approach to the sub-task;
(d) the conditions or initiating events that must occur to start the correct
procedure;
(e) the equipment and materials required;
(f) the sensory experiences required (e.g., the analyst asks if the expert must
smell, taste or touch something in addition to seeing or hearing cues in order
to perform each sub-task), and
(g) the performance standards required, such as speed, accuracy or quality
indicators.
The interview is repeated for each SME, with each interview recorded and transcribed
verbatim for later analysis.
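Purely as an illustration (this record layout is our sketch, not Clark's published protocol format), items (a) through (g) can be viewed as fields of a structured record that the analyst completes for each sub-task:

```python
# A sketch of one sub-task record from a CPP interview; the field names are
# our labels for items (a)-(g), not Clark's published format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Decision:
    when: str                # the point at which the decision must be made
    alternatives: List[str]  # options the expert considers
    criteria: List[str]      # how the expert chooses among the alternatives

@dataclass
class SubTaskRecord:
    name: str
    action_steps: List[str] = field(default_factory=list)                   # (a)
    decisions: List[Decision] = field(default_factory=list)                 # (b)
    concepts_processes_principles: List[str] = field(default_factory=list)  # (c)
    initiating_conditions: List[str] = field(default_factory=list)          # (d)
    equipment_and_materials: List[str] = field(default_factory=list)        # (e)
    sensory_cues: List[str] = field(default_factory=list)                   # (f)
    performance_standards: List[str] = field(default_factory=list)          # (g)
```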
43.3.3.2 Critical Decision Method (CDM; Klein, Calderwood, & MacGregor, 1989)
CDM is a semi-structured interview method that uses a set of cognitive probes to
determine the bases for situation assessment and decision making during critical
(nonroutine) incidents (for a full procedural description, see Hoffman et al., 1998). CDM
is based on the concept of expert decision making as the recognition of cue patterns in the
task environment without conscious evaluation of alternatives. Thus, situational
awareness plays a dominant role in experts’ selection of courses of action. The speed
with which such decisions are made suggests that experts unconsciously assess feasible
goals, important cues, situational dynamics, courses of action, and expectancies. To elicit
this knowledge, CDM uses a retrospective, case-based approach with elicitation
occurring in multiple “sweeps” to gather information in progressively deepening levels of
detail.
The technique begins by selecting a critical incident from the expert’s task experience
that was unusual in some way. The experts involved provide unstructured accounts of the
incident, from which a timeline is created. Next, the analyst and the experts identify
specific points in the chronology at which decisions were made. These decision points are
defined as instances when other reasonable alternative courses of action were possible.
The decision points are then probed further using questions that elicit: (a) the perceptual
cues used in making the decision, (b) prior knowledge that was applied, (c) the goals
considered, (d) decision alternatives, and (e) other situation assessment factors. The
reports are recorded and transcribed verbatim.
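Again as an illustration only (CDM prescribes interview probes, not a data format), the information gathered across the sweeps can be organized as an incident record with one entry per decision point, mirroring probes (a) through (e):

```python
# A sketch of a CDM incident record; the structure is our illustration of the
# interview's products, with fields mirroring probes (a)-(e).
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecisionPoint:
    time_marker: str               # position on the incident timeline
    perceptual_cues: List[str]     # (a) cues used in making the decision
    prior_knowledge: List[str]     # (b) knowledge that was applied
    goals_considered: List[str]    # (c)
    alternatives: List[str]        # (d) other reasonable courses of action
    assessment_factors: List[str]  # (e) other situation assessment factors

@dataclass
class CriticalIncident:
    summary: str  # the expert's unstructured account of the incident
    timeline: List[str] = field(default_factory=list)
    decision_points: List[DecisionPoint] = field(default_factory=list)
```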
43.3.4 Analyze and Verify Data Acquired
As noted above, CTA methods vary in structure, formality, and results. Because the
knowledge elicitation techniques described here are less formal, they require that the
analyst code and format the results for verification, validation, and use in their intended application. When conducting interviews with experts, practitioners
recommend recording the interviews and transcribing them for review at a later time,
rather than trying to take detailed notes during the interview, which may distract from the
process. Transcripts may be coded to summarize, categorize, and/or synthesize the
collected data.
Following coding, the formatted output is presented to the participating SMEs for
verification, refinement, and revision to ensure that the representations of tasks and their
underlying cognitive components are complete and accurate. Once the information in the
formatted output has been verified or revised by the expert, the analyst should then
compare it with the output of other experts to validate that the results accurately reflect
the desired knowledge representation.
The analysis stage in CPP (Clark, 2004, 2006) begins with the analyst preparing a
summary of the interview in a standard format that includes the task, a list of sub-tasks,
and the conditions, standards, equipment and materials required. For each sub-task, the
analyst then writes a procedure that includes each action step and decision step required
to perform the task and gives the procedure to the SME to review. To verify the
individual CTAs, the analyst gives each SME’s product to one of the other SMEs and
asks them to edit the document for accuracy and efficiency (that is, to determine the
fewest steps necessary for a novice with appropriate prior knowledge to perform the
task). In the final stage, the analyst edits the individual CTAs into one formatted
description of how to accomplish all tasks. After final approval by the SMEs, this final,
formatted document provides the information for the instructional design process. Clark
(2006) provides the format of the protocol.
The Critical Decision Method prescribes no single method for coding the verbatim transcripts of the recorded interviews, as each specific research question
defines how the transcripts are coded (Klein et al., 1989). The coding scheme, however,
should be domain-relevant and have cognitive functionality; in other words, it should tag
information that represents perceptual cues, decision points, and situational assessments.
A sample of a coded protocol can be found in Hoffman et al. (1998).
43.3.5 Format Results for the Intended Application
The results of some highly structured CTA methods (e.g., cognitive modeling) are
readily applied to expert systems or computer-assisted tutoring applications. For less
formal CTA methods, such as those described here, the results must be translated into
models that reveal the underlying skills, mental models, and problem solving strategies
used by experts when performing highly complex tasks. Further, these models inform the
instructional design of curriculum, training and other performance applications. The
Concepts, Processes, and Principles (Clark, 2004, 2006) method generates a description of the conceptual knowledge and conditions, along with a detailed list of the actions and decisions necessary to perform a task. These products can be incorporated into an instructional
design system. Similarly, products resulting from the application of the Critical Decision
Method have been used for a variety of instructional applications, including building and
evaluating expert systems and identifying training requirements. CDM can provide case
studies and information regarding which aspects of a task depend on explicit knowledge
and which depend on tacit knowledge (Klein et al., 1989).
43.4 CURRENT RESEARCH EVIDENCE FOR THE IMPACT OF
COGNITIVE TASK ANALYSIS
Modern CTA evolved from a behavioral approach to analyzing performance. As the
understanding of occupational demands evolved from a focus on physical performance to
a focus on cognitive performance, evidence suggested that key aspects of performance
entailed knowledge that was not directly observable (Ryder & Redding, 1993; Schneider,
1985). Applications of behavioral task analysis to training resulted in incomplete
descriptions that led to decision errors during job performance (Schraagen et al., 2000).
Early versions of CTA were designed to capture the decisions and analysis that could not
be directly observed as well as the deeper conceptual knowledge that served as the basis
for analytical strategies and decisions (Clark & Estes, 1999). Thus, training shifted from
the reinforcement of associations between perceptual stimuli and behaviors to the
development of declarative and procedural knowledge.
Research evidence indicates that accurate descriptions of experts’ cognitive processes can be adapted into training materials that are substantially more effective than
those developed through other means (e.g., Merrill, 2002; Schaafstal, Schraagen, & van
Berlo, 2000; Velmahos et al., 2004). When content is inaccurate or incomplete, any
instruction based on that knowledge will be flawed (Clark & Estes, 1996; Jonassen et al.,
1999). Such flaws interfere with performance and with the efficacy of future instruction
(Lohman, 1986; Schwartz & Bransford, 1998). Resulting misconceptions resist
correction, despite attempts at remediation (Bargh & Ferguson, 2000; Chinn & Brewer,
1993; Thorley & Stofflet, 1996).
43.4.1 Declarative Knowledge and CTA
Declarative knowledge is hierarchically structured propositional, episodic, visuospatial
information that is accessible in long-term memory and consciously observable in
working memory (Anderson, 1983; Anderson & Lebiere, 1998; Gagné, Briggs, & Wager,
1992). This type of knowledge supports performance through the conceptual
understanding of processes and principles related to a task and the role that the task plays
within its broader context (Gagné, 1982).
SMEs possess extensive declarative knowledge of their domains in the form of principled
frameworks of abstract, schema-based representations. These frameworks allow experts
to analyze complex problems efficiently (Glaser & Chi, 1988; Zeitz, 1997). These
elaborate schemas enable experts to retain and recall information, events, and problem
states with a high degree of accuracy (Cooke, Atlas, Lane, & Berger, 1993; Dochy,
Segers, & Buehl, 1999; Ericsson & Kintsch, 1995). Further, broad, principled
understandings of their domains facilitate skill transfer to solve related novel and
complex problems (Gagné & Medsker, 1996; Hall, Gott, & Pokorny, 1995; van
Merriënboer, 1997).
When communicated to novices, the organization of experts’ knowledge also impacts
training outcomes. In an examination of experts’ instructions to novices, Hinds,
Patterson, and Pfeffer (2001) found that trainees who received explanations from experts
performed better on transfer tasks than trainees who received their explanations from
non-experts. The experts provided explanations that were significantly more abstract and
theoretically oriented than those of the non-experts, so learners in the expert-to-novice
instructional condition were able to solve transfer problems more quickly and effectively
than their counterparts in the non-expert-to-novice instructional condition.
Conceptual knowledge alone, however, is insufficient for generating effective
performance. The non-expert instructors in the study provided more concrete, procedural
explanations, which facilitated higher performance by trainees when they attempted to
perform the original target task. The abstractions provided by the experts lacked key
details and process information necessary for optimal performance. This finding is
consistent with many others in the training literature suggesting that the most effective
learning occurs when all necessary information is available to the learner in the form of
instruction and/or prior knowledge (for a review, see Kirschner, Sweller, & Clark, 2006).
Findings from a variety of studies indicate that without CTA to facilitate knowledge
elicitation, experts in many fields unintentionally misrepresent the conceptual knowledge
on which they base their performance. In a study by Cooke and Breedin (1994), for
example, expert physicists attempted to predict the trajectories of various objects and
provided written explanations of the methods by which they reached their conclusions.
However, when the researchers attempted to replicate the physicists’ predictions on the
basis of the explanations provided, they were unable to attain the same results. The
calculated trajectories were significantly different from those provided by the experts.
In a similar study, expert neuropsychologists evaluated hypothetical patient profiles to
determine their theoretical levels of intelligence (Kareken & Williams, 1994).
Participants first articulated the relationships between various predictor variables (e.g.,
education, occupation, gender, and age) and intelligence. Then, they estimated IQ scores
on the basis of values for the predictor variables they identified. However, their estimates
differed significantly from the correlations they provided in their explanations of the
relationships among predictor variables. Many were completely uncorrelated. Clearly, the
experts’ performance relied on processes that were very different from their declarative
knowledge of their practice.
43.4.2 Procedural Knowledge and CTA
Procedural knowledge is required for all skilled performance. Skill acquisition often
begins with learning declarative knowledge about discrete steps in a procedure. Yet the
development of automaticity occurs as we practice those procedures. The automatization
process involves learning to recognize important environmental cues that signal when the
skill is to be applied and the association of the cues to the discrete covert (cognitive) and
overt (action) steps required to attain a goal or sub-goal (Neves & Anderson, 1981).
Through practice, these associations and steps increase in reliability and speed of
performance. Over time, the procedures require diminishing levels of mental effort or
self-monitoring to perform until they utilize very few, if any, cognitive resources
(Wheatley & Wegner, 2001). This consistent, repeated mapping of conditional cues and
steps manifests as an integrated IF-THEN decision rule between the cue (IF) and the
procedure (THEN) necessary to attain a goal from a particular problem state (Schneider
& Shiffrin, 1977). This representation is a production within the ACT-R cognitive model
of learning proposed by Anderson (1995; Anderson & Lebiere, 1998).
During complex tasks, multiple IF-THEN productions are strung together to generate
more sophisticated hierarchies of performances. Each individual production attains a sub-
goal that is a component of the overall goal. To move from one production to the next in
a sequence, the new sub-goal must be identified and an appropriate production selected.
For novices, the identification and selection process for nearly every sub-goal is a
conscious, deliberate decision. However, experts automate this process, so they cannot
consciously identify many of these decision points (Blessing & Anderson, 1996).
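A toy sketch may help make production chaining concrete. The following is far simpler than ACT-R, and the cues and actions are invented; it shows only the core cycle in which a production fires when its IF cues match the current state, executes its step, and posts the next sub-goal:

```python
# A toy production system, far simpler than ACT-R; cues and actions invented.
# Each production fires when its IF cues are present in the current state,
# records its action, and replaces the current goal with the next sub-goal.
from typing import List, Set, Tuple

Production = Tuple[Set[str], str, str]  # (IF cues, THEN action, next sub-goal)

productions: List[Production] = [
    ({"goal:start", "cue:water_boiling"}, "add pasta", "goal:cook"),
    ({"goal:cook", "cue:pasta_soft"}, "drain pasta", "goal:serve"),
    ({"goal:serve"}, "plate pasta", "goal:done"),
]

def run(state: Set[str]) -> List[str]:
    """Fire matching productions until the goal is done or no rule matches."""
    trace: List[str] = []
    while "goal:done" not in state:
        for condition, action, next_goal in productions:
            if condition <= state:  # all IF cues present in the current state
                trace.append(action)
                state = {c for c in state if not c.startswith("goal:")}
                state.add(next_goal)
                break
        else:
            break  # impasse: no production matched the current state
    return trace

print(run({"goal:start", "cue:water_boiling", "cue:pasta_soft"}))
# ['add pasta', 'drain pasta', 'plate pasta']
```

For an expert, this selection loop runs without conscious deliberation, which is why the decision points inside it are so difficult to self-report.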
Automaticity has two primary properties that limit the effectiveness of unassisted
explanations by experts [2]. First, automated knowledge operates outside of conscious
awareness, and executes much faster than conscious processes (Wheatley & Wegner,
2001). As such, it is not available for introspection or accurate self-monitoring. Second,
automated processes are typically uninterruptible, so they cannot be effectively changed
once they are acquired (Hermans, Crombez, & Eelen, 2000). Consequently, experts’
unaided self-reports of their problem-solving processes are typically inaccurate or
incomplete [3] (e.g., Chao & Salvendy, 1994; Feldon, 2004).

[2] The literature on expertise has not reached a consensus on the role of automaticity. However, much empirical evidence suggests it plays a defining role. See Feldon (in press) for an extensive review. Until the article reaches publication, it can be found on the SpringerLink website under digital object identifier (DOI) 10.1007/s10648-006-9009-0. A pre-publication draft can also be located at http://www.cogtech.usc.edu/recent_publications.php.

[3] When experts attempt to solve novel problems, the elements of their decision-making processes that are newly generated are less likely to be reported inaccurately. However, pre-existing processes that were applied to those problems will continue to be subject to self-report errors (Betsch, Fiedler, & Brinkmann, 1998).
43.4.3 Cues
Each element of an IF-THEN production has great importance for effective training. For
learners to develop effective procedures, they must attend to relevant cues to determine
correctly which sub-goals and procedures are appropriate. Thus, incorporating experts’
knowledge of these cues is important for optimal instruction (Fisk & Eggemeier, 1988;
Klein & Calderwood, 1991).
For example, Crandall and Getchell-Reiter (1993) investigated the procedural knowledge
of expert nurses specializing in neonatal intensive care for newborn or premature babies.
The participants were 17 registered nurses who averaged 13 years of overall experience
and 8.1 years of specialization. Without a formal knowledge elicitation technique, the nurses attempted to recall highly detailed accounts of critical incidents or measures they had implemented that they believed had positively influenced a baby’s medical condition. After the nurses completed a free-recall phase, the researchers used CTA to identify
additional relevant information that the nurses did not articulate. Analysis of the
transcripts revealed that the CTA probes elicited significantly more indicators of medical
distress in the babies than were otherwise reported. Before CTA, the nurses’ explanations
of the cues they used were either omitted or articulated vaguely as “highly generalized
constellations of cues” (p. 50).
Comparison of the elicited cues to those described in the available medical and nursing
training literature of the time revealed that more than one-third of the cues (25 out of 70)
used by the expert nurses in the study to correctly diagnose infants were absent from that
literature. These cues spanned seven previously unrecognized categories that were
subsequently incorporated into standard training for novice nurses entering neonatal
intensive care (Crandall & Gamblian, 1991).
43.4.4 Decision points
In addition to knowing which cues are important for decision making, it is also necessary
to correctly identify the points at which those decisions must be made. Much of the
research on decision-making suggests that many decisions are made prior to awareness of
the need to make a decision (Bargh, Gollwitzer, Lee-Chai, Barndollar, & Trötschel, 2001;
Wegner, 2002). Abreu (1999) found that practicing psychotherapists evaluated fictitious
case studies more negatively when they were primed with material about African-
American stereotypes than when they rated the same information without priming.
Similarly, when Bargh et al. (2001) subconsciously primed participants with goals of
either cooperation or high performance, the actions of the participants in a variety of
tasks typically conformed to the subliminal goal even though they were completely unaware of
either the content of the prime or the fact that they held the goal itself.
In professions, automaticity presents significant problems for training if experts are relied
upon to explain the points at which decisions must be made. In medicine, for example,
studies of diagnostic reliability found that expert physicians’ diagnoses of identical symptoms presented at different times correlated only between .40 and .50 (Einhorn, 1974;
Hoffman, Slovic, & Rorer, 1968). Despite self-reports suggesting that the participants
considered extended lists of symptoms, analysis of the symptoms in the cases presented
indicated that only one to four symptoms actually influenced diagnosis decisions
(Einhorn, 1974).
Some experts freely acknowledge that they are unable to accurately recall aspects of their
problem-solving strategies. Johnson (1983) observed significant discrepancies between
an expert physician’s actual diagnostic technique and the technique that he articulated to
medical students. Later, he discussed with the physician why his practice and his
explanation differed. The physician’s explanation for the contradiction was, “Oh, I know
that, but you see, I don’t know how I do diagnosis, and yet I need things to teach
students. I create what I think of as plausible means for doing tasks and hope students
will be able to convert them into effective ones” (Johnson, 1983, p. 81).
43.4.5 Cognitive skills
Correctly identifying and explaining the sequences of cognitive and psychomotor actions
that are triggered by cues at decision points are likewise crucial to effective instruction.
Although psychomotor aspects of a task are relatively simple for learners to observe,
cognitive operations require articulation for a learner to successfully replicate an expert’s
performance. However, automaticity often impairs this process. For example, a team of
engineers and technicians with expertise in the assembly of sophisticated research
equipment attempted unsuccessfully to generate a complete set of assembly instructions,
despite extensive and repeated efforts to include every relevant fact, process, and
heuristic (Collins, Green, & Draper, 1985). When scientists who purchased the
equipment attempted to assemble it according to those instructions, the equipment did not function.
After many discussions with the engineers, the scientists eventually discovered that the
expert team had accidentally omitted a necessary step from the instructions. The step
turned out to be a universally implemented practice among the engineers and technicians
that they had failed to articulate.
Chao and Salvendy (1994) systematically documented the rates at which experts omit
cognitive skills from self-reports. Six expert programmers were asked to complete a
series of challenging troubleshooting tasks, and all of their actions were recorded. The
programmers were then asked to explain their procedures using a variety of different
knowledge elicitation methods. No single expert was able to report more than 41% of
their diagnostic actions, 53% of their debugging actions, or 29% of their interpretations,
regardless of the knowledge elicitation method used. However, when the researchers
began compiling the elicited explanations from different experts, they found that the
percentage of actions explained increased. When explanations from all six experts were
aggregated, the percentages of verbalization for each category of actions increased to
87%, 88%, and 62%, respectively. The improvement in information elicited reflects the
experts’ individual differences in which sub-goal productions had been automated to
greater and lesser extents. Thus, one promising practice for instruction based on expert
knowledge is to employ CTA methods with multiple experts prior to developing
instruction.
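The arithmetic behind that recommendation is easy to illustrate. The numbers below are invented rather than taken from Chao and Salvendy (1994), but they show how the union of several partial self-reports can approach full coverage of a task:

```python
# Invented numbers (not Chao & Salvendy's data) illustrating why aggregating
# reports from several experts raises coverage: each expert omits different
# automated steps, so the union recovers more of the true task.
required_actions = set(range(20))  # the 20 "true" steps of a hypothetical task

expert_reports = [
    {0, 1, 2, 5, 7, 9, 12},        # each expert verbalizes a different
    {1, 3, 4, 7, 10, 13, 15, 18},  # partial subset of the steps
    {0, 2, 6, 8, 11, 14, 16, 19},
]

for i, report in enumerate(expert_reports, start=1):
    print(f"expert {i}: {len(report) / len(required_actions):.0%} of steps")

aggregate = set().union(*expert_reports)
print(f"aggregated: {len(aggregate) / len(required_actions):.0%} of steps")
# expert 1: 35%, expert 2: 40%, expert 3: 40%, aggregated: 95%
```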
43.4.6 Instructional Evidence
Several studies provide direct evidence for the efficacy of CTA-based instruction. In a
study of medical school surgical instruction, an expert surgeon taught a procedure
(central venous catheter placement and insertion) to first-year medical interns in a
lecture/demonstration/practice sequence (Maupin, 2003; Velmahos et al., 2004). The
treatment group’s lecture was generated through a CTA of two experts in the procedure.
The control group’s lecture consisted of the expert instructor’s explanation as a free
recall, which is the traditional instructional practice in medical schools. Both conditions
allotted equal time for questions, practice, and access to equipment. The students in each
condition completed a written posttest and performed the procedure on multiple human
patients during their internships. Students in the CTA condition showed significantly
greater gains from pretest to posttest than those in the control condition. They also
outperformed the control group when using the procedure on patients in every measure of
performance, including an observational checklist of steps in the procedure, number of
needle insertion attempts needed to insert the catheter into patients’ veins, frequency of required assistance from the attending physician, and time to completion for the
procedure.
Similarly, Schaafstal et al. (2000) compared the effectiveness of a pre-existing training
course in radar system troubleshooting with a new version generated from cognitive task
analyses. Participants in both versions of the course earned equivalent scores on
knowledge pretests. However, after instruction, students in the CTA-based course solved
more than twice as many malfunctions, in less time, as those in the traditional instruction
group. In all subsequent implementations of the CTA-based training design, every student cohort replicated or exceeded this performance advantage relative to the original control group.
Merrill (2002) compared CTA-based direct instruction with a discovery learning
(minimal guidance) format and a traditional direct instruction format in spreadsheet use.
The CTA condition provided direct instruction based on strategies elicited from a
spreadsheet expert. The discovery learning format provided authentic problems to be
solved and made an instructor available to answer questions initiated by the learners. The
traditional direct instruction format provided explicit information on skills and concepts
and guided demonstrations taken from a commercially available spreadsheet training
course. Scores on the posttest problems favored the CTA-based instruction group (89%, vs. 64% for guided demonstration and 34% for the discovery condition). Further, the
average times-to-completion also favored the CTA group. Participants in the discovery
condition required more than the allotted 60 minutes. The guided demonstration
participants completed the problems in an average of 49 minutes, whereas the
participants in the CTA-based condition required an average of only 29 minutes.
43.4.7 Generalizability of CTA-based training benefits
Lee (2004) conducted a meta-analysis to determine how generalizable CTA methods are
for improving training outcomes across a broad spectrum of disciplines. A search of the
literature in 10 major academic databases (Dissertation Abstracts International, Article
First, ERIC, ED Index, APA/PsycInfo, Applied Science Technology, INSPEC, CTA
Resource, IEEE, Elsevier/AP/Science Direct), using keywords such as “cognitive task analysis,” “knowledge elicitation,” and “task analysis,” yielded 318 studies. Seven studies met the inclusion criteria: training based on CTA methods conducted with an analyst between 1985 and 2003, with reported pretest and posttest measures of training performance. A total of 39 comparisons of mean effect size for pre- and posttest
differences were computed from the seven studies. Analysis of the studies found effect
sizes between .91 and 2.45, which are considered to be large (Cohen, 1992). The mean
effect size was d=+1.72, and the overall percentage of post-training performance gain
was 75.2%. Results of a chi-square test of independence on the outcome measures of the
pre- and posttests (χ² = 6.50, p < 0.01) indicated that CTA most likely contributed to the performance gain.
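For readers unfamiliar with the metric, the standardized mean difference is typically computed as shown below; Lee’s (2004) exact estimator is not reported in this chapter, so this is the generic textbook form:

```latex
% Generic standardized mean difference (Cohen's d) for a pre/post comparison.
d = \frac{\bar{X}_{\mathrm{post}} - \bar{X}_{\mathrm{pre}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_{\mathrm{pre}} - 1)\, s_{\mathrm{pre}}^{2}
  + (n_{\mathrm{post}} - 1)\, s_{\mathrm{post}}^{2}}{n_{\mathrm{pre}} + n_{\mathrm{post}} - 2}}
```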
43.4.8 Cost-benefit studies of CTA
There are few published studies of cost-effectiveness or cost-benefit that compare CTA
with other task analysis approaches. One exception, reported by Clark and Estes (1999),
described a field-based comparison of traditional task analysis and cognitive analysis by a
large (10,000+ employee) European organization that redesigned a required training
course in emergency and safety procedures for approximately 500 managers. The old and
new versions of the course continued to be offered after the redesign in order to compare the relative efficacy of the two approaches. All
objectives and test items were similar in both the old and new versions.
As Table 1 indicates, the use of CTA required a greater front-end investment of time (the
organization refused to release salary data) both for the CTA itself and the training of
instructors for the course (data on the time required to train instructors for the old course
were not available). Yet even with the approximately 85 percent more front-end time
invested in design, development, and instructor training, the new course resulted in 2.5
person-years of time savings, because it could be offered in one day (compared with two
days for the previous course) with equal or greater scores on the performance posttest.
While these data are only suggestive, the time savings reported by Clark and Estes (1999)
reflect similar time savings reported above by Velmahos et al. (2004) and Merrill (2002).
INSERT TABLE 1 ABOUT HERE
43.5 INTEGRATING COGNITIVE TASK ANALYSIS AND
TRAINING DESIGN
43.5.1 Optimal integration of CTA and training design
For optimal application to instruction, CTA methods should be fully integrated with a
training design model to facilitate the alignment between learning objectives, knowledge
(declarative and procedural) necessary for attaining the objectives, and instructional
methods appropriate to the required knowledge. Currently, there are three major systems
that take this approach: Integrated Task Analysis Model (ITAM; Redding, 1995; Ryder &
Redding, 1993), Guided Experiential Learning (GEL; Clark, 2004, 2006), and the Four
Component Instructional Design system (4C/ID; van Merriënboer, 1997; van
Merriënboer et al., 2002; van Merriënboer & Kirschner, in press). Of these, the 4C/ID
model is the most extensively developed. It can be distinguished from other instructional
design models in three ways. First, the model’s emphasis is on the integrated and
coordinated performance of task-specific constituent skills rather than specific knowledge
types or sequenced performance of tasks. Second, a distinction is made between
supportive information, which helps learners perform the nonrecurrent aspects of a
complex skill, and procedural or just-in-time (JIT) information, which is presented to
learners during practice and helps them to perform the recurrent aspects of a complex
skill. Third, the 4C/ID model is based on learners performing increasingly complex skills
as a “whole-task” with part-task practice only of the recurrent skills, whereas traditional
design methods emphasize the deconstruction of a complex task into part-tasks,
which, once learned separately, are compiled as whole-task practice.
The assumption of the 4C/ID model is that environments supporting complex skill
learning can be described in terms of four interrelated components: learning tasks,
supportive information, just-in-time (JIT) information, and part-task practice.
43.5.1.1 Learning tasks
Learning tasks are concrete, authentic whole task experiences, organized sequentially
from easy to difficult. Learning tasks at the same level of difficulty comprise a task class,
or group of tasks that draw upon the same body of knowledge. Learning tasks within a
class initially employ scaffolding that fades gradually over subsequent tasks within the
class. Learning tasks foster schema development to support nonrecurrent aspects of a
task. They also facilitate the development of automaticity for schemata used during
recurrent aspects of a task.
43.5.1.2 Supportive information
Supportive information assists the learner with interpreting, reasoning, and problem
solving activities that comprise the nonrecurrent aspects of learning tasks. It includes
mental models demonstrated through case studies, cognitive strategies modeled in
examples, and cognitive feedback. Through elaboration, supportive information helps
learners to apply their prior knowledge when learning new information they need to
perform the task.
43.5.1.3 JIT information
JIT information consists of rules, procedures, declarative knowledge, and corrective
feedback required for learners to perform recurrent aspects of the task. JIT information is
presented in small units as “how-to” instruction, with demonstrations of procedures and
definitions of concepts illustrated with examples. As learners perform the recurrent
aspects of a task and acquire automaticity, the amount of JIT information provided
diminishes.
43.5.1.4 Part-task practice
Part-task practice opportunities are provided for repetitive performance of the recurrent
aspects of a task when a high degree of automaticity is required. Part-task practice is
repeated throughout instruction and mixed with other types of practice. Part-task practice
includes items that vary from very familiar to completely novel.
43.6 CTA AND 4C/ID MODEL
The 4C/ID model utilizes CTA to accomplish four tasks: (1) decomposing complex skills
into skill hierarchies; (2) sequencing the training program within task classes; (3)
analyzing nonrecurrent aspects of complex skills to identify cognitive strategies and
mental models; and (4) analyzing recurrent aspects of the complex skill to identify rules
or procedures and their prerequisite knowledge that generate effective performance. In
general, these activities occur within the framework of the five-stage CTA process.
However, because this process is highly integrated with the 4C/ID model, the
instructional design model guides the CTA activities. This integration with the
instructional design process tends to highlight the reiterative nature of the CTA process.
43.6.1 Decomposition of the Complex Skill
In the first group of task analysis activities, complex skills are broken down into
constituent skills, and their interrelationships are identified [4]. Performance objectives are specified [5] for all constituent skills, and the objectives are classified as recurrent or
nonrecurrent. Objectives are classified as nonrecurrent if the desired behavior varies from
problem to problem and is guided by the use of cognitive strategies or mental models.
Objectives are recurrent if the desired behavior is highly similar from problem to problem
and is guided by rules or procedures. Sometimes recurrent constituent skills require a high degree of automaticity; these skills are identified for additional part-task practice.

[4] There are three categories of interrelationships: coordinate (performed in temporal order), simultaneous (performed concurrently), and transposable (performed in any order).

[5] Performance objectives reflect the performance as a result of learning and include an action verb, a description of tools used, conditions, and standards for performance.
Documentation analysis, observation, and unstructured interviews with SMEs provide the
information for building a preliminary skills hierarchy to guide further knowledge
elicitation efforts. Data collection, verification, and validation of the skills hierarchy
require multiple iterations of knowledge elicitation using multiple SMEs. The verified
skills hierarchy then serves as a guide for deeper CTA techniques, such as Clark’s (2004,
2006) Concepts, Processes, and Principles. The CPP data identify constituent skills and
their interrelationships, performance objectives for each constituent skill, and
classification of the skill as recurrent or nonrecurrent. The CPP method also identifies
problems ranging from easy to difficult to assist in sequencing task classes.
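The mapping from skill classification to the instructional components described in section 43.5.1 can be summarized in a small sketch (ours, with hypothetical skill names; it illustrates the bookkeeping rather than any published 4C/ID tooling):

```python
# A sketch (ours, not van Merriënboer's tooling) mapping each constituent
# skill's classification to the 4C/ID component that will address it.
from dataclasses import dataclass

@dataclass
class ConstituentSkill:
    name: str
    recurrent: bool           # is behavior similar from problem to problem?
    needs_automaticity: bool  # flagged for additional part-task practice?

def instructional_component(skill: ConstituentSkill) -> str:
    if not skill.recurrent:
        return "supportive information (mental models, cognitive strategies)"
    if skill.needs_automaticity:
        return "JIT information plus part-task practice"
    return "JIT information (rules, procedures, corrective feedback)"

# Hypothetical constituent skills from a troubleshooting domain
skills = [
    ConstituentSkill("diagnose fault category", recurrent=False, needs_automaticity=False),
    ConstituentSkill("read schematic symbols", recurrent=True, needs_automaticity=True),
    ConstituentSkill("run standard test sequence", recurrent=True, needs_automaticity=False),
]

for s in skills:
    print(f"{s.name} -> {instructional_component(s)}")
```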
43.6.2 Sequencing Task Classes
The second group of task analysis activities involves categorizing learning tasks into task
classes. The skills hierarchy and classified performance objectives determine the
sequence of training for individual constituent skills. The 4C/ID-model employs a whole-
task approach, in which trainees learn all constituent skills at the same time. In the first
task class, learners perform the simplest version of the whole task. As the conditions
under which the task is performed become increasingly complex, the whole task
scenarios become more authentic and reflective of those encountered by experts in the
real world. CTA processes are used both to verify the skills hierarchy and to confirm the
sequencing of task classes from simple to complex.
43.6.3 Analyze the Nonrecurrent Aspects of the Complex Skill
The third set of analytic activities identifies the supportive information necessary for each
task class in the form of mental models (how is the problem domain organized?) and
cognitive strategies (how should problems in the domain be approached?). Knowledge elicitation
methods commonly used with SMEs to capture data for nonrecurrent aspects of a
complex skill include interviews and think-aloud protocols. The CTA methods are
repeated for both simple versions and complex versions of the task in order to capture the
knowledge required for performing the nonrecurrent aspects of the task.
43.6.4 Analyze the Recurrent Aspects of the Complex Skill
The final set of task analysis activities in the 4C/ID model is an in-depth analysis of the
recurrent constituent skills, which were identified during the skill decomposition process, to determine the JIT information required for the recurrent aspects of the learning tasks. Each
constituent skill that enables the performance of another constituent skill is identified in a
reiterative process, until the prerequisite knowledge already mastered by learners at the
lowest level of ability is identified.
Analysts employ CTA techniques to identify task rules and generate highly specific,
algorithmic descriptions of task performance. Next, the prerequisite knowledge required
to apply the procedure is identified. The analysis of concepts occurs through the creation
of feature lists that identify the characteristics of all instances of a concept. At a lower
level, facts (which have no prerequisites) are identified. At a higher level, processes and
principles are identified. When completed, the analyst incorporates these prerequisite
knowledge components into the rules or procedures for performing the task.
In sum, the results of the four sets of CTA activities in the 4C/ID model provide detailed
and in-depth information about the skills, sequence, cognitive strategies, mental models,
rules, and prerequisite knowledge required for complex skill learning through the
instructional design of its four interrelated components: (a) learning tasks, (b) supportive
information, (c) JIT information, and (d) part-task practice. Combined, they form a fully
integrated system for problem-based learning in complex domains. A complete
description and procedure for implementing the 4C/ID model is found in van
Merriënboer and Kirschner (in press).
43.7 THE NEXT GENERATION OF RESEARCH ON COGNITIVE TASK ANALYSIS
While CTA appears to have significant potential to improve various kinds of
performance, it shares many of the challenges reported in studies of instructional design
theories and models (e.g., Glaser, 1976; Salas & Cannon-Bowers, 2001). We need many
more well-designed studies that systematically compare the impact of different forms of
CTA on similar outcome goals and measures. We also need to understand the efficacy of
different CTA methods when used with different training design models and theories.
So many types of CTA have been used and reported, and variation in the application of
methods is so great, that it is doubtful any single generalization about CTA will satisfy
basic standards of construct validity. Researchers are therefore cautioned to examine the
description of the methods used to implement a CTA in order to classify the family origin
of the technique being replicated. While we attempted to describe five "common elements"
of most CTA methods in the first part of this chapter, the specific strategies used to
implement each of these elements vary across studies. The elements we described focus on
the common steps used to implement CTA; together they form a "sequence" model similar to
the ADDIE model (Analysis, Design, Development, Implementation, and Evaluation) in
instructional design. Schraagen et al. (2000) and Cooke (1994) have discussed this problem
in detail and have attempted to organize the various methods into families based on the
type of outcome being pursued (e.g., training, job design, and assessment). Wei and
Salvendy (2004) have suggested 11 very useful guidelines for selecting the best CTA method
to achieve a given goal (see Table 43.2).
INSERT TABLE 43.2 ABOUT HERE
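To make these selection guidelines concrete, the toy Python sketch below encodes a handful of them as simple rules; the function name suggest_cta_families, the dictionary keys, and the exact mapping are our illustrative simplifications of Table 43.2, not Wei and Salvendy's own formulation.

```python
def suggest_cta_families(task: dict) -> set[str]:
    """Toy selector encoding a few of the Wei and Salvendy (2004) guidelines;
    the keys and the mapping are illustrative simplifications (hypothetical)."""
    families = set()
    if not task.get("domain_well_defined", True):
        families.add("observations & interviews")     # guideline 1
    if task.get("verbal_data_easily_captured", False):
        families.add("process tracing")               # guideline 5
    if task.get("domain_structures_need_defining", False):
        families.add("conceptual techniques")         # guideline 6
    if task.get("needs_quantitative_prediction", False):
        families.add("formal models")                 # guideline 8
    if task.get("verbalizing_disrupts_performance", False):
        families.discard("process tracing")           # guideline 9
    return families

print(suggest_cta_families({
    "domain_well_defined": False,
    "verbal_data_easily_captured": True,
    "verbalizing_disrupts_performance": True,
}))  # -> {'observations & interviews'}
```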
43.7.1 First Principles of Cognitive Task Analysis
A different and equally valuable strategy for tackling the multiplicity of CTA methods
would be to apply Merrill’s (2002) “first principles” approach to a similar problem with
instructional design models. Merrill classified what appeared to be the most
psychologically active instructional methods in a group of popular, evidence-based
instructional design models. One of the principles he suggested is that designs which help
learners connect with prior knowledge are more successful. An attempt to identify "first
principles" of CTA would benefit researchers and practitioners by isolating the
active ingredients of key CTA methods. For example, nearly all CTA methods seem to
place a heavy premium on the identification of the environmental or contextual “cues”
that indicate the need to implement a skill. In the Crandall and Getchell-Reiter (1993)
study, for instance, CTA interviews with neonatal intensive care nurses generated a more
accurate and complete set of the diagnostic symptoms (cues) expressed by very sick babies.
Since the recognition of conditional cues
may be automated and unconscious for many SMEs, the need for accurate and exhaustive
identification of important cues may be one of the most important principles of CTA. In
the case of the neonatal nursing studies, the cues captured during CTA have changed the
textbook instructions for future neonatal nurses. Other principles may be associated with
the identification of the sequence in which productions must be performed and the
decisions that must be made (including the alternatives that need to be considered and the
criteria for selecting alternatives). Principles may also be related to the protocols that are
used to observe and interview experts to capture the most accurate and exhaustive
description of their task-based knowledge. It is also likely that a separate set of principles
would be needed to characterize team or organizational CTAs (Schraagen et al., 2000).
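One hypothetical way to record such cue-based knowledge is as a production, an IF (cues) THEN (action) pair annotated with decision alternatives and selection criteria. The sketch below is our illustration of that record format; the field names and the clinical content are invented for the example and are not taken from the published NICU studies.

```python
from dataclasses import dataclass, field

@dataclass
class Production:
    """IF the cues are present THEN perform the action (hypothetical record format)."""
    cues: list[str]  # environmental/contextual conditions, often automated for SMEs
    action: str
    alternatives: list[str] = field(default_factory=list)       # options the expert weighs
    selection_criteria: list[str] = field(default_factory=list) # how one option is chosen

# Illustrative content only; not drawn from the published NICU studies:
sepsis_check = Production(
    cues=["lethargy", "poor feeding", "unstable temperature"],
    action="flag the infant for a sepsis assessment",
    alternatives=["continue routine monitoring", "escalate to the physician"],
    selection_criteria=["number of co-occurring cues", "trend across the last shift"],
)
print(sepsis_check.cues, "->", sepsis_check.action)
```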
43.7.2 Research on Automated, Unconscious Expert Knowledge
Concerns about experts’ awareness of their own expertise and the strategies used to
capture unconscious knowledge are arguably the most important research issues
associated with CTA. The body of research on unconscious, automated knowledge has
yet to be widely integrated into instructional design or the practice of educational
psychologists. Most of the research in this area has been conducted by those interested in
psychotherapy and the dynamics of stereotypes and bias in decision making (e.g., Abreu,
1999; Bargh & Ferguson, 2000; Wheatley & Wegner, 2001) and motivation (e.g., Clark,
Howard, & Early, 2006). Yet we have ample evidence of the importance of this issue in
CTA and training from the results of current research by, for example, Velmahos et al.
(2004) and Chao and Salvendy (1994). We need to know much more about how
unconscious expertise influences the accuracy of task analysis. We also need to know
much more about how to modify automated, unconscious knowledge when people must
change or update existing skills. Clark and Elen (2006) review past research and offer
suggestions for further study.
43.7.3 Cost Effectiveness and Cost Benefit Research
Another promising area for future research is cost-effectiveness and cost-benefit analysis
(Levin & McEwan, 2000). Existing studies have not explored this issue systematically,
but preliminary analyses indicating substantial learner time savings and decreases in
serious performance errors (e.g., Clark & Estes, 1999; Merrill, 2002; Schaafstal et al.,
2000; Velmahos et al., 2004) are very promising.
These data are important in part because many key decision makers have the impression
that CTA is an overly complex process that requires a great deal of time to conduct and
should be avoided due to its cost (Cooke, 1994, 1999). It is true that CTA increases
the time and effort required for front-end analysis and design, particularly when a number
of experts who share the same skill must be observed and interviewed. Yet it is also
possible that these up-front costs will be offset by delivery-end savings due to increased
learner accuracy and decreased learning time.
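Using the person-day figures reported in Table 43.1 (from Clark & Estes, 1999), a back-of-the-envelope comparison of the two approaches can be computed as follows; the small helper function is our own illustration, and only the numbers come from the table.

```python
# Person-day figures from Table 43.1 (Clark & Estes, 1999); the helper itself is ours.
def total_person_days(analysis_and_design: int, presenter_training: int,
                      delivery: int, trainer_time: int) -> int:
    """Sum front-end design costs and delivery-end costs for one training program."""
    return analysis_and_design + presenter_training + delivery + trainer_time

behavioral = total_person_days(7, 0, 80, trainer_time=1000)   # 1,087 person-days
cognitive = total_person_days(38, 18, 34, trainer_time=500)   #   590 person-days
print(behavioral - cognitive)  # 497 person-days saved, roughly 2.5 person-years
```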
People in formal school settings seldom consider decreased learning time as a benefit, but
in business and government settings, time is a valuable commodity. Research identifying
the conditions under which such savings are, and are not, realized would be a valuable
adjunct to the continued development of CTA. Many other directions for research are
possible but are beyond the scope of this chapter.
43.8 CONCLUSION
CTA is one of the major contributions to instructional technology to have resulted from
the "cognitive revolution" in psychology and education that began in the 1970s. CTA does
not seek to replace behavioral task analysis (or the analysis of documents and research to
support training); instead, it adds to existing methods by helping to capture the covert
mental processes that experts use to accomplish complex skills. The importance of CTA
rests on compelling evidence that experts are unaware of approximately 70% of their own
decisions and mental analyses of tasks (Clark & Elen, 2006; Feldon & Clark, 2006) and
so are unable to explain them fully, even when they intend to support the design of
training, assessment, job aids, or work. CTA methods attempt to overcome this problem
by specifying observation and interview strategies that permit designers to capture more
accurate and complete descriptions of how experts succeed at complex tasks. The research
evidence described in this chapter suggests substantial potential benefits for designers
and learners when CTA-based performance descriptions are used in training and job aids.
Many designers are apparently not aware of, or are not using, CTA. As this chapter was
going to press, we entered search terms in Google Scholar for "task analysis" or "task
analysis models" and then for "cognitive task analysis" or "cognitive task analysis
models". The former terms returned roughly nine to ten times more hits than the cognitive
task analysis terms. We also examined a number of the texts used to teach instructional
design and could not find any references to CTA.
CTA has been the subject of research more often than it has been applied in practice, so
we suspect that few designers have been trained to conduct effective cognitive task
analyses. It is also possible that the assumptions underlying CTA conflict with the
assumptions that underlie some of the currently popular design theories such as
constructivism and problem-based learning (Kirschner et al., 2006). Educators who avoid
direct instruction in favor of expert-supported group problem solving or “communities of
practice” would not be inclined to conduct CTA to support a constructivist context for
learning new skills or to teach CTA in graduate programs.
Our review of the research evidence indicates that, if CTA were more widely adopted, it
could make a substantial contribution to learning and performance. It is equally clear
that many questions about CTA remain to be answered.
REFERENCES
Abreu, J. M. (1999). Conscious and nonconscious African American stereotypes: Impact
on first impression and diagnostic ratings by therapists. Journal of Consulting
and Clinical Psychology, 67, 387-393.
Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard
University Press.
Anderson, J. R. (1996). ACT: A simple theory of complex cognition. American
Psychologist, 51, 355-365.
*Anderson, J. R., & Lebiere, C. (1998). The atomic components of thought. Mahwah, NJ:
Lawrence Erlbaum Associates.
*Bargh, J. A., & Ferguson, M. J. (2000). Beyond behaviorism: On the automaticity of
higher mental processes. Psychological Bulletin, 126, 925-945.
Bargh, J. A., Gollwitzer, P. M., Lee-Chai, A., Barndollar, K., & Trötschel, R. (2001). The
automated will: Activation and pursuit of behavioral goals. Journal of Personality
and Social Psychology, 81, 1014-1027.
Betsch, T., Fiedler, K., & Brinkmann, J. (1998). Behavioral routines in decision making:
The effects of novelty in task presentation and time pressure on routine
maintenance and deviation. European Journal of Social Psychology, 28, 861-878.
Blessing, S. B., & Anderson, J. R. (1996). How people learn to skip steps. Journal
of Experimental Psychology: Learning, Memory, and Cognition, 22, 576-598.
Bradshaw, J. M., Ford, K. M., Adams-Webber, J. R., & Agnew, N. M. (1993). Beyond
the repertory grid: New approaches to constructivist knowledge acquisition tool
development. In K. M. Ford & J. M. Bradshaw (Eds.), Knowledge acquisition as
modelling (pp. 9-32). New York: John Wiley & Sons.
*Chao, C.-J., & Salvendy, G. (1994). Percentage of procedural knowledge acquired as a
function of the number of experts from whom knowledge is acquired for
diagnosis, debugging and interpretation tasks. International Journal of Human-
Computer Interaction, 6, 221-233.
Chinn, C. A., & Brewer, W. F. (1993). The role of anomalous data in knowledge
acquisition: A theoretical framework and implications for science education.
Review of Educational Research, 63(1), 1-49.
*Chipman, S. F., Schraagen, J. M., & Shalin, V. L. (2000). Introduction to cognitive task
analysis. In J. M. Schraagen, S. F. Chipman, & V. J. Shute (Eds.), Cognitive task
analysis (pp. 3-23). Mahwah, NJ: Lawrence Erlbaum Associates.
Clark, R. E. (2004). Design document for a guided experiential learning course. Final
report on contract DAAD 19-99-D-0046-0004 from TRADOC to the Institute for
Creative Technologies and the Rossier School of Education.
Clark, R. E. (2006). Training aid for cognitive task analysis. Technical report produced
under contract ICT 53-0821-0137—W911NF-04-D-0005 from the Institute for
Creative Technologies to the Center for Cognitive Technology, University of
Southern California.
*Clark, R. E., & Elen, J. (2006). When less is more: Research and theory insights about
instruction for complex learning. In J. Elen & R. E. Clark (Eds.), Handling
Complexity in Learning Environments: Research and Theory (pp. 283-297).
Oxford: Elsevier Science Limited.
*Clark, R. E., & Estes, F. (1996). Cognitive task analysis. International Journal of
Educational Research, 25, 403-417.
Clark, R. E., Howard, K., & Early, S. (2006). Motivational challenges experienced
in highly complex learning environments. In J. Elen & R. E. Clark (Eds.),
Handling complexity in learning environments: Research and Theory (pp. 27-43).
Oxford: Elsevier Science Limited.
Coffey, J. W., & Hoffman, R. R. (2003). Knowledge modeling for the preservation of
institutional memory. Journal of Knowledge Management, 7(3), 38-52.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159.
Collins, H. M., Green, R. H., & Draper, R. C. (1985). Where’s the expertise?: Expert
systems as a medium of knowledge transfer. In M. Merry (Ed.), Proceedings of
the fifth technical conference of the British Computer Society Specialist Group on
Expert Systems ‘85 (pp. 323-334). New York: Cambridge University Press.
Cooke, N. J. (1992). Modeling human expertise in expert systems. In R. R. Hoffman
(Ed.), The psychology of expertise: Cognitive research and empirical AI (pp. 29-
60). Mahwah, NJ: Lawrence Erlbaum Associates.
*Cooke, N. J. (1994). Varieties of knowledge elicitation techniques. International
Journal of Human-Computer Studies, 41, 801-849.
Cooke, N. J. (1999). Knowledge elicitation. In F. T. Durso (Ed.), Handbook of applied
cognition (pp. 479-509). New York: Wiley.
Cooke, N. J., Atlas, R. S., Lane, D. M., & Berger, R. C. (1993). Role of high-level
knowledge in memory for chess positions. American Journal of Psychology, 106,
321-351.
Cooke, N. J., & Breedin, S. D. (1994). Constructing naive theories of motion on-the-fly.
Memory and Cognition, 22, 474-493.
Crandall, B., & Gamblian, V. (1991). Guide to early sepsis assessment in the NICU.
Fairborn, OH: Klein Associates.
Crandall, B., & Getchell-Reiter, K. (1993). Critical decision method: A technique for
eliciting concrete assessment indicators from the "intuition" of NICU nurses.
Advances in Nursing Science, 16(1), 42-51.
*Crandall, B., Klein, G., & Hoffman, R. R. (2006). Working minds: A practitioner's
guide to cognitive task analysis. Cambridge, MA: MIT Press.
Dawes, R. M. (1994). House of cards. New York: Free Press.
Dochy, F., Segers, M., & Buehl, M. M. (1999). The relation between assessment
practices and outcomes of studies: The case of research on prior knowledge.
Review of Educational Research, 69(2), 145-186.
Einhorn, H. (1974). Expert judgment: Some necessary conditions and an example.
Journal of Applied Psychology, 59, 562-571.
*Ericsson, K. A., & Kintsch, W. (1995). Long-term working memory. Psychological
Review, 102, 211-245.
Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data
(Rev. ed.). Cambridge, MA: Bradford.
Ericsson, K. A., & Smith, J. (1991). Towards a general theory of expertise: Prospects
and limits. New York: Cambridge University Press.
Feldon, D. F. (in press). Implications of research on expertise for curriculum and
pedagogy. Educational Psychology Review.
Feldon, D. F. (2004). Inaccuracies in expert self-report: Errors in the description of
strategies for designing psychology experiments. Unpublished doctoral
dissertation, Rossier School of Education, University of Southern California,
USA.
*Feldon, D. F., & Clark, R. E. (2006). Instructional implications of cognitive task
analysis as a method for improving the accuracy of experts’ self-report. In G.
Clarebout & J. Elen (Eds.), Avoiding simplicity, confronting complexity:
Advances in studying and designing (computer-based) powerful learning
environments (pp. 109-116). Rotterdam, The Netherlands: Sense Publishers.
Fisk, A. D., & Eggemeier, R. T. (1988). Application of automatic/controlled processing
theory to training of tactical command and control skills: I. Background and task
analytic methodology. In 33rd Annual Proceedings of the Human Factors Society
(pp. 281-285). Santa Monica, CA: Human Factors Society.
Gagné, R. M. (1962). The acquisition of knowledge. Psychological Review, 69, 355-365.
Gagné, R. M. (1968). Learning hierarchies. Educational Psychologist, 6, 1-9.
Gagné, R. M. (1982). Developments in learning psychology: Implications for
instructional design and effects of computer technology on instructional design
and development. Educational Technology, 22(6), 11-15.
Gagné, R. M., Briggs, L. J., & Wager, W. W. (1992). Principles of instructional design.
Fort Worth, TX: Harcourt Brace Jovanovich.
Gagné, R. M., & Medsker, K. L. (1996). The conditions of learning: Training
applications. New York: Harcourt Brace.
Glaser, R. (1976). Components of a psychology of instruction: Toward a science of
design. Review of Educational Research, 46(1), 1-24.
Glaser, R., & Chi, M. T. H. (1988). Overview. In M. T. H. Chi, R. Glaser, & M. J.
Farr (Eds.), The nature of expertise (pp. xv-xxviii). Mahwah, NJ: Lawrence
Erlbaum Associates.
Hall, E. M., Gott, S. P., & Pokorny, R. A. (1995). A procedural guide to cognitive task
analysis: The PARI methodology. Brooks Air Force Base, TX: Manpower and
Personnel Division, U.S. Air Force.
Hermans, D., Crombez, G., & Eelen, P. (2000). Automatic attitude activation and
efficiency: The fourth horseman of automaticity. Psychologica Belgica, 40(1), 3-
22.
Hinds, P. J., Patterson, M., & Pfeffer, J. (2001). Bothered by abstraction: The effect of
expertise on knowledge transfer and subsequent novice performance. Journal of
Applied Psychology, 86, 1232-1243.
*Hoffman, R. R., Crandall, B., & Shadbolt, N. (1998). Use of the critical decision method
to elicit expert knowledge: A case study in the methodology of cognitive task
analysis. Human Factors, 40, 254-277.
*Hoffman, R. R., Shadbolt, N. R., Burton, A. M., & Klein, G. (1995). Eliciting
knowledge from experts: A methodological analysis. Organizational Behavior
and Human Decision Processes, 62(2), 129-158.
Hoffman, P., Slovic, P., & Rorer, L. (1968). An analysis of variance model for the
assessment of configural cue utilization in clinical judgment. Psychological
Bulletin, 69, 338-349.
Johnson, P. E. (1983). What kind of expert should a system be? The Journal of
Medicine and Philosophy, 8, 77-97.
*Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1999). Task analysis methods for
instructional design. Mahwah, NJ: Lawrence Erlbaum Associates.
Kareken, D. A., & Williams, J. M. (1994). Human judgment and estimation of
premorbid intellectual function. Psychological Assessment, 6(2), 83-91.
*Kirschner, P., Sweller, J., & Clark, R. E. (2006). Why minimally guided learning does
not work: An analysis of the failure of discovery learning, problem-based
learning, experiential learning and inquiry-based learning. Educational
Psychologist, 41(2), 75-86.
Klein, G. A., & Calderwood, R. (1991). Decision models: Some lessons from the field.
IEEE Transactions on Systems, Man, and Cybernetics, 21, 1018-1026.
*Klein, G. A., Calderwood, R., & MacGregor, D. (1989). Critical decision method for
eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19,
462-472.
Lee, R. L. (2004). The impact of cognitive task analysis on performance: A meta-analysis
of comparative studies. Unpublished Ed.D. dissertation, Rossier School of
Education, University of Southern California, USA.
Lee, J.-Y., & Reigeluth, C. M. (2003). Formative research on the heuristic task analysis
process. Educational Technology, Research and Development, 51(4), 5-24.
Levin, H. M., & McEwan, P. J. (2000). Cost-effectiveness analysis: Methods and
applications (2nd ed.). Beverly Hills, CA: Sage Publications.
Lohman, D. F. (1986). Predicting mathemathantic effects in the teaching of higher-
order thinking skills. Educational Psychologist, 21(3), 191-208.
Maupin, F. (2003). Comparing cognitive task analysis to behavior task analysis in
training first year interns to place central venous catheters. Unpublished doctoral
dissertation, University of Southern California, Los Angeles, California.
Merrill, M. D. (2002). A pebble-in-the-pond model for instructional design.
Performance Improvement, 41(7), 39-44.
Mullin, T. M. (1989). Experts estimation of uncertain quantities and its implications for
knowledge acquisition. IEEE Transactions on Systems, Man, and Cybernetics,
19, 616-625.
Neves, D. M., & Anderson, J. R. (1981). Knowledge compilation: Mechanisms for the
automatization of cognitive skills. In J. R. Anderson (Ed.), Cognitive skills and
their acquisition (pp. 335-359). Hillsdale, NJ: Erlbaum.
Redding, R. E. (1995). Cognitive task analysis for instructional design: Applications in
distance education. Distance Education, 16, 88-106.
Russo, J. E., Johnson, E. J., & Stephens, D. L. (1989). The validity of verbal protocols.
Memory & Cognition, 17(6), 759-769.
Ryder, J. M., & Redding, R. E. (1993). Integrating cognitive task analysis into
instructional systems development. Educational Technology, Research and
Development, 41, 75-96.
Salas, E., & Cannon-Bowers, J. A. (2001). The science of training: A decade of
progress. Annual Review of Psychology, 52, 471-497.
Schaafstal, A., Schraagen, J. M., & van Berlo, M. (2000). Cognitive task analysis and
innovation of training: The case of structured troubleshooting. Human Factors,
42, 75-86.
Schneider, W. (1985). Training high-performance skills: Fallacies and guidelines. Human
Factors, 27, 285-300.
Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information
processing: 1. Detection, search, and attention. Psychological Review, 84, 1-66.
Schraagen, J. M., Chipman, S. F., & Shute, V. J. (2000). State-of-the-art review of
cognitive task analysis techniques. In J. M. Schraagen, S. F. Chipman, & V. J.
Shute (Eds.), Cognitive task analysis (pp. 467-487). Mahwah, NJ: Lawrence
Erlbaum Associates.
Schwartz, D., & Bransford, J. D. (1998). A time for telling. Cognition and Instruction,
16, 475-522.
*Sternberg, R. J., & Horvath, J. A. (1998). Cognitive conceptions of expertise and their
relations to giftedness. In R. C. Friedman & K. B. Rogers (Eds.), Talent in
Context (pp. 177-191). Washington, DC: American Psychological Association.
Thorley, N., & Stofflet, R. (1996). Representation of the conceptual change model in
science teacher education. Science Education, 80, 317-339.
*van Merriënboer, J. J. G. (1997). Training complex cognitive skills: A four-component
instructional design model for technical training. Englewood Cliffs, NJ:
Educational Technology Publications.
*van Merriënboer, J. J. G., Clark, R. E., & de Croock, M. B. M. (2002). Blueprints for
complex learning: The 4C/ID*-model. Educational Technology Research and
Development, 50(2), 39-64.
van Merriënboer, J. J. G., & Kirschner, P. A. (in press). Ten steps to complex learning: A
systematic approach to Four-Component Instructional Design. Mahwah, NJ:
Lawrence Erlbaum Associates.
*Velmahos, G. C., Toutouzas, K. G., Sillin, L. F., Chan, L., Clark, R. E., Theodorou, D.,
& Maupin, F. (2004). Cognitive task analysis for teaching technical skills in an
inanimate surgical skills laboratory. The American Journal of Surgery, 187, 114-
119.
Wegner, D. M. (2002). The illusion of conscious will. Cambridge, MA: MIT Press.
*Wei, J., & Salvendy, G. (2004). The cognitive task analysis methods for job and task
design: Review and reappraisal. Behaviour & Information Technology, 23(4),
273-299.
Wheatley, T., & Wegner, D. M. (2001). Automaticity of action, Psychology of. In N. J.
Smelser & P. B. Baltes (Eds.), International Encyclopedia of the Social and
Behavioral Sciences (pp. 991-993). Oxford, UK: Elsevier Science Limited.
Zeitz, C. M. (1997). Some concrete advantages of abstraction: How experts’
representations facilitate reasoning. In P. J. Feltovich, K. M. Ford, & R. R.
Hoffman (Eds.), Expertise in context (pp. 43-65). Menlo Park, CA: American
Association for Artificial Intelligence.
* Indicates a core reference.
List of Tables
43.1 Cost Comparison of Behavioral and Cognitive Task Analysis
43.2 Guidelines for Selecting CTA Methods
Table 43.1
Cost Comparison of Behavioral and Cognitive Task Analysis (From Clark & Estes, 1999)

Comparison Activities            Behavioral Task Analysis    Cognitive Task Analysis
                                 & Design (Days*)            & Design (Days*)
Task Analysis & Design                    7                          38
Training of Presenters                    0                          18
Delivery by Trainers                     80                          34
Sub-total                                87                          90
Total time for 500 trainers           1,000                         500
Total training days**                 1,087                         590

*Day = person work day to design and present the safety course
**Total savings with CTA: 1,087 days – 590 days = 497 days, or approximately 2.5 person years
Table 43.2
Guidelines for Selecting CTA Methods (Following Wei & Salvendy, 2004)

Families of CTA methods: Observations & Interviews (O&I); Process Tracing (PT);
Conceptual Techniques (CT); Formal Models (FM)

When to use different CTA methods:
1. In initial stages when tasks and domain are not well-defined: O&I
2. Procedures to perform a task are not well defined: O&I
3. Tasks are representative and process is clear: PT
4. Task process and performance need tracking: PT
5. Verbal data is easily captured without compromising performance: PT
6. Domain knowledge and structures need defining: CT
7. Multiple task analyzers are used, and task requires less verbalization: CT
8. Task needs quantitative prediction, and task models change little when scenario changes: FM
9. Task performance is affected or distracted by interference: O&I, CT, FM
10. Task analyzers lack significant knowledge and techniques: O&I, PT, CT
11. Tasks are: (a) skill-based: O&I, PT; (b) rule-based: PT, CT; (c) knowledge-based: CT, FM