Finding Decisions in Natural Environments:
The View from the Cockpit
Judith Orasanu, Ph.D.
NASA-Ames Research Center
Ute Fischer, Ph.D.
Georgia Institute of Technology
In C. Zsambok & G. A. Klein (Eds.), Naturalistic Decision Making. Hillsdale, NJ: Lawrence
Erlbaum Associates, 1997
Running Head: Finding Decisions....
In keeping with the naturalistic decision making (NDM) tradition of studying “real
people making real decisions” in their everyday contexts, our mission was to understand flight-
related decision making by commercial airline pilots: What constitutes effective flight crew
decision making? What conditions pose problems for crews and lead to poor decisions?
Our initial examination of decision strategies that distinguished more from less effective
crews in simulated flight showed striking variability in the decision behaviors of the most
effective crews. Sometimes the crews were very quick and sometimes they were slow and
methodical. In retrospect we should not have been surprised, but as psychologists we were
looking for simple patterns, such as, “Good crews always make the fastest decisions.”
These observations suggested that the most effective crews tailored their decision
strategies to the situation. Thus, to understand what constitutes an effective decision strategy we
need to understand the problem situations that crews encounter. Our research question was
expanded to: “How can we assess the sensitivity and appropriateness of decision strategies in
light of situational features?”
We adopted an approach used by ethnographers (e.g., Hutchins & Klausen, 1991) and
cognitive engineers (Woods, 1993): close examination of a phenomenon of interest in its
everyday context, seeking natural variations in critical features. Our approach builds on Klein's
(1993) Recognition Primed Decision (RPD) model and on Hammond’s Cognitive Continuum
Theory (Hammond, Hamm, Grassia, & Pearson, 1987). Our work also echoes the theme of
Hart's work on "strategic behavior" (Hart & Wickens, 1990), namely, operators make decisions
that serve overall task goals, capitalizing on their strengths and minimizing work.
A SEARCH FOR DECISION EVENTS IN CONTEXT: DATA SOURCES
As our starting point shifted from strategies to situations, we began a search for decision
events in context. Our initial observations were based on crews “flying” a mission in a high-
fidelity flight simulator, which yielded three distinct types of decisions. However, we realized
that our opportunity to observe decisions was restricted by the particular scenarios used in those
studies, so we sought a broader set of situations that might present other types of decision events.
The Aviation Safety Reporting System (ASRS) database satisfied this need. The ASRS
is a confidential reporting system maintained by NASA (with funding from the FAA). Pilots
(and others) can submit a report describing an incident that involved a risky or otherwise
problematic situation. The key words used to search the database were Problem
Solving and Decision Making. The resulting set of incident reports describes diverse events that
required crew decision making. However, because of the self-report nature of the descriptions,
what we know about the actual decision strategies used by crews is what they chose to tell us.
Likewise, information about conditions that may have led to poor decisions is limited. To
address these limitations, we pursued a third data source.
The National Transportation Safety Board’s (NTSB) accident investigations offer deep
analysis of actual crashes, based on crew conversations documented by the cockpit voice
recorder, physical evidence, aircraft systems, and interviews with survivors or observers. We
chose reports in which crew actions were identified by NTSB analysts as contributing or causal
factors in the accidents. These case studies provide a detailed picture of what happened
immediately prior to each accident, what the crew focused on, how they managed the situation,
what decisions were made, and what actions were taken. The analyses are good sources of
hypotheses about contextual factors that make decisions difficult, types and sources of error, and
effective strategies.
What we learned about decision situations and decision strategies from these three data
sources is described in the remainder of this chapter. First we describe six types of decision
events that were identified. Then we describe decision strategies that are associated with each
type of decision and differences in strategies used by more and less effective crews. Finally, we
describe a decision process model and a model of decision effort derived from the first two
activities.
DECISION EVENTS
Simulator Data
Our analyses were based on two full-mission simulator studies conducted at NASA-
Ames Research Center. The first (Foushee, Lauber, Baetge & Acomb, 1986) was designed to
study the effect of fatigue on the performance of 2-member crews. The second (Chidester,
Kanki, Foushee, Dickinson, & Bowles, 1990) investigated leader personality effects in 3-
member crews. All crews were exposed to the same events, which allowed between-crew
comparisons. Crew performance in the simulator was videotaped and all communications were
transcribed.
The scenario flown by all crews included a missed approach at their original destination
due to bad weather (excessive cross-winds) and diversion to an alternate landing site. During
climbout following the missed approach, the main hydraulic system failed. As a result, the gear
and flaps had to be extended by alternate means. Moreover, the flaps could only be set to 15
degrees, resulting in a faster than normal landing speed, and the gear could not be retracted once
extended, meaning that further diversion was not desirable because of fuel constraints.
Three major decisions were present in this scenario: (a) At the original destination, crews
had to decide whether to continue with the final approach or to perform a missed approach. (b)
Once the crew realized that the weather at their destination was not improving, they had to select
an alternate airport. (c) The hydraulic failure required crews to coordinate the flap and gear
extension procedures during final approach, an already high-workload period. How to manage
this coordination was the third decision. These problems imposed different cognitive demands
on the crews: The situations differed in the number of constraints a solution had to satisfy and in
the extent to which a solution was prescribed.
Problem (a) calls for a Go/No Go decision. A course of action is prescribed: If all
facilitating conditions are normal, then Go. If the “go” conditions are not met, an alternate
action is prescribed (No Go condition). Conditions for Go and No Go are clearly defined and the
actions to be taken in both cases are also clearly prescribed. Selecting an alternate landing site as
in problem (b) is an example of a Choice problem. Several legitimate options or courses of
action exist from which one must be selected. No rule prescribes a single appropriate response.
Options must be evaluated in light of goals, possible consequences, and situational constraints
(such as fuel, runway length, or weather). Scheduling problems like problem (c) require the
crews to decide on what is most important to do, when to do it and who will do it. Several tasks
must be accomplished within a restricted window of time with limited resources.
Incident Reports from the Aviation Safety Reporting System
Ninety-four ASRS reports were analyzed in depth and classified in terms of their
precipitating events, phase of flight during which the event and subsequent decisions occurred,
and focus of the decisions. Some 234 decisions were discerned in these cases, because a single
precipitating event often set the stage for a series of decisions. For example, an engine problem
may first require the crew to decide what to do with the engine (shut it down, reduce power to
idle, or continue operation), then to decide whether or not to divert, where to divert, and any
specific considerations about landing configuration as a consequence of the engine problem. Our
analyses of the ASRS reports yielded three additional types of decision events.
Condition-Action Rules. The situation requires recognition of a predefined condition and
retrieval of the associated response. These decisions mirror Klein’s (1993) RPD, but are
prescriptive in the aviation domain. They do not depend primarily on the pilot’s personal
experience with similar cases, but on responses dictated by the industry, company or FAA.
Neither conditions nor options are bifurcated, as in Go/No Go cases, though both types rely on
underlying rules. Examples include decisions to pull the fire handle in case of an engine fire or
to descend to a lower altitude in case of cabin decompression. Thus, the pilot must know the rule
and then decide whether conditions warrant applying it.
Procedural Management. The essence of this class of decisions is the presence of an
ambiguous situation that is judged to be of high risk. The crew does not know precisely what is
wrong, but recognizes that conditions are out of normal bounds. Standard procedures are
employed to make the situation safe, often followed by landing at the nearest suitable airport.
These decisions look like condition-action rules but lack prespecified eliciting conditions. The
response also is generalized, such as “get down fast.” One case studied was a decision to reduce
cruise speed when an airframe vibration was experienced (which turned out to be due to a loose
aileron trim tab). The defining features of this type of problem are ambiguous high-risk
conditions and a standard procedural response that satisfies the conditions. No specific rules in
manuals or checklists guide this type of decision; pilot domain knowledge and experience are the
source of the action.
Creative problem solving. These are ill-defined problems and are probably the least
frequent types of decision events crews ever encounter. No specific guidance is available in
standard procedures, manuals, or checklists to guide the crew to a course of action. The nature
of the problem may or may not be clear. The important distinction from procedural management
situations is that standard procedures will not satisfy the demands of the situation. New solutions
must be invented. Perhaps the most famous case is the DC-10 (UA flight 232) that lost all flight
controls when the hydraulic cables were severed following a catastrophic engine failure (NTSB,
1990). The crew had to figure out how to control the plane. They invented the solution of using
alternate thrust on the two remaining engines to "steer" it.
National Transportation Safety Board Accident Analyses
The six types of decision events just described could account for all problem situations
analyzed in a dozen NTSB accident reports. Because the NTSB seeks to understand causal and
contributing factors in accidents, we used their reports primarily as a source of hypotheses about
decision processes and causes of poor decisions, rather than to expand the set of decision types.
Decision Event Taxonomy
The six types of decisions were identified using simulator performance and ASRS
databases. They fall into two subgroups that differ primarily in whether a prescriptive rule exists
that defines a situationally appropriate response or whether the decision primarily relies on the
pilot’s knowledge and experience. These are referred to as “rule-based” and “knowledge-based”
decisions.{1}
Rule-based decisions include two subtypes: Go/No Go and Condition-Action decisions.
They differ in whether a binary option exists or whether a simple condition-action rule prevails.
The crucial aspect of the decision process for rule-based decisions is accurate situation
assessment. The major impediment is ambiguity. Such decisions are often made under high
time pressure and risk; thus, the industry has prescribed appropriate responses to match
predictable high-risk conditions. Once the situation is recognized, a fast response may be
required for safety. An example is deciding whether to abort a takeoff when an engine fails
some time during the takeoff roll.
Knowledge-based decisions vary in how well structured the problems are and in the
availability of response options. “Well-structured” problems are those in which the problem
situation and available response options are unambiguous and should be known to experienced
decision makers. In one case (“choice” problems), the decision maker must choose one option
after evaluating constraints and outcomes associated with various options. In the second case
(“scheduling” problems), effective performance depends on good judgment about relative
priorities of various tasks and accurate assessment of resources and limitations.
“Ill-structured” problems entail ambiguity, either in the cues that signal the problem or in
the available response options. Cues may be sufficiently vague or confusing that the crew cannot
identify the problem (“procedural management” decisions), or crews do not know what to do
even if the problem is understood (“creative problem solving” required).
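The two-level structure of the taxonomy can be rendered as a small classification. The following Python sketch is purely illustrative and is not part of the original chapter; the identifier names are ours:

```python
from enum import Enum

class DecisionType(Enum):
    """Six decision-event types from the taxonomy described above."""
    GO_NO_GO = "go/no go"                          # rule-based, binary option
    CONDITION_ACTION = "condition-action"          # rule-based, prescribed response
    CHOICE = "choice"                              # knowledge-based, well-structured
    SCHEDULING = "scheduling"                      # knowledge-based, well-structured
    PROCEDURAL_MANAGEMENT = "procedural management"        # knowledge-based, ill-structured
    CREATIVE_PROBLEM_SOLVING = "creative problem solving"  # knowledge-based, ill-structured

# Rule-based decisions: a prescriptive rule defines the appropriate response.
RULE_BASED = {DecisionType.GO_NO_GO, DecisionType.CONDITION_ACTION}

def is_rule_based(decision: DecisionType) -> bool:
    """True when a prescriptive rule, rather than pilot knowledge and
    experience, defines the situationally appropriate response."""
    return decision in RULE_BASED
```

The grouping mirrors the chapter's distinction: the two rule-based subtypes hinge on accurate situation assessment against predefined conditions, whereas the four knowledge-based subtypes draw on the pilot's own knowledge and experience.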
Analysis of the 94 ASRS reports indicates that rule-based decisions were slightly more
frequent in our sample (54%) than knowledge-based decisions (46%; Orasanu, Fischer, & Tarell,
1993). Three out of four rule-based decisions were Condition-Action decisions, the rest being
Go/No Go decisions. This distribution is not surprising, because Go/No Go decisions occur in
narrowly specified situations during takeoff and landing, whereas Condition-Action decisions
can occur anytime. About a third of the decisions (36%) required choices, and the remainder
were other types of knowledge-based decisions (4% Scheduling, 3% Procedural Management,
and 2% Creative Problem Solving).
DECISION STRATEGIES
The earlier description of decision types was based on properties of the situation. Now
we turn to crew strategies. We describe how crews responded to the various types of decision
events and differences in behaviors associated with more and less effective crew performance.
Crews flying full-mission simulations provided the richest source of strategy data. Little reliable
strategy data could be obtained from ASRS reports due to the self-report nature of these
descriptions. Corroborating strategy data were obtained from the NTSB accident reports.
Simulator Data
Videotapes of crew performance in simulators allowed us to observe decision making in
action rather than relying on post-hoc accounts, as in the other databases. How decision making
evolves over time in response to dynamic situations could be analyzed. These data provided not
only records of behavior but also of crew communication as a “window” into the crew’s
thinking. Within-crew comparisons can be made as each crew faces several decision events, thus
yielding the greatest generality of findings between and within crews.
Crew performance in the simulator was evaluated by two independent expert observers
both online and from videotapes. Operational and procedural errors (not decision behaviors)
were assessed. Crews were rank ordered by error scores and divided into higher and lower
performance groups using a median split. Decision-relevant behaviors of the two groups were
compared, based on their videotaped performance. Time-stamped transcripts of cockpit
conversation and action timelines permitted detailed analyses of communication and decision
behaviors. Our analyses of decision strategies were independent of the initial error assessments
by check pilots.
The decision taxonomy guided our examination of decision behaviors, providing a
structure that directed our focus. Working with aviation experts, we defined behaviors
appropriate to each decision, cues that signaled the problems, available options, temporal
parameters, relevant constraints, and standard procedures. For detailed descriptions of these
analyses see Fischer, Orasanu and Montalvo (1993) and Orasanu (1994).
We found differences between groups in two types of behaviors: (a) strategies specific to
each decision type, and (b) differences in generalized strategies that cut across decision types.
Decision-Specific Strategies
Consider first the Go/No Go decision (the missed approach). Higher performing crews
made the decision significantly earlier than the less effective crews, which provided a greater
safety margin. One reason they could make this decision early was because they had attended to
cues signaling the possibility of deteriorating weather. They sought weather updates as the
approach progressed, and planned for the possibility of a missed approach.
The second decision was a knowledge-based choice decision. After the missed approach
and the hydraulic failure, crews faced the problem of choosing a landing site. An alternate was
listed on their flight plan, but the unexpected hydraulic failure raised constraints that made the
designated alternate a poor choice (short runway with bad weather, mountainous terrain).
Recognizing these constraints, realizing that the designated alternate was not a good option,
retrieving other options, and evaluating them in light of the constraints were all required to make
a good decision. The more effective crews in fact verbalized concern with the constraints,
gathered more information about several options, and took longer to make their decision than the
less effective crews. No differences were found in the number of options considered by the two
groups despite differences in amount of information used to evaluate them. Relatively little
attention (beyond standard checklist procedures) was devoted to defining the problem. The
emphasis was on assessing potential solutions.
In the third type of decision, which required scheduling the manual gear deployment and
alternate flap extension, both the nature of the problem and the actions to be taken were clear.
What had to be decided was how these tasks were to be accomplished. What differentiated the
more and less effective crews was the manner in which the tasks were planned and carried out.
These abnormal procedures were unfamiliar to many crews (being relatively infrequent events)
and required additional work during the normally busy final approach phase of flight.
Preparation included review of the procedures in the checklists and manuals, becoming familiar
with the location of the gear handle, assessing how long the tasks would take, determining when
the tasks would be initiated and their sequencing, and assigning tasks to the crew members.
Higher performing crews reviewed the written guidance in advance, during a low workload
period. They rehearsed what would be done and how (e.g., use the alternate procedure to extend
the flaps to 10 degrees, manually lower the gear, then continue extending the flaps to 15
degrees). Because they had planned for these tasks, the higher performing crews began the tasks
earlier and completed them faster than the lower performing crews, thereby giving themselves a
cushion of time to accomplish other essential tasks and maintaining better control of the aircraft
during the final approach and landing.
Generalized strategies
Strategies that cut across various decisions and characterized higher performing crews
include the following: (a) They monitored the environment closely and appreciated the
significance of cues that signaled a problem; (b) they used more information in making
decisions and if necessary manipulated the situation to obtain additional information in order to
make a decision; (c) they adapted their strategies to the requirements of the situation,
demonstrating a flexible repertoire; (d) they planned for contingencies and kept their options
open when possible; (e) they did not overestimate their own capabilities or the resources
available to them; (f) they appreciated the complexity of decision situations and managed their
workload to cope with it. Less effective crews showed significantly lower levels of all these
behaviors and generally failed to modify their behaviors in response to different types of
situational demands.{2}
INTEGRATION OF DECISION EVENT AND DECISION STRATEGY DATA
Our examination of crew decision making from the perspective of the three different data
sources has led to several converging observations about cockpit decision making. We used the
taxonomy and strategy data to develop a simplified decision-process model appropriate to the
aviation environment, and a model of factors that determine the amount of cognitive work that
must be done to make a decision (a surrogate for decision difficulty, because we presently have
no empirical difficulty data).
A Simplified Decision Process Model
The decision process model we adopted is conceptually a simple one (see Fig. 32.1). It
draws on Klein’s (1993) RPD model and on Wickens and Flach’s (1988) information processing
model. Our model is tailored to the structure of the decision taxonomy and includes only
components that were visible in crew performance in the simulator.
-----------------------------------
Insert Figure 32.1 about here
-----------------------------------
The model consists of two major components: situation assessment and choosing a
course of action. Situation assessment requires definition of the problem and assessment of risk
level and time available to make the decision. Available time appears to be a major determinant
of subsequent strategies. If the situation is not understood, diagnostic actions may be taken, but
only if sufficient time is available. External time pressures may be modified by crews to
mitigate their effects (Orasanu & Strauch, 1994). If risk is high and time is limited, action may
be taken without thorough understanding of the problem.
Selecting an appropriate course of action depends on the affordances of the situation.
Sometimes a single response is prescribed in company manuals or procedures. At other times,
multiple options may exist from which one must be selected, or multiple actions must all be
accomplished within a limited time period. On some rare occasions, no response may be
available and the crew must invent a course of action. In order to deal appropriately with the
situation, the decision maker must be aware of what response options are available and what
constitutes a situationally appropriate process (retrieving and evaluating an option, choosing,
scheduling, inventing).
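The two components and their contingencies can be summarized in a short decision sketch. This Python rendering is our own simplification of the model for illustration; the condition names and return labels are assumptions, not terms from the chapter:

```python
def decide(cues_clear: bool, risk_high: bool, time_available: bool,
           response_prescribed: bool, options: list) -> str:
    """Illustrative sketch of the two-component process model:
    situation assessment first, then selection of a course of action."""
    # Situation assessment: diagnose only if sufficient time is available.
    if not cues_clear:
        if time_available:
            return "diagnose"             # gather information before acting
        if risk_high:
            return "act without full understanding"
    # Choosing a course of action depends on the affordances of the situation.
    if response_prescribed:
        return "apply prescribed response"  # rule-based case
    if len(options) > 1:
        return "evaluate and choose"        # choice or scheduling case
    if not options:
        return "invent course of action"    # creative problem solving
    return "apply single available option"
```

The branch ordering reflects the chapter's claim that available time is a major determinant of strategy: diagnosis is attempted only when time permits, and under high risk with limited time, action may precede thorough understanding.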
ASRS reports revealed the importance of situation assessment. In many cases extensive
diagnostic episodes occurred. These were not minor efforts but decisions in and of themselves,
such as deciding that insufficient information was available to make a good decision and
arranging conditions to get the needed information (e.g., flying by the tower to allow inspection
of the landing gear, or sending a crew member to the cabin to examine an engine or aileron). Certain
diagnostic actions served a dual purpose: The actions could solve the problem as well as provide
diagnostic information about the nature of the problem. The idea seemed to be, “If this action
fixes the problem, we will know what the problem was.”
Efforts are currently under way to validate the components of the process model. In one
set of studies, pilots were asked to sort decision events into piles of scenarios that required
similar decisions (Fischer, Orasanu, & Wich, 1995). Multidimensional scaling analyses suggest
that pilots identified risk, time pressure, situational ambiguity, and response determinacy as
decision-relevant dimensions. Although these aspects verify components of the process model,
further studies are required to shed light on how they contribute to the process for different types
of decisions. The decision-process model can now serve as a frame for analyzing crew
performance in NTSB accident reports and in full-mission simulation.
Decision-Effort Model
Although we do not yet have experimental data on the cognitive demand level or
difficulty of various decision events, we have a model that allows us to predict which decisions
might involve the greatest amount of cognitive work, and where decision errors might be most
likely. The model is based on the two components of the decision- process model. Its two
dimensions are situational ambiguity and response availability, paralleling the processes of
situation assessment and choosing a course of action.
Situation Ambiguity
If a situation is ambiguous, more effort will be required to define the nature of the
problem than if cues clearly specify what is wrong. Three types of ambiguity have been
identified that may differ in their demands on the crew.
Vague cues. These cues are inherently ambiguous and nondiagnostic. They consist of
vibrations, noises, smells, thumps, and other nonengineered cues. Pilot knowledge and
experience are critical to their interpretation. ASRS reports include cases of a ramp vehicle
bumping into parked aircraft, a vibration during flight due to a loose aileron-trim tab, and the
sound of rushing air in the cockpit.
Conflicting cues. Cues of this type are clear and interpretable, often engineered
diagnostic indicators. The ambiguity lies in the simultaneous presence of two or more cues that
signal conflicting situations and imply opposing courses of action. For example, the presence of
a stall warning on takeoff and engine indicators of sufficient power for climb are conflicting
cues.
Uninterpretable cues. Again, these cues in themselves are clear, but in context are
uninterpretable. As a result, the crew may disregard them or suspect that the indicator is faulty.
A case of uninterpretable cues was the rapid loss of engine oil from both engines in synchrony
during an overwater flight. The crew could not imagine a plausible scenario to explain these
indicators, and continued the flight. Only on landing did they discover that caps had been left off
both engine oil reservoirs.
Response Availability
The second dimension determining problem demand level is response availability. The
least work is required if a single response is prescribed to a particular set of cues (rule-based
decisions). More work is required if multiple responses must be evaluated and either one must
be chosen (choice decision) or multiple actions must be prioritized (scheduling decision; Payne,
Bettman, & Johnson, 1993). The greatest effort will be required if no response options are
available and one or more candidates must be created (ill-defined creative problem solving).
Two other factors, time pressure and risk, also enter into the equation, but they probably
operate in different ways. When time pressure is high, little time is available for either diagnosing a
problem or generating and evaluating multiple options, so greater error might be expected than
when time pressure is low (Wright, 1974). The second factor, risk, may induce caution or
increased attention to a problem at moderate levels. At high levels, dysfunctional stress
responses may be expected, such as narrowing of perceptual scan, fixation on inappropriate
solutions, and reduction of working memory capacity (see Stokes, Kemper & Kitey, chapter 18,
this volume).
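The effort model's two dimensions and two modifiers can be expressed as a crude ordinal scoring scheme. The numeric weights below are our hypothetical choices for illustration only; the chapter assigns no numbers, and it notes that the three ambiguity types may in fact differ in their demands:

```python
# Ordinal levels on the model's two dimensions (hypothetical weights).
# The three ambiguity types are scored equally here for simplicity.
AMBIGUITY = {"clear": 0, "vague": 1, "conflicting": 1, "uninterpretable": 1}
RESPONSE = {"prescribed": 0, "multiple": 1, "none": 2}

def predicted_effort(cue_type: str, response_availability: str,
                     time_pressure: bool = False, high_risk: bool = False) -> int:
    """Ordinal prediction of the cognitive work a decision event demands.
    Situational ambiguity and response availability set the base load;
    time pressure and high risk act as modifiers that raise the likelihood
    of error (narrowed scan, reduced working memory) on top of that load."""
    effort = AMBIGUITY[cue_type] + RESPONSE[response_availability]
    if time_pressure:
        effort += 1  # less time to diagnose or to evaluate options
    if high_risk:
        effort += 1  # dysfunctional stress responses at high risk levels
    return effort
```

On this scoring, a clear cue with a prescribed response (the rule-based ideal) scores lowest, while an ambiguous situation with no available response, under time pressure and high risk, scores highest.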
At this point the decision-effort model serves as a framework for examining the relations
among the various elements. We do not yet know whether situation ambiguity and response
availability carry equal weight in terms of cognitive work, but the NTSB accident reports suggest
that situation assessment may be the more vulnerable component.
NTSB Accident Analyses
Our examination of NTSB reports in which crew factors contributed to accidents found
that in most cases crews exhibited poor situation assessment rather than faulty selection of a
course of action based on adequate situation assessment (Orasanu, Dismukes, & Fischer, 1993).
This conclusion is based primarily on crew communications captured by the cockpit voice
recorder. Crews that had accidents tended to interpret cues inappropriately, often
underestimating the risk associated with a problem. For example, several crews have flown into
bad weather on final approach and crashed, rather than removing themselves from a dangerous
situation. A second major factor was that they overestimated their ability to handle difficult
situations or were overly optimistic about the capability of their aircraft. One crew decided to fly
on to their destination on battery power after losing both generators shortly after takeoff.
Unfortunately, the batteries failed before they reached their destination, resulting in loss of flight
displays (NTSB, 1983).
The NTSB recently analyzed flightcrew-involved accidents from 1978 to 1990 (NTSB,
1994). Of the 37 accidents in which crew errors were identified as contributing factors, 25
involved what the authors called “tactical decision errors.” Examples included deciding to
continue the flight in the face of a system malfunction, unstable approach, or deteriorating
weather.
Using our decision taxonomy as a frame to classify the tactical decision errors, we found
that a large proportion of them (66%) were Go/No Go decisions, which should have been the
simplest decisions in terms of response availability. These included rejected takeoffs, descent
below decision height, go-arounds, and diversions. In all but one case, the crew decided to
continue with the current plan in the face of cues that suggested discontinuation. However, in
many of these cases the cues were ambiguous and it was difficult to assess with great confidence
the level of risk inherent in the situation. Most significantly, most of the Go/No Go decisions
were made during the most critical phases of flight, namely takeoff and landing, when time to
make a decision was limited and the cost of an error was highest. Little room was available for
maneuvering or for gathering more information. In contrast, decisions made during cruise, even
very difficult decisions, usually are not burdened with the double factors of time pressure and
high risk. (There are a few notable exceptions like a cockpit fire or rapid decompression.)
Data from our simulator studies provided an additional perspective on this issue. When
the cognitive demands were great, the higher performing crews managed their effort by buying
time (e.g., requesting vectors or holding) or by reducing the load on the captain by shifting
responsibilities to the first officer (e.g., flying the plane). They also used contingency planning
and task structuring to reduce the load. In contrast, lower performing crews apparently tried to
reduce effort by oversimplifying situational complexity. They often acted on the first solution
they generated, even though it was not very satisfactory. They also allowed themselves to be
driven by time pressures and situational demands, rather than managing their "windows of
opportunity."
CONCLUSIONS
Different perspectives on crew decision making were obtained from each of the data
sources we examined. The ASRS reports provided insights into the many different types of
decision events that crews encounter. The simulator data were most useful for providing
evidence on more and less effective decision strategies because of their controlled nature and the
opportunity they afforded to observe multiple crews facing the same situations. The NTSB
analyses were a source of hypotheses about decision difficulty and where crews go wrong in
making decisions. Analysis of different types of decision events allowed us to identify some of
the differences in their underlying requirements and affordances, as well as the strategies most
appropriate to each. Crew performance in a controlled simulator environment revealed some
generic strategies that are beneficial in all decision contexts. These include good situation
assessment, contingency planning, and task management to allow time to make a good decision.
Other strategies are decision-specific and vary considerably, primarily in their temporal aspects.
Effective crew performance consists of flexible application of a varied repertoire of strategies.
Less effective crews did not appear to distinguish among the various types of decisions, applying
the same strategies in all cases regardless of variations in their demands.
Decision difficulty may hinge on situational ambiguity and absence of planned response
options. Time pressure clearly increases the likelihood of poor decisions and has a major impact
on decision strategies. The effect of risk is not yet well understood, but our sorting study
(Fischer, Orasanu, & Wich, 1995) indicates that it is a salient dimension to pilots, especially to
captains. We have not directly examined the effects of high workload on decision error, but we
imagine it might operate like time pressure. The best antidote for both appears to be appropriate
task and situation management behaviors that serve to buy more time or to shed tasks from the
decision maker.
Our findings have several implications for crew training: Programs should emphasize identifying the temporal demands, risks, affordances, and constraints inherent in a problem situation, and should develop skill at adapting decision strategies to match situations. A
theory of naturalistic decision making must be sensitive to significant situational variations and
broad enough to account for a range of effective decision strategies.
ACKNOWLEDGMENTS
We wish to express our appreciation to NASA, Code UL, and to the FAA-ARD for their
support of the research on which this chapter was based. Special thanks go to Eleana Edens, our
project manager at the FAA, for her continued support.
FOOTNOTES
1. The concepts are taken from Rasmussen (1983), but are used somewhat differently here
because they apply primarily to decision situations, not to responses. Skill-based decisions,
Rasmussen’s third category, were not included in our analysis because of their automatic
psychomotor nature.
2. It should be noted that our description of more and less effective strategies is limited by the
flight scenarios used in these studies. Other effective strategies might be observed in situations
differing in features not included here.
REFERENCES
Chidester, T. R., Kanki, B. G., Foushee, H. C., Dickinson, C. L., & Bowles, S. V. (1990).
Personality factors in flight operations: Volume I. Leadership characteristics and crew
performance in a full-mission air transport simulation (NASA Tech. Mem. No. 102259).
Moffett Field, CA: NASA-Ames Research Center.
Fischer, U., Orasanu, J., & Montalvo, M. (1993). Efficient decision strategies on the flight deck.
In R. S. Jensen & D. Neumeister (Eds.), Proceedings of the Seventh International Symposium
on Aviation Psychology (pp. 238-243). Columbus, OH: Ohio State University Press.
Fischer, U., Orasanu, J., & Wich, M. (1995). Expert pilots’ perceptions of problem situations.
In Proceedings of the Eighth International Symposium on Aviation Psychology (pp. 777-
782). Columbus, OH: Ohio State University Press.
Foushee, H. C., Lauber, J. K., Baetge, M. M., & Acomb, D. B. (1986). Crew factors in flight
operations: III. The operational significance of exposure to short-haul air transport
operations (Tech. Mem. No. 88322). Moffett Field, CA: NASA-Ames Research Center.
Hammond, K. R., Hamm, R. M., Grassia, J., & Pearson, T. (1987). Direct comparison of the
efficacy of intuitive and analytical cognition in expert judgment. IEEE Transactions on
Systems, Man, and Cybernetics, 17(5), 753-770.
Hart, S. G., & Wickens, C. D. (1990). Workload assessment and prediction. In H. R. Booher
(Ed.), MANPRINT: An approach to system integration (pp. 257-296). New York: Van
Nostrand Reinhold.
Hutchins, E., & Klausen, T. (1991). Distributed cognition in an airline cockpit. Unpublished
manuscript, University of California, San Diego, CA.
Klein, G. A. (1993). A recognition-primed decision (RPD) model of rapid decision making. In
G. Klein, J. Orasanu, R. Calderwood, & C. Zsambok (Eds.), Decision making in action:
Models and methods (pp. 138-147). Norwood, NJ: Ablex.
National Transportation Safety Board. (1983). Aircraft accident report: Hawker Siddeley 748,
Pinckneyville, IL. Washington, DC: Author.
National Transportation Safety Board. (1990). Aircraft accident report: United Airlines
Flight 232, McDonnell Douglas DC-10-10, Sioux Gateway Airport, Sioux City, Iowa,
July 19, 1989 (NTSB/AAR-91-02). Washington, DC: Author.
National Transportation Safety Board. (1994). A review of flightcrew-involved, major
accidents of U.S. air carriers, 1978 through 1990 (PB94-917001, NTSB/SS-94/01).
Washington, DC: Author.
Orasanu, J. (1994). Shared problem models and flight crew performance. In N. Johnston, N.
McDonald, & R. Fuller (Eds.), Aviation psychology in practice (pp. 255-285). Hants,
England: Avebury Technical.
Orasanu, J., Dismukes, R. K., & Fischer, U. (1993). Decision errors in the cockpit. In L. Smith
(Ed.), Proceedings of the Human Factors and Ergonomics Society 37th Annual Meeting
(Vol. 1, pp. 363-367). Santa Monica, CA: Human Factors and Ergonomics Society.
Orasanu, J., Fischer, U., & Tarrel, R. (1993). A taxonomy of decision problems on the flight
deck. In R. Jensen (Ed.), Proceedings of the Seventh International Symposium on Aviation
Psychology (pp. 226-232). Columbus, OH: Ohio State University Press.
Orasanu, J., & Strauch, B. (1994). Temporal factors in aviation decision making. In L. Smith
(Ed.), Proceedings of the Human Factors and Ergonomics Society 38th Annual Meeting
(Vol. 2, pp. 935-939). Santa Monica, CA: Human Factors and Ergonomics Society.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. New York:
Cambridge University Press.
Rasmussen, J. (1983). Skills, rules, and knowledge: Signals, signs, and symbols, and other
distinctions in human performance models. IEEE Transactions on Systems, Man and
Cybernetics, 13(3), 257-267.
Wickens, C. D., & Flach, J. M. (1988). Information processing. In E. L. Wiener & D. C. Nagel
(Eds.), Human factors in aviation (pp. 111-155). San Diego, CA: Academic Press.
Woods, D. D. (1993). Process-tracing methods for the study of cognition outside of the
experimental psychology laboratory. In G. Klein, J. Orasanu, R. Calderwood, & C.
Zsambok (Eds.), Decision making in action: Models and methods (pp. 228-251). Norwood,
NJ: Ablex.
Wright, P. L. (1974). The harassed decision maker: Time pressures, distractions, and the use of
evidence. Journal of Applied Psychology, 59, 555-561.
FIGURE CAPTIONS
Figure 1. Decision Process Model. The upper rectangle represents the Situation Assessment
component. The lower rectangles represent the Course of Action component. The rounded
squares in the center represent conditions and affordances.