Finding Decisions in Natural Environments: The View from the Cockpit

Judith Orasanu, Ph.D.
NASA-Ames Research Center

Ute Fischer, Ph.D.
Georgia Institute of Technology

In C. Zsambok & G. A. Klein (Eds.), Naturalistic Decision Making. Hillsdale, NJ: Lawrence Erlbaum Associates, 1997

Running Head: Finding Decisions
In keeping with the naturalistic decision making (NDM) tradition of studying “real
people making real decisions” in their everyday contexts, our mission was to understand flight-
related decision making by commercial airline pilots: What constitutes effective flight crew
decision making? What conditions pose problems for crews and lead to poor decisions?
Our initial examination of decision strategies that distinguished more from less effective
crews in simulated flight showed striking variability in the decision behaviors of the most
effective crews. Sometimes the crews were very quick and sometimes they were slow and
methodical. In retrospect we should not have been surprised, but as psychologists we were
looking for simple patterns, such as, “Good crews always make the fastest decisions.”
These observations suggested that the most effective crews tailored their decision
strategies to the situation. Thus, to understand what constitutes an effective decision strategy we
need to understand the problem situations that crews encounter. Our research question was
expanded to: “How can we assess the sensitivity and appropriateness of decision strategies in
light of situational features?”
We adopted an approach used by ethnographers (e.g., Hutchins & Klausen, 1991) and
cognitive engineers (Woods, 1993): close examination of a phenomenon of interest in its
everyday context, seeking natural variations in critical features. Our approach builds on Klein's
(1993) Recognition Primed Decision (RPD) model and on Hammond’s Cognitive Continuum
Theory (Hammond, Hamm, Grassia, & Pearson, 1987). Our work also echoes the theme of
Hart's work on "strategic behavior" (Hart & Wickens, 1990), namely, operators make decisions
that serve overall task goals, capitalizing on their strengths and minimizing work.
A SEARCH FOR DECISION EVENTS IN CONTEXT: DATA SOURCES
As our starting point shifted from strategies to situations, we began a search for decision
events in context. Our initial observations were based on crews “flying” a mission in a high-
fidelity flight simulator, which yielded three distinct types of decisions. However, we realized
that our opportunity to observe decisions was restricted by the particular scenarios used in those
studies, so we sought a broader set of situations that might present other types of decision events.
The Aviation Safety Reporting System (ASRS) database satisfied this need. The ASRS is a confidential reporting system maintained by NASA (with funding from the FAA) to which pilots (and others) can submit reports describing incidents that involved risky or otherwise problematic situations. Key words used to search the database were Problem Solving and Decision Making. The resulting set of incident reports describes diverse events that
required crew decision making. However, because of the self-report nature of the descriptions,
what we know about the actual decision strategies used by crews is what they chose to tell us.
Likewise, information about conditions that may have led to poor decisions is limited. To
address these limitations, we pursued a third data source.
The National Transportation Safety Board’s (NTSB) accident investigations offer deep
analysis of actual crashes, based on crew conversations documented by the cockpit voice
recorder, physical evidence, aircraft systems, and interviews with survivors or observers. We
chose reports in which crew actions were identified by NTSB analysts as contributing or causal
factors in the accidents. These case studies provide a detailed picture of what happened
immediately prior to each accident, what the crew focused on, how they managed the situation,
what decisions were made, and what actions were taken. The analyses are good sources of
hypotheses about contextual factors that make decisions difficult, types and sources of error, and
effective strategies.
What we learned about decision situations and decision strategies from these three data
sources is described in the remainder of this chapter. First we describe six types of decision
events that were identified. Then we describe decision strategies that are associated with each
type of decision and differences in strategies used by more and less effective crews. Finally, we
describe a decision process model and a model of decision effort derived from the first two
activities.
DECISION EVENTS
Simulator Data
Our analyses were based on two full-mission simulator studies conducted at NASA-
Ames Research Center. The first (Foushee, Lauber, Baetge & Acomb, 1986) was designed to
study the effect of fatigue on the performance of 2-member crews. The second (Chidester,
Kanki, Foushee, Dickinson, & Bowles, 1990) investigated leader personality effects in 3-
member crews. All crews were exposed to the same events, which allowed between-crew
comparisons. Crew performance in the simulator was videotaped and all communications were
transcribed.
The scenario flown by all crews included a missed approach at their original destination
due to bad weather (excessive cross-winds) and diversion to an alternate landing site. During
climbout following the missed approach, the main hydraulic system failed. As a result, the gear
and flaps had to be extended by alternate means. Moreover, the flaps could only be set to 15
degrees, resulting in a faster than normal landing speed, and the gear could not be retracted once
extended, meaning that further diversion was not desirable because of fuel constraints.
Three major decisions were present in this scenario: (a) At the original destination, crews
had to decide whether to continue with the final approach or to perform a missed approach. (b)
Once the crew realized that the weather at their destination was not improving, they had to select
an alternate airport. (c) The hydraulic failure required crews to coordinate the flap and gear
extension procedures during final approach, an already high-workload period. How to manage
this coordination was the third decision. These problems imposed different cognitive demands
on the crews: The situations differed in the number of constraints a solution had to satisfy and in
the extent to which a solution was prescribed.
Problem (a) calls for a Go/No Go decision. A course of action is prescribed: If all
facilitating conditions are normal, then Go. If the “go” conditions are not met, an alternate
action is prescribed (No Go condition). Conditions for Go and No Go are clearly defined and the
actions to be taken in both cases are also clearly prescribed. Selecting an alternate landing site as
in problem (b) is an example of a Choice problem. Several legitimate options or courses of
action exist from which one must be selected. No rule prescribes a single appropriate response.
Options must be evaluated in light of goals, possible consequences, and situational constraints
(such as fuel, runway length, or weather). Scheduling problems like problem (c) require the
crews to decide on what is most important to do, when to do it and who will do it. Several tasks
must be accomplished within a restricted window of time with limited resources.
Incident Reports from the Aviation Safety Reporting System
Ninety-four ASRS reports were analyzed in depth and classified in terms of their
precipitating events, phase of flight during which the event and subsequent decisions occurred,
and focus of the decisions. Some 234 decisions were discerned in these cases, because a single
precipitating event often set the stage for a series of decisions. For example, an engine problem
may first require the crew to decide what to do with the engine (shut it down, reduce power to
idle, or continue operation), then to decide whether or not to divert, where to divert, and any
specific considerations about landing configuration as a consequence of the engine problem. Our
analyses of the ASRS reports yielded three additional types of decision events.
Condition-Action Rules. The situation requires recognition of a predefined condition and
retrieval of the associated response. These decisions mirror Klein’s (1993) RPD, but are
prescriptive in the aviation domain. They do not depend primarily on the pilot’s personal
experience with similar cases, but on responses dictated by the industry, company or FAA.
Neither conditions nor options are bifurcated as they are in Go/No Go cases, though both types rely on
underlying rules. Examples include decisions to pull the fire handle in case of an engine fire or
to descend to a lower altitude in case of cabin decompression. Thus, the pilot must know the rule
and then decide whether conditions warrant applying it.
Procedural Management. The essence of this class of decisions is the presence of an
ambiguous situation that is judged to be of high risk. The crew does not know precisely what is
wrong, but recognizes that conditions are out of normal bounds. Standard procedures are
employed to make the situation safe, often followed by landing at the nearest suitable airport.
These decisions look like condition-action rules but lack prespecified eliciting conditions. The
response also is generalized, such as “get down fast.” One case studied was a decision to reduce
cruise speed when an airframe vibration was experienced (which turned out to be due to a loose
aileron trim tab). The defining features of this type of problem are ambiguous high-risk
conditions and a standard procedural response that satisfies the conditions. No specific rules in
manuals or checklists guide this type of decision; pilot domain knowledge and experience are the
source of the action.
Creative Problem Solving. These are ill-defined problems and are probably the least
frequent types of decision events crews ever encounter. No specific guidance is available in
standard procedures, manuals, or checklists to guide the crew to a course of action. The nature
of the problem may or may not be clear. The important distinction from procedural management
situations is that standard procedures will not satisfy the demands of the situation. New solutions
must be invented. Perhaps the most famous case is the DC-10 (UA flight 232) that lost all flight
controls when the hydraulic cables were severed following a catastrophic engine failure (NTSB,
1990). The crew had to figure out how to control the plane. They invented the solution of using
alternate thrust on the two remaining engines to "steer" it.
National Transportation Safety Board Accident Analyses
The six types of decision events just described could account for all problem situations
analyzed in a dozen NTSB accident reports. Because the NTSB seeks to understand causal and
contributing factors in accidents, we used their reports primarily as a source of hypotheses about
decision processes and causes of poor decisions, rather than to expand the set of decision types.
Decision Event Taxonomy
The six types of decisions were identified using simulator performance and ASRS
databases. They fall into two subgroups that differ primarily in whether a prescriptive rule exists
that defines a situationally appropriate response or whether the decision primarily relies on the
pilot’s knowledge and experience. These are referred to as “rule-based” and “knowledge-based”
decisions.{1}
Rule-based decisions include two subtypes: Go/No Go and Condition-Action decisions.
They differ in whether a binary option exists or whether a simple condition-action rule prevails.
The crucial aspect of the decision process for rule-based decisions is accurate situation
assessment. The major impediment is ambiguity. Such decisions are often made under high
time pressure and risk; thus, the industry has prescribed appropriate responses to match
predictable high-risk conditions. Once the situation is recognized, a fast response may be
required for safety. An example is deciding whether to abort a takeoff when an engine fails
some time during the takeoff roll.
Knowledge-based decisions vary in how well structured the problems are and in the
availability of response options. “Well-structured” problems are those in which the problem
situation and available response options are unambiguous and should be known to experienced
decision makers. In one case (“choice” problems), the decision maker must choose one option
after evaluating constraints and outcomes associated with various options. In the second case
(“scheduling” problems), effective performance depends on good judgment about relative
priorities of various tasks and accurate assessment of resources and limitations.
“Ill-structured” problems entail ambiguity, either in the cues that signal the problem or in
the available response options. Cues may be sufficiently vague or confusing that the crew cannot
identify the problem (“procedural management” decisions), or crews do not know what to do even if the problem is understood (“creative problem solving” required).
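As a compact summary, the six-way taxonomy can be read as a classifier over the situational features just described. The sketch below is our paraphrase; the feature names are illustrative, not terms from the chapter.

```python
def classify_decision_event(rule_prescribed: bool,
                            binary_options: bool,
                            well_structured: bool,
                            multiple_concurrent_tasks: bool,
                            standard_response_suffices: bool) -> str:
    """Illustrative mapping from situational features to the six
    decision types; feature names are our paraphrase of the text."""
    if rule_prescribed:  # rule-based decisions
        return "Go/No Go" if binary_options else "Condition-Action"
    if well_structured:  # knowledge-based, well-structured problems
        return "Scheduling" if multiple_concurrent_tasks else "Choice"
    # knowledge-based, ill-structured problems
    return ("Procedural Management" if standard_response_suffices
            else "Creative Problem Solving")
```

For instance, a prescribed binary response (continue or abort) maps to Go/No Go, while an ambiguous situation that a generalized standard procedure can make safe maps to Procedural Management.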
Analysis of the 94 ASRS reports indicates that rule-based decisions were slightly more
frequent in our sample (54%) than knowledge-based decisions (46%; Orasanu, Fischer, & Tarell,
1993). Three out of four rule-based decisions were Condition-Action decisions, the rest being
Go/No Go decisions. This distribution is not surprising, because Go/No Go decisions occur in
narrowly specified situations during takeoff and landing, whereas Condition-Action decisions
can occur anytime. About a third of the decisions (36%) required choices, and the remainder
were other types of knowledge-based decisions (4% Scheduling, 3% Procedural Management,
and 2% Creative Problem Solving).
DECISION STRATEGIES
The earlier description of decision types was based on properties of the situation. Now
we turn to crew strategies. We describe how crews responded to the various types of decision
events and differences in behaviors associated with more and less effective crew performance.
Crews flying full-mission simulations provided the richest source of strategy data. Little reliable
strategy data could be obtained from ASRS reports due to the self-report nature of these
descriptions. Corroborating strategy data were obtained from the NTSB accident reports.
Simulator Data
Videotapes of crew performance in simulators allowed us to observe decision making in
action rather than relying on post-hoc accounts, as in the other databases. How decision making
evolves over time in response to dynamic situations could be analyzed. These data provided not
only records of behavior but also of crew communication as a “window” into the crew’s
thinking. Within-crew comparisons can be made as each crew faces several decision events, thus
yielding the greatest generality of findings between and within crews.
Crew performance in the simulator was evaluated by two independent expert observers
both online and from videotapes. Operational and procedural errors (not decision behaviors)
were assessed. Crews were rank ordered by error scores and divided into higher and lower
performance groups using a median split. Decision-relevant behaviors of the two groups were
compared, based on their videotaped performance. Time-stamped transcripts of cockpit
conversation and action timelines permitted detailed analyses of communication and decision
behaviors. Our analyses of decision strategies were independent of the initial error assessments
by check pilots.
The decision taxonomy guided our examination of decision behaviors, providing a
structure that directed our focus. Working with aviation experts, we defined behaviors
appropriate to each decision, cues that signaled the problems, available options, temporal
parameters, relevant constraints, and standard procedures. For detailed descriptions of these
analyses see Fischer, Orasanu and Montalvo (1993) and Orasanu (1994).
We found differences between groups in two types of behaviors: (a) strategies specific to
each decision type, and (b) differences in generalized strategies that cut across decision types.
Decision-Specific Strategies
Consider first the Go/No Go decision (the missed approach). Higher performing crews
made the decision significantly earlier than the less effective crews, which provided a greater
safety margin. One reason they could make this decision early was because they had attended to
cues signaling the possibility of deteriorating weather. They sought weather updates as the
approach progressed, and planned for the possibility of a missed approach.
The second decision was a knowledge-based choice decision. After the missed approach
and the hydraulic failure, crews faced the problem of choosing a landing site. An alternate was
listed on their flight plan, but the unexpected hydraulic failure raised constraints that made the
designated alternate a poor choice (short runway with bad weather, mountainous terrain).
Recognizing these constraints, realizing that the designated alternate was not a good option,
retrieving other options, and evaluating them in light of the constraints were all required to make
a good decision. The more effective crews in fact verbalized concern with the constraints,
gathered more information about several options, and took longer to make their decision than the
less effective crews. No differences were found in the number of options considered by the two
groups despite differences in amount of information used to evaluate them. Relatively little
attention (beyond standard checklist procedures) was devoted to defining the problem. The
emphasis was on assessing potential solutions.
In the third type of decision, which required scheduling the manual gear deployment and
alternate flap extension, both the nature of the problem and the actions to be taken were clear.
What had to be decided was how these tasks were to be accomplished. What differentiated the
more and less effective crews was the manner in which the tasks were planned and carried out.
These abnormal procedures were unfamiliar to many crews (being relatively infrequent events)
and required additional work during the normally busy final approach phase of flight.
Preparation included review of the procedures in the checklists and manuals, becoming familiar
with the location of the gear handle, assessing how long the tasks would take, determining when
the tasks would be initiated and their sequencing, and assigning tasks to the crew members.
Higher performing crews reviewed the written guidance in advance, during a low workload
period. They rehearsed what would be done and how (e.g., use the alternate procedure to extend
the flaps to 10 degrees, manually lower the gear, then continue extending the flaps to 15
degrees). Because they had planned for these tasks, the higher performing crews began the tasks
earlier and completed them faster than the lower performing crews, thereby giving themselves a
cushion of time to accomplish other essential tasks and maintaining better control of the aircraft
during the final approach and landing.
Generalized Strategies
Strategies that cut across various decisions and characterized higher performing crews
include the following: (a) They monitored the environment closely and appreciated the
significance of cues that signaled a problem; (b) they used more information in making
decisions and if necessary manipulated the situation to obtain additional information in order to
make a decision; (c) they adapted their strategies to the requirements of the situation,
demonstrating a flexible repertoire; (d) they planned for contingencies and kept their options
open when possible; (e) they did not overestimate their own capabilities or the resources
available to them; (f) they appreciated the complexity of decision situations and managed their
workload to cope with it. Less effective crews showed significantly lower levels of all these
behaviors and generally failed to modify their behaviors in response to different types of
situational demands.{2}
INTEGRATION OF DECISION EVENT AND DECISION STRATEGY DATA
Our examination of crew decision making from the perspective of the three different data
sources has led to several converging observations about cockpit decision making. We used the
taxonomy and strategy data to develop a simplified decision-process model appropriate to the
aviation environment, and a model of factors that determine the amount of cognitive work that
must be done to make a decision (a surrogate for decision difficulty, because we presently have
no empirical difficulty data).
A Simplified Decision Process Model
The decision process model we adopted is conceptually a simple one (see Fig. 32.1). It
draws on Klein’s (1993) RPD model and on Wickens and Flach’s (1988) information processing
model. Our model is tailored to the structure of the decision taxonomy and includes only
components that were visible in crew performance in the simulator.
-----------------------------------
Insert Figure 32.1 about here
-----------------------------------
The model consists of two major components: situation assessment and choosing a
course of action. Situation assessment requires definition of the problem and assessment of risk
level and time available to make the decision. Available time appears to be a major determinant
of subsequent strategies. If the situation is not understood, diagnostic actions may be taken, but
only if sufficient time is available. External time pressures may be modified by crews to
mitigate their effects (Orasanu & Strauch, 1994). If risk is high and time is limited, action may
be taken without thorough understanding of the problem.
Selecting an appropriate course of action depends on the affordances of the situation.
Sometimes a single response is prescribed in company manuals or procedures. At other times,
multiple options may exist from which one must be selected, or multiple actions must all be
accomplished within a limited time period. On some rare occasions, no response may be
available and the crew must invent a course of action. In order to deal appropriately with the
situation, the decision maker must be aware of what response options are available and what
constitutes a situationally appropriate process (retrieving and evaluating an option, choosing,
scheduling, inventing).
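The two-stage flow just described might be sketched as follows. This is a rough paraphrase of the model; the condition labels and return values are our own invention, not the chapter's.

```python
def decide(problem_understood: bool, time_available: bool,
           risk_high: bool, options: list) -> str:
    """Rough sketch of the two-stage model: situation assessment,
    then choosing a course of action. Labels are illustrative."""
    # Stage 1: situation assessment
    if not problem_understood:
        if time_available:
            return "take diagnostic action, then reassess"
        if risk_high:
            # act without thorough understanding of the problem
            return "apply a generalized safe response (e.g., land ASAP)"
    # Stage 2: choosing a course of action, per the situation's affordances
    if len(options) == 1:
        return f"retrieve and apply prescribed response: {options[0]}"
    if len(options) > 1:
        return "evaluate options against constraints; choose or schedule"
    return "invent a course of action"
```

Note how available time gates diagnosis, mirroring the observation that external time pressure shapes which branch of the process a crew can afford to take.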
ASRS reports revealed the importance of situation assessment. In many cases extensive
diagnostic episodes occurred. These were not minor efforts but decisions in and of themselves,
such as deciding that insufficient information was available to make a good decision and
arranging conditions to get the needed information (e.g., flying by the tower to allow inspection of the landing gear; sending a crew member to the cabin to examine an engine, aileron, etc.). Certain
diagnostic actions served a dual purpose: The actions could solve the problem as well as provide
diagnostic information about the nature of the problem. The idea seemed to be, “If this action
fixes the problem, we will know what the problem was.”
Efforts are currently under way to validate the components of the process model. In one
set of studies, pilots were asked to sort decision events into piles of scenarios that required
similar decisions (Fischer, Orasanu, & Wich, 1995). Multidimensional scaling analyses suggest
that pilots identified risk, time pressure, situational ambiguity, and response determinacy as
decision-relevant dimensions. Although these aspects verify components of the process model,
further studies are required to shed light on how they contribute to the process for different types
of decisions. The decision-process model can now serve as a frame for analyzing crew
performance in NTSB accident reports and in full-mission simulation.
Decision-Effort Model
Although we do not yet have experimental data on the cognitive demand level or
difficulty of various decision events, we have a model that allows us to predict which decisions
might involve the greatest amount of cognitive work, and where decision errors might be most
likely. The model is based on the two components of the decision-process model. Its two
dimensions are situational ambiguity and response availability, paralleling the processes of
situation assessment and choosing a course of action.
Situation Ambiguity
If a situation is ambiguous, more effort will be required to define the nature of the
problem than if cues clearly specify what is wrong. Three types of ambiguity have been
identified that may differ in their demands on the crew.
Vague cues. These cues are inherently ambiguous and nondiagnostic. They consist of
vibrations, noises, smells, thumps, and other nonengineered cues. Pilot knowledge and
experience are critical to their interpretation. ASRS reports include cases of a ramp vehicle
bumping into parked aircraft, a vibration during flight due to a loose aileron-trim tab, and the
sound of rushing air in the cockpit.
Conflicting cues. Cues of this type are clear and interpretable, often engineered
diagnostic indicators. The ambiguity lies in the simultaneous presence of cues that signal conflicting situations and imply opposing courses of action. For example, the presence of
a stall warning on takeoff and engine indicators of sufficient power for climb are conflicting
cues.
Uninterpretable cues. Again, these cues in themselves are clear, but in context are
uninterpretable. As a result, the crew may disregard them or suspect that the indicator is faulty.
A case of uninterpretable cues was the rapid loss of engine oil from both engines in synchrony
during an over water flight. The crew could not imagine a plausible scenario to explain these
indicators, and continued the flight. Only on landing did they discover that caps had been left off
both engine oil reservoirs.
Response Availability
The second dimension determining problem demand level is response availability. The
least work is required if a single response is prescribed to a particular set of cues (rule-based
decisions). More work is required if multiple responses must be evaluated and either one must
be chosen (choice decision) or multiple actions must be prioritized (scheduling decision; Payne,
Bettman, & Johnson, 1993). The greatest effort will be required if no response options are
available and one or more candidates must be created (ill-defined creative problem solving).
Two other factors enter into the equation, but probably operate in different ways: time
pressure and risk. When time pressure is high, little time is available for either diagnosing a
problem or generating and evaluating multiple options, so greater error might be expected than
when time pressure is low (Wright, 1974). The second factor, risk, may induce caution or
increased attention to a problem at moderate levels. At high levels, dysfunctional stress
responses may be expected, such as narrowing of perceptual scan, fixation on inappropriate
solutions, and reduction of working memory capacity (see Stokes, Kemper & Kitey, chapter 18,
this volume).
At this point the decision-effort model serves as a framework for examining the relations
among the various elements. We do not yet know whether situation ambiguity and response
availability carry equal weight in terms of cognitive work, but the NTSB accident reports suggest
that situation assessment may be the more vulnerable component.
NTSB Accident Analyses
Our examination of NTSB reports in which crew factors contributed to accidents found
that in most cases crews exhibited poor situation assessment rather than faulty selection of a
course of action based on adequate situation assessment (Orasanu, Dismukes, & Fischer, 1993).
This conclusion is based primarily on crew communications captured by the cockpit voice
recorder. Crews that had accidents tended to interpret cues inappropriately, often
underestimating the risk associated with a problem. For example, several crews have flown into
bad weather on final approach and crashed, rather than removing themselves from a dangerous
situation. A second major factor was that they overestimated their ability to handle difficult
situations or were overly optimistic about the capability of their aircraft. One crew decided to fly
on to their destination on battery power after losing both generators shortly after takeoff.
Unfortunately, the batteries failed before they reached their destination, resulting in loss of flight
displays (NTSB, 1983).
The NTSB recently analyzed flightcrew-involved accidents from 1978 to 1990 (NTSB,
1994). Of the 37 accidents in which crew errors were identified as contributing factors, 25
involved what the authors called “tactical decision errors.” Examples included deciding to
continue the flight in the face of a system malfunction, unstable approach, or deteriorating
weather.
Using our decision taxonomy as a frame to classify the tactical decision errors, we found
that a large proportion of them (66%) were Go/No Go decisions, which should have been the
simplest decisions in terms of response availability. These included rejected takeoffs, descent
below decision height, go-arounds, and diversions. In all but one case, the crew decided to
continue with the current plan in the face of cues that suggested discontinuation. However, in
many of these cases the cues were ambiguous and it was difficult to assess with great confidence
the level of risk inherent in the situation. Most significantly, most of the Go/No Go decisions
were made during the most critical phases of flight, namely takeoff and landing, when time to
make a decision was limited and the cost of an error was highest. Little room was available for
maneuvering or for gathering more information. In contrast, decisions made during cruise, even
very difficult decisions, usually are not burdened with the double factors of time pressure and
high risk. (There are a few notable exceptions like a cockpit fire or rapid decompression.)
Data from our simulator studies provided an additional perspective on this issue. When
the cognitive demands were great, the higher performing crews managed their effort by buying
time (e.g., requesting vectors or holding) or by reducing the load on the captain by shifting
responsibilities to the first officer (e.g., flying the plane). They also used contingency planning
and task structuring to reduce the load. In contrast, lower performing crews apparently tried to
reduce effort by oversimplifying situational complexity. They often acted on the first solution
they generated, even though it was not very satisfactory. They also allowed themselves to be
driven by time pressures and situational demands, rather than managing their "windows of
opportunity."
CONCLUSIONS
Different perspectives on crew decision making were obtained from each of the data
sources we examined. The ASRS reports provided insights into the many different types of
decision events that crews encounter. The simulator data were most useful for providing
evidence on more and less effective decision strategies because of their controlled nature and the
opportunity they afforded to observe multiple crews facing the same situations. The NTSB
analyses were a source of hypotheses about decision difficulty and where crews go wrong in
making decisions. Analysis of different types of decision events allowed us to identify some of
the differences in their underlying requirements and affordances, as well as the strategies most
appropriate to each. Crew performance in a controlled simulator environment revealed some
generic strategies that are beneficial in all decision contexts. These include good situation
assessment, contingency planning, and task management to allow time to make a good decision.
Other strategies are decision-specific and vary considerably, primarily in their temporal aspects.
Effective crew performance consists of flexible application of a varied repertoire of strategies.
Less effective crews did not appear to distinguish among the various types of decisions, applying
the same strategies in all cases regardless of variations in their demands.
Decision difficulty may hinge on situational ambiguity and absence of planned response
options. Time pressure clearly increases the likelihood of poor decisions and has a major impact
on decision strategies. The effect of risk is not yet well understood, but our sorting study
(Fischer, Orasanu, & Wich, 1995) indicates that it is a salient dimension to pilots, especially to
captains. We have not directly examined the effects of high workload on decision error, but we
imagine it might operate like time pressure. The best antidote for both appears to be appropriate
task and situation management behaviors that serve to buy more time or to shed tasks from the
decision maker.
Our findings have several implications for crew training: Programs should emphasize
the importance of identifying the temporal demands, risks, affordances, and constraints inherent
in a problem situation and the development of skill at adapting strategies to match situations. A
theory of naturalistic decision making must be sensitive to significant situational variations and
broad enough to account for a range of effective decision strategies.
ACKNOWLEDGMENTS
We wish to express our appreciation to NASA, Code UL, and to the FAA-ARD for their
support of the research on which this chapter was based. Special thanks go to Eleana Edens, our
project manager at the FAA, for her continued support.
FOOTNOTES
1. The concepts are taken from Rasmussen (1983), but are used somewhat differently here
because they apply primarily to decision situations, not to responses. Skill-based decisions,
Rasmussen’s third category, were not included in our analysis because of their automatic
psychomotor nature.
2. It should be noted that our description of more and less effective strategies is limited by the
flight scenarios used in these studies. Other effective strategies might be observed in situations
differing in features not included here.
REFERENCES
Chidester, T. R., Kanki, B. G., Foushee, H. C., Dickinson, C. L., & Bowles, S. V. (1990).
Personality factors in flight operations: Volume I. Leadership characteristics and crew
performance in a full-mission air transport simulation (NASA Tech. Mem. No. 102259).
Moffett Field, CA: NASA-Ames Research Center.
Fischer, U., Orasanu, J., & Montalvo, M. (1993). Efficient decision strategies on the flight deck.
In R. S. Jensen & D. Neumeister (Eds.), Proceedings of the Seventh International Symposium
on Aviation Psychology (pp. 238-243). Columbus, OH: Ohio State University Press.
Fischer, U., Orasanu, J., & Wich, M. (1995). Expert pilots’ perceptions of problem situations.
In Proceedings of the Eighth International Symposium on Aviation Psychology (pp. 777-
782). Columbus, OH: Ohio State University Press.
Foushee, H. C., Lauber, J. K., Baetge, M. M., & Acomb, D. B. (1986). Crew factors in flight
operations: III. The operational significance of exposure to short-haul air transport
operations (Tech. Mem. No. 88322). Moffett Field, CA: NASA-Ames Research Center.
Hammond, K. R., Hamm, R. M., Grassia, J., & Pearson, T. (1987). Direct comparison of the
efficacy of intuitive and analytical cognition in expert judgment. IEEE Transactions on
Systems, Man, and Cybernetics, 17(5), 753-770.
Hart, S. G., & Wickens, C. D. (1990). Workload assessment and prediction. In H. R. Booher
(Ed.), MANPRINT: An approach to system integration (pp. 257-296). New York: Van
Nostrand Reinhold.
Hutchins, E., & Klausen, T. (1991). Distributed cognition in an airline cockpit. Unpublished
manuscript, University of California, San Diego.
Klein, G. A. (1993). A recognition-primed decision (RPD) model of rapid decision making. In
G. Klein, J. Orasanu, R. Calderwood, & C. Zsambok (Eds.), Decision making in action:
Models and methods (pp. 138-147). Norwood, NJ: Ablex.
National Transportation Safety Board. (1983). Aircraft accident report: Hawker Siddley 748,
Pinckneyville, IL. Washington, DC: Author.
National Transportation Safety Board. (1990). Aircraft accident report: United Airlines
Flight 232, McDonnell Douglas DC-10-10, Sioux Gateway Airport, Sioux City, Iowa,
July 19, 1989 (NTSB/AAR-91-02). Washington, DC: Author.
National Transportation Safety Board. (1994). A review of flightcrew-involved, major
accidents of U.S. air carriers, 1978 through 1990 (PB94-917001, NTSB/SS-94/01).
Washington, DC: Author.
Orasanu, J. (1994). Shared problem models and flight crew performance. In N. Johnston, N.
McDonald, & R. Fuller (Eds.), Aviation psychology in practice (pp. 255-285). Hants,
England: Avebury Technical.
Orasanu, J., Dismukes, R. K., & Fischer, U. (1993). Decision errors in the cockpit. In L. Smith
(Ed.), Proceedings of the Human Factors and Ergonomics Society 37th Annual Meeting
(Vol. 1, pp. 363-367). Santa Monica, CA: Human Factors and Ergonomics Society.
Orasanu, J., Fischer, U., & Tarrel, R. (1993). A taxonomy of decision problems on the flight
deck. In R. Jensen (Ed.), Proceedings of the Seventh International Symposium on Aviation
Psychology (pp. 226-232). Columbus, OH: Ohio State University Press.
Orasanu, J., & Strauch, B. (1994). Temporal factors in aviation decision making. In L. Smith
(Ed.), Proceedings of the Human Factors and Ergonomics Society 38th Annual Meeting
(Vol. 2, pp. 935-939). Santa Monica, CA: Human Factors and Ergonomics Society.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. New York:
Cambridge University Press.
Rasmussen, J. (1983). Skills, rules, and knowledge: Signals, signs, and symbols, and other
distinctions in human performance models. IEEE Transactions on Systems, Man, and
Cybernetics, 13(3), 257-267.
Wickens, C. D., & Flach, J. M. (1988). Information processing. In E. L. Wiener & D. C. Nagel
(Eds.), Human factors in aviation (pp. 111-155). San Diego, CA: Academic Press.
Woods, D. D. (1993). Process-tracing methods for the study of cognition outside of the
experimental psychology laboratory. In G. Klein, J. Orasanu, R. Calderwood, & C.
Zsambok (Eds.), Decision making in action: Models and methods (pp. 228-251). Norwood,
NJ: Ablex.
Wright, P. L. (1974). The harassed decision maker: Time pressures, distractions, and the use of
evidence. Journal of Applied Psychology, 59, 555-561.
FIGURE CAPTIONS
Figure 1. Decision Process Model. The upper rectangle represents the Situation Assessment
component. The lower rectangles represent the Course of Action component. The rounded
squares in the center represent conditions and affordances.