Patterns in Cooperative Cognition
The goal of this panel is to discuss and begin to converge on what constitutes generalizable patterns in cooperative cognition. The premise is that by pulling together findings and observations that hold across different research perspectives, domains, and methodologies, we will further our understanding of the fundamentally cooperative nature of cognition. This understanding is central to the mission of human factors research, which is to provide design guidance and insight for new classes of innovative support tools.
Emily S. Patterson, David D. Woods
Cognitive Systems Engineering Laboratory
Institute for Ergonomics, Ohio State University
210 Baker Systems, 1971 Neil Ave., Columbus, OH 43210
Nadine B. Sarter
Institute of Aviation – Aviation Research Laboratory
University of Illinois at Urbana-Champaign
Jennifer Watts-Perotti
Cognitive Systems Engineering Laboratory
In this paper, seven studies of cooperative cognition in complex operational settings conducted by members of
the Cognitive Systems Engineering Laboratory (CSEL) are reviewed. These studies were conducted using a
variety of methodologies, including naturalistic observations as well as more controlled investigations using
scenario-based simulations. Six converging patterns that were observed across these studies are synthesized.
These patterns are: 1) breakdowns in coordination that are signaled by surprise, 2) escalations in activities
following an unexpected event in a monitored process, 3) investments in shared understandings to facilitate
effective communication, 4) local actors adapting original plans created by remote supervisors to cope with
unexpected events, 5) calling in additional personnel when unexpected situations arise, and 6) functional
distributions of cognitive processes during anomaly response. These patterns further our understanding of the
fundamentally cooperative nature of cognition and provide insight for innovative design.
Keywords: anomaly response, common ground, escalation, planning, updating.
Traditionally, studies in cognitive psychology have focused on individual cognition, where a subject
does not have the ability to use resources that would normally be available in a real-world context,
such as other people, machine artifacts, procedures, or experience (Hutchins, 1995). On the other
hand, traditional studies of group problem solving are run with a small, homogeneous, co-located
group that is given a clearly defined problem to solve. Over the past few decades, there has been a
growing movement among many researchers, including members of the Cognitive Systems
Engineering Laboratory (CSEL), to extend these perspectives by studying distributed cognition in
complex operational settings such as aircraft cockpits, space shuttle mission control centers, and
medical operating rooms.
In this paper, the results from seven recent CSEL studies (Table 1) are reviewed and synthesized. The
first series of studies (Sarter and Woods, 1992, 1994, 1997a, 1997b; see Sarter, Woods, and Billings,
1997 for a synthesis) investigated automation surprises in pilot interaction with advanced aircraft
automation. This series of studies built upon the recognition by Wiener (1980) that pilots were
sometimes surprised by the actions of aircraft automation. Sarter and Woods collected corpuses of
cases by asking line pilots to describe situations where the automation surprised them. They then
manipulated the factors identified during the first studies in a high-fidelity simulation with
experienced pilots, using carefully crafted scenarios to elicit automation surprises.
The second series of studies investigated various aspects of how practitioners coordinate during
anomaly response. Johannesen, Cook, and Woods (1994) analyzed real-time coordination and
communication in the operating room (direct observation of neuro-anesthesia teams), where
practitioners were observed to invest in creating a ‘common ground’ for communications before
anomalies occurred so that they could coordinate effectively when they had to respond to an anomaly.
Building on this research, targeted observations were conducted in space shuttle mission control at
NASA Johnson Space Center during simulated and actual shuttle operations. The results capture the
cognitive activities underlying coordination in updating or bringing up to speed incoming personnel
(Patterson and Woods, 1997) and how interacting functionally distributed teams are able to meet the
demands of and avoid failures in anomaly response (Watts, Woods, and Patterson, 1996; Watts and
Woods, 1997). The role of voice loops technology in supporting this coordination was also
investigated (Watts et al., 1996b; see also Woods, 1995; Malin et al., 1991).
The remaining studies (Shattuck and Woods, 1997; Dekker and Woods, 1997) examined teams
involving local actors and distant supervisors and how they coordinate when the local situation
departs from the original plan. Shattuck and Woods (1997) designed scenarios where actual events
went beyond pre-planned guidance (either impasses or new opportunities). Commanders (distant
supervisors) communicated a plan to subordinates (local actors). In addition to the plan, the
commanders also communicated their intent, which was supposed to help the subordinate adapt the
plan to handle anomalies. In a simulated situation, both subordinates and commanders were observed
as they adapted plans to handle unexpected anomalies based on the commander’s intent. Dekker and
Woods (1997) examined how a distant supervisor decided whether or not to intervene and take back
authority from another agent in a deteriorating situation -- management by exception in supervisory
control. The context was a new envisioned world in air traffic management with a new distribution of
authority across controllers, pilots and flight dispatchers.
Table 1: Recent CSEL Studies of Cooperative Cognition

Reference | Domain | Method | Theme
Sarter and Woods (1992, 1994, 1997a, 1997b) | Cockpit aviation | Corpus of cases; scenario-based simulation | Automation surprises in human-machine coordination
Patterson and Woods (1997) | Space shuttle mission control | Direct observations in the field | Updating during shift changes and the on-call model for intervention
Watts et al. (1996a, 1997) | Space shuttle mission control | Direct observations in the field | Coordination across functionally distributed teams in anomaly response
Watts et al. (1996b) | Space shuttle mission control | Direct observations in the field | Auditory CSCW technology that mediates the common ground
Johannesen, Cook, and Woods (1994) | Anesthesiology | Direct observations in the field | Calibrating the team’s common ground before an anomaly occurs
Shattuck and Woods (1997) | Command and control | Scenario-based simulation | Communication of intent from supervisors to local actors
Dekker and Woods (1997) | Air traffic management | Scenario-based simulation | Management by exception as a cooperative architecture
Across these studies from different domains and using different methodologies, several patterns in
cooperative cognition have emerged. These patterns are:
1) how coordination breakdowns are signaled by an agent that is surprised by the behavior of other
agents or of the underlying monitored process that is being influenced by another agent,
2) how anomalies in a monitored process produce an escalation in cognitive and coordinative
demands, which brings out the penalties of poor support for coordination,
3) how investments in shared understandings before anomalies arise facilitate effective coordination
in responding to anomalies, e.g., by preventing coordination surprises,
4) how to balance flexibility and planning when local actors need to adapt plans created by distant
supervisors to cope with unexpected situations,
5) how to effectively update and integrate additional personnel called in when unexpected situations
arise so that they can coordinate as if they had been present from the beginning of the trouble, and
6) how coordination across functionally distributed teams is a robust architecture for meeting the
demands of and avoiding failure in anomaly response.
In the aviation domain, we have identified and shaped conditions that give rise to what we have
referred to as ‘automation surprises’ (Sarter and Woods, 1992; 1994; 1997a; 1997b). These are
situations where practitioners are surprised by actions taken (or not taken) by machine agents, such as
automation in computerized ‘glass’ cockpits. Automation surprises begin with miscommunications
and misassessments between the automation and users, which lead to a gap between the users’
understanding of what the automated systems are set up to do, what they are doing, and what they are
going to do, and the systems’ actual behavior. As a result, the automation ‘flies’ the aircraft into
trouble, and the human supervisor is unaware that this is happening, or will happen, until problems arise.
The evidence shows that the potential for automation surprises is greatest when three factors converge:
1. the automation can act on its own without immediately preceding directions from its human
partner (high autonomy and authority),
2. there are gaps in users’ mental models of how their machine partners work in different
situations, and
3. there is weak feedback about the activities and future behavior of the machine agent (low
observability).
Parallels to ‘automation surprises’ have been observed in human-human interactions. These
observations point to a broader category of cooperative interaction, which we refer to as ‘coordination
surprises,’ of which automation surprises are a subset that occur in interactions between practitioners
and automation. In coordination surprises, an agent is surprised by the way another agent acts on the
distributed cooperative system. A human or machine agent can directly perform an activity that
surprises another agent. Alternatively, agents can perform activities that affect the underlying
monitored process, or the coordination of other distributed agents in the system, in ways that the
surprised agent did not anticipate. This mismatch between an agent’s expectations and the actual
situation is believed to result from the convergence of the same three factors: high autonomy
and authority, gaps in mental models, and low observability.
To illustrate these factors, consider an example of a coordination surprise that was directly observed
during the STS-76 space shuttle mission (Patterson, 1997). Prior to the shuttle Atlantis docking with
the MIR space station, a NASA mission controller responsible for the mechanical systems on the
shuttle (Mech) was surprised by a request by the Russian space agency to close the vent doors prior to
docking. Evidence of the surprise includes prior statements made by the controller that he did not
believe the action would be requested, a look of surprise when the request was made, and a delay in
the timeline because implementing the action took longer than expected. In addition, the observed
controller described the event to his replacement in the next shift by saying “In the unlikely event that
we do it, I didn't want to be stumbling around…then all of a sudden we’re doing this…”
The evolution of the mindsets of the American and Russian space agencies regarding whether or not
to close the vent doors prior to docking is shown in Figure 1. Normally the vent doors are left open
in space to allow oxygen to escape prior to entry. A hydraulic leak during ascent raised concerns that
hydraulic fluid might contaminate the MIR space station. Analyses conducted by both space agencies
showed that the amount of leaked hydraulic fluid was negligible so that there was no need to close the
vent doors prior to docking. In addition, NASA planned to conduct a space walk during the mission,
demonstrating that they were not concerned about the leaked hydraulic fluid contaminating the
interior of the shuttle. During various interactions between the American and Russian space agencies,
the two organizations presented variations on a stance toward the decision of whether or not to close
the vent doors. One day before docking, the Russians announced that they were “90% go” on docking
without closing the vent doors. The observed controllers assumed that this was a final decision not to
close the vent doors.
Sometime between the conference call and the docking, a representative of the American space
agency had a private phone conversation with a representative from the Russian space agency where
the decision not to close the vent doors prior to docking was reversed. This reversal in decision was
not communicated to the personnel in mission control. This coordination surprise illustrates the three
factors that were identified as contributing to mismatch situations. 1) The representative was an agent
that had high autonomy and authority in that he could negotiate the closing of the vent doors without
being directed to do so by the controllers who would have to carry out the action. 2) In fact, the
controllers did not even realize that an American representative could have a private phone
conversation where important plans were negotiated, so there was a gap in their mental models of how
a decision might be reversed after a public statement of a stance toward that decision.
3) In addition, there was also missing feedback in that the representative did not inform the mission
controllers of the reversal in the decision.
Figure 1. Example of a coordination surprise
During all the field observations in all of the domains, when an unusual or unexpected event was
detected, there was an immediate escalation of cognitive and coordinative activities. This pattern can
give rise to several problems because of a fundamental relationship: the greater the trouble in the
underlying system or the higher the tempo of operations, the greater the information processing
activities required to cope with the trouble or pace of activities (Woods et al., 1994). For example,
demands for monitoring, attentional control, information, and communication among team members
(including human-machine communication) all go up with the unusualness (situations at or beyond
margins of normality or beyond textbook situations), tempo and criticality of situations. This means
that the burden of interacting with other agents or an interface tends to be concentrated at the very
times when the practitioner can least afford new tasks, new memory demands, or diversions of his or
her attention away from the job at hand.
As an illustrative example, during ascent of the space shuttle mission STS-76, there was a hydraulic
leak in Auxiliary Power Unit (APU) 3 which triggered escalations in cognitive and coordinative
activities (Watts, Woods, and Patterson, 1996, Watts and Woods, 1997). One of the controllers
responsible for monitoring the health and safety of the mechanical systems noticed an unexpected
drop in hydraulic fluid. The team of controllers immediately calculated the leak rate in order to
recommend an action to the astronauts. Because the leak was small enough not to require an
immediate abort, the controllers then focused on how to best configure the APUs in order to obtain
the most diagnostic information before shutting down the systems while also analyzing if any actions
could be taken to protect other interrelated systems. In parallel with these activities, the controllers
were constantly updating members of their immediate team, the flight director, and other support
controllers who were calling in. In addition, they were giving instructions to be relayed to the
astronauts through the CAPCOM controller. Even before the astronauts entered the orbit phase and
shut down the APUs, additional support personnel were called in to help assess the impacts of the
anomaly on mission plans.
Along with many others, we have observed that practitioners in operational settings rely on a shared
understanding or ‘common ground’ when communicating, and that this common ground is mediated
by artifacts and technology (Clark and Brennan, 1991; McDaniel, Olson, and Magee, 1996; Kraut,
Miller, and Siegel, 1996; Roth, Bennett, and Woods, 1987). In distributed supervisory control,
important elements of the common ground include shared understandings about a referent monitored
process and the responses of distributed agents in relation to the monitored process (Malin et al.,
1991; Johannesen, Cook, and Woods, 1994). In our observations, we have noticed the widespread use
of auditory mediating technologies that support a practitioner in maintaining peripheral awareness of
ongoing activities of other practitioners in relation to a monitored process without disrupting their
ongoing work or the communication process between the monitored parties (e.g., voice loops in space
shuttle mission control, Watts et al., 1996b; train controllers in the London Underground, Heath and
Luff, 1992; voice loops in air carrier operations, Rochlin et al., 1987). With these technological aids,
practitioners are able to ‘listen in’ on the activities of other distributed agents. ‘Listening in’
facilitates coordination in response to events in a monitored process, primes practitioners’
expectations for when other practitioners might need to coordinate with them, and lets them
know what is happening with subsystems of the monitored process that might affect them (Woods,
1995; Watts et al., 1996b).
The common ground is considered vital to effective coordination during responses to anomalies.
Practitioners in several domains were observed to invest in creating a common ground
before there is an obvious reason to do so. Space shuttle mission controllers monitor voice loops that
are not directly relevant to them in case a problem arises where they would then be required to
coordinate with those controllers (Watts et al., 1996b). Medical practitioners proactively update the
other medical practitioners involved in an operation before there is any evidence that there might be a
problem (Johannesen, Cook, and Woods, 1994). Space shuttle mission controllers, who are not
directly scheduled for a mission but who could potentially be called in if a problem arises, are
expected to obtain daily updates from staffed controllers (Patterson and Woods, 1997).
Most of the domains that are studied in the Cognitive Systems Engineering Lab (CSEL) are
distributed supervisory control systems. Distributed supervisory control systems are hierarchical and
cooperative architectures where remote supervisors work through intelligent local actors to control a
process. With this framework, human supervisors, designers, and procedure writers could all be
viewed as remote supervisors who implicitly or directly influence local actors. The distant
supervisors have a broader scope and a better understanding of the overarching goals for the
distributed system. The local actors have privileged access to the monitored process and what is
actually happening “on the ground.”
Shattuck and Woods (1997) investigated how local actors adapted when surprises occurred in
simulated command and control scenarios and how they used their commander’s statement of intent
behind the plan in adapting to unexpected events. At one extreme, practitioners followed the original
plans by rote, exactly as described by the supervisor, with no regard for local complicating factors. At
the other extreme, practitioners acted completely autonomously, leaving their supervisors ‘out of
the loop’ and failing to coordinate with other local actors toward an organizational target. The results
demonstrate the need to strike a cooperative balance between remote supervisors and local actors,
where local actors have the knowledge and authority that they need to respond to unanticipated local
situations in ways that support achieving higher level goals.
Dekker and Woods (1997) observed another aspect of coordination between local actors and distant
supervisors. In an envisioned new form of air traffic management, authority to control flight paths
will be distributed mostly between pilots and company dispatchers. Air traffic controllers will monitor
the aircraft and intervene only to preserve safety -- a management by exception cooperative
architecture (Billings, 1996). Dekker and Woods investigated how this architecture creates a dilemma
for the distant supervisor about whether and when to take back partial or complete control over what
flight path some subset of aircraft will fly. If the supervisor intervenes early (when it is easy to
understand the situation and act constructively), it will tend to be seen as over-intervention. If the
supervisor intervenes late, it will be very difficult to act constructively given the tempo of the
situation and the workload involved in assessing and directing multiple aircraft. This particular case
raises a variety of questions about what shared models, shared information, and common or
overlapping fields of view are needed to support dynamically adjusting the distribution of authority to
match changing constraints.
Under pressure to use expertise more efficiently, many of the observed domains are fundamentally
changing their supervisory control architecture. When situations are routine, fewer staff and less
experienced staff monitor and adjust the underlying process. Only when anomalies occur and
situations depart from routine are more personnel, more expert personnel, and more specialized
personnel brought in to cope with the anomalous situation. Personnel at work during nominal
operations need to recognize off-nominal situations, to call on appropriate expertise, and to call in
additional resources. This ‘on-call’ architecture for supervisory control has the potential to effectively
utilize expert practitioners by using them only when expert knowledge and more intense analyses and
planning are required in order to respond to a problem. This means the distributed supervisory control
system needs to be able to bring increasing expertise to bear smoothly and coordinate multiple
cognitive activities as situations escalate in difficulty and tempo.
When practitioners are called in, they must be updated so that they can function as if they had
been present during the previous activities (Johannesen, Cook, and Woods, 1994; Patterson and
Woods, 1997). They must learn what events have taken place and how plans have been revised as a
result of these events. They also need to know what analyses have been performed. In many
domains, commitments to irreversible decisions are delayed, so it is also important to know the team’s
stance toward critical decisions in order to influence the choice if the opportunity arises.
Practitioners’ expectations for monitoring system parameter values depend on past events and current
system configurations (e.g., a system is leaking so fluid levels are expected to be lower). Knowing
this information helps them to anticipate what to do in the case of contingencies (e.g., open the relief
valve if pressure begins to rise in the suspect system). They must also learn the status of
communications with other agents in the system, such as who has been updated, what written
documents need to be distributed, who has requested permission for changes to plans, and who is
involved in finalizing the commitment to a course of action. Observations in space shuttle mission
control during shift changes revealed that practitioners employ strategies that rely heavily on prior
knowledge and shared understandings to have quick and effective updates. For this reason, the
organization requires controllers who are assigned to provide support in the event of an anomaly to
receive periodic updates before any problems arise.
In many domains, the interdependent cognitive processes of anomaly response are distributed across a
set of functionally distinct but cooperative teams who possess distinct but overlapping expertise and
perspectives. For example, space shuttle ground support consists of many sub-communities such as
operations and engineering with common overall goals. However, each sub-team has distinct
responsibilities, resources, and authority, which lead them to approach these overall goals from
different perspectives.
Watts et al. (1996a, 1997) studied the coordination across these functionally distinct teams during
actual anomalies in shuttle operations. They described a pattern of cooperative advocacy that seemed
to provide a robust mechanism to cope with the demands and potential pitfalls of anomaly response.
Each sub-team developed its own assessment of the anomaly and its consequences for the mission,
along with a response strategy -- a within-team perspective -- and then shared these perspectives in a
series of coordinative meetings. Preparing for a possible critique and actually confronting another
group’s perspective on the situation revealed inaccuracies, gaps, uncertainties, and conflicts. The
process of sharing each sub-team’s assessment stimulated individuals to call to mind other
possibilities, constraints, and side effects.
For example, during the observed anomalous space shuttle mission STS-76, there was a hydraulic
leak in an Auxiliary Power Unit (Figure 2). To avoid the somewhat unlikely but high-risk scenario of
losing capability in another of the three APUs, the operational community (MOD) wanted to avoid
using an APU to test the flight control systems, as would normally be done a day before entry. In
seemingly direct opposition, the engineering community (MER) wanted to use an APU for the flight
control system test in the same way as would be done on a nominal mission in order to gain more
information about the affected systems.
This opposition in stance resulted from the two communities’ differing scopes of responsibility. The
operational community is directly responsible for safety during a particular mission. They believed
that any risk associated with gathering additional information should be justified in terms of how the
information might affect operational decisions on that flight. Since it was unlikely that additional
information would affect the configuration of the systems for entry and there was some risk associated
with using an APU, they were opposed to the test. The engineering community, on the other hand, is
responsible for the safety of the shuttle system design across all the missions. They suggested that
since the risk of using the APU was low, it was a reasonable decision to test the flight control
systems. In this way, they argued that they could ensure that there were no hidden problems with the
flight control system as well as gain information that might be valuable in redesigning the APU.
Figure 2. Example of functionally distinct teams advocating a stance
While debating these stances, it became clear that an alternative plan was acceptable to both
communities. Instead of using an APU to check out the flight control systems, a circulation pump
could be used. This would eliminate the risk of losing capability in either of the remaining two APUs
but also meet the need to test the flight control systems. There was a loss of some information on the
flight control systems that could only be gained by running an APU, but this plan met the most critical
needs of the goals advocated by both communities. It seems unlikely that this plan would have been
formed without having functionally distinct teams openly critiquing alternative options.
Cooperative advocacy as a strategy for coordinative anomaly response assumes that there are multiple
sub-teams or people, each diverse in perspective, background, methods, or goals (Watts and Woods,
1997). Each invests in developing its own assessment or stance and then shares that result with the
other sub-teams or people in a shared event or environment. The
mixing of the separate analyses spawns revision, cross checks for detecting and recovering from
erroneous assessments, and cues that call to mind new possibilities. Cooperative advocacy seems like
an interesting possible method for structuring coordinative activity when there is a wide scope of
factors that could be relevant, when all things cannot be thought through in advance, and when there
are uncertainties and potentially serious consequences as a result of misassessments or mis-plans.
This paper describes patterns in cooperative cognition that were observed in several different
operational settings and investigated through different field research methodologies. These patterns
further our understanding of how cognition is fundamentally cooperative. This understanding could
provide insight into how the design of machine artifacts could aid practitioners in various ways.
Systems could be designed to minimize coordination breakdowns, such as by avoiding coordination
surprises or supporting bringing more knowledge to bear as situations escalate. Alternatively, patterns
of existing cooperative strategies for coping with domain complexities could be augmented and
extended. For example, technologies could be provided to better mediate building and maintaining a
common ground, balance the goals of distant supervisors with the need for local actors to
adapt to unique circumstances, augment updates given to personnel who are called in to respond to an
unexpected situation, and support interactions between functionally distinct teams during anomaly response.
This paper represents contributions from many current and prior members of the Cognitive Systems
Engineering Laboratory, including Lawrence Shattuck, Leila Johannesen, Sidney Dekker, James
Corban, Scott Potter, Matthew Holloway and Marie Walters, as well as many collaborators, including
Richard Cook, Jane Malin, Charles Billings, Emilie Roth and Philip Smith. Two of the authors were
supported under National Science Foundation Graduate Research Fellowships. Any opinions,
findings, conclusions or recommendations expressed in this publication are those of the authors and
do not necessarily reflect the views of the National Science Foundation.
Billings, C. E. (1996). Aviation Automation: The search for a human-centered approach.
Hillsdale, NJ: Erlbaum.
Clark, H.H. & Brennan, S.E. (1991). Grounding in communication. In L. Resnick, J.M. Levine, and
S.D. Teasley (Eds.) Perspectives on Socially Shared Cognition. Washington, D.C.:
American Psychological Association, 127-149.
Dekker S. & Woods D. D. (1997). Management by exception in future air traffic control: An
empirical study of coordination in an envisioned distributed environment. Manuscript
submitted for publication.
Heath, C., & Luff, P. (1992). Collaboration and control: Crisis management and multimedia
technology in London underground line control rooms, Computer-Supported Cooperative
Work (1), 69-94.
Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press.
Johannesen, L. J., Cook, R. I., & Woods, D. D. (1994). Grounding explanations in evolving
diagnostic situations (CSEL Report 1994-TR-03). The Ohio State University, Cognitive
Systems Engineering Laboratory.
Kraut, R. E., Miller, M. D., & Siegel, J. (1996). Collaboration in performance of physical tasks:
effects on outcomes and communication. CSCW '96 Proceedings, Boston, MA., 57-66.
Malin, J., Schreckenghost, D., Woods, D. D., Potter, S., Johannesen, L. J., Holloway, M., & Forbus,
K. (1991). Making Intelligent Systems Team Players. NASA Technical Report 104738,
Johnson Space Center, Houston TX.
McDaniel, S.E., Olson, G.M., & Magee, J.C. (1996). Identifying and analyzing multiple threads in
computer-mediated and face-to-face conversations. CSCW '96 Proceedings, Boston, MA.,
Patterson, E. S. (1997). Coordination across shift boundaries in space shuttle mission control.
(CSEL Report 1997-TR-01). The Ohio State University, Cognitive Systems Engineering Laboratory.
Patterson, E. S., & Woods, D. D. (1997). Shift changes, updates, and the on-call model in space shuttle
mission control. Proceedings of the Human Factors and Ergonomics Society 41st Annual
Meeting. Albuquerque, NM, 243-247.
Rochlin, G. I., La Porte, T. R., & Roberts, K. H. (1987). The self-designing high-reliability
organization: Aircraft carrier flight operations at sea. Naval War College Review, 76-90.
Roth, E. M., Bennett, K., & Woods, D. D. (1987). Human interaction with an 'intelligent' machine.
International Journal of Man-Machine Studies, 27, 479-525.
Sarter, N. B., & Woods, D. D. (1997a). “Teamplay with a Powerful and Independent Agent”: A
Corpus of Operational Experiences and Automation Surprises on the Airbus A-320.
Human Factors, 39(4), 553-569.
Sarter, N. B., & Woods, D. D. (1997b). Mode Errors of Omission and Commission: Observed
Breakdowns in Pilot-Automation Coordination in a Full Mission Simulation Study.
Manuscript submitted for publication.
Sarter, N. B., & Woods, D. D. (1994). Pilot Interaction with Cockpit Automation II: An Experimental
Study of Pilots' Model and Awareness of the Flight Management System. International
Journal of Aviation Psychology, 4, 1-28.
Sarter, N. B., & Woods, D. D. (1992). Pilot Interaction with Cockpit Automation I: Operational
Experiences with the Flight Management System. International Journal of Aviation
Psychology, 2, 303-321.
Sarter, N. B., Woods, D. D., & Billings, C. (1997). Automation Surprises. In G. Salvendy (Ed.),
Handbook of Human Factors/Ergonomics (2nd ed.). New York: Wiley.
Shattuck, L. G., & Woods, D. D. (1997). Communication of intent in distributed supervisory control
systems. Proceedings of the Human Factors and Ergonomics Society 41st Annual
Meeting. Albuquerque, NM, 259-268.
Watts-Perotti, J., & Woods, D. D. (1997). A cognitive analysis of functionally distributed anomaly
response in space shuttle mission control. (CSEL Report 1997-TR-02). The Ohio State
University, Cognitive Systems Engineering Laboratory.
Watts, J., Woods, D. D., & Patterson, E. S. (1996a). Functionally distributed coordination during
anomaly response in space shuttle mission control. Human Interaction with Complex
Systems '96, Dayton, OH.
Watts, J., Woods, D. D., Corban, J., Patterson, E. S., Kerr, R., & Hicks, L. (1996b). Voice loops as
cooperative aids in space shuttle mission control. CSCW '96 Proceedings, Boston, MA.
Wiener, E.L., & Curry, R.E. (1980). Flight-deck automation: promises and problems. Ergonomics,
23(10), 995-1011.
Woods, D. D. (1995). The alarm problem and directed attention in dynamic fault management.
Ergonomics, 38(11), 2371-2393.
Woods, D. D., Johannesen, L. J., Cook, R. I., & Sarter, N. B. (1994). Behind Human Error: Cognitive
Systems, Computers, and Hindsight. Dayton, OH: CSERIAC.