Designing Affordances for Direct Interaction
Axel Roesler, University of Washington, USA
Sarah Churng, University of Washington, USA
Stephen Bader, University of Washington, USA
Haeree Park, University of Washington, USA
Abstract: Affordances are visual indications for potential actions. The concept of affordances lies at the center of design principles for Interaction Design and forms the basis for understanding, sense making, and actionable representations. The term affordance was coined by the ecological psychologist James J. Gibson in 1977. The cognitive psychologist Ulric Neisser explored the implications of affordances in his book "Cognition and Reality" in 1976. He introduced the perceptual cycle as a new model for perception/action coupling, a model that today can be considered a comprehensive model for interactions. Don Norman introduced the term affordances to the design community in his book "The Design of Everyday Things" in 1986. This paper examines the role of affordances for design in light of new findings in cognitive science and neuroscience and presents a framework for designing affordances that are directly actionable as representations that provide the right information for action at the right moment, in the right form, and in the right place. Design principles for affordances are illustrated with the design of a next-generation primary flight display for the commercial flight deck.
Keywords: Visual Design, Interface Design, Designing Information Systems and Architectures, Interaction Design, Cognitive Systems Engineering
Introduction
Visual properties in the world around us guide our interactions with people and processes.
Encounters with people, situations, and events initiate interactions—we act in the world.
Our interpretations and sense making of what is happening lead us to formulate plans
for responses. Our interactions are driven by cues in the world, individual knowledge, and
perspective. Affordances are visual indications for opportunities to act that are present in context
with our capabilities for action. Affordances can show us what to do.
Affordances don’t have to be the product of design—we find many natural features that are
affordances—just as a chair affords sitability, a rock shelf at seating height affords the exhausted
hiker the same sitability as a chair. Every feature in the environment around us affords distinct
interactions while it doesn’t afford others—the ground under our feet affords standing, and water
affords swimming but not standing (Gibson 1979).
We design visual properties in an artifact to enable others to recognize potential for
interactions—how to use a product, operate a car, and make a phone call (Norman 1986). The
visual properties we design for the use of these products and systems can take the form of the
three-dimensional physical product shape, such as a door handle, or the visual design of abstract
representations that form the interface of the product as in the interface of a smart phone app.
Whereas abstract visual representations such as diagrams, symbols, and icons require
interpretation based on conceptual models—knowledge of what something means, how it works,
and what to expect—visual elements that form affordances relate to potential interactions in
context. They can facilitate direct interactions in the situation at hand; we see what can be done
and what should be done in the current situation. Interpretation that draws from previously
acquired knowledge is not required. A chair, for example, becomes meaningful as an artifact that
affords sitability only in the situation that a person wants to sit down after standing for an
extended period of time. This person, at the right moment, encounters an artifact with certain
properties relative to his or her scale and directly perceives the opportunity to sit. This
relationship between an agent and a meaningful artifact is Gibson’s affordance in direct
perception (Gibson 1979). Affordances occur when an artifact, in the context of use, supports
people’s ability to understand and act in the world (Woods and Roesler 2007).
Human-computer interactions are mediated interactions; the operation of technology is
mediated through an interface. When it is designed well, the interface acts as a representation of
the relationship between the operated artifact and the world around the artifact and operator and
provides means for input to control the fit between the artifact and the world. The operator is
neither controlling the artifact nor aspects in the world directly, but reasons and acts in
conversation with the interface. The effectiveness of the interaction mediated through an
interface is the challenge for Interaction Design.
Consider the following example of mixed mediated and direct interaction: the speedometer
on the dashboard of a car displays the speed at which the car is driven. The tachometer displays
the effectiveness of propulsion in the match between vehicle and environment. The two displays,
combined with input devices such as the gas pedal and shift stick for manual transmission,
provide the interface for mediated interaction with the propulsion of the vehicle. But the most
important driving tasks—the view through the windshield onto the road ahead and the steering of
the car with a spatially aligned steering wheel—do not require taking the eyes off the road;
staying on the road and maneuvering around obstacles rely on direct interaction.
The view through the windshield in motion combined with the sound of the engine and other
cues of locomotion, such as the vibration of the cabin and vestibular sense of motion, are
sufficient to operate the car. They can substitute for the instrumentation on the dashboard. The
motion gradients and visual flow give us an approximate understanding of the speed and tell us
the direction we are heading (Gibson 1979; Warren 1988; Cutting and Readinger 2002). The
numbers in the scale of the speedometer are auxiliary; they provide us with detailed information
about our current speed, required to observe the law. Numbers, words, and graphs, as abstract representation elements, require us to interpret or extract meaning from data based on a conceptual model (for example, knowledge of the current speed limit, or of the safest and most fuel-efficient operation of the car with regard to the RPM displayed in the tachometer). Interpreting alphanumeric information takes time, especially when assessing what is going on by comparing several discrete values.
In this paper, we examine design implications for the provision of direct interactions: interactions that are facilitated by affordances, visual properties in the world that support sense making in action. Direct interactions are interactions that do not require
lengthy mental cycling through a conceptual model to understand what is happening or plan for
action that needs to be assessed against expectations and actual outcomes. The action/response to
an affordance in context is direct; the visual properties of the affordance show us what to do, and
they do not require interpretation but lead directly to the appropriate action in a given context.
The design challenge for turning the conceptual model into the visual presentation (the
representation becomes the model) is to turn the conceptual into the perceptual. This is the direct
mapping of abstract representation onto the represented properties in the world and their
functional relationships.
The Theory of Affordances
The term affordance was introduced in 1977 by the American psychologist J. J. Gibson.
Gibson developed an ecological systems view on the relationship between observer and
environment: we develop an understanding of our environment and how we can interact with it
as we perceive the world from our eyes. Gibson studied the relationship between people and their
environment constructed by visual perception and cognition: what we see, how we direct our
observations, and how observations direct action in the world. The term affordance refers to the
relationship between observer and meaningful features in the environment that afford potential
for action. An affordance is neither physical nor phenomenal; visual features become meaningful
by what they afford us to do in the situation at hand. Gibson constructed the term affordance to
capture the meaningful and actionable relationship between observer and features of interest in
the observer’s environment. Affordances are perceived relationships between observers and the
environment that make actions possible. Perception guides action; action becomes perception (Noe 2004).
Gibson’s conception of affordance is a visual property, a directly perceivable visual pattern
in the environment of an observer that—in relationship with the viewer’s intent, capabilities, and
resources—indicates the potential (or necessity) for direct interaction, which is action without
processing as a prerequisite. In other words, we see what can be done and needs to be done at the
current moment, in the current situation, and we act on it without prior reflection. Our action is
based on the visual cue at hand in this situation—seeing is knowing when the alignment between
observations and opportunities for action makes sense to the observer.
Observations—where we look—are driven by selective attention: we understand that
something should be there, we know where to look, and we are being affected by what we see.
Perception and action are coupled. Direct perception implies action.
The direct nature of the affordance forms a relationship between the observer (actor, agent,
practitioner, user, etc.), his/her environment, the situation at hand, his/her engagement, and
his/her capabilities. The action that is the consequence of the encounter with the affordance is the
result of direct perception. Affordances bridge observations with action. Interaction with the
environment generates new observations and so the cycle continues. Gibson’s term affordance
marks a relationship between people and meaningful visual features in the context of their intent
to act.
Conceptual Models in Direct and Mediated Interactions
For a cognitive psychology analysis of the anatomy of interactions between people and their
environment, Ulric Neisser’s perceptual cycle (1976) provides an elegant model for the cycling
through stages of perception, interpretation, and action that constitute interactions (see fig. 1).
In the perception-action coupling of the perceptual cycle, two interwoven processes go on in parallel. In the first, we notice changes in the world, especially changes that do not fit current expectations; we perceive change, and what we notice as different in the environment calls to mind new knowledge. In the second process, called knowledge-driven or top-down processing, we act: knowledge such as explanations and expectations drives what we look for in the present environment. The perceptual cycle captures the dynamic interplay and
mutual interdependence of perception, cognition, and action. Recognizing patterns in situations
guides action; what one can do (and the possibilities for action) directs what one perceives.
The perceptual cycle has led to new models of decision making that focus on action, time
windows, and social structures rather than what options one might debate (Zsambok and Klein
1997). It has led to new methods based on the ethnographic study of cognition in the wild
(Hutchins 1995). The starting point is to observe the interactions between agents, artifacts, and
the world in the field of practice.
In the Schemata section of the perceptual cycle, expectations about the potential
consequences and outcomes of actions form a reference for decision making. Expectations are
projections that are based on explanations—an understanding of how the world works and how
this knowledge applies to the present environment. Explanations form knowledge—conceptual
models or, in Neisser’s words, schemata that allow us to understand observations. In Neisser’s
perceptual cycle, we need knowledge to formulate plans for action and to evaluate outcomes.
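Neisser's cycle can be made concrete as a loop in which a schema directs exploration, exploration samples the world, and the sample in turn modifies the schema. The following Python sketch is our illustrative rendering of that loop; the class names and the toy environment are assumptions for this paper, not part of Neisser's model.

```python
import random

class Environment:
    """Toy world: named locations, each holding an observable feature."""
    def __init__(self, features):
        self.features = features  # location -> feature actually present

    def sample(self, location):
        return self.features[location]

class Schema:
    """Knowledge that directs exploration and is modified by what it finds."""
    def __init__(self, expected):
        self.expected = dict(expected)  # location -> expected feature

    def direct_exploration(self):
        # Top-down: knowledge determines where we look next.
        return random.choice(list(self.expected))

    def update(self, location, observation):
        # Bottom-up: an observation that violates expectation modifies
        # the schema; we "perceive change."
        if self.expected.get(location) != observation:
            self.expected[location] = observation

def perceptual_cycle(schema, env, steps=10):
    for _ in range(steps):
        where = schema.direct_exploration()  # schema directs exploration
        seen = env.sample(where)             # exploration samples the world
        schema.update(where, seen)           # the sample modifies the schema

env = Environment({"road": "clear", "mirror": "car approaching"})
schema = Schema({"road": "clear", "mirror": "empty"})
perceptual_cycle(schema, env)
print(schema.expected)  # the mirror expectation has been revised
```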
In Gibson’s direct perception, affordances engage us in direct interaction that does not
require previous knowledge or planning. When we perceive misfit, our direct interactions realign
the present misfit and eventually create fit.
Figure 1: Ulric Neisser’s Perceptual Cycle
Source: Redrawn from Ulric Neisser's Cognition and Reality (1976)
Mediated into a system of people, artifacts, and world in which they act, affordances occur
when an artifact, in the context of use, supports people’s ability to understand and act in the
world. The term affordance is usually used in the sense of describing a direct mapping or direct
correspondence of how artifacts help someone meet the demands of a role in which they act. Its
contrast is clumsiness or the extra workload demands that occur when there is a poor fit between
the demands of a role and the characteristics of an artifact. A poor fit means there are extra steps
that make sense making and other macrocognitive functions more effortful, deliberative, memory intensive, and more vulnerable to various forms of breakdown (Woods and Roesler 2007).
Direct Interaction
In Gibson’s direct perception, the action by itself becomes the unit of analysis—visual
interaction in the environment is an alignment process between agent and environment that is
dynamic. It is the alignment between the intent of the agent and affordances in the environment
that creates meaning and makes sense.
Warren (1988) describes the flight of the honeybee as the alignment of visual flow of
obstacles between the left and the right eyes of the honeybee. Balanced visual flow between both
eyes means that obstacles are equidistant from both of the bee’s eyes—the bee flies in the middle
of the canyon. The bee doesn't require a cognitive map or conceptual model of its environment. Barriers in the environment form the trajectory of flight. Visual obstacles
signal barriers for locomotion and become affordances for determining a safe trajectory while in
flight.
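The control law implied by this example is simple enough to state as a few lines of code. The sketch below is hypothetical (the flow values and the gain are invented for illustration); it shows how a turn command can fall directly out of the optic-flow imbalance, with no map or conceptual model in the loop.

```python
def steer_by_flow_balance(flow_left: float, flow_right: float,
                          gain: float = 0.5) -> float:
    """Return a turn command from the optic-flow imbalance between two eyes.

    Faster flow on one side means the wall on that side is closer.
    Turning away from the faster flow re-centers the bee; when the flow
    is balanced, the turn command is zero. The alignment itself is the
    affordance; no map of the corridor is built or consulted.
    """
    imbalance = flow_left - flow_right
    return -gain * imbalance  # negative: turn away from the faster side

# Left wall closer -> faster flow on the left eye -> turn right (negative).
print(steer_by_flow_balance(flow_left=2.0, flow_right=1.0))  # -0.5
```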
The righting reflex kicks in when an upright human stumbles and falls. The sensation of falling comprises a mixed visual and vestibular response to being out of
alignment followed by a quick motor correction to posture so that balance can be regained
(Cullen 2012; Borel et al. 2008). Were a conceptual model required for righting, we would all
fall long before we would be able to regain a stable upright position. Balance and collision
avoidance are time-critical, fast-paced interactions with no time for interpretation and evaluating
alternative plans for action.
Common to these examples is that the relationship between observer and visual properties of the environment forms a dynamic representation of alignment. The representation is direct—
changes in the trajectory of the observer misalign the visual properties from an expected frame of
reference (the horizon line is no longer horizontal, the point of origin of visual flow toward the
observer in motion is off-center); compensation for this misalignment brings the observer back to
a safe passage through the environment.
Direct interaction bridges from perception to action in a rapid scan of alignment between
present and expected visual-spatial affordance cues in the situation at hand: Is there a match or
misfit that requires realignment? Realignment forms the action component in the response.
The neuropsychology of reflexes and body orientation presents critical insights about visuo-spatial affordance cues. Reflexes are rapid, predictable motor responses to stimuli. By definition,
reflexes are involuntary and automatic, performed without the aid of conscious thought. “The
righting reflex,” also known as the labyrinthine righting reflex and the righting response, refers
to the body’s reflexive ability to restore itself to an optimal position and resist forces acting to
displace it out of its normal upright state (Purves 2008). This ability allows a human in the
process of falling to recover before toppling over. It involves complicated mechanisms initiated
by the vestibular system, which detects changes in gravity and acceleration, indicating that the
body is not erect. The righting reflex causes the head to move back into position. As the head
corrects, the body follows. More simply put, the righting reflex corrects the body’s orientation
when the perceived frame of reference for visual alignment does not match the expected frame of
reference for alignment of the intended outcome (Cullen 2012; Borel et al. 2008).
Direct interaction is very fast—response time matches the critical pacing in the fast-paced
development of the misalignment. The interaction is direct without delay due to processing that a
conventional conceptual model requires—decoding of abstract representation cues, the extraction
of meaning or sense making from the cues, and the formulation for a plan of action.
Let’s contrast the power of visual representation and pattern-based displays that facilitate
direct interaction with the conventional abstract representation of information in verbal or
numeric form, typical for process monitoring consoles and control rooms. Albeit a higher
precision of abstract representations, the detection of meaningful patterns in numeric data is
slow. A quote from an operator’s response during the 1979 Three Mile Island nuclear accident
hearings illustrates this. Reflecting on what he thought when he was trying to understand what
was happening during the accident as one of five operators in the control room with hundreds of
alarms going off and witnessing frantic data read-outs across thousands of displays in front of
them, he states: “All the information I needed to make the right decision was there—I just couldn’t see it” (Roesler 2009).
The Primary Flight Display
In some applications, operators cannot access cues for direct perception. In extreme
environments such as remote control operations, space missions, and instrument-based flight,
visual information required for spatial operation is not readily available (due to the absence of
light, visual features that change relative to the observer, limited camera views, or delays for the
image transmission from the distant environment). To utilize alignment cues for direct interaction
in such a setting, a visual representation of the controlled artifact's situation (location, relative
distances, alignments, speed, trajectory, trends, etc.) in the distant environment needs to be
designed to serve as an interface for the monitoring and control of location and spatial alignment
of the operated artifact. Interactions in nontangible artificial systems, such as the computation of operational limits, require additional abstract representation elements that form manipulative tokens in the model of interactions with such abstract systems. For example, control decisions for maintaining aerodynamic lift in flight can be supported with an angle of attack display that enhances an airplane's pitch angle display in the standard primary flight display.
Status information consists of relationships between data. Meaningful status representations
include background information, such as relating the current status to the range of normal
operations, showing the trend of change, and providing support for operator assessment of
whether conditions are stable or are spinning out of control.
A pattern-based visual representation offers extended means to visualize the relationship
between several data dimensions and their dynamic behavior relative to safe zones of operations.
This may include data representation distributed across several displays or integrated displays
that create visual momentum (Woods 1984).
An example of a sophisticated visual display that combines alphanumeric data with a
pattern-based visual representation of key data relationships is the primary flight display in a
commercial flight deck. The primary flight display (PFD) represents the current flight situation
both in alphanumeric form and as a visual representation in which the alignment between visual
elements and emergent visual patterns provides affordances for direct interaction (fig. 2). Speed
tape on the left and altitude tape on the right, combined with the vertical speed, show the pilot the
flyability of the airplane in numeric form. Instead of presenting speed and altitude information in
discrete digits, the tapes are analog displays that move to indicate the rate of change. The
relationship between speed and altitude is critical to maintaining stable flight conditions for the
airplane—the flight envelope. The PFD aligns current speed and altitude with the artificial
horizon, the attitude display at its center.
Figure 2: The Primary Flight Display (PFD) is the key tactical flight information display in the commercial flight deck.
The artificial horizon (1) in the attitude display area (2) of the PFD shows pitch and roll of the airplane. The altitude tape (3) indicates the current altitude in the center (shown at 34,900 ft). The speed tape (4) displays the current speed in knots (the current speed in the illustration is 275 knots). Speed is also displayed as Mach (6). The vertical speed display (5) shows
the rate of altitude change in feet per minute. The lateral direction display (7) shows the yaw attitude of the airplane.
Source: Shown is a simplified schematic of an Airbus A-330 Primary Flight Display
The attitude display is a pattern-based display that provides visual cues for the current status and change of the pitch, roll, and yaw of the airplane by alignment of the background horizon
line and the foreground markers in the center. The attitude display provides powerful alignment
cues that form landmark patterns. When the horizon line is aligned level with the center of the
PFD, this means that the airplane flies parallel to the ground at zero pitch, zero roll, and zero
yaw. Departure from this stable condition can be detected effectively. The key to the PFD's effective display of flight attitude, speed, and altitude is its dynamic representation of flight
status change—the animated properties of display change. YouTube features a number of video
captures of PFD displays in action during various stages of flight, taken and posted by pilots
(Keywords A330 PFD, for example: https://www.youtube.com/watch?v=9VfI4Pyam1E,
accessed on December 15, 2014).
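The landmark pattern of stable flight described above (horizon level in the center, zero pitch, zero roll, zero yaw) can be read as a simple alignment test. The sketch below is a toy illustration of that reading; the tolerance and the signal names are assumptions, not avionics values.

```python
def attitude_departures(pitch_deg, roll_deg, yaw_deg, tol_deg=1.0):
    """Return the axes that depart from the level-flight landmark pattern.

    An empty result is the aligned pattern: the horizon sits level in
    the center of the display and no compensation is called for.
    """
    reading = {"pitch": pitch_deg, "roll": roll_deg, "yaw": yaw_deg}
    return {axis: value for axis, value in reading.items()
            if abs(value) > tol_deg}

print(attitude_departures(pitch_deg=15.0, roll_deg=0.3, yaw_deg=0.1))
# {'pitch': 15.0}: the nose-up departure stands out against the pattern
```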
Salience of information is an issue in the design of the commercial flight deck. A related design challenge is getting people to pay attention to the right thing at the right time by design, rather than by training. When things go wrong, the most important information needs to be identifiable. The aviation industry currently doesn't design attention management into the flight deck—the best it does today is provide a reliable alerting system. The following examination of a recent aviation accident illustrates the limits of flight display design and demonstrates the penalty of interactions that—by design—are based on a complex conceptual model as a prerequisite for action when the situation at hand does not provide time for making sense of what is happening.
Air France Flight 447
On June 1, 2009, at 2:14:28 a.m., Air France Flight 447 crashed into the Atlantic Ocean en route
from Rio de Janeiro to Paris; all 228 passengers and crew died in the accident. Due to the remote
crash site in deep waters, it took nearly two years to locate the debris of the plane. A final
accident report was released by the BEA in July 2012. Based on a thorough analysis of the flight
data and recordings of conversations in the cockpit during the accident, the report draws the
conclusion that the accident unfolded as follows (fig. 3): At approximately 2:08 a.m., three hours
and forty-one minutes after it had departed from Rio de Janeiro, the airplane enters a storm that
creates an ice layer over the pitot tubes of the airplane, blocking airflow to the airplane's speed sensors. The loss of speed information causes the airplane's autopilot to disengage at
2:10:05 a.m. The autopilot disconnect warning alerts the pilots to take over manual control of the
airplane. The captain had left the flight deck for an in-flight rest period eight minutes before the autopilot disengaged. In his place, a relief pilot, a rated copilot on board to substitute for crew during rest periods, sits in the left seat. In response to the autopilot disengage alarm, the copilot in the
right seat declares that he has the controls at 2:10:06 a.m. and assumes the role of pilot flying
(PF) while the copilot in the left seat takes on the role of pilot monitoring (PM).
Two key events happen in the first seconds of the accident: because of the loss of reliable
airspeed data, the airplane's flight management system (the automation) changes the flight law
from normal law to alternate law. In alternate law, the airplane doesn’t automatically protect its
flight envelope—the area of operation that provides the airplane with stable aerodynamic lift,
limited by flight altitude, angle of attack, and pitch. In alternate law, manual control can steer the airplane outside the flight envelope, which causes an aerodynamic stall—
the loss of aerodynamic lift, leading the airplane to fall from the sky.
In parallel, at 2:10:06 a.m., the PF begins pulling back his sidestick (fig. 3, call-out E),
leading the airplane to pitch nose up gradually from 0º to 15º. Pulling up the nose this
excessively leads the airplane to climb from 34,900 feet to 38,000 feet within one minute and
four seconds. The airplane enters an aerodynamic stall at 2:10:10 a.m. and stall warnings (“stall”)
sound in the cockpit.
Figure 3: Timeline of the final minutes in the Air France AF447 accident. The letters in the black circles refer to critical
flight stages illustrated as PFD displays in Figures 4 and 5.
The airplane’s airspeed drops drastically, and the airplane loses altitude at a dramatic rate
with a vertical speed of 4,000 feet per minute, eventually reaching 10,000 feet per minute.
The pilots are confused by the cascade of warnings, alarms, and rapid changes of altitude.
The captain reenters the flight deck at 2:11:42 a.m. At 2:13:40 a.m., the PF releases the pull-back of his sidestick, realizing that he had pulled up the nose for the entire three minutes and
thirty-four seconds since he took over control (fig. 3, call-out K). From the flight data that is
presented to them via the flight deck instrumentation, the pilots still do not realize that they are in
the middle of an aerodynamic stall. Control disintegrates quickly. The captain takes over control
at 2:13:43 and pulls the nose of the airplane down to gain speed in order to regain lift, but at this
point in time the airplane is too low in altitude to regain lift by a controlled dive. Forty-five
seconds later, flight 447 crashes into the sea.
We do not know why the pilot flying pulled back the side stick immediately after taking
manual control of the airplane. An article in The Telegraph in response to the 2012 accident report (Ross and Tweedie 2012) and a subsequent CBS News segment (Strassman 2012) make the case for design issues with the sidestick, typical of Airbus flight decks, but flight deck
voice recordings and flight data recorded during the accident provide no clear indication why the
pilot pulled back the stick.
The event that changed the rules of the game was a high-altitude stall that the pilots entered
at 2:10:10 a.m., with the full stall developing when the airplane reached 38,000 feet at approximately 2:11:10 a.m. The stall was triggered by the pull-back of the sidestick and the
consequent 15º pitch up of the nose of the airplane.
During an aerodynamic stall, the airplane falls uncontrollably from the sky. The high-altitude aerodynamic stall that the pilots of flight 447 experienced is a catastrophic event that provides
limited control options with rapidly disintegrating control. The rules that govern normal flight
operations are not applicable anymore; pilots have to radically change their conceptual model of
flight to respond. This is a common problem and one that the industry is trying to address with
training. The response to a low altitude stall is different from a high altitude stall (which was not
at the time trained at Air France) so the knowledge or action patterns that are activated are wrong
or missing for the context. Entering a stall situation illustrates significant design challenges for
the support of sense making for the Primary Flight Display.
Aerodynamic lift is constrained by a minimum and maximum speed at high altitude and
altitude limits for a given speed—the flight envelope. At the beginning of the stall event at
2:10:10 a.m. (see fig. 3, call-out D), the airplane flies at an altitude of 34,900 feet at a speed of 275 knots (316 miles per hour). To understand how these numbers, in combination with the effects of
the side stick pull back, represent a trend into a high-altitude stall requires the pilots to develop a
conceptual model of the flight envelope that is currently not provided to them during training.
This conceptual model is not available as a display in the flight deck; pilots need to mentally
represent and compare the current situation with the boundary conditions of the flight envelope.
The top left area of Figure 3 shows a graph of boundary crossings of the flight envelope that lead
to the aerodynamic stall of flight 447.
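The mental comparison the pilots had to perform, current speed and altitude against the envelope boundaries, can be sketched as a margin computation. The numbers below are invented placeholders (real envelope limits depend on weight, temperature, and Mach); the point is only that the flyable band narrows as altitude rises, which is the trend the display never showed directly.

```python
def envelope_margin(speed_kts, altitude_ft):
    """Return (v_min, v_max, margin) for a toy flight envelope.

    Placeholder model: the stall boundary rises with altitude (thinner
    air) while the maximum operating speed falls as Mach limits bite,
    so the flyable speed band narrows at high altitude.
    """
    v_min = 200 + 2.0 * (altitude_ft / 1000)  # invented stall boundary
    v_max = 330 - 0.5 * (altitude_ft / 1000)  # invented Mach boundary
    margin = min(speed_kts - v_min, v_max - speed_kts)
    return v_min, v_max, margin

for alt in (34900, 38000):
    v_min, v_max, margin = envelope_margin(275, alt)
    print(f"{alt} ft: band {v_min:.0f}-{v_max:.0f} kts, margin {margin:.0f} kts")
# 34900 ft: band 270-313 kts, margin 5 kts
# 38000 ft: band 276-311 kts, margin -1 kts (outside the envelope: stall)
```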
The current PFD shows speed limitations for the flight envelope as markers on the speed
tape, but these markers are not available in the alternate law configuration of the airplane. The
flight-mode change from normal law to alternate law was initiated because the pitot tubes
(sensors used for measuring airspeed) were frozen, delivering invalid data that prevented the flight computer from calculating envelope speed limits. The unreliable speed data
also led the flight director, a cross-hair marker for sidestick input, to disappear during the critical
phases of the development of the stall.
The flight deck of the Airbus A330 provides an optional but not permanent vertical situation display that clearly indicates the altitude trend as a graph in side view, or the airplane pitch in relationship to altitude and flight envelope, as shown in Figure 3. A vertical situation display can be called up but is typically only used at low altitudes to avoid ground obstacles. Today, vertical
situation displays are standard displays in the newest airplanes, such as the Boeing 787 and the Airbus A350.
Flight envelope information is represented in many ways in the flight deck and PFD, but not
in direct relationship with airplane pitch or angle of attack. The information in Figure 3 is only
available to the pilots in the PFD—as a line-of-sight projection in its attitude display, as markers
on the speed tape in normal law, as rate of change in the altitude tape, and as vertical speed
display. The various stages of the event, when represented in the PFD sequence of events (fig. 4),
look much less dramatic than the sequence of events represented in side view (fig. 3).
A leading cause in the accident was that the pilot flying pulled back the sidestick, but
because of the sidestick placement in the outboard side section of the pilot seats, sidestick tilt
cannot be observed by the other pilot. There is a sidestick indicator on the PFD, but it is a contextual indicator, so it comes on only at specific times. If there is dual input, it will appear on the PFD. The flight control logic in the Airbus takes the average of the two inputs as the desired target.
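The averaging behavior can be stated in one line; the sketch below is illustrative (the function name and the input range are our assumptions). It also makes the hazard concrete: opposite full inputs cancel to zero.

```python
def dual_input_target(left_stick: float, right_stick: float) -> float:
    """Average two simultaneous sidestick pitch commands (range -1..1)."""
    return max(-1.0, min(1.0, (left_stick + right_stick) / 2.0))

# Full nose-up on one stick plus full nose-down on the other cancels out:
# neither pilot gets the response they commanded.
print(dual_input_target(1.0, -1.0))  # 0.0
```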
Seventy-five audible stall warning alarms (verbal alerts of the word “stall”) sounded over the
four minutes and eighteen seconds during the high-altitude stall, yet these alarms did not provide
clear feedback as to what caused the stall, nor instructions as to what the pilots could do to
counteract the stall.
The one thing the pilots had to do to save the airplane was to push the sidestick forward.
Remember, the pilots did not know what was happening, and they were getting conflicting and confusing alerts. There was no way for them to know to push the stick forward while they thought that holding the stick back was keeping them out of the stall condition. They needed a representation to change their
mental model of the situation.
Although all the data to make this decision was available to the pilots, it would have required the mental integration of several data dimensions to come up with a response strategy that was rarely exercised in training simulator scenarios. Although the flight computer knew that a pitch-up at the high altitude at which the airplane was flying would rapidly lead to an exit from the flight envelope and cause an aerodynamic stall—and perhaps would have been able to compute responses
to the stall—there was no direct visual cue available to the pilots that would have made them
push the sidestick forward.
Figure 4, D-I shows how the current primary flight display represents the development
toward the stall. Note that the emphasis is on the numeric flight data and audible stall warning
(not represented in the display, see the timeline in Figure 3 for the duration and frequency of the
audible stall warnings). The current PFD provides little trend information or alignment cues as to
how to respond to the stall conditions with the controls at hand.
Figure 4: The critical flight stages D-I of the Air France 447 accident (in reference to Figure 3 as displayed in the current
Primary Flight Display)
Redesign of the Primary Flight Display
The design of appropriate affordances in context with multidimensional data representations can
support operators in making better decisions by helping them detect meaningful patterns in the
deterioration of control.
How could the flight information system of AF 447 be redesigned to provide affordances for
direct interaction with additional visual structure added to the flight deck to facilitate perception
that guides action?
The pilots listened to repetitive stall warnings and watched the speed and altitude tapes decline at unusual rates. There was no direct cue that the airplane's nose had to be pushed down, and there was no pointer that this was to be accomplished by pushing the sidestick forward. Figure 4 shows how subtly the current PFD displayed the radical attitude changes and the airplane's pitch
up. There was misalignment between the anticipated flight trajectory and the airplane’s attitude,
its spatial orientation in pitch, indicating clearly that the airplane’s nose was up. There was no
time to understand what was going on and why this was happening; the pilots needed to know
what to do.
The appropriate response to the high-altitude stall of AF 447 would have been to point the
nose down by pushing the sidestick forward, gain speed in an aggressive descent at the cost of
altitude, then pull the nose up to trade speed and regain altitude, but current displays have no
cues for prompting a midair drop in such situations.
What was needed in the AF 447 scenario was a clear representation of the airplane's nose-up attitude that would make the pilot point the nose down in order to compensate for the misalignment,
an indication that would, at first glance, lead to pushing the sidestick forward to push the nose
down.
Figure 5 illustrates a direct interaction approach to the display of the stall condition and
compensation alignment for the Primary Flight Display as an affordance that prompts pilots to naturally initiate a nose-down correction. The five PFD display stages for key events correspond to
call-outs D-I in the timeline/vertical representation of the flight path in Figure 3 and are
compared with the display of these five flight stages in the current PFD in Figure 4.
The concept design integrates numeric data in speed and altitude tapes plus stall warning
into an emergent, visual representation of airplane attitude misalignment and flight envelope
boundaries that shows the trend of the developing stall condition, culminating in a salient visual
cue to push the nose down.
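One way to read the concept is as a mapping from the data the PFD already has onto a single salience value for the arrow. The sketch below is a design illustration under assumed weights and thresholds; it is not the authors' published algorithm.

```python
def arrow_salience(pitch_deg, speed_trend_kts_s, vspeed_ft_min,
                   envelope_margin_kts):
    """Map flight-state misalignment onto an arrow strength between 0 and 1."""
    s = 0.0
    s += 0.4 * max(0.0, pitch_deg / 15.0)            # nose-up misalignment
    s += 0.3 * max(0.0, -speed_trend_kts_s / 3.0)    # decaying airspeed
    s += 0.2 * max(0.0, -vspeed_ft_min / 4000.0)     # rapid altitude loss
    s += 0.1 * max(0.0, 1.0 - envelope_margin_kts / 20.0)  # envelope proximity
    return min(1.0, s)

# Stage F: climbing at 15 deg nose up, bleeding speed; arrow begins to emerge.
print(arrow_salience(15, -2.0, 1500, 5))     # ~0.68
# Stage I: full stall, falling at 4,000 ft/min; arrow saturates.
print(arrow_salience(15, -3.0, -4000, -10))  # 1.0
```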
The visual representation of the attitude correction aligns with the pilot’s line of sight and
creates a visual coupling between the directional control cue and the directional input using a
sidestick or yoke (following the arrow in the PFD aligns with pushing forward the sidestick, see
fig. 6). This spatial alignment between the sidestick, PFD, and airplane pitch directs a crucial design move, as the flat display needs to match the z-axis intervention of the control stick that stands perpendicular to the x/y space of the PFD screen.
To illustrate the principles for the design of an affordance for direct interaction in the new
PFD concept, let's walk through the five stages of the display in Figure 5 step-by-step in context
with the accident scenario:
The emerging shape of the downward arrow in the right column illustrations of Figure 5 (D–I) guides the pilots to push the nose down. The new PFD concept display provides this cue in the following sequence:
D: As the pilots take over manual control, the airplane flies at 34,900 feet at 275 knots.
E: The PF pulls back the sidestick. Starting with Figure 5-F, the modified PFD displays the sidestick pull-back input as a hairline cross 15º above the horizon. During stages E–G, the airplane gradually achieves the pitch input.
Figure 5: The critical flight stages D–I of the Air France 447 accident (in reference to fig. 3) as displayed in a Primary
Flight Display design concept that provides an additional affordance for direct interaction—a visual cue for
pushing the nose down.
F: With the nose pointed up and the resulting climb and loss of flight speed toward the flight envelope boundary, the downward arrow emerges as an indication of misalignment.
The arrow forms a meaningful pattern as the situation develops. The emergent shape of
the arrow draws the pilots’ attention to the pitch of the airplane and the observed effects
on speed and altitude.
G: The arrow becomes a significant visual element as the pitch angle approaches +10º.
I: The arrow becomes a visual warning as the airplane enters the aerodynamic stall: the
speed drops sharply, and the airplane is losing altitude rapidly. The contrast of the
arrow is increased as its sharp corners break through the attitude display area. The
corners point at the low airspeed at the top left, the falling altitude to the right, and the
associated rapid vertical speed of -4,000 ft/min.
As the airplane enters the stall, the arrow is the dominant visual cue at the right time; it
points out the critical data elements, suggesting pushing down the nose as a
compensation that makes sense in the context of the pressing situation. This makes the
arrow and the spatially aligned push on the sidestick an affordance for direct interaction.
The arrow indicates action, not just information.
Figure 6: The spatial alignment between sidestick push forward direction, arrow direction in the Primary Flight Display,
and resulting nose pitch down.
The pilot cleans up the emergent shape of misalignment by pointing the nose down. The
arrow is aligned with the push direction of the sidestick (fig. 6); following the arrow in the PFD
with a push of the sidestick gradually points the nose down and gradually erases the arrow in the
PFD. Pushing the nose down leads the airplane into a dive that allows it to regain speed;
increased speed brings it back into the flight envelope. The airplane regains lift, which stops the
stall, and the pilot can pull up the nose to climb and regain altitude to resume the regular flight
path.
Once control over the airplane is regained and the airplane flies level, visual elements that
signal pitch misalignment disappear; stability is signaled by the absence of the misalignment
cues.
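The closed loop described here, follow the arrow and the arrow erases itself, can be simulated in a few lines. The gain and the pitch-only cue below are simplifying assumptions building on the earlier salience sketch.

```python
def compensate(pitch_deg, steps=12, stick_gain=5.0):
    """Follow the arrow until it is erased: push in proportion to the cue."""
    for t in range(steps):
        cue = max(0.0, pitch_deg / 15.0)  # arrow strength (pitch term only)
        if cue < 0.05:
            print(f"t={t}: aligned, arrow gone")
            break
        push = stick_gain * cue           # forward stick proportional to arrow
        pitch_deg -= push                 # airplane pitches down in response
        print(f"t={t}: arrow={cue:.2f}, pitch={pitch_deg:.1f} deg")

compensate(pitch_deg=15.0)
# The arrow shrinks as the nose comes down; its absence signals stability.
```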
Conclusion
The comparison between current and new Primary Flight Displays demonstrates the power of a
pattern-based visual display in contrast to an integrated data display. In the data-centered current
display, the pilots need to mentally integrate the numeric data of altitude change, speed change,
and attitude over the artificial horizon to interpret the flight situation. The new pattern-based
display design concept provides additional affordances that facilitate direct interaction. The new
PFD shows the pilots what to do. Instead of providing them only raw data that requires them to
develop an interpretation of the flight situation, it shows the misalignment between the current
attitude and stable flight conditions. Design for direct interaction in the form of the emerging
arrow alerts the pilots and provides them with a salient cue for what to do: compensate the arrow
by pointing down the nose. The arrow emerges as the situation becomes critical and provides an affordance for direct interaction. The visualization becomes the conceptual model—the pilots
need to align and balance sidestick control input so that the arrow is erased by pointing the nose
down by the appropriate amount indicated by the size of the arrow (we would not want the pilot to command full nose-down pitch in alternate law)—this directs their interaction without
premeditation. Data becomes information in context: show me what I need, when I need it, and in
the form I need so that I can act.
The pathway to compensation is alignment. As the pilots intervene, they either see an
approach toward better alignment or the further disintegration of alignment. As they engage in
the compensation, they see how the airplane responds to their control input and can adjust
alignment in the process. The action follows the view. The design challenge is designing the
representation so that it is the conceptual model—seeing becomes knowing.
Acknowledgement
We thank Barbara Holder for her valuable advice and detailed comments in the process of
writing this paper. Many thanks also to Chris Curry for additional feedback, research on flight
envelope visualization, and a design study of angle of attack displays.
REFERENCES
Borel, L., C. Lopez, P. Péruch, and M. Lacour. 2008. “Vestibular Syndrome: A Change in
Internal Spatial Representation.” Neurophysiol Clin 38(6, December): 375–89.
Bureau d’Enquêtes et d’Analyses pour la sécurité de l’aviation civile (BEA). 2012. “Final
Accident Report on the Accident on 1st June 2009 to the Airbus A330-203 Registered
F-GZCP Operated by Air France, Flight AF 447 Rio de Janeiro—Paris.” Accessed
December 15, 2014. http://www.bea.aero/docspa/2009/f-cp090601.en/pdf/f-
cp090601.en.pdf.
Cullen, K. E. 2012. “The Vestibular System: Multimodal Integration and Encoding of Self-
Motion for Motor Control.” Trends in Neurosciences 35 (3, January): 185–96.
Cutting, J. E., and W. O. Readinger. 2002. “Perceiving Motion while Moving, or How Pairwise
Invariants Make Optical Flow Cohere.” Journal of Experimental Psychology: Human
Perception and Performance 28: 731–47.
Gibson, J. J. 1979. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
Hutchins, E. 1995. Cognition in the Wild. Cambridge, MA: MIT Press.
Jusufi, A., Y. Zeng, R. J. Full, and R. Dudley. 2011. “Aerial Righting Reflexes in Flightless
Animals.” Integr Comp Biol 51 (6, December): 937–43.
Neisser, U. 1976. Cognition and Reality. San Francisco: W. H. Freeman.
Noe, A. 2004. Action in Perception. Cambridge, MA: MIT Press.
Norman, D. 1986. The Design of Everyday Things. New York: Doubleday.
Purves, Dale. 2008. Neuroscience. Sunderland, MA: Sinauer.
Roesler, A. 2009. “Lessons from Three Mile Island: The Design of Interactions in a High-Stakes
Environment.” Visible Language 43(2/3): 170–195.
Ross, N., and N. Tweedie. 2012. “Air France Flight 447: Damn it, We’re Going to Crash.” The Telegraph, April 28. Accessed December 15, 2014. http://www.telegraph.co.uk
/technology/9231855/Air-France-Flight-447-Damn-it-were-going-to-crash.html.
Strassman, M. 2012. “Air France 447: Final Report on What Brought Airliner Down.” News segment, CBS This Morning, July 15. Accessed December 15, 2014.
http://www.cbsnews.com/videos/air-france-447-final-report-on-what-brought-airliner-
down/.
Warren, W. H. 1988. “Visually Controlled Locomotion: 40 Years Later.” Ecological Psychology
10 (3–4): 177–219.
Woods, D. D. 1984. “Visual Momentum: A Concept to Improve the Cognitive Coupling of
Person and Computer.” International Journal of Man-Machine Studies 21: 229–44.
Woods, D. D., and A. Roesler. 2007. “Connecting Design and Cognition at Work.” In Product
Experience—Perspectives on Human-Product Interaction, edited by R. Schifferstein and
P. Hekkert, 199–213. Oxford, UK: Elsevier.
Zsambok, C. E., and G. A. Klein, eds. 1997. Naturalistic Decision Making. Mahwah, NJ:
Lawrence Erlbaum Associates.
ABOUT THE AUTHORS
Dr. Axel Roesler: Associate Professor for Interaction Design in the Division of Design,
School of Art, University of Washington, Seattle, Washington, USA.
Sarah Churng: Student, School of Art, University of Washington, Seattle, Washington, USA.
Stephen Bader: Student, School of Art, University of Washington, Seattle, Washington, USA.
Haeree Park: Student, School of Art, University of Washington, Seattle, Washington, USA.