ORIGINAL ARTICLE
Ecological interface design: supporting fault diagnosis of automated advice in a supervisory air traffic control task

Clark Borst (c.borst@tudelft.nl, corresponding author) · Vincent A. Bijsterbosch (v.a.bijsterbosch@gmail.com) · M. M. van Paassen (m.m.vanpaassen@tudelft.nl) · Max Mulder (m.mulder@tudelft.nl)

Control and Simulation, Delft University of Technology, Kluyverweg 1, 2629 HS Delft, Netherlands

Received: 29 April 2017 / Accepted: 4 September 2017 / Published online: 16 September 2017
© The Author(s) 2017. This article is an open access publication

Cogn Tech Work (2017) 19:545–560
https://doi.org/10.1007/s10111-017-0438-y
Abstract Future air traffic control will have to rely on
more advanced automation to support human controllers in
their job of safely handling increased traffic volumes. A
prerequisite for the success of such automation is that the
data driving it are reliable. Current technology, however,
still warrants human supervision in coping with (data)
uncertainties and consequently in judging the quality and
validity of machine decisions. In this study, ecological
interface design was used to assist controllers in fault
diagnosis of automated advice, using a prototype ecologi-
cal interface (called the solution space diagram) for tactical
conflict detection and resolution in the horizontal plane.
Results from a human-in-the-loop simulation, in which
sixteen participants were tasked with monitoring automa-
tion and intervening whenever required or desired, revealed
a significant improvement in fault detection and diagnosis
in a complex traffic scenario. Additionally, the experiment
also exposed interesting interaction patterns between the
participants and the advisory system, which seemed unre-
lated to the fault diagnosis task. Here, the explicit means-
ends links appeared to have affected participants’ control
strategy, which was geared toward taking over control from
automation, regardless of the fault condition. This result
suggests that in realizing effective human-automation
teamwork, finding the right balance between offering more
insight (e.g., through ecological interfaces) and striving for
compliance with single (machine) advice is an avenue
worth exploring further.
Keywords: Ecological interface design · Air traffic control · Automation · Supervisory control · Sensor failure · Decision making
1 Introduction
Predicted air traffic growth, coupled with economic and
environmental realities, forces the future air traffic man-
agement (ATM) system to become more optimized and
strategic in nature (Consortium 2012). One important
aspect of this modernization is the utilization of digital
datalinks between airborne and ground systems via auto-
matic dependent surveillance—broadcast (ADS-B). The
most important benefit of a digital datalink over voice
communication is that it facilitates the introduction of more
advanced automation for efficiently streamlining aircraft
flows, while maintaining safe separations. However, a
prerequisite for the success of such automation is that the
underlying data are reliable and accurate.
Field studies reported mixed findings about the accuracy
of ADS-B position reports. On the one hand, it has been
shown that ADS-B accuracy is already sufficient to
meet separation standards and thus could eventually
replace current radar technology (e.g., Jones 2003). On the
other hand, several studies indicated that offsets between
radar and ADS-B position reports could reach up to 7.5
nautical miles (Ali et al. 2013; Zhang et al. 2011; Smith
and Cassell 2006). Despite the fact that continuous efforts
are being undertaken by the ATM community to improve
the quality of ADS-B reports, such position offsets do
provide an interesting case study for fault detection and
diagnosis in an airspace where ADS-B technology is used
to augment radar data with auxiliary aircraft data, such as
the planned waypoint(s), estimated time of arrival, GPS
and/or inertial navigation system positions and indicated
air speed. In general, these auxiliary data contain essential
information that would let a computer generate optimal
solutions to traffic situations. But unreliable data would render such solutions error prone, demanding human supervision to judge the validity of machine-generated decisions and intervene whenever required.
To support humans in this supervisory control task, this
article focuses on using ecological interface design (EID)
in facilitating fault detection and diagnosis of automated
advice in conflict detection and resolution (CD&R), within
a simplified air traffic control (ATC) context. Here, a
prototype ecological interface, called the solution space
diagram (SSD), is used to study the impact of ambiguous
data (i.e., radar data mixed with ADS-B data) on error
propagation and fault detection and diagnosis performance.
More specifically, the role of explicit (and amplified)
‘means-ends’ relationships between the aircraft plotted on
the electronic radar display (source: radar data) and the
functional information plotted within the SSD (source:
ADS-B data) is investigated.
Note that the topic of EID and sensor failure has been
studied before, albeit in process control for manual oper-
ations of power plants (Burns 2000; St-Cyr et al. 2013;
Reising and Sanderson 2004). Here, the emphasis lies on
studying the impact of explicit means-ends relations, as
opposed to implicit means-ends relations, on judging the
validity and quality of automated advice under data
ambiguities within a highly automated operational context.
The goal of this article is thus to complement aforemen-
tioned studies with new empirical insights about the merits
of the EID approach in supervisory control tasks, where
reduced task engagement, in conjunction with distractions
caused by automation prompting action on its advice, could
potentially conceal sensor failures.
2 Background
2.1 Ecological interface design
EID was first introduced by Kim J. Vicente and Jens
Rasmussen some 25 years ago to increase safety in process
control work domains (Vicente and Rasmussen 1992).
Since that time, several books (e.g., Burns and Haj-
dukiewicz 2004; Bennett and Flach 2011) and numerous
articles have explored EID in a variety of application
domains [see Borst et al. (2015) and McIlroy and Stanton
(2015) for overviews]. In short, the EID framework is
focused on making the deep structure (i.e., constraints) and
relationships in a complex work domain directly visible to
the system operator, enabling the operator to solve prob-
lems on skills-, rules- and knowledge-based behavioral
levels.
Central in the development of an ecological interface is
the abstraction hierarchy (AH), a functional model of the
work domain, independent of specific end users (i.e.,
human and/or automated agents) and specific tasks. In
other words, the AH specifies how the system works (i.e.,
underlying principles and physical laws) and what needs to
be known to perform work, but not how to perform the
work and by whom. The goal (and challenge) of EID is then
to map the identified constraints and relationships of the
AH onto an interface in order to facilitate productive
thinking and problem-solving activities (Borst et al. 2015).
A generic template of the AH is shown in Fig. 1. At the
top level, the functional purpose specifies the desired sys-
tem outputs to the environment. The abstract function level
typically contains the underlying laws of physics governing
the work domain. At the generalized function level, the
constraints of processes and information flows inside the
system are described. The physical function level specifies
processes related to sets of interacting components. Finally,
at the bottom level, the physical form contains the specific
states, shapes and locations of the objects in the system. It
is argued that the AH is a psychologically relevant way
structure information, as it mimics how humans generally
tend to solve problems (i.e., top-down reasoning) (Vicente
1999).
The relations between constraints at different levels of
abstraction have been coined as ‘means-ends’ relations. In
a means-end relation, information found at a specific level
of abstraction is related to information at a lower level if it
can answer how it is accomplished (by means of...) and
related to higher-level information if it can answer why it is
needed (to serve the ends of...). The importance of the AH,
Fig. 1 Rasmussen’s AH, showing means-ends relationships between
the levels of abstraction
and how well and completely its constraints and relations are represented on an interface, not only plays a critical role in the success of any ecological interface in supporting decision making, but also in sensor/system failure detection and diagnosis, as will be discussed in the following section.
2.2 EID and fault diagnosis
A general concern about ecological interfaces has been that
operators may continue to trust them even when the infor-
mation driving them is unreliable (Vicente and Rasmussen
1992; Vicente et al. 1996; Vicente 2002). However, several
empirical studies have proven otherwise [see Borst et al.
(2015) for an overview]. For example, in process control,
Reising and Sanderson compared an ecological interface with a conventional piping and instrumentation diagram for minimally and maximally adequate instrumentation setups (Reising and Sanderson 2004). Results showed that the maximally adequate ecological interface yielded the best failure diagnosis performance, outperforming the conventional interface. The main conclusion drawn in
this research was that interfaces should display all relevant
information to the operator, which becomes crucial in
unanticipated events like sensor failures (Reising and San-
derson 2004). Other comparison studies between conven-
tional and ecological interfaces in process control
(Christoffersen et al. 1998; St-Cyr et al. 2013) and aviation
(Borst et al. 2010) showed similar promising results.
In investigations focused more on the means-ends rela-
tions, Ham and Yoon (2001) compared three ecological
displays of a pressurized water cooling control system of a
nuclear power plant on fault detection performance. The
display that explicitly visualized means-ends relations between the generalized and abstract function levels showed a significant increase in operator performance, indicating an improved awareness of the system that enabled the human to solve unexpected situations (Ham and Yoon 2001).
Similarly, Burns (2000) investigated the effect of spatial
and temporal display proximity of related work domain
items on sensor failure detection and diagnosis. Results
showed that a low level of integration provided the fastest
fault detection time, but the most integrated condition
resulted in the fastest and most accurate fault diagnosis
performance. Interestingly, the most integrated display did
not show more data or displayed it better, but ‘(...) it
showed the data in relation to one another in a meaningful
way’ (Burns 2000, p. 241). This helped particularly in
diagnosing faults, which required reasoning and critical
reflection on the feedback provided by the interface.
To summarize, an ecological interface is not vulnerable
to sensor noise and faults by default, but how well and how
complete the AH is mapped on the interface plays a fun-
damental role in diagnosing faults.
2.3 EID and supervisory control tasks
Current empirical investigations on EID and fault diagnosis
mainly comprised manual control tasks, where the com-
puter is used for information acquisition and integration to
compose the visual image portrayed on the interface. When
computers are entering the realm of decision making and
decision execution, the involvement of the human operator
in controlling a process diminishes, making it seem as if
the human–machine interface becomes less important.
Paradoxically, with more automation, the role of the human
operator becomes more critical, not less (Carr 2014).
Consequently, this implies that in highly automated work
environments, the need for proper human–machine inter-
faces only becomes more important (Borst et al. 2015).
With more automation, the human will be pushed into
the role of system supervisor, who has the responsibilities
to oversee the system and intervene whenever the machine
fails. In general, this means that the computer can calculate
a specific (and optimal) solution and automatically execute
it, unless the human vetoes. Such interaction and role
division between human and automated agents are typi-
cally captured in levels of automation (LOA) taxonomies
(Parasuraman et al. 2000). To successfully fulfill the role of system supervisor, it is thus essential that the human is
able to judge the validity and quality of computer-gener-
ated advice.
Similar to the success of EID in sensor failure diagnosis,
EID may offer a plausible solution in judging the validity and
quality of computer-generated advice in supervisory control
tasks. In terms of the AH, sensor failures on lower-level
components can propagate into wrong computer decisions
on higher functional levels. Without an interface that helps to dissect and 'see through' the machine's advice, it becomes more difficult to evaluate its validity and quality against the system's functional purpose. It is there-
fore expected that explicit means-ends relations will play an
important role in overseeing machine activities and actions.
For future air traffic control, which foresees an increased
reliance on computers capable of making complex decisions,
an ecological interface could help to provide insight into the
rationality guiding the automation, resulting in more
‘transparent’ machines (Borst et al. 2015).
3 Ecological interface for ATC
3.1 Work domain of CD&R
In a nutshell, the job of an air traffic controller entails
separating aircraft safely while organizing and expediting
the flow of air traffic through a piece of airspace (i.e.,
sector) under his or her control. Controllers monitor aircraft
Cogn Tech Work (2017) 19:545–560 547
123
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
movements and the separation by using a plan view display
(PVD), i.e., an electronic radar screen. The criterion for
safe separation is keeping aircraft outside each other’s
protected zone—a puck-shaped volume, having a radius of
5 nm horizontally and 1000 ft vertically.
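As a minimal illustration of this separation criterion (not part of the study's software; the function and variable names are ours), a loss of separation between two aircraft can be checked as follows:

```python
import math

# Protected zone dimensions used above: 5 nm horizontal radius, 1000 ft vertically.
HORIZONTAL_MIN_NM = 5.0
VERTICAL_MIN_FT = 1000.0

def loss_of_separation(x1_nm, y1_nm, alt1_ft, x2_nm, y2_nm, alt2_ft):
    """Return True if two aircraft are inside each other's protected zone."""
    horizontal_nm = math.hypot(x2_nm - x1_nm, y2_nm - y1_nm)
    vertical_ft = abs(alt2_ft - alt1_ft)
    return horizontal_nm < HORIZONTAL_MIN_NM and vertical_ft < VERTICAL_MIN_FT

# Example: 4 nm apart at the same flight level -> separation is lost.
print(loss_of_separation(0.0, 0.0, 29000.0, 4.0, 0.0, 29000.0))  # True
```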
In previous research, the work domain of airborne self-
separation (i.e., CD&R for pilots) has been analyzed and
summarized in an AH (Dam et al. 2008; Ellerbroek et al.
2013). This control problem is similar to the work domain
of an air traffic controller, but with the difference that a
controller is responsible for more than one aircraft. As
such, an adaptation of the AH for self-separation has been
made for air traffic control purposes. The resulting AH and
the corresponding interface mappings on an augmented
PVD are shown in Fig. 2 and will be explained in the
following sections.
3.2 Solution space diagram
Central in the AH and the augmented radar screen is the
portrayal of locomotion constraints for the controlled air-
craft (see Fig. 2). The circular diagram, showing triangular
velocity obstacles within the speed envelope of the selected
aircraft, is called the solution space diagram (SSD). This
diagram integrates several constraints found on lower levels of the AH into a presentation of how the aircraft surrounding the controlled aircraft affect its solution space in terms of heading and speed. It is thus essentially a
visualization of the abstract function level. The way the
SSD is constructed is graphically explained step-by-step in
Fig. 3. For more details on the design, the reader is referred
to previous work (e.g., Dam et al. 2008; Ellerbroek et al.
2013; Mercado Velasco et al. 2015).
The SSD enables controllers to detect conflicts (i.e.,
when the speed vector of a controlled aircraft lies inside a
conflict zone) and avoid a loss of separation by giving
heading and/or speed clearances to the controlled aircraft
that will direct the speed vector outside a conflict zone.
Any clearance that will move the speed vector into an
unobstructed area will lead to safe separation, but may not
always be optimal. That is, a safe and productive clearance
would direct an aircraft into a safe area that is closest to the
planned destination waypoint. A safe and efficient clear-
ance would be one that results in the smallest state change
and the least additional track miles relative to the initial
state and planned route. Thus, any combination that bal-
ances safety (e.g., adopting margins), efficiency and pro-
ductivity would be possible. The SSD does not dictate any
specific balance, but leaves it up to the controller (and his/
her expertise) to decide on the best possible strategy,
warranted by situation demands.
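The article does not give the SSD computation in code, but the underlying test follows the standard velocity-obstacle construction sketched in Fig. 3. The following Python sketch (illustrative function and parameter names; positions in nm, velocities in kts) checks whether a candidate own velocity falls inside the conflict zone generated by one neighboring aircraft. Sweeping candidate headings and speeds over the envelope between Vmin and Vmax and marking the velocities that fail this test reproduces the triangular zones drawn in the SSD:

```python
import math

PZ_RADIUS_NM = 5.0  # protected zone radius

def inside_conflict_zone(p_own, v_own_candidate, p_other, v_other,
                         pz_radius=PZ_RADIUS_NM):
    """True if the candidate own velocity lies inside the velocity obstacle of
    the other aircraft, i.e., the relative velocity points into its protected
    zone (no look-ahead time limit is applied in this simplified sketch)."""
    rx, ry = p_other[0] - p_own[0], p_other[1] - p_own[1]
    dist = math.hypot(rx, ry)
    if dist <= pz_radius:                  # already inside the protected zone
        return True
    # Relative velocity of the controlled aircraft with respect to the other.
    vx = v_own_candidate[0] - v_other[0]
    vy = v_own_candidate[1] - v_other[1]
    speed = math.hypot(vx, vy)
    if speed == 0.0:
        return False
    # Collision cone: half-angle around the line of sight to the other aircraft.
    half_angle = math.asin(pz_radius / dist)
    cos_angle = (vx * rx + vy * ry) / (speed * dist)
    angle = math.acos(max(-1.0, min(1.0, cos_angle)))
    return angle < half_angle

# Example: head-on geometry, own aircraft flying straight at the intruder.
print(inside_conflict_zone((0, 0), (450, 0), (40, 0), (-450, 0)))  # True
```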
Linking the conflict zones to their corresponding aircraft
on the radar screen is encoded implicitly in the SSD, as
indicated by the dashed lines in the AH shown in Fig. 2.
Fig. 2 Abstraction hierarchy, with means-ends relationships, of CD&R for air traffic control, along with corresponding interface mappings on a PVD
Fig. 3 The solution space diagram (SSD), showing the triangular velocity obstacle (i.e., conflict zone) formed by aircraft B within the speed envelope of the controlled aircraft A. (a) Traffic geometry, (b) conflict zone in relative space, (c) conflict zone in absolute space, (d) resulting solution space diagram for aircraft A
That is, from the shape and orientation of the conflict zone a controller can reason about the locations, flight directions and proximities of neighboring aircraft. In Fig. 4 it can be
seen that the cone of the triangle points toward, at a slight
offset, the neighboring aircraft and the width of the triangle
is large for nearby aircraft and small for far-away aircraft.
Additionally, drawing an imaginary line from the aircraft
blip toward the tip of the triangle indicates the absolute
speed vector of a neighboring aircraft. As such, with the
shape and orientation of the conflict zones, a controller
would be able to link aircraft to their corresponding conflict
zones. Thus, in this way the controller is able to move from
higher-level functional information down toward lower-
level objects.
3.3 Information requirements and error
propagation
Composing the SSD requires information from sensors. In
ATC, the sensors for surveillance are the primary and
secondary radar systems that, combined, can gather aircraft
position, groundspeed, altitude (in flight levels) and call-
sign. This information is insufficient to construct the SSD,
as it requires accurate information about the aircraft speed
envelope (in indicated and true airspeed), destination
waypoint(s), flight direction and current velocity.
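Purely as an illustration of which data item comes from which source (the field names below are assumptions, not the study's actual data model), the split between radar-derived and ADS-B-derived information could be captured as:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RadarTrack:
    """Surveillance data obtainable from primary/secondary radar."""
    callsign: str
    position_nm: Tuple[float, float]          # plan-view position
    groundspeed_kts: float
    flight_level: int

@dataclass
class AdsbReport:
    """Auxiliary data assumed to arrive via the ADS-B datalink."""
    callsign: str
    gps_position_nm: Tuple[float, float]
    speed_envelope_kts: Tuple[float, float]   # (Vmin, Vmax), indicated airspeed
    track_deg: float                          # flight direction
    velocity_kts: Tuple[float, float]         # current velocity vector
    destination_waypoints: List[str]
```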
Currently, the ATM system is undergoing a modern-
ization phase where aircraft are being equipped with ADS-B
that can broadcast such information to airborne and ground
systems via digital datalinks. Given that continuous efforts
are being undertaken to improve ADS-B in terms of reli-
ability and accuracy, it is very likely that position and
direction information will still remain available from
ground-based radar systems, and that ADS-B is used to
augment radar data with auxiliary data, such as GPS
position, destination waypoint(s), speed envelopes and
speed vectors. Consequently, this implies that discrepan-
cies between ADS-B and the radar image may arise,
resulting in an ambiguity between the aircraft position
shown on the PVD (source: surveillance radar) and the
representation of the conflict zone (source: ADS-B).
Several studies in Europe and Asia have reported fre-
quently occurring ADS-B position errors reaching up to 7.5
nm (Ali et al. 2013; Smith and Cassell 2006; Zhang et al.
2011). The main causes for ADS-B not meeting its performance standards are: (1) frequency congestion due to other avionics using the same 1090-MHz frequency spectrum, (2) delays in the broadcast messages and (3) missed update cycles, resulting mostly in in-trail position errors.
In Fig. 5 it can be seen how an in-trail ADS-B position
error can propagate into a misalignment of an aircraft
conflict zone and its corresponding radar position. Inter-
estingly, the ambiguity between the conflict zone orienta-
tion and the radar position creates a false solution space in
between the two conflict zones. That is, placing the speed
vector of the controlled aircraft into this area will eventu-
ally result in a loss of separation with the bottom-right
aircraft.
Identifying and diagnosing the validity of the solution
space requires the controller to link the conflict zones to the
aircraft plots on the PVD. In this case, the ADS-B error
would be relatively easy to spot because of the low traffic
density. However, one can imagine that under increased
traffic density and complexity, the error will be obscured
due to a more complex SSD, demanding a more explicit
representation of means-ends relations.
Another factor complicating the identification of a position error is the distance between the controlled and observed aircraft. In Fig. 6, it can be seen that the larger the distance d, the smaller the visual offset angle Δθ of the conflict zone. At distances d > 50 nm, an in-trail position offset of 7.5 nm will not be noticeable anymore by visually inspecting the SSD. For aircraft separation purposes, the look-ahead time will generally encompass
Fig. 4 Implicit means-ends relations between conflict zones and aircraft plots shown on the radar screen
Fig. 5 The effect of an ADS-B in-trail position error on the visualization of the conflict zone within the SSD. (a) Without ADS-B error, (b) with ADS-B error
5 min, corresponding to approximately 40 nm for a med-
ium class commercial aircraft at cruising speed. Thus also
here a more explicit representation of means-ends relations
will presumably be helpful in distinguishing between
nearby (i.e., high priority) and far-away (i.e., low priority)
aircraft/conflicts. In terms of human performance, this
could mean the difference between taking immediate
action versus adopting a ‘wait-and-see’ strategy.
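A crude way to see this distance effect (a geometric proxy only; it follows the trend of Fig. 6 but not its exact construction) is to compute how much the apparent bearing to the observed aircraft, and hence the orientation of its conflict zone, shifts when the ADS-B position lags the radar plot along that aircraft's own track. The names below are illustrative:

```python
import math

def bearing_deg(from_xy, to_xy):
    return math.degrees(math.atan2(to_xy[1] - from_xy[1], to_xy[0] - from_xy[0]))

def conflict_zone_offset_deg(p_own, p_other_radar, other_track_deg,
                             in_trail_error_nm=7.5):
    """Angular shift of the apparent direction to the observed aircraft when
    its ADS-B position lags the radar plot by in_trail_error_nm along its
    own track (a proxy for the visual offset of its conflict zone)."""
    t = math.radians(other_track_deg)
    p_adsb = (p_other_radar[0] - in_trail_error_nm * math.cos(t),
              p_other_radar[1] - in_trail_error_nm * math.sin(t))
    d_theta = bearing_deg(p_own, p_adsb) - bearing_deg(p_own, p_other_radar)
    return (d_theta + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)

# Worst-case geometry (track perpendicular to the line of sight): the offset
# angle shrinks roughly with 1/distance, in line with the trend of Fig. 6.
for d in (10, 25, 50, 100):
    print(d, round(conflict_zone_offset_deg((0, 0), (d, 0), 90.0), 1))
```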
3.4 Toward explicit means-ends relations
In the study of Burns (2000), means-ends relations were
made salient with close spatial and temporal proximity of
related display elements. In the SSD, related elements (i.e.,
aircraft and their conflict zones) already have a close
proximity on the electronic radar screen, allowing a con-
troller to link them together, as illustrated in Fig. 4. With
more traffic, however, the SSD can also become more
complex and cluttered (e.g., overlapping conflict zones),
potentially diminishing the benefit of close proximity on
fault detection and diagnosis. To negate this effect, it may
be required to further amplify the relations between aircraft
blips and their corresponding conflict zones.
One way to amplify the means-ends links is by making
use of the mouse cursor device, as illustrated in Fig. 7. To
support top-down linking, clicking on a conflict zone in the
SSD will highlight the corresponding aircraft on the radar
screen. To support bottom-up linking, hovering the mouse
cursor over an aircraft on the radar screen will highlight its
corresponding velocity obstacle. This could enable a con-
troller to more easily match triangles to aircraft, thus
expediting the detection of errors/mismatches, especially in
more complex traffic scenarios.
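Conceptually, the amplification is nothing more than a two-way lookup between conflict-zone glyphs and aircraft blips. A minimal, hypothetical sketch (the display objects, callsigns and identifiers are made up for illustration):

```python
class Display:
    """Stand-in for a radar screen or an SSD view that can highlight elements."""
    def __init__(self, name):
        self.name = name
    def highlight(self, element_id):
        print(f"{self.name}: highlight {element_id}")

# Two-way lookup between conflict-zone glyphs and aircraft callsigns.
zones_by_aircraft = {"AC001": ["zone-7"], "AC002": ["zone-2", "zone-5"]}
aircraft_by_zone = {z: cs for cs, zones in zones_by_aircraft.items() for z in zones}

def on_conflict_zone_clicked(zone_id, radar):
    """Top-down linking: clicking a triangle in the SSD highlights the
    corresponding aircraft blip on the radar screen."""
    radar.highlight(aircraft_by_zone[zone_id])

def on_aircraft_hovered(callsign, ssd):
    """Bottom-up linking: hovering over an aircraft blip highlights its
    velocity obstacle(s) in the opened SSD."""
    for zone_id in zones_by_aircraft.get(callsign, []):
        ssd.highlight(zone_id)

on_conflict_zone_clicked("zone-5", Display("radar"))  # -> highlight AC002
on_aircraft_hovered("AC001", Display("SSD"))          # -> highlight zone-7
```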
4 Experiment design
4.1 Participants
Sixteen participants volunteered in the experiment, all
students and staff at the Control and Simulation Depart-
ment, Faculty of Aerospace Engineering, Delft University
of Technology. All participants were familiar with both the
ATC domain (and ‘best practices’ in CD&R) and the SSD
from previous experiments and courses, but none of them
had professional ATC experience (see Table 1).
Given the goal and nature of this experiment (i.e.,
studying sensor failures and their impact on judging auto-
mated advice when using an ecological display), prior
knowledge of and experience with the SSD were required.
Concretely, this meant that participants were aware of how
the SSD is constructed (i.e., what aircraft information is
needed), what information is portrayed, and how to use the
SSD to control aircraft, all of which allowed for reduced
training time. Note that, in general, ecological interfaces
are not intuitive by default and would always require some
form of training and deep understanding before people can
exploit the power of such representations (Borst et al.
2015).
4.2 Tasks and instructions
The control task of the participants, as illustrated in Fig. 8,
was to monitor automation that would occasionally give
advice on how to solve a particular conflict (i.e., CD&R) or
how to clear an aircraft to its designated exit point. The
advisories remained valid for 30 s. During that time, par-
ticipants needed to diagnose the validity of the advisories
(by inspecting the SSDs) and rate their quality, or, level of
agreement, by dragging a slider in the advisory dialog
window (see Fig. 8). Finally, advisories could be either
accepted or rejected by clicking on one of two buttons in
the dialog window. In case no accept/reject action was
undertaken within those 30 s, the automation would always
Fig. 6 Sensitivity analysis of the visual offset angle Δθ as a function of distance d, at a fixed in-trail position offset
Fig. 7 Explicit means-ends relations, by either (a) clicking on a conflict zone in the SSD (top-down) or (b) hovering the mouse cursor over an aircraft (bottom-up)
automatically execute its intended advice. This level of
automation closely resembled ‘Management-by-Exception’
(Parasuraman et al. 2000).
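The timing logic of this Management-by-Exception scheme is easy to express; the sketch below is our own simplification of what the description above implies (the callback names are hypothetical):

```python
import time

ADVISORY_VALIDITY_S = 30.0   # advisories remained valid for 30 s

def run_advisory(advisory, get_operator_response, execute, clock=time.monotonic):
    """Management-by-Exception: the operator may accept or reject within the
    validity window; without a response, the automation executes its advice."""
    deadline = clock() + ADVISORY_VALIDITY_S
    while clock() < deadline:
        response = get_operator_response()   # 'accept', 'reject' or None
        if response == "accept":
            execute(advisory)
            return "accepted"
        if response == "reject":
            return "rejected"                # aircraft now under manual control
    execute(advisory)                        # expired -> automation acts anyway
    return "expired"

# Demo with a fake clock and an operator who never responds: the advice
# auto-executes once the 30 s window has elapsed.
fake_clock = iter(range(0, 100, 10)).__next__
print(run_advisory("HDG 240", lambda: None,
                   lambda adv: print("executing", adv), clock=fake_clock))
```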
During the supervisory control task, the solution spaces
of all aircraft could be examined, but aircraft could not be
controlled manually. However, an aircraft only became available for manual control once it had received an advisory, irrespective of that advisory being accepted, rejected or expired. Following an advisory, the color of the aircraft in question would become, and remain, blue as an indication of that aircraft being available
for manual control.
Participants were told that, in case of accepting a conflict resolution advisory, the automation would not steer the aircraft back to its exit waypoint some time later. Thus accepting such
an advisory always required at least one manual control
action further on to put the aircraft back on its desired
course. Upon rejecting a conflict resolution advisory, at
least two manual control actions were required (i.e.,
resolving the conflict and clearing the aircraft to its exit
point). For an exit clearance advisory, either no (accept) or
one (reject) manual control action was required. For air-
craft under manual control, participants could give heading
and/or speed clearances by clicking and dragging the speed
vector within the SSD, followed by pressing ENTER on the
keyboard to confirm the clearance.
Finally, it was emphasized to the participants that they
had to carefully inspect the advisories based on the SSD
and the overall traffic situation shown on the radar display.
They were told that during the experiment, errors could
occur in the position reports needed to construct the SSD
(and thus also the advisory), resulting in a mismatch between the radar positions and the conflict zones. Given the
prior SSD knowledge of the participants, they knew that
position mismatches could either manifest in off-track or
in-trail errors, each having a different effect on the dis-
played conflict zones (see Figs. 4, 5 for off-track and in-
trail offsets, respectively). Participants were, however,
unaware of how many aircraft featured an error as well as
the exact nature of the position error.
4.3 Independent variables
In the experiment, three independent variables were
defined:
1. Availability of amplified means-ends links, with levels
‘Off’ and ‘On’ (between participants),
2. Sensor failure, having levels ‘No Fault’ and ‘Fault’
(within participants) and
3. Scenario complexity, featuring levels ‘Low’ and
‘High’ (within participants).
The rationale for making the amplified means-ends links
a between-participant variable was to prevent participants
from signaling the absence of the means-ends feature as a
system failure. Note that based on an inquiry on the types
and number of previous experiments the participants have
been involved with (see Table 1), an effort was undertaken to form two balanced groups in order to prevent their prior experiences from confounding the means-ends manipulation.
The sensor fault always featured an ADS-B in-trail
position offset of 7.5 nm, which was found in literature to
be a realistic, frequently occurring error. In the scenarios
with a sensor failure, only one aircraft emitted incorrect
ADS-B position reports and would affect the solution
spaces of aircraft receiving an advisory. Additionally, the
in-trail position error always made the ADS-B position
report lag behind the radar plot (see Fig. 5).
Scenario complexity was a derivative of structured
versus unstructured air traffic flows. By keeping the num-
ber of aircraft inside the sector approximately equal
between two complexity levels, the average conflict-free
solution space for the unstructured, high complexity situ-
ation was smaller. The rationale for the two traffic struc-
tures was that a position offset of an aircraft flying in a
stream of multiple aircraft would be easier to spot within an SSD (see Fig. 5) than for aircraft flying from and in different directions (see Fig. 7).
Table 1 Participant background information
Profile: 9 M.Sc. students, 4 Ph.D. students, 2 Assistant Profs., 1 Full Prof.
Age: 22–47 years (average 27)
SSD experiments: 1–6 (average 2)
Fig. 8 Simulator screen, showing the control task of participants in
which they needed to assess the quality and validity of automated
advice and either accept or reject it
4.4 Traffic scenarios and automation
The two levels of scenario complexities were determined
by the sector shape, routing structure and the location of
crossing points (see Fig. 9).
The sector used for the high complexity condition fea-
tured a more unstructured routing network with crossing
points distributed over the airspace. This would require
participants to divide their focus of attention, or area of
interest, potentially inciting more workload. Additionally, a
more unstructured airspace would result in aircraft having a
reduced available solution space, thus making it more
difficult to resolve potential conflicts.
The sector representing low complexity had a more
organized airway system with crossing points clustered
around the center of the airspace. Finally, each sector (and
thus experimental condition) featured on average the same
number of aircraft that were inside the sector simultane-
ously and had approximately the same size.
Two runs with each of the two sectors were performed,
i.e., one run for each failure condition. To prevent scenario
recognition, dummy scenarios were used in between actual
measurement scenarios and measurement scenarios were
rotated 180°. For example, conditions 'High Complexity-No Fault' and 'High Complexity-Fault' both featured sector 2, but rotated over 180°.
In order to test multiple traffic situations per trial, and to
keep the trials repeatable and interesting, the simulation ran
at three times faster than real time. This resulted in a traffic
scenario of 585 s, which ran for 195 s in the simulation.
This was chosen such that four consecutive advisories of
30 s could be given without any overlap, with 15 s initial
adjustment time, 15 s in between advisories and 15 s
manual run-out time after the last advisory. In the scenarios
including a sensor failure, three out of four advisories
would be affected by this failure and would thus be
incorrect.
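The timing figures above are internally consistent, as this small check shows (values taken directly from the text):

```python
# 15 s settling + 4 advisories of 30 s + 3 gaps of 15 s + 15 s manual run-out.
run_seconds = 15 + 4 * 30 + 3 * 15 + 15
time_compression = 3                      # simulation ran 3x faster than real time
print(run_seconds)                        # 195 s of simulation run time
print(run_seconds * time_compression)     # 585 s of traffic covered
```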
All advisories were scripted rather than being generated
by an algorithm. This simplification was facilitated by both
the predefined traffic scenarios and the simulated high
LOA (supervisory control task), eliminating the need for
designing and tuning a complex CD&R algorithm. This
also ensured that each participant received the exact same
advisory at the exact same time. However, participants
were told that advisories were in fact generated by a
computer.
Finally, in Table 2 an overview is provided of the
number and type of advisories, organized by experiment
condition. In this table, it is also indicated how many
manual control actions were minimally required after
accepting and rejecting conflict resolution and exit clear-
ance advisories.
4.5 Control variables
The control variables in the experiment were as follows:
Degrees of freedom All aircraft were located at the same altitude (flight level 290) and could not change
their altitude. Thus the CD&R task took place in the
horizontal plane only, making it a 2D control task. Note
that this simplification ensured more comparable results
between participants as they could not change altitude
to resolve conflicts whenever they vetoed an advisory.
Aircraft type All aircraft were of the same type, having an equal speed envelope (180–240 kts indicated airspeed) and turning at a fixed bank angle of 30°.
Aircraft count On average, all scenarios featured 11
aircraft simultaneously inside the sector at all times.
Level of automation (LOA) The chosen LOA for this
experiment was fixed at ‘Management-by-Exception’
(Parasuraman et al. 2000), which meant that the
advisory would automatically be implemented unless
the participant vetoed. The main reason for supervisory
Fig. 9 Sectors used in the simulator trials to manipulate complexity. (a) Sector 1: low complexity, (b) sector 2: high complexity
Table 2 The number and type of advisories encoded in the experiment conditions (means-ends off/on)

                   Low complexity          High complexity
Advisory           No fault    Fault       No fault    Fault       Total
Conflict
  Correct          2 (2)       –           2 (2)       –           4 (4)
  Incorrect        –           3 (6)       –           3 (6)       6 (12)
Exit
  Correct          2 (0)       1 (0)       2 (0)       1 (0)       6 (0)
  Incorrect        –           –           –           –           –
Total              4 (2)       4 (6)       4 (2)       4 (6)       16 (16)

The numbers between brackets represent the minimum number of required manual control actions after the advisories
control through Management-by-Exception, instead of complete manual control, was not only to simulate a highly automated operational environment, but also to keep the evolution of the traffic scenarios as comparable between participants as possible.
Automation advisories All scripted advisories featured
a fixed expiration time of 30 s.
Interface The controllers always had access to the SSD. That is, whenever they selected an aircraft, the SSD for that aircraft opened and could be inspected, irrespective of whether it had received an advisory or not.
4.6 Dependent measures
The dependent measures in the experiment were as
follows:
Correct accept/reject scores measured if participants
accepted the advisory or wanted to implement their
own solution(s). These scores have also been used as a
proxy for the failure detection performance.
Advisory agreement rating measured the level of
agreement with the given advisory, which was mea-
sured by a slider bar with scale 0–100 before respond-
ing to the advisory.
Advisory response time measured the time between
initiation of and response (i.e., accept or reject) to an
advisory. An expired advisory would be measured as a
30-s response time.
Number of SSD inspections was recorded to measure
how often SSDs were opened, and furthermore how
many means-ends inspections were utilized and in what
way (top-down versus bottom-up).
Sensor failure diagnosis was measured using verbal comments (requiring participants to think aloud during the trials) and was noted when the correct nature of the sensor failure (i.e., an in-trail position error lagging behind the radar plot) was detected and the corresponding aircraft identified.
Workload ratings captured the overall perceived workload per trial and were measured using a slider bar with scale 0–100 at the end of each scenario.
Control strategy was measured by eliciting a partici-
pant’s main strategy from verbal comments and manual
control performance.
4.7 Procedure
The experiment started with a briefing in combination with
a fixed set of ten training runs, in which the basic working
principles of the interface, the automation and the details of
the task were discussed. The training scenarios gradually
built up in complexity to the level of the actual trial sce-
narios. In training, only two sensor failures occurred to
demonstrate the importance of carefully inspecting the
validity of the SSD (and the given advice) compared to the
radar positions.
After the briefing and training, participants engaged in seven scenarios of about 3 min each. Of these seven scenarios, four were actual measurement scenarios
according to the independent variables and they were
mixed with three dummy scenarios (without advisories).
The purpose of the dummy scenarios was twofold: (1)
prevent scenario recognition and (2) make advisories and
sensor failures appear as rare events. After each scenario,
participants indicated their perceived workload rating.
When the experiment was finished, a short debriefing
was administered, in which participants could provide
overall feedback on the simulation and their experience and
adopted control strategies. In total, the experiment took
about 2.5 hours per participant.
4.8 Hypotheses
It was hypothesized that the availability of explicit means-
ends relations, compared to implicit means-ends relations,
would result in: (1) an increased number of correctly
accepted and rejected advisories, (2) higher agreement
ratings for correct advisories and lower ratings for incorrect
advisories, (3) lower advisory response times, (4) improved
sensor failure diagnosis, (5) reduced number of SSD
inspections and (6) a lower number of manual control
actions. It was further expected that these results would be
more pronounced in the high complexity scenario.
The main rationale for these hypotheses was that the
explicit means-ends relations would enable participants to
gain a better insight into the traffic situation, leading to
more effective fault detection and manual control
performance.
5 Results
5.1 Advisory acceptance and rejection
The cumulative advisory acceptance and rejection counts,
categorized by experiment condition, are shown in Fig. 10.
Note that the maximum count for correctly accepted
advisories in scenarios without sensor failure was 32 (four
correct advisories times eight participants per group), and
for scenarios with a sensor failure, this number was eight
(one correct advisory times eight participants per group).
For the rejection counts, the maximum count was 24 (three
incorrect advisories times eight participants per group).
Cogn Tech Work (2017) 19:545–560 553
123
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
In Fig. 10a it can be seen that both means-ends groups accepted the majority of correct advice. As hypothesized, Fig. 10b reveals that the group with explicit means-ends links rejected more incorrect advice. Statistically, however, no significant main or interaction effects were found for either the acceptance or the rejection counts. From
Fig. 10b it is also clear that quite often incorrect advisories
have been accepted, given the low total number of rejec-
tions. This result can be partially explained by the observed
control strategies, in which accepting advisories was often
used as a gateway to gain manual control over aircraft (see
Sect. 5.8). As such, the acceptance and rejection counts
cannot be considered as good proxies for solely the failure
detection performance.
Due to the low number of correctly rejected advisories,
it is worthwhile to inspect the individual participant con-
tributions. It can be observed that in the high complexity
scenario, more participants of the means-ends group con-
tributed to correct rejections of faulty advice as compared
to the means-ends ‘Off’ group. More interestingly, only a
few participants (i.e., P2 in group means-ends ‘Off’ and P9
and P15 in group means-ends ‘On’) were successful in
rejecting (almost) all incorrect advice. To advance on the
way the means-ends linking was used, P9 and P15 were
among the participants who predominantly used top-down
linking by clicking on the triangles (see Fig. 13a). Appar-
ently, for these participants, this strategy led to more suc-
cessful rejections of incorrect advice. Also note that in the
means-ends group, P13 is not at all represented, meaning
that this participant always wrongfully accepted incorrect
advice. This was also true for P3 in the means-ends ‘Off’
group.
5.2 Advisory agreement
The normalized agreement ratings are shown in Fig. 11. A three-way mixed ANOVA only revealed a significant main effect of sensor failure (F(1,14) = 7.017, p = 0.019) and a significant complexity × means-ends interaction effect (F(1,14) = 5.643, p = 0.032), but no main effect of means-ends. Thus, a fault condition led to lower advisory agreements, and in the low complexity condition, the agreement ratings of the means-ends group were generally lower, irrespective of fault condition.
On a critical note, the reliability of the agreement ratings
can be questioned, given the relatively large spread in the
data. This is especially true for the means-ends ‘Off’ group.
Similar to the acceptance/rejection counts, the control
strategies could explain this spread. That is, not all par-
ticipants may have been equally thorough in evaluating the
advice, especially in the high complexity condition due to
experienced time pressure to inspect a more complex SSD
and act upon the advisory.
5.3 Advisory response time
The advisory response time was defined as the time
between the start of the advisory and it being either
accepted, rejected or expired. In the entire experiment,
however, not a single advisory expired.
The distributions of the average participants’ response
times, categorized by experiment condition, are provided in
Fig. 12. Due to violations of the ANOVA assumptions,
nonparametric Kruskal–Wallis and Friedman tests were
Fig. 10 Number of correctly accepted and rejected advisories, with the rejection counts broken down per participant. (a) Correctly accepted advisories, (b) correctly rejected advisories
conducted for between- and within-participant effects,
respectively.
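For readers unfamiliar with these tests, the sketch below shows how such an analysis could be set up with SciPy; the data are random placeholders, not the experiment's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder mean response times (s): 8 participants per between-participant
# group (means-ends 'Off' vs 'On'). These numbers are random stand-ins.
means_ends_off = rng.normal(14.0, 3.0, 8)
means_ends_on = rng.normal(16.0, 3.0, 8)
H, p_between = stats.kruskal(means_ends_off, means_ends_on)

# Four within-participant conditions (LC-NoFault, LC-Fault, HC-NoFault,
# HC-Fault) for the same 8 participants, again with placeholder values.
conditions = [rng.normal(mu, 2.0, 8) for mu in (11.0, 14.0, 12.0, 16.0)]
chi2, p_within = stats.friedmanchisquare(*conditions)

print(f"Kruskal-Wallis: H = {H:.2f}, p = {p_between:.3f}")
print(f"Friedman: chi2 = {chi2:.2f}, p = {p_within:.3f}")
```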
Results revealed only a significant effect of explicit means-ends relations (H(1) = 3.982, p = 0.046) in the 'Low Complexity-Fault' condition. Here, the means-ends group took longer to act upon an advisory, which runs counter to what was hypothesized. A Friedman test reported a significant effect of the within-participant manipulations (χ²(3) = 9.375, p = 0.025), where pairwise comparisons (adopting a Bonferroni correction) showed a significant difference between the 'Low Complexity-No Fault' and 'High Complexity-Fault' conditions.
The relatively high (variability in) response times were
mainly caused by attention switches. Whenever partici-
pants were busy manually controlling an aircraft that pre-
viously received an advisory, automation could prompt for
action on an advisory for another aircraft. In most cases,
participants first finished working with the aircraft under
manual control before acting upon the advisory. This
behavior was observed in both participant groups. In the
means-ends ‘On’ group, the unexpected increased activity
(discussed in Sect. 5.4) in dissecting the traffic situation
(discussed in Sect. 5.8) was responsible for increased
response times.
5.4 SSD inspections
The total number of SSD inspections was counted and is
displayed in Fig. 13a. Due to violations of the ANOVA
assumption, nonparametric tests were conducted. Kruskal–
Wallis revealed only a significant effect of means-ends in the 'Low Complexity-Fault' condition (H(1) = 4.431, p = 0.035), where the means-ends group inspected significantly more aircraft SSDs in contrast to what was hypothesized. A Friedman test on the within-participant manipulations was significant (χ²(3) = 8.353, p = 0.039). However, pairwise compar-
isons did not confirm significant differences between con-
ditions after adopting a Bonferroni correction. It can also
be observed that the spread for the means-ends group is
quite large, especially in the low complexity condition,
indicating that not all participants inspected the SSDs
equally frequently. A Levene’s test, however, did not mark
these spread patterns to be significantly different.
The increased activity of participants in the means-ends
‘On’ group was unexpected and counter to the hypothesis.
It was expected that explicit means-ends links would make
the search for neighboring aircraft that could be affected by
an advisory, more efficient and thus reduce the need to
open the SSDs of (many) other aircraft. Instead, it was
observed that the means-ends links seemed to have
encouraged participants to inspect the SSDs of more air-
craft more frequently. This behavior can be partially
explained by the observed control strategy as will be dis-
cussed in Sect. 5.8.
Another interesting result was found when investigating
how the explicit means-ends links were used. Comparing
the counts in Fig. 13b with the ones in Fig. 13c, it is clear
that hovering (i.e., bottom-up linking) was more frequently
used than clicking on the triangles (i.e., top-down linking).
This was rather unexpected, because bottom-up linking
would take more effort, especially in the complex scenario
Fig. 11 Advisory agreement
Fig. 12 Average participant advisory response time
where aircraft were more scattered around the airspace. The
fastest way to use the means-ends links would be to first
click on the triangles close to the advisory. This would
narrow the search down to one or more (in case of over-
lapping triangles) aircraft on the PVD. Then, the search
could be finalized by hovering over the remaining aircraft
to link each triangle to its aircraft. Note that in the briefing
(and training) prior to the experiment, participants were not
instructed on this strategy to avoid biasing the results. Only
three participants (P9, P12 and P15) discovered and
adopted the preferred strategy, and two of them (P9 and
P15) were also the ones who correctly rejected the majority
of faulty advice (see Fig. 10b).
5.5 Sensor failure diagnosis
Successful sensor failure diagnosis was established through
verbal comments during the experiment, facilitated by
participants (in both groups) thinking aloud. A detection
was judged to have occurred when the participants found
the one aircraft exhibiting a sensor failure and could
explain what the nature of the failure was (i.e., in-trail
position offset lagging behind the radar position).
The cumulative numbers of successful detections are
shown in Fig. 14, where a maximum cumulative result of
eight was possible (one ADS-B failure per scenario times
eight participants), indicated by the dotted line. Kruskal–
Wallis only revealed a significant main effect of means-ends in the high complexity condition (H(1) = 4.01, p = 0.046). Interestingly, despite the high
success rate for the means-ends group in this condition, this
was not reflected in rejection counts. This is yet another
indication that the interaction with the advisories itself is
not a good proxy for the fault detection performance.
Another interesting observation was that incorrect sen-
sor failure detection involved participants thinking that
there was an off-track position error, which would make
the triangle appear wider or sharper than necessary (de-
pending on the presumed proximity between the selected
and the observed aircraft). In their opinion, this made
resolution advisories not necessarily incorrect, but ineffi-
cient by taking either too much or too little buffer in
avoiding conflicts. This caused them to initially accept an
advisory, followed by a manual control action to improve
its ‘efficiency.’ This action, however, triggered more
manual control inputs moments later, as the real failure had
not been properly detected.
5.6 Workload
After each scenario, participants submitted a workload
score by means of a slider bar with values varying from 0
(low perceived workload) to 100 (extreme high perceived
Fig. 13 SSD inspections and means-ends usage. (a) SSD inspections, (b) top-down linking, (c) bottom-up linking
workload). The normalized workload ratings can be seen in
Fig. 15, which reveals that a fault condition resulted in
higher workload. A three-way mixed ANOVA indeed revealed a significant main effect of failure (F(1,14) = 19.327, p < 0.05), but reported neither main nor interaction effects of the means-ends and complexity manipulations. Despite these results, the perceived
workload was relatively low, given the nature of the
supervisory control task. The participants needed to mon-
itor the traffic scenarios and could only manipulate them when an advisory popped up for an aircraft that either needed
an exit clearance or a conflict resolution. Note that the
choice for this particular control task was made intention-
ally in order to investigate how low workload conditions
(and reduced vigilance) would impact failure diagnosis.
5.7 Manual control performance
Ideally, each participant required minimally 16 manual
control actions over the entire experiment (see Table 2),
totaling 128 commands per means-ends group. In Table 3,
it can be seen that many more commands were given in
both groups. The group with explicit means-ends gave
fewer commands in total, as hypothesized. This result,
however, was not statistically significant.
Interestingly, the means-ends manipulation seemed to
have reduced the number of heading-only (HDG) and
combined (COMB) commands and increased the number
of speed-only (SPD) commands. Note that a combined
command featured a single ATC instruction containing
both a speed and a heading clearance. The rationale for
including such a clearance was that it might say something
about the participants’ efficiency in ‘communicating’ with
the aircraft.
A graphical depiction of the number and type of com-
mands, distributed over the experiment conditions, is
shown in Fig. 16. Recall from Table 2 that in the 'No
Fault’ condition a minimum of 16 manual control actions
was required and for the ‘Fault’ condition 48 control
actions. This means that in the failure condition, approxi-
mately 20% more commands were given than necessary,
whereas this increase is about 55% in the nonfailure con-
dition. However, the differences in number of commands
between the complexity levels within one failure condition
were not significant.
When considering the type and number of commands
per participant (see Fig. 17), the preference for more SPD
clearances in the means-ends group becomes apparent.
However, it also becomes clear that not all participants
contributed equally to the overall increase in manual con-
trol actions. Some participants (i.e., P3, P11 and P12) gave
even fewer clearances than minimally required. They often
did not steer aircraft back to their exit point after solving a
conflict. Recall that after a conflict resolution advisory, the
Fig. 14 Correct failure diagnosis
Fig. 15 Normalized workload rating
Table 3 Number of commands, categorized by type and means-ends condition

Commands    Means-ends off    Means-ends on    Totals
SPD         24                39               63
HDG         90                73               163
COMB        68                55               123
Total       182               167              349
automation would not issue an exit clearance advice and
thus it was up to the discretion of the participants to steer
aircraft back on course. A few other participants (e.g., P1
and P8) almost doubled the number of clearances as they
were continuously trying to ‘optimize’ their conflict reso-
lutions and exit clearances. Note that such individual dif-
ferences can be expected when using ecological interfaces,
as these displays do not dictate one particular course of
action.
5.8 Control strategies
Besides the manual control performances, participant
strategies were elicited from observations and verbal
comments during and after the experiment. A qualitative
depiction of two main control strategies is provided in
Fig. 18 as flow maps. The nominal strategy illustrated in
Fig. 18a was the anticipated/designed strategy for failure
detection and diagnosis. However, only six participants
followed this strategy (i.e., P3, P5, P6, P7, P11 and P12).
This observation is also reflected in the advisory response
times (Fig. 12), the SSD inspections (Fig. 13a) and the
number of control actions (Fig. 17).
All other participants (i.e., the majority within the
means-ends ‘On’ group) followed a more complicated
strategy, which was unexpected given the instructions and
the limitations of the traffic simulator. In its most succinct
form, this strategy was geared toward gaining manual
control over the traffic scenario as much as possible. That
is, as soon as a simulator trial commenced, participants
immediately began to inspect all SSDs and used the means-
ends linking when this was available, to proactively scan
for potential problems and solutions. Then, the accept/re-
ject buttons were used as gateways to gain manual control
over aircraft and work around the automation’s advice.
Here, the more convenient positioning of the ‘accept’
button above the ‘reject’ button (see Fig. 8) may have been
responsible for causing a bias toward accepting advice.
The ‘quickly-gaining-manual-control’ strategy also led
to some frustration among participants in the means-ends
group. By design, the simulator only allowed interaction with aircraft that had received an advisory, whereas some participants ideally wanted to solve a conflict by interacting with another aircraft. Here, the explicit means-ends links could highlight other, and perhaps more convenient, aircraft to interact with. In that sense, the limitations of the simulator did not allow participants to fully execute their plans. Instead, they had to fall back on a
back-up strategy that involved temporarily vectoring the
aircraft under manual control into conflict zones of (far-
away) aircraft that could not be controlled. This back-up
Fig. 16 Total number of commands, categorized by type and experiment condition
Fig. 17 Number and types of commands per participant
Fig. 18 Qualitative illustrations of observed control strategies. a Expected (nominal) behavior, b unexpected (observed) behavior
strategy often required follow-up control actions and speed clearances to solve the new conflicts that participants had deliberately created, which explains why more manual control actions were given than minimally required.
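To summarize the two flows compactly, the sketch below lists them as ordered steps. This is our paraphrase of the strategies described above and depicted in Fig. 18; the step wording is ours, not taken from the experiment or its software.

```python
# Illustrative paraphrase of the two control-strategy flow maps (cf. Fig. 18);
# the step labels are ours and only mirror the description in the text.

NOMINAL_STRATEGY = [                     # expected behavior (Fig. 18a)
    "wait for an automation advisory",
    "inspect the SSD of the aircraft concerned",
    "compare the advisory against the SSD constraints",
    "accept the advisory if it lies in the conflict-free space",
    "otherwise reject it, solve the conflict manually and report the fault",
]

OBSERVED_STRATEGY = [                    # unexpected behavior (Fig. 18b)
    "at trial start, inspect all SSDs",
    "use means-ends links (when available) to scan for problems and solutions",
    "use the accept/reject buttons as gateways to gain manual control",
    "work around the automation's advice with own clearances",
    "if the preferred aircraft is not controllable, vector the controlled "
    "aircraft into its conflict zone and solve the new conflict afterwards",
]

if __name__ == "__main__":
    for name, steps in (("nominal", NOMINAL_STRATEGY), ("observed", OBSERVED_STRATEGY)):
        print(f"{name} strategy:")
        for i, step in enumerate(steps, 1):
            print(f"  {i}. {step}")
```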
6 Discussion
The goal of this research was to investigate the impact of
explicit means-ends relationships on fault diagnosis of
automated advice. Here, the focus was on diagnosing rare
failure events in a supervisory air traffic control setting.
Guided by previous studies, it was reasonable to assume
that explicit means-ends links would make the fault
detection and supervisory control tasks more efficient and
effective.
The results revealed that the fault detection perfor-
mance, as established through verbal comments, was
indeed significantly (and positively) affected by the explicit
means-ends links in the high complexity scenario. The
majority of other measurements, especially the interaction
with the advisory system, were either inconclusive (due to
a lack of statistical significance) or ran counter to the
hypotheses. These results were mainly caused by unex-
pected, but interesting interaction patterns of the partici-
pants with both the advisory system and ecological
interface. These patterns appeared to be unrelated to the
fault diagnosis task and geared toward taking control over
from automation. As a result, several participants in the
means-ends group were more active than anticipated and
exploited the means-ends links to work around the
automation’s limitations. In combination with the limited
sample size, not many significant results were found due to
the variability between participants, which caused spread in
the data. Note that variability between participants is
expected when using ecological interfaces, because such
interfaces do not dictate any specific course of action.
However, the level of variability that was observed was
unexpected, given the instructions and the carefully
designed limitations of the simulator. In hindsight, two
factors may have contributed to these findings.
First of all, a supervisory control context, in which
decision authority shifts toward a computer, is not always
well appreciated by human operators (Bekier et al. 2012).
This can lead to people rejecting any form of automated
decision support. Additionally, introducing a higher level
of automation into socio-technical work domains generally
involves making difficult and intertwined trade-offs and
decisions to find the right balance between human and
machine authority and autonomy (e.g., Dekker and Woods
1999). Despite the carefully designed experiment, it can thus be argued that too much control authority was still allocated to the participants, making it difficult to study the
phenomenon of interest. Note that the outcomes of ATC
experiments are generally very difficult to control, as any
decision made by a participant can affect the evolution of
traffic situations in unexpected ways.
Second, all advisories were scripted and thus the same
for all participants. Although this was deemed necessary
for the sake of experimental control, research has also
indicated that acceptance problems may arise when com-
puter advice does not match the operator’s way of working
(Westin et al. 2016). As evidenced by the manual control
performance, participants in both groups did seem to prefer
different types of clearances. To mitigate such fighting against the automation, it may be better to provide advisories in line with the operator’s preferences. This, however, would be difficult to tune, as this concept hinges on how consistently each person reacts to the same situation.
The observed strategy of ‘fighting against the automa-
tion’ cannot solely be attributed to the chosen level of
automation and the implementation of the scripted advi-
sories, however. This strategy has to be considered in light
of the decision aid that was used, i.e., the SSD. This
interface is intended to give participants more insight into
the traffic situation and enable them to see more solutions
than just the one that was offered by the automation. In
the case of the explicit means-ends relations, participants were
able to elicit more ways to solve problems, leading them to
implement (for example) more speed clearances than the
group without the explicit means-ends links.
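The kind of insight the SSD provides can be illustrated with a small, self-contained sketch. The Python snippet below is not the SSD implementation used in the experiment (which is built on velocity obstacles); under assumed units, a 5 NM separation minimum and a fixed look-ahead time, it merely shows how screening candidate headings with a closest-point-of-approach check yields a set of conflict-free maneuvers for a single intruder, i.e., a crude one-intruder ‘solution space’.

```python
import numpy as np

def in_conflict(own_pos, own_vel, int_pos, int_vel, sep_nm=5.0, lookahead_s=300.0):
    """Closest-point-of-approach check: does this own velocity lead to a loss
    of separation with the intruder within the look-ahead time?
    Positions in NM, velocities in NM/s, horizontal plane only (assumed units)."""
    p_rel = np.asarray(int_pos, float) - np.asarray(own_pos, float)
    v_rel = np.asarray(int_vel, float) - np.asarray(own_vel, float)
    v2 = float(v_rel @ v_rel)
    if v2 < 1e-12:                                   # no relative motion
        return float(np.linalg.norm(p_rel)) < sep_nm
    t_cpa = -float(p_rel @ v_rel) / v2               # time of closest approach
    t_cpa = min(max(t_cpa, 0.0), lookahead_s)        # clamp to [now, look-ahead]
    d_cpa = float(np.linalg.norm(p_rel + v_rel * t_cpa))
    return d_cpa < sep_nm

def conflict_free_headings(own_pos, own_spd, int_pos, int_vel, step_deg=5):
    """Headings (at constant speed) that remain conflict-free with one intruder;
    the remaining set is the kind of 'solution space' an SSD-like display conveys."""
    free = []
    for hdg in range(0, 360, step_deg):
        rad = np.deg2rad(hdg)
        cand_vel = own_spd * np.array([np.sin(rad), np.cos(rad)])  # x = East, y = North
        if not in_conflict(own_pos, cand_vel, int_pos, int_vel):
            free.append(hdg)
    return free

# Example: intruder 40 NM due north, flying south at 450 kts (= 0.125 NM/s).
print(conflict_free_headings(own_pos=(0, 0), own_spd=0.125,
                             int_pos=(0, 40), int_vel=(0, -0.125)))
```

In the head-on example at the bottom, headings near north drop out of the returned list, which corresponds to the forbidden band a controller would see on a solution-space style display.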
Our experiment has exposed a potential dilemma
regarding the use of ecological interfaces in highly auto-
mated control environments. On the one hand, constraint-
based interfaces could facilitate automation transparency
(Borst et al. 2015), allowing operators to better judge the
validity and quality of specific computer advice. On the
other hand, ecological interfaces reveal all feasible control
actions within the work domain constraints, thereby
increasing the chance that people will disagree with advice that pushes them in one specific direction.
Therefore, finding the right balance between offering more
insight (e.g., through ecological interfaces) and striving for
compliance with single (machine) advice is an avenue
worth exploring further.
7 Conclusion
This paper presented an empirical investigation into the effect of explicit means-ends relations, in an ecological interface, on fault diagnosis of automated advice in a supervisory air traffic
control task. Although a significant improvement in fault
detection and diagnosis was indeed observed in a high
complexity scenario, the experiment also exposed unex-
pected results regarding the participants’ interactions with
the advisory system and ecological interface. The explicit
means-ends links appeared to have mainly affected par-
ticipants’ control strategy, which was geared toward taking
over control from automation, regardless of the fault con-
dition. A plausible explanation is that the explicit relations
expanded the participants’ view on the traffic situations and
allowed them to see more solutions than just the one that
was offered by the advisory. This suggests that offering
more insight, versus striving for compliance with single
(machine) advice, is a delicate balance worth exploring
further.
Open Access This article is distributed under the terms of the
Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use,
distribution, and reproduction in any medium, provided you give
appropriate credit to the original author(s) and the source, provide a
link to the Creative Commons license, and indicate if changes were
made.
References
Ali B, Majumdar A, Ochieng WY, Schuster W (2013) ADS-B: the
case for London terminal manoeuvring area (LTMA). In: Tenth
USA/Europe air traffic management research and development
seminar, pp 1–10
Bekier M, Molesworth BR, Williamson A (2012) Tipping point: The
narrow path between automation acceptance and rejection in air
traffic management. Saf Sci 50(2):259–265. doi:10.1016/j.ssci.
2011.08.059
Bennett KB, Flach JM (2011) Display and interface design: subtle
science, exact art. CRC Press, Boca Raton
Borst C, Flach JM, Ellerbroek J (2015) Beyond ecological interface
design: lessons from concerns and misconceptions. IEEE Trans
Hum Mach Syst 45(2):164–175. doi:10.1109/THMS.2014.
2364984
Borst C, Mulder M, Van Paassen MM (2010) Design and simulator
evaluation of an ecological synthetic vision display. J Guid
Control Dyn 33(5):1577–1591. doi:10.2514/1.47832
Burns CM (2000) Putting it all together: improving display integra-
tion in ecological displays. Hum Factors 42(2):226–241
Burns CM, Hajdukiewicz J (2004) Ecological interface design. CRC
Press, Boca Raton
Carr N (2014) The glass cage: automation and us, 1st edn. W. W.
Norton & Company, New York
Christoffersen K, Hunter CN, Vicente KJ (1998) A longitudinal study
of the effects of ecological interface design on deep knowledge.
Int J Hum Comput Stud 48(6):729–762
SESAR Consortium (2012) European ATM master plan: the roadmap for sustainable air traffic management, pp 1–100
Dekker SWA, Woods DD (1999) To intervene or not to intervene: the
dilemma of management by exception. Cognit Technol Work
1:86–96
Ellerbroek J, Brantegem KCR, van Paassen MM, de Gelder N,
Mulder M (2013) Experimental evaluation of a coplanar airborne
separation display. IEEE Trans Hum Mach Syst 43(3):290–301.
doi:10.1109/TSMC.2013.2238925
Ham DH, Yoon WC (2001) Design of information content and layout
for process control based on goal-means domain analysis. Cognit
Technol Work 3:205–223
Jones SR (2003) ADS-B surveillance quality indicators: their
relationship to system operational capability and aircraft sepa-
ration standards. Air Traffic Control Q 11(3):225–250. doi:10.
2514/atcq.11.3.225
McIlroy RC, Stanton NA (2015) Ecological interface design two
decades on: whatever happened to the SRK taxonomy? IEEE
Trans Hum Mach Syst 45(2):145–163. doi:10.1109/THMS.2014.
2369372
Mercado Velasco GA, Borst C, Ellerbroek J, van Paassen MM,
Mulder M (2015) The use of intent information in conflict
detection and resolution models based on dynamic velocity
obstacles. IEEE Trans Intell Transp Syst 16(4):2297–2302.
doi:10.1109/TITS.2014.2376031
Parasuraman R, Sheridan TB, Wickens CD (2000) A model for types and levels of human interaction with automation. IEEE Trans Syst Man Cybern Part A Syst Hum 30(3):286–297
Reising DVC, Sanderson PM (2004) Minimal instrumentation may compromise failure diagnosis with an ecological interface. Hum Factors J Hum Factors Ergon Soc 46(2):316–333
Smith A, Cassell R (2006) Methods to provide system-wide ADS-B
back-up, validation and security. In: 25th Digital avionics
conference, pp 1–7
St-Cyr O, Jamieson GA, Vicente KJ (2013) Ecological interface
design and sensor noise. Int J Hum Comput Stud
71(11):1056–1068. doi:10.1016/j.ijhcs.2013.08.005
Van Dam SBJ, Mulder M, Van Paassen MM (2008) Ecological
interface design of a tactical airborne separation assistance tool.
IEEE Trans Syst Man Cybern 38(6):1221–1233
Vicente KJ (1999) Cognitive work analysis; toward safe, productive,
and healthy computer-based work. Lawrence Erlbaum Associ-
ates, Mahwah
Vicente KJ (2002) Ecological interface design: progress and
challenges. Hum Factors J Hum Factors Ergon Soc 44(1):62–78
Vicente KJ, Moray N, Lee JD, Rasmussen J, Jones BG, Brock R,
Djemil T (1996) Evaluation of a Rankine cycle display for
nuclear power plant monitoring and diagnosis. Hum Factors J
Hum Factors Ergon Soc 38(3):506–521. doi:10.1518/
001872096778702033
Vicente KJ, Rasmussen J (1992) Ecological interface design: theoretical foundations. IEEE Trans Syst Man Cybern 22(4):589–606
Westin CAL, Borst C, Hilburn BH (2016) Strategic conformance:
overcoming acceptance issues of decision aiding automation?
IEEE Trans Hum Mach Syst 46(1):41–52. doi:10.1109/THMS.
2015.2482480
Zhang J, Liu W, Zhu Y (2011) Study of ADS-B data evaluation. Chin
J Aeronaut 24(4):461–466