Task Shedding and Control Performance as a Function of
Perceived Automation Reliability and Time Pressure
James P. Bliss
John W. Harden
Old Dominion University
Norfolk, Virginia
H. Charles Dischinger, Jr.
NASA-Marshall Space Flight Center
Huntsville, Alabama
Research has demonstrated that workload and past machine performance influence operator allocation of
task responsibilities to machines. We extended past investigations by offering task operators the
opportunity to relinquish task control to a robotic entity. Forty-three participants navigated a remotely
controlled vehicle around a prescribed course under conditions of low or high time pressure. While
navigating, they could allocate camera monitoring to a low- or high-reliability automated agent. Results
showed that most participants retained control of the camera; those who relinquished control did so immediately. Time
pressure and reliability did not interact to influence task performance. Course navigation time was faster
under high time pressure but errors were unaffected. Bivariate correlations revealed a positive relation
between self-ratings of robotic expertise and pressure to perform, and between pressure to perform and
errors committed during navigation. These results demonstrate low levels of trust in the robotic camera and
comparative sensitivity of navigation time to time pressure.
For several decades, technology growth has
influenced the way humans perform complex
tasks. The change is particularly evident in the
air and ground transportation, industrial,
medical, and military task domains. Though
technology increases have stimulated greater
productivity, better quality control, and
increased production, they have at times reduced
operator situation awareness, exacerbated
cognitive workload, and produced variability in
operator attitudes toward the automation itself.
Parasuraman, Sheridan, and Wickens (2000)
presented a model of automation functionality
that drew connections to four stages of an
operator’s information processing: information
acquisition, information analysis, decision and
action selection, and action implementation.
They proposed that any particular automated
system could act within any number of those
stages, and that automating within these stages
fundamentally changes the actions required by
the operator to successfully achieve target goals.
Endsley and Kaber (1999) proposed a model
of levels of automation comprised of 10 levels at
which automation could function within a
system. In this research, the authors argued that
there were several problems with implementing
automation at levels which resulted in removing
the operator from “the loop.” In evaluating these
various levels of automation, the authors found
that lower levels of automation tended to result
in superior operator performance and that, in the
event of a failure, operator intervention was
quicker (Endsley & Kaber, 1999).
One particularly interesting finding is that
human users vary with regard to their reliance
and compliance rates and behaviors, even when
the reliability of the automation is advertised or
well known. This may be due to trust
development, complacency, workload, or a host
of other mediating or moderating factors
(Parasuraman, Molloy, & Singh, 1993; Dixon &
Wickens, 2006; Merritt & Ilgen, 2008).
Researchers have recorded reactions to
automated aids themselves as an index of trust
with some success (Ross, Szalma, Hancock,
Barnett, & Taylor, 2008). However, equally
revealing is the propensity for operators to shed
tasks to automated agents. Task shedding, or
what Parasuraman and Hancock (2001) refer to
as adaptive task allocation to the machine (ATA-
M), has been investigated by many researchers
interested in automation and its impact on
workload (cf. Byrne & Parasuraman, 1996;
Scerbo, 1996). Recent interest in human-robot
interaction has increased the relevance of such
research. ATA-M has been shown to increase during
conditions of high primary task workload and
low certainty (Parasuraman & Hancock, 2001).
The purpose of the current experiment is to
replicate those findings in a paradigm that pairs
human operators with automated (robotic)
agents. The unique contribution is the presumed
interaction between automation reliability (and
by association, operator trust) and workload
(influenced by time pressure).
Given prior research (Bliss, Dunn, & Fuller,
1995), we hypothesized that participants would
be more liable to relinquish control of a camera
to a robot if the robot were advertised as
reliable. We also hypothesized that time
pressure would result in quicker and more
frequent task allocation to the robot (Kirlik, 1993).
METHOD
For the current experiment we manipulated
variables according to a split-plot, 2 X 2
experimental design. The between-groups
independent variable was advertised automated
camera controller reliability rate. Participants were
told that automating the camera control function had
been successful at making the overall task completion
time quicker 75% (low-reliability group) or 95%
(high-reliability group) of the time.
The within-groups variable was time pressure,
manipulated by the amount of time participants
had to complete the maneuvering task. During
the low-pressure condition, participants were
told that they were to complete the task in 10
minutes. For the high-pressure condition,
participants were told that they were to complete
the task in 5 minutes. In both cases, participants
were encouraged to complete the task as quickly
and accurately as possible.
Performance dependent measures included
the overall time taken to maneuver a remote-
control truck around a predefined course (in
secs), the number of errors made (driving
outside demarcated lines) while doing so,
whether or not participants chose to automate
the camera control task, and the time (in secs)
taken to do so.
We also collected questionnaire data,
including demographic information, trust
assessments, and information about the strategy
participants used during the task.
The 43 participants tested (20 male, 23
female) included 23 undergraduate students
enrolled in a general psychology course at The
University of Alabama in Huntsville and 20
employees at NASA’s Marshall Space Flight
Center in Huntsville, Alabama. The average age
of the participants was 23.05 years (SD=7.72).
Participants indicated that they had corrected-to-
normal visual acuity and hearing. Students at
UAH earned credit toward their psychology
class, whereas employees of NASA earned a
$10 Starbucks gift card for their participation.
Participants from each location were equally
distributed in the two reliability groups.
Initial questionnaires included informed
consent forms for UAH and NASA and a
background questionnaire that included
demographic items (age, sex, robot familiarity,
robotic control skill level, and general computer
use frequency). Following the experiment,
participants completed Jian, Bisantz, and Drury's
(1999) trust scale twice: once to indicate trust of
the remotely controlled truck and once to
indicate their trust of the remotely controlled
camera. All participants also completed an
opinion questionnaire that allowed them to
discuss their perceived motivation for the
experiment, the strategy(ies) they used to
complete the task, and the level of effort they
expended during the experiment.
Remotely Controlled Truck – Participants
used a control device to maneuver a toy
remotely controlled truck around a demarcated
course. The course was 12 inches wide, U-
shaped, and bounded by walls on two sides (see
Figure 1). The starting position for the truck
was intentionally slanted 45 degrees to the track,
forcing participants to attain proper vehicle
alignment before proceeding. They then
maneuvered the truck through the course to the
end, whereupon the experimenter would reverse
the truck’s direction so participants could drive
the truck back to the starting point. Thus, the
number of left and right turns was equal.
Participants were not allowed to view the
truck and course directly. Instead, they were
required to complete the task by referring to a
laptop computer screen that showed a view of
the course recorded by a remotely mounted
camera. The perspective of the camera was
adjustable by the participant, so that he or she
could focus on particular parts of the course
while maneuvering the truck. The remote
controller for the truck had two small joysticks.
The right joystick controlled forward and
backward movement; the left joystick controlled
orientation of the front wheels to allow steering.
Remotely Controlled Camera – The video
camera was mounted on a military robot
(MarcBot unmanned ground vehicle) that was
positioned at a height of approximately 31
inches (see Figure 1). Participants used an
Xbox controller (left direction pad) to yaw the
camera. In this way, they could keep the track
and truck in view at all times.
Figure 1. Experimental Setup
Following their arrival at the experimental
laboratory, participants completed the informed
consent forms and the background information
form. Participants were then trained to
concurrently manipulate the remote-controlled
truck and the remote camera. The experimenter
demonstrated proper use of an Xbox controller
for the camera and the remote control unit for
the truck, then let the participant become
comfortable with each. Participants completed
one practice trial, during which they drove the
truck around the course while viewing it
directly. When participants indicated that they
understood the controls and the task, the first
experimental session began.
Participants were seated with their backs to
the course so that they were required to rely on
the video feed from the remote camera to
maneuver the truck. Participants were told that
they had a limited time to complete the course (5
minutes or 10 minutes, depending on
counterbalanced condition). They were also told
that they could elect to automate the camera
control at any time during the session. At this
time, the experimenter emphasized that other
participants who had chosen to automate had
achieved performance improvement 75% or
95% of the time (depending on reliability group
assignment). Once participants indicated that
they understood the instructions, they began to
maneuver the truck through the course.
After completing the first experimental
session, participants were given a five-minute
break, and then began the second session. The
second session was identical to the first, except
that the time limit was counterbalanced to
ensure completion of low and high time pressure
sessions. After finishing the course the second
time, participants completed the trust
questionnaire for both the truck and the camera.
They then completed the opinion questionnaire,
and were debriefed and dismissed. In all,
participation took approximately 45 minutes.
RESULTS
We began our analyses by ensuring that the
data were coded correctly. We then calculated
descriptive statistics to ensure that the data were
normally distributed with no outliers. We noted
missing data for one participant’s maneuvering
error score. We also coded Time to Automate as
missing if participants chose not to automate the
camera task. For the following analyses, we
adopted an alpha level of .10 to account for the
exploratory nature of our work and the relatively
benign implications of committing a Type I error.
For Decision to Automate, a Chi-Square test
revealed that participants were more likely to
want to maintain control of the camera task than
to delegate the task, χ2(1) = 8.395, p < .01. Of
the 86 control decisions made, participants
wanted to delegate control of the camera 24
times. Thirteen of the 43 participants in the high
pressure condition decided to automate the
camera task; 11 of the 43 participants did so in
the low pressure condition. In each condition, eight
of those who decided to automate did so
immediately (at the start of the task).
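The decision test above can be sketched as a one-sample chi-square goodness-of-fit against an even 50/50 split of the 86 control decisions (24 delegate vs. 62 retain). This is a minimal reconstruction of the test family reported, not the authors' exact analysis; the statistic computed this way need not match the published value, which may reflect a different aggregation of the decisions.

```python
def chi_square_gof(observed):
    """Chi-square goodness-of-fit statistic against a uniform split."""
    expected = sum(observed) / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)

# Delegate vs. retain counts from the 86 control decisions.
stat = chi_square_gof([24, 62])

# With df = len(observed) - 1 = 1, any statistic above the critical
# value of 6.635 is significant at p < .01.
significant = stat > 6.635
```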
To predict the binary outcome of whether
participants decided to automate camera control
or not, we computed a binary logistic regression.
We first computed the regression analyses using
a prediction model that included reliability
group (low or high), sex (male or female), age,
robotic experience, skill level, perceived
pressure to perform, perceived problems with
completing the maneuvering task, and perceived
effort expended.
Results from the standard logistic regression
indicated that the combination of the predictors
significantly predicted the outcome, χ2(8) =
14.888, p = .061, Nagelkerke R2 = .478.
However, results from each individual Wald
statistic indicated that only reliability group and
perceived problems with completing the
maneuvering task were significant predictors of
the automation decision. Therefore, we
conducted a follow-up standard logistic
regression including just these two predictors.
Results from this analysis indicated that the
combination of the two predictors significantly
predicted the outcome, χ2(2) = 5.648, p = .059,
Nagelkerke R2 = .205.
A total of 75.0% of all participants’
decisions were correctly predicted with this
model. Type I error was 6.0%, indicating that
94% of participants’ decisions to want to
automate the task were correctly classified. Type
II error was 19.5%, indicating that 80.5% of
participants' decisions to want to retain camera
control were correctly classified. Participants in
the low-reliability group were .267 times as
likely to want to automate the camera task.
Participants were 1.471 times more likely to
want to automate the camera task if they
perceived a problem completing the
maneuvering task (see Table 1).
Table 1
Standard Logistic Regression
Variable B SE Wald Odds Ratio
Rel. Group -1.321 .815 2.625 .267
Problem .386 .901 2.160 1.471
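The odds ratios in Table 1 are simply the exponentiated logistic-regression coefficients, OR = exp(B), as this short check illustrates:

```python
import math

# Coefficients (B) from Table 1; odds ratio = exp(B).
coefficients = {"Rel. Group": -1.321, "Problem": 0.386}

odds_ratios = {name: round(math.exp(b), 3) for name, b in coefficients.items()}
# Yields 0.267 for Rel. Group and 1.471 for Problem, matching Table 1.
```

An odds ratio below 1 (Rel. Group) indicates reduced odds of choosing to automate; above 1 (Problem), increased odds.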
To determine whether truck maneuvering
speed or errors varied as a function of advertised
automation reliability or time pressure, we
computed 2 X 2 ANOVAs for those variables.
For time taken to traverse the course, there was
no significant interaction and no main effect of
reliability. However, there was a main effect of
time pressure, F(1, 47) = 3.45, p = .07, partial η²
= .078, showing that participants traversed the
course more quickly in the high pressure
condition (233.98 secs) than in the low pressure
condition (271.86 secs). For errors made, there
was no significant interaction, no effect of
reliability, and no main effect of time pressure
(p > .10).
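Partial eta-squared can be recovered from a reported F ratio and its degrees of freedom; a minimal sketch follows (values recovered this way can differ slightly from the published estimate, here .078, because of rounding of F and deletion of missing cases):

```python
def partial_eta_squared(f, df_effect, df_error):
    """Approximate partial eta-squared from a reported F ratio and its dfs."""
    return (f * df_effect) / (f * df_effect + df_error)

# Time-pressure main effect, F(1, 47) = 3.45.
est = partial_eta_squared(3.45, 1, 47)  # ≈ .068
```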
Our final step was to calculate correlations
among demographic, experience, strategy, and
performance variables. Because of the numerous
correlations computed, we adopted an alpha level
of .01. Participant self-ratings of robotic
expertise were positively related to the level of
perceived pressure to perform, r = .458, p = .005.
In turn, greater pressure to perform was
associated with more errors committed during
the low-pressure task session, r = .496, p = .002.
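The significance of a Pearson correlation of this kind is conventionally tested by converting r to a t statistic with df = n - 2. A sketch follows; note that n = 36 is an assumed sample size for illustration only, as the paper does not report the n underlying each correlation.

```python
import math

def t_from_r(r, n):
    """t statistic for testing a Pearson correlation against zero (df = n - 2)."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# Hypothetical example with an assumed n of 36:
t = t_from_r(0.458, 36)
# t ≈ 3.0, beyond the two-tailed .01 critical value (≈ 2.73 at df = 34).
```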
DISCUSSION
Kirlik (1993) demonstrated that in some
cases operators may weigh the comparative
benefits of automating by considering workload
and manual control skills. Ultimately, they may
decide to not automate a task because it is
inconvenient or perceived as costly toward
overall performance or workload. In the current
experiment automation delegation was the
exception rather than the norm. Most
participants elected to retain control of the
camera, even when the advertised reliability and
time pressure were relatively high. This may
suggest that operators are cognizant of the task
changes that may follow automation
(Parasuraman et al., 2000).
One especially intriguing finding here was
that the participants who did choose to automate
the camera task did so immediately (in some
cases before the task even began), rather than in
the middle of a session. This suggests that
participants may make task strategy decisions
prior to engaging in the task to optimize
workload or situation awareness, thereby
minimizing the problems that Endsley and Kaber
(1999) proposed accompany automation use.
Varying the amount of time given to
participants to complete the course seemed to
influence operator performance speed,
suggesting that the manipulation we chose
effectively changed perceived workload. Yet,
contrary to Byrne and Parasuraman’s (1996)
findings, the number of participants electing to
automate the camera was comparable across
time pressure conditions. The fact that most
people electing to automate the camera did so
immediately may have masked any differences
attributable to perceived workload.
Questions exist concerning the malleability
of automation decisions made prior to task
operation, as well as identification of factors
influencing the timing of such decisions. Future
research concerning these aspects could help
predict the potential for reliance on robotic
agents.
ACKNOWLEDGMENTS
The experimenters acknowledge Mr. Nick
Harris and Mr. Marshall Bliss, who assisted with
data collection.
REFERENCES
Bliss, J.P., Dunn, M., & Fuller, B.S. (1995).
Reversal of the cry-wolf effect: An investigation of
two methods to increase alarm response rates.
Perceptual and Motor Skills, 80, 1231-1242.
Byrne, E.A., & Parasuraman, R. (1996).
Psychophysiology and adaptive automation.
Biological Psychology, 42, 249-268.
Dixon, S., & Wickens, C.D. (2006). Automation
reliability in unmanned aerial vehicle control: A
reliance-compliance model of automation
dependence in high workload. Human Factors,
48(3), 474-486.
Endsley, M. R., & Kaber, D. B. (1999). Level of
automation effects on performance, situation
awareness and workload in a dynamic control task.
Ergonomics, 42(3), 462-492.
Kirlik, A. (1993). Modeling strategic behavior in
human-automation interaction: Why an “aid” can
(and should) go unused. Human Factors, 35(2), 221-242.
Merritt, S. M., & Ilgen, D. R. (2008). Not all
trust is created equal: Dispositional and history-based
trust in human-automation interactions. Human
Factors, 50(2), 194–210.
Parasuraman, R., & Hancock, P. (2001).
Adaptive control of mental workload. In P. Hancock
and P. Desmond (eds.), Stress, Workload, and
Fatigue. New York: CRC Press.
Parasuraman, R., Molloy, R., & Singh, I.L. (1993).
Performance consequences of automation-induced
“complacency.” The International Journal of
Aviation Psychology, 3(1), 1-23.
Parasuraman, R., Sheridan, T. B., & Wickens, C.
D. (2000). A model for types and levels of human
interaction with automation. IEEE Transactions on
Systems, Man, and Cybernetics, Part A: Systems and
Humans, 30(3), 286-297.
Ross, J.M., Szalma, J.L., Hancock, P.A., Barnett,
J.S., & Taylor, G. (2008). The effect of automation
reliability on user automation trust and reliance in a
search-and-rescue scenario. Proceedings of the
Human Factors and Ergonomics Society 52nd
Annual Meeting. Santa Monica, CA: Human Factors
and Ergonomics Society.
Scerbo, M.W. (1996). Theoretical perspectives
on adaptive automation. In R. Parasuraman and M.
Mouloua (Eds.), Automation and Human
Performance: Theory and Applications. Mahwah,
NJ: Lawrence Erlbaum Associates.
... Nonetheless, studying criticality is crucial to understanding battlefield behavior and other jobs that require operators to perform consequential tasks. In 195 similar experiments (Bliss, Harden, & Dischinger, 2013;Hanson, Bliss, Harden, & Papelis, 2014), task criticality was manipulated to determine the effects on operator control strategies. Bliss et al. (2013) manipulated task criticality in the form of time pressure; parti-200 cipants were informed that negative performance would have detrimental consequences. ...
... In 195 similar experiments (Bliss, Harden, & Dischinger, 2013;Hanson, Bliss, Harden, & Papelis, 2014), task criticality was manipulated to determine the effects on operator control strategies. Bliss et al. (2013) manipulated task criticality in the form of time pressure; parti-200 cipants were informed that negative performance would have detrimental consequences. Bliss et al. (2013) found that participants performed better under strict time pressure, or high criticality, compared with no pressure. ...
... Bliss et al. (2013) manipulated task criticality in the form of time pressure; parti-200 cipants were informed that negative performance would have detrimental consequences. Bliss et al. (2013) found that participants performed better under strict time pressure, or high criticality, compared with no pressure. ...
Full-text available
Warfighters often rely on lengthy, lexical object descriptions when performing search tasks in critical environments. Several theoretical frameworks, including the Pictorial Superiority Effect, posit images to be more effective forms of instruction for short-term memory recall tasks. However, it is unclear whether pictures are superior forms of object description when the search task has serious consequences. The purpose of the current work was to determine whether pictorial or lexical descriptions are more effective forms of instruction for military search tasks of varying criticalities. Twenty participants with military deployment experience and 20 students with no deployment experience navigated a virtual marketplace environment to search for pictorially and lexically described targets. Participants searched for targets under conditions of both low and high task criticality. Mixed analyses of variance showed that both samples collected more pictorially described targets in the high criticality condition than in the low criticality condition. Participants collected pictorially described targets faster than lexical targets, and military participants took longer to locate lexically described targets in the high criticality condition. These results lend credence to the pictorial superiority effect and may be used to inform design of instructional tools. © 2018, © 2018 Society for Military Psychology, Division 19 of the American Psychological Association.
... For this type of cooperation to work, the machine needs to be able to sense and intervene when the human's performance starts dropping due to increases in task load. [5,37]. Parasuraman and Hancock have shown that task shedding can be triggered by high workload and low certainty [29]. ...
... 2.1, when users are put in a highworkload scenario, and the workload of the task is over a certain threshold, they tend to shed that task or switch to another task on which they may be able to perform better [46]. At this point, the performance on the prior task would degrade, and therefore, be an apt point of intervention on the part of the machine agent in the HMT [5]. To account for this, a hybrid workload and performance model is needed to predict task shedding tendencies. ...
Full-text available
In Human–Machine Teaming environments, it is important to identify potential performance drops due to cognitive overload. If identified correctly, they can help improve the performance of the human–machine system by offloading some tasks to less cognitively overloaded users. This can help prevent user error that can result in critical failures. Also, it can improve productivity by keeping the human operators at an optimal performance state. This paper explores a new method for identifying user cognitive load by a three-class classification using brain activity data and by applying a convolutional neural network and long short-term memory model. The data collected from a set of cognitive benchmark experiments were used to train the model, which was then tested on two separate datasets consisting of more ecologically valid task environments. We experimented with various models built with different benchmark tasks to explore which benchmark tasks were better suited for the prediction of task shedding events in these compound tasks that are more representative of real-world scenarios. We also show that this method can be extended across-tasks and across-subject pools.
... Ideal task performance depends on optimizing mental workload, which refers to the limited information processing capabilities of the human brain, as demanded by a task [33,54]. When task demands are too high for the brain's maximum processing capacity, performance decrements and task shedding often occur [5,33]. 'Workload' is an umbrella term. ...
Full-text available
The use of robotic arms across domains is increasing, but the relationship between control features and performance is not fully understood. The goal of this research was to investigate the difference in task performance when using two different control devices at high and low task complexities when participants can shed tasks to automation. In this experiment, 40 undergraduates (24 females) used two control devices, a Leap Motion controller and an Xbox controller, to teleoperate a robotic arm in a high or low complexity peg placement task. Simultaneously, participants were tasked with scanning images for tanks. During the experiment, participants had the option to task shed the peg task to imperfect automation. Analyses indicated a significant main effect of control device on task completion rate and time to first grasp the peg, with completion rate higher and time lower when using the Leap. However, participants made significantly more errors with the Leap Motion controller than with the Xbox controller. Participants in both conditions task shed similarly with both control devices and task shed at similar times. The 2 x 2 mixed ANOVAs somewhat supported the proposed hypotheses. The results of this study indicate that control device impacts performance on a robotic arm task. The Leap Motion controller supports increased task completion rate and quicker peg grasps in high and low task complexity when compared with the Xbox controller. This supports the extension of Control Order Theory into three-dimensional space and suggests that the Leap Motion controller can be implemented in some domains. However, the criticality and frequency of errors should be carefully considered.
Full-text available
Controlling and monitoring unmanned vehicles is a cognitively demanding task, particularly when searching environments for potential improvised explosive devices (IEDs). Due to the diversity of methods used to construct IEDs, unreliable information about the potential for harm may be provided to operators. Also, warfighters may search environments that are sparsely or heavily populated. Few researchers have manipulated information reliability and task criticality jointly, though these constructs often co-vary in real task situations such as IED search. Sixteen undergraduate students navigated an unmanned ground vehicle around a demarcated course and made object investigation decisions. Participants searched the environment under conditions of low and high criticality and encountered objects accompanied by low or high reliability warnings. Results showed that criticality and reliability individually and jointly impacted reaction time and navigation errors. The reported findings generally support our hypotheses and suggest additional work is necessary to replicate such effects with active duty personnel.
Full-text available
Advances in modern day technology are rapidly increasing the ability of engineers to automate ever more complicated tasks. Often these automated aids are paired with human operators who can supervise their work to ensure that it is free of errors and to even take control of the system if it malfunctions (e.g., pilots supervising an autopilot feature). The goal of this collaboration, between humans and machines, is that it can enhance performance beyond what would be possible by either alone. Arguably the success of this partnership depends in part upon attributions an operator develops that help guide their interaction with the automation. One particular factor that has been shown to guide operator reliance on an automated ‘teammate’ is trust. The following study examined 140 participants performing a simulated search-and-rescue task. The goal of this experiment was to examine the relationship between automated agent's reliability, operator trust, operator reliance, and performance scores. Results indicated that greater automation reliability is positively correlated with greater user reliance (r = .66), perceived trust (r = .21), and performance scores (r = .34). These results indicate that more reliable aids are rated as significantly higher in terms of perceived trust and relied upon more than less reliable aids. Additionally, the size of the effect is much larger for operator behaviors (i.e., reliance) compared to more subjective measures (i.e., self-reported trust).
Full-text available
Attempted to improve alarm responses using a hearsay method, in which Ss were told that false alarms would be less frequent than they actually were, and an urgency method, in which the urgency of alarms was increased. Response frequency, speed, and accuracy of 3 groups of 20 college students (Urgency, Hearsay, and Control) were compared across groups and sessions using analyses of variance and t tests. Ss performed a complex psychomotor activity when responding to alarms. Both methods for improving alarm response were successful; hearsay participants increased their response rates across sessions, and urgency participants decreased their response times. Results are discussed with regard to design of alarm systems and theory of human performance. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Full-text available
The effect of variations in the reliability of an automated monitoring system on human operator detection of automation failures was examined in two experiments. For four 30-min sessions, 40 subjects performed an IBM PC-based flight simulation that included manual tracking and fuel-management tasks, as well as a system-monitoring task that was under automation control. Automation reliability - the percentage of system malfunctions detected by the automation routine - either remained constant at a low or high level over time or alternated every 10 min from low to high. Operator detection of automation failures was substantially worse for constant-reliability than for variable-reliability automation after about 20 min under automation control, indicating that the former condition induced 'complacency'. When system monitoring was the only task, detection was very efficient and was unaffected by variations in automation reliability. The results provide the first empirical evidence of the performance consequences of automation-induced 'complacency'. We relate findings to operator attitudes toward automation and discuss implications for cockpit automation design.
Full-text available
Various levels of automation (LOA) designating the degree of human operator and computer control were explored within the context of a dynamic control task as a means of improving overall human/machine performance. Automated systems have traditionally been explored as binary function allocations; either the human or the machine is assigned to a given task. More recently, intermediary levels of automation have been discussed as a means of maintaining operator involvement in system performance, leading to improvements in situation awareness and reductions in out-of-the-loop performance problems. A LOA taxonomy applicable to a wide range of psychomotor and cognitive tasks is presented here. The taxonomy comprises various schemes of generic control system function allocations. The functions allocated to a human operator and/or computer included monitoring displays, generating processing options, selecting an 'optimal' option and implementing that option. The impact of the LOA taxonomy was assessed within a dynamic and complex cognitive control task by measuring its effect on human/system performance, situation awareness and workload. Thirty subjects performed simulation trials involving various levels of automation. Several automation failures occurred and out-of-the-loop performance decrements were assessed. Results suggest that, in terms of performance, human operators benefit most from automation of the implementation portion of the task, but only under normal operating conditions; in contrast, removal of the operator from task implementation is detrimental to performance recovery if the automated system fails. Joint human/system option generation significantly degraded performance in comparison to human or automated option generation alone. Lower operator workload and higher situation awareness were observed under automation of the decision-making portion of the task (i.e. selection of options), although human/system performance was only slightly improved. The implications of these findings for the design of automated systems are discussed.
Technical developments in computer hardware and software now make it possible to introduce automation into virtually all aspects of human-machine systems. Given these technical capabilities, which system functions should be automated and to what extent? We outline a model for types and levels of automation that provides a framework and an objective basis for making such choices. Appropriate selection is important because automation does not merely supplant but changes human activity and can impose new coordination demands on the human operator. We propose that automation can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic. A particular system can involve automation of all four types at different levels. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design using our model. Secondary evaluative criteria include automation reliability and the costs of decision/action consequences, among others. Examples of recommended types and levels of automation are provided to illustrate the application of the model to automation design.
Two experiments were conducted in which participants navigated a simulated unmanned aerial vehicle (UAV) through a series of mission legs while searching for targets and monitoring system parameters. The goal of the study was to highlight the qualitatively different effects of automation false alarms and misses as they relate to operator compliance and reliance, respectively. Background data suggest that automation false alarms cause reduced compliance, whereas misses cause reduced reliance. In two studies, 32 and 24 participants, including some licensed pilots, performed in-lab UAV simulations that presented the visual world and collected dependent measures. Results indicated that with the low-reliability aids, false alarms correlated with poorer performance in the system failure task, whereas misses correlated with poorer performance in the concurrent tasks. Compliance and reliance do appear to be affected by false alarms and misses, respectively, and are relatively independent of each other. Practical implications are that automated aids must be fairly reliable to provide global benefits and that false alarms and misses have qualitatively different effects on performance.
We provide an empirical demonstration of the importance of attending to human user individual differences in examinations of trust and automation use. Past research has generally supported the notions that machine reliability predicts trust in automation, and trust in turn predicts automation use. However, links between user personality and perceptions of the machine with trust in automation have not been empirically established. On our X-ray screening task, 255 students rated trust and made automation use decisions while visually searching for weapons in X-ray images of luggage. We demonstrate that individual differences affect perceptions of machine characteristics when actual machine characteristics are constant, that perceptions account for 52% of trust variance above the effects of actual characteristics, and that perceptions mediate the effects of actual characteristics on trust. Importantly, we also demonstrate that when administered at different times, the same six trust items reflect two types of trust (dispositional trust and history-based trust) and that these two trust constructs are differentially related to other variables. Interactions were found among user characteristics, machine characteristics, and automation use. Our results suggest that increased specificity in the conceptualization and measurement of trust is required, future researchers should assess user perceptions of machine characteristics in addition to actual machine characteristics, and incorporation of user extraversion and propensity to trust machines can increase prediction of automation use decisions. Potential applications include the design of flexible automation training programs tailored to individuals who differ in systematic ways.
Task-offload aids (e.g., an autopilot, an "intelligent" assistant) can be selectively engaged by the human operator to dynamically delegate tasks to automation. Introducing such aids eliminates some task demands but creates new ones associated with programming, engaging, and disengaging the aiding device via an interface. The burdens associated with managing automation can sometimes outweigh the potential benefits of automation for improving system performance. Aid design parameters and features of the overall multitask context combine to determine whether or not a task-offload aid will effectively support the operator. A modeling and sensitivity analysis approach is presented that identifies effective strategies for human-automation interaction as a function of three task-context parameters and three aid design parameters. The analysis and modeling approaches provide resources for predicting how a well-adapted operator will use a given task-offload aid, and for specifying aid design features that ensure that automation will provide effective operator support in a multitask environment.
Adaptive automation is an approach to automation design in which tasks are dynamically allocated between the human operator and computer systems. Psychophysiology has two complementary roles in research on adaptive automation: first, to provide information about the effects of different forms of automation, thus promoting the development of effective adaptive logic; and second, to yield information about the operator that can be integrated with performance measurement and operator modelling to aid in the regulation of automation. This review discusses the basic tenets of adaptive automation and the role of psychophysiological measures in the study of adaptive automation. Empirical results from studies of flight simulation are presented. Psychophysiological measures may prove especially useful in the prevention of performance deterioration in underload conditions that may accompany automation. Individual differences and the potential for learned responses require research to understand their influence on adaptive algorithms. Adaptive automation represents a unique domain for the application of psychophysiology in the work environment.