Human-Animal Teams as an Analog for Future
Human-Robot Teams: Influencing Design and
Fostering Trust
Elizabeth Phillips
Institute for Simulation & Training, University of Central Florida

Kristin E. Schaefer
U.S. Army Research Laboratory

Deborah R. Billings
Agilis Consulting Group, LLC

Florian Jentsch
Institute for Simulation & Training, University of Central Florida

Peter A. Hancock
Institute for Simulation & Training, University of Central Florida
Our work posits that existing human-animal teams can serve as an analog for developing effective
human-robot teams. Existing knowledge of human-animal partnerships can be readily applied to
the HRI domain to foster accurate mental models and appropriately calibrated trust in future
human-robot teams. Human-animal relationships are examined in terms of the benefiting roles
animals can play in enabling effective teaming, as well as the level of team interdependency and
team communication, with the goal of developing applications in future human-robot teams.
Keywords: human-robot interaction, human-animal interaction, mental models, trust

Journal of Human-Robot Interaction, Vol. 5, No. 1, 2016, Pages 100-125, DOI 10.5898/JHRI.5.1.Phillips
Introduction
Recent years have seen massive growth in global investment in robotic technologies across a
variety of sectors, now including a number of nontraditional robotic domains. The integration of
robotic technologies has led to the reimagining of robots as assets that more closely resemble
interactive companions, driving a transition of the robot’s role from a tool to
an interactive team member. This transition to future design is most commonly envisioned for
areas that require extensive social interaction with the general populace or in high-risk
environments that require increasingly interdependent forms of teaming between humans and
robots. For example, the future vision of military robots is one in which robots will possess
capabilities that allow them to serve as team members alongside soldiers, working to achieve
common goals and complete team tasks (e.g., small unit military fire teams; Army Research
Laboratory, 2011). As such, robots will be expected to move dynamically within their team,
maintain mission and situation awareness, understand salient features of their environment, and
proactively share information with their human companions.
However, the development of human teammate-like competence (e.g., cognitive architectures,
theory of mind, ability to react appropriately in dynamic team environments) is severely limited by
the progress of the current state of the art in robotic technologies. These contemporary, and
presumably temporary, constraints imply that, in the near term, robots will not fully replicate their
human counterparts in functionality or intelligence. Yet, to meet the needs for near-term
development, robots will need to possess a subset of skills that can be leveraged to perform work,
not unlike the ways that working animals are utilized (see Groom & Nass, 2007; Hancock, 1997;
Helton, 2009). Therefore, human-animal teams serve as a convenient analog for imagining near-
term design capabilities of robots, because these teams are capable of completing a wide variety of
work by leveraging the unique capacities of each team member (both human and animal).
Interaction mechanisms used by human-animal teams allow us to forecast what soldier-robot
teaming, for example, could resemble in the near future.
The purpose of our paper then is to provide insight into the near-future stages of human-robot
teaming through an examination of the different human-animal team types. Our goal is to
extrapolate for future trust and performance in these up-and-coming human-robot teams. To begin,
we explore why the analog of a human-animal team can be a useful mechanism for supporting
human mental models of robots. Then, we review the functional benefits that human-animal teams
provide, the capabilities that support teamwork, and how trust is engendered, developed, and
sustained in these types of teams. Two key features that support human-animal teaming include
communication and interdependent interaction. To conclude, we discuss how the human-animal
teamwork metaphor can be leveraged to inform future human-robot teams, with an especial eye
toward critical military contexts.
Mental Models of Robots: The Human-Animal Team Analogy
Although the capabilities for modeling complex human-like teammate relationships in human-
robot teams may not exist (yet), we may draw upon other teammate-like relationships for design
guidance (e.g., including interaction patterns, forms of communication, and features that engender
trust). For instance, human-animal teams function with an implied interaction hierarchy, in which
humans generally maintain authority over animals (Morrow & Fiore, 2012). Similar human-robot
interaction designs may be beneficial for contexts in which devolving inter-team authority is a
possibility and in which designing for the human-robot team to fail gracefully represents an
ongoing challenge (e.g., soldier-robot teams working in battlefield environments). The human-
animal team analogy provides some level of insight that can help transition some of the ‘unknown-
unknowns’ to ‘known-unknowns’ during the near-term transition of a robotic system away from
teleoperations or control-by-wire. This knowledge can be leveraged to determine possible design
characteristics and training that influence a person’s expectations toward future interaction. These
expectations, along with knowledge estimations and attitudes, play a significant role in mental
model formation and trust in relationships in general.
Mental models refer to structured, organized knowledge possessed by humans that describe,
explain, and predict a system’s purpose, form, function, or state (Rouse & Morris, 1986). These
models impact how people interact with the world around them and are continuously being
modified and updated as new information is acquired. This is especially the case concerning
systems with which people are unfamiliar (Norman, 1983). Prior research has suggested that
mental models of robots are easily influenced by superficial features, including robotic form (i.e.,
the presence of anthropomorphic or biologically inspired limbs, the presence of extraneous
hardware; Kiesler & Goetz, 2002; Sims et al., 2005), perceived country of origin (Lee, Kiesler,
Lau, & Chiu, 2005), and the way robots communicate (Torrey, Fussell, & Kiesler, 2013). This
relationship between physical forms and mental models has been shown to influence initial
perceptions of robot trustworthiness (Schaefer, Sanders, Yordon, Billings, & Hancock, 2012b).
Yet, people often hold ill-formed or incomplete mental models of robots, which do not necessarily
align with the true behavior, capabilities, or limitations of current robotic systems (Hancock,
Billings, & Schaefer, 2011a). This misalignment of an individual’s mental model and the actual
system is, in part, due to the general populace’s access to depictions of robots in popular culture
and fictional media, which are infused with portrayals of robots that do not represent actual
capabilities of contemporary and real-world robots (see Schaefer, Adams, Cook, Bardwell-Owens,
& Hancock, 2015). Problematically, therefore, human mental models of robots often more closely
resemble popular depictions of robots rather than actual real-world robots (Ososky, Phillips,
Schuster, & Jentsch, 2013). This is important, because prior research on human interaction with
automated systems has shown that when expectations are unmatched by reality, humans are likely
to distrust or discontinue using the automated system (Parasuraman & Riley, 1997). As such, trust
and mental models will be central to successful human-robot interaction with near-future robots.
Trust will dictate whether or not humans choose to rely on robots in human-robot team tasks
(Phillips, Ososky, & Jentsch, 2014), and mental models will provide the foundation for that trust,
by supporting human understanding of the capabilities and limitations of their robotic teammates
(Ososky, Schuster, Phillips, & Jentsch, 2013). Human-animal team analogs can be one means
through which to foster veridical mental models of robots that provide a more accurate
representation of their near-future capabilities. Additionally, human-animal team analogs of
interaction, as opposed to adult-child, or human-human analogs, also imply that full human
emulation in robots is not necessary for successful human-robot team performance.
Research has shown that analogies can be used to refine mental models and allow individuals
to leverage perceived similarities between concepts for subsequent generalization (Gentner &
Holyoak, 1997). For instance, the desktop and file drawer metaphors for computer operating
systems were first introduced at the genesis of home computing. One of the most common analogies is
through the direct emulation of animal characteristics and behaviors in technology. The support
for this sentiment is founded in the work on ‘shaping.’ For example, Helton (2010) found that the
shape or physical form of an animal influences how people form perceived attributions related to
that animal’s intelligence. Further, Finkbeiner, Russell, and Helton (2014) found that people, even
those with minimal experience interacting with dogs, were able to form immediate impressions
regarding the dog’s personality or functional capabilities based on shape information alone. This
relationship between physical form to perceived intelligence and expected capabilities has been
mirrored in human-robot interaction (HRI) and was shown to influence initial perceptions of
trustworthiness (Schaefer et al., 2012b). Several existing robots are designed to look or behave
like animals (e.g., zoomorphic, such as AIBO), primarily to evoke certain responses from humans
or for task or physical environment functionality (Coeckelbergh, 2011). Others employ animal-
inspired architecture to navigate in certain terrains and add to functionality. For example, BigDog
is a legged robot designed to function essentially as a pack mule and traverse terrain not accessible
by wheeled or tracked vehicles (Raibert, Blankespoor, Nelson, & Playter, 2008). Even the
behaviors of bees, birds, and ants supply the underlying computer architecture for modern robotics
and computer programming related to path finding, load balancing, communication, and
navigation in unknown terrains (e.g., Reynolds’ “boids” flocking model, ant colony optimization; Balch &
Arkin, 1995; Reynolds, 1987; Vaughan, Støy, Sukhatme, & Matarić, 2002). Robotic designs
inspired by nature (biomimetic designs) have brought about tremendous changes and
improvements, especially concerning the dexterous and unique mobility of robots (Hancock,
2015). Other relevant examples include octopi-inspired swimming robots (Ackerman, 2014),
swarm behaviors in nano-UAVs (Werner, 2013), and cheetah-inspired, fast, four-legged
locomotion (Newman, 2014).
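To make this concrete, below is a minimal sketch of Reynolds’ (1987) three flocking rules (separation, alignment, and cohesion) as they might be applied to multi-robot coordination. All function names and parameter values are illustrative assumptions, not taken from any system cited above.

```python
# Minimal sketch of Reynolds' (1987) flocking rules: separation, alignment,
# and cohesion. Parameter values are illustrative placeholders only.
import numpy as np

def flocking_step(positions, velocities, radius=5.0, sep_dist=1.0,
                  w_sep=1.5, w_align=1.0, w_coh=1.0, dt=0.1):
    """Advance every agent one time step using only local neighbor information."""
    new_vel = velocities.copy()
    for i in range(len(positions)):
        offsets = positions - positions[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbors = (dists < radius) & (dists > 0)
        if not neighbors.any():
            continue
        close = neighbors & (dists < sep_dist)
        sep = -offsets[close].sum(axis=0)                           # steer away from crowding
        align = velocities[neighbors].mean(axis=0) - velocities[i]  # match neighbors' heading
        coh = offsets[neighbors].mean(axis=0)                       # steer toward local center
        new_vel[i] += dt * (w_sep * sep + w_align * align + w_coh * coh)
    return positions + dt * new_vel, new_vel

# Ten agents with random initial states converge toward coherent flocking.
rng = np.random.default_rng(0)
pos, vel = rng.uniform(0, 10, (10, 2)), rng.uniform(-1, 1, (10, 2))
for _ in range(100):
    pos, vel = flocking_step(pos, vel)
```

Each rule uses only local information, which is why the same pattern scales from simulated boids to physical robot teams.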
While animal-inspired designs have aided in improved robotic movement and manipulation,
we maintain that design inspired by human-animal teaming can provide similar gains in robotic
development, especially as it concerns improved human-robot interaction and teaming. As most
people have far more experience interacting with animals than with robots, they are generally
more able to recognize limitations in an animal’s ability to complete a task (Phillips, Ososky,
Swigert, & Jentsch, 2012). Along these lines, it is advantageous for design and training programs
of near-future robots to make use of established working relationships with animals that humans
have cultivated over centuries of interaction. Similarly, it may also be advantageous to make use
of established training paradigms, as similar techniques have been successfully used to train
synthetic agents (Blumberg et al., 2002).
In consequence, robotic designs inspired by human-animal relationships can lead to faster
acceptance while fostering more effective interactions between humans and robots, as humans tap
into well-established mental models, promote better understanding of near-future robots, and thus
appropriately calibrate trust in near-future robotic teammates.
Human-Animal Teaming: A Vision for Human-Robot Teaming
While robotic mirrors of human-animal teams exist, many of these do not (yet) offer a complete
solution or suitable stand-in for animals in all situations. However, our understanding of varying
types of human-robot interactions can be improved by drawing comparisons with human-animal
partnerships (Coeckelbergh, 2011). Human-animal teams may specifically help address or identify
current technology limitations specific to cognition, perception, action, and human-robot
interaction. Animals have been chosen and actively domesticated to fulfill a variety of roles
based on their natural abilities, instincts, characteristics, and functional capabilities (e.g., a horse
for riding, a carrier pigeon for long-distance communication, or a dog for guidance of the blind).
These relationships represent a unique form of partnership that often directly benefits humans
physically, emotionally, and cognitively (Wilson, 1994). Here, we expand on different types of
human-animal teams that have been used to successfully benefit people. We provide this review
for the purpose of understanding the different types of human-animal teams that currently exist
and provide a reference for future roboticists to consider during the development stage of the robot
lifecycle. Bi-directional communication and interdependency are research areas that still need
improvement and could benefit from a more in-depth look at human-animal teams (this topic will
be discussed at length later in the paper).
Human-Animal Teams: Physical Benefits
Robot action specific to mobility and manipulation has made major advances within controlled
and simulated settings (e.g., lifting massive objects, precision, and repeatability). However,
outside these controlled settings, robots currently require teleoperation due to the challenging
operational environment characteristics (e.g., people, dynamic environments, objects, and sensory
variations; Kemp, Edsinger, & Torres-Jara, 2007). Additionally, robots struggle with many
perceptual tasks, including perception-based autonomous navigation, especially in cluttered and
complex terrain (Jackel et al., 2007; Nguyen-Huu, Titus, Tilbury, & Ulsoy, 2009). By looking at
human-animal teams used to provide physical benefits to people, we are able to start identifying
team structure elements that might help advance mobility, manipulation, and perception
limitations of current human-robot teams.
Figure 1. Physical benefits of human-animal relationships paralleled in robotic development.
Here, we look at human-animal teams that have been used to provide physical benefits
requiring advanced action and perception in order to replace, multiply, or augment a person’s
physical needs. Fig. 1 is not all-inclusive, but we present it here to provide some examples of
human-animal teams that can be leveraged for current and future robotic systems.
Historically, the most direct benefit of human-animal teaming has been to replace physical
capabilities of the person with innate animal abilities that exceed human capability. A
prime example is reflected in improved locomotion capabilities. This is most commonly seen in
animals that serve as mount or pack transporters (e.g., horse, elephant, camel-rider relationship, or
pack mules). Here, the animals provide more efficient locomotion for transportation of other
people or cargo (Klinkenborg, 1993; Mills, 2002; Nesbitt & Nesbitt, 2004; The Robinson Ranch,
2003). This type of human-animal teaming is one that has now been quite extensively explored
within the robotics community. For example, Boston Dynamics (now owned by Google) designed
and built a military robot, BigDog, to carry cargo in order to reduce soldier load (Boston
Dynamics, 2013). The design of BigDog mirrors that of a large dog or small mule in order to
successfully locomote in rough or uncertain terrain. More recent efforts have also been in the
transportation area with the design and development of a robotic passenger vehicle (see Gannes,
2014). In this arena, improving human movement capabilities has primarily been accomplished
through improvements in robotic locomotion, specifically improved quadrupedal movement in the
case of BigDog, and automated driving in the case of Google’s self-driving vehicle.
Another form of human-animal teaming is one in which the animal multiplies human
physical capabilities. Here, animals have been leveraged to assist people with tasks that exceed
human capabilities, such as lifting or pulling heavy objects or tasks requiring extended stamina.
For example, an ox is used to assist farmers with plowing fields and pulling heavy equipment
(“Oxen Team Practice,” 2010). The robotics field has traditionally leveraged this type of teaming
in manufacturing and/or industrial domains. This technology built on George Devol’s work in the
1950s on programmable transfer machines and led to the creation of the robotic arm (e.g., the
Stanford arm, designed by Victor Scheinman in 1969 at Stanford University). As stated
previously, the development of robotics for this type of function is rather advanced within
controlled environments; however, human-robot teaming is moving into unstructured
environments. For example, more recent applications have extended to areas like search and
rescue (SAR), in which robotic arms are used to lift heavy objects, as well as to medical domains
in which robotic arms are used to lift patients out of beds or into assistive chairs (e.g., RIBA, robot
nurse). Human-animal teams have been successful in semi-structured and unstructured
environments, and their team structures can thus be leveraged as models. Human-animal teaming can also provide
direct assistance to humans with specific physical limitations or disabilities. These animal
teammates are often found in the physical therapy and service domains. Service animals can
augment and extend natural human functions. Such animals can help to enhance an
individual’s independence and execution of daily activities that they would not be able to perform
safely on their own (Watts & Everly, 2009; Zapf & Rough, 2002). For example, according to the
Americans With Disabilities Act of 1990 (C.F.R. § 36.104, 2002), a service animal may be used for
“guiding individuals with impaired vision, alerting individuals with impaired hearing to intruders
or sounds, providing minimal protection or rescue work, pulling a wheelchair or fetching dropped
items.” Such animals also reduce stress by allowing individuals to retain a greater degree of self-
determination and dynamic adaptation to immediate and prospective environments. Service
animals, therefore, provide a means for individuals to live more independently. This is an area in
which robots are beginning to play their surrogate role, especially for therapies involving mobility
related issues. Examples include the care-providing robot FRIEND (Functional Robot arm with
user-frIENdly interface for Disabled people), developed by the ARMAROB research consortium.
The FRIEND robot seeks to provide enhanced dexterous manipulation for people suffering from
paralysis or other musculoskeletal disorders (Volosyak, Ivlev, & Graser, 2005).
While a number of robot advancements can directly mirror these types of teams, additional
lessons can be learned. This type of exploration into the physical benefits of teaming can also
provide insight into the scope of relevant and non-relevant behaviors, as well as provide additional
insight into the mode of communication and type of feedback that may be important for
naturalistic communication in this type of setting.
Human-Animal Teams: Emotional Benefits
A more recent goal to advance human-robot teaming relies on the integration of what Fong,
Nourbakhsh, and Dautenhahn (2003) term, “human social” characteristics, or the “ability to
express or perceive emotions, communicate with high-level dialogue, learn or recognize models of
other agents, establish or maintain social relations, use natural cues (gaze, gestures, etc.), exhibit
distinctive personality and character, and learn or develop social competencies” (p. 145). A
number of technical advancements have been made in the integration of human-like expression
and perception of emotions (e.g., Kismet; Breazeal, 2000). However, there is still a need for the
advancement of emotion-based characteristics (e.g., physical features, communication-based
behaviors, and physical behaviors or actions) as we continue toward team-based interactions (see
also Schaefer, Cook, Adams, Bell, Sanders, & Hancock, 2012a). Integrating human-like social and
emotional characteristics is a difficult task, however, and not all advancements that incorporate
emotional functions into robotic design need to, or even should, mirror human-like emotions,
especially in the near-term development cycle.
Animals provide emotional benefits to human team members by providing comfort and
companionship. Animals can also inform a person of their emotional capabilities and even
augment emotional capabilities (see Fig. 2). The fields of social and therapeutic robotics have
begun to leverage much of this work.

Figure 2. Emotional benefits of human-animal relationships.
One of the most common human-animal relationships is that of companionship and comfort
(e.g., pets). Companionship in these types of teams refers to animals that provide social support or
comfort by reducing loneliness for their human counterparts (Staats, Wallace, & Anderson, 2008).
This, in turn, has a number of health-related benefits, including improved immunological and
psychological functioning (e.g., decreased stress, anxiety, and blood pressure; Edwards & Beck,
2002; Wesley, Minatrea, & Watson, 2009; Yorke, Adams, & Coady, 2008). Two key elements
that come from exploring human-animal companionship are the importance of appearance and
proximity. For example, some domesticated animals, such as cats and dogs, are specifically bred
to produce 'cute' or attractive traits in order to facilitate instantaneous human-animal bonds that
offer unconditional love and trust (Keaveney, 2008). Additionally, an animal’s proximity to its
human teammate may also impact a human’s tendency to trust an animal. Such co-location has
also been found to have an emotional effect on people interacting with domesticated animals,
especially dogs and cats. Closeness allows pet owners to feel safe and comforted (Zasloff, 1996),
which may in turn influence the evolution of trust development and build lasting trust (e.g., horse
and rider; Keaveney, 2008).
The robotics industry is now starting to design a variety of systems that provide this level of
companionship. Many socio-emotional robot designs mirror the physical form of animals.
A relevant example here, in which robots are starting to provide similar emotional benefits to
humans, includes Paro, a small seal-like robot designed to alleviate depression by providing
companionship to elderly individuals (Subbaraman, 2013). Here, appearance is very important to
design. The zoomorphic characteristics of animals (i.e., the attribution of animal characteristics,
including physical appearance, to a machine) have also been found to impact trust (see Schaefer,
2013). People may use the appearance of an animal (or other entity) to assign that entity initial
attributes, regardless of whether or not the attributes match the animal’s true characteristics and
capabilities (Upham Ellis et al., 2005).
Animals have also been used to edify or inform individuals of a deficiency in emotional
capability. For example, the educational arena has been using animals to teach social skills, as well
as to work with individuals dealing with anxiety-inducing or abusive situations (Walsh, 2009). A
child’s interaction with an animal (e.g., brushing or petting, reading to the animal) can facilitate
subsequent social behaviors, assist with disabilities such as dyslexia, reduce a child’s perceived
pain levels, improve self-esteem and self-acceptance, and enhance communication skills (see Fig.
2). For instance, Tanaka and Matsuzoe (2012) investigated the utility of using social robots like
NAO to induce care-taking behaviors in children, encouraging care-giving behaviors (e.g.,
teaching/instructing the robot on some skill) that in turn reinforce a learning-by-teaching
paradigm. This paradigm
allows children to gain deliberate practice with various skills by teaching those specific skills to a
robot.
Human-animal teaming has also been used to augment emotional capabilities. Here,
augmentation refers to situations in which humans and animals work jointly, and through
this interaction, the person is able to express some advanced emotional capability. For example, by
providing enhanced levels of stimulation, therapeutic horseback riding, often referred to as
hippotherapy, has been used to improve social functioning and directed attention in children,
including those with autism spectrum disorder (Bass, Duchowny, & Llabre, 2009). In the robotics
domain, therapy robots like Romibo are similarly being used as therapy tools for children on the
autism spectrum (Shick, 2013). The concept of augmented emotion
within human-robot interaction is a recent theoretical construct that is only now beginning to be
explored. Schaefer et al. (2012a) proposed that robots may provide an ideal mechanism for the
concept of remote embodiment of emotions through “downloading emotions” (i.e., use robotic
technology to provide more salient emotional cues of a person’s emotions) or “emotional
displacement” (i.e., fulfill an emotional void). Therapy through the use of robotic dolls is thought
to improve mental functioning by activating rational thinking, as patients imagine caring for the
doll as they would a child (Yoshitaka, Masayoshi, Taro, Masaru, & Tsuyoshi, 2012).
Human-Animal Teams: Cognitive Benefits
In the area of cognition, reaching human-like intelligence is a difficult problem. While a number
of advancements have been made in cognition, including the development of cognitive
architectures such as Soar (Newell, 1990), ACT-R (Anderson & Lebiere, 1998), and EPIC
(Meyer & Kieras, 1997), symbolic and subsymbolic generalizations are still needed (but difficult
for computers) for more robust decision making (see Kelley & Long, 2010). However, in teaming
and interaction, the human oftentimes only needs to perceive the optimal intelligence level for a
specific task. Therefore, it is important to be aware that the coupling of perception and action
impacts perceived intelligence (Brooks, 1999). Human-animal interaction provides the analogy for
cooperation without the need for higher-level cognition (see also Sowa, 2006; Kelley & Long,
2010).
Animals can provide humans with additional sensory information that can then be used by the
team to make better, more informed decisions, especially in extreme environments where human
sensory faculties are limited. However, human-animal teams vary greatly in the sophistication of
their skill sets. For example, working canaries act as static sensors of the environment, while
canines are able to independently act upon their environment and communicate information to
their human teammates. Here, we describe how animals have been used to replace, extend, or even
augment a human teammate’s cognitive needs (see Fig. 3).

Figure 3. Cognitive benefits of human-animal relationships.
These animals enhance cognitive capabilities by providing additional sensory
information that can subsequently be used to make more informed decisions. One way that
animals fill this role is by prosthetically replacing cognitive capabilities, often by acting as
sentinels used to detect risks to human team members. For instance, as late as the 1980s, canaries
were still used as early warning detectors to provide British coal miners with a means to recognize
the presence of poisonous gases within a mine, as the avian respiratory system is a more sensitive
indicator of air quality and toxicity (BBC News, 1986; T. Clark, K. Clark, Paterson, Mackay, &
Norstrom, 1988). Nano UAV robots, such as the Black Hornet (Fig. 3), were used by British
soldiers in Afghanistan to replace sensing capabilities needed for scouting and reconnaissance.
Following a series of programmable waypoints, these tiny UAVs stream video imagery to their
human counterparts, thereby replacing the need to physically peer around corners or obstacles
(Fincher, 2013).
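The Black Hornet’s actual control software is not public, so as a purely illustrative toy model, waypoint following of the kind described above can be reduced to a loop that repeatedly steps toward the next programmed waypoint while reporting the vehicle’s pose (a stand-in for streamed imagery) back to the operator:

```python
# Toy waypoint follower (an assumed illustration, not the Black Hornet's
# controller): fly toward each programmed waypoint in turn, yielding the
# current pose at every step as a stand-in for streamed video imagery.
import math

def follow_waypoints(start, waypoints, speed=1.0, tolerance=0.5):
    x, y = start
    for wx, wy in waypoints:
        # Keep tolerance >= speed / 2 so the loop cannot oscillate forever.
        while math.hypot(wx - x, wy - y) > tolerance:
            heading = math.atan2(wy - y, wx - x)
            x += speed * math.cos(heading)
            y += speed * math.sin(heading)
            yield (x, y)

for pose in follow_waypoints((0, 0), [(10, 0), (10, 10)]):
    print(f"streaming imagery from ({pose[0]:.1f}, {pose[1]:.1f})")
```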
A second type of human-animal team is used orthotically to extend or augment the human
team member’s cognitive reach by using the specific skills of an animal to assist in task
completion. These include canine law enforcement, military working dogs (MWDs), sheepdogs,
sled dogs, and specialist animals used in search-and-rescue (SAR) environments that exemplify
human-animal relationships working in, and adapting to, difficult and dangerous environments
(Finkel, 2012; Hancock, 1997; Helton, 2009). In extremely risky environments, the human
handler’s life and health may be in jeopardy if such human-animal interaction is unsuccessful. In
many of these types of teams, animals are able to enhance the sensory capabilities of the team and
enable better decision-making. For example, failure of a MWD to detect insurgents hiding in a
warehouse may lead to a group of unsuspecting soldiers walking into an ambush. Similar robotic
replacements are deployed in place of humans in dangerous environments. For example, robotic
tools were used in the aftermath of the Fukushima nuclear disaster in Japan in early 2011. These
robots helped officials maintain control over one of the nuclear reactors by providing a view of the
damaged nuclear core in locales that posed a significant threat to human operators (Goldenberg &
McCurry, 2011).
The Relationship to Trust
As robots take on roles of integrated team members (presumably without human-level
intelligence or competence), the issue of trust becomes a major concern (Hancock et al., 2011a). Not
only does trust determine when and how much a user will rely on a particular system (Lee & See,
2004), it impacts the degree to which a human teammate is willing to accept contributions from a
robot (e.g., sensory data, information to assist in decision-making, suggestions for courses of
action). Without adequate trust, the human may fail to take advantage of the inherent benefits of
the robotic system (Freedy, de Visser, Weltman, & Coeyman, 2007). For this reason, trust is especially
critical when it comes to decision-making in high-risk environments, such as military combat
missions (Park, Jenkins, & Jiang, 2008).
Similarly, as trust mediates human-automation relationships (Sheridan, 1975; Sheridan &
Hennessy, 1984), it also mediates human-animal relationships. Therefore, a human’s trust in a
non-human teammate is a necessary requirement to ensure that any functional relationship will
ultimately be effective regardless of domain, environment, or task. Here, we provide a brief review
revealing how humans learn to trust animals, in order to describe possible analogs for human-
robot teams. For a more in-depth qualitative review on human-animal trust, we refer the reader to
two extended technical reports: Billings et al. (2012) and Phillips, Ososky, Swigert, Grove, and
Jentsch (2010).
In general, trust exists prior to any interaction (Sheppard & Sherman, 1998). This principle
also holds true for human-animal relationships. Upon first encountering an animal, a person draws
conclusions about its attributes, personality, capabilities, and level of intelligence, regardless of
whether or not these reflect the animal’s true characteristics, behaviors, or capabilities (see Upham
Ellis et al., 2005; Helton, 2010; Finkbeiner et al., 2014). These linkages are also found in HRI and are discussed in terms of
influence on the initial trustworthiness of a partner, animal, machine, or robot (Schaefer, 2013).
What can be drawn from this discussion is that shape or physical form not only matters but can
also impact the initial trustworthiness of a partnership. Therefore, considerations should be made
in the design of the robot’s physical form.
Trust also develops and changes during an interaction. In other domains, this is called either
cognition-based trust (Burke, Sims, Lazzara, & Salas, 2007; McAllister, 1995) or history-based
trust (Merritt & Ilgen, 2008). This type of trust is influenced by both the capabilities of the animal and the
predispositions of the person. Specific to animal capabilities, predictability of behavior appears to
be key in the development of a trusted relationship. For example, trust can be influenced by the
animal’s ability to follow instructions given by the human (Pepe, Ellis, Sims, & Chin, 2008).
Additionally, awareness appears to play an influential role in trust development, in that
determining or predicting the animal’s behavior in different circumstances and interpreting one’s
own capabilities in response to the animal’s behaviors is important. In human interpersonal teams,
this is referred to as directed and reflected knowledge (Mortensen & Neeley, 2012).
For instance, a dog handler must be continually aware of potential dangers in the environment
as well as the animal’s obedience and predictability in those different dangerous scenarios
(Sanders, 2006). Yet, people will often still hold a predisposed mistrust of the animal, no
matter how much the animal has been domesticated or trained (Keaveney, 2008). In other words,
even though their experiences with animals can be positive, humans still hold beliefs that animals
will exhibit behavior characteristically associated with that particular animal (e.g., a horse will act
like a horse and bolt). However, it has also been shown that mutual trust can only occur after an
established means of communication and respect between the two entities has developed. Due to
the importance and impact of communication, a more in-depth description of communication and
its relationship to trust is provided in a later section. Trust also changes over time, based on
the history of prior interactions. According to Wilson (1994), past experiences with animals serve
as indicators of future relationships. If experiences were largely positive, the likelihood of future
interactions with the same or similar animals may be greater than if experiences had been
predominantly negative. Such prior experiences have also been found to relate to trust in animals. For
a successful partnership to occur, humans must spend time with their animals each day, thereby
enabling them to predict how that animal will react to most situations. This is crucial to the
development of an understanding of the animal (Robinson, 1999). This is especially relevant for
interaction in high-risk environments. Trust plays the greatest role in contexts where there are high
levels of uncertainty and risk and a lesser role in situations that are non-threatening and
predictable (Miller, 2005). In effect, the type of human-animal partnership will dictate the levels
of risk involved in the interaction. As such, the role of situational trust in a human-pet relationship
is much lower, due to the low amount of risk involved, than the trust involved in a human-animal
interaction occurring in dangerous (e.g., riding a horse) or life-threatening environments (e.g., sub-
zero temperatures), due to high levels of uncertainty and risk. For example, dogsled patrols require
humans to rely on their dog team in highly remote Arctic locations, where hunger, life-threatening
injuries, exhaustion, frostbite, and threats from predators are extremely likely (Finkel, 2012).
Further, the extent to which a human must rely on an animal in order to perform specific tasks or
to extend their capabilities (e.g., guide dog for the blind) can impact the degree of trust the human
must have in order to interact most effectively. Essentially, the riskier the situation is, the more
important human-animal trust becomes, as sometimes a human must rely solely on the decisions
of their animal partner (e.g., sled-dog and human partnership during a bad storm in a secluded
area; Kuhl, 2008).
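The sources above treat trust qualitatively, but the history- and risk-dependence they describe can be given a simple worked illustration. The sketch below is our own construction, not a model proposed by any cited author: trust is an exponentially weighted average of past interaction outcomes (cf. history-based trust; Merritt & Ilgen, 2008), and the trust required before relying on a partner rises with situational risk. All constants are assumed.

```python
# Illustrative-only model: trust as an exponentially weighted history of
# interaction outcomes, with a reliance threshold that scales with risk.
# This is a sketch of the qualitative claims above, not a cited model.
def update_trust(trust, outcome, learning_rate=0.2):
    """Blend the latest outcome (1.0 = success, 0.0 = failure) into trust."""
    return (1 - learning_rate) * trust + learning_rate * outcome

def should_rely(trust, risk):
    """Rely on the partner only when trust clears a risk-scaled threshold."""
    return trust >= 0.5 + 0.4 * risk  # risk in [0, 1]; constants assumed

trust = 0.5  # neutral prior, before any interaction history
for outcome in [1, 1, 1, 0, 1, 1]:  # a mostly positive interaction history
    trust = update_trust(trust, outcome)

print(should_rely(trust, risk=0.2))  # low-risk task: True
print(should_rely(trust, risk=0.9))  # life-threatening task: False
```

The same interaction history licenses reliance in a low-risk task but not in a high-risk one, mirroring the observation that trust matters most where uncertainty and risk are greatest.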
Therefore, the amount of training a human and their animal partner undergo before interacting
impacts trust in the human-animal relationship. For a respectful partnership to develop, both the
human and the animal partner should be trained appropriately (Greenebaum, 2010). For example,
the amount of training horse owners have with their horse influences their ability to relate with,
understand, and trust them (Keaveney, 2008). This may also be the case with human-animal
partnerships with pets, service animals, etc. For example, it has been demonstrated that consistent
interaction with a therapy animal can lead to the development of trust in that animal. This can
transfer (over time) to interactions with people (Wesley et al., 2009). Thus, training reinforces any
relationship between humans and animals.
Techniques individuals employ when training animals are reflections of their specific
perceptions (Greenebaum, 2010), or mental models, of human-animal relationships. Since humans
are socially inclined, they tend to automatically respond to social cues (Lee & Kiesler, 2005) that
generate estimates of a system’s, or in this case, an animal’s, abilities and overall performance.
Through training, animals develop an understanding of human body language, and vice versa. As
a formal interaction, training not only socializes an animal into its role as either a family member
or a teammate, but it also highlights the responsibilities of the trainer (Greenebaum, 2006-2007).
In turn, the socially-constructed status of the animal (companion, teammate) and of the human
(owner, guardian) will influence training methods (Sanders, 1999, 2006; Greenebaum, 2004; Irvine,
2004) and the human-animal trust relationship. Certain training techniques, such as reward-based
training (behavior modification), establish trusting relationships (Greenebaum, 2010) and mutual
respect between human and animal. As trust is a two-way process, it requires both parties to
engage in ways that support a respectful partnership.
Guided by trust, humans rely on animals for their judgment, skills, and obedience, while
animals rely on humans for guidance and care. Using this knowledge to develop zoomorphic
robotic teammates may be advantageous, because it will build on the mental models people
already possess about human-animal training relationships. For instance, robots and teammates
may be able to build trusting relationships in settings where each entity is learning new skills. This
may provide an opportunity for each to become familiar with the other’s abilities and tendencies
as they work through tasks together. Researchers have noted that the most important requirements
for developing what will ultimately be human-robot interactions that resemble teammate
interactions are system trust and the ability to predict system behavior (Marble, Bruemmer, Few,
& Dudenhoeffer, 2004). As such, establishing a training relationship between humans and their
future robotic partners may help to provide a basis upon which an understanding of robot
capabilities and limitations can be formed and trusting relationships can be established.
Factors Involved in Human-Animal Teamwork
Current human-animal teams are capable of accomplishing a variety of complex tasks and
providing a range of benefits to humans. It is important to understand the behaviors and
capabilities such teams exhibit, as they can be used to model the capabilities needed to support
near-future human-robot teaming. While reviewing existing human-animal teams, two factors
associated with teamwork proved prominent and widespread. These were task interdependence
and communication, both of which are predicated on trust.
Task Interdependence
Task interdependence is the “degree to which group members must rely on one another to perform
their respective tasks effectively” (Saavedra, Earley, & Van Dyne, 1993, p. 61). This may take the
form of initiated (i.e., work progresses from a particular unit or job to another job), or received
(i.e., the degree to which one job is affected by the progression of work from another job) task
interdependence (Kiggundu, 1981). Initiated interdependence is determined by providing outputs
to another group, team, or individual team member within the work environment, while received
interdependence is determined by the receipt of workflow outputs from another entity in the work
environment.
Additionally, task interdependence can be described as pooled, sequential, or reciprocal in its
nature. In pooled interdependence teams, members complete their work separately, there is a low
level of direct interaction between individuals, and the summation of their individual contributions
dictates successful team performance (Saavedra et al., 1993). As such, each member of a pooled
interdependence team initiates and receives low interdependence. In sequentially interdependent
teams, each member carries out their roles in a pre-defined sequence, requiring that each step be
performed efficiently and in the correct order. On the other hand, reciprocal task
interdependence is characterized by a distinct, externally imposed role structure within the group,
in which each member has a certain expertise or specialization. Additionally, the sequence in
which sub-parts of a task are carried out can be conducted in a flexible order.
The distinctions made between pooled, sequential, and reciprocal interdependence also
represent increasing needs for coordination between individual team members and reliance on
others for successful job performance. For successful team performance, more coordination
between individual team members is needed in sequentially interdependent teams than in teams
with pooled interdependence. Similarly, more coordination between individual team members is
needed in reciprocally interdependent teams than in sequentially interdependent teams
(Thompson, 1967, as cited in Saavedra et al., 1993). As described by Thompson (1967), these
“three patterns of work flow each represent a different intensity or degree of linkage between
units” (p. 486). The distinctions between pooled, sequential, and reciprocal interdependence can be
used to classify various human-animal teams, where teammate-like relationships are characterized
by increasingly reciprocal interactions.
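Because the taxonomy orders teams by rising coordination demand, it can be encoded directly as a data structure that HRI designers might use to classify a planned team; the sketch below is illustrative, and the example assignments anticipate the Roomba, manufacturing, and canine examples developed later in this section.

```python
# Illustrative encoding of the pooled / sequential / reciprocal taxonomy
# (Saavedra et al., 1993). Coordination demand rises with the enum value.
from enum import IntEnum

class Interdependence(IntEnum):
    POOLED = 1      # members work separately; outputs are simply summed
    SEQUENTIAL = 2  # fixed order; each step feeds the next
    RECIPROCAL = 3  # flexible order; outputs flow both ways

TEAM_EXAMPLES = {
    "human + Roomba (household cleaning)": Interdependence.POOLED,
    "human + manufacturing robot (machining line)": Interdependence.SEQUENTIAL,
    "handler + narcotics-detection canine": Interdependence.RECIPROCAL,
}

for team, level in sorted(TEAM_EXAMPLES.items(), key=lambda kv: kv[1]):
    print(f"{team}: {level.name} (coordination demand {int(level)}/3)")
```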
According to interdependence theory as described by Victor and Blackburn (1987), the
relationship between members of a work unit can be described by the requirements for joint
activity as dictated by the workflow in the team, requirements for one’s own actions, and
requirements for the actions of others. These three requirements give individuals in interdependent
teams a degree of control (either absolute or contingent) over the performance of the team. More
specifically, “the amount of interunit interdependence is defined as the extent to which a unit’s
outcomes are controlled directly by or are contingent upon the actions of another unit” (Victor &
Blackburn, 1987, p. 490).
A human-canine narcotics search team is an excellent example of reciprocal human-animal
interdependence teaming in which the outcomes of the team are controlled by each member’s
independent actions. In this team, the human team member generally guides the dog through an
overall search area, relying on the dog’s superior olfactory senses to perform a detailed search.
The dog relies on the human handler for overall direction and guidance, alerting the handler when
narcotics are detected. The handler then informs others that narcotics have been detected, and the
human-canine team moves to a different search area to repeat this process (Dark Knight K9
Detection, n.d.). As an iterative process, each team member performs a specific function based on
their individual expertise, the order in which each function is carried out is constantly alternating
between the two respective team members, and the degree to which the outcome of one member is
reliant on the other (and vice versa) is high. As such, the success of this team is highly contingent
upon the work accomplished by each entity alone as well as the way in which the work alternates
in the two-way interaction between entities.
Each member of the human-canine narcotics search team is characterized by both high-
initiated and high-received interdependence. In other words, both the human and the canine team
members provide an equally high number of outputs to the other. For the human, these outputs are
in the form of commands, navigational search guidance, and high-level handling and care for the
canine. The canine produces outputs in the form of superior search capability, sensory alerts to the
presence of narcotics, or the absence of sensory alerts when narcotics are not present. Both
members receive equally high outputs from the other. The human team member’s outputs become
the canine’s inputs and vice-versa, thus controlling the reciprocal nature of the team’s
interdependence (see Fig. 5).

Figure 5. Reciprocal task interdependence in a human-canine narcotics search team.
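The alternating workflow in Fig. 5 can also be written down as a simple interaction loop in which each member’s output becomes the other’s input. The sketch below is an assumed abstraction of that exchange; the function names and the random stand-in for detection are ours, not drawn from the cited source.

```python
# Illustrative reciprocal-interdependence loop for the handler-canine team:
# handler output (direction) -> canine input; canine output (alert or no
# alert) -> handler input, which drives the next decision.
import random

random.seed(2)  # deterministic demo

def canine_search(area):
    """Dog's contribution: a detailed olfactory search of the directed area."""
    return random.random() < 0.3  # random stand-in for a real detection

def search_mission(areas):
    for area in areas:           # handler directs the dog to each area
        alert = canine_search(area)
        if alert:                # the canine's alert becomes the handler's input
            print(f"Handler: narcotics detected in {area}; reporting it.")
        # Handler then moves the team to the next area, and the loop alternates.

search_mission(["warehouse A", "warehouse B", "loading dock"])
```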
Finally, it is important to note that human-animal team analogs of interdependence teaming
are pertinent means to illustrate that full human emulation is not necessary to facilitate meaningful
human-robot teamwork. In fact, researchers have suggested that full human emulation in
human-robot interaction is a somewhat misguided approach, as it ignores the skills that robots can
uniquely contribute to a team. For example, Groom and Nass (2007) suggested that “in the struggle
to make robots into people, researchers have not fully identified the human characteristics that
robots lack, nor have they identified the robot characteristics that humans lack… Researchers
sometimes overlook what makes robots special” (p. 494). They go on to suggest that human-robot teamwork is best
facilitated through an approach in which the skills of robots complement the skills of humans, not
unlike the ways in which humans and animals leverage skills to work interdependently and
complete a wide variety of work.
In addition, relevant examples of teaming that vary in level of task interdependence can be
found in the existing robotics domain. For instance, the Roomba, by iRobot, operating in a
household is an example of pooled interdependence between humans and robots (see Fig. 6).
The robot is specialized in its ability to complete household cleaning tasks, so the human and the
robot complete separate tasks; it is the summation of each member’s work that dictates how well
the overall job is performed.

Figure 6. Example of pooled interdependence human-robot interaction.
Sequential interdependence of human-robot interaction can be seen in manufacturing settings.
Manufacturing often uses robots and robotic technologies for tracking and moving individual
components and parts (see Fig. 7). These automated entities help to route parts through the
machining sequence necessary to ensure that parts are tooled accurately and in the correct order
(Monette, Corriveau, & Dubois, 2007). In this type of interaction, human and robot team members
carry out separate task components. However, the order in which the tasks are completed is
important to ensure that the task is performed well.
Figure 7. Example of sequential interdependence human-robot interaction.
Finally, reciprocal interdependence can be seen in command and control settings where a
human operator is capable of controlling multiple robots (see Fig. 8). Command and control of
multiple robots is often seen in military-type tasks and is now moving into a number of civilian
realms. In many of these settings, human(s) and robot(s) reconnoiter important areas. The human
is responsible for overseeing the tasking of the robot(s). Subsequently, while the robot(s)
autonomously carry out their tasks, they periodically send information about task completion back
to the human. The human, then, decides the next course of action, whether it be to re-task the
robot(s) or to adjust the tasking of the robot(s). This type of interaction is reciprocal in nature; the
human maintains high-level oversight of the robot(s), and based on the information provided by
the robot(s), decides the next course of action for the team. Reciprocal exchange of information
between team members dictates task completion.

Figure 8. Example of reciprocal interdependence human-robot interaction. The figure depicts a
conceptual representation of a command and control interface for interacting with multiple robots.
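That supervisory pattern can be sketched as a tasking loop in which robot status reports drive the operator’s re-tasking decisions; everything below (names, progress model, tasks) is an illustrative assumption.

```python
# Sketch of the reciprocal command-and-control pattern described above:
# one operator tasks several robots, receives periodic status reports, and
# re-tasks based on what comes back. All details here are assumed.
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    task: str
    progress: float = 0.0  # fraction of the current task completed

    def work(self):
        """Robot output: autonomous progress plus a status report."""
        self.progress = min(1.0, self.progress + 0.5)
        return f"{self.name}: '{self.task}' {self.progress:.0%} complete"

robots = [Robot("UGV-1", "reconnoiter north ridge"),
          Robot("UAV-1", "overwatch supply route")]
pending = ["survey east perimeter"]  # operator's queue of follow-on tasks

for cycle in range(3):
    for robot in robots:
        print(robot.work())  # robot -> human: periodic status report
        if robot.progress >= 1.0 and pending:
            # Human -> robot: decide the next course of action and re-task.
            robot.task, robot.progress = pending.pop(0), 0.0
            print(f"Operator re-tasks {robot.name}: '{robot.task}'")
```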
Communication
Team communication is also a prominent and widespread feature of human-animal teaming.
Team communication is central to interaction and coordination. It requires both the transfer of
information from one partner to another and the use of commands and requests to gain information
when needed. The most common forms of communication seen in human-animal teams involve
auditory and visual communication styles, such as verbal commands and body language. Other
sensory communication forms, such as scent, are less commonly used to convey needs; however,
they can communicate intentions toward the animal (The Hunting Dog, 2008; Meredith, n.d.;
Yeon, 2007). Certain communications are based on the degree of auditory and body language used
when both the human and animal engage in a task together. Higher amounts of communication
result in greater understanding of each team member’s needs and intentions; teams that employ
more communication across a variety of modalities exhibit higher levels of teaming than those
exhibiting less communication.
The amount of communication in human-animal teams is mostly animal-dependent and can be
directional or reciprocal, contingent upon the animal’s capacities. For example, low levels of
communication are seen in harness teams, where the animal is primarily used for its endurance and
strength (e.g., oxen plowing fields or donkeys and mules transporting goods). Communication is
directional in these teams. The human team member initiates commands (e.g., “Gee,” “Haw,”
“Whoa”) which relay intention to the animal. The animal partner receives commands and executes
orders without overtly communicating back to their human teammate (“Oxen Team Practice,”
2010; Russell, 2008). In these types of teams, animal communication with the human teammate is
often limited to compliance with the order.
On the other hand, animals such as dogs or horses have natural abilities to provide auditory
and visual communication to their human handler, as well as interpret and attribute meaning to the
human team member’s verbal (e.g., speech, clicks, utterances, inflection) and nonverbal
communication (e.g., body posture, pressure, gestures). The increased communication capabilities
of these animals are often demonstrated in sporting events, such as horse racing, dog obedience
tests, hunting games, and sheep-dog trials (see The Hunting Dog, 2008; Goodwin, 1999; Hancock,
1997; Hancock, 2009; Meredith, n.d.). For example, a canine will communicate intentions and
needs through barks, whines, and growls when vocalizing. Canines may also signal to the human
through gestures by the head and tail, the position of their legs, and other types of postures and
movements that might be employed when communicating with another dog; pointing for instance
(The Hunting Dog, 2008; Dodman, n.d.; Simpson, 1997). These types of teams are characterized
by a communication style in which both parties equally participate in outward vocalization as well
as gestural, nonverbal communication (Helton, 2009). Together, this bi-directional or reciprocal
communication appears to be most similar to natural human-human communication. This
reciprocal form of communication also involves speech-related and speech-independent gestures
(Knapp & Hall, 2010).
For many contexts, the implementation of the type of rudimentary communications in many
human-animal teams may be overly simplistic, given that the state of the art in robotic and other
autonomous technologies is, indeed, capable of more complex semantic language understanding
between humans and robots (Kollar, Tellex, Roy, & Roy, 2010). However, for other contexts
(such as military contexts), more simplistic forms of communication (e.g., gestures, simple verbal
commands) will be beneficial in supporting teaming that can generalize to a variety of missions
and contexts, especially contexts in which team members need to communicate quickly, over
long distances, or in instances of stealth operations. Rudimentary communications that can be
efficiently conveyed through multiple modalities, such as hand signals or constrained natural
language vocalizations, could be more effective during these types of missions. For these reasons,
there has been a push in the human-robot interaction community to support this type of efficient
bi-directional communication for military human-robot teams (Barber, Lackey, Reinerman-Jones,
& Hudson, 2013; Lackey, Barber, Reinerman, Badler, & Hudson, 2011; Phillips, Rivera, &
Jentsch, 2013).
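One way to operationalize such a constrained, multimodal vocabulary is to map each modality onto a single small command set, so that a spoken word and a silent hand signal resolve to the same robot behavior. The mapping below is a hypothetical sketch; the gesture labels and command names are our assumptions, not drawn from the cited systems.

    # Sketch: two modalities resolve to one small, shared command set,
    # supporting quick spoken commands or silent gestural ones.
    COMMAND_SET = {"halt", "move_out", "rally", "take_cover"}

    SPEECH = {"halt": "halt", "move out": "move_out",
              "rally": "rally", "take cover": "take_cover"}
    GESTURES = {"raised_fist": "halt", "arm_swung_forward": "move_out",
                "arm_circled": "rally", "palm_pushed_down": "take_cover"}

    def interpret(modality, token):
        table = SPEECH if modality == "speech" else GESTURES
        return table.get(token)  # None if the token is outside the vocabulary

    print(interpret("speech", "halt"))          # "halt", spoken at close range
    print(interpret("gesture", "raised_fist"))  # "halt", silent, at a distance

Because the command set is small and closed, the same vocabulary generalizes across missions, and either modality can substitute for the other when noise, distance, or stealth rules one out.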
Understanding existing human-animal teams’ communication, and the degree to which each team member’s role is interdependent with the other, can help us better understand the degree to which these teams resemble effective human interpersonal teams. Although the ability to design robots with communication and interaction capabilities as complex as those of humans or working dogs is not yet fully realized, other human-animal teams provide a good analog for building robotic capabilities that, while less complex than human capabilities, can still be leveraged to accomplish many types of work. If the ability to create a robotic partner that replicates a human partner is not yet available, then the next best option may be to create a robotic teammate that resembles an animal partner.
The Relationship to Trust
Examining the nature of trust in human-animal relationships can increase our understanding of
how humans interact with certain technology, such as automation, autonomous systems, or robots.
Trust in these relationships involves three separate notions: (1) knowing how a partner will respond, (2) trusting oneself to interpret a partner’s behavior, and (3) trusting oneself to communicate. The human needs to trust that the animal partner will do the task it was trained to perform. However, the human must also understand that, at times, the animal will act like an animal, displaying tendencies and behaviors that are based upon its particular instinctive reactions. For example, according to the “human-horse mutual trust paradigm,” riders must trust their horse to protect them while mounted, but also understand that the horse could break from its predictable role and “act like a horse,” shying away from the owner, galloping off, or responding to a frightening stimulus (Keaveney, 2008).
Applying known information about human-animal teaming (e.g., level of interdependence, communication structure) to robotic partners may be a means by which humans can begin to build trusting relationships with robots. Human teammates will be able to draw upon well-established
mental models that also assist in trust development with animal partners. The means by which
communication is accomplished in human-animal teams may be one model by which this
understanding can be fostered and mutual trust can be formed in human-robot teams.
Communication within human-animal work teams is very important; a human must
understand the animal, interpret its behavior, and possess the ability to communicate commands
effectively to the animal (Kuhl, 2011). Each partnership is unique, in part due to each animal
having a distinctive quality or style of expression, and each human may interpret that expression
differently. Therefore, trust can only occur after an established means of communication and
respect between the two entities has been developed (Oma, 2010; Sanders, 2006). This occurs
through frequent interaction and training. In terms of specific application to human-robot
interaction, future research should explore the costs and benefits of introducing robots capable of
emulating respect and trust for their human counterparts.
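Although the argument here is qualitative, the claim that trust develops only through frequent interaction can be illustrated numerically. A running beta-Bernoulli estimate of a partner's reliability is one standard toy model of this process; the sketch below is our illustrative choice, not a model proposed in this article.

    # Toy model: trust as a running estimate of a partner's reliability,
    # updated after each joint task. It starts uncertain and converges
    # only with repeated, consistent experience.
    class TrustEstimate:
        def __init__(self):
            self.successes, self.failures = 1, 1  # uninformative prior

        def observe(self, succeeded):
            if succeeded:
                self.successes += 1
            else:
                self.failures += 1

        @property
        def level(self):
            return self.successes / (self.successes + self.failures)

    trust = TrustEstimate()
    for outcome in (True, True, False, True):  # four joint tasks
        trust.observe(outcome)
    print(round(trust.level, 2))  # 0.67 after mixed experience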
The type and level of interdependency in human-animal teams can also help us better
understand how trust is formed in human-robot teams. The human teammate must be willing to
trust and accept the information being supplied by the animal (or robot, as the case may be) as
accurate and correct. Likewise, the human must trust that the animal is performing its part of the
job competently, as trained. Interdependent work inherently requires team members to function in
tandem, relying on each other to provide a specific piece of the overall picture. If a human
teammate refuses to trust the information or work of his non-human counterpart, the team
becomes inefficient. We therefore conclude that our understanding of human-robot interactions
can be improved by drawing on comparisons with human-animal partnerships (Coeckelbergh,
2011).
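This reliance point can also be made concrete: a simple model of the human teammate accepts or rejects robot-supplied information according to a calibrated trust level. The threshold below is an arbitrary illustrative value; setting it too high mirrors disuse of a competent partner, while setting it too low mirrors misuse of an unreliable one.

    def rely_on_report(trust_level, threshold=0.6):
        # threshold is an illustrative value, not an empirical constant
        return trust_level >= threshold

    print(rely_on_report(0.67))  # True: accept the robot's contribution
    print(rely_on_report(0.40))  # False: the human redoes the robot's work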
Conclusions: Leveraging Metaphors to Inform HRI
Relationships with animals are prevalent in most human societies. Existing human-animal teams
range widely in the degree to which animal partners act more as interactive teammates rather than
animate tools. Further, such teams are used to accomplish a wide variety of human-animal tasks.
Animals may be good analogs for robots, because even the most “tool-like” of existing teams
show more autonomy than state-of-the-art teleoperated robots. Many roles animals fill in human-
animal teaming parallel existing and future roles of robots in human-robot teams. Many human-
animal interactions show work interdependence and communication patterns that are similar to
interpersonal teams. As such, existing human-animal teams serve to provide insight into a wide
range of intelligence and autonomy analogues from which robotic designers can take inspiration.
The widespread capabilities that support this type of teamwork can potentially be used as a tool to
guide the instantiation of working animal characteristics into robotic entities.
We may be able to start by modeling simpler (more tool-like) human-animal teams and
progress to more complex teams as technology itself progresses. For instance, many human-
animal teams can accomplish a wide range of tasks with limited vocabularies and communication
abilities (Hancock, 1997; Phillips et al., 2012; Phillips, Rivera, & Jentsch, 2013). In spite of a
limited communication bandwidth, humans and animals have been able to use this limited
communication ‘vocabulary’ to accomplish meaningful work for many centuries and even
millennia. Drawing on and expanding this vocabulary into a useful taxonomy may help us further understand and predict future human-robot teams’ behavior, especially in envisaged military contexts.
We have argued that forms of nonverbal communication in robots help humans to understand
why robots behave as they do, making robot behavior more transparent to humans (Breazeal,
Kidd, Thomaz, Hoffman, & Berlin, 2005). By drawing on communication and interaction methods
that are already meaningful to humans, we can start developing robots that resemble interactive
partners that draw upon well-established mental models. This transfer is beneficial, as it helps humans to form expectations of their technological teammates that are appropriate to the robot and that can develop gradually as robots become more sophisticated over time.
Trust is an essential dimension for building effective interactions between future human-robot
teams, as trust ensures that functional relationships between humans and robotic teammates can be
formed and sustained (Hancock et al., 2011a, 2011b; Schaefer, 2013). A trusting relationship
between humans and animals depends on cooperation, the ability to predict behavioral successes
and failures, and establishing a joint communication paradigm. The ability to simulate these in a
robotic entity will be a large step forward in developing trusting and effective partnerships
between humans and robotic teammates. These and prior observations on trust tend to raise anthropomorphic questions. If, indeed, human-animal teams are the metaphor for future HRI, should we build robots that are specifically co-dependent on their users? That is, should we seek to design, create, and fabricate robot surrogates that include all the problems and shortcomings of living systems? Some might say that this defeats the whole purpose of robots in the first place. After all, they are artificial machines with all of the potential advantages that such non-living, insensate dimensions bring (e.g., no fear of danger, no experience of pain). However, if they emanate no empathy at all and are not appropriately responsive to their users, they may be in danger of misuse, disuse, and abuse. This is a design conundrum that the human-animal team analogy brings to the fore. The underlying master-servant structure may, however, quickly become obsolete as autonomous robots gain agency, self-awareness, and superior capabilities.
Key Points
The study of working animals, their capabilities, the ways in which trusting relationships are
formed with animal partners, and our interactions with them, all provide additional insights into
how to design future robotic teammates. To this end, the following propositions can be generated:
• Human-animal relationships represent a unique form of partnership that can often directly benefit the human physically, emotionally, and cognitively. Robotic technologies are beginning to take a surrogate role in benefiting human partners in similar ways. However, additional gains can be made.
• Human-animal teams range in their degrees of interdependence and in the complexity with which they communicate with one another to accomplish work. If the goal is to foster human-robot relationships that resemble human-human teams, modeling these capabilities (like human guidance in reciprocally interdependent human-canine teams) can represent a logical progression in the development of better human-robot teaming.
• Trust will ultimately determine how human users interact with their robotic teammates. Thus, the means by which trust is fostered in analogous human-non-human teams serves as a crucial template for fostering trusting relationships in human-robot teams.
Acknowledgements
The research reported was performed in connection with contract number W911NF-10-2-0016
with the U.S. Army Research Laboratory. This research was also supported in part by an
appointment to the U.S. Army Research Postdoctoral Fellowship Program administered by the
Oak Ridge Associated Universities through cooperative agreement W911NF-12-2-0019 with the
U.S. Army Research Laboratory. The views and conclusions contained in this document are those
of the authors and should not be interpreted as presenting the official policies or position, either
expressed or implied, of the U.S. Army Research Laboratory, or of the U.S. Government, unless
so designated by other authorized documents. Citation of manufacturer's or trade names does not
constitute an official endorsement or approval of the use thereof. The U.S. Government is
authorized to reproduce and distribute reprints for government purposes notwithstanding any
copyright notation herein. The authors would also like to acknowledge the various research
assistants and support staff associated with the generation and submission of this document.
References
Ackerman, E. (2014). Robot octopus takes to the sea. IEEE Spectrum. Retrieved from
http://spectrum.ieee.org/automaton/robotics/robotics-hardware/robot-octopus-takes-to-the-sea
Americans With Disabilities Act of 1990, C.F.R.§ 36.104 (2002).
Anderson, J. R., & Lebiere, C. (1998). The atomic components of thought. Mahwah, NJ: Lawrence
Erlbaum Associates.
Army Research Laboratory (2011). Robotics Collaborative Technology Alliance (RCTA) FY
2011 annual program plan. Retrieved from
http://www.arl.army.mil/www/pages/392/rcta.fy11.ann.prog.plan.pdf
Balch, T., & Arkin, R. C. (1995). Motor schema-based formation control for multiagent robot
teams. In Proceedings of the 1st International Conference on Multiagent Systems
(ICMAS) (pp. 10-16). San Francisco, CA: AAAI.
Barber, D., Lackey, S., Reinerman-Jones, L., & Hudson, I. (2013, May). Visual and tactile
interfaces for bi-directional human robot communication. In Proceedings of the 2013
SPIE Defense, Security, and Sensing: Unmanned Systems Technology XV (Vol. 8741).
Baltimore, MD: SPIE. doi:10.1117/12.2015956
Bass, M. M., Duchowny, C. A., & Llabre, M. M. (2009). The effect of therapeutic horseback riding
on social functioning in children with autism. Journal of Autism and Developmental
Disorders, 39(9), 1261-1267. doi:10.1007/s10803-009-0734-3
BBC News. (1986). Coal mine canaries made redundant. Retrieved from
http://news.bbc.co.uk/onthisday/hi/dates/stories/december/30/newsid_2547000/2547587.stm
Billings, D. R., Schaefer, K. E., Chen, J. Y. C., Kocsis, V., Barrera, M., Cook, J., Ferrer, M., &
Hancock, P. A. (2012). Human-animal trust as an analog for human-robot trust: A review
of current evidence (Tech. Rep. No. ARL-TR-5949). Aberdeen Proving Ground, MD:
U.S. Army Research Laboratory.
Blumberg, B., Downie, M., Ivanov, Y., Berlin, M., Johnson, M. P., & Tomlinson, B. (2002).
Integrated learning for interactive synthetic characters. In Proceedings of the 29th Annual
Conference on Computer Graphics and Interactive Techniques (SIGGRAPH) (pp. 417-
426). New York, NY: ACM. doi:10.1145/566654.566597
Boston Dynamics. (2013). BigDog: The most advanced rough-terrain robot on earth. Retrieved
from: http://www.bostondynamics.com/robot_bigdog.html
Breazeal, C. L. (2000). Sociable machines: Expressive social exchange between humans and
robots. Cambridge, MA: Massachusetts Institute of Technology.
Breazeal, C., Kidd C. D., Thomaz, A. L., Hoffman, G. & Berlin, M. (2005). Effects of nonverbal
communication on efficiency and robustness in human-robot teamwork. In Proceedings
of the 2005 Conference on Intelligent Robots and Systems (IROS) (pp.708-713).
Barcelona: IEEE. doi:10.1109/IROS.2005.1545011
Brooks, R. A. (1999). Cambrian intelligence: The early history of the new AI. Cambridge, MA:
Massachusetts Institute of Technology.
Burke, C. S., Sims, D. E., Lazzara, E. H., & Salas, E. (2007). Trust in leadership: A multi-level
review and integration. The Leadership Quarterly, 18, 606-632.
doi:10.1016/j.leaqua.2007.09.006
Clark, T., Clark, K., Paterson, S., Mackay, D., & Norstrom, R. J. (1988). Wildlife monitoring,
modeling, fugacity: Indicators of chemical contamination. Environmental Science and
Technology, 22(2), 120-127. doi:10.1021/es00167a001
Coeckelbergh, M. (2011). Humans, animals, and robots: A phenomenological approach to human-
robot relations. International Journal of Social Robotics, 3, 197-204.
doi:10.1007/s12369-010-0075-6
Dark Knight K9 Detection Agency (n.d.). Retrieved from http://www.darkknightk9.com/FAQ.html
Dodman, N. (n.d.). Dog to dog communication. Retrieved from http://www.petplace.com/dogs/
dog-to-dog-communication/page1.aspx
Edwards, N. E., & Beck, A. M. (2002). Animal-assisted therapy and nutrition in Alzheimer’s
disease. Western Journal of Nursing Research, 24(6), 697-712.
doi:10.1177/019394502320555430
Fincher, J. (2013, February). Handheld black hornet nano drones issued to U.K. soldiers. Gizmag.
Retrieved from http://www.gizmag.com/black-hornet-nano-uav/26118/
Finkbeiner, K. M., Russell, P. N., & Helton, W. S. (2014). Rating of dog breed differences: Insights for quadrupedal robot design. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 58(1), 581-585. Chicago, IL.
Finkel, M. (2012). The cold patrol. National Geographic, 221(1), 82-95.
Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A survey of socially interactive robots.
Robotics and Autonomous Systems, 42 (3-4), 143-166. doi:10.1016/S0921-
8890(02)00372-X
Freedy, A., de Visser, E., Weltman, G., & Coeyman, N. (2007). Measurement of trust in human-
robot collaboration. In Proceedings of the 2007 International Conference on
Collaborative Technologies and Systems (pp. 106-114), Orlando, FL. IEEE.
doi:10.1109/CTS.2007.4621745
Gannes, L. (2014, May 27). Google’s new self-driving car ditches the steering wheel. Retrieved from:
http://recode.net/2014/05/27/googles-new-self-driving-car-ditches-the-steering-wheel
Gentner, D., & Holyoak, K. J. (1997). Reasoning and learning by analogy: Introduction. American
Psychologist, 52(1), 32-34. doi:10.1037/0003-066X.52.1.32
Goldenberg, S. & McCurry, J. (2011, March). Japan nuclear plant gets help from U.S. robots. The
Guardian. Retrieved from http://www.guardian.co.uk/world/2011/mar/29/japan-nuclear-
plant-us-robots
Goodwin, D. (1999). The importance of ethology in understanding the behavior of the horse.
Equine Veterinary Journal Supplement 28, 15-19.
Greenebaum, J. B. (2004). It’s a dog’s life: Elevating status from pet to “fur baby” at yappy hour.
Society and Animals, 12(2), 117-135. doi:10.1163/1568530041446544
Greenebaum, J. B. (2006-2007). The throw-away society and the family dog: An exploration of
the consumption and dispossession of companion animals. Journal of Social and
Ecological Boundaries, 2(2), 34-55.
Greenebaum, J. B. (2010). Training dogs and training humans: Symbolic interaction and dog
training. Anthrozoös, 23(2), 129-141. doi:10.2752/175303710X12682332909936
Groom, V., & Nass, C (2007). Can robots be teammates? Benchmarks in human-robot teams.
Interaction Studies, 8 (3), 483-500. doi:10.1075/is.8.3.10gro
Hancock, P. A. (1997). The sheepdog and the Japanese garden. In Essays on the future of human-
machine systems. Eden Prairie, MN: Banta.
Hancock, P. A. (2009). The sheepdog and the Japanese garden. In Mind, machine and morality (Chapter 5). Chichester, England: Ashgate.
Hancock, P. A. (2015). Autobiomimesis. Plenary address given at the Annual Meeting on Automotive User Interfaces, Seattle, WA.
Hancock, P. A., Billings, D. R., & Schaefer, K. E. (2011a). Can you trust your robot? Ergonomics
in Design, 19, 24-29. doi:10.1177/1064804611415045
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman,
R. (2011b). A meta-analysis of factors affecting trust in human-robot interaction. Human
Factors, 53(5), 517-527. doi:10.1177/0018720811417254
Helton, W. S. (2009). Canine ergonomics: The science of working dogs. Boca Raton, FL: CRC Press.
Helton, W. S. (2010). Does perceived trainability of dog (Canis lupus familiaris) breeds reflect
differences in learning or differences in physical ability? Behavioural Processes, 83 (3),
315-323.
Irvine, L. (2004). A model of animal selfhood: Expanding interactionist possibilities. Symbolic
Interaction, 27(1), 3-21. doi:10.1525/si.2004.27.1.3
Jackel, L. D., Hackett, D., Krotkov, E., Perschbacher, M., Pippine, J., & Sullivan, C. (2007). How
DARPA structures its robotics programs to improve locomotion and navigation.
Communications of the ACM, 50(11), 55-59. doi:10.1145/1297797.1297823
Keaveney, S. M. (2008). Equines and their human companions. Journal of Business Research, 61,
444-454. doi:10.1016/j.jbusres.2007.07.017
Kelley, T. D., & Long, L. N. (2010). Deep Blue cannot play checkers: The need for generalized
intelligence for mobile robots. Journal of Robotics, 2010(523757).
doi:10.1155/2010/523757
Kemp, C. C., Edsinger, A., & Torres-Jara, E. (2007). Challenges for robot manipulation in human
environments. IEEE Robotics & Automation Magazine, 14(1), 20-29.
Kiesler, S., & Goetz, J. (2002). Mental models of robotic assistants. In Proceedings of the 2002
CHI Extended Abstracts on Human Factors in Computing Systems (pp. 576-577). New
York, NY: ACM. doi:10.1145/506443.506491
Kiggundu, M.N. (1981). Task interdependence and the theory of job design. The Academy of
Management Review, 6(3), 499-508. doi:10.5465/AMR.1981.4285795
Klinkenborg, V. (1993). If it weren’t for the ox, we wouldn’t be where we are. Smithsonian, 24(6),
82-90.
Knapp, M. & Hall, J. (2010). Nonverbal communication in human interaction. Boston, MA:
Wadsworth, Cengage Learning.
Kollar, T., Tellex, S., Roy, D., & Roy, N. (2010). Toward understanding natural language
directions. In Proceedings of the 5th Annual International Conference on Human-Robot
Interaction (HRI), 259-266. Osaka, Japan: IEEE. doi:10.1109/HRI.2010.5453186
Kuhl, G. (2008). Human-sled dog relations: What can we learn from the stories and experiences of
mushers? (Master’s Thesis). Lakehead University, Thunder Bay, Ontario, Canada.
Kuhl, G. (2011). Human-sled dog relations: What can we learn from the stories and experiences of
mushers? Society and Animals, 19(1), 22-37. doi:10.1163/156853011X545510
Lackey, S., Barber, D., Reinerman, L., Badler, N. I., & Hudson, I. (2011). Defining next-
generation multi-modal communication in human robot interaction. In Proceedings of the
Human Factors and Ergonomics Society Annual Meeting, 55(1), 461-464.
doi:10.1177/1071181311551095
Lee, J. D. & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human
Factors, 46(1), 50-80. doi:10.1518/hfes.46.1.50_30392
Lee, S., & Kiesler, S. (2005). Human mental models of humanoid robots. In Proceedings of the
2005 International Conference on Robotics and Automation (ICRA) (pp. 2767-2772).
IEEE. doi:10.1109/ROBOT.2005.1570532
Lee, S., Kiesler, S., Lau, I., & Chiu, C. (2005). Human mental models of humanoid robots. In
Proceedings of the 2005 International Conference on Robotics and Automation (ICRA),
2767-2772. IEEE. doi:10.1109/ROBOT.2005.1570532
Long, L. N., & Kelley, T. D. (2010). Review of consciousness and the possibility of conscious
robots. Journal of Aerospace Computing, Information, and Communication, 7, 68-84.
doi:10.2514/1.46188
Marble, J. L., Bruemmer, D. J., Few, D. A., & Dudenhoeffer, D. D. (2004). Evaluation of
supervisory vs. peer-peer interaction with human-robot teams. In Proceedings of the 37th
Annual International Conference on System Sciences. IEEE.
doi:10.1109/HICSS.2004.1265326
McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for interpersonal
cooperation in organizations. The Academy of Management Journal, 38(1), 24-59.
doi:10.2307/256727
Meredith, F. (n.d.). The riding tree: Communication through hands. Retrieved from
http://www.meredithmanor.com/features/articles/faith/commaids.asp
Merritt, S. M., & Ilgen, D. R. (2008). Not all trust is created equal: Dispositional and history-
based trust in human-automation interactions. Human Factors, 50(2), 194-210.
doi:10.1518/001872008X288574
Meyer, D. E., & Kieras, D. E. (1997). A computational theory of executive cognitive processes and
multiple-task performance: Part 1. Basic mechanisms. Psychological Review, 104(1), 3-65.
Mills, J. L. (2002). The mule. Retrieved from http://www.ecu.edu/english/tcr/20-4/mule.htm
Monette, F., Corriveau, A., & Dubois, V. (2007). U.S. Patent No. 7,286,888. Washington, DC:
U.S. Patent and Trademark Office.
Morrow, P. B., & Fiore, S. M. (2012). Supporting human-robot teams in social dynamicism: An
overview of the metaphoric inference framework. In Proceedings of the Human Factors and
Ergonomics Society Annual Meeting, 56(1), 1718-1722. doi:10.1177/1071181312561344
Mortensen, M., & Neeley, T. B. (2012). Reflected knowledge and trust in global collaboration.
Management Science, 58(12), 2207-2224.
Nesbitt, B. & Nesbitt, G. (2004). Oxen overview. Retrieved from
http://www.prairieoxdrovers.com/pastpresent.html
Newell, A. (1990). Soar: A cognitive architecture in perspective. Cambridge, MA: Harvard
University Press.
Newman, L. H. (2014, September). Cheetah robot is now wireless and gallivanting on MIT’s
campus. Slate. Retrieved from http://www.slate.com/blogs/future_tense/2014/09/15/
mit_s_cheetah_robot_is_wireless_went_outside_for_the_first_time.html
Nguyen-Huu, P. N., Titus, J., Tilbury, D., & Ulsoy, A. G. (2009). Reliability and failure in
unmanned ground vehicle (UGV) (Rep. No. GRRC Tech. Rep. 2009-01). Ann Arbor, MI:
Ground Robotics Research Center, University of Michigan.
Norman, D. A. (1983). Some observations on mental models. In D. Gentner & A. L. Stevens
(Eds.), Mental Models (7-14). New York, NY: Lawrence Erlbaum Associates, Inc.
Oma, K. A. (2010). Between trust and domination: Social contracts between humans and animals.
World Archaeology, 42(2), 175-187. doi:10.1080/00438241003672724
Ososky, S., Phillips, E., Schuster, D., & Jentsch, F. (2013). A picture is worth a thousand mental models: Evaluating human understanding of robot teammates. In Proceedings of the
Human Factors and Ergonomics Society Annual Meeting, 57(1), 1298-1302.
doi:10.1177/1541931213571287
Ososky, S., Schuster, D., Phillips, E., & Jentsch, F. G. (2013). Building appropriate trust in human-
robot teams. In Proceedings of the AAAI Spring Symposium: Trust and Autonomous Systems
(pp. 60-65). http://www.aaai.org/ocs/index.php/SSS/SSS13/paper/viewFile/5784/6008
“Oxen Team Practice” (2010). AMomentWithRachel. [Video file]. Retrieved from
http://www.youtube.com/watch?v=T_GKkumzZgE
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human
Factors, 39(2), 230-253. doi:10.1518/001872097778543886
Park, E., Jenkins, Q., & Jiang, X. (2008). Measuring trust of human operators in new generation
rescue robots. In Proceedings of the 7th Annual JFPS International Symposium on Fluid
Power (pp. 489-492). Toyama, Japan: JFPS. doi:10.5739/isfp.2008.489
Pepe, A. A., Ellis, L. U., Sims, V. K., & Chin, M. G. (2008). Go, dog, go: Maze training AIBO vs. a
live dog, an exploratory study. Anthrozoös, 21(1), 71-83. doi:10.2752/089279308X274074
Phillips, E., Ososky, S., & Jentsch, F. (2014). An investigation of human decision-making in a
human-robot team task. In Proceedings of the Human Factors and Ergonomics Society
Annual Meeting, 58(1), 315-319. doi:10.1177/1541931214581065
Phillips, E., Ososky, S., Swigert, B., Grove, J., & Jentsch, F. (2010). Human-animal teams as an
analog for future human-robot teams. Contract No. W911NF-10-2-0016. Orlando, FL:
University of Central Florida.
Phillips, E., Ososky, S., Swigert, B., & Jentsch, F. (2012). Human-animal teams as an analog for
future human-robot teams. In Proceedings of the Human Factors and Ergonomics Society
Annual Meeting, 56(1), 1553-1557. doi:10.1177/1071181312561309
Phillips, E., Rivera, J., & Jentsch, F. (2013). Developing a tactical language for future robotic
teammates. In Proceedings of the Human Factors and Ergonomics Society Annual
Meeting, 57(1), 1283-1287. doi:10.1177/1541931213571284
Raibert, M., Blankespoor, K., Nelson, G., & Playter, R. (2008). BigDog, the rough-terrain
quadruped robot. In Proceedings of the 17th IFAC World Congress, 17(1), 10822-10825. Korea.
Reynolds, C. W. (1987). Flocks, herds, and schools: A distributed behavioral model. In
Proceedings of the 14th Annual Conference on Computer Graphics and Interactive
Techniques (SIGGRAPH) (pp. 25-34). New York, NY: ACM. doi:10.1145/37401.37406
Robinson, I. H. (1999). The human-horse relationship: How much do we know? Equine
Veterinary Journal, 28, 42-45. doi:10.1111/j.2042-3306.1999.tb05155.x
Rouse, W. B., & Morris, N. M. (1986). On looking into the black box: Prospects and limits in the search
for mental models. Psychological Bulletin, 100(3), 349-363. doi:10.1037/0033-2909.100.3.349
Russell, C. (2008, November 20). #3 Gee and Haw [Online forum comment]. Retrieved from
http://www.draftanimalpower.com/showthread.php?771-Gee-and-Haw
Saavedra, R., Earley, C. P., & Van Dyne, L. (1993). Complex interdependence in task-performing
groups. Journal of Applied Psychology, 78(1), 61-72. doi:10.1037/0021-9010.78.1.61
Sanders, C. R. (1999). Understanding dogs: Living and working with canine companions.
Philadelphia, PA: Temple University Press.
Sanders, C. R. (2006). “The dog you deserve”: Ambivalence in the K-9 officer/patrol dog relationship.
Journal of Contemporary Ethnography, 35(2), 148-172. doi:10.1177/0891241605283456
Schaefer, K. E. (2013). The perception and measurement of human-robot trust (Doctoral
Dissertation). University of Central Florida, Orlando, FL.
Schaefer, K. E, Adams, J. K., Cook, J. G., Bardwell-Owens, A., & Hancock, P. A. (2015). The
future of robotic design: Trends from the history of media representations. Ergonomics in
Design, 23(1), 13-19. doi:10.1177/1064804614562214
Schaefer, K. E., Cook, J. G., Adams, J. K., Bell, J., Sanders, T. L., & Hancock, P. A. (2012a).
Augmented emotion and its remote embodiment: The importance of design from fiction
to reality. In Proceedings of the Human Factors and Ergonomics Society Annual
Meeting, 56(1), 1817-1821. doi:10.1177/1071181312561366
Schaefer, K. E., Sanders, T. L., Yordon, R. E., Billings, D. R., & Hancock, P. A. (2012b). Classification
of robot form: Predicting perceived trustworthiness. In Proceedings of the Human Factors
and Ergonomics Society Annual Meeting, 56(1), 1548-1552. doi:10.1177/1071181312561308
Sheppard, B. H., & Sherman, D. M. (1998). The grammars of trust: A model and general
implications (Special topic forum on trust in and between organizations). Academy of
Management Review, 23(3), 422-437. doi:10.5465/AMR.1998.926619
Sheridan, T. B. (1975). Considerations in modeling the human supervisory controller. In
Proceedings of the 6th IFAC World Congress, Boston, MA: International Federation of
Automatic Control.
Sheridan, T. B., & Hennessy, R. T. (1984). Research and modeling of supervisory control
behavior: Report of a workshop. Washington, D.C.: National Academy Press.
Shick, A. (2013). Romibo robot project: An open-source effort to develop a low-cost sensory
adaptable robot for special needs therapy and education. In Proceedings of the ACM
SIGGRAPH Studio Talks (p. 16). Anaheim, CA: ACM.
Simpson, B. S. (1997). Canine communication. The Veterinary Clinics of North America: Small Animal Practice,
27(3), 445-464. Abstract retrieved from http://www.ncbi.nlm.nih.gov/pubmed/9170629
Sims, V. K., Chin, M. G., Sushil, D. J., Barber, D. J., Ballion, T., Clark, B. R., . . . Finkelstein, N.
(2005). Anthropomorphism of robotic forms: A response to affordances? In Proceedings
of the Human Factors and Ergonomics Society Annual Meeting, 49(3), 602-605.
doi:10.1177/154193120504900383
Sowa, J. F. (2006). Categorization in cognitive computer science. Amsterdam: Elsevier.
Staats, S., Wallace, H., & Anderson, T. (2008). Reasons for companion animal guardianship (pet
ownership) from two populations. Society and Animals, 16(3), 279-291.
doi:10.1163/156853008X323411
Subbaraman, N. (2013). My robot friend: People find real comfort in artificial companions.
Retrieved from: http://www.nbcnews.com/tech/innovation/my-robot-friend-people-find-
real-comfort-artificial-companionship-f6C10146787
Tanaka, F., & Matsuzoe, S. (2012). Children teach a care-receiving robot to promote their
learning: Field experiments in a classroom for vocabulary learning. Journal of Human-
Robot Interaction, 1(1), 78-95. doi:10.5898/JHRI.1.1.Tanaka
The Hunting Dog (2008). Communicate with dog training hand signals. Retrieved from
http://www.the-hunting-dog.com/dog-training-hand-signals.html
The Robinson Ranch. (2003). What can a donkey do? Retrieved from http://www.donkeys.com/info2.htm
Thompson, J. D. (1967). Organizations in action. New York: McGraw-Hill.
Torrey, C., Fussell, S. R., & Kiesler, S. (2013). How a robot should give advice. In Proceedings of the 8th International Conference on Human-Robot Interaction (HRI) (pp. 275-282). NJ: IEEE.
Upham Ellis, L., Sims, V. K., Chin, M. G., Pepe, A. A., Owens, C. W., Dolezal, M. J., Shumaker,
R., & Finkelstein, N. (2005). Those a-maze-ing robots: Attributions of ability are based
on form, not behavior. In Proceedings of the Human Factors and Ergonomics Society
Annual Meeting, 49(3), 598-601. doi:10.1177/154193120504900382
Vaughan, R. T., Støy, K., Sukhatme, G., & Matarić, M. J. (2002). LOST: Localization-space trails for robot teams. IEEE Transactions on Robotics and Automation, 18(5), 796-812. doi:10.1109/TRA.2002.803459
Victor, B., & Blackburn, R. S. (1987). Interdependence: An alternative conceptualization. Academy
of Management Review, 12 (3), 486-498.
Volosyak, I., Ivlev, O., & Graser, A. (2005). Rehabilitation robot FRIEND II: The general concept
and current implementation. In Proceedings of the 9th Annual International Conference on
Rehabilitation Robotics (pp. 540-544). IEEE. doi:10.1109/ICORR.2005.1501160
Walsh, F. (2009). Human-animal bonds 1: The relational significance of companion animals.
Family Process, 48 (4), 462-480.
Watts, K., & Everly, J. S. (2009). Helping children with disabilities through animal-assisted
therapy. The Exceptional Parent, 39(5), 1-34.
Werner, D. (2013, June). Drone swarm: Networks of small UAVs offer big capabilities. Defense News.
Retrieved from http://www.defensenews.com/article/20130612/C4ISR/306120029/Drone-
Swarm-Networks-Small-UAVs-Offer-Big-Capabilities
Wesley, M. C., Minatrea, N. B., & Watson, J. C. (2009). Animal-assisted therapy in the treatment
of substance dependence. Anthrozoös, 22 (2), 137-148. doi:10.2752/175303709X434167
Wilson, C. C. (1994). A conceptual framework for human-animal interaction research: The
challenge revisited. Anthrozoös, 7(1), 424. doi:10.2752/089279394787002032
Yeon, S. C. (2007). The vocal communication of canines. Journal of Veterinary Behavior:
Clinical Applications and Research, 2(4), 141-144. doi:10.1016/j.jveb.2007.07.006
Yorke, J., Adams, C., & Coady, N. (2008). Therapeutic value of equine-human bonding in
recovery from trauma. Anthrozoos, 21(1), 17-30. doi:10.2752/089279308X274038
Yoshitaka, F., Masayoshi, K., Taro, S., Masaru, S., & Tsuyoshi, N. (2012). Subjective evaluation of use
of Babyloid for doll therapy. In Proceedings of the IEEE International Conference on Fuzzy
Systems (FUZZ-IEEE). Brisbane, Australia: IEEE. doi:10.1109/FUZZ-IEEE.2012.6251247
Zapf, S. A., & Rough, R. B. (2002). The development of an instrument to match individuals with
disabilities and service animals. Disability and Rehabilitation, 24(1-3), 47-58.
doi:10.1080/09638280110066316
Zasloff, R. L. (1996). Measuring attachment to companion animals: A dog is not a cat is not a
bird. Applied Animal Behaviour Science, 47, 43-48. doi:10.1016/0168-1591(95)01009-2
Elizabeth Phillips, Institute for Simulation and Training, University of Central Florida, Orlando,
Florida, USA. ephillip@ist.ucf.edu; Kristin E. Schaefer, U.S. Army Research Laboratory,
Aberdeen Proving Ground, MD, USA. kristin.e.schaefer2.ctr@mail.mil; Deborah R. Billings,
Agilis Consulting Group, Cave Creek, AZ, USA. dbillings@agilisconsulting.com; Florian Jentsch,
Institute for Simulation and Training, University of Central Florida, Orlando, Florida, USA.
fjetsch@ist.ucf.edu; Peter A. Hancock, University of Central Florida, Orlando, Florida, USA.
peter.hancock@ucf.edu.