Canine Computer Interaction: Towards Designing a
Touchscreen Interface for Working Dogs
Clint Zeagler
clintzeagler@gatech.edu
Jay Zuerndorfer
jzplusplus@gmail.com
Andrea Lau
andrea.lau@gatech.edu
Larry Freil
larry.freil@gatech.edu
Scott Gilliland
scott.gilliland@gatech.edu
Thad Starner
thad@gatech.edu
Melody Moore Jackson
melody@cc.gatech.edu
*All researchers from
Georgia Institute of Technology
ABSTRACT
Touchscreens can provide a way for service dogs to relay
emergency information about their handlers from a home or
office environment. In this paper, we build on work
exploring the ability of canines to interact with touchscreen
interfaces. We observe new requirements for training and
explain best practices found in training techniques.
Learning from previous work, we also begin to test new
dog interaction techniques such as lift-off selection and
sliding gestural motions. Our goal is to understand the
affordances needed to make touchscreen interfaces usable
for canines and help the future design of touchscreen
interfaces for assistance dogs in the home.
Author Keywords
Animal Computer Interaction; Assistance Dog Interface
Design; Touchscreen Interactions
ACM Classification Keywords
H.5.2 [Information interfaces and presentation]: User
Interfaces---user-centered design
INTRODUCTION
Figure 1
Jacob has epilepsy. His medical alert dog Dug is trained to
sense an oncoming seizure and notify Jacob before it starts
[4]. Dug is trained to nudge Jacob to a wall so he does not
fall down. Dug is also trained to lick Jacob’s face until he
recovers. Dug, however, is special; he is also trained to
interact with a wearable computer on his service vest [8, 9].
When Jacob has a seizure, Dug can activate a capacitive
bite sensor on his vest, which notifies health services and
Jacob’s loved ones of his condition and where to find him.
Figure 2
By summoning help, Dug performs a potentially life-saving
service for Jacob. But what if Jacob and Dug are at home
and Dug is not wearing his service dog vest? Dog-
computer interactions that do not require a vest or other
wearables could fill a critical need for people like Jacob.
Because of the ubiquitous nature of touchscreens, we began
exploring possibilities and challenges in designing virtual
interfaces for dogs [24]. Our initial study demonstrated a
proof of concept that a dog can use a touchscreen. In our
current study, we start to examine two questions: what is the
best way to train a dog to effectively use a touchscreen
interface, and how do we best design an interface for
canine computer interaction?
RELATED WORK
Animals have been involved in research for a long time,
and many research experiments have used machine
interfaces to derive knowledge about animal behavior and
cognition from animal interactions [5, 19, 20]. Amundin et
al. [3] created an echolocation-based interface; by using
echolocation as "touch," the system acts as a type of
touchscreen. Amundin's system was built to understand
how a dolphin might best interact with a touchscreen, which
is close in motivation to our research with canines. Work
such as Mankoff's [6] has discussed HCI in the context of
animals, describing a "Fitts' Law" test for dogs using
tennis balls. This work starts a conversation about canines
using computer interfaces. Other animal-computer
interaction systems have focused on monitoring pets when
the owner/handler is not at home or on providing
entertainment [7, 10, 12, 22, 23]. Robinson and Mancini's
work also looks at ways for dogs to interact with devices in
the home; although these devices are similarly motivated,
the interfaces are not centered on touchscreen interactions
[17, 18].

Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. Copyrights for
components of this work owned by others than ACM must be honored.
Abstracting with credit is permitted. To copy otherwise, or republish, to
post on servers or to redistribute to lists, requires prior specific permission
and/or a fee. Request permissions from Permissions@acm.org.
ACI '16, November 16-17, 2016, Milton Keynes, United Kingdom
© 2016 ACM. ISBN 978-1-4503-4758-7/16/11…$15.00
DOI: http://dx.doi.org/10.1145/2995257.2995384
DOGS USING TOUCH SCREENS
The new lessons presented in this paper build on our
previous work [24] designing a touchscreen user test for
dogs modeled after Soukoreff and MacKenzie's
multidirectional tapping task [11, 21]. Potter, Weldon, and
Shneiderman [14] describe three main ways interfaces can
be designed to accept touch interaction:
Land-on, where the cursor is under the touch, and only the
first impact point counts.
First-contact, where the cursor is under the touch, but the
first contact with the target counts even if it is not the first
impact with the surface.
Take-off, where the cursor is offset, and selections are made by
where the touch lifts off the surface. We do not use a cursor
for our dog interactions, so we will call this type of
interaction lift-off.
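For concreteness, the three strategies can be sketched as predicates over a touch trace. This is a minimal illustration; the event names, geometry, and helper function are our assumptions, not the paper's implementation:

```python
# Sketch of the three touch-selection strategies described by Potter,
# Weldon, and Shneiderman [14]. A trace is a list of (x, y, event)
# tuples, where event is "down", "move", or "up" (illustrative names).

def hit(target, x, y):
    """Return True if (x, y) falls inside a rectangular target."""
    tx, ty, w, h = target
    return tx <= x <= tx + w and ty <= y <= ty + h

def land_on(trace, target):
    """Only the first impact point counts."""
    x, y, ev = trace[0]
    return ev == "down" and hit(target, x, y)

def first_contact(trace, target):
    """Any contact with the target during the touch counts."""
    return any(hit(target, x, y) for x, y, ev in trace if ev in ("down", "move"))

def lift_off(trace, target):
    """Selection is made where the touch leaves the surface."""
    x, y, ev = trace[-1]
    return ev == "up" and hit(target, x, y)

# A nose slide that starts outside the target, crosses it, and lifts
# off elsewhere succeeds only under first-contact:
target = (100, 100, 50, 50)  # x, y, width, height
slide = [(0, 0, "down"), (120, 120, "move"), (300, 300, "up")]
print(land_on(slide, target))        # False
print(first_contact(slide, target))  # True
print(lift_off(slide, target))       # False
```

The sliding behavior the dogs exhibited (described below) is exactly the case this sketch highlights: a dragged nose touch fails land-on and lift-off but satisfies first-contact.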
The original system used first-contact to make tapping
target selections. “Each tapping task trial consists of one
press on the blue target, zero or more erroneous touches,
followed by one touch on the yellow. When the dog makes
a correct selection, first hitting the blue target, a lower
frequency tone is played to signify to the dog he/she has
made the blue selection. After selecting the blue target if
the dog selects the yellow target a higher frequency tone is
played, and the targets disappear from the screen upon lift
off of the successful yellow target touch, signifying that the
dog has completed the task. Blue and yellow targets were
chosen because a dog can see the difference between
yellow and blue.” [24]
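The quoted trial logic can be sketched as a small state machine. The tone frequencies, names, and geometry here are illustrative assumptions, not the actual implementation:

```python
# Sketch of one first-contact tapping trial: blue target, then yellow,
# with a distinct tone for each correct selection. All values are
# illustrative assumptions.

LOW_TONE, HIGH_TONE = 400, 800  # Hz; assumed, not from the paper

class TappingTrial:
    def __init__(self, blue, yellow):
        self.blue, self.yellow = blue, yellow  # (x, y, w, h) rectangles
        self.state = "await_blue"
        self.errors = 0

    def _hit(self, rect, x, y):
        rx, ry, w, h = rect
        return rx <= x <= rx + w and ry <= y <= ry + h

    def on_touch(self, x, y):
        """Feed each contact point; returns the tone to play, if any."""
        if self.state == "await_blue" and self._hit(self.blue, x, y):
            self.state = "await_yellow"
            return LOW_TONE            # signals the blue selection
        if self.state == "await_yellow" and self._hit(self.yellow, x, y):
            self.state = "done"        # targets disappear on lift-off
            return HIGH_TONE
        self.errors += 1               # erroneous touch
        return None

trial = TappingTrial(blue=(0, 0, 100, 100), yellow=(300, 0, 100, 100))
trial.on_touch(50, 50)    # blue hit -> low tone
trial.on_touch(200, 50)   # miss -> counted as an erroneous touch
trial.on_touch(350, 50)   # yellow hit -> high tone, trial complete
print(trial.state, trial.errors)
```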
We compared the results from humans performing this first-
contact tapping task to dogs performing the same task with
their noses. Notice in Figure 3 that the human taps the screen
on the blue and then the yellow target when asked to do so. As
seen in Figure 4, dogs trained to perform the same task found
it easier to operate the first-contact interface by touching the
blue and sliding their noses along the screen to the yellow.
From observing these interactions, we decided to try two new
types of interfaces.
New Interfaces
The first new interface tested for this study (Figure 5A)
mimicked the original tapping task but used lift-off; in every
other way the interaction is the same. The touchscreen
produces a tone when the dog lifts his nose off of the
screen. We chose to test lift-off in an effort to mold the
dogs' interactions with the touchscreen closer to a human's
interactions. If the dog participants were able to use the
system with lift-off interactions, we could more closely relate
the dogs' interactions to what would be expected in a human-
based Fitts' Law tapping task study [11, 21].
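The comparison such a study enables rests on the standard Fitts' Law metrics [11, 21]. A minimal sketch of the Shannon formulation, with illustrative example values rather than measurements from this paper:

```python
import math

# Sketch of the standard Fitts' Law metrics used in multidirectional
# tapping studies [11, 21] (Shannon formulation). Example values are
# illustrative, not measurements from this paper.

def index_of_difficulty(distance, width):
    """Task difficulty in bits for a target of `width` at `distance`."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    """Throughput in bits per second for one pointing condition."""
    return index_of_difficulty(distance, width) / movement_time_s

# e.g. a 300 px target at a 120 px movement distance:
ID = index_of_difficulty(120, 300)
print(round(ID, 3))  # log2(1.4) ≈ 0.485
```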
The second new interface tested (Figure 5B) was not an
effort to eliminate sliding, but instead an exploration of
what a sliding or gesture type interface might look like for a
canine. This interface was inspired by Accot and Zhai's
description of the steering law [1, 2].
Figure 3: An example of a recorded human's interaction with
our touchscreen tapping task interface (human subject 1,
distance 120 px, target size 300 px). Green dots are the first
successful touch; red dots are the second successful touch [24].
Figure 4: An example of a dog interaction on the original
first-contact tapping task touchscreen interface [24].
Figure 5: A. Screenshot of tapping task interface with lift-off.
B. Screenshot of sliding task interface.
Unlike the tapping task, the sliding interface activates a
constant tone through the duration of the touch. The dog
must touch the blue then enter the white path and follow it
to the yellow without lifting off the screen or going outside
the drawn path. To allow the dog to understand its progress
in the task, the tone is constant while touching and changes
from lower frequency on blue, to a higher tone while on the
path, back to low on yellow. The tone and visual display
disappear upon completion.
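The sliding-task rules above can be sketched as a checker over the ordered contact points of one continuous touch. Regions are simplified to a vertical strip; all geometry and tone values are illustrative assumptions:

```python
# Sketch of the sliding-task success rule: touch blue, stay on the
# white path to yellow, never leave the drawn path. Regions are
# simplified; real hit-testing would use the drawn shapes.

LOW, HIGH = 400, 800  # tone frequencies in Hz; assumed values

def region(x, y):
    """Classify a point: 'blue' start, 'path', 'yellow' goal, or 'off'."""
    if not (0 <= x <= 350):  # the path is drawn roughly 3.5" wide
        return "off"
    if y < 100:
        return "blue"
    if y > 500:
        return "yellow"
    return "path"

def slide_succeeds(trace):
    """trace: ordered contact points of one continuous touch."""
    if region(*trace[0]) != "blue":
        return False
    for x, y in trace[1:]:
        if region(x, y) == "off":  # left the drawn path
            return False
    return region(*trace[-1]) == "yellow"

good = [(170, 50), (170, 300), (170, 600)]  # blue -> path -> yellow
bad  = [(170, 50), (500, 300), (170, 600)]  # wanders off the path
print(slide_succeeds(good), slide_succeeds(bad))
```

A lift-off mid-slide would end the trace before reaching yellow, so it fails the final check, matching the rule that the dog must not lift off the screen.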
Interface Hardware and Software Improvements
We originally created our software in Java, but found that
there was an inconsistent delay between touching the screen
and hearing a tone. We switched to using Unity3D, a video
game engine, because it is designed explicitly for good
audio-visual performance. We also developed a testing
protocol to ensure that we could get consistent timing in the
recorded data. We used a solenoid powered by a function
generator to generate one touch every second, and recorded
the touch events. We found less than 6 milliseconds of jitter
in the timing between 10 consecutive touches. Our new
software also includes the ability to adjust the initial height
of the interface, as in Figure 6 (in our first study we worked
exclusively with medium to large dogs, but in this iteration
we needed to accommodate a greater variety of sizes).
Figure 6: A smaller dog (Basset Hound) learning to use our
new lift-off tapping task interface. In this training session
photo only the blue selection is presented.
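The solenoid timing check described above amounts to measuring the spread of inter-touch intervals. A minimal sketch with made-up timestamps, not the paper's recorded data:

```python
# Sketch of the latency-consistency check: a solenoid taps the screen
# once per second, and jitter is the spread of the recorded inter-touch
# intervals. The timestamps below are illustrative.

def jitter_ms(timestamps_s):
    """Peak-to-peak deviation of inter-touch intervals, in milliseconds."""
    intervals = [b - a for a, b in zip(timestamps_s, timestamps_s[1:])]
    return (max(intervals) - min(intervals)) * 1000.0

# Ten touches nominally 1 s apart, with a few ms of recording noise:
touches = [0.000, 1.002, 2.001, 3.003, 4.000,
           5.002, 6.001, 7.003, 8.002, 9.001]
print(round(jitter_ms(touches), 1))  # 5.0 (ms), under the 6 ms observed
```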
We also replaced our slower IR touch interface with a
newer version [15]; the new IR touch surface includes
updated drivers and is easier to troubleshoot.
Dog Training Methods
Because we changed some elements of the original
touchscreen interface, we felt it important to use a new set
of dog participants, who were unfamiliar with the original
first-contact tapping task interface. The new dog
participants would be trained on either the lift-off selection
system, or the swipe/gesture system, but not both. For all
experiments, our research team only used positive
reinforcement (R+); we did not employ any type of
correction or punishment. We trained the dogs in 15-20
minute sessions with at least 30 minutes rest between each
training session. Below we describe the original training
method and the revised method used in this study so the
two can be compared.
Original “first contact” dog training method
In previous studies all of our dogs (set A) were pre-trained
in targeting (touching the handler’s hand or a specified
target with his nose). The dogs in our previous touchscreen
work were also trained with operant conditioning [20],
specifically shaping, which is creating new behaviors by
selectively reinforcing wanted behaviors the canine offers
[16]. The dogs were classically conditioned using a food
treat and a computer generated tone [13]. We used the same
tone to signify completion of the touchscreen task, so that
as the dogs used the interface they would hear the tone as a
“reward marker”.
The touchscreen interaction, while seemingly simple, is
quite complex for a canine. For training we subdivided the
tasks required. The dog was rewarded for first touching the
screen with his nose, then for seeking out the dot shape to
touch with his nose. We began by training the dog to touch
the blue dot (giving a reward only when the tone sounded)
and the yellow dot separately. Each of the dot selections is
marked by a different tone so the dog can understand his
progress through the task. Dogs normally use a multitude of
senses to find objects; part of the training was to reward the
dog for using only sight to find these "virtual objects".
Once the dog was trained to touch the dots separately our
initial trainer used the backchaining [16] method to teach
the dog to touch first the blue then yellow dot in sequence.
By starting the dog on a mat, first he was rewarded for
going to the screen and touching the yellow dot. When
proficient with one dot, he was rewarded only after he
touched the blue then yellow dot and returned to his mat.
Each step was added after the dog showed mastery of the
previous one. The trainer started with the last needed
behavior in the sequence so the dog understood when the
reward was given. Because the reward is only given at the
end of the task the dogs were motivated to complete the
behavior chain as correctly and as quickly as possible.
Current “lift-off” dog training method
Because we wanted to work with a new set of canine
participants (set B) we reached out to a new trainer with a
pool of dogs who had never used our interface. The new
trainer attempted to train the dogs on our new lift-off
tapping task interface. Before we discuss the new trainer's
techniques, we believe it is important to stress that the new
lift-off interaction style seems more difficult for canines to
learn in general, using either training method.
Our new trainer employed different techniques in two major
areas. First, she used a style of training called luring [16].
Luring is when a trainer shows a reward to the dog and
allows the dog to follow the reward through the task the
trainer hopes to impart to the dog. In this case the trainer
showed the new dog participants a food treat and enticed
them to follow the treat to the screen where she wanted the
dogs to touch. The luring approach was less effective than
the shaping approach of the original study, especially with
our lift-off system because the dog sees the treat at the
screen and does not associate the completion of the task
with the reward but rather the location as a place where
rewards appear. Second, the new trainer did not build up the
tapping task through backchaining. Because she built the task
up from the beginning, the dog learned to expect a reward
during each step of the sequential task. This created moments
where, in the middle of the tapping task sequence, the dog did
not comprehend that he needed to finish the entire sequence to
receive a reward. This is a major problem for our research, as
we need the dog participant to complete the entire tapping
task sequence (or any other sequential task) as quickly as
possible so that we can compare the dog interactions with
human interactions.
Upon seeing the shortcomings of this new training method
we began training a separate dog on the lift-off system using
our original shaping methods. We found that this dog
learned the system more quickly than the set B dog
participants, but still did not learn the lift-off task as quickly
as dogs in our previous study learned the original
first-contact task.
Training for sliding/gesture interactions
Our new trainer also used luring to train one dog to interact
with a sliding, gesture-based interface modeled after Accot
and Zhai's description of the steering law [1, 2]. Luring is more
appropriate for beginning this training as the interface
reacts to a continuous touch and slide from the dog’s nose.
The need for the dog to follow the path of the interface
means that a trainer initially using luring can lead the dog
through the correct motions. It is important to quickly
transfer from luring to shaping so that the dog understands
that the task must be completed before it receives a reward.
Figure 7: Dog participant's touch interactions; green lines are
successful sliding touches, and red lines are unsuccessful
attempts.
RESULTS
In this ongoing research we attempted to train five dogs on
the lift-off tapping task interface. The new trainer using her
methods trained four dogs, and our original trainer using
our original training protocol trained one dog. None of the
dogs being trained on the lift-off tapping task became
proficient enough to begin actual testing. Some did learn
the task but were not consistent. In general, the dogs in our
last study were able to learn first-contact much more quickly
and, once trained, were quite proficient and consistent in their
ability to activate the system. For this reason we believe
that lift-off is not a good choice for canine touchscreen
interface design.
We separately trained one dog participant on the new
sliding interface. Our testing protocol has not been
finalized or optimized for comparison with human
interactions, but the dog was able to learn the interaction
and successfully complete the sliding task. We can see from
Figure 7 that the dog was often successful in completing the
task, sliding up from blue to yellow while staying on the
visible white path. The path is vertical so that the dog's
interaction remains visible and is not occluded by the muzzle.
One interface observation is that the path should be at least
3.5 inches wide to allow the dog to see the path while its
nose is touching the screen.
LESSONS LEARNED: CANINE TOUCHSCREEN
INTERFACE DESIGN CONSIDERATIONS
The initial results of our lift-off study have generated a
preliminary foundation for touchscreen "best practices":

- Infrared touchscreens backed by non-projection monitors
currently seem to be the best hardware for canine
interactions [24].
- Targets for tapping should be 3.5" or larger [24].
- Target distances should be at least 3.5" apart [24].
- Sliding paths should also be 3.5" wide or larger.
- Shaping is the most effective training method for tapping
task touchscreen interactions.
- Luring can be effectively used for initial training of
sliding/gestural interactions, but should be quickly
exchanged for shaping.
- Backchaining seems to be the best method for training the
dog participants to complete the full sequential task with
motivation to move as quickly as possible through the task.
- Lift-off touchscreen interactions are much more difficult
for dogs to comprehend.
- First-contact touchscreen interactions are easier for dogs
to learn and use.
DISCUSSION AND FUTURE WORK
Through the course of attempting to update our canine
touchscreen interface we learned quite a few new design
considerations and also which training methods work best.
One exciting area of future work we intend to pursue can be
extrapolated from Figure 7. Notice that even when the dog
participant did not correctly stay within the path, the dog’s
motion and touch gesture look the same as when it did stay
within the path. If viewed from the perspective of a touch
gesture these interactions would also be successful
activations. We might be able to train the dogs using the
onscreen visuals as a guide, and later let them make the
gesture anywhere on the screen. By removing the requirement
to stay within the path while creating more complex paths, we
might be able to create a touch-based gesture control system
that dogs could activate relatively easily. It would be
interesting to explore how complex these gestures could
become.
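One way to prototype this position-independent gesture idea is to classify a trajectory by its overall direction rather than by path containment. A hedged sketch, with thresholds that are illustrative assumptions:

```python
# Sketch of a position-independent gesture check: classify a nose
# trajectory by overall direction, so the same upward swipe succeeds
# anywhere on the screen. Thresholds are illustrative assumptions.

def is_upward_swipe(trace, min_travel=200, max_drift_ratio=0.5):
    """trace: ordered (x, y) contact points; y grows downward."""
    (x0, y0), (x1, y1) = trace[0], trace[-1]
    rise = y0 - y1             # positive when moving up-screen
    drift = abs(x1 - x0)       # sideways wobble
    return rise >= min_travel and drift <= max_drift_ratio * rise

centered = [(400, 700), (410, 450), (395, 200)]
offset   = [(900, 650), (880, 400), (905, 150)]  # same motion, elsewhere
print(is_upward_swipe(centered), is_upward_swipe(offset))
```

Under this kind of rule, the "unsuccessful" red traces in Figure 7 that show the correct motion outside the drawn path would count as successful activations.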
Finally, to showcase the usefulness of what we have learned
thus far, and using our canine touchscreen interface design
considerations, we created a first-contact tapping task
system that demonstrates directly a dog’s ability to call for
help (Figure 8). The system has three tapping targets and,
once they are activated in sequence, it sends a text message
calling for help (for now, just to a private phone). One of our
dog participants is fully trained to activate the system when
someone says to him "go get help".
Figure 8: A first-contact emergency notification canine
touchscreen interface.
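The three-target sequence described above can be sketched as follows. The alert callback stands in for the text-message step; names, geometry, and the reset behavior are illustrative assumptions:

```python
# Sketch of the three-target emergency sequence: targets must be hit
# in order, and completing the sequence triggers an alert. All names
# and geometry are illustrative.

class EmergencyScreen:
    def __init__(self, targets, send_alert):
        self.targets = targets     # ordered list of (x, y, w, h)
        self.send_alert = send_alert
        self.progress = 0

    def on_touch(self, x, y):
        tx, ty, w, h = self.targets[self.progress]
        if tx <= x <= tx + w and ty <= y <= ty + h:
            self.progress += 1
            if self.progress == len(self.targets):
                self.progress = 0  # reset for the next activation
                self.send_alert("Dog activated help screen")

alerts = []
screen = EmergencyScreen(
    targets=[(0, 0, 200, 200), (300, 0, 200, 200), (600, 0, 200, 200)],
    send_alert=alerts.append,
)
for x, y in [(100, 100), (400, 100), (700, 100)]:  # all three, in order
    screen.on_touch(x, y)
print(alerts)
```

Requiring an ordered sequence of large first-contact targets makes accidental activation by casual nose touches unlikely, which matters for an alarm interface.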
ACKNOWLEDGMENTS
The work presented here was completed under National
Science Foundation NSF Grant IIS-1525937.
REFERENCES
[1] Accot, J. and Zhai, S. 2002. More than dotting the i's ---
foundations for crossing-based interfaces. Proceedings of
the SIGCHI Conference on Human Factors in Computing
Systems - CHI '02. 4 (2002), 73.
[2] Accot, J. and Zhai, S. 1999. Performance Evaluation of
Input Devices in Trajectory-based Tasks: An Application
of The Steering Law. Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems.
(1999), 466-472.
[3] Amundin, M., Starkhammar, J., Evander, M., Almqvist,
M., Lindström, K. and Persson, H.W. 2008. An
echolocation visualization and interface system for
dolphin research. The Journal of the Acoustical Society of
America. 123, 2 (Feb. 2008), 1188-94.
[4] Canine Companions for Independence: www.cci.org.
Accessed: 2016-01-04.
[5] Delfour, F. and Marten, K. 2005. Inter-modal learning
task in bottlenosed dolphins (Tursiops truncatus): a
preliminary study showed that social factors might
influence learning strategies. Acta Ethologica. 8, 1 (May
2005), 57-64.
[6] Dey, A., Mankoff, J. and Mankoff, K. 2005. Supporting
Interspecies Social Awareness: Using peripheral displays
for distributed pack awareness. (2005), 253-258.
[7] Hu, F., Silver, D. and Trudel, A. 2007. Lonely
Dog@Home. Proc. of the Conf. on Web Intelligence and
Intelligent Agent Technology Workshops (2007), 333-337.
[8] Jackson, M., Zeagler, C. and Valentin, G. 2013. FIDO -
facilitating interactions for dogs with occupations:
wearable dog-activated interfaces. Proceedings of the
International Symposium on Wearable Computers. (2013),
81-88.
[9] Jackson, M.M., Valentin, G., Freil, L., Burkeen, L.,
Zeagler, C., Gilliland, S., Currier, B. and Starner, T.
2014. FIDO - Facilitating interactions for dogs with
occupations: wearable communication interfaces for
working dogs. Personal and Ubiquitous Computing.
(Oct. 2014).
[10] Lee, S.P., Cheok, A.D., James, T.K.S., Debra, G.P.L., Jie,
C.W., Chuang, W. and Farbiz, F. 2005. A mobile pet
wearable computer and mixed reality system for human-
poultry interaction through the internet. Personal and
Ubiquitous Computing. 10, 5 (Nov. 2005), 301-317.
[11] MacKenzie, I.S. 2009. Fitts' Law as a Research and
Design Tool in Human-Computer Interaction. Human-
Computer Interaction. 7, 1 (Nov. 2009), 91-139.
[12] Noz, F. and An, J. 2011. Cat Cat Revolution: An
Interspecies Gaming Experience. (2011), 2661-2664.
[13] Pavlov, I.P. 1927. Conditional Reflexes. Dover.
[14] Potter, R.L., Weldon, L.J. and Shneiderman, B. 1988.
Improving the accuracy of touch screens: an experimental
evaluation of three strategies. Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems -
CHI '88. (1988), 27-32.
[15] PQ Labs G4 Multi Touch Screen Overlay 60 inch:
http://multitouch.com/product_plus.html. Accessed:
2016-01-04.
[16] Pryor, K. 2009. Reaching the Animal Mind. Scribner,
Simon & Schuster, Inc.
[17] Robinson, C., Mancini, C., van der Linden, J., Guest, C.
and Harris, R. 2014. Canine-Centered Interface Design:
Supporting the Work of Diabetes Alert Dogs.
Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems. (2014), 3757-3766.
[18] Robinson, C., Mancini, C., van der Linden, J., Guest, C.,
Swanson, L., Marsden, H., Valencia, J. and
Aengenheister, B. 2015. Designing an emergency
communication system for human and assistance dog
partnerships. Proceedings of the 2015 ACM International
Joint Conference on Pervasive and Ubiquitous
Computing - UbiComp '15. (2015), 337-347.
[19] Savage-Rumbaugh, E.S. 1986. Ape language: From
conditioned response to symbol. Columbia University
Press.
[20] Skinner, B.F. 1938. The behavior of organisms: an
experimental analysis. Appleton-Century.
[21] Soukoreff, R.W. and MacKenzie, I.S. 2004. Towards a
standard for pointing device evaluation, perspectives on
27 years of Fitts' law research in HCI. International
Journal of Human-Computer Studies. 61, 6 (2004),
751-789.
[22] Wingrave, C.A. 2010. Early Explorations of CAT:
Canine Amusement and Training. (2010), 2661-2669.
[23] Young, J.E., Young, N., Greenberg, S. and Sharlin, E.
Feline Fun Park: A Distributed Tangible Interface For
Pets And Owners. 1-4.
[24] Zeagler, C., Gilliland, S., Freil, L., Starner, T. and
Jackson, M.M. 2014. Going to the Dogs: Towards an
Interactive Touchscreen Interface for Working Dogs.
(2014), 497-507.
... These vary from being entertained through screen systems [23], working by pressing buttons and pulling ropes to notify people [8,12,45] and informing people of their experiences through the use of computer systems [32,44]. Dogs as actors can be technology consumers [11], users through button pressing and activating screen systems [23,44,56], users as wearers of GPS technologies through GPS trackers and vest monitoring systems [8,34] and game players through gamified tablets [53]. ...
... For boundary-boxes, our dog participant had prior experience with this technology but did not prefer to interact in this manner, so this method was excluded. Touch interfaces in turn require extensive training, going against our method [8,56]. Likewise, we excluded gaze/head direction, as dogs typically look at screens for under three seconds at any one time, having an overall low mean attention time [22,23]. ...
... The above method formed the basic interaction mechanism of starting the DogPhone device and facilitated dog-led interactions with the device. While this is a nonspecific interaction method (moving the ball in any direction), this interaction style also allowed for the constant affordances of the toy initiating the video interaction, assigning a clear linear meaning to the toy and the video call [56]. As it is unknown how dogs experience, are motivated by, or can control interfaces-this work begins to take steps and discuss how dogs understand, are motivated intrinsically to use and what benefits they get from remote video call systems. ...
Article
Full-text available
Over the past decade, many systems have been developed for humans to remotely connect to their pets at home. Yet little attention has been paid to how animals can control such systems and what the implications are of animals using internet systems. This paper explores the creation of a video call device to allow a dog to remotely call their human, giving the animal control and agency over technology in their home. After building and prototyping a novel interaction method over several weeks and iterations, we test our system with a dog and a human. Analysing our experience and data, we reflect on power relations, how to quantify an animal's user experience and what interactive internet systems look like with animal users. This paper builds upon Human-Computer Interaction methods for unconventional users, uncovering key questions that advance the creation of animal-to-human interfaces and animal internet devices.
... This technology spans from animals that we keep in zoos and sanctuaries [6], towards animals that we domesticate as pets in our homes [14,33,49] and animals in the wild [23]. ACI research has seen technology systems designed for animals to both assist humans [5,52] and to entertain animals [43] aimed towards increasing the animal-human bond [6,49]. As there is a large number of animals living in domestic or caged situations, ACI has also investigated connecting owners to their dogs [41] and zoo visitors to the captive animals [6,48]. ...
... A sub field of ACI when focusing upon dogs is Dog-Computer Interaction (DCI). Dogs are unique in DCI as they can take on varying roles from both a working dog [52] and as a pet [18,49]. Whilst dogs can occur in both positions, there is a difference in the human's motivation towards these technologies in regards to the requirements and animal-centeredness; whose needs the technology is meeting (human or dog) and the requirement to be dog centeric [14]. ...
... In DCI interfaces, a dogs response can be derived from facial reactions such as eye movements [44], nose movements [20,52] and head movements [14,17,47]; behaviors such as biting [19], pulling [40], pushing buttons [12,39], touch screens [52] and posture analysis [31]; and biological responses such as heart and respiration rates [31] and hormone levels [12]. These behaviors and biological responses have been used within DCI systems to allow the dog to feed-back to a system such as operating nose plate interfaces [20], bitable pulleys and buttons [19,40], paw activated buttons [12,39], proximity [18], and haptic vests [19,26,5]. ...
Preprint
Full-text available
How humans use computers has evolved from human-machine interfaces to human-human computer mediated communication. Whilst the field of animal-computer interaction has roots in HCI, technology developed in this area currently only supports animal-computer communication. This design fiction paper presents animal-animal connected interfaces, using dogs as an instance. Through a co-design workshop, we created six proposals. The designs focused on what a dog internet could look like and how interactions might be presented. Analysis of the narratives and conceived designs indicated that participants' concerns focused around asymmetries within the interaction. This resulted in the use of objects seen as familiar to dogs. This was conjoined with interest in how to initiate and end interactions, which was often achieved through notification systems. This paper builds upon HCI methods for unconventional users, and applies a design fiction approach to uncover key questions towards the creation of animal-to-animal interfaces.
... Dogs are commonly recruited as participants in Animal-Computer Interaction (ACI) studies in a variety of contexts. For working dogs, a growing number devices have been developed including tech embedded wearables [19,20,8,2,44,50], tactile interaction systems [41,27,9], and touchscreen interfaces [51]. Studies focusing on companion dogs have explored wearable tracking [28,42,31,24,39] and technology supported interactions [40,4]. ...
... In addition to early data to iterate on the study design, we also aim to explore using positive reinforcement methods over a couple sessions to familiarize participants with a stimulating object. As previous work has called for training users for interacting with technology [51], the study procedures may uncover the effect of training periods. This in turn will impact the future study design and potentially aid future studies in incorporating training introductions to system evaluations to minimize neophobic responses and support confident interactions. ...
... As previous work has indicated the importance of understanding behavioral signs of participants [36,51], this protocol may also provide insights that build off of species-centric evaluation frameworks and may aid future work in this area. In particular, we are interested in discussing the impact of multiple sessions on behavior for future enrichment evaluations. ...
Conference Paper
Full-text available
Outdoor enrichment has a variety of potential uses for increasing physical activity and strengthening companion animal bonds. In this pilot protocol, we aim to explore a species-centric evaluation of drone flying patterns and distances for enrichment purposes. This includes presenting participants with introductory and training phases to encourage positive familiarization before evaluating flight patterns, with the ability to walk away at any time. Sessions will be repeated within participants to explore any novelty or familiarization effects, as well as collect guardian perceptions of impact. From this pilot, we aim to explore our evaluation methods, characterize pet guardian perspectives, and narrow preferred movement patterns and distances for a future deploy.
... Thus, it is vital that CCI designers understand what constitutes "good" in canine design. Different systems have used different modalities for canine input, i.e. gesture and positioning detection [24,49,29,55,50], facial reactions [18,20,24,44], bite activation [57,23], bite-and-pull activation [40], pressure activation [30,32,36,14], and touchscreen interfaces [59,58]. ...
... Work has also explored the use of touchscreens as an alert interface for medical alert dogs [59]. This work focuses on the use of nose touches to interact with a series of graphics on a large touchscreen. ...
... Yet, as our data suggests, while usage time gives an indication of an animal's choice, it does not speak to the quality of the animal's experience of the interaction. To peek into an animal's user experience and meaning-making, many researchers use their own experiences, as well as animal behaviour specialists and trainers, to make guesses (as humans) through subjective behavioural analysis [51,62,66,70]. This interpretation is based on the assumption that all animals, including us, co-exist somewhere on the same spectrum of understanding when interacting with computer systems [41]. ...
... The system interface was developed with the participation of, and based on the preferences shown by, individual dogs interacting with a range of toy-like prototypes; but the authors did not discuss whether and how dog behavioural traits might have informed the dogs' preferences or interaction patterns. Zeagler et al. [51] investigated the interaction of dogs with the touch-screen interface of a prospective alarm system. The authors describe in detail the effect of different training protocols on the dogs' performance, rather than focusing on behavioural differences and similarities between the dogs. ...
... As such, it is questionable what preferences, or associations of liking, can be drawn from such an interaction when the goals, interactions, and perceptions of the animal user are unknown. While researchers have investigated usability [49], preference [7] and interactions [48,50] with non-human animals within the context of computer devices and interfaces, it is an incomplete science that needs to have leeway for misunderstandings and misconceptions when interpreting a non-human species. In other words, the human idea of computer interaction and preference is not directly comparable to non-human animal species using computer-enabled systems. ...
Article
Full-text available
Computer-enabled screen systems containing visual elements have long been employed with captive primates for assessing preference, reactions and for husbandry reasons. These screen systems typically play visual enrichment to primates without them choosing to trigger the system and without their consent. Yet, what videos primates, especially monkeys, would prefer to watch of their own volition, and how to design computers and methods that allow choice, is an open question. In this study, we designed and tested, over several weeks, an enrichment system that allows white-faced saki monkeys to trigger different visual stimuli in their regular zoo habitat while automatically logging and recording their interaction. By analysing this data, we show that the sakis triggered underwater and worm videos over the forest, abstract art, and animal videos, and a control condition of no stimuli. We also note that the sakis used the device significantly less when playing animal videos compared to other conditions. Yet, plotting the data over time revealed an engagement bell curve suggesting confounding factors of novelty and habituation. As such, it is unknown whether the stimuli or the device-usage curve caused the changes in the sakis’ interactions over time. Looking at the sakis’ behaviours and working with zoo personnel, we noted that the stimuli conditions significantly decreased the sakis’ scratching behaviour. For the research community, this study builds on methods that allow animals to control computers in a zoo environment, highlighting problems in quantifying animal interactions with computer devices.
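The analysis described above, comparing how often the device is triggered under each stimulus condition and how engagement changes over time, amounts to tallying logged interaction events. A minimal sketch of that tally, assuming a hypothetical log format of (timestamp string, condition) pairs, which is not the study's actual data format:

```python
# Sketch of tallying logged device triggers per stimulus condition, and
# bucketing them by day to expose novelty/habituation curves. The log
# format ("YYYY-MM-DD HH:MM" timestamp, condition name) is hypothetical,
# for illustration only.

from collections import Counter


def triggers_per_condition(log):
    """Count device activations for each stimulus condition."""
    return Counter(condition for _timestamp, condition in log)


def daily_trigger_counts(log):
    """Count activations per calendar day, in chronological order."""
    by_day = Counter(timestamp[:10] for timestamp, _condition in log)
    return dict(sorted(by_day.items()))
```

Plotting the per-day counts is what would reveal the engagement bell curve the abstract describes.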
... TOCs are now commonplace in studies utilizing more traditional laboratory animals such as rodents, primates, and pigeons. TOCs have also been used to study non-model species such as bears [2,3], dogs [4], and tortoises [5] in captivity. However, rarely has the use of this method been described in wild-caught individuals from non-model species (but see [6,7]). ...
Article
Full-text available
Operant chambers are small enclosures used to test animal behavior and cognition. While traditionally reliant on simple technologies for presenting stimuli (e.g., lights and sounds) and recording responses made to basic manipulanda (e.g., levers and buttons), an increasing number of researchers are beginning to use Touchscreen-equipped Operant Chambers (TOCs). These TOCs have obvious advantages, namely by allowing researchers to present a near infinite number of visual stimuli as well as increased flexibility in the types of responses that can be made and recorded. We trained wild-caught adult and juvenile great-tailed grackles (Quiscalus mexicanus) to complete experiments using a TOC. We learned much from these efforts, and outline the advantages and disadvantages of our protocols. Our training data are summarized to quantify the variables that might influence participation and success, and we discuss important modifications to facilitate animal engagement and participation in various tasks. Finally, we provide a “training guide” for creating experiments using PsychoPy, a free and open-source software that was incredibly useful during these endeavors. This article, therefore, should serve as a resource to those interested in switching to or maintaining a TOC, or who similarly wish to use a TOC to test the cognitive abilities of non-model species or wild-caught individuals.
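The trial logic at the core of such a TOC reduces to a few lines: present a stimulus, score a touch against the target region, and reward hits. The sketch below is a generic illustration of that loop, not the grackle protocol; all names and thresholds are hypothetical, and real TOC software (such as experiments built in PsychoPy) wraps stimulus display, timing, and reward hardware around this scoring step.

```python
# Generic sketch of touch scoring in a touchscreen operant chamber (TOC)
# trial: a touch within the target's radius is a hit (rewarded), anything
# else is a miss. Names and thresholds are hypothetical illustrations.

from dataclasses import dataclass


@dataclass
class Trial:
    target_x: float   # centre of the on-screen target stimulus, in pixels
    target_y: float
    radius: float     # touches within this distance count as hits

    def score_touch(self, x: float, y: float) -> bool:
        """Return True (reward) if the touch lands on the target."""
        dx, dy = x - self.target_x, y - self.target_y
        return dx * dx + dy * dy <= self.radius * self.radius


def run_session(trials, touches):
    """Score a sequence of (trial, touch) pairs; return the hit rate."""
    hits = sum(t.score_touch(*xy) for t, xy in zip(trials, touches))
    return hits / len(trials)
```

For nose touches, which land as larger, less precise contact patches than fingertips, the hit radius would typically be enlarged relative to human defaults.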
Article
Full-text available
Computer-mediated interaction for working dogs is an important new domain for interaction research. In domestic settings, touchscreens could provide a way for dogs to communicate critical information to humans. In this paper we explore how a dog might interact with a touchscreen interface. We observe dogs' touchscreen interactions and record difficulties against what is expected of humans' touchscreen interactions. We also solve hardware issues through screen adaptations and projection styles to make a touchscreen usable for a canine's nose touch interactions. We also compare our canine touch data to humans' touch data on the same system. Our goal is to understand the affordances needed to make touchscreen interfaces usable for canines and help the future design of touchscreen interfaces for assistance dogs in the home.
Article
Full-text available
Working dogs have improved the lives of thousands of people throughout history. However, communication between human and canine partners is currently limited. The main goal of the FIDO project is to research fundamental aspects of wearable technologies to support communication between working dogs and their handlers. In this study, the FIDO team investigated on-body interfaces for dogs in the form of wearable technology integrated into assistance dog vests. We created five different sensors that dogs could activate based on natural dog behaviors such as biting, tugging, and nose touches. We then tested the sensors on-body with eight dogs previously trained for a variety of occupations and compared their effectiveness in several dimensions. We were able to demonstrate that it is possible to create wearable sensors that dogs can reliably activate on command, and to determine cognitive and physical factors that affect dogs’ success with body–worn interaction technology.
Article
Full-text available
Many people with Diabetes live with the continuous threat of hypoglycemic attacks and the danger of going into coma. Diabetes Alert Dogs are trained to detect the onset of an attack before the condition of the human handler they are paired with deteriorates, giving them time to take action. We investigated requirements for designing an alarm system allowing dogs to remotely call for help when their human falls unconscious before being able to react to an alert. Through a multispecies ethnographic approach we focus on the requirements for a physical canine user interface, involving dogs, their handlers and specialist dog trainers in the design process. We discuss tensions between the requirements for canine and the human users, argue the need for increased sensitivity towards the needs of individual dogs that goes beyond breed specific physical characteristics, and reflect on how we can move from designing for dogs to designing with dogs.
Article
Full-text available
Working dogs have improved the lives of thousands of people. However, communication between human and canine partners is currently limited. The main goal of the FIDO project is to research fundamental aspects of wearable technologies to support communication between working dogs and their handlers. In this pilot study, the FIDO team investigated on-body interfaces for assistance dogs in the form of wearable technology integrated into assistance dog vests. We created four different sensors that dogs could activate (based on biting, tugging, and nose gestures) and tested them on-body with three assistance-trained dogs. We were able to demonstrate that it is possible to create wearable sensors that dogs can reliably activate on command.
Article
Full-text available
Many pet owners spend long hours away from home, and are forced to leave their pets unattended and un-entertained during this time. To alleviate this, we have developed a distributed tangible interface that not only promotes pet activity, but that gives the owner the ability to monitor and further encourage cat activity while they are away from the home. Specifically, Feline Fun Park is a computerised 'cat condo' that: a) senses cat activity around and within it; b) automatically responds to these activities by triggering a variety of devices that encourage cat interaction with them; c) displays the cat's activity level to the pet's distant owner, and d) allows that owner, at his or her discretion, to activate the devices manually across the internet. The system offers a level of automatic interactive entertainment for a pet not possible with traditional toys, where it adapts to various levels of play to keep the cat interested, and where it encourages pet/owner interaction.
Article
Full-text available
A study comparing the speed, accuracy, and user satisfaction of three different touch screen strategies was performed. The purpose of the experiment was to evaluate the merits of the more intricate touch strategies that are possible on touch screens that return a continuous stream of touch data. The results showed that a touch strategy providing continuous feedback until a selection was confirmed had fewer errors than other touch strategies. The implications of the results for touch screens containing small, densely-packed targets were discussed.
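The take-off strategy evaluated in that study, in which the touch is tracked continuously with feedback and the selection is committed only where contact is released, is the same lift-off selection explored for canine nose touches in this paper. Below is a minimal sketch of that state machine; the `Target` class and handler names are hypothetical, not taken from either study.

```python
# Minimal sketch of lift-off (take-off) selection: the target under the
# contact point is highlighted continuously while contact is held, and the
# selection is committed only at the position where contact is released.
# The Target class and handler names are hypothetical, for illustration.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Target:
    name: str
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h


class LiftOffSelector:
    """Commits a selection on release, not on first contact."""

    def __init__(self, targets: List[Target]):
        self.targets = targets
        self.highlighted: Optional[Target] = None  # continuous feedback

    def _hit(self, x: int, y: int) -> Optional[Target]:
        return next((t for t in self.targets if t.contains(x, y)), None)

    def touch_move(self, x: int, y: int) -> None:
        # While contact is held, only the highlight tracks the contact point.
        self.highlighted = self._hit(x, y)

    def touch_up(self, x: int, y: int) -> Optional[Target]:
        # Selection happens here: whatever is under the lift-off point wins,
        # even if a different target was contacted first.
        self.highlighted = None
        return self._hit(x, y)
```

Touching down on one target, sliding onto another, and lifting off there selects the latter; this forgiving behavior is why the strategy produced fewer errors on small, densely packed targets.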
Conference Paper
In this research we developed an alarm system that enables assistance dogs to call for help on behalf of their vulnerable owners in an emergency, involving the end users (both assistance dogs and their owners) directly in the entire design process. Here we present a high-fidelity prototype of a user-friendly canine alarm system. In developing the system, we sought to understand the level of support required for a canine user to successfully interact with an interface, finding that the type of emergency a dog is faced with may vary widely and that consequently dogs may have to act on behalf of their assisted owners with varying degrees of autonomy. We also explored the process of conducting usability testing with both canine and human participants, seeking to identify where requirements of one species may overlap with, or diverge from, the other.
Article
Skinner outlines a science of behavior which generates its own laws through an analysis of its own data rather than securing them by reference to a conceptual neural process. "It is toward the reduction of seemingly diverse processes to simple laws that a science of behavior naturally directs itself. At the present time I know of no simplification of behavior that can be claimed for a neurological fact. Increasingly greater simplicity is being achieved, but through a systematic treatment of behavior at its own level." The results of behavior studies set problems for neurology, and in some cases constitute the sole factual basis for neurological constructs. The system developed in the present book is objective and descriptive. Behavior is regarded as either respondent or operant. Respondent behavior is elicited by observable stimuli, and classical conditioning has utilized this type of response. In the case of operant behavior no correlated stimulus can be detected when the behavior occurs. The factual part of the book deals largely with this behavior as studied by the author in extensive researches on the feeding responses of rats. The conditioning of such responses is compared with the stimulus conditioning of Pavlov. Particular emphasis is placed on the concept of "reflex reserve," a process which is built up during conditioning and exhausted during extinction, and on the concept of reflex strength. The chapter headings are as follows: a system of behavior; scope and method; conditioning and extinction; discrimination of a stimulus; some functions of stimuli; temporal discrimination of the stimulus; the differentiation of a response; drive; drive and conditioning; other variables affecting reflex strength; behavior and the nervous system; and conclusion. (PsycINFO Database Record (c) 2012 APA, all rights reserved)