Canine Computer Interaction: Towards Designing a
Touchscreen Interface for Working Dogs
Melody Moore Jackson
* All researchers from Georgia Institute of Technology
ABSTRACT
Touchscreens can provide a way for service dogs to relay
emergency information about their handlers from a home or
office environment. In this paper, we build on work
exploring the ability of canines to interact with touchscreen
interfaces. We observe new requirements for training and
explain best practices found in training techniques.
Learning from previous work, we also begin to test new
dog interaction techniques such as lift-off selection and
sliding gestural motions. Our goal is to understand the
affordances needed to make touchscreen interfaces usable
for canines and help the future design of touchscreen
interfaces for assistance dogs in the home.
Author Keywords
Animal Computer Interaction; Assistance Dog Interface
Design; Touchscreen Interactions
ACM Classification Keywords
H.5.2 [Information interfaces and presentation]: User Interfaces.
INTRODUCTION
Jacob has epilepsy. His medical alert dog Dug is trained to
sense an oncoming seizure and notify Jacob before it starts. Dug is trained to nudge Jacob to a wall so he does not
fall down. Dug is also trained to lick Jacob’s face until he
recovers. Dug, however, is special; he is also trained to
interact with a wearable computer on his service vest [8, 9].
When Jacob has a seizure, Dug can activate a capacitive
bite sensor on his vest, which notifies health services and
Jacob’s loved ones of his condition and where to find him.
By summoning help, Dug performs a potentially life-saving
service for Jacob. But what if Jacob and Dug are at home
and Dug is not wearing his service dog vest? Dog-
computer interactions that do not require a vest or other
wearables could fill a critical need for people like Jacob.
Because of the ubiquitous nature of touchscreens, we began
exploring possibilities and challenges in designing virtual
interfaces for dogs [24]. Our initial study demonstrated a
proof of concept that a dog can use a touchscreen. In our
current study, we start to examine the questions: “What is the best way to train a dog to effectively use a touchscreen interface?” and “How do we best design an interface for canine-computer interaction?”
RELATED WORK
Animals have been involved in research for a long time,
and many research experiments have used machine
interfaces to derive knowledge about animal behavior and
cognition from animal interactions [5, 19, 20]. Amundin et al. [3] created an echolocation-based interface: by using echolocation as “touch,” the system acts as a type of touchscreen. Amundin’s system was built to understand how a dolphin might best interact with a touchscreen, which is close in motivation to our research with canines. Work
such as Mankoff’s [6] has discussed HCI in the context
ACI '16, November 16-17, 2016, Milton Keynes, United Kingdom. © 2016 ACM. ISBN 978-1-4503-4758-7/16/11…$15.00
of animals, describing a “Fitts’ Law” test for dogs using tennis balls. This work starts a conversation about canines
using computer interfaces. Other animal computer
interaction systems have focused around monitoring pets
when the owner/handler is not at home or providing
entertainment [7, 10, 12, 22, 23]. Robinson and Mancini’s work also looks at ways for dogs to interact with devices in the home; although these devices are similarly motivated, the interfaces are not centered on touchscreen interactions [17, 18].
DOGS USING TOUCH SCREENS
The new lessons presented in this paper build on our previous work [24] designing a touchscreen user test for dogs modeled after Soukoreff and MacKenzie’s multidirectional tapping task [11, 21]. Potter, Weldon, and Shneiderman [14] describe three main ways interfaces can be designed to accept touch interaction:
• Land-on: the cursor is under the touch, and only the first impact point counts.
• First-contact: the cursor is under the touch, but the first contact with the target counts even if it is not the first impact with the surface.
• Take-off: the cursor is offset, and selections are made by where the touch lifts off the surface. We do not use a cursor for our dog interactions, so we call this type of interaction lift-off.
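To make the distinction concrete, the three strategies can be sketched as predicates over a recorded touch trace. This is a minimal illustration, not the system's actual code; the target geometry and trace format are assumptions.

```python
# Illustrative sketch of the three selection strategies described by
# Potter, Weldon, and Shneiderman, applied to a touch trace: a list
# of (x, y) points from finger/nose-down to lift-off.

def inside(target, point):
    """Axis-aligned rectangular target: (x, y, width, height)."""
    tx, ty, tw, th = target
    px, py = point
    return tx <= px <= tx + tw and ty <= py <= ty + th

def land_on(target, trace):
    # Only the first impact point counts.
    return inside(target, trace[0])

def first_contact(target, trace):
    # The first contact with the target counts, even if the touch
    # landed elsewhere and slid in.
    return any(inside(target, p) for p in trace)

def take_off(target, trace):
    # Selection is decided by where the touch lifts off.
    return inside(target, trace[-1])
```

A touch that lands outside a target, slides through it, and lifts off beyond it selects the target only under first-contact, which matches how the dogs in our earlier study used the screen.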
The original system used first-contact to make tapping
target selections. “Each tapping task trial consists of one
press on the blue target, zero or more erroneous touches,
followed by one touch on the yellow. When the dog makes
a correct selection, first hitting the blue target, a lower
frequency tone is played to signify to the dog he/she has
made the blue selection. After selecting the blue target if
the dog selects the yellow target a higher frequency tone is
played, and the targets disappear from the screen upon lift
off of the successful yellow target touch, signifying that the
dog has completed the task. Blue and yellow targets were
chosen because a dog can see the difference between
yellow and blue.” [24]
We compared the results from humans performing this first-
contact tapping task to dogs performing the same task with
their noses. Notice in Figure 3 that the human taps the screen on the blue and then the yellow target when asked to do so. As seen in Figure 4, dogs trained to perform the same task found it easier to operate the first-contact interface by touching the blue target and sliding their noses along the screen to the yellow. From
observing these interactions we decided to try two new
types of interfaces.
The first new interface tested for this study (Figure 5A) mimicked the original tapping task but used lift-off; in every other way the interaction was the same. The touchscreen produces a tone when the dog lifts his nose off of the screen. We chose to test lift-off in an effort to mold the dogs’ interactions with the touchscreen closer to a human’s interactions. If the dog participants were able to use the system with lift-off interactions, we could more closely relate the dogs’ interactions to what would be expected in a human-based Fitts’ Law tapping task study [11, 21].
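For reference, the quantity a human-based tapping study would compute is the Shannon formulation of the index of difficulty used in Soukoreff and MacKenzie's protocol [11, 21]. The sketch below is illustrative; the distance and width values in any call are assumptions, not measurements from our study.

```python
import math

# Shannon formulation of Fitts' index of difficulty and the derived
# throughput, as used in the multidirectional tapping protocol.

def index_of_difficulty(distance, width):
    """ID in bits: log2(D/W + 1), for target distance D and width W."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits per second for one condition (MT in seconds)."""
    return index_of_difficulty(distance, width) / movement_time
```

Comparable throughput numbers for dogs would require the lift-off selection style that, as we describe below, the dogs found difficult to learn.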
The second new interface tested (Figure 5B) was not an
effort to eliminate sliding, but instead an exploration of
what a sliding or gesture type interface might look like for a
canine. This interface was inspired by Accot and Zhai’s
description of the steering law [1, 2].
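For a straight tunnel of constant width, such as our vertical sliding path, Accot and Zhai's steering law [1, 2] gives a simple index of difficulty. The sketch below states the relation; the regression constants in any call are assumptions fitted per user population, not values from our data.

```python
# Steering law for a straight path of length A and constant width W:
# ID = A / W, with predicted movement time MT = a + b * ID for
# empirically fitted constants a and b.

def steering_id(path_length, path_width):
    return path_length / path_width

def predicted_mt(path_length, path_width, a, b):
    # a and b must come from a regression over recorded trials;
    # any concrete values passed here are illustrative assumptions.
    return a + b * steering_id(path_length, path_width)
```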
Figure 3: An existing example of a recorded human’s
interaction with our touchscreen tapping task interface. Green
dots are the first successful touch; red dots are the second
successful touch [24].
Figure 4: This is an example of a dog interaction on the
original first-contact tapping task touchscreen interface [24].
Figure 5: A. Screenshot of tapping task interface with lift-off.
B. Screenshot of sliding task interface.
(Figures 3 and 4 plot touch interactions for human subject 1 and dog subject 1 at distance 120 and size 300; both axes are in pixels.)

Unlike the tapping task, the sliding interface activates a constant tone through the duration of the touch. The dog must touch the blue, then enter the white path and follow it to the yellow without lifting off the screen or going outside
the drawn path. To allow the dog to understand its progress
in the task, the tone is constant while touching and changes
from lower frequency on blue, to a higher tone while on the
path, back to low on yellow. The tone and visual display
disappear upon completion.
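The feedback logic just described can be sketched as a small state machine over touch events. This is an illustrative model, not the Unity implementation; the event format and the tone frequencies are assumptions.

```python
# Sketch of the sliding task's audio feedback: a continuous tone
# whose frequency depends on the region under the dog's nose, with
# the trial ending unsuccessfully on lift-off or on leaving the path.

LOW, HIGH = 440, 880  # Hz; the actual frequencies are assumptions

def run_trial(events):
    """events: sequence of ('touch', region) or ('lift', None),
    with region in {'blue', 'path', 'yellow', 'outside'}.
    Returns (completed, tones_played)."""
    tones = []
    started = False
    for kind, region in events:
        if kind == 'lift':
            return (False, tones)       # lifting off ends the trial
        if region == 'blue':
            started = True
            tones.append(LOW)           # low tone on the blue target
        elif region == 'path' and started:
            tones.append(HIGH)          # higher tone while on the path
        elif region == 'yellow' and started:
            tones.append(LOW)           # back to low on yellow
            return (True, tones)        # tone and display stop here
        elif region == 'outside':
            return (False, tones)       # left the drawn path
    return (False, tones)
```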
Interface Hardware and Software Improvements
We originally created our software in Java, but found that
there was an inconsistent delay between touching the screen
and hearing a tone. We switched to using Unity3D, a video
game engine, because it is designed explicitly for good
audio-visual performance. We also developed a testing
protocol to ensure that we could get consistent timing in the
recorded data. We used a solenoid powered by a function
generator to generate one touch every second, and recorded
the touch events. We found less than 6 milliseconds of jitter
in the timing between 10 consecutive touches. Our new
software also includes the ability to adjust the initial height
of the interface (in our first study we worked exclusively
with medium to large dogs, but in this iteration we had a greater variety of dog sizes to accommodate), as in Figure 6.
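The timing check described above reduces to a simple computation over recorded event timestamps. The sketch below is illustrative; the sample timestamps in any call are assumptions, not our recorded data.

```python
# Given timestamps (in seconds) of touch events produced by a
# solenoid firing nominally once per second, measure the jitter as
# the spread between the longest and shortest inter-touch interval.

def jitter_ms(timestamps):
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return (max(intervals) - min(intervals)) * 1000.0
```

Running this over ten consecutive solenoid touches is how a result like "less than 6 milliseconds of jitter" can be verified.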
Figure 6: A smaller dog (Basset Hound) learning to use our
new lift-off tapping task interface. In this training session
photo only the blue selection is presented.
We also replaced our slower IR touch interface with a newer version [15]; this new IR touch surface includes new drivers and is easier to troubleshoot.
Dog Training Methods
Because we changed some elements of the original
touchscreen interface, we felt it important to use a new set
of dog participants, who were unfamiliar with the original
first-contact tapping task interface. The new dog
participants would be trained on either the lift-off selection
system, or the swipe/gesture system, but not both. For all
experiments, our research team only used positive
reinforcement (R+); we did not employ any type of
correction or punishment. We trained the dogs in 15-20
minute sessions with at least 30 minutes rest between each
training session. Below we describe the original training method and the revised method used in this study.
Original “first-contact” dog training method
In previous studies all of our dogs (set A) were pre-trained
in targeting (touching the handler’s hand or a specified
target with his nose). The dogs in our previous touchscreen
work were also trained with operant conditioning [20], specifically shaping, which is creating new behaviors by selectively reinforcing wanted behaviors the canine offers [16]. The dogs were classically conditioned using a food treat and a computer-generated tone [13]. We used the same tone to signify completion of the touchscreen task, so that as the dogs used the interface they would hear the tone as a reward.
The touchscreen interaction, while seemingly simple, is
quite complex for a canine. For training we subdivided the
tasks required. The dog was rewarded for first touching the
screen with his nose, then for seeking out the dot shape to
touch with his nose. We began by training the dog to touch
the blue dot (giving a reward only when the tone sounded)
and the yellow dot separately. Each of the dot selections is
marked by a different tone so the dog can understand his
progress through the task. Dogs normally use a multitude of senses to find objects, so part of the training was to reward the dog for using only sight to find these “virtual objects”.
Once the dog was trained to touch the dots separately, our initial trainer used the backchaining method to teach the dog to touch first the blue and then the yellow dot in sequence.
Starting the dog on a mat, the trainer first rewarded him for going to the screen and touching the yellow dot. When proficient with one dot, he was rewarded only after he touched the blue and then the yellow dot and returned to his mat.
Each step was added after the dog showed mastery of the
previous one. The trainer started with the last needed
behavior in the sequence so the dog understood when the
reward was given. Because the reward is only given at the
end of the task the dogs were motivated to complete the
behavior chain as correctly and as quickly as possible.
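The structure of backchaining can be expressed as a training schedule built from the end of the behavior chain. This is a toy sketch for illustration only; the behavior names are examples, and real training depends on the dog's demonstrated mastery at each stage.

```python
# Backchaining as a schedule: the dog first masters the final
# behavior, and each later stage prepends the preceding behavior,
# so the reward always arrives at the familiar end of the chain.

def backchain_stages(chain):
    """Return training stages, each a suffix of the full chain,
    starting from the last behavior alone."""
    return [chain[-k:] for k in range(1, len(chain) + 1)]

stages = backchain_stages(
    ['go to screen', 'touch blue', 'touch yellow', 'return to mat'])
# the first stage trains only the final behavior; the last stage is
# the complete chain, rewarded only at its end
```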
Current “lift-off” dog training method
Because we wanted to work with a new set of canine
participants (set B) we reached out to a new trainer with a
pool of dogs who had never used our interface. The new
trainer attempted to train the dogs on our new lift-off
tapping task interface. Before we discuss the new trainer’s techniques, we believe it is important to stress that the new lift-off interaction style seems more difficult in general for canines to learn with either training method.
Our new trainer employed different techniques in two major
areas. First, she used a style of training called luring.
Luring is when a trainer shows a reward to the dog and
allows the dog to follow the reward through the task the
trainer hopes to impart to the dog. In this case the trainer
showed the new dog participants a food treat and enticed
them to follow the treat to the screen where she wanted the
dogs to touch. The luring approach was less effective than
the shaping approach of the original study, especially with
our lift-off system because the dog sees the treat at the
screen and does not associate the completion of the task
with the reward but rather the location as a place where
rewards appear. Second, the new trainer did not build up the tapping task through backchaining. Building the task up from the beginning meant the dog learned to expect a reward during each step of the sequential task. This created moments in the middle of the tapping task sequence where the dog did not comprehend that he needed to finish the entire sequence to receive a reward. This is a major problem for our research, as we need the entire sequence of the tapping task (or any other sequential task) to be completed as quickly as possible by the dog participant in order to compare the dog interactions with human interactions.
Upon seeing the shortcomings of this new training method
we began training a separate dog on the lift-off system using
our original shaping methods. We found that this dog learned the system more quickly than the (set B) dog participants, but did not learn the lift-off task as quickly as the original (set A) dogs learned the first-contact task.
Training for sliding/gesture interactions
Our new trainer also used luring to train one dog to interact
with a sliding gesture based interface modeled after Accot
and Zhai’s description of the steering law [1, 2]. Luring is more
appropriate for beginning this training as the interface
reacts to a continuous touch and slide from the dog’s nose.
The need for the dog to follow the path of the interface
means that a trainer initially using luring can lead the dog
through the correct motions. It is important to quickly
transfer from luring to shaping so that the dog understands
that the task must be completed before it receives a reward.
Figure 7: Dog participants touch interactions, green lines are
successful sliding touches, and red lines are unsuccessful sliding touches.
In this ongoing research we attempted to train five dogs on
the lift-off tapping task interface. The new trainer using her
methods trained four dogs, and our original trainer using
our original training protocol trained one dog. None of the
dogs being trained on the lift-off tapping task became
proficient enough to begin actual testing. Some did learn
the task, but were not consistent. In general the dogs in our last study were able to learn first-contact much more quickly and, once trained, were quite proficient and consistent in their ability to activate the system. For this reason we believe that lift-off is not a good choice for canine touchscreen interactions.
We separately trained one dog participant on the new
sliding interface. Our testing protocol has not been
finalized or optimized for comparison with human
interactions, but the dog was able to learn the interaction
and successfully complete the sliding task. We can see from
Figure 7 the dog was often successful in completing the
task, sliding up from blue to yellow by staying on the
visible white path. The path is vertical so that the dog’s interaction remains visible and is not occluded by the muzzle. One interface observation is that the path should be at least 3.5 inches wide to allow the dog to see the path while its nose is touching the screen.
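A recorded nose trace like those in Figure 7 can be classified as a successful slide with a simple geometric check. This sketch is illustrative, not our analysis code; the coordinate convention (y increasing toward the top of the screen) and all thresholds are assumptions.

```python
# Classify a trace as a successful slide: every sample must stay
# within a vertical path of the given width, and the trace must run
# from the blue target region at the bottom to the yellow at the top.

def stayed_on_path(trace, path_center_x, path_width, start_y, end_y):
    """trace: list of (x, y) samples, y assumed to increase upward."""
    half = path_width / 2.0
    within = all(abs(x - path_center_x) <= half for x, _ in trace)
    reaches = trace[0][1] <= start_y and trace[-1][1] >= end_y
    return within and reaches
```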
LESSONS LEARNED: CANINE TOUCHSCREEN
INTERFACE DESIGN CONSIDERATIONS
The initial results of our lift-off study have generated a
preliminary foundation for touchscreen “best practices”:
• Infrared touchscreens backed by non-projection monitors currently seem to be the best hardware for canine interactions
• Targets for tapping should be 3.5” or larger 
• Target distances should be at least 3.5” apart 
• Sliding paths should also be 3.5” wide or larger
• Shaping is the most effective training method for
tapping task touchscreen interactions
• Luring can be effectively used for initial training
of sliding/gestural interactions, but should be
quickly exchanged for shaping.
• Backchaining seems to be the best method for
training the dog participants to complete the full
sequential task with motivation to move as fast as
possible through the task
• Lift-off touchscreen interactions are much more
difficult for dogs to comprehend
• First-contact touchscreen interactions are easier
for dogs to use and to learn
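The 3.5-inch guidelines above are physical sizes, so an interface must convert them to pixels for its particular display. The sketch below shows that conversion; the PPI value in the example is an assumption for illustration, not a measured property of our hardware.

```python
# Convert the physical minimum sizes in the guidelines to pixels
# for a display with a known pixel density (PPI).

MIN_INCHES = 3.5  # minimum target size, target spacing, and path width

def inches_to_px(inches, ppi):
    return round(inches * ppi)

# e.g. on an assumed ~34 PPI large-format display:
# inches_to_px(MIN_INCHES, 34) -> 119 px minimum target size
```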
DISCUSSION AND FUTURE WORK
Through the course of attempting to update our canine
touchscreen interface we learned quite a few new design
considerations and also which training methods work best.
One exciting area of future work we intend to pursue can be
extrapolated from Figure 7. Notice that even when the dog
participant did not correctly stay within the path, the dog’s
motion and touch gesture look the same as when it did stay
within the path. If viewed from the perspective of a touch
gesture these interactions would also be successful
activations. We might be able to train the dogs using the onscreen visuals as a guide, and later let them make the gesture anywhere on the screen. By removing the requirement to stay within the path, but creating more complex paths, we might be able to create a touch-based gesture control system dogs could activate relatively easily. It could be interesting to explore how complex these gestures could become.
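One minimal way to realize this idea is to classify the slide by its net direction rather than by path bounds, so the gesture is recognized anywhere on the screen. This sketch is a hedged illustration of that future-work direction, not a built system; the coordinate convention (y increasing upward) and the angular tolerance are assumptions.

```python
import math

# Treat the dog's slide as a position-independent gesture by
# comparing its net direction to a template direction.

def gesture_angle(trace):
    """Net direction of a trace in degrees (0 = +x, 90 = +y)."""
    (x0, y0), (x1, y1) = trace[0], trace[-1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0))

def matches_upward_swipe(trace, tolerance_deg=30):
    # An upward swipe points at roughly 90 degrees no matter where
    # on the screen it starts.
    return abs(gesture_angle(trace) - 90) <= tolerance_deg
```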
Finally, to showcase the usefulness of what we have learned
thus far, and using our canine touchscreen interface design
considerations, we created a first-contact tapping task
system that demonstrates directly a dog’s ability to call for
help (Figure 8). The system has three tapping targets and
once activated in sequence, sends a text message calling for
help (for now just to a private phone). One of our dog
participants is fully trained to activate the system when
someone says to him “go get help”.
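The demonstrator's trigger logic, three targets activated in sequence before an alert is sent, can be sketched as follows. The target names and the `send_alert` callback are hypothetical stand-ins; the actual system texts a private phone number.

```python
# Sketch of the emergency demonstrator's trigger: an alert fires
# only after the three tapping targets are touched in order.

EXPECTED = ['target1', 'target2', 'target3']  # hypothetical names

def watch_for_help(touches, send_alert):
    """touches: iterable of target names in the order touched.
    Calls send_alert once if the full sequence is completed."""
    progress = 0
    for name in touches:
        if name == EXPECTED[progress]:
            progress += 1
            if progress == len(EXPECTED):
                send_alert("Dog activated the help sequence")
                return True
        else:
            # restart, counting this touch if it begins a new attempt
            progress = 1 if name == EXPECTED[0] else 0
    return False
```

Requiring the full ordered sequence guards against accidental nose bumps triggering a false alarm.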
Figure 8: A first-contact emergency notification canine interface.
ACKNOWLEDGMENTS
The work presented here was completed under National Science Foundation (NSF) Grant IIS-1525937.
REFERENCES
[1] Accot, J. and Zhai, S. 2002. More than dotting the i’s --- foundations for crossing-based interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI ’02. 4 (2002), 73.
[2] Accot, J. and Zhai, S. 1999. Performance Evaluation of Input Devices in Trajectory-based Tasks: An Application of The Steering Law. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
[3] Amundin, M., Starkhammar, J., Evander, M., Almqvist, M., Lindström, K. and Persson, H.W. 2008. An echolocation visualization and interface system for dolphin research. The Journal of the Acoustical Society of America. 123, 2 (Feb. 2008), 1188–94.
[4] Canine Companions for Independence: www.cci.org.
[5] Delfour, F. and Marten, K. 2005. Inter-modal learning task in bottlenosed dolphins (Tursiops truncatus): a preliminary study showed that social factors might influence learning strategies. Acta Ethologica. 8, 1 (May 2005).
[6] Dey, A., Mankoff, J. and Mankoff, K. 2005. Supporting Interspecies Social Awareness: Using peripheral displays for distributed pack awareness. (2005), 253–258.
[7] Hu, F., Silver, D. and Trudel, A. 2007. Lonely Dog@Home. Proc. of the Conf. on Web Intelligence and Intelligent Agent Technology Workshops (2007), pp. 333–.
[8] Jackson, M., Zeagler, C. and Valentin, G. 2013. FIDO---facilitating interactions for dogs with occupations: wearable dog-activated interfaces. Proceedings of the International Symposium on Wearable Computers. (2013).
[9] Jackson, M.M., Valentin, G., Freil, L., Burkeen, L., Zeagler, C., Gilliland, S., Currier, B. and Starner, T. 2014. FIDO---Facilitating interactions for dogs with occupations: wearable communication interfaces for working dogs. Personal and Ubiquitous Computing.
[10] Lee, S.P., Cheok, A.D., James, T.K.S., Debra, G.P.L., Jie, C.W., Chuang, W. and Farbiz, F. 2005. A mobile pet wearable computer and mixed reality system for human–poultry interaction through the internet. Personal and Ubiquitous Computing. 10, 5 (Nov. 2005), 301–317.
[11] MacKenzie, I.S. 1992. Fitts’ Law as a Research and Design Tool in Human-Computer Interaction. Human–Computer Interaction. 7, 1 (1992), 91–139.
[12] Noz, F. and An, J. 2011. Cat Cat Revolution: An Interspecies Gaming Experience. (2011), 2661–2664.
[13] Pavlov, I.P. 1927. Conditioned Reflexes. Dover.
[14] Potter, R.L., Weldon, L.J. and Shneiderman, B. 1988. Improving the accuracy of touch screens: an experimental evaluation of three strategies. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI ’88. (1988), 27–32.
[15] PQ Labs G4 Multi Touch Screen Overlay 60 inch.
[16] Pryor, K. 2009. Reaching the Animal Mind. Scribner, Simon & Schuster, Inc.
[17] Robinson, C., Mancini, C., van der Linden, J., Guest, C. and Harris, R. 2014. Canine-Centered Interface Design: Supporting the Work of Diabetes Alert Dogs. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. (2014), 3757–3766.
[18] Robinson, C., Mancini, C., van der Linden, J., Guest, C., Swanson, L., Marsden, H., Valencia, J. and Aengenheister, B. 2015. Designing an emergency communication system for human and assistance dog partnerships. Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing - UbiComp ’15. (2015), 337–347.
[19] Savage-Rumbaugh, E.S. 1986. Ape Language: From Conditioned Response to Symbol. Columbia University Press.
[20] Skinner, B.F. 1938. The Behavior of Organisms: An Experimental Analysis. Appleton-Century.
[21] Soukoreff, R.W. and MacKenzie, I.S. 2004. Towards a standard for pointing device evaluation, perspectives on 27 years of Fitts’ law research in HCI. International Journal of Human-Computer Studies. 61, 6 (2004), 751–789.
[22] Wingrave, C.A. 2010. Early Explorations of CAT: Canine Amusement and Training. (2010), 2661–2669.
[23] Young, J.E., Young, N., Greenberg, S. and Sharlin, E. Feline Fun Park: A Distributed Tangible Interface for Pets and Owners. 1–4.
[24] Zeagler, C., Gilliland, S., Freil, L., Starner, T. and Jackson, M.M. 2014. Going to the Dogs: Towards an Interactive Touchscreen Interface for Working Dogs.