A Framework for Training Animals to Use Touchscreen
Devices for Discrimination Tasks
Training Animals to Use Touchscreen Devices
Jennifer M. Cunha, J.D. (corresponding author)
Parrot Kindergarten, Inc., jen@parrotkindergarten.com
Corinne C. Renguette, Ph.D.
Indiana University Purdue University Indianapolis, crenguet@iupui.edu
Recent technological advances have made touchscreen devices more widely available for animal-computer interaction, but
there is little consensus about methods for discrimination task training frameworks. Here we discuss the potential
enrichment and communicative uses for touchscreen-based interactions as well as benefits and limitations of automated
learning systems and social learning systems. We review the literature for discrimination training methods on touchscreen
devices for a variety of species and discuss what we recommend as an expanded framework for cross-species
discrimination training methods. This framework includes environment and device selection and setup, orientation and
habituation, touchscreen shaping skills, and discrimination training. When done ethically, human-assisted animal
interaction with technology can improve psychological wellbeing and cognitive enrichment through environmental choice
and control, enhance human-animal relationships, and provide data collection opportunities for research.
CCS CONCEPTS: Human-computer interaction; Animal-computer interaction; Social and behavioral science
Additional Keywords and Phrases: Touchscreens, Animal enrichment, Human-animal relationships, Discrimination
training
1 Introduction
From enrichment opportunities to updated and more humane methods of cognition research, recent advances in
touchscreen devices have opened new doors for animal-computer interactions. Touchscreen devices are easily accessible
and offer flexibility with use, making them ideal for research data collection as well as mental stimulation for captive
non-human animals (hereafter, “animals”) [1]. Here, we will outline some of the research conducted using touchscreen
devices in animal-computer interactions to begin framing best practices for using touchscreens for cross-species
discrimination training.
1.1 Uses for Touchscreen Devices in Animal Research
Touchscreen devices may improve animal welfare and our understanding of animals in a variety of ways. They are used
to improve animal psychological wellbeing with cognitive enrichment interactions such as with primates [1, 2, 3], dogs
and wolves [4, 5], cats [6], and pigs [7]. Touchscreen devices enable animals to signal requests with picture symbols to
* Corresponding author.
2
enhance choice and autonomy in captive environments, as pressing a symbol to ask for specific foods [8, 9, 10]. They are
also widely used in behavior and cognition research [11, 12]and indeed have improved the task performance of many
laboratory animals including rats [13] and pigeons [14].
1.2 Discrimination Tasks and Touchscreen Devices
Researchers have utilized technology for animal cognition testing for decades in order to advance our knowledge of
animal cognition and behavior. One option for using touchscreen devices with animals is discrimination learning, or
teaching animals to distinguish similarities and differences between multiple stimuli [14]. Discrimination learning can
initiate a variety of enrichment experiences and provide useful data collection opportunities.
Discrimination tasks are frequently used in behavior and cognition research [15]. In an article looking at humans, gorillas, and bears, Johnson-Ulrich and Vonk mention several studies showing that, in natural settings, animals engage in discrimination for quantity, especially as it relates to food (Baker, 2011; Banszegi, 2016; Evans, 2009; Hanus and Call, 2007; Perdue, 2012; Ward and Smuts, 2007, as cited in [15]) and to evaluate danger (Rivas-Blanco, Pohl, Dale, Heberlein, & Range, 2020, as cited in [15]). Comparative psychology has made use of discrimination tasks on touchscreen devices
to explore animals’ cognitive abilities, including discovering inferential reasoning by exclusion in birds such as pigeons
[16], cockatoos [17] and African greys [18], as well as canines [16], and a variety of primates including marmosets [19]
among others [20]. These results are also compared against human performance [16]. Dogs have been tested using
touchscreens on their ability to discriminate emotional expressions on human faces [21], and pigeons on whether they can recognize familiar human faces [16, 21]. Discrimination tasks include investigation of animals' abilities to understand
abstract concepts, often through matching-to-sample tests [22] and quantity discrimination skills [15, 23]. More recently,
researchers have used discrimination tasks on touchscreen devices to explore captive animals’ preferences in order to
improve captive welfare [8, 9, 10].
1.3 Ethics in Animal-Computer Interactions
While often beneficial for science, research is not always beneficial for the animals being tested [24, 25]. Mancini
proffered a manifesto [26] with an ethical framework for the use of technology in animal research in the field of Animal-
Computer Interaction (ACI). This framework views animal participants as stakeholder-users in every aspect of computer
interaction in which the use of technology directly benefits the specific animal-user. Within this framework, much
research into animal-centered technology design and interactions has emerged including tangible and physical, haptic
and wearable, olfactory, tracking technology, and screen technology [27]. A wide field of animal-centered technology
now exists across species, including farm animals such as chickens [28] and cows [29]; domestic animals such as dogs
[30] and cats [6, 31]; and undomesticated animals such as elephants [32] and deer [33]. These studies are designed
around the animal as a stakeholder and are in alignment with these ethical considerations.
Within this ethical framework, touchscreen research has also evolved from forced task engagement with punishment
and coercion (in rats for example) [34] to more voluntary, animal-centered cognition and behavior studies [35, 36]. As
the ethical considerations are evolving to require specific benefits to individual user-animals, research tasks have often
been turned into touchscreen-based “games” to collect data from animals in a manner that utilizes consent and autonomy
while providing enrichment. Such research has been collected using dogs and wolves [4], cockatoos [8], red-footed
tortoises [37], ring-tailed lemurs [38], bears [15], and great apes [15, 36], among others.
Humans and animals prefer free choice in their environment [36]. Research demonstrates that enhancing the degree of
control an animal has over their environment can lead to beneficial effects on their behavior [39, 40, 41]. Mancini’s
framework [26, 42] also includes animals’ perceived or actual consent to technology interactions so they have choice and
control in the research process. Perceived consent includes an animal’s voluntary interaction with technology [27]
without coercive food restrictions, for example, birds flying into the lab for tests as demonstrated by cockatoos [17] or
pigeons [35]; or primates interacting with computers within their enclosures as demonstrated by baboons [3] and white-
faced sakis [43, 44]. Actual consent is when the animal uses symbols to indicate their choice to participate (after training
and corroboration with symbol recognition), for example by selecting yes or no symbols to indicate that they do or do not
want to participate or by selecting symbols that represent certain activities [8, 45].
1.4 This Study
We scoured the literature for the availability of comprehensive methods on instruction for training animals to engage
with touchscreen devices for discrimination tasks. So far, we have found no comprehensive guides or agreed-upon best
practices for these methods. Here we will provide an overview of touchscreen training on discrimination tasks for
animals and recommend best practices based both on the literature and on our experiences. Finally, we will present an
expanded framework for training animals, particularly birds, on symbol discrimination, as this forms the basis for much
research and is also foundational for touchscreen-based enrichment and symbol communication training.
2 Methods and Materials: Review of the Literature
In alignment with Mancini’s [25, 26, 42] guiding principles of ethics, agency, and animals’ voluntary participation in
research in the field of animal-computer interaction, we surveyed the literature for examples of ethical touchscreen
discrimination training in a variety of species. To find relevant articles, we used Google Scholar and a university library
search engine to search multiple databases, catalogs, and the internet. Search terms included combinations of the
following:
specific animal terms: dogs, wolves, bears, rats, dolphins, gorillas, orangutans, lemurs, bears, apes,
bonobos, baboons, parrots, cockatoos, pigeons
more general terms: animals, primates, rodents, canines, birds
technology terms: touchscreen, tablet, computer
the desired learning method: discrimination learning, discrimination tasks
For example, we started with animals + tablet + touchscreen + discrimination and did iterations of the listed terms. When
we found a relevant article, we used snowballing to find additional relevant articles to examine (ex. sakis, red-footed
tortoises). We decided to include or exclude an article by first determining if the article included a description of the
training methods for discrimination learning or discrimination tasks and outcomes and then applying our criteria for
ethical standards and publication within the past ten years. We included articles older than ten years if they provide
foundational support [16, 18, 28, 39-41, 46] or if nothing more recent could be found that fit the other parameters [3, 13, 19, 21, 22, 28, 39, 47-49]. We excluded articles, for example, for not including discrimination training [50-52], for
ethical issues such as using punishment or food restriction [34, 53], or for not using a touchscreen device for the full
discrimination task [45].
One challenge in creating this literature review is that training methods often include a degree of punishment and
coercion, contrary to ethical principles. Positive punishment is the application of aversive stimuli in order to decrease a
behavior, for instance, turning on a bright light when a rodent selects an incorrect answer. Likewise, coercion is
compelling an animal to engage in a behavior to meet a basic need such as food or water after deprivation. Social
housing with conspecifics or socialization opportunities are also important for animal welfare [46, 35]. While there is no
consensus on what comprises agency and consent in animals [25, 42, 54], and as free choice is highly favored by animals
[36], in our review we selected studies that most closely conformed to the ethical principles of voluntary participation
[42] with low (or no) levels of punishment, weight control, or coercion, and that included access to socialization.
Depending on the species, in some cases there were no ideal studies, so we selected the most ethical study (studies on
rats and pigeons, for example, almost always use weight control [34, 35], so we selected one that was less severe [49]).
Additionally, we selected papers in which the animals’ discrimination task outcomes were reported. Many articles
exploring animal-computer interaction with high ethical standards have recently been published. For instance, detailed
touchscreen dog training methodologies have been developed [5, 55, 56]. Improvements in socialization access during
testing have been advanced, for example, in group-housed pigeons accessing cognitive tests within their aviary [35] or
animals voluntarily entering adjoining chambers for research tasks [57]. Although valuable to the advancement of the
field, these studies were not included, as we selected only research that included discrimination training as well as testing
results.
Finally, we weighted papers for the following species in terms of 1) low (or no) levels of weight control, 2) low (or no) levels of punishment, 3) voluntary participation, and 4) opportunities for socialization.
Table 1: Studies selected for discrimination training review purposes and the degree to which they align with ACI principles

Species | Study | Discrimination Task | Weight Control | Incorrect Response | Voluntary Initiation | Socialization
Baboons [3] | Fagot & Bonté, 2010 | Two alternative forced choice & match-to-sample | None | 3-s pause, green screen | Yes | Yes
Bears [15, 58] | Johnson-Ulrich & Vonk, 2018 | Quantity | No | Buzz tone & 750 ms pause | Yes | Yes
Bonobos [59] | Lameris, 2022 | Pictures for emotional Stroop task | None | 3500 ms pause | Yes | Yes
Cockatoos [8, 17] | Cunha and Rhoads, 2020 | Pexigrams for preference corroboration | No | None | Yes | Yes
Dogs/Wolves [4] | Wallis, 2017 | Images | No | 3-s pause, red screen, buzz sound | No | Yes
Gorillas [15] | Johnson-Ulrich & Vonk, 2018 | Quantity | None | Buzz sound & 750 ms pause | Yes | Yes
Horses [60] | Tomonaga et al., 2015 | Shapes with curvatures; vertical and horizontal lines | No | None | Yes | Yes
Marmosets [19] | Kurz, 2011 | Images | Food provided after tests | 3-s pause & red screen | Yes | Yes
Rats/Mice [49] | Horner, 2014 | Multiple match-to-sample | 85% free-feeding | 5-s pause and extinguished house light | No | Yes
Red Footed Tortoises [37] | Mueller-Paul, 2014 | Spatial tasks | None | 3-s pause | No | Yes
3 Discussion
As can be seen from Table 1, a variety of animals have been trained to interact with touchscreen devices for
discrimination tasks in many ways. In Section 3.1, we will discuss touchscreen training methods and protocols, device
selection, shaping protocols, and task outcomes for discrimination tasks. As many of the studies are not fully in
alignment with our articulated application of acceptable ethical principles, in Section 3.2 we provide a framework with
guidance and best practices for future work that provides agency and voluntary touchscreen interaction in research across
species.
3.1 Touchscreen Protocols in Research
With the advent of touchscreen computers and increasing cultural interest in animal-computer interaction, researchers,
facility workers and pet owners have begun developing methods to train their animals to interact with devices, e.g., [4,
8]. Yet they often use trial-and-error methods [61], repeating the same training with the same methods even when
undesired results occur. The trial-and-error method includes waiting for a correct result to reinforce the desired behavior;
however, these methods can cause inefficiency and frustration for both the humans and the animals [61]. Nevertheless,
literature on the use of touchscreen devices can prove informative, as common themes exist across species in
discrimination task training on touchscreen devices.
Here we present some touchscreen and discrimination task training methods, from environmental setup and device
specifications to habituation and training method steps. We include information across several varied species of animals.
In some cases, we did not find information for all the animals, so the table is marked N/A. This is meant to be a sampling
(rather than a comprehensive list) to help create a generalizable training framework. This section is an analysis of
training methods, environment setup, devices, and procedures drawn from Table 1 research in order to train animals to
interact with touchscreen devices for purposes of discrimination tasks.
3.1.1 Automated Training vs Social Learning.
Researchers use two primary methods to train animals to use technology devices: automated training systems (AUTs)
and social learning methods. AUTs [62] are designed to train animals to interact with technology independently of
human training through a sequential series of repetitive motor tasks (usually paired with food reinforcers for correct
responses) that increase in challenge as the animal passes certain pre-set benchmarks [3]. There are many benefits to
AUTs including: a lower human workload, stations located in the animals’ home environments, the potential for use as
environmental enrichment, and a standardized learning experience for arguably more reliable data regarding animals’
comparative behaviors and cognition [3].
Social learning methods (SLMs) are also operant learning systems, but they include human training and interaction
during the training process, such as encouragement and pointing [17], Model-Rival interactions [63], and positive reinforcement using a bridge and food rewards [4, 8]. While social learning methods may require a greater human
workload [3], the benefits may produce faster results [64], better meet ethical standards for animal-computer interactions,
and may be better suited to ensure each animal participant receives training at a pace that matches its individual needs.
Additionally, once training is complete, technology stations can be placed with automatic food dispensers in the animals’
home environment for continued data collection or environmental enrichment [1, 52].
Animals trained through social interaction require significantly fewer repetitions to learn [64] while animals trained
through AUTs can require thousands of trials to achieve similar results [3]. As novelty in environment, choice, and
control are important elements for animal psychological well-being [1, 2], the thousands of repetitions required for
animals to learn even the most basic tasks related to computer-interaction training [3] (particularly if animals are hungry or thirsty, as in traditional rodent and pigeon studies) seemingly violate the ethical standards related to animals'
uncoerced consent.
When the goal of the animal-computer interactions is psychological wellbeing and mental challenge, social learning
provides the benefit of a tailored learning experience to each animal as an individual, utilizing training techniques that
best meet the needs for the specific animal-user. Many non-human primates are initially trained by humans to use
computer technology [1]. After the basic skills have been taught, the animals have been observed to “play computer
games” in their home environments for up to an hour per day, and for years, as the games increase in challenge and
novelty, and continually provide mental enrichment [1].
Many options for training techniques exist but can be categorized into three main methods: automated learning, social
learning, and a hybrid of those two. Automated learning systems do not include the presence of a human during any
portion of the learning or testing, except as needed to transfer the animals to the testing area (as applicable). Social
learning includes a human that offers training or verbal praise during any part of the learning process. A hybrid method
includes the presence of a human with the animal to offer food reinforcers, but who is not otherwise involved in training
or testing. A list of these methods by species is presented in Table 2.
Table 2: Training Methods
Training Method
Automated Learning
Hybrid
Automated Learning
Social
Social
Hybrid
Social
Automated Learning
Hybrid
Automated Learning
Automated Learning
Social
3.1.2 Environmental Setup.
Whether using touchscreen devices for enrichment games, communication, or research, protocols for screen interactions
share common setup and training. These include environmental considerations such as time of day, lighting, sounds, and
conditioned reinforcers. What follows are some samples of touchscreen environmental practices found in the literature.
3.1.2.1 Time of Day
The time of day is an important consideration in discrimination task training. If an animal is too tired, they may not
engage in the training. For example, rats have a reverse-light cycle, so training at night may work better. When
determining time of day, it is also helpful to observe the animal’s feeding schedule as that is often when they are most
active. Below are the times of day the animals in the selected literature review were trained and tested.
Table 3: Time of day examples for touchscreen discrimination training

Animal | Time of Day
Baboons [3] | 24 h availability
Bears [15, 58] | N/A
Bonobos [59] | 12 p.m.-3 p.m.
Cockatoos [8, 17] | 8-11 a.m. or 4-7 p.m. (Cunha)
Dogs/Wolves [4] | N/A
Gorillas [15] | 7 a.m.-10 a.m.
Horses [60] | N/A
Marmosets [19] | 11 a.m.-3 p.m.
Orangutans [36] | N/A
Pigeons [14] | N/A
Rats/Mice [49] | 9 a.m.-5 p.m.; reverse day/night (lights are off from 7 a.m.-7 p.m.)
Red Footed Tortoises [37] | 9 a.m.-5 p.m.
3.1.2.2 Lighting
Lighting setup is an important consideration as it may impact the activity level of the animal and even serve as an aversive stimulus [47, 48]. Species have different reactions and preferences around lighting conditions; for example, rats and mice sleep and hide in the light, while most parrots go to sleep in the dark. Here is a survey of the lighting conditions in the
subject animal literature.
Table 4: Lighting condition examples for touchscreen discrimination training

Animal | Brightness
Baboons [3] | N/A
Bears [15, 58] | N/A
Bonobos [59] | N/A
Cockatoos [8, 17] | Standard indoor house lighting
Dogs/Wolves [4] | N/A
Gorillas [15] | N/A
Horses [60] | Outdoor
Marmosets [19] | Between 30 and 40 lux in cage
Orangutans [36] | N/A
Pigeons [14] | 2-watt house light in chamber
Rats/Mice [49] | 3-watt house light in chamber
Red Footed Tortoises [37] | 25-watt fluorescent tube lights
3.1.2.3 Sounds and conditioned reinforcers
Training environments should control external sources of sound, as sudden noises may startle animals, creating an
aversive association with the apparatus and decreasing training accuracy. Nevertheless, some secondary sounds (such as
a human’s voice offering encouragement when they are present while training) may be part of the environment, for
example with the selected dog literature [4].
A tone may be established as a conditioned reinforcer by presenting it initially at a low volume and pairing it with a food reinforcer. Here we survey the sounds and conditioned reinforcers
in the subject literature.
Table 5: Sounds and conditioned reinforcers for touchscreen discrimination training

Animal | Shaping | Correct Answer | Incorrect Answer
Baboons [3] | None | None | None
Bears [15, 58] | Melodic tone | Melodic tone | Buzzer
Bonobos [59] | N/A | Tone | None
Cockatoos [8, 17] | "Good" (verbal) | "Good" (verbal) | None
Dogs/Wolves [4] | Clicker for shaping; beep for food delivery | Tone | Buzzer
Gorillas [15] | Melodic tone | Melodic tone | Buzzer
Horses [60] | Beep & chime | Chime | Buzzer
Marmosets [19] | Acoustic tone | Melodic tone | None
Orangutans [36] | Auditory bridge | Auditory bridge | None
Pigeons [14] | Acoustic signal | Acoustic signal | None
Rats/Mice [49] | Tone | Tone | None
Red Footed Tortoises [37] | None | None | None
3.1.3 Discussion of devices, hardware, software, and feeding mechanisms.
Perhaps the single most important consideration for an animal’s successful performance in discrimination tasks is
whether the device accommodates the animal’s particular biology and physiology. While the devices used for each
species can vary widely, there are many commonalities between them, such as the relative size of the device to the size
of the animal, ensuring safety with an enclosure, screens that react to animal interactions, software that has been adapted
for the animal’s biology, and a reward system including a feeding mechanism that assists with the animal’s successful
performance on tasks.
When selecting a device and enclosure, particular care should be placed on the size of the screen in relation to the
animal’s body (see [51, 65] for example), the safety provided by the enclosure to prevent harm, and the size of the pixel
stimuli on the screen. Hardware and software may be developed internally or available open source/commercially. The
most important consideration is adapting the setup to the species' physiology and biology [50, 51, 65] and Umwelt, the sensory world as it is experienced by the animal.
3.1.3.1 Screen size
Screen size is an important factor in successful training. A screen that is too small will require higher levels of motor
skill development as the target stimulus size may be too small, and a screen that is too large will require more ambulatory movement to navigate between discrimination stimuli in tasks. A growing body of research has developed
regarding touchscreen interaction development and target stimuli size and spacing, for example in dogs [5] and rodents
[34]. Ideally, an animal should be able to center in front of the screen and press one option or another from a stationary
position. Table 6 presents the pixel and screen sizes used for these specific animals during discrimination training.
Table 6: Screen sizes and pixel sizes for touchscreen discrimination training

Animal | Symbol Pixel Size | Screen Size
Baboons [3] | 300x300 | 29x29 cm
Bonobos [59] | N/A | 56 cm
Bears [15, 58] | 2268x2419 | 48 cm
Dogs/Wolves [4] | 200-300 | 16x27 cm
Cockatoos [8, 17] | 144x144 | 26 cm
Gorillas [15] | 2268x2419 | 48 cm
Horses [60] | 300x300 initially, then reduced | 107 cm
Marmosets [19] | 200x200 | 27x15 cm
Orangutans [36] | N/A | 53 cm
Pigeons [14] | 72x72 | 38 cm
Rats [49] | 208x208 | 38 cm
Red Footed Tortoises [37] | 94x94 | 38 cm
3.1.3.2 Hardware and screen sensitivity
In addition to the issue of the optimal screen size, screens must have an appropriate level of sensitivity so they will
register the animals' responses. Hence, the more sensitive the screen (whether an infrared touch panel or one that measures capacitive touch), the less training will be required for the motor skills needed to engage in discrimination tasks. This
review includes hardware acquired through commercial laboratory suppliers, built internally, and commercially available
products.
Just as hardware needs to be tailored to specific species, software also can be adapted for animals [30, 42]. Important
considerations in software include sensitivity to input (touch) and the spacing and size of stimuli. Like the hardware, the software below includes internally developed, open-source, and commercially available apps.
Table 7: Hardware and software samples for touchscreen discrimination training

Animal | Apparati | Hardware/Software Name
Baboons [3] | 19-in. LCD touch monitor | Developed using E-Prime language
Bears [15, 58] | Panasonic Toughbook laptop computer and a 19-inch Vartech Armorall capacitive touchscreen monitor welded to the front of a rolling computer cart | Visual Basic for Windows
Bonobos [59] | 22" ViewSonic TD2220 touch-sensitive monitor (1920 x 1080 resolution) connected to the researcher's (DWL) computer on an adjustable cart | OpenSesame
Cockatoos [8, 17] | Samsung Galaxy Tab A, children's plastic case | CommBoard by Shmootz, Ltd.
Dogs/Wolves [4] | 15" laptop, TFT computer monitor mounted behind infrared touchframe | Vienna Comparative Cognition Technology
Gorillas [15] | Panasonic Toughbook CF19 laptop computer and 19-inch VarTech Armorall capacitive touchscreen monitor welded inside a rolling LCD panel cart encased with top and sides | Visual Basic for Windows
Horses [60] | 42-inch LCD touchscreen monitor controlled by the Surface Acoustic Wave system on a portable stand | N/A
Marmosets [19] | Mini laptop PC with touch-sensitive screen (Model SC, Japan), acrylic case | Microsoft DirectX
Orangutans [36] | HP Desktop 260-A129 PC and a 21" color PC monitor with a Keytec Magic Touch touchscreen | Custom touchscreen-delivered program written in Java
Pigeons [14] | 38 cm thin film transistor display 5 cm behind the pecking key (key with a switch that registers a single peck) | N/A
Rats [49] | Flat screen monitor, Craft Data Ltd. | ELO Touchsystems, Displaze
Red Footed Tortoises [37] | 15-inch IR "CarrollTouch" touchframe (Model D87587-001, 15 in., without frame) | Vienna Comparative Cognition Technology Cognition Lab 1.9
3.1.3.3 Food reward device
In automated learning systems [3, 14, 49], feeding mechanisms affix to the device itself, and in hybrid and social learning systems [8, 15, 36] the food reward is delivered by the trainer. The location can assist with increasing animals' task success. For mice and rats, the feeder device is often located in the back of the chamber so they must run to the back [49]. They then have space to observe the new stimuli on the screen as they approach. Researchers with dogs relocated the device from the front to the side, as the dogs were too distracted by the initial location and did not pay attention to the screen [4]. In hybrid and social learning systems [15, 59] the food is systematically hand-delivered, generally over the touchscreen and centered to avoid side bias development. Reinforcers should also be adapted to the devices. For instance, [4] noted that soft dog food got stuck in the feeder device.
Table 8: Feeding mechanisms or reinforcer devices for touchscreen discrimination training

Animal | Food Reward Device
Baboons [3] | Homemade food dispenser
Bears [15, 58] | Trainer/Human
Bonobos [59] | Manual via PVC tube
Cockatoos [8, 17] | Trainer/Human
Dogs/Wolves [4] | 30-treat capacity food dispenser, Messerli Research Treat & Train from PetSafe Pet Tutor
Gorillas [15] | Trainer/Human
Horses [60] | Tray below screen
Marmosets [19] | Unavailable
Orangutans [36] | Trainer/Human
Pigeons [21] | Grain feeder
Rats [49] | Pellet receptacle attached to a 45 mg pellet dispenser
Red Footed Tortoises [37] | Polyoxymethylene plate (diameter 47 cm) with 16 small, indented placeholders
3.1.4 Introducing animals to touchscreen learning system.
Based on the literature, introducing animals to touchscreen learning systems often begins with identifying food
reinforcers, delivering them under situations in which the animal is most motivated to learn, and optimizing the context
for successful learning. Here we will explore the food reinforcers used in the studies we analyzed, the kinds of weight
control used in studies, animal touchscreen contact style, session frequency, and methods to habituate animals to the
screen.
3.1.4.1 Weight control and reinforcers
Most (but not all) automated learning systems have included some form of weight control [14, 49], and both automated and social learning systems surveyed here use food reinforcers when the animal answers correctly. Here we will review the kinds of reinforcers and weight control for each species surveyed. Table 1 depicts weight control practices within our ethical considerations in the literature review. (Note that the current authors do not promote or endorse weight control or food restriction. These practices were repeatedly found in the literature, but we align our practices with Mancini's [42] ethical practices and find that animals perform well without restricting food or controlling weight.) Reinforcers can vary from animal to animal, from day to day, and from minute to minute, and can be chosen by the animal [59]. Some systems use multiple reinforcers [8, 15]. (See Step 2 in the Recommended Best Practices section for more information on reinforcers.)

Table 9: Food reinforcer practices for touchscreen discrimination training

Animal | Reinforcer
Baboons [3] | Grains of dry wheat
Bears [15, 58] | Fruits, vegetables, honey roasted peanuts, banana pellets, dried banana chips, yoghurt covered raisins and wafer cookies
Bonobos [59] | DK Zoological Trainings Biscuit
Cockatoos [8, 17] | Sunflower seeds, almonds, pine nuts
Dogs/Wolves [4] | Liver sausage in a tube, cream cheese, peanut butter, normal dry or semi-dry food, cubed hard cheese, hot dogs
Gorillas [15] | Regular zoo diet of fruits, vegetables, and Mazuri Primate Leafeater Chow
Horses [60] | Carrots
Marmosets [19] | Gum powder, soybean powder, marshmallow, sponge cake, egg cookie, nuts, sweet potato, cheese
Orangutans [36] | Preferred food reward (i.e., blueberry)
Pigeons [14] | 0.2 g grain
Rats/Mice [49] | Cheerios, yogurt, meat baby food, Formula P, Research Diets
Red Footed Tortoises [37] | Mushrooms, strawberries, and sweet corn among other preferred fruits and vegetables
3.1.4.2 Contact style
When determining a ‘touch’ style for an animal to make contact with the screen, it is important to take the animal’s body
type and movement style into consideration. Birds, mice, rats, and dogs use a nose-poke or beak/tongue touch on the
screen, and marmosets are taught to touch with their fingers. Some rats and dogs may also prefer touching with their
paws. In the table below, we list contact style in the articles selected.
Table 10: Touchscreen contact styles for touchscreen discrimination training

Animal | Contact Style
Baboons [3] | Hand
Bears [15, 58] | Hand and nose
Bonobos [59] | Hand
Cockatoos [8, 17] | Tongue/beak touch
Dogs/Wolves [4] | Nose, paw
Gorillas [15] | Hand
Horses [60] | Nose
Marmosets [19] | Hand
Orangutans [36] | Wooden dowel
Pigeons [14] | Peck at pecking key (key with a switch that registers a single peck)
Rats/Mice [49] | Nose poke, paw
Red Footed Tortoises [37] | Touch with nose
3.1.4.3 Session duration and frequency
Training session length and frequency can vary and, as with many of these areas, the animal’s preferences should be
considered. The table below lists the training schedules in the selected articles, although they may not reflect animals' duration capacity or preference, as training frequency may reflect trainers' and experimenters' availability.
Table 11: Session duration and frequency samples for touchscreen discrimination training

Animal | Session Duration/Frequency
Baboons [3] | 24 h availability
Bears [15, 58] | 3x/week, 8-10 sessions/day
Bonobos [59] | 4-5x/week; varied by voluntary engagement
Cockatoos [8, 17] | 5x/week, 12 min/session
Dogs/Wolves [4] | 1x/week, 30 min/session
Gorillas [15] | 3x/week, 4-5 sessions/day
Horses [60] | 4-7 sessions/day
Marmosets [19] | 5x/week, 60-90 min/day
Orangutans [36] | 3-4x/week, 1-2 sessions/day
Pigeons [14] | 5x/week, 40 trials
Rats/Mice [49] | 30 min/session
Red Footed Tortoises [37] | 5x/week, up to 30 min/session
3.1.4.4 Habituation to the device
Habituating the animal to the device, or repeating the exposure so it becomes a regular part of their routine, is important so the setting remains highly enriching and does not induce fear responses. Articles in which the device was placed in the animals' social housing area did not describe separate habituation procedures; placement in the housing area indicates the device was already part of the animal's regular routine. More about habituation is included in the Recommended Best Practices (Section 3.2).
Table 12: Samples of habituation methods for touchscreen discrimination training

Animal | Habituation Method
Baboons [3] | None needed; in housing area
Bears [15, 58] | None needed; in housing area
Bonobos [59] | None needed; in housing area
Cockatoos [8, 17] | 1 session of desensitization training and introduction of touchscreen using food reinforcers
Dogs/Wolves [4] | Luring first to touchscreen then to treat dispenser; adapting to feeder device sound
Gorillas [15] | None needed; in housing area
Horses [60] | None needed; in housing area
Marmosets [19] | None needed; in housing area
Orangutans [36] | None needed; in housing area
Pigeons [14] | Grain dispenser beneath tablet illuminated for 3 seconds when grain is delivered; several sessions in testing enclosure
Rats/Mice [49] | 15-minute habituation in testing enclosure with pellet dispenser
Red Footed Tortoises [37] | 30-minute habituation in testing enclosure with food in dispenser until all food eaten, for 3 sessions
3.1.5 Discrimination training.
This section demonstrates phases of the discrimination portion of training. Before discrimination training, shaping breaks the more complex task into smaller tasks and conditions them; this can then be followed by more complex discrimination tasks and testing, additional shaping, further discrimination tasks, and re-testing. The process is somewhat recursive in that shaping and discrimination form an iterative cycle and testing confirms success.
Table 13: Shaping for touchscreen discrimination training

Animal | Shaping Training
Baboons [3] | Initiation: Presenting hand through arm-port resulted in showing texture on screen and food reward. Step 1: Presenting hand with sequentially 1-s, 15-s, and 40-s delay or touching screen resulted in food reward. Step 2: Presenting hand displayed 300x300 stimulus for 15 s; touching resulted in food reward.
Bears [15, 58] | Unpublished
Bonobos [59] | Initiation: N/A. Step 1: Shaped to touch small targets on screen. Step 2: N/A.
Cockatoos [8, 17] | Initiation: Accept an offered treat. Step 1: Touch anywhere on the screen for a low-value reinforcer. Step 2: Touch a target dot on the screen for a high-value reinforcer.
Dogs/Wolves [4] | Initiation: Lured to position in front of touchscreen with food reward. Step 1: Rewarded for touching screen. Step 2: Rewarded for touching circle on screen.
Gorillas [15] | Unpublished
Horses [60] | Initiation: Nose-touch with successive approximation to physical circle. Step 1: Nose-touch with successive approximation to touchscreen circle with physical angle bar guides. Step 2: Nose-touch and pixel size adjustment based on horse behavior.
Marmosets [19] | Initiation: N/A. Step 1: Touched a mashed banana on the screen atop a colorful square stimulus with visual and auditory reinforcers. Step 2: Received a reward for touching the square without the mashed banana.
Orangutans [36] | N/A
Pigeons [14] | Initiation: Voluntarily entered test area. Step 1: Trained to touch a disk image centered on the screen for a reward. Step 2: Trained to touch a disk image in different locations on the screen for a reward.
Rats/Mice [49] | Initiation: Nose touch to initiate contact before each trial in pre-training. Step 1: Trained to collect treats in a 30-second interval schedule. Step 2: Required to respond to a single stimulus on the screen to receive food reward.
Red Footed Tortoises [37] | Initiation: Presentation of strawberry on screen combined with manually shaping approximations toward screen. Step 1: Manual training to touch stimulus on screen. Step 2: Manual training to multiple touches of different stimuli on screen.
3.1.5.1 Targeting and stimuli discrimination training
Targeting (or target training) is the practice of training an animal to “touch” (or indicate by looking at) a desired target,
such as a toy, a dowel, or a dot on a screen. Targeting is a precursor to discrimination training [66]. Table 14 lists examples of discrimination training, including highlights from the process, what the animal experiences with correct vs. incorrect responses, and what performance success looks like.
Table 14: General discrimination training examples

Animal | Test Initiation | Stimuli Process | Correct Response | Incorrect Response | Performance Success
Baboons [3] | Hand presentation | Presentation of S+ and S- images on touchscreen with random left-right variations | Food reward | 3-s pause | 80% accuracy in 200 consecutive trials
Bonobos [59] | N/A | Selecting the box outline colored red (S+) vs blue (S-) | S+ resulted in a food reward; additional reward every 5th correct response and additional rewards for completing trials | S- resulted in 3500 ms pause | 80% accuracy was achieved within 1-2 24-trial sessions
Bears [15, 58] | N/A | Two arrays of differing dot quantities (S+ and S-), larger quantity assigned S+ | Food reward | Buzz sound & 750 ms pause | 5 out of 6 consecutive trials out of a session of 20; 80% accuracy within 4 consecutive 20-trial sessions
Cockatoos [8, 17] | Accepts a food reward | Two stimuli conditioned one at a time with 4-7 repetitions followed by discrimination tasks | Audible bridge; food reward provided | 2-s pause; trial repeated | 70% accuracy or better
Dogs/Wolves [4] | Licks paste off screen; feeding device offers reward | Presentation of S+ and S- images on touchscreen with random left-right variations | Tone bridge; food reward | Buzz sound; 3-s pause | 66% accuracy in 30 trials
Gorillas [15] | N/A | Two arrays of differing dot quantities (S+ and S-), larger quantity assigned S+ | Food reward | Buzz sound & 750 ms pause | 80% accuracy within 30 sessions of 80 trials
Horses [60] | N/A | Two black circles presented on the screen with S+ and S- size-dependent | Food reward | None | Maintaining accuracy rate above 70% across 12 sessions of 12 trials
Marmosets [19] | Red square presented at the center of the screen until touched | S+ and S- pictures randomly distributed on screen in a two-choice forced task | Food reward, 3-s interval | 5-s interval, repeat and recue | 90% in 100 trials
Orangutans [36] | Approaching testing apparatus | Preference for free or forced choice analyzed via S+ (forced choice) and S- (free choice) | S+ resulted in an immediate food reward | S- resulted in several additional free-choice selection keys and then the same food reward | Selections analyzed to determine preference for forced or free choice
Pigeons [14] | Pecking at screen for a food reward | Multiple matching: 10 on screen, 5 S+, 5 S-; peck to select stimulus image. Go/no go: one stimulus behind pecking key (key with a switch that registers a single peck) | Stimulus disappears when pecked; reinforcer given after all 5 are gone; food reward provided | Nothing | Completing 50 40-trial training sessions
Rats/Mice [49] | Nose poke to screen for reward | Two stimuli presented an equal number of times per session, alternating sides; a given stimulus does not appear on the same side more than 3x consecutively | Tone, light, pellet | 5-s pause + repeat until correct | Percent of correct choices per session of 100 trials
Red Footed Tortoises [37] | Touches triangle to begin test | Two blue circles presented on the screen with S+ and S- location-dependent and counter-balanced | Stimuli disappear and reward presented | 3-s pause with a blank screen; trial repeated | 10 completed 20-block trials above chance
3.1.5.2 Stop criteria
In each experiment, established criteria determined the point at which the session or experiment ended for a given
portion of the training or study. In some instances (marked with *) the engagement was voluntary or some of the subjects
did not advance.
Table 15: Examples of criteria for stopping the training sessions

Animal | Discrimination Test Stop Criteria
Baboons [3] | N/A
Bonobos [59] | 24 trials or animal disengagement*
Bears [15, 58] | 80 trials or animal disengagement*
Cockatoos [8, 17] | N/A
Dogs/Wolves [4] | 30 trials
Gorillas [15] | 80 trials or animal disengagement*
Horses [60] | 12 or 24 trials per session or animal disengagement*
Marmosets [19] | 60 minutes; failure to meet next level criteria within 5000 trials
Orangutans [36] | 64 trials or animal disengagement for 10 minutes
Pigeons [14] | 40 trials
Rats/Mice [49] | 100 trials or 60 minutes
Red Footed Tortoises [37] | 20 trials or animal disengagement*
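To make the stop criteria in Table 15 concrete, the following minimal sketch (in Python, with illustrative default values rather than figures from any particular study) ends a session either when a fixed trial cap is reached or when the animal has stopped responding for a disengagement window, preserving whatever data were already collected.

import time

def run_session(run_trial, max_trials=24, disengagement_timeout_s=600):
    """Run trials until a stop criterion is met and report why the session ended.

    run_trial() is a caller-supplied placeholder: it should present one trial
    within its own response window and return True if the animal responded,
    False otherwise. The 24-trial cap and 10-minute disengagement window are
    illustrative defaults, not values taken from any reviewed study.
    """
    completed = 0
    last_response = time.monotonic()
    while completed < max_trials:
        if run_trial():
            completed += 1
            last_response = time.monotonic()
        elif time.monotonic() - last_response > disengagement_timeout_s:
            # The animal has walked away; stop without discarding completed trials.
            return completed, "disengaged"
    return completed, "max_trials"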
3.2 Recommended Best Practices
The lead author is a researcher and professional parrot trainer with ten years of experience specializing in symbolic
representation and concept training development. She co-founded an online school for parrot caregivers and has
published research in the area of tablet-based parrot symbol communication, tablet-based parrot enrichment games and
analog symbol-based communication, and she is a frequent international lecturer. Her choice and consent symbolic
training methods have been translated and distributed globally. The co-author has been training with the lead author and
is a university researcher and associate professor.
Frameworks for touchscreen interactions have been developed with other species utilizing positive reinforcement and
force-free training. For instance, [65] drew from Don Norman's model of "How Humans Do Things" in their work with canines. The seven-stage model of action includes two phases: an execution phase and an evaluation phase. Within those
phases are steps, as follows:
Execution: 1) have a goal; 2) formulate a plan of action; 3) specify each stage in sequence; and 4)
perform the action sequence.
Evaluation: 5) perceive consequential state of the world; 6) interpret that perception; and 7)
compare to desired goal.
According to Freil et al. [65], the framework can be used to evaluate how well a system works at each stage and where it may fall short in a cycle. For example, in their work on touchscreen interfaces [51, 65], training with back-chaining on a multi-step canine touchscreen slide behavior resulted in dogs performing the sequence without distraction, while a luring method caused dogs to show distraction between steps and a lack of fluid execution of the task. Using the framework, the breakdown arguably occurred between stages 3-4, in which the animal was unclear what expectation existed for the stage of the sequence and when the action sequence concluded.
Importantly, [65] classifies execution errors into two categories: slips and mistakes. Animal interaction with screens includes motor skill development as well as planning and execution skills. When a task is not properly executed due to motor-control or mechanical error, it is a slip: for example, stimuli are too close together and a parrot presses one accidentally while reaching for the target, or a dog pulls a rope to trigger an electronic device but does not pull hard enough. Mistakes occur when there is a failure in the plan and outcome, such as an error in a discrimination task due to an incorrect goal or bad plan. Such subcategories are important for animal-computer interactions, as a slip signals a need for environmental adjustment or additional motor control training, while a mistake may signal the animal's misunderstanding or lack of cognitive ability to perform the task.
Within the context of discrimination tasks, researchers designing systems can use the framework to locate points of breakdown underlying animal error rates. Touchscreen devices can lower the level of slips by spacing stimuli for optimal selection without accidental touches to other stimuli, reducing errors at stages 4 and 5. Leveled training, in which the animal advances only after demonstrating task ability, reduces errors at the planning levels, stages 1 and 2. Lastly, a signal such as a tone or a light that is classically conditioned with food reinforcers to indicate accurate completion of a task [65], or a pause and corrective trial for an error, informs the animal in the evaluation phase, leading to a new plan in task execution.
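As an illustration of such leveled training, the following minimal sketch (in Python; the 80% criterion and 20-trial window are placeholder values to be tuned per species, not figures from the reviewed studies) advances the animal to the next level only once a rolling accuracy criterion is met.

from collections import deque

class LeveledTrainer:
    """Advance the training level only after a rolling accuracy criterion is met."""

    def __init__(self, criterion=0.8, window=20):
        self.criterion = criterion
        self.window = window
        self.level = 1
        self.recent = deque(maxlen=window)   # rolling record of correct/incorrect

    def record_trial(self, correct):
        self.recent.append(bool(correct))
        # Evaluate only once a full window of trials has accumulated.
        if len(self.recent) == self.window and sum(self.recent) / self.window >= self.criterion:
            self.level += 1
            self.recent.clear()              # start fresh at the new, harder level
        return self.level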
Based on our own experiences and what we have found in the literature, while individualistic preferences within these
animals will certainly influence each approach to discrimination task training, we have developed a recommended list of
steps for instituting training:
1. Set up the environment for ideal species learning conditions
2. Determine reinforcers and habituate the animal to the training area and apparatus
3. Shape behavior using positive reinforcement to initiate contact with the touchscreen device
4. Teach stimuli discrimination tasks with positive reinforcement
3.2.1 Step 1: Set Up the Environment for Ideal Species Learning Conditions & Choice.
The external environment plays a large role in determining an animal’s success in learning tasks. Under conditions
sensitive to species-specific circadian rhythms and dietary preferences, most animals may choose to engage in training
sessions voluntarily, without the need for weight control or force.
Observe the animals’ most behaviorally active times of the day (e.g., mornings and evenings for birds, lights-off
periods for rodents), and schedule training sessions during those periods to ensure the animal is alert. Likewise, engage
in training with reinforcers between mealtimes to permit the animal to have free access to a base diet while choosing to
train for ‘bonus’ food rewards.
Select apparati based on the animal’s ability to touch stimuli from a stationary position centered in front of the screen
and with stimulus pixel sizes and distribution that have been ideally tested for optimal sizing for the species (see Table
6). Touchscreens should be very responsive to animal touches to reduce frustration and shorten motor skill training
periods. Lighting can also be adjusted to the species’ natural preferences, as non-preferred lighting can be an
unintentional aversive stimulus.
When apparati are offered in group housing, it is recommended to have multiple stations to lower incidence of
aggression and resource guarding. Alternatively, whether solo or group housed, animals may be invited to partake in research by voluntarily entering testing spaces that also allow them to voluntarily leave. Trials can be sequential, without grouping into sessions, such that animals test voluntarily and the data is not lost if they walk away (e.g., [36]).
Under these conditions, non-human primates voluntarily engage in “video games” for over an hour every day [1] and
parrots readily train 40 minutes per day without a need for weight control and with the option to leave the learning
sessions at any time [8].
In an environment that is sensitive to species-specific needs and preferences, touchscreen training and discrimination
tasks may provide excellent enrichment, in addition to providing research data.
3.2.2 Step 2: Determine Reinforcers and Habituate the Animal to the Training Area and Apparatus
Just as the physical environment may be arranged for ideal learning conditions, the selection of food reinforcers, an
optimal schedule of reinforcement, and habituating the animal to the environment and apparatus can create a low-
frustration, high-enrichment setting for task learning without a need for coercion.
Ideally, the reinforcer is tailored to the animal’s preference. This may be done by offering a selection of foods and
observing the order in which the animal eats them. A one- or two-value food reinforcement system may be employed, depending on the learning system. With parrots, under two-value reinforcement systems, a high-value and a low-value food reinforcer are used to teach the animal. When the animal engages in the target behavior, a bridge (generally a sound) alerts them that they have engaged in the target behavior. Right afterward, a high-value reinforcer is provided. When the animal attempts the target behavior without success, the stimulus is withdrawn for a 3-second pause, and they are offered the lower-value reinforcer. In this way, the animal has a high rate of reinforcement and motivation and, in time, will often voluntarily train for long durations.
In one-value reinforcement systems, the animal is rewarded for target behavior with a tone followed by a food
reinforcer. After two unsuccessful attempts, some animals, such as parrots, may benefit by being offered a simpler
behavior (such as touching a large dot on the screen). Upon successful completion of the simpler target behavior, the
animal is rewarded with a tone and a food reinforcer, and the target task resumes. This also reduces frustration during the
learning process by ensuring a high rate of reinforcement with opportunities for success after unsuccessful attempts at a
target behavior.
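The following sketch illustrates this reinforcement logic in Python; it merges the two-value reinforcer delivery described above with the one-value fallback behavior, and the callbacks (play_bridge, deliver, pause, present_fallback) are hypothetical stand-ins for whatever sound output, feeder or trainer delivery, and screen controls are actually in use.

def respond_to_attempt(correct, consecutive_misses,
                       play_bridge, deliver, pause, present_fallback):
    """Decide what follows one training attempt.

    Returns the updated count of consecutive unsuccessful attempts.
    The callbacks are hypothetical placeholders, not a real API.
    """
    if correct:
        play_bridge()             # bridge sound marks the target behavior
        deliver("high_value")     # high-value reinforcer follows immediately
        return 0                  # reset the miss counter
    pause(3.0)                    # withdraw the stimuli for a 3-second pause
    deliver("low_value")          # two-value scheme: keep reinforcement rate high
    misses = consecutive_misses + 1
    if misses >= 2:
        present_fallback()        # one-value scheme: offer a simpler behavior
        misses = 0
    return misses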
Habituation methods may include choice and voluntary engagement, offering high-value food reinforcers in the area of the apparatus for successive days until the animal demonstrates non-vigilant comfort behavior in the presence of the apparatus, or manual shaping toward the apparatus with food rewards. When the animal is comfortable with the apparatus, behavior shaping for touchscreen training may begin. At the start of each subsequent training session (shaping and discrimination tasks), the animal is given a "free treat" to build behavioral momentum and encourage their participation.
3.2.3 Step 3: Shape Behavior Using Positive Reinforcement to Initiate Capacitive Contact with the
Touchscreen Device
Using positive reinforcement, animals may be taught to engage with the touchscreen device through a series of steps. First, under both social and automated learning systems, the animal is given a food reward for touching anywhere on the touchscreen. In discrimination tasks, animals may also be lured to the screen by adhering soft foods such as peanut butter or cheese (as appropriate to the species) to the screen (note, however, that luring has been less effective for complex, non-discrimination multi-step shaping behaviors such as line drawing, e.g., [51]). When the animal touches the touchscreen in a way the screen registers, the tone sounds, and they are given an additional food reinforcer. In this way, the animal will naturally orient toward touchscreen interactions.
Second, to shape target behavior to smaller-shaped stimuli on the screen that will lead to discrimination tasks, a large
circle is presented on the screen. When the animal touches the circle, they are rewarded with a tone and food reinforcer.
After demonstrating successful target behavior (as defined by species-appropriate accuracy) over several sessions, the
circle systematically narrows to the size of the discrimination stimuli, and then may be presented at various places on the
screen. Once the animal demonstrates fluency at target-touching the stimulus, they are ready for discrimination training.
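A minimal sketch of this shrinking-target rule, assuming a Python training script and placeholder values for the accuracy criterion, shrink factor, and final stimulus diameter, might look like the following.

def next_circle_diameter(session_accuracy, current_diameter,
                         criterion=0.8, shrink_factor=0.8, final_diameter=100):
    """Shrink the shaping circle toward the discrimination-stimulus size.

    Called once per session with that session's accuracy (0-1). The criterion,
    shrink factor, and final diameter (in pixels) are illustrative placeholders
    that would be tuned to the species and screen.
    """
    if session_accuracy < criterion:
        return current_diameter                       # hold the size until accuracy recovers
    return max(final_diameter, int(current_diameter * shrink_factor))

# Example: a circle starting at 600 px reaches the 100 px stimulus size
# after roughly ten sessions that each meet the accuracy criterion.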
3.2.4 Step 4: Teach Stimuli Discrimination Tasks with Positive Reinforcement
Prior to training animals on discrimination tasks on the touchscreen device, several “pre-steps” can help to prepare for a
successful session. First, the location(s) of the target stimuli should be analyzed to prevent side biases with successive
presentations. Generally, a stimulus that is randomly presented in the same location for three or more consecutive presentations may create a bias to that location. Random placement can be achieved, for instance, by dividing the touchscreen into quadrants and constraining the randomization so that the target stimulus does not remain in the same location across successive tasks.
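A minimal sketch of such constrained randomization, written in Python with hypothetical quadrant names and a two-repeat limit, could look like this.

import random

QUADRANTS = ("top-left", "top-right", "bottom-left", "bottom-right")

def place_target(history, max_repeats=2):
    """Choose a quadrant for the target stimulus while limiting repeats.

    history is the list of quadrants used on previous trials. If the most
    recent quadrant has already appeared max_repeats times in a row, it is
    excluded from the draw, so the target never occupies the same location
    for three or more consecutive presentations. Quadrant names and the
    repeat limit are illustrative placeholders.
    """
    options = list(QUADRANTS)
    if len(history) >= max_repeats and len(set(history[-max_repeats:])) == 1:
        options.remove(history[-1])
    choice = random.choice(options)
    history.append(choice)
    return choice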
The second pre-step for one-value reinforcement systems is selecting a fallback successful target behavior to reduce
frustration during discrimination task training. For instance, if an animal selects incorrectly twice in a row, the large,
original target circle from the shaping procedure may appear in the center of the screen. When the animal touches the
circle, they receive a food reinforcer.
Finally, we recommend an 'exit' button on the testing page so that the animal may exit the test without discarding the data and thus continues to have autonomy during the training sessions. Animals that have access to touchscreen-based enrichment games may be given high-value reinforcers for discrimination task sessions and lower-value reinforcers for other tablet-based activities.
Discrimination task training begins with the presentation of two stimuli on the touchscreen. For both social and automated learning, a correct response is paired with a sound and a food reinforcer, and an incorrect response results in a 3-second pause before the next attempt. The touchscreen software may also offer a pause with a blank screen and then reset with a new task or corrective trials. In social learning sessions, trainers may hold and position the touchscreen such that the animal is directed to touch the target stimulus in order to create a direct association; this assistance is then slowly withdrawn until the animal performs the task independently. In automated learning systems, the animal is provided food reinforcers for correctly touching the target stimulus as associations are made over successive trials.
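As a rough illustration of this trial structure, the sketch below (in Python; present_pair, wait_for_touch, play_tone, give_reinforcer, and clear_screen are hypothetical callbacks, not any study's actual software) pairs a correct touch with a tone and reinforcer and follows an error with a 3-second pause and a corrective trial.

import time

def run_discrimination_trial(present_pair, wait_for_touch, play_tone,
                             give_reinforcer, clear_screen,
                             pause_s=3.0, corrective=True):
    """Run one two-alternative discrimination trial.

    present_pair() draws S+ and S- and returns which side holds S+;
    wait_for_touch() returns the side the animal touched. Both, along with
    the other callbacks, are placeholders for the touchscreen software and
    the feeder or trainer in use.
    """
    s_plus_side = present_pair()
    while True:
        if wait_for_touch() == s_plus_side:
            play_tone()                 # conditioned reinforcer (bridge)
            give_reinforcer()           # food reward
            return True
        clear_screen()
        time.sleep(pause_s)             # brief pause after an error
        if not corrective:
            return False
        s_plus_side = present_pair()    # corrective trial: present the task again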
4 Future Work
This study was limited to a literature review of 12 species that met our criteria for animal-computer interaction ethics, and it was a narrow investigation into discrimination task training. Future work can include a more robust discussion of the design elements of touchscreen devices for optimizing discrimination tasks, including considerations of object shape, color, depth perception, and the animals’ specific visual abilities. Future work could also include other types of touchscreen tasks and more detailed, species-specific insights into shaping and training. In addition, it would be valuable to have a more comprehensive discussion of the various ethical frameworks and how they could be relevant to animal use of touchscreen devices. Further research could also include Fitts’ law tests for each species, animal size-to-screen ratios, and whether capacitive or infrared screens are more effective for each species’ computer interactions.
5 Conclusion
Touchscreen devices have changed captive animal interactions in many ways, including new forms of enrichment and new ways to collect research data. Alongside this developing field, animal-centered ethics have emerged within Animal-Computer Interaction, positioning animals as stakeholders in the research that impacts them. This paper has presented a review of the literature on training animals to use touchscreen devices for discrimination tasks for social interaction, enhanced enrichment, and research purposes through the lens of animal-centered ethics. In particular, across 12 species of animals, we analyzed environmental setup, touchscreen device hardware and software, habituation and training, and discrimination task outcomes.
In addition, we have included an expanded framework of recommendations for best practices for instituting discrimination training across species, including 1) setting up the environment for ideal species learning conditions, 2) determining an animal’s reinforcers and habituating the animal, 3) shaping behavior using positive reinforcement, and 4) teaching stimuli discrimination tasks with positive reinforcement. There has been no comprehensive review of the literature on this topic to date, so it is our hope that this work will help other researchers better understand methods for training animals to use touchscreens.
REFERENCES
<bib id ="bib1 "><numbe r>[1]</nu mber>David Washburn, “The Four Cs of Psychological Wellbeing: Lessons from Three Decades of
Computer -based Environmental Enrichment,Anim Behav Cogn, vol. 2, no . 3, pp. 2 18232, Aug. 2015, doi: 10.12966/abc.08.02.2015.</bib>
<bib id ="bib2 "><numbe r>[2]</nu mber>Allyson J. Bennett and Jeremy D. Bailoo, “Psychological Evaluation model for NHP Environmental
Enrichment,” 2018.</bib>
<bib id ="bib3 "><numbe r>[3]</nu mber>Joël Fagot and Elodie Bonté, “Automated testing of cognitive performance in monkeys: Use of a
battery of computerized test systems by a troop of semi-free-ranging baboons (Papio papio),” Behav Res Methods, vol. 42, no. 2 , pp. 507516,
May 2010, doi: 10.3758/BRM.42.2.507.</bib>
<bib id ="bib4 "><numbe r>[4]</nu mber>Lisa J. Wallis, Friederike Range, Eniko Kubinyi, Durga Chapagain, Jessica Serra, and Ludwig Huber,
“Utilising dog-computer inter actions to pro vide me ntal st imulatio n in d ogs espec ially du ring agein g,” in ACM International Conference
Proceeding Series, Nov. 201 7, vol. Part F132525. doi: 10.1 145/3152130.3152146.</bib>
<bib id ="bib5 "><numbe r>[5]</nu mber>Clint Zeagle r, Scott Gilliland, Larr y Freil, Thad St arner, and Me lody Moo re Jackson, “Goin g to th e
dogs: Towards an interactive touchscreen interface for working dogs,” in UIST 2014 - Proceedings of the 27th Annual ACM Symposium on User
Interface Software and Technology, Oct. 2014, pp . 497508. doi: 10.1145/2642918.2647364.</bib>
<bib id ="bib6 "><numbe r>[6]</nu mber>Michelle Westerlaken and Stefano Gualeni, “Felino: The Philosophical Practice of Making an
Interspecies Videogame,” 2014.</bib>
<bib id ="bib7 "><numbe r>[7]</nu mber>Clemen s Drie ssen, Kars Alfrink, Mar inka Copier, Hein Lager weij, an d Irene van Pee r, “What c ould
playing with pigs do to us? Game design as multispecies philosophy,” 2014.</bib>
<bib id="bib 8"><number>[8]</number>Jennifer Cunha and Carlie Rhoads, “Use of a Tablet-Based Commun ication Board and Sub sequent
Choice a nd Behavioral Co rrespondences in a Goff in’s Cock atoo ( Cacatua goffiana),” Nov. 2 020. do i: 10.11 45/34460 02.3446063.</bib >
<bib id="bib 9"><number>[9]</number>Lydia Hopper, Crystal Eggelkamp, Mason Fidino, and Stephen Ross, “An assessment of touchscreens
for testing primate food preferences and valuations.,” Behav Res Methods, vol. 51, p. 639 650, 2018 .</bib>
<bib id ="bib10 "><numbe r>[10]</number >Jennifer Vonk, Jordyn Truax, and Molly C. McQuire, “A Food for All Seasons: Stability of Food
Preference s in Gorillas acro ss Testing Methods and Seasons,” Animals, vol. 1 2, no. 6 , Mar. 2022, doi: 10.339 0/ani12060685.</bib >
<bib id ="bib11 "><number>[1 1]</number >Viktoria Krakenberg, Maximilian Wewer, Rupert Palme, Sylvia Kaiser, Norbert Sachser, and S.
Helene Richter, “Regular touchscreen training affects faecal corticosterone metabolites and anxiety-like behaviour in mice,” Behavioural Brain
Research, vol. 401 , Mar. 2021 , doi: 10.1 016/j.bbr.2 020.11308 0.</bib>
<bib id ="bib12 "><numbe r>[12 ]</number >Benjamin M. Seitz, Kelsey M cCune, M aggie MacPher son, Luisa Bergeron, Aar on P. Blaisdell, an d
Corina J. Log an, “Using touchscreen equ ipped op erant c hambers to study animal cognition. Benefits, limita tions, an d advice,” PLoS One, vol.
16, no. 2 February 2021, Feb. 2021, doi: 10.1371/journal.pone.0246446.</bib>
<bib id ="bib13 "><numbe r>[13 ]</number >Robert G Cook, Alfred I. Geller, G uo-Rong Zh ang, and Ram Gowd a, “Touc hscreen-enhan ced vi sual
learning in rats,” 200 4. [Online]. Available: http://www.</bib>
<bib id ="bib14 "><numbe r>[14 ]</number >Ludwig Huber, Wilfried Apfalter, Michael Steurer, and Hermann Prossinger, “A new learn ing
paradigm elicits fast visual discrimination in pigeons,J Exp Psychol Anim Behav Process, vol. 31, no. 2, pp. 2 37246, Apr. 2005, doi:
10.1037/0097-7403.31.2.237.</bib>
<bib id ="bib15 "><numbe r>[15 ]</number >Zoe Johnson-Ulrich and Jennifer Vonk, “Spatial representation of magnitude in humans (Homo
sapiens), Western lowland gorillas ( Gorilla go rilla gor illa), and American black b ears (Ursus a mericanu s),” Anim Cogn, vol. 21, no. 4, pp . 531
550, Jul. 2018, doi: 10.1007/s10071-018-1186-y.</bib>
<bib id ="bib16 "><numbe r>[16 ]</number >Ulrike Aust, Friederike Range, Michael Steurer, and Ludwig Huber, “Inferential reasoning by
exclusion in pigeons, dogs, and humans,” Anim Cogn, vol. 11, n o. 4, pp. 5 87597, Oct. 2008, doi: 10.1007/s10071-008-0149-0.</bib>
<bib id ="bib17 "><numbe r>[17 ]</number >Mark O’Hara, Alice M.I. Auersperg, Thomas Bugnyar, and Ludwig Huber, “Inference by exclusion
in Goffin cockatoos (Cacatua goffini),” PLoS One, vol. 10 , no. 8, Aug. 2015 , doi: 10.137 1/journal.p one.0134 894.</bib >
<bib id="bib 18"><number>[18]</number>Sandra Mikola sch, Kurt Kotrschal, and Christian Schlo egl, “African grey parrots (Psittacus
erithacus) use inference by exclusion to find hidden food,” Biol Lett, vol. 7, no. 6, pp. 875877, Dec. 2011, doi: 10.1098/rsbl.2011.0500.</bib >
<bib id ="bib19 "><numbe r>[19 ]</number >Denise Kurz, “Touch-screen exp eriments-Co mmon marmose ts can discrimin ate bet ween positive
and negative stimuli but can they reason by exclusion,” 2011.</bib>
<bib id ="bib20 "><numbe r>[20 ]</number >Heidi Marsh, Alexander Vining, E mma Leven doski, and Peter Jud ge, “Infer ence by exclusion in
lion-tailed macaques (M acaca silenus), a hamadryas bab oon (Papio hamadryas), capu chins (Sapajus apella), and squirrel monkeys (Saimiri
sciureus).,” J Comp Psychol., vol. 12 9, no. 3, p. 256267, 2015.</bib>
<bib id ="bib21 "><numbe r>[21 ]</number >Claudia Stephan, Anna W ilkinson , and Lu dwig Hub er, “Hav e we met before? Pigeons r ecognise
familiar human faces,” Avian Biol Res, vol. 5, no. 2, pp . 7580, 2012, doi: 10.3184/175815512X 13350 970204867.</bib>
<bib id ="bib22 "><numbe r>[22 ]</number >Kent D. Bodily, Jeffrey S. Katz, and Anthony A. Wright, “Matching-to-Sample Abstract-Con cept
Learning by Pigeons,” J Exp Psychol Anim Behav Process, vol. 34, no. 1, p p. 178184, Jan. 2008, doi: 10.1037/0097-7403.34.1.178.</bib>
<bib id ="bib23 "><numbe r>[23 ]</number >Dániel Rivas-Blanco, I na Mar ia Pohl, Rachel Dale , Marian ne Theres Elisa beth He berlein, and
Friederike Ran ge, “Wolves and Dogs May Rely on No n-numeri cal Cues in Quanti ty Disc rimination Tasks When Given the Choice,” Front
Psychol, vol. 11, Sep . 2020, doi: 10.3389 /fpsyg.202 0.573317.</b ib>
<bib id ="bib24 "><numbe r>[24 ]</number >Clara Mancini, “ Towards a n animal -centred ethics for AnimalComputer In teraction,” International
Journal of Human Computer Studies, vol. 98, p p. 221233, Feb. 2017, doi: 10.1016/j.ijhcs.2016.04.008.</bib>
<bib id ="bib25 "><numbe r>[25 ]</number >Clara Mancini and Eleono ra Nannoni, “Rel evance, Impartiality, Wel fare an d Consent: Prin ciples o f
an Animal-Centered Research Ethics,” Frontiers in Animal Science, vol. 3, Apr. 2022, d oi: 10.3389/fanim.2 022.800186.</bib>
<bib id ="bib26 "><numbe r>[26 ]</number >Clara Mancini, “ Animal-Comp uter I nterac tion: A Manifesto ,” 2011.</bib>
<bib id ="bib27 "><numbe r>[27 ]</number >Ilyena Hirskyj-Dougla, Patricia Pons, Janet C. Read, and Javier Jaen, “Seven years after the
manifesto: Liter atur e rev iew and research direction s for techno logies in animal comp uter interactio n,” Multimodal Technolo gies and
Interaction, vol. 2, no. 2. MDPI AG, Jun . 01, 2018. doi: 10 .3390/mti2020030.</bib>
<bib id ="bib28 "><numbe r>[28 ]</number >Shang Ping Lee et al., “A mobile pet wearable computer and mixed reality syste m for hu man-poultry
interaction through the internet,” Pers Ubiquitous Comput, vol. 10 , no. 5, pp. 301317, Aug. 2006, doi: 10.1007/s00779-005-0051-6.</bib>
<bib id ="bib29 "><numbe r>[29 ]</number >Juan Haladjian, Z ardosht Hod aie, Stefa n Nüske, an d Bernd Brügge, “G ait anomaly detectio n in dairy
cattle,” in ACM International Conference Proceeding Series, Nov. 2017 , vol. Part F132525. doi: 10.1145/3152130.3152135.</bib>
<bib id ="bib30 "><numbe r>[30 ]</number >Ilyena Hirskyj-Douglas and Janet C. Read, “DoggyVision: Examining how dogs (Canis familiaris)
interact with media using a dog-driven proximity tracker device.,Anim Behav Cogn, vol. 5, no. 4, p p. 388405, Nov. 2018, doi:
10.26451/abc.05.04.06.2018.</bib>
<bib id ="bib31 "><numbe r>[31 ]</number >Patricia Pons, Javier Jaen, and Alejandro Catala, “Towards futur e interactive inte lligent systems for
animals: Study and recognition of embodied interactions,” in International Conference on Intelligent User Interfaces, Proceedings IUI, Mar.
2017, pp. 389400. doi: 1 0.1145/3025 171.3025175.</bib>
<bib id ="bib32 "><numbe r>[32 ]</number >Fiona French an d Martin Kaltenb runner, “User Experien ce for Elephants Researching Interactive
Enrichment through Design and Craft,” 2020.</bib>
<bib id ="bib33 "><numbe r>[33 ]</number >Hiroki Kobayashi, Kazuhiko Nakamura, Kana Muramatsu, Kaoru Saito, Junya Okuno, and Akio
Fujiwara, “Playful r ocksalt system: Animal-computer interaction design in wild environments,ACM International Conference Proceedings
Series, 12th Advances in Computer Entertainment Technology Conference, , 2015.</bib>
<bib id ="bib34 "><numbe r>[34 ]</number >Joshua E . Wolf, Catherine M. U rbano , Chad M. Ruprecht, an d Kenneth J. Leising , “Need to train
your r at? There is an A pp for that : A touchsc reen b ehavior al evaluati on syste m,” Behav Res Methods, vol. 46, no . 1, pp. 2 06214, 2 014, doi:
10.3758/s13428-013-0366-6.</bib >
<bib id ="bib35 "><numbe r>[35 ]</numbe r>Damian Scarf, “Getting out of the lab: The development of a free-range learning apparatus for
pigeons (FLAP) Authoritarian tendencies arising from a fear of COVID-19 View project Outdoor Education View project Getting out of the lab:
The development of a free-range learning apparatus for pigeons (FLAP) Acknowledgment,” 2022. [Online]. Available:
https://www.researchgate.net/publication/358275129</bib>
<bib id ="bib36 "><numbe r>[36 ]</number >Sarah E. Ritvo and Suzanne E. MacDonald, “Preferen ce for free or forced choice in Sumatran
orangutans (Pongo abelii),” J Exp Anal Behav, vol. 113, no. 2, pp. 419434, Mar. 2020, doi: 10.1002/jeab.584.</bib>
<bib id ="bib37 "><numbe r>[37 ]</number >Julia Mueller-Paul, A. Wilkinson, U. Aust, M. Steu rer, G. Hall, an d L. Huber, “Touchscreen
performance and knowledge transfer in the red-footed tortoise (Chelonoidis carbonaria),” Behavioural Processes, vol. 106, pp. 18 7192, 2014,
doi: 10.1016/j.beproc.2014.06.003.</bib>
<bib id ="bib38 "><numbe r>[38 ]</number >Carolin e B. Druc ker, Ta lia Baghdoyan, and Elizabeth M. Brannon, “Implicit sequence learning in
ring-tailed lemurs (Lemur catta),” J Exp Anal Behav, vo l. 105, no. 1 , pp. 12 3132, J an. 20 16, do i: 10 .1002/jeab.18 0.</bib >
<bib id ="bib39 "><numbe r>[39 ]</number >Linda Brent and D. Weaver, “The physiological and behavioral effects of radio music on signly
housed baboons,” 1996.</bib>
<bib id ="bib40 "><numbe r>[40 ]</number >Paul E. Honess an d C M. Marin, “Enrichment and aggression in p rimates,” Neuroscience an d
Biobehavioral Reviews, vol. 30, no. 3 . pp. 413436, 2006. doi: 10.1016/j.neubiorev.2005.05.002.</bib>
<bib id ="bib41 "><numbe r>[41 ]</number >Scott W. Line, A. S. Clarke, Hal Mark owitz, and G. Ellman , “Responses of female rhesus macaques
to an environmental en richment apparatus,” 1990.</bib>
<bib id ="bib42 "><numbe r>[42 ]</number >Clara Mancini, “ Towards a n animal -centred ethics for AnimalComputer In teraction,” International
Journal of Human Computer Studies, vol. 98, pp . 221233, Feb. 2017, doi: 10.1016/j.ijhcs.2016.04.008.</bib>
<bib id ="bib43 "><numbe r>[43 ]</number >Nicolas Claidière, Julie Gullstrand, Aurélien Latouche, and Joël Fagot, “Using Automated Learning
Devices for Monkeys (ALDM) to study social networks,” Behav Res Methods, vol. 49, no. 1, pp . 2434, Feb. 2017, doi: 10.3758/s13428-015-
0686-9.</bib>
<bib id ="bib44 "><numbe r>[44 ]</number >Ilyena Hirskyj-Douglas and Vilma Kankaanpää, “Article exploring how white-faced sakis control
digital visual enrichment systems,” Animals, vol. 11, no . 2, pp. 119, Feb. 2021, doi: 10.3390/ani11020557.</bib>
<bib id ="bib45 "><numbe r>[45 ]</number >Cecilie M. Mejdell, Turid Buvik, Grete H.M . Jorg ensen, an d Knut E . Bøe, “Ho rses can learn t o use
symbols to communica te their p references,” Appl Anim Behav Sci, vol. 184, pp. 6 673, Nov. 2016, doi : 10.10 16/j.applan im.2016.07.0 14.</b ib>
<bib id ="bib46 "><numbe r>[46 ]</number >Bruce S. McEwen and Robert M. Sap olsky, “Stress an d cog nitive functio n Introduction
Catecho lamines an d glucocortic oids,” 19 95.</bib>
<bib id ="bib47 "><numbe r>[47 ]</number >Zdeněk Hliňák and E. Rozmarová, “The locomotor-exploratory behaviour of laboratory male rats
tested under the ‘red’ and ‘white’ light conditio ns,” Act Nerv Super (Praha), vol. 28, 198 6.</bib >
<bib id ="bib48 "><numbe r>[48 ]</number >Robert B Lockard, Some effects of light upo n the behavior of rodentsPsychological Bulletin, vol.
60, no. 6, pp. 509-529, 1963.</bib>
<bib id ="bib49 "><numbe r>[49 ]</number >Alexa E. Horner et al., “The touchscreen operant platform for testing lear ning and memory in rats
and mice,” Nat Protoc, vol. 8, no. 10, pp. 19 611984, Oct. 2014, doi: 10.1038/nprot.2013 .122.</bib>
<bib id ="bib50 "><numbe r>[50 ]</number >Clint Zea gler et al., “Can ine compu ter interaction: To wards desig ning a tou chscreen interface fo r
working dogs,” in ACM International Conference Proceeding Series, Nov. 2016, vol. 1 5-17-November-2016 . doi:
10.1145/2995257.2995384.</bib>
<bib id ="bib51 "><numbe r>[51 ]</number >Clint Zea gler, Sc ott Gilliland, Larry Freil, Thad Starn er, and Melody Mo ore Jackson, “Go ing to the
dogs: Towards an interactive touchscreen interface for working dogs,” in UIST 2014 - Proceedings of the 27th Annual ACM Symposium on User
Interface Software and Technology, Oct. 2014, pp . 497508. doi: 10.1145/2642918.2647364.</bib>
<bib id ="bib52 "><numbe r>[52 ]</number >Caitlin A. Ford, L iz Bellwar d, Cliv e J. C. Phillips, and Kris Desco vich, “ Use of I nterac tive
Technology in Captive Great Ape Management,” Journal of Zoological and Botanical Gardens, vol. 2, no. 2 , pp. 30031 5, Jun. 202 1, doi:
10.3390/jzbg2020021.</bib>
<bib id="bib53"><number>[53]</number>Stephen E.G. Lea, Emman uel M. Pothos, A ndy J. Wills, Lisa A . Leaver, Catrio na M.E. Ryan, an d
Christin a Meier, “Multip le Featu re Use in Pigeon s’ Category Discrimination : The Influenc e of Stimulus Set Structu re and the Salience of
Stimulus Differen ces,” J Exp Psychol Anim Learn Cogn, vol. 04 , 2018, [Online]. Available: http s://pearl.p lymouth.ac .uk</bib >
<bib id ="bib54 "><numbe r>[54 ]</number >Heli Väätäjä et al., “Technolo gy for bonding in hu man-animal i nterac tion,” in ACM International
Conferen ce Proceeding S eries, Nov . 2017, vo l. Part F13 2525. do i: 10.1145 /3152130.315 2153.</bib >
<bib id ="bib55 "><numbe r>[55 ]</number >Melody Moore Jackson et al., “Technology for w orking do gs,” Dec. 20 18. doi:
10.1145/3295598.3295615.</bib>
<bib id="bib56"><number>[56]</number>Ce ara Byrne, Clint Ze agler, Larry Freil, All ison Rapoport, and Me lody Mo ore Jackson, “Do gs using
touchscreens in the ho me: A case study for assistance do gs operating emergency notification systems,” Dec. 20 18. doi:
10.1145/3295598.3295610.</bib>
<bib id ="bib57 "><numbe r>[57 ]</number >Ludwig Huber, Nils Heise, Christopher Zeman, and Christian Palmers, “The ALDB box: Automatic
testing of cognitive performance in groups of av iary-housed pigeons,” Behav Res Methods, vol. 47, no . 1, pp. 162171, 2015, doi:
10.3758/s13428-014-0462-2.</bib >
<bib id ="bib58 "><numbe r>[58 ]</number >Bonnie M. Perdue, “The e ffect o f computerized testing on sun bear behavior and enrichment
preferences,” Behavioral Sciences, vol. 6, no. 4, Dec. 2 016, do i: 10.3390/bs6040019 .</bib>
<bib id ="bib59 "><numbe r>[59 ]</number >Daan W. Laméris, Jonas Verspeek, Marcel Eens, and Jeroen M.G. Stevens, “Social and nonsocial
stimuli alter the p erformance of bon obos dur ing a pictorial emotio nal Stro op task,” Am J Primatol, vol. 84, no. 2, Feb. 2022, doi:
10.1002/ajp.23356.</bib>
<bib id ="bib60 "><numbe r>[60 ]</number >Masaki Tomonaga, Kiyonori Kumazaki, Florine Camus, Sophie Nicod, Carlos Pereira, and Tetsuro
Matsuzawa, “A Horse’s Eye View: Size and Shape Discrimination Compared with Other Mammals,” 2015.</b ib>
<bib id ="bib61 "><numbe r>[61 ]</number >Susan G. Friedman , “Tsk, No, Eh-eh: Clearing the Path to Reinforcement with an Errorless Learning
Mindset,” 2016.</bib>
<bib id ="bib62 "><numbe r>[62 ]</number >Fay E. Clark, “Cog nitive enrichment and welfare: Cur rent approaches and future directions,” Anim
Behav Cogn, vol. 4, no. 1 , pp. 5271, Feb. 2 017, doi: 1 0.129 66/ab c.05.02.2017.</bib>
<bib id ="bib63 "><numbe r>[63 ]</number >Irene M. Pepperberg, The Alex Studies. Cambrid ge, MA: Harvard University Press, 1999 .</bib>
<bib id ="bib64 "><numbe r>[64 ]</number >Irene M. Pepperberg, “Animal language studies: What happened?,” Psychon Bull Rev, vol. 24, no . 1,
pp. 181185, Feb. 2 017, d oi: 10.3758/s13423 -016-1101-y.</bib>
<bib id="bib 65"><number>[65]</number>Larry Freil et al., “Canine-centered computing,” Foundations and Trends in Human-Computer
Interaction, vol. 10, no. 2, p p. 87164, 2016, doi: 10.1561/1100000064.</bib>
<bib id ="bib66 "><numbe r>[66 ]</number >Jean Mckinley, Hannah Buchanan-Smith, Lois Bassett, and Keith Morris, “Training co mmon
marmose ts (Callith rix jacch us) to coopera te dur ing routine laborator y procedure s: Ease of train ing and time investmen t.,” J Appl Anim Welf Sci.,
vol. 6, no. 3, pp. 209220, 2003. </bib>