
Dogs recognize dog and human emotions


Cite this article: Albuquerque N, Guo K, Wilkinson A, Savalli C, Otta E, Mills D. 2016 Dogs recognize dog and human emotions. Biol. Lett. 12: 20150883.
Received: 20 October 2015
Accepted: 22 December 2015
Subject Areas:
behaviour, cognition

Keywords:
Canis familiaris, cross-modal sensory integration, emotion recognition, social cognition

Author for correspondence:
Kun Guo

Electronic supplementary material is available online.

Animal behaviour
Dogs recognize dog and human emotions
Natalia Albuquerque¹,³, Kun Guo², Anna Wilkinson¹, Carine Savalli⁴, Emma Otta³ and Daniel Mills¹

¹School of Life Sciences and ²School of Psychology, University of Lincoln, Lincoln LN6 7DL, UK
³Department of Experimental Psychology, Institute of Psychology, University of São Paulo, São Paulo 05508-030, Brazil
⁴Department of Public Politics and Public Health, Federal University of São Paulo, Santos 11015-020, Brazil

KG, 0000-0001-6765-1957
Abstract

The perception of emotional expressions allows animals to evaluate the social intentions and motivations of each other. This usually takes place within species; however, in the case of domestic dogs, it might be advantageous to recognize the emotions of humans as well as other dogs. In this sense, the combination of visual and auditory cues to categorize others' emotions facilitates the information processing and indicates high-level cognitive representations. Using a cross-modal preferential looking paradigm, we presented dogs with either human or dog faces with different emotional valences (happy/playful versus angry/aggressive) paired with a single vocalization from the same individual with either a positive or negative valence or Brownian noise. Dogs looked significantly longer at the face whose expression was congruent to the valence of vocalization, for both conspecifics and heterospecifics, an ability previously known only in humans. These results demonstrate that dogs can extract and integrate bimodal sensory emotional information, and discriminate between positive and negative emotions from both humans and dogs.
1. Introduction
The recognition of emotional expressions allows animals to evaluate the social intentions and motivations of others [1]. This provides crucial information about how to behave in different situations involving the establishment and maintenance of long-term relationships [2]. Therefore, reading the emotions of others has enormous adaptive value. The ability to recognize and respond appropriately to these cues has biological fitness benefits for both the signaller and the receiver [1].

During social interactions, individuals use a range of sensory modalities, such as visual and auditory cues, to express emotion, with characteristic changes in both face and vocalization, which together produce a more robust percept [3]. Although facial expressions are recognized as a primary channel for the transmission of affective information in a range of species [2], the perception of emotion through cross-modal sensory integration enables faster, more accurate and more reliable recognition [4]. Cross-modal integration of emotional cues has been observed in some primate species with conspecific stimuli, such as matching a specific facial expression with the corresponding vocalization or call [5–7]. However, there is currently no evidence of emotional recognition of heterospecifics in non-human animals. Understanding heterospecific emotions is of particular importance for animals such as domestic dogs, who live most of their lives in mixed-species groups and have developed mechanisms to interact with humans (e.g. [8]). Some work has shown cross-modal capacity in dogs relating to the perception of specific activities (e.g. food-guarding) [9] or individual features (e.g. body size) [10], yet it remains unclear whether this ability extends to the processing of emotional cues, which inform individuals about the internal state of others.
© 2016 The Author(s) Published by the Royal Society. All rights reserved.
on January 13, 2016 from
Dogs can discriminate human facial expressions and emotional sounds (e.g. [11–18]); however, there is still no evidence of multimodal emotional integration, and these results relating to discrimination could be explained through simple associative processes. They do not demonstrate emotional recognition, which requires the demonstration of categorization rather than differentiation. The integration of congruent signals across sensory inputs requires internal categorical representation [19–22] and so provides a means to demonstrate the representation of emotion.

In this study, we used a cross-modal preferential looking paradigm without a familiarization phase to test the hypothesis that dogs can extract and integrate emotional information from visual (facial) and auditory (vocal) inputs. If dogs can cross-modally recognize emotions, they should look longer at facial expressions matching the emotional valence of simultaneously presented vocalizations, as demonstrated by other mammals (e.g. [5–7,21,22]). Owing to previous findings of valence [5], side [22], sex [11,22] and species [12,23] biases in perception studies, we also investigated whether these four main factors would influence the dogs' response.
2. Material and methods
Seventeen healthy socialized family adult dogs of various breeds were presented simultaneously with two sources of emotional information. Pairs of grey-scale gamma-corrected human or dog face images from the same individual but depicting different expressions (happy/playful versus angry/aggressive) were projected onto two screens at the same time as a sound was played (figure 1a). The sound was a single vocalization (dog barks or human voice in an unfamiliar language) of either positive or negative valence from the same individual, or a neutral sound (Brownian noise). Stimuli (figure 1b) featured one female and one male of both species. Unfamiliar individuals and an unfamiliar language (Brazilian Portuguese) were used to rule out the potential influence of previous experience with model identity and human language.

Experiments took place in a quiet, dimly lit test room and each dog received two 10-trial sessions, separated by two weeks. Dogs stood in front of two screens and a video camera recorded their spontaneous looking behaviour. A trial consisted of the presentation of a combination of the acoustic and visual stimuli and lasted 5 s (see electronic supplementary material for details). Each trial was considered valid for analyses when the dog looked at the images for at least 2.5 s. The 20 trials presented different stimulus combinations: 4 face-pairs (2 human and 2 dog models) × 2 vocalizations (positive and negative valence) × 2 face positions (left and right), in addition to 4 control trials (4 face-pairs with neutral auditory stimulus). Therefore, each subject saw each possible combination once.
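The 4 × 2 × 2 test-trial structure plus the 4 neutral-sound controls can be sketched as a small enumeration. This is only an illustration of the combinatorics described above; the labels are placeholders, not identifiers from the study:

```python
from itertools import product

# Illustrative labels for the four face-pair models and the two
# crossed stimulus factors (names are placeholders, not from the study).
face_pairs = ["human_female", "human_male", "dog_female", "dog_male"]
vocal_valences = ["positive", "negative"]
congruent_sides = ["left", "right"]

# 4 face-pairs x 2 vocal valences x 2 congruent-face positions = 16 test trials.
test_trials = [
    {"faces": f, "vocalization": v, "congruent_side": s}
    for f, v, s in product(face_pairs, vocal_valences, congruent_sides)
]

# Plus 4 control trials: each face-pair paired with neutral Brownian noise.
control_trials = [{"faces": f, "vocalization": "neutral"} for f in face_pairs]

session = test_trials + control_trials
print(len(test_trials), len(control_trials), len(session))  # 16 4 20
```

Enumerating the full factorial in this way makes explicit that each subject sees every combination exactly once across its 20 trials.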
We calculated a congruence index = (C − I)/T, where C and I represent the amount of time the dog looked at the congruent (facial expression matching emotional vocalization, C) and incongruent (I) faces, and T represents total looking time (looking left + looking right + looking at the centre) for the given trial, to measure the dog's sensitivity to audio-visual emotional cues delivered simultaneously. We analysed the congruence index across all trials using a general linear mixed model (GLMM) with individual dog included in the model as a random effect. Only emotion valence, stimulus sex, stimulus species and presentation position (left versus right) were included as the fixed effects in the final analysis because first- and second-order interactions were not significant. The means were compared to zero and confidence intervals were presented for all the main factors in this model. A backward selection procedure was applied to identify the significant factors. The normality assumption was verified by visually inspecting plots of residuals, with no important deviation from normality identified. To verify a possible interaction between the sex of subjects and stimuli, we used a separate GLMM taking into account these factors. We also tested whether dogs preferentially looked at a particular valence throughout trials and at a particular face in the control trials (see the electronic supplementary material for details of index calculation).
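The index defined above can be sketched in a few lines. This is a minimal illustration only: the study's actual inference used a GLMM with dog as a random effect, whereas the helper below shows just the per-trial index and a plain one-sample t statistic against zero; the looking times are invented for the example:

```python
from math import sqrt
from statistics import mean, stdev

def congruence_index(congruent, incongruent, total):
    """Per-trial congruence index (C - I) / T.

    Positive values mean the dog looked longer at the face whose
    expression matched the valence of the vocalization.
    """
    if total <= 0:
        raise ValueError("total looking time must be positive")
    return (congruent - incongruent) / total

def one_sample_t(values):
    """t statistic for H0: mean index = 0 (a simplification of the
    paper's mixed-model comparison of means against zero)."""
    n = len(values)
    return mean(values) / (stdev(values) / sqrt(n))

# Invented trial: 3.0 s on the congruent face, 1.5 s on the incongruent
# face, 0.5 s at the centre, so T = 3.0 + 1.5 + 0.5 = 5.0 s.
idx = congruence_index(3.0, 1.5, 3.0 + 1.5 + 0.5)
print(round(idx, 2))  # 0.3
```

A whole-sample analysis would collect one such index per valid trial and then, as in the paper, test whether the mean differs from zero while accounting for repeated measures within each dog.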
3. Results
Dogs showed a clear preference for the congruent face in 67% of the trials (n = 188). The mean congruence index was 0.19 ± 0.03 across all test trials and was significantly greater than zero (t = 5.53; p < 0.0001), indicating dogs looked significantly longer at the face whose expression matched the valence of the vocalization. Moreover, we found a consistent congruent looking preference regardless of the stimulus species (dog: t = 5.39, p < 0.0001; human: t = 2.48, p = 0.01; figure 2a), emotional valence (negative: t = 5.01, p < 0.0001; positive: t = 2.88, p = 0.005; figure 2b), stimulus gender (female: t = 4.42, p < 0.0001; male: t = 3.45, p < 0.001; figure 2c) and stimulus position (left side: t = 2.74, p < 0.01; right side: t = 5.14, p < 0.0001; figure 2d). When a backwards selection procedure was applied to the model with the four main factors, the final model included only stimulus species. The congruence index for this model was significantly higher for viewing dog rather than human faces (dog: 0.26 ± 0.05, human: 0.12 ± 0.05, F = 4.42; p = 0.04, figure 2a), indicating that dogs demonstrated greater sensitivity towards conspecific cues. In a separate model, we observed no significant interaction between subject sex and stimulus sex (F = 1.33, p = 0.25) or main effects (subject sex: F = 0.17, p = 0.68; stimulus sex: F = 0.56, p = 0.45).

Dogs did not preferentially look at either of the facial expressions in control conditions when the vocalization was the neutral sound (mean: 0.04 ± 0.07; t = 0.56; p = 0.58). The mean preferential looking index was −0.05 ± 0.03, which was not significantly different from zero (p = 0.13), indicating that there was no difference in the proportion of viewing time between positive and negative facial expressions across trials.
4. Discussion
The findings are, we believe, the first evidence of the integration of heterospecific emotional expressions in a species other than humans, and extend beyond primates the demonstration of cross-modal integration of conspecific emotional expressions. These results show that domestic dogs can obtain dog and human emotional information from both auditory and visual inputs, and integrate them into a coherent perception of emotion [21]. Therefore, it is likely that dogs possess at least the mental prototypes for emotional categorization (positive versus negative affect) and can recognize the emotional content of these expressions. Moreover, dogs performed in this way without any training or familiarization with the models, suggesting that these emotional signals are intrinsically important. This is consistent with this ability conferring important adaptive advantages [24].
Our study shows that dogs possess a similar ability to some non-human primates in being able to match auditory and visual emotional information [5], but also demonstrates an important advance. In our study, there was not a strict temporal correlation between the recording of visual and auditory cues (e.g. a relaxed dog face with open mouth paired with a playful bark), unlike the earlier research on primates (e.g. [5]). Thus the relationship between the modalities was not temporally contiguous, reducing the likelihood of learned associations accounting for the results. This suggests the existence of a robust categorical emotion representation.
Although dogs showed the ability to recognize both conspecific and heterospecific emotional cues, we found that they responded significantly more strongly towards dog stimuli. This could be explained by a more refined mechanism for the categorization of emotional information from conspecifics, which is corroborated by the recent findings of dogs showing a greater sensitivity to conspecifics' facial expressions [12] and a preference for dog over human images [23]. The ability to recognize emotions through visual and auditory cues may be a particularly advantageous social tool in a highly social species such as dogs, and might have been exapted for the establishment and maintenance of long-term relationships with humans. It is possible that during domestication such features could have been retained and potentially selected for, albeit unconsciously. Nonetheless, the communicative value of emotion is one of the core components of the process, and even less-social domestic species, such as cats, express affective states such as pain in their faces [25].
Figure 1. (a) Schematic apparatus. R1, R2: researchers; C: camera; S: screens; L: loudspeakers; P: projectors. (b) Examples of stimuli used in the study: faces (human happy versus angry, dog playful versus aggressive) and their corresponding vocalizations, shown as spectrograms of frequency (kHz) against time (s).
There has been a long-standing debate as to whether dogs can recognize human emotions. Studies using either visual or auditory stimuli have observed that dogs can show differential behavioural responses to single-modality sensory inputs with different emotional valences (e.g. [12,16]). For example, Müller et al. [13] found that dogs could selectively respond to happy or angry human facial expressions; when trained with only the top (or bottom) half of unfamiliar faces they generalized the learned discrimination to the other half of the face. However, these human-expression-modulated behavioural responses could be attributed solely to learning of contiguous visual features. In this sense, dogs could be discriminating human facial expressions without recognizing the information being transmitted.

Our subjects needed to be able to extract the emotional information from one modality and activate the corresponding emotion category template for the other modality. This indicates that domestic dogs interpret faces and vocalizations using more than simple discriminative processes; they obtain emotionally significant semantic content from relevant audio and visual stimuli that may aid communication and social interaction. Moreover, the use of unfamiliar Portuguese words controlled for potential artefacts induced by a dog's previous experience with specific words. The ability to form emotional representations that include more than one sensory modality suggests cognitive capacities not previously demonstrated outside of primates. Further, the ability of dogs to extract and integrate such information from an unfamiliar human stimulus demonstrates cognitive abilities not known to exist beyond humans. These abilities may be fundamental to a functional relationship within the mixed-species social groups in which dogs often live. Moreover, our results may indicate a more widespread distribution of the ability to spontaneously integrate multimodal cues among non-human mammals, which may be key to understanding the evolution of social cognition.
Ethics. Ethical approval was granted by the ethics committee in the School of Life Sciences, University of Lincoln. Prior to the study, written informed consent was obtained from the dogs' owners and human models whose face images and vocalizations were sampled as the stimuli. We can confirm that both the human models have agreed that their face images and vocalizations can be used for research and related publications, and we have received their written consent.
Data accessibility. The data underlying this study are available from
Authors' contribution. N.A., K.G., A.W. and D.M. conceived/designed the study and wrote the paper. E.O. conceived the study. N.A. performed the experiments. N.A. and C.S. analysed and interpreted the data. N.A. prepared the figures. All authors gave final approval for publication and agree to be held accountable for the work performed.
Competing interests. We declare we have no competing interests.
Funding. Financial support for N.A. from Brazil Coordination for the
Improvement of Higher Education Personnel is acknowledged.
Acknowledgements. We thank Fiona Williams and Lucas Albuquerque for
assisting with data collection/double coding and figures preparation.
References

1. Schmidt KL, Cohn JF. 2001 Human expressions as adaptations: evolutionary questions in facial expression research. Am. J. Phys. Anthropol. 33, 3–24. (doi:10.1002/ajpa.20001)
2. Parr LA, Winslow JT, Hopkins WD, de Waal FBM. 2000 Recognizing facial cues: individual discrimination by chimpanzees (Pan troglodytes) and rhesus monkeys (Macaca mulatta). J. Comp. Psychol. 114, 47–60. (doi:10.1037/0735-7036.114.
Figure 2. Dogs' viewing behaviour (calculated as congruence index, mean ± s.e.). (a) Species of stimulus; (b) valence of stimulus; (c) sex of stimulus; (d) side of stimulus presentation. *p < 0.05, **p < 0.01, ***p < 0.001.

3. Campanella S, Belin P. 2007 Integrating face and voice in person perception. Trends Cogn. Sci. 11, 535–543. (doi:10.1016/j.tics.2007.
4. Yuval-Greenberg S, Deouell LY. 2009 The dog’s
meow: asymmetrical interaction in cross-modal
object recognition. Exp. Brain Res.193, 603– 614.
5. Ghazanfar AA, Logothetis NK. 2003 Facial
expressions linked to monkey calls. Nature 423,
937–938. (doi:10.1038/423937a)
6. Izumi A, Kojima S. 2004 Matching vocalizations to
vocalizing faces in a chimpanzee (Pan troglodytes).
Anim. Cogn.7, 179–184. (doi:10.1007/s10071-004-
7. Payne C, Bachevalier J. 2013 Crossmodal integration
of conspecific vocalizations in rhesus macaques.
PLoS ONE 8, e81825. (doi:10.1371/journal.pone.
8. Nagasawa M, Mitsui S, En S, Ohtani N, Ohta M,
Sakuma Y, Onaka T, Mogi K, Kikusui T. 2015
Oxytocin-gaze positive loop and the coevolution of
human-dog bonds. Science 348, 333–336. (doi:10.
9. Faragó T, Pongrácz P, Range F, Virányi Z, Miklósi Á. 2010 'The bone is mine': affective and referential aspects of dog growls. Anim. Behav. 79, 917–925.
10. Taylor AM, Reby D, McComb K. 2011 Cross modal
perception of body size in domestic dogs (Canis
familiaris). PLoS ONE 6, e0017069. (doi:10.1371/
11. Nagasawa M, Murai K, Mogi K, Kikusui T. 2011 Dogs
can discriminate human smiling faces from blank
expressions. Anim. Cogn.14, 525– 533. (doi:10.
12. Racca A, Guo K, Meints K, Mills DS. 2012 Reading
faces: differential lateral gaze bias in processing
canine and human facial expressions in dogs and
4-year-old children. PLoS ONE 7, e36076. (doi:10.
13. Müller CA, Schmitt K, Barber ALA, Huber L. 2015 Dogs can discriminate emotional expressions of human faces. Curr. Biol. 25, 601–605. (doi:10.
14. Buttelmann D, Tomasello M. 2013 Can domestic
dogs (Canis familiaris) use referential
emotional expressions to locate hidden food? Anim.
Cogn.16, 137–145. (doi:10.1007/s10071-012-
15. Flom R, Gartman P. In press. Does affective
information influence domestic dogs’ (Canis lupus
familiaris) point-following behavior? Anim.Cogn.
16. Fukuzawa M, Mills DS, Cooper JJ. 2005 The effect of
human command phonetic characteristics on
auditory cognition in dogs (Canis familiaris).
J. Comp. Psychol.119, 117– 120. (doi:10.1037/
17. Custance D, Mayer J. 2012 Empathic-like responding by domestic dogs (Canis familiaris) to distress in humans: an exploratory study. Anim. Cogn. 15, 851–859. (doi:10.1007/s10071-012-
18. Andics A, Gácsi M, Faragó T, Kis A, Miklósi Á. 2014 Voice-sensitive regions in the dog and human brain are revealed by comparative fMRI. Curr. Biol. 24, 574–578. (doi:10.1016/j.cub.2014.
19. Kondo N, Izawa E-I, Watanabe S. 2012 Crows cross-modally recognize group members but not non-group members. Proc. R. Soc. B 279, 1937–1942.
20. Sliwa J, Duhamel J, Pascalis O, Wirth S. 2011 Spontaneous voice–face identity matching by rhesus monkeys for familiar conspecifics and humans. Proc. Natl Acad. Sci. USA 108, 1735–1740.
21. Proops L, McComb K, Reby D. 2009 Cross-modal
individual recognition in domestic horses (Equus
caballus). Proc. Natl Acad. Sci. USA 106, 947– 951.
22. Proops L, McComb K. 2012 Cross-modal individual
recognition in domestic horses (Equus caballus)
extends to familiar humans. Proc. R. Soc. B 282,
3131–3138. (doi:10.1098/rspb.2012.0626)
23. Somppi S, Törnqvist H, Hänninen L, Krause C, Vainio O. 2014 How dogs scan familiar and inverted faces: an eye movement study. Anim. Cogn. 17, 793–803.
24. Guo K, Meints K, Hall C, Hall S, Mills D. 2009 Left
gaze bias in humans, rhesus monkeys and domestic
dogs. Anim. Cogn.12, 409–418. (doi:10.1007/
25. Holden E, Calvo G, Collins M, Bell A, Reid J, Scott EM, Nolan AM. 2014 Evaluation of facial expression in acute pain in cats. J. Small Anim. Pract. 55, 615–621. (doi:10.1111/jsap.12283)
... In crossmodal experiments, horses, dogs and cats were presented with human faces expressing anger or joy accompanied by a joyful or angry voice. These animals looked differently at the pictures according to their correspondence with the voice (Albuquerque et al. 2016;Nakamura et al. 2018;Quaranta et al. 2020;Trösch et al. 2019), suggesting that they integrated these signals of human emotions across modalities. More specifically, in experiments with dogs and cats and in an experiment with horses, a preferential looking paradigm was used: two images were presented to the animals while one voice was broadcast. ...
... More specifically, in experiments with dogs and cats and in an experiment with horses, a preferential looking paradigm was used: two images were presented to the animals while one voice was broadcast. Dogs and cats looked more at the congruent image (i.e., the one that matched the sound - Albuquerque et al. 2016;Quaranta et al. 2020) whereas horses looked more at the incongruent image (i.e., the one that did not match the sound -Trösch et al. 2019). In another experiment on horses, an expectancy violation paradigm was used: horses saw a picture of an angry or joyful face, followed by a joyful or angry voice. ...
... ***p ≤ 0.001 with the observed emotional face, indicating that they associated the facial and vocal stimuli that expressed the same emotion. This result may indicate that horses form crossmodal representations of joy and sadness in which vocal and facial features are associated (Albuquerque et al. 2016;Jardat et al. 2023;Quaranta et al. 2020). This preference for the incongruent video was not observed further in the test, which could be explained by horses' strong preference for the joyful face after first appraising both videos, as we discuss below. ...
Full-text available
Communication of emotions plays a key role in intraspecific social interactions and likely in interspecific interactions. Several studies have shown that animals perceive human joy and anger, but few studies have examined other human emotions, such as sadness. In this study, we conducted a cross-modal experiment, in which we showed 28 horses two soundless videos simultaneously, one showing a sad, and one a joyful human face. These were accompanied by either a sad or joyful voice. The number of horses whose first look to the video that was incongruent with the voice was longer than their first look to the congruent video was higher than chance, suggesting that horses could form cross-modal representations of human joy and sadness. Moreover, horses were more attentive to the videos of joy and looked at them for longer, more frequently, and more rapidly than the videos of sadness. Their heart rates tended to increase when they heard joy and to decrease when they heard sadness. These results show that horses are able to discriminate facial and vocal expressions of joy and sadness and may form cross-modal representations of these emotions; they also are more attracted to joyful faces than to sad faces and seem to be more aroused by a joyful voice than a sad voice. Further studies are needed to better understand how horses perceive the range of human emotions, and we propose that future experiments include neutral stimuli as well as emotions with different arousal levels but a same valence.
... While the patient reports on their experience in human medicine, in veterinary medicine, the client is the most common proxy. In addition to being able to identify changes and degrees of their pet's subjective status, owners can also interpret changes over an extended period of time [10,11]. Several CROMs have been developed to assess canine joint disease, including the Liverpool Osteoarthritis in Dogs (LOAD) [12,13] and the Canine Orthopaedic Index (COI) [14]. ...
Full-text available
Objective Osteoarthritis is the most common joint disease in companion animals. Several client-report outcome measures (CROMs) have been developed and validated to monitor patients and their response to treatment. However, estimates for minimal clinically-important differences for these CROMs in the context of osteoarthritis have not been published. Patients and methods Data from the Clínica Veterinária de Cães (Portuguese Gendarmerie Canine Clinic) clinical records were extracted. Baseline and 30-day post-treatment follow-up data from 296 dogs treated for hip osteoarthritis were categorized based on an anchor question, and estimates of minimal clinically-important differences (MCIDs) using distribution-based and anchor-based methods were performed. Results For the LOAD, the anchor-based methods provided a MCID estimate range of -2.5 to -9.1 and the distribution-based methods from 1.6 to 4.2. For the COI, the anchor-based methods provided a MCID estimate range of -4.5 to -16.6 and the distribution-based methods from 2.3 to 2.4. For the dimensions of COI, values varied from -0.5 to -4.9 with the anchor-based methods and from 0.6 to 2.7 with the distribution-based methods. Receiver operator characteristic curves provided areas under the curve >0.7 for the COI, indicating an acceptable cut-off point, and >0.8 for the LOAD, indicating an excellent cut-off point. Conclusion Our estimates of MCIDs for dogs with OA were consistent with previously proposed values of -4 for the LOAD and -14 for the COI in a post-surgical intervention context. ROC curve data suggest that LOAD may more reliably differentiate between anchor groups. We also presented estimates from COI of -4 for Stiffness, Function, and Gait and -3 for quality of life. These estimates can be used for research and patient monitoring.
... Unfortunately, none of the dogs produced the sufficient number of artefact-free traces for the REM stage (mostly due to the rapid eye-movements inherent to REM, which result in EEG artefacts, especially in our case, when the presence of the vocal cues presumably resulted in a more superficial sleep), thus it could not be analysed. The final sample thus consisted of N = 5 subjects for analyses involving the wake ( Subjects 4,8,10,11,13), N = 6 dogs for analyses involving the non-REM (Subjects 1, 2, 3, 5, 9, 13) and 4 additional, thus N = 10 dogs (Subjects 1, 2, 3,4,5,7,9,11,12,13) for analyses involving the drowsiness sleep-stage. ...
Full-text available
Dogs live in a complex social environment where they regularly interact with conspecific and heterospecific partners. Awake dogs are able to process a variety of information based on vocalisations emitted by dogs and humans. Whether dogs are also able to process such information while asleep, is unknown. In the current explorative study, we investigated in N = 13 family dogs, neural response to conspecific and human emotional vocalisations. Data were recorded while dogs were asleep, using a fully non-invasive event-related potential (ERP) paradigm. A species (between 250–450 and 600–800 ms after stimulus onset) and a species by valence interaction (between 550 to 650 ms after stimulus onset) effect was observed during drowsiness. A valence (750–850 ms after stimulus onset) and a species x valence interaction (between 200 to 300 ms and 450 to 650 ms after stimulus onset) effect was also observed during non-REM specific at the Cz electrode. Although further research is needed, these results not only suggest that dogs neurally differentiate between differently valenced con- and heterospecific vocalisations, but they also provide the first evidence of complex vocal processing during sleep in dogs. Assessment and detection of ERPs during sleep in dogs appear feasible.
... Several studies have looked at aspects of dogs' visual and social cognition using experimental paradigms involving the broadcasting of images or videos. Dogs are skilful at discriminating pictures of conspecifics from human or other animal faces [24,25] and can match different dog vocalizations to coherent pictorial representations [26][27][28]. Video stimuli have been successfully used in domestic dog cognition research, for example, showing that dogs performed at above chance level in a classic pointing task when a projection of an experimenter performing the pointing gestures was used, thus implying that dogs could perceive the actual content of the videos as a human being [29]. Evidence suggests that dogs process the videos in a "confusion mode", exchanging the image and its referent and thus reacting roughly the same way to an image as to the real object [30]. ...
Full-text available
Dogs’ displacement behaviours and some facial expressions have been suggested to function as appeasement signals, reducing the occurrences of aggressive interactions. The present study had the objectives of using naturalistic videos, including their auditory stimuli, to expose a population of dogs to a standardised conflict (threatening dog) and non-conflict (neutral dog) situation and to measure the occurrence of displacement behaviours and facial expressions under the two conditions. Video stimuli were recorded in an ecologically valid situation: two different female pet dogs barking at a stranger dog passing by (threatening behaviour) or panting for thermoregulation (neutral behaviour). Video stimuli were then paired either with their natural sound or an artificial one (pink noise) matching the auditory characteristics. Fifty-six dogs were exposed repeatedly to the threatening and neutral stimuli paired with the natural or artificial sound. Regardless of the paired auditory stimuli, dogs looked significantly more at the threatening than the neutral videos (χ2(56, 1) = 138.867, p < 0.001). They kept their ears forward more in the threatening condition whereas ears were rotated more in the neutral condition. Contrary to the hypotheses, displacement behaviours of sniffing, yawning, blinking, lip-wiping (the tongue wipes the lips from the mouth midpoint to the mouth corner), and nose-licking were expressed more in the neutral than the threatening condition. The dogs tested showed socially relevant cues, suggesting that the experimental paradigm is a promising method to study dogs’ intraspecific communication. Results suggest that displacement behaviours are not used as appeasement signals to interrupt an aggressive encounter but rather in potentially ambiguous contexts where the behaviour of the social partner is difficult to predict.
... They have become our loyal companions, developing unique social skills for interacting with humans. For instance, studies indicate that dogs possess a sensitivity to our emotional states [12] and can interpret our social cues [13], even engaging in sophisticated communication through behaviors like gaze alternation [14]. Furthermore, dogs are capable of forming intricate attachment relationships with humans, resembling the bonds found in relationships between infants and caregivers [15]. ...
Background: Animal-assisted therapy, also known as pet therapy, is a therapeutic intervention that involves animals to enhance the well-being of individuals across various populations and settings. This systematic review aims to assess the outcomes of animal-assisted therapy interventions and explore the associated policies.
Methods: A total of 16 papers published between 2015 and 2023 were selected for analysis. These papers were chosen based on their relevance to the research topic of animal-assisted therapy and their availability in scholarly databases. Thematic synthesis and meta-analysis were employed to synthesize the qualitative and quantitative data extracted from the selected papers.
Results: The analysis included sixteen studies that met the inclusion criteria and were deemed to be of moderate or higher quality. Among these studies, four demonstrated positive results for therapeutic mediation and one for supportive mediation in psychiatric disorders. Additionally, all studies showed positive outcomes for depression and neurological disorders. Regarding stress and anxiety, three studies indicated supportive mediation while two studies showed activating mediation.
Conclusion: The overall assessment of animal-assisted therapy shows promise as an effective intervention in promoting well-being among diverse populations. Further research and the establishment of standardized outcome assessment measures and comprehensive policies are essential for advancing the field and maximizing the benefits of animal-assisted therapy.
... The unique bond between humans and dogs can be attributed to their co-evolution over the centuries, allowing them to work together seamlessly, especially in play [14,15]. Emotional recognition is also crucial in human-dog interactions, as indicated by researchers [16,17]. Various features have emerged from research on human-animal interactions. ...
For some students, school success is not a simple matter. A growing, innovative approach that supports students’ functioning at school is programs in which animals are involved in education. The involvement of animals, especially dogs, in education is known as animal-assisted education (AAE). A literature review of AAE indicated a positive influence of AAE programs on the quality of learning and social emotional development in children. This study explored whether AAE positively impacts the social and emotional outcomes of elementary school students aged between 8 and 13 years through mixed methods. The methods used were a survey and an observational study. The survey section of the study showed that students participating in the program with the dogs rated themselves, after the intervention period, significantly higher in terms of self-confidence and had a more positive score for relationships with other students after the intervention. As rated by their teachers, after the intervention period, students scored significantly higher in relation to work attitude, pleasant behavior, emotional stability, and social behavior. In the observational study, we analyzed the video material of students who participated in an AAE program with dogs. We concluded that all verbal and non-verbal behaviors of the students increased, except eye contact. The current study indicates future directions for theoretical underpinnings, improved understanding, and the empirical measurement of the underlying variables and mechanisms.
... In particular, dogs make and maintain eye contact and use a variety of facial gestures to effectively communicate with human companions [21][22][23] and may even have developed facial expressions in response to non-human stimuli, such as pain [24]. They likewise understand the emotional valence of human faces [25,26]. Nagasawa and colleagues [27] show that "human-like modes of communication, including mutual gaze, in dogs may have been acquired during domestication with humans". ...
Facial phenotypes are significant in communication with conspecifics among social primates. Less is understood about the impact of such markers in heterospecific encounters. Through behavioral and physical phenotype analyses of domesticated dogs living in human households, this study aims to evaluate the potential impact of superficial facial markings on dogs’ production of human-directed facial expressions. That is, this study explores how facial markings, such as eyebrows, patches, and widow’s peaks, are related to expressivity toward humans. We used the Dog Facial Action Coding System (DogFACS) as an objective measure of expressivity, and we developed an original schematic for a standardized coding of facial patterns and coloration on a sample of more than 100 male and female dogs (N = 103), aged from 6 months to 12 years, representing eight breed groups. The present study found a statistically significant, though weak, correlation between expression rate and facial complexity, with dogs with plainer faces tending to be more expressive (r = −0.326, p ≤ 0.001). Interestingly, for adult dogs, human companions characterized dogs’ rates of facial expressivity with more accuracy for dogs with plainer faces. Especially relevant to interspecies communication and cooperation, within-subject analyses revealed that dogs’ muscle movements were distributed more evenly across their facial regions in a highly social test condition compared to conditions in which they received ambiguous cues from their owners. On the whole, this study provides an original evaluation of how facial features may impact communication in human–dog interactions.
Evidence indicates that dogs display referential and intentional communication in situations that require cooperation with us. The communicative signal that occurs most frequently in these interactions is gaze alternation between the receiver (human) and the referent (food of interest). In this research, in addition to discussing aspects of the relationship between humans and dogs, we examined canine communicative production (frequency, duration and types of signals displayed) in situations in which a treat was inaccessible, placed on a height-adjustable table inside an appropriate container. Variations were applied to analyze the effects of the visibility of the food (hidden in an opaque container or visible in a transparent one, both sealed), the presence (or absence) of food inside the transparent container, and the presence (or absence) of the owner during the hiding. The research thus discussed aspects associated with object permanence, salience of the visual stimulus (food) and perspective taking.
Several studies have examined dogs' (Canis lupus familiaris) comprehension and use of human communicative cues. Relatively few studies have, however, examined the effects of human affective behavior (i.e., facial and vocal expressions) on dogs' exploratory and point-following behavior. In two experiments, we examined dogs' frequency of following an adult's pointing gesture in locating a hidden reward or treat when it occurred silently, or when it was paired with a positive or negative facial and vocal affective expression. Like prior studies, the current results demonstrate that dogs reliably follow human pointing cues. Unlike prior studies, the current results also demonstrate that the addition of a positive affective facial and vocal expression, when paired with a pointing gesture, did not reliably increase dogs' frequency of locating a hidden piece of food compared to pointing alone. In addition, and within the negative facial and vocal affect conditions of Experiments 1 and 2, dogs were delayed in their exploration, or approach, toward a baited or sham-baited bowl. However, in Experiment 2, dogs continued to follow an adult's pointing gesture, even when paired with a negative expression, as long as the attention-directing gesture referenced a baited bowl. Together these results suggest that the addition of affective information does not significantly increase or decrease dogs' point-following behavior. Rather, these results demonstrate that the presence or absence of affective expressions influences a dog's exploratory behavior, and the presence or absence of reward affects whether it will follow an unfamiliar adult's attention-directing gesture.
Human-like modes of communication, including mutual gaze, in dogs may have been acquired during domestication with humans. We show that gazing behavior from dogs, but not wolves, increased urinary oxytocin concentrations in owners, which consequently facilitated owners' affiliation and increased oxytocin concentration in dogs. Further, nasally administered oxytocin increased gazing behavior in dogs, which in turn increased urinary oxytocin concentrations in owners. These findings support the existence of an interspecies oxytocin-mediated positive loop facilitated and modulated by gazing, which may have supported the coevolution of human-dog bonding by engaging common modes of communicating social attachment. Copyright © 2015, American Association for the Advancement of Science.
Faces play an important role in communication and identity recognition in social animals. Domestic dogs often respond to human facial cues, but their face processing is poorly understood. In this study, the facial inversion effect (deficits in face processing when the image is turned upside down) and responses to personal familiarity were tested using eye movement tracking. A total of 23 pet dogs and eight kennel dogs were compared to establish the effects of life experiences on their scanning behavior. All dogs preferred conspecific faces and showed great interest in the eye area, suggesting that they perceived images representing faces. Dogs fixated on upright faces for as long as on inverted faces, but the eye area of upright faces gathered longer total duration and greater relative fixation duration than the eye area of inverted stimuli, regardless of the species (dog or human) shown in the image. Personally familiar faces and eyes attracted more fixations than strange ones, suggesting that dogs are likely to recognize conspecific and human faces in photographs. The results imply that face scanning in dogs is guided not only by the physical properties of images, but also by semantic factors. In conclusion, in a free-viewing task, dogs seem to target their fixations at naturally salient and familiar items. Facial images were generally more attractive for pet dogs than kennel dogs, but living environment did not affect conspecific preference or inversion and familiarity responses, suggesting that the basic mechanisms of face processing in dogs could be hardwired or might develop under limited exposure.
Crossmodal integration of audio/visual information is vital for recognition, interpretation and appropriate reaction to social signals. Here we examined how rhesus macaques process bimodal species-specific vocalizations by eye tracking, using an unconstrained preferential looking paradigm. Six adult rhesus monkeys (3M, 3F) were presented two side-by-side videos of unknown male conspecifics emitting different vocalizations, accompanied by the audio signal corresponding to one of the videos. The percentage of time animals looked to each video was used to assess crossmodal integration ability and the percentages of time spent looking at each of the six a priori ROIs (eyes, mouth, and rest of each video) were used to characterize scanning patterns. Animals looked more to the congruent video, confirming reports that rhesus monkeys spontaneously integrate conspecific vocalizations. Scanning patterns showed that monkeys preferentially attended to the eyes and mouth of the stimuli, with subtle differences between males and females such that females showed a tendency to differentiate the eye and mouth regions more than males. These results were similar to studies in humans indicating that when asked to assess emotion-related aspects of visual speech, people preferentially attend to the eyes. Thus, the tendency for female monkeys to show a greater differentiation between the eye and mouth regions than males may indicate that female monkeys were slightly more sensitive to the socio-emotional content of complex signals than male monkeys. The current results emphasize the importance of considering both the sex of the observer and individual variability in passive viewing behavior in nonhuman primate research.
The importance of the face in social interaction and social intelligence is widely recognized in anthropology. Yet the adaptive functions of human facial expression remain largely unknown. An evolutionary model of human facial expression as behavioral adaptation can be constructed, given the current knowledge of the phenotypic variation, ecological contexts, and fitness consequences of facial behavior. Studies of facial expression are available, but results are not typically framed in an evolutionary perspective. This review identifies the relevant physical phenomena of facial expression and integrates the study of this behavior with the anthropological study of communication and sociality in general. Anthropological issues with relevance to the evolutionary study of facial expression include: facial expressions as coordinated, stereotyped behavioral phenotypes, the unique contexts and functions of different facial expressions, the relationship of facial expression to speech, the value of facial expressions as signals, and the relationship of facial expression to social intelligence in humans and in nonhuman primates. Human smiling is used as an example of adaptation, and testable hypotheses concerning the human smile, as well as other expressions, are proposed. Yrbk Phys Anthropol 44:3-24, 2001.
Faces are one of the most salient classes of stimuli involved in social communication. Three experiments compared face-recognition abilities in chimpanzees (Pan troglodytes) and rhesus monkeys (Macaca mulatta). In the face-matching task, the chimpanzees matched identical photographs of conspecifics' faces on Trial 1, and the rhesus monkeys did the same after 4 generalization trials. In the individual-recognition task, the chimpanzees matched 2 different photographs of the same individual after 2 trials, and the rhesus monkeys generalized in fewer than 6 trials. The feature-masking task showed that the eyes were the most important cue for individual recognition. Thus, chimpanzees and rhesus monkeys are able to use facial cues to discriminate unfamiliar conspecifics. Although the rhesus monkeys required many trials to learn the tasks, this is not evidence that faces are less important social stimuli for them than for the chimpanzees.
This book follows a successful symposium organized in June 2009 at the Human Brain Mapping conference. The topic is at the crossroads of two domains of increasing importance and appeal in the neuroimaging/neuroscience community: multi-modal integration, and social neuroscience. Most of our social interactions involve combining information from both the face and voice of other persons: speech information, but also crucial nonverbal information on the person's identity and affective state. The cerebral bases of the multimodal integration of speech have been intensively investigated; by contrast, only a few studies have focused on nonverbal aspects of face-voice integration. This work highlights recent advances in investigations of the behavioral and cerebral bases of face-voice multimodal integration in the context of person perception, focusing on the integration of affective and identity information. Several research domains are brought together. Behavioral and neuroimaging work in normal adult humans is presented alongside evidence from other domains to provide complementary perspectives: studies in human children for a developmental perspective, studies in non-human primates for an evolutionary perspective, and studies in human clinical populations for a clinical perspective.
The question of whether animals have emotions and respond to the emotional expressions of others has become a focus of research in the last decade [1-9]. However, to date, no study has convincingly shown that animals discriminate between emotional expressions of heterospecifics, excluding the possibility that they respond to simple cues. Here, we show that dogs use the emotion of a heterospecific as a discriminative cue. After learning to discriminate between happy and angry human faces in 15 picture pairs, whereby for one group only the upper halves of the faces were shown and for the other group only the lower halves of the faces were shown, dogs were tested with four types of probe trials: (1) the same half of the faces as in the training but of novel faces, (2) the other half of the faces used in training, (3) the other half of novel faces, and (4) the left half of the faces used in training. We found that dogs for which the happy faces were rewarded learned the discrimination more quickly than dogs for which the angry faces were rewarded. This would be predicted if the dogs recognized an angry face as an aversive stimulus. Furthermore, the dogs performed significantly above chance level in all four probe conditions and thus transferred the training contingency to novel stimuli that shared with the training set only the emotional expression as a distinguishing feature. We conclude that the dogs used their memories of real emotional human faces to accomplish the discrimination task. Copyright © 2015 Elsevier Ltd. All rights reserved.
During the approximately 18–32 thousand years of domestication [1], dogs and humans have shared a similar social environment [2]. Dog and human vocalizations are thus familiar and relevant to both species [3], although they belong to evolutionarily distant taxa, as their lineages split approximately 90–100 million years ago [4]. In this first comparative neuroimaging study of a nonprimate and a primate species, we made use of this special combination of shared environment and evolutionary distance. We presented dogs and humans with the same set of vocal and nonvocal stimuli to search for functionally analogous voice-sensitive cortical regions. We demonstrate that voice areas exist in dogs and that they show a similar pattern to anterior temporal voice areas in humans. Our findings also reveal that sensitivity to vocal emotional valence cues engages similarly located nonprimary auditory regions in dogs and humans. Although parallel evolution cannot be excluded, our findings suggest that voice areas may have a more ancient evolutionary origin than previously known.