Emotion Capture among Real Couples
in Everyday Life
George Boateng
ETH Zürich
Zürich, Switzerland
gboateng@ethz.ch
Urte Scholz
University of Zürich
Zürich, Switzerland
urte.scholz@psychologie.uzh.ch
Janina Lüscher
University of Zürich
Zürich, Switzerland
janina.luescher@psychologie.uzh.ch
Tobias Kowatsch
ETH Zürich, University of St. Gallen
Zürich and St. Gallen, Switzerland
tobias.kowatsch@unisg.ch

Paper presented at the 1st Momentary Emotion Elicitation & Capture (MEEC) workshop, co-located with the ACM CHI Conference on Human Factors in Computing Systems, Honolulu, Hawaii, USA, April 25th, 2020. This is an open-access paper distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
For married adults, illness management is largely shared with their spouses and involves social support. Social support among couples has been shown to affect emotional well-being positively or negatively and to promote healthier habits among diabetes patients. Hence, through automatic emotion recognition, we could assess the emotional well-being of couples, which could inform the development and triggering of interventions to help couples better manage chronic diseases. We are developing an emotion recognition system for real couples in everyday life, and in this paper, we describe our approach to collecting sensor and self-report emotion data from Swiss-based, German-speaking couples in everyday life. We also discuss various aspects of the study: our novel approach of triggering data collection upon detecting that the partners are close and speaking, the self-reports and multimodal data we collect, and the privacy concerns our method raises.
Author Keywords
Emotion; Couples; Multimodal Sensor Data; Smartwatch;
Smartphone; Wearable Computing; Mobile Computing
CCS Concepts
• Human-centered computing → Ubiquitous and mobile
computing systems and tools; • Applied computing →
Psychology;
Introduction
Evidence suggests that for married adults, illness management is largely shared with their spouses and involves social support [16, 12]. Social support among spouses is associated with healthier habits among diabetes patients [9] and has been shown to have positive or negative effects on emotional well-being [11, 6, 4]. Hence, through emotion recognition, we could assess the emotional well-being of couples, which could inform the development and triggering of interventions to help couples better manage chronic diseases. More broadly, a system for automatically recognizing couples' emotions could help social psychology researchers understand various dynamics of couples' relationships and their impact on well-being.
Currently, psychologists measure emotions with self-report instruments such as the PANAS [18]. These self-reports are, however, impractical for continuous emotion measurement in everyday life because completing them frequently would be obtrusive. Several works in emotion recognition use data from actors reading texts in a specific emotional tone [10] or acting out dyadic interactions such as those of couples [5]. It is not clear whether algorithms developed on such data generalize to the naturalistic interactions of real couples.
We are developing an emotion recognition system to recognize the emotions of real couples in everyday life, and in this paper, we describe our approach to collecting sensor and self-report emotion data from Swiss-based, German-speaking couples in everyday life. We then discuss various aspects of the study: our novel approach of triggering data collection upon detecting that the partners are close and speaking, the self-reports and multimodal data we collect, and the privacy concerns our method raises.
Data Collection
We are running a field study in which we collect sensor and
self-report emotion data in the context of chronic disease
management among couples. Specifically, we collect data
for seven days from German-speaking couples in Switzer-
land in which one partner has type 2 diabetes [7]. We have collected data from eight couples so far.
Each partner is given a smartwatch and smartphone run-
ning the DyMand system, a novel open-source mobile and
wearable system that we developed for ambulatory assess-
ment of couples’ chronic disease management [2]. The
DyMand system triggers the collection of sensor and self-
report data for 5 minutes each hour during the hours that
subjects pick. We collect the following sensor data from
the smartwatch: audio, heart rate, accelerometer, gyro-
scope, Bluetooth low energy (BLE) signal strength between
watches and ambient light. After the sensor data collection,
a self-report is triggered on the smartphone that asks about their emotions over the last 5 minutes using the Affective Slider [1], which assesses the valence and arousal dimensions of emotion. We also record a 3-second video of their
facial expression while they complete the self-report on the
smartphone.
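To make the protocol concrete, the recording window and sensor set could be captured in a small configuration object. The following is a minimal Kotlin sketch; the type and field names are our own illustrative assumptions, not the actual DyMand configuration schema.

```kotlin
// Illustrative recording configuration mirroring the protocol above;
// names are hypothetical, not DyMand's actual configuration schema.
data class RecordingConfig(
    val windowMinutes: Int = 5,  // one 5-minute recording per hour
    val sensors: List<String> = listOf(
        "audio", "heartRate", "accelerometer",
        "gyroscope", "bleRssi", "ambientLight"
    )
)

fun main() {
    println(RecordingConfig())  // defaults reflect the study protocol
}
```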
We trigger sensor data collection when the partners are close and speaking, which we determine in two steps. First, we determine closeness using the BLE signal strength between the smartwatches: we check whether the signal strength is within a threshold that corresponds to a distance estimate [2].
Then, we determine if the partners are speaking by using a
voice activity detection (VAD) machine learning model that
classifies speech versus non-speech, which we developed
and implemented to run in real-time on the smartwatch [3].
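The two-step trigger can be sketched as follows. This is a minimal illustration, assuming an RSSI threshold and a simple energy gate as a stand-in for the VADLite speech/non-speech classifier; the names and threshold values are hypothetical.

```kotlin
// Hypothetical sketch of the two-step trigger; the RSSI threshold and
// energy gate are illustrative assumptions, not the DyMand/VADLite code.
const val RSSI_CLOSE_THRESHOLD = -70  // dBm; assumed proximity cutoff

// Step 1: proximity from the BLE signal strength between the watches.
fun partnersAreClose(rssiDbm: Int): Boolean = rssiDbm >= RSSI_CLOSE_THRESHOLD

// Step 2: stand-in for the on-watch VAD model (VADLite classifies
// speech vs. non-speech [3]; a trained model would replace this gate).
fun isSpeech(frame: FloatArray): Boolean {
    val energy = frame.sumOf { (it * it).toDouble() } / frame.size
    return energy > 1e-4
}

fun shouldTrigger(rssiDbm: Int, frame: FloatArray): Boolean =
    partnersAreClose(rssiDbm) && isSpeech(frame)
```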
Discussion
We hypothesize that we are most likely to collect high-quality sensor and self-report emotion data while the partners are interacting. Hence, rather than triggering data collection at random times within the hour, which is the standard approach [8, 14], we use a novel method that triggers data collection after we detect that the partners are close and speaking. If these conditions are not met within the hour, we do a backup recording by triggering data collection in the last 15 minutes of the hour. This approach has the potential to capture many conversation moments, providing rich data for developing the emotion recognition system. Other researchers
can use our DyMand system for their data collection as the
code is open source [2]. Additionally, the methods we use
could also be used by other researchers to optimize the
collection of sensor and self-report data among couples or
other dyads in daily life.
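A sketch of the trigger-with-backup logic, assuming a per-hour decision evaluated each minute; the function and parameter names are hypothetical and the actual DyMand scheduler may differ.

```kotlin
// Hypothetical per-hour trigger-with-backup decision; minute 45 marks
// the start of the last 15 minutes of the hour, per the backup rule.
fun shouldRecordNow(minuteOfHour: Int, conditionsMet: Boolean, alreadyRecorded: Boolean): Boolean {
    if (alreadyRecorded) return false  // at most one recording per hour
    if (conditionsMet) return true     // partners close and speaking
    return minuteOfHour >= 45          // backup window: last 15 minutes
}

fun main() {
    println(shouldRecordNow(10, conditionsMet = false, alreadyRecorded = false)) // false
    println(shouldRecordNow(50, conditionsMet = false, alreadyRecorded = false)) // true (backup)
}
```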
We use the Affective Slider rather than other self-reports such as the PANAS because it can be completed easily and quickly. Additionally, its valence and arousal dimensions, based on Russell's circumplex model of emotions [15], can be used to place various emotions in a two-dimensional space. Currently, we collect self-report emotion data on the smartphones given to the couples. The Affective Slider could also be implemented on the smartwatch to ease the burden of completing the self-report and make the process quicker.
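For illustration, one Affective Slider response can be stored as a valence-arousal pair and coarsely placed on Russell's circumplex. This Kotlin sketch assumes sliders normalized to [0, 1]; the type and the quadrant labels are our own illustrative choices, not part of the study instrument.

```kotlin
// Hypothetical record of one Affective Slider response; the [0, 1] range
// and quadrant labels are assumptions based on the circumplex model [15].
data class SelfReport(val valence: Double, val arousal: Double) {
    init { require(valence in 0.0..1.0 && arousal in 0.0..1.0) }

    // Coarse circumplex quadrant, for illustration only.
    fun quadrant(): String = when {
        valence >= 0.5 && arousal >= 0.5 -> "positive, high arousal (e.g., excited)"
        valence >= 0.5                   -> "positive, low arousal (e.g., calm)"
        arousal >= 0.5                   -> "negative, high arousal (e.g., angry)"
        else                             -> "negative, low arousal (e.g., sad)"
    }
}
```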
We collect multimodal sensor data using a smartwatch because previous work has shown that multimodal approaches to emotion recognition outperform unimodal ones [10]. Additionally, in an everyday-life context, certain data modalities might not always be available; hence, emotion recognition systems need to perform well with subsets of these modalities. In the future, other sensor data about behavioral patterns, such as phone unlock frequency, frequency of phone calls, and messages sent, could also be collected to aid the recognition task [17].
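One simple way to tolerate missing modalities is late fusion over whatever per-modality predictions are available, as in the sketch below; the modality names and the averaging rule are illustrative assumptions, not our actual models.

```kotlin
// Minimal late-fusion sketch that tolerates missing modalities by
// averaging the available per-modality scores; purely illustrative.
fun fuseScores(scores: Map<String, Double?>): Double? {
    val present = scores.values.filterNotNull()
    return if (present.isEmpty()) null else present.average()
}

fun main() {
    // Gyroscope unavailable in this window; fusion uses the other two.
    val arousal = mapOf("audio" to 0.71, "heartRate" to 0.55, "gyroscope" to null)
    println(fuseScores(arousal))  // 0.63
}
```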
Significant privacy concerns and ethical implications arise because sensitive data such as audio are collected frequently. We address these concerns in several ways. First, our study protocol was reviewed and approved by the ethics committee of the canton of Zurich. Second, we collect a maximum of 5 minutes of audio data per hour so as not to record a significant portion of the couples' everyday life. Third, to protect the privacy of bystanders not taking part in the study, we ask subjects to wear a tag we give them, indicating to others around them that they may be recorded. Finally, when the couples return the devices after the study, we give them the option to listen to the audio recordings and to request the deletion of any of them without explanation. This approach has been used in other studies [13, 14].
Conclusion
In this work, we described our approach to collecting sensor and self-report emotion data from Swiss-based, German-speaking couples in everyday life. We discussed various aspects of the study. First, we discussed our novel approach of triggering data collection upon detecting that the partners are close and speaking, rather than at random times within the hour. Next, we discussed using the smartphone-based Affective Slider self-report because it is quick to complete. Then, we discussed collecting multimodal sensor data with a smartwatch because it could yield more accurate emotion recognition models. Finally, we discussed our approach to addressing privacy concerns, such as giving subjects the option to request the deletion of any of their audio recordings upon returning the devices.
Acknowledgements
We are grateful to Prabakaran Santhanam and Dominik
Rügger for helping with the development of the mobile
software tools that we are using in running the study. This
work is funded by the Swiss National Science Foundation
(CR12I1_166348).
REFERENCES
[1] Alberto Betella and Paul FMJ Verschure. 2016. The
affective slider: A digital self-assessment scale for the
measurement of human emotions. PLoS ONE 11, 2
(2016), e0148037.
[2] George Boateng, Prabhakaran Santhanam, Janina
Lüscher, Urte Scholz, and Tobias Kowatsch. 2019a.
Poster: DyMand–An Open-Source Mobile and
Wearable System for Assessing Couples’ Dyadic
Management of Chronic Diseases. In The 25th Annual
International Conference on Mobile Computing and
Networking. 1–3.
[3] George Boateng, Prabhakaran Santhanam, Janina
Lüscher, Urte Scholz, and Tobias Kowatsch. 2019b.
VADLite: an open-source lightweight system for
real-time voice activity detection on smartwatches. In
Adjunct Proceedings of the 2019 ACM International
Joint Conference on Pervasive and Ubiquitous
Computing and Proceedings of the 2019 ACM
International Symposium on Wearable Computers.
902–906.
[4] Niall Bolger and David Amarel. 2007. Effects of social
support visibility on adjustment to stress: Experimental
evidence. Journal of personality and social psychology
92, 3 (2007), 458.
[5] Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe
Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N
Chang, Sungbok Lee, and Shrikanth S Narayanan.
2008. IEMOCAP: Interactive emotional dyadic motion
capture database. Language resources and evaluation
42, 4 (2008), 335.
[6] Masumi Iida, Mary Ann Parris Stephens, Karen S
Rook, Melissa M Franks, and James K Salem. 2010.
When the going gets tough, does support get going?
Determinants of spousal support provision to type 2
diabetic patients. Personality and Social Psychology
Bulletin 36, 6 (2010), 780–791.
[7] Janina Lüscher, Tobias Kowatsch, George Boateng,
Prabhakaran Santhanam, Guy Bodenmann, and Urte
Scholz. 2019. Social Support and Common Dyadic
Coping in Couples’ Dyadic Management of Type II
Diabetes: Protocol for an Ambulatory Assessment
Application. JMIR research protocols 8, 10 (2019),
e13685.
[8] Matthias R Mehl, Megan L Robbins, and Fenne große
Deters. 2012. Naturalistic observation of
health-relevant social processes: The Electronically
Activated Recorder (EAR) methodology in
psychosomatics. Psychosomatic Medicine 74, 4
(2012), 410.
[9] Daisy Miller and J Lynne Brown. 2005. Marital
interactions in the process of dietary change for type 2
diabetes. Journal of Nutrition Education and Behavior
37, 5 (2005), 226–234.
[10] Soujanya Poria, Erik Cambria, Rajiv Bajpai, and Amir
Hussain. 2017. A review of affective computing: From
unimodal analysis to multimodal fusion. Information
Fusion 37 (2017), 98–125.
[11] Gabriele Prati and Luca Pietrantoni. 2010. The relation
of perceived and received social support to mental
health among first responders: a meta-analytic review.
Journal of Community Psychology 38, 3 (2010), 403–
417.
[12] Tuula-Maria Rintala, Pia Jaatinen, Eija Paavilainen,
and Päivi Åstedt-Kurki. 2013. Interrelation between
adult persons with diabetes and their family: a
systematic review of the literature. Journal of family
nursing 19, 1 (2013), 3–28.
[13] Megan L Robbins, Elizabeth S Focella, Shelley Kasle,
Ana María López, Karen L Weihs, and Matthias R
Mehl. 2011. Naturalistically observed swearing,
emotional support, and depressive symptoms in
women coping with illness. Health Psychology 30, 6
(2011), 789.
[14] Megan L Robbins, Ana María López, Karen L Weihs,
and Matthias R Mehl. 2014. Cancer conversations in
context: naturalistic observation of couples coping with
breast cancer. Journal of Family Psychology 28, 3
(2014), 380.
[15] James A Russell. 1980. A circumplex model of affect.
Journal of personality and social psychology 39, 6
(1980), 1161.
[16] Amber J Seidel, Melissa M Franks, Mary Ann Parris
Stephens, and Karen S Rook. 2012. Spouse control
and type 2 diabetes management: moderating effects
of dyadic expectations for spouse involvement. Family
relations 61, 4 (2012), 698–709.
[17] Mirjam Stieger, Marcia Nißen, Dominik Rüegger,
Tobias Kowatsch, Christoph Flückiger, and Mathias
Allemand. 2018. PEACH, a smartphone- and
conversational agent-based coaching intervention for
intentional personality change: study protocol of a
randomized, wait-list controlled trial. BMC psychology
6, 1 (2018), 43.
[18] David Watson, Lee Anna Clark, and Auke Tellegen.
1988. Development and validation of brief measures of
positive and negative affect: the PANAS scales.
Journal of personality and social psychology 54, 6
(1988), 1063.