This is the author’s version of a work that was published in the following source
Benke, I., Knierim, Michael T., Maedche, A. (2020): Chatbot-based Emotion Management for Distributed
Teams: A Participatory Design Study. Proc. ACM Hum.-Comput. Interact. 4, CSCW2, Article 118
(October 2020), 30 pages. https://doi.org/10.1145/3415189
Please note: Copyright is owned by the author and / or the publisher.
Commercial use is not allowed.
Chatbot-based Emotion Management for Distributed Teams:
A Participatory Design Study
IVO BENKE, Karlsruhe Institute of Technology, Germany
MICHAEL THOMAS KNIERIM, Karlsruhe Institute of Technology, Germany
ALEXANDER MAEDCHE, Karlsruhe Institute of Technology, Germany
Fueled by the pervasion of tools like Slack or Microsoft Teams, the usage of text-based communication in distributed teams has grown massively in organizations. This brings distributed teams many advantages; however, a critical shortcoming in these setups is the decreased ability to perceive, understand, and regulate emotions. This is problematic because better emotion management abilities of team members positively impact team-level outcomes like team cohesion and team performance, while poor abilities diminish communication flow and well-being. Leveraging chatbot technology in distributed teams has been recognized as a promising approach to reintroduce and improve upon these abilities. In this article we present three chatbot designs for emotion management for distributed teams. To develop these designs, we conducted three participatory design workshops which resulted in 153 sketches. Subsequently, we evaluated the designs in an exploratory evaluation with 27 participants. Results show general stimulating effects on emotion awareness and communication efficiency. Further, they report emotion regulation and increased compromise facilitation through social and interactive design features, but also perceived threats like loss of control. With some design features adversely impacting emotion management, we highlight design implications and discuss chatbot design recommendations for enhancing emotion management in teams.
CCS Concepts: • Human-centered computing → Computer supported cooperative work; Natural language interfaces; Participatory design.
Additional Key Words and Phrases: emotion management, chatbot, team communication, participatory design
ACM Reference Format:
Ivo Benke, Michael Thomas Knierim, and Alexander Maedche. 2020. Chatbot-based Emotion Management for
Distributed Teams: A Participatory Design Study. Proc. ACM Hum.-Comput. Interact. 4, CSCW2, Article 118
(October 2020), 30 pages. https://doi.org/10.1145/3415189
1 INTRODUCTION
Distributed teams are widespread in daily work settings [24]. Recent global developments due to the COVID-19 pandemic have highlighted this trend more than ever expected [52]. The way teams work has experienced substantial change in recent years with the emergence of tools like Slack or Microsoft Teams, specifically leading to increased text-based communication [24, 51]. However, these tools only provide limited capacity for traversing socio-emotional information (e.g., non-verbal cues) [27]. This reduction has an adverse effect on the management of emotions (EM) - the ability
Authors’ addresses: Ivo Benke, ivo.benke@kit.edu, Karlsruhe Institute of Technology, Kaiserstraße 89-93, Karlsruhe, Germany,
76131; Michael Thomas Knierim, michael.knierim@kit.edu, Karlsruhe Institute of Technology, Kaiserstraße 89-93, Karlsruhe,
Germany, 76131; Alexander Maedche, alexander.maedche@kit.edu, Karlsruhe Institute of Technology, Kaiserstraße 89-93,
Karlsruhe, Germany, 76131.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee
provided that copies are not made or distributed for prot or commercial advantage and that copies bear this notice and
the full citation on the rst page. Copyrights for components of this work owned by others than ACM must be honored.
Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires
prior specic permission and/or a fee. Request permissions from permissions@acm.org.
© 2020 Association for Computing Machinery.
2573-0142/2020/10-ART118 $15.00
https://doi.org/10.1145/3415189
Proc. ACM Hum.-Comput. Interact., Vol. 4, No. CSCW2, Article 118. Publication date: October 2020.
118:2 Ivo Benke et al.
to reason about and use emotions to enhance thought through perception, understanding, and regulation - which is important for team collaboration. Simultaneously, EM is highly challenging for self-managing teams in distributed conditions [56]. While successful EM exerts a strong positive influence on team-level outcomes like team cohesion and performance [31], poor EM, in contrast, may lead to decreases in decision-making quality, communication flow, and well-being [2, 11, 40]. The lack of adequate EM is therefore a major challenge for text-based team communication. For these reasons, emotions in teams and their influence on team work have gained increasing interest in research [70].
Beyond the primary function of text-based communication, modern collaboration tools allow for the integration of third-party applications [42]. Chatbots are one particularly interesting instance of such applications since they have been suggested as team facilitators for task support [1, 69]. In this function, chatbot applications represent a novel opportunity for EM within teams, as they can serve as a replacement for missing human emotion managers [58]. Due to their abilities in natural language processing, precise information retrieval [44, 75], and emotion recognition [60], they are particularly suited for the role of managing team emotions, and likely better suited than other interaction interfaces such as notifications. Previous research has followed similar avenues and proposed an exploratory prototype [56] to create a positive affective tone within a group chat. This work provides initial evidence for the basic ability of chatbots to support emotion regulation. However, the study results also showed that users perceived the chatbot design as not useful and partially annoying. One well-known solution to overcome such problems is to involve end-users in the design process [34].
In this study, we expand early research [56] by leveraging a participatory design approach [34]. We conducted participatory design workshops with end-users to investigate chatbot-based EM for teams in order to overcome previously identified challenges. To scope the design space, three relevant entities guided the design process: the messaging system, the interacting team, and the chatbot. Three design workshops with 16 participants yielded 153 design sketches. Through a user-informed selection process, three final design prototypes were subsequently derived: NeutralBot (NBT), which reports neutrally; SocialBot (SBT), which embeds anthropomorphic and social design features; and ActionBot (ABT), which acts as an immersive moderator that actively intervenes (e.g., through break suggestions). These prototypes were then evaluated in a laboratory experiment with 27 participants (9 three-member groups) and ensuing interviews in order to gain detailed insights.
Both quantitative and qualitative results provide evidence that all chatbot designs (NBT, SBT, ABT) positively influence emotion awareness and communication efficiency. Beyond that, SBT and ABT show positive developments in emotion regulation and compromise facilitation through the stimulating effects of social (e.g., anthropomorphic appearance) and interactive (e.g., break) design features. While the general chatbot experience was positive, the findings highlight the importance of considering contextual factors (timing, accuracy, time pressure) and of limiting overly obtrusive (content deletion, large images) and overly neutral interventions (missing explanations) in order not to undermine these effects. Finally, since emotions are a sensitive domain for users, we also find first evidence for threats of chatbot-based EM, with a described feeling of surveillance and loss of control.

In summary, the present work contributes to the literature on chatbot-based EM in distributed team communication in a three-fold manner: (1) We present three user-informed design concepts for chatbot-based EM for distributed teams (Sections 3, 4). (2) We evaluate these design concepts regarding the participants' experience and the EM abilities (Sections 5, 6). (3) We discuss in detail the arising design implications and highlight negative pitfalls of the designs (Section 7). Based on this, we suggest a selective combination of design features to improve positive effects and mitigate
downsides of the designs for the future of distributed teams, especially in times of new workplace realities like those created by COVID-19. Finally, our findings reveal the importance of considering feelings of loss of control in the future design of chatbot-based EM for distributed teams.
2 RELATED WORK
2.1 Distributed Teams and Emotion Management
Distributed, virtual teams are defined as groups of individuals that are geographically dispersed and brought together by digital technologies to work on a particular task [22, 35]. Such teams are common in the workplace and their interaction dynamics can be generically described by Input-Process-Output (IPO) models [38]. Input factors of the team influence team processes and emergent states, which finally determine team effectiveness [13, 38, 45, 50, 61]. Thereby, effectiveness is a matter of input and process management. Team processes can be divided into three mutually influencing categories: cognitive, affective, and behavioral [17]. Affective dynamics and the abilities to manage emotions have been increasingly discussed in the team effectiveness literature as an important driver for effectiveness-improving behaviors like decision-making and conflict management [31, 70]. EM abilities have been researched in various forms, but are herein understood as "the ability to carry out accurate reasoning about emotions and the ability to use emotions and emotional knowledge to enhance thought" [47]. By this definition, EM contains three main causally related competences: emotion perception, understanding, and regulation [31]. Emotion perception represents the ability to identify emotions in oneself or others, as well as in other stimuli like stories [5]. The ability to understand emotions "entails understanding how emotions evolve over time, how emotions differ from each other, and which emotion is most appropriate for a given context" [31]. Finally, emotion regulation is considered a central factor for the influence on behavioral outcome variables. It is described as the processes by which individuals influence which emotions they have, when they have them, and how they experience and express them [25]. Through the management of these three components, team-level constructs like team cohesion, satisfaction, and conflict may be influenced, and finally team effectiveness may be stimulated [13]. Poor management of these components, in contrast, creates adverse influence. For individuals, a lower degree of EM leads to task-related worry and avoidance [46], and enforces work-family conflict and lower career commitment [11]. For the team, absent EM enforces communicative breakdowns [4] and decreased decision-making [40]. Depending on context factors like task type and emotional labor, these negative effects of poor EM are enhanced [31]. Both the positive effect of increased EM and the negative effect of poor EM prove the importance of EM support. Furthermore, as there is a global trend towards more remote work and team collaboration at the workplace, which aggravates EM, EM skills are becoming even more valuable [30]. Therefore, improving EM in distributed team communication through innovative technologies is the focus of this study.
Fig. 1. Simplified representation of human emotion management and regulation process according to [25, 31].
2.2 Emotions in Text-based Communication
Instant text-based messaging (IM) provides real-time communication via chat between users [41]. IM has several benefits, foremost the ability to know when personal contacts are available (increased co-presence), nearly instantaneous communication, and the ability to carry on several conversations at once [15], which is why it is highly used in the workplace. However, IM also comes with specific
pitfalls like increased workplace disruption, which leads to user frustration and decreased efficiency [14, 16]. These IM costs show the fragility of communication flow through IM.
Besides its disruptive characteristics for users, communication through IM is "lean" by nature of the system [27]. Much of the transmitted information, including affective connotations, is beyond the written word. While such information is not generally impossible to add, research has found that some system user groups (e.g., men) tend not to explicitly re-introduce emotionally explicit annotations [18]. This property of reduced social signals limits not only the availability and understanding of emotional information in single messages, but also the overall efficiency of the communication, since content may be misinterpreted. Also, the signal reduction interferes with the regulation of individual and team emotions [3], which can fuel the development of misunderstandings and non-productive conflicts [12]. Two approaches can be pursued to overcome these challenges: (1) increasing the technology's transfer capacity, and/or (2) improving the capabilities of the communication partners to work with the existing emotional information. Prior to this work, studies have focused on the first approach of exploring design features that augment the capabilities of text-based communication through the provision of additional emotional information. [27] developed a text-based messenger enriched with heart-rate information called HeartChat; related designs added context annotation (ContextChat) [8] and font personalization (TabScript) [9], which enriched the text-based communication channel with emotional information in real time. In these works, the additional emotional information is based on social signals the communication partners are sending or on more explicit input elicited from the involved actors. Bubba Talk [67], Russkman IM [64], and ChatCircles [54] explored different forms of visualization for this purpose. Curtains Messenger put the focus on context by showing an opened or closed curtain when the chat partner was present or absent [59]. These studies proved that the conveyed and detected emotional content within messages can be increased through different visualizations and the usage of context information, although they also experienced adoption issues and problems in understanding. Some studies used physical sensors, e.g., EEG [39] or EmpaTalk, using skin conductance and blood volume, to measure emotional information. They were able to transfer more emotional information within dyadic text-based communication. However, such approaches are invasive for the user and, with current technological devices, not yet practical for group chats in the daily work routine. Therefore, they are out of scope of this study.
To sum up, while the findings of previous studies on emotional augmentation of text-based communication are relevant for the design of this study, since they have highlighted different aspects (e.g., visualization, context, timing), such designs mainly focused on expanding the communication channel. With this work, we focus on the second approach of supporting the abilities of EM through innovative technologies. Therefore, we aim to explore designs that improve the abilities of communication partners to perceive, understand, and regulate their emotions through the provision of EM.
2.3 Chatbots in Teams Leveraging Emotional Information
Chatbots have existed since the 1960s, starting with ELIZA [72]. Since then, the majority of research on chatbots has focused on the communication of chatbots with a human counterpart, a dyadic interaction [66], and has neglected the polyadic constellation of multiple human team members together with a chatbot. However, such interaction forms are on the rise, since collaboration tools like Slack, Microsoft Teams, or Facebook Messenger allow for the integration of third-party applications into group chats [37, 42]. In the following we present relevant chatbot support studies along the dimensions of interaction level (dyadic, polyadic), support level (individual, team), and task vs. emotional support.
Dyadic Interaction. In general, there exists an extensive research body on chatbot support for individuals (in dyadic interaction with the chatbot). However, to keep the related work focused, we shed light on selected studies that have particularly addressed the support of emotions through chatbots in dyadic interactions. [73] explored the usage of chatbots for reattachment support by focusing on work-related emotions for work tasks with their Switchbot. They documented a sustainable productivity increase when conversations addressed work-related emotions. [36] developed and evaluated Amber, an intelligent chatbot for individual task scheduling and work-break suggestions for emotional well-being. They reported positive aspects, but also challenges when deploying chatbots at work. Both studies documented positive outcomes for chatbots and emotion support on the individual level and thereby indicate potential for the deployment of chatbots for EM in team communication.
Polyadic Interaction - Individual Task Support. A set of relevant studies has targeted the idea of plugging chatbots into polyadic interaction. A research body exists with a focus on chatbot application in group chat with different generic focal aspects (search, decision-making, modelling). Nevertheless, these studies provide diverging results regarding the design process and outcomes. Targeting individual support within teams, TaskBot was designed specifically to coordinate tasks [69]. [28] investigated how agents in use on Google Allo may support interaction through content-related suggestions. Both studies showed that the chatbot approach is accepted and may be beneficial for users in such an interaction.
Polyadic Interaction - Team Task Support. Only few studies have researched forms of team support in polyadic interaction. [1] investigated the support of collaborative search tasks through SearchBots and documented enhanced search experiences. Simplifying decision-making, [57] used chatbots for supporting modelling processes in teams with their chatbot Socio. Nevertheless, all of these studies focused primarily on supporting task management.
Polyadic Interaction - Individual Emotional Support. Literature exploring EM in group chat using chatbots is scarce, and the existing results still demand detailed exploration of user experience and design in future work. On the individual support level, ReactionBot increased the communication channel's capacity for affective signals through the continuous sending of emojis [43]. While the results showed an increase in self-awareness, they also reported high anxiety towards negative emotion leakage. Providing coaching of individuals, the collaborative coach CoCo provides individuals with feedback on previous multiparty video-conferences [63]. The study reported changes in follow-up team work in the form of more balanced participation and slight increases in emotional communication. However, both approaches operated on the individual level, which might create conflict between the participants.
Polyadic Interaction - Team Emotional Support. As one of the first studies targeting team-level support, GremoBot has used advances in artificial intelligence to provide real-time emotion regulation mechanisms for teams, with a chatbot sending graphical notification messages in case of negative emotions [56]. With the basic goal of establishing a positive affective tone, the findings from this study provide initial ideas about the potential positive effects of chatbot-based EM in distributed team settings. However, the study also reports mixed results regarding the perceived usefulness of chatbot emotion regulation, which might be because the design was not informed by real users. This inspires a deepened exploration of design instances regarding EM in teams in general and the specific components of EM in detail. The experiences with GremoBot revealed negative results like annoyance with the chatbot, as well as limited expressiveness regarding the nuanced qualitative implications. In order to provide implications beyond these initial results, design studies with real users are necessary. Therefore, this study targets the existing gap and
extends previous findings through a participatory design approach and the development of chatbot instances for EM in group chat with real users.
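To make the intervention pattern discussed above concrete, the following sketch illustrates the general mechanism such chatbots implement: monitor team messages, estimate the affective tone, and post a neutral report once it drops below a threshold. This is not the authors' system or GremoBot's actual implementation; the word lexicon, window size, threshold, and intervention text are all hypothetical placeholders (real systems use trained emotion-recognition models).

```python
from collections import deque

# Hypothetical mini-lexicon; production systems use trained sentiment/emotion models.
NEGATIVE = {"angry", "stupid", "hate", "wrong", "annoying"}
POSITIVE = {"great", "thanks", "agree", "good", "nice"}

def message_valence(text: str) -> int:
    """Score one chat message: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

class EmotionManagerBot:
    """Watches a rolling window of team messages and intervenes when the
    aggregate valence drops below a threshold (a NeutralBot-style report)."""

    def __init__(self, window: int = 5, threshold: int = -2):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, text: str):
        """Return an intervention message, or None if no intervention is due."""
        self.history.append(message_valence(text))
        if sum(self.history) <= self.threshold:
            self.history.clear()  # avoid re-triggering on the same episode
            return "Team mood seems tense. Consider summarizing open points."
        return None

bot = EmotionManagerBot()
replies = [bot.observe(m) for m in
           ["I hate this plan", "this is stupid and wrong", "ok"]]
# Only the second message pushes the window past the threshold,
# so replies contains exactly one intervention string.
```

The design choice of clearing the window after an intervention mirrors a concern raised later in the paper: repeated triggering on the same conflict episode would make the chatbot obtrusive and annoying.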
3 DESIGN EXPLORATION
For the development of chatbot designs we followed a participatory approach in order to actively involve end-users in the design process [33, 34, 65]. In a second step, we applied two participatory design techniques in three design workshops in order to identify design themes. Three entities represent the main interacting components in chatbot-based EM within group chat and therefore constitute the fundamental design dimensions: (1) the text-based communication system, (2) the design characteristics of chatbots, and (3) the team configuration.
3.1 Design Pillars
Text-based Communication Systems. Text-based communication is the predetermined environment for chatbot-based EM. Therefore, we rely on the design space for augmented mobile messaging by [8]. In abstraction, it decomposes systems into three levels related to the sender characteristics (in this case the team and its members), the channel characteristics (the messaging application enriched through a chatbot), and the receiver characteristics, which is the team as well. Furthermore, the visualization for supporting EM needs to be considered. For this reason we used explorative findings and their outcomes from existing work [63] and applied them to confine our design space.
Design Characteristics of Chatbots. Social cues are inherent design aspects of chatbots and thus relevant design dimensions. A chatbot social cue triggers a social reaction of the user towards the chatbot [55]. According to [20], social characteristics of chatbots can be divided into verbal, visual, auditory, and invisible characteristics. Based on these design foundations we derive three major design aspects: (1) the visual appearance of the chatbot and its messages, (2) the verbal behavior of the chatbot, and (3) "invisible" behavior related to its interaction patterns.
Team Conguration. This study designs chatbot-based EM for teams. Therefore, the design must
incorporate aspects of team communication patterns and team roles as well as inherent team
processes, in this case the ability and processes of EM [
13
]. Designing chatbots as mutual team
members for humans raises specic demands on their role in the team. [
77
] have identied dierent
roles in computer-supported groups, like moderators, mediators or leader, which a chatbot as
emergent leader can maintain.
3.2 Ideation Workshops
We conducted three design workshops with 16 participants in total (cf. Table 1 for detailed demographics). Two workshops were conducted with students from a public university who had gained design expertise by attending a course on human-centered design. The sample contained participants from four different cultural groups (Asia, Middle-East, Europe, South-America). One participant had a visual impairment. The participants did not mention familiarity with each other beyond the lecture. The third workshop was conducted with six participants from a software company with professions in user experience (UX) design, marketing, and product management. By these means, three distinct user characteristics (profession, culture, impairment) were included in the design process in order to involve broad user perspectives. In all workshops the participants first received an introduction to the scenario of EM by chatbots for group chats and were familiarized with the design goals and boundary treatments. Afterwards, the participants applied two participatory design techniques: first, the 6-8-5 method, an individual creativity technique [21], and second, the Walt Disney ideation method [19], a group design technique. The design sessions were moderated by the primary researcher. Sessions lasted on average two hours and were audio-recorded. Within each session and technique, the procedure included three to four rounds of sketching. During the first technique we encouraged the participants to develop three to six sketches independently in each round and to be creative within the stated boundary treatments and design goals. After each round the participants shortly presented their sketches. Throughout the second design technique, the participants took over three roles in three rounds: (1) the dreamer, who shall dream about the optimal solutions; (2) the realist, who must adapt the dreamer's ideas to reality; and finally (3) the critic, who should put the feasibility of the sketch to the acid test. After each round the groups exchanged sketches and continued to work on another group's sketch.
Table 1. Demographic statistics for design workshop participants.

W1 (P1-P4): 75.00% female / 25.00% male; age M: 25.25, SD: 2.75; Industrial Engineering; 100.00% Bachelor; cultural background: Europe (2), Asia (1), Middle-East (1); recruited from a university study pool.

W2 (P5-P10): 16.67% female / 83.33% male; age M: 25.00, SD: 1.34; Industrial Engineering, Mathematics, Computer Science; 100.00% Bachelor; cultural background: Europe (6); recruited from a university study pool; 1 participant with visual impairment.

W3 (P11-P16): 66.67% female / 33.33% male; age M: 27.33, SD: 2.33; User Experience Design, Marketing, Product Management; 66.67% Bachelor, 33.33% Master; cultural background: Europe (5), South-America (1); recruited from a software company.

Total (P1-P16): 43.75% female / 56.25% male; age M: 25.94, SD: 1.78; degrees: 87.50% Bachelor, 12.50% Master; cultural background: Europe (13), Asia (1), Middle-East (1), South-America (1); 1 participant with visual impairment.
3.3 Ideation Findings
In the three workshops the participants developed 153 design sketches in total (cf. Figures 2, 3, 4). For an overview, thematically matching design features are organized into themes. These themes were derived through a qualitative content analysis in order to develop a category system. This system was first deductively created from the design pillar categories and afterwards expanded through inductive category development [6, 48].

Graphical Visualization of the Chatbot. Most common for the graphical appearance was the usage of symbols and pictorial language in chatbot messages to express the emotional state of the team. The use of weather icons as analogies, for example storm/sun for bad/good mood, was generated several times. Similar approaches were emojis and the application of "thumbs up"/"thumbs down" indicators. More extroverted symbols were a bombing raid, or an image of explosions known from comics as a symbol for conflict. Further, the participants sketched short GIF-like video images to capture team emotional states and to intensify these states. A design used for both concepts was traffic lights. This idea was reinforced by the amplification of chatbot messages with colored borders or backgrounds. Images were found to provide an adequate amount of abstraction for everyone to associate with. An approach to target team cohesion was the sending of pictures of team members in mutually successful situations in order to remind the discussants to unite again. More neutral solutions were to use reports in the form of graphs, bars, or net diagrams like a team
Fig. 2. Example presentation of design workshop and design sketches during both individual and group
ideation methods.
"equalizer". Lastly, especially the aspect of comparing a team's mood against an optimal state was popular with the participants.
Verbal Behavior of the Chatbot. A common theme of verbal behavior was the provision of feedback on an abstract level. Design sketches recommended support in the form of simple quotes (e.g., "the smarter gives way") or showing a joke to lighten the mood. Motivating sentences were also a common idea. More seldom was the use of warnings, which were reported to be received as rather negative and undesirable. An important remark was given by the participant with a visual impairment (almost no eyesight) (W2-P6): sending the written word exclusively might be difficult to perceive. A message should therefore always be delivered through multiple channels (e.g., combinations of words and images). Overall, the connotation of a message received considerable attention. Many sketches detailed the importance of being cautious and non-invasive so as not to offend team members. Opposing this view, some sketches outlined more offensive approaches of letting the chatbot be straightforward in its wording. For example, the "#BeefAlert" idea proposed a chatbot that visually shouts at the participants. One proposal was to adjust the wording according to the severity of the conflict: in the beginning very cautious and reserved, the chatbot may become more direct in case of emerging conflict. A common verbal theme was the application of different translation modes, described as the "good mood-mode". When perceiving negative team states, the chatbot might change the content of the sending party's message text into more polite, team-supporting tones.
"Invisible" Behavior of the Chatbot. Many designs concerned the strength of team moderation by a chatbot. Participants raised the argument of chatbots' ability to take a neutral role in the team and, simultaneously, being a member with which participants may create a bond. Common was the proposition to establish the chatbot as a moderator that acknowledges negative conflicts. To relieve the conflict, the chatbot could take the function of explaining and reviewing the discussed topics. This might lead to reconsideration of individuals' arguments and defuse the situation. Further, it was also proposed that the chatbot could ask specific team members to redirect the conversation toward a more objective and task-related discussion. The chatbot as mediator was a popular approach too, which demands more advanced capabilities in domain knowledge representation and natural language processing. Such a chatbot may be able to relieve conflicts by referring to and integrating neutral team members. The mediator, instead of changing the topic, might suggest a change of roles to allow team members to switch tasks and perspectives. Lastly, some participants
adopted coaching ideas. They suggested having the chatbot regularly remind a team about initially formed goals and review them.
Fig. 3. Examples of promoted design sketches with descriptions of thematic design features for the first ideation technique.
Collecting User Input. The collection of user input by stating questions in the form of small surveys (in 23.53% of designs) was mentioned by participants in all workshops ("The chatbot could help the team by asking the team about [..] and letting them decide." (W3-P5)). The querying of participants' opinions using Like-, Dislike-, or similar input buttons was popular. The intention was the integration of team members and the adjustment of the chatbot's reaction, for example by reducing the number of messages. Many design sketches required the chatbot to be capable of more advanced contextual extraction. For example, in case of a conflict, the chatbot might be able to deduce current problems, propose solutions, and let participants vote for answers. To provide input, graphical lists or buttons were considered. By doing so, the whole team would be involved in a conflict-solving process and would elaborate a solution together.
Behavioral and Auditory Chatbot Support. Designs also addressed chatbot adaptation based on individuals' behaviors and alternative modalities to influence team members through music or games. A playful approach was the chatbot sending mini-games like "3 in a row" to distract the participants from discussions. While innovative, the applicability of this idea is questionable in the work context. Other participants discussed gamification methodologies in order to increase team spirit. Pictorial ideas such as a "group karma" were introduced. Also, more realistic approaches were proposed, like giving out real bonus vouchers to teams when conflict-solving attempts were initiated. Often, the idea of breaking up the discussion with a short break was discussed. This should send a clear signal to the emotionally charged team to refocus. Different implementations were proposed, such as a break button to let the team decide, or an enforced break for some or all team members. The possibility for the group to empty the chat was another idea, to symbolically create a new beginning. Participants from Asia and the Middle East (W1-P1, W1-P3) mentioned the possibility of playing a positive or known song in order to remind participants of a positive event ("Hearing the national anthem might remind people to stick together" (W1-P3)).
Overcoming Privacy Concerns. During the ideation workshops, concerns about the collection
and leveraging of emotions were discussed. The participants mentioned two possible solutions
in this regard. First, the negatively connoted collection of emotions was neutralized by the fact
that the emotions could be processed anonymously and evaluated at the team level. Second, the
emotional information should be presented either without any concrete values, or in all cases only
at the team level. Thus, anonymity would be given, and no personal implications may be deduced.
Fig. 4. Examples of promoted design sketches with descriptions of thematic design features for the second ideation technique.
4 FINAL DESIGN AND SYSTEM IMPLEMENTATION
Based on the raw design sketches, final design concepts were derived. The selection of the final design features was conducted according to a previously defined procedure of refinement and selection with the workshop participants. Within each design workshop, initial design sketches were followed up by a three-step refinement procedure. During the first ideation method, the participants discussed previous individual designs after each sketching round. Through this iterative approach, participants refined design sketches continuously. In the second ideation technique, refinement took place as participants selected valuable previous ideas as a starting point for group sketching. In this ideation method the participants took three different roles consecutively, critically merging their sketches towards application in reality. The combination of group sketching and critical reflection represents the third refinement step. Finally, in each workshop the participants voted for one to three favorite design sketches, representing a user-informed selection of design features. Based on this approach we received 8 to 12 promoted design sketches per workshop. These sketches represent the foundation for the final chatbot designs in this study. Before implementing the final designs, some design features had to be excluded due to technological or ethical limitations (e.g. highlighting individual emotional information).
4.1 System Designs
From the selected design sketches, we developed three chatbot prototypes. These prototypes fulfilled two functions. In order to justify the existence of the chatbot in the team as a facilitator, it should perform a simple task for the team members. This first functionality represented the ability to rank items and display the current ranking. Through the conversational interface the participants could change the items' positions. In this way we were able to simulate a realistic scenario of a chatbot as
helper and possible team member. On the other hand, this basic functionality did not create specific associations with the users. Importantly, beyond this basic functionality, the chatbot prototypes included a second functionality, namely EM within the team. Based on the results of the design workshops, three chatbot prototypes were developed (cf. Figures 5, 6, 7).
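The shared ranking functionality can be illustrated with a brief sketch. The class and method names below are assumptions for illustration; the article does not specify the actual command interface of the prototypes.

```python
# Minimal sketch of the chatbots' shared ranking task (names are assumed,
# not taken from the study's implementation).

class RankingTask:
    """Maintains an item ranking that team members adjust via chat commands."""

    def __init__(self, items):
        self.items = list(items)

    def move(self, item, position):
        """Move an item to a new 1-based position in the ranking."""
        self.items.remove(item)
        self.items.insert(position - 1, item)

    def render(self):
        """Return the current ranking as a chat message."""
        return "\n".join(f"{i}. {item}" for i, item in enumerate(self.items, start=1))
```

A conversational command such as "move matches to position 1" would then map onto a single `move` call, after which the chatbot posts `render()` back into the channel.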
NeutralBot. The NBT embodies a neutral and observing team member. It is not conspicuous through a social appearance or deliberate emotional behavior in the team communication. At the same time, the NBT does not take a specific role in the team. Its aim is to present unobtrusive and minimally invasive support for EM. Visually, the chatbot receives only initials as its avatar image. Regarding its graphical and verbal appearance, the NBT sends neutral reports about the current team mood. The reports are designed graphically lean in the form of a scatter chart. The NBT also indicates a reference value of the overall team mood. Together with the graphic, the chatbot sends an explanatory sentence on how the value is to be assessed.
Fig. 5. Chatbot design instance NeutralBot implemented based on design workshops with explanations.
SocialBot. Visually, the SBT implements anthropomorphic social cues. An avatar picture of a robot with human characteristics is assigned to it. The avatar shows movement and winks, suggesting humanistic characteristics. The chatbot answers with a delay relative to the size of the message sent and shows typing indicators. Verbally, the SBT behaves empathetically and sensitively. Through empathetic behavior, users may become more aware of current team emotions. This is expressed in its choice of words and the intervening sentences (e.g. "sensing some temper", "we might be careful"). Additionally, it adds emoticons to its sentences to emphasize the affective meaning. The SBT shows that it considers itself an active part of the team, using pronouns like "we" or "us". The SBT relies on methods of coaching, such as encouraging language, graphic representation, and the comparison of team emotion values against realistic, achievable, and positive team emotions. This was considered to possibly increase emotion understanding, since team members are supported in evaluating and assessing their own and others' actual emotional states. Regarding its message content, the SBT implements accumulative levels of escalation. Three levels are implemented, each sending a different kind of message content. First, there are images comparing the negative current state against a positive state. In the second phase, GIFs are sent out showing successful teamwork in a playful way. This engages the positive, social interaction in the group. On the third level, again the comparison of two pictures is displayed, however with
a stronger emotional message, like a "mushroom cloud". Through the combination of written words (advice, remarks, tips) and symbolic images, accessibility for handicapped users can be achieved (cf. Section 3.3). In addition, after sending messages the SBT asks the team members for feedback regarding their impression of the effectiveness of the feedback on the team emotions. This is achieved through a natural conversation in the form of questions and answers by the SBT with predefined answer buttons, in order to facilitate this interaction and minimize the disturbance of the flow of conversation for the team. In this way the participants can interact and have control over the intervention recurrence.
Fig. 6. Chatbot design instance SocialBot implemented based on design workshops with explanations of selected design features.
ActionBot. The immersive ABT is conceptualized as a moderator that intervenes in team interactions by suggesting breaks. This design implements a more dominant behavior, matching its actionable intention. The dominance is expressed through the wording of the chatbot's messages. Visually, the ABT comes with a small avatar symbolizing active behavior. The ABT implements accumulative levels of escalation like the SBT. These escalating types of messages are designed to give users the impression of increased intensity and should generate greater awareness of the situation. In doing so, together with the indication of team emotions, emotion perception ought to be improved. Emotion regulation strategies are designed to be stimulated by recommendations for action and direct mediation. On the first of three levels, the ABT sends dominant verbal phrases and asks the team whether the chat history shall be deleted. If the team decides to do so, it can delete the history. The deletion should symbolize a new beginning and make the participants
reconsider their behavior. On the second level, the ABT sends an insistent message that there is a negative atmosphere in the team and actually deletes the chat. On the last escalation level, the chatbot breaks the conversation by disabling the ability to chat for 15 seconds. This gives the team members the possibility to detach themselves mentally from the discussion, stop the negative emotional spiral, and have a fresh start afterwards.
Fig. 7. Chatbot design instance ActionBot implemented based on design workshops with explanations of selected design features.
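The ABT's three accumulative escalation levels (ask about deleting the history, actually delete it, lock the chat for 15 seconds) can be sketched as a simple state machine. The message wording and action names below are illustrative assumptions, not the prototype's actual texts.

```python
# Illustrative sketch of the ABT's accumulative escalation levels.
# Message wording and action names are assumptions, not the study's originals.

class ActionBotEscalation:
    def __init__(self):
        self.level = 0

    def intervene(self):
        """Advance one escalation level and return the intervention to perform."""
        self.level = min(self.level + 1, 3)  # escalation is capped at level 3
        if self.level == 1:
            return {"action": "ask_delete_history",
                    "message": "Tempers seem high. Shall we clear the chat for a fresh start?"}
        if self.level == 2:
            return {"action": "delete_history",
                    "message": "The atmosphere is getting negative. I am clearing the chat now."}
        return {"action": "lock_chat", "duration_s": 15,
                "message": "Let's take a short break. The chat is paused for 15 seconds."}
```

Each call to `intervene` would be triggered by the team-sentiment check described in Section 4.2, so a team only reaches the harsher levels if the negative trend persists.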
4.2 System Architecture
The system architecture is based on a chatbot application, integrated modules, and a connection to a webchat which can be embedded in any website. It utilizes the Microsoft Bot Framework (V4), a node.js application, the Microsoft webchat (V4) based on react.js, and a SQLite database. Sentiment analysis is triggered whenever the chatbot receives a new message and is realized through the VADER module [74], which yields a compound score c_VADER ∈ [−1, 1], where 1 is most positive and −1 most negative. The score is based on three values (negative, positive, and neutral), and initial calculations are based on a sentiment lexicon. Each message is analyzed for its sentiment, and an accumulative score is calculated and aggregated at the team level. In short, team emotion is a weighted sum of the participants' affective message shares [56]. The message scores are averaged over a dynamic interval I, weighted by each participant's share of the conversation. If this team sentiment of recent messages shows a negative trend against the overall team sentiment and exceeds a threshold (s_tri = 0.12), which was derived through pre-experimental studies, the chatbot intervenes with a probability that increases as a timer expires; the system ensured that the chatbot intervened at least once in each conversation. The database stored related metadata, treatment type, timestamps, messages, and the sentiments of messages.
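The aggregation and trigger logic can be sketched as follows. The exact weighting over the dynamic interval I is not fully specified in the text, so the share-weighted average below is a simplifying assumption, and the VADER compound scores are taken as precomputed inputs rather than calling the VADER library itself.

```python
# Sketch of the team-sentiment trigger (the share-weighted aggregation is a
# simplifying assumption; compound scores would come from VADER upstream).
from collections import defaultdict

S_TRI = 0.12  # intervention threshold derived in the pre-experimental studies

def team_sentiment(messages):
    """Weighted team sentiment: each participant's mean compound score,
    weighted by their share of the conversation.
    messages: list of (author, compound) with compound in [-1, 1]."""
    if not messages:
        return 0.0
    scores = defaultdict(list)
    for author, compound in messages:
        scores[author].append(compound)
    n = len(messages)
    return sum((len(cs) / n) * (sum(cs) / len(cs)) for cs in scores.values())

def should_intervene(history, interval=5, threshold=S_TRI):
    """Trigger when the recent interval trends negative against the
    overall team sentiment by more than the threshold."""
    recent = team_sentiment(history[-interval:])
    overall = team_sentiment(history)
    return (overall - recent) > threshold
```

In the study, this check was additionally gated by a timer-dependent probability so that the chatbot invoked at least once per conversation; that scheduling layer is omitted here.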
5 EVALUATION
We conducted a between-subject laboratory experiment with three conditions (the three design instances NBT, SBT, and ABT) as independent variables. Each session lasted two hours. To gather deeper insights into experiences with the chatbots, we collected both qualitative and quantitative data. The evaluation focuses on the participants' experiences with chatbot-based EM, the perception of the design, and implications for the individual and the team.
5.1 Participants
For the experiment we recruited 27 participants (38% female, 62% male) through an online panel. We controlled for group size with nine groups of three members each. The participants were between 19 and 38 years old (M=23.9, SD=2.3) and had their educational background in engineering, computer science, and business administration. Participation was rewarded with 12$/hour on average. One group had to be excluded due to technical difficulties, and three participants were excluded from both quantitative and qualitative evaluations due to incorrectly answered control questions. In total, participants were distributed across the conditions with N_NBT = 5, N_SBT = 8, and N_ABT = 8 (cf. Table 2 for demographics).
Table 2. Experiment participant demographics (over all treatments).

Session | Chatbot | Age | Gender | Message amount | Chatbot appearance | Educational background | Familiarity
1 | NBT | 32 | Male | 70 | 2 | Mechanical Engineering | No
1 | NBT | 20 | Female | 70 | 2 | Industrial Engineering | No
1 | NBT | 20 | Male | 70 | 2 | Computer Science | No
2 | NBT | 27 | Female | 89 | 3 | Chemistry | No
2 | NBT | 24 | Female | 89 | 3 | Other | No
3 | SBT | 24 | Male | 67 | 3 | Computer Science | No
3 | SBT | 25 | Female | 67 | 3 | Computer Science | No
3 | SBT | 25 | Male | 67 | 3 | Other | No
4 | SBT | 24 | Male | 110 | 1 | Mechanical Engineering | No
4 | SBT | 25 | Male | 110 | 1 | Industrial Engineering | No
4 | SBT | 25 | Female | 110 | 1 | Industrial Engineering | No
5 | SBT | 20 | Male | 67 | 2 | Computer Science | No
5 | SBT | 20 | Male | 67 | 2 | Computer Science | No
6 | ABT | 24 | Female | 123 | 3 | Computer Science | No
6 | ABT | 25 | Male | 123 | 3 | Industrial Engineering | No
6 | ABT | 25 | Male | 123 | 3 | Mechanical Engineering | No
7 | ABT | 26 | Male | 82 | 2 | Industrial Engineering | No
7 | ABT | 23 | Female | 82 | 2 | Industrial Engineering | No
7 | ABT | 21 | Male | 82 | 2 | Mechanical Engineering | No
8 | ABT | 25 | Male | 94 | 2 | Mechanical Engineering | No
8 | ABT | 21 | Female | 94 | 2 | Computer Science | No
Total | NBT, SBT, ABT | M: 23.9, SD: 2.3 | 38.1% female / 61.9% male | M: 87.75, SD: 16.25 | M: 2.25, SD: 0.56 | - | No
5.2 Materials
As the main experiment task, the survival-on-the-moon decision-making scenario [26] was selected. This survival scenario represents an established decision-making task in team research [63]. In the scenario, teams must rank 15 items regarding their importance for a stranded astronaut who has to survive on the moon. The task was chosen for several reasons. First, a ranking task is reasonable in the context of distributed teams at the workplace. Teams must make collective decisions and discuss them intensely. Second, chatbots can perform the task of ranking items through conversational interaction very well. Applications of similar tasks already exist in practice. Therefore, the chatbots performed the basic functionality of ranking items in two tasks, while only in the second task did the EM capabilities intervene. Third, decision-making tasks within groups represent a class of tasks which have both cooperative and competitive aspects [49] and, thus, have the potential to create conflict. In contrast to [56], we focused on a single decision-making task, since in this case the probability of negative emotions is higher than in brainstorming tasks.
5.3 Controls
To rigorously examine the effects of chatbot-based EM, we controlled for possible factors that may affect group processes, following the rigorous process of [29]. For group dynamics, we held the group size constant (three members) and controlled for group familiarity (anonymity, no hierarchy). We controlled for remote and synchronous location through the laboratory environment. To control for task familiarity, we asked for previous experience with the main task, which all participants denied.
5.4 Measures
To investigate the dierence between states before and after the exposure to the designs we con-
ducted a quantitative evaluation. To assess changes of emotional abilities, the emotional competence
(EC) of the participants, we applied the Short Prole of Emotional Competence (S-PEC) before and
after the main task [
53
]. It represents a short and well-suited method to collect detailed characteris-
tics about specic EM abilities [
53
]. After an introductory reference to the treatment, representative
items for subconstructs are for Identication "I am good at sensing what others are feeling", and for
Regulation "I nd it dicult to handle my emotions". In order to evaluate the interaction experience,
i.e. the emotionally enhanced communication, we applied the Aective Benets and Costs of
Communication Technology (ABCCT) questionnaire [
76
] following [
27
]. An example item for the
subconstruct Emotional Expression is "Communicating with my team using the system with the
Teambot helps me to tell how my team is feeling that day", and for Social Presence is "Communicating
with my team using the system with the Teambot helps me feel closer to my team". The results allow to
reect on exchange of emotional information and thereby the ability of team members to perceive
and regulate emotions. Through these components we were able to receive a representation of the
change in emotional states and interaction.
To collect nuanced insights about the impression of the chatbot designs and the effects of EM support on the behavior and EM abilities of the participants, we performed individual, semi-structured qualitative interviews. These had the following sections: First, the participants were asked about (1) the perception of, experience with, and satisfaction with the chatbot designs. This included questions like "How was the perception of the chatbot?" and "How did you experience the appearance of the chatbot?". This section depicts the participants' general experience with the chatbot designs. This was followed by questions about (2) the effect of chatbot designs on EM for individuals and teams. In this section, the interviews explicitly asked for the effects on the components of EM: emotion perception, understanding, and regulation (cf. Section 2.1). All questions were asked
regarding individual and team effects. The third interview section targeted the (3) effects of chatbot designs on team behavior and outcomes. Parameters such as behavior with regard to regulatory strategies, team cohesion and well-being in the team, and communication were investigated. The final interview section left space for (4) individual suggestions for improvement of the chatbot designs. An exemplary question was "How would you have liked the behavior and intervention of the chatbot?". The interviews were conducted by three researchers and audio-recorded.
5.5 Procedure
Upon arrival at the experiment, the participants were seated separately in soundproof and temperature-controlled booths, each equipped with the same PC system. They could communicate only through the chat interface. The allocation of participants into group chats with three members was random. When seated, the participants gave written consent for their participation and received an introduction to the procedure of the experiment. Participants were asked to avoid writing their real names or uncovering personal details. As a first task, teams conducted an initial task phase to get to know each other, to establish an initial sense of team affiliation, and to become familiar with the ranking mechanism of the chatbot system. In this phase the participants chatted for t1 = 12 minutes and ranked five general topics (e.g. economy, health) according to their societal importance. Afterwards, they were asked to fill out a first questionnaire regarding their perceived EC (S-PEC, ABCCT). Then the survival-on-the-moon task was started. During the task, participants first created an individual ranking. Subsequently they discussed the final solution in the group. Task performance was derived by comparing those rankings against the optimal task solution [26]. To increase the possibility of conflict situations, a performance-based reward ratio was calculated. One part of the reward depended on the correctness of the final group solution (rw_gro = 20%) and the other part on the difference between the final group solution and each participant's individual solution (rw_ind = 80%). Such goal conflicts represent a realistic work situation, as the individual goals of team members often do not completely match the team's collective goals.
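The reward composition can be illustrated with a small sketch. The 20%/80% split and the scoring against the optimal solution are from the study; the concrete mapping from ranking error to payout share is a simplifying assumption for illustration.

```python
# Illustrative sketch of the performance-based reward (the 20%/80% split is
# from the study; the error-to-score mapping is a simplifying assumption).

def ranking_error(ranking, optimal):
    """Sum of absolute rank differences, as in classic survival-task scoring."""
    pos = {item: i for i, item in enumerate(optimal)}
    return sum(abs(i - pos[item]) for i, item in enumerate(ranking))

def reward_share(group_error, ind_group_diff, max_error, rw_gro=0.20, rw_ind=0.80):
    """Combine group correctness and individual-vs-group agreement into [0, 1]."""
    group_score = 1 - group_error / max_error
    ind_score = 1 - ind_group_diff / max_error
    return rw_gro * group_score + rw_ind * ind_score
```

Because the individual component dominates, a participant is incentivized to pull the group ranking toward their own solution, which creates the intended goal conflict.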
The group chat for the main task was set to t2 = 18 minutes. Within this frame of the main task the three chatbot-based EM treatments intervened. The number of appearances is presented in Table 2. The ratios (rw_gro = 20%, rw_ind = 80%) and the experimental timing (t1 = 12 min, t2 = 18 min) were determined through iterative pre-experimental pilot studies with four groups. To prevent the impairment of emotion recognition, the sentiment analysis excluded formal messages concerning the ranking mechanism. After the main task the participants were asked to fill out the final questionnaire. Once the questionnaire was completed, the participants were interviewed separately by three researchers. The interviews were audio-recorded, and notes were taken by the researchers. Finally, the participants received a debriefing and their compensation.
6 RESULTS
This section presents the results of the evaluation. First, the quantitative analysis covered two questionnaires measuring emotional competence (S-PEC) and the emotional experience of the communication (ABCCT). This was done in order to assess general effects on the team interaction. Second, we conducted qualitative interviews in order to collect nuanced impressions of the chatbot designs and the effects of EM on the participants. The results show that emotion awareness and communication efficiency increased across all designs, while the SBT and ABT show further changes in emotion regulation and compromise facilitation. Regarding the design experience, social and interactive features were appreciated, e.g. coaching and chat breaks. However, contextual factors (timing, accuracy, time pressure), too obtrusive interventions (content deletion, large images), and too neutral messages (missing explanations) led to disturbance or confusion. On the downside, the results partially report surveillance and loss of control through the intervention in all three designs.
6.1 Quantitative Evaluation
To assess perceptions, the responses to standardized survey items were analyzed. The internal consistency of latent constructs was assessed using Cronbach's Alpha, with a cutoff at 0.6 considered acceptable given the small sample size [71]. Since their items did not meet this requirement, the following constructs had to be removed: Social Presence (1 of 3) of the ABCCT and EC (1 of 10) in t1 and t2. Cronbach's Alphas then ranged from .621 to .837. For the analysis of the EC subscales, single items could be used. Afterwards, scales were mean-scored. Given the small sample size (especially at the group level), it was decided to refrain from further inferential statistical analyses, since these tests would have to account for the nested structure of the data (individuals in small groups). Instead, a thorough descriptive analysis including this structure was pursued. The following analyses thus consider individual-level experience, yet include group-level information to qualify how group membership might have influenced individual experiences.
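The internal-consistency check above uses the standard Cronbach's Alpha formula, which can be computed directly from the item and total-score variances. The sketch below is the textbook formula, not the authors' analysis script:

```python
# Minimal sketch of Cronbach's Alpha (standard formula; illustrative only).
import statistics

def cronbach_alpha(responses):
    """responses: list of respondents, each a list of k item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(responses[0])
    item_vars = [statistics.pvariance([row[i] for row in responses]) for i in range(k)]
    total_var = statistics.pvariance([sum(row) for row in responses])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Perfectly parallel items yield alpha = 1.0, while weakly correlated items drop below the 0.6 cutoff used here.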
Short Prole of Emotional Competence Questionnaire. It was rst assessed, whether clear changes
are detected from the rst (
𝑡1
- during rst task) to the second measurement (
𝑡2
ś during main
task) for each treatment. Figure 8shows the changes for each group (including the direction of
change for each paricipant) and the average level per chatbot treatment after the treatment (
𝑡2
). The
most coherent and clearest change in EC is visible in the NBT treatment, in which all groups and
participants show a decline in EC. The patterns in the SBT treatment are mixed throughout, with
each group showing a dierent response pattern. It is therefore considered that the SBT has no
visible improvement eect on EC in the quantitative analysis. For the ABT, a slight trend towards
EC improvements is visible, as at least each group shows EC improvements in one member and
overall, 5 out of 8 participants showed an improvement in EC. The slopes of changes in the ABT
treatment are however fairly at. Comparing the perceived EC levels in
𝑡2
, the previous eect
appears further supported, as the NBT participants report the lowest EC level and no discernable
dierence is visible in the SBT and ABT conditions. Overall, the results show that EC abilities
decrease (are suppressed) in the NBT treatment, and, in contrast, can be considered to be at least
maintained in the SBT and ABT treatments.
Fig. 8. EC (all items aggregated) changes from the first task (t1) to the main task (t2) per chatbot treatment and per group. Dashed line represents mean per treatment at t2.
ABCCT Questionnaire. To further analyze emotional experiences within communication, the scales of the ABCCT questionnaire were evaluated. Figure 9 shows the perceptions per treatment and group. From the distributions, it does not appear that group-level influences are particularly prominent, while the strongest differences across treatments are visible for the dimensions Emotion
Expression, Engagement, Social Presence, and Social Support. For these dimensions, the difference emerges by comparing the NBT to the two other treatments, and with weaker contrast by further comparing the SBT to the ABT. The general pattern for these four dimensions that focus on emotional experience is that the highest levels are reported for the NBT, and lower levels for the SBT and ABT. In a weaker form, the ABT treatment shows a further reduction along these dimensions. At first look, these patterns appear to stand in contrast to the previous observations on EC perceptions, as it was considered that improved EM would be accompanied by a more positively valenced emotional experience. It is interesting, however, that the patterns develop in the opposite direction (extrema for NBT and ABT), which prompts the conclusion that a higher level of EC support by the chatbots might lead to a disengagement from one's own emotional efforts towards the group. Stated otherwise, the chatbot-based EM support might be crowding out intrinsic efforts to emotionally engage with the group. For the dimensions Obligations, Unmet Expectations, and Privacy, no clear differences are visible, neither by treatment nor by group. It is interesting to note, however, that for all three dimensions the reported absolute levels are rather low, which indicates a positive user experience in these dimensions. Especially important is that privacy concerns (here: "The fear that sensitive information about oneself could be revealed to other group members.") were reported as rather low. To further elaborate on the outlined observations and initial conclusions, the next section discusses the results from the subsequent interviews.
Fig. 9. ABCCT results per chatbot treatment and per group. Dashed line represents mean per treatment.
6.2 Interviews
The interview data was analyzed with an inductive thematic analysis [6, 68]. Six hours of audio-recorded interviews were transcribed. Afterwards, two researchers independently coded 20% of the interviews, using samples from all three treatments. Duplicates were expelled, and a final coding tree was jointly developed and refined through an in-depth discussion (cf. Table 3). Subsequently, the primary researcher coded the rest of the interviews. Based on the coding tree, 7 themes were elaborated. This section presents those themes and their main aspects, introducing first design-generic and second design-specific results, where available. A comparison of design-generic and -specific results is also provided in Table 4. Selected proof-quotes for the themes are displayed in Table 5 [62]. For better understanding, we refer to the different participants as P1-P21.
Table 3. Themes with codes for thematic analysis of interviews, with frequency of appearance over all interviews.

Chatbot experience | Support and coaching | Disturbance and surveillance | Controversy | Emotion perception | Emotion regulation | Behavior and performance
Positive perception (71.4%) | Helpfulness and advice (38.1%) | Disturbance (57.1%) | Potential misinterpretation (57.1%) | Emotion awareness (38.1%) | Emotion regulation (28.6%) | Consensus facilitation (66.7%)
Design feature effects (52.4%) | Team coaching (28.6%) | Confusion (42.9%) | Timing of intervention (52.3%) | Behavioral reflection (14.3%) | Team cohesion (23.8%) | Communication efficiency (42.9%)
Context factor effects (38.1%) | | Surveillance and loss of control (19.0%) | Discussion interruption (33.3%) | Neutrality (14.3%) | Ignorance (19.0%) | Decision-making (23.8%)
| | Indisposition and pressure (9.5%) | | | | Mutual consideration (14.3%)
Table 4. Comparison of chatbot design-generic and -specific effects.

UX and influencing factors
- All designs: Positive experience. Disturbance and indisposition in different forms. Surveillance and loss of control.
- NBT: Disturbance: indisposition and confusion through lack of understanding about the given information. Partial neglect of the intervention.
- SBT: Disturbance: annoyance effects of images. Socialness, anthropomorphism, and interactivity. Positive coaching experience.
- ABT: Disturbance: interruption through large images and content deletion. Helpful and active intervention. Positive coaching experience.

Emotion perception and regulation
- All designs: Increased emotion awareness.
- SBT: Stronger emotion regulation. Improved team cohesion.
- ABT: Stronger emotion regulation.

Behavioral outcome
- All designs: Higher communication efficiency.
- SBT: Better compromise facilitation.
- ABT: Better compromise facilitation.
6.2.1 UX of the Chatbots.
Chatbot Experience. Overall, 71.4% of participants documented a positive perception of the chatbot intervention (P1, P2, P4, P5-P7, P9, P11, P12, P14, P15, P16, P19-P21). P14 stated "I would say yes, the intervention helped, because it basically taught us that we should find a solution". The interviews further revealed that context factors had an impact on the chatbot perceptions. Time-pressure was a major influence on the participants' attention (38.1%). This seemed to mitigate the effect of the chatbot message content. P6 reflected "Well, maybe it can work when you have time".
118:20 Ivo Benke et al.
52.4% of the participants reported design-specific perceptions (P3-P5, P7, P9, P10, P13, P14, P16, P18, P20). The break sent by the ABT was perceived as helpful since it gave the participants time to reassess the situation. P2 stated "Once it said, now we take a 15 second break, I liked that." However, some participants (23.8%) described the large formatted messages of ABT and SBT as an interruption (e.g. P10: "So the messages were so big that you didn't see the rest of the chat anymore").
For the SBT, three participants (P14, P16, P18) explicitly experienced the anthropomorphic appearance as positive. P18 stated "While talking to the chatbot I had the feeling that it was a human being, because there were no automatic answers I got." Further answers positively reported the interactive abilities of the interventions, e.g. "I thought it was good, three answers, like, don't like it and don't care" (P16). However, the images of the SBT also elicited diverging impressions, with P14 reporting negative associations: "Yes, so I found the pictures a bit unnecessary."
In case of the NBT, a few participants missed explanations (P17, P20, P21). P20 reflected "[..] because you don't know what you're doing right or wrong". They also questioned how the reported measures were calculated (P21: "I don't know on what basis that was assessed").
Support and Coaching. A common theme in sessions of SBT and ABT (P4, P7, P14-P16, P18) was the impression that the chatbot acted like a team coach or moderator (28.6%). P15 reflected "I think it's cool that you have a chatbot in such dangerous situation as some kind of mediator". In general, participants (38.1%) valued the provision of advice by the chatbot (P14: "I think both suggestions that we should switch, they were good").
Disturbance, Surveillance, and Ignorance. Despite generally positive perceptions, disturbance was present in different forms across all designs (57.1%). Nine participants (42.9%) reported confusion (P10: "Yes, so I think we were all confused by the intervention [..]."). A sense of surveillance (19.0%) and loss of control (19.0%) were reported. P11 reflected "it's like someone is looking over your shoulder, [..] being watched. [..] this feeling of loss of control". Especially with the NBT, participants mentioned neglecting chatbot messages (P3, P13, P17, P21 / 19.0%), e.g. P3: "[..] I just ignored it". This might originate from the occasional impression of being examined (e.g. P17: "It's really hard when a bot tells you anything, what do you do with it?").
Chatbot Perspective Controversy. Participants across all designs reported the importance of the timing of intervention (52.3%) and the negative consequences of misinterpretation (57.1%) (P1-P7, P11, P13, P15-P18). Mainly documented for the SBT and ABT, interventions that contradicted the actual team emotions led to disengaging reactions. Likewise, chatbot appearances with wrong timing, e.g. when the conflict was already solved, were reported negatively (P17: "That it would intervene if I didn't want it to").
6.2.2 Emotion Management.
Emotion Perception. Over all three designs, participants (38.1%) explicitly mentioned emotional perception and self-reflection in the interviews. They paid more attention to others and experienced an increased emotional awareness within the team (P3, P4, P7, P14, P18-P21). P4 stated "Yes well, awareness was created. Of course, you become more sensitive, especially for things which otherwise go down in such discussion". P14 reflected on triggered perception processes with "Can I say that I did not feel it, but I perceived the emotion". This might have stimulated the increased experience of reflection of some participants (P2, P9, P10). P2 reported "It really gave you a little time to think [..]. I would like to keep that." and P9 said that they sorted themselves and their thoughts emotionally.
Despite this stimulation of emotion perception, the participants did not mention improved emotion understanding in any design. While there was awareness of emotional processes, the treatments did not facilitate the concrete specification of those processes.

Table 5. Selected proof-quotes [62] of interviewed participants.

Chatbot experience
- Positive perception: "So I thought the chatbot intervention was good, so feedback is a gift, my father always says. [..] then we could see how things were going." (P21)
- Design feature effects: "This short break that was announced, I don't know, it was so short" (P3)
- Context factor effects: "For the reason that we did not have enough time, I only read half of it." (P2)
- Neutrality: "The interventions, they were neutral [..]." (P16)

Support and coaching
- Helpfulness and advice: "I thought the 15 second pause was quite useful, [..] it was good that he realized that there was a longer discussion and otherwise it would of course be great if the chatbot could help more actively in the decision-making process [..]." (P7)
- Team coaching: "I think it was good, it does not give us the feeling that it is a bot, so much. It feels like a human being is trying to get us as a team, together." (P14)

Disturbance and surveillance
- Disturbance: "That bothered me, and I personally didn't really take a break." (P1)
- Confusion: "No, not really. I was rather confused about it coming." (P10)
- Surveillance and loss of control: "It was just that moment when you see someone reading along, it's like someone looking over your shoulder, just feeling unwell, being watched." (P11)
- Ignorance: "So I ignored those things coming, okay, there's a rating now, but I just kept doing my thing." (P21)
- Indisposition and pressure: "Yes, I personally felt rather unwell." (P11)

Controversy
- Potential misinterpretation: "On the one hand, I wouldn't have seen fit for the chatbot to intervene." (P3)
- Timing of intervention: "No, not really. At least I was surprised by the time he said anything. I didn't get the impression that it was necessary." (P9)

Emotion perception
- Emotion awareness: "I would say that the interference has heightened your awareness of the heated conflict." (P7)
- Behavioral reflection: "[..] So I might have paid a bit more attention to it myself, in case there were Red Flags somewhere, in case someone felt disadvantaged, but I felt that everyone was well involved in the discussion." (P4)

Emotion regulation
- Emotion regulation: "..and when you look at the pictures, you should calm down." (P15)
- Team cohesion: "This probably improved the general feeling in the team." (P7)

Behavior and performance
- Communication efficiency: "I think from that point on things went a bit faster and easier." (P10)
- Consensus facilitation: "So you are already going into the others, so one of them wanted the map to be first, so [..] before there's trouble; whether it's first or third, I mean that is ok." (P11)
- Discussion interruption: "So that everyone then stops and thinks for a moment, what do we actually have to solve now and what should we stop." (P16)
- Decision-making: "It showed us that we were in a conflict and we had to solve or change the direction we are heading to." (P14)
- Mutual consideration: "You are already taking care of other people." (P11)
Emotion Regulation. Beyond that, SBT and ABT seemed to stimulate internal regulation processes, since participants (28.6%) reported emotion regulation. P16 mentioned that the pictures triggered a change in perspective: "[..] then you just think yourself into this picture and see that you should change it, then you laugh very briefly [..] okay, wasn't as bad as I thought it was". P6 stated that this mental change also happened within the team: "Yes, it has [..] led to the fact that, I believe, also in the whole team there was rethinking". For some participants, it led them to forego existing conflicts. P2 reflected "[..] It has also contributed [..] that people have checked that we should start from scratch". Regarding the team level, predominantly for SBT and ABT, participants reported an increase in team cohesion after the treatment occurrence (P4, P7, P11, P15, P19 / 23.8%), e.g. P15: "if he's there because you need some help, [..] it's just positive for the cohesion".
6.2.3 Team Outcomes.
Behavior and Performance. 42.9% of the participants reported a perceived facilitation of team communication across all chatbot designs (P2, P3, P7, P10-P12, P14, P20, P21). P14 mentioned that the intervention "helped us to get together and being more communicative with each other". Of these, some participants (19.0%) explicitly mentioned an increased communication efficiency (P3, P7, P10, P21). P7 stated "Through the interruption, a certain increase in effectiveness". Different reasons were given, such as increased task-focus (P3), team cohesion, or a faster decision-making process (P18: "So maybe we should make a decision a little quicker, there's been an impact.").
In total, 66.7% of the participants experienced consensus facilitation in different forms. 23.8% of the participants described compromises (P1, P4, P9, P10, P12). P1 stated "then we understood that we must decide more quickly or find a compromise". Regarding design details, participants of SBT and ABT sessions (33.3%) reported conflict resolution through the interruption of discussions (P3, P7, P10, P11, P13, P14, P16). As a reason, P2 considered the reduction of unnecessary discussions: "that we're not just talking about something unimportant [..] we're deciding something".
7 DISCUSSION AND DESIGN IMPLICATIONS
In the previous sections we presented the results of the conducted participatory design workshops, which resulted in three chatbot prototypes. This user-informed design process has highlighted that designing chatbot-based EM for group chat is a non-trivial and multi-faceted endeavor. To satisfy different preferences, the prototypes target three distinct perspectives of chatbot design: being social (SBT), active (ABT), and lean (NBT). Subsequently, we evaluated the designs in a laboratory group experiment. On the upside, all three designs show increased emotion awareness and communication efficiency. Further, SBT and ABT show positive developments in emotion regulation and compromise facilitation. The general experiences with the treatment designs were positive. Especially social and interactive features were appreciated, e.g. coaching or chat breaks. However, contextual factors (timing, accuracy, time-pressure), too obtrusive interventions (content deletion, large images), and too neutral messages (missing explanations) had negative effects like disturbance or confusion. On the downside, the results partially report surveillance and loss of control through the intervention in all three designs. Therefore, a combination of beneficial design features should catalyze positive effects whilst mitigating drawbacks. Abstracting from the results, we highlight major topics in three clusters: (1) Emotion and behavior management strategies, (2) Perception of chatbot messages, and the problem of (3) Surveillance, loss of control, and examination.
Emotion and Behavior Management Strategies. All three chatbot designs revealed an increased emotion perception in the interviews ("Well, awareness is created [..]" (P4), "I think it was easy to grasp the emotions of other team members." (P20)). A potential interpretation is that the notification regarding emotions leads to a general stimulating effect. However, there were differences in the nature of the chatbots' effect on emotion perception across the three designs. A trend of decreasing EC is visible with the NBT, while it stayed constant for the ABT and SBT. This indicates that the more social and interactive designs maintained the EC of team members better, while the neutral design actively reduced it. This is supported by the reported impression of moderation with the SBT ("[..] It feels like a human being tries to get us together as team" (P16)). Further, the fact that SBT and ABT showed lower levels of experience in emotional content (e.g. social support, social presence) might underline this interpretation, since these designs were able to crowd out participants' intrinsic efforts to emotionally engage with the group. To conclude, notifications about team emotions seem to influence users through all chatbot designs, but the effect is greater if it comes in a social and interactive guise. We therefore suggest optimizing the effect of chatbot-based emotion perception through social design features which trigger interaction within the group chat.
None of the three chatbot designs influenced emotion understanding in the team. To improve emotion understanding, technology might have to explicitly inform users about antecedents and consequences of emotional processes, providing more assistance [31]. However, SBT and ABT elicited emotion regulation strategies in the interviews, in particular in the form of changes in perspective. The graphical visualizations together with an anthropomorphic appearance (SBT) seemed to reinforce this effect ("And when you look at the pictures, you should calm down" (P15)). As to why only the SBT and ABT caused such change (but not the NBT, which caused neglect), two interpretations might apply. A common social response is to react to something perceived as human [39], even without deeper understanding. Therefore, an anthropomorphic chatbot could elicit emotional regulation, and the social character might be efficacy-boosting for chatbot-based EM. This is in line with recent findings by [32], which describe the benefits of a personalized, and thereby humanized, and subjective style of interventions for higher acceptance. Alternatively, the NBT might not provide enough task- or socially-related information to trigger emotion-specific reactions. Along with the feeling of powerlessness, since no explanation was given on how to change the emotional status ("[..] okay, there's a rating now, but I just kept doing my thing." (P21)), the default reaction might be neglect.
Possibly triggered by emotion regulation, the interviews reveal an increased perceived communication efficiency for all chatbot designs ("I think from that point on things went a bit [..] easier" (P10)). Specifically, compromises as focusing approaches were mentioned. This was especially initiated by the visualizations ("You think yourself into this picture [..] and then you laugh [..]. That just breaks up the conflict" (P16)), which suggests that symbols and social appearance provide an easier entry to an emotional and, in consequence, behavioral regulation strategy. Such a connection between EM and behavioral reaction is also established in the literature on team processes [26]. A potential interpretation of the data is that an interruption originating from chatbot-based EM may trigger a compromise-forming process and thereby a task- and discussion-related refocusing.
Perception of Chatbot Messages. The SBT was perceived as social and anthropomorphic, and participants evaluated the visual content (images, GIFs) as atmospherically relaxing. For the ABT, participants liked its active strategies, mentioning the positive effect of direct moderation. These experiences support that an anthropomorphic appearance is valuable for the acceptance of chatbot-based EM in group chat [38]. However, overly active behavior through the deletion of messages impaired the discussion process. The decreased level of emotion expression, engagement, and presence reported for SBT and ABT in the ABCCT questionnaire emphasizes this. It may be related to a general demand for autonomy by the participants, which was compromised in case of too social chatbots and led to less engagement in the discussion. Further, for both designs, chatbot messages were reported to be too obtrusive due to their size in the chat ("It took too much space." (P12)). This obviously breaks the conversation flow. The observation (especially for the ABT) can be related to findings from IM applications supporting productivity through break recommendations while minimizing interruptions (e.g. [10, 32]). Analogous to team communication, these studies report a positive effect on well-being under the influence of contextual factors like personalization, timing, and intervention length [10, 32]. This supports the findings of our study for chatbot-based EM.
Two solutions are reasonable in this case: First, provide a specific location within the application outside the chat. This, however, replaces the chatbot, and the EM would not benefit from the positive chatbot capabilities for EM support [17] and its anthropomorphic experience. Second, we recommend limiting the size of the chatbot messages or using folding/unfolding mechanisms for the message content. Not discussed in previous research, we highlight the importance of the context factor time-pressure. As outlined, decision-making tasks are very common in work routines and group chat compared to creativity tasks. In all designs, time-pressure towards the end of the task led to ignorance and deactivation of the intended chatbot-based effect. We suggest conceptualizing the message content dynamically, adapting to the situation by shortening the messages to sharp recommendations.
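The folding and dynamic-shortening recommendation could look like the following minimal sketch (a hypothetical helper, not part of the evaluated prototypes; function name and length budgets are assumptions): under time pressure, only a short visible excerpt is posted, with the full content kept for on-demand unfolding.

```python
def fold_message(text, time_pressure, limit=280, short_limit=100):
    """Shorten a chatbot message for the chat window.

    Under time pressure only a sharp, short excerpt stays visible; the
    full text is returned separately for an unfold-on-demand mechanism.
    Returns (visible_text, folded_full_text_or_None).
    """
    budget = short_limit if time_pressure else limit
    if len(text) <= budget:
        return text, None                      # nothing to fold
    # Cut at the last word boundary within the budget, mark the fold.
    visible = text[:budget].rsplit(" ", 1)[0] + " [...]"
    return visible, text

msg = "Your discussion is heating up. " + "Consider taking a short break. " * 10
visible, folded = fold_message(msg, time_pressure=True)
```

A real group-chat integration would render `folded` behind an expand control instead of returning it.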
In turn, the positive feature effects can easily be undermined through inappropriate timing and accuracy of chatbot-based EM. Here, our results extend existing assumptions: an overly impulsive chatbot appearance was experienced as obtrusive, which may be a reason for confusion ("I was rather confused about it coming." (P10)) or disturbance. We therefore suggest tuning the triggering mechanism precisely to the task and conversation flow, but also to individual preference (cf. [32]). However, the implications are not as obvious as they may seem, since the chatbot messages still require additional explanations to bridge potential gaps in human emotion understanding. One way to achieve this, besides optimizing the triggering mechanism, is the provision of explanations about the derived behavior, as well as letting the chatbot learn through regular check-backs.
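The suggested tuning of the triggering mechanism to conversation flow and individual preference can be sketched as a simple gate (purely illustrative; the thresholds, the cooldown, and the opt-in flag are our assumptions, not the mechanism used in the study): intervene only on a sufficiently strong negative-emotion signal, outside a cooldown window, and only while the user keeps the coaching switched on.

```python
def should_intervene(emotion_score, last_intervention, now,
                     user_opted_in=True, threshold=0.7, cooldown_s=120):
    """Gate a chatbot EM intervention.

    emotion_score: estimated negative team-emotion intensity in [0, 1].
    last_intervention / now: timestamps in seconds.
    """
    if not user_opted_in:
        return False                              # user disabled coaching
    if emotion_score < threshold:
        return False                              # signal not strong enough
    if now - last_intervention < cooldown_s:
        return False                              # too soon after last message
    return True
```

Per-user thresholds and cooldowns would realize the individual-preference tuning discussed above; the opt-in flag gives users the control highlighted as important in the next subsection.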
Surveillance, Loss of Control, and Examination. The interviews reveal specific negative perceptions regarding discomfort in the personal space with all three chatbot designs. This is important since such complications have not been found in previous work [40]. Participants perceived surveillance and loss of control ("[..] it's like someone is looking over your shoulder [..]" (P9)). A potential reason is that users might presently feel uncomfortable when artificial entities take an increasing role within group chat, especially when it comes in an anthropomorphic form emphasized through social features. In case of the NBT, we see reported indisposition through the plots, potentially perceived as examination. Since plots might also take unnecessary space in the chat corpus (also cf. [40]), we suggest clearly defining and communicating their purpose when showing them. This goes hand in hand with the perceived loss of control by participants, which may be explained through the handing over of a share of leadership to an artificial team member. This represents a loss in autonomy and self-determination. To overcome these downsides, providing explanations with sensitive information in private spaces between the chatbot and the human team member may help. By letting the member decide when the coaching is on, further control may be provided. We highlight the importance of this implication, not discussed in related studies, since negative experiences in practice may lead to a strong reluctance toward chatbot-based EM.
In Sum. Chatbots for EM in group chat should be designed to be social and anthropomorphic, supported through interactive patterns. Beyond expanding stimulated awareness, this increases both emotion regulation and behavioral compromises within teams. However, the trade-off between chatbots as supportive mentors vs. surveilling micromanagers requires delicate configuration. It might be essential, for now, that users retain control over the enabled features. Too social and invasive behavior could undermine autonomy and create the impression of surveillance. This uncovers the importance of transparency and explanations to counteract future reluctance.
8 LIMITATIONS
Several limitations must be considered for this research. With regard to the experimental evaluation, the laboratory study naturally comes with a reduced level of external validity. First of all, the study deals with a limited sample size, which is related to the group-level evaluation and its exploratory nature. Further research on chatbot-based EM with (1) a larger sample and (2) in real working scenarios can therefore confirm which perceptions are most valid and transferable in reality. In general, team-level constructs in reality may be determined by multiple factors, e.g. common experiences. This makes the assessment complex, and the laboratory study with ad-hoc teams cannot account for this. The findings, though, are relevant for future research since they (1) provide a first foundation with ad-hoc teams and (2) let future research derive hypotheses when investigating specific phenomena. This can be addressed in future studies through the exploration of specific laboratory experimental settings with experienced teams, and through field studies. We made experimental assumptions such as the invocation timing and style of the chatbots. In reality, humans expect human-like, exact triggering mechanisms (cf. [56]). Further, instead of combining all ideated features together, we separated distinct design features into three designs in order to assess the effects of specific features. Since this study deals with chatbot-based EM especially in the case of negative emotions, we had to rely on potential negative emotions in ad-hoc teams. Although intrinsic conflicts appeared by adjusting context factors, creating real conflict is more difficult and also ethically challenging in laboratory research.
9 CONCLUSION AND FUTURE WORK
In this study we conducted the design, implementation, and evaluation of chatbot-based EM in text-based communication for distributed teams. Based on participatory design workshops, we developed three chatbot designs: NeutralBot, neutrally reporting; SocialBot, anthropomorphic and socially engaging; and ActionBot, actively intervening. We evaluated the designs in a mixed-method laboratory experiment with 27 participants. The findings report stimulation of emotion awareness and communication efficiency through all chatbot designs, and especially an increase in emotion regulation and compromise facilitation through social (SBT) and interactive (ABT) design features. These design features were appreciated; however, for all designs situational constraints limit the effectiveness: contextual factors (timing, accuracy, time-pressure), too obtrusive interventions (content deletion, large images), and too neutral messages (missing explanations) were confusing and disturbing. Thereby, a combination of design features makes sense to improve positive effects and mitigate potential downsides. Besides positive results, we found support for threats posed by chatbots through perceived surveillance and loss of control.
Concluding Remarks regarding COVID-19. The importance of these findings should be critically appraised against the backdrop of present societal changes. Since distancing measures due to the COVID-19 pandemic have taken effect, the global workforce has vitally relied on collaboration software (e.g. Slack, Microsoft Teams). As our work is closely related, we present a few concluding thoughts about the opportunities of chatbot-based EM in this new future of work. In the past months, home-office work configurations have been pivotal and they have been accompanied by a series of challenging developments in terms of emotional experiences (e.g. [7, 51]). Some developments stand out: With primarily remote collaboration, communication exchange becomes shorter (cf. [52]) and focuses on mandatory, official meetings. In consequence, individual isolation appears to increase since social-emotional communication might decrease. This may lead to psychologically detrimental experiences like loneliness as well as impaired leadership. The future of collaborative software requires counterbalancing this structural change. Chatbot-based EM could provide means for this by strengthening social bonds in distributed teams, for example through providing space for traversing missing emotional information and diminishing emotional shortcomings. Further, reduced or artificial social interaction may lead to emotional disengagement from work. With this development, the motivation to collaborate might shrink, too. Therefore, chatbots with emotional capabilities could provide a means to positively influence work motivation in distributed teams. In summary, we propose that many of the presently unfolding challenges for the now broader remote workforce are related to social interaction and, thereby, to EM. Chatbot-based EM support mechanisms could therefore play a vital role in making the shift from artificial and cold virtual interaction to an emotionally rich and functional digital workplace of the future.
In future work, we will therefore follow two paths. First, since we have explored combined effects of design features on the constructs of EM, we will narrow the effects down and research single effects in multiple evaluation series. For example, we aim to investigate the specific effect of chatbot-initiated chat breaks on emotion regulation. Second, informed by these findings, we may combine design features into one chatbot prototype. Thereby, we will mitigate the downsides of the reported results, i.e. exclude too neutral and obtrusive interventions like content deletion. For better EM we plan to implement the ability of dynamic adaptation to task-related content, to context factors, and to team states. In this regard, we aim to draw closer to the important related literature on productivity and break interventions in workplace environments (e.g. [10, 32]). Early research on IM in this regard has explored interruption characteristics, e.g. length and complexity [23], relevance and timing [15], and task type [16]. Combining such findings with our work, we aim to explore personalization and task-related aspects for chatbot-based EM more in depth. Special focus is given to avoiding feelings of loss of control triggered through anthropomorphic appearance. We will develop specific control features in order to overcome this. Potentially, a conceptual foundation can be derived from the privacy-by-design or machine automation literature. Finally, we will test the resulting prototype in real-world settings, with distributed teams in the workplace. This will allow us to address the limitation of ad-hoc teams and the challenge of creating conflict artificially in the laboratory. With this work we hope to facilitate EM in conversations by leveraging chatbots and thereby to improve often dysfunctional processes of collaboration in distributed teams in future workplaces.
ACKNOWLEDGMENTS
We thank Ulrich Gnewuch for his support in proof-reading. Further, we thank Paul Lux, Tim Rietz, and Marcel Ruoff for their help in our work.
Received January 2020; revised June 2020; accepted July 2020
Proc. ACM Hum.-Comput. Interact., Vol. 4, No. CSCW2, Article 118. Publication date: October 2020.
... Body language (gestures, facial expressions) (Benke et al., 2020;Bittner et al., 2021) Team mood board (own suggestion); jokes and anecdotes in response to negative mood (Strohmann et al., 2018) Body language (eye gaze, mimics, gestures) (Benke et al., 2020;Bittner et al., 2021) Reminder to take a break, information load board (own suggestion) Keywords for breaking rules Reminder of the rules of creative teamwork Critical or negative utterances or behavior Starostka et al., 2021) Reminder to keep the DT mindset; Explanation on how to express constructive feedback Speech share & centrality Przybilla et al., 2019) Informing about unbalanced speech shares to promote equal participation (Leimeister, 2014) Team process ...
... Body language (gestures, facial expressions) (Benke et al., 2020;Bittner et al., 2021) Team mood board (own suggestion); jokes and anecdotes in response to negative mood (Strohmann et al., 2018) Body language (eye gaze, mimics, gestures) (Benke et al., 2020;Bittner et al., 2021) Reminder to take a break, information load board (own suggestion) Keywords for breaking rules Reminder of the rules of creative teamwork Critical or negative utterances or behavior Starostka et al., 2021) Reminder to keep the DT mindset; Explanation on how to express constructive feedback Speech share & centrality Przybilla et al., 2019) Informing about unbalanced speech shares to promote equal participation (Leimeister, 2014) Team process ...
... Regarding the DT area of Team Interaction, we found even more cases where AI could meaningfully assist. For instance, by using different inputs from body language (mimics, gestures, eye gaze, facial expressions), AI could create a team mood board, based on which it could step in to lighten the mood with jokes or anecdotes, in case the mood is shifting to negative (Benke et al., 2020;Bittner et al., 2021;Strohmann et al., 2018). By analyzing eye-movement data, AI could create an information load board and suggest a break at a suitable time point (Fig. 1b). ...
Conference Paper
Full-text available
AI-assisted Design Thinking shows great potential for supporting collaborative creative work. To foster creative thinking processes within teams with individualized suggestions, AI has to rely on data provided by the teams. As a prerequisite, team members need to weigh their disclosure preferences against the potential benefits of AI when disclosing information. To shed light on these decisions, we identify relevant information such as emotional states or discussion arguments that design thinking teams could provide to AI to enjoy the benefits of its support. Using the privacy calculus as theoretical lens, we draft a research design to analyze user preferences for disclosing different information relevant to the service bundles that AI provides for respective information. We make explorative contributions to the body of knowledge in terms of AI use and its corresponding information disclosure. The findings are relevant for practice as they guide the design of AI that fosters information disclosure.
... Scholars have shown that these machines are able to compensate for human shortcomings or exceed human capacities [Fox and Gambino, 2021, Guzman and Lewis, 2020, Whittaker et al., 2018. However, prior works focus on designing and evaluating dyadic human-AI interaction, which involve only one-to-one interactions between humans and their CAs [Bickmore et al., 2005, Schulman and Bickmore, 2009, Kopp et al., 2005, Anabuki et al., 2000; whereas more recent works start tapping into polyadic human-AI interactions that also support human-human interactions [Kim et al., 2021, Wang et al., 2021, Kim et al., 2020, Toxtli et al., 2018, Benke et al., 2020. ...
... One major challenge in multi-user social settings is maintaining positive relationships, which is crucial to forming a solid team or group. For example, in online collaborative work, there could be a lack of emotional awareness and mutual understanding between team members, as it is hard for them to detect and regulate emotions [Benke et al., 2020, Peng et al., 2019, Narain et al., 2020. Moreover, it is always important to grow trust within a team, and the feasibility of using CAs for trust-building [Strohkorb Sebo et al., 2018] and setting privacy boundaries [Luria et al., 2020b] are explored and developed. ...
... Meanwhile, most of the designed features are proposed by researchers without leveraging prior designs or catering to specific user-needs, e.g., [Toxtli et al., 2018, Dohsaka et al., 2009, Seering et al., 2020. Two CAs adopted participatory design methods, including need-finding interviews, e.g., [Kim et al., 2020, Zhang and and two ran ideation workshops, e.g., [Benke et al., 2020, Luria et al., 2020b. ...
Preprint
Full-text available
Early conversational agents (CAs) focused on dyadic human-AI interaction between humans and the CAs, followed by the increasing popularity of polyadic human-AI interaction, in which CAs are designed to mediate human-human interactions. CAs for polyadic interactions are unique because they encompass hybrid social interactions, i.e., human-CA, human-to-human, and human-to-group behaviors. However, research on polyadic CAs is scattered across different fields, making it challenging to identify, compare, and accumulate existing knowledge. To promote the future design of CA systems, we conducted a literature review of ACM publications and identified a set of works that conducted UX (user experience) research. We qualitatively synthesized the effects of polyadic CAs into four aspects of human-human interactions, i.e., communication, engagement, connection, and relationship maintenance. Through a mixed-method analysis of the selected polyadic and dyadic CA studies, we developed a suite of evaluation measurements on the effects. Our findings show that designing with social boundaries, such as privacy, disclosure, and identification, is crucial for ethical polyadic CAs. Future research should also advance usability testing methods and trust-building guidelines for conversational AI.
... In general, they have shown positive results in emotion perception, communication efficiency, and performance (Samrose et al., 2018). However, previous research has also identified negative outcomes when users perceived the chatbot's ability to recognize emotions as threatening and displeasing, which led to a decrease in autonomy, trust and, thereby, aversion to using the chatbot (Benke et al., 2020;McDuff & Czerwinski, 2018). ...
... Through this design, it aims to stimulate the emotional capabilities of virtual team members (Benke et al., 2020;McDuff & Czerwinski, 2018;Peng et al., 2019). ...
Article
Emotion-aware chatbots that can sense human emotions are becoming increasingly prevalent. However, the exposition of emotions by emotion-aware chatbots undermines human autonomy and users' trust. One way to ensure autonomy is through the provision of control. Offering too much control, in turn, may increase users’ cognitive effort. To investigate the impact of control over emotion-aware chatbots on autonomy, trust, and cognitive effort, as well as user behavior, we carried out an experimental study with 176 participants. The participants interacted with a chatbot that provided emotional feedback and were additionally able to control different chatbot dimensions (e.g., timing, appearance, and behavior). Our findings show, first, that higher control levels increase autonomy and trust in emotion-aware chatbots. Second, higher control levels do not significantly increase cognitive effort. Third, in our post hoc behavioral analysis, we identify four behavioral control strategies based on control feature usage timing, quantity, and cognitive effort. These findings shed light on the individual preferences of user control over emotion-aware chatbots. Overall, our study contributes to the literature by showing the positive effect of control over emotion-aware chatbots and by identifying four behavioral control strategies. With our findings, we also provide practical implications for future design of emotion-aware chatbots.
... Participatory design involves a level of compromise [54,55]; not all of the students' ideas were implemented in practice. For example, Taylor has a synthetic (although lifelike) voice rather than a human one, and students are not given a choice of voices with different genders or regional accents. ...
Article
Administrative burden in education is a serious issue for disabled students. Form-filling and bureaucracy are ubiquitous in further and higher education, particularly for students who need to disclose a disability and arrange for accommodations and support for an equitable educational experience. Paradoxically, many of these processes are inherently inaccessible for disabled students, and yet completing them can be critical to their success. Artificial Intelligence has potential to alleviate some of the burden imposed by administration and bureaucracy; virtual assistants and chatbots can replace forms with dialogue, without placing additional strain on institutions. However, it is essential that solutions are designed in partnership with disabled students to ensure that students’ needs are met, their concerns addressed, and the final solution is equitable for them. This paper explores a case study of participatory research with disabled students in a large UK distance learning institution, in which participatory research identified an issue of administrative burden for disabled students, and a virtual assistant was designed as a solution using participatory design. It shares the methodology and design process, explores findings from different phases of the research, and shares recurrent themes arising throughout the study. In doing so, it aims to provide a foundation for future participatory research to reduce barriers for disabled students.
Article
Technology plays an increasingly prominent role in emotional lives. Researchers have begun to study how people use devices to cope with and shape emotions: a phenomenon that has been called Digital Emotion Regulation. We report a study of the impact of the COVID-19 pandemic upon young people's digital habits and emotion regulation behaviors. We conducted a two-wave longitudinal survey, collecting data from 154 university students both before and during the COVID-19 pandemic. During the pandemic, participants were subject to increased emotional distress as well as restrictions on movement and social interaction. We present evidence that participants' emotion regulation strategies changed and became more homogeneous during the pandemic, with participants resorting to digital tools when offline strategies were less available, while also becoming more emotionally dependent upon their devices. This study underscores the growing significance of the digital for contemporary emotional experience, and contributes to understanding the potential role for technology in supporting well-being during high-impact events.
Article
Full-text available
As the inclusion of users in the design process receives greater attention, designers need to not only understand users, but also further cooperate with them. Therefore, engineering design education should also follow this trend, in order to enhance students’ ability to communicate and cooperate with users in the design practice. However, it is difficult to find users on teaching sites to cooperate with students because of time and budgetary constraints. With the development of artificial intelligence (AI) technology in recent years, chatbots may be the solution to finding specific users to participate in teaching. This study used Dialogflow and Google Assistant to build a system architecture, and applied methods of persona and semi-structured interviews to develop AI virtual product users. The system has a compound dialog mode (combining intent- and flow-based dialog modes), with which multiple chatbots can cooperate with students in the form of oral dialog. After four college students interacted with AI userbots, it was proven that this system can effectively participate in student design activities in the early stage of design. In the future, more AI userbots could be developed based on this system, according to different engineering design projects for engineering design teaching.
Conference Paper
Full-text available
Maintaining a positive group emotion is important for team collaboration. It is, however, a challenging task for self-managing teams especially when they conduct intra-group collaboration via text-based communication tools. Recent advances in AI technologies open the opportunity of using chatbots for emotion regulation in group chat. However, little is known about how to design such a chatbot and how group members react to its presence. As an initial exploration, we design GremoBot based on text analysis technology and emotion regulation literature. We then conduct a study with nine three-person teams performing different types of collective tasks. In general, participants find GremoBot useful for reinforcing positive feelings and steering them away from negative words. We further discuss the lessons learned and considerations derived for designing a chatbot for group emotion management.
Article
Full-text available
Conversational agents (CAs) are software-based systems designed to interact with humans using natural language and have attracted considerable research interest in recent years. Following the Computers Are Social Actors paradigm, many studies have shown that humans react socially to CAs when they display social cues such as small talk, gender, age, gestures, or facial expressions. However, research on social cues for CAs is scattered across different fields, often using their specific terminology, which makes it challenging to identify, classify, and accumulate existing knowledge. To address this problem, we conducted a systematic literature review to identify an initial set of social cues of CAs from existing research. Building on classifications from interpersonal communication theory, we developed a taxonomy that classifies the identified social cues into four major categories (i.e., verbal, visual, auditory, invisible) and ten subcategories. Subsequently, we evaluated the mapping between the identified social cues and the categories using a card sorting approach in order to verify that the taxonomy is natural, simple, and parsimonious. Finally, we demonstrate the usefulness of the taxonomy by classifying a broader and more generic set of social cues of CAs from existing research and practice. Our main contribution is a comprehensive taxonomy of social cues for CAs. For researchers, the taxonomy helps to systematically classify research about social cues into one of the taxonomy's categories and corresponding subcategories. Therefore, it builds a bridge between different research fields and provides a starting point for interdisciplinary research and knowledge accumulation. For practitioners, the taxonomy provides a systematic overview of relevant categories of social cues in order to identify, implement, and test their effects in the design of a CA.
Conference Paper
Chatbots have grown as a space for research and development in recent years due both to the realization of their commercial potential and to advancements in language processing that have facilitated more natural conversations. However, nearly all chatbots to date have been designed for dyadic, one-on-one communication with users. In this paper we present a comprehensive review of research on chatbots supplemented by a review of commercial and independent chatbots. We argue that chatbots' social roles and conversational capabilities beyond dyadic interactions have been underexplored, and that expansion into this design space could support richer social interactions in online communities and help address the longstanding challenges of maintaining, moderating, and growing these communities. In order to identify opportunities beyond dyadic interactions, we used research-through-design methods to generate more than 400 concepts for new social chatbots, and we present seven categories that emerged from analysis of these ideas.
Article
Mobile text messaging is one of the most important communication channels today, but it suffers from a lack of expressiveness, context and emotional awareness compared to face-to-face communication. We address this problem by augmenting text messaging with information about users and contexts. We present and reflect on lessons learned from three field studies, in which we deployed augmentation concepts as prototype chat apps in users’ daily lives. We studied (1) subtly conveying context via dynamic font personalisation (TapScript), (2) integrating and sharing physiological data – namely heart rate – implicitly or explicitly (HeartChat) and (3) automatic annotation of various context cues: music, distance, weather and activities (ContextChat). Based on our studies, we discuss chat augmentation with respect to privacy concerns, understandability, connectedness and inferring context, in addition to methodological lessons learned. Finally, we propose a design space for chat augmentation to guide future research, and conclude with practical design implications.
Article
Modelling is used in early phases of software and system development to discuss and explore problems, understand domains, evaluate alternatives and comprehend their implications. In this setting, modelling is inherently collaborative as it involves stakeholders with different backgrounds and expertise, who need to cooperate to build solutions based on consensus. However, modelling tools typically provide unwieldy diagrammatic editors that hamper the active involvement of domain experts and lack mechanisms to ease decision-making. To tackle these issues, we embed modelling within social networks, so that the interface for modelling is natural language, which a chatbot interprets to derive an appropriate domain model. Social networks have intuitive built-in discussion mechanisms, while the use of natural language lowers the entry barrier to modelling for domain experts. Moreover, we enhance modelling with soft consensus decision-making that facilitates the choice among modelling alternatives. This proposal is supported by our tool Socio, which works on social networks like Telegram.
Article
Emotion is intrinsic to humans and consequently emotion understanding is a key part of human-like artificial intelligence (AI). Emotion recognition in conversation (ERC) is becoming increasingly popular as a new research frontier in natural language processing (NLP) due to its ability to mine opinions from the plethora of publicly available conversational data on platforms such as Facebook, YouTube, Reddit, Twitter, and others. Moreover, it has potential applications in health-care systems (as a tool for psychological analysis), education (understanding student frustration), and more. Additionally, ERC is also extremely important for generating emotion-aware dialogues that require an understanding of the user’s emotions. Catering to these needs calls for effective and scalable conversational emotion-recognition algorithms. However, it is a difficult problem to solve because of several research challenges. In this paper, we discuss these challenges and shed light on the recent research in this field. We also describe the drawbacks of these approaches and discuss the reasons why they fail to successfully overcome the research challenges in ERC.
Article
In this paper we present ReactionBot, a system that attaches emoji based on users' facial expressions to text messages on Slack. Through a study of 16 dyads, we found that ReactionBot was able to help communicate participants' affect, reducing the need for participants to self-react with emoji during conversations. However, contrary to our hypothesis, ReactionBot reduced social presence (behavioral interdependence) between dyads. Post-study interviews suggest that the emotion feedback through ReactionBot indeed provided valuable nonverbal cues: it offered more genuine feedback, and participants were more aware of their own emotions. However, this can come at the cost of increased anxiety from concerns about negative emotion leakage. Further, the more active role of the system in facilitating the conversation can also result in unwanted distractions and may have contributed to the reduced sense of behavioral interdependence. We discuss implications for utilizing this type of cue in text-based communication.
Conference Paper
Research has shown that productivity is mediated by an individual's ability to detach from their work at the end of the day and reattach with it when they return the next day. In this paper we explore the extent to which structured dialogues, focused on individuals' work-related tasks or emotions, can help them with the detachment and reattachment processes. Our inquiry is driven by SwitchBot, a conversational bot which engages with workers at the start and end of their work day. After preliminarily validating the design of a detachment and reattachment dialogue framework with 108 crowdworkers, we study SwitchBot's use in situ for 14 days with 34 information workers. We find that workers send fewer e-mails after work hours and spend a larger percentage of their first hour at work using productivity applications than they normally would when using SwitchBot. Further, we find that productivity gains were better sustained when conversations focused on work-related emotions. Our results suggest that conversational bots can be effective tools for aiding workplace detachment and reattachment and can help people make successful use of their time on and off the job.