Working with ELSA
How an Emotional Support Agent Builds Trust in Virtual Teams
Lennart Hofeditz, University of Duisburg-Essen, Germany, lennart.hofeditz@uni-due.de
Mareen Harbring, University of Duisburg-Essen, Germany, mareen.harbring@stud.uni-due.de
Milad Mirbabaie, Paderborn University, Germany, milad.mirbabaie@uni-paderborn.de
Stefan Stieglitz, University of Duisburg-Essen, Germany, stefan.stieglitz@uni-due.de
Abstract
Virtual collaboration is an increasing part of daily life
for many employees. Despite many advantages,
however, virtual collaborative work can lead to a lack
of trust among virtual team members, e.g., due to
spatial separation and little social interaction.
Previous findings indicated that emotional support
provided by a conversational agent (CA) can impact
human-agent trust and the perceived social presence.
We developed an emotional support agent called
ELSA and conducted a between-subject online
experiment to examine how CAs can provide
emotional support in order to increase the level of
trust among colleagues in virtual teams. We found that
human-agent trust positively influences the level of
calculus-based trust among team members and
increases team cohesion, whereas perceived
anthropomorphism and social presence towards a CA
seem to be less important for trust among team
members.
1. Introduction
Even before the challenges faced by enterprises
through Covid-19, widely distributed teams and
projects carried out in virtual collaboration were
increasingly part of the everyday life of employees [1].
In order to secure competitive advantages, it is of great
importance for organizations to find ways to improve
teamwork, especially within the growing field of
virtual work. Despite the various advantages such as
being independent of the employee’s location, virtual
collaboration can lead to feelings of isolation, owing to a lack of regular physical encounters with other team members, and to a lack of trust among team members [2].
One reason can be the spatial separation of colleagues,
decreasing the transparency of how much effort each
team member puts into a project [3]. In the case of newly composed work teams, building trust in a virtual team
is even more difficult. To increase the level of trust in
virtual teams, information and communication
technology (ICT) focusing on the graphical
visualization of activities and work contributions of
various team members can be applied [4]. Trust equivalent to that established in physical face-to-face teamwork, however, is best built through interaction with a real person [5].
There is one specific type of ICT that naturally combines an almost real-world, human-like form of trust with an appropriate virtual solution and can thus be expected to support building trust virtually: the Conversational Agent (CA). CAs are
automated tools that can contain human characteristics
and behavioral aspects, reaching from superficial
appearances to certain language features [6]. They
represent a range of technologies and are designed for
processing natural language that enables them to serve
as a non-human interaction partner for a human user.
CAs can be considered both as a teammate in virtual
collaboration and as a tool for information and
communication [7]. CAs (such as chatbots for
strengthening mental health) are able to provide
emotional support to humans by analyzing people’s
emotions or suggesting exercises and tasks [2]. They
are also found to enrich efficient team interactions for
collaboration purposes [8]. Thus, they represent a
promising approach to address the problem of
isolation and lack of trust within virtual teams. While
there has been research on how to achieve human-
agent trust, there is limited research that contributes to
building trust among human virtual team members
through CAs providing emotional support. In order to
provide more insight into this research field, we aimed
to address the issue of isolation and a lack of trust in
virtual teams by raising the following research
question (RQ):
How can CAs provide individual emotional
support in order to increase trust among human team
members within virtual collaboration?
Based on a thorough literature analysis, we
proposed a new research model for emotional support
through a CA in a virtual work context and developed
an emotionally supportive CA, called ELSA
(EmotionaL Support Agent). ELSA can provide
emotional support to individual team members during
work related tasks. We evaluated ELSA in a between-
subject online experiment. One group received
emotional support during a work-related task through
ELSA, while the control group only received task-related support from a CA.
With this work, we provide knowledge on how CAs can be applied not only as useful tools that provide work-related information, but also as virtual teammates that offer emotional support in order to increase trust and team cohesion within virtual teams.
2. Related Work
There is a wide range of collaboration
technologies which can be applied to support virtual
collaborative work such as virtual assistants, CAs or
other AI-based collaboration technology [9]. The
progress of these collaborative technologies suggests
that in the future, these machines will be more than
tools that support team performance. Mirbabaie et al.
[7] were already able to show that these technologies
can be perceived as both supportive collaboration tools
and virtual teammates. This offers a variety of new
opportunities to mitigate shortcomings of virtual
collaboration.
Among the most widely researched and deployed collaboration technologies in virtual collaboration are CAs [9, 10, 11, 12, 13]. A CA can be defined as software that interacts and exchanges information with its users through natural language [8:1] by using natural language processing. Beyond application fields like customer service and education, CAs can also be used for organizational purposes such as task management in virtual collaborative work [14]. For
CAs to be adopted and used by employees, it is
necessary to ensure that CAs are perceived as trustworthy [15]. Trust in a CA is found to have a positive effect on its acceptance by human users [15], on creating positive experiences, and on establishing a stable relationship with it [16]. Furthermore, trust in a CA
has a positive effect on the intention to use it again in
the future [17]. For the present context, we consider
trust in a CA as “an individual’s belief in the
competence, dependability and security” of a CA
[18:482].
Team members of conventional face-to-face
teams also establish trustworthiness through personal
and social cues [3]. Resulting from a systematic literature review, the taxonomy of Feine et al. [19] provides a comprehensive and up-to-date overview of social cues for CAs, summarizing different contexts and terminologies from the empirical literature.
They define the term social cue as “a cue that triggers
a social reaction towards the emitter of the cue”
[19:11]. According to this definition, a social cue
develops into a social signal which is perceived by a
human user. This in turn triggers a social reaction
towards a CA. The taxonomy comprises a total of 48
social cues which can be divided into four main
categories with several corresponding sub-categories.
The main categories are verbal, visual, auditory and
invisible. Social cues that people provide in physical face-to-face encounters are barely present in the interactions of virtual teams; they are easily lost or misinterpreted in digital communication. We therefore implemented social cues in ELSA’s vocabulary and appearance, enriching the virtual collaboration by transmitting social cues during each team member’s individual conversation. In this way, we support the building of trust among team members as well as trust towards the CA.
As we found that, in addition to social cues, emotional support through a CA can also foster team cohesion and trust among virtual team members, we focused on socio-emotional CA behavior. Janssen et
al. [20] define the term of socio-emotional behavior as
an ability of a CA to have a conversation with the
awareness of what topic is being discussed. This
includes showing empathy for the user’s individual
needs as well as responding to expressed emotions of
the human user. Emotions are human characteristics
that a technology like a CA does not naturally possess.
In order to provide emotional support, a CA needs to
contain human-like emotional characteristics such as
anthropomorphic features which can be summarized
as “the attribution of human-like physical or non-
physical features, behavior, emotions, characteristics
and attributes to a non-human agent” [10:523].
Anthropomorphic features are found to have a
significant impact on the way users perceive the
interaction with a CA and are closely connected to
social cues [19]. In addition, they can have a positive
impact on the believability of a CA [6], on the desire
to interact with it [21] and on a CA’s social presence
[22]. Social presence represents the emotional
component of anthropomorphism. It relates to the
aspects of empathy, sociability and warmth of human
behavior [23]. Social presence of a CA is found to
predict the acceptance of it [24]. The implementation of social and anthropomorphic cues also relates to the well-known Computers Are Social Actors (CASA) paradigm [25, 26], which states that humans behave towards computers in the same way as towards other humans, although the human user is aware of the fact that a computer is not a human being. Although the
relationship of social presence, social cues, and
anthropomorphism with trust has already been suggested, it
is largely unclear how CAs can use targeted emotional
support to improve trust within virtual teams. What is
clear, however, is that there is often a lack of resources
for providing emotional support that can be addressed
by applying CAs [2].
3. Theoretical Background
An overview of the empirical literature reveals that there is no common agreement on a uniform definition for the construct of trust due to its interdisciplinary nature. Following Lewicki and Bunker [5], we adopt the definition of Boon and Holmes [27], as it covers the crucial elements of a multitude of definitions. Accordingly, trust is defined as “a state involving confident positive expectations about another’s motives with respect to oneself in situations entailing risk” [27:194].
Although some empirical trust models assume
that trust can only develop over time, there also exists
the paradox of high initial trust levels which
demonstrates a high level of trust after a short time
[28]. We decided to focus on these high initial trust
levels, as we assumed that emotional support is
especially necessary for employees in their initial
stage of virtual collaborative work in order to achieve
a high level of trust within a team. The High-Level
Model of Initial Formation of Trust aims to explain the
process of building initial trust in an organizational
context [28]. Overall, there is a consensus in research
that the level of trust increases in the form of
successive stages. A three-part division into the
successive levels of a building stage, followed by a
stabilization stage and ending in a mature state of trust,
often appears in the literature [29].
Against the background of the High-Level Model
of Initial Formation of Trust, Lewicki and Bunker’s
model of organizational trust building [5] serves as the
underlying theoretical basis to define the
understanding of trust in this work. Lewicki and
Bunker [5] propose three types of trust: calculus-based
trust, knowledge-based trust and identification-based
trust. Calculus-based trust refers to “consistency of
behavior” [5:118], that is, whether a trustee really does what
he or she says. This stage represents the early, initial
formation of trust and therefore serves as the focus of
this research. People in this phase have not yet had any
background information or past experience that they
can use to form an opinion about a trustee’s
trustworthiness. This stage is therefore mainly about
weighing up the costs and benefits of entering into a trusting relationship. The risk of whether a trustee fulfills the expectations placed on him or her also plays a role in whether a trustor considers it beneficial to trust the trustee [30].
Trust building has already been widely examined in the context of virtual collaboration with spatially distributed employees and can be considered crucial for virtual collaboration. Pinsonneault and Caya [31] state that “trust is one of the
most important process variables in virtual team
research” [31:4]. In virtual teams, trust needs to
replace the supervision and transparency of the work
of other team members [3]. For virtual teams, building
trust among one another is a much greater challenge
than for conventional teams mainly because of the
limited personal contact.
Existing trust within a team can have a positive
impact on other aspects of group dynamics like team
cohesion. Team cohesion was found to be the “most
important small group variable” [32:259] and can be
defined as the degree to which team members like each
other and desire to remain in the team [33]. Trust in
other team members and team cohesion show a
significant relationship and have frequently been
examined together. Accordingly, a high level of trust
within a team could lead to a higher level of team
cohesion.
Working in virtual teams can also have an impact on well-being, “defined as the physical and mental health of employees” [34:363]. Thus, a connection between remote work and low workplace well-being can be assumed from psychological, social and physical points of view. Besides social
contact, trust at the workplace can be a fundamental
predictor for the construct of subjective well-being
(SWB). Considering these related works and
background, we derived the following ten
hypotheses:
H1: Receiving emotional support from a CA in virtual
collaboration positively affects the level of calculus-
based trust in the team.
H2: Receiving emotional support from a CA in virtual
collaboration positively affects the level of human-
agent trust.
H3: Receiving emotional support from a CA in virtual
collaboration positively affects the level of perceived
anthropomorphism.
H4: Receiving emotional support from a CA in virtual
collaboration positively affects the level of perceived
social presence.
H5: Human-agent trust has a positive impact on the
level of calculus-based trust in virtual collaboration.
H6: Perceived anthropomorphism has a positive
impact on the level of calculus-based trust in virtual
collaboration.
H7: Perceived social presence has a positive impact
on the level of calculus-based trust in virtual
collaboration.
H8: Calculus-based trust among team members in
virtual collaboration has a positive impact on the level
of team cohesion.
H9: Calculus-based trust among team members in
virtual collaboration has a positive impact on the level
of SWB.
H10: Human-agent trust has a positive impact on the
level of team cohesion.
4. Research Design
4.1. CA Design and Procedure
To examine the proposed model, we designed ELSA, an emotionally supportive CA containing trustworthy, social, anthropomorphic and trust-supporting cues. For the between-subject design, and in order to examine ELSA’s effectiveness, we also developed a second CA named TaskBot. Contrary to ELSA, TaskBot interacts exclusively on a task-related basis, and its communication design did not include any social cues or emotionally supportive phrases. We provide an overview of ELSA’s cues in Table 1.
Table 1. Social, anthropomorphic, trustworthy and trust-building cues of ELSA

Social cues (Feine et al. [6]): greetings, thanking, small talk, smileys & informal language.
Anthropomorphic cues (Pfeuffer et al. [10]; [35]): profile picture, delayed response time, and referring to herself as “I”.
Cues for trustworthiness towards the CA [15, 17]: giving transparency & using an anthropomorphic verbal style.
Cues for providing emotional support: providing feedback about the team’s work progress and asking about impressions of the other team members.
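To illustrate how such cue differences can be operationalized, the following minimal Python sketch (our illustration, not the authors’ implementation; the cue phrases and delay values are assumptions) decorates the same task-related answer with the kinds of social and anthropomorphic cues listed in Table 1:

```python
import random
import time

# Hypothetical cue phrases, loosely following the categories in Table 1.
SOCIAL_CUES = {
    "greeting": ["Hi! Nice to see you again :)", "Hello!"],
    "small_talk": ["How is the task going for you so far?"],
}

def taskbot_reply(answer: str) -> str:
    # Control condition: task-related content only, no social cues.
    return answer

def elsa_reply(answer: str, first_turn: bool = False) -> str:
    # Experimental condition: wrap the same content in social cues
    # (informal language, smileys) and anthropomorphic cues
    # (self-reference as "I", a human-like response delay).
    parts = []
    if first_turn:
        parts.append(random.choice(SOCIAL_CUES["greeting"]))
    parts.append(f"I think this could help you: {answer} :)")
    parts.append(random.choice(SOCIAL_CUES["small_talk"]))
    time.sleep(random.uniform(1.0, 3.0))  # delayed-response-time cue
    return " ".join(parts)

print(taskbot_reply("Rank the signal mirror first."))
print(elsa_reply("Rank the signal mirror first.", first_turn=True))
```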
In addition, the two CAs differed in the way their communication was controlled. ELSA was controlled according to the Wizard of Oz method, a well-known technique in human-computer interaction research [36]. ELSA’s responses were controlled by an experimenter in order to simulate the realistic functioning of ELSA as a CA, while the participants were not informed that ELSA was controlled by a human. Nevertheless, it was clearly communicated in advance that ELSA is an automated CA. In contrast, TaskBot was implemented using Google’s Dialogflow. For the participants, however, the difference was not visible, as both groups interacted with the CA in the same interface via Slack.
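As a rough sketch of how a task-only bot like TaskBot could be wired into Slack with a Dialogflow ES agent behind it (this is our illustration under stated assumptions, not the authors’ code; the project ID and environment variable names are placeholders, and error handling is omitted), one might use the Bolt framework together with the google-cloud-dialogflow client:

```python
import os

from google.cloud import dialogflow
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])
sessions = dialogflow.SessionsClient()
PROJECT_ID = "taskbot-demo"  # hypothetical Dialogflow project ID

@app.event("message")
def handle_message(event, say):
    # Forward the user's text to Dialogflow and relay the matched
    # intent's fulfillment text back into the Slack conversation.
    session = sessions.session_path(PROJECT_ID, event["user"])
    query = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=event["text"], language_code="en")
    )
    result = sessions.detect_intent(
        request={"session": session, "query_input": query}
    )
    say(result.query_result.fulfillment_text)

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```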
To test ELSA’s abilities, we conducted an online team experiment via Zoom and Slack. The evaluation consisted of an interaction with ELSA or TaskBot, followed by questionnaires about the interaction and its perception. We chose a text-based CA because it represents the most commonly used type of CA in practice. To simulate a realistic
organizational environment, both ELSA and TaskBot
were implemented in the instant messenger software
Slack which is often used for organizational team
communication. Participants were recruited online via
social network sites. The assignment of the
participants to one of the two conditions was
conducted randomly.
During the first part of the evaluation, a virtual
work team consisting of three to five participants was
invited to a Zoom room to get to know each other
before each of the team members interacted with one
of the CAs. Each participant received a link to
LimeSurvey, in which demographic data and
information about previous experience with CAs were
inquired. Afterwards, the participants received further instructions for the subsequent individual CA interaction and the access data for Slack. The participants were asked to solve the Desert Survival Task, a widely known team-building task, with the advice to contact their CA for help. After solving the task and interacting with the CAs in a separate meeting room, participants of both groups were asked to answer questionnaires on the interaction with the CA and their perception of the virtual team. This part was expected to show whether participants who interacted with ELSA would establish a higher level of calculus-based trust with one another. Data collection lasted from 11 December 2020 to 31 January 2021. In Figure 1, we provide a
representative screenshot of how our participants
interacted with ELSA.
4.2. Questionnaires
Our survey consisted of a total of ten different
questionnaires. We used Likert scales ranging from 1
(strongly disagree) to 7 (strongly agree). At the
beginning, demographic information of the
participants, namely age, gender, level of education
and level of occupation were requested. We measured
previous experience with CAs using the sub-construct Experience from the Use of Technology questionnaire [37]. This was followed by the
questionnaire on the construct of human-agent trust,
adapted from the original Human-Computer Trust
questionnaire [38]. For reasons of content fitting, we
only adopted the sub-constructs Benevolence and
Reciprocity. The level of team cohesion was measured
by Seashore’s [39] Index of Group Cohesiveness. The
level of calculus-based trust among each team member
was measured by the Calculus-Based Trust Scale [30].
To measure how human-like the CAs were perceived
we measured anthropomorphism [35]. Our
questionnaire on social presence [40] consisted of five
items in total. In addition, we measured SWB
according to Ashleigh et al. [34] by the Subjective
Happiness Scale [41] and the Satisfaction with Life
Scale [42]. The penultimate block of questions
comprised six items on the future use of the CA. With
a self-developed item, we asked whether the participant would like to use a CA such as ELSA or TaskBot for his or her (future) job.
question block included the sub-construct Task Fit
from the Use of Technology questionnaire [37].
4.3. Sample
A total of 98 participants took part in our survey, of which 96 provided valid data sets. Two participants had to be excluded because they failed two attention-check questions. Participants in the final sample were between 18 and 47 years old, with an average age of 22 years. The sample consisted of 73 female (76.0%) and 22 male (22.9%) participants; one participant answered “diverse”. 22 people stated that they had minor difficulties with the CA or that the CA did not always provide correct answers. On average, the participants had little previous experience with CAs (M = 2.82); in this respect, the ELSA and TaskBot groups did not differ significantly. Across the group conditions, 47 participants were assigned to the experimental group (n = 47) and 49 to the control group (n = 49).
5. Results
5.1. Statistical Results
To examine the stated hypotheses of the proposed research model, we calculated one-way analyses of variance (H1-4) and linear regressions (H5-9) using IBM SPSS version 27. In addition to validity, the reliability of all constructs was examined in advance. Reliability was acceptable for each construct, as the threshold of .7 was exceeded for all of them.
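As a minimal, hedged sketch of this kind of reliability check (on synthetic 7-point Likert data; the real item responses are not available to us), Cronbach’s alpha for a multi-item construct can be computed as follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: participants x items matrix of Likert responses.
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=(96, 1))                 # latent construct level
items = np.clip(base + rng.integers(-1, 2, size=(96, 4)), 1, 7)
print(f"alpha = {cronbach_alpha(items):.2f}")           # acceptable if > .7
```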
We first checked the prerequisites for conducting the one-way analyses of variance. The requirement of normal distribution was not met for H1 and H3. We summarized the results of our calculations for H1-4 in Table 2.
Table 2. One-way analyses of variance

      p       η²     M(ELSA)   M(TaskBot)
H1    .051    .04    4.85      4.35
H2    .001    .12    5.12      4.36
H3    .006    .08    3.10      2.26
H4    <.001   .42    4.62      2.38
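For readers who want to reproduce this kind of group comparison outside SPSS, a minimal sketch with scipy is shown below; the data are synthetic and only loosely mimic the H2 row of Table 2:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
elsa = rng.normal(5.1, 1.0, 47)     # synthetic ELSA-group ratings
taskbot = rng.normal(4.4, 1.0, 49)  # synthetic TaskBot-group ratings

f_value, p_value = stats.f_oneway(elsa, taskbot)

# Effect size eta^2 = SS_between / SS_total for a one-way design.
pooled = np.concatenate([elsa, taskbot])
grand_mean = pooled.mean()
ss_between = (len(elsa) * (elsa.mean() - grand_mean) ** 2
              + len(taskbot) * (taskbot.mean() - grand_mean) ** 2)
ss_total = ((pooled - grand_mean) ** 2).sum()

print(f"F = {f_value:.2f}, p = {p_value:.3f}, "
      f"eta^2 = {ss_between / ss_total:.2f}")
```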
Figure 1. Representative extract from a participant’s conversation with ELSA
To examine the influences of human-agent trust
(H5), perceived anthropomorphism (H6) and
perceived social presence (H7) on the level of
calculus-based trust, we calculated linear
regressions.
The prerequisites for calculating a linear regression were checked in advance. The results of the regression for H5 showed that the level of human-agent trust significantly predicted the level of calculus-based trust with a positive, moderate effect (see Table 3); according to Cohen [43], this corresponds to a moderate effect size of f² = .20. With regard to the assumed causal relationship
of H6, we found that the requirement for a linear
relationship of perceived anthropomorphism and
calculus-based trust was not met. For perceived social
presence and calculus-based trust (H7) the
requirement of a linear relationship was also violated.
We therefore did not carry out the corresponding linear regressions.
Table 3. Overview of linear regressions

      β     t(45)   p      R²    F(1, 45)
H5    .41   3.03    .004   .17   9.15
H8    .49   3.81    .001   .24   14.54
Furthermore, we calculated a linear regression to test H8. The results showed that the level of calculus-based trust significantly predicted the level of team cohesion with a positive, moderate influence (see Table 3). The effect size amounts to f² = .32, which, according to Cohen [43], can be classified as a medium to large effect. The requirement of a linear relationship was violated for calculus-based trust and SWB (H9), so we did not carry out a further linear regression.
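The regressions in Table 3 and the reported Cohen’s f² values follow the standard single-predictor formula f² = R² / (1 − R²). A hedged sketch with statsmodels on synthetic data (loosely mimicking H5; not the authors’ data) is:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
human_agent_trust = rng.uniform(1, 7, 47)
# Synthetic criterion with a built-in positive relation (illustration only).
calculus_trust = 2.0 + 0.4 * human_agent_trust + rng.normal(0, 1.0, 47)

X = sm.add_constant(human_agent_trust)
model = sm.OLS(calculus_trust, X).fit()

r_squared = model.rsquared
f_squared = r_squared / (1 - r_squared)  # Cohen's f^2, single predictor
print(f"b = {model.params[1]:.2f}, p = {model.pvalues[1]:.3f}, "
      f"R^2 = {r_squared:.2f}, f^2 = {f_squared:.2f}")
```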
As the results revealed a significant positive causal relation between human-agent trust and the level of calculus-based trust, we also assumed an influence of human-agent trust on team cohesion (H10). The results of the respective linear regression showed that the level of human-agent trust significantly predicted the level of team cohesion with a high positive influence (β = .57, t(45) = 4.62, p < .001). The level of human-agent trust also explained a significant proportion (32.2%) of the total variance in team cohesion (R² = .32, F(1, 45) = 21.35, p < .001). With R² = .32, the linear model fitted the data well and, according to Cohen [43], represents a very large effect size (f² = .47). We summarized our results in Figure 2.
Figure 2. Visualization of the main findings
5.2. Further Results
In addition to these results, we found further relevant aspects through the evaluation of ELSA and TaskBot. Participants who interacted with ELSA used more emoticons overall, chose a friendlier tone of texting and showed interest in ELSA by asking counter-questions. In addition, the survey measured task fit, that is, the extent to which participants believed that the respective CA would also be well suited for future tasks. The experimental group that interacted with ELSA showed a higher mean value (M(ELSA) = 4.29) than the control group (M(TaskBot) = 3.78). When asked whether they would like a CA such as the one they got to know in the experiment for their job, the majority of the experimental condition answered “yes” (n = 31), while the majority of the participants in the control condition answered “no” (n = 31).
6. Discussion
We expected to find a difference in the calculus-based trust level between participants interacting with ELSA and those interacting with TaskBot. However, our findings did
not support this hypothesis. Thus, we cannot conclude
that emotional support of a CA can directly contribute
to a higher level of calculus-based trust among team
members (H1 not supported). According to the mental model approach [44], one possible explanation is that ELSA was still perceived as too robotic, which resulted in an aversion to interacting with her. In connection with the human perception of CAs, the Uncanny Valley phenomenon [45] could be another explanation. The sample’s low average level of previous experience with CAs suggests that inexperienced users need more time to get used to interacting with a CA. However, arguments stating that it would be easy for teams to build trust in the initial
phase of teamwork [28], and that teams are able to build this trust quickly, are not completely invalidated by the result of H1. Since the initial phase of teamwork, to which the measurement of calculus-based trust relates, is not limited to one initial task, it is entirely possible that a longer-lasting experiment could yield different findings.
The statistical result of H2 indicates that the ELSA and TaskBot groups differ significantly in their level of trust towards the respective CA. This finding confirms previous literature on anthropomorphic and social cues, as these cues are considered to have a positive influence on believability and trust in a CA [23]. Although trust is a phenomenon that originally develops between humans, trustworthy cues, in connection with social and anthropomorphic cues, can certainly contribute to non-human technologies like CAs being classified as trustworthy.
H5 was supported by the result of the linear regression. It can be derived that the design of a trust-supporting CA should not be limited to anthropomorphic and social cues, but rather needs to include trustworthy cues (e.g., transparency about what other team members do or about trust towards the CA). This finding can be explained by the fact that people are more likely to engage in trust-supporting communication if they consider a CA trustworthy. Consistent with the work of [46], this was achieved by reducing uncertainty towards ELSA.
For H3, the conducted calculation showed the
expected significant difference between the conditions
with regard to perceived anthropomorphism (H3
supported). The finding also confirmed previous
research on anthropomorphic cues; in particular, a
correspondence with the CASA paradigm [26] can be
recognized here. The aspect of social presence may
also have contributed to the result of the significantly
higher mean value for the experimental group, since
aspects such as human warmth are considered to be the
social component of anthropomorphic cues [23].
Contrary to our expectations, H6 could not be
supported by our results. An insufficient level of
perceived anthropomorphism could be responsible for
the fact that the corresponding technology is not
accepted by a human, or in the present case not
sufficiently accepted for effective emotional support.
At this point, the Uncanny Valley approach [45] can
be considered again for explanation, since the mean
value for the experimental group shows a rather
medium value of perceived anthropomorphism. As
another explanation, it may not be necessary to design
a CA to be as human-like as possible for the purpose of
emotional support. According to [21], “learning purely
from human-human behavior may not always be the
most effective approach” (p. 74). This can also be
supported by the fact that most of our participants
stated that they would like to use ELSA for their work.
For H4, we found a significant difference between
the group conditions with regard to the perceived
social presence of the respective CA. Accordingly, the
group’s mean values for the construct of social
presence showed that the social presence of ELSA was
perceived as much higher than that of TaskBot. The result fits with previous research that concluded that
anthropomorphic cues have a positive effect on the
perceived social presence [23, 47]. According to the
CASA paradigm [26], the finding suggests that it is
entirely possible to attribute socio-emotional
characteristics such as human warmth, empathy and
sociability to a CA.
With regard to social presence, we assumed that
perceived social presence is suitable for predicting the
calculus-based trust level among team members who
receive emotional support. However, our results
showed that H7 could not be supported. This result differs from previous studies, which found social presence to be a suitable predictor of trust, especially in the context of CAs [48]. Despite the short interaction time, the present work showed that calculus-based trust has a significantly positive effect on team cohesion. Thus, the assumption of H8 was supported. Although the experimental and control conditions did not significantly differ in terms of their level of calculus-based trust, it is still evident that trust and team cohesion are positively related and that trust can serve as a suitable predictor of team cohesion [32].
As H9 was not supported, we assume no causal effect of calculus-based trust as a predictor on the criterion SWB. Contrary to previous research suggesting a relationship between existing trust in work colleagues and SWB [34], such a relationship could not be confirmed for the construct of calculus-based trust.
As a further result, the response behavior of
participants towards both CAs was conspicuous. The
majority of participants interacting with ELSA behaved in a friendly, sensitive and interested manner towards the CA, while this was not the case towards TaskBot. This phenomenon supports the CASA
paradigm [26]. Although ELSA’s focus lies on socio-emotional behavior rather than efficiency, the results suggest that the two aspects of socio-emotional interaction and efficiency do not contradict each other. Despite small talk and other non-task-related questions, the participants interacting with ELSA indicated that they would like to use the system for future tasks in their everyday work. This finding fits with empirical literature stating that “trust in [virtual] advisors positively affect their reuse intentions” [17:3]. It can be derived from this finding that social talk is at least as important to users as efficiency.
The investigated influence of human-agent trust on team cohesion showed a significant effect (H10 supported). We conclude that trust is a suitable predictor of team cohesion. Against this background, an important finding of this work is that both kinds of trust (calculus-based trust towards other human team members and trust in a CA) can predict the level of team cohesion in emotionally supported virtual collaboration. In contrast, the exploratorily investigated influence of human-agent trust on SWB shows no significant causal relationship. It can be deduced that both types of trust (calculus-based trust and trust in a CA) do not influence the level of a user’s SWB in the initial phase of virtual teamwork.
Although the mean value of human-agent trust
towards ELSA was significantly higher than the mean
value of team trust, in general, a high level of trust
does not have any significant influence on SWB in the
present context.
6.1. Implications
With this work, we contribute to research and
practice. The paradox of high initial levels of trust in newly formed teams [28] can be considered transferable, to a certain extent, to the construct of calculus-based trust in emotionally supported virtual collaboration, because a medium-high entry level of calculus-based trust was already observable at the first interaction. It can be derived that a CA’s emotional support is more effective in teams with a high tendency to trust technologies such as CAs. In order to be able to transfer the initial phase of
Lewicki and Bunker’s [5] trust model to virtual
collaboration supported by a CA, it needs to be taken
into account that the design of an emotional support
agent should contain trustworthy cues. The high-level
model of initial formation of trust [28] can be used as
a starting point to understand emotional support by a
CA to increase the level of trust in virtual teams. In
addition, the result of the statistical examination of H9
concerning the causal relationship of calculus-based
trust on SWB contributes additional knowledge to the
question of how mental health of employees who work
in virtual collaborative teams can be improved.
If organizations decide to integrate an emotional
support agent into their work routine, they can benefit from the present results by adopting ELSA’s cues for designing a CA that is considered trustworthy. From an employee’s point of view, we provide evidence that socio-emotional communication is at least as important to users as efficiency, which should be taken into account by organizations. We could also show that employees stated that they would like to further use ELSA for their own work.
6.2. Limitations and Further Research
This article comes with some limitations and
suggestions for future studies. For the present
research, we focused on emotional support for newly
assembled work teams without a common background
and thus, exclusively on the first stage of building
trust. Therefore, researchers could extend the present findings by developing an emotional support agent for the two subsequent stages of the model of organizational trust building [5]. As the trend towards virtual work increases, also through artificial intelligence, the creation of a multimodal CA and an animated version in virtual reality would also be interesting.
Because we conducted an elaborate experiment, our sample was not as large as is typical for an online survey, and our participants were relatively young. Therefore, future research could examine larger teams in a real virtual organizational setting. In addition, the length of the task could be varied. Due to the circumstances of an online experiment, internet connection failures occurred in a few cases, resulting in some teams consisting of only two participants.
For technical reasons, ELSA was also not able to recognize the current user’s mood in real time and to adapt her responses to it. It would therefore be interesting to create a CA that is able to carry out a sentiment analysis of the user’s response behavior, as sketched below.
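As one possible off-the-shelf starting point for this future-work idea (our sketch, not the authors’ design; the thresholds are arbitrary illustration values), a CA could route its reply through a rule like the following, here using NLTK’s VADER analyzer:

```python
from nltk.sentiment import SentimentIntensityAnalyzer
# One-time setup assumed: nltk.download("vader_lexicon")

analyzer = SentimentIntensityAnalyzer()

def adapt_reply(user_message: str, answer: str) -> str:
    # VADER's compound score ranges from -1 (negative) to +1 (positive);
    # the +/-0.3 thresholds are arbitrary choices for illustration.
    score = analyzer.polarity_scores(user_message)["compound"]
    if score < -0.3:
        return f"That sounds frustrating, I am sorry to hear it! {answer}"
    if score > 0.3:
        return f"Great to hear! {answer}"
    return answer

print(adapt_reply("This task is really stressing me out.",
                  "Let's look at the next item together."))
```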
An investigation over time with repeated measurements would also be interesting, to track the development of participants’ initial level of SWB. As globally distributed teams were not the focus of this work, the influence of cultural diversity could be added to the proposed research model. In this regard, additional attention needs to be paid to possible language barriers and cultural differences. In addition to cultural differences, the inclusion of communication theories, such as the investigation of virtual and gender communication, would also be interesting.
Finally, a female CA providing emotional support may reinforce gender stereotypes, because emotional support is more likely to be attributed to women. To counteract this, an investigation with a genderless and a masculine-appearing CA would be interesting. Our participants were also predominantly female, which could have had an impact on how likable ELSA was perceived to be.
7. Conclusion
With this work, we were able to show that calculus-based trust among virtual collaborative team members can be strengthened by applying a CA with trustworthy cues. Emotional support is more effective
if a human classifies a CA as trustworthy. When
designing an emotional support agent, researchers
should consider equipping a CA with cues that make a
CA appear trustworthy to a human user. In addition, it
can be derived that emotional support of a CA in
virtual collaboration is more effective in teams with a
high tendency to trust a CA.
Our findings suggest that anthropomorphism is not the most important design feature of a CA that is meant to exhibit socio-emotional behavior. We showed that it is not the purely human characteristics, but the trust-indicating cues that influence calculus-based trust. This does not mean that anthropomorphic cues are irrelevant for supporting humans emotionally, but it suggests that future research should focus less on humanization and more on the reliable functioning of a CA. We observed that calculus-based trust in virtual collaboration starts to build with the very first interaction, even reaching a medium-high average level, which supports the existence of the paradox of high initial trust levels [28].
Based on the highlighted key findings, the RQ of
the present work could be answered by showing how
an emotional support agent can be designed to provide
emotional support for individual human team
members in order to increase trust within virtual
collaborative teams. Because anthropomorphic and
social cues presumably also contribute to a CA being
perceived as trustworthy, this aspect should not be
neglected. However, the key to providing emotional support through a CA lies in its trustworthiness.
8. References
[1] Hassell, M., and J. Cotton, “Some things are better left unseen: Toward more effective communication and team performance in video-mediated interactions”, Computers in Human Behavior 73, 2017, pp. 200–208.
[2] Denecke, K., S. Vaaheesan, and A. Arulnathan, “A Mental Health Chatbot for Regulating Emotions (SERMO) - Concept and Usability Test”, IEEE Transactions on Emerging Topics in Computing 99(1), 2020.
[3] Plotnick, L., S.R. Hiltz, and R.J. Ocker, “Trust in partially distributed teams”, ICIS 2009 Proceedings - Thirtieth International Conference on Information Systems, (2009), 1–17.
[4] Al-Ani, B., and D. Redmiles, “Trust in distributed teams: Support through continuous coordination”, IEEE Software 26(6), 2009, pp. 35–40.
[5] Lewicki, R.J., and B.B. Bunker, “Developing and maintaining trust in work relationships”, In Trust in Organizations: Frontiers of Theory and Research. SAGE Publications, Inc., 1996, 114–139.
[6] Feine, J., U. Gnewuch, S. Morana, and A. Maedche, “A Taxonomy of Social Cues for Conversational Agents”, International Journal of Human-Computer Studies 132, 2019, pp. 138–161.
[7] Mirbabaie, M., S. Stieglitz, F. Brünker, L. Hofeditz, B. Ross, and N.R.J. Frick, “Understanding Collaboration with Virtual Assistants - The Role of Social Identity and the Extended Self”, Business & Information Systems Engineering, 2020.
[8] Diederich, S., A.B. Brendel, S. Lichtenberg, and L. Kolbe, “Design for Fast Request Fulfillment or Natural Interaction? Insights from an Experiment with a Conversational Agent”, 27th European Conference on Information Systems (ECIS), (2019), 1–17.
[9] Seeber, I., E. Bittner, R.O. Briggs, et al., “Machines as teammates: A research agenda on AI in team collaboration”, Information and Management 57(2), 2020, 103174.
[10] Pfeuffer, N., A. Benlian, H. Gimpel, and O. Hinz, “Anthropomorphic Information Systems”, Business & Information Systems Engineering 61(4), 2019, pp. 523–533.
[11] Tavanapour, N., and E.A.C. Bittner, “Automated Facilitation for Idea Platforms: Design and Evaluation of a Chatbot Prototype”, International Conference on Information Systems (ICIS) Proceedings, 2018, pp. 1–9.
[12] Seeber, I., L. Waizenegger, S. Seidel, and S. Morana, “Reinventing Collaboration With Autonomous Technology-Based Agents”, 2019.
[13] Elson, J.S., D. Derrick, and G. Ligon, “Examining Trust and Reliance in Collaborations between Humans and Automated Agents”, Proceedings of the 51st Hawaii International Conference on System Sciences, 2018, pp. 430–439.
[14] Toxtli, C., A. Monroy-Hernández, and J. Cranshaw, “Understanding chatbot-mediated task management”, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 1–6.
[15] Benke, I., “Towards Design Principles for Trustworthy Affective Chatbots in Virtual Teams”, ECIS 2020 Proceedings, (2020), 1–12.
[16] Benbasat, I., and W. Wang, “Trust in and adoption of online recommendation agents”, Journal of the Association for Information Systems 6(3), 2005, pp. 72–101.
[17] Al-Natour, S., I. Benbasat, and R. Cenfetelli, “Trustworthy virtual advisors and enjoyable interactions: Designing for expressiveness and transparency”, 18th European Conference on Information Systems, ECIS 2010 Proceedings, (2010), 1–12.
[18] Ogonowski, A., A. Montandon, E. Botha, and M. Reyneke, “Should new online stores invest in social presence elements? The effect of social presence on initial trust formation”, Journal of Retailing and Consumer Services 21(4), 2014, pp. 482–491.
[19] Feine, J., U. Gnewuch, S. Morana, and A. Maedche, “A taxonomy of social cues for conversational agents”, International Journal of Human-Computer Studies 132, 2019, pp. 138–161.
[20] Janssen, A., J. Passlick, D. Rodríguez Cardona, and M.H. Breitner, “Virtual assistance in any context: A taxonomy of design elements for domain-specific chatbots”, Business and Information Systems Engineering 62(3), 2020, pp. 211–225.
[21] McDuff, D., and M. Czerwinski, “Designing emotionally sentient agents”, Communications of the ACM 61(12), 2018, pp. 74–83.
[22] Rietz, T., I. Benke, and A. Maedche, “The Impact of Anthropomorphic and Functional Chatbot Design Features in Enterprise Collaboration Systems on User Acceptance”, 14th International Conference on Wirtschaftsinformatik (WI’19), 2019, pp. 1656–1670.
[23] Qiu, L., and I. Benbasat, “Evaluating anthropomorphic product recommendation agents: A social relationship perspective to designing information systems”, Journal of Management Information Systems 25(4), 2008, pp. 145–182.
[24] Benbasat, I., A. Dimoka, P.A. Pavlou, and L. Qiu, “Incorporating social presence in the design of the anthropomorphic interface of recommendation agents: Insights from an fMRI study”, 31st International Conference on Information Systems, ICIS 2010 Proceedings, (2010), 1–22.
[25] Lee, J.-E.R., and C.I. Nass, “Trust in computers: The computers-are-social-actors (CASA) paradigm and trustworthiness perception in human-computer communication”, In Trust and Technology in a Ubiquitous Modern Environment: Theoretical and Methodological Perspectives. IGI Global, 2010, 1–15.
[26] Reeves, B., and C. Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People, Cambridge University Press, Cambridge, UK, 1996.
[27] Boon, S.D., and J.G. Holmes, “The dynamics of interpersonal trust: Resolving uncertainty in the face of risk”, In Cooperation and Prosocial Behavior. Cambridge University Press, 1991, 190–211.
[28] McKnight, D.H., L.L. Cummings, and N.L. Chervany, “Initial trust formation in new organizational relationships”, The Academy of Management Review 23(3), 1998, pp. 473–490.
[29] Siakas, K.V., D. Maoutsidis, and E. Siakas, “Trust facilitating good software outsourcing relationships”, In European Conference on Software Process Improvement. Springer, Berlin, Heidelberg, 2006, 171–182.
[30] Zhao, N., Y. Shi, Z. Xin, and J. Zhang, “The impact of traditionality/modernity on identification- and calculus-based trust”, International Journal of Psychology 54(2), 2019, pp. 237–246.
[31] Pinsonneault, A., and O. Caya, “Virtual teams: What we know, what we don’t know”, International Journal of e-Collaboration 1(3), 2005, pp. 1–16.
[32] Liu, Y.C., C. Lin, and Y.A. Huang, “How do virtual teams work - A social relationship model by SEM”, Proceedings of the International Conference on Electronic Business, (2007), 258–260.
[33] Huang, R., T. Carte, and L. Chidambaram, “Cohesion and performance in virtual teams: An empirical investigation”, Proceedings of the Tenth Americas Conference on Information Systems, (2004), 1283–1290.
[34] Ashleigh, M.J., M. Higgs, and V. Dulewicz, “A new propensity to trust scale and its relationship with individual well-being: Implications for HRM policies and practices”, Human Resource Management Journal 22(4), 2012, pp. 360–376.
[35] Seeger, A., J. Pfeiffer, and A. Heinzl, “Designing Anthropomorphic Conversational Agents: Development and Empirical Evaluation of a Design Framework”, Thirty Ninth International Conference on Information Systems, AIS (2018), 1–17.
[36] Riek, L.D., “Wizard of Oz studies in HRI: A systematic review and new reporting guidelines”, Journal of Human-Robot Interaction 1(1), 2012, pp. 119–136.
[37] Thompson, R.L., C.A. Higgins, and J.M. Howell, “Influence of experience on personal computer utilization: Testing a conceptual model”, Journal of Management Information Systems 11(1), 1994, pp. 167–187.
[38] Gulati, S.N., S.C. Sousa, and D. Lamas, “Design, development and evaluation of a human-computer trust scale”, Behaviour and Information Technology 38(10), 2019, pp. 1004–1015.
[39] Seashore, S.E., Group Cohesiveness in the Industrial Work Group, University of Michigan, Ann Arbor, Michigan, USA, 1954.
[40] Gefen, D., and D. Straub, “Managing user trust in B2C e-services”, e-Service Journal 2(2), 2003, pp. 7–24.
[41] Lyubomirsky, S., and L. Ross, “Changes in attractiveness of elected, rejected, and precluded alternatives: A comparison of happy and unhappy individuals”, Journal of Personality and Social Psychology 76(6), 1999, pp. 988–1007.
[42] Diener, E., R.A. Emmons, R.J. Larsen, and S. Griffin, “The satisfaction with life scale”, Journal of Personality Assessment 49(1), 1985, pp. 71–75.
[43] Cohen, J., Statistical Power Analysis for the Behavioral Sciences, Taylor and Francis, Hoboken, 1988.
[44] Feng, S., and P. Buxmann, “My virtual colleague: A state-of-the-art analysis of conversational agents for the workplace”, Proceedings of the 53rd Hawaii International Conference on System Sciences, (2020), 156–165.
[45] Mori, M., “The Uncanny Valley”, Energy 7(4), 1970, pp. 33–35.
[46] Söllner, M., A. Hoffmann, H. Hoffmann, A. Wacker, and J.M. Leimeister, “Understanding the formation of trust in IT artifacts”, 33rd International Conference on Information Systems, Association for Information Systems (2012).
[47] Rietz, T., I. Benke, and A. Maedche, “The impact of anthropomorphic and functional chatbot design features in enterprise collaboration systems on user acceptance”, 14th International Conference on Wirtschaftsinformatik, (2019), 1642–1656.
[48] Hess, T., M. Fuller, and D. Campbell, “Designing interfaces with social presence: Using vividness and extraversion to create social recommendation agents”, Journal of the Association for Information Systems 10(12), 2009, pp. 889–919.