This is the author's version of a work that was published in the following source:

Morana, S., Gnewuch, U., Jung, D., Granig, C. (2020). The Effect of Anthropomorphism on Investment Decision-Making with Robo-Advisor Chatbots, Proceedings of the European Conference on Information Systems (ECIS), Marrakech, Morocco.

Please note: Copyright is owned by the author and/or the publisher. Commercial use is not allowed.

Saarland University
Junior Professor for Digital Transformation and Information Systems
Campus C3 1
66123 Saarbrücken
https://www.uni-saarland.de/lehrstuhl/morana/

© 2020. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
THE EFFECT OF ANTHROPOMORPHISM ON INVESTMENT
DECISION-MAKING WITH ROBO-ADVISOR CHATBOTS
Research paper
Morana, Stefan, Saarland University, Saarbruecken, Germany,
stefan.morana@uni-saarland.de
Gnewuch, Ulrich, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany,
ulrich.gnewuch@kit.edu
Jung, Dominik, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany,
dominik.jung@kit.edu
Granig, Carsten, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany,
carstengranig@gmail.com
Abstract
Robo-advisors promise to offer professional financial advice to private households at low cost.
However, the automation of financial advisory processes to fully-digitalized services often goes hand in
hand with a loss of human contact and a lack of trust. Especially in an investment context, customers demand that a human be involved to ensure the trustworthiness of digitalized financial services. In this paper,
we report the findings of a lab experiment (N=183) investigating how different levels of
anthropomorphic design of a robo-advisor chatbot might compensate for the lack of human involvement
and positively affect users’ trusting beliefs and likeliness to follow its recommendations. We found that
the anthropomorphic design influences users’ perceived social presence of the chatbot. While trusting
beliefs in the chatbot enhance users’ likeliness to follow the recommendation by the chatbot, we do not
find evidence for a direct effect of social presence on likeliness to follow its recommendation. However,
social presence has a positive indirect effect on likeliness to follow the recommendation via trusting
beliefs. Our research contributes to the body of knowledge on the design of robo-advisors and provides
more insight into the factors that influence users’ investment decisions when interacting with a robo-
advisor chatbot.
Keywords
Robo-advisory; chatbot; anthropomorphism; experiment; social presence; trusting beliefs
1 Introduction
Robo-advisors are web-based systems that have received much attention in recent years because of their potential to offer professional financial advice to private households at low cost (Fisch et al. 2018; Jung et al. 2019; Jung, Dorner, Glaser, et al. 2018). They use different approaches to digitalize and automate wealth management (Ludden et al. 2015; Sironi 2016), build on state-of-the-art portfolio management algorithms, and are hence detached from emotionally biased decisions (Tertilt and Scholz 2017). However, automating the financial advisory process into a fully digitalized service while maintaining characteristics such as trust or service quality is a challenging task (Hodge et al. 2018; Jung, Dorner, Weinhardt, et al. 2018).
In a recent study by Jung et al. (2018), users requested to first have a personal conversation with a human
advisor before making an investment with the robo-advisor. Interestingly, their motivation was not to receive
more information, but primarily to convince themselves of the robo-advisor’s trustworthiness by double-
checking with a human advisor. Accordingly, it seems that personal contact with a human advisor plays an important role in users’ confidence-building and acceptance of robo-advisors’ investment advice. However,
it is not clear whether a human advisor is actually required or whether a more human-like or anthropomorphic
design of the robo-advisors might compensate for the lack of human contact and have positive effects on the
users’ trusting beliefs and acceptance of the robo-advisors’ investment recommendations.
While digitalization improves efficiency, it removes the social aspect of human-to-human interaction by
replacing it with human-to-computer interaction (HCI). However, research suggests that HCI is
fundamentally social because certain design features of a computer can trigger emotional, cognitive, and
behavioral reactions by users that are appropriate when directed at other humans, but inappropriate when
directed at computers (Nass et al. 1994; Nass and Moon 2000). These reactions can be explained by the
“Computers are Social Actors” (CASA) paradigm and social response theory, which posit that when computers exhibit human-like design features, humans form expectations about the interaction similar to those they hold for interactions with other humans (Nass and Moon 2000). For example, several studies have shown that social aspects
play an essential role in establishing trustworthiness and believability of advisory agents by increasing the
agents’ social presence (Nass et al. 1994; Sproull et al. 1996). In fact, several studies about recommendation
agents show that social presence enhances users’ trusting beliefs and ultimately usage intentions (e.g., Hess et
al. 2009; Lee and Choi 2017; Qiu and Benbasat 2009). Thus, increasing the social presence of robo-advisors
using an anthropomorphic design might positively affect users’ trusting beliefs and their likeliness to accept
recommendations by robo-advisors.
However, although the design of robo-advisory has been increasingly investigated in recent years (e.g., Ivanov et al. 2018; Kobets et al. 2018; Tertilt and Scholz 2018), little attention has been paid to the effect of anthropomorphizing the robo-advisor itself. Since robo-advisors are intended to replace human advisors in financial advisory, the question arises of how “human-like” a robo-advisor has to be in order to provide convincing investment recommendations. Moreover, it is unclear whether certain levels of anthropomorphism (i.e., degrees of human-likeness) affect users’ acceptance of the robo-advisor as a trustworthy financial advisor, and ultimately their following of its recommendation, more positively than other levels. In the current body of knowledge, the role of anthropomorphism in financial robo-advisory is not well understood, and only little evidence is available (e.g., Hodge et al. 2018). Conversational user interfaces and chatbots are becoming increasingly popular (Følstad
and Brandtzæg 2017). Due to their use of natural language, they promise to offer a more natural and human-
like interface to digitalized services. Consequently, one way to anthropomorphize robo-advisors is to use a
conversational user interface (CUI) (McTear 2017) and implement the robo-advisor as a text-based
conversational agent (i.e., a chatbot). Robo-advisor chatbots (RACs) have received little attention in research
(e.g. Day et al. 2018) and there is a need to better understand the outcomes of interacting with RACs on the
users’ trusting beliefs and likeliness to follow their investment recommendations. In our research, we
investigate whether different levels of anthropomorphism of the RAC affect users’ perception of social
presence, trusting beliefs, and ultimately their likeliness to follow the RAC’s advice. Thus, we address the
following research question:
How do different levels of anthropomorphic design of a robo-advisor chatbot influence the users’
perceived social presence, trusting beliefs, and their likeliness to follow its advice?
In this paper, we report the findings of a between-subjects laboratory experiment that investigated the effect
of anthropomorphized RACs in the context of investment decision-making as an important specific task in
financial decision-making. Drawing on social response theory (Nass and Moon 2000), we investigate the
effect of three different levels of anthropomorphism of a RAC on the users’ perceptions of the RAC and their
investment decision-making. We found a significant effect of a higher level of anthropomorphism on the
users’ perception of the RAC’s social presence. While trusting beliefs in the chatbot enhance users’ likeliness
to follow the recommendation by the chatbot, we do not find evidence for a direct effect of social presence
on likeliness to follow its recommendation. However, social presence has a positive indirect effect on
likeliness to follow the recommendation via trusting beliefs. Our research contributes to the body of
knowledge about the design of robo-advisory by emphasizing the importance of purposefully designing RACs and confirming the importance of users’ trusting beliefs for their investment decision-making.
2 Theoretical Background and Related Work
Robo-advisors are systems providing financial advice to investors based on algorithms that analyze financial
information without human intervention (Hodge et al. 2018). However, the interaction with a robo-advisor
is similar to the advisory process with a human financial advisor: The robo-advisor asks questions about the individuals’
financial risk-preferences, their long- and short-term investment goals, and subsequently gives an investment
recommendation that fits the users’ needs (Hodge et al. 2018; Jung, Dorner, Weinhardt, et al. 2018). Research
on the design of the interaction between users and robo-advisors is scarce, as academic interest in this technology has emerged only in recent years. Existing research focuses primarily on the
design of the robo-advisor process and the digitalization of the financial advisory process in general (Jung,
Dorner, Glaser, et al. 2018). Kobets et al. (2018) investigate the decision-making process of robo-advisor
users and provide a general framework for providing user-specific decision support with robo-advisors.
Tertilt and Scholz (2018) investigate the capabilities of robo-advisors to provide user-sensitive risk-
assessment. Furthermore, Ivanov et al. (2018) propose a guideline to implement risk-assessment in robo-
advisory based on user-specific input. So far, research has primarily focused on how the financial advisory
process can be digitized using robo-advisors. However, there is evidence that suggests that many investors
still prefer a human advisor (Jung, Dorner, Weinhardt, et al. 2018). Previous research in the context of HCI
has shown that increasing the anthropomorphism of a system may make up for a lack of human contact by
increasing the perceived social presence of the system (e.g., Araujo 2018; Hess et al. 2009; Qiu and Benbasat
2009). Thus, investigating the effect of anthropomorphizing a robo-advisor chatbot might provide an
interesting avenue for research and contribute to the knowledge base on the design of robo-advisory.
2.1 Anthropomorphism and Social Presence
Anthropomorphism is the attribution of “humanlike properties, characteristics, or mental states to real or
imagined nonhuman agents and objects” (Epley et al. 2007, p. 865). In essence, it represents a human
heuristic that helps to understand unknown agents by applying anthropocentric knowledge (Griffin and
Tversky 1992). Research has found that it can be applied to all nonhuman agents such as nonhuman animals,
technological devices, or mechanical devices, and that the extent to which individuals engage in
anthropomorphism depends on different factors (Epley et al. 2008). For example, children tend to
anthropomorphize more often than adults (Carey 1985). More importantly, the tendency to
anthropomorphize an object depends on the presence of characteristics expressing a degree of humanness
(Epley et al. 2007). Therefore, the more specific anthropomorphic features/cues are embedded in the design
of an information system (IS), the more likely human users are to anthropomorphize it (Pfeuffer et al. 2019).
Extant research shows the important role of anthropomorphic cues in influencing users’ perceptions of IS
and behaviors when interacting with an anthropomorphic IS. Moreover, research has shown that
anthropomorphic cues influence users’ perceptions of social presence (Araujo 2018; Go and Sundar 2019;
Kim and Sundar 2012). In the context of HCI, the concept of social presence broadly refers to the feeling of
warmth, sociability, and human contact during the interaction that can be created without actual human
contact (Gefen and Straub 2004). Social presence has been identified as a key factor in the design of IS in
online settings (Hess et al. 2009) and as an important driver of users’ trusting beliefs (Cyr et al. 2007; Hess
et al. 2009; Qiu and Benbasat 2009). For example, Lu, Fan, and Zhou (2016) showed that social presence
strongly influences consumers’ online shopping behavior. Moreover, other studies suggest that social
presence (e.g., induced by an avatar) can increase users’ satisfaction and trust in an online shopping context
(Etemad-Sajadi 2016; Holzwarth et al. 2006). While previous IS research has provided broad evidence of the
effect of anthropomorphic features/cues, there is a lack of understanding on how these features influence
users’ investment decisions. This is important because a higher level of human-likeness of the robo-advisor
might have a positive effect on the users’ trusting beliefs and the likeliness to follow the recommendation by
the robo-advisor. However, there is only little research available. Hodge, Mendoza, and Sinha (2018)
examine the effect of humanizing robo-advisors in contrast to human advisors in the context of investor
judgements. They conducted an experiment and manipulated the type of advisor (human vs. robo-advisor)
and the level of humanization (low vs. high). They found that investors are more likely to follow the
recommendation of a robo-advisor with a low level of humanization. In contrast, investors are more likely to
follow human advisors exhibiting a high level of human characteristics. Hence, there is more research
required to investigate the effect of anthropomorphizing a robo-advisor. Recently, chatbots have also been
introduced as financial advisors (Day et al. 2018) and using a CUI might provide new insights into how the
trusting beliefs towards a robo-advisor could be increased in order to increase users’ likeliness to follow the
robo-advisor’s investment recommendation.
2.2 Chatbots and Conversational Agents
In previous research, many terms have been used to describe IS that allow users to interact with them using
natural language (e.g., conversational agent, chatbot, or digital assistant) (Gnewuch et al. 2017; Maedche et
al. 2019). In IS research, the most commonly used term for this class of systems is “conversational agent”,
which includes embodied (Cassell et al. 2000; Derrick et al. 2011), voice-based (e.g., Amazon’s Alexa), and
text-based conversational agents (i.e., chatbots) (Araujo 2018; Dale 2016; Gnewuch et al. 2017). Since
chatbots are often found in messaging applications and on websites, users interact with them using text-based
natural language (Følstad and Brandtzæg 2017). The first chatbot, ELIZA, developed in the mid-1960s, was
able to simulate a human conversation based on pattern-matching algorithms (Weizenbaum 1966). Much
research has been conducted since then to improve existing algorithms for natural language processing and
create new architectures (Sarikaya 2017). While most studies focus on their technical aspects, understanding
conversational processes, and identifying elements that influence users’ social perceptions of chatbots seems
to be equally important (Følstad and Brandtzæg 2017; Jenkins et al. 2007). This is mainly because users
interact with chatbots via natural language (i.e., a uniquely human quality) (Seeger et al. 2017). Moreover,
chatbots are often implemented to fulfill the role of service employees (Larivière et al. 2017) and serve as
recommendation agents to support consumers in their decision-making when searching for products (Qiu
and Benbasat 2009).
In sum, we identified a potential need for a more human-like design of robo-advisory (Hodge et al. 2018; Jung, Dorner, Weinhardt, et al. 2018). Moreover, there is only little research on the effects of using a CUI to interact with a robo-advisor (Day et al. 2018). In line with existing research, we assume that using a CUI and anthropomorphizing RACs will increase perceived social presence, positively affect users’ trusting beliefs (Araujo 2018; Qiu and Benbasat 2009), and ultimately increase their likeliness to follow the recommendation by the RAC.
3 Research Model
To answer our research question and investigate the effects of anthropomorphizing RACs, we developed a
research model and four hypotheses. The following section derives the hypotheses on the proposed effect of anthropomorphizing a RAC on the users’ perceived social presence of the RAC (H1) and on the interplay of users’ perceived social presence, trusting beliefs, and likeliness to follow the RAC’s recommendation (H2-
H4). Thus, our research model (see Figure 1) comprises our treatment as the independent variable
Anthropomorphism and the dependent variables Perceived Social Presence, Trusting Beliefs, and Likeliness
to Follow Advice. Furthermore, we control for different factors potentially influencing the likeliness to follow
the recommendation by the RAC such as age, gender, disposition to trust in technology, and risk attitude.
Besides, we control for common usage characteristics like users’ experience with chatbots and experience
with robo-advisors.
Figure 1. Research Model (treatment: Level of Anthropomorphism with three conditions (1) LOW, (2) MED, (3) HIGH; H1 (+): Level of Anthropomorphism → Perceived Social Presence; H2 (+): Perceived Social Presence → Trusting Beliefs; H3 (+): Perceived Social Presence → Likeliness to Follow Advice; H4 (+): Trusting Beliefs → Likeliness to Follow Advice)
There is substantial evidence that suggests a linear relationship between the degree of anthropomorphism in
computers and users’ perceptions of social presence (Epley et al. 2007; Gong 2008; Nass and Moon 2000).
Nass and Moon (2000) state that “the more computers present characteristics that are associated with humans, the more likely they are to elicit social behavior” (Nass and Moon 2000, p. 97). While users to a
certain degree respond socially to computers even without humanoid traits (Klein et al. 2002), this aspect is
amplified by anthropomorphic cues such as more human-looking avatars (Nass et al. 1998).
Anthropomorphic cues, such as a human-like avatar, greetings or politeness, make it more difficult for users
to carefully elaborate the “ontological nature” of the RAC (Qiu and Benbasat 2009). This may lead them to
evaluate, judge, and respond to the anthropomorphized RAC by subconsciously applying the same social
rules they apply in everyday life (Nass et al. 1994, 1995; Qiu and Benbasat 2009). For example, the study of
Qiu and Benbasat (2009) found that anthropomorphic cues, such as avatars, positively influence perceived
social presence of a product recommendation agent. Therefore, users are expected to perceive the interaction
with the anthropomorphized RAC as more interpersonal, the more anthropomorphic cues are present during
the interaction. Hence, we assume that the different levels of anthropomorphizing the RAC will influence
users’ perceived social presence:
H1: An increased level of anthropomorphism of the robo-advisory chatbot is positively associated
with users’ perceived social presence of the robo-advisory chatbot.
Trusting beliefs consist of three dimensions: competence, benevolence, and integrity (McKnight et al. 2002). Prior research identified social presence as both an enabler and an antecedent of trust in the context of business-to-consumer e-services provided by a website or by (recommender) agents (e.g., Gefen and Straub 2003; McKnight et al. 2002; Qiu and Benbasat 2009). More specifically, trusting beliefs can be formed by users’ perceived quality of the information provided by the agent (Qiu and Benbasat 2009), such as the relevance of questions, the amount and scope of explanations, or how well recommendations conform to users’ preferences. In addition, agents with a higher level of social presence are perceived as more transparent by users, and thus, as more trustworthy (Hess et al. 2009). Especially during the first interactions with an agent, it is nearly impossible for users to accurately appraise the integrity and completeness of the information provided by the agent. This so-called product-related evaluation is especially difficult for users with limited product knowledge (Qiu and Benbasat 2009). Thus, one important source of trusting beliefs is so-called initial trust, which is formed upon “whatever information is available” (Meyerson et al. 1996; Qiu and Benbasat 2009). In face-to-face human interactions, trivial but important cues lead to the formation of perceived trustworthiness (Metzger et al. 2003). In the context of HCI, social presence (enabled through cues of the website or agent) is identified as such non-product-related information that enables and antecedes trust (Gefen and Straub 2003; Lee and Nass 2004). Additionally, linguistic cues widely employed by humans increase users’ perceived benevolence and credibility (Cassell and Bickmore 2000). Moreover, Hertzum et al. (2002) found agents with lively personifications, which lead to higher social presence, to be more convincing than agents with a less personified visual appearance. Summing up prior results about the effects of perceived social presence on users’ trusting beliefs leads to the following hypothesis:
H2: Users’ perceived social presence of the robo-advisory chatbot is positively associated with
users’ trusting beliefs towards the robo-advisory chatbot.
Robo-advisors are designed to help users in investment decision-making and require a trustworthy
atmosphere (Jung, Dorner, Weinhardt, et al. 2018). Consequently, if users do not trust the robo-advisory
agents, they are likely to reject their recommendations (Wang and Benbasat 2005). While the robo-advisory
process has a lot of overlap with the traditional financial advisory process, humans’ judgments about advisors
can substantially differ between the virtual and the human agent (Hodge et al. 2018). Although prior research
on HCI suggests that people perceive humans to be more likeable and social than computers, it remains
unclear if this also makes humans more persuasive, especially when it comes to investment decisions
(Burgoon et al. 2000; Fogg and Tseng 1999; Hodge et al. 2018; Nan et al. 2006). Many characteristics that
lead to an increased persuasiveness in human-human interaction can be extended to situations in which
humans interact with technology (Yoo and Gretzel 2011). Since social presence bridges the gap between
HCI and traditional human-human interaction (Nass and Moon 2000), it might mediate the effect between
the RAC design (i.e. the anthropomorphic cues implemented in the RAC) and users’ likeliness to follow the
advice from the RAC. Additionally, research has shown that the implementation of social aspects that elicit
social responses from users leads to a higher persuasiveness of the respective technology (Al-Natour et al.
2006; Wang and Benbasat 2005). Hence, we propose that an increased level of perceived social presence of the RAC will result in a higher probability that users follow the recommendation by the RAC:
H3: Users that perceive higher levels of social presence are more likely to follow the
recommendation by the robo-advisory chatbot.
When individuals are about to determine whether to follow a recommendation from someone else or
not, the individuals’ perception of the other person’s competence and trustworthiness is of critical
importance (van Doorn et al. 2017; Friestad and Wright 1994; Hodge et al. 2018; O’Keefe 2015).
Trusting beliefs have been found to be effective mediators that influence trusting intentions (Lee and
Turban 2001; McKnight et al. 2002; McKnight and Chervany 2001). While trusting beliefs describe users’ general impressions about the agent’s trustworthiness, trusting intentions are “intentions to engage in trust-related behaviors” (McKnight et al. 2002, p. 335). They are therefore related to actual actions,
such as following a recommendation or not. Several other studies also found that perceived competence
and trustworthiness are important dimensions when determining whether to follow a recommendation
or not (van Doorn et al. 2017; Friestad and Wright 1994; O’Keefe 2015). These dimensions play a key role in mediating the relationship between the level of anthropomorphism and the likeliness of following the advisor’s recommendation (Hodge et al. 2018). Thus, summarizing the
findings in literature, we propose the following hypothesis:
H4: Users that exhibit higher levels of trusting beliefs are more likely to follow the recommendation
by the robo-advisory chatbot.
4 Method
To assess the proposed hypotheses and investigate the effects of anthropomorphizing a robo-advisor in the
form of a chatbot, we conducted a controlled laboratory experiment. In line with existing research (Go and
Sundar 2019; Gong 2008; Wünderlich and Paluch 2017), we decided to implement different levels of
anthropomorphism in our artifact: low, medium, and high. In the experiment, the participants were asked to
interact with a RAC, receive a recommendation for an investment, and decide on an investment in our
fictitious scenario (i.e., follow the advice by the RAC or not). After this interaction, the participants filled out
a questionnaire about their perceptions of the RAC and their interaction with it.
4.1 Participants
Our study was conducted in the behavioral research lab of a German university, and we recruited the
participants from a pool of university students. We decided to involve students in our study because we are
interested in participants with only little prior knowledge of financial investments and usage of financial
decision aids (such as robo-advisors). Moreover, research indicates that students are either already using
conversational agents (such as chatbots) or are open towards using them (Flanagin 2005; Jain et al. 2018).
Thus, students seem to be an adequate sample for our study. The required sample size was determined a
priori with G*Power (Faul et al. 2007). We assumed an effect size of f = 0.25, alpha = .05, and power = 0.8 for
our study and the power analysis indicated a minimum required sample size of 159 participants for the three
treatment groups. Because of potential technical issues or individual mistakes by the participants during the
experiment, we invited 200 participants using hroot (Bock et al. 2014). Overall 195 participants (91 females,
104 males, mean age=23.734, SD=4.319) took part in our study. They received a performance-based
payment ranging from 2.00 to 4.00 Euro based on their investment and an additional show up fee of 3.00
Euro for participating in the experiment. The mean payoff was 5.686 Euro (SD=0.822) and the whole
experiment took approximately 25 minutes (mean=21.435, SD=4.636 minutes). We invited no additional
participants after initial data analysis.
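For transparency, the a priori power analysis can be approximated in code. The following sketch uses Python's statsmodels instead of G*Power and assumes that the stated effect size corresponds to Cohen's f = 0.25 for a one-way ANOVA with three groups; it illustrates the calculation and is not the original analysis.

```python
# Approximate the a priori sample size calculation (the study used G*Power),
# assuming Cohen's f = 0.25, alpha = .05, power = .80, and three groups.
from statsmodels.stats.power import FTestAnovaPower

total_n = FTestAnovaPower().solve_power(
    effect_size=0.25, alpha=0.05, power=0.80, k_groups=3
)
print(round(total_n))  # about 158, close to the minimum of 159 reported via G*Power
```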
After the experiment and before the data analysis, we cleaned the dataset as outlined in the following. First, by analyzing the conversation logs, we removed all participants who made no investment decision, failed to provide all information during the interaction with the RAC, or did not follow the experimental task (8 participants). Second, we removed all participants who failed both attention check questions (4 participants). The final dataset includes 183 participants (89 females, 94 males, mean age=23.765, SD=4.366), who are nearly evenly distributed among the three experimental treatment groups. We found no significant
differences between the three groups for gender, age, and experience with chatbots. Moreover, except for
five participants, none had experience with financial robo-advisory.
4.2 Treatment Structure
Our laboratory experiment applied a between-subjects design. We are interested in whether a varying degree of anthropomorphizing the RAC influences the users’ investment behavior and perception of the RAC.
Following existing studies (Go and Sundar 2019; Gong 2008; Seeger et al. 2018; Wünderlich and Paluch
2017), we decided to assess three levels of anthropomorphizing the RAC (LOW, MED, and HIGH). To
implement these three levels, we decided to vary certain social cues (Feine et al. 2019) of the RAC in order
to be more or less human-like. Table 1 depicts the three levels of anthropomorphism used in our experiment, the respective social cue design decisions, and studies investigating each social cue.
Social Cue | LOW | MED | HIGH | References
Name | <none> | “Robo-Advisor” | “Charles” | (Hodge et al. 2018)
Avatar | adapted avatar image | adapted avatar image | adapted avatar image | (Wünderlich and Paluch 2017)
Response time | Instant | Instant | Dynamic | (Appel et al. 2012; Gnewuch et al. 2018a)
Typing indicator | no | no | yes | (Gnewuch et al. 2018b)
Greeting & farewell | no | yes | yes | (Leite et al. 2013; Sabelli et al. 2011)
Self-reference | no | yes | yes | (Nass et al. 1994)
Civility / thanking | no | yes | yes | (Fogg and Nass 1997)
Remember user’s name | no | yes | yes | (Richards and Bransky 2014)
Table 1. Experiment Treatment - Chatbot Anthropomorphic Design
The three RACs in our study varied by the name they displayed during the interaction with the user. In
the LOW group, we provided no name, in the MED and HIGH groups we displayed a name for the
RAC. In the MED group, we chose the name “Robo-Advisor” and following Hodge et al. (2018), we
named the RAC of the HIGH group “Charles”. We adapted three avatar pictures investigated by
Wünderlich and Paluch (2017) for our three levels of anthropomorphism. In order to simulate a more human-like messaging experience, we dynamically delayed the response time of the HIGH group’s RAC depending on the message sent (Appel et al. 2012). In the LOW and MED groups, we decided not to delay the RACs’ response time. Moreover, the RAC in the HIGH group provides a graphical indication that the message is being prepared (i.e., during the dynamic delay), while LOW and MED provided no such graphical indication
(Gnewuch et al. 2018b). In addition to these social cues addressing the visual appearance and
chronometric behavior, we also decided to vary the way the RAC is texting. In the MED and the HIGH
group the RACs greet the participants and provide a farewell (Leite et al. 2013; Sabelli et al. 2011) to
simulate human-human communication. In contrast, the RAC of the LOW group does not provide a
greeting or farewell message. Similarly, the RACs of the MED and HIGH groups acknowledge input
by the participant with a “thank you” (Derrick and Ligon 2014; Fogg and Nass 1997). In addition, the
RACs in the MED and HIGH groups refer to themselves in the conversation (Nass et al. 1994) by using
personal pronouns such as “I” and “we”. In comparison, the RAC in the LOW group avoided this by
using passive formulations. Moreover, the RACs in the MED and HIGH groups remembered the
participant’s name during the interaction and utilized the name when addressing the participant, e.g. for
requesting an action from the participant (Richards and Bransky 2014). Summed up, we have three
distinct RAC designs that vary in their degree of anthropomorphism based on the dedicated
implementation of various social cues.
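To illustrate how such a dynamically delayed, more human-like response behavior could be realized, the sketch below computes a response delay that grows with the length of the chatbot's reply and shows a typing indicator during the delay. This is a minimal illustration written by us; the function names, the delay formula, and the constants are assumptions and not taken from the experimental software or the cited studies.

```python
import time

# Illustrative only: delay the RAC's reply roughly in proportion to the length
# of the message, as a human advisor would need time to type it.
WORDS_PER_SECOND = 3.0    # assumed "typing speed" of the chatbot
MAX_DELAY_SECONDS = 5.0   # cap so that long replies do not stall the dialog

def dynamic_delay(reply: str) -> float:
    """Return the number of seconds to wait before sending the reply."""
    return min(len(reply.split()) / WORDS_PER_SECOND, MAX_DELAY_SECONDS)

def send_reply(reply: str, condition: str) -> None:
    """Send a reply; only the HIGH condition shows a typing indicator and delays."""
    if condition == "HIGH":
        print("Charles is typing ...")    # typing indicator (HIGH only)
        time.sleep(dynamic_delay(reply))  # dynamic response delay (HIGH only)
    print(reply)                          # LOW and MED respond instantly

send_reply("Thank you! Based on your answers, I recommend company B.", "HIGH")
```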
4.3 Procedure and Scenario
Our experiment consisted of four steps. Upon arrival in our laboratory, the participants signed the consent
forms and were seated randomly in the experimental cabins. The entire experiment was performed using the
web-based questionnaire tool LimeSurvey. The experiment started with a general introduction text about the experiment, followed by the description of the scenario. Moreover, we randomly assigned each participant to one of the three experimental treatment groups.
After completing the introductory part, one of the three RACs (depending on the participant’s group assignment) was included in LimeSurvey and the participants started their interaction with the RAC. The RAC created the user profile by asking questions regarding the participant’s name, age, and gender, and elicited their individual risk level using the multiple price list method (Holt and Laury 2005). Subsequently, the actual investment decision task started. In our experiment, we simulated an online investment
decision based on the interaction with a RAC that provides an explanation of the investment options (three
fictive companies) and recommends one company. After a short introduction from the RAC, the participants are presented with fact sheets on three firms in the semiconductor industry. Adapted from Hodge et al. (2018), these contain financial information for the most recent quarter, a dividend summary, and an industry outlook. The key financial figures for each firm are mere multiples of the other firms’ figures, the outlook is the same for all three companies, and none of the companies pays dividends. This makes it impossible for the participant to identify a clear favorite investment option, since no firm appears objectively better than the others (Hodge et al. 2018). Furthermore, we randomized the presentation sequence of the fact sheets to avoid order effects. Participants had to actively confirm that they had thoroughly read the company fact sheets. At the end of this first interaction with the RAC, the participants were asked to invest 3000 MU in one of the three fictive companies. All money-related values in the experiment were given in “Monetary Units (MU)”, with 100 MU equal to 10 Eurocents.
Next, the participants completed the post-experimental questionnaire with the measures outlined in the next
section. After completing the questionnaire, the participants once more interacted with the RAC and the result
of their investment was presented. The actual performance of their investment was randomly selected from four outcomes (-1000 MU, -500 MU, +500 MU, +1000 MU).
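As a small illustration of the payoff mechanics described above, the following sketch converts an investment outcome in MU into the Euro payment; the constants and function name are our own and were not part of the experimental software.

```python
import random

MU_TO_EURO = 0.10 / 100                  # 100 MU correspond to 10 Eurocents
SHOW_UP_FEE = 3.00                       # fixed participation fee in Euro
INVESTMENT_MU = 3000                     # amount each participant invests
OUTCOMES_MU = [-1000, -500, 500, 1000]   # randomly selected performance

def payoff_in_euro(outcome_mu: int) -> float:
    """Performance-based payment plus show-up fee, in Euro."""
    final_mu = INVESTMENT_MU + outcome_mu      # between 2000 and 4000 MU
    return round(final_mu * MU_TO_EURO + SHOW_UP_FEE, 2)

# Performance-based part ranges from 2.00 to 4.00 Euro, i.e. 5.00 to 7.00 Euro in total.
print(payoff_in_euro(random.choice(OUTCOMES_MU)))
```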
4.4 Measures
All measures used in the questionnaire were adopted from existing research. Table 2 summarizes the
measures, items, sources, descriptive values as well as the factor loadings for each item. To account for
alternative explanations of the investigated effects, we additionally included the following control variables:
age, gender, prior experiences with chatbots, disposition to trust in technology, prior experiences with robo-
advisors, and individual risk level (Holt and Laury 2005; McKnight et al. 2011). The patterns of results
remained qualitatively unchanged when including these control variables in our models except for gender
and disposition to trust in technology. Accordingly, we will omit all controls except for these two when
presenting our results in the subsequent sections. Moreover, 178 out of 183 participants reported no prior
experience with robo-advisory, thus, we did not include this factor in our analysis.
Measures and items | Item code | Mean | SD | Loadings
Anthropomorphism (α=.793; M=2.413; SD=1.320) (Epley et al. 2007)
The robo-advisory chatbot has intentions. ANP1 4.093 1.845 .494 (dropped)
The robo-advisory chatbot can experience emotion. ANP2 2.055 1.421 .567 (dropped)
The robo-advisory chatbot has a free will. ANP3 1.907 1.312 .789
The robo-advisory chatbot has consciousness. ANP4 2.383 1.599 .815
The robo-advisory chatbot has a mind of its own. ANP5 2.951 1.764 .740
Social Presence (α=.864; M=3.443; SD=1.398) (Gefen and Straub 2003; Qiu and Benbasat 2009)
There is a sense of human contact in the robo-advisory chatbot. SP1 4.410 1.597 0.794
There is a sense of personalness in the robo-advisory chatbot. SP2 2.754 1.569 0.429 (dropped)
There is a sense of sociability in the robo-advisory chatbot. SP3 3.437 1.682 0.796
There is a sense of human warmth in the robo-advisory chatbot. SP4 2.82 1.659 0.744
There is a sense of human sensitivity in the robo-advisory chatbot. SP5 3.104 1.698 0.728
Trusting Beliefs (α=.914; M=4.949; SD=1.029) (McKnight et al. 2002; Qiu and Benbasat 2009; Wang and Benbasat 2005)
I believe that the robo-advisory chatbot would act in my best interest. TB1 4.874 1.537 0.797
If I required help, the robo-advisory chatbot would do its best to help me. TB2 5.033 1.410 0.726
The robo-advisory chatbot is interested in my well-being, not just its own. TB3 4.503 1.596 0.693
The robo-advisory chatbot is truthful in its dealings with me. TI1 4.847 1.366 0.822
I would characterize the robo-advisory chatbot as honest. TI2 4.902 1.494 0.806
The robo-advisory chatbot would keep its commitments. TI3 5.814 1.063 0.623
The robo-advisory chatbot is sincere and genuine. TI4 4.546 1.341 0.725
The robo-advisory chatbot is competent and effective in providing financial advice. TC1 5.104 1.397 0.707
The robo-advisory chatbot performs its role of giving financial advice very well. TC2 5.055 1.386 0.731
Overall, the robo-advisory chatbot is a capable and proficient financial advice provider. TC3 4.809 1.302 0.708
In general, the robo-advisory chatbot is very knowledgeable about finance. TC4 5.098 1.322 0.599
Disposition to Trust in Technology (α=.868; M=4.499; SD=1.370) (McKnight et al. 2011)
My typical approach is to trust new IT until they prove to me that I shouldn’t trust them. TIT1 4.634 1.552 0.918
I usually trust in information technology until it gives me a reason not to. TIT2 4.65 1.554 0.918
I generally give an information technology the benefit of the doubt when I first use it. TIT3 4.213 1.513 0.782
α = Cronbach’s alpha | M = Mean | SD = standard deviation
Table 2. Measures used in the post-experimental questionnaire
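The scale reliabilities (Cronbach's α) reported in Table 2 can be computed directly from the item responses. The sketch below is a minimal Python illustration, assuming a data frame whose columns hold the retained items of one construct; it is not the script used in the original analysis.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a multi-item scale (one column per item)."""
    k = items.shape[1]                                # number of items
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_variances / total_variance)
```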
To test whether our manipulation of the RAC with respect to the three levels of anthropomorphism was successful, we assessed users’ perceived anthropomorphism of the RAC (Epley et al. 2007). A one-way ANOVA revealed a significant effect of the treatment condition on perceived anthropomorphism (F(2,180)=3.579, p=.030). The results of a Tukey HSD post-hoc comparison showed that there was a significant difference between the LOW condition (M=2.44, SD=1.05) and the HIGH condition (M=3.00, SD=1.28). However, the differences between the MED condition (M=2.73, SD=1.28) and both the LOW and HIGH conditions were not significant. Despite these non-significant differences, we did not exclude the MED condition because its mean fell in the middle of the range between LOW and HIGH, and we believe that with a larger sample one should also be able to observe significant differences for the MED condition.
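Such a manipulation check can be reproduced with standard statistics libraries. The sketch below is written in Python (the original analyses were run in Stata) and assumes a data frame with one row per participant containing the treatment condition and the perceived anthropomorphism score; the column names are illustrative.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def manipulation_check(df: pd.DataFrame) -> None:
    """One-way ANOVA and Tukey HSD post-hoc test on perceived anthropomorphism."""
    groups = [g["perceived_anthropomorphism"].values
              for _, g in df.groupby("condition")]
    f_stat, p_value = stats.f_oneway(*groups)        # omnibus F-test
    print(f"F={f_stat:.3f}, p={p_value:.3f}")
    tukey = pairwise_tukeyhsd(df["perceived_anthropomorphism"],
                              df["condition"], alpha=0.05)
    print(tukey.summary())                           # pairwise group comparisons
```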
5 Experiment Results
To test our proposed hypotheses, we conducted the following two steps. First, we conducted a one-way
ANCOVA to test the effect of the three levels of anthropomorphism on the users’ perceived social presence
of the RAC (H1). Second, we estimated three regression models to examine the relationships between
perceived social presence, trusting beliefs, and users’ likeliness to follow the recommendation by the RAC
(H2, H3, H4). All regression analyses were performed in Stata version 13.0. Table 3 summarizes the
descriptive results and Table 4 displays the results of the regression analyses.
Group | N | Gender 1 | Chatbot experience 2 | Perceived anthropomorphism 3 | Perceived social presence 3 | Trusting beliefs 3 | Disposition to trust technology 3 | Individual risk level 4 | Followed RAC recommendation
LOW | 62 | 30/32 | 29/33 | 2.442 (1.047) | 2.806 (1.186) | 4.811 (1.058) | 4.505 (1.291) | 5.323 (2.125) | 83.87 %
MED | 61 | 29/32 | 31/30 | 2.731 (1.103) | 3.459 (1.387) | 5.125 (1.002) | 4.694 (1.341) | 5.180 (1.866) | 85.25 %
HIGH | 60 | 30/30 | 22/38 | 3.000 (1.277) | 4.083 (1.333) | 4.953 (0.985) | 4.294 (1.468) | 6.217 (2.043) | 81.67 %
1 female/male users | 2 novice/experienced chatbot users | 3 measured on a 7-point Likert scale | 4 measured on a scale from 1 to 10
Table 3. Descriptive Results
First, consistent with H1, the results of a one-way ANCOVA confirmed a significant effect of the levels of anthropomorphism on perceived social presence (F(2,178)=15.37, p<.001). Thereby, we controlled for gender and users’ disposition to trust in technology. Post-hoc Tukey HSD comparisons revealed significant differences between the LOW condition (M=2.81, SD=1.19) and the MED condition (M=3.46, SD=1.39) as well as between the LOW condition and the HIGH condition (M=4.08, SD=1.33). Further, there was a significant difference between the MED and the HIGH condition. Additionally, gender and disposition to trust in technology were significant control variables. More specifically, we found that females (M=3.70, SD=1.36) reported significantly higher levels of perceived social presence than males (M=3.20, SD=1.40). Taken together, these results confirm the proposed positive effect of anthropomorphizing a RAC on users’ perceptions of the RAC’s social presence, thus supporting H1.
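The ANCOVA for H1 can be expressed as a linear model with the treatment as a categorical factor and gender and disposition to trust in technology as covariates. The following Python/statsmodels sketch mirrors this specification (the original analysis was run in Stata); the column names are assumptions.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def ancova_h1(df: pd.DataFrame) -> pd.DataFrame:
    """ANCOVA: perceived social presence by treatment, controlling for covariates."""
    # assumed columns: social_presence, condition (LOW/MED/HIGH),
    # female (0/1), trust_in_technology
    model = smf.ols(
        "social_presence ~ C(condition) + female + trust_in_technology",
        data=df,
    ).fit()
    return sm.stats.anova_lm(model, typ=2)  # Type II ANOVA table with F-tests
```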
To test H2, we ran a linear regression (regression I) to assess the effect of perceived social presence on users’
trusting beliefs in the RAC. The results indicate that social presence significantly influences users’ trusting
beliefs (β=0.217, p<.001). Furthermore, we found that female users (β=0.324, p=.031) exhibit a higher level
of trusting beliefs and that users’ disposition to trust in technology (β=0.147, p=.012) significantly influences
their trusting beliefs. Therefore, these results confirm the proposed positive effect of social presence on
trusting beliefs, thus supporting H2.
Finally, we assessed the effect of social presence and trusting beliefs on the users’ likeliness to follow the
recommendation given by the RAC (H3 and H4). First, we conducted a binary logistic regression (regression
II) to test the effect of perceived social presence on users’ likeliness to follow the recommendation. The
results indicate that there is no significant effect of social presence on likeliness to follow (β=-0.171, p=.256),
thus rejecting H3. Second, to test H4, we estimated another binary logistic regression (regression III)
assessing the effect of trusting beliefs and social presence on the likeliness to follow the recommendation by
the RAC. Consistent with H4, the analysis reveals that trusting beliefs significantly influences likeliness to
follow the recommendation by the RAC (β=0.601, p=.003, OR=1.824 [95% CI: 1.219, 2.729]). Moreover,
when including trusting beliefs, the effect of social presence on likeliness to follow approached significance
(β=-0.313, p=.052, OR=0.731 [95% CI: 0.533, 1.003]).
Dependent variable | (I) Trusting beliefs | (II) Likeliness to follow | (III) Likeliness to follow
Intercept | 3.382*** (0.296) | 1.199 (0.905) | -0.712 (1.153)
Factors
Social presence | 0.217*** (0.0607) | -0.171 (0.150) | -0.313+ (0.161)
Trusting beliefs | | | 0.601** (0.206)
Controls
Gender: female | 0.324* (0.149) | 0.0976 (0.411) | -0.113 (0.412)
Disposition to trust in technology | 0.147* (0.0580) | 0.225 (0.163) | 0.143 (0.167)
F / χ² | F(3, 179)=12.222*** | χ²(3)=2.81 | χ²(5)=12.13*
Observations | 183 | 183 | 183
R² / Pseudo R² | R² = 0.170 | Pseudo R² = .020 | Pseudo R² = .066
Note: + p < 0.1; * p < 0.05; ** p < 0.01; *** p < 0.001 | Robust standard errors in parentheses
Table 4. Regression analyses results
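Regressions I to III in Table 4 correspond to one linear model and two binary logistic models. The following Python/statsmodels sketch mirrors that specification (the original analyses were run in Stata 13 with robust standard errors); the column names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

def table4_models(df: pd.DataFrame):
    """Assumed columns: trusting_beliefs, social_presence, followed (0/1),
    female (0/1), trust_in_technology."""
    # (I) linear regression: trusting beliefs on social presence plus controls
    m1 = smf.ols("trusting_beliefs ~ social_presence + female + trust_in_technology",
                 data=df).fit(cov_type="HC1")   # heteroskedasticity-robust SEs
    # (II) logistic regression: likeliness to follow on social presence plus controls
    m2 = smf.logit("followed ~ social_presence + female + trust_in_technology",
                   data=df).fit()
    # (III) logistic regression adding trusting beliefs as a predictor
    m3 = smf.logit("followed ~ social_presence + trusting_beliefs + female + trust_in_technology",
                   data=df).fit()
    return m1, m2, m3
```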
Given the results as described above, we conducted a mediation analysis with 10,000 bootstrap samples
(Hayes 2018, Model 4) to test whether trusting beliefs mediate the relationship between perceived social
presence and likeliness to follow the recommendation by the RAC. We defined perceived social presence as
the independent variable, trusting beliefs as the mediator, and likeliness to follow the recommendation as the
dependent variable. The indirect effect of social presence on likeliness to follow the recommendation was
statistically significant (β=0.158, SE=0.062, [95% CI: 0.054, 0.2970]), indicating that trusting beliefs
mediates the relationship between social presence and likeliness to follow the recommendation by the RAC.
The direct effect of social presence on likeliness to follow the recommendation was not significant (β=-0.310,
p=.051). The effects of social presence on trusting beliefs (β=0.252, p<.001) and trusting beliefs on likeliness
to follow the recommendation by the RAC (β=0.627, p=.004) were still significant. Therefore, these results
show that social presence has a positive indirect effect on likeliness to follow the recommendation via trusting
beliefs.
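The indirect effect was estimated with 10,000 bootstrap samples following Hayes (2018, Model 4). The sketch below illustrates the underlying idea by bootstrapping the product of the path from social presence to trusting beliefs (OLS) and the path from trusting beliefs to following the recommendation (logistic regression, controlling for social presence); the column names are assumptions, further controls are omitted, and this is not the PROCESS implementation itself.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def bootstrap_indirect_effect(df: pd.DataFrame, n_boot: int = 10000, seed: int = 1):
    """95% bootstrap CI for the indirect effect a*b (social presence -> trusting
    beliefs -> followed); assumed columns: social_presence, trusting_beliefs, followed."""
    rng = np.random.default_rng(seed)
    indirect = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(df), size=len(df))   # resample with replacement
        sample = df.iloc[idx]
        a = smf.ols("trusting_beliefs ~ social_presence",
                    data=sample).fit().params["social_presence"]
        b = smf.logit("followed ~ trusting_beliefs + social_presence",
                      data=sample).fit(disp=0).params["trusting_beliefs"]
        indirect[i] = a * b
    return np.percentile(indirect, [2.5, 97.5])
```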
6 Discussion
Our study investigates how anthropomorphism in the design of a RAC influences users’ perceived social presence, trusting beliefs, and ultimately, their likeliness to follow the recommendation by the RAC. We found support for hypotheses H1 and H2, confirming the proposed effect of an increased level of anthropomorphism on the participants’ perceived social presence and of perceived social presence on their trusting beliefs. Our findings are in line
with existing research (e.g., Araujo 2018; Go and Sundar 2019; Hess et al. 2009; Qiu and Benbasat 2009)
and add to the body of knowledge on how to design chatbots yielding a high level of social presence and
trust for certain contexts (in our case: investment robo-advisory). In our experiment, the participants were
asked to decide between three fictive investments and the RAC recommended one investment. There was no
significant difference in the likeliness to follow the recommendation by the RAC between the three
experimental groups (χ²(2)= 0.288, p=.866). Perceived social presence did not directly impact users’
likeliness to follow the recommendation (H3), but there was a positive indirect effect on likeliness to follow
the recommendation via trusting beliefs. Moreover, our results suggest that users’ trusting beliefs had a
significant impact on their likeliness to follow the recommendation by the RAC (H4).
6.1 Theoretical Contributions
Our first contribution is to show that the RAC’s anthropomorphic design not only influences users’
perceptions (i.e., social presence, trust) but also their behavior (i.e., whether or not they follow the
recommendation by the RAC). Therefore, when offering robo-advisory in the form of a chatbot, one important aspect to be considered is to design the RAC in such a way that users trust and follow the investment advice provided by the RAC (Hodge et al. 2018; Jung, Dorner, Weinhardt, et al. 2018).
Our second contribution is to highlight the mixed effects of gender on social presence and trusting beliefs.
Female participants exhibited a higher level of both measures. This is in line with recent research highlighting the importance of considering users’ gender in order to achieve the intended level of trust (Beldad et al. 2016)
and credibility (Nowak and Rauh 2005). Besides, the increased level of perceived social presence for female
participants is consistent with findings from Thayalan et al. (2012) who found females to have a higher
perception of social presence in an e-learning environment.
Our third contribution is to provide a more nuanced understanding of how anthropomorphism affects users’
likeliness to follow the RAC’s advice. The lack of a direct effect of anthropomorphism on the participants’ likeliness to follow, combined with the significant effect of their trusting beliefs in the chatbot, can be explained with the CASA paradigm and social response theory (Nass and Moon 2000). During the interaction with the RAC, the participants experience various social cues instantiated in the RAC, and these cues trigger mindless behavior by the participants, resulting in a behavioral response based on their prior experiences and expectations (Nass and Moon 2000). The different implementations of the social cues influence the participants’ perceived social presence of the RAC, which subsequently also has a positive effect on their trusting beliefs towards the RAC. While trusting beliefs are a significant predictor of users’ investment decision-making (i.e., following the recommendation by the RAC or not), we find no direct effect of perceived social presence on likeliness to follow the recommendation by the RAC, only an indirect effect via trusting beliefs. This finding could indicate a potential Uncanny Valley
effect (Mori et al. 2012) in our study. Our results suggest that simply increasing social presence to the
maximum is not reasonable in this context. On the contrary, designers need to consider to what extent social
presence should be increased in order to achieve a high level of trusting beliefs. In addition, it is important to
investigate additional factors that influence users’ trusting beliefs so that the design of, as well as the interaction with, RACs can be further enhanced to increase the likeliness that users accept investment recommendations by RACs.
6.2 Limitations and Future Research
There are several limitations to this study that should be considered when assessing our findings. In the
following, we discuss these limitations and outline avenues for future research. First, our experimental design
considered only three levels of anthropomorphism in the design of the RAC, which were manifested through
the design of eight social cues. While there is empirical evidence for each of the implemented social cues to
affect users’ perceived anthropomorphism (see Table 1), the structure of each level was not derived from an
established framework. Therefore, we assumed that the combination of these cues represents an adequate
manifestation of three distinct levels of anthropomorphic design. However, as the results of our manipulation check suggest, it is not clear to what extent individual social cues, or combinations of them, affect users’ perceived anthropomorphism. Therefore, it is a limitation of our study that we did not find significant differences in perceived anthropomorphism between the medium and the low/high levels of anthropomorphic design of the RAC. Moreover, the construct anthropomorphism seems to measure aspects that go beyond what we considered in the RAC’s design, such as the intelligence and free will of the RAC. Therefore, other measures, such as perceived humanness (Holtgraves et al. 2007), could be more appropriate in our context and could be examined in future research. Moreover, our study lacks a baseline in terms of
social presence and anthropomorphism to assess the effect of using a CUI for robo-advisory. While the
conversational character of the interaction itself can be understood as an anthropomorphic social cue (McTear
2002; Nass et al. 1994), we had no experimental treatment group with an absolutely non-anthropomorphic
design (i.e., no anthropomorphic cues implemented). Commercial robo-advisors are implemented only in the form of GUIs; thus, a treatment group that passes through the same advisory process using only a GUI without a chatbot could have revealed other or stronger differences between treatments. However, our study
focuses on the interaction of users with robo-advisory using a CUI. We took this decision because we intended to utilize the proposed positive effects of using natural language for the interaction on users’
perceived social presence and trusting beliefs following the CASA paradigm and social response theory
(Nass and Moon 2000). Future research could, for example, investigate the effects of combining a CUI and
GUI for providing robo-advisory.
Second, although our experimental task was based on a realistic robo-advisory process, it had a considerably
lower complexity. The investment decision (i.e., choosing from three options and deciding to follow the
recommendation by the RAC or not) was rather simple in contrast to more complex financial decision-
making in reality. Participants’ expectation to gain more monetary payout has been found to be an appropriate
motivation in an experimental context, but more research is required to understand the effect of a CUI (e.g.
in the form of a text-based chatbot or a voice-based conversational agent) in a real-life financial context. In
our experiment, there were no costs associated with consulting the RAC, the decision to invest in one of the
companies, and ultimately (not) following the recommendation by the RAC. Therefore, we observed a relatively high likeliness to follow the RAC’s advice (82% – 85%) among our three experimental groups. In
contrast, existing robo-advisor services require users to pay about 0.5% – 2% of their investment sum per
year. Moreover, because of the relatively low complexity of the experimental task, the interaction time with
the RAC was rather short (about 5 minutes on average). Therefore, we examine only participants’ first
impressions of the RAC. Future research could increase the complexity and realism by adding several
iterations in the experimental process to include the rebalancing phase (Jung, Dorner, Weinhardt, et al. 2018).
Several iterations would also better reflect long-term usage, which has not been investigated in this study,
but could potentially lead to different results and/or to a richer explanation of the perceptions and usage of
RACs.
Third, our robo-advisory process included a risk assessment based on the multiple price list method (Holt and
Laury 2005) and we found no significant effect of the participants’ risk-level in our analysis. Although we
assumed an equal distribution of the participants’ risk level between our three groups, we identified
significant differences between the three treatment groups in a subsequent ANOVA test (F(2, 180)=4.698,
p=.010). The TukeyHSD post-hoc analysis reveals that the HIGH (mean=6.217, SD=2.043) group has a
significantly higher risk level than the LOW (mean=5.323, SD=2.125) and MED (mean=5.180, SD=1.866)
groups. Furthermore, the risk-assessments implication on the investment was not clearly stated during the
experimental process, since the assessment only served to assess the participants’ risk level and to increase
participants’ immersion in the investment advisory scenario. Because we lack a baseline risk level (i.e. risk
level assessment before the interaction with the RAC), a further investigation of this interesting finding is
subject to future research.
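To illustrate how such a post-hoc check can be reproduced, the following Python sketch runs a one-way ANOVA and a Tukey HSD comparison on simulated risk levels for three groups of 61 participants each; the data, group sizes, and variable names are illustrative assumptions and not the original study data or analysis script.

import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)

# Simulated Holt-Laury risk levels (illustrative only; roughly matching the reported means/SDs).
groups = {
    "LOW": rng.normal(5.3, 2.1, 61),
    "MED": rng.normal(5.2, 1.9, 61),
    "HIGH": rng.normal(6.2, 2.0, 61),
}
df = pd.DataFrame(
    [(g, v) for g, values in groups.items() for v in values],
    columns=["group", "risk_level"],
)

# One-way ANOVA: do mean risk levels differ across the three treatment groups?
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F(2, {len(df) - 3}) = {f_stat:.3f}, p = {p_val:.3f}")

# Tukey HSD post-hoc test: which pairs of groups differ significantly?
print(pairwise_tukeyhsd(df["risk_level"], df["group"]))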
Fourth, there might be other theoretical explanations for the observed effects, such as the uncanny valley hypothesis (Mori et al. 2012). According to this hypothesis, an overly human-like or anthropomorphic design of the RAC might have triggered unintended negative effects, such as a reduced level of trust. Future research could, for example, investigate whether there is a certain, good enough level of anthropomorphism that yields a maximum level of trust in the RAC while not being perceived as uncanny due to an excessively high level of anthropomorphism.
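As a purely illustrative sketch of this future research direction (not an analysis performed in this study), trust could be modeled as a quadratic function of a continuous anthropomorphism manipulation, and the trust-maximizing level estimated from the fitted curve; all data and variable names below are hypothetical.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical anthropomorphism manipulation on a continuous 0-10 scale.
anthro = rng.uniform(0, 10, 200)
# Simulated trust ratings with an inverted-U (uncanny-valley-like) shape plus noise.
trust = 3 + 0.9 * anthro - 0.08 * anthro ** 2 + rng.normal(0, 0.5, 200)

# Quadratic regression: trust ~ anthropomorphism + anthropomorphism^2.
X = sm.add_constant(np.column_stack([anthro, anthro ** 2]))
fit = sm.OLS(trust, X).fit()
b0, b1, b2 = fit.params

# Vertex of the fitted parabola: the anthropomorphism level at which predicted trust peaks.
peak = -b1 / (2 * b2)
print(f"Estimated trust-maximizing anthropomorphism level: {peak:.2f}")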
Fifth, our student sample was selected purposefully in order to have a homogeneous group of participants with little or no financial expertise. Future studies could employ participants with a higher level of expertise to assess our findings as well as to investigate the potential effect of financial expertise on users' perceptions of the RAC and their investment decisions.
In summary, we have outlined several potential avenues for further research, but more research is required to understand the interaction between users and robo-advisory in the form of chatbots, as well as how the design of certain social cues of an RAC affects this interaction.
7 Conclusion
In this paper, we present our study investigating the effects of anthropomorphism on users' perceptions of an RAC and their likeliness to follow its investment recommendations. To summarize, we contribute valuable empirical knowledge to the growing knowledge base on the interaction between users and robo-advisors. Our findings have implications for the design of RACs, and we can make several recommendations for practice and future research. First, it is imperative to ensure a high level of trust in the offered robo-advisory in general and in the form of a chatbot in particular. Second, in the context of chatbots, the results of our study as well as existing research show that an increased level of social presence of the agent is positively correlated with users' trusting beliefs. Thus, we recommend ensuring an adequate level of perceived social presence, for example through an appropriate design of the RAC's social cues. Furthermore, we found different results for the female participants (i.e., a higher level of social presence and trusting beliefs), which suggests that, when designing an RAC, the potential users' gender as well as other user characteristics (e.g., disposition to trust in technology) should be considered.
References
Al-Natour, S., Benbasat, I., and Cenfetelli, R. 2006. “The Role of Design Characteristics in Shaping
Perceptions of Similarity: The Case of Online Shopping Assistants,” Journal of the Association
for Information Systems (7:12), pp. 821–861.
Appel, J., Von Der Pütten, A., Krämer, N. C., and Gratch, J. 2012. “Does Humanity Matter? Analyzing
the Importance of Social Cues and Perceived Agency of a Computer System for the Emergence of
Social Reactions during Human-Computer Interaction,” Advances in Human-Computer
Interaction (2012).
Araujo, T. 2018. “Living up to the Chatbot Hype: The Influence of Anthropomorphic Design Cues and
Communicative Agency Framing on Conversational Agent and Company Perceptions,”
Computers in Human Behavior (85), pp. 183–189.
Beldad, A., Hegner, S., and Hoppen, J. 2016. “The Effect of Virtual Sales Agent (VSA) Gender
Product Gender Congruence on Product Advice Credibility, Trust in VSA and Online Vendor, and
Purchase Intention,” Computers in Human Behavior (60), pp. 62–72.
Bock, O., Baetge, I., and Nicklisch, A. 2014. “Hroot: Hamburg Registration and Organization Online
Tool,” European Economic Review (71), pp. 117–120.
Burgoon, J. K., Bonito, J. A., Bengtsson, B., Cederberg, C., Lundeberg, M., and Allspach, L. 2000.
“Interactivity in Human–Computer Interaction: A Study of Credibility, Understanding, and
Influence,” Computers in Human Behavior (16:6), pp. 553–574.
Carey, S. 1985. Conceptual Change in Childhood. The MIT Press Series in Learning, Development, and
Conceptual Change, Cambridge, MA: MIT Press.
Cassell, J., and Bickmore, T. 2000. “External Manifestations of Trustworthiness in the Interface,”
Communications of the ACM (43:12), pp. 50–56.
Cassell, J., Sullivan, J., Prevost, S., and Churchill, E. 2000. Embodied Conversational Agents,
Cambridge, MA, USA: MIT Press.
Cyr, D., Hassanein, K., Head, M., and Ivanov, A. 2007. “The Role of Social Presence in Establishing
Loyalty in E-Service Environments,” Interacting with Computers (19:1), pp. 43–56.
Dale, R. 2016. “The Return of the Chatbots,” Natural Language Engineering (22:5), pp. 811–817.
Day, M.-Y., Lin, J.-T., and Chen, Y.-C. 2018. “Artificial Intelligence for Conversational Robo-
Advisor,” in 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis
and Mining (ASONAM), Barcelona, Spain: IEEE, August, pp. 1057–1064.
Derrick, D. C., Jenkins, J. L., and Nunamaker, J. F. 2011. “Design Principles for Special Purpose,
Embodied, Conversational Intelligence with Environmental Sensors (SPECIES) Agents,” AIS
Transactions on Human-Computer Interaction (3:2), pp. 62–81.
Derrick, D. C., and Ligon, G. S. 2014. “The Affective Outcomes of Using Influence Tactics in Embodied
Conversational Agents,” Computers in Human Behavior (33:April), pp. 39–48.
van Doorn, J., Mende, M., Noble, S. M., Hulland, J., Ostrom, A. L., Grewal, D., and Petersen, J. A.
2017. “Domo Arigato Mr. Roboto,” Journal of Service Research (20:1), pp. 43–58.
Epley, N., Waytz, A., Akalis, S., and Cacioppo, J. T. 2008. “When We Need A Human: Motivational
Determinants of Anthropomorphism,” Social Cognition (26:2), pp. 143–155.
Epley, N., Waytz, A., and Cacioppo, J. T. 2007. “On Seeing Human: A Three-Factor Theory of
Anthropomorphism,” Psychological Review (114:4), pp. 864–886.
Etemad-Sajadi, R. 2016. “The Impact of Online Real-Time Interactivity on Patronage Intention: The
Use of Avatars,” Computers in Human Behavior (61), pp. 227–232.
Faul, F., Erdfelder, E., Lang, A.-G., and Buchner, A. 2007. “G*Power 3: A Flexible Statistical Power
Analysis Program for the Social, Behavioral, and Biomedical Sciences,” Behavior Research
Methods (39:2), pp. 175–191.
Feine, J., Gnewuch, U., Morana, S., and Maedche, A. 2019. “A Taxonomy of Social Cues for
Conversational Agents,” International Journal of Human-Computer Studies (132), pp. 138–161.
Fisch, J. E., Labouré, M., and Turner, J. A. 2018. “The Emergence of the Robo-Advisor,” No. PRC
WP2018-12, Pension Research Council Working Paper, Philadelphia.
Flanagin, A. J. 2005. “IM Online: Instant Messaging Use Among College Students,” Communication
Research Reports (22:3), pp. 175–187.
Fogg, B. J., and Nass, C. 1997. “How Users Reciprocate to Computers: An Experiment That Demonstrates
Behavior Change,” in CHI EA ’97 CHI ’97 Extended Abstracts on Human Factors in Computing
Systems, pp. 331–332.
Fogg, B. J., and Tseng, H. 1999. “The Elements of Computer Credibility,” in Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems the CHI Is the Limit - CHI ’99, New York,
New York, USA: ACM Press, pp. 80–87.
Følstad, A., and Brandtzæg, P. B. 2017. “Chatbots and the New World of HCI,” Interactions (24:4), pp.
38–42.
Friestad, M., and Wright, P. 1994. “The Persuasion Knowledge Model: How People Cope with
Persuasion Attempts,” Journal of Consumer Research (21:1), p. 1.
Gefen, D., and Straub, D. W. 2003. “Managing User Trust in B2C E-Services,” E-Service Journal (2:2),
pp. 7–24.
Gefen, D., and Straub, D. W. 2004. “Consumer Trust in B2C E-Commerce and the Importance of Social
Presence: Experiments in e-Products and e-Services,” Omega (32:6), pp. 407–424.
Gnewuch, U., Morana, S., Adam, M. T. P., and Maedche, A. 2018a. “Faster Is Not Always Better:
Understanding the Effect of Dynamic Response Delays in Human-Chatbot Interaction,”
Proceedings of the European Conference on Information Systems (ECIS).
Gnewuch, U., Morana, S., Adam, M. T. P., and Maedche, A. 2018b. “‘The Chatbot Is Typing …’ – The
Role of Typing Indicators in Human-Chatbot Interaction,” in Proceedings of the 17th Annual Pre-
ICIS Workshop on HCI Research in MIS, San Francisco, CA, USA.
Gnewuch, U., Morana, S., and Maedche, A. 2017. “Towards Designing Cooperative and Social
Conversational Agents for Customer Service,” in ICIS 2017 Proceedings.
Go, E., and Sundar, S. S. 2019. “Humanizing Chatbots: The Effects of Visual, Identity and
Conversational Cues on Humanness Perceptions,” Computers in Human Behavior (97), Elsevier
Ltd, pp. 304–316.
Gong, L. 2008. “How Social Is Social Responses to Computers? The Function of the Degree of
Anthropomorphism in Computer Representations,” Computers in Human Behavior (24:4), pp.
1494–1509.
Griffin, D., and Tversky, A. 1992. “The Weighing of Evidence and the Determinants of Confidence,”
Cognitive Psychology (24:3), pp. 411–435.
Hayes, A. F. 2018. Introduction to Mediation, Moderation, and Conditional Process Analysis, Second
Edition: A Regression-Based Approach, (2nd ed.), New York, NY, USA: Guilford Publications.
Hertzum, M., Andersen, H. H. K., Andersen, V., and Hansen, C. B. 2002. “Trust in Information Sources:
Seeking Information from People, Documents, and Virtual Agents,” Interacting with Computers
(14:5), pp. 575–599.
Hess, T., Fuller, M., and Campbell, D. 2009. “Designing Interfaces with Social Presence: Using
Vividness and Extraversion to Create Social Recommendation Agents.,” Journal of the
Association for Information Systems (10:12), pp. 889–919.
Hodge, F. D., Mendoza, K., and Sinha, R. 2018. “The Effect of Humanizing Robo-Advisors on Investor
Judgments,” SSRN Electronic Journal.
Holt, C. A., and Laury, S. K. 2005. “Risk Aversion and Incentive Effects: New Data without Order
Effects,” American Economic Review (95:3), pp. 902–912.
Holtgraves, T. M., Ross, S. J., Weywadt, C. R., and Han, T. L. 2007. “Perceiving Artificial Social
Agents,” Computers in Human Behavior (23:5), pp. 2163–2174.
Holzwarth, M., Janiszewski, C., and Neumann, M. M. 2006. “The Influence of Avatars on Online
Consumer Shopping Behavior,” Journal of Marketing (70:4), pp. 19–36.
Ivanov, O., Snihovyi, O., and Kobets, V. 2018. “Implementation of Robo-Advisors Tools for Different
Risk Attitude Investment Decisions,” in Proceedings of the 14th International Conference on ICT
in Education, Research and Industrial Applications. Integration, Harmonization and Knowledge
Transfer. Volume II: Workshops (ICTERI 2018), Kyiv, Ukraine, pp. 195–206.
Jain, M., Kota, R., Kumar, P., and Patel, S. N. 2018. “Convey,” in Proceedings of the 2018 CHI
Conference on Human Factors in Computing Systems - CHI ’18, New York, New York, USA:
ACM Press, pp. 1–6.
Jenkins, M.-C., Churchill, R., Cox, S., and Smith, D. 2007. “Analysis of User Interaction with Service
Oriented Chatbot Systems,” in Human-Computer Interaction. HCI Intelligent Multimodal
Interaction Environments, J. Jacko (ed.), Berlin, Heidelberg: Springer, pp. 76–83.
Jung, D., Dorner, V., Glaser, F., and Morana, S. 2018. “Robo-Advisory: Digitalization and Automation
of Financial Advisory,” Business & Information Systems Engineering (60:1), pp. 81–86.
Jung, D., Dorner, V., Weinhardt, C., and Pusmaz, H. 2018. “Designing a Robo-Advisor for Risk-Averse,
Low-Budget Consumers,” Electronic Markets (28:3), pp. 367–380.
Jung, D., Glaser, F., and Köpplin, W. 2019. Robo-Advisory: Opportunities and Risks for the Future of
Financial Advisory, pp. 405–427.
Kim, Y., and Sundar, S. S. 2012. “Anthropomorphism of Computers: Is It Mindful or Mindless?,”
Computers in Human Behavior (28:1), Elsevier Ltd, pp. 241–250.
Klein, J., Moon, Y., and Picard, R. W. 2002. “This Computer Responds to User Frustration,” Interacting
with Computers (14:2), pp. 119–140.
Kobets, V., Yatsenko, V., Mazur, A., and Zubrii, M. 2018. “Data Analysis of Private Investment
Decision Making Using Tools Of Robo-Advisers in Long-Run Period,” in Proceedings of the 14th
International Conference on ICT in Education, Research and Industrial Applications. Integration,
Harmonization and Knowledge Transfer. Volume II: Workshops (ICTERI 2018), Kyiv, Ukraine,
pp. 144–159.
Larivière, B., Bowen, D., Andreassen, T. W., Kunz, W., Sirianni, N. J., Voss, C., Wünderlich, N. V.,
and De Keyser, A. 2017. “‘Service Encounter 2.0’: An Investigation into the Roles of Technology,
Employees and Customers,” Journal of Business Research (79), pp. 238–246.
Lee, K. M., and Nass, C. 2004. “The Multiple Source Effect and Synthesized Speech: Doubly-
Disembodied Language as a Conceptual Framework,” Human Communication Research (30:2),
pp. 182–207.
Lee, M. K. O., and Turban, E. 2001. “A Trust Model for Consumer Internet Shopping,” International
Journal of Electronic Commerce (6:1), pp. 75–91.
Lee, S. Y., and Choi, J. 2017. “Enhancing User Experience with Conversational Agent for Movie
Recommendation: Effects of Self-Disclosure and Reciprocity,” International Journal of Human
Computer Studies (103:May 2016), Elsevier Ltd, pp. 95–105.
Leite, I., Martinho, C., and Paiva, A. 2013. “Social Robots for Long-Term Interaction: A Survey,”
International Journal of Social Robotics (5:2), pp. 291–308.
Lu, B., Fan, W., and Zhou, M. 2016. “Social Presence, Trust, and Social Commerce Purchase Intention:
An Empirical Research,” Computers in Human Behavior (56), pp. 225–237.
Ludden, C., Thompson, K., and Mohsin, I. 2015. “The Rise of Robo-Advice: Changing the Concept of
Wealth Management.”
Maedche, A., Legner, C., Benlian, A., Berger, B., Gimpel, H., Hess, T., Hinz, O., Morana, S., and
Söllner, M. 2019. “AI-Based Digital Assistants,” Business & Information Systems Engineering
(61:4), pp. 535–544.
McKnight, D. H., and Chervany, N. L. 2001. “What Trust Means in E-Commerce Customer
Relationships: An Interdisciplinary Conceptual Typology,” International Journal of Electronic
Commerce (6:2), pp. 35–59.
McKnight, D. H., Choudhury, V., and Kacmar, C. 2002. “Developing and Validating Trust Measures
for E-Commerce: An Integrative Typology,” Information Systems Research (13:3), pp. 334–359.
McKnight, D. H., Lankton, N., and Tripp, J. 2011. “Social Networking Information Disclosure and
Continuance Intention: A Disconnect,” in 2011 44th Hawaii International Conference on System
Sciences, IEEE, January, pp. 1–10.
McTear, M. F. 2002. “Spoken Dialogue Technology: Enabling the Conversational User Interface,” ACM
Computing Surveys (34:1), pp. 90–169.
McTear, M. F. 2017. “The Rise of the Conversational Interface: A New Kid on the Block?,” in Lecture
Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and
Lecture Notes in Bioinformatics).
Metzger, M. J., Flanagin, A. J., Eyal, K., Lemus, D. R., and Mccann, R. M. 2003. “Credibility for the
21st Century: Integrating Perspectives on Source, Message, and Media Credibility in the
Contemporary Media Environment,” Annals of the International Communication Association
(27:1), pp. 293–335.
Meyerson, D., Weick, K. E., and Kramer, R. M. 1996. “Swift Trust and Temporary Groups,” in Trust
in Organizations: Frontiers of Theory and Research, 2455 Teller Road, Thousand Oaks California
91320 United States: SAGE Publications, Inc., pp. 166–195.
Mori, M., MacDorman, K. F., and Kageki, N. 2012. “The Uncanny Valley,” IEEE Robotics and
Automation Magazine (19:2), pp. 98–100.
Nan, X., Anghelcev, G., Myers, J. R., Sar, S., and Faber, R. 2006. “What If a Web Site Can Talk?
Exploring the Persuasive Effects of Web-Based Anthropomorphic Agents,” Journalism & Mass
Communication Quarterly (83:3), pp. 615–631.
Nass, C., Kim, E.-Y., and Lee, E.-J. 1998. “When My Face Is the Interface,” in Proceedings of the
SIGCHI Conference on Human Factors in Computing Systems - CHI ’98, New York, New York,
USA: ACM Press, pp. 148–154.
Nass, C., and Moon, Y. 2000. “Machines and Mindlessness: Social Responses to Computers,” Journal
of Social Issues (56:1), pp. 81–103.
Nass, C., Moon, Y., Fogg, B. J., Reeves, B., and Dryer, D. C. 1995. “Can Computer Personalities Be
Human Personalities?,” International Journal of Human-Computer Studies (43:2), pp. 223–239.
Nass, C., Steuer, J., and Tauber, E. R. 1994. “Computers Are Social Actors,” in Conference Companion
on Human Factors in Computing Systems - CHI ’94.
Nowak, K. L., and Rauh, C. 2005. “The Influence of the Avatar on Online Perceptions of
Anthropomorphism, Androgyny, Credibility, Homophily, and Attraction,” Journal of Computer-
Mediated Communication (11:1), pp. 153–178.
O’Keefe, D. J. 2015. Persuasion: Theory and Research, SAGE Publications, Inc.
Pfeuffer, N., Benlian, A., Gimpel, H., and Hinz, O. 2019. “Anthropomorphic Information Systems,”
Business & Information Systems Engineering (forthcoming).
Qiu, L., and Benbasat, I. 2009. “Evaluating Anthropomorphic Product Recommendation Agents: A
Social Relationship Perspective to Designing Information Systems,” Journal of Management
Information Systems (25:4), pp. 145–182.
Richards, D., and Bransky, K. 2014. “ForgetMeNot: What and How Users Expect Intelligent Virtual
Agents to Recall and Forget Personal Conversational Content,” International Journal of Human-
Computer Studies (72:5), pp. 460–476.
Sabelli, A. M., Kanda, T., and Hagita, N. 2011. “A Conversational Robot in an Elderly Care Center: An
Ethnographic Study,” in 2011 6th ACM/IEEE International Conference on Human-Robot
Interaction (HRI).
Sarikaya, R. 2017. “The Technology Behind Personal Digital Assistants: An Overview of the System
Architecture and Key Components,” IEEE Signal Processing Magazine (34:1), pp. 67–81.
Seeger, A.-M., Pfeiffer, J., and Heinzl, A. 2017. “When Do We Need a Human? Anthropomorphic
Design and Trustworthiness of Conversational Agents,” in Proceedings of the Sixteenth Annual
Pre-ICIS Workshop on HCI Research in MIS, Seoul, South Korea, pp. 1–6.
Seeger, A.-M., Pfeiffer, J., and Heinzl, A. 2018. “Designing Anthropomorphic Conversational Agents:
Development and Empirical Evaluation of a Design Framework,” in Proceedings of the
International Conference on Information Systems, San Francisco, CA, USA.
Sironi, P. 2016. FinTech Innovation: From Robo-Advisors to Goal Based Investing and Gamification,
Wiley.
Sproull, L., Subramani, M., Kiesler, S., Walker, J., and Waters, K. 1996. “When the Interface Is a Face,”
Human-Computer Interaction (11:2), pp. 97–124.
Tertilt, M., and Scholz, P. 2017. “To Advise, or Not to Advise – How Robo-Advisors Evaluate the Risk
Preferences of Private Investors,” SSRN Electronic Journal.
Tertilt, M., and Scholz, P. 2018. “To Advise, or Not to Advise— How Robo-Advisors Evaluate the Risk
Preferences of Private Investors,” The Journal of Wealth Management (21:2), pp. 70–84.
Thayalan, X., Shanthi, A., and Paridi, T. 2012. “Gender Difference in Social Presence Experienced in
E-Learning Activities,” Procedia - Social and Behavioral Sciences (67), pp. 580–589.
Wang, W., and Benbasat, I. 2005. “Trust In and Adoption of Online Recommendation Agents,” Journal
of the Association for Information Systems (6:3), pp. 72–101.
Weizenbaum, J. 1966. “ELIZA – A Computer Program for the Study of Natural Language
Communication Between Man and Machine,” Communications of the ACM (9:1), pp. 36–45.
Wünderlich, N. V., and Paluch, S. 2017. “A Nice and Friendly Chat With a Bot: User Perceptions of
AI-Based Service Agents,” in Proceedings of the International Conference on Information Systems
(ICIS), pp. 1–11.
Yoo, K.-H., and Gretzel, U. 2011. “Creating More Credible and Persuasive Recommender Systems: The
Influence of Source Characteristics on Recommender System Evaluations,” in Recommender
Systems Handbook, Boston, MA: Springer US, pp. 455–477.