This is the author’s version of a work that was published in the following source
Gnewuch, U., Meng, Y., and Maedche, A. (2020). “The Effect of Perceived Similarity in Dominance on
Customer Self-Disclosure to Chatbots in Conversational Commerce,” in Proceedings of the 28th European
Conference on Information Systems (ECIS 2020), Marrakech, Morocco.
Please note: Copyright is owned by the author and / or the publisher.
Commercial use is not allowed.
Institute of Information Systems and Marketing (IISM)
Kaiserstraße 89-93
76133 Karlsruhe - Germany
https://iism.kit.edu
Karlsruhe Service Research Institute (KSRI)
Kaiserstraße 89
76133 Karlsruhe Germany
https://ksri.kit.edu
© 2017. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
THE EFFECT OF PERCEIVED SIMILARITY IN DOMINANCE
ON CUSTOMER SELF-DISCLOSURE TO CHATBOTS IN
CONVERSATIONAL COMMERCE
Research paper
Gnewuch, Ulrich, Karlsruhe Institute of Technology (KIT), Institute of Information Systems
and Marketing (IISM), Karlsruhe, Germany, ulrich.gnewuch@kit.edu
Yu, Meng, Karlsruhe Institute of Technology (KIT), Institute of Information Systems and
Marketing (IISM), Karlsruhe, Germany, mengyu0607@gmail.com
Maedche, Alexander, Karlsruhe Institute of Technology (KIT), Institute of Information
Systems and Marketing (IISM), Karlsruhe, Germany, alexander.maedche@kit.edu
Abstract
Recent years have seen increased interest in the application of chatbots for conversational commerce.
However, many chatbots are falling short of expectations because customers are reluctant to
disclose personal information to them (e.g., product interest, email address). Drawing on social
response theory and similarity-attraction theory, we investigated (1) how a chatbot’s language style
influences users’ perceived similarity in dominance (i.e., an important facet of personality) between
them and the chatbot and (2) how these perceptions influence their self-disclosure behavior. We
conducted an online experiment (N=205) with two chatbots with different language styles (dominant
vs. submissive). Our results show that users attribute a dominant personality to a chatbot that uses
strong language with frequent assertions, commands, and self-confident statements. Moreover, we find
that the interplay of the users’ own dominance and the chatbot’s perceived dominance creates
perceptions of similarity. These perceptions of similarity increase users’ degree of self-disclosure via
an increased likelihood of accepting the chatbot’s advice. Our study reveals that language style is an
important design feature of chatbots and highlights the need to account for the interplay of design
features and user characteristics. Furthermore, it also advances our understanding of the impact of
design on self-disclosure behavior.
Keywords: chatbot, language style, dominance, self-disclosure, personality similarity.
1 Introduction
Conversational commerce refers to the use of chat, voice, and other natural language interfaces in e-
commerce environments. While customers’ questions and inquiries were primarily handled by human
service agents in the past, recent advances in artificial intelligence (AI) have led to a growing interest in
using chatbots (i.e., text-based conversational agents) for such tasks (Tuzovic and Paluch, 2018; Watson,
2017). For example, customers can use chatbots to find and book airline flights or search and buy fashion
items (Dale, 2016; Tuzovic and Paluch, 2018; Watson, 2017). In contrast to voice assistants, such as
Amazon’s Alexa or Google Home, chatbots rely on written language and can be found on many e-
commerce websites and messaging platforms (Dale, 2016). For example, Facebook announced that there
are already more than 300,000 chatbots on Facebook Messenger (Facebook, 2018).
While marketers have attributed great potential to chatbots, even declaring 2016 the year of conversational
commerce (Messina, 2015), organizations have realized that there are several challenges that need to be
addressed when using chatbots for conversational commerce (Tuzovic and Paluch, 2018). More
specifically, research has shown that users are reluctant to disclose personal information to chatbots
because, for example, they are worried about what happens with their data (Saffarizadeh, Boodraj, and
Alashoor, 2017; Tuzovic and Paluch, 2018). However, customer self-disclosure is key to establishing long-
term relationships with customers and essential for many business transactions (e.g., purchasing processes,
marketing campaigns) (Campbell, 2019). Self-disclosure can be understood as the process of revealing
personal information, such as name, product interests, or email address, to an e-commerce provider
(Campbell, 2019; Cozby, 1973). Given the importance of customer self-disclosure for e-commerce
providers, much research has examined the antecedents to users’ willingness to disclose personal
information (e.g., Al-Natour, Benbasat, and Cenfetelli, 2009; Campbell, 2019; Moon, 2000).
However, while existing research provides valuable knowledge about the antecedents of self-disclosure,
research on the impact of specific design features on self-disclosure (Al-Natour et al., 2009; Spiekermann,
Grossklags, and Berendt, 2001), particularly in the context of chatbots (e.g., Adam and Klumpe, 2019), is
scarce. Research has identified a myriad of design features that could potentially influence self-disclosure
to a chatbot (Feine, Gnewuch, Morana, and Maedche, 2019; Pfeuffer, Benlian, Gimpel, and Hinz, 2019).
Since users interact with a chatbot using natural language, its language style might be a particularly
important design feature and a social cue that can influence how users interact with a chatbot (e.g.,
Chattaraman, Kwon, Gilbert, and Ross, 2019; Sah and Peng, 2015). For example, research has shown that
users are more willing to confide in a chatbot that uses a dominant language style during a job interview
(Zhou, Mark, Li, and Yang, 2019). Moreover, building on social response theory, studies have shown that
users ascribe a personality (e.g., extroverted/introverted) to a computer based on its language style (Moon
and Nass, 1996; Nass, Moon, Fogg, Reeves, and Dryer, 1995). Furthermore, these personality attributions
may even lead users to form perceptions about how similar they are to the computer (Al-Natour, Benbasat,
and Cenfetelli, 2005, 2006; Hess, Fuller, and Mathew, 2005).
Despite the importance of language for the design of chatbots, there is a lack of research on how a chatbot’s
language style influences users’ perceptions and self-disclosure behavior. To take a first step in closing this
research gap, we focus on one specific personality facet (i.e., dominance) that is often reflected in a
person’s language style (Carli, 1990; Leaper and Ayres, 2007) and examine the effect of users’ perceived
similarity in dominance between them and the chatbot. Consequently, we investigate the following two
research questions: (1) How does a chatbot’s language style influence users’ perceived similarity in
dominance between them and the chatbot? (2) How do these perceptions influence their self-disclosure
behavior?
To address these research questions, we conducted a two-condition, between-subjects online experiment in
which participants interacted with one of two chatbots in a conversational commerce scenario. The two
chatbots differed only in their language style (dominant vs. submissive). The dominant chatbot used strong
language with frequent assertions, commands, and self-confident statements. In contrast, the submissive
chatbot primarily used suggestions and unassuming statements. Customer self-disclosure was assessed
during the interaction when the chatbot asked users for their personal information (i.e., product interest and
email address). Our findings show that users attribute a dominant personality to a chatbot when its language
style is characterized by confident and assertive statements. Moreover, we find that the interplay of users’
own dominance and the chatbot’s perceived dominance creates perceptions of similarity. Furthermore,
perceived similarity in dominance has a positive indirect effect on self-disclosure via an increased
likelihood of accepting the chatbot’s advice. Our study makes three major contributions. First, it advances
our understanding of the impact of language style as an important design feature of chatbots. Second, it
provides further evidence that it is the interplay between user characteristics and design features, not only
the design per se, that shapes users’ perceptions of chatbots. Third, this study extends prior research on
customer self-disclosure by demonstrating how perceptions of similarity between a user and a chatbot
influence self-disclosure behavior.
2 Theoretical Foundations and Related Work
2.1 Dominance
Researchers have defined human personality as stable traits which reflect basic dimensions on which
people differ (Matthews, Deary, and Whiteman, 2003). The well-known Five-Factor-Model describes
human personality in terms of five core traits: extraversion, agreeableness, neuroticism,
conscientiousness, and openness to experience (John and Srivastava, 1999). Extraversion has been
identified as a particularly relevant trait in the context of social interaction and therefore, has often
been used in human-computer interaction (HCI) (e.g., Al-Natour et al., 2006; Hess, Fuller, and
Campbell, 2009). Extraversion implies an energetic approach toward the social and material world and
includes facets such as dominance, sociability, and positive emotionality (John and Srivastava, 1999).
Dominant individuals are self-confident, self-assertive, and willing to take charge (Al-Natour,
Benbasat, and Cenfetelli, 2011). Consequently, the way people communicate is often influenced by
their dominance level. In general, dominant people state their opinions with assurance and force and
are able to influence and lead others (Galassi, Delo, Galassi, and Bastien, 1974; Schlee, 2005). In
contrast, submissive people use more equivocal and less confident language (Rich and Smith, 2000).
2.2 Language Style of Chatbots
Text-based conversational agents, commonly referred to as chatbots, have a long history in HCI. However,
recent developments in AI research and technology have opened up interesting possibilities for chatbots in
conversational commerce (Følstad and Brandtzæg, 2017). Consequently, many organizations are turning to
chatbots in order to reduce their costs and make it easier for customers to interact with them (Gnewuch,
Morana, and Maedche, 2017; Watson, 2017). For example, customers can already use chatbots to find and
book flights, hail a taxi, and check public transport schedules (Watson, 2017). Furthermore, chatbots have
been shown to be effective in other domains such as education (e.g., Wambsganss, Winkler, Söllner, and
Leimeister, 2020; Winkler and Söllner, 2018), team collaboration (e.g., Rietz, Benke, and Maedche, 2019),
and healthcare (e.g., Laumer, Maier, and Gubler, 2019).
Extant research on chatbots and conversational agents has shown that many of their design features (e.g.,
human-like avatars, language style) are unconsciously perceived as social cues and trigger social responses
from users (e.g., Adam and Klumpe, 2019; Diederich, Brendel, Lichtenberg, and Kolbe, 2019; Diederich,
Janßen-Müller, Brendel, and Morana, 2019; Pfeuffer, Adam, Toutaoui, Hinz, and Benlian, 2019). This
phenomenon has been extensively studied in many domains under the Computers are Social Actors
(CASA) paradigm (Nass and Moon, 2000; Nass, Steuer, and Tauber, 1994). According to social response
theory, which is based on CASA, even rudimentary social cues are sufficient to generate a wide range of
social responses (Nass and Moon, 2000). For example, Nass et al. (1995) found that when computers are
endowed with personality-like characteristics, users respond to them as if they have personalities. More
specifically, Isbister and Nass (2000) showed that users distinguished between an extroverted computer that
used “strong and friendly language expressed in the form of confident assertions” and an introverted
computer that used “weaker language expressed in the form of questions and suggestions” (p. 258). In the
context of recommendation agents, Al-Natour et al. (2006) found that an agent using more assertive
statements and expressions of higher confidence levels was perceived as more dominant. Moreover, Li et
al. (2017) showed that users were more willing to confide in a chatbot with a reserved and dominant
personality as compared to a chatbot with a warm and cheerful personality. Taken together, these findings
indicate that language style is an important design feature and that users may form impressions of the
chatbot’s personality based on its language style.
2.3 Similarity-Attraction Theory
Similarity-attraction theory posits that people like and are attracted to others who are similar, rather than
dissimilar, to themselves (Byrne, 1971). More specifically, people who share similar personality traits are
attracted to each other (Byrne, Griffitt, and Stefaniak, 1967). Research has shown that this theory not only
applies to interpersonal communication, but also to HCI. For example, Moon and Nass (1996) found that
users were more attracted to a computer exhibiting similar personality traits compared to a dissimilar
computer. Moreover, Al-Natour et al. (2011) showed that perceived personality similarity to a
recommendation agent either directly or indirectly influenced users’ perceived enjoyment, ease of use,
usefulness, social presence, and trusting beliefs. In contrast, Li et al. (2017) found that personality similarity
between a user and a chatbot taking the role of a virtual interviewer did not influence the user’s willingness
to confide in and listen to the chatbot. In summary, research has shown that similarity-attraction theory
generally also applies to HCI, but there is a lack of research on how perceptions of similarity to a chatbot
influence user behavior when interacting with a chatbot.
2.4 Customer Self-Disclosure
Broadly speaking, self-disclosure can be defined as any personal information that a person communicates
to another (Cozby, 1973). The degree of self-disclosure is often categorized along two dimensions: (1)
breadth or amount of information disclosed and (2) depth or intimacy of information disclosed (Altman and
Taylor, 1973; Cozby, 1973). In e-commerce, organizations often need to gather personal information from
customers, such as product preferences, payment information, or contact details, in order to conduct their
business (Campbell, 2019). For example, eliciting customers’ preferences for products is necessary for
creating a customer profile and providing personalized suggestions (Adomavicius and Tuzhilin, 2001). In
addition, customers’ contact information, such as email addresses, is collected to be able to contact
customers with promotions and other marketing information (Campbell, 2019). However, research has
shown that customers are increasingly reluctant to disclose such information because they fear that their
information may get into the wrong hands (Olivero and Lunt, 2004; Spiekermann et al., 2001). Therefore,
much research has focused on identifying antecedents to users’ willingness to disclose personal information
(e.g., Al-Natour et al., 2009; Campbell, 2019).
While there is reason to believe that social cues of chatbots affect users’ self-disclosure (e.g., Adam and
Klumpe, 2019; Sah and Peng, 2015; Schuetzler, Giboney, Grimes, and Nunamaker, 2018), to the best of
our knowledge, there is no previous research that investigates how perceptions of similarity in dominance
(i.e., one specific personality facet) can be created through a specific social cue (i.e., chatbot language style)
in order to influence users’ degree of self-disclosure.
3 Research Model and Hypotheses
Research has shown that the interplay between individual user characteristics and design features of
systems, such as computers or online recommendation agents, plays an important role in HCI (Al-Natour et
al., 2006; Nass et al., 1995). Therefore, building on social response theory, we develop a research model
that first describes how perceptions of similarity in dominance can be created through a chatbot’s language
style. Subsequently, drawing upon similarity-attraction theory, we theorize how perceived similarity in
dominance increases the likelihood of accepting the chatbot’s advice and leads to customer self-disclosure.
[Figure 1 depicts the research model: chatbot language style (1 = dominant, 2 = submissive) influences perceived chatbot dominance (H1); perceived chatbot dominance and user dominance interact (personality match) to shape perceived similarity in dominance (H2); perceived similarity in dominance influences the likelihood of accepting chatbot advice (H3) and customer self-disclosure (H4); the likelihood of accepting chatbot advice influences customer self-disclosure (H5). Controls: gender, age, prior experience with chatbots, chat duration.]
Figure 1. Research model
3.1 Manifesting Chatbot Dominance through Language Style
Extant research has shown that there is a link between personality and language use in a variety of contexts
(e.g., Holtgraves, 2011; Pennebaker and King, 1999; Yarkoni, 2010). People automatically infer
personality traits from the way other people communicate (Costa and MacCrae, 1992). As described above,
dominance has been identified as an important personality facet. Several studies have shown that
dominance can be expressed verbally (e.g., using phrases and words like “you must”, “absolutely”, “I’m
sure that”) and that such a language style can also be implemented in the design of interactive systems (e.g.,
Al-Natour et al., 2006; Moon and Nass, 1996; Nass et al., 1995). Furthermore, drawing on social response
theory (Nass and Moon, 2000), these studies have demonstrated that based on cues in the language, users
attribute personality to a system and are able to distinguish, for example, between extroverted and
introverted personalities. Building on this evidence, we argue that a chatbot that uses strong language with
assertions, commands, and self-confident statements (e.g., “you should”, “I’m sure that”) is perceived as
more dominant than a chatbot that uses a submissive language style (e.g., “you could”, “maybe”). Thus, we
propose that:
H1: The language style of a chatbot is directly related to its perceived dominance.
3.2 Shaping Perceptions of Similarity in Dominance between User and
Chatbot
Several studies have shown that perceptions of personality similarity are shaped by the interplay between
user characteristics and design features of interactive systems (e.g., Al-Natour et al., 2006; Hess et al.,
2005). For example, Al-Natour et al. (2006) found that a user’s perceived personality similarity to a
recommendation agent can be predicted by comparing separate assessments of the agent’s and the user’s
level of dominance. Building on these findings, we argue that perceptions of similarity can also arise when
a dominant (submissive) user interacts with a chatbot that is perceived to have a dominant (submissive)
personality. Consequently, perceived similarity in dominance should be higher when there is a match
between the chatbot’s and the user’s level of dominance. Hence, we propose that:
H2: Users’ perceptions of the chatbot’s dominance and their own dominance interact to affect
users’ perceived similarity in dominance between them and the chatbot.
3.3 The Effect of Perceived Similarity in Dominance on Likelihood of
Accepting Chatbot Advice and Customer Self-Disclosure
Personality similarity has been identified as a key factor in the design of recommendation agents and an
important driver of trust, enjoyment, and involvement (Al-Natour et al., 2011; Hess et al., 2005). Moreover,
research has shown that people are not only attracted to others who are similar, but are also more likely to
follow and trust their advice when making purchase decisions (Byrne, 1971; Jiang, Hoegg, Dahl, and
Chattopadhyay, 2010). Taken together, these findings may indicate that perceived similarity in dominance
also influences how users perceive the advice from the chatbot (e.g., a recommendation on which product
to buy). Therefore, based on similarity-attraction theory, we propose that users are more likely to accept the
chatbot’s advice when they perceive the chatbot to be similar to them in terms of its dominance. Hence, we
argue that:
H3: Users’ perceived similarity in dominance influences their likelihood of accepting the chatbot’s
advice.
Research in psychology has pointed out that people who perceive themselves to be similar to another
person (e.g., in attitude or personality traits) are willing to disclose not only more, but also more intimate
information about themselves to this person (Gelman and McGinley, 1978; Knecht, Lippman, and Swap,
1973). The underlying rationale is that similarity provides attributional confidence, reduces uncertainty, and
creates feelings of closeness (Byrne, 1971; Byrne et al., 1967). As revealing personal information usually
makes the discloser feel vulnerable (Kelly and McKillop, 1996), perceived similarity may reduce feelings
of vulnerability and therefore facilitate the process of self-disclosure. In line with this reasoning, we argue
that perceived similarity to a chatbot also lowers the threshold for disclosing personal information during
the interaction. More specifically, perceived similarity in dominance may reduce users’ feelings of
uncertainty during the interaction and therefore increase their willingness to disclose more, or more
intimate personal information (e.g., their email address) to the chatbot. Thus, we propose:
H4: Users’ perceived similarity in dominance influences their degree of self-disclosure to the
chatbot.
It has been shown that the decision to disclose personal information involves an evaluation of costs
and rewards (Altman and Taylor, 1973). In e-commerce, many organizations require users to provide
their email address or other information to gain access to potential rewards (e.g., special offers or
promotions) (Campbell, 2019). Therefore, perceived rewards or benefits have been found to be an
important antecedent of self-disclosure (Al-Natour et al., 2009; Campbell, 2019; Loiacono, 2015).
Thus, we argue that when users are more likely to accept the chatbot’s advice, they focus more on the
potential rewards associated with that advice (e.g., an interesting product recommendation), in contrast
to any perceived costs (e.g., privacy concerns). Consequently, they are willing to disclose more
information about themselves to the chatbot in order to reap the expected rewards. Thus, we
hypothesize that:
H5: A higher likelihood of accepting the chatbot’s advice increases the degree of self-disclosure to
the chatbot.
4 Methodology
To test our hypotheses, we conducted a between-subjects online experiment. Participants were randomized
to interact with one of two chatbots that differed only in their language style. The experimental task was to
find and select a (fictitious) mobile phone plan using the chatbot. We selected this task since chatbots are
often used for such tasks in conversational commerce (Tuzovic and Paluch, 2018; Watson, 2017). The
chatbots were able to answer participants’ questions about mobile phone plans and guide them towards
selecting one plan by asking a set of questions (e.g., “Would you like to have unlimited calls?”). After the
chatbots had recommended a plan, they asked the participants whether they would be interested in getting
more information about this plan and would be willing to enter their email address to receive additional
information via email (see Figure 2). Entering an email address was optional and not required to receive
compensation for participating in the experiment. Upon completion of the experimental task, participants
filled out a questionnaire that asked them to evaluate the chatbot.
4.1 Participants
Participants were recruited from a pool of students at a German university. We consider students to be
appropriate subjects for our experiment because they often shop online (Walczuch and Lundgren, 2004)
and are among the early adopters of chatbots (Brandtzaeg and Følstad, 2017). Using G*Power (Faul,
Erdfelder, Lang, and Buchner, 2007), we calculated a required sample size of about 200 participants (effect
size = .20, α = .05, power = .80). As compensation, we raffled a total of 600€ among all participants
in the experiment. Before the experiment, all participants provided informed consent via an online form
which explained the context of the study, that their data (i.e., survey and conversation data) would be de-
identified, and that they could opt-out of the experiment at any time. In total, 214 subjects participated in
the experiment. After data collection, we excluded five participants who provided incorrect answers to one
of two attention check questions and four participants who did not follow the scenario or encountered
technical difficulties during the interaction with the chatbot. Therefore, our final sample included 205
participants (63 females, 142 males, mean age = 23 years).
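The a-priori power analysis can also be reproduced in R with the pwr package; the study itself used G*Power, so the following is only a sketch of the equivalent computation, assuming the reported effect size of .20 is Cohen's f for a one-way ANOVA with two groups:

library(pwr)

# A-priori power analysis for a one-way ANOVA with two groups
# (assumption: the reported effect size of .20 is Cohen's f)
pwr.anova.test(k = 2, f = 0.20, sig.level = 0.05, power = 0.80)
# yields n of roughly 99 per group, i.e., about 200 participants in total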
4.2 Experimental Conditions and Treatment Design
The online experiment employed a between-subjects design with two conditions (chatbot language style:
dominant vs. submissive). In both conditions, participants were told beforehand that their counterpart was a
chatbot, not a human being. In order to create two different language styles, we focused on the personality
facet of dominance (John and Srivastava, 1999) and formulated two different versions of each message sent
by the chatbots (see Table 1). As prior research has shown, a dominant language style is characterized by
the use of assertions and commands (Isbister and Nass, 2000; Nass et al., 1995) and can be cued by the use of
directives and decisional guidance that are communicated in an authoritative manner (Al-Natour et al.,
2006). In contrast, a submissive language style is characterized by the use of questions and suggestions
(Isbister and Nass, 2000; Nass et al., 1995) as well as timid and unassuming statements (Al-Natour et al.,
2006; Hess et al., 2005). Both language styles were pretested with 28 participants. The results of the pretest
suggested that there were significant differences in how users perceive the level of dominance between the
two language styles of the chatbots.
Submissive Language Style (SUB)
1. “Hello! I’m your personal assistant for mobile phone plans. I would be happy to help you find a new plan.”
2. “Would you like to have unlimited calls?”
3. “Here I have found a mobile phone plan that could possibly meet your needs. I would like to suggest you the following plan:”
4. “I hope this plan satisfies your expectations. If you like the offer, I would be glad to send you further information. Would you like to know more?”
5. “I’m very sorry, I did not understand your question, but I’m trying to get better every day. Could you please rephrase your message? If you need help, you can always enter ‘help’.”
Table 1. Exemplary messages (submissive language style condition shown)
4.3 Measures
We adapted the measurement items in the questionnaire from existing scales. We assessed user dominance,
perceived chatbot dominance, and perceived similarity in dominance by adapting the items from Al-Natour
et al. (2006). Likelihood of accepting the chatbot’s advice was measured using the items from Köhler et al.
(2011). Table 2 shows all constructs and corresponding measurement items. Additionally, several control
variables were examined in the survey (i.e., age, gender, experience with chatbots) or calculated afterwards
(i.e., chat duration). No significant differences were found between the experimental conditions for any of
these control variables.
As illustrated in Figure 2, our dependent variable customer self-disclosure was assessed during the
interaction. After the chatbot had recommended a mobile phone plan, participants were asked (1) whether
they would like to receive more information on their recommended mobile phone plan (i.e., state their
interest in the given plan) and (2) to provide their email address to receive this further information. Only
participants who stated their interest (i.e., clicked on “Yes, please”) could enter their email address in the
following message from the chatbot. Consequently, customer self-disclosure represents a three-category
ordinal variable (0 = no disclosure; 1 = disclosure of product interest; 2 = disclosure of email address).
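As a minimal illustration, this outcome could be coded as an ordered factor in R; the data frame d and the raw column csd_raw are hypothetical names:

# Hypothetical raw codes 0/1/2 recorded during the chat
d$CSD <- ordered(d$csd_raw, levels = c(0, 1, 2),
                 labels = c("no_disclosure", "product_interest", "email_address"))
table(d$CSD)  # frequency of each disclosure category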
Perceived Chatbot Dominance (PCD) / User Dominance (UD)
Stem: PCD: “In my opinion, the chatbot is …” / UD: “In my opinion, I am …”
Items (PCD1–8 / UD1–8): dominant; assertive; domineering; forceful; self-confident; self-assured; firm; persistent
Scale: 7-pt. Likert scale (1 = “strongly disagree”; 7 = “strongly agree”). Source: Al-Natour et al. (2006)

Perceived Similarity in Dominance (PSD)
Stem: “I think the chatbot and I are similar in terms of …”
Items (PSD1–6): my self-confidence level; my self-assurance level; my firmness level; my persistence level; my authoritativeness; my dominance level
Scale: 7-pt. Likert scale (1 = “strongly disagree”; 7 = “strongly agree”). Source: Al-Natour et al. (2006)

Likelihood of Accepting the Chatbot’s Advice (LAA)
LAA1: “What is the likelihood that you would accept the chatbot’s advice?” (1 = “not likely at all”; 7 = “very likely”)
LAA2: “How probable is it that you would accept the chatbot’s advice?” (1 = “not probable at all”; 7 = “very probable”)
LAA3: “How influential do you perceive the chatbot’s advice to be?” (1 = “not influential at all”; 7 = “very influential”)
Source: Köhler et al. (2011)

Customer Self-Disclosure (CSD)
Behavioral measurement during the experiment (see description above and Figure 2). Sources: Kang and Gratch (2014), Moon (2000)

Table 2. Constructs and measurement items
[Figure 2 shows a screenshot of the self-disclosure measurement in the chat: (1) disclosure of product interest and (2) disclosure of email address.]
Figure 2. Screenshot of self-disclosure measurement (dominant language style condition shown)
To assess reliability and validity of the measures, we conducted a confirmatory factor analysis (CFA) using
the structural equation modeling (SEM) package lavaan 0.6-3 in R version 3.5.0 (Rosseel, 2012). However,
several items of both dominance scales (i.e., for user and for chatbot) and of the perceived similarity in
dominance scale did not load as expected, which we believe to be the result of a social desirability bias.
Consequently, we removed these items and only kept three dominance items (i.e., “I am / the chatbot is:
self-confident, self-assured, firm”) and three perceived similarity items (i.e., “The chatbot and I are similar
in terms of: my self-confidence level, my self-assurance level, my firmness level”). After rerunning the
CFA, the loadings for all items on their intended constructs exceeded the recommended threshold of .60
(Gefen and Straub, 2005). Next, we compared the square root of the AVE of each construct with its
correlations with other constructs to assess discriminant validity. All constructs met this criterion. Finally,
Cronbach alpha scores were above .70 and average variance extracted (AVE) values above .50. The CFA
showed acceptable model fit (χ2 = 133.847, df = 58, χ2/df = 2.308, RMSEA = .08, CFI = .918, TLI = .889,
SRMR = .089).
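For illustration, the reduced measurement model can be specified in lavaan roughly as follows; this is a sketch in which the data frame d and the exact item column names (derived from Table 2) are assumptions:

library(lavaan)

# CFA for the retained items: self-confident, self-assured, firm
# (both dominance scales), the three corresponding similarity items,
# and the three advice-acceptance items
cfa_model <- '
  PCD =~ PCD5 + PCD6 + PCD7
  UD  =~ UD5  + UD6  + UD7
  PSD =~ PSD1 + PSD2 + PSD3
  LAA =~ LAA1 + LAA2 + LAA3
'
fit_cfa <- cfa(cfa_model, data = d)
fitMeasures(fit_cfa, c("chisq", "df", "rmsea", "cfi", "tli", "srmr"))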
4.4 Manipulation Check
To check whether the manipulation of the chatbot’s language style was successful, we used three items for
perceived language style based on existing research (e.g., Moon and Nass, 1996; Nass et al., 1995). More
specifically, we asked participants to rate whether the chatbot expressed itself confidently, provided
information in an authoritative manner, and made self-confident statements (7-point Likert scales: 1 =
“strongly disagree”; 7 = “strongly agree”). As this construct displayed high internal consistency as well as
convergent and discriminant validity, we computed a score by averaging participants’ responses across the
three items. A one-way analysis of variance (ANOVA) showed that participants in the DOM condition (M
= 5.84, SD = 1.04) perceived the chatbot’s language style to be significantly more dominant than did those
in the SUB condition (M = 4.87, SD = 1.47; F(1, 203) = 30.05, p < .001), thus indicating that our
manipulation was successful.
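In R, this check (and the analogous test of H1 in Section 5.2) amounts to averaging the items and running a one-way ANOVA; a sketch with hypothetical column names:

# Hypothetical item columns mc1-mc3; condition is a factor (DOM vs. SUB)
d$style_score <- rowMeans(d[, c("mc1", "mc2", "mc3")])
summary(aov(style_score ~ condition, data = d))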
5 Results
We performed the following three steps in our analysis. First, we analyzed the effect of the treatment
(chatbot language style) on perceived chatbot dominance. Second, we calculated a dyadic personality
similarity score to assess the match between the users’ and the chatbots’ dominance. Third, we evaluated
the structural model with the relationships between perceived similarity in dominance, likelihood of
accepting the chatbot’s advice, and customer self-disclosure, again using the SEM package lavaan in R.
5.1 Descriptive Results
The descriptive statistics for all constructs in both experimental conditions are reported in Table 3.
Condition | N | Perceived Chatbot Dominance | User Dominance | Perceived Similarity in Dominance | Likelihood of Accepting the Chatbot’s Advice | CSD: 0 = No Disclosure | CSD: 1 = Product Interest | CSD: 2 = Email Address
DOM | 102 | 5.60 (0.76) | 5.25 (1.09) | 4.69 (1.50) | 5.25 (1.25) | 7 (6.86%) | 12 (9.80%) | 83 (81.37%)
SUB | 103 | 4.83 (1.24) | 5.35 (1.07) | 4.54 (1.46) | 5.56 (1.22) | 10 (9.71%) | 18 (17.48%) | 75 (72.82%)
Total | 205 | 5.21 (1.10) | 5.30 (1.08) | 4.61 (1.48) | 5.40 (1.24) | 17 (8.29%) | 30 (14.63%) | 158 (77.07%)
Note: Means with standard deviations in parentheses; for customer self-disclosure (CSD), numbers and percentages (in parentheses) of participants in each category.
Table 3. Descriptive statistics
5.2 The Effect of Chatbot Language Style on Perceived Chatbot Dominance
To test the effect of chatbot language style (i.e., our treatment) on perceived chatbot dominance, we
conducted a one-way ANOVA. The results showed that participants in the DOM condition (M = 5.60, SD =
0.76) perceived the chatbot to be significantly more dominant than did those in the SUB condition (M =
4.83, SD = 1.24; F(1, 203) = 28.01, p < .001; H1 supported).
5.3 Predicting Perceived Similarity in Dominance
Following the approach of Al-Natour et al. (2006), we calculated a dyadic personality similarity score using
pairwise intraclass correlations (Fisher, 1925) between the participant’s assessment of their own dominance
and their perception of the chatbot’s dominance. The intraclass correlation coefficient (ICC) takes values
in the range [-1.0, 1.0], where 1.0 means perfect agreement. ICCs for each participant were calculated
using the R package psy. In order to derive a new factor representing personality match, the dyadic
similarity scores (i.e., ICC scores) were dichotomized by a median split into two groups (0 = mismatch, 1 =
match). Subsequently, a one-way ANOVA was conducted to test whether the computed personality
(mis)match influenced perceived similarity in dominance. The results showed that participants whose
dominance “matched” the chatbot’s perceived dominance (M = 4.87, SD = 1.30) perceived the chatbot’s
personality to be significantly more similar to their own compared to “mismatched” participants (M = 4.36,
SD = 1.60; F(1, 203) = 6.33, p = .013; H2 supported). In addition, Figure 3 shows that a personality match
is particularly effective when the chatbot uses a dominant language style.
Personality Match | N | Perceived Similarity in Dominance
Mismatch | 103 | 4.36 (1.60)
Match | 102 | 4.87 (1.30)
Total | 205 | 4.61 (1.48)
Note: Means with standard deviations in parentheses.
Table 4. Descriptive results
[Figure 3 plots perceived similarity in dominance by personality (mis)match and chatbot language style condition.]
Figure 3. Personality match
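A sketch of this dyadic similarity computation with the psy package is shown below; the exact item set entered into the ICCs is not reported in detail, so the use of the three retained dominance items and all column names are assumptions:

library(psy)

ud_items  <- c("UD5", "UD6", "UD7")     # self-ratings (assumed columns)
pcd_items <- c("PCD5", "PCD6", "PCD7")  # chatbot ratings (assumed columns)
self_mat <- as.matrix(d[, ud_items])
bot_mat  <- as.matrix(d[, pcd_items])

# Pairwise ICC between each participant's self profile and chatbot profile
icc_scores <- sapply(seq_len(nrow(d)), function(i) {
  icc(cbind(self = self_mat[i, ], chatbot = bot_mat[i, ]))$icc.agreement
})

# Median split: 0 = mismatch, 1 = match
d$personality_match <- as.integer(icc_scores > median(icc_scores))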
5.4 The Effect of Perceived Similarity in Dominance and Likelihood of
Accepting Chatbot Advice on Customer Self-Disclosure
Finally, we specified and estimated a structural model to examine the remaining relationships in our
research model. Because self-disclosure was measured as an ordinal variable, we used robust weighted
least squares estimation (i.e., WLSMV) to fit our model in lavaan. This estimation method has been found
reliable for estimating models with non-normal dependent variables and has been used in prior research
with nominal or ordinal dependent variables (e.g., Santos, Patel, and D’Souza, 2011; Wright and Marett,
2010). The overall fit indices of the structural model showed a good fit to the data (χ2 = 42.189, df = 41,
χ2/df = 1.029, RMSEA = .012, CFI = .994, TLI = .997, SRMR = .034).
Consistent with H3, perceived similarity in dominance had a statistically significant positive effect on
likelihood of accepting the chatbot’s advice (b = 0.359, p < .001). However, the effect of perceived
similarity in dominance on self-disclosure was not significant (b = 0.067, p = .513; H4 rejected). The effect
of likelihood of accepting the chatbot’s advice on self-disclosure was statistically significant (b = 0.171, p =
.025; H5 supported). In order to test whether likelihood of accepting the chatbot’s advice mediates the
relationship between perceived similarity and self-disclosure, we followed the procedure suggested by
Hayes (2018). Thus, we tested a mediation model (Model 4) using a bootstrapping procedure (10,000
samples) with perceived similarity in dominance as the independent variable, likelihood of accepting the
chatbot’s advice as the mediator, and self-disclosure as the dependent variable. The indirect effect of
perceived similarity in dominance on self-disclosure via likelihood of accepting the chatbot’s advice was
statistically significant (b = 0.0118, SE = 0.0079, [95% CI: 0.0007, 0.0339]). The direct effect of perceived
similarity on self-disclosure was not significant (b = -0.027, p = .410). Moreover, the relationships between
perceived similarity and likelihood of accepting the chatbot’s advice (b = 0.132, p = .049) as well as
between likelihood of accepting the chatbot’s advice and self-disclosure were still significant (b = 0.089, p
= .048). Taken together, these results suggest that perceived similarity in dominance has a positive indirect
effect on self-disclosure via an increased likelihood of accepting the chatbot’s advice.
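The structural part of this analysis can be sketched in lavaan as follows; variable names are assumptions, control variables are omitted, and the bootstrapped mediation test reported above followed Hayes (2018) Model 4 rather than this specification:

library(lavaan)

sem_model <- '
  PSD =~ PSD1 + PSD2 + PSD3
  LAA =~ LAA1 + LAA2 + LAA3

  # Structural paths (H3, H5, and the direct path for H4)
  LAA ~ a * PSD
  CSD ~ b * LAA + cp * PSD

  # Indirect effect of similarity on disclosure via advice acceptance
  indirect := a * b
'
# CSD is the three-category ordinal disclosure variable (0/1/2)
fit_sem <- sem(sem_model, data = d, ordered = "CSD", estimator = "WLSMV")
summary(fit_sem, fit.measures = TRUE)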
6 Discussion
Drawing on social response theory and similarity-attraction theory, we investigated how the language style
of chatbots influences users’ perceptions of similarity in dominance and how these perceptions affect their
self-disclosure behavior. Our results yielded three key findings. First, users attribute a dominant personality
to a chatbot when its language style is characterized by confident and assertive statements. Second, the
interplay of users’ own dominance and the chatbot’s perceived dominance creates perceptions of similarity.
Third, perceived similarity to a chatbot increases users’ degree of self-disclosure via an increased likelihood
of accepting the chatbot’s advice.
6.1 Theoretical Contributions
Our study makes three major theoretical contributions. First, our findings suggest that language style is an
important design feature of chatbots, conversational agents, and other natural language interfaces.
Consistent with social response theory, users ascribe a different personality to a chatbot, depending on
whether it uses dominant (e.g., “you should”, “I’m sure that”) or submissive language (e.g., “you could”,
“maybe”). A possible explanation is that, since the use of natural language is a unique human capability
and humans form personality impressions from others’ language within a few seconds (Costa and
MacCrae, 1992), users also automatically infer personality traits from the way a chatbot communicates.
This finding has important implications for the design of chatbots. Existing design knowledge for chatbots
often consists of high-level suggestions such as “Don’t sound like a robot” or “Give your chatbot a
personality” (McTear, 2017). In contrast, our findings provide a more nuanced understanding of how one
specific personality facet (i.e., dominance) can be implemented in a chatbot using one specific design
feature (i.e., language style).
Our second theoretical contribution is to highlight the interplay of design features and user characteristics
that create perceptions of similarity. Our results indicate that during the interaction with a chatbot, a
match between the chatbot’s perceived dominance (cued through its language style) and the user’s own
level of dominance can be established. This is in line with prior studies that have found a match between
other design features and user characteristics such as gender (e.g., Beldad, Hegner, and Hoppen, 2016;
Nass, Moon, and Green, 1997; Qiu and Benbasat, 2010) and ethnicity (e.g., Qiu and Benbasat, 2010).
Consequently, our study provides further evidence that it is the interplay between user characteristics and
design features, not only the design per se, that shapes users’ perceptions of chatbots.
Third, our study supplements literature on customer self-disclosure. Our results show that perceived
similarity in dominance has a positive indirect effect on self-disclosure via the user’s likelihood of
accepting the chatbot’s advice. Therefore, contrary to our hypothesis based on similarity-attraction theory,
there seems to be no direct effect of perceived similarity on self-disclosure. This finding suggests that,
particularly in business interactions (e.g., e-commerce, conversational commerce), self-disclosure behavior
is not solely driven by users’ perceptions of the chatbot, but also by their evaluation of the benefits
associated with disclosing certain information (e.g., receiving interesting information about a product).
Therefore, a possible explanation in line with research on trust building (e.g., Saffarizadeh et al., 2017;
Wang, Qiu, Kim, and Benbasat, 2016) is that, despite users’ automatic social (and often emotional)
responses to a chatbot (e.g., forming personality impressions), self-disclosure also involves cognitive
decision-making processes.
6.2 Practical Implications
Our results also have important practical implications. For chatbot designers and organizations who aim to
introduce a chatbot to their customers, it is key to understand that its language style can have a large impact
on users’ perceptions and behavior. Consequently, designers and organizations should not only focus on the
technical capabilities of their chatbot (e.g., architecture, natural language processing algorithms, integration
with other IT systems), but also carefully examine and test how its language is perceived by its users and
whether it fits the context (e.g., application domain, e-commerce channel). Moreover, designers and
organizations should consider the important role that individual user characteristics (e.g., personality traits)
play in shaping the interaction between a chatbot and its users. Our study shows that for increasing self-
disclosure, not only the design of the chatbot, but also how its design matches certain personality
characteristics of its users is crucial. Therefore, designers and organizations who are aiming to introduce a
new chatbot or convince users to engage with their existing chatbot could analyze their users’ personality
traits and adapt the chatbot’s language style accordingly. Since only a few users are willing to provide
sensitive personality-related information to organizations, automated approaches could be used to infer
personality traits from users’ written text during the interaction (e.g., IBM Watson Personality Insights; see
IBM, 2019), similar to sentiment analysis approaches that automatically extract users’ emotions from their
messages (Feine, Morana, and Gnewuch, 2019). Finally, our analysis highlights that, although the chatbot’s
design is important, there are also cognitive evaluations involved before users disclose personal information
(e.g., product interests, email address) to a chatbot. Therefore, organizations using chatbots (e.g., for
conversational commerce) should critically examine whether and how their services provide users with
added value that justifies the need for collecting personal information.
6.3 Limitations and Future Research
Our study has several limitations that suggest potential avenues for future research. First, although existing
research has often studied self-disclosure behavior using experimental methods (e.g., Kang and Gratch,
2014; Moon, 2000; Sah and Peng, 2015), this research design might limit the external validity and
generalizability of our findings because potential concerns about disclosing personal information might be
reduced in an experimental setting. To better understand the potential severity of this issue, we asked
participants at the end of the questionnaire to explain why they did or did not disclose their personal interests
or email address to the chatbot. Only one of the participants responding to the open-ended question
mentioned that she thought disclosing her email address was necessary for completing the experiment. The
responses of the remaining participants did not address this issue but rather focused on aspects related to the
chatbot’s recommendation (e.g., more information about the product) or the users’ general privacy
concerns. However, to strengthen the external validity of our findings, future research could observe self-
disclosure behavior in more realistic settings (e.g., using field studies). Moreover, when the chatbot asked
for the participants’ email address, there was no option to decline (e.g., a “no thanks” button). While
many participants just left the chat at this point in time without disclosing their email address, we cannot
rule out that providing an explicit option to decline would have resulted in different outcomes.
Second, our study focused on only one specific design feature (i.e., language style) and one facet of
personality (i.e., dominance). In order to create two different language styles, we used different linguistic
elements in the chatbots’ responses (e.g., assertions and directives vs. questions and suggestions). However,
the chatbots also introduced themselves differently in their first message (i.e., expert vs. personal assistant).
Since research has shown that even minimal cues can substantially affect users’ perceptions of chatbots
(e.g., Gnewuch, Morana, Adam, and Maedche, 2018), this might have also had an impact on users’
perceptions of the chatbot’s personality because users might have allocated different social roles to the
chatbots. Therefore, future research could examine the impact of social roles and other design features (e.g.,
avatars, emojis) on user perceptions of a chatbot’s personality. Moreover, given the complex nature of
human personality, most existing studies have examined only one personality facet (e.g., Al-Natour et al.,
2006; Hess et al., 2005; Moon and Nass, 1996; Nass et al., 1995). Thus, future studies could investigate a
combination of personality facets (e.g., dominance and neuroticism) and other user characteristics (e.g.,
gender) to better understand the interplay between user characteristics and design features of chatbots.
Third, we assessed users’ level of dominance using a self-report questionnaire. However, research has
shown that personality traits can also be inferred from textual data (Yarkoni, 2010). Since the interactions
in our experiment were rather short (i.e., six minutes on average) and most messages from users were
shorter than five words, we could not use a data-driven approach to cross-validate our findings. Thus, future
research may explore the use of data-driven approaches to identify users’ personality traits and compute
similarity scores from conversation data. This could also enable chatbots to dynamically adapt their design
(e.g., language style) in real-time to become more similar to the user during the interaction.
Finally, our research was primarily concerned with how self-disclosure behavior is influenced by the
chatbot’s design and its interplay with the users’ level of dominance. However, future research is needed to
expand our research model and to examine other potentially relevant mediating and moderating factors,
such as privacy concerns or trust, that have been found to affect self-disclosure behavior in other contexts
(e.g., Benlian, Klumpe, and Hinz, 2019; Saffarizadeh et al., 2017).
References
Adam, M., and Klumpe, J. (2019). Onboarding with a Chat The Effects of Message Interactivity and
Platform Self-Disclosure on User Disclosure Propensity. Proceedings of the Twenty-Seventh
European Conference on Information Systems (ECIS2019). Stockholm, Sweden.
Adomavicius, G., and Tuzhilin, A. (2001). Using Data Mining Methods to Build Customer Profiles.
Computer, 34(2), 7482.
Al-Natour, S., Benbasat, I., and Cenfetelli, R. (2005). The role of similarity in e-commerce
interactions: The case of online shopping assistants. Proceedings of the 4th Annual Pre-ICIS
Workshop on HCI Research in MIS. Las Vegas, NV, USA.
Al-Natour, S., Benbasat, I., and Cenfetelli, R. (2009). The Antecedents of Customer Self-Disclosure to
Online Virtual Advisors. Proceedings of the 30th International Conference on Information
Systems (ICIS2009). Phoenix, AZ, USA.
Al-Natour, S., Benbasat, I., and Cenfetelli, R. (2011). The Adoption of Online Shopping Assistants:
Perceived Similarity as an Antecedent to Evaluative Beliefs. Journal of the Association for
Information Systems, 12(5), 347374.
Al-Natour, S., Benbasat, I., and Cenfetelli, R. T. (2006). The Role of Design Characteristics in
Shaping Perceptions of Similarity: The Case of Online Shopping Assistants. Journal of the
Association for Information Systems, 7(12), 821861.
Altman, I., and Taylor, D. A. (1973). Social penetration: The development of interpersonal
relationships. Holt, Rinehart & Winston.
Beldad, A., Hegner, S., and Hoppen, J. (2016). The effect of virtual sales agent (VSA) gender -
Product gender congruence on product advice credibility, trust in VSA and online vendor, and
purchase intention. Computers in Human Behavior, 60, 6272.
Benlian, A., Klumpe, J., and Hinz, O. (2019). Mitigating the intrusive effects of smart home assistants
by using anthropomorphic design features: A multimethod investigation. Information Systems
Journal, (forthcoming).
Brandtzaeg, P. B., and Følstad, A. (2017). Why people use chatbots. Proceedings of the 4th
International Conference on Internet Science, 377392. Thessaloniki, Greece.
Byrne, D. (1971). The Attraction Paradigm. New York, NY, USA: Academic Press.
Byrne, D., Griffitt, W., and Stefaniak, D. (1967). Attraction and similarity of personality
characteristics. Journal of Personality and Social Psychology, 5(1), 8290.
Campbell, D. (2019). A Relational Build-up Model of Consumer Intention to Self-disclose Personal
Information in E-commerce B2C Relationships. AIS Transactions on Human-Computer
Interaction, 11(1), 3353.
Carli, L. L. (1990). Gender, language, and influence. Journal of Personality and Social Psychology,
59(5), 941951.
Chattaraman, V., Kwon, W.-S., Gilbert, J. E., and Ross, K. (2019). Should AI-Based, conversational
digital assistants employ social- or task-oriented interaction style? A task-competency and
reciprocity perspective for older adults. Computers in Human Behavior, 90, 315330.
Costa, P. T., and MacCrae, R. R. (1992). Revised NEO personality inventory (NEO PI-R) and NEO
five-factor inventory (NEO-FFI): Professional manual. Odessa, FL: Psychological Assessment
Resources.
Cozby, P. C. (1973). Self-disclosure: A literature review. Psychological Bulletin, 79(2), 7391.
Dale, R. (2016). The return of the chatbots. Natural Language Engineering, 22(5), 811817.
Diederich, S., Brendel, A. B., Lichtenberg, S., and Kolbe, L. M. (2019). Design for fast request
fulfillment or natural interaction? Insights from an experiment with a conversational agent.
Proceedings of the Twenty-Seventh European Conference on Information Systems (ECIS2019).
Stockholm, Sweden.
Diederich, S., Janßen-Müller, M., Brendel, A. B., and Morana, S. (2019). Emulating Empathetic
Behavior in Online Service Encounters with Sentiment-Adaptive Responses: Insights from an
Experiment with a Conversational Agent. Proceedings of the 40th International Conference on
Gnewuch et al. / Customer Self-Disclosure in Conversational Commerce
Twenty-Eigth European Conference on Information Systems (ECIS2020), Marrakesh, Morocco. 14
Information Systems (ICIS2019). Munich, Germany.
Facebook. (2018). https://www.facebook.com/business/news/david-marcus-f8-keynote-2018.
Retrieved February 22, 2020, from https://www.facebook.com/business/news/david-marcus-f8-
keynote-2018
Faul, F., Erdfelder, E., Lang, A.-G., and Buchner, A. (2007). G*Power 3: A flexible statistical power
analysis program for the social, behavioral, and biomedical sciences. Behavior Research
Methods, 39(2), 175191.
Feine, J., Gnewuch, U., Morana, S., and Maedche, A. (2019). A Taxonomy of Social Cues for
Conversational Agents. International Journal of Human-Computer Studies, 132, 138161.
Feine, J., Morana, S., and Gnewuch, U. (2019). Measuring Service Encounter Satisfaction with
Customer Service Chatbots using Sentiment Analysis. Proceedings of the 14th International
Conference on Wirtschaftsinformatik (WI2019). Siegen, Germany.
Fisher, R. (1925). Fisher RA: Statistical methods for research workers. Edinburgh: Genesis
Publishing, Oliver and Boyd; In Biological monographs and manuals.
Følstad, A., and Brandtzæg, P. B. (2017). Chatbots and the new world of HCI. Interactions, 24(4), 38–42.
Galassi, J. P., Delo, J. S., Galassi, M. D., and Bastien, S. (1974). The college self-expression scale: A
measure of assertiveness. Behavior Therapy, 5(2), 165–171.
Gefen, D., and Straub, D. W. (2005). A practical guide to factorial validity using PLS-Graph: Tutorial
and annotated example. Communications of the Association for Information Systems, 16, 91–109.
Gelman, R., and McGinley, H. (1978). Interpersonal liking and self-disclosure. Journal of Consulting
and Clinical Psychology, 46(6), 1549–1551.
Gnewuch, U., Morana, S., Adam, M. T. P., and Maedche, A. (2018). Faster is Not Always Better:
Understanding the Effect of Dynamic Response Delays in Human-Chatbot Interaction.
Proceedings of the Twenty-Sixth European Conference on Information Systems (ECIS2018).
Portsmouth, UK.
Gnewuch, U., Morana, S., and Maedche, A. (2017). Towards Designing Cooperative and Social
Conversational Agents for Customer Service. Proceedings of the 38th International Conference
on Information Systems (ICIS2017). Seoul, South Korea.
Hayes, A. F. (2018). Introduction to Mediation, Moderation, and Conditional Process Analysis: A
Regression-based Approach (2nd ed.). New York, NY, USA: Guilford Press.
Hess, T., Fuller, M., and Campbell, D. (2009). Designing interfaces with social presence: Using
vividness and extraversion to create social recommendation agents. Journal of the Association
for Information Systems, 10(12), 889–919.
Hess, T., Fuller, M., and Mathew, J. (2005). Involvement and Decision-Making Performance with a
Decision Aid: The Influence of Social Multimedia, Gender, and Playfulness. Journal of
Management Information Systems, 22(3), 15–54.
Holtgraves, T. (2011). Text messaging, personality, and the social context. Journal of Research in
Personality, 45(1), 92–99.
IBM. (2019). IBM Watson Personality Insights: The science behind the service. Retrieved March 9,
2020, from https://cloud.ibm.com/docs/services/personality-insights?topic=personality-insights-science
Isbister, K., and Nass, C. (2000). Consistency of personality in interactive characters: verbal cues,
non-verbal cues, and user characteristics. International Journal of Human-Computer Studies, 53,
251–267.
Jiang, L., Hoegg, J., Dahl, D. W., and Chattopadhyay, A. (2010). The Persuasive Role of Incidental
Similarity on Attitudes and Purchase Intentions in a Sales Context. Journal of Consumer
Research, 36(5), 778–791.
John, O. P., and Srivastava, S. (1999). The Big-Five trait taxonomy: History, measurement, and
theoretical perspectives. In L. Pervin and O. P. John (Eds.), Handbook of personality: Theory
and research (2nd ed., pp. 102–138). New York, NY, USA: Guilford Press.
Kang, S.-H., and Gratch, J. (2014). Exploring users' social responses to computer counseling interviewers' behavior. Computers in Human Behavior, 34, 120–130.
Kelly, A. E., and McKillop, K. J. (1996). Consequences of revealing personal secrets. Psychological
Bulletin, 120(3), 450–465.
Knecht, L., Lippman, D., and Swap, W. (1973). Similarity, attraction, and self-disclosure. Proceedings
of the Annual Convention of the American Psychological Association, 205–206.
Köhler, C. F., Breugelmans, E., and Dellaert, B. G. C. (2011). Consumer Acceptance of
Recommendations by Interactive Decision Aids: The Joint Role of Temporal Distance and
Concrete Versus Abstract Communications. Journal of Management Information Systems, 27(4),
231–260.
Laumer, S., Maier, C., and Gubler, F. T. (2019). Chatbot acceptance in healthcare: Explaining user
adoption of conversational agents for disease diagnosis. Proceedings of the Twenty-Seventh
European Conference on Information Systems (ECIS2019). Stockholm, Sweden.
Leaper, C., and Ayres, M. M. (2007). A meta-analytic review of gender variations in adults' language use: Talkativeness, affiliative speech, and assertive speech. Personality and Social Psychology Review, 11(4), 328–363.
Li, J., Zhou, M. X., Yang, H., and Mark, G. (2017). Confiding in and Listening to Virtual Agents: The
Effect of Personality. Proceedings of the 22nd International Conference on Intelligent User
Interfaces (IUI '17), 275–286. Limassol, Cyprus.
Loiacono, E. (2015). Self-Disclosure Behavior on Social Networking Web Sites. International Journal
of Electronic Commerce, 19(2), 66–94.
Matthews, G., Deary, I. J., and Whiteman, M. C. (2003). Personality traits (2nd ed.). New York, NY,
USA: Cambridge University Press.
McTear, M. F. (2017). The Rise of the Conversational Interface: A New Kid on the Block? In FETLT
2016 (pp. 38–49). Cham: Springer.
Messina, C. (2015). 2016 will be the year of conversational commerce. Retrieved November 16, 2019,
from https://medium.com/chris-messina/2016-will-be-the-year-of-conversational-commerce-1586e85e3991
Moon, Y. (2000). Intimate exchanges: Using computers to elicit self-disclosure from consumers. Journal of Consumer Research, 26(4), 323–339.
Moon, Y., and Nass, C. (1996). How Real Are Computer Personalities? Psychological Responses to
Personality Types in Human-Computer Interaction. Communication Research, 23(6), 651–674.
Nass, C., and Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of
Social Issues, 56(1), 81–103.
Nass, C., Moon, Y., Fogg, B. J., Reeves, B., and Dryer, C. D. (1995). Can computer personalities be
human personalities? International Journal of Human-Computer Studies, 43(2), 223–239.
Nass, C., Moon, Y., and Green, N. (1997). Are machines gender neutral? Gender-stereotypic
responses to computers with voices. Journal of Applied Social Psychology, 27(10), 864–876.
Nass, C., Steuer, J., and Tauber, E. R. (1994). Computers are social actors. Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems (CHI '94), 72–78. Boston, USA.
Olivero, N., and Lunt, P. (2004). Privacy versus willingness to disclose in e-commerce exchanges: The
effect of risk awareness on the relative role of trust and control. Journal of Economic
Psychology, 25(2), 243–262.
Pennebaker, J. W., and King, L. A. (1999). Linguistic styles: Language use as an individual difference.
Journal of Personality and Social Psychology, 77(6), 1296–1312.
Pfeuffer, N., Adam, M., Toutaoui, J., Hinz, O., and Benlian, A. (2019). Mr. and Mrs. Conversational
Agent - Gender Stereotyping in Judge-Advisor Systems and the Role of Egocentric Bias.
Proceedings of the 40th International Conference on Information Systems (ICIS 2019). Munich,
Germany.
Pfeuffer, N., Benlian, A., Gimpel, H., and Hinz, O. (2019). Anthropomorphic Information Systems.
Business & Information Systems Engineering, 61(4), 523–533.
Qiu, L., and Benbasat, I. (2010). A study of demographic embodiments of product recommendation
agents in electronic commerce. International Journal of Human-Computer Studies, 68(10), 669–688.
Rich, M. K., and Smith, D. C. (2000). Determining relationship skills of prospective salespeople.
Journal of Business & Industrial Marketing, 15(4), 242–259.
Rietz, T., Benke, I., and Maedche, A. (2019). The Impact of Anthropomorphic and Functional Chatbot
Design Features in Enterprise Collaboration Systems on User Acceptance. Proceedings of the
14th International Conference on Wirtschaftsinformatik (WI2019). Siegen, Germany.
Rosseel, Y. (2012). lavaan: An R Package for Structural Equation Modeling. Journal of Statistical
Software, 48(2), 1–36.
Saffarizadeh, K., Boodraj, M., and Alashoor, T. (2017). Conversational Assistants: Investigating
Privacy Concerns, Trust, and Self-Disclosure. Proceedings of the 38th International Conference
on Information Systems (ICIS2017). Seoul, South Korea.
Sah, Y. J., and Peng, W. (2015). Effects of visual and linguistic anthropomorphic cues on social
perception, self-awareness, and information disclosure in a health website. Computers in Human
Behavior, 45, 392–401.
Santos, B., Patel, P., and D'Souza, R. (2011). Venture Capital Funding for Information Technology Businesses. Journal of the Association for Information Systems, 12(1), 57–87.
Schlee, R. P. (2005). Social styles of students and professors: Do students' social styles influence their preferences for professors? Journal of Marketing Education, 27(2), 130–142.
Schuetzler, R. M., Giboney, J. S., Grimes, G. M., and Nunamaker, J. F. (2018). The influence of
conversational agent embodiment and conversational relevance on socially desirable responding.
Decision Support Systems, 114, 94–102.
Spiekermann, S., Grossklags, J., and Berendt, B. (2001). E-privacy in 2nd generation E-commerce:
privacy preferences versus actual behavior. Proceedings of the 3rd ACM Conference on
Electronic Commerce, 38–47. Tampa, FL, USA.
Tuzovic, S., and Paluch, S. (2018). Conversational Commerce – A New Era for Service Business Development? In M. Bruhn and K. Hadwich (Eds.), Service Business Development (pp. 81–100).
Wiesbaden: Springer.
Walczuch, R., and Lundgren, H. (2004). Psychological antecedents of institution-based consumer trust
in e-retailing. Information and Management, 42(1), 159–177.
Wambsganss, T., Winkler, R., Söllner, M., and Leimeister, J. M. (2020). A Conversational Agent to
Improve Response Quality in Course Evaluations. CHI Conference on Human Factors in
Computing Systems Extended Abstracts. Honolulu, HI, USA.
Wang, W., Qiu, L., Kim, D., and Benbasat, I. (2016). Effects of rational and social appeals of online
recommendation agents on cognition- and affect-based trust. Decision Support Systems, 86, 48–60.
Watson, H. J. (2017). Preparing for the Cognitive Generation of Decision Support. MIS Quarterly
Executive, 16(3), 153–169.
Winkler, R., and Söllner, M. (2018). Unleashing the Potential of Chatbots in Education: A State-Of-
The-Art Analysis. Academy of Management Annual Meeting (AOM). Chicago, USA.
Wright, R. T., and Marett, K. (2010). The Influence of Experiential and Dispositional Factors in
Phishing: An Empirical Investigation of the Deceived. Journal of Management Information
Systems, 27(1), 273–303.
Yarkoni, T. (2010). Personality in 100,000 Words: A large-scale analysis of personality and word use
among bloggers. Journal of Research in Personality, 44(3), 363–373.
Zhou, M. X., Mark, G., Li, J., and Yang, H. (2019). Trusting Virtual Agents: The Effect of
Personality. ACM Transactions on Interactive Intelligent Systems, 9(2–3).