This is the author’s version of a work that was published in the following source:
Gnewuch, U., Meng, Y., and Maedche, A. (2020). “The Effect of Perceived Similarity in Dominance on
Customer Self-Disclosure to Chatbots in Conversational Commerce,” in Proceedings of the 28th European
Conference on Information Systems (ECIS 2020), Marrakech, Morocco.
Please note: Copyright is owned by the author and/or the publisher. Commercial use is not allowed.
Institute of Information Systems and Marketing (IISM)
Kaiserstraße 89-93
76133 Karlsruhe - Germany
https://iism.kit.edu
Karlsruhe Service Research Institute (KSRI)
Kaiserstraße 89
76133 Karlsruhe – Germany
https://ksri.kit.edu
© 2017. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
THE EFFECT OF PERCEIVED SIMILARITY IN DOMINANCE
ON CUSTOMER SELF-DISCLOSURE TO CHATBOTS IN
CONVERSATIONAL COMMERCE
Research paper
Gnewuch, Ulrich, Karlsruhe Institute of Technology (KIT), Institute of Information Systems
and Marketing (IISM), Karlsruhe, Germany, ulrich.gnewuch@kit.edu
Yu, Meng, Karlsruhe Institute of Technology (KIT), Institute of Information Systems and
Marketing (IISM), Karlsruhe, Germany, mengyu0607@gmail.com
Maedche, Alexander, Karlsruhe Institute of Technology (KIT), Institute of Information
Systems and Marketing (IISM), Karlsruhe, Germany, alexander.maedche@kit.edu
Abstract
Recent years have seen increased interest in the application of chatbots for conversational commerce.
However, many chatbots fall short of these expectations because customers are reluctant to
disclose personal information to them (e.g., product interests, email addresses). Drawing on social
response theory and similarity-attraction theory, we investigated (1) how a chatbot’s language style
influences users’ perceived similarity in dominance (i.e., an important facet of personality) between
them and the chatbot and (2) how these perceptions influence their self-disclosure behavior. We
conducted an online experiment (N=205) with two chatbots with different language styles (dominant
vs. submissive). Our results show that users attribute a dominant personality to a chatbot that uses
strong language with frequent assertions, commands, and self-confident statements. Moreover, we find
that the interplay of the user’s own dominance and the chatbot’s perceived dominance creates
perceptions of similarity. These perceptions of similarity increase users’ degree of self-disclosure via
an increased likelihood of accepting the chatbot’s advice. Our study reveals that language style is an
important design feature of chatbots and highlights the need to account for the interplay of design
features and user characteristics. Furthermore, it also advances our understanding of the impact of
design on self-disclosure behavior.
Keywords: chatbot, language style, dominance, self-disclosure, personality similarity.
1 Introduction
Conversational commerce refers to the use of chat, voice, and other natural language interfaces in e-
commerce environments. While customers’ questions and inquiries were primarily handled by human
service agents in the past, recent advances in artificial intelligence (AI) have led to a growing interest in
using chatbots (i.e., text-based conversational agents) for such tasks (Tuzovic and Paluch, 2018; Watson,
2017). For example, customers can use chatbots to find and book airline flights or search and buy fashion
items (Dale, 2016; Tuzovic and Paluch, 2018; Watson, 2017). In contrast to voice assistants, such as
Amazon’s Alexa or Google Home, chatbots rely on written language and can be found on many e-
commerce websites and messaging platforms (Dale, 2016). For example, Facebook announced that there
are already more than 300,000 chatbots on Facebook Messenger (Facebook, 2018).
While marketers have attributed great potential to chatbots, even declaring 2016 the year of conversational
commerce (Messina, 2015), organizations have realized that there are several challenges that need to be
addressed when using chatbots for conversational commerce (Tuzovic and Paluch, 2018). More
specifically, research has shown that users are reluctant to disclose personal information to chatbots
because, for example, they are worried about what happens with their data (Saffarizadeh, Boodraj, and
Alashoor, 2017; Tuzovic and Paluch, 2018). However, customer self-disclosure is key to establishing long-
term relationships with customers and essential for many business transactions (e.g., purchasing processes,
marketing campaigns) (Campbell, 2019). Self-disclosure can be understood as the process of revealing
personal information, such as name, product interests, or email address, to an e-commerce provider
(Campbell, 2019; Cozby, 1973). Given the importance of customer self-disclosure for e-commerce
providers, much research has examined the antecedents to users’ willingness to disclose personal
information (e.g., Al-Natour, Benbasat, and Cenfetelli, 2009; Campbell, 2019; Moon, 2000).
However, while existing research provides valuable knowledge about the antecedents of self-disclosure,
research on the impact of specific design features on self-disclosure (Al-Natour et al., 2009; Spiekermann,
Grossklags, and Berendt, 2001), particularly in the context of chatbots (e.g., Adam and Klumpe, 2019), is
scarce. Research has identified a myriad of design features that could potentially influence self-disclosure
to a chatbot (Feine, Gnewuch, Morana, and Maedche, 2019; Pfeuffer, Benlian, Gimpel, and Hinz, 2019).
Since users interact with a chatbot using natural language, its language style might be a particularly
important design feature and a social cue that can influence how users interact with a chatbot (e.g.,
Chattaraman, Kwon, Gilbert, and Ross, 2019; Sah and Peng, 2015). For example, research has shown that
users are more willing to confide in a chatbot that uses a dominant language style during a job interview
(Zhou, Mark, Li, and Yang, 2019). Moreover, building on social response theory, studies have shown that
users ascribe a personality (e.g., extroverted/introverted) to a computer based on its language style (Moon
and Nass, 1996; Nass, Moon, Fogg, Reeves, and Dryer, 1995). Furthermore, these personality attributions
may even lead users to form perceptions about how similar they are to the computer (Al-Natour, Benbasat,
and Cenfetelli, 2005, 2006; Hess, Fuller, and Mathew, 2005).
Despite the importance of language for the design of chatbots, there is a lack of research on how a chatbot’s
language style influences users’ perceptions and self-disclosure behavior. To take a first step in closing this
research gap, we focus on one specific personality facet (i.e., dominance) that is often reflected in a
person’s language style (Carli, 1990; Leaper and Ayres, 2007) and examine the effect of users’ perceived
similarity in dominance between them and the chatbot. Consequently, we investigate the following two
research questions: (1) How does a chatbot’s language style influence users’ perceived similarity in
dominance between them and the chatbot? (2) How do these perceptions influence their self-disclosure
behavior?
To address these research questions, we conducted a two-condition, between-subjects online experiment in
which participants interacted with one of two chatbots in a conversational commerce scenario. The two
chatbots differed only in their language style (dominant vs. submissive). The dominant chatbot used strong
language with frequent assertions, commands, and self-confident statements. In contrast, the submissive
chatbot primarily used suggestions and unassuming statements. Customer self-disclosure was assessed
during the interaction when the chatbot asked users for their personal information (i.e., product interest and
email address). Our findings show that users attribute a dominant personality to a chatbot when its language
style is characterized by confident and assertive statements. Moreover, we find that the interplay of users’
own dominance and the chatbot’s perceived dominance creates perceptions of similarity. Furthermore,
perceived similarity in dominance has a positive indirect effect on self-disclosure via an increased
likelihood of accepting the chatbot’s advice. Our study makes three major contributions. First, it advances
our understanding of the impact of language style as an important design feature of chatbots. Second, it
provides further evidence that it is the interplay between user characteristics and design features, not only
the design per se, that shapes users’ perceptions of chatbots. Third, this study extends prior research on
customer self-disclosure by demonstrating how perceptions of similarity between a user and a chatbot
influence self-disclosure behavior.
2 Theoretical Foundations and Related Work
2.1 Dominance
Researchers have defined human personality as stable traits that reflect basic dimensions on which
people differ (Matthews, Deary, and Whiteman, 2003). The well-known Five-Factor Model describes
human personality in terms of five core traits: extraversion, agreeableness, neuroticism,
conscientiousness, and openness to experience (John and Srivastava, 1999). Extraversion has been
identified as a particularly relevant trait in the context of social interaction and has therefore often
been studied in human-computer interaction (HCI) research (e.g., Al-Natour et al., 2006; Hess, Fuller, and
Campbell, 2009). Extraversion implies an energetic approach toward the social and material world and
includes facets such as dominance, sociability, and positive emotionality (John and Srivastava, 1999).
Dominant individuals are self-confident, self-assertive, and willing to take charge (Al-Natour,
Benbasat, and Cenfetelli, 2011). Consequently, the way people communicate is often influenced by
their dominance level. In general, dominant people state their opinions with assurance and force and
are able to influence and lead others (Galassi, Delo, Galassi, and Bastien, 1974; Schlee, 2005). In
contrast, submissive people use more equivocal and less confident language (Rich and Smith, 2000).
2.2 Language Style of Chatbots
Text-based conversational agents, commonly referred to as chatbots, have a long history in HCI. However,
recent developments in AI research and technology have opened up interesting possibilities for chatbots in
conversational commerce (Følstad and Brandtzæg, 2017). Consequently, many organizations are turning to
chatbots in order to reduce their costs and make it easier for customers to interact with them (Gnewuch,
Morana, and Maedche, 2017; Watson, 2017). For example, customers can already use chatbots to find and
book flights, hail a taxi, and check public transport schedules (Watson, 2017). Furthermore, chatbots have
been shown to be effective in other domains such as education (e.g., Wambsganss, Winkler, Söllner, and
Leimeister, 2020; Winkler and Söllner, 2018), team collaboration (e.g., Rietz, Benke, and Maedche, 2019),
and healthcare (e.g., Laumer, Maier, and Gubler, 2019).
Extant research on chatbots and conversational agents has shown that many of their design features (e.g.,
human-like avatars, language style) are unconsciously perceived as social cues and trigger social responses
from users (e.g., Adam and Klumpe, 2019; Diederich, Brendel, Lichtenberg, and Kolbe, 2019; Diederich,
Janßen-Müller, Brendel, and Morana, 2019; Pfeuffer, Adam, Toutaoui, Hinz, and Benlian, 2019). This
phenomenon has been extensively studied in many domains under the Computers are Social Actors
(CASA) paradigm (Nass and Moon, 2000; Nass, Steuer, and Tauber, 1994). According to social response
theory, which is based on CASA, even rudimentary social cues are sufficient to generate a wide range of
social responses (Nass and Moon, 2000). For example, Nass et al. (1995) found that when computers are
endowed with personality-like characteristics, users respond to them as if they have personalities. More
specifically, Isbister and Nass (2000) showed that users distinguished between an extroverted computer that
used “strong and friendly language expressed in the form of confident assertions” and an introverted
computer that used “weaker language expressed in the form of questions and suggestions” (p. 258). In the
context of recommendation agents, Al-Natour et al. (2006) found that an agent using more assertive
statements and expressions of higher confidence levels was perceived as more dominant. Moreover, Li et
al. (2017) showed that users were more willing to confide in a chatbot with a reserved and dominant
personality as compared to a chatbot with a warm and cheerful personality. Taken together, these findings
indicate that language style is an important design feature and that users may form impressions of the
chatbot’s personality based on its language style.
2.3 Similarity-Attraction Theory
Similarity-attraction theory posits that people like and are attracted to others who are similar, rather than
dissimilar, to themselves (Byrne, 1971). More specifically, people who share similar personality traits are
attracted to each other (Byrne, Griffitt, and Stefaniak, 1967). Research has shown that this theory not only
applies to interpersonal communication, but also to HCI. For example, Moon and Nass (1996) found that
users were more attracted to a computer exhibiting similar personality traits compared to a dissimilar
computer. Moreover, Al-Natour et al. (2011) showed that perceived personality similarity to a
recommendation agent either directly or indirectly influenced users’ perceived enjoyment, ease of use,
usefulness, social presence, and trusting beliefs. In contrast, Li et al. (2017) found that personality similarity
between a user and a chatbot taking the role of a virtual interviewer did not influence the user’s willingness
to confide in and listen to the chatbot. In summary, research has shown that similarity-attraction theory
generally also applies to HCI, but there is a lack of research on how perceptions of similarity to a chatbot
influence user behavior when interacting with a chatbot.
2.4 Customer Self-Disclosure
Broadly speaking, self-disclosure can be defined as any personal information that a person communicates
to another (Cozby, 1973). The degree of self-disclosure is often categorized along two dimensions: (1)
breadth or amount of information disclosed and (2) depth or intimacy of information disclosed (Altman and
Taylor, 1973; Cozby, 1973). In e-commerce, organizations often need to gather personal information from
customers, such as product preferences, payment information, or contact details, in order to conduct their
business (Campbell, 2019). For example, eliciting customers’ preferences for products is necessary for
creating a customer profile and providing personalized suggestions (Adomavicius and Tuzhilin, 2001). In
addition, customers’ contact information, such as email addresses, is collected so that organizations can
contact customers with promotions and other marketing information (Campbell, 2019). However, research has
shown that customers are increasingly reluctant to disclose such information because they fear that their
information may get into the wrong hands (Olivero and Lunt, 2004; Spiekermann et al., 2001). Therefore,
much research has focused on identifying antecedents to users’ willingness to disclose personal information
(e.g., Al-Natour et al., 2009; Campbell, 2019).
While there is reason to believe that social cues of chatbots affect users’ self-disclosure (e.g., Adam and
Klumpe, 2019; Sah and Peng, 2015; Schuetzler, Giboney, Grimes, and Nunamaker, 2018), to the best of
our knowledge, there is no previous research that investigates how perceptions of similarity in dominance
(i.e., one specific personality facet) can be created through a specific social cue (i.e., chatbot language style)
in order to influence users’ degree of self-disclosure.
3 Research Model and Hypotheses
Research has shown that the interplay between individual user characteristics and design features of
systems, such as computers or online recommendation agents, plays an important role in HCI (Al-Natour et
al., 2006; Nass et al., 1995). Therefore, building on social response theory, we develop a research model
that first describes how perceptions of similarity in dominance can be created through a chatbot’s language
style. Subsequently, drawing upon similarity-attraction theory, we theorize how perceived similarity in
dominance increases the likelihood of accepting the chatbot’s advice and leads to customer self-disclosure.
Figure 1. Research model. Chatbot language style (1 = dominant, 2 = submissive) influences perceived chatbot dominance (H1). Perceived chatbot dominance and user dominance interact (personality match) to shape perceived similarity in dominance (H2). Perceived similarity in dominance affects the likelihood of accepting the chatbot’s advice (H3) and customer self-disclosure (H4); the likelihood of accepting the chatbot’s advice, in turn, affects customer self-disclosure (H5). Controls: gender, age, prior experience with chatbots, chat duration.
3.1 Manifesting Chatbot Dominance through Language Style
Extant research has shown that there is a link between personality and language use in a variety of contexts
(e.g., Holtgraves, 2011; Pennebaker and King, 1999; Yarkoni, 2010). People automatically infer
personality traits from the way other people communicate (Costa and MacCrae, 1992). As described above,
dominance has been identified as an important personality facet. Several studies have shown that
dominance can be expressed verbally (e.g., using phrases and words like “you must”, “absolutely”, “I’m
sure that”) and that such a language style can also be implemented in the design of interactive systems (e.g.,
Al-Natour et al., 2006; Moon and Nass, 1996; Nass et al., 1995). Furthermore, drawing on social response
theory (Nass and Moon, 2000), these studies have demonstrated that based on cues in the language, users
attribute personality to a system and are able to distinguish, for example, between extroverted and
introverted personalities. Building on this evidence, we argue that a chatbot that uses strong language with
assertions, commands, and self-confident statements (e.g., “you should”, “I’m sure that”) is perceived as
more dominant than a chatbot that uses a submissive language style (e.g., “you could”, “maybe”). Thus, we
propose that:
H1: The language style of a chatbot is directly related to its perceived dominance.
3.2 Shaping Perceptions of Similarity in Dominance between User and
Chatbot
Several studies have shown that perceptions of personality similarity are shaped by the interplay between
user characteristics and design features of interactive systems (e.g., Al-Natour et al., 2006; Hess et al.,
2005). For example, Al-Natour et al. (2006) found that a user’s perceived personality similarity to a
recommendation agent can be predicted by comparing separate assessments of the agent’s and the user’s
level of dominance. Building on these findings, we argue that perceptions of similarity can also arise when
a dominant (submissive) user interacts with a chatbot that is perceived to have a dominant (submissive)
personality. Consequently, perceived similarity in dominance should be higher when there is a match
between the chatbot’s and the user’s level of dominance. Hence, we propose that:
H2: Users’ perceptions of the chatbot’s dominance and their own dominance interact to affect
users’ perceived similarity in dominance between them and the chatbot.
3.3 The Effect of Perceived Similarity in Dominance on Likelihood of
Accepting Chatbot Advice and Customer Self-Disclosure
Personality similarity has been identified as a key factor in the design of recommendation agents and an
important driver of trust, enjoyment, and involvement (Al-Natour et al., 2011; Hess et al., 2005). Moreover,
research has shown that people are not only attracted to others who are similar, but are also more likely to
follow and trust their advice when making purchase decisions (Byrne, 1971; Jiang, Hoegg, Dahl, and
Chattopadhyay, 2010). Taken together, these findings may indicate that perceived similarity in dominance
also influences how users perceive the advice from the chatbot (e.g., a recommendation on which product
to buy). Therefore, based on similarity-attraction theory, we propose that users are more likely to accept the
chatbot’s advice when they perceive the chatbot to be similar to them in terms of its dominance. Hence, we
argue that:
H3: Users’ perceived similarity in dominance influences their likelihood of accepting the chatbot’s
advice.
Research in psychology has pointed out that people who perceive themselves to be similar to another
person (e.g., in attitude or personality traits) are willing to disclose not only more, but also more intimate
information about themselves to this person (Gelman and McGinley, 1978; Knecht, Lippman, and Swap,
1973). The underlying rationale is that similarity provides attributional confidence, reduces uncertainty, and
creates feelings of closeness (Byrne, 1971; Byrne et al., 1967). As revealing personal information usually
makes the discloser feel vulnerable (Kelly and McKillop, 1996), perceived similarity may reduce feelings
of vulnerability and therefore, facilitates the process of self-disclosure. In line with this reasoning, we argue
that perceived similarity to a chatbot also lowers the threshold for disclosing personal information during
the interaction. More specifically, perceived similarity in dominance may reduce users’ feelings of
uncertainty during the interaction and therefore, increase their willingness to disclose more or more
intimate personal information (e.g., their email address) to the chatbot. Thus, we propose:
H4: Users’ perceived similarity in dominance influences their degree of self-disclosure to the
chatbot.
It has been shown that the decision to disclose personal information involves an evaluation of costs
and rewards (Altman and Taylor, 1973). In e-commerce, many organizations require users to provide
their email address or other information to gain access to potential rewards (e.g., special offers or
promotions) (Campbell, 2019). Therefore, perceived rewards or benefits have been found to be an
important antecedent of self-disclosure (Al-Natour et al., 2009; Campbell, 2019; Loiacono, 2015).
Thus, we argue that when users are more likely to accept the chatbot’s advice, they focus more on the
potential rewards associated with that advice (e.g., an interesting product recommendation), in contrast
to any perceived costs (e.g., privacy concerns). Consequently, they are willing to disclose more
information about themselves to the chatbot in order to reap the expected rewards. Thus, we
hypothesize that:
H5: A higher likelihood of accepting the chatbot’s advice increases the degree of self-disclosure to
the chatbot.
4 Methodology
To test our hypotheses, we conducted a between-subjects online experiment. Participants were randomized
to interact with one of two chatbots that differed only in their language style. The experimental task was to
find and select a (fictitious) mobile phone plan using the chatbot. We selected this task since chatbots are
often used for such tasks in conversational commerce (Tuzovic and Paluch, 2018; Watson, 2017). The
chatbots were able to answer participants’ questions about mobile phone plans and guide them towards
selecting one plan by asking a set of questions (e.g., “Would you like to have unlimited calls?”). After the
chatbots had recommended a plan, they asked the participants whether they would be interested in getting
more information about this plan and would be willing to enter their email address to receive additional
information via email (see Figure 2). Entering an email address was optional and not required to receive a
compensation for participating in the experiment. Upon completion of the experimental task, participants
filled out a questionnaire that asked them to evaluate the chatbot.
4.1 Participants
Participants were recruited from a pool of students at a German university. We consider students to be
appropriate subjects for our experiment because they often shop online (Walczuch and Lundgren, 2004)
and are among the early adopters of chatbots (Brandtzaeg and Følstad, 2017). Using G*Power (Faul,
Erdfelder, Lang, and Buchner, 2007), we calculated a required sample size of about 200 participants (effect
size = .20, α = .05, power = .80). As compensation for participating in the experiment, we raffled €600
among all participants. Before the experiment, all participants provided informed consent via an online form
which explained the context of the study, that their data (i.e., survey and conversation data) would be de-
identified, and that they could opt out of the experiment at any time. In total, 214 subjects participated in
the experiment. After data collection, we excluded five participants who provided incorrect answers to one
of two attention check questions and four participants who did not follow the scenario or encountered
technical difficulties during the interaction with the chatbot. Therefore, our final sample included 205
participants (63 females, 142 males, mean age = 23 years).
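For transparency, the sample size calculation can be reproduced with a short power analysis in R. The sketch below uses the pwr package rather than G*Power and assumes the reported effect size is Cohen’s f; it is an illustration, not the authors’ original computation.

```r
# Power analysis for a one-way ANOVA with two groups, mirroring the
# G*Power parameters reported above (effect size, alpha, power).
library(pwr)

pwr.anova.test(k = 2,            # two conditions (DOM vs. SUB)
               f = 0.20,         # assumed to be Cohen's f
               sig.level = 0.05,
               power = 0.80)
# Yields n of roughly 99 per group, i.e., about 200 participants in total.
```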
4.2 Experimental Conditions and Treatment Design
The online experiment employed a between-subjects design with two conditions (chatbot language style:
dominant vs. submissive). In both conditions, participants were told beforehand that their counterpart was a
chatbot, not a human being. In order to create two different language styles, we focused on the personality
facet of dominance (John and Srivastava, 1999) and formulated two different versions of each message sent
by the chatbots (see Table 1). As prior research has shown, a dominant language style is characterized by
the use of assertions and commands (Isbister and Nass, 2000; Nass et al., 1995) and can be cued by the use of
directives and decisional guidance that are communicated in an authoritative manner (Al-Natour et al.,
2006). In contrast, a submissive language style is characterized by the use of questions and suggestions
(Isbister and Nass, 2000; Nass et al., 1995) as well as timid and unassuming statements (Al-Natour et al.,
2006; Hess et al., 2005). Both language styles were pretested with 28 participants. The results of the pretest
indicated significant differences in how dominant users perceived the two language styles to be.
| # | Dominant Language Style (DOM) | Submissive Language Style (SUB) |
|---|---|---|
| 1 | "Hey! I'm an expert for mobile phone plans from all providers and I'm sure I know the perfect plan for you." | "Hello! I'm your personal assistant for mobile phone plans. I would be happy to help you find a new plan." |
| 2 | "Unlimited calls would certainly not be bad, would they?" | "Would you like to have unlimited calls?" |
| 3 | "Perfect! Here is my recommendation for you. I'm absolutely sure that this plan meets all your needs. Take a look:" | "Here I have found a mobile phone plan that could possibly meet your needs. I would like to suggest you the following plan:" |
| 4 | "Sounds great, right? I will gladly send you further information about this offer. Do you want to know more?" | "I hope this plan satisfies your expectations. If you like the offer, I would be glad to send you further information. Would you like to know more?" |
| 5 | "Sorry, I did not understand your question. Please rephrase your message. If you need help, just enter 'help'." | "I'm very sorry, I did not understand your question, but I'm trying to get better every day. Could you please rephrase your message? If you need help, you can always enter 'help'." |

Table 1. Exemplary messages for both experimental conditions
4.3 Measures
We adapted the measurement items in the questionnaire from existing scales. We assessed user dominance,
perceived chatbot dominance, and perceived similarity in dominance by adapting the items from Al-Natour
et al. (2006). Likelihood of accepting the chatbot’s advice was measured using the items from Köhler et al.
(2011). Table 2 shows all constructs and corresponding measurement items. Additionally, several control
variables were examined in the survey (i.e., age, gender, experience with chatbots) or calculated afterwards
(i.e., chat duration). No significant differences were found between the experimental conditions for any of
these control variables.
As illustrated in Figure 2, our dependent variable customer self-disclosure was assessed during the
interaction. After the chatbot had recommended a mobile phone plan, participants were asked (1) whether
they would like to receive more information on their recommended mobile phone plan (i.e., state their
interest in the given plan) and (2) to provide their email address to receive this further information. Only
participants who stated their interest (i.e., clicked on “Yes please”) could enter their email address in the
following message from the chatbot. Consequently, customer self-disclosure represents a three-category
ordinal variable (0 = no disclosure; 1 = disclosure of product interest; 2 = disclosure of email address).
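For the later analyses, this measure can be represented as an ordered factor in R; the following snippet is a sketch with hypothetical data frame and column names.

```r
# Hypothetical coding of the behavioral self-disclosure measure (CSD).
d$CSD <- factor(d$CSD, levels = c(0, 1, 2),
                labels = c("no_disclosure", "product_interest", "email_address"),
                ordered = TRUE)
```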
| Construct | Items | Scale / Measurement | Source |
|---|---|---|---|
| Perceived Chatbot Dominance (PCD) / User Dominance (UD) | Stem: "In my opinion, the chatbot is ..." (PCD) / "In my opinion, I am ..." (UD); items 1-8: dominant, assertive, domineering, forceful, self-confident, self-assured, firm, persistent | 7-pt. Likert scale (1 = "strongly disagree"; 7 = "strongly agree") | Al-Natour et al. (2006) |
| Perceived Similarity in Dominance (PSD) | Stem: "I think the chatbot and I are similar in terms of ..."; items 1-6: my self-confidence level, my self-assurance level, my firmness level, my persistence level, my authoritativeness, my dominance level | 7-pt. Likert scale (1 = "strongly disagree"; 7 = "strongly agree") | Al-Natour et al. (2006) |
| Likelihood of Accepting the Chatbot's Advice (LAA) | LAA1: "What is the likelihood that you would accept the chatbot's advice?"; LAA2: "How probable is it that you would accept the chatbot's advice?"; LAA3: "How influential do you perceive the chatbot's advice to be?" | 7-pt. Likert scales (LAA1: 1 = "not likely at all", 7 = "very likely"; LAA2: 1 = "not probable at all", 7 = "very probable"; LAA3: 1 = "not influential at all", 7 = "very influential") | Köhler et al. (2011) |
| Customer Self-Disclosure (CSD) | Behavioral measurement during the experiment (see description above and Figure 2) | Three-category ordinal variable (0 = no disclosure; 1 = product interest; 2 = email address) | Kang and Gratch (2014), Moon (2000) |

Table 2. Constructs and measurement items
Figure 2. Screenshot of self-disclosure measurement (dominant language style condition shown): (1) disclosure of product interest; (2) disclosure of email address.
To assess reliability and validity of the measures, we conducted a confirmatory factor analysis (CFA) using
the structural equation modeling (SEM) package lavaan 0.6-3 in R version 3.5.0 (Rosseel, 2012). However,
several items of both dominance scales (i.e., for user and for chatbot) and of the perceived similarity in
dominance scale did not load as expected, which we believe to be the result of a social desirability bias.
Consequently, we removed these items and only kept three dominance items (i.e., “I am / the chatbot is:
self-confident, self-assured, firm”) and three perceived similarity items (i.e., “The chatbot and I are similar
in terms of: my self-confidence level, my self-assurance level, my firmness level”). After rerunning the
CFA, the loadings for all items on their intended constructs exceeded the recommended threshold of .60
(Gefen and Straub, 2005). Next, we compared the square root of the AVE of each construct with its
correlations with other constructs to assess discriminant validity. All constructs met this criterion. Finally,
Cronbach alpha scores were above .70 and average variance extracted (AVE) values above .50. The CFA
showed acceptable model fit (χ² = 133.847, df = 58, χ²/df = 2.308, RMSEA = .08, CFI = .918, TLI = .889, SRMR = .089).
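As an illustration, the retained measurement model can be specified in lavaan as sketched below. The item names follow Table 2 and are assumptions; this is not the authors’ original script.

```r
library(lavaan)

# Measurement model after item removal (three items per dominance construct).
cfa_model <- '
  PCD =~ PCD5 + PCD6 + PCD7   # chatbot: self-confident, self-assured, firm
  UD  =~ UD5  + UD6  + UD7    # user: self-confident, self-assured, firm
  PSD =~ PSD1 + PSD2 + PSD3   # similarity: self-confidence, self-assurance, firmness
  LAA =~ LAA1 + LAA2 + LAA3
'

fit_cfa <- cfa(cfa_model, data = d)  # d: hypothetical survey data frame
fitMeasures(fit_cfa, c("chisq", "df", "rmsea", "cfi", "tli", "srmr"))
```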
4.4 Manipulation Check
To check whether the manipulation of the chatbots’ language style was successful, we used three items for
perceived language style based on existing research (e.g., Moon and Nass, 1996; Nass et al., 1995). More
specifically, we asked participants to rate whether the chatbot expressed itself confidently, provided
information in an authoritative manner, and made self-confident statements (7-point Likert scales: 1 =
“strongly disagree”; 7 = “strongly agree”). As this construct displayed high internal consistency as well as
convergent and discriminant validity, we computed a score by averaging participants’ responses across the
three items. A one-way analysis of variance (ANOVA) showed that participants in the DOM condition (M
= 5.84, SD = 1.04) perceived the chatbot’s language style to be significantly more dominant than did those
in the SUB condition (M = 4.87, SD = 1.47; F(1, 203) = 30.05, p < .001), thus indicating that our
manipulation was successful.
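The manipulation check (like the analogous test of H1 in Section 5.2) corresponds to a standard one-way ANOVA, sketched here with hypothetical variable names.

```r
# One-way ANOVA comparing the averaged perceived-language-style score
# across the two conditions (condition: factor with levels DOM, SUB).
summary(aov(perceived_style ~ condition, data = d))
```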
5 Results
We performed the following three steps in our analysis. First, we analyzed the effect of the treatment
(chatbot language style) on perceived chatbot dominance. Second, we calculated a dyadic personality
similarity score to assess the match between the users’ and the chatbots’ dominance. Third, we evaluated
the structural model with the relationships between perceived similarity in dominance, likelihood of
accepting the chatbot’s advice, and customer self-disclosure, again using the SEM package lavaan in R.
5.1 Descriptive Results
The descriptive statistics for all constructs in both experimental conditions are reported in Table 3.
| Experimental Condition | N | Perceived Chatbot Dominance | User Dominance | Perceived Similarity in Dominance | Likelihood of Accepting the Chatbot's Advice | CSD: 0 = No Disclosure | CSD: 1 = Product Interest | CSD: 2 = Email Address |
|---|---|---|---|---|---|---|---|---|
| DOM | 102 | 5.60 (0.76) | 5.25 (1.09) | 4.69 (1.50) | 5.25 (1.25) | 7 (6.86%) | 12 (9.80%) | 83 (81.37%) |
| SUB | 103 | 4.83 (1.24) | 5.35 (1.07) | 4.54 (1.46) | 5.56 (1.22) | 10 (9.71%) | 18 (17.48%) | 75 (72.82%) |
| Total | 205 | 5.21 (1.10) | 5.30 (1.08) | 4.61 (1.48) | 5.40 (1.24) | 17 (8.29%) | 30 (14.63%) | 158 (77.07%) |

Note: Means with standard deviations in parentheses; for customer self-disclosure (CSD), numbers and percentages (in parentheses) of participants who did / did not disclose information.

Table 3. Descriptive statistics
5.2 The Effect of Chatbot Language Style on Perceived Chatbot Dominance
To test the effect of chatbot language style (i.e., our treatment) on perceived chatbot dominance, we
conducted a one-way ANOVA. The results showed that participants in the DOM condition (M = 5.60, SD =
0.76) perceived the chatbot to be significantly more dominant than did those in the SUB condition (M =
4.83, SD = 1.24; F(1, 203) = 28.01, p < .001; H1 supported).
5.3 Predicting Perceived Similarity in Dominance
Following the approach of Al-Natour et al. (2006), we calculated a dyadic personality similarity score using
pairwise intraclass correlations (Fisher, 1925) between the participant’s assessment of their own dominance
and their perception of the chatbot’s dominance. The intraclass correlation coefficient (ICC) takes values
in the range [-1.0, 1.0], where 1.0 means perfect agreement. ICCs for each participant were calculated
using the R package psy. In order to derive a new factor representing personality match, the dyadic
similarity scores (i.e., ICC scores) were dichotomized by a median split into two groups (0 = mismatch, 1 =
match). Subsequently, a one-way ANOVA was conducted to test whether the computed personality
(mis)match influenced perceived similarity in dominance. The results showed that participants whose
dominance “matched” the chatbot’s perceived dominance (M = 4.87, SD = 1.30) perceived the chatbot’s
personality to be significantly more similar to their own compared to “mismatched” participants (M = 4.36,
SD = 1.60; F(1, 203) = 6.33, p = .013; H2 supported). In addition, Figure 3 shows that a personality match
is particularly effective when the chatbot uses a dominant language style.
| Personality Match | N | Perceived Similarity in Dominance |
|---|---|---|
| Mismatch | 103 | 4.36 (1.60) |
| Match | 102 | 4.87 (1.30) |
| Total | 205 | 4.61 (1.48) |

Note: Means with standard deviations in parentheses.

Table 4. Descriptive results

Figure 3. Personality match (perceived similarity in dominance for matched vs. mismatched participants, shown by language style condition).
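The dyadic similarity score and the subsequent median split can be sketched in R as follows. The psy package’s icc() function and the item names are assumptions about how the described procedure could be implemented.

```r
library(psy)

# For each participant, compute the intraclass correlation between the three
# retained self-dominance items and the corresponding chatbot-dominance items.
icc_scores <- sapply(seq_len(nrow(d)), function(i) {
  ratings <- cbind(as.numeric(d[i, c("UD5", "UD6", "UD7")]),
                   as.numeric(d[i, c("PCD5", "PCD6", "PCD7")]))
  icc(ratings)$icc.agreement
})

# Median split into personality match (1) vs. mismatch (0);
# tie handling at the median is a judgment call.
d$match <- as.integer(icc_scores >= median(icc_scores))
```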
5.4 The Effect of Perceived Similarity in Dominance and Likelihood of
Accepting Chatbot Advice on Customer Self-Disclosure
Finally, we specified and estimated a structural model to examine the remaining relationships in our
research model. Because self-disclosure was measured as an ordinal variable, we used robust weighted
least squares estimation (i.e., WLSMV) to fit our model in lavaan. This estimation method has been found
reliable for estimating models with non-normal dependent variables and has been used in prior research
with nominal or ordinal dependent variables (e.g., Santos, Patel, and D’Souza, 2011; Wright and Marett,
2010). The overall fit indices of the structural model showed a good fit to the data (χ² = 42.189, df = 41, χ²/df = 1.029, RMSEA = .012, CFI = .994, TLI = .997, SRMR = .034).
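A minimal lavaan sketch of this structural model is shown below, again with assumed variable names and without the control variables; the key points are the ordered declaration of the self-disclosure outcome and the WLSMV estimator.

```r
library(lavaan)

sem_model <- '
  # Measurement part
  PSD_f =~ PSD1 + PSD2 + PSD3
  LAA_f =~ LAA1 + LAA2 + LAA3

  # Structural part (H3, H4, H5)
  LAA_f ~ PSD_f
  CSD   ~ PSD_f + LAA_f
'

fit_sem <- sem(sem_model, data = d,
               ordered = "CSD",      # three-category ordinal outcome
               estimator = "WLSMV")  # robust weighted least squares
summary(fit_sem, fit.measures = TRUE, standardized = TRUE)
```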
Consistent with H3, perceived similarity in dominance had a statistically significant positive effect on
likelihood of accepting the chatbot’s advice (b = 0.359, p < .001). However, the effect of perceived
similarity in dominance on self-disclosure was not significant (b = 0.067, p = .513; H4 rejected). The effect
of likelihood of accepting the chatbot’s advice on self-disclosure was statistically significant (b = 0.171, p =
.025; H5 supported). In order to test whether likelihood of accepting the chatbot’s advice mediates the
relationship between perceived similarity and self-disclosure, we followed the procedure suggested by
Hayes (2018). Thus, we tested a mediation model (Model 4) using a bootstrapping procedure (10,000
samples) with perceived similarity in dominance as the independent variable, likelihood of accepting the
chatbot’s advice as the mediator, and self-disclosure as the dependent variable. The indirect effect of
perceived similarity in dominance on self-disclosure via likelihood of accepting the chatbot’s advice was
statistically significant (b = 0.0118, SE = 0.0079, [95% CI: 0.0007, 0.0339]). The direct effect of perceived
similarity on self-disclosure was not significant (b = -0.027, p = .410). Moreover, the relationships between
perceived similarity and likelihood of accepting the chatbot’s advice (b = 0.132, p = .049) as well as
between likelihood of accepting the chatbot’s advice and self-disclosure were still significant (b = 0.089, p
= .048). Taken together, these results suggest that perceived similarity in dominance has a positive indirect
effect on self-disclosure via an increased likelihood of accepting the chatbot’s advice.
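The authors report testing the mediation with Hayes’s (2018) PROCESS Model 4. A comparable bootstrap estimate of the indirect effect can be obtained in lavaan, as sketched below with averaged scale scores and hypothetical names; for simplicity, self-disclosure is treated as numeric here rather than ordinal.

```r
library(lavaan)

# Simple mediation (Model 4 structure): PSD -> LAA -> CSD.
med_model <- '
  LAA_score ~ a * PSD_score
  CSD_num   ~ b * LAA_score + c * PSD_score
  indirect := a * b    # indirect effect of similarity via advice acceptance
  total    := c + a * b
'

fit_med <- sem(med_model, data = d, se = "bootstrap", bootstrap = 10000)
parameterEstimates(fit_med, boot.ci.type = "perc")  # percentile bootstrap CIs
```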
6 Discussion
Drawing on social response theory and similarity-attraction theory, we investigated how the language style
of chatbots influences users’ perceptions of similarity in dominance and how these perceptions affect their
self-disclosure behavior. Our results yielded three key findings. First, users attribute a dominant personality
to a chatbot when its language style is characterized by confident and assertive statements. Second, the
interplay of users’ own dominance and the chatbot’s perceived dominance creates perceptions of similarity.
Third, perceived similarity to a chatbot increases users’ degree of self-disclosure via an increased likelihood
of accepting the chatbot’s advice.
6.1 Theoretical Contributions
Our study makes three major theoretical contributions. First, our findings suggest that language style is an
important design feature of chatbots, conversational agents, and other natural language interfaces.
Consistent with social response theory, users ascribe a different personality to a chatbot, depending on
whether it uses dominant (e.g., “you should”, “I’m sure that”) or submissive language (e.g., “you could”,
“maybe”). A possible explanation is that, since the use of natural language is a unique human capability
and humans form personality impressions from others’ language within a few seconds (Costa and
MacCrae, 1992), users also automatically infer personality traits from the way a chatbot communicates.
This finding has important implications for the design of chatbots. Existing design knowledge for chatbots
often consists of high-level suggestions such as “Don’t sound like a robot” or “Give your chatbot a
personality” (McTear, 2017). In contrast, our findings provide a more nuanced understanding of how one
specific personality facet (i.e., dominance) can be implemented in a chatbot using one specific design
feature (i.e., language style).
Our second theoretical contribution is to highlight the interplay of design features and user characteristics
that create perceptions of similarity. Our results indicate that during the interaction with a chatbot, a
“match” between the chatbot’s perceived dominance (cued through its language style) and the user’s own
level of dominance can be established. This is in line with prior studies that have found a match between
other design features and user characteristics such as gender (e.g., Beldad, Hegner, and Hoppen, 2016;
Nass, Moon, and Green, 1997; Qiu and Benbasat, 2010) and ethnicity (e.g., Qiu and Benbasat, 2010).
Consequently, our study provides further evidence that it is the interplay between user characteristics and
design features, not only the design per se, that shapes users’ perceptions of chatbots.
Third, our study supplements literature on customer self-disclosure. Our results show that perceived
similarity in dominance has a positive indirect effect on self-disclosure via the user’s likelihood of
accepting the chatbot’s advice. Therefore, contrary to our hypothesis based on similarity-attraction theory,
there seems to be no direct effect of perceived similarity on self-disclosure. This finding suggests that,
particularly in business interactions (e.g., e-commerce, conversational commerce), self-disclosure behavior
is not solely driven by users’ perceptions of the chatbot, but also by their evaluation of the benefits
associated with disclosing certain information (e.g., receiving interesting information about a product).
Therefore, a possible explanation in line with research on trust building (e.g., Saffarizadeh et al., 2017;
Wang, Qiu, Kim, and Benbasat, 2016) is that, despite users’ automatic social (and often emotional)
responses to a chatbot (e.g., forming personality impressions), self-disclosure also involves cognitive
decision-making processes.
6.2 Practical Implications
Our results also have important practical implications. For chatbot designers and organizations who aim to
introduce a chatbot to their customers, it is key to understand that its language style can have a large impact
on users’ perceptions and behavior. Consequently, designers and organizations should not only focus on the
technical capabilities of their chatbot (e.g., architecture, natural language processing algorithms, integration
with other IT systems), but also carefully examine and test how its language is perceived by its users and
whether it fits the context (e.g., application domain, e-commerce channel). Moreover, designers and
organizations should consider the important role that individual user characteristics (e.g., personality traits)
play in shaping the interaction between a chatbot and its users. Our study shows that for increasing self-
disclosure, not only the design of the chatbot, but also how its design matches certain personality
characteristics of its users is crucial. Therefore, designers and organizations who are aiming to introduce a
new chatbot or convince users to engage with their existing chatbot could analyze their users’ personality
traits and adapt the chatbot’s language style accordingly. Since few users are willing to provide
sensitive personality-related information to organizations, automated approaches could be used to infer
personality traits from users’ written text during the interaction (e.g., IBM Watson Personality Insights; see
IBM, 2019), similar to sentiment analysis approaches that automatically extract users’ emotions from their
messages (Feine, Morana, and Gnewuch, 2019). Finally, our analysis highlights that, although the chatbot’s
design is important, there are also cognitive evaluations involved before users disclose personal information
(e.g., product interests, email address) to a chatbot. Therefore, organizations using chatbots (e.g., for
conversational commerce) should critically examine whether and how their services provide users with
added value that justifies the need for collecting personal information.
6.3 Limitations and Future Research
Our study has several limitations that suggest potential avenues for future research. First, although existing
research has often studied self-disclosure behavior using experimental methods (e.g., Kang and Gratch,
2014; Moon, 2000; Sah and Peng, 2015), this research design might limit the external validity and
generalizability of our findings because potential concerns about disclosing personal information might be
reduced in an experimental setting. To better understand the potential severity of this issue, we asked
participants at the end of the questionnaire to explain why they did or did not disclose their personal interests
or email address to the chatbot. Only one of the participants responding to the open-ended question
mentioned that she thought disclosing her email address was necessary for completing the experiment. The
responses of the remaining participants did not address this issue but rather focused on aspects related to the
chatbot’s recommendation (e.g., more information about the product) or the users’ general privacy
concerns. However, to strengthen the external validity of our findings, future research could observe self-
disclosure behavior in more realistic settings (e.g., using field studies). Moreover, when the chatbot asked
for the participants’ email address, there was no option to decline (e.g., another “no thanks” button). While
many participants just left the chat at this point in time without disclosing their email address, we cannot
rule out that providing an explicit option to decline would have resulted in different outcomes.
Second, our study focused on only one specific design feature (i.e., language style) and one facet of
personality (i.e., dominance). In order to create two different language styles, we used different linguistic
elements in the chatbots’ responses (e.g., assertions and directives vs. questions and suggestions). However,
the chatbots also introduced themselves differently in their first message (i.e., expert vs. personal assistant).
Since research has shown that even minimal cues can substantially affect users’ perceptions of chatbots
(e.g., Gnewuch, Morana, Adam, and Maedche, 2018), this might have also had an impact on users’
perceptions of the chatbots’ personality because users might have allocated different social roles to the
chatbots. Therefore, future research could examine the impact of social roles and other design features (e.g.,
avatars, emojis) on user perceptions of a chatbot’s personality. Moreover, given the complex nature of
human personality, most existing studies have examined only one personality facet (e.g., Al-Natour et al.,
2006; Hess et al., 2005; Moon and Nass, 1996; Nass et al., 1995). Thus, future studies could investigate a
combination of personality facets (e.g., dominance and neuroticism) and other user characteristics (e.g.,
gender) to better understand the interplay between user characteristics and design features of chatbots.
Third, we assessed users’ level of dominance using a self-report questionnaire. However, research has
shown that personality traits can also be inferred from textual data (Yarkoni, 2010). Since the interactions
in our experiment were rather short (i.e., six minutes on average) and most messages from users were fewer
than five words, we could not use a data-driven approach to cross-validate our findings. Thus, future
research may explore the use of data-driven approaches to identify users’ personality traits and compute
similarity scores from conversation data. This could also enable chatbots to dynamically adapt their design
(e.g., language style) in real-time to become more similar to the user during the interaction.
Finally, our research was primarily concerned with how self-disclosure behavior is influenced by the
chatbot’s design and its interplay with the users’ level of dominance. However, future research is needed to
expand our research model and to examine other potentially relevant mediating and moderating factors,
such as privacy concerns or trust, that have been found to affect self-disclosure behavior in other contexts
(e.g., Benlian, Klumpe, and Hinz, 2019; Saffarizadeh et al., 2017).
References
Adam, M., and Klumpe, J. (2019). Onboarding with a Chat – The Effects of Message Interactivity and
Platform Self-Disclosure on User Disclosure Propensity. Proceedings of the Twenty-Seventh
European Conference on Information Systems (ECIS2019). Stockholm, Sweden.
Adomavicius, G., and Tuzhilin, A. (2001). Using Data Mining Methods to Build Customer Profiles.
Computer, 34(2), 74–82.
Al-Natour, S., Benbasat, I., and Cenfetelli, R. (2005). The role of similarity in e-commerce
interactions: The case of online shopping assistants. Proceedings of the 4th Annual Pre-ICIS
Workshop on HCI Research in MIS. Las Vegas, NV, USA.
Al-Natour, S., Benbasat, I., and Cenfetelli, R. (2009). The Antecedents of Customer Self-Disclosure to
Online Virtual Advisors. Proceedings of the 30th International Conference on Information
Systems (ICIS2009). Phoenix, AZ, USA.
Al-Natour, S., Benbasat, I., and Cenfetelli, R. (2011). The Adoption of Online Shopping Assistants:
Perceived Similarity as an Antecedent to Evaluative Beliefs. Journal of the Association for
Information Systems, 12(5), 347–374.
Al-Natour, S., Benbasat, I., and Cenfetelli, R. T. (2006). The Role of Design Characteristics in
Shaping Perceptions of Similarity: The Case of Online Shopping Assistants. Journal of the
Association for Information Systems, 7(12), 821–861.
Altman, I., and Taylor, D. A. (1973). Social penetration: The development of interpersonal
relationships. Holt, Rinehart & Winston.
Beldad, A., Hegner, S., and Hoppen, J. (2016). The effect of virtual sales agent (VSA) gender -
Product gender congruence on product advice credibility, trust in VSA and online vendor, and
purchase intention. Computers in Human Behavior, 60, 62–72.
Benlian, A., Klumpe, J., and Hinz, O. (2019). Mitigating the intrusive effects of smart home assistants
by using anthropomorphic design features: A multimethod investigation. Information Systems
Journal, (forthcoming).
Brandtzaeg, P. B., and Følstad, A. (2017). Why people use chatbots. Proceedings of the 4th
International Conference on Internet Science, 377–392. Thessaloniki, Greece.
Byrne, D. (1971). The Attraction Paradigm. New York, NY, USA: Academic Press.
Byrne, D., Griffitt, W., and Stefaniak, D. (1967). Attraction and similarity of personality
characteristics. Journal of Personality and Social Psychology, 5(1), 82–90.
Campbell, D. (2019). A Relational Build-up Model of Consumer Intention to Self-disclose Personal
Information in E-commerce B2C Relationships. AIS Transactions on Human-Computer
Interaction, 11(1), 33–53.
Carli, L. L. (1990). Gender, language, and influence. Journal of Personality and Social Psychology,
59(5), 941–951.
Chattaraman, V., Kwon, W.-S., Gilbert, J. E., and Ross, K. (2019). Should AI-Based, conversational
digital assistants employ social- or task-oriented interaction style? A task-competency and
reciprocity perspective for older adults. Computers in Human Behavior, 90, 315–330.
Costa, P. T., and MacCrae, R. R. (1992). Revised NEO personality inventory (NEO PI-R) and NEO
five-factor inventory (NEO-FFI): Professional manual. Odessa, FL: Psychological Assessment
Resources.
Cozby, P. C. (1973). Self-disclosure: A literature review. Psychological Bulletin, 79(2), 73–91.
Dale, R. (2016). The return of the chatbots. Natural Language Engineering, 22(5), 811–817.
Diederich, S., Brendel, A. B., Lichtenberg, S., and Kolbe, L. M. (2019). Design for fast request
fulfillment or natural interaction? Insights from an experiment with a conversational agent.
Proceedings of the Twenty-Seventh European Conference on Information Systems (ECIS2019).
Stockholm, Sweden.
Diederich, S., Janßen-Müller, M., Brendel, A. B., and Morana, S. (2019). Emulating Empathetic
Behavior in Online Service Encounters with Sentiment-Adaptive Responses: Insights from an
Experiment with a Conversational Agent. Proceedings of the 40th International Conference on
Information Systems (ICIS2019). Munich, Germany.
Facebook. (2018). David Marcus F8 Keynote 2018. Retrieved February 22, 2020, from
https://www.facebook.com/business/news/david-marcus-f8-keynote-2018
Faul, F., Erdfelder, E., Lang, A.-G., and Buchner, A. (2007). G*Power 3: A flexible statistical power
analysis program for the social, behavioral, and biomedical sciences. Behavior Research
Methods, 39(2), 175–191.
Feine, J., Gnewuch, U., Morana, S., and Maedche, A. (2019). A Taxonomy of Social Cues for
Conversational Agents. International Journal of Human-Computer Studies, 132, 138–161.
Feine, J., Morana, S., and Gnewuch, U. (2019). Measuring Service Encounter Satisfaction with
Customer Service Chatbots using Sentiment Analysis. Proceedings of the 14th International
Conference on Wirtschaftsinformatik (WI2019). Siegen, Germany.
Fisher, R. A. (1925). Statistical Methods for Research Workers. Edinburgh: Oliver and Boyd.
Følstad, A., and Brandtzæg, P. B. (2017). Chatbots and the new world of HCI. Interactions, 24(4), 38–
42.
Galassi, J. P., Delo, J. S., Galassi, M. D., and Bastien, S. (1974). The college self-expression scale: A
measure of assertiveness. Behavior Therapy, 5(2), 165–171.
Gefen, D., and Straub, D. W. (2005). A practical guide to factorial validity using PLS-Graph: Tutorial
and annotated example. Communications of the Association for Information Systems, 16, 91–109.
Gelman, R., and McGinley, H. (1978). Interpersonal liking and self-disclosure. Journal of Consulting
and Clinical Psychology, 46(6), 1549–1551.
Gnewuch, U., Morana, S., Adam, M. T. P., and Maedche, A. (2018). Faster is Not Always Better:
Understanding the Effect of Dynamic Response Delays in Human-Chatbot Interaction.
Proceedings of the Twenty-Sixth European Conference on Information Systems (ECIS2018).
Portsmouth, UK.
Gnewuch, U., Morana, S., and Maedche, A. (2017). Towards Designing Cooperative and Social
Conversational Agents for Customer Service. Proceedings of the 38th International Conference
on Information Systems (ICIS2017). Seoul, South Korea.
Hayes, A. F. (2018). Introduction to Mediation, Moderation, and Conditional Process Analysis: A
Regression-based Approach (2nd ed.). New York, NY, USA: Guilford Press.
Hess, T., Fuller, M., and Campbell, D. (2009). Designing interfaces with social presence: Using
vividness and extraversion to create social recommendation agents. Journal of the Association
for Information Systems, 10(12), 889–919.
Hess, T., Fuller, M., and Mathew, J. (2005). Involvement and Decision-Making Performance with a
Decision Aid: The Influence of Social Multimedia, Gender, and Playfulness. Journal of
Management Information Systems, 22(3), 15–54.
Holtgraves, T. (2011). Text messaging, personality, and the social context. Journal of Research in
Personality, 45(1), 92–99.
IBM. (2019). IBM Watson Personality Insights: The science behind the service. Retrieved March 9,
2020, from https://cloud.ibm.com/docs/services/personality-insights?topic=personality-insights-
science
Isbister, K., and Nass, C. (2000). Consistency of personality in interactive characters: verbal cues,
non-verbal cues, and user characteristics. International Journal of Human-Computer Studies, 53,
251–267.
Jiang, L., Hoegg, J., Dahl, D. W., and Chattopadhyay, A. (2010). The Persuasive Role of Incidental
Similarity on Attitudes and Purchase Intentions in a Sales Context. Journal of Consumer
Research, 36(5), 778–791.
John, O. P., and Srivastava, S. (1999). The Big-Five trait taxonomy: History, measurement, and
theoretical perspectives. In L. Pervin and O. P. John (Eds.), Handbook of personality: Theory
and research (2nd ed., pp. 102–138). New York, NY, USA: Guilford Press.
Kang, S.-H., and Gratch, J. (2014). Exploring users’ social responses to computer counseling
interviewers’ behavior. Computers in Human Behavior, 34, 120–130.
Kelly, A. E., and McKillop, K. J. (1996). Consequences of revealing personal secrets. Psychological
Bulletin, 120(3), 450–465.
Knecht, L., Lippman, D., and Swap, W. (1973). Similarity, attraction, and self-disclosure. Proceedings
of the Annual Convention of the American Psychological Association, 205–206.
Köhler, C. F., Breugelmans, E., and Dellaert, B. G. C. (2011). Consumer Acceptance of
Recommendations by Interactive Decision Aids: The Joint Role of Temporal Distance and
Concrete Versus Abstract Communications. Journal of Management Information Systems, 27(4),
231–260.
Laumer, S., Maier, C., and Gubler, F. T. (2019). Chatbot acceptance in healthcare: Explaining user
adoption of conversational agents for disease diagnosis. Proceedings of the Twenty-Seventh
European Conference on Information Systems (ECIS2019). Stockholm, Sweden.
Leaper, C., and Ayres, M. M. (2007). A meta-analytic review of gender variations in adults’ language
use: Talkativeness, affiliative speech, and assertive speech. Personality and Social Psychology
Review, 11(4), 328–363.
Li, J., Zhou, M. X., Yang, H., and Mark, G. (2017). Confiding in and Listening to Virtual Agents: The
Effect of Personality. Proceedings of the 22nd International Conference on Intelligent User
Interfaces (IUI '17), 275–286. Limassol, Cyprus.
Loiacono, E. (2015). Self-Disclosure Behavior on Social Networking Web Sites. International Journal
of Electronic Commerce, 19(2), 66–94.
Matthews, G., Deary, I. J., and Whiteman, M. C. (2003). Personality traits (2nd ed.). New York, NY,
USA: Cambridge University Press.
McTear, M. F. (2017). The Rise of the Conversational Interface: A New Kid on the Block? In FETLT
2016 (pp. 38–49). Cham: Springer.
Messina, C. (2015). 2016 will be the year of conversational commerce. Retrieved November 16, 2019,
from https://medium.com/chris-messina/2016-will-be-the-year-of-conversational-commerce-
1586e85e3991
Moon, Y. (2000). Intimate exchanges: Using computers to elicit self-disclosure from consumers.
Journal of Consumer Research, 26(4), 323–339.
Moon, Y., and Nass, C. (1996). How “Real” Are Computer Personalities? Psychological Responses to
Personality Types in Human-Computer Interaction. Communication Research, 23(6), 651–674.
Nass, C., and Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of
Social Issues, 56(1), 81–103.
Nass, C., Moon, Y., Fogg, B. J., Reeves, B., and Dryer, C. D. (1995). Can computer personalities be
human personalities? International Journal of Human-Computer Studies, 43(2), 223–239.
Nass, C., Moon, Y., and Green, N. (1997). Are machines gender neutral? Gender-stereotypic
responses to computers with voices. Journal of Applied Social Psychology, 27(10), 864–876.
Nass, C., Steuer, J., and Tauber, E. R. (1994). Computers are social actors. Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems (CHI 94), 72–78. Boston, USA.
Olivero, N., and Lunt, P. (2004). Privacy versus willingness to disclose in e-commerce exchanges: The
effect of risk awareness on the relative role of trust and control. Journal of Economic
Psychology, 25(2), 243–262.
Pennebaker, J. W., and King, L. A. (1999). Linguistic styles: Language use as an individual difference.
Journal of Personality and Social Psychology, 77(6), 1296–1312.
Pfeuffer, N., Adam, M., Toutaoui, J., Hinz, O., and Benlian, A. (2019). Mr. and Mrs. Conversational
Agent - Gender Stereotyping in Judge-Advisor Systems and the Role of Egocentric Bias.
Proceedings of the 40th International Conference on Information Systems (ICIS2019). Munich,
Germany.
Pfeuffer, N., Benlian, A., Gimpel, H., and Hinz, O. (2019). Anthropomorphic Information Systems.
Business & Information Systems Engineering, 61(4), 523–533.
Qiu, L., and Benbasat, I. (2010). A study of demographic embodiments of product recommendation
agents in electronic commerce. International Journal of Human-Computer Studies, 68(10), 669–
688.
Rich, M. K., and Smith, D. C. (2000). Determining relationship skills of prospective salespeople.
Journal of Business & Industrial Marketing, 15(4), 242–259.
Rietz, T., Benke, I., and Maedche, A. (2019). The Impact of Anthropomorphic and Functional Chatbot
Design Features in Enterprise Collaboration Systems on User Acceptance. Proceedings of the
14th International Conference on Wirtschaftsinformatik (WI2019). Siegen, Germany.
Rosseel, Y. (2012). lavaan: An R Package for Structural Equation Modeling. Journal of Statistical
Software, 48(2), 1–36.
Saffarizadeh, K., Boodraj, M., and Alashoor, T. (2017). Conversational Assistants: Investigating
Privacy Concerns, Trust, and Self-Disclosure. Proceedings of the 38th International Conference
on Information Systems (ICIS2017). Seoul, South Korea.
Sah, Y. J., and Peng, W. (2015). Effects of visual and linguistic anthropomorphic cues on social
perception, self-awareness, and information disclosure in a health website. Computers in Human
Behavior, 45, 392–401.
Santos, B., Patel, P., and D’Souza, R. (2011). Venture Capital Funding for Information Technology
Businesses. Journal of the Association for Information Systems, 12(1), 57–87.
Schlee, R. P. (2005). Social styles of students and professors: Do students’ social styles influence their
preferences for professors? Journal of Marketing Education, 27(2), 130–142.
Schuetzler, R. M., Giboney, J. S., Grimes, G. M., and Nunamaker, J. F. (2018). The influence of
conversational agent embodiment and conversational relevance on socially desirable responding.
Decision Support Systems, 114, 94–102.
Spiekermann, S., Grossklags, J., and Berendt, B. (2001). E-privacy in 2nd generation E-commerce:
privacy preferences versus actual behavior. Proceedings of the 3rd ACM Conference on
Electronic Commerce, 38–47. Tampa, FL, USA.
Tuzovic, S., and Paluch, S. (2018). Conversational Commerce – A New Era for Service Business
Development? In M. Bruhn and K. Hadwich (Eds.), Service Business Development (pp. 81–100).
Wiesbaden: Springer.
Walczuch, R., and Lundgren, H. (2004). Psychological antecedents of institution-based consumer trust
in e-retailing. Information & Management, 42(1), 159–177.
Wambsganss, T., Winkler, R., Söllner, M., and Leimeister, J. M. (2020). A Conversational Agent to
Improve Response Quality in Course Evaluations. CHI Conference on Human Factors in
Computing Systems Extended Abstracts. Honolulu, HI, USA.
Wang, W., Qiu, L., Kim, D., and Benbasat, I. (2016). Effects of rational and social appeals of online
recommendation agents on cognition- and affect-based trust. Decision Support Systems, 86, 48–
60.
Watson, H. J. (2017). Preparing for the Cognitive Generation of Decision Support. MIS Quarterly
Executive, 16(3), 153–169.
Winkler, R., and Söllner, M. (2018). Unleashing the Potential of Chatbots in Education: A State-Of-
The-Art Analysis. Academy of Management Annual Meeting (AOM). Chicago, USA.
Wright, R. T., and Marett, K. (2010). The Influence of Experiential and Dispositional Factors in
Phishing: An Empirical Investigation of the Deceived. Journal of Management Information
Systems, 27(1), 273–303.
Yarkoni, T. (2010). Personality in 100,000 Words: A large-scale analysis of personality and word use
among bloggers. Journal of Research in Personality, 44(3), 363–373.
Zhou, M. X., Mark, G., Li, J., and Yang, H. (2019). Trusting Virtual Agents: The Effect of
Personality. ACM Transactions on Interactive Intelligent Systems, 9(2–3).