Examining the Antecedents of Creative
Collaboration with an AI Teammate
Short Paper
Dominik Siemon
LUT University
dominik.siemon@lut.fi
Edona Elshan
University of St. Gallen
edona.elshan@unisg.ch
Triparna de Vreede
University of South Florida
tdevreede@usf.edu
Sarah Oeste-Reiß
University of Kassel
oeste-reiss@uni-kassel.de
Gert-Jan de Vreede
University of South Florida
gdevreede@usf.edu
Philipp Ebel
University of St. Gallen
philipp.ebel@unisg.ch
Abstract
With the advent of artificial intelligence (AI), individuals are increasingly teaming up
with AI-based systems to enhance their creative collaborative performance. When
working with AI-based systems, several aspects of team dynamics need to be considered,
which raises the question of how humans approach and perceive their new teammates. In
an experimental setting, we investigate the influence of social presence in a group
ideation process with an AI-based teammate and examine its effects on the motivation to
contribute. Our results show a multi-mediation model in which social presence indirectly
influences whether human team members are motivated to contribute to a team with AI-
based teammates, which is mediated by willingness to depend and team-oriented
commitment.
Keywords: artificial intelligence, team, creativity, brainstorming, human-AI collaboration
Introduction
People often form teams to make complex decisions, create innovative solutions, and depend on each other
to complete intricate tasks (Randrup et al. 2016; Siemon et al. 2019). With the advent of artificial
intelligence (AI), individuals working together are increasingly teaming up with AI-based systems to
enhance their joint performance and collaborative outcomes (Seeber et al. 2020). Combining
complementary strengths of human intelligence (e.g., flexibility, empathy, creativity, and common sense)
with AI (e.g., pattern recognition, speed, and efficiency) has led to the emergence of hybrid intelligence
(Dellermann et al. 2019). Hybrid intelligence allows human teammates to overcome gaps and biases in their
decision-making, creativity, and collaboration (Jarrahi 2018). Possible applications of AI-based systems in
collaborative scenarios range from systems that are used as facilitators (Derrick et al. 2013; Elshan et al.
2022; Strohmann et al. 2017) to systems that evaluate the creative ideas of other people (Siemon 2022).
Thus, AI can take on roles where it actively interacts with humans and roles where it is an equal
collaboration partner, serving as a mediator or arbitrator to resolve disputes or problems (Larson 2010).
When employing and working with AI teammates, several aspects of team dynamics need to be considered
(Elshan and Ebel 2020; Siemon et al. 2020), which raises questions about how humans approach and
perceive their new teammates.
When humans collaborate to complete tasks and exchange information and perspectives, they learn from
each other, thereby enhancing their co-creation efforts (Bunderson and Sutcliffe 2002; Chae et al. 2015). To
this end, team members need to be placed in the right environment to fully realize the potential of
collaborative co-creation. The absence of the right relationships between the team members may thwart
their collaboration even if the team composition and the technology are appropriate for the situation
(Fabriek et al. 2008; Mohammed and Angell 2003). This essential relationship may not be present between
humans and AI-based teammates, potentially creating dissonance among human team members (Shaikh
and Cruz 2022; Zhang et al. 2021). Therefore, to design and facilitate productive human-AI collaboration,
it is critical to identify the implicit assumptions that guide human-human collaboration and affect team members'
motivation to contribute to the task. There is early research on how people behave in collaborative scenarios
working with AI-based teammates (Bittner et al. 2021; Poser et al. 2022; Shaikh and Cruz 2022; Siemon
2022; Stieglitz et al. 2021). Research on a specific type of AI-based system, conversational agents, shows
that the social presence of the AI influences the behavior of the humans interacting with it. Social presence
describes the extent to which humans perceive the presence of another entity (human or AI-based system).
Social presence is a key phenomenon in interactions with AI-based systems (Mozafari et al. 2021)
as well as in computer-mediated collaboration, where it has been one of the most frequently analyzed variables
(Elshan et al. 2022). However, interaction differs from collaboration in various aspects, especially around
the joint value creation (Siemon et al. 2019), willingness to depend on teammates, commitment to the team
and the motivation to contribute to the common task. Therefore, a critical aspect of human-AI team
collaboration remains unexplored: how does social presence influence people to be less or more motivated
to contribute to a team when working with AI-based team members?
Theoretical Background
Human-AI Collaboration
Advancements in AI are allowing AI-based systems to take on more active roles in collaboration (Anderson
et al. 2018). While AI-based systems of the past were meant to adapt to humans or perform activities in a
reactive manner, they now operate proactively and generate value on their own (Russell and Norvig 2021).
Current AI-based systems offer the potential for task automation as well as augmentation. In terms of task
augmentation, humans and AI are combined to form a hybrid intelligence that forms the basis for
collaboration (Dellermann et al. 2019). In this study, we define collaboration as the combined effort of
individuals that is purposefully organized to achieve a common objective (Leimeister 2014). In a
collaboration context, and in this case human-AI collaboration, there are various mechanisms and
phenomena (Siemon et al. 2019) that do not or only partially exist in a human-computer interaction context.
Such phenomena include, for example, social loafing and evaluation apprehension (Siemon 2022; Siemon
and Wank 2021; Stieglitz et al. 2021). Early research in the context of AI-based team members found that
such phenomena in ‘traditional’ contexts differ when human team members are augmented with AI-based
team members (Siemon 2022; Stieglitz et al. 2021). For example, Stieglitz et al. (2021) show that people
tend to exert less effort when collaborating with AI-based teammates and tend to cede responsibility to the
AI-based teammate. Consequently, the augmentation of human team members with AI-based ones changes
team dynamics and team member behaviors (Siemon 2022). The computational and analytical capabilities
of AI can be leveraged to deal with the complexity of decision-making processes, with the goal of
augmenting human intelligence and decision-making tasks rather than removing humans from the process
(Jarrahi 2018). This symbiosis is particularly effective in augmentation tasks because machines can
consistently and accurately complete repetitive tasks requiring large amounts of data to be processed, while
humans are superior in empathetic or intuitive tasks. When the complementary strengths of human
intelligence and artificial intelligence are combined, both will behave more intelligently than each of them
individually (Kamar 2016). Such enhancement in any job environment will improve employees' decision-
making abilities, increase the time allocated to non-trivial activities, increase creativity, and improve
productivity for both employees and businesses (Daugherty and Wilson 2018). However, the integration of
human input into AI implies challenges, as human intelligence can only be integrated if certain
organizational constraints are adequately considered. For instance, many hybrid intelligence systems fail
due to a lack of participation from human contributors, or due to the fact that human contributors are
submitting random or erroneous contributions on different tasks to gain rewards without making useful
contributions (Kamar 2016). Consequently, the value of human-AI collaboration generated through a
symbiotic partnership may only be fully realized in practice (and beyond the theoretical narratives)
provided humans comprehend, trust, and adopt AI (Dubey et al. 2020; Wamba-Taguimdje et al. 2020)
during their collaborative efforts. Studies have found various concerns that threaten human comprehension
of and trust in AI: concerns regarding the negative impact of AI, such as poor decision-making,
discrimination, bias, and inaccurate recommendations on human/AI collaborations (Davenport et al.
2020) or the fear and skepticism among human workers towards working with AI (Rampersad 2020).
Humans frequently reject and disregard new technology if they feel threatened by it (i.e., if it impacts their
financial well-being), regardless of their (initial) enthusiasm for it (Elkins et al. 2013). This is mainly
because humans are less able to understand an AI-based team member, are not familiar with it, and do not
feel social presence, unlike with human team members that they may even know only through technology-
mediated collaboration (Mozafari et al. 2021). Unfamiliar team members, especially non-humans, can
consequently lead to discomfort, which can affect collaborative behaviors.
Social Presence Theory
Social presence refers to "the extent to which other beings in the world appear to exist and react to the user"
(Heeter 1992). In communication, this refers to the degree to which an individual is viewed as an
approachable real person. The core of social presence theory is that the social impact of a communication
medium is largely determined by the level of social presence (e.g., human contact, personalness, human
warmth, etc.) that it provides to its users (Short et al. 1976). This presence is essential for people to learn
about and interact with other people (Short et al. 1976). Accordingly, whenever there is contact between
two people, both parties engage in acting out specific roles as well as developing or maintaining some type
of personal relationship. These two aspects of a connection are referred to as interparty and interpersonal
exchanges. A proper level of social presence should improve interaction and collaboration, and
subsequently enhance the related motivation to contribute (Wei et al. 2017). Haines (2021) states that
“feelings of social presence had a significant impact on willingness to work” in virtual teams. Further
research in computer-mediated collaboration also shows that social presence is essential for effective
collaboration and influences engagement and motivation to perform in teams. Therefore, we hypothesize a
positive relationship between social presence and motivation to contribute.
H1: Social presence positively influences motivation to contribute.
Furthermore, social presence has been demonstrated to be an experiential phenomenon, implying that
different collaborators may experience varying amounts of social presence when interacting with the same
collaboration system (Wei et al. 2017). Previous studies examined the antecedents and characteristics of
social presence and discovered aspects such as psychological participation and engagement or commitment
(Biocca and Harms 2002). Psychological participation is defined as an observer being aware of another
person's intention or idea through focusing on the other person's emotional state (Kim et al. 2013). Similarly,
Haines (2021) defines this as the perceived awareness of the activities of other team members, which shows
that others are putting forth their effort within the team. Commitment can be characterized as the other
person's perceived reliance, connection, and responsiveness to the observer's activities (Kim et al. 2013).
Only if one is aware of another person's intentions, ideas, and activities does one develop trusting beliefs
and become willing to depend on that person (Qiu and Benbasat 2009). Consequently, we hypothesize that social
presence also has a positive influence on willingness to depend.
H2: Social presence positively influences willingness to depend.
For a team to function and for everyone to pull together, willingness to depend is essential to establish team-
oriented commitment (Peters and Manz 2007; Sheng et al. 2010). Willingness to depend shows that one
can rely on the actions, such as decision making, of one's team members or that tasks are completed as
coordinated and distributed, thus creating a commitment to the team. Haines (2021) further states that a
higher willingness to work is associated with higher feelings of loyalty to the team. Therefore, we
hypothesize a positive relationship between willingness to depend and team-oriented commitment.
H3: Willingness to depend positively influences team-oriented commitment.
Commitment to the team is essential for the motivation to deliver effort. This motivation comes from being
able to rely on other team members (i.e., willingness to depend) and build on their completion of tasks and
knowing that all team members are pulling in the same direction (Siemon et al. 2019). Although
motivation to contribute depends on other factors, team-oriented commitment is a critical factor (Chang et
al. 2010; Haines 2021). If a team member shows commitment to the team, this can also indirectly mean
that he or she is motivated to contribute to the team. “When team members are willing to do additional
work for their team, they are likely to feel that other team members likewise feel obligated to do the same”
(Haines 2021). Team-oriented commitment, therefore, leads to increased motivation to contribute, which
we hypothesize.
H4: Team-oriented commitment positively influences motivation to contribute.
Consequently, we investigate two basic theoretical derivations: the direct influence of social presence on
motivation to contribute, and the indirect, mediated connection between social presence and motivation to
contribute via the mediators willingness to depend and team-oriented commitment. We define the indirect
multi-mediation as our last hypothesis.
H5: Social presence influences motivation to contribute mediated by willingness to depend and
team-oriented commitment.
In Figure 1, we show our theoretical research model with H1–H5.
Figure 1. Theoretical model
Consequently, we aim to investigate what role social presence plays in digital collaboration and, more
importantly, how different actors (AI-based team members versus human team members) influence this
mechanism.
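For illustration, the hypothesized paths can be written down as a small estimation sketch. The snippet below is not the analysis reported later in this paper (which uses PLS-SEM in SmartPLS 3); it assumes a hypothetical data frame of per-participant scale scores (SP, WTD, TOC, MTC) and the open-source semopy package, and merely makes the structure of H1–H5 concrete.

```python
# Sketch only: the hypothesized structure of Figure 1 in lavaan-style syntax.
# Column names (SP, WTD, TOC, MTC) and the input file are illustrative assumptions;
# the paper itself estimates the model with PLS-SEM in SmartPLS 3.
import pandas as pd
import semopy

MODEL_DESC = """
WTD ~ SP
TOC ~ WTD
MTC ~ SP + TOC
"""
# WTD ~ SP        -> H2
# TOC ~ WTD       -> H3
# MTC ~ SP + TOC  -> H1 (direct path) and H4
# H5 is the indirect chain SP -> WTD -> TOC -> MTC

data = pd.read_csv("scale_scores.csv")  # hypothetical per-participant scale means
model = semopy.Model(MODEL_DESC)
model.fit(data)
print(model.inspect())                  # path estimates corresponding to H1-H4
```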
Methodology
To test whether and how social presence influences motivation to contribute, we designed a between-
subjects, simulation-based online experiment. Our objective was to analyze whether creating the perception
of an AI-based teammate as a collaboration partner (keeping all other aspects of collaboration and team
contributions in a brainstorming session the same) makes a difference to individuals' motivation to
contribute. The experiment had two conditions. Subjects were randomly assigned either to the experimental
or to the control group. A special website called collaborationspace was developed for the experiment. On
the website, each participant was guided through a simulated brainstorming session. The website simulated
the presence of six geographically dispersed participants in a synchronous brainstorming session. Apart
from the subject, each participant was a rule-based script that was played to simulate a conversation.
In the experimental group, the participant was added to a group with five actor scripts. Three scripts
represented teammates and one script represented the facilitator; these scripts were presented as human
beings. The remaining actor script was presented as an AI-based system. The human actors had unisex names
(Kim, Taylor, Sam, Alex), while the AI-based system was named GenBo. In the control group, the participants
were also added to a group with five actor scripts. In this condition, all teammate and facilitator actor
scripts were presented as humans. All human actors had unisex names (Kim, Taylor, Sam, Alex). The AI-based
teammate, GenBo, was replaced by a human actor with the unisex name Taylor.
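To illustrate the mechanics of the simulation, a rule-based actor script can be reduced to a timed replay of pre-scripted messages in which only the displayed name of the fifth teammate depends on the condition. The sketch below is not the collaborationspace implementation; the timings, the third message, and the post_message callback are illustrative assumptions.

```python
# Minimal sketch of a rule-based actor script replaying a pre-scripted brainstorming
# conversation. Only the fifth teammate's displayed name differs between conditions;
# all timings and the example messages below are illustrative assumptions.
import time

SCRIPT = [
    # (seconds after session start, actor, message)
    (5,  "Facilitator",   "Welcome! Our task today: help a local zoo gain more visitors."),
    (40, "Kim",           "Another idea could be sky walks. Walking on bridges over them or between them."),
    (70, "FifthTeammate", "We could offer themed night tours when the nocturnal animals are active."),
    # ... remaining scripted contributions (21 ideas over six minutes in total)
]

def run_session(condition: str, post_message) -> None:
    """Replay the scripted session; only the fifth teammate's name depends on the condition."""
    fifth_name = "GenBo" if condition == "experimental" else "Taylor"
    start = time.time()
    for delay, actor, message in SCRIPT:
        time.sleep(max(0.0, delay - (time.time() - start)))  # wait until the scripted time
        name = fifth_name if actor == "FifthTeammate" else actor
        post_message(name, message)

# Example: print messages to the console instead of pushing them to a web client.
run_session("experimental", lambda name, msg: print(f"{name}: {msg}"))
```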
Experimental Procedure
The collaborative activity in the experiment followed three steps and took on average 15 minutes to
complete:
(1) Introduction: The process started with a short introduction of the task and a declaration of consent
implemented on Unipark. Participants were informed that the purpose of the activity was to test a newly
developed collaboration system that enabled users to synchronously brainstorm on a given task. In
addition, the participants were told that the collaboration system would automatically form a team for the
brainstorming task. At this point, the participants were randomly divided into the experimental and control
groups and were forwarded to collaborationspace.
(2) Team Composition and Brainstorming Session on Collaborationspace: The team composition process
was simulated and the “group formation” took around one minute in which all actors joined the session.
After the team composition process was complete, the participants went through a short introduction
session. Each actor introduced him/her/itself briefly with one or two sentences. In the experimental group,
GenBo introduced itself as an AI-based system ("Hey, I am GenBo. I may interact with you like a human,
but I am an Artificial Intelligence Bot."). In the control group, Taylor introduced him/herself (“Hey, I am
Taylor. Nice to meet you guys. Looking forward to our brainstorming session.”). After all participants had
introduced themselves, the facilitator initiated the brainstorming session and introduced the brainstorming
task: Helping a local zoo to gain more visitors.¹ The facilitator then informed the participants that they could
not build upon each other’s ideas. This resulted in a brainstorming session that lasted six minutes and had
21 messages (ideas). For example, the actor script ‘Kim’ contributed with the idea, “Another idea could be
sky walks. Walking on bridges over them or between them”. In both the experimental and control conditions,
the brainstorming task and brainstorming contributions that were provided remained identical. Only the
name of one of the teammates (GenBo [treatment group], Taylor [control group]) differed. After six
minutes of brainstorming, the facilitator ended the session.
(3) Survey: After the brainstorming session was concluded, the participants were automatically directed to
a survey on Unipark.
Measures
The survey questionnaire consisted of 35 items, including demographics. We used adaptations of the five-
item social presence scale by Qiu and Benbasat (2009), the four-item willingness to depend scale based on
McKnight et al. (2002), the seven-item team-oriented commitment scale (Van den Heuvel 1995), and the five-item
motivation to contribute scale (Lin 2006). All measures were on a 5-point Likert-type scale. As our
experiment required high attention, we took several measures to ensure that participants thoughtfully
participated in the experiment. First, we ensured that the participants stayed in the browser the entire time
by automatically canceling the experiment whenever participants closed or switched the browser window.
We also included some attention check questions in the survey. Furthermore, in the post-experiment
survey, we asked participants to provide the name of the AI-based teammate or facilitator, depending on
which condition they were in. Finally, following Driskell et al. (2010), we included a suspicion check (which
also served as a manipulation check) by asking whether the participants thought that GenBo was indeed an AI-
based teammate (vice versa for the control group) and a general suspicion check reflecting on the nature of
the experiment.
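A scoring sketch along these lines, with hypothetical column and item names for the survey export, would apply the exclusion checks and average the Likert items into scale scores; it is illustrative rather than the code actually used.

```python
# Illustrative scoring sketch (not the authors' code): apply the exclusion checks
# described above and compute scale means from the 5-point Likert items. All file,
# column, and item names are hypothetical assumptions about the survey export.
import pandas as pd

raw = pd.read_csv("survey_export.csv")            # hypothetical Unipark export

# Keep only participants who passed the attention, naming, and suspicion checks.
kept = raw[
    (raw["attention_check_passed"] == 1)
    & (raw["named_teammate_correctly"] == 1)      # e.g., "GenBo" or the facilitator's name
    & (raw["suspicion_check_passed"] == 1)
].copy()

# Scale means from the adapted instruments (item counts as reported above).
scales = {
    "SP":  [f"sp_{i}" for i in range(1, 6)],      # 5-item social presence
    "WTD": [f"wtd_{i}" for i in range(1, 5)],     # 4-item willingness to depend
    "TOC": [f"toc_{i}" for i in range(1, 8)],     # 7-item team-oriented commitment
    "MTC": [f"mtc_{i}" for i in range(1, 6)],     # 5-item motivation to contribute
}
scores = pd.DataFrame({name: kept[items].mean(axis=1) for name, items in scales.items()})
scores["condition"] = kept["condition"].values    # "experimental" (GenBo) vs. "control" (Taylor)
scores.to_csv("scale_scores.csv", index=False)
```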
Participants
We recruited 180 participants from Amazon Mechanical Turk (MTurk). We chose MTurk, as it offers an
integrated compensation system and access to a large pool of participants. All participants were required to
have the Master Qualification as defined by MTurk. Due to the implementation of several attention checks
and suspicion checks, 80 participants were removed from our analysis (11 failed the basic attention check,
31 failed to provide the name of GenBo, 17 failed to provide the facilitator name, and 21 were suspicious that
the brainstorming was simulated). This resulted in a dataset of 100 participants with 54 in the experimental
group and 46 in the control group. The average age of the participants was 42 years. 42 participants were
female, 57 were male, and one preferred not to disclose. Eleven participants had a high school degree or
equivalent (e.g., GED), 10 had some college but no degree, 6 had an associate degree (e.g., AA, AS), 55 had
a bachelor's degree (e.g., BA, BS), 17 had a master's degree (e.g., MA, MS, MEd), and one had a professional
degree (e.g., MD, DDS, DVM).

¹ The entire brainstorming session, including the participants' ideas and responses, is based on a brainstorming session
conducted and filmed by Ideo in 2017. The original interaction was transcribed, and the answers were slightly adjusted
(e.g., partially shortened, expressions used mainly in spoken language removed, etc.). The entire video/transcript can
be seen here: https://www.youtube.com/watch?v=VvdJzeO9yN8
Results
We performed an analysis of reliability, where all scales met the minimum required Cronbach's alpha value
of 0.700 (Social presence = 0.941; willingness to depend = 0.897; team-oriented commitment = 0.796;
motivation to contribute = 0.744). We used confirmatory factor analysis to test our measures, which showed
that all items have significant (p < .001) positive factor loadings higher than .700.
Furthermore, we calculated composite reliability (CR) and average variance extracted (AVE), both
indicating reliable factors (Urbach et al. 2010). To test the hypotheses, we performed a mediation analysis.
We tested the research model using structural equation modeling with partial least squares analysis in
SmartPLS 3. We used bootstrap resampling with 1,000 samples to assess the significance of the paths.
The research model is a multiple mediation model with the outcome variable motivation to
contribute. The predictor variable is social presence. The mediator variables are willingness to depend and
team-oriented commitment. The direct effect of social presence on motivation to contribute is not
significant (β=0.29), rejecting H1. The direct effect of social presence on willingness to depend is significant
(β=0.80**), supporting H2, as is the direct path from willingness to depend to team-oriented
commitment (β=0.42*), supporting H3. Furthermore, the direct path from team-oriented commitment to
motivation to contribute is significant (β=0.86**) supporting H4. The indirect effect of social presence on
motivation to contribute is significant (β=0.30*), mediated by willingness to depend and team-oriented
commitment, supporting H5. Figure 2 shows the multiple mediation model including the significant direct
and indirect paths.
* p < 0.05; ** p < 0.001
Figure 2. Multiple Mediation Analysis
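The indirect effect in H5 is the product of the three serial paths, whose significance is assessed via bootstrapping. The sketch below illustrates this logic with plain OLS path regressions rather than PLS weighting, so its coefficients would not match the betas reported above; the file and column names are hypothetical.

```python
# Minimal sketch (not the SmartPLS analysis) of the bootstrapped serial mediation
# SP -> WTD -> TOC -> MTC. Simple OLS path regressions stand in for PLS weighting,
# and the input file and column names are illustrative assumptions.
import numpy as np
import pandas as pd

def ols_slope(x: np.ndarray, y: np.ndarray) -> float:
    """Slope of y regressed on x (with intercept)."""
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(beta[1])

def indirect_effect(df: pd.DataFrame) -> float:
    """Product of the serial paths SP->WTD, WTD->TOC, and TOC->MTC."""
    a = ols_slope(df["SP"].to_numpy(float), df["WTD"].to_numpy(float))
    b = ols_slope(df["WTD"].to_numpy(float), df["TOC"].to_numpy(float))
    c = ols_slope(df["TOC"].to_numpy(float), df["MTC"].to_numpy(float))
    return a * b * c

df = pd.read_csv("scale_scores.csv")                   # hypothetical scale means
rng = np.random.default_rng(2022)
boot = np.array([
    indirect_effect(df.sample(frac=1.0, replace=True, random_state=rng))
    for _ in range(1000)                               # 1000 resamples, as in the analysis above
])
low, high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.2f}, 95% bootstrap CI [{low:.2f}, {high:.2f}]")
```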
Discussion and Outlook
The multi-mediation model shows that social presence is an essential aspect of collaboration between
human and AI-based teammates. It indirectly influences whether human team members are motivated to
contribute to a team with AI-based teammates. However, this influence is not direct (rejecting H1), and is
mediated by willingness to depend and team-oriented commitment (supporting H5). The perceived social
presence, which was significantly lower for the AI-based teammate GenBo (M=3.32, SD=1.05) than for
the human teammate (M=4.01, SD=0.67, p<0.001), had a strong effect on the willingness to depend
(β=0.80**). Our results confirm that people are less willing to depend on each other if no social presence is
felt. Although this has already been confirmed by various studies within the realm of computer-mediated
communication (Rourke et al. 1999; Tu and McIsaac 2002) or human-AI interaction (Elshan et al. 2022),
our results show that simply creating the perception of the collaboration partner as AI as opposed to naming
it as a human makes a significant difference: people feel a different social presence between these two
situations. This shows that people are less willing to depend on an AI-based teammate than they would depend
on a human. This is particularly interesting since the content of the contributions from the GenBo and
Taylor teammates was identical. Furthermore, willingness to depend has a strong direct effect on team-
oriented commitment (β=0.42*). This shows that if participants in a team are less willing to rely on
someone in the team, team-oriented commitment also decreases, since only a coherent team with reliable
team members generates commitment (Sheng et al. 2010). If there is less commitment to the team, the
motivation to contribute decreases, which can affect overall team performance (Chang et al. 2010; Sheng et
al. 2010). It is therefore not surprising that motivation to contribute is lower in the experimental group
(M=4.07, SD=0.65) than in the control group (M=4.29, SD=0.48), which, however, is not a significant
difference (p=1.92). The direct effect of team-oriented commitment on motivation to contribute shows this
relationship (β=0.86**). Subsequently, social presence influences the motivation to contribute only as
mediated by willingness to depend and team-oriented commitment; it has no significant power to
influence the motivation to contribute directly.
Our results are in line with existing studies that show that social presence is an essential part of the
interaction with AI-based systems. Research on conversational agents also highlights the importance of
social presence for communication satisfaction and successful service encounters (Diederich et al. 2022;
Elshan et al. 2022). Furthermore, there is an emphasis on the design of AI-based interaction partners to
enhance social presence and ensure a pleasant interaction (van Doorn et al. 2017; Qiu and Benbasat 2009).
However, these studies do not evaluate the basic conditions that need to be present for effective
collaboration with AI-based systems. This study fills that gap and expands the body of knowledge. Our
findings suggest that before evaluating the design and implications of AI-based systems like conversational
agents, one should consider investigating insights about human behavior and expectations for collaboration
with AI-based systems. These insights form an essential foundation for further design choices. Our findings
also contribute to the understanding of team collaboration and the effect of social presence on motivation
to contribute. We show that social presence is not only important in one-to-one interaction scenarios, as
demonstrated by Qiu and Benbasat (2009), who show a direct effect of social presence on trusting beliefs
(which is similar to willingness to depend) in human-AI interaction, but also important for human-AI
collaboration, as shown by the mediated effect on team-oriented commitment and motivation to contribute.
We assume that people might have reservations, anxiety, and concerns about working with an AI teammate
and are therefore less willing to depend on the AI teammate (Thiebes et al. 2021) due to not being familiar
with or not understanding the AI-based teammate. Future research on AI-based systems must address these
concerns in a meaningful way. Willingness to depend is a factor that is related to trust, and people tend not
to trust collaboration partners whom they do not understand or are not familiar with (Thiebes et al. 2021).
While contributing to the foundational body of AI-human research, this study has several limitations that
provide opportunities for future research. One limitation is the short duration of the experimental task
during which the actual collaboration takes place. Although creativity sessions often take place with ad-hoc
teams, collaboration with all its dynamics and social facets is difficult to achieve over such a short period of
time. The nature of our simulation experiment is another limitation that needs to be considered. While only
responses of participants who passed the suspicion check were used, it cannot be ruled out that participants
suspected that the actors were in fact rule-based scripts. Our results show that social presence is an
important mechanism influencing expected team performance regarding the willingness to depend, which
calls for further research to investigate how social presence can be increased in AI-based teammates. While
research on conversational agents mostly focuses on the design of such systems (e.g., incorporating
anthropomorphic features), we aim to elaborate on whether the familiarity with AI or the understandability
of AI may influence factors that affect social presence. Further research can also pursue a qualitative
evaluation of the participants' ideas, as they collectively produced 5815 words during the brainstorming
session (an average of 58.1 words per person). This could shed light on the extent to which the presence of
an AI-based teammate may have influenced the quantity as well as the quality of the ideas and on whether
the motivation to contribute influenced the nature and quality of the actual contributions.
Acknowledgement
This research was supported by the Dr. Hans Riegel Foundation (Connex! Grant 2110-018).
References
Anderson, J., Rainie, L., and Luchsinger, A. 2018. “Artificial Intelligence and the Future of Humans,” Pew
Research Center (10), p. 12.
Biocca, F., and Harms, C. 2002. “Defining and Measuring Social Presence: Contribution to the Networked
Minds Theory and Measure,” Proceedings of PRESENCE (2002), Citeseer, pp. 136.
Bittner, E., Mirbabaie, M., and Morana, S. 2021. Digital Facilitation Assistance for Collaborative, Creative
Design Processes, presented at the Proceedings of the 54th HICSS., p. 370.
Bunderson, J. S., and Sutcliffe, K. M. 2002. “Comparing Alternative Conceptualizations of Functional
Diversity in Management Teams: Process and Performance Effects,” Academy of Management
Journal (45:5), US: Academy of Management, pp. 875–893. (https://doi.org/10.2307/3069319).
Chae, S. W., Seo, Y. W., and Lee, K. C. 2015. “Task Difficulty and Team Diversity on Team Creativity: Multi-
Agent Simulation Approach,” Computers in Human Behavior (42), Elsevier, pp. 83–92.
Chang, K., Sheu, T. S., Klein, G., and Jiang, J. J. 2010. “User Commitment and Collaboration: Motivational
Antecedents and Project Performance,” Information & Software Technology (52:6), pp. 672–679.
Daugherty, P. R., and Wilson, H. J. 2018. Human+ Machine: Reimagining Work in the Age of AI, Harvard
Business Press.
Davenport, T., Guha, A., Grewal, D., and Bressgott, T. 2020. “How Artificial Intelligence Will Change the
Future of Marketing,” Journal of the Academy of Marketing Science (48:1), pp. 24–42.
Dellermann, D., Ebel, P., Söllner, M., and Leimeister, J. M. 2019. “Hybrid Intelligence,” BISE (61:5), pp.
637–643.
Derrick, D. C., Read, A., Nguyen, C., Callens, A., and De Vreede, G.-J. 2013. “Automated Group Facilitation
for Gathering Wide Audience End-User Requirements,” in 2013 46th Hawaii International
Conference on System Sciences, IEEE, pp. 195–204.
Diederich, S., Brendel, A. B., Morana, S., and Kolbe, L. 2022. On the Design of and Interaction with
Conversational Agents: An Organizing and Assessing Review of Human-Computer Interaction
Research.
van Doorn, J., Mende, M., Noble, S., Hulland, J., Ostrom, A. L., Grewal, D., and Petersen, J. A. 2017. “Domo
Arigato Mr. Roboto: Emergence of Automated Social Presence in Organizational Frontlines and
Customers’ Service Experiences,” Journal of Service Research (20:1), pp. 43-58.
Dubey, R., Gunasekaran, A., Childe, S. J., Bryde, D. J., Giannakis, M., Foropon, C., Roubaud, D., and Hazen,
B. T. 2020. “Big Data Analytics and Artificial Intelligence Pathway to Operational Performance
under the Effects of Entrepreneurial Orientation and Environmental Dynamism: A Study of
Manufacturing Organisations,” International Journal of Production Economics (226), pp. 1–12.
Elkins, A. C., Dunbar, N. E., Adame, B., and Nunamaker, J. F. 2013. “Are Users Threatened by Credibility
Assessment Systems?,” JMIS (29:4), Taylor & Francis, pp. 249–262.
Elshan, E., and Ebel, P. 2020. Let’s Team Up: Designing Conversational Agents as Teammates.
Elshan, E., Siemon, D., de Vreede, T., de Vreede, G.-J., Oeste-Reiß, S., and Ebel, P. 2022. Requirements for
AI-Based Teammates: A Qualitative Inquiry in the Context of Creative Workshops.
Elshan, E., Zierau, N., Engel, C., Janson, A., and Leimeister, J. M. 2022. “Understanding the Design
Elements Affecting User Acceptance of Intelligent Agents: Past, Present and Future,” Information
Systems Frontiers, Springer, pp. 1–32.
Fabriek, M., van Brand, M., Brinkkemper, S., and Harmsen, F. 2008. “Reasons for Success and Failure in
Offshore Software Development Projects,” European Conference on Information Systems.
Haines, R. 2021. “Activity Awareness, Social Presence, and Motivation in Distributed Virtual Teams,”
Information & Management (58:2), p. 103425.
Heeter, C. 1992. “Being There: The Subjective Experience of Presence,” Presence: Teleoperators & Virtual
Environments (1:2), MIT Press, pp. 262–271.
Jarrahi, M. H. 2018. “Artificial Intelligence and the Future of Work: Human-AI Symbiosis in Organizational
Decision Making,” Business Horizons (61:4), pp. 577–586.
Kamar, E. 2016. “Directions in Hybrid Intelligence: Complementing AI Systems with Human Intelligence,”
in IJCAI, pp. 4070–4073.
Kim, H., Suh, K.-S., and Lee, U.-K. 2013. “Effects of Collaborative Online Shopping on Shopping Experience
through Social and Relational Perspectives,” Information & Management (50:4), pp. 169–180.
Larson, D. A. 2010. “Artificial Intelligence: Robots, Avatars, and the Demise of the Human Mediator,” Ohio
St. J. on Disp. Resol. (25), HeinOnline, p. 105.
Leimeister, J. M. 2014. Collaboration Engineering: IT-Gestützte Zusammenarbeitsprozesse Systematisch
Entwickeln Und Durchführen, Springer-Verlag.
Mohammed, S., and Angell, L. C. 2003. “Personality Heterogeneity in Teams,” SGR (34:6), pp. 651–677.
Mozafari, N., Weiger, W. H., and Hammerschmidt, M. 2021. That’s so Embarrassing! When Not to Design
for Social Presence in Human–Chatbot Interactions.
Peters, L. M., and Manz, C. C. 2007. “Identifying Antecedents of Virtual Team Collaboration,” Team
Performance Management: An International Journal, Emerald Group Publishing Limited.
Poser, M., Küstermann, G. C., Tavanapour, N., and Bittner, E. A. 2022. “Design and Evaluation of a
Conversational Agent for Facilitating Idea Generation in Organizational Innovation Processes,”
Information Systems Frontiers, Springer, pp. 1–26.
Qiu, L., and Benbasat, I. 2009. “Evaluating Anthropomorphic Product Recommendation Agents: A Social
Relationship Perspective to Designing Information Systems,” Journal of Management
Information Systems (25:4), pp. 145–182.
Rampersad, G. 2020. “Robot Will Take Your Job: Innovation for an Era of Artificial Intelligence,” Journal
of Business Research (116), Elsevier, pp. 68–74.
Randrup, N., Druckenmiller, D., and Briggs, R. O. 2016. “Philosophy of Collaboration,” in 49th HICSS,
IEEE, pp. 898–907.
Rourke, L., Anderson, T., Garrison, D. R., and Archer, W. 1999. “Assessing Social Presence in Asynchronous
Text-Based Computer Conferencing,” The Journal of Distance Education/Revue de l’éducation à
distance (14:2), Athabasca University Press, pp. 50–71.
Russell, S. J., and Norvig, P. 2021. Artificial Intelligence: A Modern Approach, (Fourth ed.), Pearson.
Seeber, I., Bittner, E., Briggs, R. O., de Vreede, T., de Vreede, G.-J., Elkins, A., Maier, R., Merz, A. B., Oeste-Reiß,
S., Randrup, N., Schwabe, G., and Söllner, M. 2020. “Machines as Teammates: A Research Agenda
on AI in Team Collaboration,” Information & Management (57:2), p. 103174.
Shaikh, S. J., and Cruz, I. F. 2022. “AI in Human Teams: Effects on Technology Use, Members’ Interactions,
and Creative Performance under Time Scarcity,” AI & SOCIETY, Springer, pp. 1–14.
Sheng, C.-W., Tian, Y.-F., and Chen, M.-C. 2010. “Relationships among Teamwork Behavior, Trust,
Perceived Team Support, and Team Commitment,” Social Behavior and Personality: An
International Journal (38:10), Scientific Journal Publishers, pp. 1297–1305.
Short, J., Williams, E., and Christie, B. 1976. The Social Psychology of Telecommunications, John Wiley &
Sons.
Siemon, D. 2022. “Let the Computer Evaluate Your Idea: Evaluation Apprehension in Human-Computer
Collaboration,” Behaviour & Information Technology, Taylor & Francis, pp. 1–19.
Siemon, D., Becker, F., Eckardt, L., and Robra-Bissantz, S. 2019. “One for All and All for One – Towards a
Framework for Collaboration Support Systems,” Education and Information Technologies (24:2),
Springer, pp. 1837–1861.
Siemon, D., Li, R., and Robra-Bissantz, S. 2020. “Towards a Model of Team Roles in Human-Machine
Collaboration,” ICIS 2020 Proceedings.
Siemon, D., and Wank, F. 2021. Collaboration With AI-Based Teammates – Evaluation of the Social
Loafing Effect.
Stieglitz, S., Mirbabaie, M., Möllmann, N. R., and Rzyski, J. 2021. “Collaborating with Virtual Assistants in
Organizations: Analyzing Social Loafing Tendencies and Responsibility Attribution,” Information
Systems Frontiers, Springer, pp. 1–26.
Strohmann, T., Siemon, D., and Robra-Bissantz, S. 2017. BrAInstorm: Intelligent Assistance in Group Idea
Generation, presented at the International Conference on Design Science Research in Information
System and Technology, Springer, pp. 457–461.
Thiebes, S., Lins, S., and Sunyaev, A. 2021. “Trustworthy Artificial Intelligence,” Electronic Markets (31:2),
pp. 447–464.
Tu, C.-H., and McIsaac, M. 2002. “The Relationship of Social Presence and Interaction in Online Classes,”
The American Journal of Distance Education (16:3), Taylor & Francis, pp. 131–150.
Urbach, N., Ahlemann, F., and others. 2010. “Structural Equation Modeling in Information Systems
Research Using Partial Least Squares,” Journal of Information Technology Theory and Application
(11:2), pp. 5–40.
Wamba-Taguimdje, S.-L., Fosso Wamba, S., Kala Kamdjoug, J. R., and Tchatchouang Wanko, C. E. 2020.
“Influence of Artificial Intelligence (AI) on Firm Performance: The Business Value of AI-Based
Transformation Projects,” Business Process Management Journal (ahead-of-print).
Wei, J., Seedorf, S., Lowry, P. B., Thum, C., and Schulze, T. 2017. “How Increased Social Presence through
Co-Browsing Influences User Engagement in Collaborative Online Shopping,” Electronic
Commerce Research and Applications (24), pp. 84–99.
Zhang, R., McNeese, N. J., Freeman, G., and Musick, G. 2021. “An Ideal Human,” Proceedings of the ACM
on Human-Computer Interaction (4:CSCW3), pp. 1–25.