Project

Designing Conversational Agents

Goal: To advance existing research on conversational agent design by adapting human concepts to human–conversational agent interaction.

Updates: 0 · Recommendations: 0 · Followers: 2 · Reads: 8

Project log

Fabian Reinkemeier
added 7 research items
Voice assistants (VAs) such as Google Assistant and Amazon Alexa are spreading rapidly. They offer users the opportunity to order products online through spoken dialogue (voice commerce). However, the widespread adoption of voice commerce is hindered by a lack of satisfaction and trust among VA users. This study investigates whether social cues, and the accompanying perception of the VA’s humanness and social presence, can overcome existing obstacles in voice commerce. An empirical comparison (N = 323) of two VAs (low vs. high level of social cues) shows that endowing VAs with more cues increases user satisfaction. Nevertheless, the analysis does not reveal entirely positive effects on perceived trust or its dimensions of benevolence, competence, and integrity. Surprisingly, users had less trust in the integrity of a VA with more social cues. A more differentiated view requires an in-depth analysis of the individual cues and their interactions.
This research examines the impact of more humanlike design in voice apps on parasocial interaction and relationship quality in voice commerce.
The use of voice assistants is spreading rapidly, enabling companies to develop voice apps and establish a natural form of spoken dialogue in e-commerce. However, such voice commerce remains limited, as the apps struggle to provide satisfying interactions and build sufficient trust. To address these issues, we investigate the use of anthropomorphic designs in voice commerce and present a laboratory experiment (N = 323) demonstrating the significance of humanness and social presence in interactions. Our findings highlight the importance of endowing voice apps with social cues, as doing so leads customers to more satisfying interactions and higher trust in the benevolence of voice apps. However, our results also reveal that such a design has no effect on trust in a voice app’s competence and even a negative effect on trust in its integrity. Nevertheless, an anthropomorphic design increases behavioral intentions to use and recommend the voice app.
Fabian Reinkemeier
added a project goal
To advance existing research on conversational agent design by adapting human concepts to human–conversational agent interaction.