Background:
Chatbots are increasingly used to support COVID-19 vaccination programs. Their persuasiveness may depend on the context of the conversation.
Objective:
To investigate whether conversation quality and chatbot expertise cues moderate the effects of empathy/autonomy-support expression by COVID-19 vaccine chatbots.
Methods:
An experiment with 196 Dutch-speaking adults living in Belgium, who engaged in a conversation with a chatbot providing vaccination information, used a 2 (empathy/autonomy-support expression: present vs. absent) × 2 (chatbot expertise cues: expert endorser vs. layperson endorser) between-subjects design. Chatbot conversation quality was assessed from the actual conversation logs. Perceived user autonomy (PUA), chatbot patronage intention (CPI), and vaccination intention shift (VIS) were measured after the conversation; PUA and CPI were coded from 1 to 5, and VIS from -5 to 5.
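For illustration, a minimal sketch (not the study's actual code; the log format, function name, and exact fallback phrasing are assumptions) of how conversation fallback could be computed from a participant's conversation log:

```python
# Hypothetical sketch: conversation fallback (CF) for one participant,
# defined as the percentage of chatbot turns that were the fallback
# answer "I do not understand". Log format and names are assumptions.

FALLBACK_TEXT = "I do not understand"

def conversation_fallback(chatbot_turns: list[str]) -> float:
    """Return CF as the percentage of fallback answers among all chatbot turns."""
    if not chatbot_turns:
        return 0.0
    n_fallback = sum(1 for turn in chatbot_turns if FALLBACK_TEXT in turn)
    return 100 * n_fallback / len(chatbot_turns)

# Example: 2 fallback answers out of 8 chatbot turns -> CF = 25.0
```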
Results:
There was a negative interaction effect of chatbot empathy/autonomy-support expression and conversation fallback (the percentage of "I do not understand" answers given by the chatbot in a conversation) on PUA (PROCESS, Model 1, B = -3.358, SE = 1.235, t(186) = 2.718, P = .007). Specifically, empathy/autonomy-support expression had a more negative effect on PUA when conversation fallback (CF) was higher (conditional effect of empathy/autonomy-support expression at the +1SD level of CF: B = -.405, SE = .158, t(186) = 2.564, P = .011; conditional effects were nonsignificant at the mean level (B = -.103, SE = .113, t(186) = .914, P = .36) and the -1SD level (B = .031, SE = .123, t(186) = .252, P = .80)). Moreover, the indirect effect of empathy/autonomy-support expression on CPI via PUA was more negative when CF was higher (PROCESS, Model 7, 5000 bootstrap samples, moderated mediation index = -3.676, BootSE = 1.614, 95% CI [-6.697, -.102]; conditional indirect effect at the +1SD level of CF: B = -.443, BootSE = .202, 95% CI [-.809, -.005]; conditional indirect effects were nonsignificant at the mean level (B = -.113, BootSE = .124, 95% CI [-.346, .137]) and the -1SD level (B = .034, BootSE = .132, 95% CI [-.224, .305])). Indirect effects of empathy/autonomy-support expression on VIS via PUA were marginally more negative when CF was higher. No effects of chatbot expertise cues were found.
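As a rough illustration of the moderation analysis, the sketch below (not the authors' analysis script; PROCESS is Hayes's SPSS/SAS/R macro, so this is only an approximate OLS equivalent, and the data frame columns empathy (coded 0/1), cf, and pua are hypothetical names) fits the interaction model and probes the conditional effect of empathy/autonomy-support expression at CF = mean and mean ± 1 SD:

```python
# Minimal sketch of a Model 1-style moderation analysis in Python,
# assuming a pandas DataFrame 'df' with columns 'empathy' (0/1),
# 'cf' (conversation fallback, %), and 'pua' (perceived user autonomy).
import pandas as pd
import statsmodels.formula.api as smf

def probe_moderation(df: pd.DataFrame) -> None:
    # PUA regressed on the treatment, the moderator, and their interaction
    model = smf.ols("pua ~ empathy * cf", data=df).fit()
    print(model.summary())

    # Conditional (simple) effect of empathy at a given CF value:
    # effect = b_empathy + b_interaction * cf_value
    b = model.params
    for label, cf_value in [("-1SD", df.cf.mean() - df.cf.std()),
                            ("mean", df.cf.mean()),
                            ("+1SD", df.cf.mean() + df.cf.std())]:
        effect = b["empathy"] + b["empathy:cf"] * cf_value
        print(f"Conditional effect of empathy at CF {label}: {effect:.3f}")
```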
Conclusions:
The findings suggest that a chatbot's expression of empathy/autonomy-support may harm its evaluation and persuasiveness when the chatbot fails to answer its users' questions. The paper adds to the literature on vaccine chatbots by exploring the conditional effects of chatbot empathy/autonomy-support expression. The results can guide policymakers and chatbot developers working on vaccination promotion in designing how chatbots express empathy and support for user autonomy.