Psychology & Marketing
RESEARCH ARTICLE
Gendered Artificial Intelligence in Marketing: Behavioral and Neural Insights Into Product Recommendations
Jiayue Huang¹,² | Ruolei Gu³,⁴ | Yi Feng⁵ | Wenbo Luo¹,²

¹Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China | ²Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning Province, China | ³CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China | ⁴Department of Psychology, University of Chinese Academy of Sciences, Beijing, China | ⁵Mental Health Center, Central University of Finance and Economics, Beijing, China
Correspondence: Ruolei Gu (gurl@psych.ac.cn) | Wenbo Luo (wenbo9390@sina.com)
Received: 8 October 2024 | Revised: 8 January 2025 | Accepted: 9 January 2025
Funding: This study was funded by the National Natural Science Foundation of China (32020103008, 32071083, 32371130) and the Beijing Philosophy and Social Science Foundation (24DTR063).
Keywords: artificial intelligence | consumption recommendation | event-related potential | gender stereotypes | utilitarian vs. hedonic products
ABSTRACT
Marketing research consistently demonstrates that gender stereotypes influence the effectiveness of product recommendations. When artificial intelligence (AI) agents are designed with gendered features to enhance anthropomorphism, a follow-up question is whether these agents' recommendations are also shaped by gender stereotypes. To investigate this, the current study employed a shopping task featuring product recommendations (utilitarian vs. hedonic), using both behavioral measures (purchase likelihood, personal interest, and tip amount) and event-related potential components (P1, N1, P2, N2, P3, and late positive potential) to capture explicit and implicit responses to products recommended by male and female humans, virtual assistants, or robots. The findings revealed that gender stereotypes influenced responses at both levels but in distinct ways. Behaviorally, participants consistently favored female recommenders across all conditions. Additionally, female recommenders received more tips than males for hedonic products in the virtual assistant condition and utilitarian products in the robot condition. Implicitly, the N1 and N2 components reflected a classic gender stereotype from prior research: utilitarian products recommended by male humans elicited greater attention and recruited more inhibitory control. We propose that task design and cultural factors may have contributed to the observed discrepancies between explicit (consumer behaviors) and implicit responses. These findings provide insights for mitigating the impact of gender differences when designing the anthropomorphic appearance of AI agents, which would support the development of more effective marketing strategies.
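Note on analysis setup (illustrative): the abstract describes a design crossing product type (utilitarian vs. hedonic), recommender gender (male vs. female), and agent type (human, virtual assistant, robot), with behavioral measures such as tip amount compared across cells. The article does not report its analysis code; the sketch below is a minimal, hypothetical illustration of how such a repeated-measures comparison could be set up in Python on simulated data, assuming a fully within-subject design and the statsmodels AnovaRM interface. The column names, participant count, and toy effect size are assumptions for illustration only, not values from the study.

```python
# Hypothetical sketch: repeated-measures ANOVA over a 2 (product type) x 2
# (recommender gender) x 3 (agent type) within-subject design, using
# simulated tip amounts in place of the study's real data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

subjects = range(1, 31)                      # 30 simulated participants (assumption)
product_types = ["utilitarian", "hedonic"]
genders = ["male", "female"]
agents = ["human", "virtual_assistant", "robot"]

rows = []
for s in subjects:
    for product in product_types:
        for gender in genders:
            for agent in agents:
                # Toy effect: slightly larger tips for female recommenders,
                # loosely mirroring the behavioral pattern described in the abstract.
                base = 5.0 + (0.4 if gender == "female" else 0.0)
                rows.append({
                    "subject": s,
                    "product": product,
                    "gender": gender,
                    "agent": agent,
                    "tip": base + rng.normal(0, 1.0),
                })

df = pd.DataFrame(rows)

# One observation per subject per cell, so no aggregation is needed.
anova = AnovaRM(df, depvar="tip", subject="subject",
                within=["product", "gender", "agent"]).fit()
print(anova)
```

A full ERP analysis would additionally require extracting mean component amplitudes (e.g., N1, N2) within predefined time windows for each condition before a comparable model could be applied.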
1 | Introduction
Artificial intelligence (AI)-driven consumption recommendations have become increasingly important in our daily lives, allowing consumers to quickly discover products they might be interested in (Adomavicius et al. 2018; Xiao and Benbasat 2018). However, acceptance of these AI recommendations is remarkably lower than that of human-provided suggestions, even when evidence indicates that AI outperforms human judgment (Castelo, Bos, and Lehmann 2019; Prahl and Van Swol 2017). This resistance to algorithmic advice, known as algorithm aversion, persists across various domains (Chen, Dang, and Liu 2024; Dietvorst, Simmons, and Massey 2015). For example, when making medical decisions, patients are less willing to adopt algorithmic advice, fearing that such systems may neglect their specific conditions (Longoni, Bonezzi, and Morewedge 2019). Also, AI voice assistants are often perceived as lacking warmth and empathy (Lou, Kang, and Tse 2022). A recent
© 2025 Wiley Periodicals LLC. | Psychology & Marketing, 2025; 42:1415–1431 | https://doi.org/10.1002/mar.22186
Article
Purpose: This study aimed to investigate the neural mechanism by which virtual chatbots' gender might influence users' usage intention and gender differences in human–machine communication.
Approach: Event-related potentials (ERPs) and subjective questionnaire methods were used to explore the usage intention of virtual chatbots, and statistical analysis was conducted through repeated measures ANOVA.
Results/findings: The findings of ERPs revealed that female virtual chatbots, compared to male virtual chatbots, evoked a larger amplitude of P100 and P200, implying a greater allocation of attentional resources toward female virtual chatbots. Considering participants' gender, the gender factors of virtual chatbots continued to influence N100, P100, and P200. Specifically, among female participants, female virtual chatbots induced a larger P100 and P200 amplitude than male virtual chatbots, indicating that female participants exhibited more attentional resources and positive emotions toward same-gender chatbots. Conversely, among male participants, male virtual chatbots induced a larger N100 amplitude than female virtual chatbots, indicating that male participants allocated more attentional resources toward male virtual chatbots. The results of the subjective questionnaire showed that regardless of participants' gender, users have a larger usage intention toward female virtual chatbots than male virtual chatbots.
Value: Our findings could provide designers with neurophysiological insights into designing better virtual chatbots that cater to users' psychological needs.
Article
This study investigated the impact of task types (functional vs. social) and the gendered voices (female vs. male) of Siri, an intelligent virtual assistant (IVA), on social presence and trust perceptions toward the IVA. In an online experiment involving 172 participants, individuals were randomly assigned to one of four conditions, interacting with Siri on their iPhones for various task inquiries. Results from multivariate analyses of covariances revealed significant differences in trust levels based on the type of task. Trust was found to be higher for functional tasks when assisted by Siri, compared to social tasks. However, there was no significant difference in trust based on Siri's gendered voices. Post-hoc analyses indicated significant interactions between the gender match of participants and Siri's gendered voices in two dimensions of trust. Men tended to trust the male-voiced Siri more than the female-voiced Siri, while women did not exhibit a preference for the female voice over the male voice. This study successfully replicated the task effect observed in prior research but did not replicate the gender effect. Key distinctions between the current study and previous ones include language and the participants' nationality, with this study focusing on Korean participants interacting with Korean Siri.
Article
Purpose: Drawing on person–environment fit theory, this study aims to investigate how the relationships between service task types (i.e. utilitarian and hedonic service tasks) and perceived authenticity (i.e. service and brand authenticity) differ under different conditions of service providers (human employee vs service robot). This study further examines whether customers' stereotypes toward service robots (competence vs warmth) moderate the relationship between service types and perceived authenticity.
Design/methodology/approach: Using a 2 × 2 between-subjects experimental design, Study 1 examines a casual restaurant, whereas Study 2 assesses a theme park restaurant. Analysis of covariance and PROCESS are used to analyze the data.
Findings: Both studies reveal that human service providers in hedonic services positively affect service and brand authenticity more than robotic employees. Additionally, the robot competence stereotype moderates the relationship between hedonic services, service and brand authenticity, whereas the robot warmth stereotype moderates the relationship between hedonic services and brand authenticity in Study 2.
Practical implications: Restaurant managers need to understand which functions and types of service outlets are best suited for service robots in different service contexts. Robot–environment fit should be considered when developers design and managers select robots for their restaurants.
Originality/value: This study blazes a new theoretical trail of service robot research to systematically propose customer experiences with different service types by drawing upon person–environment fit theory and examining the moderating role of customers' stereotypes toward service robots.
Article
This paper explores human trust in artificial intelligence (AI), focusing on the effects of social categorization (ingroup vs. outgroup) and AI human‐likeness through two pre‐registered studies involving 160 participants each. The first study, a lab experiment in China, and the second, an online experiment representative of the United States, both utilized a trust game to assess trust across four conditions: ingroup‐humanoid AI, ingroup‐non‐humanoid AI, outgroup‐humanoid AI, and outgroup‐non‐humanoid AI. Results indicated higher trust for ingroup and humanoid AIs, with statistical significance. Mixed‐design ANOVA was used to analyze the data, revealing significant main effects and interactions. The second study also identified an emotional connection as a mediator in trust, suggesting significant design implications for AI in trust‐critical sectors like healthcare and autonomous transportation.
Article
Perceptions of algorithms as opaque, commonly referred to as the black box problem, can make people reluctant to accept a recommendation from an algorithm rather than a human. Interventions that enhance people’s subjective understanding of algorithms have been shown to reduce this aversion. However, across four preregistered studies (N = 960), we found that in the online shopping context, after explaining the algorithm recommendation process (versus human recommendation), users felt dehumanized and thus averse to algorithms (Study 1). This effect persisted, regardless of the type of algorithm (i.e., conventional algorithms or large language models; Study 2) or recommended product (i.e., search or experience products; Study 3). Notably, considering large language models (versus conventional algorithms) as the recommendation agent (Study 2) and framing algorithm recommendation as consumer-serving (versus website-serving; Study 4) mitigated algorithm aversion caused by meta-dehumanization. Our findings contribute to ongoing discussions on algorithm transparency, enrich the literature on human–algorithm interaction, and provide practical insights for encouraging algorithm adoption.
Article
Studying heroism in controlled settings presents challenges and ethical controversies due to its association with physical risk. Leveraging virtual reality (VR) technology, we conducted a three-study series with 397 participants from China to investigate heroic actions. Participants unexpectedly witnessed a criminal event in a simulated scenario, allowing observation of their tendency to physically intercept a thief. We examined situational factors (voluntariness, authority, and risk) and personal variables [gender, impulsivity, empathy, and social value orientation (SVO)] that may influence heroism. Also, the potential association between heroism and social conformity was explored. In terms of situational variables, voluntariness modulated participants’ tendency to intercept the escaping thief, while perceived risk demonstrated its impact by interacting with gender. That is, in study 3 where the perceived risk was expected to be higher (as supported by an online study 5), males exhibited a greater inclination toward heroic behavior compared to females. Regarding other personal variables, the tendency to engage in heroic behavior decreased as empathy levels rose among males, whereas the opposite trend was observed for females. SVO influenced heroic behavior but without a gender interaction. Finally, an inverse relationship between heroism and social conformity was observed. The robustness of these findings was partly supported by the Chinese sample (but not the international sample) of an online study 4 that provided written descriptions of VR scenarios, indicating cultural variations. These results advance insights into motivational factors influencing heroism in the context of restoring order and highlight the power of VR technology in examining social psychological hypotheses beyond ethical constraints.
Article
Purpose: The financial services industry is increasingly showing interest in automated financial advisors, or robo-advisors, with the aim of democratizing access to financial advice and stimulating investment behavior among populations that were previously less active and less served. However, the extent to which consumers trust this technology influences the adoption of robo-advisors. The resemblance to a human, or anthropomorphism, can provide a sense of social presence and increase trust.
Design/methodology/approach: In this paper, we conduct an experiment (N = 223) to test the effect of anthropomorphism (low vs medium vs high) and gender (male vs female) of the robo-advisor on social presence. This perception, in turn, enables consumers to evaluate personality characteristics of the robo-advisor, such as competence, warmth, and persuasiveness, all of which are related to trust in the robo-advisor. We separately conduct an experimental study (N = 206) testing the effect of gender neutrality on consumer responses to robo-advisory anthropomorphism.
Findings: Our results show that consumers prefer human-alike robo-advisors over machinelike or humanoid robo-advisors. This preference is only observed for male robo-advisors and is explained by perceived competence and perceived persuasiveness. Furthermore, highlighting gender neutrality undermines the positive effect of robo-advisor anthropomorphism on trust.
Originality/value: We contribute to the body of knowledge on robo-advisor design by showing the effect of the robot's anthropomorphism and gender on consumer perceptions and trust. Consequently, we offer insightful recommendations to promote the adoption of robo-advisory services in the financial sector.
Article
Recent decades have witnessed a burst of neuroscience research investigating mental and physiological processes central to consumer behavior, including sensory perception, memory, and decision‐making. Nonetheless, few publications that include neural and physiological measures, or develop conceptual frameworks around neuroscience principles, have been published in consumer psychology. It is clear that “consumer neuroscience” has thus far not lived up to its promises in the marketing literature. We suggest three main reasons for this. First, neural and other biological markers are often mistaken to be identical to the overlaying psychological constructs in traditional consumer psychology work. Second, somewhat surprisingly, there has been an overly narrow utilization of neural data. Most previous work focused on linking existing behavioral phenomena or psychological constructs central to consumer research to neural correlates using brain imaging techniques while ignoring other methods. We argue that much can be gained from improved integration of physiological measures and through them, different levels of analysis. Third, there remain significant structural hurdles to the broad adoption of neural and physiological measures for consumer researchers. We outline how addressing these three components can translate to a more holistic understanding of the consumer via both broader and deeper consumer insights.