Julian De Freitas’s research while affiliated with Harvard University and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (50)


Figure 1 Stimuli used in Study 1. The top row depicts imperfectly humanlike robots; the bottom row depicts the perfectly humanlike robots / humans.
Results of Study 4. All scales were 0-10 except donation choice, which was binary.
Anti-Robot Speciesism
  • Preprint
  • File available

March 2025 · 30 Reads

Julian De Freitas · Noah Castelo · Bernd Schmitt

Humanoid robots are a form of embodied artificial intelligence (AI) that looks and acts more and more like humans. Powered by generative AI and advances in robotics, humanoid robots can speak and interact with humans rather naturally but are still easily recognizable as robots. But how will we treat humanoids when they seem indistinguishable from humans in appearance and mind? We find a tendency (called "anti-robot" speciesism) to deny such robots humanlike capabilities, driven by motivations to accord members of the human species preferential treatment. Six experiments show that robots are denied humanlike attributes, simply because they are not biological beings and because humans want to avoid feelings of cognitive dissonance when utilizing such robots for unsavory tasks. Thus, people do not rationally attribute capabilities to perfectly humanlike robots but deny them capabilities as it suits them.


Public perception and autonomous vehicle liability

January 2025 · 7 Reads

Journal of Consumer Psychology

Julian De Freitas · Xilin Zhou · Margherita Atzei · [...] · Luigi Di Lillo

The deployment of autonomous vehicles (AVs) and the accompanying societal and economic benefits will greatly depend on how much liability AV firms will have to carry for accidents involving these vehicles, which in turn affects their insurability and the associated insurance premiums. Across three experiments (N = 2,677), we investigate whether accidents in which the AV was not at fault could become an unexpected liability risk for AV firms, by exploring consumer perceptions of AV liability. We find that when such accidents occur, the not-at-fault vehicle becomes more salient to consumers when it is an AV. As a result, consumers are more likely to view as relevant counterfactuals in which the not-at-fault vehicle might have behaved differently to avoid or minimize damage from the accident. This leads them to judge AV firms as more liable for damages than both firms that make human-driven vehicles and human drivers, even when the AV is not at fault.


Generative AI Image Creation. The images were generated using ChatGPT. A textual prompt is processed by the LLM GPT-4 to create an internal representation of the image to be generated. This representation includes details about the layout, color scheme, and other visual elements that align with the textual description. Once the internal representation is ready, the image generator DALL-E 3 uses a text-to-image model to produce the image.
Percentage of Minority Groups Represented. (a) The percentage of minority groups (visually impaired people, older people, people with high body weight, racial minorities, and women) represented before and after making the images funnier. Error bars represent standard errors of proportions. (b) The percentage of politically sensitive (racial minorities and women) and non-politically sensitive (visually impaired people, people with high body weight, and older people) groups represented after making the images funnier. Error bars represent standard errors of proportions. Politically sensitive groups were less likely to be represented after making the images funnier, whereas non-politically sensitive groups were more likely to be represented.
Humor as a window into generative AI bias

January 2025 · 70 Reads

A preregistered audit of 600 images produced by generative AI across 150 different prompts explores the link between humor and discrimination in consumer-facing AI solutions. When ChatGPT updates images to make them "funnier," the prevalence of stereotyped groups changes: groups stereotyped on politically sensitive traits (i.e., race and gender) are less likely to be represented after an image is made funnier, whereas groups stereotyped on less politically sensitive traits (i.e., older people, visually impaired people, and people with high body weight) are more likely to be represented.
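The audit described above can be approximated programmatically. The sketch below is a minimal illustration using the OpenAI Python SDK; the prompts, the "funnier" revision wording, and the bookkeeping are assumptions for illustration, not the authors' preregistered protocol (which used the ChatGPT interface).

```python
# Minimal sketch of a generate-then-revise image audit.
# Prompts and the "funnier" revision are illustrative, not the paper's exact protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_image(prompt: str) -> str:
    """Generate one image with DALL-E 3 and return its URL."""
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    return response.data[0].url


base_prompts = [
    "A person riding a bicycle in a park",
    "A doctor talking to a patient",
    # ... the actual audit used 150 prompts
]

audit_pairs = []
for prompt in base_prompts:
    baseline_url = generate_image(prompt)
    funnier_url = generate_image(f"{prompt}, but make the scene funnier")
    audit_pairs.append({"prompt": prompt, "baseline": baseline_url, "funnier": funnier_url})

# Downstream, each image pair would be coded (e.g., by human raters) for which groups
# are depicted, and before/after proportions compared with standard errors of proportions.
```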




Reducing prejudice with counter-stereotypical AI

December 2024 · 60 Reads · 1 Citation

Consumer Psychology Review

Based on a review of the relevant literature, we propose that the proliferation of AI with human-like and social features presents an unprecedented opportunity to address the underlying cognitive and affective drivers of prejudice. An approach informed by the psychology of intergroup contact and prejudice reduction is necessary because current AI systems often reinforce or avoid prejudices. Against this backdrop, we outline unique opportunities for prejudice reduction through 'synthetic' intergroup contact, wherein consumers interact with AI products and services that counter stereotypes and serve as 'proxy' members of the outgroup (i.e., counter-stereotypical AI). In contrast to human-human contact, humanizing and socializing AI can reduce prejudice through more repeated, direct, unavoidable, private, non-judgmental, collaborative, and need-satisfying contact. We illustrate the potential of synthetic intergroup contact with counter-stereotypical AI using examples of gender stereotypes and hate speech, and discuss practical considerations for implementing counter-stereotypical AI without inadvertently perpetuating or reinforcing prejudice. Keywords: artificial intelligence, intergroup psychology, prejudice, stereotypes, synthetic contact.


Is Personal Identity Intransitive?

December 2024 · 45 Reads

Journal of Experimental Psychology: General

There has been a call for a potentially revolutionary change to our existing understanding of the psychological concept of personal identity. Apparently, people can psychologically represent people, including themselves, as multiple individuals at the same time. Here, we ask whether the intransitive judgments found in these studies truly reflect the operation of an intransitive concept of personal identity. We manipulate several factors that arbitrate between transitivity and intransitivity and find most support for transitivity: In contrast to the prior work, most participants do not make intransitive judgments when there is any reason to favor one individual over another. People change which single individual they personally identify with, depending on which individual competes more strongly or weakly for identity, rather than identifying with both individuals. Even when two individuals are identical and therefore both entitled to be the same person, we find that people make more transitive judgments once they understand the practical commitments of their responses (Experiment 4) and report not being able to actually imagine two perspectives simultaneously when reasoning about the scenario (Experiment 5). In short, we suggest that while people may make intransitive judgments, these do not reflect that they psychologically represent identity in an intransitive manner.


Lessons From an App Update at Replika AI: Identity Discontinuity in Human-AI Relationships

December 2024 · 39 Reads · 1 Citation

Can consumers form especially deep emotional bonds with AI and be vested in AI identities over time? We leverage a natural app-update event at Replika AI, a popular US-based AI companion, to shed light on these questions. We find that after the app removed its erotic role play (ERP) feature, preventing previously possible intimate interactions between consumers and their chatbots, customers perceived that their AI companion's identity had been discontinued. This in turn predicted negative consumer welfare and marketing outcomes related to loss, including mourning the loss and devaluing the "new" AI relative to the "original". Experimental evidence confirms these findings. Further experiments find that AI companion users feel closer to their AI companion than even to their best human friend, and mourn the loss of their AI companion more than the loss of various other inanimate products. In short, consumers are forming human-level relationships with AI companions; disruptions to these relationships trigger real patterns of mourning as well as devaluation of the offering; and the degree of mourning and devaluation is explained by perceived discontinuity in the AI's identity. Our results illustrate that relationships with AI are truly personal, creating unique benefits and risks for consumers and firms alike.


Figure 1 Mean app ratings in Study 2.
Figure 4 Results in Study 4.
Engagement of loneliness-related vs. -unrelated conversations in Study 1.
AI Companions Reduce Loneliness

July 2024 · 441 Reads · 2 Citations

Chatbots are now able to engage in sophisticated conversations with consumers in the domain of relationships, providing a potential coping solution to widespread societal loneliness. Behavioral research provides little insight into whether these applications are effective at alleviating loneliness. We address this question by focusing on AI companions: applications designed to provide consumers with synthetic interaction partners. Studies 1 and 2 find suggestive evidence that consumers use AI companions to alleviate loneliness, employing a novel methodology for fine-tuning large language models to detect loneliness in conversations and reviews. Study 3 finds that AI companions alleviate loneliness on par with interacting with another person, and more than other activities such as watching YouTube videos. Moreover, consumers underestimate the degree to which AI companions improve their loneliness. Study 4 uses a longitudinal design and finds that an AI companion consistently reduces loneliness over the course of a week. Study 5 provides evidence that both the chatbot's performance and, especially, whether it makes users feel heard, explain reductions in loneliness. Study 6 provides an additional robustness check for the loneliness-alleviating benefits of AI companions.
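The loneliness-detection step mentioned for Studies 1 and 2 can be illustrated with a standard sequence-classification fine-tune. The sketch below is a minimal, hypothetical setup using Hugging Face transformers; the base model, labels, and training examples are assumptions for illustration, not the authors' actual configuration or data.

```python
# Minimal sketch: fine-tuning a transformer to flag loneliness-related text.
# Model choice, labels, and data are illustrative assumptions, not the paper's setup.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Tiny stand-in dataset; the studies used labeled conversation and review snippets.
examples = {
    "text": [
        "I feel like I have no one to talk to anymore.",
        "Had a great dinner with friends tonight!",
    ],
    "label": [1, 0],  # 1 = loneliness-related, 0 = unrelated
}

model_name = "distilbert-base-uncased"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize with fixed-length padding so the default collator can batch examples.
dataset = Dataset.from_dict(examples).map(
    lambda batch: tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    ),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="loneliness-classifier",
        num_train_epochs=1,
        per_device_train_batch_size=2,
        logging_steps=1,
    ),
    train_dataset=dataset,
)
trainer.train()
# After training, predictions over conversation logs could be aggregated into
# engagement rates for loneliness-related vs. -unrelated conversations.
```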



Citations (20)


... Conversely, researchers have also highlighted the positive effects of human-AI relationships. Chatbots and other AI tools may help reduce loneliness (De Freitas et al., 2024), and AI may act as a safe conversational partner, as people may feel more comfortable sharing sensitive information with an AI that lacks a capacity to "judge" them (Skjuve et al., 2021). Moreover, human-AI relationships have led some authors to begin rethinking our conceptual understanding of relationships with non-human entities and their ethical and legal consequences (Jecker, 2024; Puzio, 2024). ...

Reference:

Second-Person Authenticity and the Mediating Role of AI: A Moral Challenge for Human-to-Human Relationships?
AI Companions Reduce Loneliness
  • Citing Article
  • January 2024

SSRN Electronic Journal

... Consistent with a social expertise perspective, adults spontaneously recognize abstract roles more easily in social as opposed to nonsocial scenes [54], reason more easily about social as opposed to nonsocial relations [55,56], focus more on abstract patterns in social as opposed to nonsocial learning [27,57], and easily identify abstract causes to explain sparse social behavior [58]. Moreover, although no physical pattern defines social relations like help or harm, people are so practiced at recognizing these relations that doing so has hallmarks of automatic perception rather than deliberate reasoning [59][60][61]. ...

Moral thin-slicing: Forming moral impressions from a brief glance
  • Citing Article
  • May 2024

Journal of Experimental Social Psychology

... Despite AI's growing role in mental health care, its categorization and regulation remain unclear. A key challenge is the blurred line between wellness tools for general well-being and clinical tools for diagnosing, treating, or managing conditions [37]. Wellness tools often face minimal regulation, relying on self-certification and post-market accountability. ...

The health risks of generative AI-based wellness apps
  • Citing Article
  • April 2024

Nature Medicine

... In the context of human-AI interaction, the sense of being treated as objects by AI may prevent users from expressing themselves to AI and make them devalue their own genuine opinions (Valenzuela et al., 2024). ...

How Artificial Intelligence Constrains the Human Experience

Journal of the Association for Consumer Research

... Prior research has shown that consumers are often reluctant to use AI or reject its advice altogether (i.e., "algorithm aversion"; De Freitas et al., 2023; Dietvorst et al., 2015). The reasons for this aversion are manifold and include, but are not limited to, consumers' perceptions of AI as less authentic and moral (Bigman & Gray, 2018; Dietvorst & Bartels, 2022; Giroux et al., 2022; Jago, 2019; Jago et al., 2022), as neglecting their unique needs (Longoni et al., 2019), as not learning from mistakes (Reich et al., 2023), as being less capable than humans at the same tasks (Agarwal et al., 2024), and as not being a member of the same species as humans. ...

Acceptance of Automated Vehicles Is Lower for Self than Others
  • Citing Article
  • February 2024

Journal of the Association for Consumer Research

... For instance, people place greater trust in AI robots for agency-related tasks than for experience-related ones [4,44]. Following the widespread belief that robots are capable of agency but lack experience [14], this study focused on perceived robotic agency. ...

Psychological factors underlying attitudes toward AI tools
  • Citing Article
  • November 2023

Nature Human Behaviour

... With the increasing use of AI and GenAI in mental health support (e.g., screening for mental health issues [88,89], LLM-powered psychotherapy [47,67], mental health education [89]), several ethical challenges emerge [18,68,84]. These ethical considerations have centered around (1) accountability, which encompasses governance, legal responsibilities, and liability, ensuring that actions and decisions by AI are traceable and justifiable [13,100]; (2) autonomy, which demands respect for human decision-making, emphasizing informed consent and human oversight so that individuals retain control over their mental health treatment [31,53,83]; (3) equity, which seeks to eliminate biases and ensure fairness and justice in AI interactions [56,83,102,103,112]; (4) integrity, which relates to honesty and ethical conduct in mental health research and psychotherapy delivery [94,103]; (5) non-maleficence, which focuses on preventing harm, avoiding misleading information, and ensuring the safety and mental well-being of users [28,86]; (6) privacy, which concerns the handling of mental health data and the protection of client confidentiality [56,83,102]; (7) security, which aims to protect sensitive data from unauthorized access and breaches, emphasizing confidentiality and safety [16,48]; (8) transparency, which requires that the reasoning behind AI-driven mental health recommendations be explainable and accessible to clients and practitioners [16,56]; and (9) trust, cultivated through the consistent reliability and therapeutic value of AI tools within mental health care [94]. ...

Chatbots and Mental Health: Insights into the Safety of Generative AI

Journal of Consumer Psychology

... The first step in this learning process would be detecting what we control on the screen, i.e., the agent representing the player in the game. Agent detection is one of the key ideas in human-like learning and also a stark differentiator from large-scale machine-like pattern matching (De Freitas et al., 2023). After knowing the where and how of the agent, the next step would be to devise a locomotive strategy, necessitating knowledge of at least a minimal set of affordances associated with other game entities, for which we devise a set of representative object categories and learn category-level affordances. ...

Self-orienting in human and machine learning

Nature Human Behaviour

... Some studies indicate a negative net effect, such as negative social media reactions (Wang et al. 2022), reduced purchase intentions and increased punishment intentions (Kang and Kirmani 2024). In contrast, other studies assert positive emotional responses and increased consumer empowerment resulting from brand activism (Ahmad, Guzmán, and Kidwell 2022;Nam et al. 2023). ...

Speedy activists: Firm response time to sociopolitical events influences consumer behavior
  • Citing Article
  • July 2023

Journal of Consumer Psychology

... Unlike traditional task-performing chatbots, such as customer service or ticket-booking chatbots, systems that deliberately optimize and reward free-form social conversation of a friendly or romantic variety are ACs. ACs have been trending recently across the globe (see Table 1). For instance, OpenAI's ChatGPT went viral within five days of its November 2022 launch, reaching one million users worldwide. ...

Ethical Risks of Autonomous Products: The Case of Mental Health Crises on AI Companion Applications
  • Citing Article
  • January 2022

SSRN Electronic Journal