Jean-François Bonnefon’s research while affiliated with Toulouse School of Economics and other places


Publications (200)


[Figure thumbnails: Figure 2, average levels of fear expressed in twenty countries (n = 500 respondents per country); Figure 3, bootstrap analyses showing substantial country variations across the measures; Figure 5, fears of AI in the 20 countries as a function of the proportion of AI's matched traits]
Fears About Artificial Intelligence Across 20 Countries and Six Domains of Application
  • Article
  • Full-text available

December 2024 · 357 Reads · American Psychologist

Jane Rebecca Conway · Jean-François Bonnefon · [...]

The frontier of artificial intelligence (AI) is constantly moving, raising fears and concerns whenever AI is deployed in a new occupation. Some of these fears are legitimate and should be addressed by AI developers—but others may result from psychological barriers, suppressing the uptake of a beneficial technology. Here, we show that country-level variations across occupations can be predicted by a psychological model at the individual level. Individual fears of AI in a given occupation are associated with the mismatch between psychological traits people deem necessary for an occupation and perceived potential of AI to possess these traits. Country-level variations can then be predicted by the joint cultural variations in psychological requirements and AI potential. We validated this preregistered prediction for six occupations (doctors, judges, managers, care workers, religious workers, and journalists) on a representative sample of 500 participants from each of 20 countries (total N = 10,000). Our findings may help develop best practices for designing and communicating about AI in a principled yet culturally sensitive way, avoiding one-size-fits-all approaches centered on Western values and perceptions.
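The individual-level model can be made concrete with a short sketch. This is purely illustrative and not the authors' code: the trait names, the 0-10 scale, the shortfall-only gap rule, and the mapping from a higher score to more fear are all assumptions chosen to show how a trait mismatch could be computed.

    # Minimal sketch (Python) of the trait-mismatch idea behind the fear model.
    # All values and trait names below are hypothetical.

    ratings = {
        # one respondent's 0-10 ratings for the occupation "doctor"
        "empathy":      {"required": 9, "ai_potential": 3},
        "competence":   {"required": 8, "ai_potential": 7},
        "impartiality": {"required": 7, "ai_potential": 8},
    }

    def mismatch_score(ratings: dict) -> float:
        """Average shortfall of AI potential relative to the required trait level.

        Traits where AI is rated at or above the requirement contribute zero,
        so only shortfalls drive the score (an assumption of this sketch).
        """
        gaps = [max(r["required"] - r["ai_potential"], 0) for r in ratings.values()]
        return sum(gaps) / len(gaps)

    # Higher mismatch -> more predicted fear of AI in that occupation.
    print(f"mismatch score: {mismatch_score(ratings):.2f}")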


Using Generative AI to Increase Skeptics' Engagement with Climate Science

November 2024 · 69 Reads

Climate skepticism remains a significant barrier to public engagement with accurate climate information, because skeptics actively engage in information avoidance to escape exposure to climate facts. Here we show that generative AI can enhance engagement with climate science among skeptical audiences by subtly modifying headlines to align better with their existing perspectives, without compromising factual integrity. In a controlled experiment (N = 2000) using a stylized social media interface, headlines of climate science articles modified by an open-source large language model (Llama3 70B, version 3.0) led to more bookmarks and more upvotes, and these effects were strongest among the most skeptical participants. Skeptics who engaged with climate science as a result of this intervention showed a shift in beliefs toward the scientific consensus by the end of the study. These results show that generative AI can alter the information diet skeptics consume, with the promise that scalable, sustained engagement will promote better epistemic health. They highlight the potential of generative AI as a tool for truth, showing that while it can be misused by bad actors, it also holds promise for advancing public understanding of science when responsibly deployed by well-intentioned actors.
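The mechanics of the intervention can be sketched in a few lines. This is not the study's pipeline: the endpoint URL, model name, prompt wording, and response format are assumptions for a generic OpenAI-compatible server hosting an open-source model such as Llama 3 70B.

    import requests

    # Hypothetical local inference endpoint (an assumption, not the study's setup).
    API_URL = "http://localhost:8000/v1/chat/completions"

    PROMPT = (
        "Rewrite this news headline so it reads as less confrontational to someone "
        "skeptical about climate change, without altering any factual claim:\n\n{headline}"
    )

    def soften_headline(headline: str) -> str:
        """Ask a locally hosted open-source LLM to reframe a headline while keeping the facts."""
        payload = {
            "model": "llama-3-70b-instruct",  # assumed model identifier on the server
            "messages": [{"role": "user", "content": PROMPT.format(headline=headline)}],
            "temperature": 0.7,
        }
        response = requests.post(API_URL, json=payload, timeout=60)
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"].strip()

    print(soften_headline("Scientists warn climate change is accelerating faster than feared"))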


Experimental evidence that delegating to intelligent machines can increase dishonest behaviour

October 2024 · 96 Reads

While artificial intelligence (AI) enables significant productivity gains from delegating tasks to machines, it can also facilitate the delegation of unethical behaviour. Here, we demonstrate this risk by having human principals instruct machine agents to perform a task with an incentive to cheat. Principals’ requests for cheating behaviour increased when the interface implicitly afforded unethical conduct: Machine agents programmed via supervised learning or goal specification evoked more cheating than those programmed with explicit rules. Cheating propensity was unaffected by whether delegation was mandatory or voluntary. Given the recent rise of large language model-based chatbots, we also explored delegation via natural language. Here, cheating requests did not vary between human and machine agents, but compliance diverged: When principals intended agents to cheat to the fullest extent, the majority of human agents did not comply, despite incentives to do so. In contrast, GPT4, a state-of-the-art machine agent, nearly fully complied. Our results highlight ethical risks in delegating tasks to intelligent machines, and suggest design principles and policy responses to mitigate such risks.
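The contrast between delegation interfaces can be illustrated with a toy simulation. It is not the experimental software: the die-roll task, the payoff rule, and the two agent "programs" are assumptions used only to show how rule specification versus goal specification can afford different levels of cheating.

    import random

    random.seed(1)

    def rule_based_agent(actual_roll: int) -> int:
        """Explicit rule: report the die roll exactly as observed."""
        return actual_roll

    def goal_based_agent(actual_roll: int, goal: str) -> int:
        """Goal specification: the principal states an objective, the agent chooses the means."""
        if goal == "maximize my payoff":
            return 6          # inflating the report is the payoff-maximizing move
        return actual_roll    # a neutral goal leaves the report honest

    def payoff(reported_roll: int) -> int:
        return reported_roll  # e.g., one monetary unit per reported pip

    rolls = [random.randint(1, 6) for _ in range(1000)]
    rule_earnings = sum(payoff(rule_based_agent(r)) for r in rolls)
    goal_earnings = sum(payoff(goal_based_agent(r, "maximize my payoff")) for r in rolls)

    print(f"rule-based delegation earnings: {rule_earnings}")
    print(f"goal-based delegation earnings: {goal_earnings}")  # higher, via dishonest reports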



The impact of generative artificial intelligence on socioeconomic inequalities and policy making

June 2024 · 813 Reads · 38 Citations · PNAS Nexus

Generative artificial intelligence (AI) has the potential to both exacerbate and ameliorate existing socioeconomic inequalities. In this article, we provide a state-of-the-art interdisciplinary overview of the potential impacts of generative AI on (mis)information and three information-intensive domains: work, education, and healthcare. Our goal is to highlight how generative AI could worsen existing inequalities while illuminating how AI may help mitigate pervasive social problems. In the information domain, generative AI can democratize content creation and access but may dramatically expand the production and proliferation of misinformation. In the workplace, it can boost productivity and create new jobs, but the benefits will likely be distributed unevenly. In education, it offers personalized learning, but may widen the digital divide. In healthcare, it might improve diagnostics and accessibility, but could deepen pre-existing inequalities. In each section, we cover a specific topic, evaluate existing research, identify critical gaps, and recommend research directions, including explicit trade-offs that complicate the derivation of a priori hypotheses. We conclude with a section highlighting the role of policymaking to maximize generative AI's potential to reduce inequalities while mitigating its harmful effects. We discuss strengths and weaknesses of existing policy frameworks in the European Union, the United States, and the United Kingdom, observing that each fails to fully confront the socioeconomic challenges we have identified. We propose several concrete policies that could promote shared prosperity through the advancement of generative AI. This article emphasizes the need for interdisciplinary collaborations to understand and address the complex challenges of generative AI.


Lie detection algorithms disrupt the social dynamics of accusation behavior

June 2024 · 98 Reads · iScience

Humans, aware of the social costs associated with false accusations, are generally hesitant to accuse others of lying. Our study shows how lie detection algorithms disrupt this social dynamic. We develop a supervised machine-learning classifier that surpasses human accuracy and conduct a large-scale incentivized experiment manipulating the availability of this lie-detection algorithm. In the absence of algorithmic support, people are reluctant to accuse others of lying, but when the algorithm becomes available, a minority actively seeks its prediction and consistently relies on it for accusations. Although those who request machine predictions are not inherently more prone to accuse, they more willingly follow predictions that suggest accusation than those who receive such predictions without actively seeking them.
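For readers unfamiliar with how such a classifier is built, here is a minimal supervised text-classification sketch. It is not the study's model: the toy statements, the TF-IDF features, and the logistic-regression learner are assumptions standing in for the authors' actual pipeline.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy training data (hypothetical): statements labeled 1 = truthful, 0 = deceptive.
    statements = [
        "I spent the weekend hiking with my sister near the lake",
        "I definitely was at home all evening, I swear, ask anyone",
        "We had pasta for dinner and then watched an old film",
        "To be honest I would never ever even think about taking it",
    ]
    labels = [1, 0, 1, 0]

    # Bag-of-words features plus a linear classifier: a standard supervised baseline.
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(statements, labels)

    # The fitted model's prediction can then be surfaced (or withheld) as algorithmic advice.
    new_statement = ["I was stuck in traffic, honestly, the whole time"]
    print(clf.predict_proba(new_statement)[0])  # [P(deceptive), P(truthful)]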


[Figure thumbnails: Fig. 4, preferences for speed-accuracy trade-offs from own perspective in the balanced UK sample (N = 739; 47% welfare claimants); perspective taking in the balanced UK sample (N = 1462; 48% welfare claimants), showing (A) the average gap between claimants' and non-claimants' willingness to let AI make welfare decisions across the 20 trade-offs and (B) the biases of claimants and non-claimants when predicting the other group's answers]
False Consensus Biases AI Against Vulnerable Stakeholders

May 2024 · 147 Reads

The deployment of AI systems for welfare benefit allocation allows for accelerated decision-making and faster provision of critical help, but has already led to an increase in unfair benefit denials and false fraud accusations. Collecting data in the US and the UK (N = 2449), we explore the public acceptability of such speed-accuracy trade-offs in populations of claimants and non-claimants. We observe a general willingness to trade off speed gains for modest accuracy losses, but this aggregate view masks notable divergences between claimants and non-claimants. Although welfare claimants comprise a relatively small proportion of the general population (e.g., 20% in the US representative sample), this vulnerable group is much less willing to accept AI deployed in welfare systems, raising concerns that solely using aggregate data for calibration could lead to policies misaligned with stakeholder preferences. Our study further uncovers asymmetric insights between claimants and non-claimants. The latter consistently overestimate claimants' willingness to accept speed-accuracy trade-offs, even when financially incentivized for accurate perspective-taking. This suggests that policy decisions influenced by the dominant voice of non-claimants, however well-intentioned, may neglect the actual preferences of those directly affected by welfare AI systems. Our findings underline the need for stakeholder engagement and transparent communication in the design and deployment of these systems, particularly in contexts marked by power imbalances.
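The calibration concern boils down to simple arithmetic. The numbers below are invented for illustration only; they show how an aggregate acceptance rate can track the non-claimant majority while misrepresenting the claimant minority.

    # Hypothetical acceptance rates for one speed-accuracy trade-off
    # (share of each group willing to accept it; all numbers are illustrative).
    p_claimant      = 0.35   # claimants: mostly unwilling
    p_non_claimant  = 0.65   # non-claimants: mostly willing
    share_claimants = 0.20   # roughly the claimant share in a representative sample

    # Population-level acceptance, as seen when calibrating on aggregate data alone.
    p_aggregate = share_claimants * p_claimant + (1 - share_claimants) * p_non_claimant
    print(f"aggregate acceptance: {p_aggregate:.2f}")   # 0.59, close to the majority view
    print(f"claimant acceptance:  {p_claimant:.2f}")    # 0.35, the affected minority's view

    # A rule like "deploy if most people accept the trade-off" would then deploy the
    # system even though the group directly affected by it largely rejects it.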


[Figure thumbnails: Fig. 2, preferences for speed-accuracy trade-offs from own perspective in the representative US sample (N = 506; 21% welfare claimants); Fig. 4, the same preferences in the balanced UK sample (N = 739; 47% welfare claimants)]
False consensus biases AI against vulnerable stakeholders

May 2024 · 28 Reads

The deployment of AI systems for welfare benefit allocation allows for accelerated decision-making and faster provision of critical help, but has already led to an increase in unfair benefit denials and false fraud accusations. Collecting data in the US and the UK (N = 2449), we explore the public acceptability of such speed-accuracy trade-offs in populations of claimants and non-claimants. We observe a general willingness to trade off speed gains for modest accuracy losses, but this aggregate view masks notable divergences between claimants and non-claimants. Although welfare claimants comprise a relatively small proportion of the general population (e.g., 20% in the US representative sample), this vulnerable group is much less willing to accept AI deployed in welfare systems, raising concerns that solely using aggregate data for calibration could lead to policies misaligned with stakeholder preferences. Our study further uncovers asymmetric insights between claimants and non-claimants. The latter consistently overestimate claimant willingness to accept speed-accuracy trade-offs, even when financially incentivized for accurate perspective-taking. This suggests that policy decisions influenced by the dominant voice of non-claimants, however well-intentioned, may neglect the actual preferences of those directly affected by welfare AI systems. Our findings underline the need for stakeholder engagement and transparent communication in the design and deployment of these systems, particularly in contexts marked by power imbalances.


Discovering the unknown unknowns of research cartography with high-throughput natural description

February 2024 · 10 Reads · Behavioral and Brain Sciences

To succeed, we posit that research cartography will require high-throughput natural description to identify unknown unknowns in a particular design space. High-throughput natural description, the systematic collection and annotation of representative corpora of real-world stimuli, faces logistical challenges, but these can be overcome by solutions that are deployed in the later stages of integrative experiment design.



Citations (58)


... This change undermines the prevailing belief that technical advancements generally lead to a decrease in middle- and low-skilled professions. Instead, it indicates a more extensive and profound transformation of the labor market compared to earlier technological revolutions, with potential to amplify inequality, both within and across different occupations (Capraro et al., 2024; Cazzaniga et al., 2024). ...

Reference:

Empowering K-12 Education with AI: Preparing for the Future of Education and Work
The impact of generative artificial intelligence on socioeconomic inequalities and policy making

PNAS Nexus

... Since the introduction of large language models (LLMs), generative artificial intelligence (AI) has become a focal point of debate [1]. The impressive generative capabilities of LLMs enable the production of high-quality outputs [2]. However, this technology is not without its challenges: it has the potential to generate both beneficial and harmful content [3]. Whether positive or negative, the content generated by AI results from the interaction between the prompting human and the AI model [4]. Consequently, ethical questions arise, particularly as to AI users' moral responsibility, including how much credit or blame they deserve for AI-generated content. ...

The Moral Psychology of Artificial Intelligence

... text, video, images, code) that is hardly distinguishable from human-created content has opened up many new opportunities for using AI in the workplace - for communication, training, artistic creation, coding, and research (Prasad Agrawal, 2023). At the same time, new challenges arise regarding job displacement, data privacy and ethics (Capraro et al., 2023). The advances of GenAI have required governmental and non-governmental institutions to prepare citizens to engage with GenAI (e.g. ...

The Impact of Generative Artificial Intelligence on Socioeconomic Inequalities and Policy Making

SSRN Electronic Journal

... As a theoretical review, the ethical considerations surrounding highly intelligent non-human entities have already evolved into two primary theoretical frameworks: "Human-Centered" ethics and "Ecology-Centered" ethics (Dong, Bonnefon, and Rahwan 2024). The human-centered approach prioritizes human values, aiming to develop AI that is adaptable, trustworthy, and beneficial to humans, with the ultimate purpose of AI being defined in relation to the human value system. ...

Toward human-centered AI management: Methodological challenges and future directions

Technovation

... Finally, we believe that generated video resources can help reduce the digital divide caused by the application of AI, making education more equitable. The application of GAI requires certain infrastructure, devices, and Internet access, which may result in unequal access and a lack of resources for education in some regions and communities (Capraro et al., 2023). While GAI may be challenging to distribute widely, generated resources can be rapidly produced in large quantities and distributed to different regions, ensuring that teachers in remote areas can also benefit. ...

The Impact of Generative Artificial Intelligence on Socioeconomic Inequalities and Policy Making

... Collectively, our work suggests that the research community is at a critical juncture, grappling with upholding fundamental values of originality, rigor, and ethical conduct in academia. As Brinkmann et al. [12] term it, we are currently shaping "Machine Culture," where technologies like LLMs serve as cultural mediators and generators, capable of transforming cultural evolutionary processes. As we continue to progress on both the capability of LLMs and LLM-based research support tools to provide greater benefits, the integration of LLMs into research practices is also likely to continue to increase in both depth and breadth. ...

Machine culture
  • Citing Article
  • November 2023

Nature Human Behaviour

... The presence of AVs may thus act as a form of 'social facilitation'. These findings align with broader observations in human-autonomous machine interactions, where such machines are often treated more harshly (Bonnefon, Rahwan, and Shariff 2024; Liu, Du, and Xu 2019). There might be certain cognitive, affective, moral, and social mechanisms underlying the negative behavioural adaptations prompted by AVs. ...

The Moral Psychology of Artificial Intelligence
  • Citing Article
  • September 2023

Annual Review of Psychology

... They also perceived others' potential AI-MC use as more acceptable than their own use (Study 2). These findings are generally in line with previous research establishing important self-other differences in AI-attitudes (e.g., Purcell & Bonnefon, 2023a, 2023b) and those suggesting a mutually reinforcing relationship between descriptive norms of what people typically do and injunctive norms of what behaviours are deemed acceptable (Eriksson et al., 2015). However, it should be noted that when we asked participants to evaluate others, the descriptions were rather general without specifying identities (e.g., gender or political orientation) or relationships (e.g., friends or colleagues). ...

Research on Artificial Intelligence is Reshaping Our Definition of Morality
  • Citing Article
  • September 2023

Psychological Inquiry

... However, the rise of digital technologies such as automation, AI, and IoT has further developed the concept of trust. Trust can no longer be conceptualized as strictly being a human-to-human interaction but also as a human-machine interaction [33]. Commonly, trust is closely related to finances, and in the era of e-commerce and digital payments, users need to trust that their data will be rigorously protected [34]. ...

Trust within human-machine collectives depends on the perceived consensus about cooperative norms

... They also perceived others' potential AI-MC use as more acceptable than their own use (Study 2). These findings are generally in line with previous research establishing important self-other differences in AI-attitudes (e.g., Purcell & Bonnefon, 2023a, 2023b) and those suggesting a mutually reinforcing relationship between descriptive norms of what people typically do and injunctive norms of what behaviours are deemed acceptable (Eriksson et al., 2015). However, it should be noted that when we asked participants to evaluate others, the descriptions were rather general without specifying identities (e.g., gender or political orientation) or relationships (e.g., friends or colleagues). ...

Humans Feel Too Special for Machines to Score Their Morals

PNAS Nexus