David Leslie’s research while affiliated with The Alan Turing Institute and other places

Publications (49)


Mapping the individual, social, and biospheric impacts of Foundation Models
  • Preprint
  • File available

July 2024 · 41 Reads · Shyam Krishna · Antonella Maia Perini · [...] · David Leslie

Responding to the rapid roll-out and large-scale commercialization of foundation models, large language models, and generative AI, an emerging body of work is shedding light on the myriad impacts these technologies are having across society. Such research is expansive, ranging from the production of discriminatory, fake and toxic outputs, and privacy and copyright violations, to the unjust extraction of labor and natural resources. The same has not been the case in some of the most prominent AI governance initiatives in the global north like the UK's AI Safety Summit and the G7's Hiroshima process, which have influenced much of the international dialogue around AI governance. Despite the wealth of cautionary tales and evidence of algorithmic harm, there has been an ongoing over-emphasis within the AI governance discourse on technical matters of safety and global catastrophic or existential risks. This narrowed focus has tended to draw attention away from very pressing social and ethical challenges posed by the current brute-force industrialization of AI applications. To address such a visibility gap between real-world consequences and speculative risks, this paper offers a critical framework to account for the social, political, and environmental dimensions of foundation models and generative AI. We identify 14 categories of risks and harms and map them according to their individual, social, and biospheric impacts. We argue that this novel typology offers an integrative perspective to address the most urgent negative impacts of foundation models and their downstream applications. We conclude with recommendations on how this typology could be used to inform technical and normative interventions to advance responsible AI.


AI Accountability in Practice

AI Accountability in Practice aims to provide resources and training materials to help you and your team establish an end-to-end accountability framework. This will enable you to integrate the ethical values and practical principles, which motivate and steer responsible innovation, into the actual processes that characterise your AI project lifecycle. This workbook is part of the AI Ethics and Governance in Practice series (https://aiethics.turing.ac.uk) co-developed by researchers at The Alan Turing Institute in partnership with key public sector stakeholders.


AI Safety in Practice

Project teams frequently engage in tasks pertaining to the technical safety and sustainability of their AI projects. In doing so, they need to ensure that their resultant models are reproducible, robust, interpretable, reliable, performant, and secure. The issue of AI safety is of paramount importance because possible failures can produce harmful outcomes and undermine public trust. Building safe AI systems is an ongoing process requiring reflexivity and foresight. To aid teams in this, the workbook introduces the core components of AI safety (reliability, performance, robustness, and security) and helps teams develop the anticipatory and reflective skills needed to apply these responsibly in practice. This workbook is part of the AI Ethics and Governance in Practice series (https://aiethics.turing.ac.uk) co-developed by researchers at The Alan Turing Institute in partnership with key public sector stakeholders.


Responsible Data Stewardship in Practice

This workbook on Data Stewardship aims to provide resources and training that help you and your team ethically steward the data you access and utilise by proactively initiating and facilitating responsible data practices. You will learn how to use these tools and how they may be relevant at different stages of the project lifecycle. The tools, approaches, and policies introduced should be discussed with your core team and your stakeholders, and should be clearly documented. Data is essential in developing AI models and systems, forming the core information on which they are trained and, as such, shaping their knowledge base and epistemic (knowledge-contributing) capacity. For this reason, responsible data stewardship is crucial for developing ethical and responsible AI. This workbook is part of the AI Ethics and Governance in Practice series (https://aiethics.turing.ac.uk) co-developed by researchers at The Alan Turing Institute in partnership with key public sector stakeholders.


AI Explainability in Practice

The purpose of this workbook is to introduce participants to the principle of AI Explainability. Understanding how, why, and when explanations of AI-supported or -generated outcomes need to be provided, and what impacted people expect these explanations to include, is crucial to fostering responsible and ethical practices within your AI projects. To guide you through this process, we will address two essential questions: What do we need to explain? And who do we need to explain this to? This workbook offers practical insights and tools to facilitate your exploration of AI Explainability. By providing actionable approaches, we aim to equip you and your team with the means to identify when and how to employ various types of explanations effectively. This workbook is part of the AI Ethics and Governance in Practice series (https://aiethics.turing.ac.uk) co-developed by researchers at The Alan Turing Institute in partnership with key public sector stakeholders.


Citations (26)


... The energy used for training and running big models results in a substantial carbon footprint and high energy consumption. There is an ethical obligation to balance innovation with sustainability, particularly as models increase in size [75,78]. ...

Reference:

Foundation Models: From Current Developments, Challenges, and Risks to Future Opportunities
Mapping the individual, social and biospheric impacts of Foundation Models
  • Citing Conference Paper
  • June 2024

... Today, cutting-edge AI employs large-scale frontier models that are powering the generative AI (GenAI) revolution [24]. While these systems are proving highly capable in many areas, they are not without their challenges [25]. For these reasons, this paper will make use of a GenAI example to elucidate the current and forthcoming challenges to human-machine decision making. ...

‘Frontier AI,’ Power, and the Public Interest: Who Benefits, Who Decides?

... 1. Improved Trust and Accountability: Transparent assumptions ensure that stakeholders, ranging from engineers to executives, can validate and trust the predictions and recommendations of a Digital Twin. When everyone understands the underlying logic and data, it fosters confidence and accountability for the decisions made using these systems. Clear documentation of AI systems' assumptions is essential to building trust and ensuring accountability in industrial applications (Leslie & Perini, 2024). 2. Better Governance: Governance frameworks in Industry 4.0 thrive when the operation of intelligent systems like Digital Twins aligns with ethical and operational standards. ...

Future Shock: Generative AI and the International AI Policy and Governance Crisis
  • Citing Article
  • May 2024

... Artificial intelligence (AI) is a generic term typically referring to intelligent technologies augmenting the human ability to learn, remember, and perform meaningful activities (Leslie et al., 2024). AI systems use a group of mathematical techniques to perform tasks closely associated with human intelligence. ...

AI Ethics and Governance in Practice: An Introduction
  • Citing Article
  • January 2024

SSRN Electronic Journal

... Since synthetic data mimic the original data, it is vital to keep ethical issues in mind. A report by [44] provides guidance on how practitioners and innovators can responsibly use synthetic data. The synthetic malware traffic samples developed in this study aim to improve the performance of models used in cybersecurity research. ...

Exploring responsible applications of Synthetic Data to advance Online Safety Research and Development
  • Citing Article
  • January 2024

SSRN Electronic Journal

... A further issue in compounding educational inequality from a multiliteracies perspective is the diminishing diversity of voices in educational discourse. GenAI models have been said to produce 'echo chambers' (Turobov et al., 2024) and may lead to overreliance on such systems, which could reinforce existing knowledge hierarchies (Leslie, 2023). Given that the multiliteracies approach focuses on inclusivity and a range of diverse voices, this could damage the potential for educational justice. ...

Does the sun rise for ChatGPT? Scientific discovery in the age of generative AI
  • Citing Article
  • July 2023

AI and Ethics

... Creative and innovative ideas have long enabled scientists to model and predict the behaviour of complex systems, from neuroscience to astrophysics. Recently, the impressive capabilities of large language models have prompted researchers to explore their potential to advance the generation of scientific ideas (Ziems et al., 2023;Birhane et al., 2023;Xie et al., 2023;Noever & McKee, 2023;Si et al., 2024;Kumar et al., 2024;Xiong et al., 2024b;Zhou et al., 2024b;Cohrs et al., 2025). Not only do these models excel in understanding and generating human language (e.g., Devlin et al., 2018;Brown et al., 2020;Team et al., 2023;Grattafiori et al., 2024), but they also demonstrate a remarkable ability to make nuanced deductions and establish relationships across varied contexts (Elkins & Chun, 2020), rendering them an ideal basis for the generation of semantic hypotheses. ...

Science in the age of large language models
  • Citing Article
  • April 2023

Nature Reviews Physics

... The United States, China, and other countries have proposed their own frameworks for the use of AI by public and private actors. Among expert groups and researchers, some studies identify trustworthiness as the key regulatory objective, while others focus on governance, assurance, or accountability. We prefer the latter term as an umbrella concept that encompasses the core values of international human rights adjudication ...

Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems: A Proposal
  • Citing Article
  • January 2021

SSRN Electronic Journal

... These methodological and ethical considerations are particularly relevant in development studies, where Leslie [29] points out the crucial need to balance technological advancement with ethical considerations in computational social science. This balance becomes increasingly important as we develop more sophisticated tools for analyzing development patterns [30]. ...

The Ethics of Computational Social Science