Jess Whittlestone
Warwick Business School · Department of Behavioural Science

About

23 Publications
14,456 Reads
1,066 Citations

Publications (23)
Chapter
Full-text available
This anthology brings together a diversity of key texts in the emerging field of Existential Risk Studies. It serves to complement the previous volume The Era of Global Risk: An Introduction to Existential Risk Studies by providing open access to original research and insights in this rapidly evolving field. At its heart, this book highlights the o...
Preprint
Full-text available
Advanced AI models hold the promise of tremendous benefits for humanity, but society needs to proactively manage the accompanying risks. In this paper, we focus on what we term "frontier AI" models: highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety. Frontier AI models pose a...
Preprint
Full-text available
Current approaches to building general-purpose AI systems tend to produce systems with both beneficial and harmful capabilities. Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills. We explain why model evaluation is critical for addressing extreme...
Technical Report
Full-text available
A response to the UK’s Future of Compute Review, co-signed by Jess Whittlestone (CLTR), Shahar Avin (CSER), Lennart Heim (GovAI), Markus Anderljung (GovAI), and Girish Sastry (OpenAI). It argues that the Future of Compute Review fails to provide adequate strategies for ensuring responsible and well-governed use of compute, and gives four suggestions...
Preprint
Full-text available
It is increasingly recognised that advances in artificial intelligence could have large and long-lasting impacts on society. However, what form those impacts will take, just how large and long-lasting they will be, and whether they will ultimately be positive or negative for humanity, is far from clear. Based on surveying literature on the societal...
Preprint
Full-text available
Artificial intelligence is already being applied in and impacting many important sectors in society, including healthcare, finance, and policing. These applications will increase as AI capabilities continue to progress, which has the potential to be highly beneficial for society, or to cause serious harm. The role of AI governance is ultimately to...
Article
Artificial intelligence is already being applied in and impacting many important sectors in society, including healthcare, finance, and policing. These applications will increase as AI capabilities continue to progress, which has the potential to be highly beneficial for society, or to cause serious harm. The role of AI governance is ultimately to...
Article
The terms ‘human-level artificial intelligence’ and ‘artificial general intelligence’ are widely used to refer to the possibility of advanced artificial intelligence (AI) with potentially extreme impacts on society. These terms are poorly defined and do not necessarily indicate what is most important with respect to future societal impacts. We sugg...
Article
The world is changing so fast that it is hard to know how to think about what we ought to do. We barely have time to reflect on how scientific advances will affect our lives before they are upon us. New kinds of dilemma are springing up. Can robots be held responsible for their actions? Will artificial intelligence be able to predict criminal activ...
Article
Full-text available
AI-based technologies promise benefits for tackling a pandemic like covid-19, but also raise ethical challenges for developers and decision makers. If an ethical approach is not taken, the risk of unintended harmful consequences and a loss of stakeholder trust increases. Ethical challenges from the use of AI systems arise because they often require...
Article
Full-text available
We propose a method for identifying early warning signs of transformative progress in artificial intelligence (AI), and discuss how these can support the anticipatory and democratic governance of AI. We call these early warning signs ‘canaries’, based on the use of canaries to provide early warnings of unsafe air pollution in coal mines. Our method...
Article
Full-text available
Achieving the global benefits of artificial intelligence (AI) will require international cooperation on many areas of governance and ethical standards, while allowing for diverse cultural perspectives and priorities. There are many barriers to achieving this at present, including mistrust between cultures, and more practical challenges of coordinat...
Article
Full-text available
Artificial intelligence tools can help save lives in a pandemic. However, the need to implement technological solutions rapidly raises challenging ethical issues. We need new approaches for ethics with urgency, to ensure AI can be safely and beneficially used in the COVID-19 response and beyond.
Preprint
Full-text available
One way of carving up the broad "AI ethics and society" research space that has emerged in recent years is to distinguish between "near-term" and "long-term" research. While such ways of breaking down the research space can be useful, we put forward several concerns about the near/long-term distinction gaining too much prominence in how research qu...
Preprint
Full-text available
Recently the concept of transformative AI (TAI) has begun to receive attention in the AI policy space. TAI is often framed as an alternative formulation to notions of strong AI (e.g. artificial general intelligence or superintelligence) and reflects increasing consensus that advanced AI which does not fit these definitions may nonetheless have extr...
Preprint
Full-text available
This paper explores the tension between openness and prudence in AI research, evident in two core principles of the Montréal Declaration for Responsible AI. While the AI community has strong norms around open sharing of research, concerns about the potential harms arising from misuse of research are growing, prompting some to consider whether the...
Preprint
Full-text available
The aim of this paper is to facilitate nuanced discussion around research norms and practices to mitigate the harmful impacts of advances in machine learning (ML). We focus particularly on the use of ML to create "synthetic media" (e.g. to generate audio, video, images, and text), and the question of what publication and release processes around su...
Technical Report
Full-text available
This report sets out a broad roadmap for work on the ethical and societal implications of ADA-based technologies. The roadmap identifies the questions for research that need to be prioritised in order to inform and improve the standards, regulations and systems of oversight of ADA-based technologies. Without this, the report’s authors conclude the...
Conference Paper
Full-text available
The last few years have seen a proliferation of principles for AI ethics. There is substantial overlap between different sets of principles, with widespread agreement that AI should be used for the common good, should not be used to harm people or undermine their rights, and should respect widely held values such as fairness, privacy, and autonomy....
