Sandra Wachter
  • PhD
  • Senior Researcher at University of Oxford

About

  • Publications: 32
  • Reads: 71,455
  • Citations: 9,075
  • Current institution: University of Oxford
  • Current position: Senior Researcher

Publications

Article
Full-text available
Careless speech is a new type of harm created by large language models (LLMs) that poses cumulative, long-term risks to science, education and shared social truth in democratic societies. LLMs produce responses that are plausible, helpful and confident, but that contain factual inaccuracies, misleading references and biased information. These subtle...
Preprint
Full-text available
We present OxonFair, a new open source toolkit for enforcing fairness in binary classification. Compared to existing toolkits: (i) We support NLP and Computer Vision classification as well as standard tabular problems. (ii) We support enforcing fairness on validation data, making us robust to a wide range of overfitting challenges. (iii) Our approa...
Article
In recent years, fairness in machine learning (ML), artificial intelligence (AI), and algorithmic decision-making systems has emerged as a highly active area of research and development. To date, most measures and methods to mitigate bias and improve fairness in algorithmic systems have been built in isolation from policymaking and civil societal c...
Article
Firms are increasingly personalising their offers and services, leading to an ever finer-grained segmentation of consumers online. Targeted online advertising and online price discrimination are salient examples of this development. While personalisation's overall effects on consumer welfare are expectably ambiguous, it can lead to concentration in...
Article
Full-text available
In its attempt to better regulate the platform economy, the European Commission recently proposed a Digital Markets Act (DMA) and a Digital Services Act (DSA). While the DMA addresses worries about digital markets not functioning properly, the DSA is concerned with societal harms stemming from the dissemination of (illegal) content on platforms. Bo...
Article
Full-text available
In recent years a substantial literature has emerged concerning bias, discrimination, and fairness in artificial intelligence (AI) and machine learning. Connecting this work to existing legal non-discrimination frameworks is essential to create tools and methods that are practically useful across divergent legal regimes. While much work has been un...
Article
Online targeting isolates individual consumers, causing what we call epistemic fragmentation. This phenomenon amplifies the harms of advertising and inflicts structural damage to the public forum. The two natural strategies to tackle the problem of regulating online targeted advertising, increasing consumer awareness and extending proactive monitor...
Article
Online behavioural advertising (OBA) relies on inferential analytics to target consumers based on data about their online behaviour. While the technology can improve the matching of adverts with consumers’ preferences, it also poses risks to consumer welfare as consumers face offer discrimination and the exploitation of their cognitive errors. The...
Article
Full-text available
Columbia Business Law Review, 2019(2). Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discrimi...
Article
Europe’s data protection laws must evolve to guard against pervasive inferential analytics in nascent digital technologies such as edge computing.
Conference Paper
Full-text available
Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system m...
Preprint
Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system m...
Article
Full-text available
In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence (AI). In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to...
Article
Full-text available
There has been much discussion of the right to explanation in the EU General Data Protection Regulation, and its existence, merits, and disadvantages. Implementing a right to explanation that opens the black box of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-maki...
Article
Full-text available
The Internet of Things (IoT) requires pervasive collection and linkage of user data to provide personalised experiences based on potentially invasive inferences. Consistent identification of users and devices is necessary for this functionality, which poses risks to user privacy. The General Data Protection Regulation (GDPR) contains numerous provis...
Article
Full-text available
In the Internet of Things (IoT), identification and access control technologies provide essential infrastructure to link data between a user's devices with unique identities, and provide seamless and linked up services. At the same time, profiling methods based on linked records can reveal unexpected details about users' identity and private life,...
Preprint
There has been much discussion of the right to explanation in the EU General Data Protection Regulation, and its existence, merits, and disadvantages. Implementing a right to explanation that opens the black box of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-maki...
Article
Full-text available
Full text openly available via direct link at: http://digitalethicslab.oii.ox.ac.uk/sandra-wachter/#tab-7456a56b194183e7f18 To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
Article
Full-text available
Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that a 'right to explanation' of decisions made by automated or artificially intelligent algorithmic systems will be legally mandated by the GDPR. This right to explanation is viewed as an ideal mechanism to enhance the accountabili...
Article
Full-text available
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, a...