Reuben Binns’s research while affiliated with University of Oxford and other places


Publications (90)


Access Denied: Meaningful Data Access for Quantitative Algorithm Audits
  • Preprint
  • File available

February 2025 · 11 Reads

Juliette Zaccour · Reuben Binns

Independent algorithm audits hold the promise of bringing accountability to automated decision-making. However, third-party audits are often hindered by access restrictions, forcing auditors to rely on limited, low-quality data. To study how these limitations impact research integrity, we conduct audit simulations on two realistic case studies for recidivism and healthcare coverage prediction. We examine the accuracy of estimating group parity metrics across three levels of access: (a) aggregated statistics, (b) individual-level data with model outputs, and (c) individual-level data without model outputs. Despite selecting one of the simplest tasks for algorithmic auditing, we find that data minimization and anonymization practices can strongly increase error rates on individual-level data, leading to unreliable assessments. We discuss implications for independent auditors, as well as potential avenues for HCI researchers and regulators to improve data access and enable both reliable and holistic evaluations.
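
The group parity comparison described above can be made concrete with a small sketch. The following is not the authors' audit code; it is a minimal illustration, assuming demographic parity difference as the metric and hypothetical column names, of how the same estimate is formed from (a) aggregated statistics versus (b) individual-level data with model outputs.

    import pandas as pd

    def demographic_parity_difference(df, group_col, pred_col):
        """Absolute gap in positive-prediction rates between two groups,
        computed from individual-level records that include model outputs.
        (Assumes exactly two groups for simplicity.)"""
        rates = df.groupby(group_col)[pred_col].mean()
        return abs(rates.iloc[0] - rates.iloc[1])

    def demographic_parity_from_aggregates(pos_a, n_a, pos_b, n_b):
        """The same metric recovered from aggregated statistics alone:
        positive-prediction counts and group sizes released by the operator."""
        return abs(pos_a / n_a - pos_b / n_b)

    # Hypothetical individual-level audit sample (access level b).
    records = pd.DataFrame({
        "group":      ["A", "A", "A", "B", "B", "B", "B"],
        "prediction": [1,   0,   1,   0,   0,   1,   0],
    })
    print(demographic_parity_difference(records, "group", "prediction"))

    # Access level a: the operator discloses only these four numbers.
    print(demographic_parity_from_aggregates(pos_a=2, n_a=3, pos_b=1, n_b=4))

Under access level (c), individual-level data without model outputs, the auditor would first need to obtain predictions (for example, by querying the system) before either estimate can be formed; the abstract's point is that minimised or anonymised individual-level data can make such estimates unreliable.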


Participant Demographics
Governance of Generative AI in Creative Work: Consent, Credit, Compensation, and Beyond

January 2025 · 38 Reads

Lin Kyi · Amruta Mahuli · M. Six Silberman · [...] · Asia J. Biega

Since the emergence of generative AI, creative workers have spoken up about the career-based harms they have experienced arising from this new technology. A common theme in these accounts of harm is that generative AI models are trained on workers' creative output without their consent and without giving credit or compensation to the original creators. This paper reports findings from 20 interviews with creative workers in three domains: visual art and design, writing, and programming. We investigate the gaps between current AI governance strategies, what creative workers want out of generative AI governance, and the nuanced role of creative workers' consent, compensation and credit for training AI models on their work. Finally, we make recommendations for how generative AI can be governed and how operators of generative AI systems might more ethically train models on creative output in the future.


The Interaction Layer: An Exploration for Co-Designing User-LLM Interactions in Parental Wellbeing Support Systems

November 2024 · 45 Reads

Parenting brings emotional and physical challenges, from balancing work, childcare, and finances to coping with exhaustion and limited personal time. Yet one in three parents never seek support. AI systems potentially offer stigma-free, accessible, and affordable solutions, but user adoption often fails due to issues with explainability and reliability. To see whether these issues could be solved using a co-design approach, we developed and tested NurtureBot, a wellbeing support assistant for new parents. 32 parents co-designed the system through the Asynchronous Remote Communities method, identifying the key challenge as achieving a "successful chat." As part of co-design, parents role-played as NurtureBot, rewriting its dialogues to improve user understanding, control, and outcomes. The refined prototype, evaluated by 32 initial and 46 new parents, showed improved user experience and usability, with a final CUQ score of 91.3/100, demonstrating successful interaction patterns. Our process revealed useful interaction design lessons for effective AI parenting support.


"Diversity is Having the Diversity": Unpacking and Designing for Diversity in Applicant Selection

October 2024 · 46 Reads

When selecting applicants for scholarships, universities, or jobs, practitioners often aim for a diverse cohort of qualified recipients. However, differing articulations, constructs, and notions of diversity prevent decision-makers from operationalising and progressing towards the diversity they all agree is needed. To understand this challenge of translation from values, to requirements, to decision support tools (DSTs), we conducted participatory design studies exploring professionals' varied perceptions of diversity and how to build for them. Our results suggest three definitions of diversity: bringing together different perspectives; ensuring representativeness of a base population; and contextualising applications. We use these definitions to create the Diversity Triangle. We experience-prototyped DSTs reflecting each angle of the Diversity Triangle to enhance decision-making around diversity. We find that notions of diversity are highly diverse; efforts to design DSTs for diversity should start by working with organisations to distil 'diversity' into definitions and design requirements.





Figure 1: Inspired by Despres and Chauvel's chart mapping the regions of practice in knowledge management, this shows which regions are covered by a knowledge management tool that addresses the feedback presented by our participants [14].
This outlines the IDs and gender of our participants, a short description of their current role, and their primary location of operation.
We Are Not There Yet: The Implications of Insufficient Knowledge Management for Organisational Compliance

May 2023 · 74 Reads

Since the GDPR went into effect in 2018, many other data protection and privacy regulations have been released, and with them an associated increase in industry professionals focused on data protection and privacy. Building on related work showing the potential benefits of knowledge management in organisational compliance and privacy engineering, this paper presents the findings of an exploratory qualitative study with data protection officers and other privacy professionals. We found issues with knowledge management to be the underlying challenge across our participants' feedback. That feedback fell into four categories: (1) a perceived disconnect between regulation and practice, (2) a general lack of clear job descriptions, (3) the need for data protection and privacy to be involved at every level of an organisation, and (4) knowledge management tools exist but are not used effectively. This paper asks which knowledge management or automation solutions may prove effective in establishing better computer-supported work environments.


Fortifying the algorithmic management provisions in the proposed Platform Work Directive

April 2023 · 17 Reads · 7 Citations

European Labour Law Journal

The European Commission proposed a Directive on Platform Work at the end of 2021. While much attention has been placed on its effort to address misclassification of the employed as self-employed, it also contains ambitious provisions for the regulation of the algorithmic management prevalent on these platforms. Overall, these provisions are well-drafted, yet they require extra scrutiny in light of the fierce lobbying and resistance they will likely encounter in the legislative process, in implementation and in enforcement. In this article, we place the proposal in its sociotechnical context, drawing upon wide cross-disciplinary scholarship to identify a range of tensions, potential misinterpretations, and perversions that should be pre-empted and guarded against at the earliest possible stage. These include improvements to ex ante and ex post algorithmic transparency; identifying and strengthening the standard against which human reviewers of algorithmic decisions review; anticipating challenges of representation and organising in complex platform contexts; creating realistic ambitions for digital worker communication channels; and accountably monitoring and evaluating impacts on workers while limiting data collection. We encourage legislators and regulators at both European and national levels to act to fortify these provisions in the negotiation of the Directive, its potential transposition, and in its enforcement.



Citations (62)


... A small community of AI researchers have proposed using causal models as a potential solution to the fairness and machine learning problem [1,8,9,20,29,35,41]. In particular, techniques proposed by Pearl using causal Bayesian networks (CBNs) may offer a particularly promising solution [31,32]. ...

Reference:

Debiasing Alternative Data for Credit Underwriting Using Causal Inference
Unlawful Proxy Discrimination: A Framework for Challenging Inherently Discriminatory Algorithms

... Algorithmic fairness methods aim to address these biases computationally (Barocas, Hardt, and Narayanan 2023). The notions of fairness or justice employed by algorithmic fairness interventions are also primarily derived from legal scholarship and legislation on anti-discrimination (Ajunwa et al. 2016; Binns, Adams-Prassl, and Kelly-Lyth 2023). However, the descriptive foundations of the anti-discriminatory legal rulings or legislative policies (i.e., reasons for using them and if they were effective in practice) are relatively under-discussed in the fairness literature. ...

Legal Taxonomies of Machine Bias: Revisiting Direct Discrimination
  • Citing Conference Paper
  • June 2023

... In response to these challenges, the European Commission proposed the Directive on Platform Work, aiming to improve working conditions and clarify employment status (Silberman, 2023; Veale et al., 2023). The Directive seeks to address false self-employment, regulate algorithmic management practices, and enhance transparency and accountability of digital labour platforms. ...

Fortifying the algorithmic management provisions in the proposed Platform Work Directive
  • Citing Article
  • April 2023

European Labour Law Journal

... Complementing these top-down approaches, worker data collectives are emerging as powerful tools for empowerment 2 . These collectives, which have been studied in HCI literature [40,70,86] [87], encompass online and offline social institutions [39,82], third-party tools for data sharing and analysis [22,36,40,70,85], and platform evaluation mechanisms like Fairwork [31]. Serving as communities of resistance [7], they enable collective data resistance strategies [74,75,89]. ...

‘You are you and the app. There’s nobody else.’: Building Worker-Designed Data Institutions within Platform Hegemony

... This technology has resulted in new jobs for engineers, data scientists, and software developers who can design and maintain these systems. AI technology has also enhanced the fan experience by providing personalized recommendations for content and merchandise and creating new types of experiences that enrich the spectator experience (9,10). This has led to the creation of new jobs in marketing, sales, and digital media. ...

Spectators of AI: Football Fans vs. the Semi-Automated Offside Technology at the 2022 FIFA World Cup
  • Citing Conference Paper
  • April 2023

... 67 To trace contacts of confirmed cases, linked location, surveillance and transaction data were used, 68 along with the Ebola Contact Tracing application, 69 AliPay Health-Code app 70 71 and voluntary contact-tracing apps that collect location data via Global Positioning System (GPS) or cellular networks, 72 proximity data via Bluetooth 73 or a combination of both. 74 75 Emerging international frameworks, including Decentralised Privacy-Preserving Proximity Tracing, 76 the Pan-European Privacy-Preserving Proximity Tracing initiative 77 and the joint Google-Apple framework 78 are also being developed, each with varying levels of privacy preservation. ...

Decentralized Privacy-Preserving Proximity Tracing: Overview of Data Protection and Security
  • Citing Article
  • May 2020
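
The decentralised frameworks listed in the excerpt above share a common pattern: devices broadcast short-lived pseudonymous identifiers over Bluetooth, and matching happens locally against identifiers re-derived from keys later published for diagnosed users. The sketch below is a deliberately simplified illustration of that pattern, not the DP-3T or Google-Apple specification; the key-derivation scheme, rotation schedule, and variable names are assumptions made for clarity.

    import hmac, hashlib, os

    def ephemeral_ids(day_key: bytes, slots: int = 96):
        """Derive short-lived broadcast identifiers from a per-day secret key.
        (Simplified: real protocols specify exact KDFs and rotation intervals.)"""
        return [hmac.new(day_key, f"slot-{i}".encode(), hashlib.sha256).digest()[:16]
                for i in range(slots)]

    # Alice's phone generates a daily key and broadcasts its ephemeral IDs over BLE.
    alice_day_key = os.urandom(32)
    alice_broadcasts = ephemeral_ids(alice_day_key)

    # Bob's phone records identifiers it overhears nearby (here, a few of Alice's).
    bob_observations = set(alice_broadcasts[10:14])

    # If Alice is later diagnosed, only her day key is uploaded to a public list.
    published_keys = [alice_day_key]

    # Bob re-derives identifiers from published keys and matches locally, so the
    # server never learns whom Bob met.
    exposed = any(eid in bob_observations
                  for key in published_keys
                  for eid in ephemeral_ids(key))
    print("exposure detected:", exposed)

The privacy argument in this pattern rests on keeping observation logs on-device and publishing only the keys of diagnosed users.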

... In the context of US law, these two concepts correspond to disparate treatment (direct discrimination) and disparate impact (indirect discrimination) [90]. Importantly, however, EU-based direct discrimination, unlike US-based disparate treatment, does not require any intentional wrongdoing [3,90,95]. ...

Directly Discriminatory Algorithms

Modern Law Review

... This idea has been appropriated for specific application areas by both researchers and companies, for example, as part of iOS 14. However, recent investigations showed that these can often be inaccurate and misleading, with discrepancies between the data disclosed in the apps' nutrition labels and actual data practices [18]. Privacy Badge [11] is a privacy-aware user interface for small hand-held devices designed to communicate the type of data being disclosed, when, to whom, and for what purpose. ...

Goodbye Tracking? Impact of iOS App Tracking Transparency and Privacy Labels
  • Citing Conference Paper
  • June 2022

... Papers that do appear to associate respect with due regard, such as for rights and dignity. This includes working with the elderly [43], or ensuring that AI systems are designed to treat humans with respect [57]. These lines of thought are indicative of the predominance of Western moral and political philosophy, which can assert the primacy of regarding people as equal, regardless of their status and background or our subjective interests or relationships [24,29]. ...

Respect as a Lens for the Design of AI Systems
  • Citing Preprint
  • June 2022

... Although recent efforts have been made to produce similar data on iOS apps (...). The dynamic analysis approach can be used to find evidence of undiscovered SDKs, as the data output is significantly smaller than from the static analysis. Conversely, it is less suited for studying the broader ecosystems of mobile apps and third-party tracking that go beyond individual apps and particular dataflows (Binns, 2022). In developing valid methods for monitoring, and ultimately regulating, mobile infrastructures for datafication, the static approach constitutes a more suitable entry point. ...

Tracking on the Web, Mobile and the Internet of Things
  • Citing Article
  • January 2022

Foundations and Trends® in Web Science
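
The static-versus-dynamic trade-off drawn in the excerpt above can be illustrated with a minimal static check: scanning the compiled dex bytecode inside an Android package for class-name strings that match known tracker SDK prefixes. This is an illustrative sketch rather than the cited study's pipeline; the prefix list and file path are hypothetical examples.

    import zipfile

    # Hypothetical tracker SDK package prefixes; real studies rely on curated
    # signature lists rather than these few examples.
    TRACKER_PREFIXES = [b"Lcom/facebook/ads", b"Lcom/google/ads", b"Lcom/appsflyer"]

    def detect_tracker_sdks(apk_path: str):
        """Crude static analysis: search the raw classes*.dex entries of an APK
        for class descriptors belonging to known tracker packages."""
        found = set()
        with zipfile.ZipFile(apk_path) as apk:
            for name in apk.namelist():
                if name.startswith("classes") and name.endswith(".dex"):
                    blob = apk.read(name)
                    for prefix in TRACKER_PREFIXES:
                        if prefix in blob:
                            found.add(prefix.decode()[1:].replace("/", "."))
        return sorted(found)

    # Usage (path is hypothetical):
    # print(detect_tracker_sdks("example.apk"))

A dynamic counterpart would instead run the app and observe its behaviour or network traffic, producing far less data per app but potentially surfacing SDKs that signature lists miss, which matches the trade-off described in the excerpt.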