Reuben Binns's research while affiliated with University of Oxford and other places
What is this page?
This page lists the scientific contributions of an author, who either does not have a ResearchGate profile, or has not yet added these contributions to their profile.
It was automatically created by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.
Publications (76)
How much are we to trust a decision made by an AI algorithm? Trusting an algorithm without cause may lead to abuse, and mistrusting it may similarly lead to disuse. Trust in an AI is only desirable if it is warranted; thus, calibrating trust is critical to ensuring appropriate use. In the name of calibrating trust appropriately, AI developers shoul...
The European Commission proposed a Directive on Platform Work at the end of 2021. While much attention has been placed on its effort to address misclassification of the employed as self-employed, it also contains ambitious provisions for the regulation of the algorithmic management prevalent on these platforms. Overall, these provisions are well-dr...
A recently published pre-print titled 'GDPR and the Lost Generation of Innovative Apps' by Janßen et al. observes that a third of apps on the Google Play Store disappeared from this app store around the introduction of the GDPR in May 2018. The authors deduce 'that GDPR is the cause'. The effects of the GDPR on the app economy are an important...
Critical examinations of AI systems often apply principles such as fairness, justice, accountability, and safety, which is reflected in AI regulations such as the EU AI Act. Are such principles sufficient to promote the design of systems that support human flourishing? Even if a system is in some sense fair, just, or 'safe', it can nonetheless be e...
Tracking is a highly privacy-invasive data collection practice that has been ubiquitous in mobile apps for many years due to its role in supporting advertising-based revenue models. In defence of user privacy, Apple introduced two significant changes with iOS 14: App Tracking Transparency (ATT), a mandatory opt-in system for enabling tracking on iO...
While many studies have looked at privacy properties of the Android and Google Play app ecosystem, comparatively much less is known about iOS and the Apple App Store, the most widely used ecosystem in the US. At the same time, there is increasing competition around privacy between these smartphone operating system providers. In this paper, we prese...
'Tracking' is the collection of data about an individual's activity across multiple distinct contexts and the retention, use, or sharing of data derived from that activity outside the context in which it occurred. This paper aims to introduce tracking on the web, smartphones, and the Internet of Things, to an audience with little or no previous know...
Third-party tracking, the collection and sharing of behavioural data about individuals, is a significant and ubiquitous privacy threat in mobile apps. The EU General Data Protection Regulation (GDPR) was introduced in 2018 to protect personal data better, but there exists, thus far, limited empirical evidence about its efficacy. This paper studies...
•Provisions in many data protection laws require a legal basis, or at the very least safeguards, for significant, solely automated decisions; Article 22 of the GDPR is the most notable. •Little attention has been paid to Article 22 in light of decision-making processes with multiple stages, potentially both manual and automated, and which together...
This article examines the concept of ‘AI fairness’ for people with disabilities from the perspective of data protection and equality law. This examination demonstrates that there is a need for a distinctive approach to AI fairness that is fundamentally different to that used for other protected characteristics, due to the different ways in which di...
Third-party tracking allows companies to collect users' behavioural data and track their activity across digital devices. This can put deep insights into users' private lives into the hands of strangers, and often happens without users' awareness or explicit consent. EU and UK data protection law, however, requires consent, both 1) to access and st...
Homomorphic encryption, secure multi-party computation, and differential privacy are part of an emerging class of Privacy Enhancing Technologies which share a common promise: to preserve privacy whilst also obtaining the benefits of computational analysis. Due to their relative novelty, complexity, and opacity, these technologies provoke a variety...
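As a hedged illustration of the kind of mechanism this family of Privacy Enhancing Technologies includes (not any specific system analysed in the paper), here is a minimal Laplace-mechanism sketch of differential privacy in Python; the function names and the sensitivity-1 count query are assumptions made for the example:

```python
import math
import random

# Illustrative sketch only: the Laplace mechanism, a basic building block of
# differential privacy. A count query has sensitivity 1, so adding
# Laplace(0, 1/epsilon) noise to the true count yields an
# epsilon-differentially-private release.

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(values, predicate, epsilon: float) -> float:
    """Release a noisy count of matching values under epsilon-DP."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)
```

With a large epsilon the noise is negligible and the released count tracks the true count; a smaller epsilon trades accuracy for stronger privacy, which is exactly the benefit-versus-privacy tension the abstract describes.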
Arguments in favor of tempering algorithmic decision making with human judgment often appeal to concepts and criteria derived from legal philosophy about the nature of law and legal reasoning, arguing that algorithmic systems cannot satisfy them (but humans can). Such arguments often make implicit appeal to the notion that each case needs to be ass...
This document describes and analyzes a system for secure and privacy-preserving proximity tracing at large scale. This system, referred to as DP3T, provides a technological foundation to help slow the spread of SARS-CoV-2 by simplifying and accelerating the process of notifying people who might have been exposed to the virus so that they can take a...
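The decentralised design the abstract describes can be sketched in a few lines. This is a hedged illustration of the general idea behind DP-3T (rotating ephemeral identifiers derived from a day key, with matching done on the device), not the actual protocol's key schedule or wire format; all function names and parameters are invented for the example:

```python
import hashlib
import os

def next_day_key(prev_key: bytes) -> bytes:
    """Rotate the secret day key forward with a one-way hash."""
    return hashlib.sha256(prev_key).digest()

def ephemeral_ids(day_key: bytes, n: int = 4) -> list:
    """Derive n short-lived broadcast identifiers from a day key."""
    return [hashlib.sha256(day_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(n)]

def exposed(observed_ids: set, published_infected_keys: list) -> bool:
    """On-device check against day keys published by diagnosed users."""
    return any(eid in observed_ids
               for key in published_infected_keys
               for eid in ephemeral_ids(key))
```

The privacy property being sketched: the server only ever sees day keys voluntarily uploaded after a diagnosis, and contact matching happens locally on each phone rather than in a central contact database.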
The increasingly widespread use of 'smart' devices has raised multifarious ethical concerns regarding their use in domestic spaces. Previous work examining such ethical dimensions has typically either involved empirical studies of concerns raised by specific devices and use contexts, or alternatively expounded on abstract concepts like autonomy, pr...
As government pressure on major technology companies builds, both firms and legislators are searching for technical solutions to difficult platform governance puzzles such as hate speech and misinformation. Automated hash-matching and predictive machine learning tools – what we define here as algorithmic moderation systems – are increasingly being...
Connected devices in the home represent a potentially grave new privacy threat due to their unfettered access to the most personal spaces in people's lives. Prior work has shown that despite concerns about such devices, people often lack sufficient awareness, understanding, or means of taking effective action. To explore the potential for new tools...
A distinction has been drawn in fair machine learning research between `group' and `individual' fairness measures. Many technical research papers assume that both are important, but conflicting, and propose ways to minimise the trade-offs between these measures. This paper argues that this apparent conflict is based on a misconception. It draws on...
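For readers unfamiliar with the distinction, the two families of measures the paper discusses can be made concrete with a small, hedged sketch (illustrative definitions only; the paper's argument concerns their philosophical basis, not this code):

```python
from collections import defaultdict

def demographic_parity_gap(groups, outcomes):
    """A 'group' fairness measure: the largest difference in
    positive-outcome rates between any two groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for g, y in zip(groups, outcomes):
        totals[g] += 1
        positives[g] += y
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

def individually_consistent(features, outcomes, distance, threshold):
    """An 'individual' fairness check: any two individuals closer than
    `threshold` under `distance` receive the same outcome."""
    n = len(features)
    return all(outcomes[i] == outcomes[j]
               for i in range(n) for j in range(i + 1, n)
               if distance(features[i], features[j]) <= threshold)
```

A system can score well on one measure and badly on the other for a given dataset, which is the apparent conflict the paper interrogates.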
Amid growing concern about the use and abuse of personal data over the last decade, there is an emerging suggestion that regulators may need to turn their attention towards the concentrations of power deriving from large-scale data accumulation. No longer the preserve of data protection or privacy law, personal data is receiving attention within co...
The democratic role of the press relies on maintaining independence, ensuring citizens can access controversial materials without fear of persecution, and promoting transparency. However, as news has moved to the web, reliance on third-parties has centralized revenue and hosting infrastructure, fostered an environment of pervasive surveillance, and...
Recent changes to data protection regulation, particularly in Europe, are changing the design landscape for smart devices, requiring new design techniques to ensure that devices are able to adequately protect users' data. A particularly interesting space in which to explore and address these challenges is the smart home, which presents a multitude...
Many people struggle to control their use of digital devices. However, our understanding of the design mechanisms that support user self-control remains limited. In this paper, we make two contributions to HCI research in this space: first, we analyse 367 apps and browser extensions from the Google Play, Chrome Web, and Apple App stores to identify...
The ever-increasing application of algorithms to decision-making in a range of social contexts has prompted demands for algorithmic accountability. Accountable decision-makers must provide their decision-subjects with justifications for their automated system’s outputs, but what kinds of broader principles should we expect such justifications to ap...
Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms. The EU's recent General Data Protection Regulation (GDPR) has been seen as a core tool for achieving better governance of this area. While the GDPR does apply to the use of models in some limited situations, most of its provision...
Cite as: Michael Veale, Reuben Binns and Lilian Edwards (2018) Algorithms That Remember: Model Inversion Attacks and Data Protection Law. Philosophical Transactions A, forthcoming. doi:10.1098/rsta.2018.0083
Data protection by design (DPbD), a holistic approach to embedding principles in technical and organizational measures undertaken by data controllers, building on the notion of Privacy by Design, is now a qualified duty in the GDPR. Practitioners have seen DPbD less holistically, instead framing it through the confidentiality-focussed lens of priva...
What does it mean for an algorithmic decision-making system to be “fair” or “non-discriminatory” in terms that can be operationalized? Providing a rigorous understanding of these terms has long been a preoccupation of moral and political philosophers. This article draws on such work to elucidate emerging debates about fair algorithms.
This paper proposes that two significant and emerging problems facing our connected, data-driven society may be more effectively solved by being framed as sensemaking challenges. The first is in empowering individuals to take control of their privacy, in device-rich information environments where personal information is fed transparently to complex...
Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions—like taxation, justice, and child protection—are now commonplace. How might designers support such human values? We interviewed 27 public sector machine learning practitioners across 5 OECD countries regarding challenges understanding and...
Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to ‘meaningful information about the logic’ behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three studies examin...
Most smartphone apps collect and share information with various first and third parties; yet, such data collection practices remain largely unbeknownst to, and outside the control of, end-users. In this paper, we seek to understand the potential for tools to help people refine their exposure to third parties, resulting from their app usage. We desi...
Equating users' true needs and desires with behavioural measures of 'engagement' is problematic. However, good metrics of 'true preferences' are difficult to define, as cognitive biases make people's preferences change with context and exhibit inconsistencies over time. Yet, HCI research often glosses over the philosophical and theoretical depth of...
Third party tracking allows companies to identify users and track their behaviour across multiple digital services. This paper presents an empirical study of the prevalence of third-party trackers on 959,000 apps from the US and UK Google Play stores. We find that most apps contain third party tracking, and the distribution of trackers is long-tail...
Cite as: Michael Veale, Reuben Binns and Max Van Kleek (2018) Some HCI Priorities for GDPR-Compliant Machine Learning. The General Data Protection Regulation: An Opportunity for the CHI Community? (CHI-GDPR 2018), Workshop at ACM CHI'18, 22 April 2018, Montreal, Canada.
In this short paper, we consider the roles of HCI in enabling the better governance of consequential machine learning systems using the rights and obligations laid out in the recent 2016 EU General Data Protection Regulation (GDPR)---a law which involves heavy interaction with people and systems. Focussing on those areas that relate to algorithmic...
Cite as: Michael Veale, Reuben Binns and Jef Ausloos (2018) When Data Protection by Design and Data Subject Rights Clash. International Data Privacy Law (2018) doi:10.1093/idpl/ipy002. [Note: An earlier draft was entitled "We Can't Find Your Data, But A Hacker Could: How 'Privacy by Design' Trades-Off Data Protection Rights"]
Third-party networks collect vast amounts of data about users via web sites and mobile applications. Consolidations among tracker companies can significantly increase their individual tracking capabilities, prompting scrutiny by competition regulators. Traditional measures of market share, based on revenue or sales, fail to represent the tracking c...
Cite as: Michael Veale, Max Van Kleek and Reuben Binns (2018) Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. ACM Conference on Human Factors in Computing Systems (CHI'18). doi: 10.1145/3173574.3174014
Cite as: Reuben Binns, Max Van Kleek, Michael Veale, Ulrik Lyngs, Jun Zhao and Nigel Shadbolt (2018) 'It's Reducing a Human Being to a Percentage'; Perceptions of Justice in Algorithmic Decisions. ACM Conference on Human Factors in Computing Systems (CHI'18), April 21–26, Montreal, Canada. doi: 10.1145/3173574.3173951
What does it mean for a machine learning model to be `fair', in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of af...
Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used to train them. While computational techniques are emerging to address aspects of these concerns through communities such as discrimination-aware data mining (DADM) and fairness, accountability and transparency machine learning (FATML),...
The internet has become a central medium through which 'networked publics' express their opinions and engage in debate. Offensive comments and personal attacks can inhibit participation in these spaces. Automated content moderation aims to overcome this problem using machine learning classifiers trained on large corpora of texts manually annotated...
Cite as: Veale, Michael and Binns, Reuben (2017) Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society 4(2). doi:10.1177/2053951717743530
This paper explores how individuals' privacy-related decision-making processes may be influenced by their pre-existing relationships to companies in a wider social and economic context. Through an online role-playing exercise, we explore attitudes to a range of services including home automation, Internet-of-Things and financial services. We find t...
Most users of smartphone apps remain unaware of what data about them is being collected, by whom, and how these data are being used. In this mixed methods investigation, we examine the question of whether revealing key data collection practices of smartphone apps may help people make more informed privacy-related decisions. To investigate this ques...
Privacy protection is one of the most prominent concerns for web users. Despite numerous efforts, users remain powerless in controlling how their personal information should be used and by whom, and find limited options to actually opt-out of dominating service providers, who often process users information with limited transparency or respect for...
The next generation of systems will do more than connect people; they will invisibly orchestrate our social processes and help us achieve the previously impossible. Consumer electronics in their current form of smartphones, wearables, and sensors, along with other devices yet to be envisioned, will power this next generation of systems, providing t...
Citations
... Thereby, the authorities can detect people who may have had close contact with an infected person and notify them promptly to break the chain of infection. However, to handle the privacy issues, the Decentralized Privacy-Preserving Proximity Tracing (DP-3T) protocol [60] was developed to facilitate privacy-preserving digital contact tracing of infected cases. This protocol ensures that the central server does not access contact records. ...
... Another potential attack is to track users' behavior across different apps, which became more crucial after Apple added the app tracking transparency (ATT) permission. Due to this permission, app developers are trying to find potential ways to fingerprint users' phones and infer their behaviors on different apps [25]. iStelan could be integrated with other data to increase the probability of detecting users' behavior, allowing more personalized ads and increasing revenue. ...
... In the field of public services, especially in higher education, the influence of the Internet has become increasingly apparent, and Internet-based teaching innovations such as MOOCs and flipped classrooms have emerged one after another. This research focuses on the impact of the Internet on college teaching and how colleges and universities should carry out teaching innovation in the "Internet +" era [12]. Accordingly, Figure 1 shows the online information analytic framework. ...
... GDPR came into effect in May 2018 [10], bringing significant changes in the area of personal data protection across the EU that strongly affected different areas of our life [11][12][13][14]. ...
... Forms of 'human in the loop' oversight of automated systems have been explored within a variety of contexts. These include defining the boundaries of 'meaningful control' within defence [97], the scope and implementation of restrictions upon automated decision-making within Article 22 of the General Data Protection Regulation (GDPR) [8], and current debates regarding mechanisms of oversight of AI systems [41,90]. While this work is undoubtedly useful, one gap thus far is the extent to which it engages with the idea of operational oversight as a team activity, rather than something that can be carried out by an individual. ...
... Our primary contribution is a formalization of deletion-as-control in contrast to deletion-asconfidentiality, building on the legal scholarship of Veale et al. [2018a]. Informally, confidentiality requires that parties other than the controller should be unable to tell whether a data subject Alice requested erasure from the controller or simply never interacted with the controller in the first place. ...
... Courthouses appear to be a 'wild' where the HCI community has feared to tread. In some respects, this is surprising, given the multitude of challenging settings in which HCI researchers have elected to personally conduct research and the increasing importance of legal (interaction) design [1,10,11,15]. The operation of the juridical process is of great importance for HCI: the law is a set of (sometimes ill-defined) constraints and opportunities that apply to the design and implementation of all information systems. ...
Reference: (Legal Design) Research through Litigation
... A growing number of privacy preserving computation technologies have emerged in recent years, which share the common promise of preserving privacy while also obtaining the benefits of computational analysis [1]. To preserve visual privacy, existing solutions mainly adopted post-processing techniques such as image blurring and encryption techniques for images containing visual privacy information, e.g., human faces [7,22,24,30,33,43,49,50]. ...
... A focus upon how 'appropriate confidence' is balanced and the level of 'sufficient information' required highlights the need to understand existing institutional structures of oversight. As Binns highlights, 'these issues will have to be worked out as algorithmic systems are deployed in context; if individual justice is worth protecting, we cannot assume that it will be secured by simply putting a human in the algorithmic loop' [7]. Recognising and mapping existing structures of oversight within high-stakes contexts highlights the primacy of teams of professionals operating within existing rules that structure and bound clinical oversight. ...
... The SIG follows similar strands from past workshops at CHI 2020, 2021, and 2022 [9,10,22]. The topics discussed are evolving and growing ( Figure 1); hence, a SIG at CHI 2023 would be timely. ...