Article

The scored society: Due process for automated predictions

Authors: Danielle Keats Citron and Frank Pasquale

Abstract

Big Data is increasingly mined to rank and rate individuals. Predictive algorithms assess whether we are good credit risks, desirable employees, reliable tenants, valuable customers - or deadbeats, shirkers, menaces, and "wastes of time." Crucial opportunities are on the line, including the ability to obtain loans, work, housing, and insurance. Though automated scoring is pervasive and consequential, it is also opaque and lacking oversight. In one area where regulation does prevail - credit - the law focuses on credit history, not the derivation of scores from data. Procedural regularity is essential for those stigmatized by "artificially intelligent" scoring systems. The American due process tradition should inform basic safeguards. Regulators should be able to test scoring systems to ensure their fairness and accuracy. Individuals should be granted meaningful opportunities to challenge adverse decisions based on scores miscategorizing them. Without such protections in place, systems could launder biased and arbitrary data into powerfully stigmatizing scores.

... And yet, while it might be appealing to take these steps to connect the nascent field of ML law with its older cyberlaw sibling, upon deeper examination, the comparison between ML and the law via functions does not hold up. For one, as much legal scholarship acknowledges, the mechanism by which ML translates from inputs to outputs fundamentally differs from the analogous mechanisms in the law [3,11,12,23,27,32,33]. The law has a variety of mechanisms - rules, standards, factors tests, etc. - each accompanied with justifications for (and amendments concerning) their use, as well as a long record in jurisprudence of their application to specific cases. ...
... Moreover, as we saw in Section 2, non-determinism can make it very difficult to reason about the difference between correctness and incorrectness in ML program behaviors, thus making accuracy a fuzzy concept that is difficult to pin down. And yet, in the existing legal literature on ML, the issue of inaccuracy and accuracy, particularly at the individual model level, has been a dominant theme [3,6,8,11,18,19,28]. For the law to adequately contend with non-determinism, we have argued that the legal literature must shift to also consider the viewpoint of distributions over outcomes, as this viewpoint indicates how non-determinism fundamentally problematizes our understanding of accuracy. ...
... Based on this prior discussion, we now argue that this will also require a shift in the dominant thread of cyberlaw thinking that echoes the refrain that "code is law." In brief, "code as law" stands in for the idea that code does the work of law; code, like the law, is a modality for regulating and mediating human behavior [22,29]. As Grimmelmann [22] summarizes in more detail, "code is law" captures the idea that "software itself can be effectively regulated by major social institutions, such as businesses or governments. ...
Preprint
Full-text available
Legal literature on machine learning (ML) tends to focus on harms, and as a result tends to reason about individual model outcomes and summary error rates. This focus on model-level outcomes and errors has masked important aspects of ML that are rooted in its inherent non-determinism. We show that the effects of non-determinism, and consequently its implications for the law, instead become clearer from the perspective of reasoning about ML outputs as probability distributions over possible outcomes. This distributional viewpoint accounts for non-determinism by emphasizing the possible outcomes of ML. Importantly, this type of reasoning is not exclusive with current legal reasoning; it complements (and in fact can strengthen) analyses concerning individual, concrete outcomes for specific automated decisions. By clarifying the important role of non-determinism, we demonstrate that ML code falls outside of the cyberlaw frame of treating "code as law," as this frame assumes that code is deterministic. We conclude with a brief discussion of what work ML can do to constrain the potentially harm-inducing effects of non-determinism, and we clarify where the law must do work to bridge the gap between its current individual-outcome focus and the distributional approach that we recommend.
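The distributional viewpoint argued for above can be made concrete with a small experiment: retraining the same model class on the same data under different random seeds and inspecting how a single individual's prediction varies. The following is a minimal sketch using scikit-learn on synthetic data; the model, seeds, and threshold are illustrative assumptions, not details taken from the cited preprint.

```python
# Minimal sketch: non-determinism as a distribution over outcomes.
# Retrain the same architecture under different random seeds and look at
# the spread of predictions for a single, fixed individual.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
applicant = X[0:1]  # one fixed decision-subject

scores = []
for seed in range(20):
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                        random_state=seed)  # seed changes init and shuffling
    clf.fit(X[1:], y[1:])
    scores.append(clf.predict_proba(applicant)[0, 1])

scores = np.array(scores)
print(f"mean P(positive) = {scores.mean():.3f}, std = {scores.std():.3f}")
print("decisions at threshold 0.5:", (scores >= 0.5).astype(int))
```

When the per-seed decisions disagree at a fixed threshold, "the" model's accuracy for that individual is exactly the fuzzy quantity the excerpt describes; the distribution of scores is the more informative object to reason about.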
... There is a new challenge to firm legitimacy-namely, the increasing use of algorithms and computers to make decisions that directly impact people's lives (Citron & Pasquale, 2014). Businesses use algorithmic decision-making (ADM) systems to make hiring, firing, and promotion decisions (Ajunwa, 2020). ...
... Businesses use algorithmic decision-making (ADM) systems to make hiring, firing, and promotion decisions (Ajunwa, 2020). Banks assess loan eligibility and credit risk algorithmically (Citron & Pasquale, 2014). Social media platforms rely on ADM to moderate and curate content (Vincent, 2020;Gillespie, 2019). ...
Article
Full-text available
Firms use algorithms to make important business decisions. To date, the algorithmic accountability literature has elided a fundamentally empirical question important to business ethics and management: Under what circumstances, if any, are algorithmic decision-making systems considered legitimate? The present study begins to answer this question. Using factorial vignette survey methodology, we explore the impact of decision importance, governance, outcomes, and data inputs on perceptions of the legitimacy of algorithmic decisions made by firms. We find that many of the procedural governance mechanisms in practice today, such as notices and impact statements, do not lead to algorithmic decisions being perceived as more legitimate in general, and, consistent with legitimacy theory, that algorithmic decisions with good outcomes are perceived as more legitimate than those with bad outcomes. Yet, robust governance, such as offering an appeal process, can create a legitimacy dividend for decisions with bad outcomes. However, when arbitrary or morally dubious factors are used to make decisions, most legitimacy dividends are erased. In other words, companies cannot overcome the legitimacy penalty of using arbitrary or morally dubious factors, such as race or the day of the week, with a good outcome or an appeal process for individuals. These findings add new perspectives to both the literature on legitimacy and policy discussions on algorithmic decision-making in firms.
... In "algorithmic governance," deference to automated decision-making makes algorithms "a source and factor of social order" (Just and Latzer 2017, 246). A vast interdisciplinary literature has emerged to study the consequences of algorithmic governance for social control (Amoore 2020;Andrejevic 2020;Benjamin 2019;Bucher 2018;Citron and Pasquale 2014;Crawford 2021; Crawford and Schultz 2019; Danaher et al. 2017;Noble 2018;Pasquale 2015;Yeung 2018;Zuboff 2019). International relations scholarship has identified artificial intelligence (AI; Dafoe 2018), internet governance (DeNardis 2014), privacy regulation (Farrell and Newman 2019;Wong 2020), and technology platforms (Atal 2020;Gorwa 2019) as important research areas. ...
... Unsupervised machine-learning algorithms are "not able to tell programmers exactly why they produce the outputs they do" (Danaher 2016, 255). In AI systems, there are distinctions between "human-in-the-loop" with full human command, "human-on-the-loop" with possible human override, and "human-out-of-the-loop" with no human oversight (Citron and Pasquale 2014). These distinctions collapse as more deference to algorithms leads to "systems that are far more complex and outside the upper limits of human reason" (Danaher 2016, 253). ...
Article
Full-text available
Big technology companies like Facebook, Google, and Amazon amass global power through classification algorithms. These algorithms use unsupervised and semi-supervised machine learning on massive databases to detect objects, such as faces, and to process texts, such as speech, to model predictions for commercial and political purposes. Such governance by algorithms—or “algorithmic governance”—has received critical scrutiny from a vast interdisciplinary scholarship that points to algorithmic harms related to mass surveillance, information pollution, behavioral herding, bias, and discrimination. Big Tech’s algorithmic governance implicates core IR research in two ways: (1) it creates new private authorities as corporations control critical bottlenecks of knowledge, connection, and desire; and (2) it mediates the scope of state–corporate relations as states become dependent on Big Tech, Big Tech circumvents state overreach, and states curtail Big Tech. As such, IR scholars should become more involved in the global research on algorithmic governance.
... While the business benefits are apparent, it is perhaps unsurprising that the use of algorithms in Human Resource Management (HRM) operations, processes, and practices (Cheng & Hackett, 2021) has also come under increasing scrutiny by critical studies within the field of HRM (e.g., Hmoud & Laszlo, 2019;Leicht-Deobald et al., 2019;Ong, 2019). Critical research has not only considered the lack of regulatory measures (Ajunwa, 2020) or "good" employment data (Citron & Pasquale, 2014), but also the implications of algorithmic decision-making on employee control, surveillance, ethics, and discrimination, raising questions around the governance of such phenomena (Ajunwa, 2020;Mittelstadt et al., 2016;Parry et al., 2016). Moreover, it has highlighted how human biases can be inscribed into the code of the HRM algorithms embedding and sustaining inequalities while assuming a veneer of objectivity (Raghavan et al., 2020). ...
... Algorithms tend to segregate individuals into groups, drawing conclusions about how groups behave differently (Citron & Pasquale, 2014) and their common characteristics: an action that perpetuates stereotypes (Crawford, 2013). Hiring algorithms, for example, are based on the traits that differentiate high from low performers within a company (Hart, 2005), outcomes subsequently used to recommend certain applicants in hiring and promotion decisions. ...
Article
Full-text available
Human Resource (HR) algorithms are now widely used for decision making in the field of HR. In this paper, we examine how biases may become entrenched in HR algorithms, which are often designed without consultation with HR specialists, assumed to operate with scientific objectivity, and often viewed as instruments beyond scrutiny. Using three orienting concepts (scientism, illusio, and rationales), we demonstrate why and how biases of HR algorithms go unchecked and in turn may perpetuate the biases in HR systems and consequent HR decisions. Based on a narrative review, we examine bias in HR algorithms and provide a methodology for algorithmic hygiene for HR professionals.
... It was also facilitated by the so-called social indicators movement aiming to measure social values using numbers and data. Compared to the first wave, private corporations and markets played increasingly important roles in transforming social life into metrics and data (Citron and Pasquale, 2014). Credit scoring systems (e.g., FICO) and constant surveillance were developed by private sectors for evaluating whether people would default on their debts and for building "the scored society" (Citron and Pasquale, 2014;Fourcade and Healy, 2017). ...
... Compared to the first wave, private corporations and markets played increasingly important roles in transforming social life into metrics and data (Citron and Pasquale, 2014). Credit scoring systems (e.g., FICO) and constant surveillance were developed by private sectors for evaluating whether people would default on their debts and for building "the scored society" (Citron and Pasquale, 2014; Fourcade and Healy, 2017). While the first wave of quantification aimed to count people at the population level, the second wave focused on profiling and measurement at the individual level. ...
Article
Full-text available
This article examines citizen scoring in China's Social Credit Systems (SCSs). Focusing on 50 municipal cases that potentially cover a population of 210 million, we analyze how state actors quantify social and economic life into measurable and comparable metrics and discuss the implications of SCSs through the lens of social quantification. Our results illustrate that the SCSs are envisioned and designed as social quantification practices including two facets: a normative apparatus encouraging “good” citizens and social morality, and a regulative apparatus disciplining “deviant” behaviors and enforcing social management. We argue that the SCSs illustrate the significant shift in which state actors increasingly become data processors whereas citizens are reconfigured as datafied subjects that can be measured, compared, and governed. We suggest that the SCSs function as infrastructures of social quantification for enforcing social management, constructing differences, and nudging people towards desired behaviors defined by the state.
... Classification accuracy can no longer be the sole criterion for task allocation. Indeed, several recent studies have examined possible human-machine configurations that take into account additional considerations, such as organizational implications [31], the degree of automation versus augmentation [48], the contingencies of configurations based on parameters like task complexity or ambiguity [58], and ethical issues such as discretion [15]. Keeping the human in the loop can therefore be argued on grounds other than relative advantages in task performance. ...
Article
Full-text available
The need for advanced automation and artificial intelligence (AI) in various fields, including text classification, has dramatically increased in the last decade, leaving us critically dependent on their performance and reliability. Yet, as we increasingly rely more on AI applications, their algorithms are becoming more nuanced, more complex, and less understandable precisely at a time we need to understand them better and trust them to perform as expected. Text classification in the medical and cybersecurity domains is a good example of a task where we may wish to keep the human in the loop. Human experts lack the capacity to deal with the high volume and velocity of data that needs to be classified, and ML techniques are often unexplainable and lack the ability to capture the required context needed to make the right decision and take action. We propose a new abstract configuration of Human-Machine Learning (HML) that focuses on reciprocal learning, where the human and the AI are collaborating partners. We employ design-science research (DSR) to learn and design an application of the HML configuration, which incorporates software to support combining human and artificial intelligences. We define the HML configuration by its conceptual components and their function. We then describe the development of a system called Fusion that supports human-machine reciprocal learning. Using two case studies of text classification from the cyber domain, we evaluate Fusion and the proposed HML approach, demonstrating benefits and challenges. Our results show a clear ability of domain experts to improve the ML classification performance over time, while both human and machine, collaboratively, develop their conceptualization, i.e., their knowledge of classification. We generalize our insights from the DSR process as actionable principles for researchers and designers of 'human in the learning loop' systems. We conclude the paper by discussing HML configurations and the challenge of capturing and representing knowledge gained jointly by human and machine, an area we feel has great potential.
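Fusion itself is not documented in this listing, but the reciprocal-learning pattern it illustrates, in which a domain expert labels the items the classifier is least confident about and the classifier is refit on the accumulated feedback, can be sketched in a few lines. The data, labels, and the stand-in "expert" below are illustrative assumptions, not the system described in the abstract.

```python
# Minimal human-in-the-loop sketch: an expert reviews the model's least
# confident predictions, corrects them, and the classifier is refit.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled = [("reset your password via the portal", "benign"),
           ("invoice attached, open the exe to view", "malicious"),
           ("meeting moved to 3pm", "benign"),
           ("your account is locked, send credentials", "malicious")]
unlabeled = ["click this link to claim your prize",
             "lunch order for friday",
             "urgent: wire transfer needed today"]

texts, labels = zip(*labeled)
vec = TfidfVectorizer()
clf = LogisticRegression()

for round_ in range(3):  # a few reciprocal-learning rounds
    X = vec.fit_transform(texts)
    clf.fit(X, labels)
    if not unlabeled:
        break
    probs = clf.predict_proba(vec.transform(unlabeled))
    i = int(np.argmin(probs.max(axis=1)))    # least confident item
    doc = unlabeled.pop(i)
    # Stand-in for the expert's judgement; in a real system this is a person.
    expert_label = "malicious" if "wire" in doc or "prize" in doc else "benign"
    texts, labels = texts + (doc,), labels + (expert_label,)

print(dict(zip(texts[-3:], labels[-3:])))
```

In a full reciprocal-learning setup the human would also refine the label scheme itself over time, which is the joint conceptualization the abstract highlights; the sketch only shows the feedback loop on labels.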
... Similarly, the complex, often opaque, nature of ADMS may hinder the possibility of linking the outcome of an action to its causes (Oxborough et al. 2018). For example, the structures that enable learning in neural networks, including the use of hidden layers, contribute to technical opacity that may undermine the attribution of accountability for the action of ADMS (Citron and Pasquale 2014). While it should be noted that opacity can also be a result of intentional corporate or state secrecy (Burrell 2016), our main concern here relates to inherent technical complexity. ...
Preprint
Full-text available
Important decisions that impact human lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of automation. In this article, we consider the feasibility and efficacy of ethics-based auditing (EBA) as a governance mechanism that allows organisations to validate claims made about their ADMS. Building on previous work, we define EBA as a structured process whereby an entity's present or past behaviour is assessed for consistency with relevant principles or norms. We then offer three contributions to the existing literature. First, we provide a theoretical explanation of how EBA can contribute to good governance by promoting procedural regularity and transparency. Second, we propose seven criteria for how to design and implement EBA procedures successfully. Third, we identify and discuss the conceptual, technical, social, economic, organisational, and institutional constraints associated with EBA. We conclude that EBA should be considered an integral component of multifaced approaches to managing the ethical risks posed by ADMS.
... However, AI blurs the epistemic distinctions between causation and correlation, with some arguing that a narrative of causality is no longer even needed for explanation (Raji et al. 2020; Latour 2002, 2010). In economic contexts, predictive algorithms used in high-speed trading shape markets rather than simply predict what will happen in the future (Carrion 2013; Citron and Pasquale 2014) and enable high-speed traders to gain a technological advantage discontinuous with market productivity. Prediction, in this epistemological reasoning, works to shape reality as a form of self-fulfilling prophecy. ...
Article
Full-text available
Given the complexity of teams involved in creating AI-based systems, how can we understand who should be held accountable when they fail? This paper reports findings about accountable AI from 26 interviews conducted with stakeholders in AI drawn from the fields of AI research, law, and policy. Participants described the challenges presented by the distributed nature of how AI systems are designed, developed, deployed, and regulated. This distribution of agency, alongside existing mechanisms of accountability, responsibility, and liability, creates barriers for effective accountable design. As agency is distributed across the socio-technical landscape of an AI system, users without deep knowledge of the operation of these systems become disempowered, unable to challenge or contest when it impacts their lives. In this context, accountability becomes a matter of building systems that can be challenged, interrogated, and, most importantly, adjusted in use to accommodate counter-intuitive results and unpredictable impacts. Thus, accountable system design can work to reconfigure socio-technical landscapes to protect the users of AI and to prevent unjust apportionment of risk.
... It is generally agreed that algorithms should rarely replace the human role completely but should instead be used to enhance people's decision-making (Citron & Pasquale, 2014;Green & Chen, 2020), and free up their valuable time for more thorough assessment of complex cases (Raghu et al., 2019), or creative work (Diakopoulos, 2019). Allowing algorithmic decision-making support systems to function without human oversight can lead to discrimination and perpetuation of biases. ...
Article
Full-text available
Algorithmic decision support systems are widely applied in domains ranging from healthcare to journalism. To ensure that these systems are fair and accountable, it is essential that humans can maintain meaningful agency, understand and oversee algorithmic processes. Explainability is often seen as a promising mechanism for enabling human-in-the-loop, however, current approaches are ineffective and can lead to various biases. We argue that explainability should be tailored to support naturalistic decision-making and sensemaking strategies employed by domain experts and novices. Based on cognitive psychology and human factors literature review we map potential decision-making strategies dependent on expertise, risk and time dynamics and propose the conceptual Expertise, Risk and Time Explainability Framework, intended to be used as explainability design guidelines. Finally, we present a worked example in journalism to illustrate the applicability of our framework in practice.
... One criticism is that scores derived from big data are not adequately protected by existing laws, and that misclassification of applicants needs to be prevented to avoid stigmatization. Regulations are necessary for these novel credit scoring systems to perform objective and accurate credit risk assessments (Citron & Pasquale, 2014). King and Forder (2016) also drew attention to "lack of transparency" as an important privacy concern. ...
Article
Full-text available
This study aims to reveal the predictors of individuals’ financial behavior associated with credit default for accurate and reliable credit risk assessment. Within the scope of credit use research, a systematic review of 108 studies was performed. Among the reviewed studies, a fair number have analyzed the determinants of default and delinquency. A remarkable number has examined the factors affecting outstanding and problematic debt levels, and some have investigated the financial behavior in terms of responsibility, debt repayment, and credit misuse. A wide range of socioeconomic, demographic, psychological, situational, and behavioral factors was explored, and their role in predicting the investigated outcome domain at various time-points was analyzed. The main analysis techniques and mix of predictors in papers also differed based on different time periods. While the synthesis of findings revealed some strong and consistent predictors for each outcome variable, mixed results were obtained for some factors. Additionally, a cluster of new practices that includes a wide range of alternative factors to improve prediction accuracies were uncovered. Study findings revealed a paradigm shift regarding the use of non-traditional data sources, especially big data, and novel techniques.
... When high-stakes decisions are automated or informed by data-driven algorithms, decision-subjects will strategically modify their observable features in a way which they believe will maximize their chances of achieving better outcomes [14,6]. Often in such settings, the decision-subject has a set of actions/interventions available to them. ...
Preprint
Full-text available
When subjected to automated decision-making, decision-subjects will strategically modify their observable features in ways they believe will maximize their chances of receiving a desirable outcome. In many situations, the underlying predictive model is deliberately kept secret to avoid gaming and maintain competitive advantage. This opacity forces the decision subjects to rely on incomplete information when making strategic feature modifications. We capture such settings as a game of Bayesian persuasion, in which the decision-maker sends a signal, e.g., an action recommendation, to a decision subject to incentivize them to take desirable actions. We formulate the decision-maker's problem of finding the optimal Bayesian incentive-compatible (BIC) action recommendation policy as an optimization problem and characterize the solution via a linear program. Through this characterization, we observe that while the problem of finding the optimal BIC recommendation policy can be simplified dramatically, the computational complexity of solving this linear program is closely tied to (1) the relative size of the decision-subjects' action space, and (2) the number of features utilized by the underlying predictive model. Finally, we provide bounds on the performance of the optimal BIC recommendation policy and show that it can lead to arbitrarily better outcomes compared to standard baselines.
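The linear-programming characterization mentioned above can be illustrated with a toy instance of Bayesian persuasion: the decision-maker chooses recommendation probabilities per state and maximizes expected utility subject to obedience (BIC) constraints. The states, actions, prior, and payoff matrices below are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch: an optimal Bayesian incentive-compatible (BIC) action
# recommendation policy computed as a linear program with scipy.
import numpy as np
from scipy.optimize import linprog

prior = np.array([0.6, 0.4])        # prior over hidden states
# u_dm[s, a]: decision-maker utility; u_ds[s, a]: decision-subject utility
u_dm = np.array([[1.0, 0.2, 0.0],
                 [0.0, 0.5, 1.0]])
u_ds = np.array([[0.8, 0.4, 0.1],
                 [0.1, 0.6, 0.9]])
nS, nA = u_dm.shape
idx = lambda s, a: s * nA + a       # flatten pi[s, a] into a vector

# Objective: maximize expected decision-maker utility (linprog minimizes).
c = np.zeros(nS * nA)
for s in range(nS):
    for a in range(nA):
        c[idx(s, a)] = -prior[s] * u_dm[s, a]

# Obedience constraints: following recommendation a is at least as good for
# the subject as deviating to any a', in expectation over states.
A_ub, b_ub = [], []
for a in range(nA):
    for a_dev in range(nA):
        if a_dev == a:
            continue
        row = np.zeros(nS * nA)
        for s in range(nS):
            row[idx(s, a)] = -prior[s] * (u_ds[s, a] - u_ds[s, a_dev])
        A_ub.append(row)
        b_ub.append(0.0)

# Each state's recommendations form a probability distribution.
A_eq, b_eq = [], []
for s in range(nS):
    row = np.zeros(nS * nA)
    row[idx(s, 0):idx(s, 0) + nA] = 1.0
    A_eq.append(row)
    b_eq.append(1.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, 1)] * (nS * nA), method="highs")
print("optimal recommendation policy pi(a|s):")
print(res.x.reshape(nS, nA).round(3))
```

Even in this toy form, the number of variables grows with the product of states and actions and the number of obedience constraints with the square of the action space, which mirrors the complexity observations in the abstract.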
... The use of algorithms in the fight against corruption may negatively impact fundamental rights such as data protection and privacy (Crawford & Schultz, 2014;Citron & Pasquale, 2014). ...
Chapter
Administrative transparency, or the condition of an administration made visible and understandable in its processes, is a tool for preventing corruption under two profiles, each connected to the other. The first concerns transparency as a natural antidote to the condition of opacity, a concealment in which corrupt transactions occur: corrupt exchange is hidden, because it is concerned with concealing the deviated/illegal interest that it aims to satisfy and the associated diversion/waste of funds. Secondly, public monitoring ensured by transparency acts as a deterrent to such behavior. This essay is devoted to exploring the way by which transparency enabled by digitalization may prevent and detect corruption. The essay is structured as follows: the first paragraph explores the different paths to transparency enabled by digitalization; the second paragraph offers a special focus on means enabled by open data, big data and machine learning to detect and prevent corruption; the final paragraph analyzes those anticorruption tools in context, highlighting critical success factors and barriers.
... Applications like facial recognition systems, smart surveillance systems, selfdriving cars, and numerous other services that we use every day have benefited from the recent advances in machine learning technologies. Several business processes are being automated [1] [2] and such automation has resulted in a reduction in cost and processing time and has improved quality. ...
... Xiang and Raji note that the term "procedural fairness" as described in the fair machine learning literature is a narrow and misguided view of what procedural fairness means through a legal lens [50]. Procedural justice aims to arrive at a just outcome through an iterative process as well as through a close examination of the set of governing laws in place that guide the decision-maker to a specific decision [50,65]. They posit that the overall goal of procedural fairness in machine learning should be re-aligned with the aim of procedural justice by instead analyzing the system surrounding the algorithm, as well as its use, rather than simply looking at it from the specifics of the algorithm itself. ...
Preprint
Full-text available
Over the past several years, a slew of different methods to measure the fairness of a machine learning model have been proposed. However, despite the growing number of publications and implementations, there is still a critical lack of literature that explains the interplay of fair machine learning with the social sciences of philosophy, sociology, and law. We hope to remedy this issue by accumulating and expounding upon the thoughts and discussions of fair machine learning produced by both social and formal (specifically machine learning and statistics) sciences in this field guide. Specifically, in addition to giving the mathematical and algorithmic backgrounds of several popular statistical and causal-based fair machine learning methods, we explain the underlying philosophical and legal thoughts that support them. Further, we explore several criticisms of the current approaches to fair machine learning from sociological and philosophical viewpoints. It is our hope that this field guide will help fair machine learning practitioners better understand how their algorithms align with important humanistic values (such as fairness) and how we can, as a field, design methods and metrics to better serve oppressed and marginalized populaces.
... Social harm scholars have argued that many significant harms are not recognised as crimes and blameworthy harms in capitalist societies (Agnew 2011: 38) and remain normalised consequences of existing patterns of social organisation and modes of production (Boukli and Kotzé 2018; Hillyard and Tombs 2004). Critical algorithm studies, in turn, show that algorithmic technologies exacerbate societal biases and discrimination (e.g., Angwin et al. 2016; Barocas and Selbst 2016; Eubanks 2017; Sandvig et al. 2016), violate fundamental rights (Citron and Pasquale 2014; Todolí-Signes 2019), increase inequality and destabilise our political environments (Gillespie and Seaver 2016; Tufekci 2015). Some scholars have also argued that algorithmic technologies have triggered changes in capitalism and its mode of accumulation (Suarez-Villa 2009; Zuboff 2019) and drive uneven participation in social life, posing a threat to social justice (Cinnamon 2017). ...
Article
Full-text available
Growing evidence suggests that the affordances of algorithms can reproduce socially embedded bias and discrimination, increase the information asymmetry and power imbalances in socio‑economic relations. We conceptualise these affordances in the context of socially mediated mass harms. We argue that algorithmic technologies may not alter what harms arise but, instead, affect harms qualitatively—that is, how and to what extent they emerge and on whom they fall. Using the example of three well-documented cases of algorithmic failures, we integrate the concerns identified in critical algorithm studies with the literature on social harm and zemiology. Reorienting the focus from socio‑economic to socio-econo-technological structures, we illustrate how algorithmic technologies transform the dynamics of social harm production on macro and meso levels by: (1) systematising bias and inequality; (2) accelerating harm propagation on an unprecedented scale; and (3) blurring the perception of harms.
... The complexity in unpacking the ADM process may necessitate increased transparency by employers that use it [148][149][150]. Burdon calls for reformulating information privacy laws to regulate the consequences of ubiquitous autonomous data collection [151]. ...
Article
Full-text available
Purpose: This article examines ways COVID-19 health surveillance and algorithmic decision-making (“ADM”) are creating and exacerbating workplace inequalities that impact post-treatment cancer survivors. Cancer survivors’ ability to exercise their right to work often is limited by prejudice and health concerns. While cancer survivors can ostensibly elect not to disclose to their employers when they are receiving treatments or if they have a history of treatment, the use of ADM increases the chances that employers will learn of their situation regardless of their preferences. Moreover, absent significant change, inequalities may persist or even expand.
Methods: We analyze how COVID-19 health surveillance is creating an unprecedented amount of health data on all people. These data are increasingly collected and used by employers as part of COVID-19 regulatory interventions.
Results: The increase in data, combined with the health and economic crisis, means algorithm-driven health inequalities will be experienced by a larger percentage of the population. Post-treatment cancer survivors, as for people with disabilities generally, are at greater risk of experiencing negative outcomes from algorithmic health discrimination.
Conclusions: Updated and revised workplace policy and practice requirements, as well as collaboration across impacted groups, are critical in helping to control the inequalities that flow from the interaction between COVID-19, ADM, and the experience of cancer survivorship in the workplace.
Implications for Cancer Survivors: The interaction among COVID-19, health surveillance, and ADM increases exposure to algorithmic health discrimination in the workplace.
... Automated decision-making systems must be accountable and meet basic standards of justice. Individuals must be guaranteed a means of challenging adverse decisions based on incorrect categorizations (Barocas and Selbst, 2016; Citron and Pasquale, 2014; Kroll et al., 2017; Balkin, 2017). In a similar vein, the implementation of a «Habeas Data» entails procedural guarantees to make effective the individual's right to know and control the information concerning them that is processed in databases. ...
Article
Full-text available
This article reviews: Carissa Véliz, Privacy is power: why and how you should take back control of your data (2020), Bantam Press, London, 268 pp.
... The first step is predictive scoring, which consists in assigning a score to an entity. The score expresses the likelihood that the entity has the predicted property (see Citron and Pasquale 2014). Depending on the domain, different target properties can be predicted. ...
Article
Full-text available
Machine learning classifiers are increasingly used to inform, or even make, decisions significantly affecting human lives. Fairness concerns have spawned a number of contributions aimed at both identifying and addressing unfairness in algorithmic decision-making. This paper critically discusses the adoption of group-parity criteria (e.g., demographic parity, equality of opportunity, treatment equality) as fairness standards. To this end, we evaluate the use of machine learning methods relative to different steps of the decision-making process: assigning a predictive score, linking a classification to the score, and adopting decisions based on the classification. Throughout our inquiry we use the COMPAS system, complemented by a radical simplification of it (our SAPMOC I and SAPMOC II models), as our running examples. Through these examples, we show how a system that is equally accurate for different groups may fail to comply with group-parity standards, owing to different base rates in the population. We discuss the general properties of the statistics determining the satisfaction of group-parity criteria and levels of accuracy. Using the distinction between scoring, classifying, and deciding, we argue that equalisation of classifications/decisions between groups can be achieved through group-dependent thresholding. We discuss contexts in which this approach may be meaningful and useful in pursuing policy objectives. We claim that the implementation of group-parity standards should be left to competent human decision-makers, under appropriate scrutiny, since it involves discretionary value-based political choices. Accordingly, predictive systems should be designed in such a way that relevant policy goals can be transparently implemented. Our paper presents three main contributions: (1) it addresses a complex predictive system through the lens of simplified toy models; (2) it argues for selective policy interventions on the different steps of automated decision-making; (3) it points to the limited significance of statistical notions of fairness to achieve social goals.
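The scoring/classifying/deciding distinction and the effect of unequal base rates can be reproduced numerically. The sketch below uses synthetic scores that are equally informative for two groups, applies a single common threshold, and then group-dependent thresholds chosen to equalize selection rates; the data and thresholds are illustrative assumptions, not the SAPMOC models.

```python
# Minimal sketch: equally informative scores, different base rates per group,
# and group-dependent thresholds that equalise selection rates.
import numpy as np

rng = np.random.default_rng(0)
n = 10000
group = rng.integers(0, 2, n)                 # two groups, 0 and 1
base_rate = np.where(group == 0, 0.2, 0.5)    # different base rates
y = rng.binomial(1, base_rate)                # true outcome
score = 0.3 * y + 0.7 * rng.random(n)         # equally informative score

def report(decision, label):
    for g in (0, 1):
        m = group == g
        sel = decision[m].mean()              # selection rate
        tpr = decision[m & (y == 1)].mean()   # true positive rate
        print(f"{label} group {g}: selection={sel:.2f}, TPR={tpr:.2f}")

# Single threshold: classifications track each group's base rate.
report(score >= 0.5, "common threshold ")

# Group-dependent thresholds: select the top 30% of each group
# (a demographic-parity style decision rule).
thr = {g: np.quantile(score[group == g], 0.7) for g in (0, 1)}
report(score >= np.where(group == 0, thr[0], thr[1]), "group thresholds")
```

With the common threshold the groups' selection rates differ even though the score is equally accurate for both; the group-dependent thresholds equalize selection rates at the cost of different cut-offs, which is exactly the value-laden choice the abstract argues should rest with accountable human decision-makers.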
... whether the individual has been regular with their payments or not). However, opening credit lines requires additional investment on the part of the individuals (Citron & Pasquale, 2014). Another example is the setting of college admissions, where the features are individuals' demographics, school academic records, extra-curricular records, and scores from standardized tests like the GRE. ...
Preprint
In real-world classification settings, individuals respond to classifier predictions by updating their features to increase their likelihood of receiving a particular (positive) decision (at a certain cost). Yet, when different demographic groups have different feature distributions or different cost functions, prior work has shown that individuals from minority groups often pay a higher cost to update their features. Fair classification aims to address such classifier performance disparities by constraining the classifiers to satisfy statistical fairness properties. However, we show that standard fairness constraints do not guarantee that the constrained classifier reduces the disparity in strategic manipulation cost. To address such biases in strategic settings and provide equal opportunities for strategic manipulation, we propose a constrained optimization framework that constructs classifiers that lower the strategic manipulation cost for the minority groups. We develop our framework by studying theoretical connections between group-specific strategic cost disparity and standard selection rate fairness metrics (e.g., statistical rate and true positive rate). Empirically, we show the efficacy of this approach over multiple real-world datasets.
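A toy example helps make group-specific strategic manipulation cost concrete: for a linear score, the cheapest way for a rejected individual to flip the decision is to move to the decision boundary along the score's gradient, and the cost is that distance scaled by a group-specific per-unit cost. Everything below (feature distributions, weights, cost factors) is an illustrative assumption, not the paper's setup.

```python
# Minimal sketch: cost of strategic feature manipulation by group for a
# linear classifier  score(x) = w.x,  decision = score >= tau.
import numpy as np

rng = np.random.default_rng(1)
w, tau = np.array([1.0, 0.5]), 1.0

# Group A sits closer to the boundary and pays less per unit of change.
xA = rng.normal(loc=[0.8, 0.6], scale=0.3, size=(5000, 2))
xB = rng.normal(loc=[0.4, 0.4], scale=0.3, size=(5000, 2))
unit_cost = {"A": 1.0, "B": 1.5}   # e.g. opening a credit line is costlier for B

def manipulation_cost(x, c):
    score = x @ w
    rejected = score < tau
    # Minimal change to reach the boundary (L2 distance), times per-unit cost.
    dist = (tau - score[rejected]) / np.linalg.norm(w)
    return rejected.mean(), (c * dist).mean()

for name, x in (("A", xA), ("B", xB)):
    rej, cost = manipulation_cost(x, unit_cost[name])
    print(f"group {name}: rejection rate={rej:.2f}, mean cost to flip={cost:.2f}")
```

A constraint that only equalizes selection rates or true positive rates would not by itself shrink the gap in mean cost-to-flip between the groups, which is the disparity the proposed framework targets directly.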
... Studying procedural justice is important for two main reasons. First, procedural justice has been connected to the use of algorithms because introducing algorithms might undermine the legitimacy of processes (Bovens & Zouridis, 2002;Citron & Pasquale, 2014;Crawford & Schultz, 2014;Danaher, 2016;Parkin, 2011). This can have different reasons as procedural justice is essentially an umbrella term that relates to perceptions on accuracy, consistency, bias suppression, correctability, representativeness and ethics (Greenberg & Colquitt, 2005). ...
Thesis
Full-text available
The rise of behavioral public administration demonstrated that we can understand and change decision-making by using insights about heuristics. Heuristics are mental shortcuts that reduce complex tasks to simpler ones. Whereas earlier studies mainly focused on interventions such as nudges, scholars are now broadening their scope to include debiasing, and psychological theories beyond heuristics. Scholars are moreover shifting their attention away from citizen-focused interventions to public sector worker-oriented interventions, i.e. the very people who are expected to nudge society. This dissertation seeks to explore how behavioral sciences can facilitate understanding and support decision-making across the public sector. We present four studies that investigate a range of behavioral theories, practices, issues and public sector workers. This dissertation shows that when handling heuristics in the public sector, we need to take into account the institutional and situational settings, as well as differences between public sector workers. The results of this dissertation can be used by practitioners and academics to understand and support decision-making in public sector contexts.
... Even if people do not get the outcome they desire, being able to appeal a decision can provide a sense of closure. However, there is limited guidance around how appeal processes for algorithmic decisions could or should be designed [14,37,39]. There are numerous examples of poorly designed appeal processes, where people impacted by algorithmic decisions are unable to contest the decision. ...
... AI-based decisions now pervade our daily lives and offer a wide range of benefits. In many domains, including healthcare, criminal justice, job hiring, and education, automated predictions have a tangible impact on the final decision (Citron and Pasquale, 2014;Caruana et al., 2015;Barocas et al., 2019;Vyas et al., 2021). As research has shown, however, modern systems sometimes have unintentional and undesirable biases (e.g., Buolamwini and Gebru, 2018;Angwin et al., 2016;Pariser, 2011). ...
Preprint
Artificial intelligence and machine learning algorithms have become ubiquitous. Although they offer a wide range of benefits, their adoption in decision-critical fields is limited by their lack of interpretability, particularly with textual data. Moreover, with more data available than ever before, it has become increasingly important to explain automated predictions. Generally, users find it difficult to understand the underlying computational processes and interact with the models, especially when the models fail to generate the outcomes or explanations, or both, correctly. This problem highlights the growing need for users to better understand the models' inner workings and gain control over their actions. This dissertation focuses on two fundamental challenges of addressing this need. The first involves explanation generation: inferring high-quality explanations from text documents in a scalable and data-driven manner. The second challenge consists in making explanations actionable, and we refer to it as critiquing. This dissertation examines two important applications in natural language processing and recommendation tasks. Overall, we demonstrate that interpretability does not come at the cost of reduced performance in two consequential applications. Our framework is applicable to other fields as well. This dissertation presents an effective means of closing the gap between promise and practice in artificial intelligence.
... A.I.-based recommendation tools may also deprive consumers of access to services, such as the loans market. Such refusals are sometimes based on social bias reflected and aggravated by algorithmic bias [11]. ...
Article
Full-text available
The growing use of artificial intelligence (A.I.) algorithms in businesses raises regulators' concerns about consumer protection. While pricing and recommendation algorithms have undeniable consumer-friendly effects, they can also be detrimental to them through, for instance, the implementation of dark patterns. These correspond to algorithms aiming to alter consumers' freedom of choice or manipulate their decisions. While the latter is hardly new, A.I. offers significant possibilities for enhancing them, altering consumers' freedom of choice and manipulating their decisions. Consumer protection comes up against several pitfalls. Sanctioning manipulation is even more difficult because the damage may be diffuse and not easy to detect. Symmetrically, both ex-ante regulation and requirements for algorithmic transparency may be insufficient, if not counterproductive. On the one hand, possible solutions can be found in counter-algorithms that consumers can use. On the other hand, in the development of a compliance logic and, more particularly, in tools that allow companies to self-assess the risks induced by their algorithms. Such an approach echoes the one developed in corporate social and environmental responsibility. This contribution shows how self-regulatory and compliance schemes used in these areas can inspire regulatory schemes for addressing the ethical risks of restricting and manipulating consumer choice.
... For automated predictions such as credit scoring cf. Citron & Pasquale (2014); for a more general overview cf. his 2015 book (Pasquale, 2015). ...
Article
Full-text available
Algorithmic decision-making based on profiling may significantly affect people’s destinies. As a rule, however, explanations for such decisions are lacking. What are the chances for a “right to explanation” to be realized soon? After an exploration of the regulatory efforts that are currently pushing for such a right it is concluded that, at the moment, the GDPR stands out as the main force to be reckoned with. In cases of profiling, data subjects are granted the right to receive meaningful information about the functionality of the system in use; for fully automated profiling decisions even an explanation has to be given. However, the trade secrets and intellectual property rights (IPRs) involved must be respected as well. These conflicting rights must be balanced against each other; what will be the outcome? Looking back to 1995, when a similar kind of balancing had been decreed in Europe concerning the right of access (DPD), Wachter et al. (2017) find that according to judicial opinion only generalities of the algorithm had to be disclosed, not specific details. This hardly augurs well for a future right of access let alone to explanation. Thereupon the landscape of IPRs for machine learning (ML) is analysed. Spurred by new USPTO guidelines that clarify when inventions are eligible to be patented, the number of patent applications in the US related to ML in general, and to “predictive analytics” in particular, has soared since 2010—and Europe has followed. I conjecture that in such a climate of intensified protection of intellectual property, companies may legitimately claim that the more their application combines several ML assets that, in addition, are useful in multiple sectors, the more value is at stake when confronted with a call for explanation by data subjects. Consequently, the right to explanation may be severely crippled.
Article
Today, humans interact with automation frequently and in a variety of settings ranging from private to professional. Their behavior in these interactions has attracted considerable research interest across several fields, with sometimes little exchange among them and seemingly inconsistent findings. In this article, we review 138 experimental studies on how people interact with automated agents that can assume different roles. We synthesize the evidence, suggest ways to reconcile inconsistencies between studies and disciplines, and discuss organizational and societal implications. The reviewed studies show that people react to automated agents differently than they do to humans: In general, they behave more rationally, and seem less prone to emotional and social responses, though this may be mediated by the agents’ design. Task context, performance expectations and the distribution of decision authority between humans and automated agents are all factors that systematically impact the willingness to accept automated agents in decision-making - that is, humans seem willing to (over-)rely on algorithmic support, yet averse to fully ceding their decision authority. The impact of these behavioral regularities for the deliberation of the benefits and risks of automation in organizations and society is discussed.
Book
Full-text available
This collection explores the relevance of global trade law for data, big data and cross-border data flows. Contributing authors from different disciplines including law, economics and political science analyze developments at the World Trade Organization and in preferential trade venues by asking what future-oriented models for data governance are available and viable in the area of trade law and policy. The collection paints the broad picture of the interaction between digital technologies and trade regulation as well as provides in-depth analyses of issues critical to the data-driven economy, such as privacy and AI, and different countries’ perspectives.
Chapter
This paper not only introduces the development and application of AI, but also analyzes the demand for AI in maritime foreign-related emergency disposal and innovatively puts forward various application forms of intelligent auxiliary technology for such emergency disposal.
Article
Full-text available
This article aims to analyze three non-pharmacological measures adopted by the Brazilian public authorities to confront Covid-19: 1) the transfer of information by telecommunications operators about the movement of people; 2) the sharing of personal data for the implementation of remote care services by the Ministry of Health; 3) the sharing with IBGE of data on all customers of telecommunications companies. These measures raise questions about the effects on, and possible violation of, the right to personal data protection. The methodology applied is analytical. The problem is addressed through a bibliographic and documentary survey in three stages: we defend the right to personal data protection as a fundamental right; we describe the three measures under analysis; and we establish the relations of conditional precedence applicable to each of the measures. We conclude that measures 1 and 2 may be adopted provided that specific conditions are observed, and that measure 3 represents a violation of the right to privacy and undue surveillance.
Chapter
Mobile sensing applications exploit big data to measure and assess human-behavioural modelling. However, big data profiling and automated decision practices, albeit powerful and pioneering, are also highly unregulated and thereby unfair and intrusive. Their risk to privacy has indeed been identified as one of the biggest challenges faced by mobile computing applications. In this Chapter, we delve into the privacy risks arising from the ubiquitous mobile computing and sensing applications and, in particular, from the big data algorithmic processing which infers sensitive personal details such as people’s social behaviour or emotions. In this respect, we thoroughly discuss the risks of profiling which are further elaborated in the tax and financial context. We also explore strategies towards mitigating these privacy risks, and we investigate the extent to which the GDPR protects against these threats, especially against aggressive profiling and automated decision-making. To mitigate these risks, we explore implementation challenges and we introduce countermeasures in the context of financial privacy that can adhere to the privacy requirements of the GDPR.
Book
Full-text available
How do social practices and the prospects for democratic technology design change when robots, algorithms, simulations, or self-learning systems are included alongside citizens and the public and taken seriously as participants? The contributors to this volume examine the reconfiguration of responsibility and control, knowledge, claims to participation, and opportunities for cooperation in dealing with intelligent systems such as smart grids, service robots, route planners, financial-market algorithms, and other socio-digital arrangements. They show how these digital »newcomers« help change the scope for shaping democracy, inclusion, and sustainability and shift existing relations of power and force.
Article
Full-text available
Over recent years, the stakes and complexity of online content moderation have been steadily raised, swelling from concerns about personal conflict in smaller communities to worries about effects on public life and democracy. Because of the massive growth in online expressions, automated tools based on machine learning are increasingly used to moderate speech. While ‘design-based governance’ through complex algorithmic techniques has come under intense scrutiny, critical research covering algorithmic content moderation is still rare. To add to our understanding of concrete instances of machine moderation, this article examines Perspective API, a system for the automated detection of ‘toxicity’ developed and run by the Google unit Jigsaw that can be used by websites to help moderate their forums and comment sections. The article proceeds in four steps. First, we present our methodological strategy and the empirical materials we were able to draw on, including interviews, documentation, and GitHub repositories. We then summarize our findings along five axes to identify the various threads Perspective API brings together to deliver a working product. The third section discusses two conflicting organizational logics within the project, paying attention to both critique and what can be learned from the specific case at hand. We conclude by arguing that the opposition between ‘human’ and ‘machine’ in speech moderation obscures the many ways these two come together in concrete systems, and suggest that the way forward requires proactive engagement with the design of technologies as well as the institutions they are embedded in.
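For orientation, Perspective API is consumed as a REST endpoint that returns per-attribute scores such as TOXICITY for a submitted comment. The sketch below shows the general shape of such a request; the endpoint, field names, and response structure follow the public documentation as commonly described, but should be verified against the current docs, and the API key is a placeholder.

```python
# Minimal sketch of a Perspective API toxicity request (field names per the
# public docs as recalled; verify against the current documentation).
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
url = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")
payload = {
    "comment": {"text": "You are a wonderful person."},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}
resp = requests.post(url, json=payload, timeout=10)
resp.raise_for_status()
score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"toxicity score: {score:.3f}")  # probability-like value in [0, 1]
```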
Chapter
Full-text available
How do social practices and the prospects for democratic technology design change when robots, algorithms, simulations, or self-learning systems are included alongside citizens and the public and taken seriously as participants? The contributors to this volume examine the reconfiguration of responsibility and control, knowledge, claims to participation, and opportunities for cooperation in dealing with intelligent systems such as smart grids, service robots, route planners, financial-market algorithms, and other socio-digital arrangements. They show how these digital »newcomers« help change the scope for shaping democracy, inclusion, and sustainability and shift existing relations of power and force.
Article
Full-text available
Judicial proceedings will be technological or they will not exist at all. Moving past the debate about whether it is advisable to embrace technological progress does not mean accepting integrated positions at face value. The COVID-19 pandemic has accentuated even further, if that were possible, the miseries and dissonances of judicial proceedings in Spain. A slow justice system, paralyzed by the need to work remotely and lacking the resources to hold remote hearings with adequate guarantees, is not justice, and today the excuses run out faster than ever, since new levels of effectiveness, efficiency, and speed, with full respect for procedural guarantees, can be achieved by employing the newest and most modern technologies. This paper explores the possibilities of using artificial intelligence systems in one very specific area: the assessment of evidence. In this context, the hypothesis the article seeks to demonstrate is that human presence will remain necessary, although artificial intelligence systems could support and assist the judge's free assessment of the evidence and fill those gaps where human beings show their most erratic side; for example, in corroborations.
Article
Full-text available
Rapid technological advances in automation, algorithms, robots, and artificial intelligence (AI) entail risks, such as the loss of jobs, biases, and security threats, which raises questions about responsibility for the damage incurred by such risks and the development of solutions to ameliorate them. Because media play a central role in the representation and perception of technological risks and responsibility, this study explores the news coverage of automation. While previous research has focused on specific technologies, this study conducts a comprehensive analysis of the debate on automation, algorithms, robotics, and AI, including tonality, risks, and responsibility. The longitudinal media content analysis of three decades of Austrian news reports revealed that overall, the coverage increased, and it was optimistic in tone. However, algorithms were more frequently associated with risks and less positivity than other automation areas. Robotics received the most positive and the least risk-related coverage. Moreover, industry stakeholders were at the center of the responsibility network in the media discourse.
Article
Blockchain is a new general-purpose technology that poses significant challenges to policymaking, law, and society. Blockchain is even more distinctive than other transformative technologies, as it is by nature a global technology; moreover, it operates based on a set of rules and principles that have a law-like quality—the lex cryptographia. The global nature of blockchain has led to its adoption by international organizations such as the United Nations and the World Bank. However, the law-like nature of the technology makes some of its uses by international organizations questionable from an international law and foreign affairs perspective. In this light, the article examines the effectiveness and legitimacy of the use of blockchain for international policymaking.
Chapter
Full-text available
Explainable machine learning and uncertainty quantification have emerged as promising approaches to check the suitability and understand the decision process of a data-driven model, to learn new insights from data, but also to get more information about the quality of a specific observation. In particular, heatmapping techniques that indicate the sensitivity of image regions are routinely used in image analysis and interpretation. In this paper, we consider a landmark-based approach to generate heatmaps that help derive sensitivity and uncertainty information for an application in marine science to support the monitoring of whales. Single whale identification is important to monitor the migration of whales, to avoid double counting of individuals and to reach more accurate population estimates. Here, we specifically explore the use of fluke landmarks learned as attention maps for local feature extraction and without other supervision than the whale IDs. These individual fluke landmarks are then used jointly to predict the whale ID. With this model, we use several techniques to estimate the sensitivity and uncertainty as a function of the consensus level and stability of localisation among the landmarks. For our experiments, we use images of humpback whale flukes provided by the Kaggle Challenge “Humpback Whale Identification” and compare our results to those of a whale expert.
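The consensus-and-stability idea can be illustrated with a small numerical example: given each landmark's predicted location across several stochastic forward passes (or augmentations), the per-landmark spread gives a localisation-uncertainty estimate, and agreement among per-landmark identity votes gives a consensus level. The shapes and values below are illustrative assumptions, not the chapter's actual pipeline.

```python
# Minimal sketch: uncertainty from landmark localisation spread and a
# consensus level from per-landmark identity votes.
import numpy as np

rng = np.random.default_rng(2)
n_passes, n_landmarks = 8, 5

# Predicted (x, y) coordinates of each fluke landmark across stochastic passes.
coords = rng.normal(loc=[[100, 40], [180, 35], [140, 60], [90, 80], [190, 85]],
                    scale=[[1.5], [1.5], [6.0], [2.0], [2.5]],
                    size=(n_passes, n_landmarks, 2))
spread = coords.std(axis=0).mean(axis=1)   # per-landmark localisation spread

# Whale-ID votes produced from each landmark's local feature (toy values).
votes = np.array(["id_17", "id_17", "id_03", "id_17", "id_17"])
ids, counts = np.unique(votes, return_counts=True)
consensus = counts.max() / len(votes)

print("per-landmark spread (px):", spread.round(2))
print(f"majority ID = {ids[counts.argmax()]}, consensus = {consensus:.2f}")
print("least stable landmark:", int(spread.argmax()))
```

In this reading, a prediction would be flagged for expert review when the consensus level is low or when the landmarks that drive the identity decision are also the least stably localised.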
Article
Full-text available
In the digital economy, consumer vulnerability is not simply a vantage point from which to assess some consumers’ lack of ability to activate their awareness of persuasion. Instead, digital vulnerability describes a universal state of defencelessness and susceptibility to (the exploitation of) power imbalances that are the result of the increasing automation of commerce, datafied consumer–seller relations, and the very architecture of digital marketplaces. Digital vulnerability, we argue, is architectural, relational, and data-driven. Based on our concept of digital vulnerability, we demonstrate how and why using digital technology to render consumers vulnerable is the epitome of an unfair digital commercial practice.
Chapter
Automated decision support systems used in high-stakes decision processes are frequently controversial. The Online Compliance Intervention (hereafter "OCI" or "RoboDebt") is a compliance system implemented with the intention of automatically issuing statutory debt notices to individuals who received welfare payments exceeding their entitlement. The system appears to employ rudimentary data scraping and expert systems to determine whether notices should be validly issued. However, many individuals who received debt notices assert that they were issued in error. Commentary on the system has frequently conflated it with other system types and has led many to question the role of decision support systems in public administration, given the potentially deleterious impacts of such systems on the most vulnerable. The authors employ a taxonomy of Robotic Process Automation (RPA) issues to review the OCI and RPA more generally. This paper identifies potential problems of bias, inconsistency, procedural fairness, and overall systematic error. This research also considers a series of RoboDebt-specific issues regarding contractor arrangements and the potential impact of the system on Australia's Indigenous population. The authors offer a set of recommendations based on the observed challenges, emphasizing the importance of moderation, independent algorithmic audits, and ongoing reviews. Most notably, this paper emphasizes the need for greater transparency and a broadening of the criteria used to determine vulnerability to encompass temporal, geographic, and technological considerations.
Chapter
The quest to explain the output of artificial intelligence systems has clearly moved from a mere technical to a highly legally and politically relevant endeavor. In this paper, we provide an overview of legal obligations to explain AI and evaluate current policy proposals. In this, we distinguish between different functional varieties of AI explanations - such as multiple forms of enabling, technical and protective transparency - and show how different legal areas engage with and mandate such different types of explanations to varying degrees. Starting with the rights-enabling framework of the GDPR, we proceed to uncover technical and protective forms of explanations owed under contract, tort and banking law. Moreover, we discuss what the recent EU proposal for an Artificial Intelligence Act means for explainable AI, and review the proposal’s strengths and limitations in this respect. Finally, from a policy perspective, we advocate for moving beyond mere explainability towards a more encompassing framework for trustworthy and responsible AI that includes actionable explanations, values-in-design and co-design methodologies, interactions with algorithmic fairness, and quality benchmarking.
Article
Present-day securities trading is dominated by fully automated algorithms. These algorithmic systems are characterized by particular forms of knowledge risk (adverse effects relating to the use or absence of certain forms of knowledge) and principal-agent problems (goal conflicts and information asymmetries arising from the delegation of decision-making authority). Moreover, where automated trading systems used to be based on human-defined rules, increasingly, machine-learning (ML) techniques are being adopted to produce machine-generated strategies. Drawing on 213 interviews with market participants involved in automated trading, this study compares the forms of knowledge risk and principal-agent relations characterizing both human-defined and ML-based automated trading systems. It demonstrates that ML-based automated trading leads to a change in knowledge risks, particularly concerning dramatically changing market settings, and that it is characterized by a lack of insight into how and why trading rules are being produced by the ML systems. This not only intensifies but also reconfigures principal-agent problems in financial markets.
Preprint
Full-text available
Following on from the publication of its Feasibility Study in December 2020, the Council of Europe's Ad Hoc Committee on Artificial Intelligence (CAHAI) and its subgroups initiated efforts to formulate and draft its Possible Elements of a Legal Framework on Artificial Intelligence, based on the Council of Europe's standards on human rights, democracy, and the rule of law. This document was ultimately adopted by the CAHAI plenary in December 2021. To support this effort, The Alan Turing Institute undertook a programme of research that explored the governance processes and practical tools needed to operationalise the integration of human rights due diligence with the assurance of trustworthy AI innovation practices. The resulting framework was completed and submitted to the Council of Europe in September 2021. It presents an end-to-end approach to the assurance of AI project lifecycles that integrates context-based risk analysis and appropriate stakeholder engagement with comprehensive impact assessment, and transparent risk management, impact mitigation, and innovation assurance practices. Taken together, these interlocking processes constitute a Human Rights, Democracy and the Rule of Law Assurance Framework (HUDERAF). The HUDERAF combines the procedural requirements for principles-based human rights due diligence with the governance mechanisms needed to set up technical and socio-technical guardrails for responsible and trustworthy AI innovation practices. Its purpose is to provide an accessible and user-friendly set of mechanisms for facilitating compliance with a binding legal framework on artificial intelligence, based on the Council of Europe's standards on human rights, democracy, and the rule of law, and to ensure that AI innovation projects are carried out with appropriate levels of public accountability, transparency, and democratic governance.
Article
In this article I examine the structure of four deliberative models: epistemic democracy, epistocracy, dystopic algocracy, and utopian algocracy. Epistocracy and algocracy (which in its two versions is an extremization of epistocracy) represent a challenge to the alleged epistemic superiority of democracy: epistocracy for its emphasis on the role of experts; algocracy for its emphasis on technique as a cognitively and ethically superior tool. In the concluding remarks I will advance the thesis that these challenges can only be answered by emphasizing the value of citizens’ political participation, which can also represent both an increase in their cognitive abilities and a value for public ethics.
Article
This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain why this systemic exclusion is of moral concern and to offer a solution to address it.
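The systemic-exclusion point can be pictured with a small, purely hypothetical Python sketch (the features, weights, and thresholds below are invented for illustration and are not from the cited article): because every context scores the same underlying data with closely related models, rejections correlate, and an unlucky profile can be shut out of many opportunities at once.

import numpy as np

rng = np.random.default_rng(7)
people = [f"person_{i}" for i in range(6)]
# One shared feature vector per person: the "same data" reused everywhere.
features = {p: rng.normal(size=3) for p in people}

contexts = ["loan", "tenancy", "job screening", "insurance"]
# Each context trains its own model, but on the same data, so the learned
# weights end up close to a common direction (base) plus small variation.
base = rng.normal(size=3)
weights = {c: base + 0.2 * rng.normal(size=3) for c in contexts}

for p in people:
    rejections = [c for c in contexts if float(features[p] @ weights[c]) < 0.0]
    print(f"{p}: rejected in {len(rejections)} of {len(contexts)} contexts {rejections}")

Each individual decision may look tolerable in isolation, but the correlation of rejections across contexts is what produces the broad exclusion the article identifies as morally significant.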
Article
Full-text available
The garbage can theory of organizational choice is one of the best-known innovations in modern organization theory. It also has significantly shaped a major branch of the new institutionalism. Yet, the theory has not received the systematic assessment that it both deserves and needs. We evaluate the early verbal theory and argue that it fails to create an adequate foundation for scientific progress. We then analyze and rerun Cohen, March, and Olsen's computer model and discover that its agents move in lockstep patterns that are strikingly different from the spirit of the theory. Indeed, the simulation and the theory are incompatible. Next, we examine how the authors have built upon these incompatible formulations in developing the theory further. We assess this larger program, which includes the March-Olsen version of the new institutionalism, and find that many of the problems that attended the original article have intensified over time. We conclude that a fundamental overhaul is required if the theory is to realize its early promise.
Article
Full-text available
Starting from the objectively dominant position of the sociology of markets in economic sociology, this article suggests that markets have served as a privileged terrain for the development and application of general theoretical arguments about the shape of the social order. I offer a critical overview of the sociology of markets as it relates to our concepts of society, focusing on four main representations of what is sociologically important about markets: the social networks that sustain them, the systems of social positions that organize them, the institutionalization processes that stabilize them, and the performative techniques that bring them into existence. I then speculate about the possible future directions that such theorizing might take, calling in particular for a stronger contribution of the sociology of markets to the analysis of societies as moral orders.
Article
Can human behavior be predicted? A broad variety of governmental initiatives are using computerized processes to try. Vast datasets of personal information enhance the ability to engage in these ventures and the appetite to push them forward. Governments have a distinct interest in automated individualized predictions to foresee unlawful actions. Novel technological tools, especially data-mining applications, are making governmental predictions possible. The growing use of predictive practices is generating serious concerns regarding the lack of transparency. Although echoed across the policy, legal, and academic debate, the nature of transparency, in this context, is unclear. Transparency flows from different, even competing, rationales, as well as very different legal and philosophical backgrounds. This Article sets forth a unique and comprehensive conceptual framework for understanding the role transparency must play as a regulatory concept in the crucial and innovative realm of automated predictive modeling. Part II begins by briefly describing the predictive modeling process while focusing on initiatives carried out in the context of federal income tax collection and law enforcement. It then draws out the process's fundamental elements, while distinguishing between the roles of technology and humans. Recognizing these elements is crucial for understanding the importance and challenges of transparency. Part III moves to address the flow of information the prediction process generates. In doing so, it addresses various strategies to achieve transparency in this process - some addressed by law, while others are ignored. Along the way, the Article introduces a helpful taxonomy that will be relied upon throughout the analysis. It also establishes the need for an overall theoretical analysis and policy blueprint for transparency in prediction. Part IV shifts to a theoretical analysis seeking the sources of calls for transparency. Here, the analysis addresses transparency as a tool to enhance government efficiency, facilitate crowdsourcing, and promote both privacy and autonomy. Part V turns to examine counterarguments that call for limiting transparency. It explains how disclosure can undermine government policy and authority, as well as generate problematic stereotypes. After mapping out the justifications and counterclaims, Part VI moves to provide an innovative and unique policy framework for achieving transparency. It concludes, in Part VII, by explaining which concerns and risks of the predictive modeling process transparency cannot mitigate, and calling for other regulatory responses.
Article
One of the great ironies about information privacy law is that the primary regulation of privacy in the United States has barely been studied in a scholarly way. Since the late 1990s, the Federal Trade Commission (FTC) has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. Despite over fifteen years of FTC enforcement, there is no meaningful body of judicial decisions to show for it. The cases have nearly all resulted in settlement agreements. Nevertheless, companies look to these agreements to guide their privacy practices. Thus, in practice, FTC privacy jurisprudence has become the broadest and most influential regulating force on information privacy in the United States — more so than nearly any privacy statute or any common law tort. In this Article, we contend that the FTC’s privacy jurisprudence is functionally equivalent to a body of common law, and we examine it as such. We explore how and why the FTC, and not contract law, came to dominate the enforcement of privacy policies. A common view of the FTC’s privacy jurisprudence is that it is thin, merely focusing on enforcing privacy promises. In contrast, a deeper look at the principles that emerge from FTC privacy “common law” demonstrates that the FTC’s privacy jurisprudence is quite thick. The FTC has codified certain norms and best practices and has developed some baseline privacy protections. Standards have become so specific they resemble rules. We contend that the foundations exist to develop this “common law” into a robust privacy regulatory regime, one that focuses on consumer expectations of privacy, extends far beyond privacy policies, and involves a full suite of substantive rules that exist independently from a company’s privacy representations.
Conference Paper
One of the KM applications at X Company is the development of a Community of Practice (CoP). To support these strategies, X Company must have an activity framework to ensure that the knowledge management processes, known as the knowledge management cycle, are well implemented. This framework is built on the SECI method, taking into account the elements of the KM triad: people, process, and technology.
Article
We are at the cusp of a historic shift in our conceptions of the Fourth Amendment driven by dramatic advances in surveillance technology. Governments and their private sector agents continue to invest billions of dollars in massive data-mining projects, advanced analytics, fusion centers, and aerial drones, all without serious consideration of the constitutional issues that these technologies raise. In United States v. Jones, the Supreme Court signaled an end to its silent acquiescence in this expanding surveillance state. In that case, five justices signed concurring opinions defending a revolutionary proposition: that citizens have Fourth Amendment interests in substantial quantities of information about their public or shared activities, even if they lack a reasonable expectation of privacy in the constitutive particulars. This quantitative approach to the Fourth Amendment has since been the subject of hot debate on and off the courts. Among the most compelling challenges are questions about quantitative privacy’s constitutional pedigree, how it can be implemented in practice, and its doctrinal consequences. This Article takes up these challenges. The conversation after Jones has been dominated by proposals that seek to assess and protect quantitative privacy by focusing on the informational “mosaics” assembled by law enforcement officers in the course of their investigations. We think that this case-by-case approach both misunderstands the Fourth Amendment issues at stake and begets serious practical challenges. Drawing on lessons from information privacy law, we propose as an alternative that legislatures and courts acting in the shadow of Jones focus on the technologies. Under this technology-centered approach, any technology that is capable of facilitating broad programs of continuous and indiscriminate surveillance would be subject to Fourth Amendment regulation. This does not mean that government would be barred from using these technologies. Rather, it would require that the terms of their deployment and use reflect a reasonable balance between privacy concerns and law enforcement’s interests in preventing, detecting, and prosecuting crime. This Article offers concrete proposals for how legislatures and courts might strike this balance while providing the clear guidance and predictability that critics of the mosaic theory rightly demand.
Article
This paper examines how statistical credit-scoring technologies, sanctioned by the state in the interests of promoting equality, became applied by lenders to the problem of controlling levels of default within American consumer credit. However, these technologies, constituting consumers as 'risks', are themselves seen to be problematic, subject to their own conceived sets of methodological, procedural and temporal risks. Nevertheless, as this article will show, such technologies have increasingly been applied to other areas of consumer lending, thus interpreting a wider array of operational contingencies in terms of risk. Finally, it is argued that, since the 1980s, the constitution of credit consumers as risks has been deployed to new ends through technologies of 'profit scoring' and new practices of 'risk pricing'.
Article
During the first two decades of the Cold War, a new kind of academic figure became prominent in American public life: the credentialed social scientist or expert in the sciences of administration who was also, to use the parlance of the time, a “man of affairs.” Some were academic high-fliers conscripted into government roles in which their intellectual and organizational talents could be exploited. McGeorge Bundy, Walt Rostow, and Robert McNamara are the archetypes of such persons. An overlapping group of scholars became policymakers and political advisers on issues ranging from social welfare provision to nation-building in emerging postcolonial states. Many of these men—and almost without exception they were men—were also consummate operators within the patronage system that grew up around American universities after World War II. Postwar leaders of the social and administrative sciences such as Talcott Parsons and Herbert Simon were skilled scientific brokers of just this sort: good “committee men,” grant-getters, proponents of interdisciplinary inquiry, and institution-builders. This hard-nosed, suit-wearing, business-like persona was connected to new, technologically refined forms of social science. No longer sage-like social philosophers or hardscrabble, number-crunching empiricists, academic human scientists portrayed themselves as possessors of tools and programs designed for precision social engineering. Antediluvian “social science” was eschewed in favour of mathematical, behavioural, and systems-based approaches to “human relations” such as operations research, behavioral science, game theory, systems theory, and cognitive science.
Article
Should search engines be subject to the types of regulation now applied to personal data collectors, cable networks, or phone books? In this article, we make the case for some regulation of the ability of search engines to manipulate and structure their results. We demonstrate that the First Amendment, properly understood, does not prohibit such regulation. Nor will such interventions inevitably lead to the disclosure of important trade secrets. After setting forth normative foundations for evaluating search engine manipulation, we explain how neither market discipline nor technological advance is likely to stop it. Though savvy users and personalized search may constrain abusive companies to some extent, they have little chance of checking untoward behavior by the oligopolists who now dominate the search market. Against the trend of courts that would declare search results unregulable speech, this article makes a case for an ongoing conversation on search engine regulation.
Article
Internet service providers and search engines have mapped the web, accelerated e-commerce, and empowered new communities. They also pose new challenges for law. Individuals are rapidly losing the ability to affect their own image on the web - or even to know what data are presented about them. When web users attempt to find information or entertainment, they have little assurance that a carrier or search engine is not biasing the presentation of results in accordance with its own commercial interests. Technology's impact on privacy and democratic culture needs to be at the center of internet policymaking. Yet before they promulgate substantive rules, key administrators must genuinely understand new developments. While the Federal Trade Commission and the Federal Communications Commission in the U.S. have articulated principles of editorial integrity for search engines and net neutrality for carriers, they have not engaged in the monitoring necessary to enforce these guidelines. This article proposes institutions for "qualified transparency" within each Commission to fill this regulatory gap. Qualified transparency respects legitimate needs for confidentiality while promoting individuals' and companies' capacity to understand how their reputations - and the online world generally - are shaped by dominant intermediaries.
Article
Robotics and artificial intelligence hold enormous promise but raise a variety of ethical and legal concerns, including with respect to privacy. Robotics and artificial intelligence implicate privacy in at least three ways. First, they increase our capacity for surveillance. Second, they introduce new points of access to historically private spaces such as the home. Finally, they trigger hardwired social responses that can threaten several of the values privacy protects. Responding to the privacy implications of robotics and artificial intelligence is likely to require a combination of design, law, and education.
Article
The investment-fueled US mortgage market has traditionally been sustained by New Deal institutions called government sponsored enterprises (GSEs). Known as Freddie Mac and Fannie Mae, the GSEs once dominated mortgage backed securities underwriting. The recent subprime mortgage crisis has drawn attention to the fact that during the real estate boom, these agencies were temporarily overtaken by risk-tolerant channels of lending, securitization, and investment, driven by investment banks and private capital players. This research traces the movement of a specific brand of commercial consumer credit analytics into mortgage underwriting. It demonstrates that what might look like the spontaneous rise (and fall) of a 'free' market divested of direct government intervention has been thoroughly embedded in the concerted movement of calculative risk management technologies. The transformations began with a sequence of GSE decisions taken in the mid-1990s to implement a consumer risk score called a FICO® into automated underwriting systems. Having been endorsed by the GSEs, this scoring tool was gradually hardwired throughout the industry to become a distributed and collective 'market device'. As the paper will show, once modified by specific GSE interpretations the calculative properties generated by these credit bureau scores reconfigured mortgage finance into two parts: the conventional, risk-averse, GSE conforming 'prime' and an infrastructurally distinct, risk-avaricious, investment grade 'subprime'.
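The "hardwiring" of a bureau score into underwriting can be pictured with a short, hypothetical Python sketch. The cutoff values and channel labels below are illustrative assumptions, not the GSEs' actual rules; the point is only that a single number, fixed thresholds, and whatever errors the underlying bureau data contain end up deciding which market a borrower is routed into.

from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    score: int  # consumer credit bureau score

def route(app: Application, prime_cutoff: int = 660, caution_cutoff: int = 620) -> str:
    # Route an application on the score alone. Every simplification and every
    # data error already baked into the score is inherited by this decision.
    if app.score >= prime_cutoff:
        return "conforming 'prime' channel"
    if app.score >= caution_cutoff:
        return "manual review"
    return "'subprime' channel"

for app in (Application("A-1", 710), Application("A-2", 640), Application("A-3", 585)):
    print(app.applicant_id, "->", route(app))

Once many lenders adopt the same score and similar cutoffs, the routing behaves like the distributed 'market device' the paper describes, sorting borrowers industry-wide on the basis of one calculative artifact.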
Article
Pharmaceutical companies have long relied on direct marketing of their drugs to physicians through one-on-one meetings with sales representatives. This practice of “detailing” is substantial in its costs and its number of participants. Every year, pharmaceutical companies spend billions of dollars on millions of visits to physicians by tens of thousands of sales representatives. Critics have argued that drug detailing results in sub-optimal prescribing decisions by physicians, compromising patient health and driving up spending on medical care. In this view, physicians often are unduly influenced both by marketing presentations that do not accurately reflect evidence from the medical literature and by the gifts that sales representatives deliver in conjunction with their presentations.
Article
This essay warns of eroding accountability in computerized societies. It argues that assumptions about computing and features of situations in which computers are produced create barriers to accountability. Drawing on philosophical analyses of moral blame and responsibility, four barriers are identified: (1) the problem of many hands, (2) the problem of bugs, (3) blaming the computer, and (4) software ownership without liability. The paper concludes with ideas on how to reverse this trend.
If a builder has built a house for a man and has not made his work sound, and the house which he has built has fallen down and so caused the death of the householder, that builder shall be put to death. If it destroys property, he shall replace anything that it has destroyed; and, because he has not made sound the house which he has built and it has fallen down, he shall rebuild the house which has fallen down from his own property. If a builder has built a house for a man and does not make his wor...
Article
From an analysis of actual cases, three categories of bias in computer systems have been developed: preexisting, technical, and emergent. Preexisting bias has its roots in social institutions, practices, and attitudes. Technical bias arises from technical constraints or considerations. Emergent bias arises in a context of use. Although others have pointed to bias in particular computer systems and have noted the general problem, we know of no comparable work that examines this phenomenon comprehensively and which offers a framework for understanding and remedying it. We conclude by suggesting that freedom from bias should be counted among the select set of criteria - including reliability, accuracy, and efficiency - according to which the quality of systems in use in society should be judged.