Chapter

Wrongfully Accused by an Algorithm

... Indeed, IMs have tested the limits of these rights with alarming results: for example, natural language processing platforms such as ChatGPT imagining facts, unavoidable accidents involving self-driving cars, or the wrongful identification of suspects, to name a few [27,28,29]. These events have revealed a moral dilemma, one that remains unanswered by ethical-philosophical scholars, on whether we should incorporate ethical rules in IM functioning systems and outputs. ...
Article
Full-text available
Intelligent machines (IMs), which have demonstrated remarkable innovations over time, require adequate attention concerning the issue of their duty–rights split in our current society. Although we can remain optimistic about IMs’ societal role, we must still determine their legal-philosophical sense of accountability, as living data bits have begun to pervade our lives. At the heart of IMs are human characteristics used to self-optimize their practical abilities and broaden their societal impact. We used Kant’s philosophical requirements to investigate IMs’ moral dispositions, as the merging of humans with technology has overwhelmingly shaped psychological and corporeal agential capacities. In recognizing the continuous burden of human needs, important features regarding the inalienability of rights have increased the individuality of intelligent, nonliving beings, leading them to transition from questioning to defending their own rights. This issue has been recognized by paying attention to the rational capacities of humans and IMs, which have been connected in order to achieve a common goal. Through this teleological scheme, we formulate the concept of virtual dignity to determine the transition of inalienable rights from humans to machines, wherein the evolution of IMs is essentially imbued through consensuses and virtuous traits associated with human dignity.
... This is a real issue for people of colour who are thus more likely to be misidentified because the database from which their identity is inferred is less granular than that for white men, creating a greater likelihood of 'false positives'. A recent review of facial recognition systems in the USA found that they falsely identified African-American and Asian faces up to a hundred times more often than Caucasian faces (Hill 2022). It seems that "in the 21st century United States, Josef K. is black and is falsely accused by an algorithm" (Coeckelbergh 2022, p. 2). ...
Article
Full-text available
At the centenary of the death of Franz Kafka (1883–1924), this paper explores the complexities of Artificial Intelligence (AI) through the lens of Kafka’s literary and professional work, especially those relating to the dynamics of recognition and misrecognition. Through Kafkan eyes, both philosophical and technological hankerings after recognition and its connection with the notion of the ‘true self’ are thrown into sharp relief, whether this ‘truth’ is related to authenticity or to accuracy. This encourages us to challenge some of the core assumptions of our relationship with systems and tools, including (1) the taken-for-granted formula of recognition being good, misrecognition being bad; (2) the suggestion that aligning AI with human values will make it, and therefore us, safer and more secure; and (3) the assumption that the masters are in charge in the master/slave dialectic that is often used to express the relationship between humans and technologies. The paper references three of Kafka’s most famous works, The Trial, The Castle and In the Penal Colony, in ways that are accessible to those new to Kafka. More seasoned Kafka enthusiasts will be able to see and contextualise the paper’s themes and provocations within these works, and extrapolate to his other writings.
... For example, there are an increasing number of real-world cases where AI facial recognition tools have identified the wrong suspect in a crime investigation [42]. For instance, in 2020, Robert Williams was mistakenly identified as a suspect in the theft of watches from a store due to an AI system matching his image with security camera footage using a facial recognition database [43]. Despite police officers acknowledging during the interview that Williams bore little resemblance to the individual in the footage, and no other supporting evidence being present, he was detained until he could post bail. ...
Article
Full-text available
Coincidences are rare unexpected events that can fascinate, but their typical post hoc discovery and haphazard nature also have the potential to confuse and to confound correct scientific analysis. Mathematically, they can sometimes be modeled by birthday-problem collisions in datasets. In this paper, we take an expository approach to considering some of the issues involved in modeling of coincidences found from data searches and examine a double birthday problem that arises when multiple data sets are considered.
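For readers who have not met the birthday-problem framing mentioned in this abstract, a minimal worked version of the classic single-set collision probability (a simplification for illustration, not the paper's double birthday problem) is:

```latex
% Probability that at least two of n items, drawn uniformly from d equally
% likely categories, collide (classic birthday problem); the paper extends
% this style of calculation to collisions across two data sets.
P(\text{collision}) = 1 - \prod_{k=1}^{n-1}\left(1 - \frac{k}{d}\right)
                    \approx 1 - \exp\!\left(-\frac{n(n-1)}{2d}\right)
% Example: with d = 365 and n = 23, the probability already exceeds 1/2.
```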
... Notwithstanding encouraging claims and anecdotal successes, critics paint a less rosy view of FRT's application in policing. Some have expressed concerns about privacy infringements and unbridled mass surveillance due to the lack of transparency and oversight surrounding government FRT use [2]. Another concern, despite rapid advancements in facial recognition software, is the risk that false positives and misidentifications can result in wrongful arrests and other unconstitutional overreaches (Benedict, 2022; Hill, 2022). Aside from the limitations of the technology are concerns regarding documented racial and gender biases embedded within facial recognition algorithms. ...
Article
Full-text available
This study presents novel insights into the effects of police facial recognition applications on violent crime and arrest dynamics across 268 U.S. cities from 1997 to 2020. We conducted generalized difference-in-differences regressions with multiway fixed effects to exploit this technology's staggered implementation. Our findings indicate that police facial recognition applications facilitate reductions in the rates of felony violence and homicide without contributing to over-policing or racial disparities in arrest for violent offenses. Greater reductions were observed for cities that adopted these technologies earlier in the study period, suggesting that their public safety benefits appreciate over time. The results of parallel trend and robustness tests also support these conclusions. While further research is necessary to assess the implementation and effects of facial recognition systems in various contexts, the presented evidence suggests that urban police agencies that responsibly deploy these innovations to support crime control efforts can keep their residents safer and reduce the lives lost to violence.
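To make the estimation strategy concrete, the following is a minimal sketch (not the authors' code) of a two-way fixed-effects difference-in-differences regression in Python; the panel file, column names, and adoption indicator are hypothetical stand-ins for the study's city-year data:

```python
# Sketch of a generalized difference-in-differences regression with city and
# year fixed effects, assuming a hypothetical panel with columns:
# city, year, homicide_rate, and frt_active (1 once a city adopts FRT).
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("city_year_panel.csv")  # hypothetical input file

model = smf.ols(
    "homicide_rate ~ frt_active + C(city) + C(year)",  # two-way fixed effects
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["city"]})  # cluster SEs by city

print(model.params["frt_active"])  # estimated effect of FRT adoption
```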
... Petty [42] describes how Detroiters are trading the perception of safety for surveillance, which has actually hindered community safety efforts and directly harmed community members. Perhaps most notable is the case of Robert Williams, who was wrongfully arrested in January 2020 as a result of Project Green Light, possibly the first instance of a wrongful arrest by a facial recognition system [23]. By this time, computer scientists such as Raji et al. [45], including both Buolamwini and Gebru, would shift their focus to the use of "algorithmic audits" to rectify the harmful impacts of FRTs, stating that these technologies "need to have careful privacy considerations, and avoid exploiting marginalized groups in the blind pursuit of increasing representation." ...
Preprint
Full-text available
How is identity constructed and performed in the digital via face-based artificial intelligence technologies? While questions of identity on the textual Internet have been thoroughly explored, the Internet has progressed to a multimedia form that not only centers the visual, but specifically the face. At the same time, a wealth of scholarship has and continues to center the topics of surveillance and control through facial recognition technologies (FRTs), which have extended the logics of the racist pseudoscience of physiognomy. Much less work has been devoted to understanding how such face-based artificial intelligence technologies have influenced the formation and performance of identity. This literature review considers how such technologies interact with faciality, which entails the construction of what a face may represent or signify, along axes of identity such as race, gender, and sexuality. In grappling with recent advances in AI such as image generation and deepfakes, I propose that we are now in an era of "post-facial" technologies that build off our existing culture of facility while eschewing the analog face, complicating our relationship with identity vis-a-vis the face. Drawing from previous frameworks of identity play in the digital, as well as trans practices that have historically played with or transgressed the boundaries of identity classification, we can develop concepts adequate for analyzing digital faciality and identity given the current landscape of post-facial artificial intelligence technologies that allow users to interface with the digital in an entirely novel manner. To ground this framework of transgression, I conclude by proposing an interview study with VTubers -- online streamers who perform using motion-captured avatars instead of their real-life faces -- to gain qualitative insight on how these sociotechnical experiences.
... The desire to comprehend the decision-making capabilities of AI models and address issues of incorrect predictions has prompted researchers and government organizations to prioritize enhancing the explainability, fairness, accountability, and transparency of algorithmic decision-making systems [8][9][10][11][12][13]. The Defense Advanced Research Projects Agency (DARPA) initiated the XAI program [14] intending to develop tools that will make intelligent systems understandable. ...
... Despite their achievements, neural networks are not flaw-free. Instances of errors in neural networks have, in some cases, resulted in serious consequences, including loss of life [16] and wrongful arrests [17,18]. Hence, much like the process of debugging in traditional software programs [3], there is a critical need to devise automated methods for rectifying these defects in neural networks and ensuring that they adhere to desired standards. ...
Article
Full-text available
Neural networks are important computational models used in the domains of artificial intelligence and software engineering. Parameters of a neural network are obtained by training it against a specific dataset with a standard process, which guarantees that each sample within that set is mapped to the correct class. In general, for a trained neural network, there is no warranty of high-level properties, such as fairness, robustness, etc. In this case, one needs to tune the parameters in an alternative manner, which is called repairing. In this paper, we present AutoRIC (Automated Repair wIth Constraints), an analytical-approach-based white-box repairing framework for general properties that can be quantitatively measured. Our approach is mainly based on constrained optimization: namely, we treat the property of the neural network as the optimization objective, described by a quadratic formula over the faulty parameters. To ensure the classification accuracy of the repaired neural network, we impose linear inequality constraints on the inputs that obtain incorrect outputs from the neural network. In general, this may generate a huge number of constraints, resulting in prohibitively high cost in problem solving, or even making the problem unsolvable by the constraint solver. To circumvent this, we present a selection strategy to diminish the restrictions, i.e., we always select the most 'strict' ones into the constraint set each time. Experimental results show that repairing with constraints performs efficiently and effectively. AutoRIC tends to achieve a satisfactory repairing result while bringing in a negligible accuracy drop. AutoRIC enjoys a notable time advantage, and this advantage becomes increasingly evident as the network complexity rises. Moreover, the experimental results also demonstrate that repairing based on unconstrained optimization is not stable, which underscores the necessity of constraints.
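To illustrate the kind of constrained optimization the abstract describes, here is a generic sketch (not the AutoRIC implementation) that minimizes a quadratic objective over a handful of parameters subject to linear inequality constraints using cvxpy; the matrices Q, q, A, and b are invented placeholders for the property objective and the accuracy-preserving constraints:

```python
# Generic sketch: repair a small vector of faulty parameters theta by
# minimizing a quadratic property objective subject to linear inequality
# constraints meant to keep previously correct inputs correctly classified.
# All problem data below are placeholders, not derived from a real network.
import cvxpy as cp
import numpy as np

n = 4                                   # number of parameters being tuned
Q = np.eye(n)                           # quadratic term of the objective (PSD)
q = np.array([0.5, -1.0, 0.2, 0.0])     # linear term (placeholder values)
A = np.array([[1.0, -1.0, 0.0, 0.0],    # each row: one selected "strict" constraint
              [0.0, 0.0, 1.0, 1.0]])
b = np.array([0.1, 0.3])

theta = cp.Variable(n)
objective = cp.Minimize(cp.quad_form(theta, Q) + q @ theta)
constraints = [A @ theta <= b]          # linear inequality constraints
cp.Problem(objective, constraints).solve()

print(theta.value)                      # repaired parameter values
```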
... A notable example is the widespread media coverage of racial or gender bias in facial recognition technology, leading to wrongful arrests and misidentifications. The case of Robert Williams was covered extensively by major news outlets such as The New York Times (Hill, 2020). According to The Guardian, Williams was the first African American person to be wrongfully arrested due to a false match by facial recognition software (Bhuiyan, 2023). ...
Preprint
Most research on augmented judgment and decision-making is human-centered. Specifically, Theory of Machine is a conceptual framework for describing and analyzing people’s lay theories about how human and algorithmic judgment differ. Reminiscent of the Theory of Mind, it conceptualizes the idea of ascribing thought processes or mental states to algorithms. However, based on their own perceptions, past personal experiences, and shaped by public media, people may conceive of humans and algorithms as functionally distinct ontological entities. Therefore, research on augmented judgment and decision-making should also focus on the differences between various algorithms in terms of (cognitive) abilities and behavior. In this article, several agendas for future research are proposed that explicitly consider people’s diverse interactions with various decision-support and artificial intelligence systems in their daily lives. Such primarily algorithm-centric research will help to gain insights into a more fine-grained Theory of Machine that also distinguishes between different levels of algorithmic fairness, transparency, and explainability. Ideally, a better understanding of how people mentalize about algorithmic behavior can also be used to improve algorithmic augmentations of human judgment and decision-making.
... Thus, the poor performance of this one group could go unnoticed. Yet, if a system does not work well for a certain category of population, it can lead to discrimination, such as people of colour being wrongly accused of committing crimes because an algorithm has matched their face to that of a criminal (Hill, 2020). ...
... In 2020, the Citizen Lab at Toronto University published their report, To Surveil and Predict, presenting a human rights analysis of algorithmic policing including the widespread application of FRTs in Canada, finding similar concerns surrounding questions of bias, privacy, and harms (Robertson et al. 2020). In 2020, the case of Robert Julian-Borchak Williams was the first publicly known false arrest sparked by a faulty facial recognition match (Hill 2020b). ...
Article
Full-text available
On February 13, 2020, the Toronto Police Services (TPS) issued a statement admitting that its members had used Clearview AI’s controversial facial recognition technology (FRT). The controversy sparked widespread outcry by the media, civil society, and community groups, and put pressure on policy-makers to address FRTs. Public consultations presented a key tool to contain the scandal in Toronto and across Canada. Drawing on media reports, policy documents, and expert interviews, we investigate four consultations held by the Toronto Police Services Board (TPSB), the Office of the Privacy Commissioner (OPC), and the parliamentary Standing Committee on Access to Information, Privacy and Ethics (ETHI) to understand how public opinion and outrage translate into policy. We find that public consultations became a powerful closure mechanism in the policy-making toolbox, inhibiting rather than furthering democratic debate. Our findings show that consultations do not advance public literacy; that opportunities for public input are narrow; that timeframes are short; and that mechanisms for inclusion are limited. Even in the best-case circumstances, consultations are merely one of many factors in AI governance and seldom impact concrete policy outcomes in the cases studied here.
... High-profile success stories of FRT-assisted identification and apprehension of violent offenders undergird growing public and law enforcement support for the largely unregulated application of FRT in crime control. One of the most prominent cases involves the use of this technology to identify and ... [Footnote 9: For example, a series of positive stories related to the identification of people involved in the January 6th insurrection [33]. Footnote 10: These are frequently showcased by the Security Industry Association (SIA) in various press releases supportive of the proliferation of its constituents' technology products [34].] ...
Preprint
Full-text available
This study presents novel insights into the effects of police facial recognition applications on violent crime and arrest dynamics across 268 U.S. cities from 1997 to 2020. We conducted generalized difference-in-differences regressions with multiway fixed effects to exploit this technology’s staggered implementation. As the first to examine how the police use of these systems impacts violent crime control, this study fills a critical research gap. Our findings indicate that police facial recognition applications facilitate reductions in the rates of felony violence and homicide without contributing to over-policing or racial disparities in arrest for violent offenses. Greater reductions were observed for cities that adopted these technologies earlier in the study period, suggesting that their public safety benefits appreciate over time. The results of parallel trend and robustness tests also support these conclusions. While further research is necessary to assess the implementation and effects of facial recognition systems in various contexts, presented evidence suggests that city police agencies that responsibly deploy these innovations to support crime control efforts can keep their residents safer and reduce the lives lost to violence.
... The decision to arrest him was primarily based on a facial detection algorithm which matched Mr. Williams' driving license photo with the picture of a man who was suspected of watch theft two years earlier. Not only did the computer 'get it wrong', as one of the detectives said when Mr. Williams made them aware that the picture of the suspect obviously did not resemble him; the probably unreliable algorithm also very likely contributed to racial discrimination (Hill 2020). ...
Article
Full-text available
Mark Coeckelbergh starts his book with a very powerful picture based on a real incident: On the 9th of January 2020, Robert Williams was wrongfully arrested by Detroit police officers in front of his two young daughters, wife and neighbors. For 18 hours the police would not disclose the grounds for his arrest (American Civil Liberties Union 2020; Hill 2020). The decision to arrest him was primarily based on a facial detection algorithm which matched Mr. Williams’ driving license photo with the picture of a man who was suspected of watch theft two years earlier. Not only did the computer ‘get it wrong’, as one of the detectives said when Mr. Williams made them aware that the picture of the suspect obviously did not resemble him; the probably unreliable algorithm also very likely contributed to racial discrimination (Hill 2020). It is well documented that many available facial detection algorithms at this time had significant problems (e.g. a comparatively high false positive rate) with respect to black persons, like Mr. Williams (NIST 2019). Multiple causes may exist, such as unbalanced training datasets and insufficient optimization. Coeckelbergh compares the disturbing case of Mr. Williams with a political interpretation of Franz Kafka’s The Trial, where the protagonist, Josef K., is accused of an unspecified crime by an opaque, oppressive and absurd bureaucracy: “In the 21st-century United States, Josef K. is black and is falsely accused by an algorithm, without explanation” (p. 2).
... However, face recognition technologies are also error prone. For example, in the U.S., there are known cases where misidentifying a person as a wanted criminal has led to a wrongful arrest, accompanied by at least temporary imprisonment and inappropriate treatment from the police [7][8][9]. In this context, Garvie and Bedoya [10] documented a disproportionately higher arrest and search rate of African-Americans based on face recognition software decisions. ...
Article
Full-text available
With the rise of deep neural networks, the performance of biometric systems has increased tremendously. Biometric systems for face recognition are now used in everyday life, e.g., border control, crime prevention, or personal device access control. Although the accuracy of face recognition systems is generally high, they are not without flaws. Many biometric systems have been found to exhibit demographic bias, resulting in different demographic groups not being recognized with the same accuracy. This is especially true for facial recognition due to demographic factors, e.g., gender and skin color. While many previous works have already reported demographic bias, this work aims to reduce demographic bias in biometric face recognition applications. In this regard, 12 face recognition systems are benchmarked regarding biometric recognition performance as well as demographic differentials, i.e., fairness. Subsequently, multiple fusion techniques are applied with the goal of improving fairness compared to single systems. The experimental results show that it is possible to improve fairness with respect to a single demographic attribute, e.g., skin color or gender, while improving fairness across demographic subgroups turns out to be more challenging.
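One of the simplest fusion strategies alluded to above is score-level fusion; the sketch below (an illustration, not the benchmarked systems) fuses comparison scores from several hypothetical face recognition systems with a weighted mean, after which fairness would still need to be evaluated per demographic group:

```python
# Minimal sketch of weighted score-level fusion across face recognition
# systems. Scores are assumed to be comparison scores in [0, 1] for the same
# list of probe/reference pairs; systems, scores, and weights are invented.
import numpy as np

def fuse_scores(score_lists, weights):
    """Return the weighted mean of per-system comparison scores."""
    scores = np.vstack(score_lists)        # shape: (n_systems, n_pairs)
    w = np.asarray(weights, dtype=float)
    return (w / w.sum()) @ scores          # fused score per comparison pair

system_a = [0.91, 0.40, 0.75, 0.10]        # hypothetical per-pair scores
system_b = [0.88, 0.35, 0.80, 0.22]
system_c = [0.95, 0.50, 0.70, 0.15]

fused = fuse_scores([system_a, system_b, system_c], weights=[0.4, 0.3, 0.3])
print(fused, fused >= 0.6)                 # scores and decisions at a sample threshold
```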
... Research has revealed persistent inaccuracies when applying facial detection and identification algorithms to individuals of color (Klare et al. 2012). Troublingly, these inaccuracies have resulted in wrongful arrests, with innocent individuals, particularly from Black communities, being targeted due to flawed facial recognition systems (Hill 2020). Moreover, gender biases have also been observed, with lower accuracy rates for women compared to men, exacerbating the challenges faced by marginalized groups (Buolamwini and Gebru 2018). ...
Article
Full-text available
Issue assignment process is a common practice in open source projects for managing incoming and existing issues. While traditionally performed by humans, the adoption of software bots for automating this process has become prevalent in recent years. The objective of this paper is to examine the diversity in issue assignments between bots and humans in open source projects, with the aim of understanding how open source communities can foster diversity and inclusivity. To achieve this, we conducted a quantitative analysis on three major open source projects hosted on GitHub, focusing on the most likely racial and ethnic diversity of both human and bot assignors during the issue assignment process. We analyze how issues are assigned by humans and bots, as well as the distribution of issue types among White and Non-White open source collaborators. Additionally, we explore how the diversity in issue assignments evolves over time for human and bot assignors. Our results reveal that both human and bot assignors majorly assign issues to developers of the same most likely race and ethnicity. Notably, we find bots assign more issues to perceived White developers than Non-White developers. In conclusion, our findings suggest that bots display higher levels of bias than humans in most cases, although humans also demonstrate significant bias in certain instances. Thus, open source communities must actively address these potential biases in their GitHub issue assignment process to promote diversity and inclusivity.
... Despite this, the police were slow to doubt the authority of the computer system. After 30 hours of wrongful detention, Williams had to pay bail, and he is still troubled by the family, personal and psychological sequelae of the episode (Hill, 2020). Although shocking, cases like these are becoming more and more frequent, albeit underreported. ...
Article
Full-text available
This chapter of the author’s 2022 book Racismo algorítmico: inteligência artificial e discriminação nas redes digitais (Algorithmic racism: artificial intelligence and discrimination in digital networks) demonstrates the racism encoded in artificial intelligence. It addresses the material and symbolic, often lethal, violence inflicted upon Black and poor individuals and populations by the deployment of predictive systems built and backfed from datasets that reflect a history of exploitation and segregation. It begins with a historical overview of the normalisation of hypervigilance and violent control over racialized populations in the United States and in Brazil. It then shows the continuity of that control by contemporary algorithmic classification systems such as facial recognition, predictive policing, and health and security risk scores. Aligned with official narratives of racial harmony and the meritocratic justification of colour-blind policy, they are deemed neutral by their developers, private enterprises and the public institutions that employ them, who insidiously ignore their bias.
... Despite this, the officers were reluctant to question the authority of the computer system. After being wrongfully jailed for 30 hours, Williams still had to pay bail to be released, and he continues to face family, personal and psychological consequences of what happened (Hill, 2020). Both cases may be surprising, but they are increasingly frequent, even though they go unreported. Facial recognition for policing purposes has existed for more than twenty years, but a combination of cheaper technology, growing biometric databases, legislative leniency and pressure from companies has accelerated its adoption in recent times. ...
Article
Full-text available
Abstract This chapter of the book Racismo algorítmico: inteligência artificial e discriminação nas redes digitais (Algorithmic racism: artificial intelligence and discrimination in digital networks), published by the author in 2022, demonstrates the racism encoded in artificial intelligence. It addresses the material and symbolic, often lethal, violence inflicted on Black and poor individuals and populations through the deployment of predictive systems built and fed back from datasets that reflect a history of exploitation and segregation. It begins with a historical overview of the normalisation of hypervigilance and violent control over racialized populations in the United States and Brazil. It then shows the continuity of that control in contemporary algorithmic classification systems such as facial recognition, predictive policing, and security and health risk scores. Aligned with official narratives of racial harmony and the meritocratic justification of colour-blind policy, these systems are deemed neutral by their developers and by the private enterprises and public institutions that employ them, which insidiously ignore their biases.
... One study has shown that COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) was twice as likely to label black offenders as high-risk than whites.[30] Further examples of discrimination from other areas are abundant: Google's face recognition system identified black people as gorillas, Microsoft's chatbot Tay became a racist neo-Nazi in ... [Footnote 24: Ashraf (2020); Hill (2020b). Footnote 30: In fact, in these cases there was no element of specific targeting involved.] ...
Article
Full-text available
New technologies based on digitalization, automation, and artificial intelligence have fundamentally transformed our lives and society as a whole, in just a few decades. These technologies support human well-being and prosperity by enhancing progress and innovation, however, they also have the potential to negatively impact human rights, democracy, and the rule of law. Discrimination, the violation of privacy, increasing surveillance, the weakening of personal autonomy, disinformation and electoral interference are but a few of the many concerns. This paper examines the specific human rights implications of AI-driven systems through the lens of the most important international instruments adopted by the UN and regional human rights mechanisms. The paper shows how AI can affect the exercise of all human rights, not only a most obvious few. In line with major international organizations, the author calls on decision-makers to take a precautionary approach by adopting AI regulations that are consistent with the standards of fundamental human rights, and that balance the realization of the opportunities with the potential risks which AI presents.
... For example, Yadav and Lachney (2022) emphasized the importance of developing pre-service teachers' knowledge of teaching with technology (to support their students' learning), about technology (to identify how technology leads to productivity and harm), and through technology (to support creativity and self-expression in classrooms). When viewed from this lens, CT and computational literacies offer a powerful framework for future educators to use technology to support their students' disciplinary learning, as well as help them learn about how technologies operate and potentially cause harm (e.g., bias in facial recognition that misidentifies individuals with darker skin and leads to their arrest, see Hill, 2020). ...
Article
This paper explores teacher educators’ perceptions and over- arching approaches to discipline-specific computational thinking (CT) integration in pre-service teacher education courses. To that end, we collaborated with 31 teacher educators from various teaching backgrounds in a university system in the Northeastern United States. As part of the first phase of a year-long professional development, the faculty attended a series of virtual meetings and an asynchronous virtual classroom, working towards integrating CT into their current teaching practices. In this study, we report the findings from the analysis of pre- and post-professional development surveys. The paired sample t-test results suggested a significant difference in teacher educators’ CT confidence and ability for CT integration between the pre-and post-PD phases. Qualitative analysis of the open-ended responses indicated changes in teacher educators’ conceptions of CT and its role in preparing future teachers. In the post-PD survey, teacher education faculty overwhelmingly framed CT as a new literacy, a form of knowledge that facilitates teaching and learning in technology-enhanced environments, and a cross-cutting concept. Teacher educators also regarded CT as a metacognitive pedagogical skill that could support pre-service teachers’ abilities in pedagogy by enabling them to monitor and modify their own teaching practice. We discuss these emergent themes, focusing on the novel and recurring aspects of teacher educators’ perceptions and approaches to CT, such as viewing CT as a transdisciplinary metacognitive pedagogical skill and a new literacy practice.
... The system is expected to dismiss all law-abiding passengers and alert the security personnel whenever criminal offenders turn up. However, recent newspaper articles have shown that people being misidentified is not a hypothetical exercise but has actually occurred several times across the United States [5,6]. To make matters worse, false alarms should be avoided at all costs, since a system identification error may bias the security approach and lead to innocent people being mistakenly held in custody [7]. ...
Conference Paper
Full-text available
Open-set face recognition is a scenario in which biometric systems have incomplete knowledge of all existing subjects. This demanding setting requires the system to dismiss irrelevant faces and focus on subjects of interest only. For this reason, this work introduces a novel method that associates an ensemble of compact neural networks with data augmentation at the feature level and an entropy-based cost function. Deep neural networks pre-trained on large face datasets serve as the preliminary feature extraction module. The neural adapter ensemble consists of binary models trained on original feature representations along with negative synthetic mix-up embeddings, which are adequately handled by the designed open-set loss since they do not belong to any known identity. We carry out experiments on the well-known LFW and IJB-C datasets, where results show that the approach is capable of boosting closed- and open-set identification accuracy. OpenLoss Package: https://pypi.org/project/openloss/
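As a rough sketch of the feature-level augmentation step mentioned in the abstract (mix-up of embeddings to synthesize negatives for the binary adapters), and not the code released in the OpenLoss package, one might write something like the following; the embedding dimension, Beta parameters, and random gallery are assumptions:

```python
# Sketch: synthesize "negative" embeddings as convex combinations of pairs of
# real face embeddings (feature-level mix-up). The mixed vectors belong to no
# known identity, so an open-set loss can treat them as negatives.
import numpy as np

rng = np.random.default_rng(0)

def mixup_negatives(embeddings, n_synthetic, alpha=0.4):
    """Mix random pairs of embeddings; ideally the pairs span different identities."""
    n, _ = embeddings.shape
    idx_a = rng.integers(0, n, size=n_synthetic)
    idx_b = rng.integers(0, n, size=n_synthetic)
    lam = rng.beta(alpha, alpha, size=(n_synthetic, 1))    # mixing coefficients
    mixed = lam * embeddings[idx_a] + (1.0 - lam) * embeddings[idx_b]
    # L2-normalize, since face embeddings are usually compared on the unit hypersphere
    return mixed / np.linalg.norm(mixed, axis=1, keepdims=True)

gallery = rng.normal(size=(100, 512))        # stand-in for real face embeddings
negatives = mixup_negatives(gallery, n_synthetic=200)
print(negatives.shape)                        # (200, 512)
```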
... Much of this work has come into focus because of the research and advocacy of Black women, like Joy Buolamwini, Timnit Gebru, and Mutale Nkonde, who discuss the large racial bias in algorithmic systems that extend and amplify racial inequity (Nkonde, 2019; Raji et al., 2020). These results of faulty facial recognition systems were underscored in a recent New York Times article that described the experience of a Black man who was falsely arrested in Detroit for a crime he did not commit (Hill, 2020). In addition to facial recognition, new research from the Brennan Center at NYU indicates that over the last five years, new surveillance companies have developed and are selling software, powered by AI, that can allegedly detect signs of violence or other concerning behavior among youth on social media (Patel et al., 2020). ...
Chapter
Full-text available
Essays on the challenges and risks of designing algorithms and platforms for children, with an emphasis on algorithmic justice, learning, and equity. One in three Internet users worldwide is a child, and what children see and experience online is increasingly shaped by algorithms. Though children's rights and protections are at the center of debates on digital privacy, safety, and Internet governance, the dominant online platforms have not been constructed with the needs and interests of children in mind. The editors of this volume, Mizuko Ito, Remy Cross, Karthik Dinakar, and Candice Odgers, focus on understanding diverse children's evolving relationships with algorithms, digital data, and platforms and offer guidance on how stakeholders can shape these relationships in ways that support children's agency and protect them from harm. This book includes essays reporting original research on educational programs in AI relational robots and Scratch programming, on children's views on digital privacy and artificial intelligence, and on discourses around educational technologies. Shorter opinion pieces add the perspectives of an instructional designer, a social worker, and parents. The contributing social, behavioral, and computer scientists represent perspectives and contexts that span education, commercial tech platforms, and home settings. They analyze problems and offer solutions that elevate the voices and agency of parents and children. Their essays also build on recent research examining how social media, digital games, and learning technologies reflect and reinforce unequal childhoods. Contributors:Paulo Blikstein, Izidoro Blikstein, Marion Boulicault, Cynthia Breazeal, Michelle Ciccone, Sayamindu Dasgupta, Devin Dillon, Stefania Druga, Jacqueline M. Kory-Westlund, Aviv Y. Landau, Benjamin Mako Hill, Adriana Manago, Siva Mathiyazhagan, Maureen Mauk, Stephanie Nguyen, W. Ian O'Byrne, Kathleen A. Paciga, Milo Phillips-Brown, Michael Preston, Stephanie M. Reich, Nicholas D. Santer, Allison Stark, Elizabeth Stevens, Kristen Turner, Desmond Upton Patton, Veena Vasudevan, Jason Yip
... Recently, there has been increasing backlash from civil rights organizations and in communities where people are being targeted with facial recognition surveillance and other remote biometric recognition technologies. These technologies have been used in the surveillance of protestors and civilians in India [72] and Russia [73] and of ethnic and religious minorities in China [74], and have resulted in the wrongful arrests of innocent individuals in Argentina [75] and the US [76]. Civil rights organizations are advocating for bans on government use of facial recognition technology and for the adoption of legislation that protects individuals from nonconsensual collection of biometric information by private entities [77]. ...
... Our second definition constitutes another fairness issue arising from the following observation: in recidivism prediction instruments, the model's decision is surprisingly brittle with respect to the inclusion or removal of certain training data, even potentially irrelevant data [17,27,34]. Take, as an example, the facial recognition model developed by ClearviewAI, which has been widely used by law enforcement agencies as evidence for arrest justification: the prediction for one person can be sensitive to, or completely reversed by, the existence of another individual who shares few salient characteristics [50]. ...
Article
Full-text available
Fairness in machine learning (ML) has gained attention within the ML community and the broader society beyond, with many fairness definitions and algorithms being proposed. Surprisingly, there is little work quantifying and guaranteeing fairness in the presence of uncertainty, which is prevalent in many socially sensitive applications, ranging from marketing analytics to actuarial analysis and recidivism prediction instruments. To this end, we revisit fairness and reveal idiosyncrasies of the existing fairness literature, whose assumption of certainty on the class label limits its real-world utility. Our primary contributions are formulating fairness under uncertainty and group constraints along with a suite of corresponding new fairness definitions and algorithms. We argue that this formulation has broader applicability to practical scenarios concerning fairness. We also show how the newly devised fairness notions involving censored information and the general framework for fair predictions in the presence of censorship allow us to measure and mitigate discrimination under uncertainty, bridging the gap with real-world applications. Empirical evaluations on real-world datasets with censorship and sensitive attributes demonstrate the practicality of our approach.
Preprint
Full-text available
With the increasing proliferation of mobile applications in our everyday experiences, the concerns surrounding ethics have surged significantly. Users generally communicate their feedback, report issues, and suggest new functionalities in application (app) reviews, frequently emphasizing safety, privacy, and accountability concerns. Incorporating these reviews is essential to developing successful products. However, app reviews related to ethical concerns generally use domain-specific language and are expressed using a more varied vocabulary, which makes automated extraction of ethical concern-related app reviews a challenging and time-consuming effort. This study proposes a novel Natural Language Processing (NLP) based approach that combines Natural Language Inference (NLI), which provides a deep comprehension of language nuances, with a decoder-only (LLaMA-like) Large Language Model (LLM) to extract ethical concern-related app reviews at scale. Utilizing 43,647 app reviews from the mental health domain, the proposed methodology 1) evaluates four NLI models to extract potential privacy reviews and compares the results of domain-specific privacy hypotheses with generic privacy hypotheses; 2) evaluates four LLMs for classifying app reviews by privacy concern; and 3) uses the best NLI and LLM models to further extract new privacy reviews from the dataset. Results show that the DeBERTa-v3-base-mnli-fever-anli NLI model with domain-specific hypotheses yields the best performance, and the Llama3.1-8B-Instruct LLM performs best in the classification of app reviews. Then, using NLI+LLM, an additional 1,008 new privacy-related reviews were extracted that were not identified through the keyword-based approach in previous research, demonstrating the effectiveness of the proposed approach.
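As a loose illustration of the NLI filtering step described above (not the authors' released pipeline), a zero-shot NLI classifier from Hugging Face can score a review against privacy hypotheses; the model repository path and the hypothesis wording are assumptions inferred from the abstract:

```python
# Sketch: score an app review against privacy-related hypotheses with a
# zero-shot NLI classifier. Model path and hypotheses are assumptions, not
# the authors' exact configuration.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",  # assumed repo path
)

review = "The app keeps asking for my contacts and shares my mood logs with third parties."
labels = ["this review raises a privacy concern", "this review is not about privacy"]

result = classifier(review, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))  # top hypothesis and score
```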
Article
Full-text available
With the introduction of modern biometric technologies into various spheres of life, at the level of both individuals and the State, there is an increasing risk of serious negative consequences of potentially possible mistaken identifications. Such cases have already happened both in Russia and abroad. In the course of legal proceedings on facts of mistaken identity and its consequences, forensic examinations are carried out in most cases. Depending on the type of biometric registration and identification system, forensic computer-technical, traceological, portrait, and video-technical examinations, as well as forensic medical and other medicolegal examinations, are assigned. The article assesses the legality and scientific validity of performing forensic anthropometric examinations of photographs and video recordings of the suspects captured in them.
Article
The issue of the use of facial recognition technology, which involves the processing of biometric data on a large scale and in public areas in the Republic of Serbia, has been debated since 2019, when the deployment of these technologies was announced. The use of technology for the purpose of improving security and its impact on the rights and freedoms of persons may be viewed as conflicting, even mutually exclusive, interests. Yet these two, equally important, interests are not necessarily incompatible. This paper presents an overview of the impact of the use of facial recognition technology on the behaviour of individuals, the processes of drafting different versions of interior affairs bills containing provisions enabling the use of this technology and different versions of data protection impact assessments, as well as relevant acts of the United Nations and the European Union. Although the final answer as to whether the use of such technology to process biometric data in public areas should be permitted and, if so, when and under what conditions, is still pending, the paper proposes measures that could help accommodate the different interests without interfering with the rights and freedoms of persons in a way that is excessive in a democratic society.
Article
Full-text available
Facial recognition technologies (FRTs) are used by law enforcement agencies (LEAs) for various purposes, including public security, as part of their legally mandated duty to serve the public interest. While these technologies can aid LEAs in fulfilling their public security responsibilities, they pose significant risks to data protection rights. This article identifies four specific risks associated with the use of FRT by LEAs for public security within the frameworks of the General Data Protection Regulation and Artificial Intelligence Act. These risks particularly concern compliance with fundamental data protection principles, namely data minimisation, purpose limitation, data and system accuracy, and administrative challenges. These challenges arise due to legal, technical, and practical factors in developing algorithms for law enforcement. Addressing these risks and exploring practical mitigations, such as broadening the scope of data protection impact assessments, may enhance transparency and ensure that FRT is used for public security in a manner that serves the public interest.
Chapter
Full-text available
Brittany Johnson and Justin Smith
Book
Full-text available
When Face Recognition Goes Wrong explores the myriad ways that humans and machines make mistakes in facial recognition. Adopting a critical stance throughout, the book explores why and how humans and machines make mistakes, covering topics including racial and gender biases, neuropsychological disorders, and widespread algorithm problems. The book features personal anecdotes alongside real-world examples to showcase the often life-changing consequences of facial recognition going wrong. These range from problems with everyday social interactions through to eyewitness identification leading to miscarriages of justice and border control passport verification. Concluding with a look to the future of facial recognition, the author asks the world’s leading experts what are the big questions that still need to be answered, and can we train humans and machines to be super recognisers? This book is a must-read for anyone interested in facial recognition, or in psychology, criminal justice and law.
Article
Introduction: Computer science (CS) lacks representation from people who identify as one or more of the following identities: woman, Black, Indigenous, Hispanic, Latina/Latino/Latinx, or disabled. We refer to these groups as historically underrepresented groups (HUGs). Informal learning, like CS summer camps and hackathons, can increase interest in K-12 students but still struggles to broaden participation. Objectives: In this study, we examine one source of struggle for informal learning programs: recruiting practices. Methods: Towards the goal of understanding this struggle, we interviewed 14 informal K-12 CS learning programs across a diverse region in the Northwestern United States to understand what recruiting practices are being used. We used a cultural competency lens to examine the variation within recruiting practices and how some practices could lead to broader participation in computing. Results: We identified 18 different recruiting practices used by informal CS learning program organizers. Some programs had similar practices, but subtle differences in implementation that led them to fall at different points on the cultural competence continuum. More culturally competent implementations generally involve reflection on the needs of specific populations that programs were trying to recruit, on why previous recruiting implementations did not work, and on feedback from stakeholders to change their implementations. This is the first paper to investigate how the implementation of the recruiting practice determines its cultural competency. Conclusion: Results from this study illuminate some of the problems informal CS programs face in broadening participation in computing and provide insights on how program organizers can overcome them. Our work highlights how students or parents access resources, the challenges program organizers encounter, and whether current recruiting practices effectively engage students from HUGs.
Conference Paper
Full-text available
As more algorithmic systems have come under scrutiny for their potential to inflict societal harms, an increasing number of organizations that hold power over harmful algorithms have chosen (or were required under the law) to abandon them. While social movements and calls to abandon harmful algorithms have emerged across application domains, little academic attention has been paid to studying abandonment as a means to mitigate algorithmic harms. In this paper, we take a first step towards conceptualizing "algorithm abandonment" as an organization's decision to stop designing, developing, or using an algorithmic system due to its (potential) harms. We conduct a thematic analysis of real-world cases of algorithm abandonment to characterize the dynamics leading to this outcome. Our analysis of 40 cases reveals that campaigns to abandon an algorithm follow a common process of six iterative phases: discovery, diagnosis, dissemination, dialogue, decision, and death, which we term the "6 D's of abandonment". In addition, we highlight key factors that facilitate (or prohibit) abandonment, which include characteristics of both the technical and social systems that the algorithm is embedded within. We discuss implications for several stakeholders, including proprietors and technologists who have the power to influence an algorithm's (dis)continued use, FAccT researchers, and policymakers.
Conference Paper
Full-text available
Business and technology are intricately connected through logic and design. They are equally sensitive to societal changes and may be devastated by scandal. Cooperative multi-robot systems (MRSs) are on the rise, allowing robots of different types and brands to work together in diverse contexts. Generative artificial intelligence has been a dominant topic in recent artificial intelligence (AI) discussions due to its capacity to mimic humans through the use of natural language and the production of media, including deep fakes. In this article, we focus specifically on the conversational aspects of generative AI, and hence use the term Conversational Generative artificial intelligence (CGI). Like MRSs, CGIs have enormous potential for revolutionizing processes across sectors and transforming the way humans conduct business. From a business perspective, cooperative MRSs alone, with potential conflicts of interest, privacy practices, and safety concerns, require ethical examination. MRSs empowered by CGIs demand multi-dimensional and sophisticated methods to uncover imminent ethical pitfalls. This study focuses on ethics in CGI-empowered MRSs while reporting the stages of developing the MORUL model.
Chapter
Working Group 9.10 is the newest group under Technical Committee 9, and it has a focus on ICT and its impact and uses in promoting and maintaining peace, as well as the use of ICTs in conflict and war. The focus of the working group’s activities thus far has related to cybersecurity and cyberwarfare, with members being involved in organizing conference and specialist tracks, with other book projects and activities with related communities. After giving an introduction and history to the working group, the chapter covers some of the major themes and recent developments that are related to the themes of the working group.
Article
Amidst calls for public accountability over large data-driven systems, feminist and indigenous scholars have developed refusal as a practice that challenges the authority of data collectors. However, because data affects so many aspects of daily life, it can be hard to see seemingly different refusal strategies as part of the same repertoire. Furthermore, conversations about refusal often happen from the standpoint of designers and policymakers rather than the people and communities most affected by data collection. In this paper, we introduce a framework for data refusal from below, writing from the standpoint of people who refuse, rather than the institutions that seek their compliance. Because refusers work to reshape socio-technical systems, we argue that refusal is an act of design, and that design-based frameworks and methods can contribute to refusal. We characterize refusal strategies across four constituent facets common to all refusal, whatever strategies are used: autonomy, or how refusal accounts for individual and collective interests; time, or whether refusal reacts to past harm or proactively prevents future harm; power, or the extent to which refusal makes change possible; and cost, or whether or not refusal can reduce or redistribute penalties experienced by refusers. We illustrate each facet by drawing on cases of people and collectives that have refused data systems. Together, the four facets of our framework are designed to help scholars and activists describe, evaluate, and imagine new forms of refusal.
Article
Full-text available
3D face reconstruction algorithms from images and videos are applied to many fields, from plastic surgery to the entertainment sector, thanks to their advantageous features. However, when looking at forensic applications, 3D face reconstruction must observe strict requirements that still make its possible role in bringing evidence to a lawsuit unclear. An extensive investigation of the constraints, potential, and limits of its application in forensics is still missing. Shedding some light on this matter is the goal of the present survey, which starts by clarifying the relation between forensic applications and biometrics, with a focus on face recognition. Therefore, it provides an analysis of the achievements of 3D face reconstruction algorithms from surveillance videos and mugshot images and discusses the current obstacles that separate 3D face reconstruction from an active role in forensic applications. Finally, it examines the underlying data sets, with their advantages and limitations, while proposing alternatives that could substitute or complement them.
Article
Law enforcement has been transformed drastically by advances in technology. Law enforcement bodies around the world have adopted facial recognition capabilities powered by artificial intelligence and contend that facial recognition technology is an effective tool in preventing, disrupting, investigating, and responding to crime. As the practice has grown, so have criticisms of its use and policing outcomes. Criticisms relate to the violation of civil liberties, namely the potential for abuse, propensity for inaccuracies, and improper use. In an effort to assess the validity of these criticisms, this paper examines the link between facial recognition technology and racial bias through an analysis of existing research and a case study of an American municipality that has banned the use of facial recognition technology by police. Studies to date demonstrate a propensity for algorithms to mirror the biases of the datasets on which they are trained, including racial and gender biases; rates of match inaccuracy were consistently seen in relation to black persons, particularly black females. In addition to academic research, multiple examples of misidentifications of black citizens in the United States, along with related commentary from human rights and civil liberties groups, suggest that these concerns are translating into real-world injustices. This paper validates concerns with the use of facial recognition technology for law enforcement purposes in the absence of adequate governance mechanisms.
Thesis
Full-text available
Organizations increasingly use Artificial Intelligence (AI) to achieve their goals. However, the use of AI has led to negative side effects harming people. The work presented in this thesis focuses on harvesting the benefits of AI while preventing harm by presenting theoretical and practical approaches to the responsible management of AI. The thesis answers the research question: How can organizations ensure responsible use of artificial intelligence? Five papers contribute to answering this question. The first paper asks the research question: How can an organization exploit inscrutable AI systems in a safe and socially responsible manner? We answer this question with an exploratory case study in the Danish Business Authority. The paper provides two key contributions by introducing the concept of sociotechnical envelopment and how it enables organizations to manage the trade-off between predictive power and explainability in AI. The second paper asks the research question: How can organizations reconcile the growing demands for explanations of how AI-based algorithmic decisions are made with their desire to leverage AI to maximize business performance? The paper is part of a double issue with the first paper, sharing a similar foundation but differentiating itself by targeting a practitioner's audience. The paper contributes by proposing a framework with six dimensions to explain the behavior of black-box AI systems and four recommendations for explaining the behavior of black-box AI systems. The third paper asks the research question: How do we ensure that machine learning (ML) models meet and maintain quality standards regarding interpretability and responsibility in a governmental setting? We address this with the use of the action design research method. The paper introduces the action design research project in the Danish Business Authority and the first version of the design artifact, the X-RAI framework, including its four sub-frameworks. The fourth paper asks the research question: How should procedures be designed to assess the risks associated with a new AI system? The paper uses action design research, focuses on the first artifact of the X-RAI framework, the Artificial Intelligence Risk Assessment (AIRA) tool, and provides five design principles. The fifth paper asks the research question: How to plan for successful evaluation of AI systems in production? The paper uses action design research and focuses on the second artifact of the X-RAI framework, the Evaluation Plan. The paper finds five challenges in evaluating AI and prescribes five design principles to address them.
Article
AI systems are harming people. Harms such as discrimination and manipulation are reported in the media, which is the primary source of information on AI incidents. Reporting AI near-misses and learning from how a serious incident was prevented would help avoid future incidents. The problem is that ongoing efforts to catalog AI incidents rely on media reports—which does not prevent incidents. Developers, designers, and deployers of AI systems should be incentivized to report and share information on near misses. Such an AI near-miss reporting system does not have to be designed from scratch; the aviation industry’s voluntary, confidential, and non-punitive approach to such reporting can be used as a guide. AI incidents are accumulating, and the sooner such a near-miss reporting system is established, the better.
Article
Facial recognition technology (FRT) has become a significant topic in CSCW owing to widespread adoption and related criticisms: the use of FRT is often considered an assault on privacy or a kind of neo-phrenology. This discussion has revolved around uses of FRT for identification, which are often non-voluntary, in particular for surveillance wherein people are (by and large) unwittingly recognized by FRT systems. At the same time, we have also seen a rise of forms of FRT for verification (e.g., passport control or Apple's Face ID), which typically are overt and interactive. In this paper we study an interactive FRT system used for guest check-in at a hotel in China. We show how guests and bystanders engage in 'self-disciplining work' by controlling their facial (and bodily) comportment both to get recognized and at times to avoid recognition. From our analysis we discuss the role of preparatory and remedial work, as well as dehumanization, and the importance of CSCW paying closer attention to the significance of interactional compliance for people using and bystanding facial recognition technologies.