Article

Technology on the margins: AI and global migration management from a human rights perspective


Abstract

Experiments with new technologies in migration management are increasing. From Big Data predictions about population movements in the Mediterranean, to Canada's use of automated decision-making in immigration and refugee applications, to artificial-intelligence lie detectors deployed at European borders, States are keen to explore the use of new technologies, yet often fail to take into account profound human rights ramifications and real impacts on human lives. These technologies are largely unregulated, developed and deployed in opaque spaces with little oversight and accountability. This paper examines how technologies used in the management of migration impinge on human rights with little international regulation, arguing that this lack of regulation is deliberate, as States single out the migrant population as a viable testing ground for new technologies. Making migrants more trackable and intelligible justifies the use of more technology and data collection under the guise of national security, or even under tropes of humanitarianism and development. The way that technology operates is a useful lens that highlights State practices, democracy, notions of power, and accountability. Technology is not inherently democratic and human rights impacts are particularly important to consider in humanitarian and forced migration contexts. An international human rights law framework is particularly useful for codifying and recognising potential harms, because technology and its development is inherently global and transnational. More oversight and issue-specific accountability mechanisms are needed to safeguard fundamental rights of migrants such as freedom from discrimination, privacy rights and procedural justice safeguards such as the right to a fair decision-maker and the rights of appeal.


... Regarding the use of AI tools to control migratory movements and manage border spaces, scholars and civil society actors have voiced concerns about the increasing reliance on AI-driven technologies to solve the complex problem of migration governance (Molnar 2019b; Bircan and Korkmaz 2021; Nalbandian 2022). ...
... From a human rights perspective, scholars and human rights advocates have highlighted the risks that the deployment and use of AI data-driven technologies by states, international organizations, and humanitarian actors involved in migration management may entail in terms of jeopardizing human rights (Molnar 2019b; Bircan and Korkmaz 2021; Beduschi 2021). Human rights of migrants, refugees, and asylum seekers, such as the right to life, liberty, equality and non-discrimination, and privacy and data protection, can be seriously impacted by the deployment and use of these tools in the domain of migration management (Molnar 2019b). ...
Article
Full-text available
AI predictive tools for migration management in the humanitarian field can significantly aid humanitarian actors in augmenting their decision-making capabilities and improving the lives and well-being of migrants. However, the use of AI predictive tools for migration management also poses several risks. Making humanitarian responses more effective using AI predictive tools cannot come at the expense of jeopardizing migrants’ rights, needs, and interests. Against this backdrop, embedding AI ethical principles into AI predictive tools for migration management becomes paramount. AI ethical principles must be embedded in the design, development, and deployment stages of these AI predictive tools to mitigate risks. Current guidelines for applying AI ethical frameworks contain high-level ethical principles that are not specified concretely enough to be actionable. For AI ethical principles to have real impact, they must be translated into low-level technical and organizational measures to be adopted by those designing and developing AI tools. The context-specificity of AI tools implies that different contexts raise different ethical challenges to be considered. Therefore, the problem of how to operationalize AI ethical principles in AI predictive tools for migration management in the humanitarian field remains unresolved. To this end, eight ethical requirements are presented, with their corresponding safeguards to be implemented at the design and development stages of AI predictive tools for humanitarian action, with the aim of operationalizing AI ethical principles and mitigating the inherent risks.
... The arrival of "innovative data sources", often referred to as "Big Data" or "digital trace data", has been described as a "migration data revolution" (Laczko & Rango, 2014) and bears much potential to complement traditional migration data (Cesare et al., 2018; Sîrbu et al., 2021). At the same time, digital data present a host of new ethical challenges for researchers that are of great concern (Beduschi, 2020; Brayne, 2018; Hayes, 2017; Latonero & Kift, 2018; Leese et al., 2021; Molnar, 2019; Zwitter, 2014). ...
... The review will discuss new data sources along five domains: (1) reliability, the consistency and reproducibility of migration measurements; (2) validity, the accuracy of migration measures and the extent to which the data allow researchers to capture the intended concepts used in migration research; (3) scope, the breadth and depth of migration-related research that could be explored based on the respective data source; (4) accessibility, the degree to which data are accessible to researchers; and lastly, (5) ethics, the potential risk of violations of data privacy, consent, and data protection principles in the data-generation process, and the potential risk of (unintended) harm to research subjects as a result of analyses produced from new data sources (e.g. Beduschi, 2020; Brayne, 2018; Cesare et al., 2018; Hayes, 2017; Latonero & Kift, 2018; Leese et al., 2021; Molnar, 2019; Zwitter, 2014). ...
... Ethics: CDR data poses serious ethical concerns. When entering a mobile phone contract or installing a smartphone operating system, many users may not be aware that their location data is collected and analysed for various purposes (Beduschi, 2020; Brayne, 2018; Molnar, 2019). Such data uses are often hidden in the fine print. ...
Article
Full-text available
The interest in human migration is at its all-time high, yet data to measure migration is notoriously limited. “Big data” or “digital trace data” have emerged as new sources of migration measurement complementing ‘traditional’ census, administrative and survey data. This paper reviews the strengths and weaknesses of eight novel, digital data sources along five domains: reliability, validity, scope, access and ethics. The review highlights the opportunities for migration scholars but also stresses the ethical and empirical challenges. This review intends to be of service to researchers and policy analysts alike and help them navigate this new and increasingly complex field.
... New high-resolution satellite imagery and drone images can identify individuals using face recognition technology. Law enforcement, policing, and intelligence agencies use such approaches (Brayne, 2018; Hayes, 2017; Leese et al., 2021; Molnar, 2019), which raises serious concerns regarding the situation in undemocratic countries with low data protection standards and policies aiming to suppress and control groups in society. Drones may also be increasingly operated by companies in addition to governments, which raises concerns over unknown privacy violations by non-governmental actors. ...
... Others point to the fact that the use of AI in areas such as asylum raises significant ethical (Ajana 2015) and human rights (Land and Aronson 2020) concerns and warn that the turn to AI may well become one of the next great challenges to international protection of refugees and asylum seekers (e.g. Molnar 2019). ...
Preprint
As refugee law practice enters the world of data, it is time to take stock of what refugee law research can gain from technological developments. This article provides an outline for a computationally driven research agenda to tackle refugee status determination variations as a recalcitrant puzzle of refugee law. It first outlines how the growing field of computational law may be canvassed to conduct legal research in refugee studies at a greater empirical scale than traditional legal methods allow. It then exemplifies the empirical purchase of a data-driven approach to refugee law through an analysis of the Danish Refugee Appeal Board’s asylum case law and outlines methods for comparison with datasets from Australia, Canada, and the United States. The article concludes by addressing the data politics arising from a turn to digital methods, and how these can be confronted through insights from critical data studies and reflexive research practices.
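The kind of variation analysis the article proposes, measuring how recognition outcomes differ across decision-makers within one body of case law, can be illustrated with a minimal sketch. The data, names, and the simple spread metric below are invented for illustration and are not drawn from the article or any real dataset:

```python
from collections import defaultdict

# Hypothetical asylum decisions: (decision_maker, outcome)
decisions = [
    ("A", "granted"), ("A", "refused"), ("A", "granted"),
    ("B", "refused"), ("B", "refused"), ("B", "refused"),
    ("C", "granted"), ("C", "granted"), ("C", "refused"),
]

# Count total cases and grants per decision-maker
totals = defaultdict(int)
grants = defaultdict(int)
for member, outcome in decisions:
    totals[member] += 1
    grants[member] += outcome == "granted"

# Recognition rate per decision-maker, and a crude measure of
# status-determination variation (max minus min rate)
rates = {m: grants[m] / totals[m] for m in totals}
spread = max(rates.values()) - min(rates.values())
print(rates)
print(spread)
```

At scale, the same computation over thousands of digitized decisions is what lets computational approaches surface variation patterns that traditional doctrinal methods would miss.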
... Applications and use scenarios for AI are already vast. From scenes of AI-enabled cognitive enhancement technology (see e.g., Rousi and Renko, 2020) to streamlined migration processes (Molnar, 2019) and predictive healthcare (Bohr and Memarzadeh, 2020), each area brings considerations that will require specific frameworks to ensure the reliability of the transformative practices (Kurtz and Schrank, 2007; Spremic, 2017; Nwaiwu, 2018). Reliability, consistency, and contingency plans (risk management and mitigation) are prerequisites for human trust (Saariluoma et al., 2018). ...
... This will reduce the likelihood of recurrence of deviant behavior and support better adaptation to the development of society. Molnar pointed out that, in the prevention and intervention stage of adolescent deviant behavior, practitioners need to carefully investigate and analyze the environment surrounding adolescents and then cooperate with the family, school, community, and society to provide a good environment for them [7]. ...
Article
Full-text available
In this paper, in-depth research and analysis of juvenile delinquency prevention and occupational therapy education guidance using artificial intelligence are conducted, and a response mechanism is designed on this basis. Two crime-type prediction algorithms are proposed: one based on time-crime-type count vectorization and a dense neural network, and one based on the fusion of a dense neural network with a long short-term memory (LSTM) neural network. The outputs of both are fed into a new neural network for training, achieving a fusion of the two networks. The dense neural network can effectively fit the relationship between the constructed features and crime types. The behavioral manifestations and causes of deviant behavior in adolescents are discussed. Because neural networks can only read numerical data, while much information closely related to training quality resides in textual data, knowledge must be extracted from text and built into the application before experimentation. Practical work with adolescents exhibiting deviant behaviors is then carried out through group work and casework, respectively, with problem diagnosis, needs assessment, and service plan development for specific clients. The causes of juvenile delinquency in Internet culture are discussed in terms of the Internet environment, juveniles' use of the Internet, Internet supervision, and crime prevention education. The fourth chapter analyses the prevention and control measures for juvenile delinquency in cyberculture.
In response to the above-mentioned causes of juvenile delinquency in cyberculture, prevention and control measures are discussed in four respects: strengthening the construction of cyberculture to build a healthy online environment, strengthening juveniles' capacity to use the Internet correctly, building a prevention and supervision system to improve the legal framework, and improving and innovating crime prevention education for the Internet era.
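The fusion scheme the abstract describes, two base predictors whose class-probability outputs are concatenated and fed into a third network, can be sketched in plain NumPy. Everything below (the toy data, feature shapes, and the single-layer "fusion network") is an illustrative stand-in, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over rows
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy stand-ins for the two base models: each maps its own feature
# view of an incident to a distribution over 3 crime types.
n, n_types = 200, 3
counts = rng.poisson(2.0, size=(n, 5)).astype(float)  # time-crime count vectors
seq_feats = rng.normal(size=(n, 4))                   # stand-in for an LSTM's sequence summary

W_dense = rng.normal(size=(5, n_types))
W_lstm = rng.normal(size=(4, n_types))
p_dense = softmax(counts @ W_dense)    # dense-network branch output
p_lstm = softmax(seq_feats @ W_lstm)   # dense+LSTM branch output

# Fusion network: concatenate both probability vectors and train a
# single softmax layer on them with batch gradient descent.
X = np.hstack([p_dense, p_lstm])       # (n, 6) fused input
y = rng.integers(0, n_types, size=n)   # toy crime-type labels
Y = np.eye(n_types)[y]                 # one-hot encoding

W = np.zeros((X.shape[1], n_types))
for _ in range(300):
    P = softmax(X @ W)
    W -= 0.5 * X.T @ (P - Y) / n       # cross-entropy gradient step

preds = softmax(X @ W).argmax(axis=1)  # one fused prediction per incident
```

In a real system the two branches would themselves be trained networks and the fusion layer would be deeper, but the structural point is the same: the third model learns how much to trust each branch's output.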
... Petra Molnar argued that a managerial orientation to migration has advanced experimentation with AI as a policy tool on migrant populations. And finally, Dimitri Van Der Meerssche has conceptualized the 'virtual border', a system 'scattered across digital systems without fixed territorial coordinates ... [that] operates as a central site of data extraction and social sorting ... a system of discrimination and division where the standards of hierarchy or inclusion ... are continuously kept in play'. Through this prism, emphasizing the co-production of infrastructures and their political environments, repurposed rescue rafts illuminate the fundamental confusion managerialism generates between persecution and protection. ...
Article
Full-text available
Looking at the migration management policies at Europe’s external Aegean border, this article examines how and why infrastructures of protection come to function as technologies of border violence. The repurposing of rescue rafts for extreme border violence in the Aegean Sea reveals a little-examined dark side of European ‘migration management’ as a process purportedly aimed to ‘civilize’ Greek coastguard operations. In transforming life-saving materials into life-threatening ones, patterns of border violence tell an alarming story about the relationship between law, politics, and the materiality of physical objects: absent concrete political and moral commitments to international protection, rescue’s physical infrastructure has been weaponized. The weaponized life raft further challenges the assumptions underpinning European ‘migration management’: the idea that technocratic solutions can fix structural injustices, or that ‘neutral’ assistance can ensure human rights compliance. The case study thus demonstrates the incompatibility of managerialism with human rights protection in the context of contemporary migration.
... For a fulsome analysis of the applicability of international human rights law and the variety of rights engaged in migration management technologies, see Petra Molnar (2019). This is one reason why the EU's General Data Protection Regulation (GDPR) requires the ability to demonstrate that the correlations applied in algorithmic decision-making are 'legitimate justifications for the automated decisions'. ...
Book
Full-text available
This open access book discusses the socio-political context of the COVID-19 crisis and questions the management of the pandemic emergency with special reference to how this affected the governance of migration and asylum. The book offers critical insights on the impact of the pandemic on migrant workers in different world regions including North America, Europe and Asia. The book addresses several categories of migrants including medical staff, farm labourers, construction workers, care and domestic workers and international students. It looks at border closures for non-citizens and disruption for temporary migrants, as well as at special arrangements made for essential (migrant) workers such as doctors, nurses, and farmworkers, ‘shipped’ to their destinations on special flights to make sure emergency wards are staffed, harvests are picked, and the food processing chain continues to function. The book illustrates how the pandemic forces us to rethink notions like membership, citizenship, belonging, but also solidarity, human rights, community, essential services or ‘essential’ workers, alongside an intersectional perspective including ethnicity, gender and race.
... For a fulsome analysis of the applicability of international human rights law and the variety of rights engaged in migration management technologies, see Petra Molnar (2019). This is one reason why the EU's General Data Protection Regulation (GDPR) requires the ability to demonstrate that the correlations applied in algorithmic decision-making are 'legitimate justifications for the automated decisions'. ...
Chapter
Full-text available
People on the move are often left out of conversations around technological development and become guinea pigs for testing new surveillance tools before they are brought to the wider population. These experiments range from big data predictions about population movements in humanitarian crises, to automated decision-making in immigration and refugee applications, to AI lie detectors at European airports. The Covid-19 pandemic has seen an increase in technological solutions presented as viable ways to stop its spread. Governments’ move toward biosurveillance has expanded tracking, automated drones, and other technologies that purport to manage migration. However, refugees and people crossing borders are disproportionately targeted, with far-reaching impacts on various human rights. Drawing on interviews with affected communities in Belgium and Greece in 2020, this chapter explores how technological experiments on refugees are often discriminatory, breach privacy, and endanger lives. The lack of regulation of such technological experimentation and a pre-existing opaque decision-making ecosystem create a governance gap that leaves room for far-reaching human rights impacts in this time of exception, with private-sector interests setting the agenda. Blanket technological solutions do not address the root causes of displacement, forced migration, and economic inequality, all factors exacerbating the vulnerabilities communities on the move face in these pandemic times.
... Agencies within the US Department of Homeland Security also use new technologies to automate migration-related and asylum-related decisions. However, human rights experts have criticised this approach for using vulnerable migrants and asylum-seekers as experimental subjects to train AI algorithms (Molnar, 2019). ...
Article
Full-text available
Although human activity constantly generates massive amounts of data, these data can be analysed mainly by the private sector and governmental institutes due to data accessibility restrictions. However, neither migrants (as the producers of these data) nor migration scholars (as scientific experts on the topic) are in a position to monitor or control how governments and corporations use such data. Big Data analytics and Artificial Intelligence (AI) technologies are promoted as cutting-edge solutions to ongoing and emerging social, economic and governance challenges. Meanwhile, states increasingly rely on digital and frontier technologies to manage borders and control migratory movements, and the defence industry and military–intelligence sectors provide high-tech tools to support these efforts. Worryingly, during the design and testing of algorithmic tools, migrants are often portrayed as a security threat instead of human beings with fundamental rights and liberties. Thus, privacy, data protection, and confidentiality issues continue to pose risks and challenges to migrant communities and raise important questions for the public and decision-makers alike. This comment seeks to shed light on the lack of effective regulation of AI and Big Data as they are applied in migration ‘management’. Additionally, from the perspective of privacy issues and immigrant rights (seeking asylum being a human right), it advocates improved access to Big Data for scientific research, which might act as a social control function for the smart border and countries' existing and ongoing migration governance practices. We argue that the use of Big Data and AI for migration governance requires much better collaboration between migrants (including the civil society and grassroots solidarity organisations that represent them), data scientists, migration scholars and policymakers if the potential of these technologies is to be realised in a way that is reasonable and ethical.
Numerous critical privacy questions arise regarding legal requirements, confidentiality, and rules of engagement, as well as ethical concerns about the (mis)use of new technologies. Given the secretive nature of the ongoing exploitation of migrant data by states and corporations, raising such questions is essential for progress.
... Risk can involve, for example, the violation of privacy through the collection of identifiable personal information, commercial gains obtained from suspending restrictions on testing technological products on people, or from the distribution of resources. From the perspective of refugee law, mistakes, malfunctions, or abuse of a biometric registration process or biometric data can result in the rejection of a claim for refugee status, for example because of mistaken identity. ...
Article
Full-text available
This article explores how the digital transformation of humanitarianism, and the refugee regime reshapes refugee lawyering. Much refugee lawyering is based on a traditional understanding of legal protection – and a focus on legal aid, law reform advocacy, and specific protection procedures (such as refugee status determination). Refugee lawyering must now grapple with the challenges offered by the digital transformation of international protection. This entails identifying emergent protection issues and how they relate to the law and legal claims. To that end, the article puts forward suggestions regarding the risks of the digital transformation as it pertains to the humanitarian space, the refugee management infrastructure, and refugee lawyering. The article also considers its implications for legal knowledge and the possibility that law may be “displaced” by technology. The article concludes by discussing what it means for refugee lawyering and for being accountable to the norms and values of rule of law standards, international protection, and to individual clients.
... Not only did it prove difficult to effectively 'add on' this new role in government; most of these efforts also failed after a short time (Jones, O'Brien, and Ryan 2018). It is timely to consider an institution's ability to act with an eye towards a longer time horizon, independent of the political agenda of the day, as AI is likely to be increasingly influential across various policy areas in the near and longer-term future (Raso et al. 2018; Sharkey 2019; Molnar 2019; Nemitz 2018) and will significantly disrupt many processes we have grown accustomed to as a society (Buolamwini and Gebru 2018; Anderson et al. 2014; Russell 2019). ...
Preprint
Full-text available
Governance efforts for artificial intelligence (AI) are taking on increasingly more concrete forms, drawing on a variety of approaches and instruments from hard regulation to standardisation efforts, aimed at mitigating challenges from high-risk AI systems. To implement these and other efforts, new institutions will need to be established on a national and international level. This paper sketches a blueprint of such institutions, and conducts in-depth investigations of three key components of any future AI governance institutions, exploring benefits and associated drawbacks: (1) purpose, relating to the institution's overall goals and scope of work or mandate; (2) geography, relating to questions of participation and the reach of jurisdiction; and (3) capacity, the infrastructural and human make-up of the institution. Subsequently, the paper highlights noteworthy aspects of various institutional roles specifically around questions of institutional purpose, and frames what these could look like in practice, by placing these debates in a European context and proposing different iterations of a European AI Agency. Finally, conclusions and future research directions are proposed.
... The European Union has proposed a number of 'Alternatives' that are filtered through a human rights discourse but amount to detention by another name; these proposals are marked by their characteristics of 'externalization' beyond European borders and refractions of remade but recognizable colonial governance strategies (see Lemberg-Pedersen (2019) on 'regional disembarkation platforms'; Silverman (2018a) and Tazzioli and Garelli (2020) on the 'hotspot' approach; and Kaytaz and Missbach, both in this issue, on the roles played by Turkey and Indonesia, respectively, in containing would-be refugees from seeking protection through onward movements). New technologies, including bail-determination algorithms at the US-Mexico border and AI-powered lie detectors at border checkpoints in Europe, are characterized as twenty-first-century 'reforms' but impinge on the human rights of migrants in unregulated and increasingly opaque spaces (Molnar 2019). Moving to the international level, Julia Morris (2017) argues not only that the global detention rights movement has shifted its position from wholesale abolition to the much more modest goal of improving detention standards through legislative and regulatory reforms, but that this shift has licensed the expansion not only of private firms looking to offer 'solutions' but of an entire capital investment market for governments to shop in. ...
Article
Full-text available
This special issue focuses on what a standpoint of carceral abolitionism brings to citizenship studies, with immigration detention as the key case study. The nine articles and editorial introduction probe the intersections of detention with current and potential forms of citizenship. The contributions collectively emphasize what citizenship studies also documents: similar to how the prison is a site of social control, immigration control is a nation-building site where access to permanent status and citizenship is closely filtered along racial, gender, class, ableist, and other lines of discrimination. Employing a plurality of case studies spanning North America, Europe, and Asia, and coming to the subject from a spectrum of interdisciplinary backgrounds, all contributors nonetheless foreground the recognition that deprivation of liberty is one of the most serious harms that someone can experience. Like the activists protesting police brutality around the world, the special issue contributors are thinking across the spectrum of de-funding policing, overhauling the ‘criminal justice’ system, eradicating prisons (penal abolitionism), and doing away with all forms of containment (carceral abolitionism). The collective findings reaffirm that neither the prison nor the detention centre are inevitable in the modern, democratic order. Abolishing all forms of immigration detention would open the door for the emergence of new visions of justice.
... Consequently, hand-in-hand with the digitisation of borders is a stretching of bordering responsibility beyond standard government agents. Molnar (2019) suggests that it is also an official tactic to spread and reduce the responsibility attached to the implementation of digital borders and migration control, so as to allow a degree of official experimentation in the management of populations. Molnar (2019, p. 306) states that: ...
Chapter
Full-text available
The integration of digital technologies into the processes of daily life has caused social interactions that were once based on physical information to become reliant on digitally stored and transmitted data. This chapter focuses on this technological/social transition, referred to as digitisation, in the context of migration and the crossing of borders between sovereign states. Drawing on extant social scientific analysis, I examine how digitisation is fracturing state boundaries and spreading bordering agencies across human-machine and machine-machine interactions. It is a process that involves, on the one hand, embedding state borders in virtual flows of information, and on the other hand, attaching them to the biometrically coded bodies of travellers. Additionally, this chapter will look at how digital technologies provide travellers with new tools to facilitate their migration projects, while at the same time altering the experience of travel, such as by displacing bordering labour onto travellers.
... Across public services, AI is often justified by public officials as a means to make public services more effective and less contingent on subjective judgments [13], or to ensure fairness in traditionally opaque decision-making and discretionary practices, thus leading to better decisions and mitigating individual caseworkers' arbitrary prejudice or bias. From the prediction of child harm [48], predictive policing [13], and determining eligibility to receive welfare support [23,27], to experimenting with automated decision-making in asylum and integration systems [36], AI and street-level algorithms are being implemented in public services in numerous ways. ...
Article
Studies of algorithmic decision-making in Computer-Supported Cooperative Work (CSCW) and related fields of research increasingly recognize an analogy between AI and bureaucracies. We elaborate this link with an empirical study of AI in the context of decision-making in a street-level bureaucracy: job placement. The study examines caseworkers' perspectives on the use of AI, and contributes to an understanding of bureaucratic decision-making, with implications for integrating AI in caseworker systems. We report findings from a participatory workshop on AI with 35 caseworkers from different types of public services, followed up by interviews with five caseworkers specializing in job placement. The paper contributes an understanding of caseworkers' collaboration around documentation as a key aspect of bureaucratic decision-making practices. The collaborative aspects of casework are important to show because they are subject to process descriptions that make case documentation prone to an individually focused AI, with consequences for how casework develops as a practice in the future. Examining the collaborative aspects of caseworkers' documentation practices in the context of AI and (potentially) automation, our data show that caseworkers perceive AI as valuable when it can support their work towards management (strengthening their cause if a case requires extra resources) and towards unemployed individuals (strengthening their cause in relation to the individual's case when deciding on and assigning a specific job placement program). We end by discussing steps to support cooperative aspects in AI decision-support systems that are increasingly implemented into the bureaucratic context of public services.
... This is being done using, for example, enhanced satellite imagery, photogrammetric reconstruction, lasers, remote sensing, ground radar, and other tools (Piracés 2018). There are also developments in artificial intelligence (Molnar 2019), machine learning and a host of other similar processes all of which can assist in doing work that involves extensive data that can be analysed using this type of technology. ...
Article
Full-text available
This article argues that while the right to the truth has come to the fore over the last few decades, victims around the world have not really felt its practical effect. It is argued that for the right to have real impact, human rights violations need to be documented and investigated, and the victims identified. This has, however, been limited in the past for a variety of reasons, including the inability to document violations to the extent needed. The article therefore considers how scientific and technological tools can help with this. It is argued that while the right to the truth has been assisted by the advent of DNA analysis, this tool is often not available in large parts of the world because of a lack of resources. Thus, it is argued that other types of techniques can, and must, be used to identify victims of human rights abuses. The article considers how ordinary people and NGOs can use a range of other tools, including a variety of apps and social media, to collect evidence of human rights violations, find people and fight impunity. The article also discusses why there ought therefore to be a greater reliance on open-source information and how it can be used to improve documentation and investigations of human rights violations. Examples that best embody the advantages and disadvantages of these scientific and technological tools are provided, as well as ideas on how to overcome the challenges they present.
... Borders are tools for governing space and individuals, achieved through laws, policing, and surveillance. Big data and biometrics have allowed for increasingly fine-grained, individualized interventions, discriminating with morally troublesome algorithms (Molnar, 2019). Nowhere is this more problematic than in the role borders play in racializing groups and producing illegal status. ...
Article
Full-text available
Interdisciplinary work on the nature of borders and society has enriched and complicated our understanding of democracy, community, distributive justice, and migration. It reveals the cognitive bias of methodological nationalism, which has distorted normative political thought on these topics, uncritically and often unconsciously adapting and reifying state‐centered conceptions of territory, space, and community. Under methodological nationalism, state territories demarcate the boundaries of the political; society is conceived as composed of immobile, culturally homogenous citizens, each belonging to one and only one state; and the distribution of goods is analyzed according to a stark opposition between the domestic and the international. This article describes how methodological nationalism has shaped central debates in political philosophy and introduces recent work that helps dispel this bias.
... The latter include: various forms of pervasive algorithmic bias [20,21], challenges around transparency and explainability [22,23]; the safety of autonomous vehicles and other cyberphysical systems [24], or the potential of AI systems to be used in (or be susceptible to) malicious or criminal attacks [25][26][27]; the erosion of democracy through, e.g., 'computational propaganda' or 'deep fakes' [28][29][30], and an array of threats to the full range of human rights [31,32]. The latter may eventually culminate in the possible erosion of the global legal order by the comparative empowerment of authoritarian states [33,34]. ...
Article
Full-text available
Recent progress in artificial intelligence (AI) raises a wide array of ethical and societal concerns. Accordingly, an appropriate policy approach is urgently needed. While there has been a wave of scholarship in this field, the research community at times appears divided amongst those who emphasize ‘near-term’ concerns and those focusing on ‘long-term’ concerns and corresponding policy measures. In this paper, we seek to examine this alleged ‘gap’, with a view to understanding the practical space for inter-community collaboration on AI policy. We propose to make use of the principle of an ‘incompletely theorized agreement’ to bridge some underlying disagreements, in the name of important cooperation on addressing AI’s urgent challenges. We propose that on certain issue areas, scholars working with near-term and long-term perspectives can converge and cooperate on selected mutually beneficial AI policy projects, while maintaining their distinct perspectives.
... As liberal democracies have moved to prioritize economic migrants, the world has seen the proliferation and dispersal of biopolitical technologies of surveillance and control that facilitate the unfettered movement of goods while simultaneously employing highly regulated and exclusionary systems for the movement of people, asylum seekers and migrants in particular (Côté-Boucher, 2015; Bell, 2006; Molnar, 2019; Walters, 2015). Neoliberal rationalities have facilitated the development of immigration policies based on rigid taxonomies of desirability that correlate human value with the potential for economization (Walsh, 2011). ...
Article
Full-text available
The European migrant crisis of 2015 brought to light the urgent need for solidarity and responsibility-sharing in dealing with large influxes of people fleeing war, conflict and persecution. This spirit was captured in two subsequent international agreements: the Global Compact on Refugees (GCR) (2018) and the Global Compact for Safe and Orderly Migration (GCM) (2018). In the midst of a very different kind of crisis, the global COVID-19 pandemic, the need for solidarity and responsibility-sharing is all the more imperative as COVID-19 has become a risk multiplier for asylum seekers, compounding existing drivers. By examining how Western nation states in the global North have responded to asylum seekers during the pandemic against the backdrop of existing international refugee law, practice, and policy, this essay seeks to evaluate the normative potential of the GCR and the GCM for the entrenchment of the principle of solidarity. Employing the theoretical framework of governmentality, it argues that despite the rhetoric of responsibility-sharing, the reactions of Western nation states reflect an existing trend toward exclusionary impulses, with countries reflexively reverting to patterns of state-centric, insular protectionism. Taking these issues into consideration, the essay goes on to focus on Canada's response to the COVID-19 pandemic in light of its proximity to and relationship with the United States to illustrate how biopower is being deployed to exclude in line with neoliberal rationalities, even in a country that is usually heralded as a beacon of humanitarianism. The essay concludes with a guarded diagnosis that warns of the potential for an international protection crisis should civil society fail to challenge prevailing biopolitics. Keywords: COVID-19, Asylum Seekers, Refugees, Solidarity, Responsibility-sharing, Governmentality, Biopower, Neoliberal, Canada, United States
... Should NHS X require Google to issue this update, as part of its duty to scrutinise its processors under Article 28 GDPR? Furthermore, in May 2019, Palantir, whose business model is based on buying and selling huge troves of (personal) data (Waldman, Chapman, and Robertson 2018), was criticised for selling the US immigration agency tracking software that enables the agency to take decisions in breach of human rights (Franco 2019; Molnar 2019). The close link of Faculty AI with the Prime Minister's special adviser and his Vote Leave campaign, which violated data protection laws (ICO 2019), also fuelled suspicions that these companies would profit from the contracts in ways incompatible with the need to provide strong safeguards against unlawful sharing of millions of individuals' sensitive personal data (Ruhaak 2020). ...
Article
With the objective of controlling the spread of the coronavirus, the UK has decided to create and, in early May 2020, was live testing a digital contact tracing app, under the direction of NHS X, a joint unit of NHS England and NHS Improvement. In parallel, NHS X has been building the backend datastore, contracting a number of companies. While the second iteration of the app should integrate a more privacy-friendly design, the project has continued to be criticised for its potential to increase government surveillance beyond the pandemic and for purposes other than tracing the spread of the virus. While I share these concerns, I argue that equal attention should be given to the collaboration between NHS X and the private sector because it has the potential to magnify the illegal collection and sharing of data. Systematic enforcement of the General Data Protection Regulation (GDPR) in the private sector would disrupt the current dynamics hidden in plain sight.
... 26. This does not mean that using AI as a tool will be uncontroversial -see, for instance, the problems that ensued when AI was used as a 'lie detector' in immigration procedures (Kendric, 2019;Molnar, 2019;Beduschi, 2020), or when facial recognition scans were implemented for prison visitors in England and Wales (Jee, 2019). 27. ...
Article
This article introduces the concept of Artificial Intelligence (AI) to a criminological audience. After a general review of the phenomenon (including brief explanations of important cognate fields such as 'machine learning', 'deep learning', and 'reinforcement learning'), the paper then turns to the potential application of AI by criminals, including what we term here 'crimes with AI', 'crimes against AI', and 'crimes by AI'. In these sections, our aim is to highlight AI's potential as a criminogenic phenomenon, both in terms of scaling up existing crimes and facilitating new digital transgressions. In the third part of the article, we turn our attention to the main ways the AI paradigm is transforming policing, surveillance, and criminal justice practices via diffuse monitoring modalities based on prediction and prevention. Throughout the paper, we deploy an array of programmatic examples which, collectively, we hope will serve as a useful AI primer for criminologists interested in the 'tech-crime nexus'.
Article
Purpose: The purpose of this paper is to explore the benefits of co-creation methods when designing information and communications technology (ICT) solutions to aid migrant integration by outlining the process of co-creating an innovative platform with migrants, including asylum seekers and refugees, and non-governmental organisation representatives and public service providers.
Design/methodology/approach: The study used mixed methods and was divided into three stages. Researchers carried out an extensive literature review and case studies, whilst data were collected via surveys, focus groups and in-depth interviews.
Findings: The paper demonstrates that co-creation methods are essential in the development of ICT solutions for vulnerable groups like migrants, asylum seekers and refugees, enabling researchers to counter the adverse effects of eurocentric bias by improving inclusiveness and trust in the platform vis-à-vis migrant users.
Originality/value: The research reflects on the development of an innovative platform, created and validated in situ with migrants and other end-users. It provides an often-unexplored analysis of the link between methodological approaches in ICT tools development (co-creation), and real-life impacts for migrants in terms of mitigating digital exclusion and white ethnocentric bias. The article complements two whitepapers and other policy briefs written on the results of this research that have informed EC policy-making in the area of migration, including the EU action plan on integration and inclusion 2021–2027.
Chapter
Full-text available
In recent years, increasing attention has been paid to the potential of new technologies in the field of migration governance, whether to support the deployment of humanitarian aid for migrants, including refugees, or to better manage administrative processes. There has been notable interest in developing artificial intelligence (AI) technologies to make predictions related to migrant movements and to automate visa processing. However promising, these technologies are also currently weakly regulated, in that they do not yet benefit from the regulatory framework that other innovations might have to protect human beings against unintended consequences. Although these technologies were used before the pandemic, COVID-19 has accelerated the deployment of AI in relation to migrants globally, both in higher-income countries and in those already experiencing humanitarian crises. COVID-19 has, in fact, been named a data-driven pandemic. The use of AI models to mitigate the spread and severity of the disease has been largely driven by predictive and scenario-based models, which aim to work as support for public health agencies’ decision-making. Artificial intelligence has also been used to track and control border crossing, and to administer social protection and vaccines. During COVID-19, we have also seen the vulnerability of certain migrants exacerbated, with women and gender non-binary persons adversely impacted globally. They tend to be at further risk of marginalization, as well as physical and sexual assault. Many non-binary persons, for example, may be fleeing persecution, and are at risk of violence even inside camps.
Article
Full-text available
Following the large-scale 2015-2016 migration crisis that shook Europe, deploying big data and social media harvesting methods became gradually popular in mass forced migration monitoring. These methods have focused on producing 'real-time' inferences and predictions on individual and social behavioral, preferential, and cognitive patterns of human mobility. Although the volume of such data has improved rapidly due to social media and remote sensing technologies, they have also produced biased, flawed, or otherwise invasive results that made migrants' lives more difficult in transit. This review article explores the recent debate on the use of social media data to train machine learning classifiers and modify thresholds to help algorithmic systems monitor and predict violence and forced migration. Ultimately, it identifies and dissects five prevalent explanations in the literature on limitations for the use of such data for A.I. forecasting, namely 'policy-engineering mismatch', 'accessibility/comprehensibility', 'legal/legislative legitimacy', 'poor data cleaning', and 'difficulty of troubleshooting'. From this review, the article suggests anonymization, distributed responsibility, and 'right to reasonable inferences' debates as potential solutions and next research steps to remedy these problems.
Article
Full-text available
This article considers the benefits and pitfalls of international human rights law as a component of artificial intelligence (AI) governance initiatives. It argues that (1) human rights law can serve as an authoritative resource for providing definitions to highly contested terms such as fairness or equality, (2) it can be used to address the problem of international regulatory arbitrage, and (3) it provides a framework to hold public and private actors legally accountable. At the same time, the paper considers recent critiques of human rights law and its application to AI governance, such as (1) lack of effectiveness; (2) inability to effect structural change, and finally, (3) the problem of cooptation. The article argues that while there is room for international human rights in the realm of AI governance, we should look to it with tempered expectations as to its promises and limitations.
Chapter
Artificial intelligence has been used in decisions concerning the admissibility, reception, and even deportation of migrants and refugees into a territory. Since decisions involving migration can change the course of people's lives, it is imperative to verify the neutrality of the algorithms used. This chapter analyses how AI has been applied to the decision-making process regarding migration, mainly evaluating whether AI violates international pacts related to the protection of human rights. This chapter considers the case studies of Canada, New Zealand, the United Kingdom, and a pilot project that might be implemented in Europe. It is concluded that automated decisions regarding immigration have the potential to discriminate against migrants, and likely have been doing so since their creation, due to intrinsic biases present in the current application methods and the systems themselves. Possible solutions that might help these systems provide equal treatment to migrants consist of greater transparency regarding the variables used in training. Consistent evaluations of methods and performance, to detect and remove biases emerging from historical data or structural inequity, might also be a solution. Keywords: Immigration policy, Refugee policy, Algorithmic discrimination, Immigration automated decision-making, Artificial intelligence, Algorithms and migration
Article
Full-text available
Data-driven artificial intelligence (AI) technologies are progressively transforming the humanitarian field, but these technologies bring about significant risks for the protection of vulnerable individuals and populations in situations of conflict and crisis. This article investigates the opportunities and risks of using AI in humanitarian action. It examines whether and under what circumstances AI can be safely deployed to support the work of humanitarian actors in the field. The article argues that AI has the potential to support humanitarian actors as they implement a paradigm shift from reactive to anticipatory approaches to humanitarian action. However, it recommends that the existing risks, including those relating to algorithmic bias and data privacy concerns, must be addressed as a priority if AI is to be put at the service of humanitarian action and not to be deployed at the expense of humanitarianism. In doing so, the article contributes to the current debates on whether it is possible to harness the potential of AI for responsible use in humanitarian action.
Article
This study aims to map digitally mediated injustice and to understand how judicial versus non-judicial bodies contextualize and translate such harm into human rights violations. This study surveys judicial and quasi-judicial cases and case reports by non-judicial bodies, mainly civil society organizations, international organizations, and media. It divides digitally mediated harms identified through the survey into three categories: direct, structural, and hybrid harm. It then examines how these three forms of harm are represented and articulated in judicial judgments and case reports. To differentiate between the three forms of digitally mediated harm, this study uses Iris Young's political philosophy of structural injustice and Johan Galtung's account of structural violence in peace studies. The focus of this study is understanding the forms of injustices that are present but rendered invisible because of how they are contextualized. Therefore, the epistemology of absence is applied as the theoretical approach, that is, interpretation of absence and invisibility. The epistemology of absence facilitates the identification of structural and intersectional injustices that are not articulated in the same way they are experienced in society. The assessment reveals four observations. 1) Structural injustice is rarely examined through a conventional adjudicatory process. 2) Harms of structural quality examined by courts are narrowly interpreted when translated into rights violations. 3) The right to privacy, often presented as a gateway right, addresses structural injustice only partially, as this right has a subject-centric narrow interpretation currently. 4) There are limitations to the mainstream way of seeing and representing risks and injustices in the digital space, and such a view yields metonymic reasoning when framing digitally produced harms. 
As a result, the conventional way of contextualization is blind to unconventional experiences of vulnerability, which renders structural and intersectional injustices experienced by marginalized communities invisible.
Article
Full-text available
Smart devices have become ubiquitous in everyday life, and it is commonplace that migrants are among the users of connected tools. With the realization that migrants rely on connectivity for multiple purposes, including to access information and services, many initiatives started working on developing ICT tools to assist migrants to integrate into their new society. Technological tools, however, come with inherent risks, many of which are linked to the processing of personal data of their users. This is especially true for migrants, who are often vulnerable due to their migration status, which is not always secure in the host country. To mitigate these risks, we argue that an expanded data protection impact assessment, analyzing not only the impacts related to data protection, but also to the specific situation of migrants, should be conducted at the outset of any technology development project to influence the development of safe and reliable ICT tools for this target population. A practical example of the application of such an assessment is provided, based on the authors’ experience as legal advisors in the REBUILD project, which is one of the current initiatives in the EU aiming to develop ICT tools for migrant integration.
Article
Full-text available
Rather than taking for granted the emergence and implementation of data-driven automated technologies as smooth tools of migration governance, I analyze how the discursive and political narration of intelligent borders is central for the socio-technical renderings of data-driven border and migration policing. To this end, I analyze the implementation of data-driven and semi-automated technologies to authenticate and recognize asylum seekers' identities and claims in the context of asylum administration and migration control in Germany, with a particular focus on the practice of forensic smartphone data extraction. First, I argue that discourses of intelligent borders produce smartphone data as representative of a person's history of flight and persecution, affecting a shift in asylum proceedings and decision-making that impacts political and legal personhood. Second, I show that current discursive framings of migration as a crisis in Europe make possible the proliferation of machine learning technologies in which invocations of intelligent borders reify migration management as a system of governance and administration that functions seamlessly. Third, I argue that local instances when data-driven and computational technologies emerge allow us to interrogate moments of failure and contestation and reveal the longer development of the legal and political convergence of racial securitization and migration as constitutive of the (partial) consolidation of power in the European border regime. As such, media technologies like the smartphone function to mediate contestations and struggles over the freedom of movement, recognition, and belonging.
Article
The securitization of the EU’s external borders and repressive asylum policies biopolitically control and discipline the bodies of refugees. In Germany, these developments hark back to a longer colonial history of racialization that the state collectively disavows. To approach this continuity of racialized citizenship, I will analyse a series of hunger strikes that were staged by refugees from 2012 till 2014 in Germany. By asking which possibilities lie in staging the hunger strike, I will argue that Germany’s necropolitical geography of detention, asylum, and deportation marks the racialized refugees’ bodies as disposable within the logics of citizenship. I propose that hunger strike is a form of becoming flesh, which makes visible how racialized violence is enacted on the refugees’ bodies. Becoming flesh articulates a politics of refusal that subverts the logics of recognition, empathy and suffering liberal rights discourses rely on and, instead, performs an embrace of the refugees’ abjection.