12. PROTECTING MIGRANTS
AGAINST THE RISKS OF ARTIFICIAL
INTELLIGENCE TECHNOLOGIES
Eleonore Fournier-Tombs: Senior researcher, United Nations University Institute in Macao SAR, China,
and Director of the Inclusive Technology Lab, University of Ottawa
Céline Castets-Renard: Professor and university research chair on accountable artificial intelligence
in a global context, University of Ottawa
Introduction
In recent years, increasing attention has been paid to the potential of new technologies in the field
of migration governance, whether to support the deployment of humanitarian aid for migrants,
including refugees, or to better manage administrative processes. There has been notable interest
in developing artificial intelligence (AI) technologies to make predictions related to migrant
movements and to automate visa processing. However promising, these technologies are also
currently weakly regulated, in that they do not yet benefit from the regulatory framework that
other innovations might have to protect human beings against unintended consequences.1
Although these technologies were used before the pandemic, COVID-19 has accelerated the
deployment of AI in relation to migrants globally, both in higher-income countries and in those
already experiencing humanitarian crises.2 COVID-19 has, in fact, been named a data-driven
pandemic.3 The use of AI models to mitigate the spread and severity of the disease has been largely
driven by predictive and scenario-based models, which aim to support public health agencies'
decision-making.4 Artificial intelligence has also been used to track and control border crossing,5
and to administer social protection and vaccines.6
During COVID-19, we have also seen the vulnerability of certain migrants exacerbated, with
women and gender non-binary persons adversely impacted globally.7 They tend to be at further risk
of marginalization, as well as physical and sexual assault.8 Many non-binary persons, for example,
may be fleeing persecution, and are at risk of violence even inside camps.9
1 Molnar, 2019.
2 McAulie et al., 2021.
3 Term rst coined by Roberto Rocha; see Rocha, 2020.
4 Khemasuwan and Colt, 2021.
5 Bastani et al., 2021.
6 Greig, 2021.
7 On women, see, for example, UN-Women, 2021; on non-binary persons, see Tschalaer , 2021.
8 Obradovic, 2015.
9 UNHCR, 2021b.
The impacts of COVID-19 on migration and migrants from a gender perspective
Risks based on gender are further exacerbated by the mere fact of being a migrant.10 In fact,
migrants are rarely consulted when it comes to AI or other technologies. For example, during
the pandemic, citizens of Canada were extensively consulted when their Government deployed
COVID-19 tracking applications.11 Migrants have often not been extended the same opportunity
to voice their concerns in relation to data collection, privacy, or algorithmic decision-making.12
In addition, AI-related risks for some migrants go beyond the technology itself; they arise from
the intersection of precarious migration status with the diverging objectives of migration
management. Migrants are often negotiating complex visa and asylum systems, while also facing
challenging immigration policies from destination countries. In this sense, governments tend to
have less incentive to consult them in the development of AI policies, since they may not even be
on a path to citizenship.
There are possible benets to using AI in a migration context, such as protecting girls and women
from tracking,13 or predicting displacement to prepare humanitarian logistics.14 However, there
are also many risks in doing so. Many organizations have recently warned against the unrestricted
use of AI in migration contexts, particularly during the COVID-19 pandemic.15 Michelle Bachelet,
United Nations Human Rights Commissioner, recently pressed for national and international AI
regulations that would protect the human rights of vulnerable populations.16 In this context, it is
critical to consider the possible risks that these technologies generate or enhance for migrants,
not only to mitigate them in the short term but also to inform future policymaking and regulatory
frameworks.
With this in mind, we discuss the impact on migrants of AI technologies used in the COVID-19
context, focusing on four types of technologies: migration forecasting; biometric identification;
satellite image recognition; and automated decision-making for immigration processing. We then
examine the risks of using these tools in a migration context, and detail examples of biases, errors
and other issues, and their impacts on female and non-binary migrants. Finally, we detail the current
legal and regulatory framework governing these technologies, and point to further policies that
could mitigate the risks that artificial intelligence technologies pose for migrants.
Dening relevant articial intelligence technologies
In this paper, we use the OECD denition of articial intelligence: “a machine-based system that
can, for a given set of human-dened objectives, make predictions, recommendations, or decisions
inuencing real or virtual environments”.17 While this denition can include a broad range of
technologies, we focus here on those that have direct application to migration, notably in relation
to:
1. Predicting the movement of people;
2. Providing digital identities to refugees and migrants;
3. Managing visa and border processes;
4. Managing asylum processes.18
10 See the discussion on vulnerability and migration as discussed in Beduschi, 2018.
11 Gamache, 2020.
12 Latonero et al., 2019.
13 Zinser and Thinyane, 2021.
14 UNHCR, 2021b.
15 EDRi, 2020.
16 OHCHR, 2021.
17 OECD, 2019.
18 See Bither and Ziebarth, 2020.
Each of these technologies uses input data from a variety of sources. These might include photos
and fingerprints of migrants, remote photos of informal dwellings or land, social media posts and
survey results. This input data is then used to make a variety of predictions affecting migrants,
such as identifying an individual migrant, predicting natural disasters affecting groups of migrants,
or predicting migrants' movements or needs. Artificial intelligence systems can integrate enormous
volumes of disparate data and make many kinds of predictions that influence human decision-making.
Figure 1. Uses of artificial intelligence technologies in migration management
[Diagram: migration forecasting and satellite image recognition feed into predicting migrant
movements; biometric identification feeds into identifying and tracking individuals; automated
decision-making for immigration feeds into processing visas and asylum claims; all three functions
support migration management.]
Source: Authors' elaboration.
“Biometric identication” applies mathematical measurements to biology in order to provide
a unique personal record.19 Biometrics are typically separated into three categories: biological
(measuring DNA and blood patterns); morphological (measuring facial images, ngerprints, iris and
retina features, and even voice patterns); and behavioural (measuring gait, handwriting, or keyboard
strokes).20 Although experimentation is happening in all categories, morphological biometrics are
most used in relation to migration, notably facial recognition, ngerprinting and iris recognition. AI
is used to build models that will identify a match between a person’s features and those stored in
a central database.
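In practice, such matching is often done by converting each face or fingerprint into a numeric embedding vector and comparing it against stored embeddings. The sketch below is purely illustrative: the record names, vectors and threshold are invented and do not reflect any deployed system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.8):
    """Return the stored record most similar to the probe embedding,
    or None if no similarity clears the decision threshold."""
    name, score = max(
        ((name, cosine_similarity(probe, emb)) for name, emb in database.items()),
        key=lambda pair: pair[1],
    )
    return name if score >= threshold else None

# Hypothetical database of enrolment embeddings (in real systems these
# are produced by a neural network from photos or fingerprints).
db = {
    "record_001": [0.9, 0.1, 0.2],
    "record_002": [0.1, 0.9, 0.3],
}
print(best_match([0.88, 0.12, 0.25], db))  # matches record_001
print(best_match([0.0, 0.0, 1.0], db))     # no confident match: None
```

The threshold is a calibration choice: lowering it produces more matches (including more false matches), while raising it produces more "no match" outcomes, a trade-off discussed later in this chapter.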
“Satellite image recognition” is a means of using artificial intelligence to interpret satellite imagery,
notably by recognizing certain entities in images, such as buildings, people and land cover, often to
measure changes over time. Like biometric identification, satellite image recognition uses artificial
intelligence to build models that will recognize certain patterns in the images based on a database
showing features of informal and formal dwellings, for example. Unlike biometric identification,
however, satellite image recognition does not aim to identify individuals, but rather is used to make
general predictions about what is contained in a particular image.
19 OPC, 2011.
20 Dierent categories of biometrics are discussed here by the biometrics company Idemia; see Idemia, 2021.
“Migration forecasting” involves developing artificial intelligence models that will predict the
number of migrants arriving in certain locations,21 usually on a 1- to 6-month horizon.22 It can also
involve predicting the needs of those migrants, allowing for preparation of food, water and medical
supplies, along with shelter.
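Structurally, such forecasts extrapolate from historical arrival counts and covariates. The sketch below, with invented numbers and a deliberately naive trend projection, shows only the shape of the problem; operational models add weather, conflict and economic indicators, and report uncertainty alongside each estimate.

```python
def forecast_arrivals(history, horizon=3, window=6):
    """Naive forecast: project the average month-over-month change in
    the most recent `window` observations forward `horizon` months."""
    recent = history[-window:]
    avg_change = (recent[-1] - recent[0]) / (len(recent) - 1)
    forecasts = []
    level = history[-1]
    for _ in range(horizon):
        level = max(0, level + avg_change)  # arrivals cannot be negative
        forecasts.append(round(level))
    return forecasts

# Monthly arrivals at a hypothetical border point.
history = [1200, 1250, 1300, 1400, 1450, 1500]
print(forecast_arrivals(history))  # [1560, 1620, 1680]
```

The humanitarian value of such a forecast lies in logistics preparation; the risk, discussed below, is that any extrapolation of this kind carries error that decision makers must account for.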
“Automated decision-making systems in immigration” rely on algorithms, and are used by
governments to make a decision about a migration visa or claim for international protection, for
example, to determine either the public security risk of accepting a migrant across its borders, or
the possibility of fraud through misrepresentation. These tools use the information provided in the
migrant's case file, as well as, occasionally, external information, to recommend a decision to the
case officer.23
Impacts of articial intelligence technologies on migrants
International organizations and governments have, over the last few years, invested considerably in
technological innovation that aects displaced persons. This investment has increased during the
pandemic.24 Governments have also increased the use of technologies to monitor borders, notably
to restrict the movement of migrants to limit the spread of the virus.25
Predicting movements of people: Forecasting and satellite imagery
During the COVID-19 pandemic, national governments and international organizations alike used
artificial intelligence methods to predict the number of infections and severity of illness among
migrants, notably in refugee camps. These methods can be categorized into three types:
1. Predicting new migration flows related to compounding disasters or economic crises;26
2. Including migration in epidemiological models to predict the spread and severity of the
disease;27
3. Predicting the spread and severity of the disease among migrants who were not
necessarily moving between borders, for example, those in informal dwellings.28
Many of these forecasting models included satellite imagery data, which allowed for the inclusion of
climatic factors as well as the visual observation of migration patterns. These data can be accessed
relatively easily from Google Earth Engine,29 as well as through partnerships with space agencies
such as the European Space Agency.30
While some of these models served to inform humanitarian interventions for the benefit of
migrants,31 others resulted in an increase in movement restrictions for migrants. For example,
researchers in India found that the unplanned movement of migrant workers threatened to increase
the number of COVID-19 cases in the country. They recommended that the Indian Government
limit internal migration and implement smartphone tracking systems, similar to COVID-19 tracking
apps.32
21 UNHCR, 2021a.
22 See United Nations Global Pulse, n.d., for numerous examples of this kind of research.
23 For a more detailed description, see Citizen Lab, 2018.
24 See, for example, the partnership between Google and the United Nations for Artificial Intelligence for crisis response: Google, 2021.
25 European Union, 2021.
26 This has been explored, for example, in disaster risk early warning systems. See ITU, 2020.
27 Centre for Humanitarian Data, 2020.
28 United Nations Global Pulse, n.d.
29 Google Earth, 2021.
30 ESA, 2021.
31 United Nations Global Pulse, 2020.
32 Pal et al., 2021.
Providing digital identities: Biometric identification
Biometric identification is increasingly used in migration management. The technology has the
potential to facilitate border crossings and case management by reducing fraud and maintaining
accurate and transferable records.33 For those crossing internal borders, it can also facilitate the
transfer of social protection services from one jurisdiction to the next. India, for example, has
implemented Aadhaar,34 a biometric identification system used to access welfare, manage voting,
and move between states while retaining access to services.
However, there are some concerns in relation to the use of this technology. Biometric identification
systems that are built for the population at large may accidentally exclude some groups, such as
migrants, making it more difficult for them to access services than before. The problem of exclusion
was documented in the case of Aadhaar, which did not account for those who might not physically
be able to provide biometrics, such as manual workers with damaged fingerprints.35 The exclusion
of those speaking indigenous languages, which are not currently supported by the platform, has
also been described, as have the effects on migrant workers inside the country, who may also
struggle with access.36
Automated decision-making systems
Visa requests, asylum claims and other processes related to establishing short- and longer-term
residency in new countries are increasingly processed by automated decision-making systems.
These systems can take data provided by migrants in their case file – such as personal information,
images and past locations – to provide a recommendation to the case officer. New data, such
as data provided by a lie detector, are also sometimes collected.37 These systems have also been
known to include social media analysis or other analysis of Internet data to provide a bigger
picture of the migrant's life and to identify whether the migrant could pose a security risk in the
destination country. This, for example, is known to be the case for visa applicants to the United
States, who may be asked to provide their social media accounts.38
Several issues have been raised in relation to these activities, notably concerning the fairness and
transparency of the decision-making.39 These systems have also been critiqued for lacking nuance,
particularly in cases of intersectional identities, as would be the case with female and non-binary
migrants.40
Furthermore, this type of analysis falls into the category of artificial intelligence technologies known
as behavioural analytics, which typically try to predict human behaviour based on certain factors. A
migrant may, therefore, be categorized as a potential public security risk and denied a visa, causing
significant upheaval in their life. A study by the University of Toronto's Citizen Lab has raised
concerns about the potential for racial and gender discrimination in these tools, which would
increase the vulnerability of already vulnerable migrants.41
33 IOM, n.d.
34 Government of India, 2021.
35 Krishna, 2018.
36 Panigrahi, 2019.
37 Molnar, 2019.
38 Lazzarotti and Peck, 2020.
39 Citizen Lab, 2018.
40 Maat for Peace, 2018.
41 Citizen Lab, 2018.
Risks of articial intelligence technologies for female and non-binary migrants
During COVID-19, women and non-binary migrants have had worse economic outcomes than
other groups, increasing any pre-existing vulnerabilities they may have had.42 Inappropriate uses
of AI technologies during the COVID-19 pandemic have also been shown to increase their
vulnerability, notably due to algorithmic errors, biases and lack of privacy.
Table 1 shows the types of data used by each category of AI, their potential benet for migrants
and the risks associated with their use.
Table 1. Data used by types of artificial intelligence, and their benefits and risks for migrants

Biometric identification
  Input data sources: Facial images, fingerprints, iris and retina scans
  Potential benefits to migrants: Ease of identification without requiring documentation
  Potential damages to migrants: Misidentification, resulting in economic effects and deportation; surveillance

Automated decision support
  Input data sources: Visa and asylum case files, social media data
  Potential benefits to migrants: Increased speed of visa and asylum processing
  Potential damages to migrants: Inappropriate decisions with no possibility of appeal; surveillance

Satellite image recognition
  Input data sources: Images of informal camps, urban dwellings, land cover
  Potential benefits to migrants: Preparing the humanitarian community for migrant arrivals and natural disasters
  Potential damages to migrants: Misinterpretation, leading to logistics errors; privacy breaches

Forecasting
  Input data sources: Social media data, socioeconomic data, public health data
  Potential benefits to migrants: Preparing the humanitarian community, governments and border agencies for migrant movements
  Potential damages to migrants: Forecasting errors, leading to logistics errors; privacy breaches
Managing errors and uncertainty in artificial intelligence
AI tools are predictive, in that they use past data to inform a decision, and so will always have a
certain rate of error and uncertainty. If a relevant indicator was not included in the model, or if the
data used are flawed or biased, the prediction can be wrong.43 Similarly, AI models predict future
events, such as the arrival of migrants, based on certain assumptions and past data. In migration
settings, there is a high degree of uncertainty,44 related to unexpected events and conditions
and mistaken assumptions. The forecasts made using artificial intelligence models can be useful in
informing decision-making, but there is always the possibility of error.
In using artificial intelligence models, it is important to clearly communicate error rates and
uncertainties to decision makers. Error rates are usually calculated by reserving a portion of data
for testing.45 On these test data, the model's results are compared to actual results, and two
numbers are calculated: the number of false positives – for example, entities that were identified
as informal shelters but are really something else (such as a car) – and the number of false negatives,
which is the number of informal shelters (in our example) that were not recognized by the
algorithm.46 AI models can be calibrated to be
42 UN-Women, 2021.
43 Cortes et al., 1995.
44 Napierala et al., 2021.
45 Techopedia, n.d.
46 Google, 2020.
more tolerant of false positives or of false negatives.47 It is therefore important for policymakers
to consider not only whether an entity is wrongly identified as something of importance, but also
whether it goes unseen by the model.
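The error-counting procedure described above can be made concrete. In the sketch below (the labels and scores are invented), a detector outputs a confidence score for each candidate object, and the choice of decision threshold trades false positives against false negatives:

```python
def confusion_counts(labels, scores, threshold):
    """Count false positives and false negatives for a binary
    classifier that predicts 'shelter' when score >= threshold."""
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    return fp, fn

# Ground truth (1 = informal shelter, 0 = something else, e.g. a car)
labels = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.2, 0.8, 0.5, 0.1]

# A lower threshold tolerates more false positives but misses fewer
# shelters; a higher threshold does the opposite.
print(confusion_counts(labels, scores, threshold=0.5))  # (2, 1)
print(confusion_counts(labels, scores, threshold=0.8))  # (0, 2)
```

For humanitarian logistics, the cost of an unseen shelter (a false negative) may be far higher than the cost of a spurious detection, which is why the calibration choice is a policy decision rather than a purely technical one.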
In their research, Buolamwini and Gebru found that women and people of colour suffered both
effects in common facial recognition software.48 They were either not identified at all, or they were
mistakenly matched to a different person. They found that the software was trained on faces that
were more than 83 per cent white and 77 per cent male, resulting in misidentification rates of up to
46 per cent for women of colour. This finding is particularly relevant to our analysis, as
vulnerable migrants already face gender-based discrimination and racism.49
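Disparities of this kind can be surfaced by computing error rates per demographic group rather than a single aggregate figure, a basic step in auditing a deployed model. A minimal sketch, using an invented audit log:

```python
from collections import defaultdict

def error_rate_by_group(records):
    """Compute the misidentification rate separately for each
    demographic group, given (group, correct) pairs."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit log: (group label, whether the match was correct).
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
print(error_rate_by_group(records))  # {'group_a': 0.25, 'group_b': 0.75}
```

An aggregate error rate over this log would be 50 per cent and would hide the fact that one group experiences three times the error rate of the other.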
Misidentication errors in facial recognition and other biometric processes such as ngerprinting
have been documented,50 whether they are matches with the wrong person or simply no match at
all.51 These have been linked to migrants having their asylum cases rejected or delayed, as well as
having problems accessing basic services.52
Tracking, surveillance and privacy
In addition to error management, one of the most important issues for some migrants is the
possibility of personally identifiable information being shared widely. This can be due either to
improper consideration of the right to privacy of migrants,53 or to cybersecurity breaches that put
the migrants at risk.54 Data identifying migrants personally can include their location, age, sexual
identification, gender identification, ethnicity and disability. In many cases, this information can be
dangerous when shared, leading to discrimination, violence and even trafficking or re-trafficking.55
Although there are many initiatives using AI to locate trafficked migrants,56 there are also concerns
that data about vulnerable migrants can be used to identify, recruit and traffic them.57 Smugglers
who use social media to propose safe passage to Europe for work, for example, can begin to
sexually exploit women and unaccompanied children as repayment.58 Sex traffickers are also
known to exploit data available online to identify future victims.59
A considerable amount of data is required when developing an AI tool. The United States, for
example, is said to have a database that includes data about hundreds of millions of individuals who
can be identified through remote biometric systems.60 In certain cases, a broader use of biometric
tools – including not only facial recognition and fingerprints but also image recognition through
social media – has enabled governments to arrest and detain undocumented migrants. A striking
example is the United States Immigration and Customs Enforcement agency, which has partnered
with the analytics firm Palantir since 2016 to track and apprehend undocumented migrants in the
country.61
47 Russell, 2020.
48 Buolamwini and Gebru, 2018.
49 Astles, 2020.
50 Kaurin, 2019.
51 Oxfam and The Engine Room, 2018.
52 Gelb and Clark, 2013.
53 Sandvick, 2021.
54 ICRC, 2022.
55 Kosevaliska, 2021.
56 See, for example, Global Emancipation Network, n.d.
57 UNODC, 2019.
58 Zenko, 2017.
59 Wulfhorst, 2017.
60 Djanegara, 2021.
61 Amnesty International, 2020.
A 2021 report by the Transnational Institute and Stop Wapenhandel documents the border
surveillance industry,62 highlighting the activities of 23 companies and several investment firms
lobbying governments to take a more "militarized" approach to border control involving artificial
intelligence. Notably, the report documents the "Smart Borders" sector,63 which involves biometric
identification as well as phone and social media tracking.
More recently, the deployment of vaccine passports in numerous countries has accelerated
the implementation of digital identity management, with governments now tracking access
to restaurants, bars, public spaces and social services through a combination of identification
and health certification. This is of concern to anyone in a vulnerable social position, especially
undocumented migrants, who will see their access to public spaces nearly entirely curtailed.64
Protection of migrants' rights to non-discrimination and privacy
Migrants are currently protected under different international conventions. As we will see below,
the rights to privacy and non-discrimination, which are included in the Universal Declaration of
Human Rights and the 1966 International Covenant on Civil and Political Rights, are the rights
most often cited as threatened by artificial intelligence.65 However, migrants are also protected
internationally by the 1951 Refugee Convention and its 1967 Protocol,66 the 1954 Convention
Relating to the Status of Stateless Persons,67 and the International Convention on the Protection
of the Rights of All Migrant Workers and Members of Their Families.68 Women's rights are also
protected under the Convention on the Elimination of All Forms of Discrimination Against
Women,69 which includes further detail on non-discrimination.
Articial intelligence is known to have the tendency to reinforce existing societal biases, if left
unchecked.70 Uses of AI that reinforce gender stereotypes and propagate discrimination against
women exist even outside of the migration context.71 For example, as we have seen, there are
certain cases in which AI systems are less accurate for women than for men.72 Some AI systems
used for human resources were also found to recommend against the hiring of women,73 and some
that were used to assess loan applications approved lower nancial amounts for women,74 other
indicators being equal. From a legal perspective, however, proving discrimination can be dicult.75
Although the principles of equality and non-discrimination are protected under international law,
they are particularly challenging to put into practice when AI is involved.76
There have been several attempts recently to use existing legal instruments to protect human
rights when using AI. For example, the Court of Justice of the European Union77 has received
requests for a preliminary ruling concerning the interpretation of exceptions on privacy and data
protection by Member States,78 read in light of the Charter of Fundamental Rights of the European
Union.79 Notably, these requests argued that national governments' automated analysis of traffic and location
62 TNI and Stop Wapenhandel, 2021.
63 EDRi, 2018.
64 Renieris, 2021.
65 CDPDJ, 2021.
66 United Nations, 1951.
67 United Nations, 1954.
68 United Nations, 1990.
69 United Nations, 1979.
70 Noble, 2018.
71 Fournier-Tombs and Castets-Renard, 2022.
72 Buolamwini and Gebru, 2018.
73 Dastin, 2018.
74 The Guardian, 2019.
75 Bathaee, 2017.
76 Xenidis and Senden, 2020.
77 CJEU, 2020.
78 Article 15(1) of Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and
the protection of privacy in the electronic communications sector.
79 Articles 4, 6, 7, 8 and 11 and article 52(1) and article 4(2) of the Treaty of the European Union.
data should be limited to serious threats to national security. Privacy related to geolocalization,
which could relate to satellite detection and data used in forecasting, might therefore become a
more important consideration in the next few years.
In the meantime, national and international bodies have worked towards stronger regulations
that would protect the most vulnerable. A notable example is the European Commission's recent
regulatory proposal on AI (the European Union Artificial Intelligence Act),80 which takes a risk-based
approach, categorizing AI systems into four groups: no risk, low risk, high risk and unacceptable
risk. Many of the high-risk uses of AI are relevant in the context of international migration. These
include biometric identification; management and operation of critical infrastructure; education;
employment and access to employment; access to services; law enforcement; migration, asylum
and border management; and administration of justice and democratic processes. Companies
and organizations wishing to deploy high-risk AI solutions in the European Union market will
be required to obtain certification first. This certification process will involve providing technical
documentation demonstrating that data biases, errors, privacy considerations and discrimination
have been addressed before deployment.
The Oce of the United Nations Commissioner for Human Rights recently called for a ban on all
uses of articial intelligence threatening human rights.81 In doing so, she cited several international
instruments, including article 12 of the Universal Declaration of Human Rights,82 which protects
privacy, and the International Covenant on Civil and Political Rights. Echoing the European
Commission’s regulatory proposal, she also called for the regulation of high-risk uses of AI.
Some national governments are currently considering how to develop their own regulatory
frameworks. For example, the Government of Canada published a directive on automated
decision-making,83 which sets standards for the use of AI by the federal Government. This directive
applies to automated decision-making in the case of migrants. The United States has also released
two drafts of guidance for regulating AI applications, which set out some of the values, such as
fairness and non-discrimination, that should be prioritized when developing regulation.84 China
has also released, in the last few years, several documents related to AI regulation, including one on
ethical norms for new generation AI,85 which contains six ethical requirements, including fairness,
justice and privacy.
In addition, many organizations working with migrants have developed ethical guidelines
for their uses of artificial intelligence, such as the Humanitarian Data Science and Ethics Group
(DSEG), which published a framework for the ethical use of advanced data science methods in the
humanitarian sector.86 On a larger scale, the United Nations Educational, Scientific and Cultural
Organization (UNESCO) spearheaded the adoption of a recommendation on the ethics of artificial
intelligence,87 which will serve to inform standards and regulations globally. Although non-binding,
these guidelines may serve to inform humanitarian standards, which would regulate the way that
organizations provide support to migrants.
A key distinction among the existing human rights frameworks, the European Union Artificial
Intelligence Act and these ethical frameworks is where each positions itself in relation
to the deployment of artificial intelligence systems. The European Union Artificial Intelligence
Act will require certification of these systems before they are deployed to the public. Like the
European Union Artificial Intelligence Act, ethical frameworks attempt to pre-empt human rights
80 European Union, 2021.
81 OHCHR, 2021.
82 Article 12: “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and
reputation. Everyone has the right to the protection of the law against such interference or attacks.”
83 Government of Canada, 2021.
84 United States Government, 2020.
85 People’s Republic of China Ministry of Science and Technology, 2021.
86 DSEG, 2020.
87 UNESCO, 2021.
violations by providing a verification process that should take place before the technology is used.
Certification schemes such as these highlight the need to prevent violations of non-discrimination
and privacy in the first place, rather than bringing cases to human rights tribunals after deployment,
when such violations may be difficult to repair.
Moving forward in a context of weak citizenship and divergence of intents
As we have seen, migrants are particularly vulnerable to the risks presented by certain uses of
AI, especially during COVID-19. Some migrants may be in situations of high uncertainty in which
they cannot advocate for their rights. They may be unable to express their privacy preferences
clearly and safely, or to contest algorithmic errors. As such, they may have very little recourse when
technologies that affect them are deployed.
Furthermore, one of the greatest challenges in using AI in the context of migration is the divergence
of intents among the various actors involved. Humanitarian organizations might want to support
migrants and mitigate threats to them. National governments might have mixed intentions,
supporting humanitarian work while encouraging some forms of migration and limiting others. Private
companies, in turn, which develop a large portion of the AI technologies that affect migrants, may be
driven by profit and ill-equipped to protect the rights of migrants without guidance.
In this paper, we examined four AI technologies: migration forecasting, satellite image recognition,
biometric identification and automated decision-making for immigration. These are used in
migration management, notably to predict migrant movements, process visas and asylum claims,
and identify and track migrants. We presented several risks in using these technologies, including
errors and uncertainty, surveillance and privacy concerns, and discrimination. We
further showed that the legal framework protecting migrants' rights when it comes to AI is not yet
adequate, although it is changing rapidly.
Female and gender non-binary migrants are particularly vulnerable to inappropriate uses of artificial
intelligence, as well as to technological errors. During the COVID-19 pandemic, not only has this
vulnerability increased, but so has the use of artificial intelligence and other new technologies,
leading to an increased risk of harm to members of this group. This can be addressed primarily
in two ways: by considering migrants' rights when developing new AI regulations; and by working
directly with migrants to mitigate some of these risks.
Technological innovation has always been a part of the international community’s response to
migration. As the regulation of AI systems continues to evolve, paying attention to the protection
of migrants will help to distinguish between innovations that will support migrants and those that
will put them at risk.
12. Protecting migrants against the risks of artificial intelligence technologies
References*
Amnesty International
2020 Failing to do right: The urgent need for Palantir to respect human rights. September.
Astles, J.
2020 Intersecting discriminations: Migrants facing racism [blog]. 4 June.
Bastani, H., K. Drakopoulos, V. Gupta, I. Vlachogiannis, C. Hadjicristodoulou, P. Lagiou, G. Magiorkinis,
D. Paraskevis and S. Tsiodras
2021 Ecient and targeted COVID-19 border testing via reinforcement learning. Nature,
599:108–113.
Bathaee, Y.
2017 The articial intelligence black box and the failure of intent and causation. Harvard Journal of
Law and Technology, 31(2):890–938.
Beduschi, A.
2018 Vulnerability on trial: Protection of migrant children’s rights in the jurisprudence of international
human rights courts. Boston University International Law Journal, 36(1):55–85.
Bither, J. and A. Ziebarth
2020 AI, digital identities, biometrics, blockchain: A primer on the use of technology in migration
management. Migration Strategy Group on International Cooperation and Development, June.
Buolamwini, J. and T. Gebru
2018 Gender shades: Intersectional accuracy disparities in commercial gender classification.
Proceedings of Machine Learning Research, 81:1–15.
Centre for Humanitarian Data
2020 OCHA-Bucky: A COVID-19 model to inform humanitarian operations. 28 October.
Citizen Lab
2018 Bots at the gate: A human rights analysis of automated decision-making in Canada’s immigration
and refugee system. Toronto.
Commission des droits de la personne et des droits de la jeunesse (CDPDJ)
2021 Submission to the Commission d'accès à l'information on artificial intelligence consultation.
May.
Cortes, C., L.D. Jackel and W.-P. Chiang
1995 Limits on learning machine accuracy imposed by data quality. Proceedings of the First International
Conference on Knowledge Discovery and Data Mining (U. Fayyad and R. Uthurusamy, eds.).
The American Association for Artificial Intelligence, pp. 57–62.
Dastin, J.
2018 Amazon scraps secret AI recruiting tool that showed bias against women. Reuters, 11 October.
Data Science and Ethics Group (DSEG)
2020 A framework for the ethical use of advanced data science methods in the humanitarian sector.
April.
Djanegara, N.
2021 How 9/11 sparked the rise of America’s biometrics security empire. Fast Company,
10 September.
Court of Justice of the European Union (CJEU)
2020 Quadrature du Net, French Data Network, Fédération des fournisseurs d’accès à Internet
associatifs, Igwan.net v. Premier ministre, Garde des Sceaux, ministre de la Justice, ministre
de l’Intérieur, ministre des Armées, and Ordre des barreaux francophones et germanophone,
Académie Fiscale ASBL, UA, Liga voor Mensenrechten ASBL, Ligue des Droits de l’Homme
ASBL, VZ, WY, XX v. Conseil des ministres, Judgment, Grand Chamber, Joined Cases C-511/18,
C-512/18 and C-520/18, 16 November.
European Digital Rights (EDRi)
2018 Smart Borders: The challenges remain a year after its adoption [blog]. 25 July.
2020 Technology, migration and illness in the time of COVID-19 [blog]. 15 April.
* All hyperlinks were active at the time of writing this report in February 2022.
European Space Agency (ESA)
2019 Space for humanitarian action: Space19+ proposals. 26 November.
European Union
2021 Articial Intelligence at EU Borders: Overview of Applications and Key Issues. European Parliamentary
Research Services, Brussels.
Fournier-Tombs, E. and C. Castets-Renard
2022 Algorithms and the propagation of gendered cultural norms. In: IA, Culture et Médias
(V. Guèvremont and C. Brin, eds.), Presses de l’Université de Laval (forthcoming in French).
Gamache, V.
2020 COVID-19: Québec lance une consultation publique sur une application de traçage [Quebec
launches a public consultation on a tracking application]. Radio Canada, 9 July.
Gelb, A. and J. Clark
2013 Identication for development: The biometrics revolution. Center for Global Development
working paper 315. 28 January.
Global Emancipation Network
n.d. About [webpage].
Google
2020 Classication: True vs. false and positive vs. negative. Machine learning crash course.
2021 Collaborating with the UN to accelerate crisis response [blog]. Keyword team, 9 September.
Google Earth
2021 Earth Engine [webpage].
Government of Canada
2021 Directive on automated decision-making.
Government of India
2021 Unique Identication Authority of India homepage.
Greig, J.
2021 How AI is being used for COVID-19 vaccine creation and distribution. TechRepublic, 20 April.
Idemia
2021 What is biometrics?
International Committee of the Red Cross (ICRC)
2022 Sophisticated cyber-attack targets Red Cross Red Crescent data on 500,000 people.
19 January.
International Organization for Migration (IOM)
n.d. Biometrics.
International Telecommunication Union (ITU)
2020 A safer, more resilient world: Reducing disaster risks with AI. 20 October.
Kaurin, D.
2019 Data protection and digital agency for refugees. World Refugee Council research paper no. 12,
May.
Khemasuwan, D. and H.G. Colt
2021 Applications and challenges of AI-based algorithms in the COVID-19 pandemic. British Medical
Journal Innovations, 7:387–398.
Kosevaliska, O.
2021 Human security of migrants in the on-line world. In: Sicurezza umana negli spazi navigabili: Sfide
comuni e nuove tendenze [Human security in navigable spaces: Common challenges and new
trends] (G. Bevilacqua, ed.). Editoriale Scientifica, pp. 165–175.
Krishna, G.
2018 Fixing Aadhaar bugs: Putting a finger on the biometric problem. Business Standard, 16 January.
Latonero, M., K. Hiatt, A. Napolitano, G. Clericetti and M. Penagos
2019 Digital identity in the migration and refugee context: Italy case study. Data and Society,
15 April.
Lazzarotti, J.J. and A.L. Peck
2020 Privacy issues of U.S. collection of social media information from visa applicants. 22 June.
National Law Review, 10(174).
Maat for Peace
2018 Impact of digital technology on discriminatory policies in border management. Report
submitted to the special rapporteur on contemporary forms of racism, May.
McAulie, M., J. Blower and A. Beduschi
2021 Digitalization and articial intelligence in migration and mobility: Transnational implications of
the COVID-19 pandemic. Societies, 11(4):135.
Molnar, P.
2019 Technology on the margins: AI and global migration management from a human rights
perspective. Cambridge International Law Journal, 8(2):305–330.
Napierala, J., J. Hilton, J.J. Forster, M. Carammia and J. Bijak
2021 Toward an early warning system for monitoring asylum-related migration flows in Europe.
International Migration Review, 56(1):33–62.
Noble, S.
2018 Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press,
New York.
Obradovic, M.
2015 Protecting female refugees against sexual and gender-based violence in camps. United Nations
University, 11 September.
Oce of the Privacy Commissioner for Canada (OPC)
2011 Biometrics and the challenges to privacy. February.
Oce of the United Nations High Commissioner for Human Rights (OHCHR)
2021 The right to privacy in the digital age. Report of the United Nations High Commissioner for
Human Rights (A/HRC/48/31), 13 September.
Oce of the United Nations High Commissioner for Refugees (UNHCR)
2021a Project Jetson [webpage].
2021b UNHCR statement on the situation of LGBTIQ+ refugees in Kakuma camp. 25 March.
Organisation for Economic Co-operation and Development (OECD)
2019 Articial Intelligence in Society. Paris.
Oxfam and The Engine Room
2018 Biometrics in the humanitarian sector. March.
Pal, S.C., A. Saha, I. Chowdhuri, P. Roy, R. Chakrabortty and M. Shit
2021 Threats of unplanned movement of migrant workers for sudden spurt of COVID-19 pandemic
in India. Cities, 109:6.
Panigrahi, S.
2019 #MarginalizedAadhaar: Exclusion in access to public information for marginalized groups.
20 November.
People’s Republic of China, Ministry of Science and Technology
2021 Ethical norms for new generation artificial intelligence. English translation by the Centre for
Security and Emerging Technology. 25 September [original]/12 October [translation].
Renieris, E.
2021 What’s really at stake with vaccine passports. Centre for International Governance Innovation,
5 April.
Rocha, R.
2020 The data-driven pandemic: Information sharing with COVID-19 is ‘unprecedented’. CBC
News, 17 March.
Russell, J.
2020 Machine learning fairness in justice systems: Base rates, false positives, and false negatives. arXiv,
02214(1).
Sandvick, K.
2021 The digital transformation of refugee governance. In: The Oxford Handbook of International
Refugee Law (C. Costello, M. Foster and J. McAdam, eds.). Oxford University Press, Oxford.
Techopedia
n.d. Test set (denition).
The Guardian
2019 Apple card issuer investigated after claims of sexist credit checks. 10 November.
Transnational Institute (TNI) and Stop Wapenhandel
2021 Financing border wars: The border industry, its financiers and human rights. 9 April.
Tschalaer, M.
2021 The Eects of COVID-19 on queer asylum claimants in Germany. Policy Bristol,
Policy brieng 87.
United Nations
1951 Convention and Protocol Relating to the Status of Refugees. (A/RES/2198/21).
1954 Convention Relating to the Status of Stateless Persons. (E/RES/526/17).
1979 Convention on the Elimination of all Forms of Discrimination Against Women. (A/RES/34/180).
1990 International Convention on the Protection of the Rights of All Migrant Workers and Members
of Their Families. (A/RES/45/158).
United Nations Educational, Scientific and Cultural Organization (UNESCO)
2021 Recommendation on the ethics of artificial intelligence (SHS/BIO/REC-AIETHICS/2021).
United Nations Entity for Gender Equality and the Empowerment of Women (UN-Women)
2021 Addressing the impacts of COVID-19 on women migrant workers. Guidance note.
United Nations Global Pulse
2020 Modelling the spread of COVID-19 and the impact of public health interventions in Cox’s
Bazar and other refugee camps. 27 October.
n.d. Discovery: Projects [webpage].
United Nations Oce on Drugs and Crime (UNODC)
2019 Privacy and data concerns. E4J university module series: Trafficking in persons and smuggling
of migrants. Module 14: Links between cybercrime, trafficking in persons and smuggling of
migrants.
United States Government
2020 Guidance for regulation of artificial intelligence applications. Memorandum for the heads of
executive departments and agencies from the director of the Office of Management and
Budget, 17 November.
Wulfhorst, E.
2017 Latest technology helps sex traffickers recruit, sell victims – FBI. Reuters, 25 April.
Xenidis, R. and L. Senden
2020 EU non-discrimination law in the era of articial intelligence: Mapping the challenges of
algorithmic discrimination. In General Principles of EU Law and the EU Digital Order (U. Bernitz,
X. Groussot, J. Paju and S.A. de Vries, eds.). Kluwer Law International, Alphen aan den Rijn.
Zenko, M.
2017 Sex tracking and the refugee crisis: Exploiting the vulnerable. Council on Foreign Relations
blog, 8 May.
Zinser, S. and H. Thinyane
2021 A step forward for Palermo's trafficking protocol, this time integrating frontier technology. Yale
Journal of International Affairs, 16:140–151.