Draft Chapter for the Oxford Handbook of Ethics of AI, ed. M. Dubber, F. Pasquale and
S. Das 2019 (forthcoming)
AI and the Global South: Designing for Other Worlds
- Chinmayi Arun[1]
Abstract
This chapter is about the ways in which AI affects, and will continue to affect, the Global South.
It highlights why the design and deployment of AI in the South should concern us.
Towards this, it discusses what is meant by the South. The term has a history connected with the
‘Third World’ and has referred to countries that share post-colonial history and certain
development goals. However scholars have expanded and refined on it to include different kinds
of marginal, disenfranchised populations such that the South is now a plural concept - there are
Souths.
The AI-related risks for Southern populations include concerns of discrimination, bias,
oppression, exclusion and bad design. These can be exacerbated in the context of vulnerable
populations, especially those without access to human rights law or institutional remedies. This
Chapter outlines these risks as well as the international human rights law that is applicable. It argues
that a human rights-centric, inclusive, empowering, context-driven approach is necessary.
Keywords: Artificial Intelligence, discrimination, bias, databases, global south, south, third world,
developing countries, privacy, human rights, equality, development, social security, right to work
Introduction
In his essay, ‘A Place in the Sun’,[2] architect Charles Correa describes the hazards of replicating
designs without any regard to context. Picture poorly designed housing - the insulated, weather-
resistant ‘box’ created for severely cold northern European regions - taking over warm Indian
cities, replacing the ventilated homes with verandahs and courtyards that are necessary in the
tropical climate. This housing designed for Northern Europe is unable to meet the needs of people
living in the warmer cities of the developing world in a different social and cultural context. Correa
argues that we must place the needs, history, and the cultural and economic context of a society at
the centre of design.
It is worth thinking of the algorithmic society from this architectural point of view.[3] Manuel Castells wrote, ‘we know that technology does not determine society: it is society’.[4] Increasingly,
privately owned web-based platforms control our access to public services, security, education, the
public sphere, health services and our very relationship with the countries we live in. As society is
‘datafied’, public services are delivered through public-private partnerships.[5] There is a push for
‘data driven development’ mediated by private actors. Development donors such as international
1. I am grateful to Paola Ricaurte for all that she has taught me about the Global South and for talking some of these issues through with me, to Dragana Kaurin for sharing her inspiring unpublished work, to my colleague Salome Viljoen for encouraging me to make bolder choices and to my mother Radha Arun for coming through at short notice as my final reader for this Chapter and much else that I have written.
2. Charles Correa, ‘A Place in the Sun’ in A Place in the Shade (Gurgaon: Penguin Random House India, 2010).
3. See Ryan Calo, ‘Robotics and the Lessons of Cyberlaw’, (2015) 103 Cal. L. Rev. 513, and Jack M. Balkin, ‘The Path of Robotics Law’, (2015) The Circuit 72.
4. Manuel Castells, The Network Society, 3.
5. Taylor and Broeders, ‘In the name of Development: Power, profit and the datafication of the global South’, Geoforum 64 (2015) 229-237, 229-230. See also Anita Gurumurthy, Nandini Chami and Deepti Bharthur, Democratic Accountability in the Digital Age, IT for Change, 2016.
NGOs and governments rely on data collected by corporations,[6] creating potentially biased,
opaque decision-making systems. We should examine the design of the systems of automation and
artificial intelligence that are gradually permeating citizens’ lives. We must think about who these
systems are designed for, who designs them, how they are designed, and what ends they serve.
In this Chapter, I focus on the risks rather than the benefits of AI, to highlight the ways in which Southern populations are vulnerable. Northern countries offer their citizens stronger safeguards than Southern countries - most Northern countries already have data privacy laws in place.[7] While the
World Economic Forum has published proposals to incorporate ethics in AI, and
the Organisation for Economic Co-operation and Development has published principles on AI,
these do not guarantee protection to Southern populations.
This chapter is about the ways in which AI may affect the Global South. I begin by explaining why this is a concern and move on to discussing what is meant by the Global South. Although the
term ‘South’ has a history connected with ‘third world’ and associated with certain countries
negotiating together, it is not a clear geographical segregation or even a uniform idea. Scholars
argue that it is a plural concept - there are Souths. After discussing the meaning of ‘South’, I use
four case studies to show that there are many ways in which Southern populations are affected by
technology. The term ‘South’ is complex and necessitates a context-driven approach to AI.
Finally, I outline the issues we must take into account in the context of AI and the Global South.
The risks of AI are exacerbated for Southern populations but it is difficult to discuss the effects
of AI on the South without discussing the effects of AI more broadly. This is why I discuss systems
of discrimination first and then discuss how this affects Southern populations. I follow this with a
summary of how international human rights might apply. In conclusion, I argue that a context-driven, participative, empowering approach is necessary to ensure that the human rights of
Southern populations are protected.
It will be clear by the end of this chapter that we need to transform the way we innovate, frame
policies for, and think about AI. The enormity of the effort involved should not deter us. Correa
pointed out that the developing world is eager for innovation and change, and that genius lies in
stitching new ideas into an old social fabric and producing a ‘seamless wonder’.[8] This metaphor is
worth bearing in mind as we review the hot mess that is currently the use of Artificial Intelligence
(AI) in the Global South.
Why we worry about the ‘Global South’
There is an increasing awareness that we should be thinking more about the impact of AI on the
Global South. The broad concern is clear enough: if privileged white men are designing the
technology and the business models for AI,[9] how will they design for the South? The answer is
that they will design in a manner that is at best an uneasy fit, and at worst amplifies existing systemic
harm and oppression to horrifying proportions.
As ‘Global South’ advocates furrow their brows about AI, they may be thinking of web-based AI
designed by people who live in worlds that rarely see power cuts or internet shutdowns and then
6. Taylor and Broeders, In the name, 229-230.
7. Commission Nationale de l'Informatique et des Libertés, Data Protection Around the World, available at https://www.cnil.fr/en/data-protection-around-the-world, last visited on June 8, 2019.
8. Correa, A Place, 25.
9. This is a concern that is well-founded. See Sarah Myers West, Meredith Whittaker and Kate Crawford, Discriminating Systems (report) (New York: AI Now, 2019).
deployed to the rural hinterlands of countries with poor internet connectivity and only a few hours
of electricity a day. They may worry about the resources diverted from education and health-care
budgets to technology-centric solutions from the companies that are building these systems. They
may be concerned about the surveillance of Southern children through AI for Education, built by
people whose own children go to private school and have restricted access to screens. In
authoritarian countries, they may lose sleep over AI that uses facial recognition, drones and other
forms of surveillance to oppress vulnerable populations. They may worry about the loss of jobs
and the impact on economies as AI replaces low-skilled workers.
These concerns are not without foundation. Ideas of the past like One Laptop per Child[10] have
resulted in spectacular failure despite the bright-eyed optimism and laudable intentions with which
they were created. Technology designed out of context may fail to take local resources, social
norms and cultural context into account. 'One day delivery’ can mean very different things in
Boston and Hyderabad even if the system designed for both cities is the same. Facebook can be
fairly harmless in most countries and find itself weaponised in a country with Myanmar’s socio-political context, to contribute to genocide.[11] It can take effort for Google Maps to be able to account for the favelas of Rio de Janeiro.[12] Technology policy frameworks can impact whole countries, as we might have learned from the debate on drug patents and public health in the developing world.
There are so many ways in which Artificial Intelligence can wreak havoc in Southern countries and
affect the human rights of Southern populations. In the absence of local regulation in Southern
countries, AI may be deployed in its experimental stages such that the people of these countries
bear the risk of harm that may ensue. At a larger scale, AI may impact the economies of these
countries by affecting their role in the global economy: several developing countries that benefited
from their role in the Internet-driven global economy may gradually find the low skilled outsourced
services they offer replaced by automation. The ‘call-centres’ of Bangalore, and the employment and business they generate, can be undone as automation makes human intervention unnecessary.
Automated cars may result in the cab drivers of New York - famously from all over the world -
finding themselves out of work with a redundant skill.
We need to begin our journey towards including the South as a priority, and we need to go beyond the mere use of the phrase in policy documents or speeches. For this, we have to
understand the many things we specifically worry about when we speak of the Global South. Who
is being left out and endangered?
What is the Global South?
Contemporary use of the term ‘Global South’ has a complicated history and is linked to, but different from, other terms like ‘Third World’ and ‘Developing countries’.[13] ‘Global South’ has now largely replaced ‘Third World’ and ‘Developing countries’, but is not without its controversies. While the latter two terms were used in the context of geo-politics and ‘global south’ shares this
10. Joshua Keating, ‘Why did One Laptop per Child Fail’, Foreign Policy, 9 September 2009: foreignpolicy.com/2009/09/09/why-did-one-laptop-per-child-fail/.
11. ‘Facebook has turned into a beast in Myanmar’, BBC, 13 March 2018: www.bbc.com/news/technology-43385677.
12. Max Oprey, ‘How Google is putting Rio's invisible favelas back on the map’, The Guardian, 9 October 2016: www.theguardian.com/sustainable-business/2016/oct/09/invisible-favelas-brazil-rio-maps-erasing-poorer-parts-city.
13. See Anne Garden Mahler, ‘Beyond the Colour Curtain’, in The Global South Atlantic (New York: Fordham University Press, 2017).
history, there is a convincing body of scholarship about how ‘global south’ transcends borders to
stand for more than nation states. It helps to have a little context in the form of the history of
these terms and the changes in politics, culture and economics that accompanied them.
Although the term ‘South’ had been used by scholars earlier,[14] its journey towards becoming mainstream might have started when the Brandt Commission reports used it in the eighties, in the context of their argument for the transfer of funds from the ‘North’ to the ‘South’.[15] ‘South’ has significant overlap with the term ‘third world’, which came to be used from the nineteen fifties to move away from ‘east’ and ‘west’ with their cold war overtones.[16] ‘Third world’ was a term used initially to distinguish the ‘colonised or neocolonised world’,[17] but over time it also came to stand for certain values.[18] While ‘third world’ was used to organise countries around certain ideologies, it appears that ‘South’ came to be used when development aid was offered to the South. Over time the term has expanded beyond borders and is no longer viewed in a geographically restrictive way.[19]
‘South’, as it is currently used by many scholars, is an expansive term, so that it includes ‘countless Souths’, including within what we understand as the West.[20] I discuss these arguments here and then offer illustrations of how this expansive definition is useful in the next part of this chapter.
A striking articulation of this expansive thinking about the South comes from Santos, who argues that the South cannot be seen as a geographic concept,[21] and must be seen instead as ‘a metaphor for the human suffering caused by capitalism and colonialism on the global level, as well as for the resistance to overcoming or minimising such suffering’.[22]
This definition accounts for the migrant
workers with few rights and an abysmal standard of living in countries that one would otherwise
describe as wealthy. It allows us to distinguish between the billionaires residing in India, Mexico
and China, and the marginalised impoverished residents of these countries. Such an expanded
reading of the global South focuses on inequality, oppression, and resistance to injustice and oppression.[23]
Santos argues that the South can be found within Europe and North America ‘in the form of excluded, silenced and marginalised populations, such as undocumented immigrants, the unemployed, ethnic or religious minorities, and victims of sexism, homophobia, racism and Islamophobia’.[24] Milan builds on this to say that the South must be understood as a ‘plural entity’ containing within it ‘the different, the underprivileged, the alternative, the resistant, the invisible, and the subversive’.[25]
The significance of framing what we refer to as the South in this manner is that we include within it disenfranchised populations, many of whom are geographically
clustered in countries we think of as the ‘South’ and some of whom are within countries we would
14. See Nour Dados and Raewyn Connell, ‘The Global South’, (2012) 11(1) Contexts, 12.
15. Arif Dirlik, ‘Global South: Predicament and Promise’, (Indiana University Press, 2007) 1(1) The Global South, 12-23.
16. Mark T. Berger, ‘After the Third World? History, destiny and the fate of Third Worldism’, (2004) 25(1) Third World Quarterly 9-39, 10.
17. Dirlik, Global South, 13.
18. Berger, After the Third World, 10.
19. Dirlik, Global South, 15-20.
20. Stefania Milan and Emiliano Treré, ‘Big Data from the South(s): Beyond Data Universalism’, (2019) 20(4) Television & New Media, 319-335, 325.
21. Boaventura de Sousa Santos, ‘Epistemologies of the South and the future’, (2016) 1 From the European South, 17-29, 18.
22. Santos, Epistemologies, 18.
23. Milan and Treré, Big Data, 325.
24. Santos, Epistemologies, 19.
25. Milan and Treré, Big Data, 321.
describe as the North. This conception of the South might encompass refugees in the United
States of America, who lead a markedly different life from the upper class, privileged, dominant
race individuals who also reside in the country. It would also support the idea that there are Souths
- what is designed for one Southern community or population would not necessarily fit another
Southern community or population. This means that designing for the South will mean accounting
for many different contexts.
This inclusive definition of the South as a plural entity is worth holding on to since it accounts for
the rights and priorities of the many populations excluded from our current thinking about AI. It
forces us to understand that the concerns raised by the South are varied, and it helps to think about
different populations of the South within their own context. A contextual understanding should
not prevent us from recognising the value of strategic South-South alliances around particular
issues to gain leverage. There are affinities between Southern societies based on their shared history
of economic, political and social marginalisation, and past global co-operation for common causes
such as the Group of 20 and the WTO protests.[26] We must, however, recognise that South-South
co-operation is far from simple as powerful Southern societies like China, India, Brazil and South
Africa compete with each other for power, and powerful groups within these Southern societies
benefit from the perpetuation of the transnational economy in its current form.[27]
In considering AI’s impact on the South, we have to acknowledge the dominance of ‘Western
technology companies’, while noting that China is challenging the United States in the fields of AI
and big data.[28] Ricaurte points out that there is a cluster of countries from which data is extracted, and which consume the services offered by dominant global technology companies.[29]
Some of these
countries, such as India, acknowledge and highlight their own potential for extraction of such data,
ignoring the potential impacts on citizens. The political elite, working closely with the industry
elite, of these countries, can tend to focus more on protection of markets than on protection of
citizens. The commodification of citizens is not questioned - the focus is on ensuring that local
capital, rather than foreign capital, benefits from this commodification. Ricaurte highlights the role of governments in ‘data colonisation’,[30] pointing out that governments create frameworks to validate this process and contract with AI companies for public services provided using private data extracted from the populations they are meant to serve.[31]
It is clear that the exploitation of the South has many dimensions. It might take place entirely
within what we understand as the ‘North’, with data collection and monitoring of refugees,
immigrants and other marginalised populations. It might also take place entirely within the South,
where the rising inequality, economic models and close ties between industry and government
might mean that legal frameworks are designed to facilitate local industry’s extraction of data from
citizens. However, in keeping with the broader ways in which global power and capital has worked
in the past, during colonisation and after, it also takes place across borders. Northern companies
‘mine’ data from the South relatively easily. This extraction is a part of a privatised process.[32] The
extraction of data has been compared to the extractive practices of colonialism by Couldry and
26. Dirlik, Global South, 16.
27. Dirlik, Global South, 16.
28. Paola Ricaurte, ‘Data Epistemologies, Coloniality of Power, and Resistance’, (2019) Television & New Media 1-16, 9.
29. Ricaurte, Data Epistemologies, 9.
30. Ricaurte, Data Epistemologies, and Nick Couldry and Ulises Mejias, ‘Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject’, (2018) Television and New Media, 1-14.
31. Ricaurte, Data Epistemologies, 8.
32. Jim Thatcher, David O’Sullivan and Dillon Mahmoudi, ‘Data colonialism through accumulation by dispossession: New metaphors for daily data’, (2016) 34(6) Society and Space 990.
Mejias.[33]
The elite that govern the countries from which the extraction takes place are often
complicit in this extraction. The burden of the extraction is borne by the disenfranchised. In recent years, Southern countries have also developed relationships mirroring North-South
extractive practices with other Southern countries - Indian and Chinese businesses have expanded
to other Southern countries. The next part of this chapter illustrates four models through which
vulnerable Southern populations are put at risk by technology.
Technology in other worlds
In discussing the idea of the ‘South’, different models of exploitation of the South using technology
have surfaced. Here four case studies are used to highlight the complexity and vulnerabilities of
the South. The first case study of Facebook in Myanmar is the classic illustration of how
technology designed in the North can be harmful when exported to the South. The second case
study explores the exploitation of Southern populations by the governing elite within Southern
countries by examining Aadhaar, India’s national identity database. The third case study focuses
on Southern populations in Northern countries through a discussion of refugees in Europe. The
last case study discusses South-South exploitation using China’s export of surveillance technology
as an example.
Facebook in Myanmar
Among the most shocking ways in which data and algorithms may affect human rights is the role
that Facebook played in the Rohingya genocide in Myanmar. It prompted the UN investigators of
the genocide to note in their report that Facebook “has been a useful instrument to those that seek
to spread hate” and recommended an independent investigation of the extent of the company’s
role
34
.
Facebook, a US-based company, brought its social media platform to Myanmar, which is a former colony with a history of decades of state control. This was a classic case where the business model and technological architecture from a Northern country was used in a Southern country. Facebook aggressively marketed its platform, offering it free of cost through its controversial Free Basics program,[35] in a country that had not had the time to develop a healthy media ecosystem. Myanmar’s press was described as ‘not free’ in Freedom House’s Freedom of the Press report in 2012. At the time, criticism of the government was outlawed and most private publications were subject to pre-publication censorship.[36]
Domestic broadcast and print media were owned or controlled by the
government and the import of foreign periodicals was restricted. Without a healthy media
ecosystem, citizens have no way of ascertaining the truth. Facebook was designed for a society
with a very robust media eco-system, protected by the First Amendment. It is not clear that it had
given any thought to what would happen if the same platform dominated the information eco-
system of a country like Myanmar, which has been described as a rumour-filled society.[37]
33. Nick Couldry and Ulises Mejias, ‘Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject’, (2018) Television and New Media, 1-14, 1.
34. Report of the independent international fact-finding mission on Myanmar, Human Rights Council, 12 September 2018, paragraph 74.
35. Catherine Trautwein, ‘Facebook Free Basics Lands in Myanmar’, Myanmar Times, 6 June 2016, available at www.mmtimes.com/business/technology/20685-facebook-free-basics-lands-in-myanmar.html.
36. ‘Freedom of the Press 2012’, Freedom House, Washington D.C., available at freedomhouse.org/report/freedom-press/freedom-press-2012, 92-93.
37. BSR, ‘Human Rights Impact Assessment: Facebook in Myanmar’ (2018).
The BSR human rights impact assessment of Facebook in Myanmar pointed out that Facebook
was used to incite and co-ordinate violence. It is clear from news reports that hate speech went
viral on Facebook in Myanmar,[38] and the military used the platform to spread the hatred[39] during
the genocide. Based on interviews, BSR argued that Facebook should make the effort to
understand the local context better.
Myanmar is a small country and its public institutions and legal systems offered the victims of the
violence little support. Facebook should have been careful while entering, making the effort to
understand the local context and to build a feedback loop. It was the most vulnerable people, the truly marginalised within a Southern country, who suffered the harm.
A biometric identity database in India
Aadhaar, the biometrics-based ‘unique identity’ number[40] database in India, has no cross-border element. This is a top-down, control-heavy system designed by powerful elite upper-caste men[41] - a software billionaire, Nandan Nilekani, supported by high-ranking politicians and civil servants[42] - for the under-privileged people of India. The initial object was to give all people, including migrant workers, a way to access government services.[43] However, the system is merely an interface between people and welfare services. Enrolling in the database will not spare an impoverished person the effort of opening a bank account, or acquiring a ration card.[44]
There were a limited number of consultations, and no serious cost-benefit analysis or impact assessment studies of this very expensive project. Experts on Indian food distribution and welfare schemes, life-saving public services for the impoverished people of India, were critical of the project from the start.[45] They pointed out that other, less expensive models have been found to work better.[46] Nilekani does not appear to have been willing to engage with the fundamental question of whether Aadhaar is the best way to administer the state’s welfare systems.[47]
Aadhaar is mandatory for anyone who wants to access the Indian welfare system. It has been
criticised for excluding people from this welfare system owing to the many ways in which it
malfunctions. Researchers have found that up to 27 starvation deaths from 2015 onwards have been directly linked to Aadhaar.[48] The database has also been breached several times, and news
38. Megha Rajagopalan, ‘Internet Trolls Are Using Facebook To Target Myanmar's Muslims’, Buzzfeed News, 18 March 2017, available at www.buzzfeednews.com/article/meghara/how-fake-news-and-online-hate-are-making-life-hell-for#.wlGyPB4gk.
39. Paul Mozur, ‘A Genocide Incited on Facebook with Posts from Myanmar’s Military’, New York Times, 15 October 2018, available at www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html.
40. For more information, see uidai.gov.in/what-is-aadhaar.html.
41. Ian Parker, ‘The I.D. Man’, The New Yorker, 3 October 2011 issue, available at www.newyorker.com/magazine/2011/10/03/the-i-d-man.
42. Payal Arora, ‘The Bottom of the Data Pyramid: Big Data and the Global South’, (2016) 10 International Journal of Communication, 1681-1699, 1683-1684.
43. See Parker, I.D. Man.
44. Reetika Khera, ‘The UID Project and Welfare Schemes’, (2011) XLVI(9) Economic and Political Weekly 38.
45. Khera, UID Project.
46. Khera, UID Project.
47. See Khera, UID Project, and Parker, I.D. Man.
48. ‘Aadhaar Linked To Half The Reported Starvation Deaths Since 2015, Say Researchers’, Huffington Post India, 26 September 2018, available at https://www.huffingtonpost.in/2018/09/25/aadhaar-linked-to-half-the-reported-starvation-deaths-since-2015-say-researchers_a_23539768/.
reports say that almost a billion records with personally identifiable information have been
compromised.[49]
Aadhaar has played havoc with people’s lives and has caused people to starve by preventing them
from accessing the government services that deliver their basic right to food. In addition to causing
harm within the system it was supposed to fix, Aadhaar targets vulnerable people such as undocumented Bangladeshi migrant workers residing in India - one of the stated goals of the system is to make it easier to find and deport these people.[50] The system is also unfriendly to the impoverished populations for whom it was built. The architecture of the biometric data collection system does not account for what happens to their bodies as a result of living on the streets.[51]
It illustrates that it is possible, within a Southern state, for the elite to force the marginalised to help them construct big data sets that are then used to exclude them, surveil them and violate their rights in other ways.
Refugees and data collection in Europe
A powerful illustration of how the South exists within what we see as Northern countries comes from Dragana Kaurin’s work on the digital agency of refugees in the European Union.[52] European
laws, international law and even humanitarian agencies use technology to deprive asylum seekers
of agency and make them even more vulnerable.
Refugees are made to give up personal data when they seek asylum in the European Union.
Although they are physically based in what is usually considered the global North, asylum seekers
are vulnerable people who receive no protection from their countries of origin and little protection
from their country of residence. They are often under threat from their country of origin, which
they have fled, and from their host countries where the law enforcement agencies often have the
mandate to find, imprison or deport them.[53] While they are in this vulnerable position, law
enforcement and border control agencies, as well as UN aid agencies and NGOs collect asylum
seekers’ and refugees’ biometrics.
Kaurin explains how the use of automation can harm refugees and asylum seekers. The social
media and communication devices that help them maintain their ties with family and seek
information from humanitarian aid workers as they are on the move also subject them to
surveillance as private sector and government actors harvest their data and monitor their movements.[54] Even well-intentioned efforts using technology can put them at risk. For example, the Trace the Face program by the International Committee of the Red Cross uses facial recognition technology that searches for missing persons using photos, provided by the families of missing migrants, of either the missing migrants themselves or their blood relatives.[55] Kaurin
49. ‘1 Billion Records Compromised in Aadhaar Breach since January: Gemalto’, Business Line, 15 October 2018, available at https://www.thehindubusinessline.com/news/1-bn-records-compromised-in-aadhaar-breach-since-january-gemalto/article25224758.ece.
50. Payal Arora, ‘The Bottom of the Big Data Pyramid: Big Data and the Global South’, (2016) 10 International Journal of Communication, 1681-1699, 1684.
51. Arora, The Bottom, 1685.
52. Dragana Kaurin, Data Protection and Digital Agency of Refugees, CIGI Special Report, May 14, 2019.
53. Kaurin, Data Protection, 4.
54. Kaurin, Data Protection, 5.
55. Kaurin, Data Protection, 12.
references an interview with a refugee to point out the chilling fact that some refugees ‘are also
running away from family or someone who wants to hurt them’.[56]
This is an illustration of Southern populations that inhabit the global North and are made more vulnerable through the collection of data and the use of technology. Systems built to help refugees and asylum-seekers have adopted technology that does not take their needs into account. Kaurin points out that refugees are not usually consulted or engaged in the framing of the policies that affect them.[57] To reduce the vulnerability and increase the agency of asylum seekers, she recommends that impacted communities, especially the minorities and marginalised groups within them, be involved in designing processes and making decisions for asylum seekers.[58]
Chinese Facial Recognition Technology in Zimbabwe
Using a strategy similar to the North’s expansion into the South, China is selling surveillance technology to countries like Ethiopia.[59] One might see this as oppression of the population of one Southern country by the elite within that country, facilitated by another Southern country.
It is well known that China is using big data to build enhanced systems of surveillance, ranging
from the social credit system and facial recognition to systems that will predict which individuals
might be a threat to public safety. These systems are used by the elite within the country to control the rest of the population, and have now taken on a cross-border dimension.
Chinese companies make and sell closed-circuit television cameras and monitoring systems, sometimes high-definition and equipped with facial and movement recognition technology, to other countries including Brazil, Ecuador and Kenya. Zimbabwe, for instance, has reportedly partnered with the Chinese company CloudWalk for a national facial recognition program. Northern countries like Germany and, more recently, the United States have taken steps to control foreign acquisitions and to control the technologies they use, but it appears that a significant market exists for this technology in Southern countries. In a country like Ethiopia, the government purchases Chinese technology to monitor the mobile phone and internet activity of its people.
As the case studies above might suggest, there is more than one model through which Southern
populations are harmed or exploited through the use of technology. The same institutional
weaknesses that leave Southern populations in Southern countries vulnerable to technology from Northern countries make them vulnerable to technologies from other Southern countries. The technology developed for surveillance and control of populations within countries in the South is exported and used against marginalised populations in other Southern nations.
AI and the Global South
It is worth reading work by scholars who think about AI and discrimination, while noting that
Southern institutions and legal frameworks can exacerbate the harms that they discuss. Southern
populations within Northern countries might not have the same access to institutions as the privileged people of those countries.
affect the economy, housing, intimate relationships and more. They can introduce or enhance
discrimination and oppression, and they can erase populations by failing to account for their
existence.
56. Kaurin, Data Protection, 13.
57. Kaurin, Data Protection, 24.
58. Kaurin, Data Protection, 24.
59. Maya Wang, ‘China’s Dystopian Push to Revolutionize Surveillance’, Human Rights Watch, 18 August 2017, available at https://www.hrw.org/news/2017/08/18/chinas-dystopian-push-revolutionize-surveillance.
I begin by discussing autonomous systems as systems of discrimination, and then move on to what this may mean for Southern populations, especially since the fragile democracies
and non-democracies of the world do not offer their citizens the institutional protections that may
be available in the USA or Europe.
Systems of Discrimination
Any discussion of AI in the context of discrimination has to discuss big data, which is ‘the fuel that runs the Algorithmic Society’.[60] Algorithmic systems are often trained on a corpus of data, which means that the big data and its inherent biases affect the outcome of these systems.[61] There
are several stages at which inaccuracies and bias can be introduced into algorithmic decision-
making. These range from the recording of the data to the actual question answered by the
algorithm.
There is a tendency to accept predictions based on datasets as the truth,[62] even though the outcome is typically an interpretation of the data[63] and may be inaccurate.[64]
The dataset could
suffer from any number of problems which would skew the outcome. Scholars use the term ‘dirty
data’ to refer to missing, incorrect and badly represented data, as well as to data that has been
manipulated intentionally or distorted by biases.[65] Crawford has pointed out that ‘not all data is created or even collected equally’.[66] Data collection has embedded power and assumptions. The recording of fingerprints, for example, is difficult for those who do manual work, such as refugees and migrant and contract labourers.[67]
The very design of data sets can be biased as a result of assumptions and gaps.[68] The datasets could under-represent or wrongly represent certain populations, leading to discrimination against them or to their exclusion.[69] Even if the dataset is accurate, its structure can end up discriminating against and marginalising people: the classic example being datasets that code people as either male or female, erasing other forms of gender identity.[70] A dataset might discriminate indirectly by recording a seemingly innocuous fact that acts as a marker for identity. An illustration of this is employment, which can be used to infer caste based on the historic employment of marginalised-caste people for certain tasks (such as manual scavenging).[71]
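To make the proxy problem concrete, the short Python sketch below is a purely hypothetical illustration: the records, field names (occupation, caste_group) and the naive scoring rule are invented for this example and are not drawn from this chapter, from Aadhaar, or from any real system. It shows how a decision rule that is never given a protected attribute can still reproduce historical discrimination through a correlated proxy.

    from collections import Counter

    # Hypothetical records: (occupation, caste_group, past_benefit_decision).
    # The caste_group column is never shown to the scoring rule below.
    records = [
        ("manual_scavenging", "marginalised", 0),
        ("manual_scavenging", "marginalised", 0),
        ("sanitation_work",   "marginalised", 0),
        ("clerical_work",     "dominant",     1),
        ("clerical_work",     "dominant",     1),
        ("engineering",       "dominant",     1),
    ]

    # A naive 'caste-blind' rule learned from past decisions: approve the
    # occupations that were mostly approved before.
    history = {}
    for occupation, _, decision in records:
        history.setdefault(occupation, []).append(decision)
    rule = {occ: sum(d) / len(d) >= 0.5 for occ, d in history.items()}

    # Measure approval rates per caste group under the learned rule.
    approved, totals = Counter(), Counter()
    for occupation, caste, _ in records:
        totals[caste] += 1
        approved[caste] += rule[occupation]

    for caste in totals:
        print(caste, approved[caste] / totals[caste])
    # Prints: marginalised 0.0, dominant 1.0 - the historical bias is
    # reproduced even though caste was never an input to the rule.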
60. Jack M. Balkin, ‘The Three Laws of Robotics in the Age of Big Data’, (2017) 78(5) Ohio State Law Journal, 1217, 1219.
61. Ifeoma Ajunwa, ‘The Paradox of Automation as Anti-Bias Intervention’, (Forthcoming, 2020) 41 Cardozo L. Rev., 13.
62. Ajunwa, The Paradox, 13.
63. danah boyd and Kate Crawford, ‘Six Provocations for Big Data’, (Oxford Internet Institute, 2011), A Decade in Internet Time: Symposium on the Dynamics of the Internet and Society, 6.
64. Kate Crawford and Jason Schultz, ‘Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms’, (2014) 55(1) Boston College Law Review 93, 101.
65. Rashida Richardson, Jason M. Schultz and Kate Crawford, ‘Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems and Justice’, (2019) 94 New York University Law Review 192, 195.
66. Ajunwa, The Paradox, 13, based on Kate Crawford, ‘Think Again: Big Data’, Foreign Policy (May 10, 2013), https://foreignpolicy.com/2013/05/10/think-again-big-data.
67. See Arora, The Bottom, and Kaurin, Data Protection.
68. Ajunwa, The Paradox, 13.
69. Ajunwa, The Paradox, 13-18.
70. West et al, Discriminating Systems, 6.
71. The Citizen Bureau, ‘Caste and Aadhaar: How will a Manual Scavenger Leave his Past Behind?’, The Citizen, 5 August 2017, available at www.thecitizen.in/index.php/en/newsdetail/index/2/11396/caste-and-aadhar-how-will-a-manual-scavenger-leave-his-past-behind.
The training data for algorithms can embed bias,[72] and algorithms trained on real-world data would replicate real-world discrimination.[73] Therefore, a hospital computer program used to sort medical school applicants based on previous admissions decisions ended up discriminating against women and racial minorities because of the rules it learned from the hospital’s older, biased decisions.[74] Big data essentially generates correlations.[75] Although scientists understand the difference between correlation and causation, the rest of the world tends to treat conclusions based on big data as ‘enough’.[76]
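The dynamic can be sketched in a few lines of Python. The example below is a minimal illustration on synthetic data, assuming NumPy and scikit-learn are available; the variable names and the bias term are invented for the example, not taken from the study cited above. A classifier trained on historical decisions that penalised a marginalised group faithfully learns and reapplies that penalty.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    score = rng.normal(0, 1, n)          # applicant merit (synthetic)
    minority = rng.integers(0, 2, n)     # 1 = member of a marginalised group
    # Synthetic historical decisions: driven by score, but with a penalty
    # applied to minority applicants - the bias sits in the labels.
    admitted = (score - 1.0 * minority + rng.normal(0, 0.3, n)) > 0

    X = np.column_stack([score, minority])
    model = LogisticRegression().fit(X, admitted)
    predicted = model.predict(X)

    for group in (0, 1):
        rate = predicted[minority == group].mean()
        print("group", group, "predicted admission rate:", round(rate, 2))
    # The model admits 'minority' applicants at a far lower rate, because
    # it has learned the penalty embedded in the historical labels.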
The AI Now Institute has articulated the problem in unambiguous terms.[77] It has pointed out that since classification, differentiation and ranking are central to AI systems, these systems are ‘systems of discrimination’. It has argued that the bias in AI systems is connected with the lack of diversity in the AI industry, including the people who build AI tools and the environment in which they are built. The large-scale AI systems come from elite university labs and a few technology companies, which are ‘white, affluent, technically oriented and male’ spaces.[78] In other words, these technologies are designed by people from the North. Context can be reintroduced if universities studying AI collaborate with social science and humanities disciplines, affected communities and civil society organisations.[79] It is important to account for plurality, context and intersectionality.[80]
Southern populations
In addition to changing how decisions are made about design, data and deployment in the
algorithmic society, we must give Southern populations the tools to engage effectively with the
questions that affect them. This is already proving challenging in what we understand as Global
North countries despite the lively debate and relatively strong privacy and anti-discrimination laws.
When companies deploy these technologies in Southern countries, there are fewer resources and
institutions to help protect marginalised people’s rights. This needs to be remedied as a high
priority.
The systems discussed in the four case studies are designed by people with privileged access to the
data of data subjects. The data subjects have little control or autonomy over their own data. It is
typical, when autonomous systems are used, that the data subject has no idea who has access to
their data or how it is used.[81] This is exacerbated in Southern countries. Young democracies lack institutional stability, since it takes time to build institutions and institutionalise democratic practices.[82] This is why Milan argues that we need diverse forms of citizen and civil society engagement to ward off datafication practices that result in oppression and inequality.[83]
72. Solon Barocas and Andrew D. Selbst, ‘Big Data’s Disparate Impact’, (2016) 104 California Law Review, 671, 680-681.
73. Ajunwa, The Paradox, 14.
74. Barocas and Selbst, Big Data’s Disparate Impact, 682.
75. Ajunwa, The Paradox, 13.
76. Ajunwa, The Paradox, 15.
77. West et al, Discriminating Systems, 6.
78. West et al, Discriminating Systems, 6.
79. AI Now Report 2018, AI Now Institute, New York University.
80. West et al, Discriminating Systems, 3.
81. danah boyd and Kate Crawford, ‘Critical Questions for Big Data’, (2014) 15(5) Information, Communication & Society, 662-679, 673.
82. Ethan B. Kapstein and Nathan Converse, ‘Why Democracies Fail’, (2008) 19(4) Journal of Democracy, Johns Hopkins University Press, 57-68.
83. Milan and Treré, Big Data, 328.
The institutional frameworks of Southern countries must be taken into account as we consider
what impact AI might have on the South. Freedom depends not just on political and civil rights,
but also on other social and economic arrangements such as education and health care.[84] Development, Amartya Sen argues, depends on the removal of sources of ‘unfreedom’ such as systematic social deprivation, poverty, poor economic opportunities and tyranny. Sen describes poverty in terms of capability deprivation, in what is now famously known as the ‘capabilities approach’ to development. Julie Cohen has applied Sen’s work, as built on by Martha Nussbaum, to access to knowledge, and has pointed out that we need to pay more attention to the relationship between the networked information environment and human flourishing.[85]
The rights of Southern populations can be realised through efforts made by states, but can also be
eroded by the governing elite of states. In the past, Southern countries worked together as a bloc,
to gain access to technology, capital and markets.[86] They had a shared commitment to development, opposition to colonialism, the creation of equitable conditions for the socio-economic development of all countries and the evolution of South-South co-operation.[87] This co-operation has been taking place since the Non-Aligned Movement, in which developing countries came together to negotiate development and trade issues. As the developing countries began what they called South-South co-operation, triangular co-operation also began, such that donors and Northern partners became involved in South-South initiatives.[88]
Progress has been made over the years on South-South initiatives, but one might argue that co-operation between Southern states and triangular co-operation have had mixed results. Over the years, non-state actors such as businesses and civil society have started playing a powerful role in Southern countries. These countries have developed groups that are wealthy and influential,
and populations that are more affluent than their fellow citizens - the extractive, exploitative
consequences are evident in the Aadhaar case study. Some Southern states are more developed
and have greater economic influence than other Southern states. The exploitative nature of this
relationship is evident in the China-Zimbabwe case study.
How International Human Rights apply
It is clear that Southern populations are varied and scattered through Northern and Southern
countries. It helps to bear in mind that they are all entitled to human rights under international
law, which offers a standard and a threshold that debates on innovation and AI must take into
account. AI will affect human rights, especially for Southern populations, and work is underway
to map how these rights may be affected. As the UN Secretary General’s high-level panel on digital co-operation acknowledges, the major documents codifying international human rights were written before the age of digital co-operation.[89]
84. Amartya Sen, Development as Freedom (New York: Anchor Books, 2008), ‘Introduction’.
85. Julie Cohen, Configuring the Networked Self (New Haven: Yale University Press, 2012), Chapter 9.
86. Rubin Patterson, ‘Global Trade and Technology Regimes: The South’s Asymmetrical Struggle’, (2005) 4(3-4) Perspectives on Global Development and Technology, 382.
87. Report of the UN Secretary General to the UN General Assembly (2018), 73rd session, Role of South-South cooperation and the implementation of the 2030 Agenda for Sustainable Development: Challenges and opportunities, 3.
88. Report of the UN Secretary General, Role of South-South, 5.
89. Report of the UN Secretary-General’s High-level Panel on Digital Cooperation (2019), The Age of Digital Interdependence, 16.
AI could potentially impact the rights to freedom of expression, privacy, social security, and the
right against discrimination. It might also violate state parties’ commitment to guarantee these
rights without discrimination. The UN Special Rapporteur for freedom of expression, David Kaye,
has recommended that companies should account for discrimination at both the input and the
output level of AI systems, and design systems that are non-discriminatory and account for diversity.[90] He has suggested that states and companies might be obligated to conduct human rights impact assessments and public consultations during the design and deployment of new AI systems, or of existing systems in new markets. He has also recommended that states should ensure that human rights are central to the design, deployment and implementation of AI systems.[91]
These recommendations offer concrete ways to ensure that states make an effort to prevent
companies from violating human rights as they build and deploy AI. The recommendation about
impact assessments when technology is used in new markets is especially valuable for Southern
countries, since it acknowledges that it can be risky to deploy technology designed for the North unthinkingly in the South. It accounts for context.
Although the UN Special Rapporteur on extreme poverty and human rights is yet to publish his
report on AI, his public consultation has elicited useful responses from human rights
organisations. These responses point out that discriminatory AI systems might violate the right to
social security.[92] They may also affect states’ obligation to ensure that people are able to access the right to work,[93] necessitating efforts to enable people whose skills and jobs are affected by AI to acquire new skills and competences so that they are able to work, and to explore alternative income models like a Universal Basic Income.[94]
The recommendations about ways in which states must develop institutional frameworks to
guarantee human rights in a world dominated by AI systems are useful. There is however much
more work to be done. We need a clear framework against which States can be assessed, to monitor
their progress in protecting human rights and advancing the sustainable development goals as they
develop and use AI.
Conclusion
The degree to which the AI industry is willing to experiment on human populations,[95] in the name of innovation, should make us uncomfortable. As Castells reminds us, invoking the Holocaust, we must remember how destructive technology can be before we lose ourselves in its wonders.[96]
The technology and capital that drives AI currently rests firmly in privileged Northern hands.
Vulnerable Southern populations in particular are at risk from the surveillance and other forms of
discrimination, bias and poorly tailored outcomes that will result from AI that is designed with no
regard to their local contexts. The politics of design need to be examined, and AI systems need to
be studied in ‘situated realities’.[97]
90. Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression to the UN General Assembly, Seventy-third session, 2018.
91. Report of the Special Rapporteur (2018).
92. Article 22 of the Universal Declaration of Human Rights as well as Article 9 of the International Covenant on Economic, Social and Cultural Rights. See Human Rights Watch, Submission to the UN Special Rapporteur on extreme poverty and human rights, May 2019.
93. Article 6(1) of the International Covenant on Economic, Social and Cultural Rights.
94. Amnesty International, Submission to the UN Special Rapporteur on extreme poverty and human rights, May 2019.
95. AI Now Institute Report 2018, 24.
96. Manuel Castells, The Network Society, 3.
97. West et al, Discriminating Systems, 16.
Boyd and Crawford write powerfully that ‘big data has emerged a system of knowledge that is already changing the objects of knowledge, while also having the power to inform how we understand communities and networks’.[98]
With every year that passes, this system intertwines itself
with our institutions and permeates our societies. This is why we must heed Ricaurte’s call for
alternative digital futures and pluriverses, and for the protection of cultures that are resistant to
being governed by the market.[99] We must work on reversing extractive technologies in favour of
justice and human rights.
Although scholars, scientists and UN experts have cautioned against the speedy adoption of AI
which may harm vulnerable populations or affect their agency and autonomy,[100] more work is necessary to account for the plural contexts of the Global South and to adopt modes of engagement that include these populations, empower them and design for them. It may be necessary to re-imagine models of innovation to achieve this.[101] The UN Secretary General’s high-level panel on digital co-operation has recognised this and has called for an inclusive digital economy and society that accounts for local conditions, human rights and the barriers faced by marginalised groups.[102] It
has also recognised the need to develop capacity so that all stakeholders are able to understand
and make critical choices about emerging technologies.
Although redesigning the technology and market models we know may seem daunting, and arguments may be made that efforts to contextualise them will affect their ability to operate at scale, it is not too late to start. Correa wrote that the big question for architects in the Third World is not the size or value of their projects, but ‘the nature of the questions they raise - and which we must confront. A chance to grow: the abiding virtue of a place in the sun’.[103]
98. boyd and Crawford, Critical, 665.
99. Ricaurte, Data Epistemologies, 12.
100. Russell et al, ‘Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter’, (2015), available at futureoflife.org/ai-open-letter; and the Report of the Special Rapporteur (2018), paragraph 47.
101. See Milan and Treré, Big Data, 328, and Ricaurte, Data Epistemologies, 12.
102. UN Secretary-General’s High-level Panel on Digital Cooperation, 29.
103. Correa, A Place, 25.
Bibliography
Ifeoma Ajunwa, ‘The Paradox of Automation as Anti-Bias Intervention’, (Forthcoming, 2020) 41 Cardozo L. Rev.
Nick Couldry and Ulises Mejias, ‘Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject’, (2018) Television and New Media, 1-14.
Arif Dirlik, ‘Global South: Predicament and Promise’, (Indiana University Press, 2007) 1(1) The Global South, 12-23.
Anne Garden Mahler, ‘Beyond the Colour Curtain’, in The Global South Atlantic (New York: Fordham University Press, 2017).
Stefania Milan and Emiliano Treré, ‘Big Data from the South(s): Beyond Data Universalism’, (2019) 20(4) Television & New Media, 319-335.
Paola Ricaurte, ‘Data Epistemologies, Coloniality of Power, and Resistance’, (2019) Television & New Media, 1-16.
Boaventura de Sousa Santos, ‘Epistemologies of the South and the future’, (2016) 1 From the European South, 17-29.
Rashida Richardson, Jason M. Schultz and Kate Crawford, ‘Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems and Justice’, (2019) 94 New York University Law Review, 192.
Taylor and Broeders, ‘In the name of Development: Power, profit and the datafication of the global South’, Geoforum 64 (2015), 229-237.
Sarah Myers West, Meredith Whittaker and Kate Crawford, Discriminating Systems (report) (New York: AI Now, 2019).