Education for Information 34 (2018) 179–183
DOI 10.3233/EFI-180200
IOS Press
Algorithms, future and digital rights: Some reflections
Renato Rocha Souza (a, b)
(a) Federal University of Minas Gerais, Brazil
(b) Fundação Getulio Vargas, Praia de Botafogo, Rio de Janeiro, Brazil
Tel.: +55 21 37995529; E-mail: rsouza@eci.ufmg.br
This paper briefly presents a view of current trends in our society, highlighting the technical aspects introduced by the big data phenomenon and by machine learning and artificial intelligence algorithms. It covers the threats that privacy and human rights may suffer from general ignorance about these issues, and calls for a discussion on educational agendas and a new digital literacy.
Keywords: Big data, algorithms, privacy, social media, contemporaneity, digital literacy
1. Introduction
One of the points on which the vast majority of contemporary analyses of society agree is informational overload, and the technologies that promote it. The last decades have brought us the internet, the web, social networks – empowered by mobile devices – and the internet of things. If we think about the relation of society to these stocks of available information, we could imagine an undeniable process of democratization, both through the popularization of technologies and through the greater availability of data, fueling access to a more significant and diversified range of the cultural production of humanity. Such a scenario, in theory, would make it difficult to control information, because the sources are so many and so varied today that biases should be more explicit and easier to overcome. Paradoxically, it is increasingly complex to establish parameters for judging the quality of information, precisely because no sample is significant in comparison to the whole, and the phenomenon of rapid obsolescence quickly renders the knowledge produced outdated. At the same time, bubbles and silos, arising from the manipulation of information by the great content producers, create and multiply fragmented societies in terms of ideas and values. We, as individuals, are somewhat numbed and passively uncritical about the great discussions underway, about the choices being made and their consequences. Subject to media manipulation, narrative wars, and the “post-truth” phenomenon, we perceive a context that resembles a blend of two dystopias: Brave New World, by Aldous Huxley (1998), and 1984, by George Orwell (2009).
In his book “The Consequences of Modernity” (2013), Giddens already pointed
to trust in expert systems as a distinctive feature of our times. In terms of behavior, the ubiquity of the network and the pervasiveness of its products in
our lives have shaped society and instilled the need for permanent connection. Entertainment is based on streaming technologies,[1] constantly forcing the reinvention of traditional channels’ business models.[2] Information repositories are digital. Urban mobility depends on navigation systems, GPS and information captured through the collective use of the systems themselves, such as Waze and Google Maps. Intelligent city management involves ongoing monitoring of public transport, monitoring of events with cameras and drones, and the use of smart grids for energy management and distribution.[3] Communication between individuals is fluid in a myriad of channels that compete with each other, coupled with social networks, and based on mobile phones. Medicine is based on images and measurements for increasingly detailed diagnostics,[4] powered by algorithms capable of predicting diseases; in certain tasks, they have become more accurate than doctors.[5] And we are just at the beginning. Either incidentally or because of the myriad technological options available,[6] the mere fact of living in this epoch is enough for us to be permanently watched and recorded, through cameras, sensors and automatic systems. We give away huge chunks of our lives on social networks and through smart devices connected to the IoT (Internet of Things). In this case, the concept of privacy is violated by fine-print clauses[7] in the terms of use of products and services.[8] But it is not an easy task to unhook oneself from these technologies, because that would imply giving up the comfort we have got used to in this century.[9]
Fed by informational excess (the “new oil”), a necessary condition for predictive algorithms, artificial intelligence[10,11] re-emerged after its apparent failure in the 1980s. It is gradually becoming evident that its technologies and derived artifacts have revolutionized human life;[12] a silent revolution that happens as these devices integrate and interact[13] invisibly in our daily lives. Their “intelligence” lies in supercomputers in the clouds but also in prosaic devices such as cell phones, wristwatches, televisions, toasters, refrigerators, and even children’s toys.[14]
[1] http://www.pwc.com/us/en/industry/entertainment-media/publications/assets/pwc-streaming-the-future-february-2016.pdf
[2] https://www.recode.net/2017/5/1/15386694/nfl-live-stream-amazon-prime-thursday-night-football-ratings
[3] https://www.scientificamerican.com/report/smart-electricity-grid/
[4] https://www.technologyreview.com/s/428134/the-future-of-medical-visualisation/
[5] https://www.nytimes.com/2017/05/01/health/artificial-nose-scent-disease.html
[6] https://www.technologyreview.com/s/528076/my-life-logged/
[7] https://www.nytimes.com/2017/05/09/magazine/how-privacy-became-a-commodity-for-the-rich-and-powerful.html
[8] https://en.wikipedia.org/wiki/Terms_and_Conditions_May_Apply
[9] https://www.nytimes.com/2017/05/10/technology/techs-frightful-five-theyve-got-us.html
[10] https://www.scientificamerican.com/article/rise-of-the-robots/
[11] https://medium.com/@NathanBenaich/why-go-long-on-artificial-intelligence-67bb7d0d6ff4
[12] https://medium.com/startup-grind/why-do-we-need-the-democratization-of-machine-learning-80104e43c76f
[13] http://www.zdnet.com/article/hundreds-of-apps-are-using-ultrasonic-sounds-to-track-your-ad-habits/
[14] https://www.nytimes.com/2017/02/17/technology/cayla-talking-doll-hackers.html
Increasingly integrated into our routines,[15] we see the power of algorithms learning our habits[16] and dictating the price of products in electronic commerce;[17] suggesting what we should watch,[18] eat,[19] buy[20] and where to travel.[21] Personal assistants help us get around[22] and get information – often deciding what we should or should not read.[23] Sometimes they occupy affective spaces,[24] and soon they will[25] relegate the Turing test to a reminiscence of good old, romantic times in which humans still used to challenge machines in games such as Trivia, Poker, Chess or Go.[26]
As these “artificially sentient” devices gain ground, we can envision scenarios that gradually leave the field of science fiction.[27] We see the return of the robots, very present in the collective imagination of those who lived in the 1970s, arousing divergent feelings about the future of our relationship with these anthropomorphized technological creations.[28] The media often express the fear that these machines will take over human jobs and occupations,[29] and for some professions extinction is highly probable[30,31] in the very near future.[32] This fear is not exclusive to the older generations, but also affects those who are now entering the labor market.[33] More than fears about losing jobs, we wonder what would motivate our lives in a world where we would not have to work.[34] A big issue in the democratization of AI is to guarantee that its benefits will be shared by everyone.[35]
In this scenario, it seems extremely important to ask some questions about the alleged imputability of the algorithms that animate artificial intelligence devices.
[15] https://www.technologyreview.com/s/514346/the-data-made-me-do-it/
[16] https://www.amazon.com/The-Master-Algorithm-Ultimate-Learning/dp/0465065708
[17] https://www.theatlantic.com/magazine/archive/2017/05/how-online-shopping-makes-suckers-of-us-all/521448/
[18] https://www.wired.com/2009/09/how-the-netflix-prize-was-won/
[19] http://nymag.com/scienceofus/2015/10/future-of-dieting-is-personalized-algorithms.html
[20] http://www.bbc.com/capital/story/20161212-algorithms-are-making-us-small-minded
[21] http://www.marketwatch.com/story/why-artificial-intelligence-is-the-future-of-2017-01-10
[22] https://phys.org/news/2017-03-self-driving-cars.html
[23] https://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles
[24] http://digg.com/2017/amazon-alexa-is-not-your-friend
[25] http://www.turing.org.uk/scrapbook/test.html
[26] http://www.makeuseof.com/tag/ais-winning-5-times-computers-beat-humans/
[27] https://thinkgrowth.org/killer-robots-and-the-many-ways-in-which-ai-could-go-wrong-31e31a221bd6
[28] https://www.theguardian.com/science/head-quarters/2017/apr/24/why-are-we-reluctant-to-trust-robots
[29] https://www.wired.com/2012/12/ff-robots-will-take-our-jobs/
[30] https://www.nytimes.com/2017/03/19/technology/lawyers-artificial-intelligence.html
[31] https://thinkgrowth.org/silicon-valley-is-right-our-jobs-are-already-disappearing-c1634350b3d8
[32] https://www.linkedin.com/pulse/ais-impact-jobs-finally-force-silicon-valley-grow-up-fairchild
[33] https://www.linkedin.com/pulse/job-stealing-robots-millennials-see-hope-fear-cathy-engelbert
[34] https://www.theguardian.com/technology/2017/may/08/virtual-reality-religion-robots-sapiens-book
[35] https://www.technologyreview.com/s/603465/the-relentless-pace-of-automation/
The decisions we delegate to them go beyond the merely pragmatic and invade the field of ethics, psychology and even philosophy. A while ago, in more romantic times, the three laws of robotics, as stated by Isaac Asimov,[36] were sufficient to guarantee safe parameters, but they may not cover the complexity of current use cases. As an example, we can think of an autonomous vehicle under the imminent threat of an accident, having to choose between protecting the passengers or the passers-by. Should we expect it to minimize human losses, or to protect its owner? Whose fault is it when accidents happen: the manufacturer’s or the owner’s? Moreover, should a drone that has identified and attacked terrorists accept a controlled number of civilian casualties to achieve its goals? These are deep questions with no easy answers. On the other hand, there is a wide range of arguments as to why some decisions should be left to computers. They, in theory, would not be affected by cognitive biases; would not modify programmed behavior under stress, pressure, or fatigue; and, in principle, could be modeled on the highest moral standards. But, as we all know, definitions of ethics are culturally and diachronically conditioned, and transcend the decision scope of well-meaning programmers and designers.[37] Another aspect of the problem arises when we examine how these algorithms learn. The prejudices we observe in programs and applications[38] are inherent in the social inequalities and biased cleavages present in the data we produce in our social fabric and context. As the primordial source from which these algorithms learn, these data are produced at the core of human activities, reproducing and perpetuating points of view and comprehension gaps.[39] The alleged objectivity associated with computer-based activities is, in this case, a dangerous myth, and all neutrality is suspect. Furthermore, the effort to make personal assistants – Alexa, Google, Cortana, Siri, etc. – seem more human[40] may exacerbate their skewed behaviors.
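To make this mechanism concrete, the toy sketch below (in Python, with entirely hypothetical data invented for illustration, not taken from any system discussed here) shows how a model that merely estimates rates from historically skewed decisions ends up reproducing that skew, even though the code itself encodes no explicit prejudice.

# A minimal sketch, assuming purely hypothetical data, of how a model trained on
# historically biased decisions reproduces that bias.
import random
from collections import defaultdict

random.seed(42)

# Hypothetical history: equally qualified applicants, but past reviewers approved
# group "A" 80% of the time and group "B" only 40% of the time.
history = [("A", random.random() < 0.8) for _ in range(1000)] + \
          [("B", random.random() < 0.4) for _ in range(1000)]

# "Training": estimate the approval rate per group from the historical decisions.
counts, approvals = defaultdict(int), defaultdict(int)
for group, approved in history:
    counts[group] += 1
    approvals[group] += approved
approval_rate = {g: approvals[g] / counts[g] for g in counts}

# "Prediction": approve whenever the learned rate exceeds 0.5. The model now
# systematically favors group A, perpetuating the pattern present in its data.
for group in ("A", "B"):
    decision = "approve" if approval_rate[group] > 0.5 else "reject"
    print(f"group {group}: learned approval rate {approval_rate[group]:.2f} -> {decision}")

Nothing in this "model" refers to the applicants' actual merits; it simply learns and then repeats the pattern recorded in its training data, which is precisely the danger described above.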
In more extreme analyses, such as those of Yuval Harari,[41] a new form of religion is already gathering acolytes: “dataism”,[42] which reifies the preponderance of algorithms in decision-making processes. Just as a divine authority was once legitimized by religion and mythology, and a human authority was imposed through humanist ideologies, high-tech gurus and Silicon Valley prophets are creating a new universal narrative fueled by Big Data.
Overcoming these dilemmas depends on many variables. Understanding of, and awareness about, the problem on the part of the general public – the great majority of technology users – are sine qua non conditions.
[36] https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
[37] http://www.zdnet.com/article/can-ai-really-be-ethical-and-unbiased/
[38] https://www.theatlantic.com/technology/archive/2016/04/the-underlying-bias-of-facial-recognition-systems/476991/
[39] https://www.fordfoundation.org/ideas/equals-change-blog/posts/weapons-of-math-destruction-data-scientist-cathy-o-neil-on-how-unfair-algorithms-perpetuate-inequality/
[40] https://techcrunch.com/2017/04/28/alexa-learns-to-talk-like-a-human-with-whispers-pauses-emotion/
[41] https://www.ft.com/content/50bb4830-6a4c-11e6-ae5b-a7cc5dd5a28c
[42] https://www.ft.com/content/50bb4830-6a4c-11e6-ae5b-a7cc5dd5a28c
Unfortunately, the observed trend is the amplification of the digital divide into the field of algorithms and artificial intelligence. Yet very few seem to care, and fewer still are willing to sacrifice the comfort provided by technology over issues such as privacy or ethics. As history recalls, no technology has ever ceased to be used because of ethical issues. And this is most likely a recipe for trouble ahead.
So... what should we do?
There is an urgent need to reformulate school curricula and to reorganize the patterns of discussion in society. Challenged by a new digital divide, we need answers that promote a new digital literacy. It is not enough just to “learn to program”, as many campaigns to reformulate school curricula advocate. To make the pervasiveness of algorithms less intrusive, we must stimulate holistic and interdisciplinary approaches that promote debate, hacker activism, and the social appropriation of themes such as ethics, privacy, and digital awareness. The civil rights framework for the Internet must be extended to cover digital rights, and there are fuzzy lines to draw between changes that may foster equality and welfare and those that increase inequality, perpetuate social and intellectual imbalance, and seem to forget the illuminist Voltaire’s (1971) exhortation Sapere aude, or “dare to know”. As Rousseau (1968) puts it, our free will should therefore be the highest authority of all.
References
Giddens, A. (2013). The Consequences of Modernity. John Wiley & Sons.
Huxley, A. (1998). Brave New World. London: Vintage. (Original work published 1932.)
Orwell, G. (2009). Nineteen Eighty-Four. Everyman’s Library.
Rousseau, J.-J. (1968). The Social Contract. (Original work published 1762.)
Voltaire, F. (1971). Philosophical Dictionary, Vol. 3. Penguin UK.