AI no longer has a plug: about ethics in the design process

Abstract

This publication is meant for everyone who is interested in the role of ethics in the development process of AI. The aim is not just to map ethical issues, but also to examine how we develop ethically responsible AI applications in the future. Do you want to know how we can put ethics into practice? And are you open to a new perspective and approach to ethics? Then this publication is for you.
AI NO LONGER HAS A PLUG
About ethics in the design process
––––––––––––––––––––––––––––––––––––––––
Part III in the series 'The future of artificial intelligence (AI)': Making choices in and for the future
––––––––––––––––––
Rudy van Belkom
The Netherlands Study Centre for Technology Trends (STT)
Table of contents
Foreword: 'A handle for the future of AI' 4
Research approach 6
Reading guide part III 10
1. AI & Ethics 16
1.1 Ethics in the spotlight 18
1.2 Urgent ethical issues 25
1.3 A matter of ethical perspective 48
Guest contribution Marijn Janssen (Delft University of Technology): 'AI governance – the good, the bad and the ugly' 58
2. Ethical guidelines for AI 62
2.1 From corporate to government 64
2.2 Conflicting values 70
2.3 Practical challenges 81
Guest contribution Maarten Stol (BrainCreators): 'Compromises surrounding reliability of AI' 90
3. Ethics in AI design 94
3.1 A new approach to ethics 95
3.2 Ethics by Design 103
3.3 The Ethical Scrum 118
Guest contribution Bernard ter Haar (Ministry of the Interior and Kingdom Relations): 'The ethics of AI in practice' 132
Concluding observations: 'Don't put all the responsibility on the programmer's shoulders' 134
Epilogue: 'Ethics in action' 136
Glossary 138
Sources 146
Appendices: About the author, Participants, The Netherlands Study Centre for Technology Trends 152
Colophon 156
Foreword –––––––––––––––––––––––––––––––––––––––––––––––––––
A handle for the future of AI
By Maria de Kleijn-Lloyd, Senior Principal, Kearney, Chairperson think tank STT futures exploration AI

What is the future of AI in the Netherlands? That is a question that is almost impossible to answer. Because: what is AI exactly, which future scenarios are there and who determines 'what we want', and on what basis? The third part of the STT trilogy 'the future of AI' focuses on that third, normative question. The aim is to generate a broad social discussion about this issue, because AI will touch us all in one way or another: directly, as users of apps, but also indirectly, when other people and organizations use AI, for instance doctors who let a scan get analysed algorithmically to be able to make a diagnosis. This is not science fiction; a lot of it is already possible. Even today, the impact of AI is significant and it is expected that the impact will only grow. That's why it is good to focus explicitly on the associated ethical and social choices.

A lot of work is already being done. A high level expert group of the EU, for instance, has described the main ethical principles of AI, like explainability and fairness, in great detail. But that is not enough, because it is relatively easy to agree when it comes to general principles: of course we want privacy, of course we want fair results. In discussions about a vital infrastructure, these are also known as feel good principles. Of course we are in agreement.

Things tend to become more complicated when we try to translate the principles into practical applications, when we are faced with two challenges. Firstly, we need to find a way to apply the principle in practice. For example, what is a transparent algorithm? One of which the entire code – sometimes multiple terabytes – is published, but which can only be understood by a select group of experts? Or one that comes with information written in laymen's terms regarding the main design choices, source data, operation and side-effects? Secondly, some principles can conflict when put into practice, for instance transparency and privacy. Again, the context is important: medical information is different from Netflix preferences. Who decides which principle takes precedence? We need to take that complicated next step together, because these are choices that the designers of algorithms are already making every day.

This foreword was written during the intelligent lockdown of the corona crisis, in April of 2020. For those who were worried that the Netherlands would miss out when it comes to digitization and AI: our physical infrastructure (from cables to cloud) turns out to be robust enough and most consumers and businesses were also able to switch to working from home with relative ease. We all develop new digital skills with remarkable speed. But what is perhaps even more interesting is that we also use the media on a massive scale to take part in the dialogue about algorithms and apps. For example the 'appathon' that the Ministry of Health organized surrounding the corona app. How do you create that app in such a way that it safeguards the privacy of citizens, cannot be misused and is accurate at the same time? And when we say accurate, does that mean 'not to miss any corona cases' (no false negatives) or 'nobody being quarantined needlessly' (no false positives)? As such, the current situation, no matter how sad, helps us create clarity regarding a number of ethical choices in AI. With 17 million participants nationwide.

I hope that, with the help of this study and in particular via the interactive online components, we will be able to continue a focused and broad dialogue and translate it into practical handles that will lead to an AI future in the Netherlands that we not only accept but can really embrace as well.
Research approach
Part 1: Predicting
When we talk about the future of AI, there appear to be only two flavours: a utopian one and a dystopian one. Often, these discussions begin with ethical questions, while skipping the question whether the technologies can actually produce these scenarios in the future, which is why the focus in part 1 is on the technologies:

How does AI relate to human decision-making and how will AI develop in the future?

To examine that question, we used technology forecasting. Based on literature studies and expert interviews, we mapped the most realistic development trajectory of the technologies (scope 0-10 years). For this part, we consulted 40 experts, from AI experts and neuroscientists to psychologists and management experts.

Part 2: Exploring
The forms in which AI will be deployed in the future depend on the social context. In addition to the utopian and dystopian visions, there are other future visions as well. In part 2, the focus is therefore on the implementation of possible future scenarios:

What are the implications of the way AI develops on decision-making in the future and what are the potential future scenarios that can be deployed accordingly?

To examine that question, we used scenario planning. Via creative sessions, several future scenarios were mapped out (scope 10-20 years). For this part, we organized four scenario workshops with 30 experts from different areas.
–––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Research approach
Artificial intelligence (AI) is high on the agenda of politicians, directors, managers and policy-makers. Despite the fact that a lot has been written about AI, there is a lot that we do not know yet about the way the technologies will develop in the future and what impact they may have on society. That is why it is time for a futures research on AI. A major difference with other technologies is that AI is becoming increasingly autonomous, leaving fewer and fewer decisions up to us, which has led to the following central question:

What is the impact of AI on decision-making in the future?

Human decision-making is the outcome of various components. In addition to factual knowledge, the perception and ambitions of the people involved in the decision-making process also play a role. The way these components are organized is subject to constant change. In addition, the people taking part in decision-making processes often have diverging ideas about reality. What one person considers to be an irrefutable fact is questioned by others. In that sense, you could argue that automated decision-making could introduce the necessary objectivity. However, the question remains to what extent AI can take over human decision-making processes. And to what extent we can and are willing to allow that to happen.

The future is often seen as something that happens to us, when it is in fact something that we, as humans, have the power to influence. The ambition of this futures research is to try and formulate the desired future of AI with a multidisciplinary group of experts, and to examine what we need to realize that desired future. That has led to the following trilogy:
» Technoloy often no loner has a
plu, which means you cannot simply
unplu it. «
Think tank
Durin the exploration, we made extensive use of the
expertise and experience of administrators and relevant
experts, compilin the followin multidisciplinary
think tank:
Marc Burer Capemini CEO
Patrick van
der Duin
STT Director
Bernard ter
Haar
Ministry of
the Interior
and Kindom
Relations
Special advisor
Frank van
Harmelen
VU University Professor Knowlede
Representation &
Reasonin
Fred
Herrebout
T-Mobile Senior Stratey Manaer
Marijn
Janssen
Delft
University of
Technoloy
Professor ICT &
Governance
Maria de
Kleijn-Lloyd*
Kearney Senior Principal
Leendert
van Maanen
Utrecht
University
Assistant professor in
Human-centred AI
Marieke van
Putten
Ministry of
the Interior
and Kindom
Relations
Senior Innovation
Manaer
Jelmer de
Ronde
SURF Project manaer SURFnet
Klamer
Schutte
TNO Lead Scientist
Intellient Imain
Maarten
Stol
BrainCreators Principal Scientific
Advisor
* chairperson think tank STT futures research AI
Part 3: Normative
AI can become the first technology that will determine its own future, in which case it would be more important than ever before to determine the desired conditions for its development. What kind of future do we want? That is why the focus in part 3 is on ethical issues:

Which ethical questions play a role in the impact of AI on decision-making in the future and how can we develop ethically responsible AI?

To answer that question, we used backcasting. Based on an online questionnaire, the desired future of AI was mapped (scope 20-30 years). We then examined which elements are needed to realize that future. For this part, a questionnaire was distributed among three groups, namely experts, administrators and students, more than 100 of whom filled in the questionnaire.
Scope
Usually, the futures research projects of the Netherlands Study Centre for Technology Trends (STT) have a scope of about 30 years. The aim is not to produce concrete statements about a specific year, but to indicate that the explorations look beyond short-term developments. The goal is to overcome the limitations of the current zeitgeist. For that, we need to look beyond boundaries and broaden our horizon in such a way that we can adopt a more future-oriented approach.

This futures research builds on the insights from earlier STT studies on data, namely Dealing with the data flood (2002) and Data is power (2017). Data provides the building blocks for today's AI technologies. The question is whether that continues to be the case in the future or whether the technologies will develop in a different direction.
Readin uide part III
To et a rip on the development of AI, it is important
to et a rip on the operation and application of AI.
That is why in part 1 of this futures research,
‘Submarines don’t swim’, we took a close look at
what AI is, how it works, how it relates to human
intellience, to what extent human decision-makin
can be automated, what the most dominant expert
opinions are with reard to the development of AI
and which economic and political factors affect
the direction in which AI is developin.
Retrospect part II
When people talk about the future of AI, they often think
in extremes: will it be utopia or dystopia? In addition
to the fact that such extremes often work better in
movies and newspaper headlines, this dichotomy also has
to do with the idea that, althouh it is very unlikely
that either scenario will ever occur, the potential
impact can be so reat that it deserves a certain
measure of reflection. That applies both to the utopian
vision (we never have to work aain) and the dystopian
vision (we will become slaves to technoloy). However,
there are multiple flavours.
That is why, in part 2 of this futures research,
‘Computer says no’, we examined several alternative
realities and translated them into five future scenarios,
each with an increasin level of intensity. In the
scenario of Game Over, the limited availability of
resources means that the promise that AI once was never
materialized. In the scenario of The winner takes all,
the need for control and reulation means that AI is
mainly used as a tool to increase human intellience.
In the scenario of Privacy for sale, the quest for
automation means that people are replaced by AI in
different areas. In the scenario of Robot Rihts,
man and machine work and live toether as equals,
while AI transcends human intellience in all domains
in the scenario of The Sinularity is here.
–––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Reading guide part III
Artificial intelligence (AI) appears to be one of the most frequently discussed technologies at the moment, as well as being one of the least understood technologies. In some areas, it is 'dumber' than people think, but in other areas, it is actually 'smarter'. And although having a computer for a Prime Minister still seems a little far-fetched, AI certainly has an impact on our labour market. 'AI is here to stay'. However, these intelligent systems are still often seen as a goal in themselves, without wondering whether AI is the best solution to a given problem. Many people appear to assume that AI is an unstoppable force of nature that we have to put to use somehow. No matter what. We need to realize, however, that AI in itself is neither good nor bad, the question is how it is used by people. So the question is what kind of society we want to be, given all the technological developments. Society will change fundamentally no matter what and AI can help us find the right path. But we do need to find out where it is exactly that we want to go.

Retrospect part I
The overarching nature of AI makes it a concept that is hard to define and there is no unequivocal and internationally accepted definition. In 2019, the Dutch government launched the Strategic Action Plan for Artificial Intelligence (SAPAI), which describes the intention of the government to speed up the development of AI in the Netherlands and profile it internationally. The document uses the definition of the European Commission: 'AI refers to systems that display intelligent behaviour by analysing their environment and – with a certain degree of independence – take action to achieve specific goals'. This sentence is filled with broad terms that can be interpreted in different ways: Systems? Intelligence? A certain degree of independence? Specific goals? And yet, based on this holistic definition, a complete action plan is developed.
» Many researchers will tell you that the heaven-or-hell scenarios are extremely unlikely. We're not going to get the AI we dream of or the one that we fear, but the one we plan for. Design will matter. «
–– Stephan Talty

Who is this publication for?
This publication is meant for everyone who is interested in the role of ethics in the development process of AI. The aim is not just to map ethical issues, but also to examine how we develop ethically responsible AI applications in the future. Do you want to know how we can put ethics into practice? And are you open to a new perspective and approach to ethics? Then this publication is for you.
How is this publication organized?
In the first chapter, we zoom in on AI & Ethics, trying to explain the emergence of ethics in AI and looking at the ethical issues that play a role in today's discussion. Next, in chapter 2, we look at the ethical guidelines that have already been developed and what their limitations are. In the third and final chapter, we look at a new vision and approach to the integration of ethics in the development of AI, focusing on the design as well as the process.

These future scenarios help us to come to grips with the changing relationship between people and technology, in addition to allowing us to identify the desirable elements in the future, giving us something to work towards.
Why this publication?
The way in which AI will be deployed in the future depends to a large extent on the social context. So it is not just about the performance of the technologies and the availability of the resources, but also about strategic interests and social acceptance. What is often overlooked is that we create that context ourselves. The choices we make today will have a major impact on the possible futures, which is why it is important to examine ethical questions and look for answers. Who is responsible when an AI application messes up? Can we grant rights to technologies? And how do we make sure that AI applications are free of prejudices? Fortunately, there are more and more ethical guidelines that have to guarantee the development of reliable AI systems. However, the question is how you can translate those abstract values into concrete practical applications. At the moment, people mostly talk about ethics, but as yet, there are no practical tools for integrating ethics into the development process. If we want to use ethically responsible applications in the future, now is the time to put those ethics into practice.
1 –––––––––––––
AI & Ethics
––––––––––––––––––––––––––––––––––––––––––
 1.1 Ethics in the spotlight
 1.2 Urgent ethical issues
 1.3 A matter of ethical perspective
––––––––––––––––––
Paes: 44
Words: 10.130
Readin time: approx. 1,5 hour
The fact that both AI and ethics are overarching concepts that include various movements and approaches makes the discussion that much more complicated. As a consequence, various AI applications and ethical movements tend to intertwine, which is why it is important to take a closer look at the emergence of ethics in AI and to distinguish different ethical issues and movements.
–––––––––––––––––––––––––––––––––––––––––––––––––––––––––
1. AI & Ethics
When you use the term artificial intelligence (AI), that will make people pay attention in most cases. We all want 'something' with AI. But there is no consensus about the question as to what exactly AI is and what we can do with it. People often speak of 'technology' in generic terms, rather than about specific technologies. It is the result of a series of different technologies that together produce a form of intelligence. This artificial form of intelligence can be realised in different ways. Think, for instance, of whole brain emulation (WBE), which refers to the attempt to transfer a complete brain into a computer. However, when we talk about AI, in most cases, we refer to applications in the area of machine learning, a revolution in which people no longer do the programming (if this, then that), but in which the machines themselves deduce rules from data. Without a large quantity of data, computing power and algorithms, there is no AI.
A similar principle appears to apply to AI; it is not easy to achieve an unambiguous view. Ethics is an area of philosophy that concerns itself with the systematic reflection on which actions can be called good or right. With regard to ethical issues, we all have 'some' opinions, often without being able to tell from which perspective we are reasoning. There is no such thing as 'ethics'. There are various ethical movements, for instance consequentialism (which focuses on the results of our actions) and deontology (which focuses on the starting point, regardless of the consequences of our actions). Think, for instance, of Robin Hood: he steals from the rich to give to the poor. The question whether his actions are ethically responsible depends on the perspective. In the case of consequentialism, the actions of mister Hood are defensible, because they promote equality. But not in the case of deontology: stealing is wrong, even if it is for a good cause.
1.1 Ethics in the spotliht
not only visible in the programming of the conferences, but also in their organization, for instance that of the largest AI conference in the world, the Neural Information Processing Systems conference. A large-scale questionnaire in 2017 showed that the conference did not provide a hospitable environment to female participants. Respondents reported sexual intimidation and sexist or sexually offensive remarks and jokes. The organization therefore decided to introduce a new code of conduct in 2018, to avoid discrimination. In addition, they tried to make the event more inclusive by supporting childcare, among other things, and they also changed the acronym NIPS to NeurIPS, to avoid the association with nipples. A seemingly minor, yet important change. In a previous edition, some male visitors of a workshop about women in machine learning wore a t-shirt with a 'joke' about nipples.
And it doesn't stop at conferences. Dozens of organizations, from businesses and scientists to governments, have set up ethical guidelines to be able to guarantee the development of reliable AI applications. Think, for example, of the 'Perspectives on Issues in AI Governance' of Google, the 'Asilomar AI Principles' of the Future of Life Institute and the 'Ethics guidelines for trustworthy AI' of the High-Level Expert Group on Artificial Intelligence (AI HLEG) of the European Commission. The American Department of Defense has its own 'AI Principles', with recommendations to safeguard the ethical use of AI within the department. Even the Vatican published guidelines in 2020 for the development and use of AI: the 'Rome Call for AI Ethics'. Tech giants like IBM and Microsoft were among the first signatories. Values like privacy, transparency and fairness are given ample attention in different guidelines. In addition to different ethical principles, more and more 'ethical boards' are created to supervise the ethical actions of tech companies. According to Gartner, 'Digital Ethics & Privacy' was one of the 10 'Strategic Technology Trends' of 2019.
1.1 –––––––––––––––––––––––––––––––––––––––––––––––––––––
Ethics in the spotlight
Ethics are a hot topic right now, in particular with regard to the development of AI. Sometimes it would appear as though the subject is reserved exclusively for AI. But of course that is not the case. Various ethical issues play an important role in different domains. Think, for instance, about the discussion surrounding cloning in medical biology or the behaviour of bankers in the financial sector. And yet, we seem to be making little headway when it comes to 'the right actions' in relation to AI, which, when you think about it, is actually not that strange.

As AI is able to operate more and more independently, we will have to relinquish more and more control. For humans, this is new territory and it is bound to create feelings of anxiety, which Hollywood does not hesitate to feed. For almost a century, scenarios about robots rising up against people have been a popular storyline (assuming that Fritz Lang's Metropolis is the first real science fiction movie in which robots have bad intentions). In addition, fiction is often overtaken by reality. In 2016, for example, a Tesla running on autopilot crashed, killing the driver. Physical injury as a result of the use of technology is not new, but physical injury without a human in the loop (HITL) very much is. We can no longer afford to philosophise about what is right and wrong from a distance. More than ever, ethics is a matter of practical philosophy.

Ethics on the rise
Who is responsible for an accident with a self-driving car? How do we protect our privacy with data-hungry applications? And how can we prevent unjust actions by AI systems? Nowadays, it is hard to attend an AI-related conference without part of the programme being dedicated to ethics. And while, until a few years ago, ethical specialists were the support act, nowadays they increasingly make up the main event. That shift is
So is it all just bad news? No, fortunately, there are also beneficial side-effects. But it did draw attention to the need for regulation. Large tech companies, like Microsoft and Amazon, have explicitly called on the US government to regulate technologies like facial recognition.

» Because AI isn't just tech. AI is power, and politics, and culture. «
–– AI Now Institute

Growing resistance
In 2019, the AI Now Institute published another overview of important news about the social implications of AI and the tech industry, and it was different from the year before. There appeared to be a growing resistance to hazardous AI applications. A selection from the 2019 database: an acceleration of regulatory measures (including a ban on the use of facial recognition in San Francisco), increasing pressure from shareholders (like the pressure from the shareholders of Amazon against the sale of facial recognition software to governments) and an increase in the number of protests and strikes of employees of large tech companies (like the climate strike of thousands of employees of, among others, Amazon, Google and Microsoft against the damaging effects of AI applications on the environment). The resistance in 2019 reminds us of the fact that there is still room to determine which AI applications are acceptable and how we want to control them. But again, what is desirable appears to be overtaken by reality.
Unethical ethics
However, it would appear that this 'ethical hype' is limited to big tech organizations. Despite the fact that companies around the world expect they will apply AI within their organization, they lag behind in the discussions surrounding ethics, according to research by Genesys (2019). More than half of the interviewed employers state that their business does not have a written policy regarding the ethical use of AI at the moment. An interesting fact in this regard is that only 1 in 5 of the respondents express any concern about the possibility of AI being used in an unethical way in their organization. That percentage is even lower for older respondents. While 21% of the millennials are worried about such unethical use in their organization, a mere 12% of generation X and 6% of the baby-boomers share those concerns. The researchers wonder whether that is really the right attitude.

In 2018, the AI Now Institute produced an overview of important news about the social implications of AI and the tech industry. The overview confirmed what many people suspected: it was a turbulent year. Just some facts from 2018: an increase in the abuse of data (culminating in the revelation of the Cambridge Analytica scandal, in which millions of personal data were used to influence elections), an acceleration of facial recognition software (like the collaboration of IBM and the New York Police Department, making it possible to search faces based on race, using camera images of thousands of police officers who unwittingly participated) and an increase in the detrimental consequences of testing AI systems on live populations in high-risk domains (like the deadly collision with a pedestrian by a self-driving Uber). And that is just a small selection from a much larger database.
1.2 –––––––––––––––––––––––––––––––––––––––––––––––––––––––
Urgent ethical issues
Within the development of AI, the spotlight is firmly on ethics. Questions about explainability and biases are everywhere. But what exactly do these concepts mean? And how do they express themselves in practice? To be able to answer those questions, we first need to understand which ethical questions are involved and what the potential considerations are. When mapping ethical issues, three central concepts are often used, namely responsibility, freedoms and rights, and justice (Van Dalen, 2013).
Responsibility
When talking about the development of AI, people often mention the term 'responsibility'. Most people, for example, have at some point heard the question who is responsible in case of an accident involving an autonomous car. Is it the passenger, the developer or the system itself? We previously only met machines as independent actors in science fiction movies. This leads to a variety of new issues, both legally and socially. The time for armchair philosophers has passed. These issues are now part of reality. But to determine who is responsible, we first need to determine what it is they are responsible for and what behaviour that does and does not include. The question then becomes how we can deduce the level of responsibility.

What are we responsible for?
A well-known ethical thought experiment is the so-called trolley problem, where the main question is if it is ethically right to sacrifice the life of one person to save the lives of many. To visualize that question, the experiment uses a tram or trolley.
Because of the rapid spread of the corona virus in 2020, governments are using data-driven apps to monitor the spread of the virus and apply lockdowns in a more targeted fashion. People have a track and trace app on their smartphone; when they have been in the vicinity of an infected person, they are notified. Needless to say, that is at odds with our ideas about privacy and, perhaps more importantly, with our views on self-determination and equality. What if people are forced to be quarantined on the basis of a false positive? In the Netherlands, the introduction of these apps met with considerable resistance. At the same time, it appears that many people fail to realize that, when it comes to privacy, these apps may be a lot more friendly than the apps by Facebook or Google that we have been using for years without expressing any privacy concerns. At any rate, it is important to keep asking questions about who benefits from AI, who is disadvantaged by AI and who can and should be allowed to make decisions about that.
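The trade-off behind such a notification app can be made concrete with a small calculation (a minimal sketch, not from the book, with made-up numbers): the same app can be tuned towards missing few infections (few false negatives) or towards notifying few healthy people needlessly (few false positives), but rarely both at once.

# Hypothetical confusion-matrix numbers for two tunings of a notification app.
# tp = correctly notified infections, fn = missed infections,
# fp = healthy people notified needlessly, tn = healthy people left alone.
def rates(tp, fn, fp, tn):
    false_negative_rate = fn / (fn + tp)  # share of infections that are missed
    false_positive_rate = fp / (fp + tn)  # share of healthy people notified needlessly
    return false_negative_rate, false_positive_rate

cautious = rates(tp=95, fn=5, fp=300, tn=700)    # tuned to miss almost nothing
restrained = rates(tp=70, fn=30, fp=50, tn=950)  # tuned to notify sparingly
print(cautious)    # (0.05, 0.3): few missed cases, many needless notifications
print(restrained)  # (0.3, 0.05): the opposite trade-off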
over animals, while other judgments are culturally determined. Participants from Central and South America tend to save women over men and people with an athletic body over obese people, while participants from countries with a high level of income inequality tended to save people with a higher social status over people with a lower social status.

In short, what we consider to be 'responsible' cannot be objectified completely. However, the question is whether that warrants stopping the development of the self-driving car. After all, how often do these types of dilemmas occur in everyday life? And how does that relate to the number of accidents that can be prevented with autonomous vehicles? Especially when everything is connected (so not just cars to each other, but all people in traffic, including pedestrians and cyclists), autonomous systems are able to look ahead more quickly than people can and anticipate potentially dangerous situations. The trolley problem is a good way to philosophise and to shed light on the complexity of ethical dilemmas, but it is still a thought experiment. In practice, developers are more focused on making the self-driving car as safe as possible. You can also look at the route the car has taken before the dilemma occurs. In the case of the trolley problem, the focus is too much on the existing context: why would we create an environment for autonomous cars where vehicles and pedestrians cross paths in the first place? Elon Musk, for instance, is working on an underground form of mobility with 'The Boring Company' where people can be transported by autonomous vehicles in an underground tunnel, completely bypassing the trolley problem altogether.
'A runaway trolley is moving headlong towards a group of five railway workers. You can still intervene by pulling a switch and moving the trolley onto a different track, where there is only one railway worker. Do you save the lives of five people by pulling the switch or do you save the life of one by not pulling the switch?'

Of course this raises a number of questions. Are five lives worth more than one? Are you responsible when you intervene? And what about if you don't intervene? It is interesting to see whether the decision includes the identity of the five people that may be saved and the one person being sacrificed. Do people make the same decision if that one person is their loved one and the five others are strangers? Or when that one person is a young doctor and the other five are senior citizens? The question then is no longer about quantity, but about quality. And that is a question that is very difficult to automate in AI.

With the arrival of the autonomous car, the trolley problem isn't just a mere thought experiment; we now live in a world where those kinds of situations can actually occur. After all, the semi-autonomous cars that are currently allowed on our streets can brake and switch lanes on their own, giving rise to the question what a car should do, for example, when a group of people is crossing the road and the car cannot brake in time. Should it keep driving, to protect the passengers, or swerve and avoid hitting the people crossing the road but killing the passengers? And, of course, the question is if it makes a difference who crosses the road and who is inside the car. To answer that question, MIT developed the Moral Machine to see what people would do in such a situation. The experiment started in 2016 and by now has been filled in by over 40 million people. The results were published in 2018 in Nature. An analysis of the results showed that there are general preferences in some areas, for instance saving young people over older people and saving people
safety, the systems will have their own challenges, for instance in situations where the hardware fails, there is a bug in the software, the system is hacked, the interaction between man and machine falters or when the system fails to properly anticipate other vehicles and people in traffic or unexpected traffic situations (Van Wees, 2018).

However, at the moment, the guidelines surrounding product liability appear to be insufficient for the introduction of autonomous vehicles. For example, right now, it is unclear whether software also falls under different guidelines. The principle of product liability is focused on movable property (like an autonomous car), but the question is whether, legally speaking, software is included in that definition. When autonomous vehicles are introduced from level 3, a paradoxical situation occurs. According to the guidelines of the Society of Automotive Engineers (SAE), vehicles from level 3 can, under specific circumstances, drive autonomously and the driver isn't needed to keep an eye on the circumstances. However, the driver has to be able at any moment to intervene, when the vehicle indicates that it is necessary. But a driver who does not have to pay attention hardly seems able to intervene when called upon.

» The discussion about the trolley problem shouldn't be about the forced choice, but about optimising safety. «
–– Arjen Goedegebure, OGD ict-services
Who is responsible?
But even if the trolley scenario does not occur, accidents can still happen as a result of system errors of autonomous vehicles and the question remains who is legally responsible, for example in case of a collision. Legally prescribed responsibilities are called liabilities. Incidentally, the liability issue is not completely new for AI. Dutch legislation uses the principle of 'product liability'. In Article 6:185 of the Dutch Civil Code, product liability is described as follows: 'The manufacturer is responsible for the damage caused by a defect of his product'. When the driver of a regular car can demonstrate that he suffered injuries and that those injuries are the result of a defective product, the car manufacturer is liable.

According to information from the Dutch government, the rules that apply to normal cars also apply to tests involving self-driving cars. 'The driver is responsible if he is driving the vehicle himself. But if the system does not function properly, the manufacturer is responsible'. According to the government, this legislation is sufficient for the test phase. After all, in the case of the semi-autonomous cars that are currently on the market, the driver is expected to pay attention and intervene when something goes wrong. Various car manufacturers have announced, however, that they intend to market completely autonomous cars in a few years. In that case, product liability will play a bigger role in the future, because fully autonomous cars take over the drivers' tasks, which means that safeguarding their safety is also increasingly the system's responsibility. Despite the reasoning that autonomous vehicles can significantly increase traffic
thins went wron, but in 2019, the company announced
it will start offerin paid taxi rides without a human
driver. Of course, traffic is slihtly more predictable
in the suburbs of Phoenix than it is in the centre of
Amsterdam, but it is still a breakthrouh.
» On the road soon: self-drivin robot
cars without a spare human behind
the wheel «
–– Bard van de Weijer, Volkskrant
Because self-learnin systems are involved (where
choices are not prorammed entirely, but are based
on data and experiences of the system itself), the
manufacturer cannot be held completely responsible.
And without a human driver, the passener can also not
carry all the responsibility. In that case, it should
be possible for the system itself to be held to account.
That may sound like science fiction, but it isn’t.
In 216, Goole’s self-drivin car system was officially
reconized in the US as ‘driver’ by the National Hihway
Traffic Safety Administration (NHTSA), essentially
classifyin the AI-system as the car’s driver.
In the same year, the European Commission for the first
time introduced the term ‘e-personhood’. The term was
used in a report to describe the leal status of the most
advanced autonomous robots. Accordin to the report,
these systems would be able to obtain certain rihts and
obliations to be able to be held accountable for damae
bein caused (Delvaux, 2016). And yes, in addition to
obliations, the systems would also be iven rihts.
Does that mean that, in the future, systems could
also hold humans accountable?
In addition, when people no longer have to drive themselves, their driving skills will decrease over time. The question is whether people are still able to intervene when the system demands it. In most cases this will involve complex situations. In that respect, a self-driving car requires a more competent, rather than a less competent driver. This principle also applies in other situations. For example, if a human doctor has to intervene with a robotic surgeon, he must also have his surgical skills up to date and must have knowledge of the complex system. This makes it difficult to guarantee the principle of human in the loop (HITL).

» The curse of automation: the need of higher skilled operators. «
–– Edgar Reehuis, Hudl

Nevertheless, the development of autonomous systems continues to boom. Tech giants like Google and Alibaba claim to be developing level 4 and even level 5 vehicles, which no longer require a human driver. Waymo (Google's erstwhile self-driving project) has been experimenting with fully autonomous vehicles in some suburbs of Phoenix since 2017, so far always with a human behind the wheel, who could intervene when
'On the road to automated mobility: An EU strategy for mobility of the future 2018' – Source: SAE (2019). The table distinguishes six levels of automation. Levels 0 to 2 count as monitored driving (eyes on, hands on; temporary hands off at level 2), levels 3 to 5 as non-monitored driving (eyes off, hands off).

Level 0 – Driver only: the driver continuously exercises longitudinal AND lateral control.
Level 1 – Assisted: the driver continuously exercises longitudinal OR lateral control; lateral or longitudinal control is accomplished by the system.
Level 2 – Partial automation: the driver has to monitor the system at all times; the system has longitudinal and lateral control in a specific use case.
Level 3 – Conditional automation: the driver does not have to monitor the system at all times, but must always be in a position to resume control; the system has longitudinal AND lateral control in a specific use case, recognises its performance limits and requests the driver to resume control within a sufficient time margin.
Level 4 – High automation: the driver is not required during defined use; the system can cope with all situations automatically in a defined use case.
Level 5 – Full automation: no driver is required; the system can cope with all situations automatically during the entire journey.
retrospect. However, systems that are currently being developed, for instance with the help of deep learning, are so complex that people can hardly understand them. You can't just ask a system for an explanation and it will print out a chronological explanation of its main considerations and decisions. It is a multi-layered web of connections, much like the human brain, come to think of it. We can't look under 'the hood' of the human decision-making process either. When someone causes a traffic accident, we also have to make do with an explanation that is constructed in retrospect. We cannot look into a person's brain and determine exactly what led to the decision that was made.
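What such an explanation constructed in retrospect can look like in practice is illustrated below (a minimal sketch, not taken from the book, assuming Python with scikit-learn): one common post-hoc technique probes a trained 'black box' from the outside and reports which inputs weigh most heavily in its decisions.

# Post-hoc explanation sketch: permutation importance on a black-box model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Shuffle one input feature at a time and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, data.data, data.target, n_repeats=5, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean), key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")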
» It is easy to put transparency and explainability on the agenda, but putting them into practice is enormously complex. «
–– Marijn Janssen, Delft University of Technology

Transparency is a misleading concept anyway; it is always a limited representation of reality. Think of a window, for instance. You can look outside through the glass and it seems transparent. But you can only see what is visible within the borders of the framework. That leads to a governance-related problem. Everything has to be checked and audited; not only the output, but also the decision-making rules and input. However, it is almost impossible to ascertain what the exact origin of the data is, because data is already pruned and reduced before it is used. This so-called data cleansing makes it hard to reproduce the process even before it starts.
How can we deduce responsibility?
As AI will make more and more autonomous decisions (and as a result, will be held accountable more and more as well), it is important to gain insight into the way the decisions of such systems came about. Algorithms increasingly make the clusters themselves, without pre-programmed labels, making it increasingly difficult for people to determine on what the decisions were based. As such, people often compare AI to a black box. An important question in this regard is whether the decision-making process of the future is sufficiently transparent and whether the results can be sufficiently explained, which is why a lot of attention is paid to Explainable AI (XAI).
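To make the idea of systems that form their own clusters concrete, a minimal sketch (not from the book, with synthetic data, assuming Python with scikit-learn): an unsupervised algorithm groups data points without ever being given labels, and it is then up to people to work out what those groups mean.

# Clustering without pre-programmed labels: the algorithm decides the grouping itself.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Unlabelled data: two loose groups of points, but the algorithm is never told that.
points = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

clusters = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(points)
print(clusters[:5], clusters[-5:])  # cluster ids assigned by the algorithm, not by a person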
A distinction has to be made between transparency, the explanation and explainability. Transparency is mostly about the process and the prior criteria, while the explanation and explainability refer to the explanation and deducibility of the decision afterwards. Explainability is very subjective and context-dependent, while transparency and explanation are more objective. As such, putting explainability into practice is difficult. It involves a move from a black box towards a 'glass box'. In practice, it means coming up with an explanation in
[Illustration: 'Robot Rights']
Freedoms and rihts
At a fundamental level, freedom is about the freedom
people have to determine how they want to oranize
their lives. It is even a riht, the riht to self-
determination, which is then limited by the duty not to
harm others. So you don’t have the freedom to break into
other people’s homes. The formulation of rihts is built
around the individual’s freedom. In some cases, people
need to leave you alone or even not do certain thins to
protect your freedoms (freedom rihts), while in other
cases, others – usually overnments – actually have to
make an effort to allow you to realize your freedoms
(social rihts). Freedom rihts and social rihts are
reconized internationally and have been included
in the Universal Declaration of the Rihts of Man.
The emerence of AI bes the question to what extent
we are still completely free and to what extent certain
rihts are bein curtailed, for instance when we are
increasinly bein monitored with facial reconition
software.
To what extent are we free to make our own choices?
The discussion surrounding AI often includes the nightmare scenario in which intelligent robots can take over control and we lose any form of self-determination. That nightmare scenario is based on the assumption that a separate system will have the capacity to exceed human intelligence. What is often overlooked is that, if very powerful specialized systems are connected to each other, that can also create an intelligent system. Instead of 'General AI gone bad', it becomes 'Narrow AI everywhere'.

Cities are increasingly filled with sensors, which increasingly allows intelligent systems to make independent decisions, even without human intervention. These days, any self-respecting city calls itself a Smart City. In other words, a city that uses different kinds of data to optimize processes and tackle problems in areas like traffic, safety and the environment.
In today's discussions about the applicability of AI in decision-making processes, people fail to draw enough of a distinction between the different types of decisions and the impact that the decisions have on the people involved. There's a significant difference if it involves a recommendation for a movie, a medical diagnosis on the basis of lung X-rays or an evasive action of an autonomous vehicle. The severity of the impact depends, among other things, on the potential risks. Which is why we need 'levels of explainability'. For instance, there's less need for explainability in the case of chatbots than in the case of self-driving cars or war drones. As such, the need for explainability and the consequences of making the wrong decisions depend on the context and on the type of decision.

This is closely related to the degree of autonomy that we will grant to the system and thus the relationship we have with technology. Is it 'just' a tool or does it make completely autonomous decisions? In that regard, AI still has a long way to go. Research from the University of Amsterdam into automated decision-making by AI from 2018 shows that many Dutch people are concerned that AI may lead to manipulation, risks or unacceptable results. It is only when more objective decision-making processes are involved, such as a mortgage application, that they feel AI has potential. Human control, human dignity, honesty and accuracy are considered to be important values when reflecting on decision-making by AI.

» There may well come a day when we tell AI 'explain yourself', and AI responds 'you wouldn't understand it anyway'. «
–– Maarten Stol, BrainCreators
Do we have the riht to be forotten?
AI systems penetrate ever deeper into our lives
and sometimes clash with human rihts. Think, for
instance, of the System Risk Indication (SyRI) that
the Dutch overnment uses to combat fraud in the area
of subsidies, taxes and overnment allowances. On the
basis of information involvin, amon other thins,
work, income, pensions and debts, the system calculates
who miht commit fraud. In particular in vulnerable
neihbourhoods. An important objection to such a
system is that the data of all the people livin in a
neihbourhood can be analysed, even if they are innocent,
makin them uilty until proven innocent. Various civil
rihts and privacy oranizations felt that such a system
was unacceptable and sued the Dutch State. And with
success. In 22, the courts ruled that the leislation
om which the use of SyRI is based violates article 8 of
the European Treaty for Human Rihts (ETHR), namely the
riht to respect for our private lives, which requires
a fair balance between the social interests served by
the leislation and the extent to which it affects our
private lives. The courts ruled that the prevention and
restriction of fraud was outweihed by our riht to
privacy.
In 2017, the Rathenau Institute arued in favour of
a new European treaty that would update human rihts
and adjust them to the diital society. The report even
mentions new human rihts, includin the riht not
to be measured, analysed or influenced (the riht to
refuse online profilin, trackin and influencin).
And not without reason. Applications in the area of,
for instance, facial reconition  put more and more
pressure on our riht to privacy. A ood example is
the case surroundin the company ClearView, which
‘scrapes’ millions of pictures from Facebook and other
sites for the benefit of its facial reconition software
and which offers its services to numerous intellience
aencies. Or the siner Taylor Swift who used facial
reconition technoloy to identify potential stalkers
Think, for instance, of cameras using facial recognition software that make it possible to ban hooligans from football stadiums or to monitor social media to map and manage tourist flows in the city.
» In the city of the future, lampposts take part in the conversation, but citizens do not. «
–– Maurits Martijn & Sanne Blauw, The Correspondent

In that sense, democracy can change into a so-called 'algocracy', where cities (and therefore people) are managed by data. More and more experts warn us about a black box society, in which the choices that are made by smart algorithms can no longer be traced, which means that citizens can decide less and less whether or not they want to be a part of this data-driven society. Technology can even be used to create a totalitarian state; Big Brother is watching you. China, for instance, is slowly rolling out a 'social credit system', where Chinese citizens are given a certain score based on their behaviour. On the basis of that score, people can be placed on a blacklist and lose all kinds of rights and privileges. In 2018, 23 million Chinese were banned from buying a train or plane ticket. In addition, restrictions on Internet usage are imposed in more and more places, for example in Pakistan where in 2020, the authorities approved far-reaching new rules that restrict the use of social media, endangering the people's freedom of speech.

The right to make one's own choices can, of course, also be approached from the opposite position. In several areas, AI is (or will be) better than people, in particular when it involves very specialized applications. This goes way beyond extremely powerful chess computers. Even now, algorithms are better at recognizing cancer on lung X-rays than doctors. So the question can then be if certain tasks should even be left up to humans. In that sense, people should be allowed to choose an artificial system over a person.
How can we take the law into our own hands?
Governments are increasingly in the news for 'spying' on us, which means we are increasingly living in a 'surveillance society'. As early as 2013, former CIA employee Edward Snowden alerted the world to the large-scale surveillance by the American National Security Agency (NSA). Big Brother was indeed watching you. Inspired by those developments and the quote from George Orwell's 1984, an Australian clothing brand marketed a specially developed clothing line that hides your telephone. The main feature of the 1984 clothing line is the so-called 'UnPocket', a canvas pocket interwoven with special metal materials that blocks Wi-Fi and GPS signals, among other things. Whether that makes you safe from the NSA remains to be seen, but it does offer people sufficient protection from location tracking.
in her audience. Cameras don't even have to be in the neighbourhood any longer. At the moment, facial recognition systems are being developed for the military that can identify people up to a kilometre away.

A version of the right not to be measured became a reality in the European General Data Protection Regulation (GDPR), a piece of legislation that includes the 'Right to be Forgotten'. The aim is to give people control over their personal information and to see what it is exactly that companies do with that information. That means that all organizations explicitly have to ask for permission to collect and use people's personal data. It also means that users have the right to know what information a given organization processes and how that information is secured. Users also have the right to ask organizations to remove all their personal information. Organizations that fail to comply risk hefty fines. In 2019, the Swedish Authority for Data Protection fined a school for using facial recognition technology to check school attendance. It involved a fine of 200,000 Swedish kronor, which is almost 19,000 euros.
There are exceptions, however. For instance when an
oranization is leally oblied to use data, for example
the data required for a leal ordinance or protectin
public health. This makes the discussions surroundin
apps that are used to monitor epidemics, for instance
durin the corona-crisis in 2020, even more intense.
In addition, the riht to be forotten does not reach
beyond the boundaries of the European Union, accordin
to a rulin by the European Court of Justice in 2019.
The balance between the riht to privacy on the one hand
and the freedom of information of Internet users on the
other will, accordin to a statement by the jude, vary
considerably around the world. That means that search
enine iants are not oblied to remove information
outside the EU countries. As such, in the future, privacy
could become a luxury that is only available to a small
number of people.
What is a fair distribution?
AI is used increasingly to assess credit applications, detect fraud and evaluate job interviews. The starting point is that people are given equal chances and opportunities. However, in practice, that often isn’t the case at all. For instance, in 2014, Amazon developed an AI application to evaluate applicants and select the best candidates. It was only a few years later that they discovered that the application was sexist. The problem was that the algorithm was trained with data from the people who had applied at Amazon over the past 10 years. And because, in the tech sector, most of them are male, the algorithm developed a preference for male candidates. The development of the application was discontinued in 2017. To experience the unequal distribution of the algorithms for themselves, four alumni of NYU Abu Dhabi developed the game ‘Survival of the Best Fit’. The educational game exposes the prejudices of AI in the application process.

So in those kinds of processes, prejudices – and digital discrimination – are actually reinforced. The problem is that data is not diverse enough and, as a result, the technologies are not neutral. Research from the Georgia Institute of Technology shows that even the best image recognition systems are less accurate in detecting pedestrians with a dark skin than pedestrians with a light skin. The researchers indicated that that bias was caused predominantly by the fact that few examples of pedestrians with darker skins were used in training sets. When the input is incomplete, then so is the output. ‘Garbage in, garbage out’.
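To make the ‘garbage in, garbage out’ point concrete: one simple way to surface this kind of skew is to measure a system’s accuracy separately for each group in the test data. The sketch below is only an illustration of that idea; the data structure and the detector function are hypothetical and do not come from the research cited above.

import statistics  # only used if you want to compare the spread of scores

def accuracy_per_group(samples, detector):
    # samples: list of dicts with 'features', 'group' and 'is_pedestrian' keys (illustrative schema)
    correct, total = {}, {}
    for sample in samples:
        group = sample["group"]
        total[group] = total.get(group, 0) + 1
        if detector(sample["features"]) == sample["is_pedestrian"]:
            correct[group] = correct.get(group, 0) + 1
    # A large gap between groups suggests the training set under-represented one of them.
    return {group: correct.get(group, 0) / total[group] for group in total}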
The question is whether there is enouh data from
the more vulnerable roups. It appears that prejudices
aainst the handicapped are even more tenacious
than discrimination on the basis of ender and race.
For instance in the case of self-drivin cars.
The alorithms are trained, amon other thins,
to reconize pedestrians, to avoid cars from drivin over
» I don't know why people are so keen
to put the details of their private
life in public; they foret that
invisibility is a superpower. «
–– Banksy
Rather than wait for revised leislation (which is
not always sufficient), creative entrepreneurs more
and more develop apps and adets to help us protect
our own privacy. Other examples show that the fashion
industry is also very active in this area. For instance
the raphic prints on clothes that confuse surveillance
technoloies or stylized facial masks that make your
face unreconizable for facial reconition software.
There is even a clothin item desined to prevent us
from bein photoraphed all the time (and often uninten-
tionally posted on social media – without bein asked).
The desiner of Dutch oriin Saif Siddiqui developed
the ISHU scarf, a special scarf that reflects the flash
lihts of smartphones thanks to the tiny crystal balls
embedded in the scarf, makin the pictures unusable.
Justice
When we talk about justice, essentially we are talkin
about the equality of people. People should be treated
equally and be iven the same opportunities. That does
not mean that there aren’t or shouldn’t be differences
between people. But when people are treated differently,
there has to be a demonstrable reason to justify that
difference, for instance differences in pay based on
experience and education. The question is whether an
equal distribution and treatment can be safeuarded
with the emerence of AI.
them. So when the training data do not include people in wheelchairs, the technology can place them in hazardous positions. And, although handicaps are relatively common, there are many different kinds of handicap. And they are not always visible. Cars can honk their horn to warn approaching pedestrians, but deaf people won’t hear them. In addition, information about handicaps is very sensitive. People are more reluctant to provide information about their handicap than information about gender, age or race. In some situations it is even illegal to ask for that information.

In 2018, Virginia Eubanks wrote a book entitled ‘Automating Inequality’, describing the way in which automated systems – rather than people – determine which neighbourhoods are checked, which families are given the necessary resources and who will be investigated for fraud. Especially people with fewer resources and opportunities are disadvantaged by those systems.

» Data protection is not a private but a general interest and is at the heart of the constitutional state. «
–– Maxim Februari, philosopher and writer

To what extent are people treated fairly?
The fact that people need to be treated equally and given equal opportunities is something that is embedded in the Dutch Constitution. Paradoxically enough, it is becoming increasingly clear that the AI systems that are used within our legal system actually promote inequality. Think, for example, of developments within predictive policing, whereby criminal behaviour is predicted using large-scale monitoring and data analysis. The risk is that the wrong people are apprehended. What is just in those cases? Do you risk apprehending innocent people or do you risk them committing a crime? The technologies being used, like facial recognition software, are far from perfect. In a test conducted by the American Civil Liberties Union (ACLU) in 2018, it turned out that the software of Amazon wrongfully identified 28 members of Congress as people who had been arrested for committing crimes.

In the US, software is often used to predict the likelihood of people becoming repeat offenders. Research by ProPublica from 2016 shows that the software being used is prejudiced against people with a darker skin colour. It is extremely difficult to estimate how autonomous systems will behave when they are interconnected, for instance with regard to negative feedback loops. When the data shows, for example, that black men are more likely to end up back in prison after being released, the algorithm will determine that black men should remain in prison longer, which in turn affects follow-up figures: black men will end up serving longer prison sentences, which is once again reinforced by the algorithm, and so on. This does not take into account that the figures are biased as a result of human police work, which may well involve profiling. So the biases in algorithms are caused predominantly by biases in people, which begs the question whether a robot judge is indeed more objective.

» Algorithms are as biased as the people making them. «
–– Sanne Blauw, The Correspondent
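A deliberately simplified simulation can show how such a negative feedback loop behaves. The numbers and the update rule below are invented purely for illustration; they are not taken from the ProPublica research.

def update_score(old_score, observed_rate, weight=0.5):
    # The model blends its previous belief with the newly recorded re-offence rate.
    return (1 - weight) * old_score + weight * observed_rate

score = 0.50  # initial risk score assigned to a group
for year in range(5):
    # Heavier policing of a high-scoring group produces more *recorded* offences,
    # regardless of actual behaviour, so the observed rate rises with the score.
    observed_rate = min(1.0, 0.4 + 0.3 * score)
    score = update_score(score, observed_rate)
    print(year, round(score, 2))  # the score creeps upward with every iteration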
How can we safeguard fair procedures?
The question who can and may decide about what is and isn’t ethically responsible in relation to the development and application of AI is as yet not asked often enough. So far, it is especially the large tech companies that appear to be in control. They possess by far the most resources, like data, computing power, money and knowledge. Especially the importance of knowledge is often underestimated. Think, for example, of the time when Mark Zuckerberg was questioned by the American Congress in 2018 about the privacy leak at Facebook. Senators asked him questions that clearly showed they had no idea what exactly Facebook is and does. The question by Utah Senator Orrin Hatch showed it most clearly, when he asked how Facebook was able to make any money when its users didn’t have to pay for it.

» How do you sustain a business model in which users don't pay for your service? «
–– Orrin Hatch, US Senator

To which a visibly stunned Zuckerberg responded: ‘Senator, we run ads.’ This kind of digital illiteracy makes it difficult to regulate complex technologies. However, governments still appear to play an important role in the application of AI, for instance when, in 2019, Russian president Vladimir Putin signed a controversial law making it a crime to ‘show contempt’ for the state and to spread ‘fake news’ online. In the same year, the Iranian government shut down the Internet during protests against the increasing inflation, making it harder for demonstrators to organise and for journalists to obtain information about the situation. It appears that countries want above all to protect their own digital infrastructure and power.

We see the same inward-looking attitude when it comes to ethics. Various countries have launched their own ethical guidelines, from Germany and Austria to Australia and the United States. The question is not which country has the best ethical guidelines; the important thing is to see where they are in agreement. Ethics isn’t a contest, but a team sport. Without a global approach, it is almost impossible to develop reliable AI applications. The different approaches, which are often culturally determined, have to come together. At the moment, these approaches are missing in the global debate about AI and ethics and, as a result, the development of AI looks more like a contest. The financial interests appear to be more important than the moral interests. In addition to the unethical implications of AI, we also have to think about how we design ethical guidelines; otherwise the ethics themselves will be unethical.
Rouhly speakin, we can distinuish two approaches,
namely the normative approach and the non-normative
approach.
The normative approach
Within normative ethics, there are clear moral
positions. ‘Good’ and ‘bad’ are translated into eneral
basic principles, desined to show people how to act and
to serve as a basis for reulatin people’s behaviour.
This approach is also known as prescriptive ethics,
because it provides people with rules and principles
on how to behave.
The non-normative approach
With the non-normative approach, no moral positions are
taken. There is a distinction between descriptive ethics,
meta-ethics and applied ethics. Descriptive ethics is
about describin and understandin what people consider
to be ‘ood’. In the case of meta-ethics, the focus is on
studyin the central concepts of ethics (responsibility,
freedoms & rihts and fairness). The aim is also to see
whether it is possible to create an ethical framework
that can be applied in any situation, reardless of
our own opinions. Applied ethics focuses on specific
domains, like bio-ethics, business ethics or medical
ethics, and looks at ethical questions from a practical
perspective. The question is to what extent principles
from normative ethics can be applied and provide an
answer to the concepts from meta-ethics.
Because AI applications are used in various domains,
it is not easy to create clear ethical frameworks.
What is ‘ood’ or ‘fair’ always depends on the specific
context. Many people are unaware that, in addition
to the different contexts, there are also different
approaches and startin points that often blend toether,
which makes ethical discussions surroundin the
applications of AI often a little messy.
1.3 ––––––––––––––––––––––––––––––––––––––––––––––––––––
A matter of ethical perspective
Within the development of AI , there are diverin
ethical  issues. Who is responsible in the case of
a collision with a self-drivin car? How important
is the riht to privacy  in relation to the riht to
information of Internet users? And to what extent can
a robot jude produce an objective verdict? These issues
are far from unambiuous and, as a result, very complex.
For instance, we may not even want a robot jude to be
objective. What people consider to be fair cannot always
be captured in a formula and very much depends on the
context and possible extenuatin circumstances.
To be able to assess such issues, it is important
to understand that there are different perspectives.
There is often no universally accepted truth, which
makes it difficult to reach a consensus about what,
ethically speakin, the best solution is. Generally
speakin, we aree, for example, that privacy is
important. But in 216, a jude had to decide whether
or not Apple should ive the FBI access to the data on
the iPhone of a terrorist. Not only are these issues
context-dependent, the outcome also depends on the
ethical perspective bein used.
Different kinds of ethics
In the discussions, ethics and morality often
interminle, but there is a clear difference. Morality
is the totality of opinions, decisions and actions with
which people (individually or collectively) express what
they think is ood or riht, while ethics, on the other
hand, is the systematic reflection on what is moral
(Van de Poel & Royakkers, 2011). Different ethical
uidelines , like the uidelines of the European
Commission, are in essence moral uidelines. Morality
is about the actions themselves, while ethics is about
studyin those actions. Within the study of what is
morally riht, there are various subcateories.
Different approaches within ethics
People find different things important, depending on the context. In some cases, the emphasis is on the action itself, while in other cases, the focus is more on the consequences of that action. And sometimes, it is all about the intentions of the person carrying out the action. As such, there are different approaches and opinions within ethics that are sometimes each other’s exact opposites. What is ‘good’ and what are ‘good actions’ are questions that people have thought about for centuries. And with the arrival of AI, those questions are more relevant than ever before. Technologies are becoming increasingly autonomous and have to be able to act autonomously with regard to those issues.

Principle ethics
In the case of principle ethics, a principle is always used as a starting point, for instance respect for life and human dignity. The solution of an ethical problem has to observe one or more of those principles. The principle has to be applied at all times, regardless of the consequences, so people’s actions are considered to be moral as long as they observe said principles. Some actions can be considered to be good, even if their consequences are negative. And vice versa. These behavioural rules, or values, are agreements about how we treat each other. Although there are often no concrete sanctions for violating these values, they are maintained by society as a whole. Many religions contain such behavioural rules, like the 10 Commandments in the Bible. Some values have been formulated as laws, like laws prohibiting discrimination.

Principle ethics is also known as deontology. The most famous proponent of deontology is the German philosopher Immanuel Kant. In his ‘Kritik der praktischen Vernunft’ of 1788, he formulated the ‘categorical imperative’. The most well-known statement is ‘Act in such a way that you would wish that your principle would be turned into a universal law of nature’. For instance, when you wonder whether you are allowed to throw waste out of your car window, it’s easy to realize that, if everyone were to do so (if it were the universal law), all the world would be a mess, so it cannot be the standard.

» Act in such a way that you would wish that your principle would be turned into a universal law of nature. «
–– Immanuel Kant

Consequential ethics
Consequential ethics states that the consequences of a given action determine whether or not it was ‘right’. In other words, the consequences have to be positive, even if that undermines certain principles. So the action itself is not called into question, only its consequences. To assess the consequences of an action, values are used. A value is a goal that we want to achieve as a society through our actions, for example justice or freedom. They are also known as end-values. Qualities that help people realize those end-values are known as instrumental values, like helpfulness and responsibility.

Consequential ethics is also known as consequentialism or utilitarianism. The founder of utilitarianism is the British philosopher Jeremy Bentham. In utilitarian theory, the moral value of an action is measured by the contribution that action makes to the common good. So the question is to what extent the action contributes to the maximisation of happiness. The goal of an action is always to provide the maximum amount of happiness to the largest possible group of people. If, in exchange for that, a small group of people has to face negative consequences, that is considered to be acceptable.

» The end justifies the means «
Virtue ethics
In the case of virtue ethics, the moral focus is not on the rules or certain principles, but on the character of the person performing the action. Again, the action is separated from its explicit consequences. To be able to perform morally sound actions requires certain character traits, or virtues. A virtue is a positive character trait steering a person’s behaviour. When virtues are used to assess a person’s actions, the focus is not on any individual action, but on the person involved and his or her intentions.

Aristotle is considered to be the founder of virtue ethics. Unlike deontology and consequentialism, the human being is taken into consideration. According to Aristotle, good actions are actions that make you a better human being, which means that people have to keep working on themselves. Virtues can be developed. A virtue is seen as a kind of happy medium between more extreme behavioural characteristics. For instance, bravery is a virtue that lies between hubris and cowardice.

» Doing a good deed is easy; developing the habit to always do that isn’t. «
–– Aristotle

Not every great thinker can be assigned to one of the three categories described above. The German philosopher Friedrich Nietzsche, for instance, was seen as an ‘ethics critic’. He argued that ethical opinions at the time (1887) were based on a ‘slave morality’ and that people were docile and no longer thought for themselves. Nietzsche wanted people to free themselves from this (at the time above all Christian) morality. Man should design his own ethics and create his own values. According to Nietzsche, ethics is not a matter of duty or virtue, but of personal preferences.

Different opinions about what is ethically responsible
To be able to judge the various ethical discussions within the development of AI systems, it is important to determine the ethical principles being applied. The question whether it is acceptable to sacrifice the life of one person to save the lives of several people (trolley dilemma) to a large extent depends on the starting point. A distinction has to be made between the action (deontology), the consequences of the action (consequentialism) and the person performing the action (virtue ethics) – depending on the outcome of the responsibility issue, in this case the person in the car, the manufacturer or the system itself.

What should a car do when a group of people is crossing a zebra crossing and the car cannot brake in time? Continue, saving the life of the person inside the car, or swerve, saving the people crossing the road but killing the person inside the car?

The question is, then, whether it can be justified if the car does not intervene and protects the life of the person inside the car rather than the lives of the group of pedestrians. Within the framework of consequentialism, it cannot be justified, because it kills the group of pedestrians instead of one person. However, within the framework of deontology, it can be justified, because intervening would make the car deviate from its natural course and the system would be responsible for killing the person in the car, and killing someone is against the law, even if it saves the lives of others. Within the framework of virtue ethics, the opposite can be argued based on a similar starting point. When the car has the opportunity to intervene but fails to do so, it displays a lack of virtue and its actions are immoral. You always take other people into consideration and act responsibly.
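The difference between these starting points can be made explicit by writing each of them down as a decision rule. The sketch below is a deliberate oversimplification of the dilemma described above, intended only as an illustration, not as a proposal for how such a car should actually be programmed.

def consequentialist_choice(pedestrians, occupants):
    # Only the outcome counts: minimise the total number of victims.
    return "swerve" if pedestrians > occupants else "continue"

def deontological_choice(pedestrians, occupants):
    # The action itself counts: actively steering into the occupant is an act
    # of killing, so the rule forbids intervening, whatever the numbers are.
    return "continue"

print(consequentialist_choice(5, 1))  # -> 'swerve'
print(deontological_choice(5, 1))     # -> 'continue'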
However, these types of dilemmas are not always clear-cut. Think, for instance, about whether or not Apple should give the FBI access to the data on the iPhone of a terrorist, which was the central question in a court case in 2016. A year earlier, a terrorist shot and killed fourteen people in San Bernardino. At the time, the FBI confiscated the man’s iPhone, to gain access to vital data, like information about contacts and possible accomplices. Of course, the iPhone is protected with a PIN code. When the wrong code is entered 10 times in a row, all the information is erased from the phone, which is why the FBI asked Apple to help circumvent this security. At the time, that was not yet possible, and nor did Apple want to help develop it. At face value, it appears to be a relatively simple question, assuming it was just about the one phone. But that was not the case. If Apple were to agree to the FBI’s request, it would have to weaken the security of all iPhones. Apple had been working for years to improve security and this could do considerable damage to their reputation.

In addition, they wanted to make an example: when they allowed the FBI to gain access to the personal data of their users, Apple feared that would be the thin end of the wedge and other government agencies would make similar requests, which would potentially violate people’s right to privacy. Which is why Apple argued that only the user has access to the data of a secure phone and nobody else.

It is hard to determine whether or not Apple’s position was morally responsible. After all, it is far from clear that the right to privacy should always trump the war on terror. As such, it is unclear which action is moral in this case and what the interests are of the wider population. The right to privacy or the war on terror? It also has to do with the short-term versus the long-term impact. Here, it was just the one case, but in the future, the lives of many people could potentially be spared if the FBI was able to fight terrorists more effectively. Ultimately, the judge ruled that Apple had to help the FBI to unlock the terrorist’s iPhone. Contrary to what was demanded earlier, namely to create a ‘backdoor’ for the FBI, Apple only had to guarantee access to that specific iPhone.

It is unclear whether the verdict provided justice for all involved. In many cases, there is no consensus about what the best solution is. Ultimately, it is about the question what the best society is, and it will take us a while to figure that out. For example, what is better? A society in which a large group of people is marginally happy or a society in which a small group is very happy? There is no absolute philosophical theory. In the end, it should be society itself that determines what the best society is. It is a democratic issue. And even when we reach a consensus as a society about what we consider to be important, the question as to how you can integrate that into AI remains unanswered. But first things first. Before we can determine how we can integrate ethical guidelines into the design of AI applications, we need to map which ethical guidelines there are (deontology) and to what extent those guidelines endure in practice (consequentialism and virtue ethics).

» My ethics are not by definition also your ethics. «
–– Patrick van der Duin, STT
Guest contribution ––––––––––––––––––––––––––––––––––––––
AI governance - the good, the bad and the ugly
By Marijn Janssen, Professor ICT & Governance, Delft University of Technology

Artificial intelligence (AI) is used in various places to create a better society. It is used, for instance, to detect illnesses at an earlier stage and to tackle social problems (‘the good’). At the same time, AI is used to influence our political preferences and to commit large-scale fraud (‘the bad’), while it can also copy human prejudices and dictate everyday life without people noticing. AI monitors our behaviour to determine our health and insurance premiums and implicitly discriminates against population groups (‘the ugly’, in other words the mean one). It is especially in the latter category that AI can be pernicious, because the unintended effects are not immediately visible and we are often unaware of them. Whereas ‘the good’ and ‘the bad’ are more explicitly visible, ‘the ugly’ is less visible and harder to ascertain.

The social call for transparent, responsible and fair AI is therefore a justifiable one. Often, it is argued that AI should be transparent, but what that means in practice is not addressed. Implicitly, it is said that we need to understand the algorithm, but is the situation without AI transparent and do we understand what happens in the human brain when people make decisions? Unknown makes unloved. Is it even reasonable to expect a complex algorithm to be transparent and that everybody will be able to fathom its complexity? Most people don’t even bother to read the conditions of an app or website they use and agree without thinking. Complex algorithms involve advanced maths. For most people, complete transparency is an illusion and the question is how we will deal with AI as a society.

In the science fiction show Star Trek, the Vulcans have a solution. “Logic is the cement of our civilization, with which we ascend from chaos, using reason as our guide” (T’Plana-Hath, Matron of Vulcan Philosophy, Star Trek IV: The Voyage Home, 1986). Logic should lead to a discussion about prejudices, making sure we have all the facts and preventing us from seeing only what we want to see. The logic that the Vulcans have adopted leads to a technocratic perspective and also has its disadvantages, because there are always people playing politics and gaming the system. In addition, information is almost never complete and not all snakes in the grass are visible. We need logic, but we also need a suitable governance model.

The sensitivity of algorithms to change and their capricious nature because of the many variables make it hard to govern AI algorithms. Such a governance model not only needs to steer the technology, but also comprehend it. And that takes more than technological know-how alone. It takes a governance model that contains a well-defined shared model on how to deal with AI. Before people are allowed to use AI, it should first be tested, the way we only market medicines after rigorous testing and insight into possible side-effects. AI systems have to meet requirements to prevent the creation of a surveillance state, where all of our actions are affected by AI without our knowledge or consent.

Furthermore, organizational knowledge is needed to understand the ethical consequences. In the case of AI-driven autonomous cars, we may no longer need a steering wheel, but that doesn’t mean that the car is no longer being driven. The governance model has to make sure the bad and ugly sides of AI don’t occur. At the moment, AI governance is not yet mature, even though we are already using AI on a large scale. Let’s use AI governance to move toward an AI society where people are in the driver’s seat and computers provide support.
2 –––––––––––––
Ethical guidelines for AI
––––––––––––––––––––––––––––––––––––––––––
2.1 From corporate to government
2.2 Conflicting values
2.3 Practical challenges
––––––––––––––––––
Pages: 30
Words: 6427
Reading time: approx. 50 minutes
2. –––––––––––––––––––––––––––––––––––––––––––––––––––––
Ethical guidelines for AI

The time when AI systems predominantly made the news by beating chess grandmasters appears to be far behind us. Of course, at the time, people wondered: if computers can beat us at chess or Go, in which other areas would they be able to beat us? What does that do to the relationship between people and technology? And what does it mean for our humanity? Interesting questions, but still fairly abstract and philosophical in nature. Since then, AI systems have been applied in a wide variety of domains and we are faced with a growing number of issues involving applied ethics. It’s no longer a question of ‘what if?’, but of ‘what now?’.

For instance, autonomous vehicles have claimed their first fatalities, our right to privacy is undermined by the use of location apps and entire sections of the population are disadvantaged by fraud detection systems. Now the hypotheses have been confirmed, it would appear that we have woken up. From the private sector to social organizations and governments, they have all begun to formulate ethical guidelines. A good and important first step. Especially for proponents of principle ethics. It is interesting to see what the similarities and differences between these guidelines are.

The question is, however, whether the development of guidelines is enough. Ultimately, those guidelines have to be translated into practice. And, as always, practice is less malleable than theory. For instance, how do we deal with conflicting values? And what challenges still await us? It is important to also look at the consequences. Fortunately, there are more and more organizations that have been working on a variety of checklists, assessments and toolkits, which brings us a step closer to putting ethics into practice, although we need to make sure that these tools really help us along in the design process. Do these tools actually allow us to act ethically?
2.1 –––––––––––––––––––––––––––––––––––––––––––––––––––––
From corporate to government

In recent years, various companies, research institutes and government organizations have set up different principles and guidelines for ethical AI, at a national, continental and global level.

Emergence of ethical guidelines
To create order in the fragmented discussion about the development of ethically responsible AI applications, researchers of the Berkman Klein Center in 2020 carried out an analysis of the 36 most prominent AI guidelines, which they also translated into a clear timeline.

‘A map of Ethical and Rights-based Approaches to Principles for AI’
– Source: Berkman Klein Center (2020).

The timeline clearly shows that the frequency of publications has increased enormously in recent years. A distinction is made between principles and guidelines of:
> Social organizations, like the ‘Top 10 Principles for Ethical AI’ of the UNI Global Union (2017);
> Governments, like the ‘AI Ethics Principles & Guidelines’ of Dubai (2019);
> Intergovernmental organizations, like the ‘Principles on AI’ of the OECD (2019);
> Multi-stakeholders, like the ‘Beijing AI Principles’ of the Beijing Academy of AI (2019);
> The private sector, like the ‘Everyday Ethics for AI’ of IBM (2019).

Ethics is clearly no longer only a European affair. In 2019, ‘even’ the Chinese government launched the ‘Governance Principles for a New Generation of AI’. Obviously, China realizes that, if it wants to continue to do business in AI internationally, it needs to address the subject of ethical AI applications. They do like to stay in control and find fraudulent practices as undesirable as any other government.

Distribution of ethical guidelines
Ethical principles and guidelines come in a variety of types and sizes. Despite the fact that there are so many principles and guidelines, their distribution is limited. In 2019, researchers at ETH Zurich analysed no fewer than 84 ethical guidelines that were published worldwide in recent years. From the private sector to social organizations and governments. The research shows that most ethical guidelines come from the US (21), Europe (19) and Japan (4).
in international oranizations settin up uidelines,
only a few of them actually published ethical uidelines
of their own. However, accordin to the researchers,
that is very important, because different cultures have
different opinions about AI. A lobal colla boration
is needed to provide ethical AI in the future that
contributes to the welfare of individuals and societies.
A first attempt to realise a lobal collaboration was
made by the Oranisation for Economic Co-operation and
Development (OECD), a coalition of countries aimed at
promotin democracy and economic development, which,
in 2019, announced a set of five principles for the
development and use of AI. However, because China isn’t
a member of the OECD, it has not been included in the
creation of the uidelines. The principles involved
appear to be at odds with the way AI is used. Especially
with reard to facial reconition and supervision of
ethnic roups that are bein associated with political
dissidence. But, especially in the case of conflictin
opinions, it is important to open a dialoue and try
to reach a kind of consensus.
Similarities and differences
The researchers of ETH Zurich not only looked at the
eoraphical distribution of the ethical principles and
uidelines, but also at the similarities and differences
between the principles. The study shows that, althouh
no ethical principles are exactly identical, there
is a clear converence surroundin the principles of
transparency , justice  and fairness , reliability,
responsibility  and privacy. These principles are
mentioned in more than half of all sources.
The hihest ‘uideline density’ is found in the
United Kindom, where no fewer than 3 ethical
uidelines were published. Member states of the
G7 produced the hihest number of ethical uidelines.
The G7 (Group of 7) consists of seven important
industrial states, namely Canada, Germany, France,
Italy, Japan, the United Kindom and the United States.
In 1997, the European Union also joined the G7, but
the name was not adjusted to reflect that. It is no
coincidence that these countries publish the most
uidelines. In 2018, the ministers of the member states
responsible for innovation have sined a G7-declaration
involvin Human-centric AI, in which they presented a
joint vision that is desined to strike a balance between
encourain economic rowth throuh AI innovation,
increasin the confidence  in and acceptance of AI
and promotin inclusivity in the development and
implementation of AI.
Althouh the overview is a snapshot (for instance,
two new uidelines were published in China after
the publication of the study), it does present a clear
division. So far, it is above all the richer countries
that dominate the worldwide discussion about AI.
Althouh some developin countries were involved
>–15
5–14
2–4
1
G7 members
‘Geographic distribution of issuers of ethical AI guidelines by number of documents released’
– Source: ETH Zürich (2019).
Aain, it is clear that, when you look at the details and
interpretations, there are clear differences between the
ethical principles and uidelines. Not only in the extent
to which certain principles have been worked out, but
also in the extent to which they refer to international
human rihts, both as a eneral concept and in terms
of specific documents, like the ‘Universal Declaration
of Human Rihts’ or the ‘United Nations Sustainable
Development Goals. Some ethical uidelines even use
an explicit ‘Human Rihts Framework’, which means
that human rihts are the basis for the formulation
of ethical uidelines for the development of AI
applications. Aainst expectations, it is above all
the uidelines of the private sector that refer to
human rihts, and to a lesser extent the uidelines
of overnments.
However, there are sinificant differences in the way
ethical principles are interpreted. In particular the
specific recommendations and areas of attention that
were based on each principle vary enormously.
For instance, accordin to some uidelines, AI is
above all meant to make the decision-makin process
explainable, while other uidelines arue that it
is necessary for the decisions of AI to be completely
traceable as well. That is why the researchers emphasize
the need to interate the various uidelines to reach
a worldwide consensus about adequate implementation
strateies.
The researchers of the Berkman Klein Center arrived at
a similar conclusion. Based on an analysis of the terms
bein used, they presented a list of eiht overarchin
principles, which in broad lines match the terms of
the study by ETH Zurich.
‘A map of Ethical and Rights-based Approaches to Principles for AI’
– Source: Berkman Klein Center (2020).
Principled
Artificial
Intelligence
May 2018, Canada
Toronto Declaration
Amnesty International | Access Now
Oct 2018, Belgium
Universal Guidelines for AI
The Public Voice Coalition
Jan 2019, United Arab Emirates
AI Principles and Ethics
Smart Dubaiz
Feb 2019, Singapore
Monetary Authority of Singapore
Jun 2019, China
Governance Principles
for a New Generation of AI
Chinese National Governance Committee for AI
Mar 2019, Japan
Social Principles of
Human-Centric AI
Government of Japan; Cabinet Office;
Council for Science, Technology and Innovation
Mar 2019, United States
Ethically Aligned
Design
IEEE
Mar 2019, United States
Seeking Ground
Rules for AI
New York Times
May 2019, China
Beijing AI
Principles
Beijing Academy of AI
Jun 2019, China
AI Industry Code
of Conduct
AI Industry Alliance
Jan 2017, United States
Asilomar AI Principles
Future of Life Institute
Apr 2018, United Kingdom
AI in the UK
UK House of Lords
Jun 2018, India
National Strategy for AI
Niti Aayog
Apr 2018, Belgium
AI for Europe
European Commission
Mar 2018, France
For a Meaningful AI
Mission assigned by the
French Prime Minister
Jan 2018, China
White Paper on AI
Standardization
Standards Administration of China
Nov 2018, United States
Human Rights in
the Age of AI
Access Now
Oct 2016, United States
Preparing for the
Future of AI
U.S. National Science and
Technology Council
Dec 2018, Canada
Montreal Declaration
University of Montreal
Feb 2018, United States
Microsoft AI Principles
Microsoft
Feb 2019, Chile
Declaration of the Ethical
Principles for AI
IA Latam
Oct 2019, United States
IBM Everyday
Ethics for AI
IBM
Jan 2019, Sweden
Guiding Principles on
Trusted AI Ethics
Telia Company
Oct 2018, Spain
AI Principles of
Telefónica
Telefónica
Jun 2018, United States
AI at Google:
Our Principles
Google
Oct 2017, United States
AI Policy Principles
ITI
Apr 2017, China
Six Principles of AI
Tencent Institute
Sep 2016, United States
Tenets
Partnership on AI
Nov 2018, Germany
AI Strategy
German Federal Ministries of Education,
Economic Affairs, and Labour and Social Affairs
Jul 2018, Argentina
Future of Work and Education
for the Digital Age
T20: Think20
Dec 2017, Switzerland
Top 10 Principles
for Ethical AI
UNI Global Union
Jun 2018, Mexico
AI in Mexico
British Embassy in Mexico City
1
2
3
3
May 2019, France
OECD Principles on AI
OECD
June 2019, Rotating (Japan)
G20 AI Principles
G20
Dec 2018, France
European Ethical Charter
on the Use of AI in
Judicial Systems
Council of Europe: CEPEJ
K
E
Y
T
H
E
M
E
S
Apr 2019, Belgium
Ethics Guidelines for
Trustworthy AI
European High Level Expert Group on AI
Government
Civil society
Inter-governmental
organization
Multistakeholder
Private sector
Principles to Promote FEAT
AI in the Financial Sector
International Human Rights
Promotion of Human Values
Professional Responsibility
Human Control of Technology
Fairness and Non-discrimination
Transparency and Explainebility
Safety and Security
Accountability
Privacy
-----------------
-----------------
‘Ethical principles identified in existing AI guidelines’
– Source: ETH Zürich (2019)
73/84
68/84
60/84
60/84
47/84
41/84
34/84
28/84
14/84
13/84
6/84
Transparency
Justice & fairness
Non-maleficence
Responsibility
Privacy
Beneficence
Freedom & autonomy
Trust
Sustainability
Dignity
Solidarity
Ethical principle Number of
documents
2.2 –––––––––––––––––––––––––––––––––––––––––––––––––––––
Conflicting values

Ethical principles can help us reach an agreement about what we consider to be important and allow us to develop AI applications to match. The question is, however, to what extent these guidelines survive in practice, because the focus is very much on the conditions and to a lesser extent on the consequences. In practice, dilemmas occur, which can create value conflicts. For instance, on a fundamental level, we all agree that it is wrong to kill another person. But what do you do if a terrorist threatens to kill 100 people? Should an autonomous weapon drone be allowed to intervene? In practice, there are often extenuating circumstances. Another example is that we all agree that stealing is wrong. But what if a single mother steals to feed her baby? Should a robot judge simply apply the rules and punish the mother to the full extent of the law? Ethical guidelines shouldn’t just be about what we think is important, but also about how important we consider different values to be in relation to each other. And in which circumstances.

When we take the ‘Ethics Guidelines for Trustworthy AI’ of the European Union as a starting point, we notice that such trade-offs are hardly mentioned. In fact, it is emphasized that all the requirements are equally important and that they support each other. The report includes only one small paragraph about trade-offs, stating that trade-offs can occur and that the pros and cons have to be weighed, although what those pros and cons are and how they can be weighed remains unclear. The report only indicates that the pros and cons have to be evaluated and documented.

‘Interrelationship of the seven requirements: all are of equal importance, support each other, and should be implemented and evaluated throughout the AI system’s lifecycle’
– Source: High Level Expert Group on AI (2019).
The seven requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; accountability – all to be continuously evaluated and addressed throughout the AI system’s life cycle.

What do we find really important?
According to the guidelines of the European Union, the seven ‘key requirements’ are equally important. However, when you ask people to rank them, it turns out that there are differences in how much value we attach to different principles. Our own research shows that the values autonomy (‘Human Agency and Oversight’), privacy (‘Privacy and Data Governance’) and equality (‘Diversity, Non-Discrimination and Fairness’) have a much higher score than explainability (‘Transparency’) and responsibility (‘Accountability’). We presented respondents (n = 108) with seven principles and asked them to rank them in order of importance: 1 = a low priority and 7 = a high priority.

Principle                                                      Average score
People staying in control (autonomy)                           4,81
Protecting personal data (privacy)                             4,80
Fighting inequality (preventing biases)                        4,67
Optimising human choices (efficiency)                          3,88
Increasing explainability of the system (solving black box)    3,77
Solving accountability problem (legislation)                   3,34
Improving international position in AI (geopolitics)           2,73
It becomes even more interesting when the different principles are pitted against each other. In general, both privacy and equality are given a very high score. However, when we pit them against each other, most respondents prefer to stay in possession of their data instead of relinquishing control of their data to further equality. The differences are even greater in other areas. For instance, some respondents indicate that, if we do not understand how an AI system obtains its results, we shouldn’t use it, while just as many other respondents state that it doesn’t matter how an AI system gets its results. They find it above all important that the system performs as expected. So people have different opinions when it comes to these types of principles, opinions that are context-dependent, making it difficult to translate general guidelines into practice. In many cases, a balance will have to be found.
» Norms and values are universal. They are the moral judgments that vary among people. «
–– Wiegert van Dalen, Ethicist
The other questions also show that autonomy is considered to be very important. For instance, a large majority of the respondents indicate that strict legislation and regulation is needed to maintain control of AI, even if that were to slow down the development of the technology.

However, when we zoom in, we can see that there are differences, especially between the different groups of respondents. We presented the same questionnaire to three different groups: AI experts, administrators and students. The analysis shows that AI experts tend to take more risks with AI than the other groups. For instance, they are more open to AI systems making autonomous decisions and they are prepared to accept a higher margin of uncertainty in the systems. When it comes to acquiring an international advantage in the development of AI, on the other hand, it is the administrators who are willing to take greater risks, indicating that, as a country, we should do everything we can to secure an international lead in the development of AI, even if that leads to international tensions. Students are relatively speaking more concerned about their privacy.

What trade-offs are there?
When AI systems are applied in practice, there are various possible value conflicts, both between values and within values. What we consider to be important depends, among other things, on the context in which the system is applied. There’s a big difference whether it involves a recommendation for a movie, a diagnosis on the basis of lung X-rays in the hospital or a recommendation concerning a business takeover. Furthermore, there are also technical considerations. For example, when we go all out for transparency, that will affect the level of privacy. That doesn’t mean we have to relinquish our privacy altogether.
A similar trade-off occurs between privacy and accuracy. Generally speaking, the more complete and encompassing the data set with which an AI system is trained, the more accurate the system will be. For instance, when AI is used to predict future purchases of consumers on the basis of their purchasing history, the model will be more accurate if the data it can use is enriched with, for instance, demographic information. However, collecting personal data can violate the privacy of the customers.

When the data set is incomplete, that can lead to skewed or discriminating results. There can be a trade-off between privacy and fairness. Organizations can take various technical measures to limit the risk of that happening, but most of those techniques will make the system less accurate. For instance, when you want to prevent a credit rating system from assigning people to a certain class based on where they live or what their ethnicity is, the model should not include those data. However, although that can help prevent discriminating results, it will also lead to less accurate measurements, because a person’s zip code can also be an indicator of a legitimate factor, like job security, so leaving it out will reduce the accuracy of the results.
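A minimal sketch of this trade-off, assuming a pandas/scikit-learn workflow and invented column names: leaving the sensitive column out of the model reduces the risk of discriminatory scores, but it also removes signal the model could have used.

import pandas as pd
from sklearn.linear_model import LogisticRegression

data = pd.DataFrame({
    "income":    [2500, 3100, 1800, 4200, 2900, 3600],
    "zip_code":  [1011, 1011, 9731, 1011, 9731, 9731],  # can act as a proxy for ethnicity or neighbourhood
    "defaulted": [0, 0, 1, 0, 1, 0],
})

features_full = data[["income", "zip_code"]]  # potentially more accurate, higher fairness risk
features_fair = data[["income"]]              # potentially less accurate, lower fairness risk
target = data["defaulted"]

for label, features in [("with zip_code", features_full), ("without zip_code", features_fair)]:
    model = LogisticRegression().fit(features, target)
    print(label, "training accuracy:", model.score(features, target))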
In turn, these considerations affect the safety of the system. If your model is less accurate, the likelihood of errors is greater, which will affect safety. If you optimize safety at the expense of explainability, that will in turn affect accountability, because accountability is harder to establish when the system’s explainability is limited. As such, trade-offs can be placed on a spectrum and people need to decide where on that spectrum they feel most comfortable. There is no one size fits all; it has to be tailor-made. It is not a zero-sum game where we have to trade one thing for another, but choices will have to be made in the design. To be able to maximise values in relation to each other, any potential tensions first have to be identified.

Technical trade-offs
One important trade-off in practice is that between accuracy and explainability. Methods that are currently being used in the development of AI, like deep learning, are so complex that the exact decision-making processes are impossible to trace. At the moment, the optimisation of these types of systems takes place on a trial and error basis: the input is tweaked to see what it does to the output. If you want to optimise the accuracy of the system, you will have to give up part of the explainability. On the other end of the spectrum we find linear regression, which, compared to deep learning, is a method that is far from flexible, but is easy to explain. Sometimes people choose this method for the sake of explainability, even if they know that the relationship between the underlying variables isn’t directly proportional.

» If you have arbitrary data and you want to be able to learn from it, you pay a price for that. «
–– Maarten Stol, BrainCreators
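The trade-off can be made tangible with two standard models on the same synthetic data: a linear regression whose entire ‘explanation’ is a handful of coefficients, and a random forest that usually fits non-linear patterns better but whose individual predictions are much harder to trace. The data and model choice below are illustrative assumptions, not the specific methods discussed in the text.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, size=200)  # non-linear target

linear = LinearRegression().fit(X, y)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

print("linear R^2:", round(linear.score(X, y), 2))  # easy to explain, less accurate on non-linear data
print("forest R^2:", round(forest.score(X, y), 2))  # more accurate here, much harder to explain
print("linear coefficients:", linear.coef_)         # the whole 'explanation' fits in three numbers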
If, for instance, you want to use an algorithm that determines what you can cook on the basis of ingredients, it is very important who determines what a successful outcome is. Parents want their children to eat healthy, but the kids themselves would much prefer something that tastes good. At the moment, the decision as to what a successful outcome is still lies in the hands of a very small group of people.

Contextual trade-offs
What we find important to a large extent depends on the context. Take, for instance, privacy. If a doctor asks you to take off your pants to examine your private parts, that is usually not a problem. But if your baker asks you the same question, that’s a violation of your privacy. The same applies to explainability, the need for which will be lower when talking about a chatbot compared to a self-driving car. As such, the importance depends, among other things, on the risks involved. The acceptable level of subjectivity also depends on the context. For example, a ‘faulty’ recommendation by Netflix won’t do much damage, but when a medical diagnosis is wrong, that may have far greater consequences. Although the subjectivity of Netflix’s recommendation system is much higher than that of the image recognition software of a hospital, the latter has to meet much higher standards.

The question what we find important also depends on the perspective, which is often culturally determined. In many cases, the data for image recognition software is still labelled by people. In some cultures, an image of a man or woman with a glass of beer is labelled with having a good time, being together, partying, etc., while in other cultures, it is tagged with alcoholism, rowdiness, etc. The perspective also depends very much on age. For instance, many people consider it inhuman to have robots take care of elderly people. The National Future Monitor of 2019 shows that most Dutch people have a negative view of having intimate relations with a robot. But many of the elderly in need of help think it’s

» An algorithm that performs exactly as intended and with perfect accuracy is not necessarily an ethical use of AI. «
–– Kalev Leetaru, George Washington University
Alinment trade-off
In 540 BC, Kin Midas wished that everythin he touched
would turn to old. That meant, however, that he also
turned his food and loved ones into old, which made
him so lonely and hunry that he relinquished his
superpower. When he formulated the end, he failed
to consider the means. That is also known as the
Value Alinment Problem (VAP). Theoretically speakin,
an intellient machine that is prorammed in such a way
as to produce as many paperclips as possible would do
everythin in its power to make that happen. In his book
Superintellience , Nick Bostrom philosophises that the
machine will clear from its path anythin that comes in
the way of production. Even people, because after all,
they don’t contribute to the production of paperclips.
A machine can be so oal-oriented that the results
don’t match what we want.
It is important, then, to determine what a successful
outcome is. The question is whether we proramme on
the basis of desirability or on the basis of reality.
When you ooled ‘CEO’ a number of years ao, that would
yield pictures of predominantly white middle-aed men.
You would have to scroll quite a bit to see a picture
of a woman. However, if we look at statistics, that
is not completely inaccurate. Women continue to be
underrepresented in the top manaement positions.
The data of Pew Research Center shows that, in 218,
the percentae of female CEO’s in the Fortune 4
was a mere 4.8%, but when you oole ‘CEO’ in 22,
3 of the first 2 pictures that you see are of women.
Which is 15%. Still a low percentae, but a lot
hiher that it is in reality. And the question is
who determines what a successful outcome is.
What trade-offs are we willing to make?
In the Netherlands, more and more cameras and trackers are used to follow and record movements. When it is about improving safety and liveability, people appear to be willing to accept the use of sensors and the collection of sensor data. But there are specific conditions. Research by the Rathenau Institute from 2019 indicates that acceptance depends predominantly on the context. People are not a priori against the use of bodycams or Wi-Fi trackers, but it depends on when and in what situation they are deployed. These insights are supported by research by the European Commission from 2020, which shows that 59% of all respondents are willing to securely share part of their personal information to improve public services. Especially when it concerns the improvement of medical research and care (42%), the improvement of the response to a crisis (31%) or the improvement of public transport or reducing air pollution (26%).
Analysis by the Rathenau Institute shows that there are two crucial factors, namely the level of safety people experience and the type of living environment in which sensor technology is applied. In situations in which citizens feel unsafe, they will accept the use of sensors more easily than in situations where they feel safe. The use of sensors is considered less acceptable in private spaces than in public spaces where there are many people. As such, the use of sensors is desirable to improve safety and liveability in crowded public spaces, but not in private spaces. It is interesting to note that people not only weigh safety and privacy, but several other values as well, like democratic rights, transparency, efficiency and human contact.
an ideal outcome. For instance, elderly people who are no longer able to eat independently. They feel ashamed when they are being fed by strangers in a home. Only their kids are allowed to do that, and otherwise they prefer not to eat. Robots fill a need and actually provide autonomy. What we consider to be 'well-being' is often a subjective affair. Emotionally speaking, we want to prolong life as much as possible, especially when it concerns the people in our own environment. The question is, however, what the value of life is for terminally ill people themselves. What are the optimisation goals for AI in such situations? As efficiently, socially, sustainably or humanely as possible? And does humane mean prolonging life or reducing unnecessary suffering?
We were suddenly faced with similar issues by the corona crisis in 2020, where it became clear that different interests are interwoven and influence one another. For instance, for purely health-related reasons, a complete lockdown could be sensible. However, for people in developing countries – some of whom are dependent on day labour – that would also mean a complete loss of income and possible starvation. What is worse? Dying because of the virus or starving to death? A crisis like this one underlines the need for posing fundamental questions. It turns out to be difficult to express the value of life in measurable terms, although this is done during a pandemic. The entire economy grinds to a halt to protect the vulnerable. How far should you go in that? These are not popular questions, but they cannot be avoided. Especially when AI systems play a greater role in decision-making processes, we need to determine where the balance is between rationality and emotion, objectivity and subjectivity, the long term and the short term. Are we willing to accept 'objective' decisions by AI systems, or should we also build in emotions and moral intuition?
2.3 –––––––––––––––––––––––––––––––––––––––––––––––––––––
Practical challenes
The formulation of ethical principles and guidelines is an important first step in the realisation of ethically responsible AI applications. However, it is not easy to translate those guidelines into practice. For instance, when talking about transparency, its meaning depends, among other things, on the domain and environment in which the AI application is used. For example, when Spotify recommends a song I don't like, I don't need an explanation as to why that happened, but if an algorithm causes me to be rejected during a job application process, I would like to know the criteria on the basis of which I have been rejected. In practice, there are a number of potential value conflicts, for instance between transparency and privacy. How do we safeguard our privacy when we demand transparency? In many cases, it is not exactly clear which ethical questions play a role in the development of AI applications.
Fortunately, more and more organisations, at both a national and an international level, have developed tools that can help identify ethical dilemmas in practice and that can be used to map the ethical implications and issues involved in applying AI. Think, for instance, of:
> The Alorithmic Impact Assessment (AIA)
of the AI Now Institute
> Data Ethics Decision Aid (DEDA)
of the Utrecht Data School
> Artificial Intellience Impact Assessment (AIIA)
of the ECP
Therefore, ethical principles and guidelines cannot simply be copied and pasted into practice. Each specific situation requires specific considerations. It is always necessary to determine, in context, which values conflict and what is acceptable. A trade-off that is accepted in certain situations may be completely unacceptable in others. Together we will have to determine what we can and cannot accept in different situations.
» Sometimes we think that technology
will inevitably erode privacy, but
ultimately humans, not technology,
make that choice. «
–– Hu Yong, Peking University
What does 'fair' mean in these cases? Do you risk locking innocent people up, or do you risk having them commit crimes?
In America, software has been used on a large scale to predict the likelihood of recidivism. Research by ProPublica from 2016 shows that the software is prejudiced against people with a darker complexion. It is obvious that that is unfair, but it is not easy to determine what is fair and how you can measure it. Is fairness defined by using the same variables, or by using the same statistics after using different variables? Does fairness mean that the same percentage of black and white individuals are given high risk assessment scores? Or that the same risk level should result in the same score, regardless of race?
So the question is whether fairness is about treating people equally (with the risk of an unequal result) or about getting equal results (with possibly unequal treatment). Research shows that different mathematical definitions of fairness are mutually exclusive (Selbst et al., 2019). So it is impossible to meet both definitions at the same time, which means that, at some point, a choice has to be made.
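To make that tension concrete, here is a minimal sketch in Python. It is not taken from the book or from the ProPublica analysis; the groups, scores and numbers are invented purely for illustration, and it compares just two common fairness notions: the share of people flagged as high risk per group, and the false positive rate per group.

```python
# Minimal sketch (invented numbers) of why mathematical fairness definitions conflict
# when the underlying base rates differ between two hypothetical groups.

def rates(scores, outcomes, threshold=0.5):
    """Return (share flagged as high risk, false positive rate) for one group."""
    flagged = [s >= threshold for s in scores]
    share_flagged = sum(flagged) / len(flagged)
    # False positive rate: flagged people who in fact did not reoffend (outcome 0).
    negatives = [f for f, o in zip(flagged, outcomes) if o == 0]
    fpr = sum(negatives) / len(negatives) if negatives else 0.0
    return share_flagged, fpr

# Hypothetical risk scores and actual outcomes (1 = reoffended, 0 = did not).
group_a = ([0.9, 0.8, 0.7, 0.6, 0.2, 0.1], [1, 1, 1, 0, 0, 0])  # higher base rate
group_b = ([0.9, 0.4, 0.3, 0.2, 0.2, 0.1], [1, 0, 0, 0, 0, 0])  # lower base rate

for name, (scores, outcomes) in [("A", group_a), ("B", group_b)]:
    share, fpr = rates(scores, outcomes)
    print(f"group {name}: flagged {share:.0%}, false positive rate {fpr:.0%}")

# With one shared threshold both groups are treated by the same rule, yet the share
# of people flagged differs. Lowering the threshold for group B until the flagged
# shares match would in turn raise its false positive rate: with different base
# rates, the definitions cannot all hold at once.
```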
However, it is not possible to make a universal choice. How we define fairness depends on the application domain. You cannot simply transpose a system that is used to produce fair legal verdicts onto a job application process. However, people often think that a powerful system can be applied in multiple domains. Different cultures and communities have different ideas about fairness. Not only do different standards apply, there are also different laws. In addition, our opinions about what is right or wrong can change over time. That makes it difficult, perhaps even undesirable, to determine in advance how an AI system should act.
These tools use sets of questions to help organisations get a clearer view of which ethical issues play a role in their AI projects and how they want to handle them. Examples are questions like 'Are personal data being used in the project?', 'Are all the various groups of citizens represented in the data set(s)?' and 'Who are we missing or who isn't visible yet?'. That can help expose and prevent potential biases in the application. It also helps organisations to document their considerations, making the process more transparent and allowing them to be accountable to their stakeholders. However, such guidelines are often hard to express unambiguously and difficult to quantify.
» There's no such thing as a single
set of ethical principles that can
be rationally justified in a way
that every rational being will
agree to. «
–– Tom Chatfield, Tech philosopher
The clarity of guidelines
It is obvious that we all want to prevent AI being used to treat people unfairly and to discriminate against them on the basis of their gender or ethnicity, which is why fairness is a commonly used principle in ethical guidelines and assessment tools. However, it is not easy to determine what 'fair' exactly means. It is an issue that has kept philosophers busy for hundreds of years. Is a society in which everyone is treated exactly the same fair? The arrival of AI gives this issue a new dimension, because the concept of fairness has to be expressed in mathematical terms. Think, for example, of the use of AI in the legal system. The use of predictive policing can help predict criminal behaviour through large-scale data monitoring and data analysis. However, there is always a risk that people who do not meet the criteria being used get a positive score (false positives) and that people who do meet those criteria get a negative score (false negatives).
The motives behind the guidelines
In 2019, Google introduced the Advanced Technology External Advisory Council (ATEAC), an external ethics board designed to make sure that the company would adhere to its own guidelines for ethically responsible AI applications. However, the board was disbanded after a week. Immediately after it was announced who was on the board, there were intense discussions, in particular about the position of Kay Coles James. The president of the Heritage Foundation is known for her conservative opinions, among other things about the rights of the LGBTI community.
Even apart from that, one has to wonder why Google decided to create such an advisory board in the first place. According to experts, the ethical guidelines and advisory boards of Google and other commercial organisations are aimed at circumventing government regulation. This is also known as 'ethics washing'. It is said to be a way to deflect criticism, not to act in a genuinely ethical way. Because the advisory boards have no real power, the organisations don't actually have to adjust their behaviour. And it seems hardly surprising that Google created its advisory board after a period in which it had been under considerable pressure. At the time, Google worked together with the Chinese government on Project Dragonfly, a search engine that blocked results that the Chinese authorities considered undesirable. According to Amnesty International, the modified search engine threatened the freedom of speech and privacy of millions of Chinese citizens. Later, Google's own employees also started a protest and wrote an open letter to Google's management. In 2018, Google announced it would stop working on Project Dragonfly, but employees doubt that this actually happened.
The advisory board appears to be above all a way for Google to tell the world: 'Look, we are doing everything we can'.
» By fixing the answer, you're
solving a problem that looks very
different than how society tends
to think about these issues. «
–– Andrew Selbst, Data & Society Research Institute
The measurability of the guidelines
The various ethical principles and guidelines have different levels of abstraction, which means that end values and instrumental values get mixed up. For instance, societal well-being and safety are end values that we, as a society, aim for, while accountability and transparency are instrumental values that we can use in our pursuit of those end values. Some guidelines are easier to quantify than others. For instance, the level of accuracy can be made measurable, but in the case of transparency it's more complicated. When is something 'transparent enough'? At 70% transparency? And what exactly does that mean?
The question is whether we should even aim for 100% transparency. Research by Microsoft Research from 2018 shows that too much transparency can lead to information overload. It turns out that it is even harder to detect and correct the errors in transparent models. In addition, there is a risk that people trust transparent models when they shouldn't. A follow-up study by Microsoft Research from 2020, in collaboration with the University of Michigan, shows that the use of visualisations of the training results of machine learning tools creates misplaced trust in the possible applications of the models, even when the data had been manipulated and the explanation didn't match reality.
whether the AI application complies or will comply with the principles, not how the requirements must be integrated into the design itself (and thus withstand the evaluation). While many guidelines and assessments provide various checklists and questionnaires, they do not answer how AI systems can make ethically responsible decisions.
» Despite an apparent agreement that
AI should be 'ethical', there is
debate about both what constitutes
'ethical AI' and which ethical
requirements, technical standards
and best practices are needed for
its realization. «
–– Effy Vayena, ETH Zurich
It doesn't appear to be a coincidence that other tech organisations also launched ethical guidelines and advisory boards in a period in which numerous problems in the tech sector came to light, like the Cambridge Analytica scandal in 2018. For instance, Microsoft created its AI ethics committee and conducted broad research into the transparency of AI systems, while Amazon sponsors a research programme aimed at promoting 'fairness in artificial intelligence' and Facebook has invested in an 'AI ethics research center' in Germany.
» Ethics boards and charters aren't
changing how companies operate. «
–– James Vincent, The Verge
The challene is to arrive at bindin uidelines.
Accordin to different experts, leislation is necessary
to make sure that ethical uidelines are observed.
The first step in this direction is described in the
Whitepaper on AI’, which was presented by the European
Commission in 22 and in which the commission explains
proposals to promote the development of AI in Europe,
takin European fundamental rihts into account.
An important part is the proposal to develop a ‘prior
conformity assessment’ for risky AI applications, based
on the ethical uidelines of the Hih Level Expert Group.
That leal framework is desined to tackle the risks
facin fundamental rihts and safety, allowin reliable
AI systems to et a quality mark, makin it clear to
users which systems they can trust.
Althouh it is very important to make sure that ethical
uidelines are applied in practice and respect European
laws and fundamental rihts, it provides insufficient
tools for actually interatin the uidelines in the
development process. The currently available checklists
and assessment tools are insufficiently quantifiable.
Now, every aspect of the list can be ‘checked’ without
completely meetin them. Ethical uidelines and
assessments are therefore mainly tools for evaluatin
nature, it will never be 100% correct, in other words: not 100% safe. To get as close as possible to 100%, increasingly complex systems are being developed. Due to the nature of the technology, the result of that increasing complexity is that such systems take on the character of a black box. That is to say that, even though the decisions may be of good quality, the origin of each individual decision is less accessible. So explainability is sacrificed in favour of safety. Safety vs. accountability is a trade-off that can never be avoided completely with existing technologies.
There are other similar trade-offs. Privacy vs. transparency: how much privacy must citizens give up to unlock the data that must make sure that AI systems are transparent to other citizens? Human agency vs. societal well-being: which interests are more important, those of the individual or those of the collective? Technical robustness vs. environmental well-being: how much energy is the continuous training and maintenance of AI systems allowed to use? Etcetera. It would therefore appear reasonable to assume that, in the future, we will not be able to put all the guidelines into practice. Instead, we shall need to compromise. Both industry and government have a role to play in that. Industry will have to assume a measure of responsibility to try and follow the guidelines. And government will have to think about legislation designed to manage these developments. One thing is certain: when it comes to the ethical use of AI, the future will not be without compromises.
Guest contribution ––––––––––––––––––––––––––––––––––––––
Compromises surrounding reliability of AI
By Maarten Stol, Principal Scientific Advisor, BrainCreators
In 2019, the European Commission published the 'Ethics Guidelines for Trustworthy AI'. It contains a list of seven requirements an AI system has to meet to be allowed to be called 'reliable'. In quotation marks, because even though we all have a sense of what reliability is, in the case of advanced technical systems the definition is hard to make exact. Such lists have been made before (think, for example, of the Asilomar AI Principles), but the European guidelines try to cover the term 'reliable' as completely as possible while keeping the list from becoming confusing:
1. Human agency and oversight.
2. Technical robustness and safety.
3. Privacy and data governance.
4. Transparency.
5. Diversity, non-discrimination and fairness.
6. Societal and environmental well-being.
7. Accountability.
However, the guidelines also make it clear that there can be fundamental tensions between some of these requirements. And that's where the complexity of the issue lies. Leaving aside the question of how to meet these requirements individually (easier said than done, technically), I would like to address the possible interactions and trade-offs, starting with safety vs. accountability. The behaviour of current AI systems is largely determined by data, more than by programming code alone. The goal of machine learning is to use that data to automatically teach a programme that can make decisions in line with the general trend of the input data. However, since machine learning is statistical in
3 –––––––––––––
Ethics in
AI design
––––––––––––––––––––––––––––––––––––––––––
 3.1 A new approach to ethics
 3.2 Ethics by Design
 3.3 The Ethical Scrum
––––––––––––––––––
Paes: 40
Words: 7770
Readin time: approx. 1 hour
3.1 –––––––––––––––––––––––––––––––––––––––––––––––––––––
A new perspective on ethics
In the past, ethics was mostly about human actions. However, with the arrival of AI there is a new player in the game, namely self-learning technology. As technology becomes more and more autonomous, takes over more decisions and makes it harder to trace the decision-making rules, new ethical questions emerge. The question is to what extent the current ethical terminology is still sufficient. What is the relationship between people and technology? And how will that relationship develop?
A broadening of the concept of ethics
According to Peter-Paul Verbeek, professor of Philosophy of Man and Technology at Twente University, the terminology of ethics needs to be expanded. To be ethical, according to existing opinions, it is necessary to have intentions and to be able to act freely. According to Verbeek (2011), technologies that help shape moral decisions have no intentionality, and people who are guided by technology in their moral decisions are not free. Opinions are seriously divided on whether or not technology possesses intentionality, but it is clear that we are no longer completely autonomous and that our lives are affected by technology. As such, the idea of a completely autonomous person that underlies existing ethics is often erroneous.
The question is not whether we want completely explainable systems (that ship has long sailed), but how we can apply systems that are no longer completely explainable. So we should focus more on mapping the impact of these systems and come up with a constructive plan for how we can and want to deal with them. Rather than focusing exclusively on the question of what is right and wrong, we need to develop a concrete ethical framework that allows us to deal with the errors of AI systems. We cannot approach that from a purely theoretical perspective, but have to experience
3. ––––––––––––––––––––––––––––––––––––––––––––––––––––––
Ethics in AI desin
From overnments and social oranizations to scientific
institutes and the private sector, for different reasons,
they all think it is important to develop ethically 
responsible AI  applications, which is why ethical
principles and uidelines  pop up like mushrooms.
Different uidelines dictate, amon other thins, that
AI systems have to be secure and robust, safeuard our
privacy  and treat us fairly . The question is to what
extent such noble oals affect each other in practice.
Can we develop AI systems that are both accurate  and
explainable ? And are explainable systems by definition
also fair? Can we translate fairness into mathematical
terms  without treatin people unfairly? And do we
aree in the first place about what a fair society
should look like?
Ethical uidelines – and in particular their interpre-
tation – are to a lare extent context-dependent and
subjective. In that sense, ethical uidelines are
more about morality than about ethics. Ethics should
be about alinin the different applications to that
morality. From that perspective, ethics are more
a desin issue than a collection of opinions about
what we find important. If we want to use ethically
responsible AI applications in the future, we will
have to concern ourselves now with the question as to
how we can develop applications that are in areement
with morality. It is not enouh only to assess whether
or not an AI application meets the uidelines. Ethical
principles and uidelines actually have to be interated
into the desin. And in the desin process as well.
That requires a different approach to ethics in the
development of AI.
» Instead of seeing ethics as
'judging', it could also be seen as
a normative 'guidance' of technology
in society. «
–– Peter-Paul Verbeek and Daniël Tijink, ECP
A clear distinction is made between three kinds of activities. With regard to the first kind (ethics by design), the focus is primarily on the design of the technology itself. Values like privacy have to be included in the design. For instance, it has to be possible to use AI to make medical diagnoses without putting the privacy of the patients at risk. AI systems could be trained, for example, with data from different hospitals, without the data in question ever leaving a hospital building or touching the servers of a technology company. With regard to the second kind of activity (ethics in context), the focus is on context-specific agreements. The introduction of a new technology is often accompanied by changes in the environment which are not always visible, for instance with the use of facial recognition software. That's why it has to be clear to everybody that AI is being used, what information is collected, who has access to the data, what possible ways there are to contest the decisions of an AI system, etc. In the case of the last activity (ethics by user), the focus is on the use of the technology. It is important that everybody has enough knowledge to deal with the technology in a critical and responsible way, both the developers and the users. There could still be driver's licences for people in self-driving cars, for example, to reflect the new skills that have to be learned, for instance for communication with the autonomous systems or for intervening when the system requests it.
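The hospital example under ethics by design, incidentally, is essentially what is known in machine learning as federated learning. As a minimal sketch of the idea – not the book's own method, and with invented data standing in for three hospitals – model updates travel, while patient records stay where they are:

```python
import numpy as np

# Minimal federated-averaging sketch (an illustration, not a production design):
# each hospital improves the shared model on its own data; only model weights
# leave the building, never the patient records themselves.

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=50):
    """One hospital runs a few logistic-regression gradient steps on local data."""
    w = weights.copy()
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))        # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)     # gradient step; data stays local
    return w

# Invented local datasets standing in for three hospitals.
hospitals = [(rng.normal(size=(40, 3)), rng.integers(0, 2, 40)) for _ in range(3)]

global_w = np.zeros(3)
for _ in range(5):
    # Each hospital sends back only its updated weights.
    local_ws = [local_update(global_w, X, y) for X, y in hospitals]
    # A coordinator averages the updates into a new shared model.
    global_w = np.mean(local_ws, axis=0)

print("shared model weights:", np.round(global_w, 3))
```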
it in practice. We need to experiment in controlled settings. From human in the loop (HITL) to human on the loop (HOTL). AI actually allows us to have more control over inequality with regard to things like gender or race. AI is far better equipped than people are to manage the conditions that create biases. That way, we can create enough diversity in the data sets and in the background of the programmers developing the algorithms and the supervising legislators. We need to try and create systems that divide unfairness more evenly. AI may not be able to solve our mistakes entirely, but it can distribute them more fairly.
» The worst form of inequality
is trying to make unequal things
equal. «
–– Aristotle
Ethics as leitmotiv
Many ethical discussions focus on the question of when the use of AI is and is not acceptable, in an attempt to create hard boundaries and rules to 'control' the technology. However, that would suggest that it is possible to separate society and technology from one another completely. But everyday practice is not that black and white. People and technology affect each other mutually; we shape technology and technology shapes us. In addition to broadening the concept, this also requires a different approach to ethics.
Guidance ethics
In 2019, ECP, the Platform for the Information Society, in collaboration with Peter-Paul Verbeek, published a report about 'guidance ethics'. Instead of asking how we can assess AI, according to their approach, we should focus on how best to guide the implementation of AI in our society and how we can deal with it in a responsible way. In this approach, the focus is on the development of technology with an action perspective.
» Those who introduce strict
regulations accept that the
development of AI will initially
be slower. «
–– Mona de Boer, PwC Netherlands
There is a middle way, however. Think, for instance, of agriculture, which has become intensely innovative because of government regulations. It has become so innovative that our greenhouses are considered to be the most sustainable in the world, which is why Dutch greenhouse builders are increasingly active abroad as well and contribute to the realisation of sustainable horticulture projects around the world. So ethics doesn't have to be an anchor that you have to drag around; it can provide a beckoning perspective for innovation.
Uniting values
Trade-offs are still often presented as forced binary choices. Do you want the government to know everything about you, or live in unsafe conditions? The same thing occurred with the use of tracking apps during the corona crisis in 2020. Newspaper headlines and articles focused on the trade-off between privacy and public health. The emphasis appears to be on the individual versus the collective, and people are expected to surrender their privacy for the greater good. However, the notion that these apps only work if people give up their privacy is mistaken. There's a reason that the Dutch government decided against using any of the seven apps proposed during the 'appathon' in 2020: they violated privacy guidelines. Apple and Google also announced that apps that use the location data of users would not be given access to their operating systems.
This leads to a more holistic approach to ethics. Both the use and the users, and the technology, are included in this approach, which does justice to the dynamic practice of AI: the adjustment between people and technology is an ongoing process. It is no longer either people or the technology making the decisions; they mutually shape one another. The limitation of this approach, however, is that it suggests that technology will keep developing and that we might as well resign ourselves to that. But the question of whether we should use AI in certain contexts continues to be relevant. We mustn't see AI as a goal in itself, but as a means to a certain other goal. We cannot skip the question whether or not AI is the best way to achieve that goal. So we must keep asking ourselves what kind of society we want to be, given all the technological developments.
» AI is an Ideology,
Not a Technology. «
–– Jaron Lanier & Glen Weyl, Wired
Beckoning perspective
When we talk about the future of AI, it is often suggested that ethics slows down innovation and that, for instance, Europe has to make a choice: either impose strict regulation and slow down innovation, or innovate and try to keep up with the US and China. According to a report by PwC Netherlands from 2020, there is a trade-off between strict regulation and quick innovation. The 'White Paper on AI', in which the European Commission voices its ambition to speed up the development of AI and at the same time announces stricter regulation, could be a case of putting one foot on the brake and the other on the accelerator.
3.2 –––––––––––––––––––––––––––––––––––––––––––––––––––––
Ethics by Desin
When we want to use ethically  responsible AI systems 
in the future, we need more than just uidelines 
and assessments. We need tools that will allow us
to actually interate such principles into the desin,
which means we have to move from evaluatin to
interatin. In short, we need Ethics by Desin.
Despite the fact that more and more ethicists share
that view, they rarely o beyond statin that we need
to interate values into AI applications, which leaves the
question as to how that is supposed to happen in practice.
Examples are mentioned of AI applications that have come
about in an ethically responsible way, but that says
little about the AI system itself. Think, for example,
of the Fairphone: a smartphone that is both fair to the
environment and to people in the production process.
Althouh this phone shows that it is possible to unite
values, it does not answer the question whether or not
its operatin system is able to make ethically responsible
decisions.
The same oes for the existin impact and assessment
tools that are used to assess AI. They focus more on
the development of the AI system than on its actions.
Most assessment tools are checklists that focus mostly
on the use of the datasets. Has the data been anonymised?
And is the process transparent? They often don’t answer
the question how an AI system can arrive at ethically
responsible decisions. To develop enuinely ethical AI,
we need to look at the different ways an AI system can
learn what is and isn’t ethically responsible. So the
question is how we can build systems that are able to act
in an ethically responsible way in different situations.
Can you proramme ethical rules into the system? Do we
have to equip the system with ethical taret functions?
Or is the system itself able to make moral judments?
To answer these questions, we distinuish three different
system approaches, namely static learnin, adaptive
learnin and intuitive learnin.
» Those who would give up essential
Liberty, to purchase a little
temporary Safety, deserve neither
Liberty nor Safety. «
–– Benjamin Franklin
Value-Sensitive Design
It is possible, then, to unite values in the design, which is the starting point for Value-Sensitive Design (VSD). 'Designing for values' provides an alternative approach to innovation. According to Jeroen van den Hoven, Professor of Ethics and Technology at Delft University of Technology, we need to use innovation to serve values and remove value conflicts. That way, we can innovate with AI in a responsible manner. You only talk about trade-offs if you actually experience them in practice, so the challenge is to avoid those situations through design, for which we have to create environments in which we do not have to choose between different values, but in which we can maximise values in relation to each other. Yes, there are choices, but by choosing the right design, you can ensure that the choices don't do any damage.
» Ethics to a large extent is a
design discipline and has to do
with shaping our society and living
environment in a responsible way. «
–– Jeroen van den Hoven, Delft University of Technology
» It is not ethicists, but
engineers who are at the frontline
of ethics. «
–– Peter-Paul Verbeek, Twente University
Disadvantaes
However, this approach fails to take exceptions into
account that can occur in practice. It requires that
there be rules for every possible situation, which in
practice is virtually impossible to realise. In addition,
there are situations in which contradictory rules apply.
It is, for instance, not allowed to run a red liht,
but when an autonomous vehicle has to avoid hittin
a roup of people, it is allowed to run a red liht.
It is almost impossible to record all the exceptions.
In addition, it is impossible to predict all the possible
consequences. When GPS functionalities were developed
for the aerospace sector, nobody could predict that that
functionality would ultimately end up as an app on our
smartphones. With positive and neative consequences.
In addition, AI systems are updated: is it necessary
to apply for a new quality mark with each update?
It’s virtually impossible to capture all that in
ethical uidelines in advance.
» You can tell a security robot not
to hurt people. But that will be a
limitation when that robot has to
prevent a terrorist attack. «
–– Leon Kester, TNO
Static learning
In the case of static learning, ethical principles and rules are programmed into the intelligent system, which implicitly makes the goal of the AI system part of the algorithm, to be filled in by a programmer. If we want an autonomous vehicle to bring us from A to B as quickly as possible, the exceptions also have to be embedded in the algorithm. We don't want the vehicle to violate traffic rules and just drive in a straight line at 200 miles an hour. Objectively speaking, as quickly as possible literally means as quickly as possible. The algorithm also has to consider values like safety. This approach also appears to be the one that is used by ethicists and developers within the Value-Sensitive Design (VSD) community. The starting point is that values like safety have to be made explicit as early as possible in the design process. The values can then be formalised and embedded in the AI system.
» Ethics has to be part of the design
of technology. «
–– Jeroen van den Hoven, Delft University of Technology
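As a minimal illustration of what statically programmed rules might look like – not taken from the book, with invented rules and numbers – consider a route planner whose options are simply filtered by hard-coded constraints before the 'as quickly as possible' goal is applied:

```python
# Minimal sketch of 'static learning': ethical rules are hard-coded by the
# programmer and act as filters on what the system is allowed to do.
# The rules and candidate routes below are invented for illustration.

SPEED_LIMIT_KMH = 100          # hard-coded rule: never plan above the speed limit
RED_LIGHT_ALLOWED = False      # hard-coded rule: never plan to run a red light

candidate_routes = [
    {"name": "straight line", "avg_speed": 200, "runs_red_light": True,  "minutes": 12},
    {"name": "highway",       "avg_speed": 100, "runs_red_light": False, "minutes": 25},
    {"name": "city roads",    "avg_speed": 50,  "runs_red_light": False, "minutes": 40},
]

def allowed(route):
    """A route is only admissible if it violates none of the programmed rules."""
    if route["avg_speed"] > SPEED_LIMIT_KMH:
        return False
    if route["runs_red_light"] and not RED_LIGHT_ALLOWED:
        return False
    return True

# The goal 'as quickly as possible' is applied only within the fixed rules.
best = min((r for r in candidate_routes if allowed(r)), key=lambda r: r["minutes"])
print("chosen route:", best["name"])   # -> highway

# Every exception (e.g. running a red light to avoid hitting someone) would have
# to be added as yet another explicit rule, which is exactly the weakness
# discussed under 'Disadvantages'.
```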
Advantaes
The major advantae of this approach is that ethical
principles are fairly transparent and relatively easy
to interpret by people, which allows us to think
toether about what we, as a society, consider to be
important and embed that in AI systems. That provides
a certain level of human control. We can monitor the
development of AI even before it bein marketed and ive
quality marks to the applications that meet the relevant
ethical uidelines. In this context, it is relatively
clear when certain uidelines are bein violated, makin
it possible to hold oranisations that violate the rules
responsible, set up supervisory bodies and monitor the
development of AI.
1. The only goal of the AI system is to maximise the realisation of human values;
2. It is initially unclear to the AI system exactly what those values are;
3. Human behaviour provides the AI system with information about human values.
In other words: learning on the job. It is important that machines learn everything about human values, to ascertain what is really important to us.
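To give a feel for this 'learning on the job', here is a minimal sketch. It is an illustration only, not Russell's actual proposal: the system starts out uncertain about how much humans value safety versus speed and nudges its estimate after every observed human choice; the numbers and the update rule are invented.

```python
# Minimal sketch of learning value weights from observed human choices.
# Each observation: two options as (safety score, speed score), and the index
# of the option a human actually chose. All numbers are invented.

observations = [
    ((0.9, 0.3), (0.4, 0.9), 0),   # the human preferred the safer, slower option
    ((0.8, 0.4), (0.5, 0.8), 0),
    ((0.6, 0.6), (0.5, 0.9), 1),   # here the faster option was chosen
]

weights = [0.5, 0.5]               # initial guess: safety and speed equally important
lr = 0.1                           # learning rate for the updates

for a, b, chosen in observations:
    options = (a, b)
    preferred, rejected = options[chosen], options[1 - chosen]
    # Nudge the weights so the option the human actually chose scores higher.
    for i in range(2):
        weights[i] += lr * (preferred[i] - rejected[i])

print("estimated value weights (safety, speed):", [round(w, 2) for w in weights])
```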
Advantaes
The advantae of this approach is that the AI system
learns what the riht action is within the context,
allowin it to handle conflictin values, which is
virtually impossible when the rules are pre-prorammed.
It makes AI systems much more flexible and easier to
use. In addition, this approach makes it possible for the
system to learn from human behaviour, without copyin
undesirable qualities, because the system doesn’t just
learn from individual people (who all do somethin
‘bad’ on occasion), but from society as a whole (placin
‘bad’ behaviour in a broader context). The system can
learn, for instance, that people sometimes steal thins
when they don’t have enouh money to send their kids to
school. Rather than learnin that stealin is allowed
in such a situation, it will try to help find a way to
send the kids to school. Systems are not ‘burdened’ with
human ures and emotions, like status and power, which
are the result of bioloical evolution.
» The robot does not have any
objective of its own.
It's purely altruistic. «
–– Stuart Russell, University of Berkeley
This approach also places too much responsibility on the programmer, because the guidelines don't specify how the values should be formalised in mathematical terms. The question of what exactly is fair depends on the context and the specific user application, and there is always a risk that, for instance in the case of a medical test, the result of patients is erroneously classified as positive or negative, which could lead to healthy people being administered medication and sick people not being treated. It is unfair as well as irresponsible to leave such configurations up to the programmer alone.
Adaptive learning
In the case of adaptive learning, the rules are not pre-programmed; instead, the system learns what is 'right and wrong' based on human behaviour. To that end, the algorithm is equipped with a goal function, with which it can be specified what the algorithm should be optimised for. A clear distinction is made between the problem-solving ability of the intelligent system and the goal function. That way, specific application goals can be combined with ethical goals, allowing, for instance, an autonomous vehicle to bring us from A to B as quickly as possible (application goal) as well as take our safety into account (ethical goal). When there are contradictory rules in practice, the AI system has to be able to make a judgment. This approach is popular with, among others, OpenAI and the Future of Life Institute. According to AI pioneer Stuart Russell, AI systems can only make those kinds of judgments when the systems learn in practice what human values mean. In his TED Talk in 2017, he discussed three pillars for developing safer AI applications.
Another challene is that many of our moral opinions
are implicit . We don’t express them literally, which
makes it hard for an AI system to learn. Also, it is
hard for a computer to assess emotions  correctly.
When people lauh, it is hard to determine if they are
sincere, or whether there is another underlyin emotion
or motivation.
» Computers can’t tell if you’re
happy when you smile. «
–– Anela Chen, MIT Technoloy Review
Intuitive learning
In the case of intuitive learning, elements of static and adaptive learning are combined. In this approach, the algorithm is also given a goal function, but it does not learn from the behaviour of people. Instead, people determine how much value they attach to a goal by assigning weight factors. The weight is determined based on the usefulness the goal has for society, which is why it is also called a 'utility function'. This allows the system to make reasoned assessments on the basis of the pre-weighted factors. When an autonomous vehicle has to move us from A to B, there are various goals that are relevant, like travel time, comfort, safety and sustainability. Different weights are assigned to these different goals. The autonomous vehicle will use the goal function to decide which route to take and which driving behaviour best matches both the wishes of the passengers (comfort and time of arrival) and those of society (safety and the environment). Depending on the weights and the current state of the surroundings (like the amount of traffic on the road), different outcomes are possible.
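A minimal sketch of such a pre-weighted goal function might look as follows. The weights and route scores are invented for illustration; the point is only that people set the weights in advance and the system then computes which option scores best in the current situation.

```python
# Minimal sketch (invented numbers) of a pre-weighted goal or 'utility' function.

# Weights set by people, not learned by the system (hypothetical values).
weights = {"travel_time": 0.2, "comfort": 0.2, "safety": 0.4, "sustainability": 0.2}

# Candidate routes scored 0..1 per goal for the current traffic situation (invented).
routes = {
    "highway":    {"travel_time": 0.9, "comfort": 0.8, "safety": 0.6, "sustainability": 0.4},
    "city roads": {"travel_time": 0.5, "comfort": 0.5, "safety": 0.8, "sustainability": 0.6},
    "scenic":     {"travel_time": 0.3, "comfort": 0.9, "safety": 0.9, "sustainability": 0.8},
}

def utility(scores):
    """Weighted sum of how well a route serves each goal."""
    return sum(weights[goal] * value for goal, value in scores.items())

best = max(routes, key=lambda name: utility(routes[name]))
for name, scores in routes.items():
    print(f"{name:10s} utility = {utility(scores):.2f}")
print("chosen route:", best)
```

With different weights, or a different state of the surroundings reflected in the scores, a different route wins; the weighing itself remains a human choice.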
Disadvantaes
The challene of this approach is that an AI system
has to act accordin to the human values of society
as a whole, not just those of the user. If a machine
puts your interests first, that can be at the expense of
others. In one way or another, the system has to weih
the preferences of many different people. In his Talk,
Russell lists a number of examples where this approach
can o wron. Imaine you have forotten your wife’s
birthday and you have a meetin you cannot cancel.
An AI system can help you by delayin the fliht of the
person you are meetin, allowin you to take your wife
out to dinner. But that would upset the lives of other
people. It can also happen the other way around. Imaine
you are hunry and you ask your ‘robot chef’ to make
you a ham sandwich, it can refuse your request, because
there are people elsewhere on the planet who are more
hunry. Accordin to Russell, it’s also possible that
different human values have to be weihed aainst each
other. Imaine that your robot chef decides to make you
that ham sandwich, but there’s no meat in the fride.
There is, however, a cat in the house. Which value is
more important: your need for food or the sentimental
value of a pet? Accordin to Maslow’s pyramid, the cat
loses. Because we don’t indicate in advance what is ood
or bad, we relinquish a lare part of the control over
the system. When an AI application causes unintended
consequences, it is very hard to intervene.
» We had better be quite sure that
the purpose we put into the machine
is the purpose which we really
desire. «
–– Norbert Wiener, 1960
Disadvantaes
The disadvantae of this approach is that it assumes
that alorithms are capable of makin nuanced consider-
ations. Accordin to Peter Eckersley, research director
of The Partnership on AI, alorithms are desined to
pursue a sinle mathematical objective, like minimisin
costs or maximisin the number of apprehended fraudulent
people. When an attempt is made to pursue more than one
oal at the same time – with some of which competin
with each other – the development of AI is faced with
practical and conceptual problems, in what is also known
as the ‘impossibility theorem’. In particular when
immaterial values like freedom and wellbein have to be
maximised, Eckersley arues that, in some cases, there
simply is no mathematical solution. It turns out that
ethics is about more than calculatin costs and benefits.
It also involves less tanible thins, like empathy ,
compassion and respect. In a by now ‘infamous’ article,
Eckersley describes that it is impossible to formally
specify what a ood result is for a society without
violatin human ethical intuitions.
» Such systems should not use
objective functions in the strict
mathematical sense. «
–– Peter Eckersley, The Partnership on AI
This form of learnin looks a lot like the human
decision-makin  process: it is intuitive. It is
possible for humans to drive the car on the basis of
laws and rules, because they translate them to the
specific context. That last step cannot be prorammed.
For instance, the speed limit was lowered to 100 km
in many places in the Netherlands, the idea bein that
it is better for the environment and traffic safety.
However, althouh people are allowed to drive 100 km per
hour, that’s not what they do all the time. The speed is
constantly adjusted to the surroundins.
Advantaes
The advantae of this approach is that the added value of
static and adaptive learnin is combined: it provides the
control of the static approach and the flexibility of the
adaptive approach. That way, an AI system is provided
with the values that society considers important, which
it can interate in a recommendation or decision,
utilizin its computin power to calculate the best
possible outcome in every situation, safeuardin
ethically responsible outcomes without completely
relinquishin control. After all, it’s still people
who assin weihts to the factors that are included in
the calculation. This also bypasses the sharp trade-off
between ood or bad. In reality, situations occur all the
time where we have to choose between two ‘evils’, and
which evil prevails depends on the context. With this
approach, the system can determine which outcome is
best for the individual and for society.
» That will allow an autonomous
vehicle to choose between two
alternatives that are undesirable
in principle. «
–– Leon Kester TNO
Despite the criticism, intuitive learning seems to be the only way to deal with the complexity of ethical issues. We need a system that is able to reason with uncertainty and that has a notion of 'self' that will allow it to deal with 'trolley-like' problems. We need to include the value that society places on the action, the consequences of the action and the person (or object) performing the action, which is why 'ethics' shouldn't literally be part of the design; rather, the system itself ought to be able to make ethical considerations. And, instead of speaking of 'Ethics by Design', we should call it 'Designing for Ethics'. At the moment, the technology isn't sufficiently advanced yet, but people are working hard on new perspectives, for instance in the area of hybrid AI, which involves systems that can see (using neural networks, among other things) and that are able to reason (using formal logic, among other things). This approach is also known as Deep Reasoning, a combination of deep learning and symbolic reasoning. It increases an AI system's ability to learn intuitively.
» Deep Reasoning is the field of
enabling machines to understand
implicit relationships between
different things. «
–– Adar Kahiri, Towards Data Science
» The problems with ethics are not
located in the perspectives, but in
the processes. «
–– Robert de Snoo, Human & Tech Institute
Ethicists versus Technicians
In discussions about AI and ethics, it is often the ethicists who are talking about technology, or technicians who are talking about ethics. In both cases, it is not their area of expertise, and there is a huge difference between the various approaches. Technicians approach the question mostly from the point of view of optimisation. How can I formalise values like privacy and transparency? From that point of view, ethics is a problem that has to be solved; it requires concrete answers. Ethicists, on the other hand, are much more focused on examining the question itself. Existing questions may lead to new questions. However, such abstract insights are hard to translate into concrete user practice.
To create ethically responsible AI systems, we need both approaches, which is why it is important to adopt a more holistic approach. At the moment, scientists from different disciplines are still competing with each other, when they could complement one another. The development of AI goes beyond technology and philosophy. The use of AI affects society as a whole: the way we work together and live together. Ethicists and technicians should start working together with sociologists and economists, but also with biologists and psychologists. The decision-making processes of systems increasingly resemble human decision-making processes, so we need to understand how such processes take place in the human brain and express themselves in human behaviour. And that requires a more transdisciplinary approach.
3.3 –––––––––––––––––––––––––––––––––––––––––––––––––––––
The Ethical Scrum
There are different approaches to allow AI systems to learn what is and isn't ethically responsible. Ethical principles and guidelines can be programmed into the system (static learning), the system can learn from human behaviour and optimise human values (adaptive learning), and the system can weigh multiple goals on the basis of predefined weight factors (intuitive learning). In many ethical discussions, these approaches are not sufficiently taken into account, even though it is the system approach that determines how we need to formulate what is important and what we do and don't include in the system. In the case of static learning, that is a complete set of rules and exceptions, while, in the case of adaptive and intuitive learning, we have to determine which goal functions to include. When we opt in favour of intuitive learning, we not only have to determine which goals we consider to be important, but also what their relative weight is in different situations.
No matter which approach we select, it is virtually impossible to map in advance what the possible implications of the different design choices are, because the optimisation of AI systems is a process of trial and error, which means that new challenges emerge during the development process. Think, for instance, of an area like safety. The original safety principles are affected by choices in the process. To avoid vulnerabilities in the system, choices have to be made during the process and design criteria have to be adjusted. That is why it is important to look not only at the design, but also at the design process.
In 2019, researchers at various American universities and companies published an article in Nature in which they call for a transdisciplinary scientific research agenda. The aim is to gain more insight into the behaviour of AI systems. Developments in the area of AI bring machines that have a form of 'agency' ever closer. In other words, machines that act independently and make autonomous decisions. That turns machines into a new class of actors in our society, with their own behaviour and ecosystems. Experts argue that this calls for the development of a new research area, namely that of Machine Behaviour. The starting point is that we need to study AI systems in the same way we study animals and humans, namely through empirical observation and experiments.
Operationalisation in practice
The different approaches by ethicists, technicians and others can be explained via the so-called 'values hierarchy'.
At the top of the pyramid, we find the most abstract values that many ethicists are concerned with. At the bottom, we find the concrete design requirements that many technicians work with. Ultimately, values have to be operationalised. Design requirements have to be made measurable to allow them to be used for and by AI systems. Normalisation can help close the gap, which roughly speaking involves three steps in the design process:
1. Conceptualisation: first of all, values have to be defined. Their meaning has to be clear and universally applicable. That is what many ethical principles and guidelines do. Despite the limitations that many guidelines have, it is an important and necessary step.
2. Specification: the defined values have to be translated into the specific context, because values have different meanings in different situations. This produces more concrete norms that can guide the design process.
3. Operationalisation: the specified norms then have to be translated into measurable design requirements. That way, different design choices can be weighed against each other.
These measurable requirements can then be translated into technical standards, on which quality marks and certificates can be based. The challenge is to harmonise such standards at an international level. That way, technicians have more concrete tools at their disposal to develop ethically responsible AI applications.
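As a minimal illustration of those three steps – with invented example content, not an established standard – a single abstract value can be carried down to measurable design requirements like this:

```python
# Minimal sketch (invented content) of the three normalisation steps:
# an abstract value is specified into context-dependent norms, and each norm
# is operationalised into a measurable design requirement.

value_hierarchy = {
    "value": "transparency",                                  # 1. conceptualisation
    "norms": [                                                 # 2. specification
        {
            "context": "job application screening",
            "norm": "rejected candidates can see which criteria were decisive",
            "design_requirement": "the three most decisive features per decision are logged and reportable",  # 3. operationalisation
        },
        {
            "context": "music recommendation",
            "norm": "users can find out why an item was suggested",
            "design_requirement": "a one-line explanation is available on request",
        },
    ],
}

for norm in value_hierarchy["norms"]:
    print(f"{value_hierarchy['value']} | {norm['context']}: {norm['design_requirement']}")
```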
[Figure: the 'values hierarchy' pyramid – values at the top, norms in the middle, design requirements at the base. Source: Van de Poel (2013)]
The ethical design process
There are different approaches to the development of software, from so-called 'waterfall' models to 'agile' approaches (Leijnen et al., 2020). In the case of waterfall models, the design is determined at the start of the process, while agile models adopt a more iterative approach, which doesn't go from A to B in a straight line, but accepts the possibility of encountering new challenges in the course of the development process. Sometimes, that means having to take a step back to get ahead, or understanding that you shouldn't aim for B, but for C instead.
To a large extent, the waterfall model matches the current value-driven ethical design disciplines, like Value-Sensitive Design (VSD), where the starting point is to define values as early in the design process as possible, making it possible to maximise as many values as possible in relation to each other, while innovation serves the optimisation of those values. For that to happen, it is important to choose the design at an early stage and make the values explicit. However, a limitation of this waterfall approach is that the focus is primarily on value conflicts that exist at an abstract level, without sufficiently taking into account the tensions that can occur during the design process. In particular when we are talking about AI applications, where factors like safety are crucially important, it is hard to determine in advance which design requirements have to be embedded, because those requirements are subject to change. When you approach this in a static way, that may lead to vulnerabilities in the system. The assumption that requirements defined in advance will flow over into the next stage of the design process is wrong: in practice, there is a risk that parts will be lost and fall outside of the process, which means new requirements may be needed to safeguard an ethical application of AI.
» Standards are broadly supported agreements about the ethics, governance and technology of AI, allowing AI to meet the same requirements everywhere. «
–– Yvette Mulder, NEN
However, there’s an important step that’s missin from
this process, namely the quantification of what is
‘ood’ and ‘not ood’. Without such considerations,
a system cannot determine what the riht action
is within a iven context. Machines need a kind of
moral intuition that has to develop alonside society.
Universal values may not chane very quickly, but the
weihin of different specifications of those values
in different contexts does, which requires a different
approach to the desin process.
[Figure: the design process – a single up-front design phase versus repeated design, concept and develop iterations]
An agile approach makes it easier to deal with unforeseen circumstances and adjust the design in the course of the process, making sure that ethical considerations play a role throughout the design process. However, existing agile approaches focus too much on the functional system requirements. The focus appears to be on the system, not the human element. Developing AI systems that can make ethically responsible decisions requires an approach that also looks at less tangible system requirements, like values. At the moment, the development of AI systems still often focuses on the question of how we can improve the reliability of AI systems, instead of on the question of how we can develop AI systems that can assign the right value to people.
It is therefore important to determine the relative weight of different values in relation to each other in a given context. To be able to do that, we need to understand that the weighing process depends on different factors. There are at least four factors that play a role:
> The stakeholders involved
> Social goals (values)
> Specific interests
> Context
To put ethics into practice, all these factors have to be translated to the design process. The context is different for every application and so are the interests. In this process, the more specific the context is, the stronger the design will be, because it is more customised. And that is exactly what is missing in the current ethical discussions. We need a process that translates ethics to the context.
An aile approach makes it easier to deal with
unforeseen circumstances and adjust the desin in the
course of the process, makin sure that ethical consider-
ations play a role throuhout the desin process.
However, existin aile approaches focus too much on
the functional system requirements. The focus appears
to be on the system, not the human element. Developin
AI systems that can make ethically responsible decisions
requires an approach that also looks at less tanible
system requirements, like values. At the moment, the
development of AI systems still often focuses on the
question how we can improve the reliability of AI
systems, instead of on the question how we can develop
AI systems that can assin the riht value to people.
It is therefore important to determine the relative
weiht of different values in relation to each other
in a iven context. To be able to do that, we need
to understand that the weihin process depends on
different factors. There are at least four factors
that play a role:
> The stakeholders involved
> Social oals (values)
> Specific interests
> Context
To put ethics in practice, all these factors have to
be translated to the desin process. The context is
different for every application and there are other
interests. In this process, the more specific the context
is, the stroner the desin will be, because it is more
customised. And that is exactly what is missin in
the current ethical discussions. We need a process that
translates ethics to the context.
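For teams that want to treat such ethical user stories like any other backlog item, a minimal sketch could look as follows. It is illustrative only: the field names mirror the four factors mentioned above, and the example entries paraphrase the user stories for the self-driving car.

```python
from dataclasses import dataclass

@dataclass
class EthicalUserStory:
    """As <stakeholder> I want <value> in order to <interest> at <context>."""
    stakeholder: str
    value: str
    interest: str
    context: str

    def __str__(self) -> str:
        return (f"As {self.stakeholder}, I want to increase {self.value}, "
                f"in order to {self.interest}, in the case of {self.context}.")

backlog = [
    EthicalUserStory("manufacturer", "traceability",
                     "track the system error and avoid collisions",
                     "a collision between two autonomous vehicles"),
    EthicalUserStory("legislator", "explainability",
                     "impose stricter requirements on the system",
                     "a collision between two autonomous vehicles"),
]

for story in backlog:
    print(story)
```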
In the case of weihin ethical interests, however,
it is better to apply the so-called ‘T-shirt sizin’
method, where the different interests are weihed by
assinin S, M, L and XL to express the relative weiht
of the interests, without creatin the illusion that
one interest is 1 times more important than another
interest. Also, it is not possible to exchane a bad deed
for a ood deed. Ultimately, all interests are included
in the desin process, while makin it possible to make
choices in the desin process.
These methods can also be used to detect potential
value conflicts and weih the values involved aainst
each other. What is more important in case of a
collision between two autonomous cars? The traceability
of the decision-makin process for the manufacturer
(transparency)? Or protectin the personal information
of the user (privacy)? These interests can also be
weihed aainst each other, showin that most trade-offs
take place between different users roups, and to a
lesser extent between people and system. And aain,
it is not so much an exchane as it is a rankin of
priorities. The oal is ultimately to maximise these
values in relation to each other via the desin.
By placin the universal value of ‘transparency’
in context, it is specified, drawin a distinction
between traceability (the data set and the processes that
enerate the decision of the AI system), explainability
(a suitable explanation of the decision-makin process
of the AI system) and communication (the com munication
about the level of accuracy  and limitations of the
system). This makes it clear that different stakeholders
have different interests within the same value, which
also applies to other values, like privacy:
> As manufacturer, I want to increase the value and
interity of the data, in order to help avoid in -
accuracies, errors and mistakes in case of a collision
> As driver, I want to increase privacy and data
protection, in order to uarantee that my personal
information is protected in case of a collision
> As leislator, I want to control access to data,
in order to be able to create protocols and manae
access to data in case of a collision
> As insurer, I want to control access to data, in order
to et clarity about who can access data under what
circumstances in case of a collision.
This creates requirements at user level and makes it
possible to achieve customisation. The advantae of
this approach is that prorammers are used to workin
with these types of processes, which will make their
implementation in the desin process easier.
Weihin
For a manufacturer to be able to develop a self-drivin
car, it is not only important to know which different
interests there are, but also to know how these
different interests relate to one another. That can also
serve as inspiration durin the scrum process, by lookin
at ‘plannin poker’. Normally speakin, this method
is used to determine which activities in the desin
process have priority and need to be carried out first.
Measurable values, like , 1, 4 and 1, are assined.
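As an illustration of how this differs from planning poker, the sketch below (stakeholders and weights invented for the example) treats the T-shirt sizes as an ordinal scale: they can be compared and sorted, but not added up or traded off, and the highly weighted interests of different stakeholders are surfaced as the value conflicts that the team still has to discuss.

```python
# T-shirt sizes form an ordinal scale: they can be ranked, but 'L' is not '3 times S'.
SIZES = ["S", "M", "L", "XL"]

def rank(size: str) -> int:
    return SIZES.index(size)

# Hypothetical weighting of interests for one context:
# a collision between two autonomous vehicles.
weights = {
    ("manufacturer", "traceability"): "L",
    ("user", "privacy"): "XL",
    ("legislator", "explainability"): "L",
    ("insurer", "explainability"): "M",
}

# Surface potential value conflicts: highly weighted interests of different
# stakeholders that may pull the design in different directions.
high_priority = sorted(
    (item for item, size in weights.items() if rank(size) >= rank("L")),
    key=lambda item: -rank(weights[item]),
)
print("To be weighed against each other in the design:")
for stakeholder, value in high_priority:
    print(f"  {stakeholder}: {value} ({weights[(stakeholder, value)]})")
```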
Ethical design game
A book about ethics in the design process of AI which argues that a lot is written about ethics and that an action perspective is often missing has to go beyond the written word, which is why we have developed an ethical design game, inspired by the scrum process, that can be used to better streamline the discussion about ethics. The game was developed in collaboration with the standards commission AI of the NEN (the Royal Netherlands Standardization Institute) and the Artificial Intelligence lectorate of the HU University of Applied Sciences Utrecht. It offers policy-makers, developers, philosophers and essentially everybody who is interested in ethics an opportunity to take part in the discussion about what we, as a society, find important when it comes to the development of AI. It helps provide more insight into the various stakeholder perspectives and is designed to contribute to a more constructive discussion about AI and ethics. Because different interests can be weighed against each other, it is possible to make more concrete choices in the design process.
A final piece of advice
To safeguard ethically responsible AI applications in the future, all movements and approaches ultimately have to be united. We need ethical guidelines, assessments and standards (principle ethics) to be able to map value conflicts at an early stage and bring values together (consequential ethics), and to then develop AI applications that can make ethically responsible decisions (virtue ethics). That means that the best elements of static and adaptive learning have to be combined to develop AI systems that can aspire to our human intuition and make the right decisions within specific contexts. Complete control over these technologies may be an illusion, but we can design AI systems that will act in our interests. Now and in the future.
Ethical desin ame
A book about ethics in the desin process of AI which
arues that a lot is written about ethics and that an
action perspective is often missin, has to o beyond
the written word, which is why we have developed an
ethical desin ame, inspired by the scrum process,
that can be used to better streamline the discussion
about ethics. The ame was developed in collaboration
with the standards commission AI of the NEN (The Royal
Netherlands Standardization Institute) and the Artificial
Intellience lectorate of the HU University of Applied
Sciences Utrecht. In offers policy-makers, developers,
philosophers and essentially everybody who is interested
in ethics an opportunity to take part in the discussion
about what we, as a society, find important when it
comes to the development of AI. It helps provide more
insiht into the various stakeholder perspectives and is
desined to contribute to a more constructive discussion
about AI and ethics. Because different interests can be
weihed aainst each other, it is possible to make more
concrete choices in the desin process.
A final piece of advice
To safeuard ethically responsible AI applications in
the future, all movements and approaches ultimately
have to be united. We need ethical uidelines,
assessments and standards (principle ethics) to be
able to map value conflicts at an early stae and
brin values toether (consequential ethics), and to
then develop AI applications that can make ethically
responsible decisions (virtue ethics). That means that
the best elements of static and adaptive learnin have
to be combined to develop AI systems that can aspire
to our human intuition and make the riht decisions
within specific contexts. Complete control over these
technoloies may be an illusion, but we can desin
AI systems that will act in our interests.
Now and in the future.
Guest contribution ––––––––––––––––––––––––––––––––––––––
The ethics of AI in practice
By Bernard ter Haar, Special advisor, Ministry of the Interior and Kingdom Relations
A lot is thought, written and spoken about the importance of securing ethical values for artificial intelligence (AI), or even for the entire digital ecosystem. There's a lot of theory, and little practice. And all that theory has a bit of a paralysing effect. As though ethics is really too lofty or difficult a subject to put into practice. That is, at any rate, an enormous misunderstanding.
Simply put, I see ethics as thinking about good and evil. And we do that every day. We constantly judge developments in this world on whether they are good or bad. For instance, many people think it's ethically irresponsible to let refugees suffer on some Greek island, while others think it's ethically important to defend authentic Dutch culture. And not only do we judge every day, our judgment also shifts over time. Until the 1960s, female teachers had to quit their jobs once they got married, based on our ethical stance in relation to family values. Nowadays, we see that as oppression of women, at least most of us do. There's never a complete consensus about ethical values. In China, they thought long and hard about good and bad and the possibilities of AI. They set up a kind of social score system, in which people are scored on the basis of whether they behave well or badly. In the eyes of the Chinese, a good way to increase the ethical content of social behaviour. In the Netherlands, we abhor such an approach, because it is at odds with our views on individual freedom and expression. The fact that an ethically positive goal like social cohesion is used to sugar-coat purely commercial interests, like Facebook does, is also viewed with increasing scepticism.
So securing ethical values is an everyday activity, and an essential and very dynamic element of our democratic constitutional state. We secure our ethics in laws and rules that are created democratically. And yet, as I wrote earlier, when we are talking about ethics and AI, we see a lot of theory and little practice. Where do things go wrong? Both at a departmental and parliamentary level, legislators appear reluctant to act. There is a lack of knowledge and insight. There is the question at which level the responsibility lies: national, European or global. And there is a widespread fiction about the value of total freedom of the Internet. Protection of property is the basis of almost every ethical system, but do digital data have an owner? Of course they do, but that needs to be translated into laws and ethics.
To create ethically responsible AI, a few things are important. Legislators need to know what is happening in the digital world. It goes further than that, obviously. We all need to know. After all, democracy can only function when there's a broad social discussion. The transparency of every digital action has to be increased considerably. That is complicated, which is why we need to give it serious thought. Once we know more about what happens in the digital realm, which includes the developing world of AI, we can decide on a day-to-day basis what is acceptable and what is not. And we will also start limiting the latter category in one way or another. Experience has to make us wiser; there's no way we can regulate everything in advance. That's no different in the physical world. It does require legislators who are able to keep up with the pace of digital development. A challenge, but one we cannot walk away from!
Final thoughts ––––––––––––––––––––––––––––––––––––––––––––––
Don't put all the responsibility on the programmer's shoulders
By Rudy van Belkom
There was a lot of criticism regarding the 'appathon' organised by the Dutch Ministry of Health to try and use smart technologies to stop the coronavirus from spreading. The process was too hasty and chaotic. Developers had to try and make some last-minute improvements to the tracking apps in a pressure cooker. According to the experts involved, the results were disappointing. None of the apps met the relevant privacy guidelines. Ethically irresponsible, was the final verdict.
And yet, the way I see it, the real criticism doesn't concern the appathon itself, but the way we all conduct ethics. At the moment, the final responsibility lies with the programmers, which is not as it should be (and may even be unintentional). We think that we have covered everything with different ethical principles and guidelines, but nothing could be farther from the truth, because those principles and guidelines say nothing about the way values like privacy need to be expressed in mathematical terms. And the development of AI is all about statistics. Smart systems have to be able to extract patterns from large amounts of data and learn from that independently. In addition to the fact that this means that personal information is exposed, the system can unintentionally disadvantage certain groups of people by misinterpreting the data.
In that sense, it's not so much about privacy, but about fairness. And the question in that case is what is fair from a statistical point of view. Do we not want to overlook any corona cases, or don't we want to unjustly quarantine people? And what if it turns out that people in some areas have a higher risk of contamination? Do those variables have to be factored in, or do we want everyone to be treated equally? The question then is what is 'fair enough'. What percentage of erroneous quarantine cases can we and do we want to accept? In practice, the various mathematical definitions of fairness turn out to be mutually exclusive. So if we don't quantify these guidelines, programmers will, according to the current approach, have to make that choice themselves.
Furthermore, what we consider to be fair is context-dependent. Several months ago we couldn't even imagine having to place people in quarantine in the first place, and we will probably respond differently when we are talking about prison sentences, rather than a relatively luxurious quarantine in our own homes. Universal values may not be subject to change, but our norms certainly are. That makes it not only difficult, but also irresponsible to programme principles and guidelines into AI systems. Ultimately, the system itself has to be able to make moral considerations. That sounds scary, but without a morally intuitive system, it is almost impossible to apply AI in practice in an ethically responsible way. The only reason that people are able to deal with rules is that we can translate them into behaviour within a given context. At the moment, social distancing is the norm, but if someone were to stumble and fall down a flight of stairs, I believe I should catch that person if I am able to.
As such, we should spend less energy on setting up ethical guidelines and more time on building systems that can make ethically responsible decisions within a given context. Ethics is above all a design issue. In addition to programmers and ethicists, sociologists, psychologists, biologists and economists should also be involved. If we are unable to make that happen, then perhaps we shouldn't want to use AI systems at all. Or accept that our ethics are unethical.
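To make the statistical point above tangible, here is a toy calculation (all numbers invented): one risk score and a quarantine threshold. Lowering the threshold overlooks fewer corona cases, but quarantines more people unjustly; no threshold minimises both at once, and which point on that trade-off counts as 'fair enough' is exactly the choice that should not be left to the programmer alone.

```python
# Toy illustration: each person has a model risk score and a ground truth (infected or not).
people = [
    (0.9, True), (0.8, True), (0.7, False), (0.6, True),
    (0.5, False), (0.4, False), (0.3, True), (0.2, False), (0.1, False),
]

for threshold in (0.25, 0.55, 0.75):
    quarantined = [(score, sick) for score, sick in people if score >= threshold]
    missed_cases = sum(sick for score, sick in people if score < threshold)
    unjust_quarantines = sum(not sick for score, sick in quarantined)
    print(f"threshold {threshold:.2f}: missed cases = {missed_cases}, "
          f"unjust quarantines = {unjust_quarantines}")
```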
Epilogue –––––––––––––––––––––––––––––––––––––––––––––––––––
Ethics in action
By Patrick van der Duin, Director, Netherlands Study Centre for Technology Trends
Credit where credit is due. It was former Prime Minister Jan Peter Balkenende who, in 2002, said we should talk more about norms and values. I was among those who found that rather amusing and felt that it was an ancient discussion. And that his norms and values aren't the same as mine (and that it's actually values and norms). But now, in 2020, norms and values are more important than ever before. Balkenende proved to be a veritable prophet, who was rewarded when a norm was named after him: the Balkenende norm, which states that managers in the public and semi-public sector are not allowed to earn more than a government Minister.
The current 'ethical turn' (perhaps similar to the 'linguistic turn') also fills me with a sense of nostalgia. In 2002, I was working at the Technology, Governance and Management faculty at Delft University of Technology. There was a section Philosophy of Technology, but that was hardly a grand affair going by the small number of staff members: a mere handful. But in 2020, it is the largest section of the entire faculty. Under the header of 'responsible innovation', the ladies and gentlemen ethicists and philosophers have stood up from their respective armchairs to examine how they can put ethics into action. No more idle philosophies, but research into how design processes can be managed ethically and, to that end, the development of practical methods that really have to make the world a little more humane.
The STT study into AI in the future, carried out by project leader Rudy van Belkom, has to be seen in this context of the increasing importance of ethics in our society, economy and technology. This final part is a logical conclusion of the trilogy, which began by examining what AI is, how diverse it is and also what it is not. Not to limit the research but above all to create clarity about what AI means, to allow us to put the hype surrounding AI into perspective. The second study deliberately created some confusion by using a number of scenarios to show the various possible futures of AI, with the aim of breaking through the dominant discourse that AI is something that is really bad for humanity and to show that there are alternatives. Alternatives that didn't just materialise out of thin air but that are the result of what we, as a society, want and desire. Many argue that AI may well be the most far-reaching technology that mankind has ever developed and will develop, which is exactly why 'human agency' is important, so that we shape AI the way we want to.
In this third and final part, Rudy van Belkom has shown how you can turn theory surrounding AI into practice. Not just by explaining that ethics (like AI) is a many-headed, well-intended monster, but also by arguing that it can only be put into practice by including anyone and everyone who is interested in and cares for AI. Ethics is something to talk about, but it's also something to do. This STT study is one of the first to establish a direct connection between thinking about the future and acting accordingly, which, as far as the development of AI is concerned, is not a luxury, but a necessity. Not only to prevent AI from going in the 'wrong' direction, but above all to use the possibilities of AI to further the norms and values of our society.
–––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Glossary
Accountability
Responsibility that is legally prescribed is called accountability. III.1.2 Urgent ethical issues
Accuracy
Generally speaking, the more complete and elaborate the dataset with which an AI system is trained, the more accurate the system will be. III.2.2 Conflicting values
Affective computing
Affective computing refers to systems that can detect and recognize emotions.
Artificial Intelligence (AI)
The most dominant association with AI is machine learning. Research by the World Intellectual Property Organization (WIPO) from 2019 also shows that machine learning is the most dominant AI technology included in patent applications, which is why the focus in this study is mainly on machine learning and related methods.
Algorithms
An algorithm is a mathematical formula. It is a finite sequence of instructions that works from a given starting point towards a predetermined goal.
Biases
Because we are programmed by evolution to save as much energy as we can, the way we process information can lead to fallacies, also known as cognitive biases.
Brain
We are not yet able to model complex concepts that we want to link to the brain, like awareness and free will, and we cannot connect them individually to certain areas of the brain.
Clusters
When creating clusters, the algorithm independently searches for similarities in the data and tries to recognize patterns.
Common sense
Common sense consists of all knowledge about the world; from physical and visible aspects to cultural and therefore more implicit rules, like how we should treat each other.
Consequential ethics
Consequential ethics states that it is the consequences of a certain action that determine whether the action is 'right'. In other words, the behaviour has to have positive consequences, even when it undermines certain principles. So it is not about the action itself, but about the consequences. III.1.3 A matter of ethical perspective
Contest
At the moment, the interest in AI is so great that world powers have entered into a kind of AI contest. Research by PwC from 2017 shows that, in 2030, the worldwide Gross Domestic Product (GDP) will be 14% higher thanks to developments in the area of AI. According to Russian President Putin, the country with the best AI will rule the world (2017).
Deep learning
Deep learning is a machine learning method that uses multi-layered artificial neural networks.
Deep reasoning
This new approach tackles problems in the old approaches by combining them. Deep reasoning solves the scalability problem of symbolism (it is impossible to programme all options efficiently), while at the same time tackling the data problem of neural networks (large data sets are often not available or are incomplete).
Decision-making
We like to believe that human beings are rational creatures. But the decision-making process is capricious. In addition to factual information, perception and ambition also play an important role.
Dependence on data
One of the main limitations of AI is that it depends on huge amounts of data, which is why, in the case of deep learning, people sometimes talk about data-hungry neural networks. As a result, the technology does not perform well in peripheral cases where there is little data available.
Empathy
Empathy is the ability to imagine yourself in the situation and feelings of other people. Empathy also allows us to read and understand the non-verbal communication of others.
Ethical guidelines
In recent years, various companies, research institutes and government organisations have set up different principles and guidelines for ethical AI, at a national, continental and global level. III.2.1 From corporate to government
Ethics
Ethics is a branch of philosophy that engages in the systematic reflection on what should be considered good or right actions. III.1 AI and Ethics
Explainability
It is increasingly difficult for people to determine on which data the results of AI are based, which is why AI is still often compared to a black box.
Explanation
Within the decision-making process of AI, the explanation and explainability involve the explanation and traceability of the decision in retrospect. III.1.2 Urgent ethical issues
Fairness
Fairness is a much-used principle in ethical guidelines and assessment tools. Philosophers have thought for hundreds of years about the concept of fairness, and with the arrival of AI, a whole new dimension has been added, because now the concept of fairness has to be expressed in mathematical terms. III.2.3 Practical challenges
Formal logic
In the first phase of AI, from 1957 to the late 1990s, formal logic, in other words if-then rules, was the main tool being used. This form of AI was focused predominantly on high-level cognition, like reasoning and problem-solving.
Freedoms and rights
In essence, freedom refers to the freedom people have to determine how to organise their lives. This is even a right, the right of self-determination. However, that right is limited by the prohibition to harm others. III.1.2 Urgent ethical issues
General AI
General Artificial Intelligence should be able to carry out all the intellectual tasks that a human being can also perform.
Image recognition systems
Image recognition systems often use a so-called Convolutional Neural Network (CNN), which acts as a filter moving across the image, looking for the presence of certain characteristics.
Intelligence
Intelligence can be described as a set of mental abilities, processes and skills, like the ability to reason and adapt to new situations.
Intentionality
Even if you were to programme all the knowledge in the world into a computer, the question remains whether that computer genuinely understands its actions. That understanding is also referred to as intentionality.
Justice
When we are talking about justice, in essence, we are talking about the equality of people. People should be treated equally and be given equal opportunities. III.1.2 Urgent ethical issues
Machine biases
Not only human intelligence, but also AI can be biased. The output of algorithms can be biased in terms of gender and race. The explanation for that is simple: when your input isn't pure, then neither will the output be. So the biases of algorithms are caused by the cognitive biases of people.
Machine learning
Machine learning involves a revolution in which it is no longer people who programme (if this, then that), but in which machines themselves deduce rules from data.
Mathematics
In essence, AI is 'ordinary' mathematics. Albeit a very advanced form of mathematics, but mathematics nonetheless. It is above all a tool to realize an optimisation goal.
Morality
In discussions about AI, people often confuse ethics and morality, even though there is a clear difference. Morality is the entirety of opinions, decisions and actions with which people (individually or collectively) express what they think is good or right. Ethics, on the other hand, is the systematic reflection on what is moral. III.1.3 A matter of ethical perspective
Narrow AI
Narrow Artificial Intelligence is a form of AI that is very good at carrying out specific tasks, for instance playing chess, making recommendations and making quantifiable predictions.
Neural networks
Artificial neural networks can be used within deep learning and are originally based on the human brain, whereby neurons are connected to each other in a layered fashion.
Nightmare scenarios
Scenarios about robots rising up have been a popular storyline for almost 100 years (assuming that director Fritz Lang's 1927 movie 'Metropolis' is the first real science fiction movie in which a robot has bad intentions). III.1.1 Ethics in the spotlight
Principle ethics
In the case of principle ethics, a principle is used as the starting point, for instance respect for life and human dignity. When solving an ethical problem, one or more of these principles need to be taken into account. The principle has to be applied at all times, regardless of the consequences. III.1.3 A matter of ethical perspective
Privacy
In the European Convention on Human Rights, privacy is included as the right to respect for people's private lives, which requires a fair balance between the social interest that a technology serves and the extent to which it violates people's private lives. III.1.2 Urgent ethical issues
Responsibility
In discussions about the development of AI, the term 'responsibility' is often mentioned. For instance, who is responsible in an accident involving a self-driving car? However, to determine who is responsible, we first need to determine what it is they are responsible for and what behaviour can and cannot be defended. In addition, the question is how we can deduce the level of responsibility. III.1.2 Urgent ethical issues
Superintelligence
Artificial Super Intelligence (ASI) can be realised when AI transcends the abilities of the human brain in every possible domain.
Transparency
Within the decision-making process of AI, transparency is primarily about the process and the predetermined criteria. III.1.2 Urgent ethical issues
Trust
Research from, among others, the University of Pennsylvania from 2014 shows that, when people see an algorithm make a small and insignificant mistake, chances are they will have lost all trust in it. Among researchers, this is also referred to as algorithm aversion.
Virtue ethics
In the case of virtue ethics, it is not the rules or certain principles that are central to moral judgments, but the character of the actor, whose actions are separated from their explicit consequences. Right actions require certain characteristics, or virtues. III.1.3 A matter of ethical perspective
–––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Sources
AI Now Institute (2018). AI in 2018: a year in review. Consulted on https://medium.com/@AINowInstitute/ai-in-2018-a-year-in-review-8b161ead2b4e
AI Now Institute (2019). AI in 2019: a year in review. Consulted on https://medium.com/@AINowInstitute/ai-in-2019-a-year-in-review-c1eba5107127
Araujo, T. et al. (2018). Automated Decision-Making Fairness in an AI-driven World: Public Perceptions, Hopes and Concerns. Consulted on http://www.digicomlab.eu/wp-content/uploads/2018/09/20180925_ADMbyAI.pdf
Beijing Academy of Artificial Intelligence (2019). Beijing AI Principles. Consulted on https://www.baai.ac.cn/blog/beijing-ai-principles
Blauw, S. (2018). Algoritmes zijn even bevooroordeeld als de mensen die ze maken [Algorithms are just as biased as the people who make them]. Consulted on https://decorrespondent.nl/8802/algoritmes-zijn-even-bevooroordeeld-als-de-mensen-die-ze-maken/4676003192124-d89e66a9
Boer, M. de (2020). The many futures of Artificial Intelligence: Scenarios of what AI could look like in the EU by 2025. Consulted on https://www.pwc.nl/nl/actueel-publicaties/assets/pdfs/the-many-futures-of-artificial-intelligence.pdf
Bostrom, N. (2016). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press
Dalen, W. van (2012). Ethiek de basis; Morele competenties voor professionals [Ethics, the basics; Moral competencies for professionals]. Groningen/Houten: Noordhoff Uitgevers
Delvaux, M. (2016). DRAFT REPORT with recommendations to the Commission on Civil Law Rules on Robotics. Consulted on https://www.europarl.europa.eu/doceo/document/JURI-PR-582443_EN.pdf?redirect
Duin, P. van der, Snijders, D. & Lodder, P. (2019). The National Future Monitor: How do Dutch people think about technology and the future? Den Haag: STT
Eckersley, P. (2018). Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function). Consulted on https://arxiv.org/abs/1901.00064
ECP (2018). Artificial Intelligence Impact Assessment (AIIA). Consulted on https://ecp.nl/wp-content/uploads/2019/01/Artificial-Intelligence-Impact-Assessment-English.pdf
Elsevier (2018). Artificial Intelligence: How knowledge is created, transferred, and used. Consulted on https://www.elsevier.com/research-intelligence/resource-library/ai-report
Est, R. van & Gerritsen, J. (2017). Human rights in the robot age: Challenges arising from the use of robotics, artificial intelligence, and virtual and augmented reality. Consulted on https://www.rathenau.nl/nl/digitale-samenleving/mensenrechten-het-robottijdperk
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin's Press
European Commission (2020). Attitudes towards the impact of digitalisation on daily lives. Consulted on https://ec.europa.eu/commfrontoffice/publicopinion/index.cfm/survey/getsurveydetail/instruments/special/surveyky/2228
European Commission (2020). White Paper on Artificial Intelligence: a European approach to excellence and trust. Consulted on https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A. & Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. Berkman Klein Center Research Publication No. 2020-1. http://dx.doi.org/10.2139/ssrn.3518482
Future of Life Institute (2017). Asilomar AI Principles. Consulted on https://futureoflife.org/ai-principles/
G7 Innovation Ministers (2018). G7 Innovation Ministers' Statement on Artificial Intelligence. Consulted on http://www.g8.utoronto.ca/employment/2018-labour-annex-b-en.html
Gartner (2019). Top 10 Strategic Technology Trends for 2019: Digital Ethics and Privacy. Consulted on https://www.gartner.com/en/documents/3904420/top-10-strategic-technology-trends-for-2019-digital-ethi
Genesys (2019). New Workplace Survey Finds Nearly 80% of Employers Aren't Worried About Unethical Use of AI — But Maybe They Should Be. Consulted on https://www.genesys.com/en-gb/company/newsroom/announcements/new-workplace-survey-finds-nearly-80-of-employers-arent-worried-about-unethical-use-of-ai-but-maybe-they-should-be
Google (2019). Perspectives on Issues in AI Governance. Consulted on https://ai.google/static/documents/perspectives-on-issues-in-ai-governance.pdf
High-Level Expert Group on Artificial Intelligence (2019). Ethics guidelines for trustworthy AI. Consulted on https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
Hill, K. (2020). The Secretive Company That Might End Privacy as We Know It. Consulted on https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
Hoven, J. van den, Miller, S. & Pogge, T. (2017). Designing in Ethics. Cambridge: Cambridge University Press
IBM (2019). Everyday Ethics for AI. Consulted on https://www.ibm.com/design/ai/ethics/everyday-ethics
Jobin, A., Ienca, M. & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399. https://doi.org/10.1038/s42256-019-0088-2
Kant, I. (1788). Kritik der praktischen Vernunft [Critique of Practical Reason]. Cologne: Anaconda Verlag
Kaur, H. et al. (2020). Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning. Consulted on http://www.jennwv.com/papers/interp-ds.pdf
Leijnen, S. et al. (2020). An Agile Framework for Trustworthy AI. ECAI 2020 position paper.
Maurits, M. & Blauw, S. (2019). In de stad van de toekomst praten lantaarnpalen mee en burgers niet [In the city of the future, lampposts have a say and citizens do not]. Consulted on https://decorrespondent.nl/9148/in-de-stad-van-de-toekomst-praten-lantaarnpalen-mee-en-burgers-niet/4859813360776-3fcc1087
Microsoft Research (2018). Manipulating and Measuring Model Interpretability. Consulted on https://arxiv.org/pdf/1802.07810.pdf
Ministry of Economic Affairs and Climate Policy (2019). Strategic Action Plan for Artificial Intelligence. Consulted on https://www.government.nl/documents/reports/2019/10/09/strategic-action-plan-for-artificial-intelligence
NeurIPS (n.d.). Code of Conduct. Consulted on https://nips.cc/public/CodeOfConduct
OECD (2019). Principles on AI. Consulted on http://www.oecd.org/going-digital/ai/principles/
Pew Research Center (2018). The Data on Women Leaders. Consulted on https://www.pewsocialtrends.org/fact-sheet/the-data-on-women-leaders/#ceos
Poel, I. van de & Royakkers, L. (2011). Ethics, Technology and Engineering: An Introduction. Hoboken: Wiley-Blackwell
Poel, I. van de (2013). Translating Values into Design Requirements. In: Michelfelder, D., McCarthy, N. & Goldberg, D. (eds), Philosophy and Engineering: Reflections on Practice, Principles and Process. Vol. 15 of Philosophy of Engineering and Technology. Dordrecht: Springer (pp. 253-266)
ProPublica (2016). Machine Bias. Consulted on https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Rahwan, I. (2019). Machine behaviour. Consulted on https://www.nature.com/articles/s41586-019-1138-y
Russell, S. (2017). 3 principles for creating safer AI. Consulted on https://www.ted.com/talks/stuart_russell_3_principles_for_creating_safer_ai
Selbst, A.D. et al. (2019). Fairness and Abstraction in Sociotechnical Systems. Consulted on https://dl.acm.org/doi/pdf/10.1145/3287560.3287598
Snijders, D., Biesiot, M., Munnichs, G. & Est, R. van (2019). Citizens and sensors: Eight rules for using sensors to promote security and quality of life. Den Haag: Rathenau Instituut
Smart Dubai (2019). AI Ethics Principles & Guidelines. Consulted on https://www.smartdubai.ae/pdfviewer/web/viewer.html?file=https://www.smartdubai.ae/docs/default-source/ai-principles-resources/ai-ethics.pdf?sfvrsn=d4184f8d_6
The Pontifical Academy for Life (2020). Rome Call for AI Ethics. Consulted on http://www.academyforlife.va/content/dam/pav/documenti%20pdf/2020/CALL%2028%20febbraio/AI%20Rome%20Call%20x%20firma_DEF_DEF_.pdf
UNI Global Union (2017). Top 10 Principles for Ethical AI. Consulted on http://www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf
U.S. Department of Defense (2020). AI Principles: Recommendations on the Ethical Use of Artificial Intelligence. Consulted on https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF
Utrecht Data School (2017). Data Ethics Decision Aid (DEDA). Consulted on https://dataschool.nl/deda/deda-worksheet/?lang=en
Verbeek, P.-P. (2011). De grens van de mens: Over ethiek, techniek en de menselijke natuur [The boundary of the human: On ethics, technology and human nature]. Rotterdam: Lemniscaat
Weijer, B. van de (2019). Binnenkort op de weg: zelfrijdende robottaxi's zonder reservemens achter het stuur [Coming soon on the road: self-driving robotaxis without a backup human behind the wheel]. Consulted on https://www.volkskrant.nl/economie/binnenkort-op-de-weg-zelfrijdende-robottaxi-s-zonder-reservemens-achter-het-stuur~b671f376/
Wilson, B., Hoffman, J. & Morgenstern, J. (2019). Predictive Inequity in Object Detection. Consulted on https://arxiv.org/abs/1902.11097