From What to How: An Overview of AI Ethics Tools, Methods and Research to
Translate Principles into Practices
Jessica Morley1, Luciano Floridi1,2, Libby Kinsey3, Anat Elhalal3
1 Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS
2Alan Turing Institute, British Library, 96 Euston Rd, London NW1 2DB
3Digital Catapult, 101 Euston Road, Kings Cross, London, NW1 2RA
Corresponding author: Jessica.morley@kellogg.ox.ac.uk
Statement of Funding: This research was funded by the Digital Catapult
Statement of Contribution: LK and LF contributed equally to this article
Abstract
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Wiener,
1960) (Samuel, 1960). However, in recent years symbolic AI has been complemented and
sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has
vastly increased its potential utility and impact on society, with the consequence that the ethical
debate has gone mainstream. Such a debate has primarily focused on principles—the ‘what’ of AI
ethics (beneficence, non-maleficence, autonomy, justice and explicability)—rather than on
practices, the 'how'. Awareness of the potential issues is increasing at a fast rate, but the AI
community's ability to take action to mitigate the associated risks is still in its infancy. Therefore,
our intention in presenting this research is to contribute to closing the gap between principles and
practices by constructing a typology that may help practically-minded developers ‘apply ethics’ at
each stage of the pipeline, and to signal to researchers where further work is needed. The focus is
exclusively on Machine Learning, but it is hoped that the results of this research may be easily
applicable to other branches of AI. The article outlines the research method for creating this
typology, the initial findings, and provides a summary of future research needs.
1. Introduction
As the availability of data on almost every aspect of life, and the sophistication of machine learning
(ML) techniques, have increased (Lepri, Oliver, Letouzé, Pentland, & Vinck, 2018), so have the
opportunities for improving both public and private life (Floridi & Taddeo, 2016). Society
has greater control than it has ever had over outcomes related to: (1) who people can become; (2)
what people can do; (3) what people can achieve; and (4) how people can interact with the world
(Floridi and colleagues, 2018a). Yet, growing concerns about the ethical challenges posed by the
increased use of ML in particular, and Artificial Intelligence (AI) more generally, in society,
threaten to put a halt to the advancement of beneficial applications, including in data science
(Mittelstadt, 2019), unless handled proportionately.
Balancing the tension between the need to support innovation, so that society's right to
benefit from science is protected (Knoppers & Thorogood, 2017), and the need to limit the
potential harms associated with poorly-designed AI (and specifically ML in this context),
summarised in Figure 1, is challenging. ML algorithms are powerful (Ananny & Crawford, 2018)
socio-technical constructs that raise concerns that are as much (if not more) about people as they
are about code (Crawford & Calo, 2016). Enabling the so-called dual advantage of 'ethical ML',
so that the opportunities are capitalised on whilst the harms are foreseen and minimised or
prevented (Floridi and colleagues, 2018), requires asking difficult questions about design,
development, deployment, practices, uses and users, as well as the data that fuel the whole process
(Cath, Zimmer, Lomborg, & Zevenbergen, 2018). Lessig was right all along: code is both our
greatest threat and our greatest promise (Lessig, 2006).
Inconclusive evidence: Algorithmic conclusions are probabilities and therefore not infallible. This can lead to unjustified actions. For example, an algorithm used to assess credit worthiness could be accurate 99% of the time, but this would still mean that one out of a hundred applicants would be denied credit wrongly.

Inscrutable evidence: A lack of interpretability and transparency can lead to algorithmic systems that are hard to control, monitor, and correct. This is the commonly cited 'black box' issue.

Misguided evidence: Conclusions can only be as reliable (but also as neutral) as the data they are based on, and this can lead to bias. For example, Dressel & Farid (2018) found that the COMPAS recidivism algorithm, commonly used in pretrial, parole, and sentencing decisions in the United States, is no more accurate or fair than predictions made by people with little or no criminal justice expertise.

Unfair outcomes: An action could be found to be discriminatory if it has a disproportionate impact on one group of people. For instance, Selbst (2017) articulates how the adoption of predictive policing tools is leading to more people of colour being arrested, jailed or physically harmed by police.

Transformative effects: Algorithmic activities, like profiling, can lead to challenges for autonomy and informational privacy. For example, Polykalas & Prezerakos (2019) examined the level of access to personal data required by more than 1,000 apps listed in the 'most popular' free and paid-for categories on the Google Play Store. They found that free apps requested significantly more data than paid-for apps, suggesting that the business model of these 'free' apps is the exploitation of personal data.

Traceability: It is hard to assign responsibility for algorithmic harms, and this can lead to issues with moral responsibility. For example, it may be unclear who (or indeed what) is responsible for autonomous car fatalities. An in-depth ethical analysis of this specific issue is provided by Hevelke & Nida-Rümelin (2015).

Figure 1: Ethical concerns related to algorithmic use, based on the 'map' created by Mittelstadt and colleagues (2016).
This might seem like an incredibly tall order. However, rising to the challenge is both essential and
possible. Indeed, those who claim that it is impossible fall foul of the is-ism fallacy: they confuse
the way things are with the way things can be (Lessig 2006), or indeed should be. It
is possible to design an algorithmically-enhanced society pro-ethically [1] (Floridi 2017b), so that it
protects the values, principles, and ethics that society thinks are fundamental (Floridi 2018). This
is the message that social scientists, ethicists, philosophers, policymakers, technologists and civil
society have been delivering in a collective call for the development of appropriate governance
mechanisms (D’Agostino & Durante, 2018) that will enable society to capitalise on the
opportunities, whilst ensuring that human rights are respected (Floridi & Taddeo, 2016),
and fair and ethical decision-making is maintained (Lipton, 2016).
The purpose of the following pages is to highlight the part that technologists, or ML
developers, can take in this broader conversation. Specifically, section 2 discusses how efforts to
date have been too focused on the ‘what’ of ethical AI (i.e. debates about principles and codes of
conduct) and not enough on the ‘how’ of applied ethics. Section 3 outlines the research planned
to contribute to closing this gap between principles and practice through the creation of an ‘applied
ethical AI typology’, and the methodology for its creation. Section 4, summarises what the typology
shows about the low availability and maturity, as well as skewed distribution, of tools and
methodologies available for practical AI ethics. Section 5, argues that there is a need for a more
coordinated effort, from multi-disciplinary researchers, innovators, policymakers, citizens,
developers and designers, to create and evaluate new tools and methodologies, in order to ensure
that there is a ‘how’ for every ‘what’ at each stage of the Machine Learning pipeline. Finally, section
6, concludes that this will be challenging to achieve, but it would be imprudent not to try.
[1] The difference between ethics by design and pro-ethical design is the following: ethics by design can be paternalistic in ways that constrain the
choices of agents, because it makes some options less easily available or not at all; instead, pro-ethical design still forces agents to make choices, but
this time the nudge is less paternalistic because it does not preclude a course of action but requires agents to make up their mind about it. A
simple example can clarify the difference. A speed camera is a form of nudging (drivers should respect the speed limits) but it is pro-ethical
insofar as it leaves to the drivers the freedom to choose to pay a ticket, for example in case of an emergency. On the contrary, in terms of ethics
by design, speed bumps are a different kind of traffic calming measure designed to slow down vehicles and improve safety. They may seem like a
good idea, but they involve a physical alteration of the road, which is permanent and leaves no real choice to the driver. This means that
emergency vehicles, such as a medical ambulance, a police car, or a fire engine, must also slow down, even when responding to an emergency.
2. Moving from Principles to Practice
As the call for 'AI governance' has got louder, the number of available Ethical Codes of Practice
for AI has increased. Currently (April 2019), there are at least 70 publicly available sets of ethical
principles and frameworks for AI [2]. The list includes documents produced by industry (Google [3],
IBM [4], Microsoft [5], Intel [6]), Government (Montreal Declaration [7], Lords Select Committee [8],
European Commission's High-Level Expert Group [9]), and academia (Future of Life Institute [10],
IEEE [11], AI4People [12]). The hope is that these principles, as abstractions (Anderson & Anderson,
2018), can act as normative constraints (Turilli, 2007) on the ‘do’s’ and ‘don’ts’ of algorithmic use
in society. This is a worthwhile aim, and a necessary building block in the creation of an
environment that fosters ethical, responsible, and beneficial AI. However, the mere existence of
these principles does little to bring about actual change in the design of algorithmic systems, leading
to accusations of ‘ethics washing’ and feelings of ‘ethics fatigue’ (Floridi 2019b).
The issue is that, whilst these principles—which (at least in Europe) are largely based on
the bioethical principles of beneficence, non-maleficence, justice and autonomy, and the concept
of ‘explicability’ (Floridi and colleagues, 2018)—provide a useful framework for what ethical AI
looks like, they do not tell developers how to design it. The gap between principles and practice is
simply too large. This is risky: unless mechanisms are developed to close this gap, the lack of
guidance may (a) result in the costs of ethical mistakes outweighing the benefits of ethical successes
(even a single critical ‘AI scandal’ could stifle innovation); (b) undermine public acceptance of
[2] See these two repositories: Algorithm Watch, The AI Ethics Guidelines Global Inventory (9 April 2019): https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/ and Winfield, A. (18 April 2019): An Updated Round Up of Ethical Principles of Robotics and AI. http://alanwinfield.blogspot.com/2019/04/an-updated-round-up-of-ethical.html
[3] Google's AI Principles: https://www.blog.google/technology/ai/ai-principles/
[4] IBM's everyday ethics for AI: https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf
[5] Microsoft's guidelines for conversational bots: https://www.microsoft.com/en-us/research/uploads/prod/2018/11/Bot_Guidelines_Nov_2018.pdf
[6] Intel's recommendations for public policy principles on AI: https://blogs.intel.com/policy/2017/10/18/naveen-rao-announces-intel-ai-public-policy/#gs.8qnx16
[7] The Montreal Declaration for Responsible AI: https://www.montrealdeclaration-responsibleai.com/the-declaration
[8] House of Lords Select Committee on Artificial Intelligence: AI in the UK: ready, willing and able?: https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf
[9] European Commission's Ethics Guidelines for Trustworthy AI: https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1
[10] Future of Life's Asilomar AI Principles: https://futureoflife.org/ai-principles/
[11] IEEE General Principles of Ethical Autonomous and Intelligent Systems: http://alanwinfield.blogspot.com/2019/04/an-updated-round-up-of-ethical.html
[12] Floridi, L. and colleagues (2018). AI4People - An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines 28, 689-707. doi:10.1007/s11023-018-9482-5
algorithmic systems; (c) reduce adoption of algorithmic systems; and (d) ultimately create a
scenario in which society incurs significant opportunity costs (Cookson, 2018).
Avoiding these opportunity costs is crucial, and so the social need for means of translating
between the ‘what’ (ethical principles) and the ‘how’ (technical requirements) (Dignum, 2017) is
growing. Complexity, variability, subjectivity, and lack of standardisation, including variable
interpretation of the ‘components’ of each of the ethical principles, make this challenging
(Alshammari & Simpson, 2017). However, it is achievable if the right questions are asked (Green,
2018) (Wachter, Mittelstadt, & Floridi, 2017a) and closer attention is paid to how the design
process can influence (Kroll, 2018) whether an algorithm is more or less ‘ethically-aligned.’ Thus,
this is the aim of this research project: to identify the methods and tools available to help
developers, engineers and designers of ML specifically (but we hope the results of this research
may be easily applicable to other branches of AI) reflect on and apply ‘ethics’ (Adamson, Havens,
& Chatila, 2019) so that they may know not only what to do or not to do, but also how to do it,
or avoid doing it (Alshammari & Simpson, 2017).
3. Methodology
The first task was to design a typology of applied AI tools, for the very practically minded ML
community (Holzinger, 2018), to embed reflection on the ethical principles of beneficence, non-
maleficence, autonomy, justice and explicability (Floridi and colleagues, 2018) into the ML
pipeline from pre-processing to use by the end user (Holzinger, 2018). To do this, the ethical
principles were combined with the stages of algorithmic development outlined in the overview of
the Information Commissioner's Office (ICO) auditing framework for artificial intelligence and
its core components [13], as shown in Figure 2, to encourage ML developers to move regularly
between design decisions and ethical principles.
[13] More detail is available here: https://ai-auditingframework.blogspot.com/2019/03/an-overview-of-auditing-framework-for_26.html
The 'Applied AI Ethics' typology in Figure 2 is a grid: its columns are the stages of the ML pipeline taken from the ICO auditing framework, and its rows are the ethical principles with their components. The pipeline stages are:
- Business and use-case development: the problem/improvements are defined and the use of AI is proposed.
- Design phase: the business case is turned into design requirements for engineers.
- Training and test data procurement: initial data sets are obtained to train and test the model.
- Building: the AI application is built.
- Testing: the system is tested.
- Deployment: the AI system goes live.
- Monitoring: the performance of the system is assessed.

The ethical principles, and the components associated with each, are:
- Beneficence: value-alignment; responsible design.
- Non-maleficence: privacy by design; secure by default; robustness.
- Autonomy: exercising of rights.
- Justice: fairness; compliance.
- Explicability: accountability; intelligibility; transparency.

The illustrative 'non-maleficent' row is populated as follows, one cell per stage:
- Business and use-case development: Cavoukian, Taylor & Abrams (2010) outline seven foundational principles for Privacy by Design: (1) proactive not reactive, preventative not remedial; (2) privacy as the default; (3) privacy embedded into design; (4) full functionality (positive sum, not zero sum); (5) end-to-end lifecycle protection; (6) visibility and transparency; (7) respect for user privacy.
- Design phase: Oetzel & Spiekermann (2014) set out a step-by-step privacy impact assessment (PIA) to enable companies to achieve 'privacy by design.'
- Training and test data procurement: Antignac, Sands & Schneider (2016) provide source code for DataMin, a data minimiser (a pre-processor that modifies input data so that only the data needed are made available to the program), which can be run at the data source points before disclosing the data.
- Building: Kolter & Madry (2018) provide a practical introduction, from a mathematical and coding perspective, to adversarial robustness, the idea being that it is possible to train deep learning classifiers to be resistant to adversarial attacks: https://adversarial-ml-tutorial.org/
- Testing: Dennis, Fisher, Lincoln, Lisitsa & Veres (2016) outline a methodology for verifying the decision-making of an autonomous agent, to confirm that the controlling agent never deliberately makes a choice it believes to be unsafe.
- Deployment: the AI Now Institute Algorithmic Accountability Policy Toolkit provides a list of questions that policy and legal advocates will want to ask when considering introducing an automated system into a public service, together with detailed advice on how and where in the procurement process to ask questions about accountability and potential harm: https://ainowinstitute.org/aap-toolkit.pdf
- Monitoring: Makri & Lambrinoudakis (2015) outline a structured privacy audit procedure based on the most widely adopted privacy principles: purpose specification; collection limitation; data quality; use, retention and disclosure limitation; safety safeguards; openness; individual participation; and accountability.

Figure 2: 'Applied AI Ethics' typology, comprising the ethical principles from Floridi and colleagues (2018) and the ICO's Auditing Framework for Artificial Intelligence, with an illustrative 'non-maleficent' example.
The second task was to conduct a thorough literature review (in Scopus [14] and arXiv [15]) and Internet
search using the terms outlined in Figure 3. The original search returned more than 800 results. A
review of the abstracts or website introductions was then conducted to refine this list (articles,
blogs, reports, websites, online resources and conference papers were checked for relevance,
actionability by ML developers and generalisability across industry sectors) to 253 sources that
provide a practical or theoretical contribution to answering the question: 'how to develop an
ethical algorithmic system?'
[14] Scopus is the largest abstract and citation database of peer-reviewed literature: scientific journals, books and conference proceedings: https://www.scopus.com/home.uri
[15] arXiv provides open access to 1,532,009 e-prints in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics: https://arxiv.org/
Search term(s) | Number of saved Scopus results | Date of search
Public Perception | 3 | 14/02/2019
Intellectual Property | 10 | 14/02/2019
Business Model | 3 | 14/02/2019
Evaluation | 7 | 14/02/2019
Data Sharing Agreement | 1 | 14/02/2019
GDPR | 33 | 13/02/2019
Impact Assessment | 6 | 13/02/2019
Counterfactuals | 59 | 13/02/2019
Privacy by design | 18 | 13/02/2019
Data minimisation | 18 | 13/02/2019
Bias | 5 | 13/02/2019
Harm | 18 | 13/02/2019
Responsible Technology | 231 | 12/02/2019
Regulation | 47 | 06/02/2019
Ethical | 120 | 05/02/2019

Figure 3: Full list of terms used for the literature search (number of saved results from Scopus listed). Each term was combined with AND (Machine Learning OR Artificial Intelligence OR AI).
The third, and final task, was to review the recommendations, theories, methodologies, and tools
outlined in the reviewed sources, and identify where they may fit in the typology. The ultimate
ambition is to ensure that there is a choice of mature tools in each box of the typology (this is not
the case currently), so that ML developers have aids to ethically-informed design at each stage of
the process. For example, as illustrated in figure 2, a developer looking to ensure their ML
algorithm is ‘non-maleficent’ (unlikely to cause harm related to privacy or security issues) can start
with the foundational principles of privacy by design (Cavoukian and colleagues, 2010) to guide
ideation appropriately, use techniques such as data minimisation (Antignac and colleagues, 2016),
training for adversarial robustness (Kolter & Madry, 2018), and decision-making verification
(Dennis and colleagues, 2016) in the train-build-test phases, and end by launching the system with
an accompanying privacy audit procedure (Makri & Lambrinoudakis, 2015).
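To make the first of these steps more concrete, the sketch below illustrates the general idea of data minimisation as a pre-processing step: only the fields needed for a declared purpose leave the data source. It is a minimal illustration of the principle under assumed, hypothetical field names and purposes; it is not the DataMin pre-processor of Antignac and colleagues (2016).

```python
# Minimal sketch of purpose-based data minimisation: before data leave the
# source, keep only the fields that the declared purpose actually needs.
# This illustrates the principle only; it is not the DataMin pre-processor of
# Antignac and colleagues (2016), and all field names and purposes are hypothetical.
from typing import Dict, List

# Hypothetical mapping from declared purpose to the fields it requires.
PURPOSE_ALLOWLIST: Dict[str, List[str]] = {
    "credit_scoring": ["income", "existing_debt", "repayment_history"],
    "service_improvement": ["session_length", "feature_used"],
}

def minimise(record: Dict[str, object], purpose: str) -> Dict[str, object]:
    """Return only the fields permitted for the declared purpose."""
    allowed = PURPOSE_ALLOWLIST.get(purpose)
    if allowed is None:
        raise ValueError(f"No data-minimisation policy defined for purpose: {purpose}")
    dropped = [k for k in record if k not in allowed]
    if dropped:
        print(f"Withholding fields not needed for '{purpose}': {dropped}")
    return {k: v for k, v in record.items() if k in allowed}

applicant = {"income": 32000, "existing_debt": 5400, "repayment_history": "good",
             "gender": "F", "postcode": "OX1"}
print(minimise(applicant, "credit_scoring"))  # gender and postcode never leave the source
```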
What is revealing about this example, and what makes it illustrative of one of the
overarching limitations of many of the tools included in the typology, is that all the techniques
referenced are currently in the academic research stage and, although promising, they require more
work before being ‘production-ready.’ This current lack of ‘market-testing’ means that applying
ethics still requires considerable amounts of effort on the part of the ML developers (even when
there are open-source code libraries available, documentation is often limited and the skill-level
required for use is high), undermining one of the main aims of developing and using
technologically-based ‘tools’: to remove friction from applied ethics.
The full typology is available here [ https://tinyurl.com/AppliedAIEthics ]. It is important
to note that the purpose of presenting it is not to imply that it is ‘complete’ nor that the tools and
methodologies highlighted are the best, or indeed the only, means of ‘solving’ each of the
individual ethical problems. It is more a proof of concept. How to apply ethics to the development
of ML is an open question that can be solved in a multitude of different ways at different scales
and in different contexts (Floridi, 2019a). Instead, the goal is to provide a brief snapshot of what
tools are currently available to ML developers to encourage the progression of ethical AI from
principles to practice and to signal clearly, to the ‘ethical AI’ community at large, where further
work is needed. It is with the intention of starting this conversation that an overview of the
findings is presented in the next section.
Figure 4: Heatmap of the ‘Applied AI Ethics’ Typology. Darker shades of blue represent more densely populated areas of
the typology and grey represents ‘blank space.’
4. Initial Findings
It is evident, by simply looking at the typology (see figure 4 above), that interest in the practice of
‘ethical ML’, and thus the availability of tools and methods, is not evenly distributed across the
ML pipeline. Currently, most attention for all the ethical principles is focused on interventions at
the early input stages (Binns, 2018b) (business and use-case development, design phase and
training and test data procurement) or at the model testing phases. For example, the review failed
to identify tools [16] or methods for ensuring value-alignment (beneficence) at the deployment stage,
and found very few tools or methods for promoting autonomy (the user’s ability to exercise their
rights (Floridi and colleagues, 2018)) during the middle building and testing phases. Several factors
may have influenced this skewed distribution of interest, but three are likely to have been more
influential:
1. at least in Europe, the introduction of the European General Data Protection
Regulation (GDPR) [17];
2. a focus on the need to ‘protect’ the individual over the collective; and
3. the lack of clarity around definitions of key terms.
They are interrelated, but for the sake of simplicity let us analyse each separately.
4.1 Legislative influence
In Europe, the introduction of the GDPR in May 2018, with its threat of large fines for
inappropriate collection or processing of data [18], and supposed right to an explanation (Wachter,
Mittelstadt, & Floridi, 2017b) has clearly had a significant impact on the ML research community,
concentrating attention on methods to ensure non-maleficence (privacy and security), autonomy
(consent), and post-hoc explanations. This is understandable but concerning.
The GDPR may, like all regulation, prove to have unintended consequences in terms of
incentivising some types of research over others and promoting minimum adherence over best
practice. For example, it has clearly encouraged a focus on privacy and explicability over the
promotion of autonomy in design choices and done very little to encourage competition to be the
most ethical system (Floridi, 2018). The risk is that being compliant may be seen to be
good enough. This is evident in the case of the ‘the right to an explanation’, where a focus on
achieving what can be termed a minimum-viable-explanation has encouraged the ML research
[16] The word "failed" is used because it is important to note that this does not mean that such tools do not exist, but that they did not come up in our search, are available only as proprietary solutions, or are currently only available in theory and not in practice.
[17] The GDPR can be viewed in full here: https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1532348683434&uri=CELEX:02016R0679-20160504
[18] A guide to the implications of the GDPR is provided by the ICO here: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/
community to focus on mechanisms that can inform users (Wachter, Mittelstadt, & Floridi, 2017b)
in a simplified manner of how the inputs are related to the outputs. This may be necessary but is
not sufficient, because such mechanisms—e.g. LIME (Ribeiro, Singh, & Guestrin, 2016), SHAP
(Lundberg & Lee, 2017), Sensitivity Analysis (Oxborough and colleagues, 2018)—do not really
succeed in helping developers provide meaningful (Edwards & Veale, 2018) explanations that give
individuals greater control over what is being inferred about them from their data.
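For readers unfamiliar with these post-hoc techniques, the sketch below shows roughly how such an input-output attribution can be produced with the open-source SHAP library (Lundberg & Lee, 2017); the dataset and model are hypothetical placeholders. As argued above, an attribution of this kind relates inputs to outputs but does not, by itself, give the individual any control over what is inferred about them.

```python
# Minimal sketch of a post-hoc, input-output attribution of the kind discussed
# above, using the open-source SHAP library. The data and model are placeholders;
# the exact output format varies slightly across shap versions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                      # hypothetical applicant features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)      # hypothetical credit decision

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])          # attribution for one individual
print("Per-feature contribution to this prediction:", shap_values)
```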
4.2 An individual focus
Few of the available tools surveyed provide meaningful ways to assess, and respond to, the impact
that the data-processing involved in an ML algorithm has on an individual, and even fewer address
the impact on society as a whole (Poursabzi-Sangdeh, Goldstein, Hofman, Vaughan, & Wallach,
2018). This is evident from the very sparsely populated ‘deployment’ column of the typology. Its
emptiness implies that the need for pro-ethically designed human-computer interaction (at an
individual level) or networks of ML systems (at a group level) has been paid little heed. This is
most likely because it is very difficult to translate complex human behaviour into simple to use,
generalisable design tools.
This might not seem particularly important, but the impact this has on the overall
acceptance of AI in society could be significant. For example, it is unlikely that counterfactual
explanations [19] (i.e. if input variable x had been different, the output variable y would have been
different as well) will do anything to improve the interpretability of recommendations made by
black-box systems for the average member of the public or the technical community. If such
methods become the de facto means of providing ‘explanations,’ the extent to which the ‘algorithmic
society’ is interpretable to the general public will be very limited. And counterfactual explanations
could easily be embraced by actors uninterested in providing factual explanations, because the
counterfactual ones provide a vast menu of options, which may easily decrease the level of
responsibility of the actor choosing it. For example, if a mortgage provider does not offer a
mortgage, the factual reason may be a bias, for example the gender of the applicant, but the
provider could choose from a vast menu of innocuous, counterfactual explanations—if some
variable x had been different the mortgage might have been provided—e.g., a much higher
income, more collateral, a lower loan amount, and so forth, without ever mentioning the gender of the
applicant. All this could considerably limit the level of trust people are willing to place in such
systems.
[19] See for example (Wachter, Mittelstadt, & Russell, 2017) (Johansson, Shalit, & Sontag, 2016) (Lakkaraju, Kleinberg, Leskovec, Ludwig, &
Mullainathan, 2017) (Russell, Kusner, Loftus, & Silva, 2017).
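A deliberately naive sketch of the kind of counterfactual explanation at issue is given below: it perturbs one feature at a time until the model's decision flips. It is not the optimisation-based method of Wachter, Mittelstadt and Russell (2017), and the data and feature names are invented; it simply illustrates the 'vast menu' problem described above, since several different single-feature changes can each 'explain' the same refusal.

```python
# Deliberately naive sketch of single-feature counterfactual explanations:
# nudge one feature at a time until the decision flips. This is not the
# optimisation-based method of Wachter, Mittelstadt & Russell (2017); all data
# and feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income", "collateral", "loan_amount", "gender_flag"]  # hypothetical
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 4))
y = (0.8 * X[:, 0] + 0.6 * X[:, 3] > 0).astype(int)   # decision partly driven by gender_flag
model = LogisticRegression().fit(X, y)

def single_feature_counterfactuals(x, step=0.25, max_steps=40):
    """For each feature, find the smallest tried perturbation that flips the decision."""
    base = model.predict([x])[0]
    found = {}
    for i, name in enumerate(FEATURES):
        for k in range(1, max_steps + 1):          # increasing magnitude
            for direction in (+1, -1):
                x_cf = x.copy()
                x_cf[i] += direction * step * k
                if model.predict([x_cf])[0] != base:
                    found[name] = direction * step * k
                    break
            if name in found:
                break
    return found

applicant = X[0]
print("Decision:", model.predict([applicant])[0])
print("Single-feature changes that would flip it:", single_feature_counterfactuals(applicant))
```

Because several features can each flip the decision on their own, an actor uninterested in factual explanations could report only the innocuous ones.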
This potential threat to trust is further heightened by the fact that the lack of attention
paid to impact means that ML developers are currently hampered in their ability to develop systems
that promote users' (individual or group) autonomy. For example, currently there is an
assumption that prediction = decision, and little research has been done (in the context of ML) on
how people translate predictions into actionable decisions. As such, tools that, for example, help
developers pro-ethically design solutions that do not overly restrict the user’s options in acting on
this prediction (i.e. tools that promote the user’s autonomy) are in short supply (Kleinberg,
Lakkaraju, Leskovec, Ludwig, & Mullainathan, 2017). If users feel as though their decisions are
being too curtailed and controlled by systems that they do not understand, it is very unlikely that
these systems will meet the condition of social acceptability, never mind the condition of social
preferability, which should be the aim for truly ethically designed ML (Floridi & Taddeo,
2016).
4.3 A lack of consistency
Producing tools to fill in the white space on the typology is likely to be challenging. There is a
distinct lack of agreement on what the aims of such tools would be. Key terms such as ‘fairness’
(Friedler, Scheidegger, & Venkatasubramanian, 2016) (Kleinberg, Mullainathan, & Raghavan,
2016) (Overdorf, Kulynych, Balsa, Troncoso, & Gürses, 2018), 'accountability', 'transparency'
(Ananny & Crawford, 2018) (Turilli & Floridi, 2009), and 'interpretability' (Doshi-Velez & Kim,
2017) (Guidotti and colleagues, 2018) (Bibal & Frénay, 2016) have myriad definitions, and
sometimes (e.g. in the case of ‘fairness’) many statistical implementations that are not compatible
and require informed decisions about trade-offs. Indeed, in some instances, definitions of the same
concept directly contradict each other (Friedler and colleagues, 2016), and recently there has even
been debate as to whether black boxes are as problematic as popular opinion makes them out to
be (Holm, 2019). This makes it almost impossible to measure the impact, ‘define success’, and
document the performance (Mitchell and colleagues, 2019) of a new design methodology or tool.
Without a clear business case, that is, in the absence of a clear problem statement and a clear
outcome, it is hard for the ML community to justify time and financial investment in developing
these much-needed tools and techniques.
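The incompatibility of statistical readings of these terms can be seen even in a toy calculation. The sketch below computes two common interpretations of 'fairness', demographic parity (equal selection rates) and equal opportunity (equal true-positive rates), for the same set of hypothetical predictions; all numbers are invented purely for illustration.

```python
# Toy illustration of why 'fairness' has incompatible statistical readings:
# the same predictions can satisfy one metric while violating another.
# All numbers are invented purely for illustration.
import numpy as np

# Hypothetical outcomes: group A and group B, true labels y and predictions y_hat.
group = np.array(["A"] * 10 + ["B"] * 10)
y     = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0,   1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
y_hat = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0,   1, 1, 1, 1, 0, 0, 0, 0, 0, 0])

def selection_rate(g):
    """How often group g receives a positive decision."""
    return y_hat[group == g].mean()

def true_positive_rate(g):
    """How often genuinely positive members of group g are predicted positive."""
    mask = (group == g) & (y == 1)
    return y_hat[mask].mean()

print("Demographic parity difference:",
      abs(selection_rate("A") - selection_rate("B")))
print("Equal-opportunity (TPR) difference:",
      abs(true_positive_rate("A") - true_positive_rate("B")))
```

In this invented example the selection rates match while the true-positive rates do not; deciding which reading matters, and for whom, is exactly the kind of informed trade-off the text describes.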
This issue stems from the fact that the entire field is imbued with subjectivity (Bibal &
Frénay, 2016) and the relative success of ethical-alignment—from ideation through to operation
at scale—is also context-dependent. Yet there seem to be few tools available to help ML
developers deal with this subjectivity and associated complexity. For example, at the "beneficence
→ use-case → design" intersection, there are a number of tools highlighted to help elicit social
values. These include the responsible research and innovation methodology employed by the
European Commission's Human Brain Project (Stahl & Wright, 2018), the field guide to human-
centred design [20], and Involve and DeepMind's guidance on stimulating effective public
engagement on the ethics of artificial intelligence [21]. However, although they are useful, such
methods offer limited guidance on how to deal with value pluralism (i.e. variation in values across
different population groups). Without this guidance, the values that are embedded and protected
by design tools are likely to be perceived as imposed and paternalistic, and restricted to those of
the groups in society that have the loudest voices. In other words, there has been little attention
paid to the potential for ‘value bias’ to develop at scale.
5. A way forward
Social scientists (Matzner, 2014) and political philosophers (from Rousseau and Kant, to Rawls
and Habermas) (Binns, 2018a), are used to dealing with the kind of plurality outlined in section 4,
and to thinking about the interaction between individual level and group level ‘ethics.’ This is why
Nissenbaum argues for a contextual account of privacy, one that recognises the varying nature of
informational norms (Matzner, 2014), and why Kemper & Kolkman (2018) state that transparency is
only meaningful in the context of a defined critical audience. However, the ML developer
community may be less used to dealing with this kind of complexity, and more used to scenarios
where there is at least a seemingly quantifiable relationship between input and output. As a result,
the existing approaches to designing and programming ethical ML fail to resolve what Arvan, 2018
terms the moral-semantic trilemma, as almost all tools and methods highlighted in the typology
are either too semantically strict, too semantically flexible, or overly unpredictable (Arvan, 2018).
Overcoming this nervousness of social complexity, embracing uncertainty, and accepting
that: (1) AI is built on assumptions; (2) human behaviour is complex; (3) algorithms can have
unfair consequences; (4) algorithmic predictions can be hard to interpret (Vaughan & Wallach,
2016); (5) trade-offs may be inevitable; and (6) positive, ethical features are open to progressive
increase but are not bounded between 0 and 1 (e.g., an algorithm can be increasingly fair, and fairer
than another algorithm or a previous version, but it makes no sense to say that it is 100% fair in
absolute terms; compare this to the case of speed: it makes sense to say that an object is moving
quickly, or that it is fast or faster than another, but not that it is 100% fast), is likely to be highly
beneficial for the development of applied ethical tools and methodologies for at least three reasons.
[20] http://www.designkit.org/resources/1
[21] https://bit.ly/2HKNtPh
First, embracing uncertainty will naturally encourage ML developers to ask more probing
and open (i.e., philosophical) questions (Floridi, 2019b) that will lead to more nuanced
and reasoned answers and hence decisions about why and when certain trade-offs, for example,
between accuracy and interpretability (Goodman & Flaxman, 2017), are justified, based on factors
such as proportionality to risk (Holm, 2019). Second, it will encourage a more flexible and reflexive
approach to applied ethics that is more in keeping with the way ML systems are actually developed:
it is not think then code, but rather think and code. In other words, it will encourage a move away
from the ‘move fast and break things’ approach towards an approach of ‘make haste slowly’ (festina
lente) (Floridi, 2019a). Finally, it would also mitigate a significant risk posed by the current
sporadic application of ethical-design tools and/or methods during different development stages,
of the ethical principles having been written into the business and use-case, but coded out by the
time a system gets to deployment.
To enable developers to embrace this valuable uncertainty, it will be important to
promote the development of tools, like DotEveryone's agile consequence scanning event [22], that
prompt developers to reflect on the impacts (both direct and indirect) of the solutions they are
developing on the ‘end user’, and on how these impacts can be altered by seemingly minor
design decisions at each stage of development. In other words, ML developers should regularly
a. look back and ask: 'if I was abiding by ethical principles x in my design then, am I still
now?' (as encouraged by Wellcome Data Lab's agile methodology (Mikhailov, 2019)); and
b. look forward and ask: 'if I am abiding by ethical principles x in my design now, should I
continue to do so? And how?', by using foresight methodologies (Taddeo & Floridi, 2018)
(Floridi & Strait, forthcoming), such as AI Now's Algorithmic Impact Assessment
Framework (Reisman, Schultz, Crawford, & Whittaker, 2018).
Taking this approach recognises that, in a digital context, ethical principles are not simply either
applied or not, but they are regularly re-applied or applied differently, or better, or ignored as
algorithmic systems are developed, deployed, configured (Ananny & Crawford, 2018), tested,
revised and re-tuned (Arnold & Scheutz, 2018).
Although clearly beneficial, this approach to applied ML ethics of regular reflection and
application will not be possible unless (i) the skewed distribution of tools in the typology is
balanced out, and (ii) the maturity of tools is accelerated, so that they move from research labs into production
environments. To achieve (i)-(ii), society needs to come together in communities comprised of
multi-disciplinary researchers (Cath, Wachter, Mittelstadt, Taddeo, & Floridi, 2017), innovators,
[22] Full details of DotEveryone's Consequence Scanning Event: https://doteveryone.org.uk/project/consequence-scanning/
policymakers, citizens, developers and designers (Taddeo & Floridi, 2018) to foster the
development of: (1) common knowledge and understanding; and (2) a common goal to be
achieved from the development of tools and methodologies for applied AI ethics (Durante, 2010).
These outputs will provide a reason, a mechanism, and a consensus to coordinate the efforts
behind tool development. Ultimately, this will produce better results than the current approach,
which allows a 'thousand flowers to bloom' but fails to create tools that fill in the gaps (this is a
typical ‘intellectual market’ failure), and may encourage competition to produce preferable options.
The opportunity that this presents is too great for us to wait: the ML research community should
start collaborating now with a specific focus on:
1. creation of tools that ensure that people, as individuals, groups and societies, are given an
equal and meaningful opportunity to participate in the design of algorithmic solutions
at each stage of development;
2. evaluation of the tools that are currently in existence so that what works, what can be
improved, and what needs to be developed can be identified;
3. commitment to reproducibility, openness, and sharing of knowledge and technical
solutions (e.g., software), also in view of satisfying (1) and supporting (2);
4. evaluation and creation of pro-ethical business models and incentive structures that
balance the costs and rewards of investing in ethical AI across society, also in view of
supporting (1)-(3).
6. Conclusion
The realisation that there is a need to embed ethical considerations into the design of
computational, specifically algorithmic, artefacts is not new. Both Alan Turing and Norbert Wiener
were vocal about this in the 1940s and 1960s (Turilli, 2008). However, as the complexity of
algorithmic systems and our reliance on them increases (Cath and colleagues, 2017), so too does
the need to be critical of pro-ethical (Floridi, 2016) AI governance (Cath, 2018) and design
solutions. It is possible to design things to be better (Floridi, 2017), but this will require more
coordinated and sophisticated approaches (Allen, Varner, & Zinser, 2000) to translating ethical
principles into design protocols (Turilli, 2007).
This call for increased coordination is necessary as this research has shown that there is a
skewed distribution of effort across the ‘Applied AI Ethics’ typology. Furthermore, many of the
tools included are relatively immature. This makes it difficult to assess the scope of their use
(resulting in Arvan’s 2018 ‘moral-semantic trilemma’) and consequently hard to encourage their
adoption by the practically-minded ML developers, especially when the competitive advantage of
more ethically-aligned AI is not yet clear.
Constructive patience needs to be exercised, by society and by the ethical AI community,
because the question of ‘how’ to meet the ‘what’ will not be solved overnight, and there will
definitely be mistakes along the way. The ML research community will have to accept this and trust
that everyone is trying to reach the same end-goal, but also accept that it is unacceptable to delay
full commitment when it is known how serious the consequences of doing nothing are. Only
by accepting this can society ensure that the opportunities presented by AI are seized,
whilst remaining mindful of the potential costs to be avoided (Floridi and colleagues, 2018).
Bibliography
Adamson, G., Havens, J. C., & Chatila, R. (2019). Designing a Value-Driven Future for Ethical Autonomous and Intelligent Systems. Proceedings of the IEEE, 107(3), 518-525. https://doi.org/10.1109/JPROC.2018.2884923
AI Now Institute Algorithmic Accountability Policy Toolkit. (n.d.). Retrieved from https://ainowinstitute.org/aap-toolkit.pdf
Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251-261. https://doi.org/10.1080/09528130050111428
Alshammari, M., & Simpson, A. (2017). Towards a Principled Approach for Engineering Privacy by Design. In E. Schweighofer, H. Leitold, A. Mitrakas, & K. Rannenberg (Eds.), Privacy Technologies and Policy (Vol. 10518, pp. 161-177). https://doi.org/10.1007/978-3-319-67280-9_9
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973-989. https://doi.org/10.1177/1461444816676645
Anderson, M., & Anderson, S. L. (2018). GenEth: a general ethical dilemma analyzer. Paladyn, Journal of Behavioral Robotics, 9(1), 337-357. https://doi.org/10.1515/pjbr-2018-0024
Antignac, T., Sands, D., & Schneider, G. (2016). Data Minimisation: a Language-Based Approach (Long Version).
ArXiv:1611.05642 [Cs]. Retrieved from http://arxiv.org/abs/1611.05642
Arnold, T., & Scheutz, M. (2018). The "big red button" is too late: an alternative model for the ethical evaluation of AI systems. Ethics and Information Technology, 20(1), 59-69. https://doi.org/10.1007/s10676-018-9447-7
Arvan, M. (2018). Mental time-travel, semantic flexibility, and A.I. ethics. AI & SOCIETY.
https://doi.org/10.1007/s00146-018-0848-2
Bibal, A., & Frénay, B. (2016). Interpretability of Machine Learning Models and Representations: an Introduction.
Binns, R. (2018a). Algorithmic Accountability and Public Reason. Philosophy & Technology, 31(4), 543-556. https://doi.org/10.1007/s13347-017-0263-5
Binns, R. (2018b). What Can Political Philosophy Teach Us about Algorithmic Fairness? IEEE Security & Privacy, 16(3), 73-80. https://doi.org/10.1109/MSP.2018.2701147
Cath, C. (2018). Governing artificial intelligence: ethical, legal and technical opportunities and challenges.
Philosophical Transactions of the Royal Society A: Mathematical, Physical and
Engineering Sciences, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2017). Artificial Intelligence and the ‘Good Society’:
the US, EU, and UK approach. Science and Engineering Ethics. https://doi.org/10.1007/s11948-017-9901-7
Cath, C., Zimmer, M., Lomborg, S., & Zevenbergen, B. (2018). Association of Internet Researchers (AoIR) Roundtable Summary: Artificial Intelligence and the Good Society Workshop Proceedings. Philosophy & Technology, 31(1), 155-162. https://doi.org/10.1007/s13347-018-0304-8
Cavoukian, A., Taylor, S., & Abrams, M. E. (2010). Privacy by Design: essential for organizational accountability and strong business practices. Identity in the Information Society, 3(2), 405-413. https://doi.org/10.1007/s12394-010-0053-z
Cookson, C. (2018, June 9). Artificial intelligence faces public backlash, warns scientist. Financial Times. Retrieved
from https://www.ft.com/content/0b301152-b0f8-11e8-99ca-68cf89602132
Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538(7625), 311-313.
https://doi.org/10.1038/538311a
D'Agostino, M., & Durante, M. (2018). Introduction: the Governance of Algorithms. Philosophy & Technology, 31(4), 499-505. https://doi.org/10.1007/s13347-018-0337-z
Dennis, L. A., Fisher, M., Lincoln, N. K., Lisitsa, A., & Veres, S. M. (2016). Practical verification of decision-making in agent-based autonomous systems. Automated Software Engineering, 23(3), 305-359. https://doi.org/10.1007/s10515-014-0168-9
Dignum, V. (2017). Responsible Autonomy. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 4698-4704. https://doi.org/10.24963/ijcai.2017/655
Doshi-Velez, F., & Kim, B. (2017). Towards A Rigorous Science of Interpretable Machine Learning.
ArXiv:1702.08608 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1702.08608
Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1),
eaao5580. https://doi.org/10.1126/sciadv.aao5580
Durante, M. (2010). What Is the Model of Trust for Multi-agent Systems? Whether or Not E-Trust Applies to Autonomous Agents. Knowledge, Technology & Policy, 23(3-4), 347-366. https://doi.org/10.1007/s12130-010-9118-4
Edwards, L., & Veale, M. (2018). Enslaving the Algorithm: From a "Right to an Explanation" to a "Right to Better Decisions"? IEEE Security & Privacy, 16(3), 46-54. https://doi.org/10.1109/MSP.2018.2701152
Floridi, L., & Strait, A. (n.d.). Ethical foresight analysis: what it is and why it is needed.
Floridi, L. (2016). Tolerant Paternalism: Pro-ethical Design as a Resolution of the Dilemma of Toleration. Science and Engineering Ethics, 22(6), 1669-1688. https://doi.org/10.1007/s11948-015-9733-2
Floridi, L. (2017). The Logic of Design as a Conceptual Logic of Information. Minds and Machines, 27(3), 495-519. https://doi.org/10.1007/s11023-017-9438-1
Floridi, L. (2018). Soft ethics, the governance of the digital and the General Data Protection Regulation. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0081
Floridi, L. (2019a). Establishing the rules for building trustworthy AI. Nature Machine Intelligence. https://doi.org/10.1038/s42256-019-0055-y
Floridi, L. (2019b). The logic of information: a theory of philosophy as conceptual design (1st edition). New York, NY: Oxford University Press.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People - An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689-707. https://doi.org/10.1007/s11023-018-9482-5
Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160360. https://doi.org/10.1098/rsta.2016.0360
Friedler, S. A., Scheidegger, C., & Venkatasubramanian, S. (2016). On the (im)possibility of fairness.
ArXiv:1609.07236 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1609.07236
Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to
explanation.” AI Magazine, 38(3), 50. https://doi.org/10.1609/aimag.v38i3.2741
Green, B. P. (2018). Ethical Reflections on Artificial Intelligence. Scientia et Fides, 6(2), 9.
https://doi.org/10.12775/SetF.2018.015
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys, 51(5), 1-42. https://doi.org/10.1145/3236009
Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Science and Engineering Ethics, 21(3), 619-630. https://doi.org/10.1007/s11948-014-9565-5
Holm, E. A. (2019). In defense of the black box. Science, 364(6435), 26-27. https://doi.org/10.1126/science.aax0162
Holzinger, A. (2018). From Machine Learning to Explainable AI. 2018 World Symposium on Digital Intelligence for Systems and Machines (DISA), 55-66. https://doi.org/10.1109/DISA.2018.8490530
Johansson, F. D., Shalit, U., & Sontag, D. (2016). Learning Representations for Counterfactual Inference.
ArXiv:1605.03661 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1605.03661
Kemper, J., & Kolkman, D. (2018). Transparent to whom? No algorithmic accountability without a critical audience. Information, Communication & Society, 1-16. https://doi.org/10.1080/1369118X.2018.1477967
Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017). Human Decisions and Machine
Predictions*. The Quarterly Journal of Economics. https://doi.org/10.1093/qje/qjx032
Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent Trade-Offs in the Fair Determination of Risk
Scores. ArXiv:1609.05807 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1609.05807
Knoppers, B. M., & Thorogood, A. M. (2017). Ethics and big data in health. Current Opinion in Systems Biology, 4, 53-57. https://doi.org/10.1016/j.coisb.2017.07.001
Kolter, Z., & Madry, A. (n.d.). Materials for tutorial Adversarial Robustness: Theory and Practice. Retrieved from
https://adversarial-ml-tutorial.org/
Kroll, J. A. (2018). The fallacy of inscrutability. Philosophical Transactions of the Royal Society A: Mathematical,
Physical and Engineering Sciences, 376(2133), 20180084.
https://doi.org/10.1098/rsta.2018.0084
Lakkaraju, H., Kleinberg, J., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017). The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '17, 275-284. https://doi.org/10.1145/3097983.3098066
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges. Philosophy & Technology, 31(4), 611-627. https://doi.org/10.1007/s13347-017-0279-x
Lessig, L. (2006). Code (Version 2.0). New York: Basic Books.
Lipton, Z. C. (2016). The Mythos of Model Interpretability. ArXiv:1606.03490 [Cs, Stat]. Retrieved from
http://arxiv.org/abs/1606.03490
Lundberg, S., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions. ArXiv:1705.07874 [Cs,
Stat]. Retrieved from http://arxiv.org/abs/1705.07874
Makri, E.-L., & Lambrinoudakis, C. (2015). Privacy Principles: Towards a Common Privacy Audit Methodology. In S. Fischer-Hübner, C. Lambrinoudakis, & J. López (Eds.), Trust, Privacy and Security in Digital Business (Vol. 9264, pp. 219-234). https://doi.org/10.1007/978-3-319-22906-5_17
Matzner, T. (2014). Why privacy is not enough - privacy in the context of "ubiquitous computing" and "big data." Journal of Information, Communication and Ethics in Society, 12(2), 93-106. https://doi.org/10.1108/JICES-08-2013-0030
Mikhailov, D. (2019). A new method for ethical data science. Retrieved from Medium website:
https://medium.com/wellcome-data-labs/a-new-method-for-ethical-data-science-edb59e400ae9
Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., … Gebru, T. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* '19, 220-229. https://doi.org/10.1145/3287560.3287596
Mittelstadt, B. (2019). The Ethics of Biomedical 'Big Data' Analytics. Philosophy & Technology, 32(1), 17-21. https://doi.org/10.1007/s13347-019-00344-z
Oetzel, M. C., & Spiekermann, S. (2014). A systematic methodology for privacy impact assessments: a design science approach. European Journal of Information Systems, 23(2), 126-150. https://doi.org/10.1057/ejis.2013.18
Overdorf, R., Kulynych, B., Balsa, E., Troncoso, C., & Gürses, S. (2018). Questioning the assumptions behind
fairness solutions. ArXiv:1811.11293 [Cs]. Retrieved from http://arxiv.org/abs/1811.11293
Oxborough, C., Cameron, E., Rao, A., Birchall, A., Townsend, A., & Westermann, C. (n.d.). Explainable AI: Driving
Business Value through Greater Understanding. Retrieved from PWC website: https://www.pwc.co.uk/audit-
assurance/assets/explainable-ai.pdf
Polykalas, S. E., & Prezerakos, G. N. (2019). When the mobile app is free, the product is your personal data. Digital Policy, Regulation and Governance, 21(2), 89-101. https://doi.org/10.1108/DPRG-11-2018-0068
Poursabzi-Sangdeh, F., Goldstein, D. G., Hofman, J. M., Vaughan, J. W., & Wallach, H. (2018). Manipulating and
Measuring Model Interpretability. ArXiv:1802.07810 [Cs]. Retrieved from
http://arxiv.org/abs/1802.07810
Reisman, D., Schultz, J., Crawford, K., & Whittaker, M. (2018). Algorithmic Impact Assessments: A Practical Framework
for Public Agency Accountability. Retrieved from AINow website:
https://ainowinstitute.org/aiareport2018.pdf
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any
Classifier. ArXiv:1602.04938 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1602.04938
Russell, C., Kusner, M. J., Loftus, J., & Silva, R. (2017). When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 30 (pp. 6414-6423). Retrieved from http://papers.nips.cc/paper/7220-when-worlds-collide-integrating-different-counterfactual-assumptions-in-fairness.pdf
Samuel, A. L. (1960). Some Moral and Technical Consequences of Automation--A Refutation. Science, 132(3429), 741-742. https://doi.org/10.1126/science.132.3429.741
Selbst, A. D. (2017). Disparate Impact in Big Data Policing. Georgia Law Review, 52(1), 109-196. Retrieved from https://heinonline.org/HOL/P?h=hein.journals/geolr52&i=121
Stahl, B. C., & Wright, D. (2018). Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation. IEEE Security & Privacy, 16(3), 26-33. https://doi.org/10.1109/MSP.2018.2701164
Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751-752. https://doi.org/10.1126/science.aat5991
Turilli, M. (2007). Ethical protocols design. Ethics and Information Technology, 9(1), 49-62. https://doi.org/10.1007/s10676-006-9128-9
Turilli, M. (2008). Ethics and the practice of software design. In A. Briggle, P. Brey, & K. Waelbers (Eds.), Current
issues in computing and philosophy. Amsterdam: IOS Press.
Turilli, M., & Floridi, L. (2009). The ethics of information transparency. Ethics and Information Technology, 11(2), 105-112. https://doi.org/10.1007/s10676-009-9187-9
Vaughan, J., & Wallach, H. (2016). The inescapability of Uncertainty: AI, Uncertainty, and Why You Should Vote
No Matter What Predictions Say. Retrieved July 4, 2019, from Points. Data Society website:
https://points.datasociety.net/uncertainty-edd5caf8981b
Wachter, S., Mittelstadt, B., & Floridi, L. (2017a). Transparent, explainable, and accountable AI for robotics. Science
Robotics, 2(6), eaan6080. https://doi.org/10.1126/scirobotics.aan6080
Wachter, S., Mittelstadt, B., & Floridi, L. (2017b). Why a Right to Explanation of Automated Decision-Making Does
Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76-99.
https://doi.org/10.1093/idpl/ipx005
Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual Explanations without Opening the Black Box:
Automated Decisions and the GDPR. ArXiv:1711.00399 [Cs]. Retrieved from
http://arxiv.org/abs/1711.00399
Wiener, N. (1960). Some Moral and Technical Consequences of Automation. Science, 131(3410), 1355–1358.
https://doi.org/10.1126/science.131.3410.1355