Pre-print of article published 15 February 2019 in Nature Electronics:
https://doi.org/10.1038/s41928-019-0213-6
Ethical standards in robotics and AI
A new generation of ethical standards in robotics and artificial intelligence is emerging as a
direct response to a growing awareness of the ethical, legal and societal impact of the fields.
But what exactly are these ethical standards and how do they differ from conventional
standards?
Alan Winfield
Standards are a vital part of the infrastructure of the modern world: invisible, but no
less important than roads, airports and telephone networks. It is hard to think of any
aspect of everyday life untouched by standards. The International Organization for
Standardization (ISO), just one of several standards bodies, lists a total of 22,482
published standards. Take the simple act of brushing your teeth in the morning: there
are standards for your toothbrush (both manual ISO 20126 and electric ISO 20127),
your toothpaste and its packaging (ISO 11609), and the quality of your tap water
(ISO 5667-5). Although it might seem odd to wax lyrical on standards, they do
represent a truly remarkable body of work drafted by countless expert volunteers
with an extraordinary impact on individual and societal health and safety.
All standards embody a principle and often it is an ethical principle or value. Safety
standards are founded on the general principle that products and systems should do
no harm, that is, that they should be safe; ISO 13482, for instance, sets out safety
requirements for personal care robots. Quality management standards, such as ISO
9001, describe how things should be done, and can be thought of as expressing the
principle that shared best practice leads to improved quality. And technical
standards, like IEEE 802.11 (better known as Wi-Fi), can be thought of as embodying
the benefits of interoperability. Even the basic idea of standards as codifying shared
ways of doing things can be thought of as expressing the values of cooperation and
harmonisation. All standards can therefore be thought of as implicit ethical standards.
We can define an explicit ethical standard as one that addresses clearly articulated
ethical concerns and seeks, through its application, to at best remove, hopefully
reduce, or at the very least highlight the potential for unethical impacts or their
consequences.
What are the ethical principles which underpin these new ethical standards? An
informal survey1 in December 2017 listed a total of ten different sets of ethical
principles for robotics and AI. The earliest (1950) are Asimov’s laws of robotics:
important because they established the principle that robots should be governed by
principles. Very recently we have seen a proliferation of principles; of the ten sets
surveyed, seven were published in 2017.
Perhaps not surprisingly these ethical principles have much in common. In summary:
robots and artificial intelligences (AIs) should do no harm, while being free of bias
and deception; respect human rights and freedoms, including dignity and privacy,
while promoting well-being; and be transparent and dependable while ensuring that
the locus of responsibility and accountability remains with their human designers or
operators. Just as interesting is the increasing frequency of their publication: clear
evidence for a growing awareness of the urgent need for ethical principles for
robotics and AI. But, while an important and necessary foundation, principles are not
practice. Ethical standards are the next important step toward ethical governance in
robotics and AI2.
Ethical risk assessment
Almost certainly the world’s first explicit ethical standard in robotics is BS 8611 Guide
to the Ethical Design and Application of Robots and Robotic Systems3, which was
published in April 2016. Incorporating the EPSRC Principles of Robotics4, BS 8611 is
not a code of practice, but instead guidance on how designers can undertake an
ethical risk assessment of their robot or system, and mitigate any ethical risks so
identified. At its heart is a set of 20 distinct ethical hazards and risks, grouped under
four categories: societal, application, commercial & financial, and environmental.
Advice on measures to mitigate the impact of each risk is given, along with
suggestions on how such measures might be verified or validated. The societal
hazards include, for example, loss of trust, deception, infringements of privacy and
confidentiality, addiction, and loss of employment. The idea of ethical risk
assessment is of course not new (it is essentially what research ethics committees
do), but a method for assessing robots for ethical risks is a powerful new addition to
the ethical roboticist's toolkit.
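To make the method concrete, a BS 8611-style ethical risk assessment can be pictured as a hazard register that a design team fills in and reviews. The sketch below is purely illustrative and not drawn from the standard itself: the four categories and example hazards come from the text above, while the field names, the likelihood-times-severity score and the example entries are assumptions.

```python
from dataclasses import dataclass

@dataclass
class EthicalHazard:
    """One entry in a BS 8611-style ethical hazard register (illustrative only)."""
    name: str          # e.g. "deception"
    category: str      # "societal", "application", "commercial & financial", "environmental"
    likelihood: int    # hypothetical scale: 1 (rare) .. 5 (almost certain)
    severity: int      # hypothetical scale: 1 (negligible) .. 5 (severe)
    mitigation: str    # proposed measure to reduce the risk
    validation: str    # how the mitigation might be verified or validated

    @property
    def risk(self) -> int:
        """A conventional likelihood x severity score; BS 8611 does not prescribe this formula."""
        return self.likelihood * self.severity

# A fragment of a register for a hypothetical assisted living robot
register = [
    EthicalHazard("deception", "societal", 3, 4,
                  "robot states clearly that it is a machine",
                  "user trials confirm users are not misled"),
    EthicalHazard("infringement of privacy", "societal", 4, 4,
                  "camera data processed on-board and never stored",
                  "design review and penetration test of data flows"),
]

# Review the register with the highest-risk hazards first
for h in sorted(register, key=lambda h: h.risk, reverse=True):
    print(f"{h.name:<25} {h.category:<10} risk={h.risk:2d}  mitigation: {h.mitigation}")
```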
In April 2016, the IEEE Standards Association launched a global initiative on the
Ethics of Autonomous and Intelligent Systems5. The significance of this initiative
cannot be overstated; coming from a professional body with the standing and reach
of the IEEE Standards Association, it marks a watershed in the emergence of ethical
standards. And it is a radical step. As I've argued above, all standards are, even if
not explicitly, based on ethical principles. But for a respected standards body to
launch an initiative which explicitly aims to address the deep ethical challenges that
face the whole of autonomous and intelligent systems (from driverless car autopilots
to medical diagnosis AIs, drones to deep learning, and care robots to chatbots) is
both ambitious and unprecedented.
Humanity first
The IEEE initiative positions human well-being as its central tenet6. This is a bold and
political stance since it explicitly seeks to reposition robotics and AI as technologies
for improving the human condition rather than simply vehicles for economic growth.
The initiative’s mission is “to ensure every stakeholder involved in the design and
development of autonomous and intelligent systems is educated, trained, and
empowered to prioritize ethical considerations so that these technologies are
advanced for the benefit of humanity”.
The first major output from the IEEE Standards Association’s global ethics initiative is
a discussion document called Ethically Aligned Design (EAD)7, developed through an
iterative process which invited public feedback. The published second edition of EAD
sets out more than 100 ethical issues and recommendations, and a third edition will
be launched early in 2019. The work of more than 1000 volunteers across thirteen
committees, EAD covers: general (ethical) principles; how to embed values into
autonomous intelligent systems; methods to guide ethical design; safety and
beneficence of artificial general intelligence and artificial superintelligence; personal
data and individual access control; reframing autonomous weapons systems;
economics and humanitarian issues; law; affective computing; classical ethics in AI;
policy; mixed reality; and well-being.
Each EAD committee was additionally tasked with identifying, recommending and
promoting new candidate standards, and to date a total of 14 new IEEE
standards working groups have started work on drafting so-called human standards
(Box 1).
Box 1: IEEE P7000 series human standards in development
P7000 Model Process for Addressing Ethical Concerns During System Design
P7001 Transparency of Autonomous Systems
P7002 Data Privacy Process
P7003 Algorithmic Bias Considerations
P7004 Standard for Child and Student Data Governance
P7005 Standard for Transparent Employer Data Governance
P7006 Standard for Personal Data Artificial Intelligence (AI) Agent
P7007 Ontological Standard for Ethically Driven Robotics and Automation Systems
P7008 Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
P7009 Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems
P7010 Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems
P7011 Standard for the Process of Identifying and Rating the Trustworthiness of News Sources
P7012 Standard for Machine Readable Personal Privacy Terms
P7013 Inclusion and Application Standards for Automated Facial Analysis Technology
The importance of transparency and explainability
Consider P7001 as a case study. One of the general principles8 of EAD asks "how
can we ensure that autonomous and intelligent systems are transparent?" and
recommends a new standard for transparency. P7001 Transparency of Autonomous
Systems was initiated as a direct response. IEEE P7001 directly addresses the
straightforward ethical principle that it should always be possible to find out why an
autonomous system made a particular decision.
A robot or AI is transparent if it is possible to find out why it behaves in a certain way.
We might for instance want to discover why it made a particular decision, especially if
that decision caused an accident or for the less serious reason that the robot or
AI’s behaviour is puzzling. Transparency is not intrinsic to robots and AIs, but must
be designed for, and it is a property which autonomous systems might have more or
less of. And full transparency might be very challenging to provide, for instance in
systems based on artificial neural networks (deep learning systems), or systems that
are continually learning.
There are two reasons transparency is so important.
First, because modern robots and AIs are designed to work with or alongside
humans, who need to be able to understand what they are doing and why. If we take
an assisted living robot as an example, transparency (or to be precise, explainability)
means the user can understand what the robot might do in different circumstances.
An elderly person might be very unsure about robots, so it is important that her robot
is helpful, predictable (never does anything that frightens her) and, above all, safe. It
should be easy for her to learn what the robot does and why, in different
circumstances. An explainer system that allows her to ask the robot "why did you just
do that?" and receive a simple natural language explanation would be very helpful in
providing this kind of transparency. A higher level of transparency would be the
ability to ask questions like "what would you do if I fell down?" or "what would you do
if I forget to take my medicine?" This allows her to build a mental model of how the
robot will behave in different situations.
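As a toy illustration of such an explainer (not a description of any real assisted living robot, nor of P7001 itself), the robot's controller can log each decision together with the rule that produced it, and then answer both "why did you just do that?" and hypothetical "what would you do if...?" questions from that log and rule set. All rule names and phrasings below are invented.

```python
# Minimal rule-based explainer sketch: each rule pairs a situation with an action
# and a canned natural-language reason. Purely illustrative.
rules = {
    "user_fell":       ("call the emergency contact", "because a fall can be serious and you asked me to get help"),
    "medicine_missed": ("remind the user",            "because your medicine schedule says a dose is due"),
    "low_battery":     ("return to my charger",       "because my battery is nearly empty"),
}

decision_log = []  # (situation, action) pairs recorded as the robot runs

def act(situation: str) -> None:
    """Record a decision; a real robot would also perform the action here."""
    action, _ = rules[situation]
    decision_log.append((situation, action))

def why_did_you_do_that() -> str:
    """Answer 'why did you just do that?' from the most recent logged decision."""
    situation, action = decision_log[-1]
    return f"I chose to {action} {rules[situation][1]}."

def what_would_you_do_if(situation: str) -> str:
    """Answer hypothetical 'what would you do if ...?' questions from the rule set."""
    action, reason = rules[situation]
    return f"I would {action}, {reason}."

act("low_battery")
print(why_did_you_do_that())
print(what_would_you_do_if("user_fell"))
```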
And second, because robots and AIs can and do go wrong. If physical robots go
wrong, they can cause physical harm or injury. Real-world trials of driverless cars
have already resulted in several fatalities9. Even a software AI can cause harm. A
medical diagnosis AI might, for instance, give the wrong diagnosis, or a biased credit
scoring AI might cause someone’s loan application to be wrongly rejected. Without
transparency, discovering what went wrong is extremely difficult and may in some
cases be impossible. The ability to find out what went wrong and why is not only
important to accident investigators, it might also be important to establish who is
responsible, for insurance purposes, or in a court of law. And following high-profile
accidents, wider society needs the reassurance of knowing that problems have been
found and fixed.
Transparency and explainability measured
But transparency is not one thing. Clearly an elderly relative does not require the
same level of understanding of a care robot as the engineer who repairs it. The
P7001 working group has defined five distinct groups of stakeholders (the
beneficiaries of the standard): users, safety certifiers or agencies, accident
investigators, lawyers or expert witnesses, and the wider public. For each of these
stakeholder groups, P7001 is setting out measurable, testable levels of transparency
so that autonomous systems can be objectively assessed and levels of compliance
determined, in a range that defines minimum levels up to the highest achievable
standards of transparency.
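P7001 was still in draft at the time of writing, so the sketch below does not reproduce its actual levels; it simply illustrates how a per-stakeholder transparency self-assessment might be recorded. The five stakeholder groups come from the text above, while the 0 to 4 scale and the level descriptions are assumptions.

```python
# Hypothetical transparency self-assessment, one score per P7001 stakeholder group.
# The stakeholder groups come from the article; the level scale and wording are invented.
STAKEHOLDERS = ["users", "safety certifiers", "accident investigators",
                "lawyers / expert witnesses", "wider public"]

LEVELS = {
    0: "no transparency provision",
    1: "minimal documentation only",
    2: "on-request explanations or logs",
    3: "continuous logging with accessible explanations",
    4: "independently verifiable, audited transparency",
}

def self_assess(scores: dict) -> None:
    """Print a simple summary of a system's claimed transparency level for each group."""
    for group in STAKEHOLDERS:
        level = scores.get(group, 0)
        print(f"{group:<28} level {level}: {LEVELS[level]}")

# Example: a hypothetical assisted living robot
self_assess({
    "users": 2,
    "safety certifiers": 3,
    "accident investigators": 3,
    "lawyers / expert witnesses": 1,
    "wider public": 1,
})
```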
Of course, the way in which transparency is provided is very different for each group.
Safety certification agencies need access to technical details of how the system
works, together with verified test results. Accident investigators will need access to
data logs of exactly what happened prior to and during an accident, most likely
provided by something akin to an aircraft flight data recorder10. Lawyers and expert
witnesses will need access to the reports of safety certifiers and accident
investigators, along with evidence of the developer or manufacturer’s quality
management processes. And wider society needs accessible documentary-type
science communication to explain autonomous systems and how they work. P7001
will provide system designers with a toolkit for self-assessing transparency, and
recommendations for how to achieve greater transparency and explainability.
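The flight data recorder idea, elsewhere called an ethical black box10, can be pictured as an append-only log of sensor readings and decisions whose records are chained together so that later tampering is evident. The sketch below is only a schematic of that idea, not an implementation of any standard; the record fields and the hash chaining are assumptions.

```python
import hashlib
import json
import time

class EthicalBlackBox:
    """Append-only, hash-chained log of sensor and decision data (illustrative sketch)."""

    def __init__(self) -> None:
        self.records = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def record(self, sensors: dict, decision: str) -> None:
        """Append one record; chaining to the previous hash makes tampering evident."""
        entry = {
            "timestamp": time.time(),
            "sensors": sensors,
            "decision": decision,
            "prev_hash": self._prev_hash,
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._prev_hash
        self.records.append(entry)

    def last(self, n: int = 5) -> list:
        """Return the n most recent records, e.g. for an accident investigator."""
        return self.records[-n:]

ebb = EthicalBlackBox()
ebb.record({"lidar_min_range_m": 0.4, "speed_mps": 0.2}, "stop: obstacle detected")
print(ebb.last(1))
```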
Outlook
How might these new ethical standards be applied when, like most standards, they
are voluntary? First, standards which relate to safety (and especially safety-critical
systems) can be mandated by licensing authorities, so that compliance with those
standards becomes a de facto requirement of obtaining a licence to operate that
system; for the P7000 series candidates might include P7001 and P7009. Second, in
a competitive market, compliance with ethical standards can be used to gain market
advantage, especially among ethically aware consumers. Third, there is growing
pressure from professional bodies for their members to behave ethically. Emerging
professional codes of ethical conduct such as the recently published ACM11 and
IEEE12 codes of ethics and professional conduct are very encouraging; in turn, those
professionals are increasingly likely to exert internal pressure on their employers to
adopt ethical standards. And fourth, soft governance plays an important role in the
adoption of new standards: by requiring compliance with standards as a condition of
awarding procurement contracts, governments can and do influence and direct the
adoption of standards across an entire supply chain without explicit regulation.
For data- or privacy-critical applications, a number of the P7000 standards
(P7002/3/4/5/12 and 13, for instance) could find application this way.
While some argue over the pace and level of impact of robotics and AI (on jobs, for
instance), most agree that increasingly capable intelligent systems create significant
ethical challenges, as well as great promise. This new generation of ethical
standards takes a powerful first step toward addressing those challenges. Standards,
like open science13, are a trust technology. Without ethical standards, it is hard to see
how robots and AIs will be trusted and widely accepted, and without that acceptance
their great promise will not be realised.
Alan Winfield is Professor of Robot Ethics at the Bristol Robotics Laboratory, UWE
Bristol, and visiting professor at the University of York. He chairs IEEE Standards
Working Group P7001.
e-mail: Alan.Winfield@brl.ac.uk
The views expressed in this article are those of the author only, and do not represent
the opinions of any organisation mentioned or with which he is affiliated.
References
1. http://alanwinfield.blogspot.com/2017/12/a-round-up-of-robotics-and-ai-ethics.html
2. Winfield, A. F. & Jirotka, M. Phil. Trans. R. Soc. A 376, 20180085 (2018); http://dx.doi.org/10.1098/rsta.2018.0085
3. British Standards Institution, BS 8611:2016 Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems (2016); https://shop.bsigroup.com/ProductDetail/?pid=000000000030320089
4. EPSRC, Principles of Robotics (2011); https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
5. https://standards.ieee.org/industry-connections/ec/autonomous-systems.html
6. http://standards.ieee.org/develop/indconn/ec/ec_about_us.pdf
7. IEEE Standards Association, Ethically Aligned Design (2017); https://ethicsinaction.ieee.org/
8. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_general_principles_v2.pdf
9. Stilgoe, J. & Winfield, A. Self-driving car companies should not be allowed to investigate their own crashes. The Guardian (13 April 2018); https://www.theguardian.com/science/political-science/2018/apr/13/self-driving-car-companies-should-not-be-allowed-to-investigate-their-own-crashes
10. Winfield, A. F. & Jirotka, M. The Case for an Ethical Black Box. In: Gao, Y., Fallah, S., Jin, Y. & Lekakou, C. (eds) Lecture Notes in Computer Science, vol. 10454 (Springer, Cham, 2017).
11. https://www.acm.org/code-of-ethics
12. https://www.ieee.org/about/corporate/governance/p7-8.html
13. Grand, A., Wilkinson, C., Bultitude, K. & Winfield, A. F. Open Science: A new 'trust technology'? Science Communication 34, 679–689 (2012).