Pre-print of article published 15 February 2019 in Nature Electronics:
https://doi.org/10.1038/s41928-019-0213-6
Ethical standards in robotics and AI
A new generation of ethical standards in robotics and artificial intelligence is emerging as a
direct response to a growing awareness of the ethical, legal and societal impact of the fields.
But what exactly are these ethical standards and how do they differ from conventional
standards?
Alan Winfield
Standards are a vital part of the infrastructure of the modern world: invisible, but no
less important than roads, airports and telephone networks. It is hard to think of any
aspect of everyday life untouched by standards. The International Organization for
Standardization (ISO), just one of several standards bodies, lists a total of 22,482
published standards. Take the simple act of brushing your teeth in the morning: there
are standards for your toothbrush (both manual ISO 20126 and electric ISO 20127),
your toothpaste and its packaging (ISO 11609), and the quality of your tap water
(ISO 5667-5). Although it might seem odd to wax lyrical on standards, they do
represent a truly remarkable body of work drafted by countless expert volunteers
with an extraordinary impact on individual and societal health and safety.
All standards embody a principle, and often it is an ethical principle or value. Safety
standards are founded on the general principle that products and systems should do
no harm, that is, that they should be safe; ISO 13482, for instance, sets out safety
requirements for personal care robots. Quality management standards, such as ISO
9001, describe how things should be done, and can be thought of as expressing the
principle that shared best practice leads to improved quality. And technical
standards, like IEEE 802.11 (better known as WiFi), can be thought of as embodying
the benefits of interoperability. Even the basic idea of standards as codifying shared
ways of doing things can be thought of as expressing the values of cooperation and
harmonisation. All standards can therefore be thought of as implicit ethical standards.
We can define an explicit ethical standard as one that addresses clearly articulated
ethical concerns and seeks, through its application, at best to remove, failing that to
reduce, or at the very least to highlight the potential for unethical impacts or their
consequences.
What are the ethical principles which underpin these new ethical standards? An
informal survey1 in December 2017 listed a total of ten different sets of ethical
principles for robotics and AI. The earliest (1950) are Asimov’s laws of robotics:
important because they established the principle that robots should be governed by
principles. Very recently we have seen a proliferation of principles; of the ten sets
surveyed, seven were published in 2017.
Perhaps not surprisingly these ethical principles have much in common. In summary:
robots and artificial intelligences (AIs) should do no harm, while being free of bias
and deception; respect human rights and freedoms, including dignity and privacy,
while promoting well-being; and be transparent and dependable while ensuring that
the locus of responsibility and accountability remains with their human designers or
operators. Just as interesting is the increasing frequency of their publication: clear
evidence for a growing awareness of the urgent need for ethical principles for
robotics and AI. But, while an important and necessary foundation, principles are not
practice. Ethical standards are the next important step toward ethical governance in
robotics and AI2.
Ethical risk assessment
Almost certainly the world’s first explicit ethical standard in robotics is BS 8611 Guide
to the Ethical Design and Application of Robots and Robotic Systems3, which was
published in April 2016. Incorporating the EPSRC principles of robotics4, BS 8611 is
not a code of practice, but instead guidance on how designers can undertake an
ethical risk assessment of their robot or system, and mitigate any ethical risks so
identified. At its heart is a set of 20 distinct ethical hazards and risks, grouped under
four categories: societal, application, commercial & financial, and environmental.
Advice on measures to mitigate the impact of each risk is given, along with
suggestions on how such measures might be verified or validated. The societal
hazards include, for example, loss of trust, deception, infringements of privacy and
confidentiality, addiction, and loss of employment. The idea of ethical risk
assessment is of course not new (it is essentially what research ethics committees
do), but a method for assessing robots for ethical risks is a powerful new addition to
the ethical roboticist's toolkit.
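
To make the shape of such an assessment concrete, the sketch below shows how a
BS 8611-style ethical risk register might be captured in software. The four hazard
categories and the example societal hazards come from the standard as described
above; the class design, the likelihood-times-severity scoring, and the mitigation
texts are illustrative assumptions, not part of BS 8611.

from dataclasses import dataclass
from enum import Enum

class HazardCategory(Enum):
    SOCIETAL = "societal"
    APPLICATION = "application"
    COMMERCIAL_FINANCIAL = "commercial & financial"
    ENVIRONMENTAL = "environmental"

@dataclass
class EthicalRisk:
    hazard: str               # e.g. "deception", "loss of trust"
    category: HazardCategory
    likelihood: int           # assumed scale: 1 (rare) to 5 (almost certain)
    severity: int             # assumed scale: 1 (negligible) to 5 (severe)
    mitigation: str           # measure to reduce the risk
    verification: str         # how the mitigation is checked

    @property
    def score(self) -> int:
        # Simple likelihood x severity metric (an assumption, not from the standard)
        return self.likelihood * self.severity

# Illustrative entries for a hypothetical assisted living robot
register = [
    EthicalRisk("deception (user mistakes robot behaviour for human care)",
                HazardCategory.SOCIETAL, 3, 4,
                mitigation="robot identifies itself as a machine",
                verification="user trials confirm users understand it is a robot"),
    EthicalRisk("infringement of privacy",
                HazardCategory.SOCIETAL, 4, 4,
                mitigation="process video on-device; no cloud upload",
                verification="independent audit of data flows"),
]

# Review the register, highest-scoring risks first
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:2d}] {risk.category.value}: {risk.hazard} -> {risk.mitigation}")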
In April 2016, the IEEE Standards Association launched a global initiative on the
Ethics of Autonomous and Intelligent Systems5. The significance of this initiative
cannot be overstated; coming from a professional body with the standing and reach
of the IEEE Standards Association, it marks a watershed in the emergence of ethical
standards. And it is a radical step. As I've argued above, all standards are, even if
not explicitly, based on ethical principles. But for a respected standards body to
launch an initiative which explicitly aims to address the deep ethical challenges that
face the whole of autonomous and intelligent systems, from driverless car autopilots
to medical diagnosis AIs, drones to deep learning, and care robots to chatbots, is
both ambitious and unprecedented.
Humanity first
The IEEE initiative positions human well-being as its central tenet6. This is a bold and
political stance since it explicitly seeks to reposition robotics and AI as technologies
for improving the human condition rather than simply vehicles for economic growth.
The initiative’s mission is “to ensure every stakeholder involved in the design and
development of autonomous and intelligent systems is educated, trained, and
empowered to prioritize ethical considerations so that these technologies are
advanced for the benefit of humanity”.
The first major output from the IEEE Standards Association’s global ethics initiative is
a discussion document called Ethically Aligned Design (EAD)7, developed through an
iterative process which invited public feedback. The published second edition of EAD
sets out more than 100 ethical issues and recommendations, and a third edition will
be launched early in 2019. The work of more than 1000 volunteers across thirteen
committees, EAD covers: general (ethical) principles; how to embed values into
autonomous intelligent systems; methods to guide ethical design; safety and
beneficence of artificial general intelligence and artificial superintelligence; personal
data and individual access control; reframing autonomous weapons systems;
economics and humanitarian issues; law; affective computing; classical ethics in AI;
policy; mixed reality; and well-being.
Each EAD committee was additionally tasked with identifying, recommending and
promoting new candidate standards, and to date a total of 14 new IEEE
standards working groups have started work on drafting so-called human standards
(Box 1).
Box 1: IEEE P7000 series human standards in development
P7000 Model Process for Addressing Ethical Concerns During System Design
P7001 Transparency of Autonomous Systems
P7002 Data Privacy Process
P7003 Algorithmic Bias Considerations
P7004 Standard for Child and Student Data Governance
P7005 Standard for Transparent Employer Data Governance
P7006 Standard for Personal Data Artificial Intelligence (AI) Agent
P7007 Ontological Standard for Ethically Driven Robotics and Automation Systems
P7008 Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
P7009 Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems
P7010 Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems
P7011 Standard for the Process of Identifying and Rating the Trustworthiness of News Sources
P7012 Standard for Machine Readable Personal Privacy Terms
P7013 Inclusion and Application Standards for Automated Facial Analysis Technology
The importance of transparency and explainability
Consider P7001 as a case study. One of the general principles8 of EAD asks 'how
can we ensure that autonomous and intelligent systems are transparent?' and
recommends a new standard for transparency. P7001 Transparency of Autonomous
Systems was initiated as a direct response. IEEE P7001 directly addresses the
straightforward ethical principle that it should always be possible to find out why an
autonomous system made a particular decision.
A robot or AI is transparent if it is possible to find out why it behaves in a certain way.
We might for instance want to discover why it made a particular decision, especially if
that decision caused an accident or for the less serious reason that the robot or
AI’s behaviour is puzzling. Transparency is not intrinsic to robots and AIs, but must
be designed for, and it is a property which autonomous systems might have more or
less of. And full transparency might be very challenging to provide, for instance in
systems based on artificial neural networks (deep learning systems), or systems that
are continually learning.
There are two reasons transparency is so important.
First, because modern robots and AIs are designed to work with or alongside
humans, who need to be able to understand what they are doing and why. If we take
an assisted living robot as an example, transparency (or, to be precise, explainability)
means the user can understand what the robot might do in different circumstances.
An elderly person might be very unsure about robots, so it is important that her robot
is helpful, predictable (never doing anything that frightens her) and, above all, safe. It
should be easy for her to learn what the robot does and why, in different
circumstances. An explainer system that allows her to ask the robot 'why did you just
do that?' and receive a simple natural language explanation would be very helpful in
providing this kind of transparency. A higher level of transparency would be the
ability to ask questions like 'what would you do if I fell down?' or 'what would you do
if I forget to take my medicine?'. This allows her to build a mental model of how the
robot will behave in different situations.
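
As a concrete illustration of the explainer described above, the following sketch
maps each robot behaviour to a plain-language explanation and supports
hypothetical 'what if' questions. The behaviour names, explanation texts and API
are invented for illustration; a real robot would generate explanations from its actual
control logic rather than a fixed table.

# A minimal rule-based explainer; all behaviours and texts are illustrative
BEHAVIOUR_EXPLANATIONS = {
    "fetch_medicine": "I brought your medicine because it is 9 am, your usual time.",
    "call_carer": "I called your carer because you did not answer me for two minutes.",
}

WHAT_IF_RESPONSES = {
    "I fell down": "I would ask if you are hurt, and call your carer if you did not answer.",
    "I forget to take my medicine": "I would remind you once, then record the missed dose.",
}

class Explainer:
    def __init__(self):
        self.last_action = None

    def record(self, action: str) -> None:
        self.last_action = action  # remember the most recent behaviour

    def why_did_you_do_that(self) -> str:
        if self.last_action is None:
            return "I have not done anything yet."
        return BEHAVIOUR_EXPLANATIONS.get(self.last_action,
                                          "I cannot explain that action.")

    def what_would_you_do_if(self, situation: str) -> str:
        return WHAT_IF_RESPONSES.get(situation,
                                     "I do not know what I would do in that situation.")

robot = Explainer()
robot.record("fetch_medicine")
print(robot.why_did_you_do_that())
print(robot.what_would_you_do_if("I fell down"))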
And second, because robots and AIs can and do go wrong. If physical robots go
wrong, they can cause physical harm or injury. Real-world trials of driverless cars
have already resulted in several fatalities9. Even a software AI can cause harm. A
medical diagnosis AI might, for instance, give the wrong diagnosis, or a biased credit
scoring AI might cause someone’s loan application to be wrongly rejected. Without
transparency, discovering what went wrong is extremely difficult and may in some
cases be impossible. The ability to find out what went wrong and why is not only
important to accident investigators, it might also be important to establish who is
responsible, for insurance purposes, or in a court of law. And, following high-profile
accidents, wider society needs the reassurance of knowing that problems have been
found and fixed.
Transparency and explainability measured
But transparency is not one thing. Clearly an elderly relative does not require the
same level of understanding of a care robot as the engineer who repairs it. The
P7001 working group has defined five distinct groups of stakeholders (the
beneficiaries of the standard): users, safety certifiers or agencies, accident
investigators, lawyers or expert witnesses, and the wider public. For each of these
stakeholder groups, P7001 is setting out measurable, testable levels of transparency
so that autonomous systems can be objectively assessed and levels of compliance
determined, in a range that defines minimum levels up to the highest achievable
standards of transparency.
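
The idea of measurable, testable levels might be recorded along the following lines.
P7001 was still being drafted at the time of writing, so while the five stakeholder
groups are those named above, the numeric scale, level meanings and reporting
format in this sketch are purely illustrative assumptions, not the levels the working
group will define.

# Illustrative transparency self-assessment; the 0-5 scale is an assumption
STAKEHOLDERS = [
    "users",
    "safety certifiers",
    "accident investigators",
    "lawyers and expert witnesses",
    "wider public",
]

# 0 = no transparency provision, 5 = highest achievable (assumed scale)
assessment = {
    "users": 3,                        # e.g. natural-language explainer provided
    "safety certifiers": 4,            # e.g. design documents and verified test results
    "accident investigators": 2,       # e.g. partial data logging only
    "lawyers and expert witnesses": 2,
    "wider public": 1,                 # e.g. minimal public-facing documentation
}

MINIMUM_LEVEL = 1  # an assumed compliance floor

def report(assessment: dict) -> None:
    for group in STAKEHOLDERS:
        level = assessment.get(group, 0)
        status = "ok" if level >= MINIMUM_LEVEL else "below minimum"
        print(f"{group:30s} level {level}/5 ({status})")

report(assessment)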
Of course, the way in which transparency is provided is very different for each group.
Safety certification agencies need access to technical details of how the system
works, together with verified test results. Accident investigators will need access to
data logs of exactly what happened prior to and during an accident, most likely
provided by something akin to an aircraft flight data recorder10. Lawyers and expert
witnesses will need access to the reports of safety certifiers and accident
investigators, along with evidence of the developer or manufacturer’s quality
management processes. And wider society needs accessible documentary-type
science communication to explain autonomous systems and how they work. P7001
will provide system designers with a toolkit for self-assessing transparency, and
recommendations for how to achieve greater transparency and explainability.
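
In its simplest software form, the 'ethical black box' proposed in ref. 10 for exactly
this purpose might look like the sketch below: a bounded, timestamped log of
sensor readings and decisions from which the moments before an accident can be
reconstructed. The record fields, buffer capacity and example data are illustrative
assumptions.

import time
from collections import deque

class EthicalBlackBox:
    # A minimal flight-data-recorder analogue: retains the most recent
    # records in a bounded buffer, overwriting the oldest when full.
    def __init__(self, capacity: int = 10_000):  # capacity is an assumption
        self.records = deque(maxlen=capacity)

    def log(self, sensors: dict, decision: str, reason: str) -> None:
        self.records.append({
            "t": time.time(),      # timestamp of the record
            "sensors": sensors,    # snapshot of relevant sensor data
            "decision": decision,  # what the system decided to do
            "reason": reason,      # why it decided to do it
        })

    def dump_since(self, t0: float) -> list:
        # Everything recorded from time t0 onward, e.g. for an accident investigator
        return [r for r in self.records if r["t"] >= t0]

ebb = EthicalBlackBox()
ebb.log({"lidar_min_range_m": 0.4, "speed_mps": 0.8},
        decision="emergency_stop",
        reason="obstacle within 0.5 m safety threshold")
print(ebb.dump_since(0.0))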
Outlook
How might these new ethical standards be applied when, like most standards, they
are voluntary? First, standards which relate to safety (and especially safety-critical
systems), can be mandated by licensing authorities, so that compliance with those
standards becomes a de facto requirement of obtaining a licence to operate that
system; for the P7000 series, candidates might include P7001 and P7009. Second, in
a competitive market, compliance with ethical standards can be used to gain market
advantage, especially among ethically aware consumers. Third, there is growing
pressure from professional bodies for their members to behave ethically. Emerging
professional codes of ethical conduct such as the recently published ACM11 and
IEEE12 codes of ethics and professional conduct are very encouraging; in turn, those
professionals are increasingly likely to exert internal pressure on their employers to
adopt ethical standards. And fourth, soft governance plays an important role in the
adoption of new standards: by requiring compliance with standards as a condition of
awarding procurement contracts, governments can and do influence and direct the
adoption of standards across an entire supply chain without explicit regulation.
For data- or privacy-critical applications, a number of the P7000 standards
(P7002/3/4/5/12 and 13, for instance) could find application this way.
While some argue over the pace and level of impact of robotics and AI (on jobs, for
instance), most agree that increasingly capable intelligent systems create significant
ethical challenges, as well as great promise. This new generation of ethical
standards takes a powerful first step toward addressing those challenges. Standards,
like open science13, are a trust technology. Without ethical standards, it is hard to see
how robots and AIs will be trusted and widely accepted, and without that acceptance
their great promise will not be realised.
Alan Winfield is Professor of Robot Ethics at the Bristol Robotics Laboratory, UWE
Bristol, and visiting professor at the University of York. He chairs IEEE Standards
Working Group P7001.
e-mail: Alan.Winfield@brl.ac.uk
The views expressed in this article are those of the author only, and do not represent
the opinions of any organisation mentioned, or with which I am affiliated.
References
1. http://alanwinfield.blogspot.com/2017/12/a-round-up-of-robotics-and-ai-ethics.html
2. Winfield, A. F. & Jirotka, M. Phil. Trans. R. Soc. A 376, 20180085 (2018); http://dx.doi.org/10.1098/rsta.2018.0085
3. British Standards Institution, BS 8611:2016 Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems (2016); https://shop.bsigroup.com/ProductDetail/?pid=000000000030320089
4. EPSRC, Principles of Robotics (2011); https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
5. https://standards.ieee.org/industry-connections/ec/autonomous-systems.html
6. http://standards.ieee.org/develop/indconn/ec/ec_about_us.pdf
7. IEEE Standards Association, Ethically Aligned Design (2017); https://ethicsinaction.ieee.org/
8. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_general_principles_v2.pdf
9. Stilgoe, J. & Winfield, A. The Guardian (13 April 2018); https://www.theguardian.com/science/political-science/2018/apr/13/self-driving-car-companies-should-not-be-allowed-to-investigate-their-own-crashes
10. Winfield, A. F. & Jirotka, M. The case for an ethical black box. In: Gao, Y., Fallah, S., Jin, Y. & Lekakou, C. (eds) Lecture Notes in Computer Science, vol. 10454 (Springer, Cham, 2017).
11. https://www.acm.org/code-of-ethics
12. https://www.ieee.org/about/corporate/governance/p7-8.html
13. Grand, A., Wilkinson, C., Bultitude, K. & Winfield, A. F. Open Science: a new 'trust technology'? Science Communication 34, 679-689 (2012).