Pre-print of article published 15 February 2019 in Nature Electronics:
https://doi.org/10.1038/s41928-019-0213-6
Ethical standards in Robotics and AI
A new generation of ethical standards in robotics and artificial intelligence is emerging as a
direct response to a growing awareness of the ethical, legal and societal impact of the fields.
But what exactly are these ethical standards and how do they differ from conventional
standards?
Alan Winfield
Standards are a vital part of the infrastructure of the modern world: invisible, but no
less important than roads, airports and telephone networks. It is hard to think of any
aspect of everyday life untouched by standards. The International Organization for Standardization (ISO), just one of several standards bodies, lists a total of 22,482 published standards. Take the simple act of brushing your teeth in the morning: there
are standards for your toothbrush (both manual ISO 20126 and electric ISO 20127),
your toothpaste and its packaging (ISO 11609), and the quality of your tap water
(ISO 5667-5). Although it might seem odd to wax lyrical on standards, they do
represent a truly remarkable body of work drafted by countless expert volunteers
with an extraordinary impact on individual and societal health and safety.
All standards embody a principle, and often it is an ethical principle or value. Safety standards are founded on the general principle that products and systems should do no harm, that is, that they should be safe; ISO 13482, for instance, sets out safety
requirements for personal care robots. Quality management standards, such as ISO
9001, describe how things should be done, and can be thought of as expressing the
principle that shared best practice leads to improved quality. And technical
standards, like IEEE 802.11 (better known as WiFi), can be thought of as embodying
the benefits of interoperability. Even the basic idea of standards as codifying shared
ways of doing things can be thought of as expressing the values of cooperation and
harmonisation. All standards can therefore be thought of as implicit ethical standards.
We can define an explicit ethical standard as one that addresses clearly articulated ethical concerns and seeks, through its application, to at best remove, hopefully reduce, or at the very least highlight the potential for unethical impacts or their consequences.
What are the ethical principles which underpin these new ethical standards? An
informal survey1 in December 2017 listed a total of ten different sets of ethical
principles for robotics and AI. The earliest (1950) are Asimov’s laws of robotics:
important because they established the principle that robots should be governed by
principles. Very recently we have seen a proliferation of principles; of the ten sets
surveyed seven were published in 2017.
Perhaps not surprisingly these ethical principles have much in common. In summary:
robots and artificial intelligences (AIs) should do no harm, while being free of bias
and deception; respect human rights and freedoms, including dignity and privacy,
while promoting well-being; and be transparent and dependable while ensuring that
the locus of responsibility and accountability remains with their human designers or
operators. Just as interesting is the increasing frequency of their publication: clear
evidence for a growing awareness of the urgent need for ethical principles for
robotics and AI. But, while an important and necessary foundation, principles are not
practice. Ethical standards are the next important step toward ethical governance in
robotics and AI2.
Ethical risk assessment
Almost certainly the world’s first explicit ethical standard in robotics is BS 8611 Guide
to the Ethical Design and Application of Robots and Robotic Systems3, which was
published in April 2016. Incorporating the EPSRC principles of robotics4, BS 8611 is
not a code of practice, but instead guidance on how designers can undertake an
ethical risk assessment of their robot or system, and mitigate any ethical risks so
identified. At its heart is a set of 20 distinct ethical hazards and risks, grouped under
four categories: societal, application, commercial & financial, and environmental.
Advice on measures to mitigate the impact of each risk is given, along with
suggestions on how such measures might be verified or validated. The societal
hazards include, for example, loss of trust, deception, infringements of privacy and
confidentiality, addiction, and loss of employment. The idea of ethical risk assessment is of course not new (it is essentially what research ethics committees do), but a method for assessing robots for ethical risks is a powerful new addition to the ethical roboticist's toolkit.
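By way of illustration, the sketch below shows one minimal way a designer might represent a BS 8611 style ethical risk register in code: each entry records a hazard, its category, an estimated likelihood and severity, a mitigation measure, and a means of verification or validation. The four categories are those named in the standard; the class and field names, the numeric scoring scale and the example entries are illustrative assumptions, not part of BS 8611 itself.
// start code sketch
from dataclasses import dataclass
from enum import Enum

# Hazard categories as grouped in BS 8611 (summarised in the text above).
class Category(Enum):
    SOCIETAL = "societal"
    APPLICATION = "application"
    COMMERCIAL_FINANCIAL = "commercial & financial"
    ENVIRONMENTAL = "environmental"

@dataclass
class EthicalRisk:
    """One entry in an ethical risk register (illustrative structure only)."""
    hazard: str            # e.g. "deception", "loss of trust"
    category: Category
    likelihood: int        # designer's estimate, 1 (rare) to 5 (almost certain)
    severity: int          # designer's estimate, 1 (negligible) to 5 (severe)
    mitigation: str        # measure intended to reduce the risk
    verification: str      # how the mitigation will be verified or validated

    @property
    def score(self) -> int:
        # Simple likelihood x severity scoring, a common risk-assessment heuristic.
        return self.likelihood * self.severity

# Example register entries for a hypothetical assisted-living robot.
register = [
    EthicalRisk("deception (user attributes feelings to the robot)",
                Category.SOCIETAL, 3, 3,
                "make the robot's machine nature explicit in speech and manuals",
                "user trials confirming users understand the robot is a machine"),
    EthicalRisk("infringement of privacy",
                Category.SOCIETAL, 4, 4,
                "process video locally; store no images without consent",
                "code review and data-flow audit"),
]

# Review the register with the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:2d}] {risk.category.value}: {risk.hazard}")
// end code sketch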
In April 2016, the IEEE Standards Association launched a global initiative on the
Ethics of Autonomous and Intelligent Systems5. The significance of this initiative
cannot be overstated; coming from a professional body with the standing and reach of the IEEE Standards Association, it marks a watershed in the emergence of ethical standards. And it is a radical step. As I've argued above, all standards are, even if not explicitly, based on ethical principles. But for a respected standards body to launch an initiative which explicitly aims to address the deep ethical challenges that face the whole of autonomous and intelligent systems, from driverless car autopilots to medical diagnosis AIs, drones to deep learning, and care robots to chatbots, is both ambitious and unprecedented.
Humanity first
The IEEE initiative positions human well-being as its central tenet6. This is a bold and
political stance since it explicitly seeks to reposition robotics and AI as technologies
for improving the human condition rather than simply vehicles for economic growth.
The initiative’s mission is “to ensure every stakeholder involved in the design and
development of autonomous and intelligent systems is educated, trained, and
empowered to prioritize ethical considerations so that these technologies are
advanced for the benefit of humanity”.
The first major output from the IEEE Standards Association’s global ethics initiative is
a discussion document called Ethically Aligned Design (EAD)7, developed through an
iterative process which invited public feedback. The published second edition of EAD
sets out more than 100 ethical issues and recommendations, and a third edition will
be launched early in 2019. The work of more than 1000 volunteers across thirteen
committees, EAD covers: general (ethical) principles; how to embed values into
autonomous intelligent systems; methods to guide ethical design; safety and
beneficence of artificial general intelligence and artificial superintelligence; personal
data and individual access control; reframing autonomous weapons systems;
economics and humanitarian issues; law; affective computing; classical ethics in AI;
policy; mixed reality; and well-being.
Each EAD committee was additionally tasked with identifying, recommending and
promoting new candidate standards, and to date a total of 14 new IEEE
standards working groups have started work on drafting so-called human standards
(Box 1).
// start Box 1
Box 1: IEEE P7000 series human standards in development
P7000 Model Process for Addressing Ethical Concerns During System Design
P7001 Transparency of Autonomous Systems
P7002 Data Privacy Process
P7003 Algorithmic Bias Considerations
P7004 Standard for Child and Student Data Governance
P7005 Standard for Transparent Employer Data Governance
P7006 Standard for Personal Data Artificial Intelligence (AI) Agent
P7007 Ontological Standard for Ethically Driven Robotics and Automation Systems
P7008 Standard for Ethically Driven Nudging for Robotic, Intelligent and
Autonomous Systems
P7009 Standard for Fail-Safe Design of Autonomous and Semi-Autonomous
Systems
P7010 Wellbeing Metrics Standard for Ethical Artificial Intelligence and
Autonomous Systems
P7011 Standard for the Process of Identifying and Rating the Trustworthiness of
News Sources
P7012 Standard for Machine Readable Personal Privacy Terms
P7013 Inclusion and Application Standards for Automated Facial Analysis
Technology
// end Box 1
The importance of transparency and explainability
Consider P7001 as a case study. One of the general principles8 of EAD asks "how can we ensure that autonomous and intelligent systems are transparent?" and recommends a new standard for transparency. P7001 Transparency of Autonomous Systems was initiated as a direct response. IEEE P7001 directly addresses the straightforward ethical principle that it should always be possible to find out why an autonomous system made a particular decision.
A robot or AI is transparent if it is possible to find out why it behaves in a certain way.
We might for instance want to discover why it made a particular decision, especially if
that decision caused an accident or for the less serious reason that the robot or
AI’s behaviour is puzzling. Transparency is not intrinsic to robots and AIs, but must
be designed for, and it is a property which autonomous systems might have more or
less of. And full transparency might be very challenging to provide, for instance in
systems based on artificial neural networks (deep learning systems), or systems that
are continually learning.
There are two reasons transparency is so important.
First, because modern robots and AIs are designed to work with or alongside
humans, who need to be able to understand what they are doing and why. If we take
an assisted living robot as an example, transparency (or, to be precise, explainability) means the user can understand what the robot might do in different circumstances. An elderly person might be very unsure about robots, so it is important that her robot is helpful, predictable (it never does anything that frightens her) and, above all, safe. It should be easy for her to learn what the robot does and why, in different circumstances. An explainer system that allows her to ask the robot "why did you just do that?" and receive a simple natural-language explanation would be very helpful in providing this kind of transparency. A higher level of transparency would be the ability to ask questions like "what would you do if I fell down?" or "what would you do if I forget to take my medicine?". This allows her to build a mental model of how the robot will behave in different situations.
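To make the idea concrete, the minimal sketch below shows the shape such an explainer system might take: the robot records each decision together with its trigger and the behaviour rule that fired, and can then answer "why did you just do that?" and simple "what would you do if...?" queries from a table of planned responses. All names and behaviours here are hypothetical and are not drawn from P7001 itself.
// start code sketch
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Decision:
    """A single recorded decision: what the robot did and why."""
    timestamp: datetime
    action: str        # e.g. "brought your medicine"
    trigger: str       # e.g. "the 09:00 medication reminder"
    rule: str          # the behaviour rule that fired

@dataclass
class Explainer:
    """Answers 'why did you just do that?' and 'what would you do if...?' queries."""
    log: List[Decision] = field(default_factory=list)
    policies: dict = field(default_factory=dict)   # situation -> planned response

    def record(self, action: str, trigger: str, rule: str) -> None:
        self.log.append(Decision(datetime.now(), action, trigger, rule))

    def why_did_you_do_that(self) -> str:
        if not self.log:
            return "I have not done anything yet."
        d = self.log[-1]
        return f"I {d.action} because of {d.trigger} (rule: {d.rule})."

    def what_would_you_do_if(self, situation: str) -> str:
        plan = self.policies.get(situation)
        return (f"If {situation}, I would {plan}." if plan
                else f"I do not have a planned response for '{situation}'.")

# Usage: the assisted-living scenario described above.
robot = Explainer(policies={"you fell down": "call your designated emergency contact"})
robot.record("brought your medicine", "the 09:00 medication reminder", "medication-schedule")
print(robot.why_did_you_do_that())
print(robot.what_would_you_do_if("you fell down"))
// end code sketch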
And second, because robots and AIs can and do go wrong. If physical robots go
wrong they can cause physical harm or injury. Real world trials of driverless cars
have already resulted in several fatalities9. Even a software AI can cause harm. A
medical diagnosis AI might, for instance, give the wrong diagnosis, or a biased credit
scoring AI might cause someone’s loan application to be wrongly rejected. Without
transparency, discovering what went wrong is extremely difficult and may in some
cases be impossible. The ability to find out what went wrong and why is not only
important to accident investigators, it might also be important to establish who is
responsible, for insurance purposes, or in a court of law. And, following high-profile accidents, wider society needs the reassurance of knowing that problems have been found and fixed.
Transparency and explainability measured
But transparency is not one thing. Clearly an elderly relative does not require the
same level of understanding of a care robot as the engineer who repairs it. The
P7001 working group has defined five distinct groups of stakeholders (the
beneficiaries of the standard): users, safety certifiers or agencies, accident investigators, lawyers or expert witnesses, and the wider public. For each of these
stakeholder groups, P7001 is setting out measurable, testable levels of transparency
so that autonomous systems can be objectively assessed and levels of compliance
determined, in a range that defines minimum levels up to the highest achievable
standards of transparency.
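A transparency self-assessment against such levels could be as simple as a table of the level achieved for each stakeholder group, checked against a required minimum, as in the sketch below. The five stakeholder groups are those defined by the P7001 working group; the 0-4 scale, the example levels and the minimum threshold are assumptions made purely for illustration and are not the levels P7001 will define.
// start code sketch
# Stakeholder groups from the P7001 working group (see text); levels are illustrative.
STAKEHOLDERS = ["users", "safety certifiers", "accident investigators",
                "lawyers / expert witnesses", "wider public"]

# Hypothetical self-assessment: transparency level achieved per group (0 = none).
self_assessment = {
    "users": 2,                      # e.g. natural-language "why" explanations
    "safety certifiers": 3,          # e.g. design documents plus verified test results
    "accident investigators": 1,     # e.g. only coarse event logs retained
    "lawyers / expert witnesses": 1,
    "wider public": 0,
}

MINIMUM_LEVEL = 1  # assumed minimum for compliance in this sketch

def compliance_report(assessment: dict, minimum: int) -> None:
    """Print level achieved per stakeholder group and flag any gaps."""
    for group in STAKEHOLDERS:
        level = assessment.get(group, 0)
        status = "OK " if level >= minimum else "GAP"
        print(f"{status} {group:28s} level {level}")

compliance_report(self_assessment, MINIMUM_LEVEL)
// end code sketch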
Of course, the way in which transparency is provided is very different for each group.
Safety certification agencies need access to technical details of how the system
works, together with verified test results. Accident investigators will need access to
data logs of exactly what happened prior to and during an accident, most likely
provided by something akin to an aircraft flight data recorder10. Lawyers and expert
witnesses will need access to the reports of safety certifiers and accident
investigators, along with evidence of the developer or manufacturer’s quality
management processes. And wider society needs accessible documentary-type
science communication to explain autonomous systems and how they work. P7001
will provide system designers with a toolkit for self-assessing transparency, and
recommendations for how to achieve greater transparency and explainability.
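For the accident-investigation case in particular, the kind of data log described above might look something like the following sketch of an ethical black box10: a bounded, time-stamped record of sensor readings, internal state and actions that can be exported for post-accident analysis. The field names and retention policy are illustrative assumptions, not a specification.
// start code sketch
import json
import time
from collections import deque

class EthicalBlackBox:
    """Minimal sketch of a flight-data-recorder-style logger for a robot.

    Keeps a bounded, time-stamped record of sensor readings, internal state
    and actions so that an investigator can reconstruct what happened.
    """

    def __init__(self, max_records: int = 100_000):
        self._records = deque(maxlen=max_records)  # oldest entries are overwritten

    def log(self, sensors: dict, internal_state: dict, action: str) -> None:
        self._records.append({
            "t": time.time(),
            "sensors": sensors,
            "state": internal_state,
            "action": action,
        })

    def export(self, path: str) -> None:
        """Dump the recorded history for post-accident analysis."""
        with open(path, "w") as f:
            json.dump(list(self._records), f, indent=2)

# Usage inside a robot's control loop (illustrative values only).
ebb = EthicalBlackBox()
ebb.log(sensors={"lidar_min_range_m": 0.42, "speed_mps": 0.3},
        internal_state={"mode": "navigate", "goal": "kitchen"},
        action="slow_down")
ebb.export("ebb_log.json")
// end code sketch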
Outlook
How might these new ethical standards be applied when, like most standards, they
are voluntary? First, standards which relate to safety (and especially safety-critical systems) can be mandated by licensing authorities, so that compliance with those standards becomes a de facto requirement of obtaining a licence to operate that system; for the P7000 series, candidates might include P7001 and P7009. Second, in a competitive market, compliance with ethical standards can be used to gain market advantage, especially among ethically aware consumers. Third, there is growing
pressure from professional bodies for their members to behave ethically. Emerging
professional codes of ethical conduct such as the recently published ACM11 and
IEEE12 codes of ethics and professional conduct are very encouraging; in turn, those
professionals are increasingly likely to exert internal pressure on their employers to
adopt ethical standards. And fourth, soft governance plays an important role in the
adoption of new standards: by requiring compliance with standards as a condition of awarding procurement contracts, governments can and do influence and direct the adoption of standards across an entire supply chain, without explicit regulation. For data- or privacy-critical applications, a number of the P7000 standards (P7002/3/4/5/12 and 13, for instance) could find application in this way.
While some argue over the pace and level of impact of robotics and AI (on jobs, for
instance), most agree that increasingly capable intelligent systems create significant
ethical challenges, as well as great promise. This new generation of ethical
standards takes a powerful first step toward addressing those challenges. Standards,
like open science13, are a trust technology. Without ethical standards, it is hard to see
how robots and AIs will be trusted and widely accepted, and without that acceptance
their great promise will not be realised.
Alan Winfield is Professor of Robot Ethics at the Bristol Robotics Laboratory, UWE
Bristol, and visiting professor at the University of York. He chairs IEEE Standards
Working Group P7001.
e-mail: Alan.Winfield@brl.ac.uk
The views expressed in this article are those of the author only, and do not represent
the opinions of any organisation mentioned, or with which I am affiliated.
References
1. http://alanwinfield.blogspot.com/2017/12/a-round-up-of-robotics-and-ai-ethics.html
2. Winfield, A. F. & Jirotka, M. Phil. Trans. R. Soc. A 376, 20180085 (2018); http://dx.doi.org/10.1098/rsta.2018.0085
3. British Standards Institute, BS 8611:2016 Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems (2016); https://shop.bsigroup.com/ProductDetail/?pid=000000000030320089
4. EPSRC, Principles of Robotics (2011); https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
5. https://standards.ieee.org/industry-connections/ec/autonomous-systems.html
6. http://standards.ieee.org/develop/indconn/ec/ec_about_us.pdf
7. IEEE Standards Association, Ethically Aligned Design (2017); https://ethicsinaction.ieee.org/
8. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_general_principles_v2.pdf
9. Stilgoe, J. & Winfield, A. Self-driving car companies should not be allowed to investigate their own crashes. The Guardian (13 April 2018); https://www.theguardian.com/science/political-science/2018/apr/13/self-driving-car-companies-should-not-be-allowed-to-investigate-their-own-crashes
10. Winfield, A. F. & Jirotka, M. The Case for an Ethical Black Box. In: Gao, Y., Fallah, S., Jin, Y. & Lekakou, C. (eds) Lecture Notes in Computer Science, vol. 10454 (Springer, Cham, 2017).
11. https://www.acm.org/code-of-ethics
12. https://www.ieee.org/about/corporate/governance/p7-8.html
13. Grand, A., Wilkinson, C., Bultitude, K. & Winfield, A. F. Open Science: a new 'trust technology'? Science Communication 34, 679-689 (2012).