Review
Roboethics: Fundamental Concepts and
Future Prospects
Spyros G. Tzafestas
School of Electrical and Computer Engineering, National Technical University of Athens, Zographou,
GR 15773 Athens, Greece; tzafesta@cs.ntua.gr
Received: 31 May 2018; Accepted: 13 June 2018; Published: 20 June 2018
Abstract:
Many recent studies (e.g., IFR: International Federation of Robotics, 2016) predict that the
number of robots (industrial, service/social, intelligent/autonomous) will increase enormously in the
future. Robots are directly involved in human life. Industrial robots, household robots, medical robots,
assistive robots, sociable/entertainment robots, and war robots all play important roles in human life
and raise crucial ethical problems for our society. The purpose of this paper is to provide an overview
of the fundamental concepts of robot ethics (roboethics) and some future prospects of robots and
roboethics, as an introduction to the present Special Issue of the journal Information on “Roboethics”.
We start with the question of what roboethics is, as well as a discussion of the methodologies of
roboethics, including a brief look at the branches and theories of ethics in general. Then, we outline
the major branches of roboethics, namely: medical roboethics, assistive roboethics, sociorobot ethics,
war roboethics, autonomous car ethics, and cyborg ethics. Finally, we present the prospects for the
future of robotics and roboethics.
Keywords:
ethics; roboethics; technoethics; robot morality; sociotechnical system; ethical liability;
assistive roboethics; medical roboethics; sociorobot ethics; war roboethics; cyborg ethics
1. Introduction
All of us should think about the ethics of the work/actions we select to do or the work/actions
we choose not to do. This includes the work/actions performed through robots which, nowadays,
strongly affect our lives. It is true that as technology progresses, the function of robots is upgrading
from that of a pure tool to a sociable being. As a result of this social involvement of present-day robots,
in many cases the associated social practices are likely to change. The question is how to control
the direction in which this will be done, especially from an ethics point of view. Many scholars in
the fields of intelligent systems, artificial intelligence, and robotics anticipate that in the near future
there will be a strong influence of cultural and societal values and norms on the development of
robotics, and conversely an influence of robot cultural values on human beings [1]. This means that
social and cultural factors (norms, morals, beliefs, etc.) affect the design, operation, application, use,
and evaluation of robots and other technologies. Overall, the symbiosis of humans and robots will
reach higher levels of integration and understanding.
Roboethics is a fundamental requirement for assuring a sustainable, ethical, and profitable
human-robot symbiosis. Roboethics belongs to technoethics, which was initiated by Jose Maria
Galvan via his talk about the “ethical dimension of technology” in the Workshop on “Humanoids:
A Techno-ontological Approach” (IEEE Robotics and Automation Conference on Humanoid Robots,
Waseda University, 2001) [2]. Today, there are many books, conference proceedings, and journal Special
Issues on roboethics (e.g., [3–13]).
Three influential events on roboethics that took place in the initial period of the field are:
• 2004: First Roboethics International Symposium (Sanremo, Italy).
• 2005: IEEE Robotics and Automation Society Roboethics Workshop: ICRA 2005 (Barcelona, Spain).
• 2006: Roboethics Minisymposium: IEEE BioRob 2006—Biomedical Robotics and Biomechatronics Conference (Pisa, Italy).
Other conferences on roboethics, or involving workshops or tracks on roboethics, held in the
period of 2006–2018 include:
• 2006: ETHICBOTS European Project International Workshop on Ethics of Human Interaction with Robotic, Bionic, and AI Systems: Concepts and Policies (Naples, October 2006).
• 2007: ICRA: IEEE R&A International Conference: Workshop on Roboethics: IEEE Robotics and Automation Society Technical Committee (RAS TC) on Roboethics (Rome, Italy).
• 2007: ICAIL 2007: International Conference on Artificial Intelligence and Law (Palo Alto, USA, 4–6 June 2007).
• 2007: CEPE 2007: International Symposium on Computer Ethics Philosophical Enquiry (Topic: Roboethics) (San Diego, USA, 12–14 July 2007).
• 2009: ICRA: IEEE R&A International Conference on Robotics and Automation: Workshop on Roboethics: IEEE RAS TC on Roboethics (Kobe, Japan, 2009).
• 2012: We Robot, University of Miami, FL, USA.
• 2013: International Workshop on Robot Ethics, University of Sheffield (February 2013).
• 2016: AAAI/Stanford Spring Symposium on Ethical and Moral Considerations in Non-Human Agents.
• 2016: International Research Conference on Robophilosophy (Main Topic: Roboethics), Aarhus University (17–21 October 2016).
• 2018: International Conference on Robophilosophy: Envisioning Robots and Society (Main Topic: Roboethics) (Vienna University, 14–17 February 2018).
In 2004 (25 February), the Fukuoka World Robot Declaration was issued (Fukuoka, Japan),
which included the following statement [14]:
“Confident of the future development of robot technology and of the numerous contributions
that robots will make to Humankind, this World Robot Declaration is Expectations for
next-generation robots: (a) next-generation robots will be partners that co-exist with
human beings; (b) next-generation robots will assist human beings both physically and
psychologically; (c) next-generation robots will contribute to the realization of a safe and
peaceful society”.
Clearly, this declaration tacitly promises that next-generation robots will be designed and used in
an ethical way for the benefit of human society.
An important contributor for the progress and impact of robotics of the future is the European
Robotics Research Network (EURON), which aims to promote excellence in robotics by creating
resources and disseminating/exchanging existing knowledge [14]. A major achievement of EURON is
the creation of a “Robotics Research Roadmap” that identifies and clarifies opportunities for developing
and exploiting advanced robot technology over a 20-year time frame in the future. A second product
of EURON is the “Roboethics Atelier”, a project funded and launched in 2005, with the goal to
draw the first “Roboethics Roadmap”. By now, this roadmap has embodied contributions of a large
number of scholars in the fields of sciences, technology, and humanities. The initial target of the
“Roboethics Roadmap” was the ethics of robot designers, manufacturers, and users.
It is emphasized that assuring roboethics requires the joint commitment of experts from different disciplines (electrical/mechanical/computer engineers, control/robotics/automation engineers, psychologists, cognitive scientists, artificial intelligence scientists, philosophers/ethicists, etc.) to design ethics-based robots and to adapt legislation to the technological and ethical issues that arise from the continuous advances and achievements of robotics.
The purpose of this paper is to present the fundamental concepts of roboethics (robot ethics) and
discuss some future perspectives of robots and roboethics. The structure of the paper is as follows:
• Section 2 analyzes the essential question: What is roboethics?
• Section 3 presents roboethics methodologies, starting with a brief review of ethics branches and theories.
• Section 4 outlines the roboethics branches, namely: medical roboethics, assistive roboethics, sociorobot ethics, war roboethics, autonomous car ethics, and cyborg ethics.
• Section 5 discusses some prospects for the future of robotics and roboethics.
• Section 6 gives the conclusions.
2. What Is Roboethics?
Roboethics is a modern interdisciplinary research field lying at the intersection of applied ethics
and robotics, which studies and attempts to understand and regulate the ethical implications and
consequences of robotics technology, particularly of intelligent/autonomous robots, in our society.
The primary objective of roboethics is to motivate the moral design, development, and use of robots for
the benefit of humanity [5]. The term roboethics (for robot ethics) was coined by Gianmarco Veruggio, who defines the field in the following way [2]:
“Roboethics is an applied ethics whose objective is to develop scientific/cultural/technical
tools that can be shared by different social groups and beliefs. These tools aim to promote
and encourage the development of robotics for the advancement of human society and
individuals, and to help preventing its misuse against humankind”.
To embrace a wide range of robots and potential robotic applications, Veruggio classified
roboethics in three levels as follows [2]:
• Level 1: Roboethics—This level refers intrinsically to philosophical issues, the humanities, and the social sciences.
• Level 2: Robot Ethics—This level refers mainly to science and technology.
• Level 3: Robot's Ethics—This level mostly concerns science fiction, but it opens a wide spectrum of future contributions in the robot's ethics field.
The basic problems faced by roboethics are: the dual use of robots (robots can be used or misused), the anthropomorphization of robots (from the Greek words άνθρωπος (anthropos) = human, and μορφή (morphe) = shape), the humanization (human-friendly making) of human-robot symbiosis, the reduction of the socio-technological gap, and the effect of robotics on the fair distribution of wealth and power [1,2]. During the last three or four decades, many scholars working in a variety of disciplines
(robotics, computer science, information technology, automation, philosophy, law, psychology, etc.)
have attempted to address the pressing ethical questions about creating and using robotic technology
in society. Many areas of robotics are impacted, particularly those where robots interact directly with
humans (assistive robots, elder care robots, sociable robots, entertainment robots, etc.). The area of
robotics which raises the most crucial ethical concerns is the area of military/war robots, especially
autonomous lethal robots [3,7,15]. Several prominent robotics researchers and professionals began
visibly working on the problem of making robots ethical. There are also many computer and artificial
intelligence scholars who have argued that robots and AI will one day take over the world. However,
many others, e.g., Roger K. Moore, say that this is not going to happen. According to him the problem
is not the robots taking over the world, but that some people want to pretend that robots are responsible
for themselves [16]. He says: "In fact, robots belong to us. People, companies, and governments build, own, and program robots. Whoever owns and operates a robot is responsible for what he does". Actually, roboethics has several common problems with computer ethics, information ethics,
automation ethics, and bioethics.
According to Peter M. Asaro [17], the three fundamental questions of roboethics are the following:
1. “How might humans act ethically through, or with, robots?
2. How can we design robots to act ethically? Or, can robots be truly moral agents?
3. How can we explain the ethical relationships between human and robots?”
In question 1, it is humans that are the ethical agents. In question 2, it is robots that are the ethical
beings. Sub-questions of question 3 include the following [5]:
•“Is it ethical to create artificial moral agents and ethical robots?
•Is it unethical not to design mental/intelligent robots that possess ethical reasoning abilities?
•Is it ethical to make robotic nurses or soldiers?
•What is the proper treatment of robots by humans, and how should robots treat people?
•Should robots have rights?
•Should moral/ethical robots have new legal status?”
Very broadly, scientists and engineers look at robotics in the following ways [5,11]:
•Robots are mere machines (albeit, very useful and sophisticated machines).
•Robots raise intrinsic ethical concerns along different human and technological dimensions.
• Robots can be conceived as moral agents, not necessarily possessing free will, mental states, emotions, or responsibility.
• Robots can be regarded as moral patients, i.e., beings deserving of at least some moral consideration.
To formulate a sound framework of roboethics, all of the above questions/aspects (at minimum)
must be properly addressed. Now, since humans and robots constitute a whole sociotechnical system,
it is not sufficient to concentrate on the ethical performance of individual humans and robots, but the
entire assembly of humans and robots must be considered, as dictated by system and cybernetics
theory [5,18]. The primary concern of roboethics is to assure that a robot or any other machine/artifact
is not doing harm, and only secondarily to specify the moral status of robots, resolve human ethical
dilemmas, or study ethical theories. This is because as robots become more sophisticated, intelligent,
and autonomous it will become more necessary to develop more advanced robot safety control
measures and systems to prevent the most critical dangers and potential harms. Of course, it should be remarked here that the dangers of robots do not differ from the dangers of other artifacts, such as factories, chemical processes, automatic control systems, weapons, etc. At minimum, moral/ethical
robots need to have: (i) the ability to predict the results of their own actions or inactions; (ii) a set of
ethical rules against which to evaluate each possible action/consequence; and (iii) a mechanism for
selecting the most ethical action.
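To make these three minimal capabilities concrete, the following short sketch (written in Python purely for illustration; the action set, the outcome predictor, and the rule predicates are hypothetical placeholders rather than an established implementation) shows how outcome prediction, rule-based evaluation, and action selection could be combined:

# Illustrative sketch of the three minimal capabilities listed above:
# (i) predict outcomes, (ii) check them against ethical rules, (iii) select an action.
def most_ethical_action(actions, situation, predict, ethical_rules, task_utility):
    # 'predict(situation, action)' is a placeholder world model returning the
    # expected outcome; 'ethical_rules' is a list of predicates over outcomes.
    permissible = []
    for action in actions:
        outcome = predict(situation, action)
        if all(rule(outcome) for rule in ethical_rules):
            permissible.append(action)
    if not permissible:
        return None  # refuse to act when no predicted outcome passes every rule
    # Among ethically permissible actions, prefer the one with the best task utility.
    return max(permissible, key=lambda a: task_utility(predict(situation, a)))

In such a scheme the ethical rules act as a filter over predicted outcomes, and the remaining freedom is resolved by ordinary task optimization.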
Roboethics involves three levels, namely [11]:
1. The ethical theory or theories adopted.
2. The code of ethics embedded into the robot (machine ethics).
3. The subjective morality resulting from the autonomous selection of ethical action(s) by a robot equipped with a conscience.
The three primary views of scientists and engineers about roboethics are the following [5,19]:
• Not interested in roboethics: These scholars say that the work of robot designers is purely technical and does not imply an ethical or social responsibility for them.
• Interested in short-term robot ethical issues: This view is advocated by those who adopt some social or ethical responsibility, by considering ethical behavior in terms of good or bad, and short-term impact.
• Interested in long-term robot ethical issues: Robotics scientists advocating this view express their robotic ethical concern in terms of global, long-term impact and aspects.
Some questions that have to be addressed in the framework of roboethics are [5]:
• Is ethics applied to robots an issue for the individual scholar or practitioner, the user, or a third party?
•What is the role that robots could have in our future life?
•How much could ethics be embedded into robots?
•How ethical is it to program robots to follow ethical codes?
•Which type of ethical codes are correct for robots?
• If a robot causes harm, is it responsible for this outcome or not? If not, who or what is responsible?
•Who is responsible for actions performed by human-robot hybrid beings?
•Is the need to embed autonomy in a robot contradictory to the need to embed ethics in it?
•What types of robots, if any, should not be designed? Why?
•How do robots determine what is the correct description of an action?
•If there are multiple rules, how do robots deal with conflicting rules?
•Are there any risks to creating emotional bonds with robots?
3. Roboethics Methodologies
Roboethics methodologies are developed adopting particular ethics theories. Therefore, before
discussing these methodologies, it is helpful to have a quick look at the branches and theories of ethics.
3.1. Ethics Branches
Ethics involves the following branches [5] (Figure 1):
• Meta-ethics. The study of concepts, judgements, and moral reasoning (i.e., what is the nature of morality in general, and what justifies moral judgements? What does right mean?).
• Normative (prescriptive) ethics. The elaboration of norms prescribing what is right or wrong, what must be done or what must not (What makes an action morally acceptable? Or what are the requirements for a human to live well? How should we act? What ought to be the case?).
• Applied ethics. The ethics branch which examines how ethics theories can be applied to specific problems/applications of actual life (technological, environmental, biological, professional, public sector, business ethics, etc., and how people take ethical knowledge and put it into practice). Applied ethics is actually contrasted with theoretical ethics.
• Descriptive ethics. The empirical study of people's moral beliefs, addressing the question: What is the case?
Figure 1. Branches of ethics. Source: https://commons.wikimedia.org/wiki/File:Ethics-en.svg.
3.2. Ethics Theories
Principal ethics theories are the following [5]:
• Virtue theory (Aristotle). The theory grounded on the notion of virtue, which is specified as what character a person needs to live well. This means that in virtue ethics the moral evaluation focuses on the inherent character of a person rather than on specific actions.
• Deontological theory (Kant). The theory that focuses on the principles upon which actions are based, rather than on the results of actions. In other words, moral evaluation is carried out on the actions according to imperative norms and duties. Therefore, to act rightly one must be motivated by proper universal deontological principles that treat everyone with respect ("respect for persons" theory).
• Utilitarian theory (Mill). A theory belonging to consequentialist ethics, which is "teleological", aiming at some final outcome and evaluating the morality of actions toward this desired outcome. Actually, utilitarianism measures morality based on the optimization of "net expected utility" for all persons affected by an action or decision. The fundamental principle of utilitarianism says: "Actions are moral to the extent that they are oriented towards promoting the best long-term interests (greatest good) for everyone concerned". The issue here is what the concept of greatest good means. The Aristotelian meaning of greatest good is well-being (pleasure or happiness).
Other ethics theories include value-based theory, justice as fairness theory, and case-based theory [5]. In real-life situations it is sometimes more effective to combine ethical rules from more than one ethics theory. This is so because in a dynamic world it is very difficult, and even impossible, to cover every possible situation by the principles and rules of a single ethics theory.
3.3. Roboethics Methodologies
Roboethics has two basic methodologies: top-down methodology and bottom-up
methodology [5,20,21].
• Top-down roboethics methodology. In this methodology, the rules of the desired ethical behavior of the robot are programmed and embodied in the robot system. The ethical rules can be formulated according to the deontological or the utilitarian theory or other ethics theories. The question here is: which theory is the most appropriate in each case? Top-down methodology in ethics originated from several areas including philosophy, religion, and literature. In control and automation systems design, the top-down approach means to analyze or decompose a task into simpler sub-tasks that can be hierarchically arranged and performed to achieve a desired output or product. In the ethical sense, following the top-down methodology means to select an antecedently specified ethical theory and obtain its implications for particular situations. In practice, robots should combine both meanings of the top-down concept (the control systems meaning and the ethical systems meaning).
Deontological roboethics: The first deontological robotic ethical system was proposed by
Asimov [22] and involves the following rules, which are known as Asimov’s Laws [5,22]:
• "Law 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
• Law 2: A robot must obey orders it receives from human beings except when such orders conflict with Law 1.
• Law 3: A robot must protect its own existence as long as such protection does not conflict with Laws 1 and 2."
Later, Asimov added a law which he called Law Zero, since it has a higher importance than Laws 1 through 3. This law states:
• "Law 0: No robot may harm humanity or, through inaction, allow humanity to come to harm."
Asimov’s laws are human-centered (anthropocentric) since they consider the role of robots in
human service. Actually, these laws assume that robots have sufficient intelligence (perception,
cognition) to make moral decisions using the rules in all situations, irrespective of their complexity.
Over the years several multi-rule deontological systems have been proposed, e.g., [23,24]. Their conflict problem is addressed by treating them as dictating prima facie duties [25].
In Reference [25], it is argued that for a robot to be ethically correct the following conditions
(desiderata) must be satisfied [5]:
• "Robots only take permissible actions.
• All relevant actions that are obligatory for robots are actually performed by them, subject to ties and conflicts among available actions.
• All permissible (or obligatory or forbidden) actions can be proved by the robot (and in some cases, associated systems, e.g., oversight systems) to be permissible (or obligatory or forbidden), and all such proofs can be explained in ordinary English".
The above ethical system can be implemented in top-down fashion.
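As a purely illustrative sketch of what such a top-down implementation might look like in software (all predicate names and outcome fields below are hypothetical), Asimov-style laws can be encoded as an ordered list of constraints, with the priority ordering realizing the "except when in conflict with a higher law" clauses:

# Hypothetical top-down deontological selector: laws are listed in priority order,
# and candidate actions are ranked lexicographically on which laws they satisfy,
# so a lower-priority law is sacrificed only when a higher-priority law demands it.
LAWS = [
    ("Law 0", lambda o: not o["harms_humanity"]),
    ("Law 1", lambda o: not o["harms_human"]),
    ("Law 2", lambda o: o["obeys_orders"]),
    ("Law 3", lambda o: o["preserves_self"]),
]

def law_profile(predicted_outcome):
    # Tuple of booleans in priority order, e.g., (True, True, False, True).
    return tuple(satisfied(predicted_outcome) for _, satisfied in LAWS)

def choose_action(actions, situation, predict):
    # Lexicographic comparison of the boolean tuples implements the priority:
    # satisfying Law 0 outweighs any combination of the lower laws, and so on.
    return max(actions, key=lambda a: law_profile(predict(situation, a)))

A fuller treatment would also refuse to act when even the best candidate violates Law 0 or Law 1, and would generate the plain-language explanation of each verdict that the desiderata above require.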
Consequentialist roboethics: As seen above, the morality of an action is evaluated on the basis of
its consequences. The best current moral action is the action that leads to the best future consequences.
A robot can reason and act along the consequentialist/utilitarian ethics theory if it is capable to [5]:
• "Describe every situation in the world.
• Produce alternative actions.
• Predict the situation(s) which would be the outcome of taking an action given the present situation.
• Evaluate a situation in terms of its goodness or utility."
The crucial issues here are how “goodness” is defined, and what optimization criterion is selected
for evaluating situations.
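Assuming, optimistically, that these four capabilities are available, a utilitarian selector can be sketched as follows; the prediction function, the set of affected persons, and the per-person utility function are hypothetical placeholders, and aggregation by summation is only one possible choice of the optimization criterion whose selection was just identified as the crucial issue:

# Illustrative utilitarian (consequentialist) action selection: choose the action
# whose predicted resulting situation maximizes total utility over affected persons.
def net_utility(situation, affected_persons, utility):
    # 'utility(person, situation)' encodes what "goodness" means for one person;
    # summing over all persons is one aggregation criterion among several possible.
    return sum(utility(person, situation) for person in affected_persons)

def utilitarian_choice(actions, current_situation, predict, affected_persons, utility):
    best_action, best_value = None, float("-inf")
    for action in actions:
        outcome = predict(current_situation, action)  # capability: outcome prediction
        value = net_utility(outcome, affected_persons, utility)
        if value > best_value:
            best_action, best_value = action, value
    return best_action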
• Bottom-up roboethics methodology. This methodology assumes that the robots possess adequate computational and artificial intelligence capabilities to adapt themselves to different contexts so as to be capable of learning, starting from perception of the world, then performing the planning of actions based on sensory data, and finally executing the action [26]. In this methodology, the use of any prior knowledge is only for the purpose of specifying the task to be performed, and not for specifying a control architecture or implementation technique. A detailed discussion of bottom-up and top-down roboethics approaches is provided in Reference [26]. Actually, for a robot to be an ethical learning robot both top-down and bottom-up approaches are needed (i.e., the robot should follow a suitable hybrid approach). Typically, the robot builds its morality through developmental learning, similar to the way children develop their conscience. Full discussions of top-down and bottom-up roboethics methodologies can be found in References [20,21].
The morality of robots can be classified into one of three levels [5,21]:
• Operational morality (moral responsibility lies entirely with the robot designer and user).
• Functional morality (the robot has the ability to make moral judgments without top-down instructions from humans, and the robot designers can no longer predict the robot's actions and their consequences).
• Full morality (the robot is so intelligent that it fully autonomously chooses its actions, thereby being fully responsible for them).
As seen in Figure 2, increasing the robot’s autonomy and ethical sensitivity increases the robot’s
level of moral agency.
Figure 2. Levels of robot morality (operational, functional, full) embedded in the robot autonomy vs. ethical sensitivity plane. Source: www.wonderfulengineering.com/future-robots-will-have-moral-and-ethical-sense.
4. Roboethics Branches
In the following we will outline the following roboethics branches:
•Medical roboethics.
•Assistive roboethics.
•Sociorobot ethics.
•War roboethics.
•Autonomous car ethics.
•Cyborg ethics.
4.1. Medical Roboethics
Medical roboethics (ethics of medical robots or health care robots) uses the principles of medical ethics and roboethics [5,27,28]. The fundamental area of medical robotics is robotic surgery, which finds increasing use in modern surgery. Robotic surgery has an excessive cost. Therefore, the question that immediately arises is [5]: "Given that there is marginal benefit from using robots, is it ethical to impose financial burden on patients or the medical system?". The critical issue in medical ethics is that the subject of health care and medicine refers to human health, life, and death. Medical ethics deals with ethical norms for the medical or health care practice, or how it must be done. Medical ethics was initiated in ancient Greece by Hippocrates, who formulated the well-known Hippocratic Oath (Όρκος του Ιπποκράτη, in Greek) [29].
The principles of medical ethics are based on the general theories of ethics (justice as fairness,
deontological, utilitarian, case-based theory), and the fundamental practical moral principles
(keep promises, do not interfere with the lives of others unless they request this form of
help, etc.) [23,28].
According to the well-known Georgetown Mantra (or six-part medical ethics approach) [30], all medical ethical decisions should involve at least the following principles [7,30]:
•“Autonomy: The patients have the right to accept or refuse a treatment.
•Beneficence: The doctor should act in the best interest of the patient.
•Non-maleficence: The doctor/practitioner should aim “first not to do harm”.
• Justice: The distribution of scarce health resources and the decision of who gets what treatment should be just.
• Truthfulness: The patient should not be lied to and has the right to know the whole truth.
•Dignity: The patient has the right to dignity”.
An authoritative code of ethics is the AMA (American Medical Association) code [31].
Robotic surgery ethics is a sub-area of applied medical ethics, and involves at minimum the above Georgetown Mantra principles. Medical treatment of any form should be ethical. However, a legal treatment may not be ethical. The legislation provides the minimum legal standard for people's performance. The ethical standards are specified by the principles of ethics and, in the context of licensed professionals (robotics engineers, information engineers, medical doctors, managers, etc.), are provided by the accepted code of ethics of each profession [32,33].
Injury law places on all individuals a duty of reasonable care to others, and determines this duty based on how "a reasonable/rational person" in the same situation would act. If a person (doctor, surgeon, car driver) causes injury to another because of unreasonable action, then the law imposes liability on the unreasonable person. A scenario concerning the case of injuring a patient in robotic surgery is discussed in Reference [5]. Figure 3 shows a snapshot of the Da Vinci robot and its accessories.
A branch of medicine which needs specialized ethical and law considerations is the branch
of telemedicine (especially across geographical and political boundaries). Telecare from different
countries should obey the standard ethics rules of medicine, e.g., the rules of confidentiality and
equipment reliability, while it may reduce the migration of specialists. Confidentiality is at risk
because of the possibility of overhearing. Here, the prevention of carelessness in the copying
of communications such as diagnoses is necessary, along with the assurance that non-physician
intermediaries (e.g., medical technicians or information experts) who collect data about patients
respect confidentiality. Communication should be sufficiently fast so as to assure that the ethical
requirements of beneficence and justice are met, and to reduce the unpleasant anxiety of the patients.
On the legal side, the so-called conflict of laws should be properly addressed. A first issue is whether a medical care professional, who has a licence to practice only in jurisdiction A but treats a patient in jurisdiction B, violates B's laws. Conflict of law principles should be applied here [34].
Figure 3. The Da Vinci surgical robot system. Source: www.montefiore.org/cancer-robotic-prostate-surgery.
4.2. Assistive Roboethics
Assistive robots constitute a class of service robots that focuses on the enhancement of the mobility capabilities of impaired people (people with special needs: PwSN) so as to attain their best physical and/or social functional level, and to gain the ability to live independently [5]. Assistive robots/devices include the following [5]:
•Assistive robots/devices for people with impaired lower limbs (wheelchairs, walkers).
•Assistive robots/devices for people with impaired upper limbs and hands.
•Rehabilitation robots/devices for upper limbs or lower limbs.
•Orthotic devices.
•Prosthetic devices.
Figure 4a shows the principal components of the Toyama University’s intelligent/self-navigated
wheelchair, and Figure 4b shows the McGill University’s multi-task smart/intelligent wheelchair
(smart wheeler).
Figure 4. (a) An intelligent wheelchair example with motor, PC, camera, and laser range sensor. (b) Smart multi-task wheelchair (McGill SmartWheeler Project). (a) Source: www3.u-toyama.ac.jp/mecha0/lab/mechacontr/res_ENG.html; (b) www.cs.mcgill.ca/~smartwheeler.
The evaluation of assistive robots can be made along three main dimensions, namely: cost, risk,
and benefit. Since these evaluation dimensions trade off against each other we cannot achieve full
points on all of them at the same time. Thus, their quantitative evaluation and the trade-off among the
different dimensions is needed. The evaluation of risk-benefit and cost-benefit should be conducted in
light of the impact of assistive technologies on users’ whole life in both the short term and the long
term. Important guidelines for these analyses have been provided by the World Health Organization
(WHO), which has approved an International Classification of Functioning, Disability, and Health
(ICF) [35].
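As a simple illustration of such a quantitative trade-off, candidate assistive devices can be ranked by a weighted benefit-risk-cost score; the weights, scores, and devices below are invented for the example, and in practice the underlying assessments would follow ICF-style evaluations of short- and long-term impact:

# Hypothetical weighted cost/risk/benefit comparison of candidate assistive devices.
WEIGHTS = {"benefit": 0.5, "risk": 0.3, "cost": 0.2}  # relative importance of each dimension

candidates = [
    {"name": "smart wheelchair A", "benefit": 0.8, "risk": 0.2, "cost": 0.7},
    {"name": "powered walker B",   "benefit": 0.6, "risk": 0.1, "cost": 0.3},
]

def score(device):
    # Benefit raises the score; risk and cost (all normalized to [0, 1]) lower it.
    return (WEIGHTS["benefit"] * device["benefit"]
            - WEIGHTS["risk"] * device["risk"]
            - WEIGHTS["cost"] * device["cost"])

best = max(candidates, key=score)
print(best["name"], round(score(best), 2))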
A framework for the development of assistive robots using ICF, which includes the evaluation of assistive technologies in users' life, is described in References [36,37]. In the ICF model, assistive robots,
besides activity, have impacts on body functions and structure/participation, and the functioning of
humans (combined, e.g., with welfare equipment, welfare service, housing environment, etc.).
Assistive robotics is part of medical robotics. Therefore, the principles of medical roboethics
(Georgetown Mantra, etc.) and the respective codes of ethics are applicable here. Doctors and
caregivers should carefully respect the following additional ethical aspects [5]:
1. Select and propose the most appropriate device which is economically affordable by the PwSN.
2. Consider assistive technology that can help the user do things that he/she finds difficult to do.
3. Ensure that the chosen assistive device is not used for activities that a person is capable of doing for him/herself (which will probably make the problem worse).
4. Use assistive solutions that respect the freedom and privacy of the person.
5. Ensure the users’ safety, which is of the greatest importance.
A full code of assistive technology was released in 2012 by the USA Rehabilitation Engineering and Assistive Technology Society (RESNA) [38], and another code by the Canadian Commission on Rehabilitation Counselor Certification (CRCC) was put forth in 2002 [39]. A four-level ethical decision-making scheme for assistive/rehabilitation robotics and other technologies is the following [5]:
• Level 1: Select the proper device—Users should be provided the proper assistive/rehabilitation devices and services, otherwise the non-maleficence ethical principle is violated. The principles of justice, beneficence, and autonomy should also be followed at this level.
• Level 2: Competence of therapists—Effective co-operation between therapists in order to plan the best therapy program. Here again the principles of justice, autonomy, beneficence, and non-maleficence should be respected.
• Level 3: Effectiveness and efficiency of assistive devices—Use should be made of effective, reliable, and cost-effective devices. The principles of beneficence, non-maleficence, etc. should be respected here. Of highest priority at this level is the justice ethical rule.
• Level 4: Societal resources and legislation—Societal, agency, and user resources should be appropriately exploited in order to achieve the best available technologies. Best-practice rehabilitation interventions should be followed for all aspects.
Level 1 is the "client professional relationship" level, level 2 is the "clinical multidisciplinary" level, level 3 is the "institutional/agency" level, and level 4 is the "society and public policy" level.
4.3. Sociorobot Ethics
Sociorobots (social, sociable, socialized, or socially assistive robots) are assistive robots that are designed to enter the mental and socialization space of humans, e.g., PaPeRo, PARO, Mobiserv, i-Cat, and NAO (Figure 5). This can be achieved by designing appropriate high-performance human-robot interfaces: HRI (speech, haptic, visual). The basic features required for a robot to be socially assistive are to [40]:
• Comprehend and interact with its environment.
• Exhibit social behavior (for assisting PwSN, the elderly, and children needing mental/socialization help).
• Direct its focus of attention and communication on the user (so as to help him/her achieve specific goals).
A socially interactive robot possesses the following capabilities [5,40–42]:
•“Express and/or perceive emotions.
•Communicate with high-level dialogue.
•Recognize other agents and learn their models.
•Establish and/or sustain social connections.
•Use natural patterns (gestures, gaze, etc.).
•Present distinctive personality and character.
•Develop and/or learn social competence.”
Some more sociorobots, other than those shown in Figure 5, include the following [40]:
•AIBO: a robotic dog (dogbot) able to interact with humans and play with a ball (SONY) [43].
•KISMET: a human-like robotic head able to express emotions (MIT) [44].
• KASPAR: a humanoid robot torso that can function as mediator of human interaction with autistic children [41].
•QRIO: a small entertainment humanoid (SONY) [45].
Sociorobots are marketed for use in a variety of environments (private homes, schools, elderly centers, hospitals). Therefore, they have to function in real environments, which includes interacting with family members, caregivers, and medical therapists [5,40]. Normally, a sociorobot
does not apply any physical force on the user, although the user can touch it, often as part of the
therapy. However, in most cases no physical user-robot contact is involved, and frequently the robot
is not even within the user’s reach. In most cases the robot lies within the user’s social interaction
domain in which a one-to-one interaction occurs via speech, gesture, and body motion. Thus, the use
of sociorobots raises a number of ethical issues that fall in the psychological, emotional, and social
sphere. Of course, since sociorobots constitute a category of medical robots, the principles of medical
roboethics discussed in Section 4.1 are all applied here as in the case of all assistive robots. In addition,
the following fundamental non-physical (emotional, behavioral) issues should be considered [5]:
• "Attachment: The ethical issue here arises when a user is emotionally attached to the robot. For example, in dementia/autistic persons, the robot's absence when it is removed for repair may produce distress and/or loss of therapeutic benefits.
• Deception: This effect can be created by the use of robots in assistive settings (robot companions, teachers, or coaches), or when the robot mimics the behavior of pets.
• Awareness: This issue concerns both users and caregivers, since they both need to be accurately informed of the risks and hazards associated with the use of robots.
• Robot authority: A sociorobot that acts as a therapist is given some authority to exert influence on the patient. Thus, the ethical issue here is who controls the type, the level, and the duration of interaction. If a patient wants to stop an exercise due to fatigue or pain a human therapist would accept this, but a robot might not. Such a feature should also be possessed by the robot.
• Autonomy: A mentally healthy person has the right to make informed decisions about his/her treatment. If he/she has cognition problems, this autonomy right is passed to the person who is legally and ethically responsible for the patient's therapy.
• Privacy: Securing privacy during robot-aided interaction and care is a primary requirement in all cases.
• Justice and responsibility: It is of primary ethical importance to observe the standard issues of the "fair distribution of scarce resources" and "responsibility assignment".
• Human-human relation (HHR): HHR is a very important ethical issue that has to be addressed when using assistive and socialized robots. The robots are used as a means of addition or enhancement of the therapy given by caregivers, not as a replacement of them."
Figure 5. Examples of sociorobots. (a) PaPeRo: www.materialicious.com/2009/11/communication-robot-papero.html; (b) PARO: www.roboticstoday.com/robots/paro; (c) Mobiserv: www.smart-homes.nl/Innoveren/Sociale-Robots/Mobiserv; (d) i-cat: www.bartneck.de/2009/08/12/photos-philips-icat-robot; (e) NAO: www.hackedgadgets.com/2011/02/18/nao-robot-demonstation.
4.4. War Roboethics
Military robots, especially lethal autonomous robotic weapons, lie at the center of roboethics.
Supporters of the use of war robots state that these robots have important advantages, which include saving the lives of soldiers and the safe clearing of seas and streets of IEDs (Improvised Explosive Devices). They also claim that autonomous robot weapons can expedite war more ethically and effectively than human soldiers who, under the influence of emotions, anger, fatigue, vengeance, etc., may overreact and overstep the laws of war. The opponents of the use of autonomous killer robots argue that weapon autonomy itself is the problem and that mere control of autonomous weapons would never be satisfactory. Their central belief is that autonomous lethal robots must be entirely prohibited [5].
War is defined as follows (Merriam Webster Dictionary):
•A state or period of fighting between countries or groups.
•A state of usually open and declared armed hostile conflict between states or nations.
•A period of such armed conflict.
A war does not really start until a conscious commitment and strong mobilization of the
belligerents occurs. War is a bad thing (it results in the deliberate killing or injuring of people) and raises critical ethical questions for any thoughtful person [5]. These questions are addressed by "war ethics".
The ethics of war attempts to resolve what is right or wrong, both for the individual and the states or
countries contributing to debates on public policy, and ultimately leading to the establishment of codes
of war [46,47]. The three dominating traditions (doctrines) in the ethics of war and peace are [5,48]:
•Realism (war is an inevitable process taking place in the anarchical world system).
•Pacifism or anti-warism (rejects war in favor of peace).
• Just war (just war theory specifies the conditions for judging if it is just to go to war, and conditions for how the war should be conducted).
Realism is distinguished into descriptive realism (states cannot behave morally in wartime) and prescriptive realism (a prudent state is obliged to act amorally in the international scene). Pacifism objects to killing in general and, in particular, objects to mass killing for political reasons as commonly occurs during wartime. A pacifist believes that war is always wrong.
Just war theory involves three parts which are known by their Latin names, i.e., jus ad bellum, jus in bello, and jus post bellum [5].
• "Jus ad bellum specifies the conditions under which the use of military force must be justified. The jus ad bellum requirements that have to be fulfilled for a resort to war to be justified are: (i) just cause; (ii) right intention; (iii) legitimate authority and declaration; (iv) last resort; (v) proportionality; (vi) chance of success.
• Jus in bello refers to justice in war, i.e., to conducting a war in an ethical manner. According to international war law, a war should be conducted obeying all international laws for weapons prohibition (e.g., biological or chemical weapons), and for benevolent quarantine for prisoners of war (POWs).
• Jus post bellum refers to justice at war termination. Its purpose is to regulate the termination of wars and to facilitate the return to peace. Actually, no global law exists for jus post bellum. The return to peace should obey the general moral laws of human rights to life and liberty."
The international law of war or international humanitarian law attempts to limit the effects
of armed conflict for humanitarian purposes. The humanitarian jus in bello law has the following
principles [5,48]:
1. Discrimination: It is immoral to kill civilians, i.e., non-combatants. Weapons (non-prohibited) may be used only against those who are engaged in doing harm.
2. Proportionality: Soldiers are entitled to use only force proportional to the goal sought.
3. Benevolent treatment of POWs: Captive enemy soldiers are "no longer engaged in harm", and so they are to be provided with benevolent (not malevolent) quarantine away from battle zones, and they should be exchanged for one's own POWs after the end of the war.
4. Controlled weapons: Soldiers are allowed to use controlled weapons and methods which are not evil in themselves.
5. No retaliation: This occurs when a state A violates jus in bello in war in state B, and state B retaliates with its own violation of jus in bello, in order to force A to obey the rules.
In general, a war is considered a just war if it is both justified and carried out in the right way.
The ethical and legal rules of conducting wars using robotic weapons, in addition to conventional weapons, include at minimum all of the rules of just war discussed above, but the use of semiautonomous/autonomous robots adds new rules as follows:
• Firing decision: At present, the firing decision still lies with the human operator. However, the separation margin between human firing and autonomous firing in the battlefield is continuously decreasing.
• Discrimination: The ability to distinguish lawful from unlawful targets by robots varies enormously from one system to another, and present-day robots are still far from having visual capabilities that can faithfully discriminate between lawful and unlawful targets, even in close contact encounters. The distinction between lawful and unlawful targets is not a purely technical issue, but is considerably complicated by the lack of a clear definition of what counts as a civilian. The 1949 Geneva Convention states that a civilian can be defined by common sense, and the 1977 Protocol defines a civilian as any person who is not an active combatant (fighter).
• Responsibility: The assignment of responsibility in case of failure (harm) is both an ethical and a legislative issue in all robotic applications (medical, assistive, socialization, war robots). Yet this issue is much more critical in the case of war robots, which are designed to kill some humans with a view to saving others. The question is to whom blame and punishment should be assigned for improper fighting and unauthorized harm caused (intentionally or unintentionally) by an autonomous robot: the designer, the robot manufacturer, the robot controller/supervisor, the military commander, a state prime minister/president, or the robot itself? This question is very complicated and needs to be discussed more deeply as robots are given higher degrees of autonomy [49].
• Proportionality: The proportionality rule requires that even if a weapon meets the test of distinction, it must also undergo an evaluation that weighs the anticipated military advantage to be gained against the predicted civilian harm (to civilian persons or objects). In other words, the harm to civilians must not be excessive relative to the expected military gain. Proportionality is a fundamental requirement of just war theory and should be respected in the design and programming of any autonomous robotic weapon. A schematic illustration of how these checks might be combined is sketched after this list.
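These constraints are ethical and legal requirements rather than an algorithm, but it may help to see how a weapon-control architecture could encode them as explicit pre-engagement checks. The Python sketch below is purely illustrative: the input fields (a target-lawfulness flag with a classifier confidence, a human authorization flag, and rough estimates of military advantage and civilian harm) and all thresholds are assumptions made for this example, not elements of any fielded system or of the cited sources.

from dataclasses import dataclass

@dataclass
class EngagementRequest:
    """Hypothetical inputs a robotic weapon controller might receive."""
    target_is_lawful: bool            # output of a (fallible) discrimination module
    discrimination_confidence: float  # 0.0-1.0, how sure the classifier is
    human_authorization: bool         # has a human operator approved firing?
    expected_military_advantage: float  # rough, scenario-specific estimate
    expected_civilian_harm: float       # rough, scenario-specific estimate

def may_engage(req: EngagementRequest,
               min_confidence: float = 0.99,
               proportionality_ratio: float = 1.0) -> bool:
    """Return True only if all of the just-war style checks pass.
    The thresholds are placeholders; choosing them is itself an
    ethical and legal decision, not a technical one."""
    # Firing decision: keep a human in the loop.
    if not req.human_authorization:
        return False
    # Discrimination: engage only lawful targets, and only when the
    # classifier is confident enough.
    if not req.target_is_lawful or req.discrimination_confidence < min_confidence:
        return False
    # Proportionality: expected civilian harm must not be excessive
    # relative to the anticipated military advantage.
    if req.expected_civilian_harm > proportionality_ratio * req.expected_military_advantage:
        return False
    return True

# Example: lawful target, high classifier confidence, human approval,
# and predicted civilian harm well below the anticipated advantage.
print(may_engage(EngagementRequest(True, 0.995, True, 10.0, 2.0)))  # True

In such a sketch every rule acts as a veto: the human-in-the-loop requirement, the discrimination test, and the proportionality comparison must all pass before engagement is even considered, mirroring the conjunctive character of the jus in bello principles discussed above.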
Two examples of autonomous robotic weapons (fighters) are shown in Figure 6.
Figure 6. Autonomous fighter examples (MQ-1 Predator, M12). Source: www.kareneliot.de/OpenDrones/opendrones_1military.html; https://www.youtube.com/watch?v=_upbplsKGd4; https://www.digitaltrends.com/cool-tech/coolest-military-robots.
The use of autonomous robotic weapons in war is subject to a number of objections [5]:
• Inability to program war laws (programming the laws of war is a very difficult and challenging task, now and for the foreseeable future).
• Taking humans out of the firing loop (it is wrong per se to remove humans from the firing loop).
• Lower barriers to war (removing human soldiers from risk, and reducing harm to civilians through more accurate autonomous war robots, diminishes the disincentive to resort to war).
The Human Rights Watch (HRW) has issued a set of recommendations to all states, roboticists,
and other scientists involved in the development and production of robotic weapons, which aim to
minimize the development and use of autonomous lethal robots in war [50].
4.5. Autonomous Car Ethics
Autonomous (self-driving, driverless) cars are on the way [5]. Proponents of autonomous cars and other vehicles argue that within two or three decades autonomously driving cars will be so reliable that they will outnumber human-driven cars [51,52]. The specifics of self-driving vary from manufacturer to manufacturer, but at a basic level such cars use a set of cameras, lasers, and sensors located around the vehicle to detect obstacles, and employ GPS (global positioning system) receivers to help them move along a preset route (Figure 7).
Figure 7. Basic sensors of Google’s driverless car. Source: http://blog.cayenneapps.com/2016/06/13/self-driving-cars-swot-analysis.
Currently there are cars on the road that perform several driving tasks autonomously (without the
help of the human driver). Examples are: lane assist systems to keep the car in the lane, cruise control
systems that speed up or slow down according to the speed of the car in front, and automatic emergency
braking for emergency stops to prevent collisions with pedestrians.
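As a rough illustration of how such assistance functions behave, the sketch below combines a simple adaptive-cruise rule with an emergency-braking trigger. The function name, thresholds, and time-gap logic are assumptions made purely for illustration; production driver-assistance systems are considerably more elaborate.

def assist_decision(ego_speed_mps: float,
                    lead_distance_m: float,
                    lead_speed_mps: float,
                    desired_gap_s: float = 2.0,
                    emergency_ttc_s: float = 1.5) -> str:
    """Very simplified longitudinal-control sketch (illustrative only).
    Returns one of: 'emergency_brake', 'slow_down', 'speed_up', 'hold'."""
    closing_speed = ego_speed_mps - lead_speed_mps
    # Automatic emergency braking: brake if time-to-collision is short.
    if closing_speed > 0 and lead_distance_m / closing_speed < emergency_ttc_s:
        return "emergency_brake"
    # Adaptive cruise: keep roughly a fixed time gap to the car in front.
    gap_s = lead_distance_m / max(ego_speed_mps, 0.1)
    if gap_s < desired_gap_s:
        return "slow_down"
    if gap_s > 1.5 * desired_gap_s:
        return "speed_up"
    return "hold"

# Example: 25 m/s (~90 km/h), lead car 20 m ahead and much slower.
print(assist_decision(25.0, 20.0, 5.0))  # -> 'emergency_brake'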
SAE (Society of Automotive Engineers) International (www.sae.org/autodrive) developed and
released a new standard (J3016) for the “Taxonomy and definitions of terms related to on-road motor
vehicle automated driving systems”. This standard provides a harmonized classification system and
supporting definitions which:
•“Identify six levels of driving automation from ‘no automation’ to ‘full automation’.
•Base definitions and levels on functional aspects of technology.
•Describe categorical distinction for step-wise progression through the levels.
•Are consistent with current industry practice.
• Eliminate confusion and are useful across numerous disciplines (engineering, legal, media, and public discourse).
• Educate a wide community by clarifying for each level what role (if any) drivers have in performing the dynamic driving task while a driving automation system is engaged.”
The fundamental definitions included in J3016 are (orfe.princeton.edu, Business Wire, 2017):
• “Dynamic driving tasks (i.e., operational aspects of automatic driving, such as steering, braking, accelerating, monitoring the vehicle and the road, and tactical aspects such as responding to events, determining when to change lanes, turn, etc.).
• Driving mode (i.e., a form of driving scenario with appropriate dynamic driving task requirements, such as expressway merging, high-speed cruising, low-speed traffic jam, closed-campus operations, etc.).
• Request to intervene (i.e., notification by the automatic driving system to a human driver that he should promptly begin or resume performance of the dynamic driving task).”
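To make the level taxonomy and the “request to intervene” notion more concrete, a minimal sketch follows. The level names paraphrase commonly cited J3016 summaries, and the helper function (its name and the rule it encodes) is an illustrative assumption, not part of the standard itself.

from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels (summary paraphrase)."""
    NO_AUTOMATION = 0           # human does everything
    DRIVER_ASSISTANCE = 1       # e.g., adaptive cruise OR lane keeping
    PARTIAL_AUTOMATION = 2      # combined steering + speed, driver monitors
    CONDITIONAL_AUTOMATION = 3  # system monitors, driver must take over on request
    HIGH_AUTOMATION = 4         # no takeover needed within its design domain
    FULL_AUTOMATION = 5         # no driver needed anywhere

def fallback_is_human(level: SAELevel) -> bool:
    """Illustrative rule: up to level 3 a human must be ready to resume
    the dynamic driving task when requested to intervene; at levels 4-5
    the system itself must reach a safe state."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(fallback_is_human(SAELevel.CONDITIONAL_AUTOMATION))  # True
print(fallback_is_human(SAELevel.HIGH_AUTOMATION))         # False

A supervisory module built along these lines could, for example, refuse to enable a level 3 mode unless a driver-monitoring signal confirms that the human fallback is actually available.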
Figure 8 shows the milestones that need to be passed on the way to the final goal of fully automated vehicles, according to SAE, NHTSA (National Highway Traffic Safety Administration), and the German Federal Highway Research Institute (BASt).
Figure 8. Vehicle driving automation milestones adopted by SAE, NHTSA, and BASt. Source: https://www.schlegelundpartner.com (/cn/news/man-and-machine-automated-driving).
These scenarios and stages of development are subject to several legal and ethical problems
which are currently under investigation at regional and global levels. The most advanced country in
this development is the USA, while European countries are somewhat behind the USA. The general
legislation in the USA (primarily determined by NHTSA and the Geneva Convention on road traffic
of 1949) requires the active presence of a driver inside the vehicle who is capable of taking control
whenever necessary. Within the USA, each state enacts its own laws concerning automated driving cars. So far only four states (Michigan, California, Nevada, and Florida) have recognized automated driving software as legal. In Germany, the Federal Ministry of Transport has already allowed the use of driving assistance governed by corresponding legislation. Most car manufacturers are planning to produce autonomous driving technologies of various degrees. For example, Google has been testing a fully autonomous prototype that replaces the driver completely, and anticipates releasing its technology to the market by 2020. Automakers are proceeding towards full autonomy in stages; currently, most of them are at level 1 and only a few have introduced level 2 capabilities.
The fundamental ethical/liability question here is [5]: Who will be liable when a driverless car crashes? This question is analogous to the ethical/liability question of robotic surgery. Today, the great majority of car accidents are the fault of one driver or the other, or of both under some shared responsibility. Few collisions are deemed to be the responsibility of the car itself or of the manufacturer. However, this will not remain so if the car drives itself. Actually, it will be much harder to conventionally blame one driver or the other. Should the ethical and legal responsibility be shared by the manufacturer or multiple manufacturers, or the people who made the hardware or software? Or should another car that sent a faulty signal on the highway be blamed [5]? An extensive discussion of advantages/disadvantages, including legal and ethical issues, is provided in Reference [53].
4.6. Cyborg Ethics
Cyborg technology aims to design and study neuromotor prostheses in order to restore and reinstate lost function with a replacement that is as similar as possible to the real thing (a lost arm or hand, lost vision, etc.) [5,54]. The word cyborg stands for cybernetic organism, a term coined by Manfred Clynes and Nathan Kline [55]. A cyborg is any living being that has both organic and mechanical/electrical parts that either restore or enhance the organism’s functioning. People with the most common technological implants, such as prosthetic limbs, pacemakers, and cochlear/bionic ear implants, or people who receive implanted organs developed from artificially cultured stem cells, can be considered to belong to this category [56]. The first real cyborg was a “lab rat” created at Rockland State Hospital in 1950 (New York, www.scienceabc.com).
The principal advantages of mixing organs with mechanical parts are for human health. For example [5]:
• “People with replaced parts of their body (hips, elbows, knees, wrists, arteries, etc.) can now be classified as cyborgs.
• Brain implants based on neuromorphic models of the brain and the nervous system help reverse the most devastating symptoms of Parkinson’s disease.”
Disadvantages of cyborgs include [5]:
• “Cyborgs do not heal body damage normally; instead, body parts are replaced. Replacing broken limbs and damaged armor plating can be expensive and time-consuming.
• Cyborgs can think of the surrounding world in multiple dimensions, whereas human beings are more restricted in that sense” [56,57].
Figure 9 shows a cyborg/electronic eye.
Figure 9. An example of a cyborg eye. Source: https://www.behance.net/gallery/4411227/Cyborg-Eye-(Female).
Three of the world’s most famous real-life cyborgs are the following (Figure 10) [58]:
• The artist Neil Harbisson, born with achromatopsia (able to see only in black and white), is equipped with an antenna implanted into his head. With this eyeborg (electronic eye), he became able to render perceived colors as sounds on the musical scale.
• Jesse Sullivan suffered a life-threatening accident: he was electrocuted so severely that both of his arms needed to be amputated. He was fitted with a bionic limb connected through nerve-muscle grafting. He then became able to control his limb with his mind, and also to feel temperature as well as how much pressure his grip applies.
• Claudia Mitchell is the first woman to have received a bionic arm, after a motorcycle accident in which she lost her left arm completely.
Figure 10. Examples of human cyborgs. (a) Neil Harbisson; (b) Jesse Sullivan; (c) Claudia Mitchell. Source: www.medicalfuturist.com (/the-world-most-famous-real-life-cyborgs).
Cyborgs raise serious ethical concerns, especially in the case when the consciousness of a person is changed by the integration of human and machine [59]. Actually, in all cases cyborg technology blurs the human/machine distinction. However, in most cases, although the person’s physical capabilities take on a different form and are enhanced, his/her internal mental state, consciousness, and perception are not changed other than to the extent of changing what the individual might be capable of accomplishing [59]. Actually, what should be of maximum ethical concern is not the possible physical enhancements or repairs, but the case in which the nature of a human is changed by linking human and machine mental functioning. A philosophical discussion about cyborgs and the relationship between body and machine is provided in Reference [60].
5. Future Prospects of Robotics and Roboethics
In general, the intelligence capabilities of robots follow the development path of artificial
intelligence. The robots of today have capabilities compatible with “artificial narrow intelligence”
(ANI), i.e., they can execute specific focused tasks but cannot self-expand functionally. As a result,
they outperform humans in specific repetitive operations. By 2040, robots are expected to perform tasks
compatible with “artificial general intelligence” (AGI), i.e., they will be able to compete with humans
across all activities, and perhaps convince humans that they are “humans”. Soon after the AGI period,
robots are expected to demonstrate intelligence beyond human capabilities. In fact, many futurists, e.g., Hans Moravec (Carnegie Mellon University), predict that in the future, robots and machines will have superb features such as high-level reasoning, self-awareness, consciousness, conscience, emotion, and other feelings. Moravec [61] believes that in the future, the line between humans and robots will blur, and—although current robots are modeled on human senses, abilities, and actions—in the future they will evolve beyond this framework. Therefore, the following philosophical question arises: What makes a human being a human being and a robot a robot? The answer to this question given by several robotics scientists is that what makes a human being different from a robot, even if robots can reason, and are self-aware, emotional, and moral, is creativity.
The American Psychological Association (APA) points out that “in future, loneliness and isolation
may be a more serious public health hazard than obesity”. Ron Arkin (a roboethicist) says that “a
solution to this problem can be to use companion sociorobots, but there is a need to study deeply the
ethics of forming bonds/close relationships with these robots”. Today, human-robot relationships are
still largely task driven, i.e., the human gives the robot a task and expects it to be completed. In the
future, tasks are expected to be performed jointly by human-robot close co-operation and partnership.
The big, twofold question here is (mobile.abc.com): Should we allow robots to become partners with us in the same way that we allow humans to become partners? Is the concept of sentience or true feeling required in a robot for it to be respected? Arkin’s comment on this question is: “Robots propagate an illusion of life; they can create the belief that the robot actually cares about us, but what it cares is nothing”.
Three important questions about the robots of the future are (www.frontiers.org):
• How similar to humans should robots become?
• What are the possible effects of future technological progress of robotics on humans and society?
• How to best design future intelligent/autonomous robots?
These and other questions are discussed in Reference [62]. The human-robot similarity of the future depends on the further development of several scientific/technological fields such as artificial intelligence, speech recognition, processing and synthesis, human-computer interfaces and interaction, sensors and actuators, artificial muscles and skins, etc. Clearly, a proper synergy of these elements is required. Whether robots look like humans or not is not as important as how, and how much, robots can perform the tasks we want them to do (www.frontiers.org). The question here is: Given that we can create human-like (humanoid) robots, do we want or need them? According to the “uncanny valley” hypothesis, as robots become more similar to humans (human-like, anthropomorphic), the pleasure of having them around increases up to a certain point. When they are very similar to humans this pleasure falls abruptly. However, it later increases again when the robots become even more similar to humans (Figure 11). This decrease and increase of comfort as a robot becomes more anthropomorphic is the “uncanny valley”, which is discussed in detail in Reference [63].
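The uncanny valley is a qualitative hypothesis rather than a fitted curve, but a toy function can make the claimed shape concrete. The piecewise form and all numbers below are invented solely for illustration and carry no empirical weight.

def toy_uncanny_valley(human_likeness: float) -> float:
    """Illustrative affinity curve: rises with similarity, dips sharply
    near (but not at) full human-likeness, then recovers. Input and
    output are in [0, 1]. Shape and breakpoints are made up."""
    x = max(0.0, min(1.0, human_likeness))
    if x < 0.7:                              # affinity grows with similarity
        return x / 0.7 * 0.6
    if x < 0.85:                             # the "valley": a sharp drop
        return 0.6 - (x - 0.7) / 0.15 * 0.5
    return 0.1 + (x - 0.85) / 0.15 * 0.9     # recovery toward near-human

for x in (0.3, 0.7, 0.8, 1.0):
    print(f"similarity={x:.2f} -> affinity={toy_uncanny_valley(x):.2f}")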
Figure 11. The uncanny valley. Source: www.umich.edu/~uncanny.
The IEEE Global Initiative Committee issued a document on “AI and Autonomous Systems”,
which involves a set of general principles that are then applied to the following particular areas [64]:
•“Embedding values into autonomous intelligent systems.
•Methodologies to guide ethical research and design.
•Safety and beneficence of general AI and superintelligence.
•Reframing autonomous weapons systems.
•Economics and humanitarian issues.
•Personal data and individual access control.”
This IEEE document is subject to periodical revision.
An issue of strong current debate is whether future robots should have rights, and if yes,
what types of robots? And what rights? Present-day robots may not deserve to have rights, but many
robotic thinkers argue that robots of the future might have rights, such as the right to receive payments
for their services, the right to vote, the right to be protected like humans, etc. Going further, a highly
important question is: Can robots be regarded as active moral agents or moral patients? This question
is discussed, among others, by Mark Coeckelbergh [65].
Three opinions on these issues are the following (www.scuoladirobotica.it):
• Ray Jarvis (Monash University, Australia): “I think that we would recognize machine rights if we were looking at it from a human point of view. I think that humans, naturally, would be empathetic to a machine that had self-awareness. If the machine had the capacity to feel pain, if it had a psychological awareness that it was a slave, then we would want to extend rights to the machine. The question is how far should you go? To what do you extend rights?”
• Simon Longstaff (St. James Ethics Center, Australia): “It depends on how you define the conditions for personhood. Some use preferences as criteria, saying that a severely disabled baby, unable to make preferences, shouldn’t enjoy human rights, yet higher forms of animal life, capable of making preferences, are eligible for rights. [. . . ] Machines would never have to contend with transcending instinct and desire, which is what humans have to do. I imagine a hungry lion on a veldt about to spring on a gazelle. The lion as far we know doesn’t think, ‘Well I am hungry, but the gazelle is beautiful and has children to feed.’ It acts on instinct. Altruism is what makes us human, and I don’t know that you can program for altruism.”
• Jo Bell (Animal Liberation): “Asimov’s Robot series grappled with this sort of (rights) question. As we have incorporated other races and people-women, the disabled, into the category of those who can feel and think, then I think if we had machines of that kind, then we would have to extend some sort of rights to them.”
Over the years, many AI thinkers have worried that intelligent machines of the future
(called superintelligent or ultra-intelligent machines) could pose a threat to humanity. For example,
I.J. Good argued (1965) that “an ultra-intelligent machine could design even better machinery, and the
intelligence of man would be left far behind”.
Roger Moore, speaking about AI ethics, artificial intelligence, robots, and society, explained why people worry about the wrong things when they worry about AI [16]. He argues that the reasons not to worry are:
•“AI has the same problems as other conventional artifacts.
•It is wrong to exploit people’s ignorance and make them think AI is human.
•Robots will never be your friends.”
Things to worry about include:
• “Human culture is already a superintelligent machine turning the planet into apes, cows, and paper clips.
• Big data + better models = ever-improving prediction, even about individuals.”
General key topics for future roboethics include the following:
•Assuring that humans will be able to control future robots.
•Preventing the illegal use of future robots.
•Protecting data obtained by robots.
•Establishing clear traceability and identification of robots.
The need to develop new industrial standards for testing the AI/intelligent robots of the future will become much more crucial; otherwise, it will be difficult to implement and deploy future robots with superintelligence safely and profitably. Big ethical questions for the robots of the future include the following:
• Is it ethical to turn over all of our difficult and highly sensitive decisions to machines and robots?
• Is it ethical to outsource all of our autonomy to machines and robots that are able to make good decisions?
• What are the existential and ethical risks of developing superintelligent machines/robots?
6. Conclusions
The core of this paper (roboethics branches) followed the structure of the author’s book on roboethics [5]. The paper was concerned with the robot ethics field and its future prospects. Many of the fundamental concepts of ethics and roboethics were outlined at an introductory conceptual level, and some issues of future advanced artificial intelligence ethics and roboethics were discussed.
On topics as sensitive as decisions on human life (e.g., using autonomous robot weapons), the ethical issues of war and robot-based weapons were discussed, including the principal objections against the use of autonomous lethal robots in war. The general ethical questions in this area are: What kinds of decisions are we comfortable outsourcing to autonomous machines? What kinds of decisions should or should not always remain in the hands of humans? In other words, should robots be allowed to make life/death decisions? In cases not covered by the law in force, human beings remain under the protection of the principles of humanity and the dictates of public conscience according to the Geneva Conventions (Additional Protocol II). The Open Roboethics Institute (ORI)
conducted a world-wide public study collecting the opinions of a large number of individuals on the issue of autonomous robotic weapons use. The results of this study were documented and presented in Reference [66]. Other sensitive human life areas discussed in the paper are the use of robots in medicine, assistance to the elderly and impaired people, companionship/entertainment, driverless vehicles, and cybernetic organisms. Finally, another emerging area that raises critical ethical questions, and that was not discussed in this paper, is the area of sex or love-making robots (sexbots, lovebots). Representative references on sexbots include References [67–69]. A review of critical ethical issues in creating superintelligence is provided in [70], and a review of ‘cyborg enhancement technology’, with emphasis on brain enhancements and the creation of new senses, is given in [71].
Conflicts of Interest: The author declares no conflict of interest.
References
1. Sabanovic, S. Robots in society, society in robots. Int. J. Soc. Robot. 2010, 2, 439–450. [CrossRef]
2. Veruggio, G. The birth of roboethics. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2005): Workshop on Robot Ethics, Barcelona, Spain, 18 April 2005; pp. 1–4.
3. Lin, P.; Abney, K.; Bekey, G.A. Robot Ethics: The Ethical and Social Implications of Robotics; MIT Press: Cambridge, MA, USA, 2011.
4. Capurro, R.; Nagenborg, M. Ethics and Robotics; IOS Press: Amsterdam, The Netherlands, 2009.
5. Tzafestas, S.G. Roboethics: A Navigating Overview; Springer: Berlin, Germany; Dordrecht, The Netherlands, 2015.
6. Dekoulis, G. Robotics: Legal, Ethical, and Socioeconomic Impacts; InTech: Rijeka, Croatia, 2017.
7. Jha, U.C. Killer Robots: Lethal Autonomous Weapon Systems Legal, Ethical, and Moral Challenges; Vij Books India Pvt: New Delhi, India, 2016.
8. Gunkel, D.J. The Machine Question: Critical Perspectives on AI, Robots, and Ethics; MIT Press: Cambridge, MA, USA, 2012.
9. Dekker, M.; Guttman, M. Robo-and-Information Ethics: Some Fundamentals; LIT Verlag: Muenster, Germany, 2012.
10. Anderson, M.; Anderson, S.L. Machine Ethics; Cambridge University Press: Cambridge, UK, 2011.
11. Veruggio, G.; Solis, J.; Van der Loos, M. Roboethics: Ethics Applied to Robotics. IEEE Robot. Autom. Mag. 2011, 18, 21–22. [CrossRef]
12. Capurro, R. Ethics in Robotics. Available online: http://www.i-r-i-e.net/inhalt/006/006_full.pdf (accessed on 10 June 2018).
13. Lin, P.; Abney, K.; Jenkins, R. Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence; Oxford University Press: Oxford, UK, 2018.
14. Veruggio, G. Roboethics Roadmap. In Proceedings of the EURON Roboethics Atelier, Genoa, Italy, 27 February–3 March 2006.
15. Arkin, R. Governing Lethal Behavior of Autonomous Robots; Chapman and Hall: New York, NY, USA, 2009.
16. Moore, R.K. AI Ethics: Artificial Intelligence, Robots, and Society; CPSR: Seattle, WA, USA, 2015; Available online: www.cs.bath.ac.uk/~jjb/web/ai.html (accessed on 10 June 2018).
17. Asaro, P.M. What should we want from a robot ethics? IRIE Int. Rev. Inf. Ethics 2006, 6, 9–16.
18. Tzafestas, S.G. Systems, Cybernetics, Control, and Automation: Ontological, Epistemological, Societal, and Ethical Issues; River Publishers: Gistrup, Denmark, 2017.
19. Veruggio, G.; Operto, F. Roboethics: A bottom-up interdisciplinary discourse in the field of applied ethics in robotics. IRIE Int. Rev. Inf. Ethics 2006, 6, 2–8.
20. Wallach, W.; Allen, C. Moral Machines: Teaching Robots Right from Wrong; Oxford University Press: Oxford, UK, 2009.
21. Wallach, W.; Allen, C.; Smit, I. Machine morality: Bottom-up and top-down approaches for modeling moral faculties. AI Soc. 2008, 22, 565–582. [CrossRef]
22. Asimov, I. Runaround. Astounding Science Fiction, March 1942; republished in Robot Visions; New York, NY, USA, 1991.
23. Gert, B. Morality; Oxford University Press: Oxford, UK, 1988.
24. Gips, J. Toward the ethical robot. In Android Epistemology; Ford, K., Glymour, C., Hayes, P., Eds.; MIT Press: Cambridge, MA, USA, 1992.
25. Bringsjord, S. Ethical robots: The future can heed us. AI Soc. 2008, 22, 539–550. [CrossRef]
26. Dekker, M. Can humans be replaced by autonomous robots? Ethical reflections in the framework of an interdisciplinary technology assessment. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA’07), Rome, Italy, 10–14 April 2007.
27. Pence, G.E. Classic Cases in Medical Ethics; McGraw-Hill: New York, NY, USA, 2000.
28. Mappes, T.A.; DeGrazia, D. Biomedical Ethics; McGraw-Hill: New York, NY, USA, 2006.
29. North, M. The Hippocratic Oath (translation). National Library of Medicine, Greek Medicine. Available online: www.nlm.nih.gov/hmd/greek/greek_oath.html (accessed on 10 June 2018).
30. Paola, I.A.; Walker, R.; Nixon, L. Medical Ethics and Humanities; Jones & Bartlett Publishers: Sudbury, MA, USA, 2009.
31. AMA. Medical Ethics. 1995. Available online: https://www.ama.assn.org and https://www.ama.assn.org/delivering-care/ama-code-medical-ethics (accessed on 10 June 2018).
32. Beabout, G.R.; Wennemann, D.J. Applied Professional Ethics; University Press of America: Milburn, NJ, USA, 1993.
33. Rowan, J.R.; Sinaih, S., Jr. Ethics for the Professions; Cengage Learning: Boston, MA, USA, 2002.
34. Dickens, B.M.; Cook, R.J. Legal and ethical issues in telemedicine and robotics. Int. J. Gynecol. Obstet. 2006, 94, 73–78. [CrossRef] [PubMed]
35. World Health Organization. International Classification of Functioning, Disability, and Health; World Health Organization: Geneva, Switzerland, 2001.
36. Tanaka, H.; Yoshikawa, M.; Oyama, E.; Wakita, Y.; Matsumoto, Y. Development of assistive robots using international classification of functioning, disability, and health (ICF). J. Robot. 2013, 2013, 608191. [CrossRef]
37. Tanaka, H.; Wakita, Y.; Matsumoto, Y. Needs analysis and benefit description of robotic arms for daily support. In Proceedings of the RO-MAN’15: 24th IEEE International Symposium on Robot and Human Interactive Communication, Kobe, Japan, 31 August–4 September 2015.
38. RESNA Code of Ethics. Available online: http://resna.org/certification/RESNA_Code_of_Ethics.pdf (accessed on 10 June 2018).
39. Ethics Resources. Available online: www.crccertification.com/pages/crc_ccrc_code_of_ethics/10.php (accessed on 10 June 2018).
40. Tzafestas, S.G. Sociorobot World: A Guided Tour for All; Springer: Berlin, Germany, 2016.
41. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166.
42. Darling, K. Extending legal protections in social robots: The effect of anthropomorphism, empathy, and violent behavior towards robots. In Robot Law; Calo, M.R., Froomkin, M., Kerr, I., Eds.; Edward Elgar Publishing: Brookfield, VT, USA, 2016.
43. Melson, G.F.; Kahn, P.H., Jr.; Beck, A.; Friedman, B. Robotic pets in human lives: Implications for the human-animal bond and for human relationships with personified technologies. J. Soc. Issues 2009, 65, 545–567. [CrossRef]
44. Breazeal, C. Designing Sociable Robots; MIT Press: Cambridge, MA, USA, 2002.
45. Sawada, T.; Takagi, T.; Fujita, M. Behavior selection and motion modulation in emotionally grounded architecture for QRIO SDR-4XIII. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), Sendai, Japan, 28 September–2 October 2004; pp. 2514–2519.
46. Asaro, P. How Just Could a Robot War Be? IOS Press: Amsterdam, The Netherlands, 2008.
47. Walzer, M. Just and Unjust Wars: A Moral Argument with Historical Illustrations; Basic Books: New York, NY, USA, 2000.
48. Coates, A.J. The Ethics of War; University of Manchester Press: Manchester, UK, 1997.
49. Asaro, P. Robots and responsibility from a legal perspective. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation: Workshop on Roboethics, Rome, Italy, 10–14 April 2007.
50. Human Rights Watch. HRW-IHRC, Losing Humanity: The Case against Killer Robots; Human Rights Watch: New York, NY, USA, 2012; Available online: www.hrw.org (accessed on 10 June 2018).
51. Marcus, G. Moral Machines. Available online: www.newyorker.com/news_desk/moral_machines (accessed on 24 November 2012).
52. Self-Driving Cars: Absolutely Everything You Need to Know. Available online: http://recombu.com/cars/article/self-driving-cars-everything-you-need-to-know (accessed on 10 June 2018).
53. Notes on Autonomous Cars: Lesswrong. 2013. Available online: http://lesswrong.com/lw/gfv/notes_on_autonomous_cars (accessed on 10 June 2018).
54. Lynch, W. Implants: Reconstructing the Human Body; Van Nostrand Reinhold: New York, NY, USA, 1982.
55. Clynes, M.; Kline, N.S. Cyborgs and Space. Astronautics 1995, 29–33. Available online: http://www.tantrik-astrologer.in/book/linked/2290.pdf (accessed on 10 June 2018).
56. Warwick, K. A Study of Cyborgs. Royal Academy of Engineering. Available online: www.ingenia.org.uk/Ingenia/Articles/217 (accessed on 10 June 2018).
57. Warwick, K. Homo Technologicus: Threat or Opportunity? Philosophies 2016, 1, 199. [CrossRef]
58. Seven Real Life Human Cyborgs. Available online: www.mnn.com/leaderboard/stories/7-real-life-human-cyborgs (accessed on 10 June 2018).
59. Warwick, K. Cyborg morals, cyborg values, cyborg ethics. Ethics Inf. Technol. 2003, 5, 131–137. [CrossRef]
60. Palese, E. Robots and cyborgs: To be or to have a body? Poiesis Prax. 2012, 8, 191–196. [CrossRef] [PubMed]
61. Moravec, H. Robot: Mere Machine to Transcendent Mind; Oxford University Press: Oxford, UK, 1998.
62. Torresen, J. A review of future and ethical perspectives of robotics and AI. Front. Robot. AI 2018. [CrossRef]
63. MacDorman, K.F. Androids as an experimental apparatus: Why is there an uncanny valley and can we exploit it? In Proceedings of the CogSci 2005 Workshop: Toward Social Mechanisms of Android Science, Stresa, Italy, 25–26 July 2005; pp. 106–118.
64. IEEE Standards Association. IEEE Ethically Aligned Design; IEEE Standards Association: Piscataway, NJ, USA, 2016; Available online: http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf (accessed on 10 June 2018).
65. Coeckelbergh, M. Robot Rights? Towards a social-relational justification of moral consideration. Ethics Inf. Technol. 2010, 12, 209–221. [CrossRef]
66. ORI: Open Roboethics Institute. Should Robots Make Life/Death Decisions? In Proceedings of the UN Discussion on Lethal Autonomous Weapons, UN Palais des Nations, Geneva, Switzerland, 13–17 April 2015.
67. Sullins, J.P. Robots, love, and sex: The ethics of building a love machine. IEEE Trans. Affect. Comput. 2012, 3, 389–409. [CrossRef]
68. Cheok, A.D.; Ricart, C.P.; Edirisinghe, C. Special Issue “Love and Sex with Robots”. Available online: http://www.mdpi.com/journal/mti/special_issues/robots (accessed on 10 June 2018).
69. Levy, D. Love and Sex with Robots: The Evolution of Human-Robot Relationships; Harper Perennial: London, UK, 2008.
70. Bostrom, N. Ethical issues in advanced artificial intelligence. In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and Artificial Intelligence; Lasker, G.E., Marreiros, G., Wallach, W., Smit, I., Eds.; International Institute for Advanced Studies in Systems Research and Cybernetics: Tecumseh, ON, Canada, 2003; Volume 2, pp. 12–17.
71. Barfield, W.; Williams, A. Cyborgs and enhancement technology. Philosophies 2017, 2, 4. [CrossRef]
© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).