Machine Medical Ethics

Book · October 2014

Publisher: Springer. Editors: Simon Peter van Rysewyk, Matthijs Pontier
Simon Peter van Rysewyk · Matthijs Pontier, Editors
Intelligent Systems, Control and Automation:
Science and Engineering
Volume 74
Series editor
S.G. Tzafestas, Athens, Greece
Editorial Advisory Board
P. Antsaklis, Notre Dame, IN, USA
P. Borne, Lille, France
D.G. Caldwell, Salford, UK
C.S. Chen, Akron, OH, USA
T. Fukuda, Nagoya, Japan
S. Monaco, Rome, Italy
G. Schmidt, Munich, Germany
S.G. Tzafestas, Athens, Greece
F. Harashima, Tokyo, Japan
D. Tabak, Fairfax, VA, USA
K. Valavanis, Denver, CO, USA
Simon Peter van Rysewyk
Graduate Institute of Humanities
in Medicine
Taipei Medical University
Department of Philosophy
School of Humanities
University of Tasmania
Springer Cham Heidelberg New York Dordrecht London
© Springer International Publishing Switzerland 2015
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or
information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts
in connection with reviews or scholarly analysis or material supplied specifically for the purpose of
being entered and executed on a computer system, for exclusive use by the purchaser of the work.
Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright
Law of the Publisher’s location, in its current version, and permission for use must always be obtained
from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance
Center. Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for
any errors or omissions that may be made. The publisher makes no warranty, express or implied, with
respect to the material contained herein.
Printed on acid-free paper
Springer is part of Springer Science+Business Media
ISSN 2213-8986 ISSN 2213-8994 (electronic)
ISBN 978-3-319-08107-6 ISBN 978-3-319-08108-3 (eBook)
DOI 10.1007/978-3-319-08108-3
Library of Congress Control Number: 2014947388
Matthijs Pontier
The Centre for Advanced Media Research
VU University Amsterdam
The Netherlands
Machines are occupying increasingly visible roles in human medical care. In hos-
pitals, private clinics, care residences, and private homes, machines are interacting
in close proximity with many people, sometimes the most vulnerable members of
the human population. Medical machines are undertaking tasks that require inter-
active and emotional sensitivity, practical knowledge of a range of rules of pro-
fessional conduct, and general ethical insight, autonomy, and responsibility. They
will be working with patients who are in fragile states of health, or who have phys-
ical or cognitive disabilities of various kinds, who are very young or very old. The
medical profession has well-defined codes of conduct for interacting with patients,
in relation to minimizing harm, responsible and safe action, privacy, informed con-
sent, and regard for personal dignity.
Although there is general agreement in the field of machine ethics that medical
machines ought to be ethical, many important questions remain. What ethical theory
or theories should constrain medical machine conduct? Is theory even necessary?
What implementation and design features are required in medical machines? In
what specific situations will it be necessary for machines to share praise or blame
with humans for the ethical consequences of their decisions and actions? Are there
medical decisions for which machine support is necessary? These questions are truly
twenty-first century challenges, and for the first time are addressed in detail in this
edited collection.
The collection is logically organized in two parts. The essays in Part I address
foundational questions concerning machine ethics and machine medical ethics
(“An Overview of Machine Medical Ethics”–“Moral Ecology Approaches to
Machine Ethics”). Part II focuses on contemporary challenges in machine medical ethics, and includes three sections: Justice, Rights, and the Law (“Opportunity
Costs: Scarcity and Complex Medical Machines”–“Machine Medical Ethics
and Robot Law: Legal Necessity or Science Fiction?”), Decision-Making,
Responsibility, and Care (“Having the Final Say: Machine Support of Ethical
Decisions of Doctors”–“Machine Medical Ethics: When a Human Is Delusive but
the Machine Has Its Wits About Him”), and Technologies and Models (“ELIZA
Fifty Years Later: An Automatic Therapist Using Bottom-Up and Top-Down
Approaches”–“Ethical and Technical Aspects of Emotions to Create Empathy
in Medical Machines”). The collection Epilogue is an ethical dialog between a
researcher and a visual artist on machine esthetic understanding.
In “An Overview of Machine Medical Ethics”, Tatjana Kochetkova suggests
machine roles in medicine be limited to medical cases for which professional
codes of medical conduct already exist. In such “consensus cases,” machine algo-
rithms in operant medical machines should be either top-down, bottom-up, or
mixed (top-down-bottom-up). Kochetkova cautiously reasons that it is premature
to accord medical machines full ethical status. Instead, prudence suggests they be
designed as explicit, but not full, ethical agents by humans.
Oliver Bendel (“Surgical, Therapeutic, Nursing and Sex Robots in Machine
and Information Ethics”) attempts to shake loose the nature of machine medical
ethics by classifying medical machines according to context (surgery, therapy,
nursing, and sex), function, and stage of development. Bendel ponders the sub-
field of machine medical ethics in relation to its parent disciplines machine ethics
and applied ethics, and asks whether machine medical ethics can function inde-
pendently of these fields. Bendel argues that, in the best ethical case, a medical machine ought to interact with humans in order to respect and preserve their dignity.
Mark Coeckelbergh (“Good Healthcare Is in the “How”: The Quality of Care,
The Role of Machines, and the Need for New Skills”) investigates whether
machines threaten or enhance good health care. He argues that “good health care”
relies on expert know-how and skills that enable caregivers to carefully engage
with patients. Evaluating the introduction of new technologies such as robots or
expert medical machines then requires us to ask how the technologies impact on
the “know-how” expertise of caregivers, and whether they encourage a less careful
way of doing things. Ultimately, Coeckelbergh thinks these machines demand not only new skills to handle the technology but also new know-how to handle people: knowing how to be careful and caring with the technology.
In “Implementation Fundamentals for Ethical Medical Agents”, Mark R. Waser
identifies some broad guidelines for the implementation of ethics in medical
machines while acknowledging current machine limitations. All ethical machines
need top-down medical decision-making rules and bottom-up methods to collect
medical data, information, and knowledge as input to those rules, codified meth-
ods to determine the source, quality and accuracy of that input, and methods to
recognize medical situations beyond machine expertise and which require special-
ist human intervention. Waser thinks correct codification and documentation of the
processes by which each medical decision is reached will prove to be more impor-
tant than the individual decisions themselves.
In “Towards a Principle-Based Healthcare Agent”, Susan Leigh Anderson and
Michael Anderson present a top-down method for discovering the ethically rele-
vant features of possible actions that could be used by a machine as prima facie
duties to either maximize or minimize those features, as well as decision principles
that should be used to influence its behavior. This deontic approach is challenged
by Gary Comstock and Joshua Lucas in “Do Machines Have Prima Facie Duties?”
Among several arguments Comstock and Lucas present against the Andersons and
their prima facie method, they argue that such duties do not uniquely simulate the
complexities of ethical decision-making. To substantiate this claim, Comstock and
Lucas propose an act-utilitarian alternative they call Satisficing Hedonistic Act
Utilitarianism (SHAU). They show that SHAU can engage in ethical decision-mak-
ing just as sophisticated as prima facie based ethical deliberation, and can produce
the same verdict as a prima facie duty-based ethic in the medical case investigated
by the Andersons.
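The Andersons' method can be pictured with a toy calculation: represent each candidate action by its ethically relevant features, treat those features as weighted prima facie duties, and let a decision principle pick the action with the best duty profile. The Python fragment below is a minimal sketch in this spirit; the duty names, weights, and actions are illustrative assumptions, not the Andersons' actual system.

```python
# Hypothetical sketch of weighing prima facie duties; the duty names and
# weights here are illustrative only, not the Andersons' implementation.

DUTIES = {"nonmaleficence": 2, "beneficence": 1, "autonomy": 1}  # weights

def score(action_features):
    """Each duty is satisfied (+1), violated (-1), or untouched (0)."""
    return sum(DUTIES[d] * action_features.get(d, 0) for d in DUTIES)

def choose(actions):
    """Decision principle: pick the action whose duty profile scores highest."""
    return max(actions, key=lambda a: score(actions[a]))

# Reminding a patient satisfies nonmaleficence and beneficence but intrudes
# on autonomy; waiting respects autonomy but risks harm.
actions = {
    "remind": {"nonmaleficence": 1, "beneficence": 1, "autonomy": -1},
    "wait":   {"nonmaleficence": -1, "autonomy": 1},
}
assert choose(actions) == "remind"
```

With these assumed weights, reminding scores 2 against -1 for waiting; a different weighting could reverse the verdict, which is exactly the point of Comstock and Lucas's challenge.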
In contrast to the approach taken in the preceding chapters, the next two chapters argue against the idea of a single theoretical machine ethic and for the idea
that hybrid top-down-bottom-up approaches offer a more promising ethical line
(“A Hybrid Bottom-Up and Top-Down Approach to Machine Medical Ethics:
Theory and Data” and “Moral Ecology Approaches to Machine Ethics”). Simon
Peter van Rysewyk and Matthijs Pontier (“A Hybrid Bottom-Up and Top-Down
Approach to Machine Medical Ethics: Theory and Data”) describe an experiment
in which a machine (Silicon Coppélia) run on a hybrid ethic combining utilitarian-
ism, deontology, and case-based reasoning matches in its own actions, the respec-
tive acts of human medical professionals in six clinical simulations. Christopher
Charles Santos-Lang (“Moral Ecology Approaches to Machine Ethics”) makes
an interesting point that the brains of human beings are “hybrids individually,” by
which he means that living brains can adapt their deliberations and judgments to
present circumstances in contrast to ecosystem approaches to ethics, which pro-
mote hybridization across, rather than within, individuals. Santos-Lang urges that we design and build diverse teams of machines to simulate the best human teams, instead of mass-producing identical machines to simulate the best individuals.
Adam Henschke begins Part II, Contemporary Challenges in Machine Medical Ethics (Justice, Rights, and the Law), with “Opportunity Costs: Scarcity and Complex Medical Machines”. According to Henschke, future medical machines that prioritize health care only for a minority of patients to the disadvantage of the majority are ethically unjustified, especially when resources are scarce. Instead,
in a depressed global economy, optimizing health care outcomes requires funding
increases for existing health care resources, such as nurses, nursing homes, and
family that provide care to their loved ones, rather than mass-producing expensive
medical machines that may ultimately serve only the very rich.
In “The Rights of Machines: Caring for Robotic Care-Givers”, David J. Gunkel ponders the question of “machine rights” for health care robots. Gunkel identifies two “machine
rights” options: health care robots are nothing more than instrumental tools and
accordingly deserve no legal rights; health care robots are valued domestic com-
panions and deserve at least some legal protections. Since each option turns out to
have problems, Gunkel urges that the question of “machine rights” be taken more
seriously by society.
Are medical machines liable for their actions and mistakes, as are “natural
humans”? Addressing this question in “Machine Medical Ethics and Robot Law:
Legal Necessity or Science Fiction?”, Rob van den Hoven van Genderen pre-
dicts that new legal amendments will enter existing law to represent intelligent
machines but only on behalf of a real legal actor, a natural human being. Since
machines are best viewed as our assistants, workers or servants, they do not qual-
ify as natural persons, and ought never to have full human rights and obligations.
According to van den Hoven van Genderen, the legal system is under human con-
trol, and cannot ever be shared with machines.
Beginning the next section in Part III, Decision-Making, Responsibility, and
Care, Julia Inthorn, Rudolf Seising, and Marco E. Tabacchi propose that Serious
Games machines can share ethical responsibility with human health care profes-
sionals in solving medical dilemmas (“Having the Final Say: Machine Support
of Ethical Decisions of Doctors”). The authors show that Serious Games improve
upon current machines in clinical decision-making because they can integrate both short- and long-term perspectives and enable learning with regard to bottom-up decision
processes as well as top-down rules and maxims. Though there is a reluctance to
use machine support in medicine, the possibilities of experiential learning ought
to be considered an important aspect of behavioral change that could be used to
improve ethical decision-making in medicine. The authors also provide an inform-
ative historical overview of decision support systems in medicine.
What are the prospects of “robotic-assisted dying” in medical contexts? Ryan
Tonkens (“Ethics of Robotic Assisted Dying”) proposes that if we develop robots to
serve as human caregivers in medical contexts, and given that assistance in dying is
sometimes an important aspect of geriatric care, it is ethical for such robots to facilitate and assist in the dying of eligible patients at their sound request.
A major benefit of robotic-assisted dying is that the robot would always assist those
consenting patients that are genuinely eligible, and thus such patients would not be
at the mercy of a willing physician clause in order to have some control over the
timing and manner of their death. At the same time, specialist humans must remain
involved in establishing strict regulations and safety protocols concerning end-of-
life situations and be present in the event of machine malfunction.
According to Blay Whitby (“Automating Medicine the Ethical Way”), unre-
liable technology and human errors in Information Technology (IT) resulting
from poor user interfaces are two outstanding ethical problems. Whitby calls for
improved ethical awareness and professionalism in IT workers in order to achieve
ethically acceptable medical machines. Lessons from the aviation industry suggest
that issues of acceptance and resistance by professionals can be successfully man-
aged only if they are fully engaged in the operational and procedural changes at
all stages. Negotiation over procedures and responsibility for errors in aviation is
complex and informative for other fields, including machine ethics.
In “Machine Medical Ethics: When a Human Is Delusive but the Machine Has
Its Wits About Him”, Johan F. Hoorn imagines an advanced dementia patient
under the care of a health care robot and asks: “Should the robot comply with the
demand of human autonomy and obey every patient command?” To help answer
this question, Hoorn offers a responsibility self-test for machine or human that dif-
ferently prioritizes top-down maxims of autonomy, nonmaleficence, beneficence,
and justice. The self-test comes in seven steps, ranging from “I do something” (to
act, with or without self-agency), to “My “higher” cognitive functions are sup-
posed to control my “lower” functions but failed or succeeded” (to act, with or
without self-control).
In “ELIZA Fifty Years Later: An Automatic Therapist Using Bottom-Up and
Top-Down Approaches”, Rafal Rzepka and Kenji Araki present a machine therapist capable of analyzing thousands of patient cases, implemented as an algorithm for generating empathic machine reactions based on emotional and social consequences.
Modules and lexicons of phrases based on these analyses enable a medical machine
to empathically sense how patients typically feel when certain events happen, and
what could happen before and after actions. The authors suggest that this bottom-up
method be complemented by a top-down utility calculation to ensure the best
outcome for a particular human user.
Neuromachines capable of measuring brain function and iteratively guiding output will be a major development in neuromodulation technology. According
to Eran Klein, the use of closed-loop technologies in particular will entail ethi-
cal changes in clinical practices (“Models of the Patient-Machine-Clinician
Relationship in Closed-Loop Machine Neuromodulation”). Klein thinks current
ethical models of the clinical relationship are only suited to certain forms of neu-
romodulation, but new models ought to be more comprehensive as new neuromod-
ulatory technologies emerge. Klein assesses design, customer service, and quality
monitoring models as candidates for a new ethic and urges that any successful the-
oretical approach ought to incorporate Aristotelian concepts of friendship.
Steve Torrance and Ron Chrisley (“Modelling Consciousness-Dependent Exper-
tise in Machine Medical Moral Agents”) suggest that a reasonable design constraint
for an ethical medical machine is for it to at least model, if not reproduce, relevant
aspects of consciousness. Consciousness has a key role in the expertise of human
medical agents, including autonomous judging of options in diagnosis, planning
treatment, use of imaginative creativity to generate courses of action, sensorimotor
flexibility and sensitivity, and empathetic and ethically appropriate responsiveness.
An emerging application of affective systems is in support of psychiatric diag-
nosis and therapy. As affective systems in this application, medical machines
must be able to control persuasive dialogs in order to obtain relevant patient data,
despite less than optimal circumstances. Kim Hartmann, Ingo Siegert, and Dmytro
Prylipko address this challenge by examining the validity, reliability, and impacts
of current techniques (e.g., word lists) used to determine the emotional states of
speakers from speech (“Emotion and Disposition Detection in Medical Machines:
Chances and Challenges”). They discuss underlying technical and psychologi-
cal models and examine results of recent machine assessment of emotional states
obtained through dialogs.
Medical machines are affective systems because they can detect, assess,
and adapt to emotional state changes in humans. David Casacuberta and Jordi
Vallverdú (“Ethical and Technical Aspects of Emotions to Create Empathy in
Medical Machines”) argue that empathy is the key emotion in health care and
that machines need to be able to detect and mimic it in humans. They reinforce
modeling of cultural, cognitive, and technical aspects in health care robots in order
to approximate empathic bonds between machine and human. The emotional
bonds between human and machines are not only the result of human-like com-
munication protocols but also the outcome of a global trust process in which emo-
tions are cocreated between machine and human.
In the Epilogue, Dutch visual artist Janneke van Leeuwen and Simon van Rysewyk
discuss whether intelligent machines can appreciate esthetic representations as a
simulacrum of human esthetic understanding. The dialog is illustrated by selections
from van Leeuwen’s thoughtful photographic work, “Mind Models”.
The book editors Simon Peter van Rysewyk and Matthijs Pontier wish to
warmly thank Springer for the opportunity to publish this book, and in particular, to
acknowledge Cynthia Feenstra and Nathalie Jacobs at Springer for their assistance
and patience. We wish to thank Jessica Birkett (Faculty of Medicine, University of
Melbourne) for reviewing author chapters, and all authors that feature in this book
for their excellent and novel contributions. Simon Peter van Rysewyk acknowledges
support from Taiwan National Science Council grant NSC102-2811-H-038-00.
Thank you all.
Simon Peter van Rysewyk
Matthijs Pontier
The Netherlands
Part I Theoretical Foundations of Machine Medical Ethics
An Overview of Machine Medical Ethics ........................... 3
Tatjana Kochetkova
Surgical, Therapeutic, Nursing and Sex Robots in Machine
and Information Ethics .......................................... 17
Oliver Bendel
Good Healthcare Is in the “How”: The Quality of Care, the Role
of Machines, and the Need for New Skills .......................... 33
Mark Coeckelbergh
Implementation Fundamentals for Ethical Medical Agents ............ 49
Mark R. Waser
Towards a Principle-Based Healthcare Agent ....................... 67
Susan Leigh Anderson and Michael Anderson
Do Machines Have Prima Facie Duties? ............................ 79
Joshua Lucas and Gary Comstock
A Hybrid Bottom-Up and Top-Down Approach to Machine
Medical Ethics: Theory and Data ................................. 93
Simon Peter van Rysewyk and Matthijs Pontier
Moral Ecology Approaches to Machine Ethics ...................... 111
Christopher Charles Santos-Lang
Part II Contemporary Challenges in Machine Medical Ethics:
Justice, Rights and the Law
Opportunity Costs: Scarcity and Complex Medical Machines ......... 131
Adam Henschke
The Rights of Machines: Caring for Robotic Care-Givers ............. 151
David J. Gunkel
Machine Medical Ethics and Robot Law: Legal Necessity
or Science Fiction? ............................................. 167
Rob van den Hoven van Genderen
Part III Contemporary Challenges in Machine Medical Ethics:
Decision-Making, Responsibility and Care
Having the Final Say: Machine Support of Ethical Decisions
of Doctors ..................................................... 181
Julia Inthorn, Marco Elio Tabacchi and Rudolf Seising
Ethics of Robotic Assisted Dying .................................. 207
Ryan Tonkens
Automating Medicine the Ethical Way ............................. 223
Blay Whitby
Machine Medical Ethics: When a Human Is Delusive but the
Machine Has Its Wits About Him ................................. 233
Johan F. Hoorn
Part IV Contemporary Challenges in Machine Medical Ethics:
Medical Machine Technologies and Models
ELIZA Fifty Years Later: An Automatic Therapist Using
Bottom-Up and Top-Down Approaches ............................ 257
Rafal Rzepka and Kenji Araki
Models of the Patient-Machine-Clinician Relationship
in Closed-Loop Machine Neuromodulation ......................... 273
Eran Klein
Modelling Consciousness-Dependent Expertise in Machine
Medical Moral Agents ........................................... 291
Steve Torrance and Ron Chrisley
Emotion and Disposition Detection in Medical Machines:
Chances and Challenges ......................................... 317
Kim Hartmann, Ingo Siegert and Dmytro Prylipko
Ethical and Technical Aspects of Emotions to Create Empathy
in Medical Machines ............................................ 341
Jordi Vallverdú and David Casacuberta
Epilogue ...................................................... 363
Part I
Theoretical Foundations of Machine
Medical Ethics
An Overview of Machine Medical Ethics
Tatjana Kochetkova
© Springer International Publishing Switzerland 2015
S.P. van Rysewyk and M. Pontier (eds.), Machine Medical Ethics,
Intelligent Systems, Control and Automation: Science and Engineering 74,
DOI 10.1007/978-3-319-08108-3_1
Abstract This chapter defines the field of medical ethics and gives a brief over-
view of the history of medical ethics, its main principles and key figures. It dis-
cusses the exponential growth of medical ethics along with its differentiation into
various subfields since 1960. The major problems and disputes of medical ethics
are outlined, with emphasis on the relation between physicians and patients, insti-
tutions, and society, as well as on meta-ethical and pedagogic issues. Next, the
specific problems of machine ethics as a part of the ethics of artificial intelligence
are introduced. Machine ethics is described as a reflection about how machines
should behave with respect to humans, unlike roboethics, which considers how
humans should behave with respect to robots. A key question is to what extent
medical robots might be able to become autonomous, and what degree of hazard
their abilities might cause. If there is risk, what can be done to avoid it while still
allowing robots in medical care?
1 The Reality of Machine Medical Ethics
A hospital patient needs to take her regular medication, but she is watching her favorite
TV program and reacts angrily to the reminder about taking medicine. If you were a
nurse, how would you react? How should a robot nurse react?
This case study originates from a project conducted by robotics researchers Susan
and Michael Anderson (Anderson and Anderson 2005). They programmed a NAO
robot1 to perform simple functions like reminding a “patient” that it is time to take
1 A NAO robot is a programmable autonomous humanoid robot developed by the French company Aldebaran Robotics. It was first produced at the beginning of the 21st century.
T. Kochetkova (*)
Institute for Philosophy, University of Amsterdam, Amsterdam, The Netherlands
prescribed medicine [29]. NAO brings a patient tablets and declares that it is time
to take them. If the treatment is not observed (i.e., if the patient does not take the
tablets), the robot should report this fact to the doctor in charge.
But suppose we program the robot to react also to a patient’s mental and emo-
tional states. This makes the situation much more complicated: a frustrated patient
can yell at the robot, refuse to take pills or refuse to react at all, or do something
else not included in the narrow algorithm that guides the robot. In order to react
accordingly, the robot now needs to be more flexible: it has to balance the benefits
that a patient receives from the medicine (or treatment) against the need to respect
the patient’s autonomy. In addition, the robot has to respect the independence and
freedom of the patient. If, for instance, the disease is not too dangerous and the
patient forgets to take a pill while watching his or her favorite television program, another reminder from the robot could bring him or her more displeasure (i.e., harm) than
good. If skipping the medication had more serious consequences, then the robot
would have to remind the patient, and if necessary, even notify the patient’s doctor.
The robot thus needs to make decisions based both on the situation at hand, and
also on its built-in “value hierarchy”: different principles might lead to different
decisions for the same situation [4, 6].
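Such a built-in "value hierarchy" can be pictured as a small rule-based policy that trades the benefit of the dose against the intrusion of repeated reminders. The following is a minimal Python sketch; the severity scale, escalation rule, and thresholds are my assumptions for illustration, not anything from the Andersons' robot.

```python
# Minimal sketch of a "value hierarchy" for a medication-reminder robot.
# The severity scale and escalation thresholds are illustrative assumptions.

def reminder_action(severity: int, refusals: int) -> str:
    """Balance the benefit of the medicine against the patient's autonomy.

    severity: 0 (minor ailment) .. 2 (serious consequences if a dose is skipped)
    refusals: how many times the patient has already declined this dose
    """
    benefit = severity          # expected harm prevented by taking the dose
    autonomy_cost = refusals    # each repeated reminder intrudes a bit more

    if benefit > autonomy_cost:
        return "remind"         # benefit still outweighs the intrusion
    if severity >= 2:
        return "notify_doctor"  # serious case: escalate to a human
    return "wait"               # respect the patient's choice for now

# Minor ailment, one refusal: leave the patient to the TV program.
assert reminder_action(severity=0, refusals=1) == "wait"
# Skipping would be dangerous and the patient keeps refusing: escalate.
assert reminder_action(severity=2, refusals=3) == "notify_doctor"
```

Different weightings of the same duties yield different actions for the same situation, which is precisely the point of the paragraph above.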
In the near future, robots like the one in the Andersons’ study may become widespread. The question of how to give them a complex hierarchy of values therefore
becomes increasingly important. In addition to having to think about their own
ability to carry out their responsibilities (e.g., they must know when it is time to
recharge their batteries, or else they might leave patients unattended and in potential
risk), they will also need to make appropriate choices for their patients. This implies
an in-built sense of justice when tackling even mundane tasks: if, for instance, they
are supposed to change the channel of a TV set that several patients are watching
together, they will have to take into account variables such as the patients’ conflicting
desires, and how often each patient’s TV wishes have been fulfilled in the past.
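The shared-TV example amounts to a tiny fairness computation: weigh each patient's current wish against how often their wishes have been granted before. One possible encoding in Python (the least-recently-favored rule is my assumption, not something specified in the text):

```python
# Hypothetical fairness rule for the shared-TV example: honor the request of
# the patient whose wishes have been granted least often so far.

def choose_channel(requests, grants):
    """requests: patient -> wanted channel; grants: patient -> past grant count."""
    patient = min(requests, key=lambda p: grants.get(p, 0))
    grants[patient] = grants.get(patient, 0) + 1  # record the new grant
    return requests[patient]

history = {"Ann": 3, "Ben": 1}
assert choose_channel({"Ann": "news", "Ben": "sports"}, history) == "sports"
assert history == {"Ann": 3, "Ben": 2}  # Ben's turn was recorded
```

Even this toy rule needs state (the grant history), which illustrates why a sense of justice for mundane tasks is harder to build than it first appears.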
Reasons such as these explain the relevance of machine medical ethics.
Machine medical ethics faces at least three challenges. Foremost, there is the need
to ensure the safe use of medical robots, whose presence in the health sector is
increasing. By the middle of the 21st century, about 25 % of West Europeans will
be over 65 years old; there will be an increasing demand on the healthcare sys-
tem that will only be met by using advanced technology, including robotics. Some
remotely operated robots are now already routinely being used in surgery. Other
expected applications of medical robots in the near future are [19]:
- Assisting patients with cognitive deficits in their daily life (reminding them to take medicine, drink, or attend appointments).
- Mediating the interaction of patients and human caregivers, thus allowing caregivers to be more efficient and reducing the number of their physical visits.
- Collecting data and monitoring patients, preventing emergencies like heart failure and high blood sugar levels.
- Assisting the elderly or the disabled with domestic tasks such as cooking or cleaning, thus making it possible for them to continue living independently.
The demand for robots in the healthcare sector is already quite palpable, at least
in the West, and I suspect this demand will only increase. This will include robots
that can perform some human tasks but are quicker to train, cheaper to maintain,
and are less bored by repetitive tasks, with the ultimate purpose being to take over
tasks done by human caretakers and to reduce the demand for care homes [29]. It
is clear that the behavior of such robots must be controlled by humans and within
the ambit of human ethical values, otherwise the robots would be useless and pos-
sibly even dangerous: if their behavior is not always predictable, they could poten-
tially cause harm to people. A robot with no programming on how to behave in an
emergency situation could make it worse. To avoid such problems, it is necessary
to build into the robots basic ethics that apply in all situations.
Second, there is a certain fear of autonomously thinking machines, especially
in the West, probably due to uncertainty about whether they will always behave
appropriately. Science fiction is full of such fears. The creation of ethical con-
straints for robots can make society more receptive to research in the field of arti-
ficial intelligence by allowing it to deal better with robots in ethical situations. In
fact, in such situations, robots without ethical constraints would appear to be too
risky to be allowed in society.
A third reason for the increasing interest in machine ethics is the question of who
can ultimately make better ethical decisions: humans or robots? Humans use their
intuition for moral decisions, which can be a very powerful heuristic [14, 25]. Yet
humans can be bad at making impartial or unbiased decisions and are not always
fully aware of their own biases. As for robots, Anderson claims that they may be
able to make better decisions than people, because robots would methodically calcu-
late the best course of action based on the moral system and principles programmed
into them [4]. In other words, robots may behave better than humans simply because
they are more accurate [4]. Yet it is not entirely clear whether such methodic con-
sideration of all possible options by a robot will always be an advantage for deci-
sion making. As Damasio’s research shows, people with brain damage actually do
methodically consider all options, yet this does not guarantee that their decisions
will be better, since their mental impairment forces them to consider, and perhaps
take, many options that healthy people would immediately see as bad [10, 14].
A final reason for the growing relevance of machine ethics is the lack of con-
sensus among experts on the ways to handle major ethical dilemmas, which makes
it more difficult to transfer decision making to machines. Answers to ethical
dilemmas are rarely clear. For instance, a classical problem like the train accident
dilemma, discussed below, would be solved by different theories in different ways:
utilitarians believe that the sacrifice of a single life in order to save more lives
is right, while deontologists believe such a sacrifice is wrong since the ends can-
not justify the means. There is no consensus on other vital issues such as whether
abortion is permissible, and if so, under what circumstances. A medical robot, per-
forming the role of adviser to a patient, for example, may have to take such facts
into account and realize that it needs to shift the burden of making the right deci-
sion to a human being. But this may not always be possible: in another dilemma
involving a possible car accident where one or several people would inevitably
6T. Kochetkova
die, an autonomous car with an automatic navigation system would either lose the
ability to act independently or have to make a random decision. Neither solution
seems truly acceptable [15].
2 The Development of Machine Medical Ethics:
A Historical Overview
Machine medical ethics recently emerged as a branch of ethics. To fully under-
stand ethics, it is important to see it as the critical and reflexive study of moral-
ity, i.e., as the rational scrutiny of values, norms, and customs. This critical stance
differentiates ethics from morality: ethics is the philosophical study and
questioning of moral attitudes. Even though machine medical ethics includes both
normative and applied components, it is the latter that has recently been gaining
research attention.
Since the 1950s, medical ethics has experienced exponential growth and
differentiation into various subfields, hand in hand with the technological, political, and
cultural changes of this time. Previously, the relation between medical profession-
als and patients had been paternalistic: all decisions were supposed to be taken by
a professional (or a group of professionals) in the best interests of a patient and
were then communicated to the patient. This paternalism was based on a knowl-
edge gap between the medical professional and the patient and between the profes-
sional and the public, as well as on the relative clarity of medical decisions and the
limited number of choices.
Paternalism in doctor-patient relationships has been undermined by public
knowledge about atrocities in medical experiments conducted during the Second
World War. The post-war Declaration of Geneva (1948) initiated the shift towards
a more liberal model of doctor-patient relationship, promoting informed consent
as an essential ethical value.
From the 1960s to the beginning of the 21st century, owing to the growth of
public education, the empowerment of the general public, the accessibility of
medical knowledge, and new developments in medical technology and science,
the general public has become more informed about available medical information,
and patients now participate in clinical decision-making. This has changed
the relation between healthcare professionals and patients quite significantly: the
paternalistic model is anachronistic, and in most cases the shared decision-making
model is the norm [8, 18, 30].
Concomitantly with the shift away from paternalism in medicine, ethics itself
underwent a change in focus towards application. Scientific and technological
development has given rise to various new choices and specific ethical problems.
This led to the origin of bioethics (a term introduced as early as 1927 by Fritz
Jahr). In the narrow sense, bioethics embraces the entire body of ethical problems
found in the interaction between patient and physician, thus coinciding with medi-
cal ethics. In the broad sense, bioethics refers to the study of social, ecological,
An Overview of Machine Medical Ethics
and medical problems of all living organisms, including, for instance, genetically
modified organisms, stem cells, patents on life, creation of artificial living organ-
isms, and so on.
Along with the appearance of bioethics and the shift away from paternalism and
the consequent decrease of the role of the doctor as sole decision-maker, the idea
that machines could also be a part of the process of care became more acceptable.
Together with great progress in medical technology, this resulted in the emergence
of the field of machine medical ethics. The main aim of machine medical ethics is to
guarantee that medical machines will behave ethically. Machine medical ethics is an
application of ethics and a topic of heated debate and acute public interest.
The reasons for this change in focus towards application have been widely dis-
cussed in bioethics. Among its causes is the growth of human knowledge and tech-
nological possibilities, which brought along a number of new ethical problems,
some of which had never been encountered before. For example, should we switch
to artificial means of reproduction? Is it acceptable to deliberately make human
embryos for research or therapeutic purposes? Is it worthwhile to enhance humans
and animals by means of genetic engineering or through digital technologies? In
addition, there are also new problems concerning the usage of robots, brought
about by rapid progress in the development of computer science. For example, is
it acceptable to use robots as a workforce if their consciousness evolves as they
become AMAs (artificial moral agents)? Suddenly, the area of human-robot
interactions is saturated with ethical dilemmas.
Given the increasing complexity and applicability of robots, it is quickly
becoming possible for machines to perform at least some autonomous actions
which may in turn cause either benefit or harm to humans. The possible conse-
quences of robot errors and, accordingly, the need to regulate their actions is a
pressing ethical concern. It is not simply a question of technical mistakes, like
autopilot crashes, and their consequences, but also of cases in which robots have
to make decisions that affect human interests. An obvious example in the field of
medicine is the activity of robot nurses, i.e., mobile robotic assistants [3]. Robot
nurses have been developed to assist older adults with cognitive and physical
impairments, as well as to support nurses. Mobile robotic assistants are capable of
successful human-robot interaction: they have a human-tracking system, and they
can plan under uncertainty and select appropriate courses of action. In one study,
the robot successfully demonstrated that it can autonomously provide reminders
and guidance for elderly residents in experimental settings [26].
Presently, medical robots are already in use in various areas. In surgery, operations
involving robotic hands tend to be of higher quality and involve fewer risks than
traditional operations managed by humans. Robots are also being used in managing large
information files (“Big Data”). For instance, the market share of “exchange robots”,
computer algorithms that earn money for their owners on the stock market, is set to
grow further, since their results are better than those of human traders.
The relation between the quality of electronic and live traders today resembles that
between chess players and chess programs on the eve of the match between the human
player Kasparov and the program Deep Blue. As we all know, the program won.
This particular case does not seem very dangerous, but is there an element of
risk involved in the success of intelligent machines in other areas? These questions
increasingly concern not only the broad public, but also designers and theorists of
Artificial Intelligence (AI) systems. The main challenge is how to ensure that
AI systems are safe. Safeguards found so far only in fiction, like Isaac Asimov’s famous
Three Laws of Robotics, seem increasingly necessary [5, 12, 16]. In recent decades,
such issues have been debated in a broad range of publications in computer
science and machine ethics. The increasing success of various robot-related
projects has stimulated research on the possibility of built-in mechanisms to protect
people from unethical behavior by computer-controlled systems.
Currently, the demand for ethical robots to serve the aging populations of
developed countries extends beyond medical services: the demand for service robots in
restaurants, hotels, nurseries, and even at home has been growing. The entire service
sector, it seems, is impatiently waiting for robots of reasonably good quality at
affordable prices. It would seem that, in today’s increasingly educated society, all
mechanical labor is regarded as something best shifted to the hands of robots.
But problems in the production of such robots go beyond technological
difficulties. Separating the mechanical and communicative components of specialized
work (e.g., for nurses) is sometimes very difficult, or even impossible. The two
are subtly intertwined, which makes the ethical programming of robots necessary for
nearly all tasks involving interactions with humans. For instance, in the situation
described in the introduction, where a patient reacts negatively to the reminder to take
medicine, communicative and ethical capacities must be intrinsic to a robot for it
to be able to react ethically.
These difficulties might lead to the question whether machine ethics is possible
at all, i.e., whether ensuring that AMAs behave ethically is a theoretically
tractable problem. I think these difficulties are tractable from an engineering
perspective. The real difficulty lies in improving robotic software,
perfecting the sets of rules, and ensuring the correct processing of incoming data.
With robots as explicit AMAs, the challenge is to make their complex behavior
predictable. However complicated this challenge proves to be, it is basically a
matter of increasing the complexity of already existing robots, not the
theoretical question of building entirely new AI systems. This supports my optimism
about the prospects of robotics for ensuring safe robot use in the real world.
3 Key Issues in Machine Medical Ethics
The key issues of machine medical ethics are linked to problems of AI. In current
AI discussions, three major issues dominate: computability, robots as autonomous
moral agents, and the relation between top-down, bottom-up and hybrid theoreti-
cal approaches. Each will be briefly considered in turn.
The computability of ethical reasoning concerns the conditions for the very
existence of machine medical ethics. Indeed, ethics, as seen above, can be defined
as a reflection on the normative aspects of human behavior. Presently, machine
ethics, as the study of how machines should behave with respect to humans,
attempts to create computer analogs for the object of ethical study—values and
norms—so as to make ethics computable and ultimately permit its implementa-
tion in robots and machines [11, 29]. The hope is that ethics can be made translat-
able into computer language, i.e., ultimately presentable as a complex set of rules
that can be made into a computer algorithm [24]. There are already programs that
allow machines to imitate aspects of human decision-making, a first step towards
the creation of robots that will be able to make such decisions by themselves.
Some of these programs are discussed by Bringsjord and Taylor [7].
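The idea of presenting ethics as a complex set of rules acting over candidate actions can be sketched, very roughly, as follows. Everything here (the rule names, the action features, the medication scenario) is hypothetical and purely illustrative; it is not drawn from the programs cited above.

```python
# A toy "ethical filter" over candidate actions: each hard constraint is a
# computable rule, and an action is permitted only if it violates none of them.
# All names and features are hypothetical, chosen for illustration only.

HARD_CONSTRAINTS = [
    # (rule name, predicate that returns True when the rule is VIOLATED)
    ("do_no_harm", lambda a: a["expected_harm"] > 0),
    ("respect_autonomy", lambda a: a["overrides_consent"]),
]

def permitted(action):
    """An action is permitted if it violates no hard constraint."""
    return all(not violated(action) for _, violated in HARD_CONSTRAINTS)

def choose_action(candidates):
    """Return the first permitted candidate; if every candidate violates
    some constraint, defer the decision to a human."""
    for action in candidates:
        if permitted(action):
            return action["name"]
    return "defer_to_human"

candidates = [
    {"name": "force_medication", "expected_harm": 1, "overrides_consent": True},
    {"name": "remind_patient", "expected_harm": 0, "overrides_consent": False},
]
print(choose_action(candidates))  # -> remind_patient
```

Note the deliberate fallback: when no candidate passes the rules, the sketch hands the decision back to a human, mirroring the gray-area caution discussed below rather than forcing a verdict.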
One approach to achieving a computable machine ethics is a complete and
consistent theory that can guide and determine the actions of an intelligent moral
agent in the enormous variety of life situations. With such a theory, a machine
with AI could in principle be programmed to deal appropriately with all real-
life situations. The basic question is then, of course, whether such a universally
applicable theory actually exists: if it does, then machine ethics would be basi-
cally busy with programming it into computers. It may be, however, that no single
ethical theory is or can truly be complete: completeness, albeit attractive, may ulti-
mately turn out to be an unattainable ideal. The absence of unconditionally correct
answers to ethical dilemmas, and changes in ethical standards through time, sug-
gest that it is not prudent to hope for a “perfect” theory of ethics in attempting to
build ethical machines. Rather, work on ethically programmed robots should start
with the idea of ethical gray areas in mind, areas in which one cannot determine
which behavior is unconditionally right on a case by case basis [11].
Rather than concentrating on one single system or theory of ethics (for which
often intractable dilemmas can be found), it seems more productive to strive
towards a hierarchic system including a plurality of ethical values, some of which
are subordinate to others. For example, John Rawls’s method of “reflective
equilibrium”, which is similar to, but more complicated than, the one used by the
hospital robot in Anderson’s experiment, is a candidate.
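One way such a hierarchic system, a plurality of values with some subordinate to others, could be realized is as a lexicographic ordering: a higher-ranked value outweighs any combination of outcomes on the values below it. The value names, their ranking, and the scores below are hypothetical; this is a minimal sketch, not Anderson’s or Rawls’s actual method.

```python
# A minimal sketch of a hierarchic value system. Values are ranked, and
# action profiles are compared lexicographically, so a gain on a
# higher-ranked value subordinates all lower-ranked ones.
# Value names, ranking, and scores are hypothetical.

# Values in descending priority.
VALUE_ORDER = ["non_maleficence", "autonomy", "beneficence"]

def profile(scores):
    """Order a dict of per-value scores into a tuple by priority, so that
    Python's built-in tuple comparison realises the lexicographic hierarchy."""
    return tuple(scores[v] for v in VALUE_ORDER)

def best_action(options):
    """Pick the option whose value profile is lexicographically greatest."""
    return max(options, key=lambda option: profile(option[1]))[0]

options = [
    # (action, score on each value; higher is better)
    ("insist_now", {"non_maleficence": 1, "autonomy": 0, "beneficence": 2}),
    ("wait_and_ask", {"non_maleficence": 1, "autonomy": 2, "beneficence": 1}),
]
print(best_action(options))  # -> wait_and_ask
```

Here both options tie on the top-ranked value, so the second-ranked value decides, even though the rejected option scores higher on the lowest-ranked one; that is exactly the subordination relation the text describes.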
The study of machine ethics might thus advance the study of ethics in general.
Although ethics ought to be a branch of philosophy with real-life impact, in practice
theoretical work and discussions between philosophers often drift toward the
consideration of unrealistic “academic” situations [11]. The application of AI to ethical
problems will help us understand them better, make them more realistic, and lead to better
solutions, while perhaps even opening the way for new problems to be considered.
Secondly, there is widespread agreement in AI that a robot capable of making
decisions and acting autonomously is an AMA [22]. This does not require any intrin-