ETHICAL BEHAVIOUR OF AUTONOMOUS NON-MILITARY
CYBER-PHYSICAL SYSTEMS
Damien Trentesaux¹, Raphaël Rault²

¹ LAMIH UMR CNRS 8201, SurferLab, Univ. Valenciennes, Le Mont Houy, 59313 Valenciennes Cedex, France. damien.trentesaux@univ-valenciennes.fr

² Capon & Rault Avocats (law firm), 7 rue de l'Hôpital Militaire, 59800 Lille, France. r.rault@capon-rault.com
Keywords: ethical behavior, complex system, cyber-physical systems, autonomous vehicles, artificial intelligence, train transportation, RAMS, RAME
Abstract.
Autonomous non-military cyber-physical systems are widely studied in research, but there are still few applications in industry. One of the reasons is that there is still no guarantee regarding the safety of these systems and their ability to behave in a non-hazardous way, mainly because of the complexity induced by their learning abilities coupled with the high ability of their constituent mechatronic elements to interact and cooperate. Moreover, cyber-physical systems are intended to be merged into socio-technical systems and to interact with humans. As a consequence, the study of their ethical behavior currently represents a major stake. Meanwhile, this stake is clearly still not addressed by the scientific community, while 1) sci-fi literature and movies have addressed it for a long time, 2) some autonomous road vehicles have already injured people, and 3) the EU Parliament has launched a procedure dealing with the establishment of civil law rules for autonomous learning robots. This paper therefore intends to open the debate on this topic and suggests extending dependability studies to integrate ethicality as a new dimension. New emerging research fields are identified and an illustration in autonomous train transportation is presented.
Introduction
Cyber-physical systems (CPS) are complex systems aiming to fulfill a global function and integrating communicating mechatronic parts, sensors and effectors, merging the physical and the digital worlds [1]. They interact with human operators. In our work, we consider a more complex sub-class of CPS: those that are autonomous. In this paper, autonomy refers to the ability of integrated sensing, perceiving, analyzing, communicating, planning, decision-making, and acting/executing, to achieve goals [2]. These CPS can thus behave and evolve in space independently of the decisions of a human operator or of their designer. They can integrate Artificial Intelligence-based learning mechanisms enabling them to improve their decisions over time and to adapt to an evolving environment [3]. The CPS considered in this paper are also non-military: their primary function is not the destruction of assets or people, unlike systems found in the military or defense industries [4].
This paper, co-written by a lawyer specialized in digital property and cyber-security and a researcher working in the field of the industrial internet and CPS, deals with the ethical behaviour of this kind of autonomous non-military CPS¹. The authors' aim is to encourage researchers working on this kind of CPS to pay attention to the possible consequences of their designs on the welfare of the humans interacting with these CPS, such as their possible responsibility in case of an accident.

¹ For simplification purposes, we will no longer specify "autonomous and non-military" in the remainder of the paper when using the acronym "CPS".
Futuristic non-scientific ideas, more relevant to entertainment and sci-fi than to science, initially shed light on this subject (as illustrated by Asimov's laws of robotics, Mary Shelley's creature, Philip K. Dick's short stories, etc.). Meanwhile, the subject has become reality and entered the public arena with the widely discussed recent case of the Tesla self-driving car crash that led to a death [5].
As introduced, the scope of the paper concerns ethics. Ethics is originally a field of philosophy. According to [6], ethical behaviour is concerned with actions that accord with cultural expectations relating to morality and fairness. Ethical aspects in engineering are not new, and some journals are specialized in that field [7]. These aspects have been studied in various fields of engineering, such as design [8] and, obviously, bio-engineering, including genetics [9]. In the literature, one can find two kinds of ethical studies [3]:
• techno/engineering ethics, which deals with the moral behaviour of humans designing artificial beings (the issue being the ethical design of CPS), and
• machine ethics, which deals with the moral behaviour of artificial beings (the issue being the design of ethical CPS).
In a previous paper, a review of ethical aspects in complex system engineering was made [3]. Its main conclusions are provided hereinafter. Regarding techno ethics, proposals mainly foster the adoption of techno-ethical behaviours by researchers, related to the idea of charters to be signed (a "Hippocratic oath").
In this paper, since we deal with the ethical behavior of CPS, our work is rather related to machine ethics. In this field, it is worth noticing that the main breakthroughs are coming from lawyers. Starting from the observation that there exists a huge corpus of "human-being-centered" laws, while almost none deal with "artificial beings", lawyers have started to pay attention to robotics and artificial intelligence (AI). One of their surprising approaches is to consider the creation of a new species composed of intelligent autonomous robots and systems, alongside the human one. This approach implies that human rights constitutions should be revised accordingly. For example, [10] discussed the organization of administrative control and the legal liability regime that applies to service robots, as well as the issue of the autonomy of service robots. Another example, from the authors' point of view among the most striking studies, is currently being led at the European Union (EU) Parliament level, which has worked on legal proposals related to robotics and AI [11]. Apart from the recommendations' main goals, which are devoted to human safety, privacy, integrity, dignity and autonomy, the EU Parliament aims to unify and incentivise European innovation in the area of robotics and AI. In these recommendations, a definition of a smart autonomous robot (SAR) has been proposed: a SAR acquires autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and trades and analyses data; it is self-learning (optional criterion); it has a physical support; it adapts its behaviours and actions to its environment². Following this idea, different categories of SAR would be created. It is advised that a specific legal status for robots be created, so that at least the most sophisticated SAR could be established as having the status of electronic persons. A SAR would also have rights (intellectual property, personal data…) and obligations (a SAR should be held liable for its actions according to the EU Parliament, which asserts that "the greater a robot's learning capability or autonomy is, the lower other parties' responsibility should be, and the longer a robot's 'education' has lasted, the greater the responsibility of its 'teacher' should be"). The concept of SAR is obviously close to the kind of CPS we consider in this paper. Meanwhile, only a few works dealing explicitly, from a scientific point of view, with machine ethics in SAR/CPS are available [3]. Among them, let us mention [12], who proposed a framework for the assessment and attribution of responsibility based on a classification of CPS with respect to the amount of autonomy and automation involved. Their proposal is based on the assumption that different types of decision making occur inside a CPS: automatic, semi-automatic, semi-autonomous and autonomous decisions.

² It is clear that what the EU calls a SAR is very close to what is called a CPS in this paper.
Given the important stakes introduced regarding the emergence of machine ethics issues in CPS on the one hand, and the lack of research activity from the scientific community, far behind the activity led in the legal field, on the other hand, the aim of this paper is to promote the emergence of a new field of scientific research: the ethical behavior of autonomous, non-military CPS.
To contribute to such an ambitious objective, our idea is to start from the scientific theoretical grounds that can be considered the closest to this new field and to extend them to encompass machine ethics in CPS engineering. From this perspective, one of the closest research fields found, and the one discussed in this paper, is "dependability analysis", for which recent studies on autonomous systems and robots are already available [13]. Of course, this does not prevent machine ethics in CPS engineering from also benefiting from other fields such as AI (multi-agent systems, artificial neural networks, etc.), mobile robotics, embedded/mechatronic systems, etc.
The next part summarizes the terminology of the historical field of dependability and points out the angle of attack through which machine ethics may foster the generalization of dependability to encompass ethical aspects. Applications and illustrations are taken from the transportation sector, whatever the mode (aeronautical, naval and ground transportation); the emergence of autonomous systems in this field, and especially the Google Car, is a clear illustration of where our society is going.
1 Dependability analysis: RAMS studies
Even if the word has been known for years by researchers, there is no real consensual, precise definition of dependability. For example, in the standard IEC 60300-1, dependability is defined as "the collective term used to describe an operation based on the availability and the influential factors: reliability, capacity and support to the maintenance" [14]. It is commonly accepted by researchers that dependability is composed of three main parameters: reliability, maintainability and availability, to which a fourth parameter is more and more often added: safety. The acronyms RAM and RAMS (along with the associated term "RAMS studies") are usually used by researchers working on dependability.
It is beyond the scope of this paper to discuss the definition of these parameters, the related literature being abundant. Thus, adapted from [14], the definitions used in this paper for illustration purposes are the following:
• Reliability: the ability of a system to operate satisfactorily during a determined period of time and under specific conditions of use. It can be expressed as a mean time to failure (MTTF).
• Maintainability: the capacity of a system to be maintained, where maintenance is the series of actions taken to restore an element to, or maintain it in, an effective operating state. It can be expressed as a mean time to repair (MTTR).
• Availability: the ability to perform when called upon and in certain surroundings. It can be expressed as a percentage: the proportion of time a system is in a functioning condition over a given time period.
• Safety: the ability to detect undesired events and to apply rules to limit (mitigate) their consequences.
More recently, security has also been added as a fifth parameter. Security is the ability to operate satisfactorily despite intentional external aggressions. In that sense, security relates to external events, while reliability relates to internal ones.
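As a back-of-the-envelope illustration of how the first three parameters relate quantitatively, the following sketch computes the steady-state availability implied by given MTTF and MTTR values; the numbers are hypothetical, chosen only to show the relation between the RAM indicators.

```python
# Minimal sketch: steady-state availability from MTTF and MTTR.
# The numeric values are hypothetical, for illustration only.

def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

mttf = 5_000.0  # assumed mean time to failure (reliability indicator)
mttr = 24.0     # assumed mean time to repair (maintainability indicator)
print(f"Availability: {availability(mttf, mttr):.2%}")  # -> 99.52%
```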
Obviously, safety and security are the indicators closest to ethical aspects, but one can see that they do not cover all the aspects relevant to ethics. From our point of view, safety and security must therefore be studied with a wider view, as discussed in the following part.
2 Dependability of CPS: from RAMS to RAME studies
The idea that we defend in this paper is that it seems possible to enlarge the safety/security aspects of RAMS studies towards machine ethics as a federating, more global concept encompassing safety and security. Following this idea, we suggest that RAMS now becomes RAME: Reliability, Availability, Maintainability and Ethicality. From our point of view and in this paper, ethicality refers to the ability of a system to behave in an ethical manner. The core question is obviously what an "ethical manner" is. The following section deals with this aspect.
2.1 Ethicality and ethical behavior of a CPS
As introduced, safety and security can be seen as components, yet to be adapted, of ethicality since, from our perspective, ethicality is a more global concept, enlarging the fields of research founded on these two components. One can notice that the elements discussed hereinafter present some links with Asimov's laws of robotics. It is also noticeable that the components described hereinafter are under the responsibility of the designer of a CPS if the CPS is "simple", in the sense that all of its states and behaviors are clearly identified, bounded, controlled and certified. Meanwhile, the responsibility of the CPS itself can be engaged as soon as it becomes autonomous and able to learn, and the more it learns, the less the responsibility of the designer is engaged, consistently with the previous discussion about the legal responsibilities of a SAR.
These components are described hereinafter.
A first component of ethicality is integrity, that is, the ability to behave so that others trust the information coming from the CPS and are confident in the ability of the CPS to engage actions to reach a clear, readable, public objective (even a potentially varying one, which must then be announced and publicized). This certainly includes the ability of the CPS not to disclose sensitive information gathered about the humans and other CPS interacting with it. The designer of the CPS is highly responsible for this (ethical design), but the CPS is the one that risks disclosing sensitive information when it takes decisions in an autonomous way, or that may change its objectives with no explanation, which influences the degree of trust from humans, and thus its ethicality. Integrity also includes the ability of others to verify the algorithms and to trace (explain) the decisions. For this aspect, a common current solution is to implement open (no black box), certified, checkable-by-others computer code, but this is not sufficient, since the CPS must also be able to explain its choices (for example, when investigating to determine the root cause of an accident in which the CPS was involved).
Safety is a second component of ethicality. As already introduced, safety concerns the people involved with and in direct connection to the considered CPS. Ensuring that a system behaves in a safe way, limiting the risk of injuries to the human beings that depend on the CPS, is the least that one could require to determine whether a CPS behaves ethically. For example, energy providers must equip their nuclear plants with procedures and rules to limit hazardous events for employees and nearby urban areas as much as possible. A cobot system, intended to work jointly with a human operator, has barriers and safety rules to avoid harming human operators (e.g., torque or effort detection). But from an ethical point of view, this is not so simple. A typical emerging ethical issue at this level concerns the kind of decisions that could maintain the safety of the CPS while, at the same time, being harmful to people. For example, a safe decision for an autonomous high-speed train that belatedly detects an animal on the track could be to brake to avoid the collision while, at the same time, this emergency braking risks hurting several of its 1,000 passengers, those who are not seated at that moment.
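To make the nature of this dilemma concrete, the following sketch casts the braking decision as a comparison of expected harms. Every probability, severity and exposure value is a hypothetical placeholder: eliciting defensible values for such parameters is precisely part of the open problem discussed in this paper.

```python
# Minimal sketch of the trade-off named above: an autonomous train
# weighing emergency braking against continuing. All figures are
# hypothetical placeholders, not validated parameters.

def expected_harm(p_event: float, severity: float, exposed: int) -> float:
    """Expected harm = probability of the event x severity x people exposed."""
    return p_event * severity * exposed

# Option A: emergency brake -- avoids the animal, but risks injuring
# the (assumed) 50 standing passengers out of 1,000.
harm_brake = expected_harm(p_event=0.10, severity=0.2, exposed=50)

# Option B: continue -- hits the animal; no passenger injury assumed.
harm_continue = expected_harm(p_event=1.0, severity=0.05, exposed=1)

decision = "brake" if harm_brake < harm_continue else "continue"
print(decision, harm_brake, harm_continue)  # -> continue 1.0 0.05
```

Under these made-up numbers the "safe" reflex of braking is not the least harmful option, which is exactly the kind of counter-intuitive outcome an ethicality analysis must be able to surface and justify.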
A third component of ethicality is security. Security refers to the resilience of the CPS facing unexpected aggressive actions. It concerns the ability of a CPS to limit its vulnerability to intrusions, cyber-attacks and aggressive behaviors (threats) coming from systems and events not only outside but also inside the CPS, the latter being a new aspect induced by the principle of ethicality. Regarding outside threats, cyber defenses (firewalls, anti-malware, etc.) are of course among the best-known approaches studied by industrialists and researchers. Regarding inside threats, and from a machine ethics point of view, this component becomes delicate to address. For example, should the autopilot of a plane be able to override the command of a human pilot if that command may lead to crashing the plane (e.g., suicide)? If yes, how can one ensure that this override does not lead to a more critical situation (e.g., difficulty for the cruise CPS to land the plane)? It is noticeable that security and integrity may be conflicting components: increasing integrity by using open-source AI code may facilitate the work of cyber-attackers, thus reducing the security of the CPS.
Altruism (or caring) is a fourth component of ethicality and one of its ultimate components. It has never been addressed by researchers in the field of CPS. Altruism can be seen as a view complementary to safety: altruism is more aligned with the welfare of people a priori not interacting with the CPS, while safety focuses on the people interacting with the CPS. It concerns the ability to behave according to the welfare of others, others being other technical systems, other CPS or human beings. Altruism may also contradict one or several of the other components and RAM indicators. For example, an altruistic autonomous boat that detects an "SOS" message delays its current mission to help the senders of the SOS. But through this altruistic decision, it de facto reduces its availability (its mission may be canceled), and this decision may impact its reliability as well (for example, overuse of its turbines, run at full throttle to reach the emitter of the SOS as soon as possible). Another example is that of an autonomous car that has to break the law in order to save a life (e.g., an emergency trip to the hospital, crossing a line to avoid a pedestrian, etc.) [15], reducing its accountability (see below). Of course, reducing accountability, reliability or availability is not a necessary condition for behaving in an altruistic way. For example, an altruistic autonomous train that detects an accident on a road close to its track (fire, smoke…) makes an emergency call to its central operator while continuing its journey to its destination at the same time. Its mission is thus not impacted.
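A minimal sketch of this altruism/availability tension, with purely hypothetical weights, could look as follows: the diversion decision trades an estimated welfare gain for others against the estimated loss of availability for the current mission.

```python
# Minimal sketch of the altruism/availability tension described above.
# The scoring scheme and weights are hypothetical assumptions.

def divert_to_sos(welfare_gain: float, availability_loss: float,
                  altruism_weight: float = 0.7) -> bool:
    """Divert when the weighted welfare gain outweighs the availability loss."""
    return altruism_weight * welfare_gain > (1 - altruism_weight) * availability_loss

# Hypothetical figures: high welfare at stake, moderate mission impact.
print(divert_to_sos(welfare_gain=0.9, availability_loss=0.4))  # -> True
```

Choosing the weight itself is an ethical design decision, which shows how quickly RAME indicators become entangled.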
Accountability (or liability) is another of the ultimate components of ethicality. Accountability ensures that the actions led by the CPS engage its responsibility in case of a problem. Although quite a futuristic idea, like altruism, when dealing with non-human-based systems, this is an important aspect promoted by the EU through its legal proposal previously introduced. If a CPS is accountable, then in case of a hazard, its legal responsibility could be engaged, leading to the allocation of insurance funds to injured people. The CPS could be sued for that hazardous behavior. Telling people that the autonomous train they are using is covered by the insurance of the train itself, to which the train contributes through the payment of a tax coming from its own earned money, renders this CPS a new accountable entity, belonging to a new species alongside the human one.
Equitability (financial fairness) is also an ultimate component of ethicality. It ensures that if the CPS becomes an artificial, accountable person able to spend money on insurance (accountability), then, as a counterbalance, it must be able to earn money. This must be done in an equitable (fair) way, with a balanced distribution of the money earned through its use to ensure its ethical behavior. This component reflects a logical evolution of the previously introduced legal developments related to SAR, since SAR and CPS are artificial beings potentially able to create and generate knowledge and ideas and to improve their performance compared to other SAR. This process should increase the competitiveness of a CPS and make it gain customers. For example, an autonomous train known to behave more ethically than a competitor train may attract more customers. Gaining market share means earning more money.
Fig. 1 summarizes this evolution from RAMS to RAME studies, where (E)thicality is decomposed into the previously listed components. In this figure, an intuitive dichotomy splitting ethicality into two levels is represented. The first level represents the immediate research activity that needs to be started. The second level addresses the more elaborate, complicated, delicate-to-imagine components of ethicality. From our perspective, the second level cannot be reached until the first one is sufficiently elaborated.

Fig. 1. From RAMS studies to RAME studies of CPS.
2.2 Some key relevant issues in RAME studies of CPS
From this definition of ethicality, one can identify a large number of issues. We list hereinafter some of the most challenging ones for illustration purposes:
• To ensure an ethical behavior, a first reflex would be to design rules that artificially force the ethical behavior of a CPS, for example, "when driving, priority is gradually set first to animals, then trucks, then pedestrians, then bikes, then motorbikes, then cars, then buses" (a minimal sketch of such a rule encoding is given after this list). But ethical behavior is still a complex concept to formalize, and defining all possible ethical rules in an exhaustive and meaningful way seems an insurmountable task up to now.
• As introduced, the different components of ethicality are conflicting, which means that improving one may lead to the degradation of another (e.g., improving altruism may degrade safety, improving integrity may degrade security, etc.).
• Deciding which decision is more ethical than another is sometimes a delicate choice to make. The classical "trolley problem", which represents a "no-win" situation where each possible choice (and especially inaction) leads to the death of one or several people, is a clear illustration of this.
• Linked to the previous issue, modeling an ethical behavior, optimizing it and simulating situations to evaluate the degree of ethicality over a short or long time horizon will be a strong issue as well. Measuring such a degree and stating that "system 1 behaves more ethically than system 2 during this amount of time" seems quite impossible to do with the current scientific state of the art.
• Last, the corpus of current laws is described in a textual way. It will not be possible to use a similar approach to define legal rules and laws for CPS. Thus, the co-development by lawyers and scientific researchers of ethical rules applicable both to human and artificial beings becomes a key issue. This could be done using mathematical, formal or algorithmic editions of laws and social rules to be embedded into CPS, which is a necessary condition to make them behave ethically.
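As announced in the first bullet, the following sketch shows what an algorithmic edition of a single such priority rule might look like, using the illustrative ordering quoted above. Encoding one rule is trivial; the point made in the list stands: producing an exhaustive and consistent rule base is the hard part.

```python
# Minimal sketch of one "algorithmic edition" of an ethical rule,
# encoding the illustrative priority ordering quoted in the first
# bullet above. This is a toy example, not a validated rule base.

PRIORITY = ["animal", "truck", "pedestrian", "bike", "motorbike", "car", "bus"]

def protect_first(detected: list[str]) -> str:
    """Return the detected road user the rule says to prioritize first."""
    ranked = sorted(detected, key=PRIORITY.index)
    return ranked[0]

print(protect_first(["car", "pedestrian", "bike"]))  # -> pedestrian
```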
To summarize, evolving from RAMS studies to RAME studies of CPS is a challenging, genuinely new field of research for scientists, and it is time for our community to appropriate this field. For sure, everything remains to be done. The intention of this paper is only to draw attention to this urgent need. New theories have to be invented, new models defined, and new indicators and new methods as well. But this is not sci-fi. The Google Car obviously forces us to address this, and every other transportation mode is becoming affected by this evolution. Major industrialists and operators are working on the autonomous train, the autonomous plane, the autonomous tractor, the autonomous ship, etc. As an illustration, the next part details an application to train transportation.
3 Application to train transportation
Future autonomous trains are clearly the kind of CPS considered in this article, and will thus be concerned with ethicality. The evolution from "classical trains" towards the "autonomous CPS-train" is governed by several driving factors. The first one is obviously related to the limitation of crashes and accidents through better monitoring of the train and its environment. A second one concerns the possibility, through the use of autonomous trains, of increasing the number of trains used per time unit, and thus the number of passengers. Indeed, the capacity of the infrastructure, which represents a large amount of invested money, cannot be easily augmented. To increase the number of people transported, the solution offered by the autonomous train is to increase the number of trains by reducing the time between two trains on a track and by adopting train-pooling-like systems that are only controllable using advanced AI technologies. The use of a similar approach for freight transportation over long distances would also limit the need for drivers on tiring, long trips. Last, energy is an important driver. This concerns not only energy savings during a journey, but also the consensual limitation of the energy peak used within a fleet of autonomous trains, based on negotiations among these trains to organize the smoothing of the energy used. The estimated gains are huge.
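A minimal sketch of this peak-smoothing idea, under strong simplifying assumptions (a fixed shared power cap, discrete time slots, and a greedy central allocation standing in for an actual inter-train negotiation protocol), could look as follows:

```python
# Minimal sketch of fleet energy-peak smoothing: trains stagger their
# high-power phases so the aggregate draw stays under a shared cap.
# The cap, slot count and power figures are hypothetical assumptions.

FLEET_CAP_MW = 20.0  # assumed shared power budget per slot
SLOTS = 6            # assumed time slots in the planning horizon

def allocate(trains_mw: list[float]) -> list[int]:
    """Greedily assign each train's high-power phase to the least-loaded slot."""
    load = [0.0] * SLOTS
    assignment = []
    for draw in trains_mw:
        slot = min(range(SLOTS), key=lambda s: load[s])
        if load[slot] + draw > FLEET_CAP_MW:
            raise ValueError("No feasible slot under the fleet cap")
        load[slot] += draw
        assignment.append(slot)
    return assignment

print(allocate([8.0, 7.5, 9.0, 6.0, 8.5]))  # -> [0, 1, 2, 3, 4]
```

A real deployment would replace the central greedy loop with the negotiation among trains evoked above; the sketch only shows why staggering reduces the fleet-level peak.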
We describe hereinafter two complementary activities that illustrate the ongoing search for the future autonomous train and provide some insights about RAME studies concerning it.
The first illustrative activity concerns a major national project led by the French national railway company, SNCF, which is working to implement the "autonomous train" within a few years. This "EFIA" project is a meta-project aiming to define the future development projects and target fleets of trains. It is managed by a technological research institute (IRT), Railenium. Railenium is defining the roadmap towards the "autonomous train" and specifying the deployment process, including the future applied projects to launch, accompanied by methodological support to develop them in collaboration with industrial and academic partners.
The second illustrative activity concerns a joint research laboratory, the SurferLab³, founded with Bombardier Transport, Prosyst (a French SME) and the University of Valenciennes and Hainaut-Cambrésis in France. It aims to develop models, methods and systems enabling the embedding of intelligent self-prognostic and health-management functionalities into trains, rendering the train more autonomous, able to actively self-diagnose and thus to actively negotiate with maintenance centers and other trains of the fleet to optimize the quality of the fleet service.

³ www.surferlab.fr
The conjunction of these two different activities clearly illustrates the will of the railway sector to work on the autonomous train (the Deutsche Bahn is also working on this topic [16]). For the moment, and consistently with our previous discussion about addressing the two levels of ethicality sequentially, these two activities consider only the first level of ethicality: integrity, safety and security. From discussions with industrialists in train transportation and researchers in autonomous systems, dependability and AI, even these first components are hard to address when assuming an autonomous behavior of the CPS-train. For example, the homologation of a train requires proving certain safety integrity levels (SIL). This is feasible through a complete enumeration of all possible situations by engineers for "classical", non-autonomous trains, but when the train starts to learn and behave autonomously, this is no longer possible. This means that even homologation rules must evolve as well. In addition, these two activities do not yet pay attention to altruism, accountability and equitability. The industrial partners acknowledge that these must be studied in the near future as well, but they do not know how to address them.
4 Conclusion and future works
It is estimated that in the USA, 94% of car crashes are due to driver errors [17]. This kind of statistic fosters the development of autonomous, self-learning CPS. One of the key issues to solve in the near future relates to the ethical behavior of these systems, which interact with humans and evolve in crowded environments. Ensuring the ethical behavior of designers does not mean that the designed system will behave ethically. This paper opens the debate on the crucial and urgent need to deal with this issue. There is, up to now, almost no scientific contribution on this topic, while lawyers have been working on it for several years. Defining ethicality, modeling it and ensuring it is for the moment quite impossible to do. This means that researchers now face fully open and genuinely new scientific research fields yet to be explored within the context of RAME studies.
5 Acknowledgment
This work was done within the context of a joint research lab, SurferLab (http://www.surferlab.fr/en/home), founded by Bombardier Transport, Prosyst and the University of Valenciennes and Hainaut-Cambrésis. SurferLab is scientifically supported by the CNRS and is partially funded by the ERDF (European Regional Development Fund). The authors would like to thank the CNRS, the European Union and the Hauts-de-France region for their support. Parts of the work presented in this paper were developed in collaboration with SNCF and Railenium (EFIA project).
References
1. Trentesaux, D., Knothe, T., Branger, G., Fischer, K.: Planning and Control of Maintenance, Repair and Overhaul
Operations of a Fleet of Complex Transportation Systems: A Cyber-Physical System Approach. In: Borangiu, T.,
Thomas, A., and Trentesaux, D. (eds.) Service Orientation in Holonic and Multi-agent Manufacturing. pp. 175–
186. Springer International Publishing (2015).
2. Huang, H.-M., Pavek, K., Novak, B., Albus, J., Messin, E.: A framework for autonomy levels for unmanned systems
(ALFUS). In: Proceedings of AUVSI Unmanned Systems (2005).
3. Trentesaux, D., Rault, R.: Designing Ethical Cyber-Physical Industrial Systems. Presented at the IFAC World
Congress , Toulouse, France (2017).
4. Burmaoglu, S., Sarıtas, O.: Changing characteristics of warfare and the future of Military R&D. Technological
Forecasting and Social Change. 116, 151–161 (2017).
5. Ackerman, E.: Fatal Tesla Self-Driving Car Crash Reminds Us That Robots Aren’t Perfect. IEEE Spectrum.
(2016).
6. Morahan, M.: Ethics in management. IEEE Engineering Management Review. 43, 23–25 (2015).
7. Bird, S.J., Spier, R.: Welcome to science and engineering ethics. Sci Eng Ethics. 1, 2–4 (1995).
8. van Gorp, A.: Ethical issues in engineering design processes; regulative frameworks for safety and sustainabil-
ity. Design Studies. 28, 117–131 (2007).
9. Kumar, N., Kharkwal, N., Kohli, R., Choudhary, S.: Ethical aspects and future of artificial intelligence. In: 2016
International Conference on Innovation and Challenges in Cyber Security (ICICCS-INBUSH). pp. 111–114
(2016).
10. Dreier, T., Döhmann, I.S. genannt: Legal aspects of service robotics. Poiesis Prax. 9, 201–217 (2012).
11. Delvaux, M.: Civil law rules on robotics, European Parliament Legislative initiative procedure 2015/2103.
(2016).
12. Thekkilakattil, A., Dodig-Crnkovic, G.: Ethics Aspects of Embedded and Cyber-Physical Systems. In: Computer
Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual. pp. 39–44 (2015).
13. Guiochet, J.: Trusting robots: Contributions to dependable autonomous collaborative robotic systems. Habilitation à diriger des recherches, Université de Toulouse 3 Paul Sabatier (2015).
14. Pascual, D.G., Kumar, U.: Maintenance Audits Handbook: A Performance Measurement Framework. CRC Press
(2016).
15. Goodall, N.J.: Can You Program Ethics Into a Self-Driving Car? IEEE Spectrum. (2016).
16. rt.com: German rail operator to launch self-driving trains in 5yrs, https://www.rt.com/business/346123-
german-railway-driveless-trains/.
17. Jenkins, R.: Autonomous vehicle ethics and laws: toward an overlapping consensus. New America (2016).