ETHICAL BEHAVIOUR OF AUTONOMOUS NON-MILITARY
CYBER-PHYSICAL SYSTEMS
Damien Trentesaux¹, Raphaël Rault²

¹ LAMIH UMR CNRS 8201, SurferLab, Univ. Valenciennes, Le Mont Houy,
59313 Valenciennes Cedex, France.
damien.trentesaux@univ-valenciennes.fr

² Capon & Rault Avocats (law firm),
7 rue de l'Hôpital Militaire, 59800 Lille, France
r.rault@capon-rault.com
Keywords: ethical behavior, complex system, cyber-physical systems, autonomous vehicles, artificial
intelligence, train transportation, RAMS, RAME
Abstract.
Autonomous non-military cyber-physical systems are widely studied in research, but there are still
few applications in industry. One of the reasons lies in the fact that there is still no guarantee for these
systems regarding their safety and their ability to behave in a non-hazardous way, mainly because of
the complexity induced by their learning abilities coupled with the high ability of their composing
mechatronic elements to interact and cooperate. Moreover, cyber-physical systems are intended to be
merged into socio-technical systems and to interact with humans. As a consequence, the study of their
ethical behaviour currently represents a major stake. Meanwhile, this stake is still not addressed by the
scientific community, while 1) sci-fi literature and movies have addressed it for a long time, 2) some
autonomous road vehicles have already injured people, and 3) the EU Parliament has launched a
procedure dealing with the establishment of civil law rules for autonomous learning robots. This paper
therefore intends to open the debate on this topic and suggests extending dependability studies to
integrate ethicality as a new dimension. New emerging research fields are identified and an illustration
in autonomous train transportation is presented.
Introduction
Cyber-physical systems (CPS) are complex systems aiming to fulfil a global function and integrating
communicating mechatronic parts, sensors and effectors, merging the physical and the digital worlds
[1]. They interact with human operators. In our work, we consider a more complex sub-class of CPS:
those that are autonomous. In this paper, autonomy refers to the ability of integrated sensing,
perceiving, analysing, communicating, planning, decision-making, and acting/executing, to achieve
goals [2]. These CPS can thus behave and evolve in space independently of the decisions of a human
operator or of their designer. They can integrate Artificial Intelligence-based learning mechanisms
enabling them to improve their decisions over time and adapt to an evolving environment [3]. The CPS
considered in this paper are also non-military: their primary function concerns neither the destruction
of assets nor harm to people, as can be found in the military or defence industries [4].
This paper, co-written by a lawyer specialized in the domain of digital property and cyber-security
and a researcher working in the field of industrial internet and CPS, deals with the ethical behaviour of
this kind of autonomous non-military CPS (for simplicity, we no longer specify "autonomous and
non-military" in the remainder of the paper when using the acronym "CPS"). The aim of the authors is
to encourage researchers working on this kind of CPS to pay attention to the possible consequences of
their design on the welfare of the humans interacting with these CPS, as well as to their possible
responsibility in case of an accident.
Futuristic non-scientific ideas, more relevant to entertainment and sci-fi than to science, initially
brought attention to this subject (as illustrated by Asimov's laws of robotics, Mary Shelley's creature,
Philip K. Dick's short stories, etc.). Meanwhile, this subject has become reality and entered the public
sphere with the widely discussed recent case of the Tesla self-driving car crash that led to a death [5].
As introduced, the scope of the paper concerns ethics. Ethics is initially a field of philosophy.
According to [6], ethical behaviour is concerned with actions that are in accord with cultural
expectations relating to morality and fairness. Ethical aspects in engineering are not new, and some
journals are specialized in that field [7]. These aspects have been studied in various fields of
engineering, such as design [8] and, obviously, bio-engineering, including genetics [9]. In the literature,
one can find two kinds of ethical studies [3]:
techno/engineering ethics, which deals with the moral behaviour of humans designing artificial beings
(the issue being the ethical design of CPS), and
machine ethics, which deals with the moral behaviour of artificial beings (the issue being the design of
ethical CPS).
In a previous paper, a review of ethical aspects in complex system engineering was made [3]. Its main
conclusions are provided hereinafter.
Regarding techno ethics, proposals mainly foster the adoption of techno-ethical behaviours by
researchers, relying on the idea of charters to be signed (a "Hippocratic oath").
In this paper, since we deal with the ethical behaviour of CPS, our work is rather related to machine
ethics. Regarding this field, it is worth noticing that the main breakthroughs are coming from lawyers.
Starting from the point that there exists a huge corpus of "human-being-centred" laws, while almost
none deal with "artificial beings", lawyers have started to pay attention to robotics and artificial
intelligence (AI). One of their surprising approaches is to consider the creation of a new species,
composed of intelligent autonomous robots and systems, alongside the human one. This approach
implies that human rights constitutions should be revised accordingly. For example, [10] discussed the
organization of administrative control and the legal liability regime that applies to service robots, as
well as the issue of autonomy of service robots. Another example, which is from the authors' point of
view among the most remarkable studies, is currently led at the European Union (EU) Parliament level,
which has worked on legal proposals related to robotics and AI [11]. Apart from the recommendations'
main goals, which are devoted to human safety, privacy, integrity, dignity and autonomy, the EU
Parliament aims to unify and incentivise European innovation in the area of robotics and AI. According
to these recommendations, the definition
of a smart autonomous robot (SAR) has been proposed: a SAR acquires autonomy through sensors
and/or by exchanging data with its environment (inter-connectivity), and trades and analyses data; it
is self-learning (optional criterion); it has a physical support; and it adapts its behaviour and actions to
its environment. Following this idea, different categories of SAR would be created. It is advised that a
specific legal status for robots be created, so that at least the most sophisticated SAR could be given the
status of electronic persons. A SAR would also have rights (intellectual property and personal data…)
and obligations (a SAR should be held liable for its actions according to the EU Parliament, which
asserts that "the greater a robot's learning capability or autonomy is, the lower other parties'
responsibility should be, and the longer a robot's 'education' has lasted, the greater the responsibility
of its 'teacher' should be"). The concept of SAR is obviously close to the kind of CPS we consider in this
paper. Meanwhile, only few works dealing explicitly, from a scientific point of view, with machine
ethics in SAR/CPS are available [3]. Among them, let us mention [12], who proposed a framework for
the assessment and attribution of responsibility based on a classification of CPS with respect to the
amount of autonomy and automation involved. Their proposal is based on the assumption
that different types of decision making occur inside a CPS: automatic, semi-automatic, semi-autonomous
and autonomous decisions.
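For the sake of discussion only, the liability gradient promoted by the EU Parliament could be caricatured as a toy allocation rule over the decision types of [12]. The numeric autonomy degrees and the linear split below are our own hypothetical assumptions, not values taken from either source:

```python
# Toy sketch (our assumption, not from [11] or [12]): the more autonomous
# the decision type, the larger the share of responsibility attributed to
# the CPS itself rather than to its designer/operator.

DECISION_AUTONOMY = {
    # decision types from [12], mapped to a hypothetical autonomy degree in [0, 1]
    "automatic": 0.0,
    "semi-automatic": 0.3,
    "semi-autonomous": 0.6,
    "autonomous": 1.0,
}

def responsibility_split(decision_type: str) -> tuple[float, float]:
    """Return (cps_share, other_parties_share); the two shares sum to 1."""
    autonomy = DECISION_AUTONOMY[decision_type]
    return autonomy, 1.0 - autonomy

print(responsibility_split("semi-autonomous"))  # (0.6, 0.4)
```

Even this caricature makes the legal difficulty visible: the split hinges entirely on how the autonomy degree of a decision is assessed after the fact, which is precisely what remains undefined.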
Given the important stakes introduced regarding the emergence of machine ethics issues in CPS on
the one hand, and the lack of research activity from the scientific community, far behind the activity
led in the legal field, on the other hand, the aim of this paper is to promote the emergence of a new
field of scientific research: the ethical behaviour of autonomous, non-military CPS.
To contribute to such an ambitious objective, our idea is to start from the scientific theoretical
grounds that can be considered the closest to this new field and to extend them to encompass machine
ethics in CPS engineering. From this perspective, one of the closest research fields found, and the one
discussed in this paper, is "dependability analysis", for which recent studies on autonomous systems
and robots are already available [13]. Of course, this does not prevent machine ethics in CPS
engineering from also benefiting from other fields such as AI (multi-agent systems, artificial neural
networks, etc.), mobile robotics, embedded/mechatronic systems, etc.
The next part summarizes the terminology from the historical field of dependability and points out
the angle through which machine ethics may foster the generalization of dependability to encompass
ethical aspects. Applications and illustrations are taken from the transportation sector, whatever the
mode (aeronautical, naval or ground transportation); the emergence of autonomous systems in this
field, and especially the Google car, is a clear illustration of where our society is heading.
1 Dependability analysis: RAMS studies
Even if the word has been known to researchers for years, there is no real consensual, precise
definition of dependability. For example, in the standard IEC 60300-1, dependability is defined as "the
collective term used to describe an operation based on the availability and the influential factors:
reliability, capacity and support to the maintenance" [14]. It is commonly accepted by researchers that
dependability is composed of three main parameters (reliability, maintainability and availability), to
which a fourth parameter is more and more often added: safety. The acronyms RAM and RAMS (along
with the related term "RAMS studies") are usually used by researchers working on dependability.
It is beyond the scope of this paper to discuss the definition of these parameters, the related literature
being abundant. Thus, adapted from [14], the definitions chosen in this paper for illustration purposes
are the following:
Reliability: the ability of a system to operate satisfactorily during a determined period of
time and in specific conditions of use. It can be expressed as a mean time to failure (MTTF).
Maintainability: the capacity of a system to be maintained, where maintenance is the series
of actions taken to restore an element to, or maintain it in, an effective operating state. It can
be expressed as a mean time to repair (MTTR).
Availability: the ability to perform when called upon and in certain surroundings. It can be
expressed as a percentage: the proportion of time a system is in a functioning condition over
a given time period.
Safety: the ability to detect undesired events and to apply rules to limit (mitigate) their
consequences.
More recently, security has also been added as a fifth parameter. Security is the ability to operate
satisfactorily despite intentional external aggressions. In that sense, security is related to external
events, while reliability is related to internal events.
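As an illustration of the quantitative side of these definitions, the classical steady-state relation between MTTF, MTTR and availability can be sketched as follows; the failure and repair figures are hypothetical:

```python
# Illustrative sketch (not from the paper): the classical steady-state
# availability estimate derived from the MTTF and MTTR definitions above.
# Time units and values are hypothetical.

def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability A = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A subsystem failing on average every 5,000 h and repaired in 10 h:
a = availability(5000.0, 10.0)
print(f"Availability: {a:.4%}")  # about 99.80%
```

Such closed-form indicators exist precisely because reliability and maintainability are well formalized; no comparable formula exists yet for the ethical dimension discussed next.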
Obviously, safety and security are the indicators closest to ethical aspects, but one can see that they do
not cover all the aspects relevant to ethics. From our point of view, safety and security must therefore
be studied with a wider view, as discussed in the following part.
2 Dependability of CPS: from RAMS to RAME studies
The idea that we defend in this paper is that it seems possible to enlarge the safety/security aspects of
RAMS studies towards machine ethics as a federating, more global concept encompassing safety and
security. According to this idea, we suggest that RAMS now become RAME: Reliability, Availability,
Maintainability and Ethicality. From our point of view and in this paper, ethicality refers to the ability
of a system to behave in an ethical manner. The core question is obviously what an "ethical manner" is.
The following section deals with this aspect.
2.1 Ethicality and ethical behavior of a CPS
As introduced, safety and security can be seen as components, yet to be adapted, of ethicality since,
from our perspective, ethicality is a more global concept, enlarging the fields of research founded on
these two components. One can notice that the elements discussed hereinafter present some links with
Asimov's laws of robotics. It is also noticeable that the components described hereinafter are under
the responsibility of the designer of the CPS as long as these CPS are "simple", in the sense that all of
their states and behaviours are clearly identified, bounded, controlled and certified. Meanwhile, the
responsibility of the CPS itself can be engaged as soon as it becomes autonomous and able to learn,
and the more it learns, the less the responsibility of the designer is engaged, consistently with the
previous discussion about the legal responsibilities of a SAR.
These components are described hereinafter.
A first component of ethicality is integrity, that is, the ability to behave so that others trust the
information coming from the CPS and are confident in the ability of the CPS to engage actions to reach
a clear, readable, public objective (even a potentially varying one, which must then be announced and
publicized). This obviously includes the ability of the CPS not to disclose sensitive information
gathered about the humans and other CPS interacting with it. The designer of the CPS is highly
responsible for this (ethical design), but the CPS is the one that risks disclosing sensitive information
when it takes decisions in an autonomous way, or the one that may change its objectives with no
explanation, which influences the degree of trust from humans, and thus its ethicality. Integrity also
includes the ability for others to verify the algorithms and to trace (explain) the decisions. For this
aspect, a common current solution is to implement open (no black box), certified, checkable-by-others
computer code, but this is not sufficient, since the CPS must also be able to explain its choices (for
example, when investigating to determine the root cause of an accident in which the CPS was
involved).
Safety is a second component of ethicality. As already introduced, safety concerns the people involved
with, and in direct connection to, the considered CPS. Ensuring that a system behaves in a safe way,
limiting the risk of injuries to the human beings that depend on the CPS, is the least that one could
require to determine whether a CPS behaves ethically. For example, energy providers must equip their
nuclear plants with procedures and rules to limit hazardous events for employees and nearby urban
areas as much as possible. A cobot system, intended to work jointly with a human operator, has
barriers and safety rules to avoid harming human operators (e.g., torque or effort detection). But from
an ethical point of view, this is not so simple. A typical emerging ethical issue at this level concerns the
kind of decisions that could maintain the safety of the CPS while, at the same time, being harmful for
people. For example, a safe decision for an autonomous high-speed train that belatedly detects an
animal on the track could be to brake to avoid the collision, while at the same time this emergency
braking risks hurting several of its 1,000 passengers, the ones that are not seated at that moment.
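To make the dilemma concrete, it can be caricatured as a naive expected-harm comparison. Every number below is invented, and the model's very blindness (the animal does not appear in it at all) is part of the point:

```python
# Caricature of the braking dilemma above (all figures are invented for
# illustration): compare the expected harm to passengers of each action
# and pick the lower one. Real certification could never rest on such a
# naive model.

def expected_harm(p_event: float, people_exposed: int) -> float:
    """Expected number of people harmed: probability times exposure."""
    return p_event * people_exposed

# Emergency braking: small probability of injuring each standing passenger.
brake = expected_harm(p_event=0.02, people_exposed=50)   # 50 standing
# No braking: near-certain collision with the animal, no passengers hurt.
no_brake = expected_harm(p_event=0.95, people_exposed=0)

decision = "brake" if brake < no_brake else "do not brake"
print(decision, brake, no_brake)  # do not brake 1.0 0.0
```

The toy model coldly recommends not braking because the animal's welfare is simply absent from it, which illustrates why safety alone cannot capture ethicality and why altruism is introduced below as a separate component.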
A third component of ethicality is security. Security refers to the resilience of the CPS when facing
unexpected aggressive actions. It concerns the ability of a CPS to limit its vulnerability to intrusions,
cyber-attacks and aggressive behaviours (threats) coming from systems and events not only outside
but also inside the CPS, which is a new aspect induced by the principle of ethicality. Regarding outside
threats, cyber-defence mechanisms (firewalls, anti-malware, etc.) are of course among the best-known
approaches studied by industrialists and researchers. Regarding inside threats, and from a machine
ethics point of view, this component becomes delicate to address. For example, should the autopilot of
a plane be able to override the command of a human pilot if that command may lead to crashing the
plane (e.g., suicide)? If yes, how can one ensure that this override does not lead to a more critical
situation (e.g., difficulty for the cruise CPS to land the plane)? It is noticeable that security and
integrity may be conflicting components: increasing integrity by using open-source AI code may
facilitate the work of cyber-attackers, thus reducing the security of the CPS.
Altruism (or caring) is a fourth component of ethicality and one of its ultimate components. It has
never been addressed by researchers in the field of CPS. Altruism can be seen as a complementary
view of safety: altruism is more aligned with the welfare of people a priori not interacting with the
CPS, while safety focuses on the people interacting with the CPS. It concerns the ability to behave
according to the welfare of others, others being other technical systems, other CPS or human beings.
Altruism may also contradict one or several of the other components and RAM indicators. For
example, an altruistic autonomous boat that detects an "SOS" message delays its current mission to
help the senders of the SOS. But through this altruistic decision, it will de facto reduce its availability
(its mission may be cancelled) and this decision may impact its reliability as well (for example, overuse
of its turbines, used at full throttle to reach the emitter of the SOS as soon as possible). Another
example is that of an autonomous car that has to break the law in order to save a life (e.g., an
emergency trip to a hospital, crossing a line to avoid a pedestrian, etc.) [15], reducing its
accountability (see below). Of course, reducing accountability, reliability or availability is not a
necessary condition for behaving in an altruistic way. For example, an altruistic autonomous train
detects an accident on a road close to its track (fire, smoke…) and makes an emergency call to its
central operator while continuing its travel to its destination at the same time. Its mission is thus not
impacted.
Accountability (or liability) is another of the ultimate components of ethicality. Accountability
ensures that actions taken by the CPS engage its responsibility in case of a problem. Being quite a
futuristic idea, like altruism, when dealing with non-human-based systems, this is an important aspect
promoted by the EU through the legal proposal previously introduced. If a CPS is accountable, then in
case of a hazard, its legal responsibility could be engaged, leading to the allocation of insurance funds
to injured people. The CPS could be sued for its hazardous behaviour. Telling people that the
autonomous train they are using is covered by the insurance of the train itself, to which it contributes
through the payment of a tax coming from its own earned money, renders this CPS a new accountable
entity, belonging to a new species, alongside the human one.
Equitability (financial fairness) is also another ultimate component of ethicality. It ensures that if the
CPS becomes an artificial, accountable person able to spend money for insurance (accountability),
then, as a counterbalance, it must be able to earn money. This must be done in an equitable (fair) way,
with a balanced distribution of the money earned through its use to ensure its ethical behaviour. This
component represents a logical evolution of the previously introduced legal developments related to
SAR, since SAR and CPS are artificial beings potentially able to create and generate knowledge and
ideas and to improve their performance compared to other SAR. This process should increase the
competitiveness of a CPS and make it gain customers. For example, an autonomous train known to
behave more ethically than a competitor train may attract more customers. Gaining market shares
means earning more money.
Fig. 1 summarizes this evolution from RAMS to RAME studies, where (E)thicality is decomposed into
the previously listed components. In this figure, an intuitive dichotomy splitting ethicality into two
levels is represented. The first level represents the immediate research activity that needs to be
started. The second level addresses the more elaborate, complicated components of ethicality, which
are more delicate to imagine. From our perspective, the second level cannot be reached until the first
one is sufficiently elaborated.
Fig. 1. From RAMS studies to RAME studies of CPS.
2.2 Some key relevant issues in RAME studies of CPS
From this definition of ethicality, one can identify a large number of issues. We list hereinafter some of
the most challenging ones for illustration purposes:
To ensure an ethical behaviour, a first reflex would be to design rules to artificially force the ethical
behaviour of a CPS. For example, "when driving, priority is gradually set first to animals, then trucks,
then pedestrians, then bikes, then motorbikes, then cars, then buses". But an ethical behaviour is still a
complex concept to formalize, and defining all possible ethical rules in an exhaustive and meaningful
way seems an insurmountable task up to now.
As introduced, the different components of ethicality can be conflicting, meaning that improving one
may lead to the degradation of another (e.g., improving altruism may degrade safety, improving
integrity may degrade security, etc.).
Deciding which decision is more ethical than another is sometimes a delicate choice to make. The
classical "trolley problem", which describes a "no-win" situation where each possible choice (and
especially inaction) leads to the death of one or several people, is a clear illustration of this.
Linked to the previous issue, modelling an ethical behaviour, optimizing it and simulating situations to
evaluate the degree of ethicality over a short or long time horizon will be a strong issue as well.
Measuring such a degree and stating that "system 1 behaves more ethically than system 2 during this
amount of time" seems quite impossible to do with the current scientific state of the art.
Last, the corpus of current laws is described in a textual way. It will not be possible to use a similar
approach to define legal rules and laws for CPS. Thus, the co-development by lawyers and scientists of
ethical rules applicable both to human and artificial beings becomes a key issue. This could be done
using mathematical, formal or algorithmic editions of laws and social rules to be embedded into CPS,
which is a necessary condition to make them behave ethically.
To summarize, evolving from RAMS studies to RAME studies of CPS opens a challenging, really new
field of research for scientists, and it is time for our community to appropriate this field. For sure,
everything remains to be done. The intention of this paper is only to draw attention to this urgent
need. New theories have to be invented, new models defined, and new indicators and new methods as
well. But this is not sci-fi. The Google car obviously forces us to address this, and every other
transportation mode is becoming affected by this evolution. Major industrialists and operators are
working on the autonomous train, the autonomous plane, the autonomous tractor, the autonomous
ship, etc. As an illustration of this, the next part details an application to train transportation.
3 Application to train transportation
Future autonomous trains are clearly the kind of CPS considered in this article, and will thus be
concerned by ethicality. The evolution from "classical trains" towards the "autonomous CPS-train" is
governed by several driving factors. The first one is obviously related to the limitation of crashes and
accidents through better monitoring of the train and its environment. A second one concerns the
possibility, through the use of autonomous trains, of increasing the number of trains used per time
unit, and thus the number of passengers. Indeed, the capacity of the infrastructure, which represents a
large amount of invested money, cannot be easily augmented. To augment the number of people
transported, the solution offered by the autonomous train is to increase the number of trains by
reducing the time between two trains on a track and by adopting train-pooling-like systems, only
controllable using advanced AI technologies. The use of a similar approach for freight transportation
over long distances would also limit the use of drivers on long, tiring trips. Last, energy is an important
driver. This concerns not only energy savings during a trip, but also the consensual limitation of the
energy peak used within a fleet of autonomous trains, based on negotiations among these trains to
organize the smoothing of the energy used. The estimated gains are huge.
We describe hereinafter two complementary activities to illustrate the ongoing work towards the
future autonomous train and to provide some insights about RAME studies concerning it.
The first illustrative activity concerns a major national project led by the French national railway
company, SNCF, which is working to implement the "autonomous train" within a few years. This
"EFIA" project is thus a meta-project aiming to define the future development projects and target
fleets of trains. It is managed by a technological research institute (IRT), Railenium. Railenium is
defining the roadmap towards the "autonomous train" and specifies the deployment process,
including the future applied projects to launch, accompanied by methodological support to develop
them in collaboration with industrial and academic partners.
The second illustrative activity concerns the SurferLab (www.surferlab.fr), a joint research laboratory
involving Bombardier Transport, Prosyst (a French SME) and the University of Valenciennes and
Hainaut-Cambrésis in France. It aims to develop models, methods and systems enabling the
embedding of intelligent self-prognostic and health-management functionalities into trains, rendering
these trains more autonomous, able to actively self-diagnose and thus to actively negotiate with
maintenance centres and other trains of the fleet to optimize the quality of the fleet service.
The conjunction of these two different activities clearly illustrates the will of the railway sector to
work on the autonomous train (Deutsche Bahn also works on this topic [16]). For the moment, and
consistently with our previous discussion about addressing the two levels of ethicality sequentially,
these two activities consider only the first level of ethicality: integrity, safety and security. From
discussions with industrialists in train transportation and researchers in autonomous systems,
dependability and AI, even these first components are hard to address when assuming an autonomous
behaviour of the CPS-train. For example, the homologation of a train requires proving certain safety
integrity levels (SIL). This is feasible through a complete enumeration of all possible situations by
engineers for "classical", non-autonomous trains, but when trains start to learn and behave
autonomously, this is no longer possible. That means that even homologation rules must evolve as
well. In addition, these two activities do not even address altruism, accountability and equitability.
Industrial partners acknowledge that these must be studied in the near future as well, but they do not
know how to address them.
4 Conclusion and future works
It is estimated that in the USA, 94% of car crashes are due to driver errors [17]. This kind of statistic
fosters the development of autonomous, self-learning CPS. One of the key issues to solve in the near
future relates to the ethical behaviour of these systems, interacting with humans and evolving in
crowded environments. Ensuring the ethical behaviour of designers does not mean that the designed
system will behave ethically. This paper opens the debate on the crucial and urgent need to deal with
this issue. There is up to now nearly no scientific contribution on this topic, while lawyers have been
working on it for several years. Defining ethicality, modelling it and ensuring it is for the moment
quite impossible. This means that researchers now face fully open and really new scientific research
fields yet to explore within the context of RAME studies.
5 Acknowledgment
This work was done within the context of a joint research lab, SurferLab
(http://www.surferlab.fr/en/home), founded by Bombardier Transport, Prosyst and the University of
Valenciennes and Hainaut-Cambrésis. SurferLab is scientifically supported by the CNRS and is
partially funded by the ERDF (European Regional Development Fund). The authors would like to
thank the CNRS, the European Union and the Hauts-de-France region for their support. Parts of the
work presented in this paper were developed in collaboration with SNCF and Railenium (EFIA
project).
References
1. Trentesaux, D., Knothe, T., Branger, G., Fischer, K.: Planning and Control of Maintenance, Repair and Overhaul
Operations of a Fleet of Complex Transportation Systems: A Cyber-Physical System Approach. In: Borangiu, T.,
Thomas, A., and Trentesaux, D. (eds.) Service Orientation in Holonic and Multi-agent Manufacturing. pp. 175–
186. Springer International Publishing (2015).
2. Huang, H.-M., Pavek, K., Novak, B., Albus, J., Messin, E.: A framework for autonomy levels for unmanned systems
(ALFUS). In: Proceedings of AUVSI Unmanned Systems (2005).
3. Trentesaux, D., Rault, R.: Designing Ethical Cyber-Physical Industrial Systems. Presented at the IFAC World
Congress, Toulouse, France (2017).
4. Burmaoglu, S., Sarıtas, O.: Changing characteristics of warfare and the future of Military R&D. Technological
Forecasting and Social Change. 116, 151–161 (2017).
5. Ackerman, E.: Fatal Tesla Self-Driving Car Crash Reminds Us That Robots Aren’t Perfect. IEEE Spectrum.
(2016).
6. Morahan, M.: Ethics in management. IEEE Engineering Management Review. 43, 23–25 (2015).
7. Bird, S.J., Spier, R.: Welcome to science and engineering ethics. Sci Eng Ethics. 1, 2–4 (1995).
8. van Gorp, A.: Ethical issues in engineering design processes; regulative frameworks for safety and sustainabil-
ity. Design Studies. 28, 117–131 (2007).
9. Kumar, N., Kharkwal, N., Kohli, R., Choudhary, S.: Ethical aspects and future of artificial intelligence. In: 2016
International Conference on Innovation and Challenges in Cyber Security (ICICCS-INBUSH). pp. 111–114
(2016).
10. Dreier, T., Döhmann, I.S. genannt: Legal aspects of service robotics. Poiesis Prax. 9, 201–217 (2012).
34
11. Delvaux, M.: Civil law rules on robotics, European Parliament Legislative initiative procedure 2015/2103.
(2016).
12. Thekkilakattil, A., Dodig-Crnkovic, G.: Ethics Aspects of Embedded and Cyber-Physical Systems. In: Computer
Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual. pp. 39–44 (2015).
13. Guiochet, J.: Trusting robots : Contributions to dependable autonomous collaborative robotic systems. Habilita-
tion à diriger des recherches, Université de Toulouse 3 Paul Sabatier (2015).
14. Pascual, D.G., Kumar, U.: Maintenance Audits Handbook: A Performance Measurement Framework. CRC Press
(2016).
15. Goodall, N.J.: Can You Program Ethics Into a Self-Driving Car? IEEE Spectrum. (2016).
16. rt.com: German rail operator to launch self-driving trains in 5yrs, https://www.rt.com/business/346123-
german-railway-driveless-trains/.
17. Jenkins, R.: Autonomous vehicle ethics and laws: toward an overlapping consensus. New america (2016).