TITLE:
Translation of the text "Ethik-Kommission Automatisiertes Und
Vernetztes Fahren -- Bericht Juni 2017" as released by the
Bundesministerium für Verkehr und digitale Infrastruktur
AUTHOR:
Jan-Philip van Acken
DATE:
10.07.2017
Translation of the text "Ethik-Kommission Automatisiertes Und
Vernetztes Fahren -- Bericht Juni 2017" as released by the
Bundesministerium für Verkehr und digitale Infrastruktur
by Jan-Philip van Acken
is licensed under a
Creative Commons Attribution 4.0 International License.
Based on a work at
http://www.bmvi.de/SharedDocs/DE/Anlage/Presse/084-dobrindt-bericht-der-ethik-kommission.pdf.
Translated by Jan-Philip van Acken. This translation is unrelated to the official extract translated by
the Ministry under http://www.bmvi.de/SharedDocs/EN/Documents/G/ethic-commission-report.pdf
The extract released by the Ministry only contains chapters I and III; this text covers all chapters
contained in the German original.
Translator commentary is marked with square brackets [for example -Ed.];
difficult or unclear translations retain the original German phrases in quotation marks ("in Klammern")
version 10-07-2017
Chapter I Introduction
Around the world, the digitalisation of mobility is advancing. Automation of individual traffic
on public roads means technical driving aids that make things easier for the driver by assisting,
or by partially or entirely replacing, the driver. Partial automation as part of a vehicle's
equipment is already an everyday occurrence; highly and fully automated systems that can –
without human intervention – change lanes, brake and steer are either available or just about
ready for production. In Germany as well as in the USA, testing grounds are available where highly
automated vehicles are allowed to drive. For public transport, driver-less robot taxis and buses are
being developed and tested. Even today, processors are available or in development that are
capable – together with certain sensors – of recognising the events and surroundings of a car in real time,
of determining the car's own position on maps, and of dynamically changing the route depending on
the state of traffic. A "perception" of the vehicle's surroundings that comes ever closer to
perfection leads us to expect an ever better differentiation between traffic participants, obstacles
and hazardous situations. This should notably heighten traffic safety; it cannot be excluded that the
end of this development presents us with vehicles that are inherently safe, avoiding accidents under
all conditions. However, given the current state of technology and the reality of a heterogeneous,
non-interconnected traffic environment, a complete avoidance of accidents will not be possible. This forces decisions
to be made when programming the software of highly and fully automated vehicle systems. The
development of the technology forces politicians and society to think about the opportunities that loom on
the horizon. Decisions need to be made on whether the approval ("Zulassung") of automated cars is ethically
justifiable or perhaps even ethically mandatory ("ethisch geboten"). As soon as approvals are granted
– internationally already on the horizon – it will be a matter of their conditions and configurations.
On a fundamental level, we need to decide how much dependency on technologically complex systems – in
the future to a great degree based on machine learning and artificial intelligence – we want
to accept in order to gain, in return, more safety, mobility and comfort. Which measures for
controllability, transparency and data autonomy are required? Which guidelines for technological
development are necessary so as not to blur the contours of a humane society that places the individual
human being, the free development of their personality and their social claim to dignity ("Achtungsanspruch") at the
centre of its legal system?
Chapter II Proceedings ("Verfahrensverlauf") of the ethics committee on automated and inter-connected
driving
The Ethics Committee on Automated and Inter-connected Driving, as appointed by the Bundesminister
für Verkehr und digitale Infrastruktur [Alexander Dobrindt -Ed.], constituted itself on September 30th,
2016. It is an expert committee with an interdisciplinary alignment, plurally composed ("plural
besetzt"), led by the former Bundesverfassungsrichter and current university professor in Bonn, Dr.
Dr. Udo Di Fabio. The mission of the committee is to "work out the required ethical guidelines for
automated and inter-connected driving." The committee consists of representatives from
philosophy, law, social sciences, technology impact analysis, consumer protection, the car industry, as
well as software development. The questions and problem areas emerging from the mission were
assigned to five fields of work. To work on each field, research/task groups ("Arbeitsgruppen") were
formed, each headed by a committee member. The ethics committee convened for five sittings
inside the Bundesministerium für Verkehr und digitale Infrastruktur in Berlin. It worked freely and
independently. On an additional date, the committee took a test drive ("Praxisfahrt") in automated
and inter-connected experimental vehicles by different manufacturers.
Research/task group 1 "Inevitable Damage Situations" ("Unvermeidbare Schadenssituationen") was
headed by Prof. Dr. Dr. Eric Hilgendorf.
Questions concerning the data that accumulates during automated and inter-connected driving were
worked on by research/task group 2 "Data availability, data security, data economy"
("Datenverfügbarkeit, Datensicherheit, Datenökonomie“) headed by Prof. Dr. Dirk Heckmann.
Prof. Dr. Armin Grunwald headed research/task group 3 "Interaction conditions for man and
machine" ("Interaktionsbedingungen für Mensch und Maschine"), which investigated the interface/point of
intersection ("Schnittstelle") between human and technology.
Research/task group 4 – "Looking at ethical contexts beyond traffic" ("Ethische Kontextbetrachtung
über den Straßenverkehr hinaus") – looked into the technology of automated and inter-connected
driving in the context of further (connected) technologies and was headed by Prof. Dr. Dr. Matthias
Lutz-Bachmann.
Questions of responsibility for systems that are open to development ("entwicklungsoffen") were worked on by
research/task group 5 "Responsibility range for software and infrastructure"
("Verantwortungsreichweite für Software und Infrastruktur"), headed by Prof. Dr. rer. nat. Dr.-Ing.
E. h. Henning Kagermann.
External experts were heard and questioned in a separate sitting in January 2017. [I will skip
translations of their exact titles and appointments for the most part and keep the German ones -Ed.]
The experts presented their main points in a short presentation ("Kurzreferat") in response to the questions
faced by the respective fields of work, and reacted to questions and commentary by the committee.
Dr. Tobias Miethaner (Department head digital society, ("Digitale Gesellschaft")
Bundesministerium für Verkehr und digitale Infrastruktur) gave information regarding goals and
activities of the Bundesregierung concerning automated and inter-connected driving.
Prof. Dr. Dr. h.c. Julian Nida-Rümelin (Staatsminister a.D., LMU München) gave a lecture ("referieren")
on ethical aspects concerning – among other things – the so-called dilemma situations.
Elaborations on questions of data protection were given by Ministerialrat Peter Büttgen
(Referatsleiter, Bundesbeauftragte für den Datenschutz und die Informationsfreiheit).
Hon.-Prof. Markus Ullman (Referatsleiter, Bundesamt für Sicherheit in der Informationstechnik)
dedicated himself to questions of IT security.
Prof. Dr.-Ing. Markus Maurer (Leiter des Instituts für Regelungstechnik, TU Braunschweig)
explained – among other things – technological and societal aspects of autonomous driving in his
contribution.
Dr.-Ing. Joachim Damasky (Geschäftsführer Bereich Technik, VDA e.V.) focused on the depiction
of Human-Computer-Interaction. ("Mensch-Maschine-Interaktion")
Prof. Dr. theol. Peter Dabrock (Vorsitzender des Deutschen Ethikrats, FAU Erlangen-Nürnberg) and
Prof. Dr. phil. Dr h.c. Dieter Birnbacher (HHU Düsseldorf) gave their views on ethical questions in
the context of different new technologies from other areas of life.
Questions regarding the responsibility/liability ("Verantwortung") in the case of
developing/evolving systems were handled by Prof. Dr. Michael Decker (Karlsruher Institut für
Technologie).
Composition of the committee [for positions cf. original document]
Prof. Dr. Dr. Udo Di Fabio (Chairman)
Prof. Dr. Dr. h.c. Manfred Broy
Renata Jungo Brüngger
Dr. Ulrich Eichhorn
Prof. Dr. Armin Grunwald
Prof. Dr. Dirk Heckmann
Prof. Dr. Dr. Eric Hilgendorf
Prof. Dr. rer. nat. Dr.-Ing. E. h. Henning Kagermann
Weihbischof Dr. Dr. Anton Losinger
Prof. Dr. Dr. Matthias Lutz-Bachmann
Prof. Dr. Christoph Lütge
Dr. August Markl
Klaus Müller
Kay Nehm
Chapter III – Ethical rules for automated and connected vehicular traffic:
[An official translation of these 20 points does exist, released by the Federal Ministry itself, cf.
http://www.bmvi.de/SharedDocs/EN/Documents/G/ethic-commission-report.pdf
Below is my translation, not the Ministry version -Ed.]
1. The most important aim of partially or fully automated traffic systems is to enhance the
safety of all those involved in traffic. A further aim is to enhance the possibilities for mobility
and to allow for further benefits. Technological development follows the principle of personal
autonomy, in the sense of a self-responsible freedom to act.
2. The protection of humans takes precedence over every other consideration of utility. The aim is to reduce
damages up to and including their complete avoidance. The approval of automated systems is only
justifiable if they reduce damages when compared to human driving capabilities – in the sense of a
positive risk balance.
3. The warranty-liability (translation unclear, "Gewährleistungsverantwortung") for
introducing and approving automated and connected systems lies with the public sector. Driving
systems thus require approval and monitoring by a public agency. The guiding principle
("Leitbild") is the prevention of accidents, but residual risk that is deemed technologically
unavoidable should not hinder the deployment of automated driving, given a generally positive risk
assessment.
4. Taking self-responsible decisions is the expression of a society centred around both
the individual human's demand for self-fulfilment and their need for protection. Every decision
ordered by the state or politics thus serves self-fulfilment as well as the protection of the
human. Within a free society, the lawful shaping of technology strikes a balance between a
general order of self-fulfilment and the freedom as well as the safety of others. [Sounds like a Kantian
perspective to me? -Ed.]
5. The automated and interconnected technology should avoid accidents wherever possible.
Depending on its state of the art, the technology has to be designed in such a way that critical situations
cannot arise in the first place. This includes dilemma situations, in which the automated vehicle faces the
"decision" (quotes taken from the original text) to necessarily bring about one of two evils that cannot
be weighed against each other ("nicht abwägungsfähige Übel"). This process of accident avoidance should employ
the entire spectrum of technical possibilities and continuously advance it. This includes: restricting
the usable areas to controllable traffic environments, vehicle sensors and braking systems, signals
for participants that are in danger, and the prevention of danger via an "intelligent" road infrastructure. The
considerable improvement of traffic safety is a development and regulation goal, starting as early as
the design and programming of the vehicles to drive with foresight and defensively, thus
protecting "Vulnerable Road Users" from harm.
6. Introducing driving systems with higher automation – especially with collision avoidance –
can be a societal and ethical imperative if they exploit the existing potential for damage reduction. On the
other hand, a legal duty to use fully automated traffic systems, or an implementation that is
practically inescapable, is ethically dubious. That is the case if it entails submission to
technical imperatives ("Unterwerfung unter technische Imperative") (prohibition to degrade a
subject to a mere network element).
7. In dangerous situations that prove unavoidable despite all technical precautions, the
protection of human life has the highest priority when weighing legally protected interests.
["Rechtsgüterabwägung" – Rechtsgüter, composed of the terms for law and goods; seems based
on the point of view that penal provisions are legitimate if and only if they protect a concrete (legal)
object -Ed.] The programming, within the limits of what is technologically feasible, has to be designed to
accept animal or object damage if this allows damage to a person to be prevented.
8. Real dilemma decisions, such as weighing life against life, are dependent upon the concrete factual
situation, including the unpredictable behaviour of those involved. They can thus not be
standardised and cannot be programmed in a way that is ethically beyond doubt. ("ethisch
zweifelsfrei") Technological systems have to be designed to prevent accidents, but cannot replace or
foresee the decisions of a morally judicious, responsible driver, for they cannot be standardised to
make complex or intuitive assessments about the results of an accident. While a human driver who,
in a case of emergency, kills one human to save one or more other
humans would act against the law, he would not necessarily act culpably. Such retrospective verdicts that take special
circumstances into account cannot easily be transformed into abstract-general ex-ante assessments. It
follows that they can likewise not be transformed into corresponding programming. We thus deem it desirable to
have such experiences systematically processed by an independent public institution. [followed by: "etwa einer
Bundesstelle für Unfalluntersuchung automatisierter Verkehrssysteme oder eines Bundesamtes für
Sicherheit im automatisierten und vernetzten Verkehr", two examples of currently non-existing
institutions, one for looking into accidents, the other for security -Ed.]
9. In the case of inescapable accidents, every kind of profiling ("Qualifizierung") based on personal
identifiers (age, sex, physical or mental constitution) is strictly forbidden. Offsetting victims against
each other is forbidden. A general programming to minimize damage to persons can be justifiable. [Contradiction
in terms, if we cannot sum up people or damage to them, then what mathematical base is there to
minimize? -Ed.] Those involved in the generation of mobility risks ("Mobilitätsrisiken") may not
sacrifice those uninvolved.
10. In the case of automated and interconnected driving systems, the liability ("Verantwortung")
previously reserved to the human shifts to the manufacturers and operators of the technological systems and
to the infrastructural, political and legal decision-making bodies ("Entscheidungsinstanzen"). Legal
liability rules and their concrete application in judicial practice must take this
shift into account.
11. Concerning the accountability for damages caused by activated automated driving systems, the same
principles apply as in other cases of product liability. It follows that it is the duty of manufacturers or
operators to continuously optimize their systems. This includes the monitoring and improvement of
systems already delivered, where technologically possible and reasonable. ("zumutbar")
12. The public's claim to be sufficiently informed about new technologies and their use needs
to be met. To achieve this, the principles developed here would need to be transparently derived
into guidelines for the use and programming of automated vehicles. These would furthermore need
to be communicated in public and to be checked by a qualified independent entity ("unabhängige
Stelle").
13. It cannot be appraised today whether or not it will, in the future, be possible and useful to
have a complete interconnection and central control system [problem with the German "Steuerung",
can mean control, supervision or steerage, implying either remote surveillance or remote control
-Ed.] akin to train- and air-traffic. A complete interconnection and central control system in the
context of a digital traffic infrastructure is ethically questionable, if and as far as it is impossible to
exclude risks of a total surveillance of the road users and manipulation of vehicle control.
14. Automated driving is only justifiable to the degree to which plausible attacks
do not lead to damages that shatter the trust in road traffic. Plausible attacks notably include
manipulation of the IT system or immanent system weaknesses.
15. Business models that use data – produced by automated and interconnected driving, relevant
or irrelevant for vehicle control – are allowed but limited by the autonomy and data sovereignty
("Datenhoheit") of the traffic participant. Vehicle driver(s?) or vehicle owner(s?) strictly
("grundsätzlich", also translatable as generally) decide about the transfer and use of their
accumulating vehicle data. The voluntary aspect of revealing data this way relies on the existence of
plausible alternatives and practicality. A normative power of the factual ("normative Kraft des
Faktischen"), as is the predominant case during data access through the operators of search engines
or in social networks, should be counteracted early.
16. It has to be clearly distinguishable whether a driver-less system is in use or whether a driver
with the possibility of "Overruling" [actually used term -Ed.] retains responsibility
("Verantwortung"). For non-driver-less systems, the man-machine interface has to be
designed in such a way that, at every point in time, there are clear rules in place and it is
recognizable which responsibilities lie with whom, especially so with regard to control. The
distribution of responsibilities ("Zuständigkeiten") – and thus the liability ("Verantwortung") –
regarding the timings and access rules should be documented and stored. [Note my inconsistent
translation of "Verantwortung" here -Ed.] This is of special importance for handovers between
human and technology. An international standardisation of handover processes and their documentation
(logging) should be strived for, in order to ensure the compatibility of logging and documentation duties
as motorcar [adj., "automobile Technologie" -Ed.] and digital technology spread across
borders.
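The documentation duty of rule 16 can be illustrated with a minimal handover log. This is a hedged sketch: the `HandoverEvent` and `HandoverLog` names, fields and defaults are hypothetical choices for illustration, not taken from the report or any standard.

```python
from dataclasses import dataclass, field
from enum import Enum


class Controller(Enum):
    """Who holds responsibility for vehicle control at a given time."""
    HUMAN = "human"
    SYSTEM = "system"


@dataclass
class HandoverEvent:
    """One documented transfer of control, as rule 16 asks to log."""
    timestamp_ms: int           # when the handover took place
    new_controller: Controller  # who holds control after the event
    acknowledged: bool          # whether the receiving party confirmed


@dataclass
class HandoverLog:
    """Append-only record of control responsibility over a trip."""
    events: list = field(default_factory=list)

    def record(self, event: HandoverEvent) -> None:
        self.events.append(event)

    def controller_at(self, t_ms: int) -> Controller:
        """Return who was responsible at time t_ms (default: human)."""
        holder = Controller.HUMAN
        for e in self.events:
            if e.timestamp_ms <= t_ms:
                holder = e.new_controller
        return holder


log = HandoverLog()
log.record(HandoverEvent(1_000, Controller.SYSTEM, acknowledged=True))
log.record(HandoverEvent(9_000, Controller.HUMAN, acknowledged=True))
print(log.controller_at(5_000).value)  # prints "system"
```

A real scheme would additionally need tamper-proof storage and the internationally standardised handover format that the rule calls for.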
17. The software and technology of highly automated vehicles must be designed in such a way that the
necessity of an abrupt handover of control to the driver (state of emergency) is practically excluded.
To enable efficient, reliable and secure communication between human and machine and to
avoid overburdening, the systems have to adapt more strongly to human communication behaviour
rather than requiring heightened adaptability of the human.
18. Learning systems and systems that are self-learning during vehicle operation [Why do they
distinguish these two, what is the relevant difference? -Ed.], as well as their connection to
centralized scenario databases, can be ethically permissible if and insofar as they yield a safety
advantage. Self-learning systems may only be deployed if they fulfil the safety requirements for functions
relevant to vehicle control ("fahrzeugsteuerungsrelevante Funktionen") and do not undermine the
rules established here. It seems reasonable to hand over relevant scenarios to a central scenario
database run by a neutral entity, such that appropriate, generally accepted problem
specifications ("Vorgaben", alternatives: requirements, parameters, demands, standards, ...) can be
developed, including the creation of possible acceptance tests.
19. In a case of emergency, the vehicle has to reach a "secure state" autonomously, i.e. without
human assistance. A standardisation, especially of the definition of the secure state and of the handover
routines, is desirable.
20. The appropriate use of automated systems should already be part of the general digital
education. The appropriate handling of automated driving systems should be suitably taught and
tested during driving education.
Chapter IV. Discussion results and remaining questions
1. The approval of automated driving systems as a risky decision
1.1 Levels of automated driving
[Table 1 appears at this part in the original text, see annotations for Table 1 below -Ed.]
Ethical questions are primarily a concern for automation levels 4 & 5 (cf. Table 1). In the case of
level 5, the term autonomous vehicle is used.
The committee works with assumptions that are currently either not or not sufficiently marketable
("marktfähig"). […] Regardless, this ethical evaluation/judgement ("Beurteilung") is made with an
eye on the looming future. Considering the non-linearity of a highly dynamic development
concerning artificial intelligence and degrees of interconnection ("Vernetzungsgrade"), the
evaluation/judgement will more likely fall short than reach too far.
1.2 Increasing possibilities for mobility ("Mobilitätschancen"), more safety, but also residual
risks of fully automated traffic systems
Autonomous driving, in the form of driving aids and driver-less systems, holds a plethora of possibilities
for the user. [Note that autonomous driving is a broader term than the autonomous vehicle
mentioned before. Autonomous vehicle is level 5 automation, while autonomous driving covers
(probably) levels 1 up to 5 -Ed.] A significant drop in the probability of accidents is expected.
More comfort, and both physical and mental relief, is promised for the user – as well as plenty of saved
time. [Ignoring that some drivers ignore speed limits and are likely faster than a law-abiding
autonomous vehicle -Ed.] It further promises a merging of private and public transport and of
individual traffic, and the possibility for people unable to drive a vehicle to actively participate
in traffic. [Not specified any further -Ed.]
On the other hand, significant risks remain for traffic, which will appear especially frequently in a mixed
use of all 5 levels and in combination with other traffic users and others affected by road traffic. If a
damaging event ("Schadensereignis") cannot be technologically fully excluded, then a heightened
use of automated driving systems will nevertheless lead to questions regarding
liability/accountability ("Haftung"), to surveillance questions and to dilemma decisions
("dilemmatische Konfliktentscheidungen") in certain traffic situations. Malfunctions of
interconnected systems or outside attacks cannot be completely excluded and are seen as immanent,
if rarely occurring, in complex systems. [Unclear if the attacks or the malfunctions are seen as rare
occurrences -Ed.] From a strictly utilitarian viewpoint, the positives (mentioned before)
prima vista outweigh the negatives. An ethical context analysis ("Kontextbetrachtung") will
furthermore ask in which form and to what degree a self-surrender of the human to his technical
artefacts is allowed to take place, which borders are to be drawn and which control modalities are
required.
Table 1. Levels of Automation, designed based on Abb. 1 in the original text; bold emphasis taken
from the source. Levels 4 & 5 were highlighted and marked with the keyword focus.
Level 0 – Driver Only: The driver permanently controls speed AND direction ("Quer- und
Längsführung"). No intervening vehicle system is active.
Level 1 – Assisted: The driver permanently controls speed OR direction. The system takes over the
remaining other function.
Level 2 – Partially automated: The driver has to monitor the system permanently. The system takes
control of speed and direction in a specific use case*.
Level 3 – Highly automated: The driver no longer has to monitor the system permanently, but must
potentially be able to take over. The system takes control of speed and direction in a specific use
case*; it recognises the limits of the system ("Systemgrenzen") and prompts the driver to take over
with a sufficient time reserve.
Level 4 – Fully automated: No driver necessary in a specific use case*. The system can handle all
situations automatically in a specific use case*.
Level 5 – Driver-less: No driver necessary from START to FINISH. The system takes over the driving
task in its entirety, for all road types, speed ranges and all surrounding circumstances.
* = Use cases include road types, speed ranges and surrounding circumstances ("Straßentypen,
Geschwindigkeitsbereiche und Umfeldbedingungen")
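For illustration, the levels of Table 1 can be captured as a small data structure. The `AutomationLevel` enum and the helper marking the committee's focus on levels 4 and 5 are a hypothetical rendering for this translation, not part of the report.

```python
from enum import IntEnum


class AutomationLevel(IntEnum):
    """Levels of automation as listed in Table 1 of the report."""
    DRIVER_ONLY = 0          # driver permanently controls speed AND direction
    ASSISTED = 1             # driver controls speed OR direction, system the rest
    PARTIALLY_AUTOMATED = 2  # driver must monitor the system permanently
    HIGHLY_AUTOMATED = 3     # no permanent monitoring, must be able to take over
    FULLY_AUTOMATED = 4      # no driver necessary in a specific use case
    DRIVERLESS = 5           # no driver necessary from start to finish


def ethically_in_focus(level: AutomationLevel) -> bool:
    """The committee's ethical questions primarily concern levels 4 and 5."""
    return level >= AutomationLevel.FULLY_AUTOMATED


print([lvl.name for lvl in AutomationLevel if ethically_in_focus(lvl)])
# prints ['FULLY_AUTOMATED', 'DRIVERLESS']
```

Using an `IntEnum` keeps the levels comparable, which matches the report's framing of level 5 as a strict superset of level 4's capabilities.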
1.3 A human's freedom of decision ("Entscheidungsfreiheit") in dilemma situations
("dilemmatische Konfliktsituation")
Prior to solving so-called dilemma situations [the trolley problem comes to mind as a possible
translation -Ed.] stands the elementary question of how much freedom of decision we want to
transfer, or are allowed to transfer, to a programmer or even to self-learning systems, if – following
Kantian ethics – the freedom of the individual to moral self-determination forms the basis of an existence
guided by reason. ("(…), wenn mit der kantianischen Ethik die Freiheit des Einzelnen zur sittlichen
Selbstbestimmung die Basis einer vernunftbestimmten Existenz bildet.") [Consider the
programmers of voting computers, as used during some elections – for context: not used in
Germany as far as I know -Ed.] Are we allowed to technologically pre-decide dilemma decisions
(here: "dilemmatische Entscheidungen") in an abstract-general way? Consider the following
example:
The driver of a car drives on a road along a hillside. The fully automated car recognises that
several children are playing on the road. A self-responsible ("eigenverantwortlich") driver could now
choose to either take his own life by driving off the cliff or to accept the death of the children by
advancing towards the children playing in the street. In the case of a fully automated car, the
programmer or the self-learning machine would have to decide how to resolve this situation.
The problem with the programmer's decision is that he may well take the decision that is the
ethically right one for the human according to a basic social consensus ("Grundkonsens"). It
remains, however, an external decision ("Fremdentscheidung") – one that, in addition, does not
intuitively grasp a concrete situation (with all advantages and disadvantages of intuitive-situative
decision control) but has to judge the situation in an abstract-general manner. In the case of an
intuitive decision, the individual (here: the driver) accepts his own death, or does not.
Ultimately, in extreme cases, the programmer or the machine would be able to take the "right"
ethical decision concerning the death of the single human. [Footnote: Lin, in Autonomes Fahren, 69,
77] Thought through to its conclusion, a human would thus no longer be self-determined, but externally
determined ("fremdbestimmt").
This consequence is problematic for different reasons. On the one hand, it holds the danger of a
strong paternalism by the state, where the "right" ethical action is prescribed (as far as it is specified by
the programming); on the other hand, it contrasts with the values of humanism, where the
individual is at the centre of the approach ("Betrachtungsweise"). Such a development should thus
be observed with a critical eye. [Here the segment ends, without going into any detail or even
acknowledging that most if not all states have laws in place that might make actions legally right,
but ethically wrong, and vice versa -Ed.]
1.4 Dilemma situations
A dilemma situation is a situation in which an automated vehicle faces the decision of having to
necessarily choose one of two evils. Such situations are well known in a legal context as the trolley problem
("Weichensteller-Fall"). [Footnote: cf. Welzel in ZStW 1951, 47, 51] What makes a dilemma
situation problematic is that these are decisions that have to be made based on a concrete singular
case while weighing several factors. Concrete norms ("Normierungen") such as "personal damage
before property damage" seem possible for dilemma situations, but as abstract-general rules they
raise doubts in certain cases. For example, a property damage may result in a fuel truck
spilling its load, or in the breakdown of the power grid of a major area. Abstract-general rules like
property damage before personal damages [yes, the other way around -Ed.] face the problem that
not all situations can be normed, given the number and complexity of different imaginable
scenarios. [compare an argument by Allen, Varner & Zinser (2000), Prolegomena to any future
artificial moral agent, in Journal of Experimental & Theoretical Artificial Intelligence 12(3), pages
251–261. -Ed.] The premise to minimize personal damage can only be consistently adhered to if
an impact assessment is attempted for property damages and if possible personal damages that
might follow from them are taken into account.
Regardless, the committee has come to a decision and put concrete norms into place, see ethical rule
number 7. This is motivated by the circumstance that a technologically traceable/tractable[!]
("nachvollziehbar") solution, one that is technologically feasible and provides the highest potential for
accident reduction in most cases, should be preferred over a solution that is not yet realizable given
the current state of technology.
1.5 Protection of life as the highest priority
The protection of human life is the highest good according to our value system. In cases of
unavoidable damage, it takes unconditional priority. When weighing property damage against
personal damage in the context of assessable follow-up damages, this leads to a categorical
preference for property damage over personal damage.
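The categorical hierarchy stated here (and in ethical rule 7) can be sketched as a simple ordering over damage classes. The class names and the `preferred_option` helper are hypothetical illustrations, not the committee's specification.

```python
from enum import IntEnum


class DamageClass(IntEnum):
    """Damage classes ordered by the hierarchy of rules 5 and 7:
    animal or object damage is categorically preferred over damage
    to a person."""
    NONE = 0
    PROPERTY = 1  # includes animal damage under rule 7
    PERSONAL = 2


def preferred_option(options: list) -> DamageClass:
    """Pick the option lowest in the hierarchy: any property damage
    is accepted if it prevents damage to a person."""
    return min(options)


# Swerving damages a fence (property); staying on course injures a person.
choice = preferred_option([DamageClass.PROPERTY, DamageClass.PERSONAL])
print(choice.name)  # prints "PROPERTY"
```

As section 1.4 cautions, such an abstract-general ordering remains doubtful where property damage carries follow-up personal risks, which is why an impact assessment of property damages is demanded.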
1.6 No selection of humans, no allocation ("Verrechnung") of victims, but the principle of
minimizing damage
[German law, your mileage may vary -Ed.]
The modern constitutional state deals in absolute commandments only in fringe areas, such as the
ban on torture of people in state custody. [Footnote: Art. 104 Abs. 1 Satz 2 Grundgesetz.]
Regardless of the consequences, an act is then absolutely commanded or prohibited, for the act in itself is
incompatible with the constitutive values of the constitutional order ("Verfassungsordnung").
Exceptionally, no weighing takes place, even though such weighing is a mark of every morally
based legal system. ("Kennzeichen jeder sittlich fundierten Rechtsordnung") This ethical line of
reasoning ("Beurteilungslinie") follows from the verdict of the Bundesverfassungsgericht regarding
the Luftsicherheitsgesetz (air safety law). [Footnote: BVerfGE 115 (118 ff.)
Luftsicherheitsgesetz, Urt. v. 15.02.2006 – 1 BvR 357/05.] The verdict reads that the sacrifice of
innocents to the benefit of potential victims is forbidden ("unzulässig"), for it would degrade the
innocents to mere instruments and strip them of their subject quality. ("Subjektqualität") This
position is not uncontroversial, neither in constitutional law [Footnote: cf. Josef Isensee in AöR
2006, 173, 192.] nor in ethics [Footnote: Niklas Luhmann, Gibt es in unserer Gesellschaft noch
unverzichtbare Normen?, 1993.], but it should be considered by the legislator.
When considering ahead of time programmable reduction of damages ("Schadensminderung")
within a class of personal damages ("Klasse von Personenschäden") we face a different case than in
the cases of the Luftsicherheitsgesetz or the trolley problems. Here we have to make a probability
prognosis out of a situation where the identity of the wounded or killed people (contrary to the
trolley cases) is not yet determined. A programming to keep casualties (property damage before
personal damage, injuring a person before killing, least possible number of wounded or killed) at a
minimum could be justified without a violation of Art. 1 Abs. 1. Grundgesetz ["Die Würde des
Menschen ist unantastbar. Sie zu achten und zu schützen ist Verpflichtung aller staatlichen Gewalt."
– Human dignity is inviolable. To respect and protect it shall be the duty of all public authority.
-Ed.], as long as the programming reduces the risk for/to every traffic participant to an equal degree.
For as long as such prior programming minimizes the risk to all to an equal degree, the
programming would also serve the interests of the eventually sacrificed people, before they could be identified
as such in the situation. This shows similarities to immunization: there, a compulsory
vaccination is mandated by law in order to minimize the risks to the public, without knowing
beforehand whether or not a vaccinated person will be part of the circle of the (few) injured
(sacrifices). Nevertheless it is in everybody’s best interest to get immunized and thus minimize the
overall risk of infection.
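[The damage hierarchy stated above – property damage before personal damage, injuring before killing, fewest possible casualties – can be read as a lexicographic ordering over predicted outcomes. A minimal illustrative sketch from my side, not from the report; all names and the outcome model are hypothetical. -Ed.]

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    """Predicted consequences of one action variant (hypothetical model)."""
    expected_deaths: float
    expected_injuries: float
    expected_property_damage: float  # e.g. in euros

def severity_key(o: Outcome):
    # Lexicographic ordering: killing worse than injuring,
    # personal damage worse than property damage.
    return (o.expected_deaths, o.expected_injuries, o.expected_property_damage)

def least_harmful(options: list) -> Outcome:
    """Pick the action variant minimizing harm under the stated hierarchy."""
    return min(options, key=severity_key)
```

[For example, `least_harmful` would prefer a costly property-damage outcome over any outcome with an expected injury; whether such an ordering is ethically sufficient is exactly what the report leaves open. -Ed.]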
The ethics committee objects to any conclusion that, in cases of emergency, the lives of some
people may be offset against the lives of other people, which might make it permissible to sacrifice one person
to save several others. The committee qualifies the killing of humans or seriously injuring them
through autonomous vehicle-systems, without exception, as injustice. In cases of emergency, too,
human lives may not be set off against each other. According to this position the individual has to
be viewed as "sacrosanct." The individual is not to be burdened with solidarity-duties
("Solidarpflichten"), to sacrifice himself for others, not even when it could save other people. The
decision could be different in cases where several lives are already in immediate danger and the
only goal is to save as many innocents as possible. In such a situation it seems justifiable/ethical
("vertretbar") to demand that the one action-variant be chosen that claims the least amount of lives.
On this matter the committee has not yet concluded its discussion, neither satisfactorily nor
consensually in all aspects. The committee calls for further studies. [Footnote: Dieter und Wolfgang
Birnbacher, Automatisiertes Fahren. In: Information Philosophie, Dezember 2016, S. 8–15; Nida-
Rümelin/Hevelke in Jahrbuch für Wissenschaft und Ethik, 1 ff; Eric Hilgendorf, Autonomes Fahren
im Dilemma. Überlegungen zur moralischen und rechtlichen Behandlung von selbsttätigen
Kollisionsvermeidesystemen. In: ders. (Hg.), Autonome Systeme und neue Mobilität. Baden-Baden
2017, S. 143–175; Jan C. Joerden, Zum Einsatz von Algorithmen in Notstandslagen. Das
Notstandsdilemma bei selbstfahrenden Kraftfahrzeugen als strafrechtliches Grundlagenproblem. In.
Eric Hilgendorf (Hg.), Autonome Systeme und neue Mobilität. Baden-Baden 2017, S. 73–97;
Günther M. Sander, Jörg Hollering, Strafrechtliche Verantwortlichkeit im Zusammenhang mit
automatisiertem Fahren. In: NStZ 2017, S. 193–206.]
If one follows the position that is argued for here, then the follow-up problem arises of whether, or how far,
manufacturers can be held responsible ("zur Verantwortung gezogen") for injuring or even killing
via automated systems, which was deemed an "injustice." We thus point out that, in principle,
collision avoidance systems can be treated no differently than airbags or seat belts. A killing via an
erroneously triggering ("fehlauslösend") airbag remains an injustice, but the manufacturer will not be
held responsible if he has undertaken everything that is reasonable ("zumutbar") to minimize such
risks. Deploying automated systems is thus permissible and does not lead to unique liability risks
("Haftungsrisiken"), as long as manufacturers do everything that is reasonable to make their
systems as safe as possible and particularly to minimize the danger of personal damages. [what then
is considered reasonable, and who determines it? -Ed.]
1.7 Self-protection before the protection of others or subordination of self-protection?
The humanist mission statement ("Leitidee"), by now universal, is measured against the individual,
who is equipped with a special dignity. ("… an dem mit besonderer Würde ausgestatteten
Individuum") Assume an individual that we know for certain to take the role as driver or user of a
vehicle. It would go against this mission statement to impose on this individual – in cases of
emergency – solidarity duties for others, including the sacrifice of his own life. Thus the self-
protection of a person is not per se subordinate to protection of the uninvolved. Fundamentally,
however, it holds that those involved in mobility risks must not sacrifice the uninvolved. See ethical
rule number 9.
2. Regarding animal welfare interests
This depends on the status of animals in our society. Purely intuitively, (higher) animals are treated
differently from objects. ("Sachen") [Footnote: Art. 20 a Grundgesetz and § 90 a Bürgerliches
Gesetzbuch still allow for the application of these regulations unto animals, but grant animals a
status distinct from objects.] This is supported by the theory of animals as beings capable of
suffering. ("leidfähige Wesen") From this capability of the animal follows that this being is worthy
of protection and the mandate ("Auftrag") for the human, to protect this being as part of the creation
from damage, even though animals and humans cannot share an equal status. Personal
damages are thus to be treated as paramount, even before animal welfare interests. If, however,
personal damages can be excluded, then the protection of highly evolved animals ("höher
entwickelter Tiere") should have priority over (calculable) property damages. [Footnote: The
distinct status of animals can be seen through concrete embodiments of ethics in the legal world: § 1
Tierschutzgesetz, where the non-damaging principle ("Nichtschädigungsgrundsatz") and the
avoidance of pain and suffering towards animals ("Leidens- und Schmerzvermeidung") are formulated.]
3. Overruling through the human
Considering highly automated [level 3 in our table -Ed.] driving we face the possibility that the
driver is going through parts of the journey fully automated [not to be read as level 4! -Ed.], where a
takeover by the driver is not necessary. The question to what degree a voluntary takeover of the
driver should be made impossible conjures up ethical conflicts. Is it an ethical duty ("ethische Pflicht")
not to drive yourself if that constitutes a rise in safety? Or should, the other way round, final
responsibility lie with the human for as long as accidents cannot be excluded for certain? It is an
expression of human autonomy to take decisions even when they are objectively irrational, like
aggressive driving behaviour or exceeding the recommended speed. It would go against the mission
statement ("Leitbild") of a responsible ("mündig" [probably again in the Kantian sense, cf.
"Unmündigkeit ist das Unvermögen, sich seines Verstandes ohne Anleitung eines anderen zu
bedienen." in Kant (1784) Beantwortung der Frage: Was ist Aufklärung?, in Berlinersche
Monatsschrift 2, p. 481–494 -Ed.]) citizen, were the state to inescapably norm large parts
of (the citizen’s) life and to inhibit dissent through social techniques ("sozialtechnisch")
from the get-go. Such safety-conditions ("Sicherheitszustände"), deemed to be absolutes, may –
regardless of their undeniable good intentions – undermine the foundations of a humane, free
("freiheitlich") society. Comparable effects can come from seemingly voluntary designs, such as
"pay-as-you-drive" models from private insurance companies. The trade-off between the reduction of
security risks and limitations of freedom has to be decided in a weighting process that is democratic and
in accordance with fundamental rights. There is no ethical rule that places security before freedom. [Coming from a
nation that just re-introduced data retention ("Vorratsdatenspeicherung"), keeps passenger records of
flights, installs cameras (partially with facial recognition software) in public places and/or public
transports, and deploys Trojan software to monitor communications the inner workings of which are
akin to full remote access control over the device it is used on. -Ed.]
4. Technology in cases of shared responsibility
In the case of driver-less systems and usage according to specifications ("bestimmungsmäßiger
Gebrauch") responsibility is in the hands of the manufacturer and operator. [Unclear if this is one
and the same person -Ed.] In all other cases of partially or fully automated driving systems, the
borders of responsibility and liability become blurred. If the responsibility is split ("geteilte
Verantwortungszuständigkeit") then the human-computer-interface has to be designed in such a way
– taking into account a possible overruling through the driver – that it is clear at any moment in
time who is currently in control over the vehicle. This includes the possibility of a transfer to the
driver, if the technological system can no longer guarantee the safety of the driver. An abrupt
transfer would, however, mean that the driver could no longer reap the benefits of highly automated
driving. That is why a reasonable transition time has to be kept in place. If a transfer to the human is
no longer possible due to time constraints, then – as an exception in cases of emergency – the
control has to stay with the vehicle, to transition into a safe vehicle state, while keeping up the highest
possible safety for the user and others involved.
5. Legal requirement to use fully automated traffic-systems?
The committee dealt with the not yet current question of a requirement/duty
("Pflicht") to automate, if a superiority of technological systems compared to human drivers should
become evident. Would it be required that the legislator conducts a complete, nation-wide, cross-
system design ("vollständige, flächendeckende und systemübergreifende Gestaltung") of the
mobility and propulsion concepts? Or is it required by the ideas of subsidiarity and the liberal idea
that concepts assert themselves through competition at the market, while the state only provides the
required order and guarantees legal certainty? ("Rechtssicherheit") Is there a boost to social
paternalism looming behind the automation and inter-connection of road traffic, when the
automated inter-connected traffic-systems can no longer be avoided by individual decisions and
traffic flow is completely steered?
As an expression of his autonomy the self-reliant human is free to embrace technological
opportunities. This freedom to act entails the possibility to not partake in certain opportunities. An
introduction of such systems by requirement would strongly limit their expressiveness,
("Entfaltungsmöglichkeiten", their freedom of expression in the sense that they may do as they like)
including the aspect of driving pleasure. An introduction of autonomous systems by requirement
cannot be justified solely through the general increase in security coming with fully-automated
systems. [Footnote: cf. 1.2 above]
6. Technological assistant systems as support or steering of the driver
At level 2 of automated driving [Footnote: see 1.1 for a classification (Table/Abb. 1)] the human has
full control over the vehicle. Driving assistance systems warn or remind the human of errors that he
might make due to fatigue or other lapses of concentration. Such non-binding prompts to the driver
serve the prevention of accidents and the well-being of society. We have to take a closer look – and
possibly a case for a legislative decision exists – at cases where the machine no longer consists of
warning elements, but of compulsorily acting ones. [Footnote: Concerning the general question of
continuing automation and the benefit for society see Hancock, in Ergonomics, S. 449 ff.] One can
imagine a blockade of the starting sequence should the driver not adhere to prescribed pauses after a
long ride. [see car breathalyzers and ignition interlocks for a similar idea that is already in place,
even though not in Germany for as far as I am aware -Ed.]
Clues on how to judge this situation can be gained by comparison with the care sector.
Think of care robots that lay out drugs for patients in need of special care ("besonders pflegebedürftig") or
assist in examinations. But how should we judge a case where the drugs were not only placed there,
but were instead forcibly administered to the patients for their alleged well-being? [Footnote: Deng
in Nature, 23, 25, 2015, describes the problems of care robots. Available online at
http://www.realtechsupport.org/UB/WBR/texts/markups/Deng_TheRobotsDilemma_2015_markup.
pdf.] Such controls could strip the human of his maturity/responsibility. ("Mündigkeit") It has to
remain his free choice whether or not he steps into a car regardless of fatigue and starts it or takes
appropriate drugs. Compulsively exercised controls are inconsistent with the view of an
autonomous human. [Let me repeat myself: this is said by/in a nation half of which had a reputation
as a surveillance state back when it was called German Democratic Republic/East Germany, and
with current laws coming dangerously close again according to some voices -Ed.]
7. No irreversible submission to technological systems
A different question is the submission of the human to technological systems. Looking at
autonomous driving not by itself, but as part of a development reaching into many areas, like the
replacement of complex job profiles by robots [Footnote: cf. Eidenmüller in Oxford Legal Studies
Research Paper 2017, 1, 3, describing this phenomenon with regard to the legal profession], one can
form the opinion that the technological development is irreversible. Especially when looking at the
loss of cognitive abilities of the human not only considering the ability to drive, but also the
performance of medical procedures – with complete automation it seems no longer possible to act
autonomously, since the abilities that are required and need to be constantly honed for this have
been lost. [Footnote: describing the negative results also compare: Wolf in Autonomes Fahren, p.
103, 105; Bainbridge in Automatica, p. 775 ff.]
8. Dependence of society on technological systems
The growing dependence on technological systems is innate to modern societies. By now it applies
to core areas such as nutrition, access to information and knowledge, healthcare and energy supply.
Certain systemic risks are hereby unavoidable consequences of this development. They range from
a random black-out up to a constantly developing strategy of "cyber-war" with targeted hacker
attacks. [I personally disagree with the usage of the term hacking in a malicious context here, and
would prefer terms like cracker, but it seems hacking attack is an accepted phrase now -Ed.] With
the switch in the traffic area to digital regulation/control ("digitale Steuerung") another central part
of the infrastructure would be subject to such system instabilities more strongly than is currently the
case. Such a system vulnerability is justifiable from a utilitarian standpoint, as long as the risks are
estimated to be low. To prevent this vulnerability towards system failure as the worst case
scenario/maximum credible "accident" when experiencing a hacker attack, the IT-security of such
systems has to be more strongly stimulated/boosted ("stärker gefördert") by manufacturers and the
state. The state has a protective obligation ("Schutzauftrag") to guarantee the integrity of these
systems.
9. "Total" interconnection ("Vernetzung") of infrastructure
In order to control vehicles without permanent, directly situational human decisions, decision
systems have to be developed that take over the part of such control pulses ("Steuerungsimpulse") –
like speed, heading, steering or route guidance – that are in current systems handled by the vehicle
operator as parameters for a goal-oriented, collision-free ride. Central to such systems could be
computers akin to the ones in use in current cars, but with a widely extended functional range.
Sensors, cameras and other technological aids used to register and process all control-relevant
("steuerungsrelevant") traffic information – especially concerning the driving surface, vehicles or
nearby obstacles – are likewise integral. It is conceivable to design a system for automated and
inter-connected driving in such a way that it is decentralized and – from the point of view of the
involved vehicle – self-sufficient. ("autark") By self-sufficient in this context we mean that a goal-
oriented and safe vehicle control is possible solely based on the data gathered and saved within the
car itself. [How this is interconnected escapes me. -Ed.] It is likewise conceivable to have a digital traffic
infrastructure that also uses data from outside the vehicle and that the vehicle fetches for steering
purposes. We are talking about central traffic information servers that would, e.g., provide
permanently updated weather- or road quality data, but also data carriers ("Informationsträger") by
the roadside [Temperature data? Your most recent speeding ticket? -Ed.] or other vehicles that
exchange data that is relevant for steering and road safety and that cannot be gathered by the cameras or
sensors of the receiver, based on a "car-to-car-communication." Think, e.g. about the tail-end of a
traffic jam behind a knoll/hill that the sender already registered while it is out of sight for the
receiver.
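[The car-to-car exchange sketched here – a sender reporting a traffic jam tail that is still out of the receiver's sight – can be pictured as a small structured message. The field names below are my illustrative assumptions, not taken from any actual V2X standard. -Ed.]

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class V2VHazardMessage:
    """Illustrative car-to-car warning; fields are assumptions, not a standard."""
    sender_id: str        # pseudonymous vehicle identifier
    hazard_type: str      # e.g. "traffic_jam_tail"
    latitude: float       # position of the observed hazard
    longitude: float
    timestamp_utc: str    # time of observation

    def to_wire(self) -> str:
        # Serialize for broadcast to nearby vehicles.
        return json.dumps(asdict(self))

# A sender behind the hill broadcasts; a receiver parses and can brake early.
msg = V2VHazardMessage("veh-4711", "traffic_jam_tail",
                       50.7374, 7.0982, "2017-06-01T12:00:00Z")
received = json.loads(msg.to_wire())
```

[The point of the example: the receiver learns about a hazard its own cameras and sensors cannot yet see, purely from the transmitted data. -Ed.]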
Given this background we cannot exclude the possibility of the development of automated driving
alongside the idea and concept of a central traffic control and logging ("Erfassung") of all vehicles.
This raises the question of the tolerable risk of the abuse of such power structures. A critical
reflection of what is doable against the background of what is sound ("sinnvoll"), moderate
("massvoll") and ethically responsible ("ethisch verantwortbar") should take place. Automated and
inter-connected driving could lead into a total surveillance of all traffic users. In case of a central
traffic control it must be assumed that the freedom of the individual to move unrecognised,
unobserved and free from point A to point B might be sacrificed in favour of an efficiency based
digital traffic infrastructure. [How do you sacrifice what is already basically gone, in an age of
aforementioned cameras in public spaces, a bug that almost every citizen carries around willingly
(a.k.a. mobile phone), … I do not mean to say that I am okay with even more surveillance
infrastructure, I am mainly wondering what world some of the committee lives in to make that
statement. -Ed.] Autonomous driving would come at the expense of autonomous everyday acts. The
gain in comfort and traffic safety could in such a case not justify the loss of freedom and autonomy.
Such a development is thus to be counteracted by supporting privacy friendly innovations (Privacy
by Design) as well as normative arrangements. ("Ausgestaltungen")
10. Utilisation of data between safety, private autonomy and informational self-determination
10.1 Mediating between conflicting goals
The aspect of data security gains a new dimension when looking at autonomous driving. It is
required for running the system smoothly to gather and process bulk-data ("Datenmengen") from
the user. The legislator has to find an equilibrium ("Ausgleich") between the required gathering of
data and the guaranteeing of informational self-determination. The principles in European and
German law of Data Minimization ("Datensparsamkeit") and Data Avoidance ("Datenvermeidung")
need to be brought into balance with the requirements of traffic safety and with an eye on fair
competition in globalized value added models. ("Wertschöpfungsmodelle") Beyond aspects
purely relevant to traffic-safety there are many different interests from public authorities for
emergency preparedness/danger defence ("Gefahrenabwehr") to consider, as well as those of private
businesses and their economic interests. [Footnote: see the problem described by Hornung in
DUD 2015, p. 359 ff.] The informational self-determination is not to be understood purely as a
protection of privacy. ("Schutz der Privatsphäre") It is rather the freedom of the user to grant access
to personal data. ("personenbezogene Daten")
10.2 Solutions meeting the demands in case of data processing and data utilization
When introducing different automated methods ("Verfahren"), new solutions that meet the demands
("bedarfsgerechte Lösungen") in the area of data processing and data utilization are needed. Here
data protection and innovation friendliness do not form insurmountable opposites, but provide
reciprocal benefits. In the case of automated and connected driving we thus require innovation
friendly data protection as well as data protection friendly innovation. Innovative technology is
furthermore capable of enabling effective data protection. (Privacy by Design) In accordance with
the data protection law principle of Privacy by Default [Default, not design -Ed.] vehicles should
furthermore possess data protection friendly default settings on deployment that prohibit the
collection, processing and utilization of data not relevant to vehicle control – as long as they are not
absolutely relevant for safety [How is this set not a subset of the relevant vehicle control data set?
-Ed.]– before those are actively allowed by the user. The premise has to be that the user can make
self-reliant decisions concerning his data. In this case informational self-determination is not to be
viewed only as intrusion protection, but includes the possibility to voluntarily divulge data.
As soon as data utilization and data processing is no longer clearly recognizable for the user and he
is thus deprived of his decision, then the state has to fulfil its federally mandated protection
obligation and ensure an adequate and required level of protection for its citizens regarding the
safety of their data. Here the state could take responsibility by giving a democratic legitimation to
the required processes relevant to data protection – concerning vehicle control data – in form of a
lawful justification. ("gesetzlicher Rechtfertigungsgrund") [highly questionable translation -Ed.] Part
of this lawful empowerment norm ("Ermächtigungsnorm") could also be an admission request
("Zulassungsanforderung") for automated (and connected) driving functions. The vehicle would
only be allowed to ("dürfte") drive automated under the guarantee that it obtains certain certificates
[Of what and by whom? See the 2011 case of Diginotar for trouble with certification-Ed.] and
exchanges sufficiently pseudonymized [the difference from anonymized likely being that in this
case other parties may not know who you are but can link all data coming from the same entity,
while anonymized data could not be linked in such a way? -Ed.] state data with other vehicles and
the infrastructure. In addition investments into research and development of new technological
anonymization solutions could be made. In this regard continuous observation would be required
as to whether or not certain data has been sufficiently ("hinreichend") anonymized, and – if required –
changes to these processes would need to be made.
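[The difference from anonymization that I speculate about above – linkable pseudonyms versus unlinkable anonymous data – can be sketched with a keyed hash: the same vehicle always maps to the same pseudonym, so its records remain linkable without revealing its identity. A minimal illustration from my side, not a production scheme; key handling is assumed away. -Ed.]

```python
import hashlib
import hmac

# Assumption: this key is held by a trusted party, never published.
SECRET_KEY = b"held-by-a-trusted-party"

def pseudonymize(vehicle_id: str) -> str:
    """Deterministic keyed hash: records stay linkable, identity stays hidden."""
    digest = hmac.new(SECRET_KEY, vehicle_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# Records from the same vehicle carry the same pseudonym and can be linked;
# without SECRET_KEY the original identifier cannot be recovered.
p1 = pseudonymize("WVWZZZ1JZXW000001")
p2 = pseudonymize("WVWZZZ1JZXW000001")
p3 = pseudonymize("WVWZZZ1JZXW000002")
```

[Truly anonymized data, by contrast, would use a fresh random value per record, destroying exactly this linkability. -Ed.]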
Practicable processes and technological solutions should be found to inform drivers, owners and
users about the reasons and legal grounds of processing data not relevant to vehicle control, such
that they can make decisions. For possible agreements of those involved in
a vehicle’s surroundings ("Fahrzeugumfeld"), say passers-by or other traffic agents, we also require
solution attempts that conform with the law.
A step-by-step deployment of automated and interconnected driving should additionally be
accompanied/supervised by independent testing institutes and relevant special interest groups
("Interessenvertretungen") like consumer protection groups. ("Verbraucherschützer") To comply
with the transparency requirements (see ethical rule number 12) a fact-based elucidation
("Aufklärung") concerning chances and risks of data usage is needed. The special relevance of such
elucidation follows especially from the fact that manufacturers of automated and interconnected
vehicles can – and have to – access the vehicles and their related data far beyond the point of the
transfer of ownership ("Übereignungszeitpunkt"), in the context of required updates, product
monitoring/surveillance ("Produktbeobachtung") or for reasons of customer loyalty.
("Kundenbindung")
11. The problems of the range of responsibility ("Verantwortungsreichweite") in software and
infrastructure
11.1 Problem
With the introduction of automated and autonomous systems, both on the level of vehicles and
further on the level of cooperative traffic, the question arises with whom responsibility lies in case of
damages/accidents. ("im Schadensfall") Responsibility ("Verantwortung") is meant in the sense of
the duty of a person to render an account concerning the decisions and the resulting actions of the
automated vehicle-system, or – respectively – the underlying software, to assume liability and to –
if necessary – bear the legal consequences.
The German liability system ("Haftpflichtsystem") currently assigns the risk for an accident in
traffic ultimately to the owner or driver of the vehicle. In addition manufacturers are liable in
accordance with legal product liability. Fully automated and driver-less vehicles are subject to far-
reaching impact factors. (cf. Figure 1 / Abbildung 2)
Figure 1. – Modules in the complete architecture of cooperative road traffic,
as seen in the original document.
List of modules (left to right, top to bottom) contains: human, vehicle, traffic
related infrastructure (in this particular example: a traffic light), V2X
communication infrastructure (vehicle to X), V2V communication
infrastructure (vehicle to vehicle), OEM back-end, back-end traffic
management, back-end suppliers, scenario catalogue back-end, neutral
server, others. [Note that they are not numbered here, but later on they are -Ed.]
Thus – next to the owners and manufacturers of vehicles – the manufacturers and operators of
supporting technology for the vehicle have to be involved in the division of liabilities
("Haftungsteilung") as well. The figure provides an overview of potential responsible entities.
("Verantwortliche") It shows that for connected mobility systems both liability and responsibility
get shifted to the depicted areas and actors and have to be divided among them. Additionally a new
regulation ("Festlegung") of the due diligence that needs to be observed by manufacturers,
suppliers and operators of components, software and data – as well as developers – is required.
Automated driving functions may only be used if they are statistically safer than the human driver.
With the liability shift from the driver or owner to the guarantor of technological systems in the
sense of a product liability, we need to discuss how much safer a technological system has to
be – statistically speaking – to be accepted by society and which methods would lead to a reliable
confidence.
[Figure 1 – Abbildung 2 was placed here in the original document -Ed.]
List of potentially liable entities:
1. Human: driver, vehicle owner
2. Vehicle: OEM, supplier, garage
3. Traffic related infrastructure: public authorities
4. V2X communication infrastructure: communication operators
5. V2V: OEM
6. OEM back-end: OEM, IT service providers
7. Traffic management back-end: public authorities, etc.
8. Back-end supplier: Tier 1 supplier, digital maps, etc.
9. Scenario catalogue back-end: certified () instance
10. Neutral Server (Interface to other services): IT service providers
The architecture picture shows which different components are part of cooperative road traffic. For
the specific components different actors are responsible for quality assurance or the reliable
transmission of data, respectively. Vehicle data is transmitted to the back-end of the OEM first –
except in the case of V2V/V2X communication. Whether or not the back-end should be operated – as seen
in the picture – by the OEM or by a neutral organisation is a recommendation that is not the ethics
committee’s to make, but rather falls within the parliamentary responsibility to make arrangements.
("parlamentarische Gestaltungsverantwortung")
11.2 How can responsibility for software and infrastructure be developed and divided?
It follows from the architecture picture that the manufacturers are responsible for the functional
security/safety ("Sicherheit") of their system. To fulfil this responsibility they have to use and
analyse certain sets of data. They are thus responsible for the content and quality of all
security/safety relevant data that are exchanged with the vehicle via the OEM back-end. Insofar as
they use data from third party providers they are responsible for the quality and context of that data.
To comply with this responsibility a quality control of incoming, security/safety relevant data from
external third party suppliers could be created. This could take the form of verifications
("Nachweise") that need to comply with certain security/safety standards for the products. These
verifications should especially contain statements regarding the confidence of the guarantees of
quality. For example: a map service provider should guarantee the information contained in the
maps with a degree of spatial and temporal resolution within an OEM specified confidence
threshold. Likewise statements about the maximum latency times and integrity of data submitted in
V2V- and V2X-communications should be linked to confidence statements.
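[The verification idea above – a supplier guaranteeing data quality within an OEM-specified confidence threshold – could be checked mechanically at data ingestion. The names and threshold values below are my illustrative assumptions, not from the report. -Ed.]

```python
from dataclasses import dataclass

@dataclass
class SupplierGuarantee:
    """Quality statement attached to third-party data (illustrative)."""
    spatial_resolution_m: float  # guaranteed map resolution in metres
    max_latency_ms: float        # guaranteed maximum transmission latency
    confidence: float            # supplier's stated confidence, 0..1

def accept_supplier_data(g: SupplierGuarantee,
                         required_resolution_m: float = 0.5,
                         required_latency_ms: float = 100.0,
                         required_confidence: float = 0.99) -> bool:
    """OEM-side check: reject data whose guarantees miss the thresholds."""
    return (g.spatial_resolution_m <= required_resolution_m
            and g.max_latency_ms <= required_latency_ms
            and g.confidence >= required_confidence)
```

[A map update guaranteed at 0.3 m resolution, 50 ms latency and 99.5 % confidence would pass such a gate; coarser or less confident guarantees would be rejected before the data reaches the vehicle. -Ed.]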
A reliable data transfer is outside of the liability responsibility of the manufacturer and could thus
be attributed to the telecommunications operators. They are liable, within the limits of their
self-proclaimed guarantees, for a secure transmission of data. [Note the sudden and unexplained
switch from reliable transfer to secure transfer -Ed.] Concerning product liability, data protection
regulations have to be taken into account anyway. The European (and member-state(-ly?)
("mitgliedsstaatlich")) legislator has to comply with the protective obligations laid down by
European and constitutional law concerning integrity and trustworthiness of such systems when
considering responsibility assignments ("Verantwortungszuweisungen") and considering further
developments. ("nähere Ausgestaltung") Manufacturers have to adhere to legal constraints when
collecting such data and should provide proposals for further development from their point of view.
New anonymization procedures for vehicle-relevant data should be developed.
To avoid errors and to guarantee the safety of everybody involved in traffic, an analysis of
dangerous situations that are relevant to erroneous perceptions and erroneous behaviour of the
vehicle should take place. It would be technologically desirable if known system errors were
handed over by the manufacturers to a scenario catalogue, which saves these situations. These
should in turn be handed over to an independent public institution. (cf. ethical rule number 8) The
scenario catalogue sketched here could be developed in such a way that it gets expanded
permanently and based on real dangerous situations in order to test automated driving functions.
Think about a data foundation ("Datenbasis") of the back-end, filled with situational information
where erroneous interpretations of the actual surroundings by the car were observed and in turn
caused a cancellation of the automated driving mode. It needs to be made clear during the practical
development of the system, which information is going to be required for the scenario catalogue.
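[The scenario catalogue sketched in the text – a permanently growing collection of situations in which the automated mode had to be cancelled, later replayed as test cases – could be modelled as an append-only store. All names below are hypothetical, my illustration only. -Ed.]

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CriticalScenario:
    """One observed misinterpretation that forced a fallback (illustrative)."""
    scenario_id: str
    description: str        # e.g. "lane markings misread in low sun"
    sensor_snapshot: bytes  # raw data needed to replay the situation

@dataclass
class ScenarioCatalogue:
    """Append-only catalogue, assumed to be held by an independent institution."""
    scenarios: dict = field(default_factory=dict)

    def submit(self, s: CriticalScenario) -> None:
        # Deduplicate by id; the catalogue only ever grows.
        self.scenarios.setdefault(s.scenario_id, s)

    def as_test_set(self) -> list:
        """Export all scenarios as regression tests for driving functions."""
        return list(self.scenarios.values())
```

[Every new automated driving function would then have to pass the accumulated test set before deployment; the catalogue grows with real dangerous situations rather than synthetic ones. -Ed.]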
Figure 2 – Abb. 3 Setup of a knowledge base for critical driving
functions through end-to-end deep learning.
Inside the OEM back-end training data is fed into a trained [how, when
and by whom? -Ed.] artificial neural network (ANN), which is transferred
outside the back-end to the vehicle for installation/update. The vehicle in
turn gathers data about critical situations (cf. text, situations that require a
cancellation of automated driving) and transmits these to a certified
authority. The certified authority holds the scenario catalogue, adds the
transmitted situations to it and relays them as test data for the ANN to the
OEM back-end.
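[The data flow described in the figure could be sketched as follows; this is purely an illustration by the translator, with hypothetical class and method names, and is not part of the original report. -Ed.]

```python
# Sketch of the Abb. 3 feedback loop: vehicle -> certified authority
# (scenario catalogue) -> OEM back-end (test data for the ANN).
# All names are hypothetical illustrations.

class CertifiedAuthority:
    """Independent public institution holding the scenario catalogue."""

    def __init__(self):
        self.scenario_catalogue = []

    def report_critical_situation(self, situation):
        # Vehicles transmit situations that forced a cancellation
        # of the automated driving mode.
        self.scenario_catalogue.append(situation)

    def relay_test_data(self):
        # The permanently growing catalogue is relayed to the
        # OEM back-end as test data for the ANN.
        return list(self.scenario_catalogue)


class OEMBackend:
    """Manufacturer back-end that trains and tests the ANN."""

    def __init__(self):
        self.test_data = []

    def receive_test_data(self, scenarios):
        self.test_data.extend(scenarios)


authority = CertifiedAuthority()
backend = OEMBackend()

# A vehicle observes a misinterpretation of its surroundings and reports it.
authority.report_critical_situation(
    {"cause": "erroneous perception", "action": "automation cancelled"}
)

# The authority relays the catalogue to the back-end for ANN testing.
backend.receive_test_data(authority.relay_test_data())
print(len(backend.test_data))  # → 1
```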
Furthermore, it has to be ensured that a vehicle is able to guarantee the safety of the driver even without external links. In the field, a safety/security check ("Sicherheitscheck") could be suggested that verifies the unimpaired functionality of all systems contained internally in the vehicle that are required for the automated driving function, the connection to the back-end, as well as the successful installation of all critical software updates. If an error occurs or a critical update is missing, the affected automation services should noticeably be unavailable.
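[The suggested check could be sketched like this; again a hypothetical illustration by the translator, not part of the original report. -Ed.]

```python
# Sketch of the suggested "Sicherheitscheck": the automated driving
# function is offered only if all three conditions from the text hold.
# Function and parameter names are hypothetical.

def safety_check(internal_systems_ok, backend_reachable,
                 critical_updates_installed):
    """Return whether the automated driving function may be offered.

    Checks (1) unimpaired functionality of all internal systems needed
    for automated driving, (2) the connection to the back-end, and
    (3) that all critical software updates are installed.
    """
    return (internal_systems_ok
            and backend_reachable
            and critical_updates_installed)


# If any check fails, the automation service should be unavailable.
print(safety_check(True, True, True))   # → True
print(safety_check(True, True, False))  # → False
```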
11.3 To what extent can self-learning software be used?
When using different software one has to distinguish between learning ("lernend") and self-learning
("selbstlernend") software. Learning systems are trained during development. [Seems like a
misnomer to me, since the learning is finished as soon as the software gets deployed? I might be
missing out on the correct English term here due to a too literal translation. -Ed.] Self-learning
systems improve themselves further while deployed. Currently not only learning systems (e.g.
object recognition algorithms) are in use, but also self-learning systems (e.g. adaptation of the
vehicle dynamics to the driver). Self-learning systems update their knowledge base continuously
while deployed. This, however, leads to knowledge bases that differ between individual vehicles
the longer they are in use. The committee asked itself under which circumstances such systems
could be permitted and who will ultimately have responsibility ("Verantwortung") for such systems.
Concerning an introduction of self-learning systems, the protection of physical integrity of the user
is paramount. (cf. ethical rule number 2) As long as self-learning systems provide no sufficient
certainty that these can judge situations correctly or can comply with safety requirements, a
decoupling of self-learning systems from safety-critical functions should be decreed. A use of self-
learning systems is – given the current level of technology – only conceivable for not immediately
safety-relevant functions. One example could be a human-computer interface where the
personal driving style is analysed and adapted to. Note, however, that such an analysis of the
individual driving style opens up possibilities to gather and utilise data for purposes not
immediately vehicle-related. Such a utilisation via permissible business models is acceptable as
long as the data sovereignty of the individual user is maintained. (cf. ethical rule number 15)
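[The distinction between learning and self-learning systems, and the resulting divergence of knowledge bases, could be sketched as follows; a hypothetical illustration by the translator, not part of the original report. -Ed.]

```python
# Sketch of the distinction drawn above: a "learning" system is frozen
# at deployment, a "self-learning" system keeps updating its knowledge
# base in the field. All names are hypothetical.

class DeployedSystem:
    def __init__(self, knowledge_base, self_learning):
        # Each vehicle receives its own copy of the factory knowledge.
        self.knowledge_base = dict(knowledge_base)
        self.self_learning = self_learning

    def observe(self, key, value):
        # Only a self-learning system updates itself while deployed.
        if self.self_learning:
            self.knowledge_base[key] = value


factory_knowledge = {"object:pedestrian": "brake"}
vehicle_a = DeployedSystem(factory_knowledge, self_learning=False)
vehicle_b = DeployedSystem(factory_knowledge, self_learning=True)

# Both vehicles encounter the same situation in the field.
vehicle_a.observe("driver:style", "defensive")
vehicle_b.observe("driver:style", "defensive")

# The knowledge bases of individual vehicles diverge over time in use.
print(vehicle_a.knowledge_base == vehicle_b.knowledge_base)  # → False
```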
Literature: [copied without language alterations, page references remain S. instead of p., etc. -Ed.]
1. Bainbridge, Lisanne: Ironies of Automation, in Automatica 1983, S. 775–779.
2. Beavers, Anthony: Between Angels and Animals: The Question of Robot Ethics, or Is Kantian
Moral Agency Desirable, S. 1 ff., abrufbar unter: http://faculty.evansville.edu/tb2/PDFs/Robot
%20Ethics%20-%20APPE.pdf.
3. Bonnefon, Jean-François/Shariff, Azim/Rahwan, Iyad: The social dilemma of autonomous
vehicles in: Science 2016, S. 1573–1576.
4. Deng, Boer: The Robot’s Dilemma – Working out how to build ethical robots is one of the
thorniest challenges in artificial intelligence, in: Nature 2015, S. 1–4.
5. Eidenmüller, Horst: The Rise of Robots and the Law of Humans, in: Oxford Legal Studies
Research Paper 2017, S. 1–15.
6. Fournier, Tom: Will My Next Car Be a Libertarian or a Utilitarian? Who Will Decide? in: IEEE
Technology and Society Magazine 2016, S. 40–45.
7. Fraedrich, Eva/Lenz, Barbara: Automated Driving – Individual and Societal Aspects Entering
the Debate in: Transportation Research Record: Journal of the Transportation Research Board 2014,
2416, S. 64–72.
8. Gogoll, Jan/Müller, Julian F.: Autonomous cars, in favor of a mandatory ethics setting, in:
Science and Engineering Ethics 2016, S. 1–20.
9. Goodall, Noah J.: Ethical decision making during automated vehicle crashes in: Transportation
Research Record: Journal of the Transportation Research Board 2014, 2424, S. 58–65.
10. Goodall, Noah J.: Machine Ethics and Automated Vehicles, in: Road Vehicle Automation,
Springer International Publishing 2014, S. 93 ff.
11. Grunwald, Armin: Gesellschaftliche Risikokonstellationen für autonomes Fahren, in:
Autonomes Fahren. Technische, Rechtliche und Gesellschaftliche Aspekte, Heidelberg 2015, S.
661–685.
12. Hancock, P. A.: Automation: how much is too much in: Ergonomics 2014, S. 449–454.
13. Hevelke, Alexander/Nida-Rümelin, Julian: Intelligente Autos im Dilemma in Unsere digitale
Zukunft. In welcher Welt wollen wir leben?, hrsg. von Carsten Könneker, Springer 2017, S. 195–
204.
14. Hevelke, Alexander/ Nida-Rümelin, Julian: Responsibility for crashes of autonomous
vehicles: an ethical analysis in Science and engineering ethics 2015, S. 619–630.
15. Hornung, Gerrit: Verfügungsrechte an fahrzeugbezogenen Daten, in: Datenschutz und
Datensicherheit 2015, S. 359–366.
16. Isensee, Josef: Menschenwürde: die säkulare Gesellschaft auf der Suche nach dem
Absoluten, in: Archiv des öffentlichen Rechts 2006, S. 173–218.
17. Kumfer, Wesley/Burgess, Richard: Investigation into the role of rational ethics in crashes of
automated vehicles in Transportation Research Record: Journal of the Transportation Research
Board 2015, 2489, S. 130–136.
18. Lemmer, K. (Hrsg.): Neue autoMobilität. Automatisierter Straßenverkehr der Zukunft (acatech
STUDIE), München: Herbert Utz Verlag 2016.
19. Lin, Patrick: Why ethics matter for autonomous cars in: Autonomes Fahren, Technische,
Rechtliche und Gesellschaftliche Aspekte, Heidelberg 2015, S. 70–85.
20. Luhmann, Niklas, Gibt es in unserer Gesellschaft noch unverzichtbare Normen?, Heidelberg
1993.
21. Mladenovic, Milos N./McPherson, Tristram: Engineering social justice into traffic control for
self-driving vehicles? in Science and engineering ethics Bd. 22 (2016), 4, S. 1131–1149.
22. Neumann, Ulfrid: Die Moral des Rechts. Deontologische und konsequentialistische
Argumentationen in Recht und Moral in: JRE 1994, S. 81–94.
23. Nida-Rümelin, Julian/Hevelke, Alexander: Selbstfahrende Autos und Trolley-Probleme: Zum
Aufrechnen von Menschenleben im Falle unausweichlicher Unfälle in Jahrbuch für Wissenschaft
und Ethik, Bd. 19, 2014, S. 5–23.
24. Powers, Thomas M.: Prospects for a Kantian Machine in Intelligent Systems, S. 46–51.
25. Powers, Thomas M.: On the Moral Agency of Computers in Topoi Bd. 32 (2013), 2, S. 227–
236.
26. Schuster, Frank P.: Das Dilemma-Problem aus Sicht der Automobilhersteller – eine
Entgegnung auf Jan Joerden in Autonome Systeme und neue Mobilität, hrsg. v. Eric Hilgendorf,
Baden-Baden 2017, S. 99–116.
27. VDA (2016): Zugang zum Fahrzeug und zu im Fahrzeug generierten Daten, unter:
https://www.vda.de/dam/vda/Medien/DE/Themen/Innovation-und-
Technik/Vernetzung/Position/VDA-Position-Zugang-zum-Fahrzeug-und-zu-im-Fahrzeug-
generierten-Daten/VDA%20Position%20Zugang%20zum%20Fahrzeug%20und%20zu%20im
%20Fahrzeug%20generierten%20Daten.pdf (Stand: 03.03.2017).
28. Welzel, Hans: Zum Notstandsproblem in Zeitschrift für die gesamte Strafrechtswissenschaft
1951, S. 49.
29. Wolf, Ingo: Wechselwirkung Mensch und autonomer Agent in Autonomes Fahren, Technische,
Rechtliche und Gesellschaftliche Aspekte, Heidelberg 2015, S. 103–125.