Philosophy & Technology
ISSN 2210-5433
DOI 10.1007/s13347-017-0284-0
COMMENTARY
The German Ethics Code for Automated
and Connected Driving
Christoph Luetge
Received: 24 August 2017 / Accepted: 29 August 2017
© Springer Science+Business Media B.V. 2017
Abstract The ethics of autonomous cars and automated driving have been a subject of
discussion in research for a number of years (cf. Lin 2015; Goodall in Transportation
Research Record: Journal of the Transportation Research Board 2424:58–65, 2014;
Goodall in IEEE Spectrum 53(6):28–58, 2016). As levels of automation progress, with
partially automated driving already becoming standard in new cars from a number of
manufacturers, the question of ethical and legal standards becomes pressing. For
example, while automated and autonomous cars, being equipped with appropriate detection
sensors, processors, and intelligent mapping material, have a chance of being much safer
than human-driven cars in many regards, situations will arise in which accidents cannot
be completely avoided. Such situations will have to be dealt with when programming
the software of these vehicles. In several instances, internationally, regulations have
been passed, based mostly on legal considerations of road safety. However, to date,
there have been few, if any, cases of a broader ethics code for autonomous or automated
driving preceding actual regulation and being based on a broadly composed ethics
committee of independent experts. In July 2016, the German Federal Minister of
Transport and Digital Infrastructure, Alexander Dobrindt, appointed a national ethics
committee for automated and connected driving, which began its work in September
2016. In June 2017, this committee presented a code of ethics which was published in
German (with annotations, BMVI 2017a) and in English (cf. BMVI 2017b). It consists
of 20 ethical guidelines. Having been a member of this committee, I will present the
main ethical topics of these guidelines and the discussions that lay behind them.
Keywords Automated driving · Autonomous cars · Road safety · Ethics of digitisation · Digital ethics · Self-driving cars
For an overview of passed US bills, see http://cyberlaw.stanford.edu/wiki/index.php/Automated_Driving:_Legislative_and_Regulatory_Action.
* Christoph Luetge
luetge@tum.de
Peter Loescher Chair of Business Ethics and Global Governance, Technical University of Munich, Munich, Germany
1 Members and Procedure
The ethics committee was composed of 14 members, three of whom were professors of
law, three professors of ethics, and two professors of technical disciplines.
Among the others were two representatives of automotive companies, the president of
the association of consumer protection groups, the president of the German automobile
club ADAC, a Catholic bishop, and a former Public Prosecutor General. The chairman
was Udo di Fabio, a former judge of the German Federal Constitutional Court. In
addition, hearings with additional experts from technical, legal, and ethical disciplines
were conducted, as well as a driving test with several (semi-) autonomous cars.
The committee formed five working groups, which discussed the issues of "unavoidable
accident situations," "data security and data economics," "human-machine interface,"
"responsibility for software and infrastructure," and "ethical context beyond traffic."
Each of these groups prepared separate working papers. These papers were
later integrated into the final code of ethics and its longer, annotated version (BMVI
2017a).
2 Levels of Automated Driving
The committee used the classification of levels of automated driving by the German
Association of the Automotive Industry (VDA):
0 Driver only
1 Assisted
2 Partial driving automation
3 High driving automation
4 Full driving automation
5 Driverless
This is similar to the levels of automated driving defined by the Society of
Automotive Engineers (SAE), though their wording is slightly different:
0 No driving automation
1 Driver assistance
2 Partial driving automation
3 Conditional driving automation
4 High driving automation
5 Full driving automation
The National Highway Traffic Safety Administration (NHTSA), finally, merges levels 4 and 5 of the German system into one. Following the German system, the committee was concerned mainly with levels 4 and 5, even if these are not yet fully realized. Thus, full driving automation and driverless cars were at the center of deliberation. In addition, the term "connected driving" was used in order to highlight that ethical questions concerning the networking and informational linking of cars were also considered by the committee.
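For comparison, the two taxonomies and the NHTSA merge can be written down compactly. The following Python sketch simply transcribes the level names listed above; the dictionary layout and the helper function are illustrative choices, not part of the ethics code.

```python
# Minimal sketch of the automation-level taxonomies discussed above.
# Level names are transcribed from the VDA and SAE classifications as quoted
# in this section; the data structure itself is an illustrative choice.

VDA_LEVELS = {
    0: "Driver only",
    1: "Assisted",
    2: "Partial driving automation",
    3: "High driving automation",
    4: "Full driving automation",
    5: "Driverless",
}

SAE_LEVELS = {
    0: "No driving automation",
    1: "Driver assistance",
    2: "Partial driving automation",
    3: "Conditional driving automation",
    4: "High driving automation",
    5: "Full driving automation",
}

def nhtsa_level(vda_level: int) -> int:
    """NHTSA merges levels 4 and 5 of the German system into one top level."""
    return min(vda_level, 4)

if __name__ == "__main__":
    for level in range(6):
        print(level, VDA_LEVELS[level], "|", SAE_LEVELS[level], "| NHTSA:", nhtsa_level(level))
```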
3 The Code
The code starts with some general remarks, ending in the following mission statement:
"The decision that has to be taken is whether the licensing of automated driving
systems is ethically justifiable or possibly even imperative. If these systems are
licensed – and it is already apparent that this is happening at international level –
everything hinges on the conditions in which they are used and the way in which
they are designed. At the fundamental level, it all comes down to the following
question. How much dependence on technologically complex systems – which in
the future will be based on artificial intelligence, possibly with machine learning
capabilities – are we willing to accept in order to achieve, in return, more safety,
mobility and convenience? What precautions need to be taken to ensure controllability,
transparency and data autonomy? What technological development
guidelines are required to ensure that we do not blur the contours of a human
society that places individuals, their freedom of development, their physical and
intellectual integrity and their entitlement to social respect at the heart of its legal
regime?"
After this introduction, which puts human beings at the center of attention of ethics
in technology, 20 ethical guidelines follow. I have grouped them into 10 clusters:
4 Introduction
4.1 Ethical Guideline 1
The primary purpose of partly and fully automated transport systems is to improve
safety for all road users. Another purpose is to increase mobility opportunities and to
make further benefits possible. Technological development obeys the principle of
personal autonomy, which means that individuals enjoy freedom of action for which
they themselves are responsible.
The principle of personal autonomy is introduced here as a central principle for the ethics of technology. How personal autonomy can be brought into a healthy relation with technological imperatives and constraints is indeed a key question for autonomous cars. This question will come up frequently in what follows.
5 General Ethical Benefits of Automated Driving
5.1 Ethical Guideline 2
The protection of individuals takes precedence over all other utilitarian consider-
ations. The objective is to reduce the level of harm until it is completely
prevented. The licensing of automated systems is not justifiable unless it promises
to produce at least a diminution in harm compared with human driving, in other
words a positive balance of risks.
The interesting point about this guideline is that it accepts the balancing of risks
against one another rather than ruling out any such calculation (as a pure
deontological perspective might). The ethics code here sides with the view of ethics
as reducing harm and achieving a net advantage over relevant alternatives. The
committee agreed that autonomous cars bring ethical benefits, which is an
important argument for their introduction (see section 6.2, though). These benefits also
include the possibility of substantially improving mobility for handicapped people.
5.2 Ethical Guideline 3
The public sector is responsible for guaranteeing the safety of the automated and
connected systems introduced and licensed in the public street environment. Driving
systems thus need official licensing and monitoring. The guiding principle is the
avoidance of accidents, although technologically unavoidable residual risks do not
militate against the introduction of automated driving if the balance of risks is funda-
mentally positive.
This guideline stresses that automated driving requires an official license and cannot
be left to the responsibility of car manufacturers alone. Acceptance among the population
might be jeopardized if automated driving were not subject to appropriate rules.
5.3 Ethical Guideline 4
The personal responsibility of individuals for taking decisions is an expression of a
society centered on individual human beings, with their entitlement to personal devel-
opment and their need for protection. The purpose of all governmental and political
regulatory decisions is thus to promote the free development and the protection of
individuals. In a free society, the way in which technology is statutorily fleshed out is
such that a balance is struck between maximum personal freedom of choice in a general
regime of development and the freedom of others and their safety.
This guideline puts "personal development" and a "free society" at the center of
ethical attention; both should be promoted and not hindered by technological
advances. The term "free society" is not specified any further, but it can be interpreted
as referring to democratic countries in a broad sense.
6 Unavoidable Dilemma Situations
Dilemma situations are one of the key issues in much of the literature on automated and
autonomous driving; they are being extensively debated with reference to the famous
trolley cases (cf. Fournier 2016; Hevelke and Nida-Rümelin 2015; Gogoll and Müller 2017;
Bonnefon et al. 2016). Guidelines 5 to 9 deal with situations of unavoidable accidents, and
these rules were among the most controversially debated ones within the committee.
6.1 Ethical Guideline 5
Automated and connected technology should prevent accidents wherever this is practi-
cally possible. Based on the state of the art, the technology must be designed in such a way
that critical situations do not arise in the first place. These include dilemma situations, in
other words a situation in which an automated vehicle has to Bdecide^which of two evils,
between which there can be no trade-off, it necessarily has to perform. In this context, the
entire spectrum of technological options—for instance from limiting the scope of appli-
cation to controllable traffic environments, vehicle sensors, and braking performance,
signals for persons at risk, right up to preventing hazards by means of "intelligent" road
infrastructure—should be used and continuously evolved. The significant enhancement of
road safety is the objective of development and regulation, starting with the design and
programming of the vehicles such that they drive in a defensive and anticipatory manner,
posing as little risk as possible to vulnerable road users.
This is a relatively unproblematic guideline: in the literature about autonomous
driving and trolley cases, much is said and reasoned about what to do once a situation
is already unavoidable. Much less attention, however, is usually devoted to the fact that
automated and autonomous cars perform much better at preventing these
situations from arising in the first place, especially with regard to braking at the right time and with
the right intensity. It is estimated that driverless cars could in this way reduce deaths on
the road by up to 90% (cf. for example: https://www.sciencealert.com/driverless-cars-could-reduce-traffic-fatalities-by-up-to-90-says-report).
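To make the point about timing and intensity concrete, here is a minimal Python sketch of time-to-collision-based emergency braking under simplified straight-line kinematics. The thresholds and the kinematic model are assumptions for illustration only; they are not taken from the ethics code or from any production system.

```python
# Minimal sketch of time-to-collision (TTC) based emergency braking,
# illustrating "braking at the right time and with the right intensity".
# Thresholds and kinematics are simplified illustrative assumptions.

def time_to_collision(gap_m: float, closing_speed_ms: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_ms <= 0:          # not closing in on the obstacle
        return float("inf")
    return gap_m / closing_speed_ms

def required_deceleration(gap_m: float, closing_speed_ms: float) -> float:
    """Constant deceleration (m/s^2) needed to stop just before the obstacle."""
    if gap_m <= 0:
        return float("inf")
    return closing_speed_ms ** 2 / (2 * gap_m)

def braking_command(gap_m: float, closing_speed_ms: float,
                    comfort_brake: float = 3.0, max_brake: float = 9.0) -> float:
    """Brake early and gently when possible, harder only when physics demands it."""
    ttc = time_to_collision(gap_m, closing_speed_ms)
    needed = required_deceleration(gap_m, closing_speed_ms)
    if ttc > 4.0:                      # plenty of time: no intervention yet
        return 0.0
    # request at least a comfortable braking level, more if required to stop in time
    return min(max(needed, comfort_brake), max_brake)

if __name__ == "__main__":
    print(braking_command(gap_m=25.0, closing_speed_ms=15.0))  # -> 4.5 m/s^2
```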
6.2 Ethical Guideline 6
The introduction of more highly automated driving systems, especially with the option
of automated collision prevention, may be socially and ethically mandated if it can
unlock existing potential for damage limitation. Conversely, a statutorily imposed
obligation to use fully automated transport systems or the causation of practical
inescapability is ethically questionable if it entails submission to technological imper-
atives (prohibition on degrading the subject to a mere network element).
While guideline 2 (and the first sentence of this guideline) stressed that highly
automated driving is ethically desirable and even mandatory in general, other dangers
might lie in the (still distant) future: at least fully automated driving should not be made
mandatory, as it might submit subjects totally to a technological regime and thus reduce
them, in a Kantian perspective, to mere means to an end. There was some controversy
within the committee about this argument, as it follows that fully automated
driving would be ethically questionable even if it further reduced the number of
accidents compared to highly automated driving. The guideline was eventually adopted,
however, to work as a caveat and a warning against taking the development too far
without further reflection.
6.3 Ethical Guideline 7
In hazardous situations that prove to be unavoidable, despite all technological precau-
tions being taken, the protection of human life enjoys top priority in a balancing of
legally protected interests. Thus, within the constraints of what is technologically
feasible, the systems must be programmed to accept damage to animals or property
in a conflict if this means that personal injury can be prevented.
This guideline simply states that the prevention of harm to humans takes priority over
damage to property and also over harm to animals. Higher animals were, however,
given special attention, as their protection has had constitutional status in Germany
since 2002. Tricky cases might arise if cars' detection equipment eventually becomes
able to distinguish with near certainty between higher animals (especially smaller ones)
and other obstacles on the road, or even between different classes of higher animals.
However, the details of these questions were not considered a top priority for the time being.
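As a concrete reading of this priority ordering, the following Python sketch ranks candidate maneuvers lexicographically: avoid personal injury first, then harm to animals, then property damage. The maneuver options and the boolean harm model are illustrative assumptions, not the committee's specification.

```python
# Minimal sketch of the priority ordering implied by Guideline 7: within the
# technologically feasible options, prefer trajectories that avoid personal
# injury, accepting damage to animals or property instead. The harm categories
# and the scoring are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class Maneuver:
    name: str
    injures_person: bool
    harms_animal: bool
    damages_property: bool

def least_harmful(options: List[Maneuver]) -> Maneuver:
    """Pick the option lowest in the lexicographic order:
    personal injury > animal harm > property damage."""
    return min(options, key=lambda m: (m.injures_person, m.harms_animal, m.damages_property))

if __name__ == "__main__":
    options = [
        Maneuver("stay in lane", injures_person=True, harms_animal=False, damages_property=False),
        Maneuver("swerve right", injures_person=False, harms_animal=True, damages_property=False),
        Maneuver("brake into barrier", injures_person=False, harms_animal=False, damages_property=True),
    ]
    print(least_harmful(options).name)   # -> "brake into barrier"
```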
6.4 Ethical Guideline 8
Genuine dilemmatic decisions, such as a decision between one human life and another,
depend on the actual specific situation, incorporating "unpredictable" behavior by
parties affected. They can thus not be clearly standardized, nor can they be programmed
such that they are ethically unquestionable. Technological systems must be designed to
avoid accidents. However, they cannot be standardized to a complex or intuitive
assessment of the impacts of an accident in such a way that they can replace or
anticipate the decision of a responsible driver with the moral capacity to make correct
judgements. It is true that a human driver would be acting unlawfully if he killed a
person in an emergency to save the lives of one or more other persons, but he would not
necessarily be acting culpably. Such legal judgements, made in retrospect and taking
special circumstances into account, cannot readily be transformed into abstract/general
ex ante appraisals and thus also not into corresponding programming activities. For this
reason, perhaps more than any other, it would be desirable for an independent public
sector agency (for instance, a Federal Bureau for the Investigation of Accidents
Involving Automated Transport Systems or a Federal Office for Safety in Automated
and Connected Transport) to systematically process the lessons learned.
Consider the following situation: a human driver, faced with a split-second decision
between hitting children playing by the roadside and driving over a cliff, might choose to
sacrifice herself. That would be a personal, intuitive decision, and it might also be the "right"
result of a long philosophical deliberation. However, even if this were the case, such a
decision to deliberately sacrifice specific lives should not be taken by a programmer.
6.5 Ethical Guideline 9
In the event of unavoidable accident situations, any distinction based on personal
features (age, gender, physical, or mental constitution) is strictly prohibited. It is also
prohibited to offset victims against one another. General programming to reduce the
number of personal injuries may be justifiable. Those parties involved in the generation
of mobility risks must not sacrifice non-involved parties.
This guideline was debated controversially and was not adopted unanimously by
the committee's members. The difficulty is to rule out a machine or its code selecting
targets according to personal characteristics while still allowing a programmer to write
code that reduces the overall number of personal injuries, in whatever way. This is a
complex problem, which is usually not as simple as selecting target A or B to be
definitely killed. First, damage to property might be very substantial, as in the case of a
power blackout for an entire city or an exploding fuel truck. But even if one decides, as
the committee did, to opt for personal injuries always taking priority, there might be
different probabilities of injuries or casualties for different targets. This could result in
complicated situations in which it would not be ethical to forgo the opportunity to
reduce the overall "damage" to persons.
However, what the code explicitly does not say is that individual victims in different
scenarios may be offset against each other. To some extent, this is the lesson of the
German Luftsicherheitsgesetz (Aviation Security Act) being ruled unconstitutional by
the German Federal Constitutional Court in 2006, even though opinions among judges
and legal scholars vary to this day (see, for example, Isensee 2006), and the committee
could not reach a consensus in this matter regarding situations of imminent danger.
In any case, the Aviation Security Act, which was later used in Ferdinand von
Schirach's famous play "Terror," would have allowed the shooting down of hijacked
aircraft thought to be used as weapons. In that case, individually known subjects
would have been sacrificed for the sake of others. In the case of anonymous
programming, however, no victims are known individually in advance. Rather, it is
an abstract guideline, the exact consequences of which cannot be foreseen, and which
reduces the overall risk for all people affected by it (it could be regarded as similar to
the risk that comes with vaccination). Such a rule clarifies, to the extent possible,
the situation for programmers by giving them a general ethical guideline.
Not allowing non-involved parties to be sacrificed implies that it cannot be a general
rule for a software code to unconditionally save the driver. However, the driver's
well-being cannot be put last, either.
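A minimal sketch of what such anonymous programming could look like is given below: the expected number of injured persons is minimized across candidate trajectories, and personal features simply never enter the calculation. The probabilities, the options, and the harm model are illustrative assumptions, not the committee's algorithm or a real collision-mitigation system.

```python
# Minimal sketch of "general programming to reduce the number of personal
# injuries" under Guideline 9: expected injuries are minimized without any
# reference to personal features (age, gender, etc.), which never appear
# in the model. All numbers are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    probability_of_injury: float   # chance that this group is injured at all
    persons_at_risk: int           # how many persons the trajectory endangers

@dataclass
class Trajectory:
    name: str
    outcomes: List[Outcome]

def expected_injuries(t: Trajectory) -> float:
    # Note: no personal characteristics appear anywhere in this calculation.
    return sum(o.probability_of_injury * o.persons_at_risk for o in t.outcomes)

def choose_trajectory(options: List[Trajectory]) -> Trajectory:
    return min(options, key=expected_injuries)

if __name__ == "__main__":
    options = [
        Trajectory("brake in lane", [Outcome(0.30, 2)]),       # 0.60 expected injuries
        Trajectory("swerve to shoulder", [Outcome(0.10, 1),
                                          Outcome(0.05, 1)]),  # 0.15 expected injuries
    ]
    print(choose_trajectory(options).name)   # -> "swerve to shoulder"
```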
7 Who Is Accountable?
7.1 Ethical Guideline 10
In the case of automated and connected driving systems, the accountability that was
previously the sole preserve of the individual shifts from the motorist to the manufac-
turers and operators of the technological systems and to the bodies responsible for
taking infrastructure, policy, and legal decisions. Statutory liability regimes and their
fleshing out in the everyday decisions taken by the courts must sufficiently reflect this
transition.
7.2 Ethical Guideline 11
Liability for damage caused by activated automated driving systems is governed by the
same principles as in other product liability. From this, it follows that manufacturers or
operators are obliged to continuously optimize their systems and also to observe
systems they have already delivered and to improve them where this is technologically
possible and reasonable.
Guidelines 10 and 11 are very important ones, which will probably have more practical
consequences than the rules concerning dilemma situations (as those situations tend to
be very rare). Rules 10 and 11 shift the accountability, which at the moment (see the
Geneva Convention on Road Traffic (1949) and the Vienna Convention on Road Traffic
(1968)) still lies with the car's owner, to the "manufacturers or operators" of the car and
its technological systems. It is clear that if the driver (or the car owner) cannot fully
control the car in each single situation and is not required to do so (from automation
level 3 upward), he or she can no longer be held accountable for the car's behavior;
accountability then lies only with the companies that built it or that operate its relevant
systems (for Volvo, cf. Korosec 2015). This guideline will certainly have an enormous
impact on insurance and other questions.
8 Public Information
8.1 Ethical Guideline 12
The public is entitled to be informed about new technologies and their deployment in a
sufficiently differentiated manner. For the practical implementation of the principles
developed here, guidance for the deployment and programming of automated vehicles
should be derived in a form that is as transparent as possible, communicated in public,
and reviewed by a professionally suitable independent body.
The committee was convinced that public information about issues of automated cars
is necessary and that one or several suitable independent bodies will be required to
conduct this task (see also guideline 18). Such a body does not have to be state-run but
could be an NGO (for example, a consumer organization) that takes over the task of
critically monitoring companies' actions.
9 Connected Driving: Safety and Security
9.1 Ethical Guideline 13
It is not possible to state today whether, in the future, it will be possible and expedient
to have the complete connectivity and central control of all motor vehicles within the
context of a digital transport infrastructure, similar to that in the rail and air transport
sectors. The complete connectivity and central control of all motor vehicles within the
context of a digital transport infrastructure is ethically questionable if, and to the extent
that, it is unable to safely rule out the total surveillance of road users and manipulation
of vehicle control.
This guideline says that total surveillance, which might arise in the context of connected
driving, would be ethically problematic, though it does not state exactly what should
be done to prevent it. At the moment, this is probably not the most pressing issue,
even if public discussion at times circles around it.
9.2 Ethical Guideline 14
Automated driving is justifiable only to the extent to which conceivable attacks, in
particular manipulation of the IT system or innate system weaknesses, do not result in
such harm as to lastingly shatter people’s confidence in road transport.
The issue of security against cyberattacks was high on the committee's agenda, and
whether and how autonomous cars might be hacked and turned into weapons is much
discussed in public. While the general guideline is quite clear, much work will be left
to the details of programming.
10 Data Protection
10.1 Ethical Guideline 15
Permitted business models that avail themselves of the data that are generated by
automated and connected driving and that are significant or insignificant to vehicle control
come up against their limitations in the autonomy and data sovereignty of road users. It is
the vehicle keepers and vehicle users who decide whether their vehicle data that are
generated are to be forwarded and used. The voluntary nature of such data disclosure
presupposes the existence of serious alternatives and practicability. Action should be
taken at an early stage to counter a normative force of the factual, such as that prevailing in
the case of data access by the operators of search engines or social networks.
Data protection and data sovereignty are probably among the most discussed issues
in big data ethics and the ethics of digitization in general (cf. Floridi 2016), and no
less so within the committee. The baseline here is that the data belong to the users and
keepers of a car. They can voluntarily allow their data to be used by companies; however,
what the code stresses is that there should be a systematic search for alternatives among
search engines, social networks, and similar services, in order to generate appropriate
competition. Privacy by design was used as a guideline here for connected driving (cf. EU 2016).
In the German annotations to the code, it is noted that benefits in terms of comfort are
not sufficient to justify a lack of privacy or the neglect of data sovereignty.
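The principle that data disclosure is voluntary and purpose-bound can be illustrated with a short Python sketch of a consent gate, in the spirit of privacy by design. The ConsentRegistry class, its purpose strings, and the forwarding function are hypothetical names introduced here for illustration, not an existing automotive API.

```python
# Minimal sketch of the data-sovereignty principle in Guideline 15: vehicle
# data are forwarded only with explicit, purpose-specific consent of the
# vehicle keeper/user, and consent can be withdrawn. Names and fields are
# illustrative assumptions.

from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class ConsentRegistry:
    # maps a user id to the set of purposes they have consented to
    consents: Dict[str, Set[str]] = field(default_factory=dict)

    def grant(self, user: str, purpose: str) -> None:
        self.consents.setdefault(user, set()).add(purpose)

    def revoke(self, user: str, purpose: str) -> None:
        self.consents.get(user, set()).discard(purpose)

    def allowed(self, user: str, purpose: str) -> bool:
        return purpose in self.consents.get(user, set())

def forward_vehicle_data(registry: ConsentRegistry, user: str,
                         purpose: str, payload: dict) -> bool:
    """Forward data only if the keeper/user has consented to this purpose."""
    if not registry.allowed(user, purpose):
        return False          # privacy by default: no consent, no transfer
    # ... transmit payload to the service provider here ...
    return True

if __name__ == "__main__":
    reg = ConsentRegistry()
    reg.grant("keeper-42", "traffic-optimisation")
    print(forward_vehicle_data(reg, "keeper-42", "advertising", {"speed": 87}))           # False
    print(forward_vehicle_data(reg, "keeper-42", "traffic-optimisation", {"speed": 87}))  # True
```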
11 Human-Machine Interface
11.1 Ethical Guideline 16
It must be possible to clearly distinguish whether a driverless system is being used or
whether a driver retains accountability with the option of overruling the system. In the
case of non-driverless systems, the human-machine interface must be designed such
that at any time, it is clearly regulated and apparent on which side the individual
responsibilities lie, especially the responsibility for control. The distribution of respon-
sibilities (and thus of accountability), for instance with regard to the time and access
arrangements, should be documented and stored. This applies especially to the human-
to-technology handover procedures. International standardization of the handover
procedures and their documentation (logging) is to be sought in order to ensure the
compatibility of the logging or documentation obligations as automotive and digital
technologies increasingly cross national borders.
First, the code explicitly states that the driver can at any time voluntarily overrule the
system and drive herself. This generated some controversy, since it might lead to
additional risks. However, the committee decided that it is part of the conditio humana
to take even (what might be termed) "irrational" decisions.
Second, the problem of the human-machine interface is not to be underestimated:
the handover procedures must be clear, unequivocal, and easy to handle. It must
always be clear who is in charge, the driver or the machine. Data about these
procedures must be appropriately stored. And the committee calls for international
standardisation of these procedures.
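As an illustration of what documenting and storing handover procedures could amount to, the following Python sketch records each handover as a timestamped event. The record fields and the in-memory log are assumptions made for this example; a real system would log to tamper-evident storage in whatever format international standardisation eventually prescribes.

```python
# Minimal sketch of documenting handover events so that it is always clear
# and reconstructable who was in charge (Guideline 16). Fields and storage
# are illustrative assumptions, not a standardized logging format.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class HandoverEvent:
    timestamp: str      # ISO 8601, UTC
    from_party: str     # "human" or "system"
    to_party: str
    reason: str         # e.g. "driver override", "system request"
    acknowledged: bool  # did the receiving party confirm control?

def log_handover(log, from_party, to_party, reason, acknowledged):
    event = HandoverEvent(datetime.now(timezone.utc).isoformat(),
                          from_party, to_party, reason, acknowledged)
    log.append(event)
    return event

if __name__ == "__main__":
    log = []
    log_handover(log, "system", "human", reason="driver override", acknowledged=True)
    print(json.dumps([asdict(e) for e in log], indent=2))
```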
11.2 Ethical Guideline 17
The software and technology in highly automated vehicles must be designed
such that the need for an abrupt handover of control to the driver (Bemergen-
cy^) is virtually obviated. To enable efficient, reliable, and secure human-
machine communication and prevent overload, the systems must adapt more
to human communicative behaviour rather than requiring humans to enhance
their adaptive capabilities.
First, the handover from the machine must occur with a certain lead time and must not
be abrupt. Second, the communication must be adapted to humans, not vice versa.
12 Learning Systems
12.1 Ethical Guideline 18
Learning systems that are self-learning in vehicle operation and their connection
to central scenario databases may be ethically allowed if, and to the extent that,
they generate safety gains. Self-learning systems must not be deployed unless
they meet the safety requirements regarding functions relevant to vehicle con-
trol and do not undermine the guidelines established here. It would appear
advisable to hand over relevant scenarios to a central scenario catalogue at a
neutral body in order to develop appropriate universal standards, including any
acceptance tests.
Machine learning is a highly important issue for autonomous driving, since
self-learning systems may lead to increased safety in a number of ways (cf. Kalra and
Paddock 2016). A learning car might learn to avoid certain situations or congested routes.
However, it is also a sensitive issue, since a self-learning system might evolve in ways that
programmers have not thought of beforehand, as in the case of Microsoft's bot Tay
(see http://www.telegraph.co.uk/technology/2016/03/24/microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit/).
Therefore, the code, first, only allows for self-learning in non-safety-critical matters.
And second, it calls for a neutral body to develop standards for such self-learning,
its scenarios and acceptability.
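Read as a deployment rule, this restriction could be sketched as a simple gate: updates learned in vehicle operation are only deployed if they do not touch safety-critical control functions, or if they have passed acceptance tests against a central scenario catalogue. The function names, the set of safety-critical functions, and the test flag are assumptions for illustration, not an existing certification interface.

```python
# Minimal sketch of the restriction in Guideline 18: self-learned updates are
# deployed only if they do not touch safety-critical vehicle-control functions,
# or if they have passed external acceptance tests. Names are illustrative.

SAFETY_CRITICAL_FUNCTIONS = {"braking", "steering", "collision_avoidance"}

def may_deploy_update(affected_functions: set,
                      passed_acceptance_tests: bool) -> bool:
    """Allow a self-learned update only under the Guideline 18 conditions."""
    touches_safety = bool(affected_functions & SAFETY_CRITICAL_FUNCTIONS)
    if not touches_safety:
        return True                      # e.g. route preferences, comfort settings
    return passed_acceptance_tests       # safety-critical: needs external validation

if __name__ == "__main__":
    print(may_deploy_update({"route_planning"}, passed_acceptance_tests=False))  # True
    print(may_deploy_update({"braking"}, passed_acceptance_tests=False))         # False
    print(may_deploy_update({"braking"}, passed_acceptance_tests=True))          # True
```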
12.2 Ethical Guideline 19
In emergency situations, the vehicle must autonomously, i.e., without human assis-
tance, enter into a "safe condition." Harmonization, especially of the definition of a safe
condition or of the handover routines, is desirable.
If the autonomous car has to leave the autonomous mode but the driver is unwilling
or unable to take over control, the vehicle must enter into a safe condition. The
currently still differing concepts of what constitutes a safe condition should be harmonized:
does the car stop in the middle of the road, or does it drive itself safely to the roadside
and stop there? The latter seems to make more sense.
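A minimal Python sketch of this fallback logic is given below: if the driver does not confirm takeover within a handover window, the vehicle transitions on its own into a minimal-risk maneuver such as pulling over and stopping. The mode names, the time budget, and the chosen maneuver are illustrative assumptions, not harmonized definitions.

```python
# Minimal sketch of the "safe condition" fallback in Guideline 19: if the
# driver does not take over within the handover window, the vehicle performs
# a minimal-risk manoeuvre on its own. States and timings are assumptions.

from enum import Enum, auto

class Mode(Enum):
    AUTOMATED = auto()
    HANDOVER_REQUESTED = auto()
    MANUAL = auto()
    MINIMAL_RISK_MANEUVER = auto()   # e.g. pull over to the roadside and stop

HANDOVER_WINDOW_S = 10.0             # assumed time budget for the driver

def next_mode(mode: Mode, driver_took_over: bool, seconds_since_request: float) -> Mode:
    if mode is Mode.HANDOVER_REQUESTED:
        if driver_took_over:
            return Mode.MANUAL
        if seconds_since_request >= HANDOVER_WINDOW_S:
            return Mode.MINIMAL_RISK_MANEUVER   # driver unwilling or unable
    return mode

if __name__ == "__main__":
    print(next_mode(Mode.HANDOVER_REQUESTED, driver_took_over=False, seconds_since_request=12.0))
    # -> Mode.MINIMAL_RISK_MANEUVER
```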
13 Driver Education
13.1 Ethical Guideline 20
The proper use of automated systems should form part of people’s general digital
education. The proper handling of automated driving systems should be taught in an
appropriate manner during driving tuition and tested.
Appropriate changes to driver education will be necessary. The code leaves these
changes unspecified but addresses the issue, which will have to be discussed in
further detail in the future.
14 Concluding Remarks
It remains to be seen what exact impact the ethics code will have on future legislation
and regulation. But certainly, no legislation in Germany will be able to completely
neglect or circumvent it. Also, it will be interesting to see whether a similar develop-
ment takes place in the entire European Union. It would make much sense to take the
same approach there and develop an ethics code for Europe.
From an ethical point of view, it was interesting to see, in retrospect, that the gap
between different ethical approaches could be bridged. While there was considerable
disagreement in the discussions, a consensus in practical matters could ultimately be
reached on most questions, and in those questions where it could not be reached, this
was noted too. That pluralism in ethics did not hinder agreement on an ethics code
looks promising for future discussions.
References
BMVI (2017a). http://www.bmvi.de/SharedDocs/DE/Anlage/Presse/084-dobrindt-bericht-der-ethik-kommission.pdf?__blob=publicationFile.
BMVI (2017b). https://www.bmvi.de/SharedDocs/EN/Documents/G/ethic-commission-report.pdf?__blob=publicationFile.
Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352,
1573–1576.
EU: General Data Protection Regulation. (2016). Available at: http://ec.europa.eu/justice/data-protection/reform/index_en.htm. Accessed 25 Mar 2016.
Floridi, L. (2016). The fourth revolution: how the infosphere is reshaping human reality. Oxford: Oxford
University Press.
Fournier, T. (2016). Will my next car be a libertarian or a utilitarian? Who will decide? IEEE Technology and
Society Magazine, 35(2), 40–45.
Geneva Convention on Road Traffic (1949). https://en.wikisource.org/wiki/Geneva_Convention_on_Road_Traffic.
Gogoll, J., & Müller, J. F. (2017). Autonomous cars: in favor of a mandatory ethics setting. Science and Engineering Ethics, 23(3), 681–700.
Goodall, N. J. (2014). Ethical decision making during automated vehicle crashes. Transportation Research Record: Journal of the Transportation Research Board, 2424, 58–65.
Goodall, N. J. (2016). Can you program ethics into a self-driving car? IEEE Spectrum, 53(6), 28–58.
Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for crashes of autonomous vehicles: an ethical analysis. Science and Engineering Ethics, 21(3), 619–630.
Isensee, J. (2006). Menschenwürde: die säkulare Gesellschaft auf der Suche nach dem Absoluten. Archiv des öffentlichen Rechts, 173–218.
Kalra, N., & Paddock, S. (2016). Driving to safety: how many miles of driving would it take to demonstrate
autonomous vehicle reliability? https://doi.org/10.7249/RR1478.
Korosec, K. (2015). Volvo will accept all liability when its cars are in autonomous mode. Online:
http://fortune.com/2015/10/07/volvo-liability-self-driving-cars/. Accessed 4 Sept 2017.
Lin, P. (2015). Why ethics matter for autonomous cars. In M. Maurer et al. (Eds.), Autonomes Fahren,
Technische, Rechtliche und Gesellschaftliche Aspekte (pp. 70–85). Heidelberg: Springer.
Vienna Convention on Road Traffic (1968). Updated 2014, in force since 2016. https://treaties.un.org/Pages/ViewDetailsIII.aspx?src=TREATY&mtdsg_no=XI-B-19&chapter=11&Temp=mtdsg3&lang=en.