The Case for an Ethical Black Box
Alan F.T. Winfield1* and Marina Jirotka2
1Bristol Robotics Lab, University of the West of England, Bristol
2Department of Computer Science, University of Oxford
* alan.winfield@uwe.ac.uk
Abstract. This paper proposes that robots and autonomous systems
should be equipped with the equivalent of a Flight Data Recorder to
continuously record sensor and relevant internal status data. We call
this an ethical black box. We argue that an ethical black box will be
critical to the process of discovering why and how a robot caused an
accident, and thus an essential part of establishing accountability and
responsibility. We also argue that without the transparency afforded by
an ethical black box, robots and autonomous systems are unlikely to win
public trust.
Keywords: robot ethics, ethical governance, traceability, transparency,
responsible robotics, trust
1 Introduction
Driverless car accidents are headline news. The fatal Tesla accident of May 2016
[25] resulted in considerable press and media speculation on the causes of the
accident. But there is a worrying perception that transparency is lacking in how
these events are disclosed. System developers speak reassuringly but we may
suspect they have a vested interest in giving events a positive gloss. This raises
the crucial question of how the transparency of robot control systems can be
guaranteed so as to avoid publics becoming fearful and to ensure that robots gain
high levels of trust and acceptance in society [10]. Whilst our existing concepts of
accountability and liability are being stretched by semi-autonomous machines,
new heights of machine autonomy are likely to shatter them completely [31],
giving urgency to the search for suitable revisions.
In this paper we propose that robots and autonomous systems should be
equipped with the equivalent of an aircraft Flight Data Recorder to continuously
record sensor and relevant internal status data. We call this an ethical black
box. We argue that an ethical black box will play a key role in the processes of
discovering why and how a robot caused an accident, and will thus be an essential
part of establishing accountability and responsibility. We also argue that without the
transparency afforded by an ethical black box, robots and autonomous systems
are unlikely to win public trust.
This paper is structured as follows. In section 2 we outline the development
and practice of flight data recorders. Then in section 3 we make the link between
ethical governance, transparency and trust. Section 4 discusses concerns over
transparency in safety-critical artificial intelligence (AI), proposes an ethical
black box and then suggests a generic specification. In section 5 we consider the
human processes of robot accident investigation.
2 Black box flight data recorders
Black box or flight data recorders were introduced in 1958 for larger aircraft, and
since then the scope of the flight data they record has vastly expanded. Initially
flight data recorders captured time and navigation data alongside the position of
control surfaces and the pilots’ movement of controls; latterly sensor data on the
internal and external environment, as well as the functioning of components and
systems, are also recorded, alongside intentional autopilot settings such as selected
headings, speeds, altitudes and so on [17]. The first black boxes recorded
5 flight parameters; data recorders on modern aircraft record more than 1000 pa-
rameters [7]. A significant innovation was the introduction of the Cockpit Voice
Recorder (CVR) to capture the conversation of the pilots to aid interpretation
of flight data. Although initially resisted [17], conversations captured by the
CVR have proven to be an invaluable tool in reconstructing the circumstances
of an accident by providing an intentional context from which to interpret flight
data. Air accident investigations help to rule out systematic failure modes, help
to preserve trust in aviation, provide accountability and produce lessons that
contribute to overall levels of safety.
Factors that typically contribute to air accidents have parallels to incidents
that may in the future arise through our increased dependence on robots. Air ac-
cidents are typically the result of unpredicted interactions between the intentions
and actions of the aircrew, the integrity and behaviour of the aircraft systems,
and the environmental circumstances of the flight. Somewhat analogously we
anticipate future robots to be subject to a similar mix of factors. As we expect
robots to do more for us, they will necessarily become more sophisticated, op-
erate more autonomously, and do so within open, unconstrained settings. These
greater freedoms come with the increased risk of unanticipated combinations of
unforeseen factors leading to hazardous situations or actually resulting in harm.
This is not to make a judgement about how frequently such events might occur;
as with air disasters, significant harm may be rare. But we need to acknowledge that
hazardous events will inevitably take place. So our very ambition for robots to
contribute to human activities in new ways implies giving robots new capabilities
and freedoms, which in turn creates the potential for robots to be implicated
in significant hazard and injury. This may prompt an overall assessment of the
benefits versus the harms of expanding our dependency on robots, but our view
is that more than this is needed and that a further analogy with aviation is also
relevant. We suggest that acceptance of air travel, despite its catastrophes, is in
part bound up with aviation governance, which has cultural and symbolic impor-
tance as well as practical outcomes. A crucial aspect of the former is rendering
the tragedy of disaster comprehensible through the process of investigation and
reconstruction.
The transfer of the black box concept into settings other than aviation is
not novel. It has been mooted for software [9] and micro-controllers [8]. The
largest deployment of black box technology outside aviation is within the au-
tomobile and road haulage industries for data logging [28, 21, 36], and perhaps
the most relevant to this paper is work to develop an in-vehicle data recorder
to study driver behaviour and hence better understand the reasons for common
car accidents [23].
3 Ethical Governance and Trust
Ethics, standards and regulation are connected. Standards formalise ethical prin-
ciples (e.g. [4]) into a structure which could be used either to evaluate the level of
compliance or, more usefully perhaps for ethical standards, to provide guidelines
for designers on how to conduct an ethical risk assessment for a given robot
and mitigate the risks so identified [6]. Ethics therefore underpin standards. But
standards also sometimes need teeth, i.e. regulation which mandates that sys-
tems are certified as compliant with standards, or parts of standards. Thus ethics
(or ethical principles) are linked to standards and regulation.
Although much existing law applies to robots and autonomous systems [26],
there is little doubt that rapidly emerging and highly disruptive technologies
such as drones, driverless cars and assistive robotics require – if not new law –
regulation and regulatory bodies at the very least.
Ethics and Standards both fit within a wider framework of Responsible Re-
search and Innovation (RRI) [27]. RRI provides a scaffold for ethics and stan-
dards, as shown in Figure 1. Responsible Innovation typically requires that re-
search is conducted ethically, so ethical governance connects RRI with ethics.
RRI also connects directly with ethics through principles of, for instance, pub-
lic engagement, open science and inclusivity. Another key principle of RRI is
the ability to systematically and transparently measure and compare system
capabilities, typically with standardised tests or benchmarks [14].
In general technology is trusted if it brings benefits while also being safe,
well regulated and – when accidents happen – subject to robust investigation.
One of the reasons we trust airliners, for example, is that we know they are
part of a highly regulated industry with an excellent safety record. The reason
commercial aircraft are so safe is not just good design, it is also the tough safety
certification processes and, when things do go wrong, robust and publicly visible
processes of air accident investigation.
Regulation requires regulatory bodies, linked with public engagement [32] to
provide transparency and confidence in the robustness of regulatory processes.
All of which supports the process of building public trust, as shown in Figure 1.

Figure 1. A framework of ethical governance building public trust, from [35]

Trust does not, however, always follow from (suggested) regulation. A recent
survey of decision making in driverless cars reveals ambivalent attitudes to both
preferences and regulation in driverless cars [5]: “... participants approved of
utilitarian Autonomous Vehicles (AVs) (that is, AVs that sacrifice their passengers
for the greater good) and would like others to buy them, but they would themselves
prefer to ride in AVs that protect their passengers at all costs. The study
participants disapprove of enforcing utilitarian regulations for AVs and would
be less willing to buy such an AV”.
4 Safety Critical Artificial Intelligence and Transparency
All machines, including robots, have the potential to cause harm. Responsibly
designed present day robots are engineered to be safe and to avoid unintended
harms, whether accidental or resulting from deliberate misuse; see e.g. [19] for
personal care robots.
The primary focus of this paper is robotics and autonomous systems, and not
software artificial intelligence. However, a reasonable definition of a modern robot
is ‘an embodied AI’ [33]; thus in considering the safety of robots we must also
concern ourselves with the AI controlling the robot. Three important classes of
robot are drones, driverless cars and assistive robots (including care or workplace
assistant robots); all will be controlled by an embedded AI of some appropriate
degree of sophistication. These are all safety critical systems, the safety of
which is fundamentally dependent on those embedded AIs – AIs which make
decisions that have real consequences for human safety or well-being. Let us
consider the issue of transparency, in particular two questions:
1. How can we trust the decisions made by AI systems and, more generally, how
can the public have confidence in the use of AI systems in decision making?
In [35] we argue that ethical governance is a necessary, but (probably) not
sufficient, element of building public trust.
2. If an AI system makes a decision that turns out to be disastrously wrong,
how do we investigate the process by which the decision was made? This
question essentially underlies the case for an ethical black box.
Transparency will necessarily mean different things to different stakeholders
– the transparency required by a safety certification agency or an accident inves-
tigator will clearly need to be different to that required by the system’s user or
operator. But an important underlying principle is that it should always be pos-
sible to find out why an autonomous system made a particular decision (most
especially if that decision has caused harm). A technology that would provide
such transparency, especially to accident investigators, would be the equivalent
of an aircraft flight data recorder (FDR). We call this an ethical black box, both
because aircraft FDRs are commonly referred to as black boxes, and because
such a device would be an integral and essential physical component supporting
the ethical governance of robots and robotic systems. Like its aviation counter-
part the ethical black box would continuously record sensor and relevant internal
status data so as to greatly facilitate (although not guarantee) the discovery of
why a robot made a particular decision or series of decisions – especially those
leading up to an accident. Ethical black boxes would need to be designed and
certified according to standard industry-wide specifications, although it is most
likely that each class of robot would have a different standard; one type for
driverless vehicles, one type for drones and so on.
4.1 An outline specification for an Ethical Black Box
All robots collect sense data, then – on the basis of that sense data and some
internal decision making process (AI) – send commands to actuators. This is of
course a simplification of what in practice will be a complex set of connected
systems and processes but, at an abstract level, all intelligent robots will have
the three major subsystems shown in blue in Fig. 2. If we consider a driverless
car, its sensors typically consist of a Light Detection and Ranging sensor
(LIDAR), camera, and a number of short range collision sensors together with
GPS, environmental sensors for rain, ambient temperature etc, and internal sys-
tem sensors for fuel level, engine temperature, etc. The actuators include the
car’s steering, accelerator and braking systems.
Figure 2. Robot sub-systems with an Ethical Black Box and key dataflows.

The Ethical Black Box (EBB) and its data flows, shown in red in Fig. 2,
will need to collect and store data from all three robot subsystems. From the
sensors the EBB will need to collect either sampled or compressed raw data,
alongside data on features that have been extracted by the sensor subsystem’s
post-processing (e.g. ‘vehicle ahead, estimated distance 100m’). From the AI
system the EBB will need, as a minimum, high level ‘state’ data, such as ‘braking’,
‘steering left’, ‘parking’ etc. and, ideally, also high level goals such as ‘turning
left at junction ...’ and alerts such as ‘cyclist detected front left - taking avoiding
action’. From the actuator system the EBB will need to collect actuator demands
(e.g. ‘steer left 10 degrees’) as well as the resulting effects (e.g. steering angles).
All of these data will need to be date and time stamped, alongside location data
from the GPS.
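Purely as an illustration of the kinds of records just described – not a proposed standard, and with all type names, field names and units being our own hypothetical choices – such timestamped, location-stamped EBB entries might be sketched as follows:

```python
# Illustrative sketch only: hypothetical record types for an EBB log.
# Names, fields and units are assumptions, not part of any proposed standard.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SensorRecord:
    timestamp: float                  # seconds since epoch (UTC)
    gps: Tuple[float, float]          # (latitude, longitude)
    raw_sample: bytes                 # sampled and/or compressed raw sensor data
    features: List[str] = field(default_factory=list)  # e.g. "vehicle ahead, est. 100m"

@dataclass
class DecisionRecord:
    timestamp: float
    gps: Tuple[float, float]
    state: str                        # high-level state, e.g. "braking"
    goal: str = ""                    # high-level goal, e.g. "turning left at junction"
    alerts: List[str] = field(default_factory=list)     # e.g. "cyclist detected front left"

@dataclass
class ActuatorRecord:
    timestamp: float
    gps: Tuple[float, float]
    demand: str                       # e.g. "steer left 10 degrees"
    effect: str                       # measured result, e.g. "steering angle 9.5 degrees"
```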
How much data could be stored in the EBB? Some reports have suggested
that Google’s driverless car generates 1GByte of raw data per second
(http://www.kurzweilai.net/googles-self-driving-car-gathers-nearly-1-gbsec). If we
(reasonably) assume that we can sample and/or compress this to say 100MByte
per second, then an EBB equipped with a 1TByte solid-state drive would allow
it to continuously log data for about 3 hours of operation. The EBB would, like
an aircraft flight data recorder, continuously overwrite the oldest data logs, so
that at any one time the EBB stores the most recent 3 hours. This would seem to
be sufficient given the need to record only the events leading up to an accident.
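The capacity arithmetic and the overwrite-oldest behaviour can be sketched as follows; this is a toy illustration under the paper's assumed figures (100 MByte/s compressed rate, 1 TByte of storage), not an implementation of any particular EBB:

```python
# Toy sketch of EBB capacity and overwrite-oldest (circular) logging.
from collections import deque

COMPRESSED_RATE_MB_S = 100          # assumed compressed data rate, MByte per second
CAPACITY_MB = 1_000_000             # a 1 TByte solid-state drive, in MByte

seconds_of_logging = CAPACITY_MB / COMPRESSED_RATE_MB_S      # = 10,000 s
print(f"{seconds_of_logging:.0f} s = {seconds_of_logging / 3600:.1f} hours")  # ~2.8 hours

# Like an aircraft FDR, the EBB overwrites its oldest data: a bounded circular
# buffer keeps only the most recent records.
MAX_RECORDS = int(seconds_of_logging)        # assuming, say, one stored record per second
log = deque(maxlen=MAX_RECORDS)

def append_record(record: bytes) -> None:
    """Append a record; once the buffer is full, the oldest record is discarded."""
    log.append(record)
```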
It is beyond the scope of this paper to specify exactly which data will need
to be recorded in the EBB. We can however be clear about what those data
are for: the key principle is that, from the data recorded in the EBB, it must
be possible to reconstruct the timeline leading up to and during an accident;
a timeline annotated with the key sensory inputs, actuator demands, and the
high-level goals, alerts and decisions that drove those actuator demands. A full
EBB specification will need to set out which data is to be recorded and, equally
important, the specification of the interface(s) between the robot subsystems and
the EBB. The interface specification must include the hardware (connectors),
signaling and protocols. Note that the dataflows between robot subsystems and
the EBB are one way only – the EBB must, as far as the robot is concerned, be
an entirely passive subsystem.
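To make concrete what reconstructing the timeline might involve, here is a hedged sketch: given records like those sketched earlier, an investigator's tool could simply merge the EBB's record streams and sort them by timestamp into one annotated event sequence. The function and variable names are hypothetical.

```python
# Illustrative sketch: merge EBB record streams into a single annotated timeline.
# Assumes records carry a .timestamp field, as in the record types sketched above.
from typing import Any, Iterable, List, Tuple

def reconstruct_timeline(*record_streams: Iterable[Any]) -> List[Tuple[float, str, Any]]:
    """Return events as (timestamp, record type, record), oldest event first."""
    events = [(r.timestamp, type(r).__name__, r)
              for stream in record_streams
              for r in stream]
    return sorted(events, key=lambda e: e[0])

# Usage (hypothetical):
# timeline = reconstruct_timeline(sensor_records, decision_records, actuator_records)
```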
The EBB specification must also cover the physical and data interface which
allows an accident investigator to access data logged by the EBB (shown at the
bottom of Fig. 2). There are many other aspects of the EBB that will need to be
specified. These include the physical form of the EBB, noting that, like its aviation
counterpart, it will need to be rugged enough to survive a very serious accident.
Some would argue that the EBB does not have to be a physical component
at all (see, for instance [20]) and that instead all data should be streamed to
secure cloud storage. We would however counter that the high volumes of data
(especially in an environment with a large number of, for instance, driverless
cars) could overwhelm the local wireless infrastructure. The EBB will of course
also need to be rugged, secure and tamper-proof.
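The paper does not prescribe how tamper-proofing would be achieved; one well-known technique, offered here purely as an illustration, is to chain log entries with cryptographic hashes so that any alteration of a stored record breaks every later hash:

```python
# Illustrative sketch (not from the paper): a hash-chained log makes tampering evident.
import hashlib
import json

def chained_entry(prev_hash: str, record: dict) -> dict:
    """Wrap a record with a hash linking it to the previous entry ("" for the first)."""
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"record": record, "prev_hash": prev_hash, "hash": entry_hash}

def verify_chain(entries: list) -> bool:
    """Return True only if every entry is intact and correctly chained to its predecessor."""
    prev = ""
    for e in entries:
        payload = json.dumps(e["record"], sort_keys=True)
        expected = hashlib.sha256((e["prev_hash"] + payload).encode()).hexdigest()
        if e["prev_hash"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True
```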
As suggested here the EBB does not record the internal low-level decision
making processes of the embedded AI (just as an aircraft FDR does not record
the low-level processes within the vehicle’s autopilot). The reason for this is
that different robot AIs are likely to have very different internal architectures:
some may be algorithmic, others based on artificial neural networks (ANNs).
We should however note that in the case of robots which learn or adapt then
any control parameters that change during the robot’s operation (such as con-
nection weights in an ANN) will need to be periodically saved in the EBB, thus
enabling a deep investigation of accidents in which the robot’s learning systems
are implicated. Finding a common specification for capturing the low-level deci-
sion making processes for all embedded AIs is therefore likely to be impossible.
Instead the transparency argued for in this paper is not achieved by the EBB
alone but through the processes of accident investigation, as outlined in section
5 below.
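For robots that learn, the periodic saving of changed control parameters mentioned above might, purely as a sketch, look like the following; the snapshot interval and serialisation format are our assumptions, and a real EBB specification would fix both:

```python
# Sketch: periodically snapshot learned parameters (e.g. ANN connection weights)
# into the EBB so that post-accident analysis can see what the robot had learned.
import pickle
import time

SNAPSHOT_INTERVAL_S = 60.0      # assumed interval; a real specification would fix this

_last_snapshot = 0.0

def maybe_snapshot_parameters(ebb_log, weights) -> None:
    """Append a timestamped copy of the current learned parameters to the EBB log."""
    global _last_snapshot
    now = time.time()
    if now - _last_snapshot >= SNAPSHOT_INTERVAL_S:
        ebb_log.append({"timestamp": now,
                        "kind": "parameter_snapshot",
                        "weights": pickle.dumps(weights)})
        _last_snapshot = now
```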
To what extent is an EBB based on the outline specification above a prac-
tical proposition? We have suggested that recording the most recent 3 hours of
data from a robot’s sensing, AI and actuation sub-systems is achievable with
current solid-state disk (SSD) technology. The most computationally intensive
process in the EBB is likely to be data compression which – given that real-time
video compression is commonplace in smart phones [29] – suggests that similar
computing resources would be adequate for an EBB. The overall power con-
sumption of an EBB is however likely to be significantly less than a smart phone
given that the most power-hungry sub-system of a mobile phone is its wireless
interface [12]. The size and mass of an EBB will be determined primarily by its
strong tamper-proof enclosure and connectors. A unit size comparable to that
of a ruggedised external HDD (approx. 15 x 10 x 5 cm) seems both achievable
and appropriate for mounting in either a driverless car or mobile service robot.
An EBB for a lightweight flying robot will clearly be more challenging to engi-
neer, but given that flying robots require wireless interfaces, either a wirelessly
connected external EBB, or a cloud-based (‘glass’) EBB [20] might prove a more
practical solution.
It is most unlikely that there could be one standard EBB for all robots, or
indeed one standard specification for all EBBs. The most successful data transfer
standards are either based on a common core specification which has sufficient
flexibility that it can be extended to allow for manufacturer or device-specific
data, or a foundational standard which can then be extended with a family of
related standards. An example of the former is the Musical Instrument Digi-
tal Interface (MIDI) with its manufacturer specified System Exclusive (SysEx)
messages [22]. The latter is best illustrated by the IEEE 802 family of local area
network standards, of which IEEE 802.11 (WiFi) is undoubtedly the most no-
table [16]. The extent to which a common core specification for an EBB interface
is possible is an open question, although we would argue that for the class of
mobile robots there is sufficient commonality of function that such a core spec-
ification is possible. We are unaware of efforts to develop such a specification,
although one current standards effort that is aiming to define testable levels of
transparency in autonomous systems is IEEE Standards Working Group P7001
(https://standards.ieee.org/develop/project/7001.html).
4.2 Ethical Black Boxes for Moral Machines
This paper makes the case that all safety critical robots should be equipped with
an ethical black box. Consider now robots that are explicit moral agents. It is
clear that some near future autonomous systems, most notably driverless cars, are
by default moral agents. Both driverless cars and assistive (e.g. care) robots make
decisions with ethical consequences, even if those robots have not been designed
to explicitly embed ethical values and moderate their choices according to those
values. Arguably all autonomous systems implicitly reflect the values of their
designers or, even more worryingly, training data sets (as dramatically shown in
AI systems that demonstrate human biases [11]).
There is a growing consensus that near future robots will, as a minimum,
need to be designed to explicitly reflect the ethical and cultural norms of their
users and societies [6, 18]. Beyond reflecting values in their design a logical (but
difficult) next step is to provide robots with an ethical governor. That is, a
process which allows a robot to evaluate the consequences of its (or others’)
actions and modify its own actions according to a set of ethical rules. Developing
practical ethical governors remains the subject of basic research and presents two
high level challenges: (1) the philosophical problem of the formalisation of ethics
in a format that lends itself to machine implementation and (2) the engineering
problem of the implementation of moral reasoning in autonomous systems [15].
There are two approaches to addressing the second of these challenges [1]:
1. a constraint-based approach – explicitly constraining the actions of an AI
system in accordance with moral norms; and
2. a training approach – training the AI system to recognise and correctly
respond to morally challenging situations.
The training approach is developed for an assistive robot in [2], while examples
of constraint-based approaches are explored in [3,34]. One advantage of the
constraint-based approach is that it lends itself to verification [13].
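As a minimal, hypothetical sketch of the constraint-based approach (not drawn from [3] or [34]): candidate actions are filtered against explicit rules before one is selected, which is part of what makes the approach amenable to verification.

```python
# Minimal sketch of a constraint-based filter: candidate actions are checked
# against explicit rules before execution. Rule content is purely illustrative.
from typing import Callable, Iterable, List

Rule = Callable[[str], bool]     # a rule returns True if the action is permissible

def permissible_actions(candidates: Iterable[str], rules: List[Rule]) -> List[str]:
    """Keep only those candidate actions that satisfy every constraint."""
    return [action for action in candidates if all(rule(action) for rule in rules)]

# Illustrative rule: never select an action tagged as endangering a human.
no_harm: Rule = lambda action: "endangers_human" not in action
safe = permissible_actions(["proceed", "overtake:endangers_human"], [no_harm])  # ["proceed"]
```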
Extending the EBB outlined in this paper for a robot equipped with an
ethical governor would in principle be straightforward: the EBB would need to
additionally log the decisions made by the ethical governor so that an accident
investigator could take those into account in building a picture of the processes
that led to the accident.
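As a sketch of the extension just described (the record structure is hypothetical), the EBB log could simply gain one further record type for the governor's decisions:

```python
# Sketch: an additional EBB record type logging ethical governor decisions.
from dataclasses import dataclass
from typing import List

@dataclass
class EthicalGovernorRecord:
    timestamp: float
    candidate_actions: List[str]      # the actions the robot considered
    evaluations: List[str]            # the governor's assessment of each candidate
    selected_action: str              # the action finally permitted or chosen
    rule_applied: str = ""            # which ethical rule or constraint was decisive
```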
5 The Processes of Robot Accident Investigation
In aviation it is the investigation, not the black box data per se, which establishes
why an air accident occurred. We anticipate this will also be true for accidents
involving robots, where an investigation will draw upon EBB data amongst other
information to determine the reason for an accident. So alongside the technical
parameters of what to record within the EBB, we have also to consider how the
interpretation of those data fits into the process of an investigation. Air accident
investigations are social processes of reconstruction that need to be perceived as
impartial and robust, and which (we argue) serve as a form of closure so that
aviation does not acquire an enduring taint in the public’s consciousness. We
anticipate very similar roles for investigations into robot accidents.
Taking the example of driverless cars, an accident investigation would bring
together data and evidence from a variety of sources. The extant local context
will undoubtedly play a part in tracing the causes of the accident. While the EBB
is witness to events that often would otherwise remain unwitnessed, the activities
of driverless cars are likely to take place in populated spaces where there may be
many witnesses, some of whom will record, share and publish details of the event
via mobile phones and other devices, creating multiple perspectives on what may
have happened. Thus, there may be bystanders, pedestrians, passengers or other
drivers who will have particular views on the event and may have captured
the accident as it unfolded in a variety of ways. Traditional police forensics will
investigate the scene in the conventional way, examining evidence such as skid
marks in the road or the impact of the crash on other objects (walls, cars) or
people (for an example of the analysis of crash data see [36]). This raises the
question of how the interpretation of EBB data sits alongside the interpretation
of evidence from other witnesses, and brings to the fore consideration of the
epistemological status of different types of witnessing.
As a further key part of the interdependent network of responsibilities in this
case, the car’s manufacturer and possibly also maintainer are likely to be called
to provide input. This will be particularly important if initial investigation points
to some incorrect internal low-level decision making process in the car’s AI as
the likely cause of the accident. Indeed companies may be required to release
the data sets used to train the algorithms that are driving these systems when
accidents occur; a requirement that may conflict with a corporation’s desire to
gain competitive advantage from such data.
An obvious concern is how conclusions might be reached where the data from
these different types of witnessing is in conflict. It is clear that a specialist team
with deep expertise in driverless car technology, perhaps within the umbrella
of the road traffic investigation agency, will be needed to carefully weigh these
data and reach conclusions about why the driverless car behaved the way it did,
and make recommendations. But without doubt the data provided by the EBB
sits at the very centre of the process of accident investigation, providing the
crucial and objective timeline against which all other witness accounts can be
superposed.
6 Concluding discussion
The recent Tesla crash report [24] blames human error, often cited as a primary
cause or contributing factor in disasters and accidents as diverse as nuclear
power, space exploration and medicine. But in this case, we echo Jack Stilgoe’s
statement in the Guardian [30] that this conclusion and the process through
which it was reached is a missed opportunity both to learn from such incidents
about the systemic properties of such autonomous systems in the wild, and also
to initiate a more transparent and accountable process for accident investiga-
tion. As reported, after the US National Highway Traffic Safety Administration
(NHTSA) published their report, much of the media coverage concentrated upon
the fact that no products were recalled, thereby prioritising the business value
for Tesla. But as Stilgoe recounts, “As new technologies emerge into the world,
it is vital for governments to open them up, look at their moving parts and
decide how to realise their potential while guarding against their risks.” Human
error is a common refrain and yet, “To blame individual users of technology is to
overlook the systemic issues that, with aeroplanes, have forced huge safety im-
provements. When an aeroplane crashes, regulators’ priority must be to gather
data and learn from it. User error should be seen as a function of poorly-designed
systems rather than human idiocy.”
Responsibility within RRI is framed as a collective accomplishment which
seems well matched to the interdependent systemic properties in the design, de-
velopment and use of robots and autonomous systems. Whilst the EBB we pro-
pose in this paper seems to focus on determining accountability once an accident
has happened, it is actually an outcome of reflections on applying anticipatory
governance to these robot technologies. Accidents will be inevitable. Within that
governance, it is vital to consider what bodies, processes and stakeholders need
to be in place to determine a just account of the reasons for the accident, draw-
ing upon the EBB record as one piece of evidence amongst others. Whilst the
process itself may seem simple, it will be complicated by local contexts and the
nature of the accident, political climates, legal actions, international differences
and corporate concerns.
Acknowledgments
This work has, in part, been supported by EPSRC grant ref EP/L024861/1. We
are also grateful to the anonymous reviewers for their insightful comments.
References
1. Allen, C., Smit, I. and Wallach, W.: Artificial morality: Top-down, bottom-up, and
hybrid approaches, Ethics and Information Technology 7, 149-155 (2005).
2. Anderson, M. and Anderson, S.L.: GenEth: A General Ethical Dilemma Analyzer,
in Proc. Twenty-Eighth AAAI Conference on Artificial Intelligence, 253-261 (2014).
3. Arkin, R.C., Ulam, P. and Wagner, A.R.: Moral Decision Making in Autonomous
Systems: Enforcement, Moral Emotions, Dignity, Trust, and Deception. Proc. IEEE
100(3), 571-589 (2012).
4. Boden, M., Bryson, J., Caldwell, D., Dautenhahn, K., Edwards, L., Kember, S.,
Newman, P., Parry, V., Pegman, G., Rodden, T., Sorrell, T., Wallis, M., Whitby,
B. and Winfield, A.F.: Principles of Robotics, Connection Science 29 (2), 124-129
(2017).
5. Bonnefon, J-F., Shariff, A. and Rahwan, I.: The social dilemma of autonomous
vehicles, Science 352(6293), 1573-1576 (2016).
6. British Standards Institute: BS8611:2016 Robots and robotic devices: guide to the
ethical design and application of robots and robotic systems, ISBN 9780580895302,
BSI London, (2016).
7. Campbell, N.: The Evolution of Flight Data Analysis, In Proc. Australian Society
of Air Safety Investigators (2007). http://asasi.org/papers/2007/The_Evolution_
of_Flight_Data_Analysis_Neil_Campbell.pdf
8. Choudhuri, S. and Givargis, T.: FlashBox: a system for logging non-deterministic
events in deployed embedded systems. In Proc. 2009 ACM symposium on Applied
Computing (SAC ’09), 1676-1682 (2009).
9. Elbaum, S. and Munson, J.C.: Software Black Box: an alternative mechanism for
failure analysis, Proc. 11th Int. Symposium on Software Reliability Engineering, IS-
SRE 2000, 365-376 (2000).
10. Hibbard, B.: Ethical Artificial Intelligence, arXiv.org:1411.1373v9 (2014).
11. Caliskan-Islam, A., Bryson, J. and Narayanan, A.: Semantics derived automatically
from language corpora necessarily contain human biases, arXiv:1608.07187v2 (2016).
12. Carroll, A. and Heiser, G.: An Analysis of Power Consumption in a Smartphone,
In Proc. 2010 USENIX annual technical conference, Boston, (June 2010).
13. Dennis, L.A., Fisher, M., Slavkovik, M. and Webster, M.: Formal Verification of
Ethical Choices in Autonomous Systems, Robotics and Autonomous Systems, 77,
1-14 (2016).
14. Dillmann, R.: Benchmarks for Robotics Research, EURON, April 2004. http://
www.cas.kth.se/euron/euron-deliverables/ka1-10-benchmarking.pdf
15. Fisher, M., List, C., Slavkovik, M. and Winfield, A.F.: Engineering Moral Machines,
Informatik-Spektrum, Springer, (2016).
16. Gibson, R.W.: IEEE 802 standards efforts, Computer Networks and ISDN Systems,
19(2), 95-104 (1990).
17. Grossi, D.R.: Aviation recorder overview, National Transportation Safety Board
[NTSB] Journal of Accident Investigation, 2(1), 31-42 (2006).
18. IEEE: Global initiative on Ethical Considerations in the Design of Artificial In-
telligence and Autonomous Systems, (2016) http://standards.ieee.org/develop/
indconn/ec/autonomous_systems.html.
19. ISO 13482:2014 Robots and robotic devices – Safety requirements for personal care
robots, (2014) http://www.iso.org/iso/catalogue_detail.htm?csnumber=53820
20. Kavi, K.M.: Beyond the Black Box, IEEE Spectrum, Posted 30 Jul 2010, http:
//spectrum.ieee.org/aerospace/aviation/beyond-the-black-box
21. Menig, P. and Coverdill, C.: Transportation Recorders on Commercial Vehicles, In
Proc. 1999 International Symposium on Transportation Recorders, (1999).
22. Moog, R. A.: MIDI: Musical Instrument Digital Interface, Journal of the Audio
Engineering Society 34(5), 394-404 (1986).
23. Pérez, A., García, M.I., Nieto, M., Pedraza, J.L., Rodríguez, S. and Zamorano, J.:
Argos: An Advanced In-Vehicle Data Recorder on a Massively Sensorized Vehicle for
Car Driver Behavior Experimentation, IEEE Trans. on Intelligent Transportation
Systems, 11(2), 463-473, (2010).
24. National Highway Traffic Safety Administration: Investigation Report PE 16-007
(2017) https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF.
25. Vlasic, B. and Boudette, N.E.: Self-Driving Tesla Was Involved in Fatal Crash,
U.S. Says, New York Times, 30 June 2016.
26. Palmerini, E., Azzarri, F., Battaglia, A., Bertolini, A., Carnevale, A., Carpaneto,
J., Cavallo, F., Di Carlo, A., Cempini, M., Controzzi, M., Koops, B.J., Lu-
civero, F., Mukerji, N., Nocco, L., Pirni, A., Shah, H., Salvini, P., Schellekens,
M. and Warwick, K.: D6.2 Guidelines on regulating robotics, Robolaw
project (2014), http://www.robolaw.eu/RoboLaw_files/documents/robolaw_d6.
2_guidelinesregulatingrobotics_20140922.pdf.
27. The Rome Declaration on Responsible Research and Innovation (2014)
http://www.science-and-you.com/en/sis-rri-conference-recommendations-rome-declaration-responsible-research-and-innovation
28. Thom, P.R. and MacCarley, C.A.: A Spy Under the Hood: Controlling Risk and
Automotive EDR, Risk Management Magazine 55(2), 22-26 (2008).
29. Sharabayko, M.P. and Markov, N.G.: H.264/AVC Video Compression on Smart-
phones, Journal of Physics: Conference Series, 803(1) (2017).
30. Stilgoe, J.: Tesla crash report blames human error - this is a missed opportu-
nity, The Guardian, 21 January 2017, https://www.theguardian.com/science/
political-science/2017/jan/21/tesla-crash-report-blames-human-error-this-is-a-missed-opportunity
31. Vladeck, D.C.: Machines without Principals: Liability Rules and Artificial Intelli-
gence. In: Wash. L. Rev. 89:117, 117-150 (2014).
32. Wilsdon, J. and Willis, R.: See-through science: Why public engagement needs to
move upstream, DEMOS (2004).
33. Winfield, A.F.: Robotics: A very short introduction, Oxford University Press
(2012).
34. Winfield, A.F., Blum, C. and Liu, W.: Towards an ethical robot: Internal models,
consequences and ethical action selection. In: Mistry, M., Leonardis, A., Witkowski,
M. and Melhuish, C., eds. Advances in Autonomous Robotics Systems, LNCS 8717,
85-96, Springer (2014).
35. Winfield, A.F.: Written evidence submitted to the UK Parliamentary Select Com-
mittee on Science and Technology Inquiry on Robotics and Artificial Intelligence,
Discussion Paper, Science and Technology Committee (Commons) (2016).
36. Worrell, M.: Analysis of Bruntingthorpe crash test data, Impact: The Journal of
The Institute of Traffic Accident Investigators, 21 (1), 4-10 (2016).