Robot Accident Investigation: a case study in
Responsible Robotics
Alan F.T. Winfield, Katie Winkle, Helena Webb, Ulrik Lyngs, Marina Jirotka and
Carl Macrae
Abstract Robot accidents are inevitable. Although rare, they have been happening since assembly-line robots were first introduced in the 1960s. But a new generation of social robots is now becoming commonplace. Often equipped with sophisticated embedded artificial intelligence (AI), social robots might be deployed as care robots to assist elderly or disabled people to live independently. Smart robot toys offer a compelling interactive play experience for children, and increasingly capable autonomous vehicles (AVs) promise hands-free personal transport and fully autonomous taxis. Unlike industrial robots, which are deployed in safety cages, social robots are designed to operate in human environments and interact closely with humans; the likelihood of robot accidents is therefore much greater for social robots than industrial robots. This paper sets out a draft framework for social robot accident investigation; a framework which proposes both the technology and processes that would allow social robot accidents to be investigated with no less rigour than we expect of air or rail accident investigations. The paper also places accident investigation within the practice of responsible robotics, and makes the case that social robotics without accident investigation would be no less irresponsible than aviation without air accident investigation.
A Winfield, K Winkle
Bristol Robotics Lab, UWE Bristol, UK. e-mail: alan.winfield@brl.ac.uk, katie.winkle@brl.ac.uk
H Webb, U Lyngs, M Jirotka
Department of Computer Science, University of Oxford, UK. e-mail: helena.webb@cs.ox.ac.uk, ulrik.lyngs@cs.ox.ac.uk, marina.jirotka@cs.ox.ac.uk
C Macrae
Nottingham University Business School, University of Nottingham, UK. e-mail: Carl.Macrae@nottingham.ac.uk
1 Introduction
What could possibly go wrong?
Imagine that your elderly mother, or grandmother, has an assisted living robot to help her live independently at home. The robot is capable of fetching her drinks, reminding her to take her medicine and keeping in touch with family. Then one afternoon you get a call from a neighbour who has called round and sees your grandmother collapsed on the floor. When the paramedics arrive they find the robot wandering around apparently aimlessly. One of its functions is to call for help if your grandmother falls or stops moving, but it seems that the robot failed to do this.

Fortunately your grandmother makes a full recovery. Not surprisingly you want to know what happened: did the robot cause the accident? Or maybe it didn't but made matters worse, and why did it fail to raise the alarm?
Although this is a fictional scenario it could happen today. If it did, we would be totally reliant on the goodwill of the robot manufacturer to discover what went wrong. Even then we might not get the answers we seek; it is entirely possible that neither the robot nor the company that made it is equipped with the tools and processes to undertake an investigation.
Robot accidents are inevitable. Although rare, they have been happening since assembly line robots were first introduced in the 1960s. First wave robots include factory robots (multi-axis manipulators), autonomous guided vehicles (AGVs) deployed in warehouses, Remotely Operated Vehicles (ROVs) for undersea exploration and maintenance, teleoperated robots for bomb disposal, and (perhaps surprisingly) Mars rovers for planetary exploration. A defining characteristic of first wave robots is that they are designed for jobs that are dull, dirty or dangerous; these robots are typically either dangerous to be around (and therefore enclosed in safety cages on assembly lines), or deployed in dangerous or inaccessible environments.
In contrast, second wave robots are designed to operate in human environments and interact directly with people. Those human environments include homes, offices, hospitals, shopping malls, and city or urban streets and – unlike first wave robots – many are designed to be used by untrained, naive or vulnerable users, including children and the elderly. These are robots in society, and hence social robots1. Often equipped with sophisticated embedded artificial intelligence (AI), social robots might be deployed as care robots to assist elderly or disabled people to live independently. Smart robot toys offer a compelling interactive play experience for children and increasingly capable autonomous vehicles (AVs) the promise of hands-free personal transport and fully autonomous taxis.

1 Noting that we take a broader view of social robotics than usual.
Social robots by definition work with and alongside people in human environments, thus the likelihood and scope of robot accidents are much greater than with industrial robots. This is not just because of the close proximity of social robots and their users (and perhaps also bystanders); it is also because of the kinds of roles such robots are designed to fulfill, and further exacerbated by the unpredictability of people and the unstructured, dynamic nature of human environments.
Given the inevitability of social robot accidents it is perhaps surprising that no frameworks or processes of social robot accident investigation exist. This paper addresses that deficit by setting out a draft framework for social robot accident investigation; a framework which proposes both the technology and processes that would allow social robot accidents to be investigated with no less rigour than we expect of air or rail accident investigations.
This paper proceeds as follows. We first survey the current practices and frameworks for accident investigation, including in transport (air, rail and road) and in healthcare, in section 2. In section 3 we survey robot accidents, including both industrial and social robot accidents, then analyse the scope for social robot accidents in order to understand why social robot accidents are more likely (per robot deployed) than industrial robot accidents. Section 4 then places accident investigation within the practice of responsible robotics, by defining responsible robotics within the broader practice of Responsible Innovation; the section also briefly surveys the emerging practices of values-driven design and ethical standards in robotics. Section 5 sets out both the technology and processes of our draft framework for social robot accident investigation, then illustrates the application of this framework by setting out how an investigation of our fictional accident might play out. Finally, in section 6, we conclude by outlining work currently underway within project RoboTIPS to develop and validate our framework with real-robot mock accident scenarios, before considering some key questions about who would investigate real-world social robot accidents.
2 The practice of Accident Investigation
Investigating accidents and incidents is a routine and widespread activity in safety-critical industries such as aviation, the railways and healthcare. In healthcare, for instance, over 2 million safety incidents are reported across the English National Health Service each year, with around 50,000 of those incidents causing moderate to severe harm or death [26] – such as medication errors or wrong-site surgery. Accident investigation also has a long history. In aviation, the world's first national air accident investigation agency was established over a century ago in 1915 [7], and the international conventions governing air accident investigation were agreed in the middle of the last century [3]. The practices and methods of accident investigation are therefore well-established in many sectors and have been developed and refined over many years. As such, when considering a framework for the investigation of social robot accidents, it is instructive to examine how accident investigation is conducted in other safety-critical domains.
First, it is important to emphasise that the core principle and fundamental purpose of accident investigation is learning: while investigations primarily focus on examining events that have occurred in the past, the core purpose of an accident investigation is to improve safety in the future. Accident investigations are therefore organised around three key questions [23]. The first is factual: what actually happened? The second is explanatory: why did those things happen and what does that reveal about weaknesses in the design and operation of a particular system? The third is practical: how can systems be improved to prevent similar events happening in future? The ultimate objective for any accident investigation is to develop practical, achievable and targeted recommendations for making safety improvements.
Conducting an accident investigation involves collecting and analysing a range of evidence and data from a variety of sources to understand what happened and why. This can include quantitative data, such as information from 'black box' Flight Data Recorders on aircraft that automatically collect data on thousands of flight parameters. It can also include qualitative data in the form of organisational documentation and policies, and in-depth interviews with witnesses or those directly or indirectly involved in an accident such as a pilot or a maintenance engineer. Accident investigators therefore need to be experts in the methods and processes of accident investigation, and also need to have a deep and broad understanding of the industry and the technologies involved. However, accident investigations are also typically collaborative processes that are conducted by a diverse team of investigators who, in turn, draw on specific sources of expertise where required [20].
A variety of methods have been developed to assist in the collection and analysis of safety data, from cognitive interviewing techniques [13] to detailed human factors methods [36] and organisational and sociotechnical systems analysis techniques [37]. Importantly, to understand why an accident has occurred and to determine how safety can be improved in future, accident investigations typically apply a systemic model of safety that helps identify, organise and explain the factors involved and how they are interconnected. One of the most widely applied and practical accident models is the organisational accident framework – commonly referred to as the Swiss Cheese model [33]. This has been adapted and applied in various ways [2], but at core it provides a simple framework that conceptualises system safety as dependent on multiple layers of risk controls and safety defences that span from the front-line to organisational and regulatory levels – such as redundant systems, emergency alarms, operator training, management decisions or regulatory requirements. Each of these safety defences will inevitably be partial or weak in some way, and accidents occur when the holes in these defences all line up in some unexpected way – thus the eponymous image of multiple slices of Swiss Cheese, each slice full of holes. The focus of accident investigations is to examine the entire sociotechnical system to understand the safety defences, the weaknesses in those defences, why those weaknesses arose and how they can be addressed. Similar premises underpin a variety of different accident investigation models, methods and tools.
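To make the layered-defences idea concrete, the following minimal sketch (in Python) simulates hazards passing through several defence layers; the layer names and failure probabilities are invented for illustration and are not drawn from any of the systems or studies discussed here.

```python
import random

# Hypothetical defence layers and the chance each one fails to stop a hazard
# (names and numbers are illustrative only, not taken from any real system).
DEFENCES = {
    "interlock/safety sensor": 0.05,
    "emergency alarm": 0.10,
    "operator training": 0.20,
    "management oversight": 0.30,
}

def hazard_becomes_accident() -> bool:
    """An accident occurs only if the 'holes' in every layer line up."""
    return all(random.random() < p_fail for p_fail in DEFENCES.values())

trials = 100_000
accidents = sum(hazard_becomes_accident() for _ in range(trials))
print(f"Estimated accident rate per hazard: {accidents / trials:.5f}")
```

Under these assumed numbers only a tiny fraction of hazards becomes an accident, which is the intuition behind examining the whole set of defences rather than searching for a single 'cause'.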
Safety-critical industries have developed sophisticated systems to support these activities of investigation and learning at all levels. These range from lengthy investigations of major accidents that are coordinated by national investigative bodies to relatively rapid local-level investigations of more minor incidents or near-miss events that are conducted by investigators within individual organisations [21]. A lot of media attention understandably focuses on the investigations into high-profile accidents that are conducted by national investigation bodies, such as the US National Transportation Safety Board's investigations of the various accidents involving semi-automated Tesla vehicles [4] and Uber's autonomous test vehicle [5]. However, much of the more routine work of investigation actually occurs within individual organisations, like airlines and hospitals, which regularly conduct hundreds or thousands of investigations each year. These local-level investigations examine more minor safety incidents as well as near-miss events – where there was no adverse outcome but some sort of safety-relevant failure was detected, such as a poorly specified maintenance procedure in an airline leading to a technical failure that causes a rejected take-off [21]. Local-level investigations employ similar methods and approaches to those conducted at a national level, but are often much more rapid, lasting days or weeks rather than months and years. They are also typically limited to examining sources of risk within one single organisation, unlike national-level investigations which can consider regulatory factors and interactions between various different organisations across an industry.
At all these levels of investigative activity, accident and incident investigation is always focused on learning. One of the main implications of this focus on learning is that accident investigation activities are typically entirely separated from other investigative activities that seek to attribute blame or determine liability. In aviation, for example, the information collected during major accident investigations may only be used for the purposes of safety investigation and improvement – and may not be used, for instance, to punish individuals or pursue legal claims against organisations (EU 2010). This is also the case within individual organisations in safety-critical industries, where it is common to have a safety department or team that is entirely separated from operational processes or production pressures and whose sole focus is monitoring safety, investigating incidents and supporting improvement [22]. The information that is collected by these safety teams is usually only used for safety improvement purposes and purposefully not used for line management or disciplinary processes, to ensure that staff feel they can openly and honestly provide safety-relevant information to support ongoing efforts to investigate and understand failures and continuously improve safety.
3 Robot Accidents
Robert Williams is believed to be the first person killed by a robot in an industrial accident, in January 1979, at a Ford Motor Company casting plant2. Given that there are over 2.4M industrial robots in use worldwide [28], it is surprisingly difficult to find statistics on robot accidents. The US Department of Labor's Occupational Safety and Health Administration (OSHA) maintains a web page listing industrial accidents since 1984, and a search with the keyword 'robot' returns records of 43 accidents3; all resulted in injuries, of which 29 were fatal. The US National Institute for Occupational Safety and Health (NIOSH) found 61 robot-related deaths between 1992 and 2015, noting that "These fatalities will likely increase over time because of the increasing number of conventional industrial robots being used by companies in the United States, and from the introduction of collaborative and co-existing robots, powered exoskeletons, and autonomous vehicles into the work environment."4
Finding data on accidents involving robots which interact with humans (HRI) is also difficult. One study on the safety of interactive industrial robots in Finland [24] notes that "International accident-report data related to robots is scarce". The study reports that the majority of the 6000 robots in Finland are "used in small robot work cells rather than large automation lines" and that "a natural feature of the production is that it requires substantial (mainly simple) HRI inside the robot working envelope". The study analyses a total of 25 severe robot or manipulator-related accidents between 1989 and 2006, of which 3 were fatal, noting also that most of the accidents occurred toward the end of this period. Key characteristics of these accidents were:
"The cause of an accident is usually a sequence of events, which have been difficult to foresee.
Operator inattentiveness and forgetfulness may lead them to misjudge a situation and even a normal robot movement may surprise them.
Most of the accidents involving robots occurred so that (the) robot moved unexpectedly (from worker's point of view) against the worker within the robot working area.
Inadequate safeguarding featured significantly as a cause of accidents.
Many accidents are associated with troubleshooting disturbances. And only about 20% of accidents occurred during undisturbed automated runs." [24]
2 https://en.wikipedia.org/wiki/Robert_Williams_(robot_fatality)
3 https://www.osha.gov/pls/imis/AccidentSearch.search?acc_keyword=%22Robot%22&keyword_list=on
4 https://www.cdc.gov/niosh/topics/robotics/aboutthecenter.html

Although now somewhat dated, Chapter 4 'Robot Accidents' of [11] sets out a comprehensive analysis of industrial robot accidents, including data from accidents in Japan, Western Europe and the United States. Noting that "human behavior plays an important role in certain robot accidents", the paper outlines a set of typical human behaviours that might result in injury. A number of these are especially instructive to the present study:
"Humans often incorrectly read, or fail to see, instructions and labels on various products.
Many people carry out most assigned tasks while thinking about other things.
In most cases humans use their hands to test or examine.
Many humans are unable to estimate speed, distance, or clearances very well. In fact, humans underestimate large distances and overestimate short distances.
Many humans fail to take the time necessary to observe safety precautions.
A sizeable portion of humans become complacent after a long successful exposure to dangerous items.
There is only a small percentage of humans which recognize the fact that they cannot see well enough, either due to poor illumination or poor eyesight." [11]
Finding data on non-industrial robot accidents is even more difficult, but one study reports on adverse events in robotic surgery in the US [1]; the paper surveys 10,624 reports collected by the Food and Drug Administration between 2000 and 2013, showing 8061 device malfunctions, 1391 patient injuries and 144 deaths, during a total of more than 1.75M procedures. Some robot accidents make headline news, for example car accidents in which semi-automated driver-assist functions are implicated; a total of six 'self-driving car fatalities' have been reported since January 20165. We only know of one accident in which a child's leg was bruised by a shopping mall security robot because it was reported in the national press6.

5 https://en.wikipedia.org/wiki/List_of_self-driving_car_fatalities
6 https://www.wsj.com/articles/security-robot-suspended-after-colliding-with-a-toddler-1468446311
A recent study in human-robot interaction examines several serious accidents, in aviation, the nuclear industry, and autonomous vehicles, in an effort to understand "the potential mismatches that can occur between control and responsibility in automated systems" and argues that these mismatches "create moral crumple zones, in which human operators take on the blame for errors or accidents not entirely in their control" [12].
3.1 The Scope for Social Robot Accidents
As we have outlined above, industrial and surgical robot accidents are – thankfully – rare events. It is most unlikely that social robot accidents will be so rare. There are several factors that lead us to make this forecast:
1. Social robots, as broadly defined in this paper, are designed to be integrated into society at large: in urban and city environments, schools, hospitals, shopping malls, and in both care and private homes. Unlike industrial robots operating behind safety cages, social robots are designed to be among us, up close and personal. It is this close proximity that undoubtedly presents the greatest risk.
2. Operators of industrial robots are required to undertake training courses, both on how to operate their robots and on the robot's safety features, and are expected to follow strict safety protocols. In contrast, social robots are designed to benefit naive users – including children, the elderly, and vulnerable or disabled people – with little more preparation than might be expected to operate a vacuum cleaner or dishwasher.
3. Industrial robots typically operate in an environment optimised for them and not humans. Social robots have no such advantage: humans (especially young or vulnerable humans) are unpredictable and human environments (from a robot's perspective) are messy, unstructured and constantly changing. Designing robots capable of operating safely in human environments remains a serious challenge.
4. The range of serious harms that can arise from social robots is much greater than those associated with industrial robots. Social robots can, like their industrial counterparts, cause physical injury, but the responsible roboticist should be equally concerned about the potential for psychological harms, such as deception (i.e. a user coming to believe that a robot cares for them), over-trust or over-reliance on the robot, or violations of privacy or dignity. A number of ethical hazards are outlined in section 4 below7.
5. The scope of social robot accidents is thus enlarged. The nature of the roles social robots will play in our lives – supporting elderly people to live independently or helping the development of children with autism for instance [8] – opens up a range of ethical risks and vulnerabilities that have hitherto not been a concern of either robot designers or accident investigators. This factor also increases the likelihood of social robot accidents.
If we are right and the near future brings an increasing number of social robot accidents, these accidents will need to be investigated in order to address the three key questions of accident investigation outlined in section 2. What happened? Why did it happen? And what must we change to ensure it doesn't happen again? [23].
4 Responsible Robotics
In essence, Responsible Robotics is the application of Responsible Innovation (RI) to the field of robotics, so we first need to define RI. A straightforward definition of RI is: a set of good practices for ensuring that research and innovation benefits society and the environment. There are several frameworks for RI. One is the EPSRC AREA framework8, built on the four pillars of Anticipation (of potential impacts and risks), Reflection (on the purposes of, motivations for and potential implications of the research), Engagement (to open up dialogue with stakeholders beyond the narrow research community), and Action (to use the three processes of anticipation, reflection and engagement, to influence the direction and trajectory of the research) [30]. The more broadly framed Rome Declaration on Responsible Research and Innovation9 is built around the six pillars of open access, governance, ethics, science communication, public engagement and gender equality10.

7 An ethical hazard is a possible source of ethical harm.
8 https://epsrc.ukri.org/research/framework/area/
We define Responsible Robotics as follows:
Responsible Robotics is the application of Responsible Innovation in the design, manufacture, operation, repair and end-of-life recycling of robots, that seeks the most benefit to society and the least harm to the environment.
Robot ethics – which is concerned with the ethical impact of robots, on individuals, society and the environment, and how any negative impacts can be mitigated – and ethical governance both have an important role in the practice of responsible robotics. In recent years many sets of ethical principles have been proposed in robotics and AI – for a comprehensive survey see [17] – but one of the earliest is the EPSRC Principles of Robotics, first published online in 201111 [6]. In [41] we set out a framework for ethical governance which links ethical principles to standards and regulation and argue that when such processes are robust and transparent, trust in robotics (or, to be precise, the individuals, companies and institutions designing, operating and regulating robots) will grow.
Responsible social robotics [38] is the practice of responsible robotics in society, with a particular ambition of creating robots which bring real benefits in both quality of life and safeguarding to the most vulnerable, within a framework of values-based design founded upon a deep consideration of the ethical risks of social robots. There are a number of approaches and methods available to the responsible roboticist, including emerging new ethical standards, and an approach called ethically aligned design, which we will now review.
4.1 Ethically Aligned Design
In April 2016, the IEEE Standards Association launched a Global Initiative on the Ethics of Autonomous and Intelligent Systems12. The initiative positions human well-being as its central tenet. The initiative's mission is "to ensure every stakeholder involved in the design and development of autonomous and intelligent systems (AIS) is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity".

9 https://ec.europa.eu/research/swafs/pdf/rome_declaration_RRI_final_21_November.pdf
10 http://ec.europa.eu/research/science-society/document_library/pdf_06/responsible-research-and-innovation-leaflet_en.pdf
11 https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
12 https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_about_us.pdf
The first major output from the IEEE global ethics initiative is a document called Ethically Aligned Design (EAD) [16]. Developed through an iterative process over 3 years, EAD is built on the three pillars of Universal Human Values, Political self-determination & Data Agency, and Dependability, and eight general (ethical) principles covering Human Rights, Well-being, Data Agency, Effectiveness, Transparency, Accountability, Awareness of Misuse and Competence, and sets out more than 150 issues and recommendations across its 10 chapters. In essence EAD is both a manifesto and roadmap for a values-driven approach to the design of autonomous and intelligent systems. Spiekermann and Winkler [35] detail the process of ethically aligned design within a broader methodological framework they call value-based engineering for ethics by design. It is clear that responsible social robotics must be values-based.
4.2 Standards in Social Robotics
A foundational standard in social robotics is ISO 13482:2014 Safety requirements for personal care robots. ISO 13482 covers mobile servant robots, physical assistant robots, and person carrier robots, but not robot toys or medical robots [29]. The standard sets out a number of safety (but not ethical) hazards, including hazards relating to battery charging, robot motion, incorrect autonomous decisions and actions, and lack of awareness of robots by humans.
A new generation of explicitly ethical standards is now emerging [39, 27]. Standards are simply "consensus-based agreed-upon ways of doing things" [9]. Although all standards embody a principle or value, explicitly ethical standards address clearly articulated ethical concerns and – through their application – seek to remove, reduce or highlight the potential for unethical impacts or their consequences [39].
The IEEE ethics initiative outlined above has initiated a total of 13 working groups to date, each tasked with drafting a new standard within the 7000 series of so-called 'human standards'. The first of these to reach publication is IEEE 7010-2020 Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being13.

Almost certainly the world's first explicitly ethical standard in robotics is BS8611-2016 Guide to the ethical design and application of robots and robotic systems [10]. "BS8611 is not a code of practice, but instead guidance on how designers can undertake an ethical risk14 assessment of their robot or system, and mitigate any ethical risks so identified. At its heart is a set of 20 distinct ethical hazards and risks, grouped under four categories: societal, application, commercial and financial, and environmental. Advice on measures to mitigate the impact of each risk is given, along with suggestions on how such measures might be verified or validated" [39]. Societal hazards include, for example, anthropomorphisation, loss of trust, deception, infringements of privacy and confidentiality, addiction, and loss of employment, to which we should add the Uncanny Valley [25], weak security, lack of transparency (for instance the lack of data logs needed to investigate accidents), unrepairability and unrecyclability. Ethical risk assessment is a powerful and essential addition to the responsible roboticist's toolkit, as it can be thought of as the opposite face of accident investigation, seeking – at design time – to prevent risks becoming accidents.

13 https://standards.ieee.org/standard/7010-2020.html
14 An ethical risk is a possible ethical harm arising from exposure to a hazard.
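As an illustration of how a design team might keep an ethical risk register in the spirit of BS 8611, here is a minimal sketch; the two hazards are taken from the list above, but the fields, the 1-5 likelihood/severity scales and the mitigations are our own assumptions rather than anything specified by the standard.

```python
from dataclasses import dataclass

@dataclass
class EthicalRisk:
    hazard: str          # e.g. one of the ethical hazards named in BS 8611
    category: str        # societal, application, commercial/financial, environmental
    likelihood: int      # 1 (rare) to 5 (almost certain), an assumed scale
    severity: int        # 1 (negligible) to 5 (severe), an assumed scale
    mitigation: str      # design-time measure
    validation: str      # how the mitigation will be verified or validated

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

register = [
    EthicalRisk("deception (user believes the robot cares for them)",
                "societal", 3, 3,
                "avoid affect-simulating dialogue; explain robot limitations to users",
                "user-comprehension study during trials"),
    EthicalRisk("lack of transparency (no data logs for accident investigation)",
                "societal", 4, 4,
                "fit an ethical black box recording sensor, actuator and decision data",
                "audit of recorded logs against the EBB specification"),
]

# Review the register with the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:2d}] {risk.hazard} -> {risk.mitigation}")
```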
5 A Draft Framework for Social Robot Accident Investigation
We now set out a framework for social robot accident investigation, outlining first the technology, then the process, followed by an illustration of how the framework might be applied.
5.1 Technology
We have previously proposed that all robots should be equipped with the equivalent of an aircraft Flight Data Recorder (FDR) to continuously record sensor inputs, actuator outputs and relevant internal status data [40]. We call this an ethical black box15 (EBB), and argue that the EBB will play an important role in the process of discovering how and why a robot caused an accident.

15 Because it would be unethical not to have one.
All robots collect sense data, then – on the basis of that sense data and some internal decision making process (the embedded Artificial Intelligence) – send commands to actuators. This is of course a simplification of what in practice will be a complex set of connected systems and processes but, at an abstract level, all intelligent robots will have the three major subsystems shown in blue in Fig. 1. Our EBB will have much in common with its aviation and automotive counterparts, the flight data recorder [15] and event data recorder (EDR) [14], in particular: data is stored securely in a rugged unit designed to survive accidents; stored records are time and date stamped; and storage capacity is limited and organised such that only the most recent data are saved – overwriting older records (an FDR typically records 17-25 hours of data, an automobile EDR as little as 30 seconds).
Fig. 1 Robot sub-systems with an Ethical Black Box and key dataflows

The technical specification for an EBB for a social robot is beyond the scope of this paper. It is, however, important to outline here the kinds of data we should expect to be recorded in the EBB. Consider the Pepper robot, as an exemplar of an advanced social robot [31]. The Pepper is equipped with 25 sensors, including four microphones, two colour cameras, two accelerometers and various proximal and distal sensors. It has 20 motors, a chest mounted touch display pad, loudspeakers for speech output, and WiFi connectivity. We would therefore expect an EBB designed for the Pepper robot to be able to record:
1. Records of input sense data, including sampled (and compressed) camera images, audio sampled by the microphones, accelerometers, and touch screen touches;
2. Actuator demands generated by the robot's control system along with sampled position of joints, from which we can deduce the robot's pose;
3. Synthesised speech and touch screen display outputs;
4. Periodically sampled battery levels;
5. Periodically sampled status of the robot's WiFi and Internet connectivity; and
6. The robot's position (i.e. x,y coordinates) within its working environment (noting that this data might be available from a tracking system in a 'smart' environment, or if not, deduced via odometry from the robot's known start position at, say, its re-charging station and subsequent movements).
The EBB should also record high level decisions generated by the robot's AI (see the data flow in Fig. 1), and – given that social robots are likely to accept speech commands – we would, ideally, be able to record both the raw audio recorded by the microphones and the text sequence produced by the robot's speech recogniser.
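To give a concrete, if simplified, picture of what such an EBB might look like in software, the sketch below defines a time-stamped record covering the data items listed above and a fixed-capacity store that overwrites the oldest entries, as an FDR does. The field names, capacity and API are our assumptions for illustration, not a proposed specification.

```python
import time
from collections import deque
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EBBRecord:
    """One time-stamped snapshot of the robot's inputs, outputs and decisions."""
    timestamp: float
    camera_frame_id: Optional[str] = None      # reference to a compressed image sample
    audio_chunk_id: Optional[str] = None       # reference to sampled microphone audio
    touch_events: list = field(default_factory=list)
    joint_positions: dict = field(default_factory=dict)   # sampled pose
    actuator_demands: dict = field(default_factory=dict)
    speech_output: Optional[str] = None
    battery_level: Optional[float] = None
    wifi_connected: Optional[bool] = None
    position_xy: Optional[tuple] = None        # from smart-home tracking or odometry
    ai_decision: Optional[str] = None          # high-level decision from the robot's AI

class EthicalBlackBox:
    """Fixed-capacity store: the newest record displaces the oldest (cf. an FDR)."""
    def __init__(self, capacity: int = 100_000):
        self._records = deque(maxlen=capacity)

    def log(self, record: EBBRecord) -> None:
        self._records.append(record)

    def window(self, start: float, end: float) -> list:
        """Return records in a time window, e.g. the hours around an accident."""
        return [r for r in self._records if start <= r.timestamp <= end]

# Example: log a snapshot in which the robot notices that WiFi has dropped.
ebb = EthicalBlackBox()
ebb.log(EBBRecord(timestamp=time.time(), wifi_connected=False,
                  ai_decision="request help: cannot reach home connection hub"))
```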
5.2 Process
Conventionally accident investigation is performed in four stages: (1) information gathering, (2) fact analysis, (3) conclusion drawing and – if necessary – (4) implementation of remedies. Stage (2) fact analysis might typically take the form of causal analysis, and we propose to adopt here the method of why-because analysis developed by Ladkin et al. [18, 19, 34].
Why-Because Analysis (WBA) is a method of causal analysis of complex socio-technical systems, and hence well suited to social robot accidents. WBA lends itself to a simple quality check: whether one event is a causal factor in another can be determined by applying a counter-factual test. The primary output of WBA is a Why-Because Graph (WBG), and a further advantage of WBA is that – if necessary – the correctness of the WBG can be formally proven. Fig. 2 shows, in flow chart form, the process of why-because analysis.
Let us elaborate briefly on some of the steps in Fig. 2. 'Gather information' requires collecting all available information on the accident. This will include witness testimony, records from the EBB, any forensic evidence, and contextual information on the setting of the accident. The next stage, 'determine facts', requires the investigation team to sift the information and glean the essential facts. Noting that witness testimony can be unreliable, any causal inferences from that testimony should ideally be corroborated with forensic evidence, so that – even if not absolutely certain – the team can have high confidence in those inferences. The third stage, 'create why-because list', links the facts of events – including both things that happened and things that should have happened but did not (unevents) – to outcomes. If they give the team a clear picture of the sequence of events and participants in the accident then the team agree on the 'mishap topnode(s)' of the why-because graph, i.e. the accident – or perhaps multiple accidents. Then the why-because graph is created, top-down. This is likely also to require several iterations and – if necessary – quality checking using counter-factual tests or formal proof of completeness. For a complete explanation of the method refer to the introduction to WBA in [34].
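The counter-factual quality check can be illustrated in code. The sketch below represents a small why-because graph as a mapping from each node to its causal factors and asks whether an effect could still have occurred had a candidate cause not occurred; the toy graph, the joint-necessity reading of a node's factors, and the function itself are our own illustration of the idea, not Ladkin et al.'s tooling or notation.

```python
# A toy why-because graph: each node maps to the factors it was caused by.
# Nodes may be events, states or 'unevents' (things that should have happened but did not).
# Here we read a node's listed factors as jointly necessary for that node (an assumption).
wbg = {
    "mishap: alarm not raised": ["fall not detected", "calls for help not heard"],
    "fall not detected": ["no data from smart-home sensors"],
    "no data from smart-home sensors": ["WiFi link to home hub failed"],
    "calls for help not heard": ["robot did not know user's location"],
    "robot did not know user's location": ["WiFi link to home hub failed"],
    "WiFi link to home hub failed": [],   # a root factor
}

def counterfactual_test(graph: dict, cause: str, effect: str) -> bool:
    """Counter-factual test: had `cause` not occurred, could `effect` still have occurred?
    Returns True if `cause` is a necessary causal factor of `effect` under the
    joint-necessity reading above."""
    def still_occurs(node: str) -> bool:
        if node == cause:
            return False                  # the candidate cause is removed
        factors = graph.get(node, [])
        if not factors:
            return True                   # a root factor that was not removed still occurs
        return all(still_occurs(f) for f in factors)
    return not still_occurs(effect)

# Was the failed hub link a causal factor in the alarm not being raised?
print(counterfactual_test(wbg, "WiFi link to home hub failed",
                          "mishap: alarm not raised"))    # True in this toy graph
```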
Fig. 2 An overview of Why-Because Analysis (adapted from Ladkin, 2005)

5.3 The application of the framework

To understand how this framework would operate in practice, we return to the fictional scenario outlined at the start of the paper. As described in the scenario, an elderly lady Rose has a fall in her home. She is found, still on the floor, some time later by her neighbour, Barbara. Barbara calls for medical help as well as alerting Rose's family. Whilst Rose receives hospital treatment, an investigation team is formed, who begin to collect evidence for the investigation. Photos of Rose's flat are taken and information about her home set up is collected; for instance, Rose lives in a smart home with various sensors to detect her movements and communicate with the robot as necessary. Preliminary observation of the robot also reveals details about its design and use in the home. The robot can fetch drinks, provide reminders (such as for Rose to take medication) and act as an interface so that Rose can control her television and other devices through it with voice commands. The robot will also respond to commands sent via a smartphone/tablet application.
Barbara, the paramedics and Rose herself are interviewed to provide witness statements. These statements combine with the initial observations to provide important early findings that shape the ongoing investigation. Of key importance are the following: i) Rose didn't put on her fall alert bracelet on the day of the accident, and ii) from her condition (as observed by the paramedics) it seems that after her fall Rose was too weak to call out to the robot if it was more than two metres away from her.
In her witness testimony Rose states that she had climbed on a chair and was reaching for something in an upper cupboard in her kitchen but then slipped and fell on the floor. She has no memory of what happened after that and does not recall where the robot was when she fell. Barbara states that she doesn't recall seeing the robot when she entered the flat, and feels that the shock of finding Rose on the floor meant she didn't notice much else around her. The paramedic states that he noticed the robot moving about around the flat – between the living area and the kitchen. It didn't seem to be moving to accomplish any particular action, so he described the robot as acting aimlessly.
The investigation team gather further information. They interview the manager of the retirement complex that Rose lives in; she provides details of the organisational structure of the complex including the infrastructure that enables residents to set up their homes as smart homes and have assistance robots. They also talk to others who saw Rose on the day of her accident. The last person to see Rose before her fall was Michelle, a cleaner who works for the retirement complex. Michelle saw Rose whilst she was in Rose's flat for an hour to do her regular weekly clean. Michelle reported that Rose seemed very cheerful and chatty, and did not complain of feeling ill or mention any worries or concerns. Michelle said that she did her usual clean in its usual order as requested by the retirement complex: collect up rubbish to take outside; wipe bathroom surfaces and floor; wipe kitchen work surfaces and clean floor; polish wooden surfaces; hoover carpeted areas; disinfectant wipes on door handles and all over the robot for infection control. When asked by the investigation team she said she thought the robot was in the living room for most of the time she was there but she couldn't really remember. She didn't notice anything unusual about what the robot was doing.
The team also gets a report from Rose's General Practitioner about her overall health status prior to the accident. This states that Rose had some limited mobility and needed to take daily medication to help her arthritis. She had also recently complained of forgetting things more and more easily. However, she was generally healthy for her age and had no acute concerns.
Finally, the team extracts data from the Ethical Black Box. These are in CSV format, and contain timestamped information regarding (i) the location and status of the robot and Rose/others within the apartment, (ii) actions undertaken by the robot and (iii) sampled records of all other robot inputs/outputs over the previous 24 hour period. This enables the team to conclude that the robot lost connection to the central 'home connection hub' intermittently over the past 24 hours, coinciding with Rose's fall. In addition, processing of the camera feed and other sensors used for navigation appears to have been producing erroneous data. The records showed no log of Rose's fall, but did log that the robot made a number of 'requests for help' – by speaking out loud – regarding its inability to connect to the home connection hub.
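To illustrate one way the investigators might interrogate such an extract, the sketch below scans hypothetical CSV rows for periods in which the robot had no connection to the home hub; the column names, values and file layout are invented for the example, since the EBB's actual CSV schema is not specified here.

```python
import csv
import io

# Hypothetical extract: timestamp (s), robot x/y, hub_connected flag, robot action.
sample = """timestamp,x,y,hub_connected,action
1589540000,2.1,3.4,1,idle
1589540060,2.1,3.4,0,announce: cannot reach home connection hub
1589540120,4.7,1.2,0,wander
1589540180,4.8,1.3,1,idle
"""

def connectivity_dropouts(rows):
    """Yield (start, end) timestamps of periods with no connection to the home hub."""
    start = None
    last_t = None
    for row in rows:
        t, connected = float(row["timestamp"]), row["hub_connected"] == "1"
        if not connected and start is None:
            start = t                      # dropout begins
        elif connected and start is not None:
            yield (start, t)               # dropout ends when connection returns
            start = None
        last_t = t
    if start is not None:
        yield (start, last_t)              # dropout still open at end of the extract

rows = csv.DictReader(io.StringIO(sample))
for begin, end in connectivity_dropouts(rows):
    print(f"Hub connection lost from t={begin:.0f} to t={end:.0f}")
```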
Having collected and analysed all of the material, the investigation team identify key relevant factors. At the individual level, certain actions by Rose – forgetting to put on her fall alert bracelet and reaching to a high cupboard – certainly increased the safety risk. Aspects of the local environment are likely to have also contributed to this risk and influenced the technical problems that occurred – for instance the repeated disinfecting of the robot, as required by the retirement complex, has almost certainly impaired its sensors. The robot's response to losing connection to the home hub, i.e. asking for help, was clearly not effective in getting the problem addressed, most likely because Rose did not understand the problem.
Concerning the robot's standard functionalities, it failed to detect Rose's fall and therefore to raise an alert following the fall. The robot's fall detection system relies, in part, on data collected by distributed sensors placed around the smart home. This data is delivered to the robot via the home connection hub, so the intermittent connectivity issues prevented the robot's fall detection functionality from operating as intended. The team make use of these key facts to construct the why-because graph shown in Fig. 3.

Fig. 3 Why-because graph for Rose's accident
The first thing to note in Fig. 3 is that there are two mishap topnodes and hence two sub-graphs. On the left is the sub-graph showing the direct causal chain that resulted in Rose's fall (Accident 1), following her unwise decision to try and reach a high cupboard. The sub-graph on the right shows the chain of factors – processes, states and unevents (things that should have happened but didn't) – that together led to Accident 2: the robot failing to raise the alarm.
The sub-graph leading to Accident 2 shows that the four ways in which the robot might have raised the alarm all failed, for different reasons. The first (a) is that the robot was too far away from Rose to hear her calls for help (most likely because the failure of its connection with the home hub meant that the robot didn't know where she was). The second (b) is that the robot's sensors that should have been able to detect her fall (together with data from the smart environment) were damaged, almost certainly by cleaning with disinfectant, and the third (d) was the failure of the wireless communication between the robot and home hub, which meant there was no data from the home's smart sensors. A fourth reason (c) is due to two factors: (i) Rose had forgotten to put on her fall alarm bracelet, but (ii) even if she had been wearing it the bracelet would have been ineffective as it too communicates with the robot via the home hub. The failure of communication between the robot and home hub is particularly serious because, as the graph shows, even if the first two pathways (a) and (b) to the robot's fall detection system had operated correctly the robot would still not have been able to raise the alarm, indicated by path (e). To use the Swiss cheese metaphor from section 2, over-reliance on communication with the home hub represents a set of holes in the layers of safety which all line up.
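As a reading aid for the sub-graph just described, the sketch below simply encodes the four defeated alarm pathways (a)-(d) and checks how many of them depend on the home connection hub; it is an illustrative restatement of the prose description of Fig. 3, not the formally checked why-because graph itself.

```python
# Pathways by which the robot could have raised the alarm, as described above,
# each with the factor(s) that defeated it (labels follow the text, (a)-(d)).
failed_pathways = {
    "(a) hear Rose's calls for help": [
        "robot too far away (did not know Rose's location: home hub link down)"],
    "(b) detect the fall with on-board sensors": [
        "sensors impaired, almost certainly by repeated disinfectant cleaning"],
    "(c) receive an alert from the fall bracelet": [
        "bracelet not worn that day",
        "bracelet alert routed via the home hub, whose link to the robot had failed"],
    "(d) receive fall data from the smart-home sensors": [
        "WiFi link between robot and home hub failed"],
}

print("Mishap: robot failed to raise the alarm (Accident 2)\n")
for pathway, factors in failed_pathways.items():
    print(pathway)
    for f in factors:
        print(f"   defeated by: {f}")

# How many pathways share the home hub as a single point of failure?
hub_dependent = [p for p, fs in failed_pathways.items()
                 if any("hub" in f or "WiFi" in f for f in fs)]
print(f"\nPathways depending on the home connection hub: {len(hub_dependent)} of 4")
```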
The key conclusions from this analysis are that (i) the robot did not cause Rose's accident, (ii) the robot failed to raise the alarm following Rose's fall – one of its primary safeguarding functions, and (iii) failures and shortcomings of the smart home's infrastructure contributed significantly to the robot's failure. The robot's failure might have had very severe consequences had a neighbour not called upon Rose and raised the alarm.
As a consequence of their investigation the team are able to make a set of recommendations to prevent similar accidents happening in future. These recommendations are, in order of priority:
1. Equip the robot with a backup communications system, in case the WiFi fails. A recommended approach might, for instance, be to integrate a module allowing the robot to send text or data messages via a 3G GSM connection to the public cellular network.
2. Equally important: if the robot detects a WiFi connectivity failure it should not alert its user (Rose) but instead send an alert to a maintenance engineer via its backup communication system.
3. Equip the home hub with the ability to send an emergency call directly – via a landline for instance – when the fall bracelet is triggered, so that this particular alarm is not routed via the robot.
4. Improve the sensitivity of the robot's microphones to increase their range.
5. Add a new function to the robot so that it reminds Rose to put on her fall bracelet every day.
6. Advise the cleaner not to use disinfectants on the robot.
6 Concluding discussion
RoboTIPS
The work of this chapter is part of the five-year programme RoboTIPS: Responsible Robots for the Digital Economy16. RoboTIPS has several themes, two of which are of relevance to this paper. The first is the technical specification and prototyping of the EBB, including the development of model EBBs which will be used as the basis of implementations and trials by project industrial partners. There will be two model EBBs, one a hardware implementation and the other a software module, and their specifications, code and designs will be published as open source in order to support and encourage others to build EBBs into their robots. Ultimately we would like to see industry standards emerge for EBBs, noting that we would need separate specifications for different domains: one EBB standard for AVs, another for healthcare robots, a third for robot toys/educational robots, and so on.
Second, we are designing and running three staged (mock) accidents, and we anticipate one in the application domain of assisted living robots, one in educational (toy) robots, and another for autonomous vehicles. We believe this to be the world's first experimental investigation of accident investigation in the field of social robots. For each of these staged scenarios we will be using real robots and will invite human volunteers to act in three roles, as
1. subject(s) of the accident,
2. witnesses to the accident, and as
3. members of the accident investigation team.
One key aim of these staged accidents is to trial, develop and refine the framework for social robot accident investigation outlined in this paper.
Thus we aim to develop and demonstrate both technologies and processes (and ultimately policy recommendations) for effective social robot accident investigation. And as the whole project is conducted within the framework of Responsible Research and Innovation it is a case study in Responsible Robotics.

16 https://www.robotips.co.uk/
The Bigger Picture
There are two important details that we omitted from the accident scenario outlined in section 1, then developed in section 5.3. The first is who needs to be on a robot accident investigation team. And the second – perhaps the more fundamental question – is: who do you call upon to investigate a social robot accident?
Concerning the makeup of a social robot accident investigation team, if we follow best practice it would be a multi-disciplinary team. One report, for instance, described multi-disciplinary teams formed to investigate sleep-related fatal vehicle accidents as "consisting of a police officer, a road engineer, a traffic engineer, a physician, and in certain cases a psychologist" [32]. Such teams did not require the involvement of vehicle manufacturers, but more recent fatal accidents involving AVs have needed to engage the manufacturer, to provide both expertise on the AV's autopilot technology and access to essential data from the vehicle's proprietary data logger [4]. Robot accident investigations will similarly need to call upon the assistance of robot manufacturers, both to provide data logs and advice on the robot's operation. We would therefore expect social robot accident investigation teams to consist of (i) an independent lead investigator with experience of accident investigation, (ii) an independent expert on human-robot interaction, (iii) an independent expert on robot hardware and software, (iv) a senior manager from the environment in which the accident took place, and (v) one of the robot manufacturer's senior engineers. Depending on the context of the accident the team might additionally need, for instance, a (child) psychologist or senior health-care specialist.
Consider now the question: who do you call when there has been a robot accident?17 At present there is no social robot equivalent of the UK Air Accidents Investigation Branch18, or Healthcare Safety Investigation Branch (HSIB)19. A serious AV accident would of course be attended by a police road traffic accident unit, although they would almost certainly encounter difficulties getting to the bottom of failures of the vehicle's autopilot AI. The US National Transportation Safety Board20 (NTSB) is the only investigation branch known to have experience of AV accidents, having investigated five to date (it is notable that the NTSB is the same agency responsible for air accident investigation in the US, and thus able to bring that considerable experience to bear on AV accidents).
For assisted living robots deployed in a care home setting, such as in our example scenario, accidents could be reported to the Care Quality Commission21 (CQC) – the regulator of health and social care in England – and/or the Health and Safety Executive22 (HSE), since care homes are also workplaces. Again it is doubtful that either the CQC or HSE would have the expertise needed to investigate accidents involving robots. Accidents involving workplace assistant robots, or robots used in schools – including near misses – would certainly need to be reported to the HSE. It is clear that as the use of social robots in society increases, regulators such as the CQC and HSE will need to create robot accident investigation branches, as would the HSIB for surgical or healthcare robots. Even more urgent is the need to record all such accidents – again including near misses – so that we have, at the least, statistics on the number and type of such accidents.

17 After the paramedics, that is.
18 https://www.gov.uk/government/organisations/air-accidents-investigation-branch
19 https://www.hsib.org.uk/
20 https://www.ntsb.gov/Pages/default.aspx
21 https://www.cqc.org.uk/
22 https://www.hse.gov.uk/
Until such mechanisms exist, or for robot accidents in settings that fall outside those outlined here, the only recourse we would have is to contact the robot's manufacturer, thus underlining the importance of clear labelling of the robot's make and model alongside contact details for the manufacturer of the robot itself23. Even if the robot and its manufacturer do not yet have data logging technologies (such as the EBB) or processes for accident investigation in place, we would hope that they would take accidents seriously. A responsible manufacturer would both investigate the accident – drawing in outside expertise where needed – and effect remedies to correct faults. Ideally social robot manufacturers would adopt the data sharing philosophy that has proven so effective in aviation safety, summed up by the motto "anybody's accident is everybody's accident".

23 See EPSRC Principle of Robotics #5 in [6].
Acknowledgements The work of this chapter has been conducted within EPSRC project RoboTIPS, grant reference EP/S005099/1 RoboTIPS: Developing Responsible Robots for the Digital Economy.
References
1. H. Alemzadeh, J. Raman, N. Leveson, Z. Kalbarczyk, and R. K. Iyer. Adverse events in robotic surgery: A retrospective study of 14 years of FDA data. PLoS ONE, 11:4, 2016.
2. ATSB. Analysis, causality and proof in safety investigations. Technical report, Canberra:
Australian Transport Safety Bureau., 2007.
3. International Civil Aviation Authority. Annex 13 to the Convention on International Civil
Aviation, Aircraft Accident and Incident Investigation. ICAO, Montreal, 2007.
4. National Transportation Safety Board. Collision Between a Car Operating With Automated
Vehicle Control Systems and a Tractor-Semitrailer Truck Near Williston, Florida, 7 May 2016.
Washington, 2016.
5. National Transportation Safety Board. Preliminary report for crash involving pedestrian.
Washington, 2018.
6. M. Boden, J. Bryson, D. Caldwell, K. Dautenhahn, L. Edwards, S. Kember, P. Newman, V. Parry, G. Pegman, T. Rodden, T. Sorrell, M. Wallis, B. Whitby, and A. Winfield. Principles of robotics: regulating robots in the real world. Connect. Sci, 29:124–129, 2017.
7. Air Accident Investigation Branch. AAIB Centenary Conference. 2015.
8. E. Broadbent. Interactions with robots: The truths we reveal about ourselves. Annual Review
of Psychology, 2017(68-1):627–652, 2017.
9. J. Bryson and A. Winfield. Standardizing ethical design for artificial intelligence and au-
tonomous systems. Computer, 50(5):116–119, 2017.
10. BSI. BS8611:2016 Robots and robotic devices, Guide to the ethical design and application of
robots and robotic systems. British Standards Institute, 2016.
11. B. S. Dhillon. Robot accidents. In Robot Reliability and Safety. Springer, New York, NY,
1991.
12. M. C. Elish. Moral crumple zones: Cautionary tales in human-robot interaction. Engaging
Science, Technology, and Society, 5:40–60, 2019.
13. R. P. Fisher and R. E. Geiselman. Memory enhancing techniques for investigative interview-
ing: The cognitive interview. Springfield, 1992.
14. H. Gabler, C. Hampton, and J. Hinch. Crash severity: A comparison of event data recorder
measurements with accident reconstruction estimates. SAE Technical Paper 2004-01-1194,
2004.
15. D. R. Grossi. Aviation recorder overview, national transportation safety board [NTSB]. J.
Accid. Investig., 2(1):31–42, 2006.
16. IEEE. The IEEE global initiative on ethics of autonomous and intelligent systems. ethically
aligned design: A vision for prioritizing human well-being with autonomous and intelligent
systems, first edition. Technical report, IEEE, 2019.
17. A. Jobin, M. Ienca, and E. Vayena. The global landscape of AI ethics guidelines. Nat Mach
Intell, 1:389–399, 2019.
18. P. B. Ladkin. Causal System Analysis, 2001.
19. P. B. Ladkin, J. Sanders, and T. Paul-Stueve. The WBA Workbook. 2005.
20. C. Macrae. Making risks visible: Identifying and interpreting threats to airline flight safety.
Journal of Occupational and Organizational Psychology, 82(2):273–293, 2010.
21. C. Macrae. Close Calls: Managing Risk and Resilience in Airline Flight Safety. Palgrave,
London, 2014.
22. C. Macrae. The problem with incident reporting. BMJ Quality and Safety, 25(2):71–75, 2016.
23. C. Macrae and C. Vincent. Investigating for improvement: Building a national safety investi-
gator for healthcare. Clinical Human Factors Group thought paper. Technical report, 2017.
24. T. Malm, J. Viitaniemi, J. Latokartano, et al. Safety of interactive robotics - learning from
accidents. Int J of Soc Robotics, 2:221–227, 2010.
25. R. Moore. A Bayesian explanation of the ‘uncanny valley’ effect and related psychological
phenomena. Sci Rep, 2:864, 2012.
26. NHS. NaPSIR quarterly data summary April–June 2019. Technical report, NHS, 2019.
27. C. O’Donovan. Explicitly ethical standards for robotics. Working paper for the international
symposium ‘Post-automation: Democratic alternatives to Industry 4.0’, SPRU – Science Policy
Research Unit, University of Sussex, 11–13 September 2019. Technical report, 2020.
28. International Federation of Robotics (IFR). Executive Summary World Robotics 2019 Indus-
trial Robots, 2019.
29. International Organization for Standardization. ISO 13482:2014 Robots and robotic devices –
Safety requirements for personal care robots. ISO, 2014.
30. R. Owen. The UK Engineering and Physical Sciences Research Council’s commitment to
a framework for responsible innovation. Journal of Responsible Innovation, 1(1):113–117,
2014.
31. A. K. Pandey and R. Gelin. A mass-produced sociable humanoid robot: Pepper: The first
machine of its kind. IEEE Robotics & Automation Magazine, 25(3):40–48, 2018.
32. I. Radun and H. Summala. Sleep-related fatal vehicle accidents: Characteristics of decisions
made by multidisciplinary investigation teams. Sleep, 27(2):224–227, 2004.
33. J. T. Reason. Managing the risks of organisational accidents. Ashgate, Aldershot, 1997.
34. J. Sanders. Introduction to Why Because Analysis, 2012.
35. S. Spiekermann and T. Winkler. Value-based Engineering for Ethics by Design, 2020.
36. N. A. Stanton, P. M. Salmon, L. A. Rafferty, G. H. Walker, C. Baber, and D. P. Jenkins. Human
factors methods: A practical guide for engineering and design. Routledge, London, 2013.
37. P. Underwood and P. Waterson. Systems thinking, the Swiss cheese model and accident anal-
ysis: A comparative systemic analysis of the Grayrigg train derailment using the ATSB, Ac-
ciMap and STAMP models. Accident Analysis and Prevention, 68:75–94, 2014.
38. H. Webb, M. Jirotka, A. F. Winfield, and K. Winkle. Human-robot relationships and the
development of responsible social robots. In Proc. of Halfway to the Future Symposium 2019
(HTTF 2019), pages 1–7, NY, USA, 2019. Association for Computing Machinery. Article 12.
39. A. Winfield. Ethical standards in robotics and AI. Nature Electronics, 2(2):46–48, 2019.
40. A. F. Winfield and M. Jirotka. The case for an ethical black box. In Y. Gao et al., editors,
Towards Autonomous Robotic Systems (TAROS 2017), Lecture Notes in Computer Science Vol.
10454, pages 262–273. Springer, Cham, 2017.
41. A. F. Winfield and M. Jirotka. Ethical governance is essential to building trust in robotics and
artificial intelligence systems. Phil. Trans. R. Soc. A, 376:20180085, 2018.