PREPRINT
Risk Assessment for Human-Robot Collaboration
in an automated warehouse scenario
Rafia Inam, Klaus Raizer, Alberto Hata, Ricardo Souza, Elena Fersman,
Enyu Cao∗‡, Shaolei Wang∗‡
Ericsson Research, {firstname.lastname}@ericsson.com
Royal Institute of Technology (KTH), {caoe, shaolei}@kth.se
Abstract—Collaborative robotics is recently taking an ever-
increasing role in modern industrial environments like man-
ufacturing, warehouses, mining, agriculture and others. This
trend introduces a number of advantages, such as increased
productivity and efficiency, but also new issues, such as new
risks and hazards due to the elimination of barriers between
humans and robots. In this paper we present a risk assessment for
an automated warehouse use case in which mobile robots and
humans collaborate in a shared workspace to deliver products
from the shelves to the conveyor belts. We define
specific human roles, perform a risk assessment of human-
robot collaboration in these scenarios, and identify a list of
hazards using Hazard Operability (HAZOP) analysis. Further, we present
safety recommendations that will be used in the risk reduction
phase. We develop a simulated warehouse environment using
the V-REP simulator. The robots use cameras for perception and
dynamically generate scene graphs for semantic representations
of their surroundings. We present our initial results on the
generated scene graphs. This representation will be employed
in the risk assessment process to enable the use of contextual
information of the robot’s perceived environment, which will be
further used during risk evaluation and mitigation phases and
then on robots’ actuation when needed.
Keywords-Collaborative robotics, safe human-robot collabora-
tion, warehouse, safety, risk assessment, safety standards, ISO/TS
15066, hazard identification, HAZOP.
I. INTRODUCTION
Robots play a vital role in manufacturing, warehouses
and industry by performing operations in a shorter
time and in a more precise way than humans.
However, there are some tasks where humans cannot be replaced
due to the tasks' complexity. Collaborative robotics, in which
robots and humans work together to accomplish common
tasks, demands additional safety requirements as compared
to traditional safeguarding measures [1]. The autonomous
operations of mobile robots in human-shared environments
introduce new advantages in terms of productivity and effi-
ciency, as the abilities of human and machine complement
one another, but it is subject to hard safety constraints. It also
introduces new risks and hazards, like increased possibility
of collision with workers. Due to the elimination of barriers
around the robot in this new collaborative situation, a robot
should interact with other robots and workers at different
levels. It is crucial to ensure the correct and safe operation
of the robot, so that it cannot cause injuries or damages to the
workers, other objects or to itself [2]. This issue is aggravated
in an automated warehouse scenario, where mobile robots can
navigate autonomously together with human workers and other
moving robots.
Safety requirements for robot interaction were introduced
in the international standard ISO 10218 [3], [4] and for
human-robot collaboration (HRC) operations in a relatively
recent technical specification ISO/TS 15066:2016 [1]. Cur-
rently, the expected requirements of safety and unhindered
human-robot collaboration are under development. The scope
of the standard is not limited to the development of new
sensors, robots or intelligent control systems, but includes risk
analysis techniques, which is a fundamental requirement for
collaborative robot applications.
ISO standards alone are not enough for collaborative sys-
tems to ensure safety. A dedicated risk management approach
(including risk assessment and risk reduction) is vital, even for
those robots that are specifically designed for human-robot collaboration (HRC) [5], [6]. An experiment was performed in [7]
to check safe collaborative operations by applying the "power
and force limitation" mode specified in ISO/TS 15066 for a
pick-and-place task, taking the maximum permissible
values for pressure and force from the standard. The results
indicate that the current specification is not sufficient, even
though ISO/TS 15066 was applied reasonably, and that there is a
vital need for risk assessment. Consequently, additional
risk-reduction actions must be taken for collaborative scenarios
to be carried out safely.
In this paper we present a systematic description of collaborative
scenarios for our use case from a safety perspective,
by first identifying different human roles, their collaborative
interactions, and unsafe scenarios along with safety issues.
Secondly, we perform a risk assessment for HRC that in-
cludes hazards identification by using Hazard Operability
(HAZOP) technique coupled with Unified Modeling Language
(UML) [8] and risk estimation. Lastly, we present our simu-
lation setup along with scene graph representation that will
be used to evaluate the proposed risk assessment, and initial
results of the dynamically generated scene graph for each robot
in our scenario1. Through the scene graph, the robot perception
1 Code available at https://github.com/EricssonResearch/scott-eu/tree/simulation-ros/simulation-ros
is converted into a semantic representation that contributes to
add relevant environment information for the risk assessment.
Paper Outline: Section II presents related works in risk
assessment for HRC. Section III describes an automated ware-
house use case using collaborative robots and the HRC scenar-
ios along with safety issues that could arise. It further presents
hazards and risk assessment for the use case. The implementation
setup, along with the semantic representation of the environment
and its initial results, is presented in Section IV. Section V
presents discussion and finally, Section VI concludes the paper
with a description of ongoing and future works.
II. RELATED WORK
Traditionally, when robots and humans share the same
working space, they are separated by barriers to avoid direct
contact with each other and consequently, prevent possible
injuries, as seen in the Kiva project [9]. There are some
initiatives to enable interaction between humans and robots
to increase efficiency in production. Despite requiring risk
assessment, some works do not follow any safety standard.
For instance, in the work of [10] the authors employ a robot
equipped with cameras and sensors to detect objects and
people. In this case, the robot simply modifies its path or
velocity to prevent collisions, but does not perform any risk
assessment.
Regulations that incorporate robot related risks for human
workers include the international standard ISO 10218 [3],
[4] and ISO 13849-1 [11]. Based on these standards, several
works were done on industrial robotics to maximize the
productivity while sharing the same space. In [12], the authors
present a kinematic control strategy to enforce safety for such
robots through an optimization-based real-time path planning
algorithm. During planning, a tractable set of constraints on
the robot's velocity is used to keep the minimum separation
distance from the human. In contrast to our work, [12] does
not contemplate risk assessment to ensure safety. The
work of [13] performs hazard analysis and risk assessment in
cable harness assembly and evaluates the safety design before
and after the implementation that resulted in the reduction of
the potential collaboration risks. However, different from our
proposed solution, the previous two works only consider fixed
robot manipulators and not mobile robots. In [14] the operators
use safety vests (ANSI/ISEA 107-2004 standard) to facilitate
their detection by the robot’s cameras. Moreover, the robot
complies with the EN 1525 standard which enables the contact
with humans by limiting the force and power [15]. Despite
following safety standards, these works are not in
accordance with the most recent technical specification
ISO/TS 15066:2016, since they were completed before the
technical specification was defined.
The ISO/TS 15066:2016 [1] technical specification introduces collaborative
robotics concepts and four collaborative operating modes
in detail, and thus supplements the requirements of ISO
10218 in order to develop safe collaborative robot applications.
It enforces power, speed and movement limitation on robots,
according to the level of the risk that they bring to humans.
Further, ISO 13849-1 [11] and IEC 62061 [16] provide guide-
lines to determine the safety level based on the severity, the
frequency of exposure and the possibility to manage the hazard
to an acceptable level by reducing risk.
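For illustration, the risk graph of ISO 13849-1 (Annex A) can be sketched as a lookup from severity (S), frequency of exposure (F) and possibility of avoiding the hazard (P) to a required performance level PLr; this sketch is our own summary of the standard's risk graph, not code from the paper:

```python
# Hedged sketch of the ISO 13849-1 risk graph: severity (S1 slight / S2
# serious), frequency of exposure (F1 seldom / F2 frequent) and possibility
# of avoiding the hazard (P1 possible / P2 scarcely possible) map to a
# required performance level PLr from "a" (lowest) to "e" (highest).
RISK_GRAPH = {
    ("S1", "F1", "P1"): "a", ("S1", "F1", "P2"): "b",
    ("S1", "F2", "P1"): "b", ("S1", "F2", "P2"): "c",
    ("S2", "F1", "P1"): "c", ("S2", "F1", "P2"): "d",
    ("S2", "F2", "P1"): "d", ("S2", "F2", "P2"): "e",
}

def required_performance_level(severity, frequency, possibility):
    """Look up the required performance level for one hazard."""
    return RISK_GRAPH[(severity, frequency, possibility)]
```

For example, a serious injury with frequent exposure and scarcely avoidable hazard yields the highest level, PLr = e.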
Robotic companies such as ABB [6] and SICK [5] are devel-
oping robots and sensors that follow these standards to be used
in industrial automation. They present safety requirements and
emphasize the need of risk assessment and risk reduction
approach for collaborative robots using these standards. A similar
need for applying risk assessment and risk reduction standards,
especially focused on collaborative robots, has also
been observed in research institutes [17]. Recent works
from 2016 onward have improved HRC risk assessment
and reduction by identifying the operation modes according
to the standard, presenting collaborative scenarios, and
identifying different human body areas that could potentially
get injured. The work of [6] presents a structured description
of HRC scenarios and performs risk assessment and hazards
analysis using Failure Modes and Effects Analysis (FMEA)
method [18] for industrial robots used in manufacturing. This
work is quite similar to ours in its approach, but different
in terms of the use case, robot mobility and the operations
performed by the robots. We also define different human roles
in our use cases and our HRC scenarios. Further, we present
the details of implementation and how risk assessment will be
evaluated which is completely missing in [6].
Askarpour et al. [19] present a methodology to perform
semi-automated safety analysis of HRC applications. The
methodology aims at applying formal verification methods
to HRC tasks to identify possible hazardous situations and
mitigate them. The proposal performs offline verification of
such tasks and assumes a 'human in the loop' for providing
the mitigation strategies for unsafe situations. In [20], the
authors further extend the work by adding a model of the
operator's behaviour as an attempt to deal with unpredictable
human behavior. The authors employ a cognitive model of the
operator, capturing erroneous human behavior derived from the
operator's perception of the environment and mental decisions.
Although the approach of having the operator's cognitive
model is extremely interesting, we argue that modeling human
behaviour, and more specifically possible erroneous behaviors,
is an almost insurmountable task.
We use HAZard OPerability (HAZOP) [21] in this work,
which is a guideword-based technique to identify hazards/risks
in the risk identification phase. The technique mainly focuses on
operational hazards. It is relatively new compared to other
techniques like FMEA and Preliminary Hazard Analysis (PHA),
with the advantages of controlling model complexity and
delivering a safety document for certification. Further, HAZOP-
UML analysis can identify all hazards that PHA covers, as well as
additional ones [21]. The work of [22] tested this technique
with two systems: a robotic mobile manipulator and a robot
that assists disabled people. A HAZOP-UML tool was developed
and used for a walking-assistance robot [21]; the tool was also
used for an airport light-measurement robot [23].
Fig. 1: Illustration of human-robots interactions in the ware-
house. The small boxes in red, green and yellow are products
on the shelves and on the conveyor belts, white squares on the
floor (next to the conveyor belts and to the shelves) are waypoints
and blue cylinders are the robots' recharging stations.
III. DESCRIPTION OF USE CASE AND HUMAN-ROBOT
COLLABORATION
The first part of this section presents details of an auto-
mated warehouse use case and describes collaborative and
non-collaborative scenarios to be performed safely inside the
warehouse. The later parts of this section identify the different
human workers that interact with the robots and describe
their hazard exposure, skill levels, frequency of
collaboration, and the unsafe scenarios that could arise, along
with safety recommendations. Unsafe scenarios lead to hazard
identification and safety recommendations that will later be
used in the risk mitigation step.
We consider a use case of an automated warehouse where
autonomous mobile robots and humans share a common place
and work together to move products to the delivery truck.
Multiple mobile robots perform pick-and-place operations by
picking up products from the shelves and delivering them to
conveyor belts, which in turn take the products to the trucks.
Each robot is equipped with a robotic arm for pick-and-place
operation. Human workers interact with shelves by placing or
moving products on them. In particular, placing the products is
a complex task that involves choosing the ordered products;
it is therefore performed by the human workers.
Figure 1 shows the simulated warehouse with the collabo-
rative robots and a human worker.
A. Human-robot collaboration scenarios and their safety re-
quirements
A human worker and a robot can come into close interaction
around the shelf when the former is placing a product
and the latter is picking it up, leading to severe
safety risks. Other situations include human intervention
when a product is dropped by the robot and a worker comes
to remove it, when a worker enters the warehouse for
the maintenance of a broken robot while other robots are
moving, or when a visitor (e.g. an external worker) enters
the warehouse. Thus, most collaborations will happen
around the shelves and on the warehouse floor. Proper safety
measures need to be adopted and safety must be ensured
in all these scenarios.
We have provided an overall architecture containing major
components of an automated warehouse including a warehouse
controller, planning service, and a two-layered safety strategy
in [24]. The presented safety strategy consists of an offline
safety analysis (performed before sending the tasks to the
robot) and an online safety analysis (performed at runtime
and inside the robot's control loop). The warehouse controller is
a digital actor in the system that receives high-level goals
from the warehouse manager through a graphical user interface
(GUI), uses the planning service to generate a high-level
plan for the warehouse to accomplish the goal, and checks
the generated plan for safety constraints using the offline
safety analysis before sending it to the robots. For instance,
the plan ensures that two or more robots will not be at the
same position at a particular time. If the plan is correct then
the controller assigns resources (e.g. number of robots and
number of conveyor-belts) to be used to fulfill the current
plan and then finally sends the verified plan (tasks) to the
robots. The task assignment to the robots is shown in Figure 2.
The warehouse controller, planning and offline safety analysis
are briefly presented here only to provide an overview of the
complete system and are not the focus of this paper.
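As a minimal illustration of the offline check described above (our own sketch, not the paper's planner), a plan can be verified by ensuring that no two robots are scheduled at the same waypoint at the same time step; the `find_conflicts` helper and the plan encoding are hypothetical:

```python
# Sketch of the offline safety constraint: before dispatching a plan,
# verify that no two robots occupy the same waypoint at the same time step.
def find_conflicts(plan):
    """plan: dict robot_id -> list of waypoints, indexed by time step.
    Returns a list of (time_step, waypoint, robot_a, robot_b) conflicts."""
    conflicts = []
    robots = sorted(plan)
    horizon = max(len(p) for p in plan.values())
    for t in range(horizon):
        seen = {}  # waypoint -> robot occupying it at time step t
        for r in robots:
            if t < len(plan[r]):
                wp = plan[r][t]
                if wp in seen:
                    conflicts.append((t, wp, seen[wp], r))
                else:
                    seen[wp] = r
    return conflicts
```

A plan such as `{"robot1": ["shelf_A", "wp1"], "robot2": ["shelf_B", "wp1"]}` would be rejected, since both robots reach `wp1` at the same step; the controller would then re-plan before assigning tasks.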
As mentioned before, each robot receives a high-level plan
(task) that has already been checked in the warehouse controller
(offline) for safety constraints. However, different
situations can still arise when the products are placed close
together and there is a chance of collision with nearby robots. Close
encounters can also happen during navigation from the shelves
to the conveyor belts and when the robots are coming back
towards the shelves after delivering the products. Online safety
analysis is performed at runtime for this purpose using the risk
management process and is the main focus of this paper.
[Figure 2 (UML use-case diagram): the actors Collaborative Worker, Co-existing Worker, External Worker/Visitor, Manager, System Engineer, Warehouse Controller and Robot are connected to the use cases UC01 Collaborative Operation, UC02 Placing/replacing products on shelf, UC03 Cleaning, UC04 Replacing the robot, UC05 Updating software and monitoring robot behavior, UC06 Monitoring, and UC07 Visiting the area, together with task assignment by the Warehouse Controller and the Risk Management function; several use cases include UC01 via <<uses>> relations.]
Fig. 2: Use cases for collaborative scenarios in automated
warehouse.
We consider the following robot states in our use case:
• Manipulation: The robot arm picks up or places a product.
• Navigation: The robot's platform is moving towards a
waypoint (i.e. shelf or conveyor belt). For safety reasons
and to reduce hazard complexity, we assume that the
robot platform stands still during manipulation, and vice
versa.
• Idling: The robot is standing still because it is either waiting
for the next task or has a technical problem (e.g.
battery out of charge).
• Charging: The robot recharges itself at the charging station.
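The four states above can be sketched as a small state machine. The transition table below is an illustrative assumption, since the paper only states that manipulation and navigation are mutually exclusive; it is not the authors' implementation:

```python
# Illustrative state machine for the four robot states. The allowed
# transitions are assumed; e.g. manipulation never starts while the
# platform is moving, because the platform stands still during manipulation.
ALLOWED = {
    "idling":       {"navigation", "charging"},
    "navigation":   {"manipulation", "idling", "charging"},
    "manipulation": {"navigation", "idling"},
    "charging":     {"idling", "navigation"},
}

def transition(state, new_state):
    """Return the new state, or raise if the transition is not allowed."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

For instance, `transition("charging", "manipulation")` would be rejected, since a robot on the charging station cannot start a pick-and-place operation directly.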
Before quantifying risks associated with our collaborative
robots, we present a systematic description of collaborative
scenarios from the safety perspective. We first identify differ-
ent roles and their collaborative interactions, and then explore
unsafe scenarios and safety issues.
1) Description of the roles in our collaborative scenario: In
order to find hazards that could arise, we first need to identify
all humans who might be exposed to a hazard, their skill
levels and the frequency of exposures. For our use case, we
describe different human roles in Table I with the respective
explanations about their skill levels and frequency of exposure
in each column. In the table, by a “skilled” person we mean
that the necessary safety certification courses have been undertaken,
and by “trained” we mean that training to work
collaboratively with robots has been received. This includes
instructions on approaching the robot, understanding robot
behavior (e.g. gradual speed reduction when the human
gets closer) and even making physical contact with it.
2) Description of use cases and unsafe scenarios: After
describing different roles, and their exposure levels, we present
their interactions with the robots in Figure 2. We briefly
describe the use cases in Table II. The table further explains
unsafe interaction scenarios, safety issues in the use cases, and
presents safety requirements and recommendations that will be
used in risk reduction phase.
B. Risk Assessment
This section presents the concept of a risk management process
that is capable of managing (identifying, assessing and
mitigating/reducing) safety risks for our collaborative scenarios.
The four main phases of a classical risk management process
are 1) Hazard and Risk Identification; 2) Risk Analysis; 3) Risk
Evaluation; and 4) Risk Mitigation (also called Risk Reduction
or Treatment) [25], as shown in the red boxes of Figure 3.
The main focus of this paper is on risk assessment, which is
an overall process of hazard identification and risk analysis
(i.e. first two phases of risk management). Risk assessment
enhances understanding of risks, their causes, frequencies,
consequences, and probability.
The first phase of the risk assessment is the hazard and risk
identification. We conduct it manually by identifying and then
describing all possible existing threats in our HRC scenario.
Additionally, the possible consequences and damages to the
human and to other objects are also catalogued. There are
several methods to perform this phase, such as Preliminary
[Figure 3 diagram: inside the Robot Node, the camera image from the sensors feeds object detection, object classification and scene graph construction; the resulting scene graph flows into hazard identification, risk analysis and risk evaluation (producing a risk magnitude), and risk reduction (AI-based algorithm) acts on robot velocity, safety zones, arm speed/power and navigation/actuation. Warehouse objects are split into static objects (shelves, conveyor belts) and dynamic objects (humans, other robots), whose positions are inputs to the process.]
Fig. 3: Risk management process, its components and inputs/outputs.
The ROS nodes that are executed in the robot are
depicted in Robot Node. The scene graph produces semantic
information about the environment, which is used in risk analysis.
Hazard Analysis (PHA), HAZard OPerability analysis (HAZOP),
Fault Tree Analysis (FTA), and Failure Modes and Effects
Analysis (FMEA). PHA is a simple, inductive method
in which hazards for a specific scenario are identified from
the hazard checklists of a standard (e.g. ISO 12100 [26] Annex
B: Examples of hazards, hazardous situations and hazardous
events). However, robotic standards [3], [4], [1] do not include
hazards for HRC scenarios. We apply the HAZOP method, a
structured and systematic examination approach to identify
hazards, which is suitable for our HRC use case. HAZOP first
models the scenarios with use case diagrams, sequence diagrams
and state-machine diagrams. Then, attributes and guidewords
are used to generate deviations. The hazard list is
obtained after merging redundant deviations and removing
meaningless ones. A detailed list of hazards for our HRC
use case, along with the identification of each hazard's type,
consequences and affected human body area, is presented in Table III.
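The guideword step can be illustrated as follows; the guideword list is a typical HAZOP set and the helper function is our own sketch, not the HAZOP-UML tool of [21]:

```python
# Sketch of guideword-based deviation generation: combine attribute names
# taken from the UML models (e.g. message parameters) with generic HAZOP
# guidewords to enumerate candidate deviations, which are then manually
# filtered (redundant or meaningless deviations are removed).
GUIDEWORDS = ["no", "more", "less", "as well as", "other than", "early", "late"]

def generate_deviations(attributes):
    """attributes: e.g. ['arm speed', 'separation distance']."""
    return [f"{gw} {attr}" for attr in attributes for gw in GUIDEWORDS]
```

For example, `generate_deviations(["arm speed"])` yields candidates such as "no arm speed", "more arm speed" and "late arm speed"; only the meaningful ones survive into the hazard list.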
The last three phases of the risk management process are
performed inside the robot's control loop, as shown in Figure 3.
The risk analysis phase comprehends the nature of the risks
and determines the level of risk, including risk estimation [26]. In
this phase we identify key entities, attributes of the entities,
and the relationships among the attributes, and then perform
risk estimation. The key entities in our case are the shared
workspace, which consists of static objects (e.g. shelves,
products, conveyor belts and dock stations) and dynamic
objects (other robots and human workers) (see Figure 3).
We use the scene graph in this phase to identify the entities and
their attributes based on the identified hazards. This consists of
gathering sensor data from the warehouse and then processing
the camera image through the scene graph module (details in
Section IV-B), which outputs the corresponding scene graph.
To formalize this problem, the obstacles are initially classified
as static objects, mobile objects, humans and one special
object: the dock station. The first three object classes require increased
TABLE I: Description of different roles in our use case and their collaboration with robots

Role | Expertise | Description | Degree of collaboration with robot | Frequency of collaboration
Collaborative Worker | Skilled, trained and experienced | Has close collaboration with the robots. Has proper training on working collaboratively with the robots and understanding robot behavior. | Close collaboration | Regular, daily work
System Engineer | Skilled, trained and experienced | Has technical knowledge of how the robot works and performs, understands its behavior, and is responsible for development/maintenance of the robots. | Close collaboration | Occasional, only for updates or when some error occurs
Manager | Non-skilled, untrained | Responsible for administrative operations and management of the warehouse and its assets. Has little knowledge of robot behavior and rarely interacts physically with the robots. | No interaction | No collaboration
Co-existing Worker | Skilled, trained | Shares the same place as the robots but has occasional interaction with them. Has proper training and a shallow understanding of robot behavior. | Close collaboration | Occasional
External Worker / Visitor | Untrained | Workers that do not pertain to the warehouse but have access to this place. | No close collaboration | Very rare
TABLE II: Description of use cases, unsafe scenarios and safety recommendations

UC01 — Unsafe scenario: The worker interacts with the robots in a collaborative way (working very close to a robot) while placing products on the shelf.
Safety requirements/recommendations:
• The robot must adjust its behavior and always keep the necessary safety distance from the worker according to the standards. Adjustments can include reducing the robot's speed and stopping the robot if the worker is not at a safe distance.
• The worker must be trained and have knowledge about the robot's behaviour, such as how much distance to keep from the robot while working safely and efficiently.
• Feedback (visual and/or auditory) must be provided to inform the worker of the current robot behaviour, helping to anticipate the robot's movement; e.g. a stopped robot can be indicated by a red light on top of the robot, and slow movement by a yellow light.

UC02 — Unsafe scenario: The worker takes different products from storage and places them on the shelf, from where the robot will pick them up and deliver them to the conveyor belt. The products should be carefully positioned so that the robot can easily pick them up; if a product is placed at an unusual position or is shifted, the robot may have difficulties or may be unable to pick it up.
Safety requirements/recommendations:
• All recommendations from UC01.
• Products are to be placed at specific positions.
• The worker is trained to identify why the robot is unable to pick up a product.
• Continuous information about robot functionality is provided to help identify the reason.

UC03 — Unsafe scenario: A worker needs to remove items that the robot has dropped on the floor. When this happens, the worker enters the collaborative area and goes towards the dropped item. He/she is comfortable coming close to the robots in a safe way to perform the task, without interfering with the robots' activities.
Safety requirements/recommendations:
• All recommendations from UC01.

UC04 — Unsafe scenario: If a robot breaks down during its operation or a deadlock occurs in the system, the manager sends a technician. The worker enters the collaborative area to replace or move out the robot in the presence of other working robots.
Safety requirements/recommendations:
• All recommendations from UC01.
• The technician identifies the problem (e.g. a drained battery) and logs it.

UC05 — Unsafe scenario: The collaborative worker informs the engineer about a problem in the robot's normal behavior. Sometimes the robot initiates actions that were not anticipated by the worker; this could be due to an error in an algorithm or to incorrect algorithm parameters.
Safety requirements/recommendations:
• All recommendations from UC01 for the collaborative worker.
• The engineer must perform systematic tests of the algorithms before deploying them on the robots.
• The engineer can work together with the worker in order to be informed about the most common problems the worker notices while working with robots. This helps adjust the robot's software and identify the need for additional worker training.

UC06 — Unsafe scenario: The manager's interaction with the robots happens mostly through the warehouse management interface. The manager verifies and approves high-level plans being sent to the robots and expects them to be accomplished in a safe and timely manner.
Safety requirements/recommendations:
• Continuous information about the robots' functionality must be displayed through a user interface.

UC07 — Unsafe scenario: A visitor gets inside the warehouse and moves around along with the mobile robots. He/she also observes the pick-up operation around the shelf and may want to place products for the robot, or wish to touch or come close to the robot.
Safety requirements/recommendations:
• All recommendations from UC01.
• The visitor should be provided with basic information or training about the collaborative robots, so that he/she can anticipate the robots' behaviour (e.g. the distances at which a robot will slow down or stop completely to keep him/her safe).
function complexity and performance. The last one, the dock
station, breaks the common obstacle strategy because it is used
to charge the robot: the robot must park close to it and in
the proper orientation.
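A hedged sketch of how such a scene-graph representation might be encoded follows; the node/edge layout and the `build_scene_graph` helper are our own illustration under the classification above, not the implementation in the repository:

```python
# Sketch of a scene graph: detected objects become nodes tagged with a
# class (static, mobile, human, dock_station), and each edge from the robot
# carries the robot-to-object distance used later during risk analysis.
import math

def build_scene_graph(robot_pos, detections):
    """detections: list of (name, cls, (x, y)) from the perception stack."""
    graph = {"robot": []}
    for name, cls, (x, y) in detections:
        dist = math.hypot(x - robot_pos[0], y - robot_pos[1])
        graph["robot"].append({"object": name, "class": cls,
                               "distance": round(dist, 2)})
    return graph
```

For example, a shelf detected 5 m away and a worker 1 m away yield two edges whose distance attributes feed directly into the zone-based risk evaluation.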
For the risk evaluation and risk mitigation phases, an online
safety analysis is to be implemented. We propose
dynamically changing three-layered safety fields/zones around
TABLE III: Description of Hazards for collaborative operations

Task | Problem description | Hazard | Type | Consequence | Body area
Pickup operation | Product is not properly placed and the robot fails | HN1: Robot cannot pick up the product because either the product is not present on the shelf or is not placed at a proper place | Temporal | Time loss | None
Pickup operation while the worker is placing/replacing products | Human is very close to the robot | HN2: Physical human injury from transient contact between gripper and hand, followed by clamping and dragging along the hand while the planned pick-and-place task continues | Mechanical | Human injury: gripping | Back of worker's hand
The robot navigates its arm to pick up a product | Human is very close to the robot | HN3: Physical human injury; the robot's moving arm can hit the worker's body | Mechanical | Human injury: impact | Upper part of the body
Manipulator drops the product and a worker is nearby | Product is not held properly and the robot drops it close to the human, who can get hurt | HN4: Physical human injury to the worker's foot or leg due to the fallen product | Mechanical | Human injury: crushing | Foot / leg
Robot navigation while a worker/visitor is moving close by | Human is very close to the moving robot | HN5: Physical human injury to the worker's body due to the moving robot | Mechanical | Human injury: impact | Lower part of the body
Robot navigates while a worker is cleaning the floor close by or comes to replace the robot | Human is very close to the moving robot | HN6: Physical human injury to the worker's body due to the moving robot | Mechanical | Human injury: impact | Lower and/or upper parts of the body
Place operation | Product cannot be placed properly | HN7: No place for the product because the conveyor belt is not moving | Temporal | Time loss | None
Place operation | Product cannot be placed properly | HN8: Robot is not able to release its grip on the product properly | Mechanical | Financial loss | None
Change in the robot's behavior due to new/updated software | Robot does not behave as anticipated by the worker | HN9: Physical injury or stress to the collaborative worker due to unexpected behavior | Communication | Human physical/mental injury | Any body area
Multiple robots are moving close to each other | Proximity sensor failure or software error | HN10: Damage due to robot collision | Mechanical | Financial loss | None
Pickup and/or place operations | Improper force limitation or force control failure | HN11: Property damage to fragile products due to the robot | Mechanical | Financial loss | None
The robot is performing a task | Software error | HN12: Failure to switch modes when a reaction is needed | Software | Financial loss | None
The robot is performing a task | Software or hardware error | HN13: False emergency stop | Software/hardware | Financial loss | None
The robot is performing a task | Software or hardware error | HN14: Robot shutdown during a task | Software/hardware | Time loss | None
The robot is performing a task | Software error | HN15: False alarm or indicator light | Software | Time loss; physical/mental injury | Any body area
the robot for its safe navigation and manipulation. The
fields/zones are categorized as red, yellow and green. The sizes
of the zones will be taken from the standards [1]. If an object
(obstacle) is identified far from the robot (in the green zone), we
evaluate it as safe and there is no risk; if the object is identified
a bit closer (in the yellow zone), there is a moderate level of risk
and the robot may need to reduce its speed depending on the object
type and its distance; and when the object is identified very close
(in the red zone), the risk is high and the robot must stop
immediately. This information will be used in the risk evaluation
phase to calculate the magnitude of the risk (i.e. no risk, low,
moderate, high, very high). Based on this risk magnitude, safety
rules will be generated and used in the subsequent risk mitigation
phase. To implement online safety analysis, we intend to use an
Artificial Intelligence (AI) based algorithm (e.g. fuzzy logic or a
neuro-fuzzy algorithm) in the future.
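As a minimal sketch, the zone evaluation described above can be expressed as a distance-to-zone lookup. The radii below and the human/non-human distinction are illustrative assumptions only; the actual zone sizes are to be taken from the standards [1]:

```python
# Sketch of the three-zone (green/yellow/red) risk evaluation.
# The radii are illustrative placeholders, not values from ISO/TS 15066.
RED_RADIUS = 0.5      # metres: robot must stop immediately
YELLOW_RADIUS = 1.5   # metres: robot may need to reduce its speed

def evaluate_risk(distance, is_human=False):
    """Map an obstacle's distance to a zone, a risk magnitude and an action.
    Treating humans in the red zone as 'very high' risk is our assumption."""
    if distance <= RED_RADIUS:
        return "red", "very high" if is_human else "high", "stop"
    if distance <= YELLOW_RADIUS:
        # Moderate risk: the robot may need to slow down.
        return "yellow", "moderate", "reduce_speed"
    return "green", "no risk", "continue"

print(evaluate_risk(0.3, is_human=True))   # red zone: stop
print(evaluate_risk(1.0))                  # yellow zone: slow down
print(evaluate_risk(3.0))                  # green zone: continue
```

The returned risk magnitude is what the risk evaluation phase would consume, and the action is what the mitigation phase would map onto the robot's actuation.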
IV. IMPLEMENTATION SETUP
This section presents our implementation setup along with
semantic description of the environment in the form of scene
graph. The scene graph is used to represent the knowledge
of the robot’s visual perception and to enable environment
analysis at a semantic level2 (Figure 3).
A. Simulated Warehouse
For simulation purposes, we use a Virtual Robot Experimen-
tation Platform (V-REP) [27] to model all the above mentioned
collaborative scenarios of our use case. The simulator comes
with an integrated development environment in which the
physical models can be created and controlled. Figure 1 depicts
the simulated warehouse that was modeled with all its physical
components, i.e. shelves, products, conveyor belts, robots,
charging stations, and the human workers. The simulated robot
2 The simulated warehouse prototype and the code are available at:
https://github.com/EricssonResearch/scott-eu/tree/simulation-ros/simulation-ros/
Fig. 4: ROS architecture employed to simulate the warehouse
and the robots. The main components of the architecture are
the V-REP simulator (red box) and the ROS nodes (gray box).
is a Turtlebot2i3, which is equipped with a robotic arm and two
3D cameras.
We use the Robot Operating System (ROS), a flexible and widely
used framework for developing robot software. The main advantages
of ROS are code reusability, the abstraction of low-level code,
and support for several robot models. The data generated by V-REP
is converted to ROS messages using the ROS interface.
In the simulation environment, we use a single ROS master that
centralizes the communication between the simulated robots and
V-REP. The V-REP remote API is specifically used to control the
robotic arms through socket communication. The main components of
the warehouse simulation architecture are V-REP, the robot ROS
nodes and the ROS master. Figure 4 presents an overview of the
architecture with its components and the communication between
them. Details of these components and descriptions of their
functionalities are as follows:
V-REP: models all the physical components of the warehouse. It
also simulates the behavior of the warehouse and produces the
visualization through its GUI. It is important to highlight that
the robots' control logic is implemented using ROS and is
therefore not coupled to V-REP; the simulator only controls the
behaviors of conveyor belts, shelves, trucks, and workers. The
main motivation for keeping the robot code separate in ROS nodes
is to increase code reusability between the simulation and
real-world scenarios.
Robot ROS Nodes: contain all the algorithms responsible for
processing the data coming from the sensors in the robot's control
loop. All methods were modeled using ROS libraries and all data is
formatted in the ROS message structure. Some of the ROS nodes are:
the scene graph module, the risk reduction/mitigation algorithm,
mapping/localization, obstacle detection, and path planning. Path
planning, based on the Navigation Function 1 (NF1) algorithm [28],
performs the robot navigation by
3 Turtlebot2i robot specifications: http://www.trossenrobotics.com/interbotix-turtlebot-2i-mobile-ros-platform.aspx
combining the localization, mapping and obstacle detection nodes.
During navigation, obstacles' boundaries are inflated
proportionally to the robot size for safer robot movement. The
navigation module also relies on a local path planner to deal with
dynamic obstacles. Thus, the robot can make local changes without
modifying the global path.
ROS Master: centralizes the communication between the
components of the architecture and is responsible for estab-
lishing the communication between node pairs.
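To make the navigation steps above concrete, the following is a simplified, self-contained sketch (our own reconstruction, not the actual ROS node) of obstacle-boundary inflation on an occupancy grid and an NF1-style navigation function computed as a breadth-first wave expanded from the goal cell:

```python
from collections import deque

def inflate(grid, radius):
    """Inflate obstacle cells by `radius` cells (Chebyshev distance),
    a stand-in for growing obstacle boundaries with the robot size."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            out[rr][cc] = 1
    return out

def nf1(grid, goal):
    """NF1-style navigation function: a BFS wave from the goal assigns
    each free cell its grid distance; the robot descends this field."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    gr, gc = goal
    dist[gr][gc] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < rows and 0 <= cc < cols
                    and grid[rr][cc] == 0 and dist[rr][cc] is None):
                dist[rr][cc] = dist[r][c] + 1
                queue.append((rr, cc))
    return dist

# Toy occupancy grid: 0 = free, 1 = obstacle.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
field = nf1(grid, goal=(0, 3))
```

Following the steepest descent of `field` from any free cell yields a shortest grid path to the goal, which is the property the global planner relies on; the local planner then deviates from this path around dynamic obstacles.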
B. Semantic Representation of the Environment and Initial
Results
In this work, instead of analyzing only the object detection
output, we include contextual information with the detected
objects in order to obtain a richer representation of the
environment. We use a scene graph [29] for this purpose, which
incorporates the semantic relationships between the objects.
Additionally, scene graphs include positional information of the
objects, which further enhances the contextual information. We use
the scene graph for two main purposes: first, for environment
perception (as mentioned before) and second, for risk mitigation,
instead of using raw sensor data.
The scene graph is represented as a directed graph where nodes
denote objects (e.g. conveyor belt, shelf and robot) or human
workers, and edges denote the spatial (e.g. on, below, beside) or
other semantic relationships between two nodes. The nodes are
obtained after performing object detection and classification on
the robot's camera images. These nodes store static (e.g. shape
and size) or dynamic (e.g. pose, velocity and acceleration)
properties of the objects. The root node is associated with a
location (e.g. office, floor, factory) and subsequent child nodes
represent the objects present in this location. To construct the
graph from the robot's list of detected objects, the objects that
are in contact with the leaf nodes are added as child nodes, and
the process is repeated until all elements in the list are
analyzed. Using this method, a separate scene graph is generated
for each robot, and the graph is dynamically updated whenever any
change is observed in the robot's detections.
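The graph-construction procedure described above can be sketched as follows; the `in_contact` predicate and the object names are hypothetical stand-ins for the robot's actual detection output:

```python
def build_scene_graph(location, detections, in_contact):
    """Minimal sketch of the scene-graph construction described above.
    `detections` is the robot's list of detected object names, and
    `in_contact(a, b)` is an assumed predicate telling whether object
    `a` rests on `b`. Returns adjacency: node -> list of (child, edge)."""
    graph = {location: [("floor", "contains")], "floor": []}
    remaining = list(detections)
    leaves = ["floor"]
    # Repeatedly attach objects in contact with the current leaves
    # until all detected elements have been analyzed.
    while remaining and leaves:
        new_leaves = []
        for leaf in leaves:
            for obj in list(remaining):
                if in_contact(obj, leaf):
                    graph.setdefault(leaf, []).append((obj, "on"))
                    graph.setdefault(obj, [])
                    remaining.remove(obj)
                    new_leaves.append(obj)
        leaves = new_leaves
    return graph

# Hypothetical contact relation: shelf and worker sit on the floor,
# and a product sits on the shelf.
contacts = {("Shelf#0", "floor"), ("Worker", "floor"),
            ("Product#3", "Shelf#0")}
graph = build_scene_graph("warehouse",
                          ["Shelf#0", "Worker", "Product#3"],
                          lambda a, b: (a, b) in contacts)
```

In the real system each node would additionally carry the static and dynamic attributes mentioned above (shape, size, pose, velocity, distance to the robot).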
Figure 5 presents dynamically generated scene graphs at two
different times (t0 and t1) by two robots navigating in the
simulated warehouse. The root node is always the "warehouse" and
has a child node "floor", which is the element that connects all
the objects in the scene. The objects detected by the robot are
added below the "floor" node and the edge is labeled with "on",
which represents the placement of the object. With this
representation the robot can pay special attention when a "worker"
node is added, and the risk assessment can use the contextual
information provided by this graph to generate safe behavior.
In the presented graphs, each child node has a distance attribute,
which corresponds to the distance of the object to the
corresponding robot. The robots use a monocular RGB camera for
object detection. At time t0, the robots are stopped (their
velocities are 0.00) and the worker is passing in front of
ConveyorBelt#1 (Figure 5a). In this setting, Robot#0 detects the
(a) Scenario at time t0.
(b) Scene graph from Robot#0 at time t0.
(c) Scene graph from Robot#1 at time t0.
(d) Scenario at time t1.
(e) Scene graph from Robot#0 at time t1.
(f) Scene graph from Robot#1 at time t1.
Fig. 5: Dynamically generated scene graphs by two robots, labeled as Robot#0 and Robot#1, based on the objects detected at
two different time stamps (t0 and t1) during the simulation. Distance, size and velocity units are in m, m² and m/s, respectively.
Worker and ConveyorBelt#1, while Robot#1 detects Shelf#0,
Shelf#1 and DockStation#1. These detections are reflected in the
generated scene graphs presented in Figures 5b and 5c,
respectively. At t1, Robot#0 is moving towards ConveyorBelt#1 and
Robot#1 towards Shelf#1 (Figure 5d). At this moment, Robot#0
cannot observe the Worker anymore and Robot#1 stops detecting
DockStation#1; thus these nodes are removed from the graphs
(Figures 5e and 5f).
Regarding computational performance, a single scene graph
construction takes approximately 150 ms. Currently, the risk
reduction is based on adjusting the robot's direction and velocity
by taking into account its distances to the perceived objects. The
robots' distances to objects and their velocities are illustrated
in Figure 5.
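A minimal sketch of this distance-based velocity adjustment is shown below; the thresholds and the linear ramp are our own assumptions, not parameters of the implemented system:

```python
def adjust_velocity(v_max, distances, stop_dist=0.3, slow_dist=2.0):
    """Scale the commanded velocity by the nearest perceived object.
    `stop_dist` and `slow_dist` are illustrative thresholds only."""
    if not distances:
        return v_max          # nothing perceived: drive at full speed
    d = min(distances)
    if d <= stop_dist:
        return 0.0            # object too close: stop
    if d >= slow_dist:
        return v_max          # all objects far away: full speed
    # Linear ramp between a full stop and full speed.
    return v_max * (d - stop_dist) / (slow_dist - stop_dist)
```

Feeding it the distance attributes of the scene graph nodes would, for example, slow Robot#0 down as the Worker node's distance shrinks, and stop it entirely below the stop threshold.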
V. DISCUSSION
Conducting risk-related testing experiments directly in real
environments can be dangerous and can consume a lot of time and
resources. The simulated warehouse setup makes it possible to
conduct these experiments in a safe and efficient manner before
deploying the algorithms on real robots.
The presented setup enables the risk management process for safe
HRC. Currently, we are using the scene graph, which provides a
contextual and semantic representation of the environment. It
still requires some investigation on how to add safety- and
risk-related information to this representation in order to
leverage the risk analysis. After implementing an AI-based
algorithm, we intend to use this setup to evaluate our safety
approach. In this way, the proposed safety aspects will be checked
by verifying the robot behavior in the presence of humans, both in
simulated and real-world scenarios.
We have identified a limitation in obtaining the complete list of
hazards. The HAZOP method is employed because it can identify more
hazards than other methods, such as Preliminary Hazard
Analysis [21], but it cannot identify all possible hazards.
Our goal is to evaluate the robot system using a set of Key
Performance Indicators (KPIs) based on safety requirements
and the overall performance of the warehouse (e.g. the number
of delivered products). An interesting topic is to study the
trade-offs behind questions such as: What is the effect on the
robot's performance when using the safety analysis approach? How
much is the safety of the system improved by using this approach?
Has the number of possible collisions/risky situations been
reduced?
Studying human trust in machines is also an important aspect.
Human trust in an automated system should not be blind: excessive
trust can be as harmful as a lack of it, or even more so.
Therefore, the concept of calibrated trust [30] should be
explored, along with how such calibrated trust can be developed
and achieved.
VI. CONCLUSIONS AND FUTURE WORK
Human-robot collaboration (HRC) is expected to increase
both productivity and performance. However, this causes new
hazardous situations that must be avoided through proper
risk assessment and risk reduction, without compromising
human or robot productivity. In this perspective, we have
presented a systematic risk assessment approach applied to an
automated warehouse use case. We have identified different
humans working at different interaction levels with robots
and we have presented their respective safety requirements.
We identified a list of hazards for possible HRC scenarios
using HAZOP method and presented risk analysis based on
the hazards. Additionally, we presented our simulation setup
based on V-REP along with the proposed ROS architecture
and described the usage and advantage of scene graph for the
risk management process.
Although this work focuses on an automated warehouse sce-
nario, most of the techniques and algorithms can be applied to
different contexts. The overall safety solution coupled with the
scene graph generation could be used in any scenario where
humans and robots need to coexist (e.g. office environments or
health care). Furthermore, all basic navigation, planning
and obstacle avoidance strategies are agnostic to the context
and could be used generally.
Currently, we are looking into a suitable AI-based algorithm for
the risk reduction phase. It will be implemented as a ROS node,
and its output will be the basis for implementing and evaluating
the three-layered zone safety strategy.
We also intend to set up the real robots and test our risk
management in the real environment. Another future direction
could be to work on trust aspects to bring the safety evaluation
closer to the real world.
ACKNOWLEDGEMENT
SCOTT (www.scott-project.eu) has received funding from the
Electronic Component Systems for European Leadership Joint Un-
dertaking under grant agreement No 737422. This Joint Undertaking
receives support from the European Union's Horizon 2020 research
and innovation programme and Austria, Spain, Finland, Ireland,
Sweden, Germany, Poland, Portugal, Netherlands, Belgium, Norway.
REFERENCES
[1] ISO. ISO/TS 15066:2016 Robots and robotic devices – Collaborative
robots. International Organization for Standardization, Geneva, Switzer-
land, February 2016.
[2] S. Robla-Gómez, V. M. Becerra, J. R. LLata, E. González-Sarabia,
C. Torre-Ferrero, and J. Pérez-Oria. Working together: A review on
safe human-robot collaboration in industrial environments. IEEE Access,
PP(99):1–1, 2017.
[3] ISO. ISO 10218-1 (2011): Robots and robotic devices - Safety require-
ments for industrial robots - Part 1: Robots. International Organization
for Standardization, Switzerland, July 2011.
[4] ISO. ISO 10218-2 (2011): Robots and robotic devices - Safety require-
ments for industrial robots - Part 2: Robot systems and integration.
International Organization for Standardization, Switzerland, July 2011.
[5] Fanny Platbrood and Otto Görnemann. Safe robotics – safety in
collaborative robot systems. SICK AG White Paper, 2017.
[6] B. Matthias, S. Kock, H. Jerregard, M. Källman, and I. Lundberg.
Safety of collaborative industrial robots: Certification possibilities for
a collaborative assembly robot concept. In 2011 IEEE International
Symposium on Assembly and Manufacturing (ISAM), May 2011.
[7] M. J. Rosenstrauch and J. Krüger. Safe human-robot-collaboration –
introduction and experiment using ISO/TS 15066. In 2017 3rd International
Conference on Control, Automation and Robotics (ICCAR), pages
740–744, April 2017.
[8] Jérémie Guiochet. Hazard analysis of human-robot interactions with
HAZOP-UML. Safety Science, 84:225–237, 2016.
[9] E. Guizzo. Three engineers, hundreds of robots, one warehouse. IEEE
Spectrum, 45(7):26–34, July 2008.
[10] L. Sabattini, M. Aikio, P. Beinschob, M. Boehning, E. Cardarelli,
V. Digani, A. Krengel, M. Magnani, S. Mandici, F. Oleari, C. Reinke,
D. Ronzoni, C. Stimming, R. Varga, A. Vatavu, S. Castells Lopez,
C. Fantuzzi, A. Mayra, S. Nedevschi, C. Secchi, and K. Fuerstenberg.
The pan-robots project: Advanced automated guided vehicle systems for
industrial logistics. IEEE Robotics Automation Magazine, PP(99):1–1,
2017.
[11] ISO. ISO 13849-1:2016 Safety of machinery – Safety-related parts of
control systems – Part 1: General principles for design. International
Organization for Standardization, Geneva, Switzerland, January 2016.
[12] A. M. Zanchettin, N. M. Ceriani, P. Rocco, H. Ding, and B. Matthias.
Safety in human-robot collaborative manufacturing environments: Met-
rics and control. IEEE Transactions on Automation Science and
Engineering, 13(2):882–893, April 2016.
[13] J. T. Chuan Tan, F. Duan, Y. Zhang, R. Kato, and T. Arai. Safety design
and development of human-robot collaboration in cellular manufactur-
ing. In 2009 IEEE International Conference on Automation Science and
Engineering, pages 537–542, Aug 2009.
[14] R. Krug, T. Stoyanov, V. Tincani, H. Andreasson, R. Mosberger,
G. Fantoni, and A. J. Lilienthal. The next step in robot commissioning:
Autonomous picking and palletizing. IEEE Robotics and Automation
Letters, 1(1):546–553, Jan 2016.
[15] MTCOS. Safety of industrial trucks – Driverless trucks and their
systems. Technical report, EN 1525, 1998.
[16] IEC. IEC 62061:Safety of machinery - Functional safety of safety-related
electrical, electronic and programmable electronic control systems.
International Electrotechnical Commission, Geneva, Switzerland, 2016.
[17] Safety requirements and standardisation for robots: Software
dos and don'ts. https://rosindustrial.squarespace.com/s/ROS-I-Conf2016-day2-06-jacobs.pdf.
Accessed: 2018-04-19.
[18] Z. De Lemos. FMEA Software Program for Managing Preventive
Maintenance of Medical Equipment. 2004.
[19] Mehrnoosh Askarpour, Dino Mandrioli, Matteo Rossi, and Federico
Vicentini. SAFER-HRC: Safety analysis through formal verification in
human-robot collaboration. In Computer Safety, Reliability, and Security,
pages 283–295, Cham, 2016. Springer International Publishing.
[20] Mehrnoosh Askarpour, Dino Mandrioli, Matteo Rossi, and Federico
Vicentini. Modeling operator behavior in the safety analysis of collabo-
rative robotic applications. In Computer Safety, Reliability, and Security,
pages 89–104, Cham, 2017. Springer International Publishing.
[21] Jérémie Guiochet, Quynh Anh Do Hoang, Mohamed Kaaniche, and
David Powell. Model-based safety analysis of human-robot interactions:
The MIRAS walking assistance robot. In Rehabilitation Robotics (ICORR),
2013 IEEE International Conference on, pages 1–7. IEEE, 2013.
[22] Damien Martin-Guillerez, Jérémie Guiochet, David Powell, and
Christophe Zanon. A UML-based method for risk analysis of human-robot
interactions. In Proceedings of the 2nd International Workshop on
Software Engineering for Resilient Systems, pages 32–41. ACM, 2010.
[23] L. Masson, J. Guiochet, H. Waeselynck, A. Desfosses, and M. Laval.
Synthesis of safety rules for active monitoring: Application to an airport
light measurement robot. In 2017 First IEEE International Conference
on Robotic Computing (IRC), pages 263–270, April 2017.
[24] Rafia Inam, Elena Fersman, Klaus Raizer, Ricardo Souza,
Amadeu Nascimento Junior, and Alberto Hata. Safety for
Automated Warehouse exhibiting collaborative robots. In 28th
European Safety and Reliability Conference (ESREL’18), pages
2021–2028, Trondheim, Norway, June 2018. IEEE. available at
https://www.taylorfrancis.com/books/9781351174657.
[25] ISO. ISO 31000:2018 Risk management – Guidelines. International
Organization for Standardization, Geneva, Switzerland, April 2018.
[26] ISO. ISO 12100:2010 Safety of machinery – General principles for
design – Risk assessment and risk reduction. International Organization
for Standardization, Geneva, Switzerland, November 2010.
[27] E. Rohmer, S. P. N. Singh, and M. Freese. V-REP: A versatile and
scalable robot simulation framework. In 2013 IEEE/RSJ International
Conference on Intelligent Robots and Systems, Nov 2013.
[28] O. Brock and O. Khatib. High-speed navigation using the global
dynamic window approach. In Proceedings 1999 IEEE International
Conference on Robotics and Automation, volume 1, 1999.
[29] Michael Ying Yang, Wentong Liao, Hanno Ackermann, and Bodo
Rosenhahn. On support relations and semantic scene graphs. ISPRS
Journal of Photogrammetry and Remote Sensing, 131:15 – 25, 2017.
[30] John D Lee and Katrina A See. Trust in automation: Designing for
appropriate reliance. Human factors, 46(1):50–80, 2004.
... However, realizing a shared work space is bound to assuring a safe environment throughout the run time. Existing literature in this domain as [1], [2], [3] focus on hazard identification methods in simulation or based on pre-defined system ...
... For instance, the study of Bartneck et al. in [5] highlights the necessity of an interpretable value for human beings to communicate the risk level. Contributions in the domain of safe HRC as [1], [3], [2], [6] present hazard identification methods by modeling the system beforehand. While Askarpour et al. in [1] present a formal verification method build upon a logic language to detect possible risks of the system based on its model, Inam et al. search for potential hazards in [3] and generate a list of such by referring to a simulation setup. ...
... Contributions in the domain of safe HRC as [1], [3], [2], [6] present hazard identification methods by modeling the system beforehand. While Askarpour et al. in [1] present a formal verification method build upon a logic language to detect possible risks of the system based on its model, Inam et al. search for potential hazards in [3] and generate a list of such by referring to a simulation setup. Similarly, Araiza et al. in [7] present a simulation-based verification method by developing a test generation approach based on different system variables. ...
Preprint
Full-text available
We present an online and data-driven uncertainty quantification method to enable the development of safe human-robot collaboration applications. Safety and risk assessment of systems are strongly correlated with the accuracy of measurements: Distinctive parameters are often not directly accessible via known models and must therefore be measured. However, measurements generally suffer from uncertainties due to the limited performance of sensors, even unknown environmental disturbances, or humans. In this work, we quantify these measurement uncertainties by making use of conservation measures which are quantitative, system specific properties that are constant over time, space, or other state space dimensions. The key idea of our method lies in the immediate data evaluation of incoming data during run-time referring to conservation equations. In particular, we estimate violations of a-priori known, domain specific conservation properties and consider them as the consequence of measurement uncertainties. We validate our method on a use case in the context of human-robot collaboration, thereby highlighting the importance of our contribution for the successful development of safe robot systems under real-world conditions, e.g., in industrial environments. In addition, we show how obtained uncertainty values can be directly mapped on arbitrary safety limits (e.g, ISO 13849) which allows to monitor the compliance with safety standards during run-time.
... While there exist a plethora of policy/control generation techniques in a risk-sensitive setting, there exist few verification techniques -especially for arbitrary risk measures -that account for unstructured uncertainty. For example, there are numerous works detailing risk-aware verification procedures for specific systems [21,22,23]. These methods verify their systems of interest against existing widespread standards, e.g. in [23] the authors verify a multi-agent collaborative robotic system against the international standards for safe robot interactions with humans ISO 10218 [24,25]. ...
... For example, there are numerous works detailing risk-aware verification procedures for specific systems [21,22,23]. These methods verify their systems of interest against existing widespread standards, e.g. in [23] the authors verify a multi-agent collaborative robotic system against the international standards for safe robot interactions with humans ISO 10218 [24,25]. As such, the verification analyses in these works are limited to their specific systems of interest, and the notion of risk is typically defined against the corresponding standard. ...
... Assumption 4. R is as defined in equation (21) with respect to some α ∈ (0, 1] and γ 1 ∈ [0, 1). ∈ (0, 1), γ 2 ∈ [0, 1), ζ * N is the solution to (RP-PS) for a set of N -samples {y k = R(p k , γ 1 , α)} N k=1 where each p k was drawn uniformly from P , and V, F are as defined in equation (23). ...
Article
The dramatic increase of autonomous systems subject to variable environments has given rise to the pressing need to consider risk in both the synthesis and verification of policies for these systems. This paper aims to address a few problems regarding risk-aware verification and policy synthesis, by first developing a sample-based method to bound the risk measure evaluation of a random variable whose distribution is unknown. These bounds permit us to generate high-confidence verification statements for a large class of robotic systems. Second, we develop a sample-based method to determine solutions to non-convex optimization problems that outperform a large fraction of the decision space of possible solutions. Both sample-based approaches then permit us to rapidly synthesize risk-aware policies that are guaranteed to achieve a minimum level of system performance. To showcase our approach in simulation, we verify a cooperative multi-agent system and develop a risk-aware controller that outperforms the system's baseline controller. We also mention how our approach can be extended to account for any g-entropic risk measure - the subset of coherent risk measures on which we focus.
... While there exist a plethora of policy/control generation techniques in a risk-sensitive setting, there exist few verification techniques -especially for arbitrary risk measures -that account for unstructured uncertainty. For example, there are numerous works detailing risk-aware verification procedures for specific systems [21,22,23]. These methods verify their systems of interest against existing widespread standards, e.g. in [23] the authors verify a multi-agent collaborative robotic system against the international standards for safe robot interactions with humans ISO 10218 [24,25]. ...
... For example, there are numerous works detailing risk-aware verification procedures for specific systems [21,22,23]. These methods verify their systems of interest against existing widespread standards, e.g. in [23] the authors verify a multi-agent collaborative robotic system against the international standards for safe robot interactions with humans ISO 10218 [24,25]. As such, the verification analyses in these works are limited to their specific systems of interest, and the notion of risk is typically defined against the corresponding standard. ...
... Assumption 4. R is as defined in equation (21) with respect to some α ∈ (0, 1] and γ 1 ∈ [0, 1). ∈ (0, 1), γ 2 ∈ [0, 1), ζ * N is the solution to (RP-PS) for a set of N -samples {y k = R(p k , γ 1 , α)} N k=1 where each p k was drawn uniformly from P , and V, F are as defined in equation (23). ...
Preprint
Full-text available
The dramatic increase of autonomous systems subject to variable environments has given rise to the pressing need to consider risk in both the synthesis and verification of policies for these systems. This paper aims to address a few problems regarding risk-aware verification and policy synthesis, by first developing a sample-based method to bound the risk measure evaluation of a random variable whose distribution is unknown. These bounds permit us to generate high-confidence verification statements for a large class of robotic systems. Second, we develop a sample-based method to determine solutions to non-convex optimization problems that outperform a large fraction of the decision space of possible solutions. Both sample-based approaches then permit us to rapidly synthesize risk-aware policies that are guaranteed to achieve a minimum level of system performance. To showcase our approach in simulation, we verify a cooperative multi-agent system and develop a risk-aware controller that outperforms the system's baseline controller. We also mention how our approach can be extended to account for any $g$-entropic risk measure - the subset of coherent risk measures on which we focus.
... The Borda Voting method is one of the decision-making tools Ref [66] to rank an item from the most to least critical based on multiple evaluation criteria [42,67]. This study implemented the Borda method in applying the risk matrix as used by Engert and Lansdowne [68] with the following procedures: ...
... The b i is the ordering value used as the corresponding elements in the judgement Matrix A ( Figure A1) [42,61,69]. Concerning Saaty's scales, as shown in Table 5, the pairwise comparison is formed by pairwise comparison of the n number of risk factors, and the matrix elements of the quantised values are the importance of the elements i and j. ...
Article
Full-text available
In today’s era of industrial economics, warehousing is a complex process with many moving parts and is required to contribute productively to the success of supply chain management. Therefore, risk management in warehouses is a crucial point of contention to ensure sustainability with global supply chain processes to accommodate good productivity performance. Therefore, this study aims to analyse risks factors that affect warehouse productivity performance towards a systematic identification of critical factors that managers should target to sustain and grow warehouse productivity. This study utilised a traditional risk matrix framework, integrating it with the Borda method and Analytical Hierarchy Process (AHP) technique to produce an innovative risk matrix model. The results indicate that from the constructed ten warehouse operation risk categories and 32 risk factors, seven risk categories, namely operational, human, market, resource, financial, security and regulatory, including 13 risk factors were prioritised as the most critical risks impacting warehouse productivity performance. The developed risks analysis model guides warehouse managers in targeting critical risks factors that have a higher influence on warehouse productivity performance. This would be extremely helpful for companies with limited resources but seek productivity improvement and risks mitigation. Considering the increasing interest in sustainable development goals (economic, environmental, and social), arguably, this work support managers in boosting these goals within their organisation. This study is expected to benefit warehouse managers in understanding how to manage risk, handle unexpected disruptions, and improve performance in ever-changing uncertain business environments. It often has a profound effect on the productivity level of an organisation. 
This study proposes an innovative risks analysis model that aims to analyse risks, frame them, and rate them according to their importance, particularly for warehousing productivity performance.
... The amount of scenarios where robots (which are mostly resource-constrained) that need to execute complex machine learning models is increasing rapidly. An example of such a scenario is a Human-Robot Collaboration (HRC) scenario where robots need to avoid any hazardous situations through safety analysis [8,9]. Commonly, safety analysis involves complex computer vision tasks that require substantial computing power to perform the inferences in real-time. ...
Preprint
Full-text available
The number of mobile robots with constrained computing resources that need to execute complex machine learning models has been increasing during the past decade. Commonly, these robots rely on edge infrastructure accessible over wireless communication to execute heavy computational complex tasks. However, the edge might become unavailable and, consequently, oblige the execution of the tasks on the robot. This work focuses on making it possible to execute the tasks on the robots by reducing the complexity and the total number of parameters of pre-trained computer vision models. This is achieved by using model compression techniques such as Pruning and Knowledge Distillation. These compression techniques have strong theoretical and practical foundations, but their combined usage has not been widely explored in the literature. Therefore, this work especially focuses on investigating the effects of combining these two compression techniques. The results of this work reveal that up to 90% of the total number of parameters of a computer vision model can be removed without any considerable reduction in the model's accuracy.
... The development of human-computer interaction (HCI) and humanrobot interaction (HRI) is preparing the ground for the industry of the future, where humans and machines will share spaces and perform tasks in collaboration [24]. However, along with benefits such as increased productivity and efficiency, there are also emerging risks associated with direct interaction between humans and robots [25,26]. In this sense, the capability of the machines to understand human emotions through face expression recognition (FER) could enable a more effective interaction [27][28][29][30][31], but the gap between humans and machines in FER ability should be addressed. ...
Article
Facial appearance is a prominent feature in analyzing several aspects, e.g., aesthetics and the expression of emotions, and face analysis is crucial in many fields. Face analysis requires measurements that can be performed with different technologies and typically relies on landmark identification. Recently, low-cost consumer-grade 3D cameras have been introduced on the market, enabling wider application at affordable cost with nominally adequate performance. These new cameras must be thoroughly characterized metrologically to guarantee that performance. Cameras are calibrated following a standard general-purpose procedure; however, the specificity of facial measurements requires a task-based metrological characterization that includes the typical influence factors. This work outlines a methodology for the task-based metrological characterization of low-cost 3D cameras for facial analysis, consisting of: influence factor identification by ANOVA, assessment of the related uncertainty contributions, uncertainty propagation, and landmarking uncertainty estimation. The proposed methodology is then demonstrated on a state-of-the-art consumer-grade 3D camera available on the market.
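The uncertainty-propagation step in such a characterization can be sketched, under the usual GUM assumption of independent contributions, as a root-sum-of-squares combination (the contribution values below are hypothetical, not from the paper):

```python
import math

def combined_uncertainty(contributions):
    """GUM-style root-sum-of-squares combination of independent uncertainty
    contributions, given as (sensitivity coefficient, standard uncertainty)."""
    return math.sqrt(sum((c * u) ** 2 for c, u in contributions))

# Hypothetical contributions for one landmark coordinate, in mm:
# camera noise, calibration residual, landmarking repeatability.
contributions = [(1.0, 0.30), (1.0, 0.15), (1.0, 0.25)]
u_c = combined_uncertainty(contributions)
U = 2.0 * u_c  # expanded uncertainty with coverage factor k = 2
print(f"combined: {u_c:.3f} mm, expanded (k=2): {U:.3f} mm")
```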
... (accessed on 5 January 2022)). These software programs focus on documenting hazards, estimated risks, and risk mitigation methods, as illustrated in Figure 2. Newer methods from research [21][22][23][24] focus on automated methods to simplify the overall risk analysis process for collaborative robots. Other studies focus on methods to support non-experts in evaluating the effectiveness of risk mitigation measures [25]. ...
Article
Full-text available
Building on the idea of Industry 4.0, new models of the highly connected factory are possible that leverage factory-generated data to introduce cost-effective automation and involve the human worker in creating higher added value. Within this context, collaborative robots are becoming more common in industry. However, promises regarding flexibility cannot be satisfied due to the challenging process of ensuring human safety. This is because current regulations and standards require updates to the risk assessment for every change to the robotic application, including the parts involved, the robotic components, and the type of interaction within the workspace. This work presents a novel risk analysis software tool that was developed to support change management for adaptive collaborative robotic systems in the connected factory model. The main innovation of this work is the tool’s ability to automatically identify where changes have been made to components or processes within a specific application through its integration with a connected factory architecture. This allows a safety expert to easily see where updates to the risk assessment are required, helping them to maintain conformity with the CE marking process despite frequent changes. To evaluate the benefits of this tool, a user study was performed with an exemplary use case from the SHOP4CF project. The results show that this newly developed technology for risk assessment has better usability and fewer omission errors when compared to existing methods. Therefore, this study underlines the need for tools that can help safety engineers cope with changes in flexible robotics applications and reduce omission errors.
... The application of mobile robots in real-world environments has seen significant growth in recent years. This is particularly true in more structured environments such as on-road autonomous vehicles (Litman, 2020) and indoor service robots in shared pedestrian environments such as retail stores and warehouses (Inam et al., 2018). However, the same levels of adoption have not yet been achieved in less structured environments, such as those seen in agriculture. ...
Article
Full-text available
Achieving long-term autonomy for mobile robots operating in real-world, unstructured environments, such as farms, remains a significant challenge. Such tasks are made increasingly complex when undertaken in the presence of moving humans or livestock. These dynamic environments require a robot to be able to adapt its immediate plans, accounting for the state of nearby agents and possible responses they may have to the robot's actions. Additionally, in order to achieve longer-term goals, consideration of the limited on-board resources available to the robot is required, especially for extended missions, such as weeding agricultural fields. To achieve efficient long-term autonomy, it is thus crucial to understand the impact that dynamic updates to an energy-efficient plan might have on resource usage whilst navigating through crowds or herds. To address these challenges, we present a hierarchical planning framework that integrates an online, dynamic path-planner with a longer-term, offline, objective-based planner. This framework acts to achieve long-term autonomy through awareness of both dynamic responses of agents to a robot's motion and the limited resources available. This paper details the hierarchical approach and its integration on a robotic platform, including a comprehensive description of the planning framework and associated perception modules. The approach is evaluated in real-world trials on farms, requiring both consideration of limited battery capacity and the presence of nearby moving agents. These trials additionally demonstrate the ability of the framework to adapt resource use through variation of the dynamic planner, allowing adaptive behaviour in changing environments. A summary video is available at https://youtu.be/DGVTrYwJ304.
Article
We present a framework for identifying, communicating, and addressing risk in shared-autonomy mobile manipulator applications. This framework is centered on the capacity of the mobile manipulator to sense its environment, interpret complex and cluttered scenes, and estimate the probability of actions and configuration states that may result in task failures, such as collision (i.e., identifying “risk”). If the threshold for acceptable risk is exceeded, a remote operator is notified and presented with timely, actionable information in which the person can quickly assess the situation and provide guidance for the robot. This framework is demonstrated with a use case in which a mobile manipulator performs machine tending and material handling tasks.
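The escalation logic described above, where exceeding an acceptable-risk threshold hands control to a remote operator, can be sketched as a simple risk gate (the threshold value and function name are illustrative assumptions, not the paper's):

```python
def assess_action(collision_prob: float, risk_threshold: float = 0.2) -> str:
    """Gate a planned robot action on its estimated collision probability.
    Above the acceptable-risk threshold, the action is not executed
    autonomously; instead a remote operator is asked for guidance.
    The 0.2 threshold is an illustrative assumption."""
    return "proceed" if collision_prob <= risk_threshold else "notify_operator"

print(assess_action(0.05))  # proceed
print(assess_action(0.60))  # notify_operator
```

In practice the probability estimate would come from the robot's perception stack (scene interpretation, configuration-state estimation) rather than being supplied directly.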
Conference Paper
Full-text available
Safety-critical autonomous systems, like robots working in collaboration with humans, are about to be used in diverse environments such as industry, public spaces, and hospitals. These systems operate in complex and dynamic environments and are exposed to a wide variety of hazards. Several techniques may be used to ensure that their misbehavior cannot cause unacceptable damage or harm. One of them is active safety monitoring: a safety monitor is a component responsible for maintaining the system in a safe state despite the occurrence of hazardous situations. In this paper, we study the introduction of safety monitoring into an airport light-measurement robot. The specification of the monitor follows a principled approach that starts with a hazard analysis and ends with a set of safety rules synthesized using formal methods. This study illustrates the benefits of the approach and shows the impact of safety on the development of an autonomous system.
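One way to sketch such a safety monitor is as a set of rules, each pairing a hazard-detection condition with a safety intervention (the rules, thresholds, and state fields below are illustrative assumptions, not the synthesized rules from the paper):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SafetyRule:
    name: str
    condition: Callable[[Dict], bool]  # detects a hazardous state
    intervention: str                  # action intended to restore safety

def monitor(state: Dict, rules: List[SafetyRule]) -> List[str]:
    """Return the interventions triggered by the current system state."""
    return [r.intervention for r in rules if r.condition(state)]

# Hypothetical rules for a mobile robot sharing space with humans.
rules = [
    SafetyRule("human_too_close",
               lambda s: s["human_distance_m"] < 0.5, "emergency_stop"),
    SafetyRule("speed_near_human",
               lambda s: s["human_distance_m"] < 2.0 and s["speed_mps"] > 0.25,
               "reduce_speed"),
]
print(monitor({"human_distance_m": 1.2, "speed_mps": 0.8}, rules))
```

A real monitor would evaluate such rules continuously on sensor data and guarantee, e.g. via formal synthesis as in the paper, that the rule set covers every identified hazard.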
Article
Full-text available
So far, autonomous order picking (commissioning) systems have not been able to meet the stringent demands regarding speed, safety and accuracy of real-world warehouse automation, resulting in reliance on human workers. In this work we target the next step in autonomous robot commissioning: automating the currently manual order picking procedure. To this end, we investigate the use case of autonomous picking and palletizing with a dedicated research platform and discuss lessons learned during testing in simplified warehouse settings. The main theoretical contribution is a novel grasp representation scheme which allows for redundancy in the gripper pose placement. This redundancy is exploited by a local, prioritized kinematic controller which generates reactive manipulator motions on-the-fly. We validated our grasping approach by means of a large set of experiments, which yielded an average grasp acquisition time of 23.5 s at a success rate of 94.7%. Our system is able to autonomously carry out simple order picking tasks in a human-safe manner, and as such serves as an initial step towards future commercial-scale in-house logistics automation solutions.
Article
Full-text available
New safety-critical systems are about to appear in our everyday life: advanced robots able to interact with humans and perform tasks at home, in hospitals, or at work. A hazardous behavior of these systems, induced by failures or extreme environment conditions, may lead to catastrophic consequences. Well-known risk analysis methods used in other critical domains (e.g., avionics, nuclear, medical, transportation) have to be extended or adapted due to the non-deterministic behavior of these systems, which evolve in unstructured environments. One major challenge is thus to develop methods that can be applied at the very beginning of the development process to identify hazards induced by robot tasks and their interactions with humans. In this paper we present a method based on an adaptation of a hazard identification technique, HAZOP (Hazard Operability), coupled with a system description notation, UML (Unified Modeling Language). This systematic approach has been applied successfully in research projects and is now applied by robot manufacturers. Some results of those studies are presented and discussed to explain the benefits and limits of our method.
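The systematic core of HAZOP, crossing guide words with system parameters to enumerate candidate deviations for review, can be sketched as follows (the guide words are standard HAZOP vocabulary; the parameter list is an illustrative assumption for a mobile-robot task):

```python
from itertools import product

# Standard HAZOP guide words, applied to hypothetical robot-task parameters.
GUIDE_WORDS = ["no", "more", "less", "reverse", "other than"]
PARAMETERS = ["speed", "direction", "payload"]

# Each (guide word, parameter) pair is a candidate deviation that an
# analyst then examines for causes, consequences, and safeguards.
deviations = [f"{g} {p}" for p, g in product(PARAMETERS, GUIDE_WORDS)]

print(len(deviations))  # 5 guide words x 3 parameters = 15 candidates
for d in deviations[:3]:
    print(d)
```

The method's value lies in this exhaustive enumeration: no deviation is skipped merely because it seems implausible at first glance.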
Article
After many years of rigid, conventional production procedures, industrial manufacturing is going through a process of change towards flexible and intelligent manufacturing, the so-called Industry 4.0. In this context, human-robot collaboration has an important role in smart factories, since it contributes to the achievement of higher productivity and greater efficiency. However, this evolution means breaking with established safety procedures, as the separation of workspaces between robot and human is removed. These changes have been reflected in safety standards related to industrial robotics over the last decade, and have led to a wide field of research focusing on the prevention of human-robot impacts and/or the minimisation of related risks or their consequences. This article presents a review of the main safety systems that have been proposed and applied in industrial robotic environments to contribute to the achievement of safe collaborative human-robot work. Additionally, a review is provided of the current regulations, along with the new concepts that have been introduced in them. The discussion presented in this work includes multi-disciplinary approaches such as: techniques for the estimation and evaluation of injuries in human-robot collisions; mechanical and software devices designed to minimise the consequences of human-robot impact; impact detection systems; and strategies to prevent collisions or minimise their consequences when they occur.
Article
In modern manufacturing plants, automation is widely adopted in the production phases, which leads to a high level of productivity and efficiency. However, the same level of automation is generally not achieved in logistics, typically performed by human operators and manually driven vehicles. In fact, even though automated guided vehicles (AGVs) have been used for a few decades for goods transportation in industrial environments [1], they do not yet represent a widespread solution and are typically applied only in specific scenarios.
Conference Paper
Human-Robot Collaboration is increasingly prominent in people’s lives and in the industrial domain, for example in manufacturing applications. The close proximity and frequent physical contacts between humans and robots in such applications make guaranteeing suitable levels of safety for human operators of the utmost importance. Formal verification techniques can help in this regard through the exhaustive exploration of system models, which can identify unwanted situations early in the development process. This work extends our SAFER-HRC methodology with a rich non-deterministic formal model of operator behaviors, which captures the hazardous situations resulting from human errors. The model allows safety engineers to refine their designs until all plausible erroneous behaviors are considered and mitigated.
Article
The rapid development of robots and autonomous vehicles requires semantic information about the surrounding scene to decide upon the correct action or to complete particular tasks. Scene understanding provides the necessary semantic interpretation through semantic scene graphs. For this task, so-called support relationships, which describe the contextual relations between parts of the scene such as floor, wall, and table, need to be known. This paper presents a novel approach to infer such relations and then construct the scene graph. Support relations are estimated by considering important, previously ignored information: the physical stability of the scene and prior support knowledge between object classes. In contrast to previous methods for extracting support relations, the proposed approach generates more accurate results and does not require a pixel-wise semantic labeling of the scene. The semantic scene graph which describes all the contextual relations within the scene is constructed from this information. To evaluate the accuracy of these graphs, multiple different measures are formulated. The proposed algorithms are evaluated on the NYUv2 database. The results demonstrate that the inferred support relations are more precise than the state of the art. The scene graphs are compared against ground-truth graphs.
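A minimal sketch of such a scene graph, with directed "supports" edges between scene parts, might look as follows (the objects and the dictionary-based representation are illustrative assumptions, not the paper's data structures):

```python
class SceneGraph:
    """Directed graph of support relations: supporter -> supported objects."""

    def __init__(self):
        self.supports = {}  # e.g. {"floor": ["table"], "table": ["cup"]}

    def add_support(self, supporter, supported):
        self.supports.setdefault(supporter, []).append(supported)

    def supported_by(self, obj):
        """Return the supporter of `obj`, or None for a root object."""
        for supporter, supported in self.supports.items():
            if obj in supported:
                return supporter
        return None

g = SceneGraph()
g.add_support("floor", "table")
g.add_support("table", "cup")
print(g.supported_by("cup"))    # table
print(g.supported_by("floor"))  # None
```

Queries over such a graph give a robot contextual cues, e.g. that removing the table would also displace the cup it supports.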
Conference Paper
Whereas in classic robotic applications there is a clear segregation between robots and operators, novel robotic and cyber-physical systems have evolved in size and functionality to include collaboration with human operators within common workspaces. This new application field, often referred to as Human-Robot Collaboration (HRC), raises new challenges in guaranteeing system safety, due to the presence of operators. We present an innovative methodology, called SAFER-HRC, centered around our logic language TRIO and the companion bounded satisfiability checker Zot, to assess the safety risks in an HRC application. The methodology starts from a generic modular model and customizes it for the target system; it then analyses hazards according to known standards to study the safety of the collaborative environment.