Conference Paper

AI-based Safety Analysis for Collaborative Mobile Robots

... Conventionally, as addressed in the PEGASUS project, a list of relevant scenarios is defined and tested [11]. As Hata et al. [12] propose, fuzzy methods combined with deep learning-based semantic segmentation help to interpolate between scenarios. However, when trying to make a claim for every possible scenario, strong assumptions are needed, similar to the proving approach. ...
Conference Paper
The emerging autonomous mobile robots promise a new level of efficiency and flexibility. However, because these types of systems operate in the same space as humans, mobile robots must cope with dynamic changes and heterogeneously structured environments. To ensure safety despite these challenges, new approaches are needed that model risk at runtime. This risk depends on the situation; it is therefore a situational risk. In this paper, we propose a new methodology to model this situational risk based on multi-agent adversarial reinforcement learning. In this methodology, two competing groups of reinforcement learning agents, namely the protagonists and the adversaries, fight against each other in simulation. The adversaries represent the disruptive and destabilizing factors, while the protagonists try to compensate for them and make the system robust. The situational risk can then be derived from the outcome of the simulated struggle. Risk modeling thereby differentiates the four steps of intelligent information processing: sense, analyze, process, and execute. To find the appropriate adversaries and protagonists for each of these steps, the methodology builds on Systems Theoretic Process Analysis (STPA). Using STPA, we identify critical signals that lead to losses when a disturbance occurs under certain conditions or in certain situations. At this point, the challenge of managing complexity arises. We address this issue by using training effort as an evaluation metric. Through statistical analysis of the identified signals, we derive a procedure for defining action spaces and rewards for the agents in question. We validate the methodology using the example of a Robotino 3 Premium from Festo, an autonomous mobile robot.
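The protagonist-versus-adversary struggle described above can be illustrated with a minimal sketch, where the adversary injects disturbances into a critical signal, the protagonist compensates, and the situational risk is read off as the empirical loss rate. All names, dynamics, and parameter values here are illustrative assumptions, not the authors' implementation:

```python
import random

def run_episode(protagonist_gain, adversary_strength, rng, steps=20):
    """One simulated struggle: the adversary disturbs a critical signal,
    the protagonist tries to drive the resulting error back to zero.
    Returns True if a loss occurs (the signal leaves the safe region)."""
    error = 0.0
    for _ in range(steps):
        # Adversarial disturbance (bounded, here biased to push outward).
        error += adversary_strength * (rng.uniform(-1.0, 1.0) + 1.0)
        if abs(error) > 1.0:          # critical signal left the safe region
            return True
        error -= protagonist_gain * error   # protagonist compensates
    return False

def situational_risk(protagonist_gain, adversary_strength, episodes=500, seed=0):
    """Situational risk derived from the outcome of the simulated struggle:
    the empirical probability that the adversary causes a loss."""
    rng = random.Random(seed)
    losses = sum(run_episode(protagonist_gain, adversary_strength, rng)
                 for _ in range(episodes))
    return losses / episodes

# A harsher situation (stronger adversary) yields a higher risk estimate.
low = situational_risk(protagonist_gain=0.8, adversary_strength=0.1)
high = situational_risk(protagonist_gain=0.8, adversary_strength=0.9)
```

In the paper's methodology the two sides are trained reinforcement learning agents rather than fixed policies, and the action spaces and rewards are derived from the STPA-identified signals; the sketch only shows how a risk value falls out of the simulated contest.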
... A prime example of this is the execution of heavy AI-based algorithms and the processing of large amounts of data in the form of videos or images. Furthermore, robots are usually resource-constrained and thus need to be complemented with edge/cloud resources to meet real-time requirements (on both communication and computation) [23]. ...
Conference Paper
Full-text available
Industry is moving towards advanced Cyber-Physical Systems (CPS), with trends in smartness, automation, connectivity and collaboration. We examine the drivers and requirements for the use of edge computing in critical industrial applications. Our purpose is to provide a better understanding of industrial needs and to initiate a discussion on what role edge computing could take, complementing current industrial and embedded systems, and the cloud. Four domains are chosen for analysis with representative use cases: manufacturing, transportation, the energy sector and networked applications in the defense domain. We further discuss challenges, open issues and suggested directions that are needed to pave the way for the use of edge computing in industrial CPS.
... Moon et al. [127] studied the generation of natural-language descriptions from environment images for human-robot communication, leveraging graph convolutional networks (GCN) to extract local features from a 3D semantic graph map and an LSTM to generate the scene description. Hata et al. [128] reported a more specific application of scene graphs for safe HRC in a warehouse navigation case, in which Mask R-CNN is utilized to segment scene objects from images and the extracted object information is subsequently encoded into a scene graph for fuzzy logic-based risk management, while Riaz et al. [129] considered a similar warehouse scenario for HRC safety analysis, leveraging the proposed MSDN (Multi-level Scene Description Neural Networks) to generate scene graphs and region captions. Being a compact and efficient representation of the environment, the scene graph is widely adopted in robotic applications, but its graph-based structure also undermines the ability to capture geometric relations between objects. ...
Article
Full-text available
Recently, human-robot collaboration (HRC) has emerged as a promising paradigm for mass personalization in manufacturing, owing to its potential to fully exploit the strengths of human flexibility and robot precision. To achieve better collaboration, robots should be capable of holistically perceiving and parsing the information of a working scene in real time, so as to plan proactively and act accordingly. Although considerable attention has been paid to human cognition in existing works on HRC, a holistic consideration of the other crucial elements of a working scene is lacking, especially when taking a further step towards proactive HRC. Aiming to fill this gap, this paper provides a systematic review of computer vision-based holistic scene understanding in HRC scenarios, which mainly takes into account the cognition of objects, humans, and the environment, along with visual reasoning to gather and compile visual information into semantic knowledge for subsequent robot decision-making and proactive collaboration. Finally, challenges and potential research directions that could be largely facilitated by enhanced holistic perception techniques are also discussed.
... Hata et al. propose a combined approach where neural networks extract information from a simulation model, which is then further evaluated with a fuzzy-logic system to provide a risk index (Hata et al., 2019). ...
Article
According to the standard ISO 10218–2, industrial robot systems must be subjected to a risk assessment prior to commissioning. In current industrial practice, risk assessments are conducted on the basis of experience, expert knowledge, and simple tools such as checklists. However, the recent trend towards human-robot collaboration (HRC) significantly increases the complexity of robot systems and makes risk assessment more challenging. In response to this challenge, the scientific community has proposed various new tools and methods to support risk assessment of HRC applications. So far, only a few of these novel approaches have found their way into industrial practice. In this paper, we review literature on novel approaches to HRC risk assessment. Furthermore, we evaluate interviews with professionals from the field of HRC to explore the needs of industrial practitioners. We compare our findings from the literature review and the interviews, and discuss which challenges need to be addressed to successfully transfer novel approaches into industrial practice.
Article
Full-text available
Safety in human-robot collaborative manufacturing is ensured by various types of safety systems that help to avoid collisions and limit impact forces to an acceptable level in the case of a collision. Recently, active vision-based safety systems have gained momentum due to their affordable price (e.g. off-the-shelf RGB or depth cameras and projectors), flexible installation and easy tailoring. However, these systems have not been widely adopted or standardized, and only a limited number of vision-based commercial products can be found. In this work, we review recent methods in vision-based technologies applied in human-robot interaction and/or collaboration scenarios, and provide a technology analysis of these. The aim of this review is to provide a comparative analysis of the current readiness level of vision-based safety systems with respect to industrial requirements and to highlight the important components that are missing. The factors analysed are use-case flexibility, system speed and level of collaboration.
Conference Paper
Full-text available
Obstacle detection and avoidance plays a very important role in mobile robot navigation, space exploration and industrial automation, ensuring the safety of the robot. In this paper we propose a real-time obstacle detection and avoidance algorithm using a passive stereoscopic Kinect camera. The basic idea behind the obstacle detection method is to compute a depth map of the image captured by the Kinect camera and map it to real-world coordinates. Unlike classical sensors, the camera can be used as a non-contact sensor for detecting obstacles. Obstacle detection and avoidance is carried out in both static and dynamic environments, i.e. obstacles in the environment can be stationary or moving. The proposed system is tested in an indoor environment on a Raspberry Pi 2 with a 640×480 pixel image size and a frame rate of 30 fps. By using the camera system as a sensor, the need for a complex sensor arrangement can be avoided. From the experimental results we conclude that the proposed system is simple, robust and efficient.
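The core of such a depth-map approach can be sketched in a few lines: threshold the per-pixel depths against a safety range and flag an obstacle when enough pixels are too close. The frame sizes match the abstract, but the threshold, noise-suppression count, and synthetic data are illustrative assumptions:

```python
import numpy as np

def detect_obstacle(depth_m, max_range_m=1.0, min_blob_px=50):
    """Flag an obstacle if enough pixels report a depth below the safety range.

    depth_m      -- HxW array of per-pixel depths in metres (0 = no return)
    max_range_m  -- distance below which a surface counts as an obstacle
    min_blob_px  -- minimum number of near pixels, to suppress sensor noise
    """
    valid = depth_m > 0                      # discard pixels with no depth return
    near = valid & (depth_m < max_range_m)   # pixels closer than the safety range
    return int(near.sum()) >= min_blob_px

# Synthetic 480x640 frames: open floor at 3 m, then a box-shaped obstacle at 0.6 m.
clear = np.full((480, 640), 3.0)
blocked = clear.copy()
blocked[200:280, 300:380] = 0.6              # 80x80 px region inside the safety range

path_clear = detect_obstacle(clear)          # no near pixels -> no obstacle
path_blocked = detect_obstacle(blocked)      # 6400 near pixels -> obstacle
```

A real pipeline would additionally cluster the near pixels into blobs and project them into robot coordinates before planning an avoidance maneuver.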
Article
Full-text available
Developing advanced robotics applications now faces the issue of safety for users, the environment, and the robot itself, which is a main limitation for their deployment in real life. This safety could be justified by the use of dependability techniques, as is done in other safety-critical applications. However, due to specific robotic properties (such as continuous human-robot physical interaction or a non-deterministic decisional layer), many techniques need to be adapted or revised. This paper reviews the main issues, research work and challenges in the field of safety-critical robots, linking up dependability and robotics concepts.
Article
Full-text available
Human-robot collaboration is a new trend in the field of industrial and service robotics and part of the Industry 4.0 strategy. It is known under the German acronym MRK (Mensch-Roboter-Kollaboration) or the English acronym HRC (Human-Robot Collaboration). The main goal of this innovative strategy is to build an environment for safe collaboration between humans and robots. Between manual manufacture and fully automated production there is an area where a human worker comes into contact with the machine. This area has many limitations due to safety restrictions: the machine is allowed to operate automatically only if the operating personnel are outside its workspace. Collaborative robotics establishes new opportunities for cooperation between humans and machines. The worker shares the workspace with the robot, which assists with non-ergonomic, repetitive, uncomfortable or even dangerous operations. The robot monitors its movements using advanced sensors in order not to limit, and above all not to endanger, its human colleague. In this article, the emphasis is placed on the safety of collaborative robots and the readiness of this technology for use in production.
Article
Full-text available
New safety critical systems are about to appear in our everyday life: advanced robots able to interact with humans and perform tasks at home, in hospitals, or at work. A hazardous behavior of those systems, induced by failures or extreme environment conditions, may lead to catastrophic consequences. Well-known risk analysis methods used in other critical domains (e.g., avionics, nuclear, medical, transportation), have to be extended or adapted due to the non-deterministic behavior of those systems, evolving in unstructured environments. One major challenge is thus to develop methods that can be applied at the very beginning of the development process, to identify hazards induced by robot tasks and their interactions with humans. In this paper we present a method which is based on an adaptation of a hazard identification technique, HAZOP (Hazard Operability), coupled with a system description notation, UML (Unified Modeling Language). This systematic approach has been applied successfully in research projects, and is now applied by robot manufacturers. Some results of those studies are presented and discussed to explain the benefits and limits of our method.
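The mechanical core of a HAZOP-style analysis is combining guide words with system elements to generate candidate deviations, which the analyst then interprets or discards. A minimal sketch of that generation step, with an illustrative (hypothetical) UML message and a subset of the standard guide words:

```python
# A subset of HAZOP guide words, here applied to an attribute of a UML
# interaction message to generate candidate deviations for the analyst.
GUIDE_WORDS = ["no", "more", "less", "other than", "early", "late"]

def deviations(entity, attribute):
    """Combine each guide word with an entity/attribute pair into a
    deviation phrase; interpreting each phrase (hazardous? plausible?)
    remains a manual, expert task."""
    return [f"{gw} {attribute} of {entity}" for gw in GUIDE_WORDS]

rows = deviations("message 'moveTo(goal)'", "value")
```

The systematic part is the exhaustive enumeration over all model elements; the safety insight still comes from the analyst assessing each generated deviation in context.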
Article
Full-text available
A failsafe control strategy is presented for online safety certification of robot movements in a collaborative workspace with humans. This approach plans, predicts and uses formal guarantees on reachable sets of a robot arm and a human obstacle to verify the safety and feasibility of a trajectory in real time. The robots considered are serial-link robots under computed-torque control. We drastically reduce the computation time of our novel verification procedure through precomputation of non-linear terms and the use of interval arithmetic, as well as representation of reachable sets by zonotopes, which scale easily to high dimensions and are easy to convert between joint space and Cartesian space. The approach is implemented in simulation to show that real-time operation is computationally within reach.
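The safety check behind such reachability-based verification can be sketched in one dimension with plain intervals (the simplest zonotopes): over-approximate where robot and human can be within the planning horizon, then require the two sets to be disjoint. The dynamics, bounds, and horizon below are illustrative assumptions, not the paper's computed-torque model:

```python
def reach_interval(q0, v_lo, v_hi, horizon):
    """Over-approximate the positions reachable within `horizon` seconds
    from q0, given bounded velocity [v_lo, v_hi] (interval arithmetic)."""
    return (q0 + v_lo * horizon, q0 + v_hi * horizon)

def intervals_disjoint(a, b):
    """Safety verdict: the trajectory is certified only if the robot's
    reachable set cannot intersect the human's reachable set."""
    return a[1] < b[0] or b[1] < a[0]

human = reach_interval(q0=1.0, v_lo=-2.0, v_hi=2.0, horizon=0.2)

# Short horizon: the robot's reachable set stays clear of the human's.
robot_short = reach_interval(q0=0.0, v_lo=-0.5, v_hi=0.5, horizon=0.2)
safe_short = intervals_disjoint(robot_short, human)

# Longer horizon: the over-approximations overlap, so safety cannot
# be certified and a failsafe maneuver would be triggered.
robot_long = reach_interval(q0=0.0, v_lo=-0.5, v_hi=0.5, horizon=2.0)
safe_long = intervals_disjoint(robot_long, human)
```

Zonotopes generalize this to high-dimensional joint and Cartesian spaces while keeping set operations cheap, which is what makes the real-time verification in the paper feasible.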
Article
Full-text available
Ensuring that safety requirements are respected is a critical issue for the deployment of hazardous and complex reactive systems. We consider a separate safety channel, called a monitor, that is able to partially observe the system and to trigger safety-ensuring actuations. We address the issue of correctly specifying such a monitor with respect to safety and liveness requirements. Two safety requirement synthesis programs are presented and compared. Based on a formal model of the system and its hazards, they compute a monitor behavior that ensures system safety without unduly compromising system liveness. The first program uses the model-checker NuSMV to check safety requirements. These requirements are automatically generated by a branch-and-bound algorithm. Based on a game theory approach, the second program uses the TIGA extension of UPPAAL to synthesize safety requirements, starting from an appropriately reformulated representation of the problem.
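The kind of monitor behavior such synthesis produces can be illustrated as a small observation-to-actuation map that intervenes only when a hazard is imminent, so that safety is ensured without unduly sacrificing liveness. The thresholds and action names below are illustrative assumptions; in the paper they would be synthesized from a formal model rather than hand-written:

```python
def monitor(speed, human_distance, stop_dist=0.5, slow_dist=1.5):
    """A safety channel mapping (partially) observed system state to a
    safety-ensuring actuation. It intervenes no more than necessary:
    normal operation (liveness) continues whenever safety permits."""
    if human_distance < stop_dist:
        return "emergency_stop"       # safety requirement dominates
    if human_distance < slow_dist and speed > 0.3:
        return "limit_speed"          # pre-emptive, liveness-preserving action
    return "no_intervention"
```

The synthesis problem in the paper is exactly to derive such rules automatically (via model checking with NuSMV, or game solving with UPPAAL-TIGA) so that they are provably safe and minimally restrictive.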
Article
Full-text available
It is essential for robots working in close proximity to people to be both safe and trustworthy. We present a case study on formal verification for a high-level planner/scheduler for the Care-O-bot, an autonomous personal robotic assistant. We describe how a model of the Care-O-bot and its environment was developed using Brahms, a multiagent workflow language. Formal verification was then carried out by automatically translating this model to the input language of an existing model checker. Four sample properties based on system requirements were verified. We then refined the environment model three times to increase its accuracy and the persuasiveness of the formal verification results. The first refinement uses a user activity log based on real-life experiments, but is deterministic. The second refinement uses the activities from the user activity log nondeterministically. The third refinement uses “conjoined activities” based on the observation that many user activities can overlap. The four sample properties were verified for each refinement of the environment model. Finally, we discuss the approach of environment model refinement with respect to this case study.
Article
We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without tricks, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code will be made available.
Article
Rapid development of robots and autonomous vehicles requires semantic information about the surrounding scene to decide upon the correct action or to be able to complete particular tasks. Scene understanding provides the necessary semantic interpretation via semantic scene graphs. For this task, so-called support relationships, which describe the contextual relations between parts of the scene such as floor, wall and table, need to be known. This paper presents a novel approach to infer such relations and then to construct the scene graph. Support relations are estimated by considering important, previously ignored information: the physical stability and the prior support knowledge between object classes. In contrast to previous methods for extracting support relations, the proposed approach generates more accurate results and does not require a pixel-wise semantic labeling of the scene. The semantic scene graph, which describes all the contextual relations within the scene, is constructed using this information. To evaluate the accuracy of these graphs, multiple different measures are formulated. The proposed algorithms are evaluated using the NYUv2 database. The results demonstrate that the inferred support relations are more precise than the state of the art. The scene graphs are compared against ground-truth graphs.
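The idea of turning geometric stability cues into a scene graph of support relations can be sketched with a toy heuristic: an object supports another if the upper object's base rests on the lower one's top surface and their footprints overlap. The object data and tolerance are illustrative assumptions; the paper's inference additionally uses physical stability reasoning and class-level support priors:

```python
def supports(lower, upper, tol=0.05):
    """Heuristic stability test: `upper` rests on `lower` if its base sits
    on the lower object's top surface and their footprints overlap."""
    on_top = abs(upper["z_min"] - lower["z_max"]) < tol
    overlap = upper["x"][0] < lower["x"][1] and lower["x"][0] < upper["x"][1]
    return on_top and overlap

# Toy scene: a cup on a table, the table on the floor (1D footprints).
objects = {
    "floor": {"z_min": -0.1, "z_max": 0.0, "x": (0.0, 5.0)},
    "table": {"z_min": 0.0, "z_max": 0.8, "x": (1.0, 2.0)},
    "cup":   {"z_min": 0.8, "z_max": 0.9, "x": (1.4, 1.5)},
}

# Scene graph: directed support edges (supporter -> supported).
edges = [(a, b) for a in objects for b in objects
         if a != b and supports(objects[a], objects[b])]
```

The resulting edge list is the skeleton of a semantic scene graph; the paper's contribution is inferring such edges reliably from real RGB-D data without pixel-wise labels.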
Conference Paper
Whereas in classic robotic applications there is a clear segregation between robots and operators, novel robotic and cyber-physical systems have evolved in size and functionality to include the collaboration with human operators within common workspaces. This new application field, often referred to as Human-Robot Collaboration (HRC), raises new challenges to guarantee system safety, due to the presence of operators. We present an innovative methodology, called SAFER-HRC, centered around our logic language TRIO and the companion bounded satisfiability checker Zot, to assess the safety risks in an HRC application. The methodology starts from a generic modular model and customizes it for the target system; it then analyses hazards according to known standards, to study the safety of the collaborative environment.
Conference Paper
Allowing humans and robots to interact in close proximity to each other has great potential for increasing the effectiveness of human-robot teams across a large variety of domains. However, as we move toward enabling humans and robots to interact at ever-decreasing distances of separation, effective safety technologies must also be developed. While new, inherently human-safe robot designs have been established, millions of industrial robots are already deployed worldwide, which makes it attractive to develop technologies that can turn these standard industrial robots into human-safe platforms. In this work, we present a real-time safety system capable of allowing safe human-robot interaction at very low distances of separation, without the need for robot hardware modification or replacement. By leveraging known robot joint angle values and accurate measurements of human positioning in the workspace, we can achieve precise robot speed adjustment by utilizing real-time measurements of separation distance. This, in turn, allows for collision prevention in a manner comfortable for the human user. We demonstrate our system achieves latencies below 9.64 ms with 95% probability, 11.10 ms with 99% probability, and 14.08 ms with 99.99% probability, resulting in robust real-time performance.
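The speed-adjustment step described above can be sketched as a simple scaling law: stop inside a protective distance, run at full speed beyond a comfort distance, and ramp linearly in between. The threshold values are illustrative assumptions, not the calibrated parameters of the presented system:

```python
def allowed_speed(separation_m, stop_dist=0.3, full_speed_dist=2.0, v_max=1.0):
    """Scale the commanded robot speed with the measured human-robot
    separation distance: full stop inside the protective distance,
    full speed beyond `full_speed_dist`, linear ramp in between."""
    if separation_m <= stop_dist:
        return 0.0
    if separation_m >= full_speed_dist:
        return v_max
    return v_max * (separation_m - stop_dist) / (full_speed_dist - stop_dist)
```

In the actual system this computation runs in a tight loop against real-time measurements of human position and known joint angles, which is why the reported sub-15 ms latencies matter: the scaling is only protective if it reacts faster than the human can close the gap.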
Conference Paper
Safety is an important consideration in human-robot interactions (HRI). Robots can perform powerful movements that can cause hazards to humans surrounding them. To prevent accidents, it is important to identify sources of potential harm, to determine which of the persons in the robot's vicinity may be in greatest peril and to assess the type of injuries the robot may cause to this person. This survey starts with a review of the safety issues in industrial settings, where robots manipulate dangerous tools and move with extreme rapidity and force. We then move to covering issues related to the growing numbers of autonomous mobile robots that operate in crowded (human-inhabited) environments. We discuss the potential benefits of fully autonomous cars on safety on roads and for pedestrians. Lastly, we cover safety issues related to assistive robots.
Article
Neuro-fuzzy systems have recently gained a lot of interest in research and application. Neuro-fuzzy models as we understand them are fuzzy systems that use local learning strategies to learn fuzzy sets and fuzzy rules. Neuro-fuzzy techniques have been developed to support the development of, e.g., fuzzy controllers and fuzzy classifiers. In this paper we discuss a learning method for fuzzy classification rules. The learning algorithm is a simple heuristic that is able to derive fuzzy rules from a set of training data very quickly, and tunes them by modifying the parameters of the membership functions. Our approach is based on NEFCLASS, a neuro-fuzzy model for pattern classification. We also discuss some results obtained with our software implementation of NEFCLASS, which is freely available on the Internet.
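The inference side of such fuzzy classification rules can be sketched with triangular membership functions and winner-takes-all rule activation; the learning step (tuning the membership parameters from training data), which is NEFCLASS's actual contribution, is omitted, and the fuzzy sets and classes below are illustrative assumptions:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Two fuzzy classification rules over one input feature:
#   IF x IS low  THEN class A
#   IF x IS high THEN class B
RULES = [
    (lambda x: tri(x, -1.0, 0.0, 1.0), "A"),   # "low" fuzzy set
    (lambda x: tri(x, 0.0, 1.0, 2.0), "B"),    # "high" fuzzy set
]

def classify(x):
    """Winner-takes-all: return the class of the most activated rule."""
    activation, label = max((mf(x), cls) for mf, cls in RULES)
    return label
```

NEFCLASS would additionally shift and reshape the (a, b, c) parameters via its local learning heuristic until the rule base classifies the training data well.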
ISO/TS 15066:2016 Robots and robotic devices – Collaborative robots
  • ISO
Robot guidance using machine vision techniques in industrial environments: A comparative review
  • L. Pérez