Close human-robot interaction (HRI), especially in industrial scenarios, has been widely investigated for the advantages of combining human and robot skills. For effective HRI, the validity of currently available human-machine communication media and tools should be questioned, and new communication modalities should be explored. This article proposes a modular architecture that allows human operators to interact with robots through different modalities. In particular, we implemented the architecture to handle gestural and touchscreen input, respectively, using a smartwatch and a tablet. Finally, we performed a comparative user experience study between these two modalities.
This paper presents an Augmented Reality (AR) software suite aimed at supporting the operator's interaction with flexible mobile robot workers. The developed AR suite, integrated with the shopfloor's Digital Twin, provides operators with: a) virtual interfaces to easily program the mobile robots, b) information on the process and the production status, c) safety awareness by superimposing the active safety zones around the robots, and d) instructions for recovering the system from unexpected events. The application's purpose is to simplify and accelerate the industrial manufacturing process by introducing an easy and intuitive way of interacting with the hardware, without requiring special programming skills or long training from the human operators. The proposed software is developed for Microsoft's HoloLens 2 Mixed Reality headset, integrated with the Robot Operating System (ROS), and has been tested in a case study inspired by the automotive industry.
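The safety awareness described above, superimposing active safety zones around the robots, can be illustrated with a minimal sketch that classifies the operator's position against circular zones, as an AR overlay might colour them. The function name, zone radii, and circular-zone assumption are illustrative only and are not taken from the paper:

```python
import math

def zone_alert(human_xy, robot_xy, warning_radius=2.0, danger_radius=0.8):
    """Classify the operator's position against the robot's circular
    safety zones, based on planar distance to the robot base (metres).

    Returns "danger", "warning", or "clear"; an AR overlay could map
    these to red, yellow, and green zone colouring respectively.
    """
    d = math.dist(human_xy, robot_xy)
    if d <= danger_radius:
        return "danger"
    if d <= warning_radius:
        return "warning"
    return "clear"
```

In a real deployment the zones would come from the robot's safety controller rather than fixed radii; the sketch only shows the overlay-side classification logic.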
Over the last decades, a significant effort has been allocated by the research community towards improving robot capabilities. These emerging robot cognitive and perception abilities have been driving the shift of the production paradigm from rigid automation systems to hybrid and collaborative working environments. Nevertheless, the adoption of robots in lower-volume, diverse manufacturing environments is still constrained, due to a) the high integration and deployment complexity of end-to-end integrated robotic applications, and b) the limited flexibility of existing solutions in dealing with unexpected events in a dynamic way. This paper presents a modular service-oriented architecture for robotic resources that aims to address those limitations by a) creating digital models that semantically characterize all production entities and processes in a unified way, b) deploying open interfaces that allow the communication of all robotics hardware and control systems, c) dynamically planning and orchestrating the execution of manufacturing operations by the involved human and robot resources, and d) collectively sharing the resources' perceived environment. The overall system integrates task planning, perception, actuation, and robot control modules, while Augmented Reality (AR) based interfaces have been considered for including human operators in the loop. The described architecture has been tested and validated in a case study inspired by the automotive industry.
Increasing digitalization and advances in information and communication technologies have greatly changed how humans interact with digital information. Nowadays, it is not sufficient to only display relevant data in production activities, as the enormous amount of data generated by smart devices can overwhelm operators without being fully utilized. Operators often require extensive knowledge of the machines in use to make informed decisions during processes such as maintenance and production. To enable novice operators to access such knowledge, it is important to reinvent the way of interacting with digitally enhanced smart devices. In this research, a mobile augmented reality remote monitoring system is proposed to help operators with low levels of knowledge and experience comprehend the digital twin data of a device and interact with the device. It analyses both historical logs and real-time data through a cloud server and enriches 2D data with 3D models and animations in the 3D physical space. A cloud-based machine learning algorithm is applied to transform learned knowledge into live presentations on a mobile device for users to interact with. A scaled-down case study is conducted using a tower crane model to demonstrate the potential benefits, as well as the implications, of deploying the system in industrial environments. The user study verifies that the proposed solution yields consistent, statistically significant, measurable improvements for novice users in human-device interaction.
The use of collaborative robots in the manufacturing industry has spread widely in the last decade. To be efficient, human-robot collaboration needs to be properly designed, also taking into account the operator's psychophysiological reactions. Virtual Reality can be used as a tool to simulate human-robot collaboration in a safe and inexpensive way. Here, we present a virtual collaborative platform in which the human operator and a simulated robot coordinate their actions to accomplish a simple assembly task. In this study, the robot moved slowly or more quickly in order to assess the effect of its velocity on the human's responses. Ten participants tested this application using an Oculus Rift head-mounted display; ARTracking cameras and a Kinect system were used to track the operator's right arm movements and hand gestures, respectively. Performance, user experience, and physiological responses were recorded. The results showed that while the participants' performance and evaluations varied as a function of the robot's velocity, no differences were found in the physiological responses. Taken together, these data highlight the relevance of the kinematic aspects of robot motion within human-robot collaboration and provide valuable insights to further develop our virtual human-machine interactive platform.
This paper presents a tool for supporting operators in shared industrial workplaces where humans and robots coexist. The tool has been developed in the form of a software application for wearable devices, such as smartwatches, and provides functionalities for direct interaction with the robot. Interfaces to audio commands, manual guidance applications, and AR visualization systems have been implemented. The paper focuses on the development of the smartwatch application and its integration with the network of resources under the ROS framework. The tool has been tested on two pilot cases, one in the automotive sector and one in the white goods production sector. The results indicate that the approach can significantly enhance the operators' integration in the hybrid assembly system.
Shipbuilding companies are upgrading their inner workings in order to create Shipyards 4.0, where the principles of Industry 4.0 are paving the way to further digitalized and optimized processes in an integrated network. Among the different Industry 4.0 technologies, this article focuses on Augmented Reality, whose application in the industrial field has led to the concept of Industrial Augmented Reality (IAR). This article first describes the basics of IAR and then carries out a thorough analysis of the latest IAR systems for industrial and shipbuilding applications. Then, in order to build a practical IAR system for shipyard workers, the main hardware and software solutions are compared. Finally, as a conclusion after reviewing all the aspects related to IAR for shipbuilding, an IAR system architecture is proposed that combines Cloudlets and Fog Computing, which reduce response latency and accelerate rendering tasks while offloading compute-intensive tasks from the Cloud.
In recent years, manufacturing has aimed at increasing flexibility while maintaining productivity, to satisfy emerging market needs for higher product customization. Human-Robot Collaboration (HRC) can bring about this balance by combining the benefits of manual assembly and robotic automation. When introducing a hybrid concept, safety and human acceptance are of vital importance for successful implementation. Fenceless coexistence may lead to operator discomfort, especially in cases where close Human-Robot Interaction (HRI) occurs. This work aims at designing and implementing a natural Human-System and System-Human interaction framework that enables seamless interaction between operators and their "robot colleagues". This natural interaction will strengthen hybrid implementation through increased: a) operator and system awareness and b) operator trust in the system, and through decreased: a) human errors and b) safety incidents. The overall architecture of the proposed system makes it scalable, flexible, and applicable in different collaborative scenarios by enabling the connectivity of multiple interfaces with environments customizable according to operators' needs. The performance of the system is evaluated on a scenario originating from the automotive industry, proving that an intuitive interaction framework can increase the acceptance and performance of both robots and operators.
The Digital Twin is one of the promising digital technologies being developed at present to support digital transformation and decision making in multiple industries. While the concept of a Digital Twin is nearly 20 years old, it continues to evolve as it expands to new industries and use cases. This has resulted in a continually increasing variety of definitions that threatens to dilute the concept and lead to ineffective implementations of the technology. There is a need for a consolidated and generalized definition, with clearly established characteristics to distinguish what constitutes a Digital Twin and what does not. This paper reviews 46 Digital Twin definitions given in the literature over the past ten years in order to propose a generalized definition that encompasses the breadth of options available, and provides a detailed characterization that includes criteria to distinguish the Digital Twin from other digital technologies. Next, a process and considerations for the implementation of Digital Twins are presented through a case study. Digital Twin future needs and opportunities are also outlined.
Human-Robot Interaction (HRI) has emerged in recent years as a need for the common collaborative execution of manufacturing tasks. This work examines two types of techniques for safe collaboration that, as far as possible, do not interrupt the flow of collaboration, namely proactive and adaptive techniques. The former are materialised using audio and visual cognitive aids, which the user receives as dynamic stimuli in real time during collaboration, and are aimed at enriching the information available to the user. The adaptive techniques investigated refer to the robot: according to the first, the robot decelerates when a forthcoming contact with the user is detected, while according to the second, the robot retracts and moves to its final destination via a modified, safe trajectory, so as to avoid the human. The effectiveness as well as the activation criteria of the above techniques are investigated in order to avoid possibly pointless or premature activation. This investigation was implemented in a prototype, highly interactive and immersive Virtual Environment (VE), in the framework of a human-robot collaborative hand lay-up process of carbon fabric in an industrial workcell. User tests were conducted in which both subjective metrics of user satisfaction and performance metrics of the collaboration (task completion duration, robot mean velocity, number of detected human-robot collisions, etc.) were recorded. After statistical processing, the results verify the effectiveness of the safe collaboration techniques as well as their acceptability by the user, showing that collaboration performance is affected to a varying extent.
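The first adaptive technique, deceleration when contact with the user is anticipated, can be sketched as a distance-based speed scaling. The thresholds and the linear ramp below are assumptions made for illustration, not the activation criteria investigated in the study:

```python
def scaled_speed(nominal_speed, human_distance, stop_dist=0.3, full_speed_dist=1.5):
    """Scale the robot's speed with the human-robot separation.

    Below stop_dist the robot halts; beyond full_speed_dist it runs at
    its nominal speed; in between, the speed ramps linearly. Distances
    are in metres, speeds in m/s.
    """
    if human_distance <= stop_dist:
        return 0.0
    if human_distance >= full_speed_dist:
        return nominal_speed
    fraction = (human_distance - stop_dist) / (full_speed_dist - stop_dist)
    return nominal_speed * fraction
```

A premature-activation guard of the kind the paper investigates would, in addition, condition the slowdown on the predicted trajectory actually intersecting the user's position, not on proximity alone.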
In this paper we introduce a novel approach that enables users to interact with a mobile robot in a natural manner. The proposed interaction system does not require any specific infrastructure or device, but relies on commonly used objects while leaving the user's hands free. Specifically, we propose to utilize a smartwatch (or a sensorized wristband) for recognizing the motion of the user's forearm. Measurements of accelerations and angular velocities are exploited to recognize the user's gestures and define velocity commands for the robot. The proposed interaction system is evaluated experimentally with different users controlling a mobile robot, and compared to the use of a remote control device for the teleoperation of robots. The results show that the proposed smartwatch-based natural interaction system significantly improves the usability and effectiveness of the human-robot interaction experience.
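The mapping from wrist-worn inertial measurements to robot velocity commands could, for illustration, be built on forearm tilt estimated from the accelerometer: pitch drives the forward speed and roll drives the turn rate. The function, gains, and dead zone below are hypothetical and simpler than the gesture recognition scheme evaluated in the paper:

```python
import math

def imu_to_velocity(accel, max_linear=0.5, max_angular=1.0, dead_zone=0.15):
    """Map forearm tilt, estimated from a wrist-mounted accelerometer,
    to (linear, angular) velocity commands for a mobile robot.

    accel is an (ax, ay, az) tuple in m/s^2; with the arm level, gravity
    lies along z. Tilt angles inside the dead zone (radians) are zeroed
    to suppress sensor jitter.
    """
    ax, ay, az = accel
    pitch = math.atan2(-ax, math.hypot(ay, az))  # forward/backward tilt
    roll = math.atan2(ay, az)                    # left/right tilt
    linear = max_linear * (pitch / (math.pi / 2)) if abs(pitch) > dead_zone else 0.0
    angular = max_angular * (roll / (math.pi / 2)) if abs(roll) > dead_zone else 0.0
    return linear, angular
```

A level forearm thus commands a stop, while tilting the wrist fully forward commands the maximum linear speed; in a ROS setting these two values would typically be published as a velocity message to the robot's base controller.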
Maintenance and its cost have continued, over the years, to draw the attention of production management, since unplanned failures decrease the reliability of the system and the return on investment. Maintenance services for manufactured products are among the most common services in the industry; they account for more than half of the total costs and influence the environmental impact of the product. In order for manufacturers to increase their productivity by performing accurate and quick maintenance, advanced monitoring systems should be considered so that machine tool failures can easily be detected before they occur. Toward that end, a cloud-based platform for condition-based preventive maintenance, supported by a shop-floor monitoring service and an augmented reality (AR) application, is proposed as a product-service system (CARM2-PSS). The proposed AR maintenance service consists of algorithms for the automated generation of assembly sequences, part movement scripts, and an improved interface that aims to maximize the use of existing knowledge while creating vivid AR service instructions. Moreover, the proposed monitoring system is supported by a wireless sensor network (WSN) and is deployed in a Cloud environment together with the AR tool. The monitoring system monitors the status of the machine tools, calculates their remaining operating time between failures (ROTBF), and identifies the available windows of the machine tools in which to perform the AR remote maintenance. In order to validate the proposed methodology and quantify its impact, it is applied in a real-life case study from the white-goods industry.
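The scheduling step, matching the predicted ROTBF against a machine tool's idle windows, can be sketched as follows. The data layout and the first-fit policy are assumptions made for illustration, not the platform's actual scheduling logic:

```python
def maintenance_window(rotbf_hours, idle_slots):
    """Pick the earliest idle slot of the machine tool that ends before
    the remaining operating time between failures (ROTBF) elapses.

    idle_slots is a list of (start_hour, end_hour) intervals measured in
    hours from now. Returns the chosen interval, or None if maintenance
    cannot be fitted before the predicted failure.
    """
    for start, end in sorted(idle_slots):
        if end <= rotbf_hours:
            return (start, end)
    return None
```

When no slot fits, a condition-based platform of this kind would have to escalate, e.g. by triggering an unscheduled stop before the predicted failure.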
This paper presents the design and implementation of an augmented reality (AR) tool to aid operators working in a hybrid, human-robot collaborative industrial environment. The system aims to provide production- and process-related information as well as to enhance the operators' awareness of the safety mechanisms dictated by the collaborative workspace. The developed system has been integrated with a service-based station controller, which is responsible for orchestrating the flow of information to the operator according to the task execution status. The tool has been applied to a case study from the automotive sector, resulting in enhanced operator integration with the assembly process.
Augmented reality (AR) provides a seamless interface between the virtual and real worlds, and it has been applied to various domains, e.g., product design, manufacturing, and maintenance and repair. In these AR systems, 3D graphics or other content can be registered in the real environment to provide information to the users. Context-awareness can improve the usability of such AR systems by adapting the rendered information to the context, so that the provided information is more useful to the users. However, many AR product maintenance systems lack an authoring system that allows context-aware AR content to be created with little programming skill. In this paper, an authoring system, Authoring for Context-Aware AR (ACAAR), which provides concepts and techniques to author AR content for context-aware AR applications, is proposed. Using ACAAR, users can add and spatially arrange various contents, e.g., texts, images, and computer-aided design (CAD) models, and specify the logical relationships between the AR contents and the maintenance contexts. In addition, a user study has been conducted to demonstrate the usability of the proposed system.
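The logical relationships between AR contents and maintenance contexts could be represented, for illustration, as simple condition dictionaries evaluated against the observed context. The rule format below is a hypothetical simplification, not ACAAR's actual authoring model:

```python
def select_contents(context, rules):
    """Return the AR contents whose authored conditions match the
    current maintenance context.

    context is a dict of observed facts (e.g. the active repair step
    and the tool in hand); rules maps each content identifier to the
    dict of context values required for that content to be displayed.
    """
    return [content for content, required in rules.items()
            if all(context.get(key) == value for key, value in required.items())]
```

An authoring tool in this spirit would let a non-programmer fill in the `required` dictionaries through a form, while the runtime repeatedly re-evaluates the rules as the maintenance context changes.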