Fig 1 - uploaded by Alessandro Carfì
A human operator teaching a Baxter robot how to grasp an articulated object using Kinesthetic Teaching.
Source publication
The evolution of production systems for smart factories foresees a tight relation between human operators and robots. Specifically, when robot task reconfiguration is needed, the operator must be provided with an easy and intuitive way to do it. A useful tool for robot task reconfiguration is Programming by Demonstration (PbD). PbD allows human ope...
Contexts in source publication
Context 1
... assumes two programming phases, namely teaching, where one or more realisations of the target task are shown, and learning, in which the examples are generalised to synthesise the resulting robot behaviour. Kinesthetic Teaching (KT) (Figure 1) is a teaching technique, well suited for robot manipulators, in which a human operator physically moves the robot through the execution of the target task while the robot records the movements of its joints during the teaching process. PbD has the competitive advantage of not requiring any robot-specific competence related to reconfiguration issues and task teaching [6]. ...
Context 2
... experiment is repeated twice, swapping the roles of teacher and learner. During all the experiments, teacher and learner stand facing each other with a table in between, as shown in Figure 13. Once the teaching procedure ends, the learner is asked to repeat the task. ...
Context 3
... distribution of end-effector trajectory points: The density has been computed for all the experiments and used to divide τ into the five intervals expected by the spatial classification, as shown in Figure 6. Figure 10 shows, for each group, the empirical distribution function computed on the distances between points belonging to T_γ and T_r and, respectively, p_γ and p_r. Each graph contains one line per experiment; the y-axis indicates the probability that a point's distance to the reference point is lower than the corresponding value on the x-axis. ...
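The empirical distribution functions described above can be sketched as follows. This is a minimal illustration of how such an ECDF is computed, not the authors' code; the distance values are hypothetical:

```python
def ecdf(distances):
    """Empirical cumulative distribution function of a sample of
    point-to-reference distances: returns the sorted distances xs
    and, for each xs[i], the fraction of points whose distance to
    the reference is at most xs[i] (the y-axis in the text)."""
    xs = sorted(distances)
    n = len(xs)
    ys = [(i + 1) / n for i in range(n)]
    return xs, ys

# Hypothetical distances (in metres) between trajectory points and
# the reference point of one task phase.
xs, ys = ecdf([0.02, 0.05, 0.01, 0.08, 0.03])
```

Plotting `ys` against `xs` for each experiment yields one line per experiment, as in the figure described above.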
Context 4
... Given the definition of T_γ and T_r in Section 3.1 and the considered setup, as well as the 78 cm distance between locations A and B, the ε value makes it reasonable to consider the density-based subdivision valid. The bar graph in Figure 11 presents the median value of the densities over all the experiments for each group. It highlights that the densities of T_γ and T_r are higher than those of the other phases, confirming H2, as was realistic to expect. ...
Context 5
... highlights that the densities of T_γ and T_r are higher than those of the other phases, confirming H2, as was realistic to expect. Figure 12 shows the median density value for T_γ and T_r over all the volunteers. These graphs highlight the importance of each operator's skills and the lack of correlation in a volunteer's performance across phases. ...
Context 6
... to the Baxter-based setup used in these experiments, humans have a context-aware representation of the environment and of the task that is going to be performed, and they use this information to predict and anticipate the teacher. Indeed, observing Figure 13, one can notice that the teacher only controls the learner's wrist and elbow; nevertheless, the learner successfully grasps the box without any instruction on how to position the palm and fingers, or on when to actually grasp the object. ...
Context 7
... experiment involved 17 volunteers, 12 from a vocational education and training school and 5 from a state industrial and technical institute. Volunteers were divided into two groups such that the schools are equally represented; Figure 14 presents the age of the volunteers in the two groups. Two volunteers in each group had previous knowledge of industrial equipment, each with between 6 and 12 months of experience. ...
Context 8
... of the teaching process: Figure 15 presents the time duration of the teaching procedure for all volunteers, divided by group. The two groups are characterised by different trends. ...
Context 9
... volunteers in G1 are characterised by a median value of 20.5 s and a variance of 70.26 s², while volunteers belonging to G2 have a median value of 12.6 s and a variance of 3.55 s². These results, together with the empirical cumulative distribution study presented in Figure 16, confirm [H3] and [H3.1], since the autonomous behaviours have reduced both the mean teaching time and its variation over volunteers. ...
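Summary statistics of this kind can be reproduced with the Python standard library. The durations below are illustrative values only, not the study's data:

```python
from statistics import median, variance

# Hypothetical teaching durations in seconds for one group
# (illustrative values only, not taken from the study).
g1 = [12.0, 18.5, 20.5, 25.0, 35.0]

med = median(g1)    # central tendency of the teaching time
var = variance(g1)  # sample variance, in s^2
```

Note that `statistics.variance` computes the sample variance (dividing by n − 1); the population variance would use `statistics.pvariance`.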
Context 10
... of task phases: We computed the durations associated with T_ph1, T_ph2 and T_ph3 (Figure 2) for each volunteer, thus obtaining the results presented in Figure 17. These results reinforce the previous observations regarding H3. ...
Context 11
... order to prove the validity of the division process, we study the empirical distribution of the distances between points belonging to T_γ and T_r and, respectively, p_γ and p_r. This study, summarised in Figure 18, shows that, referring to (5), ε amounts to 12 cm, while the majority of the points belonging to T_γ and T_r are far closer to the reference points p_γ and p_r. Considering the definition of T_γ and T_r given in Section 3.1 and the experimental setup, particularly the 78 cm distance between A and B, the determined ε value suggests that the density-based subdivision is valid. ...
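The validity check described above can be sketched as follows: count what fraction of the trajectory points of a phase lie within ε of its reference point. This is a minimal sketch, not the authors' implementation, and the sample points are hypothetical:

```python
import math

def fraction_within(points, reference, eps):
    """Fraction of trajectory points whose Euclidean distance to the
    reference point is at most eps (all lengths in metres)."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    close = sum(1 for p in points if dist(p, reference) <= eps)
    return close / len(points)

# Hypothetical 3D end-effector points clustered near a reference
# point at the origin; eps = 0.12 m corresponds to the 12 cm value
# reported in the text.
pts = [(0.01, 0.0, 0.0), (0.05, 0.02, 0.0), (0.30, 0.0, 0.0)]
frac = fraction_within(pts, (0.0, 0.0, 0.0), eps=0.12)
```

A fraction close to 1, with ε small relative to the 78 cm A–B distance, supports the density-based subdivision.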
Context 12
... the definition of T_γ and T_r given in Section 3.1 and the experimental setup, particularly the 78 cm distance between A and B, the determined ε value suggests that the density-based subdivision is valid. Figure 19 presents the median value of the densities over all the experiments for each group. The densities for G1 are similar to those found in the preliminary study (Section 5). ...
Similar publications
A reactive motion-planning for collaborative robots using the time-layered C-spaces (TLC-spaces) is proposed in this paper. First, the time-augmented C-space (TAC-space) is introduced. TAC-space is an implementation of the configuration-time space with multiple moving obstacles [Latombe JC. Robot motion planning. Kluwer Academic; 1991. p. 22, 23]....
Citations
... The architecture uses the Robot Operating System (ROS) framework [27] to manage inter-module communication. Besides benefits in code reuse, this choice allows us to exploit the Baxter-related ROS APIs, which expose services to acquire sensory data, control robot actuators, and record robot motions using Kinesthetic Teaching (KT) [28,29]. ...
Close human-robot interaction (HRI), especially in industrial scenarios, has been vastly investigated for the advantages of combining human and robot skills. For an effective HRI, the validity of currently available human-machine communication media or tools should be questioned, and new communication modalities should be explored. This article proposes a modular architecture allowing human operators to interact with robots through different modalities. In particular, we implemented the architecture to handle gestural and touchscreen input, respectively, using a smartwatch and a tablet. Finally, we performed a comparative user experience study between these two modalities.
... While this kind of technique drastically reduces the ill-posed correspondence problem, from an ergonomics and intuitiveness perspective this method also imposes physical and mental constraints on the demonstrator. Carfì et al. [95] demonstrated the difficulties an operator faces when trying to open or close a gripper and position a robotic arm at the same time in a pick-and-place operation. A comparison between human-robot demonstration and human-human demonstration shows that it is easier and less stressful to teach a learner who has an active role during demonstrations. ...
Human–Robot Collaboration (HRC) is an interdisciplinary research area that has gained attention within the smart manufacturing context. To address changes within manufacturing processes, HRC seeks to combine the impressive physical capabilities of robots with the cognitive abilities of humans to design tasks with high efficiency, repeatability, and adaptability. During the implementation of an HRC cell, a key activity is the robot programming, which takes into account not only the robot restrictions and the working space, but also human interactions. One of the most promising techniques is the so-called Learning from Demonstration (LfD), an approach based on a collection of learning algorithms inspired by how humans imitate behaviors to learn and acquire new skills. In this way, the programming task could be simplified and carried out by the shop-floor operator. The aim of this work is to present a survey of this programming technique, with emphasis on collaborative scenarios rather than just isolated tasks. The literature was classified and analyzed based on the main algorithms employed for skill/task learning and the human level of participation during the whole LfD process. Our analysis shows that human intervention has been poorly explored, and its implications have not been carefully considered. Among the different methods of data acquisition, the prevalent method is physical guidance. Regarding data modeling, techniques such as Dynamic Movement Primitives and Semantic Learning were the preferred methods for low-level and high-level task solving, respectively. This paper aims to provide guidance and insights for researchers looking for an introduction to LfD programming methods in a collaborative robotics context and to identify research opportunities.
... EUD/EUP tools in this category enable factory workers to intuitively re-program robots without using or learning a general-purpose programming language [1]. EUP approaches often used for this task are Visual Programming [18], [19] and Programming by Demonstration (PbD) [20], [21]. In many industrial cases, robots are deployed in scenarios where direct interaction with a robot is not required [22], [23]. ...
In an effort towards the democratization of robotics, this article presents a novel End-User Development framework called Robot Interfaces From Zero Experience (RIZE). The framework provides a set of useful software tools for the creation of robot-oriented software architectures and programming interfaces, as well as the modeling and execution of robot behaviors, with a specific emphasis on social behaviors. Programming interfaces built on top of RIZE enable professionals with different backgrounds and interests to design, adapt, and scale up robotics applications. As an example of a programming interface, we present Open RIZE, which exploits an End-User Programming paradigm combining blocks, tables, and form-filling interfaces. Unlike previous approaches, robot behavioral code generated by Open RIZE is intrinsically modular, reusable, scalable, neutral to the employed programming language, and platform-agnostic. In the paper, we present the main design guidelines and features of Open RIZE. Additionally, we perform an initial usability evaluation of the Open RIZE interface in an online workshop. Preliminary results using the System Usability Scale with 10 novice end-users indicate that Open RIZE is easy to use and learn.
... This allows for building a library of simpler tasks, which can be composed together to create composite tasks [61]. In this respect, current work is devoted to integrate in FLEXHRC+ the capability of learning from human demonstrations and through interactions with the environment [62]. ...
In this article, we propose FlexHRC+, a hierarchical human–robot cooperation architecture designed to provide collaborative robots with an extended degree of autonomy when supporting human operators in high-variability shop-floor tasks. The architecture encompasses three levels, namely for perception, representation, and action. Building up on previous work, here we focus on an in-the-loop decision-making process for the operations of collaborative robots coping with the variability of actions carried out by human operators, and on the representation level, integrating a hierarchical AND/OR graph whose online behavior is formally specified using first-order logic. The architecture is accompanied by experiments including collaborative furniture assembly and object positioning tasks.
... The connection between human operators and robots is becoming tighter and more commonplace. When robot task reconfiguration is needed, the operator must be provided with an easy and intuitive way to do it [24]. Recent research addresses methods of instruction and knowledge transfer from a human to a collaborative robot and between robots. ...
The ideas of smart factories and sustainable manufacturing can be realised quickly in companies where industrial production is high-volume, low-mix. However, it is more difficult to follow trends towards Industry 4.0 in craft industries such as tooling. This kind of work environment is a challenge for the deployment of sustainability and smart technologies because many stages involve so-called "manual processing according to the worker's feeling and experience." With the help of a literature review and testing in the production environment, we approach the design of a procedure for planning a sustainable technological upgrade of craft production. The best method proved to be a combination of a maturity model, process mapping with flowcharts, critical analysis, and a customised evaluation model. Workplace flexibility, as a move towards sustainability, is presented in a laboratory environment on a screwing task performed by a human wearing HoloLens and a collaborative robot.
In recent years, there has been a growing trend of integrating robots into various dimensions of daily life, to improve user experiences and offer a range of services. This study explores the interactions of robots showing social behavior towards other robots and their impact on human perceptions. We focus on three types of social behavior: social sensitivity, attention-sharing, and helping, aiming to understand how these interactions affect human perceptions of robots. Utilizing Duckiebot mobile robots in carefully crafted experimental setups, participants observed video recordings of these robots’ interactions, which either included or excluded each targeted social behavior. The study measured user responses using established scales such as the Mind Attribution Scale (MAS), the Goodspeed Scale, and the Robotic Social Attributes Scale (RoSAS). The results demonstrated that robots displaying social behavior towards other robots were perceived more positively compared to those that did not exhibit such behavior. Specifically, social sensitivity positively impacted animacy, experience, likability, perceived intelligence, safety, and warmth. Attention-sharing improved perceptions of competence, experience, likability, perceived intelligence, and warmth. Additionally, helping behavior positively affected agency, animacy, anthropomorphism, competence, experience, likability, perceived intelligence, safety, and warmth. This research contributes valuable insights into Human-Robot Interaction (HRI), highlighting the significant impact of robots’ interactions with each other on user experiences and perceptions. The exploration of social behavior lays a foundation for designing robots that evoke positive responses, fostering smoother integration of robotic technology into various aspects of society.
Nowadays, considering the constant changes in customers' demands, manufacturing systems tend to move more and more towards customization while ensuring the expected reactivity. In addition, more attention is given to human factors to, on the one hand, create opportunities for improving work conditions such as safety and, on the other hand, reduce the risks brought by new technologies such as job cannibalization. Meanwhile, Industry 4.0 offers new ways to facilitate this change by enhancing human–machine interactions using Collaborative Robots (Cobots). Recent research studies have shown that cobots may bring numerous advantages to manufacturing systems, especially by improving their flexibility. This research investigates the impacts of the integration of cobots in the context of assembly and disassembly lines. For this purpose, a Systematic Literature Review (SLR) is performed. The existing contributions are classified on the basis of the subject of study, methodology, performance criteria, and type of Human-Cobot collaboration. Managerial insights are provided, and research perspectives are discussed.