Chapter

Exploring AI-Enhanced Shared Control for an Assistive Robotic Arm

Conference Paper
Full-text available
Robotic solutions, in particular robotic arms, are becoming more frequently deployed for close collaboration with humans, for example in manufacturing or domestic care environments. These robotic arms require the user to control several Degrees-of-Freedom (DoFs) to perform tasks, primarily involving grasping and manipulating objects. Standard input devices predominantly have two DoFs, requiring time-consuming and cognitively demanding mode switches to select individual DoFs. Contemporary Adaptive DoF Mapping Controls (ADMCs) have been shown to decrease the necessary number of mode switches but have so far not been able to significantly reduce perceived workload. Users still bear the mental workload of incorporating abstract mode switching into their workflow. We address this by providing feed-forward multimodal feedback using updated recommendations of the ADMC, allowing users to visually compare the current and the suggested mapping in real time. We contrast the effectiveness of two new approaches that a) continuously recommend updated DoF combinations or b) use discrete thresholds between current robot movements and new recommendations. Both are compared in a Virtual Reality (VR) in-person study against a classic control method. Significant results for lowered task completion time, fewer mode switches, and reduced perceived workload conclusively establish that, in combination with feed-forward feedback, ADMC methods can indeed outperform classic mode switching. A lack of apparent quantitative differences between the Continuous and Threshold variants reveals the importance of user-centered customization options. Including these implications in the development process will improve usability, which is essential for successfully implementing robotic technologies with high user acceptance.
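For illustration only, the following minimal Python sketch contrasts the two update strategies described above: continuously blending the current DoF mapping towards the latest recommendation versus switching only once the recommendation diverges beyond a discrete threshold. The matrices, blending rate, and threshold are hypothetical and not taken from the paper.

```python
# Minimal sketch (hypothetical values, not the authors' implementation):
# updating an adaptive DoF mapping either continuously or via a threshold.
import numpy as np

def update_mapping(current, recommended, mode="continuous",
                   rate=0.1, threshold_deg=30.0):
    """current, recommended: (robot_dofs x input_dofs) matrices whose columns
    map each input-device axis to a combined robot motion direction."""
    if mode == "continuous":
        # Move a small step towards the recommendation every control cycle.
        return (1.0 - rate) * current + rate * recommended
    # Threshold variant: adopt the recommendation only once the first mapped
    # axis has rotated by more than `threshold_deg` away from the current one.
    cos_sim = np.dot(current[:, 0], recommended[:, 0]) / (
        np.linalg.norm(current[:, 0]) * np.linalg.norm(recommended[:, 0]))
    angle = np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0)))
    return recommended if angle > threshold_deg else current

current = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])      # plain x/y translation
recommended = np.array([[0.7, 0.0], [0.7, 0.0], [0.0, 1.0]])  # diagonal x-y plus z
print(update_mapping(current, recommended, mode="threshold"))
```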
Article
Full-text available
With the ongoing efforts to empower people with mobility impairments and the increase in technological acceptance by the general public, assistive technologies, such as collaborative robotic arms, are gaining popularity. Yet, their widespread success is limited by usability issues, specifically the disparity between user input and software control along the autonomy continuum. To address this, shared control concepts provide opportunities to combine the targeted increase of user autonomy with a certain level of computer assistance. This paper presents the free and open-source AdaptiX XR framework for developing and evaluating shared control applications in a high-resolution simulation environment. The initial framework consists of a simulated robotic arm with an example scenario in Virtual Reality (VR), multiple standard control interfaces, and a specialized recording/replay system. AdaptiX can easily be extended for specific research needs, allowing Human-Robot Interaction (HRI) researchers to rapidly design and test novel interaction methods, intervention strategies, and multi-modal feedback techniques, without requiring an actual physical robotic arm during the early phases of ideation, prototyping, and evaluation. Also, a Robot Operating System (ROS) integration enables control of a real robotic arm in a PhysicalTwin approach without any simulation-reality gap. Here, we review the capabilities and limitations of AdaptiX in detail and present three bodies of research based on the framework. AdaptiX can be accessed at https://adaptix.robot-research.de.
Conference Paper
Full-text available
In Human-Computer Interaction, vibrotactile haptic feedback offers the advantage of being independent of any visual perception of the environment. Most importantly, the user's field of view is not obscured by user interface elements, and the visual sense is not unnecessarily strained. This is especially advantageous when the visual channel is already busy or the visual sense is limited. We developed three design variants based on different vibrotactile illusions to communicate 3D directional cues. In particular, we explored two variants based on the vibrotactile illusion of the cutaneous rabbit and one based on apparent vibrotactile motion. To communicate gradient information, we combined these with pulse-based and intensity-based mapping. A subsequent study showed that the pulse-based variants based on the vibrotactile illusion of the cutaneous rabbit are suitable for communicating both directional and gradient characteristics. The results further show that a representation of 3D directions via vibrations can be effective and beneficial.
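As a rough illustration of a pulse-based mapping (deliberately simplified: the paper's variants sequence several tactors to evoke the cutaneous-rabbit illusion, which is not modeled here), the sketch below picks the single tactor closest to a 3D direction cue and maps a gradient value to a pulse count. The actuator layout and parameters are hypothetical.

```python
# Illustrative sketch only: a simplified pulse-based encoding of a 3D
# directional cue. Actuator layout, pulse counts and ranges are hypothetical.
import numpy as np

# Hypothetical actuator layout: one unit vector per tactor.
TACTORS = {
    "front": np.array([1, 0, 0]), "back": np.array([-1, 0, 0]),
    "left":  np.array([0, 1, 0]), "right": np.array([0, -1, 0]),
    "up":    np.array([0, 0, 1]), "down":  np.array([0, 0, -1]),
}

def encode_cue(direction, gradient, max_pulses=5):
    """Pick the tactor closest to `direction` and map `gradient` (0..1)
    to a number of short pulses (pulse-based mapping)."""
    direction = direction / np.linalg.norm(direction)
    tactor = max(TACTORS, key=lambda k: float(np.dot(TACTORS[k], direction)))
    pulses = max(1, int(round(gradient * max_pulses)))
    return tactor, pulses

print(encode_cue(np.array([0.2, 0.9, 0.1]), gradient=0.6))  # ('left', 3)
```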
Conference Paper
Full-text available
Robots are becoming increasingly omnipresent in our daily lives, supporting us and carrying out autonomous tasks. In Human-Robot Interaction, human actors benefit from understanding the robot's motion intent to avoid task failures and foster collaboration. Finding effective ways to communicate this intent to users has recently received increased research interest. However, no common language has been established to systematize robot motion intent. This work presents a scoping review aimed at unifying existing knowledge. Based on our analysis, we present an intent communication model that depicts the relationship between robot and human through different intent dimensions (intent type, intent information, intent location). We discuss these different intent dimensions and their interrelationships with different kinds of robots and human roles. Throughout our analysis, we classify the existing research literature along our intent communication model, allowing us to identify key patterns and possible directions for future research.
Conference Paper
Full-text available
Nowadays, robots collaborate closely with humans in a growing number of areas. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, supporting people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior. This, however, is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their motion intent and comprehending how they "think" about their actions. We work on solutions that communicate the cobot's AI-generated motion intent to a human collaborator. Effective communication enables users to proceed with the most suitable option. We present a design exploration with different visualization techniques to optimize this user understanding, ideally resulting in increased safety and end-user acceptance.
Article
Full-text available
Robot arms are one of many assistive technologies used by people with motor impairments. Assistive robot arms can allow people to perform activities of daily living (ADL) involving grasping and manipulating objects in their environment without the assistance of caregivers. Suitable input devices (e.g., joysticks) mostly have two Degrees of Freedom (DoF), while most assistive robot arms have six or more. This results in time-consuming and cognitively demanding mode switches to change the mapping of DoFs to control the robot. One option to decrease the difficulty of controlling a high-DoF assistive robot arm using a low-DoF input device is to assign different combinations of movement-DoFs to the device's input DoFs depending on the current situation (adaptive control). To explore this method of control, we designed two adaptive control methods for a realistic virtual 3D environment. We evaluated our methods against a commonly used non-adaptive control method that requires the user to switch controls manually. This was conducted in a simulated remote study that used Virtual Reality and involved 39 non-disabled participants. Our results show that the number of mode switches necessary to complete a simple pick-and-place task decreases significantly when using an adaptive control type. In contrast, the task completion time and workload stay the same. A thematic analysis of qualitative feedback of our participants suggests that a longer period of training could further improve the performance of adaptive control methods.
Conference Paper
Full-text available
This paper presents a novel approach to shared control for an assistive robot by adaptively mapping the degrees of freedom (DoFs) for the user to control with a low-dimensional input device. For this, a convolutional neural network interprets camera data of the current situation and outputs a probabilistic description of possible robot motion the user might command. Applying a novel representation of control modes, the network's output is used to generate individual degrees of freedom of robot motion to be controlled by a single DoF of the user's input device. These DoFs are not necessarily equal to the cardinal DoFs of the robot but are instead superimpositions of those, thus allowing motions like diagonal directions or orbiting around a point. This enables the user to perform robot motions previously impossible with such a low-dimensional input device. The shared control is implemented for a proof-of-concept 2D simulation and evaluated with an initial user study by comparing it to a standard control approach. The results show a functional control method which is both subjectively and objectively significantly faster, but subjectively more complex.
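A hypothetical Python sketch of the core idea (not the paper's network or mode representation): given a probabilistic description of likely robot motions, here stood in for by sampled candidate velocity vectors, the principal directions of that distribution become superimposed DoFs that a low-DoF input device controls directly.

```python
# Minimal sketch (hypothetical): derive superimposed motion DoFs from a
# distribution of likely robot motions and map them to a 2-DoF input device.
import numpy as np

def derive_axes(candidate_motions, n_input_dofs=2):
    """candidate_motions: (n_samples x robot_dofs) array of likely motion
    vectors. Returns a (robot_dofs x n_input_dofs) mapping whose columns are
    superimposed DoFs (e.g. a diagonal direction) rather than cardinal axes."""
    centered = candidate_motions - candidate_motions.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # principal directions
    return vt[:n_input_dofs].T

# Toy "prediction": the likely motions are mostly diagonal in the x-y plane.
rng = np.random.default_rng(0)
base = rng.normal(0.0, 1.0, (100, 1))
samples = np.hstack([base, base, 0.1 * rng.normal(0.0, 1.0, (100, 1))])

axes = derive_axes(samples)                  # first column ~ [0.71, 0.71, 0]
joystick = np.array([0.5, 0.0])              # deflect the first input axis
robot_velocity = axes @ joystick             # commanded superimposed motion
print(np.round(axes, 2), robot_velocity)
```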
Conference Paper
Full-text available
Light-weight robotic manipulators can be used to restore the manipulation capability of people with a motor disability. However, manipulating the environment poses a complex task, especially when the control interface is of low bandwidth, as may be the case for users with impairments. Therefore, we propose a constraint-based shared control scheme to define skills which provide support during task execution. This is achieved by representing a skill as a sequence of states, with specific user command mappings and different sets of constraints being applied in each state. New skills are defined by combining different types of constraints and conditions for state transitions, in a human-readable format. We demonstrate its versatility in a pilot experiment with three activities of daily living. Results show that even complex, high-dimensional tasks can be performed with a low-dimensional interface using our shared control approach.
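To make the idea concrete, here is a hypothetical sketch (names and format invented, not the paper's human-readable skill definition) of a skill expressed as a sequence of states, each with its own user-command mapping, active constraints, and a transition condition, written for an assumed drawer-opening task.

```python
# Hypothetical skill definition: states with command mappings, constraints,
# and transition conditions. All names are illustrative only.
drawer_skill = [
    {
        "state": "approach",
        "mapping": {"input_axis_0": "translate_along_approach_line"},
        "constraints": ["fix_gripper_orientation", "stay_on_approach_line"],
        "transition": "gripper_contacts_handle",
    },
    {
        "state": "pull",
        "mapping": {"input_axis_0": "translate_along_drawer_axis"},
        "constraints": ["restrict_motion_to_drawer_axis"],
        "transition": "drawer_fully_open",
    },
]

def active_state(skill, conditions_met):
    """Return the first state whose transition condition has not yet fired."""
    for state in skill:
        if state["transition"] not in conditions_met:
            return state
    return None

print(active_state(drawer_skill, {"gripper_contacts_handle"})["state"])  # -> pull
```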
Article
Full-text available
The purpose of this case study was to evaluate the technology readiness level of a prototype named "SAM" that consists of a robotic arm mounted on a mobile base. Usability and acceptance assessments were performed in patients with high-level quadriplegia. Seventeen patients with quadriplegia were trained to pick up three different objects in three different situations, amounting to scenarios 1, 2 and 3. Each scenario was observed over the 5 steps of its execution. For each step, usability and acceptability parameters were measured. The success rate was optimal or acceptable (70–100%) for (step 1) identifying the room where the object was located, (step 2) directing SAM towards the object and (step 5) monitoring the return of SAM and dropping the object. Designating and validating the object (step 3) and approaching and grasping the object (step 4) were rarely completed without any mistake. A majority of patients (70.6%) saw the usage of SAM as an interesting prospect for daily tasks (58.8%) as well as for the potential reorganisation of caregivers' time (47%). This study suggests that the usage of SAM allows patients with quadriplegia to grab objects both within and out of their field of view. The possibilities for grasping are increased via a user-friendly interface, which has yet to be improved. Its technology readiness level has been estimated at 5.
Article
Full-text available
Some people with severe mobility impairments are unable to operate powered wheelchairs reliably and effectively, using commercially available interfaces. This has sparked a body of research into "smart wheelchairs", which assist users to drive safely and create opportunities for them to use alternative interfaces. Various "shared control" techniques have been proposed to provide an appropriate level of assistance that is satisfactory and acceptable to the user. Most shared control techniques employ a traditional strategy called linear blending (LB), where the user's commands and wheelchair's autonomous commands are combined in some proportion. In this paper, however, we implement a more generalised form of shared control called probabilistic shared control (PSC). This probabilistic formulation improves the accuracy of modelling the interaction between the user and the wheelchair by taking into account uncertainty in the interaction. In this paper, we demonstrate the practical success of PSC over LB in terms of safety, particularly for novice users.
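For reference, linear blending combines the user command u_user and the autonomous command u_auto as u = alpha * u_user + (1 - alpha) * u_auto. The sketch below contrasts that baseline with a toy probabilistic combination that scores candidate commands under simple Gaussian likelihoods for both the user and the autonomy; the candidate set and noise parameters are hypothetical and only hint at how uncertainty can enter the arbitration.

```python
# Illustrative sketch only: linear blending (LB) versus a toy probabilistic
# shared-control arbitration. Gaussian models and parameters are hypothetical.
import numpy as np

def linear_blending(u_user, u_auto, alpha=0.5):
    return alpha * np.asarray(u_user) + (1 - alpha) * np.asarray(u_auto)

def probabilistic_shared_control(u_user, u_auto, sigma_user=0.4, sigma_auto=0.2,
                                 candidates=None):
    """Pick the candidate command maximising p(u | user) * p(u | autonomy),
    with isotropic Gaussian likelihoods as a stand-in for the real models."""
    if candidates is None:
        candidates = [np.array(c) for c in ([1, 0], [0, 1], [0.7, 0.7], [-1, 0])]
    def log_p(u, mean, sigma):
        return -np.sum((u - mean) ** 2) / (2 * sigma ** 2)
    scores = [log_p(c, np.asarray(u_user), sigma_user)
              + log_p(c, np.asarray(u_auto), sigma_auto) for c in candidates]
    return candidates[int(np.argmax(scores))]

print(linear_blending([1, 0], [0.7, 0.7]))             # proportional mix
print(probabilistic_shared_control([1, 0], [0.7, 0.7]))  # uncertainty-weighted pick
```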
Article
Full-text available
Shared control is an increasingly popular approach to facilitate control and communication between humans and intelligent machines. However, there is little consensus in guidelines for design and evaluation of shared control, or even in a definition of what constitutes shared control. This lack of consensus complicates cross-fertilization of shared control research between different application domains. This paper provides a definition for shared control in the context of previous definitions, and a set of general axioms for design and evaluation of shared control solutions. The utility of the definition and axioms is demonstrated by applying them to four application domains: automotive, robot-assisted surgery, brain–machine interfaces, and learning. Literature is discussed for each of these four domains in light of the proposed definition and axioms. Finally, to facilitate design choices for other applications, we propose a hierarchical framework for shared control that links the shared control literature with traded control, co-operative control, and other human–automation interaction methods. Future work should reveal the generalizability and utility of the proposed shared control framework in designing useful, safe, and comfortable interaction between humans and intelligent machines.
Conference Paper
Full-text available
In the near future, robots and people will work hand in hand. Through technical development, robots will be able to follow social rules, interact and communicate with people and move freely in the environment. The number of these so-called social robots will increase significantly, especially in production spaces, forming hybrid human-robot teams. This expected increasing integration of robots in production environments raises questions on how to design an ideal robot for hybrid collaboration. While most of the research focuses on the technical aspects of human-machine interactions, there is still a strong need for research on the psychological and social aspects that influence the cooperation within hybrid teams.
Article
Full-text available
This paper presents a motion planning system for robotic devices to be adopted in assistive or rehabilitation scenarios. The proposed system is grounded on a Learning by Demonstration approach based on Dynamic Movement Primitives (DMP) and presents a high level of generalization allowing the user to perform activities of daily living. The proposed approach has been experimentally validated on a robotic arm (i.e. the Kuka LWR4+) attached to a human subject's wrist. Two experimental sessions have been carried out in order to: 1) evaluate the differences between our approach and the one proposed in [13] in terms of reconstruction error between the demonstrated trajectory and the learned one, and in terms of memory size required to record the database of DMP parameters; 2) measure the generalization level of the proposed system with respect to the variation of the object positions by evaluating the success rate of the task execution. The experimental results demonstrate that the proposed approach allows (i) reproducing the user's personal motion style with high accuracy and (ii) efficiently generalizing with respect to the change of object position. Furthermore, a significant reduction of memory allocation for the database can be achieved, with a consequent significant saving in computation time.
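For readers unfamiliar with DMPs, the following minimal one-dimensional rollout sketch (hypothetical parameters, not the paper's implementation) shows the standard formulation: a spring-damper transformation system pulls the state towards the goal g, while a learned forcing term f(x), driven by a canonical phase variable x, shapes the motion style.

```python
# Minimal 1-D Dynamic Movement Primitive rollout (illustrative parameters).
import numpy as np

def rollout_dmp(y0, g, weights, centers, widths, tau=1.0, dt=0.01,
                alpha_z=25.0, beta_z=6.25, alpha_x=1.0):
    y, v, x = y0, 0.0, 1.0
    trajectory = [y]
    for _ in range(int(tau / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)          # RBF basis activations
        f = (psi @ weights) / (psi.sum() + 1e-10) * x * (g - y0)   # forcing term
        dv = (alpha_z * (beta_z * (g - y) - v) + f) / tau   # transformation system
        dy = v / tau
        dx = -alpha_x * x / tau                             # canonical system
        v, y, x = v + dv * dt, y + dy * dt, x + dx * dt
        trajectory.append(y)
    return np.array(trajectory)

centers = np.linspace(0, 1, 10)
traj = rollout_dmp(y0=0.0, g=1.0, weights=np.zeros(10),
                   centers=centers, widths=np.full(10, 25.0))
print(traj[-1])  # converges towards the goal g = 1.0
```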
Article
Full-text available
The development of technological applications that allow people to control and embody external devices within social interaction settings represents a major goal for current and future brain-computer interface (BCI) systems. Prior research has suggested that embodied systems may ameliorate BCI end-users' experience and accuracy in controlling external devices. Along these lines, we developed an immersive P300-based BCI application with a head-mounted display for virtual-local and robotic-remote social interactions and explored in a group of healthy participants the role of proprioceptive feedback in the control of a virtual surrogate (Study 1). Moreover, we compared the performance of a small group of people with spinal cord injury (SCI) to a control group of healthy subjects during virtual and robotic social interactions (Study 2), where both groups received a proprioceptive stimulation. Our attempt to combine immersive environments, BCI technologies and neuroscience of body ownership suggests that providing realistic multisensory feedback still represents a challenge. Results have shown that both healthy participants and people living with SCI used the BCI within the immersive scenarios with good levels of performance (as indexed by task accuracy, optimization calls and Information Transfer Rate) and perceived control of the surrogates. Proprioceptive feedback did not contribute to alter performance measures and body ownership sensations. Further studies are necessary to test whether sensorimotor experience represents an opportunity to improve the use of future embodied BCI applications.
Article
Full-text available
Wheelchair-mounted robotic arms have been commercially available for a decade. In order to operate these robotic arms, a user must have a high level of cognitive function. Our research focuses on replacing a manufacturer-provided, menu-based interface with a vision-based system while adding autonomy to reduce the cognitive load. Instead of manual task decomposition and execution, the user explicitly designates the end goal, and the system autonomously retrieves the object. In this paper, we present the complete system which can autonomously retrieve a desired object from a shelf. We also present the results of a 15-week study in which 12 participants from our target population used our system, totaling 198 trials.
Conference Paper
Full-text available
Assistive robotic arms are increasingly enabling users with upper extremity disabilities to perform activities of daily living on their own. However, the increased capability and dexterity of the arms also makes them harder to control with simple, low-dimensional interfaces like joysticks and sip-and-puff interfaces. A common technique to control a high-dimensional system like an arm with a low-dimensional input like a joystick is through switching between multiple control modes. However, our interviews with daily users of the Kinova JACO arm identified mode switching as a key problem, both in terms of time and cognitive load. We further confirmed objectively that mode switching consumes about 17.4% of execution time even for able-bodied users controlling the JACO. Our key insight is that using even a simple model of mode switching, like time optimality, and a simple intervention, like automatically switching modes, significantly improves user satisfaction.
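A toy Python sketch of the kind of time-optimal intervention described (hypothetical modes and goal knowledge, not the study's assistance policy): instead of waiting for a manual mode switch, the system selects the mode whose controllable axes allow the most progress towards a known goal.

```python
# Illustrative sketch only: pick the control mode that best reduces the
# remaining distance to a known goal. Modes and positions are hypothetical.
import numpy as np

MODES = {                      # axes controllable in each mode
    "mode_xy": [np.array([1, 0, 0]), np.array([0, 1, 0])],
    "mode_z":  [np.array([0, 0, 1])],
}

def best_mode(ee_position, goal):
    """Return the mode whose axes allow the largest progress towards the goal."""
    to_goal = np.asarray(goal) - np.asarray(ee_position)
    def progress(axes):
        return sum(abs(np.dot(axis, to_goal)) for axis in axes)
    return max(MODES, key=lambda m: progress(MODES[m]))

print(best_mode(ee_position=[0.0, 0.0, 0.0], goal=[0.05, 0.02, 0.4]))  # mode_z
```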
Article
Full-text available
This paper presents a highly interactive and immersive Virtual Reality Training System (VRTS) ("beWare of the Robot") in the form of a serious game that simulates in real-time the cooperation between industrial robotic manipulators and humans, executing simple manufacturing tasks. The scenario presented refers to collaborative handling in tape-laying for building aerospace composite parts. The tools, models and techniques developed and used to build the "beWare of the Robot" application are described. System setup and configuration are presented in detail, as well as user tracking and navigation issues. Special emphasis is given to the interaction techniques used to facilitate implementation of virtual human–robot (HR) collaboration. Safety issues, such as contacts and collisions, are mainly tackled through "emergencies", i.e. warning signals in terms of visual stimuli and sound alarms. Mental safety is of utmost priority and the user is provided with augmented situational awareness and enhanced perception of the robot's motion due to immersion and real-time interaction offered by the VRTS as well as by special warning stimuli. The short-term goal of the research was to investigate users' enhanced experience and behaviour inside the virtual world while cooperating with the robot and positive pertinent preliminary findings are presented and briefly discussed. In the longer term, the system can be used to investigate acceptability of H–R collaboration and, ultimately, serve as a platform for programming collaborative H–R manufacturing cells.
Article
Full-text available
The user interface development of assistive robotic manipulators can be traced back to the 1960s. Studies include kinematic designs, cost-efficiency, user experience involvements, and performance evaluation. This paper reviews studies conducted with clinical trials using activities of daily living (ADLs) tasks to evaluate performance categorized using the International Classification of Functioning, Disability, and Health (ICF) frameworks, in order to outline the scope of current research and provide suggestions for future studies. We conducted a literature search of assistive robotic manipulators from 1970 to 2012 in PubMed, Google Scholar, and University of Pittsburgh Library System - PITTCat. Twenty relevant studies were identified. Studies were separated into two broad categories: user task preferences and user-interface performance measurements of commercialized and developing assistive robotic manipulators. The outcome measures and ICF codes associated with the performance evaluations are reported. Suggestions for future studies include (1) standardized ADL tasks for the quantitative and qualitative evaluation of task efficiency and performance to build comparable measures between research groups, (2) studies relevant to the tasks from user priority lists and ICF codes, and (3) appropriate clinical functional assessment tests with consideration of constraints in assistive robotic manipulator user interfaces. In addition, these outcome measures will help physicians and therapists build standardized tools while prescribing and assessing assistive robotic manipulators.
Article
Full-text available
Mixed Reality (MR) visual displays, a particular subset of Virtual Reality (VR) related technologies, involve the merging of real and virtual worlds somewhere along the 'virtuality continuum' which connects completely real environments to completely virtual ones. Augmented Reality (AR), probably the best known of these, refers to all cases in which the display of an otherwise real environment is augmented by means of virtual (computer graphic) objects. The converse case on the virtuality continuum is therefore Augmented Virtuality (AV). Six classes of hybrid MR display environments are identified. However quite different groupings are possible and this demonstrates the need for an efficient taxonomy, or classification framework, according to which essential differences can be identified. An approximately three-dimensional taxonomy is proposed comprising the following dimensions: extent of world knowledge, reproduction fidelity, and extent of presence metaphor.
Chapter
Assistive robot manipulators must be able to autonomously pick and place a wide range of novel objects to be truly useful. However, current assistive robots lack this capability. Additionally, assistive systems need to have an interface that is easy to learn, to use, and to understand. This paper takes a step forward in this direction. We present a robot system comprised of a robotic arm and a mobility scooter that provides both pick-and-drop and pick-and-place functionality for open world environments without modeling the objects or environment. The system uses a laser pointer to directly select an object in the world, with feedback to the user via projecting an interface into the world. Our evaluation over several experimental scenarios shows a significant improvement in both runtime and grasp success rate relative to a baseline from the literature [5], and furthermore demonstrates accurate pick and place capabilities for tabletop scenarios. Keywords: Assistive robotics, Grasping, Human-Robot Interaction
Article
Working with collaborative robots (cobots) can be a potential source of stress for their operators. However, research on specific factors that affect users’ stress levels when working with a cobot is still scarce. This study is the first to investigate the levels of psychological (primary and secondary stress appraisal) and physiological (heart rate) stress in human operators working in two different cobot modes (i.e., manual and autonomous). We applied an experimental within-subject repeated-measures design to 45 healthy adults (26 women, 19 men). The results show that the levels of secondary stress appraisal were lower and the heart rate levels were higher in the autonomous cobot mode. The results suggest that, when working with a cobot, control plays a key role in the emotional, cognitive, and physiological reactions during the human-robot collaboration. Implications for organizational practice are discussed.
Conference Paper
The deployment of robots at home must involve robots with pre-defined skills and the capability of personalizing their behavior by non-expert users. A framework to tackle this personalization is presented and applied to an automatic feeding task. The personalization involves the caregiver providing several examples of feeding using Learning-by-Demonstration, and a ProMP formalism to compute an overall trajectory and the variance along the path. Experiments show the validity of the approach in generating different feeding motions to adapt to the user's preferences, automatically extracting the relevant task parameters. The importance of the nature of the demonstrations is also assessed, and two training strategies are compared.
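A minimal, hypothetical sketch of the ProMP idea (basis functions and parameters invented for illustration): each demonstration is compressed into a vector of basis-function weights, and the mean and covariance of those weights across demonstrations yield an overall trajectory together with its variance along the path.

```python
# Illustrative ProMP-style sketch: fit weights per demonstration, then use
# the weight mean/covariance to recover a mean trajectory and its variance.
import numpy as np

def fit_weights(demo, centers, width=0.02, reg=1e-6):
    t = np.linspace(0, 1, len(demo))
    phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width))
    phi /= phi.sum(axis=1, keepdims=True)                  # normalized RBF basis
    return np.linalg.solve(phi.T @ phi + reg * np.eye(len(centers)), phi.T @ demo)

centers = np.linspace(0, 1, 15)
rng = np.random.default_rng(1)
demos = [np.sin(np.linspace(0, np.pi, 100)) + 0.05 * rng.normal(size=100)
         for _ in range(8)]                                # noisy toy demonstrations
W = np.array([fit_weights(d, centers) for d in demos])     # one weight vector per demo
w_mean, w_cov = W.mean(axis=0), np.cov(W.T)

t = np.linspace(0, 1, 100)
phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * 0.02))
phi /= phi.sum(axis=1, keepdims=True)
mean_traj = phi @ w_mean                                    # overall trajectory
var_traj = np.einsum("ij,jk,ik->i", phi, w_cov, phi)        # variance along the path
print(mean_traj.shape, var_traj.max())
```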
Article
In this paper, we report on the results of a study that was conducted to examine how users suffering from severe upper-extremity disabilities can control a 6 Degrees-of-Freedom (DOF) robotic arm to complete complex activities of daily living. The focus of the study is not on assessing the robot arm but on examining the human-robot interaction patterns. Three participants were recruited. Each participant was asked to perform three tasks: eating three pieces of pre-cut bread from a plate, drinking three sips of soup from a bowl, and opening a right-handed door with a lever handle. Each of these tasks was repeated three times. The arm was mounted on the participant's wheelchair, and the participants were free to move the arm as they wished to complete these tasks. Each task consisted of a sequence of modes, where a mode is defined as arm movement in one DOF. Results show that participants used a total of 938 mode movements with an average of 75.5 (std 10.2) modes for the eating task, 70 (std 8.8) modes for the soup task, and 18.7 (std 4.5) modes for the door opening task. Tasks were then segmented into smaller subtasks. It was found that there are patterns of usage per participant and per subtask. These patterns can potentially allow a robot to learn from a user's demonstration what task is being executed and by whom, and respond accordingly to reduce user effort.
Conference Paper
Assistive robotic manipulators have the potential to improve the lives of people with motor impairments. They can enable individuals to perform activities such as pick-and-place tasks, opening doors, pushing buttons, and can even provide assistance in personal hygiene and feeding. However, robotic arms often have more degrees of freedom (DoF) than the dimensionality of their control interface, making them challenging to use—especially for those with impaired motor abilities. Our research focuses on enabling the control of high-DoF manipulators to motor-impaired individuals for performing daily tasks. We make use of an individual's residual motion capabilities, captured through a Body-Machine Interface (BMI), to generate control signals for the robotic arm. These low-dimensional controls are then utilized in a shared-control framework that shares control between the human user and robot autonomy. We evaluated the system by conducting a user study in which 6 participants performed 144 trials of a manipulation task using the BMI interface and the proposed shared-control framework. The 100% success rate on task performance demonstrates the effectiveness of the proposed system for individuals with motor impairments to control assistive robotic manipulators.
Article
We report a small dual cohort pilot study with traumatic spinal cord injured (SCI) subjects designed to investigate the utility of a wheelchair-mounted robotic arm for these subjects. The UCF-MANUS, a vision-based 6DOF assistive robotic arm, has been designed to aid individuals with impaired upper-limb function in completing tasks of daily living that they would otherwise be unable to complete themselves. Pick-and-place IADL tasks were designed and ten (10) users post-SCI were selected under IRB guidelines to be trained and tested with the system for 1 to 2 h weekly over a period of three weeks. During this time, they controlled the robot either through a manual or an autonomous (supervised) mode of operation. Baseline characteristics (pre-study), quantitative performance metrics (during study), and psychometrics (post-study) were obtained and statistically analyzed to test a set of hypotheses related to performance and satisfaction with the two control modes. At the end of the study, both the autonomous and the manual mode had comparable task completion times while user effort required for operating the robot in autonomous mode was significantly less than that for the manual mode. However, the autonomous mode failed to commensurately raise the user's level of satisfaction. Over the three-week study, the manual mode users showed a pronounced learning effect in terms of reducing mean task completion time and number of commands while the auto mode users showed improvement in terms of reduction of variability. Based on qualitative feedback and quantitative results, possible directions for system design are presented to concurrently achieve better performance and satisfaction outcomes.