Article

The use of Virtual Fixtures to enhance telemanipulation with time delay


Abstract

This paper reviews the notion of virtual fixtures and describes a study demonstrating that such fixtures can reduce the performance degradation caused by time delay in telemanipulation tasks. Just as fixtures in the real world can enhance manual performance, virtual fixtures are computer-generated sensations that have been shown to provide similar benefits in telemanipulation. This study investigates the use of virtual fixtures by overlaying simple haptic surfaces during a peg-insertion telemanipulation task with time delay. Six subjects wearing an exoskeleton to control a remote robot arm were tested with no time delay, 250 ms delay, and 450 ms delay between master and slave. A Fitts' law paradigm was used to quantify operator performance. It was found that without virtual fixtures, task completion times increased by 36% for the 250 ms delay and 45% for the 450 ms delay. With overlaid virtual fixtures, no performance degradation was recorded for time-delayed teleoperation.
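The abstract quantifies operator performance with a Fitts' law paradigm. As a minimal sketch of the Shannon formulation of Fitts' law (the regression coefficients a and b below are illustrative placeholders, not values from the paper):

```python
import math

def fitts_id(distance, width):
    """Shannon form of Fitts' index of difficulty, in bits: ID = log2(D/W + 1)."""
    return math.log2(distance / width + 1)

def predicted_mt(a, b, distance, width):
    """Predicted movement time MT = a + b * ID, with a, b fit by linear regression."""
    return a + b * fitts_id(distance, width)

# A 3 cm reach onto a 1 cm target has ID = log2(4) = 2 bits; with hypothetical
# coefficients a = 0.1 s, b = 0.2 s/bit, the predicted movement time is 0.5 s.
mt = predicted_mt(0.1, 0.2, 3.0, 1.0)
```

In a study like this one, task completion times under each delay condition would be regressed against ID to compare the slopes with and without fixtures.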


... The assistance may be materialized by visual cues in the VE or by active assistance to reach and grab an object. The FOLLOW-ME technique uses virtual guides [9][13] for this active assistance. ...
... FOLLOW-ME [8] is a 3D interaction technique used to speed up the selection task in teleoperation systems. It has two main characteristics: • the VE is divided into three zones, in each of which the interaction has its own granularity: the free manipulation zone, the scaled manipulation zone and the precise manipulation zone; • in the precise manipulation zone, virtual guides are used to handle both precision and security of manipulation (for a review, one may refer to [9][10][13]). ...
... The assistance may be materialized by visual cues in the VE or by active assistance to reach and grab an object. The FOLLOW-ME technique uses virtual guides [9][13] for this active assistance. In this article, we first try to specify the situations in which the FOLLOW-ME technique may bring benefits to the user. ...
Conference Paper
Full-text available
This paper gives preliminary results on the use of an interaction technique called FOLLOW-ME to speed up the selection task in teleoperation systems. The implementation of an interaction between a user and a virtual environment (VE) in virtual reality (VR) may use various techniques. However, in the case of teleoperation, the interaction must be very precise and comfortable for the user. The model associated with the FOLLOW-ME technique splits the virtual environment into three zones, in each of which a specific interaction model is used: a free manipulation zone, a scaled manipulation zone and a precise manipulation zone. Each of the three zones is characterized by a specific interaction granularity. In the precise manipulation zone, we use the concept of virtual guides to assist the user in achieving the task. In this paper, our aim is to show that the FOLLOW-ME technique is well suited to selection in teleoperation tasks. To do this, we first compared the FOLLOW-ME technique with classical interaction techniques in a virtual environment where different targets are situated at different depths and may move. The preliminary results show that our technique is more efficient than the classical go-go and ray-casting techniques, in the sense that the task is more reproducible and easier to accomplish for the user. In a second stage, we use this result to design selection procedures for the ARITI teleoperation system and show that the use of FOLLOW-ME brings benefits to the user.
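The three-zone model above can be sketched as a simple granularity selector driven by the pointer's distance to the target. The zone radii and motion gains below are illustrative assumptions; the FOLLOW-ME papers define the zones geometrically around each target rather than with these fixed numbers:

```python
def manipulation_gain(distance_to_target, free_radius=1.0, precise_radius=0.3):
    """Map distance to a FOLLOW-ME-style zone and an illustrative motion gain.

    Thresholds and gains are placeholders, not values from the paper.
    """
    if distance_to_target > free_radius:
        return "free", 1.0       # unconstrained 1:1 manipulation
    if distance_to_target > precise_radius:
        return "scaled", 0.5     # motion damped for finer control near the target
    return "precise", 0.1        # virtual guides active; finest granularity
```

At each frame the master-device displacement would be multiplied by the returned gain before being applied to the virtual pointer, with guides activated only in the precise zone.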
... The concept of forbidden-region VFs has mostly been implemented in computer-access applications, for example by creating haptic cone- or tunnel-shaped VFs around computer icons to pull the cursor towards the target [23]. Guidance VFs have typically been applied to path-following and peg-in-the-hole tasks [24][25][26][27]. In a series of experimental studies, Covarrubias et al. [28][29][30][31][32] projected guidance VFs into a set of path-following tasks, such as sketching and foam cutting, to assist adults with Down syndrome and developmental disabilities. ...
... In a series of experimental studies, Covarrubias et al. [28][29][30][31][32] projected guidance VFs into a set of path-following tasks, such as sketching and foam cutting, to assist adults with Down syndrome and developmental disabilities. Implementations of VFs have demonstrated increased precision and speed in performing remote tasks [26,33,34] and faster manipulation [27]. ...
Article
Children with disabilities typically have fewer opportunities for manipulation and play, due to their physical limitations, resulting in delayed cognitive and perceptual development. A switch-controlled device can remotely do tasks for a child, or a human helper can mediate the child's interaction with the environment during play. However, these approaches disconnect children from the environment and limit their opportunities for interactive play with objects. This paper presents a novel application of a robotic system with virtual assistance, implemented by virtual fixtures, to enhance interactive object play for children in a set of coloring tasks. The assistance conditions included zero assistance (No-walls), medium-level assistance (Soft-walls) and high-level assistance (Rigid-walls), corresponding to the magnitude of the virtual fixture forces. The system was tested with fifteen able-bodied adults, and the results validated its effectiveness in improving user performance. The Soft- and Rigid-walls conditions significantly outperformed the No-walls condition and led to roughly the same performance improvements in terms of: (a) a statistically significant reduction in the ratio of the colored area outside to the colored area inside the region of interest (with large effect sizes, Cohen's d > .8), and (b) a substantial reduction in the distance travelled outside the borders (with large effect sizes). The developed platform will next be tested with typically developing children and then with children with disabilities. Future development will include adding artificial intelligence to adaptively tune the level of assistance according to the user's level of performance (i.e. providing more assistance only when the user is committing more errors).
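The No-/Soft-/Rigid-walls conditions differ only in the magnitude of the virtual fixture force. A hedged sketch of such a spring-like virtual wall, where the stiffness values per condition are invented placeholders rather than the study's parameters:

```python
def wall_force(penetration_m, condition):
    """Restoring force (N) pushing the tool back inside the region of interest.

    penetration_m: how far the tool has crossed the virtual wall, in metres
    (zero or negative means the tool is still inside the allowed region).
    Stiffness values are illustrative assumptions, not the paper's settings.
    """
    stiffness_n_per_m = {
        "No-walls": 0.0,       # zero assistance: the wall exerts no force
        "Soft-walls": 200.0,   # medium assistance: compliant wall
        "Rigid-walls": 2000.0, # high assistance: stiff wall
    }[condition]
    # Spring law F = k * x, applied only once the border is actually crossed.
    return stiffness_n_per_m * max(penetration_m, 0.0)
```

A haptic loop would evaluate this every millisecond or so and apply the force opposite to the penetration direction, which is what makes the Rigid-walls condition feel like a hard border.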
... Rosenberg introduced the concept of virtual guides (Virtual Fixtures) for the first time, in a telepresence system (Rosenberg, 1992; Rosenberg, 1993). In this context, the operator controls a real robot via an exoskeleton to perform insertion tasks. ...
... These aids include virtual guides, which have proved to be valuable tools (Rosenberg, 1993). Otmane et al. introduced an augmented reality system for teleoperation using virtual guides for precise selection and manipulation of objects (see Figure 5.1). ...
Article
Full-text available
The recent advancement in the field of high-quality computer graphics and the capability of inexpensive computers to render realistic 3D scenes have made it possible to develop virtual environments where two or more users can co-exist and work collaboratively to achieve a common goal. Such environments are called Collaborative Virtual Environments (CVEs). The potential application domains of CVEs are many, such as military, medical, assembly, computer-aided design, teleoperation, education, games and social networks. One of the problems related to CVEs is the user's low level of awareness about the status, actions and intentions of his/her collaborator, which not only reduces users' performance but also leads to unsatisfactory results. In addition, collaborative tasks performed without proper computer-generated assistance are very difficult and more prone to errors. The basic theme of this thesis is to provide assistance in collaborative 3D interaction in CVEs. In this context, we study and develop the concept of multimodal (audio, visual and haptic) assistance of a user or group of users. Our study focuses on how we can assist users to collaboratively interact with the entities of CVEs. We propose to study and analyze the contribution of multimodal assistance in collaborative (synchronous and asynchronous) interaction with objects in the virtual environment. Indeed, we propose and implement various multimodal virtual guides. These guides are evaluated through a series of experiments where a selection/manipulation task is carried out by users in both synchronous and asynchronous modes. The experiments were carried out in the LISA (Laboratoire d'Ingénierie et Systèmes Automatisés) lab at the University of Angers and the IBISC (Informatique, Biologie Intégrative et Systèmes Complexes) lab at the University of Evry. In these experiments users were asked to perform a task under various conditions (with and without guides).
Analysis was done on the basis of task completion time, errors and users' learning. Questionnaires were used for subjective evaluations. The findings of this research work can contribute to the development of collaborative systems for teleoperation, assembly tasks, e-learning, rehabilitation, computer-aided design and entertainment.
... Concepts and approaches: The work proposed by Rosenberg [20] and by Sayers [21] explores the design and implementation of computer-generated entities known as virtual fixtures. To illustrate this concept, Rosenberg gives the following example, ...
... Concepts and approaches: The work proposed by Rosenberg [20] and by Sayers [21] explores the design and implementation of computer-generated entities known as virtual fixtures. To illustrate this concept, Rosenberg gives the following example: "When asking one to draw a straight line segment in the real world, human performance can be greatly enhanced by using a simple tool such as a ruler. ...
Article
Full-text available
The primary objective of rehabilitation robotics is to fully or partly restore the disabled user's manipulative function by placing a robot arm between the user and the environment. Our assistance system is composed of a control-command station for the disabled person and a manipulator arm mounted on a mobile robot. The main constraints of such systems are flexibility, to adapt to each user's capabilities; modularity, to make the system versatile; and reliability and cost, to be affordable. A good compromise between those constraints is reached by a semiautonomous system in which robotics gives the system as much autonomy as possible and man-machine co-operation palliates both the deficiencies of the person due to the handicap and the limits of the machine. The approach consists in determining the autonomy level reachable with an affordable machine using robotic solutions. Taking into account the capabilities of both the person and the robot, the man-machine co-operation defines the best task sharing to render the person a service such as "go and see" or "fetch and bring an object back". The man-machine interface for the perception and control of the semiautonomous robotic system is based on new remote-control approaches: virtual reality or enhanced reality. Enhanced- and virtual-reality techniques aim at immersing the user in the site where the mission is in progress. The increasing interest of video game firms in the immersion idea contributes to reducing the cost of such technology. An important feature of the MMI is the use of virtual fixtures as perceptual overlays to enhance human operator performance while teleoperating a robot. Virtual fixtures improve accuracy and time in performing a task and may reduce both the human operator's mental processing and stress.
Article
Full-text available
Recent advances in the field of computer graphics and the ability of personal computers to render realistic 3D scenes have made it possible to develop virtual environments in which several users can co-exist and work together to achieve a common goal. These environments are called Collaborative Virtual Environments (CVEs). The potential applications of CVEs lie in the military, medical, assembly, computer-aided design, teleoperation, education, gaming and social network domains. One of the problems related to CVEs is users' limited awareness of the state, actions and intentions of their collaborator(s). This not only reduces collective performance, but also leads to unsatisfactory results. Moreover, collaborative or cooperative tasks performed without any aid or assistance are more difficult and more error-prone. In this thesis, we study the influence of multimodal guides on user performance during collaborative tasks in a virtual environment (VE). We propose a number of guides based on the visual, auditory and haptic modalities. In this context, we study their guidance quality and examine their influence on users' awareness, co-presence and coordination while performing tasks. To this end, we developed a software architecture that enables collaboration between two users (distributed or co-located; it can be extended to more users). Using this architecture, we developed applications that not only enable collaborative work, but also provide multimodal assistance to the users.
The collaborative work supported by these applications includes peg-in-hole tasks, cooperative telemanipulation via two robots, and tele-guidance for writing or drawing. To assess the relevance and influence of the proposed guides, a series of experiments was carried out at LISA (Laboratoire d'Ingénierie et Systèmes Automatisés) at the University of Angers and at the IBISC laboratory (Informatique, Biologie Intégrative et Systèmes Complexes) at the University of Evry. In these experiments, users were asked to perform a variety of tasks under different conditions (with and without guides). The analysis was based on task completion time, errors and user learning. Questionnaires were used for subjective evaluations. This work contributes significantly to the development of collaborative systems for teleoperation, assembly simulation, the learning of technical gestures, rehabilitation, computer-aided design and entertainment.
... It mimicked for the first time an interaction with computer-generated shapes using a visual platform and natural user modalities, much like those people use when performing the same tasks without a computer. Since then, immersive technology has improved with the growth of VR and AR applications [31], such as the head-mounted VR display and DataGlove for identifying hand gestures developed by the VPL Company in the late 1980s [32] and the pioneering virtual fixture systems for AR training [33] and maintenance assistance [34] in the 1990s. Table 1 shows the development milestones in the history of immersive technology. ...
Article
With the expanded digitalization of the manufacturing and product development process, research into the use of immersive technology in smart manufacturing has increased. The use of immersive technology is theorized to increase the productivity of all steps in the product development process, from the start of the concept generation phase to assembling the final product. Many aspects of immersive technology are considered, including techniques for CAD model conversion and rendering, types of VR/AR displays, interaction modalities, as well as its integration with different areas of product development. The purpose of this survey paper is to investigate the potential applications of immersive technology and the advantages and potential drawbacks that should be considered when integrating the technology into the workplace. The potential application space is broad, and the possibilities continue to expand as the technology becomes more advanced and more affordable for commercial businesses to implement on a large scale. The technology is currently being utilized in concept generation and in the design or engineering of new products. Additionally, immersive technology has great potential to increase the productivity of assembly-line workers and of the factory layout/functionality, and could provide a more hands-on form of training, which leads to the conclusion that immersive technology is a step toward the future in terms of smart product development strategies for employers to implement.
... A standard tool in haptics for generating such feedback is haptic constraints, also denoted as virtual fixtures. The concept of virtual fixtures was introduced by Rosenberg (1992) to support the operator during a telemanipulation task and was also evaluated for time-delayed systems (Rosenberg, 1993a). Virtual fixtures are control algorithms which regulate manipulator motion, surveyed in Bowyer et al. (2013). ...
Article
Full-text available
Certain telerobotic applications, including telerobotics in space, pose particularly demanding challenges to both technology and humans. Traditional bilateral telemanipulation approaches often cannot be used in such applications due to technical and physical limitations such as long and varying delays, packet loss, and limited bandwidth, as well as high reliability, precision, and task duration requirements. In order to close this gap, we research model-augmented haptic telemanipulation (MATM) that uses two kinds of models: a remote model that enables shared autonomous functionality of the teleoperated robot, and a local model that aims to generate assistive augmented haptic feedback for the human operator. Several technological methods that form the backbone of the MATM approach have already been successfully demonstrated in accomplished telerobotic space missions. On this basis, we have applied our approach in more recent research to applications in the fields of orbital robotics, telesurgery, caregiving, and telenavigation. In the course of this work, we have advanced specific aspects of the approach that were of particular importance for each respective application, especially shared autonomy, and haptic augmentation. This overview paper discusses the MATM approach in detail, presents the latest research results of the various technologies encompassed within this approach, provides a retrospective of DLR's telerobotic space missions, demonstrates the broad application potential of MATM based on the aforementioned use cases, and outlines lessons learned and open challenges.
... Simultaneously, at the beginning of the 1990s, the Boeing Corporation created the first prototype of an AR system for showing employees how to set up a wiring tool (Carmigniani et al., 2011). At the same time, Rosenberg and Feiner developed an AR fixture for maintenance assistance, showing that operator performance was enhanced by virtual information added to the fixture being repaired (Rosenberg, 1993). In 1993 Loomis and colleagues produced an AR GPS-based system to help the blind in assisted navigation by adding spatial audio information (Loomis et al., 1998). ...
Article
Full-text available
The recent appearance of low-cost virtual reality (VR) technologies, like the Oculus Rift, the HTC Vive and the Sony PlayStation VR, and Mixed Reality Interfaces (MRITF), like the HoloLens, is attracting the attention of users and researchers, suggesting it may be the next large stepping stone in technological innovation. However, the history of VR technology is longer than it may seem: the concept of VR was formulated in the 1960s and the first commercial VR tools appeared in the late 1980s. For this reason, during the last 20 years, hundreds of researchers explored the processes, effects, and applications of this technology, producing thousands of scientific papers. What is the outcome of this significant research work? This paper aims to provide an answer to this question by exploring, using advanced scientometric techniques, the existing research corpus in the field. We collected all the existing articles about VR in the Web of Science Core Collection scientific database, and the resulting dataset contained 21,667 records for VR and 9,944 for augmented reality (AR). The bibliographic record contained various fields, such as author, title, abstract, country, and all the references (needed for the citation analysis). The network and cluster analysis of the literature showed a composite panorama characterized by changes and evolutions over time. Indeed, while until 5 years ago the main publication media on VR were both conference proceedings and journals, more recently journals constitute the main medium of communication. Similarly, if at first computer science was the leading research field, nowadays clinical areas have increased, as has the number of countries involved in VR research. The present work discusses the evolution and changes over time of the use of VR in the main areas of application, with an emphasis on VR's expected future capacities, increases and challenges.
We conclude by considering the disruptive contribution that VR/AR/MRITF will be able to make in scientific fields, as well as in human communication and interaction, as already happened with the advent of mobile phones, by increasing the use and development of scientific applications (e.g., in clinical areas) and by modifying social communication and interaction among people.
... One application of VR techniques in CAT systems is directed towards fitting abstract perceptual information within the human-machine interface. The work proposed by Rosenberg in 1992 (Rosenberg, 1993) explores the design and implementation of computer-generated entities known as virtual fixtures. Another work, using synthetic fixtures, was proposed by Sayers in 1993. ...
Article
The World Wide Web (WWW) has today become an important means of bringing robots to the public. Several robot systems which can be operated over the WWW have been developed over the last few years. On the other hand, many current teleoperation architectures allow one Human Operator (HO) to perform complex remote tasks. This paper presents an extension of our ARITI system (acronym for Augmented Reality Interface for Telerobotic applications via Internet) supporting cooperative remote control via the Web. The current teleoperation architecture allows multiple remote operators to work together in cooperative mode and perform complex remote tasks. To enable cooperative remote control via the Internet, active virtual guides and predictive display (superimposing virtual models on video image feedback) are used to enhance the performance of each HO. The distributed predictive display makes cooperative remote control easy and fast.
... Similar to this is the use of virtual fixtures. In a peg-in-hole experiment that suffered 45% task-execution-time degradation under 450 ms delay, this was reduced to 3% using virtual fixtures [117]. • Predictive/simulated displays can be used when autonomous control at the remote site is difficult to achieve. This approach uses a simulation at the master side for real-time interaction by the operator. ...
... In order to make the interaction in VE easier and increase user performance, various aids like stereoscopic display, 3D audio or force feedback [19] may be utilized. In the context of assistance for 3D interaction, virtual guides [14] are valuable tools, for example in the context of teleoperation [6]. ...
Article
Full-text available
Interaction techniques play a vital role in enriching the virtual environment and have profound effects on the user's performance and sense of presence, as well as the realism of the virtual environment (VE). In this paper we present new haptic guide models for object selection, used to augment the Follow-Me 3D interaction technique dedicated to object selection and manipulation. We divide the VE into three zones: in the first zone the user can navigate freely and doesn't need any guidance; the second zone provides visual guidance to the user near an object; and the third zone gives haptic guidance very near to the object. The haptic and visual guides assist the user in object selection. The paper presents two different models of the haptic guides, one for free and multidirectional selection and the second for precise and single-direction selection. The evaluation and comparison of these haptic guides is investigated.
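A haptic guide of the free, multidirectional kind described above can be sketched as a spring pull that activates only once the user's proxy is near the object. The gain and activation radius below are illustrative assumptions, not the paper's guide models:

```python
import math

def attraction_force(pos, target, gain=5.0, activation_radius=0.2):
    """Spring-like haptic guide: pull the proxy toward the target inside the guide zone.

    pos, target: 3D points as (x, y, z) tuples. Returns a force vector (N).
    gain and activation_radius are hypothetical parameters for illustration.
    """
    dx = [t - p for p, t in zip(pos, target)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist == 0.0 or dist > activation_radius:
        return (0.0, 0.0, 0.0)  # outside the guide zone (or already at the target)
    return tuple(gain * d for d in dx)  # spring pull, stronger farther from the target
```

A single-direction precise guide would instead project the pull onto one fixed axis, constraining the approach direction rather than attracting from all sides.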
... The concept of virtual fixtures was introduced and defined by Rosenberg (1993). Virtual fixtures were initially designed to help the user carry out robot teleoperation tasks (Sayers & Paul, 1994). ...
Article
Virtual reality (VR) is a technology covering a large field of applications among which are sports and video games. In both gaming and sporting VR applications, interaction techniques involve specific gestures such as catching or striking. However, such dynamic gestures are not currently being recognized as elementary task primitives, and have therefore not been investigated as such. In this paper, we propose a framework for the analysis of interaction in dynamic virtual environments (DVEs). This framework is based on three dynamic interaction primitives (DIPs) that are common to many sporting activities: catching, throwing, and striking. For each of these primitives, an original modeling approach is proposed. Furthermore, we introduce and formalize the concept of dynamic virtual fixtures (DVFs). These fixtures aim to assist the user in tasks involving interaction with moving objects or with objects to be set in movement. Two experiments have been carried out to investigate the influence of different DVFs on human performance in the context of ball catching and archery. The results reveal a significant positive effect of the DVFs, and that DVFs could be either classified as performance-assisted or learning-assisted.
... Even with a visual display of the work area, the arm is difficult to control, since the visual display cannot convey properties like heft, compliance or resistance. Haptic feedback mimics these properties and affords operators a greater degree of control (see [56] for an example). Haptic feedback is also valuable in laparoscopic surgery simulation, where the goal is to minimize tissue damage [63]. ...
... Typical examples are the graphical predictive displays, described in the previous section, or some form of artificial haptic (kinesthetic and/or tactile) feedback [40]. Other VR-based techniques include the use of virtual fixtures [31] or virtual mechanisms [18]. ...
Article
Full-text available
This report presents work in progress within the framework of a research project aiming at the development of a mobile robotic system to perform assistive tasks in a hospital environment. This robotic assistant will consist of a mobile robot platform equipped with a variety of on-board sensing and processing equipment as well as a small manipulator for performing simple fetch-and-carry operations. In this report, we focus on the design of the teleoperation system integrating virtual reality techniques and Web-based capabilities in the human operator interface. Related work in the literature in the field of intervention and service telerobotics is reviewed, and an overview of the methodologies that will be followed is presented. Some specific issues requiring particular attention for the design of a teleoperation system for the mobile robotic assistant are investigated, including: (a) the specification of the teleoperation modes supported by the system, integrating various automatic computer assistance and shared-autonomy behaviour-based control modes; (b) the design of the user interface, built on Java technology to enable web operation and support various multimodal VR-based functionalities; and (c) the integration with the other subsystems and control modules of the mobile robotic assistant, in the framework of a general teleoperation/telemanipulation control architecture.
... In situations involving complex virtual mock-ups, it may be useful to provide the operator with some form of guidance that facilitates task accomplishment. For example, virtual fixtures or haptic guides have proved to be efficient in tele-manipulation [24] or selection of objects in virtual environments [31]. Both accessibility testing and assembly simulations are interactive processes involving the operator and the handled objects, and hence simulation environments must be able to react according to the user's actions. ...
Article
This paper studies the benefits that haptic feedback can provide in performing complex maintenance tasks using virtual mock-ups. We carried out a user study consisting of two experiments where participants had to perform an accessibility task. A human-scale string-based haptic interface was used to provide the operator with haptic stimuli. A prop was used to provide grasp feedback. A mocap system tracks the user's hand and head movements while a 5DT data glove is used to measure finger flexion. In the first experiment the effects of haptic (collision) and visual feedback are investigated. In the second experiment we investigated the effect of haptic guidance on operator performance. The results were analyzed in terms of task completion time and collision avoidance. The experiments show that haptic stimuli proved to be more efficient than visual ones. In addition, haptic guidance helped the operators to correct trajectories and hence improve their performance.
... It may be a serious drawback showing the limit of the present choice to meet both passivity and transparency without conflict, as in previous studies. Anticipating local command structures inside stations with "reflex"-type responses and distributed intelligence could be an interesting alternative alongside the more common telemonitoring [28][29] and virtual fixture generation [30], to further improve the correlation between the operator's kinesthetic sense and the feedback returned from the remote environment, degraded by non-ideal distortions (time delay, noise, ...). ...
Article
Full-text available
Conditions are discussed for operating a dissymmetric human master-small (or micro) slave system in the best (large position gain, small velocity gain) conditions allowing higher operator dexterity when real effects (joint compliance, link flexion, delay and transmission distortion) are taken into account. It is shown that the advantage of the position PD feedback law in the ideal case no longer holds, and that a more complicated feedback law depending on real effects has to be implemented with an adapted transmission line. The drawback is a slowdown of master-slave interaction, suggesting the use of more advanced predictive methods for the master and a more intelligent control law for the slave.
... Typical examples are the graphical predictive displays, described above, or some form of artificial haptic (kinesthetic and/or tactile) feedback. Other VR-based techniques include the use of virtual fixtures (Rosenberg, 1993) or virtual mechanisms (Joly & Andriot, 1995). by performing some form of decision support function, that is, by providing suggestions or indications concerning the most suitable action plan and assist the human operator at the decision making process. ...
Chapter
Full-text available
This chapter has reviewed fundamental concepts and technologies of the general interdisciplinary field usually described by a combination of the terms Virtual, Augmented or Mixed Reality systems, with the emphasis being on their applications in robot teleoperation. We have first analysed the basics of VR and AR systems, which have shown great progress in research and development activities during the last decade, demonstrating a constantly increasing repertoire of useful practical applications in diverse domains of human activity. We have then described application scenarios of such VR technologies in the general field of robotics, with a particular focus on telerobotic applications. We have started by presenting a brief historical survey of the field of telerobotics, and identified the major benefits related to the integration of VR and AR techniques. Virtual environments can be seen as a means to achieve natural, intuitive, multimodal human/computer (and generally human/machine) interaction; in this sense, a VE can function as an efficient mediator between a human operator and a telerobot, with the main objectives being: (a) to enhance human perception of the remote task environment and therefore improve transparency of the telerobotic system, by enriching the visual information (complemented by other forms of sensory and sensori-motor stimuli) provided to the user, thus conveying complex data in a more natural and easier way; and (b) to contribute to the solution of the time-delay problem in bilateral teleoperation and improve stability of the telerobotic system, by extending the concept of predictive displays and offering a range of control metaphors for both operator assistance and robot autonomy sharing. We have presented a number of successful case studies, where VR techniques have been effectively applied in telerobotics, for the two main robotic system categories, namely (i) robot manipulators and (ii) mobile robotic vehicles.
A long-distance parallel telemanipulation experiment was described, where an intermediate virtual task representation was used involving direct hand actions by means of a VR glove device. The use of telerobotic technologies in a distance-training (virtual and remote laboratory) application has also been demonstrated, with very promising results in this important domain. As regards the field of mobile service robotics, two application scenarios have been described to highlight the benefits that can result from the integration of VR-based interfaces for the teleoperation of robotic vehicles for a variety of tasks, including service/intervention tasks and remote exploration. The link with the field of haptics is also discussed.
... The mental model for the TorqueBAR's feedback is similar to the idea of "virtual fixtures" defined by Rosenberg [13] as "abstract sensory information overlaid on top of reflected sensory feedback from a remote environment." By changing its centre-of-mass, the TorqueBAR can likewise spatially guide user movement. ...
Conference Paper
Kinesthetic feedback is a key mechanism by which people perceive object properties during their daily tasks - particularly inertial properties. For example, transporting a glass of water without spilling, or dynamically positioning a handheld tool such as a hammer, both require inertial kinesthetic feedback. We describe a prototype for a novel ungrounded haptic feedback device, the TorqueBAR, that exploits a kinesthetic awareness of dynamic inertia to simulate complex coupled motion as both a display and input device. As a user tilts the TorqueBAR to sense and control computer programmed stimuli, the TorqueBAR's centre-of-mass changes in real-time according to the user's actions. We evaluate the TorqueBAR using both quantitative and qualitative techniques, and we describe possible applications for the device such as video games and real-time robot navigation.
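The TorqueBAR's cue comes from gravity acting on a shifted centre of mass: the torque felt at the grip is simply tau = m * g * d. A minimal sketch with assumed mass and offset values (not the actual TorqueBAR specifications):

```python
def com_torque(mass_kg, offset_m, g=9.81):
    """Gravitational torque about the grip when the device's centre of
    mass sits offset_m metres from the grip axis: tau = m * g * d."""
    return mass_kg * g * offset_m

# Assumed 0.4 kg internal mass shifted 5 cm off-axis:
tau = com_torque(0.4, 0.05)   # torque cue toward the heavy side, in N*m
```

Moving the internal mass in real time therefore modulates this torque, which is what lets the device display dynamic inertia.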
... In order to make the interaction easier and increase user performance, various devices such as stereoscopic displays, 3D audio or force-feedback devices may be utilized. In the context of assistance for 3D interaction, virtual guides [23] are valuable tools, for example in the context of teleoperation [12], [13]. Visual guides are generally characterized by their place of attachment (position and orientation), manipulation area, condition for activation, associated function and condition of deactivation [12]. ...
Article
Full-text available
Interaction techniques play a vital role in enriching virtual environments and have profound effects on the user's performance and sense of presence, as well as on the realism of the Virtual Environment (VE). In this paper we present a new haptic guide model for object selection. It is used to augment the Follow-Me 3D interaction technique dedicated to object selection and manipulation. The fundamental concept of the Follow-Me technique is to divide the VE into three zones (free manipulation, visual assistance and haptic assistance zones). Each of the three zones is characterized by a specific interaction granularity which defines the properties of the interaction in that zone. This splitting of the VE aims to provide both precision and assistance (zones of visual and haptic guidance) near the object to be reached or manipulated, while maintaining realistic and free interaction elsewhere in the VE (free manipulation zone). The haptic and visual guides assist the user in object selection. The paper presents two different models of haptic guides: one for free, multidirectional selection and one for precise, single-direction selection. These haptic guides are evaluated and compared, and their effect on the user's performance in object selection in the VE is investigated.
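The three-zone split described above can be sketched as a simple classifier on the distance between the pointer and the target. The zone radii below are assumptions for illustration, not values from the paper.

```python
def follow_me_zone(dist_to_target, scaled_radius=0.5, precise_radius=0.1):
    """Classify the pointer-to-target distance into the three Follow-Me
    zones (radii are illustrative, not the paper's values)."""
    if dist_to_target <= precise_radius:
        return "precise"   # haptic/visual guides fully active
    if dist_to_target <= scaled_radius:
        return "scaled"    # motion scaled down for accuracy
    return "free"          # unconstrained manipulation

# Approaching a target crosses the zones in order:
zones = [follow_me_zone(d) for d in (1.0, 0.3, 0.05)]
```

Each returned label would select a different interaction granularity: free motion far away, scaled motion nearby, and active guides in the innermost zone.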
... In these applications, assistance is offered in the form of virtual fixtures that may be used by the operator as mechanical guides for controlling force or motion direction. Virtual fixtures have been shown to improve performance in targeting tasks [Hasser et al., 1998] [Dennerlein and Yang, 2001], peg-in-hole tasks [Rosenberg, 1993] [Sayers and Paul, 1994] [Payandeh and Stanisic, 2002] and in surgical interventions [Park et al., 2001]. Virtual fixtures are usually fixed in the shared workspace; however, virtual fixtures whose composition was a function of time or of recognized operator motions were studied in [Li and Okamura, 2003]. ...
Article
Full-text available
This paper describes a paradigm for human/automation control sharing in which the automation acts through a motor coupled to a machine's manual control interface. The manual interface becomes a haptic display, continually informing the human about automation actions. While monitoring by feel, users may choose either to conform to the automation or override it and express their own control intentions. This paper's objective is to demonstrate that adding automation through haptic display can be used not only to improve performance on a primary task but also to reduce perceptual demands or free attention for a secondary task. Results are presented from three experiments in which 11 participants completed a lane-following task using a motorized steering wheel on a fixed-base driving simulator. The automation behaved like a copilot, assisting with lane following by applying torques to the steering wheel. Results indicate that haptic assist improves lane following by at least 30%, p < .0001, while reducing visual demand by 29%, p < .0001, or improving reaction time in a secondary tone localization task by 18 ms, p = .0009. Potential applications of this research include the design of automation interfaces based on haptics that support human/automation control sharing better than traditional push-button automation interfaces.
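The copilot behaviour described above amounts to a wheel torque proportional to the lane error, saturated so the driver can always override it. A minimal sketch with assumed gains and saturation limit (not the study's actual controller):

```python
def assist_torque(lat_err_m, head_err_rad, k_lat=2.0, k_head=1.0, tau_max=3.0):
    """Copilot steering torque toward the lane centre, saturated so the
    driver can always override it (illustrative gains; output in N*m)."""
    tau = -(k_lat * lat_err_m + k_head * head_err_rad)
    return max(-tau_max, min(tau_max, tau))

# Drifted 0.5 m to the right with a slight rightward heading error:
tau = assist_torque(0.5, 0.1)   # negative torque steers back to the left
```

Because the torque saturates, the automation nudges rather than commands, which is what lets the driver monitor it by feel and override it at will.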
... Previous implementation of virtual fixtures showed that they can increase speed and precision of operation, reduce operator workload, and reduce the effects of communication time delays [2], [3]. Another important area of application of such fixtures is in the area of tutoring and mentoring in the virtual training environment such as laparoscopic training environment [9]. ...
Conference Paper
Full-text available
This paper presents a study on the application of virtual fixtures as a control aid for performing telemanipulation or in the training environment. The implementation features both manual and supervisory control modes. It also studies approaches for using virtual fixtures to generate visual and force cues and/or to restrict the motion of the slave using the definition of the virtual fixtures. In this preliminary study it was found that virtual fixtures could improve the speed and precision of the operator, and reduce the operator's workload and the duration of the training phase for the novice operator.
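One common way to implement a motion-restricting guidance fixture of this kind is to keep the commanded velocity component along a preferred direction and attenuate the off-axis component. A hedged sketch of that idea; the compliance parameter and function names are illustrative, not from the paper:

```python
import math

def apply_fixture(v_cmd, direction, compliance=0.2):
    """Soft guidance fixture: preserve the commanded-velocity component
    along the preferred direction and scale the off-axis component by
    `compliance` (0 = rigid fixture, 1 = fixture off)."""
    norm = math.sqrt(sum(di * di for di in direction))
    d = [di / norm for di in direction]
    along = sum(vi * di for vi, di in zip(v_cmd, d))
    return [along * di + compliance * (vi - along * di)
            for vi, di in zip(v_cmd, d)]

# Operator pushes diagonally; the fixture damps motion off the x axis:
v = apply_fixture([1.0, 0.5, 0.0], [1.0, 0.0, 0.0])
```

Setting the compliance to zero gives a hard "ruler-like" constraint, while intermediate values merely bias the slave toward the fixture, leaving the operator in charge.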
... Second, the approach will fail in unstructured environments which cannot be modeled accurately. Rosenberg at Stanford University [39] proposed the use of "virtual fixtures" as an alternative method for solving the problem of teleoperation with time delay and force feedback. These virtual fixtures are abstract sensory data overlaid on top of the remote workspace, interacting only with the operator. ...
Article
Full-text available
To a large extent the robotics and the newer virtual reality (VR) research communities have been working in isolation. This article reviews three areas where integration of the two technologies can be beneficial. First, we consider VR-enhanced CAD design, robot programming, and plant layout simulation. Subsequently, we discuss how VR is being used in supervisory teleoperation, for single operator-single robot systems, a single operator multiplexed to several slave robots, and collaborative control of a single robot by multiple operators. Here VR can help overcome problems related to poor visual feedback as well as system instabilities due to time delays. Finally, we show how robotics can be beneficial to VR in general, since robots can serve as force feedback interfaces to the simulation. Newer back-drivable manipulators offer increased safety for the user who closely interacts with the robot. The synergy between the fields of robotics and virtual reality is expected to grow in years to come.
Article
This paper presents an augmented reality system which facilitates user-interactive simulation of teleoperation of robotic systems. Leveraging recent advances in augmented reality and 3D sensing and modeling technologies, the system facilitates various concepts of enhanced teleoperation, including virtual fixtures, teleautonomy, and augmented teleoperation. A series of test operations was performed using the virtual simulator, which demonstrated the potential benefits of using augmented reality techniques to enhance teleoperation performance.
Article
A live-working robot is developed for live-line work: it can take the place of human operators in completing the corresponding actions and tasks accurately and in real time. For the Kraft hydraulic live-working robot, this paper designs a motion control system based on the TRIO464 controller. The control system realizes the movement of each joint and offers high positioning precision, fast response time, safe and reliable performance, and multiple safety protection measures.
Chapter
Contents: Introduction; Teleoperation and telerobotics; Augmented reality assisted teleoperation; Human-scale collaborative teleoperation; Synthesis and problematics; References
Article
Virtual reality is an effective method for eliminating the influence of time delay; however, it depends on the precision of the virtual model. In this paper, we introduce a method that corrects the virtual model on-line to establish a more precise model. The geometric errors of the virtual model were corrected on-line by overlapping the graphics over the images and by fusing the position and force information from the remote site. The sliding average least squares (SALS) method was then adopted to determine the mass, damping, and stiffness of the remote environment, and this information was used to amend the dynamic model of the environment. Experimental results demonstrate that the proposed on-line correction method can effectively reduce the impact caused by time delay and improve the operational performance of the teleoperation system.
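The SALS step above amounts to a least-squares fit of f = m*a + b*v + k*x over a window of motion and force samples. The sketch below solves the plain (unweighted) normal equations for one window; the sliding-average weighting of the paper is omitted, and the synthetic data are illustrative.

```python
def fit_mbk(samples):
    """Least-squares fit of f = m*a + b*v + k*x from (a, v, x, f) samples
    via the normal equations; a sketch of one window's update."""
    ata = [[0.0] * 3 for _ in range(3)]
    atf = [0.0] * 3
    for a, v, x, f in samples:
        row = (a, v, x)
        for i in range(3):
            atf[i] += row[i] * f
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # Solve the 3x3 system by Gauss-Jordan elimination with pivoting.
    m = [ata[i] + [atf[i]] for i in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(3):
            if r != c:
                factor = m[r][c] / m[c][c]
                m[r] = [vr - factor * vc for vr, vc in zip(m[r], m[c])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Synthetic remote environment with m = 2.0, b = 0.5, k = 30.0:
data = [(a, v, x, 2.0 * a + 0.5 * v + 30.0 * x)
        for a, v, x in [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0),
                        (0.3, 0.2, 0.1), (1.0, 0.5, 0.02)]]
mass, damping, stiffness = fit_mbk(data)
```

Sliding the window over incoming samples and averaging successive estimates would give the smoothing behaviour the paper's SALS variant adds.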
Article
The goal of this work is to investigate whether estimates of ease of part handling and part insertion can be provided by multimodal simulation using virtual environment (VE) technology, rather than by using conventional table-based methods such as Boothroyd and Dewhurst charts. The long-term goal is to extend CAD systems to evaluate and compare alternative designs using design-for-assembly analysis. A unified physically based model has been developed for modeling dynamic interactions among virtual objects and haptic interactions between the human designer and the virtual objects. This model is augmented with auditory events in a multimodal VE system called the Virtual Environment for Design for Assembly (VEDA). The designer sees a visual representation of the objects, hears collision sounds when objects hit each other, and can feel and manipulate the objects through haptic interface devices with force feedback. Currently these models are 2D in order to preserve interactive update rates. Experiments were conducted with human subjects using a two-dimensional peg-in-hole apparatus and a VEDA simulation of the same apparatus. The simulation duplicated as well as possible the weight, shape, size, peg-hole clearance, and frictional characteristics of the physical apparatus. The experiments showed that the multimodal VE is able to replicate experimental results in which increased task completion times correlated with increasing task difficulty (measured as increased friction and increased handling distance combined with decreased peg-hole clearance). However, the multimodal VE task completion times are approximately two times the physical apparatus completion times. A number of possible factors for this temporal discrepancy have been identified, but their effect has not been quantified.
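Task difficulty of the kind varied here (handling distance combined with peg-hole clearance) is often summarized by a Fitts-style index of difficulty, with the clearance playing the role of the target width. A small illustrative sketch; the formula is a common convention, not necessarily the one used in this work:

```python
import math

def index_of_difficulty(amplitude_m, clearance_m):
    """Fitts-style index of difficulty for a peg-in-hole move:
    ID = log2(2A / W), with the hole clearance acting as the width W."""
    return math.log2(2.0 * amplitude_m / clearance_m)

# Longer moves into tighter holes score as harder tasks:
easy = index_of_difficulty(0.10, 0.010)   # 10 cm move, 10 mm clearance
hard = index_of_difficulty(0.30, 0.001)   # 30 cm move, 1 mm clearance
```

Plotting completion time against such an index is one way to check whether the virtual and physical apparatus produce the same difficulty trend, even when absolute times differ.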
Article
One of the main objectives in telerobotics is the development of a telemanipulation system that allows a high task performance to be achieved by simultaneously providing a high degree of telepresence. Specific mechatronic design guidelines and appropriate control algorithms as well as augmented visual, auditory, and haptic feedback systems are typical approaches adopted in this context. This work aims at formulating new design guidelines by incorporating human factors in the development process and analyzing the effects of varied human movement control on task performance and on the feeling of telepresence. While it is well known that humans are able to coordinate and integrate multiple degrees of freedom (DOF), the focus of this work is on how humans utilize rotational degrees of freedom provided by a human-system interface and if and how varied human movement control affects task performance and the feeling of telepresence. For this analysis, a telemanipulation experiment with varying degrees of freedom has been conducted. The results indicate that providing the full range of movement, even though this range is not necessary to accomplish a task, has a beneficial effect on the feeling of telepresence and task performance in terms of measured interaction forces. Further, increasing visual depth cues provided to the human operator also had a positive effect.
Article
Full-text available
This paper introduces and validates quantitative performance measures for a rhythmic target-hitting task. These performance measures are derived from a detailed analysis of human performance during a month-long training experiment where participants learned to operate a 2-DOF haptic interface in a virtual environment to execute a manual control task. The motivation for the analysis presented in this paper is to determine measures of participant performance that capture the key skills of the task. This analysis indicates that two quantitative measures, trajectory error and input frequency, capture the key skills of the target-hitting task, as the results show a strong correlation between the performance measures and the task objective of maximizing target hits. The performance trends were further explored by grouping the participants based on expertise and examining trends during training in terms of these measures. In future work, these measures will be used as inputs to a haptic guidance scheme that adjusts its control gains based on a real-time assessment of human performance of the task. Such guidance schemes will be incorporated into virtual training environments for humans to develop manual skills for domains such as surgery, physical therapy, and sports.
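The two measures can be computed directly from recorded signals: an RMS deviation from the reference trajectory, and the dominant frequency of the operator's input. The sketch below is illustrative; a naive DFT magnitude scan stands in for a proper FFT, and the signals are synthetic.

```python
import math

def trajectory_error(actual, reference):
    """RMS deviation between the executed and reference trajectories."""
    n = len(actual)
    return math.sqrt(sum((a - r) ** 2 for a, r in zip(actual, reference)) / n)

def dominant_frequency(signal, dt):
    """Dominant frequency (Hz) via a naive DFT magnitude scan; a real
    implementation would use an FFT."""
    n = len(signal)
    mean = sum(signal) / n
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2):
        re = sum((s - mean) * math.cos(2 * math.pi * k * i / n)
                 for i, s in enumerate(signal))
        im = sum((s - mean) * math.sin(2 * math.pi * k * i / n)
                 for i, s in enumerate(signal))
        if re * re + im * im > best_mag:
            best_k, best_mag = k, re * re + im * im
    return best_k / (n * dt)

# A 2 Hz input sampled at 100 Hz for one second:
sig = [math.sin(2 * math.pi * 2.0 * i * 0.01) for i in range(100)]
f_hat = dominant_frequency(sig, 0.01)
err = trajectory_error([0.0, 0.1, 0.2], [0.0, 0.0, 0.0])
```

A guidance scheme of the kind proposed could then adapt its gains whenever these two numbers drift away from expert-level values.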
Article
Full-text available
Does virtual reality (VR) represent a useful platform for teaching real-world motor skills? In domains such as sport and dance, this question has not yet been fully explored. The aim of this study was to determine the effects of two variations of real-time VR feedback on the learning of a complex dance movement. Novice participants (n = 30) attempted to learn the action by both observing a video of an expert's movement demonstration and physically practicing under one of three conditions. These conditions were: full feedback (FULL-FB), which presented learners with real-time VR feedback on the difference between 12 of their joint center locations and the expert's movement during learning; reduced feedback (REDUCED-FB), which provided feedback on only four distal joint center locations (end-effectors); and no feedback (NO-FB), which presented no real-time VR feedback during learning. Participants' kinematic data were gathered before, immediately after, and 24 hr after a motor learning session. Movement error was calculated as the difference in the range of movement at specific joints between each learner's movement and the expert's demonstrated movement. Principal component analysis was also used to examine dimensional change across time. The results showed that the REDUCED-FB condition provided an advantage in motor learning over the other conditions: it achieved a significantly greater reduction in error across five separate error measures. These findings indicate that VR can be used to provide a useful platform for teaching real-world motor skills, and that this may be achieved by its ability to direct the learner's attention to the key anatomical features of a to-be-learned action.
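The error measure described, the difference in range of movement at each joint between learner and expert, can be sketched as follows; the joint-angle sequences below are made up for illustration.

```python
def rom_error(learner_joints, expert_joints):
    """Mean absolute difference in range of movement (max - min) per
    joint between a learner's and the expert's angle traces."""
    errs = [abs((max(l) - min(l)) - (max(e) - min(e)))
            for l, e in zip(learner_joints, expert_joints)]
    return sum(errs) / len(errs)

# One joint: the expert sweeps 90 degrees, the learner only 60.
err = rom_error([[0.0, 30.0, 60.0]], [[0.0, 45.0, 90.0]])
```

Tracking this number before, immediately after, and a day after practice is what lets the study compare retention across the three feedback conditions.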
Article
The goal of this work is to investigate whether estimates of ease of part handling and part insertion can be provided by multimodal simulation using virtual environment (VE) technology. The long-term goal is to use this data to extend computer-aided design (CAD) systems in order to evaluate and compare alternative designs using design-for-assembly analysis. A unified, physically based model has been developed for modeling dynamic interactions and has been built into a multimodal VE system called the Virtual Environment for Design for Assembly (VEDA). The designer sees a visual representation of objects, hears collision sounds when objects hit each other, and can feel and manipulate the objects through haptic interface devices with force feedback. Currently these models are 2D in order to preserve interactive update rates. Experiments were conducted with human subjects using a two-dimensional peg-in-hole apparatus and a VEDA simulation of the same apparatus. The simulation duplicated as well as possible the weight, shape, size, peg-hole clearance, and frictional characteristics of the physical apparatus. The experiments showed that the multimodal VE is able to replicate experimental results in which increased task completion times correlated with increasing task difficulty (measured as increased friction, increased handling distance, and decreased peg-hole clearance). However, the multimodal VE task completion times are approximately twice the physical apparatus completion times. A number of possible factors have been identified, but their effect has not been quantified.
Article
Full-text available
The implementation of an interaction between a user and a Virtual Environment (VE) in Virtual Reality (VR) may use various techniques. However, in some cases the interaction must be very precise and comfortable for the user. In this paper, we introduce a new selection and manipulation technique called FOLLOW-ME. The model associated with this technique splits the virtual environment into three zones, in each of which a specific interaction model is used: a free manipulation zone, a scaled manipulation zone and a precise manipulation zone. Each of the three zones is characterized by a specific interaction granularity which defines the properties of the interaction in that zone. This splitting is designed to provide precision near the object to be reached or manipulated (scaled and precise manipulation zones) while maintaining realistic and free interaction elsewhere in the VE (free manipulation zone). In the precise manipulation zone, we use the concept of virtual guides to assist the user. In this paper, we formalize the model associated with the FOLLOW-ME technique. We then compare our technique with classical interaction techniques on a task consisting of reaching an object in the VE, and give some insights about the conditions of usefulness of virtual guides in a selection task. Some preliminary experimental results are presented and discussed.
Conference Paper
Full-text available
In many current teleoperation architectures, remote tasks are indirectly performed by a human operator (HO) by means of a virtual environment consisting of a virtual or symbolic representation of the remote site. In order to achieve virtual tasks, the interaction of the HO with the virtual representation is monitored. Monitoring results are subsequently translated into a sequence of instructions sent to the remote robot for actual execution. This paper focuses on different strategies designed to allow user-friendly operator interaction with the virtual representation in order to achieve complex remote tasks via the Internet. The use of active virtual guides to assist the HO in performing simple or complex tasks with enhanced performance (speed, precision and safety) is also discussed. Techniques such as virtual reality (VR) and augmented reality (AR), combined with Internet-based programming facilities, are investigated as part of the proposed teleoperation system named ARITI (an acronym for Augmented Reality Interface for Telerobotic applications via the Internet).