Source publication
The ability of tele-operators to derive a remote robot's location is crucial: it allows them to develop strategies to perform high-level interactions, mainly navigation. In normal conditions, remote robots can perform such a task autonomously by comparing a priori knowledge with locally sensed data. For some typical situations, this approach is imp...
Contexts in source publication
Context 1
... For 3D and 2D map-based localization, the results are presented in Fig. 2-5. The analysis shows that errors when using 3D maps are around 1 m, while for 2D maps errors are around 1.5 m (Fig. 2). On the contrary, and as expected, subjects took twice as much time to achieve the task in the 3D case (Fig. 3). Following that, one can suggest that the additional dimension provided by 3D maps leads to a greater space to explore and thus takes more time to cover. Likewise, the additional dimension is an additional degree of freedom that helps to add constraints and information to disambiguate the robot's location. Another possible explanation deals with habits: people are more used to 2D maps and rely more on top views than on navigating 3D environments.

The analysis of the previous results gives us interesting hints and trends. Indeed, there is no significant difference between subjects concerning position errors with the 2D map (p > 0.052). On the contrary, for 3D maps there are differences (p < 0.0015) in execution times: subjects do not have the same behavior, suggesting the existence of different skills in experiencing 3D environments.

Regarding what subjects were asked to achieve, one has to take into account time and position errors simultaneously. Indeed, subjects were asked to perform a good localization as quickly as possible. Following that, the evaluation must combine both parameters, time and error. Conceptually, this is equivalent to estimating the effort developed by subjects to achieve the task. In other words, a metric t * Δp, built by multiplying the execution time by the error made by the subject, is more suitable and more representative of the effort spent to achieve the task: a good performer is the one with the smallest error obtained in a minimal time; conversely, a bad performer will take a lot of time to provide a poor position-orientation estimate. As expected, people made a larger effort in the 3D map conditions, both for positioning and for orientating. Likewise, behavioural differences exist for 3D maps, while with 2D maps people perform similarly.

We tested the effects of the field of view in order to see how subjects deploy motor activity in gathering visual information (Fig. 4 and Fig. 5). Namely, subjects performed the localization task by using a video stream and a panoramic interactive view, covering respectively 36° and 180° of FOV. We found that subjects are more accurate with a larger FOV. This confirms the geometrical intuition behind triangulation: the larger the angle between two lines, the better the accuracy of estimating their intersection. On the other hand, people spent more or less the same time with both views.

In this part, we present the results obtained for two display technologies (HMD and PC screen). Results show that the HMD, in several cases, deteriorated the performance of the subjects compared to the PC screen (Fig. 6). These results were obtained when subjects interacted with both video-stream and panoramic-view feedback, as well as with both 2D and 3D maps. We found that the use of the HMD has a negative effect and increases position errors. Considering execution times, we observe an opposite effect: people achieve the task faster with the HMD. This suggests that people integrate visual information more easily when it is correlated with head movements than when visual navigation is done through the hands and a joystick.
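The between-subject comparison above only reports p-values; the excerpt does not state which statistical test was used. As a minimal sketch, assuming a one-way ANOVA over per-subject execution times (the data below are purely illustrative and not from the study), such a comparison could be run as follows in Python:

    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(seed=0)

    # Illustrative per-subject execution times (seconds) in the 3D-map condition.
    # The subject means differ, mimicking the reported between-subject differences.
    subject_times_3d = [rng.normal(loc=mu, scale=5.0, size=8) for mu in (40.0, 55.0, 70.0)]

    f_stat, p_value = f_oneway(*subject_times_3d)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # a small p suggests subjects behave differently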
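The effort metric t * Δp described above is simply the product of execution time and localization error, computed per trial. The sketch below reuses the rough error figures quoted in the passage (about 1.5 m for 2D, 1 m for 3D) together with assumed absolute times, since the excerpt only states that 3D took roughly twice as long:

    def effort(exec_time_s: float, position_error_m: float) -> float:
        """Effort metric t * Δp: execution time multiplied by position error (smaller is better)."""
        return exec_time_s * position_error_m

    # Condition, assumed execution time (s), approximate position error (m).
    trials = [
        ("2D map", 30.0, 1.5),
        ("3D map", 60.0, 1.0),
    ]
    for condition, t, dp in trials:
        print(f"{condition}: t*Δp = {effort(t, dp):.1f} s·m")
    # With these numbers the 3D condition yields the larger effort (60 s·m vs 45 s·m),
    # consistent with the observation that subjects spent more effort with 3D maps.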
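The triangulation intuition invoked for the field-of-view result can be checked numerically: when two bearing lines meet at a narrow angle, a small bearing error shifts their intersection much more than when they meet at a wide angle. The geometry below is entirely assumed (a target 10 m away observed from two viewpoints, with a 1° bearing error) and is only meant to illustrate the argument:

    import numpy as np

    def intersect(p1, theta1, p2, theta2):
        """Intersection of two rays given their origins and bearing angles (radians)."""
        d1 = np.array([np.cos(theta1), np.sin(theta1)])
        d2 = np.array([np.cos(theta2), np.sin(theta2)])
        # Solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2).
        t = np.linalg.solve(np.column_stack((d1, -d2)), p2 - p1)
        return p1 + t[0] * d1

    target = np.array([0.0, 10.0])
    eps = np.deg2rad(1.0)  # 1 degree bearing error on the first observation
    for half_baseline in (0.5, 5.0):  # narrow vs wide separation of the two viewpoints
        pA = np.array([-half_baseline, 0.0])
        pB = np.array([half_baseline, 0.0])
        dA, dB = target - pA, target - pB
        thA, thB = np.arctan2(dA[1], dA[0]), np.arctan2(dB[1], dB[0])
        estimate = intersect(pA, thA + eps, pB, thB)
        print(f"half-baseline {half_baseline:3.1f} m -> localization error {np.linalg.norm(estimate - target):.2f} m")
    # The narrow configuration (small angle between the two lines) amplifies the same 1° error
    # into a much larger position error than the wide configuration.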
In this work we have presented a study of the factors that influence remote localization tasks. Subjects estimated the position and the orientation of a remote robot as fast as possible. The variations concerned the reference map (3D versus 2D), the remote camera's field of view and the tools used to control it. Results show that 3D maps are more effective even if they require more interaction time. On the other hand, one can see that a wider view leads to better results. Finally, the effect of using an HMD versus a PC screen seems to depend on the two previous factors (2D or 3D maps). In this study, only static conditions were considered: subjects did not perform any robot control action, and this limits the localization capabilities. In real tele-operation conditions, users can exploit the robot's mobility as new degrees of freedom to find the solution. The benefit of enlarging the field of view suggests as much, and our next steps will focus on this aspect. Another issue we will tackle is multi-robot systems. For such systems the complexity is higher but, on the other hand, one has more information (more cameras) to rely on to find individual ...
Similar publications
This paper proposes a teleoperation interface by which an operator can control a robot from freely configured viewpoints using realistic images of the physical world. The viewpoints generated by the proposed interface provide human operators with intuitive control using a head-mounted display and head tracker, and assist them to grasp the environme...