Preprint · PDF available

Comparison of Gaze and Mouse Pointers for Video-based Collaborative Physical Task


Abstract

Remote collaboration on physical tasks is an emerging use of video telephony. Recent work suggests that conveying gaze information, measured with an eye tracker, between collaboration partners could be beneficial in this context. However, no studies have compared gaze to other pointing mechanisms, such as a mouse-controlled pointer, in video-based collaboration. We conducted a controlled user study comparing two remote gesturing mechanisms (mouse, gaze) to video only (none) in a setting where a remote expert saw video of a worker's desktop onto which the expert's mouse or gaze pointer was projected. We also investigated how distracting the remote expert affected the collaborative process and whether the effect depended on the pointing device. Our results suggest that both mouse and gaze pointers lead to faster task performance and an improved perception of the collaboration, compared to having no pointer at all. The mouse outperformed gaze when the task required conveying procedural instructions. In addition, using gaze for remote gesturing required greater verbal effort for communicating both referential and procedural messages.
References
Conference Paper · Full-text available
Communicating spatial information by pointing is ubiquitous in human interactions. With the growing use of head-mounted cameras for collaborative purposes, it is important to assess how accurately viewers of the resulting egocentric videos can interpret pointing acts. We conducted an experiment to compare the accuracy of interpreting four different pointing techniques: hand pointing, head pointing, gaze pointing and hand+gaze pointing. Our results suggest that superimposing the gaze information on the egocentric video can enable viewers to determine pointing targets more accurately and more confidently. Hand pointing performed best when the pointing target was straight ahead and head pointing was the least preferred in terms of ease of interpretation. Our results can inform the design of collaborative applications that make use of the egocentric view.
Conference Paper · Full-text available
We present GazeTorch, a novel interface that provides gaze awareness during remote collaboration on physical tasks. GazeTorch uses a spotlight to display the gaze information of the remote helper on the physical task space of the worker. We conducted a preliminary user study to evaluate users' subjective opinions on the quality of collaboration with GazeTorch and with a camera-only setup. Our preliminary results suggest that participants felt GazeTorch made collaboration easier, made referencing and identifying objects effortless, and improved the worker's confidence that the task was completed accurately. We conclude by presenting some novel application scenarios for the concept of augmenting real-time gaze information in the physical world.
Conference Paper · Full-text available
In this work, we investigate how remote collaboration between a local worker and a remote collaborator changes when the collaborator's eye fixations are presented to the worker. We track the collaborator's points of gaze on a monitor screen displaying a physical workspace and visualize them in the workspace via a projector or through an optical see-through head-mounted display. Through a series of user studies, we found the following: 1) eye fixations can serve as a fast and precise pointer to objects of the collaborator's interest; 2) eyes and other modalities, such as hand gestures and speech, are used differently for object identification and manipulation; 3) eyes are used for explicit instructions only when they are combined with speech; and 4) the worker can predict some of the collaborator's intentions, such as his/her current interest and next instruction.
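To make the projection step concrete, here is a minimal sketch, assuming a planar workspace, of how gaze samples captured on a monitor view could be remapped into projector coordinates with a homography. The calibration points, names, and values are illustrative assumptions, not the system described above.

```python
# Minimal sketch: remap gaze coordinates from a monitor view of the workspace
# into projector coordinates via a planar homography (assumed calibration).
import numpy as np
import cv2

# Assumed calibration correspondences: four marker positions as seen on the
# monitor, and the projector pixels that illuminate those same markers.
monitor_pts = np.float32([[120, 90], [1800, 85], [1790, 1000], [130, 1010]])
projector_pts = np.float32([[0, 0], [1280, 0], [1280, 800], [0, 800]])

H = cv2.getPerspectiveTransform(monitor_pts, projector_pts)

def gaze_to_projector(gaze_xy):
    """Map one gaze sample (x, y) on the monitor to projector pixels."""
    src = np.float32([[gaze_xy]])              # shape (1, 1, 2), as cv2 expects
    return tuple(cv2.perspectiveTransform(src, H)[0, 0])

# Example: a fixation near the middle of a 1920x1080 monitor view.
print(gaze_to_projector((960, 540)))
```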
Conference Paper · Full-text available
People utilize eye gaze as an important cue for monitoring attention and coordinating awareness. This study investigates how remote pairs make use of a graphical representation of their partner's eye-gaze during a tightly-coupled collaborative task. Our results suggest that reproducing shared gaze in a remote collaboration setting makes pairs more accurate when referring to linguistically complex objects by facilitating the production of efficient forms of deictic references. We discuss how the availability of gaze influences coordination strategies and implications for the design of shared gaze in remote collaboration systems.
Conference Paper · Full-text available
Video communication using head-mounted cameras could be useful for mediating shared activities and supporting collaboration. The growing popularity of wearable gaze trackers presents an opportunity to add gaze information to the egocentric video. We hypothesized three potential benefits of gaze-augmented egocentric video in collaborative scenarios: supporting deictic referencing, enabling grounding in communication, and enabling better awareness of the collaborator's intentions. Previous research on using egocentric videos for real-world collaborative tasks has failed to show clear benefits of gaze point visualization. We designed a study, deconstructing a collaborative car navigation scenario, to specifically target the value of gaze-augmented video for intention prediction. Our results show that viewers of gaze-augmented video could predict the direction taken by a driver at a four-way intersection more accurately and more confidently than viewers of the same video without the superimposed gaze point. Our study demonstrates that gaze augmentation can be useful and encourages further study in real-world collaborative scenarios.
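As an illustration of the overlay step such systems depend on, the sketch below draws a per-frame gaze point onto an egocentric video with OpenCV. The normalized gaze format, file names, and ring styling are assumptions for the example, not the study's pipeline.

```python
# Minimal sketch: superimpose a gaze point on each frame of an egocentric
# video, assuming one normalized (x, y) gaze sample per frame.
import cv2

def overlay_gaze(video_path, gaze_samples, out_path="gaze_overlay.mp4"):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for gx, gy in gaze_samples:          # gx, gy in [0, 1]
        ok, frame = cap.read()
        if not ok:
            break
        # Draw a ring rather than a filled dot so the scene stays visible.
        cv2.circle(frame, (int(gx * w), int(gy * h)), 18, (0, 255, 0), 3)
        out.write(frame)
    cap.release()
    out.release()
```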
Article · Full-text available
This paper studies how eye-tracking can be used to measure and facilitate joint attention in parent-child interaction. Joint attention is critical for social learning activities such as parent-child shared storybook reading, yet attention is dissociated when the adult reads the text while the child looks at the pictures. We hypothesize that this lack of joint attention limits children's opportunity to learn print-related skills. Traditional research paradigms do not measure joint attention in real time during shared storybook reading. In the current study, we simultaneously tracked the eye movements of a parent and his/her child with two eye-trackers. We also provided real-time feedback to the parent about where the child was looking, and vice versa. Changes in dyads' reading behaviors before and after the joint attention intervention were measured from both eye movements and video recordings. Baseline data show little joint attention in parent-child shared book reading. The real-time eye-gaze feedback significantly changed parent-child interaction and improved learning.
Conference Paper
Remote collaboration can be more difficult than collocated collaboration for a number of reasons, including the inability to easily determine what your collaborator is looking at. This impedes a pair's ability to efficiently communicate about on-screen locations and makes synchronous coordination difficult. We designed a novel gaze visualization for remote pair programmers which shows where in the code their partner is currently looking, and changes color when they are looking at the same thing. Our design is unobtrusive, and transparently depicts the imprecision inherent in eye tracking technology. We evaluated our design with an experiment in which pair programmers worked remotely on code refactoring tasks. Our results show that with the visualization, pairs spent a greater proportion of their time concurrently looking at the same code locations. Pairs communicated using a larger ratio of implicit to explicit references, and were faster and more successful at responding to those references.
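A plausible core for such a visualization is an overlap test that treats each fixation as a disc whose radius encodes the tracker's imprecision, switching the cursor color when the discs intersect. The sketch below illustrates the idea; the names, radii, and colors are hypothetical, not the authors' implementation.

```python
# Minimal sketch: shared-gaze overlap test with explicit tracker imprecision.
from dataclasses import dataclass
import math

@dataclass
class Fixation:
    x: float          # screen position in pixels
    y: float
    error_px: float   # assumed tracker error, e.g. ~1 degree of visual angle

def same_target(a: Fixation, b: Fixation) -> bool:
    """True if the two uncertainty discs overlap."""
    return math.hypot(a.x - b.x, a.y - b.y) <= a.error_px + b.error_px

def cursor_color(a: Fixation, b: Fixation) -> str:
    # Highlight when partners look at the same region, stay neutral otherwise.
    return "#ffcc00" if same_target(a, b) else "#888888"

print(cursor_color(Fixation(400, 300, 40), Fixation(430, 310, 40)))  # shared
print(cursor_color(Fixation(400, 300, 40), Fixation(900, 700, 40)))  # apart
```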
Article
We present the results of an empirical study that measured the contribution of a conspicuous eye-gaze (as a function of scleral de-pigmentation) of humans in conveying multimodal referentiality by combining visual and auditory cues in a naturalistic setting. We made participants interact in a cooperative task in which they had to convey referential meaning about co-presential entities. In one of the conditions, participants had no access to their interactants' eye-gaze. We interpret the results as supporting the idea that our eye morphology contributes to instantiating multimodal referentiality in cooperative tasks in peripersonal space.
Article
We present results from research exploring the effect of sharing virtual gaze and pointing cues in a wearable interface for remote collaboration. A local worker wears a head-mounted camera, an eye-tracking camera, and a head-mounted display, and shares video and virtual gaze information with a remote helper. The remote helper can provide feedback using a virtual pointer on the live video view. The prototype system was evaluated in a formal user study comparing four conditions: (1) NONE (no cue), (2) POINTER, (3) EYE-TRACKER, and (4) BOTH (both pointer and eye-tracker cues). Task completion performance was best in the BOTH condition, significantly better than in the POINTER and EYE-TRACKER conditions individually. The use of eye-tracking and a pointer also significantly improved the sense of co-presence between the users. We discuss the implications of this research and the limitations of the developed system that could be addressed in further work.
Article
Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze alone, interpreting their communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide the movement of a window that continuously revealed the parts of the display the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort, as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that, when no visual object information was available, assistants confidently followed the searcher's mouse cursor but not the gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation.
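One standard way to quantify this kind of spatio-temporal coupling is a time-lagged correlation between the cursor and window trajectories; the sketch below illustrates the idea on a single axis with synthetic data. The function and the numbers are assumptions for the example, not the study's analysis code.

```python
# Minimal sketch: find the lag at which the assistant's window position best
# correlates with the searcher's cursor position (one axis, equal sampling).
import numpy as np

def lagged_correlation(cursor, window, max_lag):
    """Returns (best_lag, correlation); a positive lag means the window
    follows the cursor by that many samples."""
    best = (0, -1.0)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c, w = cursor[:len(cursor) - lag], window[lag:]
        else:
            c, w = cursor[-lag:], window[:len(window) + lag]
        r = np.corrcoef(c, w)[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

# Synthetic example: the window trails the cursor by about 5 samples.
t = np.linspace(0, 10, 500)
cursor_x = np.sin(t)
window_x = np.roll(cursor_x, 5) + 0.05 * np.random.randn(500)
print(lagged_correlation(cursor_x, window_x, max_lag=25))
```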