Article

Gaze Augmented Hand-Based Kinesthetic Interaction: What You See is What You Feel

... Dominjon et al. [14] used the bubble technique, which adjusts the HIP speed based on the relative positions of the HIP and its bubble, to reach objects. Li et al. [15,16] employed the gaze modality to move the HIP for reaching remote targets. Both methods maintained the unit CD gain while touching objects. ...
... Overall, there is agreement that applying a large gain may influence kinesthetic interactions [14][15][16]. However, it is still not clear how different CD gains affect task measures such as task completion time, accuracy of interaction, and user experience in real-world kinesthetic tasks. ...
Chapter
Full-text available
Kinesthetic interaction typically employs force-feedback devices for providing the kinesthetic input and feedback. However, the length of the mechanical arm limits the space that users can interact with. To overcome this challenge, a large control-display (CD) gain (>1) is often used to transfer a small movement of the arm into a large movement of the onscreen interaction point. Although a large gain is commonly used, its effects on task performance (e.g., task completion time and accuracy) and user experience in kinesthetic interaction remain unclear. In this study, we compared a large CD gain with the unit CD gain as the baseline in a task involving kinesthetic search. Our results showed that the large gain reduced task completion time at the cost of task accuracy. The two gains did not differ in their effects on perceived hand fatigue, naturalness, and pleasantness, but the large gain negatively influenced users' confidence in successful task completion.
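The mechanism under study here is simple to state: a constant CD gain is a scale factor between device displacement and on-screen haptic interaction point (HIP) displacement. A minimal sketch in Python, with all names and values illustrative rather than taken from the paper:

```python
import numpy as np

def hip_position(device_pos, workspace_center, cd_gain):
    """Map a haptic-device position to the on-screen haptic
    interaction point (HIP) with a constant control-display gain.

    A gain > 1 amplifies small arm movements into large HIP
    movements; a gain of 1 preserves a one-to-one mapping.
    """
    # Displacement of the mechanical arm from the workspace center.
    offset = np.asarray(device_pos) - np.asarray(workspace_center)
    # Scale the displacement by the CD gain to obtain the HIP position.
    return np.asarray(workspace_center) + cd_gain * offset

# Example: a 2 cm hand movement becomes a 10 cm HIP movement at gain 5.
print(hip_position(device_pos=[0.02, 0.0, 0.0],
                   workspace_center=[0.0, 0.0, 0.0],
                   cd_gain=5.0))
```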
... Alternatively, gaze has been identified as a light-weight and fast input method, and has shown its potential for assisting with object manipulation tasks (e.g., [33,40,59]). However, previous work in VR mostly focused on the use of gaze for target selection [39,45], which is only a sub-phase of the whole manipulation process, while how gaze input can be incorporated into the "manipulate" phase (translation, rotation, and scaling [31]) is still underexplored. ...
... Their results indicate that moving targets causes a medium value of eye strain independently of the combination of both modalities. In contrast, Li et al., who combined gaze with touch input, found less eye strain when the eyes were being used solely for pointing in contrast to being used as a pointing and selection mechanism [Li et al. 2019]. Similarly, Rajanna and Hammond found that gaze input leads to higher eye strain values than touch and mouse input [Rajanna and Hammond 2018]. ...
Article
Perspective-taking and attentional switching are some of the ergonomic challenges that existing teleoperation human-machine interface designs need to address. This study developed two gaze interaction methods, the Eye Stick and the Eye Click, which were based on the joystick metaphor and the navigation metaphor, respectively, to be used in exocentric perspective teleoperation scenarios. We conducted two user studies to test the task performance and the subjective experience of the gaze interaction methods in a virtual ground vehicle teleoperation task. The results showed that compared with a traditional joystick design, the Eye Stick led to a shorter driving distance and the Eye Click led to less task time, and the gaze interaction methods had performance advantages in more difficult mazes. After multiple task sessions, the participants reported that the gaze interaction methods and the traditional joystick were similar in terms of task workload, perceived learnability, and satisfaction; however, the perceived usability of the Eye Stick was not as good as the Eye Click and the traditional joystick. In conclusion, both the Eye Stick and the Eye Click are feasible and promising gaze interaction methods for teleoperation applications with task performance advantages; however, more research is needed to optimize their user experience design.
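The joystick metaphor behind the Eye Stick can be pictured as mapping the gaze point's offset from a screen anchor to a velocity command. A minimal sketch, assuming a linear transfer function and a small dead zone to absorb fixational jitter (both are assumptions; the abstract does not specify the mapping):

```python
import numpy as np

def eye_stick_velocity(gaze_xy, center_xy, dead_zone=0.05, max_speed=1.0):
    """Joystick-metaphor gaze control: the gaze offset from a screen
    anchor acts like a stick deflection that commands vehicle velocity.

    Parameter names and the linear mapping are illustrative only.
    """
    offset = np.asarray(gaze_xy, dtype=float) - np.asarray(center_xy, dtype=float)
    magnitude = np.linalg.norm(offset)
    if magnitude < dead_zone:          # small fixational jitter is ignored
        return np.zeros(2)
    direction = offset / magnitude
    # Deflection beyond the dead zone scales linearly up to max_speed.
    speed = min(max_speed, (magnitude - dead_zone) / (1.0 - dead_zone) * max_speed)
    return speed * direction

print(eye_stick_velocity(gaze_xy=[0.6, 0.5], center_xy=[0.5, 0.5]))
```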
Article
Haptic devices can be used to feel, through the sense of touch, what the user is watching in a virtual scene. Force feedback devices provide kinesthetic information enabling the user to touch virtual objects. However, the most reasonably priced devices of this type are desktop ones, whose limited workspace does not allow natural and convenient interaction with virtual scenes because of the size mismatch between the scene and the workspace. In this paper, a new interaction model addressing this problem is proposed. It is called Haptic Zoom, and it is based on performing visual and haptic amplification of regions of interest. These amplifications allow the user to decide whether s/he wants more freedom of movement or accurate interaction with a specific element inside the scene. An evaluation has been carried out comparing this technique with two well-known desktop haptic device techniques. Preliminary results showed that Haptic Zoom can be more useful than the other techniques in accuracy tasks.
Article
Kinaesthetic interaction using force-feedback devices is promising in virtual reality. However, the devices are currently not suitable for interactions within large virtual spaces because of their limited workspace. We developed a novel gaze-based kinaesthetic interface that employs the user’s gaze to relocate the device workspace. The workspace switches to a new location when the user pulls the mechanical arm of the device to its reset position and gazes at the new target. This design enables the robust relocating of device workspace, thus achieving an infinite interaction space, and simultaneously maintains a flexible hand-based kinaesthetic exploration. We compared the new interface with the scaling-based traditional interface in an experiment involving softness and smoothness discrimination. Our results showed that the gaze-based interface performs better than the traditional interface, in terms of efficiency and kinaesthetic perception. It improves the user experience for kinaesthetic interaction in virtual reality without increasing eye strain.
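The relocation logic described in this abstract lends itself to a compact sketch: the workspace recenters only when the arm is at its reset position and the user fixates a new target. The reset test and threshold values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

RESET_POS = np.zeros(3)       # device reset position (assumed)
RESET_RADIUS = 0.01           # how close the arm must be to count as reset (m)

def maybe_relocate(workspace_center, device_pos, gaze_target, fixating):
    """Gaze-based workspace relocation: when the user pulls the
    mechanical arm back to its reset position and fixates a new
    target, the device workspace jumps there; otherwise the hand
    keeps exploring within the current workspace.
    """
    at_reset = np.linalg.norm(np.asarray(device_pos) - RESET_POS) < RESET_RADIUS
    if at_reset and fixating:
        return np.asarray(gaze_target, dtype=float)  # recenter on gaze target
    return workspace_center                          # keep exploring locally

center = np.array([0.0, 0.0, 0.0])
center = maybe_relocate(center, device_pos=[0.0, 0.0, 0.005],
                        gaze_target=[0.4, 0.1, 0.0], fixating=True)
print(center)  # workspace now centered on the gazed target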
Conference Paper
Full-text available
Virtual reality affords experimentation with human abilities beyond what's possible in the real world, toward novel senses of interaction. In many interactions, the eyes naturally point at objects of interest while the hands skilfully manipulate in 3D space. We explore a particular combination for virtual reality, the Gaze + Pinch interaction technique. It integrates eye gaze to select targets, and indirect freehand gestures to manipulate them. This keeps the gesture use intuitive like direct physical manipulation, but the gesture's effect can be applied to any object the user looks at --- whether located near or far. In this paper, we describe novel interaction concepts and an experimental system prototype that bring together interaction technique variants, menu interfaces, and applications into one unified virtual experience. Proof-of-concept application examples were developed and informally tested, such as 3D manipulation, scene navigation, and image zooming, illustrating a range of advanced interaction capabilities on targets at any distance, without relying on extra controller devices.
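The division of labor in Gaze + Pinch can be summarized in a few lines: gaze nominates the target at the moment the pinch begins, and subsequent hand motion is applied to that target regardless of its distance. A minimal sketch with illustrative object and event names (not the paper's API):

```python
from dataclasses import dataclass

@dataclass
class GazePinchController:
    """Minimal state machine in the spirit of Gaze + Pinch: the object
    under the gaze ray when the pinch starts is selected, and hand
    motion is then applied to it, however far away it is.
    """
    selected: object = None

    def on_pinch_start(self, gazed_object):
        self.selected = gazed_object      # gaze chooses the target

    def on_hand_move(self, delta):
        if self.selected is not None:
            # Indirect manipulation: hand displacement drives the
            # remote object as if it were held directly.
            self.selected["position"] = [p + d for p, d in
                                         zip(self.selected["position"], delta)]

    def on_pinch_end(self):
        self.selected = None

cube = {"position": [0.0, 0.0, 2.0]}     # an object 2 m away
ctl = GazePinchController()
ctl.on_pinch_start(gazed_object=cube)
ctl.on_hand_move(delta=[0.1, 0.0, 0.0])  # small hand motion moves the far cube
ctl.on_pinch_end()
print(cube)
```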
Conference Paper
Full-text available
Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses.
Article
Full-text available
Vibrotactile feedback is widely used in mobile devices because it provides a discreet and private feedback channel. Gaze based interaction, on the other hand, is useful in various applications due to its unique capability to convey the focus of interest. Gaze input is naturally available as people typically look at things they operate, but feedback from eye movements is primarily visual. Gaze interaction and the use of vibrotactile feedback have been two parallel fields of human-computer interaction research with a limited connection. Our aim was to build this connection by studying the temporal and spatial mechanisms of supporting gaze input with vibrotactile feedback. The results of a series of experiments showed that the temporal distance between a gaze event and vibrotactile feedback should be less than 250 milliseconds to ensure that the input and output are perceived as connected. The effectiveness of vibrotactile feedback was largely independent of the spatial body location of vibrotactile actuators. In comparison to other modalities, vibrotactile feedback performed equally to auditory and visual feedback. Vibrotactile feedback can be especially beneficial when other modalities are unavailable or difficult to perceive. Based on the findings, we present design guidelines for supporting gaze interaction with vibrotactile feedback.
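The key temporal guideline from this work, that feedback should arrive within roughly 250 ms of the gaze event to be perceived as connected, can be expressed directly. A trivial sketch:

```python
def feels_connected(gaze_event_time, feedback_time, window=0.250):
    """The reported guideline: vibrotactile feedback should follow the
    gaze event within roughly 250 ms to be perceived as connected to it.
    Times are in seconds; the function just checks that window.
    """
    return 0.0 <= (feedback_time - gaze_event_time) <= window

print(feels_connected(gaze_event_time=1.00, feedback_time=1.18))  # True
print(feels_connected(gaze_event_time=1.00, feedback_time=1.40))  # False
```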
Conference Paper
Full-text available
We present GazeTorch, a novel interface that provides gaze awareness during remote collaboration on physical tasks. GazeTorch uses a spotlight to display gaze information of the remote helper on the physical task space of the worker. We conducted a preliminary user study to evaluate user's subjective opinion on the quality of collaboration, using GazeTorch and a camera-only setup. Our preliminary results suggest that the participants felt GazeTorch made collaboration easier, made referencing and identifying of objects effortless, and improved the worker's confidence that the task was completed accurately. We conclude by presenting some novel application scenarios for the concept of augmenting real-time gaze information in the physical world.
Article
Full-text available
Minimally invasive and robotic surgery changes the capacity for surgical mentors to guide their trainees with the control customary to open surgery. This neuroergonomic study aims to assess a "Collaborative Gaze Channel" (CGC); which detects trainer gaze-behavior and displays the point of regard to the trainee. A randomized crossover study was conducted in which twenty subjects performed a simulated robotic surgical task necessitating collaboration either with verbal (control condition) or visual guidance with CGC (study condition). Trainee occipito-parietal (O-P) cortical function was assessed with optical topography (OT) and gaze-behavior was evaluated using video-oculography. Performance during gaze-assistance was significantly superior [biopsy number: (mean ± SD): control = 5.6 ± 1.8 vs. CGC = 6.6 ± 2.0; p < 0.05] and was associated with significantly lower O-P cortical activity [ΔHbO2 mMol × cm [median (IQR)] control = 2.5 (12.0) vs. CGC 0.63 (11.2), p < 0.001]. A random effect model (REM) confirmed the association between guidance mode and O-P excitation. Network cost and global efficiency were not significantly influenced by guidance mode. A gaze channel enhances performance, modulates visual search, and alleviates the burden in brain centers subserving visual attention and does not induce changes in the trainee's O-P functional network observable with the current OT technique. The results imply that through visual guidance, attentional resources may be liberated, potentially improving the capability of trainees to attend to other safety critical events during the procedure.
Conference Paper
Full-text available
Smartwatches are widely available and increasingly adopted by consumers. The most common way of interacting with smartwatches is either touching a screen or pressing buttons on the sides. However, such techniques require using both hands. We propose glance awareness and active gaze interaction as alternative techniques to interact with smartwatches. We will describe an experiment conducted to understand the user preferences for visual and haptic feedback on a "glance" at the wristwatch. Following the glance, the users interacted with the watch using gaze gestures. Our results showed that user preferences differed depending on the complexity of the interaction. No clear preference emerged for complex interaction. For simple interaction, haptics was the preferred glance feedback modality.
Article
Full-text available
Human saccades and fixations have numerous functions in complex everyday tasks, which have sometimes been neglected in simple experimental situations. In this review I describe some of the characteristics of eye movement behaviour during real-world interactions with objects, while walking in natural environments and while holding a conversation. When performing real-world actions and walking around the world, we fixate relevant features at critical time points during the task. The eye movements between these fixations are planned and coordinated alongside head and body movements, often occurring a short time before the corresponding action. In social interactions, eye movements are both a mechanism for taking in information (for example, when looking at someone's face or following their gaze) and for signalling one's attention to another person. Thus eye movements are specific to a particular task context and subject to high-level planning and control during everyday actions. (Eye, advance online publication, 14 November 2014; doi:10.1038/eye.2014.275.)
Conference Paper
Full-text available
Anticipating the emergence of gaze tracking capable mobile devices, we are investigating the use of gaze as an input modality in handheld mobile devices. We conducted a study of combining gaze gestures with vibrotactile feedback. Gaze gestures were used as an input method in a mobile device and vibrotactile feedback as a new alternative way to give confirmation of interaction events. Our results show that vibrotactile feedback significantly improved the use of gaze gestures. The tasks were completed faster and rated easier and more comfortable when vibrotactile feedback was provided.
Conference Paper
Full-text available
Consistent measuring and reporting of gaze data quality is important in research that involves eye trackers. We have developed TraQuMe: a generic system to evaluate gaze data quality. The quality measurement is fast, and the interpretation of the results is aided by graphical output. Numeric data is saved for reporting aggregate metrics for the whole experiment. We tested TraQuMe in the context of a novel hidden calibration procedure that we developed to aid in experiments where participants should not know that their gaze is being tracked. The quality of tracking data after the hidden calibration procedure was very close to that obtained with the Tobii T60 tracker's built-in 2-point, 5-point and 9-point calibrations.
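Gaze data quality is conventionally summarized by accuracy (mean offset from a known target) and precision (dispersion of successive samples). The abstract does not define TraQuMe's exact metrics, so the sketch below uses these standard definitions as an assumption:

```python
import numpy as np

def gaze_quality(samples, target):
    """Common gaze data quality metrics of the kind a tool like TraQuMe
    reports:

    - accuracy: mean offset of the gaze samples from the known target
    - precision: RMS of successive sample-to-sample distances
    """
    samples = np.asarray(samples, dtype=float)
    accuracy = np.linalg.norm(samples - np.asarray(target), axis=1).mean()
    steps = np.diff(samples, axis=0)
    precision = np.sqrt((np.linalg.norm(steps, axis=1) ** 2).mean())
    return accuracy, precision

samples = [[0.51, 0.50], [0.52, 0.49], [0.50, 0.51]]
print(gaze_quality(samples, target=[0.50, 0.50]))
```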
Article
Full-text available
Tactile perception is inhibited during movement execution, a phenomenon known as tactile suppression. Here, we investigated whether the type of movement determines whether or not this form of sensory suppression occurs. Participants performed simple reaching or exploratory movements. Tactile discrimination thresholds were calculated for vibratory stimuli delivered to participants' wrists while executing the movement, and while at rest (a tactile discrimination task, TD). We also measured discrimination performance in a same vs. different task for the explored materials during the execution of the different movements (a surface discrimination task, SD). The TD and SD tasks could either be performed singly or together, both under active movement and passive conditions. Consistent with previous results, tactile thresholds measured at rest were significantly lower than those measured during both active movement and passive touch (that is, tactile suppression was observed). Moreover, SD performance was significantly better under conditions of single-tasking, active movements, as well as exploratory movements, as compared to conditions of dual-tasking, passive movements, and reaching movements, respectively. Therefore, the present results demonstrate that when active hand movements are made with the purpose of gaining information about the surface properties of different materials an enhanced perceptual performance is observed. As such, it would appear that tactile suppression occurs for irrelevant tactual features during both reaching and exploratory movements, but not for those task-relevant features that result from action execution during tactile exploration. Taken together, then, these results support a context-dependent modulation of tactile suppression during movement execution.
Conference Paper
Full-text available
While eye tracking has a high potential for fast selection tasks, it is often regarded as error-prone and unnatural, especially for gaze-only interaction. To improve on that, we propose gaze-supported interaction as a more natural and effective way of combining a user's gaze with touch input from a handheld device. In particular, we contribute a set of novel and practical gaze-supported selection techniques for distant displays. Designed according to the principle "gaze suggests, touch confirms", they include an enhanced gaze-directed cursor, local zoom lenses and more elaborate techniques utilizing manual fine positioning of the cursor via touch. In a comprehensive user study with 24 participants, we investigated the potential of these techniques for different target sizes and distances. All novel techniques outperformed a simple gaze-directed cursor and showed individual advantages. In particular, those techniques using touch for fine cursor adjustments (MAGIC touch) and for cycling through a list of possible close-to-gaze targets (MAGIC tab) demonstrated a high overall performance and usability.
Article
Full-text available
Most object manipulation tasks involve a series of actions demarcated by mechanical contact events, and gaze is typically directed to the locations of these events as the task unfolds. Here, we examined the timing of gaze shifts relative to hand movements in a task in which participants used a handle to contact sequentially five virtual objects located in a horizontal plane. This task was performed both with and without visual feedback of the handle position. We were primarily interested in whether gaze shifts, which in our task shifted from a given object to the next about 100 ms after contact, were predictive or triggered by tactile feedback related to contact. To examine this issue, we included occasional catch contacts where forces simulating contact between the handle and object were removed. In most cases, removing force did not alter the timing of gaze shifts irrespective of whether or not vision of handle position was present. However, in about 30% of the catch contacts, gaze shifts were delayed. This percentage corresponded to the fraction of contacts with force feedback in which gaze shifted more than 130 ms after contact. We conclude that gaze shifts are predictively controlled but timed so that the hand actions around the time of contact are captured in central vision. Furthermore, a mismatch between the expected and actual tactile information related to the contact can lead to a reorganization of gaze behavior for gaze shifts executed greater than 130 ms after a contact event.
Conference Paper
Full-text available
Novel robotic technologies utilised in surgery need assessment for their effects on the user as well as on technical performance. In this paper, the evolution in ‘cognitive burden’ across visuomotor learning is quantified using a combination of functional near infrared spectroscopy (fNIRS) and graph theory. The results demonstrate escalating costs within the activated cortical network during the intermediate phase of learning which is manifest as an increase in cognitive burden. This innovative application of graph theory and fNIRS enables the economic evaluation of brain behaviour underpinning task execution and how this may be impacted by novel technology and learning. Consequently, this may shed light on how robotic technologies improve human-machine interaction and augment minimally invasive surgical skills acquisition. This work has significant implications for the development and assessment of emergent robotic technologies at cortical level and in elucidating learning-related plasticity in terms of inter-regional cortical connectivity.
Conference Paper
Full-text available
This work explores a new direction in utilizing eye gaze for computer input. Gaze tracking has long been considered as an alternative or potentially superior pointing method for computer input. We believe that many fundamental limitations exist with traditional gaze pointing. In particular, it is unnatural to overload a perceptual channel such as vision with a motor control task. We therefore propose an alternative approach, dubbed MAGIC (Manual And Gaze Input Cascaded) pointing. With such an approach, pointing appears to the user to be a manual task, used for fine manipulation and selection. However, a large portion of the cursor movement is eliminated by warping the cursor to the eye gaze area, which encompasses the target. Two specific MAGIC pointing techniques, one conservative and one liberal, were designed, analyzed, and implemented with an eye tracker we developed. They were then tested in a pilot study. This early-stage exploration showed that the MAGIC pointing techniques might offer many advantages, including reduced physical effort and fatigue as compared to traditional manual pointing, greater accuracy and naturalness than traditional gaze pointing, and possibly faster speed than manual pointing. The pros and cons of the two techniques are discussed in light of both performance data and subjective reports.
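The core MAGIC idea, coarse positioning by a gaze warp plus fine positioning by hand, reduces to a short update rule. A sketch with an assumed warp-distance threshold (the papers' conservative/liberal variants differ in when the warp fires; this is closest to the liberal one):

```python
import numpy as np

WARP_THRESHOLD = 120.0   # px: only warp if the cursor is far from gaze (assumed)

def magic_pointing_update(cursor, gaze, mouse_delta, moving):
    """Sketch of the MAGIC idea: the bulk of cursor travel is removed by
    warping the cursor to the gaze area when manual movement starts;
    the mouse then performs only the fine selection.
    """
    cursor = np.asarray(cursor, dtype=float)
    gaze = np.asarray(gaze, dtype=float)
    if moving and np.linalg.norm(gaze - cursor) > WARP_THRESHOLD:
        cursor = gaze.copy()             # coarse positioning by gaze
    return cursor + np.asarray(mouse_delta, dtype=float)  # fine tuning by hand

print(magic_pointing_update(cursor=[100, 100], gaze=[800, 600],
                            mouse_delta=[-3, 2], moving=True))
```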
Conference Paper
Full-text available
Previous research shows that text entry by gaze using dwell time is slow, about 5-10 words per minute (wpm). These results are based on experiments with novices using a constant dwell time, typically between 450 and 1000 ms. We conducted a longitudinal study to find out how fast novices learn to type by gaze using an adjustable dwell time. Our results show that the text entry rate increased from 6.9 wpm in the first session to 19.9 wpm in the tenth session. Correspondingly, the dwell time decreased from an average of 876 ms to 282 ms, and the error rates decreased from 1.28% to 0.36%. The achieved typing speed of nearly 20 wpm is comparable with the result of 17.3 wpm achieved in an earlier, similar study with Dasher.
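Dwell-time selection itself is a simple timer over the currently gazed key; the study's contribution was letting users shorten the dwell over practice. A minimal sketch with an illustrative API:

```python
class DwellSelector:
    """Dwell-time key selection as used in gaze typing: a key is
    entered once the gaze has rested on it for `dwell` seconds.
    The study above let users adjust this dwell, which fell on
    average from 876 ms to 282 ms across ten sessions.
    """
    def __init__(self, dwell=0.876):
        self.dwell = dwell
        self.key, self.since = None, None

    def update(self, key, t):
        """Feed the currently gazed key at time t (s); returns the key
        when the dwell threshold is crossed, else None."""
        if key != self.key:
            self.key, self.since = key, t      # gaze moved to a new key
            return None
        if self.since is not None and t - self.since >= self.dwell:
            self.since = None                  # fire once per fixation
            return key
        return None

sel = DwellSelector(dwell=0.282)               # an expert setting
for t in [0.0, 0.1, 0.2, 0.3]:
    out = sel.update("H", t)
    if out:
        print("typed:", out)                   # fires at t = 0.3
```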
Conference Paper
Full-text available
Nonparametric data from multi-factor experiments arise often in human-computer interaction (HCI). Examples may include error counts, Likert responses, and preference tallies. But because multiple factors are involved, common nonparametric tests (e.g., Friedman) are inadequate, as they are unable to examine interaction effects. While some statistical techniques exist to handle such data, these techniques are not widely available and are complex. To address these concerns, we present the Aligned Rank Transform (ART) for nonparametric factorial data analysis in HCI. The ART relies on a preprocessing step that "aligns" data before applying averaged ranks, after which point common ANOVA procedures can be used, making the ART accessible to anyone familiar with the F-test. Unlike most articles on the ART, which only address two factors, we generalize the ART to N factors. We also provide ARTool and ARTweb, desktop and Web-based programs for aligning and ranking data. Our re-examination of some published HCI results exhibits advantages of the ART.
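The ART's preprocessing step can be shown compactly for a two-factor design: strip all effects from the response, add back only the effect of interest, then assign averaged ranks, after which a standard ANOVA is run on the ranks. A simplified sketch for one main effect (ARTool implements the general N-factor case; the data below are made up):

```python
import numpy as np
import pandas as pd

def align_rank(df, response, a, b):
    """One step of the Aligned Rank Transform for a two-factor design:
    remove every effect from the response, add back only the main
    effect of `a`, then rank. The other main effect and the
    interaction are aligned and ranked the same way, separately.
    """
    grand = df[response].mean()
    cell = df.groupby([a, b])[response].transform("mean")
    a_mean = df.groupby(a)[response].transform("mean")
    residual = df[response] - cell              # strip all effects
    effect = a_mean - grand                     # add back only A's effect
    aligned = residual + effect
    return aligned.rank(method="average")       # averaged ranks for ANOVA

data = pd.DataFrame({
    "technique": ["gaze", "gaze", "mouse", "mouse"] * 2,
    "size":      ["small", "large"] * 4,
    "errors":    [5, 2, 3, 1, 6, 2, 4, 1],
})
data["art_rank"] = align_rank(data, "errors", "technique", "size")
print(data)
```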
Conference Paper
Full-text available
Eye gaze interaction can provide a convenient and natural addition to user-computer dialogues. We have previously reported on our interaction techniques using eye gaze [10]. While our techniques seemed useful in demonstration, we now investigate their strengths and weaknesses in a controlled setting. In this paper, we present two experiments that compare an interaction technique we developed for object selection, based on where a person is looking, with the most commonly used selection method: the mouse. We find that our eye gaze interaction technique is faster than selection with a mouse. The results show that our algorithm, which makes use of knowledge about how the eyes behave, preserves the natural quickness of the eye. Eye gaze interaction is a reasonable addition to computer interaction and is convenient in situations where it is important to use the hands for other tasks. It is particularly beneficial for the larger screen workspaces and virtual environments of the future, and it will become increasingly practical as eye tracker technology matures.
Conference Paper
Full-text available
Our experience with a Haptic Workstation™ has shown that this device is uncomfortable to use during long sessions. The main reason is the uncomfortable posture of the arms, which must be kept outstretched horizontally while supporting the weight of an exoskeleton. We describe Zero-G, a real-time weight compensation system aimed at improving user comfort by compensating for the weight of both the exoskeleton and the arms (zero-gravity illusion). We present experimental results complemented with electromyography (EMG) measures as an indicator of muscular activity/fatigue. Our tests show how Zero-G exerts a positive influence on the reduction of muscular fatigue when using a Haptic Workstation™.
Conference Paper
Full-text available
Eye typing provides a means of communication for severely handicapped people, even those who are only capable of moving their eyes. This paper considers the features, functionality, and methods used in the eye typing systems developed in the last twenty years. Primarily concerned with text production, the paper also addresses other communication-related issues, among them customization and voice output.
Conference Paper
Full-text available
The use of master-slave surgical robots for Minimally Invasive Surgery (MIS) has created a physical separation between the surgeon and the patient. Reconnecting the essential visuomotor sensory feedback is important for the safe practice of robotic assisted MIS procedures. This paper introduces a novel gaze contingent framework with real-time haptic feedback by transforming visual sensory information into physical constraints that can interact with the motor sensory channel. We demonstrate how motor tracking of deforming tissue can be made more effective and accurate through the concept of gaze-contingent motor channelling. The method also uses 3D eye gaze to dynamically prescribe and update safety boundaries during robotic assisted MIS without prior knowledge of the soft-tissue morphology. Initial validation results on both simulated and robotic assisted phantom procedures demonstrate the potential clinical value of the technique.
Article
This sweeping introduction to the science of virtual environment technology masterfully integrates research and practical applications culled from a range of disciplines, including psychology, engineering, and computer science. With contributions from the field's foremost researchers and theorists, the book focuses in particular on how virtual technology and interface design can better accommodate human cognitive, motor, and perceptual capabilities. Throughout, it brings the reader up-to-date with the latest design strategies and cutting-edge virtual environments, and points to promising avenues for future development. The book is divided into three parts. The first part introduces the reader to the subject by defining basic terms, identifying key components of the virtual environment, and reviewing the origins and elements of virtual environments. The second part focuses on current technologies used to present visual, auditory, tactile, and kinesthetic information. The book concludes with an in-depth analysis of how environments and human perception are integrated to create effective virtual systems. Comprehensive and splendidly written, Virtual Environments and Advanced Interface Design will be the "bible" on the subject for years to come. Students and researchers in computer science, psychology, and cognitive science will all want to have a copy on their shelves.
Chapter
The problem of human-computer interaction can be viewed as two powerful information processors (human and computer) attempting to communicate with each other via a narrow-bandwidth, highly constrained interface (Tufte, 1989). To address it, we seek faster, more natural, and more convenient means for users and computers to exchange information. The user’s side is constrained by the nature of human communication organs and abilities; the computer’s is constrained only by input/output devices and interaction techniques that we can invent. Current technology has been stronger in the computer-to-user direction than the user-to-computer, hence today’s user-computer dialogues are rather one-sided, with the bandwidth from the computer to the user far greater than that from user to computer. Using eye movements as a user-to-computer communication medium can help redress this imbalance. This chapter describes the relevant characteristics of the human eye, eye-tracking technology, how to design interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way, and the relationship between eye-movement interfaces and virtual environments. As with other areas of research and design in human-computer interaction, it is helpful to build on the equipment and skills humans have acquired through evolution and experience and search for ways to apply them to communicating with a computer. Direct manipulation interfaces have enjoyed great success largely because they draw on analogies to existing human skills (pointing, grabbing, moving objects in space), rather than trained behaviors. Similarly, we try to make use of natural eye movements in designing interaction techniques for the eye. Because eye movements are so different from conventional computer inputs, our overall approach in designing interaction techniques is, wherever possible, to obtain information from a user’s natural eye movements while viewing the screen, rather than requiring the user to make specific trained eye movements to actuate the system. This requires careful attention to issues of human design, as will any successful work in virtual environments. The goal is for human-computer interaction to start with studies of the characteristics of human communication channels and skills and then develop devices, interaction techniques, and interfaces that communicate effectively to and from those channels.
Book
Focusing on recent advances in analytical techniques, this third edition of Andrew Duchowski's successful guide has been revised and extended. It includes new chapters on calibration accuracy, precision and correction; advanced eye movement analysis; binocular eye movement analysis; practical gaze analytics; and eye movement synthesis. Eye Tracking Methodology opens with useful background information, including an introduction to the human visual system and key issues in visual perception and eye movement. The author then surveys eye-tracking devices and provides a detailed introduction to the technical requirements necessary for installing a system and developing an application program. Modern programming examples (in Python) are included and the author outlines the gaze analytics pipeline, a step-by-step data processing sequence from raw data to statistical analysis. Focusing on the use of modern video-based, corneal-reflection eye trackers – the most widely available and affordable types of systems – Andrew Duchowski takes a look at a number of interesting and challenging applications in human factors, collaborative systems, virtual reality, marketing and advertising. His primary focus is on methodology, and how analysis of eye movements can enhance research and development of anything that is inspected visually. Stefan Robila, reviewing the second edition, says: "The book is written in an easy-to-understand language. Given its breadth, it may be most appropriate for scientists and students starting in this field. ... Overall, I found it to be a solid book on a fascinating topic" (ACM Computing Reviews, October 2008).
Conference Paper
" Modalities such as pen and touch are associated with direct input but can also be used for indirect input. We propose to combine the two modes for direct-indirect input modulated by gaze. We introduce gaze-shifting as a novel mechanism for switching the input mode based on the alignment of manual input and the user's visual attention. Input in the user's area of attention results in direct manipulation whereas input offset from the user's gaze is redirected to the visual target. The technique is generic and can be used in the same manner with different input modalities. We show how gaze-shifting enables novel direct-indirect techniques with pen, touch, and combinations of pen and touch input.
Article
Gaze has the potential to complement multi-touch for interaction on the same surface. We present gaze-touch, a technique that combines the two modalities based on the principle of "gaze selects, touch manipulates". Gaze is used to select a target, and coupled with multi-touch gestures that the user can perform anywhere on the surface. Gaze-touch enables users to manipulate any target from the same touch position, for whole-surface reachability and rapid context switching. Conversely, gaze-touch enables manipulation of the same target from any touch position on the surface, for example to avoid occlusion. Gaze-touch is designed to complement direct-touch as the default interaction on multi-touch surfaces. We provide a design space analysis of the properties of gaze-touch versus direct-touch, and present four applications that explore how gaze-touch can be used alongside direct-touch. The applications demonstrate use cases for interchangeable, complementary and alternative use of the two modes of interaction, and introduce novel techniques arising from the combination of gaze-touch and conventional multi-touch.
Article
Gaze tracking technology is increasingly common in desktop, laptop and mobile scenarios. Most previous research on eye gaze patterns during human-computer interaction has been confined to controlled laboratory studies. In this paper we present an in situ study of gaze and mouse coordination as participants went about their normal activities. We analyze the coordination between gaze and mouse, showing that gaze often leads the mouse, but not as much as previously reported, and in ways that depend on the type of target. Characterizing the relationship between the eyes and mouse in realistic multi-task settings highlights some new challenges we face in designing robust gaze-enhanced interaction techniques.
Article
Repetitive haptic tasks involve constant movement of the user's hand while interacting with the haptic application. As haptic devices become more prominent, user satisfaction must be studied. Undesired fatigue is a side effect of repetitive tasks that can reduce the overall Quality of Experience (QoE) of a haptic-based application. Fatigue is usually assessed through questionnaires and observation. In this paper, we study the user force profile and examine the trends present within the profile after users perform numerous haptic signature tasks. Our results show a significant correlation between force profile elements and users' perceived fatigue.
Article
When individuals perform purposeful actions to fatigue, there is typically a general decline in their movement performance. This study was designed to investigate the effects exercise-induced fatigue has on lower limb kinetics and kinematics during a side-step cutting task. In particular, it was of interest to determine what changes could be seen in mean amplitude and all metrics of signal variability with fatigue. The results of the study revealed that post-fatigue there was an overall decrease in absolute force production, as reflected by a decline in the mean amplitude and variability (SD) of the ground reaction forces (GRFv and GRFml). A decrease in the mean and SD of the knee moments was also observed post-exercise. Interestingly, this trend was not mirrored by similar changes in the time-dependent properties of these signals. Instead, there was an increase in the SampEn values (reflecting a more variable, irregular signal) for the GRF profiles, knee kinematics, and moments following the exercise-induced fatigue. These results illustrate that fatigue can have differential effects on movement variability, resulting in both an increase and a decrease in movement variability depending on the variable selected. Thus, the impact of fatigue is not simply restricted to a decline in the force-producing capacity of the system; more importantly, it demonstrates that the ability of the person to perform a smooth and controlled action is limited by fatigue.
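SampEn, the irregularity metric used in that study, is the negative log of the conditional probability that subsequences matching for m points also match for m + 1 points. A compact reference implementation under the usual conventions (Chebyshev distance, tolerance r times the signal SD, self-matches excluded):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy (SampEn) of a 1-D signal. Higher values indicate
    a more irregular signal, which is how the study above quantified
    post-fatigue changes in movement variability.
    """
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x) - m                      # number of templates of both lengths

    def matches(templates):
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= tol) - 1     # exclude the self-match
        return count

    tm = np.array([x[i:i + m] for i in range(n)])
    tm1 = np.array([x[i:i + m + 1] for i in range(n)])
    b, a = matches(tm), matches(tm1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

rng = np.random.default_rng(0)
print(sample_entropy(np.sin(np.linspace(0, 8 * np.pi, 200))))  # low: regular
print(sample_entropy(rng.standard_normal(200)))                # high: irregular
```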
Article
People naturally interact with the world multimodally, through both parallel and sequential use of multiple perceptual modalities. Multimodal human–computer interaction has sought for decades to endow computers with similar capabilities, in order to provide more natural, powerful, and compelling interactive experiences. With the rapid advance in non-desktop computing generated by powerful mobile devices and affordable sensors in recent years, multimodal research that leverages speech, touch, vision, and gesture is on the rise. This paper provides a brief and personal review of some of the key aspects and issues in multimodal interaction, touching on the history, opportunities, and challenges of the area, especially in the area of multimodal integration. We review the question of early vs. late integration and find inspiration in recent evidence in biological sensory integration. Finally, we list challenges that lie ahead for research in multimodal human–computer interaction.
Conference Paper
This paper describes the PHANTOM haptic interface - a device which measures a user's finger tip position and exerts a precisely controlled force vector on the finger tip. The device has enabled users to interact with and feel a wide variety of virtual objects and will be used for control of remote manipulators. This paper discusses the design rationale, novel kinematics and mechanics of the PHANTOM. A brief description of the programming of basic shape elements and contact interactions is also given.
Article
Longitudinal changes in cortical function are known to accompany motor skills learning, and can be detected as an evolution in the activation map. These changes include attenuation in activation in the prefrontal cortex and increased activation in primary and secondary motor regions, the cerebellum and posterior parietal cortex. Despite this, comparatively little is known regarding the impact of the mode or type of training on the speed of activation map plasticity and on longitudinal variation in network architectures. To address this, we randomised twenty-one subjects to learn a complex motor tracking task delivered across six practice sessions in either "free-hand" or "gaze-contingent motor control" mode, during which frontoparietal cortical function was evaluated using functional near infrared spectroscopy. Results demonstrate that upon practice termination, gaze-assisted learners had achieved superior technical performance compared to free-hand learners. Furthermore, evolution in frontoparietal activation foci indicative of expertise was achieved at an earlier stage in practice amongst gaze-assisted learners. Both groups exhibited economical small world topology; however, networks in learners randomised to gaze-assistance were less costly and showed higher values of local efficiency suggesting improved frontoparietal communication in this group. We conclude that the benefits of gaze-assisted motor learning are evidenced by improved technical accuracy, more rapid task internalisation and greater neuronal efficiency. This form of assisted motor learning may have occupational relevance for high precision control such as in surgery or following re-learning as part of stroke rehabilitation.
Conference Paper
We present a practical technique for pointing and selection using a combination of eye gaze and keyboard triggers. EyePoint uses a two-step progressive refinement process fluidly stitched together in a look-press-look-release action, which makes it possible to compensate for the accuracy limitations of the current state-of-the-art eye gaze trackers. While research in gaze-based pointing has traditionally focused on disabled users, EyePoint makes gaze-based pointing effective and simple enough for even able-bodied users to use for their everyday computing tasks. As the cost of eye gaze tracking devices decreases, it will become possible for such gaze-based techniques to be used as a viable alternative for users who choose not to use a mouse depending on their abilities, tasks and preferences.
Book
Despite the availability of cheap, fast, accurate and usable eye trackers, there is still little information available on how to develop, implement and use these systems. This second edition of Andrew Duchowski's successful guide to these systems contains significant additional material on the topic and fills this gap in the market with this accessible and comprehensive introduction. Opening with useful background information, including an introduction to the human visual system and key issues in visual perception and eye movement, the second part surveys eye-tracking devices and provides a detailed introduction to the technical requirements necessary for installing a system and developing an application program. The book focuses on video-based, corneal-reflection eye trackers - the most widely available and affordable type of system, before closing with a look at a number of interesting and challenging applications in human factors, collaborative systems, virtual reality, marketing and advertising.
Article
In seeking hitherto-unused methods by which users and computers can communicate, we investigate the usefulness of eye movements as a fast and convenient auxiliary user-to-computer communication mode. The barrier to exploiting this medium has not been eye-tracking technology but the study of interaction techniques that incorporate eye movements into the user-computer dialogue in a natural and unobtrusive way. This paper discusses some of the human factors and technical considerations that arise in trying to use eye movements as an input medium, describes our approach and the first eye movement-based interaction techniques that we have devised and implemented in our laboratory, and reports our experiences and observations on them.
Article
This paper presents a simple and widely applicable multiple test procedure of the sequentially rejective type, i.e. hypotheses are rejected one at a time until no further rejections can be made. It is shown that the test has a prescribed level of significance protection against error of the first kind for any combination of true hypotheses. The power properties of the test and a number of possible applications are also discussed.
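The procedure itself (Holm's sequentially rejective test) is short: order the p-values, test the smallest against alpha/m, the next against alpha/(m-1), and so on, stopping at the first non-rejection. A sketch:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm's sequentially rejective procedure: hypotheses are rejected
    one at a time, smallest p-value first, each compared against an
    increasingly lenient threshold, until no further rejections can be
    made. Controls the family-wise error rate for any combination of
    true hypotheses.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break                      # all remaining hypotheses are retained
    return reject

print(holm_bonferroni([0.010, 0.040, 0.030, 0.005]))  # [True, False, False, True]
```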
Chapter
The results of a multi-year research program to identify the factors associated with variations in subjective workload within and between different types of tasks are reviewed. Subjective evaluations of 10 workload-related factors were obtained from 16 different experiments. The experimental tasks included simple cognitive and manual control tasks, complex laboratory and supervisory control tasks, and aircraft simulation. Task-, behavior-, and subject-related correlates of subjective workload experiences varied as a function of difficulty manipulations within experiments, different sources of workload between experiments, and individual differences in workload definition. A multi-dimensional rating scale is proposed in which information about the magnitude and sources of six workload-related factors are combined to derive a sensitive and reliable estimate of workload.
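The scale this research program led to, the NASA-TLX, is conventionally scored by combining six subscale ratings with weights derived from 15 pairwise comparisons of the subscales. A sketch of that standard weighted score (subscale names and values are illustrative):

```python
def nasa_tlx(ratings, weights):
    """Overall NASA-TLX workload: six subscale ratings (0-100) are
    combined with weights obtained from 15 pairwise comparisons of
    the subscales, then divided by 15.
    """
    assert sum(weights.values()) == 15, "weights come from 15 pairwise comparisons"
    return sum(ratings[k] * weights[k] for k in ratings) / 15.0

ratings = {"mental": 70, "physical": 30, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 25}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(nasa_tlx(ratings, weights))   # weighted workload score (56.0 here)
```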
Article
This paper reviews tactual perception of material properties such as roughness, compliance, coldness and friction. Psychophysical functions relating physical properties to perception are discussed, as well as discrimination thresholds. Also, the neural codes mediating some of these sensations are discussed. Furthermore, we take a look into how sensation of these material properties can be induced artificially in haptic displays. Lastly, the interactions between perception of the different material properties are explored.
Article
Reports of 3 experiments testing the hypothesis that the average duration of responses is directly proportional to the minimum average amount of information per response. The results show that the rate of performance is approximately constant over a wide range of movement amplitude and tolerance limits. This supports the thesis that "the performance capacity of the human motor system plus its associated visual and proprioceptive feedback mechanisms, when measured in information units, is relatively constant over a considerable range of task conditions."
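The proportionality described here is what became Fitts's law: movement time grows linearly with the index of difficulty, ID = log2(2A/W) in Fitts's original formulation, where A is the movement amplitude and W the target width. A worked sketch with illustrative coefficients:

```python
import math

def fitts_mt(a, b, amplitude, width):
    """Fitts's law in its original form: movement time grows linearly
    with the index of difficulty ID = log2(2A/W). The coefficients
    a and b are fit empirically per device and task.
    """
    index_of_difficulty = math.log2(2 * amplitude / width)
    return a + b * index_of_difficulty

# Example: a = 0.1 s, b = 0.15 s/bit, 24 cm reach to a 3 cm target.
print(fitts_mt(0.1, 0.15, amplitude=24, width=3))  # ID = 4 bits -> 0.7 s
```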
Article
Two experiments establish links between desired knowledge about objects and hand movements during haptic object exploration. Experiment 1 used a match-to-sample task, in which blindfolded subjects were directed to match objects on a particular dimension (e.g., texture). Hand movements during object exploration were reliably classified as “exploratory procedures,” each procedure defined by its invariant and typical properties. The movement profile, i.e., the distribution of exploratory procedures, was directly related to the desired object knowledge that was required for the match. Experiment 2 addressed the reasons for the specific links between exploratory procedures and knowledge goals. Hand movements were constrained, and performance on various matching tasks was assessed. The procedures were considered in terms of their necessity, sufficiency, and optimality of performance for each task. The results establish that in free exploration, a procedure is generally used to acquire information about an object property, not because it is merely sufficient, but because it is optimal or even necessary. Hand movements can serve as “windows,” through which it is possible to learn about the underlying representation of objects in memory and the processes by which such representations are derived and utilized.
Article
Normative values for the Finger Tapping and Grooved Pegboard Tests were developed on a sample of 360 normal volunteers stratified according to gender, three educational groups ranging from 7 to 22 years of education, and four age groups between the ages of 16 and 70 years. Retest reliability was estimated for both measures. The Finger Tapping Test showed significant gender differences: women were substantially slower, particularly in the older age groups. On the Grooved Pegboard Test, a converse gender difference was noted: women were substantially faster than men. A smaller effect of increasing age was also found, and better-educated individuals performed faster. If these motor and visuomotor tests are to be applied, stratified normative estimates need to be used to provide viable clinical judgements.