Conference Paper

Gaze-Controlled Instructions for Manual Assembly Tasks - A Usability Evaluation Study


Abstract

People interact with technical systems every day, making use of a wide range of input methods. One possible but not yet well-established input method is eye gaze. The present article investigates a gaze-controlled interface in the context of manual assembly tasks, where it provides a language-free and at the same time hands-free input alternative. To this end, we implemented a gaze-controlled instruction prototype and compared its efficiency, usability, and user experience to that of an established paper manual. Both instruction forms were assessed on subjective measures (NASA-TLX, UEQ, and USE) as well as on an objective measure (assembly time). Although the gaze-based instruction form was prototypical and novel to the participants, its usability was at least comparable to that of the paper manual and on some scales even better. Further, the gaze-based interface yielded similar assembly times and was rated preferable in terms of user experience. Taken together, our results suggest that gaze-based instructions can be a valuable alternative to previously used instruction forms in the work context.


... The third recommendation is to enhance the ergonomics of the human-machine interfaces and to set appropriate SOA conditions based on the task requirements, balancing cognitive load against users' subjective experience. To this end, the spatial arrangement of tasks should be carefully configured to minimize eye movements, simplify operational processes, and improve user efficiency, all of which are potential avenues for refining the design of human-machine interfaces [52,53]. Moreover, participants' physiological and performance indicators should be monitored when designing human-machine interfaces. ...
Article
Full-text available
In today’s rapidly evolving technological landscape, human–machine interaction has become an issue that should be systematically explored. This research aimed to examine the impact of different pre-cue modes (visual, auditory, and tactile), stimulus modes (visual, auditory, and tactile), compatible mapping modes (both compatible (BC), transverse compatible (TC), longitudinal compatible (LC), and both incompatible (BI)), and stimulus onset asynchrony (200 ms/600 ms) on the performance of participants in complex human–machine systems. Eye movement data and a dual-task paradigm involving stimulus–response and manual tracking were utilized for this study. The findings reveal that visual pre-cues can capture participants’ attention towards peripheral regions, a phenomenon not observed when visual stimuli are presented in isolation. Furthermore, when confronted with visual stimuli, participants predominantly prioritize continuous manual tracking tasks, utilizing focal vision, while concurrently executing stimulus–response compatibility tasks with peripheral vision. In addition, the average pupil diameter tends to diminish with the use of visual pre-cues or visual stimuli but expands during auditory or tactile stimuli or pre-cue modes. These findings contribute to the existing literature on the theoretical design of complex human–machine interfaces and offer practical implications for the design of human–machine system interfaces. Moreover, this paper underscores the significance of considering the optimal combination of stimulus modes, pre-cue modes, and stimulus onset asynchrony, tailored to the characteristics of the human–machine interaction task.
Conference Paper
Full-text available
Manual assembly processes remain indispensable in many areas of the manufacturing industry. Quality control in particular, as well as the training of new employees, confronts companies with new challenges as digitalization advances. Assistance systems can help bridge the gap between requirements and qualification. We present an approach to intelligent assistance based on camera-based recognition of work processes using machine learning methods. The assistance system automatically generates support material for workers. In addition to the technical aspects, psychological aspects such as acceptance and motivation are examined.
Article
Full-text available
This study investigates the usability of various “dwell times” for selecting visual objects with eye-gaze-based input by means of eye tracking. Two experiments are described in which participants used eye-gaze-based input to select visual objects consisting of alphanumeric characters, dots, or visual icons. First, a preliminary experiment was designed to identify the range of dwell time durations suitable for eye-gaze-based object selection. Twelve participants were asked to evaluate, on a 7-point rating scale, how easily they could perform an object-selection task with a dwell time of 250, 500, 1000, or 2000 ms per object. The evaluations showed that a dwell time of 250 ms to around 1000 ms was rated as potentially useful for object selection with eye-gaze-based input. In the following main experiment, therefore, 30 participants used eye tracking to select object sequences from a display with a dwell time of 200, 400, 800, 1000 or 1200 ms per object. Object selection time, object selection success rate, the number of object selection corrections, and dwell time evaluations were obtained. The results showed that the total time necessary to select visual objects (object selection time) increased when dwell time increased, but longer dwell times resulted in a higher object-selection success rate and fewer object selection corrections. Furthermore, regardless of object type, eye-gaze-based object selection with dwell times of 200-800 ms was significantly slower for participants with glasses than for those without glasses. Most importantly, participant evaluations showed that a dwell time of 600 ms per object was easiest to use for eye-gaze-based selection of all three types of visual objects.
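The dwell-based selection mechanism that this study parametrizes is easy to state precisely: a selection fires once the gaze has rested on the same object for a fixed threshold. A minimal sketch in Python, assuming a hypothetical tracker polling function get_gaze_target and a selection callback on_select (neither is from the paper):

```python
import time

DWELL_TIME = 0.6  # seconds; ~600 ms was rated easiest to use in the study

def dwell_select(get_gaze_target, on_select):
    """Fire on_select(obj) once gaze has rested on one object for DWELL_TIME."""
    current, since = None, None
    while True:
        obj = get_gaze_target()          # object under the gaze point, or None
        now = time.monotonic()
        if obj is not current:           # gaze moved: restart the dwell timer
            current, since = obj, now
        elif obj is not None and now - since >= DWELL_TIME:
            on_select(obj)               # dwell threshold reached: select
            current, since = None, None  # reset so the object is not re-selected
        time.sleep(0.01)                 # poll at ~100 Hz
```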
Article
Full-text available
This paper presents results from an ongoing empirical study that investigates the effect of human-centered performance management on the operational performance and work motivation of shop floor workers. The approach is based on gamified information provisioning. To date, the concept of gamification has been applied in numerous fields, yet hardly any related work provides empirical findings for the production environment. Building on a previous approach, we implemented our concept prototypically as an MES application. We then integrated the application into a business game simulation. The study design builds on an application scenario in manual assembly, with two treatment groups defined for investigation. Qualitative observations show that the provision of gamified, metrics-based information proves to be a motivation driver.
Conference Paper
Full-text available
Augmented Reality (AR) is a novel technology that projects virtual information onto the real-world environment. With the increased use of Industry 4.0 technologies in manufacturing, AR has gained momentum across various stages of the product life cycle. AR can benefit production operators in many manufacturing tasks, such as quality inspection, work instructions for manual assembly, maintenance, and training. This research presents a typical architecture of an AR system, covering both its software and hardware functions. The architecture is then applied to display virtual assembly instructions, in the form of 3D animations, on the real-world environment. The chosen assembly task in this research is to assemble a planetary gearbox system. The assembly instructions are displayed on a mobile device targeting a static tracker placed in the assembly environment.
Conference Paper
Full-text available
With increasing automation, vehicles could soon become mobile work and living spaces, but traditional user interfaces (UIs) are not designed for this domain. We argue that high levels of productivity and user experience will only be achieved in SAE L3 automated vehicles if UIs are modified for non-driving related tasks. As controls might be far away (up to 2 meters), we suggest using gaze-based interaction with windshield displays. In this work, we investigate the effect of different dwell times and feedback designs (circular and linear progress indicators) on user preference, task performance, and error rates. Results from a user study conducted in a virtual reality driving simulator (N = 24) highlight that circular feedback animations around the viewpoint are preferred for gaze input. We conclude this work by pointing out the potential of gaze-based interactions with windshield displays for future SAE L3 vehicles.
Article
Full-text available
Background: Patient monitoring is indispensable in any operating room to follow the patient’s current health state based on measured physiological parameters. Reducing workload helps to free cognitive resources and thus influences human performance, which ultimately improves the quality of care. Among the many methods available to assess perceived workload, the National Aeronautics and Space Administration Task Load Index (NASA-TLX) is the most widely accepted tool. However, only few studies have investigated the validity of the NASA-TLX in the health care sector. Objective: This study aimed to validate a modified version of the raw NASA-TLX in patient monitoring tasks by investigating its correspondence with expected lower and higher workload situations and its robustness against nonworkload-related covariates; this defines criterion validity. Methods: In this pooled analysis, we evaluated raw NASA-TLX scores collected after performing patient monitoring tasks in four different investigator-initiated, computer-based, prospective, multicenter studies. All of them were conducted in three hospitals with a high standard of care in central Europe. In these already published studies, we compared conventional patient monitoring with two newly developed situation awareness–oriented monitoring technologies called Visual Patient and Visual Clot. The participants were resident and staff anesthesia and intensive care physicians, and nurse anesthetists with completed specialization qualification. We analyzed the raw NASA-TLX scores by fitting mixed linear regression models and univariate models with different covariates. Results: We assessed a total of 1160 raw NASA-TLX questionnaires after performing specific patient monitoring tasks. Good test performance and higher self-rated diagnostic confidence correlated significantly with lower raw NASA-TLX scores and subscores (all P
Article
Full-text available
Due to motor deficiencies inducing low force capabilities or tremor, many persons have great difficulty using joystick-operated wheelchairs. To alleviate such difficulties, alternative interfaces using vocal, gaze, or brain signals are now becoming available. While promising, these systems still need to be evaluated thoroughly. In this framework, the aims of this study are to analyse and evaluate the behaviour of eleven able-bodied subjects during a navigation task involving a door crossing, executed with a gaze- or joystick-operated electric wheelchair. An electric wheelchair was equipped with retroreflective markers, and their movements were recorded with an optoelectronic system. The gaze commands were detected using an eye tracking device. Apart from the classical forward, backward, stop, left, and right commands, the chosen screen-based interface integrated forward-right and forward-left commands. The global success rate with the gaze-based control was 80.3%. The path optimality ratio was 0.97, and the subjects adopted similar trajectories with both systems. The results for gaze control are promising and highlight the substantial use of the forward-left and forward-right commands (25% of all issued commands), which may explain the similarity between the trajectories using both interfaces.
Article
Full-text available
Small lot sizes in modern manufacturing present new challenges for people doing manual assembly tasks. Assistive systems, including context-aware instruction systems and collaborative robots, can support people in managing the increased flexibility while also reducing the number of errors. Although there has been much research in this area, these solutions are not yet widespread in companies. This paper aims to give a better understanding of the strengths and limitations of the different technologies with respect to their practical implementation in companies, both to give insight into which technologies can be used in practice and to suggest directions for future research. The paper gives an overview of the state of the art and then describes new technological solutions designed for companies to illustrate the current status and future needs. The information provided demonstrates that, although a lot of technologies are currently being investigated and discussed, many of them are not yet at a level where they can be implemented in practice.
Conference Paper
Full-text available
For eye tracking to become a ubiquitous part of our everyday interaction with computers, we first need to understand its limitations outside rigorously controlled labs, and develop robust applications that can be used by a broad range of users and in various environments. Toward this end, we collected eye tracking data from 80 people in a calibration-style task, using two different trackers in two lighting conditions. We found that accuracy and precision can vary between users and targets more than six-fold, and report on differences between lighting, trackers, and screen regions. We show how such data can be used to determine appropriate target sizes and to optimize the parameters of commonly used filters. We conclude with design recommendations and examples how our findings and methodology can inform the design of error-aware adaptive applications.
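Accuracy and precision in this sense have standard operational definitions: accuracy is the mean offset of gaze samples from the known target, and precision is the dispersion of samples around each other (here as sample-to-sample RMS). A sketch of that computation (coordinate units and data layout are assumptions, not details from the paper):

```python
import math

def accuracy_precision(samples, target):
    """samples: [(x, y), ...] gaze points recorded while fixating `target`.
    Returns (accuracy, precision) in the same units as the input
    (pixels or degrees). Needs at least two samples."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    accuracy = sum(dist(s, target) for s in samples) / len(samples)  # mean offset
    precision = math.sqrt(                                # RMS of successive
        sum(dist(s2, s1) ** 2 for s1, s2 in zip(samples, samples[1:]))
        / (len(samples) - 1)                              # sample distances
    )
    return accuracy, precision
```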
Article
Full-text available
Gaze-controlled interfaces have become a viable alternative to hand-input-based displays and present particular value to the field of assistive technologies, allowing people with motor disabilities to partake in activities that would otherwise have been inaccessible to them. The present paper gives an overview of the key problems associated with the user experience in gaze-controlled human–computer interfaces and introduces two areas of psychological research that could contribute to the development of gaze-controlled interfaces that give a more intuitive sense of control and are less likely to interfere with ongoing cognitive processes. Such interfaces are referred to as cognitively grounded. The two areas of psychological research that inform the design of cognitively grounded gaze-controlled interfaces are the sense of agency and cognitive embodiment. This overview builds on findings within these areas and outlines research questions essential to the design of cognitively grounded gaze-controlled interfaces.
Article
Full-text available
In the past twenty years, gaze control has become a reliable alternative input method, and not only for handicapped users. The selection of objects, however, which is of highest importance and frequency in computer control, requires explicit control not inherent in eye movements. Objects have therefore usually been selected via prolonged fixations (dwell times), which for many years seemed to be the only reliable selection method. In this paper, we review the pros and cons of classical selection methods and of novel metaphors based on pies and gestures. The focus is on the effectiveness and efficiency of selections. In order to discover the real potential of current suggestions for selection, a basic empirical comparison is recommended.
Conference Paper
Full-text available
This paper presents a descriptive analysis of over 1000 global NASA Task Load Index (TLX; Hart & Staveland, 1988) scores from over 200 publications. This analysis is similar to that which was suggested by Hart (2006). The frequency distributions and measures of central tendency presented will aid practitioners in understanding global NASA-TLX scores observed in system tests.
Article
Full-text available
Although analysis software for eye-tracking data has improved significantly in the past decades, the analysis of gaze behaviour recorded with head-mounted devices is still challenging and time-consuming. Therefore, new methods have to be tested to reduce the analysis workload while maintaining accuracy and reliability. In this article, dwell time percentages for six areas of interest (AOIs), from six participants cycling on four different roads, were analysed both frame-by-frame and in a 'fixation-by-fixation' manner. The fixation-based method is similar to the classic frame-by-frame method, but instead of frames, fixations are assigned to one of the AOIs. Although some considerable differences were found between the two methods, a Pearson correlation of 0.930 indicates good validity of the fixation-by-fixation method. For the analysis of gaze behaviour over an extended period of time, the fixation-based approach is a valuable and time-saving alternative to the classic frame-by-frame analysis.
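The fixation-by-fixation method described here amounts to assigning each detected fixation to an AOI and summing durations. A minimal sketch under the assumption of rectangular AOIs (the data layout is illustrative, not taken from the article):

```python
def dwell_percentages(fixations, aois):
    """fixations: [(x, y, duration), ...]; aois: {name: (x0, y0, x1, y1)}.
    Returns the share of total fixation time spent in each AOI (percent)."""
    totals = {name: 0.0 for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:  # first AOI containing the fixation
                totals[name] += dur
                break
    grand = sum(totals.values()) or 1.0          # avoid division by zero
    return {name: 100.0 * t / grand for name, t in totals.items()}
```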
Conference Paper
Full-text available
A good user experience is central for the success of interactive products. To improve products concerning these quality aspects it is thus also important to be able to measure user experience in an efficient and reliable way. But measuring user experience is not an end in itself. Several different questions can be the reason behind the wish to measure the user experience of a product quantitatively. We discuss several typical questions associated with the measurement of user experience and we show how these questions can be answered with a questionnaire with relatively low effort. In this paper the user experience questionnaire UEQ is used, but the general approach may be transferred to other questionnaires as well.
Conference Paper
Full-text available
Haptic feedback can improve the usability of gaze gestures in mobile devices. However, the benefit is highly sensitive to the exact timing of the feedback. In practical systems the processing and transmission of signals takes some time, and the feedback may be delayed. We conducted an experiment to determine limits on the feedback delays. The results show that when the delays increase to 200 ms or longer the task completion times are significantly longer than with shorter delays.
Conference Paper
Full-text available
The User Experience Questionnaire (UEQ) was developed to enable a fast assessment of various criteria of software quality. The relevance of the criteria was ensured through empirical selection: experts collected and then reduced a large set of potentially relevant terms and statements covering both "hard" usability criteria and "softer" user experience criteria. The resulting initial questionnaire of 80 bipolar items was used in several studies and reduced by factor analysis to 26 items, which map onto the six factors attractiveness, perspicuity, efficiency, dependability, stimulation, and novelty. Initial validation studies indicate satisfactory construct validity.
Article
Full-text available
This paper examines and compares the usability problems associated with eye-based and head-based assistive technology pointing devices when used for direct manipulation on a standard graphical user interface. It discusses and examines the pros and cons of eye-based pointing in comparison to the established assistive technology technique of head-based pointing, and illustrates the usability factors responsible for the apparent low usage or 'unpopularity' of eye-based pointing. It shows that user experience and target size on the interface are the predominant factors affecting eye-based pointing and suggests that these could be overcome to enable eye-based pointing to be a viable and available direct manipulation interaction technique for the motor-disabled community.
Conference Paper
Full-text available
The present study examines the role of subjectively perceived ergonomic quality (e.g. simplicity, controllability) and hedonic quality (e.g. novelty, originality) of a software system in forming a judgement of appeal. A hypothesised research model is presented. The two main research questions are: (1) Are ergonomic and hedonic quality subjectively different quality aspects that can be independently perceived by the users? and (2) Is the judgement of appeal formed by combining and weighting ergonomic and hedonic quality, and which weights are assigned? The results suggest that both quality aspects can be independently perceived by users. Moreover, they contributed almost equally to the appeal of the tested software prototypes. A simple averaging model implies that both quality aspects will compensate each other. Limitations and practical implications of the results are discussed.
Conference Paper
Full-text available
Previous research shows that text entry by gaze using dwell time is slow, about 5-10 words per minute (wpm). These results are based on experiments with novices using a constant dwell time, typically between 450 and 1000 ms. We conducted a longitudinal study to find out how fast novices learn to type by gaze using an adjustable dwell time. Our results show that the text entry rate increased from 6.9 wpm in the first session to 19.9 wpm in the tenth session. Correspondingly, the dwell time decreased from an average of 876 ms to 282 ms, and the error rates decreased from 1.28% to 0.36%. The achieved typing speed of nearly 20 wpm is comparable with the result of 17.3 wpm achieved in an earlier, similar study with Dasher.
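The wpm figures above follow the usual text-entry convention that one "word" is five characters; making the metric explicit (an illustration of the convention, not the authors' code):

```python
def words_per_minute(transcribed_chars, seconds):
    """Standard text-entry rate: one 'word' = 5 characters."""
    return (transcribed_chars / 5.0) / (seconds / 60.0)

# Example: 199 characters typed in 120 s -> 19.9 wpm,
# matching the tenth-session rate reported above.
```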
Conference Paper
Full-text available
We present a Fitts' law evaluation of a number of eye tracking and manual input devices in the selection of large visual targets. We compared the performance of two eye tracking techniques, manual click and dwell time click, with that of mouse and stylus. Results show eye tracking with manual click outperformed the mouse by 16%, with dwell time click 46% faster. However, the eye tracking conditions suffered high error rates of 11.7% for manual click and 43% for dwell time click. After Welford correction, eye tracking still appears to outperform manual input, with IPs of 13.8 bits/s for dwell time click and 10.9 bits/s for manual click. Eye tracking with manual click provides the best tradeoff between speed and accuracy, and was preferred by 50% of participants. Mouse and stylus had IPs of 4.7 and 4.2 bits/s, respectively. However, their low error rate of 5% makes these techniques more suitable for refined target selection.
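The index-of-performance (IP) values reported here come from Fitts' law: an index of difficulty (ID) in bits divided by the mean movement time. A sketch using the common Shannon formulation (the paper additionally applies a Welford correction; this is the textbook form):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation: ID = log2(D / W + 1), in bits."""
    return math.log2(distance / width + 1.0)

def index_of_performance(distance, width, movement_time_s):
    """Throughput in bits/s: ID divided by mean movement time (seconds)."""
    return index_of_difficulty(distance, width) / movement_time_s
```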
Article
Full-text available
Eye typing provides a means of communication that is especially useful for people with disabilities. However, most related research addresses technical issues in eye typing systems, and largely ignores design issues. This paper reports experiments studying the impact of auditory and visual feedback on user performance and experience. Results show that feedback impacts typing speed, accuracy, gaze behavior, and subjective experience. Also, the feedback should be matched with the dwell time. Short dwell times require simplified feedback to support the typing rhythm, whereas long dwell times allow extra information on the eye typing process. Both short and long dwell times benefit from combined visual and auditory feedback. Six guidelines for designing feedback for gaze-based text entry are provided.
Article
Full-text available
It is widely assumed that 5 participants suffice for usability testing. In this study, 60 users were tested and random sets of 5 or more were sampled from the whole, to demonstrate the risks of using only 5 participants and the benefits of using more. Some of the randomly selected sets of 5 participants found 99% of the problems; other sets found only 55%. With 10 users, the lowest percentage of problems revealed by any one set was increased to 80%, and with 20 users, to 95%.
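The "5 participants suffice" rule of thumb rests on a simple binomial discovery model: if each user independently encounters a given problem with probability p, a test with n users reveals an expected share of 1 - (1 - p)^n of all problems. A sketch (p = 0.31 is Nielsen's often-quoted average, not a figure from this study):

```python
def problems_found(p, n):
    """Expected share of usability problems found by n independent users."""
    return 1.0 - (1.0 - p) ** n

# With p = 0.31: 5 users find ~84% on average, yet the study above
# shows individual 5-user samples ranged from 55% to 99%.
```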
Article
Due to a rising number of product variants and an increase in complexity, manual assembly tasks lead to increasing physical and mental strain on employees. In order to maintain their health, an individual strain-oriented employee scheduling is necessary. In the conducted study, the individual physical and mental strain during manual assembly tasks is determined by using smart sensors and questionnaires. This paper presents the structure and process of the study as well as the first results of the applied questionnaires regarding the differences between the two levels of workload and concerning the validity to capture subjective strain. The performance requirements assembly competence and chronic stress were identified as predictors for perceived subjective physical and mental strain, queried by NASA-RTLX and Borg.
Article
Limited information is available regarding the effective use of workplace head-worn displays (HWD), especially the choices of HWD types and user interface (UI) designs. We explored how different HWD types and UI designs affect perceived workload, usability, visual discomfort, and job performance during a simulated warehouse job involving order picking and part assembly. Sixteen gender-balanced participants completed the simulated job in all combinations of two HWD types (binocular vs. monocular) and four UIs, the latter of which manipulated information mode (text- vs. graphic-based) and information availability (always-on vs. on-demand); a baseline condition (paper pick list) was also completed. Job performance, workload, and usability were more affected by UI designs than by HWD type. For example, the graphic-based UI reduced job completion time and number of errors by ∼13% and ∼59%, respectively. Participants had no strong preference for either of the HWD types, suggesting that the physical HWD designs tested are suboptimal.
Conference Paper
Manual work is a cornerstone of manufacturing. This holds true today, as well as for the future factories of the Industry 4.0 era. Use cases for manual work are small series or highly customized products. Assistance systems that support workers in dealing with such diverse assembly processes are being developed and evaluated. This paper presents an evaluation of an Augmented Reality-based assistance system integrated into a manual workstation. This system is compared with video-based assistance regarding performance, user acceptance, and mental workload. It showed a significantly reduced number of errors and scored better regarding time and mental workload.
Article
New portable and noninvasive eye-trackers allow the creation of robust virtual keyboards that aim to improve the life of disabled people who are unable to communicate. This paper presents a novel multimodal virtual keyboard and evaluates the performance changes that occur with the use of different modalities. The virtual keyboard is based on a menu selection with eight main commands that allow us to spell 30 different characters and correct errors with a delete button. The system has been evaluated with 18 adult participants in three conditions corresponding to three modalities: direct selection using a mouse, using the eye-tracker to point at the desired command and a switch to select it, and using only the eye-tracker for command selection. The performance of the proposed virtual keyboard was evaluated by the speed and information transfer rate (ITR) at both the command and application levels. The average speed across subjects was 18.43 letters/min with the mouse only, 15.26 letters/min with the eye-tracker and the switch, and 9.30 letters/min with only the eye-tracker. The latter provided an ITR of 44.96 and 57.46 bits/min at the letter and command levels, respectively. The results show to what extent a drop in performance can occur when switching between several modalities. While the speed decreases when controlling the virtual keyboard with the eye-tracker only, the system remains functional for severely disabled people who have their gaze as one of their only means of communication.
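ITR figures of this kind are conventionally computed with the Wolpaw formula, which combines the number of selectable commands, selection accuracy, and selection rate; a sketch of that standard formula (the paper may use a slightly different variant):

```python
import math

def itr_bits_per_min(n_commands, accuracy, selections_per_min):
    """Wolpaw ITR: bits per selection times selection rate.
    accuracy is the probability of a correct selection (0 < accuracy <= 1)."""
    n, p = n_commands, accuracy
    bits = math.log2(n)
    if p < 1.0:  # entropy penalty for errors; vanishes at p = 1
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min
```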
Article
Gaze interaction, as understood in this book, provides a means to exploit information from eye gaze behaviour during human-technology interaction. Gaze can either be used as an explicit control method that enables the user to point at and select items, or information from the user's natural gaze behaviour can be exploited subtly in the background as an additional input channel without interfering with normal viewing. This chapter provides a brief introduction to the potential for applied gaze tracking, with special emphasis on its application in assistive technology. It introduces common terms and offers a concise summary of previous research and applications of eye tracking.
Article
The “Midas Touch” problem has long been a difficult problem existing in gesture-based interaction. This paper proposes a visual attention-based method to address this problem from the perspective of cognitive psychology. There are three main contributions in this paper: (1) a visual attention-based parallel perception model is constructed by combining top-down and bottom-up attention, (2) a framework is proposed for dynamic gesture spotting and recognition simultaneously, and (3) a gesture toolkit is created to facilitate gesture design and development. Experimental results show that the proposed method has a good performance for both isolated and continuous gesture recognition tasks. Finally, we highlight the implications of this work for the design and development of all gesture-based applications.
Conference Paper
This study evaluates the effectiveness of an AR-based context-aware assembly support system with the proposed AR visualization modes for object assembly. Although many AR-based assembly support systems have been proposed, few keep track of the assembly status in real time and automatically recognize error and completion states at each step. Naturally, the effectiveness of such context-aware systems remains unexplored. Our test-bed system displays guidance information and error detection information corresponding to the recognized assembly status in the context of building block (LEGO) assembly. A user wearing a head-mounted display (HMD) can intuitively build a building block structure on a table by visually confirming correct and incorrect blocks and locating where to attach new blocks. We proposed two AR visualization modes: one displays guidance information directly overlaid on the physical model, and the other renders guidance information on a virtual model adjacent to the real model. An evaluation was conducted to comparatively assess these AR visualization modes as well as determine the effectiveness of context-aware error detection. Our experimental results indicate the visualization mode that shows the target status next to the real objects of concern outperforms the traditional direct overlay under moderate registration accuracy and marker-based tracking.
Article
This paper investigates whether it is feasible to interact with the small screen of a smartphone using eye movements only. Two of the most common gaze-based selection strategies, dwell time selections and gaze gestures, are compared in a target selection experiment. Finger strokes and accelerometer-based interaction, i.e. tilting, are also considered. In an experiment with 11 subjects we found gaze interaction to have lower performance than touch interaction, but comparable to accelerometer (i.e. tilt) interaction in error rate and completion time. Gaze gestures had a lower error rate and were faster than dwell selections by gaze, especially for small targets, suggesting that this method may be the best option for hands-free gaze control of smartphones.
Article
Although eye typing (typing on an on-screen keyboard via one's eyes as they are tracked by an eye tracker) has been studied for more than three decades now, we still know relatively little about it from the users' point of view. Standard metrics such as words per minute and keystrokes per character yield information only about the effectiveness of the technology and the interaction techniques developed for eye typing. We conducted an extensive study with almost five hours of eye typing per participant and report on extended qualitative and quantitative analysis of the relationship of dwell time, text entry rate, errors made, and workload experienced by the participants. The analysis method is comprehensive and stresses the need to consider different metrics in unison. The results highlight the importance of catering for individual differences and lead to suggestions for improvements in the interface.
Article
Eye gaze interaction can provide a convenient and natural addition to user-computer dialogues. We have previously reported on our interaction techniques using eye gaze [10]. While our techniques seemed useful in demonstration, we now investigate their strengths and weaknesses in a controlled setting. In this paper, we present two experiments that compare an interaction technique we developed for object selection based on where a person is looking with the most commonly used selection method using a mouse. We find that our eye gaze interaction technique is faster than selection with a mouse. The results show that our algorithm, which makes use of knowledge about how the eyes behave, preserves the natural quickness of the eye. Eye gaze interaction is a reasonable addition to computer interaction and is convenient in situations where it is important to use the hands for other tasks. It is particularly beneficial for the larger screen workspaces and virtual environments of the future, and it will become increasingly practical as eye tracker technology matures.
Conference Paper
NASA-TLX is a multi-dimensional scale designed to obtain workload estimates from one or more operators while they are performing a task or immediately afterwards. The years of research that preceded subscale selection and the weighted averaging approach resulted in a tool that has proven to be reasonably easy to use and reliably sensitive to experimentally important manipulations over the past 20 years. Its use has spread far beyond its original application (aviation), focus (crew complement), and language (English). This survey of 550 studies in which NASA-TLX was used or reviewed was undertaken to provide a resource for a new generation of users. The goal was to summarize the environments in which it has been applied, the types of activities the raters performed, other variables that were measured that did (or did not) covary, methodological issues, and lessons learned
Article
Since humans direct their visual attention by means of eye movements, a device which monitors eye movements should be a natural "pick" device for selecting objects visually present on a monitor. The results from an experimental investigation of an eye tracker as a computer input device are presented. Three different methods were used to select the object looked at: a button press, prolonged fixation ("dwell"), and an on-screen select button. The results show that an eye tracker can be used as a fast selection device provided that the target size is not too small. If the targets are small, speed declines and errors increase rapidly.
Conference Paper
The analysis of cognitive processes during human-machine and human-human interaction requires various tracking technologies. The human gaze is a very important cue for gathering information concerning the user's intentions, current mental state, etc. To obtain this data, a framework consisting of a highly accurate head-mounted gaze tracker combined with a low-latency head tracking method was developed. Its integration into various experimental environments calls for an easy-to-use calibration method for multiple working areas as well as the implementation of numerous interfaces. Therefore, a calibration method based on simply looking at known fixation points was integrated. Also, first results of a brief user study using the proposed framework are presented.
Conference Paper
In seeking hitherto-unused methods by which users and computers can communicate, we investigate the usefulness of eye movements as a fast and convenient auxiliary user-to-computer communication mode. The barrier to exploiting this medium has not been eye-tracking technology but the study of interaction techniques that incorporate eye movements into the user-computer dialogue in a natural and unobtrusive way. This paper discusses some of the human factors and technical considerations that arise in trying to use eye movements as an input medium, describes our approach and the first eye movement-based interaction techniques that we have devised and implemented in our laboratory, and reports our experiences and observations on them.
Article
Augmented reality (AR) for assembly processes is a new kind of computer support for a traditional industrial domain. This new application of AR technology is called ARsembly. The intention of this article is to describe a typical scenario for assembly and service personnel and how they might be supported by AR. For this purpose, tasks with different degrees of difficulty were selected from an authentic assembly process. In addition, 2 other kinds of assembly support media (a paper manual and a tutorial by an expert) were examined in order to compare them with ARsembly. The results showed that the assembly times varied according to the different support conditions. AR support proved to be more suitable for difficult tasks than the paper manual, whereas for easier tasks the use of a paper manual did not differ significantly from AR support. Tasks done under the guidance of an expert were completed most rapidly. Some of the information obtained in this investigation also indicated important considerations for improving future ARsembly applications.
Article
The usability of an eye‐gaze input system to aid interaction with computers for older computer users was investigated. The eye‐gaze input system was developed using an eye‐tracking system. An experiment using the developed eye‐gaze input system was conducted while systematically manipulating experimental conditions such as the moving distance, size of a target, and direction of movement in a pointing task. The usability of the eye‐gaze input was compared among three age groups (young, middle‐aged, and older adults) and with that of a traditional PC mouse. The eye‐gaze input system led to a faster pointing time as compared with mouse input, especially for older adults. This result demonstrates that an eye‐gaze input system may be able to compensate for the declined motor functions of older adults when using mouse input.
Chapter
The results of a multi-year research program to identify the factors associated with variations in subjective workload within and between different types of tasks are reviewed. Subjective evaluations of 10 workload-related factors were obtained from 16 different experiments. The experimental tasks included simple cognitive and manual control tasks, complex laboratory and supervisory control tasks, and aircraft simulation. Task-, behavior-, and subject-related correlates of subjective workload experiences varied as a function of difficulty manipulations within experiments, different sources of workload between experiments, and individual differences in workload definition. A multi-dimensional rating scale is proposed in which information about the magnitude and sources of six workload-related factors are combined to derive a sensitive and reliable estimate of workload.
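The combination rule that became the NASA-TLX weights each of the six subscale ratings by the number of pairwise comparisons (out of 15) in which the rater judged that factor the greater contributor to workload; a minimal sketch of the weighted score:

```python
def nasa_tlx(ratings, weights):
    """ratings: six subscale ratings (0-100); weights: wins in the 15
    pairwise comparisons (sum to 15). Returns the weighted workload score."""
    assert len(ratings) == len(weights) == 6 and sum(weights) == 15
    return sum(r * w for r, w in zip(ratings, weights)) / 15.0

# The 'raw' TLX used in several studies above drops the weighting
# and simply averages the six subscale ratings.
```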
Conference Paper
We developed a pointer in 3D virtual space, using an eye-tracking system as a sensor. The eye mark pointer is installed in a virtual environment system which provides stereoscopic vision with an immersive projection display. The circular-polarization stereoscopic vision enables us to use the eye-tracking system in the immersive projection display. The eye-tracking system obtains gaze directions relative to the head, so determining the absolute position requires compensation of the user's head motion with a head tracker. We then compare the eye mark pointer with a joystick in an experiment with the virtual environment system. The experimental result indicates that pointing with the eye mark pointer is 9.8 times quicker than with the joystick, and suggests that the eye mark pointer is suitable for pointing at targets in the virtual environment.
Majaranta, P., Räihä, K.-J., Hyrskykari, A., & Špakov, O. (2019). Eye movements and human-computer interaction. In C. Klein & U. Ettinger (Eds.), Eye movement research: An introduction to its scientific foundations and applications (pp. 971-1015). Springer International Publishing. https://doi.org/10.1007/978-3-030-20085-5_23
Sarodnick, F., & Braun, H. Methoden der Usability Evaluation [Methods of usability evaluation].