Figure 1. (a) Gaze tracking provided by the glasses. (b) Gaze tracking provided by the user-facing camera in the watch.

Source publication
Conference Paper
Full-text available
Smartwatches are widely available and increasingly adopted by consumers. The most common way of interacting with smartwatches is either touching a screen or pressing buttons on the sides. However, such techniques require using both hands. We propose glance awareness and active gaze interaction as alternative techniques to interact with smartwatches...

Contexts in source publication

Context 1
... envision two scenarios that provide gaze-tracking capability in watches. Firstly, the user would be wearing gaze-tracking capable smart glasses that are wirelessly connected to the watch (Figure 1a). Secondly, the watch would have a camera facing the user to track the eyes (Figure 1b). ...
Context 2
... the user would be wearing gaze-tracking capable smart glasses that are wirelessly connected to the watch (Figure 1a). Secondly, the watch would have a camera facing the user to track the eyes (Figure 1b). The second method could be usable when high gaze-tracking accuracy is not required. ...

Similar publications

Conference Paper
Full-text available
Capacitive touchscreens have changed the way in which people interact with computational devices. In fact, direct touch input on screens is immediately understandable and appealing to both novice and advanced users and, more importantly, it leverages people's natural ability to use multiple fingers for input gestures. However, currently off-the-she...
Conference Paper
Full-text available
Touch and gesture input have become popular for display interaction. While applications usually focus on one particular input technology, we set out to adjust the interaction modality based on the proximity of users to the screen. Therefore, we built a system which combines technology-transparent interaction spaces across 4 interaction zones: touch...
Article
Full-text available
We investigated whether finger pointing toward picture locations can be used as an external cognitive control tool to guide attention and compensate for the immature cognitive control functions in children compared with young adults. Item and source memory performance was compared for picture‐location pairs that were either semantically congruent (...
Article
Full-text available
The objective of the current study was to examine the effects of display curvature on smart watch touch interaction. A total of 36 younger individuals with a mean (SD) age of 22.2 (3.3) years were divided into three groups according to the length of their dominant hand. Each hand-size group comprised 12 individuals. Two smart watches were us...

Citations

... At first, the gaze interaction method was mostly used for special purposes, like typing tools for the disabled [72], but through active research and affordable new trackers, gaze-based interfaces can now be included in many new devices. It has been demonstrated that eye tracking can enhance interaction with, e.g., mobile phones [73], tablets [74], smart watches [75], smart glasses [76], and public displays [77]. Gaze can also be used to directly control moving devices, like drones (e.g., [78,79]). ...
Article
Full-text available
When designing extended reality (XR) applications, it is important to consider multimodal interaction techniques, which employ several human senses simultaneously. Multimodal interaction can transform how people communicate remotely, practice for tasks, entertain themselves, process information visualizations, and make decisions based on the provided information. This scoping review summarized recent advances in multimodal interaction technologies for head-mounted display-based (HMD) XR systems. Our purpose was to provide a succinct, yet clear, insightful, and structured overview of emerging, underused multimodal technologies beyond standard video and audio for XR interaction, and to find research gaps. The review aimed to help XR practitioners to apply multimodal interaction techniques and interaction researchers to direct future efforts towards relevant issues on multimodal XR. We conclude with our perspective on promising research avenues for multimodal interaction technologies.
... It is postulated to be the best proxy for attention or intention [2]. Nowadays, eye tracking has matured and become an important research topic in computer vision and pattern recognition, because human gaze positions and movements are essential information for many applications, ranging from diagnostic to interactive applications [3][4][5][6][7]. Eye tracking equipment is a key requirement of gaze-based applications, either worn on the body (head-mounted) [8] or strategically located in the environment [9]. ...
Conference Paper
Research on gaze-controlled interactive applications has increased dramatically as human-computer interaction is no longer constrained to traditional input devices such as joysticks, keyboards, touch screens, and so on. In this paper, we present GazeCamera, a novel framework to control a camera mounted on an unmanned aerial vehicle with two degrees of freedom through eye gaze. In the proposed work, eye gaze is the sole control command for maneuvering the camera in real space according to the user's eye fixation. The results of a pilot user study demonstrate that GazeCamera could be an effortless and intuitive alternative to other conventional interfaces.
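As a rough illustration of the kind of mapping such a framework needs, the sketch below converts a normalized fixation point on the video feed into relative pan/tilt commands for a two-degree-of-freedom camera. This is not the GazeCamera implementation; the function name, field-of-view values, and gains are illustrative assumptions.

```python
# Hypothetical sketch: map a normalized gaze fixation point to pan/tilt
# commands for a 2-DOF gimbal camera. Field of view and gains are assumed.

def gaze_to_pan_tilt(gaze_x, gaze_y, fov_h_deg=90.0, fov_v_deg=60.0):
    """gaze_x, gaze_y are normalized fixation coordinates in [0, 1] on the
    video feed; returns relative pan/tilt angles in degrees."""
    pan = (gaze_x - 0.5) * fov_h_deg   # fixation right of centre -> pan right
    tilt = (0.5 - gaze_y) * fov_v_deg  # fixation above centre -> tilt up
    return pan, tilt

# Example: fixation in the upper-right quadrant of the feed
print(gaze_to_pan_tilt(0.8, 0.2))  # -> (27.0, 18.0)
```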
... The simplest form of eye-awareness could be that an application notices the presence of eyes, without the knowledge of the actual gaze direction or target of the gaze. For example, if a cell phone's front camera sees the user's eyes, it can deduce that the user is probably looking at it, even if the cell phone does not know the exact location of the gaze on the screen (Akkil et al., 2015). If information on the gaze direction and scan path (Bischof et al., this volume; Foulsham, this volume) is available, the system "knows" much more about the user's interests and cognitive processes. ...
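A minimal sketch of this kind of eye-awareness, assuming a standard OpenCV install and its stock Haar eye cascade: the device only checks whether eyes are visible in the front camera, without estimating where the gaze lands. The detection thresholds and camera index are illustrative assumptions.

```python
# Glance awareness as eye presence only: no gaze direction is estimated.
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def eyes_visible(frame) -> bool:
    """Return True if at least one eye is detected in the camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(eyes) > 0

cap = cv2.VideoCapture(0)          # user-facing camera (assumed index)
ok, frame = cap.read()
if ok and eyes_visible(frame):
    print("User is probably looking at the device")
cap.release()
```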
... Or, the gestures can be off-screen (Isokoski, 2000), which frees the screen for other purposes. Simple gestures that start from the screen and then go off-screen and back by crossing one of the display borders have been used for mode change during gaze-based gaming (Istance, Bates, Hyrskykari, & Vickers, 2008), and controlling a mobile phone (Kangas et al., 2014b) or a smart wrist watch (Akkil et al., 2015). Figure 4 illustrates an on-screen gesture implementation in a game. ...
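A hypothetical sketch of such an off-screen gesture detector: it reports which display border the gaze crossed once the gaze returns on screen. The screen size and the class and function names are illustrative assumptions, not the implementation used in the cited work.

```python
# Detect a gesture that leaves the screen across one border and comes back.

SCREEN_W, SCREEN_H = 400, 400  # display size in pixels (assumed)

def exit_border(x, y):
    """Which border the gaze point lies beyond, or None if on screen."""
    if x < 0:
        return "left"
    if x >= SCREEN_W:
        return "right"
    if y < 0:
        return "top"
    if y >= SCREEN_H:
        return "bottom"
    return None

class OffScreenGesture:
    def __init__(self):
        self.last_exit = None

    def update(self, x, y):
        """Feed gaze samples; returns the border name when an
        off-screen-and-back crossing completes, else None."""
        border = exit_border(x, y)
        if border is not None:            # gaze is currently off screen
            self.last_exit = border
            return None
        if self.last_exit is not None:    # gaze just returned on screen
            gesture, self.last_exit = self.last_exit, None
            return gesture
        return None
```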
Chapter
Gaze provides an attractive input channel for human-computer interaction because of its capability to convey the focus of interest. Gaze input allows people with severe disabilities to communicate with eyes alone. The advances in eye tracking technology and its reduced cost make it an increasingly interesting option to be added to the conventional modalities in everyday applications. For example, gaze-aware games can enhance the gaming experience by providing timely effects at the right location, knowing exactly where the player is focusing at each moment. However, using the eyes both as a viewing organ and as a control method poses some challenges. In this chapter, we will give an introduction to using gaze as an input method. We will show how to use gaze as an explicit control method and how to exploit it subtly in the background as an additional information channel. We will summarize research on the application of different types of eye movements in interaction and present research-based design guidelines for coping with typical challenges. We will also discuss the role of gaze in multimodal, pervasive and mobile interfaces and contemplate ideas for future developments.
... Introducing fixations with a delay (dwell times of about 1000 ms [12]) removes this limitation, at the cost of an interruption of the interaction process that is unpleasant for the user. Eye tracking offers only three other explicit interactions: gaze gestures (where the user must "draw" shapes with the eyes), pursuit, which consists of following a moving target [1,2], and finally eye blinks [5]. All these limitations make gaze a viable candidate as a primary interaction modality only for the purpose of accessibility for people with disabilities. ...
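A minimal sketch of dwell-time selection as described above, assuming a stream of gaze samples already mapped to interface targets. The ~1000 ms threshold follows the figure quoted in the text, while the class name and timing source are illustrative assumptions.

```python
# Activate a target once the gaze has stayed on it for about 1000 ms.
import time

DWELL_MS = 1000

class DwellSelector:
    def __init__(self):
        self.current = None      # target currently fixated
        self.since = None        # timestamp when fixation on it began (ms)

    def update(self, target_id):
        """Feed the target under the gaze (or None); returns the target id
        once it has been fixated continuously for DWELL_MS."""
        now = time.monotonic() * 1000
        if target_id != self.current:
            self.current, self.since = target_id, now
            return None
        if target_id is not None and now - self.since >= DWELL_MS:
            self.since = now     # restart the timer to avoid repeated triggers
            return target_id
        return None
```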
Conference Paper
Full-text available
Eye tracking is a technique for capturing the position of a user's gaze. Although long confined to the analysis of gaze behaviour, it is now widely used as an interaction technique. This use, however, requires calibration for each specific user. Our goal is to design interactions that complement touch on public touch surfaces, a context that requires immediate usability, ideally without calibration. In this paper, we present a state of the art on this issue. Keywords: eye tracking; multimodality; interactive public displays; gaze interaction.
... Thus, humans behave differently in different situations, so detecting behaviour during harassment will help to alert the concerned authorities. Akkil et al. [5] propose glance awareness and active gaze interaction as alternatives to interacting with smartwatches through touch screens or buttons. They describe an experiment demonstrating user preferences for visual and haptic feedback on a "glance" at the wrist watch. ...
Article
The paper describes a prototype of a wearable device enhanced with GSM technology for protecting women during physical harassment or robbery. The device has a sensing unit in the form of a watch and is responsible for monitoring the wearer's heartbeat. It is equipped with a fingerprint scanner for putting on and removing the device, and includes a real-time clock (RTC) to display the time. A heartbeat sensor monitors the woman's heart rate. If harassment takes place, the heart rate naturally increases and the wearable device sends an SMS to the designated people or the police, whose numbers are stored in the system. Messages are sent only after a preset delay, so that false alarms during exercise or fast walking can be avoided.
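A hypothetical sketch of the alert logic described in the abstract: an SMS is sent only if the heart rate stays elevated beyond a preset delay, so exercise or fast walking does not trigger a false alarm. The threshold, the delay, and the send_sms stub are assumptions for illustration, not details from the paper.

```python
# Delay-gated heart-rate alert: only a sustained elevation triggers an SMS.
import time

HR_THRESHOLD = 120        # beats per minute (assumed)
ALERT_DELAY_S = 60        # preset delay before alerting (assumed)

def send_sms(numbers, text):          # placeholder for the GSM module
    print("SMS to", numbers, ":", text)

def monitor(read_heart_rate, contacts):
    elevated_since = None
    while True:
        bpm = read_heart_rate()
        if bpm > HR_THRESHOLD:
            if elevated_since is None:
                elevated_since = time.monotonic()
            elif time.monotonic() - elevated_since >= ALERT_DELAY_S:
                send_sms(contacts, f"Possible emergency, heart rate {bpm} bpm")
                elevated_since = None          # re-arm after alerting
        else:
            elevated_since = None              # elevation was only transient
        time.sleep(1)
```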
... Some of the earliest eye typing systems required the user to glance at a few defined directions in specific order to compose a character [Rosen and Durfee 1978]. Gaze gestures have also been used to control a computer [Porta and Turina 2008], play games [Istance et al. 2010], control mobile phones [Kangas et al. 2014] and smart watches [Akkil et al. 2015; Hansen et al. 2016]. ...
Conference Paper
In gesture-based user interfaces, the effort needed for learning the gestures is a persistent problem that hinders their adoption in products. However, people's natural gaze paths form shapes during viewing. For example, reading creates a recognizable pattern. These gaze patterns can be utilized in human-technology interaction. We experimented with the idea of inducing specific gaze patterns by static drawings. The drawings included visual hints to guide the gaze. By looking at the parts of the drawing, the user's gaze composed a gaze gesture that activated a command. We organized a proof-of-concept trial to see how intuitive the idea is. Most participants understood the idea without specific instructions already on the first round of trials. We argue that with careful design the form of objects and especially their decorative details can serve as a gaze-based user interface in smart homes and other environments of ubiquitous computing.
... In haptic applications involving tactile input devices such as touch screens, eye gaze has been used to improve object acquisition and manipulation, by completely replacing the hand motion [19], [20] or integrating with the hand input [21] to reach targets. Other studies in tactile interaction have investigated user performance while combining gaze input with tactile output [22], [23]. In kinesthetic interaction, gaze modality has been developed as an auxiliary function for solving technical and safety issues in tasks such as robotic surgery [24], [25], [26]. ...
... Kangas et al. [22] have shown that gaze interaction with vibrotactile feedback increases the efficiency of interaction. Akkil et al. [23] have noted that vibrotactile feedback is a clearer and more noticeable modality for gaze events than visual feedback in small-screen devices such as smartwatches. ...
... Human gaze has a long history as a means for hands-free interaction with ubiquitous computing systems and has, more recently, also been shown to be a rich source of information about the user [13,18,36]. Prior work has demonstrated that gaze can be used for fast, accurate, and natural interaction with both ambient [31,53,63,65,73] and body-worn displays, including smartwatches [3,21]. Eye movements are closely linked to everyday human behaviour and cognition and can therefore be used for computational user modelling, such as for eye-based recognition of daily activities [14,15], visual memory recall [10], visual search targets [49,50,70], and intents [7], or personality traits [25] -including analyses over long periods of time for life-logging applications [17,52]. ...
Article
Analysis of everyday human gaze behaviour has significant potential for ubiquitous computing, as evidenced by a large body of work in gaze-based human-computer interaction, attentive user interfaces, and eye-based user modelling. However, current mobile eye trackers are still obtrusive, which not only makes them uncomfortable to wear and socially unacceptable in daily life, but also prevents them from being widely adopted in the social and behavioural sciences. To address these challenges we present InvisibleEye, a novel approach for mobile eye tracking that uses millimetre-size RGB cameras that can be fully embedded into normal glasses frames. To compensate for the cameras’ low image resolution of only a few pixels, our approach uses multiple cameras to capture different views of the eye, as well as learning-based gaze estimation to directly regress from eye images to gaze directions. We prototypically implement our system and characterise its performance on three large-scale, increasingly realistic, and thus challenging datasets: 1) eye images synthesised using a recent computer graphics eye region model, 2) real eye images recorded of 17 participants under controlled lighting, and 3) eye images recorded of four participants over the course of four recording sessions in a mobile setting. We show that InvisibleEye achieves a top person-specific gaze estimation accuracy of 1.79° using four cameras with a resolution of only 5 × 5 pixels. Our evaluations not only demonstrate the feasibility of this novel approach but, more importantly, underline its significant potential for finally realising the vision of invisible mobile eye tracking and pervasive attentive user interfaces.
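As a rough sketch of the learning-based idea (not the authors' architecture), the snippet below flattens several tiny eye images into one feature vector and regresses directly to a two-dimensional gaze direction. The network size, data shapes, and the use of scikit-learn are illustrative assumptions, and random arrays stand in for real calibration data.

```python
# Regress from low-resolution multi-camera eye images to gaze direction.
import numpy as np
from sklearn.neural_network import MLPRegressor

N_CAMERAS, RES = 4, 5                      # four 5x5-pixel views of the eye
n_samples = 1000                           # calibration samples (assumed)

# X: flattened multi-camera eye images, y: gaze directions (yaw, pitch) in deg
X = np.random.rand(n_samples, N_CAMERAS * RES * RES)   # stand-in for real data
y = np.random.uniform(-20, 20, size=(n_samples, 2))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
model.fit(X, y)

gaze_deg = model.predict(X[:1])            # estimate gaze for a new sample
print(gaze_deg)
```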
... Smartwatches such as Apple Watch and Samsung Gear have the potential to provide unobtrusive and discreet access to phone message notifications, applications, and incoming calls [22]. With minimal user input and micro interactions, such as touching the screen, pressing the side buttons/dial or using gestures, users can be hands-free and attain their required information in seconds [2] [36]. An example is provided by Akkil et al. [2], where they use glance awareness and gaze gestures (looking left, right and up) for selection of items on a smartwatch. ...
... With minimal user input and micro interactions, such as touching the screen, pressing the side buttons/dial or using gestures, users can be hands-free and attain their required information in seconds [2] [36]. An example is provided by Akkil et al. [2], where they use glance awareness and gaze gestures (looking left, right and up) for selection of items on a smartwatch. Their experiment, conducted with twelve participants, revealed that the gaze-based interaction was practical for simple tasks and haptics was the preferred feedback modality [2]. ...
... An example is provided by Akkil et al. [2], where they use glance awareness and gaze gestures (looking left, right and up) for selection of items on a smartwatch. Their experiment, conducted with twelve participants, revealed that the gaze-based interaction was practical for simple tasks and haptics was the preferred feedback modality [2]. In the area of optimizing text entry and improving users' performance to achieve high entry speeds, Oney et al. [24], Chen et al. [9], and Komninos & Dunlop [16] prototyped a QWERTY keyboard and explored zooming, swiping and next-word predictions to enable faster text entry. ...
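A hypothetical sketch of how such left/right/up gaze gestures could be classified from the displacement of a gaze stroke relative to the watch face; the amplitude threshold and function name are illustrative assumptions rather than details of the cited technique.

```python
# Classify a gaze stroke on a watch face as 'left', 'right' or 'up'.

MIN_STROKE = 0.3   # minimum normalized displacement to count as a gesture

def classify_gesture(start, end):
    """start/end are (x, y) gaze points normalized to the watch face."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if abs(dx) < MIN_STROKE and abs(dy) < MIN_STROKE:
        return None                       # stroke too small to be deliberate
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy < 0 else None       # downward strokes are ignored here

print(classify_gesture((0.5, 0.5), (0.1, 0.5)))  # -> 'left'
```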
Conference Paper
Full-text available
Smartwatches are growing in usage, yet they bring the additional challenge of regulating their use during academic tests. However, it is unclear how effective they actually are at allowing students to cheat. We conducted an experiment that examines the use of smartwatches for cheating on Multiple-Choice Questions (MCQ) and Short Answers (SA), with either pictures or text shown on the watch to aid students. Our results indicate that smartwatches are neither efficient nor highly usable for cheating. However, students are able to score higher on Multiple-Choice Questions compared to Short Answers. We use the cheating paradigm as an example to understand the perceived usability and appropriation of smartwatches in an academic setting. We provide suggestions that help to deter cheating in an academic setting. Our study contributes to the research on academic integrity and the growing demand for wearable technologies.