Conference Paper

EyePass - eye-stroke authentication for public terminals


Abstract

Authentication on public terminals, e.g. ATMs and ticket vending machines, is common practice. Due to the weaknesses of the traditional authentication approaches PIN and password, other people may gain access to the authentication information and thus to the users' personal data. This is mainly due to the physical interaction with the terminals, which enables various manipulations of these devices. In this paper, we present EyePass, an authentication mechanism based on PassShapes and eye gestures that has been created to overcome these problems by eliminating the physical connection to the terminals. EyePass additionally assists users by providing easy-to-remember PassShapes instead of PINs or passwords. We present the concept, the prototype and the first evaluations performed. Additionally, future work on the evaluation is outlined and expected results are discussed.


... So far, there are already some works applying eye tracking techniques in user authentication. These works can be classified into biometric-based [3,14,12,13] and pattern-based [15,7,9,10]. The biometric-based methods authenticate a user based on the biometric information extracted from the user's eyes or eye movement characteristics. ...
... Users also have to keep their heads fixed after calibration. The other type [7,9,10] recognizes a user's eye movement trajectory that represents a specific command, and does not need a calibration process. Most of these eye tracking applications are proposed for devices with large screens. ...
... Besides, users do not need to remember the complex shapes but just the target object as a password. Considering that humans' eyes move in fast and straight saccades and thus cannot perform any curves or other non-linear shapes [10], we make the objects move in straight lines for eye tracking. In the following, we first introduce the basic authentication process and the architecture of our eye tracking authentication system. ...
Conference Paper
Full-text available
Traditional user authentication methods using passcode or finger movement on smartphones are vulnerable to shoulder surfing attack, smudge attack, and keylogger attack. These attacks are able to infer a passcode based on the information collection of user’s finger movement or tapping input. As an alternative user authentication approach, eye tracking can reduce the risk of suffering those attacks effectively because no hand input is required. However, most existing eye tracking techniques are designed for large screen devices. Many of them depend on special hardware like high resolution eye tracker and special process like calibration, which are not readily available for smartphone users. In this paper, we propose a new eye tracking method for user authentication on a smartphone. It utilizes the smartphone’s front camera to capture a user’s eye movement trajectories which are used as the input of user authentication. No special hardware or calibration process is needed. We develop a prototype and evaluate its effectiveness on an Android smartphone. We recruit a group of volunteers to participate in the user study. Our evaluation results show that the proposed eye tracking technique achieves very high accuracy in user authentication.
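The moving-target idea described in the excerpt above (objects moving in straight lines, matched against the recorded eye trajectory) can be illustrated with a simple correlation-based selector. This is a sketch of the general idea only, not the paper's implementation; the function names and the use of Pearson correlation as the matching criterion are assumptions:

```python
def pick_target(gaze, objects):
    """Match a gaze trajectory against candidate moving objects.

    `gaze` and each trajectory in `objects` are equal-length lists of
    (x, y) samples; the object whose motion correlates best with the
    gaze path (summed Pearson correlation over x and y) is selected.
    """
    def corr(u, v):
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        su = sum((a - mu) ** 2 for a in u) ** 0.5
        sv = sum((b - mv) ** 2 for b in v) ** 0.5
        # A constant coordinate carries no directional information.
        return cov / (su * sv) if su and sv else 0.0

    def score(traj):
        gx, gy = zip(*gaze)
        tx, ty = zip(*traj)
        return corr(gx, tx) + corr(gy, ty)

    return max(range(len(objects)), key=lambda i: score(objects[i]))
```

A gaze path drifting rightward would thus select the object that also moves rightward, regardless of a constant vertical offset between the two paths.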
... Researchers have explored a wide variety of eye movements that could be used for authentication. This includes fixations [93], gestures [33,34], and smooth pursuit eye movements [8,27,35,164]. There are two dimensions to consider in the use of gaze for explicit authentication: a) password type: legacy vs gaze-based password symbols, and b) used modalities: unimodal vs multimodal gaze-based authentication. ...
... Examples include GazeTouchPass [74] and GTmoPass [78], where gaze gestures constitute part of the password. Similarly, in EyePass [34] and another work by De Luca et al. [33], the password consists of a series of gaze gestures. In DyGazePass [126,127], the user's input is a series of smooth pursuit movements that are supported by cues in the form of 2D geometric objects. ...
Conference Paper
Full-text available
For the past 20 years, researchers have investigated the use of eye tracking in security applications. We present a holistic view on gaze-based security applications. In particular, we canvassed the literature and classify the utility of gaze in security applications into a) authentication, b) privacy protection, and c) gaze monitoring during security critical tasks. This allows us to chart several research directions, most importantly 1) conducting field studies of implicit and explicit gaze-based authentication due to recent advances in eye tracking, 2) research on gaze-based privacy protection and gaze monitoring in security critical tasks which are under-investigated yet very promising areas, and 3) understanding the privacy implications of pervasive eye tracking. We discuss the most promising opportunities and most pressing challenges of eye tracking for security that will shape research in gaze-based security applications for the next decade.
... Drawing passwords with eye gaze has been explored as an input method for authentication [31]. De Luca et al. [6,8,9] presented EyePIN, EyePass, and EyePassShapes, which relied on a gesture alphabet, i.e., they assigned specific eye gestures to each digit. Although these methods prohibit shoulder surfing, they were extremely slow (54 s per PIN [8]), rendering them impractical for real-world use. ...
... The interaction in the eye gesture-based methods of EyePass and EyePassShapes [6,9] is also multimodal. It requires pressing a key to confirm the start and end of each gesture shape. ...
Conference Paper
Full-text available
We present TouchGazePath, a multimodal method for entering personal identification numbers (PINs). Using a touch-sensitive display showing a virtual keypad, the user initiates input with a touch at any location, glances with their eye gaze on the keys bearing the PIN numbers, then terminates input by lifting their finger. TouchGazePath is not susceptible to security attacks, such as shoulder surfing, thermal attacks, or smudge attacks. In a user study with 18 participants, TouchGazePath was compared with the traditional Touch-Only method and the multimodal Touch+Gaze method, the latter using eye gaze for targeting and touch for selection. The average time to enter a PIN with TouchGazePath was 3.3 s. This was not as fast as Touch-Only (as expected), but was about twice as fast as Touch+Gaze. TouchGazePath was also more accurate than Touch+Gaze. TouchGazePath had high user ratings as a secure PIN input method and was the preferred PIN input method for 11 of 18 participants.
... Moreover, EyePassShapes was created taking into account the previously mentioned properties of public authentication systems. While in [6] the idea of EyePassShapes has been briefly discussed as work in progress, in this paper we will go from theory to practice. We will present a thorough and extensive evaluation of all aspects of the system: design, usability, memorability and security. ...
... EyePassShapes – first discussed theoretically in [6] – uses the stroke-based authentication tokens of PassShapes and combines them with the secure eye tracking approach of EyePIN. Fortunately, the strokes used for PassShapes perfectly fit the biological constraints of the human eye, which moves in saccades and cannot perform any non-linear movements. ...
Conference Paper
Authentication systems for public terminals, and thus public spaces, have to be fast, easy and secure. Security is of utmost importance since the public setting allows manifold attacks, from simple shoulder surfing to advanced manipulations of the terminals. In this work, we present EyePassShapes, an eye tracking authentication method that has been designed to meet these requirements. Instead of using standard eye tracking input methods that require precise and expensive eye trackers, EyePassShapes uses eye gestures. This input method works well with data about the relative eye movement, which is much easier to detect than the precise position of the user's gaze and works with cheaper hardware. Different evaluations of technical aspects, usability, security and memorability show that EyePassShapes can significantly increase security while being easy to use and fast at the same time.
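Gesture alphabets of this kind are typically built from coarse stroke directions recovered from relative eye movement. A minimal sketch of such a classifier, assuming an eight-direction alphabet and a pixel threshold to separate saccades from jitter (the labels and threshold are illustrative, not taken from the paper):

```python
import math

# Hypothetical stroke alphabet: eight compass directions.
STROKES = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def classify_stroke(dx, dy, min_len=40):
    """Map a relative gaze displacement (dx, dy) in pixels to a stroke.

    Returns None if the movement is too short to count as a saccade.
    Screen coordinates grow downward, so the sign of dy is flipped.
    """
    if math.hypot(dx, dy) < min_len:
        return None
    angle = math.degrees(math.atan2(-dy, dx)) % 360
    # Each stroke covers a 45-degree sector centred on its direction.
    index = int((angle + 22.5) // 45) % 8
    return STROKES[index]
```

A password is then a short sequence of such stroke symbols, matched against the stored sequence.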
... Some of these systems rely on specialised hardware at border agencies as a point of care or access. Sometimes these systems are abused (Ma et al. [10], Mlakar et al. [12], De Luca et al. [5]), and this leads to false-positive cases. Here we adopted the implementation of a dynamic state-space model for the prediction and reduction of false-positive cases. ...
Chapter
Full-text available
An eye authentication software or tool is a form of embedded module that includes an image data extraction front-end unit for determining the main concentric circle of the image of the eye as an integrating circle, and a pupil radius detection unit for detecting the integrated value of the integrating circle in stepwise order. This paper deals with this approach by using system control dynamics for physical systems to determine the predictive focus of the eye by detecting constant changes in pupil dilation and constriction. Much current research relates emotional responses to pupillary metrics of both constriction and dilation. We approach the current concept by using a contour integrating unit for integrating the captured image data extracted by the image data extraction unit with the eye circumference. N4SID is a subspace method for identifying dynamic systems that predict complex behaviour patterns; it is used here to address the problem of outlier inclusion as part of the image data acquisition. The system serves as a front-end engine for detecting and controlling pupil response from a mobile camera lens in real time. Initial results show that first-order N4SID perfectly predicts the centric coordinate of the pupil in both constriction and dilation across all five epochs used in the differential equation.
... The user then authenticates by recalling these eye movements and providing them as input. Examples of such systems include EyePass [21], Eye gesture blink password [22], and another work by De Luca et al. [23], where the password consists of a series of gaze gestures. Implicit Gaze-based Authentication refers to the use of eye movements to implicitly verify identity; it does not require the user to remember a secret, but it is based on inherent unconscious gaze behavior and can occur actively throughout a session [24][25][26]. ...
Article
Full-text available
Emerging Virtual Reality (VR) displays with embedded eye trackers are currently becoming a commodity hardware (e.g., HTC Vive Pro Eye). Eye-tracking data can be utilized for several purposes, including gaze monitoring, privacy protection, and user authentication/identification. Identifying users is an integral part of many applications due to security and privacy concerns. In this paper, we explore methods and eye-tracking features that can be used to identify users. Prior VR researchers explored machine learning on motion-based data (such as body motion, head tracking, eye tracking, and hand tracking data) to identify users. Such systems usually require an explicit VR task and many features to train the machine learning model for user identification. We propose a system to identify users utilizing minimal eye-gaze-based features without designing any identification-specific tasks. We collected gaze data from an educational VR application and tested our system with two machine learning (ML) models, random forest (RF) and k-nearest-neighbors (kNN), and two deep learning (DL) models: convolutional neural networks (CNN) and long short-term memory (LSTM). Our results show that ML and DL models could identify users with over 98% accuracy with only six simple eye-gaze features. We discuss our results, their implications on security and privacy, and the limitations of our work.
... Other works introduce gaze-based methods that track the user's gaze with cameras or specialized eye tracking sensors. These methods were shown to be more secure than, e.g., touch-based input [De Luca et al. 2008;Khamis et al. 2018]. In addition, gaze-based input allows hands-free authentication and interaction which is more hygienic. ...
... The analysis of eye movements has been combined with knowledge-based authentication procedures such as entering a password with the eye gaze [32,30,8,9,46,6], or with other behavioral biometrics such as keystroke dynamics [44] or physiological biometric methods such as iris scanning [29]. ...
... Stimuli used for viewer identification include viewing artificial stimuli [5,20,6,19,8,35,36,33,30], text documents [5,17,30], movies [22] or images [5,8]. Most approaches are designed to identify viewers on a specific stimulus, for example by applying graph matching techniques to the scanpaths produced on a specific face image [29], or even by including a secondary identification task such as entering a PIN or password with the eye gaze [25,23,9,10,34,7]. Approaches that can be applied to novel stimuli at test time extract different kinds of fixational and saccadic features, such as fixation durations [32,14] or saccade amplitudes [14,29,19,30], velocities [5,32,6,29,8,19,11,30] and accelerations [29,8,11,30], and either aggregate these over the whole scanpath [32,17,22,8,14], or compute the similarity of scanpaths by applying statistical tests to the distributions of the extracted features [16,30]. ...
Article
Full-text available
We study the problem of identifying viewers of arbitrary images based on their eye gaze. Psychological research has derived generative stochastic models of eye movements. In order to exploit this background knowledge within a discriminatively trained classification model, we derive Fisher kernels from different generative models of eye gaze. Experimentally, we find that the performance of the classifier strongly depends on the underlying generative model. Using an SVM with Fisher kernel improves the classification performance over the underlying generative model.
... Thirdly, the identification procedure can include a randomized challenge to which the user has to respond. For example, a user can be asked to look at specific positions on a screen [32,29,11,13,48,9]. Challenges prevent replay attacks at the cost of obtrusiveness, bypassing them requires a data generator that is able to generate the biometric feature and also respond to the challenge. ...
Conference Paper
We study involuntary micro-movements of the eye for biometric identification. While prior studies extract lower-frequency macro-movements from the output of video-based eye-tracking systems and engineer explicit features of these macro-movements, we develop a deep convolutional architecture that processes the raw eye-tracking signal. Compared to prior work, the network attains a lower error rate by one order of magnitude and is faster by two orders of magnitude: it identifies users accurately within seconds.
... It is perhaps the most commonly used factor [2]. Examples are passwords, PINs, and graphical passwords. Researchers also developed ways to authenticate using eye movements [8,12,13,16], mid-air gestures [25], and by recalling photographs [26]. Knowledge-based schemes allow changing passwords, and can be integrated into any system that accepts any kind of user input. ...
Conference Paper
Full-text available
As public displays continue to deliver increasingly private and personalized content, there is a need to ensure that only the legitimate users can access private information in sensitive contexts. While public displays can adopt similar authentication concepts like those used on public terminals (e.g., ATMs), authentication in public is subject to a number of risks. Namely, adversaries can uncover a user's password through (1) shoulder surfing, (2) thermal attacks, or (3) smudge attacks. To address this problem we propose GTmoPass, an authentication architecture that enables multi-factor user authentication on public displays. The first factor is a knowledge factor: we employ a shoulder-surfing resilient multimodal scheme that combines gaze and touch input for password entry. The second factor is a possession factor: users utilize their personal mobile devices, on which they enter the password. Credentials are securely transmitted to a server via Bluetooth beacons. We describe the implementation of GTmoPass and report on an evaluation of its usability and security, which shows that although authentication using GTmoPass is slightly slower than traditional methods, it protects against the three aforementioned threats.
... De Luca et al. [10] presented an authentication method based on eye gestures, which stemmed from the conjecture that complex shapes are easier to remember than long passwords or PINs. An eye gesture is performed by moving the gaze in specific ways, as if "drawing" patterns on the screen. ...
... Authentication methods applying eye-tracking technology have been investigated. De Luca et al. [15] presented an authentication mechanism based on eye gestures to eliminate physical contact with public terminals. It enabled users to invoke commands by moving their eyes in a pre-defined pattern. ...
... Unlike intrusion detection systems, ATM-like solutions require the user to explicitly perform certain eye tasks in order to authenticate. For instance, De Luca et al. [31] propose an authentication mechanism based on eye gestures, which derives from the idea that it is easier to remember complex shapes than long passwords or PINs. Gestures are obtained by moving the eyes in specific ways, thus "drawing" patterns on the screen. ...
... Kumar et al. [11] first implemented a gaze-based authentication system, EyePassword, where users gaze at the letters of their password on an on-screen keyboard. De Luca et al. [6, 5] have proposed eye-gesture methods for shoulder-surfing resistant authentication. Dunphy et al. [8] tested gaze control with PassFaces, a recognition-based graphical password system. ...
Conference Paper
Click-based graphical passwords have been proposed as alternatives to text-based passwords, despite being potentially vulnerable to shoulder-surfing, where an attacker can learn passwords by watching or recording users as they log in. Cued Gaze-Points (CGP) is a graphical password system which defends against such attacks by using eye-gaze password input, instead of mouse-clicks. A first user study revealed that CGP's unique use of eye tracking required special techniques to improve gaze precision. In this paper, we present two enhancements that we developed and tested: a nearest-neighbour gaze-point aggregation algorithm and a 1-point calibration before each password entry. We found that these enhancements made a substantial improvement to users' gaze accuracy and system usability.
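The nearest-neighbour aggregation step can be sketched as a greedy clustering of raw gaze samples into candidate gaze points. This is an illustrative reconstruction of the idea, not the authors' algorithm; the `radius` threshold and the choice of the largest cluster are assumptions:

```python
import math

def aggregate_gaze_points(samples, radius=30.0):
    """Greedy nearest-neighbour aggregation of raw gaze samples.

    Each (x, y) sample joins the nearest existing cluster whose
    centroid lies within `radius` pixels; otherwise it starts a new
    cluster. The centroid of the largest cluster is returned as the
    aggregated gaze point, discarding stray outlier samples.
    """
    clusters = []  # each cluster is [sum_x, sum_y, count]
    for x, y in samples:
        best, best_d = None, radius
        for c in clusters:
            cx, cy = c[0] / c[2], c[1] / c[2]
            d = math.hypot(x - cx, y - cy)
            if d <= best_d:
                best, best_d = c, d
        if best is None:
            clusters.append([x, y, 1])
        else:
            best[0] += x
            best[1] += y
            best[2] += 1
    biggest = max(clusters, key=lambda c: c[2])
    return (biggest[0] / biggest[2], biggest[1] / biggest[2])
```

With three samples near one screen location and a single far-away outlier, the outlier forms its own small cluster and does not shift the reported gaze point.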
... Therefore, the idea arose to combine this approach with PassShapes to provide a secure and easy-to-remember authentication method for public terminals. A new project dealing with this approach is already running and has delivered first encouraging results [8]. Fortunately, the underlying stroke concept perfectly fits the movement of the eyes, which can only move in saccades. ...
Conference Paper
Full-text available
Authentication today mostly relies on passwords or personal identification numbers (PINs). The average user therefore has to remember an increasing number of PINs and passwords. Unfortunately, humans have limited capabilities for remembering abstract alphanumeric sequences. Thus, many people either forget them or use very simple ones, which implies several security risks. In this work, a novel authentication method called PassShapes is presented. In this system users authenticate themselves to a computing system by drawing simple geometric shapes constructed from an arbitrary combination of eight different strokes. We argue that using such shapes will allow more complex and thus more secure authentication tokens with a lower cognitive load and higher memorability. To test these assumptions, two user studies have been conducted. The memorability evaluation showed that the PassShapes concept is able to increase memorability when users can practice the PassShapes several times. This effect even increases over time. Additionally, a prototype was implemented to conduct a usability study. The results of both studies indicate that the PassShapes approach is able to provide a usable and memorable authentication method.
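A PassShape built from eight stroke types can be represented as a short sequence over a stroke alphabet and verified by exact comparison. The following is a minimal sketch; the alphabet labels and the encoding are illustrative assumptions, not the paper's notation:

```python
# Hypothetical eight-stroke alphabet: four straight strokes and four
# diagonals (labels are illustrative).
VALID_STROKES = {"U", "D", "L", "R", "UL", "UR", "DL", "DR"}

def verify_passshape(entered, stored):
    """Accept the attempt only if every symbol is a valid stroke and
    the sequence matches the stored PassShape stroke for stroke."""
    if not all(s in VALID_STROKES for s in entered):
        return False
    return entered == stored
```

Under this encoding, a square drawn clockwise from the top-left corner would be the sequence ["R", "D", "L", "U"].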
Article
Password, fingerprint and face recognition are the most popular authentication schemes on smartphones. However, these user authentication schemes are threatened by shoulder surfing attacks and spoof attacks. In response to these challenges, eye movements have been utilized to secure user authentication since their concealment and dynamics can reduce the risk of suffering those attacks. However, existing approaches based on eye movements often rely on additional hardware (such as high-resolution eye trackers) or involve a time-consuming authentication process, limiting their practicality for smartphones. This paper presents DEyeAuth, a novel dual-authentication system that overcomes these limitations by integrating eyelid patterns with eye gestures for secure and convenient user authentication on smartphones. DEyeAuth first leverages the unique characteristics of eyelid patterns extracted from the upper eyelid margins or creases to distinguish different users and then utilizes four eye gestures (i.e., looking up, down, left, and right) whose dynamism and randomness can counter threats from image and video spoofing to enhance system security. To the best of our knowledge, we are among the first to discover and prove that the upper eyelid margins and creases can be used as potential biometrics for user authentication. We have implemented the prototype of DEyeAuth on Android platforms and comprehensively evaluated its performance by recruiting 50 volunteers. The experimental results indicate that DEyeAuth achieves a high authentication accuracy of 99.38% with a relatively short authentication time of 6.2 seconds, and is effective in resisting image presentation, video replaying, and mimic attacks.
Article
Biometric authentication has been applied in many domains due to the growing awareness of privacy and security risks. Most previous work has shown the performance of a single biometric, but few studies have explored the feasibility of hybrid biometrics. On this basis, we proposed a hybrid brain-computer interface (BCI) authentication approach that combines a user's electroencephalogram (EEG) and eye movement data features simultaneously. In anti-shoulder-surfing experiments, the proposed approach reached an average accuracy of 84.36% (the highest was 88.35%) in identifying shoulder surfers, and outperformed authentication approaches based on EEG or eye movement data alone. In additional experiments, the approach was proved useful in reducing the possibility of user misidentification. Our approach holds great potential in providing references for implementing hybrid BCI authentication for anti-shoulder-surfing applications.
Thesis
Full-text available
The last decade witnessed an increasing adoption of public interactive displays. Displays can now be seen in many public areas, such as shopping malls, and train stations. There is also a growing trend towards using large public displays especially in airports, urban areas, universities and libraries.
Conference Paper
No matter how sophisticated an authentication system has been devised, humans are often considered the weakest link in the security chain. Security problems can stem from bad interactions between humans and systems. Eye movement is a natural interaction modality. The application of eye tracking technology in authentication offers a promising and feasible solution to the trade-off between the usability and the security of an authentication system. This paper conducts a comprehensive survey of existing Eye Movement Based Authentication (EMBA) methodologies and systems, and briefly outlines the technical and methodological aspects of EMBA systems. We decompose the EMBA technique into three fundamental aspects: (1) eye movement input modality, (2) eye movement interaction mechanism, and (3) eye movement data recognition. The features and functions of the EMBA modules are further analyzed. An emphasis is put on the interrelationship among the modules and their general impacts on the formation and function of the EMBA framework. The paper attempts to provide a systematic treatment of the state-of-the-art technology and also to outline some potential future development directions in eye movement based interaction or security systems.
Conference Paper
We investigate the possibility of using pupil size as a discriminating feature for eye-based soft biometrics. In experiments carried out in different sessions in two consecutive years, 25 subjects were asked to simply watch the center of a plus sign displayed in the middle of a blank screen. Four primary attributes were exploited, namely left and right pupil sizes and the ratio and difference of left and right pupil sizes. Fifteen descriptive statistics were used for each primary attribute, plus two further measures, which produced a total of 62 features. Bayes, Neural Network, Support Vector Machine and Random Forest classifiers were employed to analyze both all the features and selected subsets. The identification task showed higher classification accuracies (0.6194–0.7187) with the selected features, while the verification task exhibited almost comparable performances (~0.97) in the two cases for accuracy, and an increase in sensitivity and a decrease in specificity with the selected features.
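Feature sets of this kind (descriptive statistics over left and right pupil sizes and their ratio and difference) are straightforward to compute. The sketch below derives a handful of such statistics; the exact feature list and names are assumptions, since the paper uses 15 statistics per attribute:

```python
import statistics

def pupil_features(left, right):
    """Compute a few descriptive statistics over the four primary
    attributes used in pupil-based soft biometrics: left size, right
    size, their ratio and their difference. `left` and `right` are
    equal-length lists of pupil-size samples (e.g. in millimetres)."""
    ratio = [l / r for l, r in zip(left, right)]
    diff = [l - r for l, r in zip(left, right)]
    feats = {}
    for name, series in [("left", left), ("right", right),
                         ("ratio", ratio), ("diff", diff)]:
        feats[name + "_mean"] = statistics.mean(series)
        feats[name + "_stdev"] = statistics.pstdev(series)
        feats[name + "_min"] = min(series)
        feats[name + "_max"] = max(series)
    return feats
```

The resulting dictionary can be flattened into a fixed-order vector and fed to any standard classifier.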
Article
Full-text available
User authentication is an important and usually final barrier to detect and prevent illicit access. Nonetheless it can be broken or tricked, leaving the system and its data vulnerable to abuse. In this paper we consider how eye tracking can enable the system to hypothesize if the user is familiar with the system he operates, or if he is an unfamiliar intruder. Based on an eye tracking experiment conducted with 12 users and various stimuli, we investigate which conditions and measures are most suited for such an intrusion detection. We model the user's gaze behavior as a selector for information flow via the relative conditional gaze entropy. We conclude that this feature provides the most discriminative results with static and repetitive stimuli.
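A conditional gaze entropy of this kind can be approximated as the conditional entropy of transitions between areas of interest (AOIs), normalised by its maximum. This sketch is one plausible reading of the measure, not the authors' exact definition:

```python
import math
from collections import Counter

def conditional_gaze_entropy(aoi_sequence):
    """Conditional entropy H(next | current) of AOI-to-AOI gaze
    transitions, normalised by log2 of the number of AOIs so the
    result lies in [0, 1]. Highly predictable scanpaths score near 0;
    erratic ones score near 1."""
    pairs = list(zip(aoi_sequence, aoi_sequence[1:]))
    if not pairs:
        return 0.0
    pair_counts = Counter(pairs)
    src_counts = Counter(a for a, _ in pairs)
    total = len(pairs)
    h = 0.0
    for (a, b), n in pair_counts.items():
        p_ab = n / total                 # joint transition probability
        p_b_given_a = n / src_counts[a]  # conditional probability
        h -= p_ab * math.log2(p_b_given_a)
    n_aoi = len(set(aoi_sequence))
    return h / math.log2(n_aoi) if n_aoi > 1 else 0.0
```

A strictly alternating scanpath such as "ABABAB" is fully predictable and yields 0, whereas a scanpath where each AOI is followed by several different AOIs yields a positive value.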
Article
The emergence of small handheld devices such as tablets and smartphones, often with touch sensitive surfaces as their only input modality, has spurred a growing interest in the subject of gestures for human–computer interaction (HCI). It has been proven before that eye movements can be consciously controlled by humans to the extent of performing sequences of predefined movement patterns, or “gaze gestures” that can be used for HCI purposes in desktop computers. Gaze gestures can be tracked noninvasively using a video-based eye-tracking system. We propose here that gaze gestures can also be an effective input paradigm to interact with handheld electronic devices. We show through a pilot user study how gaze gestures can be used to interact with a smartphone, how they are easily assimilated by potential users, and how the Needleman-Wunsch algorithm can effectively discriminate intentional gaze gestures from otherwise typical gaze activity performed during standard interaction with a small smartphone screen. Hence, reliable gaze–smartphone interaction is possible with accuracy rates, depending on the modality of gaze gestures being used (with or without dwell), higher than 80 to 90%, negligible false positive rates, and completion speeds lower than 1 to 1.5 s per gesture. These encouraging results and the low-cost eye-tracking equipment used suggest the possibilities of this new HCI modality for the field of interaction with small-screen handheld devices.
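The Needleman-Wunsch step described above amounts to standard global sequence alignment over gesture strings, e.g. sequences of stroke symbols. A minimal sketch follows; the scoring parameters are illustrative, not the paper's:

```python
def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-1):
    """Global alignment score between two gesture strings.

    A recorded gaze sequence can be aligned against each template
    gesture; a score above a tuned threshold flags an intentional
    gesture, distinguishing it from typical gaze activity.
    """
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    # Initialise first row/column with cumulative gap penalties.
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    # Fill the dynamic-programming table.
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]
```

An exact match of a four-stroke gesture scores 8 under these parameters, while a noisy sequence with a missing stroke scores lower but may still clear a recognition threshold.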
Article
Full-text available
Access to computer systems is most often based on the use of alphanumeric passwords. However, users have difficulty remembering a password that is long and random-appearing. Instead, they create short, simple, and insecure passwords. Graphical passwords have been designed to try to make passwords more memorable and easier for people to use and, therefore, more secure. Using a graphical password, users click on images rather than type alphanumeric characters. We have designed a new and more secure graphical password system, called PassPoints. In this paper we describe the PassPoints system, its security characteristics, and the empirical study we carried out comparing PassPoints to alphanumeric passwords. In the empirical study participants learned either an alphanumeric or graphical password and subsequently carried out three longitudinal trials to input their passwords over a period of five weeks. The results show that the graphical group took longer and made more errors in learning the password, but that the difference was largely a consequence of just a few graphical participants who had difficulty learning to use graphical passwords. In the longitudinal trials the two groups performed similarly on memory of their password, but the graphical group took more time to input a password.
Conference Paper
Full-text available
Users gain access to cash, confidential information and services at Automated Teller Machines (ATMs) via an authentication process involving a Personal Identification Number (PIN). These users frequently have many different PINs, and fail to remember them without recourse to insecure behaviours. This is not a failing of users. It is a usability failing in the ATM authentication mechanism. This paper describes research executed to evaluate whether users find multiple graphical passwords more memorable than multiple PINs. The research also investigates the success of two memory augmentation strategies in increasing memorability of graphical passwords. The results demonstrate that multiple graphical passwords are substantially more effective than multiple PINs. Memorability is further improved by the use of mnemonics to aid their recall. This study will be of interest to HCI practitioners and information security researchers exploring approaches to usable security.
Conference Paper
Full-text available
This paper describes some of the consumer-driven usability research conducted by NCR Self Service Strategic Solutions in the development of an understanding of usability and user acceptance of leading-edge biometric verification techniques. We discuss biometric techniques in general and focus upon the usability phases and issues associated with iris verification technology at the Automated Teller Machine (ATM) user interface. The paper concludes with a review of some of the major research issues encountered, and an outline of future work in the area.
Conference Paper
Full-text available
Current software interfaces for entering text on touch screen devices mimic existing mechanisms such as keyboard typing or handwriting. These techniques are poor for entering private text such as passwords since they allow observers to decipher what has been typed simply by looking over the typist's shoulder, an activity known as shoulder surfing. In this paper, we outline a general approach for designing security-sensitive onscreen virtual keyboards that allow users to enter private text without revealing it to observers. We present one instantiation, the Spy-Resistant Keyboard, and discuss design decisions leading to the development of this keyboard. We also describe the results of a user study exploring the usability and security of our interface. Results indicate that although users took longer to enter their passwords, using the Spy-Resistant Keyboard rather than a standard soft keyboard resulted in a significant increase in their ability to protect their passwords from a watchful observer.
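One common ingredient of observation-resistant on-screen keyboards is decoupling the positions an observer sees from the characters they produce. The sketch below illustrates that general idea with a per-entry shuffled layout; it is a deliberate simplification, not the Spy-Resistant Keyboard's actual design, and all names in it are illustrative.

```python
# Minimal sketch of one idea behind observation-resistant on-screen
# keyboards: shuffle the key layout freshly for every entry, so the slot
# positions a shoulder surfer observes do not map to fixed characters.
# This is an illustrative simplification, not the paper's exact mechanism.
import random


def shuffled_layout(chars, rng=random):
    """Return a mapping from on-screen slot index to character, freshly shuffled."""
    order = list(chars)
    rng.shuffle(order)
    return dict(enumerate(order))


def type_via_slots(layout, slot_indices):
    """The terminal knows the layout, so it can decode the touched slots."""
    return "".join(layout[i] for i in slot_indices)


rng = random.Random(7)  # seeded only to make this demo reproducible
layout = shuffled_layout("0123456789", rng)
secret = "2580"
# The user touches whichever slots currently show their digits:
slots = [next(i for i, c in layout.items() if c == d) for d in secret]
print(type_via_slots(layout, slots))  # '2580'
```

An observer who records only the touched slot positions learns nothing reusable, because the next entry uses a different layout.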
Conference Paper
Full-text available
Personal identification numbers (PINs) are one of the most common means of electronic authentication these days and are used in a wide variety of applications, especially in ATMs (cash machines). Criminals use a considerable number of tricks to spy on these numbers and gain access to the owners' valuables. Simply looking over the victims' shoulders to get in possession of their PINs is a common one. This effortless but effective trick is known as shoulder surfing. Thus, a less observable PIN entry method is desirable. In this work, we evaluate three different eye gaze interaction methods for PIN entry, all resistant against these common attacks and thus providing enhanced security. Besides the classical eye input methods we also investigate a new approach of gaze gestures and compare it to the well-known classical gaze interactions. The evaluation considers both security and usability aspects. Finally we discuss possible enhancements for gaze gestures towards pattern-based identification instead of number sequences.
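Among the classical gaze input methods such a study compares, the simplest is dwell-time selection: a digit counts as entered once the gaze has rested on its on-screen key long enough. The sketch below shows that mechanism in its simplest form; the 0.5 s threshold and the sample format are assumptions for the demo, not values from the paper.

```python
# Illustrative sketch of dwell-time selection, the classical gaze input
# technique: a key is entered once the gaze has rested on it for a
# minimum dwell time. Threshold and input format are assumed for the demo.

DWELL_SECONDS = 0.5  # assumed dwell threshold


def dwell_select(samples, dwell=DWELL_SECONDS):
    """samples: list of (timestamp, key_under_gaze or None). Returns entered keys."""
    entered = []
    current, start = None, None
    for t, key in samples:
        if key != current:
            current, start = key, t          # gaze moved to a new key
        elif key is not None and t - start >= dwell:
            entered.append(key)              # fixation long enough: enter key
            current, start = None, None      # require a fresh fixation next
    return entered


samples = [(0.0, "4"), (0.2, "4"), (0.6, "4"),   # 0.6 s on "4": entered
           (0.7, None), (0.8, "2"), (1.0, "2"),  # only 0.2 s on "2": ignored
           (1.1, None)]
print(dwell_select(samples))  # ['4']
```

The usability trade-off the paper examines follows directly from this loop: a longer dwell threshold reduces accidental selections but slows every entry.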
Conference Paper
Full-text available
Graphical passwords are an alternative to alphanumeric passwords in which users click on images to authenticate themselves rather than type alphanumeric strings. We have developed one such system, called PassPoints, and evaluated it with human users. The results of the evaluation were promising with respect to memorability of the graphical password. In this study we expand our human factors testing by studying two issues: the effect of tolerance, or margin of error, in clicking on the password points, and the effect of the image used in the password system. In our tolerance study, results show that accurate memory for the password is strongly reduced when using a small tolerance (10 x 10 pixels) around the user's password points. This may occur because users fail to encode the password points in memory in the precise manner that is necessary to remember the password over a lapse of time. In our image study we compared user performance on four everyday images. The results indicate that there were few significant differences in performance across the images. This preliminary result suggests that many images may support memorability in graphical password systems.
Conference Paper
Full-text available
This paper investigates novel ways to direct computers by eye gaze. Instead of using fixations and dwell times, this work focuses on eye motion, in particular gaze gestures. Gaze gestures are insensitive to accuracy problems and immune against calibration shift. A user study indicates that users are able to perform complex gaze gestures intentionally and investigates which gestures occur unintentionally during normal interaction with the computer. Further experiments show how gaze gestures can be integrated into working with standard desktop applications and controlling media devices.
Article
In this paper we propose and evaluate new graphical password schemes that exploit features of graphical input displays to achieve better security than text-based passwords. Graphical input devices enable the user to decouple the position of inputs from the temporal order in which those inputs occur, and we show that this decoupling can be used to generate password schemes with substantially larger memorable password spaces. In order to evaluate the security of one of our schemes, we devise a novel way to capture a subset of the "memorable" passwords that, we believe, is itself a contribution. In this work we are primarily motivated by devices such as personal digital assistants (PDAs) that offer graphical input capabilities via a stylus, and we describe our prototype implementation of one of our password schemes on such a PDA, namely the Palm Pilot(TM).
Conference Paper
Authentication today mostly means using passwords or personal identification numbers (PINs). The average user has to remember an increasing number of PINs and passwords. But unfortunately, humans have limited capabilities in remembering abstract alphanumeric sequences. Thus, many people either forget them or use very simple ones that imply several security risks. In our previous work on PIN entry on ATMs (cash machines), we found out that many people support their memory when recalling PINs by using an imaginary shape overlaid on the number pad. In this paper, we introduce PassShape, a shape-based authentication mechanism. We argue that using shapes will allow more complex and more secure authentication with a lower cognitive load. That is, it enables people to use easy-to-remember but complex authentication patterns.
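The observation that users overlay an imaginary shape on the number pad can be made concrete with a small sketch: starting from one key, each stroke direction of a shape moves to a neighbouring key, so the shape stands in for a digit sequence. The keypad layout and direction names below are assumptions for illustration, not the PassShape encoding itself.

```python
# Illustrative sketch of a shape overlaid on a number pad: from a start
# key, each stroke direction (U/D/L/R) moves to a neighbouring key, so
# e.g. starting at 7 and stroking up, up, right traces the digits 7-4-1-2.
# Layout and direction names are assumptions, not the PassShape scheme.

KEYPAD = {(0, 0): "1", (1, 0): "2", (2, 0): "3",
          (0, 1): "4", (1, 1): "5", (2, 1): "6",
          (0, 2): "7", (1, 2): "8", (2, 2): "9"}
MOVES = {"U": (0, -1), "D": (0, 1), "L": (-1, 0), "R": (1, 0)}


def shape_to_digits(start_key, strokes):
    """Trace a stroke-direction shape across the keypad into a digit string."""
    pos = next(p for p, k in KEYPAD.items() if k == start_key)
    digits = [start_key]
    for s in strokes:
        dx, dy = MOVES[s]
        pos = (pos[0] + dx, pos[1] + dy)
        digits.append(KEYPAD[pos])
    return "".join(digits)


print(shape_to_digits("7", ["U", "U", "R"]))  # '7412'
```

Remembering one shape plus a start key is then a lighter cognitive load than remembering the digit sequence itself, which is the trade-off the abstract argues for.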
Conference Paper
Shoulder-surfing - using direct observation techniques, such as looking over someone's shoulder, to get passwords, PINs and other sensitive personal information - is a problem that has been difficult to overcome. When a user enters information using a keyboard, mouse, touch screen or any traditional input device, a malicious observer may be able to acquire the user's password credentials. We present EyePassword, a system that mitigates the issues of shoulder surfing via a novel approach to user input. With EyePassword, a user enters sensitive input (password, PIN, etc.) by selecting from an on-screen keyboard using only the orientation of their pupils (i.e. the position of their gaze on screen), making eavesdropping by a malicious observer largely impractical. We present a number of design choices and discuss their effect on usability and security. We conducted user studies to evaluate the speed, accuracy and user acceptance of our approach. Our results demonstrate that gaze-based password entry requires marginal additional time over using a keyboard, error rates are similar to those of using a keyboard and subjects preferred the gaze-based password entry approach over traditional methods.
The design and analysis of graphical passwords
Jermyn, I., Mayer, A., Monrose, F., Reiter, M., Rubin, A. The design and analysis of graphical passwords. In: Proceedings of the USENIX Security Symposium, August 1999.
Authentication using graphical passwords: Basic Results
Wiedenbeck, S., Waters, J., Birget, J.-C., Brodskiy, A., Memon, N. Authentication using graphical passwords: Basic results. In: Proceedings of Human-Computer Interaction International (HCII 2005), Las Vegas, Nevada, USA, July 22-27, 2005.