Source publication
This paper presents VeinDeep, a system for using vein patterns to secure smartphones from opportunistic access, e.g. a device left unattended. VeinDeep takes advantage of infrared depth sensors, which at the time of writing have recently started to appear in smartphones for 3D indoor mapping and localisation. We find these sensors can be re-purpose...
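The excerpt cuts off before the pipeline details. Purely as an illustration of the general idea, the sketch below (Python with OpenCV; the filename and all parameters are invented, and this is not VeinDeep's actual method) enhances an infrared hand image and binarizes the darker vein structure.

```python
# Illustrative sketch only: enhance an infrared hand image and extract
# the dark vein structure as a binary mask. Filenames and parameter
# values are assumptions, not VeinDeep's actual pipeline.
import cv2

ir = cv2.imread("hand_ir.png", cv2.IMREAD_GRAYSCALE)  # hypothetical IR frame
if ir is None:
    raise FileNotFoundError("hand_ir.png not found")

# Boost local contrast so veins (slightly darker than the surrounding
# tissue) stand out.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(ir)

# Adaptive threshold: veins appear as dark ridges, so invert the output
# (block size 21, constant 4 are arbitrary choices for this sketch).
veins = cv2.adaptiveThreshold(
    enhanced, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
    cv2.THRESH_BINARY_INV, 21, 4)

# Remove speckle noise with a small morphological opening.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
veins = cv2.morphologyEx(veins, cv2.MORPH_OPEN, kernel)

cv2.imwrite("vein_mask.png", veins)
```

A mask like this would still need a matching step (the paper's contribution) before it could be used for authentication; the sketch stops at extraction.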
Similar publications
Digitalisation demands new strategies, even from the small shop around the corner. An opportunity or a risk for retailers? Can enthusiasm for the smartphone be harnessed for brick-and-mortar retail? In-depth interviews and online surveys of 40 retailers and 50 customers in the Sülz and Klettenberg districts provide answers...
The SiWaWa family of growth models is designed to be easy to use from the simplest possible inputs: at minimum the number of stems (N) and the basal area (G), plus the site index (SI, i.e. hdom at age 50). These models generate the stem distribution with very good estimation accuracy...
Citations
... While these methods mark progress in the field, smiling or making an occlusion sound may not be something people are willing to do every time they wish to unlock their phone. Leveraging vein patterns is proposed in [69], but it relies on specialized infrared sensors not found on commodity mobile phones. ...
... SmileAuth [30] and BiLock [70] propose to authenticate the user based on information extracted from their teeth alignment or occlusion sound. VeinDeep [69] proposes to authenticate the user based on the vein pattern of their hand, while BreathPrint [8] exploits audio features derived from an individual's breathing gestures for this task. While providing interesting new authentication modalities, these approaches still exhibit important limitations: they either rely on specialized sensors that are not found on common mobile phones [69], or require a smiling or dental occlusion action, or explicit breathing with the phone near the nose, which might not be convenient for use in public [8,30,70]. ...
We present HoldPass, the first system that can authenticate a user while they simply hold their phone. It uses heart activity as a biometric trait, sensed via the hand vibrations produced in response to the cardiac cycle - a process known as ballistocardiography (BCG). While heart activity has been used for biometric authentication, sensing it through hand-based ballistocardiography (Hand-BCG) using standard sensors found on commodity mobile phones is uncharted territory. Using a combination of in-depth qualitative analysis and large-scale quantitative analysis involving over 100 volunteers, we paint a detailed picture of the opportunities and challenges. Authentication based on Hand-BCG is shown to be feasible, but the signal is weak, uniquely prone to motion artifacts, and does not lend itself to the common approach of alignment-based authentication. HoldPass addresses these challenges by introducing a novel alignment-free authentication scheme that builds on asynchronous signal slicing and a data-driven algorithm for identifying a reduced set of features for characterizing a user. We implement HoldPass and evaluate it using a multi-modal approach: a large-scale study involving 112 volunteers and targeted studies with a smaller set of volunteers over a period of several months. The data shows that HoldPass provides authentication accuracy and user experience on par with or better than state-of-the-art systems with stronger requirements on hardware and/or user participation.
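The abstract describes the scheme only at a high level. As a rough, non-authoritative sketch of what asynchronous signal slicing might look like, the snippet below cuts a hand-BCG trace into overlapping fixed-length windows with no heartbeat alignment and computes simple per-slice features; the sampling rate, window length, hop size, feature set, and filename are all assumptions, not HoldPass's actual design.

```python
# Hypothetical illustration of alignment-free slicing of a hand-BCG
# signal: cut the trace into fixed-length windows at arbitrary offsets
# (no heartbeat alignment) and compute simple per-slice features.
import numpy as np

FS = 100          # assumed accelerometer sampling rate (Hz)
WIN = 2 * FS      # 2-second slices
HOP = FS // 2     # 0.5-second hop: slices need not align with heartbeats

def slice_signal(x: np.ndarray) -> np.ndarray:
    """Return an array of overlapping fixed-length slices of x."""
    starts = range(0, len(x) - WIN + 1, HOP)
    return np.stack([x[s:s + WIN] for s in starts])

def slice_features(s: np.ndarray) -> np.ndarray:
    """Toy per-slice features: time-domain stats plus a coarse spectrum."""
    spectrum = np.abs(np.fft.rfft(s - s.mean()))
    bands = spectrum[:40].reshape(8, 5).mean(axis=1)  # 8 coarse bands
    return np.concatenate(([s.mean(), s.std(), np.ptp(s)], bands))

bcg = np.loadtxt("hand_bcg.csv")          # hypothetical 1-D trace
features = np.stack([slice_features(s) for s in slice_signal(bcg)])
print(features.shape)  # (n_slices, 11): input for a per-user classifier
```

Because the slices are taken at fixed offsets rather than at detected heartbeats, the downstream classifier must tolerate arbitrary phase, which is the point of an alignment-free scheme.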
... In contrast, VID is designed to work with commodity depth cameras embedded in off-the-shelf smart devices such as smartphones and laptops [8]. A closely related work is VeinDeep [15], which used the Microsoft Kinect V2 (now officially discontinued) to extract vein patterns. However, VID differs in many ways. ...
... Once the model is trained, the identification process is quick and invariant to the number of users in the enrolled set. 2) The authors in Ref. [15] collected only six images per subject for their evaluations. In contrast, VID used 500 images per subject, which include images that are more representative of normal usage of such a system and thus capture expected artefacts, such as slight variations in the hand position (and orientation) relative to the camera when the images are recorded. ...
... VID, on the other hand, leverages a commodity depth camera (ubiquitously available on computing devices) and uses deep learning, which is scalable; its identification phase is invariant to the number of enrolled users. Recently, the authors in Ref. [15] used the Microsoft Kinect V2 (now discontinued) to extract vein patterns from the hand dorsum. The benefits of VID over this approach are outlined in the Introduction (see Section 1 for details). ...
Herein, a human identification system for smart spaces called Vein-ID (referred to as VID) is presented, which leverages the uniqueness of the vein patterns embedded in the dorsum of an individual's hand. VID extracts vein patterns using depth information and infrared (IR) images, both obtained from a commodity depth camera. Two deep learning models (a CNN and stacked autoencoders) are presented for precisely identifying a target individual from a set of N enrolled users. VID also incorporates a strategy for identifying an intruder, that is, a person whose vein patterns are not included in the set of enrolled individuals. The performance of VID is evaluated on a comprehensive data set of approximately 17,500 images collected from 35 subjects. The tests reveal that VID can identify an individual with an average accuracy of over 99% from a group of up to 35 individuals. It is demonstrated that VID can detect intruders with an average accuracy of about 96%. The execution time for training and testing the two deep learning models on different hardware platforms is also investigated, and the differences are reported. © 2021 The Authors. IET Biometrics published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
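VID's architectures are not reproduced in this excerpt. The sketch below (PyTorch, with invented layer sizes, input resolution, and rejection threshold) illustrates the general pattern the abstract describes: N-way identification over the enrolled set, plus intruder rejection when the classifier's confidence falls below a threshold.

```python
# Hypothetical sketch of N-way vein-pattern identification with a
# softmax-confidence threshold for intruder rejection. Layer sizes,
# input size and the threshold are assumptions, not VID's actual models.
import torch
import torch.nn as nn

N_USERS = 35             # size of the enrolled set
REJECT_THRESHOLD = 0.90  # assumed confidence cut-off for intruders

class VeinCNN(nn.Module):
    def __init__(self, n_classes: int = N_USERS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 64x64 input -> 16x16 feature maps after two 2x2 poolings
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def identify(model: nn.Module, vein_img: torch.Tensor) -> int:
    """Return the enrolled user's index, or -1 for a suspected intruder."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(vein_img.unsqueeze(0)), dim=1)[0]
    conf, user = probs.max(dim=0)
    return int(user) if conf >= REJECT_THRESHOLD else -1

# Usage with a dummy 64x64 single-channel vein mask (untrained model):
model = VeinCNN()
print(identify(model, torch.rand(1, 64, 64)))
```

Thresholding the softmax confidence is one simple way to obtain the intruder-rejection behaviour the abstract reports; the paper may use a different strategy.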
User authentication is a major challenge for handheld devices and online accounts such as bank and social media accounts, because illegitimate access leads to monetary loss and compromised user privacy. Personal devices, online financial services, and intelligent spaces are three significant areas of concern for customer authentication procedures. Three categories of authentication factor have been identified: i) knowledge factors, ii) inherence factors, and iii) possession factors. This study investigates two-way user authentication through image processing. CNN, RCNN, and DeepFace are the deep learning algorithms used for image recognition. We use an imagechain for image storage and a blockchain for storing personal information (the mobile number) to secure the database; the database is stored on an Ethereum-based blockchain. After determining whether the image is fake or real, the webcam image is matched against the imagechain; if the two images match, a one-time password is sent to the user's phone number for login access. OpenCV is employed for image processing, and Python libraries are used to run the machine and deep learning algorithms for user authentication. The proposed model is tested on 10 to 100 users. The recognition accuracies are 75.35%, 76.33%, and 98.18%, with high cosine similarity between matching images, and fake-image identification achieves 97.35% accuracy.
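The paper names its imagechain and blockchain components without specifying them, so the toy sketch below illustrates the described flow under loud assumptions: a hash-linked record list stands in for the imagechain, an exact digest comparison stands in for the deep-learning face match, and send_sms is a stub for a real SMS gateway.

```python
# Minimal, invented illustration of the described flow: verify a live
# image against a hash-linked "imagechain" record, then issue a one-time
# password. The record layout, the exact-hash matching rule (a stand-in
# for real face matching) and send_sms are all assumptions.
import hashlib
import secrets

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ImageChain:
    """Hash-linked list of enrolled image digests (toy stand-in)."""
    def __init__(self):
        self.blocks = []  # each block: (prev_hash, image_hash)

    def append(self, image_bytes: bytes) -> None:
        prev = self.blocks[-1][1] if self.blocks else "0" * 64
        self.blocks.append((prev, sha256(image_bytes)))

    def contains(self, image_bytes: bytes) -> bool:
        h = sha256(image_bytes)
        return any(b[1] == h for b in self.blocks)

def send_sms(phone: str, message: str) -> None:
    print(f"SMS to {phone}: {message}")  # stub for a real SMS gateway

def login(chain: ImageChain, live_image: bytes, phone: str) -> bool:
    """If the live image matches an enrolled record, send an OTP."""
    if not chain.contains(live_image):
        return False
    otp = f"{secrets.randbelow(10**6):06d}"
    send_sms(phone, f"Your one-time password is {otp}")
    return True

chain = ImageChain()
chain.append(b"enrolled-face-image-bytes")
print(login(chain, b"enrolled-face-image-bytes", "+10000000000"))
```

In the actual system the matching step is a trained face-recognition model rather than a byte-for-byte hash comparison; the hash chain here only shows how successive records can be linked for tamper evidence.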
The Twelfth International Conference on eHealth, Telemedicine, and Social Medicine (eTELEMED 2020) considered advances in techniques, services, and applications dedicated to a global approach to eHealth.
Recent advancements in technology have led to a profusion of personal computing devices such as smartphones, tablets, watches, and glasses. This has contributed to the realization of a digital world where important daily tasks can be performed over the Internet from any place, at any time, and on any device. At the same time, advances in pervasive computing technologies have brought to fruition the concept of smart spaces, which target the effortless, automated provision of customized services to their inhabitants. User authentication, i.e., a procedure to verify the identity of the user, is essential in the digital world to protect the user's personal data stored online (e.g., online bank accounts) and on personal devices (e.g., smartphones), and also to enable customized services in smart spaces (e.g., adjusting room temperature). Recently, traditional authentication mechanisms (e.g., passwords or fingerprints) have been repeatedly shown to be vulnerable to subversion. Researchers have thus proposed numerous new mechanisms to authenticate users in the aforementioned scenarios. This paper presents an overview of these novel systems, so as to guide future research efforts in these domains.