Conference Paper

Hand Gesture Recognition for Human-Machine Interaction

Source: DBLP

ABSTRACT

Even after more than two decades of input-device development, many people still find interacting with computers an uncomfortable experience. Efforts should be made to adapt computers to our natural means of communication: speech and body language. The Perceptual User Interface (PUI) paradigm has emerged as a post-WIMP interface paradigm to address these preferences. This paper proposes a real-time vision system for use within visual interaction environments through hand gesture recognition, built on general-purpose hardware and low-cost sensors, such as a simple personal computer and a USB webcam, so that any user can employ it at the office or at home. The basis of our approach is a fast segmentation process that extracts the moving hand from the whole image and copes with a large number of hand shapes against different backgrounds and lighting conditions, followed by a recognition process that identifies the hand posture from the temporal sequence of segmented hands. The core of the recognition process is a robust shape comparison carried out with a Hausdorff distance approach that operates on edge maps. A visual memory allows the system to handle variations within a gesture and to speed up recognition by storing several variables related to each gesture. The paper includes an experimental evaluation of the recognition of 26 hand postures and discusses the results. Experiments show that the system achieves a 90% average recognition rate and is suitable for real-time applications.
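The paper itself gives no implementation details here, but the kind of edge-map shape comparison the abstract describes can be illustrated with a minimal sketch. The snippet below computes a symmetric Hausdorff distance between Canny edge maps of a segmented hand and a stored template; the use of OpenCV and SciPy, the function names, and the Canny thresholds are illustrative assumptions, not the authors' code, and the paper's own Hausdorff variant may differ.

```python
# Minimal sketch (not the authors' implementation): Hausdorff-style shape
# comparison over Canny edge maps of segmented hand images.
import cv2
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def edge_points(gray_img, lo=50, hi=150):
    """Return (row, col) coordinates of Canny edge pixels."""
    edges = cv2.Canny(gray_img, lo, hi)
    return np.column_stack(np.nonzero(edges))

def hausdorff_shape_distance(img_a, img_b):
    """Symmetric Hausdorff distance between the edge maps of two images."""
    pts_a, pts_b = edge_points(img_a), edge_points(img_b)
    d_ab = directed_hausdorff(pts_a, pts_b)[0]
    d_ba = directed_hausdorff(pts_b, pts_a)[0]
    return max(d_ab, d_ba)

# Usage idea: match a segmented hand against stored templates ("visual memory").
# hand = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
# best = min(templates, key=lambda t: hausdorff_shape_distance(hand, t))
```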

CITATIONS
  • Source
    • "These images can for example be acquired by Principal Component Analysis (PCA). Some researchers who use appearance based approaches are [10] [11] [12]. Methods relying on high-level features use a 3D hand model [17] [18] [19]. "
    ABSTRACT: Hand posture recognition has played a very important role in Human-Computer Interaction (HCI) and Computer Vision (CV) for many years. The challenge arises mainly from self-occlusions caused by the limited view of the camera. In this paper, a robust hand posture recognition approach based on 3D point clouds from two RGB-D sensors (Kinect) is proposed to make maximum use of the 3D information in the depth maps. Through noise reduction and registration of the two point sets obtained from the two designed viewpoints, a multi-view hand posture point cloud containing most of the 3D information is acquired. The accurate reconstruction is then classified by directly matching the normalized point set against class templates from the dataset, which reduces training time and computation (see the matching sketch after this entry). Experimental results on a posture dataset captured by the Kinect sensors (digits 1 to 10) demonstrate the effectiveness of the proposed method.
    Full-text · Article · Jul 2015 · KSII Transactions on Internet and Information Systems
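The direct point-set-to-template matching summarized in the entry above could look, in outline, like the sketch below: each normalized hand point cloud is compared against per-class template clouds with a symmetric average nearest-neighbour (chamfer-style) distance, and the label of the closest template is returned. The variable names and the use of SciPy's cKDTree are assumptions for illustration, not the cited authors' implementation.

```python
# Illustrative sketch (not the cited paper's code): classify a normalized
# 3D hand point cloud by matching it against per-class template clouds.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(cloud_a, cloud_b):
    """Symmetric average nearest-neighbour distance between two N x 3 clouds."""
    d_ab, _ = cKDTree(cloud_b).query(cloud_a)   # each point of A to nearest in B
    d_ba, _ = cKDTree(cloud_a).query(cloud_b)   # each point of B to nearest in A
    return d_ab.mean() + d_ba.mean()

def classify_posture(cloud, templates):
    """templates: dict mapping class label -> normalized template cloud."""
    return min(templates, key=lambda label: chamfer_distance(cloud, templates[label]))

# Usage (hypothetical data): ten digit classes, one template cloud per class.
# templates = {d: np.load(f"template_{d}.npy") for d in range(1, 11)}
# label = classify_posture(np.load("query_cloud.npy"), templates)
```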
  • Source
    • "These images can for example be acquired by Principal Component Analysis (PCA). Some algorithms based on appearance are presented in [4], [5], [6]. S. Gupta et al. proposed method using 15 local Gabor filters and the features are being reduced by PCA to overcome small sample size problem. "
    ABSTRACT: Hand posture recognition has found a wide range of applications in Human-Computer Interaction and Computer Vision for many years. The problem is challenging mainly because of the high dexterity of the hand, self-occlusions in the camera's limited view, and illumination variations. To address these problems, this paper proposes a hand posture recognition method using 3-D point clouds that explicitly exploits the 3-D information in depth maps. First, the hand region is segmented by a set of depth thresholds. Next, the hand image is normalized so that the extracted feature descriptors are scale- and rotation-invariant. By robustly coding and pooling 3-D facets, the proposed descriptor can effectively represent the various hand postures. Finally, an SVM with a Gaussian kernel is used for posture classification (a minimal classifier sketch follows this entry). Experimental results on a posture dataset captured by a Kinect sensor (digits 1 to 10) demonstrate the effectiveness of the proposed approach, with an average recognition rate above 96%.
    Full-text · Article · Feb 2015 · KSII Transactions on Internet and Information Systems
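As a pointer to how the final classification stage described above might be set up, here is a minimal scikit-learn sketch of an SVM with a Gaussian (RBF) kernel trained on pre-computed posture descriptors. The descriptor extraction (3-D facet coding and pooling) is assumed to happen elsewhere; the random placeholder data, hyperparameter values, and variable names are illustrative assumptions, not the cited paper's setup.

```python
# Minimal sketch (not the cited paper's code): RBF-kernel SVM over
# pre-computed hand-posture descriptors, one row per depth image.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder random data stands in for real 3-D facet descriptors.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))          # (n_samples, n_features)
y = rng.integers(1, 11, size=500)        # posture labels 1..10

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```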
  • Source
    • "For skin color identification only Hue and Saturation components are considered. RGB image is converted to HSV color image and given thresholds for skin regions are 0 ≤ H ≤ 15; 20 ≤ S ≤ 120 for natural light and 0 ≤ H ≤ 30; 60 ≤ S ≤ 160 for artificial light [1]. In [6], [8] also used HSV color space for skin color segmentation. "
    ABSTRACT: Gesture recognition techniques are used to achieve spontaneous and natural machine interaction. Conventional hand gesture recognition techniques that use a single 2D camera have serious tracking problems and a higher rate of false recognitions. In this paper, we present an efficient marker-based method for gesture recognition, in which a single marker is used to perform mouse emulation. With this method tracking is accurate and false recognitions are found to be very rare. The approach is also computationally efficient and provides a better user experience than other hand gesture recognition techniques based on a single 2D camera.
    Full-text · Article · Nov 2013 · International Journal of Computer Applications
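The HSV skin thresholds quoted in the excerpt above translate fairly directly into an OpenCV masking step. The sketch below assumes the cited values are already on OpenCV's 8-bit HSV scale (H in 0-179, S and V in 0-255), which is an assumption; if the source paper uses degrees or percentages the ranges would need rescaling. The Value channel is left unconstrained, since only Hue and Saturation are considered in the excerpt.

```python
# Sketch of HSV skin-color masking using the thresholds quoted above.
# Scale assumption: thresholds are on OpenCV's 8-bit HSV scale (H: 0-179, S/V: 0-255).
import cv2
import numpy as np

def skin_mask(bgr_image, artificial_light=False):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    if artificial_light:
        lower, upper = (0, 60, 0), (30, 160, 255)    # 0<=H<=30, 60<=S<=160
    else:
        lower, upper = (0, 20, 0), (15, 120, 255)    # 0<=H<=15, 20<=S<=120
    return cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))

# Usage: mask = skin_mask(cv2.imread("frame.png")); skin pixels are non-zero.
```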