Conference Paper

Hand Gesture Recognition for Human-Machine Interaction

Source: DBLP

ABSTRACT: Even after more than two decades of input-device development, many people still find interacting with computers an uncomfortable experience. Efforts should be made to adapt computers to our natural means of communication: speech and body language. The Perceptual User Interface (PUI) paradigm has emerged as a post-WIMP approach to address these preferences. This paper proposes a real-time vision system for visual interaction environments based on hand gesture recognition, built on general-purpose hardware and low-cost sensors such as a simple personal computer and a USB webcam, so that any user can operate it at home or in the office. Our approach rests on a fast segmentation process that extracts the moving hand from the whole image and copes with a large number of hand shapes against different backgrounds and lighting conditions, followed by a recognition process that identifies the hand posture from the temporal sequence of segmented hands. The core of the recognition process is a robust shape comparison carried out with a Hausdorff distance approach operating on edge maps. A visual memory lets the system handle variations within a gesture and speeds up recognition by storing different variables related to each gesture. The paper includes an experimental evaluation of the recognition process on 26 hand postures and discusses the results. Experiments show that the system achieves an average recognition rate of 90% and is suitable for real-time applications.
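The abstract names a Hausdorff distance comparison of edge maps but gives no implementation details. As a rough illustration of the idea, and not the authors' code, the sketch below computes a modified Hausdorff distance (a common robustified variant; the choice of this variant and all function names are assumptions) between two binary edge maps:

```python
# Illustrative sketch: shape matching between binary edge maps with a
# modified Hausdorff distance. Not the paper's implementation.
import numpy as np
from scipy.spatial.distance import cdist

def edge_points(edge_map: np.ndarray) -> np.ndarray:
    """Return the (row, col) coordinates of the nonzero edge pixels."""
    return np.argwhere(edge_map > 0)

def modified_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Modified Hausdorff distance between two point sets.

    For each point in one set, take the distance to its nearest point
    in the other set; average these, and return the larger of the two
    directed averages. Lower values mean more similar shapes.
    """
    d = cdist(a, b)                  # pairwise Euclidean distances
    d_ab = d.min(axis=1).mean()      # directed distance a -> b
    d_ba = d.min(axis=0).mean()      # directed distance b -> a
    return max(d_ab, d_ba)

# Hypothetical usage: match a segmented hand's edge map against stored
# posture templates and pick the one with the smallest distance.
# best = min(templates, key=lambda t: modified_hausdorff(
#     edge_points(query_edges), edge_points(t.edges)))
```

Averaging nearest-neighbor distances, rather than taking the maximum as the classical Hausdorff distance does, makes the comparison less sensitive to stray edge pixels, which matters under the varied backgrounds and lighting the paper targets.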

Related publications:

  • ABSTRACT: As the integration of digital cameras within personal computing devices becomes a major trend, a real opportunity exists to develop more natural human-computer interfaces that rely on user gestures. In this work, we present a system that acquires and classifies users' hand gestures from images and videos. Using input from low-resolution off-the-shelf web cameras, our algorithm identifies the location and shape of the depicted hand gesture and classifies it as one of several predefined gestures. The algorithm first applies image processing techniques to cancel background and noise effects, then extracts features relevant for classification, and finally classifies the gesture features using a multiclass Support Vector Machine classifier. The algorithm is robust and operates well under several different background, lighting, and noise conditions. Our method achieves an average accuracy rate of 97.8% across several test conditions and is suitable for both real-time and offline classification.
    01/2010
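As a rough illustration of the pipeline this abstract describes (preprocess, extract features, classify with a multiclass SVM), the sketch below uses scikit-learn; the toy block-mean feature and all parameter values are assumptions, since the abstract does not specify them:

```python
# Illustrative sketch of a features -> multiclass SVM pipeline.
# Feature choice and hyperparameters are assumptions, not the paper's.
import numpy as np
from sklearn.svm import SVC

def extract_features(image: np.ndarray) -> np.ndarray:
    """Toy descriptor: mean intensity over a 4x4 grid of blocks.

    Any shape descriptor computed from the denoised, background-
    subtracted hand region could be plugged in here instead.
    """
    h, w = image.shape
    bh, bw = h // 4, w // 4
    return np.array([image[r*bh:(r+1)*bh, c*bw:(c+1)*bw].mean()
                     for r in range(4) for c in range(4)])

# SVC handles the multiclass case via one-vs-one voting by default.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
# clf.fit(np.stack([extract_features(img) for img in train_images]), y_train)
# predicted = clf.predict(extract_features(test_image).reshape(1, -1))
```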
  • ABSTRACT: Gestural interfaces have the potential to enhance control operations in numerous applications. For Air Force systems, machine recognition of whole-hand gestures may be useful as an alternative controller, especially when conventional controls are less accessible. The objective of this effort was to explore the utility of a neural network approach to the recognition of whole-hand gestures. Using a fiber-optic instrumented glove, gesture data were collected for a set of static gestures drawn from the manual alphabet used by the deaf. Two types of neural networks, a multilayer perceptron and a Kohonen self-organizing feature map, were explored. Both showed promise, but the perceptron model was quicker to implement and classification is inherent in the model. The high gesture recognition rates and quick network retraining times found in the present study suggest that a neural network approach to gesture recognition should be further evaluated.
    04/1996
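A minimal sketch of the multilayer perceptron approach described above, using a modern scikit-learn classifier rather than the study's original 1996 implementation; the layer size, activation, and sensor layout are all assumptions:

```python
# Illustrative sketch: an MLP classifying static glove gestures.
# Not the study's original network; hyperparameters are assumptions.
from sklearn.neural_network import MLPClassifier

# Each sample is a vector of bend readings from the instrumented
# glove's fiber-optic sensors; labels are manual-alphabet letters.
# glove_readings: shape (n_samples, n_sensors); letters: shape (n_samples,)
mlp = MLPClassifier(hidden_layer_sizes=(32,), activation="logistic",
                    max_iter=2000, random_state=0)
# mlp.fit(glove_readings, letters)
# print(mlp.predict(new_reading.reshape(1, -1)))
```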
  • ABSTRACT: This paper presents a system for manipulating 3D objects or navigating through 3D models by detecting the gestures and movements of a user's hands in front of a camera mounted on top of a screen. In particular, it introduces an improved skin color segmentation algorithm that combines an online and an offline model, and a Haarlet-based hand gesture recognition system in which the Haarlets are trained using Average Neighborhood Margin Maximization (ANMM). The result is a real-time markerless interaction system applied to two tasks: manipulating 3D objects and navigating through a 3D model.
    IEEE Workshop on Applications of Computer Vision (WACV 2009), 7-8 December 2009, Snowbird, UT, USA; 01/2009
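As a rough sketch of combining an offline (generic) skin-color model with an online (user-adapted) one, as the abstract above describes, the Python/OpenCV code below intersects two HSV threshold masks; every threshold value and the calibration mechanism are assumptions, not the paper's method:

```python
# Illustrative sketch: offline + online skin-color segmentation by
# intersecting two HSV masks. Thresholds are assumed values.
import cv2
import numpy as np

# Offline model: a fixed, generic skin range in HSV (assumed values).
OFFLINE_LO = np.array([0, 40, 60], dtype=np.uint8)
OFFLINE_HI = np.array([25, 180, 255], dtype=np.uint8)

def online_range(calibration_patch_bgr: np.ndarray, k: float = 2.5):
    """Derive a per-user HSV range from a patch known to contain skin."""
    hsv = cv2.cvtColor(calibration_patch_bgr, cv2.COLOR_BGR2HSV)
    pixels = hsv.reshape(-1, 3)
    mean, std = pixels.mean(axis=0), pixels.std(axis=0)
    lo = np.clip(mean - k * std, 0, 255).astype(np.uint8)
    hi = np.clip(mean + k * std, 0, 255).astype(np.uint8)
    return lo, hi

def skin_mask(frame_bgr: np.ndarray, lo_on, hi_on) -> np.ndarray:
    """Keep only pixels accepted by both the offline and online model."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    offline = cv2.inRange(hsv, OFFLINE_LO, OFFLINE_HI)
    online = cv2.inRange(hsv, lo_on, hi_on)
    return cv2.bitwise_and(offline, online)
```

Requiring agreement between both models trades some recall for precision: the fixed range rejects implausible colors, while the adapted range tightens the mask around the current user's skin tone and lighting.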
