EYE RECOGNITION AND HAND GESTURE IDENTIFICATION FUSION SYSTEM
International Conference on Computer Science and Information Technology (ICCSIT), 3 Feb 2013, Nagpur, ISBN: 978-93-82208-60-0
Ms. S. A. Chhabria
Head of Information Technology Department
G. H. Raisoni College of Engineering, Nagpur, Maharashtra
sharda_chhabria@yahoo.co.in

Shrunkhala Satish Wankhede
Computer Science and Engineering
G. H. Raisoni College of Engineering, Nagpur, Maharashtra
shrunkhala.wankhede@gmail.com

Dr. R. V. Dharaskar
Director, Matoshri Pratishthan's Group of Institutions, Nanded, Maharashtra
rvdharaskar@yahoo.com
Abstract- In this paper, a human computer interface system using eye motion and hand gestures is introduced. Traditionally, the human computer interface has used the mouse and keyboard as input devices. This paper presents an interface between the computer and the human that is intended to replace conventional screen-pointing devices for use by the disabled. The paper presents a novel idea for controlling computer mouse cursor movement with human eyes and hand gestures, with the hand gesture used as a mechanism for interaction with the computer.
Keywords- Eye tracking, mouse movement, eye-blink detection, hand gesture recognition, hand tracking
1. Introduction
Recently there has been growing interest in developing natural interaction between humans and computers, and several studies of human-computer interaction in ubiquitous computing have been introduced [1]. The vision-based interface technique extracts motion information from an input video image without any high-cost equipment; the vision-based approach is therefore considered an effective technique for developing human computer interface systems. For vision-based human computer interaction, eye tracking is an active research topic, distinguished by the emergence of interactive applications. To develop a vision-based multimodal human computer interface system, both eye tracking and eye-gesture recognition are needed. Real-time eye input has been used most frequently by disabled users, who can use only their eyes for input.
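As a concrete illustration of the vision-based approach, the following minimal Python sketch detects eyes in webcam frames using OpenCV's stock Haar cascades. This is an assumed, generic pipeline for illustration only, not the implementation used in this paper.

# Minimal sketch of vision-based eye detection from a webcam, using
# OpenCV's bundled Haar cascades. Illustrates the general approach
# described above, not this paper's specific implementation.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]  # search for eyes inside the face only
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    cv2.imshow("eyes", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()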
Hand gestures are among the most common and natural communication media among human beings, while the keyboard and mouse are currently the main interfaces between man and computer. Hand gesture recognition research has gained much attention because of its applications in interactive human-machine interfaces and virtual environments. Most recent work on hand gesture interface techniques [1] is categorized as either glove-based [2, 3] or vision-based. There are many vision-based techniques for locating objects and recognizing gestures, such as model-based [4] and state-based [5] approaches, and recently there has been an increasing amount of gesture recognition research using vision-based methods. This paper introduces an eye and hand gesture recognition system to recognize ‘dynamic gestures’.
2. Literature Review
2.1. EYE RECOGNITION
In “Design and implementation of human computer interface tracking system based on multiple eye features” [6], batch mode is employed for human eye (iris) detection. The iris tracking technique is implemented on static images and works only when the direction of the iris is left, right, or center; if the position of the iris is up or down, it does not work. The system does not work in real time, and it is not equipped to handle blinks or closed eyes.
That work aimed to design and implement a human computer interface system that tracks the direction of the human eye. The particular motion and direction of the iris are used to drive the interface by positioning the mouse cursor accordingly. Iris localization is performed in batch mode, meaning that the frames are stored on a permanent storage device and retrieved one by one; each frame is processed to find the location of the iris, and the mouse cursor is placed accordingly. Such a system, detecting the iris position from still images, provides an alternative input modality for computer users with severe disabilities.
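A minimal sketch of such a batch-mode pipeline follows. The frames/ directory, the darkness threshold, and the moment-based centre estimate are illustrative assumptions, not the cited system's method.

# Sketch of a batch-mode pipeline: frames are read from permanent
# storage one by one, the iris is localized in each, and the cursor
# would be positioned accordingly.
import glob
import cv2

for path in sorted(glob.glob("frames/*.png")):  # assumed frame store
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    # Treat the darkest compact region of the eye image as the iris.
    _, mask = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        continue  # no dark region found (e.g. blink or closed eye)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    print(f"{path}: iris centre ~ ({cx:.0f}, {cy:.0f})")
    # A real system would map (cx, cy) to screen coordinates here.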
In “Statistical models of appearance for eye tracking and eye blink detection and measurement” [7, 8], a proof-of-concept Active Appearance Model (AAM) of the eye region is created to determine the parameters that measure the degree of eye blink. After the eye model is developed, a blink detector is proposed. The main advantage of the AAM technique is that a detailed description of the eye is obtained, not just its rough location; its main drawback is that it is designed to work for a single individual, and the blink parameters have to be identified in advance.
In “Simultaneous eye tracking and blink detection with interactive particle filters” [9], the eye position is first found using an eye recognition algorithm, and interactive particle filters are then used for eye tracking and blink detection, with auto-regression models describing the state transitions. A statistical active appearance model (AAM) is developed to track the eyes and detect blinking, designed to handle variations of head pose and gaze. The model parameters that encode the variations caused by blinking are analyzed and determined, and the initial model is further extended with a series of sub-models to enable independent modeling and tracking of the two eye regions. Several techniques for measuring and detecting eye blinks are proposed and evaluated, and the results of tests on different image databases are presented to validate each model.
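To make the particle-filter loop concrete, here is a toy predict/weight/resample sketch for a one-dimensional eye position. The random-walk dynamics and Gaussian likelihood are stand-in assumptions, not the models of [9].

# Toy particle filter illustrating the track/weight/resample loop used
# for eye tracking in [9]; dynamics and likelihood are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform(0, 640, size=N)  # candidate x-positions of the eye
weights = np.full(N, 1.0 / N)

def step(measurement, particles, motion_std=5.0, meas_std=10.0):
    # Predict: diffuse particles under a simple random-walk motion model.
    particles = particles + rng.normal(0, motion_std, size=particles.shape)
    # Update: weight each particle by the likelihood of the measurement.
    weights = np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Simulated eye positions drifting rightwards across the image.
for t, z in enumerate([100, 105, 112, 120, 131]):
    particles = step(z, particles)
    print(f"t={t}: estimated x = {particles.mean():.1f}")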
In “Communication via eye blinks: detection and duration analysis in real-time” [10], an initial eye blink is used to locate the eyes, after which the algorithm detects eye blinks; the “BlinkLink” prototype can be used to interact with the device. The eyes are tracked and monitored continuously simply by considering the motion information between two consecutive frames and determining whether that motion is caused by a blink. This is a real-time system, but its disadvantage is that it can handle only long blinks; short blinks are simply ignored.
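A minimal frame-differencing sketch of this idea follows; the fixed eye region and the motion threshold are illustrative assumptions.

# Sketch of blink detection from motion between consecutive frames, in
# the spirit of the BlinkLink approach described above.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
if not ok:
    raise RuntimeError("no camera frame")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
eye_roi = (slice(100, 160), slice(200, 300))  # assumed eye region (rows, cols)

for _ in range(300):  # examine ~300 frame pairs
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Motion = absolute difference between consecutive frames in the region.
    diff = cv2.absdiff(gray[eye_roi], prev[eye_roi])
    if np.mean(diff) > 15:  # assumed threshold separating blinks from noise
        print("possible blink")
    prev = gray
cap.release()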
“MouseField: A Simple and Versatile Input Device for Ubiquitous Computing” [11] describes a human computer interaction system that uses an RFID reader and motion sensors. More generally, vision-based face and hand motion tracking and gesture recognition is an attractive input mode for better human-computer interaction, and human gesture information has been employed in games, virtual reality, and other applications. Such gesture information is classified into static gestures, which use spatial information only, and dynamic gestures, which use spatial and temporal information together; since a dynamic gesture can present various expressions, it is considered a natural presentation technique. Motion information can be acquired through either a device-based or a vision-based interface. The device-based technique obtains motion information from motion capture devices and markers, whereas the vision-based technique extracts motion information from input video images without any high-cost equipment. Thus, the vision-based approach is considered an effective technique for developing human computer interface systems, and for vision-based human computer interaction, eye and hand tracking is an active research topic, distinguished by the emergence of interactive applications.
Although various interaction technologies for handling information in the present computing environment have been proposed, some techniques are too simple for performing rich human computer interaction, while others require special, expensive equipment to be installed everywhere and cannot quickly become available in our daily environment. MouseField [11] addresses this with a new simple and versatile input device that enables users to control various information appliances easily and without great expense. MouseField consists of an identification (ID) recognizer and motion sensors that can detect an object and its movement once the object is placed on the device, and the system can then interpret the user's actions as commands to control the flow of information. The result is a robust and versatile input device, combining an ID reader and motion sensing in one package, that can be used at almost any place for controlling information appliances.
2.2. HAND GESTURE IDENTIFICATION
Glove-based gesture interfaces require the user to wear a device and generally to carry a load of cables connecting the device to a computer.
Huang et al. [12] use a 3D Hopfield neural network to develop a Taiwanese Sign Language (TSL) recognition system that recognizes 15 different gestures. Their sign language recognition system consists of three modules: model-based hand tracking, feature extraction, and gesture recognition using a 3D Hopfield neural network (HNN).
Davis and Shah [13] propose a model-based approach that uses a finite state machine to model four qualitatively distinct phases of a generic gesture. Hand shapes are described by a list of vectors and then matched against stored vector models. The seven gestures are representative of the actions Left, Right, Up, Down, Grab, Rotate, and Stop.
Darrell and Pentland [14] propose a space-time gesture recognition method: human gestures are learned, tracked, and recognized using a view-based approach that models both object and behavior. Signs are represented using sets of view models and are then matched to stored gesture patterns using dynamic time warping. Starner et al. [15] describe an extensible system that uses one color camera to track hands in real time and interpret American Sign Language (ASL). They use hidden Markov models (HMMs) to recognize full sentences and demonstrate the feasibility of recognizing complicated series of gestures. Instead of using an instrumented glove, they take a vision-based approach to capture hand shape, orientation, and trajectory, selecting the 3-D input data as the feature vectors for the HMM input. Other HMM-based hand gesture recognition systems [16, 17] have also been developed. Liang et al. [18] develop gesture recognition for TSL using a Data-Glove to capture the flexion of ten finger joints, the roll of the palm, and other 3-D motion information.
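In the spirit of the HMM-based recognizers above, the following sketch trains one GaussianHMM per gesture class with the hmmlearn library and classifies a new trajectory by maximum log-likelihood. The synthetic trajectories and the library choice are assumptions; the cited systems used their own features and implementations.

# Minimal sketch of HMM-based gesture classification: one model per
# gesture class, label a new sequence by maximum log-likelihood.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def make_sequences(direction, n=20, length=30):
    """Noisy straight-line hand trajectories moving in `direction`."""
    seqs = []
    for _ in range(n):
        steps = np.array(direction) + rng.normal(0, 0.3, size=(length, 2))
        seqs.append(np.cumsum(steps, axis=0))
    return seqs

classes = {"swipe_right": (1, 0), "swipe_up": (0, 1)}
models = {}
for name, direction in classes.items():
    seqs = make_sequences(direction)
    X = np.vstack(seqs)                 # concatenated training sequences
    lengths = [len(s) for s in seqs]    # sequence boundaries for hmmlearn
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    m.fit(X, lengths)
    models[name] = m

test = make_sequences((1, 0), n=1)[0]  # an unseen rightward swipe
scores = {name: m.score(test) for name, m in models.items()}
print(max(scores, key=scores.get))     # expected: swipe_right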
Cui and Weng [19] develop a non-HMM-based system that can recognize 28 different gestures in front of complex backgrounds. Nishikawa et al. [20] propose a new technique for the description and recognition of human gestures, based on the rate of change of gesture motion direction estimated using optical flow from monocular motion images.
Nagaya et al. [21] propose a method to recognize gestures using an approximate shape of the gesture trajectory in a pattern space defined by the inner product between patterns on continuous frame images. Heap and Hogg [22] present a method for tracking a hand using a deformable model that also works in the presence of complex backgrounds; the deformable model describes one hand posture and certain variations of it, and is not aimed at recognizing different postures. Zhu and Yuille [23] develop a statistical framework using principal component analysis and stochastic shape grammars to represent and recognize the shapes of animated objects, called the flexible object recognition and modeling system (FORMS). Lockton et al. [24] propose a real-time gesture recognition system that can recognize 46 ASL letter-spelling alphabet signs and digits. The gestures recognized by [25] are ‘static gestures’, in which the hand does not move; differing from [25], this paper introduces a hand gesture recognition system to recognize ‘dynamic gestures’.
3. Goals of the system:
1. Facilitate the handicapped in using the computer.
2. Control the mouse pointer through eye and hand gestures.
3. Provide real-time eye tracking and hand gesture estimation for human computer interaction.
4. Objectives of the system:
1. Easy interaction with the computer without using a mouse.
2. The limitation of a stationary head is eliminated.
3. The mouse pointer will move on screen to wherever the user is looking, and clicks will be performed by blinking (see the sketch below).
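As a minimal sketch of this objective, the following shows how estimated gaze coordinates and blink events could drive the cursor using the pyautogui library; the on_gaze and on_blink hooks are hypothetical names, since the paper does not specify an implementation.

# Sketch: map normalized gaze to screen coordinates, click on blink.
import pyautogui

SCREEN_W, SCREEN_H = pyautogui.size()

def on_gaze(gx, gy):
    """gx, gy: normalized gaze in [0, 1] from the eye tracker."""
    pyautogui.moveTo(gx * SCREEN_W, gy * SCREEN_H)

def on_blink():
    pyautogui.click()  # a blink acts as a left click

on_gaze(0.5, 0.5)  # e.g. looking at the screen centre centres the cursor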
5. Proposed System
The proposed system controls the mouse cursor using an eye and hand gesture fusion technique. Chess playing is an application of this system: each command is issued by combining an eye gesture with a hand gesture, as listed below.
VARIOUS GESTURES
Each command below is issued by the fusion of one eye gesture with one hand gesture; the original gesture images are not reproduced here.
- Move the Knight right two steps and then up
- Move the Knight right two steps and then down
- Move the Knight left two steps and then up
- Move the Knight left two steps and then down
- Move the Knight up two steps and then right
- Move the Knight up two steps and then left
- Move the Knight down two steps and then left
- Move the Knight down two steps and then right
- Move the Pawn upward
- Move the Pawn diagonally right to kill
- Move the Pawn diagonally left to kill
- Move the Bishop diagonally leftward-up
- Move the Bishop diagonally rightward-up
- Move the Bishop diagonally leftward-down
- Move the Bishop diagonally rightward-down
- Move the Rook left
- Move the Rook right
- Move the Rook up
- Move the Rook down
- Move the Queen right
- Move the Queen left
- Move the Queen up
- Move the Queen down
- Move the King right
- Move the King left
- Move the King up
- Move the King down
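One way to encode this fusion table in software is a lookup keyed by the recognized (eye gesture, hand gesture) pair, sketched below. The gesture labels are placeholders, since the paper specifies the pairs with images that are not reproduced here.

# Hypothetical encoding of the fusion command table above: a lookup
# keyed by the recognized (eye_gesture, hand_gesture) pair.
CHESS_COMMANDS = {
    ("eye_gesture_1", "hand_gesture_1"): "knight right two steps, then up",
    ("eye_gesture_2", "hand_gesture_2"): "knight right two steps, then down",
    ("eye_gesture_3", "hand_gesture_3"): "pawn upward",
    # ... one entry per command in the list above
}

def fuse(eye_gesture, hand_gesture):
    """Map a recognized eye+hand gesture pair to a chess command."""
    return CHESS_COMMANDS.get((eye_gesture, hand_gesture))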
6. CONCLUSION
This paper focused on the development of PC control using human eyes and hand gestures: the mouse pointer is operated through eye motion and hand gestures. The most unique aspect of this system is that it requires no wearable attachments, which makes the interaction more efficient and enjoyable. A user interface is the system by which humans interact with a computer.
7. REFERENCES
[1] Sidra Naveed, Bushra Sikander, and Malik Sikander Hayat Khiyal, “Eye Tracking System with Blink Detection”, IEEE, 2012.
[2] T. Baudel, M. Baudouin-Lafon, Charade: remote control of objects using free-hand gestures, Comm. ACM 36 (7) (2003) 28-35.
[3] D.J. Sturman, D. Zeltzer, A survey of glove-based input, IEEE Computer Graphics and Applications 14 (2004) 30-39.
[4] T. Takahashi, F. Kishino, A hand gesture recognition method and its application, Systems and Computers in Japan 23 (3) (2002) 38-48.
[5] A.F. Bobick, A.D. Wilson, A state-based technique for the summarization and recognition of gesture, Proceedings Fifth International Conference on Computer Vision (2005) 382-388.
[6] Craig Hennessey, Jacob Fiset, “Long Range Eye Tracking: Bringing Eye Tracking into the Living Room”, IEEE, 2012.
[7] Bacivarov, I., Ionita, M., Corcoran, P., “Statistical models of appearance for eye tracking and eye-blink detection and measurement”, IEEE Transactions, August 2010.
[8] Ioana Bacivarov, Mircea Ionita, Peter Corcoran, “Statistical models of appearance for eye tracking and eye blink detection and measurement”, IEEE Transactions on Consumer Electronics, Vol. 54, No. 3, pp. 1312-1320, August 2009.
[9] Mohan M. Trivedi, “Simultaneous eye tracking and blink detection with interactive particle filters”, ACM, January 2010.
[10] Grauman, K., Betke, M., Gips, J., Bradski, G.R., “Communication via eye blinks: detection and duration analysis in real-time”, IEEE Conference Publications, 2009.
[11] Toshiyuki Masui, Koji Tsukada, and Itiro Siio, “MouseField: A Simple and Versatile Input Device for Ubiquitous Computing”, Vol. 3205, Springer (2011), pp. 319-328.
[12] C.L. Huang, W.Y. Huang, Sign language recognition using model-based tracking and a 3D Hopfield neural network, Machine Vision and Applications 10 (2008) 292-307.
[13] J. Davis, M. Shah, Visual gesture recognition, IEE Proceedings - Vision, Image and Signal Processing 141 (2) (2004).
[14] Darrell, T., Pentland, A., Recognition of space-time gestures using a distributed representation, MIT Media Laboratory Perceptual Computing Group Technical Report No. 197, 2002.
[15] T. Starner, A. Pentland, Visual recognition of American Sign Language using hidden Markov models, Proceedings of the International Workshop on Automatic Face- and Gesture-Recognition, Zurich, Switzerland, 2005.
[16] L.W. Campbell, D.A. Becker, A. Azarbayejani, A.F. Bobick, A. Pentland, Invariant features for 3-D gesture recognition, Proceedings IEEE Second International Workshop on Automatic Face and Gesture Recognition, 2006.
[17] J. Schlenzig, E. Hunter, R. Jain, Recursive identification of gesture input using hidden Markov models, Proceedings Second Annual Conference on Applications of Computer Vision (2004) 187-194.
[18] R.H. Liang, M. Ouhyoung, A real-time continuous gesture recognition system for sign language, Proceedings IEEE Second International Conference on Automatic Face and Gesture Recognition, 2008, pp. 558-565.
[19] Y. Cui, J.J. Weng, Hand sign recognition from intensity image sequences with complex backgrounds, Proceedings IEEE Second International Conference on Automatic Face and Gesture Recognition, 1996.
[20] A. Ohknishi, A. Nishikawa, Curvature-based segmentation and recognition of hand gestures, Proceedings Annual Conference on Robotics Society of Japan, 2007, pp. 401-407.
[21] S. Nagaya, S. Seki, R. Oka, A theoretical consideration of pattern space trajectory for gesture spotting recognition, Proceedings IEEE Second International Workshop on Automatic Face and Gesture Recognition, 2006.
[22] T. Heap, D. Hogg, Towards 3D hand tracking using a deformable model, Proceedings IEEE Second International Conference on Automatic Face and Gesture Recognition, 2006.
[23] S.C. Zhu, A.L. Yuille, FORMS: a flexible object recognition and modelling system, Proceedings Fifth International Conference on Computer Vision (2005) 465-472.
[24] R. Lockton, A.W. Fitzgibbon, Real-time gesture recognition using deterministic boosting, Proceedings of the British Machine Vision Conference (2002).
[8] Akhil Gupta, Akash Rathi, Dr. Y. Radhika, “Hands-Free PC Control: Controlling of Mouse Cursor using Eye Movement”, International Journal of Scientific and Research Publications, Volume 2, Issue 4, April 2012.
[9] Jixu Chen, Yan Tong, Wayne Gray, Qiang Ji, “A Robust 3D Eye Gaze Tracking System”, IEEE, 2011.
[10] Yingxi Chen, “Design and evaluation of a human-computer interface based on electro-oculography”, 2003, unpublished. URL: vorlon.case.edu/~wsn/theses/yingxichen_thesis.pdf
[11] Arantxa Villanueva, Rafael Cabeza, Sonia Porta, “Eye Tracking System Model With Easy Calibration”, IEEE, 2011.
[12] Arie E. Kaufman, Amit Bandopadhay, and Bernard D. Shaviv, “An Eye Tracking Computer User Interface”, IEEE, 2011.
[13] Takehiko Ohno, Naoki Mukawa, Shinjiro Kawato, “Just Blink Your Eyes: A Head-Free Gaze Tracking System”, IEEE, 2011.
[14] Shazia Azam, Aihab Khan, M.S.H. Khiyal, “Design and implementation of human computer interface tracking system based on multiple eye features”, JATIT Journal of Theoretical and Applied Information Technology, Vol. 9, No. 2, Nov 2009.
[15] Margrit Betke, James Gips, Peter Fleming, “The Camera Mouse: Visual Tracking of Body Features to Provide Computer Access for People With Severe Disabilities”, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 10, No. 1, March 2008.
[16] H.T. Nguyen, “Occlusion robust adaptive template tracking”, Computer Vision, ICCV 2001 Proceedings.
[17] M. Betke, “The Camera Mouse: Visual Tracking of Body Features to Provide Computer Access for People With Severe Disabilities”, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 10, No. 1, March 2002.
[18] Abdul Wahid Mohamed, “Control of Mouse Movement Using Human Facial Expressions”, 57, Ramakrishna Road, Colombo 06, Sri Lanka.
[19] Arslan Qamar Malik, “Retina Based Mouse Control (RBMC)”, World Academy of Science, Engineering and Technology 31, 2007.
[20] G. Bradski, “Computer Vision Face Tracking for Use in a Perceptual User Interface”, Intel Technology Journal, 2nd Quarter 1998.
[21] A. Giachetti, Matching techniques to compute image motion, Image and Vision Computing 18 (2000), pp. 247-260.