
Human-Computer Interaction System: Using A Method To Improve Posture Detection Technique

Abstract

A natural and friendly interface is critical for the development of service robots. A gesture-based interface offers a way to enable untrained users to interact with robots more easily and efficiently. Robots are, or soon will be, used in such critical domains as search and rescue, military battle, mine and bomb detection, scientific exploration, law enforcement, and hospital care. Such robots must coordinate their behaviors with the requirements and expectations of human team members; they are more than mere tools but rather quasi-team members whose tasks have to be integrated with those of humans. In this sense, the goal of this paper is to present a systematic review of existing work on posture recognition techniques applied in ubiquitous computing, and to use an efficient bio-inspired posture recognition algorithm in our proposed scheme. We present a scheme that reduces the size of the database used to store the different human postures that the robot interprets as commands. The picture frame is divided into different scan lines, and the pixel color values under these scan lines are examined to estimate the user's posture. The robot may use this as a command and act accordingly.
Ms. Sushma D. Ghode, M.E. (WCC), Dept. of CSE, GHRCE, Nagpur (sushdg@gmail.com)
Ms. S. A. Chhabria, AP, Dept. of CSE, GHRCE, Nagpur (sachhabria@ghrce.edu.in)
Dr. R. V. Dharaskar, Professor and Head (CSE), GHRCE, Nagpur (rvdharaskar@rediffmail.com)
Key Words: robotics, posture library, Hausdorff
Distance
1. Introduction
Human-robot symbiotic systems have been
studied extensively in recent years, considering that
robots will play an important role in the future
welfare society [Ueno, 2001]. The use of intelligent
robots encourages the view of the machine as a
partner in communication rather than as a tool [1].
In the near future, robots will interact closely with a
group of humans in their everyday environment
in the field of entertainment, recreation, health-care,
nursing, etc. In human-human interaction, multiple communication modalities such as speech, gestures and body movements are frequently used. The standard input methods, such as text input via the keyboard and pointer/location information from a mouse, do not provide a natural, intuitive means of communication between humans and robots.
Furthermore, for intuitive gesture-based interaction between human and robot, the robot should understand the meaning of a gesture with respect to society and culture. The ability to understand hand gestures will improve the naturalness and efficiency of human interaction with the robot, and allow the user to communicate in complex tasks without tedious sets of detailed instructions. Such an interactive system uses the robot's eye cameras to identify humans and recognize their gestures based on face and hand poses.
Vision-based face recognition systems [10] have three major components: image processing, i.e. extracting important clues (hand pose and position); tracking of facial features (the relative position or motion of face and hand poses); and posture recognition. Vision-based face recognition systems vary along a number of dimensions.
Human posture recognition is gaining increasing attention due to its promising applications in the areas of personal health care, environmental awareness and intelligent visual human-machine interfaces (such as video game systems and human-robot interaction), to name a few. Based on commercially available image sensors and powerful personal computers, impressive research work has been reported for a variety of applications [1]. Works on human gait recognition, standing posture recognition with different arm poses [2] and dynamic gestures such as sign language recognition have been presented. In general, those approaches first detect moving objects by analysing the video stream, then extract human silhouettes using a background subtraction technique. Blob metrics are represented in multiple appearance models, and finally posture profiling is conducted based on frame-by-frame posture classification algorithms. Due to their complexity, these algorithms are implemented on powerful computers, even when recognizing only a small subset of human body postures, such as standing, bending, sitting and lying [3]. This limits the use of these algorithms in real-life applications.
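The background-subtraction step in this conventional pipeline can be sketched as follows. This is a minimal pure-Python illustration on toy intensity grids, not the implementation used in the cited works; the function and variable names are ours.

```python
def extract_silhouette(frame, background, threshold=30):
    """Mark pixels whose intensity differs from the background model
    by more than `threshold` as foreground (part of the silhouette)."""
    h, w = len(frame), len(frame[0])
    return [[1 if abs(frame[y][x] - background[y][x]) > threshold else 0
             for x in range(w)]
            for y in range(h)]

# Toy 3x3 example: a static background and a frame where the centre
# pixel has changed (e.g. a moving hand).
background = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
frame      = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
mask = extract_silhouette(frame, background)
print(mask)  # [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```

The binary mask is what the blob-metric and classification stages described above would consume frame by frame.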
On the other hand, small and lightweight wireless platforms, such as ultra-mobile PCs or smart cellular phones, are becoming a ubiquitous computation platform [9]. Unfortunately, these devices are still unable to perform power- and computation-hungry object recognition tasks. In fact, there is a growing gap between the latest computer-based vision algorithms and what is actually implementable in low-complexity hardware.
The remainder of the paper is structured as follows. Section 2 presents an analysis of the most relevant identified proposals. Section 3 discusses the proposed plan for the system and, finally, Section 4 presents concluding remarks and future directions.
2. Analysis Of Selected Proposals
In the paper by Shoushun Chen et al. [4], the architecture of the proposed system is illustrated in Fig. 1. It includes an image sensor working in temporal-difference mode, a hierarchical edge feature extraction unit, and a classifier with a set of library postures.
Fig. 1. Block diagram of the system: temporal difference imager, hierarchical edge extraction, and classifier with posture library.
The temporal difference image sensor compares two consecutive image frames and only outputs the addresses of those pixels whose illumination variance is larger than a certain threshold. If the scene illumination and object reflectance are constant, then changes in scene reflectance result only from object or viewer movement. The background information is therefore naturally filtered out, since the received pixels come only from the active object of interest. This offers great computational efficiency compared to the conventional image sensors used in other systems. Given the addresses of the events, an edge feature extraction unit reorganizes the contour of the objects into vectorial line segments. The extracted line segments are fed to a modified Hausdorff distance scheme that measures their similarity with those of a set of library objects. The proposed classifier is able to perform size- and position-invariant recognition.
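The event-generation behaviour of the temporal difference imager can be sketched as follows. This is a software illustration only (the actual sensor does this in hardware at the pixel level); the function name and threshold value are our own choices.

```python
def temporal_difference_events(prev_frame, curr_frame, threshold=20):
    """Compare two consecutive frames and emit only the (x, y) addresses
    of pixels whose illumination changed by more than `threshold`.
    Static background pixels produce no events."""
    events = []
    for y, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for x, (p, c) in enumerate(zip(prev_row, curr_row)):
            if abs(c - p) > threshold:
                events.append((x, y))
    return events

# Only the pixel that changed between frames produces an address event.
prev = [[0, 0, 0], [0, 0, 0]]
curr = [[0, 0, 0], [0, 255, 0]]
print(temporal_difference_events(prev, curr))  # [(1, 1)]
```

The sparse list of addresses, rather than full frames, is what makes the downstream edge extraction computationally cheap.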
2.1. Size And Position Invariant
Recognition Algorithm
The proposed recognition algorithm works in two phases. First, the received address events are stored in memory and line segments are extracted. The body shape extraction is achieved in three stages. In the first stage a binary image is built, where a "1" is set by decoding the address of each spike received from the image sensor. The image is then scanned in four directions, namely horizontal, vertical, 45° and 135°. A line segment can be identified by looking at the output of the scanner: a transition from "0" to "1" denotes the starting point of a line segment, while a transition from "1" to "0" indicates an ending point.
However, to suppress the internal lines in a thick object, a special extraction rule is required. We propose to filter the redundant lines with the following rule: "Ignore a line if both its starting point and its ending point are included in another line."
One can notice that high encoding efficiency is obtained by representing complex objects with a limited number of pairs of dots. In the third stage, as shown in Fig. 1, two rounds of sub-sampling are performed on the binary image, each followed by a new scanning procedure. Interestingly, the sub-sampling is implemented by changing the scanner's increment instead of physically manipulating the memory content. For instance, to sub-sample by a factor of two, the column and row address counters simply need to increase by 2 each time.
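The horizontal scan direction of this procedure can be sketched as follows: a 0→1 transition opens a segment, a 1→0 transition closes it, the `step` parameter mimics sub-sampling by changing the scanner's increment, and a second pass applies the internal-line suppression rule. A minimal sketch of one of the four scan directions, with names of our own choosing, not the authors' implementation.

```python
def horizontal_segments(binary, step=1):
    """Scan each row of a binary image; a 0->1 transition starts a
    segment and a 1->0 transition ends it.  `step` > 1 sub-samples by
    changing the scanner's increment instead of touching the memory."""
    segments = []
    for y in range(0, len(binary), step):
        row, start = binary[y], None
        for x in range(0, len(row), step):
            if row[x] == 1 and start is None:
                start = x                       # starting point found
            elif row[x] == 0 and start is not None:
                segments.append((y, start, x - step))  # ending point
                start = None
        if start is not None:                   # segment runs to the edge
            segments.append((y, start, len(row) - 1))
    return segments

def drop_contained(segments):
    """Extraction rule: ignore a line if both its starting point and
    ending point are included in another line on the same row."""
    return [s for s in segments
            if not any(t is not s and t[0] == s[0]
                       and t[1] <= s[1] and s[2] <= t[2]
                       for t in segments)]

img = [[0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
print(horizontal_segments(img))  # [(0, 1, 3)]
```

Running the same function with `step=2` scans every other row and column, which is exactly the counter-increment sub-sampling described above.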
2.2. Simplified Line Segment Hausdorff
Distance
In computer vision, a large number of object recognition algorithms based on the Line Segment Hausdorff Distance have been reported [2]. For use in our application, we define the distance between two line segments, from Line A to Line B as shown in Fig. 2, as:
D(A → B) = Pθ × (d∥1 + d∥2 + LA · d⊥)    (1)
where Pθ is an intersection-angle penalty coefficient, d∥1 and d∥2 are the parallel distances between the two lines' terminals, LA is the length of line A, and d⊥ is the perpendicular distance.
Fig. 2. Displacement measurement of two line
segments
The intersection penalty coefficient also plays an important role. We give it a value of 1 for two parallel lines and ∞ for two perpendicular lines; for two lines intersecting at an angle of 45° or 135°, an intermediate penalty is assigned.
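Equation (1) can be evaluated as follows for the simple case of two horizontal (parallel) segments, where Pθ = 1. The intermediate-angle mapping in `penalty` is purely illustrative, and the segment encoding `(x_start, x_end, y)` is our own assumption, not the authors' representation.

```python
import math

def penalty(theta_deg):
    """Intersection-angle penalty P_theta: 1 for parallel lines and
    infinity for perpendicular ones (intermediate values illustrative)."""
    if theta_deg == 0:
        return 1.0
    if theta_deg == 90:
        return math.inf
    return 1.0 / math.cos(math.radians(theta_deg))

def line_distance(a, b):
    """D(A -> B) = P_theta * (d_par1 + d_par2 + L_A * d_perp) for two
    horizontal segments given as (x_start, x_end, y)."""
    ax1, ax2, ay = a
    bx1, bx2, by = b
    d_par1 = abs(ax1 - bx1)    # parallel distance between start terminals
    d_par2 = abs(ax2 - bx2)    # parallel distance between end terminals
    length_a = abs(ax2 - ax1)  # L_A
    d_perp = abs(ay - by)      # perpendicular distance between the lines
    return penalty(0) * (d_par1 + d_par2 + length_a * d_perp)

print(line_distance((0, 4, 0), (1, 4, 2)))  # 1 * (1 + 0 + 4*2) = 9.0
```

Summing this distance over all segment pairs between an input and a library posture yields the similarity score used by the classifier.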
3. Proposed Plan Of The System
In the above case we would have to save each and every posture in the database. Instead of managing such a large database, we divide the frame into different scan lines and examine only the pixels that fall under a particular scan line. This reduces the size of the database to some extent, because we store only the coordinates of the scan lines. Initially we examine whether this plan works for the color of the arm (i.e., the color of the shirt); if it is successful, we will implement it for the entire hand posture.
Fig. 3. Initial position (scan lines L, C and R)
As shown in Fig. 3, the system scans the pixel colors that fall under the lines indicated as L for left-hand movements and R for right-hand movements respectively, and takes the corresponding action.
3.1. Processing of Scan Lines
First we have to capture the live image using a camera. We then have the camera view, but this view is handled by the OS, and the camera driver cannot operate on it directly for processing. So we need to convert the live video into a processable format. For this, each frame is converted into a bitmap image and the entire image is stored in a single variable; any further processing on the image is performed on this variable. From the bitmap image we extract a single pixel and find its RGB value, storing each channel in a separate variable (e.g., the red value in variable R, and so on). We then apply a function that picks a pixel and compares other pixels with it; if a new pixel's value is close to the current pixel's, it is ignored, otherwise the value is saved in an array used to draw the scan lines. As shown in Fig. 4, the image is divided into scan lines and the area under each scan line is examined to find the hand posture. If the left scan line is crossed, the system recognizes that the left hand is moving. The same procedure is used for the other commands.
Fig.4. Screenshot of processing still image
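The scan-line check described above can be sketched as follows. The mask encoding (1 = pixel matching the arm/shirt color) and the command labels are illustrative assumptions, not the exact output of our system.

```python
def scan_line_triggered(mask, x_column):
    """True if the foreground mask (1 = arm-coloured pixel) crosses
    the vertical scan line at column `x_column`."""
    return any(row[x_column] == 1 for row in mask)

def classify_posture(mask, left_x, right_x):
    """Map scan-line crossings to commands: the L line cut means a
    left-hand movement, the R line cut means a right-hand movement."""
    left = scan_line_triggered(mask, left_x)
    right = scan_line_triggered(mask, right_x)
    if left and right:
        return "both hands"
    if left:
        return "left hand"
    if right:
        return "right hand"
    return "no movement"

mask = [[0, 0, 0, 0, 0],
        [1, 0, 0, 0, 0],   # arm-coloured pixel under the left scan line
        [0, 0, 0, 0, 0]]
print(classify_posture(mask, left_x=0, right_x=4))  # left hand
```

Because only the pixels under the stored scan-line coordinates are inspected, the database needs to hold those coordinates rather than full posture images.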
Even for a small image size, this can reduce the computation by 10 to 100 times compared to classic convolution techniques. And this is a conservative number, because it assumes that size and position invariance is already computed by some other means, whereas they are an integral part of our approach.
4. CONCLUSION
This paper reports a size and position invariant
human posture recognition algorithm. The image is
first acquired using an address event temporal
difference image sensor and followed by a bio-
inspired hierarchical line segment extraction unit.
A simplified scan-line algorithm is used to get the command from the user. The proposed algorithm achieves an 88% average recognition rate while offering a 10 to 100× computational saving compared to the conventional approach.
So in our proposed system we are going to use the same algorithm, which may be further improved by using dynamic images from a CCTV camera: instead of storing all posture images in a database, the same actions are performed on real-time images as explained in this algorithm.
5. References
[1] N. Kubota, Y. Tomioka, T. Yamaguchi, "Gesture Recognition for Partner Robot based on Computational Intelligence", IEEE, 2008.
[2] S. Chen, F. Folowosele, D. Kim, "A Size and Position Invariant Event-based Human Posture Recognition Algorithm", IEEE, 2008.
[3] R.S.A. Brinkworth, D.C. O'Carroll, "Applications for Bio-Inspired Visual Processing Algorithms", IEEE, 2008.
[4] S. Chen, B. Martini, E. Culurciello, "A Bio-inspired Event-based Size and Position Invariant Human Posture Recognition Algorithm", IEEE, 2009.
[5] Liu Zhe, Li Xiao-jiu, "Image Abnormal Region Recognition with Fuzzy Clustering Based on Multi-characteristic Variable Window", First International Workshop on Education Technology and Computer Science, 2009.
[6] A.H. Muhamad Amin, R.A. Raja Mahmood, A.I. Khan, "Analysis of Pattern Recognition Algorithms using Associative Memory Approach: A Comparative Study between the Hopfield Network and Distributed Hierarchical Graph Neuron (DHGN)", IEEE 8th International Conference on Computer and Information Technology Workshops.
[7] Tongwen Wang, Lin Guan, Yao Zhang, "A Modified Pattern Recognition and its Application in Power System Transient Stability Assessment", IEEE, 2008.
[8] B.A. Kitchenham, "Procedures for Performing Systematic Reviews", Technical Report TR/SE-0401, Keele University, and Technical Report 0400011T.1, NICTA, 2004.
[9] Gyu Myoung Lee, Jun Kyun Choi, Noel Crespi, "Object Identification for Ubiquitous Networking", ICACT, 2009.
[10] "Human Robot Interaction", Wikipedia, the free encyclopedia.