European Journal of Scientific Research
ISSN 1450-216X Vol.33 No.3 (2009), pp.480-501
© EuroJournals Publishing, Inc. 2009
http://www.eurojournals.com/ejsr.htm
EMG Signal Classification for Human Computer
Interaction: A Review
Md. R. Ahsan
Electrical & Computer Engineering Department
Kulliyyah of Engineering, International Islamic University
Kuala Lumpur, Malaysia
E-mail: rezwanul.ahsan@yahoo.com
Tel: +60-36196-4000, Ext: 2841
Muhammad I. Ibrahimy
Electrical & Computer Engineering Department
Kulliyyah of Engineering, International Islamic University
Kuala Lumpur, Malaysia
Othman O. Khalifa
Electrical & Computer Engineering Department
Kulliyyah of Engineering, International Islamic University
Kuala Lumpur, Malaysia
Abstract
With the ever increasing role of computerized machines in society, Human Computer Interaction (HCI) systems have become an increasingly important part of our daily lives. HCI determines the effective utilization of the available information flow of computing, communication, and display technologies. In recent years, there has been tremendous interest in introducing intuitive interfaces that can recognize the user's body movements and translate them into machine commands. For the neural linkage with computers, various biomedical signals (biosignals) can be used, which can be acquired from a specialized tissue, organ, or cell system like the nervous system. Examples include the Electroencephalogram (EEG), Electrooculogram (EOG), and Electromyogram (EMG). Such approaches are extremely valuable to physically disabled persons. Many attempts have been made to use EMG signals from gestures for developing HCI. EMG signal processing and controller work is currently proceeding in various directions, including the development of continuous EMG signal classification for graphical controllers, which enables the physically disabled to use word processing programs, other personal computer software, and the internet. It also enables manipulation of robotic devices, prosthetic limbs, I/O for virtual reality games, physical exercise equipment, etc. Most of this development is based on pattern recognition using neural networks. The EMG controller can be programmed to perform gesture recognition based on signal analysis of the action potentials of groups of muscles. This review paper discusses the various methodologies and algorithms used for EMG signal classification for the purpose of interpreting the EMG signal into computer commands.
Keywords: HCI, EMG, Neural Network, Hidden Markov Model, Bayes Network
1. Introduction
Recently, a significant amount of effort has been dedicated in the field of HCI to
the development of user-friendly interfaces employing voice, vision, gesture, and other innovative I/O
channels. In the past decade, studies have been widely pursued, aimed at overcoming the limitations of
the conventional HCI tools such as keyboard, mouse, joystick, etc. One of the most challenging
approaches in this research field is to link a human's neural signals with computers by exploiting the
electrical nature of the human nervous system. More recently, there has been increasing interest in
exploiting bioelectric signals such as EMGs, EEGs and EOGs for the purpose of devising new types of
HCI. As the silver generation has been increasing exponentially, the social demand for quality of life (QOL) has also been increasing proportionally. To improve the QOL of the disabled and the elderly, robotics researchers have been trying to incorporate robotic techniques into rehabilitation systems. However, since a robotic system needs to guarantee both safety and reliability, many recent studies have proposed human-in-the-loop control systems that take the user's intention into account. Since the human's information system is different from the machinery system, HCI is regarded as one of the key technologies in the human-in-the-loop control system (Moon et. al., 2004). To implement an HCI, the acquired and processed signals need to be classified, which is the most difficult part of the system. The choice of classification methodology depends on the application field. In the field of HCI, studies show that most classifiers are neural network based, because neural networks have been widely used by many researchers in the past and have numerous advantages in the processing and classification of biosignals.
This paper first discusses several types of non-biosignal based devices/systems and their applications, along with some advantages and weak points. The paper then proceeds with the different kinds of methodologies used for EMG signal classification in the field of HCI. Finally, a summary table is presented with brief properties of the classifiers discussed in this paper.
2. Techniques used in HCI
2.1. Non-biosignal Approach
Several attempts have been made, besides the use of biomedical signals, to implement a convenient HCI solution for disabled persons. These devices are based on motor skills and are still available for use. The "Tonguepoint", based on the IBM Trackpoint, is a pressure-sensitive isometric joystick operated by the user's tongue. The joystick provides cursor control, while two switches (a bite switch and a manual switch located outside of the mouth) allow the user to make left and right button selections (Salem et. al., 1997). Another commercially available device, the "Headmouse" (Website: http://orion.com/access/headmouse/index.htm), is a pointing device that transforms head movement into cursor movement on the screen. This device uses infrared distance measurement to track the head motion. The wireless sensing technology employs infrared light to track a small disposable target (reflective accessory) that is placed on the user's forehead or glasses. The mouse pointer movement on the screen is then proportional to the user's head movement, which can also be used to trigger a switch through which the user controls various system functions. A specific problem with head mouse systems is the required motor skill. The mentioned approaches have potential disadvantages for
some categories of users. For example, a user with cerebral palsy may not have the fine motor abilities
in the tongue to operate the Tonguepoint device. Similarly, a user with spinal vertebrae fusion may not
be able to turn his or her head, so the Headmouse would be of little benefit (Barreto et. al., 2000).
Patients with severe multiple sclerosis and SCI have reduced range of neck motion causing difficulties
during computer use through these types of devices (LoPresti et. al., 2003). Subjects with disabilities were also found to have longer reaction times and to spend more time trying to make fine adjustments to the cursor position. Filtering and gain adjustment options in some head-control systems might improve usability for some people with neck movement impairments. However, limitations of these systems have been demonstrated by practical experiments, and it was also found that more adaptive techniques are required to allow head control with automatic adjustment to the needs and abilities of a particular user. More severe problems with head control were mentioned in (Ortega R. et. al., 2004). A head mouse system operates on the principle of a single switch. This allows the user to give single commands at the appropriate time and reduces the amount of head movement required. However, a critical issue with this approach is its exact timing requirement, which often leads to increased head movement and spasticity, especially when the user is trying to work relatively fast. Head movements indeed require considerable muscle and ligament effort, and their overuse can cause injuries to the users (Surdilovic et. al., 2005).
Other more complex approaches have attempted to provide computer interface functionality
requiring even fewer abilities from the potential users. A prominent example is the eye-gaze tracking
interface approach. This principle, patented by Mason K. A. (1969), is based on the observation that reflected light produces a bright spot (glint) on the cornea, whose position varies according to the
change of eye-gaze direction. In the most common types of these systems, an infrared illuminator and
video camera are used to obtain continuous images of the subject's eye. Application of digital image
processing techniques allows the real-time isolation of two landmark reflections from the subject's eye:
the reflection from its pupil and the smaller and brighter reflection from its cornea. Real-time
determination of the centers of these reflections and their relative positions in the image captured by
the camera is used to define the instantaneous orientation of the eye's line of gaze. The clicking
operation in these systems has been attempted by assigning a "dwell latency" and executing a click
whenever the cursor remains within a so-called "dwell neighborhood" for at least that amount of time.
This clicking procedure, however, may result in false clicks if a user is simply staring attentively at a
small area of the screen, a dilemma referred to as the "Midas Touch" problem (Jacob, 1991). Given
their complexity and computational requirements, eye-gaze-tracking systems are comparatively
expensive and require great attention and effort to achieve proper cursor control (Foulds et al., 1997).
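For illustration, the dwell-click rule described above can be sketched as follows; this is a minimal sketch assuming gaze samples arrive as (x, y, timestamp) values, and the neighborhood radius and dwell latency are illustrative placeholders rather than parameters from the cited systems:

```python
import math

DWELL_RADIUS = 30    # pixels: "dwell neighborhood" size (assumed value)
DWELL_LATENCY = 1.0  # seconds the gaze must remain inside it (assumed)

class DwellClicker:
    """Emit a click when the gaze cursor stays inside a small
    neighborhood for at least the dwell latency."""

    def __init__(self):
        self.anchor = None  # center of the current dwell neighborhood
        self.start = None   # time the gaze entered the neighborhood

    def update(self, x, y, t):
        if self.anchor and math.dist((x, y), self.anchor) <= DWELL_RADIUS:
            if t - self.start >= DWELL_LATENCY:
                self.anchor, self.start = None, None
                return "click"  # fires once, then re-arms
        else:
            # Gaze left the neighborhood: restart the dwell timer here.
            self.anchor, self.start = (x, y), t
        return None
```

Note how the sketch also exhibits the "Midas Touch" problem: nothing in it distinguishes attentive staring from an intended click.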
The research using eye-gaze to create a usable HCI is active (Wang et. al., 2006), e.g., the eye mouse. However, no efficient interface has yet been built, due to the inaccuracy of the eye-tracking technique and the Midas Touch problem. In (Bates et. al. 2002), a zooming-in interface was designed to compensate for the positional tolerance of eye tracking; the problem is that the target size significantly affects the system performance. Despite some difficulties, efforts have been made to make eye-gaze-tracking systems more portable (e.g. a head-mounted version) (Barreto et. al., 1999). Although they provide the subject with the ability to quickly displace the cursor across the screen, it is not easy to execute fine, small cursor movements in these systems. Furthermore, the stability of the cursor in a single screen position is limited. If the user changes position with respect to the plane of the screen during the use of the device, the calibration is lost and cursor position errors develop. Another weak point is that if the subject moves enough to shift his/her eye out of the field of vision of the camera, the operation of the system is disrupted. A comparative study carried out by Barreto et. al. (1999) clearly indicates that the eye-gaze approach requires more strenuous and stringent control
abilities for finer cursor movements. At present, some eye-gaze systems do attempt to compensate for
the movement of the subject by using a pan-tilt camera, and adding a magnetic head tracking device to
feed head position information and command compensatory movements to the camera, in real time.
Results are improved with this addition, but unfortunately at the expense of added complexity and cost.
In recent years, vision-based hand gesture recognition has become a very active research theme because of its potential use in HCI. Vision-based gesture recognition is achieved by using video cameras, image processing and visual tracking algorithms. Advanced mouse emulators such as the Camera Mouse (Betke et. al., 2002) track users' movements with a camera focusing on various body features as targets, such as the tip of the user's nose, eyes, lips or fingers. Sophisticated pattern recognition software algorithms recognize the target pattern, determine motion parameters, and translate this information into motion of the mouse pointer on the screen. Initial experiments with the Camera Mouse have given encouraging results for subjects with relatively good muscle control abilities. It has proven to be user friendly because it requires no calibration or body attachments before and during its use. It is easily adaptable to serve the specific needs of various disabilities, and it is especially suitable for children (e.g. with cerebral palsy). However, several problems were also observed during its experiments, such as drifts, loss of communication, slow communication rates, etc. (Betke et. al., 2002). For people with insufficient muscle control, the Camera Mouse becomes quite ineffective. Nakanishi et. al. (1999) proposed a powered wheelchair controlled by face directional gestures, but the gesture recognition required high-speed image processing hardware, and the overall cost of the system became very high. Moreover, vision based techniques require restricted backgrounds and camera positions and are suitable only for a small set of gestures performed with one hand (Pavlovic et. al., 1997).
2.2. Biosignal Based Approach
2.2.1. EOG Signal Approach
Some biosignals have also been shown to be suited for the creation of a new communication interface
between humans and computers. In this area, the use of biosignals offers brand new possibilities when
compared to the conventional, mostly audio-visually based HCI. Eye movements are arguably the most
frequent of all human movements (Jonghwa Kim et al., 2008). In terms of our primary senses, the eye
is one of the main subsystems of the body. The position of the eye directly relates to the visual information of interest. It is possible to provide a very intuitive assistive device by using the position of the eye, which can be measured optically, mechanically, and electrically. The electrical method of
measurement, the EOG, is the least invasive method of determining the eye position (Doyle et. al.
2006).
Eye movement research is of great interest in the study of neuroscience and psychiatry, as well
as ergonomics, advertising and design. Since eye movements can be controlled volitionally, to some
degree, and tracked by modern technology with great speed and precision, they can now be used as a
powerful input device, and have many practical applications in HCI. EOG is one of the very few
methods for recording eye movements that does not require a direct attachment to the eye itself
(Qiuping Ding et. al., 2005). For most people, visually following the path of an object with the help of dynamic corrections is an easy task. The EOG is the electrical recording corresponding to the direction of the eye, which makes the use of EOG for applications such as the Man Machine Interface (MMI) very attractive. As most of the machines that need to be operated are computer controlled, MMI is synonymous with HCI (Kumar et. al., 2002).
Figure 1: Basic Block Component Diagram of HCI System based on EOG
There are many ways to measure eye movement; some are more accurate than EOG, but most of them are far more expensive and cause much inconvenience and discomfort to users. The EOG method is noninvasive, low-cost and easy to use. A study on the group of persons with
severe disabilities shows that many of them have the ability to control their eye movements, which
could be used to develop new HCI systems to help them communicate with other persons or control
some special instruments. Furthermore, this application of EOG-based HCI could be extended to normal persons for games or other entertainment. Compared with the EEG, EOG signals have the following characteristics: the amplitude is relatively high, the relationship between EOG and eye movements is linear, and the waveform is easy to detect (Zhao et. al., 2008).
To determine the applications of EOG based HCI, it is important to realize the limitations and
the potential errors in the system. There may be several main sources of error that affect the accuracy
of the HCI using EOG signals. There are several problems related to head and muscle movement interference, signal drift, and channel crosstalk. Whether the user makes a choice or sits idle, there are
always some unavoidable minor head movements (Kaufman et. al., 1993). It is, however, difficult to
differentiate the gaze vector from EOG signals because the EOG signal is easily affected by noise due to head movement (Kuno et. al., 1998). Some other factors that may affect HCI performance are
angular displacement between head and torso, physiological defects, an individual perception of gaze
point, and movement of the individual relative to a known reference point. The HCI using the EOG signal proposed by Krueger et al. (2007) can be used by nearly every person except totally locked-in patients. The reaction time of the cursor is very fast, and users familiarize themselves with the interface very easily; within a limited time the user was pushed to increase accuracy very quickly. Furthermore, the game-like trial environment, which creates a stressful situation for the user and measures user performance at a given time, is an additional advantage. However, good results could not be reproduced for every user, and the learning curve can vary widely (Krueger et. al., 2007). The artificial stress situation sometimes degraded the performance of the system. The stability of the signal may increase significantly if the user is allowed to train freely. On the other hand, a defined testing environment is needed for the HCI in order to characterize it and make it comparable with other approaches (Birbaumer et. al., 2004). When maximum performance is desired, it is debatable whether an EOG system is still adequate; the user might instead turn to an eye tracking system, which provides higher accuracy than an EOG system. Yet then the advantage of a simple system vanishes, and either the hardware or the software computing power required is an order of magnitude higher (Hiley et. al., 2006).
2.2.2. EEG Signal Approach
Numerous studies have shown that individuals with severe neuromuscular disabilities can learn to use a
Brain Computer Interface (BCI), by modulating various features in their EEG (Wolpaw et. al., 2002).
The BCI is an emergent multidisciplinary technology that allows a brain to control a computer directly,
without relying on normal neuromuscular pathways (Dornhege et. al., 2007).
Figure 2: Structural components of a BCI System
The most important applications of the technology are for paralyzed people suffering from severe neuromuscular disorders, as BCI potentially provides them with communication, control, or rehabilitation tools to help compensate for or restore their lost abilities. Among various brain signal
acquisition methods, the EEG is of particular interest to the BCI community (Wolpaw, 2002; Curran et.
al., 2003; Vaughan et. al., 2003; Ebrabimi et. al., 2003). The EEG records the electrical brain signal
from the scalp, where the signal originates from postsynaptic potentials, aggregates at the cortex, and
transfers through the skull to the scalp (Fisch et. al., 1999). An EEG-based device requires extracting
raw EEG data from the brain and converting it to device control commands through suitable signal
processing techniques. The cerebral electrical activities of the brain are recorded via the EEG using
electrodes attached to the surface of the scalp. The signals measured by the electrodes are
amplified, filtered and digitized for processing in a computer where feature extraction is performed,
classification is done and a suitable control command is generated (Gopi et. al., 2006).
EEG based BCI technology has seen much development in recent years. Specifically, EEG
based BCI technologies that do not depend on peripheral nerves and muscles have received much
attention as possible modes of communication for the disabled (Palaniappan, 2005). Various EEG phenomena, such as slow cortical potentials, P300 potentials, and mu and beta rhythm control, can provide opportunities for severely disabled individuals to further interact with their environment. One
of the popular phenomena utilized for BCI control is the modulation of mu (8-12 Hz) and beta (18-25
Hz) rhythms via motor imagery. Actual or imagined motor movements result in a de-synchronization
(decrease in amplitude) of these rhythms over the sensorimotor cortex. Users are thus directly able to
control a BCI by modulating the magnitude of these rhythms by switching between motor imagery
tasks (Rasmussen et. al., 2006).
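As a rough illustration of how this rhythm modulation can be quantified, the following is a minimal sketch (assuming Python with SciPy and an arbitrary 250 Hz sampling rate) that estimates mu-band (8-12 Hz) power with Welch's method and compares a motor-imagery trial against a rest baseline; a negative result indicates desynchronization:

```python
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumed)

def band_power(eeg, fs, lo, hi):
    """Average PSD power of a 1-D EEG signal in the [lo, hi] Hz band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def mu_erd(trial, baseline, fs=FS):
    """Event-related desynchronization of the mu rhythm (8-12 Hz):
    relative power change of a motor-imagery trial vs. a rest baseline.
    Negative values indicate a power (amplitude) decrease."""
    p_trial = band_power(trial, fs, 8, 12)
    p_rest = band_power(baseline, fs, 8, 12)
    return (p_trial - p_rest) / p_rest
```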
The EEG bears merits as it is noninvasive, technically less demanding, and widely available at
relatively low cost. On the other hand, it also brings great challenges to signal processing and pattern
recognition, since it has relatively poor signal-to-noise ratio and limited topographical resolution and
frequency range (Wolpaw et. al., 2006). Moreover, non-invasive data acquisition makes automated feature extraction challenging, because the signals of interest are 'hidden' in a highly noisy environment. It was demonstrated that spatial filtering operations improve the signal-to-noise ratio
(Bufalari et. al., 2006). Unfortunately, the intensive training time (several months) involved for a user
to gain a high degree of control (>80% accuracy) may be a deterrent for practical applications of BCIs
such as prosthetic control and daily computer use for disabled individuals (Guger et. al., 2003).
2.2.3. EMG Signal Approach and Importance
Among these bioelectric signals, EMGs are considered to be the source of a new means of HCI, i.e. an
alternative input mechanism. In fact, an input device developed using EMGs is a natural means of HCI
because the electrical activity induced by the human's arm muscle movements can be interpreted and
transformed into computer control commands. Furthermore, EMGs can be easily acquired on the
surface of human skin through conveniently attachable electrodes.
Compared to optical systems, EOG based systems provide favorable possibilities for mouse pointer control, and are practical and valuable for people with SCI. However, their complex learning
and calibration procedures present the main limitations and require further development (Surdilovic,
2005). On the other hand, one of the major limitations of BCI systems is the high potential for EMG
contamination. EEG signals originate in the neurons of the brain and have to propagate through the
skull and the pericranial muscles in order to reach the surface electrodes. Because the EEG signals are
small in amplitude (5–300 μV), the EEG biopotential amplifiers are designed to incorporate high
amplification (Taberner et. al., 1998). Thus, any muscle movement on the head or neck can produce a
large noise contamination from the corresponding EMG signal. From an application standpoint, this is
a big inconvenience to a user, especially if the user has a condition such as cerebral palsy. Most BCI
researchers have tried their best to eliminate any EMG artifacts, especially eye blinks and neck
movements (Wolpaw et. al., 1994; Pfurtscheller et. al., 1996).
The EEG is a noninvasive monitoring method of recording brain activities on the scalp (Millan
et. al., 2004). However, signals acquired via this method represent the massed activities of many
cortical neurons; they also provide a low spatial resolution and a low signal-to-noise ratio (SNR).
Invasive monitoring methods, on the other hand, capture the activities of individual cortical neurons in
the brain (Wessberg et. al., 2000). However, many fundamental neurobiological questions and
technical difficulties need to be solved (Nicolelis, 2001), and extensive training is required for interface
methods based on brain activities (Cheng et. al., 2002). Signals generated because of body motion at
the level of peripheral nervous system can be detected by an ENG (Cavallaro et. al., 2003) and an
EMG (Chu et. al., 2006). However, ENG-based interfaces have limitations with respect to the SNR,
dimensions, and drifts: that is, damage to the neural tissue (Bossi et. al., 2006) and continued
differential motion of the electrode within the fascicle cause a reduction in the SNR and a gradual drift
in the recorded nerve fiber population (Lawrence et. al., 2004). In contrast, EMG signals can be measured more conveniently and safely than other neural signals. Furthermore, this noninvasive
monitoring method produces a good SNR. Hence, an EMG-based HCI is the most practical with
current technology.
EMG measures electrical currents that are generated in a muscle during its contraction and
represent neuromuscular activities. EMG signals can be used for a variety of applications including
clinical applications, HCI and interactive computer gaming. Moreover, EMG can be used to sense
isometric muscular activity which does not translate into movement (Park et. al., website:
http://melab.snu.ac.kr/Research/melab/doc/HCI/muscleman_paper.pdf). This makes it possible to
classify subtle motionless gestures and to control interfaces without being noticed and without
disrupting the surrounding environment. On the other hand, one of the main difficulties in analyzing
the EMG signal is due to its noisy characteristics. Compared to other biosignals, EMG contains
complicated types of noise that are caused by, for example, inherent equipment noise, electromagnetic
radiation, motion artifacts, and the interaction of different tissues. Hence, preprocessing is needed to filter out the unwanted noise in the EMG. This difficulty becomes more critical when resolving a
multiclass classifying problem. In most previous works, therefore, multi-channel EMG sensors are
used at the same time to detect relevant EMG patterns by a combined signal analysis. In this case,
however, users suffer from the inconvenience of carrying many cabled electrodes (Jonghwa et. al.,
2008).
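As an illustration of the preprocessing step described above, the following is a minimal sketch (assuming Python with NumPy/SciPy, a 1 kHz sampling rate, and a single-channel recording) of a typical surface-EMG cleanup chain: a 20-450 Hz band-pass against motion artifacts and out-of-band noise, plus a 50 Hz notch for power-line interference:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 1000  # sampling rate in Hz (assumed; typical for surface EMG)

def preprocess_emg(raw, fs=FS):
    """Suppress common EMG noise sources before feature extraction."""
    # 4th-order Butterworth band-pass: removes motion artifacts (<20 Hz)
    # and out-of-band noise (>450 Hz), where little EMG energy remains.
    b, a = butter(4, [20, 450], btype="bandpass", fs=fs)
    emg = filtfilt(b, a, raw)  # zero-phase filtering, no time shift
    # Notch filter at 50 Hz to attenuate power-line interference.
    b, a = iirnotch(w0=50, Q=30, fs=fs)
    return filtfilt(b, a, emg)
```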
In human-centered solutions such as a gesture-based interface, the system customarily
compensates for individual differences between users to produce a consistent pattern-recognition rate
no matter who is using the system. However, in the case of security, you can take advantage of user
differences to prevent unauthorized users. You could also do this by monitoring EMG signals
corresponding to typical computer command sequences. The EMG signals have different signatures
depending on age, muscle development, motor unit paths, skin-fat layer, and gesture style. The external
appearances of two peoples’ gestures might look identical, but the characteristic EMG signals are
different. In terms of fun applications, the video game industry constantly needs quick, flexible
interfaces. New input devices such as the Xbox controller are pushing the limits by increasing the
complexity of numerous physical buttons and sticks manipulated simultaneously. However, it is
possible to map multiple muscle groups to different actions to distribute this complexity across the
body. This would require training for proficiency, but the net result would be a whole new gaming
experience (Wheeler et. al., 2003).
In the past three decades, myoelectric control has attracted more and more attention for its
application in rehabilitation and human-computer interfaces. In myoelectric control systems, hand
gestures are often used for controlling peripheral equipment. Hand gestures are captured by means of surface electromyography (SEMG) sensors, which measure the activities of the musculature system (Weir, 2003; Chen et. al., 2007). Accurate recognition of the user's intent on the basis of the measured SEMG signals is the key problem in the realization of myoelectric control. Since the early 1970s, researchers have studied the classification of hand motions such as finger flexion-extension, wrist flexion-extension and supination-pronation by sensing the activities of upper arm muscles.
However, although the recognition rates have reached above 90 percent in the recent research work,
there are still many problems that need to be solved for realizing practical applications of myoelectric
control (Chen et. al., 2007).
Hand gestures involve relative flexure of the user's fingers and convey information that is often too abstract to be interpreted by a machine. An important application of hand gesture recognition is to improve the quality of life of deaf or non-vocal persons through a hand-gesture-to-speech system. Another major application is in rehabilitation engineering and in prosthesis. Some of the commonly employed techniques in hand gesture recognition include mechanical sensors (Pavlovic et. al., 1997), vision based systems (Rehg et. al., 1994) and the use of EMG (Koike et. al., 1996). EMG has the advantage of being easy to record, and it is non-invasive. SEMG is the electrical manifestation of contracting muscle activity; it is closely related to muscle contraction and is thus an obvious choice for control of the prosthesis. Since all the muscles present in the forearm are close to each other, myoelectric activity observed from any muscle site comprises activity from the neighbouring muscles as well, referred to as cross-talk. The cross-talk problem is more significant when the muscle activation is relatively weak (subtle), because the comparable signal strength is very low. Extraction of useful information from such SEMG becomes difficult, mainly due to the low signal to noise ratio. At low levels of contraction, EMG activity is hardly discernible from the background activity. To identify the small movements and gestures of the hand, there is a need to identify the components of SEMG originating from the different muscles (Naik et. al., 2008).
3. EMG Classification Methodologies for HCI
Some artificial intelligence (AI) techniques, mainly based on neural networks, have been proposed for processing and discriminating EMG signals. The neural network is a computing technique that evolved from mathematical models of neurons and systems of neurons. During recent years, neural networks have become a useful tool for the categorization of multivariate data. In some cases, combining the neural network with other AI techniques, e.g. fuzzy logic, the Hidden Markov Model (HMM), or Bayesian methods, yields very good performance.
3.1. Artificial Neural Network
In 1993, Putnam et. al. proposed a real-time computer control system based on a neural network for pattern recognition of the EMG from users' gestures (Putnam et. al., 1993). The system derives two modes of communication from the EMG. The first mode is a continuous control signal, proportional to muscular exertion, which controls computer software objects such as sliders or scroll bars. The second communication mode is gesture recognition, which allows the user to make discrete choices such as menu selections or slider direction by executing different gestures. A Single Layer Perceptron (SLP) structure was trained by the Widrow-Hoff LMS algorithm, whereas a backpropagation algorithm was utilized to train a Multi-Layer Perceptron (MLP) structure. The feature vector comprises AR model parameters. Although 95% accuracy in classification was achieved, the authors felt that a system utilizing both bicep and tricep data, along with a more robust classifier, is warranted to accommodate users with disabilities who are unable to perform such clearly defined tasks as those studied. Another prominent attempt is the EMG-controlled 2-dimensional pointer invented by Rosenberg (1998), known as the Biofeedback Pointer. This graphical input device is controlled by wrist motion: moving the wrist causes the pointer to move in that direction. The pointer detects the EMG signals of four of the muscles used to move the wrist, which are interpreted by a neural network trained for each user. The Biofeedback Pointer's simple neural network is computationally inexpensive, with the side effect of a reduction in accuracy that is compensated for by using four EMG sensors. Instead of using special hardware to train the device, the training is performed by requiring the user to follow the pointer's motion on the screen. During the training period, the network is calculated 8 times with offsets of 0 to 448 ms to find the network with the least error; the reason for this is to minimize the reaction time delay with respect to the user's motion. The main problem with this training is that the user's motions may not adequately synchronize with the cursor.
Figure 3: The main steps of online classification of hand movement using EMG signals
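As an aside, the Widrow-Hoff LMS rule used by Putnam et. al. is simple enough to sketch directly. The following is a minimal illustration in Python/NumPy of LMS training of a single linear unit on feature vectors (such as AR model parameters); the learning rate, epoch count, and bipolar targets are illustrative assumptions, not values from the cited study:

```python
import numpy as np

def train_lms(X, d, lr=0.01, epochs=100):
    """Widrow-Hoff (LMS) training of a single linear unit.
    X: (n_samples, n_features) feature vectors (e.g., AR coefficients),
    d: (n_samples,) desired outputs (e.g., +1 / -1 class targets)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            error = target - w @ x  # instantaneous output error
            w += lr * error * x     # gradient step on the squared error
    return w

def predict(w, x):
    """Threshold the linear output to obtain a class decision."""
    return 1 if w @ x >= 0 else -1
```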
G. Tsenov et. al. (2006) found that the classification performance for hand and finger movements depends significantly upon feature extraction, which is very important for considerably improving the accuracy of classification. They described an identification procedure based on EMG patterns of forearm activity using various neural network models. After comparing different intelligent computational methods of identification, they obtained the best classification result (nearly 93% using 2-channel data) using the MLP rather than the Radial Basis Function (RBF) or Learning Vector Quantization (LVQ) networks. In the time domain, features such as Mean Absolute Value (MAV), Variance (VAR), Waveform Length (WL), norm, number of zero crossings, absolute maximum, absolute minimum, maximum minus minimum, and median value (Med) are some of the extracted features. Relevant features will lead to high and accurate classification rates; in practice, however, determining the relevant features is very difficult. One year later, Kyung Kwon Jung et. al. (2007) came up with stronger classifiers that would help to implement the HCI. They proposed a method of pattern recognition of EMG signals of hand gestures using spectral estimation and a neural network. The proposed system is composed of the Yule-Walker algorithm and Learning Vector Quantization (LVQ). The Yule-Walker algorithm is used to estimate the power spectral density (PSD) of the EMG signals. LVQ is a method for training competitive layers in a supervised manner. A competitive layer automatically learns to classify input vectors; however, the classes that the competitive layer finds depend only on the distance between input vectors. If two input vectors are very similar, the competitive layer will probably put them in the same class. There is no mechanism in a strictly competitive layer design to say whether or not any two input vectors are in the same class or different classes. The experiment verified that EMG signals produced by hand gestures are reliably classified by the proposed system, with a success rate of about 78%.
Figure 4: LVQ Network Architecture
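To make the time-domain features listed above (MAV, VAR, WL, zero crossings, and so on) concrete, here is a minimal sketch in Python/NumPy computed over one analysis window; the zero-crossing amplitude threshold is a common noise guard that is an assumption of this sketch, not part of the cited works:

```python
import numpy as np

def time_domain_features(x, zc_threshold=0.01):
    """Classic time-domain EMG features for one window x (1-D array)."""
    mav = np.abs(x).mean()         # Mean Absolute Value
    var = x.var()                  # Variance
    wl = np.abs(np.diff(x)).sum()  # Waveform Length
    # Zero crossings: sign changes whose amplitude step exceeds a threshold.
    zc = np.sum((np.sign(x[:-1]) * np.sign(x[1:]) < 0) &
                (np.abs(np.diff(x)) > zc_threshold))
    amax = np.abs(x).max()         # Absolute Maximum
    amin = np.abs(x).min()         # Absolute Minimum
    rng = x.max() - x.min()        # Maximum minus Minimum
    med = np.median(x)             # Median Value
    return np.array([mav, var, wl, zc, amax, amin, rng, med])
```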
3.2. Back-Propagation (BP) based Neural Network
A Back-propagation Neural Network (BPN) algorithm was applied to an EMG-based mouse cursor control system as a man-machine interface by Itou et. al. (2001). They used a neural network with three inputs, two hidden layers and one output layer, which achieved a 70% recognition rate. Any muscle can be used, and the mouse cursor can be operated using a leg too, although muscle fatigue may appear with long-time use. In 2007, Naik et. al. applied the BPN to overcome the drawback of the standard Artificial Neural Network (ANN) architecture by augmenting the input with hidden context units, which give feedback to the hidden layer, thus giving the network the ability to extract features of the data from the training events. The data was divided into training, validation, and test subsets: one fourth of the data was used for the validation set, one fourth for the test set, and one half for the training set. The four RMS EMG values were the inputs to the ANN, and the outputs of the ANN were the RMS values of the different isometric hand actions. The overall accuracy was reported as 97%, but the number of identified hand gestures was restricted to three. One year later, Ganesh R. Naik et. al. (2008) proposed further improved identification of various hand gestures using multi-run ICA of SEMG with a back-propagation-learning-based ANN classifier. They reported that ICA alone is not suitable for SEMG, due to the nature of the SEMG distribution and order ambiguity. They also showed that a combination of the mixing matrix and network weights can classify the SEMG recordings in almost real time. Their results indicate an overall classification accuracy of 99% for all the experiments, and the method can be used for the classification of different subtle hand gestures. However, the BPN cannot realize high learning and discrimination performance, because the EMG patterns differ considerably at the start and end of the motion even if they are within the same class. In 2008, Eman et. al. applied an HMM-based surface EMG algorithm that facilitates automatic SEMG feature extraction, combined with an ANN, to provide an integrated system for the automatic analysis and diagnosis of neuromuscular disorders. The number of input nodes is 312, using the 4 HMM features for 78 SEMG segments, and there are two output nodes. In each model, each subject was characterized by a 312-element feature vector calculated using the HMM. Every vector is considered as one training pattern, so there are 52 training patterns and 55 testing patterns. ANN architectures with three layers (input layer, hidden layer and output layer) were used; the architectures are expressed as strings showing the number of inputs, the number of nodes in the hidden layer and the two output nodes. The best correct classification rate achieved was 90.91%, with 80 nodes in the hidden layer.
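As an illustration of the kind of pipeline described above, the following sketch computes four RMS values per analysis window and applies the half/quarter/quarter data split, using scikit-learn's backpropagation-trained MLP as a stand-in for the custom BPN implementations of the cited works; the window shape and network size are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def rms_per_channel(window):
    """RMS of each EMG channel in one window of shape (n_samples, 4)."""
    return np.sqrt((window ** 2).mean(axis=0))

def train_gesture_classifier(windows, labels):
    X = np.array([rms_per_channel(w) for w in windows])
    # Half for training, one quarter each for validation and test,
    # mirroring the data split described above.
    X_tr, X_rest, y_tr, y_rest = train_test_split(X, labels, train_size=0.5)
    X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5)
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000)  # backprop
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_val, y_val), clf.score(X_te, y_te)
```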
3.3. Log-Linearized Gaussian Mixture Network (LLGMN) and Probabilistic Neural Network
(PNN)
In this approach, the neural network only has to estimate the probability that the pointer will move in each base direction, so that heavy learning calculations and a huge network structure are not necessary. The neural network is used as the pointer controller in the prototype system. This system can adapt itself to changes in the EMG patterns according to differences among individuals, different locations of the electrodes, time variation caused by fatigue or sweat, and so on. Fukuda et. al. (1999) presented an EMG
controlled pointing device using a neural network and developed a prototype system. This system uses
the information on the EMG signals for pointer control. The operator's intended direction of the pointer
movement and its velocity are estimated from the EMG signals, and natural interaction can be expected
using this information. In the proposed method, several base directions are set on the computer display, and the operator's intended direction is estimated from the probability that the pointer will move in each base direction. The neural network is used to estimate the probability of pointer movement in each base direction; this way, it is possible to avoid heavy learning calculations and a huge network structure. In the neural network part, the Log-Linearized Gaussian Mixture Network (LLGMN) proposed by Tsuji et al. (1995) is used.
Figure 5: Structure of the prototype system based on LLGMN classifier
The LLGMN can acquire the log-linearized Gaussian mixture model through learning and
calculate the posteriori probability of the pointer movement to each base direction based on this model.
The probability density function is expressed by the weighted sum of the Gaussian components. It
enables the LLGMN to learn the complicated mapping between the operator's EMG patterns and the
pointer movement. Before operation, the LLGMN must learn the nonlinear mapping between the EMG patterns and the pointer movement; then the LLGMN can estimate the pointer movement based on the statistical model. The accuracy improves as the number of base directions increases, although a large number of base directions requires a much longer learning time. The error becomes large when the desired direction differs from the base directions. This method can control the pointer in an arbitrary direction, but the accuracy of the estimated direction relative to the operator's intention was not very high. Furthermore, if the pointer is allowed to move in all directions from the
current position, the number of moving directions will be infinite. To overcome this, Tsuji et al. (1995) therefore proposed the Recurrent Log-Linearized Gaussian Mixture Network (R-LLGMN), based on a continuous density hidden Markov model (CDHMM) (Chen Xiang, 2007). This network uses recurrent
connections added to the units of LLGMN in order to discriminate a time sequence of the signals with
high accuracy. Osama Fukuda et al. (2003) proposed a new EMG-controlled omni-directional pointing
device using R-LLGMN. In the proposed pointing device, an arbitrary direction of pointer movement
is represented using a combination of finite base directions. Since the neural network utilized in this system only estimates the probability for each base direction, heavy learning calculations and a huge network structure are avoided. The probability of pointer movement in each base direction
can be estimated by R-LLGMN using probability theory. Their results showed that the direction errors
improved remarkably. According to Nan Bu et. al. (2004), a probabilistic neural network (PNN)
provides a stochastic perspective of pattern discrimination; it has been proven to be efficient for
complicated data such as bioelectric signals. They proposed a field programmable gate array (FPGA) implementation of a PNN, enabling a system-on-chip (SoC) design of a bioelectric human interface device. This PNN, called the LLGMN, estimates the posterior probability based on a Gaussian mixture model (GMM) and the log-linear model.

Figure 6: The Structure of R-LLGMN

Although the weights of the LLGMN correspond to a
nonlinear combination of the GMM parameters, such as the mixture coefficients, mean vectors, and
covariance matrices, constraints on the parameters in the statistical model are relieved in the LLGMN.
Therefore, a simple learning algorithm can be derived, and the LLGMN is expected to have high
performance in the case of statistical pattern discrimination. The LLGMN has been successfully
applied to pattern discrimination of bioelectric signals, e.g., EMG and EEG, and has been further used to develop various human interface applications such as prosthetic device control and an EMG-based pointing device. The remaining problems include the non-trivial implementation of larger and more complicated neural networks, for which more hardware-efficient algorithms are required.
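The statistical model underlying the LLGMN, a Gaussian mixture per class with posteriors obtained by Bayes' rule, can be sketched as follows. This sketch uses plain SciPy arithmetic with already-learned parameters rather than the log-linearized network or its learning algorithm, so it illustrates only the probability estimation step:

```python
import numpy as np
from scipy.stats import multivariate_normal

def class_posteriors(x, classes):
    """Posterior P(class | x) for one feature vector x.
    classes: list of dicts, one per class (base direction), each holding
    assumed, already-learned GMM parameters: "weights" (mixture
    coefficients), "means", "covs" (covariance matrices), and "prior"."""
    likelihoods = []
    for c in classes:
        # Class-conditional density: weighted sum of Gaussian components.
        p = sum(w * multivariate_normal.pdf(x, mean=m, cov=s)
                for w, m, s in zip(c["weights"], c["means"], c["covs"]))
        likelihoods.append(c["prior"] * p)
    likelihoods = np.array(likelihoods)
    return likelihoods / likelihoods.sum()  # normalize (Bayes' rule)
```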
3.4. Fuzzy Mean Max Neural Network (FMMNN)
Jong-Sung Kim et. al. (2004) applied the fuzzy mean max neural network (FMMNN) as a classifier for an online EMG mouse that controls the computer cursor. Stochastic values such as the integral absolute value were used as features for an appropriate classification of the intended wrist motions. They interpreted 6 predefined wrist motions as left, right, up, down, click and rest operations. Here, the Difference Absolute Mean Value (DAMV) extracted from the EMG signals is used as the input vector in learning and classifying the patterns. The commands for controlling mouse cursor movements can then be generated in accordance with these classified patterns. The DAMV is calculated for each
window of data according to the following equation:
$$\mathrm{DAMV} = \frac{1}{N} \sum_{i=2}^{N} \left| x(i) - x(i-1) \right| \qquad (1)$$

where x is the data available within a window and N is the window size on the time frame.
The pattern recognition rate for each wrist motion was reported as above 90%. The average recognition rate of 97% shows promise that it can be used as an efficient means of HCI.
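For illustration, Eq. (1) can be computed per analysis window as in the following sketch, assuming NumPy and a sliding-window segmentation whose size and step are arbitrary placeholders:

```python
import numpy as np

def damv(window):
    """Difference Absolute Mean Value of one analysis window (Eq. 1)."""
    return np.abs(np.diff(window)).sum() / len(window)

def damv_features(signal, win_size=256, step=128):
    """Sliding-window DAMV feature sequence from one raw EMG channel."""
    starts = range(0, len(signal) - win_size + 1, step)
    return np.array([damv(signal[s:s + win_size]) for s in starts])
```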
3.5. Radial Basis Function Artificial Neural Network (RBFNN)
A novel method for online estimation of human forearm dynamics using a second-order quasi-linear
model is presented by Farid Mobasser et al. (2006). Human arm dynamics can be used for human body
performance analysis or for control of human-machine interfaces. The proposed method uses Moving
Window Least Squares (MWLS) to identify dynamic parameters for a limited number of operating
points in a variable space defined by elbow joint angle and velocity, and the electromyogram signals
collected from upper-arm muscles. The dynamic parameters for these limited points are then employed
to train a Radial Basis Function Artificial Neural Network (RBFNN) to interpolate/extrapolate for
online estimation of arm dynamic parameters for other operating points in the variable space. The
model parameters are identified for a limited number of points using the MWLS estimation method. The limited number of points is justified because, in contact applications, the arm workspace and movement are relatively small and slow. The RBFNN has the advantage of minimum memory usage for function approximation and has been used extensively for interpolation. One major factor in parameter error is the stochastic nature of EMG signals. The online estimation accuracy may be improved by changing the neural network input quantization level and by using more sensors for each muscle for a more accurate representation of muscle activation levels.
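The interpolation role played by the RBFNN in this scheme can be sketched with SciPy's RBFInterpolator standing in for a trained network; the operating points and parameter values below are entirely hypothetical:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical MWLS output: a dynamic parameter identified at a limited
# set of operating points in the (elbow angle, angular velocity) space.
operating_points = np.array([[0.2, 0.0], [0.5, 0.1], [0.8, -0.1], [1.1, 0.2]])
dyn_params = np.array([[1.2], [1.5], [1.1], [1.7]])  # e.g., stiffness values

# Radial-basis-function interpolant standing in for the trained RBFNN.
rbf = RBFInterpolator(operating_points, dyn_params, kernel="thin_plate_spline")

# Online estimation at a new operating point not seen during identification.
estimate = rbf(np.array([[0.65, 0.05]]))
```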
3.6. Other Methodologies used
3.6.1. Hidden Markov Model
Wheeler (2003) introduced an approach to designing and using neuroelectric interfaces for controlling
virtual devices. Hand gestures are used to interface with a computer instead of manipulating
mechanical devices such as joysticks and keyboards. EMG signals are non-invasively sensed from the
muscles used to perform these gestures. These signals are then interpreted and translated into useful
computer commands. Among the most common methods, such as the Short Time Fourier Transform (STFT), wavelets, moving average, and Auto-Regression (AR) coefficients, they found the moving average to be the best and simplest for the feature space. The pattern recognition method employed was an HMM. The ability to naturally interface with a computer allows humans to manipulate any electrically controlled mechanical system. In addition to wearable computing applications, this can also be applied to interfaces for robotic arms, mobile robots for urban rescue, unmanned aircraft drones, robotic exoskeletons, and space suit interfaces. There are also side benefits to using EMG signals for control in long-duration space missions; however, one of the side effects of living in a zero gravity environment for extended periods is muscle atrophy. Another disadvantage was that wet electrodes caused unintentional misplacement that greatly degraded recognition performance. Standard EMG dry electrodes incorporated into a sleeve alleviated this problem but then raised significant reliability issues in signal sensing. Chan et. al. (2005) used an HMM in their research for feature discrimination. Using 4-channel SEMG signals, they achieved a classification accuracy of 87%.
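A minimal sketch of HMM-based gesture classification in the spirit of the approach above, assuming the third-party hmmlearn package and moving-average feature sequences: one HMM is trained per gesture, and a new sequence is assigned to the model that scores it with the highest log-likelihood.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_models(sequences_by_gesture, n_states=4):
    """sequences_by_gesture: {gesture: list of (T_i, n_features) arrays}."""
    models = {}
    for gesture, seqs in sequences_by_gesture.items():
        X = np.vstack(seqs)               # concatenated observations
        lengths = [len(s) for s in seqs]  # per-sequence lengths
        m = GaussianHMM(n_components=n_states, covariance_type="diag")
        m.fit(X, lengths)
        models[gesture] = m
    return models

def classify(models, seq):
    """Pick the gesture whose HMM assigns seq the best log-likelihood."""
    return max(models, key=lambda g: models[g].score(seq))
```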
3.6.2. Bayes Network
Alsayegh (2000) presented an EMG-based human-machine interface system that interprets arm
gestures in the 3-dimensional (3D) space. Gestures are interpreted by sensing the activities of three
muscles, namely, anterior deltoid (AD), medial deltoid (MD), and biceps brachii (BB) muscles. The
problem of gesture classification is carried out within the framework of statistical pattern recognition. The
processing of the EMG signals utilizes the temporal coordination activity of the monitored muscles to
identify a particular gesture. The classification procedure is carried out by constructing successive
feature vectors for each gesture. These feature vectors describe the gesture's temporal signature. This
type of classification is referred to as the context-dependent classification, which is carried out in this
study within the framework of Bayes theorem. The overall success rate is 96%. It was observed that
the structured type movements have a higher classification success rate than the pointing (simple)
movements. The main reason that structured type gestures have a better classification rate is due to the
clear coordination of the muscular activities. However, the input method described there is non-standard, since it does not make use of a keyboard or a mouse; it is, nevertheless, inappropriate for helping disabled persons, since it still requires control over the hands. In 2007, Xiang Chen et. al.
implemented multiple hand gesture recognition along with a 2-D accelerometer for mobile HCI.
Feature extraction is carried out to reduce the data dimensionality while preserving the signal patterns
which help to differentiate between the gesture classes. In their research, MAV, the ratio of two
MAVs, and fourth-order AR model coefficients are used in the formation of the feature vectors. The
accelerometer feature vector consists of the mean absolute values. The Linear Bayesian Classifier is
trained with the feature vectors to distinguish the different gesture actions from each other. Due to their
low computational complexity and stable recognition performance, classical linear classifiers are well
suited for real-time gesture analysis and real life implementation. It was reported that the combination
of accelerometers and SEMG sensors provided higher classification accuracy, especially for gesture
sets including wrist motions, than the approaches using only the accelerometers or SEMG sensors. The
development of an EMG based interface for hand gesture recognition is presented by Jonghwa Kim et
al. (2008). For realizing real-time classification with acceptable recognition accuracy, they introduced the combination of two simple linear classifiers, K-nearest neighbor (KNN) and Bayes, in decision-level fusion. As the duration of the classification process is an essential factor for the efficiency of a real-time system, two comparatively simple and thus fast algorithms are applied: the K-nearest neighbor (KNN) classifier and the Bayes classifier. Despite their simplicity, these algorithms generally provide reasonably good results. The KNN classifier, which belongs to the non-parametric statistical classifiers, rates a pattern by considering the most similar labeled training samples. For this purpose, the distances (e.g. Euclidean distance) between the feature vector of the current pattern and the feature vectors of each training sample are calculated. Beforehand, all vectors are generally normalized. The number of adjacent samples taken into account is defined by the parameter k; in their pattern recognition system, the five nearest neighbors were considered.
Figure 7: Decision Tree of Classifier Combination
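A minimal sketch of this two-classifier combination, assuming scikit-learn: a KNN classifier with k = 5 and Euclidean distance on normalized feature vectors, plus a Gaussian naive Bayes classifier, fused at decision level by requiring agreement. This is a simplification of the decision tree in Figure 7, not a reproduction of it:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import StandardScaler

def train_fusion(X, y):
    scaler = StandardScaler().fit(X)  # normalize all feature vectors
    Xn = scaler.transform(X)
    knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean").fit(Xn, y)
    bayes = GaussianNB().fit(Xn, y)
    return scaler, knn, bayes

def classify(scaler, knn, bayes, x, reject_label=None):
    """Decision-level fusion: accept a gesture only if both classifiers
    agree; otherwise reject the pattern (simplifying Figure 7)."""
    xn = scaler.transform(np.asarray(x).reshape(1, -1))
    a, b = knn.predict(xn)[0], bayes.predict(xn)[0]
    return a if a == b else reject_label
```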
The presented EMG-based controlling interface is able to reliably recognize various hand gestures with a positive classification rate of over 94%, even though only a single EMG sensor is used, in contrast to related work based on multiple EMG sensors. Moreover, since the EMG signal can be used to sense isometric muscular activity, it is possible to detect motionless gestures or intentions in the EMG signal. Consequently, there is a wide range of potential applications using EMG signals in human-machine interfacing. However, to realize advanced applications, many issues still need to be resolved, including the development of algorithms for EMG-specific analysis, the extraction of relevant features, and the design of real-time classifiers with guaranteed accuracy.
4. Discussion
It can be seen from this review that the ANN plays an important role in the classification of EMG signals for interpretation into computer commands. Over the last decades, many researchers have successfully applied various neural-network-based algorithms. It can also be seen that neural networks, as well as their combination with other artificial intelligence techniques, for example fuzzy logic or the HMM, yield satisfactory recognition results. The neural network with the Yule-Walker algorithm and Learning Vector Quantization (LVQ) reported a success rate of about 78%. Effective classification accuracy can also be obtained from BP-based neural networks, but the problem is that they cannot realize high learning and discrimination performance, because the EMG patterns differ considerably at the start and end of the motion even within the same class. The PNN with LLGMN is efficient for complicated data such as bioelectric signals; its accuracy improves as the number of base directions increases, although a large number of base directions requires a much longer learning time. A 97% average recognition rate was reported using the FMMNN. HMMs are popular dynamic classifiers in the field of speech recognition and are perfectly suitable algorithms for the classification of time series. HMMs are not yet widespread within the HCI community, but the studies reviewed here suggest that they are promising classifiers for HCI systems. A summary of the major classification methods is given in the table below.
Table 1: Summary of major methods used for EMG classification in the field of HCI

Artificial Neural Network (ANN)
- Putnam et al. (1993): AR model parameters formed the feature vector for a neural network; 95% classification accuracy was achieved. A more robust classifier is required for persons with disabilities.
- Rosenberg (1998): A one-layer feed-forward neural network; performance yields 14% according to Fitts' law. A more sophisticated neural network and better training methods are required for future improvement.
- Tsenov et al. (2006): Both time and frequency domain features were used; the MLP based model yielded the best result compared with RBF and LVQ. Classification accuracy can be as high as 98% using a 4-channel data set, although the computational time doubles. It is hard to determine a complete set of relevant discriminating features.
- Kyung Kwon Jung et al. (2007): A Yule-Walker algorithm based AR model for spectral estimation; 4th-order AR model parameters serve as input to an LVQ neural network, with a competitive layer for learning and a linear layer for classifying. The classifier success rate is about 78%. A strictly competitive layer design provides no mechanism that depends on input vector classes.

Backpropagation Neural Network (BPNN)
- Itou et al. (2001): A new type of EMG based mouse was developed, with a 70% recognition rate for mouse cursor control. It is not applicable for long term use, is limited to 4 directions, and lacks a drag action.
- Naik et al. (2007, 2008), Eman M. El-Daydamony et al. (2008): An ICA based signal extraction method was used; the temporal decorrelation source separation (TDSEP) algorithm based ICA gives 97% separation efficiency, better than the others. The RMS value of each signal forms the feature vector input to the neural network, and the combination of the mixing matrix and the network weights classifies the sEMG recordings in almost real time. The number of identified hand gestures was restricted to three and six.

Log-Linearized Gaussian Mixture Network (LLGMN)
- Tsuji et al. (1995), Fukuda et al. (1999): The LLGMN creates the LLGM model through learning and calculates the posterior probability of pointer movement in each base direction depending on the EMG patterns; the direction of pointer movement is given by the network output. Higher discrimination performance can be achieved than with other neural networks. The accuracy of pointer movement depends on the amount of learning data, and the accuracy of the estimated direction depends on the number of base directions.

Recurrent LLGMN
- Tsuji et al. (2003), Fukuda et al. (2004): A continuous density hidden Markov model (CDHMM) based recurrent LLGMN. A finite number of base directions is assumed, which avoids heavy learning calculation and a huge network structure. Higher accuracy for the discrimination of time sequences of signals; direction errors improved remarkably.

LLGMN based Probabilistic Neural Network (PNN)
- Nan Bu et al. (2004): An FPGA implementation of the PNN/LLGMN; HCI on an FPGA chip is much more portable and compact. The classification rate of the hardware is 97.9%, higher than that of the software. There is a shortage of memory for the hardware description language, and the processing speed needs to improve.

Fuzzy Min-Max Neural Network (FMMNN)
- Jong-Sung Kim et al. (2004): Stochastic values such as the integral absolute value were used as features from 4-channel raw EMG signals; the Difference Absolute Mean Value (DAMV) extracted from the EMG signals is used as the input vector for learning and classifying the patterns. Six distinctive wrist motions can be classified well: the pattern recognition rate of each wrist motion is above 90%, and the average recognition rate is 97%. It is important to extract an appropriate feature vector for the classifier.

Radial Basis Function Artificial Neural Network (RBFNN)
- Farid Mobasser et al. (2006): A Moving Window Least Squares (MWLS) estimation method identifies a limited number of operating points; the RBFNN is trained on these points and used for interpolation/extrapolation in the online estimation of arm dynamic parameters. Parameter errors were found because of the stochastic nature of EMG signals. Estimation accuracy can be improved by changing the neural network input quantization level and by using more sensors for each muscle.

Hidden Markov Model (HMM)
- Wheeler (2003), Chan et al. (2005): The moving average was selected for the feature space as the best and simplest choice. The HMM has an inherent ability to deal with spurious misclassification and, during classifier training, provides large computational savings compared with an MLP. Error rates depend on sleeve position, sweating, skin moisture, the length of time the electrodes are worn, and fatigue; astronauts would require further training to overcome muscle atrophy after long term stays in a zero gravity environment. The methodology reportedly does not adapt over time, and further improvement is required in the model-correcting adaptation and calibration stages.

Bayes Network
- Alsayegh (2000), Xiang Chen et al. (2007), Jonghwa Kim et al. (2008): Structured movements were reported to have a higher classification success rate than pointing movements. Common time domain and frequency domain features were extracted, and a k-Nearest Neighbour (k-NN) classifier was combined with a Bayes classifier to obtain good results; adding an accelerometer to the EMG sensors can increase the classification rate by 5-10%. Feature selection is important for better classification, and increasing the number of features does not always produce a better result. The average classification rate reported was over 94%. Small discrepancies can result in major differences in the EMG signal and degrade the performance of the classifier.
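Several entries in Table 1 rest on simple time-domain features. The sketch below (an illustrative reconstruction, not the original authors' pipelines) computes the Mean Absolute Value (MAV) and the Difference Absolute Mean Value (DAMV) per channel, as in the Jong-Sung Kim et al. entry, and pairs them with a k-NN classifier as in the Bayes network entries; the channel layout, window handling, and classifier settings are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def mav(window):
    """Mean Absolute Value of one EMG channel window."""
    return np.mean(np.abs(window))

def damv(window):
    """Difference Absolute Mean Value: mean absolute first difference."""
    return np.mean(np.abs(np.diff(window)))

def feature_vector(windows):
    """Stack the MAV and DAMV of each channel into one feature vector.

    `windows` is an (n_channels, n_samples) array for one analysis window.
    """
    return np.concatenate([[mav(ch), damv(ch)] for ch in windows])

# Hypothetical usage: X_train holds one feature vector per labelled gesture
# window, y_train the gesture labels; a small k suits limited training data.
# clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
# prediction = clf.predict([feature_vector(new_window)])
```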
5. Conclusion
The use of a standard interface to operate a computer is inappropriate for persons suffering from severe physical disability, because it requires reliable use of hand movements. Developing HCIs based on different biosignals will help to improve the quality of life (QOL) of disabled persons. Among biosignals, the EMG signal is one of the most prominent, carrying valuable information about the nervous system. This review paper has focused on the algorithms and methodologies used for classifying EMG signals in the field of HCI. It can be concluded that neural networks dominate the classification of EMG for HCI development. There remain many possible ways to serve disabled people by improving HCI and making its use more natural for them. Besides neural networks, there are several other artificial intelligence techniques whose use may yield a remarkable humanizing of HCI.
References
[1] Alsayegh O.A., 2000. “EMG-based human-machine interface system,” Multimedia and Expo,
2000. ICME 2000. 2000 IEEE International Conference on, vol. 2, pp. 925 – 928.
[2] Barreto A. B., Scargle S. D., Adjouadi M., 2000. “A practical EMG-based human-computer
interface for users with motor disabilities,” Journal of Rehabilitation Research and
Development, vol. 37(1), pp. 53-63.
[3] Barreto A. B., Scargle S. D., and Adjouadi M, 1999. “A Real-Time Assistive Computer
Interface for Users with Motor Disabilities,” ACM SIGCAPH Computers and the Physically
Handicapped, pp. 6-16.
[4] Bates R., Istance H., 2002. “Zooming interfaces!: enhancing the performance of eye controlled
pointing devices,” Proceedings of the fifth international ACM conference on Assistive
technologies Assets’02, 119-126.
[5] Betke M., Gips J. and Fleming P., 2002. "The Camera Mouse: Visual Tracking of Body Features to Provide Computer Access for People With Severe Disabilities," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 10, no. 1, pp. 1-10.
[6] Birbaumer N., Strehl U., and Hinterberger T., 2004. “Future FES Systems - Brain-Computer
Interfaces for Verbal Communication,” Neuroprosthetics - Theory and Practice, K.W. Horch,
G.S. Dhillon, Singapore: World Scientific Publishing Co. Pte. Ltd., pp. 1146-1157.
[7] Bossi S., Micera S., Menciassi A., Beccai L., Hoffmann K. P., Koch K. P., and Dario P., 2006.
"On the Actuation of Thin Film Longitudinal Intrafascicular Electrodes," Proceedings in The
First IEEE/RAS-EMBS International Conference on Biomedical Robotics and
Biomechatronics, pp. 383-388.
[8] Bufalari S., Mattia D., Babiloni F., Mattiocco M., Marciani M. G., Cincotti F., 2006
“Autoregressive spectral analysis in Brain Computer Interface context,” Engineering in
Medicine and Biology Society, 2006. EMBS '06. 28th Annual International Conference of the
IEEE, pp. 3736 – 3739.
[9] Cavallaro E., Micera S., Dario P., Jensen W., and Sinkjaer T., 2003. "On the intersubject
generalization ability in extracting kinematic information from afferent nervous signals," IEEE
Transactions on Biomedical Engineering, vol. 50, pp. 1063-1073.
[10] Chan A., Kevin B., 2005. "Continuous Myoelectric Control for Powered Prostheses using Hidden Markov Models," IEEE Transactions on Biomedical Engineering, vol. 52, pp. 123-134.
[11] Chen Xiang, Zhang Xu, Zhao Zhang-Yan, Yang Ji-Hai, Lantz Vuokko, Wang Kong-Qiao, 2007. "Multiple Hand Gesture Recognition Based on Surface EMG Signal," Bioinformatics and Biomedical Engineering, 2007. ICBBE 2007. The 1st International Conference on, pp. 506-509.
[12] Cheng M., Gao X. R., Gao S. G., and Xu D. F., 2002. "Design and implementation of a brain-
computer interface with high transfer rates," IEEE Transactions on Biomedical Engineering,
vol. 49, pp. 1181-1186.
[13] Chu J. U., Moon I., and Mun M. S., 2006. "A real-time EMG pattern recognition system based
on linear-nonlinear feature projection for a multifunction myoelectric hand," IEEE
Transactions on Biomedical Engineering, vol. 53, pp. 2232-2239.
[14] Curran E. A. and Stokes M. J., 2003. "Learning to control brain activity: A review of the production and control of EEG components for driving brain-computer interface (BCI) systems," Brain Cognition, vol. 51, pp. 326-336.
[15] Dornhege G., Millan J., Hinterberger T., McFarland D., and Muller K.-R., Eds., 2007. Toward Brain Computer Interfacing. Cambridge, MA: MIT Press.
[16] Doyle T. E., Kucerovsky Z., Greason W. D., 2006. “Design of an Electroocular Computing
Interface”, Electrical and Computer Engineering, 2006. CCECE '06. Canadian Conference on,
pp. 1458-1461.
[17] Ebrahimi T., Vesin J. M., and Garcia G., 2003. "Brain-computer interface in multimedia communication," IEEE Signal Process. Mag., vol. 20, no. 1, pp. 14-24.
[18] Eman M. El-Daydamony, Mona El-Gayar and Fatma Abou-Chadi, 2008. "A Computerized
System for SEMG Signals Analysis and Classification,” National Radio Science Conference,
2008. NRSC 2008, pp. 1-7.
[19] Fisch B. J., 1999. Fisch & Spehlmann’s EEG Primer. Amsterdam, The Netherlands: Elsevier.
[20] Foulds R., Arthur J., and Khan A., 1997. “Human Factors Studies in Eye Movements Related to
AAC Head Movement Studies,” Rehab. R&D 1996 Progress reports, vol. 34, pp. 155-156.
[21] Fukuda O., Arita J. and Tsuji T., 2003. “An EMG-Controlled Omnidirectional Pointing Device
Using a HMM-based Neural Network,” Neural Networks, 2003. Proceedings of the
International Joint Conference on, vol. 4, pp. 3195- 3200.
[22] Fukuda O., Tsuji T., Kaneko M., 1999. "An EMG controlled pointing device using a neural network," Systems, Man, and Cybernetics, IEEE SMC '99 Conference Proceedings. 1999 IEEE International Conference on, vol. 4, pp. 63-68.
[23] Gopi E.S., Sylvester Vijay R., Rangarajan V., Nataraj L., 2006. “Brain Computer Interface
Analysis using Wavelet Transforms and Auto Regressive Coefficients,” Electrical and
Computer Engineering, 2006. ICECE '06. International Conference on, pp. 169 – 172.
[24] Guger C., Edlinger G., Harkam W., Niedermayer I., and Pfurtscheller G., 2003. “How many
people are able to operate an EEG-based brain-computer interface (BCI)?,” IEEE Trans. Rehab.
Engng, vol 11(2), pp. 145-147.
[25] Hiley J.B., Redekopp A.H. and Reza Fazel-Rezai, 2006. “A Low Cost Human Computer
Interface based on Eye Tracking,” Proc. 28th Annu. IEEE EMBC, New York, pp 3226 – 3229.
[26] Itou T., Terao M., Nagata J., Yoshida M., 2001. "Mouse cursor control system using EMG,"
Engineering in Medicine and Biology Society, 2001. Proceedings of the 23rd Annual
International Conference of the IEEE, vol. 2, pp. 1368 – 1369.
[27] Wolpaw J. R. and McFarland D. J., 1994. "Multichannel EEG-based brain-computer
communication," Electroencephalography and Clinical Neurophysiology, vol. 90, no. 6, pp.
444-449.
[28] Jacob R. J., 1991. “The use of eye movements in human-computer interaction techniques: what
you look at is what you get,” ACM Trans Inform System, vol. 9(3), pp. 152-62.
[29] Jonghwa Kim, Stephan Mastnik, Elisabeth André, 2008. “EMG-based hand gesture recognition
for real-time biosignal interfacing,” International Conference on Intelligent User Interfaces,
Proceedings of the 13th international conference on Intelligent user interfaces, pp. 30-39.
[30] Jong-Sung Kim, Hyuk Jeong, Wookho Son, 2004. "A new means of HCI: EMG-MOUSE," Systems, Man and Cybernetics, 2004 IEEE International Conference on, vol. 1, pp. 100-104.
[31] Kaufman A.E., Bandopadhay A., Shaviv B.D., 1993. “An eye tracking computer user
interface,” Virtual Reality, 1993. Proceedings., IEEE 1993 Symposium on Research Frontiers
in, pp. 120-121.
[32] Koike Y. and Kawato M., 1996. "Human Interface Using Surface Electromyography Signals,"
Electronics and Communications in Japan (Part III: Fundamental Electronic Science), vol.
79(9), pp. 15–22.
[33] Krueger T.B., Stieglitz T., 2007. “A Naive and Fast Human Computer Interface Controllable
for the Inexperienced - a Performance Study,” Engineering in Medicine and Biology Society,
2007. EMBS 2007. 29th Annual International Conference of the IEEE, pp. 2508-2511.
[34] Kumar D., Poole E., 2002. "Classification of EOG for human computer interface," Engineering in Medicine and Biology, 2002. 24th Annual Conference and the Annual Fall Meeting of the Biomedical Engineering Society, EMBS/BMES Conference, 2002. Proceedings of the Second Joint, vol. 1, pp. 64-67.
[35] Kuno Y., Yagi T., and Uchikawa Y., 1998. “Development of Eye Pointer with Free Head-
Motion,” Proc. of IEEE Int’l Conf. on Engineering in Medicine and Biology Society, pp. 1750-
1752.
[36] Kyung Kwon Jung, Joo Woong Kim, Hyun Kwan Lee, Sung Boo Chung, Ki Hwan Eom, 2007.
“EMG pattern classification using spectral estimation and neural network,” SICE, 2007 Annual
Conference, pp. 1108 – 1111.
[37] Lawrence S. M., Dhillon G. S., Jensen W., Yoshida K., and Horch K. W., 2004. "Acute
peripheral nerve recording characteristics of polymer- based longitudinal intrafascicular
electrodes," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 12, pp.
345-348.
[38] LoPresti E.F., Brienza D.M., Angelo J., and Gilbertson, 2003. “Neck Range of Motion and Use
of Computer Head Control," Journal of Rehabilitation Research and Development, vol. 40, no.
3, pp. 199-212.
[39] Mason K.A., 1969. "Control Apparatus Sensitive to Eye Movement," U.S. Patent 3,462,604.
[40] Millan J. D., Renkens F., Mourino J., and Gerstner W., 2004. "Noninvasive brain-actuated
control of a mobile robot by human EEG," IEEE Transactions on Biomedical Engineering, vol.
51, pp. 1026-1033.
[41] Mobasser Farid, Hashtrudi-Zaad Keyvan, 2006. “A Method for Online Estimation of Human
Arm Dynamics,” Engineering in Medicine and Biology Society, 2006. EMBS '06. 28th Annual
International Conference of the IEEE, pp. 2412 – 2416.
[42] Moon I., Lee M., Mun M., 2004. “A novel EMG-based human-computer interface for persons
with disability,” Mechatronics, 2004. ICM '04. Proceedings of the IEEE International
Conference on, pp. 519 – 524.
[43] Naik G.R., Kumar D.K., Weghorn H., 2007. "Performance comparison of ICA algorithms for Isometric Hand gesture identification using Surface EMG," Intelligent Sensors, Sensor Networks and Information, 2007. ISSNIP 2007. 3rd International Conference on, pp. 613-618.
[44] Naik G.R., Kumar, D.K., Palaniswami M., 2008. “Multi run ICA and surface EMG based
signal processing system for recognising hand gestures,” Computer and Information
Technology, 2008. CIT 2008. 8th IEEE International Conference on, pp. 700 – 705.
[45] Nakanishi S., Kuno Y., Shimada N. and Shirai Y., 1999. "Robotic Wheelchair Based on
Observations of Both User and Environment," Proc. of IROS 99, pp. 912-917.
[46] Nan Bu, Hamamoto T., Tsuji T., Fukuda, O., 2004. “FPGA implementation of a probabilistic
neural network for a bioelectric human interface,” Circuits and Systems, 2004. MWSCAS '04.
The 2004 47th Midwest Symposium on, vol. 3, pp. 29-32.
[47] Nicolelis M. A. L., 2001. "Actions from thoughts," Nature, vol. 409, pp. 403-407.
[48] Ortega R., 2004. "Unusual Access Methods," Proceedings CSUN's 19th Annual International
Conference "Technology and Persons with Disabilities", Los Angeles.
[49] Palaniappan R., 2005. “Brain Computer Interface Design Using Band Powers Extracted During
Mental Tasks,” Neural Engineering, 2005. Conference Proceedings. 2nd International IEEE
EMBS Conference on, pp. 321 – 324.
[50] Park D. G., and Kim H. C., “Muscleman: Wireless input device for a fighting action game
based on the EMG signal and acceleration of the human forearm.”
[http://melab.snu.ac.kr/Research/melab/doc/HCI/muscleman_paper.pdf].
[51] Pavlovic V. I., Sharma R., and Huang T. S., 1997. “Visual interpretation of hand gestures for
human-computer interaction,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. 19, no. 7, pp. 677-695.
[52] Pfurtscheller G., Flotzinger D., Pregenzer M., Wolpaw J.R., McFarland D., 1996. "EEG-based brain computer interface (BCI)," Med. Progr. Technol., vol. 21.
[53] Putnam W., Knapp R.B., 1993. "Real-time computer control using pattern recognition of the
electromyogram,” Engineering in Medicine and Biology Society, 1993. Proceedings of the 15th
Annual International Conference of the IEEE, pp. 1236-1237.
[54] Qiuping Ding, Kaiyu Tong, Guang Li, 2005. “Development of an EOG (Electro-Oculography)
Based Human-Computer Interface,” 27th Annual International Conference of the Engineering
in Medicine and Biology Society, IEEE-EMBS 2005, pp. 6829 – 6831.
[55] Rasmussen R.G., Acharya S., Thakor N.V., 2006. “Accuracy of a Brain-Computer Interface in
Subjects with Minimal Training,” Bioengineering Conference, 2006. Proceedings of the IEEE
32nd Annual Northeast, pp. 167 – 168.
[56] Rehg J. M., and Kanade D. T., 1994. “Vision-based hand tracking for human-computer
interaction,” IEEE Workshop on Motion of Non-Rigid and Articulated Objects, 16–22.
[57] Rosenberg R., 1998. "The biofeedback pointer: EMG control of a two dimensional pointer," Wearable Computers, Digest of Papers. Second International Symposium on, 19-20 Oct. 1998, pp. 162-163.
[58] Salem C, Zhai S., 1997. “An isometric tongue pointing device,” Proceedings of CHI'97, March
22-27.
[59] Surdilovic T., 2005. "A Fuzzy Mouse Cursor Control System for Users with Spinal Cord Injury," Master's Thesis, Georgia State University.
[60] Taberner A. M., Barreto A. B., 1997. "Real-time signal processing towards an EEG-based human-computer interface," Proceedings of the 1997 Florida Conference on Recent Advances in Robotics, Miami, FL, pp. 56-60. Also in Webster J. G., editor, 1998. Medical Instrumentation: Application and Design, 3rd Ed., Boston: Houghton Mifflin Company.
[61] Tsenov G., Zeghbib A.H., Palis F., Shoylev N., Mladenov V., 2006. “Neural Networks for
Online Classification of Hand and Finger Movements Using Surface EMG signals,” Neural
Network Applications in Electrical Engineering, 2006. NEUREL 2006. 8th Seminar on, pp.
167-171.
[62] Tsuji T., Bu N., Fukuda O., Kaneko M., 2003. “A Recurrent Log-Linearized Gaussian Mixture
Network,” IEEE Transactions on Neural Network, vol. 14, no. 2, pp. 304-316.
[63] Tsuji T., Ichinobe H., Fukuda O. and Kaneko M., 1995. “A Maximum Likelihood Neural
Network Based on a Log- Linearized Gaussian Mixture Model,” Proceedings of IEEE
International Conference on Neural Networks, pp. 2479- 2484.
[64] Vaughan T. M., 2003. “Guest Editorial: Brain–computer interface technology: A review of the
second international meeting,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 11, no. 2, pp. 94–
109.
[65] Wang H., Chignell M., Ishizuka M., 2006. "Empathic tutoring software agents using real-time
eye tracking,” In Proceedings of the 2006 symposium on Eye tracking research & applications
ETRA’06, pp. 73-78.
[66] Website: http://orin.com/access/headmouse/index.htm [Last visited 17-05-09]
[67] Weir R., 2003. “Design of artificial arms and hands for prosthetic applications,” In Standard
Handbook of Biomedical Engineering & Design, M. Kutz, Ed. New York: McGraw-Hill, 2003,
pp.32.1–32.61.
[68] Wessberg J., Stambaugh C. R., Kralik J. D., Beck P. D., Laubach M., Chapin J. K., Kim J.,
Biggs J., Srinivasan M. A., and Nicolelis M. A. L., 2000. "Real-time prediction of hand
trajectory by ensembles of cortical neurons in primates," Nature, vol. 408, pp. 361-365.
[69] Wheeler K.R., 2003. "Device control using gestures sensed from EMG," Soft Computing in Industrial Applications, 2003. SMCia/03. Proceedings of the 2003 IEEE International Workshop on, pp. 21-26.
[70] Wheeler K.R., Jorgensen C.C., 2003. “Gestures as input: neuroelectric joysticks and
keyboards,” Pervasive Computing, IEEE, vol. 2, issue 2, pp. 56-61.
[71] Wolpaw J. R., Loeb G. E., Allison B. Z., Donchin E., do Nascimento O. F., Heetderks W. J.,
Nijboer F., Shain W. G., and Turner J. N., 2006. “BCI meeting 2005—Workshop on signals
and recording methods," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 14, no. 2, pp. 138-141.
[72] Wolpaw J.R., Birbaumer N., McFarland D.J., Pfurtscheller G., and Vaughan T.M., 2002.
“Brain-computer interfaces for communication and control,” Electroenceph. Clin.
Neurophysiol., vol. 113, no. 6, pp. 767–791.
[73] Xiang Chen, Xu Zhang, Zhang-Yan Zhao, Ji-Hai Yang, Lantz, V., Kong-Qiao Wang, 2007.
“Hand Gesture Recognition Research Based on Surface EMG Sensors and 2D-accelerometers,”
Wearable Computers, 2007 11th IEEE International Symposium on, pp. 11 – 14.
[74] Yun Liu, Zhijie Gan, Yu Sun, 2008. “Static Hand Gesture Recognition and its Application
based on Support Vector Machines,” Software Engineering, Artificial Intelligence, Networking,
and Parallel/Distributed Computing, 2008. SNPD '08. Ninth ACIS International Conference on,
pp. 517 – 521.
[75] Zhao Lv, Xiaopei Wu, Mi Li, Chao Zhang, 2008. “Implementation of the EOG-Based Human
Computer Interface System,” Bioinformatics and Biomedical Engineering, 2008. ICBBE 2008.
The 2nd International Conference on, pp. 2188 – 2191.