ABSTRACT: In this paper we summarize emotion recognition from electroencephalogram (EEG) signals. A combination of surface Laplacian filtering, time-frequency analysis (the Wavelet Transform) and linear classifiers is used to detect discrete human emotions (happy, surprise, fear, disgust and neutral) from EEG signals. EEG signals were collected from 20 subjects through 62 active electrodes placed over the entire scalp according to the International 10-10 system. All signals were collected without much discomfort to the subjects and can reflect the influence of emotion on the autonomic nervous system. An audio-visual (video clip) induction protocol was designed for evoking the discrete emotions. The raw EEG signals are preprocessed by Surface Laplacian filtering and decomposed into five EEG frequency bands using the Wavelet Transform (WT); the "db4" wavelet function is used to extract the statistical features for classifying the emotions. New statistical features based on frequency band energy, and a modified form of it, are discussed for achieving the maximum classification rate. The statistical features are validated using 5-fold cross validation. In this work, kNN outperforms LDA, offering maximum average classification rates of 78.4783% on 62 channels and 73.6087% on 24 channels. Finally, we present the average and individual classification accuracies of the two classifiers to justify the performance of our emotion recognition system.
Proceedings of the International Conference on Man-Machine Systems (ICoMMS), Malaysia; 09/2013
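The band-energy feature described in the abstract above can be sketched in a few lines. The paper does not spell out the "modified" form of the energy feature; total-normalised (relative) band energy is one plausible variant and is an assumption here, as are the toy coefficients standing in for a real db4 decomposition:

```python
def band_energy(coeffs):
    """Energy of one wavelet sub-band: the sum of squared coefficients."""
    return sum(c * c for c in coeffs)

def relative_band_energy(bands):
    """One plausible 'modified' energy feature: each band's energy
    normalised by the total energy across all five bands."""
    energies = {name: band_energy(c) for name, c in bands.items()}
    total = sum(energies.values())
    return {name: e / total for name, e in energies.items()}

# Toy sub-band coefficients standing in for a db4 decomposition of one channel
bands = {
    "delta": [0.5, 1.0], "theta": [1.0, 1.0], "alpha": [2.0, 0.0],
    "beta":  [1.0, 2.0], "gamma": [0.5, 0.5],
}
rel = relative_band_energy(bands)
```

In the paper's pipeline, such per-band features computed over all channels would form the vector handed to the kNN or LDA classifier.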
ABSTRACT: In this paper, lip features are used to classify human emotion with a set of irregular ellipse fitting equations optimised by a Genetic Algorithm (GA). South-east Asian and Japanese faces are considered in this study; the parameters relating the face to the emotions are entirely different in each case. All six universally accepted emotions are considered for classification. The fastest method of extracting lip features is adopted in this study. Observation of the subjects' various emotions reveals unique lip characteristics. GA is adopted to optimise the irregular ellipse characteristics of the lip features in each emotion: the top portion of the lip configuration is part of one ellipse and the bottom portion part of a different ellipse. Two ellipse-based fitness equations are proposed for the lip configuration, and the relevant parameters that define each emotion are listed. This approach has given reasonably successful emotion classification.
International Journal of Artificial Intelligence and Soft Computing 09/2012; 3(2):95-107.
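A minimal GA of the kind described above, fitting the semi-axes of a single axis-aligned ellipse to contour points. The paper's actual fitness equations and chromosome encoding are not given here; the simple residual fitness, truncation selection and arithmetic crossover below are all assumptions:

```python
import random

def ellipse_error(params, points):
    """Fitness: how far the points lie from the ellipse x^2/a^2 + y^2/b^2 = 1."""
    a, b = params
    return sum(abs((x / a) ** 2 + (y / b) ** 2 - 1.0) for x, y in points)

def ga_fit_ellipse(points, pop_size=40, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.5, 10.0), rng.uniform(0.5, 10.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: ellipse_error(p, points))
        parents = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a1, b1 = rng.choice(parents)
            a2, b2 = rng.choice(parents)
            a, b = (a1 + a2) / 2, (b1 + b2) / 2   # arithmetic crossover
            if rng.random() < 0.5:                # Gaussian mutation
                a += rng.gauss(0, 0.2)
                b += rng.gauss(0, 0.2)
            children.append((max(a, 0.1), max(b, 0.1)))
        pop = parents + children
    return min(pop, key=lambda p: ellipse_error(p, points))

# Points sampled from the ellipse a=4, b=2 (e.g. a lip contour)
points = [(4.0, 0.0), (0.0, 2.0), (2.828427, 1.414214), (-4.0, 0.0)]
a, b = ga_fit_ellipse(points)
```

In the paper's setting, two such fits (one for the upper lip, one for the lower) would be run and the recovered parameters used to characterise the emotion.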
ABSTRACT: We recently proposed the guided particle swarm optimisation (GPSO) algorithm as a modification of the popular particle swarm optimisation (PSO) algorithm, with the objective of solving the facial emotion recognition problem. Real-time facial emotion recognition software was implemented using GPSO and tested with 25 subjects, and the result was good both in terms of recognition success rate and recognition speed. As a follow-up, we investigate how our GPSO approach compares with existing popular classification methods, such as the genetic algorithm (GA). We re-implemented our emotion recognition software using GA and tested it on the video recordings of the same 25 subjects used to test the GPSO-based system. Our results show that while the recognition success rate achieved using GA is still reasonable, the recognition speed is very slow, suggesting that the GA method may not be suitable for real-time emotion recognition applications.
International Journal of Artificial Intelligence and Soft Computing 02/2012; 3(3):310-329.
ABSTRACT: In recent years, several researchers have been developing assistive devices for physically disabled people. In this work, the movement of luminous markers driven by facial expressions is used to control the cursor in computer applications. A set of five facial expressions, namely left and right cheek movement, eyebrow raising and lowering, and mouth opening, is used for moving the cursor left and right, up and down, and clicking, respectively. Four very small luminous stickers are fixed on the subject's face and the subject is instructed to perform these facial expressions. A conventional web camera captures the facial expressions and the data are sent to a BASIC STAMP microcontroller through a serial port interface. Marker movements are detected through changes in their x-y coordinates on the video image, and each facial expression is uniquely represented by a binary number. In response to the x-y coordinate changes, the BASIC STAMP microcontroller sends the corresponding binary code to the computer to control the mouse actions.
Communication, Networks and Satellite (ComNetSat), 2012 IEEE International Conference on; 01/2012
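The expression-to-binary-code mapping described above can be sketched as follows. The specific codes, thresholds and the toy decision rules are hypothetical; the paper only states that each expression is represented by a unique binary number:

```python
# Hypothetical binary codes, one per expression (the paper does not list them)
EXPRESSION_CODES = {
    "left_cheek":  0b001,
    "right_cheek": 0b010,
    "brow_up":     0b011,
    "brow_down":   0b100,
    "mouth_open":  0b101,
}
CURSOR_ACTIONS = {
    0b001: "move_left", 0b010: "move_right",
    0b011: "move_up",   0b100: "move_down", 0b101: "click",
}

def classify_marker_motion(dx, dy, mouth_gap):
    """Toy rule set deciding which expression a marker displacement shows.
    Image coordinates are assumed, so dy < 0 means the marker moved up."""
    if mouth_gap > 5:
        return "mouth_open"
    if abs(dx) > abs(dy):
        return "left_cheek" if dx < 0 else "right_cheek"
    return "brow_up" if dy < 0 else "brow_down"

def cursor_action(dx, dy, mouth_gap):
    code = EXPRESSION_CODES[classify_marker_motion(dx, dy, mouth_gap)]
    return CURSOR_ACTIONS[code]
```

In the described system the binary code would be emitted over the serial port by the BASIC STAMP rather than looked up on the PC side.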
ABSTRACT: Developing tools to assist physically disabled and immobilized people through facial expressions is a challenging area of research that has recently attracted many researchers. In this paper, luminance-sticker-based facial expression recognition is proposed. Recognition of facial expressions is carried out by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families at different orders (db1 to db20, Coif1 to Coif5 and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expressions and to evaluate their computational time. The standard deviation is computed over the coefficients of the first level of wavelet decomposition for every order of each wavelet family, and these standard deviations form a set of feature vectors for classification. In this study, conventional validation and cross validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN) and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
Journal of Medical Systems 04/2011; 36(4):2225-34. · 1.78 Impact Factor
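Since db1 is the Haar wavelet, the first-level decomposition and standard-deviation feature described above can be illustrated in plain Python (a minimal sketch on a toy marker trajectory, not the paper's implementation):

```python
import math

def haar_dwt_level1(signal):
    """One level of the db1 (Haar) DWT: pairwise sums (approximation) and
    differences (detail), each scaled by 1/sqrt(2) to preserve energy."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def std_dev(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

coords = [1.0, 3.0, 2.0, 2.0, 5.0, 1.0, 4.0, 4.0]  # toy marker coordinate track
approx, detail = haar_dwt_level1(coords)
feature = std_dev(approx)  # one entry of the feature vector
```

Higher-order families (db2 and up, Coiflets, Symlets) use longer filters but yield the same kind of approximation coefficients, and the feature is computed identically.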
ABSTRACT: A brain machine interface (BMI) design for controlling the navigation of a power wheelchair is proposed. Real-time experiments with four able-bodied subjects are carried out using the BMI-controlled wheelchair. The BMI is based on only two electrodes and is operated by motor imagery of four states. A recurrent neural classifier is proposed for the classification of the four mental states. The real-time experimental results of the four subjects are reported, and problems emerging from asynchronous control are discussed.
Advances in experimental medicine and biology 01/2011; 696:565-72. · 1.83 Impact Factor
ABSTRACT: In this paper, we present a target acquisition scheme for a mobile robot that uses a vision sensor. The scheme accurately measures the location of a target in real-world coordinates and finds the distance from the mobile robot to the target. Fuzzy logic control laws for differential steering control of the autonomous nonholonomic mobile robot are developed. Certain requirements for the fuzzy logic control laws are presented for choosing a suitable rule base so that the system is asymptotically stable. The stability of the proposed fuzzy logic controller is proved theoretically and also demonstrated in simulation studies. Finally, the proposed fuzzy logic controller is implemented on the nonholonomic mobile robot, and the results show that it achieves the desired turning angle and that the mobile robot follows the target satisfactorily.
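A toy fuzzy steering law of the kind described above. The membership breakpoints, the three-rule base and the Sugeno-style weighted-average defuzzification are illustrative assumptions, not the paper's actual controller:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b and support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(heading_error):
    """Map heading error (degrees) to a normalised turn-rate command
    for differential steering, via three fuzzy rules."""
    memberships = {
        "neg":  tri(heading_error, -90.0, -45.0, 0.0),
        "zero": tri(heading_error, -45.0, 0.0, 45.0),
        "pos":  tri(heading_error, 0.0, 45.0, 90.0),
    }
    outputs = {"neg": -1.0, "zero": 0.0, "pos": 1.0}  # rule consequents
    num = sum(memberships[k] * outputs[k] for k in memberships)
    den = sum(memberships.values()) or 1.0
    return num / den  # weighted-average defuzzification
```

The output is antisymmetric in the heading error, which is the kind of property a stability argument for the rule base would rely on.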
ABSTRACT: Recent research in the field of Human-Computer Interaction aims at recognizing the user's emotional state in order to provide a smooth interface between humans and computers. This would make life easier and could serve a vast range of applications in areas such as education and medicine. Human emotions can be recognized by several approaches, such as gestures, facial images, physiological signals and neuroimaging methods. Most researchers have developed user-dependent emotion recognition systems and achieved high classification rates; very few have tried to develop user-independent systems, which obtain lower classification rates. Efficient emotion stimulus methods, larger data samples and intelligent signal processing techniques are essential for improving the classification rate of user-independent systems. In this paper, we present a review of emotion recognition using physiological signals. The various theories of emotion, emotion recognition methodology and current advances in emotion research are discussed in the subsequent sections. This provides an insight into the current state of research and its challenges, so that work on emotion recognition from physiological signals can be advanced toward better recognition.
ABSTRACT: In this paper, we attempt to recognize facial expressions using a Haar-like feature extraction method. A set of luminance stickers was fixed on the subject's face and the subject was instructed to perform the required facial expressions; at the same time, the subject's expressions were recorded on video. A set of 2D coordinate values is obtained by tracking the movements of the stickers in the video using tracking software, and the Haar-like technique is used to extract features. Six statistical features, namely variance, standard deviation, mean, power, energy and entropy, were derived from the approximation coefficients of the Haar-like decomposition. These statistical features were used as input to a neural network for classifying 8 facial expressions. The variance feature offers better results than the other statistical features.
Intelligent and Advanced Systems (ICIAS), 2010 International Conference on; 07/2010
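The six statistical features listed above can be computed from a coefficient vector as follows. The entropy convention (Shannon entropy of the normalised squared coefficients) is one common choice and is an assumption here, since the paper does not state which it uses:

```python
import math

def statistical_features(coeffs):
    """Variance, standard deviation, mean, power, energy and entropy of a
    vector of approximation coefficients."""
    n = len(coeffs)
    mean = sum(coeffs) / n
    var = sum((c - mean) ** 2 for c in coeffs) / n          # population variance
    energy = sum(c * c for c in coeffs)                     # sum of squares
    power = energy / n                                      # mean square
    probs = [c * c / energy for c in coeffs if c != 0]      # normalised c^2
    entropy = -sum(p * math.log(p, 2) for p in probs)       # Shannon entropy, bits
    return {"variance": var, "std": math.sqrt(var), "mean": mean,
            "power": power, "energy": energy, "entropy": entropy}
```

Each tracked sticker coordinate stream contributes one such six-value block to the neural network's input vector.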
ABSTRACT: Robot chair control using an asynchronous brain machine interface (ABMI) based on motor imagery requires sufficient subject training. This paper proposes a generalized brain machine interface design to investigate the feasibility of real-time robot chair control by trained subjects. The performance of the real-time experiments conducted for asynchronous navigation is assessed based on completion of a navigation protocol. The performance of the ABMI and its constraints are discussed.
Signal Processing and Its Applications (CSPA), 2010 6th International Colloquium on; 06/2010
ABSTRACT: Simultaneous Localization and Mapping (SLAM) addresses the problem of a robot navigating and acquiring spatial models of initially unknown environments, without an absolute localization means. To solve this problem, we propose a mapping system that builds feature-based geometrical maps by applying a modified Particle Swarm Optimization (PSO) algorithm. Particles are defined as the location of individual features in the environment where the size of the swarm increases as the features are re-observed at different positions. PSO adjusts the velocity and location of particles towards a target (feature location) as the particles move around the constrained 2-dimensional search space. Finally, the particles will converge around an optimum feature location. The mobile robot is also localized with respect to this map simultaneously. It is demonstrated that accurate feature locations can be obtained using the proposed technique.
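The velocity and position update described above is the standard PSO rule. A plain-PSO sketch (without the paper's modifications, and with standard inertia/acceleration constants chosen here as assumptions) that converges a swarm onto a single 2D feature location:

```python
import random

def pso_locate(feature, iters=100, swarm=20, seed=1):
    """Converge a particle swarm toward a 2D feature location using the
    standard inertia + cognitive + social velocity update."""
    rng = random.Random(seed)
    def cost(p):  # squared distance to the true feature location
        return (p[0] - feature[0]) ** 2 + (p[1] - feature[1]) ** 2
    pos = [[rng.uniform(-10, 10), rng.uniform(-10, 10)] for _ in range(swarm)]
    vel = [[0.0, 0.0] for _ in range(swarm)]
    pbest = [p[:] for p in pos]              # personal bests
    gbest = min(pos, key=cost)[:]            # global best
    w, c1, c2 = 0.7, 1.5, 1.5                # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(swarm):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
            if cost(pos[i]) < cost(gbest):
                gbest = pos[i][:]
    return gbest

est = pso_locate((3.0, -2.0))
```

In the paper's map-building setting, the cost would come from sensor re-observations of the feature rather than from a known ground-truth position.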
ABSTRACT: This paper investigates the performance of the Daubechies wavelet family in recognizing facial expressions. A set of luminance stickers was fixed on the subject's face and the subject was instructed to perform the required facial expressions; at the same time, the subject's expressions were recorded on video. A set of 2D coordinate values is obtained by tracking the movements of the stickers in the video using tracking software. The Daubechies wavelet transform with different orders (db1 to db20) is performed on the obtained data. The standard deviation is derived from the wavelet approximation coefficients for each Daubechies wavelet order and used as input to a neural network for classifying 8 facial expressions.
Signal Processing and Its Applications (CSPA), 2010 6th International Colloquium on; 01/2010
ABSTRACT: This paper presents an integrated system for detecting facial changes of patients in a hospital Intensive Care Unit (ICU). Facial changes are most widely represented by eye and mouth movements. The proposed system uses color images and consists of three modules. The first module implements skin detection to detect the face. The second module constructs eye and mouth maps that respond to changes in the eye and mouth regions. The third module extracts eye and mouth features by processing the image and measuring certain dimensions of the eye and mouth regions. Finally, a fuzzy classifier is used to classify the movements at different illumination levels. From 300 samples of face images, the identification rate of wakefulness reaches 80%.
Signal Processing and Its Applications (CSPA), 2010 6th International Colloquium on; 01/2010
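The skin-detection module described above is commonly implemented with chrominance thresholds in YCbCr space. A sketch using frequently cited (but here hypothetical) Cb/Cr ranges, since the paper does not state its exact thresholds:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Toy per-pixel skin test: luminance is ignored, so the test is
    fairly robust to the illumination changes mentioned in the abstract."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173
```

Running this over every pixel yields a binary skin mask, from which the face region (and then the eye and mouth maps) would be extracted.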
ABSTRACT: This paper presents an integrated system for detecting facial changes of patients in a hospital Intensive Care Unit (ICU). In this research, we consider the facial changes most widely represented by eye and mouth movements. The proposed system uses color images and consists of three modules. The first module implements skin detection to detect the face. The second module constructs eye and mouth maps that respond to changes in the eye and mouth regions. The third module extracts eye and mouth features by processing the image and measuring certain dimensions of the eye and mouth regions. Finally, the results of this work show that k-NN can be used to classify wakefulness with an average accuracy of 94%.
Intelligent and Advanced Systems (ICIAS), 2010 International Conference on; 01/2010