R. Nagarajan

Universiti Malaysia Perlis, Perlis, Perlis, Malaysia

Publications (105) · 26.07 Total Impact

  • Source
    ABSTRACT: Hospital nurses regularly bring their instruments to patients on a cart, pushing or pulling it to the bedside and back many times a day. This can be tiring, since a nurse must attend to many patients in the hospital. This research addresses the problem by constructing a mobile robot for nurses that follows the nurse, carries the medical equipment, and performs obstacle avoidance at the same time. The designed robot can move in and out of constricted spaces and avoid both static and dynamic obstacles. The robot carries a load of 20 kg and is driven by DC geared motors. The mobile platform can rotate about its own axis thanks to the special wheel construction and the placement of the motors. A suitable bank of ultrasonic sensors is selected so that the robot can detect and avoid obstacles around the mobile platform. The robot control and obstacle avoidance system is implemented on a Basic ATOM microcontroller for better performance.
    International Journal of Medical Engineering and Informatics 01/2014; 6(1):1 - 13. DOI:10.1504/IJMEI.2014.058521
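The sense-and-steer loop described in this abstract can be sketched as a simple threshold rule over ultrasonic range readings. The sketch below is an illustrative toy, not the authors' Basic ATOM firmware; the three-sensor layout, the 40 cm clearance threshold, and the command names are all assumptions.

```python
def avoid_obstacle(left_cm, front_cm, right_cm, threshold_cm=40):
    """Choose a drive command from three ultrasonic range readings.

    Readings are distances in centimetres from sensors facing left,
    front and right; `threshold_cm` is the assumed minimum safe
    clearance. Returns 'forward', 'turn_left', 'turn_right' or 'reverse'.
    """
    if front_cm > threshold_cm:
        return "forward"                     # path ahead is clear
    if right_cm > left_cm and right_cm > threshold_cm:
        return "turn_right"                  # more room on the right
    if left_cm > threshold_cm:
        return "turn_left"                   # more room on the left
    return "reverse"                         # boxed in: back out
```

A real controller would also rate-limit the motors and debounce noisy ultrasonic echoes before switching commands.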
  • Source
    M Murugappan · R Nagarajan · S Yaacob ·
    ABSTRACT: In this paper we summarize emotion recognition from electroencephalogram (EEG) signals. A combination of surface Laplacian filtering, time-frequency analysis (wavelet transform) and linear classifiers is used to detect discrete emotions (happy, surprise, fear, disgust, and neutral) from human EEG signals. EEG signals were collected from 20 subjects through 62 active electrodes placed over the entire scalp according to the International 10-10 system. All signals were collected without much discomfort to the subjects and can reflect the influence of emotion on the autonomic nervous system. An audio-visual (video clips) induction-based protocol was designed for evoking the discrete emotions. The raw EEG signals were preprocessed with the surface Laplacian filtering method and decomposed into five EEG frequency bands using the wavelet transform (WT). In our work, the "db4" wavelet function was used to extract statistical features for classifying the emotions. New statistical features based on frequency band energy and its modified form are discussed for achieving the maximum classification rate. The statistical features were validated using 5-fold cross validation. In this work, KNN outperformed LDA, offering maximum average classification rates of 78.4783% on 62 channels and 73.6087% on 24 channels, respectively. Finally, we present the average and individual classification accuracies of the two classifiers to justify the performance of our emotion recognition system.
    Proceedings of the International Conference on Man-Machine Systems (ICoMMS), Malaysia; 09/2013
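The band-energy-plus-classifier pipeline summarised above can be sketched compactly. The code below is a toy illustration, not the authors' system: it substitutes a hand-rolled Haar wavelet for "db4", a synthetic single-channel signal for real EEG, and a minimal k-NN vote for the trained classifier.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficients."""
    s = np.asarray(signal, dtype=float)
    s = s[: len(s) // 2 * 2].reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    return approx, detail

def band_energies(signal, levels=5):
    """Decompose into `levels` detail bands and return the energy
    (sum of squared coefficients) of each band as a feature vector."""
    energies, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energies.append(float(np.sum(detail ** 2)))
    return energies

def knn_predict(train_X, train_y, x, k=3):
    """Minimal k-nearest-neighbour vote by Euclidean distance."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]
```

Band energies separate slow from fast oscillations because each wavelet level halves the frequency range it covers, which is the property the abstract's frequency-band features rely on.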
  • M. Karthigayan · R. Nagarajan · M. Rizon · Sazali Yaacob ·
    ABSTRACT: In this paper, lip features are applied to classify human emotion using a set of irregular ellipse fitting equations optimised by a genetic algorithm (GA). South-east Asian and Japanese faces are considered in this study; the parameters relating the face to each emotion are entirely different in the two cases. All six universally accepted emotions are considered for classification. The fastest available method for extracting lip features is adopted in this study. Observation of the subject's various emotions reveals unique characteristics of the lips. GA is used to optimise the irregular-ellipse characteristics of the lip features in each emotion: the top portion of the lip contour is treated as part of one ellipse and the bottom portion as part of a different ellipse. Two ellipse-based fitness equations are proposed for the lip configuration, and the parameters that define each emotion are listed. This approach has given reasonably successful emotion classification.
    International Journal of Artificial Intelligence and Soft Computing 09/2012; 3(2):95-107. DOI:10.1504/IJAISC.2012.049004
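The ellipse-fitting idea above — score candidate semi-axes by how far lip-edge points fall from the ellipse, then search for the best-scoring axes — can be sketched as follows. This is a simplified (1+1) evolutionary search on one half-ellipse, not the authors' GA or their two fitness equations; the mutation scale and starting point are assumptions.

```python
import math, random

def half_ellipse_fitness(points, a, b):
    """Mean deviation of lip-edge points from the ellipse
    x^2/a^2 + y^2/b^2 = 1 (lower is a better fit)."""
    return sum(abs((x / a) ** 2 + (y / b) ** 2 - 1) for x, y in points) / len(points)

def evolve_ellipse(points, generations=300, seed=1):
    """Toy (1+1) evolutionary search over the semi-axes (a, b)."""
    rng = random.Random(seed)
    a, b = 1.0, 1.0
    best = half_ellipse_fitness(points, a, b)
    for _ in range(generations):
        na = max(0.1, a + rng.gauss(0, 0.3))   # mutate semi-major axis
        nb = max(0.1, b + rng.gauss(0, 0.3))   # mutate semi-minor axis
        f = half_ellipse_fitness(points, na, nb)
        if f < best:                           # keep only improvements
            a, b, best = na, nb, f
    return a, b, best
```

A full GA would keep a population and apply crossover as well as mutation, but the fitness-driven refinement of the ellipse parameters is the same in spirit.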
  • Source
    Bashir Mohammed Ghandi · Ramachandran Nagarajan · Sazali Yaacob · Desa Hazry ·
    ABSTRACT: We recently proposed the guided particle swarm optimisation (GPSO) algorithm as a modification of the popular particle swarm optimisation (PSO) algorithm, with the objective of solving the facial emotion recognition problem. Real-time facial emotion recognition software was implemented using GPSO and tested with 25 subjects, and the results were good in terms of both recognition success rate and recognition speed. As a follow-up, we investigated how our GPSO approach compares with existing popular classification methods, such as the genetic algorithm (GA). We re-implemented our emotion recognition software using GA and tested it on the video recordings of the same 25 subjects used to test the GPSO-based system. Our results show that while the recognition success rate achieved with GA is still reasonable, its recognition speed is very slow, suggesting that the GA method may not be suitable for real-time emotion recognition applications.
    International Journal of Artificial Intelligence and Soft Computing 02/2012; 3(3):310-329. DOI:10.1504/IJAISC.2013.056828
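For reference, the baseline PSO that GPSO modifies can be sketched in a few lines. The sketch below is plain gbest-topology PSO on a synthetic objective; the inertia and acceleration constants are common textbook values, not the parameters used in the paper, and the "guiding" by action-unit positions is not shown.

```python
import random

def pso_minimize(f, dim=2, n_particles=20, iters=100, seed=0):
    """Plain particle swarm optimisation (gbest topology).
    Returns the best position found and its objective value."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive and social pulls
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:            # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:           # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

GPSO, as described in the abstract, additionally biases the swarm toward the observed positions of the facial action units rather than letting it search freely.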
  • M. Vasanthan · M. Murugappan · R. Nagarajan · B. Ilias · J. Letchumikanth ·
    ABSTRACT: In recent years, several researchers have been developing assistive devices for physically disabled people. In this work, the movement of luminous markers driven by facial expressions is used to control the cursor in computer applications. A set of five facial expressions, namely left and right cheek movement, eyebrow raising and lowering, and mouth opening, is used to move the cursor left and right, up and down, and to click, respectively. Four very small luminous stickers are fixed on the subject's face and the subject is instructed to perform the above facial expressions. A conventional web camera captures the facial expressions and sends the data to a BASIC STAMP microcontroller through a serial port interface. Marker movements are detected through changes in their x-y coordinates on the video image, and each facial expression is uniquely represented by a binary number. On a change of x-y coordinates, the BASIC STAMP microcontroller sends the corresponding binary code to the computer to control the mouse actions.
    Communication, Networks and Satellite (ComNetSat), 2012 IEEE International Conference on; 01/2012
  • Source
    M. Murugappan · R. Nagarajan · S. Yaacob ·

    Discrete Wavelet Transforms - Biomedical Applications, 09/2011; ISBN: 978-953-307-654-6
  • R Nagarajan · M Hariharan · M Satiyan ·
    ABSTRACT: Developing tools to assist physically disabled and immobilized people through facial expression is a challenging area of research that has attracted many researchers recently. In this paper, facial expression recognition based on luminance stickers is proposed. Facial expressions are recognized by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families at different orders (db1 to db20, Coif1 to Coif5 and Sym2 to Sym8) are investigated for their performance in recognizing facial expressions and for their computational time. The standard deviation of the first-level wavelet decomposition coefficients is computed for every order of each wavelet family and used to form feature vectors for classification. In this study, conventional validation and cross validation are performed to evaluate the efficiency of the suggested feature vectors. Three classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN) and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
    Journal of Medical Systems 04/2011; 36(4):2225-34. DOI:10.1007/s10916-011-9690-5 · 2.21 Impact Factor
  • Source
    S Jerritta · M Murugappan · R Nagarajan · Khairunizam Wan ·
    ABSTRACT: Recent research in the field of Human Computer Interaction aims at recognizing the user's emotional state in order to provide a smooth interface between humans and computers. This would make life easier and could serve a wide range of applications in areas such as education and medicine. Human emotions can be recognized by several approaches, such as gestures, facial images, physiological signals and neuroimaging methods. Most researchers have developed user-dependent emotion recognition systems and achieved high classification rates; very few have attempted user-independent systems, which have yielded lower classification rates. Efficient emotion stimulus methods, larger data samples and intelligent signal processing techniques are essential to improving the classification rate of user-independent systems. In this paper, we present a review of emotion recognition using physiological signals. The various theories of emotion, emotion recognition methodology and current advancements in emotion research are discussed in the subsequent topics. This provides an insight into the current state of research and its challenges, so that work on emotion recognition from physiological signals can be advanced toward better recognition.
  • Source
    K. Mohamad · A.A. Ali · R. Nagarajan ·
    ABSTRACT: This paper deals with the application of Fuzzy-Neural Networks (FNNs) to multi-machine system control, applied to hot steel rolling. The electrical drives used in the rolling system are a set of three-phase induction motors (IMs) under indirect field-oriented (IFO) control. The fundamental goal of this type of control is to eliminate the coupling influence through coordinate transformation, so that the AC motor behaves like a separately excited DC motor. A fuzzy-neural network is then used to control the IM speed and the rolling plant. In this work, MATLAB/SIMULINK models are proposed and implemented for the entire structure. Simulation results are presented to verify the effectiveness of the proposed control schemes. The proposed system is found to be robust in that it considerably suppresses disturbances.
    Energy, Power and Control (EPC-IQ), 2010 1st International Conference on; 01/2011
  • Source
    Mohd Saifizi Saidon · Hazry Desa · R. Nagarajan · MP Paulraj ·
    ABSTRACT: In this paper, we present a target acquisition scheme for a mobile robot that uses a vision sensor. The scheme accurately measures the location of a target in real-world coordinates and finds the distance from the mobile robot to the target. Fuzzy logic control laws for differential steering of the autonomous nonholonomic mobile robot are developed. Requirements for the fuzzy logic control laws are presented for choosing a suitable rule base so that the system is asymptotically stable. The stability of the proposed fuzzy logic controller is theoretically proved and also demonstrated in simulation studies. Finally, the proposed fuzzy logic controller is implemented on the nonholonomic mobile robot, and the results show that it achieves the desired turning angle and that the mobile robot follows the target satisfactorily.
  • C R Hema · M P Paulraj · Sazali Yaacob · Abdul Hamid Adom · R Nagarajan ·
    ABSTRACT: A brain machine interface (BMI) design for controlling the navigation of a power wheelchair is proposed. Real-time experiments with four able bodied subjects are carried out using the BMI-controlled wheelchair. The BMI is based on only two electrodes and operated by motor imagery of four states. A recurrent neural classifier is proposed for the classification of the four mental states. The real-time experiment results of four subjects are reported and problems emerging from asynchronous control are discussed.
    Advances in Experimental Medicine and Biology 01/2011; 696:565-72. DOI:10.1007/978-1-4419-7046-6_57 · 1.96 Impact Factor
  • Source
    MN Mansor · S Yaacob · R Nagarajan · H Muthusamy ·

  • ABSTRACT: This work presents a simple marker-based person detection method for a mobile robot that can follow a person in indoor environments. Several researchers have used expensive laser range finders or RFID, which provide very accurate range measurements, for person detection. Recently, vision-based approaches using stereo cameras have become popular for detecting a person within a group of people. In the proposed implementation, an inexpensive single camera is used to acquire video frames, detect a specific target person and determine his or her position. A new detection method using a colour- and shape-based marker technique is developed in this work. The experimental results show that the proposed algorithm can detect a target person under various marker features and lighting conditions. Index Terms: Following robot; Mono-vision; Object tracking; Marker-based detection.
  • MN Mansor · S Yaacob · R Nagarajan · H Muthusamy ·

  • Source
    M Satiyan · R Nagarajan · M Hariharan ·

  • Source
    Murugappan Murugappan · Ramachandran Nagarajan · Sazali Yaacob ·
    ABSTRACT: In this paper, we present human emotion assessment using electroencephalogram (EEG) signals. The combination of surface Laplacian (SL) filtering, time-frequency analysis of wavelet transform (WT) and linear classifiers are used to classify discrete emotions (happy, surprise, fear, disgust, and neutral). EEG signals were collected from 20 subjects through 62 active electrodes, which were placed over the entire scalp based on the International 10-10 system. An audio-visual (video clips) induction-based protocol was designed for evoking discrete emotions. The raw EEG signals were preprocessed through surface Laplacian filtering method and decomposed into five different EEG frequency bands (delta, theta, alpha, beta and gamma) using WT. In this work, we used three different wavelet functions, namely: "db8", "sym8" and "coif5", for extracting the statistical features from EEG signal for classifying the emotions. In order to evaluate the efficacy of emotion classification under different sets of EEG channels, we compared the classification accuracy of the original set of channels (62 channels) with that of a reduced set of channels (24 channels). The validation of statistical features was performed using 5-fold cross validation. In this work, K nearest neighbor (KNN) outperformed linear discriminant analysis (LDA) by offering a maximum average classification rate of 83.04% on 62 channels and 79.17% on 24 channels, respectively. Finally, we present the average classification accuracy and individual classification accuracy of two different classifiers for justifying the performance of our emotion recognition system.
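The 5-fold cross validation used to validate the statistical features above works by holding out each fifth of the data in turn. A minimal index-splitting utility (an illustrative sketch; real use would shuffle or stratify the samples first) looks like this:

```python
def k_fold_indices(n_samples, k=5):
    """Split sample indices 0..n-1 into k contiguous folds and yield
    (train_indices, test_indices) pairs, as in k-fold cross validation.
    Every sample appears in exactly one test fold."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size
```

The classifier (KNN or LDA in the paper) is trained on each `train` set and scored on the corresponding `test` set, and the k scores are averaged.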
  • ABSTRACT: In this paper, a fuzzy classifier is presented for detecting facial changes of patients in a hospital Intensive Care Unit (ICU). Facial changes are most widely represented by eye and mouth movements. The proposed system uses colour images and consists of three modules. The first module performs skin detection to locate the face. The second module constructs eye and mouth maps that respond to changes in the eye and mouth regions. The third module extracts the features of the eyes and mouth by processing the image and measuring certain dimensions of the eye and mouth regions. Finally, a fuzzy classifier is used to classify the movements at different illumination levels. On 300 samples of face images, the identification rate of wakefulness reaches 97%.
    Industrial Electronics & Applications (ISIEA), 2010 IEEE Symposium on; 10/2010
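A fuzzy classifier of the kind described above combines graded memberships with min/max rule evaluation. The sketch below is a toy two-rule version, not the authors' classifier: the triangular membership breakpoints, the normalised eye/mouth openness inputs, and the two output labels are all assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def wakefulness(eye_open, mouth_open):
    """Evaluate two fuzzy rules on normalised openness values in [0, 1].
    Rule 1: eyes open AND mouth closed -> awake.
    Rule 2: eyes closed               -> asleep.
    Returns the label of the stronger rule."""
    eyes_open_deg = tri(eye_open, 0.3, 1.0, 1.7)     # support extends past 1.0
    eyes_closed_deg = tri(eye_open, -0.7, 0.0, 0.7)  # so the extremes peak at 1
    mouth_closed_deg = tri(mouth_open, -0.7, 0.0, 0.7)
    awake = min(eyes_open_deg, mouth_closed_deg)     # fuzzy AND = min
    asleep = eyes_closed_deg
    return "awake" if awake >= asleep else "asleep"
```

Using min for AND and comparing rule strengths is the standard Mamdani-style evaluation; a full system would defuzzify over more rules and handle the illumination levels mentioned in the abstract.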
  • M. Satiyan · R. Nagarajan ·
    ABSTRACT: In this paper, we attempt to recognize facial expressions using a Haar-like feature extraction method. A set of luminance stickers was fixed on the subject's face, and the subject was instructed to perform the required facial expressions while being recorded on video. A set of 2D coordinate values is obtained by tracking the movement of the stickers in the video using tracking software. A Haar-like technique is used to extract the features. Six statistical features, namely variance, standard deviation, mean, power, energy and entropy, are derived from the approximation coefficients of the Haar-like decomposition. These statistical features are used as input to a neural network for classifying eight facial expressions. The variance feature offers better results than the other statistical features.
    Intelligent and Advanced Systems (ICIAS), 2010 International Conference on; 07/2010
  • Source
    C.R. Hema · M.P. Paulraj · S. Yaacob · A.H. Adom · R. Nagarajan ·
    ABSTRACT: Robot chair control using an asynchronous brain machine interface (ABMI) based on motor imagery requires sufficient subject training. This paper proposes a generalized brain machine interface design to investigate the feasibility of real-time robot chair control by trained subjects. The performance of the real-time experiments conducted for asynchronous navigation is assessed by completion of a navigation protocol. The performance of the ABMI and its constraints are discussed.
    Signal Processing and Its Applications (CSPA), 2010 6th International Colloquium on; 06/2010
  • Bashir Mohammed Ghandi · R. Nagarajan · Hazry Desa ·
    ABSTRACT: Emotion detection is receiving a lot of attention from researchers due to its potential to improve human-computer interaction. Recently, we proposed a modification of the Particle Swarm Optimization (PSO) algorithm for the purpose of applying it to emotion detection. Our algorithm, which we call Guided Particle Swarm Optimization (GPSO), studies the movements of specific points, called action units (AUs), placed on the face of a subject as the subject expresses different emotions. A swarm of particles is defined such that each particle consists of components from the neighborhood of each AU. However, instead of applying pure PSO to the swarm to detect emotions, the algorithm takes the positions of the AUs into account, so the swarm is effectively guided to converge on the path of the AUs. We showed that this approach works very well and makes the swarm converge very quickly to identify the emotion being expressed. One limitation of our earlier system was that the AUs had to be physically marked on the subject before the video clips were recorded. In this paper, we present an improvement in which we specify the AUs at runtime in a video stream and then apply the Lucas-Kanade (LK) algorithm to keep track of their positions, making the system work in real time with the same promising detection success rates. Potential application areas of our system include medical engineering, forensic applications by police, and psychiatric applications.
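The LK tracking step mentioned above estimates how a point moves between frames from image gradients. The sketch below is a minimal single-window Lucas-Kanade solve on synthetic data, an illustration only; a practical AU tracker would use a pyramidal LK implementation (e.g. in OpenCV) over real video with one window per action unit.

```python
import numpy as np

def lk_flow(I1, I2):
    """Estimate one translation (u, v) between two frames by solving
    the Lucas-Kanade normal equations over the whole window:
        [sum Ix^2   sum IxIy] [u]   = -[sum Ix It]
        [sum IxIy   sum Iy^2] [v]     [sum Iy It]
    """
    Iy, Ix = np.gradient(I1.astype(float))      # spatial gradients (axis 0 = y)
    It = I2.astype(float) - I1.astype(float)    # temporal gradient
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    u, v = np.linalg.solve(A, b)                # flow in x and y
    return u, v
```

The first-order Taylor approximation behind these equations holds only for small displacements, which is why practical trackers iterate over an image pyramid for larger motions.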

Publication Stats

604 Citations
26.07 Total Impact Points


  • 2007-2014
    • Universiti Malaysia Perlis
      • School of Mechatronic Engineering
      Perlis, Perlis, Malaysia
  • 2005-2007
    • Universiti Utara Malaysia
      Kuala Lumpur, Kuala Lumpur, Malaysia
  • 2002-2007
    • Universiti Malaysia Sabah (UMS)
      • School of Engineering and Information Technology
      Jesselton, Sabah, Malaysia
  • 2000
    • University of Science Malaysia
      • School of Industrial Technology
      Penang, Penang, Malaysia