Article

Achieving remote presence using a humanoid robot controlled by a non-invasive BCI device


... They found the field to be growing quickly. In 2010, Kadam and Sheng used the Emotiv EPOC to teleoperate a remote humanoid robot, showing yet another facet of robotics where non-invasive BCI can be successfully applied. Kadam and Sheng's system allowed for the remote control of an Aldebaran Robotics Nao humanoid robot (Figure 1), which the user was able to direct with electroencephalographic commands [8] to move forward, turn left, and turn right. Kadam and Sheng found that their system was effective for navigating a small course. ...
... While the software provided with the Emotiv EPOC allows a user to train a maximum of four outputs, the consistency and effectiveness of those outputs are questionable. Although prior work with this device has shown that control with as many as three, and sometimes four, outputs can be consistently effective [5,6,8,9], none of these systems use more than four outputs. In this work, we look to classify the user's electroencephalographic data into six outputs, greatly increasing the complexity of any mechanisms controlled by the user. ...
... The Nao humanoid robot, by Aldebaran Robotics. Kadam and Sheng [8] used this robot for electroencephalographic teleoperation. ...
... This headset consists of 14 electrodes, which are located on the surface of the scalp (see Figure 1). The EPOC is produced by a company called Emotiv®, which specializes in producing a BCI system and its associated software [52]. The EPOC has three main software modules: (1) the Cognitive suite, to detect thoughts; (2) the Expressive suite, for facial expression recognition; and (3) the Affective suite, to interpret emotions [25,52]. Moreover, the EPOC headset encapsulates a gyroscope to detect head movements and orientation. ...
... OSC is one of the most common protocols for communication among computers, sound synthesizers, and other multimedia devices; it is optimized for modern networking technology and has been used in many application areas [59]. The "Mind your OSCs" application sends the data collected by the EPOC headset to any program that can receive and read OSC messages [52]. However, the Arduino Ethernet Shield hardware module must be added to the system, which can be difficult to deal with. ...
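The OSC wire format mentioned above is simple enough to sketch: an address pattern, a type-tag string, and big-endian arguments, with each string null-terminated and padded to a 4-byte boundary. The address `/cog/push` below is a hypothetical example, not one documented for "Mind your OSCs":

```python
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message carrying float32 arguments."""
    msg = _pad(address.encode("ascii"))
    msg += _pad(("," + "f" * len(floats)).encode("ascii"))
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32 argument
    return msg
```

Any OSC-aware receiver can parse such a packet, which is why the protocol is a convenient bridge between EEG software and external programs.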
Article
Full-text available
Recent research has shown that brain-controlled systems and devices are a breakthrough technology. Such devices can give disabled people the power to control the movement of a wheelchair using different signals (e.g. EEG signals, head movements, and facial expressions). With this technology, disabled people can remotely steer a wheelchair, a computer, or a tablet. This paper introduces a simple, low-cost human-machine interface system to help wheelchair users control their wheelchair from several control sources. To achieve this aim, a laptop was installed on a wheelchair in front of the seated person, and the 14-electrode Emotiv EPOC headset was used to collect the person's head signals from the skull surface. The superficially picked-up signals, containing brain thoughts, head gestures, and facial emotions, were electrically encoded and then wirelessly sent to a personal computer to be interpreted and translated into useful control instructions. Using these signals, two wheelchair control modes were proposed: automatic (using single-modal and multimodal approaches) and manual. The automatic-mode controller was implemented in software (Arduino), whereas a simple hardware controller was used for the manual mode. The proposed solution was built from a wheelchair, the Emotiv EPOC EEG headset, an Arduino microcontroller, and the Processing language. It was then tested by completely wheelchair-bound volunteers on trajectories of different difficulty levels. The results showed that a person's thoughts can be used to seamlessly control his or her wheelchair and that the proposed system can be configured to suit many levels and degrees of disability.
... The 6-class SSVEP model reached an average accuracy of only 83.1%, so only the 4-class SSVEP model met the mandatory accuracy requirement for control of the four robot walking behaviors: walking forward, walking backward, and turning left and right. The work [22] used the 6-class SSVEP model to control a humanoid robot with a response time of 7.52 s, but it did not explain how the accuracy and ITR were obtained. We could not repeat the tests because the detailed experimental procedures and test conditions were omitted, but our single-channel algorithm reached classification accuracy comparable to that achieved by the algorithms [23] used for the tests in [22], as listed in Table 1. ...
... Table 5 summarizes the existing work on the closed-loop control of humanoid robots using the human mind, presenting a performance comparison based on three criteria (although one or two criteria are lacking for some approaches): success rate, response time, and ITR. Works [9,10,13,22,37,38] report the control of a robot using motor imagery models, which deliver low success rates. Works [9,39,40] report experiments involving the control of a virtual robot; by comparison, controlling a real robot would be much more challenging. ...
Article
Full-text available
In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments on the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on an LCD computer monitor with a refresh frequency of 60 Hz. For operational safety, we set a classification accuracy above 90.0% as the most important requirement for telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets under five repetitions per trial was able to achieve accuracy rates over 90.0%. Therefore, four SSVEP stimuli were used to control four types of robot behavior, while six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to make the robot walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded a faster response to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper.
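The information transfer rates quoted above follow Wolpaw's standard formula, which combines the number of classes N, the classification accuracy P, and the trial time T. A minimal sketch follows; note that reported ITRs also depend on which delays are counted in the trial time, so exact reproduction of a paper's figures is not guaranteed:

```python
import math

def itr_bits_per_min(n_classes: int, accuracy: float, trial_time_s: float) -> float:
    """Wolpaw information transfer rate in bits/min."""
    n, p = n_classes, accuracy
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        # penalty terms vanish as accuracy approaches 1
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_time_s
```

For example, a 4-class model at 90.3% accuracy and 3.65 s per trial yields roughly 23 bits/min with this formula, in the same range as the 24.7 bits/min reported above.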
... The rapid improvement of motor imagery BCIs has made them an efficient platform for new developments in areas such as medicine, robotics, and entertainment. Today, BCI technology enables users to move prostheses [15][16][17], navigate through virtual reality [18][19][20], and control telepresence robotic setups by thought alone [21][22][23][24][25]. These advances owe much to the efforts of BCI researchers, who have produced powerful algorithms and high-accuracy classifiers responsible for decoding EEG signals. ...
... Similarly, in the case of BCI control of a moving avatar or robot, the arousal of embodiment in the user is assumed to promote their involvement in the motor imagery task and enhance their skills in navigation and operation. So far, studies with virtual reality setups [40,41] and humanoid robots [21,23] have investigated the conditions and cognitive processes under which the BCI control of another body can elicit a sense of embodiment and ownership in the operators. Recently, Evans and Blanke took a step further and found shared neural underpinnings between motor imagery and illusory body ownership [45]. ...
Article
Full-text available
Brain computer interfaces (BCIs) have been developed and implemented in many areas as a new communication channel between the human brain and external devices. Despite their rapid growth and broad popularity, inaccurate performance and the cost of user training are still the main issues that prevent their application outside research and clinical environments. We previously introduced a BCI system for the control of a very humanlike android that could raise a sense of embodiment and agency in operators solely by their imagining a movement (motor imagery) and watching the robot perform it. Using the same setup, we further discovered that positively biasing subjects' performance both increased their sensation of embodiment and improved their motor imagery skills in a short period. In this work, we studied the shared mechanism between the experience of embodiment and motor imagery. We compared the trend of motor imagery learning when two groups of subjects BCI-operated different-looking robots: a very humanlike android's hands and a pair of metallic grippers. Although our experiments did not show a significant difference in learning between the two groups within one session, the android group revealed better motor imagery skills in the follow-up session, when both groups repeated the task using the non-humanlike gripper. This result shows that motor imagery skills learnt during the BCI operation of humanlike hands are more robust to time and visual feedback changes. We discuss the role of embodiment and the mirror neuron system in this outcome and propose the application of androids for efficient BCI training.
... There have been some applications that successfully utilize this technology in several fields [8][9][10][11]. In 2013, Duvinage et al. [8] proposed a BCI system and discussed the performance of the Emotiv EPOC headset for P300-based applications. Ramirez and Vamvakousis [9] used the Emotiv EPOC device to detect emotion from EEG signals. ...
... Szar and Signorile [10] used the Emotiv system to extract EEG signals from the headset, categorize them into one of several groups, translate that group into a robotic command, and finally control the robot. Thobbi et al. [11] achieved remote presence using the Emotiv EPOC headset to control a humanoid robot. ...
Article
Full-text available
Extracting neural signals to control a quadcopter wirelessly is proposed in this paper for hands-free, silent, and effortless human-mobile interaction with remote presence. Brain activity is recorded in real time and patterns are discovered that relate it to facial-expression states, using a cheap off-the-shelf electroencephalogram (EEG) headset, the Emotiv EPOC device. A tablet-based mobile framework running Android is developed to convert these discovered patterns into commands to drive the quadcopter, an AR Drone 2.0, through a wireless interface. First, neural signals are sequentially extracted from the headset and transmitted to the tablet system. In the tablet system, the large EEG feature vectors are reduced using Principal Component Analysis (PCA) to recognize the facial expression, generate suitable commands, and drive the quadcopter through the wireless interface. Finally, the quadcopter flies smoothly in accordance with the commands converted from the EEG signals. The experimental results show that the proposed system can easily control quadcopters.
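The PCA step described above, reducing a large EEG feature vector before classification, can be sketched with a plain SVD. This is a generic illustration of the technique, not the authors' implementation:

```python
import numpy as np

def pca_reduce(X: np.ndarray, k: int) -> np.ndarray:
    """Project feature vectors (rows of X) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                       # centre each feature
    # SVD of the centred data: rows of Vt are the principal directions,
    # ordered by decreasing explained variance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

The reduced vectors keep the directions of largest variance, which is why a downstream classifier can work on far fewer dimensions than the raw EEG feature set.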
... Several BCI techniques are used in controlling telepresence robots [16][17][18][19][20]. These studies show that choosing the proper paradigm is an essential step in the design of a sensorimotor rhythm (SMR) BCI telepresence system [21][22][23][24]. SMR does not require any external visual or auditory stimuli to transmit control signals to the telepresence robot, as shown in Fig. 8c. ...
Article
Full-text available
In this post-COVID-19 era, humans are psychologically inclined to interact less with other humans. According to the World Health Organization (WHO), there are many scenarios where human interactions cause severe multiplication of viruses from human to human, spreading them worldwide. Most healthcare systems shifted during the pandemic to isolation and a very restricted work environment. Investigations were carried out to find a remedy, and researchers developed different techniques and recommended solutions. The telepresence robot was the solution adopted across industries to continue operations with almost zero physical interaction between humans; it played a vital role in helping humans perform daily routine tasks. Healthcare workers can use telepresence robots to interact with patients who visit the healthcare center for an initial diagnosis, improving healthcare system performance without direct interaction. The presented paper compares different telepresence robots and their controlling techniques in the respective healthcare scenarios. This paper comprehensively analyzes and reviews the applications of the presented techniques to control different telepresence robots. However, our feature-wise analysis also points to specific technical and ethical challenges that remain to be solved. The proposed investigation summarizes the need for further multifaceted research on the design and impact of a telepresence robot for healthcare centers, building on new perceptions gained during the COVID-19 pandemic.
... These studies have shown that the choice of an appropriate paradigm is an essential step in designing the telepresence system. Sensorimotor rhythm (SMR)-based BCIs are among the best candidate paradigms [21]-[24] because these BCIs do not require external visual or auditory stimulation to exert control signals, and are therefore considered more convenient for the user. However, such systems are usually limited to a few task choices, require exhaustive concentration from users, and have extended training times, which together render their usage troublesome for disabled people. ...
Article
Full-text available
Improving the quality of life of people with severe motor paralysis has a significant impact on restoring their functional independence in performing activities of daily living (ADL). Telepresence is a subfield of the robot-assisted route, in which the human plays the role of an operator, sending high-level instructions to an assistive robot while receiving sensory feedback. However, for severely motor-impaired people, conventional interaction modalities may not be suitable due to their complete paralysis. Thus, designing alternative ways of interaction, such as Brain-Computer Interfaces (BCI), is essential for a telepresence capability. We propose a novel framework that integrates a BCI system and a humanoid robot to develop a brain-controlled telepresence system with multimodal control features. In particular, the low-level control is executed by Programming by Demonstration (PbD) models, and the higher-level cognitive commands are produced by a BCI system to perform vital ADLs. The presented system is based on real-time decoding of attention-modulated neural responses elicited in the brain's electroencephalographic signals and on generating multiple control commands. As a result, the system allows a user to interact with a humanoid robot while receiving auditory and visual feedback from the robot's sensors. We validated our system across ten subjects in a realistic scenario. The experimental results show the feasibility of the approach in the design of a telepresence robot with high BCI decoding performance.
... The quality of the data collected using the Emotiv headset was tested in a few recent studies, such as wheelchair control (Stamps and Hamam, 2010) and humanoid robot control (Thobbi et al., 2010). ...
Thesis
Full-text available
As a single-hidden-layer feedforward neural network (SLFN), the conventional extreme learning machine (ELM) reaches high performance rates at an extremely rapid training pace on benchmark datasets. However, when it is applied to real-life large datasets, a decline in training pace and performance occurs, related to the low convergence of the singular value decomposition (SVD) method. This thesis proposes new approaches to the conventional ELM to overcome this problem, using lower-upper (LU) triangularization, Hessenberg decomposition, Schur decomposition, the modified Gram-Schmidt (MGS) process, and Householder reflection methods. Experiments with the conventional and proposed ELMs were conducted on a visual stimuli optimization problem in brain-computer interfaces (BCI), and multi-layer perceptron (MLP), k-nearest neighbour (k-NN), and Bayesian network (BayesNET) classifiers were applied for comparison. Nineteen subjects participated in this experiment, and the results show that if priority is given to training pace, the Hessenberg decomposition method can replace SVD, while if priority is given to performance measures, the Householder reflection method can; the other proposed methods give comparable results. This thesis also shows that visual stimuli that are smaller and have an orange-coloured concentric background have a statistically positive effect on performing a BCI application. In a real-time BCI application, the proposed algorithms can decide in just 17 seconds with selected electroencephalography (EEG) channels, at an accuracy rate of 90.83%.
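The ELM training step the thesis modifies is, at its core, a linear least-squares solve for the output weights after a random hidden layer. A minimal sketch that avoids an explicit SVD by solving ridge-regularized normal equations with an LU-based solver; this is illustrative only, not the thesis's exact LU/Householder variants:

```python
import numpy as np

def elm_train(X, y, n_hidden=50, ridge=1e-6, seed=0):
    """Train a single-hidden-layer ELM: random hidden layer, closed-form output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    # Normal equations solved with an LU-based dense solver -- no SVD involved
    A = H.T @ H + ridge * np.eye(n_hidden)
    beta = np.linalg.solve(A, H.T @ y)
    return (W, b, beta)

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is solved for, training cost is dominated by this one linear solve, which is why the choice of matrix factorization matters for large datasets.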
... On the first issue, experiments with virtual reality for the control of a virtual body [12] or navigation through a virtual environment [1,13,14] have indicated that virtual control through BCI can raise an experience of agency and embodiment in the subjects. In a similar line, other works have employed humanoid robots to assess the experience of immersion in terms of tele-presence [15][16][17][18]. On the second problem, researchers have tested different cognitive tasks, training protocols, and feedback designs to seek an efficient method for the voluntary production and control of cerebral activities by the user [10]. Neuper et al. showed that, in the case of motor imagery, the recognition of motor patterns was mostly achieved when participants used a strategy of kinesthetic motor imagery (a first-person process) rather than when they formed a visual image of another's action (a third-person process) [19], but they could not find an improvement when realistic feedback (a moving hand) was presented versus abstract feedback (a moving bar) [20]. ...
... There have been some applications that successfully utilize this technology in several fields. Thobbi et al. [8] achieved remote presence using the Emotiv EPOC headset to control a humanoid robot. Szarfir and Signorile [9] used the Emotiv system to extract EEG signals from the headset, categorize them into one of several groups, translate that group into a robotic command, and finally control the robot. ...
Article
Full-text available
Brain-computer interface (BCI) has created a new era in neuroscience. It has improved the quality of life of severely disabled patients, allowing them to regain the power of executing their will through their cognitive, expressive, and affective brain activities. In this paper, a wireless electroencephalogram (EEG)-based BCI system was developed to extract EEG signals with the Emotiv EPOC headset for recognizing facial actions. The extracted EEG feature vectors are reduced by the wavelet transform, and the reduced EEG signals can then be clearly classified into six clusters by means of a support vector machine with a Gaussian kernel function. Better correct rates are obtained by the one-order wavelet transform than by the three-order wavelet transform. In order to control an electronic system smoothly in real time, the sampling data have to be reduced; if time consumption is considered, we can choose the one-order wavelet transform with 32 samples. The experimental results showed a promising correct rate for facial-action recognition through the proposed BCI system in real time.
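The wavelet-based reduction described above halves the number of samples at each level by keeping only the approximation coefficients. A one-level sketch follows; the abstract does not state which mother wavelet was used, so the Haar wavelet here is an assumption:

```python
import numpy as np

def haar_dwt_level1(signal: np.ndarray) -> np.ndarray:
    """One-level Haar DWT: keep approximation coefficients, halving the sample count."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:
        s = s[:-1]                    # drop a trailing sample to make the length even
    pairs = s.reshape(-1, 2)
    # scaled pairwise averages = Haar approximation coefficients
    return (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
```

Applying one such level to a 64-sample window yields the 32-sample feature vector mentioned above, which would then be fed to the RBF-kernel SVM.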
... It enables the patient to control the peripheral environment using his or her own thoughts, i.e., the brain's electrical activity. Currently, there are many BCI applications in different areas, such as computer cursor control [23], remote control [24], biometric authentication [25], and emotion prediction [26]. Most remarkably, BCIs have already been applied to control limb prosthetic devices [27], [28], and such BCI-controlled prostheses indicate a feasible neurological rehabilitation technology for motor function recovery. ...
Article
Full-text available
Traditional rehabilitation techniques have limited effects on the recovery of patients with tetraplegia. A brain-computer interface (BCI) provides an interactive channel that does not depend on the normal output of peripheral nerves and muscles. In this paper, an integrated framework of a noninvasive electroencephalogram (EEG)-based BCI with noninvasive functional electrical stimulation (FES) is established, which can potentially enable the upper limbs to achieve more effective motor rehabilitation. EEG signals based on the steady-state visual evoked potential are used in the BCI. Their frequency-domain characteristics, identified by a pattern recognition method, are utilized to recognize the intentions of five subjects with an average accuracy of 73.9%. Furthermore, the movement intentions are transformed into instructions to trigger the FES, which is controlled with an iterative learning control method to stimulate the relevant muscles of the upper limbs, tracking a desired velocity and position. It is a useful technology with the potential to restore, reinforce, or replace lost motor function in patients with neurological injuries. Experiments with five healthy subjects demonstrate the feasibility of BCI integrated with upper-extremity FES toward improved function restoration for an individual with upper-limb disabilities, especially for patients with tetraplegia.
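The iterative learning control used to drive the FES refines the stimulation from trial to trial using the previous trial's tracking error. A toy P-type ILC on a static plant shows the idea; the plant gain and learning gain below are illustrative values, not parameters from the paper:

```python
import numpy as np

def ilc_p_type(reference, plant_gain=0.5, learn_gain=1.0, iterations=10):
    """P-type iterative learning control on a static plant y = g * u."""
    r = np.asarray(reference, dtype=float)
    u = np.zeros_like(r)              # stimulation profile, refined each trial
    errors = []
    for _ in range(iterations):
        y = plant_gain * u            # plant response for this trial
        e = r - y                     # tracking error against the desired profile
        errors.append(np.max(np.abs(e)))
        u = u + learn_gain * e        # learning update applied between trials
    return u, errors
```

For this plant the error contracts by a factor |1 - learn_gain * plant_gain| per trial, so the tracking error shrinks geometrically across repetitions, which is the property ILC exploits for repetitive limb movements.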
... Based on S3 and S5, they had slightly worse performance for LDA3 when gazing at the 12 Hz flashing square, respectively 73.17% and 74.55%. The corresponding frequency band of the SSVEP (11.75-12.25 Hz) overlapped with the alpha band (8-13 Hz). Subjects with stronger alpha waves may therefore perform worse at recognizing the 12 Hz flashing square of this SSVEP-based BCI. ...
Conference Paper
Full-text available
Brain-computer interfaces (BCI) are currently being developed as an alternative technology with the potential to restore lost motor function in patients with neurological injuries. In this paper, we describe an integrated system of a non-invasive electroencephalogram (EEG)-based BCI with non-invasive functional electrical stimulation (FES). This system enables direct brain control of the upper limbs to achieve motor rehabilitation. EEG signals based on the steady-state visual evoked potential (SSVEP) were used in the BCI. A linear discriminant analysis classifier was applied to the frequency-domain characteristics to recognize intentions. The identified intentions were transformed into instructions to trigger the FES, which was controlled with an iterative learning control method to stimulate the relevant muscles of the upper limbs for motor recovery. Results show that the integration of the BCI with upper-extremity FES is feasible, with an average accuracy of about 73.9% over five able-bodied subjects.
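SSVEP decoding reduces, at its simplest, to finding which stimulus frequency dominates the EEG spectrum. The sketch below uses a plain spectral-peak rule rather than the paper's LDA classifier, and the candidate frequencies are hypothetical:

```python
import numpy as np

def detect_ssvep(eeg: np.ndarray, fs: float, candidates) -> float:
    """Pick the stimulus frequency whose spectral power is largest."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    powers = []
    for f in candidates:
        idx = np.argmin(np.abs(freqs - f))   # nearest FFT bin to each candidate
        powers.append(spectrum[idx])
    return candidates[int(np.argmax(powers))]
```

An LDA classifier, as used in the paper, replaces this argmax with a trained linear boundary over such frequency-domain features, which copes better with subject-specific noise like the alpha-band overlap noted above.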
... Thobbi et al. used video goggles to display the robot's movements while controlling the NAO robot with the Emotiv BCI device [11]. ...
Conference Paper
Full-text available
There are several implementations of Brain-Computer Interface (BCI) systems to control robots. Although there is a wide variety of implementations, there has been minimal research on infrastructures that work independently of hardware. Most systems presented in previous work are built to work with specific hardware components. The infrastructure presented in this paper serves as an alternative to these approaches and can function independently of hardware. This paper gives a thorough infrastructure design and presents results collected while using the infrastructure to control a robot with a non-invasive EEG device.
... In particular, the electrodes are attached to a semi-rigid support that constitutes the headset, as seen in Fig. 1, so that electrode positioning, while less accurate, becomes much faster. The EMOTIV headset has become increasingly popular in the BCI community [5,22,35,44] and has already been successfully involved in several BCI designs [35,38,18]. Comparisons of the EMOTIV headset to more traditional EEG amplifiers have already been reported for EEG recording [36] and event-related potential (ERP) extraction [11], indicating worse performance of the EMOTIV headset. ...
Article
Full-text available
A brain-computer interface aims at restoring communication and control in severely disabled people by the identification and classification of EEG features such as event-related potentials (ERPs). The aim of this study is to compare different modalities of EEG recording for the extraction of ERPs. The first comparison evaluates the performance of six disc electrodes against that of the EMOTIV headset, while the second evaluates three different electrode types (disc, needle, and large squared electrode). Ten healthy volunteers gave informed consent and were randomized to try the traditional EEG system (six disc electrodes with gel and skin preparation) or the EMOTIV headset first. Together with the six disc electrodes, a needle and a square electrode of larger surface were simultaneously recording near lead Cz. Each modality was evaluated over three sessions of auditory P300 separated by one hour. No statistically significant effect was found for electrode type, nor for the interaction between electrode type and session number. There was no statistically significant difference in performance between the EMOTIV and the six traditional EEG disc electrodes, although there was a trend toward worse performance of the EMOTIV headset. However, the modality-session interaction was highly significant (P<0.001), showing that, while the performance of the six disc electrodes stays constant over sessions, the performance of the EMOTIV headset drops dramatically between 2 and 3 h of use. Finally, the evaluation of comfort by the participants revealed increasing discomfort with the EMOTIV headset starting in the second hour of use. Our study does not recommend the use of one modality over another based on performance but suggests the choice should be made on more practical considerations such as the expected length of use, the availability of skilled labor for system setup and, above all, patient comfort.
Article
Full-text available
A Brain-Computer Interface (BCI) acts as a communication mechanism using brain signals to control external devices. The generation of such signals is sometimes independent of the nervous system's normal output pathways, as in passive BCI. This is especially beneficial for those who have severe motor disabilities. Traditional BCI systems have depended only on brain signals recorded using electroencephalography (EEG) and have used a rule-based translation algorithm to generate control commands. However, the recent use of multi-sensor data fusion and machine-learning-based translation algorithms has improved the accuracy of such systems. This paper discusses various BCI applications, such as telepresence, grasping of objects, and navigation, that use multi-sensor fusion and machine learning to control a humanoid robot to perform a desired task. The paper also includes a review of the methods and system designs used in the discussed applications.
Conference Paper
Brain computer interface (BCI) is a technology that enables a user to interact with the outside world by measuring and analysing signals associated with neural activity and mapping an identified neural activity pattern to a behavior or action. In this work, a BCI system was developed in which the operation of a quadcopter is controlled by identified brain-concentration and eye-blink patterns. A portable electroencephalography (EEG) headset is used to acquire neural signals around the forehead and both eyes. The acquired EEG data are sent wirelessly to a data-processing computer and processed in real time. Identified brain-concentration and eye-blink patterns are associated with quadcopter operation commands and transmitted to the remote control, which is modified to interface with the computer. The BCI system was evaluated in an experimental study, and classification accuracy was calculated. The experimental results indicate that the system can achieve the expected performance without using EEG data from all channels or complicated data-processing algorithms.
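Eye-blink artifacts in frontal EEG appear as large, sharp deflections, so a minimal detector can be a threshold with a refractory period. This is a toy sketch, not the paper's classifier; the threshold and refractory values are assumptions:

```python
import numpy as np

def count_blinks(frontal_eeg: np.ndarray, threshold: float, refractory: int) -> int:
    """Count eye-blink artifacts as threshold crossings separated by a refractory gap."""
    blinks, last = 0, -refractory
    for i, v in enumerate(frontal_eeg):
        # a crossing only counts if enough samples passed since the last blink
        if v > threshold and i - last >= refractory:
            blinks += 1
            last = i
    return blinks
```

A command mapping could then translate, say, two blinks within a window into a "take off" command; real systems replace the fixed threshold with a trained per-user classifier.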
Book
Introduction to EEG- and Speech-Based Emotion Recognition Methods examines the background, methods, and utility of using electroencephalograms (EEGs) to detect and recognize different emotions. By incorporating these methods in brain-computer interface (BCI), we can achieve more natural, efficient communication between humans and computers. This book discusses how emotional states can be recognized in EEG images, and how this is useful for BCI applications. EEG and speech processing methods are explored, as are the technological basics of how to operate and record EEGs. Finally, the authors include information on EEG-based emotion recognition, classification, and a proposed EEG/speech fusion method for how to most accurately detect emotional states in EEG recordings. Provides detailed insight on the science of emotion and the brain signals underlying this phenomenon; Examines emotions as a multimodal entity, utilizing a bimodal emotion recognition system of EEG and speech data; Details the implementation of techniques used for acquiring as well as analyzing EEG and speech signals for emotion recognition.
Chapter
Full-text available
In this paper, a Brain-Computer Interface (BCI) robot motion control system for patient assistance is designed and implemented. The proposed system acquires data from the patient's brain through a group of sensors using the Emotiv EPOC neuroheadset. The acquired signal is processed, and from the processed data the BCI system determines the patient's requirements and accordingly issues commands (output signals). The processed data are translated into action using the robot, as per the patient's requirement. We developed a graphical user interface (GUI) for controlling the motion of the robot. The proposed system is quite helpful for persons with severe disabilities and is designed to help persons suffering from spinal cord injuries or paralytic attacks. It is also helpful to all those who cannot move physically and find it difficult to express their needs verbally.