Conference Paper

Towards non-invasive brain-computer interface for hand/arm control in users with spinal cord injury


Abstract

Spinal cord injury (SCI) can disrupt the communication pathways between the brain and the rest of the body, restricting the ability to perform volitional movements. Neuroprostheses or robotic arms can enable individuals with SCI to move independently, improving their quality of life. The control of such restorative or assistive devices is facilitated by brain-computer interfaces (BCIs), which convert brain activity into control commands. In this paper, we summarize recent findings of our research, whose main aim is to provide reliable and intuitive control. We propose a framework that encompasses the detection of goal-directed movement intention, movement classification and decoding, detection of error-related potentials, and delivery of kinesthetic feedback. Finally, we discuss future directions that could prove promising for translating the proposed framework to individuals with SCI.


... The study involved five participants with SCI, and the results showed that the proposed BCI system effectively controlled the robotic arm. Specifically, the participants could perform various tasks, including reaching, grasping, and releasing objects, using the BCI system [5]. Study 3 (Samejima et al.; Figure 2, bar chart produced in RStudio): rehabilitation assessment results were gathered from Cui et al., "Computer-Spinal Interface Restores Upper Limb Function After Spinal Cord Injury." ...
... Neural recovery is crucial for many functions, and various neuropathological changes can lead to different dysfunctions in nervous system diseases. Studies have shown that BCI positively affects functional mechanisms such as cortical excitability [5], cerebral plasticity, and functional connectivity [4]. As a result, BCI is now being used as a rehabilitation method for upper limb dysfunction based on the principle of neuroplasticity. However, there are several limitations to consider. ...
Article
Full-text available
Spinal cord injuries (SCI) can result in lifelong disability from the disconnection of the neural pathways in the spine to the brain, causing motor and functional deficits. In recent years, there has been growing interest in applying Brain-Computer Spinal Interface (BCI) technology to address the challenges individuals face with SCI. A systematic literature review resulted in a comprehensive analysis of existing studies on BCI for SCI. The selected studies covered various approaches to BCI for restoring motor function in participants with an SCI. The BCI interventions ranged from non-invasive electroencephalography-based systems to more invasive technologies involving neural implants on the cortex of the brain and the epidural space on the spine. The key findings of this review suggest that BCI technologies hold promise in significantly improving the quality of life and functional capabilities of individuals with SCI. Many studies reported significant improvements in motor function, communication, and overall well-being following BCI interventions. These interventions often facilitated direct communication between the brain and the spinal cord, bypassing the disconnected pathways between the brain and the spine caused by SCI.
... In summary, the proposed BCI system's classifier outcomes indicate that the BCI can successfully discriminate between the different MI conditions and a resting period. This is consistent with the results of studies published within the field [25], [29], [30], [32], [70], [72], [76]. Therefore, we can expect that most healthy people and SCI patients with mild to moderate disability levels could be potential users of this technology. ...
Article
Full-text available
This work presents the design, implementation, and feasibility evaluation of a Motor Imagery (MI) based Brain-Computer Interface (BCI) developed to control a Functional Electrical Stimulation (FES) device. The aim of this system is to assist the upper limb motor recovery of patients with spinal cord injury (SCI). With this BCI-controlled FES system, the user performs open and close MI with either the left or right hand, which, if detected, is used to provide visual feedback and electrical stimulation to muscles in the forearm to perform the corresponding grasping movement. The system was evaluated with seven healthy subjects (HS group) and two SCI patients (SCI group) in several experimental sessions across different days. Each experimental session consisted of a training routine devoted to collecting calibration EEG data to train the BCI machine learning model, and of a validation routine devoted to validating the system in online operation. The online system validation showed an accuracy of recognition of the MI task that ranged between 78% and 81% for HS participants and between 63% and 93% for SCI participants. Additionally, the time taken by the BCI system to trigger the FES device ranged between 7.05 s and 7.29 s for HS participants and between 8.43 s and 13.91 s for SCI participants. Finally, significant negative correlations were observed (r = -0.418, p = 0.024 and r = -0.437, p = 0.018 for left and right hand MI conditions, respectively) between online BCI performance and a quantitative EEG parameter based on event-related desynchronization/synchronization analysis. The results of this work indicate the feasibility of the proposed BCI coupled to a FES device for SCI patients with a moderate level of disability and provide evidence of the functionality of the proposed BCI system in a motor rehabilitation context.
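The quantitative EEG parameter used in the correlation analysis above is based on event-related desynchronization/synchronization (ERD/ERS), classically computed as the percent band-power change of a task period relative to a reference period. A minimal numpy sketch, assuming the data are already band-pass filtered; the exact bands and windows of the study may differ:

```python
import numpy as np

def erd_percent(bandpassed, fs, baseline, task):
    """ERD/ERS as percent band-power change relative to a reference period.

    bandpassed : (n_trials, n_samples) band-pass filtered single-channel EEG
    baseline, task : (start_s, end_s) windows in seconds
    Negative values indicate desynchronization (ERD), positive values ERS.
    """
    power = np.mean(bandpassed ** 2, axis=0)          # trial-averaged instantaneous power
    b0, b1 = (int(t * fs) for t in baseline)
    t0, t1 = (int(t * fs) for t in task)
    ref = power[b0:b1].mean()
    return 100.0 * (power[t0:t1].mean() - ref) / ref
```

A stronger ERD (more negative value) during MI is the pattern such BCIs detect, which is why it correlates with online performance.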
... However, the disparity between a mental task devised for control and the intended task at hand often makes the operation of BCIs unintuitive, apart from introducing a mental workload that is disproportionate to the intended action. As a result, recent research in the field has turned to BCI solutions offering more natural control to the user; in particular, solutions which allow unrestricted eye movements, as can be expected in everyday situations, as well as intuitive paradigms that connect the task in a straightforward way with the intended goal (Ofner and Müller-Putz, 2012; Müller-Putz et al., 2016, 2022; Müller-Putz, Pereira and Ofner, 2018; Edelman et al., 2019; Mondini et al., 2020). ...
Article
Full-text available
Objective. In people with a cervical spinal cord injury (SCI) or degenerative diseases leading to limited motor function, restoration of upper limb movement has been a goal of the brain-computer interface field for decades. Recently, research from our group investigated non-invasive and real-time decoding of continuous movement in able-bodied participants from low-frequency brain signals during a target-tracking task. To advance our setup towards motor-impaired end users, we consequently chose a new paradigm based on attempted movement. Approach. Here, we present the results of two studies. During the first study, data of ten able-bodied participants completing a target-tracking/shape-tracing task on-screen were investigated in terms of improvements in decoding performance due to user training. In a second study, a spinal cord injured participant underwent the same tasks. To investigate the merit of employing attempted movement in end users with SCI, data of the spinal cord injured participant were recorded twice; once within an observation-only condition, and once while simultaneously attempting movement. Main results. We observed mean correlations well above chance level for continuous motor decoding based on attempted movement in able-bodied participants. Additionally, no global improvement over three sessions within five days, both in sensor and in source space, could be observed across all participants and movement parameters. In the participant with SCI, decoding performance well above chance was found. Significance. No presence of a learning effect in continuous attempted movement decoding in able-bodied participants could be observed. In contrast, non-significantly varying decoding patterns may promote the use of source space decoding in terms of generalized decoders utilizing transfer learning. 
Furthermore, above-chance correlations for attempted movement decoding ranging between those of observation only and executed movement were seen in one spinal cord injured participant, suggesting attempted movement decoding as a possible link between feasibility studies in able-bodied and actual applications in motor impaired end users.
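Continuous decoding of the kind described here is commonly realized as a linear mapping from time-lagged low-frequency EEG samples to the movement parameter, evaluated by the correlation between decoded and target trajectories. A minimal sketch under those assumptions (the lags, ridge regularization, and preprocessing below are illustrative, not the authors' exact pipeline):

```python
import numpy as np

def lagged_design(eeg, lags):
    """Stack time-lagged copies of (n_samples, n_channels) EEG into a design matrix."""
    n, c = eeg.shape
    X = np.zeros((n, c * len(lags)))
    for i, lag in enumerate(lags):
        X[lag:, i * c:(i + 1) * c] = eeg[:n - lag]
    return X

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

def pearson(a, b):
    """Correlation between decoded and target trajectories."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Chance level is then estimated empirically, e.g. by decoding with temporally shuffled targets, and "above chance" means the session correlation exceeds that null distribution.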
... Brain-computer interfaces (BCIs) [1,2] are characterized by offering a user direct control over an interface without prior muscular activity. For years, a goal of our group has been the restoration of arm and hand movement respectively in people with cervical spinal cord injury (SCI) [3,4,5]. Through a BCI, these persons should be enabled to control an end-effector, e.g., a neuroprosthesis or a robotic limb. ...
Conference Paper
Full-text available
Recent research from our group has shown that continuous online decoding of executed movement from non-invasive low-frequency brain signals is feasible. In order to cater the setup to actual end users, we proposed a new paradigm based on attempted movement, and after conducting a pilot study, we hypothesize that user control in this setup may be improved by learning over multiple sessions. Over three sessions within five days, we acquired 60-channel electroencephalographic (EEG) signals from nine able-bodied participants while they tracked a moving target or traced depicted shapes on a screen. Though no global learning effect could be identified, increases in correlations between target and decoded trajectories could be observed for approximately half of the participants.
... Similar work has also been completed on developing more natural, intuitive neuroprosthesis control using non-invasive EEG-based BCIs. The goal is to improve the quality of life for people with cervical spinal cord injuries [17]. However, all the aforementioned work is directly related to clinical applications alone and does not explore multi-party, social BCI integration. ...
Chapter
This paper discusses a general architecture for multi-party electroencephalography (EEG)-based robot interactions. We explored a system with multiple passive Brain-Computer Interfaces (BCIs) which influence the mechanical behavior of competing mobile robots. Although EEG-based robot control has been previously examined, previous investigations mainly focused on medical applications. Consequently, there is limited work for hybrid control systems that support multi-party, social BCI. Research on multi-user environments, such as gaming, have been conducted to discover challenges for non-medical BCI-based control systems. The presented work aims to provide an architectural model that uses passive BCIs in a social setting including mobile robots. Such structure is comprised of robotic devices able to act intelligently using vision sensors, while receiving and processing EEG data from multiple users. This paper describes the combination of vision sensors, neurophysiological sensors, and modern web technologies to expand knowledge regarding the design of social BCI applications that leverage physical systems.
... A current trend in assistive MI BCIs (e.g., "MoreGrasp" [15]) is to combine the BCI with multi-electrode arrays, which may enable different grasp patterns [16] but are limited to unimanual control. People with complete tetraplegia, who have bilateral upper limb paralysis, would benefit from assistive devices that allow a selection between uni- and bimanual tasks, enabling a wider range of motor tasks and increasing patients' independence. ...
Article
Bimanual movements are an integral part of everyday activities and are often included in rehabilitation therapies. Yet electroencephalography (EEG) based assistive and rehabilitative brain-computer interface (BCI) systems typically rely on motor imagery (MI) of one limb at a time. In this study we present a classifier which discriminates between uni- and bimanual MI. Ten able-bodied participants took part in cue-based motor execution (ME) and MI tasks of the left (L), right (R) and both (B) hands. A 32-channel EEG was recorded. Three linear discriminant analysis classifiers, based on MI of L-R, L-B and R-B hands, were created, with features based on wide-band (8-30 Hz) Common Spatial Patterns (CSP) and band-specific Common Spatial Patterns (CSPb). Event-related desynchronization (ERD) was significantly stronger during bimanual compared to unimanual ME on both hemispheres. Bimanual MI resulted in bilateral, parietally shifted ERD of similar intensity to unimanual MI. The average classification accuracy for CSP and CSPb was comparable for the L-R task (73±9% and 75±10%, respectively) and for the L-B task (73±11% and 70±9%, respectively). However, for the R-B task (67±3% and 72±6%, respectively) it was significantly higher for CSPb (p=0.0351). Six participants whose L-R classification accuracy exceeded 70% were included in an online task a week later, using the unmodified offline CSPb classifier, achieving 69±3% and 66±3% accuracy for the L-R and R-B tasks, respectively. A combined uni- and bimanual BCI could be used for restoration of motor function in highly disabled patients and for motor rehabilitation of patients with motor deficits.
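The CSP-plus-LDA pipeline named in this abstract has a compact standard form: CSP spatial filters are obtained from the two class covariances via whitening and eigendecomposition, log-variance features are taken from the most discriminative filters, and a two-class LDA separates them. A minimal numpy sketch of that generic pipeline (band-pass filtering, trial windowing, and the CSPb band selection of the paper are omitted):

```python
import numpy as np

def avg_cov(trials):
    """Average normalized spatial covariance of (n_trials, n_ch, n_samples) data."""
    return np.mean([t @ t.T / t.shape[1] for t in trials], axis=0)

def csp_filters(cov_a, cov_b, n_pairs=1):
    """CSP: whiten the composite covariance, then eigendecompose class a in
    the whitened space; extreme eigenvalues give the most discriminative filters."""
    evals, evecs = np.linalg.eigh(cov_a + cov_b)
    whit = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, v = np.linalg.eigh(whit @ cov_a @ whit.T)      # ascending eigenvalues
    w = v.T @ whit                                    # rows are spatial filters
    keep = np.r_[np.arange(n_pairs), np.arange(len(d) - n_pairs, len(d))]
    return w[keep]

def logvar_features(trials, w):
    """Log of normalized variance of spatially filtered trials."""
    filtered = np.einsum('fc,tcs->tfs', w, trials)
    var = filtered.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

def lda_fit(X, y):
    """Two-class LDA with shared within-class covariance; returns (w, bias)."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
    w = np.linalg.solve(sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    return w, -w @ (m0 + m1) / 2
```

In practice one classifier of this form would be trained per pair (L-R, L-B, R-B), exactly as the study describes.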
Article
Full-text available
Arguably, automation is fast transforming many enterprise business processes, transforming operational jobs into monitoring tasks. Consequently, the ability to sustain attention during extended periods of monitoring is becoming a critical skill. This manuscript presents a Brain-Computer Interface (BCI) prototype which seeks to combat decrements in sustained attention during monitoring tasks within an enterprise system. A brain-computer interface is a system which uses physiological signals output by the user as an input. The goal is to better understand human responses while performing tasks involving decision and monitoring cycles, finding ways to improve performance and decrease on-task error. Decision readiness and the ability to synthesize complex and abundant information in a brief period during critical events has never been more important. Closed-loop control and motivational control theory were synthesized to provide the basis from which a framework for a prototype was developed to demonstrate the feasibility and value of a BCI in critical enterprise activities. In this pilot study, the BCI was implemented and evaluated through laboratory experimentation using an ecologically valid task. The results show that the technological artifact allowed users to regulate sustained attention positively while performing the task. Levels of sustained attention were shown to be higher in the conditions assisted by the BCI. Furthermore, this increased cognitive response seems to be related to increased on-task action and a small reduction in on-task errors. The research concludes with a discussion of the future research directions and their application in the enterprise.
Article
Full-text available
Bipedalism (using only two legs for walking) and having the capability to use tools have long been considered characteristic features that differentiate human beings from animals. Being able to walk upright freed up human hands, allowing us to reach, grasp, carry food, make and use tools, which greatly increased the survivability of our ancestors. Hand actions not only involve muscles and joints to execute actions but also require computations in the brain to analyze the visual environment and select the appropriate action, as well as formulate the action before execution and correct it in real-time during execution. Here, we review the behavioral and brain imaging research of human hand actions from a perspective of cognitive neuroscience. The review includes the research contents and methods of visually-guided action, existing theories, current debates, new evidence of existing theories, and the applications of action research in robotics and artificial intelligence.
Conference Paper
Full-text available
Brain-computer interfaces (BCIs) are prone to errors in the decoding of the user's intention, yet the detection of errors can be used to improve the performance of BCIs. We recorded the EEG data of 8 subjects who participated in an experiment to study error-related potentials (ErrPs) with masked and unmasked onset, during a task with continuous control and continuous feedback. The masked ErrPs had a delayed onset and less pronounced peak amplitudes when compared to the unmasked ErrPs. We obtained an average classification rate of 94% for correct trials and of 80% for error trials. The classification rates for masked errors against unmasked errors were at chance level.
Conference Paper
Full-text available
Eye movements and their contribution to electroencephalographic (EEG) recordings as ocular artifacts (OAs) are well studied. Yet their existence is typically regarded as impeding analysis. A widely accepted bypass is artifact avoidance. OA processing is often reduced to rejecting contaminated data. To overcome loss of data and restriction of behavior, research groups have proposed various correction methods. State of the art approaches are data driven and typically require OAs to be uncorrelated with brain activity. This does not necessarily hold for visuomotor tasks. To prevent correlated signals, we examined a two block approach. In a first block, subjects performed saccades and blinks, according to a visually guided paradigm. We then fitted 5 artifact removal algorithms to this data. To test their stationarity regarding artifact attenuation and preservation of brain activity, we recorded a second block one hour later. We found that saccades and blinks could still be attenuated to chance level, while brain activity during rest trials could be retained.
Article
Full-text available
Objective. Despite the high number of degrees of freedom of the human hand, most actions of daily life can be executed incorporating only palmar, pincer and lateral grasp. In this study we attempt to discriminate these three different executed reach-and-grasp actions utilizing their EEG neural correlates. Approach. In a cue-guided experiment, 15 healthy individuals were asked to perform these actions using daily-life objects. We recorded 72 trials for each reach-and-grasp condition and for a no-movement condition. Main results. Using low-frequency time-domain features from 0.3 to 3 Hz, we achieved binary classification accuracies of 72.4% (STD ± 5.8%) between grasp types; for grasps versus the no-movement condition, peak performances of 93.5% (STD ± 4.6%) could be reached. In an offline multiclass classification scenario which incorporated not only all reach-and-grasp actions but also the no-movement condition, the highest performance could be reached using a window of 1000 ms for feature extraction. Classification performance peaked at 65.9% (STD ± 8.1%). Underlying neural correlates of the reach-and-grasp actions, investigated over the primary motor cortex, showed significant differences starting from approximately 800 ms to 1200 ms after movement onset, which is also the time frame where classification performance reached its maximum. Significance. We could show that it is possible to discriminate three executed reach-and-grasp actions prominent in people's everyday use from non-invasive EEG. Underlying neural correlates showed significant differences between all tested conditions. These findings will eventually contribute to our attempt to control a neuroprosthesis in a natural and intuitive way, which could ultimately benefit motor-impaired end users in their daily-life actions.
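Low-frequency time-domain features of the kind used above are typically just smoothed, downsampled amplitude values taken from a sliding window (here 1000 ms). A rough numpy sketch; the moving-average smoothing stands in for the paper's 0.3-3 Hz band-pass filter and is an assumption, not the authors' implementation:

```python
import numpy as np

def lowfreq_window_features(eeg, fs, win_s=1.0, step=16, smooth_s=0.3):
    """Windowed low-frequency time-domain features.

    eeg : (n_samples, n_channels) -> (n_windows, n_features)
    Smooths each channel with a moving average (crude low-pass), then takes
    downsampled amplitude samples from consecutive non-overlapping windows.
    """
    k = max(1, int(smooth_s * fs))
    kernel = np.ones(k) / k
    smoothed = np.apply_along_axis(lambda x: np.convolve(x, kernel, 'same'), 0, eeg)
    w = int(win_s * fs)
    starts = range(0, eeg.shape[0] - w + 1, w)
    return np.array([smoothed[s:s + w:step].ravel() for s in starts])
```

Feature vectors of this form would then feed a shrinkage-regularized classifier, one window per classification decision.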
Article
Full-text available
How neural correlates of movements are represented in the human brain is of ongoing interest and has been researched with invasive and non-invasive methods. In this study, we analyzed the encoding of single upper limb movements in the time-domain of low-frequency electroencephalography (EEG) signals. Fifteen healthy subjects executed and imagined six different sustained upper limb movements. We classified these six movements and a rest class and obtained significant average classification accuracies of 55% (movement vs movement) and 87% (movement vs rest) for executed movements, and 27% and 73%, respectively, for imagined movements. Furthermore, we analyzed the classifier patterns in the source space and located the brain areas conveying discriminative movement information. The classifier patterns indicate that mainly premotor areas, primary motor cortex, somatosensory cortex and posterior parietal cortex convey discriminative movement information. The decoding of single upper limb movements is especially interesting in the context of a more natural non-invasive control of, e.g., a motor neuroprosthesis or a robotic arm in highly motor disabled persons.
Article
Full-text available
Using low-frequency time-domain electroencephalographic (EEG) signals we show, for the same type of upper limb movement, that goal-directed movements have different neural correlates than movements without a particular goal. In a reach-and-touch task, we explored the differences in the movement-related cortical potentials (MRCPs) between goal-directed and non-goal-directed movements. We evaluated whether the detection of movement intention was influenced by the goal-directedness of the movement. In a single-trial classification procedure we found that classification accuracies are enhanced if there is a goal-directed movement in mind. Furthermore, by using the classifier patterns and estimating the corresponding brain sources, we show the importance of motor areas and the additional involvement of the posterior parietal lobule in the discrimination between goal-directed and non-goal-directed movements. We then discuss the potential contribution of our results on goal-directed movements to a more reliable brain-computer interface (BCI) control that facilitates recovery in spinal-cord-injured or stroke end-users.
Article
Full-text available
To explore the exciting new domain of brain informatics, we invited several well-known experts to discuss the state of the art, the challenges, the opportunities, and the trends. In "Creating Human-Level AI by Educating a Child Machine," Raj Reddy proposes an architecture for a "child machine" that can learn and is teachable. In "Cyborg Intelligence," Zhaohui Wu, Gang Pan, and Nenggan Zheng describe a biological-machine system consisting of both an organic and a computing part. In "Formal Minds and Biological Brains II: From the Mirage of Intelligence to a Science and Engineering of Consciousness," Paul F.M.J. Verschure discusses human-like cognitive architectures and describes the Distributed Adaptive Control (DAC) architecture for perception, cognition, and action. In "The Challenges of Closed-Loop Invasive Brain-Machine Interfaces," Qiaosheng Zhang and Xiaoxiang Zheng discuss the challenges and trends in closed-loop brain-machine interfaces. In "Neural Signal Processing in Brain-Machine Interfaces," Jose C. Principe takes a critical look at the challenges and opportunities of performing computation with pulses, as neurons do. In "Neuroprosthesis Control via a Noninvasive Hybrid Brain-Computer Interface," Alex Kreilinger, Martin Rohm, Vera Kaiser, Robert Leeb, Rüdiger Rupp, and Gernot R. Müller-Putz describe an example of the convergence of biological intelligence and machine intelligence in a hand-elbow neuroprosthesis control unit.
Conference Paper
Full-text available
A brain-computer interface (BCI) can be used to control a limb neuroprosthesis with motor imaginations (MI) to restore limb functionality of paralyzed persons. However, existing BCIs lack a natural control and need a considerable amount of training time or use invasively recorded biosignals. We show that it is possible to decode velocities and positions of executed arm movements from electroencephalography signals using a new paradigm without external targets. This is a step towards a non-invasive BCI which uses natural MI. Furthermore, training time will be reduced, because it is not necessary to learn new mental strategies.
Article
Full-text available
The aim of this work is to present the development of a hybrid Brain-Computer Interface (hBCI) which combines existing input devices with a BCI. Thereby, the BCI should be available if the user wishes to extend the types of inputs available to an assistive technology system, but the user can also choose not to use the BCI at all; the BCI is active in the background. The hBCI might, on the one hand, decide which input channel(s) offer the most reliable signal(s) and switch between input channels to improve information transfer rate, usability, or other factors, or, on the other hand, fuse various input channels. One major goal therefore is to bring BCI technology to a level where it can be used in a maximum number of scenarios in a simple way. To achieve this, it is of great importance that the hBCI is able to operate reliably for long periods, recognizing and adapting to changes as it does so. This goal is only possible if many different subsystems in the hBCI can work together. Since one research institute alone cannot provide such different functionality, collaboration between institutes is necessary. To allow for such a collaboration, a new concept and common software framework is introduced. It consists of four interfaces connecting the classical BCI modules: signal acquisition, preprocessing, feature extraction, classification, and the application. It also provides the concepts of fusion and shared control. In a proof of concept, the functionality of the proposed system was demonstrated.
Article
Full-text available
For individuals with a high spinal cord injury (SCI), not only the lower limbs but also the upper extremities are paralyzed. A neuroprosthesis can be used to restore the lost hand and arm function in those tetraplegics. The main problem for this group of individuals, however, is the reduced ability to voluntarily operate device controllers. A brain–computer interface provides a non-manual alternative to conventional input devices by translating brain activity patterns into control commands. We show that the temporal coding of individual mental imagery patterns can be used to control two independent degrees of freedom – grasp and elbow function – of an artificial robotic arm by utilizing a minimum number of EEG scalp electrodes. We describe the procedure from the initial screening to the final application. Of eight naïve subjects participating in online feedback experiments, four were able to voluntarily control an artificial arm by inducing one motor imagery pattern derived from only one EEG derivation.
Article
Full-text available
Nowadays, everybody knows what a hybrid car is. A hybrid car normally has two engines to enhance energy efficiency and reduce CO2 output. Similarly, a hybrid brain-computer interface (BCI) is composed of two BCIs, or at least one BCI and another system. A hybrid BCI, like any BCI, must fulfill the following four criteria: (i) the device must rely on signals recorded directly from the brain; (ii) there must be at least one recordable brain signal that the user can intentionally modulate to effect goal-directed behaviour; (iii) real time processing; and (iv) the user must obtain feedback. This paper introduces hybrid BCIs that have already been published or are in development. We also introduce concepts for future work. We describe BCIs that classify two EEG patterns: one is the event-related (de)synchronisation (ERD, ERS) of sensorimotor rhythms, and the other is the steady-state visual evoked potential (SSVEP). Hybrid BCIs can either process their inputs simultaneously, or operate two systems sequentially, where the first system can act as a “brain switch”. For example, we describe a hybrid BCI that simultaneously combines ERD and SSVEP BCIs. We also describe a sequential hybrid BCI, in which subjects could use a brain switch to control an SSVEP-based hand orthosis. Subjects who used this hybrid BCI exhibited about half the false positives encountered while using the SSVEP BCI alone. A brain switch can also rely on hemodynamic changes measured through near-infrared spectroscopy (NIRS). Hybrid BCIs can also use one brain signal and a different type of input. This additional input can be an electrophysiological signal such as the heart rate, or a signal from an external device such as an eye tracking system.
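The sequential "brain switch" described above is, at its core, a small piece of gating logic: the switch signal must persist above a threshold before the second system's commands are let through, which is what cuts false positives roughly in half. A minimal sketch of that logic; the threshold, dwell time, and toggle behavior are illustrative assumptions, not the published parameters:

```python
def brain_switch(commands, switch_signal, threshold=0.7, dwell=3):
    """Sequential hybrid BCI sketch: a 'brain switch' toggles an SSVEP command
    stream on/off once its control signal stays above threshold for `dwell`
    consecutive samples; commands are suppressed while the switch is off."""
    active, streak, out = False, 0, []
    for cmd, s in zip(commands, switch_signal):
        streak = streak + 1 if s >= threshold else 0
        if streak == dwell:            # dwell reached: toggle and re-arm
            active, streak = not active, 0
        out.append(cmd if active else None)
    return out
```

Requiring a sustained (rather than instantaneous) activation is the design choice that makes the switch robust: brief noise-driven excursions above threshold never reach the dwell count.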
Article
Full-text available
The aim of the present study was to demonstrate for the first time the non-invasive restoration of hand grasp function in a tetraplegic patient by electroencephalogram (EEG) recording and functional electrical stimulation (FES) using surface electrodes. The patient was able to generate bursts of beta oscillations in the EEG by imagination of foot movement. These beta bursts were analyzed and classified by a brain-computer interface (BCI) and the output signal used to control a FES device. The patient was able to grasp a cylinder with the paralyzed hand.
Article
Full-text available
In this paper, we describe a simple set of "recipes" for the analysis of high spatial density EEG. We focus on a linear integration of multiple channels for extracting individual components without making any spatial or anatomical modeling assumptions, instead requiring particular statistical properties such as maximum difference, maximum power, or statistical independence. We demonstrate how corresponding algorithms, for example, linear discriminant analysis, principal component analysis and independent component analysis, can be used to remove eye-motion artifacts, extract strong evoked responses, and decompose temporally overlapping components. The general approach is shown to be consistent with the underlying physics of EEG, which specifies a linear mixing model of the underlying neural and non-neural current sources.
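One of the simplest instances of the linear "recipes" described above is regression-based removal of eye-motion artifacts: under the linear mixing model, the recorded EEG is brain activity plus a weighted copy of the EOG, so the weights can be estimated by least squares and subtracted out. A minimal sketch, assuming (as such methods do) that the ocular signal is uncorrelated with the brain activity:

```python
import numpy as np

def fit_eog_weights(eeg, eog):
    """Least-squares propagation weights of EOG channels into each EEG channel,
    per the linear mixing model: eeg ~ brain + eog @ W."""
    # eeg: (n_samples, n_eeg), eog: (n_samples, n_eog) -> W: (n_eog, n_eeg)
    return np.linalg.lstsq(eog, eeg, rcond=None)[0]

def remove_eog(eeg, eog, weights):
    """Subtract the estimated ocular contribution from the EEG."""
    return eeg - eog @ weights
```

When the uncorrelatedness assumption fails (e.g. in visuomotor tasks), the regression also removes genuine brain activity, which is exactly the caveat such linear-decomposition approaches carry.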
Conference Paper
In this study we investigated the cue-locked (P300 and later event-related potential components) and response-locked electroencephalography (EEG) phenomena associated with externally and internally driven target selection. To that end, we designed a novel paradigm that aimed to separate the selection of motor goals according to the respective task rules from the actual programming of the upcoming motor response. Our paradigm also made possible the estimation of the onset of a self-paced reach-and-grasp movement imagination, for better capturing the associated movement-related cortical potentials (MRCPs). Our preliminary results indicate that differences between the externally and internally driven conditions are present in the cue-locked event-related potentials, but not in the response-locked MRCPs. Our study contributes to a better understanding of the neurophysiological signature of movement-related processes, including both perception and actual motor planning, which are so extensively used in brain-computer interfaces (BCIs).
Chapter
In this chapter, we give an overview of the Graz-BCI research, from the classic motor imagery detection to complex movement intentions decoding. We start by describing the classic motor imagery approach, its application in tetraplegic end users, and the significant improvements achieved using coadaptive brain–computer interfaces (BCIs). These strategies have the drawback of not mirroring the way one plans a movement. To achieve a more natural control—and to reduce the training time—the movements decoded by the BCI need to be closely related to the user's intention. Within this natural control, we focus on the kinematic level, where movement direction and hand position or velocity can be decoded from noninvasive recordings. First, we review movement execution decoding studies, where we describe the decoding algorithms, their performance, and associated features. Second, we describe the major findings in movement imagination decoding, where we emphasize the importance of estimating the sources of the discriminative features. Third, we introduce movement target decoding, which could allow the determination of the target without knowing the exact movement-by-movement details. Aside from the kinematic level, we also address the goal level, which contains relevant information on the upcoming action. Focusing on hand–object interaction and action context dependency, we discuss the possible impact of some recent neurophysiological findings in the future of BCI control. Ideally, the goal and the kinematic decoding would allow an appropriate matching of the BCI to the end users’ needs, overcoming the limitations of the classic motor imagery approach.
Article
In their early days, brain–computer interfaces (BCIs) were considered only as a control channel for end users with severe motor impairments, such as people in the locked-in state. However, thanks to the multidisciplinary progress achieved over the last decade, the range of BCI applications has been substantially enlarged. Today, BCI technology can not only translate brain signals directly into control signals, but can also combine such artificial output with a natural muscle-based output. The integration of multiple biological signals for real-time interaction therefore holds the promise of reaching a much larger population than originally thought: end users with preserved residual functions who could benefit from new generations of assistive technologies. A BCI system that combines a BCI with other physiological or technical signals is known as a hybrid BCI (hBCI). In this work, we review a large-scale integrated project funded by the European Commission that was dedicated to developing practical hybrid BCIs and introducing them in various fields of application. This article presents an hBCI framework, which was used in studies with nonimpaired users as well as end users with motor impairments.
Article
For individuals with high spinal cord injury (SCI), restoring missing grasping function is a high priority. Neuroprostheses based on functional electrical stimulation (FES) can partly compensate for the loss of upper extremity function in people with tetraplegia. With noninvasive, multichannel neuroprostheses, a pinch and power grasp can be accomplished for everyday use. Hybrid systems combining FES with active orthoses hold promise for restoring completely lost arm function. Novel control interfaces are needed to make full use of the many degrees of freedom of complex hybrid neuroprostheses. Motor imagery (MI)-based brain–computer interfaces (BCIs) are an emerging technology that may serve as a valuable adjunct to traditional control interfaces for neuroprosthetic control. Shared control and context-specific autonomy are most effective for reducing the users' workload. The modularity of upper extremity neuroprostheses and their associated control interfaces enables customization of the systems to the impairment and needs of each individual end user. This work provides an overview of the application of noninvasive hybrid BCI-controlled upper extremity neuroprostheses in individuals with high SCI, with a strong focus on results from the European Integrated Project Tools for Brain-Computer Interaction, and describes the challenges and promises for the future.
Article
A brain-computer interface (BCI) can help to overcome movement deficits in persons with spinal cord injury (SCI). Ideally, such a BCI detects detailed movement imaginations, i.e., trajectories, and transforms them into a control signal for a neuroprosthesis or a robotic arm restoring movement. Robotic arms have already been controlled successfully by means of invasive recording techniques, and executed movements have been reconstructed using noninvasive decoding techniques. However, it is unclear whether detailed imagined movements can be decoded noninvasively using electroencephalography (EEG). We made progress towards imagined movement decoding and successfully classified horizontal and vertical imagined rhythmic movements of the right arm in healthy subjects using EEG. Notably, we used an experimental design that avoided muscle and eye movements to prevent classification results from being affected. To classify imagined movements of the same limb, we decoded the movement trajectories and correlated them with assumed movement trajectories (horizontal and vertical); we then assigned each decoded movement to the assumed movement with the higher correlation. To train the decoder, we applied partial least squares, which allowed us to interpret the classifier weights even though channels were highly correlated. In conclusion, we showed the classification of imagined movements of one limb in two different movement planes in 7 out of 9 subjects. Furthermore, we found a strong involvement of the supplementary motor area. Finally, as our classifier was based on the decoding approach, we indirectly showed the decoding of imagined movements.
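The assignment rule in this abstract (pick the assumed trajectory with the higher correlation to the decoded one) can be sketched directly. The sketch below skips the PLS decoding step and starts from an already-decoded trace; the template signals and noise level are illustrative, not the study's:

```python
import numpy as np

def classify_by_correlation(decoded, templates):
    """Assign a decoded trajectory to the template with the highest
    Pearson correlation, as described in the abstract.

    decoded: (n_times,) decoded position/velocity trace
    templates: dict mapping label -> (n_times,) assumed trajectory
    """
    best_label, best_r = None, -np.inf
    for label, tmpl in templates.items():
        r = np.corrcoef(decoded, tmpl)[0, 1]
        if r > best_r:
            best_label, best_r = label, r
    return best_label, best_r

# Synthetic example: assumed rhythmic horizontal vs. vertical trajectories
t = np.linspace(0, 2, 200)
templates = {
    "horizontal": np.sin(2 * np.pi * 1.0 * t),  # hypothetical 1 Hz rhythm
    "vertical":   np.cos(2 * np.pi * 1.0 * t),
}

# A noisy "decoded" trace resembling the horizontal movement
rng = np.random.default_rng(1)
decoded = templates["horizontal"] + 0.3 * rng.standard_normal(t.size)

label, r = classify_by_correlation(decoded, templates)
```

With this toy data the classifier assigns the trace to the horizontal template, since its correlation with the matching rhythm dominates.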
Article
The bilateral loss of the grasp function associated with a lesion of the cervical spinal cord severely limits the affected individuals' ability to live independently and return to gainful employment after sustaining a spinal cord injury (SCI). Any improvement in lost or limited grasp function is highly desirable. With current neuroprostheses, relevant improvements can be achieved in end users with preserved shoulder and elbow function, but missing hand function. The aim of this single case study is to show that (1) with the support of hybrid neuroprostheses combining functional electrical stimulation (FES) with orthoses, restoration of hand, finger and elbow function is possible in users with high-level SCI, and (2) shared control principles can be effectively used to allow for brain-computer interface (BCI) control, even if only moderate BCI performance is achieved after extensive training. The individual in this study is a right-handed 41-year-old man who sustained a traumatic SCI in 2009 and has a complete motor and sensory lesion at the level of C4. He is unable to generate functionally relevant movements of the elbow, hand and fingers on either side. He underwent extensive FES training (30-45 min, 2-3 times per week for 6 months) and motor imagery (MI) BCI training (415 runs in 43 sessions over 12 months). To meet individual needs, the system was designed in a modular fashion, including an intelligent control approach encompassing two input modalities, namely an MI-BCI and shoulder movements. After one year of training, the end user's MI-BCI performance ranged from 50% to 93% (average: 70.5%). The performance of the hybrid system was evaluated with different functional assessments. The user was able to transfer objects of the grasp-and-release test, and he succeeded in eating a pretzel stick, signing a document and eating an ice cream cone, which he was unable to do without the system.
This proof-of-concept study has demonstrated that with the support of hybrid FES systems consisting of FES and a semiactive orthosis, restoring hand, finger and elbow function is possible in a tetraplegic end user. Remarkably, even after one year of training and 415 MI-BCI runs, the end user's average BCI performance remained at about 70%. This supports the view that in high-level tetraplegic subjects, an initially moderate BCI performance cannot be improved by extensive training. However, this aspect has to be validated in future studies with a larger population.
Article
Human hand dexterity depends on the ability to move digits independently and to combine these movements in various coordinative patterns. It is well established that the primary motor cortex (M1) is important for skillful digit actions, but less is known about the role played by the nonprimary motor centers. Here we use functional magnetic resonance imaging to examine the hypothesis that nonprimary motor areas and the posterior parietal cortex are strongly activated when healthy humans move the right digits in a skillful coordination pattern involving relatively independent digit movements. A task in which flexion of the thumb is accompanied by extension of the fingers and vice versa, i.e., a learned "nonsynergistic" coordination pattern, is contrasted with a task in which all digits flex and extend simultaneously in an innate synergistic coordination pattern (opening and closing the fist). The motor output is the same in the two conditions. Thus, the difference when contrasting the nonsynergistic and synergistic tasks represents the requirement to fractionate the movements of the thumb and fingers and to combine these movements in a learned coordinative pattern. The supplementary (and cingulate) motor area, the bilateral dorsal premotor area, the bilateral lateral cerebellum, the bilateral cortices of the postcentral sulcus, and the left intraparietal cortex showed stronger activity when the subjects made the nonsynergistic flexion-extension movements of the digits than when the synergistic movements were made. These results suggest that the human neural substrate for skillful digit movement includes a sensorimotor network of nonprimary frontoparietal areas and the cerebellum that, in conjunction with M1, controls the movements of the digits.
Article
This case study demonstrates the coupling of an electroencephalogram (EEG)-based brain-computer interface (BCI) with an implanted neuroprosthesis (Freehand system). Because the patient was available for only 3 days, the goal was to demonstrate that a patient could gain control over the motor imagery-based Graz BCI system within a very short training period. By applying himself to an organized and coordinated training procedure, the patient was able to generate distinctive EEG patterns through the imagination of movements of his paralyzed left hand. These patterns consisted of power decreases in specific frequency bands that could be classified by the BCI. The output signal of the BCI emulated the shoulder joystick usually used, and by consecutive imaginations the patient was able to switch between the different phases of the lateral grasp that the Freehand system provided. By performing part of the grasp-release test, the patient was able to move a simple object from one place to another. The results presented in this work give evidence that brain-computer interfaces are an option for the control of neuroprostheses in patients with high spinal cord lesions. The fact that the user learned to control the BCI in a comparatively short time indicates that this method may also be an alternative approach for clinical purposes.
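The joystick-emulation scheme described here (each detected motor imagery advances the neuroprosthesis to the next grasp phase) is essentially a cyclic state machine. A minimal sketch, with illustrative phase names rather than the Freehand system's actual grasp phases:

```python
# Assumption: phase names and the cycling rule are illustrative only;
# the Freehand system defines its own lateral-grasp phases.
GRASP_PHASES = ["open", "close", "hold", "release"]

class GraspPhaseSwitch:
    """Advance through grasp phases one step per detected motor imagery,
    emulating the consecutive-imagination switching in the abstract."""

    def __init__(self, phases=GRASP_PHASES):
        self.phases = phases
        self.index = 0  # start in the first phase

    @property
    def phase(self):
        return self.phases[self.index]

    def on_imagery_detected(self):
        """Called whenever the BCI classifies a motor imagery event."""
        self.index = (self.index + 1) % len(self.phases)
        return self.phase

switch = GraspPhaseSwitch()
states = [switch.on_imagery_detected() for _ in range(4)]
# Four detections cycle through all phases and return to the first one
```

Wiring `on_imagery_detected` to the BCI classifier output reproduces the single-switch control channel described in the study.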
Article
Transferring a brain-computer interface (BCI) from the laboratory environment into real world applications is directly related to the problem of identifying user intentions from brain signals without any additional information in real time. From the perspective of signal processing, the BCI has to have an uncued or asynchronous design. Based on the results of two clinical applications, where 'thought' control of neuroprostheses based on movement imagery in tetraplegic patients with a high spinal cord injury has been established, the general steps from a synchronous or cue-guided BCI to an internally driven asynchronous brain-switch are discussed. The future potential of BCI methods for various control purposes, especially for functional rehabilitation of tetraplegics using neuroprosthetics, is outlined.
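An asynchronous "brain-switch" of the kind outlined above has to decide, from the continuous classifier output alone, when the user intends to trigger an action. A common way to do this is a threshold plus a dwell time, followed by a refractory period; the sketch below assumes illustrative parameter values and a synthetic control signal, not the clinical systems' actual settings:

```python
import numpy as np

def brain_switch(control_signal, threshold, dwell, refractory):
    """Asynchronous brain-switch sketch: emit a trigger only when the
    control signal stays above `threshold` for `dwell` consecutive
    samples, then ignore further crossings for `refractory` samples.
    """
    triggers = []
    above = 0           # consecutive supra-threshold samples
    blocked_until = -1  # end of the current refractory period
    for i, x in enumerate(control_signal):
        if i < blocked_until:
            continue
        above = above + 1 if x > threshold else 0
        if above >= dwell:
            triggers.append(i)
            above = 0
            blocked_until = i + refractory
    return triggers

# Synthetic classifier output: two sustained activations, one short spike
sig = np.zeros(100)
sig[10:20] = 1.0  # sustained -> should trigger
sig[40:42] = 1.0  # too short -> rejected by the dwell requirement
sig[70:80] = 1.0  # sustained -> should trigger
events = brain_switch(sig, threshold=0.5, dwell=5, refractory=15)
```

The dwell requirement is what distinguishes an internally driven asynchronous switch from a cue-guided design: it suppresses spurious classifier activations during the long no-control periods.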
EEG-controlled grasp neuroprosthesis for individuals with high spinal cord injury - decoding of multiple single limb movements
  • G R Müller-Putz
  • A Schwarz
  • J Pereira
  • P Ofner
  • A Pinegger
  • R Rupp
G. R. Müller-Putz, A. Schwarz, J. Pereira, P. Ofner, A. Pinegger, and R. Rupp, "EEG-controlled grasp neuroprosthesis for individuals with high spinal cord injury - decoding of multiple single limb movements," in 21st Annual Conference of the International Functional Electrical Stimulation Society, London, UK.