Figure - available from: Communications Medicine
Channel importance for grasp classification
Saliency maps for the model used online, a model using HG features from all channels except channel 112, and a model using HG features only from channels covering the cortical hand-knob are shown in (a), (c), and (e), respectively. Channels overlaid with larger, more opaque circles contributed more to grasp classification; white, transparent circles mark channels that were not used for model training. Mean confusion matrices from repeated 10-fold CV, using models trained on HG features from all channels, all channels except channel 112, and only the channels covering the cortical hand-knob, are shown in (b), (d), and (f), respectively. g Box-and-whisker plot of offline classification accuracies across the 10 cross-validated testing folds for models using the channel subsets above. Specifically, for each model configuration, a dot represents the average accuracy of the same validation fold across 20 repetitions of 10-fold CV (see Channel contributions and offline classification comparisons). Offline classification accuracies of CV models trained on features from all channels were significantly higher than those of CV models trained on features from channels over the cortical hand-knob only (*P = 0.015, two-sided Wilcoxon rank-sum test with 3-way Holm-Bonferroni correction).
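The fold-averaging and statistical comparison described in the caption can be illustrated with a minimal sketch. This is an assumption-laden reconstruction, not the study's code: the classifier, feature matrices, and labels are synthetic placeholders, folds are re-drawn on each repeat (the study may have fixed fold assignments), and only the most conservative Holm-Bonferroni step for 3 comparisons is shown.

```python
# Hedged sketch: repeated 10-fold CV accuracy comparison between two channel
# subsets, followed by a two-sided Wilcoxon rank-sum test with a 3-way
# Holm-Bonferroni correction. All data below are synthetic placeholders.
import numpy as np
from scipy.stats import ranksums
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))      # placeholder HG features (200 trials, 16 channels)
y = rng.integers(0, 4, size=200)    # placeholder grasp labels (4 classes)

def fold_accuracies(X, y, n_splits=10, n_repeats=20):
    """Per-fold accuracy, averaged across repeats of k-fold CV."""
    cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                 random_state=0)
    accs = np.zeros(n_splits)
    for i, (tr, te) in enumerate(cv.split(X, y)):
        clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
        accs[i % n_splits] += clf.score(X[te], y[te])
    return accs / n_repeats          # one averaged accuracy per fold

acc_all = fold_accuracies(X, y)             # all channels
acc_subset = fold_accuracies(X[:, :8], y)   # e.g. a hand-knob-only subset

# Two-sided Wilcoxon rank-sum test across the 10 fold accuracies. With three
# pairwise model comparisons, Holm-Bonferroni multiplies the smallest P by 3
# (then 2, then 1, enforcing monotonicity); the worst case is shown here.
stat, p = ranksums(acc_all, acc_subset)
p_corrected = min(p * 3, 1.0)
```

With random placeholder data the corrected P value will typically be non-significant; the point of the sketch is the aggregation (one dot per fold, averaged over repeats) and the correction step, not the numbers.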


Source publication
Article
Full-text available
Background Brain-computer interfaces (BCIs) can restore communication for movement- and/or speech-impaired individuals by enabling neural control of computer typing applications. Single-command click detectors provide a basic yet highly functional capability. Methods We sought to test the performance and long-term stability of click decoding using...

Citations

... Encouragingly, recent advances in iBCI calibration techniques have successfully reduced training times and the amount of calibration data needed for effective use. Rapid online calibration techniques that reduce both time and data requirements are being developed [13][14][15][16], with the ultimate goal being effective use without prior active calibration [17]. ...
Article
Full-text available
Objective. Implantable brain–computer interfaces (iBCIs) hold great promise for individuals with severe paralysis and are advancing toward commercialization. The features required for successful clinical translation and patient adoption of iBCIs may be under recognized within traditional academic iBCI research and deserve further consideration. Approach. Here we consider potentially critical factors to achieve iBCI user autonomy, reflecting the authors’ perspectives on discussions during various sessions and workshops across the 10th International BCI Society Meeting, Brussels, 2023. Main results. Four key considerations were identified: (1) immediate use, (2) easy to use, (3) continuous use, and (4) stable system use. Significance. Addressing these considerations may enable successful clinical translation of iBCIs.
... For the past few decades, a major focus of the BCI field has been decoding neural activity associated with attempted movements to control a computer cursor. [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17] By controlling a computer cursor with their neural signals, a person with paralysis can type sentences using an on-screen keyboard, send emails and text messages, or use a web browser and many other software applications. ... instead, it describes scientific and engineering discoveries that were made using data collected in the context of the ongoing clinical trial. ...
Preprint
Full-text available
Decoding neural activity from ventral (speech) motor cortex is known to enable high-performance speech brain-computer interface (BCI) control. It was previously unknown whether this brain area could also enable computer control via neural cursor and click, as is typically associated with dorsal (arm and hand) motor cortex. We recruited a clinical trial participant with ALS and implanted intracortical microelectrode arrays in ventral precentral gyrus (vPCG), which the participant used to operate a speech BCI in a prior study. We developed a cursor BCI driven by the participant's vPCG neural activity, and evaluated performance on a series of target selection tasks. The reported vPCG cursor BCI enabled rapidly-calibrating (40 seconds), accurate (2.90 bits per second) cursor control and click. The participant also used the BCI to control his own personal computer independently. These results suggest that placing electrodes in vPCG to optimize for speech decoding may also be a viable strategy for building a multi-modal BCI which enables both speech-based communication and computer control via cursor and click.
... Incorporating a threshold based on data glove recordings could be beneficial. However, defining true rest may not be practically feasible for real BCI applications, as it necessitates extensive subject training to suppress such activity [11,12,19], and a more naturalistic approach would be to build a decoder that can successfully discriminate between meaningful and non-task-related sporadic motor activity. Furthermore, some features along the temporal dimension may not reflect actual motor activity, but 'resting' activity, especially before the movement onset and towards the end of the segmentation window. ...
... The proposed decoder design surpassed prior approaches evaluated on gesture data for four out of five subjects [20,21], although a direct comparison is difficult due to differences in task complexity. For communication-assistance systems, the proposed decoder design can offer a viable alternative to deep learning approaches for one-dimensional brain-click tasks [19] and even larger-DoF applications [14,17], where data acquisition is challenging. Although trained offline, the decoder can process individual segments of preprocessed data in as little as 2–10 ms. ...
Conference Paper
Full-text available
Severe impairment of the central motor network can result in loss of motor function, clinically recognized as locked-in syndrome. Advances in brain-computer interfaces offer a promising avenue for partially restoring compromised communicative abilities by decoding different types of hand movements from the sensorimotor cortex. In this study, we collected ECoG recordings from 8 epilepsy patients and compared the decodability of individual finger flexion and hand gestures with the resting state, as a proxy for a one-dimensional brain-click. The results show that all individual finger flexions and hand gestures are equally decodable across multiple models and subjects (>98.0%). In particular, hand movements involving index finger flexion emerged as promising candidates for brain-clicks. When decoding among multiple hand movements, finger flexion appears to outperform hand gestures (96.2% and 92.5%, respectively) and exhibits greater robustness against misclassification errors when all hand movements are included. These findings highlight that optimized classical machine learning models with feature engineering are viable decoder designs for communication-assistive systems.
... Brain activity can be measured using several techniques such as electroencephalography, magnetoencephalography (MEG), and electrocorticography (ECoG) [4][5][6]. The electroencephalogram (EEG) is used as the input in most BCI systems. ...
Article
Full-text available
Background To enhance the information transfer rate (ITR) of a steady-state visual evoked potential (SSVEP)-based speller, more characters with flickering symbols should be used. Increasing the number of symbols might reduce the classification accuracy. A hybrid brain-computer interface (BCI) improves the overall performance of a BCI system by taking advantage of two or more control signals. In a simultaneous hybrid BCI, various modalities work with each other simultaneously, which enhances the ITR. Methods In our proposed speller, simultaneous combination of electromyogram (EMG) and SSVEP was applied to increase the ITR. To achieve 36 characters, only nine stimulus symbols were used. Each symbol allowed the selection of four characters based on four states of muscle activity. The SSVEP detected which symbol the subject was focusing on and the EMG determined the target character out of the four characters dedicated to that symbol. The frequency rate for character encoding was applied in the EMG modality and latency was considered in the SSVEP modality. Online experiments were carried out on 10 healthy subjects. Results The average ITR of this hybrid system was 96.1 bit/min with an accuracy of 91.2%. The speller speed was 20.9 char/min. Different subjects had various latency values. We used an average latency of 0.2 s across all subjects. Evaluation of each modality showed that the SSVEP classification accuracy varied for different subjects, ranging from 80% to 100%, while the EMG classification accuracy was approximately 100% for all subjects. Conclusions Our proposed hybrid BCI speller showed improved system speed compared with state-of-the-art systems based on SSVEP or SSVEP-EMG, and can provide a user-friendly, practical system for speller applications.
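The relationship between the abstract's reported accuracy, speed, and ITR can be checked with the standard Wolpaw ITR formula. This is a sketch under assumptions: the authors may use a different ITR convention (e.g. different latency handling), so the computed value only approximates, and need not match, the reported 96.1 bit/min.

```python
# Hedged sketch: Wolpaw-style information transfer rate for an N-target
# speller, applied to the values reported in the abstract above.
import math

def wolpaw_itr(n_targets: int, accuracy: float, selections_per_min: float) -> float:
    """Bits/min = [log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))] * rate.

    Assumes chance-level errors spread uniformly over the N-1 wrong targets,
    and requires 1/N < accuracy < 1.
    """
    p, n = accuracy, n_targets
    bits_per_selection = (math.log2(n)
                          + p * math.log2(p)
                          + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits_per_selection * selections_per_min

# Abstract's reported operating point: 36 characters, 91.2% accuracy,
# 20.9 selections/min.
itr = wolpaw_itr(36, 0.912, 20.9)
```

Under this convention the operating point yields roughly 90 bit/min, in the same range as the reported 96.1 bit/min; the gap would come from differing assumptions about selection timing or error distribution.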