Project log
In the present study, a feature selection algorithm based on mutual information (MI) was applied to electroencephalographic (EEG) data acquired during three different motor imagery tasks from two datasets: Dataset I from BCI Competition IV, including full-scalp recordings from four subjects, and new data recorded from three subjects using the popular low-cost Emotiv EPOC EEG headset. The aim was to evaluate optimal channels and band-power (BP) features for motor imagery task discrimination, in order to assess the feasibility of a portable, low-cost motor imagery-based brain-computer interface (BCI) system. The minimal subset of features most relevant to the task and least redundant with each other was determined, and the corresponding classification accuracy was assessed offline employing a linear support vector machine (SVM) in a 10-fold cross-validation scheme. The analysis was performed (a) on the original full Dataset I from BCI Competition IV, (b) on a restricted channel set from Dataset I corresponding to the available Emotiv EPOC electrode locations, and (c) on data recorded with the EPOC system. Results from (a) showed that an offline classification accuracy above 80% can be reached using only 5 features. Limiting the analysis to EPOC channels caused a decrease in classification accuracy, although it still remained above chance level for data from both (b) and (c); a top accuracy of 70% was achieved using 2 optimal features. These results encourage further research towards the development of portable, low-cost motor imagery-based BCI systems.
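The offline pipeline described above can be sketched as follows. This is a minimal illustration, assuming band-power features have already been extracted into a trials-by-features matrix, and using scikit-learn's mutual_info_classif with SelectKBest and a linear SVC as stand-ins for the MI-based selection (the study's redundancy criterion is omitted) and the SVM classifier; the data shapes and placeholder labels are assumptions, not the actual datasets.

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder band-power feature matrix: 200 trials x (22 channels * 3 bands),
# with binary motor imagery labels. Replace with real extracted BP features.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 22 * 3))
y = rng.integers(0, 2, size=200)

# MI-based selection of the 5 most task-relevant features (redundancy
# reduction is omitted in this sketch), followed by a linear SVM evaluated
# with 10-fold cross-validation; selection is refit inside each fold.
pipeline = make_pipeline(
    StandardScaler(),
    SelectKBest(score_func=mutual_info_classif, k=5),
    SVC(kernel="linear"),
)
scores = cross_val_score(pipeline, X, y, cv=10)
print("10-fold CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))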
In past decades, eye-tracking (ET) became one of the most widespread communication strategies for people with severe motor impairments (as in locked-in syndrome, LIS). ET cameras enable paralyzed patients to move a cursor across a user interface (UI) by means of their gaze, and to activate a selectable UI object after looking at it for a certain dwell time. This procedure is intuitive and acceptable for many users. Nevertheless, such gaze-based control can become prone to errors if the dwell time is too short or not customized for each specific user. Considering this issue and the potential of brain-computer interface (BCI) systems for monitoring user parameters such as attention, it is possible to design hybrid ET-BCI solutions (labeled EyeBCI in this paper) in order to improve the performance and usability of ET. In this paper, a novel interaction concept is introduced to adapt the dwell time to the level of mental focus of the EyeBCI user when he/she wants to select and activate a UI item: the dwell time shortens as the observer's concentration rises, improving the system's precision and responsiveness. To evaluate this solution, a pilot study was performed to compare different control conditions in terms of task performance and user experience: three ET conditions (differing in dwell time duration) and two EyeBCI conditions (BCI-triggered activation of UI items and BCI-modulated dwell time). The results demonstrated promising levels of performance and user experience when using the tested implementation of the novel EyeBCI. In addition, the capability of this new interaction paradigm to adapt itself to the user's goals has the potential to greatly enhance the usability of ET solutions for patients with LIS.
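A minimal sketch of the BCI-modulated dwell time concept is given below; the linear mapping, the time bounds, and the use of a normalized concentration index in [0, 1] are illustrative assumptions, not the exact formulation used in the study.

def adaptive_dwell_time(concentration, t_max=2.0, t_min=0.5):
    """Map a concentration index in [0, 1] to a dwell time in seconds:
    the more focused the user, the shorter the required dwell."""
    c = min(max(concentration, 0.0), 1.0)  # clamp to [0, 1]
    return t_max - c * (t_max - t_min)

# Example: a distracted user needs the full 2.0 s, a focused one only 0.5 s.
print(adaptive_dwell_time(0.0))  # 2.0
print(adaptive_dwell_time(1.0))  # 0.5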
Eye-tracking (ET) is one of the most intuitive solutions for enabling people with severe motor impairments to control devices. Nevertheless, even such an effective assistive solution can detrimentally affect user experience during demanding tasks because of, for instance, the user's mental workload: using gaze-based controls for an extended period of time can generate fatigue and cause frustration. Thus, it is necessary to design novel solutions for ET contexts able to improve the user experience, with particular attention to aspects related to workload. In this paper, a pilot study evaluates the effects of a relaxation biofeedback system on the user experience in the context of a gaze-controlled task that is mentally and temporally demanding: ET-based gaming. Different aspects of the subjects' experience were investigated under two conditions of a gaze-controlled game. In the Biofeedback group (BF), the user triggered a command by means of voluntary relaxation, monitored through Galvanic Skin Response (GSR) and represented by visual feedback. In the No Biofeedback group (NBF), the same feedback was timed according to the average frequency of commands in BF. After the experiment, each subject filled out a user experience questionnaire. The results showed a general appreciation for BF, with a significant between-group difference in perceived session duration, which was shorter for subjects in BF than for those in NBF. This result implies a lower mental workload for BF than for NBF subjects. Other results point toward a potential role of user engagement in the improvement of user experience in BF. Such an effect highlights the value of relaxation biofeedback for improving the user experience in a demanding gaze-controlled task.
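One possible way to detect a relaxation-triggered command from a GSR stream is sketched below; the threshold, hold duration, and sampling rate are illustrative assumptions and do not reflect the study's actual calibration or feedback implementation.

import numpy as np

def relaxation_trigger(gsr, fs=4.0, threshold=0.3, hold_seconds=2.0):
    """Return sample indices at which a relaxation command would fire.
    gsr: 1-D array of normalized GSR values (lower = more relaxed);
    a command fires when the signal stays below the threshold for
    hold_seconds, then the counter resets so each command needs a new hold."""
    hold = int(hold_seconds * fs)
    below = gsr < threshold
    triggers, run = [], 0
    for i, b in enumerate(below):
        run = run + 1 if b else 0
        if run == hold:
            triggers.append(i)
            run = 0
    return triggers

# Placeholder signal: 30 s of normalized GSR sampled at 4 Hz.
gsr = np.clip(np.random.rand(120) * 0.6, 0.0, 1.0)
print(relaxation_trigger(gsr))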