Conference Paper

Powered wheelchair control with a multimodal interface using eye-tracking and soft-switch

Abstract

Human-computer interaction (HCI) research aims to provide novel means of communication between computers and people. People who have difficulty operating a wheelchair present a wide range of disabilities. To increase the accessibility of wheelchair control, an HCI system should include several modalities so that different types of users can control a wheelchair in a convenient way. This work proposes a multimodal interface for people with mobility impairments, such as those who cannot move their arms and legs, to control a powered wheelchair.
1. AIM
To develop a multimodal interface that combines an eye-tracker and a soft-switch, enabling people with mobility impairments to control a powered wheelchair in a convenient way.
2. INTRODUCTION
Human-computer interaction (HCI) research aims to provide novel means of communication between computers and people.
People who have difficulty operating a wheelchair present a wide range of disabilities.
To increase the accessibility of wheelchair control, an HCI system should include several modalities so that different types of users can control a wheelchair in a convenient way.
This work proposes a multimodal interface for people with mobility impairments, such as those who cannot move their arms and legs, to control a powered wheelchair.
3. EXPERIMENTAL PROTOCOL
Six consenting healthy male subjects participated in the study.
Three input conditions were compared on a predefined trajectory:
A touch-pad (TP)
An eye-tracker (ET)
An eye-tracker with soft-switch (ET_SS)
For each subject, lap completion time and number of commands were recorded.
4. MULTIMODAL INTERFACE
The multimodal interface combines an eye-tracker and a soft-switch through which the wheelchair can be controlled.
The interface is used to find the desired command among nine possibilities (eight directions, and stop).
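As an illustration of how the nine commands could be laid out, the sketch below maps a gaze point onto a 3x3 command grid. This is a minimal sketch under assumed screen geometry and command names, not the authors' implementation.

```python
# Hypothetical 3x3 layout of the nine wheelchair commands on the GUI.
COMMANDS = [
    ["Forward left",  "Forward",  "Forward right"],
    ["Left",          "Stop",     "Right"],
    ["Backward left", "Backward", "Backward right"],
]

def gaze_to_command(x, y, width=1920, height=1080):
    """Return the command under the gaze point (x, y), or None if off-screen."""
    if not (0 <= x < width and 0 <= y < height):
        return None
    col = int(x * 3 / width)   # column index 0, 1, or 2
    row = int(y * 3 / height)  # row index 0, 1, or 2
    return COMMANDS[row][col]
```

A gaze sample near the screen centre would resolve to "Stop", the neutral command in this assumed layout.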
5. RESULTS
Average lap completion time (in seconds) for each input modality. Error bars display standard deviation.
Average number of commands used to complete the lap for each input modality. Error bars display standard deviation.
The average lap completion time with the eye-tracker and soft-switch was lower than with the eye-tracker alone.
To combine these two modalities simultaneously, users employ their gaze to point at the desired command and the soft-switch to select it [1], [2], [3].
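The two selection strategies behind this comparison can be sketched as follows. This is a schematic illustration only; the dwell time, sample period, and function names are assumptions, not the authors' code.

```python
# Dwell-based selection: a command fires after the gaze rests on it
# continuously for a fixed dwell time (eye-tracker-only condition).
def dwell_select(gaze_samples, dwell_time=1.0, sample_period=0.05):
    """Select the first command gazed at continuously for dwell_time seconds."""
    needed = int(dwell_time / sample_period)
    current, count = None, 0
    for cmd in gaze_samples:
        if cmd == current:
            count += 1
            if count >= needed:
                return cmd
        else:
            current, count = cmd, 1
    return None

# Switch-based selection: the command under the gaze fires the moment
# the soft-switch is pressed (eye-tracker + soft-switch condition).
def switch_select(gaze_samples, switch_events):
    """Select whatever command is gazed at when the soft-switch is pressed."""
    for cmd, pressed in zip(gaze_samples, switch_events):
        if pressed:
            return cmd
    return None
```

Because the switch press replaces the fixed dwell wait, each selection can complete as soon as the user confirms, which is consistent with the lower lap times observed for ET_SS.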
6. CONCLUSION
A powered wheelchair may be controlled hands-free through gaze with the proposed multimodal interface, and an additional soft-switch may improve the usability of the system.
A significant improvement in performance was observed for the multimodal system using a total of nine possible commands (eight directions, and stop).
The combination of various modalities depends largely on the graphical user interface.
This multimodal system can also reduce the false positives of an eye-tracker-only selection paradigm by utilising soft-switch commands.
REFERENCES
1. Y. K. Meena, H. Cecotti, K. Wong-Lin, and G. Prasad, "Towards increasing the number of commands in a hybrid brain-computer interface with combination of gaze and motor imagery," in 37th Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society, 2015.
2. Y. K. Meena, H. Cecotti, K. Wong-Lin, and G. Prasad, "Simultaneous gaze and motor imagery hybrid BCI increases single-trial detection performance: a compatible-incompatible study," in 9th IEEE-EMBS International Summer School on Biomedical Signal Processing, 2015.
3. D. O. Doherty, Y. K. Meena, H. Raza, H. Cecotti, and G. Prasad, "Exploring gaze-motor imagery hybrid brain-computer interface design," in IEEE Int. Conf. on Bioinformatics and Biomedicine, 2014, pp. 335-339.
Yogesh Kumar Meena, Hubert Cecotti, KongFatt Wong-Lin, Girijesh Prasad
School of Computing and Intelligent Systems
Ulster University, Derry BT48 7JL, Northern Ireland, UK
Block diagram of the proposed system: touch-pad, eye-tracker, and soft-switch inputs feed a PC GUI, which drives the chair motion via an Arduino microcontroller and sensors.
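The PC-to-microcontroller link in this architecture could, for instance, translate each of the nine commands into left/right wheel velocities sent over serial. The mapping, command names, and wire format below are hypothetical illustrations, not the protocol actually used in the system.

```python
# Hypothetical mapping from the nine GUI commands to left/right wheel
# velocities, normalised to [-1, 1], for a differential-drive wheelchair.
WHEEL_SPEEDS = {
    "Forward":        ( 1.0,  1.0),
    "Backward":       (-1.0, -1.0),
    "Left":           (-0.5,  0.5),   # turn in place
    "Right":          ( 0.5, -0.5),
    "Forward left":   ( 0.5,  1.0),   # gentle arc
    "Forward right":  ( 1.0,  0.5),
    "Backward left":  (-0.5, -1.0),
    "Backward right": (-1.0, -0.5),
    "Stop":           ( 0.0,  0.0),
}

def encode_command(cmd):
    """Encode a command as an ASCII line 'left,right\\n' for the microcontroller."""
    left, right = WHEEL_SPEEDS[cmd]
    return f"{left:.2f},{right:.2f}\n"
```

A simple ASCII line protocol like this keeps the Arduino-side parser trivial: read a line, split on the comma, and set the two motor set-points.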
Experimental setup for the control interface.
Experimental setup for the test trajectory.
Graphical user interface (GUI).
Bar chart: lap completion time (sec) vs. input modality (Touchpad, Eye-tracker, Eye-tracker + soft-switch).
GUI command buttons: Forward, Forward left, Forward right, Left, Right, Backward, Backward left, Backward right, Stop; plus Connect, Disconnect, and Start buttons for the TP, ET, and ET_SS modes.
ACKNOWLEDGMENT
Y.K.M. is supported by the Government of India (Education- 11016152013). H.C., K.W.-L., and G.P. are supported by the Northern
Ireland Functional Brain Mapping Facility (1303/101154803), funded by InvestNI, University of Ulster, Magee Campus, Northland
Road, BT48 7JL, Northern Ireland, United Kingdom.
Bar chart: number of commands vs. input modality (Touchpad, Eye-tracker, Eye-tracker + soft-switch).