
The Berlin brain-computer interface presents the novel mental typewriter Hex-O-Spell



We present a novel typewriter application, ‘Hex-o-Spell’, that is specifically tailored to the characteristics of direct brain-to-computer interaction. The high bandwidth at which a user may perceive information from the display is exploited in an appealing visualization based on hexagons. The control of the application, on the other hand, is possible at low bandwidth, using only two control commands (mental states), and is relatively robust against delays and the like. The effectiveness and robustness of the interface were demonstrated at CeBIT 2006 (the world’s largest IT fair), where two subjects operated the mental typewriter at speeds of up to 7.6 char/min. It was developed within the Berlin Brain-Computer Interface project in cooperation with specialists in Human-Computer Interaction.
Benjamin Blankertz, Guido Dornhege, Matthias Krauledat, Michael Schröder, John Williamson, Roderick Murray-Smith, Klaus-Robert Müller

Fraunhofer FIRST (IDA), Berlin, Germany
Technical University Berlin, Berlin, Germany
University of Glasgow, Glasgow, Scotland
Hamilton Institute, NUI Maynooth, Ireland
Brain-Computer Interfaces (BCIs) translate the intent of a subject, measured from brain signals, directly into control commands, e.g. for a computer application or a neuroprosthesis [3]. Although the proof of concept of BCI systems was given decades ago, several major challenges still have to be faced. One of these challenges is to develop BCI applications that take the specific characteristics of BCI communication into account. Apart from being prone to error and having rather uncontrolled variability in timing, BCI communication has a heavily unbalanced bandwidth: users can perceive a high rate of information transfer from the display, but have only low-bandwidth communication in their control actions.
The Berlin Brain-Computer Interface (BBCI) is an EEG-based BCI system which operates on the spatio-spectral changes during different kinds of motor imagery. It uses machine learning techniques to adapt to the specific brain signatures of each user, thereby achieving high-quality feedback already in the first session [1]. The mental typewriter presented here incorporates state-of-the-art knowledge from Human-Computer Interaction (HCI); we also report results of a public performance with two subjects.
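To make the “spatio-spectral changes” concrete: during motor imagery, the mu rhythm (roughly 8–15 Hz) over motor cortex is attenuated (event-related desynchronization), and band-power features of this kind are what a motor-imagery classifier separates. The following is a minimal, self-contained sketch with toy signals; the band edges, sampling rate, and signals are illustrative assumptions, not the BBCI pipeline.

```python
# Minimal sketch of the kind of feature a motor-imagery BCI classifies:
# log band-power of the EEG in a motor-related frequency band.
# Band edges, sampling rate, and the toy signals are assumptions.

import numpy as np

def log_bandpower(x, fs, band=(8.0, 15.0)):
    """Log power of signal x within `band` (Hz), via the FFT."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.log(spec[mask].sum()))

fs = 100
t = np.arange(fs) / fs
rest = np.sin(2 * np.pi * 10 * t)            # strong 10 Hz mu rhythm at rest
imagery = 0.2 * np.sin(2 * np.pi * 10 * t)   # attenuated during imagery (ERD)
print(log_bandpower(rest, fs) > log_bandpower(imagery, fs))  # True
```

In a real system such features would be computed per channel and fed to a trained classifier; here a simple comparison suffices to show the direction of the effect.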
The challenge in designing a mental typewriter is to map a small number of BCI control states (typically two) to the large number of symbols (26 letters plus punctuation marks) while accounting for the low signal-to-noise ratio in the control signal. The more fluid interaction in the BBCI system was made possible by an approach combining probabilistic data and dynamic systems theory, based on our earlier work on mobile interfaces [2].
Here we take the example that the typewriter is controlled by two mental states: imagined right-hand movement and imagined right-foot movement. The initial configuration is shown in the leftmost plot of Fig. 1. Six hexagonal fields surround a circle, and in each of them five letters or other symbols (including < for backspace) are arranged. A symbol is selected via an arrow in the center of the circle. Imagining a right-hand movement turns the arrow clockwise; imagining a foot movement stops the rotation and makes the arrow start extending. If the foot imagery is sustained for a longer period, the arrow touches the hexagon and thereby selects it. All other hexagons are then cleared, and the five symbols of the selected hexagon are distributed over individual hexagons as shown in Fig. 1, while the arrow is reset to its minimal length. The same procedure (rotation if desired, then extension of the arrow) is now repeated to select one symbol.
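The two-step selection described above can be sketched as a small state machine. Everything below (the symbol layout, the discrete time steps, the dwell count needed to confirm a selection) is an illustrative assumption; the actual interface uses continuous arrow dynamics rather than discrete steps.

```python
# Sketch of the two-command Hex-o-Spell selection loop.
# Layout, discrete steps, and dwell count are illustrative assumptions.

HEX_LAYOUT = [
    list("ABCDE"), list("FGHIJ"), list("KLMNO"),
    list("PQRST"), list("UVWXY"), ["Z", ".", ",", "?", "<"],  # '<' = backspace
]

def select(commands, layout):
    """Run one two-step selection.

    `commands` is an iterable of classifier outputs per time step:
    'hand' -> rotate the arrow to the next hexagon,
    'foot' -> extend the arrow; enough consecutive 'foot'
              steps confirm the current hexagon/symbol.
    """
    EXTEND_STEPS = 3            # assumed dwell needed to confirm
    stage_items = list(layout)  # stage 1: six hexagons
    pos, extend = 0, 0
    for cmd in commands:
        if cmd == "hand":
            pos = (pos + 1) % len(stage_items)
            extend = 0
        elif cmd == "foot":
            extend += 1
            if extend == EXTEND_STEPS:
                chosen = stage_items[pos]
                if isinstance(chosen, list):   # stage 1 -> stage 2
                    stage_items, pos, extend = chosen, 0, 0
                else:                          # stage 2: symbol chosen
                    return chosen
    return None                                # no commitment yet

# Example: rotate once, confirm hexagon 2, then confirm its first symbol.
cmds = ["hand", "foot", "foot", "foot", "foot", "foot", "foot"]
print(select(cmds, HEX_LAYOUT))  # selects 'F'
```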
A language model determines the order of the symbols within each hexagon depending on the context, but this and many more important details go beyond the scope of this note.
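As an illustration of how such a language model could order symbols (the note does not specify the model used), a context-dependent ranking might place the most probable next symbol where it needs the fewest rotation steps. The bigram counts below are toy values, purely for demonstration.

```python
# Illustrative sketch: order the five symbols of a hexagon so that the
# symbol most probable after the current context needs the fewest
# rotation steps. The bigram counts are toy values (an assumption),
# not the model used in the paper.

TOY_BIGRAMS = {("t", "h"): 50, ("t", "e"): 20, ("t", "i"): 10}

def order_symbols(symbols, context, bigrams):
    """Rank `symbols` by how often each follows the last typed character."""
    last = context[-1] if context else ""
    return sorted(symbols, key=lambda s: -bigrams.get((last, s), 0))

# After typing 't', 'h' should come first (fewest rotations to reach).
print(order_symbols(["x", "q", "h", "e", "i"], "t", TOY_BIGRAMS))
# -> ['h', 'e', 'i', 'x', 'q']
```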
On two days in the course of the CeBIT fair 2006 in Hannover, Germany, live demonstrations were given with two subjects simultaneously using the BBCI system. These demonstrations turned out to be BBCI robustness tests par excellence. All over the fair pavilion, noise sources of different kinds (electric, acoustic, ...) were potentially jeopardizing the performance. Low air humidity made the EEG electrode gel dry out, and, last but not least, the subjects were under psychological pressure to perform well, for instance in front of several running TV cameras or in the presence of the German minister of research. The preparation of the experiments started at 9:15 a.m. and the live performance at 11 a.m. The two subjects were either playing ‘Brain-Pong’ against each other or writing sentences with the typewriter Hex-o-Spell. Except for short breaks and a longer lunch break, the subjects continued until 5 p.m. without degradation of performance over time, which demonstrates great stability. The typing speed was between 2.3 and 5 char/min for one subject and between 4.6 and 7.6 char/min for the other. This speed was measured for error-free, completed sentences, i.e. all typing errors that had been committed had to be corrected using the backspace of the mental typewriter.

Figure 1: The mental typewriter ‘Hex-o-Spell’. The two states classified by the BBCI system control the turning and growing of the gray arrow, respectively (see also text). Letters can thus be chosen in a two-step procedure.

For a BCI-driven typewriter not operating on evoked potentials this is a world-class spelling speed, especially taking into account the environment and the fact that the subjects did not train on the BBCI typewriter interface: they had used the typewriter application only twice before.
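As a rough, assumption-laden upper bound (not a figure reported in the paper): treating the roughly 30 selectable symbols as equiprobable, 7.6 char/min corresponds to about 37 bits/min. The true information rate is lower, since letters are far from equiprobable and errors were corrected via backspace.

```python
# Back-of-the-envelope information rate. Assumes 30 equiprobable
# symbols, which overestimates the per-character information content.
import math

symbols = 30                        # 26 letters plus a few other symbols
chars_per_min = 7.6                 # best observed rate at CeBIT
bits_per_char = math.log2(symbols)  # ~4.9 bits under equiprobability
print(round(bits_per_char * chars_per_min, 1))  # ~37.3 bits/min upper bound
```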
The prospective value of BCI research for rehabilitation is well known. In light of the work presented here we would advocate a further point. BCI provides stimulation to HCI researchers as an extreme example of the sort of interaction which is becoming more common: interaction with ‘unconventional’ computers in mobile phones, or with devices embedded in the environment. These settings share a number of attributes: high-dimensional, noisy inputs which describe intrinsically low-dimensional content; data with content at multiple time-scales; and significant uncontrolled variability. The mismatch in bandwidth between the display and control channels (as explained in the introduction) and the slow, frustrating error correction motivate a more ‘negotiated’ style of interaction, where commitments are withheld until appropriate levels of evidence have been accumulated (i.e. the entropy of the beliefs inferred from the behavior of the joint human-computer system should change smoothly, limited by the maximum input bandwidth). The dynamics of a cursor, given such noisy inputs, should be stabilized by controllers which infer potential actions as well as the structure of the variability in the sensed data. Hex-o-Spell demonstrates the potential of such intelligent stabilizing dynamics in a noisy but richly sensed medium. The results suggest that the approach is a fruitful one, and one which leaves open the potential for incorporating sophisticated models without ad hoc modifications.
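The ‘negotiated’ commitment idea can be sketched as Bayesian evidence accumulation with an entropy gate: the system updates a belief over the six hexagons from noisy classifier outputs and commits only once the belief is sufficiently concentrated. The likelihood model and the threshold below are assumptions for illustration, not the controller used in Hex-o-Spell.

```python
# Hedged sketch of entropy-gated evidence accumulation: commit to a
# hexagon only when the entropy of the belief falls below a threshold.
# Likelihoods and threshold are illustrative assumptions.
import math

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def accumulate(observations, likelihoods, threshold=1.0):
    """observations: sequence of noisy classifier outputs;
    likelihoods[o][h] = P(observation o | target hexagon h)."""
    belief = [1 / 6] * 6                    # uniform prior over 6 hexagons
    for o in observations:
        belief = [b * likelihoods[o][h] for h, b in enumerate(belief)]
        z = sum(belief)
        belief = [b / z for b in belief]    # Bayesian update
        if entropy(belief) < threshold:     # enough evidence: commit
            return belief.index(max(belief))
    return None                             # withhold commitment

# Toy likelihoods: observation i weakly indicates hexagon i.
L = {i: [0.5 if h == i else 0.1 for h in range(6)] for i in range(6)}
print(accumulate([2, 2, 2], L))  # repeated evidence for hexagon 2 -> 2
print(accumulate([2], L))        # one noisy observation -> None (wait)
```

The point of the gate is exactly the smooth entropy change argued for above: a single noisy observation never triggers a commitment, while consistent evidence does.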
This work was supported in part by a grant of the BMBF (FKZ 01IBE01A), by the SFI (00/PI.1/C067), and by the IST Programme of the EU under the PASCAL NoE (IST-2002-506778).
[1] Blankertz B, Dornhege G, Krauledat M, Müller K-R, Kunzmann V, Losch F, Curio G, The Berlin Brain-Computer Interface: EEG-based communication without subject training, IEEE Trans. Neural Sys. Rehab. Eng. (2006), accepted.

[2] Williamson J, Murray-Smith R, Dynamics and probabilistic text entry, Proc. of the Hamilton Summer School on Switching and Learning in Feedback Systems (Murray-Smith R and Shorten R, eds.), LNCS vol. 3355, 2005, pp. 333–342.

[3] Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM, Brain-computer interfaces for communication and control, Clin. Neurophysiol. 113 (2002), 767–791.