Towards BCI-based Implicit Control in Human-Computer
Interaction
T. O. Zander, J. Brönstrup, R. Lorenz, L. R. Krol
Team PhyPA, Berlin Institute of Technology, MAR 3-2, 10587 Berlin
Abstract
In this chapter a specific aspect of Physiological Computing is defined and discussed:
implicit Human-Computer Interaction. Implicit Interaction aims at controlling a computer
system by behavioural or psychophysiological aspects of user state, independently of any
intentionally communicated command. This introduces a new type of Human-Computer
Interaction which, in contrast to most forms of interaction implemented today, does
not require the user to explicitly communicate with the machine. Instead, users can focus
on understanding the current state of the system and developing higher-level strategies for
optimally reaching the goal of the given interaction. For example, the system can assess the
user state by means of passive Brain-Computer Interfaces, which the user need not even be
aware of. Based on this information, combined with information about the given context,
the system can adapt automatically to the current strategies of the user. In a first study,
a proof of principle is given by implementing Implicit Interaction to guide simple cursor
movements in a 2D grid to a target. The results of this study clearly indicate the high
potential of Implicit Interaction and open up a new range of applications for passive
Brain-Computer Interfaces.
This is the author's (updated) final draft. The final publication is in S.H. Fairclough and K. Gilleade
(Eds.), Advances in Physiological Computing (pp. 67–90). Berlin, Germany: Springer. DOI:
10.1007/978-1-4471-6392-3_4
1 Introduction
Technology affects almost every aspect of our lives—our jobs, transportation, entertainment,
communication and social integration. As advances in technology serve as a catalyst for
the steady progress of our society, industrial, scientific and governmental efforts focus on
stimulating the development of new hard- and software which become more powerful day
by day. As a result, information can be processed, analyzed and distributed by technical
systems at a speed, in an amount and with an accuracy which far exceed the capabilities
of human beings. Nevertheless, a human user is needed to control most technical systems
because computers still lack the capabilities of intelligent thinking. However, a Human-
Machine System (HMS) would work most efficiently if more parts of the human information
processing system could be delegated to a machine. Ideally, the user would just be observing
and interpreting the current state of the system and deciding on high-level strategies to reach
the goal of the HMS based on information smartly distilled by the machine. Low-level
control tasks executing those strategies are best left to the machine.
An everyday example for this line of reasoning would be an adaptive, open-ended
electronic book. While reading the story, the reader evaluates the current events as either ‘good’
or ‘bad’. If the book could assess these evaluations, it could ‘steer’ the story accordingly,
by re-writing the storyline to better suit the reader’s apparent preferences. Regardless of
the reader’s evaluations being conscious or not—they may simply be automatic, affective
reactions to reading a story—the reader is not actively composing a story, nor explicitly
instructing the system how it should continue: The focus is on reading and interpreting the
story as it unfolds, while the actual changes to the book happen automatically, potentially
even unbeknown to the reader. In a highly automated way, the computer would
‘understand’ the concepts developed by the user. Humans would give guidance to machines
to solve tasks efficiently.
Currently, Human-Computer Interaction (HCI) is far from such an ideal system. Firstly,
users have to request information manually and, secondly, they need to give very detailed
commands, usually in a cumbersome way. The first problem results from the fact that
computers can store and process large amounts of information, and do that very differently from
humans. Consider a log file storing network activity over one day. A computer can process
such information in a short amount of time, but it is hardly accessible to users. The
user must thus explicitly instruct the computer to parse it for them. The second problem is
twofold. Firstly, humans usually think in larger concepts, while a computer is controlled by
triggering small actions leading to the realization of such concepts. If you want to change
the colour of a two-dimensional figure of a cube, you could easily instruct a human painter
to do this with a simple sentence (“Please, paint the cube blue.”), but when communicating
this concept to a computer, you have to go step by step (defining areas, selecting specific
shades of blue etc.). Secondly, input mechanisms like mouse and keyboard are cumbersome
and often unnatural means for communicating intentions and instructions. In the above
example, the step “defining areas” will likely consist of a large number of mouse movements
and button presses. The user has to spend effort on translating intentions into commands,
such that they can be processed by the machine. All of these problems lead to a high user
workload resulting from low-level, infrastructural tasks.
A lot of effort has gone into increasing the usability of technical systems by resolving one
aspect of this problem. Current systems smartly aggregate and present available information
such that it is easily accessible for the user when needed and can be perceived quickly and
effortlessly. However, the other direction in human-computer interaction, that of sending
information from the user to the machine, is also highly relevant. Communicating to the
computer is still complicated and demanding. It relies mainly on the explicit sending of
detailed, small-stepped commands formulated by the user, as described in the examples
above. Each communication in this direction increases the effort the user must spend to keep the
interaction running. Hence, an increase in the overall efficiency of a given HCI could be
achieved by dissolving most of the direct and explicit communication, which is unwieldy
due to form, style and complexity from a human perspective. Once this is achieved, the
user can focus on the task of guiding the interaction to its goal.
In such a scenario, the machine would still need to have information about the user’s
concepts, in order to process and provide appropriate information and prepare specific tasks in
the interaction. This information must thus still be communicated to the system somehow. But with
Implicit Interaction, the machine is expected to infer this information automatically. This
can only be achieved by extending the user model used by the machine—which is currently
restricted to directly formulated commands—with information about the user’s emotions,
intentions and situational interpretations. Passive Brain-Computer Interfaces can be a tool
for this as they provide information about even covert aspects of the (cognitive, affective)
user state in real time. They can be used for implementing a secondary, implicit interaction
loop which supports users in their main interaction, leading to an automated adaptation of
the system.
The following parts of this chapter focus on the capabilities of passive Brain-Computer
Interfaces for establishing implicit Human-Computer Interaction. An example shows that
this could lead to a completely new type of interaction which reaches its goals even without
the need for any consciously generated command. The user only needs to focus on
understanding the current state of the interaction, while the machine learns to reach the goal
by investigating the user’s cognition and interpretation automatically.
2 Brain-Computer Interfaces
2.1 The roots and history of BCI
The idea of “reading thoughts” with the electroencephalogram (EEG) was first mentioned by
Berger in 1929 (as cited in Birbaumer 2006). He speculated on the possibility of processing
human EEG waveforms using sophisticated mathematical analyses. The development of
the first EEG-based “Brain-Computer Interface” (BCI) was pioneered in the early 1970s by
Vidal (1973, 1977), who also coined the term BCI. Since then BCI has evolved “into one of
the fastest-growing areas of scientific research” (Mak and Wolpaw 2009).
A BCI was originally defined as a new, non-muscular communication and control channel
for sending messages and commands in real-time to the external world (Wolpaw et al
2002). By measuring brain activity associated with the user’s intent and translating it into
control signals for communication systems or external devices, a BCI bypasses the brain’s
normal output channels of peripheral nerves and muscles. Brain activity can be recorded
from electrodes inside the skull (invasive BCIs) or outside of it (non-invasive BCIs). Invasive BCI
involves brain signals such as action potentials from nerve cells or nerve fibers, synaptic and
extracellular field potentials and electrocorticography (ECoG) (for a review see Birbaumer
2006). A variety of methods may serve as non-invasive BCI. Besides EEG, these methods
include magnetoencephalography (MEG, e.g. Lal et al 2005; Mellinger et al 2007), functional
magnetic resonance imaging (fMRI, e.g. Lee et al 2009; Weiskopf et al 2003), and
near-infrared spectroscopy (NIRS, e.g. Coyle et al 2004). Due to its relatively simple and
inexpensive equipment and its high temporal resolution, EEG has been the preferred
recording method in BCI research in recent decades.
From their beginnings in the 1970s, EEG-based BCIs have given rise to the hope of
restoring independence to people suffering from diseases that disrupt the neural pathways
through which the brain communicates with and controls its external environment. Among
these are neurodegenerative and genetic neuromuscular diseases such as amyotrophic lateral
sclerosis (ALS), Friedreich’s ataxia or muscular dystrophies, multifactorial and polygenic
disorders like multiple sclerosis, cerebral palsy or the Guillain-Barré syndrome, as well as
severe neuromuscular impairments due to brainstem stroke or brain and spinal cord injuries.
For severely paralyzed patients whose remaining muscular control (e.g. eye movement) is weak,
easily fatigued, or unreliable (Wolpaw et al 2002), a BCI serves as the only remaining channel
for communicating with the outside world. The resulting condition is called locked-in state
(LIS) if basic control of at least one muscle is present (Birbaumer 2006). However,
for most cases an easier and more efficient communication can be established by exploiting
any remaining muscle rather than employing a BCI (e.g. communication via eye blinks
or cheek muscles). As neuronal degeneration progresses, patients become completely
paralyzed (e.g. in late-stage ALS). They lose control over all voluntary muscles, including eye
movement and respiration and are “locked in to their bodies, unable to communicate in any
way” (Wolpaw et al 2002). For completely locked-in state (CLIS) patients, research has
shown that basic communication cannot be restored with BCI (Birbaumer 2006; Kübler
and Birbaumer 2008). Whether CLIS constitutes a unique BCI-resistant condition, or whether
individuals retain the capacity for BCI use if they begin employing it before
becoming completely locked in, remains an open empirical question (Kübler and Birbaumer
2008). Beyond this initial motivation for BCI research to enable communication, other BCI
applications can be used to transmit brain signals to muscles or to external orthotic devices
in order to restore movements in paralyzed limbs. Such so-called neuroprostheses generally
use functional electrical stimulation (FES) to “elicit action potentials of the efferent nerves,
which provoke contractions of the innervated but paralyzed muscles” (Müller-Putz et al
2006b). Based on this principle, neuroprostheses artificially compensate for the loss of
voluntary muscle control. These originally intended BCI applications have in common that
they usually require voluntary and directed commands by the user to enable spelling or the
control of an external device.
2.2 Categorization of BCIs
Alongside these accomplishments, a recent direction within the research field of BCI
attempts to broaden the general BCI approach by substituting the user’s command with
passively conveyed implicit information. Based on this thought, a new categorization of BCI
systems was proposed by Zander and colleagues (Zander et al 2010b), dividing BCI-driven
applications into active, reactive and passive BCI systems.
Active BCI. An active BCI derives its outputs from brain activity which is directly
and consciously controlled by the user, independent of external events, for controlling an
application.
Reactive BCI. A reactive BCI derives its outputs from brain activity arising in reaction
to external stimulation which is indirectly modulated by the user to control an application.
Passive BCI. A passive BCI derives its outputs from arbitrary brain activity arising
without the purpose of voluntary control, for enriching a human-machine interaction with
implicit information on the user state.
2.3 Important Applications of BCIs
2.3.1 Active BCI
Early examples of active BCI systems are based on slow cortical potentials (SCP). SCPs are
slow voltage changes generated in the cortex that can last from less than half a second up
to several seconds (Birbaumer et al 1990). With frequencies down to 1 Hz they are among
the lowest-frequency features of the EEG. It has been shown that both healthy users and
paralyzed patients can be trained to self-regulate these positive and negative voltage shifts
in order to control external devices by means of a BCI (Birbaumer and Cohen 2007). Most
commonly SCP-based BCIs are used for cursor control and target selection, such as spelling
(Birbaumer et al 1999, 2003; Hinterberger et al 2004). Despite acceptable accuracy rates,
an SCP-based BCI needs long training times, sometimes up to several months, and provides
only slow communication, usually around one letter per minute (Birbaumer 2006).
More popular active BCI systems utilize the sensorimotor rhythm (SMR). SMRs
comprise “mu and beta rhythms, which are oscillations in the brain activity localized in the
mu band (7-13 Hz) [...] and beta band (13-30 Hz)” (Nicolas-Alonso and Gomez-Gil 2012).
SMR-based BCIs operate on the principle of movement-related frequency changes in the
ongoing EEG activity over sensorimotor areas. Due to decreased synchrony of the
underlying neuronal populations during the performance of such movements, the power in the
mu-rhythm decreases (Pfurtscheller and Lopes da Silva 1999). This phenomenon is called
event-related desynchronization (ERD, Pfurtscheller 1977) and is measured to effect control
in SMR-based BCIs. Besides ERD, the post-movement beta rebound (event-related
synchronization (ERS), Pfurtscheller 1992) can also be employed for classification (Bai et al 2008).
For BCI, the significance of the SMR lies in the fact that it is not only attenuated
by actual movements, but also by intended (Kübler et al 2005) or imagined ones (Wolpaw
et al 2000), in paralyzed patients and healthy subjects respectively. For the latter, the term
motor imagery (MI)-based BCI has become established in the field. SMR-based
BCIs have been extensively investigated since the mid-1980s (Wolpaw et al 2002). Similar
to SCP-based BCIs, they are most commonly used for cursor control in order to select
letters or icons on a screen (Daly and Wolpaw 2008). Besides one-dimensional control, both
two-dimensional (Blankertz et al 2007; Wolpaw 2004) and three-dimensional control
(McFarland et al 2010) can be achieved by employing MI of several limbs, such as right hand,
left hand and foot. Furthermore, the multidimensional control of neuroprostheses
(Müller-Putz et al 2005; Tavella et al 2010) and orthotic devices such as robotic arms (McFarland
and Wolpaw 2008; Pfurtscheller et al 2000) has been accomplished with the support of MI-
based BCIs. However, due to their relatively low bit rates and only moderate accuracy rates
compared to, for example, reactive BCIs (Guger et al 2003, 2009), the real-world application
of MI-based systems is extremely limited. Beyond that, a non-negligible portion of users
(15-30%)—so-called BCI illiterates—fail to gain any MI-based BCI control (Blankertz et al
2010).
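To make the ERD/ERS principle concrete, below is a minimal sketch of how the relative band-power change could be quantified, following the classical definition of Pfurtscheller and Lopes da Silva (1999). The function names, the fixed mu band and the use of Welch's method are our illustrative assumptions, not part of any specific system described here.

```python
import numpy as np
from scipy.signal import welch

def band_power(segment, fs, band=(8.0, 13.0)):
    """Mean spectral power of a 1-D EEG segment within a frequency band."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), int(fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def erd_percent(baseline, event, fs, band=(8.0, 13.0)):
    """Relative band-power change of an event period versus a resting baseline.
    Negative values indicate event-related desynchronization (ERD), e.g. the
    mu-power decrease over sensorimotor areas during (imagined) movement;
    positive values indicate event-related synchronization (ERS)."""
    r = band_power(baseline, fs, band)
    a = band_power(event, fs, band)
    return (a - r) / r * 100.0
```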
2.3.2 Reactive BCI
The vast majority of systems based on reactive BCI employ event-related potentials (ERP).
An ERP is the brain response to an external or internal event such as a visual, auditory
or tactile stimulus. The most prominent and best-studied ERP is the P3 (also P300)
component, a large positivity that is elicited in the central-parietal region of the brain
300-500 ms post-stimulus upon rare events. For BCI purposes, an oddball paradigm is usually
applied. A rare target event (e.g. the target letter) is presented among frequently appearing
nontarget events (e.g. remaining letters of the alphabet). The user’s focused attention on
the presented target leads to a noticeable increase of the P3 amplitude that can be extracted
from the EEG. Based on this principle, the target letter can be selected. The first P3-based
speller, the so-called matrix speller, was introduced by Farwell and Donchin (1988). Since
then, similar P3 spellers have been extensively investigated and developed (Nijboer et al
2008; Sellers and Donchin 2006). Another constituent of the ERP, the N2 (also N200), a
parieto-occipital negativity typically evoked 180-320 ms following stimulus presentation,
also appears to be closely associated with cognitive processes of perception and selective
attention (Patel and Azzam 2005). For this reason, Treder and Blankertz (2010) advocate
the use of the term ERP-based BCI in order to emphasize the fact that “there is a multitude
of ERP components that is affected by attention and can be exploited by classifiers”. Most
recently, advances have been made towards gaze-independent spellers (Acqualagna and
Blankertz 2011; Treder et al 2011) or spellers adapted to non-visual modalities by using
auditory (Furdea et al 2009; Schreuder et al 2011) and tactile stimulation (Brouwer and van
Erp 2010) (for a review also see Riccio et al 2012).
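As an illustration of the oddball principle described above, the following minimal sketch selects the attended item by averaging single-channel epochs per candidate and picking the largest mean amplitude in the 300-500 ms window. The data layout, channel choice and the raw-amplitude rule are assumptions made for illustration; deployed spellers use trained classifiers instead.

```python
import numpy as np

def select_target(epochs, flashed, fs, window=(0.300, 0.500)):
    """Pick the attended oddball target from single-channel ERP epochs.

    epochs  -- array (n_trials, n_samples): e.g. channel Pz, each trial
               time-locked to one stimulus presentation (t = 0 at stimulus)
    flashed -- list of length n_trials: which candidate item was flashed
    Returns the candidate whose averaged amplitude in the P3 window is
    largest, i.e. the item the user presumably attended to.
    """
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    scores = {}
    for item in set(flashed):
        idx = [i for i, f in enumerate(flashed) if f == item]
        scores[item] = epochs[idx, lo:hi].mean()  # mean amplitude over repetitions
    return max(scores, key=scores.get)
```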
Other reactive BCIs are frequency-based, exploiting steady-state evoked potentials
(SSEP) that occur in response to a visual, auditory or tactile stimulus presented at
a steady rate. For instance, in a steady-state visually evoked potential (SSVEP)-based BCI,
several stimuli, each flickering at a different frequency (typically in the range of 3.5-75 Hz,
Beverina et al 2003), are presented to the user (Bin et al 2009; Gao et al 2003). When the
user focuses attention on one of the steadily flashing stimuli, the brain produces detectable
oscillations of the same frequency in the visual cortex. Further examples of BCIs utilizing
steady-state somatosensory evoked potentials (SSSEP, Müller-Putz et al 2006a) or steady-state
auditory evoked potentials (SSAEP, Hill and Schölkopf 2012) have been proposed.
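The frequency-tagging logic of an SSVEP-based BCI can be sketched in a few lines: compare the spectral power of an occipital EEG segment at each stimulation frequency and pick the strongest. The candidate frequencies below are placeholders; practical systems often also evaluate harmonics, or use canonical correlation analysis (Bin et al 2009) for robustness.

```python
import numpy as np

def detect_ssvep(signal, fs, stim_freqs=(8.0, 10.0, 12.0, 15.0)):
    """Return the stimulation frequency with the most spectral power in an
    occipital EEG segment, i.e. the flickering stimulus the user attends to."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # Power at the spectral bin closest to each candidate frequency
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
    return stim_freqs[int(np.argmax(powers))]
```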
2.3.3 Passive BCI
Systems based on passive BCI can provide information about Covert Aspects of the User
State (CAUS), i.e. task-induced states which can only be detected with weak reliability using
conventional methods such as behavioural measures (Zander et al 2010b). Restricted forms
of passive BCIs have in the past proven to be valuable tools for detecting mental workload
(Kohlmorgen et al 2007), working memory load (Grimes et al 2008), fatigue (Papadelis
et al 2007), self-induced errors (Blankertz et al 2002a), deception (Fang et al 2003), or
anticipation (Gangadhar et al 2009). However, those systems focused on user-state
detection alone; the information gained about CAUS was not fed back into the
system to enrich the human-machine interaction. More recent examples pursuing this notion
include the detection and correction of self-induced (Schmidt et al 2012) or machine-induced
errors (Zander et al 2010b).
2.4 Extending the definition of BCI for applications including
users without disabilities
In contrast to active or reactive systems, a passive BCI does not interfere with other means
of the human-machine interaction. It can be “reliant on either the presence or the absence
of an ongoing conventional human-computer interaction, or be independent of it”
(complementarity, Zander and Kothe 2011, p. 4). Furthermore, “a passive BCI application
can make use of arbitrarily many passive BCI detectors in parallel with no conflicts,
which is more difficult for active and reactive BCIs due to the user’s limited ability of
consciously interacting with multiple components simultaneously” (composability, Zander
and Kothe 2011, p. 4). “Since no conscious effort from the user is needed for the use
of passive BCIs (besides preparation), their operational cost is determined by the cost of
their false alarms. Passive BCIs producing probabilistic estimates, together with the a priori
probability of predicting correctly, could potentially be designed allowing for arbitrary
levels of cost-optimal decision making at the application level. In that way, theoretically,
systems could be designed which would only gain in efficiency by utilizing a passive BCI
and could have zero benefit in the worst case” (controlled costs, Zander and Kothe 2011,
p. 4).
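The controlled-costs argument can be read as a simple expected-utility gate: an adaptation driven by a probabilistic passive-BCI estimate is only executed when its expected benefit outweighs the expected cost of a false alarm. The following sketch is our illustration of that reading, not code from Zander and Kothe (2011); all parameter names are hypothetical.

```python
def should_adapt(p_state, benefit, false_alarm_cost):
    """Cost-optimal gating of a passive-BCI-driven adaptation.

    p_state          -- probabilistic estimate that the detected user state is present
    benefit          -- expected gain if the adaptation is appropriate
    false_alarm_cost -- expected cost if it is not (the detector's false alarm)

    The application adapts only when the expected utility is positive, so with
    an uninformative detector it simply never acts and the system loses
    nothing -- the 'zero benefit in the worst case' property.
    """
    return p_state * benefit > (1.0 - p_state) * false_alarm_cost
```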
As the concept of passive BCI offers the key properties of complementarity, composability
and controlled costs, its application spectrum is not limited to users with disabilities.
Moreover, it provides an additional information channel conveying highly relevant information
about the user. Besides information about the user state, data about the environment and
the technical system could augment the available information space and thereby add
context awareness to the system (Zander and Jatzev 2012). These additions can be used to
“improve the actual state-of-the-art human-machine interaction by enabling the technical
system to adapt to the user without any additional effort taken by the user” (Zander and
Kothe 2011, p. 3).
3 Brain-Computer Interfaces and Physiological Computing
Physiological Computing (PC) aims at integrating real-time physiological measures into an
HMS, such that the technical system gains insight into the cognitive and emotional state of
the user, allowing it to adapt itself accordingly.
3.1 Passive Brain-Computer Interfaces for Implicit Human-Computer Interaction
3.1.1 Explicit and Implicit Interaction
In a given HMS, the user and the technical system communicate with each other to reach a
certain goal, as described in Rötting et al (2009). In the more specific context of HCI, this
is usually done through an interaction cycle where user and computer exchange information
in an explicit way: Changes in the state of the machine are explicitly communicated to
the user so that this information can feed the next cycle. Similarly, all communication
from the user to the machine is based on specific commands which are explicitly formulated
and directed by the user. Hence, we define explicit control as the intentional directing of
commands at the interface of a computer system, which then follows the instructions.
We define overt and covert commands as being dependent on, respectively, overt and
covert aspects of the user state (cf. overt and covert attention). Explicit commands are
usually overt, but they do not have to be, as in the case of an active BCI (see section 2.3.1).
Other examples of explicit control are the use of a computer through peripheral controls
such as mouse or keyboard, or more recently also speech and gesture recognition.
Implicit Control, on the other hand, we define as an automatic state change of a technical
system based at least in part on an evaluation of the user’s current state, without any actual
commands to that effect being intentionally communicated to the system by the user (see
also Fairclough 2009). Although the user may be aware of neither the communication nor
the system’s state changes, these state changes are ultimately based on the user’s state.
Just like explicit commands, implicitly sent commands can be covert or overt.
In interpersonal interaction, the verbal content of speech is an example of explicit
interaction, as described in Schmidt (2000) and Rötting et al (2009). But often implicit information
has to be taken into account for the intent of a message to be understood,
as sometimes the speaker’s intention can hardly be inferred from the words alone. Intonation,
volume, gestures or facial expressions are examples of implicit information which can
change the meaning of a message entirely. A sarcastic statement is a good example of a
message whose meaning can change depending on implicit information. That emoticons (Rivera
et al 1996) have become an important part of text-based interaction can be taken as proof
of the importance of the affective context of communication.
In the following, we refer to a given HCI based on implicit commands as Implicit Interaction.
3.1.2 Forms of Implicit Interaction
Implicit Interaction is not only relevant for interpersonal interaction. As described in
Schmidt (2000), information about the context of a given system and the user’s behavioural
actions can be used for implicit interaction. In addition, forms of HCI can potentially be
improved if the technical system is able to detect cognitive processes such as attention,
engagement or workload. Such information could be used to enable a system to adapt to
the user’s state and thereby make HCI more usable, comfortable, intuitive and entertaining
(Rötting et al 2009). The affective state of the user, such as frustration, confusion,
dislike or interest (Picard 1999), also carries useful information.
An attempt at a Human-Computer Interface that adapted to behavioural measures of
the user, which can be seen as an early realization of the ideas stated in Schmidt (2000),
was Clippy, the automated assistant in Microsoft Office ’97 (Microsoft Corp., Redmond,
USA). Most users found Clippy to be highly intrusive, as it was inaccurate in estimating
the user’s intentions, which introduced switching costs (Squire and Parasuraman 2010) and
led to frustration (Whitworth 2005). The reason for the low accuracy of this automated
adaptation is that it was based solely on short-term information about the user’s behaviour,
which can be an unreliable source of information (Müller et al 2008). Incorporating implicit
information about the user state, i.e. information about the user’s situational interpretation,
could potentially have improved this system without increasing the user’s workload, and
may hence have led to better user acceptance.
3.1.3 Assessment of Implicit Information
Even though we can assert that adding implicit information about the cognitive or affective
state is important, it is unclear how this can be achieved in an efficient way. One promising
approach is using psychophysiological measures, as these are suited to convey information
about changes in the user state. Galvanic skin response, which measures changes in the
conductance of the skin, is sensitive to changes in cognitive constructs such as workload (Shi
et al 2007) or stress (Lazarus et al 1963), but also responds to specific events such as errors
committed by the subject (Hajcak et al 2003). Similarly, cardiac measures such as heart
rate or electrocardiogram (ECG) are responsive to cognitive, physical and environmental
influences (Hankins and Wilson 1998; LeBlanc et al 1976). Yet these methods have clear
downsides. The relation of such signals to the user’s cognitive state is inherently indirect.
Physical effects such as these are usually the result of cognitive or affective changes, and
thus carry only indirect information about these. As bodily processes are also modulated
by other factors, they can indeed be sensitive but are usually not very specific to any one
aspect of cognition or affect.
For certain aspects of cognitive and affective state, the EEG appears to be a more direct
and hence better measure. Changes in cognitive and affective state often originate
in the human brain. As EEG reflects changes in cortical activity, it is capable of providing
direct information about the source of certain state changes. As CAUS (see section
2.3.3) are hardly reflected in bodily processes, measuring cortical activity might be the
only way to access information about these specific changes in cognitive and affective state.
An example of this can be found in Zander et al (2011), where a passive BCI is used to
detect the (covert) intention of moving an index finger, before the onset of muscular activity.
As the autonomic nervous system (ANS) and the human brain are inherently
interconnected, changes in cognition might also be triggered by the ANS, and hence, the EEG
is not necessarily the most direct link to cognition. Nevertheless, EEG is a multi-channel
biosignal that can measure direct, physical manifestations of cognition. In addition, even
though the EEG has clear limitations in spatial resolution, it allows for the simultaneous
assessment of different aspects of human cognition (see composability, section 2.4), as we can
identify multiple cortical sources contributing to it. EEG also provides comparably high
temporal resolution: modulations in the ECG, for example, may take 2-3 seconds, while
EEG signals usually respond within hundreds of milliseconds. The evaluation of EEG data
in real time for Implicit Interaction is one main application of passive BCI and directly
follows its definition (see section 2.2). Hence, it is a direct and worthwhile means for assessing
information about the user state.
3.2 Possibilities for Beneficial Multidisciplinarity of BCI- and
PC-Research
The combination of PC- and BCI-based research can provide mutual benefits. The
methodology of BCI research can prove valuable to PC, in particular single-trial classification of
physiological measures, and PC can open a new variety of applications for BCI technology.
3.2.1 Passive BCI Methodology for Physiological Computing
Over the past four decades of BCI research, machine learning has become a fundamental
part of the field. Initially, BCIs consisted of predefined classifiers which the users had to
adapt to, as described in section 2.3.1. Machine learning made it possible to shift this training
effort from the user to the machine (Müller et al 2004).
To allow the technical system to learn the characteristics of a certain signal, exemplary
data is collected in a calibration phase. Multiple trials for each experimental condition
are recorded. Features are extracted from these data sets, for example, certain temporal,
spectral or spatial aspects or distributions. Machine learning then derives a model that
emphasizes features that provide maximal discriminability between the conditions. This model
is used in the BCI to evaluate new data sets and assign them to one of the cognitive states
that are of interest. This still relatively time-consuming calibration has to be undertaken
individually for each subject and usually for each session. In future BCIs this problem might
be solved through subject-independent classifiers, also called universal classifiers (Reuderink
et al 2011; Zander 2011; Wolpaw et al 2002). Such classifiers are predefined (for example
through training on a group of subjects) and are capable of accurate single-trial
classification of an aspect of the user’s state, independent of the subject. Individual training of
the subjects can then be omitted. The BCI community has developed and advanced many
different approaches to feature extraction and machine learning, and adopted approaches
and techniques from other fields of neurophysiological analysis. For the definition of
predictive models, many different approaches from machine learning have been used: from support
vector machines (SVM) to linear and quadratic discriminant analysis (LDA and QDA) (Duda
et al 2001), to non-linear kernel SVM (Schölkopf and Smola 2002), and up to advanced
methods building on artificial neural networks (Balderas et al 2011). Because EEG features are
usually approximately Gaussian, and due to the low complexity (low Vapnik-Chervonenkis
dimension) of LDA, LDA is less prone to overfitting (Vapnik and Chervonenkis 1971). Another
major advantage of LDA over SVM is that it is robust against imbalanced trial numbers
between classes, as it is based mainly on an estimation of covariance matrices (Duda et al 1973).
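As a concrete illustration of this favoured approach, the sketch below trains a shrinkage-regularised LDA on a calibration feature matrix and estimates its single-trial accuracy by cross-validation, here using scikit-learn; the random arrays merely stand in for real calibration features and labels.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# X: (n_trials, n_features) calibration features, y: (n_trials,) condition
# labels -- here filled with placeholder random data for the sake of example.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))
y = rng.integers(0, 2, size=200)

# Shrinkage-regularised LDA: the 'lsqr' solver with Ledoit-Wolf shrinkage
# stabilises the covariance estimate when trials are few and features many.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print(cross_val_score(clf, X, y, cv=5).mean())  # estimated single-trial accuracy
```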
Implementations of these approaches have been made public through toolboxes and
platforms such as BCILAB (Kothe and Makeig 2013), BCI2000 (Schalk et al 2004) or
OpenVIBE (Renard et al 2010) that allow for easy access to BCI tools and methods.
3.2.2 New Applications of BCI technology in Physiological Computing
For a long time, the main purpose for BCI research was to provide people with severe
physical impairments with a form of communication. This was usually achieved through
active or reactive BCIs (see sections 2.1, 2.3.1 and 2.3.2). With the introduction of passive
BCI applications, it was shown that new possibilities arise if the initial methodology is
extended (Zander et al 2010a,b) (see also section 2.4).
Since then, the main application of passive BCI technology has been the automated adaptation of
a technical system in an HMS. This can be achieved through the interpretation of the user
state (Zander and Kothe 2011), covering interpretations of situational events, like errors
committed (Blankertz et al 2002b) or perceived (Ferrez and Millán 2008), as well as user intentions,
like the intention to interact with the system (Protzak et al 2013). Passive BCI technology can
also be used to monitor the user state or to give neurofeedback.
Since its initial definition, BCI research has mainly focused on one input modality,
namely brain activity. Recently, it has opened up to multi-modal approaches, as described in
Pfurtscheller et al (2010). This development allows for an interconnection between the fields
of PC and BCI, such that the range of applications for BCI technology is extended
significantly. PC has already found application in different types of HCI, which can now easily
be combined with (passive) BCI technology. One main aim of PC is to extend the range
of possible applications and promote the development of hybrid systems, and hence, its
methodology allows for combining different sources of information such as galvanic skin
response, cardiac measures or even behavioural measures into combined feature spaces. Such
measures make it possible to assess the user’s emotions quite accurately, which has proven
to be difficult through BCI alone. Under the term affective computing (AC), emotions such
as anger, happiness, sadness and surprise have been classified successfully through a combination
of psychophysiological measures (Canento et al 2011).
Another benefit of combining BCI with physiological input was shown through the
combination of eye tracking with a passive BCI. This provides an elegant approach to overcoming
a common problem in gaze-based interaction. Usually, the affirmative action is executed
through dwell times, blink patterns or an active BCI (Vilimek and Zander 2009), which are not
always easy to control and are not comfortable to use; without such confirmation, anything
the user looks at might be selected. This is called the Midas Touch
problem (Jacob et al 1993). It was shown that one efficient solution is to assess the user’s
intention to interact with an icon through a passive BCI (Protzak et al 2013) and to combine it
with a dwell-time selection. In this study, Protzak et al. showed that in a dwell-time based
interaction, the intention to interact can be detected by a passive BCI: activity in parietal
cortex showed a significant difference during trials where subjects interacted with the system,
compared to trials where they just looked at items on the screen. A pseudo-online
evaluation revealed that it is possible to correctly predict the user’s intention to interact on a
single-trial basis with an average accuracy of 81%. A system combining eye tracking with a passive
BCI is an excellent example of an HMS using PC to establish intuitive HCI.
3.2.3 Influences on the EEG-Signal: Artifacts or Features
In PC, eye movements or muscular activity are considered input modalities for
HCI. In BCI research, by contrast, signals generated by such activity are considered
artifacts. Strictly speaking, a BCI should process input solely from electrical activity within
the central nervous system (CNS, Wolpaw et al 2002). In practice, this requirement can hardly
be met. Even under laboratory conditions, first-degree artifacts, such as eye or neck muscle
movements, and second-degree artifacts, such as changes in the electromagnetic field, will
always be part of the recorded EEG signal (Zander 2011). The amplitude of these signals
exceeds that of any cortical signal by an order of magnitude. Where artifacts are
independent of the context investigated in the experiment, they are uncritical and merely
lead to a lower signal-to-noise ratio. If artifacts are dependent on the conditions investigated,
they will contribute strongly to the predictive model of the BCI. In that case, the resulting
BCI model may depend more on artifacts than on cortical activity. Whether artifacts
can be used as a reliable control signal that can be included in passive BCI is strongly
dependent on the type of interaction investigated. In systems where such artifacts are
significantly related to the aspects of user state under investigation, they can be useful
additions to or even replacements of cortical signals.
Hence, the question whether artifacts should be used for HCI based on passive BCI (as
proposed in section 3.2.2) is hard to answer. Our behaviour and our physiology are
sensitive to changes in environmental context: consider how your facial expressions change
during social interaction compared to when you are in private. Such changes also affect the artifacts
produced by such behaviour. Therefore, an interaction including artifacts will also be dependent
on the context of the interaction. We assume that basal brain activity representing the cognitive or
affective processes of interest is more robust to contextual modulation. Nevertheless, this
still needs to be proven.
3.3 Passive BCI for Implicit Interaction Beyond Secondary
Input
The differentiation between explicit and implicit interaction is important for HCI because
both forms contain information that is needed for comprehensive interaction. In most
applications, passive BCIs are intended as secondary input supporting the primary
interaction (Schmidt et al 2012; Zander et al 2010b; Blankertz et al 2002a). However,
passive BCI technology could ultimately be used as the only input. In such a scenario the
user acts as a critic of an external autonomous system and in doing so, effectively controls
it. This would be a completely new type of HCI that renders any explicit input from the
user unnecessary since the perception and interpretation of the environment would serve as
input.
This form of interaction would be highly intuitive to use since it does not need any
instructions (Fairclough 2008). The HCI converges to the goal of the HMS by tracking the
process of the user learning about and understanding the current system state. The user
does not need to intend to interact with the system, and does not need to translate any
abstract mental concepts into small-stepped command sequences, which reduces the workload
to a minimum. This also satisfies the demand made in the introduction, for an HMS that
runs autonomously and shifts to the user only the effort that requires intelligent thinking.
In the following section, an example is given of a system which is guided only by
Implicit Interaction based on passive BCI.
4 Passive BCI for Implicit Interaction: An Example Study
The following study by two of the authors may serve to illustrate the concept of implicit
interaction.
Krol, Gramann and Zander (in press) investigated the use of a passive BCI to control
the movement of a cursor in two dimensions. The cursor was not controlled directly, but
moved autonomously: Implicit interaction with the cursor was realised by adapting its
directional movement biases, based on the presence or absence of error negativities evoked
by the cursor’s movements. An error negativity is a “negative deflection in the ongoing
EEG seen when human participants commit errors” (Holroyd and Coles 2002), and can also
be seen after the passive observation of errors being committed (Van Schie et al 2004).
When, for example, an upwards movement of the cursor was followed by an error
negativity, as detected by the passive BCI, the probability of repeating that movement was reduced.
In short, rather than actively controlling the direction in which the cursor should go, the
subjects were passively observing and interpreting the cursor’s initially random movements,
and the cursor responded to those interpretations by making its movement increasingly less
random.
4.1 Experimental Design
The cursor used in the study was a red circle that could move from one node to any
of the adjacent nodes in grids of varying size. These grids consisted of grey open circles,
slightly larger than the cursor, connected by grey lines, on a black background. Depending
on the cursor’s position in the grid, then, there were up to eight possible movements.
An animation allowed the subjects to anticipate the moment of each movement.
As illustrated in figure 1 (top), over the course of one second, a white ‘ghost cursor’
would grow inside the actual cursor. As soon as this ghost reached the same size as the
actual cursor, it would instantaneously jump to the next node, also highlighting the grid line
connecting the two nodes in white. The movement remained visible for one second, with
the red original cursor still on the initial node, the white ghost cursor on the new node, and
a white line connecting them. Following that, the white elements disappeared and the (red)
cursor would instantaneously move to and remain at its new position, on the new node, for
another second, before the animation would start over for the next movement.
[Figure 1 image. Top: the four one-second phases of a trial (new grid; white ghost grows; ghost at new position; cursor at rest). Bottom: the block structure (4 x 50 training trials, 5 x 120 training trials, 1 x 120 and 1 x 120 online trials).]
Figure 1: Above: Stimuli over the course of one (‘incorrect’) trial in the example study, on a one
by three grid. Below: Illustration of the study’s blocks and grids. The number of blocks and
trials for each grid is indicated. Data from the blocks labelled ‘training’ were used to train the
classifier. The last two blocks, labelled ‘online’, were BCI-supported.
Three different grids were used in the study: One by three nodes, four by four, and six
by six. In each grid, a single red node in one of the corners indicated the target. Whenever
the cursor reached this target, a new grid was started of the same size. For the one by three
grids, each new grid was rotated a random integer multiple of 45 degrees as compared to
the previous grid. The four by four and six by six grids did not rotate, but a new target
was selected for each new grid, such that no two subsequent grids had a target in the same
corner. In all of these grids, the cursor’s starting position was one node away from the
opposite corner, in a straight line to the target. Each newly started grid was displayed for
one second before the first movement was initiated.
Additionally, a new grid was started when a certain number of movements had been
made without reaching the target. For the one by three grids, the maximum number of
moves was one; for the four by four grids, the maximum was 55, which is one and a half
times the mean number of random movements required to reach the target. No maximum
other than the block’s length (120) was set for the six by six grid.
With the one additional second added for the subjects to orient themselves after a new
grid was started, a trial in the one by three grids took four seconds. In the other two grid
sizes, a trial took three seconds.
4.2 Subjects, Setup, and Procedure
A total of sixteen subjects participated in this study, with an average age of 25.9 years
±3.4. All had normal or corrected-to-normal vision.
After preparation and setting up the EEG cap, which took up to one hour, subjects
were seated comfortably in a padded chair in a dimly lit room. In writing, they were
instructed to judge every individual movement of the cursor as either ‘acceptable’ or ‘not
acceptable’, with respect to reaching the goal, and to indicate their verdict by pressing either
‘v’ or ‘b’, respectively, on a standard German-layout computer keyboard using the same
finger of one hand. This task was intended to keep the subjects focused on the cursor’s
movements. The subjects performed this task during all blocks.
EEG was recorded continuously using 64 Ag/AgCl electrodes mounted according to the
extended 10-20 system on an elastic cap (Easy Cap, Falk Minow Services). The signal
was sampled at 500 Hz and amplified using a 250 Hz high cutoff filter via BrainAmps
(BrainProducts, Munich). All electrodes were referenced to FCz.
The experiment itself took about one hour. As illustrated in figure 1 (bottom), subjects
were first shown four blocks of 50 trials on a one by three grid, and following that, five
blocks of 120 trials on a four by four grid. These latter five blocks were used to train the
passive BCI classifier. This classifier was then used in two online blocks: One block of 120
trials on a four by four grid, and one block of 120 trials on a six by six grid.
4.3 Feature Extraction and Classification
Two groups of cursor movements were formed to train the classifier. One group consisted
of those movements that went directly towards the target (‘correct’ movements), and the
other of those whose direction deviated 135 degrees or more from a straight line to the
target (‘incorrect’ movements). Of all 600 trials in the four by four grids, this selection left
between 162 and 217 (mean: 185.3) trials per subject for the classifier to be trained on.
Note that the subjects’ judgements, indicated using button-presses, were ignored: Only
a movement’s angle with respect to the target determined its group.
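A minimal sketch of this purely geometric labelling, assuming each movement and the direction to the target are given as 2D vectors (the function name and the tolerance are ours):

```python
import numpy as np

def label_movement(move_vec, target_vec):
    """Label a cursor movement by its angular deviation from the straight
    line to the target: 'correct' if it goes directly towards the target,
    'incorrect' if it deviates by 135 degrees or more, and None otherwise
    (such in-between movements were not used for training)."""
    cos = np.dot(move_vec, target_vec) / (
        np.linalg.norm(move_vec) * np.linalg.norm(target_vec))
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    if np.isclose(angle, 0.0):
        return "correct"
    if angle >= 135.0:
        return "incorrect"
    return None
```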
The open-source toolbox BCILAB (Delorme et al 2010) was used to define and implement
the BCI. Features were extracted by the Windowed Means approach (Blankertz et al 2011),
which calculated the average amplitudes of eight sequential time windows of 50 ms each,
starting at 300 ms after cursor movement. For this feature extraction, the data was first
resampled at 100 Hz, and bandpass filtered from 0.1 to 15 Hz. Classification of these
features was done through Linear Discriminant Analysis (Duda et al 2001), regularised by
Shrinkage (Blankertz et al 2011).
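A minimal re-implementation of this feature extraction outside BCILAB could look as follows. The filter order and the exact resampling routine are our assumptions; the pipeline itself (band-pass 0.1-15 Hz, downsampling to 100 Hz, eight 50 ms mean-amplitude windows starting 300 ms post-movement) follows the description above.

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample

def windowed_means(epoch, fs=500, fs_new=100, t0=0.300, n_windows=8, win=0.050):
    """Windowed Means features (Blankertz et al 2011) for one EEG epoch.

    epoch -- array (n_channels, n_samples), with t = 0 (movement onset)
             assumed to be at the first sample.
    Returns a flat vector of n_channels * n_windows mean amplitudes.
    """
    # Band-pass filter 0.1-15 Hz (4th-order Butterworth: an assumption)
    b, a = butter(4, [0.1, 15.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, epoch, axis=1)
    # Downsample to 100 Hz
    x = resample(filtered, int(epoch.shape[1] * fs_new / fs), axis=1)
    # Eight consecutive 50 ms windows starting 300 ms post-movement
    feats = []
    for k in range(n_windows):
        start = int((t0 + k * win) * fs_new)
        feats.append(x[:, start:start + int(win * fs_new)].mean(axis=1))
    return np.concatenate(feats)
```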
A [5,5]-times nested cross-validation (Duda et al 2001) with margins of 5 was used to
select the Shrinkage regularisation parameter, and to generate estimates of the model’s
online reliability, as reported below. A model was trained before the first online block, for
each subject individually.
4.4 Implicit Control
In the two online blocks, the trained classifier was applied to all cursor movements without
any knowledge of the angular difference to the target. Each new grid started with all direc-
tions having equal probabilities. If a movement was classified as ‘correct’, the probability
of that movement’s direction was increased, as well as, to a lesser extent, the probabilities
of its two neighbouring directions. When a movement was classified as ‘incorrect’, these
probabilities were reduced. The cursor did not undo incorrect movements or directly repeat
correct movements, but merely altered the relevant probabilities for subsequent trials. Over
time, then, this system was hypothesised to gradually “steer” the cursor more and more into
the target’s direction. This steering would be done on the basis of event-related potentials
evoked by the subjects’ judgement of the cursor’s own, autonomous movements—not by
any actual intent to steer the cursor.
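The chapter does not report the exact probability increments, so the following sketch only illustrates the scheme: the classified movement's direction and, to a lesser extent, its two neighbours gain or lose probability, after which the distribution is renormalised. The step size and the clipping bound are hypothetical.

```python
import numpy as np

DIRECTIONS = 8  # up to eight possible moves from a grid node

def update_probabilities(probs, direction, classified_correct, step=0.1):
    """Adapt the cursor's directional movement biases after one classified move.

    probs     -- array (8,): current movement probabilities, summing to 1
    direction -- index of the direction just moved
    If the passive BCI labels the movement 'correct', that direction and (to a
    lesser extent) its two neighbours gain probability; if 'incorrect', they
    lose it."""
    delta = step if classified_correct else -step
    probs = probs.copy()
    probs[direction] += delta
    probs[(direction - 1) % DIRECTIONS] += delta / 2  # neighbouring directions
    probs[(direction + 1) % DIRECTIONS] += delta / 2
    probs = np.clip(probs, 1e-3, None)                # keep every move possible
    return probs / probs.sum()                        # renormalise
```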
4.5 Results
The average classification rate over all sixteen subjects, as estimated using the method
described above, was 71%, with a standard deviation of 7.6 percentage points. This indicates
a substantial improvement over chance level, which is 50% for a binary classifier as used
here.
It is important to note that classifiers trained on cursor movements, and cursor
movements alone, outperformed classifiers that took button presses into account. For
example, a classifier trained to distinguish between trials where the subject pressed one or
the other button, respectively, had a lower accuracy than a classifier trained to distinguish
only between different cursor movements (e.g. deviating <45° versus ≥45° from a straight
line to the target). This suggests that classification was, at least in part, performed on
passive, implicit signals that differed from the subjects’ conscious acts.
The performance of the cursor was operationalised as the average number of movements
required to reach one target. For the BCI-supported performance, averages were calculated
per subject over all 120 online trials of the respective grid size. Trials at the end of a block
that did not contribute to reaching a target or hitting the maximum number of trials for
that attempt were discarded. The BCI-supported data was compared to an equal sample
size of non-supported (random) performance measures over the same number of trials. A
Wilcoxon rank-sum test revealed significant differences between non-supported performance
(median = 27.6) and BCI-supported performance (median = 19.9) on both the four by four
grid, W = 171.5, z = 3.5, p < 0.001, r = 0.61, and on the six by six grid (median = 76,
unsupported, versus 22.6, supported), W = 123, z = 2.7, p < 0.01, r = 0.53.
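For reference, such a comparison can be computed with SciPy's rank-sum test; the per-subject values below are made-up placeholders, and SciPy returns the z-statistic and p-value rather than the W reported above.

```python
from scipy.stats import ranksums

# Hypothetical per-subject averages: movements needed to reach the target
moves_random = [29.1, 31.4, 26.0, 24.7, 33.2, 28.5, 25.3, 30.8]  # unsupported
moves_bci    = [21.2, 18.9, 22.4, 19.5, 24.1, 17.8, 20.3, 23.0]  # BCI-supported

z, p = ranksums(moves_random, moves_bci)
print(f"rank-sum z = {z:.2f}, p = {p:.4f}")
```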
In summary, these results indicate that the classifier was reliably capable of differentiating
between movements going towards and away from the target, without itself knowing
where the target was. When this information was used to adapt the cursor’s behaviour, a
marked improvement was seen in terms of cursor performance. This enabled the subjects
to effectively guide the cursor towards the target, even though they were attending to a
fundamentally different task.
5 Discussion and Conclusion
The main result of the theory and the experiment presented in this chapter is that implicit
human-computer interaction, incorporating information about the user state, is indeed a
realistic concept. The development of, and experience with, passive BCI provides tools which
fit this concept perfectly. Implicit control theoretically allows for a highly efficient way of
interacting with technical systems, as it aims at distributing tasks between the machine and
the user according to their specific capabilities. In the best case, this distribution is optimal and
both machine and human learn from one another during the interaction—their strategies
converge efficiently.
The implicit approach might not always be the most effective way to solve a task. In the
study presented here, it is clear that a user focusing fully on the task and having standard
explicit control would reach the target much faster. This is mostly because
the scenario is very simple: the optimal strategy to reach the goal is obvious, and intuitive
explicit control is easy to establish with eight direction keys. However, a more elaborate
passive BCI approach could be applied in scenarios closer to real-world applications. One
could envision an example where a technical system distributes tasks among a team
of experts, e.g. air traffic controllers in a tower, according to their current levels of workload or
stress. In that scenario, mistakes resulting from over- or underloading team members could
be reduced. The interpretation of the passive BCI output would then be more complex,
but it should still be feasible for a computer system. Hence, implicit control might not be
the most suitable way of interacting in every given HMS, but it could be for some and it
defines a new and more intuitive interaction. This becomes particularly clear when we take
into account that most of the systems we currently have available are designed for explicit
control. Over several steps such systems could be enhanced by adding implicit control,
for example for automated error correction during automated adaptation as described in
Zander (2011). Future systems, directly designed for implicit control, might then reveal the
full potential of this new type of HCI.
From the perspective of BCI research, the main advantage of passive BCI comes into
play: its independence of bit rate (Zander 2011). As most BCI approaches, active and
reactive, aim at realizing an interface for direct communication and control, but provide only
very limited reliability, they often do not succeed in reaching their goal. Additionally, users
have to spend a significant amount of effort to stay in control and often find this
frustrating (Lorenz et al 2013). With the statistical approach to implicit control based on
passive BCI presented here, it is shown that even with a low accuracy of around 70% the system
reaches its goal efficiently, and with no additional effort taken by the attentive user.
Nevertheless, from a usability perspective, there are still significant hurdles to be overcome.
Most realizations of BCIs based on machine learning, like the one presented here, need a
time-consuming calibration phase. A significant number of feature prototypes, in
current BCI systems typically at least 40 to 80 per class, has to be generated to calibrate
the BCI model. In currently available BCI frameworks, this has to be repeated before each
session, which is time-consuming and might be annoying for the user, both depending on
how complex it is to elicit the investigated user state. In addition, the setup of an EEG
system is cumbersome and time-consuming, as usually several dozen electrodes need to
be gelled onto the user’s head. Several solutions to these problems have been proposed and are
currently being investigated. Universal classifiers (Zander 2011) or reduced-training BCIs
(Krauledat et al 2008) could reduce the calibration time to an acceptable level. The time
needed for setting up an EEG system can be reduced by using dry electrodes [96] and by
reducing the number of electrodes by identifying which are the most relevant through apply-
ing methods from computational neuroscience. Nevertheless, due to the complex structure
of the brain and to volume conduction, a significant number of electrodes will always be
needed for a reliable BCI system.
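As an illustration of the calibration step discussed above, the following sketch fits a shrinkage-regularized LDA classifier, a common choice in BCI research (Blankertz et al 2011), on 60 labelled example epochs per class, within the 40 to 80 range mentioned above. The synthetic Gaussian features merely stand in for real EEG features such as band power or ERP amplitudes; everything here is an illustrative assumption, not the chapter's actual pipeline.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_per_class, n_features = 60, 32

# Synthetic calibration data: two user states with slightly shifted means.
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, n_features)),
               rng.normal(0.4, 1.0, (n_per_class, n_features))])
y = np.repeat([0, 1], n_per_class)

# Shrinkage-regularized LDA is robust for small calibration sets.
model = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto').fit(X, y)

# During online operation, each new epoch yields a probabilistic estimate:
new_epoch = rng.normal(0.4, 1.0, (1, n_features))
print(model.predict_proba(new_epoch))   # e.g. [[0.18 0.82]]

Collecting those 120 labelled epochs is precisely what makes the calibration phase time-consuming, since each epoch requires the user to actually be in, or be put into, the corresponding state.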
Also, the application of BCIs outside the lab is still largely uninvestigated. In standard experiments, most environmental factors are strictly controlled. As this is hardly possible in real-world environments, different approaches have to be investigated. One solution is to model the given HMS completely by incorporating contextual information about the user, the environment, and the technical system. In this approach, context information would counterbalance the lack of experimental control and supplement the information gained about the user state (see Zander and Jatzev 2012).
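A hedged sketch of this idea, in the spirit of Zander and Jatzev (2012): contextual information defines a prior over user states, which is fused with the noisy passive BCI evidence via Bayes' rule. The states and probabilities below are invented for illustration.

def fuse(prior, likelihood):
    """Posterior over user states:
    P(state | context, EEG) is proportional to P(EEG | state) * P(state | context)."""
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

# Context (e.g. current task demands) makes high workload likely a priori...
prior = {'low_workload': 0.3, 'high_workload': 0.7}
# ...while the passive BCI evidence for this epoch weakly favours 'low'.
likelihood = {'low_workload': 0.6, 'high_workload': 0.4}

print(fuse(prior, likelihood))
# -> {'low_workload': 0.39..., 'high_workload': 0.61...}: the context
# counterbalances the unreliable single-epoch BCI estimate.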
As it is likely that significant advances on the above-mentioned problems will be made in the near future, BCIs will become more practical and more usable. The theory presented here and this first proof of concept of implicit HCI can be seen as a starting point for a new type of research. It opens up a large variety of applications that need to be investigated in upcoming research endeavors. With passive BCIs, a new horizon for HCI is defined, carrying the potential to significantly increase the intuitiveness and efficiency of future computer systems.
References
Acqualagna L, Blankertz B (2011) A gaze independent spelling based on rapid serial visual
presentation. Conference Proceedings IEEE Eng Med Biol Soc pp 4560–4563
Bai O, Lin P, Vorbach S, Floeter MK, Hattori N, Hallett M (2008) A high performance sensorimotor beta rhythm-based brain-computer interface associated with human natural motor behavior. Journal of Neural Engineering 5(1):24–35
Balderas D, Zander TO, Bachl F, Neuper C, Scherer R (2011) Restricted Boltzmann machines as useful tool for detecting oscillatory EEG components. In: Proc. of the 5th international brain-computer interface conference, Graz, Austria, pp 68–71
Beverina F, Palmas G, Silvoni S, Piccione F, Giove S (2003) User adaptive BCIs: SSVEP
and P300 based interfaces. PsychNology Journal pp 331–354
Bin G, Gao X, Yan Z, Hong B, Gao S (2009) An online multi-channel SSVEP-based brain-computer interface using a canonical correlation analysis method. Journal of Neural Engineering 6(4):046002
Birbaumer N (2006) Breaking the silence: Brain-computer interfaces (BCI) for communi-
cation and motor control. Psychophysiology 43(6):517–532
Birbaumer N, Cohen LG (2007) Brain-computer interfaces: communication and restoration
of movement in paralysis. The Journal of Physiology 579(3):621–636
Birbaumer N, Elbert T, Canavan AG, Rockstroh B (1990) Slow potentials of the cerebral
cortex and behavior. Physiological Reviews 70(1):1–41
Birbaumer N, Ghanayim N, Hinterberger T, Iversen I, Kotchoubey B, Kübler A, Perelmouter J, Taub E, Flor H (1999) A spelling device for the paralysed. Nature 398(6725):297–298
Birbaumer N, Hinterberger T, Kübler A, Neumann N (2003) The thought-translation device
(TTD): neurobehavioral mechanisms and clinical outcome. IEEE Transactions on Neural
Systems and Rehabilitation Engineering 11(2):120–123
Blankertz B, Schäfer C, Dornhege G, Curio G (2002a) Single trial detection of EEG error potentials: A tool for increasing BCI transmission rates. In: Dorronsoro JR (ed) Artificial Neural Networks—ICANN 2002, Lecture Notes in Computer Science, vol 2415, Springer, Berlin Heidelberg, pp 1137–1143
Blankertz B, Schäfer C, Dornhege G, Curio G (2002b) Single trial detection of EEG error potentials: A tool for increasing BCI transmission rates. In: Artificial Neural Networks—ICANN 2002, Springer, pp 1137–1143
Blankertz B, Krauledat M, Dornhege G, Williamson J, Murray-Smith R, Müller KR (2007) A note on brain actuated spelling with the Berlin brain-computer interface. In: Stephanidis C (ed) Universal access in human-computer interaction, Springer, Berlin and New York, Lecture Notes in Computer Science Vol. 4557, pp 759–768
Blankertz B, Sannelli C, Halder S, Hammer E, Kübler A, Müller KR, Curio G, Dickhaus T (2010) Neurophysiological predictor of SMR-based BCI performance. NeuroImage 51(4):1303–1309
Blankertz B, Lemm S, Treder M, Haufe S, Müller KR (2011) Single-trial analysis and classification of ERP components – A tutorial. NeuroImage 56(2):814–825
Brouwer AM, van Erp J (2010) A tactile P300 brain-computer interface. Frontiers in Neu-
roscience
Canento F, Fred A, Silva H, Gamboa H, Lourenço A (2011) Multimodal biosignal sensor
data handling for emotion recognition. In: Sensors, 2011 IEEE, IEEE, pp 647–650
Coyle S, Ward T, Markham C, McDarby G (2004) On the suitability of near-infrared
(NIR) systems for next-generation brain-computer interfaces. Physiological Measurement
25(4):815–822
Daly JJ, Wolpaw JR (2008) Brain-computer interfaces in neurological rehabilitation. The
Lancet Neurology 7(11):1032–1043
Delorme A, Kothe C, Vankov A, Bigdely-Shamlo N, Oostenveld R, Zander TO, Makeig
S (2010) MATLAB-based tools for BCI research. In: Tan DS, Nijholt A (eds) Brain-
Computer Interfaces, Human-Computer Interaction Series, Springer London, pp 241–259
Duda RO, Hart PE, et al (1973) Pattern classification and scene analysis. Wiley, New York
Duda RO, Hart PE, Stork DG (2001) Pattern Classification, 2nd edn. Wiley, New York
Fairclough SH (2008) BCI and physiological computing for computer games: Differences,
similarities & intuitive control. Proceedings of CHI08
Fairclough SH (2009) Fundamentals of physiological computing. Interacting with Computers 21(1):133–145
Fang F, Liu Y, Shen Z (2003) Lie detection with contingent negative variation. International
Journal of Psychophysiology 50(3):247–255
Farwell LA, Donchin E (1988) Talking off the top of your head: toward a mental prosthesis
utilizing event-related brain potentials. Electroencephalogr Clin Neurophysiol 70(6):510–
523
Ferrez PW, Millán JdR (2008) Simultaneous real-time detection of motor imagery and
error-related potentials for improved BCI accuracy. Proc of the 4th International Brain-
Computer Interface Workshop & Training Course pp 197–202
Furdea A, Halder S, Krusienski D, Bross D, Nijboer F, Birbaumer N, Kübler A (2009) An
auditory oddball (P300) spelling system for brain-computer interfaces. Psychophysiology
46(3):617–625
Gangadhar G, Chavarriaga R, Millán JdR (2009) Fast recognition of anticipation-related
potentials. IEEE Transactions on Biomedical Engineering 56(4):1257–1260
Gao X, Xu D, Cheng M, Gao M (2003) A BCI-based environmental controller for the
motion-disabled. IEEE Transactions on Neural Systems and Rehabilitation Engineering
11(2):137–140
Grimes D, Tan DS, Hudson SE, Pradeep S, Rao RP (2008) Feasibility and pragmatics
of classifying working memory load with an electroencephalograph. In: Czerwinski M
(ed) Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in
computing systems, ACM, New York, pp 835–844
Guger C, Edlinger G, Harkam W, Niedermayer I, Pfurtscheller G (2003) How many people are able to operate an EEG-based brain-computer interface (BCI)? IEEE Transactions on Neural Systems and Rehabilitation Engineering 11(2):145–147
Guger C, Daban S, Sellers E, Holzner C, Krausz G, Carabalona R, Gramatica F, Edlinger
G (2009) How many people are able to control a P300-based brain-computer interface
(BCI)? Neuroscience Letters 462(1):94–98
Hajcak G, McDonald N, Simons RF (2003) To err is autonomic: Error-related brain poten-
tials, ANS activity, and post-error compensatory behavior. Psychophysiology 40(6):895–
903
Hankins TC, Wilson GF (1998) A comparison of heart rate, eye activity, EEG and subjective measures of pilot mental workload during flight. Aviation, Space, and Environmental Medicine 69(4):360–367
Hill NJ, Schölkopf B (2012) An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli. Journal of Neural Engineering 9(2):026011
Hinterberger T, Schmidt S, Neumann N, Mellinger J, Blankertz B, Curio G, Birbaumer N
(2004) Brain-computer communication and slow cortical potentials. IEEE Transactions
on Biomedical Engineering 51(6):1011–1018
Holroyd CB, Coles MGH (2002) The neural basis of human error processing: reinforcement
learning, dopamine, and the error-related negativity. Psychological Review 109(4):679–
708
Jacob RJ, Leggett JJ, Myers BA, Pausch R (1993) Interaction styles and input/output
devices. Behaviour & Information Technology 12(2):69–79
Kohlmorgen J, Dornhege G, Braun M, Blankertz B, Müller KR, Curio G, Hagemann K, Bruns A, Schrauf M, Kincses W (2007) Improving human performance in a real operating environment through real-time mental workload detection. In: Dornhege G, Millán JdR, Hinterberger T, McFarland DJ, Müller KR (eds) Toward brain-computer interfacing, Neural information processing series, MIT Press, Cambridge, pp 409–422
Kothe CA, Makeig S (2013) BCILAB: a platform for brain-computer interface development.
Journal of Neural Engineering 10(5):056014
Krauledat M, Tangermann M, Blankertz B, Müller KR (2008) Towards zero training for
brain-computer interfacing. PLoS One 3(8):e2967
Kübler A, Birbaumer N (2008) Brain-computer interfaces and communication in paralysis:
Extinction of goal directed thinking in completely paralysed patients? Clinical Neuro-
physiology 119(11):2658–2666
Kübler A, Nijboer F, Mellinger J, Vaughan TM, Pawelzik H, Schalk G, McFarland DJ,
Birbaumer N, Wolpaw JR (2005) Patients with ALS can use sensorimotor rhythms to
operate a brain-computer interface. Neurology 64(10):1775–1777
Lal TN, Schröder M, Hill NJ, Preissl H, Hinterberger T, Mellinger J, Bogdan M, Rosenstiel W, Hofmann T, Birbaumer N, Schölkopf B (2005) A brain computer interface with online feedback based on magnetoencephalography. In: ICML '05 Proceedings of the 22nd international conference on machine learning, pp 465–472
Lazarus RS, Speisman JC, Mordkoff AM (1963) The relationship between autonomic indi-
cators of psychological stress: Heart rate and skin conductance. Psychosomatic Medicine
25(1):19–30
LeBlanc J, Blais B, Barabe B, Cote J (1976) Effects of temperature and wind on facial
temperature, heart rate, and sensation. Journal of Applied Physiology 40(2):127–131
Lee JH, Ryu J, Jolesz FA, Cho ZH, Yoo SS (2009) Brain-machine interface via real-
time fMRI: Preliminary study on thought-controlled robotic arm. Neuroscience Letters
450(1):1–6
Lorenz R, Pascual J, Blankertz B, Vidaurre C (2013) Towards a holistic assessment of the user experience with hybrid BCIs. Journal of Neural Engineering, submitted
Mak JN, Wolpaw J (2009) Clinical applications of brain-computer interfaces: Current state
and future prospects. IEEE Reviews in Biomedical Engineering 2:187–199
McFarland DJ, Wolpaw JR (2008) Brain-computer interface operation of robotic and pros-
thetic devices. Computer 41(10):52–56
McFarland DJ, Sarnacki WA, Wolpaw JR (2010) Electroencephalographic (EEG) control
of three-dimensional movement. Journal of Neural Engineering 7(3):036007
Mellinger J, Schalk G, Braun C, Preissl H, Rosenstiel W, Birbaumer N, Kübler A (2007)
An MEG-based brain-computer interface (BCI). NeuroImage 36(3):581–593
Müller KR, Krauledat M, Dornhege G, Curio G, Blankertz B (2004) Machine learning
techniques for brain-computer interfaces. Biomedical Engineering 49(1):11–22
Müller KR, Tangermann M, Dornhege G, Krauledat M, Curio G, Blankertz B (2008) Ma-
chine learning for real-time single-trial EEG-analysis: From brain-computer interfacing
to mental state monitoring. Journal of Neuroscience Methods 167(1):82–90
Müller-Putz G, Scherer R, Neuper C, Pfurtscheller G (2006a) Steady-state somatosensory
evoked potentials: Suitable brain signals for brain-computer interfaces? IEEE Transac-
tions on Neural Systems and Rehabilitation Engineering 14(1):30–37
Müller-Putz GR, Scherer R, Pfurtscheller G, Rupp R (2005) EEG-based neuroprosthesis control: a step towards clinical practice. Neuroscience Letters 382(1-2):169–174
Müller-Putz GR, Scherer R, Pfurtscheller G, Rupp R (2006b) Brain-computer interfaces
for control of neuroprostheses: from synchronous to asynchronous mode of operation.
Biomedizinische Technik 51(2):57–63
Nicolas-Alonso LF, Gomez-Gil J (2012) Brain computer interfaces, a review. Sensors
12(2):1211–1279
Nijboer F, Sellers EW, Mellinger J, Jordan MA, Matuz T, Furdea A, Halder S, Mochty
U, Krusienski DJ, Vaughan TM, Wolpaw JR, Birbaumer N, Kübler A (2008) A P300-
based brain-computer interface for people with amyotrophic lateral sclerosis. Clinical
Neurophysiology 119(8):1909–1916
Papadelis C, Chen Z, Kourtidou-Papadeli C, Bamidis PD, Chouvarda I, Bekiaris E, Maglaveras N (2007) Monitoring sleepiness with on-board electrophysiological recordings for preventing sleep-deprived traffic accidents. Clinical Neurophysiology 118(9):1906–1922
Patel SH, Azzam PN (2005) Characterization of N200 and P300: Selected studies of the
event-related potential. International Journal of Medical Sciences p 147
Pfurtscheller G (1977) Graphical display and statistical evaluation of event-related desynchronization (ERD). Electroencephalography and Clinical Neurophysiology 43(5):757–760
Pfurtscheller G (1992) Event-related synchronization (ERS): an electrophysiological correlate of cortical areas at rest. Electroencephalography and Clinical Neurophysiology 83(1):62–69
Pfurtscheller G, Lopes da Silva FH (1999) Event-related EEG/MEG synchronization and desynchronization: basic principles. Clinical Neurophysiology 110(11):1842–1857
Pfurtscheller G, Guger C, Müller G, Krausz G, Neuper C (2000) Brain oscillations control hand orthosis in a tetraplegic. Neuroscience Letters 292(3):211–214
Pfurtscheller G, Allison BZ, Brunner C, Bauernfeind G, Solis-Escalante T, Scherer R, Zander T, Müller-Putz G, Neuper C, Birbaumer N (2010) The hybrid BCI. Frontiers in Neuroscience 4(30):1–11
Picard RW (1999) Affective computing for HCI. In: HCI (1), pp 829–833
Protzak J, Ihme K, Zander TO (2013) A passive brain-computer interface for supporting
gaze-based human-machine interaction. In: Universal Access in Human-Computer Inter-
action. Design Methods, Tools, and Interaction Techniques for eInclusion, Springer, pp
662–671
Renard Y, Lotte F, Gibert G, Congedo M, Maby E, Delannoy V, Bertrand O, Lécuyer A (2010) OpenViBE: an open-source software platform to design, test, and use brain-computer interfaces in real and virtual environments. Presence: Teleoperators and Virtual Environments 19(1):35–53
Reuderink B, Farquhar J, Poel M, Nijholt A (2011) A subject-independent brain-computer
interface based on smoothed, second-order baselining. In: Engineering in Medicine and
Biology Society, EMBC, 2011 Annual International Conference of the IEEE, IEEE, pp
4600–4604
Riccio A, Mattia D, Simione L, Olivetti M, Cincotti F (2012) Eye-gaze independent
EEG-based brain-computer interfaces for communication. Journal of Neural Engineer-
ing 9(4):045001
Rivera K, Cooke NJ, Bauhs JA (1996) The effects of emotional icons on remote communi-
cation. In: Conference Companion on Human Factors in Computing Systems, ACM, pp
99–100
Rötting M, Zander T, Trösterer S, Dzaack J (2009) Implicit interaction in multimodal
human-machine systems. In: Industrial Engineering and Ergonomics, Springer, pp 523–
536
Schalk G, McFarland DJ, Hinterberger T, Birbaumer N, Wolpaw JR (2004) BCI2000: a general-purpose brain-computer interface (BCI) system. IEEE Transactions on Biomedical Engineering 51(6):1034–1043
Schmidt A (2000) Implicit human computer interaction through context. Personal Tech-
nologies 4(2-3):191–199
Schmidt NM, Blankertz B, Treder MS (2012) Online detection of error-related potentials
boosts the performance of mental typewriters. BMC Neuroscience 13(1):19
Schölkopf B, Smola AJ (2002) Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press, Cambridge, MA
Schreuder M, Rost T, Tangermann M (2011) Listen, you are writing! Speeding up online
spelling with a dynamic auditory BCI. Frontiers in Neuroscience 5
Sellers EW, Donchin E (2006) A P300-based brain-computer interface: Initial tests by ALS
patients. Clinical Neurophysiology 117(3):538–548
Shi Y, Ruiz N, Taib R, Choi E, Chen F (2007) Galvanic skin response (GSR) as an index of
cognitive load. In: CHI’07 extended abstracts on Human factors in computing systems,
ACM, pp 2651–2656
Squire P, Parasuraman R (2010) Effects of automation and task load on task switching
during human supervision of multiple semi-autonomous robots in a dynamic environment.
Ergonomics 53(8):951–961
Tavella M, Leeb R, Rupp R, Millán JdR (2010) Towards natural non-invasive hand neuro-
prostheses for daily living. Conference Proceedings IEEE Eng Med Biol Soc pp 126–129
Treder MS, Blankertz B (2010) (C)overt attention and visual speller design in an ERP-based
brain-computer interface. Behavioral and Brain Functions 6(1):28
Treder MS, Schmidt NM, Blankertz B (2011) Gaze-independent brain-computer inter-
faces based on covert attention and feature attention. Journal of Neural Engineering
8(6):066003
Van Schie HT, Mars RB, Coles MG, Bekkering H (2004) Modulation of activity in medial
frontal and motor cortices during error observation. Nature Neuroscience 7(5):549–554
Vapnik VN, Chervonenkis AY (1971) On the uniform convergence of relative frequencies of
events to their probabilities. Theory of Probability & Its Applications 16(2):264–280
Vidal JJ (1973) Toward direct brain-computer communication. Annual Review of Biophysics
and Bioengineering 2(1):157–180
Vidal JJ (1977) Real-time detection of brain events in EEG. Proceedings of the IEEE
65(5):633–641
Vilimek R, Zander TO (2009) BC(eye): Combining eye-gaze input with brain-computer
interaction. In: Universal Access in Human-Computer Interaction. Intelligent and Ubiq-
uitous Interaction Environments, Springer, pp 593–602
Weiskopf N, Veit R, Erb M, Mathiak K, Grodd W, Goebel R, Birbaumer N (2003) Physiolog-
ical self-regulation of regional brain activity using real-time functional magnetic resonance
imaging (fMRI): methodology and exemplary data. NeuroImage 19(3):577–586
Whitworth B (2005) Polite computing. Behaviour & Information Technology 24(5):353–363
Wolpaw JR (2004) Control of a two-dimensional movement signal by a noninvasive
brain-computer interface in humans. Proceedings of the National Academy of Sciences
101(51):17849–17854
Wolpaw JR, McFarland D, Vaughan T (2000) Brain-computer interface research at the
Wadsworth Center. IEEE Transactions on Rehabilitation Engineering 8(2):222–226
Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM (2002) Brain-
computer interfaces for communication and control. Clinical Neurophysiology 113(6):767–
791
Zander TO (2011) Utilizing brain-computer interfaces for human-machine systems. PhD
thesis, Universitätsbibliothek TU Berlin, Germany
Zander TO, Jatzev S (2012) Context-aware brain-computer interfaces: exploring the infor-
mation space of user, technical system and environment. Journal of Neural Engineering
9(1):016003
Zander TO, Kothe C (2011) Towards passive brain-computer interfaces: applying brain-
computer interface technology to human-machine systems in general. Journal of Neural
Engineering 8(2):025005
Zander TO, Gaertner M, Kothe C, Vilimek R (2010a) Combining eye gaze input with a
brain-computer interface for touchless human-computer interaction. International Journal
of Human-Computer Interaction 27(1):38–51
Zander TO, Kothe C, Jatzev S, Gaertner M (2010b) Enhancing human-computer interaction
with input from active and passive brain-computer interfaces. In: Tan DS, Nijholt A
(eds) Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction,
Human-Computer Interaction Series, Springer, London, pp 181–199
Zander TO, Ihme K, Gärtner M, Rötting M (2011) A public data hub for benchmarking common brain-computer interface algorithms. Journal of Neural Engineering 8(2):025021