Neuroadaptive technologies: applying neuroergonomics to the
design of advanced interfaces
Lawrence J. Hettinger†*, Pedro Branco‡, L. Miguel Encarnacao‡ and
Paolo Bonato§
† Northrop Grumman Information Technology, Harvard, MA 01451, USA
‡ Fraunhofer Center for Research in Computer Graphics, Providence, RI, USA
§ Boston University, Neuromuscular Research Center, Boston, MA, USA
* Author for correspondence. e-mail: lhettinger@northropgrummon.com
Keywords: Adaptive interfaces; brain-computer interfaces; neuroergonomics.
This article describes an emerging approach to the design of human–machine
systems referred to as ‘neuroadaptive interface technology’. A neuroadaptive
interface is an ensemble of computer-based displays and controls whose func-
tional characteristics change in response to meaningful variations in the user’s
cognitive and/or emotional states. Variations in these states are indexed by cor-
responding central nervous system activity, which control functionally adaptive
modifications to the interface. The purpose of these modifications is to promote
safer and more effective human–machine system performance. While fully func-
tional adaptive interfaces of this type do not currently exist, there are promising
steps being taken toward their development, and great potential value in doing
so—value that corresponds directly to and benefits from a neuroergonomic
approach to systems development. Specifically, it is argued that the development
of these systems will greatly enhance overall human–machine system performance
by providing more symmetrical communications between users and computer-
based systems than currently exist. Furthermore, their development will promote
a greater understanding of the relationship between nervous system activity and
human behaviour (specifically work-related behaviour), and as such may serve as
an exemplary paradigm for neuroergonomics. A number of current research and
development areas related to neuroadaptive interface design are discussed, and
challenges associated with the development of this technology are described.
1. Introduction
Interactions between humans and computer-based systems are all too often fraught
with difficulties. Errors occur, frustration mounts, workload soars unnecessarily (or
plummets to unacceptably low levels) and other problems arise that interfere with
users’ goals and intentions. Human factors and ergonomics exist to illuminate the
issues underlying these types of problems and, when possible, to identify solutions
through more effective, user-centred design. In many cases, these disciplines have
succeeded admirably in achieving their objectives. However, as technical systems
become more powerful, more functionally complex and more reliant on automated
or semi-automated processes, users are likely to continue to experience difficulties in
achieving their task-related objectives in an efficient and effective manner.
One potential means of coping with the human performance demands of
these technical trends in human–machine system design is to rethink the nature of
communications between humans and computer-based systems. Current systems of
this type typically feature highly asymmetrical communications channels in which the
computer is able to convey far more information about its moment-to-moment
status and requirements to the user than the user is able to convey to the computer.
In other words, the computer has no means available to it to independently and
directly detect or recognize situations in which the user’s status, condition and/or
requirements have changed in some way that significantly impacts the functioning
and overall goals of the human–machine interaction. Therefore, the computer
cannot alter its behaviour in real time to suit the needs of the user unless and
until the user enters some sort of discrete (and often, in a communications sense,
very limited) keyboard, mouse or voice command. As Picard (1997: 248) has
expressed it: ‘People express their frustration to the computer, but it cannot see it
or do anything about it’. Indeed, one might say that current computer-based systems
have extremely low 'emotional IQs' (Goleman 1995).
The problematic nature of asymmetrical communication between humans and
computers is very similar to common forms of dysfunctional interpersonal commu-
nication. Consider the following situation in which two people are attempting to
communicate with one another. One person, very bright, highly educated, technically
articulate and adept, but a poor interpreter of others' body language and facial
expressions, is attempting to explain several advanced principles of quantum
mechanics to another person. The latter, also very bright and well educated, yet
far less technically articulate and adept in the area under discussion is trying to
follow the explanations, but is having significant trouble keeping up. He does not
understand much of what is being said, and cannot even seem to find the right words
with which to phrase a meaningful question. To make matters worse, his interlocutor
appears to be oblivious to his growing confusion and frustration, and continues on
with his explanations in an essentially open-loop mode, as if everything was perfectly
clear.
Obviously, the objectives of this particular interpersonal interaction are not being
met—the expert is wasting his time and energies, and the person on the receiving end
of his monologue has gained little more than a sense of frustration and helplessness,
perhaps feeling a bit stupid and/or angry to boot. Worse yet, if the latter individual is
expected to actually do something in the immediate future on the basis of the
information that was presented to him, he would be confronted with a new and
possibly more serious set of problems. If the expert had only possessed more skill in
recognizing his partner’s confusion and frustration, he could have modified his pre-
sentation to produce a more satisfactory outcome.
Communications between humans and computers are often very similar.
Computers are very well suited for storing, accessing and processing large amounts
of data, and given the presence of a sufficiently well designed human–computer
interface they can also be very effective tools for conveying information to their
users under most conditions. However, computers are abject failures at being able
to (1) directly recognize signs indicating changes in their users’ cognitive states and
emotions, and (2) modify their behaviour to effectively accommodate those changes.
The result is that a fundamental communications disconnect often exists between the
user and the computer-based system. Therefore, the full benefit of a dynamic synergy
between the human and computer, analogous to the synergy that exists between two
functionally communicating human beings, is not realized. As a result, overall
human–computer system performance may suffer considerably—often in situations
in which it is most important that it succeed, such as when the user of a high-risk
technical system is fatigued, stressed, bored, frightened or confronted with some
unexpected or extraordinarily complex or dangerous situation.
1.1. Adaptive human–machine interfaces
As a potential means of addressing this problem, researchers have recently begun to
investigate the general domain of developing closed loop human–computer systems
characterized by the capability to detect meaningful aspects of human behaviour (or
substrates of human behaviour related to underlying cognitive and/or emotional
states) in real time, and using these measures to dynamically modify the behaviour
of the system to promote safer and more effective performance (e.g. Pope et al. 1995).
Whether referred to as ‘adaptive interface technology’ (e.g. Bennett et al. 2001, Haas
and Hettinger 2001, Mulgund et al. 2002), ‘adaptive automation’ (e.g. Morrison and
Gluckman 1994, Byrne and Parasuraman 1996, Scerbo 1996, Scerbo et al. 2000,
2001) or ‘affective computing’ (Picard 1997), the fundamental characteristics of
these systems are very similar. An adaptive interface (a term we shall use to refer
to all of the above systems inclusively) consists of an ensemble of displays and
controls whose features can be made to change in real time in response to variations
in parameters indexing the state of the user—either some internal state, such as level
of cognitive workload or engagement in a particular task (e.g. Pope et al. 1995), and/
or a relevant external task-related condition, such as the nature, number and priority
of tasks to be performed within a given unit of time (e.g. Mulgund et al. 2002).
As currently conceived, input from sources such as biological, behavioural and
psychophysiological signals obtained from the user, as well as information about the
current status of the situation in which the user is immersed (e.g. an air combat
mission, prolonged monitoring of a complex automated control system, risky neu-
rosurgical procedure, etc.), will be used in an attempt to draw reliable inferences
about the user’s current state as well as her current needs with respect to the human–
computer interface. This information is then used as input to a computer-based
system, and serves as the means by which the system modifies the availability and/
or presentation of information to the user as well as the nature and extent of the
control that the user can exert on the system.
A major intent of adaptive interfaces is to eliminate the communications gap that
currently exists between humans and computer-based information systems—to
create a very tightly coupled feedback loop in which meaningful variations in the
state of the user are detected by the computer and translated into operationally
meaningful modifications in the ways in which information is displayed and/or
made available for manipulation. In general terms, the goal of an adaptive interface
is to maintain the human–machine system within the boundaries of a desired envel-
ope of ‘performance equilibrium’ with respect to some psychological or system
performance variable. For example, Freeman et al. (1999, 2000), Pope et al. (1995)
and Prinzel et al. (2000) have demonstrated that an EEG-based measure of cognitive
engagement (i.e. combined β-power/(combined α-power + combined θ-power)) under
appropriate feedback conditions provides an effective means of driving an adaptive
automation algorithm that supports performance of compensatory tracking tasks at
significantly better levels than under comparable control conditions. This work has
demonstrated that it is indeed possible to utilize neurologically-based signals as input
to a closed-loop, adaptive interface system to effectively aid human performance of a
perceptual-motor task.
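To make the mechanics of this kind of closed loop concrete, the following sketch computes the engagement index from short EEG windows and applies a simple negative-feedback task-allocation rule. It is a minimal illustration rather than the apparatus used in the cited studies: the sampling rate, band limits, set point and synthetic data are all assumptions made for the example.

import numpy as np
from scipy.signal import welch

FS = 128  # assumed sampling rate (Hz)

def band_power(x, fs, low, high):
    # Mean spectral power of x between low and high Hz (Welch periodogram).
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    return psd[(freqs >= low) & (freqs < high)].mean()

def engagement_index(eeg_window, fs=FS):
    # General form of the index used by Pope et al. (1995): beta/(alpha + theta).
    theta = band_power(eeg_window, fs, 4, 8)
    alpha = band_power(eeg_window, fs, 8, 13)
    beta = band_power(eeg_window, fs, 13, 22)
    return beta / (alpha + theta)

def allocate_task(index, set_point=1.0):
    # Illustrative negative-feedback rule: an apparently under-engaged operator
    # is given the task back in manual mode; otherwise it is automated.
    return 'manual' if index < set_point else 'automatic'

rng = np.random.default_rng(0)
window = rng.standard_normal(FS * 10)  # 10 s of synthetic 'EEG'
print(engagement_index(window), allocate_task(engagement_index(window)))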
1.2. Research issues
In order for the concept of dynamically self-regulating adaptive interfaces* to
become a functional reality, there are, in our opinion, at least four significant,
inter-dependent research domains that must be addressed, each of which will
benefit from the application of neuroergonomic research and design principles
(Parasuraman 1998, 2003):
(1) The identification of biological, behavioural and/or psychophysiological signals
that reliably index the cognitive and/or emotional state of the user. A number
of promising steps (discussed above) have been taken toward realizing this
goal (e.g. Pope et al. 1995, Freeman et al. 1999, 2000, Prinzel et al. 2000).
Particularly in light of the extensive work conducted by Scerbo et al. (2000,
2003) at Old Dominion University and others at NASA Langley Research
Center (e.g. Pope et al. 1995), this area of adaptive interface research is
clearly the most advanced.
(2) The identification of reliable measures of more specific aspects of user state.
Future applications of adaptive interface technology will benefit from the
ability not only to detect that the user of a computer-based system is experi-
encing greatly increased workload (for example), but workload related to a
specific task or mode of information processing such as quantitative problem
solving, spatial reasoning, etc. This increased level of precision in terms of
knowledge of user-state will permit more specifically focused real-time mod-
ifications to be made to the user interface. Achieving this level of specificity in
the identification of user state remains a challenging area, one that is perhaps
logically best addressed once reliable indices of more general states are more
fully developed.
(3) Development of the means to dynamically and reliably analyse external situa-
tions. Future applications of adaptive interfaces might support such activities
as drawing a user’s attention to critically important information in the exter-
nal environment, once it has been noted that the user is oblivious to it.†
Similarly, given knowledge of a user’s intent in a particular situation, an
adaptive interface could make information available, in response to changing
external conditions, to support the fulfillment of that intent. However, to
support these system performance objectives, adaptive interfaces will need to
* A ‘dynamic’ adaptive interface is differentiated from a ‘static’ adaptive interface in that
the former class of systems are envisioned as interfaces that are continuously adapted in real
time, generally without conscious input on the part of the user, while the latter refers to
interfaces that are adapted at discrete points in time, based upon conscious input by the
user. A computer interface that allows one to consciously ‘customize’ the location of com-
mands or menu items on a display by configuring or creating a preferred setup is an example
of a static adaptive interface. Scerbo et al. (2001) refer to the latter class of systems as 'adapt-
able’ interfaces.
† Conceivably, an application of this sort could address problems of alarm flooding by
only displaying alarms in situations in which a critical event has occurred (or is about to occur)
in the external environment and the user’s cognitive, physiological and/or emotional state
indicates a high probability that they will miss it. For example, work by Torsvall and
Akerstedt (1988, cited in Kramer and Weber 2000) has demonstrated that changes in the α-
and θ-bands of EEG, coupled with measurements of slow eye movements, can reliably predict
that a visual target would be missed in a vigilance study a minute before its appearance.
integrate information about ongoing activities in the external environment
with information about the goals, intentions and current state of the user.
Little research has been conducted in this area, although Mulgund et al.
(2002) have explored the development of belief network algorithms to sup-
port external information delivery to adaptive interfaces for air combat. As
Flach and Rasmussen (2000) have argued in the case of assessing situation
awareness, a ‘theory of situations’ is also needed in the adaptive interface
domain to promote a more complete and accurate assessment of user state.
(4) The identification of effective methods of modifying the nature of the human–
computer interface. Clearly, once an inference about the current state of the
user in light of the nature of the situation in which he is immersed has been
drawn, the next vital question involves determining what (if anything) should
be done to modify the human–machine interface to better support system
performance objectives. Pope et al. (1995), Freeman et al. (1999, 2000) and
Prinzel et al. (2000) have examined this question with regard to the nature of
adaptive system feedback required to optimize performance of a compensa-
tory tracking task, and to date their research is the most illuminating in this
area. Specifically, they have demonstrated that an adaptive interface can be
implemented in such a way to respond to variations in users’ level of cogni-
tive engagement and modify (in real time) the nature of the system’s control
and display characteristics to positively impact performance. However, and
particularly in light of ongoing developments in multimodal human–machine
interface technology, the variety of potential classes of modifications are
extremely large and much more work remains to be done in this area. In
addition, it will also be necessary to execute these modifications in such a
way that the user’s performance is not adversely affected. In other words,
methods will need to be determined for dynamically altering interface char-
acteristics in a manner that is functionally (and perhaps phenomenally)
transparent to the user—i.e. that does not in itself create confusion or interfere
with task performance. Ultimately, in order to make sense from a design
perspective, any modifications to the interface should either improve human
performance or at least maintain it at a constant level.*
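As an illustration of the design problem raised in item (4), the fragment below maps an inferred user state onto a candidate interface modification and rate-limits how often the interface is allowed to change, so that the adaptation itself does not become a distraction. The state labels, actions and timing constant are hypothetical placeholders introduced for the example, not recommendations drawn from the cited studies.

import time

# Hypothetical mapping from inferred user state to a candidate modification.
ADAPTATIONS = {
    'high_workload': 'declutter_display',       # suppress low-priority symbology
    'low_engagement': 'return_task_to_manual',  # re-engage the operator
    'nominal': None,                            # no change warranted
}

class AdaptationManager:
    def __init__(self, min_interval_s=30.0):
        self.min_interval_s = min_interval_s    # minimum time between changes
        self._last_change = float('-inf')

    def propose(self, user_state, now=None):
        # Return a modification, or None if none is needed or the interface
        # changed too recently to change again unobtrusively.
        now = time.monotonic() if now is None else now
        action = ADAPTATIONS.get(user_state)
        if action is None or (now - self._last_change) < self.min_interval_s:
            return None
        self._last_change = now
        return action

manager = AdaptationManager()
print(manager.propose('high_workload'))   # -> 'declutter_display'
print(manager.propose('low_engagement'))  # -> None (change made too recently)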
1.3. Neuroadaptive interfaces
In this paper, we have chosen to concentrate on issues associated with the design and
use of ‘neuroadaptive’ interfaces. As the name implies, one of the two major sources
of input to a neuroadaptive interface is information derived from activity within the
central nervous system (CNS), primarily the brain. The second major source of
information is situational data extracted from the operational context in which the
human–machine system is immersed. Other potential, non-neurological sources of
input such as gestures, facial expressions, vocal utterances or other overt behaviours
* It may seem somewhat counter-intuitive to consider devoting effort to the design of a
system whose goal is to maintain performance at a constant level. However, in the face of
sufficient deterioration in the condition of the user and/or his situation, maintaining human–
system output at a constant level may indeed be a challenging and important accomplishment
(Jones and Kennedy 1996).
are not included in this class of adaptive interfaces, although they may be useful as
inputs for other varieties of dynamically adaptive interfaces.
Neurologically-based signals offer a number of potentially significant advantages
over other biologically-based signals (e.g. galvanic skin response, heart rate varia-
bility, etc.), as well as gestural- or behaviourally-based signals (e.g. facial expressions,
head motion frequency and/or amplitude, control stick or steering wheel activity,
etc.). We emphasize the term ‘potentially’ because much work remains to be done
before it can be definitively stated that neurologic sources of information specifying
user state are more useful than others.
One of the most important potential advantages of neurologic signals involves
the high degree of specificity they share with their corresponding cognitive and/or
emotional components. As discussed in more detail below, a significant amount of
research is currently being conducted in an attempt to correlate CNS activity with
emotional and cognitive states (e.g. Bressler 2002, Hamann et al. 2002, Ohman
2002) and, as already noted, there have been clear empirical demonstrations of
the utility of EEG signals for use in prototype adaptive interfaces (Pope et al.
1995, Freeman et al. 1999, 2000, Prinzel et al. 2000). This research demonstrates
that (a) it is possible to reliably detect objective indices of brain activities related to
behaviourally important cognitive and emotional states, and (b) in some cases use
those indices to drive the activity of a human–machine system in a functional
manner. However, as noted above, current experimental adaptive systems are
only sensitive to variations in rather general aspects of human cognition and emo-
tion (e.g. overall level of engagement and workload). In many cases, this may be all
that is needed to design and implement a highly useful adaptive human–machine
system. In other cases, greater specificity may be required. As Kramer and Weber
(2000: 802–803) point out in the case of mental workload, general indices ‘could be
used in a wide variety of settings and across a number of different systems to
provide a general indication of the mental workload experienced by the human
operator. In many cases, such information can be extremely valuable to system
designers who are interested in the overall magnitude of processing demands
imposed on the human operator. However, if more specific information concerning
the type of processing demands is needed . . . then more diagnostic measures will be
necessary’.
Certainly the possibility of locating specific, unitary areas of the brain whose
activity directly and fully accounts for such a highly specific set of task characteristics
seems highly remote. However, the discovery and real-time assessment of functional
networks of cortical and sub-cortical regions that correspond to relatively fine-
grained cognitive and emotional states could provide a potential means of regulating
the behaviour of computer-based systems to facilitate the performance of tasks with
highly specific sets of challenges. Of course, the realization of such a system awaits a
great deal of further neuroergonomic research related to work-related brain activity,
as well as the development of very powerful and portable biosignal sensing and
processing equipment.
There are also a number of disadvantages of biologically-based and behaviou-
rally-based signals that would seem to limit their utility as inputs to an adaptive
interface. We (somewhat arbitrarily) define biological signals as measures of physio-
logical activity not obtained from direct assessment of the CNS—rather, these sig-
nals are used to make inferences about cognitive and emotional states by assessing
other biological functions (e.g. galvanic skin response, heartbeat frequency and
variability, etc.) known to correspond with these states. Behavioural signals, on the
other hand, are directly observable behaviours performed by a user that may provide
an indication of his current cognitive or emotional state and may provide clues
regarding information and/or control requirements for an adaptive interface. For
example, high amplitude, rapid head and/or eye movements may signal a high work-
load situation associated with a high-priority search for information. Gestural inputs
are largely involuntary types of behavioural signals and appear to be promising
sources of information concerning gross variations in a user's emotional state (e.g.
Picard 1997, 2000).
The common problem with each of these potential classes of adaptive inputs lies
primarily in its lack of specificity in terms of correlating with precise aspects of
underlying cognitive and emotional states. For instance, it has been known for
some time that certain biological signals such as eye blinks, heart-rate variability
and respiration are sensitive to rather global changes in workload (e.g. Backs et al.
1994, Kramer and Weber 2000, Scerbo et al. 2001). If the design goal of a par-
ticular adaptive system is sufficiently served by a fairly low level of precision in the
assessment of workload (i.e. a general indication of high, medium, low workload)
then these measures as currently developed may prove quite useful. However, if the
system under consideration seeks to provide more detailed information about the
specific nature of the user’s cognitive or emotional states, then more precise infor-
mation will be required. Additionally, the time constant associated with changes in
biological signals such as galvanic skin response, respiration, heart rate, etc. may
render them of little value in situations in which rapid adaptive response is
required.
In our opinion, the class of signals that will best meet the dual criteria of posses-
sing a high level of specificity as well as comparatively short time constants are those
associated with measurable changes in specific aspects of CNS activity. In the
remainder of this paper, we will discuss research findings supporting this view, in
addition to those already discussed describing the utility of EEG metrics for adaptive
interface design.
The development of dynamically adaptive interfaces, including the neuroadaptive
variety, is still very much in its early phases. However, there is a growing body of
neurophysiological literature that is directly relevant to issues involved in the devel-
opment of these systems. Much of this literature relates to the development of
‘brain–computer interfaces’, devices primarily intended to support the control of
computer-based systems using neural signals (e.g. Wolpaw et al. 2000). We will
examine aspects of this literature that bear on the development of adaptive
human–machine systems.
We will also examine the role of neuroergonomics and user-centred design in
neuroadaptive system development. Neuroergonomics is an emerging human factors
and ergonomics domain concerned with the study of the brain and behaviour at
work (Parasuraman 2003) and, therefore, subsumes all of the research and develop-
ment issues relevant to neuroadaptive interface design. We argue that that the neu-
roergonomics framework provides a very useful set of guidelines within which to
pursue the development of these systems, and present examples of its utility.
Reciprocally, the area of neuroadaptive interface design represents a global research
and development area that may serve as an exemplary paradigm for neuroergo-
nomics.
2. Current research relevant to neuroadaptive interface development
Neuroadaptive interface development is still in its infancy. While important research
(described above) is being pursued in this area, a substantial research literature does
not yet exist. However, work in other closely related areas, such as brain–computer
interface development and localization and the description of neural activity associ-
ated with cognitive and emotional states provide invaluable guidance for those
interested in neuroadaptive interface development. In this section, we will review
recent work in these areas.
Scerbo et al. (2000, 2001, 2003), as well as Kramer and Weber (2000), have
surveyed the research literature related to the development of adaptive automation,
a specific form of adaptive interface technology in which the locus of control of a
given system may shift along a continuum between the user and the computer-based
system, depending on the state of the user. In particular, Scerbo et al. (2001) and
Kramer and Weber (2000) provide encyclopedic reviews of psychophysiological
research relevant to adaptive interface development. A major portion of this research
is directly relevant to neuroadaptive interface design and some of it is summarized
below. However, the reader is referred to the above reviews for a far more complete
treatment of the relevant psychophysiological research literature.
2.1. Brain–computer interfaces
An area of inquiry that is very closely related to neuroadaptive interface technology
involves research on brain–computer interfaces. Wolpaw et al. (2000: 165) define a
brain–computer interface (BCI) as ‘a communication system that does not depend
on the brain’s normal output pathways of peripheral nerves and muscles’. Rather, a
BCI relies on the production (by the user), detection (by means of bioelectrodes) and
processing (by a computer-based system) of controllable aspects of brain activity.
This brain activity is then used to control the activity of a computer-based system.
As Levine et al. (2000: 180) state: ‘a direct brain interface accepts voluntary
commands directly from the human brain without requiring physical movement
and can be used to operate a computer or other technologies’. Lusted and Knapp
(1996) describe BCIs as being analogous to contemporary voice controlled systems,
but taken an appreciable step further. As they note, if a computer were able to
reliably recognize human thought patterns, the computer would become ‘an exten-
sion of the mind itself’ (Lusted and Knapp 1996: 82). BCIs, in other words, are a
technology with the potential to help provide a more symmetrical, functional com-
munications channel between a human and a computer. Although in the case of
BCIs the enhanced communications are accomplished by largely conscious strategies
that result in control of the computer by the user, the techniques that are employed
can conceivably be modified to permit unconscious ‘control’ of computer-based
processes by enabling the computer to sense patterns of brain-related activity.
Research in the design and use of BCIs has expanded rapidly within the past
decade. In 1995, there were six active BCI laboratories, while in 2000 there were
more than 20 (Wolpaw et al. 2000). Much of the work in the area of BCIs has
focused on the use of EEG signals and event-related potentials (ERPs) as a means
of exerting control over a computer-based system. In many experimental BCI
devices, users are provided with biofeedback training that over time enables them
to control the spectral composition of the EEG (Wolpaw et al. 1997, Donchin et al.
2000) and, thereby, control the activity of computer-based systems. More recently,
Serruya et al. (2002) demonstrated that it is possible to enable monkeys to control the
activity of a cursor on a computer display by monitoring activity of a relatively small
number of neurons in the motor cortex. μ- and β-waves appear to be of particular
interest to BCI investigators, since people can learn to modify their amplitude. For
neuroadaptive interfaces, θ-waves may be of more interest, as their appearance is
often coincident with subjective feelings of frustration and disappointment (Lusted
and Knapp 1996).
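A sketch of how learned changes in rhythm amplitude might be turned into a control signal is given below. It is not the implementation of any published BCI: the band limits, baseline procedure and gain are assumptions, and real systems rely on per-user calibration and far more careful signal conditioning.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 160  # assumed sampling rate (Hz)

def band_amplitude(x, low, high, fs=FS, order=4):
    # RMS amplitude of x after zero-phase band-pass filtering.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype='band')
    return float(np.sqrt(np.mean(filtfilt(b, a, x) ** 2)))

def cursor_velocity(amplitude, baseline, gain=5.0):
    # Learned increases or decreases in rhythm amplitude, relative to a resting
    # baseline, are mapped linearly onto vertical cursor velocity.
    return gain * (amplitude - baseline) / baseline

rng = np.random.default_rng(1)
baseline = band_amplitude(rng.standard_normal(FS * 4), 8, 12)  # resting epoch
epoch = band_amplitude(rng.standard_normal(FS * 4), 8, 12)     # control epoch
print(f'cursor velocity: {cursor_velocity(epoch, baseline):+.2f}')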
ERPs are brain signals elicited by the occurrence of external phenomena imping-
ing on the sensory systems. By taking advantage of known characteristics of ERP
responses to external events, it is possible to devise interfaces that enable control of a
computer-based system. For instance, Farwell and Donchin (1988) developed a
system that takes advantage of known characteristics of the P300 response to un-
usual or ‘oddball’ events to enable paraplegic individuals to control a virtual com-
puter keyboard. By taking advantage of the fact that the P300 response is larger for
unexpected events than it is for expected events, and by selectively controlling the
frequency of displayed keyboard events (i.e. illumination of individual ‘keys’) it is
possible to control the level of P300 associated with the occurrence of the illumina-
tion of the ‘targeted’ key (Donchin et al. 2000). A significant advantage of this
application is that essentially no learning, in the sense of biofeedback or any sort
of conscious or unconscious manipulation of the ERP, is required.
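The selection logic of a P300 speller of this kind can be sketched very compactly: average the response to each row and column over repeated flashes, and choose the pair whose averaged response is largest in the P300 latency window. The epoch layout, sampling rate and window below are assumptions made for illustration, not parameters from Farwell and Donchin (1988).

import numpy as np

FS = 250                    # assumed sampling rate (Hz)
P300_WINDOW = (0.30, 0.50)  # seconds after flash onset (typical P300 latency)

def p300_score(epochs):
    # Mean amplitude in the P300 window of the flash-locked average.
    # epochs: array of shape (n_flashes, n_samples), time-locked to flash onset.
    start, stop = (int(t * FS) for t in P300_WINDOW)
    return epochs.mean(axis=0)[start:stop].mean()

def select_key(row_epochs, col_epochs):
    # The attended key lies at the intersection of the row and column whose
    # flashes evoke the strongest averaged response.
    row = max(row_epochs, key=lambda r: p300_score(row_epochs[r]))
    col = max(col_epochs, key=lambda c: p300_score(col_epochs[c]))
    return row, col

# Toy usage: a 6 x 6 matrix, 15 flashes per row/column, 1 s epochs of noise.
rng = np.random.default_rng(2)
rows = {r: rng.standard_normal((15, FS)) for r in range(6)}
cols = {c: rng.standard_normal((15, FS)) for c in range(6)}
print(select_key(rows, cols))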
It is important to bear in mind that work in the BCI area directly examines only
one of several key areas involved in neuroadaptive interface development—specifi-
cally that concerned with how neural signals can be used to control the activity of a
computer-based system. Other issues, such as the detection of neural signals that
correspond to meaningful cognitive and emotional states, and the identification of
adaptive strategies for modifying the ‘behaviour’ of the computer-based system are
not addressed. Nevertheless, the research conducted in this area is of obvious
relevance to neuroadaptive interface development, not only because of its growing
empirical database on issues related to control of computer-based processes, but
also because of the new view it affords on the conception of human–computer
communication. As Wolpaw et al. (2000) point out, BCI technology enables two
forms of communication—one a sort of ‘wire-tapping’ in which a computer-based
system can listen in on the activity of the brain and make inferences about the
nature of activity going on within it. It can also serve as a channel of direct
command and control, once the user has learned to control certain elements of
brain-related bioelectric activity. BCI development is, of course, more directly con-
cerned with the latter type of direct communication. Neuroadaptive interface devel-
opment must be concerned with both, as the ‘wire-tapping’ approach affords an
additional channel of communication from the user to the computer, albeit a
largely unconscious one.
2.2. Neural correlates of cognitive and emotional states
A second key area of research related to the development of neuroadaptive interfaces
deals with the general area of identifying specific neural correlates of cognitive and
emotional states. In order for computer-based systems to be able to make use of
neurologic sources of information about user-state, the level of knowledge in this
area will need to increase appreciably.
Research on the functional neuroanatomy of emotion has been receiving
increased research attention in recent years and offers some intriguing possibilities
for the design of neuroadaptive interfaces. For example, investigations of
neuronal activity in localized areas of the amygdala indicate a strong correlation
between its activity and emotional responses such as fear (bilateral but predomi-
nantly left-lateralized amygdala activation) or positive emotions (significant amyg-
dala activation in the left hemisphere with associated activity in the frontal cortex
and ventral striatum) (Hamann et al. 2002). Other researchers have produced similar
findings, although primarily with respect to the relation between activity in the
amygdala and negative emotions (e.g. LeDoux 2002). The amygdala is also impli-
cated in the interpretation of facial emotions, particularly of the negative variety (e.g.
Ohman 2002). In fact, individuals with rare bilateral lesions of the amygdala appear
to exhibit inappropriate interpretation of others' facial emotions (Adolphs et al.
1998).
While the amygdala appears to be directly involved in a wide variety of emotions,
it is possible that greater specificity concerning the locus and nature of the emotional
stimulus can be derived from the pattern of networked activity that occurs between
the amygdala and other brain structures. For example, activity along the thalamic-
auditory cortical-amygdala pathway may indicate the presence of an auditory signal
with high emotional salience (e.g. Romanski and LeDoux 1993). Indeed, it is poss-
ible that the imaging in real time of many such neuroanatomical networks may shed
significant light on very specific aspects of emotional stimuli and their concomitant
effects. From a systems design perspective, such information may be useful in
enabling a computer-based system to draw very specific inferences about the current
emotional state of the user as well as the nature and source of the external informa-
tion driving the emotion. In turn, this may provide useful input regarding the nature
of functional adaptations to make to the computer interface to enable the user to cope
with the situation.
Research on the neuroanatomical correlates of cognition has also been attract-
ing increased research attention in recent years. Indeed, the general domain of
cognitive neuroscience is now one of the most active areas in experimental psy-
chology, representing a growing synergism of psychological and neuroscientific
paradigms. One of the key insights from this work is that ‘although elementary
cognitive operations reside in individual (cerebral) cortical areas, complex cognitive
functions require the joint operation of multiple distributed areas acting in concert’
(Bressler 2002: 58). In other words, in the performance of any relatively complex
cognitive function, a network of neural structures is involved. The challenge for
cognitive neuroscience, as well as neuroadaptive technology development, will be
to map the cognitive processes of interest onto these neural networks and devise
the technical means to assess them in real time in the performance of real world
tasks.
The recent advances that have characterized functional neuroanatomical studies
of cognition and emotion are largely dependent on corresponding advances in
functional brain imaging (positron emission tomography and magnetic resonance
imaging being the two most commonly employed technologies). And, while the re-
sults of these studies are encouraging in that they strongly suggest that a mapping of
human cognition and emotion onto functional neuroanatomical structures is within
the realm of possibility, there is clearly still a great deal of work to be done. The
complexity of the task, given the numerous interactions that occur across various
neuroanatomical sites, is undoubtedly daunting. However, as Bressler (2002) points
out, observations to date indicate that each cortical area has a unique set of addi-
tional areas with which it interacts—suggesting the existence of an underlying order
that may help guide research efforts.
3. Research and development issues in the design and use of neuroadaptive
interfaces
As Picard (2000: 1) points out: 'Not all computers need to ''pay attention'' to emo-
tions or to have the capability to emulate emotion. Some machines are useful as rigid
tools, and it is fine to keep them that way. However, there are situations in which
human–computer interaction could be improved by having the computer adapt to
the user’. Similarly, not all computers need to pay attention to variations in their
users’ cognitive states. However, one can easily imagine situations in which such a
capability would be extremely useful. For instance, it would clearly be beneficial to
detect situations in which operators of high-risk technologies show signs of not
having attended to a key source of information that may specify the emergence of
a critical situation. If these, and other similar user states, could be reliably detected,
then means could be devised to modify the presentation of information to enhance
the probability of its detection.
As described above, the establishment and maintenance of symmetrical channels
of communication generally helps to promote functional interactions between
human beings. A key aspect of this symmetry is the ability to recognize elements
of confusion, anger, frustration, boredom, etc. as they arise in the other person and
change one’s behaviour accordingly. One can reasonably speculate that an ideal
basis for human–computer communication would involve much the same sort of
characteristics.
Given the high-speed parallel processing capabilities of modern computer
systems and the development of sophisticated biosensing technologies, it is not
unreasonable to expect that human–machine systems can be developed in the rela-
tively near future featuring elements that can (1) reliably and unobtrusively monitor
neurophysiological signals or brain imaging data related to a user’s cognitive and
emotional state, (2) acquire task-relevant information from sources external to the
user in the operational environment (e.g. information currently obtained from air-
craft sensors, sensors within chemical plants, etc.), (3) process multiple streams of
such signals in real time to draw reliable inferences about user state, (4) combine user
state data with correlated information about the external situation in which the
human–machine system is immersed, and use this combined data to (5) adjust the
performance of the computer-based system to either maintain the human–machine
system in some pre-defined state of behavioural ‘equilibrium’ or return it there if
necessary. This process is schematically depicted in figure 1.
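The five-stage loop above, and figure 1, can be summarized in skeletal form as follows. Every element of the sketch is a placeholder: the state variables, situation features, thresholds and actions are assumptions chosen to make the flow of data explicit, not a specification of a working system.

from dataclasses import dataclass

@dataclass
class UserState:            # inferred from neurophysiological signals (steps 1 and 3)
    workload: float         # hypothetical normalized index, 0..1
    engagement: float       # hypothetical normalized index, 0..1

@dataclass
class Situation:            # task-relevant data from external sensors (step 2)
    threat_level: float     # hypothetical, 0..1
    time_pressure: float    # hypothetical, 0..1

def infer_user_state(features) -> UserState:
    # Step 3: placeholder inference; a fielded system would use validated models.
    return UserState(workload=features.get('workload', 0.5),
                     engagement=features.get('engagement', 0.5))

def adapt_interface(state: UserState, situation: Situation) -> list:
    # Steps 4 and 5: combine user state with the external situation and choose
    # display/control modifications intended to restore 'equilibrium'.
    actions = []
    if state.workload > 0.8 and situation.time_pressure > 0.5:
        actions.append('suppress low-priority alerts')
    if state.engagement < 0.3 and situation.threat_level > 0.5:
        actions.append('cue attention to the threat display')
    return actions

state = infer_user_state({'workload': 0.9, 'engagement': 0.2})
print(adapt_interface(state, Situation(threat_level=0.7, time_pressure=0.8)))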
3.1. Design challenges
There are, of course, many challenges associated with the development and use
of neuroadaptive interfaces. As briefly described above, two of the most compel-
ling problem areas are: (1) the need to identify valid and reliable neurologic
metrics of user state, and (2) the need to determine effective methods of dyna-
mically adapting the human–machine interface on the basis of such information.
More specific design challenges can be outlined by reference to the chain of
processes, depicted in figure 1, which must occur in order for neuroadaptive
interfaces to function.
The first challenge relates to the need to identify underlying neural substrates of
critical cognitive and emotional states and then devise and implement the appro-
priate measurement devices in as unobtrusive a manner as possible.* The amount of
research implied in that single sentence alone is monumental, but absolutely necess-
ary for this technology to be realized. Fortunately, as described above, multiple lines
of research currently exist (along with engineering efforts geared toward continuous
improvement in brain imaging and neurophysiological signal extraction and pro-
cessing) that are currently addressing many of these areas. The challenge will be to
draw the necessary connections between these various lines of research and devel-
opment to benefit the neuroadaptive interface domain. As described below, the area
of neuroergonomics is ideally positioned to draw such connections.
Determining a user’s optimal neurologic input features has been identified as an
issue in the BCI domain (Wolpaw et al. 2000) and is also critical to the development
of neuroadaptive interfaces. Similarly, determining the extent to which relevant
signals and processes vary from one individual to the next and within a given indi-
vidual across time will be critical. A high degree of unpredictable and/or unmanage-
able variability in signals associated with the underlying neural processes would
present a significant developmental challenge. For practical reasons associated
with system development, there is a critical need to know (a) if the neural signals
indicative of a current cognitive or emotional state exhibit strong intra- and inter-
user reliability, and (b) if the transitions from one state to another are manifested by
pattern changes that are also reliable.
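One simple way to begin addressing point (a) is to ask whether a candidate index yields consistent values for the same user across recording sessions. The fragment below uses test–retest correlation on synthetic data purely as an illustration; serious reliability work would use larger designs and statistics appropriate to the measure.

import numpy as np

def test_retest_reliability(session_a, session_b):
    # Pearson correlation between index values from two sessions, matched by
    # task condition (equal-length arrays).
    return float(np.corrcoef(session_a, session_b)[0, 1])

rng = np.random.default_rng(3)
true_index = rng.uniform(0.5, 2.0, size=20)      # 'true' index per condition
session_1 = true_index + rng.normal(0, 0.1, 20)  # session 1 with measurement noise
session_2 = true_index + rng.normal(0, 0.1, 20)  # session 2 with measurement noise
print(f'test-retest r = {test_retest_reliability(session_1, session_2):.2f}')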
A second key research area involves the empirical assessment of situations, inso-
far as they impact the current and future well-being of the user and the human–
machine system. As challenging as it will be, it is not sufficient to merely assess the
cognitive and emotional state of the user—the state of the external situation in which
the user is immersed must also be assessed. This will require knowledge of and the
Figure 1. Schematic overview of neuroadaptive interface architecture.
* This, of course, begs the question of whether or not we know what those critical cogni-
tive and emotional states are. Clearly, these will vary to some extent across applications, and
their identification will in all likelihood be most satisfactorily addressed by means of methods
associated with cognitive work analysis (Vicente 1999) and other task analytic techniques.
ability to continually assess the goals and intentions of the human–machine system
and how they are impacted by the presence of environmental and situational con-
straints on and affordances for performance. To further complicate matters, it will be
necessary to continually assess user-state with respect to ongoing activities in the
external environment. As noted above in the example of missed information in the
operation of a risky technical system, a neuroadaptive interface will need to have
the ability to continuously assess the state of the user in light of the demands of
the external environment. If a significant mismatch occurs—i.e. if the user is not in a
desirable state to meet these challenges—then modifications to the human–machine
interface will be needed to attempt to alleviate the situation.
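A toy version of the mismatch test described in this paragraph is given below; the normalized quantities and the margin are assumptions introduced only to make the logic concrete.

def mismatch(demand, spare_capacity, margin=0.2):
    # True when estimated task demand exceeds the user's estimated spare
    # capacity by more than the margin (all quantities normalized to 0..1).
    return (demand - spare_capacity) > margin

# e.g. a high-demand phase of a mission while the operator appears fatigued
print(mismatch(demand=0.9, spare_capacity=0.4))  # -> True: adaptation warranted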
The ability of a computer-based system to automatically perform meaningful
functional modifications to displays and controls on the basis of the combined
output of processes indexing the user’s state and the state of the external environ-
ment represents, perhaps, the most challenging aspect of the whole process. In order
to fulfil its function (improving system performance, or at least maintaining it at a
desired level), methods for adaptively modifying information presentation and con-
trol dynamics will need to be determined. Of all the areas described in this article,
this is the area that has by far received the least amount of research attention. The
challenge, of course, will be to dynamically modify the characteristics of the interface
in a manner that not only does not interfere with performance (by distracting or
confusing the operator), but that significantly aids in the accomplishment of system
objectives.
4. Neuroergonomics and the development of neuroadaptive interfaces
As described above, 'neuroergonomics' refers to the study of brain and behaviour at
work (Parasuraman 1998, 2003). As the articles in this special issue illustrate, neu-
roergonomics focuses on the interactions between the study of properties of the
nervous system and the study of work performed by human beings in multiple
settings.
The principal goal of neuroergonomics is to promote more effective design of
human–machine systems through better understanding of the neural mechanisms
and processes that underlie work behaviours. An additional, important aspect of
neuroergonomics—one that happens to be particularly well illustrated in the case of
neuroadaptive interfaces—is to promote the growth of neuroscientific knowledge
through the study of humans at work. In essence, neuroergonomics is a means for
promoting reciprocal advances in the development of human–machine systems and
in our understanding of neural processes that underlie functional behaviour in the
world.
In studying human performance in real-world conditions, as opposed to the
largely contrived conditions of the laboratory, a great deal of useful information
will be generated that will promote the understanding of neural processes and func-
tions in more ecologically valid settings. Furthermore, a broader understanding of
neural functioning will permit the design of better human–machine systems.
Research and development in the area of neuroadaptive interface technologies is
an area in which a neuroergonomic approach is clearly required. First, such tech-
nologies cannot possibly be developed in the absence of very advanced knowledge
about nervous system function, in particular the relation between nervous system
activity and concomitant cognitive and emotional processes and states. Secondly,
the examination of neuroadaptive interface concepts will necessarily require the
investigation of couplings between nervous system activity and human–machine
system activity in real world situations. Besides providing practical information
about how such systems should be designed, there is every reason to expect that
useful knowledge about nervous system activity will also be acquired.
One of the most important functions for neuroergonomics in the design of neu-
roadaptive interfaces and other advanced technologies is to serve as a focus for the
sort of multidisciplinary research that is required for the successful development of
these systems. As illustrated throughout this paper, there currently exist a number of
largely independent research areas, each of which has vital contributions to make to
the development of neuroadaptive interface technology. Inputs from cognitive neu-
roscience, functional neuroanatomy, advanced computer graphics and technology,
human factors and ergonomics are all vital to the success of neuroadaptive systems.
Neuroergonomics can serve the vital function of providing a context within which
these areas can coordinate their research activities to develop safer and more effec-
tive human–machine technologies while simultaneously advancing our knowledge of
functional aspects of the human nervous system.
A broader challenge confronting the development of these systems is directly
related to the manner in which research in neuroscience, experimental psychology
and human factors and ergonomics is conducted. As pointed out by LeDoux (2002),
neuroscientists have become very adept at examining and explaining how various
facets of human cognition, perception, memory, emotion and motivation work—but
not how they work together. The same can accurately be said of experimental
psychology, as well as human factors and ergonomics. Fortunately, the development
of neuroadaptive interfaces will require that such knowledge be generated and may
serve as an attractive stimulus to promote its growth. The multidisciplinary nature of
the neuroergonomic framework would seem to make it an ideal method to use in its
pursuit.
5. Conclusions
What does it mean for a human–machine system to be dynamically adaptive? On one
level, it means that the system is comprised of displays and controls whose function-
ality can be enhanced in real time on the basis of significant variations in the user’s
condition. On this level, the term ‘adaptive’ refers to specific characteristics and
capabilities of the system. However, ultimately, the function of an adaptive interface
is to enable the human–machine system to better cope with (adapt to) changing
situational factors.
Whether the goal of a particular system is to improve overall system performance
in response to these situational factors, or to maintain a relatively constant level of
performance,* the research and design issues are fundamentally the same: (1) The
identification of valid and reliable neural sources of information that correspond to
meaningful variations in cognitive and emotional states is a critical first step, and (2)
the determination of how to use those sources of information to functionally modify
the content and/or behaviour of the human–machine interface is a critical second
step.
* The notion of ‘iso-performance’ put forth by Jones and Kennedy (1996) may prove to be
a valuable means of examining trade-offs involved between various neuroadaptive design
elements in terms of maintaining overall system performance at a constant level.
The examination of these questions would be best served by means of a neuroer-
gonomic approach. Clearly, neuroadaptive interfaces cannot be developed in the
absence of far more sophisticated knowledge of the relation between neural activity
and cognitive and emotional states than we currently possess. However, this type of
data, primarily derived from the neurosciences, must be supplemented by corre-
sponding advances in the development of human–machine interfaces to make
these systems capable of effectively rendering dynamically adaptive displays and
controls. Indeed, a convergence of multidisciplinary domains is required for suc-
cessful development of these technologies to occur. A key function of neuroergo-
nomics in the design of neuroadaptive interfaces will be to provide a context and
research and development framework within which this multidisciplinary work can
be performed.
It is important to bear in mind that once algorithms are developed that can
effectively assess a user’s state of mind for the purpose of easing their workload or
adapting their computer-based equipment to make task performance safer and more
effective, it may be only a short step to developing similar algorithms that can record
and store that information for other less altruistic uses (invasive monitoring, person-
ality profiling, etc.). Therefore, significant opportunities exist for invasion of privacy,
particularly if these adaptive algorithms are used to support activities on a net-
worked computer system such as the world-wide-web. That new technologies
often have unintended, negative side-effects is well known (e.g. Tenner 1997), and
that innovative human–computer technologies in particular may be put to negative
use is a topic of much recent discussion (e.g. Gray 2001). Therefore, it is vitally
important that the design of these systems be approached with strict attention
paid to possible misuse.
The goal of neuroadaptive interface design is not only to promote the develop-
ment of more effective human–machine systems—it is, in essence, to reduce the size
of the communications ‘gulf’ that currently exists between humans and computer-
based systems. By sampling, assessing and incorporating aspects of CNS activity
that correspond to meaningful variations in users’ cognitive and/or emotional func-
tioning in combination with similar assessment of the user’s external situation, it
should be possible to devise systems that can dynamically adapt themselves to suit
the changing needs of the human–machine system. Systems of this type will repre-
sent a new communications metaphor for human–computer interfaces—one in
which there is an ongoing ‘broad band’ dialogue between the human and the
machine.
References
Adolphs, R., Tranel, D. and Damasio, A. R. 1998, The human amygdala in social judg-
ment, Nature, 393, 470–474.
Backs, R., Ryan, A. and Wilson, G. 1994, Psychophysiological measures of workload
during continuous manual performance, Human Factors, 36, 514–531.
Bennett, K. B., Cress, J. D., Hettinger, L. J., Stautberg, D. and Haas, M. W. 2001, A
theoretical analysis and preliminary investigation of dynamically adaptive interfaces,
The International Journal of Aviation Psychology, 11, 169–196.
Bressler, S. L. 2002, Understanding cognition through large-scale cortical networks, Current
Directions in Psychological Science, 11, 58–61.
Byrne, E. A. and Parasuraman, R. 1996, Psychophysiology and adaptive automation,
Biological Psychology, 42, 249–268.
Donchin, E., Spencer, K. M. and Wijesinghe, R. 2000, The mental prosthesis: assessing the
speed of a P300-based brain–computer interface, IEEE Transactions on Rehabilitation
Engineering, 8, 174–179.
Farwell, L. A. and Donchin, E. 1988, Talking off the top of your head: a mental prosthesis
utilizing event-related brain potentials, Electroencephalography and Clinical
Neurophysiology, 70, 510–523.
Flach, J. M. and Rasmussen, J. 2000, Cognitive engineering: designing for situation aware-
ness, in N. Sarter and R. Amalberti (eds), Cognitive engineering in the aviation domain
(Mahwah, NJ: Lawrence Erlbaum Associates Publishers), 153–180.
Freeman, F. G., Mikulka, P. J., Prinzel, L. J. and Scerbo, M. W. 1999, Evaluation of an
adaptive automation system using three EEG indices with a visual tracking task,
Biological Psychology, 50, 61–76.
Freeman, F. G., Mikulka, P. J., Scerbo, M. W., Prinzel, L. J. and Clouatre, K. 2000,
Evaluation of a psychophysiologically controlled adaptive automation system, using per-
formance on a tracking task, Applied Psychophysiology and Biofeedback, 25, 103–
115.
Goleman, D. 1995, Emotional intelligence (New York: Bantam).
Gray, C. H. 2001, Cyborg citizen (New York: Routledge).
Haas, M. W. and Hettinger, L. J. 2001, Current research in adaptive interfaces: introduction
to special issue, International Journal of Aviation Psychology, 11, 119–122.
Hamann, S. B., Ely, T. D., Hoffman, J. M. and Kilts, C. D. 2002, Ecstasy and agony:
activation of the human amygdala in positive and negative emotion, Psychological
Science, 13, 135–141.
Jones, M. B. and Kennedy, R. S. 1996, Isoperformance curves in applied psychology, Human
Factors, 38, 167–182.
Kramer, A. F. and Weber, T. 2000, Applications of psychophysiology to human factors, in J.
Cacioppo, L. Tassinary and G. Berntson (eds), Handbook of psychophysiology
(Cambridge, UK: Cambridge University Press), 794–814.
Ledoux, J. 2002, Synaptic self: How the brain shapes who we are (New York: Viking).
Levine, S. P., Huggins, J. E., Bement, S. L., Kushwara, R. K., Schuh, L. A., Rohde, M. M.,
Passaro, E. A., Ross, D. A., Elisevich, K. V. and Smith, B. J. 2000, A direct
brain interface based on event-related potentials, IEEE Transactions on Rehabilitation
Engineering, 8, 180–185.
Lusted, H. S. and Knapp, R. B. 1996, Controlling computers with neural signals, Scientific
American, 275, 82–87.
Morrison, J. G. and Gluckman, J. P. 1994, Definitions and prospective guidelines for the
application of adaptive automation, in M. Mouloua and R. Parasuraman (eds), Human
performance in automated systems: Current research and trends (Mahwah, NJ: Lawrence
Erlbaum Associates Publishers), 256–263.
Mulgand, S., Rinkus, G. and Zacharias, G. 2002, Adaptive pilot/vehicle interfaces for the
tactical air environment, in L. Hettinger and M. Haas (eds), Psychological issues in the
design and use of virtual and adaptive environments (Mahwah, NJ: Lawrence Erlbaum
Associates Publishers), in press.
Ohman, A. 2002, Automaticity and the amygdala: nonconscious responses to emotional faces,
Current Directions in Psychological Science, 11, 62–66.
Parasuraman, R. 1998, Neuroergonomics: The study of brain and behavior at work, http://
www.cua.edu/psy/csl/neuroerg.htm.
Parasuraman, R. 2003, Neuroergonomics: research and practice, Theoretical Issues in
Ergonomics Science, 4, 5–20.
Picard, R. W. 1997, Affective computing (Cambridge, MA: The MIT Press).
Picard, R. W. 2000, Toward computers that recognize and respond to user emotion, IBM
Systems Journal, 39, http://www.research.ibm.com/journal/sj/393/part2/picard.html
Pope, A. T., Bogart, E. H. and Bartolome, D. S. 1995, Biocybernetic system evaluates
indices of operator engagement in automated task, Biological Psychology, 40, 187–195.
Prinzel, L. J., Freeman, F. G., Scerbo, M. W., Mikulka, P. J. and Pope, A. T. 2000, A
closed-loop system for examining psychophysiological measures for adaptive task allo-
cation, The International Journal of Aviation Psychology, 10, 393–410.
Romanski, L. M. and Ledoux, J. E. 1993, Information cascade from primary auditory cortex
to the amygdala: corticocortical and corticoamygdaloid projections of temporal cortex in
the rat, Cerebral Cortex, 3, 515–532.
Scerbo, M. W. 1996, Theoretical perspectives on adaptive automation, in R. Parasuraman
and M. Mouloua (eds), Human performance in automated systems: Current research and
trends (Mahwah, NJ: Lawrence Erlbaum Associates Publishers), 37–64.
Scerbo, M. W., Freeman, F. G. and Mikulka, P. J. 2000, A biocybernetic system for
adaptive automation, in R. W. Backs and W. Boucsein (eds), Engineering psychophy-
siology: Issues and applications (Mahwah, NJ: Lawrence Erlbaum Publishers), 241–254.
Scerbo, M. W., Freeman, F. G. and Mikulka, P. J. 2003, A brain-based system for adaptive
automation, Theoretical Issues in Ergonomics Science, 4, 200–219.
Scerbo, M. W., Freeman, F. G., Mikulka, P. J., Parasuraman, R., Di Nocera, F. and
Prinzel, L. J. 2001, The efficacy of psychophysiological measures for implementing
adaptive technology, NASA TP-2001-211018 (Hampton, VA: NASA Langley Research
Center).
Serruya, M. D., Hatsopoulos, N. G., Paninski, L., Fellows, M. R. and Donoghue, J. P.
2002, Instant neural control of a movement signal, Nature, 416, 141–142.
Tenner, E. 1997, Why things bite back: Technology and the revenge of unintended consequences
(New York: Alfred A. Knopf).
Torsvall, L. and Akerstedt, T. 1988, Extreme sleepiness: quantification of EOG and
spectral EEG parameters, Electroencephalography and Clinical Neurophysiology, 47,
272–279.
Vicente, K. J. 1999, Cognitive work analysis: Toward safe, productive, and healthy computer-
based work (Mahwah, NJ: Lawrence Erlbaum Publishers).
Wolpaw, J. R., Birbaumer, N., Heetderks, W. J., McFarland, D. J., Peckham, P. H.,
Schalk, G., Donchin, E., Quatrano, L. A., Robinson, C. J. and Vaught, T. M.
2000, Brain-computer interface technology: a review of the first international meeting,
IEEE Transactions on Rehabilitation Engineering, 8, 164–173.
Wolpaw, J. R., Flotzinger, D., Pfurtscheller, G. and McFarland, D. J. 1997,
Timing of EEG-based cursor control, Journal of Clinical Neurophysiology, 14, 529–538.
About the authors
Lawrence J. Hettinger is Senior Human Factors Engineer for Northrop Grumman Informa-
tion Technology in Harvard, MA. He has primarily worked in research and development areas
concerned with the design and evaluation of complex human–machine systems, principally
those related to the performance of tactical aviation, surface naval and medical procedures. He
received his PhD in Psychology from the Ohio State University in 1987.
Pedro Branco is a Researcher in the Human Media Technologies department of the Fraun-
hofer Center for Research in Computer Graphics in Providence, RI, where he is working on
the development of multimodal interfaces for virtual environments and perceptual computing
approaches for adaptive interfaces. These areas are also the focus of the thesis work he is
pursuing as a PhD candidate at Minho University in Portugal. Pedro Branco obtained the
‘Licenciatura’ degree in Computer Science from the Faculty of Science, Porto University,
Portugal in December 1997.
L. Miguel Encarnacao is the head of the Human Media Technologies department of the
Fraunhofer Center for Research in Computer Graphics in Providence, Rhode Island. His
research interests include next-generation adaptive user interfaces and mixed reality technol-
ogies as well as advanced distributed learning environments. He received his PhD in Com-
puter Science in 1997 from the University of Tübingen, Germany. He is an Adjunct Professor
of Computer Science at the University of Rhode Island.
Paolo Bonato is Research Assistant Professor at the NeuroMuscular Research Center of
Boston University, Boston, MA. His research work includes signal processing applied to
the biomedical field and rehabilitation engineering, analysis of non-stationary signals by
time-frequency analysis, electromyography and biomechanics of movement. More recently,
his focus has been on intelligent signal processing for investigating problems in neurophysiol-
ogy and neuro-fuzzy inference systems for the analysis of data recorded using wearable sen-
sors. He received his PhD in Biomedical Engineering from Università di Roma ‘La Sapienza’,
Rome, Italy in 1995.