ICLI 2014 - INTER-FACE
INTERNATIONAL CONFERENCE
ON LIVE INTERFACES
Edited by
Adriana Sa
Miguel Carvalhais
Alex McLean
Published by
Porto University
CECL (NOVA University)
CESEM (NOVA University)
MIPTL (University of Sussex)
Design
Design (event): David Palmer
Design (proceedings): Joana Morgado
ISBN
978-989-746-060-9
ORGANIZATION
PLANNING TEAM
Adriana Sa
EAVI / Goldsmiths, University of London
(general chair)
Alex McLean
ICSRiM / University of Leeds
(co-chair)
Miguel Carvalhais
ID+ – University of Porto
(co-chair)
Teresa Cruz
CECL / Nova University Lisbon
Kia Ng
ICSRiM / University of Leeds
Isabel Pires
CESEM / Nova University
Luísa Ribas
ID+ / Faculty of Fine Arts,
University of Lisbon
Maria Chatzichristodoulou
University of Hull
TECHNICAL COORDINATION
John Klima / Scratchbuilt Studios
THANKS TO
Raquel Castro
Silvia Firmino
Rajele Jain / VIPULAMATI Associação
SCIENTIFIC COMMITTEE
Samuel Aaron
University of Cambridge
Baptiste Caramiaux
EAVI / Goldsmiths, University of London
Miguel Carvalhais
ID+ – University of Porto
Maria Chatzichristodoulou
University of Hull
Teresa Cruz
CECL / Nova University Lisbon
Emilia Duarte
IADE Creative University
Mick Grierson
EAVI / Goldsmiths, University of London
Edwin van der Heide
Leiden University / Interfaculty The Hague
Martin Kaltenbrunner
ICL – University of Art and Design Linz
Alex McLean
ICSRiM – University of Leeds
Thor Magnusson
University of Sussex
Kia Ng
ICSRiM / University of Leeds
Rui Penha
Universidade do Porto
Carlos Pimenta
Lusófona University
Isabel Pires
CESEM / Nova University
Pedro Rebelo
SARC / School of Creative Arts Belfast
Luísa Ribas
ID+ / Faculty of Fine Arts,
University of Lisbon
Adriana Sa
EAVI / Goldsmiths, University of London
Daniel Schorno
STEIM
Franziska Schroeder
SARC / School of Creative Arts Belfast
Atau Tanaka
EAVI / Goldsmiths, University of London
SUBMISSION REVIEWERS
Samuel Aaron
University of Cambridge
Joanne Armitage
University of Leeds
Matt Benatan
University of Leeds
Baptiste Caramiaux
EAVI / Goldsmiths, University of London
Miguel Carvalhais
ID+ – University of Porto
Maria Chatzichristodoulou
University of Hull
Teresa Cruz
CECL – Nova University Lisbon
Emilia Duarte
IADE Creative University
Martin Kaltenbrunner
ICL – University of Art and Design Linz
Chris Kiefer
EAVI – Goldsmiths & Sussex University
John Klima
Scratchbuilt Studios
Alex McLean
ICSRiM – University of Leeds
Thor Magnusson
University of Sussex
Kia Ng
ICSRiM / University of Leeds
Adam Parkinson
EAVI / Goldsmiths, University of London
Rui Penha
Universidade do Porto
Carlos Pimenta
Lusófona University
Isabel Pires
CESEM / Nova University
Robin Price
SARC / School of Creative Arts Belfast
Pedro Rebelo
SARC / School of Creative Arts Belfast
Luísa Ribas
ID+ / Faculty of Fine Arts,
University of Lisbon
Adriana Sa
EAVI / Goldsmiths, University of London
Daniel Schorno
STEIM
Franziska Schroeder
SARC / School of Creative Arts Belfast
Atau Tanaka
EAVI / Goldsmiths, University of London
EDITORIAL REVIEWERS (PAPERS)
Till Bovermann
Berlin University of the Arts
Miguel Carvalhais
ID+ – University of Porto
Marko Ciciliani
IEM – University of Music and Performing
Arts Graz
Nuno Correia
EAVI / Goldsmiths, University of London
Mat Dalgeich
University of Wolverhampton
Pete Furniss
University of Edinburgh
Chris Kiefer
EAVI – Goldsmiths & Sussex University
Alex McLean
ICSRiM – University of Leeds
Andrew McPherson
Centre for Digital Music – Queen Mary
University of London
Tom Mudd
Goldsmiths, University of London
Luísa Ribas
ID+ / Faculty of Fine Arts,
University of Lisbon
Adriana Sa
EAVI / Goldsmiths, University of London
Tim Sayer
University of St Mark and St John Plymouth
Horácio Tome-Marques
ID+ – University of Porto
Ian Wilcock
University of Hertfordshire
Polina Zioga
Digital Design Studio / Glasgow School
of Art
PARTNERSHIPS
CECL – Centre for Communication
and Languages
(Faculty of Social and Human
Sciences / Nova University Lisbon)
CESEM – Centre for Studies in Sociology
and Musical Aesthetics
(Faculty of Social and Human
Sciences / Nova University Lisbon)
EAVI – Embodied Audiovisual Interaction
Group
(Goldsmiths, University of London)
EMCN – School of Music of the National
Conservatory
FBAUL – Fine Arts Faculty / University
of Lisbon
IADE – Institute of Art, Design
and Enterprise
ICSRiM – Interdisciplinary Centre
for Scientic Research in Music
(University of Leeds)
ID+ – Research Institute for Design, Media
and Culture
(Porto University + Aveiro University)
MNAC – Museu Nacional de Arte
Contemporânea do Chiado
Music Informatics and Performance
Technologies Lab
(Sussex University)
ZDB – Galeria zedosbois
SUPPORTS
FCT – Foundation for Science
and Technology
POPH / FSE
OPTEC Lda.
ScratchBuilt Studios
Nacional Filmes – Film Production
and Sound Studio
DP Creative
Teatro São Luiz
EGEAC | Câmara Municipal de Lisboa
ABSTRACT
Although the use of Brain-Computer Interfaces (BCIs) in the arts origi-
nates in the 1960s, there is a limited number of known applications in
the context of real-time audio-visual and mixed-media performances
and accordingly the knowledge base of this area has not been developed
suciently. Among the reasons are the diculties and the unknown pa-
rameters involved in the design and implementation of the BCIs. How-
ever today, with the dissemination of the new wireless devices, the eld
is rapidly growing and changing. In this frame, we examine a selection
of representative works and artists, in comparison to the current scien-
tic evidence. We identify important performative and neuroscientic
aspects, issues and challenges. A model of possible interactions between
the performers and the audience is discussed and future trends regard-
ing liveness and interconnectivity are suggested.
KEYWORDS
Brain-Computer Interface (BCI), Electroencephalography (EEG),
Human-Computer Interaction (HCI), Wireless, Performance Art,
Real-Time, Liveness, Mixed-Media, Audience.
A WIRELESS FUTURE:
PERFORMANCE ART, INTERACTION
AND THE BRAIN-COMPUTER INTERFACES
POLINA ZIOGA
Digital Design Studio
Glasgow School of Art
Glasgow, United Kingdom
P.Zioga1@gsa.ac.uk
MINHUA MA
School of Art, Design and Architecture
University of Hudderseld
Hudderseld, United Kingdom
M.Ma@hud.ac.uk
PAUL CHAPMAN
Digital Design Studio
Glasgow School of Art
Glasgow, United Kingdom
P.Chapman@gsa.ac.uk
FRANK POLLICK
School of Psychology
University of Glasgow
Glasgow, United Kingdom
Frank.Pollick@glasgow.ac.uk
1. INTRODUCTION
The use of Brain-Computer Interfaces (BCIs) in the arts originates in
the 1960s with the pioneering work of composers like Alvin Lucier,
David Rosenboom, and others. Today there is an increasing number
of musical works in the eld, but there are still limited known appli-
cations in the context of real-time audio-visual and mixed-media per-
formances1 and accordingly the knowledge base of this area has not
been developed suciently. The reasons are merely two. On the one
hand, the low-cost commercial devices have only recently been avail-
able in the market, making the technology approachable to artists. On
the other hand, the design and implementation of BCIs presents several
diculties and is dependent on unknown parameters. However, today
the eld is rapidly growing and new approaches and denitions are
requested. In this frame we shall refer to the use of BCIs in the con-
text of real-time audio-visual and mixed-media performances as live
brain-computer mixed-media performances. After a brief introduction in
section 2 to BCIs and the particular diculties they present, we exam-
ine in section 3 a selection of representative works and artists, in order
to identify important performative and neuroscientic aspects, issues
and challenges and show how the development of the eld is changing
with the dissemination of the new wireless devices. In section 4 we out-
line possible directions for the future research and practices and we
suggest a model of possible interactions between the performers and
the audience.
2. BRAIN-COMPUTER INTERFACES: LIMITATIONS, DIFFICULTIES AND
UNKNOWN PARAMETERS
Wolpaw and Wolpaw (2012, 3-12) defined a BCI as:
“[…] a system that measures CNS [Central Nervous System] activity and converts it
into articial output that replaces, restores, enhances, supplements, or improves nat-
ural CNS output and thereby changes the ongoing interactions between the CNS and
its external or internal environment.”
Among the non-invasive techniques used for signal acquisition in
BCIs, the most common is Electroencephalography (EEG). EEG, a tech-
nique that can be applied to humans repeatedly with no risk or limita-
tion, is the recording of the electrical activity along the scalp, by meas-
uring the voltage fluctuations resulting from the current flows (Teplan
2002, Niedermeyer and da Silva 2004). The recorded electrical activity
is then categorized in rhythmic activity frequency bands,2 which are associated with different brain- and cognitive-states.
1. We use the term “mixed-media performances” as introduced by Auslander (1999, 36): “[…] events combining live and mediatized representations: live actors with film, video, or digital projections […].”
2. The EEG rhythmic activity frequency bands are delta (<4Hz), theta (4-7Hz), alpha (8-13Hz), beta (14-30Hz), and gamma (30-100Hz).
EEG is a very effective technique for measuring changes in brain-activity with an accuracy of milliseconds. However, one of its technical limitations is the low
spatial resolution, as compared to other brain imaging techniques, like
fMRI (functional Magnetic Resonance Imaging), meaning that it has
low accuracy in identifying the region of the brain being activated.
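The rhythmic activity frequency bands listed in footnote 2 can be recovered from a raw EEG signal with a simple spectral estimate. The following is a minimal sketch of our own (not part of any of the systems discussed), assuming numpy and a single-channel signal sampled at 256 Hz:

```python
import numpy as np

# Nominal EEG frequency bands in Hz, as listed in footnote 2.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (14, 30), "gamma": (30, 100)}

def band_powers(signal, fs):
    """Return the mean spectral power of each EEG band.

    signal: 1-D array of voltage samples; fs: sampling rate in Hz.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = power[mask].mean() if mask.any() else 0.0
    return out

# A synthetic 10 Hz "relaxation" signal: 10 Hz sits squarely in alpha.
fs = 256
t = np.arange(fs * 2) / fs
alpha_wave = np.sin(2 * np.pi * 10 * t)
powers = band_powers(alpha_wave, fs)
dominant = max(powers, key=powers.get)
```

In practice a windowed, averaged estimate (e.g. Welch's method) and artifact rejection would precede any such banding; the raw periodogram above is only the simplest illustration.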
At the same time, the design and implementation of BCIs presents additional difficulties and is dependent on many factors and unknown parameters, such as the unique brain anatomy of the person wearing the device each time, the task/s being executed, the type of sensors used, the location of the sensors, which might differ even slightly during each session, and the ratio of noise and non-brain artifacts to the actual brain signal being recorded. More specifically, the non-brain artifacts include the “internally generated” ones, such as the EMG (electromyographic) signals deriving from the neck and face muscles, the eye movements, but also the heart activity, and the “externally generated” ones, like spikes from equipment, cable sway and thermal noise (Swartz Center for Computational Neuroscience, University of California San Diego 2012).
In recent years, with the accelerating advances in neuroscience and
biomedical engineering research, new low-cost devices which use wet
or dry sensors have been developed. Neurosky introduced in 2007 the first, to our present knowledge, wireless device for consumer use, which was also the first device with a dry sensor that did not require
the application of a conductive gel, nor skin preparation (bnetTV.com
2007). In 2009, Emotiv launched two wireless devices, the EPOC and
the EEG neuroheadset, with 14 wet sensors plus 2 references. At the
same time, alongside the companies building new wireless interfaces, a community of developers and engineers working on DIY (do
it yourself) devices has also emerged, such as the OpenEEG project
(OpenEEG project 2014), which is a relatively well-known community
amongst artists and creative practitioners. In this way, and within only a few years, EEG technology has been made more approachable and easy to use, and therefore the applications in the arts have radically
increased and the practices have changed. As we will discuss further
on, the new wireless devices help the artists to overcome important
constraints, but at the same time they also present new challenges.
3. THE USE OF BCIS IN REAL-TIME AUDIO-VISUAL AND MIXED-MEDIA
PERFORMANCES: NEUROSCIENTIFIC AND PERFORMATIVE CHALLENGES
3.1. KINESIOLOGY, FACIAL EXPRESSION AND NOISE
Since the rst works with the use of BCIs, performers have encoun-
tered considerable limitations to their kinesiology and even their fa-
cial expression; either in cases they use wired devices and electrodes,
and/or because of the contamination of the EEG-data with noise and
non-brain artifacts from the cranial and body muscles. A well-known
example is Music For Solo Performer (1965) by Alvin Lucier, which is
considered the rst real-time performance using EEG. In this work, the
performer has two electrodes attached to his forehead, while he sits almost motionless on a chair, slowly opening and closing his eyes, thus controlling the effect of the visual stimuli on his brain-activity and consequently the alpha rhythmic activity frequency band, which is associated with a brain-state of relaxation. The electrodes are connected via an amplifier to a set of speakers, which transmit the electrical signal
and vibrate percussion instruments placed around the performance
space (Ashley 1975).
Another example is INsideOUT (2009) by Claudia Robles Angel, in
which she uses an open source EEG interface from Olimex, consisting
of one analogue and one digital board, connected to a computer. Two
electrodes, one on her forehead and one on the back of her head, connect respectively the frontal lobe’s activity with the sound output from the computer and the occipital lobe’s activity with the video
output. The sounds and images are projected on a screen and onto the
performer. They are controlled by the values of the signals acquired via
the electrodes and processed via the MAX/MSP software (Angel 2011).
In one of her interviews, Angel mentions that with the EEG interface
she could not move because it “is so sensitive that if you move you get
values [noise] from other sources” (Lopes and Chippewa 2012). Today,
the new wireless devices have provided the performers with greater
kinetic and expressive freedom, while in some cases they also include
filters and algorithmic interpretations which can be used to some extent for the real-time processing of the acquired data. However, there are certain issues, which will be discussed in more detail in section 3.4.
3.2. RHYTHMIC ACTIVITY FREQUENCY BANDS AND COGNITIVE STATES
The limitations imposed on the performers’ kinesiology and facial expression, as in the previously presented examples of works, have further implications and result in additional performative constraints, such as the inevitable focus on controlling only the relaxation state and the associated alpha rhythmic activity frequency band. For performers that are interested in using BCIs while engaging in more active
situations and states of tension, like for example in works that involve
intense kinesiology and speech, the use of wireless devices is indispen-
sable. Consequently they are also enabled to consider all the different
frequency bands, associated with a greater range of brain- and cognitive-states. The EEG-data can be further processed and differentiated according to the tasks executed and consistently with the dramaturgical conditions of the performance. In this way the use of BCIs as a
medium in live performances is enriched. Examples of such works are
presented in the following sections.
3.3. SPATIAL RESOLUTION AND THE HEAD VOLUME CONDUCTION EFFECT
As we discussed in section 2, one of EEG’s technical limitations is its low
spatial resolution, which is also further influenced by the “head volume conduction effect” (He and Ding 2013), meaning that the recorded
electrical signal is further blurred, as it passes through the different
anatomical tissues of the head, before it reaches the scalp. The result of
this phenomenon is that positioning the electrodes or sensors on differ-
ent locations on the head cannot be easily associated with the activity
of specic regions of the brain. In neuroscience research, in order to
bypass this limitation, apart from the clinical grade systems that can
use up to 256 electrodes, there are methods and tools, such as invasive
BCIs, the complementary use of fMRI scans, as well as complex linear
algebra mathematical modelling. However, these techniques are cur-
rently not applicable to artistic performances and especially in cases
where low-cost interfaces are used with a limited number of electrodes/sensors, either wireless or not. For this reason, artists should either not base the concept of their live brain-computer mixed-media performances on the localisation of the electrodes/sensors, or they should consider applying a combination of pre-performance study and on-performance use of computational processing, which, however, is complex and therefore challenging.
3.4. RAW EEG DATA VERSUS “DETECTION SUITES”
The new low-cost wireless devices have not only given greater kinet-
ic and expressive freedom to the performers, but with their accompa-
nying user-friendly software, SDK (software development kit) licences
and a variety of connectivity solutions, they have enabled artists to es-
tablish communication with different hardware and boards like Ardui-
no, and software like Pure Data, MAX/MSP, Processing, Ableton Live
and others, creating prototypes and playful applications. This ease of use is largely achieved because these devices enable real-time raw EEG data extraction, but at the same time they also include ready-made algorithmic interpretations and filters for feature extraction. For example, the user can view and process/map data under categorisations such
as “frustration” or “excitement”, “meditation” or “relaxation”, “engagement” or “concentration”, which are differentiated amongst the different devices and manufacturers.
For example, Adam John Williams, with Alex Wakeman and Robert Wollner, presented in 2013 a project which uses an Emotiv EPOC headset to acquire the participants’ EEG data and send them to a computer, converting them to:
“[…] OpenSound Control messages, which were sent to a Mac where Max MSP used
the data to adjust the rules of a generative music engine. Tempo and sync information
were then packed along with the original EEG messages and transmitted to the Rasp-
berry Pi upon which the visuals were generated.”
Williams 2013
As shown in the video documentation, the software processes different inputs titled “Bored/Engaged”, “Excited”, “Excited LT”,
“Meditation” and “Frustration”, which are associated with the Emotiv’s
“detection suites” (Emotiv 2014).
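The OpenSound Control routing described in the quote can be illustrated with a stdlib-only sketch that packs a single float reading into an OSC 1.0 message, ready to be sent as one UDP datagram to an environment such as Max/MSP or Pure Data. The address /eeg/meditation and the value are our hypothetical examples, not taken from Williams’ project:

```python
import struct

def osc_padded(s: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    return s + b"\x00" * (4 - len(s) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying one 32-bit float argument."""
    return (osc_padded(address.encode()) +  # padded address pattern
            osc_padded(b",f") +             # type-tag string: one float
            struct.pack(">f", value))       # big-endian float32 payload

# One datagram per EEG reading; e.g. sock.sendto(packet, (host, port)).
packet = osc_message("/eeg/meditation", 0.7)
```

Libraries such as python-osc wrap this encoding, but the hand-rolled version shows how little is involved in bridging a headset’s readings to audio-visual software.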
Lisa Park in her work Eunoia (2013), a Greek word meaning goodwill and beautiful thinking, in a way reinterprets Alvin Lucier’s Music for Solo Performer (1965) by using Neurosky’s Mindwave wireless device, monitoring her brain-wave activity and processing the EEG-data categorised in different rhythmic activity frequency bands, but also states, such as “Attention” and “Meditation”. These data and the corresponding values are amplified and transmitted through five speakers, positioned underneath an equal number of round metal plates, filled
with water, and associated according to the artist with the emotions
of “happiness”, “anger”, “sadness”, “hatred”, and “desire”. The speak-
ers vibrate the metal plates and “varieties of water forms” are created
(Park 2013).
Although the use of the aforementioned “detection suites” serves in
the artists’ hands as ready-made tools for the creation of inspiring and
imaginative works, there are two facts that we should bear in mind. On
the one hand the algorithms and methodology upon which the inter-
pretation and feature extraction of the brain’s activity is made are not
published by the manufacturers. On the other hand, the published neuroscience research in the field of emotion recognition via the use of EEG data is fairly new. Thus, the use of these “detections” of emotional states should not necessarily be regarded as accurate, and therefore the creative results may not be consistent with the artists’ original intentions.
Two examples in the direction of scientifically established use of emotion interpretation via EEG in the arts come from the field of computer music research. The Embodied AudioVisual Interaction Group
(EAVI) at Goldsmiths, University of London, has developed a BCI toolkit that can be used with both clinical-grade and consumer-level devices,
and has the ability to detect Event-Related Potentials (ERPs), used for
“making high-level musical decisions”, like for example in Finn Peters’
Music of the Mind (2010) album and tour (Grierson, Kiefer, and Yee-King 2011). For their performance piece under development, The Space Between Us, Eaton, Jin, and Miranda (2014) describe the measurement
and mapping of valence and arousal levels within EEG, for which there
are different known methods with well documented results. Similar
approaches can contribute to a new system of validation and evaluation, enabling further advancements in the field.
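Eaton, Jin, and Miranda’s exact mapping is not reproduced here, but one commonly documented approach in this literature estimates valence from frontal alpha asymmetry and arousal from the beta/alpha power ratio. The sketch below assumes those two heuristics and hypothetical band-power inputs for the frontal electrode sites F3 and F4:

```python
import math

def valence_arousal(alpha_f3, alpha_f4, beta_f3, beta_f4):
    """Estimate an affective state from frontal EEG band powers.

    valence: frontal alpha asymmetry, ln(alpha at F4) - ln(alpha at F3);
    arousal: overall beta/alpha power ratio.
    Both are relative, unitless indices, not calibrated emotions.
    """
    valence = math.log(alpha_f4) - math.log(alpha_f3)
    arousal = (beta_f3 + beta_f4) / (alpha_f3 + alpha_f4)
    return valence, arousal

# Illustrative values only: more alpha on the right, moderate beta.
v, a = valence_arousal(alpha_f3=4.0, alpha_f4=6.0, beta_f3=2.0, beta_f4=3.0)
```

Here v > 0 would be read as relatively positive valence, and larger a as higher arousal; a performance system would map these two continuous values onto musical parameters rather than discrete emotion labels.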
3.5. COHERENCE, SYNCHRONICITY AND INTERACTION WITH MULTIPLE
PARTICIPANTS
One of the most cited works, Mariko Mori’s Wave UFO (2003) is an im-
mersive video installation, where computer-generated graphics are
combined with the “real-time interpretation of three participants’ al-
pha, beta, and theta brain-waves” (Mori, Kunsthaus Bregenz, and Sch-
neider 2003). The participants are wearing EEG devices with three elec-
trodes/sensors attached to their foreheads, recording the frequencies of
their brains’ right and left hemispheres. Depending on which frequency shows higher activity, animated spheres projected on the ceiling (one for each participant’s hemisphere) take a different, associated colour (red for the beta band, blue for alpha and yellow for theta). At the same
time, each participant’s brain coherence is also animated, with a second pair of smaller spheres, the “Coherence Spheres”. By coherence the artist refers to the phenomenon of synchronicity of the alpha-wave activity between the brain’s two hemispheres (Mori, Kunsthaus Bregenz,
and Schneider 2003). When this is achieved, the “Coherence Spheres” join together. If all the participants reach this state, then a circle is created, as a scientific visualisation approach to the artist’s idea of connectivity. Coherence in Mariko Mori’s work also serves as an
example of a real-time interaction between the brain activity of mul-
tiple participants and the visualisation of the brain-data as a form of
physicalisation, which is the process of rendering physical the abstract
information through either graphical representation and visual interpretation or sonification (Tanaka 2012).
More recently, the Marina Abramovic Institute Science Chamber
and neuroscientist Dr. Suzanne Dikker have been collaborating in a
series of projects, like Measuring the Magic of Mutual Gaze (2011), The
Compatibility Racer (2012) and The Mutual Wave Machine (2013), which
explore “moments of synchrony” of the brain-activity between two participants, when they interact by gazing at each other (Dikker 2014). As Dikker explains, by “moments of synchrony” are meant points in time when the two participants present the same predominant brain-activity (Marina Abramovic Institute 2014). Could we expect to see in the future live brain-computer mixed-media performances where an interaction between the performer/s’ and the audience’s brain activity jointly contributes to the final creative output/result? In this case, what kind of
new connections and cognitive issues might emerge?
4. TOWARDS THE FUTURE
4.1. LIVENESS AND INTERACTION WITH THE AUDIENCE
In real-time audio-visual and mixed-media performances, from experimental underground acts to multi-million-dollar music concerts touring around
the world in big arenas, liveness is a key element. In the case of perform-
ers using laptops and operating software, the demonstration of liveness
to the audience is a challenge approached in various ways. The Erasers
(2013) for example, transform the stage into a kind of audio-visual labo-
ratory, where the creative process and the different techniques they use
to produce moving images and sound, as well as the final outcome are
immediately visible to the audience. Other performers use two projec-
tions, with one of them showing their computers’ desktops and the oth-
er one showing the visual output/result. A similar approach is also live
coding, a programming practice disseminated in contemporary music
improvisational performances.
In the eld of live brain-computer mixed-media performances, the
members of PULSE4ART group, awarded in Errors Allowed Mediterra-
nea 16 Young Artists Biennial (2013), have mentioned that in their 2014
227
new project they will engage the audience by having them wear the
headsets and contributing their EEG data to the performance, much
like the way it was realised in their 2013 project ALPHA (Pulse 4 Arts
and Oullier 2014). The project is an improvisation-based performance
with live music, live visuals and the brain-activity of two dancers wearing two EPOC headsets, extracted and mapped in real time to projected moving images (Association Bjcem 2013). Also, Lisa Park, in her demo
video for her upcoming performance Eudaimonia, a Greek word mean-
ing bliss, presents the idea of an installation with the collaboration of
eight to ten participants wearing portable BCI devices. As in her 2013
performance, discussed in section 3.4, the brain-activity of the partici-
pants will be physicalised as sound-waves, played by speakers placed
underneath a shallow pool of water, vibrating and creating “corre-
sponding ripples and droplets” on the surface (Park 2014).
From these and other examples a question arises: what might
be a model for interaction between the performer/s’ and the audience’s
brain-activity in the context of a live brain-computer mixed-media per-
formance and how could liveness be presented to the audience? In Fig-
ure 1, we present a proposal for such a model, which demonstrates the
collective participation and co-creation of the mediatized elements of
the performance. According to the model, the audience is made aware
of the liveness of the performance by realising the interaction taking
place among its EEG activity, the audio and visual outputs and finally
the performer/s themselves.
Figure 1 A model of interactions between the performer/s and the audience in live
brain-computer mixed-media performances.
The model currently serves as the basis for the authors’ development of a new multi-brain EEG-based BCI system, which will be used
in the context of a new live brain-computer mixed-media performance,
due to be presented in the coming months.
4.2. INTERCONNECTIVITY
As the research and development of applications are advancing, new
possibilities are emerging for the BCIs to connect with other devices,
and ultimately the World Wide Web. The idea of using technology,
sensors and computers to connect the human body to the Internet is
not new in the arts. Stelarc, a performance artist using biotechnology,
robotics, virtual reality systems and the Internet, probes and acoustically amplifies his own body (Stelarc 2014). During the Telepolis event
that took place in November 1995, a series of sensors were attached
to different parts of his body, connected to a computer with a “touch
screen interface & muscle stimulation circuitry”, and via the computer
to the World Wide Web (Smith 2005). Through a “performance web-
site” the audience remotely viewed, accessed, and actuated the body by
clicking/sending commands to the computer interface located together
with Stelarc at the performance site. The result caused the body to move involuntarily (Stelarc 1995).
In August 2013, Rao and Stocco conducted at the University of Washington the pilot study Direct Brain-to-Brain Interface in Humans. The published research report describes the first brain-to-brain interface between two humans, which transmits EEG signals recorded from the first participant to the second over the internet (Rao et al. 2014). In
August 2014 Grau et al. published the results of a series of experiments
with established “internet-mediated B2B [Brain to Brain] communica-
tion by combining a BCI […] with a CBI [Computer-Brain Interface]”.
Of course, Brain-to-Brain research is a newly born scientific breakthrough and therefore currently far from being applicable in the arts.
However, the use of EEG data transferred via the internet is a reality, and it is only a matter of time before we witness similar applications in the context of live brain-computer mixed-media performances and in the practices and theories of interconnectivity.
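The plumbing for moving EEG readings across a network, as distinct from Rao et al.’s brain-to-brain method (which also involves stimulating the receiving brain), can be as simple as one UDP datagram per reading. A stdlib-only sketch, with hypothetical field names, demonstrated here over the loopback interface:

```python
import json
import socket

# Sender side: serialise one reading per small datagram.
def send_reading(sock, addr, performer_id, band, value):
    payload = json.dumps({"id": performer_id, "band": band, "value": value})
    sock.sendto(payload.encode(), addr)

# Receiver side: decode and hand the value to the audio/visual engine.
def decode_reading(datagram):
    return json.loads(datagram.decode())

# Loopback demonstration; a real setup would cross the open internet.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))           # OS picks a free port
addr = recv.getsockname()
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_reading(send, addr, "performer-1", "alpha", 0.42)
reading = decode_reading(recv.recvfrom(1024)[0])
recv.close()
send.close()
```

UDP is a natural fit for performance use because a lost reading is better skipped than replayed late; a production system would add timestamps and some smoothing against jitter.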
5. CONCLUSIONS
There is no doubt that the new wireless devices are not only the future,
but already the present in the field of live brain-computer mixed-media
performances. With the new EEG technologies, artists are not only enabled to use their own brain in their creative practices in the most direct way so far possible, but they are also given a new freedom of
access, interpretation, communication, interaction, and the ability to
investigate new performative patterns.
The artists and works presented and discussed here are only a sample of the continuously increasing number of imaginative applications,
creative and playful ideas that have emerged within only a few years.
The new wireless devices help performers to overcome the so far dominant constraints, providing them with greater kinetic and expressive
freedom, but at the same time they also present new challenges. By taking into account both the advantages and disadvantages, the opportunities and limitations of the technology, in comparison with the current scientific research and methodologies, artists can enrich their practices in a way that is meaningful and consistent with the medium. They will be able to
contribute to the advancement of the field and the creation of a greater
and more validated area of investigation in discourse with other rele-
vant practices. We expect in the near future much progress and new
aesthetic experiences intersecting and transcending the boundaries of
performance and new media art, experimental psychology, computa-
tional neuroscience, and modern brain-computer interface design.
REFERENCES
Angel, Claudia Robles. “Creating Interactive Multimedia Works with Bio-data” in
NIME ’11 International Conference on New Interfaces for Musical Expression. Oslo:
NIME ‘11, 30 May – 1 June, 2011, 421-424.
Ashley, Robert. “Music with Roots in the Aether – Alvin Lucier (1975)”. UbuWeb video,
121:35. Accessed April 19, 2014. http://ubu.com/film/aether_lucier.html.
Association Bjcem. ERRORS ALLOWED MEDITERRANEA 16 YOUNG ARTISTS BIENNI-
AL ANCONA 2013. Macerata: Quodlibet srl, 2013.
Auslander, Philip. Liveness: Performance in mediatized culture. New York: Routledge,
1999.
bnetTV.com. “NeuroSky – CTIA 2007”. Youtube video. 05:28. Posted by “NeuroSky, Inc.”,
2007. Accessed April 4, 2014. https://youtube.com/watch?v=qTYXOMuVL5E.
Dikker, Suzanne. “REAL-TIME INTERACTIVE BRAIN INSTALLATIONS”. Last Modied
September 2014. Accessed October 19, 2014. https://les.nyu.edu/sd1083/public/art.html.
Eaton, Joel, Jin, Weiwei, and Miranda, Eduardo. “The Space Between Us: A Live Perfor-
mance with Musical Score Generated via Affective Correlates Measured in EEG of One
Performer and an Audience Member” in NIME’14 International Conference on New
Interfaces for Musical Expression. London: NIME ‘14, June 30 – July 03, 2014, 593-596.
Emotiv. “Emotiv eStore”. Accessed October 20, 2014. http://emotiv.com/store/compare.
Grau, Carles, Ginhoux, Romuald, Riera, Alejandro, Nguyen ,Thanh Lam, Chauvat
Hubert, Berg, Michel, Amengual, Julia L., Pascual-Leone, Alvaro, Runi, Giulio.
“Conscious Brain-to-Brain Communication in Humans Using Non-Invasive Technol-
ogies”. PLoS ONE 9(8) (2014): e105225. Accessed October21, 2014. doi:10.1371/jour-
nal.pone.0105225.
Grierson, Mick, Kiefer, Chris, and Yee-King, Matthew. “PROGRESS REPOST ON THE
EAVI BCI TOOLKIT FOR MUSIC: MUSICAL APPLICATIONS OF ALGORITHMS FOR
USE WITH CONSUMER BRAIN COMPUTER INTERFACES.” in Proccedings of the Inter-
national Computer Music Conference 2011. University of Hudderseld, UK: 31 July
– 5 August 2011, 110-113.
He, Bin and Ding, Lei. “Electrophysiological Mapping and Neuroimaging”. In: Neural
Engineering, edited by Bin He. New York: Springer Science+Business Media, 2013.
Lopes, Pedro and Chippewa, Jef. “Performing Biological Bodies An open conversation
with Marco Donnarumma, Claudia Robles and Peter Kirn at Body Controlled #4
(Berlin, 11–15 July 2012)”. eContact! 14.2 – Biotechnological Performance Practice/
Pratiques de performance biotechnologique (July / juillet 2012). Montréal: Commu-
nauté électroacoustique canadienne / Canadian Electroacoustic Community. Ac-
cessed March 25, 2014. http://cec.sonus.ca/econtact/14_2/lopes_bc4-interview.html.
Marina Abramovic Institute. “Out of the Lab”. Accessed October 19, 2014. http://www.
immaterial.org/content/2014/6/9/out-of-the-lab.
Mori, Mariko, Kunsthaus Bregenz, and Schneider, Eckhard. Mariko Mori: wave
UFO. Köln: Verlag der Buchhandlung Walther König, 2003.
Niedermeyer, Ernst and da Silva, Fernando Lopes. Electroencephalography: Basic
Principles, Clinical Applications, and Related Fields. Philadelphia ; London: Lippin-
cot Williams & Wilkins, 2004.
OpenEEG project. “Welcome to the OpenEEG project”. Accessed April 22, 2014. http://
openeeg.sourceforge.net/doc/index.html.
Park, Lisa. “Eunoia”. 2013. Accessed October 19, 2014. http://www.thelisapark.com/#/
eunoia.
———. “eunoia (about the process)”. Vimeo video, 05:33. Posted by “Lisa Park”, 2013.
Accessed October 19, 2014. https://vimeo.com/67935519.
———. “Eudaimonia”. Vimeo video, 02:23. Posted by “Lisa Park”, 2014. Accessed Octo-
ber 19, 2014. https://vimeo.com/85057000.
Pulse 4 Arts and Oullier, Olivier. “Interview: Pulse 4 Arts & Olivier Oullier talk about The
Neuromix and their projects for 2014”. La Nuit Magazine, January 9, 2014. Accessed
April 19, 2014. http://lanuitmagazine.com/2014/01/09/interview-pulse-4-arts-olivier-oul-
lier-talk-about-the-neuromix-and-their-projects-for-2014 (webpage discontinued).
Rao, Rajesh P.N., Stocco, Andrea, Bryan, Mathew, Sarma, Devapratim, Youngquist,
Tiffany M., Wu, Joseph, and Prat, Chantel. A Direct Brain-to-Brain Interface in
Humans. University of Washington Computer Science and Engineering. Technical
Report No. UW-CSE-14-07-01. July, 2014. Accessed October 22, 2014. http://homes.
cs.washington.edu/~rao/brain2brain/UW-CSE-14-07-01.PDF.pdf
Smith, Marquard, ed. STELARC THE MONOGRAPH. Cambridge: MIT Press, 2005.
Stelarc. “Fractal Flesh”. 1995. Accessed April 22, 2014. http://stelarc.va.com.au/projects/
fractal/ffvid.html.
———. “BIOGRAPHY”. Accessed October 20, 2014. http://stelarc.org/?catID=20239.
Swartz Center of Computational Neuroscience, University of California San Die-
go. “Introduction To Modern Brain-Computer Interface Design Wiki”. Last modied
June 10, 2014. Accessed September 24, 2014. http://sccn.ucsd.edu/wiki/Introduction_
To_Modern_Brain-Computer_Interface_Design.
Tanaka, Atau. “BioMuse to Bondage: Corporeal Interaction in Performance and Exhi-
bition”. In: Intimacy Across Visceral and Digital Performance, edited by Maria Chat-
zichristodoulou and Rachel Zerihan. Hampshire: Palgrave Macmillan, 2012.
Teplan, Michal. “FUNDAMENTALS OF EEG MEASUREMENT”. MEASUREMENT SCI-
ENCE REVIEW Volume 2, Section 2 (2002): 1-11.
The Erasers.About”. Accessed April 21, 2014. http://theerasers.org/about.
Williams, Adam John.Adam John Williams speaks on BBC Tech News about EEG-con-
trolled generative music”. Youtube video, 01:33. Posted by Adam John William”,
January 2, 2014. Accessed April 18, 2014. http://youtube.com/watch?v=CrY42RS9f0k.
Wolpaw, Jonathan R. and Wolpaw, Elizabeth W. “Brain-computer interfaces: some-
thing new under the sun”. In: Brain-computer interfaces: principles and practice,
edited by Jonathan R. Wolpaw and Elizabeth W. Wolpaw. Oxford: Oxford University
Press, 2012.