Article

Music for Solo Performer

... The potential for sensor-based interactive technologies to support new forms of body-focussed experience has been an ongoing strand of research in interactive arts practice, from its emergence as a field in the 1960s to the present day (Lucier, 1965; Rosenboom, 1976; Grau, 2003; Mori et al., 2003; Schiphorst, 2007). Body-focussed interfaces can provide a framework for directing our attention inwards, temporarily reframing our experience of (physiological) embodiment by magnifying otherwise hidden or overlooked aspects of our being (brainwaves, heart rate, muscle tone, etc.). ...
... While the body itself has been a recurrent motif in the 'visual arts' across many cultures and traditions, our interest in this paper focuses specifically on the development of perception itself as a subject for artistic exploration. Parallel to many of the early developments in the field of interactive art and performance (Lucier, 1965; Rosenboom, 1970), several visual artists (James Turrell, Robert Irwin, Dan Graham, Bruce Nauman, etc.) began exploring the mechanisms of visual/spatial perception and associated phenomenological questions via the creation of paintings and installations that confounded our normal experience of seeing and looking. ...
Article
Full-text available
This article describes interdisciplinary research undertaken by a group of artists, designers, curators and somatic bodywork practitioners to explore a human-centred approach to the potential of touch, movement, balance and proprioception as modalities for interactive art. Somatic bodywork methodologies such as the Feldenkrais method provide highly developed frameworks for attending to these very phenomena. Re-sensitising the body through somatic investigations allowed us as makers of body-focussed interactive art to translate the subtle shifts in attention and nuances of felt sensation into the audience experience of sensor-based interactive artworks. We describe the results of a yearlong project through our experience of the making of one specific experimental artwork, surging verticality. We reflect on the conditions for audience engagement and the profound connections we experienced between Feldenkrais somatic bodywork and art practice as modes of enquiry into the world.
... EEG and neurological signal usage has an established basis in live electronic music, with a great deal of foundational work created between 1965 and 1977. The performance methodology of Alvin Lucier's Music for Solo Performer (1965) [38] was influential in developing the present work. In Lucier's piece, as in my own, the performer seeks a state of stillness in body and mind. ...
Conference Paper
Full-text available
Real-time EEG and sensor data sonification have been researched extensively. New implementations of these technologies in the fields of live electronic music performance and sonic art generally focus on instrumentalising brain waves as an interface, or are concerned with controlling brain waves through biofeedback. Twice Stepped in Still Waters is instead concerned with using bio- and sensor feedback as a focused framework accommodating phenomenological listening for the performer. Furthermore, when presented before an audience, this activity results in circumstances for sound and listening (music). High-resolution consumer devices and open-source software have now rendered these technologies accessible to a wide pool of researchers from a variety of fields. The present work uses sonified real-time EEG and accelerometer data to situate phenomenological concerns of the performance situation within the field of artistic research. Discussion of the technical development and artistic context for the piece reveals a broad interdisciplinary scope. Twice Stepped in Still Waters may therefore be read as a case study offering an implementation and a performance methodology for sonification and focused listening afforded by these sensor technologies. Variations on the approaches described may be applied by other investigators working within or outside of the domain of artistic research.
... In 1948 Wiener established Cybernetics as the scientific study of control and communication in the animal and the machine [1], before Von Foerster's second-order cybernetics [2] integrated the observer into the system, introducing a form of reflexivity that was built upon by Beer, who saw Cybernetics as the science of effective organization [3]. 20th-century experimental music was shaped by developments in Cybernetics [4]. Louis and Bebe Barron, Herbert Brün, Alvin Lucier, and Roland Kayn created first-order cybernetic music with approaches that privileged homeostasis through corrective feedback, while Steve Reich, Brian Eno, Agostino Di Scipio and Alvin Lucier produced second-order cybernetic music [5][6][7]. Third-order cybernetics privileges the concept of Emergence, the appearance of new properties not present in the constituent parts of a system, and it is argued elsewhere that works by William Basinski fall into this category [8]. The current piece represents an effort to reconcile data-driven musical practices (with IoT data). ...
Conference Paper
Full-text available
Signal to Noise Loops v4 is a data-driven audiovisual piece. It is informed by principles from the fields of IoT, Sonification, Generative Music, and Cybernetics. The piece maps data from noise sensors placed around Dublin City to control a generative algorithm that creates the music. Data is mapped to control the sound synthesis algorithms that define the timbre of individual musical voices, and data is also mapped to control post-processing effects applied in the piece. The first movement consists of data recorded from noise level sensors around Dublin in March 2019. This was before the COVID-19 pandemic, and the bustling nature of the city is well represented. The second movement consists of data recorded in March 2020, when restrictive social distancing measures were introduced, culminating in a full lockdown on March 27th. This section is notably more sedate. The piece was created with Python, Ableton Live, Max MSP, Reaktor, and Processing.
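As an illustration of the sensor-data-to-music mapping strategy this abstract describes, here is a minimal Python sketch, assuming the python-osc package and a synthesis patch listening for OSC on port 7400. The OSC addresses and the 30-90 dBA normalization range are illustrative assumptions, not details of the actual piece.

```python
# A sketch of a noise-data-to-OSC mapping (illustrative; not the piece's code).
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)  # assumed synthesis patch port

def map_noise_reading(dba: float) -> None:
    """Scale one noise reading (dBA) into a timbre and an effect-send value."""
    x = min(max((dba - 30.0) / 60.0, 0.0), 1.0)    # assume a 30-90 dBA range
    client.send_message("/voice/timbre", x)        # louder city -> brighter voice
    client.send_message("/fx/reverb_send", 1 - x)  # quieter city -> more reverb

for reading in [62.8, 45.2, 38.1]:  # e.g. pre-lockdown vs lockdown samples
    map_noise_reading(reading)
```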
... This approach drives the choreography in Alvin Lucier's Music for Solo Performer. The canonical film of the composer performing his piece reveals performative actions that are known to produce large fluctuations in the alpha activity his system selectively sonifies [4]. In particular, Lucier opens and closes his eyes, accentuating the gesture with his hand. ...
Article
Vessels is a brain-body performance practice that combines flute improvisation with live, sonified brain and body data. This article describes the genesis of this performance practice, which co-evolved with the author's new brain-music interfacing and physiological data sonification methods. The author presents these novel interface designs and discusses how the affordances and constraints of these systems reflect onto her brain-body performance technique.
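The alpha-blocking gesture described in the excerpt above (closing the eyes raises 8-12 Hz alpha power) can be made concrete with a short sketch. This is a minimal illustration, not Lucier's analogue circuit: it band-passes an EEG trace around the alpha band, follows the envelope, and reports the samples where a trigger threshold is first crossed; the sampling rate and threshold are assumptions.

```python
# Alpha-band envelope triggering, in the spirit of the excerpt (illustrative).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 256  # assumed EEG sampling rate (Hz)

def alpha_envelope(eeg: np.ndarray) -> np.ndarray:
    """Instantaneous amplitude of the 8-12 Hz alpha band."""
    sos = butter(4, [8.0, 12.0], btype="bandpass", fs=FS, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, eeg)))

def alpha_triggers(eeg: np.ndarray, threshold: float) -> np.ndarray:
    """Sample indices where the alpha envelope first crosses the threshold."""
    above = alpha_envelope(eeg) > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Demo: noise, then a strong 10 Hz component, mimicking eyes closing at t = 2 s.
t = np.arange(4 * FS) / FS
eeg = np.random.randn(t.size) * 5.0
eeg[2 * FS:] += 30.0 * np.sin(2 * np.pi * 10.0 * t[2 * FS:])
print(alpha_triggers(eeg, threshold=15.0))  # first trigger near sample 512
```

In a performance setting each trigger would be routed to a loudspeaker or percussion actuator rather than printed.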
... The Leap Motion sensor [7] and electroencephalography (EEG) brainwave sensors [8] are both highly relevant to electronic art music practice. In the scenario described here, the sensors are not mapped directly to control data but are interpreted by machine learning (ML) algorithms, resulting in a metainstrument [9] that provides a conductor-like control layer between the performer and the electronic instrument. ...
Conference Paper
Spatial composition represents a key aspect of contemporary acousmatic and computer music. The history of spatial composition practice has produced many different approaches to composition and performance tools, instruments and interfaces. Furthermore, current developments and the increasing availability of virtual/augmented reality systems (XR) extend the possibilities in terms of sound rendering engines as well as environments and tools for creation and experience. In contrast to systems controlling parameters of simulated sound fields and virtual sound sources, we present an approach to XR-based, real-time, body-controlled (motion and biofeedback sensors) sound field manipulation in the spatial domain. The approach can be applied not only to simulated sound fields but also to recorded ones, and reproduced with various spatial rendering procedures.
... However, it cannot be assumed that this approach could be used for the development of musical instruments, since the latency of more than 300 ms would be too high. There are other examples and previous BCIs that have contributed to the creation of music interfaces, starting with Alvin Lucier's experimental piece "Music for Solo Performer", where the brain waves of a performer were used to control percussion instruments [9,10]. Pinegger et al. focused their work on music-related topics and developed a system that utilizes the P300 pattern to compose music [11]. ...
Preprint
Full-text available
[Comment:] This is the preprint version of a reviewed and accepted ICMC 2020 conference paper. The conference has been rescheduled to 2021 due to COVID-19 and the paper will be published then. [Abstract:] In recent years Electroencephalography (EEG) technology has evolved to such an extent that controlling software with the bare mind is no longer impossible. In addition, with the market introduction of various commercial devices, even private households can now afford to purchase a (simplified) EEG device. This unlocks new prospects for the development of user interfaces. People with severe physical disabilities especially could benefit, both in easing common difficulties (e.g. in terms of mobility or communication) and for specific purposes such as making music. The goal of our work is to evaluate the applicability of a cheap, commercial EEG headset to be used as an input device for new musical interfaces (NIMEs). Our findings demonstrate that there are at least 7 input actions which can be unambiguously differentiated by machine learning and can be mapped to musical notes in order to play basic melodies.
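A minimal sketch of the classify-then-map pipeline this abstract reports, under stated assumptions: random vectors stand in for the headset's features, the classifier choice (a random forest) and the action-to-note table are illustrative, and only the overall shape (seven distinguishable actions mapped to notes) comes from the paper.

```python
# Sketch: classify 7 EEG input actions and map each to a MIDI note (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_ACTIONS, N_FEATURES = 7, 16
rng = np.random.default_rng(0)
X = rng.normal(size=(700, N_FEATURES))    # placeholder for real headset features
y = rng.integers(0, N_ACTIONS, size=700)  # placeholder action labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

ACTION_TO_NOTE = {i: 60 + i for i in range(N_ACTIONS)}  # C4 upward, illustrative
action = int(clf.predict(rng.normal(size=(1, N_FEATURES)))[0])
print("play MIDI note", ACTION_TO_NOTE[action])
```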
... Certain aspects of my arrangement recall Alvin Lucier's work Music for Solo Performer, for enormously amplified brain waves and percussion (Lucier, 1965). Lucier transmitted an array of his brain's alpha waves via electrodes into loudspeakers and onto percussion instruments distributed in a room. ...
Thesis
Full-text available
Portfolio of works and thesis in composition
... In 1934, Adrian and Matthews [12] were the first researchers to monitor their own EEG with sound, replicating the earliest EEG descriptions of the posterior dominant rhythm (PDR). However, the first brain composition was performed in 1965 by the composer and experimental musician Alvin Lucier, who controlled percussion instruments via the strength of the EEG PDR [11,13]. Following Lucier's example, David Rosenboom created a music piece in 1970 in which EEG signals of several participants were processed through individual electronic circuits, generating a visual and auditory performance for the Automation House in New York [14]. ...
Chapter
Brain state control has been well established in the area of brain-computer interfaces over the last decades, in which active applications allow controlling external devices consciously. The purpose of this study was to develop a real-time graphical sound representation system based on an interaction design that allows navigating a bidimensional plane through the motor imagery cognitive task. This representation was developed using the OpenBCI EEG acquisition system in order to record the necessary information, which was sent to and processed in Max/MSP software. The system operates under a metaphorical Graphical User Interface (GUI) programmed in Processing. The system was tested through an experiment under controlled conditions in which six professional musicians participated. From the experimental results, it was found that all participants achieved different control levels associated with their static and dynamic response, with averages of 26.73% and 73.27% respectively.
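To make the bidimensional-control idea concrete, here is a minimal Python sketch, assuming the python-osc package and a patch listening on port 8000. The OSC address, the port, and the use of left/right mu-band power as the two control features are illustrative assumptions, not the study's published design.

```python
# Sketch: two motor-imagery features -> (x, y) cursor position over OSC.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)  # assumed Max/MSP patch port

def send_position(mu_left: float, mu_right: float) -> None:
    """mu_left/mu_right: normalized 0-1 mu-band power over each motor cortex."""
    # Motor imagery suppresses mu power (event-related desynchronization),
    # so invert the normalized values to move toward the imagined side.
    x = 1.0 - min(max(mu_left, 0.0), 1.0)
    y = 1.0 - min(max(mu_right, 0.0), 1.0)
    client.send_message("/cursor/xy", [x, y])  # illustrative OSC address

send_position(0.3, 0.8)  # strong left-hand imagery, weak right-hand imagery
```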
... run on a laptop running a Linux operating system is used. Pd patches were designed to receive Open Sound Control (OSC) data live-streamed from performers' Muse headsets via Bluetooth and the museIO software. ...
Conference Paper
Music for Various Groups of Performers (After Lucier) (MfVGoP) is an experimental audio-visual improvised group performance employing sonified electroencephalographic (EEG) signals and computer music. The data generated by performers' brain waves are used as control parameters for custom-built audio software using the Pure Data (Pd) environment. The piece explores and investigates group and individual dynamics within a performance context. The title references composer Alvin Lucier's 1965 work Music for Solo Performer, in which the composer attempted to control electro-acoustic instruments via EEG signals produced by his brain. In contrast, Music for Various Groups of Performers (After Lucier) investigates the roles of individuals and small groups through the context of EEG-monitored performance, considering the individual performers, their relationships with each other and the audience, as well as their interaction with the technology.
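The receiving side of the Muse-to-Pd pipeline sketched in the excerpt above can be illustrated in a few lines of Python, assuming the python-osc package in place of the authors' Pd patches. The /muse/elements/alpha_absolute address and port 5000 are assumptions about the museIO configuration; verify them against your own headset and firmware.

```python
# Sketch of the receiving side: OSC from a Muse headset into Python.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_alpha(address: str, *values: float) -> None:
    # One absolute alpha value per Muse sensor; average them into a single
    # control parameter, roughly as a Pd patch might before mapping to sound.
    print(address, sum(values) / len(values))

dispatcher = Dispatcher()
# Address commonly emitted by museIO for absolute alpha band power
# (an assumption; check your headset/firmware documentation).
dispatcher.map("/muse/elements/alpha_absolute", on_alpha)
BlockingOSCUDPServer(("0.0.0.0", 5000), dispatcher).serve_forever()
```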
... Conversion of EEG signals not just to sound but to musical modalities followed later: in 1965, the composer and experimental musician Lucier (1965) created a performance involving control of percussion instruments via strength of EEG PDR, with the encouragement and participation of composer John Cage. However, they experienced some difficulty in achieving good control, and to overcome this they employed a second performer to manually adjust the gain of the EEG output (Rosenboom, 1975). ...
Article
Full-text available
A novel musical instrument and biofeedback device was created using electroencephalogram (EEG) posterior dominant rhythm (PDR) or mu rhythm to control a synthesized piano, which we call the Encephalophone. Alpha-frequency (8–12 Hz) signal power from PDR in the visual cortex or from mu rhythm in the motor cortex was used to create a power scale which was then converted into a musical scale, which could be manipulated by the individual in real time. Subjects could then generate different notes of the scale by activation (event-related synchronization) or de-activation (event-related desynchronization) of the PDR or mu rhythms in visual or motor cortex, respectively. Fifteen novice normal subjects were tested in their ability to hit target notes presented within a 5-min trial period. All 15 subjects were able to perform more accurately (average of 27.4 hits, 67.1% accuracy for visual cortex/PDR signaling; average of 20.6 hits, 57.1% accuracy for mu signaling) than a random note generation (19.03% accuracy). Moreover, PDR control was significantly more accurate than mu control. This shows that novice healthy individuals can control music with better accuracy than random, with no prior training on the device, and that PDR control is more accurate than mu control for these novices. Individuals with more years of musical training showed a moderate positive correlation with more PDR accuracy, but not mu accuracy. The Encephalophone may have potential applications both as a novel musical instrument without requiring movement, as well as a potential therapeutic biofeedback device for patients suffering from motor deficits (e.g., amyotrophic lateral sclerosis (ALS), brainstem stroke, traumatic amputation).
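The power-scale-to-musical-scale conversion this abstract describes can be illustrated with a short, hedged sketch: normalized alpha-band power is quantized onto a C-major scale of MIDI notes. The scale choice, the normalization range, and the function name are illustrative assumptions, not the Encephalophone's published mapping.

```python
# Sketch: quantize normalized alpha power onto a C-major scale (illustrative).
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI notes C4..C5

def power_to_note(power: float, lo: float, hi: float) -> int:
    """Map alpha-band power in [lo, hi] onto one note of the scale."""
    x = min(max((power - lo) / (hi - lo), 0.0), 1.0)
    return C_MAJOR[min(int(x * len(C_MAJOR)), len(C_MAJOR) - 1)]

assert power_to_note(0.0, 0.0, 1.0) == 60   # weakest alpha -> C4
assert power_to_note(0.99, 0.0, 1.0) == 72  # strongest alpha -> C5
```

Event-related synchronization then raises the note and desynchronization lowers it, matching the control scheme the abstract reports.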
... The relationship between neuroscience and music dates even further back: avant-garde pioneers such as Yoko Ono and John Lennon experimented with EEGs and music from the 1960s onwards. Alvin Lucier's "Music for Solo Performer" (Lucier 1965), performed at Brandeis University, made use of a headset as a wearable device to interpret brain waves and generate sound. Richard Teitelbaum's Musica Elettronica Viva (MEV) used EEG and EKG signals to manipulate electronic synthesizers (Teitelbaum: Spacecraft, 1967). ...
Conference Paper
Full-text available
Transmission is both a telepresence performance and a research project. As a real-time visualization tool, Transmission creates alternate representations of neural activity through sound and vision, investigating the effect of interaction on human consciousness. As a sonification project, it creates an immersive experience for two users: a soundscape created by the human mind and the influence of kinetic interaction. An electroencephalographic (EEG) headset interprets a user's neural activity. An Open Sound Control (OSC) script then translates this data into a real-time particle stream and sound environment at one end. A second user in a remote location modifies this stream in real time through body movement. Together they become a telematic musical interface--communicating through visual and sonic representation of their interactions.
... Artistic engagement with neuroscience and music dates even further back: avant-garde artists and pioneers such as Yoko Ono and John Lennon experimented with EEGs and music as early as the 1960s. Alvin Lucier performed Music for Solo Performer at Brandeis University in 1965 [18]. By making use of the headset as a wearable device, Lucier interpreted brain waves to generate soundscapes. ...
Conference Paper
Full-text available
Transmission is a telematic performance which allows for real-time sonification and visualisation of brain waves, and looks at the effect of interaction on human consciousness. Users are equipped with an Emotiv EEG headset which enables them to perceive sonification and visualisation of their brain's processing in real time. This visualisation is modified through another user's movement in a remote location: the user's consciousness becomes a new musical and visual interface played by a remote person. Transmission is both a research project on telepresence and a participatory audience tool.
... In the past 25 years, composers like Alvin Lucier, Richard Teitelbaum, myself and others have produced major works of music with EEGs and other bioelectronic signals. Lucier's 1965 work Music for Solo Performer achieved a direct mapping of a soloist's alpha rhythms onto the orchestrational palette of a percussion ensemble [11][12][13]. Greatly amplified alpha signals were used to activate, either acoustically or mechanically, an array of otherwise performerless percussion instruments. This produced the startling effect of a percussion ensemble seeming to activate itself, almost invisibly, but somehow following activities inside the solo performer's mind. ...
Book
Full-text available
The purpose of this monograph is severalfold: (1) to give a detailed description of some work done in the mid- to late-1970s in which I was able to achieve the spontaneous generation of formal musical architectures with a computer music system by using a detailed analysis of evoked responses to features in those architectures recorded from a performer's brain; (2) to provide an overview of some historical events related to the development of artistic works that are in some way responsive to bioelectrically derived signals; (3) to describe briefly the emergence of the biofeedback paradigm and to discuss biofeedback modeling; (4) to survey accumulated knowledge regarding interpretation of electroencephalographic phenomena with particular emphasis on event-related potentials (ERPs) and their relation to aspects of selective attention and cognitive information processing; (5) to present a speculative model for the general interpretation of electroencephalographic waveforms; (6) to discuss some inferences and speculations relating these phenomena to musical experience; (7) to provide an assessment of some methods and techniques that have been applied to realizing works of art with these phenomena; (8) to describe some specific algorithms for generating self-organizing musical structures in a feedback system that relates a limited model of perception to the occurrence of event-related potentials in a performer's brain; and (9) to discuss the potential of new and emerging technologies and conceptual paradigms for the future evolution of this work. Finally, an actual score containing a conceptual scheme for a biofeedback work involving electroencephalographic phenomena and electronic orchestrations is provided in an appendix to stimulate further thinking and ideas for applications in the arts. The writing is addressed to those with an interdisciplinary interest in the arts (particularly music) and the sciences (particularly those of the brain, psychology and perception, and the study of self-organizing systems). However, readers whose backgrounds are in the arts or sciences alone, or even other areas such as cognition, philosophy, computer science or musical instrument design, are encouraged to read on as well. Many references are provided with which the reader may enhance her or his knowledge in a particular sub-discipline. Those who may find some of the technical descriptions difficult should first skim through the entire document and then return to individual sections for further study. It is hoped that the ideas presented herein may contribute in some way toward increasing our breadth of understanding concerning dynamic processes in the arts and sciences.
... Measured electrical activity in the brain has been used for artistic performances since the 1960s. Music performances were given by Alvin Lucier in the USA in 1965 [5], Richard Teitelbaum in Italy in 1967, and Pierre Henry and Roger LaFosse in France in 1971. Nina Sobell introduced a Brainwave Drawing Game in the early 1970s. ...
Article
Artistic BCI applications offer a new modality for humans to express themselves creatively. In this survey we reviewed the available literature on artistic BCIs by classifying four types of user control afforded by the available applications: selective control, passive control, direct control and collaborative control. A brief overview of the history of artistic BCIs is presented, followed by examples of current artistic BCI applications in each defined sector of control. We questioned whether or not creative control affects the users’ sense of enjoyment or satisfaction. Finally, we made suggestions for the future of artistic BCI research to question the role that control plays in user satisfaction and entertainment.
... Artistic engagement with neuroscience and music dates even further back: avant-garde artists and pioneers such as Yoko Ono and John Lennon experimented with EEGs and music as early as the 1960s. Alvin Lucier performed Music for Solo Performer at Brandeis University in 1965 [18]. By making use of the headset as a wearable device, Lucier interpreted brain waves to generate soundscapes. ...
Article
Full-text available
Transmission is both a telepresence performance and a research project. As a real-time visualization tool, Transmission creates alternate representations of neural activity through sound and vision, investigating the effect of interaction on human consciousness. As a sonification project, it creates an immersive experience for two users: a soundscape created by the human mind and the influence of kinetic interaction. An electroencephalographic (EEG) headset interprets a user’s neural activity. An Open Sound Control (OSC) script then translates this data into a real-time particle stream and sound environment at one end. A second user in a remote location modifies this stream in real time through body movement. Together they become a telematic musical interface—communicating through visual and sonic representation of their interactions.
... The use of physiological sensing and biofeedback technologies is by no means new, and artists started using these technologies as early as 1965 (Lucier 1965; Rosenboom 1976; Frieling 2004). Bioelectrical sensing technologies introduced during the early 1960s provided a new way of experiencing the living body in ways that had not previously been available, and heralded a broader movement towards a more systems-oriented analysis of natural and social phenomena. ...
... The EEG has also been a source of creative inspiration for music composers. As far back as 1964, Alvin Lucier coupled EEG electrodes from his scalp through loudspeakers to a set of cymbals, gongs, and drums to produce the percussion performance Music for Solo Performer [14]. In 1977 David Rosenboom used components of the EEG he related to selective attention to drive an electronic music system on his recording On Being Invisible [15]. ...
Article
Full-text available
Listening to the Mind Listening (LML) explored whether sonifications can be more than just noise in terms of perceived information and musical experience. The project generated an unprecedented body of 27 multichannel sonifications of the same dataset by 38 composers. The design of each sonification was explicitly documented, and there are 88 analytical reviews of the works. The public concert presenting 10 of these sonifications at the Sydney Opera House Studio drew a capacity audience. This paper presents an analysis of the reviews, the designs and the correspondences between timelines of these works.
... Biosignals such as the EEG and the electromyogram (EMG), a measure of muscle activity, can also be used to generate music. Using EEG as a medium for musical expression was first demonstrated in 1965 by the composer Alvin Lucier through his recital "Music for Solo Performer" [29]. Here, manipulation of alpha waves was utilized to resonate percussion instruments. ...
Conference Paper
Full-text available
In this paper, we describe an intelligent graphical user interface (IGUI) and a User Application Interface (UAI) tailored to Brain Computer Interface (BCI) interaction, designed for people with severe communication needs. The IGUI has three components: a two-way interface for communication with BCI2000 concerning user events and event handling; an interface to user applications concerning the passing of user commands and associated device identifiers, and the receiving of notification of device status; and an interface to an extensible mark-up language (XML) file containing menu content definitions. The interface has achieved control of domotic applications. The architecture, however, permits control of more complex 'smart' environments and could be extended further for entertainment by interacting with media devices. Using components of the electroencephalogram (EEG) to mediate expression is also technically possible, but is much more speculative, and without proven efficacy. The IGUI-BCI approach described could potentially find wider use in the augmentation of the general population, to provide alternative computer interaction, an additional control channel and experimental leisure activities.
Article
Full-text available
This article depicts a bibliometric analysis done through visualization mechanisms and interpretation of bibliometric metadata on the research field of Brain-Computer Interface and Music, or Brain-Computer Music Interface (BCMI). Citation, co-citation, co-authorship, and keyword co-occurrence analyses were carried out in this work in order to identify the intellectual structure, research trends, organizations involved, and methodological structure of the research field. The bibliometric metadata was visualized through the VOSviewer and SciMAT software. This study also includes an analysis of 227 papers published between 2005 and 2021, comprising research articles, review articles, and proceedings papers. The results of this work demonstrate the growth and legitimization of the research field, and the impact of the interdisciplinary work required in this area.
Article
Two main types of human perception are seeing and hearing. They play a central role in everyday life, scientific research, and also the fine arts. Especially in the visual arts, linking the auditory and the visual realms is not uncommon. However, when it comes to mapping the visual domain into the audio domain, the possibilities are less well explored. Therefore, in this paper, we conceptualize so-called cruxes for sonification of the visual domain. These are hierarchically structured transformations of the main elements in both domains, starting at the physics level, then extending to the physiological level, and finally to the artistic level. They offer a strong basis for creative sonification efforts and are intended for computer-based (generative) art production. In brief, this paper shows that sonification has strong potential for artistic sonic creations, where a scientific basis can play an important role in linking the visual, acoustic, and physiological domains.
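A minimal, physics-level example of such a crux: map the brightness profile of an image column onto the frequencies and amplitudes of a bank of sine tones, one short grain per column. This is a generic illustration of the idea, not the article's own transformation scheme; the frequency range and grain length are assumptions.

```python
# Sketch: physics-level crux mapping image-column brightness to sine tones.
import numpy as np

SR = 44100  # audio sample rate (Hz)

def sonify_column(brightness: np.ndarray, dur: float = 0.05) -> np.ndarray:
    """brightness: values in [0, 1], top of the column first."""
    t = np.arange(int(SR * dur)) / SR
    freqs = np.geomspace(2000.0, 200.0, brightness.size)  # top row -> high pitch
    grain = sum(b * np.sin(2 * np.pi * f * t) for b, f in zip(brightness, freqs))
    return grain / max(np.max(np.abs(grain)), 1e-9)

# Scan a random 16-row "image" left to right, one grain per column.
audio = np.concatenate([sonify_column(np.random.rand(16)) for _ in range(10)])
```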
Article
Noor: A Brain Opera is the first fully interactive, immersive brainwave opera, in which a performer wearing a wireless EEG brainwave headset touches, gazes at and walks around audience members in a 360-degree theater while a story is narrated. Her measured emotional states trigger videos, sound, and a pre-recorded libretto, and her emotions are displayed in real time as colored bubbles. The opera rhetorically asks: "Is there a place in human consciousness where surveillance cannot go?" This article discusses the rationale and implementation of the brainwave opera.
Thesis
Full-text available
Explorative play affects the root of our being, as it is generative. Often experienced as a thrill, explorative play gradually lures its players beyond their mental or physical limits. While doing so, it affects players well before they can perform intentional actions. To understand explorative play therefore means to understand what happens before intention sets in. But this is problematic: by the time this becomes experienceable it is already clouded by habit and memory. However, thought processes outlined in Deleuze’s philosophy of difference reveal important clues as to how habitual thinking patterns may be exceeded, and why explorative play might cause thrilling and vertiginous experiences: when our awareness of the present is intensified, the potential to disturb habitual patterns arises; within this there is a chance to arrive at an ‘intuitive understanding’ of events where intensities express themselves as non-intentional movement or poetic language. This notion was investigated through generative art practice. An experimental setting was prepared that allowed for explorative play with a complex system – a biofeedback instrument that sonified its wearer’s physiological data in real-time. This instrument was explored in performances as well as participative action research sessions. The insight emerging from the performances was that introspection and stillness can enhance practice. The connections to Eastern practices this suggests were followed up and, by investigating the role of stillness in performance practices like Butoh, methods that may radicalise a biofeedback performance came to light. Extending these to biofeedback composition then made listening a central focus of this research and consequently, listeners’ responses to sonified biofeedback, the disruption of habitual musical expectations and increased immersed listening became paramount aspects of the practice. Conversely, the insight emerging from the participative sessions was that explorative playing with a complex system can allow for a more intuitive understanding of the generative because the emerging play experiences can be internally transformative; producing new ideas and forms, for instance poetic language or improvised movement. Thus overall, the research underlined the benefits of a greater propagation of explorative play.
Conference Paper
Full-text available
This paper refers to the theories of Extended Mind (EM) and enactivism as cognitive frameworks to understand contemporary approaches to art practice. The essay is structured in four sections and offers examples from existing works of artists across a range of media, with a focus on the computational arts. Initially, we compare the two models of cognition by highlighting differences and similarities, arguing that the epistemic value of each approach is observer-dependent. Next, we explain why art can be considered a form of language. Then, we draw on the concept of "assemblage" as a mode of thinking (Dewsbury, 2011) expressed in Deleuze and Guattari (1987) and more recently in Hayles (2017) by proposing the idea of the "artistic assemblage". In the end, we underline the validity of both cognitive models for understanding the system of relations which allows the emergence of the "artistic assemblage".
Chapter
Full-text available
Unmanned aircraft system (UAS) sensor operators must maintain performance while tasked with multiple operations and objectives, yet are often subject to boredom and the consequences of the prevalence effect during area scanning and target identification tasks. Adapting training scenarios to accurately reflect real-world scenarios can help prepare sensor operators for their duty. Furthermore, the integration of objective measures of cognitive workload and performance, through evaluation of functional near-infrared spectroscopy (fNIRS) as a non-invasive tool for monitoring higher-level cognitive functioning, can allow for quantitative assessment of human performance. This study sought to advance previous work on the assessment of cognitive and task performance in UAS sensor operators, evaluating expertise development and responsive changes in mental workload.
Chapter
Full-text available
Music is a natural partner to human-computer interaction, offering tasks and use cases for novel forms of interaction. The richness of the relationship between a performer and their instrument in expressive musical performance can provide valuable insight to human-computer interaction (HCI) researchers interested in applying these forms of deep interaction to other fields. Despite the longstanding connection between music and HCI, it is not an automatic one, and its history arguably points to as many differences as it does overlaps. Music research and HCI research both encompass broad issues, and utilize a wide range of methods. In this chapter I discuss how the concept of embodied interaction can be one way to think about music interaction. I propose how the three “paradigms” of HCI and three design accounts from the interaction design literature can serve as a lens through which to consider types of music HCI. I use this conceptual framework to discuss three different musical projects—Haptic Wave, Form Follows Sound, and BioMuse.
Article
I have taken the ambiguous psychology of Kinaesthetic Empathy and the relatively recent ideas that form Extended Mind Theory and re-contextualised them so they are relevant to sound-based live performance. I then used these psychologies as guidance to investigate how we interact with discreet and invasive instruments, by analysing specific examples of performance, sound installation and composition. I have defined 'invasive and discreet' using examples of how these instruments are presented as objects in the context of performance: for example, the way in which an object or system can physically invade, and make use of, the performance space when employing technology and physical sculpture, or how an object or system can interact with the performer through tactility and psychological presence. In the process of defining discreet and invasive instruments, I noted that there is no binary differentiation, because an instrument's denotation is dependent on context, sound palette and how it is interpreted as an object for creative expression by the performer. I concluded that the physicality of invasive instruments gives strength to the presentation of ideas in live performance, in opposition to discreet instruments, which I argue are better suited to studio production or acousmatic performance.
Conference Paper
'Kontraktion' is an embodied musical interface using biosignals to create an immersive sonic performance setup. It explores the energetic coupling between digital synthesis and musical expression by reducing the interface to an embodied instrument, thereby tightening the connection between intention and sound. By using the setup as a biofeedback system, the user explores his own subconscious gestures with a heightened sensitivity. Even subtle, usually unnoticed neural impulses are brought to conscious awareness by sensing muscle contractions with an armband and projecting them outward into space as sound in real time. The user's gestural expressions are embodied in sound and allow for an expressive energetic coupling between the user's body and a virtual agent. Utilizing this newly acquired awareness of his body, the user can take control of the sound and perform with it, using the meta-gestures of his body as an embodied interface. The body itself is transformed into a musical instrument, controlled by neurological impulses and sonified by a virtual interpreter.
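The contraction-to-sound coupling described above reduces, at its simplest, to an envelope follower on the rectified muscle signal. Here is a minimal Python sketch under stated assumptions: the armband sample rate, the smoothing time constant, and the synthetic EMG burst are all illustrative, not details of the Kontraktion system.

```python
# Sketch: rectified-and-smoothed EMG as a contraction envelope driving gain.
import numpy as np

FS = 200  # assumed armband sample rate (Hz)

def emg_envelope(emg: np.ndarray, tau: float = 0.1) -> np.ndarray:
    """One-pole smoothed rectified EMG: a simple muscle-contraction envelope."""
    alpha = 1.0 - np.exp(-1.0 / (tau * FS))
    env = np.zeros(emg.size)
    level = 0.0
    for i, x in enumerate(np.abs(emg)):
        level += alpha * (x - level)
        env[i] = level
    return env

# A synthetic contraction burst between two rest periods.
burst = np.concatenate([np.zeros(100), 0.8 * np.random.randn(200), np.zeros(100)])
gain = emg_envelope(burst)  # scale a synthesizer's amplitude with this signal
print(gain[150] > gain[50])  # contraction raises the gain (True)
```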
Chapter
Full-text available
Despite it being more than twenty years since the launch of an international conference series dedicated to its study, there is still much debate over what sonification really is, and especially as regards its relationship to music. A layman’s definition of sonification might be that it is the use of non-speech audio to communicate data, the aural counterpart to visualization. Many researchers have claimed musicality for their sonifications, generally when using data-to-pitch mappings. In 2006 Bennett Hogg and I (Vickers and Hogg 2006) made a rather provocative assertion that bound music and sonification together (q.v., and further developed in Vickers (2006)), not so much to claim an ontological truth but to foreground a debate that has simmered since the first International Conference on Auditory Display (ICAD) in 1992. Since then there has been an increasing number of musical and sonic art compositions driven by the data of natural phenomena, some of which are claimed by their authors to be sonifications. This chapter looks at some of the issues surrounding the relationship between sonification and music and at developments that have the potential to draw sonification and the sonic arts into closer union.
Article
The surfeit of new technologies, research and methods about the human brain raise concerns around issues of privacy, surveillance, autonomy and consciousness. Do our electroencephalograph, functional magnetic resonance imaging and other biometric data really contain the essence of who we are and what we think? How will this data be used for security identification, thought reeducation, manipulating memories and identifiers called ‘brainotypes’ or ‘brainfingerprints’? If cognitive processes can be monitored and harvested, how do we prepare for this new frontier of surveillance? Artists and musicians have been experimenting with brainwaves since the 1960s. Currently, new types of consumer-grade brain sensors are used for artistic exploration. The idea of the cyborg has given way to the age of human–machine augmentation, with the brain as the next site-specific performative space. What kind of dramatic structures, interventions, methods and contexts will work when the brain itself drives the performance experience? Though no one is entirely sure, this paper explores past and current forays into brain inspired art and issues of mind, surveillance and privacy.
Article
Full-text available
The Listening to the Mind Listening concert was a practice-led research project to explore the idea that we might hear information patterns in the sonified recordings of brain activity, and to investigate the aesthetics of sonifications of the same data set by different composers. This world-first concert of data sonifications was staged at the Sydney Opera House Studio on the evening of 6 July 2004 to a capacity audience of more than 350 neuroscientists, composers, sonification researchers, new media artists and a general public curious to hear what the human brain could sound like. The concert generated 30 sonifications of the same data set, explicit descriptions of the techniques each composer used to map the data into sound, and 90 reviews of these sonifications. This paper presents the motivations for the project, overviews related work, describes the sonification criteria and the review process, and presents and discusses outcomes from the concert.