Conference Paper · PDF available

LINEAR (LIVE-GENERATED INTERFACE AND NOTATION ENVIRONMENT IN AUGMENTED REALITY)

Authors:
  • Xi'an Jiaotong-Liverpool University

Abstract

Recent developments in Augmented Reality (AR) technology are opening up new modes of representation and interaction with virtual objects; at the same time, increases in the processing power of portable devices are enabling the wide diffusion of applications that until recently were usable only in very specific settings (such as motion-capture labs). This study describes an AR environment created for musical performance: LINEAR (Live-generated Interface and Notation Environment in Augmented Reality), in which the author explored some of the perspectives made possible by the current state of AR technology applied to music. In LINEAR, a dedicated performer using an AR iPhone app can create virtual objects (rendered in real time and superimposed on the real environment) according to the movement of the device; these objects serve both as virtual interfaces for electronics (sending OSC messages to Max/MSP on a computer) and as forms of live-generated graphic notation. LINEAR allows, with some limitations, the representation of gestural movements with exact 3D placement in space: we can now have an analog notation of gestures, rather than a symbolic one. For the iPhone performer, the act of notation corresponds to the notated act. The resulting representations can also be approached as animated graphic notation by other performers (the iPhone screen is mirrored to a projector). The multiple perspectives on the notation and the possibilities of interaction with virtual bodies allow a high level of flexibility, while introducing almost unprecedented resources and opening up a very rich scenario.
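The interface function described above, sending OSC messages from the AR app to Max/MSP, can be illustrated with a minimal OSC encoder. This is a sketch only: the address pattern and float arguments below are hypothetical, since the abstract does not specify LINEAR's actual message layout.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad a byte string to a multiple of 4 bytes, per OSC."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message whose arguments are all float32."""
    msg = osc_pad(address.encode())                      # address pattern
    msg += osc_pad(("," + "f" * len(floats)).encode())   # type-tag string
    for f in floats:
        msg += struct.pack(">f", f)                      # big-endian float32
    return msg

# Hypothetical "virtual object hit" message carrying a 3D position:
packet = osc_message("/linear/object/hit", 0.2, 1.1, -0.4)
```

The resulting bytes can be sent over a plain UDP socket (`socket.sendto(...)`) to a `udpreceive` object in Max/MSP listening on the matching port.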
... [19]. Other attempts at AR include using markers and webcams with external screens [20] [21]. ...
... LINEAR [16] is an AR framework for improvisation that allows the creation of a notation-interface hybrid in real time: a performer, using an iPhone, can generate virtual bodies along the trajectory of his/her gestures. Those bodies function both as interface (they are linked to specific samples) and as notation (to generate the sound of the original gesture, the performer needs to repeat the same gesture). ...
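The interface side of this hybrid, a virtual body triggering its linked sample when the performer's gesture passes through it, reduces to a proximity test against the stored trajectory points. A naive sketch follows; the hit radius and the data layout are assumptions, not LINEAR's actual implementation.

```python
import math

def nearest_body(bodies, pos, radius=0.05):
    """Return the index of the first stored virtual body within `radius`
    metres of the device position `pos`, or None if no body is hit.
    `bodies` is a list of (x, y, z) world-space points."""
    for i, b in enumerate(bodies):
        if math.dist(b, pos) <= radius:
            return i
    return None

# Bodies laid down along an earlier gesture; the device now retraces it:
trajectory = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
hit = nearest_body(trajectory, (1.0, 0.02, 0.0))  # close to the second body
```

In a real app the returned index would be mapped to the sample associated with that body, e.g. by sending an OSC message to the electronics.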
Conference Paper
Full-text available
Augmented Reality (AR) is becoming, year by year, an established and well-known technological resource. Experimentation and innovative applications are produced in different areas. In music, such technology is already used in the fields of education and performance. However, the use of AR features as compositional resources has yet to be deeply explored and leaves room for innovative research. In particular, the possibility of notating gesture in space, instead of on paper or screen, has been only superficially studied. This research focuses on the development of a new prescriptive notation system for gestures that represents extended techniques requiring direct contact between the performer and the vibrating body. Such a system has been implemented in the composition Portale, for small tam-tam, AR environment and live-electronics.
... Numerous applications have also been developed for live performance. LINEAR [13] is a tool for generating interface-notation hybrids in real time (the trajectory of the gesture to perform is indicated by virtual bodies that produce the intended resulting sounds when "hit" by the performer). It is one of the first examples of accurate space sampling, where the exact position of interaction can be easily determined and used as a technical and expressive resource. ...
Conference Paper
Full-text available
Augmented instruments have been a widely explored research topic since the late 80s. The possibility of using sensors to provide input for sound processing/synthesis units has let composers and sound artists open up new avenues for experimentation. Augmented Reality, by rendering virtual objects in the real world and by making those objects interactive (via some sensor-generated input), provides a new frame for this research field. In fact, the 3D visual feedback, delivering a precise indication of the spatial configuration/function of each virtual interface, can make the instrumental augmentation process more intuitive for the interpreter and more resourceful for a composer/creator: interfaces can change their behavior over time, and can be reshaped, activated or deactivated. Each of these modifications can be made obvious to the performer through visual feedback strategies. In addition, it is possible to accurately sample space and to map it with differentiated functions. Augmented interfaces can also be considered a visual expressive tool for the audience and designed accordingly: the performer's point of view (or another point of view provided by an external camera) can be mirrored to a projector. This article shows some examples of different designs of AR piano augmentation from the composition Studi sulla realtà nuova.
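The idea of sampling space and mapping it with differentiated, switchable functions can be sketched as a set of named 3D zones, each with a handler and an active flag. The structure and names below are illustrative, not the composition's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class ARInterface:
    """One virtual interface zone: an axis-aligned box mapped to a handler.
    Zones can be deactivated or reshaped at runtime, mirroring the paper's
    idea of interfaces that change behavior over time."""
    lo: Tuple[float, float, float]
    hi: Tuple[float, float, float]
    handler: Callable[[], str]
    active: bool = True

    def contains(self, p) -> bool:
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

def dispatch(interfaces, p):
    """Run the handler of the first active zone containing point `p`."""
    for itf in interfaces:
        if itf.active and itf.contains(p):
            return itf.handler()
    return None
```

Deactivating a zone (`itf.active = False`) or moving its bounds immediately changes what a given region of space does, which is the kind of runtime reconfiguration the abstract describes.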
... LINEAR [9], in Figure 3, constitutes one of the first experiments making use of AR as a resource for live-generated notation and musical performance in the context of improvisation. The application allows one to draw perdurable gestures in the air. ...
Conference Paper
Full-text available
In a context where Augmented Reality (AR) is rapidly spreading as one of the most promising technologies, there is great potential for applications addressing musical practices. This paper presents the development of a framework for creating AR gesture-based scores in the context of experimental instrumental composition. The notation system is made possible by GesturAR, an Augmented Reality software application developed by the author: it allows one to draw trajectories of gestures directly on the real vibrating body. Those trajectories are visualized as lines moving in real time at a predetermined speed. The user can also create an AR score (a sequence of trajectories) by arranging miniaturized trajectory representations on a timeline. The timeline is then processed and a set of events is created. This application paves the way to a new kind of notation: embodied interactive notation, characterized by a mimetic 4D representation of gesture, where the act of notation (performed by the composer during the compositional process) corresponds to the notated act (i.e., the action the interpreter is meant to produce during the performance).
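The timeline-to-events step described above can be sketched as a small transformation: each placed miniature carries a trajectory id and an offset on the timeline, and processing yields a time-ordered event list. The data layout is hypothetical; the abstract does not document GesturAR's internal format.

```python
def timeline_to_events(items, seconds_per_unit=1.0):
    """Convert timeline placements into a time-ordered event list.

    items: iterable of (trajectory_id, x_offset) pairs, where x_offset is the
    miniature's horizontal position on the timeline in arbitrary units.
    Returns a list of (start_time_seconds, trajectory_id), sorted by time."""
    events = [(x * seconds_per_unit, tid) for tid, x in items]
    return sorted(events)

# Two miniatures dropped on the timeline, out of order:
score = timeline_to_events([("swipe", 2.0), ("tap", 0.5)])
```

A scheduler would then walk this list during performance, showing each trajectory's moving line at its start time.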
Conference Paper
Full-text available
The following paper explores the inconspicuous head-mounted display within the context of a live technology-mediated music performance. For this purpose, in 2014 the authors developed Glasstra, an Android/Google Glass networked display designed to project real-time orchestra status to the conductor, with the primary goal of minimizing the on-stage technology footprint and, with it, the audience's potential distraction with technology. In preparation for its deployment in a real-world performance setting, the team conducted a user study aimed at defining relevant constraints of the Google Glass display. Based on the observed data, a conductor part from an existing laptop orchestra piece was retrofitted, replacing the laptop with a Google Glass running Glasstra and a similarly inconspicuous forearm-mounted Wiimote controller. Below we present findings from the user study that informed the design of the visual display, as well as multi-perspective observations from a series of real-world performances, covering the designer, the user, and the audience. We use these findings to offer a new hypothesis, an inverse uncanny valley, or what we refer to as the uncanny mountain: the audience's potential distraction with technology in a live technology-mediated music performance as a function of minimizing the on-stage technological footprint.
Conference Paper
Full-text available
The rhizome concept explored by Deleuze and Guattari has had an important influence on formal thinking in music and new media. This paper explores the development of rhizomatic musical scores that are arranged cartographically, with nodal points allowing alternate pathways to be traversed. The challenges of pre-digital exemplars of rhizomatic structure are discussed. The paper follows the development of concepts and technology used in the creation of five works by the author: Ubahn c. 1985, the Rosenberg Variations [2012], The Last Years [2012], Sacrificial Zones [2014], detritus [2015] and trash vortex [2015]. The paper discusses the potential for the evolution of novel formal structure using a rhizomatic approach.
Conference Paper
Full-text available
This paper describes a package of modular tools developed for use with current virtual reality peripherals to allow for music composition, performance and viewing in ‘real-time’ across networks within a spectralist paradigm. The central tool is SpectraScore~, a Max/MSP abstraction for analysing audio signals and ranking the resultant partials according to their frequency of occurrence within a user-controlled time window and their amplitude at detection time. Using a variety of interactive visualisations designed with Unity3D, musical data can be generated and forwarded to clients connected via score interfaces using an HMD and a PC, a tablet or smartphone, or a browser. They use their movements to trigger MIDI messages in various microtonal tunings according to the generated scores. Finally, an audience connected to the same VR environment as the performers can watch the avatars perform together from the perspective of disembodied sprites. The ‘real-time composer’ and performers are not required to have any prior knowledge of complex computer systems and interact using keyboard and mouse, head-position tracking, a Leap Motion controller, a Myo armband, or a Bluetooth gamepad.
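The ranking criterion described above, occurrence count within a time window with amplitude as a secondary factor, can be sketched outside Max/MSP. This is a simplified reading of the abstract, not SpectraScore~'s actual algorithm; binning partials by rounding to the nearest Hz is an assumption.

```python
from collections import Counter

def rank_partials(detections):
    """Rank detected partials by how often they occur in the analysis window,
    breaking ties by peak amplitude at detection time.

    detections: list of (freq_hz, amplitude) pairs from successive frames.
    Returns frequencies (rounded to the nearest Hz) in descending rank."""
    counts = Counter(round(f) for f, _ in detections)
    peak = {}
    for f, a in detections:
        key = round(f)
        peak[key] = max(peak.get(key, 0.0), a)
    return sorted(counts, key=lambda k: (counts[k], peak[k]), reverse=True)

# A 440 Hz partial seen twice outranks a louder but rarer 660 Hz partial:
ranked = rank_partials([(440.1, 0.5), (439.9, 0.7), (660.0, 0.9)])
```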
Conference Paper
Full-text available
In 2009, the Decibel new music ensemble based in Perth, Western Australia was formed with an associated manifesto that stated “Decibel seek to dissolve any division between sound art, installation and music by focusing on the combination of acoustic and electronic instruments” [1]. The journey provided by this focus led to a range of investigations into different score types, resulting in a rewriting of the group's statement to “pioneering electronic score formats, incorporating mobile score formats and networked coordination performance environments” [2]. This paper outlines the development of Decibel's work with the ‘screen score’, including the different stages of the ‘Decibel ScorePlayer’, an application (app) for reading graphic notation on the iPad. The paper proposes that the Decibel ScorePlayer app provides a new, more accurate and reliable way to coordinate performances of music where harmony and pulse are not the primary elements described by the notation. It features a discussion of selected compositions facilitated by the application, with a focus on the significance of the application to the author's own compositional practices. The different stages in the development, from prototype score player to the establishment of a commercialized ‘Decibel ScorePlayer’, are outlined in the context of practice-led investigations.
Article
Full-text available
In this paper, an experimental self-teaching system capable of superimposing audio-visual information to support the process of learning to play the guitar is proposed. Different learning scenarios have been carefully designed according to diverse levels of experience and understanding, and are presented in a simple way. Learners can select among a representative number of scenarios and physically interact with the audio-visual information in a natural way. Audio-visual information can be placed anywhere in a physical space, and multiple sound sources can be mixed to experiment with compositions and compilations. To assess the effectiveness of the system, an initial evaluation was conducted. Finally, conclusions and future work are summarized.
Conference Paper
Full-text available
This paper presents a vision-based method for tracking guitar fingerings played by guitar players using stereo cameras. We propose a novel framework for tracking colored finger markers by integrating a Bayesian classifier into particle filters, with the advantages of automatic track initialization and recovery from tracking failures in a dynamic background. ARTag (Augmented Reality Tag) is utilized to calculate the projection matrix as an online process, which allows the guitar to be moved while playing. By using online adaptation of color probabilities, the system is also able to cope with illumination changes.
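The online adaptation of color probabilities mentioned above can be illustrated as an exponential blend between the stored probability map and the probabilities observed in the current frame. This is a generic sketch of that idea, not the paper's classifier; the smoothing rate and the per-color-bin dictionary layout are assumptions.

```python
def adapt_color_prob(prob, observed, rate=0.1):
    """Blend a stored color-probability map toward newly observed
    probabilities, so the marker model tracks gradual illumination change.

    prob, observed: dicts mapping a color bin to P(marker | color bin).
    rate: smoothing factor in [0, 1]; higher adapts faster."""
    bins = set(prob) | set(observed)
    return {c: (1 - rate) * prob.get(c, 0.0) + rate * observed.get(c, 0.0)
            for c in bins}

# A bin that was certain drifts down; a newly seen bin drifts up:
updated = adapt_color_prob({"r": 1.0}, {"r": 0.0, "g": 1.0}, rate=0.5)
```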
Article
In this survey of academic contributions driving augmented reality’s commercial potential and of the industry trends advancing software and hardware developments and emerging applications, the author offers advice on how startups hoping to leverage these advances can compete against senior tech tycoons.
Conference Paper
This paper aims to apply the object detection used in Augmented Reality (AR) technology to a real-time application for a musical instrument: a virtual piano. The proposed application provides the sounds of musical notes and also displays the corresponding notation of the notes being played. Both features allow people with hearing disabilities or muscular weakness, or anyone who cannot exert pressure on a standard keyboard instrument, to play music despite physical disabilities. The experimental results demonstrated an error rate in the playback sounds of only 0.5 percent (2 errors measured over 400 real-time taps of 50 markers). All errors were due to incorrect positions of the player's hand, as some fingers accidentally covered unexpected markers while playing the virtual piano.
Article
Rossini would often wait until the last possible moment to compose overtures. He wrote the overture to Otello the evening before the opera’s 1816 premiere. For La gazza ladra the following year, Rossini waited until the day of the premiere to score the overture, working, as he later wrote, "up under the roof of La Scala in Milan." He also noted: "Nothing is better for inspiration than necessity, the presence of a copyist waiting for your work, sheet by sheet" (Hughes 1956, p. 247). Real-time music notation systems take Rossini’s strategy to new extremes, waiting to create the score until during the performance. Unlike most live computer-music performance environments, these software algorithms do not produce digital audio or control data. Instead, they produce a dynamic musical score that may contain conventional Western notation or a range of graphical representations, which is interpreted by human musicians to create sound.