Article

LOCATIVE SONIFICATION: PLAYING THE WORLD THROUGH CITYGRAM

... This language interfaces with the OSC and MIDI formats, and thus allows real-time parameterization of sound synthesis. It is therefore often used as an audio rendering engine for sonification [123,229,164,89,200,36]. This use of ChucK for sonification was also the subject of a course at the 2012 ICAD conference [45]. ...
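As a minimal illustration of the OSC route described above, the sketch below hand-encodes an OSC message using only the Python standard library. The address pattern /citygram/freq and the 220-880 Hz mapping are hypothetical; a real setup would send the resulting packet over UDP to a synthesis engine such as ChucK listening on that address.

```python
import struct

def _osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad a byte string to a 4-byte boundary (OSC spec)."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message carrying float32 arguments.

    This is enough to drive an OSC-aware synthesis engine over UDP;
    the address pattern is whatever the receiving patch listens on.
    """
    msg = _osc_pad(address.encode("ascii"))
    msg += _osc_pad(("," + "f" * len(args)).encode("ascii"))
    for a in args:
        msg += struct.pack(">f", float(a))  # big-endian float32
    return msg

# Example: map a sensor reading in [0, 1] to a frequency and encode it.
reading = 0.42
freq = 220.0 + reading * (880.0 - 220.0)  # linear map into 220-880 Hz
packet = osc_message("/citygram/freq", freq)
# packet could now be sent with socket.sendto(packet, (host, port))
```

The encoding follows the OSC 1.0 wire format: padded address string, padded type-tag string, then 4-byte aligned arguments.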
Thesis
Sonification is the representation of data in a non-verbal auditory form. It is a relatively young field of research, born of fairly recent technologies, for which no clear standard yet exists. After a few decades of reflection on generic modeling of the concept, and hundreds of case-by-case solutions, it seems worthwhile to synthesize the commonalities and variables of everything that can be called a sonification, in order to extract a "pattern" for this type of representation. In this thesis, we report our work on three case studies, together with a broader survey of existing work, aimed at laying the foundations of such a model. Our model takes the form of a function, parameterized by the data and by the user's interventions, whose output is the constructed sound. We propose a graphical format to represent it, before considering its potential as a computational design tool.
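The thesis's view of sonification as a function of the data and of user-chosen parameters, whose output is the constructed sound, can be sketched as follows. The sine-tone rendering and every parameter value here are illustrative choices, not the thesis's own model.

```python
import math

def sonify(data, gain=0.5, base_freq=220.0, samplerate=8000, note_dur=0.25):
    """A sonification in the sense above: a function of the data and of
    user parameters (gain, base frequency, tempo) whose output is sound.

    Each data point in [0, 1] becomes a short sine tone spanning one
    octave above base_freq.
    """
    samples = []
    n = int(samplerate * note_dur)  # samples per note
    for x in data:
        f = base_freq * (2.0 ** x)  # map [0, 1] onto one octave
        for i in range(n):
            samples.append(gain * math.sin(2 * math.pi * f * i / samplerate))
    return samples

tones = sonify([0.0, 0.5, 1.0])  # three quarter-second tones
```

Changing gain or base_freq is the "user intervention" of the model; changing data changes what is heard.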
... The presented project aims to continuously measure and ultimately understand these urban sound environments. It is a multidisciplinary collaborative effort between New York University's (NYU) Center for Urban Science and Progress (CUSP) and the NYU Steinhardt School's Citygram Project [16,18,19,12,17]. The impetus of the Citygram project is focused on the lack of sufficient mapping paradigms for non-ocular energies in urban settings. ...
... Stanza, a UK-based sound artist, makes extensive use of environmental sensor data in his artistic sonification practices. The Social City Detector project and the Citygram project (Park et al., 2013) used sonification to integrate the digital and physical layers of the city by making social data audible through sound. The Phantom Terrains project used a repurposed hearing aid to reveal the electromagnetic signals of the wireless networks that pervade the contemporary built environment. ...
Conference Paper
Full-text available
Auditory display is the use of sound to present information to a listener. Sonification is a particular type of auditory display technique in which data is mapped to non-speech sound to communicate information about its source to a listener. Sonification generally aims to leverage the temporal and frequency resolution of the human ear and is a useful technique for representing data that cannot be represented by visual means alone. Taking this perspective as our point of departure, we believe that sonification may benefit from being informed by aesthetic explorations and academic developments within the wider fields of music technology, electronic music and sonic arts. In this paper, we will seek to explore areas of common ground between sonification and electronic music/sonic arts using unifying frameworks derived from musical aesthetics and embodied cognitive science (Kendall, 2014; Lakoff & Johnson, 1999). Sonification techniques have been applied across a wide range of contexts including the presentation of information to the visually impaired (Yoshida et al., 2011), process monitoring for business and industry (Vickers, 2011), medical applications (Ballora et al., 2004), human-computer interfaces (Brewster, 1994), to supplement or replace visual displays (Fitch & Kramer, 1994), exploratory data analysis (Hermann & Ritter, 1999) and, most importantly for the current milieu, to reveal the invisible data flows of smart cities and the internet of things (Rimland et al., 2013; Lockton et al., 2014). The use of sonification as a broad and inclusive aesthetic practice and cultural medium for sharing, using and enjoying information is discussed by Barrass (2012). As networked smart societies grow in size and become increasingly complex, the ubiquitous invisible data flows upon which these societies run are becoming hard to monitor and understand by visual means alone.
Sonification might provide a means by which these invisible data flows can be monitored and understood. In order to achieve this type of usage, sonification solutions need to be applicable to and intelligible to an audience of general listeners. This requires a universal shared context by which sonifications can be interpreted. Embodied cognition researchers argue that the shared physical features of the human body, and the capacities and actions which our bodies afford us, define and specify mid-level structures of human cognitive processing, providing shared contexts by which people can interpret meaning in and assign meaning to their worlds (Lakoff and Johnson 1980; 1999; Varela et al., 1991). At present, embodied perspectives on cognition are infrequently explored in auditory display research, which tends to focus on either higher level processing in terms of language and semiotics (Vickers, 2012) or lower level processing in terms of psychoacoustics and Auditory Scene Analysis (Carlile, 2011).
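One concrete way a sonification can leverage the ear's frequency resolution, as described above, is to map data logarithmically onto frequency, since pitch perception is roughly logarithmic: equal data steps then give equal perceived pitch steps. The frequency range below is an illustrative choice, not a prescription from the paper.

```python
def data_to_freq(value, lo=0.0, hi=1.0, f_min=200.0, f_max=3200.0):
    """Map a data value in [lo, hi] onto frequency logarithmically.

    Equal steps in the data produce equal pitch intervals, matching the
    roughly logarithmic frequency resolution of human hearing.
    """
    t = (value - lo) / (hi - lo)          # normalize to [0, 1]
    return f_min * (f_max / f_min) ** t   # geometric interpolation

# 0.0 -> 200 Hz, 0.5 -> 800 Hz (the geometric midpoint), 1.0 -> 3200 Hz
```

A linear mapping over the same range would instead compress the perceived variation of low data values and exaggerate that of high ones.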
... [8] Other features such as spectral centroid, or acoustic event detection are also considered in recent studies. [9] This ambient noise data may be used to look for links with population trends, potential risks to residents' health, [10] or as factors in determining real-estate values. ...
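The spectral centroid mentioned above is the magnitude-weighted mean frequency of an audio frame. The naive-DFT sketch below is for illustration only; a deployed sensor would use an FFT library and windowing.

```python
import math

def spectral_centroid(frame, samplerate):
    """Spectral centroid of one audio frame: the magnitude-weighted mean
    frequency over the positive-frequency DFT bins (DC excluded)."""
    n = len(frame)
    num = 0.0
    den = 0.0
    for k in range(1, n // 2):  # positive frequencies only
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        num += (k * samplerate / n) * mag  # bin frequency times magnitude
        den += mag
    return num / den if den else 0.0

# A 1 kHz sine sampled at 8 kHz should have its centroid near 1 kHz.
sr = 8000
tone = [math.sin(2 * math.pi * 1000 * t / sr) for t in range(64)]
```

A bright, noisy scene (traffic, construction) pushes the centroid up; a dull rumble pulls it down, which is why it is a common soundscape feature.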
Conference Paper
Full-text available
This paper describes the development of a reproduction installation for the "I Hear NY3D" project. This project's aim is the capture and reproduction of immersive soundfields around Manhattan. A means of creating an engaging reproduction of these soundfields through the medium of an installation will also be discussed. The goal for this installation is an engaging, immersive experience that allows participants to create connections to the soundscapes and observe relationships between the soundscapes. This required the consideration of how to best capture and reproduce these recordings, the presentation of simultaneous multiple soundscapes, and a means of interaction with the material.
... The presented project aims to continuously measure and ultimately understand these urban sound environments. It is a multidisciplinary collaborative effort between New York University's (NYU) Center for Urban Science and Progress (CUSP) and the NYU Steinhardt School's Citygram Project [16,18,19,12,17]. The impetus of the Citygram project is focused on the lack of sufficient mapping paradigms for non-ocular energies in urban settings. ...
Conference Paper
Full-text available
The urban sound environment of New York City (NYC) is notoriously loud and dynamic. The current project aims to deploy a large number of remote sensing devices (RSDs) throughout the city, to accurately monitor and ultimately understand this environment. To achieve this goal, a process of long-term and continual acoustic measurement is required, due to the complex and transient nature of the urban soundscape. Urban sound recording requires the use of robust and resilient microphone technologies, where unpredictable external conditions can have a negative impact on acoustic data quality. For the presented study, a large-scale deployment is necessary to accurately capture the geospatial and temporal characteristics of urban sound. As such, an implementation of this nature requires a high-quality, low-power and low-cost solution that can scale viably. This paper details the microphone selection process, involving the comparison between a range of consumer and custom made MEMS microphone solutions in terms of their environmental durability, frequency response, dynamic range and directivity. Ultimately a MEMS solution is proposed based on its superior resilience to varying environmental conditions and preferred acoustic characteristics.
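Long-term acoustic measurement of the kind described reduces, per block of samples, to a level computation. A minimal sketch of block RMS level in dBFS follows; it is uncalibrated, so a real sensor would still need a reference-SPL calibration (and A-weighting) before reporting absolute levels.

```python
import math

def rms_dbfs(samples):
    """RMS level of a block of samples in dB relative to full scale (dBFS).

    Continuous monitoring would log one such value per block; calibration
    against a known SPL source is required for absolute readings.
    """
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# A full-scale sine has RMS 1/sqrt(2), i.e. about -3.01 dBFS.
sine = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
```

Averaging the squared pressure over longer windows before taking the logarithm gives the equivalent continuous level (Leq) used in noise monitoring.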
... The presented project aims to continuously monitor and ultimately understand these urban sound environments. It is a multidisciplinary collaborative effort between New York University's (NYU) Center for Urban Science and Progress (CUSP) and the NYU Steinhardt School's Citygram Project [11,13,14,8,12]. The impetus of the Citygram project is focused on the lack of sufficient mapping paradigms for non-ocular energies in urban settings. ...
Conference Paper
Full-text available
The urban sound environment of New York City is notoriously loud and dynamic. As such, scientists, recording engineers, and soundscape researchers continuously explore methods to capture and monitor such urban sound environments. One method to accurately monitor and ultimately understand this dynamic environment involves a process of long-term sound capture, measurement and analysis. Urban sound recording requires the use of robust and resilient acoustic sensors, where unpredictable external conditions can have a negative impact on acoustic data quality. Accordingly, this paper describes the design and build of a self-contained urban acoustic sensing device to capture, analyze, and transmit high quality sound from any given urban environment. This forms part of a collaborative effort between New York University's (NYU) Center for Urban Science and Progress (CUSP) and the NYU Steinhardt School's Citygram Project. The presented acoustic sensing device prototype incorporates a quad core Android based mini PC with Wi-Fi capabilities, a custom MEMS microphone and a USB audio device. The design considerations, materials used, noise mitigation strategies and the associated measurements are detailed in the following paper.
Chapter
In this paper we summarize efforts in exploring non-ocular spatio-temporal energies through strategies that focus on the collection, analysis, mapping, and visualization of soundscapes. Our research aims to contribute to multimodal geospatial research by embracing the idea of time-variant, poly-sensory cartography to better understand urban ecological questions. In particular, we report on our work on scalable infrastructural technologies critical for capturing urban soundscapes and creating what can be viewed as dynamic soundmaps. The research presented in this paper is developed under the Citygram project umbrella (Proceedings of the conference on digital humanities, Hamburg, 2012; International computer music conference proceedings (ICMC), Perth, pp 11–17, 2013; International computer music conference proceedings, Athens, Greece, 2014b; Workshop on mining urban data, 2014c; International computer music conference proceedings (ICMC), Athens, Greece, 2014d; INTER-NOISE and NOISE-CON congress and conference proceedings, Institute of Noise Control Engineering, pp 2634–2640, 2014) and includes a cost-effective prototype sensor network, remote sensing hardware and software, database interaction APIs, soundscape analysis software, and visualization formats. Noise pollution, which is New Yorkers' number one complaint as quantified by the city's 311 non-emergency hotline, is also discussed as one of the focal research areas.