The 17th International Conference on Auditory Display (ICAD-2011) June 20-24, 2011, Budapest, Hungary
Physical Sonification Dataforms
Stephen Barrass
University of Canberra, Canberra, ACT, Australia
ABSTRACT
Physical Sonification Dataforms are physical objects
constructed from digital datasets to produce sounds. This paper
reports on a series of three experiments that establish and verify
the theory of Physical Sonification. The experiments use HRTF
data provided as a sonification challenge at ICAD 2011. The
dataset is a spatial array of spectral filters measured from the
left and right ears of a Kemar dummy head. The proof of
concept is a coin-like metal disc constructed from the data that
can be struck or scraped to produce a sound. The second
iteration is shaped like a bell to produce a more sustained and
pitched sound. In the third iteration the bell shape was enhanced
to produce a louder tone, and the effect of the data on the shape
was amplified. The hypothesis that these bells could convey
useful information about the data was tested in an experiment
that compared the tones produced with a Control. The timbre of
the Left and Right Bells is categorically different from the
Controls, with a double harmonic series that causes a dissonant
effect. The Bells each have different pitches that can also be
used to distinguish them from each other. These results support
the hypothesis that Physical-Acoustic Sonification Dataforms
could provide information that could be useful for classifying
and analyzing HRTF collections such as the CIPIC database.
1. Introduction
Recent advances in the 3D printing of Computer Aided Design models have made it possible to produce physical objects directly from digital datasets. The range of materials that can be printed includes stainless steel, glass and ceramics. This technology has
been driven by the need to prototype models and parts in
industrial design, automotive engineering, architecture and
other fields where the 3D shape of an object is important for
functional and aesthetic purposes.
These materials also have acoustic properties that can be shaped
by digital processes. This opens the door to the production of
sonic objects from digital data sets. The ecological theory of
hearing is that the human ear has evolved to hear information in
the physical acoustics of everyday objects. Taking a further step
in this direction leads to the idea that specially designed
acoustic objects could be physical sonifications of the digital data
they are constructed from. Up until now Sonification has relied
on the artificial synthesis of sounds using electroacoustic
circuits, signal processing algorithms and computer programs.
Physical Sonification Dataforms will only need air to work.
2. Background
The sonification challenge has become a regular component of
the International Conference on Auditory Display since the
Listening to the Mind Listening Concert of EEG Data at the
Sydney Opera House in 2004, the Global Data by Ear concert at the ICA in London in 2006, the DNA Sonification contest in Copenhagen in 2009, and the Embodied Sounds concert in Washington DC in 2010. This year, in 2011, György Wersényi has organized a challenge to sonify a spatial HRTF dataset.
"The Sonification contest is an opportunity to express yourself and your creativity in the field of Sonification. A data set will be provided with a detailed description that has to be sonified. This year the data set is a two-channel (left ear-right ear) recording of a dummy-head containing the Head-Related Transfer Functions. These transfer functions describe the transmission from the free-field to the eardrums, and are the direction-dependent filters for the outer ears. The data set and instructions can be downloaded below. It contains horizontal and median plane HRTFs. Your task is to sonify these 'raw numbers'. You are welcome to use any software and idea, artistic or musical performance." [Wersényi, 2010]
Many data visualizations are rendered in a 3D view to appear
like a physical object. This can be particularly beneficial for
understanding datasets that are spatial or 3D in nature. The development of 3D printing has allowed researchers to render 3D visualizations in a 3D physical form. The manual
manipulation and feel of the dataset may allow different
perceptions and deeper understanding of the data than the
screen version. A broad spectrum of these data sculptures is described and analysed in theoretical terms of metaphorical
distance and embodiment in [Zhao and Vande Moere, 2008].
3D CAD tools have also been used to design 3D acoustic
forms. The Federation Bells in Melbourne are a public installation of 39 bells of up to 1.2 tonnes in mass that were designed using computer simulations to produce specified just-tuned harmonics [McLachlan et al., 2003].
Drawing these threads together led to the idea of printing a 3D
version of the HRTF dataset and then physically striking it to
produce a sonification of the entire dataset in a single tone. The
rest of this paper describes the process of producing this object,
and the insights that have been gained along the way. The next
section describes the three experiments that have established
and verified the hypothesis of Physical Sonification Dataforms.
This is followed by an analysis and discussion, then
conclusions and further work.
3. Experimental Method and Process
3.1. HRTF Dataset
The HRTF dataset was measured by recording a white noise
excitation using microphones in the ear canals of a dummy
head. The dataset for the horizontal plane contains files at 1 degree intervals for both the left and right ear. The first three digits of each filename indicate azimuth from 0 to 359 degrees, and each file contains 2*2048 float numbers that describe the spectral FFT from 0 Hz (the DC value) up to 25 kHz.
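As an illustration of this file format, the following sketch reads one file and recovers a magnitude spectrum. It assumes 32-bit floats with the two blocks of 2048 values holding the real and imaginary parts of the transfer function, and a placeholder filename; only the 2*2048 count and the 0 Hz to 25 kHz range come from the dataset description.

```python
import numpy as np

N_BINS = 2048        # FFT bins per ear and direction, DC up to 25 kHz
F_MAX = 25_000.0     # upper frequency limit stated for the dataset

def load_hrtf(path):
    """Read one HRTF file of 2*2048 float values.

    Assumption: 32-bit little-endian floats, with the two blocks of
    2048 values holding the real and imaginary parts of the transfer
    function. If the files store magnitude and phase instead, adjust
    the reconstruction below.
    """
    raw = np.fromfile(path, dtype="<f4")
    if raw.size != 2 * N_BINS:
        raise ValueError(f"unexpected file size: {raw.size} floats")
    H = raw[:N_BINS] + 1j * raw[N_BINS:]
    freqs = np.linspace(0.0, F_MAX, N_BINS)   # bin index -> frequency
    return freqs, H

# Example: magnitude spectrum in dB for one direction of one ear.
# The filename (three-digit azimuth prefix, placeholder extension)
# follows the description above.
freqs, H = load_hrtf("090_left.dat")
mag_db = 20 * np.log10(np.abs(H) + 1e-12)
```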
3.2. Data Medallion
The first prototype, called the Data Medallion, is a coin-like disc made from stainless steel that produces a sound
when struck or scraped. The Data Medallion was inspired
by a paper on visualizations of HRTF provided with the
dataset [Wersényi, 2010].
The polar plot of 360 HRTFs for one ear, shown in Figure 1, arranges the FFTs as spokes at 1-degree spacing, with
the DC component at the centre and higher frequencies
radiating outward. The spokes are coloured by amplitude
at each point of the spectral envelope, to form a disc as
shown in Figure 1a. Complete HRTF datasets are difficult
to compare as a whole. This problem has been addressed
with a second polar visualization of the difference between the left and right ears, called an HRTFD.
Figure 1. Polar Visualisations of HRTF data
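A minimal sketch of this polar arrangement, assuming the 360 spectra for one ear have already been assembled into a single magnitude array (for example with the hypothetical load_hrtf above):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_polar_hrtf(mag_db):
    """Polar plot of one ear's HRTFs in the style of Figure 1.

    mag_db: hypothetical (360, 2048) array of magnitudes in dB, one
    row per degree of azimuth. Each azimuth becomes a spoke, with the
    DC component at the centre and 25 kHz at the rim, and the colour
    gives the magnitude of the spectral envelope.
    """
    theta = np.deg2rad(np.arange(mag_db.shape[0]))   # spoke angles
    r = np.arange(mag_db.shape[1])                   # frequency bins
    T, R = np.meshgrid(theta, r)

    ax = plt.subplot(projection="polar")
    im = ax.pcolormesh(T, R, mag_db.T, shading="auto")
    ax.set_yticklabels([])            # radial ticks are bins, not metres
    plt.colorbar(im, label="magnitude (dB)")
    plt.show()
```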
A 3D visualization, shown in Figure 2, was also proposed to help auditory researchers comprehend and analyse the details of these large and complex HRTF datasets.
Figure 2. 3D visualization of HRTF for one ear
The combination of these visualizations inspired the idea
that the left and right ear HRTFs could be presented as
opposite sides of a physical coin. The idea was explored
by constructing a 3D mesh in a CAD package as shown in
Figure 3. This mesh was then laser printed in stainless
steel to produce a physical object, as also shown in the
figure. This object, called the Data Medallion, is 5 cm in diameter and ranges from 2 to 6 mm in thickness. The left
and right ears are present as opposite sides that can be felt
at the same time. The object produces a short sharp
metallic sound when struck, and the surface roughness
can be heard by scraping it with a metal rod. The
capability to produce sounds from the HRTF dataset in
this way is a proof of concept for Physical Sonification
Dataforms.
Figure 3. The Data Medallion, a proof of concept.
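A minimal sketch of this construction step is given below. It maps the left and right magnitude arrays onto the upper and lower faces of a disc with the stated 5 cm diameter and 2 to 6 mm thickness; the linear dB-to-thickness mapping is an assumption, and a CAD or mesh tool is still needed to stitch the two faces into a watertight, printable solid.

```python
import numpy as np

DIAMETER_MM = 50.0              # 5 cm coin, as printed
T_MIN_MM, T_MAX_MM = 2.0, 6.0   # overall thickness range of the Medallion

def medallion_faces(left_db, right_db):
    """Map left/right HRTF magnitudes onto the two faces of a disc.

    left_db, right_db: (360, 2048) magnitude arrays in dB. Returns two
    (360, 2048, 3) vertex grids for the upper (left ear) and lower
    (right ear) faces. The linear dB-to-thickness mapping is an
    assumption for illustration, not the exact mapping used for the
    printed Medallion.
    """
    az = np.deg2rad(np.arange(360))[:, None]                # spoke angle
    rad = np.linspace(0.0, DIAMETER_MM / 2, 2048)[None, :]  # DC -> rim
    x, y = rad * np.cos(az), rad * np.sin(az)               # (360, 2048)

    def half_thickness(db):
        norm = (db - db.min()) / (db.max() - db.min() + 1e-12)
        return T_MIN_MM / 2 + norm * (T_MAX_MM - T_MIN_MM) / 2

    top = np.stack([x, y, half_thickness(left_db)], axis=-1)
    bottom = np.stack([x, y, -half_thickness(right_db)], axis=-1)
    return top, bottom
```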
3.3. The Golden Bell of Hearing
Although the Data Medallion produces sounds from physical
interactions, most of the information is related to the size, shape
and material of the disc, and the effect of the dataset on the sounds is unclear. In experiments it was necessary to hold it
with pointed pliers to prevent the sounds being damped. These
observations led to a second iteration of the concept designed to
enhance auditory perception of the characteristics of the
dataset. The Golden Bell of Hearing was constructed by
applying the HRTF spectral profiles to alter the exterior thickness of a bell-shaped object, as shown in Figure 4. A
handle was manually modeled and attached in a CAD program,
and the object was again 3D printed in stainless steel as shown
in the figure. The physical acoustics of the bell-like shape produce a longer, pitched ringing tone, with small changes in timbre when the bell is struck in different locations. The
metaphor of a bell also provides a cue for interacting with the
object in an acoustic manner. The Golden Bell of Hearing
demonstrates the application of acoustics and affordances to
design and develop a Physical Sonification Dataform.
Figure 4. The Golden Bell of Hearing, an application of acoustics and metaphor to the concept.
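A minimal sketch of this thickness modulation, assuming a simple flared bell profile and a linear mapping from magnitude to outward displacement; the dimensions and scaling below are placeholders rather than the Golden Bell's actual geometry:

```python
import numpy as np

def bell_outer_surface(mag_db, rim_radius_mm=20.0, height_mm=40.0,
                       wall_mm=2.0, relief_mm=1.0):
    """Outer surface of a bell whose exterior is modulated by HRTF data.

    mag_db: (360, 2048) magnitudes in dB; azimuth wraps around the
    bell and frequency runs from the rim (DC) to the crown (25 kHz).
    The flared base profile, the dimensions and the dB-to-relief
    scaling are illustrative assumptions, not the Golden Bell's
    actual geometry.
    """
    az = np.deg2rad(np.arange(360))[:, None]
    z = np.linspace(0.0, height_mm, mag_db.shape[1])[None, :]

    # Simple flared profile: wide at the rim, narrowing toward the crown.
    profile = rim_radius_mm * (1.0 - 0.5 * z / height_mm)

    # Normalise the magnitudes and push the outer wall outward by up to
    # relief_mm, so louder regions of the spectrum thicken the wall.
    norm = (mag_db - mag_db.min()) / (mag_db.max() - mag_db.min() + 1e-12)
    outer_r = profile + wall_mm + relief_mm * norm

    x = outer_r * np.cos(az)
    y = outer_r * np.sin(az)
    z_grid = np.broadcast_to(z, x.shape)
    return np.stack([x, y, z_grid], axis=-1)   # (360, 2048, 3) vertices
```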
3.4. Test Bells
The hypothesis of Physical Sonification is that a physical
acoustic form can allow the auditory perception of useful
information about a dataset. The third experiment tests this
hypothesis by comparing Bells constructed from the left and
right HRTFs with a Control. Based on observations from the previous experiment, the Bell form was made larger, with a bigger handle and an internal loop for hanging a clapper. The
Control Bell consists of this template without any data applied
to it. The Left and Right bells were constructed by applying the
spectral envelopes on the outside as before. However, this time a more complex shell was constructed by inverting the
spectral envelope on the inside to follow the exterior, as shown
in Figure 5. This more complex shaping was designed to
increase the effect of the dataform on the acoustics of the
object.
Figure 5. Forming the Left Bell from the Data.
As before, the three Test Bell dataforms were 3D printed
in stainless steel, as shown in Figure 6.
Figure 6. Left, Control and Right Test Bells
3.5. Results
When struck with a metal rod, the Control Bell produces a 1.4 s tone with a fundamental at approximately 3 kHz and two main harmonics at approximately 7.5 kHz and 14 kHz. Although the harmonics are not evenly spaced, the tone is distinctly bell-like.
Figure 7. Audio spectrum of the Control Bell when struck.
When the Left Bell is struck it also produces a fundamental at approximately 3 kHz, with a duration of 2 s, and major harmonics at 7 kHz and 12.5 kHz, as shown in Figure 8. Although the spectra appear similar, both the Left and Right Bells have a dissonant tone that is distinctly different from the Control.
Figure 8. Audio spectrum of the Left Bell when struck.
The reason for the dissonance is evident from a closer inspection and comparison of the spectra in Figure 9. The Left Bell has a doubled fundamental at 2.9 and 3.0 kHz, doubled second harmonics at 7.0 and 7.1 kHz, and a repeated pattern of doublings in the higher harmonics. The Right Bell has a doubled fundamental at 3.0 and 3.1 kHz, doubled second harmonics at 7.2 and 7.3 kHz, and a different pattern of higher harmonics.
Figure 9. Spectra for the Left, Control and Right Bells.
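The doubled partials reported above can be checked directly from a recording of each strike. The following sketch estimates the strongest spectral peaks of a strike recording; the filename, the assumption of a WAV file, and the peak-picking thresholds are placeholders.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

def strike_partials(wav_path, n_peaks=8, fmin=500.0):
    """Estimate the strongest partials of a recorded bell strike.

    The filename, WAV format and peak-picking thresholds are
    placeholders; only the expected pattern of partials comes from
    the measurements reported above.
    """
    rate, x = wavfile.read(wav_path)
    if x.ndim > 1:
        x = x.mean(axis=1)                      # mix down to mono
    x = x.astype(float) * np.hanning(len(x))    # window to reduce leakage

    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / rate)
    df = freqs[1] - freqs[0]

    keep = freqs >= fmin                        # skip low-frequency rumble
    peaks, props = find_peaks(spec[keep],
                              height=spec[keep].max() * 0.05,
                              distance=max(1, int(20.0 / df)))
    strongest = np.argsort(props["peak_heights"])[::-1][:n_peaks]
    return np.sort(freqs[keep][peaks[strongest]])

# e.g. strike_partials("left_bell.wav") would be expected to return
# pairs of peaks near 2900/3000 Hz and 7000/7100 Hz for the Left Bell.
```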
4. Discussion
The Left and Right Bells are distinctly different in timbre and duration from the Control Bell. The difference in timbre is due to a
superposition of two harmonic series separated by about 100Hz
in fundamental frequency. This produces a beating and
dissonance that is unusual in a bell sound. The Left Bell is slightly lower pitched than the Control, while the Right Bell is slightly higher pitched. So although the Left and Right Bells
have categorically similar timbres that distinguish them from
the Control, they have a distinct difference in pitch that
distinguishes them from each other. This pitch difference may
be due to the yellow/red highlight in the visualization of the
HRTFD in Figure 1. The Left and Right also have subtle
differences in timbre due to the different patterns of upper
harmonics.
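The beating can be approximated synthetically: duplicating each partial with a constant split of about 100 Hz reproduces the rough, dissonant quality described above. A minimal sketch with placeholder partial ratios and decay:

```python
import numpy as np

def doubled_partial_tone(f0=2950.0, split=100.0,
                         ratios=(1.0, 7.0 / 3.0, 12.5 / 3.0),
                         dur=2.0, rate=44100):
    """Synthesise the beating effect of partials doubled ~100 Hz apart.

    The partial ratios are rough placeholders taken from the measured
    peaks (about 3, 7 and 12.5 kHz), not the bells' actual modal
    frequencies; the decay envelope is likewise illustrative.
    """
    t = np.arange(int(dur * rate)) / rate
    env = np.exp(-3.0 * t)                     # simple exponential decay
    tone = np.zeros_like(t)
    for k in ratios:
        # Each partial appears twice, split by ~100 Hz, as in the spectra.
        tone += env * np.sin(2 * np.pi * (f0 * k - split / 2) * t)
        tone += env * np.sin(2 * np.pi * (f0 * k + split / 2) * t)
    return tone / np.abs(tone).max()
```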
5. Conclusion
This series of experiments has developed a proof of concept, a
design process, and a validation for a theory of Physical
Sonification Dataforms. In the first prototype the data was used
to shape a coin-like medallion constructed from stainless steel,
with the data from the left ear on one side and the right on the
other. The Data Medallion produces a sound when struck or
scraped, but the sound is damped by being held. The Medallion
also allows you to see and feel the left and right ear HRTFs as
features on both sides of a physical object you can put in your
pocket for further contemplation at your leisure.
The next iteration was a small bell made of gold coloured
stainless steel in which the HRTF filters of the left ear are
rendered by the thickness of the curved sides. The Golden Bell
of Hearing produces a more sustained and pitched, bell-like sound when rung. The design of the bell demonstrates the
application of acoustics, metaphor and interaction affordance to
the concept. The third experiment tests the hypothesis that the
physical acoustics of the Bell can characterise the HRTF dataset
of 360 spectral profiles as a single tone. The sounds produced
by a control, left and right ear datasets were compared
perceptually and spectrally. The Left and Right ear bells sound
categorically different from the Control bell, and different in
pitch from each other. The spectrograms from each bell show
differences in the temporal development of harmonics caused
by differences in the shape of each object. These observations
support the proposal that Physical Sonification Dataforms can
provide useful information about the dataset. The auditory
differences between the Test Bells demonstrate the plausibility of distinguishing HRTF datasets measured from different people
by listening. These differences could be used to classify entire
datasets in terms of similarity and difference from a single tone,
and perhaps even to remember and associate datasets with
different individuals.
5.1. Further Work
In further work we will continue to explore the psychoacoustic
effects of different shapes and forms, with different types of
data. We will also explore design issues of metaphor,
affordance and functionality with different tasks, contexts, and
users.
6. References
[1] Zhao, J. and Vande Moere, A. (2008) Embodiment in data sculpture: a model of the physical visualization of information. In Proceedings of the 3rd International Conference on Digital Interactive Media in Entertainment and Arts (DIMEA '08), ACM, New York, NY, USA, 343-350.
[2] McLachlan, N.M., Keramati Nigjeh, B. and Hasell, A. (2003) The Design of Bells with Harmonic Overtones, Journal of the Acoustical Society of America, 114(1), 505-511.
[3] Wersényi, Gy. (2010) Sonification Contest, International Conference on Auditory Display, Budapest, Hungary, 20-24 June 2011, http://icad2011.com/the-sonification-contest.html, retrieved 28 May 2011.
[4] Wersényi, Gy. (2010) Representations of HRTFs using MATLAB: 2D and 3D plots of accurate dummy-head measurements. Proceedings of the 20th International Congress on Acoustics (ICA), Sydney, Australia, 23-27 Aug. 2010.