AES 41st International Conference, London, UK, 2011 February 2–4
LOCATION AWARE INTERACTIVE GAME AUDIO
NATASA PATERSON2, KATSIARYNA NALIUKA1, TARA CARRIGY1, MADS HAAHR2,
FIONNUALA CONWAY2
1 Trinity College Dublin, College Green, Dublin 2, Ireland
katsiaryna.naliuka@ndrc.ie, tara.carrigy@ndrc.ie
2 Trinity College Dublin, College Green, Dublin 2, Ireland
patersn@tcd.ie, mads.haahr@cs.tcd.ie, conwayfi@tcd.ie
With the affordability and pervasiveness of GPS enabled smart phones, downloadable location aware applications have become increasingly popular. These applications range from the overlay of local information onto physical sites to multiplayer games and tourism guides. However, many of these applications rely predominantly on graphical overlays and simple user interface audio. For narrative led location aware experiences, audio can play an important role in immersing the player in the storyline and physical location. Coupling the interactivity and adaptability of audio to player movements encourages the user to remain engaged with the physical location. This paper describes the theory and implementation of interactive audio for a location aware tourism application. The aim of the application is to create an immersive gaming experience where the audio supports the historical setting and given narrative.
INTRODUCTION
Location aware augmented reality applications refer to
the overlaying of a digital world onto a real world space
with physical locations providing contextual cues for
narrative and gameplay elements. Currently many
augmented reality applications include the use of visual
and audio overlays onto physical locations in order to
present local information of the site or for use in gaming
or artistic installations. One installation that focuses on
audio is The Tactical Sound Garden [12] by Mark
Shepard. Participants download the software and travel
to the specified location of the implemented soundscape
within an urban place. As the user moves around the
space a real-time audio mix is heard that responds to
movements within the given location. As well as
listening to a pre-defined soundscape, participants may
also create their own ‘sound garden’ by uploading audio
files to chosen physical positions. With regard to narrative led historical dramas in location aware applications, audio can play an important role in involving participants in the setting and creating empathy towards characters [11]. Correlations have been found between increased immersion and the association of a physical object with audio. Visual cues are often important for sound localisation, thereby adding to location involvement [5]. Hence it is important to utilise real world visual cues together with an adaptive and interactive soundscape to increase engagement in a given narrative.
Audio has not yet been thoroughly explored for its
potential in creating an immersive environment for this
medium. There are examples of rich cinematic
soundtracks overlaid onto a real world space such as
SoundWalk [13]. However, most augmented reality
audio walks do not include traditional gaming concepts
such as non-linearity and interactive audio.
In this paper we review non-linear adaptive audio in composition and gaming, together with
examples of historical location aware applications that
use audio as part of their interface. Falkland Ghost
Hunt, a location aware historical game, will then be
presented together with the soundscape implementation.
Results of a commercial evaluation of the overall game experience will be discussed with regard to the role of interactive audio in game immersion and involvement.
1 RELATED BACKGROUND
Game audio production and composition techniques derive from the experience of film and music composition. Not only is film audio production knowledge of value, but composers and sound designers also draw on its compositional aesthetics and apply them to gaming.
In gaming, music serves as one important component of
the spectrum of sounds that includes a musical score,
ambient sound, dialogue, sound effects and even
silence. Although sound effects and game soundtracks are based on the same principles as film, a slightly different approach must be taken when it comes to game audio. In addition to a cinematic soundscape,
compositional non-linearity, adaptability and
interactivity must be considered when designing the
sound. Therefore in gaming scenarios, music acts as an accompaniment, while gameplay elements and the communication of narrative information are of paramount importance.
Interactive audio in a location aware scenario describes the process whereby sound and music are passively triggered by a user’s movement in a given location.
Audio content activated by interaction describes the
process of ‘generative art’ whereby new content is
created with the use of algorithmic processes [1]. The
first known example of generative interactive composition goes as far back as the 18th century with Johann Philipp Kirnberger’s musical dice game Der allezeit fertige Polonoisen- und Menuettencomponist (Berlin, 1757). This ‘minuet composer’ allowed the player to compose by throwing dice and consulting a table to determine which musically predefined fragment to use. Through mathematics the player was ensured an “original” work, with most of the variation occurring in the melody while the harmonic content remained more controlled. This random approach to music generation popularised aleatoric (from the Latin alea, a die) or ‘dice’ music, which was popular with those new to musical composition. Many other examples of dice composition existed, with Mozart’s Musikalisches Würfelspiel, first published in Berlin in 1792, being the most famous. More recently the technique has been used by avant-garde composers, for example in Henry Cowell’s String Quartet No. 3, Cage’s Winter Music and Stockhausen’s Klavierstück XI. In Klavierstück XI, a sheet of nineteen separate segments is given to the performer to be played in a random order [2]. Information regarding tempo and dynamics for the next segment is stated at the end of each segment, and the whole piece is finished when one of the segments has been repeated twice. This is an early example of ‘mobile’ interactive music.
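The dice-game procedure is easy to sketch algorithmically. The following Python fragment is an illustrative reconstruction only: the bar labels and table contents are invented placeholders, not Kirnberger’s or Mozart’s historical tables.

```python
import random

# Hypothetical lookup table: for each possible total of two dice (2-12),
# a list of pre-composed bar fragments (placeholder labels).
TABLE = {total: [f"bar_{total}_{v}" for v in ("a", "b", "c")]
         for total in range(2, 13)}

def compose_minuet(n_bars=16, rng=None):
    """Roll two dice per bar and look up a pre-composed fragment,
    yielding a different but always well-formed piece each time."""
    rng = rng or random.Random()
    piece = []
    for _ in range(n_bars):
        total = rng.randint(1, 6) + rng.randint(1, 6)  # two six-sided dice
        piece.append(rng.choice(TABLE[total]))
    return piece

print(compose_minuet(rng=random.Random(1757)))  # one 16-bar "original" minuet
```

Because the table constrains every possible roll to musically valid material, the output is always coherent while the melodic surface varies, just as described for the historical games.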
Due to the presence of mixers for digital audio on the
mobile platform, interactive generative systems can now
be developed. One example is Lasorsa and Lemordant’s [7] interactive format called AMML (Advanced/Audio Multimedia Markup Language). This
sound manager is designed for the iPhone and J2ME
phones. This example uses the process of selecting
short files in creating a continuous immersive
soundtrack. Random file selection is used together with
randomised triggering of sound effects and sleep times
between file playback. DSP (Digital Signal Processing)
effects and virtual 3D (three dimensional) audio is
implemented in real-time. This example provides an
immersive non-linear soundtrack without the use of
dialogue elements and does not utilise the triggering of
audio files by user movement in relation to GPS (Global
Positioning System) predefined locations.
Due to the ubiquitous nature of mobile technology,
tourism is one area that can greatly benefit from
downloadable applications. This is especially true for
historical sites where location aware technology can
allow for history to ‘come alive’ for the participant. The
Battle of Culloden, developed by ZolkC, is an interactive visual and audio tourist guide retelling the historical events of the 1746 battle between the Jacobites and the British Government [10]. The core content is triggered automatically and delivered through a single earpiece, with the visual display providing information such as maps and a menu, and the user controlling how much they want to listen to and see. However, the mono audio deployment does not allow for any spatial awareness of the audio and may affect engagement. Additionally, the
audio presentation was linear with no use of generative
audio content.
Interactive drama is also used to relive historical sites and events in the location aware audio drama Riot! 1831 [11]. Participant movements trigger a variety
of sound files with audio content presenting dramatised
real world historical events. In general, moving into a
region would trigger one of the sound files to start
playing and moving out of the region would cause it to
stop. However, this application was not based on the mobile phone platform and required the use of a PDA (Personal Digital Assistant) and an external GPS receiver.
The aim of Falkland Ghost Hunt is to combine location
aware technology on the mobile phone platform with
that of an interactive and generative soundscape in order
to engage participants in the historical setting of
Falkland palace and gaming elements.
2 GAME PROTOTYPE
The working prototype, Falkland Ghost Hunt, is a location aware gaming application designed to make local history more accessible to a younger audience. The application is developed on the Android platform and based on the Samsung Galaxy S, which contains an integrated GPS receiver, internal compass and accelerometer, and requires the participant to wear stereo headphones. The game narrative is set during the
time Queen Mary spent at Falkland Castle in Scotland.
It is located in the gardens of Falkland Castle with a
contextually based audio and visual virtual world
overlay. This narrative led game requires the player to
act as a paranormal investigator in order to find and
release ghostly manifestations. Different graphical
modes of interaction assist the player in locating and
capturing the image of the spirit in order to hear their
story. Audio plays a significant role in creating a
ghostly, immersive atmosphere that is used as a
backdrop for sound effects and dialogue.
Figure 1: Capturing the ghost
Once the player has found and released all required
ghosts they are able to review their evidence in a
casebook where they can hear the dialogue and see the
images they have captured. Therefore the aim of the
sound was to reinforce the paranormal investigator role-
play element within the historical setting.
3 INTERACTIVE AND GENERATIVE AUDIO
According to the game composer Koji Kondo [2], game audio must be adaptive and interactive, change with each playthrough, and provide tension and surprise. Keeping
these principles in mind, the sound design for Falkland
Ghost Hunt utilises the concept of generative stochastic
background sound by the randomisation and layering of
‘wavelets’ (small audio files). Generating music from digital audio wavelets has been used to build soundtracks for console games and is now also possible on the mobile platform. The process of generating music from short audio files can be traced back to 2001, when Kenneth Kirschner, a New York based composer, used Adobe Flash (a multimedia platform) to compose ever-changing pieces of digital music [4]. This process generally consists of a number
of audio files that are randomly chosen and layered
simultaneously and can play for as long as the player
wishes. There is no inherent end and it is suited for
games requiring continuously changing music, hence
avoiding the problem of boredom and habituation often
evident in the looping of files in console gaming. From
the listener’s point of view the soundscape will always be perceived as linear, even though it is non-linear in its implementation.
The use of indeterminate adaptive audio wavelets is ideal for the mobile platform due to the limited memory and CPU (Central Processing Unit) resources available.
prototype, the use of wavelets forms the continuous
background sound that is able to play for as long as the
player remains within a given location, therefore
avoiding the looping of sound files [2]. This is
implemented by using the SoundPool class in the
Android operating system. The maximum audio file size
was 750 KB, corresponding to approximately 2–4 seconds in length. Playback of the wavelets was randomised within the application code so that the files overlapped to create a continuous background sound. It was found that SoundPool had a quick file loading time and hence worked well for looping and playing back numerous small files simultaneously. As well as randomising
the file selection, sleep time between file playback was
also randomised in order to avoid any perceptual pattern
recognition. Hence the background was perceived as
continuous and ever changing so that the player always
had a different soundscape experience each time they
played the game. SoundPool was also used to playback
isolated sound effects. The Android MediaPlayer class
presented the dialogue as it was capable of streaming
larger files in a linear format [9].
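The randomisation scheme described above can be sketched in outline. The following Python fragment is a platform-neutral illustration of the idea, not the authors’ Android/SoundPool code; the file names and timing ranges are assumed values, chosen so that the gap between triggers is shorter than a wavelet’s length and successive wavelets overlap into a continuous bed.

```python
import random

# Hypothetical wavelet pool (each file assumed ~2-4 s long, <= 750 KB).
WAVELETS = ["wind_a.ogg", "wind_b.ogg", "creak.ogg", "whisper.ogg"]

def schedule_wavelets(duration_s, rng, min_sleep=0.5, max_sleep=2.0):
    """Build a (start_time, file) schedule: both the file choice and the
    sleep before the next trigger are randomised, so no periodic pattern
    emerges. Because each sleep is shorter than a wavelet's duration,
    consecutive wavelets overlap into a continuous background."""
    events, t = [], 0.0
    while t < duration_s:
        events.append((round(t, 2), rng.choice(WAVELETS)))
        t += rng.uniform(min_sleep, max_sleep)
    return events

for start, name in schedule_wavelets(10.0, random.Random(41))[:4]:
    print(f"{start:5.2f}s  play {name}")
```

On the actual handset each scheduled event would hand the file to the audio engine (SoundPool in the prototype) rather than print it; the randomised selection and sleep logic is the part this sketch illustrates.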
In addition to the generative background sound,
interactivity of the audio to player movement was
implemented by linking predefined GPS co-ordinates to
the triggering of audio files. Concentric rings were
created with differing radial distances from the central GPS location, so that when a player enters a different region, the relevant audio files are triggered, consisting of the coded generative audio files and isolated sound effects. When a ghost is encountered in the central
region, the player is required to capture its image in
order to listen to the historically related game dialogue.
Figure 2: Concentric rings of audio
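The ring-based triggering can be illustrated with a simple distance test. The sketch below is a minimal reconstruction using the standard haversine formula; the ring radii, layer names and test coordinates are hypothetical, not the values used at Falkland.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical concentric rings: (radius in metres, audio layer triggered).
RINGS = [(10, "ghost_dialogue"), (25, "sound_effects"), (50, "ambient_bed")]

def active_layer(player, centre):
    """Return the layer for the innermost ring containing the player,
    or None when the player is outside all rings."""
    d = haversine_m(player[0], player[1], centre[0], centre[1])
    for radius, layer in RINGS:
        if d <= radius:
            return layer
    return None
```

Each GPS update re-evaluates the player’s ring, and crossing a ring boundary starts or stops the corresponding generative layer, matching the entering/leaving behaviour described for the game regions.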
Synchronicity between the audio, visuals and player
movement is paramount for the application, as tight
coupling between all sensory modalities provides a
sense of realism and immersion [8]. Utilising location technology such as GPS to trigger content delivery enables a subconscious interaction with the game space and physical location. Hence the
technology doesn’t interfere with the experience and
makes the interaction pervasive and ubiquitous.
Figure 3: Falkland Ghost Hunt
Within the sound design, user interface sounds are also
implemented using the SoundPool class. These sounds
were designed to reinforce the concept of the handset
acting as a paranormal investigative device. Hence
sounds reminiscent of EVP (Electronic Voice
Phenomenon) recorders and radar were used to
complete the role-play aspect of the game. This represents another aspect of audio interactivity, in that it provides feedback to the player about their investigative role in the game space and physical location.
As well as considering interactive and generative audio,
compositional styles in creating suspense and tension
reminiscent of cinema are also implemented in the
sound design. The combination of these factors allows for the creation of an engaging, narrative led audio experience for the player that supports the game architecture and virtual world overlay in the given location.
4 EVALUATION
A commercial evaluation of the Falkland Ghost Hunt
was undertaken in the grounds of Falkland Castle,
Scotland. The aim of the trial was to assess the
commercial viability of the application and to evaluate
the players’ experience of the game. The trial ran for 12
days with 319 people playing the game. Participant
ages ranged from 6 to 65+ years with varying
technological and cultural backgrounds. Hence player
demographics were wide and diverse. After completing
the game mission, participants completed a
questionnaire consisting of open-ended questions and
quantitative five point scale (1–5) questions.
Approximately 98% of people responded that they liked
the game and 71% felt they had learned something
about the site. The average enjoyment rate was 4.4 out
of 5 (5 stating they enjoyed it very much). Overall this
is a favourable result as one of the original aims of the
application was to make history enjoyable and
accessible to a wider audience, especially a younger
one:
“It occupied the children with historical information…”
(32 year old mom with children of 10 and 6)
“Makes history come alive.” (29 year old)
Another aim in developing the game was to encourage
the participant to experience interactivity with the
virtual game world and historical site. Players were
asked what they liked most about this experience:
“Going around the garden, interactivity, ghosts,
historical aspect.” (8 and 12 year olds)
“(The game was) easy to use, something
different/modern/interactive.” (28 year old)
“Interactivity, visuals, audio.” (44 year old)
“Innovative, fun, the use of technology in a traditional
setting.” (40 year old man)
Players also responded positively to the overall sound
design:
“Sound effects were tremendous, very atmospheric.”
(29 year old man)
“I liked the sounds.” (12 year old)
“I liked listening to the voices, fun and scary.” (13 year
old)
“I liked the sound, birds, atmospheric sounds.” (60 year
old woman)
Therefore the sound design contributed to participants
being able to engage and interact with the historical site
in a way that was immersive and enjoyable for all ages
[6].
5 DISCUSSION
Interactive and generative audio for location aware
historical applications can facilitate the engagement of
players in the virtual world and historical site. The
responsiveness of the audio to the physical location and
the generative, ever changing atmospheric background
together with visual clues encourage an involvement
with historical events. However, in regards to
generative audio there are limitations and difficulties.
Non-linearity does pose the problem of the inability to
create the tension build-up and resolution popular in linear forms, which may limit the composer’s control of musical elements. This may make it difficult to map the player’s potential emotional progression during gameplay, which would aid sound design composition
[3]. Another difficulty for location aware augmented
reality sound designers and composers is the lack of standardisation across hardware devices, which may differ in memory and processing power. This
results in variable playback of sound content and
quality. As well as device dependency, processing
constraints result in the inability of real-time DSP
(Digital Signal Processing) mixing of effects on the
Android platform.
Additionally, GPS inaccuracies or missed updates can prevent audio files from being triggered in the specified locations, thereby disrupting the experience. However, in spite of the difficulties of
game audio and current mobile platform technology,
adaptive interactive game audio in location aware
augmented reality gaming can result in a creative and
entertaining experience.
6 CONCLUSIONS AND FUTURE WORK
Location aware game audio that includes cinematic
soundscapes, interactivity and non-linear generative
audio together with an engaging virtual world can
successfully be created on the mobile platform. Future
work in this area could focus on the inclusion of
variability within audio file playback. Changing the speed of audio playback would change the pitch, while the addition of real-time granulation would change audio file timbres. This would create sound variability and file reusability, hence reducing the need for additional memory when creating complex soundscapes.
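The relationship between playback speed and pitch mentioned here is straightforward to quantify: resampled playback at rate r shifts the pitch by 12·log2(r) semitones. A small sketch of this standard formula (not code from the prototype):

```python
import math

def rate_to_semitones(rate):
    """Pitch shift in semitones produced by resampled playback at `rate`x."""
    return 12.0 * math.log2(rate)

def semitones_to_rate(semitones):
    """Playback rate required for a desired pitch shift in semitones."""
    return 2.0 ** (semitones / 12.0)

print(rate_to_semitones(2.0))            # 12.0 (one octave up)
print(round(semitones_to_rate(1.0), 4))  # 1.0595 (one semitone up)
```

A handful of wavelets replayed at a few such rates would thus yield many perceptually distinct variants without additional audio files in memory.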
REFERENCES
[1] Beilharz, K. “Interactively Determined Generative Sound Design for Sensate Environments: Extending Cyborg Control”, Y. Pisan (ed.), Interactive Entertainment '04, pp. 11–18 (2004).
[2] Collins, K., Game Sound: An introduction to the
history, theory and practice of video game music
and sound design. MIT Press, USA, (2008).
[3] Glassner, A.S., Interactive storytelling:
techniques for 21st century fiction. Coyote Wind,
Canada, (2004).
[4] Guerraz, A., Lemordant, J. “Indeterminate Adaptive Digital Audio for Games on Mobiles”, in K. Collins (ed.), From Pac-Man to Pop Music. Ashgate, (2007).
[5] Griesinger, D.H. “Concert Hall Acoustics and Audience Perception”, IEEE Signal Processing Magazine, March (2007).
[6] www.hauntedplanet.com
[7] Lasorsa, Y., Lemordant, J. “An Interactive
Audio System for Mobiles” Audio Engineering
Society Convention Paper, Presented at the
127th Convention October 9–12 New York, NY,
USA (2009).
[8] Laurel, B., Computers as theatre, New York,
Addison-Wesley, (1993)
[9] Paterson, N., Naliuka, K., Jensen, S.K., Carrigy,
T., Haahr, M., Conway, F. “Spatial Audio and
Reverberation in an Augmented Reality Game
Sound Design,” 40th AES Conference: Spatial
Audio, Tokyo, Japan, October 8-10, 2010, Audio
Engineering Society, (2010).
[10] Pfeifer, T., Savage, P., Robinson, B. “Managing
the Culloden Battlefield Invisible Mobile
Guidance Experience”, MUCS2009, June 15,
Barcelona, Spain (2009).
[11] Reid, J., Geelhoed, E., Hull, R., Cater, K.,
Clayton, B. “Parallel Worlds: Immersion in
location-based experiences”. CHI '05 extended
abstracts on Human factors in computing
systems, (2005).
[12] Shepard, M. “The Tactical Sound Garden [TSG]
Toolkit”. International Conference on Computer
Graphics and Interactive Techniques, ACM
SIGGRAPH 2007 p 219 (2007).
[13] www.soundwalk.com
Paterson et al. Location Aware Interactive Game Audio
... However, the taxonomy falls short in categorizing more complex game-based interaction with sound, in particular interaction that takes place within an augmented-reality style audio setting, which easily ends up using elements from all four categories (e.g. Cohen et al. 2004;Ekman et al. 2005;Moustakas et al. 2009;Paterson et al. 2010;Paterson et al. 2011). Moreover, since the taxonomy puts sound in focus, it applies best to audio-only or audio-mostly concepts and the supportive function that sound plays in most audiovisual contexts falls outside the framework. ...
Article
Pervasive games break the boundary between digital and physical to make use of elements in the real world as part of the game. One form of pervasive games are locative mobile games, which utilize physical movement as game control. To facilitate eyes-free interaction during play, these games benefit from exploring sound-based content. However, it is currently unclear what type of sound-based interaction is feasible to the general audience. Another consideration is which sound design strategies best support the goal of situated experiences, and how to design sound that supports game experiences drawing upon location-awareness, and intermixing virtual content with physical reality.A first generation of locative mobile games is already commercially available. The present contribution analyzes seven commercially available locative games (Ingress; Shadow Cities; Zombies, Run!; Inception the App; The Dark Knight Rises Z+; CodeRunner) and summarizes the sound design strategies employed to contextualize game content in real-world. Comparison to current themes in contextualized audio research indicates similarities but also challenges some assumptions regarding audio-heavy gameplay. The findings illustrate the need for simplicity regarding audio challenges, but generally confirm the view of audio-based gameplay as a facilitator of mobility. Sound is also centrally involved in shaping contextualized experiences, forging links between the physical and digital world, and indexing game content to context through functionality, verbal references, spatialization, and remediation. The article discusses two complementary strategies to systematically manipulate the physical-digital relationship, and to promote strongly situated experiences.
... Location-aware games can use spatial audio to increase engagement of players (Paterson et al. 2011). Situated or location-based games (Gaye et al. 2003, Magerkurth et al. 2005) are a fertile domain for AAR (Cater et al. 2007), sound gardens, including cross-modal applications such as those mentioned earlier. ...
Chapter
The previous chapter outlined the psychoacoustic theory behind cyberspatial sound, recapitulated in Figure 13.1, and the idea of audio augmented reality (AAR), including review of its various form factors. Whereware was described as a class of location- and position-aware interfaces, particularly those featuring spatial sound. This chapter considers application domains, interaction styles, and display configurations to realize AAR. Utility, professional, and leisure application areas are surveyed, including multimodal augmented reality (AR) interfaces featuring spatial sound. Consideration of (individual) wearware and (ubicomp) everyware is continued from the previous chapter, in the context of mobile ambient transmedial interfaces that integrate personal and public resources. Two more “…ware” terms are introduced: anyware here refers to multipresence audio windowing interfaces that use narrowcasting to selectively enable composited sources and soundscape layers, and awareware automatically adjusts such narrowcasting, maintaining a model of user receptiveness in order to modulate and distribute privacy and attention across overlaid soundscapes.
... Location-aware games can use spatial audio to increase engagement of players (Paterson et al. 2011). Situated or location-based games (Gaye et al. 2003, Magerkurth et al. 2005) are a fertile domain for AAR (Cater et al. 2007), sound gardens, including cross-modal applications such as those mentioned earlier. ...
Chapter
The previous chapter outlined the psychoacoustic theory behind cyberspatial sound, recapitulated in Figure 13.1, and the idea of audio augmented reality (AAR), including review of its various form factors. Whereware was described as a class of location- and position-aware interfaces, particularly those featuring spatial sound. This chapter considers application domains, interaction styles, and display configurations to realize AAR. Utility, professional, and leisure application areas are surveyed, including multimodal augmented reality (AR) interfaces featuring spatial sound. Consideration of (individual) wearware and (ubicomp) everyware is continued from the previous chapter, in the context of mobile ambient transmedial interfaces that integrate personal and public resources. Two more “…ware” terms are introduced: anyware here refers to multipresence audio windowing interfaces that use narrowcasting to selectively enable composited sources and soundscape layers, and awareware automatically adjusts such narrowcasting, maintaining a model of user receptiveness in order to modulate and distribute privacy and attention across overlaid soundscapes.
... Location-aware games can use spatial audio to increase engagement of players (Paterson et al. 2011 (Cater et al. 2007), "sound gardens," including cross-modal applications such as those mentioned above. "decibel 151" (Stewart et al. 2008, Magas et al. 2009) was an art installation and music interface that used spatial audio technology and ideas of social networking to turn individuals into walking soundtracks as they moved around each other in a shared real space and listened to each other in a shared virtual space. ...
Chapter
Full-text available
The previous chapter outlined the psychoacoustic theory behind cyberspatial sound, recapitulated in Figure 13.1, and the idea of audio augmented reality (AAR), including review of its various form factors. Whereware was described as a class of location- and position-aware interfaces, particularly those featuring spatial sound. This chapter considers application domains, interaction styles, and display configurations to realize AAR. Utility, professional, and leisure application areas are surveyed, including multimodal augmented reality (AR) interfaces featuring spatial sound. Consideration of (individual) wearware and (ubicomp) everyware is continued from the previous chapter, in the context of mobile ambient transmedial interfaces that integrate personal and public resources. Two more “…ware” terms are introduced: anyware here refers to multipresence audio windowing interfaces that use narrowcasting to selectively enable composited sources and soundscape layers, and awareware automatically adjusts such narrowcasting, maintaining a model of user receptiveness in order to modulate and distribute privacy and attention across overlaid soundscapes.
... Location-aware games can use spatial audio to increase engagement of players (Paterson et al. 2011). Situated or location-based games (Gaye et al. 2003, Magerkurth et al. 2005) are a fertile domain for AAR (Cater et al. 2007), sound gardens, including cross-modal applications such as those mentioned earlier. ...
Chapter
The previous chapter outlined the psychoacoustic theory behind cyberspatial sound, recapitulated in Figure 13.1, and the idea of audio augmented reality (AAR), including review of its various form factors. Whereware was described as a class of location- and position-aware interfaces, particularly those featuring spatial sound. This chapter considers application domains, interaction styles, and display configurations to realize AAR. Utility, professional, and leisure application areas are surveyed, including multimodal augmented reality (AR) interfaces featuring spatial sound. Consideration of (individual) wearware and (ubicomp) everyware is continued from the previous chapter, in the context of mobile ambient transmedial interfaces that integrate personal and public resources. Two more “…ware” terms are introduced: anyware here refers to multipresence audio windowing interfaces that use narrowcasting to selectively enable composited sources and soundscape layers, and awareware automatically adjusts such narrowcasting, maintaining a model of user receptiveness in order to modulate and distribute privacy and attention across overlaid soundscapes.
... It was further shown that overlapping dialogue and background noises does not interfere with the listeners' perception of the scene being depicted or the spoken dialogue between characters. [5,6] showed the importance of audio for environmental and condition changes in interactive applications. This indicated that in fields where there has been a strong historical link between audio and visual cues, especially in gaming where visuals are often given a much higher priority, audio-only versions can succeed just as well as visual cues if attention to detail is made regarding environment and object changes. ...
Article
Full-text available
This paper presents a study undertaken to evaluate user ratings on auditory feedback of sound source selection within a multi-track auditory environment where sound placement is controlled by a gesture control system. Selection confirmation is presented to the participants via changes to the audio mixture over the stereo loud-speakers or feedback over a single ear bluetooth headset. Overall five different methods are compared and results of our study are presented. A second task in the study was given to evaluate a pre-selection method to help find sound sources before selection, the participant altered a width control of the pre-selection that was heard in the bluetooth headset. Results indicate a specific value ir-respective of genre that the pre-selection should be set to whilst the selection confirmation can be perceived to be dependant on genre and instrumentation.
... Here, the mobile device is portrayed as a device, and pointing/sweeping gestures with the device are translated into listening to the game world. Within pervasive mobile gaming, Ekman[15][16]portrayed the mobile phone as a magic shaman drum, and Paterson and colleagues[43][44]used the mobile as a multitool for paranormal investigation (including EVP recorder). However, it remains unclear whether prop-based designs can fully replace disassociating immersive techniques in these games. ...
Conference Paper
A commonly encountered argument for using sound in games is that sound increases the sense of immersion of a game. Immersion refers to an experience of being drawn into the game world, a process that is centrally dependent on the player's simultaneous removal from everyday life, also called disassociation. The immersive power of sound has been linked to its capacity to disassociate: to transport the player into a virtual reality which feels more real, more plausible and more consequential than his/her real physical surroundings, which is especially problematic in mixed-reality and pervasive gaming. This paper draws on literature to trace exactly how sound contributes to immersion, and proposes how sound design can create immersion without disassociation. It also identifies engaging aesthetic opportunities that require non-immersive sound, which are currently being overlooked because of the assumption that good sound needs to be immersive.
Article
Sensate environments provide a medium for humans to interact with space. This interaction includes ambient/passive triggering, performative artistic interaction and physical sensate spaces used for games and interactive entertainment. This paper examines aural representations of data activated by interaction, shaped by user activities and social environmental behaviours. Generative art forms, for example genetic algorithms and evolutionary design systems, provide methodologies for creating new material. This paper addresses ways in which generative innovations can relate to human experience in a comprehensible representation using constructs shared by behaviour and sound. The purpose of site-specific generative sound is to respond intelligently to human participants with feedback: sonic indicators of social activity. The effects of the environment contribute to the generative process: the number of occupants, busy-ness (motion), environmental measurements (e.g. temperature, position relative to specific locations in the space) or direct controls such as proximity to sensor walls and movement on pressure-sensitive flooring. An examination of comprehensible correspondences between sonic parameters and socio-spatial behaviour articulates a methodology for the auralisation of social data. Human interaction contributes to the initiation and modification of generative procedures. The central concern, to make generative responsive sound/music clearly indicative of its social context, is applicable in virtual environments as well as wireless sensate physical spaces. Sensate spaces, as a growing and cutting-edge phenomenon, require constructs for expedient computational processing and purposes to which the vast stream of sensed data can be meaningfully applied for the edification of users.
Article
Implementations of location based services have long been researched and well understood in the research domain; however, real examples of useful applications of location aware technology are still scarce today. In this paper we present how that knowledge and experience in research was applied to manage the battlefield site at Culloden, bringing the visitor experience there to a new level. Using state of the art GPS triggering and a standards based approach, the system provides a consistent user experience with reliable results in terms of availability of relevant content. This paper discusses not only the technology but also the impact of real live deployment on the implementation.
Conference Paper
This paper analyses the stages and circumstances for immersion based on quantitative and qualitative feedback from 700 people who took part in a three week long public trial of a location-based audio drama. Ratings of enjoyment, immersion and how much history came alive all scored highly and people often spent up to an hour in the experience. A model of immersion as a cycle of transient states triggered by events in the overall experience is defined. This model can be used to design for immersion in future experiences.
Conference Paper
In this paper, the development and implementation of a rich sound design, reminiscent of console gaming, for a location aware game, Viking Ghost Hunt (VGH), is presented. The role of audio was assessed with particular attention to its effect on immersion and emotional engagement. Because immersion also involves interaction and the creation of presence (the feeling of being in a particular place), these aspects of the sound design were also investigated. Evaluation of the game was undertaken over a three-day period with the participation of 19 subjects. The results imply that audio plays an important role in immersing a player within the game space and in emotional engagement with the virtual world. However, challenges remain regarding GPS inaccuracy and unpredictability, as well as device processor constraints, in creating an accurate audio sound field and rendering audio files in real time.
Article
A mobile game is a video game played on a mobile phone. The mobile game market is clearly regarded as a market with a future [11], as the substantial investments made by major global publishers in this segment testify. Mobile phones are true mass-market gaming platforms: mobile games can be downloaded via the mobile operator's radio network, WLAN, Bluetooth or a USB connection. Mobile phones give game developers a unique set of tools and behaviours, and with a little creativity, developers can make some really great games for this platform. The challenges posed by portable devices are numerous, but the biggest complaint of many in the industry is the lack of standards. It is necessary to adapt each game to the various models of existing terminals, which do not offer the same features (memory, processing power, keys, etc.). Consequently, the number of different versions of a game rises to several hundred. As we will see, Java Micro Edition (J2ME) attempts with some success to solve these problems.
Article
An abstract is not available.
Article
An attempt is made to clarify why highly rated concert halls have widely different acoustics and how their acoustic properties vary greatly within the hall while retaining acceptable sound quality. Furthermore, it is explored how the beneficial effects of reverberation can be quantified and how concert hall acoustics can be improved through architectural and DSP (digital signal processing) solutions. In answering these questions, the main observation is that classical reverberation theory predicts that large concert halls will have a low value of direct-to-reverberant ratio (d/r). In spite of the low d/r, the gradual onset of reverberation allows the listener in most seats to localize soloists while still allowing notes to be separated. In the best halls, it is possible to separately perceive the reverberation between notes.