Multisensory Experiences for Singers:
a First Tangible Prototype
Abstract
When performing a piece of music, body, senses, and
cognition are closely connected to each other. This
connection, however, is not always particularly evident.
As a consequence, it is extremely important for
musicians to be able to control their performance by
relying on other sensory modalities that complement
the auditory cue. Sight, in particular, is paramount for
most instrumentalists, as it helps them learn new
techniques, recognise errors, correct expressiveness,
and memorise complex passages. As opposed to other
musicians, singers can almost exclusively rely on the
auditory feedback coming from their voice to adjust
their singing. Starting from this observation, we
conducted a user study to find possible solutions to
provide singers with further feedback during their
performance. This paper is a preliminary study in this
direction.
Author Keywords
Tangible User Interface; Physicality; Multisensory
Interaction; Breath-controlled Interface
ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g.,
HCI): Miscellaneous.
Permission to make digital or hard copies of part or all of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that
copies bear this notice and the full citation on the first page. Copyrights
for third-party components of this work must be honored. For all other
uses, contact the Owner/Author. Copyright is held by the
owner/author(s).
UbiComp/ISWC'16 Adjunct, September 12-16, 2016, Heidelberg, Germany
ACM 978-1-4503-4462-3/16/09.
http://dx.doi.org/10.1145/2968219.2968263
Assunta Matassa
Computer Science Department,
University of Turin
Corso Svizzera 185
Torino, 10149, Italy
matassa@di.unito.it
Fabio Morreale
School of Electronic Engineering
and Computer Science,
Queen Mary University of London
Mile End Road, London E1 4NS
f.morreale@qmul.ac.uk
INTRODUCTION
Music performance is a physical activity that can take
the form of singing a song, playing a piano piece,
strumming a guitar, and so on. Such activity is
particularly complex and demanding in terms of
physical and cognitive skills, especially as it often
occurs in situations of stress and anxiety. Controlling
the range of skills needed to correctly play pitch and
rhythm over an extended period of time is indeed
particularly hard from a cognitive and motor point of
view [8,13]. A musician has to read musical notation,
process this information, transform it into a series of
motor activities, execute movements, and listen to the
musical output. As a consequence, musicians are
required to engage in more than one interaction
modality at a time, as different senses - hearing, sight,
and touch - come into play when performing a musical piece.
A preliminary investigation that we conducted
confirmed that visual and tactile feedback are
extremely important when performing music. Vision
supports musicians' performance by helping the
coordination between body parts and anticipating
potential mistakes. Touch offers musicians a tangible
perception of their instrument, its boundaries, and its
capabilities, and makes it easier to reproduce the
metaphor of the symbiont [9], i.e. the fusion between
the human body and the instrument body. Musicians
receive feedback directly from the instrument and can
enjoy it without having to interrupt this symbiosis. An
interaction of this type is only possible through direct
contact between the two bodies.
This concept, however, does not apply to the voice. As
with other instruments, singing requires the development
of techniques to master control over pitch and rhythm,
but singers can only rely on the acoustic feedback
produced by their voice. This evidence led us to
search for new ways to support singers using
technology [13].
BACKGROUND INFORMATION
In past centuries, music often played the role of a
ritual that was orally preserved. Each act of transmission
involved an interpretation of the piece, which was
renewed and personalised. As a consequence, the main
focus was on the singing performance rather than on
the composition itself. To add a further distinctive
trait to their performance, musicians eventually
explored novel techniques. One example is cantu a
tenore, a polyphonic folk singing tradition from
Sardinia (Italy). As represented in Figure 1, singers
stand in a close circle: the solo singer sings a piece of
prose or a poem while the other voices sing an
accompanying chorus [1]. The interesting point is that
singers are physically connected to each other to feel
one another's vibrations and adjust their voices [4].
Also, to be able to hear their own and the other
singers' voices at the same time, performers cover one
of their ears with one hand.
Taking inspiration from this example, in a different
design context, a number of previous studies [1, 4]
analysed touch as a means to provide users with an
enhanced experience in ubiquitous applications. The
aim of this preliminary study was to conduct user
research to understand how we can adopt tangible and
visual feedback to improve singers' awareness when
performing. This paper represents a preliminary step in
this direction, focused on the challenge from a design
perspective.
Figure 1: An evocative illustration of cantu a tenore (image: http://alicerama.jobrary.com/portfolio).
PROBLEM STATEMENT
Singing is a complex activity that involves several
human organs, i.e. the larynx, the supraglottic vocal
tract, the tracheobronchial tree, lungs and thorax, the
abdomen, the musculoskeletal system, and the psycho-
neurological system in general. The coordination of
most of these organs is important for breathing [9, 3],
whose function is ensured by well-trained abdominal-
thoracic muscles [6]. Also, the abdomen and the thorax
are considered the source of the voice, because they
are the source of a direct stream of air through the
vocal folds [6].
To summarise, the voice can be considered a particular
instrument made of different components: a power
supply (the lungs), an oscillator (the vocal folds), and a
resonator (the larynx, pharynx, and mouth). All these
components work together to ensure the generation of
voice. However, most of these organs are hidden inside
our body. As a consequence, the singer has to control
an invisible and intangible instrument, without the
series of visual and tactile feedback that other
instrumentalists have. These deficiencies make singing
a particularly demanding activity, as singers need to
control their instrument by interacting with something
impalpable and invisible.
We tackled this issue from a design perspective. The
aim is to identify a design concept to support the
singing experience by offering real time visual and
tactile feedback of the performance.
The next section provides more details about the user
studies and presents two prototypes. The first
prototype is a visual interface displaying information
about the respiratory activity of the singer in real time.
The second prototype is a tangible object that enhances
singers' vibrations. In the final section, discussion and
future work are presented.
OUR PROPOSAL
Two examples of interactive systems were envisioned
to offer visual and tactile feedback of singers'
performance in real time. Although a variety of
parameters are involved in singing, these initial
prototypes narrow the investigation to breathing and
voice vibrations.
Visualising Respiration
The idea of the first prototype is to collect respiratory
information and display it in real time on a tablet. By
using respiratory biofeedback sensors similar to those
used in [12], we are able to detect a set of attributes of
respiration (depth, rate, thoracic/abdominal ratio). In
our prototype, respiratory sensors are placed at two
main points: one around the chest and one around the
abdomen. Chest expansion and contraction can be
detected by comparing the relative values of these
sensors. The collected data are transformed into
animated visuals displayed on a graphic interface. The
representation has to show in real time how the air is
flowing, as well as other parameters that can support
the singers in performing better. We propose two
different representations: a realistic one and a
metaphorical one. The realistic prototype is based on a
representation of the chest and other organs involved
in respiration, which are modelled and animated in
real time using data coming from the sensors (Figure
2a). In the metaphorical representation, the gathered
data are represented in an abstract way: for example
(Figure 2b), respiration is represented as a 3D blob in
which contraction, rotation, and speed are matched
with data related to air flow, direction, and intensity.
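As a rough illustration of this sensing-to-visuals mapping, the sketch below shows how two normalised stretch-sensor readings could be turned into the three blob controls mentioned above. The sensor stub, value ranges, and mapping functions are our illustrative assumptions, not the exact pipeline of the prototype.

```python
# A minimal sketch of the respiration-to-blob mapping (illustrative only).
import math
import time

def read_sensors(t):
    """Stand-in for the chest and abdomen stretch sensors.
    Returns normalised expansion values in [0, 1]."""
    chest = 0.5 + 0.4 * math.sin(2 * math.pi * 0.25 * t)          # ~15 breaths/min
    abdomen = 0.5 + 0.3 * math.sin(2 * math.pi * 0.25 * t - 0.4)  # slightly lagged
    return chest, abdomen

def blob_parameters(chest, abdomen, prev_depth, dt):
    """Map raw readings to the three blob controls from the text:
    contraction (breath depth), rotation (thoracic/abdominal ratio),
    and speed (air flow, i.e. the change of depth over time)."""
    depth = (chest + abdomen) / 2.0        # overall expansion
    ratio = chest / max(abdomen, 1e-6)     # thoracic/abdominal ratio
    flow = (depth - prev_depth) / dt       # positive while inhaling
    return {"contraction": 1.0 - depth,
            "rotation": ratio,
            "speed": abs(flow)}, depth

if __name__ == "__main__":
    prev_depth, dt = 0.5, 0.1
    for step in range(50):
        t = step * dt
        chest, abdomen = read_sensors(t)
        params, prev_depth = blob_parameters(chest, abdomen, prev_depth, dt)
        print(f"t={t:4.1f}s  contraction={params['contraction']:.2f}  "
              f"rotation={params['rotation']:.2f}  speed={params['speed']:.2f}")
        time.sleep(dt)
```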
Touching Vibrations
The second prototype aims at providing singers with
augmented vibrotactile feedback. Augmenting vocal
performances with vibrotactile feedback is a common
practice in some choral singing traditions, such as the
already-mentioned cantu a tenore. Borrowing the
method from cantu a tenore singers, the idea is to
enhance the perception of notes by feeling a resonating
body. The prototype we are currently developing
collects vibrations with piezo microphones positioned
on the singer's throat (Figure 3). This information is
communicated to an ad-hoc device (approximately the
size of a basketball) that amplifies the vibrations. To do
so, a tactile transducer is embedded inside the object
and driven by an amplifier. By touching the object, the
singer can feel the vibrations of her own (or other
singers') voice. This device allows singers to have a
more accurate sense of their voice through the sense of
touch, which, with a few exceptions, is currently
overlooked when it comes to singing training and
performance.
Figure 2: Realistic (a) and metaphorical (b) real-time
visualisation of respiration.
USER STUDY
We conducted a user study to test a preliminary
version of our early prototype. We interviewed
eight professional singers from a music school. We
used drawings to explain our idea, showing a present
scenario representing how they currently sing and a
future scenario describing how our project would
affect these modalities. We showed them the
illustrations in Figures 2 and 3 and asked them
questions about how they perceived the possible use
of the prototypes during training activities.
The results showed that singers were fascinated
by the idea of a realistic representation of their body
as a support for understanding their performance.
Some of them reported negative feelings, mostly
connected to losing their body awareness. In general,
the second prototype was considered more interesting,
partially due to the tangible nature of the interface.
Interviewees appreciated the possibility of having an
external interface that faithfully represents one of their
body organs, seeing it as an opportunity to increase
their proprioception. A tangible prototype could
improve learning and can help users accomplish tasks
with abstract content, reducing complexity and
enhancing understanding [10].
FUTURE WORK & CONCLUSIONS
Taking into account the comments collected in the user
study, the next step will be to design and develop a
high-fidelity prototype of the tangible interface. We
will conduct a co-design session with singers to rethink
the prototype's features, such as form factor, materials,
and the types of feedback that the prototype has to
provide to users.

While maintaining our preliminary idea of using
vibration as a direct representation of breathing
activity, we are open to finding new ways to map the
correlation between performance and its representation.

We expect to achieve a final prototype based on
tangible feedback that is able to raise singers'
awareness, supporting them in interpreting their
performances, establishing deeper bonds with their
personal instrument, and growing awareness of their
body and their voice.
Figure 3: A contact microphone detects voice vibrations,
which are then processed, amplified, and transmitted to a
vibrotactile transducer embedded in a custom object.
References
1. Becker, B. (2003). Marking and crossing borders:
bodies, touch and contact in cyberspace. Body,
Space & Technology.
2. Blake, J. (2008). Unesco's 2003 Convention on
Intangible Cultural Heritage: The implications of
community involvement, 45-50.
3. Cho, T. S. (2012). Study on Breathing Method for
Improving Singing Skills. In Green and Smart
Technology with Sensor Applications, Springer
Berlin Heidelberg, 372-377.
4. Dimitropoulos, K., Manitsaris, S., Tsalakanidou, F.,
Nikolopoulos, S., Denby, B., Al Kork, S., ... &
Tilmanne, J. (2014). Capturing the intangible: an
introduction to the i-Treasures project. In Proc. of
VISAPP, 773-778.
5. Matassa, A., Console, L., Angelini, L., Caon, M., &
Khaled, O. A. (2015). Workshop on full-body and
multisensory experience in ubiquitous interaction.
In Proc. of UbiComp 2015.
6. Matassa, A., & Morreale, F. (2016). Supporting
Singers with Tangible and Visual Feedback. In Proc.
of AVI, 328-329.
7. Miller, R. (1986). The Structure of Singing: System
and Art in Vocal Technique.
8. Morreale, F. (2015). Designing New Experiences of
Music Making. PhD Thesis, University of Trento
9. Norman, D. (2007). The Design of Future Things.
Basic Books, New York.
10. Schneider, B., Jermann, P., Zufferey, G., &
Dillenbourg, P. (2011). Benefits of a tangible
interface for collaborative learning and
interaction. IEEE Transactions on Learning
Technologies, 4(3), 222-232.
11. Sundberg, J. (1977). The acoustics of the singing
voice. Scientific American.
12. Vidyarthi, J., Riecke, B. E., & Gromala, D. (2012).
Sonic Cradle: designing for an immersive
experience of meditation by connecting respiration
to music. In Proc. of DIS, 408-417.
13. Zatorre, R. J., Chen, J. L., & Penhune, V. B. (2007).
When the brain plays music: auditory-motor
interactions in music perception and production.
Nature Reviews Neuroscience, 8(7), 547-558.