Emma Frid
KTH Royal Institute of Technology / IRCAM

PhD

About

38 Publications
60,624 Reads
248 Citations
Since 2016: 31 research items, 241 citations
[Chart: citations per year, 2016–2022]

Publications (38)
Article
Full-text available
Research on Accessible Digital Musical Instruments (ADMIs) has highlighted the need for participatory design methods, i.e., to actively include users as co-designers and informants in the design process. However, very little work has explored how pre-verbal children with Profound and Multiple Learning Disabilities (PMLD) can be involved in such processes. I...
Chapter
Computer-Aided Architectural Design (CAAD) finds its historical precedents in technological enthusiasm for generative algorithms and architectural intelligence. Current developments in Artificial Intelligence (AI) and paradigms in Machine Learning (ML) bring new opportunities for creating innovative digital architectural tools, but in practice this...
Conference Paper
This paper presents exploratory work on sonic and visual representations of heartbeats of a COVID-19 patient and a medical team. The aim of this work is to sonify heart signals to reflect how a medical team comes together during a COVID-19 treatment, i.e. to highlight other aspects of the COVID-19 pandemic than those usually portrayed through sonif...
Conference Paper
In this paper, we introduce sonification as a less intrusive method for preventing shoplifting. Music and audible alerts are common in retail, and auditory monitoring of a store can aid clerks and reduce losses. Despite these potential advantages, sonification of interaction with goods in retail is an undeveloped field. We conducted an experiment f...
Conference Paper
Full-text available
The new developments in Information and Communication Technologies (ICT) and Artificial Intelligence (AI) bring revelations of emerging smart cities. However, AI has not yet been integrated in Computer Aided Design (CAD), Building Information Modelling (BIM) or Geographic Information Systems (GIS) software. There are experiments with AI in urban mo...
Article
Full-text available
This paper presents two experiments focusing on perception of mechanical sounds produced by expressive robot movement and blended sonifications thereof. In the first experiment, 31 participants evaluated emotions conveyed by robot sounds through free-form text descriptions. The sounds were inherently produced by the movements of a NAO robot and wer...
Article
Full-text available
A class of master of science students and a group of preschool children codesigned new digital musical instruments based on workshop interviews involving vocal sketching, a method for imitating and portraying sounds. The aim of the study was to explore how the students and children would approach vocal sketching as one of several design methods. Th...
Article
Full-text available
Unfortunately, some errors and imprecise descriptions were made in the final proofreading phase, and the author, therefore, wishes to make the following corrections to this paper [...]
Conference Paper
Full-text available
This paper presents a study on the composition of haptic music for a multisensory installation and how composers could be aided by a preparatory workshop focusing on the perception of whole-body vibrations prior to such a composition task. Five students from a Master's program in Music Production were asked to create haptic music for the installati...
Article
Full-text available
Existing works on interactive sonification of movements, i.e., the translation of human movement qualities from the physical to the auditory domain, usually adopt a predetermined approach: the way in which movement features modulate the characteristics of sound is fixed. In our work we want to go one step further and demonstrate that the user role...
Conference Paper
Full-text available
Short online videos have become the dominating media on social platforms. However, finding suitable music to accompany videos can be a challenging task to some video creators, due to copyright constraints, limitations in search engines, and required audio-editing expertise. One possible solution to these problems is to use AI music generation. In t...
Article
Full-text available
Current advancements in music technology enable the creation of customized Digital Musical Instruments (DMIs). This paper presents a systematic review of Accessible Digital Musical Instruments (ADMIs) in inclusive music practice. History of research concerned with facilitating inclusion in music-making is outlined, and current state of developments...
Conference Paper
Full-text available
Sound Forest is a music installation consisting of a room with light-emitting interactive strings, vibrating platforms and speakers, situated at the Swedish Museum of Performing Arts. In this paper we present an exploratory study focusing on evaluation of Sound Forest based on picture cards and interviews. Since Sound Forest should be accessible fo...
Article
Full-text available
In this paper we present three different experiments designed to explore sound properties associated with fluid movement: (1) an experiment in which participants adjusted parameters of a sonification model developed for a fluid dance movement, (2) a vocal sketching experiment in which participants sketched sounds portraying fluid versus nonfluid mo...
Article
Full-text available
The original version of this article unfortunately contained mistakes. The presentation order of Fig 5 and Fig. 6 was incorrect. The plots should have been presented according to the order of the sections in the text; the “Mean Task Duration” plot should have been presented first, followed by the “Perceived Intuitiveness” plot.
Conference Paper
Full-text available
This paper describes a survey of accessible Digital Musical Instruments (ADMIs) presented at the NIME, SMC and ICMC conferences. It outlines the history of research concerned with facilitating inclusion in music making and discusses advances, current state of developments and trends in the field. Based on a systematic analysis of DMIs presented at...
Conference Paper
Full-text available
In this paper we present a pilot study carried out within the project SONAO. The SONAO project aims to compensate for limitations in robot communicative channels with an increased clarity of Non-Verbal Communication (NVC) through expressive gestures and non-verbal sounds. More specifically, the purpose of the project is to use movement sonification...
Article
Full-text available
In this paper we present a study on the effects of auditory- and haptic feedback in a virtual throwing task performed with a point-based haptic device. The main research objective was to investigate if and how task performance and perceived intuitiveness is affected when interactive sonification and/or haptic feedback is used to provide real-time f...
Article
Full-text available
The 11th Summer Workshop on Multimodal Interfaces eNTERFACE 2015 was hosted by the Numediart Institute of Creative Technologies of the University of Mons from August 10th to September 2015. During the four weeks, students and researchers from all over the world came together in the Numediart Institute of the University of Mons to work on eight sele...
Conference Paper
Full-text available
The primary goal of this study was to estimate the number of female authors in the academic field of Sound and Music Computing. This was done through gender prediction from authors' first names for proceedings from the ICMC, SMC and NIME conferences, and by sonifying these results. Although gender classification by first name can only serve as an e...
Conference Paper
Full-text available
This paper presents findings from an exploratory study on the effect of auditory feedback on gaze behavior. A total of 20 participants took part in an experiment where the task was to throw a virtual ball into a goal in different conditions: visual only, audiovisual, visuohaptic and audio-visuohaptic. Two different sound models were compared in the...
Conference Paper
Full-text available
In this study we conducted two experiments in order to investigate potential strategies for sonification of the expressive movement quality "fluidity" in dance: one perceptual rating experiment (1) in which five different sound models were evaluated on their ability to express fluidity, and one interactive experiment (2) in which participants adjus...
Article
Full-text available
In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with other movement characteristics than a model characterized by abrupt variation in amplitude and that these associ...
Conference Paper
Full-text available
In this paper we present a study of the interaction with a large sized string instrument intended for a large installation in a museum, with focus on encouraging creativity,learning, and providing engaging user experiences. In the study, nine participants were video recorded while interacting with the string on their own, followed by an interview f...
Conference Paper
Full-text available
Ilinx is a multidisciplinary art/science research project focusing on the development of a multisensory art installation involving sound, visuals and haptics. In this paper we describe design choices and technical challenges behind the development of the haptic technology embedded into six augmented garments. Starting from perceptual experiments, c...
Conference Paper
Full-text available
This paper presents a brief overview of work-in-progress for a study on correlations between visual and haptic spatial attention in a multimodal single-user application comparing different modalities. The aim is to gain insight into how auditory and haptic versus visual representations of temporal events may affect task performance and spatial atte...
Conference Paper
Full-text available
In this paper we present a study we conducted to assess physical and perceptual properties of a tactile display for a tactile notification system within the CIRMMT Live Electronics Framework (CLEF), a Max-based modular environment for composition and performance of live electronic music. Our tactile display is composed of two rotating eccentric mas...
Technical Report
Full-text available
Relatively few investigations have yet focused on quality perception of the sonic environment in eateries, i.e. restaurant soundscapes. The aim of this exploratory study is to investigate the correlation between acoustic and perceptual features in such sonic environments. A total of 31 binaural recordings from everyday eateries, divided into th...

Questions (3)
Question
Hello,
I am about to perform a study in which a participant will be shown a stimulus (a sound, a video, or a video with sound) and rate this stimulus on a set of continuous perceptual scales. There will be three different conditions: video-only stimuli, video-and-audio stimuli, and audio-only stimuli (see attached example data). To complicate things further, there are 10 different types of videos, portraying different emotions (e.g. happy, sad, relaxed, frustrated … videos).
I am interested in whether there is a significant effect of condition (video only, video and audio, audio only) for each emotion.
From what I understand, I should be able to analyze each perceptual scale using a two-way repeated measures ANOVA (factors: condition and emotion). However, my concern is that there are so many levels of the second factor (emotion, 10 levels). Will my post-hoc tests be affected by this, in terms of the number of participants required to detect significant effects? And will interaction effects be hard to interpret?
Best,
Emma
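A quick way to see how much the 10 emotion levels inflate the post-hoc burden is to count the pairwise comparisons and the resulting Bonferroni-corrected threshold. The sketch below uses synthetic ratings and hypothetical emotion labels (only numpy and scipy are assumed); it illustrates the multiple-comparisons arithmetic, not an analysis of real data:

```python
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical emotion labels; 10 levels, as in the design described above.
emotions = ["happy", "sad", "relaxed", "frustrated", "angry",
            "calm", "tense", "tired", "excited", "neutral"]
n_participants = 20  # hypothetical sample size

# Exhaustive pairwise post-hoc tests over 10 levels give C(10, 2) comparisons.
pairs = list(itertools.combinations(emotions, 2))
alpha = 0.05
alpha_bonferroni = alpha / len(pairs)
print(len(pairs))                   # 45 pairwise comparisons
print(round(alpha_bonferroni, 5))   # 0.00111 per-test threshold

# Synthetic ratings: one perceptual-scale score per participant and emotion.
ratings = {e: rng.normal(50, 10, n_participants) for e in emotions}

# Bonferroni-corrected paired t-tests (each participant rated every emotion).
significant = [
    (a, b) for a, b in pairs
    if stats.ttest_rel(ratings[a], ratings[b]).pvalue < alpha_bonferroni
]
```

With 45 pairwise tests the per-test alpha drops to roughly 0.0011, which is why detecting small effects in the post-hoc stage requires substantially more participants than a design with fewer factor levels would.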
Question
We are planning an experiment in which we will randomly assign subjects to two groups. Participants in group 1 will perform the same task as participants in group 2, apart from the fact that participants in group 2 will be able to observe a participant from group 1 who is simultaneously performing the task (participants will be in separate rooms connected through a one-way mirror). We will measure 6 different variables for each participant (see attached file for the data structure). Since each participant in group 2 will observe another participant in group 1 during the experiment, I believe that the observations are paired.
We are interested in investigating whether there is a difference between groups 1 and 2 in terms of the 6 measured variables.
My question is: what would be the best analysis method for this setup? I believe I could run paired t-tests for each of the 6 measured variables separately, but is that OK (it would be a total of 6 tests)? I wonder if there is some kind of ANOVA for paired observations like this? I guess I cannot use a MANOVA, since the observations are paired?
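If the per-variable paired t-tests are used, one common way to keep the family-wise error rate in check across the 6 tests is a Holm-Bonferroni step-down correction. The sketch below runs the 6 paired t-tests on synthetic yoked-pair data and applies the correction by hand; the variable names and sample size are hypothetical, and only numpy and scipy are assumed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_pairs = 15  # hypothetical number of yoked participant pairs
variables = ["var1", "var2", "var3", "var4", "var5", "var6"]

# Synthetic data: one value per yoked pair for each measured variable.
group1 = {v: rng.normal(0.0, 1.0, n_pairs) for v in variables}
group2 = {v: rng.normal(0.2, 1.0, n_pairs) for v in variables}

# One paired t-test per variable (observations are paired via the yoking).
pvals = np.array([stats.ttest_rel(group1[v], group2[v]).pvalue
                  for v in variables])

# Holm-Bonferroni step-down adjustment across the 6 tests:
# sort p-values, multiply the k-th smallest by (m - k), and enforce
# monotonicity, capping adjusted values at 1.
order = np.argsort(pvals)
m = len(pvals)
adjusted = np.empty(m)
running_max = 0.0
for rank, idx in enumerate(order):
    running_max = max(running_max, (m - rank) * pvals[idx])
    adjusted[idx] = min(1.0, running_max)
```

Holm's procedure is uniformly less conservative than plain Bonferroni while still controlling the family-wise error rate, so it is often a reasonable default when only a handful of tests are involved.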
Question
I have conducted a 3-alternative forced choice (3AFC) experiment in which participants had to match one out of three visualizations with a specific sound. There were 4 sets of different visualizations (each set consisted of 3 visualizations), and these sets were presented together with 3 different sounds. In other words, each sound was presented 4 times. There were a total of 4 sets × 3 sounds, i.e. 12 stimuli.
Considering that I presented every sound multiple times to each participant (but with different sets of visualizations), is it correct to run a chi-square test to analyze the association between the two variables "sound" and "visualization"?
I am unsure whether this should be considered a repeated measures experiment, since some of the sounds were presented several times. I am wondering if this could cause issues with the chi-square test, since it requires independent observations?
Thank you!
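The independence concern is well founded: a plain chi-square test on a sound × visualization contingency table treats all 12 choices from each participant as independent observations, which they are not. The sketch below only illustrates the mechanics of the test on a pooled table; the counts are fabricated for illustration, and only numpy and scipy are assumed:

```python
import numpy as np
from scipy import stats

# Hypothetical counts: rows = 3 sounds, columns = 3 visualization choices;
# each cell = how often that visualization was chosen for that sound,
# pooled over the 4 sets and all participants (made-up numbers).
observed = np.array([
    [30, 10, 10],
    [ 8, 32, 10],
    [12, 11, 27],
])

chi2, p, dof, expected = stats.chi2_contingency(observed)
print(dof)  # (3 - 1) * (3 - 1) = 4 degrees of freedom
```

If the within-participant dependence is a worry, one possible alternative is to reduce the data to one observation per participant (for example, each participant's modal visualization choice per sound) before testing, rather than pooling all trials.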

Projects (2)
Project
Multi-agent multimodal interaction systems that provide advanced interaction possibilities for manufacturing, for medical applications or for teaching will fundamentally alter the way people work together in the future. In this project we study how the haptic and audio modalities can affect task performance and support divided attention and communication in both single-user and multi-user multimodal virtual environments. The project is funded by the Swedish Research Council (VR). Project leader: Eva-Lotta Sallnäs Pysander