Project

SONICOM - Transforming auditory-based social interaction and communication in AR/VR.

Goal: Immersive audio is our everyday experience of being able to hear and interact with sounds around us. Simulating spatially located sounds in virtual or augmented reality (VR/AR) must be done in a unique way for each individual. This has been the object of extensive research in recent years; nevertheless, several major challenges remain to be tackled, requiring an improved understanding and modelling of human spatial hearing mechanisms. Furthermore, the impact of immersive audio beyond perceptual metrics such as presence and localisation is still a largely unexplored area of research, particularly in relation to social interaction within virtual environments, entering the behavioural and cognitive realms.
SONICOM brings together some of the leading research centres across Europe to tackle these challenges using a multidisciplinary approach. The research combines digital signal processing and artificial intelligence with psychology and perceptual modelling, exploring new forms, levels and dimensions of interaction within auditory-based immersive environments. Furthermore, to reinforce reproducible research and promote future development and innovation in the area of auditory-based social interaction, the SONICOM Ecosystem will be created and shared with the wider community; it will include auditory data closely linked with model implementations and immersive audio rendering components.

---

The SONICOM project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 101017743.


Project log

Isaac Engel
added a research item
Accurately rendering reverberation is critical to produce realistic binaural audio, particularly in augmented reality applications where virtual objects must blend in seamlessly with real ones. However, rigorously simulating sound waves interacting with the auralised space can be computationally costly, sometimes to the point of being unfeasible in real-time applications on resource-limited mobile platforms. Luckily, knowledge of auditory perception can be leveraged to make computational savings without compromising quality. This chapter reviews different approaches and methods for rendering binaural reverberation efficiently, focusing specifically on Ambisonics-based techniques aimed at reducing the spatial resolution of late reverberation components. Potential future research directions in this area are also discussed.
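The core idea mentioned above, rendering late reverberation at a lower spatial resolution (lower Ambisonics order) than the direct sound and early reflections, can be sketched in a few lines. The Python snippet below is only an illustrative outline under assumed conventions (ACN channel ordering, a fixed 80 ms early/late split, random placeholder data in place of real Ambisonics room impulse responses and HRTF-derived decoding filters); it is not the method described in the chapter itself.

```python
import numpy as np

def split_early_late(ambi_rir, split_sample):
    """Split an Ambisonics RIR (channels x samples) into early and late parts."""
    return ambi_rir[:, :split_sample], ambi_rir[:, split_sample:]

def truncate_order(ambi_signal, order):
    """Keep only the channels up to the given Ambisonics order (ACN ordering)."""
    return ambi_signal[:(order + 1) ** 2, :]

def binaural_decode(ambi_signal, decoding_filters):
    """Convolve each Ambisonics channel with its left/right decoding filter and sum.

    decoding_filters: (channels, 2, filter_length) array, e.g. derived from an HRTF set.
    """
    n_ch, n_samp = ambi_signal.shape
    out = np.zeros((2, n_samp + decoding_filters.shape[2] - 1))
    for ch in range(n_ch):
        for ear in range(2):
            out[ear] += np.convolve(ambi_signal[ch], decoding_filters[ch, ear])
    return out

# Placeholder data: a 1 s, 3rd-order (16-channel) Ambisonics RIR and random "decoding filters".
fs = 48000
ambi_rir = np.random.randn(16, fs)
filters_3rd = np.random.randn(16, 2, 256)

# Render early reflections at 3rd order and late reverberation at 1st order (4 channels).
split = int(0.08 * fs)                      # assumed 80 ms early/late split point
early, late = split_early_late(ambi_rir, split)
late_low = truncate_order(late, order=1)

out = np.zeros((2, ambi_rir.shape[1] + filters_3rd.shape[2] - 1))
early_bin = binaural_decode(early, filters_3rd)
late_bin = binaural_decode(late_low, filters_3rd[:4])
out[:, :early_bin.shape[1]] += early_bin
out[:, split:split + late_bin.shape[1]] += late_bin   # re-align the late part in time
```

In this toy setup the saving comes from the late part: the longest portion of the impulse response is decoded with 4 channel convolutions per ear instead of 16, at the cost of lower spatial resolution where the reverberation is assumed to be perceptually more diffuse.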
Isaac Engel
added a research item
It was great to attend DAGA 2021 in Vienna! Thanks to Piotr Majdak and colleagues from the Acoustics Research Institute for organising a very interesting session on binaural models. This was our contribution, regarding the use of auditory models to evaluate methods for the binaural rendering of Ambisonics sound fields.
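As a rough illustration of what a model-based evaluation of binaural rendering can look like, the sketch below compares a rendered binaural signal against a reference using two very simple interaural cues (broadband ILD and a cross-correlation-based ITD estimate). These toy metrics merely stand in for the actual auditory models used in the contribution, which are not detailed here; all names, signals and thresholds in the snippet are assumptions for illustration.

```python
import numpy as np

def ild_db(binaural, eps=1e-12):
    """Broadband interaural level difference (dB) of a (2, samples) binaural signal."""
    left_rms = np.sqrt(np.mean(binaural[0] ** 2) + eps)
    right_rms = np.sqrt(np.mean(binaural[1] ** 2) + eps)
    return 20.0 * np.log10(left_rms / right_rms)

def itd_seconds(binaural, fs):
    """Crude ITD estimate (s) from the lag of the interaural cross-correlation peak."""
    xcorr = np.correlate(binaural[0], binaural[1], mode="full")
    lag = np.argmax(xcorr) - (binaural.shape[1] - 1)
    return lag / fs

def compare_to_reference(rendered, reference, fs):
    """Report how far the rendered signal's interaural cues deviate from the reference's."""
    return {
        "ild_error_db": ild_db(rendered) - ild_db(reference),
        "itd_error_s": itd_seconds(rendered, fs) - itd_seconds(reference, fs),
    }

# Placeholder signals: in practice these would be the same source rendered by the
# Ambisonics method under test and by a high-quality reference (e.g. direct HRTF convolution).
fs = 48000
reference = np.random.randn(2, fs)
rendered = reference + 0.05 * np.random.randn(2, fs)
print(compare_to_reference(rendered, reference, fs))
```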
Lorenzo Picinali
added a project goal