Soundfield rendering with loudspeaker arrays through multiple beam shaping
This paper proposes a method for the acoustic rendering of a virtual environment based on a geometric decomposition of the wavefield into multiple elementary acoustic beams, all reproduced with a loudspeaker array. The point of origin, the orientation, and the aperture of each beam are computed from the geometry of the virtual environment to be rendered and from the locations of the sources. Space-time filters are designed with a least-squares approach to render the desired beams. Experimental results show the feasibility of the proposed algorithm as well as its critical issues.
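To make the least-squares filter design concrete, the sketch below solves for the driving weights of a loudspeaker array so that the reproduced field approximates a single beam with a given direction and aperture. All names, geometry, and parameter values (array layout, control-point arc, regularization) are illustrative assumptions, not the paper's actual setup, and the design is shown at a single frequency rather than with full space-time filters.

```python
import numpy as np

# --- Hypothetical single-frequency least-squares beam design (assumed setup) ---

c = 343.0          # speed of sound (m/s)
f = 1000.0         # design frequency (Hz)
k = 2 * np.pi * f / c

# Linear array of 16 loudspeakers spaced 10 cm apart along the x axis.
n_spk = 16
spk = np.stack([np.linspace(-0.75, 0.75, n_spk), np.zeros(n_spk)], axis=1)

# Control points on a semicircle of radius 2 m in front of the array.
angles = np.linspace(0.01, np.pi - 0.01, 180)
ctl = 2.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Free-field Green's functions from each speaker to each control point.
dist = np.linalg.norm(ctl[:, None, :] - spk[None, :, :], axis=2)
G = np.exp(-1j * k * dist) / (4 * np.pi * dist)

# Desired beam: unit pressure inside a 20-degree aperture aimed broadside,
# zero pressure elsewhere on the control arc.
beam_dir, aperture = np.pi / 2, np.deg2rad(20)
p_des = (np.abs(angles - beam_dir) < aperture / 2).astype(complex)

# Regularized least-squares solve for the complex driving weights:
# minimize ||G w - p_des||^2 + lam ||w||^2.
lam = 1e-3
w = np.linalg.solve(G.conj().T @ G + lam * np.eye(n_spk), G.conj().T @ p_des)

# Field reproduced at the control points by the designed weights.
p = G @ w
```

In practice the paper's approach would repeat such a design per beam and over frequency (yielding space-time filters), with the beam origin, orientation, and aperture supplied by the geometric decomposition of the virtual environment.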