Proc. of the 6th Int. Conference on Digital Audio Effects (DAFx-03), London, UK, September 8-11, 2003
SOUND SPATIALIZATION BASED ON FAST BEAM TRACING IN THE DUAL SPACE
Marco Foco, Pietro Polotti, Augusto Sarti, Stefano Tubaro
Dipartimento di Elettronica e Informazione
Politecnico di Milano
Piazza L. Da Vinci 32, 20133 Milano, Italy
marco.foco@fastwebnet.it polotti/sarti/tubaro@elet.polimi.it
ABSTRACT
This paper addresses the problem of geometry-based sound reverberation for applications of virtual acoustics. In particular, we propose a novel method that allows us to significantly speed up the construction of the beam tree in beam tracing applications by avoiding space subdivision. This allows us to dynamically recompute the beam tree as the sound source moves. In order to speed up the construction of the beam tree, we determine what portion of which reflectors the beam "illuminates" by performing visibility checks in the "dual" of the geometric space.
1. INTRODUCTION
Advanced sound rendering techniques are often based on the physical modeling of the acoustic reflections and scattering in the environment. This can be modeled using either numerical methods (finite elements, boundary elements and finite differences) or geometrical methods (image source, path tracing, beam tracing and radiosity). With all such techniques, however, computational complexity may easily become an issue. One approach that enables an efficient auralization of all reflected paths is based on beam tracing [1]. This method is based on a geometric pre-computation of a tree-like topological representation of the reflections in the environment (beam tree) through spatial subdivision techniques. The beam tree is then used for real-time auralization through a simple look-up of which beams pass through the auditory points. It is important to notice, however, that the tree-like structure of the beam reflections makes the approach suitable for early reverberation only, as it prevents us from looping beams and implementing the corresponding IIR structures. In order to overcome this difficulty, we recently proposed an approach [2] that models both early and late reverberation by cascading a tapped delay line (an FIR) with a Waveguide Digital Network (WDN) [3] (an IIR). In this approach, the beam tree is looked up to generate the coefficients of a tapped delay line for the auralization of early reverberation. In order to account for late reverberation as well, we feed the outputs of the tapped delay lines into the WDN, whose parameters are determined through path tracing [2].
One problem of the beam tracing approach is that it assumes that sound sources are fixed and only listeners are allowed to move around. In fact, every time the source moves, the beam tree needs to be re-computed. This may easily become a costly operation, particularly with environments of complex topology, as it is based on spatial subdivision algorithms.
In this paper we propose a novel method that allows us to significantly speed up the construction of the beam tree by avoiding space subdivision. This allows us to dynamically recompute the beam tree as the source moves. In order to speed up the construction of the beam tree, we determine what portion of which reflectors the beam "illuminates" by performing visibility checks in the "dual" of the world space. In order to illustrate the approach, we will assume that the world space is 2-dimensional.
2. TRACING BEAMS IN THE DUAL SPACE
The world space that we consider is made of sources and reflectors. Sources are assumed to be point-like, and reflectors are linear segments. We call "active" the region of a reflector that is directly "illuminated" by a source. The active portion of a reflector can be made of one or more active segments (connected sets), due to occlusions. Each of the active segments generates a beam, which is defined as the bundle of acoustic rays that connect the source with points of the active segment. Each acoustic ray w can be described by an equation of the form y = ax + b, where a is the angular coefficient and b is the offset on the y axis, both referred to a world coordinate system (x, y). The dual ŵ of that ray is thus a point (a, b) in parameter space. The dual of a point p of coordinates (x, y), on the other hand, is a line p̂ in parameter space, given by all pairs (a, b) that correspond to rays passing through p. This means that the dual of a source is a line.
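As a quick illustration, the duality just described can be sketched in a few lines of Python (the helper names are ours, not from the paper):

```python
# A ray y = a*x + b maps to the point (a, b) in the dual (parameter)
# space, while a point (x, y) maps to the line b = y - a*x, i.e. the
# set of all rays through it.

def ray_to_dual(a, b):
    """Dual of the ray y = a*x + b: a single point in (a, b) space."""
    return (a, b)

def point_to_dual(x, y):
    """Dual of the point (x, y): the line b = y - a*x, returned as the
    function a -> b giving, for each slope a, the offset b of the ray
    through (x, y) with that slope."""
    return lambda a: y - a * x

# A ray passes through a point iff the ray's dual point lies on the
# point's dual line:
source = (2.0, 3.0)                # a point-like source in world space
dual_line = point_to_dual(*source)
a = 0.5
b = dual_line(a)                   # ray y = 0.5*x + b through the source
assert abs((a * source[0] + b) - source[1]) < 1e-12
```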
Figure 1: The duals of a set of physical reflectors form a collection of strip-like regions in the dual space.
Let us now consider a reector, which is a segment in world
space. In order to determine its dual, we can consider the duals p̂ and q̂ of its two extremes p and q, respectively. Such points will correspond to two lines p̂ and q̂ in the dual space. As the reflector r is made of all points in between p and q, its dual r̂ will be made of all the lines in between p̂ and q̂. In other words, the dual of a reflector is a beam-like region. We recall, however, that an active segment of a reflector, together with a virtual source s, completely specifies a beam. The dual of this beam can be determined as the intersection between the dual r̂ of the active segment (a beam-like region) and the dual ŝ of the virtual source (a line). In conclusion, the dual of a beam is a segment, just like the dual of a segment (reflector) is a beam.
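Under the y = ax + b parametrization (which excludes vertical rays), the dual of a beam can be computed in closed form: it is the portion of the source's dual line whose slopes lie between those of the rays joining the source to the active segment's endpoints. A minimal sketch, with illustrative names:

```python
def slope_through(s, p):
    """Slope of the ray joining two world-space points (assumes no
    vertical rays, as in the y = a*x + b parametrization)."""
    return (p[1] - s[1]) / (p[0] - s[0])

def beam_dual(s, p, q):
    """Dual of the beam from source s through the active segment p-q:
    a segment along the source's dual line, returned here as the
    slope interval [a_min, a_max] it covers."""
    a1, a2 = slope_through(s, p), slope_through(s, q)
    return (min(a1, a2), max(a1, a2))

# Example: source at the origin, reflector from (1, 1) to (1, -1).
a_min, a_max = beam_dual((0.0, 0.0), (1.0, 1.0), (1.0, -1.0))
assert (a_min, a_max) == (-1.0, 1.0)
```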
Pushing this formalism even further, we can define active reflecting segments at infinity (just like in projective geometry) and beams at infinity, which are those beams that are not reflected by any surface. If we have a collection of reflectors in world space, their duals will form a collection of beam-like regions (see Fig. 1). We would now be tempted to say that the intersection between the dual of a source and the duals of the reflectors is the dual of the branching beams, and that it can therefore be used to determine the beam tree. This would be true if there were no occlusions between reflectors (i.e. if all points of the reflectors were active). In fact, mutual occlusions cause the duals of reflectors to overlap each other; therefore, in order to determine the branching of the beams, we first need to figure out the layering order of such regions, according to mutual occlusions (i.e. "which region eats which").
The layering order is not the only problem, as we need to construct the tree of all possible reflections in the environment while keeping track of the occlusions. In order to do so, we propose an iterative approach that starts from the source (root) and tracks the branching of all beams in dual space. At each iteration we consider one beam at a time and determine how this beam is split into various sub-beams as it encounters reflectors on its way. This splitting characterizes the branching of the beam tree.
Figure 2: Beam tracing in world space. a. The source s illuminates reflector r₃ and, partially, reflector r₄. b. The beam that describes the reflection from r₄ (s′ is the virtual source for this reflection). In this figure the sub-beams illuminating r₃, r₁ and r₂ are shown.
At the first iteration we consider the source and the physical reflectors (Fig. 2a). From this source a number of beams will depart, each corresponding to an active segment of the various reflectors. Such active segments can be determined quite straightforwardly by tracing all rays that depart from the source. At this point we consider the beams one by one and figure out how each of them branches out as it encounters the reflectors (Fig. 2b). Each beam will keep branching out until its cumulative attenuation or its aperture angle falls beneath some pre-assigned threshold. Let us consider a beam originating from a virtual source s. If we want to characterize how this beam branches out, we start from the active reflector segment r₀ (Fig. 3a) that defines that beam (aperture) and perform a change of reference frame so that its extremes p₀ and q₀ fall at the coordinates (0, −1) and (0, 1), respectively (Fig. 3b).
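This change of reference frame can be realized as a translation of the aperture midpoint to the origin, a rotation aligning the aperture with the y axis, and a scaling by the aperture half-length. A sketch of one possible implementation (the function name is ours, not from the paper):

```python
import math

def normalize_to_aperture(p0, q0):
    """Return a function mapping world coordinates into the frame in
    which the aperture endpoints p0 and q0 land on (0, -1) and (0, 1).
    (Illustrative helper; the paper only states the target frame.)"""
    mx, my = (p0[0] + q0[0]) / 2, (p0[1] + q0[1]) / 2   # midpoint -> origin
    dx, dy = q0[0] - mx, q0[1] - my
    h = math.hypot(dx, dy)                              # half-length -> 1
    # Rotation sending the unit vector along the aperture to (0, 1):
    c, s = dy / h, dx / h
    def to_local(pt):
        x, y = pt[0] - mx, pt[1] - my
        return ((c * x - s * y) / h, (s * x + c * y) / h)
    return to_local

f = normalize_to_aperture((2.0, 0.0), (2.0, 2.0))
assert f((2.0, 0.0)) == (0.0, -1.0) and f((2.0, 2.0)) == (0.0, 1.0)
```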
Figure 3: A reflective segment: a. in the world space, b. after normalization relative to rᵢ, c. in the dual space.
This way, the dual of the considered aperture will be the reference strip −1 ≤ b ≤ 1 in parameter space (see Fig. 3c). The intersection in the dual space between the reference strip and the representation of the other considered reflector is the area named r̂₁, which represents the visibility of r₁⁰ from r₀⁰.
In Figs. 4 and 5 we can see some other examples of reflector visibility from a reference segment.
Figure 4: The visibility of r₁⁰ with respect to r₀⁰. The visibility is limited to the right half-plane of the reference reflector.
Let us now consider again Fig. 1, which has four physical reflectors. Let us assume that, as the iterations go along, r₄ is found to be an active reflector, and we want to show how to determine the
Figure 5: Another particular case of visibility. Varying the observation point on the reference segment, the two different sides of reflector r₁⁰ are seen. This is clear from the observation of the dual space.
branching of the beams. Therefore we will consider r₄ as an active aperture, and a coordinate change is used to normalize its representation in the space domain. We will name the new coordinate system (x⁴, y⁴) (the superscript denotes which reflector is currently the reference for the normalization), and the new representations of the other reflectors will become r₁⁴, r₂⁴ and r₃⁴. In the dual space we obtain the set of regions of Fig. 6. Notice that r₃⁴ partially occludes r₁⁴. In the dual space, when the areas that represent two reflectors overlap, there is an occlusion. By analyzing the slopes of the dual representations of the reflector endpoints, it is possible to correctly determine the relative visibility.
Figure 6: Beam tracing in the dual space. The intersection between ŝ and the colored regions generates a collection of segments, which are duals of the branching beams.
A particular ordering situation between reflectors occurs when the sorting of two (or more) reflectors is ambiguous with respect to a third reflector (see Fig. 7). This happens when the reflectors' order changes with respect to a moving point on the reference reflector. This could be a problem when working in metric space, but it becomes easy to solve in the dual space. In fact, in the dual space the visibility information is obtained by partitioning the reference strip (−1 ≤ b ≤ 1) by looking at the intersections of the segments that define the reflectors' endpoints.
As a last remark, it is important to notice that, without loss of generality, the source can always be placed on the left of the normalized aperture, as we can always replace a (virtual) source with its mirror image with respect to the aperture. As we can see in Fig. 6, the scan of the rays that define a source beam corresponds to the scan of the points of its dual along its extension.
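Replacing a virtual source with its mirror image amounts to reflecting a point across the line supporting the aperture; a minimal sketch (the helper name is ours):

```python
def mirror_source(s, p, q):
    """Reflect a (virtual) source s across the line through the aperture
    endpoints p and q, so the source can always be taken on the chosen
    side of the normalized aperture."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    # Project s - p onto the aperture direction to find the foot of the
    # perpendicular from s to the line:
    t = ((s[0] - p[0]) * dx + (s[1] - p[1]) * dy) / (dx * dx + dy * dy)
    foot = (p[0] + t * dx, p[1] + t * dy)
    return (2 * foot[0] - s[0], 2 * foot[1] - s[1])

# Mirroring across the vertical line x = 0:
assert mirror_source((1.0, 1.0), (0.0, 0.0), (0.0, 2.0)) == (-1.0, 1.0)
```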
Figure 7: The sorting problem: a. world-space sorting is ambiguous; b. solution in dual space.
3. IMPLEMENTATION
In order to test the method that we propose, we implemented a software application that includes a 2D CAD tool for creating test environments, and an auralization system based on both a traditional beam tracer and a dual-space beam tracer. Auralization is based on tapped delay lines, and includes HRTFs and cross-fading between beam configurations.
The geometries considered by this application are only two-
dimensional, but the method can be extended to the 3D space. The
basic structure of our auralization system is shown in Fig. 8.
Figure 8: A schematic view of the auralization algorithm embed-
ded in the beam tracing application.
In a first step, the algorithm analyses the geometric description of the reflectors of the environment, and it extracts all the necessary information on visibility by working in the dual of the
geometric space. As this step depends only on the geometry of the reflectors, it can be performed beforehand as a preprocessing step, as long as the environment is fixed.
The location of the sound source and the visibility information
can now be used to construct the beam tree, or to update it every
time the source moves.
Figure 9: Building a tree level
Fig. 9 refers to the construction of one level of the beam tree (a reflection on reflector r₄, as already shown in Fig. 6). In this portion of the beam tree, we describe the splitting of the generating beam (the one that is incident on reflector r₄) into five sub-beams (numbered from 1 to 5). Sub-beams 1 and 4 are "unbounded" beams (they keep propagating undisturbed), while sub-beams 2, 3 and 5 are "bounded", as they end up on reflectors r₂, r₁ and r₃, respectively. The process is repeated for each of the "bounded" sub-beams, until the number of reflections reaches a pre-fixed maximum or until the energy associated with the beam becomes negligible.
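The branching process described above can be sketched as a simple recursion. The thresholds, the dictionary layout and the find_subbeams callback (which would encapsulate the dual-space visibility check) are illustrative assumptions, not the paper's implementation:

```python
MAX_DEPTH = 4      # pre-fixed maximum number of reflections (assumed)
MIN_ENERGY = 1e-3  # negligible-energy threshold (assumed)

def build_beam_tree(beam, find_subbeams, depth=0):
    """Build one branch of the beam tree. 'beam' is a dict with at least
    an 'energy' key; find_subbeams(beam) returns (bounded, unbounded)
    sub-beam lists, e.g. as produced by the dual-space visibility check.
    Bounded sub-beams recurse; unbounded ones propagate undisturbed."""
    node = {"beam": beam, "children": [], "unbounded": []}
    if depth >= MAX_DEPTH or beam["energy"] < MIN_ENERGY:
        return node
    bounded, unbounded = find_subbeams(beam)
    node["unbounded"] = unbounded
    node["children"] = [build_beam_tree(b, find_subbeams, depth + 1)
                        for b in bounded]
    return node
```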
The beam-tree lookup block of Fig. 8 generates the filter taps according to which beams reach the listener's location. This update operation is fast and can be performed frequently [1]. Through this lookup process we obtain the direction of arrival (DOA), the path length and the list of the reflectors encountered so far. This phase requires all of the tree nodes and leaves to be visited, and the corresponding beams to be tested for inclusion of the listener. If the listener's location falls within the beam area, the DOA and the path length are computed. From there we can go down the beam tree towards the root, in order to obtain the list of reflectors encountered along the propagation of the wavefront.
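The inclusion test at the heart of the lookup can be sketched as a wedge test against the beam's bounding rays. This simplified version (names are ours) omits the check that the listener lies beyond the aperture:

```python
import math

def cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lookup_beam(listener, vsource, p, q):
    """If the listener lies inside the wedge of rays from the virtual
    source 'vsource' through the aperture p-q, return the direction of
    arrival (radians) and the path length; otherwise None. Illustrative
    sketch: the check that the listener lies beyond the aperture is
    omitted."""
    if cross(vsource, p, listener) * cross(vsource, q, listener) > 0:
        return None
    dx, dy = listener[0] - vsource[0], listener[1] - vsource[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)
```

The path length is simply the distance from the virtual source, since the unfolded reflected path is a straight line to the image source.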
Since it is impossible to produce one stream for every DOA, we grouped together angular intervals in order to generate a limited number of audio streams, one per interval. Each of these angular intervals is attributed a separate delay line (an FIR filter). The taps are computed considering all path lengths and relative attenuations.
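As an illustration, a tap for a single path could be derived from its length and the reflectors it encountered as follows; the 1/distance spreading law and the per-wall reflection gains are our modelling assumptions, not the paper's exact formulas:

```python
SPEED_OF_SOUND = 343.0  # m/s (assumed)

def tap_for_path(path_length, reflection_gains, fs=44100):
    """FIR tap for one acoustic path: delay in samples from the path
    length, gain from spherical spreading (1/distance) times the
    product of the wall reflection coefficients (illustrative model)."""
    delay = round(path_length / SPEED_OF_SOUND * fs)
    gain = 1.0 / max(path_length, 1e-9)
    for g in reflection_gains:
        gain *= g
    return delay, gain

# A 34.3 m path with two reflections of coefficient 0.8 and 0.9:
delay, gain = tap_for_path(34.3, [0.8, 0.9], fs=44100)
assert delay == 4410
```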
The filter bank, whose parameters are generated in the previous step, constitutes the auralization algorithm, which generates the directional streams from an anechoic (dry) source. These streams are then mixed for stereo headphone auralization using an HRTF. In our implementation we used OpenAL [7] to mix 16 virtual "equivalent sources" placed in a circle around the listener (to simulate 16 discretized DOAs). The listener's head orientation is accounted for only in this last step.
In the following table we summarise what kind of computation is required for source-listener geometry changes.

Type of motion           Recomputation required
Source                   beam tree re-computation
Listener's location      beam tree lookup, filter bank update
Listener's orientation   no re-computation required
Every time a re-computation is required, the taps in the delay lines are changed, and this can cause clicking sounds in the output. This can be avoided by smoothly interpolating parameters instead of simply switching them. A simple method to avoid clicks is a linear interpolation between the audio streams generated using the "old" and "new" filters (see Fig. 10).
Figure 10: Mixing audio streams to avoid clicks.
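The cross-fade of Fig. 10 amounts to a per-sample linear interpolation between the two renderings over one block; a minimal sketch (assuming a block length greater than one sample):

```python
def crossfade(old_stream, new_stream):
    """Linearly fade from the stream rendered with the 'old' taps to the
    one rendered with the 'new' taps over the block length, avoiding the
    click that a hard switch would produce."""
    n = len(old_stream)  # assumed equal to len(new_stream), n > 1
    return [((n - 1 - i) * o + i * w) / (n - 1)
            for i, (o, w) in enumerate(zip(old_stream, new_stream))]

assert crossfade([1.0, 1.0, 1.0], [0.0, 0.0, 0.0]) == [1.0, 0.5, 0.0]
```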
4. PERFORMANCE EVALUATION
In order to test the effectiveness of the proposed solution, we compared our implementation (dual-space approach) with an optimized one that we developed, which works in geometry space but does not require space subdivision. In spite of the computational efficiency of the approach working in metric space, the performance improvement of our approach in the dual space is quite apparent.
In the geometry space, the complexity of tracing a beam tree level without space subdivision depends on n, the number of reflectors. In the dual space, the complexity depends on m (< n), the number of reflectors "visible" from the reference. In most cases, such as a typical office configuration, m depends only on the local geometry, and it is independent of n. For example, in a system of square rooms the average number of visible faces is 6-7, but in a structure characterized by long corridors the number of visible faces can be higher.
In the tests we performed with the geometry-space approach, the time needed to rebuild the beam tree depends quadratically on the number of reflectors in the world, while with our method (dual-space analysis) it grows almost linearly (Fig. 11).
These tests were executed on a modular set of rooms connected by non-aligned doorways: an example of this model (5 rooms, 20 reflectors) is shown in the program's screenshot in Fig. 12.
Figure 11: Execution time vs. number of reflectors in the scene for standard beam tracing (diamonds) and the proposed method (squares).

Figure 12: Test application screenshot. The room model is the 5-room test model (20 reflectors).

5. CONCLUSIONS AND FUTURE WORK

In this paper we presented a novel, very effective algorithm for tracing beams in 2-dimensional space, to be used for virtual acoustics. The computational efficiency has been achieved by exploiting particular properties of the representation in the dual space.

We also showed that the above procedure, implemented on a standard PC platform, turns out to be very effective, and that the computational saving compared with standard beam-tree methods increases as the complexity of the environment grows.
We are currently working on a 3-dimensional version of this
algorithm, to be used with more complex environments.
6. REFERENCES
[1] T. Funkhouser, I. Carlbom, G. Elko, G. Pingali, M. Sondhi, J. West, "A Beam Tracing Approach to Acoustic Modeling for Interactive Virtual Environments", Computer Graphics (SIGGRAPH '98), Orlando, FL, July 1998, pp. 21-32.
[2] A. Sarti and S. Tubaro, "Efficient geometry-based sound reverberation", Proc. EUSIPCO, Toulouse, France, 2002.
[3] J. O. Smith, "Principles of digital waveguide models of musical instruments", in Applications of Digital Signal Processing to Audio and Acoustics, edited by M. Kahrs and K. Brandenburg, Kluwer, 1998, pp. 417-466.
[4] S. Teller and P. Hanrahan, "Global visibility algorithms for illumination computations", in Computer Graphics (SIGGRAPH '93 Proceedings), pp. 239-246.
[5] P. S. Heckbert and P. Hanrahan, "Beam Tracing Polygonal Objects", in Computer Graphics (SIGGRAPH '84 Proceedings), pp. 119-127.
[6] L. Savioja, "Modeling Techniques for Virtual Acoustics", PhD dissertation, Espoo, 1999.
[7] OpenAL Documentation, available via http://www.openal.org/.
... However, to the best of our knowledge, 3D RGI with a linear array (1D configuration), where the spatial diversity is significantly reduced in two dimensions, is yet to be addressed in the literature. As the first attempt to combat this challenging situation, the main contributions of this work include: 1) a novel sparsity-constrained high-resolution 2D polarspace DOA mapping technique using RIRs recorded with a synchronized setup made up of a linear loudspeaker array and a single microphone, 2) a semisupervised approach tackling the geometrical ambiguity and identifying potential first-order reflection candidates through the segmentation of the DOA map into bounded regions, each associated with a wall, based on pre-defined constraints for wall dimensions and orientations, 3) room geometry inference based on a cost function measuring the match between the higher-order reflections estimated via beam tracing [27]- [29] and the actual reflections spotted on the DOA map. The proposed 3D RGI algorithm is validated with an extensive set of RIRs measured in rooms with different wall characteristics and reverberation times as well as a simulated replica of one of these rooms. ...
... In the third step, all possible room geometries are obtained through a Cartesian product of the sets of wall candidates extracted from the six bounded regions that correspond to four side-walls, floor and ceiling. Finally, the room geometry described by its floor map and height is inferred by using a cost function that evaluates the agreement between the higher-order reflections estimated from the candidate first-order reflections via the beam-tracing method [27]- [29] and the peaks on the actual DOA map. ...
... In the final step, a candidate from the set G is selected as the inferred room geometry, comparing the estimated higher-order reflections with the actual peaks on the DOA map remaining after the exclusion of the peak-set corresponding to first-order reflections. In more detail, for each room geometry candidate, the higher-order image-microphone positions are first estimated via the beam-tracing method described in [27]- [29] from the 1 Given a pair of lines l 1 = [a 1 , b 1 , c 1 ] T and l 2 = [a 2 , b 2 , c 2 ] T , their intersection is computed in homogeneous coordinates as the cross product l 1 × l 2 [35]. It is easy to verify that the corresponding Cartesian coordinates are given by candidate first-order reflections, and then projected onto the 2D polar-coordinate space as illustrated in the rightmost column of Fig. 5. Subsequently, a cost function is computed by summing the amplitudes of grid points on the DOA map corresponding to the estimated higher-order image-microphone positions. ...
Article
Full-text available
Sound reproduction systems may highly benefit from detailed knowledge of the acoustic space to enhance the spatial sound experience. This paper presents a room geometry inference method based on identification of reflective boundaries using a high-resolution direction-of-arrival map produced via room impulse responses (RIRs) measured with a linear loudspeaker array and a single microphone. Exploiting the sparse nature of the early part of the RIRs, Elastic Net regularization is applied to obtain a 2D polar-coordinate map, on which the direct path and early reflections appear as distinct peaks, described by their propagation distance and direction of arrival. Assuming a separable room geometry with four side-walls perpendicular to the floor and ceiling, and imposing pre-defined geometrical constraints on the walls, the 2D-map is segmented into six regions, each corresponding to a particular wall. The salient peaks within each region are selected as candidates for the first-order wall reflections, and a set of potential room geometries is formed by considering all possible combinations of the associated peaks. The room geometry is then inferred using a cost function evaluated on the higher-order reflections computed via beam tracing. The proposed method is tested with both simulated and measured data.
... funded by the Italian Ministry of University and Scientific Research (MIUR); and within the VISNET project, a European Network of Excellence (www.visnet-noe.org) A solution to this problem was recently proposed in [4]. The idea behind that method was to first compute the visibility information on the environment (reflectors) from an arbitrary point in space, which is equivalent to the visibility of a generic reflector from a point on a generic reflector. ...
... The rays departing from the reference segment and hitting the other segments correspond in the (m, q) space with visibility regions , for example the visibility region of s 2 is showed inFig. 2. Considering the dual space interpretation, the visibility regions of the various reflectors with respect to the reference one can be computed in closed form [4]. Notice, however, that the visibility regions of the various reflectors overlap in regions corresponding to visual rays that intersect more than one reflector. ...
... In conclusion, in order to determine which reflectors the beam will encounter in its path after being reflected by r i, we just need to determine the intersection between the dual of the source (a line) and the visibility regions of all the reflectors as seen from ri. Once the beam tree is constructed, all paths corresponding to a given receiver location can be readily found through a simple beam tree lookup as described in [4] and [?]. ...
Article
Full-text available
In order to achieve high-quality audio-realistic rendering in com-plex environments, we need to determine all the acoustic paths that go from sources to receivers, due to specular reflections as well as diffraction phenomena. In this paper we propose a novel method for computing and auralizing the reflected as well as the diffracted field in 2.5D environments. The method is based on a preliminary geometric analysis of the mutual visibility of the environment re-flectors. This allows us to compute on the fly all possible acoustic paths, as the information on sources and receivers becomes avail-able. The construction of a beam tree, in fact, is here performed through a look-up of visibility information and the determination of acoustic paths is based on a lookup on the computed beam tree. We also show how to model diffraction using the same beam tree structure used for modeling reflection and transmission. In order to validate the method we conducted an acquisition campaign over a real environment and compared the results ob-tained with our real-time simulation system.
... A wide selection of algorithms have been prepared for two-dimensional cases of beam tracing. Foco et al suggest an algorithm for visibility determination by tracing beams in both scene and dual space, which allows beam tree updates after its source has been moved [3]. Sufficient execution time for real-time sound simulation has not been achieved in the two-dimensional version described in this article for one sound source, and it is unlikely that a game will use only one dynamic source at a time. ...
... A wide selection of algorithms have been prepared for two-dimensional cases of beam tracing. Foco et al suggest an algorithm for visibility determination by tracing beams in both scene and dual space, which allows beam tree updates after its source has been moved[3]. Sufficient execution time for real-time sound simulation has not been achieved in the two-dimensional version described in this article for one sound source, and it is unlikely that a game will use only one dynamic source at a time. ...
... Motivated by the growing popularity of soundbars among home entertainment systems, we have proposed a 3D RGI method using a linear loudspeaker array and a single omnidirectional microphone in [19] as the first attempt in the literature to tackle this difficult scenario with spatial diversity reduced in two dimensions due to using a 1D linear array. As illustrated in Fig. 1, this modeldriven method involves multiple steps, including the generation of a computationally demanding sparse DOA map followed by a peak detection and pruning procedure, the segmentation of the sparse DOA map into six bounded regions for the identification of potential peaks associated with the first-order wall reflections, and finally, RGI using a cost function that measures the agreement between the higher-order reflections estimated via beam tracing [20][21][22] given a room geometry candidate and the peaks on the sparse DOA map. The model-driven approach requires coarse prior knowledge of room boundaries (i.e., pre-defined constraints on wall dimensions and orientations), for instance, to be given by the consumer in a commercial setting, along with the tuning of several parameters in other steps. ...
Preprint
Knowing the room geometry may be very beneficial for many audio applications, including sound reproduction, acoustic scene analysis, and sound source localization. Room geometry inference (RGI) deals with the problem of reflector localization (RL) based on a set of room impulse responses (RIRs). Motivated by the increasing popularity of commercially available soundbars, this article presents a data-driven 3D RGI method using RIRs measured from a linear loudspeaker array to a single microphone. A convolutional recurrent neural network (CRNN) is trained using simulated RIRs in a supervised fashion for RL. The Radon transform, which is equivalent to delay-and-sum beamforming, is applied to multi-channel RIRs, and the resulting time-domain acoustic beamforming map is fed into the CRNN. The room geometry is inferred from the microphone position and the reflector locations estimated by the network. The results obtained using measured RIRs show that the proposed data-driven approach generalizes well to unseen RIRs and achieves an accuracy level comparable to a baseline model-driven RGI method that involves intermediate semi-supervised steps, thereby offering a unified and fully automated RGI framework.
... A wide selection of algorithms have been prepared for two-dimensional cases of beam tracing. Foco et al suggest an algorithm for visibility determination by tracing beams in both scene and dual space, which allows beam tree updates after its source has been moved [3]. Sufficient execution time for real-time sound simulation has not been achieved in the two-dimensional version described in this article for one sound source, and it is unlikely that a game will use only one dynamic source at a time. ...
Article
Full-text available
The work presents some aspects of beam tracing technique used in sound simulation. Adaptive Frustum algorithm, which was designed for detecting obstacles via beam subdivision was reviewed from efficiency point of view as well asfor its accuracy. Some possible improvements are suggested, however, they donot fully solve the problems of using this algorithm in real-time applications.Improved algorithm implementation was tested on five scenes with differentcharacteristics and varying complexity.
... Nei casi d'interesse, tuttavia, anche la posizione della sorgente può cambiare e quindi è necessaria una tecnica di beam tracing evoluta che sia in grado di calcolare in una fase preliminare tutte le informazioni indipendenti dalle posizioni del ricevitore e della sorgente, ovvero dipendenti solo dalla geometria del problema, i cosiddetti diagrammi di visibilità. Un simile tracciatore è stato ideato e sviluppato per una geometria bidimensionale e per il campo acustico, dall'Image and Sound Processing Group (ISPG), presso il Dipartimento di Elettronica ed Informazione (DEI) del Politecnico di Milano [1,2]; in seguito sono state apportate le necessarie modifiche al fine di poterlo utilizzare per il segnale elettromagnetico [3]. ...
Conference Paper
Full-text available
In this paper we introduce an indoor radio propagation model to characterize wireless telecommunication systems affected by dense multipath problems. The forward problem has been solved by the convolution of the transmitted UWB signal with the channel impulse response, obtained by a beam tracing technique. Computed values of the received signals are compared with the measurements performed by the ULTRALAB research group in a typical laboratory/office building. We show that the calculated data are consistent with the observed ones.
1. INTRODUCTION. Following the advent of modern wireless communication systems, scientific interest in indoor electromagnetic propagation has grown significantly and, with it, the study of the transmission channel and its impulse response. Indoor channels are characterized by dense multipath, i.e., by the presence of multiple paths that complicate the identification of the individual arrivals at the receiver. To address this problem, UWB (Ultra-Wide Bandwidth) signals are used in transmission: they offer high temporal resolution, associated with a very wide spectrum, and therefore suffer little from frequency-selective phenomena. Under the assumption that the medium is isotropic and homogeneous, electromagnetic radiation propagates along piecewise-straight trajectories, called rays. The multipath is therefore due to the geometry: whenever an electromagnetic ray meets an obstacle, assumed at rest and with a perfectly smooth surface, it is reflected and changes direction. The channel can be modeled by means of various numerical ray-tracing techniques (ray tracing, beam tracing, etc.) that connect the field source to the observation point through polylines and yield the so-called d.s.s. (delay spread spectrum), i.e., the channel impulse response; the latter describes the amplitude of the electromagnetic field at the time instants corresponding to the arrival of the various rays at the observation point. Ray-tracing techniques require tracing a large number of rays in order to describe the field accurately in the vicinity of the observation point; moreover, the computation must be repeated every time the position of the source or of the receiver changes. For these reasons such methods are computationally expensive and therefore costly to apply; ray tracers can be replaced by beam tracers, which are more efficient since they perform the computation only once, given the geometry and the source position. In the cases of interest, however, the source position can also change, so an advanced beam-tracing technique is needed, one capable of precomputing all the information that is independent of the receiver and source positions, i.e., dependent only on the geometry of the problem: the so-called visibility diagrams. Such a tracer was conceived and developed, for two-dimensional geometry and for the acoustic field, by the Image and Sound Processing Group (ISPG) at the Dipartimento di Elettronica ed Informazione (DEI) of the Politecnico di Milano [1, 2]; the necessary modifications were subsequently made so that it could also be used for the electromagnetic signal [3]. In this contribution, the propagation predicted by that tracer is compared with real data measured in an indoor environment.
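The forward problem described in this abstract — convolving the transmitted UWB pulse with a channel impulse response made of delayed, attenuated ray arrivals — can be sketched in a few lines. All numbers below (sample rate, pulse shape, ray delays and gains) are invented placeholders standing in for actual beam-tracer output, not values from the paper:

```python
import numpy as np

fs = 10e9                        # sample rate (10 GHz) -- hypothetical
t = np.arange(256) / fs

# Hypothetical UWB pulse: a Gaussian monocycle
tau = 0.5e-9
pulse = (t - 3 * tau) / tau * np.exp(-(((t - 3 * tau) / tau) ** 2))

# Channel impulse response: each ray contributes one delayed, attenuated
# impulse (delays/gains are invented, standing in for beam-tracer results)
delays_s = [5e-9, 12e-9, 20e-9]
gains = [1.0, 0.4, 0.15]
h = np.zeros(512)
for d, g in zip(delays_s, gains):
    h[int(round(d * fs))] += g

# Forward problem: received signal = transmitted pulse convolved with the IR
received = np.convolve(pulse, h)
```

The delay spread spectrum is then read directly off `h`: the arrival times are the nonzero bins, and the field amplitude at each arrival is the bin's value.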
... Two-dimensional digital waveguide mesh (DWM) implementations have been used to model both 2-D acoustic spaces [24], [25] and vibrations on plates and membranes [26]. A 2-D image method has been used to model room reverberation [27], and a 2-D beam tracer has been used to model acoustics in [28], [29]. ...
Article
The image method is generalized to geometries with an arbitrary number of spatial dimensions. n-dimensional (n-D) acoustics is discussed, and an algorithm for n-D room impulse response calculations is presented. Synthesized room impulse responses (RIRs) from n-D rooms are presented. RIR characteristics are discussed, and computational considerations are examined.
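For intuition, a minimal 2-D instance of the image method can be written directly: the image sources of a rectangular room are enumerated, and each contributes a delayed, attenuated impulse to the RIR. This sketch assumes a single frequency-independent reflection coefficient and 1/√r (cylindrical) spreading; it illustrates the idea only and is not the paper's n-D algorithm:

```python
import numpy as np

def rir_2d_image(src, rcv, room, beta=0.9, fs=8000, c=343.0, order=8, n=4096):
    """Toy 2-D image-source room impulse response (rectangular room).

    src, rcv : (x, y) source and receiver positions in metres
    room     : (Lx, Ly) room dimensions
    beta     : wall reflection coefficient (same for all walls -- a simplification)
    """
    Lx, Ly = room
    h = np.zeros(n)
    for mx in range(-order, order + 1):
        for px in (0, 1):                    # mirror / don't mirror in x
            for my in range(-order, order + 1):
                for py in (0, 1):            # mirror / don't mirror in y
                    # Image-source coordinates
                    xi = (1 - 2 * px) * src[0] + 2 * mx * Lx
                    yi = (1 - 2 * py) * src[1] + 2 * my * Ly
                    r = np.hypot(xi - rcv[0], yi - rcv[1])
                    # Number of wall reflections along this path
                    refl = abs(2 * mx - px) + abs(2 * my - py)
                    k = int(round(r / c * fs))  # arrival-time sample index
                    if 0 < k < n:
                        # 1/sqrt(r): cylindrical spreading in two dimensions
                        h[k] += beta ** refl / np.sqrt(r)
    return h
```

The n-D generalization discussed in the article follows the same pattern, with one mirror/shift pair per spatial dimension.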
Conference Paper
The sensation of elevation in binaural audio is known to be strongly correlated to spectral peaks and notches in HRTFs, introduced by pinna reflections. In this work we provide an analysis methodology that helps us to explore the relationship between notch frequencies and elevation angles in the median plane. In particular, we extract the portion of the HRTF due to the presence of the pinna and we use it to extract the notch frequencies for all the subjects and for all the considered directions. The extracted notch frequencies are then clustered using the K-means algorithm to reveal the relationship between notch frequencies and elevation angles. We present the results of the proposed analysis methodology for all the subjects in the CIPIC and SYMARE HRTFs databases.
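A toy version of the pipeline this abstract describes — picking out local minima (notches) of a magnitude response and clustering the resulting frequencies with K-means — might look as follows. Both helpers are hypothetical simplifications (real HRTF notch extraction is considerably more careful):

```python
import numpy as np

def notch_freqs(mag_db, freqs):
    """Crude notch detector: a bin is a notch if it is strictly below
    both of its neighbours in the dB magnitude response."""
    m = mag_db
    idx = np.where((m[1:-1] < m[:-2]) & (m[1:-1] < m[2:]))[0] + 1
    return freqs[idx]

def kmeans_1d(x, k, iters=100):
    """Plain 1-D K-means with deterministic quantile initialization."""
    centers = np.quantile(x, np.linspace(0, 1, k))
    for _ in range(iters):
        # Assign each frequency to its nearest centre
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # Recompute centres as cluster means (keep empty clusters fixed)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return np.sort(centers)
```

In the paper's setting, `x` would be the pooled notch frequencies over subjects and median-plane directions, and the returned centres would reveal how notch frequency tracks elevation angle.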
Conference Paper
This paper reports the results of a positioning and tracking algorithm for indoor environments based on simulated, pre-computed attenuation-map values. Localization is performed through a global optimization that minimizes a cost function computed in the data space, i.e., over the attenuation reference map of the environment under test. Tracking is implemented by introducing a correlation between the current position and the previous ones. Two environments of different size, shape and characteristics were chosen to validate the algorithm. In the worst case, localization achieves a median bias error of 0.71 m. The overall median bias error when using the tracking feature is below 0.75 m, computed over 4 distinct trajectories per environment.
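The core matching step — choosing the grid cell whose precomputed attenuation values best explain a measurement, with an optional pull toward the previous estimate as a crude stand-in for the tracking step — can be sketched as below. The function name, array shapes and the quadratic penalty are all assumptions for illustration, not the paper's actual cost function:

```python
import numpy as np

def localize(measured, maps, prev=None, lam=0.0):
    """Grid localization by least-squares match against attenuation maps.

    measured : (n_ap,) observed attenuations from n_ap access points
    maps     : (n_ap, H, W) precomputed attenuation map per access point
    prev     : optional (row, col) of the previous estimate; lam weights a
               quadratic penalty toward it (toy version of tracking)
    """
    # Data-space cost: squared mismatch summed over access points
    cost = np.sum((maps - measured[:, None, None]) ** 2, axis=0)
    if prev is not None:
        rows, cols = np.ogrid[:cost.shape[0], :cost.shape[1]]
        cost = cost + lam * ((rows - prev[0]) ** 2 + (cols - prev[1]) ** 2)
    # Global minimizer over the grid
    return np.unravel_index(np.argmin(cost), cost.shape)
```

With `lam = 0` this reduces to a plain global search; a positive `lam` discourages jumps far from the previous position, mimicking the correlation between successive estimates described in the abstract.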