Figure 3 - uploaded by Jakub Gałka
Source publication
This paper presents two different methods of speech extraction: cross-correlation analysis and adaptive filtering. The algorithms are designed to extract conversations in noisy environments. Such situations can appear in police investigation materials or in multi-speaker environments. Noise can be added by suspects intentionally or unintentionally (e...
Contexts in source publication
Context 1
... The difference between the spectral densities of speech and music is the basis of the cross-correlation algorithm. A block diagram of this algorithm is depicted in Figure 3. Let us assume that s_m1,in and s_m2,in are the recordings from Microphone 1 and Microphone 2 (see Figure 1), respectively. ...
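The cross-correlation step referred to in Context 1 can be written compactly as below. This is a minimal sketch assuming a discrete-time formulation; the symbols R_12 and τ̂ are illustrative notation, not necessarily the paper's own.

```latex
% Cross-correlation of the two microphone recordings and the delay estimate;
% R_{12} and \hat{\tau} are illustrative symbols, not necessarily the paper's.
R_{12}(\tau) = \sum_{n} s_{m1,\mathrm{in}}(n)\, s_{m2,\mathrm{in}}(n - \tau),
\qquad
\hat{\tau} = \arg\max_{\tau}\, \bigl| R_{12}(\tau) \bigr|
```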
Context 2
... settings allow us to cut off the majority of the speech signal. Then the output signals s_m1,out and s_m2,out contain mainly the music signal, which allows easier calculation of the maximal value of the cross-correlation (see Figure 4). The delay determined above is used in a delay block z^{-τ2} (see Figure 3). As a result we get a signal with a compensated impact of the distance. ...
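A minimal Python sketch of the step described in Context 2, assuming a generic high-pass filter to suppress most of the speech energy; the cut-off frequency, filter order, and function names are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (assumed parameters): suppress speech with a high-pass filter,
# cross-correlate the music-dominated outputs, and compensate the delay tau2.
import numpy as np
from scipy.signal import butter, sosfiltfilt, correlate, correlation_lags

def align_recordings(s_m1_in, s_m2_in, fs, cutoff_hz=3000.0):
    # High-pass filtering keeps mostly the music signal (cut-off is an assumption).
    sos = butter(4, cutoff_hz, btype="highpass", fs=fs, output="sos")
    s_m1_out = sosfiltfilt(sos, s_m1_in)
    s_m2_out = sosfiltfilt(sos, s_m2_in)
    # The lag of the cross-correlation maximum gives the delay tau2.
    xcorr = correlate(s_m1_out, s_m2_out, mode="full")
    lags = correlation_lags(len(s_m1_out), len(s_m2_out), mode="full")
    tau2 = int(lags[np.argmax(np.abs(xcorr))])
    # Delay block z^{-tau2}: shift the second recording to compensate the
    # distance-related delay (np.roll is a simple circular stand-in; a real
    # implementation would pad or trim instead of wrapping).
    s_m2_aligned = np.roll(s_m2_in, tau2)
    return s_m2_aligned, tau2
```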
Citations
... where s_dist(t) is an intentionally added disturbance. The time shift τ2 is not equal to τ1 because of the differences in distances between the microphones and the audio signal sources, such as a radio or a TV set [5]. Dual-microphone scenario of listening-in to a conversation in which the source of a distracting signal, such as a radio set, is used to hide the content of the conversation [9]. This is much different from scenarios typical for information centres or conference rooms. ...
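The signal model implied by the citing passage can be sketched as follows; this is a hedged reconstruction, and the relative speech delay τ_s as well as the exact form used in [5] and [9] are assumptions.

```latex
% Hedged reconstruction of the dual-microphone mixing model: s(t) is the
% conversation, s_dist(t) the intentionally added disturbance (e.g. a radio),
% tau_1 and tau_2 its propagation delays to the two microphones, and tau_s
% an assumed relative delay of the speech itself.
s_{m1,\mathrm{in}}(t) = s(t) + s_{\mathrm{dist}}(t - \tau_1),
\qquad
s_{m2,\mathrm{in}}(t) = s(t - \tau_s) + s_{\mathrm{dist}}(t - \tau_2),
\qquad \tau_1 \neq \tau_2
```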
This paper suggests a speech enhancement approach for an eavesdropping audio system. The speech signal is disturbed by non-stochastic noise. The algorithm is based on recordings from a dual-microphone system. A Wiener filter is applied for speech extraction. The algorithm is also designed to capture dialogues in a noisy environment. It uses the small differences between the recordings: the differences in the localisation of the speaker and of the noise source, together with the differences in their spectra, enable us to split the two signals.
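A minimal sketch of a spectral-domain Wiener gain in the spirit of the abstract above, assuming the delay-compensated music-dominant channel serves as a noise reference; the STFT parameters and function names are assumptions, not the paper's implementation.

```python
# Hedged sketch: a generic spectral Wiener gain for the dual-microphone setup.
# The aligned reference recording (music-dominant) provides a noise power
# estimate, and the speech-dominant channel is attenuated where noise dominates.
import numpy as np
from scipy.signal import stft, istft

def wiener_extract(speech_ch, noise_ref, fs, eps=1e-10):
    _, _, S = stft(speech_ch, fs=fs, nperseg=512)   # speech-dominant channel
    _, _, N = stft(noise_ref, fs=fs, nperseg=512)   # noise (music) reference
    # Average noise power per frequency bin (a simple stand-in for noise tracking).
    noise_psd = np.mean(np.abs(N) ** 2, axis=1, keepdims=True)
    sig_psd = np.maximum(np.abs(S) ** 2 - noise_psd, 0.0)
    gain = sig_psd / (sig_psd + noise_psd + eps)    # Wiener gain per bin
    _, enhanced = istft(gain * S, fs=fs, nperseg=512)
    return enhanced
```

In practice the noise power would be tracked over time rather than averaged over all frames; the single average here only keeps the sketch short.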