
Acoustic Signal Processing - Science topic

Explore the latest questions and answers in Acoustic Signal Processing, and find Acoustic Signal Processing experts.
Questions related to Acoustic Signal Processing
  • asked a question related to Acoustic Signal Processing
Question
10 answers
Hi,
I am working on an echo removal project. So far, I have successfully identified the far-end signal chunk, 21 ms long at a 48000 Hz sampling rate, whose echo is present in my 21 ms near-end signal. I did this using Echo Detection and Delay Estimation using a Pattern Recognition Approach and Cepstral Correlation.
Now I want to remove that echoed far-end signal from my near-end signal, which contains the far-end echo plus the near-end voice.
Things I tried:
  1. Time-domain subtraction of PCM signals. i.e output[n] = near_end[n] - far_end[n]
  2. Spectral Subtraction technique Eliminate Signal A from Signal B. Even Ephraim-Malah
In both cases I am not getting the expected result. For spectral subtraction, I have read that it works well when there is static noise or one signal is stationary; for non-stationary signals it does not work well.
What other techniques can remove the echo in my scenario? Since I have identified the far-end chunk whose echo is present in the near-end chunk, I just want to remove it from the near-end chunk.
Relevant answer
Answer
Isam Alkhalifawi Sir, these adaptive echo cancellation techniques don't help in my scenario. My echo is non-stationary, and the same sound is produced with a delay by multiple speakers, not just one speaker.
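For reference, a minimal MATLAB sketch of delay-aligned spectral subtraction over the identified chunk; near_end and far_end are the 21 ms column-vector chunks from the question, and the over-subtraction factor and spectral floor are assumptions to tune:
% Align and scale the far-end chunk, then subtract its magnitude spectrum
[r, lags] = xcorr(near_end, far_end);
[~, i] = max(abs(r));
d = lags(i);                                  % estimated echo delay in samples
a = r(i) / sum(far_end.^2);                   % least-squares echo gain
far_al = a * circshift(far_end, d);           % crude circular alignment for this sketch
NE = fft(near_end); FE = fft(far_al);
mag = max(abs(NE) - 2*abs(FE), 0.1*abs(NE));  % over-subtract, keep a spectral floor
out = real(ifft(mag .* exp(1i*angle(NE))));   % resynthesize with the near-end phase
With multiple delayed copies (several loudspeakers), one subtraction pass per detected delay, or a short FIR filter estimated by least squares from far_end to near_end, may work better than a single shift.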
  • asked a question related to Acoustic Signal Processing
Question
3 answers
I am trying to build a text-to-speech converter from scratch.
For that
Text 'A' should sound Ayyy
Text 'B' should sound Bee
Text 'Ace' should sound 'Ase'
Etc
So how many distinct sounds in total do I need to reconstruct full English-language words?
Relevant answer
Answer
Maybe this will be useful...
  • asked a question related to Acoustic Signal Processing
Question
5 answers
Hi guys,
Is there any option in the AVISOFT SASLab Pro software which enables you to eliminate unwanted noise from a digital recording without affecting your original sound? In my case, sounds are recorded in experimental tanks with a hydrophone connected to a digital audio recorder. The lab is full of low-frequency noise, which partly disrupts my sound of interest. If I high-pass filter the recording, there is still noise that is not eliminated and that overlaps with the frequency spectrum of the sound.
Any advice would be helpful.
Relevant answer
Answer
Avisoft SASLab Pro 5.2 has built-in low-pass, high-pass, notch, and band-pass filters that are critical for eliminating unwanted noise. Open the software, then choose the Edit menu. Under the Edit menu select Filter, and under Filter choose the Time Domain IIR or FIR Filter. If it is background noise, you may record the room tone and filter it out. Similarly, the Avisoft UltraSound Gate allows you to attenuate unwanted signals.
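If you also post-process recordings outside SASLab Pro, here is a minimal MATLAB sketch of zero-phase high-pass filtering; the file name and cut-off frequency are assumptions to adapt:
% Zero-phase high-pass filtering of a hydrophone recording
[x, fs] = audioread('tank_recording.wav');  % hypothetical file
fc = 500;                                   % assumed cut-off in Hz
[b, a] = butter(4, fc/(fs/2), 'high');      % 4th-order Butterworth high-pass
y = filtfilt(b, a, x);                      % filtfilt avoids phase distortion
If the noise overlaps the signal band, as described, no static filter will fully separate them; spectral subtraction against a noise-only recording is then the usual next step.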
  • asked a question related to Acoustic Signal Processing
Question
15 answers
Does anyone have an idea how to harvest the acoustic energy from a line sound source? The line source is small, perhaps in the centimeter range, and the sound pressure is very low, around a few µPa I guess.
Relevant answer
Answer
Dear Sheng,
welcome,
I think you may have already solved the problem. However, I would like to suggest a solution. There are two types of electrical vibration harvesters: the electrostatic MEMS converter and the piezoelectric transducer. You could use four flat converters to collect the sound from all sides. You could also put the sound line at the focus of a parabolic reflector and receive the reflected sound with one flat converter. The second solution may be better, as it uses only one flat converter.
Best wishes
  • asked a question related to Acoustic Signal Processing
Question
2 answers
Hi all,
I am involved in and interested in estimating the age of users from speech signals. Kindly suggest some free corpora available for this research.
Thanks
Relevant answer
Answer
Thanks for your kind response, Silber. I have searched for the corpus links but couldn't find them. Can you share the URL of the free corpus maintained by Carnegie Mellon University? That would be helpful.
  • asked a question related to Acoustic Signal Processing
Question
14 answers
Are there any DSP techniques to temporally synchronize two different or more digital audio signals without timestamp information?
Relevant answer
Answer
If you describe what your streams are and why you want to synchronize them, I can make you a proposal. However, if you work in a packet-switched network,
then you can follow the synchronization techniques used there: the packets are sealed, addressed, and numbered so that they can be reassembled at the destination.
Best wishes
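When only the waveforms are available (no packet numbering or timestamps), cross-correlation is the usual DSP starting point. A minimal MATLAB sketch, assuming x and y are column vectors recorded at the same sampling rate:
% Estimate the relative delay between two recordings via cross-correlation
[r, lags] = xcorr(x, y);
[~, i] = max(abs(r));
d = lags(i);              % lag of the correlation peak, in samples
y_al = circshift(y, d);   % crude circular alignment; pad/trim explicitly in real use
If the two signals come from different clocks, sample-rate drift also has to be estimated (e.g. by tracking the delay over successive blocks), not just a single offset.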
  • asked a question related to Acoustic Signal Processing
Question
9 answers
It is well known that audio compression (e.g., MP3, AAC) usually processes the audio data frame-by-frame. However, I am curious about the feasibility of single frame based processing.
A commonly accepted notion is that frame-based processing preserves the time resolution of the audio data, while single-frame processing does not. This is similar to comparing the DFT and the STFT.
However, why do we need time resolution of the audio signal during compression? For a given audio clip, a single-frame FFT has very high frequency resolution (a huge number of points) and no time resolution. We can still calculate tonal and non-tonal elements, masking curves, quantization indices, etc. In this way, the modification of any frequency bin is reflected throughout the time domain, wherever that frequency appears along the time axis of the compressed audio samples.
I personally do not see any potential problems with performing single-frame compression as described above. The only problem I can imagine is the hardware implementation of huge DCT sizes. But the computational complexity of the FFT is O(n log n), which approaches a linear function of n when n is large, so I do not see this as a big problem given rapidly developing computer capabilities.
Please help to point out my mistakes in the above statements.
Relevant answer
Answer
You want to increase the size of the audio frame to contain the whole audio message. At first glance, I think such a question needs investigation to answer properly.
But I think the frame size is determined by the latency of the transmission system:
all signal processing must be carried out within that latency budget. Another important point is that the transmission medium may be dynamic, so decoding may become very complicated or even impossible if the frame size is increased. I think frame sizes are dictated more by the decoding process, which is more complex than the coding process; the transmission medium also imposes restrictions on the frame size.
The cost of the processing, even if it grows only in proportion to the frame size, must also be considered, as one cannot use powerful computing platforms in all communication equipment.
Another important point is that an audio signal is not continuous in nature but contains interruptions.
I think the optimum frame size varies among applications.
Best wishes
  • asked a question related to Acoustic Signal Processing
Question
4 answers
Which module can I use to simulate nonlinear acoustic wave propagation in micro-damaged materials?
Relevant answer
Answer
Dear Nesrin,
This problem can be solved using the numerical method previously used in CAE IMPULSE:
This approach has good stability and very low computational costs (time and memory).
The simulation time on standard desktop computers is about 1-10 minutes (depending on the specific task).
Regards
  • asked a question related to Acoustic Signal Processing
Question
8 answers
The aim is to follow the evolution of the attenuation coefficient vs. mortar age.
The experiments will be done using the ultrasound pulse-echo method with P-waves and by immersion testing (using immersion transducers).
Relevant answer
Answer
Please share the best answer you get...
  • asked a question related to Acoustic Signal Processing
Question
6 answers
I have two speech signals coming from two different people. I want to find out whether or not both people are saying the same phrase. Is there anything that I can directly measure between the two signals to know how similar they are?
Relevant answer
Answer
It sounds simple, but unfortunately, it is not!
There are many confounding factors that make this process complicated. I give you some examples: consider you have a recording of your own voice recorded in a sound proof room saying "OPEN THE DOOR", and you would like to use that recording as the reference to which other voice commands are compared to take an action to open the door, for example.
  • Now, if you utter the same utterance but in a noisy environment, the two recordings are no longer the same.
  • If you change the room and record it in a reverberant room, the two signals are no longer the same.
  • If you say the same sentence but in different speed (speech rate) as you uttered the reference one, the two signals are no longer the same.
  • If you utter the same sentence but in different rhythm as you uttered the reference one, again, the two signals are no longer the same.
  • Now, consider that all or some of the above mentioned factors happen at the same time. Again, the two signals are no longer the same.
  • Now, imagine that you want to compare your reference signal with another person's recording of the same sentence. If both recordings are recorded in a similar environmental condition (same room, same equipment) and the same rhythm and rate, again, the two recordings are not the same.
  • Age, gender, health condition are other confounding factors that influence the signal.
Considering the formants of the two signals and comparing them using some similarity measure could be a very simple and quick solution. Unfortunately, this does not give good results since, for example, the similarity measure between two completely different sentences recorded in the same acoustic environment can be higher than that between two roughly similar sentences recorded in different environments, or the second speaker may utter the same words as the reference recording but in a different order.
To deal with these factors and variabilities, you might need a model (such as a hidden Markov model or Gaussian mixture model) to capture the acoustic characteristics of the signals (in some relevant feature space such as the cepstral domain or the time-frequency domain) and to relate the segments of a signal to language units, plus a language model to link the units and recognize the sentence. All these procedures are covered by the speech recognition field.
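As a lightweight first experiment before full speech recognition, dynamic time warping over short-time spectral features absorbs some of the rate and rhythm differences listed above. A minimal MATLAB sketch, assuming x1 and x2 are the two utterances at a common sampling rate fs (window/hop sizes are typical assumptions; dtw requires the Signal Processing Toolbox):
% Compare two utterances with DTW over log-magnitude spectrogram frames
win = round(0.025*fs); hop = round(0.010*fs);
F1 = log(abs(spectrogram(x1, hann(win), win-hop, 512)) + eps);
F2 = log(abs(spectrogram(x2, hann(win), win-hop, 512)) + eps);
d = dtw(F1, F2);   % dtw treats columns as time frames for matrix input
% Smaller d suggests more similar phrases; normalize by the warping path length
This still will not handle different speakers or rooms robustly, for the reasons given above.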
  • asked a question related to Acoustic Signal Processing
Question
4 answers
Hello,
I would appreciate it if someone could explain to me how to estimate the parameters of Stoneley waves (such as arrival time, the frequency or frequency range corresponding to the Stoneley waves, velocity, and so on) that propagate in a rock sample (limestone).
I used an ultrasonic pulser/receiver and an oscilloscope for the measurements and got these results (please find attached), but I am not sure what the next steps should be.
How can I distinguish the P-wave, S-wave, and Stoneley-wave arrival times, and whether they are present at all?
I have never dealt with this topic before, so I would be grateful for guidance or advice on literature that I should read first.
Thank you.
Relevant answer
Answer
Somehow you didn't get my reply.
Ori Yeheskel
  • asked a question related to Acoustic Signal Processing
Question
3 answers
For my bachelor thesis, I would like to analyse the voice stream of a few meetings of 5 to 10 persons.
The goal is to validate some hypotheses linking speech-time repartition to workshop creativity. I am looking for a tool that can be implemented easily and without any extensive knowledge of signal processing.
Ideally, I would like to feed the tool with an audio input and get the time segments of each speaker, either graphically or in matrix/array form.
- diarization does not need to be realtime
- source can be single or multi stream (we could install microphones on each participant)
- the process can be (semi-)supervised if need be; we know the number of participants beforehand.
- The tool can be a MATLAB script, .exe, Java, or similar file. I am open to suggestions.
Again I am looking for the simplest, easy-to-install solution.
Thank you in advance
Basile Verhulst
Relevant answer
Answer
  • asked a question related to Acoustic Signal Processing
Question
6 answers
I have the following 2 files in xlsx format. I would like MATLAB code to read in the data of both files and do a correlation to check how much similarity there is between them, and if they differ, how much they differ from each other. Can someone help me? I have limited MATLAB knowledge.
Relevant answer
Answer
Just copy and paste the data.
Create a variable in the workspace, then paste the Excel data into the Variable Editor window (don't paste the header).
You can also open the Excel file from MATLAB and import the data; each column will become a variable.
After that you can apply the corrcoef function.
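A minimal MATLAB sketch of the import-and-correlate route; the file names and the choice of the first column are assumptions:
% Read both spreadsheets and correlate their first numeric columns
A = readmatrix('file1.xlsx');      % use xlsread on releases before R2019a
B = readmatrix('file2.xlsx');
n = min(size(A,1), size(B,1));     % trim to a common length
R = corrcoef(A(1:n,1), B(1:n,1));
r = R(1,2)                         % close to 1 means strongly similar
If the two series may be shifted in time, look at xcorr as well, since corrcoef only measures sample-by-sample similarity.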
  • asked a question related to Acoustic Signal Processing
Question
3 answers
I want to measure the velocity fluctuation in an air stream using two microphones. I have the data. How do I process it to get the velocity?
Relevant answer
Answer
Dear Haidar,
You may use MATLAB and apply the two-microphone sound intensity calculation procedure to process your data. You can find some useful resources for this purpose on the Bruel & Kjaer website. In addition, for a short reference, please read about the frequency-domain Kalman filter as proposed by Bai et al. (J. Acoust. Soc. Am. 133(3), March 2013, pp. 1425-1432).
  • asked a question related to Acoustic Signal Processing
Question
4 answers
Elimination of noise from an acoustic signal using image processing.
Relevant answer
Answer
Good Morning, Prashanth
As your question is vague, I would assume that you are not referring to removal of noise in acoustic sound imagery but to pure acoustic signals.
Here is a recently granted patent that can help you start thinking about how to apply the knowledge taken from the patent.
My understanding is that you want to apply two-dimensional filtering to stereo sound (or 3D filters to 3D sound).
2D filters can be applied as separable or non-separable. You can apply separable filters in each one of the dimensions, and non-separable 2D filters similarly to their application in imaging. Sometimes you can apply their inverse form and use the FFT.
Here is an example in an article from Ieee explore:
Good luck, please do not hesitate to ask for more, Viara
  • asked a question related to Acoustic Signal Processing
Question
3 answers
I want to run an auditory experiment in which the intensity of a sound changes between conditions, e.g., 60 dB in one condition and 35 dB in another. How can this be achieved? Is there any hardware or software to control the intensity of sound at the dB level?
Relevant answer
Hi Milan,
What is your sound reproduction system? When you say 60 dB, do you mean SPL?
If your reproduction system is headphones, you can use a binaural head to measure the sound pressure level at each ear and calibrate your system. In the case of loudspeakers, I would measure the SPL at the listener position using a sound level meter.
Best regards,
Diego
  • asked a question related to Acoustic Signal Processing
Question
2 answers
How does one calculate the PESQ (Perceptual Evaluation of Speech Quality) score of a noisy speech signal, especially for speech signals with a 12 kHz sampling frequency?
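PESQ (ITU-T P.862) takes 8 kHz input, and its wideband extension (P.862.2) takes 16 kHz, so 12 kHz material is usually resampled first. A minimal MATLAB sketch; the file names are hypothetical, and the PESQ scoring itself requires an external P.862 implementation:
% Resample 12 kHz speech to 16 kHz before PESQ scoring
[ref, fs] = audioread('clean12k.wav');   % fs is 12000 here
[deg, ~]  = audioread('noisy12k.wav');
ref16 = resample(ref, 4, 3);             % 12 kHz -> 16 kHz (rational factor 4/3)
deg16 = resample(deg, 4, 3);
% Pass ref16/deg16 at 16 kHz to a P.862 implementation to obtain the score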
  • asked a question related to Acoustic Signal Processing
Question
3 answers
I want to calculate the flame transfer function of a swirl-stabilized non-premixed flame. I have a loudspeaker at my disposal. Do I need a siren instead of a loudspeaker?
Relevant answer
Answer
Hi
The concept of the Flame Transfer Function is new to me. I just found this paper and had a quick look at it; the speaker is used to modulate the flow through the nozzle.
It seems to me that you might be able to detect part of the information you want on the downstream side using reciprocity, but I may be off, so treat the above with a grain of salt.
/C
  • asked a question related to Acoustic Signal Processing
Question
4 answers
I want to find the J-integral value of a steel material using the Acoustic Emission Technique (AET).
Relevant answer
Answer
You can detect the point of crack initiation, and hence the initiation toughness, but you can't generate the complete J-R curve.
  • asked a question related to Acoustic Signal Processing
Question
9 answers
Hi there,
I understand that a power delay profile is created at a particular measurement point in the propagation environment. If that is so, I'm wondering how I can create a single power delay profile for an 8 m propagation measurement with 28 measurement points.
The channel sounder captures the complex frequency response, which is translated to the time domain using the IFFT. The transmission bandwidth is 1-4 GHz.
Thanks for your kind help in advance. 
Relevant answer
Answer
No.  This is wrong.  The step is still 15 MHz, so the total time is still 66.66666667 ns.  Because you have more points (and thus the bandwidth is bigger) the time step is now 248.8 ps.
step  = 15 MHz
start - stop = 4.005 GHz
4.005 GHz / 15 MHz = 267 steps
267 steps +1 = 268 points
total range of time = 1/15 MHz = 66.666666666667 ns
time step = 66.666666667 ns / 268 = 248.8 ps = 1/4.005 GHz *267/268
The zero padding won't have changed the shape of the time response in proportion to the total time, because no extra data has been added or taken away (it will change it a little bit if you used a non-constant window function that is over the whole frequency range rather than just over where the data is), but the time sample points will be at a different place on the curve.  Zero padding puts the time points closer together.  It is used to make the curves look smoother.  Zero padding to 4 or 8 times the length is sometimes used.
Putting the two columns beside each other makes me wonder if you think the times and frequencies correspond with each other.  No particular frequency corresponds to any particular time.  Every frequency is used to calculate the result at each time.  In fact each time data point is just basically the sum of the signals at all the frequencies (with the measured amplitudes and phases) at that time.
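For concreteness, a minimal MATLAB sketch of turning one measured complex frequency response into a power delay profile, consistent with the numbers above (H is the vector of 268 complex samples at a 15 MHz step; the window choice is an assumption):
% Power delay profile from a swept complex frequency response
df = 15e6;                               % frequency step in Hz
h  = ifft(H(:) .* hann(numel(H)));       % optional window to control sidelobes
pdp = abs(h).^2;
t = (0:numel(h)-1).' / (df*numel(h));    % time axis; total span 1/df = 66.67 ns
plot(t*1e9, 10*log10(pdp/max(pdp)))
A single PDP for the whole 8 m run is then usually the average of the 28 per-point PDPs, after deciding whether the channel can be treated as stationary over that distance.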
  • asked a question related to Acoustic Signal Processing
Question
5 answers
I am trying to determine the signal to noise ratio (SNR) of audio signals for speech research. I have found several ways to calculate it. Is there a standard equation generally used in this field and is there a standard value the SNR should be above?
Relevant answer
Answer
To my knowledge, an SNR > 30 dB is considered clean speech. Some guidelines can be found in the link.
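A common working definition, as a hedged MATLAB sketch; it assumes you can mark a clean or noisy speech segment (s or y) and a noise-only segment n in the recording:
% Segment-based SNR estimates
snr_db  = 10*log10(mean(s.^2)/mean(n.^2));                 % if s is clean speech
snr_est = 10*log10((mean(y.^2) - mean(n.^2))/mean(n.^2));  % if y is speech + noise
Definitions differ mainly in segmentation (global vs. frame-averaged "segmental SNR"), which is why published numbers are not always directly comparable.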
  • asked a question related to Acoustic Signal Processing
Question
3 answers
I want to get the Intensity readings of a sustained vowel at even time intervals (i.e. every 0.01 or 0.001 seconds). In Praat when I adjust the time step to "fixed" and change the fixed time to 0.01 or 0.001 it adjusts this for pitch and formants, but not for intensity. Intensity remains at increments of 0.010667 seconds between each time. Is it possible to change the time step for intensity or can it only be changed for the other parameters? Any help is much appreciated!
Relevant answer
Answer
What you actually want to change is the window size. For that you need to change the pitch settings and set a minimum pitch 4 times higher than the rate you want to have (i.e. 400/4000 Hz for a window size of 0.01/0.001 s). This will, however, not change the overall mean intensity much, and is more relevant to clinical research.
  • asked a question related to Acoustic Signal Processing
Question
7 answers
Why not a square waveform or a parabola?
Relevant answer
Answer
Exponential functions are useful not only in the study of acoustic signals but also in the study of "linear" systems generally, because these functions are not modified by a linear system: if the input of a linear system is an exponential function, the output is the same exponential function multiplied by a complex number whose modulus gives the gain of the system and whose phase gives the phase change introduced by the system.
  • asked a question related to Acoustic Signal Processing
Question
2 answers
I am analyzing recorded speech of sustained vowel phonation and am trying to figure out which filters are necessary for the analysis. Does an A-weighted filter need to be applied to account for the fundamental frequency? And does any de-noising need to be done to the signal?
Relevant answer
Answer
If the recorded speech signal is degraded by background noise, wavelet thresholding would be more appropriate than low-pass filtering for vowels.
  • asked a question related to Acoustic Signal Processing
Question
10 answers
Dear All,
Could you please point me to any references or details for designing a wideband SAW receiver working at a centre frequency of 20 MHz and a bandwidth of 25%?
Is there any limitation on designing wideband SAW devices at this frequency? I see that most of the work is in the GHz range.
Thanks,
Rahul Kishor
Relevant answer
Answer
Hi Dr Victor,
The SAW in my case is not generated using an IDT; instead, it is optically generated. I want to use an IDT to detect such an optically induced SAW.
Thanks
  • asked a question related to Acoustic Signal Processing
Question
4 answers
I am working on psychoacoustic active noise control. I want to design a psychoacoustic model for sound quality measurement. I went through Zwicker's book Psychoacoustics: Facts and Models.
I am not able to understand how to write a MATLAB program to calculate the loudness.
Relevant answer
Answer
Good question and good answer :)
Perhaps a little late, but there is a toolbox available which contains matlab implementations of a number of loudness models, which I've found useful in the past. http://genesis-acoustics.com/en/loudness_online-32.html
  • asked a question related to Acoustic Signal Processing
Question
5 answers
I have two acoustic signals (amplitude as a function of time). I do an FFT or Welch analysis to see for which frequencies the two spectra are similar and where they differ. I am thinking of a correlation coefficient as a function of frequency. How do I do this (e.g. in MATLAB)?
PS the spectra are quite noisy.
Relevant answer
Answer
The best way I know of is what Sina said: the CPSD or the coherence. To do this with MATLAB's built-in functions, use cpsd for the cross power spectral density or mscohere for the coherence.
If they're noisy signals, as you say, you may want to use a Hanning window with some overlap against window edge effects. We usually use 75% overlap for aeroacoustic broadband signals.
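A minimal MATLAB sketch of that suggestion; the window length and nfft are assumptions to adapt to your record length:
% Frequency-resolved similarity of two signals x and y sampled at fs
win = hann(1024);
nov = round(0.75*numel(win));                    % 75% overlap, as suggested
[Cxy, f] = mscohere(x, y, win, nov, 2048, fs);   % coherence in [0, 1] per frequency
plot(f, Cxy), xlabel('Frequency (Hz)'), ylabel('Magnitude-squared coherence')
Values near 1 mark frequencies where the two spectra are linearly related; shorter windows give more averaging segments and lower variance on noisy data.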
  • asked a question related to Acoustic Signal Processing
Question
4 answers
Synthetic data generation is an interesting area of research, but I have difficulty finding articles and textbooks about the topic. I would like an idea of the definitions and frameworks for automatic synthetic data generation in any area, particularly in sound analysis.
Relevant answer
Answer
As you focus on sound analysis, you may find interesting the current state-of-the-art technique for improving speech recognition acoustic models by augmenting the training data with speed, frequency, and tempo warping, which is indirectly related to synthetic data generation ( http://www.danielpovey.com/files/2015_interspeech_augmentation.pdf )
Another example of creating synthetic data is block mixing that seems to be useful in polyphonic acoustic event detection: https://arxiv.org/pdf/1604.00861.pdf
We also tried the latter in the recent DCASE challenge, gaining a little on F-measure, though not as much as the authors of the previous work: https://www.researchgate.net/publication/306118891_DCASE_2016_Sound_Event_Detection_System_Based_on_Convolutional_Neural_Network
  • asked a question related to Acoustic Signal Processing
Question
8 answers
My name is Rouland; I work at Pakuan University, Department of Biology. I am now studying frogs, and I have a big problem: as a beginner, my biggest problem is identifying frog sounds. My idea is to analyze every type of frog sound and group them based on the similarity of the sound waves, so I can tell which waveforms belong to the same frogs. The hard part is finding software to analyze the sounds. What kind of software suits my research? Thanks
Relevant answer
Answer
There are two excellent alternatives (free software):
For R users, the package "seewave": http://rug.mnhn.fr/seewave/
The Brazilian biologist Marcos Gridi Papp developed Sound Ruler: http://soundruler.sourceforge.net/main/
Both are excellent options.
Of course there are commercial packages too (most of them with free "light" versions).
Best
  • asked a question related to Acoustic Signal Processing
Question
2 answers
My eventual aim is to model the radiation of acoustic sources using a hybrid approach (different mesh gradients and domain sizes). 
I would like to be able to create models that have changing flow velocities at a boundary. This would eventually lead to simulating audible sources (loudspeakers etc.), so the source velocity would need to change appropriately with each step of the computation, and it would need to act effectively as both an inlet and an outlet.
Is this currently possible with only some minor tweaks of an arbitrary 2D case?
Has anyone come across a tool that already exists for this? 
Any advice or suggestions are more than welcome.
Many thanks.
Relevant answer
Answer
  • asked a question related to Acoustic Signal Processing
Question
3 answers
Dear physicists, I would like to know whether there is a relation between the intensity of an acoustic wave and the ability of that wave to shatter a body. For instance, if a tuning fork emits an acoustic wave into body X (such as a glass cup) at a resonant frequency of body X (for instance the fundamental frequency), is there a threshold intensity (e.g. wave amplitude, wave power) of the emitted wave which needs to be exceeded before body X will shatter? It seems well known that the shattering effect occurs at specific frequencies (the fundamental frequency or its harmonics); however, do the wave amplitude, power, or other material properties of the body play a part? If yes, what relation or formula governs these? Answers appreciated. Thanks
Relevant answer
Yes, if you need to.
BB-C 
  • asked a question related to Acoustic Signal Processing
Question
3 answers
The mean processed signal to noise ratio was calculated to be 30 dB for the Raytheon sonar, and 13 dB for the Klein sonar. Using the Receiver Operating Characteristic (ROC) curve displayed (figure with calculations from Urick,1983 is attached: file name is "ROC curves calculations.bmp"), and given the desired false alarm probability of 0.5%, the probability of detection corresponding to the mean processed signal to noise ratio for each sonar
was calculated at the false alarm level. The probability of detection was calculated to be 0.998 for the Raytheon sonar (green lines on the plot attached), and 0.82 for the Klein sonar (yellow lines on the plot attached).
I tried to reproduce the calculations in MATLAB (with the rocsnr function), but I cannot obtain the same results as from the paper plots. MATLAB gives substantially higher values: e.g., for the Raytheon sonar the probability of detection is always 1 (for a signal-to-noise ratio of 30 dB). The MATLAB code for the calculations is relatively simple and is given below.
[Pd,Pfa] = rocsnr(30);
idx = find(Pfa==0.005); % find index for Pfa=0.005
sprintf('%.9f',Pd(idx))
The result for calculation looks as follows (I expected to get 0.998).
ans =
Empty matrix: 1-by-0
After getting this result I tried a larger Pfa value, but the result is 1.
[Pd,Pfa] = rocsnr(30);
idx = find(Pfa==0.01); % find index for Pfa=0.01
sprintf('%.9f',Pd(idx))
ans =
1.000000000
For the Klein sonar the probability of detection is almost 1 instead of 0.82 (for a signal-to-noise ratio of 13 dB). I cannot obtain a result for a false alarm probability of 0.5%; for 1% I get 0.999967062.
[Pd,Pfa] = rocsnr(13);
idx = find(Pfa==0.005); % find index for Pfa=0.005
sprintf('%.9f',Pd(idx))
ans =
Empty matrix: 1-by-0
[Pd,Pfa] = rocsnr(13);
idx = find(Pfa==0.01); % find index for Pfa=0.01
sprintf('%.9f',Pd(idx))
ans =
0.999967062
What is the reason for such an inconsistency between the paper plot calculations and the "efficient" automatic MATLAB calculations for the same input data?
The original figure for ROC curves (Urick,1983) without additional lines plotted is attached too (file name is "ROC curves (Urick, 1983).bmp").
The links to MATLAB documentation related to ROC curves are given below.
It is interesting that ROC curves were first introduced in MATLAB R2011a.
Relevant answer
Answer
Dear Fernando,
Attachments (with calculations info) are available. 
MATLAB code for calculations is relatively simple and is given below.
[Pd,Pfa] = rocsnr(30);
idx = find(Pfa==0.005); % find index for Pfa=0.005
sprintf('%.9f',Pd(idx))
The result for calculation looks as follows (I expected to get 0.998).
ans =
Empty matrix: 1-by-0
After getting this result I tried to increase Pfa value, but the result is 1.
[Pd,Pfa] = rocsnr(30);
idx = find(Pfa==0.01); % find index for Pfa=0.01
sprintf('%.9f',Pd(idx))
ans =
1.000000000
The links to MATLAB documentation related to ROC curves are given below.
It is interesting that ROC curves were first introduced in MATLAB R2011a.
Sincerely,
Oleksandr
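Two separate issues seem to be mixed here. First, find(Pfa==0.005) returns an empty matrix simply because 0.005 is not exactly one of the points on the Pfa grid that rocsnr returns, and exact floating-point comparison fails; interpolation avoids that. A minimal sketch:
[Pd, Pfa] = rocsnr(13);
Pd_at = interp1(log10(Pfa), Pd, log10(0.005))   % sample the curve at Pfa = 0.5%
Second, and more importantly, if I recall correctly rocsnr assumes a single-sample nonfluctuating coherent receiver by default, whereas Urick's ROC curves are parameterized by the detection index d of an energy detector, so the two "SNR" axes are not the same quantity; that alone can explain Pd saturating at 1 in MATLAB. Check the signal-type option and SNR definition in the rocsnr documentation against Urick's definition before comparing numbers.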
  • asked a question related to Acoustic Signal Processing
Question
7 answers
Check this video:
It seems to show that the string modes are discrete states that are either in one state or another.
Note the waves are never purely standing, but twist and vibrate in another mode on top of the standing-wave catenary. The standing wave has no time derivative, but the catenary has its own oscillation.
Can anyone prove the string can have two modes at once?
Relevant answer
Answer
I think you will find that you are not seeing the actual string vibrations but strobe effects caused by the sample rate of the digital camera. The strings are not bending in that rapid pattern; they move from side to side in the time it takes the camera to scan a small distance up the string (actually across its own image plane). Similarly, there are no waves moving along the string in the way it looks. These are all artifacts of the sampling in the camera, just like wheels turning backwards in some films. The A string has a frequency of 110 Hz. The camera only looks at it 30 times a second, so the string completes almost four vibrations between frames. It will be in sync with the camera at different points in the picture for each frame; that's why you see the pattern travel along the string. The camera lies, especially digital cameras.
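A quick numeric check of this aliasing argument (frame rate and string frequency from the answer above):
% Apparent frequency of a 110 Hz string sampled at 30 frames per second
fs = 30; f0 = 110;
fApp = abs(f0 - fs*round(f0/fs))   % = 10 Hz, the slow "wave" seen on video
% f0/fs = 3.67, i.e. almost four full vibrations pass between consecutive frames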
  • asked a question related to Acoustic Signal Processing
Question
9 answers
Can someone help me understand why the scattering coefficient (s) is negative in a reverberation time measurement?
Relevant answer
Answer
Hello Mr. Alejandro. My diffusers are not circular but square, with a surface area of 60 cm x 60 cm, following the directions of Vorländer and Mommertz in the paper "Definition and measurement of random-incidence scattering coefficients". Thanks for posting the email address; maybe we can communicate via email.
Best regards. Sofian hanafi
  • asked a question related to Acoustic Signal Processing
Question
8 answers
I have used a 356B20 accelerometer to record acceleration data during a drop test. I have extracted the information into an Excel file. It contains X, Y, Z acceleration data from one sensor. I would like to remove noise from the signal to get smooth acceleration data. Please suggest a suitable method.
Relevant answer
Answer
Dear Gaurav,
Low-pass filtering may help a little (as described above).
However, if I understand your experiment correctly, you want to measure impact, i.e. a fast acceleration and deceleration. Low-pass filters can attenuate the peak signal and so give a confusing answer.
A better solution is to identify the source of noise and tackle that. The following checklist may help.
1) Is the noise at mains frequency? If so, use better shielding at the transducer and over the leads to the amplifier.
2) Is the noise picked up from other machines in the same lab? Use better shielding; place the amplifier as close to the transducer as possible, with screened cable. Do not use the screen as part of the measurement circuit. Use differential amplifier inputs wherever possible.
3) Can you increase the signal from the transducer? (This depends on the type of transducer.) On some you can increase the excitation voltage (but be careful of dissipation; one can increase the voltage just around the impact incident).
4) Are you using the right transducer? Are there any trade-offs you can take advantage of?
5) Do an FFT (fast Fourier transform) of the noise to see if there are any dominant frequencies you can filter out using a notch filter (see the sketch after this answer).
6) Is the noise coming from the A-to-D converter (typically clock-frequency noise)? If so, check your screening and grounding, and use an analogue amplifier close to the transducer to increase the signal before the A-to-D.
One last comment: it is easier to get a clean signal from the transducer; trying to filter a noisy signal is unlikely to give you the best result.
By all means come back to me if you have more detail of your set up and let me know what measurement accuracy you are looking for.
Regards
Paul
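For checklist item 5, a minimal MATLAB sketch; the sampling rate, the 50 Hz line, and the notch width are assumptions to adapt to what the spectrum shows:
% Locate dominant noise lines and notch them out of accelerometer data x
[pxx, f] = pwelch(x, hann(4096), [], [], fs);
plot(f, 10*log10(pxx))                           % inspect for narrowband peaks
f0 = 50;                                         % e.g. a mains line found above
[b, a] = butter(2, [f0-2, f0+2]/(fs/2), 'stop'); % narrow band-stop around f0
y = filtfilt(b, a, x);                           % zero-phase, so the impact peak is not shifted
As Paul notes, this treats the symptom; fixing shielding and grounding first gives a better result than any filter.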
  • asked a question related to Acoustic Signal Processing
Question
4 answers
Two time series (1. engine RPM vs. time and 2. seat acceleration vs. time) were analyzed using the LMS Testlab and LMS AMESim post-processing tools. It was observed that the order plots of acceleration vs. RPM obtained from the two packages, using the same window type and trend removal option, do not match. Further investigation reveals that using the same sampling frequency in AMESim as in Testlab produces a result with trends similar to the Testlab result; however, the acceleration amplitudes deviate significantly from the Testlab results.
What other parameters should be checked to obtain same results from Testlab and AMESim?
Best Regards,
David
Relevant answer
Answer
Hi David
Debugging software is always dog's work.
Suggestions on simple things to compare are
  • RMS (1s) values computed from raw time signals to ensure signals have similar energy content.
  • Autospectrum: verify that frequencies and amplitudes are identical. Check the Autospectrum with Amplitude shown as Peak, RMS, Pk-Pk etc.
  • A good overview of long time series is often obtained by using amplitude histograms and looking at kurtosis.
Where possible, try to swap data between systems, i.e. get test data to act as simulation data and vice versa. If this works, use a simple calibrator signal of 1 g at a fixed and known frequency as the first signal to test AMESim.
If I should venture a guess: test systems use fixed sampling rates, as the measurement hardware uses a fixed clock frequency, while a simulation system may use varying time steps for numerical efficiency, and this difference in time stepping may complicate post-processing.
Have fun
Claes
  • asked a question related to Acoustic Signal Processing
Question
3 answers
While applying the Analog-Digital-Microprocessor (ADµP™) to formant measurement, analysis, and synthesis, I have found that the frequency of the second formant corresponds to the frequency of the second harmonic. Now it must be decided whether the Analog-Digital-Microprocessor should ascertain the second harmonic. This implies more computing time and less performance.
Relevant answer
Answer
A priori, there is no correlation between the harmonics and the formant frequencies of a spoken word. Imagine you are saying a vowel (such as /a/) at different pitches: the formant frequencies remain more or less constant, while the fundamental frequency f0 and the upper harmonics vary (if f0 is in the speech range). The coincidence of 2*f0 and F2 can happen, but it cannot be a general rule. In the singing voice, the belting technique (as in Bulgarian singing or the music theatre singing style) is characterized by tuning the *first* formant (F1) to the second harmonic 2*f0.
  • asked a question related to Acoustic Signal Processing
Question
1 answer
I have data where the ACC is elicited with frequency changes of 5, 10, 25, 50, and 100 Hz. Here (for example) it is possible that the threshold lies somewhere between 25 and 50 Hz. How do I find it?
Relevant answer
Answer
Hi Mohan, estimating the threshold usually follows the procedures described for behavioral (psychophysical) experiments. Read about the method of constant stimuli; that will provide you with a possible answer.
Vijay
  • asked a question related to Acoustic Signal Processing
Question
2 answers
Increasing the volume fraction of a tungsten-epoxy backing from 5% to 25% increases the density, decreases the speed of sound, and increases the overall impedance. Does increasing density correlate with a greater hardness?
Relevant answer
Answer
Thanks for the suggestion. I checked, and I am afraid there is no mention of tungsten-epoxy backing material in that book. May I add that this material is normally used as a backing material in ultrasonic transducers and hydrophones.
  • asked a question related to Acoustic Signal Processing
Question
15 answers
From papers, I know that the frequency range of AE in machining is on average 100 kHz-300 kHz, although different materials have different frequency ranges. But I still can't be sure what the useful frequency range is, which determines my denoising processing. The pictures below show my results.
I want to know whether the 25 kHz and 50 kHz components are useful machining signals, and whether the final denoising is right.
Relevant answer
Answer
It seems that the 25 kHz component comes from the machine, with its first harmonic at 50 kHz. Just change the machine's operating point [(rotational) speed, force, ...] to check whether the peaks move in your FFT.
Also, it's not clear what you mean by denoising: just filtering your sound waves, or active noise control/reduction?
  • asked a question related to Acoustic Signal Processing
Question
12 answers
Odeon or EASE?
Relevant answer
Answer
The question you asked raises the next problem: what does acoustic comfort mean? Do you mean a noise-free environment, the absence of any acoustic signals, or a concert hall with good acoustic properties? If the last one, EASE could be a good choice.
The second question: do you want to evaluate acoustic comfort for existing buildings, or designed ones?
If you mean an existing building, measurements could be a good starting point.
These are just a few quick thoughts on the question you asked.
  • asked a question related to Acoustic Signal Processing
Question
4 answers
A flow past the mouth of a deep cavity can result in the excitation of high-amplitude acoustic pulsations. Such pulsations are often encountered in gas-transport systems, heat exchangers, and other industrial processes involving the transport of a fluid through a pipeline.
I really don't know how the noise occurs.
Someone said: "swirls induced by separation interact with each other and generate noise."
When a rotating flow contacts another rotating flow, noise occurs? How?
Relevant answer
Answer
Cavity noise is commonly found on aircraft and in air-conditioning ducts. It tends to be characterized by three main phenomena (separation from the leading edge generates a shear flow that ends up creating (1) and (2)):
1) A feedback loop inside the cavity due to recirculation.
2) An edge tone like phenomenon at the trailing edge.
3) Feedback pressure waves.
Such tones are now called Rossiter modes.
  • asked a question related to Acoustic Signal Processing
Question
7 answers
I want to know if there is something wrong with the solver.
Relevant answer
Answer
I have solved this problem; I had made a simple mistake. When I defined the PMLs, the center coordinate was not in accordance with the geometry. Thank you for your attention!
  • asked a question related to Acoustic Signal Processing
Question
4 answers
I have a measured data signal in which a noise component of roughly 107 MHz appears. How can I get rid of it? Could you explain how to implement this in MATLAB? Thank you very much.
Relevant answer
Answer
Hey Carlos,
just adding to the previous two answers: you can use SPTOOL if you have the Signal Processing Toolbox. The tool lets you view your signal, design filters, and try them out.
If your signal has a temporal alignment, use FILTFILT instead of FILTER to avoid phase shifts (i.e. zero-phase filtering).
Greetings, David
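A minimal MATLAB sketch along those lines; the sampling rate and band edges are assumptions (they only make sense if fs is well above twice 107 MHz):
% Zero-phase band-stop around the 107 MHz noise line
fs = 1e9;                                          % assumed sampling rate
[b, a] = butter(4, [100e6, 114e6]/(fs/2), 'stop'); % band-stop bracketing 107 MHz
y = filtfilt(b, a, x);                             % FILTFILT, per the advice above
% If the wanted signal sits far below 107 MHz, a low-pass is simpler:
[bl, al] = butter(6, 50e6/(fs/2), 'low');          % assumed 50 MHz cut-off
y2 = filtfilt(bl, al, x);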
  • asked a question related to Acoustic Signal Processing
Question
6 answers
Which parameters of a ship influence the pressure wave it generates that travels downwards and sideways?
Relevant answer
Answer
The main ship parameters affecting wave propagation are the length, beam, draft, displacement and shape of the hull.
There are some secondary parameters too, like the smoothness of the surface: e.g. if there are sharp corners or other slope discontinuities, that can cause waves to be produced.
The roughness of the surface can increase turbulence close to the ship and might act to damp out very high frequency waves, especially immediately behind the hull.
I think you would get some excellent insights into the problem by looking at how wave resistance of model hulls is estimated from towing tank measurements. There are a number of methods (e.g. transverse cuts, longitudinal cuts, and the XY-method of Lawrence Ward) that you could look at.
About 20 years ago, I released a (free) version of the program "Michlet" which was a thin-ship, linear code that allowed the user to input a wave field behind the ship. The program used (primarily) memetic algorithms to search for the hull that produced that wave field.
Although there were severe restrictions in that toy problem, it was able to get reasonable estimates of the location and the length, beam, draft and displacement of the hull, as well as some indication of the shape of the hull.
Of course, the problem is ill-posed, and can be made even worse by adding real-world effects such as, among many others, ambient waves and viscous damping effects, but it can give some insights into the way various parameters act, and interact, in producing the far-field wave pattern.
  • asked a question related to Acoustic Signal Processing
Question
4 answers
Does anyone know the transmission loss of a metallic pipeline for an acoustic signal?
Relevant answer
Answer
  • asked a question related to Acoustic Signal Processing
Question
4 answers
I would also be most grateful if anyone who works within the field of infrasound would contact me to discuss an exciting collaboration.
Relevant answer
Answer
What is your band requirement?
What is the medium (air, water, ...)?
You can take a look at Bruel & Kjaer products; they are expensive, but many view them as the best transducers (at least in my field, underwater acoustics).
For the acquisition device, there is little chance that a sound card (even a high-grade one) can do the job, because they won't pass DC to 20 Hz, 50 Hz, or 100 Hz. The lowest I've found are from the RME brand; they advertise 5 Hz at -3 dB, and I've measured mine, which cuts off at 1 or 2 Hz.
That said, another drawback of acquiring with a sound card is if you need to synchronize the start of the recording session (no trigger input).
  • asked a question related to Acoustic Signal Processing
Question
3 answers
The source is fixed and waveforms are recorded at many stations. I want to determine the delay time between two stations. I know some ways to measure similarity, like cross-correlation, semblance, dynamic time warping, etc. My biggest problem is that the data have a strong noise background (car noise, walking noise, etc.). I don't know what to do or how to do it. If two stations are close, I can use cross-correlation to obtain the delay time, but the ray paths are not the same and there are site effects, so using CC may be risky. Can anyone help me?
Relevant answer
Answer
I'd suggest experimenting with cross-correlation techniques and trying different signals. Try either constant-frequency or modulated signals of different durations, e.g. short and long sweeps. Of course, you have to manage the cross-correlation length accordingly.
Gianni
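With a strong colored noise background, a PHAT-weighted cross-correlation (GCC-PHAT) is often more robust than plain cross-correlation for delay estimation. A minimal MATLAB sketch, assuming x1 and x2 are equal-length column-vector records from the two stations at sampling rate fs:
% GCC-PHAT delay estimate between two station records
N  = 2*length(x1);                               % zero-pad to avoid wrap-around
X1 = fft(x1, N); X2 = fft(x2, N);
G  = X2 .* conj(X1);                             % cross-spectrum
r  = fftshift(real(ifft(G ./ (abs(G) + eps))));  % PHAT: keep phase, drop magnitude
lags = (-N/2 : N/2-1).';
[~, i] = max(r);
delay_s = lags(i)/fs    % positive: station 2 receives the event later than station 1
Averaging the weighted cross-spectrum over many repeated events before the inverse FFT further suppresses the incoherent noise.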
  • asked a question related to Acoustic Signal Processing
Question
4 answers
I want to model the Lloyd's mirror effect for a calculation of the noise radiated from a ship into the water.
Relevant answer
Answer
It doesn't change the figure; it only has an effect on the far-field microphone. I asked Ansys support and they told me I should use an "impedance boundary" on the surface, with the impedance of air. But I didn't try it, because it will not affect the far field, only the acoustic body (the figure) you see.
Conrad
  • asked a question related to Acoustic Signal Processing
Question
4 answers
Hello, I need to measure the cavitation in a water tank (caused by ultrasound) using a hydrophone; however, I am stuck on interpreting the data measured by the hydrophone. Is there any journal paper or online help on interpreting acoustic cavitation measured with a hydrophone? Thanks
Relevant answer
Hi John,
With the hydrophone, you can measure the pressure at a single point, so you are measuring the pressure change at one point of the pressure field imposed by the sonotrode.
If you are working with just one bubble (single-bubble cavitation), you would probably "see" the change in the shape of your hydrophone's signal introduced by the presence of the cavitation bubble (and if you apply a high-pass filter to the hydrophone signal, you would probably see the presence of a cavitation bubble even more easily). However, if you are in a multi-bubble regime, you will most probably see all the bubble interactions plus the imposed acoustic field, which is more difficult to work with; nevertheless, you could build another high-pass filter in order to see the cavitation in your chamber.
Another home-made way to measure the number of nodes in your chamber is to use a thin metal strip, where you can see the damage that the cavitation bubbles make when they collapse.
Here is a good bibliography on the differences between the two regimes.
  • asked a question related to Acoustic Signal Processing
Question
7 answers
It would be interesting to gain experience from persons applying a hydrophone to the measurement of sound in air. Initial experiments seem to indicate that a hydrophone performs differently in air compared to water, probably due to the greater impedance mismatch between the transmission medium and the sensor.
Relevant answer
Answer
Ole-Herman,
You still haven't told us why you want to use a hydrophone in air.
I agree with everyone on the impedance mismatch. Hydrophones just weren't designed to work in air.
If you are looking for a weatherproof or immersible microphone, there are a number available. Just search the web on "waterproof microphone". Some are fully immersible, although they probably don't work very well as hydrophones, just as hydrophones don't work very well in air.
A weatherproof mic in air will give you a much better signal-to-noise ratio and a much more predictable amplitude-frequency response and directional characteristic than a hydrophone in air.
If you want a transducer that will work in both media, I suggest you use both a hydrophone and a waterproof mic and mix them after the preamps. If you set the gains correctly, the impedance difference between the two media should effectively switch between the two transducers automatically.
  • asked a question related to Acoustic Signal Processing
Question
4 answers
Dear all,
I am new to the speaker recognition field. I have collected speech data from 100 people; each person has 3 speech samples. Now I want to extract features from my data in order to build a model for speaker recognition, for example MFCCs and formants. Can I apply these acoustic techniques to extract features from the speech signal directly?
I will appreciate any help.
Relevant answer
Answer
As suggested by Prasad Kantipudi, start reading the literature, which is extremely rich and has plenty of tutorials. This topic has been investigated for decades and there are thousands of works. Today's state of the art is i-vectors.
Note that you can find several free tools which implement these state-of-the-art solutions.
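To answer the direct question: yes, MFCCs can be computed straight from the waveform. A minimal MATLAB sketch (requires the Audio Toolbox mfcc function; the file name is hypothetical and a mono recording is assumed):
% Frame-level MFCC features from one recording
[x, fs] = audioread('speaker01_sample1.wav');
coeffs = mfcc(x, fs);        % one row of cepstral coefficients per frame
feat = mean(coeffs, 1);      % crude utterance-level summary for a first model
In practice the frame-level coefficients, not their mean, are what feed a GMM-UBM or i-vector backend.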
  • asked a question related to Acoustic Signal Processing
Question
2 answers
Reverse integration of the impulse response is usually used. Is there another method?
Relevant answer
Answer
During the first half of the 20th century, we measured RT by setting a stimulus such as octave-band-filtered noise to a certain level, then cutting off the stimulus and clicking a stopwatch at the same time. We then observed the decay on a sound level meter using the FAST setting and a D'Arsonval movement. When the decay reached 60 dB below the original steady-state sound, we clicked the stopwatch again. The stopwatch reading was the RT. Sabine used human-blown organ pipes as his sound source. The precision of these methods is limited by the human link in starting and stopping the stopwatch. The D'Arsonval meter smoothed the quasi-random amplitudes of the individual reflections making up the reverberant field, thus acting as a sort of mechanical Schroeder averager. This process can be improved by using a graphic level recorder or chart recorder to trace the sound level during the decay, but then, as Manuel points out, you have to do some sort of averaging. The pen speed setting on the recorder can do the averaging for you. I have measurements of a church sanctuary made using this method that gave RT results quite close to those obtained using modern computer impulse-response/Schroeder methods.
Perhaps if I knew your application, and the reason for wanting to avoid Schroeder integration, I could be of more help.
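For comparison with the interrupted-noise method above, the Schroeder reverse integration the question refers to is nearly a one-liner once an impulse response h at sampling rate fs has been measured; a minimal sketch:
% Schroeder reverse-integrated energy decay curve from an impulse response h
edc = flipud(cumsum(flipud(h(:).^2)));   % integrate the squared IR backwards
edc_db = 10*log10(edc/edc(1));           % normalize to 0 dB at t = 0
t = (0:numel(h)-1).'/fs;
plot(t, edc_db)   % fit, e.g., the -5 to -25 dB span and multiply by 3 for a T20-based RT60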
  • asked a question related to Acoustic Signal Processing
Question
7 answers
How does one determine the maximum speed (ensuring no collisions) of an omnidirectional robot with a ring of 'n' sequentially firing ultrasonic sonar sensors, given the frequency 'f' of each sensor and the maximum acceleration 'a'?
The robot is in a world filled with sonar-detectable fixed (non-moving) obstacles that can only be detected at 'x' meters and closer.
Is the maximum velocity the velocity that can be attained by the robot within the 'ring cycle time'?
The way I approached it,
Consider: f = 70 kHz, a = 0.5 m/s², x = 5 m, n = 8.
Now considering the obstacle is at the furthest detectable distance, 5m
Considering the speed of sound as 300 m/s.
Time taken for a sensor to receive the echo = (2 × 5 m) / (300 m/s) = 1/30 seconds.
Now, as there are 8 sensors fired sequentially, the total time taken = 8/30 seconds.
Therefore, the ring cycle time = 8/30 seconds.
Also, the overall update frequency of one sensor = 30/8 Hz.
Now, is the maximum velocity the velocity attained by the robot for t = 8/30 seconds?
Relevant answer
Answer
(2 × 5 m)/(300 m/s) = 33.3 ms for a sound wave travelling 5 m towards an obstacle and back to the transducer (pulse-echo mode).
A complete measurement cycle over 8 transducers for a maximum distance of 5 m needs 267 ms.
[So the pulse repetition rate for each transducer will be 3.75 Hz.]
Your robot is able to detect changes in obstacle distances every 8/30 s. Within this time, at the given acceleration of a = 0.5 m/s², a maximum velocity of 0.133 m/s (0.48 km/h) will be reached.
But if your robot is able to travel for, say, 10 s at constant acceleration without collision, the speed at the end will be 10 s × 0.5 m/s² = 5 m/s = 18 km/h. Then, during each ring cycle time of 267 ms, a distance of 1.33 m will be covered.
At the least, you have to give a better definition of your maximum speed.
If it were acceptable to avoid collisions within the range of one detection (5 m), then v = 5 m / 0.267 s = 18.75 m/s, i.e. a speed of 67.5 km/h. Perhaps too fast for collision avoidance.
By the way: bandwidth and rise time are only interesting if you want to calculate time-of-flight. In this example the speed of the robot seems to be the focus of interest.
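A quick MATLAB check of the ring-cycle arithmetic above, using the values from the question:
% Ring cycle time and speed gained per cycle
c = 300; x = 5; n = 8; a = 0.5;   % sound speed (m/s), range (m), sensors, accel (m/s^2)
t1 = 2*x/c;                       % one ping: 33.3 ms
Tring = n*t1;                     % full ring cycle: 266.7 ms
vmax = a*Tring                    % speed gained per ring cycle: 0.133 m/s
dist5 = 5*Tring                   % distance covered per cycle at 5 m/s: 1.33 m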
  • asked a question related to Acoustic Signal Processing
Question
2 answers
I have changed the frequency slightly during my experiments; how should this affect the intensity of sonication? Many publications say it should remain the same, while some report that it should increase simultaneously.
Relevant answer
Answer
Hi
PZT transducers have one or more resonance frequencies. If you operate at a resonance frequency, the performance of the transducer is maximal (at least in the frequency domain), so the intensity is also improved. However, there are other ways to improve your transducer's performance, such as tuning the transducer-medium impedance match and adjusting the frequency slightly during your experiment so as to stay on the resonance frequency (yes, the resonance frequency changes a little over time). If you want to know the resonance frequency of your transducer, you should use an impedance analyzer.
  • asked a question related to Acoustic Signal Processing
Question
5 answers
I applied a spectral subtraction technique to an AE signal by subtracting the recorded noise spectrum from the acquired AE spectrum (NB: the AE data contain both AE and spindle-noise influence). The IFFT of the residual back into the time domain shows higher amplitudes than the time-domain amplitude of the signal + noise. Why is that so? Does it mean that lower frequency content results in higher amplitudes in the time domain, and is there any explanation to support this inverse proportionality?
Relevant answer
Answer
Thanks. The answers were helpful
  • asked a question related to Acoustic Signal Processing
Question
4 answers
Hi. I'm working on acoustic detection and acoustic image processing. In acoustic detection, the image (C-scan) will be blurred if the scanning step is smaller than the probe resolution. So, how can this be deblurred?
Relevant answer
Answer
Yichun,
You will need a signal-acquisition deconvolution algorithm with parameters determined by the acquisition system you are using.
Fundamentally, you can think of the blurring process as the convolution of the impulse response of the acquisition system with a "perfect" image. You then deconvolve to remove the blurring.
You also need a spatial sampling rate adequate to capture the Fourier-transform spectral components of the image. If you do not meet the Nyquist sampling criterion, your data will have "spectral folding", and this contamination of the data cannot be removed.
So two things are required:
1. adequate spatial sampling
2. convolution model for the signal acquisition system.
Good Luck
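A minimal MATLAB sketch of requirement 2, as a Wiener-style frequency-domain deconvolution; img is the blurred C-scan, psf a measured or modeled point-spread function of the same size with its peak at the center, and the regularization constant k is a tuning assumption:
% Wiener-style deconvolution of a blurred C-scan
psf_c = circshift(psf, 1 - ceil(size(psf)/2)); % move psf peak to (1,1) for the FFT convention
H = fft2(psf_c);                               % system transfer function
Y = fft2(img);                                 % blurred image spectrum
k = 0.01;                                      % noise-to-signal regularization guess
X = Y .* conj(H) ./ (abs(H).^2 + k);           % regularized inverse filter
img_deblur = real(ifft2(X));
Setting k too small amplifies noise at frequencies where H is weak; too large leaves the image blurred.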
  • asked a question related to Acoustic Signal Processing
Question
3 answers
I have seen research papers that propose calculating guided-wave transmission/reflection coefficients by dividing the amplitude at the center frequency of the received signal spectrum by that of the excitation spectrum. My question is: is any windowing applied before such a division? In other words, do the scattered or reflected wave packets need to be isolated first? If so, what kinds of windows are commonly used?
Thanks
Relevant answer
Answer
@Dante, Thanks for the information and I will read the paper.
@Valerio, thanks for your input. I am aware of the properties of different windows, but I still want to know the one commonly used in guided-wave transmission coefficient calculations, as many papers simply do not mention it.
  • asked a question related to Acoustic Signal Processing
Question
17 answers
I am trying to calculate the Breathiness Index, suggested by Fukazawa et al. (1988), as a measure of breathy voice. The paper indicates a range of 8.3 to 75.7 for values of BRI, but my calculations yield values on the order of 10^15.
I refer to the definition of BRI as the ratio between the energy of the second derivative of a signal and the energy of the non-derived signal.
I performed the analysis in Praat. The original sound was converted to a matrix, then I applied a formula ((self [col+1] - self [col]) / dx) twice, to obtain the second derivative, and cast the matrix to a sound. Energy was calculated using the Get energy command (which calculates the integral of the squared signal between two time points).
Any idea what I am missing here?
Alternatively, can anyone suggest another measure for spectral tilt that does not require an arbitrary cut-off frequency between low and high frequencies?
Relevant answer
Answer
Dear Carlos Ariel Ferrer-Riesgo,
Thank you for the ideas.
I tried resampling the sound at 8 kHz and calculating the integrals in the spectral domain. Though I received lower values than before, they are still orders of magnitude higher than those reported in the Fukazawa paper (they actually sampled at 20 kHz).
I still don't know what's going on, and, at this point, I don't think I'll use that measure in my study.
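One possible explanation, offered as an assumption rather than something the Fukazawa paper states: dividing each difference by dx = 1/fs scales the second derivative by fs^2 and its energy by roughly fs^4 (about 3.8e18 at 44.1 kHz), which is one plausible source of the enormous scale, while a plain double difference keeps the ratio small. A minimal MATLAB sketch of that unscaled version, with white noise standing in for a voice sample:
% Minimal sketch: BRI as the energy ratio of the plain double-differenced
% signal to the original (no 1/dx^2 scaling).
fs = 8000;
x = randn(1, fs);               % white-noise stand-in for a voice sample
d2 = diff(x, 2);                % plain double difference
BRI = sum(d2.^2) / sum(x.^2)    % dimensionless; about 6 for white noise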
  • asked a question related to Acoustic Signal Processing
Question
5 answers
I am trying to find something very technical, with lots of examples, in order to understand how an acoustic signal reacts to particular settings (frequency, pulse, etc.).
Relevant answer
Answer
Sure, good luck.
  • asked a question related to Acoustic Signal Processing
Question
6 answers
I use stereo microphones and could not find good open-source SSL code via Google. I would like an implementation of time difference of arrival (TDOA), interaural phase difference, or another sophisticated method.
Relevant answer
Answer
Limited to stereo microphones, you will have limited localization. The source for the TDOA algorithm is implemented in Java in the following code.
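Since the referenced Java source is not shown here, a minimal MATLAB sketch of the GCC-PHAT variant of TDOA may help while searching; the spacing, sampling rate, and synthetic delay are all assumptions:
% Minimal sketch: TDOA via GCC-PHAT for two microphones, then a
% far-field bearing from the delay. Signals are synthetic stand-ins.
fs = 16e3; d = 0.1; c = 343; N = 2048;         % assumed setup
s = randn(N, 1);                               % stand-in source signal
delay = 3;                                     % true delay in samples (assumed)
x1 = s;
x2 = [zeros(delay, 1); s(1:end-delay)];        % mic 2 hears the source later
X1 = fft(x1); X2 = fft(x2);
G = X2 .* conj(X1);                            % cross-spectrum
r = real(ifft(G ./ (abs(G) + eps)));           % PHAT weighting
[~, i] = max(r); lag = i - 1;
if lag > N/2, lag = lag - N; end               % wrap to a signed lag
tau = lag / fs;                                % delay of mic 2 vs mic 1
theta = asind(max(min(c*tau/d, 1), -1))        % bearing, degrees (far field)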
  • asked a question related to Acoustic Signal Processing
Question
5 answers
In direction-of-arrival estimation, the MUSIC algorithm differs from the EV algorithm in how the eigenvalues are weighted, so what are the advantages and disadvantages of each over the other? I would also like MATLAB code for DOA estimation using the two methods mentioned above, MUSIC and EV.
Relevant answer
Answer
Dear Khalaf,
The MUSIC method seeks the angles θ minimising the following null-spectrum:
P_MUSIC(θ) = |u_1^H a(θ)|^2 + |u_2^H a(θ)|^2 + ... + |u_{N-q}^H a(θ)|^2,
where u_1, u_2, ..., u_{N-q} are the eigenvectors corresponding to the smallest N-q eigenvalues λ_1 ≤ λ_2 ≤ ... ≤ λ_{N-q} of the size-N data covariance matrix, and q is the number of sources.
EV is a weighted version of MUSIC: each term |u_j^H a(θ)|^2 above is weighted as (1/λ_j) |u_j^H a(θ)|^2.
Minimising the weighted sum is more reliable because the smallest eigenvalues count more than the larger ones; the eigenvectors belonging to small (noise) eigenvalues are more trustworthy indicators of the noise subspace than those belonging to larger ones.
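Since the question also asks for MATLAB code, here is a minimal sketch of both pseudospectra (the reciprocals of the null-spectra above) for a half-wavelength ULA; the sensor count, source angles, snapshot count, and noise level are all assumptions:
% Minimal sketch: MUSIC and EV pseudospectra for a half-wavelength ULA.
N = 8; q = 2; T = 200;                           % sensors, sources, snapshots
doa = [-20 35] * pi/180;                         % assumed true DOAs
A = exp(1j*pi*(0:N-1)' * sin(doa));              % steering matrix, d = lambda/2
S = (randn(q, T) + 1j*randn(q, T)) / sqrt(2);    % source signals
X = A*S + 0.1*(randn(N, T) + 1j*randn(N, T));    % noisy snapshots
R = (X*X') / T; R = (R + R')/2;                  % sample covariance, Hermitian
[U, L] = eig(R, 'vector');
[L, idx] = sort(L, 'ascend'); U = U(:, idx);
Un = U(:, 1:N-q); Ln = L(1:N-q);                 % noise subspace and eigenvalues
theta = -90:0.5:90;
Pm = zeros(size(theta)); Pe = Pm;
for k = 1:numel(theta)
    a = exp(1j*pi*(0:N-1)' * sin(theta(k)*pi/180));
    Pm(k) = 1 / real(a' * (Un*Un') * a);                % MUSIC
    Pe(k) = 1 / real(a' * (Un*diag(1./Ln)*Un') * a);    % EV: 1/lambda weights
end
plot(theta, 10*log10(Pm/max(Pm)), theta, 10*log10(Pe/max(Pe)));
legend('MUSIC', 'EV'); xlabel('\theta (degrees)');
Peaks of Pm and Pe over θ give the DOA estimates; the 1/λ_j weighting mainly changes the relative sharpness of the peaks.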
  • asked a question related to Acoustic Signal Processing
Question
1 answer
When I try to output audio at 44.1 kHz, there is some delay caused by the buffer processing and the buffer size (as an array), and in the end the output sounds distorted. How can I prevent that delay?
Thanks
Best regards
Relevant answer
Answer
The first delay you have is due to the buffer array size: in your case it is (1/44100) × buffer size seconds. Then, if you are polling, are you sure you are polling fast enough to capture at 44100 Hz, and is your processing fast enough? You could instead use interrupts to guarantee 44100 Hz sampling, while your processing budget depends on your buffer size. Also measure your timings; you can do this with the TI tools or with an oscilloscope. Check your processing time and your polling times; this should give you a good idea of what is happening.
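As a quick worked example of that first term (the 1024-sample buffer size is an assumption):
% Output latency contributed by one buffer at 44.1 kHz.
fs = 44100; bufSize = 1024;         % assumed buffer size
latency_ms = bufSize / fs * 1e3     % about 23.2 ms per buffer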
  • asked a question related to Acoustic Signal Processing
Question
3 answers
As the title says: in an acoustic analysis with my BEM code, the results I obtain are the complex conjugates of the reference solutions. For example, the reference solution is 1+2*i, while my result is 1-2*i. I have not found the reason; can anyone explain it?
The problem I am solving is 2D, in the frequency domain. The normal direction is consistent with my BIE formulation. The reference solutions come from the software of Prof. Yijun Liu.
Relevant answer
Answer
In addition, please check the normal direction defined in your model and make sure it is consistent with your BIE formulation. Also note that an exactly conjugated result is the classic signature of the opposite time-harmonic convention: a code assuming exp(-iwt) and one assuming exp(+iwt) yield complex-conjugate solutions (in 2D this shows up as the Hankel function of the first versus the second kind in the Green's function), so compare the conventions used by the two codes.
  • asked a question related to Acoustic Signal Processing
Question
3 answers
At a certain transducer power level, the maximum intensity as well as the average intensity in a volume of interest drop rapidly (instead of linearly, as I would have expected). I wondered whether this is due to some kind of rescaling in the 3D-US software, but couldn't find any information on that.
The image below shows the same effect observed on a Philips SONOS 7500 3D-US system.
I am working with the following setup: an Ultrasonix SonixTablet running 'Porta SDK 6.07', with a 4DL14-5/38 linear 4D transducer. I have checked the manual for answers. I am imaging metallic surgery tools in soft tissue, such as ex vivo pig hearts.
I'd be happy about any information (paper, book, ...) related to this topic.
Thank you for your help
Relevant answer
Answer
I googled the probe you use and it seems to be suitable for the job you are working on. Did you use a good transmitting medium, i.e., ultrasound gel, and plenty of it? You may try to put the ex vivo pig hearts in a plastic container filled with ultrasound gel; I have tried plain water for some other experiments, but ultrasound gel may work even better.
You can check my experimental model (A model for basic ultrasound set-up and training for 3D/4D ultrasound scanning,
K E Karaşahin · M Ercan · I Alanbay · U Keskin · M Dede · M C Yenen · I Başer,
Ultrasound in Obstetrics and Gynecology 03/2011; 37(3):371-2.)
I believe it will work for you. Submerge the pig heart about 4-5 cm away from the tip of the probe and make sure there is plenty of gel between the heart and the probe. Try to avoid air bubbles.
You can also try to fill the plastic cup with a jello solution and observe after it solidifies.
4D imaging is real-time, so you have to be patient; do not move the probe around quickly. 3D is easier but still requires a stationary probe. Get a clear 2D image first, then use 3D/4D.
Use a small volume of interest for better results (you get a higher frequency and better resolution).
Good luck.
  • asked a question related to Acoustic Signal Processing
Question
2 answers
I want to know the criteria for designing a PZT-based AE sensor for a particular frequency band, such as the size and shape of the PZT pellet, etc.
Relevant answer
Answer
APC has a nice online calculator for estimating the resonance frequency from the material type and dimensions.
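The rule of thumb behind such calculators, for the thickness mode, is f_r = N_t / t, where N_t is the material's frequency constant. A tiny MATLAB sketch; the N_t value below is a generic placeholder on the order of common PZT compositions, so take the real one from the datasheet:
% Thickness-mode resonance from the frequency constant N_t (assumed value).
N_t = 2000;              % frequency constant, kHz*mm (placeholder)
t   = 1.0;               % pellet thickness, mm
f_r = N_t / t;           % resonance frequency, kHz
fprintf('Thickness-mode resonance ~ %.0f kHz\n', f_r);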
  • asked a question related to Acoustic Signal Processing
Question
6 answers
I'm trying to differentiate between vowels based on formant frequencies. I have seen lists of formant-frequency pairs for different vowels. For any particular vowel, are the formant frequencies the same across different speakers (irrespective of gender and age)?
Thanks in advance
Relevant answer
Answer
The short answer is no. Formant frequencies are variable and context-dependent even within a speaker. One well-known reference is Hillenbrand et al., JASA 1995. The long answer is that with a little more effort it is possible to do pretty well, provided that you identify the vowel core beforehand. See for example http://www.ling.upenn.edu/courses/cogs501/Hillenbrand.html for a modern machine-learning approach to the Hillenbrand data.
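To make the "identify the core, then classify" idea concrete, a minimal MATLAB sketch of nearest-centroid classification on (F1, F2); the centroids are rough illustrative values in the vicinity of classic male-speaker averages, not the Hillenbrand measurements:
% Minimal sketch: nearest-centroid vowel guess from (F1, F2) in Hz.
cent = [270 2290;    % /i/   (illustrative centroid)
        660 1720;    % /ae/
        730 1090;    % /a/
        300  870];   % /u/
labels = {'i', 'ae', 'a', 'u'};
f = [310 2200];                            % measured (F1, F2), assumed
[~, k] = min(sum((cent - f).^2, 2));       % nearest centroid
fprintf('Closest vowel: /%s/\n', labels{k});
In practice you would normalize formants per speaker first (e.g., z-scoring, as in Lobanov normalization), which is essentially what the machine-learning approaches do to handle gender and age differences.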
  • asked a question related to Acoustic Signal Processing
Question
4 answers
I am looking for research on training Boltzmann machines, deep belief nets, or other generative models on audio samples. Ultimately I would like to train these on specific sounds and then have the network generate new sound samples using Gibbs sampling. The only research I can find is on training on and generating music scores. The closest I can find is for speech recognition, but it is very specific to speech and combined with HMMs.
Relevant answer
Answer
Hi,
Yes, I have been looking at RNN-RBMs and even reviewed Nicolas Boulanger-Lewandowski's thesis, but it's all about generating *scores*, which is very different from generating actual sound.
I will look into LSTM and GRU. Thanks.
  • asked a question related to Acoustic Signal Processing
Question
4 answers
I have implemented an MVDR beamformer for speech signal processing, assuming unit gain in the desired direction (delay vector). But when I check it on speech files, the gain appears to be many-fold during speech regions; because of this, the speech gets distorted and saturated. I am using a 2-mic linear array with 6 cm separation for capturing the audio files.
Since the MVDR formulation assumes unit gain in the desired direction, do I need to multiply the calculated weights by some small constant (fixed/adaptive), as below, in order to control the gain:
w = 0.05 .* (inv(noise_cor) * c_) / (c_' * inv(noise_cor) * c_);
or is there some implementation mistake?
Relevant answer
Answer
Hi Arpit,
Assuming that you are using your 2 microphones as a ULA (uniform linear array), the 6 cm (0.06 m) separation corresponds to half-wavelength spacing at a wavelength of 0.12 m. Assuming the speed of sound is 343 m/s, a wavelength of 0.12 m corresponds to a frequency of about 2.858 kHz. So if your audio content extends above 2.858 kHz, you may want to place the microphones closer together (less than 6 cm, maybe 5 cm for up to 3 kHz) to prevent grating lobes from letting returns from other directions interfere with your intended audio signals.
Also, when using MVDR, an array of 2 elements provides only two degrees of freedom (DOF) for the constraints: (1) preventing distortion of the signal from the desired direction of arrival, and (2) suppressing interference from other directions. Thus, by adding a couple more array elements (microphones), you will obtain better suppression of undesired signals from other directions.
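On the gain question itself: with the textbook normalization w = inv(R)*c_ / (c_'*inv(R)*c_), the look-direction response w'*c_ is exactly 1 by construction, so no extra 0.05-style constant should be needed; apparent gain blow-ups usually point to steering-vector mismatch or an ill-conditioned noise covariance. A minimal MATLAB sketch with a placeholder covariance and assumed geometry (the diagonal loading is an assumed remedy, not part of the question's code):
% Minimal sketch: standard MVDR weights with a unit-gain check.
c  = 343; d = 0.06; f = 1000;                    % assumed setup
theta = 0;                                       % look direction (broadside)
c_ = exp(-1j*2*pi*f*d*(0:1)'*sin(theta)/c);      % 2-mic steering vector
R  = eye(2);                                     % placeholder noise covariance
Rl = R + 1e-3*trace(R)/2*eye(2);                 % diagonal loading
w  = (Rl\c_) / (c_' * (Rl\c_));                  % MVDR weights
disp(abs(w'*c_))                                 % unit-gain check: prints 1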
  • asked a question related to Acoustic Signal Processing
Question
11 answers
For the attached sawtooth wave, it is apparent that the 0th complex-form Fourier series coefficient is zero, c_0 = 0, because the average of the sawtooth wave is zero.
Furthermore, for any k value, the complex-form Fourier series coefficients are obtained as
c_k = j*(-1)^k / (k*pi).
My question is: shouldn't we obtain c_0 as a special case of c_k by substituting k = 0?
But if we do this, it seems that c_0 diverges to (j*infinity) instead of going to 0.
Am I missing something?
Relevant answer
Answer
See the attached file.
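For completeness: the closed-form c_k comes from integration by parts, which divides by k and is therefore only valid for k ≠ 0; c_0 is not a limit of that formula but must be evaluated directly. In LaTeX, assuming the normalization x(t) = t/π on (-π, π) with period 2π, which reproduces the quoted coefficients:
\[
c_k = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{t}{\pi}\,e^{-jkt}\,dt
    = \frac{j(-1)^k}{k\pi}\quad(k\neq 0),
\qquad
c_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{t}{\pi}\,dt = 0.
\]
So nothing is missing: the two cases come from different antiderivatives, and the k = 0 case collapses to the mean, which is zero.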
  • asked a question related to Acoustic Signal Processing
Question
6 answers
I want to know if there are estimates of the signal-to-noise ratio of speech signals affected by the acoustic noise found in different environments.
Relevant answer
Answer
Dr Ruiter,
Seeing your affiliation reminds me that in the early 1980's I used empirical formulas published by V.M.A. Peutz for predicting the performance of sound reinforcement systems in rooms. After comparing those equations to standard reverberation equations, I came to realize that Peutz was treating room reverberation as another noise term.  Modern acoustic analysis instruments can compute a variety of speech intelligibility measures, but %ALcons continues to be widely used by practicing sound contractors. 
  • asked a question related to Acoustic Signal Processing
Question
5 answers
I would like to ask whether anyone is researching DOA estimation using the eigenvector (EV) method, which was proposed after MUSIC. I am facing a problem in calculating the magnitude and an accurate angle with this method.
Relevant answer
Answer
There are many versions of what you're talking about.
For example, a root-EV approach, close to root-MUSIC.
Could you be more specific?
  • asked a question related to Acoustic Signal Processing
Question
3 answers
I have recently become interested in source separation for musical signals. The paper I am studying (see below) uses nonnegative matrix factorization (NMF) to separate musical audio recordings based on the magnitude spectrogram, which is an MxN nonnegative matrix.
"Score-informed source separation for musical audio recordings: An overview", Ewert, S., Pardo, B., Muller, M., Plumbley, M. D., IEEE Signal Processing Magazine, vol: 31, no: 3, pp:116 - 124, May 2014.  
NMF factorizes the magnitude spectrogram into an MxK template matrix W and a KxN activation matrix H, both of which are also nonnegative. Dimensions M and N correspond to the numbers of frequency bins and time frames, respectively, of the input magnitude spectrogram, but the additional dimension K, shared by both W and H, must also be supplied to the NMF. In the above paper, K is set manually depending on the number of instruments and musical pitches present in the particular piece to be separated.
In that case, can we still claim that we are performing blind source separation? Or is it better to classify it as semi-blind, or something else? What is the accepted terminology? I would appreciate some expert opinions.
Relevant answer
Answer
For score-informed source separation like the reference indicated, it is a semi-blind source separation.
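Terminology aside, the factorization itself is compact enough to sketch. A minimal MATLAB version with Lee-Seung multiplicative updates for the Euclidean cost; the dimensions and the random V are stand-ins for a real magnitude spectrogram, with K chosen manually as the question describes:
% Minimal sketch: rank-K NMF, V ~ W*H, via multiplicative updates.
M = 257; N = 400; K = 8;                     % assumed dimensions
V = abs(randn(M, N));                        % stand-in for a real |STFT|
W = rand(M, K); H = rand(K, N);              % nonnegative initialization
for it = 1:200
    H = H .* (W'*V) ./ (W'*(W*H) + eps);     % update activations
    W = W .* (V*H') ./ ((W*H)*H' + eps);     % update templates
end
fprintf('Relative error: %.3f\n', norm(V - W*H, 'fro') / norm(V, 'fro'));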
  • asked a question related to Acoustic Signal Processing
Question
9 answers
Hi, I need to process real-time signals from PZT crystals. Can I take these signals into MATLAB for further processing?
What is the maximum sampling rate I can have (the signals are sines at around 1 MHz)?
Relevant answer
Answer
Hi, I guess you need a proper data acquisition device (DAQ).
During my bachelor's work, we used an NI USB-6001 DAQ with 13 digital I/O lines and 8 analog inputs. With a proper configuration of the MATLAB Data Acquisition Toolbox, you can get close to real-time processing (which in our case was adequate). Note, though, that a 1 MHz sine requires a sampling rate above 2 MS/s by the Nyquist criterion, so check a device's maximum rate against that before choosing it.
Cheers!
  • asked a question related to Acoustic Signal Processing
Question
2 answers
I would like to attempt to use otoacoustic emission (OAE) signals for a biometric application. I searched for datasets but was unable to find any. Please help me in this regard.
Relevant answer
Answer
Hi,
PhysioNet, probably one of the best sources for physiological data, provides a database which includes OAE data:
I definitely recommend this site to everyone who wants to work with physiological data. I hope it helps you.
Best,
Johannes