Sound Analysis - Science topic

Explore the latest questions and answers in Sound Analysis, and find Sound Analysis experts.
Questions related to Sound Analysis
  • asked a question related to Sound Analysis
Question
4 answers
"In the FEM simulation of a micro-perforated plate (MPP) absorber, is it necessary to include the end correction factor typically used in theoretical models to match experimental results, or does the simulation inherently account for the physical effects that the end correction compensates for?"
Relevant answer
Answer
In general there is no need to include an end correction in numerical acoustics, as the finite element setup does not need to be corrected. The correction is needed for models that approximate the acoustic behavior, such as the Narrow Region Acoustics model, which approximates the exact thermoviscous behavior, or the Interior Perforated Plate model, which again uses an analytical expression to approximate the exact behavior.
  • asked a question related to Sound Analysis
Question
2 answers
I am researching pronunciation and correct articulation among EFL students. I have a list of "problematic sounds" that students often struggle to pronounce, and I am trying to analyse those sounds by comparing them to sounds produced by a native speaker.
I have started working with PRAAT, but I was wondering if there is a better tool out there for my purposes.
Relevant answer
Answer
SpeechRecorder, by LMU Munich. I would still recommend PRAAT, however: it's simple, and there are many YouTube tutorials on how to use it.
  • asked a question related to Sound Analysis
Question
5 answers
What do you think? What is the origin of our particular sensitivity to harmonious music?
The longer I live, the more positive (and still more optimistic) I become, and the more I believe in a natural origin of human attitudes toward beauty and goodness. I once even suggested that the understanding and recognition of music may be imprinted in our genes.
The understanding of music is not irrelevant to ethics, nor to culture as a whole. Could the general pattern of a symphony be implemented in genes? The genetic memory of this pattern can be heard in the symphony performed by crickets.
(Have you ever heard the amazing sound of cricket chirping slowed down?)
Isn't it comparable with humanity's best symphonies? Maybe we have to change our understanding of music, beauty and goodness as attributed only to human culture? When we were much, much smaller mammals, so small that our pace of life matched that of a cricket, we could hear these symphonies our whole lives, generation after generation, as if we were spending most of our lives in a philharmonic hall. It was like the music of the heavenly spheres around us all the time. So, quite likely, we may have archetypes of the symphony imprinted in our genetic memory. We can enjoy this music again: after dozens of millions of years, we have managed to return to music hidden from our ears for hundreds of thousands of years, as our best composers rediscovered it for us over the recent few centuries.
In a similar way, not only the notion of music but also the more general notions of beauty and goodness can be incorporated in the structure of our genes, as creatures that possessed empathy and preferred more regular (rather than chaotic) patterns were simply better prepared for survival. In this way nature could create higher beings able to consider things beautiful and valuable, differentiating better from worse. However, isn't it the case that the full expression of these natural features occurred only when humans, during the evolution of their culture, invented names for good and evil, as well as for beauty and art?
Isn't it the case that prehuman beings (and the pre-cultural beings we were at the beginning) could only dimly sense the difference? Only when the notions were invented and their designata were developed did humans create an understanding of beauty and goodness. And isn't it the case that only once these ideas were fully developed did they enter reality as a part of it? That is, did humans not create all the beauty and goodness of the world, since its beauty was hardly recognized as such by any earlier beings?
Or did they recognize it, but simply could not express that recognition other than by preferring beauty or goodness in the images or behaviour of their partners and surroundings? In other words: can beauty or goodness exist without beings that understand them as such?
Of course you may remain sceptical as in https://www.youtube.com/watch?v=iFFtqEyfu_o
Notice, however, the wrong assumption that the difference in how sound is received depends on the age (lifespan) differences of species, rather than on the linear dimensions of their hearing apparatus.
This question is connected with similar discussions already present on RG, among others the one archived in the attached file: (27) Do small babies understand the ethical and aesthetical categories_.pdf
as well as its no-longer-available predecessor.
Relevant answer
Answer
Your understanding of music goes back to the theory of mimesis. The problem of interpreting the phenomenon of music depends on where you draw the line between art and reality.
An important, but somewhat distracting, observation: if one were to find a genetic predisposition to music in some people and not in others, such a theory could become the basis of Nazism, racism, xenophobia in all its possible variations, and the corresponding potential for discrimination.
  • asked a question related to Sound Analysis
Question
5 answers
Due to COVID-19 restrictions, my project was postponed for one year. My sociolinguistic research focuses on face-to-face / focus-group interviews to examine identity construction and accent/sound production. Unfortunately, I don't think I'll manage to conduct face-to-face interviews anytime soon (the COVID-19 situation is apparently still developing), and postponing my project is no longer an option. So I intend to replace face-to-face interviews with online interviews. I'm looking for recent studies that used online interviews - I hope you can recommend some.
Thanks in advance!
  • asked a question related to Sound Analysis
Question
4 answers
I tried Praat, but I understand Praat isn't good with resonating instruments and multiple voices. What else can I use? Sound Analysis Pro (Matlab version) compares at most 5 seconds without freezing. What else could I try? Thank you!
Relevant answer
Answer
It produces a spectrogram for audio, with control of the analysis parameters. For data output, you can choose how many partials to consider for each sound. It also has tools for fundamental-frequency estimation, etc.
NB: for analysing the numerical data you need a small extra program, and can then view the results in a spreadsheet etc.
  • asked a question related to Sound Analysis
Question
7 answers
Please see the attachment.
1. Do I have to use a perfectly matched layer (PML) together with the port boundary condition, or not?
2. My port shape is hexagonal, so from the available options I am choosing a user-defined port. However, the computation shows an error.
Please suggest a solution.
Thanks.
Relevant answer
Answer
Thank you very much sir. René Christensen
  • asked a question related to Sound Analysis
Question
18 answers
The work of George and Shamir describes a method that uses the spectrogram as an image for classifying audio records. The method described is interesting, but the results seemed to me somewhat fitted to the chronology rather than to the spectrogram properties themselves. The spectrogram gives limited information about the audio signal, but is it enough for a classification method?
Relevant answer
Answer
A spectrogram presents not only the frequency content of the signal but also its energy. The spectrogram's vertical axis represents frequency, with the lowest frequencies at the bottom and the highest at the top, while the horizontal axis represents time, running from left to right. The colors enrich the representation as its third dimension: different colors represent different energy levels. So if you combine it with a classifier such as a CNN, which is mainly an image classifier, it might give good results.
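As a minimal sketch of turning audio into such an image (Python with scipy; the toy signal and all parameters are illustrative, not from the paper discussed):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000  # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
# Toy signal: a 440 Hz tone plus a little noise
x = np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.default_rng(0).standard_normal(t.size)

# f: frequency bins (vertical axis), tt: time frames (horizontal axis),
# Sxx: energy per (frequency, time) cell -- the "third dimension"
# that color encodes in a plotted spectrogram
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=512, noverlap=256)

# Log-scale and normalize so the 2-D array behaves like an image
img = 10 * np.log10(Sxx + 1e-12)
img = (img - img.min()) / (img.max() - img.min())
print(img.shape)  # (frequency bins, time frames), ready to feed a CNN
```

The normalized array can then be passed to any image classifier; the frequency resolution (fs / nperseg) and time resolution trade off against each other exactly as in any FFT-based analysis.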
  • asked a question related to Sound Analysis
Question
5 answers
Hi guys,
is there any option in the AVISOFT SASLab Pro software that enables you to eliminate unwanted noise from a digital recording without affecting your original sound? In my case, sounds are recorded in experimental tanks with a hydrophone connected to a digital audio recorder. The lab is full of low-frequency noise, which to some extent disrupts my sound of interest. If I high-pass filter the recording, there is still noise that is not eliminated and that overlaps with the frequency spectrum of the signal.
Any advice would be helpful.
Relevant answer
Answer
Avisoft SASLab Pro 5.2 has built-in low-pass, high-pass, notch and band-pass filters, which are critical for eliminating unwanted noise. Open the software, then choose the Edit menu. Under the Edit menu select Filter, and under Filter choose the Time Domain IIR or FIR Filter. If it's background noise, you may record the room tone and filter it out. Similarly, the Avisoft UltraSoundGate allows you to attenuate unwanted signals.
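Outside Avisoft, the same kind of IIR high-pass filtering can be prototyped in Python with scipy (a sketch with an assumed sampling rate and cutoff; as the question notes, no high-pass filter can remove noise that overlaps the signal's own band):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100          # sampling rate (assumed)
cutoff = 500.0      # high-pass cutoff in Hz, chosen just below the signal band

# 4th-order Butterworth high-pass as second-order sections (numerically robust)
sos = butter(4, cutoff, btype="highpass", fs=fs, output="sos")

t = np.arange(0, 1.0, 1 / fs)
# Toy recording: 2 kHz signal of interest + strong 50 Hz low-frequency lab noise
x = np.sin(2 * np.pi * 2000 * t) + 2.0 * np.sin(2 * np.pi * 50 * t)

# Zero-phase filtering avoids smearing the onsets/offsets of the calls
y = sosfiltfilt(sos, x)
```

For noise inside the signal band, a noise-only recording (room tone) and spectral subtraction are the usual next step, which is essentially what the Avisoft room-tone feature does.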
  • asked a question related to Sound Analysis
Question
3 answers
Company A has been in the industry for about 30 years and is known to offer quality professional installation, with material specifications: zincalume steel, thickness 0.45 mm, AZ150. Company B has been in the industry for about 15 years and offers somewhat professional installation, with material specifications: zincalume steel, thickness 0.40 mm, AZ150. Company C is a lesser-known new company that has been in the industry for 3 years, with material specifications: aluminium, thickness 0.40 mm, AZ150.
The currency is GHS.
He wants to base his decision on sound analysis.
Thank you for your support.
Relevant answer
Answer
Analysis starts from technical suitability. Of the three companies quoted, go through the specifications and engineering design, including durability. Then go for the lowest offer that meets the technical specification.
  • asked a question related to Sound Analysis
Question
5 answers
Hi, I have a 4-microphone array and have obtained all the information about azimuth and elevation. The ultimate goal is to find the distance from the array to the sound source.
Has anyone come across papers on this? I have not found anything on Google Scholar: all papers focus heavily on azimuth and elevation, but nothing on the range (z distance).
Please recommend some papers if you have read them.
Relevant answer
  • asked a question related to Sound Analysis
Question
22 answers
Acoustic simulations
Relevant answer
CATT acoustic is better for theater simulations in my experience. You may also try to use AcMus developed by a Brazilian team.
  • asked a question related to Sound Analysis
Question
5 answers
Hi all,
I'm currently working on a soundscape ecology study in which the entire acoustic community is of interest. I have been reading up about the Fast Fourier Transform (FFT), and the trade-off between the time and frequency resolution, which is determined by the choice of window length.
I have however failed to find any resources which explain which temporal/frequency resolution is required for the sounds of interest.
I understand that if only one/a few species with known vocalizations are of interest, this choice can be justified easily, but what if you're dealing with an unknown acoustic community? Studies of the acoustic community which use only the audible spectrum (with a sampling rate of 44.1 kHz) often use a frequency resolution of 172 Hz, but don't offer a justification why they chose this. And what if you're also looking at the ultrasonic part of the acoustic community - how would the required frequency resolution change to capture both sonic and ultrasonic signals?
I appreciate any insights you might have.
Relevant answer
Answer
Hi Thomas Luypaert, the window size in an FFT analysis represents a number of samples, and hence a duration. To set the size, you need to look at the fundamental frequency in your system, its intensity, and any changes it may experience over time. The window size in samples, together with your sampling rate (samples/second), gives you the window duration.
Also, from the spectraplus page: " The frequency resolution of each spectral line is equal to the Sampling Rate divided by the FFT size. For instance, if the FFT size is 1024 and the Sampling Rate is 8192, the resolution of each spectral line will be: 8192 / 1024 = 8 Hz. Larger FFT sizes provide higher spectral resolution but take longer to compute."
Hope this helps!
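The quoted arithmetic can be checked directly (a short Python check; the numbers mirror the spectraplus example):

```python
import numpy as np

fs = 8192      # sampling rate (samples/second), as in the quoted example
nfft = 1024    # FFT size (window size in samples)

resolution = fs / nfft          # Hz per spectral line
duration = nfft / fs            # window duration in seconds
print(resolution, duration)     # 8.0 Hz per line, 0.125 s window

# numpy's frequency grid shows the same spacing between spectral lines
freqs = np.fft.rfftfreq(nfft, d=1 / fs)
print(freqs[1] - freqs[0])      # 8.0
```

The trade-off is explicit here: halving the frequency resolution (4 Hz lines) doubles the window duration (0.25 s), which is why broadband community recordings force a compromise between the two.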
  • asked a question related to Sound Analysis
Question
8 answers
Hi, everyone. Is there any public sound dataset with labels, such as bird songs, children crying, traffic congestion, vehicle noise, vehicle horns, or other labels that mark sound events?
Relevant answer
Answer
Analyzing a sound's spectral makeup (fundamental frequency plus harmonics) is easy with commercial or free software (I often use SpectraPLUS, which has a 30-day trial). What isn't so easy to access is a sound's intensity or sound pressure level (SPL) unless the recording comes with a calibration file. Inexpensive sound level meters (SLMs) may help when making live measurements, but these SLMs often present sounds as A-weighted only, meaning the measurements filter out very low and high frequencies. Very low level sounds (i.e. very soft sounds) are not easy to measure, as microphone noise alone can obscure the measurement. So, depending on what attributes of a noise or sound you're investigating, publicly available sound snippets may not be sufficient. Eric C. (former Principal Engineer, Doppler Labs; former Chief Engineer, Countryman Associates).
  • asked a question related to Sound Analysis
Question
3 answers
Best regards
I am looking for the natural frequency of a cutting tool (normal and worn) but can't find it. At what frequency does the tool show the difference between normal and worn? Can I use this formula to find it? (natural frequency = 2·π·rotation speed / 60)
The rotating parts include not only the main spindle (there is also the transmission from the motor to the main spindle).
Relevant answer
Answer
Natural frequencies are almost independent of the tool rotation. What experienced milling operators do is adapt the milling parameters so as to avoid exciting natural frequencies, hence minimising tool/workpiece vibration and getting the smoothest result.
Natural frequencies depend on the piece fixed on the milling table, the position of the various X, Y and Z slides, etc., so they cannot be determined once and for all for a given milling machine.
Devices such as tuned vibration absorbers can be fixed on the milling head to fix problems that cannot be sorted out by adjusting the milling parameters away from a risk of resonance, but this requires a solid vibration engineering culture.
  • asked a question related to Sound Analysis
Question
2 answers
Suppose the average grain size, crystal structure, Young's modulus, fracture strength, speed of sound and surface roughness of both the crushed crystals (before and after crushing) and the crushing surfaces are known, as well as the load on the grinding surface and the static/dynamic friction coefficients. Is it then possible to estimate the crushing sound of the crystals?
Conversely, if the crushing sound of the crystals is analyzed, is it possible to find any mathematical relation between the variables outlined?
I am asking because crushing minerals and recording and analyzing the sound require no sophisticated instruments at all.
Please provide relevant research links.
Relevant answer
Answer
Dear Sumit,
The sound of crystals cracking between two hard surfaces depends on a couple of important metallurgical and mechanical factors, such as increasing strains, the grain distribution reaching log-normal packing, coarsening of crystal grains, misorientation angles, the temperature distribution due to friction between the two surfaces (with the grain-misalignment effect), the critical grain size for micro-cracking and thermal enhancement, cohesive crack nucleation, defects, cavities, etc.
In general, if the size of the crystal grains is close to the wavelength of the ultrasonic wave, the part will not be transparent to ultrasound; in this case there will be no back-wall echo in the ultrasound signal. It is also seen that as the size of the crystal grains increases, the intensity of the ultrasonic wave decreases.
Sound emissions can be heard at high extrusion speeds.
Atomistic modeling is highly applicable for understanding crack sounds between hard metals.
Ashish
  • asked a question related to Sound Analysis
Question
3 answers
They should be well versed in AVISOFT SASLab Pro or any sound analysis software with the capability to generate sound.
Relevant answer
Answer
Thank you. I was able to synthesize them with the software.
  • asked a question related to Sound Analysis
Question
4 answers
I have a question about the unit of amplitude on the y-axis of a linear plot of a sound wave generated in Matlab.
On a logarithmic scale it is dB, but I am confused about what it should be on the linear scale, and about the maths behind the conversion; it would be a plus if someone has any idea about that.
In this image the value of power at 404.2 Hz is -1.26 dB on the log scale and 404.2 on the linear scale. What is the maths behind this conversion, and what should the unit on the y-axis be on the linear scale?
I currently use "V", which I think is wrong.
I have also attached the section of my code that generates this plot.
--------------------------
fSep=1/(N*h);
disp(['FFT frequency separation = ' num2str(fSep) ' Hz'])
[Y,f,YdB]=SimpleLineSpectrum2(y,h,0,fny);
figure
subplot(211)
plot(f,Y,'LineWidth',1.5),grid
title(['Line Spectrum of ' name ],'fontweight','bold','fontsize',10)
ylabel('Linear power(V)')
xlabel('Frequency (Hz)')
axis([fL fR 0 max(Y)])
subplot(212)
plot(f,YdB,'LineWidth',1.5),grid
title('Line Spectrum in dB','fontweight','bold')
ylabel('power (dB)')
xlabel('frequency (Hz)')
axis([fL fR dBmin 0])
pause
--------------------------------------------------------
and this is what "SimpleLineSpectrum2" is doing
% normalize by 2*np
np=length(y); % length of the input vector
%===remove the mean
yavg=mean(y);
y=y-yavg;
%===calculate the fast Fourier transform
Y=fft(y(1:np));
%===generate the frequencies in the frequency grid
nint=fix(np/2);
i=1:nint+1;
f(1:nint+1)=(i-1)/(np*h);
%===take the absolute value and normalize
Ya=2*abs(Y(1:nint+1))/(np); % normalization
% Ya=abs(Y(1:nint+1)); % no normalization
%===generate the frequencies and magnitudes to be
% returned consonant with frLeft and frRight
fL=frLeft;
iL=fix(1+np*h*fL);
fR=frRight;
iR=fix(1+np*h*fR);
retYa=Ya(iL:iR);
retf=f(iL:iR);
Ymax=max(retYa);
YdB=20*log10(retYa/Ymax);
-------------------------------------------------
Relevant answer
Dear all,
dB is used to quantify the ratio of two values on a logarithmic scale, which conveniently represents very large or very small numbers on the same scale.
For converting the ratio of two power values to dB, we use ans(dB) = 10*log10(ratio) and ratio = 10^(ans(dB)/10).
For converting the ratio of two voltage or current values to dB, we use ans(dB) = 20*log10(ratio) and ratio = 10^(ans(dB)/20).
You can verify the multiplicative factors of 20 and 10 using P = V^2/R = I^2*R and taking R = 1.
For your information, dBm is used to quantify the value of a power (not a ratio) on a logarithmic scale; it is generally used in wireless communication and other areas.
Hope you find this answer helpful.
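A minimal sketch of these conversions in Python, showing why a power ratio of 100 and a voltage ratio of 10 are the same number of dB:

```python
import math

def power_ratio_to_db(ratio):
    """Power ratio -> dB (factor of 10)."""
    return 10 * math.log10(ratio)

def voltage_ratio_to_db(ratio):
    """Voltage (or current) ratio -> dB (factor of 20, since P = V^2/R)."""
    return 20 * math.log10(ratio)

# A power ratio of 100 and a voltage ratio of 10 are both 20 dB:
print(power_ratio_to_db(100))    # 20.0
print(voltage_ratio_to_db(10))   # 20.0

# Inverse mapping for power: 3 dB is (almost) a doubling of power
db = 3.0
print(10 ** (db / 10))           # ~1.995
```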
  • asked a question related to Sound Analysis
Question
4 answers
The sound power level of a speaker is computed from the sound power of the speaker.
The sound power from the speaker is the total sound energy emitted by the speaker per unit time.
How is this sound power related to the electric power supplied to the speaker? For example, if I supply 6 W of electric power to the speaker, will the sound power from the speaker also be 6 W?
Relevant answer
Answer
Some general comments:
Moving a mass (a collection of matter) against field effects (say gravity, possibly friction, and so on) with a directional effort is work: a forced-displacement sort of effect.
The rate of doing work is power (watts), a sort of time rate of energy expenditure. Describing power says nothing about the underlying phenomenon.
The most important things to understand, find or reconcile are:
the proxy for displacements / the field,
the proxy for efforts,
and of course the ratio of expended to recovered power, which offers clues about the nature of the losses (mostly thermal) and the proverbial efficiency.
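As a rough worked illustration of the ratio asked about (the efficiency figure below is a general assumption about typical direct-radiator loudspeakers, not a value from this thread): only a small fraction of the electrical input is radiated as sound, and the rest is dissipated as heat.

```python
# Typical direct-radiator loudspeakers convert only on the order of
# 0.5-2% of their electrical input into acoustic power (assumed figure).
electric_power_w = 6.0       # electrical input from the question
efficiency = 0.01            # 1%, an illustrative assumption

sound_power_w = electric_power_w * efficiency
heat_power_w = electric_power_w - sound_power_w
print(sound_power_w, heat_power_w)  # ~0.06 W radiated, ~5.94 W dissipated
```

So a "6 W" speaker does not emit 6 W of sound; the electrical rating and the radiated acoustic power differ by roughly two orders of magnitude.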
  • asked a question related to Sound Analysis
Question
9 answers
How do we know that the missing fundamental is inferred by our brains and is not a by-product of the interference pattern caused by mixing tones of several frequencies?
Thanks for your thoughts!
Relevant answer
Dear Tatiana Izmaylova. The missing fundamental is "missed" because it does not appear in the frequency analysis performed by a Fourier transform or another similar technique. What is present in the original signal is a number of harmonics corresponding to a fundamental frequency, fi = n*f0. And this is the "physics" of the phenomenon, i.e., the reality that is measurable.
If the fundamental frequency were present in the signal, we could measure it with the appropriate techniques. But the acoustic system is unable to generate this particular frequency because of its size or other physical characteristics.
It is when our brain interprets the received signal, in terms of pitch, that we describe the sound by a tone whose measured frequency would be lower.
To obtain a fundamental tone of f0 from a modulation of, let's say, a second harmonic 2f0, we would need a modulating frequency of f0 (the result of the amplitude modulation of two tones is the sum and the difference of the modulating frequencies). With an appropriate bandwidth in the frequency analysis, we should be able to separate the two tones. A long enough Fourier transform gives us a narrow enough analysis bandwidth.
For all the reasoning above, I think it is the interpretation that our brain makes, from its previous experience of learning about the pitch of sounds, that produces the phenomenon called the missing fundamental.
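A quick numerical check of the missing-fundamental point (Python, illustrative parameters): a signal built only from harmonics 2f0 to 5f0 has no spectral line at f0, yet its waveform still repeats with period 1/f0, which is the periodicity the auditory system latches onto.

```python
import numpy as np

fs = 8000         # sampling rate (illustrative)
f0 = 100          # the "missing" fundamental, in Hz
t = np.arange(0, 1.0, 1 / fs)

# Harmonics 2..5 of f0, but no component at f0 itself
x = sum(np.sin(2 * np.pi * n * f0 * t) for n in range(2, 6))

X = np.abs(np.fft.rfft(x))               # 1 Hz per bin for a 1 s signal
print(X[f0])       # ~0: no spectral line at f0 = 100 Hz
print(X[2 * f0])   # large: the energy sits at the harmonics

# Yet the waveform is periodic with period 1/f0 = 80 samples:
period = fs // f0
print(np.allclose(x[:fs - period], x[period:]))  # True
```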
  • asked a question related to Sound Analysis
Question
7 answers
I am currently working on anthropogenic effects on the vocalisation of howler monkeys in the urban environment.
I record the vocalisations with a TASCAM DR-07MKii and I want to extract acoustic measurement such as frequency, pitch, rates and lengths of vocalisations.
Which program is easiest to use for a beginner to extract these variables?
I currently have: Sound Analysis Pro, Praat and Audacity.
All tips and additional information are very welcome
Relevant answer
Answer
I agree that Audacity is of no use-- it's primarily an editing program, not for measurement. Praat and SAP would both probably work for the simple measurements you described. I believe Praat (which was developed for work on speech) has some tools for measuring speech-related features such as formants, which might be useful for nonhuman primate vocalizations. Raven Pro (http://ravensoundsoftware.com/) has (I'm told) an easier learning curve than Praat and includes basic time, frequency, and relative power measurements (though no formant measurements). There's also a set of freely available tutorial videos available (see "Training" on that website).
  • asked a question related to Sound Analysis
Question
4 answers
I have used RAVEN software's auto-detect filters to identify (label) the notes of a single bird species that imitates other birds. I'm working with a large data set (hundreds of thousands of notes within thousands of songs) and am attempting to label all the sounds (notes, syllables, phrases) accurately within the songs of a bird species, and found:
1) it is very time-consuming to tweak the parameters to even get it to 'work';
2) when it is 'working', it returns numerous false positive and false negative results.
I'm curious whether those interested in identifying (labeling) large data sets of animal sounds have found software that will sift through the spectrograms with accurate identification (labeling) of sounds.
Currently I'm using visual and aural (human) inspection of spectrograms to do this, which amazingly seems to be the only way to achieve accuracy in this task. It's incredibly time-consuming, but I appear to be quicker doing it 'old school' than by using automated (computer-based) methods.
Cheers!
Brandi Gartland
M.S., Doctoral candidate in Animal Behavior
University of California, Davis
Relevant answer
Answer
There is a highly instructive user manual on the site. The reference for the initial publication is:
Tchernichovski, O., Nottebohm, F., Ho, C. E., Pesaran, B., & Mitra, P. P. (2000). A procedure for an automated measurement of song similarity. Animal behaviour, 59(6), 1167-1176.
I have studied with Olga Fehér, who was herself a PhD student of the first author on the paper. I have learned a few things the hard way. First off, the initial settings of this software are designed for the analysis of zebra finch song; if you are working with a different species you may need to adjust them. Also, the detection and classification algorithms are good, but by no means perfect. Therefore I strongly advise that you check the results by eye!
Greetings from the UK!
Tarandeep
  • asked a question related to Sound Analysis
Question
5 answers
Hello, I am currently working on vocal individuality in rainbow lorikeets. The problem is that parrots are very loud and the audio signals are clipping. I use a Sennheiser ME 66 directional mic and a Tascam DR-100 MK III recorder with the limiter on. Is there any solution to this issue? I know it is possible to fix clipping in audio programs, but is it OK to use fixed signals in my analysis, or is it better to eliminate them?
Relevant answer
Answer
For most analyses, the biggest problem with clipped recordings is not the loss of reliable information about relative amplitude ("dynamics"); it's the corruption of the spectral content of the recording. Clipping introduces spurious energy into the recording at frequencies that are harmonics of the true signal. In some cases, sounds that are purely tonal (no harmonic content) will end up having strong harmonics in the clipped recording. Sounds that truly do have harmonics (like many parrot vocalizations) will be recorded with distorted relative powers in different harmonics. Most kinds of analysis that aim to classify sounds or measure their relative similarity (e.g., spectrogram cross-correlation, dynamic time warping, etc.) may be severely affected by such distortions.
What about "fixing" a clipped recording? Many digital audio editing programs (e.g., Audacity, Audition) include tools for "fixing" clipping. However, these tools are not intended to provide a perfect reconstruction of what the unclipped signal would have looked like. They're meant for rendering the harmonic distortions of brief periods of clipping in music or speech recordings inaudible to the human ear. This is very different from what would be needed for a quantitative measurement and classification analysis.
Although there may be some kinds of analysis that could tolerate clipping (e.g., analyses focused on temporal patterning of sounds that don't measure spectral content), in general I recommend against using clipped recordings for pattern analysis and classification. You're likely to get quite different results from what you would obtain using unclipped recordings of the same signals.
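The spurious-harmonics effect described above is easy to reproduce numerically (a Python sketch with illustrative parameters): hard-clipping a pure tone puts energy at odd harmonics that the original signal does not contain.

```python
import numpy as np

fs = 8000
f0 = 100                      # Hz; a 1-second signal gives 1 Hz per FFT bin
t = np.arange(0, 1.0, 1 / fs)

clean = np.sin(2 * np.pi * f0 * t)        # purely tonal, no harmonics
clipped = np.clip(clean, -0.5, 0.5)       # hard clipping at half amplitude

Xc = np.abs(np.fft.rfft(clean))
Xd = np.abs(np.fft.rfft(clipped))

# The clean tone has essentially no energy at the 3rd harmonic;
# the clipped version gains a strong spurious component there.
print(Xc[3 * f0])   # ~0
print(Xd[3 * f0])   # large
```

Any similarity or classification measure that compares spectra will see those spurious components as real signal structure, which is exactly why clipped recordings distort such analyses.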
  • asked a question related to Sound Analysis
Question
4 answers
Advice and suggestions about types of microphones or recorders (and their placement in the terrain), and about software for processing the data, are welcome.
Thanks!
Relevant answer
Answer
I've had a look at the page and it seems like a very interesting project! Thank you very much for the tip. I will get in touch with Juanjo Palacios to learn more and to enrich the study I want to develop.
Thanks, regards!
  • asked a question related to Sound Analysis
Question
3 answers
Some animals like whales, dolphins, bats, and ants can produce ultrasound or infrasound. Can we use these waves to cure diseases like cancer?
Relevant answer
I think it is possible since even a cat's purring can aid in the healing of injured tissue
  • asked a question related to Sound Analysis
Question
5 answers
I would like to establish a metric or variable to assess how 'good' a spectral profile is. A 'good' profile, for me, would be a narrow profile with a tall peak; a bad profile is the opposite: wide, with a short peak.
My project is in MATLAB, which performs the necessary simulation and produces the spectra. I would like to know of a (statistical) process I could use in MATLAB to discern a good profile from a bad one.
I realise this is an unusual question, but any help would be much appreciated!
Thank you
Relevant answer
Answer
If you want to quantify the spectral profile with a statistical metric, and your desired profile is narrow with a tall peak, you can do this with a window whose width is larger than the desired feature. Calculate the first- and second-order moments; that is sufficient to differentiate narrow and wide features.
You can do this with non-overlapping or overlapping windows.
I think this will help you.
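A minimal sketch of the moment idea (shown in Python; the two formulas translate directly to MATLAB, and the Gaussian test profiles are hypothetical stand-ins for your simulated spectra): the second central moment, the spectral spread, is small for a narrow, tall peak and large for a wide, short one.

```python
import numpy as np

def spectral_moments(freqs, power):
    """First moment (centroid) and second central moment (spread) of a spectrum."""
    p = power / power.sum()                 # normalize to a distribution
    centroid = np.sum(freqs * p)            # 1st moment: where the peak sits
    spread = np.sqrt(np.sum((freqs - centroid) ** 2 * p))  # 2nd central moment
    return centroid, spread

freqs = np.linspace(0, 1000, 2001)

# Hypothetical profiles: same center, different widths
narrow = np.exp(-0.5 * ((freqs - 500) / 5) ** 2)    # tall and narrow -> "good"
wide = np.exp(-0.5 * ((freqs - 500) / 50) ** 2)     # short and wide -> "bad"

c1, s1 = spectral_moments(freqs, narrow)
c2, s2 = spectral_moments(freqs, wide)
print(s1, s2)   # spread ~5 for the narrow profile, ~50 for the wide one
```

Ranking profiles by spread (or by peak height divided by spread) then gives a single scalar "goodness" value per spectrum.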
  • asked a question related to Sound Analysis
Question
4 answers
I am building a baby-cry detection system using a deep learning algorithm.
The hardware is a Raspberry Pi with a sound sensor as the microphone.
When the baby starts crying, the system should detect the cry.
It works well when the mic is placed near the baby, but when the mic is farther away I am not able to detect the sound.
Please suggest techniques to detect sound from a range of up to 3 meters.
Relevant answer
Answer
Dear Prashanth,
welcome.
You can use a body-area wireless system to acquire the sound and send it wirelessly to your computer, where you have the identification and supervision system. This may be considered an Internet of Things application.
The solution of using a sensitive microphone and amplifier may fail because, as the distance increases, the noise picked up by the microphone may be too high and impair sound detection. However, it may be a possible solution.
I expect the first solution is the best one.
Best wishes
  • asked a question related to Sound Analysis
Question
3 answers
WHO ARE WE
Master Acoustics International Corp. was founded in Taiwan in 2017 by our Chief Director, Mr. Jack Chen, who has a brilliant sense of both sound and frequency. Jack has used sound techniques to invent two of our innovative products: a sound-optimized resonator and a sound-optimized coating that can be painted on the surface of instruments. Both of these innovations dramatically improve the timbre of the instrument to which they are applied. The sound-optimized resonator has already been patented in China, with patent applications in process in Taiwan and Europe.
WHAT WE WANT TO DO – A draft blueprint for the project.
We conceived of this project and would propose three goals:
1. To examine our inventions to assess their effectiveness
2. To use collaboration to create new ideas/innovations/inventions
3. To formally present the project findings for everyone to see
LOOKING FOR PARTNERS
If you are interested in knowing more about Master Acoustics International and this project, or if you can recommend any institutes or researchers, please let us know. Thanks!
Relevant answer
Answer
Hi, I'm an acoustics professor in Colombia and I'm really interested in the project and, in general, in Master Acoustics International. Could you give me more information?
Best regards
  • asked a question related to Sound Analysis
Question
5 answers
I have heard that if you collapse a bubble in water with some sort of sound wave, it will produce light. Is it a special gas or just a bubble of air? And is it a special sound wave?
I wonder what the reason behind this phenomenon could be.
Relevant answer
Answer
Dear Mr.James Garry
It is just a personal interest; my major field of study is different.
  • asked a question related to Sound Analysis
Question
8 answers
I need a recommendation, thanks
Relevant answer
Answer
Muchas gracias Gian, your work is very interesting!
Update: I bought a used Olympus LS-100 after reading helpful reviews on birding pages and bioacoustics forums in general. My microphone is a Sennheiser shotgun (a picture is attached to this reply; I don't know the model). Thanks to all!
  • asked a question related to Sound Analysis
Question
2 answers
Hi, I'm looking for software to analyse bat echolocation call recordings, preferably free. It doesn't need many features; I just want to extract information about the intensity of the calls and plot the intensity as a function of time.
Thanks!
Relevant answer
Answer
Dear Mike,
Xbat is indeed a nice piece of software that is extensible and well-architected although it is no longer supported and I don't know how well it will work with modern versions of Matlab.  Ludwig's other suggestions are fine as well, and I would add Cornell's Raven although it is not free (nor is Audition).  In addition, many people use Praat which was designed for analyzing human speech and has some very nice features although it is perhaps not quite as user-friendly as some of the other offerings.    There are a number of programs designed for analyzing marine mammal calls that could also be useful to you:  Dave Mellinger's Ishmael (http://www.bioacoustics.us/ishmael.html) and Douglas Gillespie's PAMGUARD (https://www.pamguard.org).  Our group also has an extensible Matlab-based analysis program (Triton, http://cetus.ucsd.edu/technologies_Software.html).
With respect to time-frequency analysis (e.g. a spectrogram), you'll need to think about whether or not you need absolute or relative received level.  If you want absolute received level, you will need to adjust for your acquisition system's calibration curve. If you would like to have this done automatically, be sure to investigate whether or not the software package you select can support this.
Best of luck - Marie
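As a minimal sketch of the calibration adjustment described above: a relative spectrum level (e.g. in dBFS) becomes an absolute received level by adding the acquisition system's calibration offset at that frequency. The offset values below are made up for illustration; a real system would interpolate its full calibration curve.

```python
# Hypothetical calibration offsets (dB) for converting relative levels to
# absolute received levels; the values below are made up for illustration.
cal_offset_db = {1000: 120.0, 2000: 118.5, 4000: 117.0}   # freq (Hz) -> offset

def absolute_level_db(relative_db, freq_hz):
    """Convert a relative (dBFS) spectrum level to an absolute level using
    the nearest calibration point; a real system would interpolate the curve."""
    nearest = min(cal_offset_db, key=lambda f: abs(f - freq_hz))
    return relative_db + cal_offset_db[nearest]

print(absolute_level_db(-60.0, 1000))   # -> 60.0 with the assumed offset
```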
  • asked a question related to Sound Analysis
Question
3 answers
I want to do an auditory experiment in which the intensity of the sound changes between conditions, e.g., 60 dB in one condition and 35 dB in another. How could this be achieved? Is there any hardware or software to control the intensity of sound at a given dB level?
Relevant answer
Hi Milan,
What is your sound reproduction system? And when you say 60 dB, is that SPL?
If your reproduction system is headphones, you can use a binaural head to measure the sound pressure level at each ear and calibrate your system. In the case of loudspeakers, I would measure the SPL at the listener position using a sound level meter.
Best regards,
Diego
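Once the SPL produced at the ear by full-scale playback has been measured as Diego describes, intensity conditions can be set in software by scaling the signal. A sketch, assuming a hypothetical calibration of 80 dB SPL at unity gain:

```python
measured_spl_at_unity = 80.0   # dB SPL measured at the ear for gain 1.0 (hypothetical calibration)

def gain_for_target(target_spl):
    """Linear gain that shifts the calibrated playback level to target_spl."""
    return 10 ** ((target_spl - measured_spl_at_unity) / 20)

print(round(gain_for_target(60.0), 3))   # 20 dB below calibration -> gain 0.1
print(round(gain_for_target(35.0), 5))   # 45 dB below calibration
```

Multiplying the stimulus samples by this gain before playback then yields the target presentation level, provided the calibration is re-checked with a sound level meter or binaural head.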
  • asked a question related to Sound Analysis
Question
3 answers
Hi all,
I am currently writing my MSc thesis about vocal communication in wild woolly monkeys. I am using Audacity software and I would be very grateful to know how to calculate low frequency, initial frequency and final frequency of the calls of my recordings.
I've taken measurements from spectrograms generated using a 512-point fast Fourier transform with a Hanning window function. But when I click 'Plot Spectrum', all my low frequencies seem to be 86 Hz, and I don't know how to calculate this parameter correctly. The same happens with final frequency: I don't understand what I have to select to obtain it.
Regarding the initial frequency, I think I can find it under Effects > Change Pitch > Estimated Start Pitch. Is this correct?
I would be really grateful if you could help me :)
Best,
Laura
Relevant answer
Answer
Thank you Ignacio and Juan David for your answers! :)
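The 86 Hz value in the question is consistent with the spectrogram's frequency resolution rather than a property of the calls: an N-point FFT cannot resolve anything finer than fs / N. A quick check, assuming a 44.1 kHz recording:

```python
# The frequency resolution (bin width) of an N-point FFT is fs / N.
fs = 44100     # assumed sample rate, Hz
n_fft = 512    # FFT size from the question
bin_width = fs / n_fft
print(round(bin_width, 1))   # -> 86.1 Hz: nothing finer can be resolved
```

To measure low frequencies more precisely, a larger FFT size (at the cost of time resolution) is the usual remedy.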
  • asked a question related to Sound Analysis
Question
9 answers
Hi everyone,
I am writing my MSc thesis about vocal communication in woolly monkeys and I want to make a general description of their different types of calls. I want to obtain various acoustic parameters such as duration, frequency range, low frequency, high frequency, maximum amplitude, average frequency, initial frequency, and final frequency. Hence, I have to analyse my recordings using SoundRuler, but I've never used this software before. I've read the instructions but I have some questions anyway.
- I recorded in stereo, so when I load the recording into the software, it asks whether I want to analyse the left or the right channel. Can I analyse both separately and then calculate the mean of the two channels?
- Also, when I load the recording, I mark the section I want to analyse using green bars. Once this section is marked, I proceed with the analysis. Is it as easy as clicking the "manual" button? When I do, a table appears with the different parameter values, but I don't know if it is as "simple" as that.
That's all at the moment. Thank you for your answers!
Laura.
Relevant answer
Hi Laura,
I agree with Pavel regarding the channels and amplitude. I also don't use SoundRuler (sorry!) but I thought it might be useful to add that you need to be sure there is no background noise overlapping your calls of interest. If, for example, these recordings were made at a zoo, there may be visitors chatting, or in the field there could be other animals calling, etc., on your recordings. If there is, you either need to filter it out (if it does not overlap in frequency with the monkey calls) or, if that's not possible, you could manually extract the frequency measures, or simply eliminate those calls from your analyses. It might be that you can use the read-out on all your "clean" calls (i.e. no background noise, one individual calling at a time), and in that case it could be just as simple as you say!
Good luck with your interesting project!
All the best,
Esther
  • asked a question related to Sound Analysis
Question
14 answers
Acoustic monitoring to be utilized as a solution for analysing background noise
Relevant answer
Answer
B&K 2250 or 2270 
  • asked a question related to Sound Analysis
Question
5 answers
I have experimentally recorded the sound pressure levels of a horn; the SPLs have also been obtained through simulation in LMS. However, the output of LMS is a spectrum in an Excel file. I want to convert this Excel data into an audible sound for psychoacoustic characterisation. How can I do it in MATLAB or with any other available resource?
Relevant answer
Answer
Sorry if I wasn't clear enough. On one hand, what I was saying (which is pretty much what Amaya repeated at the beginning of her answer) is that using only sound pressure level data it is not possible to reconstruct the original sound signal.
On the other hand, if you extract h(t) (the impulse response) using the LMS technique, what you get is the response of the system in the frequency domain; there is no original audio per se. You would have to create a signal based on the response of the system: frequency components, relative amplitudes and, just as important, phase information, which a magnitude spectrum does not contain. The result would be very similar to the sound produced by the horn, with differences depending on the excitation used to produce the sound.
Regards
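Since phase information is missing, any resynthesis from a magnitude spectrum has to assume phases. A minimal additive-synthesis sketch along those lines (the component frequencies and amplitudes are illustrative, not actual LMS output; zero phase is assumed):

```python
import math

def synthesize(components, fs, duration):
    """Additive synthesis from (freq_hz, amplitude) pairs, zero phase assumed.
    Returns raw samples; in practice you would then write them to a WAV file."""
    n = int(fs * duration)
    return [sum(a * math.sin(2 * math.pi * f * i / fs) for f, a in components)
            for i in range(n)]

# Illustrative component set (made-up values, not actual LMS output)
tone = synthesize([(440.0, 1.0), (880.0, 0.5)], fs=8000, duration=0.01)
print(len(tone))   # -> 80 samples
```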
  • asked a question related to Sound Analysis
Question
4 answers
With an affordable price (good price-quality ratio), please. The main purpose is to record marine mammal sounds and foraging activities at night in coral reef ecosystems, but it would be great if it could also pick up breaking waves for a personal artistic project. It has to be easy to handle manually (for snorkelling). Thank you for any advice.
Relevant answer
Answer
Thank you everyone, it really helps! I will soon let you know my choice and how it sounds.
  • asked a question related to Sound Analysis
Question
3 answers
I'm looking for software that can differentiate sound frequencies with a resolution of less than 1 Hz. E.g. Audacity interprets the whole spectrum between 0 and 85 Hz as one continuous band; I'm looking for software that could treat 0-1 Hz as one band, etc. What are the hardware limitations? Can you recommend a software package?
thanks,
jan
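The limitation behind sub-hertz resolution is mainly the time-frequency trade-off rather than hardware: the FFT bin width is fs / N, so resolving bands narrower than 1 Hz needs at least one second of audio per analysis frame. A quick check, with the sample rate assumed:

```python
# FFT bin width is fs / N, i.e. 1 / window_duration_seconds.
fs = 44100                      # assumed sample rate, Hz
target_resolution = 1.0         # desired bin width, Hz
n_min = fs / target_resolution  # minimum FFT length, samples
window_s = n_min / fs           # corresponding window length, seconds
print(int(n_min), window_s)     # 44100 samples -> 1.0 s of audio per frame
```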
  • asked a question related to Sound Analysis
Question
10 answers
I have a question about factor analysis in SPSS:
I have two different datasets I want to compare (original sounds and adapted sounds). When I plot a component plot of dataset I (original sounds) and then of dataset II (adapted sounds) in two separate plots, the result is different from when I plot datasets I and II together in one component plot. I want dataset I (original sounds) in the same place, so that the difference between the two datasets is clearly visible.
How can I get the same axes for these component plots? Can I calculate this?
Relevant answer
Answer
If I remember my SPSS output correctly, each column gives the coefficients you multiply the variable values by (check whatever your favourite SPSS book is; my recommendation is Field's Discovering Statistics Using SPSS). This will give you values (the linear combinations of the variables) for each of the three components, and you can use these to calculate component scores for each case in the original data file. You could then plot these three new variables and compare them for the two groups.
  • asked a question related to Sound Analysis
Question
14 answers
I am looking for a user-friendly software for analysing bioacoustic recordings (underwater sounds) with students. I am so far interested by Raven Pro and Adobe Audition. Any advice? What is your favourite software ?
Thanks!
Relevant answer
Answer
Hello,
You could use the R packages seewave, tuneR, soundecology and ineq (https://www.r-project.org/). R is free and useful in many ways. Plus, there are numerous articles using these R packages for bioacoustic and ecoacoustic research.
Nevertheless, for a more basic and user friendly approach, you could use Audacity (http://www.audacityteam.org/).    
Best regards,
Aggelos
  • asked a question related to Sound Analysis
Question
4 answers
Hi,
In the past few years I have been involved in sound analysis and condition monitoring of car engines, where a misfiring engine operating in a workshop environment was successfully identified. I was also involved in research on Music Information Retrieval (MIR), where the system is provided with a sample record, or part of one, and looks for similar records, or records containing a similar queried part, based on their similarity rather than their genre or style.
I am looking for a dataset to which I can apply my previous experience in sound analysis, for animal identification and even in the field of animal psychology.
This will be an inter-/multidisciplinary work, and scientific input is welcome.
Many thanks
Peyman
Relevant answer
Answer
The Bird Audio Detection project may be of interest, although the aim is detection rather than identification. The dataset has 17000 recordings tagged as either containing or not containing birds. The survey paper reviews the most common methodologies.
  • asked a question related to Sound Analysis
Question
27 answers
Is there a "sound density limit" beyond which sound energy fails to be recorded and/or played back?
Example: one of the largest known choirs consisted of 121,440 people. If I wanted to record such an event (or layer as many overdubs, or many more), would there be a density limit I would reach, and if so, how can it be calculated?
What about natural events? Imagine hailstorms, for example. What would happen if I recorded many and created a sound file with dozens, even hundreds of them, played back together? Would I be reaching any playback (or hearing) limits? Would such density create some sort of coloured noise?
All your ideas, suggestions and explorations will be very welcome  
Relevant answer
Answer
If "played back" means sounding all at once in the same place, your question devolves mathematically into asking how many numbers can sum to, say, 3. Mathematicians are inclined to invoke ideas such as infinity in such cases, though the answer is clearly larger than we can name.
If you ask at what point adding yet another sound to a large mix becomes irrelevant, the answer is also clearly large, but far from infinite. Having sung in large choruses, I opine that after about one or two hundred voices, the addition (or subtraction) of one more hardly matters in any musical sense. (The gospel choir at UCSD once numbered over 500, but their major problem was finding a space in which they could rehearse.)
The interesting part of your question lies in between 0 and infinity. Modern psychology and cognitive science both suggest that 7 plus or minus 2 is about the limit of our "attention". But they also suggest we don't multitask, but scan, i.e., shift our focused attention rapidly among competing messages. Any instrument-rated pilot can tell you that simultaneously scanning (meaning observing and understanding) 7 or so instruments (as well as looking out the window) stretches human capabilities. Not impossible, but difficult, especially if your life depends on it.
So even though I can't truly answer your question, my best guess would be the well-established psychological number associated with short-term memory: seven plus or minus two. It seems our brains - evolved over millions of years - have figured out that below that number we stand a chance of figuring things out. Beyond that number, run!
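The diminishing-returns intuition above can be quantified for uncorrelated sources: N equal incoherent sources sum to a level only 10·log10(N) dB above a single source, so going from 100 to 200 voices adds about 3 dB. A quick check:

```python
import math

def level_increase_db(n_sources):
    """Level gain (dB) from summing n equal, uncorrelated (incoherent) sources."""
    return 10 * math.log10(n_sources)

print(round(level_increase_db(100), 1))   # 100 voices: +20 dB over one voice
print(round(level_increase_db(200), 1))   # doubling again adds only ~3 dB more
```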
  • asked a question related to Sound Analysis
Question
4 answers
Synthetic data generation is an interesting area of research, but I have difficulty finding articles and textbooks on the topic. I would like an idea of definitions and frameworks for automatic synthetic data generation in any area, particularly in sound analysis.
Relevant answer
Answer
As you focus on sound analysis, you may find interesting the current state-of-the-art technique for improving speech recognition acoustic models by augmenting training data with speed, frequency and tempo warping, which is indirectly related to synthetic data generation ( http://www.danielpovey.com/files/2015_interspeech_augmentation.pdf )
Another example of creating synthetic data is block mixing that seems to be useful in polyphonic acoustic event detection: https://arxiv.org/pdf/1604.00861.pdf
We also tried the latter in the recent DCASE challenge, gaining a little on F-measure, though not as much as the authors of the previous work: https://www.researchgate.net/publication/306118891_DCASE_2016_Sound_Event_Detection_System_Based_on_Convolutional_Neural_Network
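A toy version of the speed perturbation used in the augmentation paper above: resampling by a constant factor, which warps tempo and pitch together. Linear interpolation is used here for brevity; real pipelines use proper resamplers.

```python
def speed_perturb(samples, factor):
    """Resample by linear interpolation: factor > 1 plays faster (shorter),
    shifting tempo and pitch together, as in simple speed perturbation."""
    n_out = int(len(samples) / factor)
    out = []
    for i in range(n_out):
        pos = i * factor
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a * (1 - frac) + b * frac)
    return out

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
print(speed_perturb(x, 2.0))   # -> [0.0, 2.0, 4.0]: half as many samples
```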
  • asked a question related to Sound Analysis
Question
18 answers
I plan to examine the effects of mixed sounds on human mood. I have three types of sounds: nature sound, traffic noise, and mechanical noise. I want to mix them to produce three types of stimuli: nature-sound dominant (mixed with traffic and mechanical noise at a lower level), traffic-noise dominant, and mechanical-noise dominant.
I have two proposed ways to produce these three types of sound; let me use the nature-sound-dominant case as an example:
1. Nature sound (keeping the original volume measured at a point near the sound source) + traffic noise (only 10% of its original volume, say 80 dB × 10%) + mechanical noise (only 10% of its original volume, say 85 dB × 10%)
2. Nature sound (keeping the original volume measured at a point near the sound source) + traffic noise (original sound minus the dB decrease due to distance) + mechanical noise (original sound minus the dB decrease due to distance). The decreased dB can be calculated with the formula at this link: http://www.sengpielaudio.com/calculator-squarelaw.htm
Which one is better, and why? Or, if neither is good, do you have a suggestion? Could you recommend any literature as a reference? Many thanks!
Relevant answer
Answer
Hi Bin, you might want to look at the research leaders in the field of applied soundscape. 
  1. Prof. Dick Botteldooren
  2. Prof. Brigitte Schulte-Fortkamp 
  3. Prof. Jian Kang
Regarding your application, you might want to look into the work of Dr. Pyoung Jik Lee: https://scholar.google.com/citations?user=TYZB3ZUAAAAJ&hl=en
We use the signal-to-mask ratio in digital signal processing as a measure of how "loud" the noise is compared to the masker. I believe Dr. Lee's work uses a range of -3 dB to +3 dB for the signal-to-mask ratio; I can't be sure, so let me know if you find any discrepancies. Hope this helps.
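One caveat on option 1 in the question: "80 dB × 10%" mixes a logarithmic unit with a linear percentage. It is cleaner to specify the level difference in dB and convert it to a linear gain when mixing the raw signals, e.g. (a sketch on bare sample lists; the resulting levels should still be verified with a meter or RMS measurement):

```python
def mix_at_offset(dominant, background, offset_db):
    """Mix two equal-length signals, attenuating the background by
    offset_db relative to its original level."""
    g = 10 ** (-offset_db / 20)          # e.g. 20 dB down -> gain 0.1
    return [d + g * b for d, b in zip(dominant, background)]

nature = [0.5, 0.4, 0.3]    # toy sample values
traffic = [0.2, 0.2, 0.2]
print([round(v, 2) for v in mix_at_offset(nature, traffic, 20.0)])  # -> [0.52, 0.42, 0.32]
```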
  • asked a question related to Sound Analysis
Question
8 answers
My name is Rouland, and I work in the Department of Biology at Pakuan University. I am now studying frogs, and as a beginner my biggest problem is identifying frog sounds. My idea is to analyse each type of frog call and group the calls by the similarity of their waveforms, so I can tell which calls belong to the same frog. The hard part is finding software to analyse the sounds. What kind of software would suit my research? Thanks.
Relevant answer
Answer
There are two excellent alternatives (free software):
For R users, the package "seewave": http://rug.mnhn.fr/seewave/
The Brazilian biologist Marcos Gridi Papp developed Sound Ruler: http://soundruler.sourceforge.net/main/
Both are excellent options.
Of course, there is also commercial software (most of it with free "light" versions).
Best
  • asked a question related to Sound Analysis
Question
3 answers
Hi all, It is known that the low spontaneous rate fibres code high intensity information and the high spontaneous rate fibres code low intensity information.
This is known for low- or high-intensity sounds presented alone. What will happen if a certain sound contains a mixture of low- and high-intensity information? Would simultaneous low- and high-intensity mixtures be differentially coded by the low and high spontaneous rate fibres?
Awaiting your valuable inputs
Relevant answer
Answer
Sorry for the delay in replying, Sir and Ma'am, and thank you for responding.
I realize that my question was poorly formed. I am still not entirely clear about what I am asking, but I have a strong feeling that it is not too far off. It may be more of a thought than a clear question, but I would be grateful for comments or input.
The Low SR fibres being affected in NIHL was the primary motivation of this line of thinking.
Vanaja mam, asked a very important question. This was one major concern which I was thinking of as well. Rather I was thinking on two lines.
1) When a modulated pure tone reaches a receptor, the sound alternates between high and low intensity. Is the high-intensity portion preferentially coded by the low-SR fibres and the low-intensity portion by the high-SR fibres? If so, with affected low-SR fibres, shouldn't the perception and coding of shallow modulations be more affected than that of deep modulations? By extension, shouldn't modulations at high SLs be more affected than modulations at low SLs?
2) When two very close frequencies (within an auditory filter) are presented one at low and one at high intensity. Then the low intensity signal will fall within the excitation pattern of the high intensity. How are the different SR fibres coding the signal now. We also have to consider the weird phase relationships that may be present between the two signals.
Please do comment about these. I don't have a very specific question here though, but await a good discussion.
best
Nike
  • asked a question related to Sound Analysis
Question
3 answers
Would we expect a high frequency adapter to have any effect on a low frequency target in a horizontal localization task? I mean in comparison to a low frequency adapter and a low frequency target. Does anybody know of any studies on these types of interaction? Sometimes a clue close enough to this can point me in the right direction if I follow the breadcrumbs. 
Thank you very much in advance
Relevant answer
Answer
Hi Jonas, thank you for your answer. It certainly is close to the topic. I think I found what I was looking for. Actually, it was a study by one of the researchers here at the IHR: 
Briley PM, Krumbholz K. The specificity of stimulus-specific adaptation in human auditory cortex increases with repeated exposure to the adapting stimulus. J Neurophysiol [Internet]. 2013;110(12):2679–88. Available from: http://www.ncbi.nlm.nih.gov/pubmed/24047909
This study was able to show adaptation as a function of adapter-probe frequency separation. Adaptation is stronger when there is no Δf, but it is still present (although to a lesser degree: 50% adaptation) when the Δf is as large as 1.5 octaves.
Hope this is useful for others.
Kind regards
Nuno
  • asked a question related to Sound Analysis
Question
4 answers
I am analysing animated e-books from a multimodal perspective but do not have any background in sound/music or cinema, so I am struggling with the basics of describing the sounds. Any good primer of the types of effects/music and their role in audio-visual texts?
Relevant answer
Answer
Van Leeuwen's Speech Music Sound may also be relevant to your multi-modal perspective, providing terminologies and various examples.   
  • asked a question related to Sound Analysis
Question
10 answers
I seek advice on dealing with background noise which overlaps in frequency with the signal of interest. The software I use is Avisoft SAS-Lab Pro. I have used an eraser tool (under strict predetermined criteria). Is there a more suitable method in cases where the background noise is of a similar frequency to the signal?
Recordings are made at a sample rate of 48kHz (16-bit) and resampled to 22.05kHz. Spectrogram parameters: 256 FFT, Hamming window, 100% frame size, 50% overlap. Resolution: 86Hz and 2.9ms.
Relevant answer
Answer
As suggested by Israel, if the "nature" of your noise does not change (assuming frequency remains the same as the signal but not the amplitude or phase) then a simple amplitude based filter should work. You may also try phase-based filters.
If the noise (or even the signal) changes its characteristics, then go for statistical filters, which can even be adaptive. The first one to try is the Wiener filter mechanism with adaptive filter algorithms like LMS, NLMS and RLS; in my personal experience these are the best ones.
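A bare-bones sketch of the NLMS scheme mentioned above, used as a noise canceller: a reference noise channel is adaptively filtered to match the noise in the primary channel, and the residual is the cleaned signal.

```python
import math

def nlms_filter(reference, primary, n_taps=8, mu=0.5, eps=1e-8):
    """Normalized LMS noise canceller: the filtered noise reference tracks
    the noise in the primary channel; the residual e is the signal estimate."""
    w = [0.0] * n_taps            # adaptive FIR coefficients
    buf = [0.0] * n_taps          # sliding window of reference samples
    out = []
    for r, d in zip(reference, primary):
        buf = [r] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))    # current noise estimate
        e = d - y                                     # error = cleaned sample
        norm = sum(xi * xi for xi in buf) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, buf)]
        out.append(e)
    return out

# Toy check: when the primary channel contains the same noise the reference
# sees, the residual shrinks as the filter converges.
noise = [math.sin(0.3 * i + 0.5) for i in range(200)]
residual = nlms_filter(noise, noise)
print(sum(abs(e) for e in residual[-20:]) < sum(abs(e) for e in residual[:20]))  # -> True
```

The step size mu controls adaptation speed; NLMS is stable for 0 < mu < 2.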
  • asked a question related to Sound Analysis
Question
9 answers
What is the meaning of these images?
These images are the result of processing in MATLAB, using FFT and specgram.
Thank you for your attention.
Regards 
Lubis
Relevant answer
Answer
I recommend you to read a brief introduction to sound analysis, for example:
A good reference book also can be
Au, W. W., & Hastings, M. C. (2008). Principles of marine bioacoustics (pp. 121-174). New York: Springer.
I guess you are trying to analyze your dolphin whistles using contour analysis. There are many publications on this technique; you can check these recent ones and track back to the older ones:
Gannier, A., Fuchs, S., Quèbre, P., & Oswald, J. N. (2010). Performance of a contour-based classification method for whistles of Mediterranean delphinids. Applied Acoustics, 71(11), 1063-1069.
Ferrer-i-Cancho, R., & McCowan, B. (2009). A law of word meaning in dolphin whistle types. Entropy, 11(4), 688-701.
  • asked a question related to Sound Analysis
Question
3 answers
I'm testing the sound insulation properties of plywood panels. 
Kundt's tube can be used for determining transmission loss (TL), but it only provides indicative results, and a specific standard is missing (is this correct?). Is it possible to establish a correlation, even an approximate one, between Kundt's tube and ISO 140 results?
Relevant answer
Answer
There is no general method to establish a correspondence between the two measurements.
To establish a "correspondence" you will need characterization and modelling tools. There is also conditions on the tested material.
In a Kundt's tube, the acoustic excitation is plane waves at normal incidence thus the TL should rather be called normal Sound Transmission Loss (nSTL).
You also have samples with small dimensions, and this will tremendously impact your nSTL at low frequencies (when the wavelength in air is much larger than these dimensions). Indeed, the nSTL can reach high values at low frequencies, as long wavelengths will not be able to propagate through a small sample and will be reflected. This is a measurement artifact you will not see (or will see to a lesser extent) in ISO 140-3 measurements.
When the frequency increases you should find a nSTL following the mass-law behavior.
At even higher frequencies you might find bending vibrations of your homogeneous sample. From the frequency positions of these vibrations you can estimate the elastic parameters of your samples. Of course this is easier if your sample is made of an isotropic material and its first bending frequency is below the highest frequency of plane waves assumption in the Kundt's tube. Honestly, if one of these 2 conditions is not fulfilled, forget the Kundt's tube.
If everything is OK at the previous step, from the mass and elastic parameters of your sample estimated from a Kundt's tube measurement you can estimate the sound transmission loss for a larger sample (~ 10 square meters is required by ISO 140-3) and for a diffuse sound field with a modeling tool.
Finally, you're right, there is no ISO standard for the measurement of the normal Sound Transmission Loss while there is the ASTM E2611. The ISO is aware of this point and a working group should start a discussion about it in the next few months.
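The mass-law region mentioned above can be sketched directly: the normal-incidence transmission loss of a limp panel follows from its surface mass and the characteristic impedance of air. The plywood surface mass below is an assumed, illustrative value.

```python
import math

RHO_C = 415.0   # characteristic impedance of air, Pa*s/m (approx., 20 degC)

def mass_law_tl(freq_hz, surface_mass):
    """Normal-incidence mass-law transmission loss (dB) for a limp panel
    with surface mass in kg/m^2."""
    x = math.pi * freq_hz * surface_mass / RHO_C
    return 10 * math.log10(1 + x * x)

# e.g. 12 mm plywood at ~600 kg/m^3 -> about 7.2 kg/m^2 (assumed values)
for f in (250, 500, 1000, 2000):
    print(f, round(mass_law_tl(f, 7.2), 1))   # TL grows ~6 dB per octave
```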
  • asked a question related to Sound Analysis
Question
4 answers
I'm working in the field of emotion perception and recognition through headphone-based 3D sound systems (like Dolby Auro or DTS HeadphoneX), and I'm searching for previous studies in this field, as well as those related to other multichannel sound systems not necessarily based on headphones (5.1 discrete-speaker systems, for example).
I'm particularly interested in studies that have used real stimuli (music, film scenes, environmental sound) rather than isolated sounds, but any contribution is welcome.
Relevant answer
Answer
Depending on what you mean by emotion, you may look at Stephen McAdams's and Albert Bregman's work.
In France there has also been interesting work; even if not exactly on emotion, these artistic, creative and phenomenological approaches can help to qualify emotion:
La revue l'Espace du Son
My works on spatial sound:
  • asked a question related to Sound Analysis
Question
3 answers
Dear All,
      Dielectric measurements are an important means of studying the dynamic properties (capacitance, conductance, permittivity and loss factor) of a dielectric, and they can be performed over a wide frequency range. My question is given below:
What is the advantage of performing dielectric measurements in the frequency range from 42 Hz to 5 MHz (Audio- Radio frequency) in terms of its application point of view? I would be grateful to you if you could provide some references. 
Thanks in advance.
Relevant answer
Answer
ok.
  • asked a question related to Sound Analysis
Question
6 answers
I have a WAV file and I would like to know how to measure the tempo in bpm unit
Relevant answer
Answer
Hi Yopie,
Behringer has a device on which you can tap a button to the beat of the song or .wav file, and many Pro Tools plug-ins have a "tap tempo" function which then calculates bpm for you. In the simplest sense, 60 bpm is 1 beat per second. So, if you are tapping your finger 50 times a minute to your .wav file, the bpm = 50. This assumes you are tapping quarter notes, however. If you tap your finger to your .wav file and notice you tapped 100 times in a minute, this could mean you were tapping eighth notes and really it was 50 bpm, not 100 bpm, since bpm, in the classic sense, is always based on quarter notes. If you are dealing with different time signatures (e.g. 7/4, 5/4, 3/4, 11/4), you always find the quarter-note pulse and count how many times you tap your finger in a minute - that is your bpm. Be aware of what a quarter note is, and this should solve it for you. Or buy a metronome and see what the metronome setting is when it seems to be in sync with your .wav file. What is your .wav file? A song, the sound of ocean waves, something other than music? A heartbeat? Also, you could have something odd like 90.5 bpm. And if you listen to songs that were not recorded with a metronome, the bpm changes from time to time throughout the song. Early Beatles = no metronome (flowing excitement; the beat makes sense as the song grows, e.g. "It Won't Be Long"). Sting = perfect metronome (e.g. "Fields of Gold"). Stravinsky, Beethoven, Brahms, Mozart... suggested tempos, with rit. (slow-downs) and accel. (speed-ups)... a gas pedal moving up and down. Are you dealing with a perfect bpm? Only you have the .wav file you are talking about ;] 120 bpm is another very common tempo. Odd tempos (117, 105, 99...) have a certain magic to them as well.
Attached is a tool I made for exploring this sort of thing...
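The tap-tempo idea above reduces to simple arithmetic on the tap timestamps, assuming the taps follow the quarter-note pulse:

```python
from statistics import mean

def bpm_from_taps(tap_times):
    """Estimate beats per minute from tap timestamps (seconds), assuming
    the taps fall on the quarter-note pulse as described above."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return 60.0 / mean(intervals)

taps = [0.0, 0.5, 1.0, 1.5, 2.0]   # one tap every half second
print(bpm_from_taps(taps))          # -> 120.0 bpm
```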
  • asked a question related to Sound Analysis
Question
7 answers
It would be interesting to gain experience from persons applying a hydrophone for the measurement of sound in air. Initial experiments seem to indicate that a hydrophone performs differently in air compared to water, probably due to the greater impedance mismatch between the transmission medium and the sensor.
Relevant answer
Answer
Ole-Herman,
You still haven't told us why you want to use a hydrophone in air.
I agree with everyone on the impedance mismatch. Hydrophones just weren't designed to work in air.
If you are looking for a weatherproof or immersible microphone, there are a number available. Just search the web on "waterproof microphone". Some are fully immersible, although they probably don't work very well as hydrophones, just as hydrophones don't work very well in air.
A weatherproof mic in air will give you a much better signal-to-noise ratio and a much more predictable amplitude-frequency response and directional characteristic than a hydrophone in air.
If you want a transducer that will work in both media, I suggest you use both a hydrophone and a waterproof mic and mix them after the preamps. If you set the gains correctly, the impedance difference between the two media should effectively switch between the two transducers automatically.
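The impedance-mismatch argument above can be made concrete: at a plane boundary between two media, the transmitted power fraction at normal incidence is 4·Z1·Z2 / (Z1 + Z2)², which for air and water predicts roughly 30 dB of loss in either direction (characteristic impedance values are approximate).

```python
import math

Z_AIR = 415.0      # characteristic impedance of air, Pa*s/m (approx.)
Z_WATER = 1.48e6   # characteristic impedance of water, Pa*s/m (approx.)

def interface_tl_db(z1, z2):
    """Power transmission loss (dB) across a plane interface, normal incidence."""
    t = 4 * z1 * z2 / (z1 + z2) ** 2     # transmitted power fraction
    return -10 * math.log10(t)

print(round(interface_tl_db(Z_WATER, Z_AIR), 1))   # ~29.5 dB lost either way
```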
  • asked a question related to Sound Analysis
Question
4 answers
I need information about the sound pressure level of an OM457LA 220 kW diesel engine, emitted from its exhaust system at rated speed. Information on any similar engine would also help. I hope somebody can help me.
Relevant answer
Answer
In 'Methoden der Mathematischen Physik' by Courant and Hilbert it is shown how you can calculate all modes of a circular plate. The resulting algorithm is easily executed in MATLAB.
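A sketch in the spirit of that calculation, written in Python rather than MATLAB, using the standard tabulated eigenvalues for a clamped circular plate instead of solving the characteristic equation; the plate dimensions and material below are assumed, illustrative values.

```python
import math

# Tabulated eigenvalues lambda^2 for a CLAMPED circular plate (standard
# values from plate-vibration references), keyed by mode (nodal diameters, nodal circles).
LAMBDA2 = {(0, 1): 10.2158, (1, 1): 21.26, (2, 1): 34.88, (0, 2): 39.77}

def plate_mode_hz(mode, a, E, h, rho, nu=0.3):
    """Natural frequency f = lambda^2 / (2 pi a^2) * sqrt(D / (rho h)) of a
    clamped circular plate with radius a and thickness h."""
    D = E * h ** 3 / (12 * (1 - nu ** 2))    # bending stiffness
    return LAMBDA2[mode] / (2 * math.pi * a ** 2) * math.sqrt(D / (rho * h))

# Illustrative steel plate: radius 0.1 m, thickness 1 mm (assumed values)
f01 = plate_mode_hz((0, 1), a=0.1, E=2.1e11, h=1e-3, rho=7850)
print(round(f01))   # fundamental mode, around 254 Hz for these values
```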
  • asked a question related to Sound Analysis
Question
5 answers
I want to get a function f(t) that returns the value of the amplitude of the sound at time t. I've already converted the MP3 to a list of hexadecimal numbers. What should I do next?
Zyt
Relevant answer
Answer
The MP3 format is a complicated lossy compression format for audio. Writing an MP3 decoder is a major undertaking that requires a fair bit of expertise.
Rather than writing code from scratch to decode MP3 files, you should consider using third-party code, such as the open-source MINIMP3 decoder (http://keyj.emphy.de/minimp3/) or a third-party library like libmpg123 (http://www.mpg123.de/api/). Using such third-party code, you can convert audio data to uncompressed form, which you can then easily export in the format you desire.
[NB: Microsoft Security Essentials flags the .exe file in the MINIMP3 library package as a Trojan. I believe this is a false positive, but just in case, delete the .exe file and focus on the source code, recompiling if necessary using your own system.]
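To illustrate the final step: once one of the decoders above has produced raw PCM samples and a sample rate, the f(t) the question asks for is just an index lookup. A minimal sketch with made-up sample data (the decoding itself is left to the third-party library):

```python
def make_amplitude_function(samples, sample_rate):
    """Given decoded PCM samples, return f(t): the amplitude at time t (seconds)."""
    def f(t):
        i = round(t * sample_rate)     # nearest-sample lookup
        return samples[i] if 0 <= i < len(samples) else 0.0
    return f

# Usage with made-up data: 4 samples at 4 Hz, i.e. one sample every 0.25 s.
f = make_amplitude_function([0.0, 0.5, 1.0, 0.5], 4)
print(f(0.5))   # 1.0
```

For smoother results you could interpolate between neighbouring samples instead of rounding to the nearest one.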
  • asked a question related to Sound Analysis
Question
4 answers
I do research on analyzing the richness of vehicle sound. I already use a semantic differential rating scale (poor-rich), and I want to relate the subjective scale to physical properties of the sound.
Relevant answer
Answer
What exactly do you mean by richness? In terms of sound properties, richness may be related to the frequency spectrum, to the amplitude envelope over time or, for instance, to the harmonic relations among frequencies (harmonics and partials). Which properties are you considering?
  • asked a question related to Sound Analysis
Question
5 answers
I am interested in developing sound quality of car horns and I want to perform dissimilarity task on it.
Thank you
Relevant answer
Answer
I don't know how you are approaching this topic, but one of the easiest ways would be this:
1. Create/Record the sound.
2. Calculate Spectrum (or Spectrogram if the sound changes over time)
3. Look at the regions with the highest energy. If there is only one maximum, it is most likely a monophonic signal.
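The steps above can be sketched in a few lines. This is a minimal pure-Python illustration (a real implementation would use an FFT library rather than a direct DFT, and the test signal is made up):

```python
import cmath, math

def dft_magnitude(x):
    """Magnitude spectrum of a real signal via a direct DFT (fine for short clips)."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N)))
            for k in range(N // 2 + 1)]          # non-negative frequencies only

def peak_frequency(x, sample_rate):
    """Frequency [Hz] of the strongest spectral bin - the 'highest energy region'."""
    mag = dft_magnitude(x)
    k = max(range(len(mag)), key=lambda i: mag[i])
    return k * sample_rate / len(x)

# Usage: a pure 50 Hz tone sampled at 400 Hz peaks at 50 Hz (a single maximum,
# i.e. a monophonic signal in the sense of step 3).
x = [math.sin(2 * math.pi * 50 * n / 400) for n in range(400)]
print(peak_frequency(x, 400))   # 50.0
```

For a car horn that changes over time you would apply the same idea per short window (a spectrogram) rather than to the whole recording at once.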
  • asked a question related to Sound Analysis
Question
2 answers
Suppose I have y = conv(x,h) + n = s + n, where h is the (unnormalized) channel impulse response of length L. How do I define SNR? Is it SNR = (power of x)/(power of n) or SNR = (power of s)/(power of n)?
Relevant answer
Answer
I was wondering about this same question a few years ago. I came to the conclusion that the definition depends on your application. In my field, room acoustics, the very first part of the impulse response, say 5 ms after the direct sound, is considered signal. Then your SNR is "power of conv(x, h(0:5ms)) / power of noise". The rest of the impulse response, i.e., from 5 ms to infinity, is considered convolutive noise and is not represented in the traditional SNR figure. Instead, the convolutive noise is captured by another figure called the direct-to-reverberant ratio (DRR), defined as "power of h(0:5ms) / power of h(5ms:infinity)". Sometimes in room acoustics a parameter called the reverberation time is also used as an indication of how much convolutive noise there is, as in
Champagne, B., Bedard, S., and Stephenne, A., "Performance of time-delay estimation in the presence of room reverberation," IEEE Transactions on Speech and Audio Processing, vol. 4, no. 2, pp. 148-152.
That is, the signal-to-noise ratio is defined by what is considered noise and what is considered signal in your application. 
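As a concrete illustration of these definitions, here is a small sketch. The 5 ms split and the toy impulse response are only example values:

```python
import math

def power(x):
    return sum(v * v for v in x) / len(x)

def snr_db(signal, noise):
    """SNR in dB between a signal segment and a noise segment."""
    return 10 * math.log10(power(signal) / power(noise))

def drr_db(h, sample_rate, split_ms=5.0):
    """Direct-to-reverberant ratio: energy of h before vs after the split point."""
    split = int(split_ms * 1e-3 * sample_rate)
    early = sum(v * v for v in h[:split])
    late = sum(v * v for v in h[split:])
    return 10 * math.log10(early / late)

# Usage with a toy impulse response sampled at 8 kHz (split index = 40 samples):
h = [1.0] + [0.0] * 39 + [0.1] * 40   # strong direct path, weak reverberant tail
print(round(drr_db(h, 8000), 1))      # 4.0
```

In the terminology above, the SNR would then be computed between conv(x, h[:40]) and the additive noise, while the tail h[40:] only enters the DRR.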
  • asked a question related to Sound Analysis
Question
5 answers
For instance, I'm looking for research on audiogame testing.
Relevant answer
Answer
We published a paper on unit and integration testing for the Jamoma framework some years ago: 
  • asked a question related to Sound Analysis
Question
5 answers
The tank is stratified and 3 meters long, and I'm using a sound wave at 250 kHz.
Relevant answer
Answer
That is a bit clearer. So you have one ultrasonic wave inside the tank - from bottom to top? - and a moving liquid ("wave").
Do you use pulse-echo mode or transmission?
And you want to see the effect of the moving liquid on your received ultrasonic signal?
Sometimes a sketch of the setup is worth 10,000 words.      ;-)
  • asked a question related to Sound Analysis
Question
8 answers
I need some suggestions based on the experience of soundscape researchers. All the city parks I would like to analyze are small in size and located on busy streets. Thanks.
Relevant answer
Answer
I do not think either option is definitely better than the other. The key question is how you are going to work with your recordings. If your target reproduction system is a multichannel loudspeaker system, then definitely go for ambisonics, but remember your listener must have their head positioned exactly in the sweet spot to hear correctly; on the other hand, you will not have the problem of non-individualised HRTFs in the recording. The binaural option is the obvious solution if your primary listening tool is headphones, and if you are to be the main listener, then recording binaurally with your own ears seems the better option. If you have both reproduction and recording systems available, try both and decide which works better for you. If you would need to purchase a multichannel system and your budget allows it, then also go for ambisonics. A 16-channel system would be fine; inexpensive monitors like the Genelec 6010 are OK. A nearly as good result can be obtained with just 10 speakers: six around the listener in the horizontal plane at ear level and four around an elevated horizontal plane.
  • asked a question related to Sound Analysis
Question
11 answers
I'm searching for spectral data on natural background noise in quiet environments, both with and without vocalizing animals.
Relevant answer
Answer
The US National Park Service has focused a lot of effort on this. Contact Kurt Fristrup (kurt_fristrup@nps.gov).
  • asked a question related to Sound Analysis
Question
4 answers
In a complex sound I need to determine the fundamental frequency, but the Fast Fourier Transform also picks up other frequencies whose values differ only slightly from it. Can anybody help me? Thank you.
Relevant answer
Answer
The most direct and simple method is to take a longer FFT.  This may require zero padding if you do not have enough input samples.  (If the input signal is stable and you have enough data, it is better to increase the FFT size using more input samples than to zero pad.)  This separates the signals into narrower FFT bins and may be enough to separate the two signals.
If that does not work, a slightly more complicated method is to use the Chirp Z Transform (CZT). If you have access to Matlab or Octave, this is given by the command czt(). Set up the CZT so that it follows a contour along the unit circle in the Z plane (in Matlab, parameter A = exp(1i*2*pi*LowF/SR), where LowF is a frequency somewhat lower than the frequency being examined and SR is the sample rate). Set the frequency increment between points (deltaF) smaller than the width of the FFT bins (W = exp(-1i*2*pi*deltaF/SR)).
Both of these methods will fail if the signals are too close together and there are not enough input samples.  In this case, more sophisticated methods are available, but from the question it is not obvious that more sophisticated methods are necessary, so I'll start with these simple suggestions. 
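To illustrate what the fine-grained contour buys you, here is a minimal pure-Python sketch that evaluates the spectrum directly on an arbitrary frequency grid - the same result a CZT along the unit circle computes efficiently. The signal, sample rate and grid are made-up example values:

```python
import cmath, math

def dtft_magnitude(x, freqs_hz, sample_rate):
    """Evaluate |X(f)| on an arbitrary frequency grid. This is what a CZT
    contour along the unit circle computes efficiently; here it is written
    as a direct (slow) sum purely for clarity."""
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * f * n / sample_rate)
                    for n in range(len(x))))
            for f in freqs_hz]

# Usage: two tones 1 Hz apart, inspected on a 0.25 Hz grid around 100 Hz.
sr, N = 1000, 2000
x = [math.sin(2 * math.pi * 100 * n / sr) + math.sin(2 * math.pi * 101 * n / sr)
     for n in range(N)]
grid = [99 + 0.25 * i for i in range(13)]      # 99 .. 102 Hz
mag = dtft_magnitude(x, grid, sr)              # peaks near 100 Hz and 101 Hz
```

As the answer notes, no grid refinement can help if the observation window is too short: the tones must be separated by more than roughly the reciprocal of the record length.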
  • asked a question related to Sound Analysis
Question
4 answers
I want to build a high-pass filter with a 0.1 Hz cut-off frequency to pass higher frequencies and block lower ones when recording EEG signals. I have designed it, but the big problem is that the frequency we work with is about 0.1 Hz, which is near the pole, so the filter has a relatively long delay, and I need less than 50 ms of delay. If someone knows what I should do, please help me.
Relevant answer
Answer
Thank you for your idea; fortunately, I solved it.
  • asked a question related to Sound Analysis
Question
4 answers
I searched for it on Google and in a couple of scientific article search engines, but to no avail. The problem is that the impedance is usually given without specifying the type of glass it has been attributed to. Since there are at least 10,000 types of glass on the market, it is difficult to decide which one is right for me. Typically, a standard value of 1.5 is given for glass (the kind used for windows and so on). I am looking for a glass with Z = 1 (lower impedance). It could be something with a density lower than that of standard glass (about 3.5), or something very special. I am looking forward to your suggestions.
Relevant answer
Answer
The lower the density of the glass (or of plastics, for instance), the lower its transmission loss for sound impinging on it, from outdoors for instance. For a composite window (two or more panes with an air space between them), the reduced TL of each pane can be compensated by increasing the air-space distance ("gap"), space permitting.
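For a rough quantitative feel for that density dependence, the normal-incidence mass law can be sketched in a couple of lines. This is a textbook approximation that ignores coincidence and stiffness effects, and the pane values below are only illustrative:

```python
import math

RHO_C = 415.0   # characteristic impedance of air, rho0*c0 [Pa*s/m], at ~20 degC

def mass_law_tl(surface_mass, freq):
    """Normal-incidence mass-law transmission loss [dB] of a single panel.
    surface_mass: mass per unit area [kg/m^2]; freq: frequency [Hz]."""
    z = math.pi * freq * surface_mass / RHO_C
    return 10 * math.log10(1 + z * z)

# Usage: a pane of ~10 kg/m^2 (roughly 4 mm glass) at 1 kHz.
print(round(mass_law_tl(10.0, 1000.0), 1))   # 37.6
```

Doubling the surface mass (a denser material or a thicker pane) adds roughly 6 dB, which is why low-density panes lose TL and why the air gap has to make up the difference.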
  • asked a question related to Sound Analysis
Question
2 answers
I am seeking good answers on the advantages of using the Delany-Bazley empirical model for predicting the sound absorption value, compared to the models proposed by Allard-Johnson, Miki or Wilson. I know it is a simple and fast method, but I am looking for a stronger reason to justify my results, given that it is an outdated method. Attaching some proven research publications would be most helpful, and even a comparison among the models would help. Thank you all.
Relevant answer
Answer
I suggest you to read "Sound absorption of porous materials – Accuracy of prediction methods" by Oliva and Hongisto http://www.sciencedirect.com/science/article/pii/S0003682X14001261
and also "On the modification of Delany and Bazley formulae" by Kirby http://www.sciencedirect.com/science/article/pii/S0003682X14001261
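For reference, the Delany-Bazley model itself is compact enough to sketch in a few lines. This is a hedged illustration using the commonly quoted coefficients (e^{jwt} convention); the material values in the example are made up, and the model is only considered reliable for roughly 0.01 < rho0*f/sigma < 1:

```python
import cmath, math

RHO0, C0 = 1.213, 343.0     # air density [kg/m^3] and speed of sound [m/s]

def delany_bazley(freq, sigma):
    """Characteristic impedance Zc and wavenumber k of a porous layer
    (Delany-Bazley). sigma: flow resistivity [Pa*s/m^2]."""
    X = RHO0 * freq / sigma
    Zc = RHO0 * C0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
    k = (2 * math.pi * freq / C0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
    return Zc, k

def alpha_hard_backed(freq, sigma, d):
    """Normal-incidence absorption coefficient of a layer of thickness d [m]
    mounted on a rigid wall."""
    Zc, k = delany_bazley(freq, sigma)
    Zs = -1j * Zc / cmath.tan(k * d)            # surface impedance of the layer
    r = (Zs - RHO0 * C0) / (Zs + RHO0 * C0)     # pressure reflection coefficient
    return 1 - abs(r) ** 2

# Usage: 50 mm of material with sigma = 10 kPa*s/m^2 at 1 kHz.
print(round(alpha_hard_backed(1000.0, 10000.0, 0.05), 2))
```

The Miki or Johnson-Champoux-Allard models slot into the same structure: only the expressions for Zc and k change, which is why comparison papers such as the two above can evaluate them side by side.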
  • asked a question related to Sound Analysis
Question
4 answers
I'm capturing video of mosquitoes feeding on nestling birds in an attempt to quantify biting pressure (a sample video is attached), but I have no way of determining which species of mosquitoes are attempting to feed.
What kind of equipment do I need in the field and what kind of software is needed in the lab to determine the species present?
Relevant answer
Answer
You would need a sensitive microphone very close to the chicks, but I don't believe you could identify species from the sound alone. It is much easier to identify them from the videos. If they are Culex: pipiens had a higher frequency than tarsalis when we recorded them in the lab. Peter
  • asked a question related to Sound Analysis
Question
4 answers
How can I normalize each recorded sentence to a root-mean-square (RMS) level of 70 dB SPL (i.e., normalize the recordings for RMS amplitude to 70 dB SPL)? Should I use Praat, Adobe Audition or Matlab, and how? Thanks.
Relevant answer
Answer
I assume that you are trying to scale different sound files to an approximately equal loudness. If that is the case, you can select a Sound Object in Praat and under "Modify", you can use "Scale intensity" to set a specific dB SPL level (Drop me an email if you need a script that normalizes all files in a directory that way). To determine the actual SPL of the output you can use a sound level meter. 
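The digital part of the normalization is a simple gain computation, whichever tool you use. A small sketch follows; the target here is expressed in dB relative to digital full scale, and mapping that to 70 dB SPL still requires calibrating the playback chain with a sound level meter, as noted above:

```python
import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def normalize_rms(samples, target_db):
    """Scale samples so their RMS level equals target_db (dB re full scale 1.0)."""
    gain = 10 ** (target_db / 20) / rms(samples)
    return [s * gain for s in samples]

# Usage: bring a quiet sine to -20 dB FS, i.e. an RMS of 0.1.
x = [0.01 * math.sin(2 * math.pi * n / 100) for n in range(1000)]
y = normalize_rms(x, -20.0)
print(round(rms(y), 3))   # 0.1
```

Praat's "Scale intensity" and Matlab scripts do essentially this; applying the same routine over all files gives them equal RMS levels.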
  • asked a question related to Sound Analysis
Question
9 answers
I'm searching for an acoustical propagation model to validate the sound absorption coefficient values I obtain from the two-microphone impedance tube method, other than the Delany-Bazley and Johnson-Champoux-Allard models. Is there anything more recent? I also want to know exactly which material properties are required for the prediction. Thank you.
Relevant answer
Answer
In general a two-microphone method is not accurate; one has to use more than two microphones. For accurate measurements I use 6 microphones. A great advantage of this is that one also obtains an acoustic determination of the speed of sound. We carry out a series of 20 measurements at different frequencies. The unknowns to be determined are 20x2 amplitudes (p+ and p- at 20 frequencies) plus the speed of sound c, so we solve by least squares a set of 120 equations with 41 unknowns. After relative calibration of the microphones, carried out at the relevant frequencies (placing all microphones in one plane along the tube near a closed end), we carry out measurements with a closed pipe termination. Typical accuracy is 0.3% in the reflection coefficient. Attached you will find descriptions of typical results of the method.
Note that the impedance tube should be quite rigid! Wall vibrations can dramatically pollute your results.
Note that we use a single frequency excitation and a lock-in post processing method, which is essential to obtain high accuracies. Broad-band measurements are a pain in the neck!
Note that large errors can be induced by the way the porous sample is mounted in the impedance tube. Such errors can be very important for narrow tubes. 
Note that the impedance tube only provides information concerning normal wave incidence. Theory should be used to deduce from those data the absorption under other incidence or in a diffuse acoustic field.
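For the curious, the linear part of the fit described above (with the speed of sound held fixed rather than estimated) reduces to a 2x2 least-squares problem per frequency. A minimal sketch under that simplification, with made-up microphone positions; the full method also fits c and relies on the relative calibration described above:

```python
import cmath, math

C0 = 343.0   # speed of sound in air [m/s]; the full method also fits this value

def fit_plane_waves(freq, positions, pressures):
    """Least-squares estimate of the forward/backward wave amplitudes (p+, p-)
    in p(x) = p+ * e^{-jkx} + p- * e^{+jkx}, from complex microphone pressures
    measured at the given positions along the tube."""
    k = 2 * math.pi * freq / C0
    m = len(positions)                                   # diagonal of A^H A
    a12 = sum(cmath.exp(2j * k * x) for x in positions)  # off-diagonal of A^H A
    b1 = sum(cmath.exp(1j * k * x) * p for x, p in zip(positions, pressures))
    b2 = sum(cmath.exp(-1j * k * x) * p for x, p in zip(positions, pressures))
    det = m * m - a12 * a12.conjugate()
    p_plus = (m * b1 - a12 * b2) / det
    p_minus = (m * b2 - a12.conjugate() * b1) / det
    return p_plus, p_minus

# Usage: recover known amplitudes from noiseless synthetic data at 6 positions.
k = 2 * math.pi * 500.0 / C0
xs = [0.0, 0.05, 0.11, 0.17, 0.23, 0.31]                 # microphone positions [m]
ps = [cmath.exp(-1j * k * x) + 0.5j * cmath.exp(1j * k * x) for x in xs]
pp, pm = fit_plane_waves(500.0, xs, ps)
print(pm / pp)   # reflection coefficient at x = 0
```

With more than two microphones the system is overdetermined, which is what makes the 0.3% accuracy in the reflection coefficient (and the joint estimation of c) possible.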
  • asked a question related to Sound Analysis
Question
3 answers
I'm seeking the exact durability property tests for acoustical materials made from natural fibers. Does anybody know the exact tests and the corresponding ASTM standards? Thank you all.
Relevant answer
Answer
Thank you, Mr. Elias Randjbaran. Actually, I'm trying to do an impact test on an acoustic material, for example sound-proof panels: finding the impact or damage peak point until the panel is destroyed, or the endurance point. So which test is suitable? Thank you, sir.
  • asked a question related to Sound Analysis
Question
6 answers
What properties of a perceived sound cause us to attribute it to a human vs. non human/environmental source?
Relevant answer
Answer
Hey
Good question, though I do have a question back. By sound generated by a human agent, do you mean that it is generated by and from the human (body, voice, etc.), or just that it is initiated and perhaps modified by a human using an external source (an instrument, or shaking a wind chime as opposed to it being shaken by the wind)?
I think both questions are interesting, but I am more curious about the second.
  • asked a question related to Sound Analysis
Question
7 answers
I am working with an audio sound profile. I want to analyse the frequency content of that sound, and I am using the WavPad sound-editing software for the frequency analysis. I have generated a frequency-versus-time graph, but the sound shows multiple frequencies at a time, so I am not able to generate a clean graph or find the frequency range of the sound. Can you tell me how I can analyse these frequencies?
Relevant answer
If what you want is pitch tracking (a graph that shows what note is being played at a particular time), you need something like Melodyne. All the frequency representations mentioned above will give you all the frequency components that are present in the signal, which is what I think you're getting with WavPad.
  • asked a question related to Sound Analysis
Question
5 answers
Could anybody help me find good acoustic performance estimation/prediction software? I just need to model a multilayer and obtain the sound absorption value (alpha); creating a model room and finding the reverberation time are not necessary. If the software is open source and free to use, all the better. Thank you.
Relevant answer
Answer
Hi Mathan,
I suggest I-Simpa, an open-source software package for 3D sound propagation modelling; you can download it at the following link: http://i-simpa.ifsttar.fr/
Kind regards