Science topic
Sound Analysis - Science topic
Explore the latest questions and answers in Sound Analysis, and find Sound Analysis experts.
Questions related to Sound Analysis
"In the FEM simulation of a micro-perforated plate (MPP) absorber, is it necessary to include the end correction factor typically used in theoretical models to match experimental results, or does the simulation inherently account for the physical effects that the end correction compensates for?"
I am researching pronunciation and correct articulation among EFL students. I have a list of "problematic sounds" that students often struggle to pronounce, and I am trying to analyse those sounds by comparing them to sounds produced by a native speaker.
I have started working with PRAAT, but I was wondering if there is a better tool out there for my purposes.
What do you think? What is the origin of our particular sensitivity to harmonious music?
The longer I live, the more positive (and still more optimistic) I become, and the more I believe in a natural origin of human attitudes toward beauty and goodness. Once I even suggested that the understanding and recognition of music may also be imprinted in our genes.
The understanding of music is not irrelevant to ethics, nor to culture as a whole. Could the general pattern of the symphony be implemented in genes? The genetic memory of this pattern can be heard in the symphony performed by crickets.
(Have you ever heard the amazing sound of cricket chirping slowed down?)
Isn't it comparable with humanity's best symphonies? Maybe we have to change our understanding of music, beauty and goodness as attributes of human culture alone. When we were much, much smaller mammals, so small that our pace of life matched that of a cricket, we could hear these symphonies our whole lives, generation after generation, as if we were spending most of our lives in a philharmonic hall. It was like the music of the heavenly spheres playing around us all the time. It could hardly have ended any other way: quite likely, we have imprinted archetypes of the symphony in our genetic memory. We can enjoy this music again now that, after dozens of millions of years, we have returned to what was hidden from our ears for hundreds of thousands of years, as our best composers rediscovered it for us over the recent few centuries.
In a similar way, not only the notion of music but also the more general notions of beauty and goodness can be incorporated in the structure of our genes, as creatures which possessed empathy and preferred regular (rather than chaotic) patterns were simply better prepared for survival. In this way nature could create higher beings able to consider things beautiful and valuable, and to differentiate better from worse. However, isn't it the case that the full expression of these natural features occurred only when humans, during the evolution of their culture, invented names for good and evil, as well as for beauty and art?
Isn't it the case that prehuman beings (and the pre-cultural beings we were at the beginning) could only dimly sense the difference? Only when the notions were invented and their designates developed did humans create an understanding of beauty and goodness. And isn't it the case that only once these ideas were fully developed did they enter reality as part of it? That is, did humans not create all the beauty and goodness of the world, since its beauty was hardly recognized as such by any earlier beings?
Or did earlier beings recognize it, but simply could not express that recognition other than by preferring beauty or goodness in the images or behaviour of their partners and surroundings? In other words: can beauty or goodness exist without beings who understand them as such?
Of course you may remain sceptical as in https://www.youtube.com/watch?v=iFFtqEyfu_o
Notice, however, the wrong assumption there that the difference in sound perception depends on the lifespan differences between species (and not on the linear dimensions of their hearing apparatus).
This question is connected with similar discussions already present on RG, among others the one archived in the attached file: (27) Do small babies understand the ethical and aesthetical categories_.pdf
as well as its predecessor, which is no longer available.
![](profile/Zbigniew-Motyka/post/May_music_be_implemented_in_our_gens_and_not_only_cultural_archetypes2/attachment/644191bb806fe2503df427d3/AS%3A11431281128308097%401679339515354/image/cats.png)
Due to COVID-19 restrictions, my project was postponed for one year. My sociolinguistic research focuses on face-to-face/focus-group interviews to examine identity construction and accent/sound production. Unfortunately, I don't think I'll manage to conduct face-to-face interviews anytime soon (the COVID-19 situation is apparently still developing), and postponing my project again is no longer an option. So I intend to replace the face-to-face interviews with online interviews. I'm looking for recent studies that used online interviews, and I hope you can recommend some.
Thanks in advance!
I tried Praat, but I understand Praat isn't good with resonating instruments and multiple voices. What else can I use? Sound Analysis Pro (Matlab version) compares at most 5 seconds without freezing. What else could I try? Thank you!
Please see the attachment.
1. Do I have to use a perfectly matched layer (PML) when using the port boundary condition, or not?
2. My port shape is hexagonal, so from the available options I am choosing a user-defined port. However, during computation it shows an error.
Please suggest a solution.
Thanks.
![](profile/Nitish-Katiyar/post/Related_to_pressure_acoustic_frequency_domainComsol/attachment/605d087a220bc500014b895f/AS%3A1005367731433472%401616709754121/image/IMG_20210326_032254.jpg)
The work of George and Shamir describes a method that uses the spectrogram as an image for classifying audio recordings. The method described is interesting, but the results seemed to me somewhat fitted to the chronology rather than to the properties of the spectrogram itself. The spectrogram gives limited information about the audio signal, but is it enough to build a classification method on?
Hi guys,
Is there any option in the AVISOFT SASLab Pro software which enables you to eliminate unwanted noise from a digital recording without affecting your original sound? In my case, sounds are recorded in experimental tanks with a hydrophone connected to a digital audio recorder. The lab is full of low-frequency noise, which to some degree disrupts my sound of interest. If I high-pass filter the recording, there is still noise that is not eliminated and that overlaps with the frequency spectrum of the sound.
Any advice would be helpful.
Company A has been involved in the industry for about 30 years and is known to offer quality professional installation, with material specifications: Zincalume steel, 0.45 mm thickness, AZ150. Company B has been involved in the industry for about 15 years and offers somewhat professional installation, with material specifications: Zincalume steel, 0.40 mm thickness, AZ150. Company C is a lesser-known, new company that has been in the industry for 3 years, with material specifications: aluminium, 0.40 mm thickness, AZ150.
The currency is GHS.
He wants to base his decision on sound analysis.
Thank you for your support.
Hi, I have a 4-microphone array and have obtained all the information about azimuth and elevation. The ultimate goal is to find the distance from the array to the sound source.
Has anyone come across papers on this? I have not found anything on Google Scholar; all papers focus heavily on azimuth and elevation, but say nothing about the distance.
Please recommend some papers if you have read any.
Hi all,
I'm currently working on a soundscape ecology study in which the entire acoustic community is of interest. I have been reading up about the Fast Fourier Transform (FFT), and the trade-off between the time and frequency resolution, which is determined by the choice of window length.
I have however failed to find any resources which explain which temporal/frequency resolution is required for the sounds of interest.
I understand that if only one/a few species with known vocalizations are of interest, this choice can be justified easily, but what if you're dealing with an unknown acoustic community? Studies of the acoustic community which use only the audible spectrum (with a sampling rate of 44.1 kHz) often use a frequency resolution of 172 Hz, but don't offer a justification why they chose this. And what if you're also looking at the ultrasonic part of the acoustic community - how would the required frequency resolution change to capture both sonic and ultrasonic signals?
I appreciate any insights you might have.
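For context, the 172 Hz figure mentioned above is simply the bin spacing of a 256-sample FFT at 44.1 kHz; frequency and time resolution trade off directly through the window length, and at higher (ultrasonic) sample rates the same window gives coarser frequency bins. A minimal sketch:

```python
def stft_resolution(fs_hz, window_len):
    """Frequency resolution (bin spacing) and time resolution of an FFT window."""
    df = fs_hz / window_len   # Hz per bin
    dt = window_len / fs_hz   # seconds per analysis window
    return df, dt

for n in (256, 512, 1024):
    df, dt = stft_resolution(44100, n)
    print(n, round(df, 1), round(dt * 1000, 1))
# 256 172.3 5.8
# 512 86.1 11.6
# 1024 43.1 23.2
```

Doubling the sample rate to cover ultrasound doubles the bin spacing for a fixed window length, so keeping a given frequency resolution for both sonic and ultrasonic signals means using a proportionally longer window (and accepting the coarser time resolution that comes with it).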
Hi, everyone. Is there any public sound dataset with labels, such as bird songs, children crying, traffic congestion, vehicle noise, vehicle horns, or other labels that mark sound events?
Best regards
I am looking for the natural frequency of a cutting tool (normal and worn) but can't find it. At what frequency does the tool show the difference between a normal and a worn tool? Can I use this formula to find it: natural frequency = 2π × rotation speed (rpm) / 60?
Note that the rotating parts are not only the main spindle (there is also the transmission from the motor to the main spindle).
Suppose the average grain size, crystal structure, Young's modulus, fracture strength, velocity of sound, and surface roughness of both the crushed crystals (before and after crushing) and the crushing surfaces, as well as the load on the grinding surface and the static/dynamic friction coefficients, are known. Is it then possible to estimate the crushing sound of the crystals?
Conversely, if the crushing sound of the crystals is analyzed, is it possible to find a mathematical relation between the variables outlined?
I ask because crushing minerals and recording and analyzing the sound require no sophisticated instruments at all.
Please provide relevant research links.
They should be well versed in AVISOFT SASLab Pro, or in any sound analysis software with the capability to generate sound.
I have a doubt about the unit of amplitude on the y-axis of a linear plot of a sound wave generated in Matlab.
On the logarithmic scale it is dB, but I am confused about what the unit should be on the linear scale, and about the maths behind the conversion; it would be a plus if someone has any idea about it.
In the attached image, the value of power at 404.2 Hz is -1.26 dB on the log scale; what is the corresponding value on the linear scale? What is the maths behind this conversion, and what should be the proper unit on the y-axis of the linear plot?
I currently use "V", which I think is wrong.
Also I have attached section of my code which is generating this plot.
--------------------------
fSep=1/(N*h);
disp(['FFT frequency separation = ' num2str(fSep) ' Hz'])
[Y,f,YdB]=SimpleLineSpectrum2(y,h,0,fny);
figure
subplot(211)
plot(f,Y,'LineWidth',1.5),grid
title(['Line Spectrum of ' name ],'fontweight','bold','fontsize',10)
ylabel('Linear power(V)')
xlabel('Frequency (Hz)')
axis([fL fR 0 max(Y)])
subplot(212)
plot(f,YdB,'LineWidth',1.5),grid
title('Line Spectrum in dB','fontweight','bold')
ylabel('power (dB)')
xlabel('frequency (Hz)')
axis([fL fR dBmin 0])
pause
--------------------------------------------------------
and this is what SimpleLineSpectrum2 is doing:
% single-sided amplitude spectrum: scale |FFT| by 2/np
np=length(y); % length of the input vector
%===remove the mean
yavg=mean(y);
y=y-yavg;
%===calculate the fast Fourier transform
Y=fft(y(1:np));
%===generate the frequencies in the frequency grid
nint=fix(np/2);
i=1:nint+1;
f(1:nint+1)=(i-1)/(np*h);
%===take the absolute value and normalize
Ya=2*abs(Y(1:nint+1))/(np); % normalization
% Ya=abs(Y(1:nint+1)); % no normalization
%===generate the frequencies and magnitudes to be
% returned consonant with frLeft and frRight
fL=frLeft;
iL=fix(1+np*h*fL);
fR=frRight;
iR=fix(1+np*h*fR);
retYa=Ya(iL:iR);
retf=f(iL:iR);
Ymax=max(retYa);
YdB=20*log10(retYa/Ymax);
-------------------------------------------------
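For what it's worth, the conversion in the code above is YdB = 20*log10(Y/Ymax), i.e. the dB values are relative to the spectral peak, and the linear axis carries whatever unit the input samples carry (volts only if the recording chain is actually calibrated in volts; for uncalibrated audio it is dimensionless normalized amplitude). A small sketch of the conversion, using the -1.26 dB value from the question:

```python
import math

def amplitude_to_db(a, a_ref):
    """Amplitude ratio in decibels: dB = 20*log10(a / a_ref)."""
    return 20 * math.log10(a / a_ref)

def db_to_amplitude_ratio(db):
    """Inverse mapping: the linear amplitude ratio for a given dB value."""
    return 10 ** (db / 20.0)

# The -1.26 dB peak from the question corresponds to this fraction of Ymax:
print(round(db_to_amplitude_ratio(-1.26), 3))  # 0.865
```

So a line at -1.26 dB on the log plot should appear at about 86.5% of the peak value on the linear plot.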
The sound power level of a speaker is computed from the sound power of the speaker.
The sound power of a speaker is the total sound energy it emits per unit time.
How is this sound power related to the electrical power supplied to the speaker? For example, if I supply 6 W of electrical power to the speaker, will the emitted sound power be 6 W?
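For a rough sense of scale: typical loudspeakers convert only a small fraction of the electrical input into acoustic power, so the sound power is far below the electrical power. The 1% efficiency below is an assumed illustrative figure, not a property of any particular speaker:

```python
import math

def sound_power(electrical_power_w, efficiency):
    """Acoustic power radiated, given an assumed electro-acoustic efficiency."""
    return electrical_power_w * efficiency

def sound_power_level(acoustic_power_w, p_ref=1e-12):
    """Sound power level L_W in dB re 1 pW."""
    return 10 * math.log10(acoustic_power_w / p_ref)

p_ac = sound_power(6.0, 0.01)  # 6 W input at an assumed 1% efficiency
print(p_ac, round(sound_power_level(p_ac), 1))  # 0.06 107.8
```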
How do we know that the missing fundamental is inferred by our brains and is not a by-product of the interference pattern caused by mixing tones of several frequencies?
Thanks for your thoughts!
I am currently working on anthropogenic effects on the vocalisation of howler monkeys in the urban environment.
I record the vocalisations with a TASCAM DR-07MKII and I want to extract acoustic measurements such as frequency, pitch, and the rates and lengths of vocalisations.
Which program is easiest to use for a beginner to extract these variables?
I currently have: Sound Analysis Pro, Praat and Audacity.
All tips and additional information are very welcome
I have used RAVEN software's auto-detect filters to identify (label) the notes of a single bird species that imitates other birds. I am thus working with a large data set (hundreds of thousands of notes within thousands of songs) and am attempting to accurately label all the sounds (notes, syllables, phrases) within the songs of this bird species. I found that:
1) it is very time consuming to tweak the parameters to even get it to 'work';
2) when it is 'working', it returns numerous false positive and false negative results.
I'm curious if those interested in identifying (labeling) large data sets of animal sounds have found a software that will sift through the spectrograms with accurate identification (labeling) of sounds?
Currently I'm using visual and aural (human) inspection of spectrograms to do this, which amazingly seems to be the only way to achieve accuracy in this task. It's incredibly time-consuming, but I appear to be quicker doing it 'old school' than using automated (computer-based) methods.
Cheers!
Brandi Gartland
M.S., Doctoral candidate in Animal Behavior
University of California, Davis
Hello, I currently work on vocal individuality in rainbow lorikeets. The problem is that parrots are very loud and the audio signals are clipping. I use a Sennheiser ME 66 directional mic and a Tascam DR-100 MKIII recorder with the limiter on. Is there any solution to this issue? I know it is possible to fix clipping in audio software, but is it OK to use repaired signals in my analysis, or is it better to exclude them?
Advice and suggestions about types of microphones or recorders (and their placement in the field), and about software for processing the data, are welcome.
Thanks!
Some animals, like whales, dolphins, bats, and ants, can produce ultrasound or infrasound. Can we use these waves to cure diseases like cancer?
I would like to establish a metric or variable to assess how 'good' a spectral profile is. A 'good' profile for me would be a narrow profile with a tall peak, and a bad profile is the opposite: a wide, short peak.
My project is in MATLAB, which performs the necessary simulation and produces the spectra. I would like to know of a (statistical) measure I could compute in MATLAB to discern a good profile from a bad one.
I realise this is an unusual question, but any help would be much appreciated!
Thank you
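One simple option, sketched here in Python for illustration (the same logic ports directly to MATLAB), is to score each profile by peak height divided by its full width at half maximum (FWHM), so tall narrow peaks score high. It assumes a single dominant peak; the example profiles below are made up for demonstration:

```python
def peak_quality(freqs, mags):
    """Score a spectral profile as peak height / FWHM.

    Tall, narrow peaks score high; short, wide peaks score low.
    Assumes a single dominant peak and a roughly uniform frequency grid.
    """
    peak_idx = max(range(len(mags)), key=lambda i: mags[i])
    height = mags[peak_idx]
    half = height / 2.0

    # Walk outwards from the peak until the magnitude drops below half max.
    lo = peak_idx
    while lo > 0 and mags[lo - 1] >= half:
        lo -= 1
    hi = peak_idx
    while hi < len(mags) - 1 and mags[hi + 1] >= half:
        hi += 1

    fwhm = max(freqs[hi] - freqs[lo], freqs[1] - freqs[0])  # avoid zero width
    return height / fwhm

freqs = [i * 0.5 for i in range(41)]                 # 0..20 Hz grid
narrow = [0.0] * 41; narrow[20] = 10.0; narrow[19] = narrow[21] = 6.0
wide = [5.0 if 10 <= i <= 30 else 0.0 for i in range(41)]
print(peak_quality(freqs, narrow) > peak_quality(freqs, wide))  # True
```

A related built-in alternative in MATLAB is `findpeaks`, which can return peak widths and prominences directly.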
I am building a baby-cry detection system using a deep learning algorithm.
The hardware is a Raspberry Pi with a sound sensor as the microphone.
When the baby starts crying, the system should detect the cry.
It works well when I place the mic near the baby, but when the mic is further away I am not able to detect the sound.
Please suggest techniques to detect sound at a range of up to 3 meters.
WHO ARE WE
Master Acoustics International Corp. was founded in Taiwan in 2017 by our Chief Director Mr. Jack Chen, who has a brilliant sense of both sound and frequency. Jack has used sound techniques to invent two of our innovative products: the sound-optimized resonator and a sound-optimized coating which can be painted on the surface of instruments. Both of these innovations dramatically improve the timbre of the instrument to which they are applied. The sound-optimized resonator has already been patented in China, with patent applications in process in Taiwan and Europe.
WHAT WE WANT TO DO – A draft blueprint for the project.
We conceived of this project and would propose three goals:
1. To examine our inventions to assess their effectiveness
2. To use collaboration to create new ideas/innovations/inventions
3. To formally present the project findings for everyone to see
LOOKING FOR PARTNERS
If you are interested in knowing more about Master Acoustics International and this project, or if you can recommend any institutes or researchers, please let us know. Thanks!
I have heard that if you collapse a bubble in water with some sort of sound wave, it will produce light. Is it a special gas or just a bubble of air? And is it a special sound wave?
I wonder what the reason behind this phenomenon could be?
Hi, I'm looking for software to analyse bat echolocation call recordings, preferably free. It doesn't need many features; I just want to extract information about the intensity of the calls and plot the intensity as a function of time.
Thanks!
I want to do an auditory experiment in which the intensity of sound changes between conditions, e.g., 60 dB in one condition and 35 dB in another. How can this be achieved? Is there any hardware or software to control the sound intensity at the dB level?
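For the relative part of this, digital stimuli can be scaled by a factor of 10^(ΔdB/20); mapping a file to an absolute level such as 60 dB SPL additionally requires calibrating the playback chain (headphones/speaker and amplifier) with a sound level meter. A sketch of the relative scaling only:

```python
def apply_gain_db(samples, gain_db):
    """Scale a waveform by gain_db decibels (negative values attenuate)."""
    factor = 10 ** (gain_db / 20.0)
    return [s * factor for s in samples]

# Going from a 60 dB condition to a 35 dB condition means attenuating by 25 dB:
quiet = apply_gain_db([0.5, -0.5], -25.0)
print(round(quiet[0], 4))  # 0.0281
```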
Hi all,
I am currently writing my MSc thesis about vocal communication in wild woolly monkeys. I am using the Audacity software and I would be very grateful to know how to calculate the low frequency, initial frequency and final frequency of the calls in my recordings.
I've taken measurements from spectrograms generated using a 512-point fast Fourier transform with a Hanning window. When I click 'Plot Spectrum', all my low frequencies seem to be 86 Hz... and I don't know how to calculate this parameter correctly. The same happens with the final frequency; I don't understand what I have to select to obtain it.
Regarding the initial frequency... I think I can find it in Effect > Change Pitch > Estimated Start Pitch. Is this correct?
I would be really grateful if you could help me :)
Best,
Laura
Hi everyone,
I am writing my MSc thesis about vocal communication in woolly monkeys and I want to make a general description of their different types of calls. I want to obtain various acoustic parameters such as duration, frequency range, low frequency, high frequency, maximum amplitude, average frequency, initial frequency, and final frequency. Hence, I have to analyse my recordings using SoundRuler, but I've never used this software before. I've read the instructions but I have some questions anyway.
- I recorded in stereo, so when I load the recording into the software, it asks whether I want to analyse the left or the right channel. Can I analyse both separately and then calculate the mean of the two channels?
- Also, when I load the recording, I mark the section I want to analyse using green bars. Once this section is marked, I proceed with the analysis. Is it as easy as clicking the 'manual' button? When I do, a table appears with the different parameter values, but I don't know if it is really as 'simple' as that.
That's all at the moment. Thank you for your answers!
Laura.
Can acoustic monitoring be utilized as a solution for analysing background noise?
I have experimentally recorded the sound pressure levels (SPLs) of a horn; SPLs have also been obtained through simulation in LMS. However, the output of LMS is a spectrum in an Excel file. I want to convert this Excel spectrum into an audible sound for psychoacoustic characterisation. How can I do this in MATLAB or with any other available resource?
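One common way to audition a magnitude spectrum (a generic sketch, not LMS- or MATLAB-specific; the frequencies, amplitudes, sample rate, and duration below are placeholder values) is to synthesise a sum of sinusoids, one per spectral line, with assumed random phases, since the spectrum alone does not carry phase:

```python
import math, random

def spectrum_to_audio(freqs_hz, amps, fs=44100, seconds=1.0, seed=0):
    """Synthesise a time signal as a sum of sinusoids, one per spectral line.

    Phases are unknown from a magnitude spectrum, so random phases are assumed.
    """
    rng = random.Random(seed)
    phases = [rng.uniform(0, 2 * math.pi) for _ in freqs_hz]
    n = int(fs * seconds)
    return [
        sum(a * math.sin(2 * math.pi * f * t / fs + p)
            for f, a, p in zip(freqs_hz, amps, phases))
        for t in range(n)
    ]

sig = spectrum_to_audio([440.0, 880.0], [1.0, 0.5], fs=8000, seconds=0.1)
print(len(sig))  # 800
```

To actually listen, normalize the result, convert to 16-bit integers, and write it out with a WAV writer (e.g. Python's standard-library `wave` module, or `audiowrite` in MATLAB).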
At an affordable rate (good price-quality ratio), please. The main purpose is to record marine mammal sounds and foraging activity at night in coral reef ecosystems, but it would be great if it could also pick up breaking waves for a personal artistic project. It has to be easy to handle manually (for snorkelling). Thank you for any advice.
I'm looking for software that can differentiate sound frequencies with a resolution of less than 1 Hz. E.g., Audacity interprets the whole spectrum between 0 and 85 Hz as one continuous band; I'm looking for one that could treat 0-1 Hz as one band, and so on. What are the hardware limitations? Can you recommend any software?
thanks,
jan
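A note on the hardware/software limitation question above: frequency resolution is set by the analysis-window duration, not by the software. A bin spacing of Δf requires roughly 1/Δf seconds of signal per FFT, whatever tool is used. A quick sketch:

```python
def fft_size_for_resolution(fs_hz, df_hz):
    """Smallest power-of-two FFT length whose bin spacing fs/N is <= df_hz."""
    n = 1
    while fs_hz / n > df_hz:
        n *= 2
    return n

# Sub-1 Hz bins at 44.1 kHz need a 65536-point FFT, i.e. about 1.5 s of audio:
n = fft_size_for_resolution(44100, 1.0)
print(n, round(n / 44100, 2))  # 65536 1.49
```

In Audacity specifically, the "Size" setting in Analyze > Plot Spectrum controls exactly this window length, so the 85 Hz band width corresponds to a short default window rather than a hard limit.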
I have a question about factor analysis in SPSS:
I have two different datasets I want to compare (original sounds and adapted sounds). When I plot a component plot of dataset I (original sounds) and then of dataset II (adapted sounds) in two separate plots, the result is different from when I plot datasets I and II together in one component plot. I want dataset I (original sounds) to stay in the same place, so that the difference between the two datasets can be seen clearly.
How can I get the same axes for these component plots? Can I calculate this?
I am looking for user-friendly software for analysing bioacoustic recordings (underwater sounds) with students. So far I am interested in Raven Pro and Adobe Audition. Any advice? What is your favourite software?
Thanks!
Hi,
In the past few years I have been involved in sound analysis and condition monitoring of car engines, where a misfiring engine operating in a workshop environment was successfully identified. I was also involved in research on Music Information Retrieval (MIR), where the system is given a sample recording, or part of one, and looks for similar recordings, or recordings containing a similar queried part, based on their similarity rather than their genre or style.
I am looking for a dataset on which to apply my previous sound-analysis experience to animal identification, and even to the field of animal psychology.
This will be an inter/multidisciplinary work, and scientific input is welcome.
Many thanks
Peyman
Is there a "sound density limit" beyond which sound energy fails to be recorded and/or played back?
Example: one of the largest known choirs consisted of 121,440 people - if I wanted to record such an event (or as many overdubs as those or many more), would there be a density limit I would reach and if yes, how can it be calculated ?
What about natural events ? Imagine hailstroms for example. What would happen if I recorded many and created a sound file with dozens, even hundreds of those and played them back ? Would I be reaching any playback (or hearing) limits ? Would such density create some sort of coloured noise ?
All your ideas, suggestions and explorations will be very welcome
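On the choir example: for N roughly equal, incoherent sources, levels add as 10*log10(N), so even 121,440 voices only raise the level by about 51 dB over a single voice. This is the usual back-of-the-envelope calculation, assuming equal source levels and ignoring room acoustics (the 70 dB single-voice level below is an assumed figure):

```python
import math

def combined_level_db(single_source_db, n_sources):
    """Level of n equal, incoherent sources: L = L1 + 10*log10(n)."""
    return single_source_db + 10 * math.log10(n_sources)

# An assumed 70 dB single voice, summed over a 121,440-person choir:
print(round(combined_level_db(70.0, 121440), 1))  # 120.8
```

The practical ceilings are then the clipping point of the recording chain and the pain threshold of hearing, rather than any intrinsic "density" limit of the medium; dense uncorrelated layers do tend toward a noise-like spectrum.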
Synthetic data generation is an interesting area of research, but I have difficulty finding articles and textbooks about the topic. I would like an overview of definitions and frameworks for automatic synthetic data generation in any area, particularly in sound analysis.
I plan to examine the effects of mixed sounds on human mood. I have three types of sound: nature sound, traffic noise, and mechanical noise. I want to mix them to produce three stimuli: nature-sound-dominant (mixed with traffic and mechanical noise at a lower level), traffic-noise-dominant, and mechanical-noise-dominant.
I have two proposed ways to produce these three stimuli; let me use the nature-sound-dominant case as an example:
1. Nature sound (keeping the original sound level measured at a point near the source) + traffic noise (at only 10% of its original level, say 80 dB × 10%) + mechanical noise (at only 10% of its original level, say 85 dB × 10%).
2. Nature sound (keeping the original level measured near the source) + traffic noise (original level minus the dB decrease due to distance) + mechanical noise (likewise). The dB decrease can be calculated with the formula in the link: http://www.sengpielaudio.com/calculator-squarelaw.htm
Which one is better, and why? Or is neither good; do you have a better suggestion? Could you recommend any literature as a reference? Many thanks!
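On the arithmetic behind the two options: level offsets should be applied as amplitude scale factors of 10^(ΔdB/20) rather than as percentages of a dB value, and the distance-based drop for a free-field point source is 20*log10(r2/r1). A sketch with assumed distances (the 1 m and 10 m below are placeholders):

```python
import math

def gain_from_db(delta_db):
    """Amplitude scale factor for a level change of delta_db."""
    return 10 ** (delta_db / 20.0)

def distance_attenuation_db(r_near_m, r_far_m):
    """Free-field point source: dB drop from r_near to r_far (inverse square law)."""
    return 20 * math.log10(r_far_m / r_near_m)

# Moving a source from an assumed 1 m to 10 m drops its level by 20 dB,
# which corresponds to scaling the waveform amplitude by a factor of 0.1:
drop = distance_attenuation_db(1.0, 10.0)
print(round(drop, 1), round(gain_from_db(-drop), 3))  # 20.0 0.1
```

Note that "10% of the original volume", if interpreted as a 10% amplitude scale, is exactly a 20 dB reduction, so the two options can be made equivalent by choosing the distances accordingly.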
My name is Rouland, and I work at Pakuan University, Department of Biology. I am now studying frogs. As a beginner, my biggest problem is identifying frog sounds, so my idea is to analyse every type of frog sound and group them based on the similarity of the sound waves, so that I can tell which waveforms belong to the same frogs. The hard part is finding software to analyse the sounds. What kind of software would suit my research? Thanks!
Hi all. It is known that low-spontaneous-rate fibres code high-intensity information and high-spontaneous-rate fibres code low-intensity information.
This is known from presenting low-intensity or high-intensity sounds alone. What happens if a certain sound contains a mixture of low- and high-intensity information? Would simultaneous low- and high-intensity mixtures be differentially coded by the low- and high-spontaneous-rate fibres?
Awaiting your valuable input.
Would we expect a high frequency adapter to have any effect on a low frequency target in a horizontal localization task? I mean in comparison to a low frequency adapter and a low frequency target. Does anybody know of any studies on these types of interaction? Sometimes a clue close enough to this can point me in the right direction if I follow the breadcrumbs.
Thank you very much in advance
I am analysing animated e-books from a multimodal perspective but do not have any background in sound/music or cinema, so I am struggling with the basics of describing the sounds. Is there any good primer on the types of sound effects/music and their roles in audio-visual texts?
I seek advice on dealing with background noise which overlaps in frequency with the signal of interest. The software I use is Avisoft SAS-Lab Pro. I have used an eraser tool (under strict predetermined criteria). Is there a more suitable method in cases where the background noise is of a similar frequency to the signal?
Recordings are made at a sample rate of 48kHz (16-bit) and resampled to 22.05kHz. Spectrogram parameters: 256 FFT, Hamming window, 100% frame size, 50% overlap. Resolution: 86Hz and 2.9ms.
What is the meaning of these images?
These images are the result of processing in Matlab, using FFT syntax to produce a spectrogram (specgram).
Thank you for your attention.
Regards
Lubis
![](profile/Muhammad-Zainuddin-Lubis/post/How_with_this_picture_This_is_result_from_whistle_dolphins_vocalization_wav/attachment/59d623c26cda7b8083a1e947/AS%3A348146573037568%401460016019711/image/question.jpg)
I'm testing the sound insulation properties of plywood panels.
Kundt's tube can be used for determining transmission loss (TL), but it only provides indicative results, and a specific standard is missing (is this correct?). Is it possible to establish a correlation, even an approximate one, between Kundt's tube and ISO 140 results?
I'm working on the field of emotion perception and recognition through 3D sound systems based on headphones (like Dolby Auro or DTS HeadphoneX), and I'm doing a search about previous studies on this field, as well as those related to other multichannel sound systems not necessarily based on headphones (5.1 discrete speakers systems, for example).
I'm particularly interested in studies that have used real stimuli (music, film scenes, environmental sound) rather than isolated sounds, but any contribution is welcome.
Dear All,
Dielectric measurements are an important means of studying the dynamic properties (capacitance, conductance, permittivity and loss factor) of a dielectric, and they can be performed over a wide frequency range. My question is as follows:
What is the advantage of performing dielectric measurements in the frequency range from 42 Hz to 5 MHz (Audio- Radio frequency) in terms of its application point of view? I would be grateful to you if you could provide some references.
Thanks in advance.
I have a WAV file and I would like to know how to measure its tempo in beats per minute (bpm).
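One common approach, sketched here under simplifying assumptions (a clean onset envelope and a steady tempo), is to autocorrelate an onset-strength envelope computed from the WAV file and convert the best lag to bpm. The envelope below is a synthetic click track standing in for a real file:

```python
def estimate_bpm(envelope, fs_env, bpm_range=(60, 180)):
    """Estimate tempo by autocorrelating an onset-strength envelope.

    envelope: one value per frame (e.g. per-frame energy of the audio),
    fs_env: envelope frames per second.
    """
    n = len(envelope)
    mean = sum(envelope) / n
    x = [v - mean for v in envelope]

    lag_min = int(fs_env * 60 / bpm_range[1])   # fastest tempo -> shortest lag
    lag_max = int(fs_env * 60 / bpm_range[0])   # slowest tempo -> longest lag

    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, n - 1) + 1):
        corr = sum(x[i] * x[i - lag] for i in range(lag, n))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return 60.0 * fs_env / best_lag

# Synthetic click track at 120 bpm: one pulse every 0.5 s at 100 frames/s
env = [1.0 if i % 50 == 0 else 0.0 for i in range(800)]
print(round(estimate_bpm(env, 100)))  # 120
```

For real recordings, ready-made beat trackers (e.g. the `librosa.beat.beat_track` function in the librosa library) handle the envelope extraction and octave ambiguities far more robustly than this sketch.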
It would be interesting to gain experience from people who have applied a hydrophone to the measurement of sound in air. Initial experiments seem to indicate that a hydrophone performs differently in air compared to water, probably due to the greater mismatch between the impedance of the transmission medium and that of the sensor.
I need information about the sound pressure level of an OM457LA 220 kW diesel engine, emitted from its exhaust system at rated speed. Information on any similar engine could also help me. I hope somebody can help.
I want to get a function f(t) that returns the amplitude of the sound at time t. I've already converted the mp3 to a list of hexadecimal numbers. What should I do next?
Zyt
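Assuming the hexadecimal numbers are decoded PCM samples at a known sample rate fs (that decoding step matters: mp3 bytes themselves are compressed, not samples), the rest is just indexing: sample i corresponds to time i/fs, so f(t) can be built by, for example, linear interpolation between neighbouring samples. A sketch:

```python
def make_amplitude_function(samples, fs_hz):
    """Return f(t): linearly interpolated amplitude at time t (seconds).

    Assumes `samples` are decoded PCM values at sample rate fs_hz.
    """
    def f(t):
        pos = t * fs_hz                    # fractional sample index
        i = int(pos)
        if i < 0 or i >= len(samples) - 1:
            return 0.0                     # outside the recording
        frac = pos - i
        return samples[i] * (1 - frac) + samples[i + 1] * frac
    return f

f = make_amplitude_function([0.0, 1.0, 0.0, -1.0], fs_hz=4)
print(f(0.125))   # halfway between samples 0 and 1 -> 0.5
```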
I do research analyzing the richness of vehicle sound. I already use a semantic differential rating scale (poor to rich), and I want to relate the subjective scale to physical properties of the sound.
I am interested in developing the sound quality of car horns, and I want to perform a dissimilarity task on it.
Thank you
Suppose I have y = conv(x, h) + n = s + n, where h is a channel impulse response of length L, unnormalized. How do I define the SNR? Is it SNR = power of x / power of n, or SNR = power of s / power of n?
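Both conventions appear in the literature; with an unnormalized h the two differ by the channel gain, and the received SNR = P_s/P_n is the one that matches what the receiver actually sees (the transmit-referenced P_x/P_n is only meaningful once h is normalized to unit energy). A small numeric sketch of the difference, using toy signals:

```python
def power(sig):
    """Average power of a sequence."""
    return sum(v * v for v in sig) / len(sig)

def convolve(x, h):
    """Full linear convolution of two sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

x = [1.0, -1.0, 1.0, -1.0]
h = [0.5, 0.25]               # unnormalized channel: gain != 1
n = [0.1, -0.1, 0.1, -0.1, 0.1]
s = convolve(x, h)

snr_tx = power(x) / power(n)  # transmit-referenced definition
snr_rx = power(s) / power(n)  # receive-referenced definition
print(round(snr_tx, 2), round(snr_rx, 2))  # 100.0 10.0
```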
For instance, I'm looking for research on audio-game testing.
The tank is stratified and 3 meters long, and I'm using a sound wave at 250 kHz.
I need some suggestions based on the experience of soundscape researchers. All the city parks I would like to analyze are small and located on busy streets. Thanks.
I'm searching for spectral data about natural background noise in quiet environments either with and without vocalizing animals.
In a complex sound I need to determine the fundamental frequency, but the fast Fourier transform picks up other frequencies that differ only slightly in value from one another. Can anybody help me? Thank you.
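One standard alternative to picking the tallest FFT bin is autocorrelation: it estimates the repetition period directly, so it still finds the fundamental when a harmonic carries more energy than the fundamental itself. A sketch on a synthetic signal whose second harmonic is deliberately stronger than its fundamental:

```python
import math

def fundamental_autocorr(x, fs_hz, f_min=50.0, f_max=1000.0):
    """Estimate F0 from the strongest autocorrelation lag in [1/f_max, 1/f_min]."""
    n = len(x)
    lag_min = int(fs_hz / f_max)
    lag_max = int(fs_hz / f_min)
    best_lag, best = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, n - 1) + 1):
        r = sum(x[i] * x[i - lag] for i in range(lag, n))
        if r > best:
            best, best_lag = r, lag
    return fs_hz / best_lag

fs = 8000
# 200 Hz fundamental with a stronger 2nd harmonic at 400 Hz
sig = [0.4 * math.sin(2 * math.pi * 200 * t / fs)
       + 1.0 * math.sin(2 * math.pi * 400 * t / fs) for t in range(1600)]
print(round(fundamental_autocorr(sig, fs)))  # 200
```

Restricting the lag search to a plausible F0 range, as above, is what keeps the method from locking onto a harmonic.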
I want to build a highpass filter with a 0.1 Hz cut-off frequency, to let higher frequencies pass through and block lower frequencies, for recording EEG signals. I have designed it, but the big problem is that the frequencies we work with are around 0.1 Hz, i.e. close to the pole, so we get a relatively large delay, and I want less than 50 ms of delay. If anyone knows what I should do, please help me.
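For what it's worth, a first-order IIR highpass (the discrete version of an analog RC highpass) is about the cheapest option in delay terms; whether its group delay at your frequencies of interest fits the 50 ms budget still has to be checked for your setup. A minimal sketch with an assumed 250 Hz EEG sample rate:

```python
import math

def highpass_rc(x, fc_hz, fs_hz):
    """First-order IIR highpass (discrete RC filter) with cutoff fc_hz."""
    a = 1.0 / (1.0 + 2.0 * math.pi * fc_hz / fs_hz)
    y = [0.0] * len(x)
    prev_x = x[0]
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - prev_x)
        prev_x = x[n]
    return y

# A step (DC shift) passes through at first, then decays toward zero,
# which is exactly the drift rejection wanted for EEG:
step = [0.0] + [1.0] * 1999
out = highpass_rc(step, fc_hz=0.1, fs_hz=250)
print(round(out[1], 3), abs(out[-1]) < 0.05)  # 0.997 True
```

The long settling time (several seconds for a 0.1 Hz cutoff) is a property of the cutoff itself, not of the implementation; the group delay in the passband well above 0.1 Hz is far smaller than the settling time and is what matters for the 50 ms requirement.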
I searched for it on Google and in a couple of scientific article search engines, but to no avail. The problem is that the impedance is usually given without specifying the type of glass it has been attributed to. Since there are at least 10,000 types of glass on the market, it is difficult to decide which one is the right one for me. Typically, a standard value of 1.5 is given for glass (the kind used for windows and so on). I am looking for a glass with Z = 1 (lower impedance). It could be something with a density lower than that of standard glass (about 3.5), or something very special. I am looking forward to your suggestions.
I am seeking good answers on the advantages of using the Delany-Bazley empirical model for predicting sound absorption, compared to the models proposed by Allard-Johnson, Miki, or Wilson. I know it is a simple and fast method, but I am looking for a stronger reason to justify computing my results with it, given that it is an older method. Attaching some proven research publications would be most helpful for me; a comparison among the models would also help. Thank you all.
I'm capturing video of mosquitoes feeding on nestling birds in an attempt to quantify biting pressure (a sample video is attached), but I have no way of determining which species of mosquitoes are attempting to feed.
What kind of equipment do I need in the field and what kind of software is needed in the lab to determine the species present?
How do I normalize each recorded sentence to a root-mean-square (RMS) level of 70 dB SPL (i.e., recordings normalized for RMS amplitude to 70 dB SPL)? Should I use Praat, Adobe Audition, or Matlab, and how is it done? Thanks.
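The digital half of this is the same whichever tool is used: scale each sentence so its RMS matches one common target value, then calibrate the playback chain with a sound level meter so that this target reproduces at 70 dB SPL. A sketch of the scaling step (the target RMS of 0.1 below is an arbitrary placeholder, not a value tied to 70 dB SPL):

```python
import math

def normalize_rms(samples, target_rms):
    """Scale a signal so its RMS equals target_rms (in digital full-scale units)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    gain = target_rms / rms
    return [s * gain for s in samples]

sig = [0.2, -0.2, 0.2, -0.2]          # RMS = 0.2
out = normalize_rms(sig, 0.1)
rms_out = math.sqrt(sum(s * s for s in out) / len(out))
print(round(rms_out, 6))  # 0.1
```

In Praat the equivalent operation is "Scale intensity..." on a Sound object; in Audition it is amplitude normalization to a target RMS value.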
I'm searching for an acoustic propagation model to validate the sound absorption coefficient values obtained from the two-microphone impedance tube method, other than the Delany-Bazley and Johnson-Champoux-Allard models. Is there anything more recent? I also want to know exactly which material properties are required for the prediction. Thank you.
I'm seeking the exact durability property tests for an acoustic material made from natural fiber. Does anybody know the exact tests, with the corresponding ASTM standards? Thank you all.
What properties of a perceived sound cause us to attribute it to a human vs. non human/environmental source?
I am working with an audio sound profile. I want to analyse the frequency content of the sound, and I am using the WavePad sound-editing software for the frequency analysis. I have generated a frequency-versus-time graph, but the sound shows multiple frequencies at a time, so I am not able to generate a clean graph or to find the frequency range of the sound. Can you tell me how I can analyse these frequencies?
Can anybody help me find good acoustic performance estimation/prediction software? I just need to model a multilayer and obtain the sound absorption value (alpha); creating a model room and finding the reverberation time are not necessary. If the software is open source and free to use, all the better. Thank you.