Acoustics and Acoustic Engineering - Science topic

Explore the latest questions and answers in Acoustics and Acoustic Engineering, and find Acoustics and Acoustic Engineering experts.
Questions related to Acoustics and Acoustic Engineering
  • asked a question related to Acoustics and Acoustic Engineering
Question
5 answers
I already made the experimental setup of impedance tube for the same construction. Do I have to validate it?
Relevant answer
Answer
Did you do it?
  • asked a question related to Acoustics and Acoustic Engineering
Question
5 answers
I aim to analyse the permeability of aluminum to liquid gallium by measuring its velocity at individual grain boundaries using ultrasound. An ultrasound-based characterization technique is motivated by its higher temporal resolution.
But open to more suggestions regarding the problem statement.
Relevant answer
Answer
Please read :
An atomistically informed energy-based theory of environmentally assisted failure
S Ganesan, V Sundararaghavan - Corrosion Reviews, 2015 - De Gruyter
  • asked a question related to Acoustics and Acoustic Engineering
Question
7 answers
Nowadays, I'm working on the sound absorption of porous materials. Experimentally I have found that by increasing the thickness of a porous material, the frequency at which maximum absorption occurs decreases.
I mean that maximum absorption occurs at a lower frequency. I want to know: what is the reason for this behavior?
Relevant answer
Answer
The relation between frequency and thickness of a porous absorber is a rectangular hyperbola, which can be expressed in generic form as follows.
f * t^m = b; where m is the power term of the thickness and b is a constant.
Thus, with an increase of thickness, the frequency of the sound wave tackled by the porous absorber decreases. Put another way, the thicker the material, the better its noise control performance.
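Under the hyperbolic model stated above, the peak-absorption frequency can be sketched numerically. The constants b and m below are hypothetical placeholders; in practice they come from fitting measured impedance-tube data:

```python
def peak_absorption_frequency(thickness_m, b=50.0, m=1.0):
    """Peak-absorption frequency under the hyperbolic model f * t**m = b.

    b and m are illustrative fitting constants (hypothetical values),
    not measured data for any particular material.
    """
    return b / thickness_m ** m

# With m = 1, doubling the thickness halves the peak frequency.
for t in (0.01, 0.02, 0.04):  # thickness in metres
    print(t, peak_absorption_frequency(t))
```

This reproduces the observed trend: thicker samples push the absorption peak toward lower frequencies.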
  • asked a question related to Acoustics and Acoustic Engineering
Question
13 answers
Elastic waves in solids can be coupled with both the electrical and magnetic potentials; the speed of light is five orders of magnitude larger than the speed of elastic waves.
Relevant answer
Answer
Indeed, Dear Prof. Aleksey Anatolievich Zakharenko,
The whole derivation of the equations is complicated. The paper by Prof. Kontorovich was pointed out to me by Prof. M. Issakovich in the 90s, but it seems to be the most complete system of lattice equations with non-local dynamical effects. I cannot check the paper, but please see the discussion on p. 1129 of the following:
http://www.jetp.ac.ru/cgi-bin/dn/e_018_04_1125.pdf
eq. 3.10 and the description before it. I quote: "...It is natural to represent the total free energy of the electrons, lattice, and the electromagnetic field in the form..."
  • asked a question related to Acoustics and Acoustic Engineering
Question
7 answers
Hi everyone,
I am implementing an acoustic model in FLUENT. The geometry is 2D. Using the FW-H model, I am facing difficulty understanding the following (I have already read the FLUENT manuals and one tutorial):
1) It is very important to define the Source Correlation Length accurately in a 2D geometry. I do not know this parameter or how to calculate it. My geometry is very simple, just a 10 x 20 mm rectangle.
2) What should the source zone and type be? In the 2D-cylinder tutorial, I read that it is a cylindrical wall. My geometry has a 0.5 mm inlet at the bottom. Is that the source zone, and will the type be VELOCITY INLET?
3) Where should the receiver positions be? Do I have to monitor the time history as gas passes through the inlet? In one paper I read, there was a single monitoring point located away from the inlet, which I could not understand.
4) What is the FW-H integral surface? I think the portion under investigation, which has a very fine mesh, is called the FW-H integral surface, but I am not sure about it.
Any advice, guidelines or tutorials will be highly appreciated. Thanks in advance.
Relevant answer
Answer
If the case is solved using the pressure-based solver, the EOS and acoustic features can be defined for the materials, but I'm not sure whether that is correct for multiphase flow. As we know, Wallis's sound speed should be used for homogeneous bubbly flow.
  • asked a question related to Acoustics and Acoustic Engineering
Question
15 answers
Does anyone have any idea how to harvest acoustic energy from a line sound source? The line source is small, maybe in the centimetre range, and the sound pressure is very low, around a micropascal I guess.
Relevant answer
Answer
Dear Sheng,
welcome,
I think you have already solved the problem. However, I would like to suggest a solution. There are two types of electrical vibration harvesters: the electrostatic MEMS converter and the piezoelectric transducer. You can use four flat converters to collect the sound from all sides. You can also put the line source at the focus of a parabolic reflector and receive the reflected sound with one flat converter. The second solution may be better, as it uses only one flat converter.
Best wishes
  • asked a question related to Acoustics and Acoustic Engineering
Question
3 answers
How many interfacial SH-waves can propagate in the PEMs?
Relevant answer
Answer
My pleasure Dear Prof Aleksey. Regards
  • asked a question related to Acoustics and Acoustic Engineering
Question
10 answers
Hi, I need to model a PA system for a train station with EASE.
I have this doubt: while a loudspeaker makes an announcement, does the sound from it contain all frequencies, or is it at one particular frequency?
I ask because EASE divides the electrical power equally among all frequencies.
If there are 21 frequency bands and the electrical power provided is 0.75 W, then the power per band is 0.75/21. We use pink noise.
Since the power gets distributed, we are not getting the desired SPL.
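The arithmetic in the question can be sanity-checked: splitting 0.75 W evenly over 21 bands lowers each band's power level by 10*log10(21), roughly 13.2 dB, but the energetic sum over all bands still recovers the full-power level. A minimal sketch (band count and power taken from the question; 1 pW is the standard sound-power reference):

```python
import math

total_power_w = 0.75   # electrical power fed to the loudspeaker model
n_bands = 21           # number of frequency bands in the EASE model

band_power_w = total_power_w / n_bands

# Power level of a single band, re 1 pW
lw_band = 10 * math.log10(band_power_w / 1e-12)
# Energetic sum over all 21 bands
lw_total = 10 * math.log10(n_bands * 10 ** (lw_band / 10))
# Level computed directly from the full power
lw_direct = 10 * math.log10(total_power_w / 1e-12)

print(round(lw_band, 1), round(lw_total, 1), round(lw_direct, 1))
```

So no energy is "lost" in the per-band split; a single band simply sits about 13 dB below the broadband level.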
Relevant answer
Answer
As Fabrizio has correctly mentioned above, you need to consider just the frequency range of the human voice in your application, which is much more limited than the hearing frequency range. In addition to the acoustical response of the room you want to model (a train station), an important detail is to consider the background noise as accurately as possible. This will also affect the total sound power your PA system requires in order to achieve an appropriate intelligibility level.
Best
  • asked a question related to Acoustics and Acoustic Engineering
Question
4 answers
The Sound Power Level of a speaker is computed from the sound power of the speaker.
The sound power of a speaker is the total sound energy it emits per unit time.
How is this sound power related to the electrical power supplied to the speaker? For example, if I supply 6 W of electrical power to the speaker, will the sound power from the speaker be 6 W?
Relevant answer
Answer
Unqualified comments:
Moving a mass (a collection of matter) in a directional manner against field effects (say gravity, possibly friction and so on) is work - a forced-displacement sort of effect.
The rate of doing work is power (watts), a time rate of energy expenditure. Describing power says nothing about the underlying phenomenon.
The most important thing to understand, find or reconcile is:
a proxy for the displacements/fields,
a proxy for the efforts,
and of course, the ratio of expended to recovered power offers clues about the nature of the losses (mostly thermal), the proverbial efficiency, etc.
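As a rough illustration of the question above: only a small fraction of the electrical input is radiated as sound. A minimal sketch, assuming the ~0.5 % loudspeaker efficiency quoted elsewhere in this thread (the actual figure varies widely from driver to driver):

```python
import math

def sound_power_level_db(electric_power_w, efficiency=0.005):
    """Sound power level re 1 pW, assuming an electro-acoustic
    efficiency of about 0.5 % (an assumed typical value)."""
    acoustic_power_w = electric_power_w * efficiency
    return 10 * math.log10(acoustic_power_w / 1e-12)

# 6 W of electrical input radiates only ~0.03 W of sound
print(round(sound_power_level_db(6.0), 1))
```

So a 6 W electrical input does not give 6 W of sound power; under this assumption it gives about 30 mW.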
  • asked a question related to Acoustics and Acoustic Engineering
Question
7 answers
I am currently working on anthropogenic effects on the vocalisation of howler monkeys in the urban environment.
I record the vocalisations with a TASCAM DR-07MKii and I want to extract acoustic measurements such as frequency, pitch, and the rates and lengths of vocalisations.
Which program is easiest to use for a beginner to extract these variables?
I currently have: Sound Analysis Pro, Praat and Audacity.
All tips and additional information are very welcome
Relevant answer
Answer
I agree that Audacity is of no use-- it's primarily an editing program, not for measurement. Praat and SAP would both probably work for the simple measurements you described. I believe Praat (which was developed for work on speech) has some tools for measuring speech-related features such as formants, which might be useful for nonhuman primate vocalizations. Raven Pro (http://ravensoundsoftware.com/) has (I'm told) an easier learning curve than Praat and includes basic time, frequency, and relative power measurements (though no formant measurements). There's also a set of freely available tutorial videos available (see "Training" on that website).
  • asked a question related to Acoustics and Acoustic Engineering
Question
3 answers
We have multiple arrays of microphones, each array well separated from the others on the ground. Our task is to track an aircraft that happens to pass by, but the terrain is such that there is a lot of reverberation. Now:
1) Which prefiltering technique is best for removing reverberation effects?
2) Which weightings in GCC (Generalised Cross-Correlation) are most robust against reverberation, while at the same time not increasing the error by much?
Thank you very much.
Lay Jain,
Massachusetts Institute of Technology
Relevant answer
Answer
Maybe "reverberation" might be considered simply as "multipath" for aircraft noise in outdoor environments, in which case you might handle it explicitly, e.g. using adaptive beamforming algorithms. Acoustic tracking of low-flying helicopters is a conventional problem that has been solved satisfactorily for many years in the open field. I know some systems based on likelihood maximisation; it takes a fraction of a second to obtain iteratively the most likely position, and then your tracking algorithm has to reject aberrant dots. A key issue is how far your aircraft's noise and course are stationary or not, as rightly commented by Ronald! Of course, if you deal with supersonic low-flying jets, you will find the Mach wave direction, not directly the plane's heading... (An issue solved in acoustic sniper-detection systems by tabulating the speed, hence the Mach cone angle, for a library of ammunitions and associated signatures.)
  • asked a question related to Acoustics and Acoustic Engineering
Question
3 answers
Hello Research Gate Community!
I am a first-year Entrepreneurship Ph.D. student at Baylor and I am conducting a study on academic scientists' intentions to commercialize their research.
While a lot has been done on the topic of commercializing research, almost no one has examined it from the perspective of the scientist. Your perceptions of research commercialization matter and should be considered by university policy makers when designing the process.
I don’t think anyone likes long surveys and I know you have much more important things to do, so I designed the survey to take less than 10 minutes. If you are willing to take the survey, you can access it through the link below:
Thank you very much for the time and consideration!
Best,
Austin
Relevant answer
Answer
P.K. Karmakar Thank you for the encouragement! Before offering my opinion I will offer a disclaimer-- the deepest understanding I have of theoretical physics research comes from a recent reading of Neil deGrasse Tyson's "Accessory to War" (while not a technical read, I highly recommend it). The book offers an interesting insight: while theoretical physicists tend to be opposed to war, their research outcomes are often first utilized by the military before reaching civilian markets. To speculate on Tyson's offering: I think generally, direct commercialization is challenging. While the outcomes of theoretical physics research have a multitude of applications (in warfare and civilian markets), I imagine large-scale development is initially very costly (due to being highly technical in nature). Military budgets, being extensive and widely unchecked, allow for far more spending on prototyping/developing these technologies than what civilian industry can achieve. I suspect many outcomes of theoretical physics research undergo initial development in a military setting, later transitioning to civilian markets after this process has reduced development costs for businesses. That said, I am sure there are exceptions-- I am just unfamiliar with them. I hope my answer is helpful. If you know of any cases of outcomes going from lab to civilian markets, please do not hesitate to share. I would love to learn more about direct commercialization of theoretical physics research! Thanks again for your comments.
  • asked a question related to Acoustics and Acoustic Engineering
Question
3 answers
Many applications in numerical acoustics occur in exterior domains. The advantage of benchmark problems is a closed-form solution (e.g. circumferentially harmonic radiation from an infinite cylinder, or radiation from a portion of the infinite cylinder). Different models such as perfectly matched layers (PML) and absorbing boundary conditions (ABCs, like Engquist-Majda or Bayliss-Turkel) are used in the finite element method to approximate Sommerfeld's radiation condition. I am interested in a publication where these approaches are compared.
  • asked a question related to Acoustics and Acoustic Engineering
Question
3 answers
I'm looking for a way to determine the points at which to place a sound level meter for measuring the noise of sources in workplaces (like a chiller or compressor). Is there any standard method that completely explains the measurement procedure for noise-source reduction and control, i.e. for controlling personnel exposure?
Especially regarding the right direction of the microphone.
Relevant answer
Answer
If it is to characterize the machines as noise sources, e.g. to redesign the workspace, you can still operate in a normal workshop environment, but you have to perform several measurements and possibly apply corrections; see ISO 11200/01/02/03/04, e.g. https://www.iso.org/obp/ui/#iso:std:iso:11203:ed-1:v1:en
  • asked a question related to Acoustics and Acoustic Engineering
Question
4 answers
Hi everyone.
I need a database (dataset) for the evaluation of my article in the audio steganography field; for example echo hiding, SS (spread spectrum) and phase coding.
If anyone knows of a related article for results comparison, that would be a great favor.
Thanks a lot for your help.
Ehsan
Relevant answer
Answer
It's done. Thanks for your attention.
  • asked a question related to Acoustics and Acoustic Engineering
Question
7 answers
Hello,
I have a problem with a type of singular integral like the attached image, may you please share your experience with me?
Sincerely,
Hamed
Relevant answer
Answer
Sometimes the substitution x = arctan(t) or x = arctan(t/2) helps to transform it into an integral from -infinity to +infinity. Then you can go on with the residue theorem as suggested by Nikolay. Sometimes polar coordinates may help. Or you can try it numerically using the adaptive Simpson's method.
In any case, a numerical method may be used to check your result.
Best regards, Frank
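For the numerical check suggested above, the adaptive Simpson's method can be sketched as follows (a standard textbook scheme, shown here on a smooth test integrand; a singular integrand would first need the kind of substitution discussed above):

```python
import math

def _simpson(f, a, b):
    """Plain Simpson's rule on [a, b]."""
    c = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

def adaptive_simpson(f, a, b, tol=1e-10):
    """Recursive adaptive Simpson quadrature with Richardson correction."""
    whole = _simpson(f, a, b)

    def recurse(a, b, whole, tol):
        c = 0.5 * (a + b)
        left, right = _simpson(f, a, c), _simpson(f, c, b)
        if abs(left + right - whole) < 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return recurse(a, c, left, tol / 2) + recurse(c, b, right, tol / 2)

    return recurse(a, b, whole, tol)

# Check against a known value: integral of sin(x) over [0, pi] is 2
print(adaptive_simpson(math.sin, 0.0, math.pi))
```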
  • asked a question related to Acoustics and Acoustic Engineering
Question
2 answers
We linearise the viscoelastic material under the assumption that the amplitude of the disturbance for an acoustic wave is very small.
Can anyone share the solved mass-balance and momentum-balance equations for a viscoelastic material, and the wave equation derived from them?
Thank you.
Relevant answer
Answer
I found this paper. Can anyone explain equation 14, and what q corresponds to?
  • asked a question related to Acoustics and Acoustic Engineering
Question
3 answers
Hello All,
I am presently creating a transfer matrix to represent wave propagation in a multi-layered plate. I intend to use this transfer matrix for elastic materials.
On what grounds can rubber be considered a linear material when studying its acoustic properties?
Thank you.
Relevant answer
Answer
Rubber-like materials have been modelled as nonlinear hyperelastic materials since the first experimental results on natural gum reported by Rivlin & Saunders (1951).
See, for example (the current literature on this topic is indeed very extensive):
[1] Treloar LRG, Hopkins HG, Rivlin RS, Ball JM. 1976. The mechanics of rubber elasticity [and discussions], Proceedings of the Royal Society of London A 351, 301-330.
or the recent paper
[2] Destrade M, Saccomandi G, Sgura I. 2017. Methodical fitting for mathematical models of rubberlike materials, Proceedings of the Royal Society A 473, 20160811.
or the review article
[3] Steinmann P, Hossain M, Possart G. 2012. Hyperelastic models for rubber-like materials: consistent tangent operators and suitability for Treloar's data, Archive of Applied Mechanics 82, 1183-1217.
As all the stress-strain (or force-displacement) curves show in these papers, rubber can be treated as a linear-elastic (hyperelastic) material only under small (infinitesimal) elastic deformations.
  • asked a question related to Acoustics and Acoustic Engineering
Question
3 answers
How can I get (download) an active sonar data set for sonar target classification?
Relevant answer
Answer
There is a useful data set entitled "Sound of Propeller":
M. Khishe, M. Mosavi and B. Safarpoor, "Sound of Propeller", Mendeley Data, v1, 2017.
  • asked a question related to Acoustics and Acoustic Engineering
Question
3 answers
I am looking for the following properties of thermoplastic elastomer (TPE)-like materials:
1. Flow resistivity (Pa.s/m^2)
2. Porosity
3. Tortuosity factor
I would also like to hear from experts about alternative ways to simulate the sound transmission loss of acoustic plugs made of TPEs.
Best Regards,
David
Relevant answer
Answer
Hi David
I am guessing that you can use micro models to design at least some of the properties. That said, you would need physical measurement to verify the end result or, at least, measurement would be a prudent step to secure the end result.
Have fun
Claes
  • asked a question related to Acoustics and Acoustic Engineering
Question
8 answers
I am trying to build several cases for solver and CAA-library code validation: a breathing sphere, a trembling sphere and a baffled piston.
In all cases we can see a phase difference between the pressure signals. I explain it by inertia, as in a mass-spring system. Is that correct?
How can I modify the wave-equation results or my CFD solution to remove this kind of discrepancy? I need to remove it because I am trying to validate very different solvers (for example with mesh motion) and already observe reflections from the boundaries (so I will then need to test some types of non-reflecting boundary conditions), and I suppose this phase difference could make further analysis harder.
Thank you for any help ("to read" recommendations are also appreciated).
The last figure is just for reference (baffled piston, 2D axisymmetric case).
Relevant answer
Answer
You can get dispersion error from your numerical scheme. Try decreasing your mesh spacing to see if the dispersion reduces. You could try using dispersion-relation-preserving (DRP) schemes. Another reason for inaccuracy is your boundary condition. When comparing against the analytical solution, do it at an instant when the wave has not reached the boundary, so you can isolate the effect of boundary conditions. 
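The dispersion error mentioned above can be quantified for a simple case: a second-order central difference has the modified wavenumber sin(k*dx)/dx, so its numerical phase speed lags the true one, and refining the mesh (more points per wavelength) shrinks the error. A minimal sketch:

```python
import numpy as np

def phase_speed_error(points_per_wavelength):
    """Relative phase-speed error of the 2nd-order central difference,
    whose modified wavenumber is sin(k*dx)/dx instead of k."""
    kdx = 2.0 * np.pi / points_per_wavelength   # k * dx
    c_num_over_c = np.sin(kdx) / kdx            # numerical / exact speed
    return abs(1.0 - c_num_over_c)

# Error shrinks roughly quadratically as the mesh is refined
for ppw in (8, 16, 32):
    print(ppw, phase_speed_error(ppw))
```

This is why a travelling wave in such a scheme arrives late relative to the analytical solution, which shows up as exactly the kind of phase difference described in the question.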
  • asked a question related to Acoustics and Acoustic Engineering
Question
14 answers
Acoustic monitoring to be utilized as a solution in analysing background noise.
Relevant answer
Answer
B&K 2250 or 2270 
  • asked a question related to Acoustics and Acoustic Engineering
Question
5 answers
The source SL is the sound pressure level expressed in dB referenced to 1 m away from the acoustic center of the sound source.
What determines the SL of a single organ pipe?
Can SL be related to something like the acoustic volume velocity or volume displacement? Is there a mathematical relationship between SL and one or more generalized source-characterizing parameters like these?
Can the SL for a given pipe be increased or decreased without making physical modifications to the pipe? How is SL then changed?
Is there a good reference dealing with the SL of organ pipes? I cannot find one.
Thanks!
Ronald
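One way to connect SL to a generalized source parameter, as the question asks, is the ideal monopole model, for which the far-field pressure amplitude is |p| = rho*omega*Q / (4*pi*r). The sketch below uses that idealization; the volume velocity and frequency are hypothetical, and a real organ pipe is not a simple monopole:

```python
import math

def monopole_sl_db(volume_velocity_m3s, freq_hz,
                   rho=1.21, r=1.0, p_ref=20e-6):
    """SL (dB re 20 uPa at r metres) of an ideal monopole in air.

    |p| = rho * omega * Q / (4 * pi * r) is the free-field pressure
    amplitude for volume velocity Q; rms value is amplitude / sqrt(2).
    """
    omega = 2.0 * math.pi * freq_hz
    p_amp = rho * omega * volume_velocity_m3s / (4.0 * math.pi * r)
    p_rms = p_amp / math.sqrt(2.0)
    return 20.0 * math.log10(p_rms / p_ref)

# Hypothetical organ-pipe-like source: Q = 1e-4 m^3/s at 440 Hz
print(round(monopole_sl_db(1e-4, 440.0), 1))
```

Within this model, SL rises with both volume velocity and frequency (6 dB per doubling of either), which suggests one answer to the question: SL can indeed be tied to volume velocity, at least for a compact source.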
Relevant answer
Answer
Hi Ronald
As a side note, the efficiency of a loudspeaker in creating sound is usually about 0.5%, i.e. 5E-3. As a reference, most naturally occurring processes land in the ballpark of 1E-5, so 5E-3 is not a bad efficiency.
Valves are reported to create sound inside pipes with an efficiency varying from 1E-3 (a noisy valve at a poor point of operation) to 1E-8 (a very quiet valve).
However, the radiation factor tells us that resonant sound radiation has 100% efficiency in radiating sound (discounting mechanical losses).
For radiation of structure-borne sound this happens when the bending wave speed exceeds the acoustic wave speed, i.e. when so-called coincidence occurs.
There are some honeycomb panel speakers (coincidence occurs at fairly low frequency for honeycomb) that use this principle, and they achieve much higher efficiency than ordinary loudspeakers. https://en.wikipedia.org/wiki/Distributed_mode_loudspeaker
A former colleague of mine (Sven Tyrland) told me that he was able to get 50% efficiency and a very directional sound using a matrix of many small speakers placed in a large wall without backing. Sven was very skilled and quirky - he told me that the trick was to mimic coincidence conditions.
If memory serves me correctly, I believe he mentioned the phasing not being very fancy, as he simply wired the speakers in serial/parallel to get a suitable impedance for the amplifier to drive.
This sound wall was used in discotheques and similar venues. He also used it outdoors and told me he was able to focus sound to very high sound pressure levels at long distance without any side disturbance, i.e. you could stand more or less next to it and still have very high SPL only in front of it.
Just my 2 cents
Claes
  • asked a question related to Acoustics and Acoustic Engineering
Question
4 answers
I have decided to do research on noise pollution modeling. Kindly tell me about open-source software for noise pollution modeling around airports.
Relevant answer
Answer
Hi
Here is another open-source software package, Code Tympan, that uses ray tracing. It might not yet be exactly what you want, but it may still be worth taking a look at, as a new solver is in the works:
"A new solver called ANIME3D will soon be available and includes a transformation-based method to take into account refraction by sound speed celerity gradients."
It seems that Code Tympan is currently slower than SoundPlan.
Sincerely
Claes
  • asked a question related to Acoustics and Acoustic Engineering
Question
3 answers
Hi, I was wondering if there is any difference between normalizing an impulse response in the time domain or the frequency domain, respectively. My first thought was yes, because normalization in the time domain could change the ratio between the frequency components. Simulations in MATLAB suggest that there is no difference, but I am not sure.
Thanks for any advice,
Relevant answer
Answer
Definitely there is no difference. First of all, you should not worry that normalization in the time domain may change the ratio between frequency components - if you scale your impulse up or down, all its frequency components get scaled by exactly the same amount, since the Fourier transform is a linear operation and so all rules of linear-systems theory apply, as Remi Cornwall wrote. One more piece of good news: once you have normalized your impulse in one of the domains, its energy (the integral of the function squared) is exactly equal in the other domain. The energy of your impulse normalized in the time domain is equal to its energy in the frequency domain. This is what Parseval's theorem says.
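Both claims in this answer (uniform scaling of all frequency components, and Parseval's theorem relating time- and frequency-domain energy) are easy to verify numerically; a small sketch with NumPy's FFT, using an arbitrary random test signal:

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal(1024)          # an arbitrary "impulse response"

# Normalize in the time domain (peak = 1)
h_norm = h / np.max(np.abs(h))
H, H_norm = np.fft.rfft(h), np.fft.rfft(h_norm)

# Linearity: every frequency component scales by the same factor,
# so the ratio between components is unchanged
ratios = np.abs(H_norm) / np.abs(H)
assert np.allclose(ratios, ratios[0])

# Parseval: time-domain energy equals frequency-domain energy / N
energy_t = np.sum(h ** 2)
energy_f = np.sum(np.abs(np.fft.fft(h)) ** 2) / len(h)
assert np.allclose(energy_t, energy_f)
print("linearity and Parseval verified")
```

(The 1/N factor reflects NumPy's unnormalized FFT convention; other tools place the normalization differently.)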
  • asked a question related to Acoustics and Acoustic Engineering
Question
17 answers
I am looking for a way to estimate the particle acceleration of underwater sound produced by aquatic animals, for example - something easy to use in the field and reliable, like a vector sensor or an underwater geophone?
Relevant answer
Answer
Some of the Wilcoxon sensors were developed with the US Gov't, and have ITAR restrictions associated with them.  They are not available internationally.  Applied Physical Sciences also has some sensors, but they too have ITAR restrictions.  The Microflown/Hydroflown sensors (from the Netherlands) have never been proven to be effective in underwater applications. 
My recommendation would be to get several good pressure sensors and build your own vector sensor based on pressure gradient measurements.  (I believe that's really what the PAS is.)
  • asked a question related to Acoustics and Acoustic Engineering
Question
4 answers
Synthetic data generation is an interesting area of research, but I have difficulty finding articles and textbooks about the topic. I want an idea of the definitions and frameworks for automatic synthetic data generation in any area, particularly in sound analysis.
Relevant answer
Answer
As you focus on sound analysis, you may find interesting the nowadays state-of-the-art technique for improving speech-recognition acoustic models by augmenting the training data with speed, frequency and tempo warping, which is indirectly related to synthetic data generation ( http://www.danielpovey.com/files/2015_interspeech_augmentation.pdf )
Another example of creating synthetic data is block mixing that seems to be useful in polyphonic acoustic event detection: https://arxiv.org/pdf/1604.00861.pdf
We also tried the latter in recent DCASE challenge gaining a little bit on F-measure, but not as large as the author of previous work: https://www.researchgate.net/publication/306118891_DCASE_2016_Sound_Event_Detection_System_Based_on_Convolutional_Neural_Network
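As a toy illustration of the speed-warping augmentation mentioned above, a signal can be resampled by linear interpolation to change its playback rate. Real pipelines, such as the one in the linked paper, use proper resampling filters; this is only a sketch:

```python
import numpy as np

def speed_perturb(signal, factor):
    """Resample a 1-D signal so it plays back `factor` times faster.

    Uses simple linear interpolation - a crude stand-in for the
    band-limited resampling used in real augmentation pipelines.
    """
    n_out = int(round(len(signal) / factor))
    old_idx = np.arange(len(signal))
    new_idx = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(new_idx, old_idx, signal)

tone = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 8000))
fast = speed_perturb(tone, 1.1)   # 10% faster -> ~10% fewer samples
print(len(tone), len(fast))
```

Each perturbation factor yields a new training example from the same recording, which is the essence of this augmentation family.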
  • asked a question related to Acoustics and Acoustic Engineering
Question
5 answers
related to ocean ambient noise
Relevant answer
Answer
You can also consider the Green's function as a transfer function of an acoustic disturbance between two points in space, assuming both points are omnidirectional in their responses.  As Dr. Lomonosov notes, this all assumes linear (small) acoustic perturbations to the ambient medium.  Green's functions can be defined in either the time or frequency domain.
  • asked a question related to Acoustics and Acoustic Engineering
Question
8 answers
Hi everyone, 
please, if anyone is using an Acoustic Doppler Velocimeter, could they explain how I can generate bubbles in the flume to measure velocities?
best wishes 
Harith
Relevant answer
Answer
Thanks Carlo, 
I appreciate your answer. Actually, our lab rules do not allow putting anything in the flume, so I'm looking into bubble generation.
best wishes 
  • asked a question related to Acoustics and Acoustic Engineering
Question
3 answers
hello all
Are there any differences or similarities between the human eye and a camera in terms of noise level? Both use the same technique, but a camera captures noise while the eye doesn't. Is there any way to create artificial noise so that only the camera captures it but not the eye?
Relevant answer
Answer
Hi
When looking at process piping vibration, the rule of thumb is that the human eye exaggerates vibration by about a factor of 10x.
This is easy to verify, just put a marker on the vibrating object and a ruler next to the marker. I have arrived at a similar factor using also other more serious equipment such as position lasers, LVDT, YoYo pots and accelerometers.
The best explanation I have been able to come up with is that the human eye tracks motion up to about 4 Hz and, in doing so, rolls in the eye socket. This implies that scale matters, i.e. videotape it and play it back on a 15-inch screen and things no longer look as impressive.
It will be interesting to hear what the RG community has to say on this matter.
All the best
Claes
  • asked a question related to Acoustics and Acoustic Engineering
Question
8 answers
We are using a 2.5 m diameter x 1 m tank for calibrating hydrophones.  We've observed that the results can be quite unrepeatable for the first month or so after we clean and refill the tank.  We assume this effect is due to dissolved gases.  This effect usually only lasts for a few weeks up to one month.
We are looking for ways to measure, characterize and explain this effect in the 10 kHz to 200 kHz region, and if possible, evaluate available techniques for remediation (de-gasing) of the tank water to achieve a "standard" profile.
Relevant answer
Answer
You must degas your water for good hydrophone calibration. I use a vacuum chamber to do this. As one of your responders says, boiling is also an option, but if you have a vacuum chamber that is sufficiently large, use that. The issue is this: when in the dissolved state, the gas species have far less effect than when in the free-gas state (i.e. bubbles). The problem is that dissolved gas very easily comes out as free bubbles. Consider the glass of water you might leave by your bedside overnight. Every morning there will be visible bubbles on the side of the glass, because as the water got cold overnight, more gas dissolved into the liquid, and in the morning, when the temperature in the room rose slightly, it came out of solution and grew to visible size any microscopic bubbles sitting in microscopic cracks in the glass. See page 78 onwards of:
1. Leighton, T.G. (1994) The Acoustic Bubble, Academic Press, 640pages (ISBN 0124419208).
2. Leighton, T.G. (1996) The Acoustic Bubble, (paperback edition), London, San Diego, Academic Press, 640 pages (ISBN 0124419216).
So simple daily oscillations in temperature will cause dissolved gas to come out as bubbles. The bubbles required to cause significant absorption will probably be very small, much smaller than those you see on your overnight glass of water, as you can see from any number of articles, e.g. the references contained in:
3. Ainslie, M.A. and Leighton, T.G. (2011) Review of scattering and extinction cross-sections, damping factors, and resonance frequencies of a spherical gas bubble, Journal of the Acoustical Society of America, 130(5), 3184-3208 (doi: 10.1121/1.3628321).
4. Leighton, T.G., Meers, S.D. and White, P.R. (2004) Propagation through nonlinear time-dependent bubble clouds and the estimation of bubble populations from measured acoustic characteristics, Proceedings of the Royal Society of London A, 460(2049), 2521-2550 (doi:10.1098/rspa.2004.1298)
It does not matter much which gas is or is not coming out of solution - they will almost all contribute to absorption and scattering. Even if you do take all the gas out of the water, there will be small bubbles trapped in crevices when you immerse your hydrophone in the water, so be careful that your immersed instruments are properly wetted.
Cracks and crevices are just one way that microbubbles can be stabilised against dissolving away. Fats, surfactants and other 'impurities' in the water can also stabilise bubbles, as can microscopic particles. See pages 72-77 of references 1, 2 above.
So, this seems like a difficult problem. The answer is to use clean, degassed water. You might want to do calibrations at night when the water has cooled down (or cool it down with ice made from degassed water, but be careful not to introduce extra gas in doing so) to make the bubbles in the water shrink, because gas is more soluble in cooler water. Do calculate the size of resonant bubbles for your sound field (see equation 4.86 on page 306 of references 1, 2 above). You might get friendly advice from your national measurement lab (and you can see why good calibrations cost a lot).
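For the suggested calculation of resonant bubble size, the Minnaert formula (the small-amplitude limit of the fuller treatment in reference 1) gives the resonance frequency of an air bubble in water as f0 = (1/(2*pi*R)) * sqrt(3*gamma*p0/rho). A quick sketch for the 10-200 kHz band in the question, assuming atmospheric pressure and fresh water:

```python
import math

def minnaert_radius(freq_hz, p0=101325.0, rho=998.0, gamma=1.4):
    """Radius (m) of an air bubble in water resonant at freq_hz,
    from the Minnaert formula f0 = sqrt(3*gamma*p0/rho) / (2*pi*R).

    Assumes atmospheric static pressure and neglects surface tension,
    which matters for the smallest bubbles.
    """
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * freq_hz)

# Resonant radii at the ends of the 10 kHz - 200 kHz calibration band
for f in (10e3, 200e3):
    print(f, minnaert_radius(f))
```

The resonant radii come out in the tens-of-micrometres to sub-millimetre range, far smaller than the bubbles visible on a bedside glass, consistent with the answer above.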
  • asked a question related to Acoustics and Acoustic Engineering
Question
2 answers
My eventual aim is to model the radiation of acoustic sources using a hybrid approach (different mesh gradients and domain sizes). 
I would like to be able to create models that have changing flow velocities at a boundary. This would eventually lead to simulating audible sources (loudspeakers etc.), so the source velocity would need to change appropriately with each step of the computation, and the boundary would need to act as both an inlet and an outlet.
Is this currently possible with only some minor tweaks of an arbitrary 2D case?
Has anyone come across a tool that already exists for this? 
Any advice or suggestions are more than welcome.
Many thanks.
Relevant answer
Answer
  • asked a question related to Acoustics and Acoustic Engineering
Question
3 answers
Dear physicists, I would like to know whether there is a relation between the intensity of an acoustic wave signal and the ability of that signal to shatter a body. For instance, if a tuning fork emits an acoustic wave into body X (such as a glass cup) at a resonant frequency of body X (for instance the fundamental frequency), is there a threshold intensity (e.g. wave amplitude, wave power) of the emitted wave which must be exceeded before body X will shatter? It seems well known that the shattering effect occurs at specific frequencies (the fundamental frequency or its harmonics); however, do the wave amplitude, power, or other material properties of the body play a part? If yes, what relation or formula governs these? Answers appreciated. Thanks
Relevant answer
Yes, if you need to.
BB-C 
  • asked a question related to Acoustics and Acoustic Engineering
Question
9 answers
b) It takes 300 Hz to break a wine glass
c) Speed of sound in air = 340 m/s
d) Wavelength = 340/300 = 1.13 metres
e) Speed of sound in glass = 3900 m/s
f) Wavelength = 3900/300 = 13 metres
g) Both the wavelengths in (d) and (f) are much greater
than the size of the glass
So can you explain what is actually going on here? How does a wavelength bigger than the object itself appear to set up modes in the glass of a shorter wavelength? What is actually breaking the glass: the originating frequency or higher harmonics?
If it is a higher harmonic, then presumably we can also break the glass by driving at that frequency directly? Has this ever been demonstrated?
Relevant answer
Answer
It's been a while since this question was asked, but none of the above responses really captures the physics, in my opinion.
@Evert is most definitely correct in the 'reversal' argument, i.e. excite the eigenmodes and then use this to determine the resonant mode. However, this doesn't enable you to predict what will happen.
The key to answering this problem is to realise that the speed of sound you have quoted does not govern the excitation that is breaking the glass. You have quoted the bulk speed of sound in glass, which is the correct speed for determining the propagation of sound waves, for example through a large block of glass.
Instead, the wine glass is being destroyed by resonance of its flexural modes, so the correct treatment uses the cantilever equations. By looking at those equations and treating the wine glass as a cantilever, one can see that the resonance frequency should scale as (thickness/length) to the power of 3/2. (Here I'm ignoring the change in geometry from linear to circular.)
So using the standard reference, Wikipedia (https://en.wikipedia.org/wiki/Cantilever), we can write down the resonant frequency of a cantilever as
\omega = \sqrt(E w t^3/(4 L^3 m))
where \omega is the resonant angular frequency, E the Young's modulus of glass, w the effective width of the cantilever, t the thickness, L the length and m the effective mass.
Now of course, I don't know most of these numbers, but I can do an order of magnitude calculation, so:
Only part of the glass is oscillating, so let's choose an effective width of 1 cm:
w = 1 \times 10^{-2} m
The thickness will be closer to 1 mm than anything else, and the length is of order 10 cm.
The mass is also unknown, but let's assume 10 g (remember this is the effective mass of the resonant part).
Caveats here: of course any real calculation must take into account boundary conditions, as discussed by Kai Fauth
Anyway, plugging these numbers into the formula gives \omega of order 100 rad/s. Now this is close enough, for an order-of-magnitude calculation, to explain the discrepancy between the values that you've quoted, and it also helps to explain why the mode shown in the YouTube clip appears to be the fundamental mode of the glass.
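The back-of-the-envelope numbers above are easy to check numerically. A minimal sketch, where the Young's modulus of glass (~70 GPa) is an assumed typical value not stated in the answer:

```python
import math

E = 70e9    # Young's modulus of glass, Pa (assumed typical value)
w = 1e-2    # effective width, m
t = 1e-3    # thickness, m
L = 0.1     # length, m
m = 10e-3   # effective mass of the resonant part, kg

# Cantilever resonance formula quoted in the answer above.
omega = math.sqrt(E * w * t**3 / (4 * L**3 * m))   # rad/s
freq = omega / (2 * math.pi)                       # Hz

print(omega, freq)  # ~130 rad/s, i.e. a few tens of Hz
```

Whether read in rad/s or Hz, the estimate lands within an order of magnitude of the ~300 Hz breaking frequency quoted in the question, which is all this kind of calculation claims.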
All the best
Andy
  • asked a question related to Acoustics and Acoustic Engineering
Question
3 answers
The mean processed signal to noise ratio was calculated to be 30 dB for the Raytheon sonar, and 13 dB for the Klein sonar. Using the Receiver Operating Characteristic (ROC) curve displayed (figure with calculations from Urick,1983 is attached: file name is "ROC curves calculations.bmp"), and given the desired false alarm probability of 0.5%, the probability of detection corresponding to the mean processed signal to noise ratio for each sonar
was calculated at the false alarm level. The probability of detection was calculated to be 0.998 for the Raytheon sonar (green lines on the plot attached), and 0.82 for the Klein sonar (yellow lines on the plot attached).
I tried to reproduce the calculations with MATLAB tools (the rocsnr function), but I cannot obtain the same results as from the paper plots. MATLAB gives substantially higher values: e.g., for the Raytheon sonar the probability of detection is always 1 (for a signal-to-noise ratio of 30 dB). The MATLAB code for the calculations is relatively simple and is given below.
[Pd,Pfa] = rocsnr(30);
idx = find(Pfa==0.005); % find index for Pfa=0.005
sprintf('%.9f',Pd(idx))
The result for calculation looks as follows (I expected to get 0.998).
ans =
Empty matrix: 1-by-0
After getting this result I tried to increase Pfa value, but the result is 1.
[Pd,Pfa] = rocsnr(30);
idx = find(Pfa==0.01); % find index for Pfa=0.01
sprintf('%.9f',Pd(idx))
ans =
1.000000000
For the Klein sonar, the probability of detection is almost 1 instead of 0.82 (for a signal-to-noise ratio of 13 dB). I cannot obtain a result for a false alarm probability of 0.5%; for 0.1% I get 0.999967062.
[Pd,Pfa] = rocsnr(13);
idx = find(Pfa==0.005); % find index for Pfa=0.005
sprintf('%.9f',Pd(idx))
ans =
Empty matrix: 1-by-0
[Pd,Pfa] = rocsnr(13);
idx = find(Pfa==0.01); % find index for Pfa=0.01
sprintf('%.9f',Pd(idx))
ans =
0.999967062
What is the reason for such an inconsistency between the paper-plot calculations and the "efficient" MATLAB calculations performed automatically on the same input data?
The original figure for ROC curves (Urick,1983) without additional lines plotted is attached too (file name is "ROC curves (Urick, 1983).bmp").
The links to MATLAB documentation related to ROC curves are given below.
It is interesting that ROC curves were first introduced in MATLAB R2011a.
Relevant answer
Answer
Dear Fernando,
Attachments (with calculations info) are available. 
MATLAB code for calculations is relatively simple and is given below.
[Pd,Pfa] = rocsnr(30);
idx = find(Pfa==0.005); % find index for Pfa=0.005
sprintf('%.9f',Pd(idx))
The result for calculation looks as follows (I expected to get 0.998).
ans =
Empty matrix: 1-by-0
After getting this result I tried to increase Pfa value, but the result is 1.
[Pd,Pfa] = rocsnr(30);
idx = find(Pfa==0.01); % find index for Pfa=0.01
sprintf('%.9f',Pd(idx))
ans =
1.000000000
The links to MATLAB documentation related to ROC curves are given below.
It is interesting that ROC curves were first introduced in MATLAB R2011a.
Sincerely,
Oleksandr
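Two separate issues seem to be in play here. First, `find(Pfa==0.005)` tests exact floating-point equality against the sampled Pfa grid that `rocsnr` returns, and yields an empty matrix whenever 0.005 is not exactly on that grid; a nearest-point lookup is more robust. Second, "SNR" can mean different things (detection index versus input signal-to-noise ratio), which by itself can shift ROC curves by many dB. A minimal sketch, assuming the classical Gaussian threshold-detector model Pd = Q(Q^-1(Pfa) - sqrt(d)); this is one textbook ROC model, not necessarily the one behind either Urick's curves or `rocsnr`:

```python
import math

def q(x: float) -> float:
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def q_inv(p: float) -> float:
    """Inverse of q(), via bisection (adequate for this sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def pd(snr_db: float, pfa: float) -> float:
    """Pd if snr_db is interpreted as 10*log10(detection index)."""
    d = 10 ** (snr_db / 10)
    return q(q_inv(pfa) - math.sqrt(d))

# Nearest-grid lookup instead of exact equality (the find(Pfa==0.005) pitfall).
pfa_grid = [10 ** (-k / 2) for k in range(20, -1, -1)]
nearest = min(pfa_grid, key=lambda v: abs(v - 0.005))

print(pd(13, 0.005), pd(30, 0.005))
```

Under this particular model, 13 dB gives Pd near 0.97 rather than 0.82, which illustrates how strongly the answer depends on the SNR/detection-index convention; to match Urick's figure you must use his definition of detection index, not simply pass 13 or 30 into a function with a different convention.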
  • asked a question related to Acoustics and Acoustic Engineering
Question
5 answers
Hi, friends. Maybe what I will ask is something very basic for you, but even so, I want to ask it. Many of you may have read Elements of Acoustic Phonetics by Ladefoged (2nd ed.). On page 50 of this book, in the last paragraph, where Ladefoged explains what we can see in a spectrum (with an image that I share below), he says: "as we can see from the line spectrum, much of energy of the complex wave is contained in the one component at 1000 Hz". I am not a native English speaker, and my first interpretation was that much of the sound energy is concentrated in the component at 1000 Hz. But the chart says otherwise. The component with the largest amplitude, that is, the one that carries most of the energy of the sound, is at 500 Hz, isn't it? So, was there a mistake in the book? I think there was. The following sentence in the same paragraph, just after what I quoted, supports my impression. Ladefoged says "the additional components which have to be combined with it [the 1000 Hz component] have very small amplitudes". But, looking at the chart, are there components with smaller amplitudes than the 1000 Hz component's? There are not, right? So what is the problem here? Is there something I'm not considering? A friend told me the book says the energy is 'contained' (restrained), not 'concentrated', in one component. To me they are synonymous here: what is concentrated is somehow contained in a virtual space or area, isn't it? However, let's imagine that much of the energy is contained or restrained at 1000 Hz. When I imagine energy contained in those terms, I imagine high pressure, not low pressure, and low pressure is precisely what the chart shows at 1000 Hz: a component with low amplitude, which is why it is short. Therefore, from my point of view, the 1000 Hz component does not indicate that much of the energy is contained there.
I'd appreciate your help on this. Is there a mistake in Ladefoged's book, or has something escaped me? Thanks in advance! Regards!
Relevant answer
Answer
It looks like a mistake to me. It could be said that most of the energy is contained within the band up to 1 kHz. That is likely to be true, looking at the time signal, which doesn't have much in the way of higher-frequency components. The 500 Hz line does have the most energy, but because the vertical axis is amplitude, it doesn't hold even half the total energy (so it doesn't contain "most" of the energy). If you assume energy is proportional to amplitude squared, the nearest four lines have more energy in total than the tallest line.
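The amplitude-versus-energy point is easy to verify numerically. A minimal sketch with illustrative amplitudes (not read off the book's figure): the tallest line can be the most energetic single component yet still carry less than the rest combined.

```python
# Illustrative line amplitudes (arbitrary units), NOT the book's actual values:
# one tall line at 500 Hz plus four smaller neighbours.
amplitudes = {500: 1.0, 1000: 0.55, 1500: 0.55, 2000: 0.55, 2500: 0.55}

# Energy of each component is proportional to amplitude squared.
energies = {f: a ** 2 for f, a in amplitudes.items()}
total = sum(energies.values())

tallest = energies[500]
others = total - tallest

print(tallest > max(e for f, e in energies.items() if f != 500))  # most energetic line
print(tallest < others)   # ...yet less than the other lines combined
print(tallest / total)    # under half of the total energy
```

This is exactly the situation described above: "largest amplitude" and "most of the energy" are not interchangeable once you square the amplitudes.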
  • asked a question related to Acoustics and Acoustic Engineering
Question
2 answers
Does anyone have experience of what constitutes a 100% scrambled call for a control?
Relevant answer
Answer
Thanks very much for your help.
  • asked a question related to Acoustics and Acoustic Engineering
Question
2 answers
Hi everybody,
I am a young researcher in aeroacoustics and I work with the Kirchhoff integral in supersonic flow. Due to the Mach cone I face a singular integral; a simple form of these integrals is shown in the attached figure, and I would like to solve this problem.
Who can help me?
With Best Regards for your time
Relevant answer
Answer
 Dear Dr.Bernard
Thank you so much for answer
but I think my problem is a little different:
I suppose a shock occurs behind the vibrating structure, and I don't consider the shock effect in my computation, i.e. I have a completely uniform flow.
Regarding the FWH analogy, I think this problem exists there too, because I have a surface integral on the solid surface and the singularity appears again.
Regards
  • asked a question related to Acoustics and Acoustic Engineering
Question
7 answers
Hi,
While simulating sound transmission loss through a perforated plate, some research articles assume that sound passes through the holes and is completely blocked by the plate; the plate is considered rigid.
But in reality sound can also travel through the plate material - the vibration of the material contributes to transmitting sound. So I would like to ask how valid it is to assume the plate is a rigid wall, with only the holes allowing sound to transmit.
Thanks.
Relevant answer
Thank you, Mr. White and Mr. Claes, for the helpful answers.
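One quick way to gauge how much the solid part of the plate leaks is the normal-incidence mass law, TL ≈ 20·log10(f·m'') − 47 dB, with f in Hz and m'' the surface density in kg/m². A minimal sketch, where the steel density and 1 mm thickness are assumed example values:

```python
import math

RHO_STEEL = 7800.0   # kg/m^3 (assumed example material)
THICKNESS = 1e-3     # m (assumed example plate)
m_surface = RHO_STEEL * THICKNESS   # surface density, kg/m^2

def tl_mass_law(f_hz: float) -> float:
    """Normal-incidence mass-law transmission loss, in dB."""
    return 20 * math.log10(f_hz * m_surface) - 47

print(tl_mass_law(1000))  # ~31 dB at 1 kHz
```

About 31 dB means only roughly 0.1% of the incident energy passes through the solid part, so whenever the open area of the perforation exceeds a fraction of a percent, the hole path dominates and the rigid-plate assumption is usually reasonable; for a thin, light, or limp plate at low frequency it becomes questionable, as the question suspects.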
  • asked a question related to Acoustics and Acoustic Engineering
Question
2 answers
I am working on an ocean acoustic propagation model, and to validate it I am using the KRAKEN code (http://oalib.hlsresearch.com/Modes/AcousticsToolbox).
During this process I observed that the depth modes obtained using KRAKEN (via plotmode.m) exactly match the analytical modes (Jensen et al.) for an isovelocity waveguide at source frequencies of 50 Hz and 1000 Hz.
But when I tried to match the pressure at a fixed depth (using plottlr.m), I found a normalization difference between the KRAKEN model and the analytical method at the lower frequency (50 Hz); at the higher frequency (1000 Hz) a small phase difference in the pressure plot is also observed.
Can someone please enlighten me on this pressure issue? Thank you in advance.
I have attached the pressure comparison plots
Relevant answer
Answer
Thank you, sir, for your reply. I received an email from Michael Porter, and according to his email KRAKEN shows the pressure as (P/P0) where P0 = exp(ci*k)/(4*pi); using this normalization factor the problem is solved. Thanks for your answer.
  • asked a question related to Acoustics and Acoustic Engineering
Question
4 answers
I've been using FreeFEM++ to solve some simple acoustics problems. The code works very well for a single connected domain, but I can't seem to find an example which handles multiple regions - say we have glass in one region and air in another.
Relevant answer
Answer
Hello Paul, sorry for the delayed response. We're pushing a publication through right now, so things have been a bit busy. 
I'll try to remember to get back to you once things settle down a bit, but I don't imagine that will be until mid-to-late June. Thanks for the response!
  • asked a question related to Acoustics and Acoustic Engineering
Question
5 answers
Why is there no standing wave field in the middle?
How will the particles move when the standing wave field exists only at the top and bottom of the cavity?
Relevant answer
Answer
I just realized my acoustic force potential and acoustic radiation force have negative values as well.
Thank you very much. I'll come back with more questions after researching thoroughly.
  • asked a question related to Acoustics and Acoustic Engineering
Question
2 answers
Ultrasonic cavitation,acoustics
Relevant answer
Answer
Cavitation caused by ultrasonic treatment during solidification helps break up dendrite growth and allows the formation of an equiaxed grain structure. This is particularly helpful for obtaining uniform properties throughout the casting.
You may find this article useful :
  • asked a question related to Acoustics and Acoustic Engineering
Question
4 answers
I made an impact on a plate structure with an impact hammer. I obtained the theoretical dispersion graph for the plate from the Rayleigh-Lamb equation. As I am very new to this field, I am confused: do the frequency values in the dispersion graph represent the frequency from the impact hammer's time response, or the frequency that I get from the accelerometer's response?
Thanks in advance.
Relevant answer
Answer
Two very good answers are given here by Brian and Sergey. I would like to add a little more:
1- A pulse impact usually generates a broadband wave, with the upper frequency limit not exceeding the inverse of the duration of the pulse.
2- The footprint of the impact determines the smallest wavelength you can generate and therefore the highest frequency.
3- What you detect is not all the waves that exist, but only what the detector is capable of detecting.
4- Surface waves will exist too if you generate wavelengths smaller than half the plate thickness (much smaller than that is recommended). Surface waves are not dispersive on a flat plate, while plate waves are.
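Points 1-2 above translate into quick estimates. A minimal sketch with assumed example values (a ~1 ms soft-tip hammer contact and a nominal 3000 m/s plate wave speed, neither taken from the question):

```python
pulse_duration = 1e-3            # s, hammer contact time (assumed)
f_max = 1.0 / pulse_duration     # Hz, rough upper limit of the excited band
c_plate = 3000.0                 # m/s, assumed nominal plate wave speed
lambda_min = c_plate / f_max     # m, smallest strongly-excited wavelength

print(f_max, lambda_min)  # 1000.0 Hz, 3.0 m
```

So a soft tip only excites long wavelengths, and the portion of the dispersion graph you can meaningfully compare against is limited to that band, whichever sensor's spectrum you look at.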
  • asked a question related to Acoustics and Acoustic Engineering
Question
12 answers
odeon / ease ???
Relevant answer
Answer
The question that you asked raises the next problem: what does acoustic comfort mean? Do you mean a noise-free environment, or the absence of any acoustic signals? Or a concert hall with good acoustic properties? If the last one, EASE could be a good choice.
The second question: do you want to evaluate acoustic comfort for existing buildings, or for designs?
If you mean an existing building, measurements could be a good starting point.
These are just a few quick thoughts on the question you asked.
  • asked a question related to Acoustics and Acoustic Engineering
Question
2 answers
For simulation purposes, I have an incoming optical pulse whose time-domain envelope is F(t)=E(t), with corresponding frequency-domain form G(w). If it is affected by an AOM with shift frequency WSHIFT, how is the original pulse form affected?
Am I right?
Let the scheme be: Pulse1 -> AOM -> Pulse2, like:
Pulse1 = E(t),    Pulse2 = E(t)*exp(1i*WSHIFT*t), and in the frequency domain this will give:
G(w) -> G(w+WSHIFT*t)
Relevant answer
Answer
G(w+WSHIFT*t) is not a pure frequency-dependent form. What you actually get is two pulses at frequencies w+WSHIFT and w-WSHIFT, which is the usual Brillouin scattering. However, there are cases where you can have nonlinear terms as well. What you wrote could be acceptable if WSHIFT << w and you can measure the frequency response at time intervals smaller than t.
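The answer's point - that multiplying by exp(i·WSHIFT·t) shifts the spectrum rather than producing a "frequency plus t" argument - can be checked with a discrete Fourier transform. A minimal pure-Python sketch with illustrative numbers (a Gaussian envelope and a 50 Hz shift, both assumptions):

```python
import cmath
import math

FS = 1000              # samples per second
N = 1000               # one second of signal
W = 2 * math.pi * 50   # 50 Hz shift frequency (illustrative)

ts = [n / FS for n in range(N)]
envelope = [math.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2)) for t in ts]
shifted = [e * cmath.exp(1j * W * t) for e, t in zip(envelope, ts)]

def dft_bin(x, k):
    """Magnitude of DFT bin k (here bin k corresponds to k Hz)."""
    return abs(sum(v * cmath.exp(-2j * math.pi * k * n / N)
                   for n, v in enumerate(x)))

# Baseband envelope peaks at DC; after modulation the energy sits at +50 Hz.
print(dft_bin(envelope, 0) > dft_bin(envelope, 50))   # True
print(dft_bin(shifted, 50) > dft_bin(shifted, 0))     # True
```

Multiplying by a complex exponential gives the single-sideband picture; with a real drive cos(WSHIFT·t) you get both w+WSHIFT and w−WSHIFT components, exactly as the answer notes.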
  • asked a question related to Acoustics and Acoustic Engineering
Question
9 answers
As you can see from the attached diagram, the particles are concentrated in the nodal planes at 0.008 s. But at 0.1 s the particles are not as concentrated in the nodal planes as at 0.008 s (i.e. there are no red colours; the red colours are replaced with yellow). Why do the suspended particles become less concentrated in the nodal planes as time goes by?
This is the manipulation of suspended solid particles using an ultrasonic standing wave field. The attached report is a brief report from a COMSOL v5.2 simulation.
Relevant answer
Answer
One possibility: the frequency drifts away from the resonance point. If it's a real experiment, the particles change the resonant frequency. Try again with a smaller number of particles or a different mass.
  • asked a question related to Acoustics and Acoustic Engineering
Question
2 answers
Hi,
Could someone please let me know the advantages of using an apodized IDT design over "reducing the number of fingers in the IDT" while designing a wideband SAW device ?
If I use the apodized transducer as a receiver of the acoustic wave, then from the equivalent circuit model of the IDT my conclusion is that, for the same bandwidth, the short-circuit current obtained will be larger for an apodized design than for a wideband design using a reduced number of IDT fingers. Is my understanding correct?
Thanks,
Rahul Kishor
Relevant answer
Answer
Dear Gazi,
Many thanks for sharing the information. Let me read through it and see if it helps.
  • asked a question related to Acoustics and Acoustic Engineering
Question
1 answer
I will conduct a series of experiments on the dehydration of minerals triggering earthquakes, using a modified Griggs-type deformation apparatus, but I do not know how to install the acoustic emission sensor.
Relevant answer
Answer
Shouldn't it be installed on the sample surface? I haven't conducted AE experiments on earthquakes, but generally the sensor can be fixed with tape.
  • asked a question related to Acoustics and Acoustic Engineering
Question
4 answers
Does anyone know the transmission loss of a metallic pipeline for an acoustic signal?
Relevant answer
Answer
  • asked a question related to Acoustics and Acoustic Engineering
Question
5 answers
I believe ODEON is not efficient in simulating the articulation of a surface. Are there any other software suggestions which can simulate edge diffraction as well?
Relevant answer
Answer
As far as I can remember, ODEON struggles to model surfaces with dimensions under 0.5 m. Generally speaking, geometrical acoustics software is not adequate for modelling diffraction effects, as these are low-frequency phenomena. You may try switching to finite element modelling, or better, FDTD. Have a look here:
  • asked a question related to Acoustics and Acoustic Engineering
Question
16 answers
I wish to model an ultrasonic horn in ANSYS but am having difficulties. I have an input displacement of 0.0016 mm and wish to apply this displacement at the base of the ultrasonic horn. Should I use harmonic displacement or nodal displacement? Thanks
Relevant answer
Answer
You may find this helpful for estimating the cavitation power (http://ultrasonic-resonators.org/applications/cavitation/cavitation_1a.html).
  • asked a question related to Acoustics and Acoustic Engineering
Question
4 answers
Do you think that the use of oil dampers is the best solution?
Relevant answer
Answer
The key to eliminating reciprocating compressor vibration is a well-designed pulsation suppression device (two surge bottles with one choke in combination); please review API 618, and in particular Appendix M, to follow one of its three design approaches. A pulsation-vibration remedy should be based on a complete acoustic and stress analysis study.
For a new installation:
If available, review the vendor's rotor dynamic stability report.
This report includes lateral and torsional vibration with critical speeds from 0 to trip.
If available, review all stress analyses, including the piping system supports as well as the agreed mounting base frame, and in addition the design of the surge bottles.
For an old installation:
Define the source of vibration (spectrum analysis report) to solve the problem - whether it is the compressor itself or comes from the driver - and which mechanical part to maintain.
Review the piston run-out readings per API 618.
If the exciting force is due to resonance, review the flexibility of all piping supports and check the base foundation for cracks or movement. Also review whether any mechanical retrofitting has been done in a manner that affects the system stiffness or mass.
Review whether any operating parameter has changed (pressure, temperature, gas composition).
For a resonating piping system, correct the resonant piping length by modifying the piping supports.
Acoustic and stress analysis can be performed with commercial software such as AUTOPIPE or CAESAR.
  
  • asked a question related to Acoustics and Acoustic Engineering
Question
4 answers
I would also be most grateful if anyone who works within the field of infrasound would contact me to discuss an exciting collaboration.
Relevant answer
Answer
What is your band requirement?
What is the medium (air, water, ...)?
You can take a look at Bruel & Kjaer products; they are expensive, but viewed by many as the best transducers (at least in my field, underwater acoustics).
For the acquisition device, there is little chance that a sound card (even a high-grade one) can do the job, because they won't pass DC to 20 Hz, 50 Hz or 100 Hz. The lowest I've found are from the RME brand, which advertises 5 Hz at -3 dB, and I've measured mine, which cuts off at 1 or 2 Hz.
That said, another drawback of doing acquisition with a sound card is if you need to synchronize the start of the recording session (no trigger input).
  • asked a question related to Acoustics and Acoustic Engineering
Question
3 answers
The source is fixed and waveforms are recorded at many stations. I want to determine the delay time between two stations. I know some ways to measure similarity, like cross-correlation, semblance, and dynamic time warping. The biggest problem is that my data have a strong noise background (car noise, walking noise, etc.). I don't know what to do or how to do it. If two stations are close, I can use cross-correlation to obtain the delay time, but the ray paths are not the same and there are site effects, so cross-correlation may be risky. Can anyone help me?
Relevant answer
Answer
I'd suggest experimenting with cross-correlation techniques and trying different signals. Try either constant-frequency or modulated signals of different durations, e.g. short and long sweeps. Of course, you have to manage the cross-correlation length accordingly.
Gianni
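A minimal sketch of the cross-correlation approach suggested above, using a synthetic signal with a known lag (all values illustrative): the peak of the cross-correlation recovers the delay even with independent noise added at each station.

```python
import random

random.seed(0)
FS = 1000          # Hz, sample rate (illustrative)
N = 2000
TRUE_LAG = 37      # samples; station B lags station A

source = [random.gauss(0, 1) for _ in range(N)]
a = [s + 0.3 * random.gauss(0, 1) for s in source]
b = [0.0] * TRUE_LAG + source[:N - TRUE_LAG]
b = [s + 0.3 * random.gauss(0, 1) for s in b]

def best_lag(x, y, max_lag):
    """Lag (in samples) maximising the cross-correlation of x with y."""
    scores = []
    for lag in range(max_lag):
        scores.append((sum(x[i] * y[i + lag] for i in range(len(x) - lag)), lag))
    return max(scores)[1]

lag = best_lag(a, b, 100)
print(lag, lag / FS)   # recovered delay in samples and seconds
```

With strong coherent noise (cars, footsteps) it often helps to band-pass both records around the source band first, or to correlate signal envelopes, since the raw correlation can otherwise lock onto the noise instead of the source.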
  • asked a question related to Acoustics and Acoustic Engineering
Question
2 answers
I did a High Resolution Melting (HRM) run, but all the peaks have this noise, when there should be only one big peak. The black line is the negative control, and the green lines are the same sample.
Relevant answer
Answer
Hello, sorry for the delay. Maybe the two peaks indicate the presence of dimers or an unspecific product, or maybe the sequence of your product has a singularity that gives two peaks. If you have the sequence of your product, you can use the uMelt online tool to predict what your curves and peaks should look like. Run a gel of your reaction to rule out unspecific products and dimers. The noise may be caused by degenerate primers or poor template quality.
  • asked a question related to Acoustics and Acoustic Engineering
Question
4 answers
Hello, I need to measure the cavitation in a water tank (caused by ultrasound) using a hydrophone; however, I am stuck on interpreting the data measured by the hydrophone. Is there any journal paper or online resource on interpreting acoustic cavitation measured by a hydrophone? Thanks
Relevant answer
Hi John,
With the hydrophone you measure the pressure at a single point, so you are measuring the pressure change at one point of the pressure field imposed by the sonotrode.
If you are working with just one bubble (single-bubble cavitation), you would probably "see" the change in the shape of your hydrophone's signal that the presence of the cavitation bubble introduces (and if you apply a high-pass filter to the hydrophone signal, you would probably see the presence of a cavitation bubble even more easily). However, if you are in a multi-bubble regime, you will most probably see all the bubble interactions plus the imposed acoustic field, which is more difficult to work with; nevertheless, you could build another high-pass filter in order to see the cavitation in your chamber.
Another home-made way to find the nodes in your chamber is to use a thin metal strip, where you can see the damage that the cavitation bubbles make when they collapse.
Here is a good bibliography on the differences between the two regimes:
  • asked a question related to Acoustics and Acoustic Engineering
Question
7 answers
ultrasonic pulse velocity
PUNDIT
depth of penetration vs frequency
Relevant answer
Answer
Hi!
Remember, rocks are complex systems. One needs lots of information (geo-acoustic parameters) to make an accurate estimate, and information about one type of rock is not applicable to another. However, with suitable references, I would make an approximate estimate as follows:
Both the phase velocity c_p and the wave attenuation are functions of frequency. According to ref 1, c_p for rock (metamorphic/igneous) is 4600-6200 m/s, and for limestones c_p is 3100-6200 m/s. This agrees with table 2 in ref 2. Since you have mentioned "concrete or rock blocks", I am assuming a grey granite. From fig 6 in ref 2, we can assume that under little stress c_p = 3600 m/s and the attenuation is of the order of -200 dB/m at 54 kHz. Also see ref 3 for more details.
However, defining penetration depth as done on pages 23-24 of ref 4, the penetration depth comes out to be equivalent to -10 dB of amplitude attenuation. So at 54 kHz the wave in the rock will already be exponentially damped; it will be diffuse, with evanescent-wave character, rather than a propagating wave! If you still want to calculate the maximum penetration depth at 54 kHz, use formula 40 or 42 of ref 4 together with the plots in ref 2 or ref 3, depending on the type of rock you are interested in.
Hope it helps!
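Using the −10 dB definition of penetration depth mentioned above, the quoted numbers give a quick estimate. A minimal sketch (the 200 dB/m attenuation and 3600 m/s phase velocity are the values assumed in the answer for a grey granite, not measured data):

```python
atten_db_per_m = 200.0   # dB/m at 54 kHz (value assumed above)
c_p = 3600.0             # m/s, assumed phase velocity
f = 54e3                 # Hz, PUNDIT-style test frequency

depth_10db = 10.0 / atten_db_per_m   # m; -10 dB penetration depth
wavelength = c_p / f                 # m

print(depth_10db, wavelength)  # ~0.05 m depth vs ~0.067 m wavelength
```

The penetration depth comes out below a single wavelength, which is why the answer describes the 54 kHz field in such rock as diffuse and evanescent rather than a propagating wave.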
  • asked a question related to Acoustics and Acoustic Engineering
Question
3 answers
Does anyone have experience of curtain walling which spans two rooms and needs to achieve 65 dB DnT,w + Ctr?
Relevant answer
Answer
  • asked a question related to Acoustics and Acoustic Engineering
Question
11 answers
Analytical solution
Relevant answer
Answer
Thanks everybody. For giving me your valuable inputs .I will go through all the reference provided hopefully I will get the results 
  • asked a question related to Acoustics and Acoustic Engineering
Question
11 answers
While modeling the aeroacoustics of a fan using DDES, I have to use a transient formulation. Fluent recommends using bounded second-order implicit time stepping. After using this, I have observed a lot of fluctuation in the acoustic signal. Can anybody elaborate on why this is so?
Relevant answer
Answer
Hello Malik,
I am studying the effect of the time step on noise. It does help, but it still cannot dampen the rotational frequency, and further reducing the time step costs a tremendous increase in simulation time (given my limited CPU resources).
As you are also interested in this, let me know if you find any relation between compressibility and fan noise. I have seen a similar plot for supersonic tip speed, but my case is subsonic. Share if you find any relevant link.
  • asked a question related to Acoustics and Acoustic Engineering
Question
7 answers
I need a detailed study of the size of particle that can be levitated and the corresponding required ultrasonic frequencies.
Relevant answer
Answer
As Icíar González mentioned, the size limit of the trapped particle depends on the frequency of operation. That said, if you want to manipulate very small individual particles, you will experience significant energy losses in terms of damping, although the radiation forces scale with frequency. Thus, a trade-off between the two exists.
You may find this article interesting.
"Optimisation of an acoustic resonator for particle manipulation in air"
DOI:10.1016/j.snb.2015.10.068
Below are the links.
or
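As a complement to the frequency-dependence point above: in a standing-wave levitator the pressure nodes are half a wavelength apart, which sets the rough upper size of what fits in a trap. A minimal sketch for air (343 m/s assumed; frequencies illustrative):

```python
C_AIR = 343.0   # m/s, speed of sound in air (assumed)

def node_spacing_mm(f_hz: float) -> float:
    """Half-wavelength spacing between pressure nodes, in millimetres."""
    return (C_AIR / f_hz) / 2 * 1000

for f_khz in (20, 40, 100):
    print(f"{f_khz} kHz -> node spacing {node_spacing_mm(f_khz * 1e3):.2f} mm")
```

Particles much smaller than this spacing sit near the nodes; raising the frequency shrinks (and stiffens) the trap but, as noted above, increases damping losses for very small particles, hence the trade-off.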
  • asked a question related to Acoustics and Acoustic Engineering
Question
7 answers
How does one determine the maximum speed (ensuring no collisions) of an omnidirectional robot with a ring of 'n' sequentially firing ultrasonic sonar sensors, given the frequency 'f' of each sensor and the maximum acceleration 'a'?
The robot is in a world filled with sonar-detectable fixed (non-moving) obstacles that can only be detected at 'x' metres and closer.
Is the maximum velocity the velocity that can be attained by the robot within the 'ring cycle time'?
The way I approached it:
Consider: f = 70 kHz, a = 0.5 m/s², x = 5 m, n = 8.
Now consider the obstacle at the furthest detectable distance, 5 m, and take the speed of sound as 300 m/s.
Time taken for a sensor to receive its echo = (2 × 5 m)/(300 m/s) = 1/30 s.
Now, as there are 8 sensors fired sequentially, the total time taken = 8/30 s.
Therefore, ring cycle time = 8/30 s.
Also, the overall update frequency of one sensor = 30/8 Hz.
Now, is the maximum velocity the velocity attained by the robot in t = 8/30 s?
Relevant answer
Answer
(2 × 5 m)/(300 m/s) = 33 ms for a sound wave travelling 5 m towards an obstacle and back to the transducer (pulse-echo mode).
A complete measurement over 8 transducers for a maximum distance of 5 m needs 267 ms.
[So the pulse repetition rate for each transducer will be 3.75 Hz.]
Your robot is able to detect changes in the distances of obstacles every 8/30 s. Within this time, at the given acceleration of a = 0.5 m/s², a maximum velocity of 0.133 m/s (0.48 km/h) will be reached.
But if your robot is able to travel for, say, 10 s at constant acceleration without collision, the speed at the end will be 10 s × 0.5 m/s² = 5 m/s = 18 km/h. Now, in each ring cycle time of 267 ms, a distance of 1.33 m will be covered.
At the least, you have to give a better definition of your maximum speed.
If it would be OK to avoid collisions within the range of one detection (5 m range), then v = 5 m / 0.267 s = 18.75 m/s. That means a speed of 67.5 km/h - perhaps too fast for collision avoidance.
By the way: bandwidth and rise time are only interesting if you want to calculate time-of-flight precisely. In this example, the speed of the robot seems to be the focus of interest.
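The arithmetic worked through above can be sketched directly:

```python
c = 300.0   # m/s, speed of sound (value used in the thread)
x = 5.0     # m, maximum detection range
n = 8       # number of sequentially fired sensors
a = 0.5     # m/s^2, maximum acceleration

t_echo = 2 * x / c        # pulse-echo time for one sensor: ~0.0333 s
t_ring = n * t_echo       # full ring cycle: ~0.267 s
v_ring = a * t_ring       # speed gained in one ring cycle: ~0.133 m/s
v_one_cycle = x / t_ring  # speed that covers 5 m in one cycle: 18.75 m/s

print(t_echo, t_ring, v_ring, v_one_cycle)
```

As the answer says, which of these counts as "the maximum speed" depends on the collision-avoidance criterion: being able to brake within one ring cycle is a far more conservative requirement than merely detecting an obstacle before reaching it.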
  • asked a question related to Acoustics and Acoustic Engineering
Question
1 answer
The sound generators of string and non-electronic keyboard instruments are nearly always coupled via the soundboard or organ case. Sympathetic resonance between two notes is strongest for pure intervals. With tempered intervals, this effect should be weaker, but in addition the coupling should "pull" the two frequencies away from the frequencies of the notes played separately.
People such as Werckmeister who developed tempering systems defined the intervals in terms of length ratios on the monochord. Nowadays, published data are given in cents, presumably on the assumption that the length ratios are an accurate indication of the frequency ratios. I'd like to know whether this assumption ought to be questioned.
A supplementary question concerns possible differences between tuning a harpsichord or piano note-by-note with the aid of an electronic device, and doing it the difficult traditional way by listening to pairs of notes played together.
Relevant answer
Answer
One interesting point is that Bach himself tempered his intervals by actually counting the beats. Therefore each interval (be it a 3rd, 4th, 5th, etc.) had a characteristic number of beats, the intervals being determined independently of string length. It was empirical. This approach was a bit different for concert and chamber music, the beats per interval being somewhat different. It was said that the cover page of the Well-Tempered Clavier has a code with this information.
Whether the cents approach should be questioned depends on your perspective. The difference can be heard. But musical instruments are different from pitch references, and because this effect can vary between instruments it is hard to generalize. My impression from tuning is that with harpsichords (I made several Z-boxes) the resonant characteristics of different soundboards may vary, so the same note on different instruments has a different quality in tone and resonance. On keyboard instruments it is exactly this quality and the "pull" that gives the instrument its characteristic "color". Bach was able to exploit it for his uses, as were the Impressionists, though in different ways. The piano was one of the Impressionists' most effective instruments because of this effect. Did they know what the frequency was? Unlikely.
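Beat counting can be put in numbers. The sketch below compares equal-tempered intervals against their pure counterparts for A4 = 440 Hz; the reference pitch and the choice of intervals are illustrative, not something Bach specified. Beats arise between the nearly coinciding harmonics of the two notes:

```python
A4 = 440.0  # Hz, illustrative reference pitch

def equal_tempered(f_lower, semitones):
    """Upper-note frequency in 12-tone equal temperament."""
    return f_lower * 2.0 ** (semitones / 12.0)

def beat_rate(f_lower, f_upper, h_lower, h_upper):
    """Beats per second between the h_lower-th harmonic of the lower
    note and the h_upper-th harmonic of the upper note."""
    return abs(h_lower * f_lower - h_upper * f_upper)

# Fifth (pure ratio 3:2): beats between the 3rd and 2nd harmonics.
fifth = equal_tempered(A4, 7)
print(f"tempered fifth on A4: {beat_rate(A4, fifth, 3, 2):.2f} beats/s")

# Major third (pure ratio 5:4): beats between the 5th and 4th harmonics.
third = equal_tempered(A4, 4)
print(f"tempered third on A4: {beat_rate(A4, third, 5, 4):.2f} beats/s")
```

A tuner setting the equal-tempered fifth from A4 would listen for roughly 1.5 beats per second, while the tempered major third beats around 17 times per second, which is one reason thirds are much harder to count.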
  • asked a question related to Acoustics and Acoustic Engineering
Question
4 answers
I am having a problem post-processing a test result that was obtained using GE UNIK 5000 transducers for sensing the static pressure at different locations of an exhaust system. What was obtained was a fluctuating pressure.
How can I obtain the average pressure? Is there any standard approach?
Attached snapshot for reference.
Relevant answer
Answer
Hi David, 
What exactly was the problem? Assuming the fluctuations are below the frequency-response limit of the tubing system (which they should be, since it sounds like you didn't use a tubing system) and below the response limit of the transducers, then all you do is take the mean of the pressure: sum the data points and divide by the total number of data points.
There are other statistics that you can use as well, including rms of the pressure time history and the variance of the pressure time history.
Hope that helps
Ric
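The mean, rms, and variance mentioned above can be computed directly from the sampled trace. A minimal sketch, with made-up pressure samples standing in for the recorded data:

```python
import math
import statistics

# Hypothetical fluctuating static-pressure samples in Pa.
pressure = [101325.0, 101340.2, 101310.8, 101331.5, 101318.9, 101336.1]

mean_p = statistics.fmean(pressure)             # average pressure
var_p = statistics.pvariance(pressure, mean_p)  # variance of the time history
rms_fluct = math.sqrt(var_p)                    # rms of the fluctuating part

print(f"mean pressure:   {mean_p:.1f} Pa")
print(f"rms fluctuation: {rms_fluct:.2f} Pa")
```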
  • asked a question related to Acoustics and Acoustic Engineering
Question
2 answers
I attached my questions on ABAQUS.
Relevant answer
Answer
Hi
The short answer is no, you do not need to simulate the impedance tube; you can work it out from other data.
How this is done is a longer story. Take a look at this document.
That said, you might want to simulate the impedance tube anyway, i.e. to use it as a test that you are doing things the right way. It does not take much time and it provides certainty to your work, so give it your consideration.
Sincerely
Claes
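As one illustration of "working it out from other data": if the porous sample can be described by an empirical model such as Delany-Bazley, the normal-incidence absorption coefficient follows from the layer's surface impedance with no tube model at all. This is a sketch under that assumption; the flow resistivity and thickness below are made-up example values.

```python
import cmath
import math

RHO0, C0 = 1.2, 343.0  # air density (kg/m^3) and speed of sound (m/s)

def absorption_delany_bazley(freq, sigma, thickness):
    """Normal-incidence absorption coefficient of a rigid-backed porous
    layer, using the Delany-Bazley empirical model.
    sigma: flow resistivity in Pa*s/m^2, thickness in m."""
    x = RHO0 * freq / sigma
    # Characteristic impedance and complex wavenumber of the material.
    zc = RHO0 * C0 * (1 + 0.0571 * x ** -0.754 - 1j * 0.087 * x ** -0.732)
    kc = (2 * math.pi * freq / C0) * (1 + 0.0978 * x ** -0.700
                                      - 1j * 0.189 * x ** -0.595)
    # Surface impedance of the layer against a rigid backing.
    zs = -1j * zc / cmath.tan(kc * thickness)
    # Reflection factor and absorption coefficient.
    r = (zs - RHO0 * C0) / (zs + RHO0 * C0)
    return 1.0 - abs(r) ** 2

# Example: 5 cm layer, flow resistivity 10 kPa*s/m^2.
for f in (250.0, 1000.0):
    print(f"alpha({f:.0f} Hz) = {absorption_delany_bazley(f, 10_000, 0.05):.2f}")
```

Comparing such a curve against the measured (or simulated) tube result is exactly the kind of sanity check suggested above.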
  • asked a question related to Acoustics and Acoustic Engineering
Question
4 answers
We have a circular microphone array from BSWA (36-point array) and would like to build the data acquisition on the National Instruments hardware platform, using Matlab or similar software. Does anybody have similar experience? Thanks
Relevant answer
Answer
Hi
Yes, you can do what you state using NI hardware (hw), but not using all kinds of NI hw.
The NI hw of preference is the NI cDAQ platform, which is low cost and quite versatile.
Please note that this is NOT cRIO (FPGA), which requires LabVIEW.
However, there are cDAQ modules with built-in PCs available. In my eyes, it usually is better to use an ordinary PC for processing, to be able to keep up with development. Measurement hw lasts much longer than PCs.
You can use the NI-DAQmx drivers, which exist in a few versions, e.g. C, and can be accessed from your platform of preference, e.g. Matlab or Python.
Here is an example of how to access NI-DAQmx from Matlab
Here is an example of how to access NI-DAQmx from Python
I have toyed with the Matlab version and it works. The Python version is very similar.
My recommendation is to go for Python as you then can access also Acoular, which I suspect is of interest to you?
This is also in the works
On top of this, there is usable low-cost software that uses the NI cDAQ platform. It is a good choice for new & reasonably compact hw.
If you are seeking the lowest cost, I suggest looking at Agilent VXI, where used gear provides the lowest cost per channel.
I use both of the above.
Sincerely
Claes
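Independent of the acquisition hardware, the downstream array processing (which Acoular does in a far more complete way) can be prototyped in plain code. The sketch below steers a delay-and-sum beamformer over a simulated 36-microphone circular array and recovers the arrival direction of a plane wave; the array radius, test frequency, and source angle are made-up example values.

```python
import cmath
import math

C = 343.0      # speed of sound, m/s
N_MICS = 36    # as in the BSWA 36-point circular array
RADIUS = 0.25  # m, assumed array radius
FREQ = 2000.0  # Hz, test tone

# Microphone positions on a circle in the x-y plane.
mics = [(RADIUS * math.cos(2 * math.pi * i / N_MICS),
         RADIUS * math.sin(2 * math.pi * i / N_MICS))
        for i in range(N_MICS)]

def steering_phases(angle_deg):
    """Per-microphone phase of a plane wave arriving from angle_deg."""
    k = 2 * math.pi * FREQ / C
    ux = math.cos(math.radians(angle_deg))
    uy = math.sin(math.radians(angle_deg))
    return [k * (x * ux + y * uy) for x, y in mics]

def beam_power(signal_phases, steer_deg):
    """Delay-and-sum output power when steering towards steer_deg."""
    steer = steering_phases(steer_deg)
    s = sum(cmath.exp(1j * (p - q)) for p, q in zip(signal_phases, steer))
    return abs(s) ** 2

true_angle = 70.0
arriving = steering_phases(true_angle)
best = max(range(360), key=lambda a: beam_power(arriving, a))
print(f"estimated arrival direction: {best} deg")  # 70 deg
```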
  • asked a question related to Acoustics and Acoustic Engineering
Question
4 answers
I want to monitor the parameters of gas-solid flow by acoustic means, but I have found few researchers using audio-range acoustics. Is anyone working on this, or can you give me some instruction on it? Thanks.
Relevant answer
Answer
Hi
Here is a company manufacturing such a system.
sincerely
Claes
  • asked a question related to Acoustics and Acoustic Engineering
Question
2 answers
Basically, I want to show that a subjective measurement of a vehicle sound's richness correlates with an appropriate objective measurement.
Relevant answer
Answer
Hi
One way to deal with this subject is to make binaural recordings ('3D sound'),
where you record and replay sound at the correct strength and as unbiased as possible to a panel of listeners. This way you can compare the effect of versions A and B, to see if anyone can tell the difference and, if so, which objective number describes such sound well.
There is state-of-the-art gear, e.g.
and you can use DIY
This kind of work usually is referred to as Sound Quality (SQ). SQ has over time developed lots of metrics, e.g. 
Sincerely
Claes
  • asked a question related to Acoustics and Acoustic Engineering
Question
5 answers
I applied a spectral subtraction technique to an AE signal by subtracting the recorded noise spectrum from the acquired AE spectrum (NB: the AE data contains both AE and spindle-noise influence). The IFFT of the residual in the time domain shows higher amplitudes than the time-domain amplitude of the signal + noise. Why is that so? Does it mean that lower frequency content results in higher amplitudes in the time domain? Is there any explanation to support this inverse proportionality?
Relevant answer
Answer
Thanks. The answers were helpful
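For later readers, the magnitude-domain procedure described in the question can be sketched as below. One common pitfall is subtracting complex spectra, or forgetting to clamp negative magnitudes to zero, either of which can distort the reconstructed amplitudes. The DFT here is a naive O(N²) version to stay dependency-free, and the 'spindle noise' is a made-up single tone.

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def spectral_subtraction(noisy, noise_mag):
    """Subtract a noise magnitude spectrum, keep the noisy phase,
    and clamp negative magnitudes to zero (half-wave rectification)."""
    cleaned = []
    for Xk, Nk in zip(dft(noisy), noise_mag):
        mag = max(abs(Xk) - Nk, 0.0)
        cleaned.append(mag * cmath.exp(1j * cmath.phase(Xk)))
    return idft(cleaned)

# Toy example: 'AE' tone at bin 5 plus 'spindle noise' at bin 12.
N = 64
signal = [math.sin(2 * math.pi * 5 * t / N) for t in range(N)]
noise = [0.5 * math.sin(2 * math.pi * 12 * t / N) for t in range(N)]
noisy = [s + n for s, n in zip(signal, noise)]

noise_mag = [abs(v) for v in dft(noise)]  # from a noise-only recording
recovered = spectral_subtraction(noisy, noise_mag)

err = max(abs(r - s) for r, s in zip(recovered, signal))
print(f"max reconstruction error: {err:.2e}")
```

In this idealized case the signal and noise occupy disjoint bins, so the residual matches the clean signal almost exactly; with real AE data the spectra overlap and the subtraction leaves artifacts.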
  • asked a question related to Acoustics and Acoustic Engineering
Question
8 answers
1) Does this have something to do with aesthetics? Because engineers like the relative uniformity of an equiripple response, as opposed to the more 'organic' response of other methods. 
2) Is it because an equiripple design allows the maximum error to be quoted as a design parameter? 
3) Does it have something to do with robustness and L-Inf norms? Because the maximum error is minimized.
4) ... (Others?)
The price paid for these features is that the overall error, or deviation from the desired response, is greater than what is achievable by simply minimizing the (integral or sampled) error function, possibly weighted by some function.
If it is possible to reduce the error in some parts of the spectrum, for an FIR or an IIR filter, then why not do it? Is non-uniformity really that objectionable?
I would be interested to hear your thoughts on this.
Relevant answer
Answer
In my opinion, equiripple filter designs are more robust, especially against random errors of all sorts (system errors, measurement errors).
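The L-infinity versus L2 trade-off behind point 3 can be seen even in the simplest possible "design" problem: approximating a data set by a single constant. The least-squares optimum is the mean, while the minimax (equiripple-like) optimum is the midrange; the data values below are arbitrary.

```python
data = [0.0, 0.1, 0.2, 0.9, 1.0]

mean = sum(data) / len(data)              # minimizes the sum of squared errors (L2)
midrange = (min(data) + max(data)) / 2.0  # minimizes the maximum error (L-inf)

def sse(c):
    """Total squared error of approximating every point by c."""
    return sum((x - c) ** 2 for x in data)

def max_err(c):
    """Worst-case error of approximating every point by c."""
    return max(abs(x - c) for x in data)

print(f"mean:     SSE={sse(mean):.3f}  max|e|={max_err(mean):.3f}")
print(f"midrange: SSE={sse(midrange):.3f}  max|e|={max_err(midrange):.3f}")
# The L2 design has the smaller total error; the minimax design has the
# smaller worst-case error: the same trade-off as equiripple versus
# least-squares FIR design.
```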