Questions related to Acoustics and Acoustic Engineering
I have already built the experimental impedance-tube setup for the same construction. Do I have to validate it?
I aim to analyse the permeability of aluminium to liquid gallium by measuring its velocity at individual grain boundaries using ultrasound. An ultrasound-based characterization technique is motivated by its higher temporal resolution.
But I am open to more suggestions regarding the problem statement.
Nowadays, I'm working on the sound absorption of porous materials. Experimentally, I have found that increasing the thickness of a porous material lowers the frequency at which maximum absorption occurs.
In other words, maximum absorption occurs at a lower frequency. I want to know the reason for this behaviour.
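The trend can be sketched with a quarter-wavelength estimate: a porous layer on a rigid backing absorbs most strongly when its thickness is about a quarter of the wavelength, so doubling the thickness roughly halves the peak frequency. This is a simplification (the true peak depends on the complex wavenumber inside the material, not the free-air value), but it captures the observed behaviour:

```python
# Quarter-wavelength estimate of the first absorption peak of a porous
# layer on a rigid backing. Simplification: uses the free-air speed of
# sound rather than the material's complex wavenumber.
c = 343.0  # speed of sound in air, m/s

def peak_frequency(thickness_m):
    """First absorption maximum, f ~ c / (4 d)."""
    return c / (4.0 * thickness_m)

for d in (0.025, 0.050, 0.100):  # 25, 50, 100 mm layers
    print(f"d = {d*1000:.0f} mm  ->  f_peak ~ {peak_frequency(d):.0f} Hz")
```

The thicker the layer, the lower the frequency whose quarter wavelength fits inside it, which is why the absorption maximum moves down in frequency.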
Elastic waves in solids can be coupled with both the electric and magnetic potentials; the speed of light is five orders of magnitude larger than the speed of elastic waves.
Hi everyone,
I am implementing an acoustic model in FLUENT. The geometry is 2D. Using the FW-H model, I am having difficulty understanding the following (I have already read the FLUENT manuals and one tutorial):
1) It is very important to define the Source Correlation Length accurately in a 2D geometry. I do not know this parameter or how to calculate it. My geometry is very simple: just a 10 x 20 mm rectangle.
2) What should the source zone and type be? In the 2D-cylinder tutorial, it is a cylindrical wall. My geometry has a 0.5 mm inlet at the bottom. Is that the source zone, and will the type be a velocity inlet?
3) Where should the receiver positions be? Do I have to monitor the time history as gas passes through the inlet? In one paper there was a single monitoring point located away from the inlet, which I could not understand.
4) What is the FW-H integral surface? I think the portion under investigation, which has a very fine mesh, is called the FW-H integral surface, but I am not sure about it.
Any advice, guideline, or tutorial will be highly appreciated. Thanks in advance.
Does anyone have any idea how to harvest acoustic energy from a line sound source? The line source is small, perhaps centimetre-scale, and the sound pressure is very low, on the order of microated pascals (uPa), I guess.
Hi, I need to model a PA system for a train station with EASE.
I have this doubt: while a speaker makes an announcement, does the sound contain all frequencies, or is it at one particular frequency?
I ask because EASE divides the electrical power equally among all frequency bands.
If there are 21 bands and the electrical power provided is 0.75 W, then each band gets 0.75/21 W. We use pink noise.
Since the power gets distributed, we are not getting the desired SPL.
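The level penalty from that equal split can be quantified directly: spreading a fixed power over N bands lowers each band's level by 10*log10(N) dB relative to putting all the power into a single band. A small sketch with the numbers from the question (the speaker model itself is not represented here):

```python
import math

# Equal split of electrical power across N frequency bands, as EASE does
# with pink noise. Values below are the ones given in the question.
P_total = 0.75   # total electrical power, W
N = 21           # number of frequency bands

P_band = P_total / N                 # power per band, W
drop_db = 10 * math.log10(N)         # per-band level drop vs. single band
print(f"per-band power: {P_band*1000:.1f} mW, level drop: {drop_db:.1f} dB")
```

So each band sits about 13 dB below what the full 0.75 W would produce at a single frequency, which is consistent with the SPL shortfall described.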
Sound Power Level of a speaker is computed from the sound power of the speaker.
Sound power from the speaker is the total sound energy emitted by a speaker per unit time.
How is this sound power related to the electrical power given to the speaker? For example, if I give 6 W of electrical power to the speaker, will the sound power from the speaker be 6 W?
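It will not be 6 W: loudspeakers convert only a small fraction of electrical power to sound. Typical direct-radiator loudspeakers have electro-acoustic efficiencies of roughly 0.5-2%; the 1% figure below is an illustrative assumption, not a property of any particular driver:

```python
import math

# Acoustic output from electrical input via an ASSUMED electro-acoustic
# efficiency. The 1% value is illustrative only.
P_electric = 6.0      # electrical power, W (from the question)
efficiency = 0.01     # assumed 1% conversion efficiency

P_acoustic = efficiency * P_electric          # radiated sound power, W
SWL = 10 * math.log10(P_acoustic / 1e-12)     # sound power level, dB re 1 pW
print(f"P_acoustic = {P_acoustic*1000:.0f} mW, SWL = {SWL:.1f} dB re 1 pW")
```

So 6 W electrical would radiate on the order of tens of milliwatts of sound; the rest is dissipated as heat in the voice coil and suspension.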
I am currently working on anthropogenic effects on the vocalisation of howler monkeys in the urban environment.
I record the vocalisations with a TASCAM DR-07MKii and I want to extract acoustic measurements such as frequency, pitch, and the rates and lengths of vocalisations.
Which program is easiest to use for a beginner to extract these variables?
I currently have: Sound Analysis Pro, Praat and Audacity.
All tips and additional information are very welcome.
We have multiple arrays of microphones, each array well separated from the others on the ground. Our task is to track an aircraft that happens to pass by. But the terrain is such that there is a lot of reverberation. Now:
1) Which prefiltering technique is best for removing reverberation effects?
2) Which weightings in GCC (Generalised Cross-Correlation) are most robust to reverberation, while at the same time not increasing the error by much?
Thank you very much.
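On point 2, the PHAT (phase transform) weighting is the one most often recommended under strong reverberation: it whitens the cross-spectrum so only phase information remains, which sharpens the correlation peak at the cost of amplifying bands with little signal energy. A minimal self-contained sketch (the synthetic delayed-noise test signal is an illustration, not aircraft data):

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Delay of `sig` relative to `ref` in seconds (positive: sig lags).

    GCC with the PHAT weighting: the cross-spectrum is normalised to
    unit magnitude, keeping phase only.
    """
    n = 2 * max(len(sig), len(ref))
    R = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    R /= np.abs(R) + 1e-12                 # PHAT: keep phase, drop magnitude
    cc = np.fft.irfft(R, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift]))  # lags -n/2 .. n/2-1
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Synthetic check: y is x delayed by 25 samples.
fs = 8000
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = np.concatenate((np.zeros(25), x))[:4096]
print(gcc_phat(y, x, fs) * fs)   # expect ~25 samples
```

In practice, PHAT is usually compared against ML/HT-style weightings, which trade some reverberation robustness for lower variance in noise; testing both on your recordings is the usual approach.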
Hello Research Gate Community!
I am a first-year Entrepreneurship Ph.D. student at Baylor and I am conducting a study on academic scientists' intentions to commercialize their research.
While a lot has been done on the topic of commercializing research, almost no one has examined it from the perspective of the scientist. Your perceptions of research commercialization matter and should be considered by university policy makers when designing the process.
I don’t think anyone likes long surveys and I know you have much more important things to do, so I designed the survey to take less than 10 minutes. If you are willing to take the survey, you can access it through the link below:
Thank you very much for the time and consideration!
Many applications in numerical acoustics occur in exterior domains. The advantage of benchmark problems is a closed-form solution (e.g. circumferentially harmonic radiation from an infinite cylinder, or radiation from a portion of an infinite cylinder). Different models, such as perfectly matched layers (PML) and absorbing boundary conditions (ABCs, like Engquist-Majda or Bayliss-Turkel), are used in the finite element method to approximate Sommerfeld's radiation condition. I am interested in a publication where these approaches are compared.
I'm looking for a way to determine the points at which a sound level meter should be located for measuring the noise of sources in workplaces (such as a chiller or compressor). Is there a standard method that completely explains the measurement procedure for noise-source reduction and control, i.e. for controlling personnel exposure?
In particular, what is the correct orientation of the microphone?
I need a database (dataset) for evaluating my article in the audio steganography field, covering for example echo hiding, spread spectrum (SS), and phase coding.
If anyone knows a related article to compare results against, that would be a great favour.
Thanks a lot for your help.
We linearise the viscoelastic material by assuming that the amplitude of the disturbance of an acoustic wave is very small.
Can anyone share the solved mass-balance and momentum-balance equations for a viscoelastic material, and the wave equation derived from them?
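As a starting point, here is a 1D sketch under the small-amplitude assumption stated above, using the Kelvin-Voigt model (an assumption; other viscoelastic models such as Maxwell or standard-linear-solid give different operators):

```latex
% 1D Kelvin--Voigt sketch, small-amplitude linearisation:
% constitutive law, momentum balance, and the resulting wave equation.
\begin{align}
  \sigma &= E\,\frac{\partial u}{\partial x}
          + \eta\,\frac{\partial^2 u}{\partial x\,\partial t}
  && \text{(Kelvin--Voigt stress)} \\
  \rho\,\frac{\partial^2 u}{\partial t^2} &= \frac{\partial \sigma}{\partial x}
  && \text{(momentum balance)} \\
  \rho\,\frac{\partial^2 u}{\partial t^2}
    &= E\,\frac{\partial^2 u}{\partial x^2}
     + \eta\,\frac{\partial^3 u}{\partial x^2\,\partial t}
  && \text{(damped wave equation)}
\end{align}
```

The extra mixed derivative term is what produces frequency-dependent attenuation; mass balance is satisfied trivially in this 1D displacement formulation.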
I am presently creating a transfer matrix to represent wave propagation in a multi-layered plate. I intend to use this transfer matrix for elastic materials.
On what grounds can rubber be considered a linear material for studying acoustic properties?
I am looking for the following properties of Thermoplastic Elastomers or TPE like materials:
1. Flow resistivity (Pa·s/m²)
I would also like to hear from experts about alternative ways to simulate the sound transmission loss of acoustic plugs made of TPEs.
I am trying to build several cases for solver and CAA-library code validation: a breathing sphere, a trembling sphere, and a baffled piston.
In all cases we can see a phase difference between the pressure signals. I explain it by inertia, as in a mass-spring system. Is that correct?
How can I modify the wave-equation results or my CFD solution to remove this kind of discrepancy? I need to remove it because I am trying to validate very different solvers (for example with mesh motion) and already observe reflections from boundaries (so I will then need to test some types of non-reflecting boundary conditions), and I suspect this phase difference could make further analysis harder.
Thank you for any help ("to read" recommendations are also appreciated).
The last figure is just for reference (baffled piston, 2D axisymmetric case).
The source SL is the sound pressure level expressed in dB referenced to 1 m away from the acoustic center of the sound source.
What determines the SL of a single organ pipe?
Can SL be related to something like the acoustic volume velocity or volume displacement? Is there a mathematical relationship between SL and one or more generalized source-characterizing parameters like these?
Can the SL for a given pipe be increased or decreased without making physical modifications to the pipe? How is SL then changed?
Is there a good reference dealing with the SL of organ pipes? I cannot find one.
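On the volume-velocity question: if the pipe mouth is treated as a simple (monopole) source, the pressure amplitude at distance r is |p| = rho0 * omega * |Q| / (4*pi*r), so SL at 1 m follows directly from the volume-velocity amplitude Q. This is an assumption that holds only when the radiating opening is small compared with the wavelength; the example pipe below is hypothetical:

```python
import math

# Monopole estimate of source level from volume-velocity amplitude.
# Assumption: the pipe mouth radiates as a simple source (opening small
# compared with the wavelength).
rho0 = 1.21          # air density, kg/m^3
p_ref = 20e-6        # reference pressure in air, Pa

def source_level(f_hz, Q_amp, r=1.0):
    """SL in dB re 20 uPa at distance r, for volume-velocity amplitude Q (m^3/s)."""
    omega = 2 * math.pi * f_hz
    p_amp = rho0 * omega * Q_amp / (4 * math.pi * r)   # monopole pressure amplitude
    p_rms = p_amp / math.sqrt(2)
    return 20 * math.log10(p_rms / p_ref)

# Hypothetical example: a 440 Hz pipe with Q amplitude of 1e-3 m^3/s.
print(f"SL ~ {source_level(440.0, 1e-3):.1f} dB re 20 uPa at 1 m")
```

Within this model, doubling the volume velocity raises SL by 6 dB, so SL can in principle change without physical modification of the pipe, e.g. through a change in blowing pressure that alters Q.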
I have decided to do research on noise pollution modelling. Could you recommend open-source software for modelling noise pollution around an airport?
Hi, I was wondering whether there is any difference between normalizing an impulse response in the time domain versus the frequency domain. My first thought was yes, because normalization in the time domain could change the ratio between the frequency components, but simulations in MATLAB suggest there is no difference. I am not sure.
Thanks for any advice,
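The MATLAB observation is expected: scaling an impulse response by a single scalar (e.g. peak normalisation) commutes with the Fourier transform, so the ratios between frequency components are unchanged. Only a frequency-dependent normalisation would alter the spectral balance. A quick check on a toy impulse response:

```python
import numpy as np

# Scalar normalisation commutes with the FFT: dividing h by a constant
# in the time domain equals dividing its spectrum by the same constant.
rng = np.random.default_rng(1)
h = rng.standard_normal(256) * np.exp(-np.arange(256) / 40.0)  # toy decaying IR

scale = np.max(np.abs(h))
h_norm = h / scale                     # time-domain peak normalisation
H_norm = np.fft.rfft(h) / scale        # same scalar applied to the spectrum

# Both routes give identical spectra (up to floating-point rounding):
print(np.allclose(np.fft.rfft(h_norm), H_norm))  # True
```

This follows from linearity of the Fourier transform: FFT(h/c) = FFT(h)/c for any nonzero constant c.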
I am looking for a way to estimate the particle acceleration of underwater sound produced by aquatic animals: something easy to use in the field and reliable, such as a vector sensor or an underwater geophone?
Synthetic data generation is an interesting area of research, but I have difficulty finding articles and textbooks on the topic. I would like an overview of definitions and frameworks for automatic synthetic data generation in any area, particularly in sound analysis.
If anyone is using an Acoustic Doppler Velocimeter, could you please explain how I can generate bubbles in the flume to measure velocities?
Is there any difference or similarity between the human eye and a camera in terms of noise level? Both use a similar technique, yet the camera captures noise while the eye doesn't. Is there any way to create artificial noise so that only the camera captures it but not the eye?
We are using a 2.5 m diameter x 1 m tank for calibrating hydrophones. We've observed that the results can be quite unrepeatable for the first month or so after we clean and refill the tank. We assume this effect is due to dissolved gases. This effect usually only lasts for a few weeks up to one month.
We are looking for ways to measure, characterize and explain this effect in the 10 kHz to 200 kHz region, and if possible, to evaluate available techniques for remediation (degassing) of the tank water to achieve a "standard" profile.
My eventual aim is to model the radiation of acoustic sources using a hybrid approach (different mesh gradients and domain sizes).
I would like to be able to create models that have changing flow velocities at a boundary. This would eventually lead to simulating audible sources (loudspeakers etc.), so the source velocity would need to change appropriately with each step of the computation, and the boundary would effectively need to act as both an inlet and an outlet.
Is this currently possible with only some minor tweaks of an arbitrary 2D case?
Has anyone come across a tool that already exists for this?
Any advice or suggestions are more than welcome.
Dear physicists, I would like to know whether there is a relation between the intensity of an acoustic wave signal and the ability of that signal to shatter a body. For instance, if a tuning fork is emitting an acoustic wave into body X (such as a glass cup) at a resonant frequency of body X (for instance the fundamental frequency), is there a threshold intensity (e.g. wave amplitude or wave power) of the wave emitted by the tuning fork that must be exceeded before body X will shatter? It seems well known that the shattering effect occurs at specific frequencies (the fundamental frequency or its harmonics); however, do the wave amplitude, power, or other material properties of the body play a part? If yes, what relation or formula governs these? Answers appreciated. Thanks.
a) See here: https://www.youtube.com/watch?v=BE827gwnnk4
b) It takes 300 Hz to break a wine glass
c) Speed of sound in air = 340 m/s
d) Wavelength = 340/300 = 1.13 metres
e) Speed of sound in glass = 3900 m/s
f) Wavelength = 3900/300 = 13 metres
g) Both the wavelengths in (d) and (f) are much greater than the size of the glass
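The wavelength arithmetic in (d) and (f) can be checked directly (using the speeds and frequency from the list; the speed of sound in glass varies with composition, so 3900 m/s is a representative value):

```python
# Wavelength = speed of sound / frequency, with the values from the list.
c_air, c_glass, f = 340.0, 3900.0, 300.0   # m/s, m/s, Hz

print(f"wavelength in air:   {c_air / f:.2f} m")    # ~1.13 m
print(f"wavelength in glass: {c_glass / f:.1f} m")  # 13.0 m
```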
So can you explain what is actually going on here? How does a wavelength bigger than the object itself appear to set up modes of a shorter wavelength in the glass? What is actually breaking the glass: the driving frequency or higher harmonics?
If it is a higher harmonic, then presumably we could also break the glass by driving at that frequency directly? Has this ever been demonstrated?
The mean processed signal-to-noise ratio was calculated to be 30 dB for the Raytheon sonar and 13 dB for the Klein sonar. Using the Receiver Operating Characteristic (ROC) curves (a figure with calculations from Urick, 1983, is attached as "ROC curves calculations.bmp"), and given the desired false-alarm probability of 0.5%, the probability of detection corresponding to the mean processed signal-to-noise ratio of each sonar was calculated at that false-alarm level. The probability of detection was calculated to be 0.998 for the Raytheon sonar (green lines on the attached plot) and 0.82 for the Klein sonar (yellow lines on the attached plot).
I tried to reproduce these calculations with MATLAB tools (the rocsnr function), but I cannot obtain the same results as from the paper plots. MATLAB gives essentially higher values: e.g., for the Raytheon sonar the probability of detection is always 1 (for a signal-to-noise ratio of 30 dB). The MATLAB code is relatively simple and is given below.
[Pd,Pfa] = rocsnr(30);
idx = find(Pfa==0.005); % find index for Pfa=0.005
The result for calculation looks as follows (I expected to get 0.998).
Empty matrix: 1-by-0
After getting this result I tried to increase the Pfa value, but the resulting Pd is 1.
[Pd,Pfa] = rocsnr(30);
idx = find(Pfa==0.01); % find index for Pfa=0.01
For the Klein sonar the probability of detection is almost 1 instead of 0.82 (for a signal-to-noise ratio of 13 dB). I cannot obtain a result for a false-alarm probability of 0.5%; for 0.1% I get 0.999967.
[Pd,Pfa] = rocsnr(13);
idx = find(Pfa==0.005); % find index for Pfa=0.005
Empty matrix: 1-by-0
[Pd,Pfa] = rocsnr(13);
idx = find(Pfa==0.01); % find index for Pfa=0.01
What is the reason for such inconsistency between the paper-plot calculations and the "efficient" automatic MATLAB calculations for the same input data?
The original figure for ROC curves (Urick,1983) without additional lines plotted is attached too (file name is "ROC curves (Urick, 1983).bmp").
The links to MATLAB documentation related to ROC curves are given below.
It is interesting that ROC curves were first introduced in MATLAB R2011a.
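Two separate issues may be at play. Numerically, `find(Pfa==0.005)` compares floating-point values for exact equality, and rocsnr's Pfa grid need not contain exactly 0.005, hence the empty matrix; a nearest-index lookup fixes that. Substantively, Urick's ROC curves are parameterised by the detection index d (a linear, power-like quantity under Gaussian statistics), whereas rocsnr interprets its argument as SNR in dB for a specific signal model, so the same "30" can mean very different things. A Python sketch of both points (the Gaussian model below is an assumption about how the Urick curves are constructed):

```python
import numpy as np
from scipy.stats import norm

# Gaussian-statistics ROC: Pd = Q(Q^{-1}(Pfa) - sqrt(d)),
# with detection index d a LINEAR ratio, not dB.
def pd_gaussian(d, pfa):
    return norm.sf(norm.isf(pfa) - np.sqrt(d))

# If "30 dB" is converted to a linear ratio of 1000, Pd saturates at 1,
# mirroring the MATLAB behaviour described above:
print(pd_gaussian(10**(30 / 10), 0.005))   # ~1.0

# Inverting the paper's reading (Pd = 0.998 at Pfa = 0.005) gives the
# implied detection index, which is close to 30 as a LINEAR value:
d = (norm.isf(0.005) - norm.isf(0.998))**2
print(d, 10 * np.log10(d))                  # ~29.7 linear, ~14.7 dB

# Floating-point grids should be searched by nearest index, never with ==:
pfa_grid = np.logspace(-10, 0, 101)
idx = np.argmin(np.abs(pfa_grid - 0.005))
print(pfa_grid[idx])
```

The implied d of about 30 (linear) suggests the value read from the Urick curves may be the detection index itself rather than an SNR in dB, which would explain the large discrepancy; checking the axis labelling of the original figure against this would confirm it.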
Hi, friends. Maybe what I will ask is very basic for you, but even so I'd like to ask it. Many of you may have read Elements of Acoustic Phonetics by Ladefoged (2nd ed.). On page 50 of this book, in the last paragraph, where Ladefoged explains what we can see in a spectrum (with an image that I share below), he says: "as we can see from the line spectrum, much of the energy of the complex wave is contained in the one component at 1000 Hz."

I am not a native English speaker, and my first interpretation was that much of the sound energy is concentrated in the 1000 Hz component. But the chart says otherwise: the component with the largest amplitude, that is, the one carrying most of the sound's energy, is at 500 Hz, isn't it? So, was there a mistake in the book? I think there was. The following sentence in the same paragraph would support that impression: Ladefoged says "the additional components which have to be combined with it [the 1000 Hz component] have very small amplitudes." But, looking at the chart, are there components with amplitudes smaller than that of the 1000 Hz component? There are not, right? So, what is the problem here? Is there something I'm not considering?

A friend told me the book says the energy is "contained" (restrained), not "concentrated", in one component. In this case, they seem synonymous to me: what is concentrated is somehow contained in a virtual space or area, isn't it? But even granting that much of the energy is contained at 1000 Hz, when I imagine energy being contained in these terms I imagine high pressure, not low pressure, and low pressure is precisely what the chart shows at 1000 Hz: a component with small amplitude. Therefore, from my point of view, the 1000 Hz component does not indicate that much of the energy is contained there.
I'd appreciate your help on this. Is there a mistake in Ladefoged's book, or is there something I have missed? Thanks in advance! Regards!
I am a young researcher in aeroacoustics working with the Kirchhoff integral in supersonic flow. Because of the Mach cone I face a singular integral; a simple form of these integrals is shown in the attached figure, and I would like to solve this problem.
Who can help me?
Best regards, and thank you for your time.
While simulating sound transmission loss through a perforated plate, some research articles assume that sound passes through the holes and is completely blocked by the plate, which is treated as rigid.
But in reality sound can also travel through the plate material: the vibration of the material contributes to transmitting sound. So I would like to ask how valid it is to assume the plate is a rigid wall and that only the holes transmit sound.
I am working on an Ocean Acoustic Propagation Model and to validate my model I am using the KRAKEN (http://oalib.hlsresearch.com/Modes/AcousticsToolbox) Code.
During this process I observed that the depth modes obtained using KRAKEN (via plotmode.m) exactly match the analytical modes (Jensen et al.) for an isovelocity waveguide at source frequencies of 50 Hz and 1000 Hz.
But when I tried to match the pressure at a fixed depth (using plottlr.m), I found a normalization difference between the KRAKEN model and the analytical method at the lower frequency of 50 Hz; at a higher frequency such as 1000 Hz, a small phase difference in the pressure plot is also observed.
Can someone please enlighten me on this pressure issue. Thank you in advance.
I have attached the pressure comparison plots.
I've been using FreeFEM++ to solve some simple acoustics problems. The code works very well for a single connected domain, but I can't seem to find an example that handles multiple regions, say glass in one region and air in another.
Why is there no standing-wave field in the middle?
How will the particles move when the standing-wave field exists only at the top and bottom of the cavity?
I made an impact on a plate structure with an impact hammer. I obtained the theoretical dispersion graph for the plate from the Rayleigh-Lamb equation. As I am very new to this field, I am confused: do the frequency values in the dispersion graph represent the frequency content of the impact hammer's time response, or the frequencies I get from the accelerometer's response?
Thanks in advance.
For simulation purposes: I have an incoming optical pulse whose envelope in time is F(t) = E(t), with corresponding frequency-domain form G(w). If it passes through an AOM with shift frequency WSHIFT, how is the original pulse form affected?
Am I right ?
Let the scheme be: Pulse1 -> AOM -> Pulse2, i.e.:
Pulse1 = E(t), Pulse2 = E(t)*exp(1i*WSHIFT*t), and in the frequency domain this gives:
G(w) -> G(w - WSHIFT), a pure frequency shift (with the exp(-1i*w*t) transform convention; note there is no extra factor of t in the argument).
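This shift property can be verified numerically on a toy envelope. Multiplying a time-domain signal by exp(+1i*W*t) translates its spectrum up by W; the Gaussian envelope and the 40 Hz shift below are arbitrary illustrative choices, and the optical carrier and AOM diffraction efficiency are left out as simplifications:

```python
import numpy as np

# FFT demonstration of the modulation (frequency-shift) property:
# multiplying by exp(+1i*w_shift*t) moves the spectrum up by w_shift.
n, dt = 4096, 1e-3
t = (np.arange(n) - n // 2) * dt                 # centred time axis, s
freqs = np.fft.fftfreq(n, dt)                    # frequency axis, Hz

f_shift = 40.0                                   # assumed AOM shift, Hz
pulse1 = np.exp(-t**2 / (2 * 0.05**2))           # E(t): Gaussian envelope
pulse2 = pulse1 * np.exp(1j * 2 * np.pi * f_shift * t)

f_peak1 = freqs[np.argmax(np.abs(np.fft.fft(pulse1)))]
f_peak2 = freqs[np.argmax(np.abs(np.fft.fft(pulse2)))]
print(f_peak1, f_peak2)   # spectral peak moves from 0 Hz to ~40 Hz
```

The envelope shape G is unchanged; only its centre frequency moves, which is exactly G(w) -> G(w - WSHIFT).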
As you can see from the attached diagram, the particles are concentrated in the nodal planes at 0.008 s. But at 0.1 s, the particles are not as concentrated in the nodal planes as at 0.008 s (i.e., there are no red colours; the red is replaced by yellow). Why do the suspended particles become less concentrated in the nodal planes as time goes by?
This is the manipulation of suspended solid particles using an ultrasonic standing-wave field. The attached report is a brief report from a COMSOL V5.2 simulation.
Could someone please let me know the advantages of using an apodized IDT design over "reducing the number of fingers in the IDT" while designing a wideband SAW device ?
If I use the apodized transducer as a receiver of the acoustic wave, then from the equivalent-circuit model of the IDT my conclusion is that, for the same bandwidth, the short-circuit current will be larger for an apodized design than for a wideband design using a reduced number of fingers. Is my understanding correct?
I will conduct a series of experiments on dehydration of minerals triggering earthquakes using a modified Griggs-type deformation apparatus, but I do not know how to install the acoustic emission sensor.
I believe ODEON is not efficient at simulating the articulation of a surface. Are there other software suggestions that can also simulate edge diffraction?
I wish to model an ultrasonic horn in ANSYS but am having difficulties. I have an input displacement of 0.0016 mm and wish to apply it at the base of the horn. Should I use harmonic displacement or nodal displacement? Thanks.
I would also be most grateful if anyone who works within the field of infrasound would contact me to discuss an exciting collaboration.
The source is fixed and waveforms are recorded at many stations. I want to determine the delay time between two stations. I know some ways to measure similarity, such as cross-correlation, semblance, and dynamic time warping. My biggest problem is that my data have a strong noise background (car noise, walking noise, etc.). If two stations are close, I can use cross-correlation to obtain the delay time, but the ray paths are not the same and there are site effects, so using CC may be risky. Anyway, I don't know what to do or how to do it. Can anyone help me?
I did a High Resolution Melting (HRM) run, but all peaks show this noise when there should be only one big peak. The black line is the negative control and the green lines are the same sample.
Hello, I need to measure cavitation in a water tank (caused by ultrasound) using a hydrophone, but I am stuck on interpreting the data measured by the hydrophone. Is there any journal paper or online help on interpreting acoustic cavitation measured with a hydrophone? Thanks.
Does anyone have experience of curtain walling that spans two rooms and needs to achieve 65 dB DnT,w + Ctr?
While modelling the aeroacoustics of a fan using DDES, I have to use a transient formulation. Fluent recommends the bounded second-order implicit scheme. After using it, I have observed a lot of fluctuation in the acoustic signal. Can anybody explain why this is so?
I need a detailed study of the particle sizes that can be levitated and the corresponding required ultrasonic frequencies.
How do you determine the maximum speed (ensuring no collisions) of an omnidirectional robot with a ring of n sequentially firing ultrasonic sonar sensors, given the frequency f of each sensor and the maximum acceleration a?
The robot is in a world filled with sonar-detectable fixed (non-moving) obstacles that can only be detected at x metres and closer.
Is the maximum velocity the velocity that can be attained by the robot within the "ring cycle time"?
The way I approached it,
Consider: f = 70 kHz, a = 0.5 m/s², x = 5 m, n = 8.
Now consider an obstacle at the furthest detectable distance, 5 m,
and take the speed of sound as 300 m/s.
Time taken for the sensor to receive the echo = 2x/c = (2 × 5)/300 = 1/30 s.
Now as there are 8 sensors fired sequentially, total time taken= 8/30 seconds.
Therefore, ring cycle time=8/30 seconds.
Also, the overall update frequency of each sensor = 30/8 Hz.
Now, is the maximum velocity the velocity attained by the robot in t = 8/30 s?
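Not quite: the velocity attained while accelerating for one ring cycle is not in itself a safety bound. One common way to close the problem is a worst-case braking argument: an obstacle may appear just after its sensor fires, so it can go unseen for up to one ring cycle, and the robot must still be able to stop within the detection range. A sketch with the numbers above (all values are the question's assumptions):

```python
import math

# Worst-case speed bound for a ring of sequentially fired sonars.
# Assumed values from the question: detection range x, braking
# deceleration a, n sensors, speed of sound c.
c, x, a, n = 300.0, 5.0, 0.5, 8

t_ping = 2 * x / c          # round trip to the furthest detectable obstacle
T_ring = n * t_ping         # ring cycle: all n sensors fired in sequence

# Safety condition: distance travelled while "blind" for T_ring plus the
# braking distance must fit inside the detection range:
#   v*T_ring + v^2/(2a) <= x   ->  solve the quadratic for v.
v_max = -a * T_ring + math.sqrt((a * T_ring)**2 + 2 * a * x)
print(f"t_ping = {t_ping:.4f} s, T_ring = {T_ring:.4f} s, v_max = {v_max:.2f} m/s")
```

This reproduces t_ping = 1/30 s and T_ring = 8/30 s from the working above, and gives a maximum safe speed of roughly 2.1 m/s rather than the speed reachable by accelerating for one ring cycle.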
The sound generators of string and non-electronic keyboard instruments are nearly always coupled via the soundboard or organ case. Sympathetic resonance between two notes is strongest for pure intervals. With tempered intervals, this effect should be weaker, but in addition the coupling should "pull" the two frequencies away from the frequencies of the notes played separately.
People such as Werckmeister who developed tempering systems defined the intervals in terms of length ratios on the monochord. Nowadays, published data are given in cents, presumably on the assumption that the length ratios are an accurate indication of the frequency ratios. I'd like to know whether this assumption ought to be questioned.
A supplementary question would concern possible differences between tuning a harpsichord or piano note-by-note with the aid of an electronic device, and doing it the difficult traditional way by listening to pairs of notes played together.
I am having a problem post-processing a test result obtained with GE UNIK 5000 transducers sensing the static pressure at different locations of an exhaust system. What was obtained was a fluctuating pressure.
How can I obtain the average pressure? Is there a standard approach?
A snapshot is attached for reference.
We have a 36-point circular microphone array from BSWA. We would like to build the data acquisition on a National Instruments hardware platform, using MATLAB or similar software. Does anybody have similar experience? Thanks.
I want to monitor the parameters of gas-solid flow by acoustic means, and I have found only a few researchers using audible-range acoustics. Is anyone working on this, or can you give me some guidance? Thanks.
Basically, I want to match the subjective assessment of a vehicle sound's richness with an appropriate objective measurement.
I applied a spectral subtraction technique to an AE signal by subtracting the recorded noise spectrum from the acquired AE spectrum (NB: the AE data contain both AE and spindle-noise influence). The IFFT of the residual shows higher amplitudes in the time domain than the time-domain amplitude of the signal plus noise. Why is that? Does it mean that lower frequency content results in higher amplitudes in the time domain, and is there any explanation to support this inverse proportionality?
1) Does this have something to do with aesthetics, because engineers like the relative uniformity of an equiripple response, as opposed to the more 'organic' response of other methods?
2) Is it because an equiripple design allows the maximum error to be quoted as a design parameter?
3) Does it have something to do with robustness and L-Inf norms? Because the maximum error is minimized.
4) ... (Others?)
The price paid for these features is that the overall error, i.e. the deviation from the desired response, is greater than what is achievable by simply minimizing the (integral or sampled) error function, possibly weighted by some function.
If it is possible to reduce the error in some parts of the spectrum, for an FIR or IIR filter, then why not do it? Is non-uniformity really that objectionable?
I would be interested to hear your thoughts on this.
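The trade-off in question can be made concrete by designing the same lowpass filter both ways: equiripple (Parks-McClellan) minimises the maximum error, least squares minimises the integrated squared error. The band edges below are arbitrary illustrative choices, not tied to any particular application:

```python
import numpy as np
from scipy.signal import remez, firls, freqz

# Same lowpass spec, two optimality criteria:
# remez -> minimax (L-inf) error; firls -> least-squares (L2) error.
numtaps = 101
bands = [0, 0.40, 0.50, 1.0]             # edges, normalised to Nyquist = 1

h_remez = remez(numtaps, bands, [1, 0], fs=2)
h_ls = firls(numtaps, bands, [1, 1, 0, 0], fs=2)

w, H_remez = freqz(h_remez, worN=4096, fs=2)
_, H_ls = freqz(h_ls, worN=4096, fs=2)

desired = np.where(w <= 0.40, 1.0, 0.0)
in_band = (w <= 0.40) | (w >= 0.50)      # exclude the transition band

e_remez = np.abs(np.abs(H_remez) - desired)[in_band]
e_ls = np.abs(np.abs(H_ls) - desired)[in_band]

# Equiripple wins on peak error; least squares wins on total error:
print(f"max error:          remez {e_remez.max():.2e}, firls {e_ls.max():.2e}")
print(f"mean squared error: remez {np.mean(e_remez**2):.2e}, firls {np.mean(e_ls**2):.2e}")
```

This illustrates why the equiripple criterion is attractive when a hard worst-case spec (e.g. minimum stopband attenuation in dB) must be guaranteed, and why least-squares designs are preferable when total in-band error matters more than the single worst frequency.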