Questions related to Acoustics
Dear ResearchGate members,
On the one hand, there is a theory giving the reflection/transmission coefficients when acoustic plane waves propagating in a medium (rho0, c0) reach an object of finite thickness (rho1, c1) at normal incidence. This theory gives the thicknesses (n*lambda1/2) at which the object is theoretically acoustically transparent (of course, the width of the reduced-reflection band depends on the impedance mismatch between the two media) and the thicknesses ([2n-1]*lambda1/4) at which the object is maximally reflective.
On the other hand, there is also a theory giving the variation of the reflection coefficient with the angle of incidence of acoustic plane waves at the interface between two semi-infinite media (rho0, c0; rho1, c1). Above a critical angle (which depends on the sound-speed ratio of the two media), the reflection is theoretically total.
Now, here is my question: what is the behavior of the acoustic waves when the two phenomena are considered at the same time, i.e. when the plane waves reach the surface at oblique incidence and the reflecting medium has a finite thickness (an acoustic mirror)?
From experiments and simulations, it appears that above the critical angle the reflection is not total, even at a mirror thickness for which the reflection should theoretically be maximal.
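For reference, here is a minimal numerical sketch of how I look at it (my own, treating the mirror as a fluid layer, i.e. ignoring shear waves and losses, with assumed water/aluminium-like values): above the critical angle the field inside the layer becomes evanescent, so a layer of finite thickness still lets some energy tunnel through and |R| stays below 1.

```python
import numpy as np

# Minimal sketch: oblique-incidence reflection from a single fluid layer (rho1, c1,
# thickness d) embedded in a surrounding fluid (rho0, c0). Lossless fluids assumed;
# Snell's law gives a (possibly complex) refraction angle above the critical angle.
def reflection_magnitude(theta0, f, rho0, c0, rho1, c1, d):
    w = 2 * np.pi * f
    sin1 = (c1 / c0) * np.sin(theta0)                 # Snell's law
    cos1 = np.sqrt(1 - sin1**2 + 0j)                  # complex above the critical angle
    k1z = (w / c1) * cos1                             # normal wavenumber in the layer
    Z0 = rho0 * c0 / np.cos(theta0)                   # normal impedance of the outer medium
    Z1 = rho1 * c1 / cos1                             # normal impedance of the layer
    # input impedance of the layer backed by the same outer medium (transmission-line form)
    Zin = Z1 * (Z0 + 1j * Z1 * np.tan(k1z * d)) / (Z1 + 1j * Z0 * np.tan(k1z * d))
    return abs((Zin - Z0) / (Zin + Z0))

# assumed example values: water around an aluminium-like fluid layer, quarter-wave thick at 1 MHz
f, rho0, c0, rho1, c1 = 1.0e6, 1000.0, 1480.0, 2700.0, 6320.0
d = c1 / f / 4                                        # lambda1/4: maximally reflective at normal incidence
for deg in (0.0, 10.0, 20.0, 40.0):                   # critical angle is about 13.5 deg here
    print(deg, reflection_magnitude(np.radians(deg), f, rho0, c0, rho1, c1, d))
```

For a real solid mirror, mode conversion into shear waves is an additional reason why the reflection above the (longitudinal) critical angle is not total, and this fluid-only sketch does not capture that.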
Thanks a lot in advance.
I am currently selecting a research topic for my doctoral studies and have a keen interest in the field of acoustic metamaterials. I would really appreciate your expertise and suggestions for choosing a topic in this field.
Dear experts,
How can Distributed Acoustic Sensing be used for MASW? If it can be done, can the geophones be planted vertically on the surface for subsurface imaging?
I would really appreciate knowing whether this is feasible.
In some work areas it is difficult to distinguish tuff from dacite using the natural gamma-ray curve and the acoustic travel-time curve, since both lithologies show low GR and low DT values.
It is well-known from the literature that there exist diverse acoustic waves in compact astrophysical objects, such as white dwarfs, neutron stars, etc. Can anyone please give us a concise glimpse of the state-of-the-art astronomical observations of such existent acoustic wave spectra?
The eigenmodes of a room are complex. How can the complex eigenmodes be removed? Is there a way to remove the acoustic damping by air in the solver settings?
#COMSOL #ROOM ACOUSTIC
We have seen discussions in the literature of the optical mode and the acoustic mode in FMR spectra.
How can one identify which mode is acoustic and which is optical?
Kindly clarify.
It seems to me that everyone just refers to J. W. Goodman's book "Speckle Phenomena in Optics: Theory and Applications". I agree that this is a good book; however, in my opinion there are some differences in ultrasound.
I would like to find answers to the questions:
- How realistic is it to assume that the ultrasound signal has a single spectral component (the monochromatic-light assumption in Goodman)? Is this assumption required?
- What is the effect of the ultrasound transducer and of the transformation from pressure to RF signal?
Thank you for your answers.
Hi All,
I wonder how I can use infinite elements in CEL (Coupled Eulerian-Lagrangian) models in Abaqus to absorb waves at the boundaries of the soil domain. The only available element type in CEL is EC3D8R (Eulerian hexahedral 3D elements with reduced integration), and acoustic or infinite elements are not available. As an alternative, I created a non-Eulerian (deformable) part, tied it to the Eulerian part, and intended to assign infinite elements to this (non-Eulerian) part in the input file. However, I encountered the error "Eulerian elements cannot be tied to non-Eulerian elements".
Could you please, provide some advice?
Thank you very much.
Pourya
Discussing sand monitoring on deep-water subsea wells: is there any threshold that we can refer to when reading an ASD (Acoustic Sand Detector)?
When can we say that the reading represents sand rather than just the fluid flow?
Thanks, and please correct me if there is anything wrong with my terminology.
- Isotropic metals in a stress-free state have a stiffness matrix. Under the action of prestress, an equivalent stiffness matrix containing the third-order elastic constants l, m, n can be established based on the acoustoelastic effect. Its acoustoelastic constants in the natural coordinate system are shown below. Are these formulas correct? Where can I find the formulas for these coefficients?
Specifically: frequency or time domain? Acoustic or elastic media? With attenuation or without? Using CPUs or GPUs? ...
Our research established, for the first time, that over the entire frequency range of acoustic waves the propagation range, measured not in units of distance but in cycles, is constant: the same number of cycles corresponds to the same absorption of acoustic energy. Because acoustic wavelengths differ, the propagation distance of sound is determined by the wavelength, which, under conditions of practically no sound dispersion in water, has a statistical relationship with the wave frequency. This has given researchers the mistaken impression that the propagation distance of sound depends on frequency. The correlation in this case, however, does not reflect a cause-and-effect relationship between the frequency of acoustic waves and their propagation distance. Thus, for the first time, a basis is presented for completely rethinking the theory of the absorption of acoustic energy in water.
It should be noted that there are indications that the obtained regularity can be extended to transverse waves in water. This is suggested by the fact that, unlike shorter wind waves, long ocean surface (transverse) swell waves propagate over distances of more than 1000 km. Tsunami waves, whose wavelength exceeds that of swell, propagate over tens of thousands of kilometres. Seismic waves propagating in the solid shell of the Earth, with wavelengths close to those of tsunami waves, also propagate for tens of thousands of kilometres. In the future, different types of waves propagating in different media can be considered, which does not exclude the possibility of confirming the general (universal), physically justified and understandable regularity of wave attenuation that we have put forward.

Recently I have wanted to study the acoustoelastic effect of Lamb waves in composites and use this property to measure the stress in the carbon-fibre-reinforced composite T300/QY8911. The third-order elastic constants of the material are required for the acoustoelastic modelling. However, I have only found the relevant parameters for T300/5208 in the literature, and no parameters for T300/QY8911. If anyone knows these parameters and is willing to share them, I would appreciate it!
Hi all,
Is there any tutorial on acoustic Fresnel lens simulation in COMSOL? I want to start simulating a Fresnel lens in COMSOL.
I am working on a project in which I need to analyse acoustic data taken from voice recordings of a person in different rooms. I wanted to know whether there are any parameters that are independent of the environment and therefore do not change with the room setting (e.g. the standard deviation of the pitch). I would also appreciate some references if you have any.
many thanks!
I wish to perform acoustic phonetic analysis of oral and nasal vowel phones in a language. I am aware that an F1-F2 plot helps in plotting oral vowel phones, but I am unsure whether the same can be used for nasal vowels. Please suggest some good reading material on the acoustic phonetic study of nasal vowels.
Acoustics inside a mosque is affected by different criteria such as form, space design, materials, insulation, air conditioning, ...
What do you think?
Dear Researchers,
I am looking into deriving the particle displacement for pressure acoustics in the frequency domain or in the time domain. If I solve an acoustics problem in COMSOL I get the pressure field and derived variables such as the acoustic velocity and the acoustic acceleration. How could I derive the acoustic displacement from these variables? Displacement is the time integral of velocity, and in simple cases it is easy to derive, but in more complex cases I am lost as to how to solve for the particle displacement.
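For the simple cases I mean something like this, a minimal sketch assuming a harmonic time dependence exp(j*omega*t), so that displacement = velocity/(j*omega) in the frequency domain and a running time integral in the transient case (generic arrays, not COMSOL variable names):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid  # called cumtrapz in older SciPy

# Frequency domain: u(omega) = v(omega) / (j*omega) for a time convention exp(j*omega*t)
def displacement_freq(v_complex, f):
    return v_complex / (1j * 2 * np.pi * f)

# Time domain: displacement is the running time integral of the velocity samples
def displacement_time(v_samples, fs):
    return cumulative_trapezoid(v_samples, dx=1.0 / fs, initial=0.0)
```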
Can anyone point me in the right direction?
Conceptually, as well as the source of wave propagation and the wave equation.
Dear RG community,
I'm starting a photoacoustic project, and I need to acquire some ultrasonic receivers.
Acoustics is not my expertise, so I'm asking here for help! :D
Where to buy, what is the price range, and what to pay attention to?
I am looking for the maximum bit rate and communication distance for underwater acoustic communication. Please suggest some related papers.
Microphone arrays are heavily used in acoustic techniques such as detection, DOA estimation, target tracking and so on. I'm wondering whether there is a user-friendly code or toolbox that can be used for classroom demos in an acoustics course for master's/bachelor's students.
We've recently been investigating alternatives to glass/plastic particles for Acoustic Doppler Velocimeter seeding material that can be disposed of without environmental concerns and are less costly. One alternative we have tried is kaolin clay, which has seemed to be quite effective in initial tests. I was wondering whether anyone else has experience using this or if there are any other alternatives that we should consider?

Explain to me the parameters that can influence the acoustic and elastic properties of materials.
I have studied the literature and found that combining a Helmholtz resonator and membrane results in a negative effective mass density and a negative effective bulk modulus.
Typically, in double negative acoustic metamaterials, we find two resonant frequencies, one due to the Helmholtz resonator and one due to the membrane.
Is there any possibility that we get only one resonant frequency in double negative acoustic metamaterials?
I want to model a vibrating solid (an ultrasonic horn) in a liquid-filled structure and observe the acoustic pressure field in the liquid.
I used Solid Mechanics (frequency domain) and Laminar Flow (stationary), but it didn't work.
I think it is because the solid mechanics study is in the frequency domain while the fluid mechanics study is stationary.
If I don't use a fluid mechanics interface, I can't select the properties of the liquid.
Please help me.
Can I get some comments or examples?
Can anyone recommend a usable resource/tool for species detection from acoustic data, please? A middle ground between a phone app and something like Arbimon would be about the right level. Briefly, a phone app lacks flexibility in data collection, i.e. it can't be left out all night or left running for long periods. However, Arbimon is not that useful to anyone below ecologist level, as it only tells the user what species is present if the user completes their own validation, i.e. the user has to identify all species themselves. I'm looking for something that can analyse data from an AudioMoth, uploaded by citizen-scientist participants, and actually identify species.
I'm preparing a model of the phenomenon of leak detection by acoustics in a gas pipeline. The case is transient and involves injecting an acoustic wave signal into the domain. After investigating the results, I found that they are mixed with a reflected wave coming from the end of the pipe, so I need to eliminate the reflected waves by means of an end condition that absorbs them. NOTE: I have already used a perfectly matched layer for frequency-domain (steady-state) studies, but it does not work with transient studies.
Thanks for your support.
Could anybody tell me the name of the acoustic counterpart of RFID? Any references on that? Thank you!
Dear RG specialists, I am wondering whether there is a phase transition to a localized transverse-phonon sort of coherent state. We know that there is one for the diffuse photon field when light scattering becomes strong enough (the frozen-light limit [1,2]).
This question arises only for transverse waves [3,4].
Following [1] Frozen light, Sajeev John, Nature, volume 390, pp. 661–662, 1997:
Are there strong interference effects, due to the wave-like nature of transverse phonons, which severely obstruct their diffusion?
We already know that electrons & photons can be localized, please see the following articles & references therein:
Conference Paper Atomistic Visualization of Ballistic Phonon Transport
I am required to write a report on the relation between the domain of mechanical waves and the domain of electromagnetic waves.
I understand that the two are fundamentally independent of one another, so I am confused by the request, and I am willing to learn whether a direct relation (in the physics of the two types of waves) exists.
The sensor is designed for a maximum temperature of 130°C, while the pipe surface, depending on the application, reaches 200-450°C. The insulating pad must provide satisfactory acoustic transmission of the ultrasonic signal in the frequency range 0.2 to 5 MHz and should be several mm thin. Perhaps some kind of cooling layer in the required dimensions could be used. The contact surface of the sensors has dimensions of up to 40x80 mm.
I have, as the outcome of data collection, three groups of music that have been used for different purposes. I have collected many different types of information about the individual pieces of music (pitch range, speed, acoustic properties, perceptual ratings of emotional content etc.) and I wish to determine what combination of these variables best explains the original grouping of the music. What statistical analysis method should I use? Thanks in advance.
I am at a loss as to how to start my acoustic analysis. In fact, I downloaded many software packages and none of them could serve the purpose.
Hi,
I'm looking for a small (less than 0.5m of max length) underwater sound source to do some experimental measurements, any recommendations?
I'm looking for something robust and reliable but with prices ranging from lowcost to lab equipment.
All the best
Which is an efficient model for generating band gaps in acoustic metamaterials?
Hello,
I just want to discuss a few things about the non-reflecting boundary condition (NRBC) function in LS-DYNA. In fact, I am studying a fluid-structure interaction problem using the LS-DYNA explicit finite element code. My main interest is to study the propagation of a shock wave as well as its interaction with the structure. So I modelled the fluid part using solid acoustic elements (along with *MAT_ACOUSTIC) in LS-DYNA and then applied the pressure to the fluid elements. But the problem is that I cannot introduce both the loading and the boundary condition (NRBC) on the same segment at the same time, and as a result the non-reflecting boundary condition that I introduced does not seem to be working.
So, I would like to know if there is any way to activate the non-reflecting boundary at a later time step different from the load application time (preferably at the end of my loading phase).
Thank you very much.
Regards
Ye Pyae Sone Oo
As we know, Maa et al. pioneered the theoretical calculation of acoustic impedance and absorption by introducing the concept of micro-perforations. When preparing theoretical predictions for experimentally generated data, many of the calculations in the literature contradict each other and seem a little confusing, with imaginary terms and differential equations.
Are there any suggestions for simplified calculations and methods that can be applied to theoretical modelling of MPP acoustics for determining the sound absorption coefficient?
Experimental results for the sound absorption coefficient of materials have mostly been found to agree well with the theoretical models, but the calculations look confusing, with imaginary terms and differential equations.
Are there any simplified calculations and methods for theoretical modelling in acoustics?
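For discussion, this is the kind of simplified calculation I have in mind: a minimal sketch of the commonly quoted approximate form of Maa's normal-incidence MPP model with a rigid-backed air cavity (my own implementation with assumed air properties, so please check it against the original papers before trusting numbers). Everything is normalized by rho0*c0, which is where the imaginary terms in the literature come from.

```python
import numpy as np

rho0, c0, eta = 1.21, 343.0, 1.81e-5     # air density, sound speed, dynamic viscosity (assumed)

def mpp_absorption(f, d, t, sigma, D):
    """f: frequency [Hz], d: hole diameter [m], t: panel thickness [m],
    sigma: perforation ratio [-], D: cavity depth [m]."""
    w = 2 * np.pi * f
    k = d * np.sqrt(w * rho0 / (4 * eta))                      # perforate constant
    r = 32 * eta * t / (sigma * rho0 * c0 * d**2) * (
        np.sqrt(1 + k**2 / 32) + np.sqrt(2) / 32 * k * d / t)  # normalized resistance
    wm = w * t / (sigma * c0) * (
        1 + 1 / np.sqrt(9 + k**2 / 2) + 0.85 * d / t)          # normalized mass reactance
    x = wm - 1 / np.tan(w * D / c0)                            # add cavity reactance -cot(wD/c)
    return 4 * r / ((1 + r)**2 + x**2)                         # normal-incidence absorption

f = np.linspace(100, 4000, 500)
alpha = mpp_absorption(f, d=0.4e-3, t=0.5e-3, sigma=0.01, D=0.05)   # assumed MPP geometry
print(f[np.argmax(alpha)], alpha.max())                             # peak frequency and peak absorption
```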
I am using two ultrasonic assemblies for cleaning purposes and I want to increase the cavitation intensity:
(1) a ceramic transducer with a diameter of 30 mm and a horn with an end diameter of 8 mm;
(2) a ceramic transducer with a diameter of 40 mm and a horn with an end diameter of 8 mm.
Since the input power of (2) is higher than that of (1), I expected the sound pressure of (2) to be higher than that of (1), but it was not.
I think it is because the acoustic impedance of (2) is much lower than that of (1) (even though the power is higher, the sound pressure can be lower since Z = p/v is lower).
1. Am I misunderstanding something?
2. If not, how can I increase the acoustic impedance of the ultrasonic assembly?
3. How can I estimate the acoustic impedance of the ultrasonic assembly?
4. Which is best: should the acoustic impedance of the ultrasonic assembly be equal to the acoustic impedance of the medium (water in my case), or as high as possible?
I think that if the amplitude is large, then the acoustic pressure is also high.
To increase the acoustic pressure I tried an ultrasonic booster, which is known to increase the amplitude.
But it did not work: the acoustic pressure measured by a hydrophone was almost the same as the acoustic pressure of the transducer without the booster.
I am wondering:
1. What is the relationship between the acoustic pressure and the amplitude?
2. What is an ultrasonic booster? Can it increase the acoustic pressure?
If yes, please give an example.
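My current understanding is based on the ideal plane-wave relation p = rho * c * v = rho * c * (2*pi*f) * xi, i.e. for a fixed medium and frequency the pressure amplitude should scale linearly with the displacement amplitude; a minimal sketch with assumed numbers (not my actual measurement values):

```python
import numpy as np

rho, c = 1000.0, 1480.0      # water (assumed medium)
f = 20e3                     # drive frequency [Hz] (assumed)
xi = 10e-6                   # displacement amplitude [m] (assumed)
v = 2 * np.pi * f * xi       # particle velocity amplitude for harmonic motion
p = rho * c * v              # ideal plane-wave pressure amplitude
print(v, p)                  # about 1.26 m/s and 1.9e6 Pa for these assumed numbers
```

In practice the field near a horn tip is far from a plane wave and cavitation loads the source, which might be why the booster's larger amplitude did not show up as a proportionally larger hydrophone pressure.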
I am doing thermal evaporation of pentacene and DNTT polymer. I just want to know the acoustic impedance.
I am asking how to find a formula for the acoustic impedance of a porous wall with a variable thickness. The impedance is defined as a function of the acoustic resistance (R) and the acoustic reactance (X). The formula for the impedance is attached. In the case of a wall of the same porous material but with a changing thickness,
how should the formula be modified?
where
R: acoustic resistance
X: acoustic reactance
X1: mass coefficient
X2: cavity coefficient
w: frequency
Greetings! I am looking for advice on possible technical literature about a specific subject: the influence of acoustic waves on electronics and its failure mechanisms. If anyone is familiar with a reputable source of information, please let me know. Thanks in advance!
I have two IDTs (interdigitated transducers) orthogonal to each other on a piezoelectric substrate. I want to know what happens when an RF signal is applied to both IDTs at the same time. How can I find out the orthogonal interference of the two acoustic waves?
1) I would like to know how to choose the acoustic sum rule for phonon calculation in Quantum Espresso for polar materials and also for any materials in general.
2) On what criteria do we have to base the choice of acoustic sum rule for a particular material under study?
3) I would also like to know the detailed theoretical explanation behind this.
The aim is to characterize a DAS instrument connected to a fibre-optic cable.
What are the properties I should look into? White noise? Dynamic range? SNR?
I did some shock tests and I am thinking about how I should calculate the SNR. Should I calculate an SNR for different frequency bands?
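To make the question concrete, this is the kind of band-wise estimate I had in mind, a minimal sketch assuming I have, for one DAS channel, a segment containing the shock and a quiet segment recorded under the same conditions:

```python
import numpy as np
from scipy.signal import welch

# Minimal sketch of a band-wise SNR estimate: compare the PSD of a segment containing
# the shock/test signal with the PSD of a quiet segment from the same channel.
def band_snr_db(signal_segment, noise_segment, fs, bands):
    f_s, P_s = welch(signal_segment, fs=fs, nperseg=4096)
    f_n, P_n = welch(noise_segment, fs=fs, nperseg=4096)
    out = {}
    for lo, hi in bands:
        m = (f_s >= lo) & (f_s < hi)
        out[(lo, hi)] = 10 * np.log10(np.trapz(P_s[m], f_s[m]) / np.trapz(P_n[m], f_n[m]))
    return out

# example usage with an assumed sampling rate and assumed bands:
# snr = band_snr_db(sig, noise, fs=10000, bands=[(10, 50), (50, 200), (200, 1000)])
```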
Thank you in advance,
I have to understand the use of Green's function in acoustics. What does it signify?
I want to optimize the placement of sensors on a 3D terrain created in ArcMap.
I want to find the optimum number of sensors to be used on the terrain.
Can you help me with the algorithm part?

Acoustic field modelling, structure fluid coupled responses
Hello,
I am trying to find the acoustic round-trip latency of Android smartphones. I have been using the technique where I play a beep, record it through the phone, and then perform a convolution and take the time index of the peak value.
Issues:
Here the issue is that when we play our beep, the AEC and NS of the Android smartphone cancel and attenuate it, so it can't be seen in the recorded data.
Background & Relevant Information:
Most of the applications that measure acoustic latency, such as the OboeTester app, use the VOICE_RECOGNITION audio source (https://developer.android.com/reference/android/media/MediaRecorder.AudioSource#VOICE_RECOGNITION). In this mode no DSP is performed and we get raw data, but this excludes the latency of the DSP algorithms performed by the built-in components.
What I want:
I want to find the round-trip latency using the VOICE_COMMUNICATION mode. In this mode all the AEC, AGC, and NS components are activated, but they cancel our beep and we can't get accurate results.
Is there any way to find the latency while AEC and NS are working? Looking forward to a solution.
Regards,
Khubaib
We know that by using a microphone array we gain extra SNR. Meanwhile, vector sensors offer the same advantage and, moreover, can achieve it with a single sensor, without the need for an array. However, the reality is that nowadays most acoustic products use a microphone array instead of a vector sensor. Why is that?
The shell is made of metal; the interior and exterior fluids are inviscid.
The ratio of shell thickness to radius (h/a) ranges from 0.5% to 2%.
I have written Fortran programs for the models described in the papers listed below but they are all unsatisfactory in some respect. Hickling used full exact elasticity, but there seem to be misprints in the paper. The others are thin-shell models and yield a resonance frequency of zero at particular values of h/a. I will give further details to anyone who asks and is acquainted with the issue.
Hickling 1964
Lou & Su 1978
Felippa & Geers 1980 (same result as preceding paper)
Dean & Werby 1992
Are the broadband noise results obtained from ANSYS Fluent trustworthy
(Acoustics > Broadband Noise) for a turbulent flow case?
Hello everyone,
I am trying to set up a 3D model with acoustic infinite elements along the boundaries. To accomplish this I have taken the following steps:
1. create a 3D part.
2. create a material that has acoustic medium property.
3. create a section that is acoustic infinite.
4. assign a section to the part.
The problem is in step 4: I cannot see the acoustic infinite section I created. Do you know how I can fix this? Thank you very much in advance for any help.
Dear all,
I am learning acoustic analysis and saw a paper about powertrain acoustics, which is:
At the end of the paper there is a graph of mesh frequency and harmonic responses. What exactly does the attached graphic mean? May I kindly ask for your help?
Thanks and regards.

I am modelling an axisymmetric piezoelectric transmitter in COMSOL 5.6 and use Exterior Field Calculation (full integral) to find the pressure outside my simulation domain. This works fine when I want to find the pressure in a single point (e.g. acpr.pext(1,0)), but I want to find the average pressure over a plane surface coaxial with the transmitter, i.e. a line average due to the axial symmetry.
I haven't been able to define a line average in "Results -> Derived Values", since there is no boundary outside my simulation domain, and if I add a line to the geometry outside my simulation domain it does not get meshed. An alternative approach would be to export acpr.pext at the line of interest (with suitable sampling) to a text file and compute the average outside COMSOL, but so far I haven't been able to export the pressure for multiple points.
Does anybody have an idea as to how I could compute the average pressure?
Hi,
I am working on a problem that involves multiple smartphones/tablets. All devices play the same sound, with a synchronization difference of up to +-5 ms. Now there is an echo issue and I want to cancel the acoustic echo on the phones.
So, keeping the problem simple, I have to create an echo canceller for a multiple-loudspeaker, single-microphone scenario. I have already read papers on stereo echo cancellers, but that doesn't solve my problem.
Any help will be appreciated.
Regards,
Khubaib
Hello,
I have been working on acoustic echo cancellation while following the research paper :
Conference Paper Echo Detection and Delay Estimation using a Pattern Recognti...
Since I am working on real-time audio data, my problem is as follows:
- I have a buffer that stores far-end data (packets being played by the phone) in 21.33 ms chunks, equivalent to 1024 shorts (sampling rate 48000 Hz).
- A near-end packet is recorded, i.e. 21.33 ms of data in the same format as mentioned above.
Problem Statement:
- Let's suppose a far-end packet containing the word "hello":
[ H E L L O ]
is being played by the phone, and its echo is recorded in near-end packets. The cases that arise are:
1. "hello" recorded completely in one near-end packet:
[ H E L L O ]
2. Distributed across two near-end packets:
[ H E ]  [ L L O ]
3. [ H ]  [ E L L O ]
and so on, i.e. a random distribution of the far-end audio across multiple near-end chunks.
Now, I want to detect the echo and perform cancellation. That can be done if I get the exact far-end chunk whose echo is in the near-end chunk. Right now I am making overlapped chunks with a shift of one sample, equivalent to 0.020833 ms of data, but this is computationally very expensive (see the sketch after the questions below).
So two questions arise here:
- What should be the optimal length of the overlap?
- What could be the minimum resolution of an acoustic echo in terms of time (ms), i.e. the minimum echo delay change? Could it be something like 0.1 ms or 0.02 ms?
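To illustrate the alternative I am considering instead of the one-sample sliding chunks, here is a minimal sketch (my own, with an assumed longer far-end history buffer) that cross-correlates the near-end chunk against the far-end history in one FFT-based pass; the resulting lag has one-sample resolution, which at 48 kHz is about 0.0208 ms unless the correlation peak is interpolated.

```python
import numpy as np
from scipy.signal import fftconvolve

# Minimal sketch: find the far-end offset whose echo best matches the near-end chunk
# by cross-correlating the near-end chunk against a longer far-end history buffer.
def estimate_echo_delay(near_chunk, far_history, fs=48000):
    near = near_chunk - np.mean(near_chunk)
    far = far_history - np.mean(far_history)
    corr = fftconvolve(far, near[::-1], mode="valid")   # cross-correlation for all lags at once
    lag = int(np.argmax(np.abs(corr)))                  # offset of the near chunk inside the far history
    return lag, lag / fs * 1000.0                       # lag in samples and in milliseconds
```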
I hope my question is clear; I tried to explain it while keeping it concise.
Any help will be appreciated.
Regards,
Khubaib Ahmad
Test a gas stove at, e.g., five different power settings. Record the acoustic signal and the sound pressure level (the latter will not be a precise value, but the ratios between settings will be). For the acoustic signal you can use, e.g., the Spectroid or Spectrogram app (there are a lot of apps out there that do an FFT on the microphone signal). For the sound pressure level you can use, e.g., the AndroSensor app.
I need recommendations for the above statement.
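As a concrete illustration of the procedure described above, here is a minimal offline sketch (assuming mono WAV recordings with hypothetical file names) that does the same FFT and relative-level readout as the phone apps; as noted, only the level ratios between the settings are meaningful.

```python
import numpy as np
from scipy.io import wavfile

# Minimal sketch: load recordings made at two stove power settings, compute the spectrum
# of one and the RMS level of each, then report the level ratio in dB.
fs, x1 = wavfile.read("power1.wav")   # hypothetical file names
_,  x2 = wavfile.read("power2.wav")
x1, x2 = x1.astype(float), x2.astype(float)

spectrum = np.abs(np.fft.rfft(x1 * np.hanning(len(x1))))   # amplitude spectrum (arbitrary units)
freqs = np.fft.rfftfreq(len(x1), d=1.0 / fs)

level_ratio_db = 20 * np.log10(np.sqrt(np.mean(x1**2)) / np.sqrt(np.mean(x2**2)))
print(freqs[np.argmax(spectrum)], level_ratio_db)          # dominant frequency and relative level
```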
Please see the attachment.
1- Do I have to use a perfectly matched layer while using a port boundary condition, or not?
2- My port shape is hexagonal.
So from the available options I am choosing a user-defined port, but during computation it shows an error.
Please suggest.
Thanks.

Goal: I would like to make a plate of wood with equal acoustic path length (transversely).
Question: Is there any simple way to measure the acoustic path length (something similar to the optical path length)?
In optics, because light travels at different speeds in different media, we have the optical path length.
How about acoustics? Is there an acoustic path length?
Is there any convenient method to measure it?
I want to measure the acoustic path length within wood. Because every place within a wood structure may have a different density, I want to make a wood plate with an equal acoustic path length everywhere (so the physical thickness may differ). Is it possible?
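One way I could imagine measuring it, a minimal sketch assuming a through-transmission ultrasonic setup where the excitation pulse tx and the received pulse rx are both recorded at sampling rate fs: treat the acoustic path length as a reference speed times the measured time-of-flight (the analogue of refractive index times thickness), so making the plate "equal" transversely amounts to equalizing the time-of-flight map.

```python
import numpy as np
from scipy.signal import fftconvolve

# Minimal sketch: acoustic path length = c_ref * time-of-flight through the plate,
# with the time-of-flight taken from the cross-correlation peak between the excitation
# pulse tx and the through-transmitted pulse rx (both assumed measurement inputs).
def acoustic_path_length(tx, rx, fs, c_ref=343.0):
    corr = fftconvolve(rx, tx[::-1], mode="full")
    delay_samples = np.argmax(np.abs(corr)) - (len(tx) - 1)   # lag of the correlation peak
    tof = delay_samples / fs                                  # time of flight [s]
    return c_ref * tof                                        # acoustic path length [m]
```

The choice of c_ref is arbitrary; only the relative time-of-flight from point to point matters for equalizing the plate.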
I am using boosted regression trees (BRTs) to investigate the environmental influences on shark presence at a single location. I have a year-long dataset of acoustic tag detections and environmental data (e.g. current speed and direction, etc.) which I have split into 1-hourly time bins, i.e. the number of detections every hour and the hourly mean of each environmental variable. Having researched BRTs extensively, it seems that temporal autocorrelation (serial correlation) is not addressed (although spatial autocorrelation is). Having built several models, I used the acf() function and a Durbin–Watson test on the models' residuals, and they show some degree of autocorrelation (DW test = 1.09). Is serial correlation a problem with BRTs? If so, can anyone suggest a way to 'fix' the issue? Many thanks!
I am doing an AE experiment on fibre-reinforced concrete (FRC), keeping the threshold at 30 dB (as low as possible to get the maximum number of hits without noise). I want to verify whether the AE data obtained are correct or not. Is there any standard procedure for AE data validation?
I have modelled a unit cell of an anechoic structure using ANSYS. The layer is silicone and has been assigned as an acoustic physics region, with an air cavity inside (the air has also been modelled and assigned as an acoustic physics region). There is also a steel backplate, which is structural. In order to simulate the water loading on the face of the silicone layer I have applied an impedance boundary condition, assigned a port to this front surface, and used the body of the same layer to assign the inside surface bodies. A plane wave has been applied using a 'port in duct' excitation condition (with 10000 Pa), and the acoustic absorbance is calculated.
I have used the same material parameters that I have seen in many papers; however, I am getting very different results, and I wondered whether anybody could please point out where I have modelled it incorrectly or explain the observed behaviour. I have included an image of the absorption coefficient versus frequency over a 0-6000 kHz range.
Thank you for any help.

Hi,
I have been working on a multi-loudspeaker, single-microphone acoustic echo canceller (more than a typical stereophonic echo canceller). From my research I understand that the loudspeaker signals have to be decorrelated in order to correctly identify the estimated echo signal (i.e. to replicate the impulse response). So I want to know whether there are any decorrelation techniques for, let's say, N loudspeakers.
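For context, the only technique of this kind I have come across so far is the mild nonlinear pre-processing proposed for stereophonic AEC (a half-wave-rectified component added to each channel); below is my own rough generalization sketch to N channels, not a reference method, just to show the sort of thing I mean.

```python
import numpy as np

# Minimal sketch: add a small, channel-dependent half-wave-rectifier distortion to each
# loudspeaker signal so that the N channels are no longer perfectly correlated.
def decorrelate(channels, alpha=0.1):
    """channels: (N, samples) array of loudspeaker signals; alpha: distortion strength."""
    out = np.empty_like(channels, dtype=float)
    for i, x in enumerate(channels):
        sign = 1.0 if i % 2 == 0 else -1.0
        out[i] = x + alpha * (x + sign * np.abs(x)) / 2.0   # mild per-channel nonlinearity
    return out
```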
Please share the link if possible.
Regards,
Khubaib Ahmad
I am using Audacity to analyse the presence of birds and frogs recorded with an SM4 recorder (Wildlife Acoustics). Background noise (rainfall, wind) is an important variable to consider, as it can affect the detection of some species. Which bioacoustic variable can be used to quantify this noise? The RMS level in dB is one option.
I have used ICA for the problem of separating mixed acoustic and UHF signals from partial discharges. I know that it has been used in biomedical, acoustic, and many other fields. Could you comment on how you have used it?
In ANSYS, as in e.g. COMSOL Multiphysics, you can calculate the far-field acoustic pressure outside the explicit FEM domain. However, whereas in COMSOL Multiphysics you define a certain closed surface on which the far-field pressure is calculated via the Helmholtz-Kirchhoff integral, it is unclear which surface is actually being used in ANSYS (or what is actually being calculated).
Hi,
I am keen to know the basics of removing echo given the far-end signal and the near-end signal. I have read research papers in which it is commonly stated that
y(n) = h^T x(n)
where h^T is the (transposed) room impulse response and y(n) is the estimated echo signal.
Now, what I really want to understand is the computation of the room impulse response. Given two time-frame arrays of 16-bit PCM, let's say:
Near_end_sig = [x1, x2, x3, x4, ..., xN] (one time frame)
Far_end_sig = [y1, y2, y3, ..., yN] (one time frame)
Here I want to know, when working with digital acoustic signals (16-bit PCM arrays of size, let's assume, 1024), how I compute the room impulse response h and how I estimate the echo signal. My main concern is the computation of the room impulse response and the estimation of the echo signal based on these time frames.
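For context, my understanding is that in practice h is usually not computed in closed form but adapted sample by sample, e.g. with an NLMS filter that uses the far-end frame as input and the near-end frame as the desired signal; a minimal sketch with assumed frame and filter lengths (my own illustration, not taken from the cited papers):

```python
import numpy as np

# Minimal NLMS sketch: adapt an FIR estimate h_hat of the room impulse response,
# using the far-end frame x as filter input and the near-end frame d as desired signal.
def nlms_frame(x, d, h_hat, mu=0.5, eps=1e-6):
    """x, d: far-end / near-end frames (same length); h_hat: float array, e.g. np.zeros(256)."""
    L = len(h_hat)
    x_pad = np.concatenate([np.zeros(L - 1), x])       # zero history before the frame
    e = np.zeros(len(d))
    for n in range(len(d)):
        xn = x_pad[n + L - 1::-1][:L]                  # most recent L far-end samples (newest first)
        y = np.dot(h_hat, xn)                          # estimated echo sample
        e[n] = d[n] - y                                # residual after echo removal
        h_hat += mu * e[n] * xn / (np.dot(xn, xn) + eps)   # normalized LMS update
    return e, h_hat

# example usage with assumed 1024-sample frames and a 256-tap filter:
# e, h_hat = nlms_frame(far_frame, near_frame, np.zeros(256))
```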
Regards,
Khubaib Ahmad