Science topic

Signal Analysis - Science topic

Explore the latest questions and answers in Signal Analysis, and find Signal Analysis experts.
Questions related to Signal Analysis
  • asked a question related to Signal Analysis
Question
6 answers
I'm looking for a finger PPG measurement device for my new scientific project that allows me to export raw signals (pulse waves). Do you know any product like this?
I know some wristbands with this export option, but I can't find any pulse-oximeter for fingers.
Relevant answer
Answer
Biopac (https://www.biopac.com/products/) offers a range of products that suit your requirement. Their DAQ hardware is somewhat expensive, but it's really versatile and widely used by many educational institutions.
  • asked a question related to Signal Analysis
Question
3 answers
Hello,
How should EMG data recorded at a sampling frequency of 250 Hz be filtered?
The data were recorded from the forehead muscles in a static, neutral facial position.
The recording time is 5 minutes, and three EMG electrodes were used.
Participants were male soccer players aged 19 to 25.
Relevant answer
Answer
Consider this:
EMG consists of low-frequency signal bursts with high-frequency, noise-like content. Most often, the burst amplitude and repetition rate are analyzed; the high-frequency noise-like content is probably not very informative.
In EEG, by contrast, spectral analysis is performed on a noise-like signal with a slowly changing envelope, using frequency bands associated with specific rhythms.
In ECG, little attention is paid to frequency spectra; instead, the period and shape of the signal are analyzed.
Unfortunately, I can't read the source
in the original language.
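As a concrete starting point for the 250 Hz recordings in the question, here is a minimal zero-phase band-pass sketch (assuming SciPy is available; the usual 20-450 Hz surface-EMG band has to be truncated to roughly 20-120 Hz, because a 250 Hz sampling rate puts the Nyquist limit at 125 Hz):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_emg(emg, fs=250.0, low=20.0, high=120.0, order=4):
    """Zero-phase band-pass filter for surface EMG.

    With fs = 250 Hz the Nyquist limit is 125 Hz, so the common
    20-450 Hz EMG band must be truncated to about 20-120 Hz.
    """
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="bandpass")
    return filtfilt(b, a, emg)  # filtfilt => no phase distortion

# 5 minutes of synthetic data at 250 Hz, as in the question
fs = 250.0
t = np.arange(0, 300, 1 / fs)
raw = np.random.randn(t.size) + 0.5 * np.sin(2 * np.pi * 50 * t)
clean = filter_emg(raw, fs)
```

The cutoffs here are an assumption to tune; a notch filter at the local mains frequency (50 or 60 Hz) is often added as well.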
  • asked a question related to Signal Analysis
Question
5 answers
Hello everyone,
While reading articles related to EMG analysis, two concepts caught my attention. One of them is the Hilbert transform, and the other is the envelope. First, I haven't come across this transform often in many software tools (although I might have missed it). Instead, it is stated that rectification and RMS methods can also be used. I have come to the conclusion that the Hilbert transform might be more suitable if you need instantaneous amplitude and phase information or if you want to examine very subtle amplitude changes and instantaneous phase differences in the signal.
I usually used rectification and RMS methods. So, in this case;
  • What are the advantages and limitations of using the Hilbert transform compared to rectification and RMS methods in analyzing EMG signals in sports science?
  • In what contexts is the Hilbert transform preferred over rectification and RMS methods in EMG analysis for sports science research?
I would appreciate your insights on this topic. Thank you in advance.
BERMAN
Relevant answer
Answer
Thank you for the insightful comparison between RMS and the Hilbert transform in EMG analysis, Dr. Safaa Ismaeel.
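For readers weighing the two approaches in the question, a small sketch (using amplitude-modulated noise as a hypothetical stand-in for an EMG burst) that computes both the Hilbert envelope with instantaneous phase and a rectified moving-RMS envelope:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
# Amplitude-modulated noise as a crude stand-in for an EMG burst
envelope_true = 1 + 0.5 * np.sin(2 * np.pi * 1.5 * t)
x = envelope_true * np.random.randn(t.size)

# Hilbert transform: instantaneous amplitude and phase, sample by sample
analytic = hilbert(x)
inst_amp = np.abs(analytic)
inst_phase = np.unwrap(np.angle(analytic))

# Rectification + moving RMS: amplitude smoothed over a window
win = int(0.125 * fs)  # 125 ms window, a typical sports-science choice
rms = np.sqrt(np.convolve(x**2, np.ones(win) / win, mode="same"))
```

The contrast mirrors the question: `inst_amp`/`inst_phase` give sample-by-sample values (useful for subtle amplitude changes and phase comparisons), while `rms` trades that resolution for smoothness set by the window length.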
  • asked a question related to Signal Analysis
Question
8 answers
Hi all,
I'm trying to do cilia beat frequency analysis on high-speed microscopy videos of beating cilia on cultures of airway epithelium.
I have not been able to get any of the cilia beat frequency analysis programs to work, so I'm writing my own script for the analysis in R.
So far I've managed to extract series of pixel intensities over time for every pixel in the video. However, when I run `fft` on the intensity series, I get no peaks.
I believe my sampling frequency is good, 480fps video, with an expected dominant frequency of ~15Hz. I believe my trouble comes from "low bit-depth" as the video has very low contrast, each pixel series typically spanning only 7-8 different intensity values.
Would something like interpolation be a solution here?
Please excuse my lack of technical knowledge here, this is well beyond what I usually deal with! I'd also be happy to share my code of course.
Sam
Relevant answer
Answer
Hi Sam,
I think a range of 7 pixel values is good enough to do an FFT, and I don't think fabricating additional data by interpolation is a good idea.
Could you share some data, preferably a raw byte volume (or an AVI file)?
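A sketch of the FFT step under the conditions described (480 fps, an expected ~15 Hz beat, intensities quantized to about 8 levels): the usual culprit for "no peaks" is the DC component, so the mean is removed before the transform, and a window reduces leakage.

```python
import numpy as np

fs = 480.0                       # frames per second (from the question)
t = np.arange(0, 4, 1 / fs)
# Simulated low-bit-depth pixel series: a 15 Hz beat quantized to ~8 levels
series = np.round(3.5 + 3.5 * np.sin(2 * np.pi * 15.0 * t))

x = series - series.mean()       # remove DC, or the 0 Hz bin swamps the peak
spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
freqs = np.fft.rfftfreq(x.size, d=1 / fs)

peak_hz = freqs[np.argmax(spec)]  # dominant beat frequency estimate
```

Even with only ~8 intensity levels the 15 Hz peak stands out clearly; quantization mostly adds harmonics and a raised noise floor rather than hiding the fundamental.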
  • asked a question related to Signal Analysis
Question
3 answers
Noise removal in ECG signal using an improved adaptive learning approach, classification of ECG signals using CNN for cardiac arrhythmia detection, EEG signal analysis for stroke detection, and EMG signal analysis for gesture classification are essential to proper diagnosis. The application of CNN in pertussis Diagnosis by temperature monitoring, physician handwriting recognition using deep learning model, melanoma detection using ABCD parameters, and transfer learning enabled heuristic approach for pneumonia detection has become one of many AI embedded image processing systems.
source: 1st Edition
Artificial Intelligence in Telemedicine: Processing of Biosignals and Medical Images
Edited by S. N. Kumar, Sherin Zafar, Eduard Babulak, M. Afshar Alam, Farheen Siddiqui. Copyright 2023
Relevant answer
Answer
The role of AI and computational algorithms in the processing of biosignals and medical images is critical for disease diagnosis and treatment planning. These technologies significantly enhance diagnostic accuracy by identifying subtle patterns and abnormalities that may be missed by human experts. AI-driven tools can automate the analysis of vast amounts of medical data, reducing the time required for diagnosis and enabling quick decision-making in critical clinical settings. Furthermore, AI algorithms support personalized treatment planning by analyzing patient-specific data, leading to more effective and targeted therapies. They also play a key role in early disease detection and prognosis, which are essential for successful treatment. By reducing the risk of human error and supporting healthcare professionals in interpreting complex medical data, AI enhances the consistency and reliability of diagnoses. Additionally, AI contributes to medical research, facilitating the development of new diagnostic tools and treatment options by analyzing large datasets. Overall, the integration of AI and computational algorithms into healthcare is transforming the field, leading to better patient outcomes and more efficient medical practices.
  • asked a question related to Signal Analysis
Question
3 answers
Lock-in amplifiers have both types of output: one is in XY mode and another in R theta mode. How do we understand the physical meaning of the output in both cases?
Relevant answer
Answer
To ensure that your signal is phase-locked and to keep the phase difference constant between the reference signal and the input signal, you can use a Phase-Locked Loop (PLL). Here are the steps and considerations for implementing a PLL to stabilize the phase difference:
### Components of a Phase-Locked Loop (PLL)
1. **Phase Detector (PD)**: Compares the phase of the input signal with the reference signal and generates an error signal proportional to the phase difference.
2. **Low Pass Filter (LPF)**: Filters the high-frequency components of the error signal, providing a smooth control signal.
3. **Voltage-Controlled Oscillator (VCO)**: Adjusts the frequency of the signal based on the control signal from the LPF.
4. **Feedback Loop**: Feeds the output of the VCO back to the phase detector, closing the loop.
### Steps to Implement a PLL
1. **Phase Detection**:
- Use a phase detector to measure the phase difference between the input signal and the reference signal. This can be an XOR gate, a mixer, or a digital phase comparator.
2. **Error Signal Filtering**:
- Pass the error signal through a low-pass filter to remove high-frequency noise and retain the low-frequency component that represents the phase difference.
3. **Control Signal Generation**:
- The filtered error signal is used to adjust the frequency of a voltage-controlled oscillator (VCO). The VCO generates a signal that tries to match the phase of the reference signal.
4. **Feedback Mechanism**:
- The output of the VCO is fed back into the phase detector, creating a closed-loop system. The PLL adjusts the VCO until the phase difference between the input and reference signals is minimized, effectively locking the phase.
### Practical Considerations
1. **Lock Range**:
- Ensure that the PLL's VCO has a frequency range that covers the expected range of your input signal. This range is known as the lock range.
2. **Loop Bandwidth**:
- Choose the bandwidth of the low-pass filter to balance between fast response time and noise rejection. A wider bandwidth allows faster locking but can introduce more noise, while a narrower bandwidth reduces noise but may slow down the locking process.
3. **Stability**:
- Make sure the loop is stable. An unstable PLL can oscillate or fail to lock. Proper design of the loop filter and careful selection of the VCO and phase detector are crucial.
4. **Implementation**:
- PLLs can be implemented using analog components, digital components, or a combination of both. Digital PLLs (DPLLs) often use a microcontroller or FPGA for flexibility and precision.
### Example Circuit for Analog PLL
1. **Phase Detector**: Use an XOR gate for digital signals or a mixer for analog signals.
2. **Low Pass Filter**: Design an RC low-pass filter with a cutoff frequency appropriate for your application.
3. **VCO**: Select or design a VCO with a frequency range that covers your input signal frequency.
4. **Feedback**: Connect the output of the VCO to the phase detector input.
### Software-Based PLL
If using software-defined radio (SDR) or digital systems, a software-based PLL can be implemented in a microcontroller or FPGA, where phase detection, filtering, and control signal generation are performed algorithmically.
### Troubleshooting Phase Lock Issues
1. **Signal Quality**: Ensure that both the reference and input signals are clean and stable.
2. **Component Tolerances**: Check the tolerances of the components used in the PLL, especially in the VCO and low-pass filter.
3. **Noise**: Minimize external noise and interference that can affect the signals and the PLL performance.
4. **Temperature Stability**: Ensure that temperature variations do not affect the components, especially the VCO.
By implementing and tuning a PLL, you can stabilize the phase difference between your reference signal and the input signal, ensuring a constant phase relationship over time.
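Returning to the original question about the two output modes: X and Y are the in-phase and quadrature components of the input relative to the reference, while R and theta carry the same information in polar form, so the conversion is plain Cartesian-to-polar. A minimal illustration with hypothetical readings:

```python
import numpy as np

# Lock-in outputs: X = in-phase, Y = quadrature component of the input
# relative to the reference. R and theta are the same data in polar form.
X, Y = 0.3, 0.4                         # example readings in volts (hypothetical)

R = np.hypot(X, Y)                      # signal magnitude, insensitive to phase drift
theta = np.degrees(np.arctan2(Y, X))    # phase of the signal w.r.t. the reference

# Going back: X = R cos(theta), Y = R sin(theta)
X_back = R * np.cos(np.radians(theta))
Y_back = R * np.sin(np.radians(theta))
```

In practice R/theta mode is preferred when the signal phase drifts (R stays constant), while XY mode is preferred when the phase itself carries the physics.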
  • asked a question related to Signal Analysis
Question
4 answers
String theory's applications are remarkably wide. Unfortunately, it has been beaten in the nuclear domain, where lattice QCD is more acclaimed, as well as in cosmology, where GR reigns supreme.
The lattice theorists call it "dead" to cement their superiority, and some of its founding developers abandoned it (Smolin), but it's still breathing; e.g., see Vasilakos's recent article on cosmic string signatures in the NANOGrav gravitational-wave signal analysis.
So what is the non-foe-based status of the theory in these domains?
Relevant answer
Answer
“…String theory's applications are remarkably wide. Unfortunately its been beaten in the nuclear domain, where physics on a lattice QCD are more acclaimed as well as in cosmology where GR regns supreme….”
- really String theory is nothing else than some really senseless mathematical exercises that are based on some completely unphysical – and so fundamentally experimentally non-grounded – postulates, and its remarkably wide applications are nothing else than corresponding equally mathematical fairy tales.
And that is in both, cosmology and fundamentally Nature Strong force researches, including since that this theory was developed, first of all, aimed at development of “quantum GR”, while the last is fundamentally impossible, since GR itself is fundamentally strange theory. Matter’s spacetime is fundamentally absolute, fundamentally flat, fundamentally continuous, and fundamentally “Cartesian”, (at least) [4+4+1]4D spacetime with metrics (at least) (cτ,X,Y,Z, g,w,e,s,ct), which fundamentally cannot be “contracted”, “dilated”, “curved”, in the spacetime fundamentally no “quanta”, “granules”, “strings”, etc., can exist;
- including so, say, GR is inevitably fundamentally incompatible with QM .
Gravity is fundamentally nothing else than some fundamental Nature force, which will be without any problems “quantized” after the correct “classical” Gravity theory will be developed.
Though really complete theories of all fundamental Forces, including Gravity Force, can be developed only on Planck scale, and really, correspondingly, only basing on the Shevchenko-Tokarevsky’s Planck scale informational physical model, 3 main papers are
- where yet now initial Planck scale models of Gravity, Electric [see 2-nd link, section 6. “Mediation of the fundamental forces in complex systems”] and Nuclear Forces yet now are developed.
Though yeah, really that
“….So what is the non foe-based status of the [String] theory in these domain?……”
- despite that this question is scientifically quite senseless, the answer see above, but in mainstream physics publications, including in top physical journals, with this theory versions, and with other really fantastic “new physics” are numerous till now…
Cheers
  • asked a question related to Signal Analysis
Question
3 answers
I faced a very simple yet problematic phenomenon when trying to find the Bode plot of an unknown system with an oscilloscope.
As we know, we can simply inject a signal into a system with a signal generator, sweep the frequency, measure the input and output of the system, and then, by comparing the gain and phase shift, plot the Bode diagram.
Here is the problem: when you have an unknown system with no prior knowledge, how can you tell whether the phase shift is positive or negative? As can be seen in the picture, the phase shift can be read as either +20 or -160.
Relevant answer
Answer
Your '-160' must be below -180, since the waveform passes simple inversion; I would say -240. Also, your '+20' looks more like 120 degrees or so (a 90-degree shift means the peak coincides with the zero crossing).
120 + 240 = 360.
One should note that it is 360 degrees (a full period) between sinusoidal peaks, and you may choose to represent phase as 0..360 or ±180 degrees.
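The on-screen ambiguity largely disappears if the phase is computed numerically at each step of the sweep and tracked for continuity. A sketch (assuming NumPy) that extracts the input-to-output phase at the drive frequency, wrapped to (-180, 180]:

```python
import numpy as np

def phase_shift_deg(x, y, f0, fs):
    """Phase of y relative to x at frequency f0, wrapped to (-180, 180]."""
    t = np.arange(x.size) / fs
    ref = np.exp(-2j * np.pi * f0 * t)          # single-bin DFT at f0
    dphi = np.angle(np.sum(y * ref)) - np.angle(np.sum(x * ref))
    return (np.degrees(dphi) + 180.0) % 360.0 - 180.0

fs, f0 = 10_000.0, 100.0
t = np.arange(0, 0.1, 1 / fs)                   # exactly 10 full periods
x = np.sin(2 * np.pi * f0 * t)                  # injected signal
y = 0.8 * np.sin(2 * np.pi * f0 * t - np.radians(120))  # output lags 120 deg

shift = phase_shift_deg(x, y, f0, fs)           # about -120 degrees
```

The sign convention (lag = negative) is fixed by the formula, and unwrapping the values across the frequency sweep resolves whether a reading like "+20" is really -340, +20, or -160.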
  • asked a question related to Signal Analysis
Question
2 answers
What is the difference between DTFS and DFT?
DTFS-Discrete Time Fourier Series
DTFT-Discrete Time Fourier Transform
DFT-Discrete Fourier Transform
Relevant answer
Answer
DTFS (Discrete Time Fourier Series) and DFT (Discrete Fourier Transform) are mathematical tools used to analyze discrete-time signals in the frequency domain. Although they are both related to the Fourier transform, they differ in their domain and representation.
DTFS is used to represent a periodic discrete-time signal in the frequency domain. It is defined as the Fourier series representation of a periodic sequence of samples. DTFS is useful when analyzing signals that are periodic in nature, such as audio signals or digital signals with a fixed frame rate. DTFS coefficients represent the frequency components of a periodic signal and are discrete in nature.
DFT, on the other hand, is used to represent a finite-length discrete-time signal in the frequency domain. It is defined as the Fourier transform of a finite-length sequence of samples. DFT is useful when analyzing signals that are non-periodic in nature, such as speech signals or biomedical signals. DFT coefficients represent the frequency components of a finite-length signal and are also discrete in nature.
The main difference between DTFS and DFT is their input: DTFS takes a periodic signal and produces its discrete frequency components, while DFT takes a finite-length signal and does the same. In fact, for a signal of period N, the DFT of one period equals the DTFS coefficients scaled by N; both sets of coefficients are complex-valued.
In summary, DTFS is used to represent periodic signals in the frequency domain using discrete frequency components, while DFT is used to represent finite-length signals in the frequency domain using discrete frequency components.
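The scaling relationship can be checked numerically: for a sequence of period N, NumPy's DFT of one period equals the DTFS coefficients multiplied by N (under the common convention c_k = (1/N) Σ x[n] e^{-j2πkn/N}). A minimal check:

```python
import numpy as np

N = 8
n = np.arange(N)
x = np.cos(2 * np.pi * 2 * n / N)   # one period of a periodic sequence

X_dft = np.fft.fft(x)               # DFT (NumPy convention: no 1/N factor)
c_dtfs = X_dft / N                  # DTFS coefficients: c_k = X[k] / N

# A unit-amplitude cosine at bin 2 splits into c_2 = c_6 = 1/2
```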
  • asked a question related to Signal Analysis
Question
2 answers
I am trying to work on EEG signals from coronavirus patients, so I need clinical datasets of that kind. I would be grateful for your help.
Relevant answer
Answer
  • asked a question related to Signal Analysis
Question
5 answers
Hello everyone!
Through my studies, I used a lot of signal analysis methods for medical data (mostly RR interval series), focusing on the nonlinear ones such as:
Currently I'm working on RR interval series obtained during listening to (or playing) short excerpts of music pieces. I'm wondering which nonlinear method would be the most appropriate for short-term data, from 30 seconds to 5 minutes (about 30-500 samples per signal). My preliminary results showed significant differences between the baseline and the music piece period for Shannon entropy (this parameter works much better than most linear indices). In turn, I cannot see any interesting results using sample entropy, and I think these signals are too short for that method. Similarly, DFA cannot be used for such a short period.
My question is, what other nonlinear methods can I use for short-term analysis and maintaining a good quality level of the results?
I will be grateful for any suggestions.
Best
Mateusz Solinski
Relevant answer
Hi Mateusz,
That's a very interesting discussion. I would also try "fuzzy entropy", as it is not dependent on the number of matches (doi: 10.1109/TNSRE.2007.897025), as well as DFA for a short and narrow interval of window sizes (say 5-10).
Best,
Luiz
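A histogram-based Shannon entropy along the lines that worked for the questioner, which stays usable on 30-500 samples (a sketch; the bin count is an assumption to tune, and too many bins on short series leaves mostly empty bins):

```python
import numpy as np

def shannon_entropy(rr, bins=16):
    """Shannon entropy (bits) of the RR-interval distribution."""
    counts, _ = np.histogram(rr, bins=bins)
    p = counts[counts > 0] / counts.sum()   # drop empty bins, normalize
    return -np.sum(p * np.log2(p))

# ~2 minutes of hypothetical RR intervals in ms (mean 800, SD 50)
rng = np.random.default_rng(0)
rr_short = 800 + 50 * rng.standard_normal(120)
h = shannon_entropy(rr_short)
```

For the 30-sample end of the range, coarser bins (8 or so) or the fuzzy entropy suggested above are likely more robust choices.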
  • asked a question related to Signal Analysis
Question
2 answers
I have research about lie detection using voice stress analysis, and I need a book on voice stress analysis.
Relevant answer
Answer
Thanks for answering Aparna Sathya Murthy
  • asked a question related to Signal Analysis
Question
2 answers
I am working on designing wavelet frames to detect specific patterns in 1-D signals. I was wondering if you could recommend some good texts on wavelet frame construction. Knowing whether code is available in Python or MATLAB would also be helpful. Thanks a lot!
Relevant answer
Answer
Thanks!
  • asked a question related to Signal Analysis
Question
5 answers
I am working on a research point that employs estimation techniques. I am trying to apply an algorithm to estimate system poles. I wrote an m-file and tried to apply this technique to a simple transfer function to estimate its roots. Any suggestions about estimation techniques?
Relevant answer
Answer
There are many estimation techniques that can be used to estimate system poles. Here are a few popular ones:
  1. Least Squares Method: This method involves fitting a model to the data in a way that minimizes the sum of the squares of the errors. This can be used to estimate system parameters such as poles and zeros.
  2. Maximum Likelihood Method: This method involves finding the parameter values that maximize the likelihood of the observed data. This can be used to estimate system parameters such as poles and zeros. (See reference [1-3])
  3. Prony's Method: This method involves fitting an exponential function to the data using the method of least squares. The method can be used to estimate system poles and can be useful when the system poles are well-separated.
  4. Eigenvector Method: This method involves calculating the eigenvectors of the system and using them to estimate the system poles. This can be useful when the system is large and complex.
  5. System Identification Method: This method involves using a set of input and output data to estimate the system parameters. The method can be used to estimate system poles as well as other parameters such as gains and time delays.
To apply an algorithm to estimate system poles, you can start with a simple transfer function and apply the algorithm to estimate the poles. You can then compare the estimated poles with the known poles of the transfer function to evaluate the accuracy of the algorithm. It may also be useful to test the algorithm on more complex systems to see how well it performs.
[1] Bazzi, Ahmad, Dirk TM Slock, and Lisa Meilhac. "Efficient maximum likelihood joint estimation of angles and times of arrival of multiple paths." 2015 IEEE Globecom Workshops (GC Wkshps). IEEE, 2015.
[2] Bazzi, Ahmad, Dirk TM Slock, and Lisa Meilhac. "On a mutual coupling agnostic maximum likelihood angle of arrival estimator by alternating projection." 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2016.
[3] Bazzi, Ahmad, Dirk TM Slock, and Lisa Meilhac. "On Maximum Likelihood Angle of Arrival Estimation Using Orthogonal Projections." 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.
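Prony's method (item 3) can be sketched in a few lines, matching the questioner's workflow of testing against a known transfer function: fit a linear-prediction recursion to the impulse response by least squares, then take the roots of the characteristic polynomial as the pole estimates.

```python
import numpy as np

# Impulse response of a known 2-pole system: poles at 0.9 and 0.5,
# so h[n] is a sum of two decaying exponentials
p_true = np.array([0.9, 0.5])
n = np.arange(40)
h = p_true[0] ** n - p_true[1] ** n

# Least-squares linear prediction (Prony-style): h[k] = a1 h[k-1] + a2 h[k-2]
A = np.column_stack([h[1:-1], h[:-2]])
b = h[2:]
a1, a2 = np.linalg.lstsq(A, b, rcond=None)[0]

# The poles are the roots of z^2 - a1 z - a2 = 0
poles = np.roots([1.0, -a1, -a2])
```

On noise-free data the recovered poles match the true ones essentially exactly; with measurement noise, higher-order fits plus discarding spurious roots (or an SVD-based variant) are the usual remedies.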
  • asked a question related to Signal Analysis
Question
4 answers
I have seen many ways to find similarity measure of multiple time series data in the literature. But, in my case, I have one time series data X of dim [n,1]. Now I want to get a similarity measure between data points of X. I tried autocorrelation. But I want to get only one or two numeric values that will represent the similarity measure.
Relevant answer
Answer
Dear Prof. Gorman,
The autocorrelation function (ACF) of a time series gives the series' correlation with itself as a function of lag (time). Many groups [1] take the ACF at lag 1 as a single correlation number. Alternatively, from linear theory the ACF decays as e^(-t/tau) [2], where tau is the time scale of the ACF's decay. Tau can then serve as the single numeric value representing the similarity measure or correlation [2]. A slowly decaying ACF means a high tau, i.e., the time series is highly correlated; a fast-decaying ACF means a low tau, i.e., the time series is weakly correlated.
References
With warm regards
Satyaki Kundu
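Both numbers suggested above (lag-1 ACF and tau) can be sketched on a synthetic AR(1) series, whose theoretical ACF is phi^k so that tau = -1/ln(phi) (assuming NumPy):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation for lags 0..max_lag."""
    x = np.asarray(x, float) - np.mean(x)
    c0 = np.dot(x, x)
    return np.array([np.dot(x[: x.size - k], x[k:]) / c0
                     for k in range(max_lag + 1)])

# AR(1) process x[i] = phi*x[i-1] + noise, with phi = 0.8
rng = np.random.default_rng(1)
phi = 0.8
x = np.zeros(5000)
for i in range(1, x.size):
    x[i] = phi * x[i - 1] + rng.standard_normal()

r = acf(x, 20)
lag1 = r[1]                 # single-number similarity measure (~0.8 here)
tau = -1.0 / np.log(lag1)   # decay time scale in samples (~4.5 here)
```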
  • asked a question related to Signal Analysis
Question
4 answers
I want to denoise motion artifacts from my data. I need three levels of wavelet decomposition. For denoising, what could the threshold value be for each level? Do you have any opinion? Thanks
Relevant answer
Answer
Thank you all for your consideration and response. The PPG is extracted from a smartwatch and has many artifact noises of unknown frequency. I read some articles that used wavelets for removing these artifacts: first an HPF is applied to remove the drift, and then the wavelet. Recently I found a paper that uses this method for EEG signals. Could it be possible to use that threshold for my PPG signal?
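Since the thread doesn't settle on threshold values, here is a self-contained sketch of 3-level denoising with the widely used universal threshold sigma*sqrt(2 ln N), where sigma is estimated from the finest detail level via MAD/0.6745. A hand-rolled Haar transform is used so nothing beyond NumPy is needed; it is a stand-in for MATLAB's or PyWavelets' decomposition routines, and db-family wavelets usually suit PPG better.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, levels=3):
    """3-level Haar denoising with the universal threshold."""
    details, a = [], x
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745   # noise from finest level
    thr = sigma * np.sqrt(2 * np.log(x.size))        # same threshold each level
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0) for d in details]
    for d in reversed(details):                      # soft-thresholded rebuild
        a = haar_idwt(a, d)
    return a

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 1.2 * t)                  # slow PPG-like wave
noisy = clean + 0.2 * rng.standard_normal(t.size)
denoised = denoise(noisy)
```

Level-dependent thresholds (re-estimating sigma per level) are a common refinement when the artifact energy is concentrated in particular scales.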
  • asked a question related to Signal Analysis
Question
8 answers
I am looking for an adaptive short-time Fourier transform implemented as MATLAB code. This can be useful for non-stationary signal analysis.
Relevant answer
Answer
Thanks for your answer. The adaptive STFT is a step I need in a larger program, so I can't use another program to implement it while I am using MATLAB for the rest of the code.
So, are there any other ideas?
  • asked a question related to Signal Analysis
Question
13 answers
Hey all. I have two signals, as shown: the red one is how my output should look (the ideal case), and the blue one is what I am actually getting. I am quite new to signal analysis, so I was wondering what good metrics I can use to find the similarity between my generated signal and the ideal signal. I have looked at cross-correlation and SNR using MATLAB, but I wanted to know if there are more methods out there that can give me a clearer picture, especially with regard to shape similarity.
Thank you all again.
Relevant answer
Answer
Hello Jonas,
All the answers above are good suggestions, and to those I might add: do an FFT of both signals and compare not only the frequencies but also the amplitudes of the components. This can also be used to test similarity, when frequencies with certain values are found to have quasi-equal amplitudes.
Also, a simple "difference signal", i.e., the algebraic difference of the two signals (the "error signal", as it would be called in a closed-loop control system), is itself a measure of similarity: if the difference signal is very small or null over certain frequency ranges, the signals are very close or identical on those intervals. You can express one signal as the other plus an error term and judge the error as a percentage of the amplitude, and you can analyze phase errors by considering the minima or arbitrarily chosen zero crossings. There are plenty of other metrics, such as the kurtosis of the two signals, the peak/crest factor, mean energy (area below the curve), or spectral energy in various frequency bands. As colleagues noted above, it helps to know what you are looking at in order to choose the proper instruments to analyze results.
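Several of the suggested metrics can be collected in one small function (a sketch; the two synthetic signals are hypothetical stand-ins for the red/blue traces in the question):

```python
import numpy as np

def similarity_report(ref, sig):
    """A few simple similarity measures between a reference and a test signal."""
    err = sig - ref                                    # the "difference signal"
    rmse = np.sqrt(np.mean(err**2))
    # Normalized correlation at zero lag: shape similarity in [-1, 1]
    ncc = np.corrcoef(ref, sig)[0, 1]
    # Spectral magnitude difference: compares frequency content
    spec_err = np.linalg.norm(np.abs(np.fft.rfft(ref)) -
                              np.abs(np.fft.rfft(sig)))
    return rmse, ncc, spec_err

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
ideal = np.sin(2 * np.pi * 5 * t)                      # "red" reference
actual = 0.9 * np.sin(2 * np.pi * 5 * t + 0.1) + 0.05 * rng.standard_normal(t.size)
rmse, ncc, spec_err = similarity_report(ideal, actual)
```

The three numbers separate concerns: `ncc` is insensitive to amplitude scaling (pure shape), `rmse` penalizes both amplitude and phase errors, and `spec_err` ignores phase entirely.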
  • asked a question related to Signal Analysis
Question
3 answers
Actually, I wish to understand the process and coding needed to define a new wavelet transform, so that I can understand and modify an existing wavelet transform to get better results. MATLAB has built-in wavelet transforms where we just have to choose the wavelet; I wish to define a new wavelet transform.
Relevant answer
Answer
By converting the signal from a 2-dimensional signal to a 1-dimensional one, the transformation can be applied to vectors.
  • asked a question related to Signal Analysis
Question
13 answers
Is there any mobile app publicly available on the app stores (both iOS and Android) that can be used to gather, collect, and analyze signal-strength measurements from the available WLAN access points? My aim is to utilize these RSS readings for WLAN-based indoor localization systems.
Relevant answer
You can monitor the RSS of the accessed WiFi networks using the methods described at https://www.lifewire.com/how-to-measure-your-wifi-signal-strength-818303
Best wishes
  • asked a question related to Signal Analysis
Question
7 answers
Could anyone please suggest some resources where I could find a comparison curve of signal strength after the multipath propagation effect, with respect to the obstacle position between transmitter and receiver?
After conducting some experiments, I found that the effect was greater near the Rx or near the Tx, but smaller when the obstacle was the same distance from both. Why does this phenomenon happen?
Relevant answer
Answer
I suppose this depends on what kind of obstacles you are considering and how they are affecting the signals. If you consider an object that is scattering the signal, then the pathloss will be proportional to (d_1*d_2)^2 where d_1 is the distance from the transmitter to the obstacle and d_2 is the distance from the obstacle to receiver. For a given total propagation distance d_1+d_2, it follows that the pathloss is at its smallest when the scattering object is close to the transmitter or the receiver.
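The (d_1 d_2)^2 dependence described above can be plotted directly; this short sketch confirms that the scattered-path loss peaks when the obstacle sits midway, and is smallest near either end of the link:

```python
import numpy as np

# Scattered-path loss proportional to (d1 * d2)^2 for total distance D = d1 + d2
D = 100.0                               # total Tx-Rx distance, arbitrary units
d1 = np.linspace(1.0, D - 1.0, 99)      # obstacle position along the path
d2 = D - d1
loss_db = 20.0 * np.log10(d1 * d2)      # relative pathloss in dB

# The product d1*d2 (hence the loss) is maximized at the midpoint d1 = D/2
worst = d1[np.argmax(loss_db)]
```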
  • asked a question related to Signal Analysis
Question
5 answers
By referring to some scientific resources, we've found that the brain produces signals that relate to its activities and can be monitored by EEG. The question is: can we force the brain to perform some actions by injecting signals (in a direct or indirect way)?
Relevant answer
Answer
Brain–computer interface can also restore communication to people who have lost the ability to move or speak:
  • asked a question related to Signal Analysis
Question
6 answers
Dear Colleagues,
Please suggest any open source software for ECG signal analysis.
Thanks in advance
N Das
Relevant answer
Answer
  • asked a question related to Signal Analysis
Question
6 answers
I want to find the resonance and anti-resonance frequencies of an ultrasonic transducer by analyzing its impedance,
so I need to buy an impedance analyzer, spectrum analyzer, or something like that,
but my budget is limited.
Do you recommend any device for my application and limited budget? :D
Relevant answer
Answer
If you want to measure impedance in a low cost way, get yourself
1) Suitable signal generator
2) An appropriately sized current sense transformer
3) a two-channel oscilloscope.
Measure the voltage and current as you vary the frequency. The oscilloscope will give you the phase relationship between the current and the voltage across the transducer. You can then calculate the real and imaginary components of the impedance. I leave it as an exercise how you might calibrate this setup. Cheers!
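The final calculation step can be sketched as follows (the readings are hypothetical examples of what the two-channel scope would give; Z = V/I rotated by the measured phase yields the complex impedance):

```python
import numpy as np

# Hypothetical readings from the scope setup described above:
V_peak = 2.0         # volts across the transducer
I_peak = 0.004       # amps from the current-sense transformer
phase_deg = -30.0    # current-to-voltage phase from the scope cursors

Z = (V_peak / I_peak) * np.exp(1j * np.radians(phase_deg))
R = Z.real           # resistive part, ohms
X = Z.imag           # reactive part, ohms (negative => capacitive)
```

Sweeping the generator and repeating this at each frequency gives |Z| versus frequency; the resonance and anti-resonance appear as the minimum and maximum of that curve.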
  • asked a question related to Signal Analysis
Question
3 answers
Hi everyone,
This is just an out-of-curiosity question, but why is the cerebellum used as the reference point? What is the reasoning? I was always told it is because it is 'silent' compared to the cortical regions, but obviously the cerebellum is also active. Is there any paper that explains the choice, or is a better reference region or method available?
Thank you!
Relevant answer
Answer
Dear Haram,
I have always been heavily involved in these past discussions when we measured EEG with telemetry. The answer has always been that the cerebellum is obviously active, but its neurons fire at very high frequencies, whereas the most interesting frequencies for a "normal" EEG are much lower (0.5-100 Hz). That is most likely why the cerebellum is taken as the reference; additionally, if you have differential electrodes you get a greater signal-to-noise ratio. Hope this helps.
  • asked a question related to Signal Analysis
Question
3 answers
I want to identify a signal peptide for a gene. Tools like SignalP are not showing anything, and other tools are not showing the sequence. Could someone suggest some tools?
Relevant answer
Answer
Did you try the latest SignalP 6.0?
Best regards
PS. If it is still difficult, did you consider using the AA sequence instead of the gene sequence (there are programs out there that will do the job)?
  • asked a question related to Signal Analysis
Question
1 answer
Hello everyone,
for my thesis I want to extract some voice features from audio data recorded during psychotherapy sessions. For this I am using the openSMILE toolkit. For the fundamental frequency and jitter I already get good results, but the extraction of center frequencies and bandwidths of the formants 1-3 is puzzling me. For some reason there appears to be just one formant (the first one) with a frequency range up to 6kHz. Formants 2 and 3 are getting values of 0. I expected the formants to be within a range of 500 to 2000 Hz.
I tried to fix the problem myself but could not find the issue here. Does anybody have experience with openSMILE, especially formant extraction, and could help me out?
For testing purposes I am using various audio files recorded by myself or extracted from youtube. My config file looks like this:
///////////////////////////////////////////////////////////////////////////
// openSMILE configuration template file generated by SMILExtract binary //
///////////////////////////////////////////////////////////////////////////
[componentInstances:cComponentManager]
instance[dataMemory].type = cDataMemory
instance[waveSource].type = cWaveSource
instance[framer].type = cFramer
instance[vectorPreemphasis].type = cVectorPreemphasis
instance[windower].type = cWindower
instance[transformFFT].type = cTransformFFT
instance[fFTmagphase].type = cFFTmagphase
instance[melspec].type = cMelspec
instance[mfcc].type = cMfcc
instance[acf].type = cAcf
instance[cepstrum].type = cAcf
instance[pitchAcf].type = cPitchACF
instance[lpc].type = cLpc
instance[formantLpc].type = cFormantLpc
instance[formantSmoother].type = cFormantSmoother
instance[pitchJitter].type = cPitchJitter
instance[lld].type = cContourSmoother
instance[deltaRegression1].type = cDeltaRegression
instance[deltaRegression2].type = cDeltaRegression
instance[functionals].type = cFunctionals
instance[arffSink].type = cArffSink
printLevelStats = 1
nThreads = 1
[waveSource:cWaveSource]
writer.dmLevel = wave
basePeriod = -1
filename = \cm[inputfile(I):name of input file]
monoMixdown = 1
[framer:cFramer]
reader.dmLevel = wave
writer.dmLevel = frames
copyInputName = 1
frameMode = fixed
frameSize = 0.0250
frameStep = 0.010
frameCenterSpecial = center
noPostEOIprocessing = 1
buffersize = 1000
[vectorPreemphasis:cVectorPreemphasis]
reader.dmLevel = frames
writer.dmLevel = framespe
k = 0.97
de = 0
[windower:cWindower]
reader.dmLevel=framespe
writer.dmLevel=winframe
copyInputName = 1
processArrayFields = 1
winFunc = ham
gain = 1.0
offset = 0
[transformFFT:cTransformFFT]
reader.dmLevel = winframe
writer.dmLevel = fftc
copyInputName = 1
processArrayFields = 1
inverse = 0
zeroPadSymmetric = 0
[fFTmagphase:cFFTmagphase]
reader.dmLevel = fftc
writer.dmLevel = fftmag
copyInputName = 1
processArrayFields = 1
inverse = 0
magnitude = 1
phase = 0
[melspec:cMelspec]
reader.dmLevel = fftmag
writer.dmLevel = mspec
nameAppend = melspec
copyInputName = 1
processArrayFields = 1
htkcompatible = 1
usePower = 0
nBands = 26
lofreq = 0
hifreq = 8000
usePower = 0
inverse = 0
specScale = mel
[mfcc:cMfcc]
reader.dmLevel=mspec
writer.dmLevel=mfcc1
copyInputName = 0
processArrayFields = 1
firstMfcc = 0
lastMfcc = 12
cepLifter = 22.0
htkcompatible = 1
[acf:cAcf]
reader.dmLevel=fftmag
writer.dmLevel=acf
nameAppend = acf
copyInputName = 1
processArrayFields = 1
usePower = 1
cepstrum = 0
acfCepsNormOutput = 0
[cepstrum:cAcf]
reader.dmLevel=fftmag
writer.dmLevel=cepstrum
nameAppend = acf
copyInputName = 1
processArrayFields = 1
usePower = 1
cepstrum = 1
acfCepsNormOutput = 0
oldCompatCepstrum = 1
absCepstrum = 1
[pitchAcf:cPitchACF]
reader.dmLevel=acf;cepstrum
writer.dmLevel=pitchACF
copyInputName = 1
processArrayFields = 0
maxPitch = 500
voiceProb = 0
voiceQual = 0
HNRdB = 0
F0 = 1
F0raw = 0
F0env = 1
voicingCutoff = 0.550000
[lpc:cLpc]
reader.dmLevel = fftc
writer.dmLevel = lpc1
method = acf
p = 8
saveLPCoeff = 1
lpGain = 0
saveRefCoeff = 0
residual = 0
forwardFilter = 0
lpSpectrum = 0
[formantLpc:cFormantLpc]
reader.dmLevel = lpc1
writer.dmLevel = formants
copyInputName = 1
nFormants = 3
saveFormants = 1
saveIntensity = 0
saveNumberOfValidFormants = 1
saveBandwidths = 1
minF = 400
maxF = 6000
[formantSmoother:cFormantSmoother]
reader.dmLevel = formants;pitchACF
writer.dmLevel = forsmoo
copyInputName = 1
medianFilter0 = 0
postSmoothing = 0
postSmoothingMethod = simple
F0field = F0
formantBandwidthField = formantBand
formantFreqField = formantFreq
formantFrameIntensField = formantFrameIntens
intensity = 0
nFormants = 3
formants = 1
bandwidths = 1
saveEnvs = 0
no0f0 = 0
[pitchJitter:cPitchJitter]
reader.dmLevel = wave
writer.dmLevel = jitter
writer.levelconf.nT = 1000
copyInputName = 1
F0reader.dmLevel = pitchACF
F0field = F0
searchRangeRel = 0.250000
jitterLocal = 1
jitterDDP = 1
jitterLocalEnv = 0
jitterDDPEnv = 0
shimmerLocal = 0
shimmerLocalEnv = 0
onlyVoiced = 0
inputMaxDelaySec = 2.0
[lld:cContourSmoother]
reader.dmLevel=mfcc1;pitchACF;forsmoo;jitter
writer.dmLevel=lld1
writer.levelconf.nT=10
writer.levelconf.isRb=0
writer.levelconf.growDyn=1
nameAppend = sma
copyInputName = 1
noPostEOIprocessing = 0
smaWin = 3
[deltaRegression1:cDeltaRegression]
reader.dmLevel=lld1
writer.dmLevel=lld_de
writer.levelconf.isRb=0
writer.levelconf.growDyn=1
nameAppend = de
copyInputName = 1
noPostEOIprocessing = 0
deltawin=2
blocksize=1
[deltaRegression2:cDeltaRegression]
reader.dmLevel=lld_de
writer.dmLevel=lld_dede
writer.levelconf.isRb=0
writer.levelconf.growDyn=1
nameAppend = de
copyInputName = 1
noPostEOIprocessing = 0
deltawin=2
blocksize=1
[functionals:cFunctionals]
reader.dmLevel = lld1;lld_de;lld_dede
writer.dmLevel = statist
copyInputName = 1
frameMode = full
// frameListFile =
// frameList =
frameSize = 0
frameStep = 0
frameCenterSpecial = left
noPostEOIprocessing = 0
functionalsEnabled=Extremes;Moments;Means
Extremes.max = 1
Extremes.min = 1
Extremes.range = 1
Extremes.maxpos = 0
Extremes.minpos = 0
Extremes.amean = 0
Extremes.maxameandist = 0
Extremes.minameandist = 0
Extremes.norm = frame
Moments.doRatioLimit = 0
Moments.variance = 1
Moments.stddev = 1
Moments.skewness = 0
Moments.kurtosis = 0
Moments.amean = 0
Means.amean = 1
Means.absmean = 1
Means.qmean = 0
Means.nzamean = 1
Means.nzabsmean = 1
Means.nzqmean = 0
Means.nzgmean = 0
Means.nnz = 0
[arffSink:cArffSink]
reader.dmLevel = statist
filename = \cm[outputfile(O):name of output file]
append = 0
relation = smile
instanceName = \cm[inputfile]
number = 0
timestamp = 0
frameIndex = 1
frameTime = 1
frameTimeAdd = 0
frameLength = 0
// class[] =
printDefaultClassDummyAttribute = 0
// target[] =
// ################### END OF openSMILE CONFIG FILE ######################
Relevant answer
Answer
Hi,
Please pay attention to these parameters:
...
nFormants = 3
formants = 1
bandwidths = 1
...
Change the 1's to 3's.
  • asked a question related to Signal Analysis
Question
9 answers
I have come up with a mixture-of-Gaussians based classification system for image recognition which can theoretically be adapted for signal analysis, but I would like to improve the system by enhancing some of its
features, like the optimizers, classifiers and the like. The best option was to make a single package out of it, which might solve other problems in AI too, and to make it available to others under the gnu vx license
Relevant answer
Answer
I think hosting it on GitHub may be the way to go, as pointed out by Raoul G. C. Schönhof . I undertook a similar approach some time back and it worked for me.
  • asked a question related to Signal Analysis
Question
5 answers
If possible, in the form of links or full names of the research papers. Thank you in advance!
Relevant answer
Answer
One more paper that studies the classification:
  • asked a question related to Signal Analysis
Question
17 answers
Dear community, my model is based on feature extraction from non-stationary signals using the discrete Wavelet Transform, followed by statistical features and machine learning classifiers to train the model. I achieved a maximum accuracy of 77% for 5 classes. How can I increase it? The shape of my data frame is X = (335, 48), y = (335, 1).
Thank you
Relevant answer
Accuracy can be seen as a measure of quality: high precision means that a rule returns more relevant results than irrelevant ones, while high recall means that most of the relevant results are actually returned.
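The precision/recall distinction is easy to make concrete; below is a minimal NumPy sketch with hypothetical labels and predictions (not the poster's data):

```python
import numpy as np

# Hypothetical ground truth and classifier output for a binary task
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 1])
y_pred = np.array([1, 1, 0, 0, 1, 1, 0, 1])

tp = np.sum((y_pred == 1) & (y_true == 1))  # relevant results returned
fp = np.sum((y_pred == 1) & (y_true == 0))  # irrelevant results returned
fn = np.sum((y_pred == 0) & (y_true == 1))  # relevant results missed

accuracy = np.mean(y_pred == y_true)
precision = tp / (tp + fp)  # fraction of returned results that are relevant
recall = tp / (tp + fn)     # fraction of relevant results that are returned
print(accuracy, precision, recall)  # 0.75 0.8 0.8
```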
  • asked a question related to Signal Analysis
Question
4 answers
Hello! We have a project where participants engaged in reading, thinking and then responding to an ethical dilemma. We used the EMOTIV 14 channel headset to track brain activity. The reading, thinking and responding times varied across participants (they were given all the time they needed). Do you have any advice or literature about either standardizing these varying times across participants or time-varying analysis?
Many thanks,
Deyang Yu
Relevant answer
Answer
We can use Spectral Analysis for Neural Signals
  • asked a question related to Signal Analysis
Question
7 answers
I have a 1D signal and I have done wavelet packet decomposition on it which is giving several sub-bands. Can I stack these sub-bands (one below other) to form a 2D matrix and hence an image representation of that 1D signal?
Relevant answer
Answer
Yes, you can convert a 1-D signal (wave): first convert it to a data matrix, since any data matrix can be treated as an image in MATLAB. After that you can apply 2-D wavelet decomposition to that image using dwt2() and proceed in the same way.
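To illustrate the stacking idea in a self-contained way, here is a sketch using a hand-rolled Haar wavelet packet in NumPy; in practice you would use MATLAB's wpdec or PyWavelets' WaveletPacket. The Haar filter and the synthetic signal are assumptions for the example:

```python
import numpy as np

def haar_step(x):
    """One Haar analysis step: return (approximation, detail) at half length."""
    x = x[: len(x) // 2 * 2]                 # drop a trailing odd sample
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_packet(x, levels):
    """Full wavelet packet tree: split every sub-band at every level."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        bands = [half for b in bands for half in haar_step(b)]
    return bands

# Synthetic 1-D signal (hypothetical), 1024 samples
t = np.arange(1024) / 1024.0
x = np.sin(2 * np.pi * 50 * t)

sub_bands = haar_packet(x, levels=3)   # 2^3 = 8 sub-bands
image = np.vstack(sub_bands)           # one band per row -> 2-D "image"
print(image.shape)  # (8, 128)
```

Each row is one sub-band, so the resulting matrix can be fed to any image-based pipeline.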
  • asked a question related to Signal Analysis
Question
3 answers
Hi all, I hope everyone is doing good.
I am working on machine learning with EEG data, from which I have to extract statistical features. Using the mne library I have extracted the data in matrix form, but my work requires some statistical features to be extracted.
All features to be extracted are given in table 2 of this paper: "Context-Aware Human Activity Recognition (CAHAR) in-the-Wild Using Smartphone Accelerometer,". The data set I am using is dataset 2b from "http://www.bbci.de/competition/iv/".
I can't find a signal processing library. Can you suggest a signal processing library for processing EEG signal data in Python?
Thanks to all who help.
Relevant answer
Answer
Aparna Sathya Murthy I came across this and tried to install this in google colab (pip install pyeeg), but it says:
ERROR: Could not find a version that satisfies the requirement pyeeg (from versions: none)
ERROR: No matching distribution found for pyeeg
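If pyeeg will not install, the usual statistical features can be computed directly with NumPy/SciPy; a sketch on a hypothetical channels × samples epoch (the random data is just a stand-in for a real EEG segment):

```python
import numpy as np
from scipy import stats

# Hypothetical EEG epoch: channels x time samples
rng = np.random.default_rng(0)
epoch = rng.standard_normal((3, 250))   # 3 channels, 1 s at 250 Hz

features = {
    "mean":     epoch.mean(axis=1),
    "std":      epoch.std(axis=1),
    "min":      epoch.min(axis=1),
    "max":      epoch.max(axis=1),
    "skewness": stats.skew(epoch, axis=1),
    "kurtosis": stats.kurtosis(epoch, axis=1),
}
vector = np.concatenate(list(features.values()))  # one flat feature vector
print(vector.shape)  # (18,) = 6 features x 3 channels
```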
  • asked a question related to Signal Analysis
Question
9 answers
Dear community, I need your help. I'm training my model to classify sleep stages. After extracting features from my signal I collected the features (X) in a DataFrame with shape (335, 48), and y (labels) with shape (335,).
this is my code :
def get_base_model():
    inp = Input(shape=(335, 48))
    img_1 = Convolution1D(16, kernel_size=5, activation=activations.relu, padding="valid")(inp)
    img_1 = Convolution1D(16, kernel_size=5, activation=activations.relu, padding="valid")(img_1)
    img_1 = MaxPool1D(pool_size=2)(img_1)
    img_1 = SpatialDropout1D(rate=0.01)(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = MaxPool1D(pool_size=2)(img_1)
    img_1 = SpatialDropout1D(rate=0.01)(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = MaxPool1D(pool_size=2)(img_1)
    img_1 = SpatialDropout1D(rate=0.01)(img_1)
    img_1 = Convolution1D(256, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = Convolution1D(256, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = GlobalMaxPool1D()(img_1)
    img_1 = Dropout(rate=0.01)(img_1)
    dense_1 = Dropout(0.01)(Dense(64, activation=activations.relu, name="dense_1")(img_1))
    base_model = models.Model(inputs=inp, outputs=dense_1)
    opt = optimizers.Adam(0.001)
    base_model.compile(optimizer=opt, loss=losses.sparse_categorical_crossentropy, metrics=['acc'])
    model.summary()
    return base_model

model = get_base_model()
test_loss, test_acc = model.evaluate(Xtest, ytest, verbose=0)
model.fit(X, y)
print('\nTest accuracy:', test_acc)
I got the error : Input 0 is incompatible with layer model_16: expected shape=(None, 335, 48), found shape=(None, 48)
you can have in this picture an idea about my data shape :
Relevant answer
Answer
So you need to code your network to get the input of size (none, 1, 48). Each feature is of dimension 1x48, while 'none' would take up the size of the number of sample points (335 in your case). Hence, your input would be of shape 335x1x48. So, modify the input layer of your network to expect input of size 1x48 instead of 335x48, and provide input as 335x1x48.
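The reshape itself is a one-liner in NumPy; a quick sketch with the shapes from the question (both axis orders shown, since Conv1D treats the middle axis as time steps):

```python
import numpy as np

# Hypothetical feature matrix: 335 samples, 48 features each
X = np.zeros((335, 48))

# Keras expects (batch, timesteps, features); the batch axis is implicit,
# so each individual sample must be 2-D, not a flat 48-vector.
X3 = X[:, np.newaxis, :]    # -> (335, 1, 48), matching Input(shape=(1, 48))
X3b = X[:, :, np.newaxis]   # -> (335, 48, 1), matching Input(shape=(48, 1))
print(X3.shape, X3b.shape)
```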
  • asked a question related to Signal Analysis
Question
5 answers
Hello everyone,
I am trying to generate a faulty acceleration signal with SIMULINK. The inner ring of the bearing is fixed to the shaft. The bearing has 17 rolling elements. I was thinking of creating a fault in the inner ring because it is attached to the shaft. My approach was to add 17 impulses per cycle to the original measured acceleration data in order to generate a faulty signal.
I attached a picture of my Simulink model. What do you think about this approach and is my model correct so far?
10.848 * f_wheel is the Ball Pass frequency of the inner ring.
Kind regards
  • asked a question related to Signal Analysis
Question
1 answer
Hello,
I have accelerometer data and I want to calculate the displacement. I found a software package called SeismoSignal, but it is mainly used to analyze seismic signals; I want a simple signal-processing tool to calculate displacement, apply a high-pass filter, and perform denoising.
Relevant answer
Answer
You can use OriginPro. Under Analysis there is signal processing with several types of filters. You can also obtain the displacement from the Integrate function under Mathematics.
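If you prefer a scriptable route, the same pipeline (high-pass filter, then double integration) is a few lines of SciPy; a sketch on a synthetic acceleration trace, with the cutoff and sampling rate as assumed values:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.integrate import cumulative_trapezoid

fs = 1000.0                                # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
acc = np.sin(2 * np.pi * 5.0 * t)          # synthetic 5 Hz acceleration (m/s^2)

# High-pass filter (0.5 Hz cutoff, assumed) to remove DC offset and drift
b, a = butter(4, 0.5 / (fs / 2), btype="highpass")
acc_f = filtfilt(b, a, acc)

vel = cumulative_trapezoid(acc_f, t, initial=0.0)   # acceleration -> velocity
vel = filtfilt(b, a, vel)                           # remove integration drift
disp = cumulative_trapezoid(vel, t, initial=0.0)    # velocity -> displacement
disp = filtfilt(b, a, disp)
print(disp.shape)
```

Re-applying the high-pass after each integration keeps the small drift introduced by numerical integration from accumulating into a runaway displacement.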
  • asked a question related to Signal Analysis
Question
5 answers
Dear Colleagues, please suggest the best and most user-friendly open-source software for audio signal analysis, with most of the scientific tools needed for the task.
Thanks and Regards
N Das
Relevant answer
Answer
Thanks dear @Shahin Mardani, @Mila Ilieva, @Mani Entezami for your response to my discussion.
Dear @Mani, I will install the sonicvisualiser . Thanks for your support.
Best Wishes
Regards
N Das
  • asked a question related to Signal Analysis
Question
8 answers
In MOSFET small signal analysis, we can relate charge (Q) and capacitance (C) using the formula Cij = dQi/dVj. Is the voltage term (dVj) calculated across the electrode terminals 'i' and 'j' ? Or is it between electrode terminal 'j' and GND?
For example: Cgs = dQg/dVs; where g=gate and s=source. Is this voltage (dVs) between gate and source? or between source and ground?
Relevant answer
Answer
You are right in specifying that the capacitance is defined by the dielectric between the two interfaces with conductors, and I would add that its value is specific to the dielectric itself. For example, the capacitance of the gate oxide is of course epsilon_ox x gate area / thickness. The fact is that in any case, the answer to the original question is that the voltage drop which charges the capacitance is the one applied across the dielectric interfaces, not between one of the interfaces and ground. Of course, if one of the interfaces is grounded, one can say that dV is between the other interface and ground, but it is just a special case. It is semantically true when a power MOSFET is operated in static regime (away from switching phases) and the source electrode is kept at ground potential, so the gate-source capacitance is named a gate-to-ground parameter just because the source electrode is grounded.
Of course we agree on the meaning of dynamically active parasitic terms playing a role in switching speed and power, such as Ciss, Coss, Crss, but again the (time-dependent) voltage drop across the involved capacitors (e.g. Cgd) applies across those same capacitors which, by the way, dynamically involve not only an oxide, but also a modulated depletion zone which behaves as a dielectric between the two conductive materials (as in the case of Cds), or are in series with an oxide capacitor (as in Cgs, when surface inversion is not reached, or Cgd)
  • asked a question related to Signal Analysis
Question
5 answers
Suppose that X1, X2 are random variables with given probability distributions fx1(x), fx2(x).
Let fy(x) = fy( fx1(x) , fx2(x) ) be a known probability distribution of "unknown" random variable Y. Is it possible to determine how the variable Y is related to X1 and X2?
  • asked a question related to Signal Analysis
Question
3 answers
Dear all,
When using tonyplot, I can get the I-V curve without any problems, but when I try to add small signal analysis after gate voltage sweep (ac freq=1e6) in Silvaco Atlas it shows zero Cgd for all gate voltages. I was wondeing if there is a way to plot the C-V curve correctly.
Thank You in advance.
Relevant answer
Maryam,
May I give a hint that you may use to solve the problem.
When you want to calculate or measure some physical parameter you must stick to its defining relation. Specifically here, you have
Cgd = dQgd/dVgd
In small-signal notation,
cgd = qgd/vgd
So, accordingly:
You have to set a certain DC operating point,
apply a small-signal voltage vgd in series with the DC bias,
then measure or calculate qgd. By dividing qgd by vgd you get cgd.
You can also calculate the current igd;
then you get igd/vgd = jw·cgd, which is the susceptance of cgd.
Best wishes
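Numerically, the last step (recovering cgd from the susceptance igd/vgd = jw·cgd) looks like this; the device value and drive level below are made-up numbers, not from the question:

```python
import numpy as np

f = 1e6          # test frequency (Hz), as in "ac freq=1e6"
v_ac = 10e-3     # applied small-signal amplitude (V), assumed
C_true = 2e-12   # hypothetical gate-drain capacitance (2 pF)

# The measured small-signal current magnitude: |igd| = w * C * |vgd|
i_ac = 2 * np.pi * f * C_true * v_ac

# Recover the capacitance from the measured susceptance igd/vgd = jwC
C_est = i_ac / (2 * np.pi * f * v_ac)
print(C_est)  # 2e-12 F
```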
  • asked a question related to Signal Analysis
Question
3 answers
I need some help with VISSIM. I have modeled an intersection where I would like to apply a no-lane-change rule within 100 ft of the traffic signal. Picture 1 shows the intersection without the lane-change restriction, where vehicle 1 and vehicle 2 are changing lanes near the traffic signal.
However, after applying the no-lane-change rule near the traffic signal, I got picture 2 for the EB direction. The two vehicles in picture 2 would like to turn left, but they are in a no-turning section, so they are not moving.
How can I apply no lane changing near an intersection?
Relevant answer
Answer
Hi,
You should make the connector sufficiently long (say 100 ft as per your requirement), and enable the NoLnChRAllVehTypes and NoLnChLAllVehTypes parameters in the link properties for the connector. Also, to prevent vehicles from having to make last-minute decisions on changing lanes to follow a desired route, introduce the vehicle route decision point well upstream of the connector.
Hope this helps.
Best wishes,
Abdhul
  • asked a question related to Signal Analysis
Question
4 answers
Hi All,
I have an audio database consisting of various types of signals and I'm planning to extract features from the audio. I would like to know whether it's a good idea to extract basic audio features (e.g. MFCC, energy) with a large window (let's say a 5 s window with 1 s overlap) rather than the conventional small frame size (in ms). I know that the audio signal exhibits homogeneous behavior over a 5 s duration.
Thanks in advance
Relevant answer
Answer
Due to the dynamic nature of audio signals, features calculated from large window sizes become average values over the window rather than instantaneous values. At the other extreme, for window sizes less than 5 ms there might be too few samples to give a reliable estimate.
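Either way, window size is just a framing parameter; a short NumPy sketch of 5 s windows with 1 s overlap (the sample rate and the silent signal are placeholders):

```python
import numpy as np

def frame_signal(x, fs, win_s, hop_s):
    """Slice x into frames of win_s seconds taken every hop_s seconds."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    n = 1 + (len(x) - win) // hop
    return np.stack([x[i * hop : i * hop + win] for i in range(n)])

fs = 16000                    # assumed sample rate (Hz)
x = np.zeros(20 * fs)         # placeholder for 20 s of audio
# 5 s windows with 1 s overlap => 4 s hop
frames = frame_signal(x, fs, win_s=5.0, hop_s=4.0)
energy = np.mean(frames ** 2, axis=1)   # one averaged value per window
print(frames.shape, energy.shape)
```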
  • asked a question related to Signal Analysis
Question
3 answers
Can anybody give details about how NIR spectra is related to glucose absorption in the sense of wavelength?
Relevant answer
Answer
Infrared radiation induces molecular vibrations as a result of which different bonds absorb light at different frequencies. Glucose for example is a hydrocarbon which consists of C-H, O-H, C-C, C=O functional groups which absorb photons with the right energy to excite overtone and combinations of fundamental molecular vibrations. Therefore, glucose is capable of absorbing NIR light. However, NIR absorption features are low in magnitude and highly overlapping in nature.
References
Hope that helps. Best of luck!!
  • asked a question related to Signal Analysis
Question
4 answers
Is the introduction of labeling bad while detecting small molecules? If yes, what are the major disadvantages of using labels for signal amplification in the detection of small molecular weight ( <1 kDa ) compounds?
Relevant answer
Answer
For a given size label, the effect on molecular properties will depend on the size of the molecule you want to label. This is even true for isotope effects, they are much stronger for 2H/1H than for 13C/12C. In addition, for a small molecule the chances that the label sterically interferes with binding of the molecule into the pocket of its receptor are much larger than for, say, a big protein.
  • asked a question related to Signal Analysis
Question
8 answers
The Fourier transform does not give information about the local time–frequency characteristics of a signal, which are especially important in the context of non-stationary signal analysis. Various other mathematical tools like the Wavelet transform, Gabor transform, Wigner–Ville distribution, etc. have been used to analyze such signals. The Fractional Fourier transform (FrFT), which is a generalization of the integer-order Fourier transform, can also be used in this context.
Relevant answer
Answer
It's probably easiest to illustrate the difference with examples. Try to visualise your time signal as being the x-axis of a graph, and the Fourier domain (frequency domain) as being the y-axis. This is exactly how you would represent an STFT. The Fourier transform operation is a mapping that "rotates" your data from the time axis to the frequency one.
As a side note, you may find that the STFT does what you seem to require, in that it allows you to look at the changing frequency content as a function of time.
Now ask yourself what happens if you don't fully rotate by 90 degrees from time to frequency - you end up somewhere in between. This is what a fractional Fourier transform does; it takes you to a domain that is neither frequency nor time but between the two.
Note that the concept of frequency applies when your basis functions are (co)sinusoids. However, we could use a different set of basis functions that, rather than expanding and contracting by changing their frequency, are stretched by a scaling factor. You could use any family of pulses from the wide variety available and decompose your signals into pulses of different scales and amplitudes; this is a wavelet transform. Clearly, when decomposing against a wavelet basis, frequency is no longer a relevant concept.
Hope that helps
  • asked a question related to Signal Analysis
Question
4 answers
I'm working on a bipolar shaper amplifier and solving some circuits.
As shown in the attachment, the drains of the two MOSFET pairs are shorted. Does that serve as AC ground in small-signal analysis?
I've made its small-signal model but it seems to be incorrect.
Can anybody please comment on how to treat the short between the two MOS pairs in the differential amplifier?
Relevant answer
Dear Adeel,
Hope you are well.
If M4a and M4b are biased by a constant current source then they will act as source followers. When one applies equal and opposite differential input signals at M2a and M2b, the signals will cancel at the drains of M4a and M4b.
This means that the short can be considered a virtual AC ground.
Best wishes
  • asked a question related to Signal Analysis
Question
4 answers
I have a reference time series and a main data set (similar sampling rate) which contains multiple instances of the reference signal. By applying cross-correlation (xcorr in Matlab) and taking the highest xcorr values, I have extracted multiple signal instances from the main data set.
Sometimes the list includes slightly different signals, and I want to keep or remove those signals by comparing them again with the reference signal to determine whether they match. Is there an efficient way to do that?
Reference snapshot attached.
Regards
Sreeraj
Relevant answer
Answer
The length of the result is 2N-1 (N is the size of the original signal), so the cross-correlation can be displayed on [-(N-1), N-1]. You get the maximal value at the lag where the two signals are most similar. For example, if y = x(t - t0), then xcorr(x, y) has its maximum at t0. Cross-correlation can therefore be used to measure the delay between two similar signals. Please refer to this link for more details: https://fr.mathworks.com/help/matlab/ref/xcorr.html
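The same lag-and-peak logic can double as the match test the question asks for: compare the correlation peak with the reference's own energy. A NumPy sketch on a synthetic pulse (the pulse shape and the near-1.0 similarity criterion are assumptions):

```python
import numpy as np

fs = 100                                   # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.exp(-((t - 0.2) ** 2) / (2 * 0.02 ** 2))   # reference pulse at 0.2 s

delay = 25                                 # delay the copy by 25 samples (0.25 s)
y = np.concatenate([np.zeros(delay), x])[: len(x)]

# Full cross-correlation has length 2N-1; lags run from -(N-1) to N-1
c = np.correlate(y, x, mode="full")
lags = np.arange(-(len(x) - 1), len(x))
est_delay = lags[np.argmax(c)]
print(est_delay)   # 25

# A peak close to the reference's own energy indicates a near-exact match;
# a markedly lower peak flags a "slightly different" candidate to discard.
similarity = c.max() / np.dot(x, x)
```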
  • asked a question related to Signal Analysis
Question
3 answers
I have conducted an experiment in which an impulse was exerted on a sound bowl (it was hit by a hammer). As a result I obtained acceleration from an accelerometer on 3 axes, calculated the net acceleration (sqrt(x^2+y^2+z^2)), and tried to obtain the frequency components of the response. What I need to do is identify the input impulse, but I have no idea how to do that and feel like I am walking blindly. I would appreciate any ideas/references.
Relevant answer
Answer
The accelerometer gives the system response. I think you also need the input signal (excitation source), i.e. the hammer signal. To measure the system response you should connect the hammer cable to the FFT device. You are actually measuring a transfer function. As you know, the transfer function is output/input, so you may recover the input from the transfer function itself. The following link may help you:
  • asked a question related to Signal Analysis
Question
3 answers
Hi,
I am trying to generate an eye diagram for a particular signal along with a defined eye mask, but I cannot find any reference on how to integrate an eye mask with the Matlab eye-diagram object. Does anyone have any information?
The first diagram is the Matlab-generated eye diagram, to which I would like to add an eye mask so it looks like the second diagram.
Relevant answer
Answer
Thank you Muhammad Ali for the suggestion, but I am afraid I am also looking for a way to pictorially represent the eye mask along with the generated eye diagram, to get a quick glance at the performance. I have now uploaded the expected waveform.
  • asked a question related to Signal Analysis
Question
8 answers
I'm wondering:
The spectrogram gives limited information about a non-stationary signal, but is it enough for a classification method? Are there any predefined names for the shape and behavior of the spectrogram? Where (on the spectrograms) is the fundamental frequency (F0)?
We have found time-frequency behaviour in plant signals.
See the full spectrograms below.
Any input will be very appreciated. Thank you
Relevant answer
Answer
In order to obtain a steady state or equilibrium state, or to obtain roots or maxima/minima, we need a first derivative, or we can make a saddle-point (Laplace) approximation for the high-dimensional integrals. The easy way is then to perform Bayesian parametric classification with maximum-likelihood estimation. The data may be pre-processed via PCA, and finally predictions are made via multivariate density estimation with parametric modelling.
  • asked a question related to Signal Analysis
Question
3 answers
I am facing a problem in Simulink.
When I try to write a bus signal to the workspace in Simulink, it fails.
I then used a bus selector to select some signals in the bus and write them to the workspace, but this also fails, with the error: 'The selected signal is invalid since it refers to a bus element within an array of subbuses'.
When I use another bus selector to select the subbuses from the former bus selector, I cannot find the other signals.
So how can I write a bus signal to the workspace?
Relevant answer
Hello
I found this answer on the MathWorks website; I hope it is useful for you:
There are several ways to store bus signals in the MATLAB workspace. Please choose one of the following three approaches:
1. Starting with Simulink 7.9 (R2012a), the 'To Workspace' block can be used to store bus signals of any mixed data when using the MATLAB 'Timeseries' format. For more information on this, please refer to the following link:
2. To store bus signals in Simulink 7.6 (R2010b) in the MATLAB workspace as a structure with the same hierarchy and signal names, data logging can be used as follows:
1) Right click on the desired bus signal and select Signal Properties.
2) In the Signal Properties window, check the box for 'Log signal data'.
3) In the model's Configuration Parameters dialog box, select Data Import/Export in the left pane.
4) Select the Signal logging checkbox and specify the variable name where the bus signal will be stored.
The logged signal variable will be of the form:
variablename.busname.signalname
3. Unfortunately, the above methods do not work for models compiled with Real-Time Workshop. This is because Real-Time Workshop does not support signal logging. An alternative method can be used as a workaround, however it is less direct. The following steps demonstrate how to create a MATLAB structure from a bus signal's flattened array and the accompanying model.
1) Save the bus signal to a MAT-file using the To File block and select 'MAT-file logging' under Configuration Parameters>Real-Time Workshop>Interface>Data exchange
2) Compile and run the model and executable, respectively, to generate the MAT-file with the bus signal stored as an array.
3) From the Simulink model, use the following code to create a Simulink bus object:
busInfo = Simulink.Bus.createObject(mdlName, blkName);
num_el = eval([busInfo.busName '.getNumLeafBusElements']);
elemList = eval([busInfo.busName '.getLeafBusElements']);
4) Create an array of Timeseries objects that capture the signal data from your MAT-file:
load MyFile % generated in the second step
for i = 1:num_el
    size = elemList(i).Dimensions;
    ts{i} = timeseries(data(i+1:i+size,:)', data(1,:)');
end
5) Finally, propagate the Simulink bus object with the above timeseries using the CREATESTRUCTOFTIMESERIES method:
MYBUS = Simulink.SimulationData.createStructOfTimeseries(busInfo.busName,ts);
End of answer
Best Regards
  • asked a question related to Signal Analysis
Question
3 answers
I want to get a time-frequency spectrogram using the windowed Burg and Lomb-Scargle methods. As far as I know, they calculate the PSD for a segment of time, but for a short signal (less than 5 min in length) the recommended window sizes are bigger than the signal length, so I get only a single PSD for the whole signal. What window size should I use in order to get a time-frequency spectrogram for a 5 min signal?
Relevant answer
Answer
The window size of the STFT should be short enough to maintain the stationarity of the signal. If the frequency characteristics change within a window, set the window size shorter. Check the periodicity of the signal in the time domain, and determine a window size short enough to capture that periodicity.
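As a concrete illustration with SciPy's STFT: a 30 s window (an assumed choice, not a recommendation from the answer) with 50% overlap gives roughly 20 PSD slices across a 5-minute record while still resolving slow components:

```python
import numpy as np
from scipy.signal import stft

fs = 250.0                              # assumed sampling rate (Hz)
t = np.arange(0, 300.0, 1.0 / fs)       # a 5-minute signal
x = np.sin(2 * np.pi * 0.1 * t)         # slow 0.1 Hz oscillation

# A 30 s window with 50% overlap: ~20 PSD slices over 5 minutes,
# still resolving frequencies down to roughly 1/30 Hz
nper = int(30 * fs)
f, tt, Z = stft(x, fs=fs, nperseg=nper, noverlap=nper // 2)
print(Z.shape)   # (frequency bins, time slices)
```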
  • asked a question related to Signal Analysis
Question
3 answers
The signals for the induction motor current are plotted in the frequency domain using MATLAB.
I attached the plot for more explanation.
  • asked a question related to Signal Analysis
Question
4 answers
Hi
I was going through different methods to implement serial decoding of a FlexRay analogue electrical signal using Matlab. Any suggestions or useful references are much appreciated!
A reference waveform from an external tool is attached. The electrical waveform represents the 8-bit value 0x83.
Relevant answer
Dear Sreeraj,
In order to preserve the waveform shape you have to perform waveform coding, sometimes called pulse code modulation.
Such encoding has two parameters:
The sampling frequency fs, which must be greater than or equal to 2·fmax, where fmax is the maximum frequency contained in the waveform.
The other parameter is the sample amplitude quantization, which is determined by the allowed quantization noise; generally every sample is quantized with n bits. As you said, n = 8.
Then you only have to determine fs by determining fmax.
This can be done empirically or by frequency analysis of the signal using the FFT with oversampling;
then one cuts the bandwidth at the dominant frequency components.
If you take fs greater than 2·fmax, the reconstruction of the waveform will be simpler.
Best wishes
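The two parameters can be sketched numerically; the signal and fmax below are made-up stand-ins for the FlexRay waveform:

```python
import numpy as np

fmax = 5e6               # assumed highest frequency in the waveform (Hz)
fs = 2.5 * fmax          # sample somewhat above the Nyquist rate 2*fmax
n_bits = 8               # 8-bit samples, matching the 0x83 example

t = np.arange(0, 2e-6, 1 / fs)
x = np.sin(2 * np.pi * 1e6 * t)              # test waveform in [-1, 1]

# Uniform quantization to 2^8 = 256 levels over [-1, 1]
levels = 2 ** n_bits
codes = np.round((x + 1) / 2 * (levels - 1)).astype(np.uint8)
x_hat = codes / (levels - 1) * 2 - 1         # reconstructed samples

err = np.max(np.abs(x - x_hat))              # bounded by half a step, 1/255
print(err)
```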
  • asked a question related to Signal Analysis
Question
5 answers
By calculating the distance between two antennas and then taking the FFT of the received signal, how can the speed of the signal be calculated?
Relevant answer
In free space the speed of the signal, in the form of an electromagnetic wave, is the speed of light, which is the speed of propagation.
But I think you mean to calculate the bit rate or the symbol rate.
For this you use the Shannon limit of the channel capacity
C = w log2(1 + S/N), where w is the bandwidth and N is the noise of the receiver.
Here the signal power S is the received power = transmitted power / (4·pi·R^2),
where R is the distance between the two antennas, provided that the two antennas are in line of sight.
Best wishes
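With made-up link numbers, the capacity bound works out like this (note that the S above is strictly a power density; multiply by the receive antenna's effective aperture to get watts):

```python
import numpy as np

w = 1e6            # bandwidth (Hz), assumed
Pt = 1.0           # transmitted power (W), assumed
R = 1000.0         # distance between the antennas (m), assumed
N = 1e-12          # receiver noise power (W), assumed

# Free-space power density at distance R from an isotropic radiator
S = Pt / (4 * np.pi * R ** 2)      # W/m^2 (treated as "received power" above)

# Shannon limit on the achievable bit rate
C = w * np.log2(1 + S / N)
print(C)   # ~1.63e7 bits per second
```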
  • asked a question related to Signal Analysis
Question
7 answers
I have been doing research on extracting information like acceleration, distance, etc. from an analogue signal by performing analogue signal processing on any type of signal. I haven't found any techniques that provide a formula or any information that would lead to extracting such information purely by analogue signal processing. If anyone is familiar with analogue signal processing, could you please enlighten me as to whether this is even possible?
After spending hours on it, I am at a stage where I feel it may not be possible.
Relevant answer
Answer
I think so. The most practical method is to convert the measured non-electrical quantity (acceleration, distance, etc.) into an electrical quantity (voltage, current, resistance). After receiving the analog electrical signal corresponding to the measured value, you can work further on processing that signal. First of all, you will need to choose the appropriate type of sensor (accelerometer, piezo sensor, displacement sensor, etc.). The analog signal received from the sensor will most likely have to be amplified, and then it can be processed using an analog-to-digital converter. The easiest way, however, is to convert the sensor signal into a voltage proportional to the measured value and then monitor that voltage with a conventional multimeter. For a DC signal, a multimeter easily covers the range 1 mV - 1000 V. If the AC voltage is less than 200 mV, it must be amplified before it can be measured with a multimeter.
I wish you success
  • asked a question related to Signal Analysis
Question
7 answers
Is it possible to visualize such high frequencies in distribution networks with the conventional signal processing techniques?
Relevant answer
Answer
Dear Utkarsh,
another way to measure harmonics over 2 kHz is to use a spectrum analyzer. There are many products on the market.
Sincerely,
Bystrík
  • asked a question related to Signal Analysis
Question
4 answers
Hi,
I am new to EEG signal processing. I am now working on the DEAP dataset to classify EEG signals into different emotion categories.
My inputs are EEG samples of shape channels × timesteps. The provider of the dataset has already removed artifacts.
I use convolution layers to extract features and a fully connected layer to classify. I also use dropout.
I shuffle all trials (sessions) and split the dataset into training and testing sets. I get reasonable accuracy on the training set.
However, the model is unable to generalize across trials.
It overfits. Its performance on the test set is just as bad as a random guess (around 50% accuracy for low/high valence classification).
Is there any good practice for alleviating overfitting in that scenario?
Another thing that bothers me is that when I search the related literature, I find many papers also report around 50% accuracy.
Why are results from EEG emotion classification so bad?
I feel quite helpless now. Thank you for any suggestions and replies!
Relevant answer
Answer
Hi, Ge.
I have applied the DEAP dataset to EEG emotion classification before, and the situation you mention also occurred in my research. From my perspective, here are some suggestions:
Firstly, compared with the classification model you used, I think the input features are more significant. Maybe you could pay more attention to the feature extraction (such as PSD in the frequency domain, HOC in the time domain, or the discrete wavelet transform in the time-frequency domain), selection, and fusion steps, including channel selection for the different emotion categories.
Secondly, regarding the overfitting issue, I think it is somewhat unsuitable to use a DNN model on the DEAP dataset unless you can increase the amount of data. Maybe you could give data segmentation (cutting the epochs) a shot.
Finally, in terms of the accuracy problem, I suggest you double-check which emotion estimation method was used in each specific paper. There are two approaches to EEG emotion classification: one is based on the valence-arousal plane, borrowed from speech emotion recognition; the other targets specific emotions (such as angry, happy, sad, etc.). So be careful about the baseline you use for comparison.
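As a minimal sketch of the PSD-based band features mentioned above (band edges, segment lengths, and array shapes are illustrative; assumes SciPy is available):

```python
import numpy as np
from scipy.signal import welch

# Conventional EEG bands in Hz; adjust edges to your protocol
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs):
    """eeg: array of shape (channels, samples).
    Returns PSD band-power features of shape (channels, n_bands)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    df = freqs[1] - freqs[0]
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].sum(axis=-1) * df)   # approximate band integral
    return np.stack(feats, axis=-1)

fs = 128                               # DEAP's preprocessed sampling rate
eeg = np.random.randn(32, fs * 60)     # toy data: 32 channels, 60 s
X = band_powers(eeg, fs)               # shape (32, 3)
```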
Regards
  • asked a question related to Signal Analysis
Question
5 answers
Hi, I am confused about the concept of a signal quality index. What is a signal quality index? What is the relation between signal quality index and signal strength? Can we determine signal strength from a signal quality index?
Relevant answer
Answer
As Martín Martínez stated, I don't think there is a unique definition of a signal quality index. However, if you consider, for example, the field of (biomedical) signal processing, a signal quality index usually means a value between 0 and 1 (or 0 and 100%) that indicates how "good" (free from noise and other artifacts) the signal of interest is.
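One illustrative way to build such an index is to normalize a signal-to-noise power ratio into [0, 1]; the formula below is just one possible definition, not a standard:

```python
import numpy as np

def snr_quality_index(signal, noise_estimate):
    """A toy signal quality index in [0, 1] based on signal vs. noise power.
    This is one illustrative definition; SQI has no single standard formula."""
    s = np.mean(np.asarray(signal, float) ** 2)
    n = np.mean(np.asarray(noise_estimate, float) ** 2)
    return s / (s + n)   # → 1 when noise-free, → 0 when noise dominates

clean = np.sin(np.linspace(0, 10, 1000))
sqi_clean = snr_quality_index(clean, 0.01 * np.random.randn(1000))
```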
  • asked a question related to Signal Analysis
Question
13 answers
Brain signals Analysis for fMRI images.
Relevant answer
Answer
From the literature I am familiar with, I can say that the most numerous are studies of brain activity during the reading process. This is probably due to the great interest in its disorders and in the problems of dyslexia. I have practically never encountered such studies of brain signals during the performance of various mathematical tasks. At the same time, the claim is that the left hemisphere dominates this type of operation.
In your study, you should consider the different involvement of brain regions in solving arithmetic (non-verbal) and verbally-presented (text-based) tasks. I'm sure you'll find a difference in the organization of brain signals in these two types of tasks.
I wish you success and look forward to seeing the results of your experiment!
Neli
  • asked a question related to Signal Analysis
Question
2 answers
I analyze four ethanolamine compounds (monoethanolamine, diethanolamine, methyl diethanolamine (MDEA), and piperazine) via triple quadrupole LC-MS/MS. The column is a Poroshell 120 EC-C18, and acetonitrile and 5 mM ammonium acetate are the organic and aqueous phases, respectively. The method is used to analyze water samples via direct injection. Recently, I have been getting a high background signal for MDEA and piperazine in the blank sample. Any suggestion to solve this issue? Thanks in advance.
Relevant answer
Answer
Thanks for your reply. I am looking for a robust LC-MS/MS method to specifically analyze alkanolamines that does not show false-positive signals for blank samples.
  • asked a question related to Signal Analysis
Question
4 answers
Someone who is in a coma is unconscious and will not respond to voices, other sounds, or any sort of activity going on nearby. However, in this case I'm wondering whether any brain activity nevertheless allows some senses to work.
Relevant answer
When a person is in a coma, you should not say anything you would not say when the person is awake.
  • asked a question related to Signal Analysis
Question
1 answer
I am developing an application to estimate the distance to a BLE beacon using its RSSI values measured from a mobile phone. But when I started to collect data, I saw that the values varied so much that I cannot possibly estimate the distance from them. Even though I implemented mean and median filters with various window sizes, the RSSI values still vary greatly. Is this a common problem with RSSI values? Is there a way to eliminate or filter out these variations so that I can feed them into a distance model? Help is greatly appreciated.
Relevant answer
Dear Ravindu,
It seems that your channel from the transmitter to the receiver varies relatively rapidly with time, possibly due to multipath fading. The channel could be a Rayleigh fading channel.
One solution is to fit the RSSI with a Rayleigh channel response, from which you can get the average received power, which can be considered a function of the link length. The other, more practical solution is to calibrate the RSSI against the link distance.
If you give more description of the communication channel, one can propose a suitable method. One of the best approaches is to ensure a free line-of-sight path between the beacon and the receiver.
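As an illustrative sketch of the calibration approach: median-filter a window of readings to suppress outliers, then apply the log-distance path-loss model. The 1 m reference power and path-loss exponent below are hypothetical placeholders that must be calibrated per device and environment:

```python
import statistics

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Log-distance path-loss model: d = 10^((TxPower - RSSI) / (10 n)).
    tx_power_dbm is the calibrated RSSI at 1 m (illustrative value);
    path_loss_exp is ~2 in free space and higher indoors."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# Median over a window of noisy readings is robust to outliers like -90
readings = [-68, -75, -66, -70, -90, -69, -71]
rssi = statistics.median(readings)   # -70 dBm
d = rssi_to_distance(rssi)           # estimated distance in metres, ≈ 3.5 m
```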
Best wishes
  • asked a question related to Signal Analysis
Question
3 answers
Hi everybody, I'm currently doing my master's thesis in Biomedical Engineering on pulse oximetry.
I have to consider a real-time system for my device. To compute SpO2 from the PPG signal, I filtered my signal with a low-pass filter (to obtain the DC component) and a band-pass filter (to obtain the AC component).
I used an FIR filter because of the real-time constraint and its linear phase.
However, all the articles about real-time systems which I have read use IIR Butterworth filters because they have lower order and give better results.
So, my question is: what is the better way to proceed? Should I use a higher-order FIR filter (the only way to obtain good AC and DC signal quality)?
Thanks for your answer.
Relevant answer
Answer
As per my understanding, since you have to select a band-limited signal and reject DC and high-frequency noise, you do not require a perfectly linear-phase filter.
An FIR filter requires bulkier hardware, so your pulse oximeter will be more costly and bulky.
If cost and compactness are not an issue, then go for FIR.
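A small sketch contrasting the two designs (the sampling rate, band edges, and tap count are illustrative; assumes SciPy is available):

```python
import numpy as np
from scipy import signal

fs = 100  # Hz; an illustrative PPG sampling rate

# IIR: a 4th-order Butterworth band-pass isolates the AC (pulsatile) component
sos = signal.butter(4, [0.5, 8], btype="bandpass", fs=fs, output="sos")

# An FIR filter with comparable transition bands needs far more taps
taps = signal.firwin(101, [0.5, 8], pass_zero=False, fs=fs)

# Toy PPG: a DC level plus a 1.2 Hz (72 bpm) pulsatile component
t = np.arange(0, 10, 1 / fs)
ppg = 1.0 + 0.1 * np.sin(2 * np.pi * 1.2 * t)
ac = signal.sosfilt(sos, ppg)   # causal, real-time-friendly; DC is removed
```

Note that zero-phase filtering (`signal.sosfiltfilt`) recovers linear phase offline, but it is non-causal, so a truly real-time device must use causal filtering and tolerate the IIR phase distortion.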
  • asked a question related to Signal Analysis
Question
4 answers
How can an adaptive dictionary reconstruction for compressed sensing of ECG signals be implemented, and how can the overall power consumption of the proposed ECG compression framework be analyzed as it would be used in a WBAN?
Relevant answer
Answer
This falls under machine learning; there are relevant lectures available on YouTube.
  • asked a question related to Signal Analysis
Question
5 answers
I have a number of questions regarding EEG signal analysis. Even after looking at a number of websites, I am confused about the following. Please explain clearly.
1) Suppose I have EEG data with a 100 Hz sampling frequency, recorded for 30 seconds. This means we have 30 × 100 = 3000 samples. To plot the data against time, how do I compute the time axis? Do I simply divide the sample index by 3000 (so sample 1 is at 1/3000 s, sample 2 at 2/3000 s, etc.)? Or do I divide the sample index by 100 (sample 1 at 1/100 s, sample 2 at 2/100 s, etc.)?
2) While calculating alpha, beta, and delta band powers, do we need to average the final power spectrum value across all EEG channels?
3) After cleaning the noise using the FFT and obtaining the final spectrum, how do I separate the individual band powers from the final data? Is there any function for this?
Relevant answer
Answer
I am not in this field, but at least I can answer the first question for the time being :)
The time axis:
t = (sample index / total number of samples) × total recording time
or, equivalently, t = sample index / sampling rate.
So, for example, sample #300 occurs at t = 300/100 = 3 s.
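In code (NumPy assumed), the time axis is simply the sample indices divided by the sampling rate:

```python
import numpy as np

fs = 100          # sampling rate in Hz
n_samples = 3000  # 30 s of data

t = np.arange(n_samples) / fs   # time axis in seconds: 0, 0.01, ..., 29.99
# sample index 300 falls at t[300] == 3.0 s
```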
I hope you get a complete answer to your questions as soon as possible.