Science topic
Signal Analysis - Science topic
Explore the latest questions and answers in Signal Analysis, and find Signal Analysis experts.
Questions related to Signal Analysis
I'm looking for a finger PPG measurement device for my new scientific project that allows me to export raw signals (pulse waves). Do you know of any product like this?
I know of some wristbands with this export option, but I can't find any finger pulse oximeter.
Hello,
How should EMG data recorded at a sampling frequency of 250 Hz be filtered?
The data were recorded from the forehead muscles in a static, neutral facial position.
The recording time is 5 minutes, and three EMG electrodes were used.
Participants were male soccer players aged 19 to 25.
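(A minimal filtering sketch in Python/scipy, assuming the 250 Hz sampling rate above; note that with fs = 250 Hz the Nyquist limit is 125 Hz, so the commonly recommended 20-450 Hz surface-EMG band has to be truncated, here to 20-120 Hz. The array name emg_raw is a placeholder.)

import numpy as np
from scipy import signal

fs = 250.0                                      # sampling rate from the question
# 4th-order Butterworth band-pass, 20-120 Hz (Nyquist is only 125 Hz here)
b, a = signal.butter(4, [20.0, 120.0], btype="bandpass", fs=fs)
emg_filtered = signal.filtfilt(b, a, emg_raw)   # zero-phase filtering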
Hello everyone,
While reading articles related to EMG analysis, two concepts caught my attention. One of them is the Hilbert transform, and the other is the envelope. First, I haven't come across this transform often in many software tools (although I might have missed it). Instead, it is stated that rectification and RMS methods can also be used. I have come to the conclusion that the Hilbert transform might be more suitable if you need instantaneous amplitude and phase information or if you want to examine very subtle amplitude changes and instantaneous phase differences in the signal.
I have usually used rectification and RMS methods. So, in this case:
- What are the advantages and limitations of using the Hilbert transform compared to rectification and RMS methods in analyzing EMG signals in sports science?
- In what contexts is the Hilbert transform preferred over rectification and RMS methods in EMG analysis for sports science research?
I would appreciate your insights on this topic. Thank you in advance.
BERMAN
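(For comparison, a minimal Python sketch contrasting the two envelope routes discussed above; emg is a placeholder 1-D NumPy array and the window length is a tuning choice.)

import numpy as np
from scipy import signal

# classical route: full-wave rectification + moving RMS smoothing
rect = np.abs(emg)
win = 125                                        # smoothing window in samples (tune to your fs)
rms_env = np.sqrt(np.convolve(emg**2, np.ones(win) / win, mode="same"))

# Hilbert route: the analytic signal gives instantaneous amplitude and phase
analytic = signal.hilbert(emg)
hilb_env = np.abs(analytic)                      # instantaneous amplitude (envelope)
inst_phase = np.unwrap(np.angle(analytic))       # instantaneous phase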
Hi all,
I'm trying to do cilia beat frequency analysis on high-speed microscopy videos of beating cilia on cultures of airway epithelium.
I have not been able to get any of the cilia beat frequency analysis programs to work, so I'm writing my own script for the analysis in R.
So far I've managed to extract a series of pixel intensities over time for every pixel in the video. However, when I run `fft` on the intensity series, I get no peaks.
I believe my sampling frequency is good: 480 fps video, with an expected dominant frequency of ~15 Hz. I believe my trouble comes from low bit depth, as the video has very low contrast, with each pixel series typically spanning only 7-8 distinct intensity values.
Would something like interpolation be a solution here?
Please excuse my lack of technical knowledge here; this is well beyond what I usually deal with! I'd also be happy to share my code, of course.
Sam
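(Not the author's code, but a sketch of two fixes that often help here: subtract each pixel's mean before the FFT, since otherwise the 0 Hz bin dwarfs any beat-frequency peak, and average the power spectra over many pixels to beat the quantization noise from the low bit depth. Assumes a NumPy array pixels of shape (n_pixels, n_frames) at 480 fps; written in Python rather than R for illustration.)

import numpy as np

fs = 480.0                                           # frames per second
x = pixels - pixels.mean(axis=1, keepdims=True)      # remove per-pixel DC offset
spec = np.abs(np.fft.rfft(x, axis=1)) ** 2           # per-pixel power spectra
avg_spec = spec.mean(axis=0)                         # average across pixels
freqs = np.fft.rfftfreq(pixels.shape[1], d=1 / fs)
print(freqs[np.argmax(avg_spec[1:]) + 1])            # dominant frequency, skipping DC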
Noise removal in ECG signals using an improved adaptive learning approach, classification of ECG signals using CNNs for cardiac arrhythmia detection, EEG signal analysis for stroke detection, and EMG signal analysis for gesture classification are essential to proper diagnosis. The application of CNNs in pertussis diagnosis by temperature monitoring, physician handwriting recognition using deep learning models, melanoma detection using ABCD parameters, and a transfer-learning-enabled heuristic approach for pneumonia detection have become some of many AI-embedded image processing systems.
Source: 1st Edition,
Artificial Intelligence in Telemedicine: Processing of Biosignals and Medical Images,
edited by S. N. Kumar, Sherin Zafar, Eduard Babulak, M. Afshar Alam, Farheen Siddiqui. Copyright 2023.
Lock-in amplifiers have two types of output: one in XY mode and another in R-theta mode. How do we understand the physical meaning of the output in both cases?
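(For reference, the two modes are the same measurement in different coordinates: X = R*cos(theta) is the component of the signal in phase with the reference oscillator and Y = R*sin(theta) the quadrature component, while R = sqrt(X^2 + Y^2) is the amplitude of the signal at the reference frequency and theta = atan2(Y, X) its phase relative to the reference.)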
String theory's applications are remarkably wide. Unfortunately, it has been beaten in the nuclear domain, where lattice QCD is more acclaimed, as well as in cosmology, where GR reigns supreme.
The lattice theorists call it "dead" to cement their superiority, and some of its founding developers abandoned it (Smolin), but it is still breathing; e.g., see Vasilakos's recent article on cosmic string signatures in the analysis of the NANOGrav gravitational-wave signal.
So what is the status of the theory in these domains, setting partisanship aside?
I faced a very simple yet problematic phenomenon when trying to find the Bode plot of an unknown system with an oscilloscope.
As we know, we can simply inject a signal into a system with a signal generator, sweep the frequency, measure the input and output of the system, and then, by comparing the gain and phase shift, plot the Bode diagram.
Here is the problem: when you have an unknown system with no prior knowledge, how can you tell whether the phase shift is positive or negative? As can be seen in the picture, the phase shift can be considered both +20 and -160 degrees.
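(One common way to resolve this ambiguity is to start the sweep at a frequency low enough that the phase must be near zero, estimate the phase at each step from the cross-spectrum, and unwrap along the sweep so it stays continuous. A Python sketch under those assumptions; sweep is a placeholder list of (input, output) records, one per frequency step.)

import numpy as np

def phase_deg(u, y):
    # phase of y relative to u, read off the cross-spectrum at the drive frequency
    U, Y = np.fft.rfft(u), np.fft.rfft(y)
    k = np.argmax(np.abs(U))                 # FFT bin of the injected sine
    return np.angle(Y[k] * np.conj(U[k]), deg=True)

phases = [phase_deg(u, y) for u, y in sweep]
phases = np.degrees(np.unwrap(np.radians(phases)))   # continuous phase vs frequency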
What is the difference between DTFS and DFT?
DTFS-Discrete Time Fourier Series
DTFT-Discrete Time Fourier Transform
DFT-Discrete Fourier Transform
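(In the usual convention the DTFS and the DFT differ only by a 1/N factor and by interpretation: for an N-point sequence, the DFT is X[k] = sum_{n=0..N-1} x[n] * exp(-j*2*pi*k*n/N), while the DTFS coefficients of a periodic sequence with period N are a[k] = (1/N) * (the same sum over one period), so a[k] = X[k]/N. The DTFT, by contrast, is a function of a continuous frequency variable: X(e^jw) = sum over all n of x[n] * exp(-j*w*n).)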
I am trying to work on EEG signals from coronavirus patients, so I need clinical datasets of that kind. I would be grateful for your help.
Hello everyone!
Throughout my studies, I have used many signal analysis methods for medical data (mostly RR interval series), focusing on nonlinear ones such as:
- Shannon entropy,
- sample entropy (https://journals.physiology.org/doi/full/10.1152/ajpheart.2000.278.6.H2039?view=long&pmid=10843903),
- approximate entropy,
- detrended fluctuation analysis (Peng et al. 1994),
- multiscale multifractal analysis,
- symbolic analysis.
Currently I'm working on RR interval series obtained while participants listen to (or play) short excerpts of music pieces. I'm wondering which nonlinear method would be the most appropriate for short-term data, from 30 seconds to 5 minutes (about 30-500 samples per signal). My preliminary results show significant differences between the baseline and the music-piece period for Shannon entropy (this parameter works much better than most linear indices). In turn, I cannot see any interesting results using sample entropy, and I think these signals are too short for this method. Similarly, DFA cannot be used for such a short period.
My question is, what other nonlinear methods can I use for short-term analysis and maintaining a good quality level of the results?
I will be grateful for any suggestions.
Best
Mateusz Solinski
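(As an illustration, a sketch of the Shannon entropy mentioned above plus one more short-series-friendly candidate, permutation entropy, which tends to behave reasonably on a few hundred samples; rr is a placeholder 1-D NumPy array of RR intervals, and the bin count and pattern order are tuning choices.)

import numpy as np
from math import factorial

def shannon_entropy(rr, bins=16):
    counts, _ = np.histogram(rr, bins=bins)
    p = counts[counts > 0] / len(rr)
    return -np.sum(p * np.log2(p))

def permutation_entropy(rr, order=3, delay=1):
    # relative frequency of ordinal patterns of length `order`
    n = len(rr) - (order - 1) * delay
    patterns = [tuple(np.argsort(rr[i:i + order * delay:delay])) for i in range(n)]
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p)) / np.log2(factorial(order))   # normalized to [0, 1]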
I have research about lie detection using voice stress analysis, and I need a book on voice stress analysis.
I am working on designing wavelet frames to detect specific patterns in 1-D signals. I am wondering if you could recommend me some good texts on wavelet frame construction. Knowing whether some code is available in Python or MATLAB would also be helpful. Thanks a lot!
I am working on a research topic that employs estimation techniques.
I am trying to apply an algorithm in my work to estimate system poles. I wrote an m-file and tried to apply this technique to a simple transfer function to estimate its roots. Any suggestions about estimation techniques?
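(If it helps, a minimal least-squares, Prony-style pole estimator in Python: fit linear-prediction coefficients to the impulse response and take the roots of the resulting polynomial. A toy sketch on a known transfer function, not a robust method for noisy data.)

import numpy as np
from scipy import signal

# toy discrete-time system: the denominator roots are the poles to recover
b, a = [1.0], [1.0, -1.5, 0.7]
_, y = signal.dimpulse((b, a, 1.0), n=100)
h = np.squeeze(y)                              # impulse response

p = 2                                          # assumed model order
# linear prediction: h[n] = -(a1*h[n-1] + ... + ap*h[n-p]) for n >= p
A = np.column_stack([h[p - 1 - k:len(h) - 1 - k] for k in range(p)])
coeffs, *_ = np.linalg.lstsq(A, -h[p:], rcond=None)
poles = np.roots(np.r_[1.0, coeffs])
print(poles)                                   # should match the roots of [1, -1.5, 0.7]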
I have seen many ways to find similarity measures for multiple time series in the literature. But in my case I have one time series X of dimension [n, 1], and I want a similarity measure between the data points of X themselves. I tried autocorrelation, but I want only one or two numeric values that represent the similarity measure.
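(If a single number is wanted, the lag-1 autocorrelation coefficient is a common scalar summary of how similar neighbouring points of one series are; a small sketch, with x a placeholder 1-D array.)

import numpy as np

def lag_autocorr(x, lag=1):
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

r1 = lag_autocorr(x, 1)    # near 1: smooth/persistent series, near 0: uncorrelated points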
I want to denoise motion artifacts from my data. I must use 3 wavelet decomposition levels. For denoising, what could be the threshold value for each level? Do you have any opinion? Thanks.
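(A common default is the universal threshold sigma*sqrt(2*ln(N)), with the noise level sigma estimated per level from the median absolute deviation of that level's detail coefficients. A PyWavelets sketch; 'db4' and soft thresholding are placeholders to tune.)

import numpy as np
import pywt

def denoise(x, wavelet="db4", level=3):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    out = [coeffs[0]]                                 # keep the approximation untouched
    for d in coeffs[1:]:                              # one threshold per detail level
        sigma = np.median(np.abs(d)) / 0.6745         # robust noise estimate (MAD)
        thr = sigma * np.sqrt(2 * np.log(len(x)))     # universal threshold
        out.append(pywt.threshold(d, thr, mode="soft"))
    return pywt.waverec(out, wavelet)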
I am looking for an adaptive short-time Fourier transform implemented as MATLAB code. This can be useful for non-stationary signal analysis.
Hey all. I have two signals, as shown: the red one is how my output should look (the ideal case) and the blue one is what I am actually getting. I am quite new to signal analysis, so I am asking what good metrics I can use to find the similarity between my generated signal and the ideal signal. I have looked at cross-correlation and SNR using MATLAB, but I wanted to know if there are more methods out there that can provide a clearer picture, especially with regard to shape similarity.
Thank you all again.
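(Beyond cross-correlation and SNR, a few shape-oriented metrics are easy to add: RMSE for amplitude error, Pearson correlation for shape at fixed alignment, and the maximum normalized cross-correlation for shape tolerant of a time shift; dynamic time warping is a further option if the signals may be locally stretched. A sketch for two equal-length 1-D arrays, ideal and actual, both placeholders.)

import numpy as np

rmse = np.sqrt(np.mean((ideal - actual) ** 2))       # amplitude error
pearson = np.corrcoef(ideal, actual)[0, 1]           # shape similarity, alignment-sensitive

# max normalized cross-correlation: shape similarity tolerant to a time shift
a = (ideal - ideal.mean()) / np.linalg.norm(ideal - ideal.mean())
b = (actual - actual.mean()) / np.linalg.norm(actual - actual.mean())
ncc_max = np.max(np.correlate(a, b, mode="full"))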
Actually, I wish to understand the process and coding needed to define a new wavelet transform, so that I can understand and modify existing wavelet transforms to get better results. There are built-in wavelet transforms in MATLAB where we just have to choose wavelets; I wish to define a new wavelet transform.
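(In case a Python illustration helps alongside MATLAB: PyWavelets lets you register a new discrete wavelet by supplying your own four-filter bank, which is essentially what defining a new wavelet transform amounts to in the filter-bank setting. Here the Haar filters are written out explicitly as a stand-in for your own design.)

import numpy as np
import pywt

c = 1 / np.sqrt(2)
# [dec_lo, dec_hi, rec_lo, rec_hi] -- replace with your own filter design
filter_bank = [[c, c], [-c, c], [c, c], [c, -c]]
my_wav = pywt.Wavelet("my_haar", filter_bank=filter_bank)

coeffs = pywt.wavedec(np.arange(8.0), my_wav, level=2)   # usable like any built-in wavelet
print(coeffs)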
Is there any mobile app publicly available on the app stores (both iOS and Android) which can be used to gather, collect, and analyze signal strength measurements from the available WLAN access points? My aim is to utilize these RSS readings for WLAN-based indoor localization systems.
Could anyone please suggest some resources where I could find a comparison curve of signal strength after the multipath propagation effect with respect to obstacle positions between transmitter and receiver?
After conducting some experiments, I found that the effect was greater when the obstacle was near the Rx or near the Tx, but smaller when the obstacle was at the same distance from the Rx and the Tx. Why does this phenomenon happen?
By referring to some scientific resources, we've found that the brain produces signals that relate to its activities and can be monitored by EEG. The question is: "Can we force the brain to perform some actions by injecting signals (in a direct or indirect way)?"
Dear Colleagues,
Please suggest any open source software for ECG signal analysis.
Thanks in advance
N Das
I want to find the resonance and anti-resonance frequencies of an ultrasonic transducer by analyzing its impedance.
So I need to buy an impedance analyzer, a spectrum analyzer, or something like that,
but my budget is limited.
Do you recommend any device for my application and limited budget? :D
Hi everyone,
This is just an out-of-curiosity question, but why is the cerebellum used as the reference point? What is the reasoning? I was always told it is because it is 'silent' compared to the cortical regions, but obviously the cerebellum is also active. Is there any paper that explains this choice, or is a better reference region or method available?
Thank you!
I want to identify a signal peptide for a gene. Tools like SignalP are not showing anything, and other tools are not showing the sequence. Could someone suggest some tools?
Hello everyone,
for my thesis I want to extract some voice features from audio data recorded during psychotherapy sessions. For this I am using the openSMILE toolkit. For the fundamental frequency and jitter I already get good results, but the extraction of the center frequencies and bandwidths of formants 1-3 is puzzling me. For some reason there appears to be just one formant (the first one), with a frequency range up to 6 kHz. Formants 2 and 3 get values of 0. I expected the formants to be within a range of 500 to 2000 Hz.
I tried to fix the problem myself but could not find the issue here. Does anybody have experience with openSMILE, especially formant extraction, and could help me out?
For testing purposes I am using various audio files recorded by myself or extracted from YouTube. My config file looks like this:
///////////////////////////////////////////////////////////////////////////
// openSMILE configuration template file generated by SMILExtract binary //
///////////////////////////////////////////////////////////////////////////
[componentInstances:cComponentManager]
instance[dataMemory].type = cDataMemory
instance[waveSource].type = cWaveSource
instance[framer].type = cFramer
instance[vectorPreemphasis].type = cVectorPreemphasis
instance[windower].type = cWindower
instance[transformFFT].type = cTransformFFT
instance[fFTmagphase].type = cFFTmagphase
instance[melspec].type = cMelspec
instance[mfcc].type = cMfcc
instance[acf].type = cAcf
instance[cepstrum].type = cAcf
instance[pitchAcf].type = cPitchACF
instance[lpc].type = cLpc
instance[formantLpc].type = cFormantLpc
instance[formantSmoother].type = cFormantSmoother
instance[pitchJitter].type = cPitchJitter
instance[lld].type = cContourSmoother
instance[deltaRegression1].type = cDeltaRegression
instance[deltaRegression2].type = cDeltaRegression
instance[functionals].type = cFunctionals
instance[arffSink].type = cArffSink
printLevelStats = 1
nThreads = 1
[waveSource:cWaveSource]
writer.dmLevel = wave
basePeriod = -1
filename = \cm[inputfile(I):name of input file]
monoMixdown = 1
[framer:cFramer]
reader.dmLevel = wave
writer.dmLevel = frames
copyInputName = 1
frameMode = fixed
frameSize = 0.0250
frameStep = 0.010
frameCenterSpecial = center
noPostEOIprocessing = 1
buffersize = 1000
[vectorPreemphasis:cVectorPreemphasis]
reader.dmLevel = frames
writer.dmLevel = framespe
k = 0.97
de = 0
[windower:cWindower]
reader.dmLevel=framespe
writer.dmLevel=winframe
copyInputName = 1
processArrayFields = 1
winFunc = ham
gain = 1.0
offset = 0
[transformFFT:cTransformFFT]
reader.dmLevel = winframe
writer.dmLevel = fftc
copyInputName = 1
processArrayFields = 1
inverse = 0
zeroPadSymmetric = 0
[fFTmagphase:cFFTmagphase]
reader.dmLevel = fftc
writer.dmLevel = fftmag
copyInputName = 1
processArrayFields = 1
inverse = 0
magnitude = 1
phase = 0
[melspec:cMelspec]
reader.dmLevel = fftmag
writer.dmLevel = mspec
nameAppend = melspec
copyInputName = 1
processArrayFields = 1
htkcompatible = 1
usePower = 0
nBands = 26
lofreq = 0
hifreq = 8000
inverse = 0
specScale = mel
[mfcc:cMfcc]
reader.dmLevel=mspec
writer.dmLevel=mfcc1
copyInputName = 0
processArrayFields = 1
firstMfcc = 0
lastMfcc = 12
cepLifter = 22.0
htkcompatible = 1
[acf:cAcf]
reader.dmLevel=fftmag
writer.dmLevel=acf
nameAppend = acf
copyInputName = 1
processArrayFields = 1
usePower = 1
cepstrum = 0
acfCepsNormOutput = 0
[cepstrum:cAcf]
reader.dmLevel=fftmag
writer.dmLevel=cepstrum
nameAppend = acf
copyInputName = 1
processArrayFields = 1
usePower = 1
cepstrum = 1
acfCepsNormOutput = 0
oldCompatCepstrum = 1
absCepstrum = 1
[pitchAcf:cPitchACF]
reader.dmLevel=acf;cepstrum
writer.dmLevel=pitchACF
copyInputName = 1
processArrayFields = 0
maxPitch = 500
voiceProb = 0
voiceQual = 0
HNRdB = 0
F0 = 1
F0raw = 0
F0env = 1
voicingCutoff = 0.550000
[lpc:cLpc]
reader.dmLevel = fftc
writer.dmLevel = lpc1
method = acf
p = 8
saveLPCoeff = 1
lpGain = 0
saveRefCoeff = 0
residual = 0
forwardFilter = 0
lpSpectrum = 0
[formantLpc:cFormantLpc]
reader.dmLevel = lpc1
writer.dmLevel = formants
copyInputName = 1
nFormants = 3
saveFormants = 1
saveIntensity = 0
saveNumberOfValidFormants = 1
saveBandwidths = 1
minF = 400
maxF = 6000
[formantSmoother:cFormantSmoother]
reader.dmLevel = formants;pitchACF
writer.dmLevel = forsmoo
copyInputName = 1
medianFilter0 = 0
postSmoothing = 0
postSmoothingMethod = simple
F0field = F0
formantBandwidthField = formantBand
formantFreqField = formantFreq
formantFrameIntensField = formantFrameIntens
intensity = 0
nFormants = 3
formants = 1
bandwidths = 1
saveEnvs = 0
no0f0 = 0
[pitchJitter:cPitchJitter]
reader.dmLevel = wave
writer.dmLevel = jitter
writer.levelconf.nT = 1000
copyInputName = 1
F0reader.dmLevel = pitchACF
F0field = F0
searchRangeRel = 0.250000
jitterLocal = 1
jitterDDP = 1
jitterLocalEnv = 0
jitterDDPEnv = 0
shimmerLocal = 0
shimmerLocalEnv = 0
onlyVoiced = 0
inputMaxDelaySec = 2.0
[lld:cContourSmoother]
reader.dmLevel=mfcc1;pitchACF;forsmoo;jitter
writer.dmLevel=lld1
writer.levelconf.nT=10
writer.levelconf.isRb=0
writer.levelconf.growDyn=1
nameAppend = sma
copyInputName = 1
noPostEOIprocessing = 0
smaWin = 3
[deltaRegression1:cDeltaRegression]
reader.dmLevel=lld1
writer.dmLevel=lld_de
writer.levelconf.isRb=0
writer.levelconf.growDyn=1
nameAppend = de
copyInputName = 1
noPostEOIprocessing = 0
deltawin=2
blocksize=1
[deltaRegression2:cDeltaRegression]
reader.dmLevel=lld_de
writer.dmLevel=lld_dede
writer.levelconf.isRb=0
writer.levelconf.growDyn=1
nameAppend = de
copyInputName = 1
noPostEOIprocessing = 0
deltawin=2
blocksize=1
[functionals:cFunctionals]
reader.dmLevel = lld1;lld_de;lld_dede
writer.dmLevel = statist
copyInputName = 1
frameMode = full
// frameListFile =
// frameList =
frameSize = 0
frameStep = 0
frameCenterSpecial = left
noPostEOIprocessing = 0
functionalsEnabled=Extremes;Moments;Means
Extremes.max = 1
Extremes.min = 1
Extremes.range = 1
Extremes.maxpos = 0
Extremes.minpos = 0
Extremes.amean = 0
Extremes.maxameandist = 0
Extremes.minameandist = 0
Extremes.norm = frame
Moments.doRatioLimit = 0
Moments.variance = 1
Moments.stddev = 1
Moments.skewness = 0
Moments.kurtosis = 0
Moments.amean = 0
Means.amean = 1
Means.absmean = 1
Means.qmean = 0
Means.nzamean = 1
Means.nzabsmean = 1
Means.nzqmean = 0
Means.nzgmean = 0
Means.nnz = 0
[arffSink:cArffSink]
reader.dmLevel = statist
filename = \cm[outputfile(O):name of output file]
append = 0
relation = smile
instanceName = \cm[inputfile]
number = 0
timestamp = 0
frameIndex = 1
frameTime = 1
frameTimeAdd = 0
frameLength = 0
// class[] =
printDefaultClassDummyAttribute = 0
// target[] =
// ################### END OF openSMILE CONFIG FILE ######################
I have come up with a mixture-of-Gaussians-based classification system for image recognition which can theoretically be adapted for signal analysis, but I would like to improve the system by enhancing some of its features, such as the optimizers, classifiers, and the like. The best option was to make a single package out of it, which might solve other problems in AI too, and to make it available to others under a GNU license.
If possible, in the form of links or the full names of the research papers. Thank you in advance!
Dear community, my model is based on feature extraction from non-stationary signals using the discrete wavelet transform, followed by statistical features and machine learning classifiers to train the model. I achieved a maximum accuracy of 77% for the 5 classes to be classified. How can I increase it? The size of my data frame is X = (335, 48), y = (335, 1).
Thank you
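(With only 335 samples and 48 features, hyperparameter search under proper cross-validation is usually the first lever to try; a hedged scikit-learn sketch, where the random forest and the grid values are placeholders.)

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
grid = {"n_estimators": [100, 300], "max_depth": [None, 10, 20]}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=cv, scoring="accuracy")
search.fit(X, np.ravel(y))                 # X: (335, 48), y: (335, 1) as in the question
print(search.best_score_, search.best_params_)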
Hello! We have a project where participants engaged in reading, thinking about, and then responding to an ethical dilemma. We used the EMOTIV 14-channel headset to track brain activity. The reading, thinking, and responding times varied across participants (they were given all the time they needed). Do you have any advice or literature about either standardizing these varying times across participants or time-varying analysis?
Many thanks,
Deyang Yu
I have a 1D signal and have performed wavelet packet decomposition on it, which gives several sub-bands. Can I stack these sub-bands (one below the other) to form a 2D matrix, and hence an image representation of that 1D signal?
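(Yes, that stacking is a common scalogram-like representation. A PyWavelets sketch: with periodization mode, all terminal nodes at one level have equal length, so they stack directly into a 2-D matrix; ordering the nodes by frequency keeps the image interpretable. The signal here is a random stand-in.)

import numpy as np
import pywt

x = np.random.randn(1024)
wp = pywt.WaveletPacket(data=x, wavelet="db4", mode="periodization", maxlevel=4)
bands = [node.data for node in wp.get_level(4, order="freq")]   # 16 sub-bands of 64 samples
img = np.vstack(bands)                                          # rows = sub-bands -> 2-D "image"
print(img.shape)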
Hi all, I hope everyone is doing good.
I am working on machine learning with EEG data, for which I have to extract statistical features. Using the mne library I have extracted the data in matrix form, but my work requires some statistical features to be extracted.
All the features to be extracted are given in Table 2 of this paper: "Context-Aware Human Activity Recognition (CAHAR) in-the-Wild Using Smartphone Accelerometer". The dataset I am using is dataset 2b from http://www.bbci.de/competition/iv/.
I can't find a signal processing library. Can you suggest a signal processing library for processing EEG signal data in Python?
Thanks to all who help.
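(Most statistics in such feature tables need nothing beyond numpy/scipy; a sketch for an epochs array of shape (n_epochs, n_channels, n_times), e.g. from mne's Epochs.get_data(), computing a few typical per-channel features. The exact feature list should follow Table 2 of the cited paper.)

import numpy as np
from scipy import stats

feats = np.stack([
    epochs.mean(axis=-1),              # mean
    epochs.std(axis=-1),               # standard deviation
    stats.skew(epochs, axis=-1),       # skewness
    stats.kurtosis(epochs, axis=-1),   # kurtosis
    np.ptp(epochs, axis=-1),           # peak-to-peak range
], axis=-1)                            # -> (n_epochs, n_channels, 5)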
Dear community, I need your help. I'm training my model to classify sleep stages. After extracting features from my signal, I collected the features (X) in a DataFrame with shape (335, 48), and y (labels) with shape (335,).
this is my code :
from keras import activations, losses, models, optimizers
from keras.layers import (Input, Dense, Dropout, Convolution1D, MaxPool1D,
                          SpatialDropout1D, GlobalMaxPool1D)

def get_base_model(n_classes=5):
    # each sample is ONE row of the (335, 48) table, reshaped to (48, 1);
    # shape=(335, 48) would treat the whole dataset as a single sample,
    # which is exactly what the error message complains about
    inp = Input(shape=(48, 1))
    img_1 = Convolution1D(16, kernel_size=5, activation=activations.relu, padding="valid")(inp)
    img_1 = Convolution1D(16, kernel_size=5, activation=activations.relu, padding="valid")(img_1)
    img_1 = MaxPool1D(pool_size=2)(img_1)
    img_1 = SpatialDropout1D(rate=0.01)(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = MaxPool1D(pool_size=2)(img_1)
    img_1 = SpatialDropout1D(rate=0.01)(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = MaxPool1D(pool_size=2)(img_1)
    img_1 = SpatialDropout1D(rate=0.01)(img_1)
    # "same" padding here: after three poolings only 2 time steps remain,
    # too short for another "valid" convolution with kernel_size=3
    img_1 = Convolution1D(256, kernel_size=3, activation=activations.relu, padding="same")(img_1)
    img_1 = Convolution1D(256, kernel_size=3, activation=activations.relu, padding="same")(img_1)
    img_1 = GlobalMaxPool1D()(img_1)
    img_1 = Dropout(rate=0.01)(img_1)
    dense_1 = Dropout(0.01)(Dense(64, activation=activations.relu, name="dense_1")(img_1))
    # sparse_categorical_crossentropy needs a softmax output over the classes
    out = Dense(n_classes, activation="softmax", name="output")(dense_1)
    base_model = models.Model(inputs=inp, outputs=out)
    opt = optimizers.Adam(0.001)
    base_model.compile(optimizer=opt, loss=losses.sparse_categorical_crossentropy, metrics=['acc'])
    base_model.summary()        # was model.summary(): 'model' is undefined inside this function
    return base_model

model = get_base_model()        # n_classes=5: assumed number of sleep-stage classes
X = X.values.reshape(-1, 48, 1) # DataFrame (335, 48) -> array (335, 48, 1)
model.fit(X, y)                 # train first ...
test_loss, test_acc = model.evaluate(Xtest, ytest, verbose=0)  # ... then evaluate on a held-out split
print('\nTest accuracy:', test_acc)
I got the error: Input 0 is incompatible with layer model_16: expected shape=(None, 335, 48), found shape=(None, 48).
The attached picture gives an idea of my data shape.
Hello everyone,
I am trying to generate a faulty acceleration signal with SIMULINK. The inner ring of the bearing is fixed to the shaft. The bearing has 17 rolling elements. I was thinking of creating a fault in the inner ring because it is attached to the shaft. My approach was to add 17 impulses per cycle to the original measured acceleration data in order to generate a faulty signal.
I attached a picture of my Simulink model. What do you think about this approach and is my model correct so far?
10.848 * f_wheel is the ball-pass frequency of the inner ring (BPFI).
Kind regards
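(As a cross-check of the Simulink approach: the usual way to synthesize an inner-race fault in software is an impulse train at the BPFI, here 10.848 * f_wheel per the question, with each impact exciting a decaying structural resonance, added to the healthy signal. A rough Python sketch with made-up numbers for the sampling rate, shaft speed, resonance, and fault amplitude.)

import numpy as np

fs, dur = 20000.0, 1.0                       # assumed sampling rate (Hz) and duration (s)
f_wheel = 10.0                               # assumed shaft rotation frequency (Hz)
bpfi = 10.848 * f_wheel                      # ball-pass frequency, inner race
t = np.arange(0.0, dur, 1 / fs)

fault = np.zeros_like(t)
for t0 in np.arange(0.0, dur, 1 / bpfi):     # one impulse per ball pass
    m = t >= t0
    # each impact rings a resonance (here 3 kHz with decay constant 800 1/s)
    fault[m] += np.exp(-800 * (t[m] - t0)) * np.sin(2 * np.pi * 3000 * (t[m] - t0))

faulty_signal = healthy_signal + 0.5 * fault  # healthy_signal: your measured data (placeholder)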
Hello,
I have accelerometer data and I want to calculate the displacement. I found software called SeismoSignal, but it is mainly used to analyze seismic signals; I want simple signal processing software to calculate displacement, apply a high-pass filter, and perform denoising.
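(For what it's worth, the core of such software is just high-pass filtering plus double integration; a minimal Python sketch, where the 200 Hz sampling rate and the 0.5 Hz cutoff are assumptions to adapt, re-filtering after each integration to suppress drift.)

import numpy as np
from scipy import signal, integrate

fs = 200.0                                          # assumed sampling rate, Hz
b, a = signal.butter(4, 0.5, btype="high", fs=fs)   # 0.5 Hz high-pass against drift
acc_f = signal.filtfilt(b, a, acc)                  # acc: 1-D acceleration array (placeholder)
vel = integrate.cumulative_trapezoid(acc_f, dx=1 / fs, initial=0)
vel = signal.filtfilt(b, a, vel)                    # remove integration drift
disp = integrate.cumulative_trapezoid(vel, dx=1 / fs, initial=0)
disp = signal.filtfilt(b, a, disp)                  # displacement estimate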
Dear Colleagues, please suggest the best and most user-friendly open-source software for audio signal analysis, with most of the scientific tools needed for audio signal analysis.
Thanks and Regards
N Das
In MOSFET small-signal analysis, we can relate charge (Q) and capacitance (C) using the formula Cij = dQi/dVj. Is the voltage term (dVj) taken across the electrode terminals 'i' and 'j', or between electrode terminal 'j' and GND?
For example: Cgs = dQg/dVs, where g = gate and s = source. Is this voltage (dVs) between gate and source, or between source and ground?
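(For what it's worth, in the quasi-static convention commonly used in compact modeling, each terminal voltage is taken with respect to a common reference (ground), with all other terminal voltages held fixed: Cii = dQi/dVi and Cij = -dQi/dVj for i != j. Under that convention, Cgs involves perturbing the source potential relative to ground while the gate, drain, and bulk are held constant, not the gate-source voltage directly.)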
Suppose that X1, X2 are random variables with given probability densities fX1(x), fX2(x).
Let fY(x) = g( fX1(x), fX2(x) ) be a known probability density of an "unknown" random variable Y, for some known functional g. Is it possible to determine how the variable Y is related to X1 and X2?
Dear all,
When using TonyPlot, I can get the I-V curve without any problems, but when I try to add small-signal analysis after the gate voltage sweep (ac freq=1e6) in Silvaco Atlas, it shows zero Cgd for all gate voltages. I was wondering if there is a way to plot the C-V curve correctly.
Thank you in advance.
I need some help with VISSIM. I have modeled an intersection where I would like to apply a no-lane-change rule within 100 ft of the traffic signal. Picture 1 shows the intersection without the lane-change restriction, where vehicle 1 and vehicle 2 change lanes near the traffic signal.
However, after applying the no-lane-change rule near the traffic signal, I get picture 2 for the EB direction. Those two vehicles in picture 2 want to turn left, but they are in the no-turning section; therefore, they are not moving.
How can I apply a no-lane-change rule near an intersection?
Hi All,
I have an audio database consisting of various types of signals, and I'm planning to extract features from the audio. I would like to know whether it's a good idea to extract basic audio features (e.g., MFCC, energy) from the audio signal with a large window (let's say 5 s with 1 s overlap) rather than using the conventional small frame size (in ms). I know that the audio signal exhibits homogeneous behavior over a 5 s duration.
Thanks in advance
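(One common compromise is to keep the short-time frames of tens of ms and then aggregate the frame-level features over each 5 s window with a 1 s hop; a librosa sketch under those assumptions, where the file name and parameter values are placeholders.)

import numpy as np
import librosa

y, sr = librosa.load("example.wav", sr=None)         # placeholder file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # short-time frames (default 512-sample hop)

frames_per_win = int(5 * sr / 512)                   # frames in a 5 s window
step = int(1 * sr / 512)                             # frames in a 1 s hop
windows = [mfcc[:, i:i + frames_per_win].mean(axis=1)   # one aggregated vector per window
           for i in range(0, mfcc.shape[1] - frames_per_win + 1, step)]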
Can anybody give details about how NIR spectra relate to glucose absorption in terms of wavelength?
Is the introduction of labels detrimental when detecting small molecules? If yes, what are the major disadvantages of using labels for signal amplification in the detection of small-molecular-weight (<1 kDa) compounds?
The Fourier transform does not give information about the local time-frequency characteristics of a signal, which are especially important in the context of non-stationary signal analysis. Various other mathematical tools, such as the wavelet transform, the Gabor transform, and the Wigner-Ville distribution, have been used to analyze such signals. The fractional Fourier transform (FrFT), which is a generalization of the integer-order Fourier transform, can also be used in this context.
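(For reference, in one common convention the FrFT of order a uses the angle alpha = a*pi/2 and the kernel K_alpha(t, u) = sqrt((1 - j*cot(alpha)) / (2*pi)) * exp( j*((t^2 + u^2)/2)*cot(alpha) - j*t*u*csc(alpha) ), which reduces to the identity for a = 0 and to the ordinary Fourier transform for a = 1.)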
I'm working on a bipolar shaper amplifier and solving some circuits.
As shown in the attachment, the drains of the two MOSFET pairs are shorted. Does that node serve as an AC ground in small-signal analysis?
I've made its small-signal model, but it seems to be incorrect.
Can anybody please comment on how to treat the short between the two MOSFET pairs in the differential amplifier?
I have a reference time series and a main data set (with a similar sampling rate) which contains multiple instances of the reference signal. Applying cross-correlation (xcorr in MATLAB) and taking the highest xcorr values, I have extracted multiple signal instances from the main data set.
Sometimes the list includes slightly different signals, and I want to keep or remove those signals by comparing them with the reference signal again to check whether they match. Is there an efficient way to do that?
Reference snapshot attached.
Regards
Sreeraj
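(One efficient post-filter is to score every extracted instance against the reference with a normalized correlation coefficient and keep only those above a threshold; a sketch where instances, reference, and the 0.9 threshold are placeholders.)

import numpy as np

def ncc(a, b):
    # zero-mean, unit-norm correlation: 1.0 means identical shape
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

kept = [s for s in instances if ncc(s, reference) > 0.9]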
I have conducted an experiment in which an impulse was exerted on a sound bowl (it was hit by a hammer). As a result, I obtained acceleration from an accelerometer on 3 axes, calculated the net acceleration (sqrt(x^2+y^2+z^2)), and tried to obtain the frequency components of the response. What I need to do is identify the input impulse, but I have no idea how to do that and am kind of walking blindly. I would appreciate any ideas/references.
Hi,
I am trying to generate an eye diagram for a particular signal along with a defined eye mask, but I cannot find any reference on how to integrate an eye mask with the MATLAB eye diagram object. Does anyone have any information?
The first diagram is the MATLAB-generated eye diagram to which I would like to add an eye mask, so that it looks like the second diagram.
I'm wondering:
The spectrogram gives limited information about a non-stationary signal, but is it enough for a classification method? Are there any predefined "names" for the shapes and behaviors seen in a spectrogram? Where (on the spectrograms) is the fundamental frequency (F0)?
We have found a time-frequency behaviour in plant signals.
See the full spectrograms below.
Any input will be very much appreciated. Thank you.
I am facing a problem in Simulink.
When I want to write a bus signal to the workspace in Simulink, I fail.
Next, I tried using a bus selector to select some signals in the bus and then write them to the workspace, but I also failed, with the error: 'The selected signal is invalid since it refers to a bus element within an array of subbuses'.
When I then use another bus selector to select the sub-buses from the former bus selector, I cannot find the other signals.
So how can I write a bus signal to the workspace?
I want to get a time-frequency spectrogram using the windowed Burg and Lomb-Scargle methods. As far as I know, they calculate the PSD for a segment of time, but for a short signal (less than 5 min in length) the recommended window sizes are bigger than the signal length, so I get only one PSD for the whole signal. What window size should I use in order to get a time-frequency spectrogram for a 5 min signal?
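(For a 5 min record the trade-off can still work: e.g. a 60 s window with a 5 s step yields a coarse but usable time-frequency map. A sliding-window Lomb-Scargle sketch with scipy; note scipy's lombscargle expects angular frequencies, and the band and window values here are placeholders.)

import numpy as np
from scipy.signal import lombscargle

# t: sample times in seconds, x: signal values (both placeholders)
freqs = np.linspace(0.04, 0.5, 100)            # frequency band of interest, Hz
w = 2 * np.pi * freqs                          # lombscargle wants rad/s
win, step = 60.0, 5.0                          # 60 s window, 5 s step

tfr = []
for t0 in np.arange(t[0], t[-1] - win, step):
    m = (t >= t0) & (t < t0 + win)
    xm = x[m] - x[m].mean()                    # lombscargle assumes zero-mean data
    tfr.append(lombscargle(t[m], xm, w))
tfr = np.array(tfr)                            # rows = time windows, cols = frequencies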
The signals for the induction motor current are plotted in the frequency domain using MATLAB.
I attached the plot for further explanation.
Hi,
I was going through different methods to implement serial decoding of a FlexRay analogue electrical signal using MATLAB. Any suggestions or useful references would be much appreciated!
A reference waveform from an external tool is attached. The electrical waveform represents the byte 0x83.
By calculating the distance between two antennas and then taking the FFT of the received signal, how can the speed of the signal be calculated?
I have been doing research on extracting information like acceleration, distance, etc. from an analogue signal by performing analogue signal processing on any type of signal. I haven't found any techniques that would provide a formula or any other information leading to such extraction by analogue signal processing. If anyone is familiar with analogue signal processing, could you please enlighten me as to whether this is even possible?
After spending hours on it, I am at a stage where I feel it may not be possible.
Is it possible to visualize such high frequencies in distribution networks with conventional signal processing techniques?
Hi,
I am new to EEG signal processing. I am working on the DEAP dataset to classify EEG signals into different emotion categories.
My inputs are EEG samples of shape channels x timesteps. The provider of the dataset has already removed artifacts.
I use convolution layers to extract features and a fully connected layer to classify. I also use dropout.
I shuffle all trials (sessions) and split the dataset into training and testing sets. I get reasonable accuracy on the training set.
However, the model is unable to generalize across trials.
It overfits: its performance on the test set is just as bad as a random guess (around 50% accuracy for low/high valence classification).
Is there any good practice for alleviating overfitting in this scenario?
Another thing that bothers me is that when I search the related literature, I find many papers also report around 50% accuracy.
Why are results from EEG emotion classification so bad?
I feel quite helpless now. Thank you for any suggestions and replies!
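(On the ~50% results: one frequent culprit is evaluating with random trial shuffling instead of subject-wise, or at least trial-wise, splits, which lets the network memorize per-session idiosyncrasies rather than emotion-related structure. A grouped-split sketch with scikit-learn; the variable names are placeholders.)

import numpy as np
from sklearn.model_selection import GroupKFold

# X: (n_trials, channels, timesteps), y: labels, groups: subject id per trial
gkf = GroupKFold(n_splits=5)
for train_idx, test_idx in gkf.split(X, y, groups=groups):
    model.fit(X[train_idx], y[train_idx])      # model: your CNN (placeholder)
    print(model.evaluate(X[test_idx], y[test_idx]))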
Hi, I am confused about the concept of the signal quality index. What is a signal quality index? What is the relation between the signal quality index and signal strength? Can we determine signal strength from the signal quality index?
Brain signal analysis for fMRI images.
I analyze four ethanolamine compounds (monoethanolamine, diethanolamine, methyl diethanolamine (MDEA), and piperazine) via triple-quadrupole LC-MS/MS. The column is a Poroshell 120 EC-C18, and acetonitrile and 5 mM ammonium acetate are the organic and aqueous phases, respectively. The method is used to analyze water samples via direct injection. Recently, I have been getting a high background signal for MDEA and piperazine in the blank sample. Any suggestions to solve this issue? Thanks in advance.
Someone who is in a coma is unconscious and will not respond to voices, other sounds, or any sort of activity going on nearby. However, in this case I'm wondering whether any brain activity nevertheless causes some senses to work.
I am developing an application to estimate the distance to a BLE beacon using its RSSI values measured from a mobile phone. But when I started to collect data, I saw that the values varied so much that I cannot possibly estimate the distance from them. Even though I implemented mean and median filters with various window sizes, the RSSI values still vary strongly. Is this a common problem with RSSI values? Is there a way to eliminate or filter out these variations so that I can feed them into a distance model? Help is greatly appreciated.
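(Yes, heavy RSSI fluctuation is normal due to multipath and body shadowing. Beyond mean/median filters, a scalar Kalman filter is a common next step; a sketch where the process noise q and measurement noise r are tuning knobs for your data.)

def kalman_1d(measurements, q=0.05, r=4.0):
    # q: process noise (how fast the true RSSI may drift), r: measurement noise
    x, p = measurements[0], 1.0
    smoothed = []
    for z in measurements:
        p = p + q                    # predict
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the new RSSI sample
        p = (1 - k) * p
        smoothed.append(x)
    return smoothed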
Hi everybody, I'm currently doing my master's thesis in biomedical engineering on pulse oximetry.
I have to consider a real-time system for my device. To compute SpO2 from the PPG signal, I filtered my signal with a low-pass filter (to obtain the DC component) and a band-pass filter (to obtain the AC component).
I used FIR filters because of the real-time requirement and their linear phase.
However, all the articles about real-time systems that I have read use IIR Butterworth filters, because they have a lower order and give better results.
So, my question is: what is the better way to proceed? Should I use a higher-order FIR filter (the only way to obtain good AC and DC signal quality)?
Thanks for your answer.
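(Both routes are workable; a scipy sketch under assumed values, a 100 Hz PPG sampling rate and illustrative 0.5 Hz / 0.5-5 Hz cutoffs, showing a 2nd-order IIR Butterworth next to a linear-phase FIR for the same AC band, so the order/latency trade-off can be compared directly. ppg is a placeholder array.)

import numpy as np
from scipy import signal

fs = 100.0                                                  # assumed PPG sampling rate
b_dc, a_dc = signal.butter(2, 0.5, btype="low", fs=fs)      # DC path: low-pass
b_iir, a_iir = signal.butter(2, [0.5, 5.0], btype="bandpass", fs=fs)   # AC path, IIR
fir_ac = signal.firwin(129, [0.5, 5.0], pass_zero=False, fs=fs)        # AC path, FIR

dc = signal.lfilter(b_dc, a_dc, ppg)
ac_iir = signal.lfilter(b_iir, a_iir, ppg)     # low order, nonlinear phase
ac_fir = signal.lfilter(fir_ac, [1.0], ppg)    # linear phase, fixed delay of 64 samples (0.64 s)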
How can one implement an adaptive dictionary reconstruction for compressed sensing of ECG signals, and how can one analyze the overall power consumption of the proposed ECG compression framework as it would be used in a WBAN?
I have a number of questions regarding EEG signal analysis. Even after looking at a number of websites, I am confused about their usage. Please advise, with clear explanations.
1) Suppose I have EEG data with a 100 Hz sampling frequency, recorded for 30 seconds. This means that we have 30 * 100 = 3000 samples of data. For these 3000 samples, in order to plot a graph against time, how do I calculate the time values? Do I simply divide the sample index by 3000 (time at sample 1 = 1/3000, time at sample 2 = 2/3000, etc.)? Or do I divide the sample index by 100 (time at sample 1 = 1/100, time at sample 2 = 2/100, etc.) and continue with (1 + 1/100) after it reaches 100 sample points?
2) While calculating the alpha, beta, and delta band powers, do we need to average the final power spectrum values across all channels of the EEG data?
3) After cleaning the noise using the FFT and obtaining the final spectrum, how do I separate the individual band powers from the final data? Is there a function for this?
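(A sketch covering all three points in Python, with fs = 100 Hz and 30 s taken from the question and the usual band conventions: the time axis is sample_index / fs, band powers are integrated from a PSD, and channel averaging is simply a mean across the channel axis.)

import numpy as np
from scipy import signal

fs = 100.0
x = np.random.randn(3000)                 # stand-in for one 30 s channel
t = np.arange(len(x)) / fs                # 0, 0.01, 0.02, ... s (divide by fs, not by 3000)

f, psd = signal.welch(x, fs=fs, nperseg=256)
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
power = {name: np.trapz(psd[(f >= lo) & (f <= hi)], f[(f >= lo) & (f <= hi)])
         for name, (lo, hi) in bands.items()}
# for multi-channel data: compute per channel, then average across channels if desired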