Science topic
Signal Processing - Science topic
Explore the latest questions and answers in Signal Processing, and find Signal Processing experts.
Questions related to Signal Processing
How can I filter an input signal through a lognormal shadowing model or a kappa-mu shadowing model using MATLAB code that generates the PDF?
I have been working on classifying ECG signals, and for feature extraction I am going to use AR modelling with Burg's method. After reading a few papers I learned that the features are extracted after splitting the ECG signals into blocks of different durations. My question is: why is it necessary to do so, and how can we fix a certain duration? For instance, I have a signal with 50000 samples at fs = 256 Hz, so what could the duration of each block be?
It would also be really helpful if someone could help me understand Burg's method. There are videos for learning the Yule-Walker equations, but I didn't find any for Burg's method.
Thank you in advance
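On the "why blocks" part: AR modelling assumes the signal is (quasi-)stationary, which an ECG is only over short stretches, so papers fit one AR model per block. Block lengths of a few seconds are common; at fs = 256 Hz a 2 s block is 512 samples, and your 50000 samples cover about 195 s. As a rough illustration of Burg's method itself (not tied to any particular paper; the order and test signal below are arbitrary assumptions), here is a pure-Python sketch:

```python
def burg_ar(x, order):
    """Burg's method for AR coefficient estimation (pure-Python sketch).

    Returns (a, e): a = [1, a1, ..., ap] for the model
    x[n] + a1*x[n-1] + ... + ap*x[n-p] = e[n],
    and e = final prediction-error power.
    """
    n = len(x)
    f = list(x)                      # forward prediction errors
    b = list(x)                      # backward prediction errors
    a = [1.0]
    e = sum(v * v for v in x) / n
    for m in range(1, order + 1):
        # Reflection coefficient minimising forward + backward error power
        num = -2.0 * sum(f[i] * b[i - 1] for i in range(m, n))
        den = sum(f[i] ** 2 for i in range(m, n)) + \
              sum(b[i - 1] ** 2 for i in range(m, n))
        k = num / den
        # Levinson-style update of the AR polynomial
        a = [1.0] + [a[i] + k * a[m - i] for i in range(1, m)] + [k]
        # Update forward/backward error sequences
        new_f, new_b = f[:], b[:]
        for i in range(m, n):
            new_f[i] = f[i] + k * b[i - 1]
            new_b[i] = b[i - 1] + k * f[i]
        f, b = new_f, new_b
        e *= 1.0 - k * k
    return a, e
```

In MATLAB the equivalent one-liner is `arburg(x, p)` (Signal Processing Toolbox); the per-block AR coefficients then become the feature vector.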
I have two datasets (.edf) of EEG recordings, one for healthy people and one for depressive people.
Each recording has 20 channels. So far I have opened the data in MATLAB with edfread() as a timetable.
How can I add white noise to that timetable?
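Adding white Gaussian noise at a chosen SNR is the same operation whatever the container: in MATLAB you can apply it channel by channel to the timetable's variables (e.g. with randn, or awgn from the Communications Toolbox). A language-agnostic sketch in Python; the 10 dB target in the usage note is an arbitrary assumption:

```python
import math
import random

def add_white_noise(signal, snr_db, rng=None):
    """Add zero-mean white Gaussian noise so the result has the requested
    signal-to-noise ratio (in dB) relative to `signal`."""
    rng = rng or random.Random(0)
    power = sum(s * s for s in signal) / len(signal)   # mean signal power
    sigma = math.sqrt(power / 10 ** (snr_db / 10))     # noise std deviation
    return [s + rng.gauss(0.0, sigma) for s in signal]
```

Applied per channel (e.g. `noisy = add_white_noise(channel, 10.0)`), this preserves the timetable structure if you write the result back into the same columns.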
My area of research is genomic signal processing. I need to name two experts from outside India in this area to review my work for a journal.
Can anyone kindly suggest experts in the areas of genomic signal processing, signal processing, and bioinformatics?
Hi everyone, I am an engineering student and I have started to learn signal processing / signals & systems topics. I think one problem of self-learning is that you can't find a teacher to ask about the point where you are stuck.
I don't understand how the CT impulse function is transformed into the discrete-time impulse, whose amplitude is 1. How does this process work?
I have problems with converting CT to DT, the sampling, and the periodization. I have watched several videos about it, but the actual mathematical operation of this "converter" is not clear to me.
What I mean is: what is the operator that converts
""impulse(t) >> to impulse[n] with amplitude of 1""
or ""x(t) . p(t) impulse train >> to x[n] as a sequence""?
x(t) . p(t) can be represented as the summation of the series of x(nT) . impulse(t - nT).
But this is still not equal to a sequence x[n], because it contains scaled impulses with infinite amplitude, right?
To state it again, my question is how to transform x(t) to x[n] mathematically. How does this sampling occur?
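One way to resolve the confusion, sketched in standard textbook notation: the product with the impulse train stays a continuous-time object, and the sequence is defined by reading off the impulse areas, not by converting amplitudes:

```latex
% Sampling: the product with the impulse train is still a CT signal
x_s(t) = x(t)\,p(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t - nT)

% The DT sequence is *defined* by reading off the impulse weights:
x[n] \triangleq x(nT)
```

So there is no operator that turns the Dirac delta (infinite height, unit area) into the Kronecker delta (unit height); the correspondence is between the area x(nT) of each Dirac impulse and the value of the sequence at index n, and each weight x(nT) is finite even though the impulse carrying it is not.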
Could you please suggest any articles or book chapters where I could start learning the concept of Total Variation in classical signal processing? I would like to relate it to Graph Signal Processing in understanding the Fourier basis.
Hi, my thesis is about detection of myocardial infarction from ECG signals, and I want to know whether there is any database for it.
I'm pre-processing UAV magnetic data where the flight paths are parallel to each other in the N-S direction (heading N and S one after another). The magnetic values seem to be vertically shifted and flipped when flying in different headings. The only way I could solve this was by compensating the values using the difference in median (constant median) in the Magdrone Data Tool, but these compensated values would be insufficient for the magnetic susceptibility calculation later. I've tried doing heading correction in Oasis Montaj but to no avail. Is there a way I could solve this heading error?
The first image shows a profile of 6 tracks. The arrow corresponds to the UAV turning. These data have been low-pass filtered. Profile 2 shows the data after removal of the turning errors.
I've also attached a scatterplot of the raw data and grid (minimum curvature) of Profile 2.
Dear all,
I'd like to detect diabetes through PPG signal processing. Which method do you recommend I use? If you happen to have access to the scripts, I'd appreciate it.
Thanks!
Fernando
I need to remotely measure the thickness of an object mounted on a black plate with an accuracy of <5 mm. This accuracy is challenging for depth cameras, and a lidar cannot get a reflection from the black plate, as it absorbs the signal (we need to measure the distance from the camera to the plate and to the object to infer the object's thickness).
Any suggestions of techniques that could fit are appreciated.
Hello,
I graduated with a Master's degree in machine learning and signal processing.
I'm in the first year of my Ph.D. in computer science. I have some difficulties finding topics on smart cities.
Do you have some suggestions or ideas?
Human surface EEG (electroencephalography) is made up of background activity + oscillations. Many of these oscillations come in short bursts (1-5 seconds) or even sustained trains (>10 s), such as occipital alpha activity (8-10 Hz). What is the best method for automatically identifying these bursts, without relying on simple fixed amplitude thresholds or complex machine learning algorithms?
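A middle ground between fixed thresholds and machine learning is a data-driven (percentile) threshold on a moving-RMS envelope, plus a minimum-duration rule. A minimal pure-Python sketch; the window length, percentile, and minimum duration below are arbitrary assumptions, and for alpha bursts one would band-pass filter to 8-12 Hz first:

```python
import math

def detect_bursts(x, fs, win_s=0.25, pct=75, min_dur_s=0.5):
    """Flag samples whose moving-RMS envelope exceeds a percentile-based
    (i.e. data-driven, not fixed) threshold, then keep only runs lasting
    at least min_dur_s seconds. Returns (start, stop) sample index pairs."""
    w = max(1, int(win_s * fs))
    env = []
    for i in range(len(x)):                       # moving-RMS envelope
        lo, hi = max(0, i - w // 2), min(len(x), i + w // 2 + 1)
        seg = x[lo:hi]
        env.append(math.sqrt(sum(v * v for v in seg) / len(seg)))
    thr = sorted(env)[int(len(env) * pct / 100)]  # adapts to the recording
    min_len = int(min_dur_s * fs)
    bursts, start = [], None
    for i, e in enumerate(env):
        if e > thr and start is None:
            start = i
        elif e <= thr and start is not None:
            if i - start >= min_len:
                bursts.append((start, i))
            start = None
    if start is not None and len(env) - start >= min_len:
        bursts.append((start, len(env)))
    return bursts
```

Because the threshold is a percentile of the envelope itself, it rescales automatically across subjects and channels, which is the main weakness of fixed-amplitude rules.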
I am working on a research point related to stability in power systems based on pole estimation.
I am trying to apply the ESPRIT algorithm in my work to estimate system poles. I wrote an m-file and tried to apply this technique to a simple transfer function to estimate its roots. I calculated the covariance matrix of the data signal, took the SVD to get the two overlapping subspace matrices S1 and S2, computed the matrix Psi, and took eig(Psi). How can I calculate the frequency and damping ratio after computing the eigenvalues of Psi? The equations I use for frequency and damping factor give wrong values.
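In case it helps: each eigenvalue z of the ESPRIT rotation matrix is a discrete-time pole z = exp(s·Ts) with s = sigma + j·omega, so frequency and damping fall out of the complex logarithm. A hedged sketch (your sampling time Ts and sign conventions may differ):

```python
import cmath
import math

def pole_to_mode(z, ts):
    """Convert a discrete-time eigenvalue z (from ESPRIT) into frequency
    in Hz, damping factor sigma in 1/s, and damping ratio zeta.
    Assumes z = exp(s*ts) with s = sigma + j*2*pi*f."""
    s = cmath.log(z) / ts                  # continuous-time pole
    sigma, omega = s.real, s.imag
    f = omega / (2 * math.pi)              # frequency in Hz
    zeta = -sigma / math.hypot(sigma, omega)   # damping ratio
    return f, sigma, zeta
```

Note that angle(z) is only unambiguous below the Nyquist frequency (omega·Ts in (-pi, pi]); a pole whose frequency exceeds 1/(2·Ts) will alias, which is one common reason for "wrong" frequency values.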
In processors, complex and challenging operations need to be handled to meet demands, which leads to an increase in processor cores. This increases the load on the processor, which can be limited by adding co-processors dedicated to specific types of functions, such as signal processing. But in any case, the speed of the ALU relies on the multiplier, since multipliers are the major components for performing operations in the CPU.
The background is that we are trying to calculate an index relying on a high-frequency band above 100 Hz with only a 128 Hz signal. The idea is: say we have a 128 Hz signal and use the FFT to convert it into a frequency spectrum, which gives information from 0-64 Hz according to Nyquist. Then, if we subtract the IFFT of the 0-64 Hz spectrum from the original signal, will the residual contain some information from the 64-128 Hz band?
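A quick sanity check suggests the answer is no: the 0-64 Hz spectrum of a 128 Hz-sampled signal already encodes every sample exactly, so the subtraction leaves only floating-point round-off, not a hidden 64-128 Hz band (anything above 64 Hz was lost, or aliased, at acquisition time). A toy demonstration with a hand-rolled DFT on arbitrary sample values:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for small demos)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * m * k / n) for k in range(n))
            for m in range(n)]

def idft(X):
    """Inverse DFT, reconstructing the time-domain samples."""
    n = len(X)
    return [sum(X[m] * cmath.exp(2j * cmath.pi * m * k / n) for m in range(n)) / n
            for k in range(n)]

x = [0.3, -1.2, 0.7, 2.1, -0.5, 0.0, 1.4, -0.8]   # any 128 Hz samples
residual = [xi - yi.real for xi, yi in zip(x, idft(dft(x)))]
print(max(abs(r) for r in residual))   # round-off only: nothing is left over
```

The full DFT of an N-point signal is an invertible N-to-N map, so "signal minus IFFT of its whole spectrum" is identically zero by construction.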
I want to detect anomalies in streaming data. Using an FFT or DWT history, is it possible to detect anomalies on the fly (online)? It would help a lot if anybody could suggest some related resources.
Thanks.
Dear colleagues,
We are now working with the SDDB ECG records (https://physionet.org/content/sddb/1.0.0/). In our preliminary literature study and dataset assessment, we found that each record shows baseline wander. Unfortunately, we cannot determine whether this is true baseline wander or whether it arises naturally from the heart, since we have not found any pre-processing described for the SDDB records, apart from signal segmentation for sudden-death classification research.
If this phenomenon is baseline wander, what is the best-practice baseline wander removal for these records? Below is a picture of an SDDB signal segment from PhysioNet.
Researchers are now employing Wi-Fi sensing and Wi-Fi CSI to design and develop various devices for activity detection, heartbeat monitoring, and more. They train on data from one environment using machine learning or deep learning.
My issue is that, because Wi-Fi CSI is highly sensitive to the environment, how will it operate if the environment changes, for example, if I train it at home and then use it in my workplace room? Is it necessary to retrain for each environment before use?
If you do research in the area of Signal Processing, mainly Graph Signal Processing (GSP), then I recommend that you try your luck in the following 5-min video challenge:
Compressed sensing (also known as compressive sampling or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. It is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist-Shannon sampling theorem. There are two conditions under which recovery is possible. The first is sparsity, which requires the signal to be sparse in some domain. The second is incoherence, which is applied through the restricted isometry property and is sufficient for sparse signals.
I want to measure the distance between two Bluetooth devices (A Master and a slave) using the corresponding RSSI value. Is there any algorithm or popular approach that maps RSSI values directly to distance in, let's say, centimeters??
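A popular (if coarse) approach is the log-distance path-loss model, d = 10^((P0 - RSSI)/(10·n)), where P0 is the RSSI measured at 1 m and n is the path-loss exponent (about 2 in free space, roughly 2.7-4 indoors). Both must be calibrated for your devices and environment; the defaults below are placeholder assumptions:

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: distance in metres from an RSSI value.
    rssi_at_1m (P0) and path_loss_exp (n) are environment-specific and
    must be calibrated; the defaults here are only placeholders."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))
```

Multiply by 100 for centimetres, but expect large errors indoors: multipath fading and body shadowing make single-sample RSSI a noisy distance proxy, so averaging many readings (or Kalman filtering them) is usually necessary.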
Could anyone please suggest some resources where I could find a comparison curve of signal strength after multipath propagation with respect to obstacle positions between transmitter and receiver?
After conducting some experiments, I found that the effect was greater near the Rx or near the Tx, but smaller when the obstacle was equidistant from Rx and Tx. Why does such a phenomenon happen?
My research interests include but are not limited to fault diagnosis and signal processing. Recently, I have been focusing on the data from the 2009 PHM challenge. Do you know where to find labeled data for this dataset? Data without labels seems to be easy to find.
A cell phone is used to record acceleration data with Physics Toolbox Pro. It looks like the acceleration signal is not recorded with a constant sampling rate. Are resampling and filtering necessary before further processing of the acceleration signal? A double integration of the acceleration signals to obtain displacement signals is ultimately needed. A Python script would be very helpful.
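Yes: resampling onto a uniform grid first makes the subsequent filtering and integration well defined, and removing the mean (or high-pass filtering) before each integration keeps the drift bounded. A minimal pure-Python sketch of the pipeline pieces (filter design is left out, and the rates in the comments are assumptions):

```python
def resample_linear(t, x, fs):
    """Linearly interpolate samples x taken at (possibly irregular) times t
    onto a uniform grid with sampling rate fs."""
    step = 1.0 / fs
    n = int((t[-1] - t[0]) * fs) + 1
    out_t, out_x, j = [], [], 0
    for i in range(n):
        tau = t[0] + i * step
        while j + 2 < len(t) and t[j + 1] < tau:
            j += 1
        w = (tau - t[j]) / (t[j + 1] - t[j])
        out_t.append(tau)
        out_x.append((1.0 - w) * x[j] + w * x[j + 1])
    return out_t, out_x

def detrend_mean(x):
    """Remove the mean so integration does not accumulate a ramp."""
    m = sum(x) / len(x)
    return [v - m for v in x]

def cumtrapz(x, dt):
    """Cumulative trapezoidal integration over a uniform grid."""
    out = [0.0]
    for i in range(1, len(x)):
        out.append(out[-1] + 0.5 * (x[i - 1] + x[i]) * dt)
    return out

# acceleration -> velocity -> displacement (fs assumed, e.g. 100 Hz):
# vel  = cumtrapz(detrend_mean(acc_uniform), 1 / fs)
# disp = cumtrapz(detrend_mean(vel), 1 / fs)
```

In practice scipy (`scipy.interpolate.interp1d`, `scipy.integrate.cumulative_trapezoid`, plus a high-pass Butterworth between the two integrations) does the same more robustly; the sketch just shows the order of operations.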
We know that the brain sends meaningful messages to control the body's cells.
As we know, the brain is affected by factors such as diseases, and the brain also controls other organs of the body. Nevertheless, is damage to the cells visible on EEG?
Does cancer have an effect on the EEG?
I am new to the field of signal processing, but I have read that the DWT can be used to find similarity between two time series. I am curious what kind of similarity measure we use once we have calculated the approximation and detail coefficients for both time series at an appropriate decomposition level.
So, for example, using the DWT on time series 1 I will have an array which contains:
[12,10,4.5,7,-2.8,-1.2]
Similarly, for the second time series I will have:
[17,9,8,23,-3,-6.8]
Now, what similarity measure do I use to find a similarity index and indicate how similar these wavelet coefficients are?
I am coding in Python, if that helps.
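Since you're in Python: with the coefficient vectors in hand (in practice from `pywt.wavedec`), common choices are plain Euclidean distance or cosine similarity on the concatenated approximation + detail coefficients; for alignment-tolerant comparison, people also apply DTW to the coefficient sequences. A self-contained sketch with a hand-rolled Haar DWT, using the two arrays from your question:

```python
import math

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: approximation + detail."""
    approx = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    return approx, detail

def cosine_similarity(u, v):
    """1.0 = identical direction, 0 = orthogonal, -1 = opposite."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

c1 = [12, 10, 4.5, 7, -2.8, -1.2]   # coefficients from the question
c2 = [17, 9, 8, 23, -3, -6.8]
print(cosine_similarity(c1, c2))
```

Cosine similarity compares the shape of the coefficient vectors regardless of overall amplitude; use Euclidean distance instead if the absolute coefficient magnitudes matter for your application.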
I need some guidance on computing the CRLB numerically to estimate the Doppler frequency of the synthetic signal given below:
X = A*sinc(B*(t-𝜏)).*exp(j*2*pi*F*(t-𝜏)); where θ = [ F, A, 𝜏 ]
"A" is complex and has amplitude and phase, "F" is the Doppler frequency, and "𝜏" is the azimuth shift.
Hello everyone. I am asking for suggestions (scripts) about extracting phase values. I am not familiar with signal processing techniques. Currently, I can use the Hilbert transform to extract the envelope, but I do not know what to do for the next step, extracting the phase of the envelope. Can anyone give me some suggestions?
Thanks a million.
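In case a concrete next step helps: the Hilbert transform gives an analytic signal z(t), and the instantaneous phase is simply the angle of z, unwrapped to remove the 2π jumps. If what you want is the phase of a slow oscillation of the envelope itself, a common trick is to demean the envelope, Hilbert-transform it, and take the angle of that in the same way. A pure-Python sketch of the angle/unwrap step (in practice `numpy.angle` and `numpy.unwrap` do this):

```python
import cmath
import math

def instantaneous_phase(analytic):
    """Instantaneous phase of an analytic (Hilbert-transformed) signal,
    unwrapped so that jumps larger than pi are removed."""
    phase = [cmath.phase(z) for z in analytic]
    out = [phase[0]]
    for p in phase[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))  # wrap step into (-pi, pi]
        out.append(out[-1] + d)
    return out
```

Unwrapping assumes the true phase changes by less than pi per sample, which holds whenever the signal is adequately sampled.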
I am working with an image sequence of an evolving wave pattern, and I am interested in analyzing this sequence with a growth rate vs. wave number diagram.
I found that one way to do this is via a linear stability analysis, which involves finding the maximum eigenvalue of the matrix at each time step. Is this a correct approach?
I am also confused about where time enters the growth rate vs. wave number diagram. For example, the diagram shows how the growth rate changes for different wave numbers, but is this for a fixed time?
In the DSP courses I took at university we only covered theoretical material. I am looking for a good book that covers practical implementation of DSP in MATLAB, such as designing filters and the DFT or FFT.
I am also looking for good books on signal processing with MATLAB in general.
Thanks.
My question refers to the following papers:
- S. J. Julier and J. J. LaViola, "On Kalman Filtering With Nonlinear Equality Constraints," in IEEE Transactions on Signal Processing, vol. 55, no. 6, pp. 2774-2784, June 2007, doi: 10.1109/TSP.2007.893949
- A. T. Alouani and W. D. Blair, "Use of a kinematic constraint in tracking constant speed, maneuvering targets," in IEEE Transactions on Automatic Control, vol. 38, no. 7, pp. 1107-1111, July 1993, doi: 10.1109/9.231465.
In particular, my concerns are about Fig. 1 of [1], the statements at the end of the left column on page 2 of [1], and the statements in the middle of the left column on page 2 of [2]. In both papers, it is suggested to apply the constraints only after the update of the state through the measurements. Would it be possible to obtain better results by projecting the state onto the constraining surface after both the prediction and the update steps?
I am curious about what happened to the atomizer software by Buckheit J. (http://statweb.stanford.edu/~wavelab/personnel/) and if it is available somewhere.
Sadly I found only 2 dodgy sites that require a login to download the MATLAB code. Does someone have any information on where to get it from?
Alternatively, if there are other toolkits that have implemented this code please let me know, it does not have to be MATLAB, any language is fine for me :).
Thank you.
I am working on CTU (Coding Tree Unit) partitioning using a CNN for intra-mode HEVC. I need to prepare a database for that and have referred to multiple papers. In most papers, images are encoded to obtain binary labels (splitting or non-splitting) for all CU (Coding Unit) sizes, resolutions, and QPs (Quantization Parameters).
If anyone knows how to do this, please give the steps or reference material.
Reference papers
Apart from wavelet transform theory, have you ever used techniques that can handle signal processing, especially of non-stationary signals such as brain signals, and found them superior?
Besides positioning solutions, Global Navigation Satellite System (GNSS) receivers can provide a reference clock signal known as Pulse-per-Second (PPS or 1-PPS), a TTL electrical signal that is used in several applications for synchronization purposes.
- Is the PPS physically generated through a digitally-controlled oscillator (or line driver) whose offset is periodically re-initialized by the estimated clock bias (retrieved by means of PVT algorithms)?
- Are there any specific filters/estimators devoted to a fine PPS generation and control?
- Does any colleague know a reference providing technical details on this aspect?
While reading a research paper, I found that to detect abnormality in a signal the authors used a sliding window, and in each window they divided the mean^2 by the variance. After searching the internet I found a term called the Fano factor, but it is the inverse of that. Could anyone please give an intuitive idea behind the equation mean^2/variance?
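An intuitive reading: mean^2/variance is the inverse of the squared coefficient of variation (CV = std/mean), an SNR-like quantity measuring how large the local signal level is compared with its fluctuations. The Fano factor proper is variance/mean, so this statistic is its reciprocal up to one extra power of the mean. In a sliding window it dips wherever the signal becomes unusually noisy relative to its level, which is what flags the abnormal segments. A small sketch (the window length is an arbitrary assumption):

```python
import statistics

def inverse_fano(x, win, step=1):
    """Sliding-window mean^2/variance: large values = steady signal,
    small values = bursty/abnormal segments."""
    out = []
    for start in range(0, len(x) - win + 1, step):
        w = x[start:start + win]
        m = statistics.fmean(w)
        v = statistics.pvariance(w)
        out.append(m * m / v if v > 0 else float("inf"))
    return out
```

On a window of near-constant values the statistic is huge; a window containing an outlier or burst drives the variance up and the statistic down, so thresholding the dips locates anomalies.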
I am looking for some topics where I can use graph signal processing to solve problems in wireless sensor networks. I have gone through a few papers that gave me an overview of the applications of GSP in this domain, but I want to work on some specific problems (like intrusion detection, efficient energy distribution, etc.).
With 2 logistic chaotic sequence generation, we generate two y sequences (Y1, Y2) to encrypt data.
With a 2D logistic chaotic sequence, we generate x and y sequences to encrypt data.
Are the above statements correct? Kindly help with this, and kindly share a relevant paper if possible.
I am collecting Wi-Fi CSI data using an ESP32, retrieving the phase and amplitude of each subcarrier, and plotting them in real time using pyqtgraph.
The problem is that I do not see any significant changes in the plot (for both amplitude and phase) while moving my hand. Should I use some kind of filtering to see the deviations? If yes, what kind of pre-processing and filtering is required?
Today, sensors are usually interpreted as devices which convert different sorts of quantities (e.g. pressure, light intensity, temperature, acceleration, humidity, etc.), into an electrical quantity (e.g. current, voltage, charge, resistance, capacitance, etc.), which make them useful to detect the states or changes of events of the real world in order to convey the information to the relevant electronic circuits (which perform the signal processing and computation tasks required for control, decision taking, data storage, etc.).
If we think in a simple way, we can assume that actuators work in the opposite direction, providing an "action" interface between the signal processing circuits and the real world.
If the signal processing and computation becomes based on "light" signals instead of electrical signals, we may need to replace today's sensors and actuators with some others (and probably the sensor and actuator definitions will also be modified).
- Let's assume a case where we need to convert pressure to light: one can prefer the simplest (hybrid) approach, which is to use a pressure sensor and then an electrical-to-optical transducer (e.g. an LED) to obtain the required new type of sensor. However, instead of this indirect conversion, if a more efficient or faster direct pressure-to-light converter (a new type of pressure sensor) is available, it might be more favorable. In the near future, we may need such direct transducer devices for low-noise and/or high-speed realizations.
(The example may not be a proper one but I just needed to provide a scenario. If you can provide better examples, you are welcome)
Most probably there are research studies ongoing in these fields, but I am not familiar with them. I would like to know about your thoughts and/or your information about this issue.
I have a list of stores; each store has variables (revenue, market share, etc.) which have been captured at a monthly level for a year, so basically 12 data points per variable. So I am treating this as time series data.
I want to cluster the stores based on these variables, with the condition that the store variables within a cluster match each other not just in values but also in trend. So, for example, if market share is one of the variables, then two stores can be put into the same cluster if their monthly values are close and their trends match.
I have done some research and saw the following approaches:
Model-based: fit the time series of each feature to a model and then cluster the model parameters. From what I understand, this generally works better on problems with lots of data; in my case each time series has only 12 data points, so will this work?
Shape-based: perform conventional clustering on the raw data, or extract features and then cluster. Here my concern is how the trend in the data will be captured.
Waveform representation: represent each time series as a waveform and use signal processing techniques (wavelet transforms, etc.) to cluster them. Honestly, I don't have any background here, but this approach sounds promising, so any input would be appreciated.
What would be the best way to go about this problem?
For information, I already removed mean, high pass filtered, linear enveloped (LPF and full wave rectified), and amplitude normalized the signals.
Hello,
I work on a controlled microgrid and I want to test the robustness of my controller against white noise that may be added to the output or the input. Is there any specific condition to follow in order to make a good choice of noise power, or is it something arbitrary?
- I tried taking it as about 3% of the nominal measurement value; is this a good choice?
- In addition, I tried the two types of noise, and I noticed that noise applied to the output affects the system much more than noise applied to the input (my system loses stability with output noise, but gives acceptable performance with input noise). Is this reasonable? If yes, why?
Thank you in advance
Dear all,
I am using laser Doppler vibrometry to measure the vibration of a structure. What I am doing now consists of exporting the velocity to signal processing and integrating it to obtain displacement, but an error is always added to the results: the signal fluctuates around different values, and even though the (max-max) value remains correct, the shape of the signal is not what I want.
Is there a way to acquire the displacement over time directly? I read the guide, but there is nothing about this.
Many articles appear to handle this, but they do not show how.
Please advise.
Dear all.
Since I am working on establishing a lab for vibration measurement and signal processing for rotating machinery, I would highly appreciate your suggestions, based on your experience, on what equipment should be included (bought).
Thanks all
I have torque and angular position data (p) to fit a second-order linear model T = I·s²·p + B·s·p + k·p (s = j·2·pi·f). First I converted my data (torque, angular position) from the time domain into the frequency domain. Next, I differentiated the angular positions in the frequency domain to obtain velocity and acceleration data. Finally, I used the least-squares command lsqminnorm (MATLAB) to estimate the coefficients. I expected a linear relation, but the results showed a very low R² (<30%), and my coefficients are not always positive!
Filtering of the data:
angular displacements: moving average
torques: low-pass Butterworth, cutoff frequency 4 Hz, sampling 130 Hz
velocities and accelerations: only frequencies between [-5 5] Hz are kept to reduce noise
Could anyone help me out with this?
What can I do to get a better estimate?
here is part of my codes
%%
angle_Data_p = movmean(angle_Data,5);
%% derivative
N=2^nextpow2(length(angle_Data_p ));
df = 1/(N*dt); %Fs/K
Nyq = 1/(2*dt); %Fs/2
A = fft(angle_Data_p );
A = fftshift(A);
f=-Nyq : df : Nyq-df;
A(f>5)=0+0i;
A(f<-5)=0+0i;
iomega_array = 1i*2*pi*(-Nyq : df : Nyq-df); %-FS/2:Fs/N:FS/2
iomega_exp = 1; % 1 for velocity and 2 for acceleration
for j = 1 : N
if iomega_array(j) ~= 0
A(j) = A(j) * (iomega_array(j) ^ iomega_exp); % *iw or *-w2
else
A(j) = complex(0.0,0.0);
end
end
A = ifftshift(A);
velocity_freq_p=A; %% including both part (real + imaginary ) in least square
Velocity_time=real( ifft(A));
%%
[b2,a2] = butter(4,fc/(Fs/2));
torque=filter(b2,a2,S(5).data.torque);
T = fft(torque);
T = fftshift(T);
f=-Nyq : df : Nyq-df;
T(f>7)=0+0i; % zero the torque spectrum outside +/-7 Hz
T(f<-7)=0+0i;
torque_freq=ifftshift(T);
% same procedure for fft of angular frequency data --> angle_freqData_p
phi_P=[accele_freq_p(1:end) velocity_freq_p(1:end) angle_freqData_p(1:end)];
TorqueP_freqData=(torque_freq(1:end));
Theta = lsqminnorm((phi_P),(TorqueP_freqData))
stimatedT2=phi_P*Theta ;
Rsq2_S = 1 - sum((TorqueP_freqData - stimatedT2).^2)/sum((TorqueP_freqData - mean(TorqueP_freqData)).^2)
I have performed all the attacks for my image cryptography algorithm. Finally, I need to run the NIST tests on my cryptography algorithm. If anyone has the code, kindly share it. Please do the needful.
Sorry if this seems like an obvious question, as I am new to EMG analysis, but I have been reading many papers on how various research groups clean, filter, segment, and classify muscle activation and fatigue using time-, frequency-, and time-frequency-domain analyses. However, I am struggling to find a common protocol for taking a raw EMG signal and processing it so that I can feed it into these different types of analyses. Is there a generally accepted repository, guideline, protocol, or flow chart that someone can point me to? Any help would be much appreciated.
Dear all,
What is the recent work in deep learning? How do I start with Python? Kindly suggest some work and materials to start with.
I am using sparse array concepts (e.g., minimum redundancy array) to estimate the DoAs of multiple targets. For uncorrelated sources, applying super-resolution (SR) algorithms (e.g., MUSIC and ESPRIT) on the constructed difference co-array could provide a good DoA estimation results. However, if the sources are fully correlated, the covariance matrix of the received signal becomes rank one, and SR algorithms would fail.
In uniform linear array (ULA) case, we could use spatial smoothing or forward/backward technique to decorrelate coherent sources. However, in the case of sparse arrays, these techniques will not work unfortunately, because of the missing elements.
I am curious about how to decorrelate coherent sources in sparse arrays. Any discussion, suggestions, or paper references would be very welcome!
Thanks a lot!
Yuliang
Dear community, after using the wavelet transform to extract the important features from my EEG signals, I am wondering how to calculate the Shannon entropy of each of my coefficient vectors (cD1, cD2, ..., cA6). Another question is how to use the Shannon entropy for dimension reduction.
Thank you.
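A common recipe (one of several, so treat it as an assumption rather than the definitive method): normalise the squared coefficients of each sub-band into a probability distribution and compute H = -sum(p·log2 p). This collapses each coefficient vector (cD1, ..., cA6) to a single number, so a signal described by thousands of coefficients becomes a 7-element feature vector, which is the dimension-reduction step:

```python
import math

def shannon_entropy(coeffs):
    """Shannon entropy of one wavelet sub-band (cD1, ..., cA6): squared
    coefficients are normalised into a probability distribution, then
    H = -sum(p * log2(p)). One scalar feature per sub-band."""
    energy = [c * c for c in coeffs]
    total = sum(energy)
    probs = [e / total for e in energy if e > 0]
    return -sum(p * math.log2(p) for p in probs)
```

A sub-band with energy spread evenly over its coefficients gives maximal entropy (log2 of the number of coefficients), while a sub-band dominated by one coefficient gives entropy near zero, so the feature captures how concentrated the activity is at each scale.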
I have used wavelet decomposition and reconstruction of a specific signal (e.g., rainfall). Among all available levels (suppose I have ten low-frequency reconstruction signals), which level provides the information consisting of deterministic components that reflect the variation characteristics of the signal? To elaborate: the higher approximation levels (such as a8, a9, and a10) indicate the residual of the decomposition process. These levels contain the average value of the data series, so the variation characteristics we are looking for are not necessarily present, as the curves become flat at these levels. On the other hand, levels a0, a1, and a2 include most of the high frequencies, which reduce the correlation and do not significantly improve the signal characterization. So, in between them, which level should be taken into account to study the particularities of the signal? Should we follow the level with the highest correlation coefficients?
Hello everyone,
I hope you are doing well.
I am using a Vantage Verasonics research ultrasound system to do ultrafast compound Doppler imaging. I acquire the beamformed IQ data with compounding angles (na = 3) and an ensemble size of (ne = 75), transmitted at an ultrafast frame rate (PRFmax = 9 kHz, PRFflow = 3 kHz). Can I use a global SVD clutter filter to process the beamformed IQ data instead of a conventional high-pass Butterworth filter?
Your kind responses will be highly appreciated.
Thank you
While training my GNN (graph neural network) model, the loss is badly fluctuating. Someone had suggested increasing the batch size or decreasing the learning rate, but the results are remaining the same. Can anyone suggest other possible reasons and remedies for solving this issue?
(In the graphs attached below, the x-axis represents the number of samples and the y-axis represents training loss.)
I have the scattering matrix images (8 images: S11_real, S11_imaginary, and similarly for S22, S12, S21) and I need to create the coherency matrix images (6 images: the diagonal and upper elements T11, T22, T33, T12, T13, T23). The sensor is monostatic, so S12 = S21. How can this be done using Python/MATLAB? Kindly share the required library/code or the equations required for it.
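For a reciprocal monostatic target, the usual route is via the Pauli target vector k = (1/sqrt(2))·[S11+S22, S11-S22, 2·S12]ᵀ, with T = k·kᴴ; the six independent images are Tij = k_i·conj(k_j) computed per pixel (and typically multi-looked/averaged over a window afterwards, which this sketch omits). A per-pixel sketch in plain Python; in practice you would vectorise this with NumPy over the full image arrays:

```python
import math

def coherency_elements(s11, s12, s22):
    """Per-pixel Pauli coherency matrix elements for a monostatic
    (reciprocal, S12 == S21) scattering matrix. Inputs are complex;
    returns the six independent elements T11, T22, T33, T12, T13, T23."""
    k1 = (s11 + s22) / math.sqrt(2)     # Pauli target vector components
    k2 = (s11 - s22) / math.sqrt(2)
    k3 = math.sqrt(2) * s12
    return {
        "T11": abs(k1) ** 2,
        "T22": abs(k2) ** 2,
        "T33": abs(k3) ** 2,
        "T12": k1 * k2.conjugate(),
        "T13": k1 * k3.conjugate(),
        "T23": k2 * k3.conjugate(),
    }
```

A useful sanity check is that the trace T11 + T22 + T33 equals the span |S11|² + |S22|² + 2|S12|². For target decompositions afterwards, dedicated PolSAR toolchains (e.g. PolSARpro) implement the same construction plus the multi-looking.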
The Nyquist-Shannon theorem provides an upper bound for the sampling period when designing a Kalman filter. Leaving apart the computational cost, are there any other reasons, e.g., noise-related issues, to set a lower bound for the sampling period? And, if so, is there an optimal value between these bounds?
In the characterization of noise from a fabricated MOSFET, obtaining the PSD is critical. How can this be done under DC bias conditions?
Hi everyone! I am looking for a dataset, and I will be very thankful if anyone helps me with a link to access databases. I want to research knock detection in spark-ignition engines by processing vibration signals. First, I want to validate a previous study in this field, so I am looking for a dataset and the related prior research based on these databases.
As I know, CNNs usually require images, but my data frame has size (335,48), which is not an image but numerical values with a categorical output. How can I use a CNN or deep learning in this situation? Thank you.
Dear community, my model is based on feature extraction from non-stationary signals using the discrete wavelet transform, then statistical features, then machine learning classifiers to train my model. I achieved a maximum accuracy of 77% for 5 classes. How can I increase it? The size of my data frame is X = (335,48), y = (335,1).
Thank you
Hi,
I want to classify time series of varying length to identify drivers of a bike by the torque. I was planning on dividing the signal into lengths of, let's say, 5 rotations, so the length of each time series would vary with the rotation speed. Do I need to extract features like the mean value and FFT, or is it enough to simply feed the filtered signal to the classifier?
Thanks in advance
Hi RG,
There are a lot of papers using the HDsEMG database CapgMyo to test gesture recognition algorithms (http://zju-capg.org/myo/data/).
However, it seems that there is a missing file on the original server (http://zju-capg.org/myo/data/dbc-preprocessed-010.zip).
I wonder if anyone knows whether there is an alternative source for the database?
All the best.
I am trying to select a mother wavelet function for signal analysis. First, I am trying to select the decomposition level for each wavelet. The problem I am facing is that the total entropy (approximation + detail) of subsequent decompositions is increasing, whereas the detail entropy is decreasing to a reasonable level (say at level 2, 3, or 4). My interest is in the detail coefficients. Should I continue to the level where the detail entropy is minimal, even though the total entropy is increasing?
Working on Chandrayaan-2 DFSAR data, there are three data products available:
1) Slant range image data product: the slant range complex image file. Each pixel is represented by two 4-byte floating point values (one 4-byte floating point real and one 4-byte floating point imaginary value). Each pixel in the slant range image is Seleno-tagged with a lat./lon. value.
2) Ground range image data product: the ground range unsigned short int image file. Each pixel is represented by a 2-byte unsigned short int. Each pixel in the ground range image is Seleno-tagged with a lat./lon. value.
3) Seleno-referenced image data product: the map-projected image file. Each pixel is represented by a 2-byte unsigned short int (amplitude).
Which should I be using if I want to generate the coherency matrix and perform target decomposition?
Thanks in advance!
A signal is split into two parts; one of them goes through a filter (say, with transfer function H(f)) and the other part stays unchanged. I want to know how to calculate their cross-correlation function. My guess is that, given the spectral density S(f), it is the ordinary Wiener-Khinchin theorem with the transfer function included:
R(τ) = Integral{ S(f) H(f) exp(i·2·pi·f·τ) df }
Hi all, I hope everyone is doing good.
I am working on machine learning with EEG data, for which I have to extract statistical features. Using the MNE library I have extracted the data in matrix form, but my work requires some statistical features to be extracted.
All features which are to be extracted are given in table 2 of this paper: "Context-Aware Human Activity Recognition (CAHAR) in-the-Wild Using Smartphone Accelerometer,". The data set I am using is dataset 2b from "http://www.bbci.de/competition/iv/".
I can't find a signal processing library. Can you suggest me any signal processing library for processing EEG signal data in Python?
Thanks to all who help.
I am working on ECG arrhythmia classification using SVM. I implemented some kernel tricks
and used different kernels on the MIT-BIH dataset (the features form a 44187-row, 18-column matrix).
It is difficult to plot the support vectors for such a large dataset. How can I plot them, and can you suggest any other plots or methods to compare the different kernels? I already have a comparison chart of accuracy, efficiency, etc.
In several discussions, I have often come across a question on the mathematical meaning of the various signal processing techniques, such as the Fourier transform, short-time Fourier transform, Stockwell transform, wavelet transform, etc., as to what the real reason is for choosing one technique over another for certain applications.
Apparently, the ability of these techniques to overcome the shortcomings of each other in terms of time-frequency resolution, noise immunity, etc. is not the perfect answer.
I would like to know the opinion of experts in this field.
Dear All,
If you are interested in the area of Adversarial Multimedia Forensics, my PhD thesis is now available on the EURASIP database at https://theses.eurasip.org/theses/859/machine-learning-techniques-for-image-forensics/
Thanks
My SR785 Dynamic Signal Analyzer keeps rebooting after showing a "WaitFlag Error" in between measurements. I have been unable to find any mention of this error in the service/user manuals available on the SRS website (thinksrs.com). Any leads as to how to approach/troubleshoot this problem would be appreciated. Thanks!