Signal Processing - Science topic

Explore the latest questions and answers in Signal Processing, and find Signal Processing experts.
Questions related to Signal Processing
  • asked a question related to Signal Processing
Question
3 answers
How can I filter an input signal through a lognormal shadowing model or a kappa-mu shadowing model using code that generates the PDF in MATLAB?
Relevant answer
Answer
you can use the following code to pass a signal through the log-normal shadowing model:
M = 4; % Order of modulation (assuming 4-QAM)
s = randi([0 M-1], 1, 20); % Randomly generated symbols to pass through the channel
s_mod = qammod(s, M); % Modulate the signal
mu = 0; % Mean of the underlying normal for the log-normal model
sigma = 1; % Standard deviation of the underlying normal
h_t = lognrnd(mu, sigma); % Generate the log-normal channel gain
% Before passing through the channel, add a cyclic prefix to the modulated signal
N = length(s_mod); % Number of modulated symbols
Ncp = 5; % Cyclic prefix length
s_cp = [s_mod(end-Ncp+1:end) s_mod];
% Now pass through the log-normal channel
r1 = conv(h_t, s_cp);
% The received signal will be noisy, so add noise
r = awgn(r1, 10, 'measured'); % Noise at SNR = 10 dB
% Now remove the cyclic prefix
y = r(Ncp+1:N+Ncp);
  • asked a question related to Signal Processing
Question
3 answers
I have been working on classifying ECG signals, and for feature extraction I am going to use AR modelling with Burg's method. After reading a few papers I learned that the features are extracted after splitting the ECG signals into blocks of different durations. My question is: why is it necessary to do so, and how could we fix a certain duration? For instance, I have a signal with 50000 samples at fs = 256 Hz, so what could be the duration of each block?
It would also be really helpful if someone could help me understand Burg's method. There are videos for learning the Yule-Walker equations, but I didn't find any for Burg's method.
Thank you in advance
Relevant answer
Answer
Computer assisted analyses of cardiovascular signals facilitate timely diagnosis and treatment in critically ill patients. Different methods have been employed for the analysis and diagnosis, yet there is scope for enhancement of classification/diagnosis accuracy.
Regards,
Shafagat
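On why blocks are used: AR modelling assumes the signal is (locally) stationary, so the ECG is split into short segments within which that assumption roughly holds; the block length is a trade-off between stationarity and having enough samples to estimate the AR coefficients. As for Burg's method itself, it estimates the AR coefficients by minimizing the summed forward and backward prediction-error power through a lattice (reflection-coefficient) recursion, rather than solving the Yule-Walker equations from autocorrelation estimates. A minimal pure-Python sketch (illustrative only, not production code):

```python
import math

def burg_ar(x, order):
    """Burg's method: AR coefficients a (with a[0] = 1) via lattice recursion."""
    n = len(x)
    f, b = list(x), list(x)              # forward / backward prediction errors
    a = [1.0]
    for m in range(order):
        # Reflection coefficient minimizing forward + backward error power
        num = -2.0 * sum(f[i] * b[i - 1] for i in range(m + 1, n))
        den = sum(f[i] ** 2 + b[i - 1] ** 2 for i in range(m + 1, n))
        k = num / den                    # guaranteed |k| <= 1 for Burg
        # Levinson-style update of the AR polynomial
        a_ext = a + [0.0]
        a = [a_ext[i] + k * a_ext[m + 1 - i] for i in range(m + 2)]
        # Update prediction errors in place (backwards, so b[i-1] is still old)
        for i in range(n - 1, m, -1):
            fi = f[i]
            f[i] = fi + k * b[i - 1]
            b[i] = b[i - 1] + k * fi
    return a

# Sanity check on a noiseless sinusoid at normalized frequency 0.1:
# an AR(2) fit should give roughly a = [1, -2*cos(2*pi*0.1), 1]
x = [math.cos(2 * math.pi * 0.1 * n) for n in range(256)]
a = burg_ar(x, 2)
f_est = math.acos(-a[1] / 2) / (2 * math.pi)   # recovered frequency, ~0.1
```

For classification work you would normally use a vetted implementation (e.g. MATLAB's arburg or spectrum libraries), but the recursion above is the whole algorithm.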
  • asked a question related to Signal Processing
Question
3 answers
I have two datasets (.edf) of EEG recordings, one for healthy people and one for depressive people.
Each recording has 20 channels. So far I have opened the data in MATLAB with edfread() as a timetable.
How can I add white noise to that timetable?
Relevant answer
Answer
The Artificial Intelligence answer would work, but it might not generate what you need. Have you considered including noises that are true to the body, like bodily functions, sounds, and surrounding RF? Eleonora Adelina Dănilă
  • asked a question related to Signal Processing
Question
5 answers
My area of research is genomic signal processing. I need to give names two experts from outside India in this area to review my work for a journal.
Can anyone kindly suggest experts in the areas of genomic signal processing, signal processing , Bioinformatics.
Relevant answer
Answer
I have been working in genomic signal processing (big-data analysis) for the last 11 years and completed my Ph.D. in this domain.
My Ph.D. thesis title is "Characterization of Periodicities in DNA Sequences Using Signal Processing", and I have made the following contributions in this domain:
(i) Journal publications: 09
(ii) Conference publications: 04
(iii) Book chapters: 01
(iv) Ph.D. guidance: 01 (Detection and Localization of the Hidden Patterns in the DNA Sequences Using Signal Processing, 2022)
I also have good knowledge of machine and deep learning algorithms.
Any researcher associated with this field can contact me for help.
  • asked a question related to Signal Processing
Question
11 answers
Hi everyone, I am an engineering student and I have started to learn signal processing / signals & systems topics. One problem with self-learning is that you can't find a teacher to ask about the point where you are stuck.
I don't understand how the CT impulse function is transformed into the discrete-time impulse with an amplitude of 1. How does this process work?
I have problems with converting CT to DT, sampling, and periodization. I have been watching several videos about it, but the actual mathematical operation of this "converter" is not clear to me.
What I mean is: what is the operator that converts
""impulse(t) >> to impulse[n] with amplitude of 1""
or ""x(t) . p(t) impulse train >> to x[n] as a sequence""?
x(t) . p(t) can be represented as the summation of the series x(nT) . impulse(t - nT).
But this is still not equal to the sequence x[n], because it contains scaled impulses with infinite amplitude, right?
To restate my question: how do we transform x(t) to x[n] mathematically? How does this sampling occur?
Relevant answer
Answer
x(t)----->x(nTs) (Process of sampling which gives a discrete signal with infinite samples separated by sampling time, Ts)
To take the samples of x(t), you can simply multiply the function with Dirac(t-nTs) {This is one of the ways to carry out this operation---process of discretization/sampling)
PS: x(nTs) is actually a discrete signal whereas x[n] is a digital signal. x(nTs) can be treated as x[n] if its amplitude is quantized.
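In code, the sampling step x(t) → x[n] = x(nTs) described above is nothing more than evaluating the continuous-time function on the grid t = nTs; a minimal Python sketch (the signal and rate are arbitrary choices for illustration):

```python
import math

def x_ct(t):
    """A continuous-time signal: a 5 Hz cosine."""
    return math.cos(2 * math.pi * 5 * t)

fs = 64.0        # sampling rate in Hz (arbitrary, but > 2 * 5 Hz)
Ts = 1.0 / fs    # sampling period

# The sampling "operator" is simply evaluation at t = n*Ts.
x_n = [x_ct(n * Ts) for n in range(16)]

print(x_n[0])  # x[0] = cos(0) = 1.0
```

The impulse-train picture is the frequency-domain model of this operation; numerically, no infinite-amplitude impulses ever appear, only the sample values x(nTs).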
  • asked a question related to Signal Processing
Question
3 answers
Could you please suggest any articles/book chapters where I could start with to learn the concept of Total Variation in classical signal processing? I would like to relate to Graph Signal Processing in understanding Fourier Basis.
Relevant answer
Answer
Hi, please go through the attached reputable research papers to clarify the concept. It is generally used in image processing and has more limited application in signal processing. The above-mentioned definition from Fernando comes from the paper by Karahanoğlu et al. and can be a good starting point to deepen your knowledge of the topic.
Fikret Işık Karahanoğlu, İlker Bayram, and Dimitri Van De Ville, "A Signal Processing Approach to Generalized 1-D Total Variation," IEEE Transactions on Signal Processing.
Condat, L. (2013). A direct algorithm for 1-D total variation denoising. IEEE Signal Processing Letters, 20(11), 1054-1057.
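For intuition before diving into those papers: the discrete 1-D total variation is simply the sum of absolute first differences, TV(x) = Σ|x[n+1] − x[n]|, so it measures how much a signal "wiggles". A minimal Python sketch:

```python
def total_variation(x):
    """Discrete 1-D total variation: sum of absolute first differences."""
    return sum(abs(x[n + 1] - x[n]) for n in range(len(x) - 1))

smooth = [0, 1, 2, 3, 4]   # monotone ramp over the same range: TV = 4
noisy  = [0, 2, 0, 2, 0]   # oscillating signal: TV = 8

print(total_variation(smooth), total_variation(noisy))  # 4 8
```

Oscillatory signals have larger TV than monotone ones spanning the same range, which is why TV works as a roughness penalty in denoising; in graph signal processing the analogous quantity sums differences across graph edges instead of consecutive samples.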
  • asked a question related to Signal Processing
Question
12 answers
Hi, my thesis is about detection of myocardial infarction from ECG signals, and I want to know whether there is any database for it.
Relevant answer
Answer
Hi,
I hope you are fine. Is there any research or literature review available on the comparative analysis of different ensemble methods used for misdiagnosis of cancer patients against the XGBoost method?
  • asked a question related to Signal Processing
Question
4 answers
I'm pre-processing UAV magnetic data where the flight paths are parallel to each other in the N-S direction (heading N and S alternately). The magnetic values seem to be vertically shifted and flipped when flying in different headings. The only way I could solve this was by compensating the values by exporting the difference in median (constant median) in the Magdrone Data Tool, but these compensated values would be insufficient for the magnetic susceptibility calculation later. I've tried doing heading correction in Oasis Montaj but to no avail. Is there a way I could solve this heading error?
The first image shows a profile of 6 tracks. The arrow corresponds to the UAV turning. This data has been low-pass filtered. Profile 2 shows the data after removal of the turning errors.
I've also attached a scatterplot of the raw data and a grid (minimum curvature) of Profile 2.
Relevant answer
Answer
Dear Mr Nikolay Pavlov
Thanks for your input and explanation. Really appreciate it.
  • asked a question related to Signal Processing
Question
5 answers
Dear all,
I'd like to detect diabetes through PPG signal processing, Which method do you recommend me to use? If you happen to have access to the scripts, I'd appreciate it.
Thanks!
Fernando
Relevant answer
Answer
The article below explains the Classification of Diabetes Using Photoplethysmogram (PPG) Waveform Analysis: Logistic Regression Modeling. It may help you.
  • asked a question related to Signal Processing
Question
3 answers
I need to remotely measure the thickness of an object mounted on a black plate with accuracy <5 mm. The accuracy is challenging for depth cameras, and a lidar cannot get a reflection from the black plate (as it absorbs the signal). We need to measure the distance to the plate and to the object from the camera to infer the object's thickness.
Suggestions of any techniques that could fit are appreciated.
Relevant answer
Answer
I'd try out some Time-of-Flight sensor (e.g. from the ST VL53-series). "Black" usually does not mean that a surface does not reflect at all, it's just a minute reflection. Might still be enough for a ToF sensor.
  • asked a question related to Signal Processing
Question
5 answers
Hello,
I graduated with a Master's degree in machine learning and signal processing.
I'm in the first year of my Ph.D. in computer science. I have some difficulties finding topics on smart cities.
Do you have some suggestions or ideas?
Relevant answer
Answer
Take a look at "digital twin" in the context of the target city of your study (let's say city X).
Benefits of a digital twin of city X for its residents? Disaster management, energy, etc.
What existing infrastructure in city X can support development of a digital twin of the city? What is missing?
  • asked a question related to Signal Processing
Question
4 answers
Human surface EEG (electroencephalography) is made up of background activity + oscillations. Many of these oscillations come in short bursts (1-5 s) or even sustained trains (>10 s), such as occipital alpha activity (8-10 Hz). What is the best method for automatically identifying these bursts, without relying on simple fixed amplitude thresholds or complex machine learning algorithms?
Relevant answer
Answer
The bycycle package might be interesting for you. It is made to detect sustained oscillations and bursty ones. I've not tested it on human data personally yet (just rats), but it was made for human EEG originally.
I think it is an interesting addition to spectrum based approaches (FFT, wavelet etc) that tend to assume that your oscillation is stable over time.
  • asked a question related to Signal Processing
Question
3 answers
I am working on a research point related to stability in power systems based on pole estimation. I am trying to apply the ESPRIT algorithm to estimate system poles. I wrote an m-file and tried to apply this technique to a simple transfer function to estimate its roots. I calculated the covariance matrix of the data signal, took the SVD to get the two overlapped vectors S1 and S2, calculated the rotation matrix psi (Ψ), and took eig(psi). How can I calculate the frequency and damping ratio after calculating the eigenvalues of psi? The equations I use for frequency and damping factor give wrong values.
Relevant answer
Answer
Ahmed Abdulsalam Thank you , Ahmed
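For later readers of this thread: in ESPRIT each eigenvalue z of the rotation matrix Ψ is a discrete-time pole z = exp((σ + j2πf)·Ts), so the mode frequency and damping factor are recovered as f = angle(z)/(2π·Ts) and σ = ln|z|/Ts, and the damping ratio is ζ = −σ/√(σ² + (2πf)²). A minimal Python sketch using an assumed sampling period and a synthetic pole (not the poster's actual data):

```python
import cmath, math

Ts = 0.01                        # sampling period in seconds (assumed)
f_true, sigma_true = 1.5, -0.3   # a 1.5 Hz mode with damping factor -0.3 1/s

# A discrete-time pole, as one eigenvalue of eig(psi) would appear
z = cmath.exp((sigma_true + 1j * 2 * math.pi * f_true) * Ts)

# Recover frequency (Hz) and damping factor (1/s) from the eigenvalue
f_est = cmath.phase(z) / (2 * math.pi * Ts)
sigma_est = math.log(abs(z)) / Ts

# Damping ratio of the continuous-time pole s = sigma + j*2*pi*f
zeta = -sigma_est / math.hypot(sigma_est, 2 * math.pi * f_est)
```

A common source of wrong values is forgetting to divide the eigenvalue's phase by Ts (i.e. mixing up normalized and physical frequency), or using log of the eigenvalue's real part instead of log of its magnitude.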
  • asked a question related to Signal Processing
Question
3 answers
In processors, complex and challenging operations need to be handled to meet demand, which leads to an increase in processor cores. This increases the load on the processor, and it can be limited by placing co-processors for specific types of functions, like signal processing. But the speed of the ALU still relies on the multiplier, since multipliers are the major components performing operations in the CPU.
Relevant answer
Answer
Please go through the recent published articles mentioned below :
Biji, Rhea, and Vijay Savani. "Performance analysis of Vedic mathematics algorithms on re-configurable hardware platform." Sādhanā 46, no. 2 (2021): 1-5.
  • asked a question related to Signal Processing
Question
4 answers
The background is that we are trying to calculate an index relying on a high frequency band over 100 Hz with only a 128 Hz signal. The idea is: say we have a 128 Hz signal; using the FFT to convert it into a frequency spectrum will give information from 0-64 Hz according to Nyquist. Then, if we subtract the IFFT of the 0-64 Hz spectrum from the original signal, will it produce some information of the 64-128 Hz band?
Relevant answer
Answer
If your signal contains information between 0..64 Hz and 64..128Hz, and you sample it as 128Hz, the sampling process will fold-over (alias) everything in the 64..128Hz band backwards into the 0..64Hz band. So for example, a 74Hz tone will be folded over to 54Hz. A 100Hz tone will be folded over to 28Hz. (Signals above 128Hz will also get aliased into the band) So to answer your question - Yes the output of your FFT will contain information from the 64..128Hz band. But it is indistinguishable from the information in the 0..64 Hz band. If you know that there is no signal between 0..64Hz then nothing has been lost - you can fully reconstruct the signal. But if you DID have something in there, then you can't separate the two signals and they are forever combined.
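The fold-over described above is easy to verify numerically; a small Python sketch (naive pure-Python DFT, a 100 Hz tone sampled at 128 Hz) shows the spectral peak landing at 128 − 100 = 28 Hz:

```python
import cmath, math

fs, N = 128, 128     # sample rate and number of samples (bin spacing = 1 Hz)
f_tone = 100         # tone above Nyquist (fs/2 = 64 Hz)

# Sample the 100 Hz tone at 128 Hz
x = [math.cos(2 * math.pi * f_tone * n / fs) for n in range(N)]

# Naive DFT magnitude at bin k (fine for N = 128)
def dft_mag(x, k):
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x))
                   for n in range(len(x))))

# Find the strongest bin in the representable 0..64 Hz range
mags = [dft_mag(x, k) for k in range(N // 2 + 1)]
peak_hz = max(range(len(mags)), key=mags.__getitem__)
print(peak_hz)  # 28: the 100 Hz tone appears aliased at 28 Hz
```

As the answer says, once the tone is at 28 Hz there is no way to tell from the samples alone whether it was originally 28 Hz or 100 Hz.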
  • asked a question related to Signal Processing
Question
4 answers
I want to detect anomalies in streaming data. Using FFT or DWT history, is it possible to detect anomalies on the fly (online)? It would help a lot if anybody could suggest some related resources.
Thanks.
Relevant answer
Answer
Why not consider using the S-transform, as it combines the properties of the FFT and the wavelet transform?
  • asked a question related to Signal Processing
Question
6 answers
ECG signal processing
Relevant answer
Answer
Yes, preprocessing steps change the raw ECG signal to some extent, but this mainly depends on the chosen signal processing methods. Some methods eliminate entire noisy components of the signal, which causes large data loss, while other methods perform denoising with the objective of minimal data loss.
  • asked a question related to Signal Processing
Question
3 answers
Dear all colleagues,
We are now working with the SDDB ECG records https://physionet.org/content/sddb/1.0.0/. In our preliminary literature study and dataset assessment, we found that each record exhibits baseline wander. Unfortunately, we cannot determine whether this is true baseline wander or whether it arises naturally from the heart, since we have not found any documented pre-processing for the SDDB records, except for signal segmentation in sudden-death classification research.
If this phenomenon is baseline wander, what is the best-practice baseline wander removal for these records? I have put a picture of an SDDB signal segment from PhysioNet below.
Relevant answer
Answer
The Holter electronics would have to be designed to set the high bandpass filter to >10-30 Hz. Baseline wander is nothing more than a low frequency signal and the monitor has no clue that it is not a physiologic signal. ECG recorders allow baseline wander to be displayed because some of the components of the ECG are low frequency, like the T wave. Therefore, most ECG recording devices use a fairly low high-band-pass filter, around 0.15 Hz.
It depends on what you are looking for as to whether you can get away with raising the high-band-pass filter. Intracardiac signals recorded during an electrophysiology study typically are filtered 30 Hz to 500 Hz, in which case there will be no baseline wander. The recorded signals, however, will have only high-frequency characteristics, so they will look very sharp and spikey. If that doesn't matter to you, that's your answer
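A common software-side alternative to raising the hardware high-pass corner is to estimate the wander with a long moving average and subtract it. A minimal Python sketch on synthetic data (window length, rates, and signal components are arbitrary illustrations, not tuned for SDDB):

```python
import math

fs = 100                               # Hz (illustrative)
n = list(range(5 * fs))                # 5 seconds of samples

# Synthetic "ECG-like" 8 Hz component plus slow 0.2 Hz baseline wander
signal = [math.sin(2 * math.pi * 8 * t / fs) for t in n]
wander = [2.0 * math.sin(2 * math.pi * 0.2 * t / fs) for t in n]
x = [s + w for s, w in zip(signal, wander)]

def moving_average(x, win):
    """Centered moving average; edges use a shrunken window."""
    half = win // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

# A ~1 s window passes the slow wander but averages out the 8 Hz content
baseline = moving_average(x, win=fs)
detrended = [xi - bi for xi, bi in zip(x, baseline)]
```

In practice a proper zero-phase high-pass filter (e.g. around 0.5 Hz for rhythm analysis) is preferable, exactly for the reason given above: cut too high and you distort low-frequency components like the T wave.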
  • asked a question related to Signal Processing
Question
5 answers
Researchers are now employing WiFi sensing and WiFi CSI to design and develop various activity detection, heartbeat monitoring, and other devices. They train the data in one environment using machine learning or deep learning technology.
My issue is that, because WiFi CSI is highly sensitive to the environment, how will it operate if the environment changes; for example, if I train it at home and then use it in my workplace room? Is it necessary to train it in each environment before use?
Relevant answer
Answer
Good answer by Aparna Sathya Murthy
  • asked a question related to Signal Processing
Question
5 answers
If you do research in the area of Signal Processing, mainly Graph Signal Processing (GSP), then I recommend that you try your luck in the following 5-min video challenge:
Relevant answer
Answer
Numerous data from the big data are often represented by graphs. Once vector data is associated with their nodes, one obtains what is called a graph signal. The processing of such graph signals faces several challenges because of the nature of the underlying information; combining graphs theoretical aspects with signal processing methods.
Regards,
Shafagat
  • asked a question related to Signal Processing
Question
6 answers
Compressed sensing (also known as compressive sampling or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. It is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist-Shannon sampling theorem. There are two conditions under which recovery is possible. The first is sparsity, which requires the signal to be sparse in some domain. The second is incoherence, which is applied through the restricted isometry property and is sufficient for sparse signals.
Relevant answer
Answer
Image processing can benefit from the usage of discrete wavelet transformations. As image resolution grows, so does the amount of storage space required. DWT is used to reduce the size of an image without sacrificing quality, resulting in higher resolution.
I also recommend that you read the following articles:
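As a toy illustration of the sparsity-plus-incoherence idea above: the sketch below recovers a 1-sparse signal from fewer measurements than unknowns using a single matching-pursuit step (a deliberately tiny, hand-checkable setup; real compressed sensing uses larger random matrices and iterative solvers such as OMP or basis pursuit):

```python
from itertools import product

# All length-4 +/-1 columns whose first entry is +1: 8 distinct columns,
# none a scalar multiple of another, so correlations identify the support.
cols = [(1,) + p for p in product((1, -1), repeat=3)]
m, n = 4, len(cols)            # 4 measurements, 8 unknowns (underdetermined)

x_true = [0.0] * n
x_true[5] = 3.0                # 1-sparse ground truth: one spike at index 5

# Compressed measurements y = A x, where column j of A is cols[j]
y = [sum(cols[j][i] * x_true[j] for j in range(n)) for i in range(m)]

# Matching-pursuit step: pick the column most correlated with y
corr = [abs(sum(c * yi for c, yi in zip(cols[j], y))) for j in range(n)]
j_hat = max(range(n), key=corr.__getitem__)

# Least-squares amplitude for the chosen column
amp = sum(c * yi for c, yi in zip(cols[j_hat], y)) / m
print(j_hat, amp)  # 5 3.0: support and amplitude recovered exactly
```

The point is that 4 linear measurements cannot determine 8 arbitrary unknowns, but they can determine a 1-sparse vector, because sparsity removes the ambiguity.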
  • asked a question related to Signal Processing
Question
7 answers
I want to measure the distance between two Bluetooth devices (a master and a slave) using the corresponding RSSI value. Is there any algorithm or popular approach that maps RSSI values directly to distance in, let's say, centimeters?
Relevant answer
Answer
There are several works in the literature dealing with this problem. The challenge is that the RSSI measurements are noisy and there exists some variability between consecutive measurements even if the master and slave are completely static.
One approach is to use the Log-Distance Path Loss (LDPL) model to transform the RSSI into a distance, more information at https://en.wikipedia.org/wiki/Log-distance_path_loss_model , but it requires a proper calibration for your environment and devices. Even with a proper calibration, it might fail indoors because of several signal propagation issues.
A colleague is currently working in this problem, you may find more information at:
Best regards,
Joaquín.
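A minimal sketch of the LDPL conversion mentioned above; the reference power at 1 m and the path-loss exponent are illustrative placeholders that must be calibrated for your own environment and devices:

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exp=2.0):
    """Log-Distance Path Loss model: estimated distance in meters from RSSI.

    rssi_at_1m and path_loss_exp are calibration parameters; the defaults
    here are illustrative only (free-space exponent is ~2, indoors often 2-4).
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

print(rssi_to_distance(-45.0))  # 1.0 m at the reference power
print(rssi_to_distance(-65.0))  # 10.0 m with exponent n = 2
```

Because single RSSI readings are noisy, it is usual to average or median-filter several readings before applying the model, and even then indoor multipath can make the estimate unreliable, as the answer above warns.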
  • asked a question related to Signal Processing
Question
7 answers
Could anyone please suggest some resources where I could find a comparison curve of signal strength after the multipath propagation effect with respect to obstacle positions between transmitter and receiver?
After conducting some experiments, I found that the effect was greater near the Rx or near the Tx, but smaller when the obstacle is at the same distance from the Rx and Tx. Why does such a phenomenon happen?
Relevant answer
Answer
I suppose this depends on what kind of obstacles you are considering and how they are affecting the signals. If you consider an object that is scattering the signal, then the pathloss will be proportional to (d_1*d_2)^2 where d_1 is the distance from the transmitter to the obstacle and d_2 is the distance from the obstacle to receiver. For a given total propagation distance d_1+d_2, it follows that the pathloss is at its smallest when the scattering object is close to the transmitter or the receiver.
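The (d_1·d_2)² dependence is easy to check numerically: for a fixed total distance d_1 + d_2, the product d_1·d_2 (and hence the scattered-path loss) is largest at the midpoint and smallest near either end, matching the observation in the question:

```python
d_total = 10.0  # fixed Tx-to-Rx propagation distance via the scatterer

# Scattered-path loss factor (d1*d2)^2 for scatterer positions along the path
positions = [i * 0.5 for i in range(1, 20)]          # d1 from 0.5 to 9.5
loss = [(d1 * (d_total - d1)) ** 2 for d1 in positions]

worst = positions[max(range(len(loss)), key=loss.__getitem__)]
print(worst)  # 5.0: loss peaks when the scatterer is midway between Tx and Rx
```

So the strongest scattered contribution (least loss) occurs with the obstacle near Tx or Rx, which is why the measured effect was larger there.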
  • asked a question related to Signal Processing
Question
3 answers
My research interests include but are not limited to fault diagnosis and signal processing. Recently, I am focusing on the data of PHM challenge in 2009. Do you know where to find labeled data for this dataset? Data without labels seems to be easy to find.
Relevant answer
Answer
Thank you very much.
@Anil Kamboj
@Qamar Ul Islam
  • asked a question related to Signal Processing
Question
3 answers
A cell phone is used to record acceleration data with Physics Toolbox Pro. It looks like the acceleration signal is not recorded with a constant sampling rate. Are resampling and filtering necessary before further processing of the acceleration signal? A double integration of the acceleration signals to obtain displacement signals is finally needed. A Python script would be very helpful.
Relevant answer
Answer
Hello, Nils Wagner .
What are you trying to achieve? Wouldn't the acceleration signal itself be sufficient for your application? Double integration of the acceleration signal to displacement is a highly unreliable process, as pointed out by Rolf Schirmacher .
If you need to estimate a total displacement (i.e. a static part of the signal), be wary that you don't know the initial conditions (especially the initial velocity), and there is practically no method to evaluate them from acceleration data alone. There is also noise, but it can be at least partially resolved by some smoothing/filtering.
Also note that Phyhox claims that "errors in the acceleration sum up in the velocity and hence have an even worse impact on the location. Usually, the noise of the sensors brings the results to absolutely unreasonable values within a short time."
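As a starting point only, here is a pure-Python sketch of the requested pipeline: linear-interpolation resampling of the non-uniform samples onto a uniform grid, then double trapezoidal integration. Initial velocity and position are assumed zero, which, as noted above, is exactly the unreliable assumption, and real data would also need filtering to keep drift in check:

```python
def resample(t, a, fs):
    """Linearly interpolate samples (t, a) onto a uniform grid at rate fs."""
    n = int(round((t[-1] - t[0]) * fs)) + 1
    t_uni = [t[0] + i / fs for i in range(n)]
    out, k = [], 0
    for ti in t_uni:
        while k + 1 < len(t) and t[k + 1] < ti:
            k += 1
        if k + 1 < len(t) and t[k + 1] > t[k]:
            w = min(max((ti - t[k]) / (t[k + 1] - t[k]), 0.0), 1.0)
            out.append((1 - w) * a[k] + w * a[k + 1])
        else:
            out.append(a[-1])
    return t_uni, out

def cumtrapz(y, dt):
    """Cumulative trapezoidal integral, starting at 0."""
    out = [0.0]
    for i in range(1, len(y)):
        out.append(out[-1] + 0.5 * (y[i] + y[i - 1]) * dt)
    return out

# Toy data: non-uniformly sampled constant acceleration a = 2 m/s^2
t = [0.0, 0.11, 0.19, 0.33, 0.41, 0.52, 0.68, 0.75, 0.9, 1.0]
a = [2.0] * len(t)

t_u, a_u = resample(t, a, fs=100.0)
v = cumtrapz(a_u, 0.01)   # velocity, with v(0) = 0 assumed
x = cumtrapz(v, 0.01)     # displacement, with x(0) = 0 assumed
print(round(x[-1], 3))    # 1.0, i.e. x(1) = 0.5 * a * t^2
```

On real sensor data, any constant bias in the accelerometer integrates to a linear velocity error and a quadratic displacement error, which is the drift problem the Phyphox note describes.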
  • asked a question related to Signal Processing
Question
6 answers
We know that the brain sends meaningful messages to control the body's cells.
As we know, the brain is affected by factors such as diseases, and the brain also controls other organs of the body. Nevertheless, is damage to the cells visible on the EEG?
Does cancer have an effect on the EEG?
Relevant answer
Answer
Stomach cancer is cancer that may affect any part of the stomach and extend to the esophagus or small intestine, and it causes the death of nearly one million people annually. It is more prevalent in Korea, Japan, England and South America. It is more prevalent among men than women. It is associated with eating too much salt, smoking, and also low intake of fruits and vegetables. Therefore, it is believed that its spread in countries such as Korea and Japan is due to the consumption of salted fish mainly by Koreans and Japanese, as well as the use of canned food and food preservatives. Mucosal colonization of H. pylori is believed to be the main risk factor in about 80% of stomach cancers
Stomach cancer is diagnosed through an endoscopic examination that allows a biopsy to be extracted from the affected tissue, and then analyzed to confirm the presence of a tumor. Dr. Riccardo Rosati, a specialist in gastroenterology at San Raffaele Hospital in Milan, says, "Before undergoing treatment, the patient needs to do a series of other ultrasound and other examinations to check the areas, glands and organs covered by the disease, in order to determine the degree of its progression.
As a researcher, I believe that stomach cancer cells do not send messages to the brain due to the lack of associated neurons
  • asked a question related to Signal Processing
Question
5 answers
I am new to the field of signal processing, but I have read that the DWT can be used to find similarity between two time series. I am curious what kind of similarity measure we use once we have calculated the approximation and detail coefficients for both time series at an appropriate decomposition level.
So, for example, using the DWT on time series 1 I will have an array which contains: [12, 10, 4.5, 7, -2.8, -1.2], and similarly for the second time series I will have: [17, 9, 8, 23, -3, -6.8].
Now what similarity measure do I use to produce a similarity index indicating how similar these wavelet representations are?
I am coding in Python, if that helps.
Relevant answer
Answer
Just wild guessing...
Kullback-Leibler divergence? (after transforming the data a bit)
In R there is a function for it called KLD. I guess you can do it in Python as well.
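A simpler first step than KLD is a plain cosine similarity between the coefficient vectors (1 means identical direction, 0 means orthogonal, -1 means opposite); a Python sketch using the arrays from the question:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two coefficient vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

c1 = [12, 10, 4.5, 7, -2.8, -1.2]   # DWT coefficients of time series 1
c2 = [17, 9, 8, 23, -3, -6.8]       # DWT coefficients of time series 2

sim = cosine_similarity(c1, c2)
print(round(sim, 2))  # ~0.89: fairly similar coefficient patterns
```

Euclidean distance on the coefficients is another common choice; cosine similarity has the advantage of being scale-invariant, which matters if the two series have different amplitudes.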
  • asked a question related to Signal Processing
Question
3 answers
I need some guidance regarding the CRLB: how to compute it numerically, and how to estimate the Doppler frequency, for the synthetic signal given below.
X = A*sinc(B*(t-𝜏)).*exp(1j*2*pi*F*(t-𝜏)); where θ = [F, A, 𝜏]
"A" is complex and has amplitude and phase, "F" is the Doppler frequency, and "𝜏" is the azimuth shift.
Relevant answer
  • asked a question related to Signal Processing
Question
9 answers
Hello, everyone. I am asking for suggestions (scripts) about extracting phase values. I am not familiar with signal processing techniques. Currently, I can use the Hilbert transform to extract the envelope, but I do not know what to do for the next step, extracting the phase. Can anyone give me some suggestions?
thanks a million.
Relevant answer
Answer
Arkadiy Prodeus Thank you so much. I have solved this problem with Hilbert transform.
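For future readers of this thread: once you have the analytic signal from the Hilbert transform, the instantaneous phase is just the angle of each complex sample and the envelope is its magnitude. A pure-Python sketch using a naive DFT-based analytic signal (fine for short signals; in practice a library routine such as scipy.signal.hilbert does the same job):

```python
import cmath, math

def analytic_signal(x):
    """Analytic signal via the frequency domain: zero out negative frequencies."""
    N = len(x)
    X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
         for k in range(N)]
    # Keep DC and Nyquist, double positive frequencies, drop negative ones
    for k in range(1, N // 2):
        X[k] *= 2
    for k in range(N // 2 + 1, N):
        X[k] = 0
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

N = 16
x = [math.cos(2 * math.pi * n / N) for n in range(N)]  # one cycle of a cosine
xa = analytic_signal(x)

phase = [cmath.phase(z) for z in xa]    # instantaneous phase, radians
envelope = [abs(z) for z in xa]         # instantaneous amplitude
```

For this cosine the analytic signal is exactly exp(j·2πn/N), so the phase advances linearly by 2π/N per sample while the envelope stays at 1.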
  • asked a question related to Signal Processing
Question
2 answers
I am working with an image sequence of an evolving wave pattern. I was interested in analyzing such sequence with a growth rate vs. wave number diagram.
I find that one way to do this is via a linear stability analysis which involves finding the maximum eigenvalue of the matrix at each time step. Is this a correct approach?
I am also confused where time is in the growth rate vs. wave number diagram. For example, the diagram shows how the growth rate changes for different wave numbers, but is this then for a fixed time?
Relevant answer
Answer
These articles might be useful, have a look:
Kind Regards
Qamar Ul Islam
  • asked a question related to Signal Processing
Question
11 answers
In the DSP courses I took at university we only covered theoretical material. I am looking for a good book that covers practical implementation of DSP in MATLAB, like designing filters and the DFT or FFT.
I am also looking for good books on signal processing with MATLAB in general.
Thanks.
Relevant answer
Answer
You can use the DSP System Toolbox in MATLAB.
  • asked a question related to Signal Processing
Question
4 answers
My question refers to the following papers:
  1. S. J. Julier and J. J. LaViola, "On Kalman Filtering With Nonlinear Equality Constraints," in IEEE Transactions on Signal Processing, vol. 55, no. 6, pp. 2774-2784, June 2007, doi: 10.1109/TSP.2007.893949
  2. A. T. Alouani and W. D. Blair, "Use of a kinematic constraint in tracking constant speed, maneuvering targets," in IEEE Transactions on Automatic Control, vol. 38, no. 7, pp. 1107-1111, July 1993, doi: 10.1109/9.231465.
In particular, my concerns are about the Fig. 1 of [1], the statements at the end of the left column in the page 2 of [1], and the statements in the middle of the left column in the page 2 of [2]. In both papers, it is suggested to apply the constraints only after the update of the state through the measurements. Should be possible to obtain better results projecting the state on the constraining surface both after the prediction and update steps?
Relevant answer
Answer
  • asked a question related to Signal Processing
Question
3 answers
I am curious about what happened to the atomizer software by Buckheit J. (http://statweb.stanford.edu/~wavelab/personnel/) and if it is available somewhere.
Sadly I found only 2 dodgy sites that require a login to download the MATLAB code. Does someone have any information on where to get it from?
Alternatively, if there are other toolkits that have implemented this code please let me know, it does not have to be MATLAB, any language is fine for me :).
Thank you.
Relevant answer
Answer
Old news but I found it here (http://sparselab.stanford.edu/atomizer/) if anyone finds this post while searching for it.
I've also downloaded the ZIP for posterity. Anyone feel free to get in touch if you need it. I plan on keeping it forever.
  • asked a question related to Signal Processing
Question
3 answers
I am working on CTU (Coding Tree Unit) partitioning using a CNN for intra-mode HEVC. I need to prepare a database for that and have referred to multiple papers. In most papers, images are encoded to obtain binary labels (splitting or non-splitting) for all CU (Coding Unit) sizes, resolutions, and QPs (Quantization Parameters).
If anyone knows how to do it, please give the steps or reference material.
Reference papers
Relevant answer
  • asked a question related to Signal Processing
Question
4 answers
Instead of wavelet transform theories, have you ever used techniques that can handle signal processing, especially of non-stationary signals like brain signals, and found them superior?
Relevant answer
Answer
Hi Kareem,
I haven't worked with brain signals. However, I would suggest you try the HOSA toolbox for non-stationary signals: https://in.mathworks.com/matlabcentral/fileexchange/3013-hosa-higher-order-spectral-analysis-toolbox
Especially, you can try cumulants or bispectrum techniques.
  • asked a question related to Signal Processing
Question
3 answers
Besides positioning solutions, Global Navigation Satellite System (GNSS) receivers can provide a reference clock signal known as Pulse-per-Second (PPS or 1-PPS), a TTL electrical signal that is used in several applications for synchronization purposes.
  1. Is the PPS physically generated through a digitally-controlled oscillator (or line driver) whose offset is periodically re-initialized by the estimated clock bias (retrieved by means of PVT algorithms)?
  2. Are there any specific filters/estimators devoted to a fine PPS generation and control?
  3. Does some colleague know any reference providing technical details on this aspect?
Relevant answer
We worked on the design and implementation of a GPS receiver for the sake of extracting the 1-PPS. We realized the signal acquisition phase and the tracking phase, where we generated a copy of the carrier with reduced frequency and of the PN code clock generator. By dividing these signals with an appropriate division ratio, one can get the 1-PPS. Its stability still has to be evaluated; the last divider stage is not yet realized, and we plan to realize it and then evaluate its accuracy and stability. Please see the papers:
Once achieved I will notify you.
Best wishes
  • asked a question related to Signal Processing
Question
5 answers
While reading a research paper I found that, to find abnormalities in a signal, the authors were using a sliding window, and in each window they divided the mean² by the variance. After searching the internet I found a term called the Fano factor, but it was the inverse of that. Could anyone please give an intuitive idea behind the equation mean²/variance?
Relevant answer
Answer
This criterion has some advantages:
It has no units; it is a coefficient. Therefore, you can compare different signals.
I think it stems from the coefficient of variation (CV): it is the inverse of the square of the CV, and the CV is a standardized deviation index.
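Concretely, mean²/variance is the inverse squared coefficient of variation: it is large when a window is steady relative to its spread and small when it fluctuates strongly, which is why a sliding-window version flags abnormal segments. A Python sketch on toy windows:

```python
def mean_sq_over_var(window):
    """mean^2 / variance: the inverse squared coefficient of variation."""
    m = sum(window) / len(window)
    var = sum((x - m) ** 2 for x in window) / len(window)
    return m * m / var

steady = [10.0, 10.1, 9.9, 10.0, 10.1, 9.9]    # small relative fluctuation
erratic = [10.0, 2.0, 18.0, 1.0, 19.0, 10.0]   # large relative fluctuation

print(mean_sq_over_var(steady) > mean_sq_over_var(erratic))  # True
```

Both windows have the same mean, so the ratio is driven entirely by the variance: the steady window scores orders of magnitude higher than the erratic one.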
  • asked a question related to Signal Processing
Question
2 answers
I am looking for topics where I can use graph signal processing to solve problems in wireless sensor networks. I have gone through a few papers which gave me an overview of GSP applications in this domain, but I want to work on some specific problems (like intrusion detection, efficient energy distribution, etc.).
Relevant answer
Answer
Applications of graph signal processing (GSP) tools to sensor processing tasks include sensing, filtering, sensor data classification/clustering, anomaly detection, and prediction. There are also several issues related to time-series signals as well as image, video, and heterogeneous signals.
GSP helps represent irregular data structures on graphs; it extends classical digital signal processing (DSP) to signals on graphs by combining algebraic and spectral graph theory with DSP, and offers a potential solution to numerous real-world problems involving signals defined on topologically complex domains, e.g., social networks, point clouds, biological networks, environmental and condition monitoring sensor networks, etc. Hence, GSP tools can deal with various sensing tasks, including, but not restricted to, sampling, distributed filtering, denoising, data processing tasks, smart grids, and infrastructure health.
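To make the "signals on graphs" idea above concrete, here is a minimal sketch of the core GSP machinery — the graph Fourier transform via the Laplacian eigenbasis — for a toy four-node sensor network (the graph and signal values are invented for illustration):

```python
import numpy as np

# Toy sensor network: 4 nodes in a path 0-1-2-3 (adjacency matrix A)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # combinatorial graph Laplacian

# Graph Fourier basis: eigenvectors of L (eigenvalues act as graph frequencies)
lam, U = np.linalg.eigh(L)

x = np.array([1.0, 1.1, 0.9, 1.0])  # a smooth graph signal (sensor readings)
x_hat = U.T @ x                      # graph Fourier transform (GFT)
x_rec = U @ x_hat                    # inverse GFT recovers the signal
print(np.allclose(x, x_rec))         # True
```

Filtering, denoising, and anomaly detection in GSP all operate on `x_hat` in this eigenbasis, analogously to frequency-domain DSP.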
  • asked a question related to Signal Processing
Question
1 answer
For 2-logistic chaotic sequence generation, we generate two y sequences (Y1, Y2) to encrypt the data.
For a 2D logistic chaotic sequence, we generate an x sequence and a y sequence to encrypt the data.
Are the above statements correct? Kindly help with this, and share a relevant paper if possible.
Relevant answer
Answer
Here is my article, which may answer your question:
  • asked a question related to Signal Processing
Question
5 answers
I am collecting WiFi CSI data using an ESP32, retrieving the phase and amplitude of each subcarrier, and plotting them in real time using pyqtgraph.
The problem is that I do not see any significant changes in the plot (for either amplitude or phase) while moving my hand. Should I apply some kind of filtering to see the deviations? If yes, what kind of preprocessing and filtering is required?
Relevant answer
Answer
  • asked a question related to Signal Processing
Question
11 answers
Today, sensors are usually interpreted as devices which convert different sorts of quantities (e.g. pressure, light intensity, temperature, acceleration, humidity, etc.), into an electrical quantity (e.g. current, voltage, charge, resistance, capacitance, etc.), which make them useful to detect the states or changes of events of the real world in order to convey the information to the relevant electronic circuits (which perform the signal processing and computation tasks required for control, decision taking, data storage, etc.).
If we think in a simple way, we can assume that actuators work the opposite direction to avail an "action" interface between the signal processing circuits and the real world.
If the signal processing and computation becomes based on "light" signals instead of electrical signals, we may need to replace today's sensors and actuators with some others (and probably the sensor and actuator definitions will also be modified).
  • Let's assume a case where we need to convert pressure to light: one could prefer the simplest (hybrid) approach, which is to use a pressure sensor followed by an electrical-to-optical transducer (e.g., an LED) to obtain the required new type of sensor. However, instead of this indirect conversion, if a more efficient or faster direct pressure-to-light converter (a new type of pressure sensor) is available, it might be more favorable. In the near future, we may need such direct transducer devices for low-noise and/or high-speed realizations.
(The example may not be a proper one but I just needed to provide a scenario. If you can provide better examples, you are welcome)
Most probably there are research studies ongoing in these fields, but I am not familiar with them. I would like to know about your thoughts and/or your information about this issue.
Relevant answer
Answer
After seeing your and other respectable researchers' answers, I am glad I asked this question.
I am really delighted to hear from you the history of an ever-lasting discussion about sensor and actuator definitions. I have always found it annoying that the sensor definition has usually been preferred as a "too specific" definition to serve only for an interface of an electrical/electronic system and an "other" system/medium with different form of signal(s).
Besides that discussion, I can start another one:
There are many commercial integrated devices which are called "sensor"s, although in fact they are not basic sensors but are more complicated small systems which may also include electronic amplifier(s), filter(s), analog-digital-converter, indicators etc. For sure, these are very convenient devices for electronic design, but I think it is not correct to call them "sensor". Such a device employs a basic sensor but besides it provides other supporting electronic stages to aid the electronic designer. I don't know if there is a specific name for such devices.
Thank you again for your additional explanations.
Best regards...
  • asked a question related to Signal Processing
Question
5 answers
I have a list of stores; each store has variables (revenue, market share, etc.) captured at a monthly level for a year, so basically 12 data points per variable. So I am treating this as time series data.
I want to cluster the stores based on these variables, with the condition that the store variables within a cluster match each other not just in values but also in trend. So, for example, if market share is one of the variables, then two stores can be put into the same cluster if their monthly values are close and their trends match.
I have done some research and saw following approaches:
Model based: Fit the time series of each feature to a model and then cluster the model parameters. From what I understand this generally works better on problems with lots of data; in my case each time series has only 12 data points, so will this work?
Shape based: Perform conventional clustering on the raw data, or extract features and then cluster. Here my concern is how the trend in the data will be captured.
Waveform representation: Represent each time series as a waveform and use signal processing techniques (wavelet transforms, etc.) to cluster these time series. Honestly I don't have any background here, but this approach sounds promising, so any inputs would be appreciated.
What would be the best way to go about this problem?
Relevant answer
Answer
Thank you so much for your inputs, I will go through them and see if I can leverage it.
  • asked a question related to Signal Processing
Question
9 answers
For information, I have already removed the mean, high-pass filtered, linear-enveloped (low-pass filtered and full-wave rectified), and amplitude-normalized the signals.
Relevant answer
Answer
Generally, if the EMG content you are dealing with lies at higher frequencies, the best option is to use a 70-300 Hz band-pass filter.
Depending on your work, you can also use MVC normalization, another normalization technique.
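As a sketch of the band-pass suggestion above (the 1000 Hz sampling rate and the test signal are assumptions for illustration; your sampling rate must exceed 600 Hz for a 300 Hz upper band edge):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                      # assumed sampling rate (Hz)
low, high = 70.0, 300.0          # EMG band suggested above
b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="bandpass")

t = np.arange(0, 1, 1 / fs)
# 150 Hz "EMG-like" tone plus a 10 Hz motion artefact
x = np.sin(2 * np.pi * 150 * t) + 2 * np.sin(2 * np.pi * 10 * t)
y = filtfilt(b, a, x)            # zero-phase band-pass filtering
```

`filtfilt` applies the filter forward and backward, so the envelope is not shifted in time — usually what you want before rectification and enveloping.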
  • asked a question related to Signal Processing
Question
6 answers
Hello,
I work on a controlled microgrid and I want to test the robustness of my controller against white noise that may be added to the output or the input. Is there any specific condition to follow in order to make a good choice of noise power, or is it something arbitrary?
- Actually, I tried about 3% of the nominal measurement value; is this a good choice?
- In addition, I tried both types of noise, and I noticed that noise applied to the output affects the system much more than noise applied to the input (my system loses stability with output noise, but gives acceptable performance with input noise). Is this reasonable? If yes, why?
thank you in advance
Relevant answer
Answer
Hi Sarah
Microgrid controller design and its robustness testing differ from those of a communication or control system; the white-noise concept will not work here. Controlled microgrid testing depends on operational scenarios, and researchers have proposed several robustness metrics for those scenarios.
One testing protocol is published by IEEE Standards:
2030.8-2018 - IEEE Standard for the Testing of Microgrid Controllers
DOI: 10.1109/IEEESTD.2018.8444947
It is useful for simulating operational scenarios and testing the designed microgrid controller.
  • asked a question related to Signal Processing
Question
3 answers
Dear all.
I am using laser Doppler vibrometry to measure the vibration of a structure. What I do now consists of exporting the velocity to signal processing and integrating it to obtain displacement, but an error is always added to the result: the signal fluctuates around different values, and even though the peak-to-peak value remains correct, the shape of the signal is not what I expect.
Is there a way to acquire the displacement over time directly? I read the guide but found nothing there.
Many articles appear to handle this, but they do not show how.
Please advise.
Relevant answer
Answer
Hi Mohammed,
thank you for your quick answer. Generally, vibrometer measurements on rotating objects are more difficult than those on stationary vibrating objects. The major drawback is the increased noise level, caused by speckle effects due to the object rotation in combination with the surface roughness. Speckle noise increases with the rpm rate and concentrates in "speckle peaks" at multiples of the rpm rate, leading to a typical comb spectrum. The amplitudes of the speckle peaks are nearly constant over the measurement bandwidth. So it is much harder to detect a vibration, especially when it is correlated with the rpm rate, which is mostly the case; the vibration signal then sits on top of a speckle peak. If the vibration amplitude is big enough (say 3x or 5x the speckle peak height), it is easy to detect. But when the vibration amplitude is in the same range as the height of the speckle peak, it is nearly impossible to detect. Furthermore, an investigation of the vibration signal in the time domain is virtually impossible because of the high noise level, independent of whether you look at velocity or displacement data. The displacement signal will additionally drift up or down continuously, because the vibrometer also acquires the part of the circumferential velocity that is projected onto the laser beam.
  • asked a question related to Signal Processing
Question
3 answers
Dear all.
Since I am establishing a lab for vibration measurement and signal processing for rotating machinery, I would highly appreciate your experience-based advice on what equipment should be included (purchased).
Thanks all
Relevant answer
Answer
Mohammed, this is a how-long-is-a-piece-of-string question. It is very difficult to give a useful answer without knowing more details, such as:
1. What is the scale of machinery you are studying? Power stations and mega-ships, micromotors or something in between? If you are studying non-portable systems, you will need a portable lab. If you are studying large systems, the rpms will be lower, which affects your choice of suitable accelerometers and signal analysers. If you are studying tiny systems where the vibrations are small, you may need vibration isolation platforms to get usable measurements.
2. How many people does the lab need to accommodate and how many experiments do you plan to run at once? Is the lab for research only or will it also be used for teaching?
3. Do you want to study the effects of loading? If so, you will need to consider braking devices, dynamometers or other forms of loading systems.
4. Laboratories usually provide calibration services, at the very least for the lab users, but often for outside clients as well. What level of calibration do you intend to provide? Do you just want secondary calibration to verify that your equipment is operating correctly or do you want primary calibration to check the secondary calibrators? Calibration and traceability are important in any research work, but they are particularly important in any forensic or expert witness work.
5. Do you need ventilation and/or cooling systems for combustion engines?
6. Do you need soundproofing for noisy equipment or vibration isolation for shaky equipment to avoid disturbance to other building occupants?
7. Do you plan any destructive testing? If so, you may need safety enclosures and other safety equipment.
8. What is your budget? How much floor area do you have? Are you adapting an existing lab space or creating a new one from scratch?
9. Do you plan to make acoustic measurements as well as vibration measurements?
10. Vibration studies may need to measure the modes of vibration as well as the amplitudes. High-speed video and/or strobes may be needed for this. Laser interferometry may also be needed. Appropriate lighting will be needed for video work.
  • asked a question related to Signal Processing
Question
3 answers
I have torque and angular position data (p) to fit a second-order linear model T = I s²p + B s p + k p, where s = j2πf. So first I converted my data (torque, angular position) from the time domain into the frequency domain. Next, differentiation in the frequency domain gives velocity and acceleration from the angular positions. Finally, I used the least-squares command lsqminnorm (MATLAB) to estimate the coefficients. I expected a linear relation, but the results show very low R² (<30%), and my coefficients are not always positive!
Filtering of the data:
angular displacements: moving average
torques: low-pass Butterworth, cutoff frequency 4 Hz, sampling 130 Hz
velocities and accelerations: only frequencies in [-5, 5] Hz are kept, to reduce noise
Could anyone help me out with this?
What can I do to get a better estimate?
Here is part of my code:
%%
angle_Data_p = movmean(angle_Data, 5);        % moving-average smoothing
%% frequency-domain derivative
N   = 2^nextpow2(length(angle_Data_p));
df  = 1/(N*dt);                               % Fs/N
Nyq = 1/(2*dt);                               % Fs/2
A = fft(angle_Data_p, N);                     % zero-pad to length N
A = fftshift(A);
f = -Nyq : df : Nyq-df;
A(f >  5) = 0 + 0i;                           % keep only |f| <= 5 Hz
A(f < -5) = 0 + 0i;
iomega_array = 1i*2*pi*(-Nyq : df : Nyq-df);  % -Fs/2 : Fs/N : Fs/2-df
iomega_exp = 1;                               % 1 for velocity, 2 for acceleration
for j = 1 : N
    if iomega_array(j) ~= 0
        A(j) = A(j) * (iomega_array(j) ^ iomega_exp);  % *(iw) or *(-w^2)
    else
        A(j) = complex(0.0, 0.0);
    end
end
A = ifftshift(A);
velocity_freq_p = A;             % both real and imaginary parts go into the least squares
Velocity_time = real(ifft(A));
%%
[b2, a2] = butter(4, fc/(Fs/2));
torque = filter(b2, a2, S(5).data.torque);
T = fft(torque, N);
T = fftshift(T);
f = -Nyq : df : Nyq-df;
T(f >  7) = 0 + 0i;              % bug fix: zero the torque spectrum T, not A
T(f < -7) = 0 + 0i;
torque_freq = ifftshift(T);
% same procedure for the FFT of the angular position data --> angle_freqData_p
phi_P = [accele_freq_p(1:end) velocity_freq_p(1:end) angle_freqData_p(1:end)];
TorqueP_freqData = torque_freq(1:end);
Theta = lsqminnorm(phi_P, TorqueP_freqData)
stimatedT2 = phi_P*Theta;
% for complex frequency-domain data, use squared magnitudes in R^2
Rsq2_S = 1 - sum(abs(TorqueP_freqData - stimatedT2).^2)/sum(abs(TorqueP_freqData - mean(TorqueP_freqData)).^2)
  • asked a question related to Signal Processing
Question
4 answers
I have performed all the attacks for my image cryptography algorithm. Finally, I need to run the NIST tests on my algorithm. If anyone has the code, kindly share it. Please do the needful.
Relevant answer
Answer
Actually, the NIST test suite consists of a bundle of tests, so you need not write code for all of them to assess the randomness of the image. The code for all these tests is provided on the NIST website. You need to download the code and run it using Eclipse or any other IDE. It is simple and useful. The only thing you should know is the procedure for running the different tests.
  • asked a question related to Signal Processing
Question
2 answers
Sorry if this seems like an obvious question; I am new to EMG analysis. I have been reading many papers on how various research groups clean, filter, segment, and classify muscle activation and fatigue using time-, frequency-, and time-frequency-domain analyses. However, I am struggling to find a common protocol for taking a raw EMG signal and processing it such that I can feed it into these different types of analyses. Is there a generally accepted repository, guideline, protocol, or flow chart that someone can point me to? Any help would be much appreciated.
Relevant answer
I have been working on signal processing since last year and I agree with you: I have not found a common protocol for sEMG processing. Sometimes the processing stage depends on the signal acquisition process and the characteristics you want to extract. After looking for papers that contain a defined processing procedure, I found these two, which could be helpful to give you an idea about where to start. I hope so! Greetings from Mexico!
  • asked a question related to Signal Processing
Question
15 answers
Dear all
What are the recent works in deep learning? How should I start with Python? Kindly suggest some works and materials to start with.
Relevant answer
Answer
Interesting recommendations
  • asked a question related to Signal Processing
Question
5 answers
I am using sparse array concepts (e.g., the minimum redundancy array) to estimate the DoAs of multiple targets. For uncorrelated sources, applying super-resolution (SR) algorithms (e.g., MUSIC and ESPRIT) on the constructed difference co-array provides good DoA estimation results. However, if the sources are fully correlated, the covariance matrix of the received signal becomes rank one, and SR algorithms fail.
In uniform linear array (ULA) case, we could use spatial smoothing or forward/backward technique to decorrelate coherent sources. However, in the case of sparse arrays, these techniques will not work unfortunately, because of the missing elements.
I am curious about how to decorrelate coherent sources in sparse array. Any discussion, suggestions or paper referring would be very welcome!
Thanks a lot!
Yuliang
Relevant answer
Answer
Thank you for your suggestions, Vadym Slyusar ! I will have a look on it.
  • asked a question related to Signal Processing
Question
7 answers
Dear community, after using the wavelet transform to extract the important features from my EEG signals, I am wondering how to calculate the Shannon entropy of each set of coefficients (cD1, cD2, ..., cA6). Another question: how can the Shannon entropy be used for dimension reduction?
Thank you .
Relevant answer
Answer
Hello dear friend Wassim Diai
I hope the following code to calculate the Shannon entropy of given data will be helpful in your work.
The wavelet coefficients (cD1, cD2, ..., cA6) form the input data.
Python 3.7 is used; pandas is imported as pd, and scipy provides the entropy function.
Good luck!
import pandas as pd
from scipy.stats import entropy
data = [3, 6, 7, 12, 5, 7]  # insert your coefficient values here
pd_series = pd.Series(data)
counts = pd_series.value_counts()
shannon_entropy = entropy(counts)  # renamed so the entropy function is not shadowed
print(shannon_entropy)
  • asked a question related to Signal Processing
Question
4 answers
I have used wavelet decomposition and reconstruction of a specific signal (e.g., rainfall). Among all the available levels (suppose I have ten low-frequency reconstruction signals), which level provides the information consisting of deterministic components, reflecting the variation characteristics of the signal? To elaborate: the higher approximation levels (such as a8, a9, and a10) represent the residual of the decomposition. These levels contain the average value of the data series, so the variation characteristics we are looking for are not really present; the curves start to flatten out. On the other hand, levels a0, a1, and a2 include most of the high frequencies, which reduce the correlation and do not significantly improve the signal characterization. So, among the levels in between, which should be used to study the particularities of the signal? Should we follow the level with the highest correlation coefficients?
Relevant answer
Answer
Narasim Ramesh Thank you again for your response. So, we must compute the sum of squares of ai and di for each layer of decomposition and the layer with lowest value of the sum is the required level we are looking at? How do we compute it at the different levels? For example, if we look at the level A8, what actually are the values of A8 and D8? I mean should we compute it by looking at those graphs or is there any formulation? How do we check that?
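One quantitative way to compare levels, sketched below with PyWavelets (the synthetic signal, wavelet choice, and level count are illustrative, not from the thread), is to compute the relative energy carried by each approximation/detail band of the decomposition and see where the deterministic energy concentrates:

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
# synthetic series: a slow deterministic oscillation plus noise
x = np.sin(2 * np.pi * 4 * t) + 0.3 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(x, "db4", level=6)   # [cA6, cD6, cD5, ..., cD1]
labels = ["a6"] + [f"d{j}" for j in range(6, 0, -1)]
energies = np.array([np.sum(c ** 2) for c in coeffs])
rel = energies / energies.sum()            # relative energy per level
for name, e in zip(labels, rel):
    print(f"{name}: {e:.3f}")
```

Levels whose relative energy is dominated by the deterministic component stand out against the roughly flat energy contribution of the noise-dominated detail levels.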
  • asked a question related to Signal Processing
Question
3 answers
Hello everyone,
I hope you are doing well.
I am using a Vantage Verasonics Research Ultrasound System to do ultrafast compound Doppler imaging. I acquire the beamformed IQ data with compounding angles (na = 3) and an ensemble size of ne = 75, transmitted at an ultrafast frame rate (PRFmax = 9 kHz, PRFflow = 3 kHz). Can I use a global SVD clutter filter to process the beamformed IQ data instead of a conventional high-pass Butterworth filter?
Your kind responses will be highly appreciated.
Thank you
Relevant answer
Answer
From one of the best groups in the field:
  • asked a question related to Signal Processing
Question
4 answers
While training my GNN (graph neural network) model, the loss is badly fluctuating. Someone had suggested increasing the batch size or decreasing the learning rate, but the results are remaining the same. Can anyone suggest other possible reasons and remedies for solving this issue?
(In the graphs attached below, the x-axis represents the number of samples and the y-axis represents training loss.)
Relevant answer
Answer
There are several reasons that can cause fluctuations in training loss over epochs. The main one though is the fact that almost all neural nets are trained with different forms of stochastic gradient descent. This is why the batch_size parameter exists which determines how many samples you want to use to make one update to the model parameters. If you use all the samples for each update, you should see it decreasing and finally reaching a limit. Note that there are other reasons for the loss having some stochastic behaviour.
This explains why we see oscillations. But in your case, it is more than normal I would say. Looking at your code, I see two possible sources.
  1. A large network, small dataset: It seems you are training a relatively large network with 200K+ parameters with a very small number of samples, ~100. To put this into perspective, you want to learn 200K parameters or find a good local minimum in a 200K-D space using only 100 samples. Thus, you might end up just wandering around rather than locking down on good local minima. (The wandering is also due to the second reason below).
  2. Very small batch_size. You use a very small batch_size. So it's like you are trusting every small portion of the data points. Let's say within your data points, you have a mislabeled sample. This sample when combined with 2-3 even properly labelled samples, can result in an update that does not decrease the global loss, but increase it, or throw it away from local minima. When the batch_size is larger, such effects would be reduced. Along with other reasons, it's good to have batch_size higher than some minimum. Having it too large would also make training go slow. Therefore, batch_size is treated as a hyperparameter.
  • asked a question related to Signal Processing
Question
2 answers
I have the scattering matrix images (8 images: S11_real, S11_imaginary, and similarly for S22, S12, S21), and I need to create the coherency matrix images (6 images: diagonal and upper elements T11, T22, T33, T12, T13, T23). The sensor is monostatic, so S12 = S21. How can this be done using Python/MATLAB? Kindly share the required library/code or the equations.
Relevant answer
Answer
Hi,
You may use the following python code to create T3 from S2 matrix. Since, you have mentioned the sensor is monostatic, considering the reciprocity constraint S12 = S21, the code should produce the required output. Find the attached formulation and python file.
Good luck with polarimetry :-)
# -*- coding: utf-8 -*-
"""
Created on Mon May 3 09:00:21 2021
@author: Narayana
"""
import numpy as np
S11_real = 1
S11_imag = 0
S21_real = 0
S21_imag = 0
S22_real = 1
S22_imag = 0
# Scattering matrix
S2 = np.array([[S11_real+1j*S11_imag,S21_real+1j*S21_imag],
[S21_real+1j*S21_imag,S22_real+1j*S22_imag]])
# Kp - 3-D Pauli feature vector; the standard convention includes a
# 1/sqrt(2) normalization and a factor of 2 on the cross-pol term
Kp = np.expand_dims((1/np.sqrt(2))*np.array([S2[0,0]+S2[1,1], S2[0,0]-S2[1,1], 2*S2[1,0]]), axis=1)
# 3x3 Pauli coherency matrix: T3 = Kp * Kp^H
T3 = np.matmul(Kp, np.conj(Kp).T)
  • asked a question related to Signal Processing
Question
13 answers
The Nyquist-Shannon theorem provides an upper bound for the sampling period when designing a Kalman filter. Leaving apart the computational cost, are there any other reasons, e.g., noise-related issues, to set a lower bound for the sampling period? And, if so, is there an optimal value between these bounds?
Relevant answer
Answer
More samples are generally better until such point as the difference in the real signal between samples is smaller than the quantization or other noise. At that point, especially with quantization, it may be a point of diminishing returns.
The other thing that nobody mentions is that faster sampling means less real-time processing time. In many systems it's not really an issue, as the time constants of the physical system are so slow as to never challenge the processing. In others, say high-speed flexible mechatronic systems, the required sample rates may challenge the number of processing cycles available to complete the task.
Generally, the best bet is to return to the physical system's time constants and (if possible) sample 20-100x as fast as them.
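The diminishing-returns point above can be checked with back-of-envelope numbers: once the worst-case change of the signal between consecutive samples falls below one LSB of the ADC, faster sampling adds little. All values below are hypothetical:

```python
import numpy as np

def per_sample_change(amplitude, f_signal, fs):
    """Worst-case change of a sine between consecutive samples:
    max |dx/dt| = 2*pi*f*A, taken over one sample period 1/fs."""
    return 2 * np.pi * f_signal * amplitude / fs

full_scale = 2.0          # ADC range (V), hypothetical
bits = 12
lsb = full_scale / 2 ** bits

A, f = 0.5, 50.0          # 0.5 V, 50 Hz signal (hypothetical)
for fs in (1e3, 1e4, 1e5, 1e6):
    delta = per_sample_change(A, f, fs)
    print(f"fs={fs:>9.0f} Hz  max step={delta:.2e} V  {'< LSB' if delta < lsb else '>= LSB'}")
```

For these numbers, only at 1 MHz does the per-sample change dip below the 12-bit LSB, i.e. sampling beyond that rate mostly resolves quantization noise.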
  • asked a question related to Signal Processing
Question
2 answers
In the characterization of noise from a fabricated MOSFET, obtaining the PSD is critical. How can this be done under DC bias conditions?
Relevant answer
Answer
@Anders Buen Power Spectral Density
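A common practical route, sketched here with invented numbers in place of a real measurement: digitize the voltage (or current) fluctuations under the DC bias, subtract the DC mean, and estimate the PSD with Welch's method:

```python
import numpy as np
from scipy.signal import welch

fs = 100e3                                   # digitizer sampling rate (assumed)
rng = np.random.default_rng(0)
# synthetic record: DC bias + white noise, a stand-in for the measured
# drain voltage of the biased MOSFET
v = 1.2 + 1e-4 * rng.standard_normal(int(1e5))

v_ac = v - v.mean()                          # strip the DC bias before estimation
f, psd = welch(v_ac, fs=fs, nperseg=4096)    # one-sided PSD in V^2/Hz
print(psd.mean())
```

Averaging over many segments (here ~48 with the default 50% overlap) is what makes the low-level noise floor visible above the estimator's own variance.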
  • asked a question related to Signal Processing
Question
3 answers
Hi everyone! I am looking for a dataset and would be very thankful if anyone could share a link to relevant databases. I want to research knock detection in spark-ignition engines by processing vibration signals. As a first step, I want to validate a previous study in this field, so I am looking for a dataset and the related prior research.
  • asked a question related to Signal Processing
Question
9 answers
As I know, CNNs usually require images, but my data frame has shape (335, 48), which is not an image but numerical values with a categorical output. How can I use a CNN or deep learning in this situation? Thank you.
Relevant answer
Answer
Exactly Wassim Diai . You may experiment with this approach for both 1D and 2D convolutions as well.
  • asked a question related to Signal Processing
Question
17 answers
Dear community, my model is based on feature extraction from non-stationary signals using the discrete wavelet transform, followed by statistical features and machine learning classifiers to train the model. I achieved a maximum accuracy of 77% for 5 classes. How can I increase it? The size of my data frame is X = (335, 48), y = (335, 1).
Thank you
Relevant answer
Precision can be seen as a measure of quality: high precision means that a rule returns more relevant results than irrelevant ones, while high recall means that most of the relevant results are returned.
  • asked a question related to Signal Processing
Question
10 answers
Hi,
I want to classify time series of varying length to identify the rider of a bike from the torque signal. I was planning on dividing the signal into segments of, say, 5 rotations, so the length of each time series would vary with the rotation speed. Do I need to extract features such as the mean value and FFT, or is it enough to simply feed the filtered signal to the classifier?
Thanks in advance
Relevant answer
Answer
It's always better to use an ensemble of features.
Use both the FFT and the mean as classification features. Also explore the cepstrum.
In Fourier analysis, the cepstrum is the result of computing the inverse Fourier transform of the logarithm of the estimated signal spectrum. The method is a tool for investigating periodic structures in frequency spectra. The power cepstrum has applications in the analysis of human speech.
It can be used here too, based on the pedal length and leg length of each person,
and the strength/intensity of the strokes.
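The cepstrum definition above can be sketched in a few lines of Python; here it detects a deliberately inserted (circular) echo in white noise, which is the classic use of the cepstrum for finding periodic structure in a spectrum (the signal and the delay are made up):

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum."""
    spectrum = np.fft.fft(x)
    return np.real(np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)))

rng = np.random.default_rng(0)
s = rng.standard_normal(1000)
delay = 100                       # echo lag in samples
x = s + 0.5 * np.roll(s, delay)   # signal plus a circular echo
c = real_cepstrum(x)

# the cepstrum shows a peak near the echo lag (quefrency = 100 samples)
peak = int(np.argmax(np.abs(c[20:500])) + 20)
print(peak)
```

For pedalling data, a periodic stroke pattern plays the role of the echo: its repetition interval shows up as a cepstral peak at the corresponding quefrency.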
  • asked a question related to Signal Processing
Question
4 answers
Hi RG,
There are a lot of papers using the HDsEMG database CapgMyo to test gesture recognition algorithms (http://zju-capg.org/myo/data/).
However, it seems that there is a missing file on the original server (http://zju-capg.org/myo/data/dbc-preprocessed-010.zip).
I wonder if anyone know if there is an alternative source for the database?
All the best.
Relevant answer
Answer
  • asked a question related to Signal Processing
Question
7 answers
I am trying to select a mother wavelet function for signal analysis. First, I am trying to select the decomposition level for each wavelet. The problem I am facing is that the total entropy (approximation + detail) increases with each subsequent decomposition, whereas the detail entropy decreases to a reasonable level (at, say, level 2, 3, or 4). My interest is in the detail coefficients. Should I continue to the level where the detail entropy is minimal, even though the total entropy is increasing?
Relevant answer
Answer
Each wavelet basis has a different spectral behavior and therefore a different impulse response at the same scales. You must choose the one that best reflects the spectral characteristics of the analyzed signal. I particularly like using the Morlet wavelet, because it is easy to adjust the center frequency and bandwidth to better analyze your signal.
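As a sketch of Morlet-based analysis with PyWavelets (the signal, scale range, and sampling rate are invented for illustration): a continuous wavelet transform of a two-tone signal shows where each frequency lives in time:

```python
import numpy as np
import pywt

fs = 200.0
t = np.arange(0, 2, 1 / fs)
# test signal: 10 Hz for the first second, then 30 Hz
x = np.concatenate([np.sin(2 * np.pi * 10 * t[: t.size // 2]),
                    np.sin(2 * np.pi * 30 * t[t.size // 2:])])

scales = np.arange(1, 64)
# Morlet CWT; pywt converts scales to frequencies via the wavelet's
# center frequency and the sampling period
coefs, freqs = pywt.cwt(x, scales, "morl", sampling_period=1 / fs)
power = np.abs(coefs) ** 2       # scalogram: time-frequency energy map
```

The scale with the strongest average power in each half of `power` corresponds to the tone active there, which is exactly the center-frequency tuning mentioned above.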
  • asked a question related to Signal Processing
Question
4 answers
Working on chandrayaan-2 DFSAR data, there are three datasets available:
1) Slant range image data product: The slant range complex image file. Each pixel is represented by two 4-byte floating point value (one 4-byte floating point real and one 4-byte floating point imaginary value). Each pixel in the slant range image is Seleno-tagged with a lat./lon. value.
2) Ground range image data product: The ground range unsigned short int image file. Each pixel is represented by 2-byte unsigned short int. Each pixel in the slant range image is Seleno-tagged with a lat./lon. value.
3) Seleno-referenced image data product: The Map projected image file. Each pixel is represented by 2-byte unsigned short int file(amplitude).
Which should I be using if i want to generate coherency matrix and perform target decomposition?
Thanks in advance!
Relevant answer
Answer
Piyush Kumar For generating coherency matrix and further performing target decomposition, Single Look Complex SAR data is required which is Slant Range Data Product. So, in case of Chandrayaan-2 DFSAR data, you need to explore SLI (Single Look Image) data. I think that these basic information are available at ISSDC web portal where you might be downloading data sets.
  • asked a question related to Signal Processing
Question
10 answers
A signal is split into two parts; one of them goes through a filter (say, with transfer function H(f)) and the other part stays unchanged. I want to know how to calculate their cross-correlation function. My guess is that, given the spectral density function S(f), it will be the ordinary Wiener-Khinchin theorem with the transfer function inserted: R(τ) = ∫ S(f) H(f) exp(i·2π·f·τ) df
Relevant answer
Answer
Agreeing with Pascal Salart, but keeping it simple. One signal is x(t), the other is y(t); you study the covariance matrix E[x(t+i) y(t+j)], which can be estimated by low-pass filtering the vectors v(t, i) = (x(t+i-N), ..., x(t+i))
and w(t, j) = (y(t+j-N), ..., y(t+j)), with N the length of the observation window;
the scalar product <v(t, i), w(t, j)> then estimates the expectation E[·] above.
The matrix obtained is a Gram matrix, hence it diagonalizes, with the eigenvectors and eigenvalues obtained by the Gram-Schmidt algorithm. QED...
Ok?
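The formula in the question can be sanity-checked numerically: for white noise x with a flat S(f), the cross-correlation of the filtered branch y = h * x with x reduces to the impulse response h itself, i.e. the inverse transform of S(f)·H(f). A sketch (the filter taps are invented):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
N = 200_000
x = rng.standard_normal(N)            # white noise, S_x(f) ~ 1

h = np.array([0.5, 0.3, 0.2, 0.1])    # FIR filter, H(f) = FFT of h
y = lfilter(h, [1.0], x)              # filtered branch

# Estimate R_yx(k) = E[y(t) x(t-k)]; for white x this should equal h(k),
# i.e. the inverse transform of S(f) H(f) -- the formula in the question.
lags = np.arange(len(h))
R = np.array([np.mean(y[k:] * x[: N - k]) for k in lags])
print(np.round(R, 2))   # ~ [0.5, 0.3, 0.2, 0.1]
```

For colored input, the same estimate converges to the inverse transform of S(f)·H(f) rather than h alone.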
  • asked a question related to Signal Processing
Question
3 answers
Hi all, I hope everyone is doing good.
I am working on Machine Learning, I am working on EEG data for which I have to extract statistical features of the data. Using mne library I have extracted the data in a matrix form but my work requires some statistical features to be extracted.
All features which are to be extracted are given in table 2 of this paper: "Context-Aware Human Activity Recognition (CAHAR) in-the-Wild Using Smartphone Accelerometer,". The data set I am using is dataset 2b from "http://www.bbci.de/competition/iv/".
I can't find a suitable signal processing library. Can you suggest any Python library for processing EEG signal data?
Thanks to all who help.
Relevant answer
Answer
Aparna Sathya Murthy I came across pyeeg and tried to install it in Google Colab (pip install pyeeg), but it says:
ERROR: Could not find a version that satisfies the requirement pyeeg (from versions: none)
ERROR: No matching distribution found for pyeeg
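If pyeeg will not install, the statistical features in such tables (mean, standard deviation, skewness, kurtosis, RMS, peak-to-peak, ...) can be computed directly with NumPy and SciPy on the matrix extracted via mne. A minimal sketch on a dummy (n_channels, n_samples) epoch; the feature list is illustrative, not the exact table from the cited paper:

```python
import numpy as np
from scipy import stats

def statistical_features(epoch):
    """Per-channel statistical features for one EEG epoch.

    epoch : array of shape (n_channels, n_samples),
            e.g. one item of mne Epochs.get_data().
    Returns a (n_channels, 6) array:
    mean, std, skewness, kurtosis, RMS, peak-to-peak.
    """
    return np.column_stack([
        epoch.mean(axis=1),
        epoch.std(axis=1),
        stats.skew(epoch, axis=1),
        stats.kurtosis(epoch, axis=1),
        np.sqrt(np.mean(epoch ** 2, axis=1)),   # RMS
        np.ptp(epoch, axis=1),                  # peak-to-peak amplitude
    ])

rng = np.random.default_rng(1)
feats = statistical_features(rng.standard_normal((3, 250)))
print(feats.shape)  # (3, 6)
```

Stacking such rows over all epochs gives a ready-made feature matrix for a classifier.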
  • asked a question related to Signal Processing
Question
5 answers
I am working on ECG arrhythmia classification using SVM and have implemented some kernel tricks with different kernels on the MIT-BIH dataset (the features form a 44187-row, 18-column matrix).
It is difficult to plot support vectors for such a large dataset. How can I plot them? Please also suggest other plots or methods to compare the different kernels; I already have a comparison chart of accuracy, efficiency, etc.
Relevant answer
Answer
It might interest you that there is a possibility of using complexity measures to assess the state of the observed complex system and make decisions about arrhythmias.
An example of how to do this can be found in our paper on the prediction of TdP arrhythmias from ECG recordings. Everything is explained in the paper in detail. The final version will contain a rewritten entropy section and substantially improved methods, introduction, etc.
Back to your question. Complexity measures, when applied wisely, enable us to substantially reduce the complexity of complex systems under observation. This includes biosignals such as ECGs, EEGs, etc.
Hopefully this will enable you to orientate yourself in this exciting, yet quite complicated area of research.
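On the plotting part of the question: with 18-dimensional features the support vectors cannot be plotted directly, but a common workaround is to project the features onto two principal components, fit each kernel in that plane, and draw the zero level of the decision function. A sketch assuming scikit-learn, with synthetic data standing in for the MIT-BIH feature matrix:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# stand-in for the 44187 x 18 feature matrix and its labels
X = rng.standard_normal((500, 18))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)

# project to 2-D so the decision boundary can be drawn
X2 = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

for kernel in ("linear", "rbf", "poly"):
    clf = SVC(kernel=kernel).fit(X2, y)
    # evaluate the decision function on a grid over the 2-D plane
    xx, yy = np.meshgrid(np.linspace(X2[:, 0].min(), X2[:, 0].max(), 200),
                         np.linspace(X2[:, 1].min(), X2[:, 1].max(), 200))
    Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
    print(kernel, clf.n_support_)   # support-vector count per class
    # plt.contour(xx, yy, Z, levels=[0]) would draw the boundary for this kernel
```

One panel per kernel with the contour at level 0 (plus the projected points) gives a visual comparison that complements the accuracy chart.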
  • asked a question related to Signal Processing
Question
4 answers
In several discussions, I have often come across the question of the 'mathematical meaning' of various signal processing techniques, such as the Fourier transform, short-time Fourier transform, Stockwell transform, wavelet transform, etc.: what is the real reason for choosing one technique over another for certain applications?
Apparently, the ability of these techniques to overcome the shortcomings of each other in terms of time-frequency resolution, noise immunity, etc. is not the perfect answer.
I would like to know the opinion of experts in this field.
Relevant answer
Answer
Utkarsh Singh There is an aesthetic reason why a mathematical method is of interest in signal processing:
-a beautiful algorithm is well articulated, says what it does in few instructions, and does it in a stable and reliable manner
-this hints to the underlying algebra
With powerful and minimal computation, we go deep into algebra structures: group, rings, fields (see references on Evariste Galois as the inventor of "group" as we know it)
-Fourier transform is an interesting invention: it allows us to decompose a signal into resonating modes (as with piano music: you produce a sound at frequency F, but also its harmonics NxF...). Naturally there is the aliasing question and the Nyquist theorem for reconstruction
There are many more time-frequency representations: Fourier, Laplace, discrete or continuous, cosine transform, wavelet transform, etc.
The interesting feature of discrete algorithms for those transforms is that you can implement a butterfly structure.
The key idea is to replace a very large number of multiplications (in brute force "non-esthetic" programming) by a smaller number of additions.
This idea worked for me for developing a codec system using underlying GF(n) properties.
See this patent:
The regularity in the processing and the efficiency of the representation go hand in hand.
Let me go back to a very basic mathematical method: the Gram-Schmidt decomposition: take a sequence of n vectors v(1), ..., v(n), and the matrix of cross-products m(i,j) = <v(i), v(j)>. The Gram-Schmidt method diagonalises this matrix. It extracts eigenvalues and eigenvectors. In frequency terms, it extracts modes (resonating modes present in the signal).
This algorithm highlights the efficiency side of the representation: it's projecting the signal onto something found "in itself", call it principal components if you want.
There are only two reasons for choosing a technique in engineering:
-(i) it addresses the problem completely
-(ii) it's economically implementable.
Both criteria are equally important and a good way to find these is to look for elegant, esthetic solutions (minimal and complete at the same time).
Does it help?
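The butterfly idea mentioned above can be made concrete with a radix-2 decimation-in-time FFT: each stage combines two half-size DFTs with one twiddle-factor multiply per pair, trading a quadratic number of multiplications for O(N log N) work. A minimal recursive sketch, checked against numpy.fft:

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT (len(x) must be a power of two)."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even = fft_radix2(x[0::2])      # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])       # DFT of odd-indexed samples
    # butterfly: one twiddle-factor multiply per pair, then add/subtract
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd
    return np.concatenate([even + tw, even - tw])

sig = np.random.default_rng(2).standard_normal(8)
print(np.allclose(fft_radix2(sig), np.fft.fft(sig)))  # True
```

The same even/odd splitting, iterated, is exactly the butterfly diagram of the classic Cooley-Tukey algorithm.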
  • asked a question related to Signal Processing
Question
3 answers
Dear All,
If you are interested in the area of Adversarial Multimedia Forensics, my PhD thesis is now available on the EURASIP database at https://theses.eurasip.org/theses/859/machine-learning-techniques-for-image-forensics/
Thanks
Relevant answer
Answer
good luck
  • asked a question related to Signal Processing
Question
5 answers
My SR785 Dynamic Signal Analyzer keeps rebooting after showing a "WaitFlag Error" in between measurements. I have been unable to find any mention of this error in the service/user manuals available on the SRS website (thinksrs.com). Any leads as to how to approach/troubleshoot this problem would be appreciated. Thanks!
Relevant answer
Answer
Serial polling is associated with the computer-interface aspect of the instrument. The interface connectors (RS-232 or IEEE-488) are located on the back of the instrument. If the error occurs when the instrument's computer interface is active, it may be due to an error in the interface program on the computer.