Questions related to Biomedical Signal Processing
I am working on an ECG QRS detection algorithm which I implemented in C. The code detects the ECG peaks regardless of the beat type (normal, paced, etc.) and saves the detected peaks into a text file as the time locations of the peaks.
To evaluate my algorithm, I have to compare the obtained peaks to those provided in the reference annotation files (MIT-BIH Arrhythmia Database). From this comparison I can find the FP and FN peaks, and then calculate the sensitivity and positive predictivity. The main objective is to find all of the peaks, not the type of each beat.
According to the PhysioNet guide, using the WFDB Software Package in Cygwin, I have to do the following:
- Use the "rdann" and "wrann" programs to convert my text file into a compatible annotation file.
- Use the "bxb" program to compare my obtained beat annotations beat-by-beat to the reference annotations (e.g.: bxb -r 100 -a atr yow -L bxb.out sd.out).
I am looking for a few clear examples showing how to convert the text file into an annotation file and then compare the annotations to the reference ones. I also tried the "rr2ann" program to convert my text file into an annotation file, but it did not work for me.
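For what it's worth, `wrann` reads from standard input text in the same column layout that `rdann` prints (time, sample number, annotation mnemonic, subtype, channel, num). A small Python sketch that formats detected peak sample indices into that layout is below; the fs value, the 'N' label for every beat, and the exact time format are assumptions, so compare against `rdann -r 100 -a atr` output on your system before feeding it to `wrann`:

```python
# Hypothetical converter: peak sample indices -> rdann-style text for `wrann`.
# Assumes fs = 360 Hz (MIT-BIH Arrhythmia Database) and labels every peak 'N'.

def peaks_to_rdann_text(peak_samples, fs=360.0):
    lines = []
    for s in peak_samples:
        t = s / fs                          # elapsed time in seconds
        m, sec = divmod(t, 60.0)
        h, m = divmod(int(m), 60)
        time_str = "%d:%02d:%06.3f" % (h, m, sec)
        # columns: time  sample  anntyp  subtyp  chan  num
        lines.append("%12s %8d     N    0    0    0" % (time_str, s))
    return "\n".join(lines)

print(peaks_to_rdann_text([77, 370, 662]))
```

Written to, say, peaks.txt, the Cygwin side would then be something like `wrann -r 100 -a qrs < peaks.txt` followed by `bxb -r 100 -a atr qrs -L bxb.out sd.out` (annotator name "qrs" is arbitrary).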
I have been working on classifying ECG signals, and for feature extraction I am going to use AR modelling with Burg's method. After reading a few papers I learned that the features are extracted after splitting the ECG signals into blocks of different durations. My question is: why is it necessary to do so, and how can we fix a certain duration? For instance, I have a signal with 50000 samples at fs = 256 Hz, so what should the duration of each block be?
It would also be really helpful if someone could help me understand Burg's method. There are videos for learning the Yule-Walker equations, but I didn't find any for Burg's method.
Thank you in advance
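On Burg's method itself: unlike Yule-Walker, which works from the autocorrelation sequence, Burg estimates the AR coefficients order by order by choosing a reflection coefficient that minimizes the summed forward and backward prediction-error power. A minimal NumPy sketch of the textbook recursion follows (verify it against a reference implementation such as MATLAB's arburg before relying on it):

```python
import numpy as np

def arburg(x, order):
    """Burg's method: return AR polynomial [1, a1, ..., ap] and error power."""
    x = np.asarray(x, dtype=float)
    a = np.array([1.0])                 # AR polynomial at order 0
    e = np.mean(x ** 2)                 # prediction-error power
    f = x[1:].copy()                    # forward prediction errors
    b = x[:-1].copy()                   # backward prediction errors (delayed)
    for _ in range(order):
        # reflection coefficient minimizing forward+backward error power
        k = -2.0 * (f @ b) / (f @ f + b @ b)
        # Levinson-style update of the AR polynomial
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        # update and re-align the error sequences for the next order
        f, b = (f + k * b)[1:], (b + k * f)[:-1]
        e *= 1.0 - k * k
    return a, e
```

As for the block duration: splitting is done because AR modelling assumes local stationarity, and an ECG is only approximately stationary over short stretches. Papers commonly use blocks of a few seconds; your 50000-sample record (~195 s at 256 Hz) would give, for example, ~97 blocks of 2 s (512 samples). The exact choice is a stationarity-versus-data-length trade-off rather than a fixed rule.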
Heart rate variability is a well-known and useful concept in biomedical engineering and the medical sciences. Breath rate is a less researched field, and a newer measure, breath-rate variability (BRV), was introduced recently to quantify the effect of meditation.
It is gaining the attention of researchers, as BRV has a number of novel applications. What could they be?
I hope you are doing well.
I am using a Vantage Verasonics Research Ultrasound System to do ultrafast compound Doppler imaging. I acquire the beamformed IQ data with compounding angles (na = 3) and an ensemble size of ne = 75, transmitted at an ultrafast frame rate (PRFmax = 9 kHz, PRFflow = 3 kHz). Can I use a global SVD clutter filter to process the beamformed IQ data instead of a conventional high-pass Butterworth filter?
Your kind responses will be highly appreciated.
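For reference, the core of a global SVD clutter filter on beamformed IQ data is just an SVD of the Casorati matrix (space x slow-time) with the lowest-order singular components, which capture spatially coherent tissue motion, set to zero. A sketch follows; choosing the cut-off rank n_cut is the hard part and is left as an input here:

```python
import numpy as np

def svd_clutter_filter(iq, n_cut):
    """Zero the n_cut largest singular components of the Casorati matrix.

    iq: complex array (nz, nx, nt) of compounded, beamformed IQ data.
    n_cut: number of low-order (tissue-clutter) components to remove --
    a user-chosen threshold, not something this sketch estimates.
    """
    nz, nx, nt = iq.shape
    casorati = iq.reshape(nz * nx, nt)        # space x slow-time
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    s_filt = s.copy()
    s_filt[:n_cut] = 0.0                      # discard tissue components
    blood = (u * s_filt) @ vh
    return blood.reshape(nz, nx, nt)
```

Unlike a Butterworth filter applied along slow time, this adapts to tissue motion that overlaps the flow band in frequency; ensembles of the size you have (ne = 75) are commonly used for SVD filtering in the ultrafast Doppler literature.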
If someone knows where raw ECG data can be collected, or if anyone has already collected such raw data, could you please send or mail it to me?
There are a lot of papers using the HDsEMG database CapgMyo to test gesture recognition algorithms (http://zju-capg.org/myo/data/).
However, it seems that a file is missing on the original server (http://zju-capg.org/myo/data/dbc-preprocessed-010.zip).
Does anyone know whether there is an alternative source for the database?
All the best.
The Special Issue entitled “Analysis of 1D Biomedical signals through AI-based approaches for Image processing” in the journal Biomedical Signal Processing and Control (Impact Factor: 3.137) is now open to receive your paper.
Please, find all information at the following link: https://www.journals.elsevier.com/biomedical-signal-processing-and-control/call-for-papers/special-issue-on-analysis-of-1d-biomedical-signals
We will be waiting for your new paper! #Biomedical #SignalProcessing #ArtificialIntelligence #future #innovation #healthcare
Dear community, I need your help. I'm training a model to classify sleep stages. After extracting features from my signal, I collected the features (X) in a DataFrame with shape (335, 48), and the labels (y) with shape (335,).
This is my code:
def get_base_model():
    inp = Input(shape=(335, 48))
    img_1 = Convolution1D(16, kernel_size=5, activation=activations.relu, padding="valid")(inp)
    img_1 = Convolution1D(16, kernel_size=5, activation=activations.relu, padding="valid")(img_1)
    img_1 = MaxPool1D(pool_size=2)(img_1)
    img_1 = SpatialDropout1D(rate=0.01)(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = MaxPool1D(pool_size=2)(img_1)
    img_1 = SpatialDropout1D(rate=0.01)(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = MaxPool1D(pool_size=2)(img_1)
    img_1 = SpatialDropout1D(rate=0.01)(img_1)
    img_1 = Convolution1D(256, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = Convolution1D(256, kernel_size=3, activation=activations.relu, padding="valid")(img_1)
    img_1 = GlobalMaxPool1D()(img_1)
    img_1 = Dropout(rate=0.01)(img_1)
    dense_1 = Dropout(0.01)(Dense(64, activation=activations.relu, name="dense_1")(img_1))
    base_model = models.Model(inputs=inp, outputs=dense_1)
    opt = optimizers.Adam(0.001)
    base_model.compile(optimizer=opt, loss=losses.sparse_categorical_crossentropy, metrics=['acc'])
    base_model.summary()
    return base_model

model = get_base_model()
test_loss, test_acc = model.evaluate(Xtest, ytest, verbose=0)
model.fit(X, y)
print('\nTest accuracy:', test_acc)
I got the error : Input 0 is incompatible with layer model_16: expected shape=(None, 335, 48), found shape=(None, 48)
You can get an idea of my data shape from this picture:
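For what it's worth, the mismatch in the error message is between Input(shape=(335, 48)), which declares each individual sample to be a 335x48 matrix, and X of shape (335, 48), which Keras reads as 335 samples of 48 features each. One common resolution (an assumption about the intent) is to treat the 48 features as a length-48 sequence with one channel, i.e. Input(shape=(48, 1)) with X reshaped accordingly; the reshape itself in plain NumPy:

```python
import numpy as np

X = np.zeros((335, 48))      # 335 samples, 48 features each
X3 = X[..., np.newaxis]      # add a channel axis for Conv1D layers
print(X3.shape)              # (335, 48, 1)
```

Note that with only 48 time steps, the later kernel_size=3 "valid" convolutions after three pooling stages will run out of samples, so the network depth (or the pooling) would need to be reduced as well.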
Dear community, I am currently working on emotion recognition, and as a first step I'm trying to extract features. While checking some resources, I found that they used the SEED dataset, which contains EEG signals of 15 subjects recorded while the subjects were watching emotional film clips. Each subject carried out the experiment in 3 sessions, so there are 45 experiments in this dataset in total. Different film clips (positive, neutral, and negative emotions) were chosen to achieve the highest match across participants, and the length of each film clip is about 4 minutes. The EEG signals of each subject were recorded as separate files containing the name of the subject and the date. These files contain a preprocessed, down-sampled, and segmented version of the EEG data: the data was down-sampled to 200 Hz, a band-pass filter from 0–75 Hz was applied, and the EEG segments associated with each movie were extracted. There are a total of 45 .mat files, one for each experiment; every person carried out the experiment three times within a week. Every subject file includes 16 arrays: 15 arrays contain the preprocessed and segmented EEG data of the 15 trials in one experiment, and an array named LABELS contains the corresponding emotional labels (−1 for negative, 0 for neutral, and +1 for positive). I found that they loaded each class separately (negative, neutral, positive), fixed the length of each signal at 4096, the number of signals per class at 100, and the number of features extracted from wavelet packet decomposition at 83. My question is: why did they select exactly 83, 4096, and 100?
I know that my question is a bit long, but I tried to explain the situation clearly. I appreciate your help, thank you.
This question is for those who are working in the areas of data acquisition, signal processing, and LabVIEW.
I'd like to measure the ECG signal with dry electrodes, but I'd like to know what kinds of electrodes are most commonly used. Can anyone give me some information?
I would like to know which application scenarios need sequence learning methods (RNN, LSTM, GRU, and so on), such as medical video pattern analysis and time-series forecasting for fighting against COVID-19. In what other scenarios do we need to mine temporal correlations? I specialize in sequence learning, and I would like to know what I can do to help more people effectively.
I have an fNIRS signal and I have applied both continuous and discrete wavelet transforms, but I am unable to separate the different frequencies on the basis of the coefficients. I want to know how I can tell which coefficient corresponds to which frequency.
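For the discrete wavelet transform the mapping comes from the dyadic band split: the detail coefficients at level j roughly cover the band fs/2^(j+1) to fs/2^j, and the final approximation covers 0 to fs/2^L. A sketch (the fs value here is a placeholder, not your actual fNIRS sampling rate):

```python
def dwt_detail_band(fs, level):
    """Approximate frequency band (Hz) of the detail coefficients cD_level."""
    return fs / 2 ** (level + 1), fs / 2 ** level

def dwt_approx_band(fs, max_level):
    """Approximate band of the final approximation coefficients cA_max_level."""
    return 0.0, fs / 2 ** max_level

fs = 10.0                      # placeholder sampling rate, Hz
for j in range(1, 5):
    print("cD%d:" % j, dwt_detail_band(fs, j))
print("cA4:", dwt_approx_band(fs, 4))
```

For the continuous transform, each scale maps to a pseudo-frequency via the wavelet's centre frequency (PyWavelets exposes this as pywt.scale2frequency, whose normalized result you divide by the sampling period).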
In order to do some simulation work for my research, I need a database of all standard types of ECG and PCG signals. Can anyone help me with this? Thank you.
My query is regarding the identification of inter-ictal and pre-ictal stages in the CHB-MIT Scalp EEG Database of epileptic seizures collected at Children’s Hospital Boston, in which the seizure intervals are given in the annotations. Most of the literature uses only two classes for this database, viz. SEIZURE and NON-SEIZURE. My question is about the identification of INTER-ICTAL, PRE-ICTAL, and ICTAL classes, so that an appropriate machine learning/deep learning algorithm can be adopted for the prediction of such seizure classes. Thanks in advance.
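Since the CHB-MIT annotations give only seizure intervals, the three classes have to be derived from them. A common scheme fixes a pre-ictal horizon before each onset and calls everything else inter-ictal; the 30-minute horizon below is a modelling choice (papers use anything from minutes to an hour), not part of the annotations:

```python
def label_epoch(t, seizures, preictal_horizon=30 * 60):
    """Label time t (seconds) given a list of (onset, offset) seizure intervals.

    preictal_horizon is an assumed 30 min; it is a design parameter,
    not something the CHB-MIT annotations specify.
    """
    for onset, offset in seizures:
        if onset <= t < offset:
            return "ictal"
    for onset, _ in seizures:
        if onset - preictal_horizon <= t < onset:
            return "preictal"
    return "interictal"
```

Many papers additionally discard a buffer right after each seizure (post-ictal) from the inter-ictal class; that refinement is omitted here.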
How can I implement an adaptive dictionary reconstruction for compressed sensing of ECG signals, and how can I analyse the overall power consumption of the proposed ECG compression framework as it would be used in a WBAN?
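I can't speak to the specific adaptive-dictionary paper, but the generic reconstruction step in such CS frameworks is a sparse solver such as orthogonal matching pursuit (OMP). A compact sketch over a fixed (non-adaptive) dictionary, as a starting point only:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the dictionary atom most correlated with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

On the power question: the usual WBAN argument is that the sensor node only computes the cheap random projections y = A @ x, while reconstruction runs off-body, so the consumption analysis instruments the encoder (MAC operations and radio payload per block) on the target hardware.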
Muscle tissue does not normally produce electrical signals during rest, so the expected amplitude in mV is roughly 0. However, when muscles are stiff they feel tight and are more difficult to move than usual, especially after rest; there may also be muscle pain, cramping, and discomfort. Cramps, which act like muscle stiffness, can occur when muscles are unable to relax properly because the myosin fibers do not fully detach from the actin filaments. In skeletal muscle, ATP must attach to the myosin heads for them to dissociate from the actin and allow relaxation; in the absence of sufficient ATP, the myosin heads remain attached to actin. So would we expect an amplitude well above 0 mV, perhaps in the 3 to 5 mV range?
Feeding knowledge directly into your brain, just like in the sci-fi classic The Matrix: a simulator that can feed information directly into a person’s brain and teach new skills in a shorter amount of time — "life imitating art". I think inventing a device that feeds skills directly into a person’s brain is a little far from believable. But what if we built a device that stimulates the parts of a person's brain related to the new skill that person is learning? Then, by using such a device, that person could learn the new skill in a much shorter time.
What additional information does the phase measurement in a frequency-domain imaging technique provide compared with the continuous wave technique that measures only the amplitude of the diffuse light?
I want to do multichannel ECG data compression using multiscale PCA. Are the transformed coefficients in the eigenspace?
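On the eigenspace question: in (multiscale) PCA compression, what you store are the projections of the (wavelet-transformed) channels onto the leading eigenvectors of the channel covariance, i.e. coordinates in the eigenspace. A plain-PCA sketch; the multiscale part (applying this per wavelet subband) is omitted here:

```python
import numpy as np

def pca_compress(X, k):
    """X: (n_samples, n_channels). Keep k principal components.

    Returns (coefficients in the eigenspace, eigenvector basis, channel means).
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    # eigenvectors of the channel covariance, via SVD of the centered data
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    P = vt[:k].T                     # (n_channels, k) eigenvector basis
    coeffs = Xc @ P                  # coordinates in the eigenspace
    return coeffs, P, mu

def pca_reconstruct(coeffs, P, mu):
    return coeffs @ P.T + mu
```

Compression comes from storing the (n_samples, k) coefficient matrix plus the small basis instead of all 12 leads, with k chosen from the explained variance.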
Can someone please provide some comments/references on the advantages of seismocardiogram (SCG) analysis while we already have mature ECG analysis and processing available? Also, please comment on the probable artifacts or noise that may be present in SCG recordings.
Please also share the latest signal processing algorithms suitable for SCG analysis and processing.
Thanks in advance.
I am working with sensor signals and facing some problems with signal manipulation. (Any idea/hint/suggestion is welcome, as I need something to move forward.)
I have changed the query and added some new details to make the question easier to understand.
I have 2 signals.
⦁ Light pink is the original signal (reference signal), with red dots showing the local maxima.
⦁ Blue is a signal obtained from a test of another sensor that resembles the reference sensor but has some faults in it, as it was made by us.
⦁ These signals are plotted against time on the x-axis.
I have attached some sample values of the signals here, so if anyone can help me with the logic along with MATLAB code, it will be a great help.
⦁ Please open the attached link.
⦁ Copy the MATLAB code and run it after saving.
⦁ You will see the two signals shown in the figure; when you zoom in, you will clearly see the difference.
(How can I make my original pink signal flat like the blue signal in the plots?)
⦁ I want to make my original signal (pink) look like the blue signal in terms of the flat portions only.
Common behaviour from which to deduce the logic:
⦁ The common behaviour I have seen in my measured signal is that it becomes flat where it finds a local maximum (on either the positive or the negative side).
⦁ At every local maximum I see that my blue signal becomes flat.
Everything I have to do is with the original signal (pink) in order to formulate some results.
Is there any way I can make my original signal flat, just like the blue signal?
Can someone suggest the best way to do that? If someone could also provide an example in MATLAB code, it would be a great help.
Please have a look at the picture to get an idea of what I mean.
Thanks a lot in advance for your help.
I have tried a few techniques.
The results of those techniques are as follows, but frankly nothing has worked for me so far.
I found the local maxima and, using nlfilter, applied a neighbourhood rule to flatten the peaks and their surroundings, but unfortunately it is not working: the window size is fixed, while in my case the window size varies and depends on the position of the local maxima; most importantly, a constant window size changes the shape of the signal.
I have also tried to apply a varying window, but it is not working for me; maybe I do not have a good grasp of how to apply a variable-size window, and I do not know how it would work for my signal.
To cut a long story short, what I have done up to now is not working, so I need help with this.
It would be really nice if someone could show me how to solve this issue; some MATLAB code would be a great help.
Thanks a lot in advance for your time, expertise, and help.
Code for loading and plotting the variables of the data in the attached link:
load originalflatenninrough t original_signal_data measured_Signal_data
[p, l] = findpeaks(original_signal_data);    % positive local maxima
[pn, ln] = findpeaks(-original_signal_data); % negative local maxima (valleys)
plot(t, original_signal_data, t, measured_Signal_data)  % plot both signals
legend('original signal', 'measured data signal')
Code on example test data for the NL filter (which is applied to the original signal):
t = 0:0.001:2;                     % example time vector for the test signal
n = 10;                            % number of values to replace in the neighbourhood of a local max
A = sin(2*pi*t);
[pks, locs] = findpeaks(A);        % local maxima
% [pks, locs] = findpeaks(-A);     % (use this form instead for local minima)
locations = zeros(size(A));
locations(locs) = true;
locations = conv(locations, ones(1, 2*n+1), 'same') > 0;  % mark the neighbourhoods
X = -inf(size(A));                 % create temporary
X(locs) = A(locs);                 % copy the local maxima
X = nlfilter(X, [1 2*n+1], @(x) max(x));  % spread each maximum over its window
X(locs) = A(locs);                 % ensure the local maxima are not changed
A(locations) = X(locations);       % copy the filtered temporary back to the signal
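Not MATLAB, but the variable-window logic described above can be sketched language-independently: grow the flat region outward from each local extremum while the signal stays within a tolerance of the extremum value. The tolerance is the knob to tune, and it is an assumption about what "flat portion" means for this measured signal:

```python
import numpy as np

def flatten_around_extrema(x, tol):
    """Clamp a variable-size neighbourhood of each local extremum to its value.

    The window is not fixed: it extends left/right while |x - peak| <= tol.
    """
    y = x.copy()
    for i in range(1, len(x) - 1):
        is_max = x[i] >= x[i - 1] and x[i] >= x[i + 1]
        is_min = x[i] <= x[i - 1] and x[i] <= x[i + 1]
        if not (is_max or is_min):
            continue
        lo = i
        while lo > 0 and abs(x[lo - 1] - x[i]) <= tol:
            lo -= 1
        hi = i
        while hi < len(x) - 1 and abs(x[hi + 1] - x[i]) <= tol:
            hi += 1
        y[lo:hi + 1] = x[i]        # flatten the grown neighbourhood
    return y
```

The MATLAB translation is mechanical (the same two while loops), and it avoids nlfilter's fixed window entirely, since each extremum gets exactly the window the data supports.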
The goal of many algorithms in (biomedical) signal processing is, at the end of the day, to perform some sort of classification, e.g., binary classification. The binary labels (categorical response variables) could for instance represent the presence or absence of a disease or the binary signal quality of a physiological recording.
When working with time series data, such as the ECG, EEG, blood pressure etc., one can extract features from these signals, that are afterwards used for classification.
When working with the ECG for example, one could use the duration, height of the QRS complex etc., as features for a classification of normal/abnormal beats. Each training sample, corresponding to one heart beat, would consist of the feature vector and the label.
Now, when training a classifier, be it an SVM, logistic regression, or some sort of decision tree/forest, the samples are clearly NOT independent. These classifiers, however, assume that the samples are IID, and I observe that decision trees, for example, overfit heavily on such data. This is also problematic for ensemble methods such as random forests, which rely on bagging (resampling the training data).
What are common approaches to alleviate this problem or work with dependent samples in a classification procedure?
I know that alternative methods such as hidden Markov models are well suited for time series data, but I am specifically interested in the supervised classification setup using such type of learners (SVM, Log.Regression, ....).
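One standard mitigation, independent of the learner, is to make the resampling group-aware: all beats from one recording (or patient) go into the same fold, so the dependence never straddles the train/test boundary and the CV estimate is not inflated. scikit-learn ships this as GroupKFold; a dependency-free sketch of the idea:

```python
def group_kfold(groups, n_splits):
    """Yield (train_idx, test_idx) pairs such that no group spans two folds.

    groups: one hashable group id per sample (e.g. the patient/record id).
    """
    unique = sorted(set(groups))
    folds = [unique[i::n_splits] for i in range(n_splits)]
    for fold in folds:
        fold = set(fold)
        test = [i for i, g in enumerate(groups) if g in fold]
        train = [i for i, g in enumerate(groups) if g not in fold]
        yield train, test
```

The same grouping should be applied inside the bagging loop of a random forest (sample whole records, not individual beats) if you want honest out-of-bag estimates.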
I am working on signal processing, and I am now looking for a DSP processor that supports signal processing. Kindly send me some supporting materials related to this.
My work concerns filtering event-related potentials (ERPs), and I would like to know which methods I could use to estimate the signal-to-noise ratio (SNR) of real data before and after filtering the signals.
Thank you for your help; I really appreciate it.
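One widely used estimator treats the across-trial average as the signal and the residual about that average as the noise, which can be computed on the same epoched data before and after filtering. A sketch under that assumption (it underestimates SNR when the ERP varies in latency across trials):

```python
import numpy as np

def erp_snr_db(trials):
    """SNR estimate for (n_trials, n_samples) epoched data.

    Assumes the ERP is the across-trial mean and everything else is noise.
    """
    erp = trials.mean(axis=0)
    residual = trials - erp
    p_signal = np.mean(erp ** 2)
    p_noise = np.mean(residual ** 2)
    return 10.0 * np.log10(p_signal / p_noise)
```

Comparing the value on raw versus filtered epochs gives the before/after figure; a common alternative uses the pre-stimulus baseline variance as the noise estimate instead.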
After filtering out the power-line noise, electrode-movement artifacts, and muscle and eye-blink noise, why do we need to filter the EEG signal to suppress frequencies above 30 Hz?
I am new to active shape models (ASM) and I want to use it in my research to do image segmentation.
For using an ASM, there should be a training set to generate the statistical shape model x = x̄ + Pb, where x̄ is the mean shape, P contains the eigenvectors, and x is the shape obtained by changing the shape parameters b.
In my particular case there is no training set. However, the mean shape is known, as are the shape constraints. Is there a way to use the statistical model? Use simulated shapes for training? But how would I mimic the gray-level profile?
Thanks in advance.
Whom do you trust more for your heart monitoring, diagnosis, and therapy?
A medical doctor such as a cardiologist, or machine learning and artificial intelligence? Do you trust your cardiologist, or a sophisticated computer-based cardio-diagnostic system built on machine learning and artificial intelligence, to interpret and evaluate the health status of your heart?
We use the analogue output of the Portapres device for monitoring continuous blood pressure.
Is band-pass filtering necessary for such blood pressure wave analysis?
What should the low-pass/high-pass cut-offs be?
I'm going to design an innovative theranostic device for the biomedical sector. The system needs to be connected to the Internet so it can be remote-controlled. Can anyone suggest an easy-to-use IoT board? Thanks in advance.
Can anyone tell me about sub-harmonic interference in neural recordings? We have in-vivo neural data recordings that have been severely corrupted by radio interference (basically sub-harmonic interference of the mobile-phone frequency). I had never heard of such sub-harmonic interference before, but based on the recording experiment we are fairly sure that the interference is due to radio frequencies transmitted by cell phones during a voice call, with the interference ranging roughly from 150 Hz up to a few kHz. The sampling frequency of the recording was around 40 kHz.
If anyone knows anything about such sub-harmonic interference, please let me know. Thanks in advance!
I am working on psychoacoustic active noise control, and I want to design a psychoacoustic model for sound quality measurement. I went through Zwicker's book, Psychoacoustics: Facts and Models.
I am not able to understand how to write a MATLAB program to calculate the loudness.
I’m working on a project on the development of an electronic communication device to be used with invasively mechanically ventilated patients who are not able to speak. Besides measuring its impact on patient outcomes, I am looking to measure the nurses' experience or satisfaction while using this new device.
Multiple sclerosis (MS) is a demyelinating disease in which the insulating covers of nerve cells in the brain and spinal cord are damaged. In some cases we use FES (functional electrical stimulation) to compensate for impaired walking. But is there any solution to improve the conductivity of neurons in the absence of the insulating covers, so that a system like FES could then be used in other cases as well, for example a visual system?
Hi. As we know, by stimulating particular cells, doctors and scientists are able to convey different sensations, such as pain, to patients. Now the question is: is it possible to impart specific data, such as words in other languages, to the brain by extracellular stimulation?
What I noticed from the literature is that for automated heartbeat classification ECG segmentation is an essential pre-processing step.
However, I notice that they fix the same beat length for all types of heartbeats, such as normal, PVC, APC, paced beats, etc. How advisable is this in practical situations?
Next, segmentation lengths differ from paper to paper, so how can they compare their results with others?
If it is desirable to fix the length of a segment, why can't we choose a window that covers the P, QRS, and T events rather than using fiducial points?
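On the fixed-length point: the usual implementation simply cuts an asymmetric window around each R-peak (more samples after than before, to include the T wave). The pre/post counts below are illustrative placeholders, not standard values:

```python
import numpy as np

def segment_beats(ecg, r_peaks, pre=90, post=160):
    """Cut a fixed window [r - pre, r + post) around each R-peak sample index.

    pre/post are assumed values; beats too close to the record edges are dropped.
    """
    beats = [ecg[r - pre:r + post]
             for r in r_peaks
             if r - pre >= 0 and r + post <= len(ecg)]
    return np.array(beats)
```

A wide-QRS PVC may not fit the same window as a normal beat, which is exactly the concern raised above; that is one reason some papers instead segment between consecutive fiducial points and resample every beat to a common length, trading morphology fidelity for comparability.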
Previous studies showed that it is possible to evaluate fetal heart rate from the heart sound signal. I wish to examine the previous methods, and I would be grateful if someone could share fetal heart sound recordings with me.
In each state of life we have a different mood, but some people can easily hide it. Is there any way to detect human moods from the EEG signal, given that the EEG cannot be consciously hidden?
I am not an expert in compressed sensing (CS). I know CS can reconstruct the texture of the original image from highly undersampled data if certain conditions are met. However, I have no idea what CS algorithms do to the noise. Is it possible that a compressed-sensing reconstructed image in MRI maintains or even improves SNR compared to the fully sampled original image?
Is it possible to do feature normalization with respect to class? E.g., take a 10x10 data matrix with two classes, each class of size 5x5, and normalize the 25 values of class 1 and the 25 values of class 2 separately. Is this process acceptable?
My problem consists of a data matrix of size 1000x12: 12 features with 1000 examples, in a 4-class problem. My question is how to perform feature normalization. I can see several possibilities (listed below); please suggest the right one.
(1) normalization of all feature values belonging to a specific class individually (if class 1 contains 300x12 = 3600 values, normalize those 3600 values);
(2) normalization of all 12000 values at once;
(3) normalization of each column of the data matrix (1000x12).
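For comparison, option (3) is the conventional choice: z-score each feature column, with the statistics estimated on the training data only and then reused on the test data. Per-class normalization (option 1) uses the label itself, which is not available for a test sample, so it cannot be applied consistently at prediction time. A sketch of (3):

```python
import numpy as np

def fit_zscore(X_train):
    """Column-wise mean/std, estimated from the training data only."""
    return X_train.mean(axis=0), X_train.std(axis=0)

def apply_zscore(X, mu, sd):
    """Apply training statistics to any (train or test) matrix."""
    return (X - mu) / sd
```

Option (2), normalizing all 12000 values with one global mean/std, only makes sense if all 12 features share the same units and scale, which is rare.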
Should the selection of the time-delay τ values, for computing measures that depend on this parameter, depend on the sampling rate of the signal being analyzed with those measures?
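For what it's worth: yes — τ is usually specified in samples, so the same physical delay corresponds to a different τ at a different sampling rate. A common data-driven choice is the first zero crossing (or first minimum) of the autocorrelation function; a sketch of the zero-crossing variant:

```python
import numpy as np

def delay_from_autocorr(x):
    """Smallest lag (in samples) at which the autocorrelation becomes <= 0."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    for lag in range(1, n):
        r = np.dot(x[:n - lag], x[lag:])    # unnormalized autocorrelation
        if r <= 0.0:
            return lag
    return n - 1
```

Doubling the sampling rate roughly doubles the lag this returns, which is the point: the rate-independent quantity is the delay in seconds (lag / fs), not the lag in samples.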
I'm currently collecting a large amount of blood flow data with the use of a Perimed laser Doppler rig. The recording is pretty simple (15 minutes recorded baseline, 2 minutes occlusion, 7 minutes post-occlusion). I'm looking to evaluate pre/post measures of the different oscillatory bands following an intervention. I've read a lot of excellent papers that utilized wavelet analysis to create a spectrogram of the relative oscillatory bands (and I'd love to follow suit), but I can't find a good methods paper, or even a computer program, to help me analyze my data. If anyone has a suggestion for relevant software or a good working guidebook, I would be hugely appreciative.
I would prefer an automatic or semi-automatic (control-point-based) registration of CT and US.
Hi, I'm currently trying to estimate spectral coherence between two signals recorded from the same channel in two different conditions. This is fine when applying this to an individual subject, however, I am uncertain of how to quantify and compare this at the group level (six subjects, two conditions).
Any help would be greatly appreciated.
[cA,cD] = dwt(sig, 'db1'); I have used this code to decompose a signal and obtained cA and cD of size (I x n), but if I change the 'wname' to db2, db4, etc., I get output of size (I x (n-1)), whereas I require size (I x n). Kindly help me solve this.
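This length change is expected: with the default symmetric signal extension, a single-level DWT returns coefficient vectors of length floor((n + L − 1) / 2), where L is the wavelet filter length (2 for db1, 4 for db2, 8 for db4), so only db1 gives exactly n/2. A quick check of the formula, plus the usual fix — periodization mode, which forces length ceil(n/2) for every wavelet:

```python
def dwt_coeff_len(n, filter_len, mode="symmetric"):
    """Single-level DWT coefficient length, as documented by PyWavelets."""
    if mode == "periodization":
        return (n + 1) // 2              # ceil(n / 2), wavelet-independent
    return (n + filter_len - 1) // 2     # floor((n + L - 1) / 2)

print(dwt_coeff_len(100, 2))                         # db1 -> 50
print(dwt_coeff_len(100, 4))                         # db2 -> 51
print(dwt_coeff_len(100, 8))                         # db4 -> 53
print(dwt_coeff_len(100, 8, mode="periodization"))   # db4, per mode -> 50
```

In MATLAB the equivalent switch is dwtmode('per') before calling dwt; in PyWavelets it is pywt.dwt(sig, 'db2', mode='periodization'). Either way, cA and cD come out the same length for every wavelet.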
I'm sending out a general call for your help in deciding on a thesis topic for my MS degree. I have an MSEE and worked as an electronics engineer for 20 years (in the US defense, fiber-optics, data-storage, and machine-vision industries). I have now decided to leave that behind, and I returned to school this past January 2016 for an MSBE degree (a Ph.D. later on, God willing), since I wanted to meld my knowledge of electronics with the field of bioengineering (new to me, anyway). The problem is that I have had no life-sciences background (biology, chemistry, biochemistry, etc.) to speak of since high school (and that was a really long time ago!). All the thesis project ideas I have seen are so biology/life-science-based that I, unfortunately, can't do them (without spending another year or two acquiring that knowledge). Designing standard bioelectronic devices such as heart rate monitors/pacemakers etc. bores me silly, and honestly I don't think one could submit that as a thesis topic and still be an honest individual. I have the following interests:
- Image processing ... I designed an IP board as my MSEE thesis back in the day. Analyzing MRI scan images seems to have been done to death. Is it still a topic worthy of an MS degree? If so, are there any new challenges here?
- Bioimpedance measurements ... I'm new to this but find the whole concept intriguing and fascinating. But designing body fat measurement devices is also too boring and technically dumb. Not worth an MS degree in my opinion. Sorry!
- Biosensors ... new to them as a concept and find them also very intriguing and fascinating. But I'm afraid my lack of biology/biochemistry would be a hindrance and burden to me.
Any suggested project must be realistically doable within a 4-5 month timeframe, tops. I'd start in Spring 2017 then finish in the following summer.
Thanks All :)
I am engaged in a project related to the stimulation of pancreatic beta cells with different electrical waveshapes. My computational analysis, based on a mathematical model of this cell, shows that if the induced electric field in the membrane region reaches several hundred kV/m, the electrical stimulation may change cell functions. Now I face some ambiguity: is the electric field in the membrane region equal to the electric field produced by applying voltages to electrodes on both sides of the cell culture medium (separated by a 10 cm distance), or is it different?
Any help will be appreciated.
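A back-of-the-envelope check may help: the two fields are not equal. The bulk field between plate-like electrodes is V/d, but the induced transmembrane voltage (steady-state Schwan equation, ΔV ≈ 1.5·E·r·cosθ for a spherical cell) drops across a membrane only a few nanometres thick, so the membrane field is amplified by orders of magnitude. A sketch, with assumed typical values for the cell radius and membrane thickness:

```python
def bulk_field(voltage, electrode_gap):
    """Uniform field between plate electrodes, in V/m."""
    return voltage / electrode_gap

def membrane_field(e_bulk, cell_radius=10e-6, membrane_thickness=5e-9):
    """Peak membrane field from the steady-state Schwan equation (cos(theta) = 1).

    cell_radius and membrane_thickness are assumed typical values.
    """
    delta_v = 1.5 * e_bulk * cell_radius    # induced transmembrane voltage
    return delta_v / membrane_thickness

e = bulk_field(10.0, 0.10)     # 10 V across 10 cm -> 100 V/m in the medium
print(membrane_field(e))       # ~3e5 V/m across the membrane
```

So a modest bulk field of 100 V/m already produces a membrane field of a few hundred kV/m with these parameters; the amplification factor here is 1.5·r/d ≈ 3000.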
In order to test any event detection algorithm, it is common practice to compute a confusion matrix, in order to get performance parameters (e.g. sensitivity, and specificity).
It is well known that the confusion matrix must be built by (identifying and) counting the numbers of true/false positive/negative detections, according to a pre-annotated dataset (gold standard).
Now, here is my question: how can I identify a true/false positive?
Let me show you a little example.
My algorithm detects the QRS complex's peak, in order to get the R-R intervals of an ECG recording. My pre-annotated dataset identifies the QRS complex not at the QRS peak, but some samples before it.
Obviously, when testing my algorithm, I cannot count a true positive "when my algorithm identifies the same point as the gold standard", since there is always a time shift between them. On the other hand, because of the time shift, all of my detections would be classified as false positives (no time coincidence with the gold standard).
An easy solution would be to consider a "coincidence window" around each point in the gold standard but, even if that fixes the problem with true/false positives, the problem with true/false negatives still remains unsolved.
Does anybody have an idea? (Any reference would be great for me.)
Thanks a lot.
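For what it's worth, the coincidence-window approach does handle false negatives too, provided it is implemented as one-to-one matching: every matched pair is a TP, every unmatched detection is a FP, and every unmatched reference annotation is a FN. (True negatives are ill-defined for event detection, which is why sensitivity and positive predictivity are reported instead of specificity; the WFDB bxb tool works this way, with the AAMI-standard 150 ms match window.) A sketch:

```python
def match_events(reference, detections, tol):
    """Greedy one-to-one matching of detection times to reference times.

    tol: coincidence window, in the same units as the timestamps.
    Returns (TP, FP, FN).
    """
    used = [False] * len(reference)
    tp = fp = 0
    for d in sorted(detections):
        hit = None
        for i, r in enumerate(reference):
            if not used[i] and abs(d - r) <= tol:
                hit = i
                break
        if hit is None:
            fp += 1                  # detection with no reference partner
        else:
            used[hit] = True
            tp += 1
    fn = used.count(False)           # references never claimed by a detection
    return tp, fp, fn
```

Sensitivity is then TP/(TP+FN) and positive predictivity TP/(TP+FP), with no true-negative count needed.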
I need free software to simulate athletic movements, for example parallel-bar movements in gymnastics. I would appreciate it if someone could help me with this matter.
I want to obtain the cardiac sound characteristic waveform proposed in the paper available at the following link.
The equation is: C1Y2(n) + C2Y1(n) + C3Y(n) = X(n), where X(n) is the input discrete signal.
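Without access to the paper's exact discretization, one reading of that equation takes Y2 and Y1 as second- and first-order backward differences of Y, in which case the equation can be solved sample by sample as a simple recursion: (C1 + C2 + C3)·Y(n) = X(n) + (2C1 + C2)·Y(n−1) − C1·Y(n−2). This interpretation is an assumption, so verify it against the paper before use:

```python
def solve_cscw(x, c1, c2, c3, y0=0.0, y1=0.0):
    """Solve C1*Y2(n) + C2*Y1(n) + C3*Y(n) = X(n) for Y, assuming Y2/Y1 are
    backward differences of Y (a guess at the paper's scheme, not confirmed)."""
    denom = c1 + c2 + c3
    y = [y0, y1]                 # two initial conditions
    for xn in x:
        y.append((xn + (2.0 * c1 + c2) * y[-1] - c1 * y[-2]) / denom)
    return y[2:]
```

As a sanity check, with C1 = C2 = 0 the recursion degenerates to Y(n) = X(n)/C3, as the original equation requires.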
Hi all. I'm currently trying to expand upon an analysis in which I used only a small selection of channels as my ROI, and now I'm considering a data-driven approach. Is there a consensus on the best method for this, or is it more a matter of trial and error?
The standard scipy.signal.resample function is used to resample signals. Can anybody tell me how to change the sampling frequency of a speech signal from 44100 Hz to 8000 Hz using scipy.signal.resample?
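scipy.signal.resample takes a target number of samples rather than a target rate, so the new length is computed from the ratio 8000/44100. For speech, scipy.signal.resample_poly is usually preferable, since the rational factor 80/441 avoids the FFT method's edge artifacts on non-periodic signals. A sketch with a synthetic one-second tone standing in for the speech signal:

```python
import numpy as np
from scipy.signal import resample, resample_poly

fs_in, fs_out = 44100, 8000
x = np.sin(2 * np.pi * 440.0 * np.arange(fs_in) / fs_in)  # 1 s of a 440 Hz tone

# FFT-based: pass the target length explicitly
n_out = int(round(len(x) * fs_out / fs_in))
y_fft = resample(x, n_out)

# Polyphase: pass the rational rate ratio (8000/44100 = 80/441)
y_poly = resample_poly(x, 80, 441)

print(len(y_fft), len(y_poly))   # both give 8000 samples
```

Both methods apply the anti-aliasing low-pass implicitly; content above 4 kHz in the original signal is (and must be) discarded.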
I'm doing my research in biomedical signal analysis. Can I get some frequency-domain analysis methods best suited for biomedical signals, preferably ones that could also be implemented in hardware?
Thanks in advance..
Basically, I have two time-series signals obtained from different sensors. One signal has a constant sample interval (20 ms), while the other has a different and variable sample interval (it goes from 10 ms to almost 1000 ms). I realized this problem after analyzing my data. The latter signal was acquired through a sensor connected to a local network, so I think that is the reason for such variability.
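Before comparing the two streams, the irregular one can be re-gridded onto the 20 ms clock of the regular one by interpolation. A minimal sketch with linear interpolation, which is adequate only where the irregular gaps are short relative to the signal's dynamics; with gaps approaching 1 s, the interpolated stretches should probably be flagged as unreliable rather than trusted:

```python
import numpy as np

def regrid(t_irregular, x_irregular, dt=0.02):
    """Linearly interpolate an irregularly sampled signal onto a uniform grid."""
    t_uniform = np.arange(t_irregular[0], t_irregular[-1], dt)
    return t_uniform, np.interp(t_uniform, t_irregular, x_irregular)
```

After this step both signals share one time base, and any sample-aligned analysis (correlation, coherence, etc.) applies directly.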
I work on environmental artifacts during EEG recordings. I'm studying the influence of light, of conductive equipment linked (or not linked) to ground, and of the use or non-use of the electrodes that record the EKG and EMG. I would like to know whether there are other artifacts, but only non-physiological ones.
I would like to conduct HRV spectral analysis on different pathological groups. However, the groups differ in mean age and age range, and HRV depends on age. How can I evaluate the data? Is there any paper that adjusts HRV parameters for age, thus allowing group comparison?
In speaker recognition we extract i-vectors, but afterwards we reduce the i-vector dimensionality using LDA. Why is this compulsory?