Robust ECG R-peak Detection Using LSTM
Juho Laitala
Department of Future Technologies,
University of Turku
Turku, Finland
Mingzhe Jiang
Department of Future Technologies,
University of Turku
Turku, Finland
Elise Syrjälä
Department of Future Technologies,
University of Turku
Turku, Finland
Emad Kasaeyan Naeini
Department of Computer Science,
University of California Irvine
Irvine, California
Antti Airola
Department of Future Technologies,
University of Turku
Turku, Finland
Amir M. Rahmani
School of Nursing,
Dep. of Computer Science,
University of California Irvine
Irvine, California
Nikil D. Dutt
Department of Computer Science,
University of California Irvine
Irvine, California
Pasi Liljeberg
Department of Future Technologies,
University of Turku
Turku, Finland
Detecting QRS complexes or R-peaks from the electrocardiogram
(ECG) is the basis for heart rate determination and heart rate variability
analysis. Over the years, many different methods have been
proposed as solutions to this problem. The vast majority of the
proposed methods are traditional rule-based algorithms that are vulnerable
to noise. We propose a new R-peak detection method
based on the Long Short-Term Memory (LSTM) network. LSTM
networks excel at temporal modelling tasks that include long-term
dependencies, making them suitable for ECG analysis. Additionally,
we propose a data generator for creating noisy ECG data that is used
to train the robust R-peak detector. Our initial testing shows that
the proposed method outperforms traditional algorithms, with the
greatest competitive edge achieved on noisy ECG signals.
CCS Concepts: • Computing methodologies → Machine learning
Keywords: LSTM, noisy ECG, R-peak detection, data augmentation
ACM Reference Format:
Juho Laitala, Mingzhe Jiang, Elise Syrjälä, Emad Kasaeyan Naeini, Antti
Airola, Amir M. Rahmani, Nikil D. Dutt, and Pasi Liljeberg. 2020. Robust
ECG R-peak Detection Using LSTM. In The 35th ACM/SIGAPP Symposium
on Applied Computing (SAC ’20), March 30-April 3, 2020, Brno, Czech Republic.
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s).
SAC ’20, March 30-April 3, 2020, Brno, Czech Republic
©2020 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-6866-7/20/03.
ACM, New York, NY, USA, Article 4, 8 pages.
An electrocardiogram (ECG) shows the strength and timing of
electrical activity in the heart by measuring from specific sites on
the body’s skin surface. Among the several standard ECG waves,
the QRS complex in the center of an ECG cycle is the most visually
distinct part. The correct detection of the R-peak in the QRS complex
is the basis of subsequent interpretive analyses such as heart rate
extraction and heart rate variability analysis in disease diagnosis,
well-being tracking, as well as in research studies where autonomic
nervous system activities are observed [8, 30, 35].
Among the QRS complex or R-peak detection algorithms developed
in the past thirty-five years, the one developed by Pan
and Tompkins [ ] is the most well known, serving both as an
algorithm benchmark and as the basis of several other algorithms
developed after it (e.g., [ ] and [ ]). Many of the previously developed
R-peak detection algorithms contain two main parts: signal
processing to enhance the presence of the QRS complex (i.e., signal
derivatives, applying filters or signal transformations) and amplitude
threshold-based peak decision making [ ]. Those methods
are usually lightweight and can be implemented on embedded
devices. However, these algorithms often perform well only with
relatively clean ECG signals and are not robust enough to noise
and artifacts [ ]. On the other hand, the signal quality can
vary over time, particularly in wearable devices, due to motion,
electrode-skin conductance changes, or the use of dry electrodes
[ ]. These issues call for more robust methods for R-peak detection
that are more resilient to ECG signal noise such as baseline
wander and muscle artifact. The need for such solutions is further
pronounced if accuracy is of higher priority than real-time requirements
(e.g., post-processing) or when processing can be performed
in the back-end (e.g., in the Fog or Cloud layers).
In this paper, we propose an LSTM-based R-peak detection approach
which is robust against quality fluctuations and high degrees
of artifacts in ECG signals. In the evaluation, our approach
outperforms several existing R-peak detection algorithms on our
dataset. LSTM networks are more capable of dealing with
temporal modeling tasks than other types of neural networks
because they can remember long-term dependencies [ ], making
them well suited for time series signals. LSTM networks can be
bidirectional, multilayered, or a combination of the two. A bidirectional
LSTM [ ] is a combination of two LSTMs that process the input
sequence in both chronological and reversed order and that are
connected to the same output layer. Therefore, for every time
step of a given sequence, a bidirectional LSTM has information from
the time steps preceding and following it. In multilayered LSTM
networks, multiple LSTM layers are stacked on top of each other
so that the output sequence of one layer forms the input sequence
of the next layer [ ]. This increases the representational capacity of
the network and allows it to learn more complex problems.
To the best of our knowledge, this is the first study predicting
R-peak locations directly from raw ECG signals using LSTM. Only
a few neural network solutions are found in the literature, and they
use feed-forward networks which lack the capability to model time
dependency [26, 33].
Training a neural network requires a large volume of labeled or
annotated data, which is highly expensive to produce, particularly
when peak-by-peak checking and manual corrections are needed. Therefore,
most of the existing R-peak detection methods are developed from
benchmark ECG databases. Among those, the MIT-BIH Arrhythmia
Database [ ] is the most frequently used. However, the signals
in this database are relatively noise free and thus alone are
not enough for training a robust R-peak detector. To the best of
our knowledge, there are only two open databases that contain
noisy ECG signals with annotated peaks [ ]. However, these
databases are not large enough for training accurate neural network
models.
As part of the proposed R-peak detection method, we introduce a
noisy ECG data generator for training data augmentation. Our data
generator mixes data from two open databases to create training
data similar to noisy ECG signals recorded in real life. This data is
then used to train an LSTM network to detect R-peaks. Our method is
tested with a separate dataset containing ECG signals from a clinical
study with different levels of artifacts. In addition, one signal from
the dataset is selected and different amounts of Gaussian noise are
added to it. The proposed R-peak detector is compared with several
classic R-peak detection methods using the same dataset.
To sum up, the contributions of this work are twofold:
first, we propose a robust LSTM-based solution for R-peak
detection with noisy ECG signals;
second, we propose a data generator for generating noisy ECG
signals to train the robust R-peak detector.
The implementation is released as open source on GitHub. The
rest of the paper is organized as follows: Section 2 introduces the
datasets used in our study; Section 3 presents the proposed method
in detail; Section 4 evaluates the proposed method and compares
it against several other methods; and Section 5 concludes and
discusses this work.
Three datasets were involved in this work: two publicly
available databases in the model training phase, and one clinical
database in the model evaluation phase.
The data used in the training phase consists of two annotated
databases, the MIT-BIH Arrhythmia database [ ] and the MIT-BIH
Noise Stress Test database [ ]. The former has been widely
employed in R-peak or QRS complex detection studies and serves
as a benchmark database for detection algorithm development and
testing [ ]. Its 48 ECG records from 47 subjects were digitized
at 360 Hz, and each record covers 30 minutes. The signals in the
database are in general clean and may contain different types of
arrhythmia. The modified lead II ECG was used in this study. The MIT-BIH
Noise Stress Test database contains three different noise records
that represent typical noise sources in ambulatory ECG recordings.
In this study two noise records, baseline wander (BW) and
muscle artifact (MA), were used. The third noise source, powerline
interference, was instead simulated as a 60 Hz sinusoidal wave
in this study.
The goal of this study is to develop an R-peak detection algorithm
that is robust to noise. Thus, for testing the method we use a
real-world dataset containing one or several ECG contaminants,
such as powerline interference, electromyographic noise, baseline
wander, or electrode motion artifact. The test dataset is part of
a larger database, where a one-channel ECG signal was recorded by
a portable biopotential acquisition device [ ] from postoperative
patients during re-examination. In total, 103 minutes of lead I ECG
recordings sampled at 500 Hz from 7 patients are included in the
test phase. The recordings include irregular heart rhythms as well
as premature ventricular contractions and arrhythmia. Nearly 19
minutes of the signals contain different types of noise while still
having visually detectable R-peaks. The ground truth labels of the
R-peak locations (annotations) in the dataset come from a
threshold-based automatic R-peak detector followed by manual corrections.
3.1 Overview
Our development process was twofold (see Figure 1). In the first phase,
we used publicly available ECG and ECG noise databases to train
the LSTM network. The heart of the training process was the generator
function that created training data by mixing ECG signals with
noise. In the second phase, we implemented a wrapper function that
uses the model to detect R-peak locations. Also, a filtering function
was developed which allows filtering out unnaturally closely
occurring R-peaks by using distance and model predictions as
decision criteria. After development, we tested our method against
a real-world ECG dataset and also against one ECG signal with
variable degrees of additional Gaussian noise. All of the development
work was done with the Python 3 programming language,
while the following external libraries were also utilized: Keras [3],
NumPy [ ], SciPy [ ], TensorFlow [ ] and wfdb-python [ ].
In addition, the following external Python libraries were used during
the method evaluation: BioSPPy [ ], Matplotlib [ ] and Pandas [ ].
Figure 1: Overview of the proposed method.
3.2 Training data preparation
3.2.1 ECG recordings. All ECG recordings and corresponding annotations
were downsampled from 360 Hz to 250 Hz. Working at a
lower frequency is beneficial as more temporal information can be
squeezed into the same number of sample points. This lowers the
computational costs and at the same time may help the LSTM model
to process temporal information. The ECG annotations have different
encodings for 19 different heart beat types that indicate different
types of arrhythmia. Of these, the normal beat type is the most
common: 75,052 beats are classified as normal. To simplify the problem,
only normal beats were selected for training. Normal
beats were further filtered so that only those
located within 5 samples of a local maximum of the ECG were
kept. This was done because it was noticed that in some rare cases beats
labeled as normal occurred as local minima (downward
facing peaks) of the ECG signal.
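The preprocessing above, downsampling plus keeping only beats near a local maximum, might be sketched as follows. The helper name and the ±25 sample search neighbourhood are our own assumptions, not taken from the paper:

```python
import numpy as np
from scipy.signal import resample_poly

def preprocess_record(sig, beat_idx, fs_in=360, fs_out=250):
    """Downsample an ECG record 360 Hz -> 250 Hz and keep only beat
    annotations that sit near a local maximum of the signal."""
    out = resample_poly(sig, fs_out, fs_in)          # rational factor 25/36
    # rescale annotation indices to the new sampling rate
    beats = np.round(np.asarray(beat_idx) * fs_out / fs_in).astype(int)
    kept = []
    for b in beats:
        # search a small neighbourhood (width is an assumption) and keep
        # the beat only if it lies within 5 samples of the local maximum
        lo, hi = max(0, b - 25), min(len(out), b + 26)
        if abs(lo + int(np.argmax(out[lo:hi])) - b) <= 5:
            kept.append(b)
    return out, np.asarray(kept)
```

A downward-facing peak fails the local-maximum test and is dropped, matching the filtering rule described above.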
3.2.2 Noise recordings. Downsampling from 360 Hz to 250 Hz was
also done for both noise recordings. Both recordings contain two
channels that differ from each other. Longer recordings from both
noise sources were constructed by concatenating these channels.
3.3 Model architecture
The newest version (2.0 RC) of the TensorFlow deep learning framework
with the high-level Keras API was used to build and train the
model. Figure 2 shows the constructed sequential model with two
bidirectional LSTM layers and one dense layer. Both bidirectional
layers have 64 units each, while the final dense layer contains just
one output unit. The hyperbolic tangent was used as the activation
function for all layers except the final output layer, which
uses a sigmoid activation function. The model contains 132,737
parameters, all of which are trainable. Input and output shapes are of
the form (batch size, time steps, features), where the time steps and
features are fixed to 1000 and 1, respectively. The model performs a
sequence-to-sequence mapping where the input and output sequences have
the same length: a probability of being an R-peak
is produced for every time step of the input sequence. This is illustrated
in Figure 3, where the predictions produced by the model are
shown below the corresponding inputs.
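The architecture described above can be sketched with the Keras API; this is a sketch based on the stated layer sizes, not the authors' released code. The fixed 1000-step input length follows the training windows, and the loss and optimizer match those named in the training section. With these shapes, the parameter count works out to exactly the 132,737 stated above:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Two stacked bidirectional LSTM layers (64 units per direction, tanh by
# default) followed by a per-time-step sigmoid output unit.
model = tf.keras.Sequential([
    layers.Input(shape=(1000, 1)),                        # (time steps, features)
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Dense(1, activation="sigmoid"),                # R-peak probability
])
model.compile(loss="binary_crossentropy", optimizer="adam")
```

`return_sequences=True` is what makes the mapping sequence-to-sequence: every time step of the input produces one output probability.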
Figure 2: Schematic illustration of the network architecture.
Figure 3: Upper row: Noise-free and noisy ECG examples
from the test dataset. Both examples have been downsampled
to 250 Hz and normalized to the range [-1,1]. Corresponding
predictions in the lower row. Notice the high probability
(almost 0.5) produced for the noise peak.
3.4 Model training
3.4.1 Generating training data. A generator function was used to
generate the training data. It was constructed so that it also takes
care of data augmentation during training. Data augmentation
was carried out by mixing ECG signals from the MIT-BIH
Arrhythmia database with baseline wander, muscle artifact and
powerline interference noise sources. The former two (Figure 4) were
taken from the MIT-BIH Noise Stress Test Database, while powerline
interference was simulated with a 60 Hz sine wave.
Figure 4: Examples of the used noise types from MIT-BIH
Noise Stress Test Database, in 1000 sample windows.
The goal of the data augmentation phase was to generate a large
amount of diverse training examples that simulate real-life ECG
recordings. The generator function yields batches of training data and
corresponding labels. Each training instance in the batch is constructed
as follows (Figure 5):
(1) Randomly select one ECG record.
(2) Randomly select a 1000 sample window from the ECG record.
(3) Check that all beats in the window are classified as normal.
(4) Add randomly selected noise to the window.
(5) Create numerically encoded labels for the selected window.
Figure 5: The construction of a single training instance in
the generator function. BW=Baseline wander, MA=Muscle artifact.
U indicates the uniform distribution from which the random
multiplier is drawn.
Steps 1-3 are self-explanatory, but steps 4 and 5 warrant
a more detailed description. In step 4, 1000 sample windows are
first randomly selected from the baseline wander and muscle artifact
noise sources. Then the category of the added noise is determined
randomly: it can be baseline wander, muscle artifact, or a composite
of the two. After category selection, the selected noise source is
multiplied by a random number. For baseline wander the random number
is drawn from the uniform distribution U(0,10), while for muscle artifact
it is drawn from U(0,5). As a final step, a 60 Hz
sine wave multiplied by a random number from U(0,0.5)
is added to the selected noise category.
The bounds of the uniform distributions in all the aforementioned
cases were determined by visually examining the training examples produced by the
generator function. Randomization was purposefully used in every
possible step to maximize the variation in the training data. In this
way, the generator function produces almost noise-free training
examples as well as examples that are saturated with complex noise.
The composed noise is added to the ECG signal, which has been
normalized to the range [-1,1]. After noise addition, the noisy ECG
signal is normalized again to the same range so that all training
examples of the batch have a similar range.
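A minimal NumPy sketch of steps 2 and 4, assuming in-memory ECG and noise arrays. The function name is our own, and the beat-type check of step 3 is omitted for brevity since it needs the annotation stream:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noisy_window(ecg, bw, ma, fs=250, win=1000):
    """Cut a random ECG window, normalize it to [-1, 1], add randomly
    scaled baseline wander (BW), muscle artifact (MA) and 60 Hz
    powerline noise, then renormalize to [-1, 1]."""
    start = rng.integers(0, len(ecg) - win)
    x = ecg[start:start + win].astype(float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # normalize to [-1, 1]
    # random windows from the noise records
    bw_w = bw[rng.integers(0, len(bw) - win):][:win]
    ma_w = ma[rng.integers(0, len(ma) - win):][:win]
    mode = rng.integers(0, 3)                          # 0: BW, 1: MA, 2: composite
    noise = np.zeros(win)
    if mode in (0, 2):
        noise = noise + rng.uniform(0, 10) * bw_w      # BW multiplier ~ U(0, 10)
    if mode in (1, 2):
        noise = noise + rng.uniform(0, 5) * ma_w       # MA multiplier ~ U(0, 5)
    t = np.arange(win) / fs
    noise = noise + rng.uniform(0, 0.5) * np.sin(2 * np.pi * 60 * t)  # powerline
    y = x + noise
    return 2 * (y - y.min()) / (y.max() - y.min()) - 1  # renormalize
```

Because every scale factor is drawn independently per window, some windows come out nearly clean while others are saturated with composite noise, which is exactly the diversity the generator aims for.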
In step 5, binary labelling was used to label each time step of
the window. The label "1" was used for points corresponding to R-peak
locations, while the rest of the time steps were labeled with zeroes.
To make the labels slightly more balanced, ones were also added two
indices before and two indices after each R-peak index (Figure 6).
This labeling scheme, where one R-peak is marked with five ones,
can also be utilized to reduce the number of false positives when
predictions are processed.
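The labeling scheme of step 5 can be sketched as follows (function name is our own):

```python
import numpy as np

def make_labels(r_peaks, win=1000, halfwidth=2):
    """Binary label vector: ones at each R-peak index plus `halfwidth`
    steps on either side (five ones per peak), zeros elsewhere."""
    y = np.zeros(win, dtype=np.float32)
    for p in r_peaks:
        y[max(0, p - halfwidth):min(win, p + halfwidth + 1)] = 1.0
    return y
```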
Figure 6: Schematic illustration of the ECG labeling scheme.
3.4.2 Training. The model was trained in a GPU runtime environment
of Google Colaboratory, which is a cloud-based research tool for
machine learning education and research. Binary cross entropy was
selected as the loss function, while Adam [ ] was chosen as the optimizer.
The network was trained for 150 epochs with 40 steps per epoch
and a batch size of 256. This means that a total of 1,536,000
(150×40×256) different training examples (Figure 7) were used for
training.
3.5 Using the model for R-peak detection
3.5.1 Prerequisites. The model expects its inputs in the form of a 3D
tensor where the information is arranged as in the training
phase. The first axis of the tensor is the batch dimension, while the following
two axes correspond to time steps and features. The output of the model
is also a 3D tensor with a similar shape as the input. A wrapper
function (find_peaks) was developed to feed the ECG signal in the
correct form to the model and to extract R-peak locations from
Figure 7: Examples of the training data produced by the generator
function. The training example in the lower right is relatively
noise free, while the rest of the training examples contain
varying amounts of baseline wander, muscle artifact and
powerline interference.
model predictions. For many of the steps, it uses helper functions
that each have their own well-defined subtask. These subtasks are
carried out in the following order:
(1) Resample ECG signal to 250 Hz.
(2) Split ECG signal into overlapping segments (windows).
(3) Make predictions with the model.
(4) Calculate averages for the overlapping predictions.
(5) Filter out low probability predictions.
(6) Resample R-peaks back to original sampling frequency.
(7) Correct R-peaks with respect to original signal.
(8) Remove duplicate R-peaks if present.
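Steps 2 and 4 of this pipeline might be sketched as follows; the function names and the exact padding width are our own assumptions:

```python
import numpy as np

WIN, STRIDE = 1000, 250

def split_into_windows(sig, win=WIN, stride=STRIDE):
    """Step 2: pad, slide a `win`-step window with the given stride and
    normalize each segment to [-1, 1]."""
    pad = win - stride
    left = np.full(pad, np.median(sig[:win]))      # median-based edge padding
    right = np.full(pad, np.median(sig[-win:]))
    padded = np.concatenate([left, sig, right])
    segs = []
    for s in range(0, len(padded) - win + 1, stride):
        w = padded[s:s + win].astype(float)
        w = 2 * (w - w.min()) / (w.max() - w.min()) - 1   # per-window [-1, 1]
        segs.append(w)
    return np.stack(segs)[..., None]               # (batch, time steps, features)

def average_overlaps(preds, sig_len, win=WIN, stride=STRIDE):
    """Step 4: average the up-to-four overlapping predictions per time step."""
    pad = win - stride
    total = np.zeros(sig_len + 2 * pad)
    count = np.zeros(sig_len + 2 * pad)
    for i, p in enumerate(preds[..., 0]):
        s = i * stride
        total[s:s + win] += p
        count[s:s + win] += 1
    return (total / np.maximum(count, 1))[pad:pad + sig_len]
```

With a stride of 250 and a window of 1000, each interior time step appears in four windows, so its final probability is the mean of four model outputs.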
Steps 2 and 5-8 are described in more detail in their own subsections.
3.5.2 Wrapper function - Splitting data. The ECG record is split into
overlapping segments by moving a 1000 time step wide window
with a user-defined stride. In this study, a stride of 250 was used,
which corresponds to four predictions for every time step. The use
of overlapping windows adds computational cost, but at the
same time it improves R-peak detection, as time steps are seen
in different contexts. Padding (the median of the 1000 closest time
steps) is added to both ends of the signal to allow the same amount
of overlap for each time step. During splitting, each 1000 time step
long ECG segment is also normalized to the range [-1,1]. After splitting,
the data is reshaped into the form of the 3D tensor that is expected by the
model.
3.5.3 Wrapper function - Filtering predictions. After the overlapping
predictions are averaged, one probability value remains for each time step.
Time steps are then filtered based on a user-defined probability
threshold: only time steps above the threshold
are selected for further evaluation. Using lower values for the probability
threshold increases recall, but at the same time some
precision is lost. After filtering, the remaining time steps are corrected
to the local maximum of the ECG signal within five time steps.
If five or more time steps are corrected to the same location, then
that location is considered to be an R-peak. The idea is to utilize the
labeling scheme of the training phase, where each R-peak was labeled
with five time steps. This has a noise-reducing effect, as an R-peak
cannot be identified by just one positive prediction; it needs
five closely spaced positive predictions.
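The threshold-and-vote logic of this step can be sketched as follows (the function name is our own; the 0.05 default mirrors the threshold used later in the evaluation):

```python
import numpy as np

def pick_peaks(probs, ecg, threshold=0.05, radius=5, votes=5):
    """Keep time steps above `threshold`, snap each to the local maximum
    of `ecg` within `radius` samples, and accept a location as an R-peak
    only if at least `votes` steps were corrected to it."""
    cand = np.flatnonzero(probs > threshold)
    corrected = []
    for c in cand:
        lo, hi = max(0, c - radius), min(len(ecg), c + radius + 1)
        corrected.append(lo + int(np.argmax(ecg[lo:hi])))
    locs, counts = np.unique(corrected, return_counts=True)
    return locs[counts >= votes]
```

A single stray positive prediction thus cannot create an R-peak on its own; the five-sample labeling scheme requires five nearby positives to agree.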
3.5.4 Wrapper function - Correcting R-peaks. So far, all of the work
has taken place on the ECG signal resampled to
250 Hz. In step 7, the identified R-peak locations are mapped to
location indices that correspond to R-peak locations at the original
sampling frequency. Because the R-peak location can differ slightly
between the two frequencies, correction to the local maximum is
done with respect to the ECG signal at the original sampling frequency.
In some very rare cases, two R-peak locations are corrected to the
same location. Step 8 is for these situations: duplicate indices are
removed if they occur. After step 8, the wrapper function returns the
R-peak indices and the corresponding probability values.
3.5.5 Filtering function. A separate filtering function (Figure 8) was
developed to filter out unnaturally closely occurring R-peaks. It
becomes especially useful when the goal is to maximize recall and the
user-defined probability threshold has been set low. By using the filtering
function, precision can then be improved by removing false
positives. A typical use case for the filtering function is shown in
Figure 3, where the model gives an almost 0.5 probability to a sharp
noise peak in the middle of the noisy ECG segment.
First, R-peaks that are within a threshold distance of another
R-peak are identified. These R-peaks are then removed from the
set of all R-peaks. After this, the removed R-peaks are iterated over
in descending probability order. At every iteration, the R-peak
with the highest probability value is selected from the removed
R-peaks, and it is checked whether it can be added to the set of approved
R-peaks. If its distance to the neighbouring R-peaks is greater than the
threshold distance, it is added; otherwise it is thrown away. This
simple algorithm works surprisingly well when variations in heart
rate are not too large. A more advanced adaptive method will be
needed for ECG signals where the heart rate varies a lot.
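The procedure just described can be sketched as a small greedy routine; the function name and array-based bookkeeping are our own:

```python
import numpy as np

def filter_close_peaks(peaks, probs, min_dist):
    """Remove R-peaks closer than `min_dist` to a neighbour, then re-admit
    the removed ones in descending probability order when they respect the
    minimum distance to the already accepted peaks."""
    peaks = np.asarray(peaks); probs = np.asarray(probs)
    order = np.argsort(peaks)
    peaks, probs = peaks[order], probs[order]
    # flag every peak that violates the minimum distance to a neighbour
    bad = np.zeros(len(peaks), dtype=bool)
    close = np.diff(peaks) < min_dist
    bad[1:] |= close
    bad[:-1] |= close
    kept = set(int(p) for p in peaks[~bad])
    # greedy re-admission, highest probability first
    for pr, p in sorted(zip(probs[bad], peaks[bad]), reverse=True):
        if all(abs(int(p) - k) >= min_dist for k in kept):
            kept.add(int(p))
    return sorted(kept)
```

When two detections conflict, the one with the higher model probability survives, which is what lets a low probability threshold be combined with good precision.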
4.1 Evaluation methods and results
4.1.1 Evaluation metrics. The metrics used in the performance
evaluation are precision, recall, and F1 score, which are calculated
from true positives (TP), false negatives (FN), and false positives
(FP) by:

precision = TP / (TP + FP)    (1)

recall = TP / (TP + FN)    (2)

F1 score = (2 × precision × recall) / (precision + recall) = 2 × TP / (2 × TP + FP + FN)    (3)

A predicted R-peak is considered a true positive if it falls within
one tenth of the sampling rate (in this study, 50 samples at a 500 Hz
sampling rate) of the ground truth annotation.
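The tolerance-based matching and the metrics (1)-(3) might be implemented as follows; the one-to-one greedy matching of predictions to annotations is our own assumption, as the paper does not spell out its matching procedure:

```python
def evaluate(pred, truth, tol):
    """Precision, recall and F1 with tolerance-based matching: a prediction
    is a TP if it matches an unused annotation within `tol` samples."""
    tp, used = 0, set()
    for p in sorted(pred):
        best = None
        for i, t in enumerate(sorted(truth)):
            if i not in used and abs(p - t) <= tol:
                if best is None or abs(p - t) < abs(p - sorted(truth)[best]):
                    best = i
        if best is not None:
            used.add(best)
            tp += 1
    fp = len(pred) - tp
    fn = len(truth) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return precision, recall, f1
```

With a 500 Hz signal, `tol=50` reproduces the one-tenth-of-the-sampling-rate rule stated above.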
Table 1: Performance comparison between the proposed LSTM method and selected R-peak detection methods
No. Anno. LSTM Hamilton Christov Engzee Pan-Tompkins
peaks preci. recall F1 preci. recall F1 preci. recall F1 preci. recall F1 preci. recall F1
1 1159 0.991 0.990 0.991 0.979 0.984 0.982 0.988 0.985 0.987 0.977 0.983 0.980 0.952 0.979 0.966
2 1876 1.000 1.000 1.000 0.995 0.999 0.997 0.997 0.999 0.998 0.953 0.985 0.968 0.984 0.995 0.989
3 932 0.996 0.997 0.996 0.989 0.845 0.912 0.994 0.845 0.914 0.290 0.550 0.380 0.961 0.974 0.968
4 956 0.998 0.999 0.998 0.973 0.998 0.986 0.986 0.987 0.987 0.965 0.992 0.978 0.955 0.991 0.972
5 1720 0.995 0.987 0.991 0.987 0.910 0.947 0.953 0.224 0.363 0.983 0.970 0.977 0.982 0.949 0.965
6 843 0.999 0.999 0.999 0.989 0.999 0.994 0.995 0.999 0.997 0.979 0.993 0.986 0.990 0.987 0.989
7 1054 0.995 0.995 0.995 0.984 0.994 0.989 0.993 0.657 0.790 0.983 0.990 0.986 0.977 0.973 0.975
*The best performance in precision, recall, or F1 score within the same ECG record (in each row) is highlighted in bold font.
Figure 8: Working principle of the filtering function.
4.1.2 Test with the annotated test dataset. Our method was first tested
with the seven raw ECG recordings from the test dataset. For our
method's parameters, we selected 0.05 as the user-defined probability
threshold and a stride value of 250. The filtering function was then run
using a threshold distance of 350 ms. These parameters were
kept the same for all of the tested ECG recordings. To compare
the performance of the proposed method with existing methods,
four classic methods were tested with the same ECG
recordings. The other tested methods were Hamilton [ ],
Christov [ ], Engzee [ ], and Pan-Tompkins [ ]. For Hamilton and
Engzee, the raw ECG signals were band-pass (3-45 Hz) filtered
before detection. The inputs to Christov were raw ECG signals.
For the first three methods, we used the versions implemented
in the BioSPPy library [ ] and ran them with
their default parameters. For the Pan-Tompkins method, we used
a ready-made Matlab function [ ], and the inputs were raw ECG
recordings segmented with a sliding 4-second window.
Table 1 presents the performance of the tested methods. It is clear
that the proposed LSTM-based method has higher performance on
the whole test dataset than the classical methods. Moreover, the
performance of the proposed method is more consistent, as it is
not as sensitive to variations in record quality as the
classical methods. For example, the signal quality of records No. 2
and No. 6 is generally good throughout, and therefore the
corresponding performances of Hamilton and Christov are high as
well (F1 score ≥ 0.994). By contrast, in record No. 3, where the
signal quality is very low in some parts (Figure 3 is an extreme
example from record No. 3), the F1 score of the proposed method
is 0.996, while the F1 scores of Hamilton and Christov are lower
than 0.92. For these two methods, the recall values drop to 0.845,
indicating that many R-peaks are not detected.
4.1.3 Test with SNR-controlled ECG samples. To further assess the
performance of our method on noisy ECG signals, test examples
with different degrees of Gaussian noise were generated. The test
was carried out using exactly the same settings for the different
methods as in the previous test. Test examples were generated from
ECG record No. 1 by adding Gaussian noise with a controlled linear
signal-to-noise ratio, SNR = P_signal / P_noise. The noise was added
in sliding 1-second windows. The detrended raw ECG samples inside
the window formed the signal base, and their total power was
P_signal. All of the methods were then tested with the generated
noisy ECG samples at SNR values of 20, 10, 5, 1, 0.5, 0.4, 0.3, 0.2,
and 0.1. Figure 9 shows a segment of the ECG signal with different
levels of additional noise. In Figure 10, F1 scores at the different SNR
levels are plotted for all of the methods. It is evident that our method
has the highest performance at all noise levels. Its performance decay
is also the slowest of the tested methods. This is especially true at the
highest noise levels, where the performance of the classical methods
plummets. Even at the highest noise level (SNR 0.1), our method
achieves precision (0.939), recall (0.938) and F1 (0.939) scores that
are well above 0.9.
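The windowed SNR-controlled noise addition can be sketched as follows; mean removal stands in for the paper's detrending step (an assumption), and the function name is our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(sig, snr, fs=500):
    """Add Gaussian noise in sliding 1-second windows so that in each
    window P_signal / P_noise equals the requested linear SNR."""
    out = sig.astype(float).copy()
    for s in range(0, len(sig), fs):
        w = out[s:s + fs]
        p_signal = np.sum((w - w.mean()) ** 2)      # power of the detrended window
        noise = rng.standard_normal(len(w))
        noise *= np.sqrt((p_signal / snr) / np.sum(noise ** 2))
        out[s:s + fs] = w + noise
    return out
```

Because the noise is rescaled exactly per window, the achieved window-level SNR is exact rather than merely expected.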
4.2 Error analysis
Our method is most error prone when sharp and intense baseline
changes associated with electrode motion or loss of contact are
present in the ECG signal. A good example of this kind of situation
Figure 9: Examples of ECG with three different SNR levels.
All signals have been normalized to the range [-1,1].
Figure 10: F1 scores at different noise levels.
can be seen in Figure 11, where an otherwise good quality ECG
signal has a sharp and intense noise peak in the middle. Detection
failures in these situations are mostly due to the fact that these types
of artifacts were not simulated during the training phase. In the
presence of an intense baseline shift, the ECG values associated with
R-peaks are in a completely different range than they were during
training. Therefore, the model has no clue about the correct R-peak
positions. Including this noise type in the training phase would be the
logical next step and a basis for future improvement.
In this paper we presented a new LSTM-based approach for R-peak
detection from ECG signals. Compared to traditional methods, our
method gives more robust R-peak detection both on real-life
ECG data and on ECG data with added Gaussian noise. Our method
was developed on the basis that it will be used in an offline manner.
However, it would be straightforward to convert it to real-time use.
Our initial testing gives encouraging results on the method's performance.
The current results may be limited by the size of the test dataset.
In our future work, we plan to further validate the proposed method
with a publicly available ECG database recently published by Porr
and Howell [27]. This database contains noisy ECG signals where
Figure 11: Detection failures in the immediate vicinity of a
strong electrode contact artifact. Circles are true positives,
while crosses represent false negatives. The ECG signal has been
normalized to the range [-1,1].
noise originates from real-life activities like walking or jogging.
This kind of database would enable a more comprehensive evaluation
of the method's performance.
Our ultimate goal is to develop a fully automated end-to-end solution
for R-peak detection. This would remove the need for the separate
filtering function altogether. A good starting point for network
improvement is the data augmentation step, as more
noise sources (e.g., electrode motion artifacts) should be incorporated
into the training data. There are also a great number of parameters
(e.g., window size, network architecture) that could be tuned to
increase performance.
This research was supported by the Academy of Finland project
Personalized Pain Assessment System based on IoT (313488). It was also
partially supported by the US National Science Foundation (NSF)
WiFiUS grant CNS-1702950 and Academy of Finland grants 311764,
313448, and 313449.
Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey
Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard,
Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray,
Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan
Yu, and Xiaoqiang Zheng. 2016. TensorFlow: A System for Large-Scale
Machine Learning. In 12th USENIX Symposium on Operating Systems Design and
Implementation (OSDI 16). USENIX Association, Savannah, GA, 265–283. https:
Carlos Carreiras, Ana Priscila Alves, André Lourenço, Filipe Canento, Hugo
Silva, Ana Fred, et al. 2015. BioSPPy: Biosignal Processing in Python. [Online;
accessed 2019-12-02].
François Chollet et al. 2015. Keras. [Online; accessed 2019-12-02].
Ivaylo I Christov. 2004. Real time electrocardiogram QRS detection using
combined adaptive threshold. Biomedical Engineering Online 3, 1 (2004), 28.
Mohamed Elgendi, Björn Eskofier, Socrates Dokos, and Derek Abbott. 2014.
Revisiting QRS detection methodologies for portable, wearable, battery-operated,
and wireless ECG systems. PLoS One 9, 1 (2014), e84018.
WAH Engelse and C Zeelenberg. 1979. A single scan algorithm for QRS-detection
and feature extraction. Computers in Cardiology 6, 1979 (1979), 37–42.
Gary M. Friesen, Thomas C. Jannett, Manal Afify Jadallah, Stanford L. Yates,
Stephen R. Quint, and H. Troy Nagle. 1990. A comparison of the noise sensitivity
of nine QRS detection algorithms. IEEE Transactions on Biomedical Engineering
37, 1 (1990), 85–98.
SAC ’20, March 30-April 3, 2020, Brno, Czech Republic J. Laitala et al.
Fay C.M. Geisler, Nadja Vennewald, Thomas Kubiak, and Hannelore Weber. 2010.
The impact of heart rate variability on subjective well-being is mediated by
emotion regulation. Personality and Individual Dierences 49, 7 (2010), 723–728.
A. L. Goldberger, L. A. N. Amaral, L. Glass, J. M. Hausdor, P. Ch. Ivanov, R. G.
Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, and H. E. Stanley. 2000 (June 13).
PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research
Resource for Complex Physiologic Signals. Circulation 101, 23 (2000 (June 13)),
Alex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed. 2013. Hybrid speech
recognition with deep bidirectional LSTM. In IEEE Workshop on Automatic Speech
Recognition and Understanding. IEEE, 273–278.
Alex Graves and Jürgen Schmidhuber. 2005. Framewise phoneme classica-
tion with bidirectional LSTM and other neural network architectures. In Neural
Networks, Vol. 18. 602–610.
P Hamilton. 2002. Open Source ECG Analysis. Computers in Cardiology 29 (2002),
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory.
Neural Computation 9, 8 (nov 1997), 1735–1780.
John D. Hunter. 2007. Matplotlib: A 2D graphics environment. Computing in
Science and Engineering 9, 3 (may 2007), 99–104.
Vignesh Kalidas and Lakshman Tamil. 2017. Real-time QRS detector using
stationary wavelet transform for automated ECG analysis. In IEEE 17th Interna-
tional Conference on Bioinformatics and Bioengineering, Vol. 2018-Janua. 457–461.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Opti-
mization. (dec 2014). arXiv:1412.6980
B-U Kohler, Carsten Hennig, and Reinhold Orglmeister. 2002. The principles of
software QRS detection. IEEE Engineering in Medicine and Biology Magazine 21,
1 (2002), 42–57.
Chengyu Liu, Xiangyu Zhang, Lina Zhao, Feifei Liu, Xingwen Chen, Yingjia Yao,
and Jianqing Li. 2019. Signal quality assessment and lightweight QRS detection
for wearable ECG smartvest system. IEEE Internet of Things Journal 6, 2 (2019),
André Lourenço, Hugo Silva, Paulo Leite, Renato Lourenço, and Ana Fred. 2012.
Real time electrocardiogram segmentation for nger based ECG biometrics. In
Proceedings of the International Conference on Bio-Inspired Systems and Signal
Processing. 49–54.
Wes McKinney. 2010. Data structures for statistical computing in python. In
Proceedings of the 9th Python in Science Conference, Vol. 445. Austin, TX, 51–56.
George B Moody and Roger G Mark. 2001. The impact of the MI T-BIH arrhythmia
database. IEEE Engineering in Medicine and Biology Magazine 20, 3 (2001), 45–50.
Moody GB, Muldrow WE, and Mark RG. 1984. The MIT-BIH Noise Stress Test
Database. In Computers in Cardiology. 381–384.
Travis E. Oliphant. 2015. Guide to NumPy (2nd ed.). CreateSpace Independent
Publishing Platform, USA.
Christina Orphanidou and Ivana Drobnjak. 2017. Quality assessment of ambula-
tory ECG using wavelet entropy of the HRV signal. IEEE Journal of Biomedical
and Health Informatics 21, 5 (2017), 1216–1223.
Jiapu Pan and Willis J Tompkins. 1985. A real-time QRS detection algorithm.
IEEE Transactions Biomedical Engineering 32, 3 (1985), 230–236.
Sean Parsons and Jan Huizinga. 2019. Robust and fast heart rate variability
analysis of long and noisy electrocardiograms using neural networks and images.
(2019). arXiv:arXiv:1902.06151
Bernd Porr and Luis Howell. 2019. R-peak detector stress test with
a new noisy ECG database reveals signicant performance dierences
amongst popular detectors. bioRxiv (2019).
Victor Kathan Sarker, Mingzhe Jiang, Tuan Nguyen Gia, Arman Anzanpour,
Amir M. Rahmani, and Pasi Liljeberg. 2017. Portable multipurpose bio-signal
acquisition and wireless streaming device for wearables. In Proceedings of IEEE
Sensors Applications Symposium.
Hooman Sedghamiz. 2014. Complete Pan-Tompkins implementation
ECG QRS detector.leexchange/
45840-complete- pan-tompkins-implementation-ecg-qrs- detector.
Fred Shaer and J. P. Ginsberg. 2017. An overview of heart rate variability
metrics and norms. Frontiers in Public Health 5, September (2017), 1–17. https:
Stéfan Van Der Walt, S. Chris Colbert, and Gaël Varoquaux. 2011. The NumPy
array: A structure for ecient numerical computation. Computing in Science and
Engineering 13, 2 (mar 2011), 22–30.
Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler
Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser,
Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod
Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric
Larson, CJ Carey, İlhan Polat, Yu Feng, Eric W. Moore, Jake Vand erPlas, Denis
Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R
Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mul-
bregt, and SciPy 1. 0 Contributors. 2019. SciPy 1.0–Fundamental Algorithms for
Scientic Computing in Python. , arXiv:1907.10121 pages. arXiv:cs.MS/1907.10121
Yande Xiang, Zhitao Lin, and Jianyi Meng. 2018. Automatic QRS complex de-
tection using two-level convolutional neural network. BioMedical Engineering
Online 17, 1 (2018), 1–17. 018-0441-4
Chen Xie and Julien Dubiel. 2016. wfdb-python.
LCP/wfdb-python. [Online; accessed 2019-12-02].
Sean Shensheng Xu, Man Wai Mak, and Chi Chung Cheung. 2019. Towards end-
to-end ECG classication with raw signal extraction and deep neural networks.
IEEE Journal of Biomedical and Health Informatics 23, 4 (2019), 1574–1584. https: