FPGA-based Real-Time Detection of Freezing of
Gait of Parkinson Patients
Author's version; the final version published in: BODYNETS 2021 (LNICST, vol 420), available at
https://link.springer.com/chapter/10.1007/978-3-030-95593-9_9
Patrick Langer1, Ali Haddadi Esfahani1, Zoya Dyka1, Peter Langendörfer1,2
1IHP GmbH, Frankfurt (Oder), 15326, Germany
2BTU Cottbus, Cottbus, 03046, Germany
Abstract. In this paper we report on our implementation of a temporal convolutional network trained to detect freezing of gait on an FPGA. In order to be able to compare our results with state-of-the-art solutions, we used the well-known open dataset Daphnet. Our most important finding is that, even though we used a tool to map the trained model onto the FPGA, we can detect FoG in less than a millisecond, which gives us sufficient time to trigger cueing and thereby prevent the patient from falling. In addition, the sensitivity achieved by our implementation is above 78 per cent and comparable to solutions running on high-end devices.

Keywords: Freezing of Gait, Temporal Convolutional Networks, FPGA-based implementation, tool-assisted implementation, body-worn sensor nodes.
1 Introduction
Detecting Freezing of Gait (FoG) and triggering countermeasures, i.e. cueing, is an important issue with respect to the quality of life of Parkinson patients. Parkinson patients are often so afraid of falling due to FoG that their social life suffers seriously, as they no longer leave their flats. A proper device for detecting FoG needs to be wearable and, to avoid stigmatization, as unobtrusive as possible. The time for detecting FoG and triggering the cueing is limited by the normal time for taking a step, which is about 300 ms. As the device needs to gather some sensor data to do the assessment and then to trigger the cueing, the whole process should be limited to about 50 ms. This leads to strict hard real-time requirements and an extremely short processing time. In addition, cueing may not be triggered too often, to avoid that the patient becomes so accustomed to it that it no longer helps when FoG really occurs. A few false positives, however, do not cause harm according to reports from clinicians and patients. So, 100 per cent sensitivity is not the ultimate goal.
Contributions of this paper We report on our FPGA implementation, which provides a sensitivity well above 78 per cent, comparable to what is reported in the literature for high-end devices, and a detection time of far less than one millisecond. The paper is structured as follows. Section 2 presents an overview of current research on FOG detection and its requirements. Furthermore, recent methods for the implementation of neural networks on FPGAs are discussed. Section 3 gives details about our own implementation and methods. Our experimental results are discussed in Section 4. Section 5 provides our conclusions and presents an outlook on our further research.
2 Related Work
2.1 Overview of recent methods of detecting FOG
When it comes to Freezing of Gait (FOG), different algorithms have been used
to identify an FOG event from sensor data (usually acceleration sensors). For
example, in [17], [8], [18] and [30], threshold-based analysis methods are used, normally applied to extracted statistical features. Building on such features, [3] and [25] employ Support Vector Machines (SVMs) for FOG detection on sensor data. With the breakthrough of Machine Learning (ML) and Deep Learning (DL) methods, more recent research applied these technologies to FOG datasets to achieve new state-of-the-art results. In [15], different machine learning techniques were tested for FOG detection. In [29], a Convolutional Neural Network (CNN) was
trained on the raw sensor data and was used as an end-to-end classifier for FOG
detection. As FOG detection is based on time series data usually from accelerom-
eters, [31] claims that better results for FOG detection can be achieved when
neural network architectures are used that are especially targeted at time series
data. Historically this is associated with Recurrent Neural Networks (RNNs) [4],
[14]. Especially Long Short-Term Memory (LSTM) units have achieved state-of-the-art results on time series problems, such as speech recognition [10] or Human Activity Recognition (HAR) [19]. Consequently, [31] proposes a combined CNN-LSTM
architecture for FOG detection. The CNN is intended to learn the necessary
feature extraction from the raw sensor data, whereas the LSTM shall learn the
time-based dependencies in order to classify a time series and decide whether an
FOG event occurred. A similar architecture was proposed by [27], who evaluated a CNN combined with fully connected layers (Multilayer Perceptron, MLP) and a CNN combined with an LSTM. The models were evaluated by means of sensitivity, specificity, area under the curve (AUC), geometric mean (GM) and equal error rate (EER). The CNN-LSTM achieved slightly better results (e.g. 0.844 vs. 0.849 sensitivity, and similarly for specificity). Thus, the authors state that the CNN-LSTM is the better architecture and should be used in future experiments.
Recently, it has been shown that CNNs are in fact able to learn (long-term) dependencies in time series data, contradicting the common conviction that RNNs are the natural choice for such data sets. In [20], a special 1D convolutional model was developed for raw audio generation. Recent research suggests that similar convolutional models are able to learn long-term dependencies, possibly better than recurrent models and LSTMs [6]. Those models form a new group of CNNs, called Temporal Convolutional Networks (TCNs), which achieved state-of-the-art results on different datasets [6] while having a simpler and more straightforward architecture. In [13] it was proposed to combine TCNs and LSTMs in order to achieve better results in FOG detection (additionally, an attention mechanism was introduced). However, the TCN was only used for feature extraction, as in previous papers, not explicitly for learning dependencies in the time series data.
2.2 Requirements of FOG detection methods for wearable devices
Most of those publications especially focus on achieving better detection results
and the applied methods have been trained and tested on modern GPUs. For
example, the TCN-LSTM architecture of [13] mentioned above can be run in 0.52 ms, but has only been evaluated on a modern GPU (NVIDIA RTX 2060). From our understanding, there seems to be a gap in research between increasing accuracy and achieving real-time capability for deployment in real-world applications. While focusing on achieving better detection results, it is important to keep in mind that the proposed methods shall be deployed in wearable devices to aid Parkinson patients, and therefore must be able to run efficiently in real time. A freezing episode must be detected within at most 300 ms, so that an appropriate cueing signal can be issued fast enough to prevent patients from falling. As those wearable devices are battery powered, the hardware used to run the neural network needs to be efficient and have a low overall power consumption.
2.3 Field Programmable Gate Arrays (FPGAs) for inference of
neural networks
Field Programmable Gate Arrays (FPGAs) are often used as algorithm-specific hardware accelerators [21], [5]. They can be more efficient than generic computing units like CPUs or GPUs [22], [7]. FPGAs are currently used to run neural networks, aiming at use cases in embedded applications or wearable devices. In [23], an efficient computing array for convolutional operations was proposed. In [16], FOG detection was implemented on an FPGA using a neural network designed specifically for this purpose. This architecture even has online learning capabilities, which means that the model on the FPGA can be trained continuously in the field. However, the architecture itself is hardwired, so it is not easily possible to switch to a more modern or sophisticated neural network. In order to speed up the development process and gain more flexibility, means to convert a neural network trained in a common machine learning framework like TensorFlow or PyTorch into a format applicable to FPGAs are needed. High Level Synthesis for Machine Learning (HLS4ML) [12] is a project addressing this issue. Another approach was taken by Xilinx, who developed the so-called Deep Learning Processing Unit (DPU) [2] for their FPGAs. This is a programmable computation engine enabling FPGAs to run neural networks. Different types of DPUs with different supported layers are available, e.g. there is one for convolutional neural networks and one for recurrent neural networks. To the best of our knowledge, this technology represents the current state of the art in generic methods to deploy neural networks on FPGAs.
3 Methods
3.1 VitisAI to run neural networks on FPGA
Methods of FOG detection should be executable in real time on wearable devices and preferably require only a minimum amount of power. Therefore, we chose an FPGA as our target hardware and first explain how to use it to run neural networks. Xilinx provides a development environment for the execution of neural networks on FPGAs. Currently, the machine learning frameworks Caffe, PyTorch and TensorFlow are supported. For this, the FPGA is configured (programmed) with the Xilinx DPU. The DPU running inside the FPGA is then able to load a neural network from a file in a specific binary format (called xmodel). To generate this file from a trained model, the model needs to be converted, a process consisting of two steps [1]:
1. Quantization: The trained model needs to be quantized. Currently, the DPU
only supports 8-bit integer quantization. Thus, if the model was trained using 32-bit floating-point values, these are converted to 8-bit integer values. This
results in a loss of precision, possibly decreasing the accuracy of the neural
network. This problem can partially be alleviated by so-called finetuning,
which is also supported by the development tools.
2. Compilation: The quantized model is compiled into a binary file, which con-
tains instructions to be run on Xilinx DPU.
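As an illustration of these two steps, a minimal sketch of the TensorFlow 2 flow is given below. The quantizer import path follows the Vitis AI documentation, but the exact API depends on the Vitis AI version; the file names and the calibration data are assumptions.

import numpy as np
import tensorflow as tf
# The Vitis AI quantizer ships with Xilinx's fork of tensorflow_model_optimization.
from tensorflow_model_optimization.quantization.keras import vitis_quantize

float_model = tf.keras.models.load_model("tcn_fog_float.h5")   # hypothetical trained model
calib_data = np.random.rand(100, 128, 9).astype(np.float32)    # dummy calibration windows

# Step 1: post-training quantization of the 32-bit float model to 8-bit integers.
quantizer = vitis_quantize.VitisQuantizer(float_model)
quantized_model = quantizer.quantize_model(calib_dataset=calib_data)
quantized_model.save("tcn_fog_quantized.h5")

# Step 2: compilation of the quantized model into DPU instructions (shell command):
#   vai_c_tensorflow2 --model tcn_fog_quantized.h5 --arch <arch.json of the target DPU> \
#                     --output_dir compiled_model --net_name tcn_fog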
Xilinx provides different DPUs for different types of neural networks. Figure 1
shows the hardware used for our experiments, namely the UltraZED-EG platform. It is a credit-card-sized FPGA platform based on a Xilinx XCZU3EG with some additional hardware. Additionally, a carrier card is available, which
features peripheral connections like USB or Ethernet ports.

Fig. 1: UltraZED-EG. The hardware platform used for our experiments, which can be used for embedded applications.

The reason to use this platform is its small size and energy efficiency, but it currently does not support the execution of recurrent layers and LSTMs. We report on how we solved this issue in the following section.
3.2 Hardware-aware implementation of a Temporal Convolutional
Network for FOG detection
As mentioned, usually RNNs are chosen for dealing with time series data. How-
ever, recently it has been shown that TCNs might achieve comparable or even
better results [6]. When it comes to deciding which type of neural network shall
be used for the implementation, a thorough analysis of benefits and drawbacks is essential. For the case of FOG detection, we consider the advantages shown in Table 1 especially important.
Table 1: Advantages of TCNs compared to RNNs according to [6].

Parallelism
  TCN: The input sequence can be processed as a whole; convolution operations and filters are parallelizable. This is especially important for FPGAs or other hardware accelerators.
  RNN: Each sample of the input sequence is processed one at a time (only sequential processing).

Stable gradients
  TCN: The backpropagation path differs from the temporal direction of the sequence, which avoids the vanishing/exploding gradient problem. In addition, strategies like skip connections can be used to build deep networks just as for conventional neural networks.
  RNN: LSTMs reduce the risk of vanishing gradients; exploding gradients might still be a problem.

Receptive field size
  TCN: Flexible; can easily be expanded, e.g. by increasing the filter size or the dilation factor. This might lead to better possibilities to learn long-term dependencies.
  RNN: Cannot be flexibly changed or influenced.
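To make the last row more concrete: for the TCN of [6] with kernel size k, n residual blocks of two dilated convolutions each, and dilations doubling per block (1, 2, 4, ...), the receptive field covers 1 + 2(k-1)(2^n - 1) input samples. The small helper below illustrates this; the two-convolutions-per-block structure and the dilation schedule are taken from [6] and are assumptions about the concrete configuration.

def tcn_receptive_field(kernel_size: int, num_blocks: int, convs_per_block: int = 2) -> int:
    """Receptive field (in samples) of a dilated TCN whose dilation doubles per residual block."""
    field = 1
    for block in range(num_blocks):
        dilation = 2 ** block
        # Each convolution in the block extends the field by (kernel_size - 1) * dilation samples.
        field += convs_per_block * (kernel_size - 1) * dilation
    return field

# The configuration used in Section 4 (kernel size 3, three residual blocks) covers
# 29 samples, i.e. roughly 0.45 s of data at the 64 Hz sampling rate of Daphnet.
print(tcn_receptive_field(kernel_size=3, num_blocks=3))  # -> 29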
In addition, it needs to be taken into account that TCNs potentially have a higher memory requirement during inference. When it comes to FPGA-based neural network accelerators, memory bandwidth is often the main limitation [26]. Thus, a recurrent neural network might have benefits for long input sequences or more complex problems, as its memory requirement during inference is potentially lower [6]. In the case of FOG detection, however, the memory of modern FPGA systems is easily sufficient to run our proposed architecture.
So, from our point of view, TCNs are the better choice for an FPGA implementation. Another important aspect is that RNNs are not yet supported on our targeted FPGA platforms, neither by Xilinx VitisAI nor by alternatives such as HLS4ML [12]. Xilinx provides a DPU able to execute RNNs only for Alveo platforms.
Implementation details For our implementations, which were done in Keras,
we used [24] as a reference. After training, the model was converted to the binary
file needed for the FPGA. For our tests, we used VitisAI version 1.3 (https://github.com/Xilinx/Vitis-AI/tree/v1.3). However, there were some restrictions for the conversion of the model in terms of supported layers and operations:
– First, the mentioned version of VitisAI only supports models built with the Keras functional API. Furthermore, custom layers as used in [24] cannot be used.
– Second, Conv1D operations are not supported.
The latter issue can be remedied as follows. It is possible to replace any Conv1D operation with an equivalent Conv2D operation. For example, if the Conv1D operation uses a kernel of size 3, the Conv2D operation can use a kernel of size 1×3, where 1 represents the height and 3 the width. However, while the Conv1D operation in Keras supports both non-causal (symmetric) and causal (asymmetric) padding, Conv2D operations only support the former. As Keras is based on TensorFlow, a tf.pad layer can be used to do the asymmetric padding manually. But standard TensorFlow layers are currently only supported by VitisAI for TensorFlow 1, not for TensorFlow 2.
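A minimal sketch of this replacement is given below; the layer arrangement and the tensor shapes are our own illustration, not the exact model used in this work.

import tensorflow as tf
from tensorflow.keras import layers

def causal_conv_as_conv2d(x, filters, kernel_size=3, dilation=1):
    """Emulate a causal Conv1D with tf.pad + Conv2D on a (batch, time, channels) tensor."""
    channels = x.shape[-1]
    # Add a height dimension of 1: (batch, time, channels) -> (batch, 1, time, channels).
    x = layers.Reshape((1, -1, channels))(x)
    # Asymmetric (causal) padding: pad only on the left of the time axis.
    left = (kernel_size - 1) * dilation
    x = layers.Lambda(lambda t: tf.pad(t, [[0, 0], [0, 0], [left, 0], [0, 0]]))(x)
    # A 1xK kernel makes the Conv2D equivalent to a Conv1D along the time axis.
    x = layers.Conv2D(filters, kernel_size=(1, kernel_size),
                      dilation_rate=(1, dilation), padding="valid")(x)
    # Remove the height dimension again: (batch, 1, time, filters) -> (batch, time, filters).
    return layers.Reshape((-1, filters))(x)

# Usage with the Keras functional API (shapes are illustrative):
inputs = tf.keras.Input(shape=(128, 9))   # 128 time steps, 9 acceleration channels
outputs = causal_conv_as_conv2d(inputs, filters=64)
model = tf.keras.Model(inputs, outputs)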
We realized causal padding manually using a tf.pad layer, as sketched above. The conversion for the FPGA worked well; however, the tools indicated that some layers of the converted model might not be run on the FPGA but automatically on the CPU of the SoC. This is not the case if non-causal padding is used.
To solve these issues, we came up with our own implementation of a TCN which can be converted with VitisAI and run on the FPGA (or FPGA + CPU, respectively). The form of padding can be chosen as desired. Training and conversion can be done end to end; no manual steps are required.
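Building on the helper sketched above, one residual block of such a TCN could look roughly as follows; the number of convolutions, the activation and the skip connection follow the generic structure of [6] and are not the exact published configuration.

from tensorflow.keras import layers

def residual_block(x, filters, kernel_size=3, dilation=1):
    """Two causal convolutions plus a skip connection, in the spirit of the TCN blocks of [6]."""
    shortcut = x
    y = causal_conv_as_conv2d(x, filters, kernel_size, dilation)
    y = layers.ReLU()(y)
    y = causal_conv_as_conv2d(y, filters, kernel_size, dilation)
    # A 1x1 convolution (again via the Conv2D helper) aligns the channel count of the shortcut.
    if shortcut.shape[-1] != filters:
        shortcut = causal_conv_as_conv2d(shortcut, filters, kernel_size=1)
    return layers.ReLU()(layers.Add()([shortcut, y]))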
4 Experiments
4.1 Dataset description
We trained our model using the Daphnet dataset [9], [11] in order to compare our
own results with the state-of-the-art publications using the same data set (such
as [13] or [15]). It is a publicly available dataset of movement data recordings
from 10 Parkinson patients. The age of those patients ranged from 59 to 75 years (66.4 ± 4.8 years); 3 of the patients were female. The patients were asked to perform three walking tasks as described in [9]. Three sensor nodes placed at different locations of the body, i.e. the shank, the thigh, and the lower back of the patients, were used to record the data. Each sensor acquired data at a frequency of 64 Hz. A physiotherapist marked FOG events based on recorded videos of the experiments. In total, 8 h 20 min of acceleration signals were recorded, during which 237 FOG events occurred. Two of the ten patients did not show any freezing; their gait appeared as normal walking.
4.2 Performance evaluation
Training details For evaluation, we used a patient-dependent approach. This means that a separate model was trained for each patient. As mentioned, two patients (patients four and ten) did not experience any FOG episodes during the recordings. Those patients were excluded from the training. The training dataset for each patient was composed of 80% of that patient's data plus all data of all other patients (except patients four and ten). The remaining 20% of the patient's data were used as validation dataset. The Daphnet dataset, however, is highly imbalanced. Therefore, common indicators such as accuracy might not be suitable to evaluate the detection performance of a model. Thus, we use sensitivity (true positive rate) and specificity (true negative rate), as is done in other publications as well. We configured our architecture to use three residual blocks as described in [6], using a kernel size of 3 and 64 kernels per layer overall. For each patient (except four and ten), our model was trained five times for 1000 epochs using a learning rate of 0.001 and a batch size of 1000. Among all trainings and epochs, the best model for each patient was saved. Afterwards, it was quantized using the Vitis AI tools and converted for the FPGA. The quantized model is a TensorFlow graph and can be run on a GPU as well.
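For reference, the two metrics can be computed from binary predictions as in the following small helper (our own illustration, not the evaluation code used in this work).

import numpy as np

def sensitivity_specificity(y_true: np.ndarray, y_pred: np.ndarray):
    """Sensitivity (true positive rate) and specificity (true negative rate) for binary labels."""
    y_true = y_true.astype(bool)
    y_pred = y_pred.astype(bool)
    tp = np.sum(y_true & y_pred)      # FOG correctly detected
    fn = np.sum(y_true & ~y_pred)     # FOG missed
    tn = np.sum(~y_true & ~y_pred)    # normal gait correctly classified
    fp = np.sum(~y_true & y_pred)     # false alarm
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Example: sensitivity_specificity(np.array([1, 1, 0, 0]), np.array([1, 0, 0, 0])) -> (0.5, 1.0)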
4.3 Hardware platform details
For our experiments, an UltraZED-EG was used. It features a Xilinx XCZU3EG multiprocessor system on a chip (MPSoC). It contains 154,350 system logic cells, 216 Block RAM blocks (resulting in 7.6 Mb of BRAM in total) and 360 DSP slices. It features an ARM Cortex-A53 processor as well. In our case, the processor runs PYNQ, an Ubuntu-based operating system. A program running on the operating system is responsible for loading the test data, feeding it to the model running on the FPGA and interpreting the results. The model only needs 0.7 ms = 700 µs for execution on the FPGA. This number was consistent during the whole evaluation. The FPGA does not need any scheduling like CPUs or GPUs; thus the inference time is almost exactly the same each time the model is run.
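A condensed sketch of how such a host program can drive the DPU via the VART Python API is shown below; the model file name, tensor shapes and data types are assumptions, and the actual program used in our experiments differs in detail.

import time
import numpy as np
import vart
import xir

# Load the compiled model and pick the DPU subgraph.
graph = xir.Graph.deserialize("tcn_fog.xmodel")            # hypothetical file name
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_subgraph = [s for s in subgraphs
                if s.has_attr("device") and s.get_attr("device").upper() == "DPU"][0]
runner = vart.Runner.create_runner(dpu_subgraph, "run")

# Allocate input/output buffers matching the tensor shapes reported by the runner.
in_shape = tuple(runner.get_input_tensors()[0].dims)
out_shape = tuple(runner.get_output_tensors()[0].dims)
input_data = [np.zeros(in_shape, dtype=np.int8)]            # would hold a quantized sensor window
output_data = [np.zeros(out_shape, dtype=np.int8)]

# Run one inference and measure its latency.
start = time.perf_counter()
job_id = runner.execute_async(input_data, output_data)
runner.wait(job_id)
print(f"inference took {(time.perf_counter() - start) * 1e3:.3f} ms")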
Results on standard Daphnet dataset In Table 2 we present the results of our model for each patient. We evaluated sensitivity and specificity for the original model, the quantized model running on a GPU and the converted quantized model running on the FPGA.
Table 2: Results for the dataset without augmentation

Patient    | Original        | Quantized       | FPGA
           | Sens.   Spec.   | Sens.   Spec.   | Sens.   Spec.
Patient 1  | 0.0833  0.9987  | 0.0833  0.9889  | 0.0833  1.0000
Patient 2  | 0.4615  0.9760  | 0.4615  0.9680  | 0.3846  0.9640
Patient 3  | 0.4603  0.9514  | 0.5238  0.9114  | 0.4127  0.9114
Patient 5  | 0.4216  0.9394  | 0.3627  0.9303  | 0.3725  0.9394
Patient 6  | 0.0714  0.9943  | 0.0357  0.9448  | 0.0357  0.9946
Patient 7  | 0.3158  0.9968  | 0.3684  0.9968  | 0.3158  0.9935
Patient 8  | 0.6905  0.9052  | 0.4286  0.9138  | 0.2857  0.9483
Patient 9  | 0.4035  0.9663  | 0.4211  0.8653  | 0.3158  0.8384
Average    | 0.3635  0.9669  | 0.3356  0.9482  | 0.2758  0.9487
As can be seen, the overall specificity is quite high, 0.9669 on the original
model run on GPU and still 0.9487 on FPGA. However, with this naive approach,
the average sensitivity is quite low. This is due to the fact that the dataset
is highly imbalanced, and the positive class (FOG event) is underrepresented.
This issue was addressed by recent research. Different publications suggest using
augmentation or rebalancing strategies to improve the dataset, such as [13] and
[28].
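As an illustration of such a rebalancing strategy, a simple random oversampling of the minority (FOG) windows in the training set could look like the following sketch; this is our own illustration of the idea, not necessarily the exact procedure used below.

import numpy as np

def oversample_minority(x: np.ndarray, y: np.ndarray, seed: int = 0):
    """Duplicate minority-class windows at random until both classes are equally frequent.

    x: sensor windows, shape (num_windows, window_len, channels)
    y: binary labels, shape (num_windows,), 1 = FOG, 0 = normal gait
    """
    rng = np.random.default_rng(seed)
    pos_idx = np.flatnonzero(y == 1)
    neg_idx = np.flatnonzero(y == 0)
    minority, majority = (pos_idx, neg_idx) if len(pos_idx) < len(neg_idx) else (neg_idx, pos_idx)
    # Sample (with replacement) enough extra minority windows to match the majority count.
    extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
    idx = rng.permutation(np.concatenate([majority, minority, extra]))
    return x[idx], y[idx]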
Results on rebalanced Daphnet dataset Accordingly, we used a simple oversampling strategy to virtually rebalance the dataset and retrained our model using the same procedure as described above. The results are presented in Table 3.
Table 3: Results for the dataset with augmentation

Patient    | Original        | Quantized       | FPGA
           | Sens.   Spec.   | Sens.   Spec.   | Sens.   Spec.
Patient 1  | 0.9995  0.9558  | 0.8103  0.9088  | 0.8966  0.9171
Patient 2  | 0.9996  0.9880  | 0.9692  0.9480  | 0.9487  0.9480
Patient 3  | 0.9995  0.9257  | 0.9199  0.8371  | 0.8846  0.8886
Patient 5  | 0.9994  0.9394  | 0.8518  0.8848  | 0.7787  0.8394
Patient 6  | 0.9998  0.9517  | 0.7059  0.7158  | 0.6053  0.9196
Patient 7  | 0.9999  0.9643  | 0.8000  0.7695  | 0.7316  0.9123
Patient 8  | 0.9988  0.9483  | 0.7000  0.9224  | 0.6048  0.9569
Patient 9  | 0.9997  0.9798  | 0.9146  0.8889  | 0.8043  0.8923
Average    | 0.9953  0.9566  | 0.8340  0.8594  | 0.7818  0.9093
On this rebalanced dataset, the sensitivity is significantly higher than on the non-augmented dataset. We achieve an average sensitivity of 0.9953 and a specificity of 0.9566 with our model run on a GPU, which is comparable to current state-of-the-art results. However, after quantization, sensitivity and specificity suffer a significant drop (0.8340 sensitivity and 0.8594 specificity for the quantized model run on a GPU, 0.7818 sensitivity and 0.9093 specificity for the converted quantized model run on the FPGA). This can be explained by the loss of precision, as the quantization converts the 32-bit float model to an 8-bit integer model.
5 Conclusions
In this paper we reported on an FPGA-based implementation for detecting freezing of gait of Parkinson patients. We would like to stress the following points. Our implementation achieves almost the same values for sensitivity and specificity as reported in the literature for high-end devices. Even after quantization, the results are quite good. So, the use of FPGAs to allow real-time detection of FoG in wearables is a feasible solution. In our discussions with clinicians, they reported that false positives, as long as they do not occur too often, are not an issue, and that patients even appreciate getting a cueing more often, as it reassures them that the system is still working. So, 100 per cent sensitivity is not the ultimate goal. On the other hand, a very fast detection of FoG is key when it comes to triggering proper cueing to prevent the patient from falling due to FoG. Here, our implementation provides very good parameters, with its good sensitivity and a detection time of far less than one millisecond. The latter is the parameter that makes fall prevention by a body-worn sensor node feasible. Please note that we achieved this extremely fast processing even though we used a tool to map the trained model onto the FPGA. In our future work we aim to integrate our FPGA-based solution with a wireless sensor node and to run experiments with Parkinson patients together with a clinical partner. In order to improve the user experience, we will also work on increasing sensitivity. The loss of precision caused by quantization can possibly be alleviated by finetuning the model; Xilinx already provides support for finetuning converted models using their development tools. We are also interested in further reducing the processing time on the FPGA.
References
1. Vitis AI user guide. https://www.xilinx.com/support/documentation/sw_manuals/vitis_ai/1_3/ug1414-vitis-ai.pdf, accessed: 21.06.2021
2. Convolutional neural network with int4 optimization on xilinx devices white paper
(2014)
3. Ahlrichs, C., Samà Monsonís, A., Lawo, M., Cabestany, J., Rodríguez-Martín, D., Pérez, C., Sweeney, D., Quinlan, L., ÓLaighin, G., Counihan, T., Browne, P., Lewy, H., Vainstein, G., Costa, A., Annicchiarico, R., Alcaine, S., Mestre, B., Quispe, P., Bayés, A., Rodríguez-Molinero, A.: Detecting freezing of gait with a triaxial accelerometer in Parkinson's disease patients. Medical & Biological Engineering & Computing 54 (10 2015). https://doi.org/10.1007/s11517-015-1395-3
4. Almqvist, O.: A comparative study between algorithms for time series forecasting
on customer prediction: An investigation into the performance of ARIMA, RNN,
LSTM, TCN and HMM. Ph.D. thesis (06 2019)
5. Andrey, G., Thirer, N.: A fpga implementation of hardware based accelerator for
a generic algorithm (11 2010). https://doi.org/10.1109/EEEI.2010.5662152
6. Bai, S., Kolter, J., Koltun, V.: An empirical evaluation of generic convolutional
and recurrent networks for sequence modeling (03 2018)
7. Betkaoui, B., Thomas, D.B., Luk, W.: Comparing performance and energy ef-
ficiency of fpgas and gpus for high productivity computing. 2010 International
Conference on Field-Programmable Technology pp. 94–101 (2010)
8. Bächlin, M., Hausdorff, J., Roggen, D., Giladi, N., Plotnik, M., Tröster, G.: Online detection of freezing of gait in Parkinson's disease patients: A performance characterization. BODYNETS 2009 - 4th International ICST Conference on Body Area Networks p. 11 (04 2009). https://doi.org/10.4108/ICST.BODYNETS2009.5852
9. Bächlin, M., Plotnik, M., Roggen, D., Giladi, N., Hausdorff, J., Tröster, G.: A wearable system to assist walking of Parkinson's disease patients. Methods of Information in Medicine 49, 88–95 (12 2009). https://doi.org/10.3414/ME09-02-0003
10. Chiu, C.C., Sainath, T., Wu, Y., Prabhavalkar, R., Nguyen, P., Chen, Z., Kannan,
A., Weiss, R., Rao, K., Gonina, E., Jaitly, N., Li, B., Chorowski, J., Bacchiani, M.:
State-of-the-art speech recognition with sequence-to-sequence models. pp. 4774–
4778 (04 2018). https://doi.org/10.1109/ICASSP.2018.8462105
11. Roggen, D., Plotnik, M.: Daphnet freezing of gait data set. UCI Machine Learning Repository (2013)
12. Duarte, J., Han, S., Harris, P., Jindariani, S., Kreinar, E., Kreis, B., Ngadiuba, J.,
Pierini, M., Rivera, R., Tran, N., Wu, Z.: Fast inference of deep neural networks
in fpgas for particle physics. ArXiv abs/1804.06913 (2018)
13. Li, B., Yao, Z., Wang, J., Wang, S., Yang, X., Sun, Y.: Improved deep learning
technique to detect freezing of gait in parkinson’s disease based on wearable sensors.
Electronics 9, 1919 (11 2020). https://doi.org/10.3390/electronics9111919
14. Mahmud, A., Mohammed, A.: A Survey on Deep Learning for Time-Series Fore-
casting, pp. 365–392 (01 2021). https://doi.org/10.1007/978-3-030-59338-4 19
15. Mazilu, S., Hardegger, M., Zhu, Z., Roggen, D., Tröster, G., Plotnik, M., Hausdorff,
J.: Online detection of freezing of gait with smartphones and machine learning
techniques (05 2012). https://doi.org/10.4108/icst.pervasivehealth.2012.248680
16. Mikos, V., Heng, C.H., Tay, A., Yen, S.C., Chia, N., Koh, K., Tan, D., Au, W.L.: A
wearable, patient-adaptive freezing of gait detection system for biofeedback cueing
in parkinson’s disease. IEEE Transactions on Biomedical Circuits and Systems
PP, 1–1 (05 2019). https://doi.org/10.1109/TBCAS.2019.2914253
17. Moore, S., MacDougall, H., Ondo, W.: Ambulatory monitoring of freezing of gait
in parkinson’s disease. Journal of neuroscience methods 167, 340–8 (02 2008).
https://doi.org/10.1016/j.jneumeth.2007.08.023
18. Moore, S., Yungher, D., Morris, T., Dilda, V., MacDougall, H., Shine, J., Nai-
smith, S., Lewis, S.: Autonomous identification of freezing of gait in parkinson’s
disease from lower-body segmental accelerometry. Journal of neuroengineering and
rehabilitation 10, 19 (02 2013). https://doi.org/10.1186/1743-0003-10-19
19. Murad, A., Pyun, J.Y.: Deep recurrent neural networks for human activity recog-
nition. Sensors 17, 2556 (11 2017). https://doi.org/10.3390/s17112556
20. van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., Kavukcuoglu, K.: WaveNet: A generative model for raw audio
(09 2016)
21. Possa, P., Schaillie, D., Valderrama, C.: Fpga-based hardware accelera-
tion: A cpu/accelerator interface exploration. In: 2011 18th IEEE Interna-
tional Conference on Electronics, Circuits, and Systems. pp. 374–377 (2011).
https://doi.org/10.1109/ICECS.2011.6122291
22. Qasaimeh, M., Denolf, K., Lo, J., Vissers, K., Zambreno, J., Jones, P.: Comparing
energy efficiency of cpu, gpu and fpga implementations for vision kernels (05 2019).
https://doi.org/10.1109/ICESS.2019.8782524
23. Rahman, A., Lee, J., Choi, K.: Efficient fpga acceleration of convolutional neural
networks using logical-3d compute array. In: 2016 Design, Automation Test in
Europe Conference Exhibition (DATE). pp. 1393–1398 (2016)
24. Remy, P.: Temporal convolutional networks for keras.
https://github.com/philipperemy/keras-tcn (2020)
25. Rodríguez-Martín, D., Samà, A., Pérez-López, C., Català, A., Arostegui, J.M.M., Cabestany, J., Bayés, À., Alcaine, S., Mestre, B., Prats, A., Crespo, M., Counihan, T., Browne, P., Quinlan, L., ÓLaighin, G., Sweeney, D., Lewy, H., Azuri, J., Vainstein, G., Annicchiarico, R., Costa, A., Rodríguez-Molinero, A.: Home detection of freezing of gait using support vector machines through a single waist-worn triaxial accelerometer. PLoS ONE 12 (2017)
26. Shawahna, A., Sait, S.M., El-Maleh, A.: Fpga-based accelerators of deep learning
networks for learning and classification: A review. IEEE Access 7, 7823–7859 (2019)
27. Sigcha, L., Costa, N., Pavón, I., Costa, S., Arezes, P., López, J.M., Arcas, G.: Deep learning approaches for detecting freezing of gait in Parkinson's disease patients
through on-body acceleration sensors. Sensors (Basel, Switzerland) 20 (2020)
28. Um, T.T., Pfister, F., Pichler, D., Endo, S., Lang, M., Hirche, S., Fietzek, U., Kulić, D.: Data augmentation of wearable sensor data for Parkinson's disease monitoring
using convolutional neural networks. Proceedings of the 19th ACM International
Conference on Multimodal Interaction (2017)
29. Wang, J., Liu, Q., Chen, H.: Detection of freezing of gait for parkinson’s disease
patients based on deep convolutional neural networks. Chinese Journal of Biomed-
ical Engineering 36, 418–425 (08 2017). https://doi.org/10.3969/j.issn.0258-
8021.2017.04.005
30. Zach, H., Janssen, A., Snijders, A., Delval, A., Ferraye, M., Auff, E., Weerdesteyn,
V., Bloem, B., Nonnekes, J.: Identifying freezing of gait in parkinson’s disease
during freezing provoking tasks using waist-mounted accelerometry. Parkinsonism
Related Disorders 21 (10 2015). https://doi.org/10.1016/j.parkreldis.2015.09.051
31. Zhang, Y., Gu, D.: A deep convolutional-recurrent neural network for freez-
ing of gait detection in patients with parkinson’s disease. pp. 1–6 (10 2019).
https://doi.org/10.1109/CISP-BMEI48845.2019.8965723