Figure 4 - uploaded by Kim Phuc TRAN
Source publication
Deep learning plays a vital role in classifying different arrhythmias using electrocardiography (ECG) data. Nevertheless, training deep learning models normally requires a large amount of data and can lead to privacy concerns. Unfortunately, a large amount of healthcare data cannot be easily collected from a single silo. Additionally, deep learning...
Contexts in source publication
Context 1
... proposed classifier is composed of 4 convolution layers, 3 max pooling layers, 2 fully connected layers, and 1 softmax layer for classification, as shown in Figure 4. The classifier is designed to classify an input ECG signal into one of five classes, as shown in Table 1. ...
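A minimal sketch of such a classifier, under the stated layout (4 convolution layers, 3 max-pooling layers, 2 fully connected layers, 1 softmax layer, 5 output classes), could look as follows. The kernel sizes, channel widths, and the 187-sample input length are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class ECGClassifier(nn.Module):
    """4 conv + 3 max-pool + 2 fully connected + softmax, 5 output classes.
    Kernel sizes, channel widths, and input length (187) are assumed."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),   # 187 -> 183 -> 91
            nn.Conv1d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),  # 91 -> 87 -> 43
            nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),  # 43 -> 39 -> 19
            nn.Conv1d(64, 128, kernel_size=5), nn.ReLU(),                  # 19 -> 15
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 15, 64), nn.ReLU(),  # fully connected layer 1
            nn.Linear(64, n_classes),            # fully connected layer 2
            nn.Softmax(dim=1),                   # softmax layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```

A batch of shape (N, 1, 187) then yields class probabilities of shape (N, 5), one per class in Table 1.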
Similar publications
Diabetic retinopathy (DR) is a common retinal complication caused by diabetes over the years and is considered a cause of vision loss. Its timely identification is crucial to prevent blindness, but it requires human experts to analyze digital color fundus images, making it a time-consuming and expensive process. In this study, we propose a model named Attentio...
Citations
... Algorithms based on machine learning (ML) and deep learning (DL) help in mining hidden data, and this type of data is stored in a centralized location [3], [4]. A vast amount of healthcare-related data is being generated globally, possessing unique characteristics [5]. This collected data is diverse in nature. ...
Medical images comprise sensor measurements that help detect the characteristics of diseases, and computer-based analysis enables early detection of diseases and suitable medication. Human activity recognition (HAR) is highly useful in applications related to medical care, fitness tracking, and patient data archiving. Two kinds of data are fed into a HAR system: image data, and time-series data of physical movements captured through the accelerometers and gyroscopes present in smart devices. This study introduces the crayfish optimization algorithm with long short-term memory (COA-LSTM). Raw data is obtained from three datasets, namely WISDM, UCI-HAR, and PAMAP2; pre-processing then removes unwanted information. The features of the pre-processed data are reduced using principal component analysis and linear discriminant analysis (PCA-LDA). Finally, classification is performed with COA-LSTM, where the hyperparameters are fine-tuned by COA. The suggested method achieves a classification accuracy of 98.23% on the UCI-HAR dataset, whereas existing techniques such as the convolutional neural network (CNN), multi-branch CNN-bidirectional LSTM, CNN with gated recurrent unit (GRU), ST-deep HAR, and Ensem-HAR obtain classification accuracies of 91.98%, 96.37%, 96.20%, 97.7%, and 95.05%, respectively.
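The PCA-then-LDA reduction stage described in this abstract can be sketched with scikit-learn. The window count, feature dimensionality, class count, and component numbers below are illustrative assumptions, not values from the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 60))   # stand-in for pre-processed sensor windows
y = np.arange(300) % 6           # 6 activity classes (assumed)

# PCA first discards low-variance directions; LDA then projects onto
# class-discriminative axes (at most n_classes - 1 of them).
reducer = make_pipeline(
    PCA(n_components=20),
    LinearDiscriminantAnalysis(n_components=5),
)
Z = reducer.fit_transform(X, y)  # Z has shape (300, 5)
```

The reduced features `Z` would then feed the downstream LSTM classifier; the crayfish-optimization hyperparameter search itself is not reproduced here.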
... Raza and Tran [17] proposed a healthcare system for ECG (electrocardiogram) monitoring which employs FL, transfer learning (TL), and explainable AI. The authors propose a secure framework that combines federated learning and transfer learning to protect the privacy of patient data while achieving accurate healthcare predictions. ...
The continual evolution of the digital health domain has led to increased dependence on intelligent devices, such as smart bands and smartwatches, for monitoring personal well-being. These devices produce significant amounts of sensitive personal data daily, underscoring the critical importance of safeguarding the privacy and security of this information. In response to this pressing challenge, this paper introduces a comprehensive framework incorporating deep learning, federated learning, IPFS (a secure data storage system), and blockchain technology. Our approach begins by utilizing deep learning to anonymize sensitive healthcare data, ensuring the protection of individual identities. This anonymized data is then securely stored on IPFS, a decentralized and tamper-proof data storage platform. We employ federated learning to improve models without exposing raw data, enabling training on distributed datasets. Blockchain plays a crucial role by establishing transparent and immutable data access records, thereby enhancing security and accountability, a particularly critical aspect in healthcare, where data integrity is paramount. The primary objective is to balance data privacy and usability for research purposes. Testing our framework on a well-established dataset (CIFAR-10) resulted in a high accuracy rate of 84.59% while preserving data privacy. This framework serves as a robust solution for protecting IoT healthcare data, integrating advanced technologies to meet the specific demands of the field. The implications of this research extend beyond current circumstances, offering a potential shift in the healthcare data handling paradigm.
... [175,176]. However, further research is needed on how to extend interpretability methods to RUL prediction tasks, thus providing engineers with more convincing decision-making tools. ...
As a novel paradigm in machine learning, deep transfer learning (DTL) can harness the strengths of deep learning for feature representation while capitalizing on the advantages of transfer learning for knowledge transfer. Hence, DTL can effectively enhance the robustness and applicability of data-driven remaining useful life (RUL) prediction methods, and it has garnered extensive development and research attention in machinery RUL prediction. Although numerous systematic review articles have been published on DTL-based approaches, a comprehensive overview of the application of DTL to RUL prediction for different mechanical equipment has yet to be systematically conducted. Therefore, it is imperative to further review the pertinent literature on DTL-based approaches. This will help researchers comprehend the latest technological advancements and devise efficient solutions to the cross-domain RUL prediction challenge. In this review, a brief overview of the theoretical background of DTL and its application to RUL prediction tasks is provided first. Then, a detailed discussion of the primary DTL methods and their recent advancements in cross-domain RUL prediction is presented. Next, the practical application of current research is discussed in relation to the research objects and their open-source data. Finally, several challenges and future trends are presented to conclude the paper. We hope this work can offer convenience and inspiration to researchers seeking to advance the field of RUL prediction.
... No papers in this survey reported real FL setups using more than 1,000,000 data points. Out of the 7 FL setups using between 100,000 and 1,000,000 data points, 3 did so in a real FL setup [73], [70], [74]. Fig. 6 shows the distribution of papers using different amounts of FL centers, while the colors indicate whether the FL setup was real, simulated, or not specified. ...
... Moreover, it includes a module which explains the anomaly detection output by identifying the key segments of the ECG signal that show the maximum reconstruction loss. [74] presents an ECG-based arrhythmia classification framework which trains convolutional DNNs via FL and includes an XAI module that computes activation mappings in the ECG signal by means of GradCAM. The framework addresses data availability, privacy, and interpretability challenges. ...
The joint implementation of federated learning (FL) and explainable artificial intelligence (XAI) will allow training models from distributed data and explaining their inner workings while preserving important aspects of privacy. Towards establishing the benefits and tensions associated with their interplay, this scoping review maps the publications that jointly deal with FL and XAI, focusing on publications where an interplay between FL and model interpretability or post-hoc explanations was found. In total, 37 studies met our criteria, with more papers focusing on explanation methods (mainly feature relevance) than on interpretability (mainly algorithmic transparency). Most works used simulated horizontal FL setups involving 10 or fewer data centers. Only one study explicitly and quantitatively analyzed the influence of FL on model explanations, revealing a significant research gap. Aggregation of interpretability metrics across FL nodes created generalized global insights at the expense of diluting node-specific patterns. Eight papers addressed the benefits of incorporating explanation methods as a component of the FL algorithm. Studies using established FL libraries or following reporting guidelines are a minority. More quantitative research and structured, transparent practices are needed to fully understand their mutual impact and the conditions under which it occurs.
... This heterogeneity complicates the training process and can hinder the convergence of FL models. 3) Unexplainability: DL models, while highly accurate, often function as "black boxes," making their outputs difficult to interpret [7]. In critical fields such as healthcare and finance, where decisions may impact human lives and property, explainability is crucial. ...
Federated learning (FL) is a common distributed algorithm for mobile users (MUs) to train artificial intelligence (AI) models; however, several challenges arise when applying FL to real-world scenarios, such as label scarcity, non-IID data, and unexplainability. To address these, we propose an explainable personalized FL framework, called XPFL. First, we introduce generative AI (GAI) assisted personalized federated semi-supervised learning, called GFed. In particular, in local training, we utilize a GAI model to learn from large unlabeled data and apply knowledge distillation-based semi-supervised learning to train the local FL model using the knowledge acquired from the GAI model. In global aggregation, we obtain the new local FL model by fusing the local and global FL models in specific proportions, allowing each local model to incorporate knowledge from the others while preserving its personalized characteristics. Second, we propose an explainable AI mechanism for FL, named XFed. Specifically, in local training, we apply a decision tree to match the input and output of the local FL model. In global aggregation, we utilize t-distributed stochastic neighbor embedding (t-SNE) to visualize the local models before and after aggregation. Finally, simulation results validate the effectiveness of the proposed XPFL framework.
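The fusion step described in this abstract ("fusing the local and global FL models in specific proportions") amounts to a per-parameter convex combination. A minimal sketch, where the mixing weight `alpha` and the parameter-dictionary representation are illustrative assumptions:

```python
import numpy as np

def fuse(local_params, global_params, alpha=0.7):
    """Blend local and global model parameters tensor by tensor.

    alpha close to 1 preserves the client's personalization; alpha close
    to 0 pulls the client toward the global consensus. The value 0.7 is
    an illustrative choice, not one taken from the paper.
    """
    return {name: alpha * local_params[name] + (1.0 - alpha) * global_params[name]
            for name in local_params}

# Example: with alpha = 0.5 the fused weights sit midway between the two.
fused = fuse({"w": np.array([1.0, 0.0])},
             {"w": np.array([0.0, 1.0])}, alpha=0.5)
```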
... Raza et al. [13] designed a novel end-to-end framework for ECG-based healthcare using explainable artificial intelligence and deep convolutional neural networks in a federated environment, addressing challenges such as data availability and privacy issues. The proposed framework effectively classifies various arrhythmias. ...
Artificial intelligence has immense potential for applications in smart healthcare. Nowadays, a large amount of medical data collected by wearable or implantable devices has been accumulated in Body Area Networks. Unlocking the value of this data can better explore the applications of artificial intelligence in the smart healthcare field. To utilize these dispersed data, this paper proposes an innovative Federated Learning scheme, focusing on the challenges of explainability and security in smart healthcare. In the proposed scheme, the federated modeling process and explainability analysis are independent of each other. By introducing post-hoc explanation techniques to analyze the global model, the scheme avoids the performance degradation caused by pursuing explainability while understanding the mechanism of the model. In terms of security, firstly, a fair and efficient client private gradient evaluation method is introduced for explainable evaluation of gradient contributions, quantifying client contributions in federated learning and filtering the impact of low-quality data. Secondly, to address the privacy issues of medical health data collected by wireless Body Area Networks, a multi-server model is proposed to solve the secure aggregation problem in federated learning. Furthermore, by employing homomorphic secret sharing and homomorphic hashing techniques, a non-interactive, verifiable secure aggregation protocol is proposed, ensuring that client data privacy is protected and the correctness of the aggregation results is maintained even in the presence of up to t colluding malicious servers. Experimental results demonstrate that the proposed scheme’s explainability is consistent with that of centralized training scenarios and shows competitive performance in terms of security and efficiency.
Graphical abstract
... Experimental results then demonstrate that the proposed solution can obtain an accuracy of 93.06% and a precision of 88.34% while preserving the privacy of patients and providing explainable medical recommendations. In contrast, the authors in [64] develop an end-to-end framework for ECG classification using FL and XAI. The XAI module is equipped with the Gradient-weighted Class Activation Mapping (Grad-CAM) method to produce heatmaps showing the influential regions of the ECG for predictions. ...
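The Grad-CAM step mentioned here (heatmaps over a 1-D ECG) can be sketched as follows. The tiny network, the choice of the last convolutional stage as the target layer, and the normalization are illustrative assumptions, not the framework's actual architecture.

```python
import torch
import torch.nn as nn

class TinyECGNet(nn.Module):
    """Toy 1-D CNN standing in for the ECG classifier (assumed, not the
    architecture from [64])."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, 7, padding=3), nn.ReLU(),
            nn.Conv1d(8, 16, 7, padding=3), nn.ReLU(),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.head(self.features(x).mean(dim=2))

def grad_cam_1d(model, signal, target_class):
    """Grad-CAM over time for one ECG signal of shape (1, 1, T)."""
    store = {}
    def hook(module, inputs, output):
        output.retain_grad()          # keep gradients of the feature maps
        store["act"] = output
    handle = model.features.register_forward_hook(hook)
    logits = model(signal)
    handle.remove()
    logits[0, target_class].backward()
    act = store["act"]                               # (1, C, T)
    weights = act.grad.mean(dim=2, keepdim=True)     # global-average-pooled grads
    cam = torch.relu((weights * act).sum(dim=1))     # (1, T) importance over time
    return (cam / (cam.max() + 1e-8)).detach()       # normalize to [0, 1]
```

Plotting the returned curve over the raw ECG gives the kind of heatmap the abstract describes: high values mark the samples most influential for the predicted class.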
This chapter explores the significant impact of Machine Learning (ML) and the Internet of Things (IoT) on smart healthcare management, marking a new era of innovation with enhanced patient care and health outcomes. The fusion of IoT devices for real-time health monitoring with ML algorithms enables personalized medical interventions by analyzing vast amounts of patient data for predictive diagnostics and tailored treatment plans. Furthermore, the integration includes digital twin technology for precise diagnoses and treatments and highlights blockchain’s role in safeguarding data integrity and privacy. Additionally, the chapter examines the broader applications of Artificial Intelligence (AI) in healthcare, such as advanced ML models and natural language processing, to improve healthcare processes and medical analysis. Despite the promising potential of ML and IoT in transforming healthcare, challenges like data security and the development of scalable, interoperable solutions remain. The chapter underscores the crucial influence of ML and IoT in advancing efficient, accessible, and sophisticated healthcare services.
... Traditional manual ECG analysis, while valuable, is often hindered by inefficiencies, labor-intensive processes, and susceptibility to human error. To address these limitations and enhance patient care, there is a growing interest in leveraging advanced technologies [2], [3]. ...
This study explores the integration of deep learning and Internet of Things (IoT) technologies to enhance healthcare delivery, with a primary focus on improving electrocardiogram (ECG) analysis and real-time patient monitoring systems. The research presents the development of two innovative deep learning models based on the MIT-BIH dataset, enabling highly accurate ECG analysis. One model is trained for precise R-R peak detection, while the other performs effective classification of ECG signals into five distinct disease categories. The study also introduces an integrated healthcare system that seamlessly captures patients' real-time physiological data, including ECG, SpO2, and temperature, using an ESP32 microcontroller and Raspberry Pi. An IoT infrastructure with Node-RED IBM Platform and Message Queuing Telemetry Transport (MQTT) securely transmits the ECG data to the advanced analysis algorithms. The user interface displays patients' vital signs, including heart rate, oxygen saturation, and temperature, providing healthcare professionals with comprehensive real-time insights. By integrating the deep learning models, which achieve approximately 99% accuracy, alongside robust sensor technology and an IoT architecture, this system aims to transform healthcare by enabling highly precise ECG analysis and remote patient monitoring. The findings of this study underscore the potential of the synergistic convergence of deep learning, sensor technology, and IoT to advance healthcare delivery and improve patient outcomes.
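As a rough illustration of what the R-R peak detection stage produces (the study's deep learning detector is not reproduced here), a naive threshold-based picker on a normalized signal:

```python
import numpy as np

def rr_intervals(ecg, fs, thresh=0.6):
    """Return R-R intervals in seconds from a 1-D ECG array.

    Illustration only: a local-maximum picker on a min-max normalized
    signal, with an assumed amplitude threshold; real detectors (or the
    study's deep learning model) are far more robust.
    """
    x = (ecg - ecg.min()) / (ecg.max() - ecg.min() + 1e-12)
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] > thresh and x[i] >= x[i - 1] and x[i] > x[i + 1]]
    return np.diff(peaks) / fs  # sample gaps between R peaks, in seconds
```

The resulting interval series is what heart-rate and arrhythmia features are typically derived from.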
... Variable selection is determined by their importance to the system's goals, considering actual data, conceptual assumptions, and system limitations [38]. Values given to variables are determined by a combination of facts, assumptions, and observations, whereas model assumptions help shape the formulation of equations. ...
New applications like activity tracking and healthcare monitoring depend on secure and timely data transfer from wearable sensors. This work aims to reduce data loss and latency when transmitting wearable sensor information to analysis terminals by introducing an innovative probabilistic transfer learning approach. Using dynamic transmission slot allocation based on risk thresholds and time sensitivity, the suggested method intelligently arranges and prioritizes the transfer of aggregate sensor data across various sources. High-risk data is given preference when allocating slots in a two-step algorithm that divides the data into emergency and normal classifications to guarantee timely delivery without undue delays. Over time, the transfer learning model steadily improves its slot-assignment accuracy based on feedback, yielding better activity identification. Comprehensive analysis of various queuing scenarios and transmission disruptions shows notable improvements over existing approaches, with waiting times, data loss, and transmission delays reduced by up to 10.49%, 2.42%, and 13.86%, respectively. Most importantly, 3.28% higher accuracy is achieved in identifying distinct activities from the supplied wearable sensor data, enabled by the dependable data supply of the probabilistic modelling approach. With its comprehensive architecture for efficiently managing limited communication resources, the suggested approach can support real-time health surveillance, smart environment services, and other digital-physical systems requiring trustworthy data streaming. More interaction with statistical engines, improvements to security and privacy, and scalability validation on larger distributed platforms are possible areas for future work.
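The two-step prioritization described above (emergency class first, then time sensitivity) can be sketched as a simple ranking. The packet fields and slot count are assumed for illustration and do not come from the paper.

```python
def allocate_slots(packets, n_slots):
    """Assign transmission slots to the highest-priority packets.

    packets: list of (cls, deadline, payload) tuples, where cls is
    'emergency' or 'normal' and deadline is the latest acceptable
    delivery time. Emergency packets win every slot contest; ties
    break on the earlier deadline. Field layout is an assumption.
    """
    ranked = sorted(packets, key=lambda p: (p[0] != "emergency", p[1]))
    return ranked[:n_slots]

# Example: two emergency and two normal packets competing for 3 slots.
queue = [("normal", 5, "a"), ("emergency", 9, "b"),
         ("normal", 1, "c"), ("emergency", 2, "d")]
chosen = allocate_slots(queue, 3)  # both emergencies, then earliest normal
```

The feedback-driven learning of slot assignments would adjust this ranking over time; that adaptive part is not reproduced here.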
... Federated learning has found a variety of applications in medical research: brain tumor segmentation [20], [21], breast cancer classification [22], whole prostate segmentation [23], lung cancer detection [24] and ECG interpretation [25]- [31]. However, in the ECG domain, the majority of publications about federated learning use small datasets that often originate from a single source [25]- [30], so in order to simulate data silos, the data is usually partitioned among classes rather than institutions. As a recent counterexample, Gutierrez et al. [31] employ ECG data from different sources, but they only use morphological and spectral features of ECG to train a feedforward neural network and LSTM [32] network. ...
... Previous works in federated learning for ECG classification [25], [27], [28], [30] have predominantly trained convolutional neural networks on small datasets such as the MIT-BIH Arrhythmia Database [42] or other single databases from the PhysioNet repository [34]. Further examples include Zhang et al. [26], who propose a federated learning algorithm called FedGE specifically designed for ECG classification, and Baghersalimi et al. [29], who focus on detecting epileptic seizures using the EPILEPSIAE database within a federated learning framework. ...
In response to increasing data privacy regulations, this work examines the use of federated learning for deep residual networks to diagnose cardiac abnormalities from electrocardiogram (ECG) data. This approach allows medical institutions to collaborate without exchanging raw patient data. We utilize the publicly available data from the PhysioNet/Computing in Cardiology Challenge 2021, featuring diverse ECG databases, to compare the classification performance of three federated learning methods against both central training with data sharing and isolated training scenarios. We show that federated learning outperforms ECG classifiers trained in isolation. In particular, our findings demonstrate that a globally trained model fine-tuned to specific local datasets surpasses non-collaborative approaches. This shows that models trained in federation learn general features that can be tailored to specific tasks. Furthermore, federated learning almost matches the performance of central training with data sharing on out-of-distribution data from non-participating institutions. These results highlight the ability of federated learning in developing models that generalize well across diverse patient data, without the need to share data among institutions, thus addressing data privacy concerns.
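A standard baseline in comparisons like this one is federated averaging (FedAvg), which aggregates client parameters weighted by local dataset size. A minimal sketch follows; whether this exact method is among the three compared here is not stated in the snippet, and the parameter-dictionary representation is an assumption.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Size-weighted average of per-client parameter dictionaries.

    client_params: list of {name: ndarray} dicts with identical keys.
    client_sizes:  number of local training samples per client, so that
                   larger institutions contribute proportionally more.
    """
    total = float(sum(client_sizes))
    return {name: sum((n / total) * params[name]
                      for params, n in zip(client_params, client_sizes))
            for name in client_params[0]}

# Example: a client with twice the data pulls the average twice as hard.
global_params = fedavg(
    [{"w": np.array([0.0])}, {"w": np.array([3.0])}],
    [1, 2],
)
```

The fine-tuning variant the abstract highlights would then continue training this aggregated model on each institution's local data.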