Article

A convolutional neural network approach to detect congestive heart failure

Authors:
  • The Organizational Neuroscience Laboratory | University of Surrey | Warwick University

... Hence, healthcare business models are gradually shifting toward data-driven systems, taking a small step toward sparing the lives of HF patients. These automatic data-driven systems can process a wide range of information, imaging [4][5][6] or non-imaging data [7][8][9][10], a task that is challenging for clinicians and researchers to perform. Within HF research, different ML models have been employed, including new approaches to diagnose, classify patients into novel phenotypic groups, and improve prediction capabilities [7][8][9][10][11][12][13][14][15][16][17][18][19][20]. ...
Article
Full-text available
Heart failure (HF) is a life-threatening disease affecting at least 64 million people worldwide. Hence, it places great stresses on patients and healthcare systems. Accordingly, providing a computerized model for HF prediction will help enhance the diagnosis, treatment, and long-term management of HF. In this paper, we introduce a new guided attentive HF prediction approach, in which a sparse-guided feature ranking method is proposed. Firstly, a Gauss–Seidel strategy is applied to the preprocessed feature pool for a low-rank approximation procedure with trace-norm regularization. The resultant sparse attributes, after a Spearman ranking elimination, are employed to guide the original feature pool through a linear translation-variant model. Then, a fast Newton-based method is employed for non-negative matrix factorization of the guided feature pool. The resultant bases of the factorization process are finally utilized in the adopted deep attentive predictive model. For the final prediction stage, instead of the commonly used machine learning approaches, we introduce an attention-based classifier that employs sequential attention to choose the most salient features for an efficient, interpretable learning process. For the evaluation of the proposed HF prediction model, three different datasets are employed, i.e., the UCI, Faisalabad, and Framingham datasets. The proposed approach outperforms state-of-the-art techniques on all datasets, even with small feature sizes. With only four feature bases, the proposed method achieves an average accuracy of 98%, while full accuracy is gained with the full feature bases.
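The factorization stage of that pipeline lends itself to a brief illustration. The Python sketch below (scikit-learn, synthetic data) is only a loose analogue: it reduces a non-negative feature pool to four NMF bases and trains a plain logistic-regression head; the guided ranking, Gauss–Seidel low-rank step, and the attentive classifier itself are not reproduced, and all names and sizes are placeholders.

```python
# Hedged sketch: NMF feature bases feeding a simple classifier (not the authors' code).
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
X = rng.uniform(size=(299, 12))                   # toy non-negative feature pool
y = rng.integers(0, 2, 299)                       # toy HF labels

# Factorize the pool into 4 bases, echoing the "four feature bases" setting.
W = NMF(n_components=4, init="nndsvda", max_iter=500).fit_transform(X)
clf = LogisticRegression().fit(W, y)              # stand-in for the attentive head
print(clf.score(W, y))
```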
... The heterogeneity of data sources in HFrEF prediction studies underscores the disease's multifaceted nature and the potential of utilizing diverse data modalities for various prediction problems. Among data sources, ECG stands out, with high-performance models developed to identify patients with HFrEF [4][5][6], determine the severity of HFrEF [7], or distinguish normal heartbeats from those affected by HFrEF [8], realizing the promise of a high-fidelity cardiovascular measure in HFrEF prediction tasks. Another frequently used data source is Electronic Health Records (EHR), which provide a semi-comprehensive view of a patient's health history by keeping a record of touch-points at various interactions with the healthcare system, e.g., doctor visits, laboratory measurements, etc. EHR data have been utilized to address a large number of prediction problems related to HFrEF, such as distinguishing individuals with HFrEF [9,10], assessing HFrEF severity [11][12][13][14], analyzing the survival of HFrEF patients over periods of time [15], and predicting hospital readmission among HFrEF populations [16,17]. ...
Article
Full-text available
Background: Heart failure with reduced ejection fraction is a complex condition that necessitates adaptive, patient-specific management strategies. This study aimed to evaluate the effectiveness of a time-adaptive machine learning model, the Passive-Aggressive classifier, in predicting heart failure with reduced ejection fraction severity and capturing individualized disease progression. Methods: A time-adaptive Passive-Aggressive classifier was employed, using clinical data and Brain Natriuretic Peptide levels as class designators for heart failure with reduced ejection fraction severity. The model was personalized for individual patients by sequentially incorporating clinical visit data from 0–9 visits. The model's adaptability and effectiveness in capturing individual health trajectories were assessed using accuracy and reliability metrics as more data were added. Results: With the progressive introduction of patient-specific data, the model demonstrated significant improvements in predictive capabilities. By incorporating data from nine visits, significant gains in accuracy and reliability were achieved, with the One-Versus-Rest AUC increasing from 0.4884 with no personalization (zero visits) to 0.8253 (nine visits). This demonstrates the model's ability to handle diverse patient presentations and the dynamic nature of disease progression. Conclusions: The findings show the potential of time-adaptive machine learning models, particularly the Passive-Aggressive classifier, in managing heart failure with reduced ejection fraction and other chronic diseases. By enabling precise, patient-specific predictions, these approaches support early detection, tailored interventions, and improved long-term outcomes. This study highlights the feasibility of integrating adaptive models into clinical workflows to enhance the management of heart failure with reduced ejection fraction and similar chronic conditions.
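The visit-by-visit personalization described above can be illustrated with scikit-learn's PassiveAggressiveClassifier and its partial_fit method. This is a minimal sketch on synthetic data, not the authors' code; the feature count, severity classes, and pretraining cohort are all placeholder assumptions.

```python
# Minimal sketch of time-adaptive personalization with a Passive-Aggressive classifier.
import numpy as np
from sklearn.linear_model import PassiveAggressiveClassifier

rng = np.random.default_rng(0)
n_features = 12                       # hypothetical clinical feature count
classes = np.array([0, 1, 2])         # hypothetical BNP-derived severity classes

model = PassiveAggressiveClassifier(C=0.5, random_state=0)

# Pretend population-level pretraining data (stand-in for the study cohort).
X_pop, y_pop = rng.normal(size=(200, n_features)), rng.integers(0, 3, 200)
model.partial_fit(X_pop, y_pop, classes=classes)

# Sequentially adapt to one patient's visits (mimicking the 0-9 visit protocol).
patient_visits = rng.normal(size=(9, n_features))
visit_labels = rng.integers(0, 3, 9)
for x_visit, y_visit in zip(patient_visits, visit_labels):
    model.partial_fit(x_visit.reshape(1, -1), [y_visit])

print(model.predict(patient_visits[-1].reshape(1, -1)))
```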
... Complaints of shortness of breath can be divided into cardiac and non-cardiac causes; cardiac cases present with shortness of breath on exertion (dyspnea d'effort), increased shortness of breath when lying down (orthopnea), coughing at night, shortness of breath that worsens at night (paroxysmal nocturnal dyspnea), and crackles on auscultation (Porumb et al., 2020). ...
Article
Full-text available
Background: The heart, as a vital organ, cannot be fully protected from the diseases that can attack it, and heart attack is among the deadliest diseases in the world. Objective: This research aims to determine the behavioral responses of heart disease patients that give rise to adaptive responses maintaining individual adaptation. Method: This research uses a qualitative method with a descriptive approach. The sampling technique is purposive sampling. Data were collected through in-depth interviews and Calista Roy observation sheets and analyzed using the content analysis method. The informants were 4 patients with heart disease (2 men, 2 women) in the Siloam Hospital Palembang inpatient room and 1 key informant. Results: Seven themes emerged from this study: lack of knowledge about heart disease, signs and symptoms felt by patients, patients' efforts to treat their disease, compliance with treatment regimens, healthy diet and lifestyle, obstacles in maintaining health, and hospital facilities and actions for heart disease patients. The results show an adaptive response from heart disease patients that can improve patient integrity and support activities conducive to survival. However, some participants still lacked knowledge about their disease, leading to non-compliance with the treatment regimen. Conclusions: Using Roy's adaptation nursing theory when providing nursing care to patients with heart disease makes it easier for nurses to assess how patients are adapting to their condition.
... In comparison to existing methods, the proposed method stands out, attaining an impressive accuracy of 99.58% on the MIT-BIH dataset and surpassing several existing methods. Porumb et al. [16] employ a CNN and achieve an accuracy of 97.8%, while Avanzato and Beritelli [17] utilize a 1-D CNN with five layers and report an accuracy of 98.33%. Kaspal et al. [18] combine ECG feature extraction and a CNN, reaching accuracy levels of 90.60% and 93.24% on different datasets. ...
Preprint
Full-text available
Cardiovascular diseases (CVDs), including abnormal arrhythmias and congestive heart failure, are a leading cause of mortality worldwide, with electrocardiogram (ECG) signals serving as a critical diagnostic tool. This study introduces a novel approach for classifying ECG signals into three categories: normal sinus rhythm (NSR), abnormal arrhythmia (ARR), and congestive heart failure (CHF). The classification is based on a combination of features extracted from the time domain (mean and standard deviation) and the frequency domain (power spectral density and spectral centroid) of the ECG signals. Additionally, energy values from selected frequency bands are utilized. To enhance the model's robustness, we incorporate data augmentation techniques, including time-shifting and flipping of the signals. These augmented datasets are then fed to various classifiers, and a grid-search optimization process is applied to enhance classification performance. This methodology presents a promising framework for automated ECG signal analysis. The results of the proposed work have been exceptionally promising, showcasing a remarkable specificity of 99.7% and an accuracy of 99.58%. These findings hold significant promise for advancing early detection methods and enhancing patient outcomes in the realm of CVDs.
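A hedged sketch of that feature pipeline is shown below: time-domain statistics, a Welch power spectral density, a spectral centroid, band energy, shift/flip augmentation, and a grid-searched SVM. The sampling rate, band limits, and classifier grid are illustrative assumptions, not values from the preprint.

```python
# Illustrative time/frequency feature extraction + augmentation + grid search.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def ecg_features(sig, fs=250):
    f, psd = welch(sig, fs=fs)
    centroid = np.sum(f * psd) / np.sum(psd)      # spectral centroid
    band = psd[(f >= 0.5) & (f <= 40)].sum()      # energy in a chosen band
    return [sig.mean(), sig.std(), psd.sum(), centroid, band]

def augment(sig):
    # Time-shift and flip, as mentioned in the abstract.
    return [sig, np.roll(sig, 50), sig[::-1]]

rng = np.random.default_rng(1)
signals = rng.normal(size=(30, 1000))             # toy ECG segments
labels = rng.integers(0, 3, 30)                   # NSR / ARR / CHF stand-ins

X, y = [], []
for sig, lab in zip(signals, labels):
    for s in augment(sig):
        X.append(ecg_features(s)); y.append(lab)

grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"]}, cv=3)
grid.fit(np.array(X), np.array(y))
print(grid.best_params_)
```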
... Avanzato et al. [27] constructed a 5-layer deep CNN framework to discriminate CHF signals. Porumb et al. [28] exploited a CNN-based model with a class activation mapping strategy for automatic identification of CHF. Yang et al. [29] put forward an ECG-fragment-alignment principal component analysis network for CHF fragment recognition. ...
Article
Full-text available
Congestive heart failure (CHF) is a common cardiovascular disease. Visual inspection of the electrocardiogram (ECG) has become the mainstream diagnostic strategy for CHF detection; however, it is strenuous and challenging. Motivated by the bottleneck attention module (BAM), we construct an efficient architecture based on a naive bidirectional gated recurrent unit (BGRU) for automatic CHF detection, called BAM-BGRU. BAM-BGRU refines attention maps along two pathways, channel attention and spatial attention, to enhance feature representation ability. To evaluate its effectiveness, we use the naive GRU, BGRU, and BAM as control groups on the BIDMC congestive heart failure database (BCHFD) and the congestive heart failure RR intervals database (CHFRID). The results show an obvious performance improvement over the control groups and existing algorithms, with accuracies of 99.1% and 98.8%. Besides, we implement visualization analysis of multilevel derivative gradient flows of ECG episodes to strengthen model interpretability. To our knowledge, we are the first to reformulate the naive BGRU framework for consistent performance improvements, showing promising potential across diverse CNN backbones.
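One plausible reading of the BAM-BGRU idea is sketched below in PyTorch: a bidirectional GRU whose hidden states are refined by a channel-attention branch and a position-wise (spatial) attention branch in BAM's F · (1 + M) style before pooling. The layer sizes and the exact fusion of the two branches are assumptions; this is an interpretation of the abstract, not the authors' implementation.

```python
# Rough PyTorch sketch of a BGRU refined by channel- and position-wise attention.
import torch
import torch.nn as nn

class BAMBGRU(nn.Module):
    def __init__(self, in_dim=1, hidden=64, n_classes=2):
        super().__init__()
        self.bgru = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)
        d = 2 * hidden
        self.channel_att = nn.Sequential(nn.Linear(d, d // 8), nn.ReLU(),
                                         nn.Linear(d // 8, d))        # per-channel gate
        self.spatial_att = nn.Conv1d(d, 1, kernel_size=7, padding=3)  # per-step gate
        self.head = nn.Linear(d, n_classes)

    def forward(self, x):                        # x: (batch, time, in_dim)
        h, _ = self.bgru(x)                      # (batch, time, 2*hidden)
        ch = torch.sigmoid(self.channel_att(h.mean(dim=1))).unsqueeze(1)
        sp = torch.sigmoid(self.spatial_att(h.transpose(1, 2))).transpose(1, 2)
        h = h * (1 + ch * sp)                    # BAM-style refinement F*(1+M)
        return self.head(h.mean(dim=1))          # temporal average pooling

logits = BAMBGRU()(torch.randn(4, 500, 1))       # 4 toy ECG episodes, 500 samples
print(logits.shape)                              # torch.Size([4, 2])
```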
... The main causes of type 2 diabetes mellitus (T2DM) are obesity, lack of physical activity, and unhealthy diets. Several studies have found that T2DM increases the risk of heart failure and leads to worse outcomes once heart failure develops [3][4][5]. ...
Article
Full-text available
The ECG is a crucial tool in the medical field for recording the heartbeat signal over time, aiding in the identification of various cardiac diseases. Commonly, the interpretation of ECGs necessitates specialized knowledge. However, this paper explores the application of machine learning and deep learning algorithms to autonomously identify cardiac diseases in diabetic patients without expert intervention. Two models are introduced in this study: an MLP model that effectively distinguishes between individuals with and without heart disease, achieving a high level of accuracy, and a deep CNN model that further refines the identification of specific cardiac conditions. The PTB Diagnostic ECG dataset, widely recognized in biomedical signal processing and machine learning for electrocardiogram (ECG) analysis tasks, is employed for training, testing, and validation of both the MLP and CNN models. This dataset comprises a diverse range of ECG recordings, providing a comprehensive representation of cardiac conditions. The proposed models feature two hidden layers with weights and biases in the MLP and a three-layer CNN, facilitating the mapping of ECG data to different disease classes. The experimental results demonstrate that the MLP and deep CNN models attain accuracies of up to 90.0% and 98.35%, sensitivities of 97.8% and 95.77%, specificities of 88.9% and 96.3%, and F1-scores of 93.13% and 95.84%, respectively. These outcomes underscore the efficacy of deep learning approaches in automating the diagnosis of cardiac diseases through ECG analysis, showcasing the potential for accurate and efficient healthcare solutions.
... Applications of DNNs in biomedical signal processing include the classification of ECG signals [43], the classification of brain tumors [44], the prediction of missing data in ECG signals [45], and much more. Convolutional neural networks (CNNs) have likewise played an important role in this field, for example in drowsiness detection [46], the detection of heart failure [47], the classification of EEG data recorded while listening to different types of music [48], and the classification of EEG signals for emotion recognition [49] ...
... The stacking algorithm was found to be the optimal approach in ten-fold cross-validation on the MIT-BIH Arrhythmia database. Porumb et al. [11] segmented single heartbeats and proposed a CNN-based multi-layer perceptron for identifying congestive heart failure. Combining manually extracted features with deep learning, a classification approach converted signals into two-dimensional images, such as time-domain enhancement graphs, S-transformation graphs, and Gramian angular field graphs [12]. ...
Article
Full-text available
To address the key issues of data imbalance within ECG signals and modeling optimization, we employed the TimeGAN network and a local attention mechanism based on the artificial bee colony optimization algorithm to enhance the performance and accuracy of ECG modeling. Initially, the TimeGAN network was introduced to rectify data imbalance and create a balanced dataset. Furthermore, the artificial bee colony algorithm autonomously searched hyperparameter configurations by minimizing the Wasserstein distance. Control experiments revealed that data augmentation significantly boosted classification accuracy to 99.51%, effectively addressing the challenges of unbalanced datasets. Moreover, to overcome bottlenecks in the existing network, the EfficientNet architecture was adopted to enhance the performance of modeling optimized with attention mechanisms. Experimental results demonstrated that this integrated approach achieved an impressive overall accuracy of 99.70% and an average positive prediction rate of 99.44%, successfully addressing challenges in ECG signal identification, classification, and diagnosis.
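The search objective described above, minimizing the Wasserstein distance between real and generated signal distributions, can be shown in a few lines. The loop below is a plain random search standing in for the full artificial bee colony (which adds employed, onlooker, and scout phases), and the "generator" is a toy placeholder rather than TimeGAN; everything here is an illustrative assumption.

```python
# Illustrative stand-in for Wasserstein-guided hyperparameter search.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(2)
real = rng.normal(0.0, 1.0, 1000)              # toy "real ECG" sample distribution

def generate(scale, shift):
    # Hypothetical generator whose hyperparameters we tune (not TimeGAN).
    return rng.normal(shift, scale, 1000)

best = (np.inf, None)
for _ in range(50):                            # candidate configurations
    cand = {"scale": rng.uniform(0.5, 2.0), "shift": rng.uniform(-1, 1)}
    score = wasserstein_distance(real, generate(**cand))
    if score < best[0]:
        best = (score, cand)                   # keep the lowest-distance config
print(best)
```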
... DL methods can also diagnose this disease by identifying heart rhythm disorders from electrocardiography (ECG) data. Among DL-based approaches, Convolutional Neural Network (CNN) [13], Recurrent Neural Network (RNN) [14], Long Short-Term Memory (LSTM) [15], and Gated Recurrent Unit (GRU) [16] methods have been widely used in the diagnosis of this disease. Some deep learning architectures can also serve as feature extractors in the diagnostic process, often within hybrid approaches such as CNN-RNN, CNN-LSTM, and CNN-SVM [17]-[19]. ...
Article
Full-text available
Heart failure is a severe health problem that negatively impacts the quality of life and can lead to fatal consequences if left untreated. Early diagnosis and proper treatment can minimize these problems. In this study, a model was developed to measure the performance of machine learning (ML) methods from different categories for heart failure prediction, and performance analyses were carried out both categorically and overall. The tree, meta, and function categories, which include methods known to produce successful results in classification problems, were selected, and five methods from each category were used. In the experimental studies, the performance of the ML methods was measured using basic metrics and classification error metrics based on the confusion matrix. Evaluated categorically, the best performances were obtained with the Alternating Decision Tree (ADT) method in the tree category on all metrics except Recall and False Negative Rate (FNR), the Logistic Boosting Regression (LogitBoost, LBST) method in the meta category on all metrics except the area under the ROC curve (AUC), and the Radial Basis Function Classifier (RBFC) method in the function category on all metrics except Precision and False Positive Rate (FPR). Considering the results across all methods, the RBFC method exhibited the best performance with values of 0.8725 for Accuracy, 0.9173 for Recall, 0.8885 for F-score, 0.0827 for FNR, and 0.1275 for Misclassification Rate (MCR), while the ADT method showed the best performance in terms of Precision, AUC, and FPR with values of 0.8718, 0.9300, and 0.1610, respectively.
... They also tested the model on multiclass classification (9 categories) and obtained a final score of 86.5%. Porumb et al. [32] proposed a model to detect CHF that uses raw ECG signals rather than heart rate variability features. The reported accuracy of this model is high. ...
Article
Full-text available
Heart diseases are a leading cause of death across the globe. Exact detection and treatment of heart disease in its early stages could potentially save lives. The electrocardiogram (ECG) is one of the tests that measures heartbeat fluctuations, and deviations of the signals from normal sinus rhythm can help detect various heart conditions. This paper presents a novel approach to cardiac disease detection using an automated Convolutional Neural Network (CNN) system. Leveraging the Scale-Invariant Feature Transform (SIFT) for unique ECG signal image feature extraction, our model classifies signals into three categories: Arrhythmia (ARR), Congestive Heart Failure (CHF), and Normal Sinus Rhythm (NSR). The proposed model has been evaluated using 96 ARR, 30 CHF, and 36 NSR ECG signals, for a total of 162 images. Our proposed model achieved 99.78% accuracy and an F1 score of 99.78%, which is among the highest recorded to date on this dataset. Alongside SIFT, we also applied the HOG and SURF techniques individually with the CNN model, achieving 99.45% and 78% accuracy respectively, which demonstrates that the SIFT–CNN model is the best-trained and best-performing combination. Notably, our approach introduces significant novelty by combining SIFT with a custom CNN model, enhancing classification accuracy and offering a fresh perspective on cardiac arrhythmia detection. This SIFT–CNN model performed exceptionally well, outperforming existing models used to classify heart diseases.
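The SIFT stage of such a pipeline is easy to sketch with OpenCV. The snippet below draws a synthetic ECG-like plot image (a placeholder for a real plotted segment) and extracts SIFT keypoints with their 128-dimensional descriptors; the CNN stage that consumes them is omitted.

```python
# SIFT keypoint/descriptor extraction from an ECG plot image (OpenCV sketch).
import cv2
import numpy as np

# Synthetic ECG-like trace on a blank image (placeholder for a real plot).
img = np.zeros((256, 512), dtype=np.uint8)
x = np.arange(512)
y = (128 + 80 * np.sin(x / 15.0) * np.exp(-((x % 128) - 64) ** 2 / 200.0)).astype(int)
for i in range(511):
    cv2.line(img, (i, int(y[i])), (i + 1, int(y[i + 1])), 255, 1)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each descriptor is a 128-dimensional vector; pooled descriptors could feed
# the downstream CNN/classifier stage described in the abstract.
print(len(keypoints), None if descriptors is None else descriptors.shape)
```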
... Literature shows that ECG features provide information on the development of HF. [25][26][27] Results from the variable importance analysis (Figure 1) are consistent with this, showing that changes in T-wave morphology can signal risk of cardiac arrhythmias 28,29 and abnormal QT dispersion. [Figure: comparison of HF prediction model AUCs (95% CI) developed on ARIC holdout data 7 and the corresponding AUCs (95% CI) from validation on the MESA dataset; † Light Gradient Boosting Machine model using 288 ECG characteristics as inputs (ECG-Chars).] ...
Article
Full-text available
Background Heart failure (HF) is a progressive condition with high global incidence. HF has two main subtypes: HF with preserved ejection fraction (HFpEF) and HF with reduced ejection fraction (HFrEF). There is an inherent need for simple yet effective electrocardiogram (ECG)-based artificial intelligence (AI; ECG-AI) models that can predict HF risk early to allow for risk modification. Objective The main objectives were to validate HF risk prediction models using Multi-Ethnic Study of Atherosclerosis (MESA) data and to assess performance on HFpEF and HFrEF classification. Methods Six models derived using ARIC data were compared: 1) an ECG-AI model predicting HF risk, developed from raw 12-lead ECGs with a convolutional neural network; 2) the ARIC clinical model (ARIC-HF) using 9 variables; 3) the Framingham Heart Study clinical model (FHS-HF) using 8 variables; 4) a Cox proportional hazards (CPH) model developed using the clinical risk factors in ARIC-HF or FHS-HF; 5) a CPH model using the ECG-AI output together with the clinical risk factors of the CPH model (ECG-AI-Cox); and 6) a Light Gradient Boosting Machine model using 288 ECG characteristics (ECG-Chars). All models were validated on MESA. Performance was evaluated using the area under the receiver operating characteristic curve (AUC) and compared using the DeLong test. Results ECG-AI, ECG-Chars, and ECG-AI-Cox achieved validation AUCs of 0.77, 0.73, and 0.84, respectively. ARIC-HF and FHS-HF yielded AUCs of 0.76 and 0.74, respectively, and CPH resulted in AUC = 0.78. ECG-AI-Cox outperformed all other models, providing an AUC of 0.85 for HFrEF and 0.83 for HFpEF. Conclusion ECG-AI provides better-validated predictions than HF risk calculators and the ECG feature model, and it also performs well on HFpEF and HFrEF classification.
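The ECG-AI-Cox combination, a Cox proportional hazards model over an ECG-AI score plus clinical risk factors, can be illustrated with the lifelines package. Everything below is synthetic: the covariate names, the fabricated follow-up times, and the stand-in AI score are assumptions, not the study's data.

```python
# Hedged illustration: Cox PH model combining an ECG-AI score with clinical covariates.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "ecg_ai_score": rng.uniform(0, 1, n),        # stand-in for the CNN's output
    "age": rng.normal(60, 10, n),
    "sbp": rng.normal(130, 15, n),               # systolic blood pressure
    "diabetes": rng.integers(0, 2, n),
})
risk = 2.0 * df["ecg_ai_score"] + 0.03 * (df["age"] - 60)
df["time"] = rng.exponential(10 / np.exp(risk))  # synthetic follow-up times
df["event"] = rng.integers(0, 2, n)              # synthetic HF event indicator

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()                              # hazard ratios per covariate
```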
... The goal of this study is to investigate, assess, and compare how well different machine learning algorithms perform when used to analyse and diagnose serious illnesses from ECG data. Logistic regression, decision trees, random forests, extra trees classifiers, dense models, convolutional neural networks (CNN), and hybrid CNN-LSTM models are among the techniques considered [4] [5]. These methods were chosen for their potential usefulness in ECG signal processing as well as their extensive use in machine learning. ...
Article
Full-text available
The application of machine learning algorithms for the analysis and diagnosis of severe diseases using electrocardiogram (ECG) measurements is a key area of research in healthcare. The aim of this study is to investigate, evaluate, and compare the performance of several machine learning algorithms for the detection and diagnosis of severe diseases using ECG data. Among the methods considered are convolutional neural networks (CNN), decision trees, random forests, extra trees classifiers, dense models, and hybrid CNN-LSTM models. The project begins with a detailed analysis of the body of work on machine learning, ECG signal processing, and healthcare applications. To ensure a diverse representation of the target population, the study uses a carefully selected and annotated dataset comprising ECG signals from both healthy persons and those with major disorders. In binary classification, the CNN and CNN-LSTM models consistently outperform the other algorithms thanks to their high accuracy, F1-scores, and AUC-ROC values, accurately separating ECG signals into disease and non-disease categories. The multiclass classification results provide further proof of the CNN and CNN-LSTM models' superior accuracy and F1-scores when classifying a wide range of illnesses. In conclusion, this research contributes to healthcare analytics by providing a complete assessment and comparison of machine learning algorithms for the diagnosis and analysis of severe diseases using ECG data. The results demonstrate the effectiveness of the CNN and CNN-LSTM models in achieving high accuracy and F1-scores, paving the way for their potential application in clinical practice. The article offers recommendations for further research in ECG signal processing and highlights the challenges and considerations involved in putting these algorithms into operation.
... However, identifying patients with HF remains challenging due to its subtle and concurrent progression with other conditions and the lack of a single gold standard diagnostic test for HF [31,32]. Several algorithmic approaches have been recently published to improve HF detection, such as convolutional neural network with ECG [33][34][35], logistic regression [36], recurrent neural network [37], and transformer [38] models with EHR data. ...
Article
Full-text available
Background The integration of artificial intelligence (AI) into clinical practice is transforming both clinical practice and medical education. AI-based systems aim to improve the efficacy of clinical tasks, enhancing diagnostic accuracy and tailoring treatment delivery. As it becomes increasingly prevalent in health care for high-quality patient care, it is critical for health care providers to use the systems responsibly to mitigate bias, ensure effective outcomes, and provide safe clinical practices. In this study, the clinical task is the identification of heart failure (HF) prior to surgery with the intention of enhancing clinical decision-making skills. HF is a common and severe disease, but detection remains challenging due to its subtle manifestation, often concurrent with other medical conditions, and the absence of a simple and effective diagnostic test. While advanced HF algorithms have been developed, the use of these AI-based systems to enhance clinical decision-making in medical education remains understudied. Objective This research protocol is to demonstrate our study design, systematic procedures for selecting surgical cases from electronic health records, and interventions. The primary objective of this study is to measure the effectiveness of interventions aimed at improving HF recognition before surgery, the second objective is to evaluate the impact of inaccurate AI recommendations, and the third objective is to explore the relationship between the inclination to accept AI recommendations and their accuracy. Methods Our study used a 3 × 2 factorial design (intervention type × order of pre-post sets) for this randomized trial with medical students. The student participants are asked to complete a 30-minute e-learning module that includes key information about the intervention and a 5-question quiz, and a 60-minute review of 20 surgical cases to determine the presence of HF. To mitigate selection bias in the pre- and posttests, we adopted a feature-based systematic sampling procedure. From a pool of 703 expert-reviewed surgical cases, 20 were selected based on features such as case complexity, model performance, and positive and negative labels. This study comprises three interventions: (1) a direct AI-based recommendation with a predicted HF score, (2) an indirect AI-based recommendation gauged through the area under the curve metric, and (3) an HF guideline-based intervention. Results As of July 2023, 62 of the enrolled medical students have completed participation in this study, including the short quiz and the review of 20 surgical cases. Subject enrollment commenced in August 2022 and will end in December 2023, with the goal of recruiting 75 medical students in years 3 and 4 with clinical experience. Conclusions We demonstrated a study protocol for a randomized trial measuring the effectiveness of interventions using AI and HF guidelines among medical students to enhance HF recognition in preoperative care with electronic health record data. International Registered Report Identifier (IRRID) DERR1-10.2196/49842
... A 10-fold cross-validation process was applied during model training, showing that the DT algorithm achieved the highest success rate of 93.19%. A recent study on congestive heart failure detection was presented by Porumb et al. [18], who applied convolutional NNs to ECG signals and achieved 100% congestive heart failure detection accuracy. ...
Chapter
Cardiovascular disease is the leading cause of global death and disability, and it takes many forms. The diagnosis of heart failure, one type of cardiovascular disease, is a challenging task that plays a significant role in guiding patient treatment. Machine learning approaches can help medical institutions and practitioners predict heart failure in its early phase. This study is the first to analyze the dataset containing clinical records of 299 heart failure patients using a feedforward backpropagation neural network (NN). The aim is to predict the survival of heart failure patients based on clinical data and to identify the strongest factors influencing heart failure disease development. We adopted Shapley additive explanations (SHAP) values to interpret the model findings. The study achieved the highest accuracy to date, 91.11%, showing that the feedforward backpropagation NN performed better than previous approaches. It also revealed that time, ejection fraction (EF), serum creatinine, creatinine phosphokinase (CPK), and age are the strongest risk factors for mortality among patients suffering from heart failure.
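The recipe of a small feedforward network explained with SHAP values can be sketched as follows. The feature names mirror the public UCI heart-failure dataset mentioned in this literature, but the data here are random placeholders and the network size is an assumption; KernelExplainer is used because it is model-agnostic, not because the authors used it.

```python
# Hedged sketch: feedforward NN on clinical records, explained with SHAP values.
import numpy as np
import pandas as pd
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
cols = ["time", "ejection_fraction", "serum_creatinine",
        "creatinine_phosphokinase", "age"]
X = pd.DataFrame(rng.normal(size=(299, len(cols))), columns=cols)
y = rng.integers(0, 2, 299)                       # synthetic death-event labels

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500).fit(X, y)

# Model-agnostic explainer over the positive-class probability.
explainer = shap.KernelExplainer(lambda a: clf.predict_proba(a)[:, 1],
                                 shap.sample(X, 50))
shap_values = explainer.shap_values(X.iloc[:10])
print(np.abs(np.asarray(shap_values)).mean(axis=0))  # mean |SHAP| per feature
```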
... That is, data leakage leads to overfitting, which, to some extent, negatively influences the prediction performance of the model [40,66]. In this case, the training set has already covered some features (i.e., adsorbent properties and operational conditions) of the validation set in point selection, resulting in lower RMSE values for the validation set. However, the test set is independent of the training set; thus, the model might not perform well on the test set. ...
... There have been several documented applications of deep transfer learning in health informatics [14]- [23]. Specifically, deep learning experiments with ECG signals include the diagnosis of arrhythmia [24], [25], congestive heart failure [26], atrial fibrillation [27] and other cardiac disorders. In these investigations, deep learning outperformed conventional methods and offered additional benefits, including the elimination of the necessity for feature extraction, feature selection, and de-noising. ...
Article
Full-text available
The primary objective and contribution of this research is the development and design of an artificial intelligence system that diagnoses Chronic Obstructive Pulmonary Disease (COPD) using only the patient's heart signal (ECG). In contrast to the traditional way of diagnosing COPD, which requires spirometer tests and a laborious workup in a hospital setting, the proposed system uses the classification capabilities of deep transfer learning and the patient's heart signal, which carries signs of COPD in itself and can be captured by any modern smart device. The motivation of this research is that it introduces the first research on automated COPD diagnosis using deep learning and utilizes the first annotated dataset in this field. Since the disease progresses slowly and conceals itself until the final stage, hospital visits for diagnosis are uncommon. Hence, the medical goal of this research is to detect COPD from a simple heart signal before it becomes incurable. Deep transfer learning frameworks, previously trained on a general image dataset, are transferred to carry out automatic diagnosis of COPD by classifying patients' electrocardiogram (ECG) signal equivalents, which are produced by signal-to-image transform techniques. Xception, VGG-19, InceptionResNetV2, DenseNet-121, and "trained-from-scratch" convolutional neural network architectures have been investigated for the detection of COPD, and it is demonstrated that they are able to obtain high performance rates in classifying nearly 33,000 instances using diverse training strategies. The highest classification rate was obtained by the Xception model at 99%. This research shows that the newly introduced COPD detection approach is effective, easily applicable, and eliminates the burden of considerable effort in a hospital. It could also be put into practice and serve as a diagnostic aid for chest disease experts by providing a deeper and faster interpretation of ECG signals. The knowledge gained while identifying COPD from ECG signals may aid in the early diagnosis of future diseases for which little data is currently available.
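The signal-to-image transfer-learning recipe can be condensed into a short Keras sketch: ECG segments become log-spectrogram images, which are fed to an ImageNet-pretrained, frozen Xception with a new classification head. The image size, spectrogram settings, and binary head are illustrative assumptions (running it also requires downloading the pretrained weights).

```python
# Sketch: ECG-to-spectrogram images classified by a frozen, pretrained Xception.
import numpy as np
from scipy.signal import spectrogram
import tensorflow as tf

def to_image(sig, fs=250, size=(96, 96)):
    _, _, sxx = spectrogram(sig, fs=fs, nperseg=64)
    img = np.log1p(sxx)                           # log-spectrogram "image"
    img = tf.image.resize(img[..., None], size).numpy()
    return np.repeat(img, 3, axis=-1)             # 3 channels for Xception

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(96, 96, 3), pooling="avg")
base.trainable = False                            # freeze pretrained features
model = tf.keras.Sequential([base, tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy")

rng = np.random.default_rng(5)
X = np.stack([to_image(rng.normal(size=2000)) for _ in range(8)])
y = rng.integers(0, 2, 8)                         # toy COPD vs. control labels
model.fit(X, y, epochs=1, verbose=0)
```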
... The complexity of the data and the correlations that arise during prediction are challenging for traditional approaches. Historical medical information has been utilized for predicting the disease with numerous traditional ML and DL approaches [2]. Similarly, the study in [3] utilized three different supervised learning algorithms, support vector machine (SVM), Naive Bayes (NB), and decision tree (DT), to explore correlations in heart disease data, which assisted in enhancing prediction. ...
Article
Full-text available
The earlier prediction of heart diseases and appropriate treatment are important for preventing cardiac failure complications and reducing the mortality rate. Traditional prediction and classification approaches have yielded only a minimal rate of prediction accuracy; hence, to overcome the pitfalls of existing systems, the present research performs heart disease prediction with quantum learning. When quantum learning is employed in ML (Machine Learning) and DL (Deep Learning) algorithms, complex data can be processed efficiently in less time and with a higher accuracy rate. Moreover, the proposed ML and DL algorithms can adapt their predictions to alterations in the dataset, and their integration with quantum computing provides robustness in the earlier detection of chronic diseases. The Cleveland heart disease dataset is preprocessed by checking for missing values to avoid incorrect predictions and to improve accuracy. Further, SVM (Support Vector Machine), DT (Decision Tree), and RF (Random Forest) are used to perform classification. Finally, disease prediction is performed with the proposed instance-based quantum ML and DL method, in which the number of qubits is computed with respect to the features and optimized with instance-based learning. Additionally, a comparative assessment quantifies the differences between standard classification algorithms and quantum-based learning in order to determine the significance of quantum-based detection of heart failure. From the results, the accuracy of the proposed system using instance-based quantum DL and instance-based quantum ML is found to be 98% and 83.6%, respectively.
Article
Heart Disease (HD) is a leading cause of mortality worldwide, accounting for a large number of deaths each year. Hence, early detection of HD is needed to increase the survival rate. Many existing research works address HD detection; however, existing approaches to HD diagnosis suffer from low accuracy and external noise, and most rely on either Electrocardiogram (ECG) or Phonocardiogram (PCG) signals alone. Different outputs may sometimes be obtained from each signal, creating misclassified outcomes. Hence, this study proposes a new HD classification approach using the Polynomial Jacobian Matrix-based Deep Jordan Recurrent Neural Network (PJM-DJRNN). The proposed method involves noise removal from the ECG and PCG signals separately using the Brownian Functional-based Bessel Filter (BrF-BLF) and the Frequency Ratio-based Butterworth Filter (FR-BWF), decomposition of the signals using Hamming-based Ensemble Empirical Mode Decomposition (HEEMD), and clustering of the signals into normal and abnormal using Root Farthest First Clustering (RFFC). A rule is then generated from the clustering outcome, and features are extracted from the abnormal signal. The important features are selected using Poisson Distribution Function - Snow Leopard Optimization (PDF-SLO), and the PJM-DJRNN is used to classify the disease types. The proposed method is more effective than existing research methodologies as it uses both ECG and PCG signals, obtains cleaner input signals, and accurately classifies HD. The model's classification efficiency has been validated through experimental analysis, which yielded an accuracy of 97.33%.
Article
Access to healthcare is a fundamental pillar of human well-being, yet cardiovascular diseases (CVD) persist as leading contributors to global mortality. This study explores the transformative potential of Machine Learning (ML) and Deep Learning (DL) algorithms in predicting a spectrum of CVDs, encompassing Heart Failure (HF), Arrhythmia, and coronary artery disease (CAD). With a focus on HF identification and associated risk factors, our primary aim is to rigorously evaluate the efficacy of diverse ML and DL models for CVD diagnosis. Through meticulous analysis of varied datasets, we examine different ML and DL methodologies to determine their predictive capabilities in the realm of CVD. Our findings illuminate promising advancements in CVD prediction accuracy, particularly in heart failure identification. Moreover, we elucidated the intricate interplay between cardiac and pulmonary functions in the context of heart disease, shedding light on disease mechanisms and novel diagnostic avenues. The novelty of our research lies in its comprehensive evaluation of ML and DL algorithms across heterogeneous datasets, fostering the refinement of CVD prediction strategies. By elucidating the effectiveness of various approaches, our study offers invaluable insights for healthcare practitioners and researchers striving to optimize CVD diagnosis and management. Ultimately, the integration of advanced computational techniques holds immense promise for bolstering cardiovascular healthcare outcomes and mitigating the global burden of CVD.
Article
Congenital heart disease (CHD) remains a significant global health challenge, particularly contributing to newborn mortality, with the highest rates observed in middle- and low-income countries due to limited healthcare resources. Machine learning (ML) presents a promising solution by developing predictive models that more accurately assess the risk of mortality associated with CHD. These ML-based models can help healthcare professionals identify high-risk infants and ensure timely and appropriate care. In addition, ML algorithms excel at detecting and analyzing complex patterns that can be overlooked by human clinicians, thereby enhancing diagnostic accuracy. Despite notable advancements, ongoing research continues to explore the full potential of ML in the identification of CHD. The proposed article provides a comprehensive analysis of the ML methods for the diagnosis of CHD in the last eight years. The study also describes different data sets available for CHD research, discussing their characteristics, collection methods, and relevance to ML applications. In addition, the article also evaluates the strengths and weaknesses of existing algorithms, offering a critical review of their performance and limitations. Finally, the article proposes several promising directions for future research, with the aim of further improving the efficacy of ML in the diagnosis and treatment of CHD.
Article
Full-text available
Accurate diagnosis and treatment of cardiovascular diseases require the integration of cardiac imaging, which provides crucial information about the structure and function of the heart to improve overall patient care. This review explores the role of artificial intelligence (AI) in advancing cardiac imaging analysis, with a focus on unsupervised learning methods. Unlike supervised AI systems, which rely on annotated datasets, the use of unsupervised learning proves to be a game-changer. It effectively tackles issues related to limited datasets and sets the stage for scalable, adaptive solutions in cardiac imaging. This paper gives a comprehensive overview of the limitations of traditional methods and the potential of unsupervised AI in overcoming challenges related to dataset scarcity through an extensive literature review and analysis of unsupervised algorithms including clustering techniques, dimensionality reduction, and generative models. This review study highlights the contributions of unsupervised techniques for enhancing diagnostic accuracy and efficiency in cardiac imaging. By comparing unsupervised and supervised methods, the paper aims to explain the benefits and limitations of each approach, offering valuable insights for advancing AI integration in cardiac healthcare. The findings are expected to guide future research and development, leading to innovative advancements in cardiovascular diagnostics.
Article
Full-text available
The primary objective of this study was to enhance the operational efficiency of the current healthcare system by proposing a quicker and more effective approach for healthcare providers to deliver services to individuals facing acute heart failure (HF) and concurrent medical conditions. The aim was to support healthcare staff in providing urgent services more efficiently by developing an automated decision-support Patient Prioritization (PP) Tool that utilizes a tailored machine learning (ML) model to prioritize HF patients with chronic heart conditions and concurrent comorbidities during Urgent Care Unit admission. The study applies key ML models to the PhysioNet dataset, encompassing hospital admissions and mortality records of heart failure patients at Zigong Fourth People's Hospital in Sichuan, China, between 2016 and 2019. In addition, the model outcomes for the PhysioNet dataset are compared with the Healthcare Cost and Utilization Project (HCUP) Maryland (MD) State Inpatient Data (SID) for 2014, a secondary dataset containing heart failure patients, to assess the generalizability of results across diverse healthcare settings and patient demographics. The ML models in this project demonstrate efficiencies surpassing 97.8% and specificities exceeding 95% in identifying HF patients at a higher risk and ranking them based on their mortality risk level. Utilizing this machine learning for the PP approach underscores risk assessment, supporting healthcare professionals in managing HF patients more effectively and allocating resources to those in immediate need, whether in hospital or telehealth settings.
Article
Full-text available
Objective Chronic heart failure (CHF) is a clinical syndrome that encompasses individuals who either have received a definitive diagnosis of heart failure or display a gradual escalation of symptoms as time elapses. Echocardiography, particularly evaluating left ventricular function, is crucial for diagnosis and prognosis. However, 24-hour Holter monitoring, focusing on heart rate variability (HRV), provides insights into autonomic dynamics and vulnerability. Recent HRV parameters offer nuanced information, enhancing risk stratification and guiding personalized interventions in CHF. The interplay between echocardiography and HRV enables a comprehensive approach, refining the management of CHF by considering both cardiac structure and autonomic regulation. Methods This prospective study at "St. Spiridon" County Hospital involved 80 patients with left ventricular ejection fraction (LVEF) < 50%. The diagnosis was made according to standard clinical echocardiography, a laboratory panel, and Holter ECG monitoring. Results Unexpectedly, no statistically significant relationship was found between commonly used HRV parameters and echocardiographic parameters. Further analyses showed statistically significant associations between non-traditional HRV parameters and the E/A ratio, E/E', and lateral and septal S'. Additionally, modifications in HRV parameters were correlated with mitral valve deceleration time, left atrial volume index, estimated pulmonary artery systolic pressure, and cardiac output. Conclusions Less commonly used Holter ECG parameters, such as acceleration capacity, deceleration capacity, and the triangular index, demonstrated significant diagnostic efficacy, especially when conventional HRV parameters were normal. This highlights the importance of incorporating non-traditional HRV parameters in CHF patient risk stratification, urging further exploration through comprehensive multicenter studies for long-term prognostic implications.
Article
Full-text available
Background: Heart failure poses a significant challenge in the global health domain, and accurate prediction of mortality is crucial for devising effective treatment plans. In this study, we employed a Seq2Seq model from deep learning, integrating 12 patient features. By finely modeling continuous medical records, we successfully enhanced the accuracy of mortality prediction. Objective: The objective of this research was to leverage the Seq2Seq model in conjunction with patient features for precise mortality prediction in heart failure cases, surpassing the performance of traditional machine learning methods. Methods: The study utilized a Seq2Seq model in deep learning, incorporating 12 patient features, to intricately model continuous medical records. The experimental design aimed to compare the performance of Seq2Seq with traditional machine learning methods in predicting mortality rates. Results: The experimental results demonstrated that the Seq2Seq model outperformed conventional machine learning methods in terms of predictive accuracy. Feature importance analysis provided critical patient risk factors, offering robust support for formulating personalized treatment plans. Conclusions: This research sheds light on the significant applications of deep learning, specifically the Seq2Seq model, in enhancing the precision of mortality prediction in heart failure cases. The findings present a valuable direction for the application of deep learning in the medical field and provide crucial insights for future research and clinical practices.
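One plausible reading of a Seq2Seq mortality model over per-visit records is an encoder-decoder that summarizes the visit history and unrolls stepwise risk estimates, sketched below in PyTorch. The 12-feature input matches the abstract; the architecture, horizon, and layer sizes are assumptions, not the authors' model.

```python
# Schematic encoder-decoder ("Seq2Seq") over visit sequences with 12 features.
import torch
import torch.nn as nn

class Seq2SeqRisk(nn.Module):
    def __init__(self, n_feat=12, hidden=32, horizon=5):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(n_feat, hidden, batch_first=True)
        self.decoder = nn.GRUCell(1, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, visits):                    # visits: (batch, n_visits, n_feat)
        _, h = self.encoder(visits)               # summarize the record history
        h = h.squeeze(0)
        risk, risks = torch.zeros(visits.size(0), 1), []
        for _ in range(self.horizon):             # unroll future risk estimates
            h = self.decoder(risk, h)
            risk = torch.sigmoid(self.out(h))
            risks.append(risk)
        return torch.cat(risks, dim=1)            # (batch, horizon)

print(Seq2SeqRisk()(torch.randn(3, 7, 12)).shape)  # torch.Size([3, 5])
```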
Thesis
Full-text available
This research investigates the potential of using physiological signs, including respiratory rate, blood pressure, body temperature, heart rate, and oxygen saturation, to predict cardiovascular disease (CVD) in humans. Machine learning (ML) and deep learning (DL) models were employed to determine the most effective prediction model by comparing their performance metrics to a previous study conducted by Ashfaq et al. in 2022. Ashfaq's research utilized three parameters (body temperature, heart rate, and oxygen saturation) and achieved a top performance of 96% using K-Nearest Neighbour (KNN). The analysis utilized a dataset obtained from the MIMIC-III clinical database. Four models were evaluated: Random Forest (RF) and K-Nearest Neighbour (KNN) as part of the ML approach, and Multi-Layer Perceptron (MLP) and Convolutional Neural Network (CNN) as part of the DL approach. Performance evaluation was conducted using five measurement metrics, namely accuracy, precision, recall, F1-score, and ROC AUC. The findings demonstrate significant performance by all models, with MLP exhibiting the highest overall performance measures, including an accuracy of 99%, precision of 99%, recall of 99%, F1-score of 98%, and ROC AUC of 98%. The RF model closely followed MLP in terms of performance. This research provides valuable insights for medical researchers, individuals, academies, analysts, and artificial intelligence enthusiasts, informing them about research ideas and areas for improvement, particularly in the health sector, specifically in the management of CVD in humans. Furthermore, the integration of these models into monitoring systems using body sensors could facilitate prompt emergency intervention for CVD patients. In comparison to the previous study by Ashfaq et al., this research expands the parameter set to include five body parameters, enhancing the accuracy and effectiveness of CVD prediction. The utilization of advanced ML and DL models highlights the potential for significant improvements in the field of cardiovascular disease prediction and management.
Article
Full-text available
The paper reviews the milestones and various modern-day approaches in phonocardiogram (PCG) signal analysis and explains the different phases and methods of heart sound signal analysis. Many physicians depend heavily on ECG experts, driving up healthcare costs and letting stethoscope skills lapse. Auscultation alone is therefore not a simple solution for detecting valvular heart disease, so doctors prefer clinical evaluation using Doppler echocardiography and other pathological tests. However, the benefits of auscultation and other clinical evaluation can be combined with computer-aided diagnosis methods that help considerably in measuring and analyzing various heart sounds. This review covers the most recent research on segmenting valvular heart sounds during the preprocessing stages, such as adaptive fuzzy systems, Shannon energy, time-frequency representations, and the discrete wavelet distribution, for analyzing and diagnosing various heart-related diseases. Different Convolutional Neural Network (CNN)-based deep-learning models are discussed for valvular heart sound analysis, including LeNet-5, AlexNet, VGG16, VGG19, DenseNet121, Inception Net, Residual Net, GoogLeNet, MobileNet, SqueezeNet, and Xception Net. Among all deep-learning methods, the Xception Net claimed the highest accuracy of 99.43 ± 0.03% and sensitivity of 98.58 ± 0.06%. The review also presents recent advances in the feature extraction and classification techniques for cardiac sound, which helps researchers and readers to a great extent.
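One classic preprocessing step named above, the Shannon-energy envelope used to expose S1/S2 lobes for heart-sound segmentation, fits in a few NumPy lines. The window length, sampling rate, and toy PCG below are illustrative choices, not values from the review.

```python
# Shannon-energy envelope for PCG segmentation: E = -x^2 * log(x^2), smoothed.
import numpy as np

def shannon_energy_envelope(pcg, win=101):
    x = pcg / (np.max(np.abs(pcg)) + 1e-12)       # normalize to [-1, 1]
    e = -x**2 * np.log(x**2 + 1e-12)              # Shannon energy per sample
    kernel = np.ones(win) / win                   # moving-average smoothing
    return np.convolve(e, kernel, mode="same")

rng = np.random.default_rng(6)
t = np.linspace(0, 2, 4000)                       # 2 s at 2 kHz (toy PCG)
pcg = np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95) \
      + 0.05 * rng.normal(size=t.size)
env = shannon_energy_envelope(pcg)
print(env.argmax(), env.max())                    # peak marks a heart-sound lobe
```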
Conference Paper
Full-text available
Recent studies have focused on the early detection of congenital heart disease (CHD), as the number of cases continues to rise, making it a prevalent condition worldwide. CHD commonly affects newborn babies. The study "Non-Invasive Parameter-Based Machine Learning Models for Accurate Diagnosis of Congenital Heart Disease" explored the use of non-clinical data to detect CHD, enabling easy identification without affecting the fetus. Machine and deep learning have been employed to detect CHD using this dataset. While previous research has explored various models, the adoption of an artificial neural network (ANN) model notably enhanced CHD detection performance, with these models achieving accuracy as high as 99.79%. Hence, it is advised to develop a model capable of achieving higher accuracy within a shorter timeframe for the early detection of CHD in unborn babies using non-invasive datasets.
Article
Heart failure is an incurable disease that presents with general symptoms. The presence of general rather than specific indications makes early diagnosis difficult. This study aims to obtain clear outputs from 12 features provided by the UCI public dataset in a research space where there is uncertainty regarding the diagnosis of heart failure. For this, a neural-network-based Crocodile and Egyptian Plover (CEP) optimization algorithm has been developed. This algorithm is based on the phenomenon of the Egyptian plover feeding on food scraps from the crocodile's teeth, and it models mutual benefit. In the first stage of the model, the starting points of the Egyptian plovers are randomly assigned to the crocodile's mouth, and the eating amount of the Egyptian plovers is calculated with the determined parameters. Then, the proportion of plover-specific traits in the entire population is determined adaptively for each iteration, and the local-best and global-best parameters are calculated using the cost function. Continuing these processes until the iteration budget is exhausted, the proposed CEP algorithm converges to the global optimum without getting stuck in local optima. Finally, an artificial neural network, capable of learning relationships and patterns in the data, is used together with the CEP algorithm to optimize performance. To show the effectiveness of the proposed approach, its performance is compared with five other algorithms on five different datasets. In most cases, the near-optimal solutions obtained by this hybrid structure are better than the outputs obtained by similar algorithms.
Article
The rapid advancement of technologies such as stream processing, deep-learning approaches, and artificial intelligence plays a prominent role in heart rate detection using prediction models. However, existing methods could not handle high-dimensional datasets or exploit deep feature learning to improve performance. Therefore, this work proposes a real-time heart rate prediction model that combines K-nearest neighbour (KNN), principal component analysis (PCA), and a weighted random forest for feature fusion (the KPCA-WRF approach) with a deep CNN feature-learning framework. Feature selection from the fused features is optimized by ant colony optimization (ACO) and particle swarm optimization (PSO) to enhance the features selected from the deep CNN, and the optimized features are then reduced to low dimensions using the PCA algorithm. The most relevant heart rate features are identified by matching the nearest similar data points with the KNN algorithm. The fused features are then classified to aid the training process: weighted values are assigned to the tuned hyperparameters (feature matrix forms), and the optimal path and continuity of the weighted feature representations are handled by the random forest algorithm in K-fold validation iterations.
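The fusion details above are intricate; as a minimal illustration of the reduce-then-classify core (PCA down to low dimensions, then a class-weighted random forest under K-fold validation), consider the scikit-learn sketch below on synthetic data. The ACO/PSO selection and deep CNN stages are not reproduced.

```python
# Minimal sklearn sketch of the dimensionality-reduction + weighted-forest
# idea described above, on a synthetic stand-in feature pool.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=100, n_informative=15,
                           random_state=0)            # stand-in fused features
model = make_pipeline(
    PCA(n_components=20),                             # reduce to low dimensions
    RandomForestClassifier(n_estimators=200, class_weight="balanced",
                           random_state=0),           # weighted forest
)
print(cross_val_score(model, X, y, cv=5).mean())      # K-fold validation
```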
Article
Full-text available
Although Machine Learning and Deep Learning technologies have been widely used and have shown high accuracy in many applications, including in the health field, their application to early detection of heart disease still has room for improvement, and further research is needed to enhance the accuracy and efficiency of this process. This study aims to understand and improve the process of ECG signal extraction and classification based on Machine Learning and Deep Learning. Essentially, this research evaluates and compares various models, focusing on the Random Forest and Convolutional Neural Network (CNN) models. The study reviews several related works, especially those focusing on the extraction and classification of ECG signals using Machine Learning and Deep Learning. After extraction and classification of the data, an evaluation and comparison is conducted to determine the best-performing model. The research found that Machine Learning methods generally show accuracy between 97.02% and 99.66%, with the Random Forest method at 97.02%, while the CNN method shows higher accuracy, between 98.75% and 100%. Thus, this research confirms the superiority of CNN in this classification task and shows its potential for further use in early detection of heart disease.
Article
Healthcare is indeed an inevitable part of life for everyone. In recent days, most of the deaths have been happening because of noncommunicable diseases. Despite the significant advancements in medical diagnosis, cardiovascular diseases are still the most prominent cause of mortality worldwide. With recent innovations in Machine Learning (ML) and Deep Learning (DL) techniques, there has been an enormous surge in the clinical field, especially in cardiology. Several ML and DL algorithms are useful for predicting cardiovascular diseases. The predictive capability of these algorithms is promising for various cardiovascular diseases like coronary artery disease, arrhythmia, heart failure, and others. We also review the lung interactions during heart disease. After the study of various ML and DL models with different datasets, the performance of the various strategies is analyzed. In this study, we focused on the analysis of various ML and DL algorithms to diagnose cardiovascular disease. In this paper, we also presented a detailed analysis of heart failure detection and various risk factors. This paper may be helpful to researchers in studying various algorithms and finding an optimal algorithm for their dataset.
Article
Many of the features used in sudden cardiac death (SCD) classification algorithms are based on the autonomic system. However, changes in the autonomic system occur both in SCD subjects and in patients with congestive heart failure (CHF); therefore, many overlaps are observed in the features extracted from the cardiac signals of these two groups. To address this challenge, this paper studies the changes in the multifractal dimension in patients with SCD and compares them with subjects with CHF using heart rate variability (HRV) signal processing. For this purpose, HRV signals are first extracted, and their four sub-signals are determined using the empirical mode decomposition (EMD) method. Afterward, the instant amplitude of each sub-signal obtained in the previous step is calculated using the Teager energy method, and new signals are generated from these instant amplitudes. Subsequently, modifications in each new signal's fractal dimensions are obtained using the multifractal detrended fluctuation analysis (MF-DFA) method. The appropriate features are selected using the t-test method and are applied to a support vector machine classifier as input data. The proposed algorithm can differentiate the signals of SCD subjects with an average accuracy of 84.08% up to 26 min prior to the event.
Preprint
Full-text available
This paper emphasizes that the movements of the human heart hold an important position in the field of heart ailment diagnosis. Our study brings a new approach to the analysis of heart movements using machine learning. We used the deep learning algorithms ResNet and BiLSTM for the classification and segmentation of images from the video inputs, and the Explainable AI techniques LIME and SHAP were applied to increase the interpretability and predictability of the model. During the training and testing phases, the downstream task values were folded into the pre-trained set, which was subjected to self-supervised learning through the integration of SimCLR layers. The values obtained from analysing the pre-trained set helped us evaluate the model while predicting the next outcomes from a sequence of inputs. After integrating these results with the current sequence of inputs, we observed that the model, trained with a learning rate of 1e-4, produced predictive outcomes that improved over time. The highest observed accuracy was 92.5%. The precision, recall, and F1 score also showed increasing trends over the epochs, and the final results suggest that the model can provide proper medical assistance by automating heart movement detection and supporting the early diagnosis of heart ailments.
Article
Background: Deep learning has been successfully applied to ECG data to aid in the accurate and more rapid diagnosis of acutely decompensated heart failure (ADHF). Previous applications focused primarily on classifying known ECG patterns in well-controlled clinical settings. However, this approach does not fully capitalize on the potential of deep learning, which directly learns important features without relying on a priori knowledge. In addition, deep learning applications to ECG data obtained from wearable devices have not been well studied, especially in the field of ADHF prediction. Methods: We used ECG and transthoracic bioimpedance data from the SENTINEL-HF study, which enrolled patients (≥21 years) who were hospitalized with a primary diagnosis of heart failure or with ADHF symptoms. To build an ECG-based prediction model of ADHF, we developed a deep cross-modal feature learning pipeline, termed ECGX-Net, that utilizes raw ECG time series and transthoracic bioimpedance data from wearable devices. To extract rich features from ECG time series data, we first adopted a transfer learning approach in which ECG time series were transformed into 2D images, followed by feature extraction using ImageNet-pretrained DenseNet121/VGG19 models. After data filtering, we applied cross-modal feature learning in which a regressor was trained with ECG and transthoracic bioimpedance. Then, we concatenated the DenseNet121/VGG19 features with the regression features and used them to train a support vector machine (SVM) without bioimpedance information. Results: The high-precision classifier using ECGX-Net predicted ADHF with a precision of 94%, a recall of 79%, and an F1-score of 0.85. The high-recall classifier with only DenseNet121 had a precision of 80%, a recall of 98%, and an F1-score of 0.88. We found that ECGX-Net was effective for high-precision classification, while DenseNet121 was effective for high-recall classification. Conclusion: We show the potential for predicting ADHF from single-channel ECG recordings obtained from outpatients, enabling timely warning signs of heart failure. Our cross-modal feature learning pipeline is expected to improve ECG-based heart failure prediction by handling the unique requirements of medical scenarios and resource limitations.
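The cross-modal pipeline is easier to see in code. The sketch below covers only the transfer-learning stage described above: ECG segments are rendered as 2D images, an ImageNet-pretrained DenseNet121 extracts features, and an SVM is trained on top. The rasterisation, data, and labels are toy stand-ins, and the bioimpedance regression branch is omitted.

```python
# Hedged sketch of the transfer-learning stage: 1D ECG -> 2D image ->
# pretrained DenseNet121 features -> SVM. Synthetic data throughout.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

backbone = tf.keras.applications.DenseNet121(
    weights="imagenet", include_top=False, pooling="avg")  # 1024-d features

def ecg_to_image(segment, size=224):
    # Toy rasterisation: tile the 1D segment into a 3-channel image.
    img = np.resize(segment, (size, size)).astype("float32")
    return np.stack([img, img, img], axis=-1)

segments = np.random.randn(8, 5000)                  # stand-in ECG segments
labels = np.random.randint(0, 2, size=8)             # stand-in ADHF labels
imgs = np.stack([ecg_to_image(s) for s in segments])
feats = backbone.predict(
    tf.keras.applications.densenet.preprocess_input(imgs), verbose=0)
clf = SVC(kernel="rbf").fit(feats, labels)           # SVM on deep features
```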
Chapter
With the advancement of medical science, new healthcare methods have been introduced. Biomedical signals have provided us with a deep insight into the working of the human body. Invasive biomedical signaling and sensing involve inserting sensors inside the human body. Non-invasive biomedical signals such as electroencephalogram (EEG), electromyogram (EMG), electrocardiogram (ECG), electrooculogram (EOG), phonocardiogram (PCG), and photoplethysmography (PPG) can be acquired by placing sensors on the surface of the human body. After the acquisition of these biomedical signals, further processing such as artifact removal and feature extraction is required to extract vital information about the subject’s health and well-being. In addition to conventional signal processing and analysis tools, advanced methods that involve machine and deep learning techniques were introduced to extract useful information from these signals. There are several applications of non-invasive biomedical signal processing, including monitoring, detecting, and estimating physiological and pathological states for diagnosis and therapy. For example, detection and monitoring of different types of cancer, heart diseases, blood vessel blockage, neurological disorders, etc. In addition, biomedical signals are also used in brain control interfaces (BCI), Neurofeedback and biofeedback systems to improve the mental and physical health of the subjects.
Article
Full-text available
We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable. Our approach, Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say ‘dog’ in a classification network, or a sequence of words in a captioning network) flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad-CAM is applicable to a wide variety of CNN model families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, all without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are robust to adversarial perturbations, (d) are more faithful to the underlying model, and (e) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show that even non-attention-based models learn to localize discriminative regions of the input image. We devise a way to identify important neurons through Grad-CAM and combine it with neuron names (Bau et al. in Computer Vision and Pattern Recognition, 2017) to provide textual explanations for model decisions. Finally, we design and conduct human studies to measure whether Grad-CAM explanations help users establish appropriate trust in predictions from deep networks, and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. Our code is available at https://github.com/ramprs/grad-cam/, along with a demo on CloudCV (Agrawal et al., in: Mobile cloud visual media computing, pp 265–290. Springer, 2015) (http://gradcam.cloudcv.org) and a video at http://youtu.be/COjUB9Izk6E.
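A minimal Grad-CAM sketch in Keras follows the recipe above: gradients of a target class score with respect to the last convolutional feature maps are globally averaged to weight those maps into a coarse heatmap. The VGG16 backbone and layer name below are assumptions chosen for illustration.

```python
# Minimal Grad-CAM sketch: per-channel gradient averages weight the last
# convolutional feature maps into a coarse class-localisation heatmap.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.VGG16(weights="imagenet")
grad_model = tf.keras.Model(
    model.inputs,
    [model.get_layer("block5_conv3").output, model.output])  # last conv layer

def grad_cam(image, class_index):
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis])
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)            # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))      # global-average gradients
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_out, axis=-1)
    return tf.nn.relu(cam)[0].numpy()                 # keep positive influence

heatmap = grad_cam(np.random.rand(224, 224, 3).astype("float32"), 281)
```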
Article
Full-text available
Computerized electrocardiogram (ECG) interpretation plays a critical role in the clinical ECG workflow¹. Widely available digital ECG data and the algorithmic paradigm of deep learning² present an opportunity to substantially improve the accuracy and scalability of automated ECG analysis. However, a comprehensive evaluation of an end-to-end deep learning approach for ECG analysis across a wide variety of diagnostic classes has not been previously reported. Here, we develop a deep neural network (DNN) to classify 12 rhythm classes using 91,232 single-lead ECGs from 53,549 patients who used a single-lead ambulatory ECG monitoring device. When validated against an independent test dataset annotated by a consensus committee of board-certified practicing cardiologists, the DNN achieved an average area under the receiver operating characteristic curve (ROC) of 0.97. The average F1 score, which is the harmonic mean of the positive predictive value and sensitivity, for the DNN (0.837) exceeded that of average cardiologists (0.780). With specificity fixed at the average specificity achieved by cardiologists, the sensitivity of the DNN exceeded the average cardiologist sensitivity for all rhythm classes. These findings demonstrate that an end-to-end deep learning approach can classify a broad range of distinct arrhythmias from single-lead ECGs with high diagnostic performance similar to that of cardiologists. If confirmed in clinical settings, this approach could reduce the rate of misdiagnosed computerized ECG interpretations and improve the efficiency of expert human ECG interpretation by accurately triaging or prioritizing the most urgent conditions.
Article
Full-text available
Congestive heart failure (CHF) is a chronic heart condition associated with debilitating symptoms that result in increased mortality, morbidity, healthcare expenditure and decreased quality of life. Electrocardiogram (ECG) is a noninvasive and simple diagnostic method that may demonstrate detectable changes in CHF. However, manual diagnosis of ECG signal is often subject to errors due to the small amplitude and duration of the ECG signals, and in isolation, is neither sensitive nor specific for CHF diagnosis. An automated computer-aided system may enhance the diagnostic objectivity and reliability of ECG signals in CHF. We present an 11-layer deep convolutional neural network (CNN) model for CHF diagnosis herein. This proposed CNN model requires minimum pre-processing of ECG signals, and no engineered features or classification are required. Four different sets of data (A, B, C and D) were used to train and test the proposed CNN model. Out of the four sets, Set B attained the highest accuracy of 98.97%, specificity and sensitivity of 99.01% and 98.87% respectively. The proposed CNN model can be put into practice and serve as a diagnostic aid for cardiologists by providing more objective and faster interpretation of ECG signals.
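The architecture above lends itself to a compact sketch. The following is a minimal Keras outline of a deep 1D CNN for binary CHF-versus-normal ECG classification; the layer sizes and the 500-sample window are illustrative assumptions, not the paper's exact 11-layer design.

```python
# Minimal 1D CNN sketch for binary ECG classification (CHF vs. normal).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(500, 1)),                    # one ECG segment
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # CHF probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```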
Article
Full-text available
Recently, the application of neuroscience methods and findings to the study of organizational phenomena has gained significant interest and converged in the emerging field of organizational neuroscience. Yet, this body of research has principally focused on the brain, often overlooking fuller analysis of the activities of the human nervous system and associated methods available to assess them. In this article, we aim to narrow this gap by reviewing heart rate variability (HRV) analysis, which is that set of methods assessing beat-to-beat changes in the heart rhythm over time, used to draw inference on the outflow of the autonomic nervous system (ANS). In addition to anatomo-physiological and detailed methodological considerations, we discuss related theoretical, ethical, and practical implications. Overall, we argue that this methodology offers the opportunity not only to inform on a wealth of constructs relevant for management inquiries but also to advance the overarching organizational neuroscience research agenda and its ecological validity.
Article
Full-text available
Heart failure is a serious condition with high prevalence (about 2% in the adult population in developed countries, and more than 8% in patients older than 75 years). About 3–5% of hospital admissions are linked with heart failure incidents. Heart failure is the leading cause of admission encountered by healthcare professionals in their clinical practice. The costs are very high, reaching up to 2% of the total health costs in developed countries. Building an effective disease management strategy requires analysis of large amounts of data, early detection of the disease, assessment of its severity and early prediction of adverse events. This will inhibit the progression of the disease, improve the quality of life of the patients and reduce the associated medical costs. Towards this direction, machine learning techniques have been employed. The aim of this paper is to present the state of the art of the machine learning methodologies applied for the assessment of heart failure. More specifically, models predicting the presence, estimating the subtype, assessing the severity of heart failure and predicting the presence of adverse events, such as destabilizations, re-hospitalizations, and mortality, are presented. To the authors' knowledge, it is the first time that such a comprehensive review, focusing on all aspects of the management of heart failure, is presented.
Article
Full-text available
Risk assessment of congestive heart failure (CHF) is essential for detection, especially helping patients make informed decisions about medications, devices, transplantation, and end-of-life care. The majority of studies have focused on disease detection between CHF patients and normal subjects using short-/long-term heart rate variability (HRV) measures but not much on quantification. We downloaded 116 nominal 24-hour RR interval records from the MIT/BIH database, including 72 normal people and 44 CHF patients. These records were analyzed under a 4-level risk assessment model: no risk (normal people, N), mild risk (patients with New York Heart Association (NYHA) class I-II, P1), moderate risk (patients with NYHA III, P2), and severe risk (patients with NYHA III-IV, P3). A novel multistage classification approach is proposed for risk assessment and rating CHF using the non-equilibrium decision-tree–based support vector machine classifier. We propose dynamic indices of HRV to capture the dynamics of 5-minute short term HRV measurements for quantifying autonomic activity changes of CHF. We extracted 54 classical measures and 126 dynamic indices and selected from these using backward elimination to detect and quantify CHF patients. Experimental results show that the multistage risk assessment model can realize CHF detection and quantification analysis with total accuracy of 96.61%. The multistage model provides a powerful predictor between predicted and actual ratings, and it could serve as a clinically meaningful outcome providing an early assessment and a prognostic marker for CHF patients.
Article
Full-text available
TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery. This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google. The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license in November, 2015 and are available at www.tensorflow.org.
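As a concrete illustration of the interface described above, here is a minimal example of expressing and executing a differentiable computation in TensorFlow. Note that the paper describes the original graph-based API; current releases run the same kind of computation eagerly, as below.

```python
# Minimal example of expressing and executing a computation with TensorFlow,
# including automatic differentiation of a variable.
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
w = tf.Variable(tf.ones((2, 1)))
with tf.GradientTape() as tape:
    y = tf.reduce_sum(tf.matmul(x, w))   # differentiable computation
grad = tape.gradient(y, w)               # gradient of y with respect to w
print(y.numpy(), grad.numpy().ravel())
```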
Article
Full-text available
Time series classification is relevant to many different domains, such as health informatics, finance, and bioinformatics. Due to its broad applications, researchers have developed many algorithms for this kind of task, e.g., multivariate time series classification. Among the classification algorithms, k-nearest neighbor (k-NN) classification (particularly 1-NN) combined with dynamic time warping (DTW) achieves state-of-the-art performance. The deficiency is that when the data set grows large, the time consumption of 1-NN with DTW becomes very expensive. In contrast to 1-NN with DTW, feature-based classification methods are more efficient but less effective, since their performance usually depends on the quality of hand-crafted features. In this paper, we aim to improve the performance of traditional feature-based approaches through feature learning techniques. Specifically, we propose a novel deep learning framework, multi-channels deep convolutional neural networks (MC-DCNN), for multivariate time series classification. This model first learns features from individual univariate time series in each channel, and combines information from all channels as the feature representation at the final layer. Then, the learnt features are applied to a multilayer perceptron (MLP) for classification. Finally, extensive experiments on real-world data sets show that our model is not only more efficient than the state of the art but also competitive in accuracy. This study implies that feature learning is worth investigating for the problem of time series classification.
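The channel-wise design is straightforward to express with the Keras functional API: each univariate channel gets its own 1D convolutional feature extractor, the channel features are concatenated, and an MLP classifies the result. All sizes below are illustrative assumptions, not the paper's configuration.

```python
# Sketch of the MC-DCNN idea: one conv branch per channel, then concatenate.
import tensorflow as tf

n_channels, length, n_classes = 3, 128, 4
inputs, branches = [], []
for _ in range(n_channels):
    inp = tf.keras.layers.Input(shape=(length, 1))    # one univariate series
    h = tf.keras.layers.Conv1D(8, 5, activation="relu")(inp)
    h = tf.keras.layers.MaxPooling1D(2)(h)
    h = tf.keras.layers.Flatten()(h)
    inputs.append(inp)
    branches.append(h)
merged = tf.keras.layers.Concatenate()(branches)      # combine all channels
out = tf.keras.layers.Dense(64, activation="relu")(merged)        # MLP
out = tf.keras.layers.Dense(n_classes, activation="softmax")(out)
model = tf.keras.Model(inputs, out)
model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")
```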
Article
Full-text available
Background: Congestive Heart Failure (CHF) is a serious cardiac condition that brings high risks of urgent hospitalization and death. Remote monitoring systems are well-suited to managing patients suffering from CHF, and can reduce deaths and re-hospitalizations, as shown by the literature, including multiple systematic reviews. Methods: The monitoring system proposed in this paper aims at helping CHF stakeholders make appropriate decisions in managing the disease and preventing cardiac events, such as decompensation, which can lead to hospitalization or death. Monitoring activities are stratified into three layers: scheduled visits to a hospital following up on a cardiac event, home monitoring visits by nurses, and patient's self-monitoring performed at home using specialized equipment. Appropriate hardware, desktop and mobile software applications were developed to enable a patient's monitoring by all stakeholders. For the first two layers, we designed and implemented a Decision Support System (DSS) using machine learning (Random Forest algorithm) to predict the number of decompensations per year and to assess the heart failure severity based on a variety of clinical data. For the third layer, custom-designed sensors (the Blue Scale system) for electrocardiogram (EKG), pulse transit times, bio-impedance and weight allowed frequent collection of CHF-related data in the comfort of the patient's home. Results: We report numerical performances of the DSS, calculated as multiclass accuracy, sensitivity and specificity in a 10-fold cross-validation. The obtained average accuracies are: 71.9% in predicting the number of decompensations and 81.3% in severity assessment. The most serious class in severity assessment is detected with good sensitivity and specificity (0.87 / 0.95), while, in predicting decompensation, high specificity combined with good sensitivity prevents false alarms. The HRV parameters extracted from the self-measured EKG using the Blue Scale system of sensors are comparable with those reported in the literature about healthy people. Conclusions: The performance of DSSs trained with new patients confirmed the results of previous work, and emphasizes the strong correlation between some CHF markers, such as brain natriuretic peptide (BNP) and ejection fraction (EF), with the outputs of interest. Comparing HRV parameters from healthy volunteers with HRV parameters obtained from PhysioBank archives, we confirm the literature that considers the HRV a promising method for distinguishing healthy from CHF patients.
Article
Full-text available
This paper presents a fast and accurate patient-specific electrocardiogram (ECG) classification and monitoring system. An adaptive implementation of 1D Convolutional Neural Networks (CNNs) is inherently used to fuse the two major blocks of the ECG classification into a single learning body: feature extraction and classification. Therefore, for each patient an individual and simple CNN will be trained by using relatively small common and patient-specific training data, and thus such a patient-specific feature extraction ability can further improve the classification performance. Since this also negates the necessity to extract hand-crafted manual features, once a dedicated CNN is trained for a particular patient, it can solely be used to classify possibly long ECG data stream in a fast and accurate manner or alternatively, such a solution can conveniently be used for real-time ECG monitoring and early alert system on a light-weight wearable device. The results over the MIT-BIH arrhythmia benchmark database demonstrate that the proposed solution achieves a superior classification performance than most of the state-of-the-art methods for the detection of ventricular ectopic beats (VEB) and supraventricular ectopic beats (SVEB). Besides the speed and computational efficiency achieved, once a dedicated CNN is trained for an individual patient, it can solely be used to classify his/her long ECG records such as Holter registers in a fast and accurate manner. Due to its simple and parameter invariant nature, the proposed system is highly generic and thus applicable to any ECG dataset.
Article
Full-text available
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
Article
Full-text available
Heart failure is a life-threatening disease and addressing it should be considered a global health priority. At present, approximately 26 million people worldwide are living with heart failure. The outlook for such patients is poor, with survival rates worse than those for bowel, breast or prostate cancer. Furthermore, heart failure places great stresses on patients, caregivers and healthcare systems. Demands on healthcare services, in particular, are predicted to increase dramatically over the next decade as patient numbers rise owing to ageing populations, detrimental lifestyle changes and improved survival of those who go on to develop heart failure as the final stage of another disease. It is time to ease the strain on healthcare systems through clear policy initiatives that prioritize heart failure prevention and champion equity of care for all. Despite the burdens that heart failure imposes on society, awareness of the disease is poor. As a result, many premature deaths occur. This is in spite of the fact that most types of heart failure are preventable and that a healthy lifestyle can reduce risk. Even after heart failure has developed, premature deaths could be prevented if people were taught to recognize the symptoms and seek immediate medical attention. Public awareness campaigns focusing on these messages have great potential to improve outcomes for patients with heart failure and ultimately to save lives. Compliance with clinical practice guidelines is also associated with improved outcomes for patients with heart failure. However, in many countries, there is considerable variation in how closely physicians follow guideline recommendations. To promote equity of care, improvements should be encouraged through the use of hospital performance measures and incentives appropriate to the locality. To this end, policies should promote the research required to establish an evidence base for performance measures that reflect improved outcomes for patients. Continuing research is essential if we are to address unmet needs in caring for patients with heart failure. New therapies are required for patients with types of heart failure for which current treatments relieve symptoms but do not address the disease. More affordable therapies are desperately needed in the economically developing world. International collaborative research focusing on the causes and treatment of heart failure worldwide has the potential to benefit tens of millions of people. Change at the policy level has the power to drive improvements in prevention and care that will save lives. It is time to make a difference across the globe by confronting the problem of heart failure. A call to action: we urge policymakers at local, national and international levels to collaborate and act on the following recommendations. Promote heart failure prevention: support the development and implementation of public awareness programmes about heart failure, which should define heart failure in simple and accessible language, explain how to recognize the symptoms and emphasize that most types of heart failure are preventable; highlight the need for healthcare professionals across all clinical disciplines to identify patients with illnesses that increase the risk of heart failure and to prescribe preventive medications; and prioritize the elimination of infectious diseases in parts of the world where they still cause heart failure.
Improve heart failure awareness amongst healthcare professionals: encourage the development and use of heart failure education programmes for all appropriate healthcare professionals, aiming to improve the prevention, diagnosis, treatment and long-term management of heart failure and to raise awareness of clinical practice guidelines. Ensure equity of care for all patients with heart failure: provide a healthcare system that delivers timely access to diagnostic services and treatment of heart failure, as well as a seamless transition to long-term management, and ensure that the best available and most appropriate care is consistently provided to all patients through efficient use of resources. Support and empower patients and their caregivers: provide resources for the education and practical support of patients with heart failure and their families or other caregivers, empowering them to engage proactively in long-term care. Promote heart failure research: fund and encourage international collaborative research to improve understanding of the patterns, causes and effects of modern-day heart failure and how the disease can be prevented across the globe; fund and encourage research into new and more affordable therapies and medical devices for all types of heart failure; and fund and encourage research into evidence-based healthcare performance measures that reflect improved clinical outcomes for patients with heart failure.
Article
Full-text available
Heart rate variability (HRV) analysis quantifies the functioning of the autonomic regulation of the heart and the heart's ability to respond. Most studies on HRV report several differences between patients with congestive heart failure (CHF) and healthy subjects, across time-domain, frequency-domain and nonlinear HRV measures. In this paper, we present a new approach to detecting CHF based on the combination of a support vector machine (SVM) and three nonstandard HRV measures (SUM_TD, SUM_FD and SUM_IE). The CHF classification model uses an SVM classifier with the combination of SUM_TD and SUM_FD. In the analysis performed, we found that the CHF classification algorithm obtained the best performance, with classification accuracy, sensitivity and specificity all at 100%.
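For illustration, a minimal scikit-learn sketch of this kind of pipeline follows: an SVM trained on a small set of HRV summary measures. The feature values below are synthetic stand-ins for the paper's SUM_TD and SUM_FD measures, so this reproduces the shape of the method, not its reported results.

```python
# Illustrative SVM on two HRV-style summary features (synthetic stand-ins).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                 # columns: SUM_TD / SUM_FD stand-ins
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # stand-in CHF vs. normal labels
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())
```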
Article
Full-text available
In this study, we describe an automatic classifier of patients with Heart Failure designed for a telemonitoring scenario, improving the results obtained in our previous works. Our previous studies showed that the technique that best processes the typical heart failure telemonitoring parameters is the Classification Tree. We therefore decided to analyze the data with its direct evolution, the Random Forest algorithm. The results show an improvement both in accuracy and in limiting critical errors.
Article
Full-text available
Whereas before 2006 it appears that deep multilayer neural networks were not successfully trained, since then several algorithms have been shown to train them successfully, with experimental results showing the superiority of deeper versus less deep architectures. All these experimental results were obtained with new initialization or training mechanisms. Our objective here is to understand better why standard gradient descent from random initialization does so poorly with deep neural networks, to better understand these recent relative successes, and to help design better algorithms in the future. We first observe the influence of the non-linear activation functions. We find that the logistic sigmoid activation is unsuited for deep networks with random initialization because of its mean value, which can drive especially the top hidden layer into saturation. Surprisingly, we find that saturated units can move out of saturation by themselves, albeit slowly, which explains the plateaus sometimes seen when training neural networks. We find that a new non-linearity that saturates less can often be beneficial. Finally, we study how activations and gradients vary across layers and during training, with the idea that training may be more difficult when the singular values of the Jacobian associated with each layer are far from 1. Based on these considerations, we propose a new initialization scheme that brings substantially faster convergence.
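The "normalized initialization" proposed in this work is now widely known as Xavier (Glorot) initialization. A minimal sketch follows: weights are drawn uniformly from [-sqrt(6/(fan_in+fan_out)), +sqrt(6/(fan_in+fan_out))] so that activation and gradient variances stay roughly constant across layers.

```python
# Glorot/Xavier uniform initialization for a fully-connected weight matrix.
import numpy as np

def glorot_uniform(fan_in, fan_out, seed=0):
    # U[-limit, +limit] with limit = sqrt(6 / (fan_in + fan_out))
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    rng = np.random.default_rng(seed)
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = glorot_uniform(256, 128)
print(W.std())  # approx sqrt(2 / (fan_in + fan_out)) ~ 0.072
```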
Article
Full-text available
In this study, we investigated the discrimination power of short-term heart rate variability (HRV) for discriminating normal subjects versus chronic heart failure (CHF) patients. We analyzed 1914.40 h of ECG of 83 patients of which 54 are normal and 29 are suffering from CHF with New York Heart Association (NYHA) classification I, II, and III, extracted by public databases. Following guidelines, we performed time and frequency analysis in order to measure HRV features. To assess the discrimination power of HRV features, we designed a classifier based on the classification and regression tree (CART) method, which is a nonparametric statistical technique, strongly effective on nonnormal medical data mining. The best subset of features for subject classification includes square root of the mean of the sum of the squares of differences between adjacent NN intervals (RMSSD), total power, high-frequencies power, and the ratio between low- and high-frequencies power (LF/HF). The classifier we developed achieved sensitivity and specificity values of 79.3 % and 100 %, respectively. Moreover, we demonstrated that it is possible to achieve sensitivity and specificity of 89.7 % and 100 %, respectively, by introducing two nonstandard features ΔAVNN and ΔLF/HF, which account, respectively, for variation over the 24 h of the average of consecutive normal intervals (AVNN) and LF/HF. Our results are comparable with other similar studies, but the method we used is particularly valuable because it allows a fully human-understandable description of classification procedures, in terms of intelligible "if … then …" rules.
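Because the paper stresses human-understandable "if … then …" rules, the following scikit-learn sketch fits a small CART-style tree on synthetic HRV-like features and prints it as readable rules. The data and feature names are stand-ins for the measures above, not the study's data.

```python
# Fit a shallow decision tree and print it as "if ... then ..." rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # stand-in HRV feature matrix
y = (X[:, 2] > 0.3).astype(int)               # stand-in CHF vs. normal labels
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["RMSSD", "total_power", "LF_HF"]))
```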
Article
Full-text available
Heart failure is now recognised as a major and escalating public health problem in industrialised countries with ageing populations. Any attempt to describe the epidemiology, aetiology, and prognosis of heart failure, however, must take account of the difficulty in defining exactly what heart failure is. Though the focus of this article is the symptomatic syndrome, it must be remembered that as many patients again may have asymptomatic disease that might legitimately be labelled "heart failure", for example asymptomatic left ventricular systolic dysfunction. More comprehensive reviews of the epidemiology and associated burden of heart failure have been published by McMurray and colleagues1 and more recently by Cowie and colleagues.2 Data relating to the aetiology, epidemiology and prognostic implications of heart failure are principally available from five types of studies. Within the context of the specific limitations of the type of data available from these studies, the current understanding of the aetiology, epidemiology, and prognostic implications of chronic heart failure is discussed here. Prevalence: Table 1 summarises the reported prevalence of heart failure according to whether …
Article
Full-text available
The newly inaugurated Research Resource for Complex Physiologic Signals, which was created under the auspices of the National Center for Research Resources of the National Institutes of Health, is intended to stimulate current research and new investigations in the study of cardiovascular and other complex biomedical signals. The resource has 3 interdependent components. PhysioBank is a large and growing archive of well-characterized digital recordings of physiological signals and related data for use by the biomedical research community. It currently includes databases of multiparameter cardiopulmonary, neural, and other biomedical signals from healthy subjects and from patients with a variety of conditions with major public health implications, including life-threatening arrhythmias, congestive heart failure, sleep apnea, neurological disorders, and aging. PhysioToolkit is a library of open-source software for physiological signal processing and analysis, the detection of physiologically significant events using both classic techniques and novel methods based on statistical physics and nonlinear dynamics, the interactive display and characterization of signals, the creation of new databases, the simulation of physiological and other signals, the quantitative evaluation and comparison of analysis methods, and the analysis of nonstationary processes. PhysioNet is an on-line forum for the dissemination and exchange of recorded biomedical signals and open-source software for analyzing them. It provides facilities for the cooperative analysis of data and the evaluation of proposed new algorithms. In addition to providing free electronic access to PhysioBank data and PhysioToolkit software via the World Wide Web (http://www.physionet.org), PhysioNet offers services and training via on-line tutorials to assist users with varying levels of expertise.
Article
Chronic heart failure (CHF) is now recognized as a major and escalating public health problem. The costs of this syndrome, both in economic and personal terms, are considerable. The prevalence of CHF is 1–2% and appears to be increasing, in part because of ageing of the population. Economic analyses of CHF should include both direct and indirect costs of care. Healthcare expenditure on CHF in developed countries consumes 1–2% of the total health care budget. The cost of hospitalization represents the greatest proportion of total expenditure. Optimization of drug therapy represents the most effective way of reducing costs. Recent economic analyses in the Netherlands and Sweden suggest the costs of care are rising.
Conference Paper
Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.
Conference Paper
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called dropout that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
Conference Paper
In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them
Article
Background and objectives: Automatic electrocardiogram (ECG) heartbeat classification is substantial for diagnosing heart failure. The aim of this paper is to evaluate the effect of machine learning methods in creating a model that classifies normal and congestive heart failure (CHF) beats on long-term ECG time series. Methods: The study was performed in two phases: a feature extraction phase and a classification phase. In the feature extraction phase, the autoregressive (AR) Burg method is applied to extract features. In the classification phase, five different classifiers are examined, namely a C4.5 decision tree, k-nearest neighbor, support vector machine, artificial neural networks, and a random forest classifier. The ECG signals were acquired from the BIDMC Congestive Heart Failure and PTB Diagnostic ECG databases and classified in various experiments. Results: The experimental results were evaluated using several statistical measures (sensitivity, specificity, accuracy, F-measure, and ROC curve) and showed that the random forest method gives 100% classification accuracy. Conclusions: The impressive performance of the random forest method proves that it plays a significant role in detecting congestive heart failure (CHF) and can be valuable in expressing knowledge useful in medicine.
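As a sketch of this two-phase pipeline, the snippet below computes Burg AR coefficients per segment (here via statsmodels, assuming its burg helper) and feeds them to a random forest. Segments and labels are synthetic stand-ins, so the 100% figure above is not reproduced.

```python
# Two-phase sketch: Burg AR coefficients as features, then a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from statsmodels.regression.linear_model import burg

rng = np.random.default_rng(0)
segments = rng.normal(size=(120, 256))                # stand-in ECG segments
labels = rng.integers(0, 2, size=120)                 # stand-in normal vs. CHF

def ar_features(x, order=4):
    rho, _sigma2 = burg(x, order=order)               # AR(4) coefficients
    return rho

X = np.vstack([ar_features(s) for s in segments])
rf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(rf, X, labels, cv=5).mean())
```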
Conference Paper
Purpose: Many patients attending their primary care physician with symptoms suggestive of new-onset heart failure have a 12-lead electrocardiogram (ECG) as part of an initial triage work-up. However, the role of the ECG in predicting heart failure in the community is not yet defined; we therefore examined its ability to predict heart failure in this patient population. Method: All 733 patients attending the rapid access clinic for possible heart failure in St Vincent's University Hospital, Dublin, from 2000 to 2012 were included in this study. A 12-lead ECG was performed using the Agilent Page Writer 100 ECG machine and interpreted by independent cardiologists, and the ECGs were analysed alongside the diagnosis of heart failure. ROC curves were used to assess the robustness of the ECG in predicting heart failure. Result: Heart failure patients had significantly prolonged QRS duration, prolonged QT duration, prolonged QTc, and a more rightward T-wave axis compared with the non-heart-failure group. They also had significant ECG evidence of prior myocardial ischaemia, intraventricular conduction disorder, abnormal axis, ventricular hypertrophy, and atrial fibrillation. Using ECG evidence of myocardial ischaemia, intraventricular conduction disorder, atrioventricular disorder, abnormal axis, atrial enlargement, ventricular hypertrophy, ventricular arrhythmia, and atrial fibrillation as a predictive model, ROC analysis showed that the ECG model is a reasonable test (AUC = 0.81) for predicting heart failure in the community. Adding BNP to the model increased its robustness (AUC = 0.88). Conclusion: The utility of the 12-lead ECG in predicting heart failure in the community is under-appreciated. This study showed that this simple test is useful and can help primary care physicians expedite the diagnosis of heart failure in order to start relevant further investigation and treatment in the community. In conclusion, the ECG is a useful test for predicting heart failure in the community, and the addition of BNP to the model increases the robustness of the test.
Article
The aim of this paper is to describe an automatic classifier to assess the severity of congestive heart failure (CHF) in patients. Disease severity is defined according to the New York Heart Association (NYHA) classification. The proposed classifier aims to distinguish very mild CHF (NYHA I) from mild (NYHA II) and severe CHF (NYHA III), using long-term nonlinear Heart Rate Variability (HRV) measures. 24-hour Holter ECG recordings from two public databases were analysed, covering 44 patients suffering from CHF. One nonlinear HRV feature was effective in distinguishing very mild from mild CHF, achieving a sensitivity and specificity of 7% and 100%, respectively. Moreover, we combine the results obtained by LDA in a previously described classification tree in order to obtain an automatic classifier for CHF severity assessment.
Article
The aim of this paper is to describe the design and preliminary validation of a platform developed to collect and automatically analyze biomedical signals for risk assessment of vascular events and falls in hypertensive patients. This m-health platform, based on cloud computing, was designed to be flexible, extensible, and transparent, and to provide proactive remote monitoring via data-mining functionalities. A retrospective study was conducted to train and test the platform. The developed system was able to predict a future vascular event within the next 12 months with an accuracy of 84% and to identify fallers with an accuracy of 72%. In an ongoing prospective trial, almost all the recruited patients accepted the system favorably, with a limited rate of non-adherence causing data losses (<20%). The developed platform supported clinical decisions by processing tele-monitored data and providing quick and accurate risk assessment of vascular events and falls.
Article
Repetitive hospitalizations are a major health problem in elderly patients with chronic disease, accounting for up to one fourth of all inpatient Medicare expenditures. Congestive heart failure, one of the most common indications for hospitalization in the elderly, is also associated with a high incidence of early rehospitalization, but variables identifying patients at increased risk and an analysis of potentially remediable factors contributing to readmission have not previously been reported. We prospectively evaluated 161 patients 70 years or older who had been hospitalized with documented congestive heart failure. Hospital mortality was 13% (n = 21). Among patients discharged alive, 66 (47%) were readmitted within 90 days. Recurrent heart failure was the most common cause for readmission, occurring in 38 patients (57%). Other cardiac disorders accounted for five readmissions (8%), and noncardiac illness led to readmission in 21 cases (32%). Factors predictive of an increased probability of readmission included a prior history of heart failure, four or more admissions within the preceding 8 years, and heart failure precipitated by an acute myocardial infarction or uncontrolled hypertension (all P < .05). Using subjective criteria, 25 first readmissions (38%) were judged possibly preventable, and 10 (15%) were judged probably preventable. Factors contributing to preventable readmissions included noncompliance with medications (15%) or diet (18%), inadequate discharge planning (15%) or follow-up (20%), failed social support system (21%), and failure to seek medical attention promptly when symptoms recurred (20%). Thus, early rehospitalization in elderly patients with congestive heart failure may be preventable in up to 50% of cases, identification of high-risk patients is possible shortly after admission, and further study of nonpharmacologic interventions designed to reduce readmission frequency is justified.
Article
Heart failure is a global pandemic affecting an estimated 26 million people worldwide and resulting in more than 1 million hospitalizations annually in both the United States and Europe. Although the outcomes for ambulatory HF patients with a reduced ejection fraction (EF) have improved with the discovery of multiple evidence-based drug and device therapies, hospitalized heart failure (HHF) patients continue to experience unacceptably high post-discharge mortality and readmission rates that have not changed in the last 2 decades. In addition, the proportion of HHF patients classified as having a preserved EF continues to grow and may overtake HF with a reduced EF in the near future. However, the prognosis for HF with a preserved EF is similar, and there are currently no available disease-modifying therapies. HHF registries have significantly improved our understanding of this clinical entity and remain an important source of data shaping both public policy and research efforts. The authors review global HHF registries to describe the patient characteristics, management, outcomes and their predictors, quality improvement initiatives, regional differences, and limitations of the available data. Moreover, based on the lessons learned, they also propose a roadmap for the design and conduct of future HHF registries.
Article
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
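A minimal Keras sketch of the placement described above, with normalization inserted between a layer's linear transform and its nonlinearity; the layer sizes and learning rate are illustrative.

```python
# Batch normalization between the linear transform and the nonlinearity.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(256, use_bias=False),       # linear transform
    tf.keras.layers.BatchNormalization(),             # normalize layer inputs
    tf.keras.layers.Activation("relu"),               # then the nonlinearity
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.5),
              loss="sparse_categorical_crossentropy")  # tolerates a higher LR
```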
Article
Heart failure is a clinical syndrome associated with high rates of morbidity and mortality and associated healthcare costs. The burden of heart failure is likely to increase with time, but effective treatments that improve quality of life and survival are available. Accurate and timely diagnosis is crucial to ensure patients receive appropriate treatment and avoid hospital admissions. However, diagnosing heart failure can be difficult as symptoms and signs commonly overlap with other conditions. A chest X-ray can be useful to identify evidence of heart failure or other lung pathology; however, a normal result does not rule out a diagnosis of heart failure. An electrocardiogram (ECG) is often abnormal in patients with heart failure, although up to 10% of patients may have a normal ECG. Natriuretic peptides are a useful biomarker for heart failure and a negative result can rule out the diagnosis. This can be helpful in determining who should be referred for echocardiogram. A new clinical-decision rule (CDR) could help clinicians to achieve a more timely and accurate diagnosis of heart failure.
Article
We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions. The method is straightforward to implement and is based on adaptive estimates of lower-order moments of the gradients. The method is computationally efficient, has modest memory requirements, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The method exhibits invariance to diagonal rescaling of the gradients by adapting to the geometry of the objective function. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, which inspired Adam, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. We demonstrate that Adam works well in practice when experimentally compared to other stochastic optimization methods.
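The update rule itself is compact enough to sketch directly (a minimal NumPy version using the commonly cited default settings; adam_step and its argument names are ours):

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=1e-3,
                  beta1=0.9, beta2=0.999, eps=1e-8):
        """One Adam update of parameters theta given gradient grad;
        m and v are running estimates of the first and second moments."""
        m = beta1 * m + (1 - beta1) * grad             # first-moment estimate
        v = beta2 * v + (1 - beta2) * grad ** 2        # second-moment estimate
        m_hat = m / (1 - beta1 ** t)                   # bias correction (t = step count)
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        return theta, m, v

    # Minimise f(theta) = theta**2 from theta = 5: each step moves theta
    # by roughly lr toward the minimum at 0.
    theta, m, v = 5.0, 0.0, 0.0
    for t in range(1, 5001):
        theta, m, v = adam_step(theta, 2 * theta, m, v, t)

Dividing by (1 - beta**t) corrects the bias introduced by initializing the moment estimates at zero.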
Article
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
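As a rough architectural sketch of the kind of network described (written in PyTorch, which postdates the paper; the channel sizes and a 3x227x227 input are illustrative rather than the exact published configuration):

    import torch.nn as nn

    # Five convolutional layers, some followed by max-pooling, then three
    # fully-connected layers with dropout and a final 1000-way classifier.
    model = nn.Sequential(
        nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Flatten(),
        nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
        nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
        nn.Linear(4096, 1000),   # class scores; softmax is folded into the loss
    )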
Article
Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However, RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.
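A minimal sketch of the deep-RNN idea — a stack of LSTM layers topped by a per-frame classifier, in PyTorch; the feature count, class count, and layer sizes are illustrative, not the paper's exact TIMIT setup:

    import torch
    import torch.nn as nn

    class DeepLSTMTagger(nn.Module):
        """Stacked bidirectional LSTM emitting per-frame class scores."""
        def __init__(self, n_features=39, n_classes=61, hidden=250, layers=3):
            super().__init__()
            self.rnn = nn.LSTM(n_features, hidden, num_layers=layers,
                               bidirectional=True, batch_first=True)
            self.out = nn.Linear(2 * hidden, n_classes)

        def forward(self, x):              # x: (batch, time, n_features)
            h, _ = self.rnn(x)             # h: (batch, time, 2 * hidden)
            return self.out(h)             # per-frame class scores

    scores = DeepLSTMTagger()(torch.randn(4, 100, 39))   # shape (4, 100, 61)

Depth comes from stacking recurrent layers, while the recurrence itself supplies the long-range temporal context the abstract refers to.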
Article
In this paper we describe a Heart Failure analysis Dashboard that, combined with a handy device for the automatic acquisition of a set of the patient's clinical parameters, supports telemonitoring functions. The Dashboard's intelligent core is a Computer Decision Support System designed to assist the clinical decisions of non-specialist caring personnel, and it is based on three functional parts: Diagnosis, Prognosis, and Follow-up management. Four Artificial Intelligence-based techniques are compared for providing the diagnosis function: a Neural Network, a Support Vector Machine, a Classification Tree, and a Fuzzy Expert System whose rules are produced by a Genetic Algorithm. State-of-the-art algorithms are used to support a score-based prognosis function. The patient's Follow-up is used to refine the diagnosis.
Chapter
We are concerned with feed-forward non-linear networks (multi-layer perceptrons, or MLPs) with multiple outputs. We wish to treat the outputs of the network as probabilities of alternatives (e.g. pattern classes), conditioned on the inputs. We look for appropriate output non-linearities and for appropriate criteria for adaptation of the parameters of the network (e.g. weights). We explain two modifications: probability scoring, which is an alternative to squared error minimisation, and a normalised exponential (softmax) multi-input generalisation of the logistic non-linearity. The two modifications together result in quite simple arithmetic, and hardware implementation is not difficult either. The use of radial units (squared distance instead of dot product) immediately before the softmax output stage produces a network which computes posterior distributions over class labels based on an assumption of Gaussian within-class distributions. However, the training, which uses cross-class information, can result in better performance at class discrimination than the usual within-class training method, unless the within-class distribution assumptions are actually correct.
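The normalised exponential in question maps arbitrary real-valued scores to a probability distribution over classes (a minimal NumPy sketch; subtracting the maximum score is a standard numerical-stability device rather than part of the chapter):

    import numpy as np

    def softmax(z):
        """Normalised exponential: exp(z_i) / sum_j exp(z_j)."""
        e = np.exp(z - z.max())       # shift by the max to avoid overflow
        return e / e.sum()

    p = softmax(np.array([2.0, 1.0, 0.1]))   # approx. [0.659, 0.242, 0.099]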
Conference Paper
Restricted Boltzmann machines were developed using binary stochastic hidden units. These can be generalized by replacing each binary unit by an infinite number of copies that all have the same weights but have progressively more negative biases. The learning and inference rules for these “Stepped Sigmoid Units” are unchanged. They can be approximated efficiently by noisy, rectified linear units. Compared with binary units, these units learn features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset. Unlike binary units, rectified linear units preserve information about relative intensities as information travels through multiple layers of feature detectors.
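Concretely, the infinite stack of shifted sigmoid copies has expected activation softplus(x) = log(1 + e^x), and the noisy rectified linear approximation mentioned above samples it as max(0, x + noise) with sigmoid-valued noise variance (a minimal NumPy sketch under that reading of the abstract; the function names are ours):

    import numpy as np

    def softplus(x):
        """Expected activation of the infinite stack of shifted sigmoids."""
        return np.log1p(np.exp(x))

    def noisy_relu(x, rng):
        """Rectify the input plus Gaussian noise whose variance is the
        logistic sigmoid of the input."""
        var = 1.0 / (1.0 + np.exp(-x))           # logistic sigmoid
        return np.maximum(0.0, x + rng.normal(0.0, np.sqrt(var)))

    rng = np.random.default_rng(0)
    x = np.linspace(-3.0, 3.0, 7)
    print(softplus(x), noisy_relu(x, rng))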
Article
The aim of this study was to investigate the discrimination power of standard long-term Heart Rate Variability (HRV) measures for the diagnosis of Chronic Heart Failure (CHF). We performed a retrospective analysis on 4 public Holter databases, analyzing the data of 72 normal subjects and 44 patients suffering from CHF. To assess the discrimination power of HRV measures, we adopted an exhaustive search of all possible combinations of HRV measures and developed classifiers based on the Classification and Regression Tree (CART) method, which is a non-parametric statistical technique. We found that the best combination of features is: Total spectral power of all NN intervals up to 0.4 Hz (TOTPWR), square Root of the Mean of the Sum of the Squares of Differences between adjacent NN intervals (RMSSD), and Standard Deviation of the Averages of NN intervals in all 5-minute segments of a 24-hour recording (SDANN). The classifiers based on this combination achieved a specificity rate and a sensitivity rate of 100.00% and 89.74%, respectively. Our results are comparable with other similar studies, but the method we used is particularly valuable because it provides an easy-to-understand description of classification procedures, in terms of intelligible “if … then …” rules. Finally, the rules obtained by CART are consistent with previous clinical studies.
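For reference, two of the three selected measures are straightforward to compute from a sequence of NN intervals (a minimal NumPy sketch assuming intervals in milliseconds; TOTPWR additionally requires spectral estimation, which we omit):

    import numpy as np

    def rmssd(nn_ms):
        """Root of the mean of squared successive NN-interval differences (ms)."""
        d = np.diff(np.asarray(nn_ms, dtype=float))
        return np.sqrt(np.mean(d ** 2))

    def sdann(nn_ms):
        """Standard deviation of the mean NN interval over successive
        5-minute segments of the recording (ms)."""
        nn_ms = np.asarray(nn_ms, dtype=float)
        t = np.cumsum(nn_ms) / 1000.0        # elapsed time in seconds
        seg = (t // 300).astype(int)         # 300 s = one 5-minute segment
        means = [nn_ms[seg == s].mean() for s in np.unique(seg)]
        return np.std(means)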
Article
This review summarizes some of the most current information concerning the incidence and prevalence of CHF and the resulting morbidity and mortality of this condition. CHF clearly represents an enormous clinical problem and a major social and economic burden. The increase in the numbers of patients with CHF and CHF-related deaths is primarily driven by the aging of the population, but these trends persist even after age adjustment. The likely explanation for this pattern, which is unique among cardiovascular diagnoses, is the improved survival of patients with other chronic cardiovascular conditions, particularly coronary heart disease, hypertension, and diabetes. The causative factors of heart failure have significantly shifted from hypertension and valvular heart disease to coronary heart disease and diabetes during the past several decades. However, survival rates after symptoms of CHF develop remain poor despite significant advances in therapy. This review also highlights some of the differences between observations derived from community-based epidemiologic studies and those from clinical trials. The trials are emphasized in the medical literature and are often discussed as if they are the only reliable source of information concerning CHF. However, because of selection criteria, patient preferences, and referral biases, their participants do not reflect the overall CHF population. The broader CHF population is substantially older, contains more women, and has more comorbidity and a higher mortality rate. Importantly, in the community many patients with CHF have only mildly depressed or even normal left ventricular systolic function, and this group has been excluded from clinical trials. The litany of worsening statistics cited in this review raises the question of how to approach CHF as a growing public health problem. As with any epidemic, the focus needs to be shifted from the treatment of already affected individuals, whose prognosis is limited, to prevention and early intervention. The efficacy of a growing number of treatments to prevent cardiac dysfunction or its subsequent progression to clinical CHF has been established. Now the challenge is to implement these approaches in populations at risk. At the opposite end of the spectrum are the patients who already have advanced CHF. These patients are primarily elderly with substantial comorbidity and a variety of underlying pathophysiologic characteristics. Despite the knowledge gained from randomized clinical trials, these patients continue to use a substantial proportion of health care resources and still have a poor prognosis. For these complex patients, a more comprehensive disease management approach should be implemented.
Article
To review the epidemiology, pathophysiology, and etiology of congestive heart failure (CHF) in older adults. Published reports relevant to the epidemiology, pathophysiology, and etiology of CHF were systematically reviewed. Studies involving older adults and more recent studies were emphasized. More than 75% of patients with CHF in the United States are older than 65 years of age, and CHF is the leading cause of hospitalization in older adults. CHF is also a major cause of chronic disability, and annual expenditures for CHF currently exceed $10 billion. In addition, both the incidence and prevalence of CHF are increasing, largely as a result of the aging of the population. Older adults are predisposed to developing CHF as a result of age-related changes in the cardiovascular system and the high prevalence of hypertension, coronary artery disease, and valvular heart disease in this age group. Although the fundamental pathophysiology of CHF is similar in younger and older patients, older individuals are more prone to develop CHF in the setting of preserved left ventricular systolic function. This syndrome, referred to as diastolic heart failure, accounts for up to 50% of all cases of CHF in adults more than 65 years of age. Coronary heart disease and hypertension are the most common etiologies of CHF in older adults, and they often coexist. Valvular heart disease, especially aortic stenosis and mitral regurgitation, are also common in older adults, whereas nonischemic dilated cardiomyopathy, hypertrophic cardiomyopathy, and restrictive cardiomyopathy occur less frequently. Congestive heart failure is a major public health problem in the United States today as a result of its high and increasing prevalence in the older population as well as its substantial impact on healthcare costs and quality of life. There is an urgent need to develop more effective strategies for the prevention and treatment of CHF in older individuals.
Article
The differentiation between systolic and diastolic CHF is clinically important because it allows one to formulate an appropriate therapeutic regimen. As a rule, ACE inhibitors have become a major component in the treatment of systolic heart failure; diuretics, digoxin, and other vasodilators are used in conjunction with them. Optimal therapy for diastolic heart failure remains to be defined. Further research is required for this subset of patients. Numerous other support measures, such as counseling, activity, diet, patient knowledge of medications, and compliance, all affect the patient's outcome.
Article
Heart failure, a major cause of morbidity and mortality among the elderly, is a serious public health problem. As the population ages and the prevalence of heart failure increases, expenditures related to the care of these patients will climb dramatically. As a result, the health care industry must develop strategies to contain this staggering economic burden. Strategies may include adopting approaches for preventing heart failure and implementing new treatment modalities with proven efficacy into large-scale clinical practice. Successful implementation of these strategies will require intensive physician and patient education and development of innovative approaches to fund support services.
Article
Chronic heart failure (CHF) is now recognized as a major and escalating public health problem. The costs of this syndrome, both in economic and personal terms, are considerable. The prevalence of CHF is 1-2% and appears to be increasing, in part because of ageing of the population. Economic analyses of CHF should include both direct and indirect costs of care. Healthcare expenditure on CHF in developed countries consumes 1-2% of the total health care budget. The cost of hospitalization represents the greatest proportion of total expenditure. Optimization of drug therapy represents the most effective way of reducing costs. Recent economic analyses in the Netherlands and Sweden suggest the costs of care are rising.
Article
Congestive heart failure (CHF) is an increasing public health problem. Among Framingham Heart Study subjects who were free of CHF at baseline, we determined the lifetime risk for developing overt CHF at selected index ages. We followed 3757 men and 4472 women from 1971 to 1996 for 124,262 person-years; 583 subjects developed CHF and 2002 died without prior CHF. At age 40 years, the lifetime risk for CHF was 21.0% (95% CI 18.7% to 23.2%) for men and 20.3% (95% CI 18.2% to 22.5%) for women. Remaining lifetime risk did not change with advancing index age because of rapidly increasing CHF incidence rates. At age 80 years, the lifetime risk was 20.2% (95% CI 16.1% to 24.2%) for men and 19.3% (95% CI 16.5% to 22.2%) for women. Lifetime risk for CHF doubled for subjects with blood pressure ≥160/100 versus <140/90 mm Hg. In a secondary analysis, we only considered those who developed CHF without an antecedent myocardial infarction; at age 40 years, the lifetime risk for CHF was 11.4% (95% CI 9.6% to 13.2%) for men and 15.4% (95% CI 13.5% to 17.3%) for women. When established clinical criteria are used to define overt CHF, the lifetime risk for CHF is 1 in 5 for both men and women. For CHF occurring in the absence of myocardial infarction, the lifetime risk is 1 in 9 for men and 1 in 6 for women, which highlights the risk of CHF that is largely attributable to hypertension. These results should assist in predicting the population burden of CHF and placing greater emphasis on prevention of CHF through hypertension control and prevention of myocardial infarction.