Figure 3. Partial Dependence for CDRSB

Source publication
Conference Paper
Recent advancements in the healthcare domain have resulted in the generation of a large amount of clinical, imaging, and medication data. Extensive analysis of such data using big data analytics helps in the timely identification of various diseases, which in turn aids in building precautionary measures. Alzheimer's disease (AD) is the...

Contexts in source publication

Context 1
... understand the suitability of these as the right biomarkers for AD research, we uncovered the features that showed higher importance. In this paper, we illustrate the CDRSB and MMSE values pictorially in Figure 3 and Figure 4. These plots are built on the complex model, i.e., on the fitted Random Forest. ...
Context 2
... The y-axis in Figure 3 shows the 'change in the prediction', and the blue region indicates the level of confidence. From Figure 3, we can comprehend that an increasing CDRSB score increases the chances of a correct AD diagnosis. Beyond a certain limit, however, it has a reduced influence on the predictions. ...
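To make the construction of a plot like Figure 3 concrete, the following is a minimal sketch, assuming scikit-learn, of a partial dependence curve for CDRSB over a fitted Random Forest. The CSV file name, the feature columns, the diagnosis label, and the hyperparameters are illustrative assumptions, not the paper's actual setup; the paper does not state its exact tooling.

    # Minimal sketch: partial dependence of a Random Forest's prediction on CDRSB.
    # File name, column names, and hyperparameters are assumptions for illustration.
    import pandas as pd
    import matplotlib.pyplot as plt
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import PartialDependenceDisplay

    df = pd.read_csv("adni_subset.csv")          # hypothetical ADNI-style table
    X = df[["CDRSB", "MMSE", "AGE"]]             # assumed feature columns
    y = df["DX"]                                 # assumed binary diagnosis label (AD vs. non-AD)

    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

    # Sweep CDRSB while averaging the predicted probability over the other features;
    # the resulting curve is the 'change in the prediction' shown on the y-axis.
    PartialDependenceDisplay.from_estimator(rf, X, features=["CDRSB"])
    plt.show()

Note that the blue confidence region described in the contexts suggests a plotting tool that draws uncertainty bands (for example a dedicated PDP library); the scikit-learn display above reproduces only the mean partial dependence curve.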

Citations

... Multiple recent papers using interpretability techniques have provided compelling results and guidelines [35] for further medical expertise, including regular [36] and multi-layer multi-modal [37] interpretability of Alzheimer's disease, interpretability of ensemble learning algorithms for predicting dementia [38], and extracting explainable assessments from MRI imaging scans [39]. Hypothesis. ...
Article
Alzheimer’s disease is still a field of research with many open questions. The complexity of the disease prevents early diagnosis before visible symptoms affecting the individual’s cognitive capabilities occur. This research presents an in-depth analysis of a huge data set encompassing medical, cognitive, and lifestyle measurements from more than 12,000 individuals. Several hypotheses were established, whose validity was examined in light of the obtained results. The importance of appropriate experimental design is strongly stressed in the research. Thus, a sequence of methods for handling missing data, redundancy, data imbalance, and correlation analysis was applied for appropriate preprocessing of the data set, and consequently an XGBoost model was trained and evaluated with special attention to hyperparameter tuning. The model was explained using the Shapley values produced by the SHAP method. XGBoost produced an F1-score of 0.84 and as such is considered highly competitive among those published in the literature. This achievement, however, was not the main contribution of this paper. The research’s goal was to perform global and local interpretability of the intelligent model and derive valuable conclusions regarding the established hypotheses. Those methods led to a single scheme which presents either the positive or negative influence of the values of each of the features whose importance was confirmed by means of Shapley values. This scheme might be considered an additional source of knowledge for physicians and other experts whose concern is the exact diagnosis of the early stage of Alzheimer’s disease. The conclusions derived from the intelligent model’s data-driven interpretability confronted all the established hypotheses. This research clearly showed the importance of an explainable machine learning approach that opens the black box and clearly unveils the relationships between the features and the diagnoses.
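As a rough illustration of the train-then-explain workflow this abstract outlines (XGBoost classification followed by global and local Shapley-value explanations), a minimal sketch follows. The file name, the target column, the split, and the hyperparameters are assumptions for illustration, not the authors' configuration.

    # Minimal sketch of an XGBoost + SHAP interpretability workflow.
    # Data file, target column, split, and hyperparameters are illustrative assumptions.
    import pandas as pd
    import shap
    import xgboost as xgb
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    df = pd.read_csv("alzheimer_cohort.csv")            # hypothetical cohort table
    y = (df["diagnosis"] == "AD").astype(int)           # assumed binary target
    X = df.drop(columns=["diagnosis"])                  # assumed numeric features

    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

    model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                              eval_metric="logloss")
    model.fit(X_train, y_train)
    print("F1 on held-out data:", f1_score(y_test, model.predict(X_test)))

    # Shapley values for global (summary) and local (single-record) interpretability.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)
    shap.summary_plot(shap_values, X_test)               # global feature influence
    shap.force_plot(explainer.expected_value, shap_values[0], X_test.iloc[0],
                    matplotlib=True)                     # explanation for one individual

The summary plot corresponds to the global view the abstract describes, while the force plot for a single record illustrates the local explanation a clinician might inspect for one individual.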