Dimitris Iakovidis's Lab
Institution: University of Thessaly
Featured research
The detection of abnormalities in the gastrointestinal (GI) tract, including precancerous lesions, depends substantially on expert knowledge and experience. To address the challenge of automated lesion risk assessment from Wireless Capsule Endoscopy (WCE) images, this paper introduces a novel Artificial Intelligence (AI) framework based on Fuzzy Cognitive Maps (FCMs). FCMs are fuzzy graph structures that model knowledge spaces through cause-and-effect relationships, enabling uncertainty-aware reasoning and inference. The proposed Interpretable FCM-based Feature Fusion (IF3) framework makes the following contributions: a) it automatically constructs an FCM based on similarities discovered in training data; b) it enables the fusion of features extracted by different methods. The framework is generic and domain-independent, and it can be integrated into any classifier. To demonstrate its performance, experiments were conducted on real datasets covering a variety of GI abnormalities, using different feature extractors. The results show that the automatically constructed FCM outperforms state-of-the-art methods while providing interpretable results in an easily understandable way.

Keywords: Gastrointestinal tract; Precancerous lesions; Fuzzy Cognitive Maps; Feature fusion; Fuzzy sets; Interpretability
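The abstract does not give implementation details, but the generic FCM reasoning step it relies on is standard: concept activations are propagated through a weighted causal adjacency matrix and a squashing function. The Python sketch below illustrates that mechanism under stated assumptions; the weight-construction rule (cosine similarity between training feature vectors) and all function names are illustrative guesses, not the authors' IF3 implementation.

```python
import numpy as np

def cosine_similarity_matrix(features):
    """Pairwise cosine similarities between feature vectors (rows)."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-12, None)
    return unit @ unit.T

def build_fcm_weights(train_features):
    """Illustrative weight construction: similarities discovered in the
    training data become causal weights in [-1, 1]. (The paper's actual
    construction rule is not given in the abstract.)"""
    W = cosine_similarity_matrix(train_features)
    np.fill_diagonal(W, 0.0)  # no self-causation
    return W

def fcm_infer(W, a0, iters=20, lam=1.0):
    """Standard FCM reasoning: a(t+1) = sigmoid(a(t) + W^T a(t))."""
    a = np.asarray(a0, dtype=float)
    for _ in range(iters):
        a = 1.0 / (1.0 + np.exp(-lam * (a + W.T @ a)))
    return a

# Toy usage: 4 concepts (e.g., fused feature activations), random data.
rng = np.random.default_rng(0)
train = rng.normal(size=(4, 16))  # 4 concepts x 16-dim descriptors
W = build_fcm_weights(train)
state = fcm_infer(W, a0=[0.8, 0.1, 0.1, 0.0])
print(np.round(state, 3))  # converged concept activations
```

Because each concept and each weight carries a meaning (a feature and a causal link strength), the converged activations can be read off directly, which is the source of the interpretability the abstract claims.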
This is a chapter of the published Roadmap on Signal Processing for Next Generation Measurement Systems. Please cite as:
D.K. Iakovidis, M. Ooi, Y.C. Kuang, S. Demidenko, A. Shestakov, V. Sinitsin, M. Henry, A. Sciacchitano, A. Discetti, S. Donati, M. Norgia, A. Menychtas, I. Maglogiannis, S.C. Wriessnegger, L.A. Chacon, G. Dimas, D. Filos, A.H. Aletras, J. Töger, F. Dong, S. Ren, A. Uhl, J. Paziewski, J. Geng, F. Fioranelli, R.M. Narayanan, C. Fernandez, C. Stiller, K. Malamousi, S. Kamnis, K. Delibasis, D. Wang, J. Zhang, and R.X. Gao, "Roadmap on Signal Processing for Next Generation Measurement Systems," Measurement Science and Technology, vol. 33, no. 1, doi: https://doi.org/10.1088/1361-6501/ac2dbd
The adoption of Convolutional Neural Network (CNN) models in high-stakes domains is hindered by their inability to meet society's demand for transparency in decision-making. A growing number of methodologies have emerged for developing CNN models that are interpretable by design; however, such models have not been able to provide interpretations consistent with human perception while maintaining competent performance. In this paper, we tackle these challenges with a novel, general framework for instantiating inherently interpretable CNN models, named E Pluribus Unum Interpretable CNN (EPU-CNN). An EPU-CNN model consists of CNN sub-networks, each of which receives a different representation of an input image expressing a perceptual feature, such as color or texture. The output of an EPU-CNN model comprises the classification prediction and its interpretation, in terms of the relative contributions of perceptual features in different regions of the input image. EPU-CNN models have been extensively evaluated on various publicly available datasets, as well as on a contributed benchmark dataset. Medical datasets are used to demonstrate the applicability of EPU-CNN to risk-sensitive decisions in medicine. The experimental results indicate that EPU-CNN models can achieve classification performance comparable to or better than other CNN architectures while providing humanly perceivable interpretations.
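The abstract specifies the architecture only at the level of per-feature CNN sub-networks whose contributions are combined into a prediction plus an interpretation. The PyTorch sketch below illustrates that additive structure under stated assumptions; the sub-network layers, the two perceptual inputs (a color map and a texture map), and the names PerceptualSubnet and EPUCNNSketch are hypothetical, not the published implementation.

```python
import torch
import torch.nn as nn

class PerceptualSubnet(nn.Module):
    """Hypothetical CNN sub-network: maps one perceptual representation
    of the image (e.g., color or texture) to a scalar contribution."""
    def __init__(self, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # scalar contribution of this feature

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z)  # shape: (batch, 1)

class EPUCNNSketch(nn.Module):
    """Additive combination of per-feature sub-networks: the prediction
    is a sigmoid over the sum of contributions, so each sub-network's
    output is directly readable as that feature's influence."""
    def __init__(self, n_features=2):
        super().__init__()
        self.subnets = nn.ModuleList(
            [PerceptualSubnet() for _ in range(n_features)]
        )

    def forward(self, feature_maps):
        contribs = torch.cat(
            [net(x) for net, x in zip(self.subnets, feature_maps)], dim=1
        )
        prob = torch.sigmoid(contribs.sum(dim=1))
        return prob, contribs  # prediction + per-feature interpretation

# Toy usage: batch of 4 images, two 1-channel perceptual maps each.
color_map = torch.randn(4, 1, 64, 64)
texture_map = torch.randn(4, 1, 64, 64)
model = EPUCNNSketch(n_features=2)
prob, contribs = model([color_map, texture_map])
print(prob.shape, contribs.shape)  # torch.Size([4]) torch.Size([4, 2])
```

The design choice being illustrated is the additive ("E Pluribus Unum") decomposition: because the final score is a plain sum of sub-network outputs, each perceptual feature's contribution to the decision can be reported without any post hoc explanation method.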