Artificial Intelligence and Cognitive Load Lab
Institution: Technological University Dublin
Department: School of Computing
About the lab
Artificial Intelligence (AI) has profoundly influenced society since its inception in the 1950s and has evolved in many directions. Originally confined to the laboratories where its foundations were laid, it is now a broad field of research with many branches, having grown from a handful of small-scale ideas into major research areas such as machine learning, reasoning, natural language processing, robotics, planning, perception and computer vision. By developing intelligent machines with the goal of understanding human behavior, the Artificial Intelligence and Cognitive Load Lab (AICL) carries out research aimed at pushing the boundaries of artificial intelligence and bridging the gap between machines and humans.
Featured research (43)
Introduction: Electroencephalography (EEG) source localization (SL) has shown potential for various applications, from epilepsy and seizure focus localization to psychiatric disorder evaluation. However, questions remain about its neurophysiological plausibility in real-world settings where only EEG signals are available without subject-specific anatomical information. This study investigates whether established pre-processing and source localization methods can produce neurophysiologically plausible activation patterns when applied to naturalistic EEG data without structural magnetic resonance imaging (MRI) or digitized electrode positions. Methods: Proven methods are aggregated into an end-to-end pipeline that includes automatic pre-processing, eLORETA for source estimation, and a shared forward model derived from the ICBM 2009c Nonlinear Symmetric template and its corresponding CerebrA atlas. The pipeline is validated using two distinct datasets: the Healthy Brain Network (HBN) dataset, comparing resting and naturalistic video-watching states, and the multi-session and multi-task EEG cognitive dataset (COGBCI), comparing different cognitive workload levels. The validation approach focuses on whether the reconstructed source activations exhibit expected neurophysiological patterns via permutation testing. Results: Findings revealed significant differences between resting state and video-watching tasks, with greater activation in posterior regions during video-watching, consistent with known visual processing pathways. The cognitive workload analysis similarly showed progressive activation increases with task difficulty, mapping to regions associated with executive function. Discussion: These results show that established source localization methods can produce neurophysiologically plausible activation patterns without subject-specific information, highlighting the strengths and limitations of applying these methods to mid-length naturalistic EEG data. This research demonstrates the viability of template-based source analysis for research settings where individual structural imaging is unavailable or impractical.
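The abstract does not include code; as a rough illustration of how a template-based eLORETA pipeline of this kind can be assembled, the sketch below uses MNE-Python with the bundled fsaverage template as a stand-in for the ICBM/CerebrA forward model described in the study. The file name naturalistic_eeg_raw.fif, the filter band, and the standard_1020 montage are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of template-based EEG source localization with eLORETA.
# fsaverage stands in for the subject-independent template anatomy, so no
# subject MRI or digitized electrode positions are required.
import os.path as op
import mne
from mne.datasets import fetch_fsaverage

raw = mne.io.read_raw_fif("naturalistic_eeg_raw.fif", preload=True)  # hypothetical file
raw.set_eeg_reference("average", projection=True)  # average reference for source modelling
raw.filter(1.0, 40.0)                              # basic band-pass pre-processing (assumed)

# Template anatomy: fsaverage ships with a BEM and a source space.
fs_dir = fetch_fsaverage(verbose=False)
src = op.join(fs_dir, "bem", "fsaverage-ico-5-src.fif")
bem = op.join(fs_dir, "bem", "fsaverage-5120-5120-5120-bem-sol.fif")

raw.set_montage("standard_1020")  # template electrode positions, no digitization needed
fwd = mne.make_forward_solution(raw.info, trans="fsaverage", src=src,
                                bem=bem, eeg=True, meg=False)

# Noise covariance and inverse operator shared across the pipeline.
cov = mne.compute_raw_covariance(raw)
inv = mne.minimum_norm.make_inverse_operator(raw.info, fwd, cov)

# eLORETA source estimate; region-level activations could then be aggregated
# with an atlas and compared across conditions by permutation tests.
stc = mne.minimum_norm.apply_inverse_raw(raw, inv, lambda2=1.0 / 9.0,
                                         method="eLORETA")
```

The same forward model and inverse operator can be reused for every recording, which is the practical advantage of a template-based pipeline when individual structural imaging is unavailable.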
This research investigates the potential of computational argumentation, specifically the application of the Abstract Argumentation Framework (AAF), to enhance the evaluation of deliberative quality in public discourse. It focuses on integrating the AAF and its related semantics with the Discourse Quality Index (DQI), a reputable indicator of deliberative quality. The motivation is to overcome the DQI's constraints, namely its dependence on manual coding and its focus on specific speech acts, by drawing on the AAF's formal and logical features. This is done by exploring how the AAF can identify conflicts among arguments and assess the acceptability of different viewpoints, potentially leading to a more automated and objective evaluation of deliberative quality. A pilot study is conducted on the topic of abortion to illustrate the proposed methodology. The findings of this research demonstrate that AAF methods can improve discourse analysis by automatically identifying strong arguments through conflict resolution strategies. They also emphasise the potential of the proposed procedure to mitigate the dependence on manual coding and improve deliberation processes.
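As a minimal illustration of the Abstract Argumentation Framework machinery this work builds on, the sketch below computes the grounded extension of a toy framework by iterating Dung's characteristic function. The argument labels and attack relation are placeholders, not the pilot-study data.

```python
# Toy sketch of Dung-style abstract argumentation: the grounded extension
# collects arguments defended by the set built so far, starting from the
# unattacked arguments and iterating to a fixpoint.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of (arguments, attacks)."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in extension:
                continue
            # a is acceptable if every attacker of a is itself attacked
            # by some argument already in the extension.
            if all(any((d, b) in attacks for d in extension) for b in attackers[a]):
                extension.add(a)
                changed = True
    return extension

args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}        # a attacks b, b attacks c
print(grounded_extension(args, atts))  # {'a', 'c'}: a is unattacked and defends c
```

On real discourse data, the arguments would come from coded speech acts and the attack relation from detected conflicts, with the accepted set feeding into the quality assessment.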
Tree-based and rule-based machine learning models play pivotal roles in explainable artificial intelligence (XAI) due to their unique ability to provide explanations in the form of tree or rule sets that are easily understandable and interpretable, making them essential for applications in which trust in model decisions is necessary. These transparent models are typically used in surrogate modeling, a post-hoc XAI approach for explaining the logic of black-box models, enabling users to comprehend and trust complex predictive systems while maintaining competitive performance. This study proposes the Cost-Sensitive Rule and Tree Extraction (CORTEX) method, a novel rule-based XAI algorithm grounded in the multi-class cost-sensitive decision tree (CSDT) method. The original version of the CSDT is extended to classification problems with more than two classes by introducing the concept of an n-dimensional class-dependent cost matrix. The performance of CORTEX as a rule-extractor XAI method is compared to other post-hoc tree and rule extraction methods across several datasets with different numbers of classes. Several quantitative evaluation metrics are employed to assess the explainability of generated rule sets. Our findings demonstrate that CORTEX is competitive with other tree-based methods and can be superior to other rule-based methods across different datasets. The extracted rule sets suggest the advantages of using the CORTEX method over other methods by producing smaller rule sets with shorter rules on average across datasets with a diverse number of classes. Overall, the results underscore the potential of CORTEX as a powerful XAI tool for scenarios that require the generation of clear, human-understandable rules while maintaining good predictive performance.
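CORTEX itself is not reproduced here, but the general post-hoc surrogate-modeling workflow the abstract describes can be sketched as follows: a black-box model is trained, its predictions relabel the data, and a small decision tree with per-class weights (a coarse stand-in for a class-dependent cost matrix) is fitted as the transparent surrogate and converted to rules. The dataset, model choices, and weights are illustrative assumptions.

```python
# Rough sketch of post-hoc surrogate rule extraction (not the CORTEX algorithm):
# the surrogate is trained on the black box's predictions, not the ground truth,
# so its rules describe the black box's decision logic.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
y_bb = black_box.predict(X)  # labels produced by the black box

surrogate = DecisionTreeClassifier(
    max_depth=3,
    class_weight={0: 1.0, 1: 2.0, 2: 2.0},  # illustrative class-dependent costs
    random_state=0,
).fit(X, y_bb)

# Each root-to-leaf path in the printed tree is one human-readable rule.
print(export_text(surrogate, feature_names=load_iris().feature_names))

fidelity = (surrogate.predict(X) == y_bb).mean()
print(f"fidelity to the black box: {fidelity:.3f}")
```

Rule-set size, average rule length, and fidelity to the black box are the kinds of quantitative measures used to compare extraction methods such as CORTEX.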
Amidst the remarkable performance of deep learning models in time series classification, there is a pressing demand for methods that unveil their prediction rationale. Existing feature importance techniques often neglect the temporal nature of time series data, focusing solely on segment importance. Addressing this gap, this paper introduces a local model-agnostic method, akin to LIME, that generates neighbouring samples by randomly perturbing segments of the original instance. Subsequently, weights are computed for each neighbouring instance based on its distance from the original, elucidating its influence. Parameterised event primitives (PEPs) are then extracted from these perturbed samples, encompassing increasing and decreasing events and local maxima and minima points. These primitives are clustered to form prototypical events that capture the temporal essence of the data. Leveraging these events, computed weights, and black box predictions, a simple linear regression model is trained to provide local explanations. Preliminary experiments on real-world datasets showcase the method's efficacy in identifying salient subsequences and points and their importance scores, thereby enhancing comprehension of produced explanations.
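A simplified sketch of the segment-perturbation idea, omitting the event-primitive extraction and clustering step: neighbours are generated by masking random segments, weighted by their distance to the original series, and a weighted linear surrogate maps segment on/off indicators to the black-box output. The segmentation, distance kernel, and toy black box below are illustrative assumptions.

```python
# LIME-style local explanation for a single time series via segment masking.
import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(series, predict_fn, n_segments=10, n_samples=200, seed=None):
    rng = np.random.default_rng(seed)
    segments = np.array_split(np.arange(len(series)), n_segments)
    baseline = series.mean()  # replacement value for perturbed segments

    # Each neighbour keeps or masks every segment according to a random binary mask.
    masks = rng.integers(0, 2, size=(n_samples, n_segments))
    neighbours = np.tile(series, (n_samples, 1))
    for i, mask in enumerate(masks):
        for j, seg in enumerate(segments):
            if mask[j] == 0:
                neighbours[i, seg] = baseline

    preds = np.array([predict_fn(x) for x in neighbours])

    # Closer neighbours get larger weights via a Gaussian kernel on distance.
    dists = np.linalg.norm(neighbours - series, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * (dists.std() + 1e-12) ** 2))

    # Weighted linear surrogate: one coefficient (importance score) per segment.
    explainer = Ridge(alpha=1.0).fit(masks, preds, sample_weight=weights)
    return explainer.coef_

# Toy black box that scores the mean of the final quarter of the series.
sig = np.sin(np.linspace(0, 6 * np.pi, 120))
scores = explain_instance(sig, lambda x: x[-30:].mean(), seed=0)
print(np.round(scores, 3))  # later segments should receive the largest scores
```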
Members (14)
Anastasia Natsiou
Oleksandr Davydko

Sanat Thukral