Fig 3: An image of a panda is predicted as a gibbon with high confidence after an adversarial perturbation is added to the input.
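Misclassifications of this kind are typically induced with gradient-based attacks. As a hedged illustration only (the exact method behind the figure is not stated here), a minimal fast gradient sign method (FGSM) sketch in PyTorch might look as follows; the model, label tensor, and epsilon value are assumed placeholders:

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    # `image` is a 1xCxHxW float tensor in [0, 1]; `label` a shape-(1,) LongTensor.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

With a small epsilon the perturbation is imperceptible to humans, yet it can flip the predicted class, which is what the panda-to-gibbon example demonstrates.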
Source publication
Explainable Artificial Intelligence (XAI) is an emerging area of research in the field of Artificial Intelligence (AI). XAI can explain how AI obtained a particular solution (e.g., classification or object detection) and can also answer other "wh" questions. This explainability is not possible in traditional AI. Explainability is essential for crit...
Similar publications
While deep neural networks have achieved remarkable success in various computer vision tasks, they often fail to generalize to new domains and subtle variations of input images. Several defenses have been proposed to improve the robustness against these variations. However, current defenses can only withstand the specific attack used in training, a...
Citations
... Explainable artificial intelligence (XAI) plays a key role in this context, since it makes it possible to clarify how AI models arrive at their results, answering key questions about the process and helping users to understand, trust, and improve decision-making. This is fundamental in sectors such as public procurement, where transparency and trust are essential (Gohel et al., 2021; Love et al., 2023). ...
This study aims to analyze the impact of ChatGPT on the teaching of the Python programming language, evaluating its effectiveness and exploring its advantages, limitations, and challenges. To this end, a systematic review of 20 recent articles addressing the implementation of ChatGPT in university contexts was conducted. The methodology included an exhaustive search of relevant databases and the application of rigorous inclusion and exclusion criteria, focusing the analysis on empirical studies and scientific reviews. The results show that ChatGPT facilitates autonomous learning and personalization in the teaching of Python, allowing students to solve complex problems and obtain immediate feedback. However, it is noted that its unsupervised use can lead to excessive dependence. In conclusion, this review underscores that ChatGPT can transform programming education in university settings if it is used ethically and in combination with traditional pedagogical methods. This analysis provides a comprehensive understanding of the role of ChatGPT in programming education, offering recommendations to maximize its benefit for students' meaningful learning and to promote its balanced and ethical use in higher education.
... Explainable artificial intelligence (XAI) plays a key role in this context, since it makes it possible to clarify how AI models arrive at their results, answering key questions about the process and helping users to understand, trust, and improve decision-making. This is fundamental in sectors such as public procurement, where transparency and trust are essential (Gohel et al., 2021; Love et al., 2023). ...
This study analyzed the digital divide in rural areas of the Baba canton, Ecuador, through the integration of technological tools in education. Its main objective was to evaluate the impact of these technologies on students' academic performance and on the development of digital competencies among teachers. Using a quantitative approach complemented by qualitative interviews, data were collected through questionnaires and academic tests. The results revealed significant improvements in students' academic performance and notable progress in teachers' adoption and use of technology. The proposal presented offered an innovative, replicable educational model that seeks to close the digital divide, guaranteeing an inclusive and equitable education that favors the comprehensive development of rural communities. In addition, sustainable strategies with the potential to expand to other regions with similar contexts were proposed, contributing to the fulfillment of the Sustainable Development Goals, especially regarding quality education and the reduction of inequalities. This research highlighted the importance of empowering teachers and students, preparing them to face the challenges of present and future society through the integration of technology in the classroom.
... This accusation is countered by computer science with the explainable-AI movement, e.g. Gohel et al. (2021); see Coglianese (2021) on public transparency requirements. ...
... Explainable AI (XAI) refers to a collection of techniques used to interpret a ML model's predictions in a human-readable format (see refs. [67,68] and references therein). Typically, these techniques come in the form of saliency maps which highlight relevant areas of the input feature space that were crucial during the prediction process. ...
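For concreteness, a minimal gradient-based saliency map for a generic PyTorch classifier might look like the sketch below; the function name and input shapes are illustrative assumptions, not code from the cited works:

import torch

def saliency_map(model, image, target_class):
    # Gradient of the target-class score w.r.t. every input pixel of a
    # 1xCxHxW image; large magnitudes mark pixels crucial to the prediction.
    model.eval()
    image = image.clone().detach().requires_grad_(True)
    model(image)[0, target_class].backward()
    # Max over colour channels yields one importance value per pixel.
    return image.grad.abs().max(dim=1)[0].squeeze(0)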
Abstract
Quantitatively connecting properties of parton distribution functions (PDFs, or parton densities) to the theoretical assumptions made within the QCD analyses which produce them has been a longstanding problem in HEP phenomenology. To confront this challenge, we introduce an ML-based explainability framework, XAI4PDF, to classify PDFs by parton flavor or underlying theoretical model using ResNet-like neural networks (NNs). By leveraging the differentiable nature of ResNet models, this approach deploys guided backpropagation to dissect relevant features of fitted PDFs, identifying x -dependent signatures of PDFs important to the ML model classifications. By applying our framework, we are able to sort PDFs according to the analysis which produced them while constructing quantitative, human-readable maps locating the x regions most affected by the internal theory assumptions going into each analysis. This technique expands the toolkit available to PDF analysis and adjacent particle phenomenology while pointing to promising generalizations.
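To illustrate the guided-backpropagation idea mentioned in the abstract, the sketch below clamps negative gradients at each ReLU during the backward pass of a generic PyTorch model; this is a minimal, assumption-laden illustration, not the XAI4PDF implementation:

import torch
import torch.nn as nn

def guided_backprop(model, x, target_class):
    # Gradient of the target score w.r.t. x, with negative signals
    # suppressed at every ReLU on the backward pass (the "guided" rule).
    def clamp_grad(module, grad_in, grad_out):
        return (torch.clamp(grad_in[0], min=0.0),)

    handles = [m.register_full_backward_hook(clamp_grad)
               for m in model.modules() if isinstance(m, nn.ReLU)]
    x = x.clone().detach().requires_grad_(True)
    model(x)[0, target_class].backward()
    for h in handles:
        h.remove()
    return x.grad

Because ResNet-like models are differentiable end to end, the same pattern applies whether the input is an image or, as in XAI4PDF, a discretized parton density.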
... The implementation of AI algorithms in learning requires developers to consider various factors, such as the sensitivity of the data used and the reliability of the algorithms (Orsoni et al., 2023). In this context, an emerging field of research known as Explainable Artificial Intelligence (XAI) has arisen, whose objective is to provide clear explanations of the decision-making processes of AI systems (Gohel et al., 2021). However, beyond understanding how these AI systems work, it is crucial to examine their application in real-world contexts and to evaluate their alignment with the intended purposes under expert supervision (Orsoni et al., 2023). ...
... Educommunicators must be able to search for, evaluate, manage, and create information critically and responsibly, using digital tools. AI can strengthen this competency by facilitating access to relevant information through intelligent recommendation systems, the analysis of large data sets to identify educational trends, and the creation of personalized educational materials adapted to students' individual needs (Gohel et al., 2021). Communication and Collaboration. ...
... DigComp promotes data protection, secure communications, and the identification of online risks as part of digital competence. AI can contribute to this competency by facilitating the detection of inappropriate content through content-filtering systems, controlling access to information through biometric authentication systems, and protecting students' digital identity through AI-based fraud and identity-theft detection systems (Gohel et al., 2021; Rahman & Watanobe, 2023). Problem Solving. ...
This study analyzes the impact of artificial intelligence (AI) on education, highlighting its benefits and challenges. Various applications of AI in the classroom are explored, from personalizing learning to automating tasks, and the ethical and practical concerns associated with its implementation are addressed. The study proposes integrating key digital skills, such as digital literacy and communication, with AI to prepare educators and students for the future. It concludes that by proactively addressing the challenges and seizing the opportunities offered by AI, education can be transformed into a more equitable and efficient process.
... The semantic origin of these parameters is not well understood and hence not justifiable to an independent authority. This fact is the basis of the research domain of explainable AI [2]. ...
The generation and execution of qualifiable, safe, and dependable AI models necessitates the definition of a transparent, complete, yet adaptable and preferably lightweight workflow. Given the rapidly progressing domain of AI research and the relative immaturity of the safe-AI domain, the process stability upon which functional-safety developments rest must be married with some degree of adaptability. This early-stage work proposes such a workflow, basing it on an extended ONNX model description. A use case provides one foundation of this body of work, which we expect to be extended by other, third-party use cases.
... The most straightforward model-agnostic approach is occlusion [19,20], where, as a perturbation, a constant-value patch is applied to a certain part of the input and the effect of the patch on the output is analyzed. We then interpret the fluctuation of the output as a measure of how important the part covered by the patch is to the classification. ...
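A minimal sketch of this occlusion procedure for a PyTorch classifier follows; the patch size, stride, and fill value are illustrative choices, not those of the cited references:

import torch

def occlusion_map(model, image, target_class, patch=16, stride=8, fill=0.0):
    # Score drop when a constant patch covers each location of a 1xCxHxW image.
    model.eval()
    with torch.no_grad():
        base = model(image)[0, target_class].item()
        _, _, h, w = image.shape
        heat = torch.zeros((h - patch) // stride + 1, (w - patch) // stride + 1)
        for i in range(heat.shape[0]):
            for j in range(heat.shape[1]):
                occluded = image.clone()
                occluded[:, :, i*stride:i*stride+patch, j*stride:j*stride+patch] = fill
                # A large score drop means the covered region mattered.
                heat[i, j] = base - model(occluded)[0, target_class].item()
    return heat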
Introduction: This work explores the use of eXplainable artificial intelligence (XAI) to analyze a convolutional neural network (CNN) trained for disruption prediction in tokamak devices and fed with inputs composed of different physical quantities.
Methods: This work focuses on a reduced dataset containing disruptions that follow patterns which are distinguishable based on their impact on the electron temperature profile. Our objective is to demonstrate that the CNN, without explicit training for these specific mechanisms, has implicitly learned to differentiate between these two disruption paths. With this purpose, two XAI algorithms have been implemented: occlusion and saliency maps.
Results: The main outcome of this paper comes from the temperature profile analysis, which evaluates whether the CNN prioritizes the outer or the inner regions of the profile.
Discussion: The result of this investigation reveals a consistent shift in the CNN’s output sensitivity depending on whether the inner or outer part of the temperature profile is perturbed, reflecting the underlying physical phenomena occurring in the plasma.
... It might also be valuable to have real individuals review the emotion labelling of virtual humans before training. Finally, employing Explainable AI (XAI) techniques [14] to comprehend model-driven emotion selection could be explored within the application context, potentially serving as an auxiliary tool for animators. Such efforts would be beneficial for developing a support tool for facial animation. ...
... Though the concept of having self-explaining AI looks interesting, it is not possible to achieve a comprehensive solution for this using traditional AI systems. The current XAI systems on the market have certain shortcomings, such as a lack of evaluation and consensus when it comes to the interpretation and logic written within the algorithms (Gohel et al., 2021; Lopes et al., 2022). ...
This study explores the use of artificial intelligence (AI) in financial markets, focusing on its true value, its limitations, and ways to build better strategies for higher returns. The study uncovered the complex relationship between AI and risk-free returns, as well as the importance of quality data and analysis. The findings highlighted areas to address, such as investment objectives, market expectations, ethical aspects, regulatory compliance, and strategic decisions. The study also explored capturing human emotions in decision-making, enhancing algorithms to predict human behavior in black swan events, mitigating the risks of sentient AI forming its own objectives, designing models based on financial market experts, the importance of human instincts, and the industry's evolution.
Learner behaviours often provide critical clues about learners' cognitive processes. However, the capacity of human intelligence to comprehend and intervene in learners' cognitive processes is often constrained by the subjective nature of human evaluation and the challenges of maintaining consistency and scalability. Recently, AI technology has been widely applied to learning analytics (LA), aiming at a more accurate, consistent and scalable understanding of learning to compensate for the challenges that human intelligence faces. However, machine intelligence has been criticized for lacking contextual understanding and for difficulties in dealing with complex human emotions and social cues. In this work, we aim to understand learners' internal cognitive processes based on the external behavioural cues of learners in a digital reading context, using a hybrid intelligence (HI) approach that bridges human and machine intelligence. Based on behavioural frameworks and the insights of human experts, we scope specific behavioural cues that are known to be relevant to learners' attention regulation, which is highly relevant to learners' cognitive processes. We utilize the public WEDAR dataset with 30 subjects' video data, behaviour annotations and pre–post tests on multiple-choice and summarization tasks. We apply the explainable AI (XAI) approach to train the machine learning model so that human evaluators can also understand which behavioural features were essential for predicting the usage of learners' cognitive processes (ie, higher-order thinking skills [HOTS] and lower-order thinking skills [LOTS]), providing insights for next-round feature engineering and intervention design. The results indicate that the dominant use of attention regulation behaviours is a reliable indicator of low use of LOTS, with 79.33% prediction accuracy, while reading speed is a valuable indicator for predicting the overall usage of HOTS and LOTS, with accuracy ranging from 60.66% to 78.66%, well above the random-guess baseline of 33.33%. Our study demonstrates how various combinations of behavioural features supported by HI can inform learners' cognitive processes accurately and interpretably, integrating human and machine intelligence.
Practitioner notes
What is already known about this topic
Human attention is a cognitive process that allows us to choose and concentrate on relevant information, which leads to successful learning.
In affective computing, certain behavioural cues (eg, attention regulation behaviours) are used to indicate learners' attentional states during learning.
What this paper adds
Attention regulation behaviours during digital reading can work as predictors of different levels of cognitive processes (ie, the utilization of higher-order thinking skills [HOTS] and lower-order thinking skills [LOTS]), leveraged by computer vision and machine learning.
By developing an explainable AI model, we can predict learners' cognitive processes, which often cannot be achieved by human observations, while understanding behavioural components that lead to such machine decisions is critical. It can provide valuable machine‐driven insights into the relationship between humans' external and internal states in learning.
Based on the frameworks spanning cognitive AI, psychology and education, expert knowledge can contribute to initial feature selection and engineering for the hybrid intelligence (HI) model development and next‐round intervention design.
Implications for practice and/or policy
Human and machine intelligence form an iterative cycle to build a HI to understand and intervene in learners' cognitive processes in digital reading, balancing each other's strengths and weaknesses in decision-making. It can eventually inform automated feedback loops in widespread e-learning, a new education norm since the COVID-19 pandemic.
Our framework also has the potential to be extended to other scenarios with digital reading, providing concrete examples of where human intelligence and machine intelligence can contribute to building a HI. It represents more systematic supports that apply to real‐life practices.
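To make the kind of interpretable pipeline described above concrete, the sketch below trains a classifier on synthetic stand-in behavioural features and uses permutation importance to surface which features drive the predictions; the feature names and data are hypothetical placeholders, not the WEDAR dataset or the authors' model:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((300, 3))       # placeholder features, e.g. blink rate, posture shifts, reading speed
y = rng.integers(0, 3, 300)    # placeholder labels, e.g. three cognitive-process classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: the accuracy drop when one feature is shuffled,
# a model-agnostic way to show evaluators which behavioural cue mattered.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["blink_rate", "posture_shifts", "reading_speed"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")

On real behavioural features, ranking importances this way is one route to the human-readable explanations the study attributes to its XAI approach.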