January 2025
Background
Anomaly detection is vital in industrial settings for identifying abnormal behaviors that indicate faults or malfunctions. Artificial intelligence (AI) offers significant potential to assist humans in addressing these challenges.

Methods
This study compares the performance of supervised and unsupervised machine learning (ML) techniques for anomaly detection. Model-specific explainability methods were employed to interpret the outputs. In addition, a novel explainability approach, MLW-XAttentIon, based on causal reasoning in attention networks, was proposed to visualize the inference process of transformer models.

Results
Experimental results showed that unsupervised models perform well without requiring labeled data, which makes them a promising option. Supervised models, in contrast, demonstrated greater robustness and reliability.

Conclusions
Unsupervised ML techniques present a feasible, resource-efficient option for anomaly detection, while supervised methods remain more reliable for critical applications. The MLW-XAttentIon approach enhances the interpretability of transformer-based models, contributing to trust and transparency in AI-driven anomaly detection systems.
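The abstract does not specify how MLW-XAttentIon aggregates attention across layers. As a point of reference, a minimal sketch of attention rollout, a widely used baseline for tracing information flow through stacked attention layers, is shown below; this illustrates the general idea of attention-based visualization only, not the paper's actual method. The function name and the 0.5 residual-mixing weight are assumptions of this sketch.

```python
import numpy as np

def attention_rollout(attentions):
    """Combine per-layer attention maps into one token-to-token map.

    attentions: list of (seq_len, seq_len) arrays, one per layer,
                each row a probability distribution over input tokens.
    Returns a (seq_len, seq_len) matrix whose rows sum to 1.
    """
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for A in attentions:
        # Mix with the identity to model the residual (skip) connection,
        # then renormalize so each row is again a distribution.
        A_res = 0.5 * A + 0.5 * np.eye(n)
        A_res = A_res / A_res.sum(axis=-1, keepdims=True)
        # Compose this layer's flow with the accumulated flow below it.
        rollout = A_res @ rollout
    return rollout
```

Visualizing a row of the resulting matrix shows how much each input token ultimately contributes to a given output position, which is the kind of inference-process view the abstract attributes to MLW-XAttentIon.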