December 2024
·
ACM Transactions on Intelligent Systems and Technology
Deep Reinforcement Learning (DRL) has demonstrated promising capability in solving complex control problems. However, the adoption of DRL in safety-critical systems is hindered by the lack of robust validation techniques to assure its performance in such applications. A key requirement of the verification process is the development of effective techniques that explain system functionality, i.e., why the system produces specific results in given circumstances. Recently, interpretation methods based on the Counterfactual (CF) explanation approach have been proposed to address the explanation problem in DRL. This paper proposes a novel CF explainer to interpret the decisions made by a black-box DRL agent. To evaluate the efficacy of the proposed explanation framework, we carried out several experiments in the domains of automated driving systems (ADSs) and the Atari Pong game. Our analysis demonstrates that the framework generates plausible and meaningful explanations for various decisions made by the underlying DRL agents. Additionally, we discuss the practical implications of our approach for various automotive stakeholders, illustrating its potential real-world impact. Source code is available at: https://github.com/Amir-Samadi/Counterfactual-Explanation.
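To make the general idea of a CF explanation for a DRL policy concrete, below is a minimal sketch of one common formulation: searching for a nearby state that flips the policy's chosen action, by minimizing a loss toward a target action plus a proximity term. This is an illustrative assumption about how such a search can be set up, not the paper's actual method; the names (`SmallPolicy`, `find_counterfactual`) and the loss weighting are hypothetical, and the authors' implementation is in the linked repository.

```python
# Hedged sketch: gradient-based counterfactual search against a toy policy.
# All class/function names and hyperparameters here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallPolicy(nn.Module):
    """Stand-in policy network mapping a state vector to action logits."""
    def __init__(self, state_dim: int = 8, n_actions: int = 4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, s):
        return self.net(s)

def find_counterfactual(policy, state, target_action,
                        steps=300, lr=0.05, dist_weight=1.0):
    """Search for a nearby state s' on which the policy picks target_action.

    Minimizes cross-entropy toward the target action plus an L2 proximity
    penalty, so the counterfactual stays close to the original state.
    The search is not guaranteed to flip the action; it is a best effort.
    """
    for p in policy.parameters():      # freeze the policy; optimize only s'
        p.requires_grad_(False)
    cf = state.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([cf], lr=lr)
    target = torch.tensor([target_action])
    for _ in range(steps):
        opt.zero_grad()
        logits = policy(cf.unsqueeze(0))
        loss = (F.cross_entropy(logits, target)
                + dist_weight * (cf - state).pow(2).sum())
        loss.backward()
        opt.step()
    return cf.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    policy = SmallPolicy()
    s = torch.randn(8)
    original = policy(s.unsqueeze(0)).argmax().item()
    target = (original + 1) % 4        # request any different action
    s_cf = find_counterfactual(policy, s, target)
    print("original action:      ", original)
    print("counterfactual action:", policy(s_cf.unsqueeze(0)).argmax().item())
    print("L2 distance:          ", (s_cf - s).norm().item())
```

The proximity term is what makes the result an explanation rather than an arbitrary adversarial input: the smaller the distance between the original and counterfactual states, the more directly the difference answers "what would have to change for the agent to act differently?"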