February 2025
The increasing complexity of machine learning models has created a need to ensure that their results are understandable and transparent, enabling trust and accountability. This work provides an extensive overview of methods for measuring the explainability and interpretability of machine learning results. It addresses the challenges posed by closed-box models, which lack transparency in their decision-making processes, and evaluates techniques for making such models more understandable. Through the application of open-box models such as symbolic regression, we demonstrate that high interpretability can be achieved without sacrificing model performance. We also highlight the need for robust, unified evaluation metrics for explainability and interpretability, evaluating complexity and fidelity scores as a comprehensive measure. A variety of closed-box models, including artificial neural networks, support vector regression, and random forest regression, were trained and evaluated in terms of performance and explainability. In addition, we trained symbolic regression models of varying complexity and evaluated them with respect to performance and interpretability. Our results underscore the importance of methodologies that balance complexity and interpretability, and we advocate further research into explainable artificial intelligence frameworks, particularly those incorporating genetic programming. This work aims to contribute to the advancement of responsible and transparent artificial intelligence systems.
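To make the fidelity and complexity measures concrete, the following is a minimal sketch of one way such scores can be computed for a symbolic-regression surrogate of a closed-box model. It assumes scikit-learn and gplearn; the synthetic data, hyperparameters, and metric definitions (fidelity as R² against the closed-box model's predictions, complexity as the node count of the evolved expression, via gplearn's internal `_program` attribute) are illustrative choices, not the paper's exact formulas.

```python
# Sketch: fidelity and complexity of a symbolic-regression surrogate.
# Assumes scikit-learn and gplearn; metric definitions are illustrative.
from gplearn.genetic import SymbolicRegressor
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=4, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Closed-box reference model (here: a random forest).
black_box = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Open-box surrogate: genetic-programming symbolic regression trained to
# mimic the closed-box model's outputs rather than the raw targets.
surrogate = SymbolicRegressor(population_size=1000, generations=20,
                              parsimony_coefficient=0.01, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how closely the surrogate reproduces the closed-box outputs
# on held-out data (R^2 against predictions, not ground truth).
fidelity = r2_score(black_box.predict(X_test), surrogate.predict(X_test))

# Complexity proxy: number of nodes in the evolved expression tree
# (_program is a gplearn internal exposing the best individual).
complexity = len(surrogate._program.program)

print(f"fidelity (R^2 vs. closed box): {fidelity:.3f}")
print(f"expression: {surrogate._program}")
print(f"complexity (tree nodes): {complexity}")
```

Raising the `parsimony_coefficient` penalizes longer expressions during evolution, which is one way to trade fidelity against complexity along the balance the abstract describes.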