Ronnie Mindlin Miller’s scientific contributions


Publications (1)


Explaining anomalies detected by autoencoders using Shapley Additive Explanations
  • Article · August 2021 · 224 Reads · 301 Citations

Expert Systems with Applications · Ronnie Mindlin Miller
Deep learning algorithms for anomaly detection, such as autoencoders, point out the outliers, saving experts the time-consuming task of examining normal cases in order to find anomalies. Most outlier detection algorithms output a score for each instance in the database, and the top-k most anomalous instances are returned to the user for further inspection; without justification or additional clues, however, manual validation of the results becomes challenging. An explanation of why an instance is anomalous enables experts to focus their investigation on the most important anomalies and may increase their trust in the algorithm. Recently, a game theory-based framework known as SHapley Additive exPlanations (SHAP) was shown to be effective in explaining various supervised learning models. In this paper, we propose a method that uses Kernel SHAP to explain anomalies detected by an autoencoder, which is an unsupervised model. The proposed explanation method aims to provide a comprehensive explanation to the experts by focusing on the connection between the features with high reconstruction error and the features that are most important in terms of their effect on the reconstruction error. We propose a black-box explanation method because it can explain any autoencoder without knowledge of the model's exact architecture. The method extracts and visually depicts both the features that contribute the most to the anomaly and those that offset it. An expert evaluation using real-world data demonstrates the usefulness of the proposed method in helping domain experts better understand the anomalies.
Our evaluation of the explanation method, in which a "perfect" autoencoder is used as the ground truth, shows that the proposed method explains anomalies correctly, using the exact features. Evaluation on real data demonstrates that (1) our explanation model, which uses SHAP, is more robust than the Local Interpretable Model-agnostic Explanations (LIME) method, and (2) the explanations our method provides are more effective at reducing the anomaly score than those of other methods.
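The core idea in the abstract — treat the autoencoder's reconstruction error as a black-box score and attribute it to input features with Shapley values — can be illustrated without the paper's actual code. The sketch below is not the authors' implementation (they use Kernel SHAP, an approximation); instead it computes *exact* Shapley values by brute-force coalition enumeration, which is feasible only for a handful of features. The "autoencoder" here is a toy stand-in that reconstructs every feature as the mean of all features, and the background instance plays the role of a typical normal example.

```python
import itertools
import math
import numpy as np

def shapley_values(score_fn, x, background):
    """Exact Shapley attributions for score_fn(x), enumerating all coalitions.
    Features absent from a coalition take their value from `background`."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for subset in itertools.combinations(others, r):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = (math.factorial(len(subset))
                     * math.factorial(n - len(subset) - 1)
                     / math.factorial(n))
                z_without = background.astype(float).copy()
                for j in subset:
                    z_without[j] = x[j]
                z_with = z_without.copy()
                z_with[i] = x[i]
                phi[i] += w * (score_fn(z_with) - score_fn(z_without))
    return phi

def anomaly_score(z):
    # Toy stand-in for an autoencoder's reconstruction error: the model
    # "reconstructs" each feature as the mean of all features, so instances
    # whose features agree score near zero and disagreeing ones score high.
    recon = np.full_like(z, z.mean())
    return float(((z - recon) ** 2).sum())

background = np.ones(4)             # a typical "normal" instance
x = np.array([1.0, 1.0, 1.0, 5.0])  # anomalous only in the last feature

phi = shapley_values(anomaly_score, x, background)
# Efficiency property: phi.sum() == score(x) - score(background),
# and the largest contribution lands on the deviating feature.
```

In practice one would replace `anomaly_score` with a trained autoencoder's per-instance reconstruction error and use `shap.KernelExplainer` with a background sample, since exact enumeration grows as 2^n in the number of features.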

Citations (1)


... The SHAP (SHapley Additive exPlanations) summary (Fig. 7) provides a valuable analysis of the importance of features and of their impact on the output of the fluid classification model for logging data. The plot is divided into three sections representing SHAP values for rock, water, and oil classification. The horizontal axis represents the SHAP values indicating the impact of each feature on the model predictions, and the vertical axis lists the features analyzed. ...

Reference:

Resilient Semi-Supervised Meta-Learning Network based on wavelet transform and K-means optimization for fluid classification
Explaining anomalies detected by autoencoders using Shapley Additive Explanations
  • Citing Article
  • August 2021

Expert Systems with Applications