Poushali Sengupta’s scientific contributions

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (1)


Figure 2 (caption, truncated): The graph reflects the behaviour of the model in equation 14 considering the feature's uncertainty to con-
Balancing Explainability-Accuracy of Complex Models
  • Preprint
  • File available

May 2023 · 76 Reads · 1 Citation

Poushali Sengupta


Explainability of AI models is an important topic that can have a significant impact across domains and applications, from autonomous driving to healthcare. Existing approaches to explainable AI (XAI) are mainly limited to simple machine learning algorithms, and research on the explainability-accuracy tradeoff is still in its infancy, especially for complex machine learning techniques such as neural networks and deep learning (DL). In this work, we introduce a new approach for complex models based on correlation impact, which considerably enhances explainability while also maintaining high accuracy. We propose approaches for both scenarios of independent features and dependent features. In addition, we study the uncertainty associated with features and output. Furthermore, we provide an upper bound on the computational complexity of our proposed approach for dependent features. The complexity bound is logarithmic in the number of observations, which yields reliable results even for high-dimensional dependent feature spaces with a small number of observations.
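The abstract does not spell out how the correlation impact is computed, so the following is only a generic illustration of correlation-based feature importance, not the preprint's actual method: it scores each feature by the absolute Pearson correlation between that feature and the model output, assuming independent features.

```python
import numpy as np

def correlation_impact(X, y):
    """Rank features by absolute Pearson correlation with the output.

    Generic illustration only: the preprint's 'correlation impact'
    method is not specified in the abstract, and this sketch assumes
    independent features.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    scores = np.array([
        abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])
    ])
    return scores / scores.sum()  # normalised importance per feature

# Toy data: the output depends almost entirely on the first feature,
# so that feature should receive the highest importance score.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)
impact = correlation_impact(X, y)
```

For dependent features this simple per-feature score would be misleading (correlated features share credit), which is presumably why the preprint treats the dependent-feature case separately.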


Citations (1)


... However, while incorporating chemistry and physics constraints has been shown to increase interpretability, there is no guarantee that these methods will improve the stability of the ML model over time (Sturm et al., 2023). Often, there is a trade-off between interpretability and ML model accuracy, especially with more complex models (Sengupta et al., 2023). ...

Reference:

Applications of Machine Learning and Artificial Intelligence in Tropospheric Ozone Research
Balancing Explainability-Accuracy of Complex Models