One Explanation Does Not Fit XIL

01 Mar 2023 (modified: 03 Nov 2024) · Submitted to Tiny Papers @ ICLR 2023
Keywords: explainable AI (XAI), explanatory interactive learning (XIL)
TL;DR: Revising ML models with explanations can only succeed if done with multiple explanation methods.
Abstract: Current machine learning models produce outstanding results in many areas but, at the same time, suffer from shortcut learning. To address such flaws, the explanatory interactive learning (XIL) framework has been proposed to revise a model by employing user feedback on the model's explanations. This work sheds light on the explanations used within this framework. In particular, we investigate simultaneous model revision through multiple explanation methods. To this end, we find that *one explanation does not fit XIL* and propose considering multiple explanation methods when revising models via XIL.
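
As a rough illustration of the kind of revision the abstract describes (not the authors' implementation), the sketch below combines a standard prediction loss with RRR-style penalties on explanation mass inside user-annotated "wrong reason" regions, summed over several explanation methods at once. All function names, the choice of explainers (input gradients and Input x Gradient), and the masking scheme are assumptions made for illustration only.

```python
# Minimal sketch of an XIL-style revision loss that uses feedback from
# multiple explanation methods simultaneously. Illustrative only; not the
# paper's code.
import torch
import torch.nn.functional as F


def input_gradient(model, x, y):
    """Saliency: gradient of the true-class log-probability w.r.t. the input."""
    x = x.clone().requires_grad_(True)
    log_probs = F.log_softmax(model(x), dim=1)
    selected = log_probs.gather(1, y.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(selected, x, create_graph=True)
    return grad


def input_x_gradient(model, x, y):
    """Input x Gradient attribution."""
    return x * input_gradient(model, x, y)


def xil_loss(model, x, y, mask, explainers, lam=1.0):
    """Cross-entropy plus a penalty on explanation values inside `mask`
    (mask = 1 marks confounded regions), summed over all explanation methods."""
    pred_loss = F.cross_entropy(model(x), y)
    feedback_loss = sum(
        (mask * explain(model, x, y) ** 2).mean() for explain in explainers
    )
    return pred_loss + lam * feedback_loss


# Usage sketch: revise a toy classifier with feedback from two explainers.
if __name__ == "__main__":
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    x = torch.rand(8, 1, 28, 28)
    y = torch.randint(0, 10, (8,))
    mask = torch.zeros_like(x)
    mask[..., :5, :] = 1.0  # pretend the top rows contain a shortcut feature
    loss = xil_loss(model, x, y, mask, [input_gradient, input_x_gradient])
    loss.backward()
    print(float(loss))
```

The point of summing the penalty over several explainers is that each explanation method can flag a different shortcut; relying on feedback through a single method may leave the model right for the wrong reasons under another.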
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/one-explanation-does-not-fit-xil/code)