Tackling Shortcut Learning in Deep Neural Networks: An Iterative Approach with Interpretable Models

ICML 2023 Workshop SCIS Submission 20 Authors

Published: 20 Jun 2023, Last Modified: 28 Jul 2023 | SCIS 2023 Poster
Keywords: Explainable AI, Posthoc Explanation, Interpretability, Shortcuts
TL;DR: Extract a mixture of interpretable models from a blackbox and provide FOL explanations to mitigate shortcut learning.
Abstract: We use concept-based interpretable models to mitigate shortcut learning, a capability existing shortcut-mitigation methods lack because they are not interpretable. Beginning with a blackbox (BB), we iteratively carve out a mixture of interpretable experts (MoIE) and a residual network. Each expert explains a subset of the data using First-Order Logic (FOL). When explaining a sample, the FOL from the MoIE derived from the biased BB detects the shortcut effectively. Finetuning the BB with Metadata Normalization (MDN) eliminates the shortcut, and the FOLs from the MoIE derived from the finetuned BB verify its elimination. Our experiments show that MoIE does not hurt the accuracy of the original BB and eliminates shortcuts effectively.
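
The sketch below illustrates one carving iteration of the kind the abstract describes, under the assumption of a concept-based setup: a learned selector routes a subset of samples to a new interpretable expert, which is distilled from the frozen blackbox's predictions on that subset; uncovered samples stay with the residual. All names here (Selector, Expert, carve_one_expert, the coverage target tau) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class Selector(nn.Module):
    """Soft gate pi(x) deciding which samples this expert covers."""
    def __init__(self, n_concepts):
        super().__init__()
        self.gate = nn.Linear(n_concepts, 1)

    def forward(self, concepts):
        return torch.sigmoid(self.gate(concepts))  # coverage probability in (0, 1)

class Expert(nn.Module):
    """Interpretable expert: a sparse linear model over concepts, whose
    weights can be read off as FOL-style rules over the concepts."""
    def __init__(self, n_concepts, n_classes):
        super().__init__()
        self.linear = nn.Linear(n_concepts, n_classes)

    def forward(self, concepts):
        return self.linear(concepts)

def carve_one_expert(bb_logits, concepts, n_classes, epochs=200, lr=1e-2, tau=0.5):
    """Jointly fit (selector, expert) so the expert mimics the blackbox on
    the samples the selector claims (coverage-weighted distillation)."""
    n_concepts = concepts.shape[1]
    selector, expert = Selector(n_concepts), Expert(n_concepts, n_classes)
    params = list(selector.parameters()) + list(expert.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    target = bb_logits.softmax(dim=1)  # soft labels from the frozen blackbox
    for _ in range(epochs):
        pi = selector(concepts)                     # (N, 1) coverage weights
        logp = expert(concepts).log_softmax(dim=1)  # expert log-probabilities
        # Distillation loss weighted by coverage, plus an L1 penalty so the
        # expert stays sparse enough for its rules to remain readable.
        distill = -(pi * (target * logp).sum(dim=1, keepdim=True)).mean()
        sparsity = 1e-3 * expert.linear.weight.abs().mean()
        # Keep average coverage near tau so the selector cannot collapse to
        # covering nothing (a SelectiveNet-style coverage constraint).
        cov_pen = (pi.mean() - tau).pow(2)
        opt.zero_grad()
        (distill + sparsity + cov_pen).backward()
        opt.step()
    return selector, expert

# Toy usage: 200 samples, 10 concepts, 3 classes, random stand-in blackbox.
concepts = torch.rand(200, 10)
bb_logits = torch.randn(200, 3)
selector, expert = carve_one_expert(bb_logits, concepts, n_classes=3)
covered = selector(concepts) > 0.5  # boolean mask: samples this expert explains
```

The MDN finetuning step the abstract mentions can be approximated by residualizing features against the shortcut metadata via least squares; this is a sketch of the idea behind Metadata Normalization, not the exact layer:

```python
def metadata_normalize(features, metadata):
    # features: (N, D) penultimate activations; metadata: (N, M) with a bias
    # column appended. Subtract the component predictable from the metadata,
    # leaving features decorrelated from the shortcut variable.
    beta = torch.linalg.lstsq(metadata, features).solution  # (M, D) coefficients
    return features - metadata @ beta
```
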
Submission Number: 20