Understanding Shortcut Learning through the Lens of Causality and Robustness

20 Jul 2022 (modified: 04 Oct 2023) · OpenReview Anonymous Preprint Blind Submission · Readers: Everyone
Keywords: causality, shortcut learning, generalizability, out-of-distribution, invariance, robustness
TL;DR: We provide an understanding of shortcut learning and present two common approaches for addressing such biases under the rubric of formal causal language.
Abstract: Despite their tremendous successes, modern machine learning models often fail to generalize to samples outside the distributions on which they were trained. Such failures have been attributed to shortcut learning [Geirhos et al., 2020], a phenomenon in which ML models fail to generalize because they rely on unintended features when establishing their decision rules. Although shortcut learning is prevalent in practice, virtually no formal or unified account of the problem, or of the approaches for addressing the resulting biases, has been presented. In this document, we provide an understanding of shortcut learning and present two common approaches for addressing these biases under the rubric of formal causal language. Finally, we relate these approaches to the causal invariance property. We hope this document will pave the way toward a unified understanding of the shortcut learning problem.
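
To make the phenomenon concrete, here is a minimal, hypothetical sketch (not taken from the paper): a logistic regression model is trained on synthetic data containing a "core" feature that is stably related to the label and a "spurious" feature whose correlation with the label is strong in training but flips out of distribution. All feature names, correlation strengths, and noise levels below are illustrative assumptions, chosen only to exhibit the in-distribution/out-of-distribution accuracy gap that shortcut learning produces.

```python
# Hypothetical illustration of shortcut learning (not the paper's method):
# a classifier exploits a spurious feature that is predictive in training
# but whose correlation with the label reverses at test time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    """Labels depend on a noisy 'core' feature; a 'spurious' feature merely co-occurs."""
    y = rng.integers(0, 2, size=n)
    core = y + 0.5 * rng.normal(size=n)  # stably informative, but noisy
    # Spurious feature agrees with the label with probability `spurious_corr`.
    spurious = np.where(rng.random(n) < spurious_corr, y, 1 - y) + 0.1 * rng.normal(size=n)
    return np.column_stack([core, spurious]), y

X_train, y_train = make_data(5000, spurious_corr=0.95)  # shortcut works in-distribution
X_test,  y_test  = make_data(5000, spurious_corr=0.05)  # correlation flips out of distribution

clf = LogisticRegression().fit(X_train, y_train)
print("train acc:", clf.score(X_train, y_train))  # high: the shortcut is highly predictive here
print("test acc: ", clf.score(X_test,  y_test))   # drops sharply: the model leaned on the spurious feature
```

In this sketch the training accuracy is high while the out-of-distribution test accuracy collapses, which is the generalization failure the abstract refers to; an intervention-invariant predictor would instead rely only on the core feature.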