Causal Feature Alignment: Learning to Ignore Spurious Background Features

Published: 01 Jan 2024, Last Modified: 13 Nov 2024 · WACV 2024 · CC BY-SA 4.0
Abstract: Deep neural networks are susceptible to spurious features that correlate strongly with the target. This leads to sub-optimal performance during real-world deployment, where the spurious correlations no longer hold, and poses challenges in safety-critical settings such as healthcare. While spurious features can correlate with causal features in myriad ways, we propose a solution for a common manifestation in computer vision where the background acts as the spurious feature. In contrast to previous works, we do not require a priori knowledge of the groups in the data induced by the presence or absence of spurious features, nor access to samples from those groups. We propose Causal Feature Alignment (CFA), a method that learns to ignore spurious background features by utilizing segmentations on a small subset of the training data. To reduce the annotation burden, we turn the pixel-wise segmentation task into a review task of selecting the best mask, using a recently released segmentation foundation model together with a feature attribution method. We demonstrate our method on a wide range of datasets, including the semi-synthetic ColoredMNIST, WaterBirds, and the ImageNet Backgrounds Challenge, and obtain significant gains over state-of-the-art methods.
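The abstract describes training with foreground segmentations so the model ignores background features. The sketch below is a minimal, hypothetical illustration of one way such an objective could look, and is not the authors' implementation: it assumes an input-gradient feature attribution and a penalty on attribution mass falling outside the foreground mask; the names `cfa_loss` and `lambda_align` are illustrative.

```python
# Hypothetical sketch of a background-suppression training objective in the
# spirit of CFA. Assumptions (not from the paper): input-gradient attribution,
# and a penalty on attribution mass outside the provided foreground mask.
import torch
import torch.nn.functional as F

def cfa_loss(model, images, labels, fg_masks=None, lambda_align=1.0):
    """Cross-entropy plus an attribution-alignment penalty on background pixels.

    images:   (B, C, H, W) input batch
    labels:   (B,) class labels
    fg_masks: (B, 1, H, W) binary foreground masks for the small annotated
              subset, or None for unannotated batches (plain cross-entropy).
    """
    images = images.clone().requires_grad_(True)
    logits = model(images)
    ce = F.cross_entropy(logits, labels)

    if fg_masks is None:
        return ce

    # Simple input-gradient attribution of the true-class score.
    true_scores = logits.gather(1, labels.unsqueeze(1)).sum()
    grads = torch.autograd.grad(true_scores, images, create_graph=True)[0]
    saliency = grads.abs().sum(dim=1, keepdim=True)  # (B, 1, H, W)

    # Penalize attribution mass that lands on background (mask == 0),
    # encouraging the model to rely only on the segmented foreground object.
    background = 1.0 - fg_masks
    align = (saliency * background).mean()

    return ce + lambda_align * align
```

In such a setup, only the small reviewed subset would carry `fg_masks` (e.g., masks selected from a segmentation foundation model's proposals); the remaining batches would fall back to plain cross-entropy.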
