Conditional Distributional Invariance through Implicit Regularization

Published: 21 Jul 2022, Last Modified: 05 May 2023
SCIS 2022 Poster
Keywords: machine learning, deep learning, causality, text classification, image classification, invariant learning, spurious correlations
TL;DR: We exploit the causal structure of the data to learn an invariant predictor through a modified ERM problem.
Abstract: A significant challenge for models trained via standard Empirical Risk Minimization (ERM) is that they may learn features of the input X that help predict the label Y on the training set but should not matter, i.e., associations that need not hold in test data. Causality lends itself well to separating such spurious correlations from genuine, causal ones. In this paper, we present a simple causal model for the data and a method for training a classifier to predict a category Y from an input X while remaining invariant to a variable Z that is spuriously associated with Y. Notably, the method is just a slightly modified ERM problem with no explicit regularization. We empirically demonstrate that our method outperforms regular ERM on standard metrics on benchmark datasets.
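The abstract does not spell out how the ERM problem is modified. As a hedged illustration only, the NumPy sketch below sets up the problem the abstract describes (a spurious attribute Z that tracks Y at training time but not at test time) and contrasts plain ERM with one generic invariance-inducing baseline, group-balanced reweighting. All names (make_data, train_logreg) are hypothetical, and the reweighting step is a stand-in, not the paper's implicit-regularization method.

```python
# Hypothetical sketch: a label Y caused by x_c, a spurious attribute z
# that agrees with Y 95% of the time in training but only 5% at test.
# Group-balanced reweighting is used as a generic baseline here; it is
# NOT the method of the paper, which modifies ERM differently.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, p_spurious):
    """Y is caused by x_c; z merely correlates with Y with prob p_spurious."""
    y = rng.integers(0, 2, n)
    x_c = y + 0.5 * rng.standard_normal(n)                       # causal feature
    agree = rng.random(n) < p_spurious
    z = np.where(agree, y, 1 - y) + 0.5 * rng.standard_normal(n)  # spurious feature
    return np.column_stack([x_c, z]), y

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-np.clip(s, -30, 30)))

def train_logreg(X, y, w=None, lr=0.1, steps=2000):
    """Weighted ERM with logistic loss; w=None recovers plain ERM."""
    w = np.ones(len(y)) if w is None else w
    w = w / w.sum()
    Xb = np.column_stack([X, np.ones(len(y))])                   # add bias column
    theta = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = sigmoid(Xb @ theta)
        theta -= lr * (Xb.T @ (w * (p - y)))                     # weighted gradient step
    return theta

def accuracy(theta, X, y):
    Xb = np.column_stack([X, np.ones(len(y))])
    return ((sigmoid(Xb @ theta) > 0.5) == y).mean()

# Z agrees with Y in training but is anti-correlated with Y at test time.
X_tr, y_tr = make_data(5000, p_spurious=0.95)
X_te, y_te = make_data(5000, p_spurious=0.05)

# Plain ERM latches onto z and degrades under the shift.
theta_erm = train_logreg(X_tr, y_tr)

# Generic fix: give each (Y, z-group) cell equal total weight, breaking
# the Y-Z association in the effective training distribution.
groups = 2 * y_tr + (X_tr[:, 1] > 0.5)
counts = np.bincount(groups, minlength=4).astype(float)
theta_bal = train_logreg(X_tr, y_tr, w=1.0 / counts[groups])

print("plain ERM    test acc:", accuracy(theta_erm, X_te, y_te))
print("balanced ERM test acc:", accuracy(theta_bal, X_te, y_te))
```

The point of the sketch is only that a predictor minimizing unweighted training risk will exploit z and pay for it under the distribution shift, whereas any procedure that makes Z uninformative about Y in the effective training distribution avoids this; the paper achieves invariance through its own modification of the ERM problem rather than explicit reweighting or regularization.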