Is Importance Weighting Incompatible with Interpolating Classifiers?

Published: 02 Dec 2021, Last Modified: 22 Oct 2023 · NeurIPS 2021 Workshop DistShift (Spotlight)
Keywords: importance weighting, distribution shift, interpolating classifiers, overparameterized networks, implicit bias of gradient descent
TL;DR: We theoretically and empirically demonstrate that importance weighting can be effective in handling distribution shifts in overparameterized classifiers.
Abstract: Importance weighting is a classic technique for handling distribution shifts. However, prior work has presented strong empirical and theoretical evidence that importance weights can have little to no effect on overparameterized neural networks. *Is importance weighting truly incompatible with the training of overparameterized neural networks?* Our paper answers this in the negative. We show that importance weighting fails not because of overparameterization, but rather because of the use of exponentially-tailed losses such as the logistic or cross-entropy loss. As a remedy, we show that polynomially-tailed losses restore the effects of importance weighting in correcting distribution shift in overparameterized models. We characterize the behavior of gradient descent on importance-weighted polynomially-tailed losses with overparameterized linear models, and theoretically demonstrate the advantage of using polynomially-tailed losses in a label shift setting. Surprisingly, our theory shows that using weights obtained by exponentiating the classical unbiased importance weights can improve performance. Finally, we demonstrate the practical value of our analysis with neural network experiments on a subpopulation shift and a label shift dataset. Our polynomially-tailed loss consistently increases the test accuracy by 2-3%.
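The abstract describes replacing the exponentially-tailed logistic/cross-entropy loss with an importance-weighted, polynomially-tailed loss. As a rough illustration only, and not the paper's exact formulation, the sketch below shows one way such a loss could look for binary margins; the function name `poly_tail_loss` and the parameters `alpha` and `beta` are illustrative assumptions, not the paper's notation.

```python
# A minimal sketch (not the paper's exact loss): an importance-weighted loss whose
# tail decays polynomially in the margin rather than exponentially, so correctly
# classified points keep contributing gradient and the importance weights retain
# their effect even when the model interpolates the training data.
import numpy as np

def poly_tail_loss(margins, weights, alpha=1.0, beta=1.0):
    """Importance-weighted loss with a polynomial tail (illustrative).

    margins: y * f(x) for labels y in {-1, +1} and model scores f(x).
    weights: per-example importance weights (optionally exponentiated,
             e.g. w ** k for some k >= 1, as the abstract suggests can help).
    For margins >= 0 the loss decays like (1 + margin) ** (-alpha) instead of
    exp(-margin); for margins < 0 it falls back to a logistic-style penalty,
    shifted so the two pieces meet continuously at margin = 0.
    """
    margins = np.asarray(margins, dtype=float)
    weights = np.asarray(weights, dtype=float)
    tail = beta / (1.0 + np.maximum(margins, 0.0)) ** alpha    # polynomial tail
    head = np.log1p(np.exp(-margins)) + (beta - np.log(2.0))   # logistic part, shifted
    loss = np.where(margins >= 0.0, tail, head)
    return float(np.mean(weights * loss))

# Example: upweighting the second (misclassified, minority-style) example
# increases its contribution to the average loss.
print(poly_tail_loss(margins=[2.0, -0.5, 3.0], weights=[1.0, 4.0, 1.0]))
```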
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2112.12986/code)