Dropout Disagreement: A Recipe for Group Robustness with Fewer Annotations

Published: 21 Oct 2022, Last Modified: 05 May 2023, NeurIPS 2022 Workshop DistShift Poster
Keywords: spurious correlations, distribution shift, group robustness
TL;DR: We propose a one-shot active learning method for improving worst-group accuracy with no group annotations and few class annotations.
Abstract: Empirical risk minimization (ERM) of neural networks can cause over-reliance on spurious correlations and poor generalization on minority groups. Deep feature reweighting (DFR) improves group robustness via last-layer retraining, but it requires full group and class annotations for the reweighting dataset. To eliminate this impractical requirement, we propose a one-shot active learning method that constructs the reweighting dataset from the points on which the ERM model's predictions disagree with and without dropout enabled. Our experiments show that our approach achieves 94% of DFR performance on the Waterbirds and CelebA datasets despite using no group annotations and up to 21$\times$ fewer class annotations.
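
The selection rule described in the abstract admits a simple implementation: compare the model's predictions with dropout disabled and with dropout enabled, and keep the points where the predicted class flips. Below is a minimal sketch, assuming a PyTorch classifier whose architecture contains `nn.Dropout` layers; the helper names (`set_dropout`, `dropout_disagreement_indices`) and the choice of a single stochastic forward pass are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

def set_dropout(model: nn.Module, active: bool) -> None:
    # Toggle only Dropout layers so BatchNorm statistics stay in eval mode.
    # (Variants such as nn.Dropout2d would need their own isinstance check.)
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train(active)

@torch.no_grad()
def dropout_disagreement_indices(model: nn.Module, loader, device="cpu"):
    """Indices of points whose predicted class flips when dropout is enabled."""
    model.eval()
    picked, offset = [], 0
    for x, *_ in loader:
        x = x.to(device)

        set_dropout(model, False)           # deterministic ERM predictions
        pred_off = model(x).argmax(dim=1)

        set_dropout(model, True)            # one stochastic pass with dropout on
        pred_on = model(x).argmax(dim=1)

        flips = (pred_off != pred_on).nonzero(as_tuple=True)[0] + offset
        picked.extend(flips.tolist())
        offset += x.size(0)

    set_dropout(model, False)
    return picked
```

Under the paper's recipe, the selected points would then be class-labeled and used as the reweighting dataset for DFR-style last-layer retraining, avoiding group annotations entirely.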