Keywords: robustness, subgroups, representation learning
TL;DR: We find that various robustness approaches learn similar representations, motivating an exploration of post-hoc adaptation approaches for improving robustness.
Abstract: A number of deep learning approaches have recently been proposed to improve model performance on subgroups that are under-represented in the training set. However, Menon et al. recently showed that models with poor subgroup performance can still learn representations that contain useful information about these subgroups. In this work, we explore the representations learned by various approaches to robust learning, finding that the different approaches learn practically identical representations. We then probe a range of post-hoc procedures for making predictions from these learned representations, showing that the distribution of the validation set used for post-hoc adaptation is the dominant factor, and that clustering-based methods may be a promising approach.
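A minimal sketch of one possible clustering-based post-hoc procedure of the kind described above, assuming a simple cluster-then-majority-vote scheme over frozen representations (the specific model, cluster count, and data here are synthetic stand-ins, not the paper's exact method):

```python
# Hypothetical sketch: cluster frozen representations with k-means, assign each
# cluster the majority label from a labeled validation set, and predict test
# points via their assigned cluster. Not the paper's exact procedure.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-ins for frozen representations produced by a trained encoder (64-d).
val_feats = rng.normal(size=(500, 64))
val_labels = rng.integers(0, 2, size=500)
test_feats = rng.normal(size=(200, 64))

# Cluster the validation-set representations.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(val_feats)

# Assign each cluster the majority label among its validation members.
cluster_to_label = {}
for c in range(kmeans.n_clusters):
    members = val_labels[kmeans.labels_ == c]
    cluster_to_label[c] = int(np.bincount(members).argmax()) if len(members) else 0

# Predict test points by the majority label of their nearest cluster.
test_clusters = kmeans.predict(test_feats)
test_preds = np.array([cluster_to_label[c] for c in test_clusters])
print(test_preds[:10])
```

Because the clusters are fit on the validation set, the composition of that set (e.g., how well it covers minority subgroups) directly shapes the resulting predictor, consistent with the abstract's point about the validation distribution.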