Overparameterisation and worst-case generalisation: friend or foe?

Sep 28, 2020 (edited Mar 22, 2021) · ICLR 2021 Poster
  • Keywords: overparameterisation, worst-case generalisation
  • Abstract: Overparameterised neural networks have demonstrated the remarkable ability to perfectly fit training samples, while still generalising to unseen test samples. However, several recent works have revealed that such models' good average performance does not always translate to good worst-case performance: in particular, they may perform poorly on subgroups that are under-represented in the training set. In this paper, we show that in certain settings, overparameterised models' performance on under-represented subgroups may be improved via post-hoc processing. Specifically, such models' bias can be restricted to their classification layers, and manifest as structured prediction shifts for rare subgroups. We detail two post-hoc correction techniques to mitigate this bias, which operate purely on the outputs of standard model training. We empirically verify that with such post-hoc correction, overparameterisation can improve average and worst-case performance.
  • One-sentence Summary: Overparameterised models' worst-subgroup performance can be improved via post-hoc processing.
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
  • Supplementary Material: zip
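The abstract states that the learned bias manifests as structured prediction shifts for rare subgroups, correctable purely from model outputs, but does not spell out the correction techniques themselves. As a purely illustrative sketch (not the paper's method), one generic post-hoc correction in this spirit subtracts scaled log-priors from a trained model's logits, so that rare classes or subgroups receive a compensating boost without any retraining; the function name, the scaling parameter `tau`, and the toy priors below are all assumptions for illustration:

```python
import numpy as np

def posthoc_logit_correction(logits, priors, tau=1.0):
    """Illustrative logit-adjustment-style correction (hypothetical helper).

    Subtracts tau * log(prior) per class from the raw logits, pushing
    predictions away from over-represented classes and toward rare ones.
    Operates purely on model outputs, i.e. no retraining required.
    """
    return logits - tau * np.log(priors)

# Toy example: 4 samples, 3 classes with a heavily skewed prior.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
priors = np.array([0.7, 0.2, 0.1])  # assumed class frequencies

corrected = posthoc_logit_correction(logits, priors)
preds = corrected.argmax(axis=1)
```

Because `-log(0.1) > -log(0.7)`, the rarest class's logits are shifted upward the most, which is the structured, subgroup-dependent kind of shift such post-hoc processing aims to counteract.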
