Systematic generalisation with group invariant predictions

Sep 28, 2020 (edited Mar 17, 2021) · ICLR 2021 Spotlight
  • Keywords: Systematic generalisation, invariance penalty, semantic anomaly detection
  • Abstract: We consider situations where the presence of dominant, simpler correlations with the target variable in a training set can cause an SGD-trained neural network to rely less on more persistently correlating complex features. When the non-persistent, simpler correlations correspond to non-semantic background factors, a neural network trained on this data can fail dramatically upon encountering systematic distributional shift, where the correlating background features are recombined with different objects. We perform an empirical study on three synthetic datasets, showing that group invariance methods across inferred partitionings of the training set can lead to significant improvements in such test-time situations. We also suggest a simple invariance penalty, showing with experiments on our setups that it can perform better than alternatives. We find that even without assuming access to any systematically shifted validation sets, one can still find improvements over an ERM-trained reference model.
  • One-sentence Summary: Invariance penalties across splits of a biased dataset can improve systematic generalisation
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
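The abstract describes adding an invariance penalty across inferred partitions (groups) of the training set to the usual ERM objective. The paper's exact penalty is not given on this page, so the sketch below is a generic illustration in that spirit: it penalises the variance of per-group average losses, so that no inferred group's risk can be driven down at the expense of another's. The names `invariance_penalty` and `total_objective`, and the variance form itself, are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch (assumption): a "risk variance" invariance
# penalty across inferred groups, added to a mean-loss ERM term.
import numpy as np


def invariance_penalty(group_losses):
    """Penalise disagreement between per-group mean losses.

    group_losses: sequence of mean losses, one per inferred
    partition of the training set. Zero when all groups agree.
    """
    losses = np.asarray(group_losses, dtype=float)
    return float(np.var(losses))


def total_objective(group_losses, lam=1.0):
    """ERM term (mean over group losses) plus a weighted penalty.

    lam controls how strongly cross-group agreement is enforced;
    lam=0 recovers plain ERM over the group-averaged losses.
    """
    losses = np.asarray(group_losses, dtype=float)
    return float(losses.mean() + lam * invariance_penalty(losses))
```

For example, two groups with identical mean loss incur zero penalty, while a gap between groups adds `lam * var` to the objective, pushing the optimiser toward features whose predictive value persists across partitions.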