In-context learning in presence of spurious correlations

Published: 18 Jun 2024, Last Modified: 19 Jul 2024
Venue: ICML 2024 Workshop ICL Poster
License: CC BY 4.0
Track: long paper (up to 8 pages)
Keywords: In-context learning, spurious correlations, transformer, out-of-distribution generalization
TL;DR: This work explores the possibility of training an in-context learner for classification tasks involving spurious features.
Abstract: Large language models exhibit a remarkable capacity for in-context learning, where they learn to solve tasks given a few examples. Recent work has shown that transformers can be trained to perform simple regression tasks in-context. This work explores the possibility of training an in-context learner for classification tasks involving spurious features. We propose a novel technique to train such a learner for a given classification task. Remarkably, this in-context learner matches and sometimes outperforms strong methods like ERM and GroupDRO. However, unlike these algorithms, it does not generalize well to other tasks. We show that it is possible to obtain an in-context learner that generalizes to unseen tasks by constructing a diverse dataset of synthetic in-context learning instances.
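The abstract does not spell out how the synthetic in-context learning instances are built, but a minimal sketch can make the setup concrete. The snippet below (NumPy) generates one synthetic classification instance in which a spurious coordinate agrees with the label for most context examples but is independent of the label for the query. The random linear labeling rule, the {-1, +1} spurious coordinate, and the `spur_corr` parameter are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def sample_icl_instance(d=8, n_context=16, spur_corr=0.95, rng=None):
    """Sample one synthetic in-context classification instance.

    Hypothetical construction: core features follow a random linear
    rule; an extra "spurious" coordinate agrees with the label with
    probability `spur_corr` in the context, but is independent of the
    label for the query point.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Random ground-truth linear rule, resampled per task for diversity.
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)

    # n_context labeled examples plus one query point.
    X = rng.standard_normal((n_context + 1, d))
    y = (X @ w > 0).astype(np.int64)

    # Spurious coordinate: matches the label in most context examples...
    agree = rng.random(n_context) < spur_corr
    spur_ctx = np.where(agree, y[:n_context], 1 - y[:n_context])
    # ...but is drawn independently of the label for the query.
    spur_query = rng.integers(0, 2, size=1)
    spur = np.concatenate([spur_ctx, spur_query]).astype(np.float64)

    # Append the spurious coordinate, mapped to {-1, +1}, as feature d+1.
    X_full = np.concatenate([X, (2 * spur - 1)[:, None]], axis=1)

    return (X_full[:n_context], y[:n_context],  # context examples
            X_full[-1], y[-1])                  # query example

# Sampling many such tasks (a fresh w each time) yields the kind of
# diverse synthetic ICL training distribution the abstract alludes to.
ctx_x, ctx_y, q_x, q_y = sample_icl_instance()
print(ctx_x.shape, ctx_y.shape, q_x.shape, q_y)
```

Sequences of (example, label) pairs from such instances, followed by the query, would then be fed to a transformer trained to predict the query label, so that relying on the spurious coordinate is penalized at the query position.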
Submission Number: 16