Bilevel Optimization to Learn Training Distributions for Language Modeling under Domain Shift

Published: 28 Oct 2023, Last Modified: 02 Apr 2024
Venue: DistShift 2023 Poster
Keywords: domain adaptation, language modeling, bilevel optimization, online training
TL;DR: We propose a scalable online bilevel optimization algorithm for training language models under domain shift.
Abstract: Language models trained on very large web corpora have become a central component of modern language processing. In this paradigm, the large, heterogeneous training set rarely matches the distribution of the application domain. This work considers modifying the training distribution in the case where one can observe a small sample of data reflecting the test conditions. We propose an algorithm based on a recent formulation of this problem as an online bilevel optimization problem. We show that this approach compares favorably with alternative strategies from the domain adaptation literature. [Extended version available at arXiv:2311.11973]
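
To make the setting concrete, below is a minimal sketch of online bilevel reweighting on a toy regression problem: an inner step updates the model on a weighted training loss over several domains, and an outer step updates the domain weights by differentiating the loss on the small target sample through a one-step unrolled inner update. All names (`w_star`, `lr_in`, `lr_out`, the synthetic data) are illustrative assumptions, and this first-order scheme is only a generic stand-in; see arXiv:2311.11973 for the paper's actual algorithm and estimator.

```python
import torch

torch.manual_seed(0)
K, d, lr_in, lr_out = 3, 10, 0.1, 0.5

# Synthetic training domains: only domain 0 matches the target distribution,
# the others carry heavy label noise.
w_star = torch.randn(d)
domains = []
for k in range(K):
    x = torch.randn(128, d)
    noise = 0.0 if k == 0 else 3.0
    domains.append((x, x @ w_star + noise * torch.randn(128)))

# Small sample reflecting the test conditions.
x_tgt = torch.randn(32, d)
y_tgt = x_tgt @ w_star

theta = torch.zeros(d)                          # model parameters
log_alpha = torch.zeros(K, requires_grad=True)  # domain log-weights

def mse(x, y, w):
    return ((x @ w - y) ** 2).mean()

for step in range(300):
    alpha = torch.softmax(log_alpha, dim=0)
    theta_req = theta.detach().requires_grad_(True)

    # Inner step: gradient step on the alpha-weighted training loss, kept
    # differentiable so the outer gradient can flow through it.
    train_loss = sum(a * mse(x, y, theta_req) for a, (x, y) in zip(alpha, domains))
    g = torch.autograd.grad(train_loss, theta_req, create_graph=True)[0]
    theta_new = theta_req - lr_in * g

    # Outer step: gradient of the target-sample loss w.r.t. the log-weights,
    # through the one-step unrolled inner update.
    val_loss = mse(x_tgt, y_tgt, theta_new)
    grad_alpha = torch.autograd.grad(val_loss, log_alpha)[0]
    with torch.no_grad():
        log_alpha -= lr_out * grad_alpha

    theta = theta_new.detach()                  # commit the inner update

print("learned domain weights:", torch.softmax(log_alpha, dim=0))
```

Run as written, the weights concentrate on the clean domain 0, which is the intended behavior of learning the training distribution from a small target sample; the paper's contribution is making such a scheme scalable and online for language-model training.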
Submission Number: 3