LMPriors: Pre-Trained Language Models as Task-Specific Priors

05 Oct 2022 (modified: 22 Oct 2023) · FMDM@NeurIPS 2022
Keywords: learning general-purpose priors, language models, common sense reasoning
TL;DR: We leverage pre-trained language models to construct task-specific priors for downstream machine learning models.
Abstract: Particularly in low-data regimes, an outstanding challenge in machine learning is developing principled techniques for augmenting our models with suitable priors that encourage them to learn in ways compatible with our understanding of the world. In contrast to generic priors such as shrinkage or sparsity, we draw inspiration from the recent successes of large-scale language models (LMs) to construct \emph{task-specific priors} distilled from the rich knowledge of LMs. Our method, Language Model Priors (LMPriors), incorporates auxiliary natural language metadata about the task---such as variable names and descriptions---to encourage downstream model outputs to be consistent with the LM's common-sense reasoning over that metadata. Empirically, we demonstrate that LMPriors improve model performance in settings where such natural language descriptions are available, performing well on several tasks that benefit from prior knowledge, including feature selection, causal inference, and safe reinforcement learning.
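
To make the idea concrete, here is a minimal sketch of how an LM-derived prior could be combined with data-driven scores for feature selection, one of the tasks the abstract mentions. The prompt wording, the `token_logprob` interface, and the additive combination rule are illustrative assumptions, not the paper's exact templates or scoring scheme; any LM backend that exposes token log-probabilities could be plugged in.

```python
from typing import Callable, Dict, List

def lmprior_feature_scores(
    features: Dict[str, str],                     # feature name -> natural-language description
    target: str,                                  # description of the prediction target
    token_logprob: Callable[[str, str], float],   # assumed interface: (prompt, token) -> log p(token | prompt)
) -> Dict[str, float]:
    """Score each feature's relevance to the target with an LM-derived prior.

    For every feature we build a short prompt from its metadata and compare the
    LM's log-probabilities of answering "Y" versus "N". The difference acts as a
    task-specific prior weight: positive values mean the LM's common-sense
    reasoning favors keeping the feature.
    """
    scores = {}
    for name, description in features.items():
        prompt = (
            f"Variable: {name} ({description})\n"
            f"Target: {target}\n"
            "Is this variable useful for predicting the target? Answer Y or N:"
        )
        scores[name] = token_logprob(prompt, " Y") - token_logprob(prompt, " N")
    return scores


def select_features(
    lm_scores: Dict[str, float],
    data_scores: Dict[str, float],   # e.g. mutual information estimated from data
    weight: float = 1.0,             # how strongly to trust the LM prior (hypothetical knob)
    k: int = 5,
) -> List[str]:
    """Combine the LM prior with data-driven scores and keep the top-k features."""
    combined = {
        name: data_scores.get(name, 0.0) + weight * lm_scores[name]
        for name in lm_scores
    }
    return sorted(combined, key=combined.get, reverse=True)[:k]
```

In this sketch the LM never sees the raw data; it only reasons over the textual metadata, while the data-driven score (here a generic relevance measure such as mutual information) carries the empirical signal. The `weight` parameter controls how much the LM's prior influences the final selection.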
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/arxiv:2210.12530/code)