An Invariant Learning Characterization of Controlled Text Generation

05 Oct 2022 (modified: 14 Nov 2022) · NeurIPS 2022 Workshop DistShift Poster
Keywords: controlled generation, invariance
TL;DR: Inspired by causal representation learning, we cast controlled generation under distribution shift as an invariant learning problem.
Abstract: Controlled generation refers to the problem of creating text that contains stylistic or semantic attributes of interest. Many approaches reduce this problem to building a predictor of the desired attribute. For example, researchers hoping to deploy a large language model to produce non-toxic content may use a toxicity classifier to filter generated text. In this paper, we show that the performance of controlled generation may be poor if the target distribution of text differs from the distribution the predictor was trained on. Instead, we take inspiration from causal representation learning and cast controlled generation under distribution shift as an invariant learning problem: the most effective predictor should be invariant across multiple text environments. Experiments demonstrate the promise and difficulty of adapting invariant learning methods, which have been primarily developed for vision, to text.
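The filtering approach mentioned in the abstract can be illustrated with a minimal sketch. The `toxicity_score` heuristic below is a hypothetical stand-in for a learned toxicity classifier (the paper's point is precisely that such a classifier may fail when the text distribution shifts); `filter_generations` and the threshold are illustrative names, not from the paper.

```python
# Sketch of classifier-based filtering for controlled generation:
# score candidate generations with a toxicity classifier and keep
# only the text that scores below a threshold.

def toxicity_score(text: str) -> float:
    """Hypothetical stand-in for a learned toxicity classifier.

    Returns the fraction of words drawn from a small "toxic" keyword
    list; a real system would use a trained model instead.
    """
    bad_words = {"awful", "hate"}
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in bad_words for w in words) / len(words)

def filter_generations(candidates, score_fn=toxicity_score, threshold=0.2):
    """Keep only candidates the classifier deems sufficiently non-toxic."""
    return [c for c in candidates if score_fn(c) < threshold]

samples = ["have a nice day", "I hate this awful thing"]
print(filter_generations(samples))  # -> ['have a nice day']
```

If the classifier's training distribution differs from the distribution of generated text, its scores become unreliable and this filter degrades — which is the failure mode the invariant-learning formulation is meant to address.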