A Distributional Approach to Controlled Text Generation

Published: 12 Jan 2021, Last Modified: 22 Oct 2023 · ICLR 2021 Oral · Readers: Everyone
Keywords: Controlled NLG, Pretrained Language Models, Bias in Language Models, Energy-Based Models, Information Geometry, Exponential Families
Abstract: We propose a Distributional Approach for addressing Controlled Text Generation from pre-trained Language Models (LM). This approach makes it possible to specify, in a single formal framework, both “pointwise” and “distributional” constraints over the target LM (to our knowledge, the first model with such generality), while minimizing KL divergence from the initial LM distribution. The optimal target distribution is then uniquely determined as an explicit EBM (Energy-Based Model) representation. From that optimal representation, we then train a target controlled Autoregressive LM through an adaptive distributional variant of Policy Gradient. We conduct a first set of experiments over pointwise constraints, showing the advantages of our approach over a set of baselines in obtaining a controlled LM that balances constraint satisfaction against divergence from the pretrained LM. We then perform experiments over distributional constraints, a unique feature of our approach, demonstrating its potential as a remedy to the problem of Bias in Language Models. Through an ablation study, we show the effectiveness of our adaptive technique for obtaining faster convergence. Code available at https://github.com/naver/gdc
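For concreteness, the constrained objective the abstract alludes to can be written out as follows. This is a standard exponential-family (I-projection) formulation consistent with the abstract rather than an excerpt from the paper; here a denotes the pretrained LM, the φ_i are constraint features, the μ̄_i the desired moments (pointwise constraints being the special case μ̄_i = 1 for a binary feature), and λ the resulting multipliers:

```latex
\begin{aligned}
p^{*} &= \operatorname*{arg\,min}_{p}\; \mathrm{KL}\!\left(p \,\|\, a\right)
  \quad\text{s.t.}\quad \mathbb{E}_{x\sim p}\,\phi_i(x) = \bar{\mu}_i,\; i = 1,\dots,k,\\[4pt]
p^{*}(x) &= \frac{1}{Z}\, a(x)\, e^{\langle \lambda,\, \phi(x)\rangle},
  \qquad Z = \sum_{x} a(x)\, e^{\langle \lambda,\, \phi(x)\rangle}.
\end{aligned}
```

The second line is the explicit EBM representation mentioned in the abstract: the target is the pretrained LM reweighted by an exponential term in the constraint features.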
One-sentence Summary: We propose a novel approach to Controlled NLG, relying on Constraints over Distributions, Information Geometry, and Sampling from Energy-Based Models.
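As a rough illustration of the “adaptive distributional variant of Policy Gradient” mentioned above, here is a minimal toy sketch. It is not the paper's code: the single-token vocabulary, base distribution `a`, feature `phi`, and multiplier `lam` are invented for illustration, and the actual method trains an autoregressive LM on full sequences with λ fitted to the target moments. The sketch only shows the core loop: sample from a proposal q, weight samples by the unnormalized EBM ratio P(x)/q(x), maximize the weighted log-likelihood of the policy, and periodically refresh q from the current policy.

```python
# Toy sketch of an adaptive Distributional Policy Gradient update (illustrative only).
import torch

torch.manual_seed(0)
V = 10                                      # hypothetical toy vocabulary size
a = torch.softmax(torch.randn(V), dim=0)    # base "pretrained LM" a(x)
phi = (torch.arange(V) < 3).float()         # feature phi(x): 1 for the first 3 symbols
lam = 2.0                                   # multiplier lambda (assumed given here)

# Unnormalized EBM: P(x) = a(x) * exp(lambda * phi(x)); Z is never computed.
P = a * torch.exp(lam * phi)

theta = torch.zeros(V, requires_grad=True)      # policy logits for pi_theta
opt = torch.optim.Adam([theta], lr=0.05)
q = torch.softmax(theta.detach(), dim=0)        # proposal, refreshed adaptively

for step in range(2000):
    pi = torch.softmax(theta, dim=0)
    x = torch.multinomial(q, 32, replacement=True)  # sample a batch from q
    w = (P[x] / q[x]).detach()                      # importance weights P(x)/q(x)
    loss = -(w * torch.log(pi[x])).mean()           # weighted log-likelihood (up to Z)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if (step + 1) % 100 == 0:                       # adaptive step: move q toward pi
        q = torch.softmax(theta.detach(), dim=0)

print("target  p(x):", P / P.sum())
print("learned pi(x):", torch.softmax(theta, dim=0).detach())
```

In this toy setting the learned policy converges to the normalized target P(x)/Z, which is the point of the DPG-style update: the policy approximates the EBM without ever computing the partition function.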
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Code: [naver/gdc](https://github.com/naver/gdc)
Data: [WikiBio](https://paperswithcode.com/dataset/wikibio)
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:2012.11635/code)