MacLaSa: Multi-Aspect Controllable Text Generation via Efficient Sampling from Compact Latent Space

Published: 07 Oct 2023, Last Modified: 01 Dec 2023. EMNLP 2023 Findings.
Submission Type: Regular Long Paper
Submission Track: Natural Language Generation
Keywords: Natural Language Generation, Controllable Text Generation, Pretrained Language Models
Abstract: Multi-aspect controllable text generation aims to generate fluent sentences that possess multiple desired attributes simultaneously. Traditional methods either require expensive iteration or search within the discrete text space during decoding, or train separate controllers for each aspect, which degrades text quality due to the discrepancy between different aspects. To address these limitations, we introduce a novel approach for $\textbf{M}$ulti-$\textbf{a}$spect $\textbf{c}$ontrol, namely MacLaSa, which estimates a compact $\textbf{La}$tent space for multiple aspects and performs efficient $\textbf{Sa}$mpling with a fast sampler. To eliminate the domain discrepancies between different aspects, we first use a variational autoencoder (VAE) network to map text sequences from various data sources into close latent representations. The estimated latent space enables us to formulate a joint energy-based model and plug in arbitrary attribute discriminators to achieve multi-aspect control. Afterwards, we draw latent samples with a fast sampler based on ordinary differential equations and feed the sampled latents to the VAE decoder to produce the target text sequences. Experimental results demonstrate that MacLaSa outperforms strong baselines in both attribute relevance and textual quality while maintaining high inference speed.
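The pipeline the abstract describes (per-aspect discriminators defining a joint energy over a shared latent space, with an ODE-style sampler driving latents toward low-energy regions before decoding) can be sketched in miniature. This is a toy illustration under stated assumptions, not the paper's released code: the quadratic energies, the function names, and the Euler integrator are all illustrative stand-ins for the real discriminators and ODE solver.

```python
import numpy as np

# Toy sketch: each "attribute discriminator" contributes an energy term
# over a shared latent space; the joint EBM is their sum, and we draw a
# sample by deterministically integrating dz/dt = -grad E(z).
# (All names here are hypothetical, not from the MacLaSa codebase.)

def attribute_energy(z, center):
    # Quadratic stand-in for -log p(attribute | z) from one discriminator.
    return 0.5 * np.sum((z - center) ** 2)

def joint_energy(z, centers):
    # Joint EBM over the same latent z: sum of per-aspect energies.
    return sum(attribute_energy(z, c) for c in centers)

def grad_joint_energy(z, centers):
    # Analytic gradient of the toy quadratic energy.
    return sum(z - c for c in centers)

def ode_sample(z0, centers, steps=200, dt=0.01):
    # Euler integration of the gradient-flow ODE toward low-energy
    # (i.e. multi-attribute-satisfying) regions of the latent space.
    z = z0.copy()
    for _ in range(steps):
        z -= dt * grad_joint_energy(z, centers)
    return z

rng = np.random.default_rng(0)
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # two aspects
z0 = rng.normal(size=2)   # stand-in for a draw from the VAE prior
z = ode_sample(z0, centers)
# In the full method, z would now be fed to the VAE decoder to
# produce a sentence carrying both target attributes.
```

Because the toy energies are quadratic, the flow converges to their common minimizer (the midpoint of the two attribute centers), mirroring how the joint EBM steers samples into the intersection of the per-aspect regions.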
Submission Number: 223