Learning Disentangled Semantic Spaces of Explanations via Invertible Neural Networks

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
Abstract: Most previous work on controlled text generation has concentrated on the style transfer task: modifying sentences with respect to markers of sentiment, formality, or affirmation/negation. Disentanglement of generative factors over Variational Autoencoder (VAE) spaces has been a key mechanism for delivering this type of style transfer control. In this work, we focus on a more general form of controlled text generation, targeting the modification and control of broader semantic features. To achieve this, we introduce a flow-based invertible neural network (INN) mechanism plugged into an Optimus-based autoencoder architecture to deliver better separability properties. Experimental results demonstrate that the model can reshape the distributed latent space into a better semantically disentangled space, resulting in a more general form of language interpretability and control when compared to recent state-of-the-art language VAE models (i.e., Optimus).
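The abstract describes plugging a flow-based INN into the latent space of an Optimus-based autoencoder. Below is a minimal, hypothetical sketch of what such a latent-space flow could look like, using RealNVP-style affine coupling blocks in PyTorch. The coupling design, latent dimensionality, and the names `AffineCoupling` and `LatentINN` are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of a flow-based invertible network (INN) over a VAE latent space.
# RealNVP-style affine coupling is used as a stand-in; how the flow attaches
# to Optimus specifically is an assumption here, not the paper's method.
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """One invertible coupling block: half the latent dimensions are
    affinely transformed, conditioned on the other (unchanged) half."""

    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.half = dim // 2
        # Small MLP predicting per-dimension scale (s) and shift (t).
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        z1, z2 = z[:, : self.half], z[:, self.half :]
        s, t = self.net(z1).chunk(2, dim=-1)
        z2 = z2 * torch.exp(torch.tanh(s)) + t  # invertible by construction
        return torch.cat([z1, z2], dim=-1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y[:, : self.half], y[:, self.half :]
        s, t = self.net(y1).chunk(2, dim=-1)  # y1 == z1, so s, t are recoverable
        y2 = (y2 - t) * torch.exp(-torch.tanh(s))
        return torch.cat([y1, y2], dim=-1)


class LatentINN(nn.Module):
    """Stack of coupling blocks with fixed permutations in between, mapping
    the VAE latent z to a (hopefully disentangled) space y, and back."""

    def __init__(self, dim: int, n_blocks: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList(AffineCoupling(dim) for _ in range(n_blocks))
        # Fixed random permutations ensure all dimensions get mixed.
        self.perms = [torch.randperm(dim) for _ in range(n_blocks)]

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        for blk, p in zip(self.blocks, self.perms):
            z = blk(z[:, p])
        return z

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        for blk, p in reversed(list(zip(self.blocks, self.perms))):
            y = blk.inverse(y)
            inv = torch.empty_like(p)
            inv[p] = torch.arange(len(p))  # invert the permutation
            y = y[:, inv]
        return y


# Usage sketch: encode text with the VAE, map z -> y, edit a semantic
# coordinate in y, then invert back to z and decode.
inn = LatentINN(dim=32)
z = torch.randn(8, 32)      # stand-in for Optimus latent codes
y = inn(z)                  # coordinates in the flow-transformed space
z_back = inn.inverse(y)     # exact reconstruction up to float error
print(torch.allclose(z, z_back, atol=1e-5))
```

Because every block is invertible in closed form, edits made in the transformed space map deterministically back to valid VAE latent codes, which is what makes this family of architectures attractive for controlled generation.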
Paper Type: long
Research Area: Semantics: Sentence-level Semantics, Textual Inference and Other areas
Contribution Types: NLP engineering experiment
Languages Studied: English