Abstract: In this paper, we present a method for adversarial decomposition of text representation. The method decomposes the representation of an input sentence into several independent vectors, each responsible for a specific aspect of the sentence. We evaluate the proposed method on two case studies: conversion between different social registers and diachronic language change. We show that the proposed method enables fine-grained controlled change of these aspects of the input sentence. For example, our model learns a continuous (rather than categorical) representation of the style of the sentence, in line with the reality of language use. The model uses adversarial-motivational training and includes a special motivational loss, which acts opposite to the discriminator and encourages a better decomposition. Finally, we evaluate the obtained meaning embeddings on a downstream task of paraphrase detection and show that they are significantly better than the embeddings of a regular autoencoder.
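To make the training setup concrete, below is a minimal sketch (not the authors' released code) of the adversarial-motivational idea described in the abstract: an encoder splits a sentence representation into a "meaning" vector and a "form" vector, a discriminator tries to predict the form label (e.g. register) from the meaning vector, and a motivational classifier rewards the form vector for carrying that information instead. All module names, dimensions, and the two-class form label are illustrative assumptions; the reconstruction/decoding loss is omitted for brevity.

```python
# Hypothetical sketch of meaning/form decomposition with adversarial-motivational losses.
import torch
import torch.nn as nn

class DecomposingEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, meaning_dim=192, form_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.to_meaning = nn.Linear(hid_dim, meaning_dim)
        self.to_form = nn.Linear(hid_dim, form_dim)

    def forward(self, tokens):
        _, h = self.rnn(self.embed(tokens))          # h: (1, batch, hid_dim)
        h = h.squeeze(0)
        return self.to_meaning(h), self.to_form(h)   # meaning vector, form vector

# Adversary: tries to recover the form label (e.g. register) from the meaning vector.
discriminator = nn.Sequential(nn.Linear(192, 64), nn.ReLU(), nn.Linear(64, 2))
# Motivational classifier: predicts the form label from the form vector.
motivator = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))

encoder = DecomposingEncoder(vocab_size=10_000)
ce = nn.CrossEntropyLoss()

tokens = torch.randint(0, 10_000, (8, 20))   # toy batch of token ids
form_labels = torch.randint(0, 2, (8,))      # toy register labels

meaning, form = encoder(tokens)
adv_loss = ce(discriminator(meaning), form_labels)   # discriminator's objective
mot_loss = ce(motivator(form), form_labels)          # motivational objective

# Encoder update (reconstruction loss omitted): push form information out of the
# meaning vector (maximize adv_loss) and into the form vector (minimize mot_loss).
encoder_loss = -adv_loss + mot_loss
```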
Keywords: representation learning, decomposition, adversarial training, style transfer
TL;DR: A method which learns separate representations for the meaning and the form of a sentence
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/adversarial-decomposition-of-text/code)