Text Generation from Triple via Generative Adversarial Nets

Published: 01 Jan 2019, Last Modified: 12 May 2023 · ChineseCSCW 2019
Abstract: Text generation plays an influential role in NLP (Natural Language Processing), but the task remains challenging. In this paper, we focus on generating text from a triple (entity, relation, entity), and we propose a new sequence-to-sequence model trained via GAN (Generative Adversarial Networks) rather than MLE (Maximum Likelihood Estimation) to avoid exposure bias. In this model, the generator is a Transformer and the discriminator is a Transformer-based binary classifier, both of which use an encoder-decoder structure. In the generator, the encoder's input sequence is a triple, and the decoder then generates the sentence token by token. The discriminator's input consists of a triple and its corresponding sentence, and its output is the probability that the pair is a real sample. In our experiments, we use several metrics, including BLEU score, ROUGE-L, and perplexity, to evaluate the similarity, sufficiency, and fluency of the text generated by three models on the test set. The experimental results show that our model achieves the best performance.
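The architecture described in the abstract can be sketched as follows. This is a minimal, illustrative PyTorch sketch, not the authors' implementation: all class names, layer counts, and dimensions are assumptions chosen for brevity. It shows a Transformer generator that encodes a three-token triple and decodes a sentence, and a Transformer encoder-decoder discriminator that scores a (triple, sentence) pair as real or fake.

```python
import torch
import torch.nn as nn

# Illustrative constants (assumptions, not from the paper).
VOCAB, D_MODEL = 1000, 64

class Generator(nn.Module):
    """Transformer encoder-decoder: encodes an (entity, relation, entity)
    triple and decodes the corresponding sentence token by token."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True)
        self.out = nn.Linear(D_MODEL, VOCAB)

    def forward(self, triple_ids, sent_ids):
        src = self.embed(triple_ids)       # (B, 3, D): the triple
        tgt = self.embed(sent_ids)         # (B, T, D): sentence so far
        h = self.transformer(src, tgt)
        return self.out(h)                 # next-token logits, (B, T, VOCAB)

class Discriminator(nn.Module):
    """Transformer-based binary classifier: encodes the triple, attends to
    the candidate sentence, and outputs the probability the pair is real."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True)
        self.cls = nn.Linear(D_MODEL, 1)

    def forward(self, triple_ids, sent_ids):
        h = self.transformer(self.embed(triple_ids), self.embed(sent_ids))
        return torch.sigmoid(self.cls(h.mean(dim=1)))  # P(real), (B, 1)

# Toy forward pass with random token ids.
triple = torch.randint(0, VOCAB, (2, 3))   # batch of 2 triples
sent = torch.randint(0, VOCAB, (2, 10))    # 2 sentences of length 10
logits = Generator()(triple, sent)
p_real = Discriminator()(triple, sent)
print(logits.shape, p_real.shape)  # torch.Size([2, 10, 1000]) torch.Size([2, 1])
```

In adversarial training, the discriminator's probability would serve as the reward signal for the generator (typically via policy gradient, since sampled tokens are discrete), which is how GAN training sidesteps the exposure bias of MLE teacher forcing.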