Adversarial text generation with context adapted global knowledge and a self-attentive discriminator
Highlights
• A word sequence-based adversarial network that exploits the semantics of the corpus by adapting global word embeddings to the context of analysis.
• A self-attentive discriminator that maps the semantics of the generated text to that of real-world text.
• An evaluation framework based on quantitative and qualitative analyses.
• A word sequence-based adversarial network that balances the generator and discriminator towards reaching the Nash equilibrium.

Abstract: Text generation is a challenging task for intelligent agents. Numerous studies have investigated the use of adversarial networks with word sequence-based generators. However, these approaches suffer from an imbalance between generator and discriminator that causes overfitting: the discriminator becomes too precise at distinguishing what the generator produces from what comes from the real dataset. In this paper, we investigate how to balance the generator and discriminator of a sequence-based adversarial text network by exploiting: i) global knowledge in the input of the adversarial network, encoded by global word embeddings that are adapted to the context of the datasets in which they are used, and ii) a self-attentive discriminator that minimizes its loss function slowly and thus gives the generator valuable feedback throughout training. Through an extensive evaluation on three datasets of short-, medium-, and long-length text documents, results computed with word-overlap metrics show that our model outperforms four baselines. We also discuss the results of our model in terms of readability metrics and the human-perceived quality of the generated documents.
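The abstract does not include code, and the paper's exact adaptation scheme is not reproduced here. As a purely illustrative sketch of the general idea of adapting global word embeddings to a corpus, one simple (hypothetical) approach blends each word's pretrained global vector with a corpus-trained local vector, weighting the local vector by the word's in-domain frequency; the function name, the blending rule, and the constant `k` are all assumptions, not the authors' method.

```python
import numpy as np

def adapt_embedding(global_vec, local_vec, count, k=10.0):
    """Convex blend of global and local vectors (illustrative only).

    Words that are frequent in the target corpus lean on the local,
    corpus-trained vector; rare words keep the pretrained global one.
    """
    alpha = count / (count + k)              # in [0, 1); grows with frequency
    return alpha * local_vec + (1.0 - alpha) * global_vec

# Toy 2-D vectors so the effect of the blend is easy to see.
g = np.array([1.0, 0.0])                     # pretrained "global" vector
l = np.array([0.0, 1.0])                     # corpus-trained "local" vector
rare = adapt_embedding(g, l, count=1)        # stays close to the global vector
freq = adapt_embedding(g, l, count=90)       # moves toward the local vector
```

With `k=10`, a word seen once keeps about 91% of its global vector, while a word seen 90 times draws 90% of its representation from the corpus-specific vector.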
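Likewise, the self-attentive discriminator described above can be illustrated with a minimal NumPy sketch: a single attention head pools a sequence of word embeddings into one vector, which a logistic layer scores as real vs. generated. All names, dimensions, and the single-head design are hypothetical simplifications, not the paper's architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def self_attentive_score(H, v, w, b):
    """Score a word-embedding sequence H (T x d) as real vs. generated.

    Illustrative only: attention weights over time steps pool H into a
    single sentence vector, and a logistic layer outputs P(real).
    """
    a = softmax(H @ v)                       # (T,) attention over positions
    c = a @ H                                # (d,) attention-pooled vector
    p = 1.0 / (1.0 + np.exp(-(w @ c + b)))   # scalar probability "real"
    return p, a

rng = np.random.default_rng(0)
T, d = 5, 8                                  # sequence length, embedding size
H = rng.normal(size=(T, d))                  # stand-in for adapted embeddings
v = rng.normal(size=d)                       # attention query vector
w = rng.normal(size=d)                       # classifier weights
p, a = self_attentive_score(H, v, w, 0.0)
```

Because the pooled representation is a soft mixture over all positions, the gradient reaching the generator spreads across the whole sequence rather than concentrating on a single token, which is one intuition for why such a discriminator can give smoother feedback.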