Abstract: Inspired by the success of the self-attention mechanism and the Transformer architecture in sequence transduction and image generation, we propose novel self-attention-based architectures to improve the performance of adversarial latent code-based schemes for text generation. Adversarial latent code-based text generation has recently attracted considerable attention due to its promising results. In this paper, we take a step toward strengthening the architectures used in these setups, specifically AAE and ARAE, and benchmark our models against these two adversarial latent code-based methods. In our experiments, we use the Google sentence compression dataset to compare our method with these baselines on various objective and subjective measures. The experiments demonstrate that the proposed self-attention-based models outperform the state of the art in adversarial code-based text generation.
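
The abstract describes replacing parts of the AAE/ARAE adversarial latent-code pipeline with self-attention blocks. The sketch below illustrates the general idea of a self-attention-based generator that maps noise to latent codes for an adversarially trained decoder; the module names, dimensions, and pooling choices are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (assumed details): a self-attentive generator over latent codes,
# as might be used in place of an MLP/recurrent generator in an AAE/ARAE-style setup.
import torch
import torch.nn as nn

class SelfAttentiveLatentGenerator(nn.Module):
    def __init__(self, noise_dim=100, code_dim=128, seq_len=16, n_heads=4):
        super().__init__()
        # Project noise into a short "sequence" of latent tokens.
        self.proj = nn.Linear(noise_dim, seq_len * code_dim)
        self.seq_len, self.code_dim = seq_len, code_dim
        # Standard multi-head self-attention block (PyTorch built-in).
        self.attn = nn.MultiheadAttention(code_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(code_dim)
        self.out = nn.Linear(code_dim, code_dim)

    def forward(self, z):
        x = self.proj(z).view(-1, self.seq_len, self.code_dim)
        attn_out, _ = self.attn(x, x, x)      # self-attention over latent tokens
        x = self.norm(x + attn_out)           # residual connection + layer norm
        return self.out(x.mean(dim=1))        # pooled latent code for the decoder/critic

# Usage: map Gaussian noise to latent codes; in an AAE/ARAE-style setup these codes
# would be scored by a critic and decoded into text by a pretrained decoder.
g = SelfAttentiveLatentGenerator()
fake_code = g(torch.randn(8, 100))            # -> shape (8, 128)
```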
Keywords: Self-attention, Transformer, generative adversarial networks, GAN, neural text generation, NTG, generative models
TL;DR: We propose a self-attention-based GAN architecture for unconditional text generation and improve on previous adversarial code-based results.
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/salsa-text-self-attentive-latent-space-based/code)