Improving Conditional Sequence Generative Adversarial Networks by Evaluating at Every Generation Step


Nov 03, 2017 (modified: Nov 03, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: Conditional sequence generation is a widely researched topic. One of its most important tasks is dialogue generation, whose input-output pairs famously exhibit a one-to-many property. Recently, the success of the generative adversarial network (GAN) has motivated researchers to apply GANs to sequence generation. However, research on conditional sequence generation remains limited. We therefore investigate the influence of GANs on conditional sequence generation using three artificial grammars that share similar properties with dialogue generation. We compare state-of-the-art GAN-related algorithms on two evaluation metrics: the accuracy and the coverage of answers. Moreover, we propose the every-step GAN (ESGAN) for conditional sequence generation, which predicts a reward at each time step. ESGAN can be considered a general version of SeqGAN that includes MCMC and REGS in the model. We also explore an energy-based variant, EBESGAN. These models mitigate the sparse-reward problem in reinforcement learning. In addition, by weighting the rewards at every step, the advantage of curriculum learning is also obtained. The experimental results show that ESGAN and EBESGAN are usually comparable with the state-of-the-art algorithms, and sometimes outperform them.
  • Keywords: conditional sequence generation, generative adversarial network, REINFORCE, dialogue generation
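The abstract's core idea, a reward predicted at every generation step rather than only at the end of the sequence, can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: `toy_discriminator` is a hypothetical stand-in for a learned discriminator that scores a partial sequence, and the optional `weights` argument mimics the weighted per-step rewards the abstract links to curriculum learning.

```python
import numpy as np

def toy_discriminator(prefix):
    """Hypothetical discriminator scoring a partial sequence in [0, 1].

    Purely illustrative: it rewards monotonically increasing token prefixes.
    A real discriminator would be a trained network."""
    if len(prefix) < 2:
        return 0.5
    ups = sum(b > a for a, b in zip(prefix, prefix[1:]))
    return ups / (len(prefix) - 1)

def terminal_reward(seq):
    """SeqGAN-style signal: one sparse reward, given only at the final step."""
    r = np.zeros(len(seq))
    r[-1] = toy_discriminator(seq)
    return r

def every_step_rewards(seq, weights=None):
    """ESGAN-style sketch: the discriminator evaluates every prefix, so the
    generator receives a learning signal at each time step; optional weights
    emulate the curriculum effect mentioned in the abstract."""
    r = np.array([toy_discriminator(seq[: t + 1]) for t in range(len(seq))])
    if weights is not None:
        r = r * np.asarray(weights, dtype=float)
    return r

if __name__ == "__main__":
    seq = [1, 3, 2, 4, 5]
    print(terminal_reward(seq))     # nonzero only at the last position
    print(every_step_rewards(seq))  # a reward at every position
```

The contrast is visible in the two reward vectors: the terminal variant leaves all but the last entry at zero (the sparse-reward problem), while the every-step variant gives the REINFORCE update dense feedback to assign credit to individual tokens.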