An Actor-Critic Algorithm for Sequence Prediction

Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio

Nov 02, 2016 (modified: Mar 03, 2017) · ICLR 2017 conference submission · Readers: everyone
  • Abstract: We present an approach to training neural networks to generate sequences using actor-critic methods from reinforcement learning (RL). Current log-likelihood training methods are limited by the discrepancy between their training and testing modes, as models must generate tokens conditioned on their previous guesses rather than the ground-truth tokens. We address this problem by introducing a critic network that is trained to predict the value of an output token, given the policy of an actor network. This results in a training procedure that is much closer to the test phase, and allows us to directly optimize for a task-specific score such as BLEU. Crucially, since we leverage these techniques in the supervised learning setting rather than the traditional RL setting, we condition the critic network on the ground-truth output. We show that our method leads to improved performance on both a synthetic task and German-English machine translation. Our analysis paves the way for such methods to be applied in natural language generation tasks, such as machine translation, caption generation, and dialogue modelling. (A schematic sketch of the resulting actor-critic update appears after this list.)
  • TL;DR: Adapting Actor-Critic methods from reinforcement learning to structured prediction
  • Paperhash: bahdanau|an_actorcritic_algorithm_for_sequence_prediction
  • Conflicts: umontreal.ca, google.com, mcgill.ca
  • Authorids: dimabgv@gmail.com, pbpop3@gmail.com, iamkelvinxu@gmail.com, anirudhgoyal9119@gmail.com, lowe.ryan.t@gmail.com, jpineau@cs.mcgill.ca, aaron.courville@gmail.com, yoshua.bengio@gmail.com
  • Keywords: Natural language processing, Deep learning, Reinforcement Learning, Structured prediction
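
For illustration only, below is a minimal sketch (in PyTorch) of the kind of actor-critic update the abstract describes; it is not the authors' implementation. The toy tensors, the per-step rewards, and the bootstrapped critic targets are assumptions made for the example: in the paper's setting the actor and critic are full sequence models, and the critic is additionally conditioned on the ground-truth output.

```python
import torch
import torch.nn.functional as F

T, V = 5, 20                                            # toy sequence length and vocabulary size
actor_logits = torch.randn(T, V, requires_grad=True)    # actor's scores over tokens at each step
critic_values = torch.randn(T, V, requires_grad=True)   # critic's value Q(y, t) for every candidate token y at step t
sampled = torch.randint(0, V, (T,))                     # tokens the actor actually sampled
rewards = torch.rand(T)                                 # per-step increments of a task score (e.g. BLEU)

# Actor update: raise the probability of tokens the critic values highly,
# i.e. maximize the expected critic value under the actor's output distribution.
probs = F.softmax(actor_logits, dim=-1)
actor_loss = -(probs * critic_values.detach()).sum(dim=-1).mean()

# Critic update: regress the value of the sampled token toward the immediate
# reward plus the expected value of the next step under the actor's policy.
q_taken = critic_values[torch.arange(T), sampled]
with torch.no_grad():
    next_v = (probs[1:] * critic_values[1:]).sum(dim=-1)
    targets = rewards.clone()
    targets[:-1] += next_v
(actor_loss + F.mse_loss(q_taken, targets)).backward()
```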
