Batch Policy Gradient Methods for Improving Neural Conversation Models
Kirthevasan Kandasamy, Yoram Bachrach, Ryota Tomioka, Daniel Tarlow, David Carter
Nov 04, 2016 (modified: Feb 10, 2017) · ICLR 2017 conference submission · readers: everyone
Abstract: We study reinforcement learning of chat-bots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chat-bot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming, and noisy. Previous reinforcement learning work for natural language uses on-policy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (BPG). We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset.
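As a rough illustration of the idea named in the abstract, the sketch below shows a generic importance-weighted REINFORCE-style update on a fixed batch of previously collected responses. This is a minimal sketch of off-policy batch policy gradients in general, not the paper's exact BPG algorithm; the function name and baseline handling are illustrative assumptions.

```python
import numpy as np

def off_policy_batch_weights(logp_current, logp_behaviour, rewards, baseline=0.0):
    """Compute per-trajectory weights for an off-policy batch policy
    gradient step (importance-weighted REINFORCE sketch; illustrative,
    not the authors' exact BPG update).

    logp_current:   log-probability of each sampled response under the
                    current policy being trained
    logp_behaviour: log-probability under the (older) behaviour policy
                    that generated the batch
    rewards:        noisy scalar scores, e.g. from human annotators
    baseline:       value subtracted from rewards to reduce variance
    """
    logp_current = np.asarray(logp_current, dtype=float)
    logp_behaviour = np.asarray(logp_behaviour, dtype=float)
    rewards = np.asarray(rewards, dtype=float)

    # Importance weights correct for the mismatch between the policy
    # that generated the data and the policy being updated.
    rho = np.exp(logp_current - logp_behaviour)

    # Centre the noisy rewards around a baseline to reduce variance.
    advantages = rewards - baseline

    # Each trajectory's score-function gradient (d log p / d theta)
    # would be scaled by this weight in the parameter update.
    return rho * advantages
```

With two responses where the current policy is twice/half as likely as the behaviour policy to produce them, and rewards 1.0 and 0.0 against a baseline of 0.5, the weights come out to 1.0 and -0.25: the first response is reinforced, the second suppressed.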
Keywords: Natural language processing, Reinforcement Learning
Conflicts: cmu.edu, mrt.ac.lk, microsoft.com