Dialogue Learning With Human-in-the-Loop
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
Nov 04, 2016 (modified: Feb 14, 2017) · ICLR 2017 conference submission · readers: everyone
Abstract: An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
TL;DR: We explore a reinforcement learning setting for dialogue where the bot improves its abilities using reward-based or textual feedback.
Keywords: Natural language processing
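The reward-based setting the abstract describes, where a bot answers questions and updates its policy from a teacher's feedback on each generated response, can be illustrated with a minimal sketch. This is not the paper's model (which uses neural memory networks and a simulator): the `FeedbackBot` class, the epsilon-greedy bandit policy, and the `teacher_reward` function below are all hypothetical stand-ins chosen to show the interaction loop only.

```python
import random


class FeedbackBot:
    """Toy QA bot that improves online from scalar teacher rewards.

    Hypothetical sketch: an epsilon-greedy bandit over candidate answers
    stands in for the paper's learned dialogue policy.
    """

    def __init__(self, candidate_answers, epsilon=0.1, seed=0):
        self.candidates = candidate_answers
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.values = {}  # (question, answer) -> running reward estimate
        self.counts = {}  # (question, answer) -> number of times tried

    def answer(self, question):
        # Explore occasionally; otherwise pick the best-valued answer so far.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.candidates)
        return max(self.candidates,
                   key=lambda a: self.values.get((question, a), 0.0))

    def learn(self, question, answer, reward):
        # Incremental mean of observed rewards for this (question, answer).
        key = (question, answer)
        n = self.counts.get(key, 0) + 1
        self.counts[key] = n
        v = self.values.get(key, 0.0)
        self.values[key] = v + (reward - v) / n


def teacher_reward(question, answer, answer_key):
    """Simulated teacher: +1 for a correct response, -1 otherwise."""
    return 1.0 if answer_key[question] == answer else -1.0


# Online learning loop: the bot responds, the teacher reacts, the bot updates.
answer_key = {"capital of France?": "Paris", "2+2?": "4"}
bot = FeedbackBot(["Paris", "London", "4", "5"], epsilon=0.2, seed=0)
rng = random.Random(1)
for _ in range(300):
    q = rng.choice(sorted(answer_key))
    a = bot.answer(q)
    bot.learn(q, a, teacher_reward(q, a, answer_key))

bot.epsilon = 0.0  # evaluate greedily after training
```

After a few hundred feedback rounds the greedy policy settles on the rewarded answers, which is the core dynamic the paper studies with far richer models and with textual (not just scalar) feedback.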