Improving Search Through A3C Reinforcement Learning Based Conversational Agent

Milan Aggarwal, Aarushi Arora, Shagun Sodhani, Balaji Krishnamurthy

Feb 15, 2018 · ICLR 2018 Conference Blind Submission
  • Abstract: We develop a reinforcement-learning-based search assistant that guides users through a set of actions and a sequence of interactions to help them realize their intent. Our approach targets subjective search, where the user seeks digital assets such as images, which is fundamentally different from tasks with an objective goal and limited search modalities. Labeled conversational data is generally unavailable for such search tasks, and training the agent through human interactions is time-consuming. We propose a stochastic virtual user that impersonates a real user and can be used to sample user behavior efficiently, accelerating the bootstrapping of the agent. We develop an A3C-based context-preserving architecture that enables the agent to provide contextual assistance to the user. We compare the A3C agent with Q-learning and evaluate its performance on the average rewards and state values it obtains with the virtual user in validation episodes. Our experiments show that the agent learns to achieve higher rewards and reach better states.
  • TL;DR: A reinforcement-learning-based conversational search assistant that provides contextual assistance in subjective search (such as for digital assets).
  • Keywords: Subjective search, Reinforcement Learning, Conversational Agent, Virtual user model, A3C, Context aggregation
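The abstract's core idea of sampling a stochastic virtual user to bootstrap the agent, rather than training against live humans, can be illustrated with a minimal sketch. All names, actions, and probabilities below are hypothetical and not taken from the paper; the sketch only shows how a simulated user lets an agent collect discounted episode returns cheaply.

```python
import random

# Hypothetical agent actions in a conversational search episode.
ACTIONS = ["show_results", "ask_refinement", "suggest_similar"]

class VirtualUser:
    """Stochastic stand-in for a real searcher: samples feedback
    (a reward) for each agent action instead of asking a human."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def respond(self, action):
        # Assumed per-action probabilities of positive feedback,
        # chosen purely for illustration.
        p_positive = {"show_results": 0.5,
                      "ask_refinement": 0.6,
                      "suggest_similar": 0.4}[action]
        return 1.0 if self.rng.random() < p_positive else -0.1

def sample_episode(user, policy, horizon=10, gamma=0.99):
    """Roll out one simulated conversation and return its
    discounted return, as an RL trainer would."""
    ret, discount = 0.0, 1.0
    for _ in range(horizon):
        action = policy()
        ret += discount * user.respond(action)
        discount *= gamma
    return ret

user = VirtualUser(seed=42)
policy_rng = random.Random(7)
returns = [sample_episode(user, lambda: policy_rng.choice(ACTIONS))
           for _ in range(100)]
avg_return = sum(returns) / len(returns)
```

In the paper's setup these sampled episodes would feed an A3C (or Q-learning) update; here a uniform random policy stands in for the learned one, since the point is only the simulation loop.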