Learning to Navigate the Web

Anonymous

Sep 27, 2018 (modified: Oct 10, 2018) ICLR 2019 Conference Blind Submission
  • Abstract: Learning in environments with large state and action spaces as well as sparse rewards can hinder a Reinforcement Learning (RL) agent's ability to learn through trial and error. For instance, following natural language instructions on the Web (such as booking a flight ticket) leads to RL settings where the input vocabulary and the number of actionable elements on a page can grow very large. Even though recent approaches improve success rates on relatively simple environments with the help of human demonstrations to guide exploration, they still fail in environments where the set of possible instructions can reach millions. We approach these problems from a different perspective and propose a meta-trainer that can generate an unbounded amount of experience for an agent to learn from. Instead of learning from a complicated instruction with a large vocabulary, we decompose it into multiple sub-instructions and schedule a curriculum in which the agent is tasked with a gradually increasing subset of these relatively easier sub-instructions (a sketch of this scheduling appears after this list). We train a DQN, a deep reinforcement learning agent, whose Q-value function is approximated with a novel QWeb neural network architecture, on these smaller, synthetic instructions. We evaluate our agent's ability to generalize to new instructions on the World of Bits benchmark, on forms with 100 elements supporting 14 million possible instructions. The QWeb agent outperforms the baseline without using any human demonstrations, achieving a 100% success rate on several difficult environments.
  • Keywords: navigating web pages, reinforcement learning, q learning, curriculum learning, meta training
  • TL;DR: We train reinforcement learning policies using reward augmentation, curriculum learning, and meta-learning to successfully navigate web pages.
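
The curriculum over sub-instructions described in the abstract lends itself to a short illustration. The Python sketch below is a minimal, hypothetical rendering of that idea, not the authors' implementation: the flight-booking field names, the linear difficulty schedule, and the `run_episode` stub are all assumptions made for illustration.

```python
import random

# Hypothetical full instruction for a flight-booking form. The field names
# are illustrative; the paper does not specify this schema.
FULL_INSTRUCTION = {
    "from": "San Francisco",
    "to": "New York",
    "date": "2018-12-01",
    "passengers": "2",
}

def sample_sub_instruction(instruction, difficulty):
    """Return a sub-instruction with `difficulty` fields drawn at random.

    This mirrors the curriculum in the abstract: early episodes see a
    single easy field, later episodes see larger subsets, until the agent
    faces the full instruction.
    """
    k = min(difficulty, len(instruction))
    fields = random.sample(sorted(instruction), k)
    return {f: instruction[f] for f in fields}

def run_episode(sub_instruction):
    """Stand-in for rolling out the agent; returns a fake reward."""
    return random.random()  # a real agent would interact with the web page

# Linear schedule (an assumption): raise difficulty every 1000 episodes.
EPISODES_PER_LEVEL = 1000
for episode in range(4 * EPISODES_PER_LEVEL):
    difficulty = 1 + episode // EPISODES_PER_LEVEL
    task = sample_sub_instruction(FULL_INSTRUCTION, difficulty)
    reward = run_episode(task)
    # ...a real trainer would apply a DQN/Q-learning update here...
```

A natural variant would gate each difficulty increase on the agent's recent success rate rather than a fixed episode count, making the curriculum adaptive; the fixed schedule above is just the simplest form of the idea.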