Deep reinforcement learning with relational inductive biases

27 Sep 2018 (modified: 22 Feb 2019) · ICLR 2019 Conference Blind Submission · Readers: Everyone
  • Abstract: We introduce an approach for augmenting model-free deep reinforcement learning agents with a mechanism for relational reasoning over structured representations, which improves performance, learning efficiency, generalization, and interpretability. Our architecture encodes an image as a set of vectors and applies an iterative message-passing procedure to discover and reason about relevant entities and relations in a scene. In six of seven StarCraft II Learning Environment mini-games, our agent achieved state-of-the-art performance, and it surpassed human grandmaster level on four. In a novel navigation and planning task, our agent's performance and learning efficiency far exceeded non-relational baselines, and it was able to generalize to more complex scenes than it had experienced during training. Moreover, when we examined its learned internal representations, they reflected important structure about the problem and the agent's intentions. The main contribution of this work is to introduce techniques for representing and reasoning about states in model-free deep reinforcement learning agents via relational inductive biases. Our experiments show this approach can offer advantages in efficiency, generalization, and interpretability, and can scale up to meet some of the most challenging test environments in modern artificial intelligence.
  • Keywords: relational reasoning, reinforcement learning, graph neural networks, starcraft, generalization, inductive bias
  • TL;DR: Relational inductive biases improve out-of-distribution generalization capacities in model-free reinforcement learning agents
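The abstract describes encoding an image as a set of entity vectors and iterating a message-passing step over them. A minimal sketch of one such step, using single-head dot-product attention over the entity set (the paper's actual block is a multi-head, residual, learned variant; the weight matrices and shapes here are illustrative assumptions):

```python
import numpy as np

def relational_block(entities, rng):
    """One round of dot-product attention message passing over a set of
    entity vectors of shape (n, d). Illustrative sketch only: weights are
    random stand-ins for learned projections."""
    n, d = entities.shape
    # Hypothetical query/key/value projection matrices (learned in practice).
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = entities @ Wq, entities @ Wk, entities @ Wv
    scores = q @ k.T / np.sqrt(d)              # pairwise relation scores
    scores -= scores.max(axis=1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)    # softmax over entities
    return attn @ v                            # aggregated message per entity

rng = np.random.default_rng(0)
entities = rng.standard_normal((5, 8))         # e.g. 5 entities, 8-dim each
out = relational_block(entities, rng)
print(out.shape)                               # (5, 8)
```

In the agent, the entity set would come from the spatial cells of a CNN feature map, and the block would be applied repeatedly so that multi-hop relations can be computed.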