Language-Conditioned Goal Generation: a New Approach to Language Grounding in RL

12 Jun 2020 (modified: 17 Jul 2020) · ICML 2020 Workshop LaReL Blind Submission · Readers: Everyone
  • Abstract: In the real world, linguistic agents are also embodied agents: they perceive and act in the physical world. The notion of Language Grounding questions the interactions between language and embodiment: how do learning agents connect, or ground, linguistic representations to the physical world? This question has recently been approached by the Reinforcement Learning community under the framework of instruction-following agents, in which behavioral policies or reward functions are conditioned on the embedding of an instruction expressed in natural language. This paper proposes another approach: using language to condition goal generators. Given any goal-conditioned policy, one can train a language-conditioned goal generator to produce language-agnostic goals for the agent. This method decouples sensorimotor learning from language acquisition and enables agents to demonstrate a diversity of behaviors for any given instruction. We propose a particular instantiation of this approach and demonstrate its benefits.
  • TL;DR: This paper proposes language-conditioned goal generation as an alternative to language-conditioned policies to tackle language grounding in reinforcement learning agents.
  • Keywords: language grounding, goals, intrinsic motivations, reinforcement learning, generative models
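The core idea of the abstract can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's implementation: `GoalGenerator`, `record`, `sample`, and `goal_conditioned_policy` are hypothetical names, and the generator is reduced to sampling from goals observed to satisfy an instruction (in practice it would be a learned generative model). It shows the key property: one instruction maps to a distribution over language-agnostic goals, which the pretrained goal-conditioned policy then pursues.

```python
import random

class GoalGenerator:
    """Toy language-conditioned goal generator (illustrative, not the paper's code).

    Maps an instruction to a *distribution* over language-agnostic goals,
    so the same instruction can yield a diversity of behaviors.
    """

    def __init__(self):
        # Goals (here: 2-D target positions) seen to satisfy each instruction;
        # a generative model (e.g. a conditional VAE or GAN) in practice.
        self.goals_per_instruction = {}

    def record(self, instruction, goal):
        # Associate a goal with an instruction during language acquisition.
        self.goals_per_instruction.setdefault(instruction, []).append(goal)

    def sample(self, instruction):
        # Sampling yields diverse goals for the same instruction.
        return random.choice(self.goals_per_instruction[instruction])


def goal_conditioned_policy(state, goal):
    # Stand-in for a pretrained, language-agnostic policy:
    # here, simply move from the current state toward the goal.
    return tuple(g - s for s, g in zip(state, goal))


gen = GoalGenerator()
gen.record("go left", (-1.0, 0.0))
gen.record("go left", (-2.0, 1.0))  # several goals satisfy one instruction

goal = gen.sample("go left")
action = goal_conditioned_policy((0.0, 0.0), goal)
```

Note the decoupling: the policy only ever sees goal vectors, so sensorimotor learning needs no language at all, and the language side only has to learn the instruction-to-goal mapping.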