A Persistent Spatial Semantic Representation for High-level Natural Language Instruction Execution

Jun 19, 2021 (edited Aug 25, 2021) · CoRL 2021 Poster
  • Keywords: vision and language, spatial representations, semantic mapping, representation learning, instruction following
  • Abstract: Natural language provides an accessible and expressive interface to specify long-term tasks for robotic agents. However, non-experts are likely to specify such tasks with high-level instructions that sit several layers of abstraction above specific robot actions. We propose that persistent representations are key to bridging this gap between language and robot actions over long execution horizons. We introduce a persistent spatial semantic representation method and show how it enables an agent that performs hierarchical reasoning to effectively execute long-term tasks. We evaluate our approach on the ALFRED benchmark and achieve state-of-the-art results, despite completely avoiding the commonly used step-by-step instructions. https://hlsm-alfred.github.io/
  • Supplementary Material: zip