Extended Abstract: Improving Vision-and-Language Navigation with Image-Text Pairs from the Web

Jun 12, 2020 (edited Jul 17, 2020) · ICML 2020 Workshop LaReL Blind Submission
  • Abstract: Following a navigation instruction such as 'Walk down the stairs and stop near the sofa' requires an agent to ground scene elements referenced via language (e.g., 'stairs') to visual content in the environment (pixels corresponding to 'stairs'). We ask the following question -- can we leverage abundant 'disembodied' web-scraped vision-and-language corpora (e.g., Conceptual Captions (Sharma et al., 2018)) to learn visual groundings (what do 'stairs' look like?) that improve performance on a relatively data-starved embodied perception task, Vision-and-Language Navigation? Specifically, we develop VLN-BERT, a visiolinguistic transformer model that scores the compatibility between an instruction ('...stop near the sofa') and a sequence of panoramic images. We demonstrate that pretraining VLN-BERT on image-text pairs from the web significantly improves performance on VLN -- outperforming the prior state of the art in the fully-observed setting by 4 absolute percentage points on success rate. Ablations of our pretraining curriculum show each stage to be impactful -- with their combination resulting in further synergistic effects.
  • TL;DR: We demonstrate that visual grounding learned from image-text pairs from the web can be used to significantly improve performance on the embodied AI task of Vision-and-Language Navigation (VLN).
  • Keywords: vision-and-language navigation, instruction following, transfer learning, BERT, embodied AI
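The abstract describes VLN-BERT as a model that scores the compatibility between an instruction and a sequence of panoramic images. The sketch below illustrates only that scoring interface, not the actual VLN-BERT architecture or its learned weights: it is a minimal stand-in that pools hypothetical instruction-token embeddings and panorama features and compares them with cosine similarity. The `embed` function and all dimensions are assumptions for illustration.

```python
import numpy as np

def embed(tokens, dim=8):
    # Hypothetical token embedding for illustration only: each token gets a
    # fixed random vector seeded from its hash (stands in for a learned
    # language encoder; NOT part of VLN-BERT).
    return np.stack([
        np.random.default_rng(abs(hash(t)) % (2**32)).standard_normal(dim)
        for t in tokens
    ])

def compatibility_score(instruction_tokens, panorama_feats):
    """Score instruction/trajectory compatibility by cosine similarity of
    mean-pooled embeddings -- a toy stand-in for the learned alignment that
    a visiolinguistic transformer like VLN-BERT would compute."""
    q = embed(instruction_tokens).mean(axis=0)          # pooled instruction
    v = panorama_feats.mean(axis=0)                     # pooled panoramas
    return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-8))

# Usage: 5 panoramic views, each an 8-d visual feature (random placeholders).
rng = np.random.default_rng(0)
score = compatibility_score(["walk", "down", "the", "stairs"],
                            rng.standard_normal((5, 8)))
```

In the paper's setting, such a score would be used to rank candidate trajectories against the instruction; here the cosine output simply lies in [-1, 1].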