ATLAS: Adaptive Landmark Acquisition using LLM-Guided Navigation

Published: 22 Apr 2024, Last Modified: 04 May 2024
Venue: VLADR 2024 Poster
License: CC BY 4.0
Keywords: Autonomous Navigation, Natural Language Processing, Large Language Models, Robot Operating System, Path Planning, Object Detection
Abstract: Autonomous navigation agents traditionally rely on predefined maps and landmarks, limiting their ability to adapt to dynamic and unfamiliar environments. This work presents ATLAS, a novel system that continuously expands its navigable landmark set and performs complex natural-language-guided navigation tasks. ATLAS integrates three key components: a path planning module for navigating to known landmarks, an object detection module for identifying and localizing objects in the environment, and a large language model (LLM) for high-level reasoning and natural language understanding. We evaluate ATLAS in diverse virtual environments simulated in Gazebo, including indoor office spaces and warehouses. Results demonstrate the system's ability to steadily expand its landmark set over time and to execute navigation tasks of varying complexity, from simple point-to-point navigation to multi-landmark tasks specified by natural language descriptions. Our tests show that ATLAS expands its knowledge, achieving a 100% success rate on tasks involving known landmarks and up to 100% on semantically inferred goals for objects absent from the initial knowledge base. We further demonstrate the system's capacity to incrementally enhance its navigational knowledge, adapting dynamically and accurately performing complex, natural-language-driven tasks across diverse simulation environments.
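
The abstract describes a three-component architecture (path planner, object detector, LLM) in which new landmarks are acquired while the robot navigates. The sketch below is a minimal, self-contained illustration of how such a loop could be wired together; it is not the ATLAS implementation, and all names (KnowledgeBase, plan_path, detect_objects, resolve_goal_with_llm) and the simulated scene are illustrative assumptions.

"""Illustrative sketch (not the ATLAS implementation): an agent that
navigates to known landmarks, detects objects along the way, and adds
them to its knowledge base so later language commands can use them."""

from dataclasses import dataclass


@dataclass
class Landmark:
    name: str
    position: tuple  # (x, y) in the map frame


class KnowledgeBase:
    """Stores the landmarks the agent can currently navigate to."""

    def __init__(self, initial):
        self.landmarks = {lm.name: lm for lm in initial}

    def add(self, lm):
        # Expanding the navigable landmark set over time.
        if lm.name not in self.landmarks:
            self.landmarks[lm.name] = lm

    def lookup(self, name):
        return self.landmarks.get(name)


def plan_path(start, goal):
    """Placeholder path planner: straight-line waypoints between poses."""
    return [start, goal]


def detect_objects(position):
    """Placeholder object detector: returns objects 'seen' near a pose."""
    simulated_scene = {
        (5.0, 0.0): [Landmark("water cooler", (5.5, 0.5))],
        (0.0, 8.0): [Landmark("printer", (0.5, 8.5))],
    }
    return simulated_scene.get(position, [])


def resolve_goal_with_llm(command, kb):
    """Stand-in for the LLM call: map a natural-language command to a
    landmark name. A real system would prompt an LLM with the KB contents."""
    for name in kb.landmarks:
        if name in command.lower():
            return name
    return None


def execute(command, kb, pose):
    goal_name = resolve_goal_with_llm(command, kb)
    if goal_name is None:
        print(f"Cannot ground command: {command!r}")
        return pose
    goal = kb.lookup(goal_name)
    for waypoint in plan_path(pose, goal.position):
        pose = waypoint
        for obj in detect_objects(pose):  # acquire new landmarks en route
            kb.add(obj)
    print(f"Reached {goal_name} at {pose}; KB now holds {len(kb.landmarks)} landmarks")
    return pose


if __name__ == "__main__":
    kb = KnowledgeBase([Landmark("office", (5.0, 0.0)), Landmark("lab", (0.0, 8.0))])
    pose = (0.0, 0.0)
    pose = execute("go to the office", kb, pose)
    pose = execute("now go to the water cooler", kb, pose)  # landmark learned en route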
Submission Number: 16