Leveraging Foundation Models for One-Shot Learning via HRC and Aerial-Ground Multi-Robot Collaboration

Published: 18 Apr 2025, Last Modified: 07 May 2025 · ICRA 2025 FMNS Poster · CC BY 4.0
Keywords: Foundation Model-Driven Robots, Multi-Agent Robot System
TL;DR: This paper presents LLM-driven task decomposition and VLM-assisted perception to improve robot autonomy in human-robot and aerial-ground collaboration.
Abstract: Foundation models, including Large Language Models (LLMs) and Vision-Language Models (VLMs), offer new capabilities for improving robotic autonomy. This paper presents two independent approaches for applying foundation models to robotic task execution. The first approach employs LLM-driven task decomposition and teleoperation-based human-robot collaboration, enabling one-shot learning and rapid refinement of motion primitives for high-difficulty tasks. The second approach leverages an aerial robot for top-down perception, where VLMs process the captured images to support ground robots in manipulation and navigation. Experimental results demonstrate that LLM-driven task decomposition significantly improves robot adaptability to novel tasks, while VLM-assisted multimodal perception enhances task-specific reasoning and scene understanding.
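To make the first approach concrete, below is a minimal illustrative sketch (not the authors' implementation) of LLM-driven task decomposition: a natural-language task is decomposed by an LLM into a sequence of motion primitives drawn from a fixed skill library, which a teleoperator could then refine step by step. All names here (PRIMITIVE_LIBRARY, MotionPrimitive, decompose_task, the stubbed LLM) are hypothetical and exist only for this example.

```python
"""Hypothetical sketch of LLM-driven task decomposition into motion primitives."""
import json
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical library of low-level skills the robot already possesses.
PRIMITIVE_LIBRARY = {"move_to", "grasp", "place", "rotate_wrist", "release"}

@dataclass
class MotionPrimitive:
    name: str      # one of PRIMITIVE_LIBRARY
    params: dict   # e.g. {"target": "red_block"}

PROMPT_TEMPLATE = (
    "Decompose the task below into a JSON list of steps, each "
    '{{"name": <primitive>, "params": {{...}}}}, using only these primitives: '
    "{primitives}.\nTask: {task}\nJSON:"
)

def decompose_task(task: str, llm: Callable[[str], str]) -> List[MotionPrimitive]:
    """Query the LLM once (one-shot) and parse its plan into motion primitives."""
    prompt = PROMPT_TEMPLATE.format(primitives=sorted(PRIMITIVE_LIBRARY), task=task)
    steps = json.loads(llm(prompt))
    plan = []
    for step in steps:
        if step["name"] not in PRIMITIVE_LIBRARY:
            raise ValueError(f"LLM proposed unknown primitive: {step['name']}")
        plan.append(MotionPrimitive(step["name"], step.get("params", {})))
    return plan

if __name__ == "__main__":
    # Stubbed LLM response for demonstration; a real system would call an LLM API
    # and let a human teleoperator refine any primitive that fails during execution.
    fake_llm = lambda _prompt: json.dumps([
        {"name": "move_to", "params": {"target": "red_block"}},
        {"name": "grasp", "params": {"object": "red_block"}},
        {"name": "move_to", "params": {"target": "bin"}},
        {"name": "release", "params": {}},
    ])
    for primitive in decompose_task("Put the red block in the bin", fake_llm):
        print(primitive)
```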
Submission Number: 9
