Keywords: Embodied AI, Memory, Reasoning
TL;DR: A scalable benchmark in Habitat testing agents on long-horizon embodied tasks requiring memory, contextual reasoning, and navigation.
Abstract: Large vision-language models have recently demonstrated impressive performance in planning and control tasks, driving interest in their application to real-world robotics. However, deploying these models for reasoning in embodied contexts is constrained by their limited ability to incorporate long-term experience collected across multiple days and represented by vast collections of images. Current VLMs typically struggle to process more than a few hundred images concurrently, highlighting the need for more efficient mechanisms to handle long-term memory in embodied settings. To effectively evaluate these models for long-horizon control, a benchmark must specifically target scenarios where memory is crucial for success. Existing long-video QA benchmarks overlook embodied challenges like object manipulation and navigation, which demand low-level skills and fine-grained reasoning over past interactions. Moreover, effective memory integration in embodied agents involves both recalling relevant historical information and executing actions based on that information, making it essential to study these aspects together rather than in isolation. In this work, we introduce a new benchmark for long-range embodied tasks in the Habitat simulator. This scalable, procedurally generated benchmark evaluates memory-based capabilities across 60 tasks requiring sustained engagement and contextual awareness in an environment. We also present baselines that integrate state-of-the-art VLMs with low-level navigation policies, assessing their performance on these memory-intensive tasks and highlighting areas for improvement. Our dataset and code can be found [here](https://huggingface.co/datasets/findingdory/findingdory-habitat).
Submission Type: Dataset/Benchmark Paper (< 9 Pages)
Submission Number: 37
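As a minimal sketch of accessing the released dataset linked in the abstract: the repository id below is taken from the dataset URL, but the available splits, configs, and feature names are assumptions and may differ from what the authors actually publish.

```python
# Minimal sketch (assumption: the benchmark data is loadable via the standard
# Hugging Face `datasets` API; split/feature names are not specified in the paper).
from datasets import load_dataset

# Repository id taken from the dataset URL in the abstract.
ds = load_dataset("findingdory/findingdory-habitat")

# Inspect whatever splits and features the repository actually provides.
for split_name, split in ds.items():
    print(split_name, len(split), split.features)
```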