Keywords: Embodied AI, VLA, Reasoning, Manipulation
Abstract: Before they can be adopted in real-life scenarios, generalist embodied agents must perform interactive, causally dependent reasoning: continually interacting with the environment, acquiring information, and updating plans to solve long-horizon tasks. For instance, retrieving an apple from a cabinet may require opening multiple doors and drawers before the apple becomes visible and reachable, demanding sequential interaction under partial observability. However, existing benchmarks fail to systematically evaluate this essential capability. We introduce \textbf{COIN}, a benchmark designed to assess interactive reasoning in realistic robotic manipulation through three key contributions. First, we construct \textbf{COIN-50}, a suite of 50 interactive tasks in daily scenarios, along with \textbf{COIN-Primitive}, the primitive skills required by causally dependent tasks, and \textbf{COIN-Composition}, tasks of intermediate complexity for skill learning and generalization evaluation. Second, we develop a low-cost mobile AR teleoperation system and collect the COIN-Primitive Dataset with 50 demonstrations per primitive task (1,000 in total). Third, we design systematic evaluation metrics for execution stability and generalization robustness, and use them to evaluate \textbf{CodeAsPolicy}, \textbf{VLA}, and language-conditioned \textbf{H-VLA} approaches. Our comprehensive evaluation reveals critical limitations in current methods: models struggle with interactive reasoning tasks due to significant gaps between visual understanding and motor execution. We provide a fine-grained analysis of these limitations.
Primary Area: datasets and benchmarks
Submission Number: 10758