NEBULA: Do We Evaluate Vision-Language-Action Agents Correctly?

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Robotics, Embodied AI, Benchmarks, Evaluation Metrics, Simulator
TL;DR: We present NEBULA, a unified ecosystem for VLA agents that disentangles capabilities from performance metrics and standardizes task data via a shared API to enable fine-grained, interpretable, and transferable assessment.
Abstract: The evaluation of Vision-Language-Action (VLA) agents is hindered by coarse end-task success metrics, which fail to provide precise skill diagnosis or to measure robustness to real-world perturbations. This challenge is amplified by scattered data that limits reproducibility and progress toward generalist models. To address these limitations, we introduce NEBULA, a unified ecosystem for single-arm manipulation that enables diagnostic and reproducible evaluation. NEBULA features a novel dual-axis evaluation protocol that combines fine-grained capability tests for precise skill diagnosis with systematic stress tests that measure robustness. A standardized API and a large-scale aggregated dataset are provided to reduce fragmentation and to support cross-dataset training and fair comparison. Using NEBULA, we demonstrate that top-performing VLAs struggle with key capabilities such as spatial reasoning and dynamic adaptation, weaknesses that conventional end-task success metrics consistently obscure. By measuring both what an agent can do and when it does so reliably, NEBULA provides a practical foundation for building robust, general-purpose embodied agents.
Primary Area: datasets and benchmarks
Submission Number: 9320