SWITCH: Benchmarking Interaction and Verification on Real-World Interfaces in Lifelong Embodied Agents
Keywords: Multimodal Evaluation, Embodied AI, Multimodal Large Language Models (MLLMs), Benchmark
TL;DR: We introduce SWITCH, a benchmark designed to evaluate how models perceive, reason about, and act upon tangible control interfaces (TCIs), such as appliance panels and switches, in lifelong real-world contexts across diverse tasks.
Abstract: Autonomous agents operating in the real world must interact continuously with existing physical and semantic infrastructure, track delayed consequences, and verify outcomes over time. Everyday environments are rich in tangible control interfaces (TCIs)—e.g., light switches, appliance panels, and embedded GUIs—posing core challenges for lifelong embodied agents, including partial observability, causal reasoning across time, and failure-aware verification under real-world constraints. Yet current benchmarks rarely consider such long-horizon interaction and causality requirements. We introduce SWITCH (Semantic World Interface Tasks for Control & Handling), an embodied, task-driven benchmark created through iterative releases to probe these gaps. Its first iteration, SWITCH-Basic, evaluates five complementary abilities—task-aware VQA, semantic UI grounding, action generation, state-transition prediction, and result verification—under egocentric RGB video input and device diversity, across 351 tasks spanning 98 real devices and appliances. Results from commercial and open-source MLLMs reveal systematic failures, highlighting critical gaps for lifelong agent deployment. SWITCH provides data, code, and held-out splits to enable reproducible, contamination-free evaluation and community contributions toward more challenging future iterations of the benchmark and the creation of relevant training data.
Submission Number: 99