Keywords: Cross-Perspective Video Understanding; Egocentric and Exocentric Views; Video Question Answering; MLLM Benchmark
Abstract: Transferring and integrating knowledge across first-person (egocentric) and third-person (exocentric) viewpoints is intrinsic to human intelligence, enabling humans to learn from others and convey insights from their own experiences. Despite rapid progress in multimodal large language models (MLLMs), their ability to perform such cross-view reasoning remains unexplored. To address this, we introduce EgoExoBench, the first benchmark for egocentric–exocentric video understanding and reasoning. Built from publicly available datasets, EgoExoBench comprises over 7,300 question–answer pairs spanning eleven sub-tasks organized into three core challenges: semantic alignment, viewpoint association, and temporal reasoning. We evaluate 13 state-of-the-art MLLMs and find that while these models excel on single-view tasks, they struggle to align semantics across perspectives, accurately associate views, and infer temporal dynamics in the ego-exo context. We hope EgoExoBench can serve as a valuable resource for research on embodied agents and intelligent assistants seeking human-like cross-view intelligence.
Croissant File: json
Dataset URL: https://huggingface.co/datasets/Heleun/EgoExoBench_MCQ
Code URL: https://github.com/ayiyayi/EgoExoBench
Primary Area: Datasets & Benchmarks for applications in language modeling and vision language modeling
Flagged For Ethics Review: true
Submission Number: 873