Keywords: Egocentric Vision, Goal-oriented Activities, Video Question Answering
Abstract: Understanding human tasks through video observations is an essential capability of intelligent agents. The challenges of such a capability lie in the difficulty of generating a detailed understanding of situated actions, their effects on object states (\ie, state changes), and their causal dependencies. These challenges are further aggravated by the natural parallelism of multi-tasking and the partial observations inherent in multi-agent collaboration. Most prior works leverage action localization or future prediction as an \textit{indirect} metric for evaluating such task understanding from videos. To make a \textit{direct} evaluation, we introduce the EgoTaskQA benchmark, which provides a single home for the crucial dimensions of task understanding through question answering on real-world egocentric videos. We meticulously design questions that target the understanding of (1) action dependencies and effects, (2) intents and goals, and (3) agents' beliefs about others. These questions are divided into four types, including descriptive (what status?), predictive (what will?), explanatory (what caused?), and counterfactual (what if?), to provide diagnostic analyses of \textit{spatial, temporal, and causal} understandings of goal-oriented tasks. We evaluate state-of-the-art video reasoning models on our benchmark and show the significant gap between them and humans in understanding complex goal-oriented egocentric videos. We hope this effort will drive the vision community onward in goal-oriented video understanding and reasoning.
Author Statement: Yes
URL: https://sites.google.com/view/egotaskqa
Dataset Url: https://sites.google.com/view/egotaskqa
License: CC BY-NC-SA
TL;DR: We present the EgoTaskQA benchmark, which targets action dependencies, post-effects, agents' intents and goals, as well as multi-agent belief modeling in egocentric goal-oriented videos.
Supplementary Material: pdf
Contribution Process Agreement: Yes
In Person Attendance: Yes