MOMA-LRG: Language-Refined Graphs for Multi-Object Multi-Actor Activity Parsing

Published: 17 Sept 2022 (Last Modified: 23 May 2023)
Venue: NeurIPS 2022 Datasets and Benchmarks
Keywords: activity recognition, video-language model, video understanding, fine-grained activity recognition, scene graph generation, temporal action detection
TL;DR: We propose a new dataset and framework for evaluating video-language models on activity recognition at multiple levels of granularity
Abstract: Video-language models (VLMs), large models pre-trained on numerous but noisy video-text pairs from the internet, have revolutionized activity recognition through their remarkable generalization and open-vocabulary capabilities. While complex human activities are often hierarchical and compositional, most existing tasks for evaluating VLMs focus only on high-level video understanding, making it difficult to accurately assess and interpret the ability of VLMs to understand complex and fine-grained human activities. Inspired by the recently proposed MOMA framework, we define activity graphs as a single universal representation of human activities that encompasses video understanding at the activity, sub-activity, and atomic-action levels. We redefine activity parsing as the overarching task of activity graph generation, which requires understanding human activities across all three levels. To facilitate the evaluation of models on activity parsing, we introduce MOMA-LRG (Multi-Object Multi-Actor Language-Refined Graphs), a large dataset of complex human activities with activity graph annotations that can be readily transformed into natural language sentences. Lastly, we present a model-agnostic and lightweight approach to adapting and evaluating VLMs by incorporating structured knowledge from activity graphs into VLMs, addressing the individual limitations of language and graphical models. We demonstrate strong performance on few-shot activity parsing, and our framework is intended to foster future research in the joint modeling of videos, graphs, and language.
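To make the three-level hierarchy concrete, the sketch below illustrates, with an entirely hypothetical Python schema (the class and field names are not the MOMA-LRG annotation format), how an activity graph spanning the activity, sub-activity, and atomic-action levels might be flattened into natural-language sentences, in the spirit of the abstract's claim that activity graph annotations can be readily transformed into sentences.

```python
# Hypothetical illustration only: these dataclasses are NOT the MOMA-LRG schema;
# they merely mirror the three-level hierarchy (activity -> sub-activity ->
# atomic action) described in the abstract.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AtomicAction:
    actor: str          # e.g. "customer"
    predicate: str      # e.g. "hands over"
    objects: List[str]  # e.g. ["credit card"]


@dataclass
class SubActivity:
    name: str  # e.g. "paying at the counter"
    atomic_actions: List[AtomicAction] = field(default_factory=list)


@dataclass
class ActivityGraph:
    activity: str  # e.g. "dining at a restaurant"
    sub_activities: List[SubActivity] = field(default_factory=list)


def graph_to_sentences(graph: ActivityGraph) -> List[str]:
    """Flatten an activity graph into one sentence per atomic action,
    prefixed by its activity- and sub-activity-level context."""
    sentences = []
    for sact in graph.sub_activities:
        for aact in sact.atomic_actions:
            sentences.append(
                f"During {graph.activity}, while {sact.name}, "
                f"the {aact.actor} {aact.predicate} the {', '.join(aact.objects)}."
            )
    return sentences


if __name__ == "__main__":
    g = ActivityGraph(
        activity="dining at a restaurant",
        sub_activities=[
            SubActivity(
                name="paying at the counter",
                atomic_actions=[AtomicAction("customer", "hands over", ["credit card"])],
            )
        ],
    )
    for s in graph_to_sentences(g):
        print(s)
```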
Author Statement: Yes
Supplementary Material: pdf
URL: https://github.com/StanfordVL/moma/
Dataset Url: https://momaapi.readthedocs.io/
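As a starting point for working with the dataset, here is a minimal loading sketch. It assumes the `momaapi` Python package from the GitHub repository above is installed and the MOMA-LRG data has been downloaded locally; the accessor names (`get_ids_act`, `get_anns_act`) follow the usage pattern in the linked documentation but are assumptions here and should be verified against the API reference at momaapi.readthedocs.io.

```python
# Minimal sketch, assuming the momaapi package is installed and the dataset is
# available locally. Method names are taken from the documented usage pattern
# and may differ; check https://momaapi.readthedocs.io/ before relying on them.
from momaapi import MOMA

dir_moma = "./data/moma"  # hypothetical local path to the downloaded dataset
moma = MOMA(dir_moma)

ids_act = moma.get_ids_act()                    # IDs of annotated activity instances
anns_act = moma.get_anns_act(ids_act=ids_act)   # activity-level annotations

print(f"Number of activity instances: {len(ids_act)}")
```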
License: Our work is released under the CC-BY license.
Contribution Process Agreement: Yes
In Person Attendance: Yes