Spatio-Temporal Interaction Graph Parsing Networks for Human-Object Interaction Recognition

Published: 01 Jan 2021, Last Modified: 13 Nov 2023 · ACM Multimedia 2021
Abstract: For a given video-based Human-Object Interaction scene, modeling the spatio-temporal relationships between humans and objects is an important cue for understanding the contextual information presented in the video. With efficient spatio-temporal relationship modeling, it is possible not only to uncover the contextual information in each frame, but also to directly capture inter-frame dependencies. Capturing the position changes of humans and objects over the spatio-temporal dimension is especially critical when the appearance features do not change significantly over time. Beyond appearance features, the spatial locations and semantic information are also key to improving video-based Human-Object Interaction recognition performance. In this paper, Spatio-Temporal Interaction Graph Parsing Networks (STIGPN) are constructed, which encode a video as a graph composed of human and object nodes. These nodes are connected by two types of relations: (i) intra-frame relations, modeling the interactions between a human and the interacted objects within each frame; (ii) inter-frame relations, capturing the long-range dependencies between a human and the interacted objects across frames. With this graph, STIGPN learns spatio-temporal features directly from whole video-based Human-Object Interaction scenes. Multi-modal features and a multi-stream fusion strategy are used to enhance the reasoning capability of STIGPN. Two Human-Object Interaction video datasets, CAD-120 and Something-Else, are used to evaluate the proposed architectures, and the achieved state-of-the-art performance demonstrates the superiority of STIGPN. Code for STIGPN is available at https://github.com/GuangmingZhu/STIGPN.
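The graph structure described in the abstract — one human node plus several object nodes per frame, linked by intra-frame human-object edges and inter-frame edges between corresponding entities — can be illustrated with a minimal sketch. This is not the authors' implementation (see the repository above for the actual STIGPN code): the node layout, the choice of connecting entities only across consecutive frames, and the names `build_st_graph` and `GraphLayer` are all illustrative assumptions.

```python
# Minimal sketch of a spatio-temporal interaction graph with one round of
# message passing. Illustrative only; the real STIGPN uses richer relation
# modeling, multi-modal features, and a multi-stream fusion strategy.
import torch
import torch.nn as nn

def build_st_graph(T: int, K: int) -> torch.Tensor:
    """Adjacency over T*(1+K) nodes: in each frame, node 0 is the human and
    nodes 1..K are objects. Returns a row-normalized adjacency matrix."""
    N = T * (1 + K)
    A = torch.eye(N)  # self-loops

    def idx(t: int, e: int) -> int:  # entity e of frame t -> flat node index
        return t * (1 + K) + e

    for t in range(T):
        for k in range(1, K + 1):
            # intra-frame relations: human <-> each object within frame t
            A[idx(t, 0), idx(t, k)] = A[idx(t, k), idx(t, 0)] = 1.0
        if t + 1 < T:
            for e in range(1 + K):
                # inter-frame relations: same entity in consecutive frames
                # (a simplification; denser cross-frame links are possible)
                A[idx(t, e), idx(t + 1, e)] = A[idx(t + 1, e), idx(t, e)] = 1.0
    return A / A.sum(dim=1, keepdim=True)  # row-normalize

class GraphLayer(nn.Module):
    """One step of graph message passing: X' = ReLU(A X W)."""
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.proj = nn.Linear(d_in, d_out)

    def forward(self, X: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        return torch.relu(A @ self.proj(X))

T, K, D = 8, 3, 256               # frames, objects per frame, feature dim
A = build_st_graph(T, K)          # (T*(1+K), T*(1+K)) adjacency
X = torch.randn(T * (1 + K), D)   # per-node appearance/semantic features
out = GraphLayer(D, D)(X, A)      # spatio-temporally contextualized features
print(out.shape)                  # torch.Size([32, 256])
```

Because the same adjacency carries both intra-frame (human-object) and inter-frame (entity-tracking) edges, a single message-passing step already mixes spatial context with temporal context, which mirrors the abstract's claim of learning spatio-temporal features directly from the whole scene.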