Abstract: Human actions can be abstracted as interactions between humans and objects. The recently proposed task of compositional action recognition emphasizes that the verbs (actions) and nouns (humans or objects) constituting human actions are independent and freely combinable. However, most conventional appearance-based action recognition methods extract spatiotemporal features from input videos as a whole to understand actions. Such methods tend to rely excessively on overall appearance features and lack precise modeling of the interactions between humans and objects, often neglecting the actions themselves. Consequently, appearance-induced biases prevent the model from generalizing effectively to unseen combinations of actions and objects. To address this issue, we propose a method that explicitly models the object interaction path to capture interactions between humans and objects. Because these interactions are unaffected by object or environmental appearance bias, they provide complementary cues for appearance-based action recognition methods. Our method can be readily combined with any appearance-based visual encoder, significantly improving the compositional generalization ability of action recognition algorithms. Extensive experiments on the Something-Else and IKEA-Assembly datasets demonstrate the effectiveness of our approach.
DOI: 10.1007/s40747-025-01823-x
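The abstract describes fusing an appearance-free interaction path with an arbitrary appearance-based visual encoder. Below is a minimal PyTorch sketch of one way such a two-stream design could look. The class names, the box-coordinate parameterization of the interaction path, and the GRU-based temporal model are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class InteractionPathEncoder(nn.Module):
    """Hypothetical interaction-path module: encodes per-frame object box
    coordinates (appearance-free geometry) into a single interaction feature."""

    def __init__(self, num_boxes=4, box_dim=4, hidden=256):
        super().__init__()
        # Per-frame MLP over the concatenated box coordinates of all objects.
        self.box_mlp = nn.Sequential(
            nn.Linear(num_boxes * box_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Temporal model over the per-frame geometry features (assumed GRU).
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, boxes):
        # boxes: (B, T, num_boxes, box_dim)
        B, T, N, D = boxes.shape
        x = self.box_mlp(boxes.reshape(B, T, N * D))  # (B, T, hidden)
        _, h = self.temporal(x)                       # h: (1, B, hidden)
        return h.squeeze(0)                           # (B, hidden)


class TwoStreamCompositionalModel(nn.Module):
    """Fuses any appearance-based encoder with the geometry-only
    interaction path by concatenating the two features."""

    def __init__(self, appearance_encoder, appearance_dim, num_classes, hidden=256):
        super().__init__()
        self.appearance = appearance_encoder
        self.interaction = InteractionPathEncoder(hidden=hidden)
        self.classifier = nn.Linear(appearance_dim + hidden, num_classes)

    def forward(self, video, boxes):
        a = self.appearance(video)      # (B, appearance_dim), any video backbone
        g = self.interaction(boxes)     # (B, hidden), appearance-free cues
        return self.classifier(torch.cat([a, g], dim=-1))


# Usage with a dummy stand-in for an appearance backbone (illustrative only).
class DummyAppearance(nn.Module):
    def forward(self, video):           # video: (B, C, T, H, W)
        return video.mean(dim=(1, 2, 3, 4)).unsqueeze(-1).expand(-1, 512)


model = TwoStreamCompositionalModel(DummyAppearance(), appearance_dim=512,
                                    num_classes=174)  # illustrative class count
video = torch.randn(2, 3, 8, 64, 64)
boxes = torch.rand(2, 8, 4, 4)          # (B, T, num_boxes, xywh), e.g. from a tracker
logits = model(video, boxes)            # (2, 174)
```

Because the interaction path sees only box geometry, its features carry no object or scene appearance, which is the property the abstract credits for the improved generalization to unseen verb-noun combinations.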