Where and when to look? Spatial-temporal attention for action recognition in videos

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Blind Submission · Readers: Everyone
Abstract: Inspired by the observation that humans process videos efficiently by paying attention only when and where it is needed, we propose a novel spatial-temporal attention mechanism for video-based action recognition. For spatial attention, we learn a saliency mask that allows the model to focus on the most salient parts of the feature maps. For temporal attention, we employ a soft attention mechanism to identify the most relevant frames of an input video. Further, we propose a set of regularizers that ensure our attention mechanism attends to coherent regions in space and time. Our model is efficient because its spatial and temporal attention are separable, yet it can still identify the important parts of a video both spatially and temporally. We demonstrate the efficacy of our approach on three public video action recognition datasets, achieving state-of-the-art performance on all of them, including the new large-scale Moments in Time dataset. Furthermore, we quantitatively and qualitatively evaluate our model's ability to localize discriminative regions spatially and critical frames temporally, despite being trained only with per-video classification labels.
Keywords: visual attention, video action recognition, network interpretability
Data: [HMDB51](https://paperswithcode.com/dataset/hmdb51), [ImageNet](https://paperswithcode.com/dataset/imagenet), [Moments in Time](https://paperswithcode.com/dataset/moments-in-time), [THUMOS14](https://paperswithcode.com/dataset/thumos14-1), [UCF101](https://paperswithcode.com/dataset/ucf101)
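
To make the separable design described in the abstract concrete, here is a minimal sketch of how a spatial saliency mask and a soft temporal attention over frames could be combined on top of per-frame CNN features. This is only an illustration under assumed tensor shapes, not the authors' implementation: the module name, the 1x1-convolution saliency head, the linear frame-scoring head, and the feature dimensions are all hypothetical, and the coherence regularizers mentioned in the abstract are omitted.

```python
# Hypothetical sketch of separable spatial-temporal attention over
# per-frame CNN features of shape (batch, time, channels, height, width).
# Not the paper's implementation; names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeparableSpatialTemporalAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Spatial branch: predict a single-channel saliency mask per frame.
        self.spatial_conv = nn.Conv2d(channels, 1, kernel_size=1)
        # Temporal branch: score each frame from its attended descriptor.
        self.temporal_fc = nn.Linear(channels, 1)

    def forward(self, feats):                                # (B, T, C, H, W)
        B, T, C, H, W = feats.shape
        x = feats.view(B * T, C, H, W)

        # Spatial attention: softmax over the H*W locations of each frame.
        sal = self.spatial_conv(x).view(B * T, -1)           # (B*T, H*W)
        sal = F.softmax(sal, dim=-1).view(B * T, 1, H, W)
        frame_desc = (x * sal).sum(dim=(2, 3))               # (B*T, C)

        # Temporal attention: softmax over the T frames of each clip.
        frame_desc = frame_desc.view(B, T, C)
        scores = self.temporal_fc(frame_desc).squeeze(-1)    # (B, T)
        weights = F.softmax(scores, dim=-1)

        # Weighted sum over time gives one clip-level descriptor.
        return (frame_desc * weights.unsqueeze(-1)).sum(dim=1)   # (B, C)

# Example usage with assumed feature sizes (e.g. ResNet-style 2048-d maps):
block = SeparableSpatialTemporalAttention(channels=2048)
clip_descriptor = block(torch.randn(2, 16, 2048, 7, 7))      # -> (2, 2048)
```

The point of the separable factorization is that the spatial mask is computed per frame and the temporal weights per clip, so attention cost grows with T·H·W rather than requiring a joint attention over all space-time locations at once.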