Abstract: Temporal action detection aims to correctly predict the categories and temporal intervals of actions in an untrimmed video using only video-level labels, a basic but challenging task in video understanding. Inspired by Sparse R-CNN object detection, we present a purely sparse method for temporal action detection. In our method, a fixed sparse set of $N$ learnable temporal proposals (e.g. $N = 50$) is provided to a dynamic action interaction head to perform classification and localization. This sparse design completely avoids all effort related to temporal candidate design and many-to-one label assignment. More importantly, final predictions are output directly, without non-maximum suppression post-processing. Extensive experiments show that our method achieves state-of-the-art performance for both action proposal and localization on the THUMOS14 detection benchmark, and competitive performance on the ActivityNet-1.3 challenge.
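The core idea — a fixed sparse set of learnable temporal proposals mapped directly to classified, localized predictions with no NMS — can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: the feature dimension, class count, and the `dynamic_head` function are hypothetical placeholders for the dynamic action interaction head.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50            # number of learnable temporal proposals (from the abstract)
num_classes = 20  # hypothetical; THUMOS14 has 20 action classes
feat_dim = 128    # hypothetical per-proposal feature dimension

# A fixed sparse set of learnable proposals: each is a normalized
# (center, length) pair over the untrimmed video's duration.
proposals = rng.uniform(0.1, 0.9, size=(N, 2))

def dynamic_head(proposal_feats, proposals):
    """Hypothetical stand-in for the dynamic action interaction head:
    each proposal's pooled feature is mapped to a class-score vector
    and a small refinement of its (center, length) interval."""
    W_cls = rng.normal(size=(proposal_feats.shape[-1], num_classes))
    W_reg = rng.normal(scale=0.01, size=(proposal_feats.shape[-1], 2))
    scores = proposal_feats @ W_cls   # (N, num_classes)
    deltas = proposal_feats @ W_reg   # (N, 2) interval refinements
    return scores, proposals + deltas

# One feature vector per proposal (e.g. pooled from the video clip
# covered by that proposal).
feats = rng.normal(size=(N, feat_dim))
scores, refined = dynamic_head(feats, proposals)

# Predictions are read out directly, one per proposal: no dense
# candidate grid, no many-to-one label assignment, and no NMS.
labels = scores.argmax(axis=1)
```

In a trained model the proposals and head weights are learned end-to-end; here random weights merely illustrate the data flow from $N$ sparse proposals to $N$ final predictions.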