Searching for Better Spatio-temporal Alignment in Few-Shot Action Recognition

Published: 31 Oct 2022, 18:00, Last Modified: 06 Oct 2022, 10:11, NeurIPS 2022 Accept, Readers: Everyone
Keywords: Few-Shot Action Recognition, Temporal Alignment, Neural Architecture Search
TL;DR: This paper introduces a few-shot action recognition method built on neural architecture search, combining a Transformer space-shrinking strategy with spatio-temporal prototype alignment.
Abstract: Spatio-temporal feature matching and alignment are essential for few-shot action recognition, as they determine the coherence and effectiveness of the temporal patterns. Nevertheless, this process can be unreliable, especially in complex video scenarios. In this paper, we propose to improve matching and alignment through end-to-end model design. Our solution is two-fold. First, we enhance the spatio-temporal representations extracted from few-shot videos from the architecture perspective: we propose a specialized Transformer search method for videos, so that spatial and temporal attention can be well organized and optimized for stronger feature representations. Second, we design an efficient non-parametric spatio-temporal prototype alignment strategy to better handle high motion variability. In particular, a query-specific class prototype is generated for each query sample and category, which better matches query sequences against all support sequences. As a result, our method SST achieves significant gains on the UCF101 and HMDB51 benchmarks. For example, with no pretraining, our method improves Top-1 accuracy by 17.1\% over the baseline TRX on the UCF101 5-way 1-shot setting, while using 3x fewer FLOPs.
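The query-specific prototype idea in the abstract can be illustrated with a minimal sketch. This is not the paper's exact alignment strategy (which operates on spatio-temporal token sequences inside the searched Transformer); it only shows the general non-parametric mechanism assumed here: weight the support sequences of a class by their similarity to the query, so each query gets its own class prototype. All feature shapes and the cosine-softmax weighting are illustrative assumptions.

```python
import numpy as np

def query_specific_prototype(query, support):
    """Build a prototype for one class, conditioned on one query.

    query:   (T, D) frame features of the query video (illustrative shapes).
    support: (K, T, D) frame features of the K support videos of the class.
    Returns a (T, D) query-specific class prototype.
    """
    # Mean-pool frames to get one descriptor per video.
    q = query.mean(axis=0)                       # (D,)
    s = support.mean(axis=1)                     # (K, D)
    # Cosine similarity of the query to each support shot.
    sims = s @ q / (np.linalg.norm(s, axis=1) * np.linalg.norm(q) + 1e-8)
    # Softmax over shots: more similar shots contribute more (no learned params).
    w = np.exp(sims - sims.max())
    w = w / w.sum()                              # (K,)
    # Prototype is the similarity-weighted combination of support sequences.
    return (w[:, None, None] * support).sum(axis=0)  # (T, D)

rng = np.random.default_rng(0)
proto = query_specific_prototype(rng.normal(size=(8, 64)),
                                 rng.normal(size=(5, 8, 64)))
print(proto.shape)  # (8, 64)
```

Because the weights depend on the query, two different queries matched against the same 5-shot support set generally receive two different prototypes, which is the property the abstract highlights.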
Supplementary Material: pdf
