Keywords: State Space Models, Action Recognition
TL;DR: We propose a novel training method for video state space models that enables them to operate on videos of any length or resolution at test time with little degradation in performance.
Abstract: State space models (SSMs) have recently emerged as a competitive alternative to transformers in various linguistic and visual tasks. Their linear complexity and hidden-state recurrence make them particularly attractive for modeling long sequences, where attention becomes quadratically expensive. However, current training methods for video understanding are tailored to transformers and fail to fully leverage the unique attributes of SSMs. For example, video models are often trained at a fixed resolution and video length to balance the quadratic scaling of attention cost against performance. Consequently, these models suffer degraded performance when evaluated on videos with spatial and temporal resolutions unseen during training, a property we call spatio-temporal inflexibility. In the context of action recognition, this severely limits a model's ability to retain performance across both short- and long-form videos.
Therefore, we propose a flexible training method that leverages and improves the inherent adaptability of SSMs. Our method samples videos at varying temporal and spatial resolutions during training and dynamically interpolates model weights to accommodate any spatio-temporal scale. This instills our SSM, which we call {\sc StretchySnake}, with spatio-temporal flexibility and enables it to seamlessly handle videos ranging from short, fine-grained clips to long, complex activities.
We introduce and compare five variants of flexible training and identify the most effective strategy for video SSMs. On six action recognition benchmarks, {\sc StretchySnake} outperforms vanilla VideoMamba by up to 28\%, while simultaneously delivering $3\times$ speedups and a 90\% reduction in GFLOPs in low-resolution settings. On short-action (UCF-101, HMDB-51) and long-action (COIN, Breakfast) benchmarks, {\sc StretchySnake} outperforms transformer and SSM baselines alike, with strong adaptability to fine-grained actions (SSV2, Diving-48). Our method therefore provides a simple drop-in training recipe that makes video SSMs more robust, resolution-agnostic, and efficient across diverse action recognition scenarios.
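The core idea of the flexible training recipe, sampling a different spatio-temporal resolution per batch and interpolating resolution-dependent weights (e.g. positional embeddings) to match, can be sketched as follows. This is an illustrative sketch in PyTorch, not the paper's actual implementation; the function names, the trilinear-interpolation choice, and the candidate resolution sets are all assumptions.

```python
import torch
import torch.nn.functional as F

def interpolate_pos_embed(pos_embed, old_grid, new_grid):
    """Resize a flattened (T, H, W) positional-embedding grid to a new
    spatio-temporal resolution via trilinear interpolation.

    pos_embed: (1, T*H*W, D) tensor; old_grid / new_grid: (T, H, W) tuples.
    (Hypothetical helper; the paper's weight-interpolation scheme may differ.)
    """
    t0, h0, w0 = old_grid
    t1, h1, w1 = new_grid
    d = pos_embed.shape[-1]
    # (1, N, D) -> (1, D, T, H, W) so F.interpolate can resize all three axes
    grid = pos_embed.reshape(1, t0, h0, w0, d).permute(0, 4, 1, 2, 3)
    grid = F.interpolate(grid, size=(t1, h1, w1),
                         mode="trilinear", align_corners=False)
    # back to the flattened token layout expected by the model
    return grid.permute(0, 2, 3, 4, 1).reshape(1, t1 * h1 * w1, d)

def sample_resolution(frame_choices=(8, 16, 32),
                      size_choices=(112, 168, 224)):
    """Draw a random (num_frames, spatial_side) pair for the next batch.
    The candidate sets here are placeholders, not the paper's schedule."""
    t = frame_choices[torch.randint(len(frame_choices), (1,)).item()]
    s = size_choices[torch.randint(len(size_choices), (1,)).item()]
    return t, s
```

At each training step, a (frames, resolution) pair would be drawn with `sample_resolution`, the clip resized accordingly, and the positional embeddings stretched with `interpolate_pos_embed` before the forward pass, so the model never commits to a single fixed scale.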
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 2192