TAda! Temporally-Adaptive Convolutions for Video Understanding

ICLR 2022 Poster
  • Keywords: Video understanding, Action classification, Dynamic networks
  • Abstract: Spatial convolutions are widely used in numerous deep video models. They fundamentally assume spatio-temporal invariance, i.e., shared weights for every location in different frames. This work presents Temporally-Adaptive Convolutions (TAdaConv) for video understanding, which shows that adaptive weight calibration along the temporal dimension is an efficient way to facilitate modelling complex temporal dynamics in videos. Specifically, TAdaConv empowers spatial convolutions with temporal modelling abilities by calibrating the convolution weights for each frame according to its local and global temporal context (see the illustrative sketch after this list). Compared to previous temporal modelling operations, TAdaConv is more efficient, as it operates over the convolution kernels instead of the features, whose dimensions are an order of magnitude smaller than the spatial resolution of the features. Further, the kernel calibration increases the model capacity. We construct TAda2D and TAdaConvNeXt networks by replacing the 2D convolutions in ResNet and ConvNeXt with TAdaConv, which leads to performance on par with or better than state-of-the-art approaches on multiple video action recognition and localization benchmarks. We also demonstrate that, as a readily pluggable operation with negligible computation overhead, TAdaConv can effectively improve many existing video models by a convincing margin.
  • One-sentence Summary: A stand-alone temporal modelling module or a plug-in enhancement of the 1D/2D/3D convolutions used in video models for better and more efficient temporal modelling.
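The core mechanism described in the abstract, scaling a shared base kernel with per-frame calibration factors derived from temporal context, can be illustrated with a minimal PyTorch sketch. This is not the paper's exact design: the calibration branch below (spatial average pooling followed by two temporal 1D convolutions, applied as a residual around identity) and all class and parameter names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TAdaConvSketch(nn.Module):
    """Illustrative sketch: per-frame calibration factors scale a shared
    2D base kernel, giving each frame its own convolution weights."""

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.k = kernel_size
        self.out_channels = out_channels
        # shared base kernel, as in a plain 2D convolution
        self.base_weight = nn.Parameter(
            torch.empty(out_channels, in_channels, kernel_size, kernel_size))
        nn.init.kaiming_normal_(self.base_weight)
        # hypothetical calibration branch: 1D convolutions over the
        # temporal axis of per-frame descriptors (local temporal context)
        self.calib = nn.Sequential(
            nn.Conv1d(in_channels, in_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv1d(in_channels, in_channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # x: (N, C, T, H, W)
        n, c, t, h, w = x.shape
        # per-frame descriptor via spatial global average pooling: (N, C, T)
        desc = x.mean(dim=(3, 4))
        # per-frame, per-channel calibration factors around identity: (N, T, C)
        alpha = 1.0 + self.calib(desc).permute(0, 2, 1)
        # calibrate the shared kernel for every (sample, frame) pair:
        # shape (N*T, out_channels, C, k, k)
        weight = self.base_weight.unsqueeze(0) * alpha.reshape(n * t, 1, c, 1, 1)
        # fold (sample, frame) into the group dimension of one conv2d call
        x = x.permute(0, 2, 1, 3, 4).reshape(1, n * t * c, h, w)
        out = F.conv2d(x, weight.reshape(-1, c, self.k, self.k),
                       padding=self.k // 2, groups=n * t)
        return out.reshape(n, t, self.out_channels, h, w).permute(0, 2, 1, 3, 4)
```

For example, `TAdaConvSketch(64, 64)(torch.randn(2, 64, 8, 56, 56))` returns a tensor of shape `(2, 64, 8, 56, 56)`. Because the calibration acts on the kernels rather than the feature maps, its extra cost is roughly that of the 1D convolutions over C-by-T descriptors, which is consistent with the abstract's efficiency claim.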