Temporal Gaussian Mixture Layer for Videos

27 Sept 2018 (modified: 22 Oct 2023) · ICLR 2019 Conference Blind Submission · Readers: Everyone
Abstract: We introduce a new convolutional layer named the Temporal Gaussian Mixture (TGM) layer and present how it can be used to efficiently capture longer-term temporal information in continuous activity videos. The TGM layer is a temporal convolutional layer governed by a much smaller set of parameters (e.g., locations/variances of Gaussians) that are fully differentiable. We present fully convolutional video models with multiple TGM layers for activity detection. Experiments on multiple datasets, including Charades and MultiTHUMOS, confirm the effectiveness of TGM layers, outperforming the state of the art.
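The idea described in the abstract, temporal kernels parameterized only by Gaussian centers and widths, soft-mixed into per-channel kernels, can be sketched in NumPy as below. The function names (`tgm_kernels`, `mix_kernels`, `tgm_conv`) and the per-channel "same"-mode convolution are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
import numpy as np

def tgm_kernels(centers, widths, length):
    """Build M normalized temporal Gaussian kernels of the given length.

    centers: (M,) Gaussian centers in [0, length-1] (learnable in the paper)
    widths:  (M,) Gaussian standard deviations (learnable)
    Returns (M, length); each kernel sums to 1 over the temporal axis.
    """
    t = np.arange(length, dtype=float)
    k = np.exp(-((t[None, :] - centers[:, None]) ** 2)
               / (2.0 * widths[:, None] ** 2))
    return k / k.sum(axis=1, keepdims=True)

def mix_kernels(mix_logits, kernels):
    """Convex combination of the M Gaussians per channel (softmax weights).

    mix_logits: (C, M) learnable mixing logits; kernels: (M, L).
    Returns (C, L) per-channel kernels, each still summing to 1.
    """
    w = np.exp(mix_logits - mix_logits.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)
    return w @ kernels

def tgm_conv(x, kernels):
    """Apply one temporal kernel per channel to features x of shape (C, T)."""
    out = np.empty_like(x)
    for c in range(x.shape[0]):
        out[c] = np.convolve(x[c], kernels[c], mode="same")
    return out
```

Because each mixed kernel is a convex combination of normalized Gaussians, the layer only learns O(M + C·M) parameters rather than a full C×L kernel bank, which is what lets it cover long temporal extents cheaply.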
Code: [piergiaj/tgm-icml19](https://github.com/piergiaj/tgm-icml19)
Data: [ActivityNet](https://paperswithcode.com/dataset/activitynet), [Charades](https://paperswithcode.com/dataset/charades), [Kinetics](https://paperswithcode.com/dataset/kinetics), [MultiTHUMOS](https://paperswithcode.com/dataset/multithumos), [THUMOS14](https://paperswithcode.com/dataset/thumos14-1)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1803.06316/code)