Video Action Segmentation with Hybrid Temporal Networks

15 Feb 2018 (modified: 10 Feb 2022) · ICLR 2018 Conference Blind Submission
Abstract: Action segmentation, a milestone towards building automatic systems that understand untrimmed videos, has received considerable attention in recent years. It is typically modeled as a sequence labeling problem, but it differs intrinsically from text parsing or speech processing. In this paper, we introduce a novel hybrid temporal convolutional and recurrent network (TricorNet) with an encoder-decoder architecture: the encoder is a hierarchy of temporal convolutional kernels that capture the local motion changes of different actions; the decoder is a hierarchy of recurrent neural networks that learn and memorize long-term action dependencies after the encoding stage. Our model is simple yet highly effective for video sequence labeling. Experimental results on three public action segmentation datasets show that the proposed model achieves superior performance over the state of the art.
TL;DR: We propose a new hybrid temporal network that achieves state-of-the-art performance on video action segmentation on three public datasets.
Keywords: action segmentation, video labeling, temporal networks
Data: [GTEA](https://paperswithcode.com/dataset/gtea), [JIGSAWS](https://paperswithcode.com/dataset/jigsaws)
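
The abstract outlines the encoder-decoder design: a temporal convolutional hierarchy that downsamples frame features, followed by a recurrent hierarchy that models long-term dependencies and restores per-frame predictions. Below is a minimal sketch of such a hybrid temporal network in PyTorch; the class name, layer sizes, kernel widths, and pooling factors are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of a TricorNet-style hybrid temporal network (illustrative;
# hyperparameters here are assumptions, not the paper's reported settings).
import torch
import torch.nn as nn

class TricorNetSketch(nn.Module):
    def __init__(self, in_dim, num_classes, hidden=64, levels=2):
        super().__init__()
        # Encoder: a hierarchy of temporal convolutions; each level halves
        # the temporal resolution to capture local motion changes.
        enc, dim = [], in_dim
        for _ in range(levels):
            enc += [nn.Conv1d(dim, hidden, kernel_size=25, padding=12),
                    nn.ReLU(),
                    nn.MaxPool1d(2)]
            dim = hidden
        self.encoder = nn.Sequential(*enc)
        # Decoder: a hierarchy of bi-directional LSTMs that learn long-term
        # action dependencies on the encoded sequence.
        self.dec_rnns = nn.ModuleList(
            [nn.LSTM(hidden if i == 0 else 2 * hidden, hidden,
                     batch_first=True, bidirectional=True)
             for i in range(levels)])
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):            # x: (batch, time, in_dim) frame features
        h = self.encoder(x.transpose(1, 2))       # -> (batch, hidden, T/4)
        h = h.transpose(1, 2)                     # -> (batch, T/4, hidden)
        for rnn in self.dec_rnns:
            h, _ = rnn(h)                         # long-term dependencies
            h = h.repeat_interleave(2, dim=1)     # upsample time by 2
        return self.classifier(h)                 # per-frame class logits

# Usage: 32 frames of 128-d features -> per-frame action logits.
model = TricorNetSketch(in_dim=128, num_classes=11)
logits = model(torch.randn(1, 32, 128))           # shape: (1, 32, 11)
```

One design note on this sketch: the upsampling between recurrent levels undoes the halving applied by each encoder pooling stage, so the classifier emits exactly one action label per input frame, matching the sequence-labeling formulation.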