Temporal Alignment Prediction for Supervised Representation Learning and Few-Shot Sequence Classification

Published: 28 Jan 2022, Last Modified: 13 Feb 2023, ICLR 2022 Poster
Keywords: Temporal Alignment, Supervised Representation Learning, Few-shot Action Recognition, Alignment Prediction, Sequence Classification
Abstract: Explainable distances for sequence data rely on temporal alignment to handle sequences with different lengths and local temporal variations. Most sequence alignment methods infer the optimal alignment by solving an optimization problem under pre-defined feasible alignment constraints, which is not only time-consuming but also makes end-to-end sequence learning intractable. In this paper, we propose a learnable sequence distance called Temporal Alignment Prediction (TAP). TAP employs a lightweight convolutional neural network to directly predict the optimal alignment between two sequences, so that inference requires only straightforward feed-forward computation and involves no optimization. TAP can be applied to different distance-based machine learning tasks. For supervised sequence representation learning, we show that TAP trained with various metric learning losses achieves competitive performance with much faster inference. For few-shot action classification, we apply TAP as the distance measure in the metric learning-based episode-training paradigm. This simple strategy achieves results comparable to state-of-the-art few-shot action recognition methods.
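To make the mechanism concrete, below is a minimal sketch of an alignment-prediction distance of this kind. It is an illustration under assumptions, not the paper's exact architecture: the network depth, the row-wise softmax normalization of the predicted alignment, and the cosine-based cost matrix are placeholders chosen for brevity.

# Hypothetical sketch of an alignment-prediction distance (not the authors' exact TAP design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentPredictionDistance(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        # Lightweight CNN operating on the pairwise cost matrix (single input channel).
        self.net = nn.Sequential(
            nn.Conv2d(1, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=3, padding=1),
        )

    def forward(self, x, y):
        # x: (m, d) frame features of sequence A; y: (n, d) frame features of sequence B.
        # In practice these would come from a backbone trained jointly with this module.
        x = F.normalize(x, dim=-1)
        y = F.normalize(y, dim=-1)
        cost = 1.0 - x @ y.t()                      # (m, n) pairwise cosine cost
        logits = self.net(cost[None, None])[0, 0]   # CNN predicts alignment scores directly
        align = F.softmax(logits, dim=1)            # each frame of A distributes weight over B
        return (align * cost).sum()                 # alignment-weighted distance, no optimization

# Usage: a differentiable distance between variable-length sequences,
# suitable as the distance in metric-learning or episode-training losses.
dist = AlignmentPredictionDistance()
seq_a, seq_b = torch.randn(20, 128), torch.randn(35, 128)
print(dist(seq_a, seq_b))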
One-sentence Summary: We propose a learnable sequence distance by predicting the temporal alignment and show its application in supervised representation learning for sequence data and few-shot action recognition.