Self-Supervised Learning of Disentangled Representations for Multivariate Time-Series

Published: 13 Oct 2024, Last Modified: 02 Dec 2024
Venue: NeurIPS 2024 Workshop SSL
License: CC BY 4.0
Keywords: Representation Learning, Multivariate Time-Series, Self-Supervised Learning, Time-Series Forecasting, Time-Series Classification
TL;DR: We introduce TimeDRL, a multivariate time-series framework that tackles the anisotropy problem with disentangled dual-level embeddings, outperforming existing methods in forecasting and classification while avoiding inductive biases.
Abstract: Multivariate time-series data in fields like healthcare and industry are informative but challenging to model due to high dimensionality and scarce labels. Recent self-supervised learning methods excel at learning rich representations without labels, but they struggle to produce disentangled embeddings and suffer from inductive biases such as transformation-invariance. To address these challenges, we introduce TimeDRL, a framework for multivariate time-series representation learning with dual-level disentangled embeddings. TimeDRL features: (i) disentangled timestamp-level and instance-level embeddings using a [CLS] token strategy; (ii) timestamp-predictive and instance-contrastive tasks for representation learning; and (iii) avoidance of augmentation methods to eliminate inductive biases. Experiments on forecasting and classification datasets show TimeDRL outperforms existing methods, with further validation in semi-supervised settings with limited labeled data.
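The [CLS] token strategy described in point (i) can be illustrated with a minimal sketch. This is not the paper's implementation: the dimensions, the linear projection, and the `tanh` stand-in for the encoder are all hypothetical, chosen only to show how prepending a learnable token lets one forward pass yield both an instance-level embedding (position 0) and timestamp-level embeddings (the remaining positions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper):
# T timestamps, C channels, D embedding dimension.
T, C, D = 96, 7, 16

def embed_dual_level(series, cls_token, proj):
    """Sketch of a dual-level [CLS] strategy: prepend a learnable
    [CLS] token to the projected timestamp sequence; after encoding,
    position 0 gives the instance-level embedding and positions
    1..T give the timestamp-level embeddings."""
    tokens = series @ proj                # (T, D) timestamp tokens
    seq = np.vstack([cls_token, tokens])  # (T+1, D), [CLS] first
    encoded = np.tanh(seq)                # stand-in for a Transformer encoder
    instance_emb = encoded[0]             # (D,) instance-level embedding
    timestamp_emb = encoded[1:]           # (T, D) timestamp-level embeddings
    return instance_emb, timestamp_emb

series = rng.standard_normal((T, C))      # one multivariate series
cls_token = rng.standard_normal((1, D))   # learnable [CLS] vector
proj = rng.standard_normal((C, D))        # channel-to-embedding projection
inst, ts = embed_dual_level(series, cls_token, proj)
print(inst.shape, ts.shape)  # (16,) (96, 16)
```

The disentanglement follows from the split itself: the instance-contrastive task would operate only on `inst`, while the timestamp-predictive task would operate only on `ts`, so the two objectives never pull on the same output positions.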
Submission Number: 1