AutoTSAugment: Model-Agnostic Automated Data Augmentation for Unsupervised Contrastive-based Time Series Representation Learning

11 Sept 2025 (modified: 21 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Time Series, Representation Learning, Data Augmentation, Contrastive Learning, Automated Machine Learning (AutoML), Unsupervised Learning
TL;DR: A framework for automated selection of augmentation methods for unsupervised time series representation learning, evaluated on a total of 164 widely used datasets.
Abstract: Contrastive-based time series representation learning methods excel at extracting high-quality representations from raw time series data. In unsupervised settings, the performance of these methods depends on the data augmentation strategy used to generate similar and dissimilar samples. Configuring an augmentation strategy composed of various transformations is a time-consuming, trial-and-error process that requires extensive domain knowledge. In this work, we propose Automated Time Series Augmentation (AutoTSAugment), a modular, model-agnostic time series augmentation framework for generating augmented time series that can be used within any contrastive-based time series representation learning method, and we define a novel search objective for evaluating the quality of augmentations in unsupervised settings. The proposed framework is designed as an unsupervised AutoML framework composed of a search space containing a diverse range of augmentation methods and a search strategy that automatically navigates the sampling of these methods. We evaluated whether this model-agnostic framework can replace the augmentation strategies in existing contrastive learning methods using three baseline time series representation learning methods and three search strategies (random sampling, random search, and Bayesian optimisation) on the downstream tasks of univariate and multivariate time series classification and forecasting across 164 datasets. Our empirical results demonstrate that methods trained within the AutoTSAugment framework achieve similar, or better, results than those obtained using manually tailored augmentation methods, eliminating the need for labour-intensive manual experimentation with augmentations.
The results also demonstrate that employing random sampling within this framework achieves results similar to those of dedicated search algorithms while having up to 63.25\% faster training times on average compared with the other search strategies. Furthermore, we studied the effect of an adaptive search space, which recommends augmentation methods based on dataset characteristics, and found results equivalent to those of a fixed search space on downstream tasks while being up to 58.21\% faster.
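The random-sampling strategy described above can be illustrated with a minimal sketch: draw a few transformations at random from a search space of augmentation operators and compose them to produce two "positive" views of the same series for contrastive training. The operator names, parameters, and composition logic below are illustrative assumptions, not the paper's actual search space or objective.

```python
import numpy as np

# Illustrative search space of time series augmentations
# (hypothetical operators, not the paper's actual operator set).
def jitter(x, sigma=0.03):
    # Add small Gaussian noise to every time step.
    return x + np.random.normal(0.0, sigma, size=x.shape)

def scaling(x, sigma=0.1):
    # Multiply the whole series by a random scalar around 1.
    return x * np.random.normal(1.0, sigma)

def time_shift(x, max_shift=5):
    # Circularly shift the series by a random offset.
    return np.roll(x, np.random.randint(-max_shift, max_shift + 1))

SEARCH_SPACE = [jitter, scaling, time_shift]

def random_sampling_augment(x, n_ops=2, rng=None):
    # Simplest search strategy from the abstract: uniformly sample
    # n_ops distinct augmentations and apply them in sequence.
    rng = rng or np.random.default_rng()
    chosen = rng.choice(len(SEARCH_SPACE), size=n_ops, replace=False)
    for i in chosen:
        x = SEARCH_SPACE[i](x)
    return x

# Two independently augmented views of one series, as a contrastive
# method would use for a positive pair.
series = np.sin(np.linspace(0, 2 * np.pi, 100))
view_a = random_sampling_augment(series)
view_b = random_sampling_augment(series)
```

A dedicated search strategy (e.g. Bayesian optimisation) would instead score candidate operator subsets with the unsupervised search objective and iteratively propose better ones; random sampling skips that scoring loop, which is where its reported training-time advantage comes from.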
Primary Area: learning on time series and dynamical systems
Submission Number: 4060