Toward Robust Feature Space in Long-Tailed Time Series Classification: A Multi-Scale Perspective

Submitted to ICLR 2026 on 17 Sept 2025 (modified: 11 Feb 2026) · CC BY 4.0
Keywords: Time series classification, Long-tailed recognition, Contrastive learning
Abstract: In recent years, time-series classification (TSC) has seen significant progress. Nevertheless, research on long-tailed TSC remains relatively limited. A key issue in long-tailed scenarios is that high inter-class similarity often leads models to learn overlapping features, making tail classes particularly difficult to distinguish. This phenomenon gives rise to three specific challenges: (1) Conventional approaches based on oversampling or uniform-intensity data augmentation may overfit or fail to learn robust features for tail classes. (2) Limited model representation capacity can lead to aligned temporal features across classes, further exacerbating class confusion. (3) Such class overlap makes it challenging to establish discriminative decision boundaries, particularly in highly imbalanced scenarios. To address these challenges, we propose TimeLT, a novel framework designed to learn a robust and discriminative feature space from long-tailed time-series data. First, we introduce a personalized augmentation strategy that generates tailored perturbations for scarce tail samples, preventing overfitting while increasing sample diversity. Second, we employ a multi-scale temporal encoder to capture patterns at different temporal resolutions, enabling the model to extract informative and discriminative features for both head and tail classes. Third, we propose a boundary-repelling regularization term that encourages embeddings to move closer to their respective class centroids while being repelled from inter-class boundaries, promoting compact and well-separated feature representations. To promote comprehensive research in this area, we consolidate a dedicated benchmark comprising several long-tailed datasets and over 16 advanced baselines. Extensive experiments across all datasets demonstrate that TimeLT significantly outperforms the strongest baselines, achieving accuracy improvements ranging from 0.55% to 12.27%.
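The abstract does not give the exact form of the boundary-repelling regularization; one plausible sketch, assuming a simple centroid-attraction term plus a hinge-style repulsion from the nearest other-class centroid (the function name, margin parameter, and squared-distance choice are illustrative assumptions, not the paper's stated formulation):

```python
import numpy as np

def boundary_repelling_loss(embeddings, labels, margin=1.0):
    """Illustrative sketch only: pull each embedding toward its own class
    centroid and penalize it (via a hinge) for sitting within `margin`
    squared distance of the nearest other-class centroid."""
    classes = np.unique(labels)
    # Per-class centroids in the embedding space
    centroids = {c: embeddings[labels == c].mean(axis=0) for c in classes}
    total = 0.0
    for x, y in zip(embeddings, labels):
        pull = np.sum((x - centroids[y]) ** 2)  # attraction to own centroid
        rival = min(np.sum((x - centroids[c]) ** 2) for c in classes if c != y)
        push = max(0.0, margin - rival)         # hinge repulsion from nearest rival
        total += pull + push
    return total / len(embeddings)
```

With two well-separated clusters the repulsion hinge is inactive and the loss reduces to the small intra-class pull term, matching the stated goal of compact, well-separated representations.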
Primary Area: learning on time series and dynamical systems
Submission Number: 9450