Semantic Disentanglement Error: A Pluggable Mechanism for Balanced Contrastive Time-Series Representation

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Semantic Disentanglement Error, Contrastive learning, Time series
TL;DR: We introduce Semantic Disentanglement Error (SDE), a lightweight and pluggable mechanism that dynamically balances trend and seasonal representations in contrastive time-series learning.
Abstract: Contrastive learning has become a cornerstone of unsupervised time-series representation learning. Methods like CoST rely on dual-view encoders to capture semantic components such as trend and seasonality. However, we observe that under certain distributional regimes, dominant components (e.g., trend) often suppress minor ones (e.g., seasonality), leading to biased representations and degraded downstream performance. In this work, we propose a simple yet effective method to explicitly quantify and mitigate semantic imbalance during contrastive training. We introduce the Semantic Disentanglement Error (SDE), a directional measure of component recoverability, and integrate it into an adaptive weighting strategy for view-specific contrastive objectives. Our approach can be plugged into existing frameworks like CoST without architectural changes. Experiments on benchmark datasets demonstrate consistent gains in forecasting accuracy and representational robustness, especially under semantically skewed conditions.
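The abstract describes SDE as a per-component recoverability measure that drives an adaptive weighting of the trend and seasonal contrastive objectives. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch under assumed semantics: a higher SDE value means a component is harder to recover, and a softmax over SDE values (with a hypothetical `temperature` parameter) up-weights the suppressed component's loss term. The function names and the softmax choice are assumptions, not the authors' specification.

```python
import math

def adaptive_view_weights(sde_trend, sde_seasonal, temperature=1.0):
    """Hypothetical adaptive weighting: softmax over per-component SDE
    values, so the component with worse recoverability (higher SDE)
    receives a larger share of the contrastive loss weight."""
    exps = [math.exp(sde_trend / temperature),
            math.exp(sde_seasonal / temperature)]
    z = sum(exps)
    return [e / z for e in exps]

def combined_contrastive_loss(loss_trend, loss_seasonal, weights):
    """Weighted sum of the two view-specific contrastive objectives."""
    return weights[0] * loss_trend + weights[1] * loss_seasonal
```

For example, if the seasonal component currently has the higher SDE, `adaptive_view_weights` assigns it the larger weight, shifting gradient signal toward the under-represented view without any architectural change, which matches the pluggable design the abstract claims.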
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 10234