Generalizing across non-stationary series via learning dynamic causal factors

Published: 01 Jan 2026, Last Modified: 24 Jul 2025 · Pattern Recognit. 2026 · CC BY-SA 4.0
Abstract: Learning domain-invariant representations is a crucial task for achieving out-of-distribution generalization. Recent efforts have begun to incorporate causality into this process, aiming to identify and understand the causal factors relevant to various tasks. However, simply extending existing generalization methods to non-stationary time series data may prove ineffective, because these methods fail to adequately model the underlying causal factors when temporal domain shifts compound the usual source domain shifts. In this paper, we examine the challenges posed by both source and temporal shifts through a causal lens in the context of generalizing over non-stationary time series data. We introduce a novel model, the Dynamic Causal Sequential Variational Auto-Encoder (DCSVAE), designed to learn dynamic causal factors. By disentangling the representation of non-stationary time series data into dynamic causal, dynamic non-causal, and static non-causal factors, our model facilitates temporal generalization. To enhance disentanglement, we impose two mutual-information-based constraints on the latent variables. Theoretical guarantees rooted in information theory validate the feasibility of our approach. Experiments on both synthetic and real-world datasets demonstrate that the proposed model outperforms state-of-the-art benchmarks on time series domain generalization tasks.
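To make the factorization described in the abstract concrete, the following is a minimal, illustrative PyTorch sketch of a sequential VAE whose latent space is split into dynamic causal, dynamic non-causal, and static non-causal parts, with the label predicted only from the dynamic causal factors. All module names, layer sizes, and design choices here are assumptions for illustration, not the authors' DCSVAE implementation; the paper's mutual-information constraints and ELBO terms are only indicated in comments.

```python
import torch
import torch.nn as nn


class DCSVAESketch(nn.Module):
    """Hypothetical sketch: a sequential VAE with three latent groups.

    z_c,t : dynamic causal factors (per time step, used for prediction)
    z_n,t : dynamic non-causal factors (per time step)
    s     : static non-causal factors (one per sequence)
    """

    def __init__(self, x_dim=16, h_dim=64, zc_dim=8, zn_dim=8, s_dim=8, num_classes=5):
        super().__init__()
        self.gru = nn.GRU(x_dim, h_dim, batch_first=True)       # shared sequence encoder
        self.zc_head = nn.Linear(h_dim, 2 * zc_dim)             # mean / log-variance of z_c,t
        self.zn_head = nn.Linear(h_dim, 2 * zn_dim)             # mean / log-variance of z_n,t
        self.s_head = nn.Linear(h_dim, 2 * s_dim)                # sequence-level static factor
        self.decoder = nn.Sequential(                            # reconstructs x_t from all factors
            nn.Linear(zc_dim + zn_dim + s_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )
        self.classifier = nn.Linear(zc_dim, num_classes)         # label depends on causal factors only

    @staticmethod
    def reparameterize(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):                                        # x: (batch, time, x_dim)
        h, _ = self.gru(x)                                       # (batch, time, h_dim)
        z_c = self.reparameterize(self.zc_head(h))               # dynamic causal
        z_n = self.reparameterize(self.zn_head(h))               # dynamic non-causal
        s = self.reparameterize(self.s_head(h[:, -1]))           # static non-causal, from last state
        s_rep = s.unsqueeze(1).expand(-1, x.size(1), -1)
        x_rec = self.decoder(torch.cat([z_c, z_n, s_rep], dim=-1))
        logits = self.classifier(z_c.mean(dim=1))
        # A full objective would add reconstruction and KL terms plus the two
        # mutual-information constraints between latent groups; omitted here.
        return x_rec, logits


if __name__ == "__main__":
    model = DCSVAESketch()
    x = torch.randn(4, 20, 16)                                   # 4 sequences, 20 steps, 16 features
    x_rec, logits = model(x)
    print(x_rec.shape, logits.shape)                             # torch.Size([4, 20, 16]) torch.Size([4, 5])
```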