Abstract: Endowing deep models with the ability to generalize in dynamic scenarios is of vital significance for real-world deployment, given the continuous and complex changes in data distribution. Recently, evolving domain generalization (EDG) has emerged to address distribution shifts over time, aiming to capture evolving patterns for improved model generalization. However, existing EDG methods may suffer from spurious correlations by modeling only the dependence between data and targets across domains, creating a shortcut between task-irrelevant factors and the target that hinders generalization. To this end, we design a time-aware structural causal model (SCM) that incorporates dynamic causal factors and causal mechanism drifts, and propose **S**tatic-D**YN**amic **C**ausal Representation Learning (**SYNC**), an approach that effectively learns time-aware causal representations. Specifically, it integrates specially designed information-theoretic objectives into a sequential VAE framework that captures evolving patterns, and produces the desired representations by preserving intra-class compactness of causal factors both across and within domains. Moreover, we theoretically show that our method can yield the optimal causal predictor for each time domain. Results on both synthetic and real-world datasets show that SYNC achieves superior temporal generalization performance.
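To make the architectural idea in the abstract concrete, below is a minimal, hypothetical sketch of a sequential VAE that separates static and dynamic latent factors and adds an intra-class compactness penalty on the resulting representation. All module names, dimensions, and loss definitions are illustrative assumptions and are not taken from the paper; the actual SYNC objectives are information-theoretic and more involved.

```python
# Hypothetical sketch (not the paper's implementation): a sequential VAE whose
# latents are split into a static factor (shared across time domains) and a
# dynamic factor (one per domain), plus a simple intra-class compactness loss.
import torch
import torch.nn as nn

class SeqVAE(nn.Module):
    def __init__(self, x_dim=64, s_dim=16, d_dim=16, h_dim=128):
        super().__init__()
        self.enc = nn.GRU(x_dim, h_dim, batch_first=True)       # encodes the sequence of time domains
        self.to_static = nn.Linear(h_dim, 2 * s_dim)             # mean / log-variance of static factor
        self.to_dynamic = nn.Linear(h_dim, 2 * d_dim)            # mean / log-variance of per-domain dynamic factor
        self.dec = nn.Sequential(nn.Linear(s_dim + d_dim, h_dim),
                                 nn.ReLU(), nn.Linear(h_dim, x_dim))

    @staticmethod
    def reparam(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, x):                                         # x: (B, T, x_dim), T time domains
        h, _ = self.enc(x)
        s = self.reparam(self.to_static(h[:, -1]))                # one static factor per sequence
        d = self.reparam(self.to_dynamic(h))                      # one dynamic factor per domain
        z = torch.cat([s.unsqueeze(1).expand(-1, x.size(1), -1), d], dim=-1)
        return self.dec(z), z

def compactness_loss(z, y):
    """Pull same-class representations toward their class mean; a stand-in
    for the intra-class compactness objective described in the abstract."""
    loss = 0.0
    for c in y.unique():
        zc = z[y == c]
        loss = loss + ((zc - zc.mean(0, keepdim=True)) ** 2).mean()
    return loss / y.unique().numel()
```

In this sketch, `compactness_loss` would be applied to the causal representation `z` (with labels broadcast over the time dimension) alongside the usual reconstruction and KL terms of the VAE; how the terms are weighted and which information-theoretic objectives are used is left to the paper.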
Lay Summary: Machine learning models often fail in the real world because the data distribution they rely on keeps changing, for example due to shifting environments or behaviors. This setting is known as evolving domain generalization (EDG). We find that many existing EDG methods fall into the trap of learning shortcuts: misleading patterns that seem useful but break under changing conditions, leading to poor generalization in dynamic environments.
To fix this, we introduce a new causal model that describes how the data distribution evolves over time in dynamic situations. On this basis, we develop a static-dynamic causal representation learning method that teaches models to recognize both stable and changing causes, so that they focus on what truly causes outcomes rather than on what merely appears correlated.
By filtering out misleading patterns and highlighting what really matters, our method significantly improves a model’s ability to generalize across time. This research brings us closer to building AI systems that remain reliable and effective even as the world around them changes.
Primary Area: General Machine Learning->Transfer, Multitask and Meta-learning
Keywords: Evolving Domain Generalization, Causal Representation Learning
Submission Number: 2150