Keywords: Evolving Domain Generalization, Hamiltonian Monte Carlo, Variational Autoencoder
TL;DR: We propose a fully Bayesian framework that parameterizes a latent structure-aware autoencoder to capture static features, distribution drift, and categorical shifts, leveraging Hamiltonian Monte Carlo to approximate the posterior over latent variables.
Abstract: Evolving Domain Generalization (EDG) addresses learning scenarios in which the data distribution evolves over time, a setting crucial for real-world applications under varying environmental conditions. Recently, structure-aware variational models have shown promise by disentangling static and variant information, but their reliance on point estimates for model parameters neglects parameter uncertainty, limiting both adaptability and reliability. We propose BayesShift, a fully Bayesian framework that parameterizes a latent structure-aware autoencoder to capture static features, distribution drift, and categorical shifts. Unlike standard variational inference, our method leverages Hamiltonian Monte Carlo (HMC) to approximate the posterior over latent variables, enabling principled uncertainty quantification. This not only improves robustness to evolving distributions but also provides confidence estimates for predictions, a critical property in safety-sensitive domains. Experiments on benchmark datasets show that BayesShift outperforms state-of-the-art baselines in both predictive accuracy and adaptability. These results highlight the effectiveness of Bayesian inference for evolving domain generalization.
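To make the HMC step concrete, below is a minimal, self-contained sketch of sampling a latent code with Hamiltonian Monte Carlo. It is not the authors' implementation: the target `log_post` is a toy isotropic Gaussian standing in for the model's latent posterior, and the names (`hmc_step`, `grad_log_post`, the step size and leapfrog count) are illustrative choices. In BayesShift the log-posterior and its gradient would instead come from the structure-aware autoencoder via automatic differentiation.

```python
# Minimal HMC sketch over a latent z, assuming a differentiable log p(z | x).
# Toy target: standard Gaussian. All names and hyperparameters are illustrative.
import numpy as np

def log_post(z):
    # Log-density of an isotropic standard Gaussian (up to an additive constant).
    return -0.5 * np.sum(z ** 2)

def grad_log_post(z):
    # Gradient of the toy log-posterior; a real model would use autodiff.
    return -z

def hmc_step(z, rng, step_size=0.1, n_leapfrog=20):
    """One HMC transition: resample momentum, simulate Hamiltonian dynamics
    with the leapfrog integrator, then Metropolis-accept the proposal."""
    p = rng.standard_normal(z.shape)                  # resample momentum
    z_new, p_new = z.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_post(z_new)   # half step for momentum
    for _ in range(n_leapfrog - 1):
        z_new += step_size * p_new                    # full step for position
        p_new += step_size * grad_log_post(z_new)     # full step for momentum
    z_new += step_size * p_new                        # final position step
    p_new += 0.5 * step_size * grad_log_post(z_new)   # final half momentum step
    # Metropolis correction based on the change in total energy H = U + K.
    h_old = -log_post(z) + 0.5 * np.sum(p ** 2)
    h_new = -log_post(z_new) + 0.5 * np.sum(p_new ** 2)
    if np.log(rng.uniform()) < h_old - h_new:
        return z_new
    return z

# Draw posterior samples for a 2-D latent code.
rng = np.random.default_rng(0)
z = np.zeros(2)
samples = []
for _ in range(1000):
    z = hmc_step(z, rng)
    samples.append(z)
print(np.mean(samples, axis=0), np.std(samples, axis=0))  # ~0 mean, ~1 std
```

The spread of the resulting samples is what supplies the confidence estimates mentioned above: rather than a single point estimate of the latent code, downstream predictions can be averaged over the posterior samples.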
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 24922