Keywords: Online machine learning, Test-time adaptation, Probabilistic parameter dynamics, Bayesian deep learning
Abstract: Pre-trained deep neural networks hold strong potential for cross-domain adaptability. However, this potential is often impeded in online machine learning (OML) settings, where the breakdown of the independent and identically distributed (i.i.d.) assumption leads to unstable adaptation. While recent advances in test-time adaptation (TTA) address aspects of this challenge, most existing methods focus exclusively on unsupervised objectives and overlook the risks posed by non-i.i.d. environments and the resulting dynamics of model parameters. In this work, we present a probabilistic framework that models the adaptation process with stochastic differential equations, enabling a principled analysis of parameter distribution dynamics over time. Within this framework, we find that the log-variance of the parameter transition distribution aligns closely with an inverse-gamma distribution under stable, high-performing adaptation. Motivated by this insight, we propose Structured Inverse-Gamma Model Alignment (SIGMA), a novel algorithm that dynamically regulates parameter evolution to preserve inverse-gamma alignment throughout adaptation. Extensive experiments across diverse models, datasets, and adaptation scenarios show that SIGMA consistently improves state-of-the-art TTA methods, highlighting the critical role of parameter dynamics in robust adaptation.
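To make the central diagnostic concrete, here is a minimal sketch (not the authors' SIGMA implementation) of how one might test the abstract's claim: record the parameter transition at each TTA step, take the log-variance of each transition, and measure how closely those values fit an inverse-gamma distribution. The `model`, `adapt_step`, and `stream` names are hypothetical placeholders; the fitting and goodness-of-fit calls use standard PyTorch and SciPy APIs.

```python
# Hypothetical sketch of the inverse-gamma alignment diagnostic described in
# the abstract; not the paper's SIGMA algorithm itself.
import numpy as np
import torch
from scipy import stats

def transition_log_variances(model, adapt_step, stream):
    """Log-variance of the parameter transition at each adaptation step."""
    log_vars = []
    prev = torch.nn.utils.parameters_to_vector(model.parameters()).detach().clone()
    for batch in stream:
        adapt_step(model, batch)  # one unsupervised TTA update (placeholder)
        curr = torch.nn.utils.parameters_to_vector(model.parameters()).detach()
        delta = curr - prev       # parameter transition at this step
        log_vars.append(torch.log(delta.var()).item())
        prev = curr.clone()
    return np.asarray(log_vars)

def inverse_gamma_alignment(log_vars):
    """Fit an inverse-gamma by MLE and report a KS goodness-of-fit statistic."""
    a, loc, scale = stats.invgamma.fit(log_vars)
    ks = stats.kstest(log_vars, "invgamma", args=(a, loc, scale))
    return (a, loc, scale), ks.statistic  # smaller statistic = closer alignment
```

Under the paper's hypothesis, a small KS statistic over a window of steps would indicate stable adaptation, while drift away from inverse-gamma alignment would signal the degenerate parameter dynamics that SIGMA is designed to regulate.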
Primary Area: General machine learning (supervised, unsupervised, online, active, etc.)
Submission Number: 16884