Keywords: Gradual Domain Adaptation, Optimal Transport, Gradient Flow, Semi-Dual Transport, Convex Optimization
TL;DR: We propose a novel semi-dual optimal transport perspective to improve the performance of flow-based gradual domain adaptation.
Abstract: Gradual domain adaptation (GDA) aims to mitigate domain shift by progressively adapting models from the source domain to the target domain via intermediate domains. However, real intermediate domains are often unavailable or ineffective, necessitating the synthesis of intermediate samples. Flow-based models have recently been used for this purpose by interpolating between source and target distributions, but their training typically resorts to sample-based log-likelihood estimation, which can discard useful information and thus degrade GDA performance. The key to addressing this limitation is to construct the intermediate domains directly from samples. To this end, we propose an $\underline{\text{E}}$ntropy-regularized $\underline{\text{S}}$emi-dual $\underline{\text{U}}$nbalanced $\underline{\text{O}}$ptimal $\underline{\text{T}}$ransport (E-SUOT) framework to construct intermediate domains. Specifically, we reformulate flow-based GDA as a Lagrangian dual problem and derive an equivalent objective that circumvents the need for likelihood estimation. However, the dual problem leads to an unstable min–max training procedure. To alleviate this issue, we further introduce entropy regularization, which converts it into a more stable alternating optimization procedure. Building on this, we propose a novel GDA training framework and provide theoretical analyses of its stability and generalization. Finally, extensive experiments demonstrate the efficacy of the E-SUOT framework.
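As background for the abstract's claim that entropy regularization turns an unstable min–max dual into a stable alternating optimization, the sketch below shows the standard entropy-regularized OT solved by Sinkhorn's alternating dual updates. This is generic textbook material, not the paper's E-SUOT algorithm (which is unbalanced and semi-dual); all function and variable names here are illustrative.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    """Entropy-regularized OT between histograms a and b with cost matrix C.

    Entropy regularization replaces the min-max OT dual with closed-form
    alternating updates of the scaling vectors u and v (Sinkhorn iteration).
    """
    K = np.exp(-C / eps)                 # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # update one dual scaling in closed form
        u = a / (K @ v)                  # then the other: no adversarial step
    return u[:, None] * K * v[None, :]   # transport plan P = diag(u) K diag(v)

# Toy example: two 3-point histograms on a line with absolute-difference cost.
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
P = sinkhorn(a, b, C)
```

The marginals of the returned plan `P` match `a` and `b` up to the convergence tolerance, illustrating why the alternating scheme is preferred over direct min–max training.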
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 18305