Abstract: As an important learning paradigm for signal processing and pattern recognition, unsupervised domain adaptation (UDA), which deals with the learning bias induced by changing data environments (i.e., domains), has achieved great success in real-world applications. Mainstream UDA methods commonly optimize the discrepancy between the two data domains directly, which ignores the hidden information in the latent intermediate domains and cannot sufficiently learn domain-invariant knowledge; moreover, these methods usually fail when the domain gap is large. In this work, a novel learning principle called Wasserstein invariant risk (WIR) is developed to reduce the bias gradually. The core idea is to recover the latent domains along the Wasserstein geodesic with local structure preservation, and to map them explicitly into an invariant space via barycenter mapping. Intuitively, the Wasserstein geodesic captures the non-Euclidean structure of the latent domains. Methodologically, barycenter mapping along the geodesic can: 1) reduce the data discrepancy gradually and exploit the hidden information; 2) ensure the consistency of risk estimation across domains; 3) endow the learning model with invariance to changing environments. Extensive experiments on visual UDA classification show the superiority of WIR over state-of-the-art methods.
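The idea of recovering intermediate domains along a Wasserstein geodesic can be illustrated with a minimal sketch. This is not the authors' WIR implementation; it assumes equal-size source and target samples with uniform weights, so the optimal transport coupling reduces to an assignment problem, and intermediate domains are obtained by displacement (McCann) interpolation between each source point and its transported image:

```python
# Hedged sketch (assumed setup, not the paper's method): recover intermediate
# domains along the Wasserstein geodesic between two empirical distributions
# via the optimal assignment and displacement interpolation.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(64, 2))   # source-domain samples (toy data)
Xt = rng.normal(3.0, 1.0, size=(64, 2))   # target-domain samples (toy data)

# Squared-Euclidean cost matrix; with uniform weights and equal sample sizes,
# solving the assignment problem yields the optimal (Monge) coupling.
C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)
row, col = linear_sum_assignment(C)
T_Xs = Xt[col]                            # barycentric image of each source point

def intermediate_domain(t):
    """Point cloud at position t in [0, 1] along the Wasserstein geodesic."""
    return (1.0 - t) * Xs + t * T_Xs

# The domain mean drifts gradually from the source toward the target,
# mimicking the gradual bias reduction described above.
for t in (0.25, 0.5, 0.75):
    print(t, intermediate_domain(t).mean(0).round(2))
```

In the actual method, the mapping would additionally preserve local structure and feed an invariant-risk objective; this sketch only shows the geodesic interpolation step.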