Model Adaptation through Hypothesis Transfer with Gradual Knowledge Distillation

Published: 01 Jan 2021, Last Modified: 03 May 2025 · IROS 2021 · CC BY-SA 4.0
Abstract: The ability to adapt perception to changing environments is a core capability of intelligent robots. At present, Unsupervised Domain Adaptation (UDA) methods address this problem by formulating the adaptation task as a transfer from a well-described scenario (source domain) to a new scenario (target domain). To perform domain adaptation, these methods require access to the source data in order to match the distributions of the two domains. However, in many real-world applications the source data is inaccessible, and only a model pre-trained on the source domain is available during transfer. Traditional UDA methods therefore cannot handle this challenging setting. This paper develops a new hypothesis transfer method that achieves model adaptation through gradual knowledge distillation. Specifically, we first prepare a source model by training a deep network on the labeled source domain with supervised learning. We then transfer the source model to the unlabeled target domain by self-training. To implement gradual knowledge distillation, we slice the self-training into several epochs and use the soft pseudo-labels from the previous epoch to guide the current epoch. In this process, the soft labels are generated by semantic fusion over a proposed neighborhood geometry. To regularize the self-training, we develop a new objective built on this neighborhood structure. Experiments on three benchmarks confirm that our method achieves state-of-the-art results.
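As an illustration of the gradual knowledge distillation loop described above, the following is a minimal sketch in PyTorch: a frozen snapshot of the model from the previous epoch produces soft pseudo-labels that guide the current epoch on unlabeled target data. The function name, loader, optimizer, and temperature are hypothetical placeholders; the paper's neighborhood-based semantic fusion for generating the soft labels and its neighborhood regularization objective are not shown here.

```python
import copy
import torch
import torch.nn.functional as F

def adapt_with_gradual_distillation(model, target_loader, optimizer,
                                    num_epochs=10, temperature=2.0):
    """Sketch of self-training with gradual knowledge distillation:
    the previous epoch's snapshot supplies soft pseudo-labels."""
    teacher = copy.deepcopy(model)   # snapshot acting as the previous-epoch teacher
    teacher.eval()

    for epoch in range(num_epochs):
        model.train()
        for x, _ in target_loader:   # target data is unlabeled; labels are ignored
            with torch.no_grad():
                # soft pseudo-labels from the previous-epoch snapshot
                soft_targets = F.softmax(teacher(x) / temperature, dim=1)

            log_probs = F.log_softmax(model(x) / temperature, dim=1)
            # distillation loss: match current predictions to the soft pseudo-labels
            loss = F.kl_div(log_probs, soft_targets,
                            reduction="batchmean") * temperature ** 2

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        # refresh the teacher with the latest weights so the next epoch
        # is guided by the epoch that just finished
        teacher = copy.deepcopy(model)
        teacher.eval()
```

In this sketch the teacher is updated only once per epoch, which is what makes the distillation "gradual" rather than a single fixed-teacher pass.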