Bilateral Knowledge Distillation for Unsupervised Domain Adaptation of Semantic Segmentation

Published: 01 Jan 2022, Last Modified: 15 May 2023. IROS 2022.
Abstract: Unsupervised domain adaptation (UDA) aims to learn domain-invariant representations between a labeled source domain and an unlabeled target domain. Existing self-training-based UDA methods supervise source data with ground truth and target data with pseudo-labels. However, strong supervision in the source domain and pseudo-label noise in the target domain cause problems such as biased predictions and over-fitting. To tackle these issues, we propose a novel Bilateral Knowledge Distillation (BKD) framework for UDA in semantic segmentation, which adopts a different knowledge distillation strategy for each domain. Specifically, we first introduce a Source-Flow Distillation (SD) that smooths the labels of source images, weakening the supervision in the source domain. Meanwhile, a Target-Flow Distillation (TD) is designed to extract inter-class knowledge from the probability map output by the teacher model, alleviating the influence of pseudo-label noise in the target domain. Considering the class imbalance in semantic segmentation, we further propose an Image-Wise Hard Pixel Mining (HPM) that addresses this imbalance without estimating class frequencies in the unlabeled target domain. Extensive experiments on two benchmarks, GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes, demonstrate the effectiveness of our framework against existing state-of-the-art methods.
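The abstract names three mechanisms: label smoothing on the source flow, soft-target distillation from the teacher's probability map on the target flow, and per-image hard pixel mining. The PyTorch-style sketch below shows how losses of this kind are commonly written; it is a minimal illustration under stated assumptions, not the authors' implementation, and every name and hyperparameter here (smooth_eps, tau, keep_ratio, the ignore index of 255) is an assumption that may differ from the paper's actual formulation.

```python
# Minimal sketch of the three components named in the abstract.
# All function names and hyperparameters are illustrative assumptions.
import torch.nn.functional as F

def source_flow_loss(student_logits, labels, smooth_eps=0.1):
    # Source-Flow Distillation (SD): smooth the one-hot source labels so
    # the source supervision is weaker than plain cross-entropy.
    # student_logits: (N, C, H, W); labels: (N, H, W), 255 = ignored pixel.
    return F.cross_entropy(student_logits, labels,
                           label_smoothing=smooth_eps, ignore_index=255)

def target_flow_loss(student_logits, teacher_logits, tau=2.0):
    # Target-Flow Distillation (TD): match the student to the teacher's
    # softened probability map, transferring inter-class similarity
    # rather than hard, possibly noisy pseudo-labels.
    log_p_student = F.log_softmax(student_logits / tau, dim=1)
    p_teacher = F.softmax(teacher_logits / tau, dim=1)
    # "batchmean" sums the per-element KL and divides by the batch size;
    # tau**2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * tau ** 2

def hard_pixel_mining_loss(per_pixel_loss, keep_ratio=0.2):
    # Image-Wise Hard Pixel Mining (HPM): per image, average only the
    # highest-loss pixels, which up-weights rare classes without
    # estimating class frequencies on the unlabeled target domain.
    n = per_pixel_loss.size(0)
    flat = per_pixel_loss.view(n, -1)
    k = max(1, int(flat.size(1) * keep_ratio))
    hard, _ = flat.topk(k, dim=1)
    return hard.mean()

# Usage sketch (student, teacher, and all tensors are hypothetical):
#   src_term = source_flow_loss(student(src_imgs), src_labels)
#   tgt_term = target_flow_loss(student(tgt_imgs), teacher(tgt_imgs).detach())
#   pixel_ce = F.cross_entropy(student(tgt_imgs), pseudo_labels,
#                              reduction="none", ignore_index=255)
#   hpm_term = hard_pixel_mining_loss(pixel_ce)
```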