DAST: Unsupervised Domain Adaptation in Semantic Segmentation Based on Discriminator Attention and Self-Training

Published: 01 Jan 2021, Last Modified: 26 Jul 2025, AAAI 2021, CC BY-SA 4.0
Abstract: Unsupervised domain adaptation has recently been used to reduce the domain shift, ultimately improving the performance of semantic segmentation on unlabeled real-world data. In this paper, we propose a novel method that reduces the domain shift through two strategies: discriminator attention and self-training. The discriminator attention strategy uses a two-stage adversarial learning process that explicitly distinguishes well-aligned (domain-invariant) from poorly-aligned (domain-specific) features and then guides the model to focus on the latter. The self-training strategy adaptively improves the model's decision boundary for the target domain, which implicitly facilitates the extraction of domain-invariant features. Combining the two strategies provides a more effective way to reduce the domain shift. Extensive experiments demonstrate the effectiveness of the proposed method on several benchmark datasets.
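The sketch below illustrates, at a high level, the two ideas named in the abstract, assuming a PyTorch setup. The module shapes, the attention formula derived from the discriminator's confidence, and the pseudo-label confidence threshold are illustrative assumptions for exposition, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def discriminator_attention(features, discriminator):
    """Reweight features so training focuses on poorly-aligned regions.

    Assumption: the discriminator outputs a per-pixel logit for the domain
    of each feature location. Pixels it classifies confidently are treated
    as domain-specific (poorly aligned) and receive higher attention weight;
    pixels near 0.5 probability are treated as well aligned.
    """
    d_prob = torch.sigmoid(discriminator(features))      # (N, 1, H, W)
    attention = 2.0 * torch.abs(d_prob - 0.5)             # in [0, 1]
    return features * (1.0 + attention)                   # emphasize domain-specific regions


def self_training_loss(seg_model, target_images, threshold=0.9):
    """Pseudo-label confident target pixels and train on them."""
    with torch.no_grad():
        probs = F.softmax(seg_model(target_images), dim=1)  # (N, C, H, W)
        conf, pseudo_labels = probs.max(dim=1)              # (N, H, W)
        pseudo_labels[conf < threshold] = 255               # ignore low-confidence pixels
    logits = seg_model(target_images)
    return F.cross_entropy(logits, pseudo_labels, ignore_index=255)


if __name__ == "__main__":
    # Toy placeholders only; a real system would use a segmentation backbone
    # (e.g. ResNet-based) and a fully convolutional domain discriminator.
    seg_model = nn.Conv2d(3, 19, kernel_size=1)        # images -> class logits
    discriminator = nn.Conv2d(64, 1, kernel_size=1)    # features -> domain logit
    feats = torch.randn(2, 64, 32, 64)
    imgs = torch.randn(2, 3, 32, 64)
    _ = discriminator_attention(feats, discriminator)
    # Threshold 0.0 here so the random toy model still yields valid targets.
    loss = self_training_loss(seg_model, imgs, threshold=0.0)
    print(loss.item())
```

In this reading, the attention map and the pseudo-label loss are complementary: the former explicitly directs learning toward domain-specific regions, while the latter implicitly tightens the decision boundary on the target domain.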