Source-Target Coordinated Training with Multi-head Hybrid-Attention for Domain Adaptive Semantic Segmentation

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission
Keywords: domain adaptation, semantic segmentation
Abstract: Domain adaptive semantic segmentation aims to assign a semantic label to each pixel in the unlabeled target domain by transferring knowledge from the labeled source domain. Because of domain shift, successful adaptation to the unseen domain depends on feature alignment between the two domains. This paper therefore focuses on feature alignment for domain adaptive semantic segmentation, i.e., when to align and how to align. Since no labels are available in the target domain, aligning the target distribution too early leads to poor performance due to pseudo-label noise, while aligning too late may cause the model to underfit the target domain. We propose a Source-Target Coordinated Training (STCT) framework, in which a coordination weight controls when to align. For how to align, we design a Multi-head Hybrid-Attention (MHA) module that replaces the multi-head self-attention (MSA) module in the transformer. The MHA module consists of intra-domain self-attention and inter-domain cross-attention mechanisms. Compared with MSA, MHA achieves feature alignment by explicitly constructing interactions between the two domains without additional computation or parameters. Moreover, to fully exploit the potential of the MHA module, we comprehensively investigate different designs and identify several strategies that are important for effective feature alignment. Our method achieves competitive performance on two challenging synthetic-to-real benchmarks, GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes.
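
The abstract gives no implementation details, but the MHA design it describes (reusing the standard MSA projections while splitting the attention heads between intra-domain self-attention and inter-domain cross-attention) can be sketched roughly as follows. This is a minimal, hypothetical PyTorch sketch: the class name MultiHeadHybridAttention, the particular head split, and the pairing of source and target token sequences are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of a Multi-head Hybrid-Attention (MHA) layer, assuming the
# standard MSA qkv/output projections are reused and the heads are split between
# intra-domain self-attention and inter-domain cross-attention.
import torch
import torch.nn as nn


class MultiHeadHybridAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8, num_cross_heads: int = 4):
        super().__init__()
        assert dim % num_heads == 0 and 0 < num_cross_heads < num_heads
        self.num_heads = num_heads
        self.num_cross_heads = num_cross_heads  # heads that attend across domains
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        # Same projections as a standard MSA block -> no additional parameters.
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def _attend(self, q, k, v):
        # Scaled dot-product attention over a subset of heads.
        attn = (q @ k.transpose(-2, -1)) * self.scale
        return attn.softmax(dim=-1) @ v

    def _split_heads(self, x):
        B, N, _ = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        return qkv.permute(2, 0, 3, 1, 4)  # (3, B, heads, N, head_dim)

    def forward(self, x_src: torch.Tensor, x_tgt: torch.Tensor):
        # x_src, x_tgt: (B, N, C) token sequences from source / target images.
        B, N, C = x_src.shape
        qs, ks, vs = self._split_heads(x_src)
        qt, kt, vt = self._split_heads(x_tgt)

        h = self.num_cross_heads
        # First (num_heads - h) heads: intra-domain self-attention.
        # Last h heads: inter-domain cross-attention (queries from one domain,
        # keys/values from the other), which builds explicit source-target interaction.
        out_src = torch.cat([
            self._attend(qs[:, :-h], ks[:, :-h], vs[:, :-h]),
            self._attend(qs[:, -h:], kt[:, -h:], vt[:, -h:]),
        ], dim=1)
        out_tgt = torch.cat([
            self._attend(qt[:, :-h], kt[:, :-h], vt[:, :-h]),
            self._attend(qt[:, -h:], ks[:, -h:], vs[:, -h:]),
        ], dim=1)

        def merge(o):
            return o.transpose(1, 2).reshape(B, N, C)

        return self.proj(merge(out_src)), self.proj(merge(out_tgt))

Sharing a single qkv projection and output projection across the self-attention and cross-attention heads is what would keep the parameter count identical to a standard MSA block, consistent with the abstract's claim of no additional computation or parameters; the actual head assignment and attention routing used in the paper may differ.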
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Unsupervised and Self-supervised learning
Supplementary Material: zip