NSCL: Noise-Resistant Soft Contrastive Learning for Universal Domain Adaptation

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Universal Domain Adaptation, Contrastive Learning
Abstract: Domain adaptation (DA) transfers knowledge from label-rich domains to new domains where labels are scarce, addressing the poor generalization of deep neural networks to unseen domains. Universal domain adaptation (UNDA) further assumes that the label distributions of the labeled and unlabeled data differ and are unknown. In this paper, we focus on the noise problem in contrastive learning (CL) for UNDA, which comprises view noise introduced by data augmentation and label noise in classifier training. The domain gap in UNDA amplifies view noise from data augmentation, making it difficult to find augmentation schemes that work across all domains. In addition, mainstream UNDA classifiers combine a closed-set classifier with open-set classifiers; insufficient competition among the open-set classifiers leads to overconfidence and, in turn, extreme sensitivity to noise in the labeled data. We therefore propose Noise-Resistant Soft Contrastive Learning (NSCL) to address these issues. First, we propose a soft contrastive learning loss that avoids the over-response of the typical CL loss to noisy samples, allowing data augmentation to further improve UNDA performance. Second, we design an all-in-one (AIO) classifier that improves robustness to noisy labels while introducing competition among multiple unknown-class categories. Extensive experiments on UNDA and open-set DA demonstrate the advantages of NSCL over existing methods, particularly in downstream tasks such as classification and visualization.
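The abstract does not give the exact form of the proposed soft contrastive loss, but the general idea of softening the hard one-hot positive target in an InfoNCE-style loss can be illustrated with a short sketch. Everything below is an assumption made for illustration: the function name `soft_contrastive_loss`, the temperature `tau`, the weight `soft_weight`, and the uniform spreading of the residual target mass over negatives are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def soft_contrastive_loss(z1, z2, tau=0.1, soft_weight=0.9):
    """Illustrative soft variant of an InfoNCE-style contrastive loss.

    Instead of a hard one-hot target on the augmented positive, the target
    distribution puts `soft_weight` mass on the positive and spreads the
    rest uniformly over the negatives, tempering the loss's response to a
    noisy augmented view. This is a hypothetical sketch, NOT the paper's
    exact formulation, which the abstract does not specify.
    """
    z1 = F.normalize(z1, dim=1)          # (N, D) anchor embeddings
    z2 = F.normalize(z2, dim=1)          # (N, D) augmented-view embeddings
    logits = z1 @ z2.t() / tau           # (N, N) temperature-scaled cosine similarities
    n = z1.size(0)
    # Soft target: soft_weight on the matching view, remainder on the others.
    targets = torch.full((n, n), (1.0 - soft_weight) / (n - 1), device=z1.device)
    targets.fill_diagonal_(soft_weight)
    log_probs = F.log_softmax(logits, dim=1)
    return -(targets * log_probs).sum(dim=1).mean()
```

With a hard one-hot target, a single noisy augmented view can dominate the gradient for its anchor; capping the positive's target mass at `soft_weight` bounds that contribution, which matches the abstract's stated motivation of avoiding over-response to noisy samples.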
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (e.g., speech processing, computer vision, NLP)