Text-driven Zero-shot Domain Adaptation with Cross-modality Graph Motif Matching

26 Sept 2024 (modified: 13 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: computer vision, transfer learning, multi-modality, zero-shot domain adaptation
TL;DR: A CLIP-based zero-shot domain adaptive semantic segmentation method that uses graph motif theory to achieve cross-domain feature alignment.
Abstract: Zero-shot domain adaptive semantic segmentation aims to transfer knowledge from a source domain and learn a target segmenter without access to any target-domain data. Some existing methods achieve notable performance by transforming source features to the target domain through language-driven techniques. However, these methods often align language features to global image features only coarsely, resulting in sub-optimal performance. To address this challenge, we propose a graph motif-based adaptation method designed to balance the efficiency and effectiveness of feature alignment. Our approach constructs motif structures based on domain-wise image feature distributions. By increasing the angle between language-vision directed edges, we effectively pull visual features toward the language feature center, thereby achieving cross-modality feature alignment. Additionally, we employ relationship-constraint losses, i.e., directional and contrastive losses, to mitigate mode collapse during target feature stylization. These losses help stabilize the learning process and improve the robustness of the adaptation. Extensive experimental results validate the efficacy of the proposed method. The code will be made available.
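The paper's code is not yet released, so the following is only a minimal sketch of the CLIP-style directional loss the abstract mentions, as that loss is commonly defined: the shift from source to target image features should be parallel to the shift from source to target text features. All function names and the use of raw feature vectors (rather than CLIP encoder outputs) are assumptions for illustration.

```python
import math


def cosine(a, b, eps=1e-8):
    """Cosine similarity between two feature vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + eps)


def directional_loss(src_img, tgt_img, src_txt, tgt_txt):
    """CLIP-style directional loss (a common formulation, not necessarily
    the paper's exact one): penalize misalignment between the image-feature
    shift and the text-feature shift across domains."""
    d_img = [t - s for s, t in zip(src_img, tgt_img)]  # vision-space direction
    d_txt = [t - s for s, t in zip(src_txt, tgt_txt)]  # language-space direction
    return 1.0 - cosine(d_img, d_txt)  # 0 when directions are parallel
```

In practice the inputs would be CLIP image and text embeddings; here plain vectors stand in so the geometry of the loss is visible.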
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6143
