LAMDA: Unified Language-Driven Multi-Task Domain Adaptation

16 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Transfer Learning, Universal Domain Adaptation, Language-Driven
Abstract: Unsupervised domain adaptation (UDA), a form of transfer learning, seeks to adapt a model trained on supervised source domains to an unlabeled target domain. However, most existing UDA approaches suffer from two limitations. First, they assume that the source and target domains share the same vocabulary, which is impractical in real-world applications where the target domain may require a distinct vocabulary. Second, existing UDA methods for core vision tasks, such as detection and segmentation, differ significantly in their network architectures and adaptation granularities, leading to redundant research effort on specialized, task-specific architectures that do not generalize across tasks. To address these limitations, we propose unified language-driven multi-task domain adaptation (LAMDA). LAMDA incorporates a pre-trained vision-language model on the source domains, allowing transfer to various tasks in the unlabeled target domain with different vocabularies. This eliminates the need for multiple vocabulary-specific vision models and their respective source datasets, and it further enables unsupervised transfer to novel domains with custom vocabularies. Extensive experiments on various segmentation and detection datasets validate the effectiveness, extensibility, and practicality of the proposed LAMDA.
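To make the language-driven idea concrete, the sketch below shows how classifying visual features against text embeddings of class names lets one model serve arbitrary target vocabularies: swapping the vocabulary only means re-encoding its class names, with no new vocabulary-specific head. It is a minimal illustration, not the paper's implementation: `LanguageDrivenHead`, `self_train_step`, the toy encoder, and the confidence threshold are all illustrative assumptions, and confidence-thresholded pseudo-labeling here merely stands in for LAMDA's unsupervised transfer step.

```python
# Minimal sketch of language-driven adaptation to a custom target vocabulary.
# Assumptions (not from the paper): class text embeddings come from a frozen
# CLIP-style text encoder; self-training with pseudo-labels stands in for the
# paper's actual adaptation procedure.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LanguageDrivenHead(nn.Module):
    """Score visual features against text embeddings of an arbitrary vocabulary."""

    def __init__(self, class_text_embeds: torch.Tensor, temperature: float = 0.07):
        super().__init__()
        # (num_classes, dim) frozen text embeddings; changing the vocabulary
        # only requires re-encoding its class names, not retraining anything.
        self.register_buffer("text_embeds", F.normalize(class_text_embeds, dim=-1))
        self.temperature = temperature

    def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: (batch, dim) -> logits: (batch, num_classes)
        v = F.normalize(visual_feats, dim=-1)
        return v @ self.text_embeds.t() / self.temperature


def self_train_step(encoder, head, target_images, optimizer, threshold=0.9):
    """One unsupervised step: pseudo-label confident target samples, then fit them."""
    with torch.no_grad():
        probs = head(encoder(target_images)).softmax(dim=-1)
        conf, pseudo = probs.max(dim=-1)
        keep = conf > threshold  # train only on high-confidence pseudo-labels
    if keep.any():
        logits = head(encoder(target_images[keep]))
        loss = F.cross_entropy(logits, pseudo[keep])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    return 0.0


# Toy usage with a hypothetical linear encoder and random stand-in embeddings.
dim, num_classes = 512, 5
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))
head = LanguageDrivenHead(torch.randn(num_classes, dim))
opt = torch.optim.SGD(encoder.parameters(), lr=1e-3)
loss = self_train_step(encoder, head, torch.randn(8, 3, 32, 32), opt)
```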
Supplementary Material: pdf
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 565