SepRep-Net: Multi-source Free Domain Adaptation via Model Separation and Reparameterization

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: multi-source free domain adaptation, generalized domain adaptation
TL;DR: We introduce a general approach to multi-source free domain adaptation via model separation and reparameterization, which enhances effectiveness, efficiency and generalizability.
Abstract: We consider multi-source free domain adaptation, the problem of adapting multiple existing models to a new domain without accessing the source data. This is a practical problem that often arises in commercial settings but remains an open question despite recent advances. Previous methods, e.g., model ensembles, are effective, but they also incur significantly increased computational costs. Conventional solutions for efficiency, such as distillation, are limited in preserving source knowledge, i.e., maintaining generalizability. In this work, we propose a novel framework called SepRep-Net, which tackles multi-source free domain adaptation via model Separation and Reparameterization. Concretely, SepRep-Net reassembles multiple existing models into a unified network while maintaining separate pathways (Separation). During training, the separate pathways are optimized in parallel, with information exchange performed regularly via an additional feature merging unit. With our specific design, these pathways can be further reparameterized into a single one to facilitate inference (Reparameterization). SepRep-Net is characterized by 1) effectiveness: competitive performance on the target domain, 2) efficiency: low computational costs, and 3) generalizability: maintaining more source knowledge than existing solutions. As a general approach, SepRep-Net can be seamlessly plugged into various methods. Extensive experiments validate the performance of SepRep-Net on mainstream benchmarks.
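The core idea behind the Reparameterization step can be illustrated with a minimal sketch: when each pathway is a linear layer and the merge is a weighted sum, the parallel pathways fold exactly into a single layer at inference time. The per-pathway merge weights (`lam` below) and the purely linear setting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hedged sketch of separation vs. reparameterization for K parallel
# linear pathways. Merge weights lam are an assumed stand-in for the
# feature merging unit described in the abstract.

rng = np.random.default_rng(0)
K, d_in, d_out = 3, 8, 4

# One weight matrix and bias per existing source model (pathway).
Ws = [rng.standard_normal((d_out, d_in)) for _ in range(K)]
bs = [rng.standard_normal(d_out) for _ in range(K)]
lam = np.array([0.5, 0.3, 0.2])  # assumed per-pathway merge weights

x = rng.standard_normal(d_in)

# Separation (training-time view): run every pathway, merge the outputs.
y_sep = sum(l * (W @ x + b) for l, W, b in zip(lam, Ws, bs))

# Reparameterization (inference-time view): fold all pathways into a
# single linear layer, so inference runs one forward pass, not K.
W_rep = sum(l * W for l, W in zip(lam, Ws))
b_rep = sum(l * b for l, b in zip(lam, bs))
y_rep = W_rep @ x + b_rep

assert np.allclose(y_sep, y_rep)  # both views produce identical outputs
```

The equivalence holds because the merge is linear in the pathway outputs; nonlinear pathways require the specific architectural design the paper describes to stay reparameterizable.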
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip

