Neighborhood-Informed Diffusion Model for Source-Free Domain Adaptation: Retrieving Source Ground Truth from Target Query's Neighbors

19 Sept 2023 (modified: 11 Feb 2024), Submitted to ICLR 2024
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: discriminative diffusion models, source-free domain adaptation, generative models, contrastive learning, semi-supervised learning, transfer learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We introduce a discriminative diffusion model that enables a target-to-source generative process for source-free domain adaptation by parameterizing the diffusion prior based on latent neighborhood geometry.
Abstract: Diffusion models, employed as an input augmentation technique, have shown promise in domain adaptation. However, to effectively capture the shared characteristics of two data densities, such a diffusion model must be trained on both source and target data. This requirement prevents its use in the more demanding yet realistic setting where source data remain inaccessible during target adaptation, i.e., source-free domain adaptation (SFDA). In the absence of source data, which precludes an analytical quantification of the domain shift, can the pre-trained source representation itself be used to formulate a diffusion model that facilitates unsupervised clustering during target adaptation? To answer this question, we introduce a novel method, discriminative neighborhood diffusion (DND). DND turns the pre-trained source representation into a target-to-source diffusion model by parameterizing the prior densities of the diffusion process with the smoothness indicated by latent k-nearest neighbors (k-NNs). The samples generated by this diffusion model then serve as positive keys for contrastive clustering during adaptation, effectively injecting a form of supervision into unsupervised clustering by incorporating the latent geometry of both the source and target domains through their k-NNs. Evaluating DND against a range of SFDA methods on multiple benchmark datasets, we demonstrate the discriminative potential of diffusion models in the absence of source data and show that DND achieves state-of-the-art performance on SFDA problems.
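To make the abstract's description more concrete, here is a minimal PyTorch sketch of the general idea as stated there: parameterize a per-query prior from the latent k-NN neighborhood of a target feature (embedded by the frozen, pre-trained source model), draw a "target-to-source" sample from that prior, and use it as the positive key in a contrastive loss. The Gaussian form of the prior, the one-step sampling surrogate for the diffusion process, and all names (`knn_prior`, `sample_positive`, the feature bank, the InfoNCE loss) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def knn_prior(query_feat, feat_bank, k=5):
    """Mean/std of the k-NN neighborhood of each query in a frozen feature bank."""
    # query_feat: (B, D) L2-normalized target features from the source-pretrained encoder
    # feat_bank:  (N, D) L2-normalized features stored during adaptation
    sims = query_feat @ feat_bank.T                      # (B, N) cosine similarities
    _, idx = sims.topk(k, dim=1)                         # indices of the k nearest neighbors
    neighbors = feat_bank[idx]                           # (B, k, D)
    return neighbors.mean(dim=1), neighbors.std(dim=1)   # neighborhood-informed prior

def sample_positive(query_feat, feat_bank, k=5):
    """One-step surrogate for the target-to-source generation: sample from the k-NN prior."""
    mu, sigma = knn_prior(query_feat, feat_bank, k)
    z = mu + sigma * torch.randn_like(mu)                # reparameterized Gaussian sample
    return F.normalize(z, dim=1)

def contrastive_loss(query_feat, positive_key, queue, tau=0.07):
    """InfoNCE-style loss with the generated feature as the positive key."""
    pos = (query_feat * positive_key).sum(dim=1, keepdim=True) / tau   # (B, 1)
    neg = query_feat @ queue.T / tau                                   # (B, Q) negatives
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(query_feat.size(0), dtype=torch.long, device=query_feat.device)
    return F.cross_entropy(logits, labels)

# Toy usage with random tensors standing in for encoder outputs:
B, D, N, Q = 8, 256, 1024, 512
q = F.normalize(torch.randn(B, D), dim=1)        # target query features
bank = F.normalize(torch.randn(N, D), dim=1)     # feature bank in the source representation space
queue = F.normalize(torch.randn(Q, D), dim=1)    # negative queue for contrastive clustering
loss = contrastive_loss(q, sample_positive(q, bank), queue)
```

The sketch only illustrates how a neighborhood-parameterized prior can supply positive keys for contrastive clustering without any source data; the actual DND model replaces the one-step Gaussian sample with a learned diffusion process.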
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2015