Implicit Stylization for Domain Adaptation

Published: 10 Mar 2023, Last Modified: 28 Apr 2023 | ICLR 2023 Workshop DG Poster
Keywords: Unsupervised domain adaptation, pose estimation, image classification, domain alignment
Abstract: Unsupervised domain adaptation (UDA) aims to bridge the gap between source and target domains in the absence of target-domain labels using two main techniques: input-level alignment (such as generative modeling and stylization) and feature-level alignment (which matches the distribution of the feature maps, e.g., gradient reversal layers). Motivated by the success of generative modeling for image classification, stylization-based methods were recently proposed for regression tasks such as pose estimation. However, input-level alignment via generative modeling and stylization incurs additional overhead and computational complexity, limiting its use in real-world DA tasks. To investigate the role of input-level alignment for DA, we ask the following question: is generative modeling or stylization really needed? In other words, motivated by the title of the workshop: what do we not need for successful domain adaptation? Surprisingly, we find that input-level alignment has little effect on regression tasks compared to classification. Based on these insights, we develop a non-parametric feature-level domain alignment method -- Implicit Stylization (ImSty) -- which yields consistent improvements over the state of the art on both regression and classification tasks, without the need for computationally intensive stylization and generative modeling. Our work conducts a critical evaluation of the role of generative modeling and stylization, at a time when these are also gaining popularity for domain generalization.
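
For readers unfamiliar with the distinction the abstract draws, the sketch below shows what a non-parametric, feature-level statistics alignment can look like in practice: an AdaIN-style renormalization applied to intermediate feature maps rather than to input images. This is only an illustrative assumption, not the ImSty algorithm itself (the abstract does not spell out the exact operation), and the names feature_stat_align, source_feat, and target_feat are hypothetical.

import torch

def feature_stat_align(source_feat: torch.Tensor,
                       target_feat: torch.Tensor,
                       eps: float = 1e-5) -> torch.Tensor:
    """Re-normalize source feature maps with target channel statistics.

    Non-parametric, AdaIN-style operation on (N, C, H, W) feature maps;
    an illustrative sketch of feature-level alignment, not the paper's method.
    """
    # Per-channel mean/std over spatial dimensions for each sample.
    mu_s = source_feat.mean(dim=(2, 3), keepdim=True)
    std_s = source_feat.std(dim=(2, 3), keepdim=True) + eps
    mu_t = target_feat.mean(dim=(2, 3), keepdim=True)
    std_t = target_feat.std(dim=(2, 3), keepdim=True) + eps

    # Whiten with source statistics, then re-color with target statistics.
    return (source_feat - mu_s) / std_s * std_t + mu_t

if __name__ == "__main__":
    src = torch.randn(4, 64, 32, 32)   # source-domain feature maps
    tgt = torch.randn(4, 64, 32, 32)   # target-domain feature maps
    aligned = feature_stat_align(src, tgt)
    print(aligned.shape)  # torch.Size([4, 64, 32, 32])

Because such an operation relies only on per-batch channel statistics, it introduces no trainable parameters and no separate stylization or generative network, which is the property the abstract emphasizes for feature-level alignment.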
Submission Number: 14