A Closer Look at Smoothness in Domain Adversarial Training

Published: 28 Jan 2022, Last Modified: 22 Oct 2023 · ICLR 2022 Submitted
Keywords: Domain Adaptation, Optimization
Abstract: Domain adversarial training has become ubiquitous for learning invariant representations and is widely used for various domain adaptation tasks. Recently, methods that converge to smooth optima have shown improved generalization for supervised learning tasks such as classification. In this work, we analyze the effect of smoothness-enhancing formulations on domain adversarial training, whose objective is a combination of classification and adversarial terms. In contrast to the classification loss, our analysis shows that \textit{converging to smooth minima w.r.t. the adversarial loss leads to sub-optimal generalization on the target domain}. Based on this analysis, we introduce the Smooth Domain Adversarial Training (SDAT) procedure, which effectively enhances the performance of existing domain adversarial methods for both classification and object detection tasks. Our smoothness analysis also provides insight into the widespread use of SGD over Adam in domain adversarial training.
One-sentence Summary: In domain adversarial training, converging to smooth minima helps only for the ERM (classification) term; smoothing the adversarial term leads to worse target performance.
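
As a rough illustration of the idea (a sketch, not the authors' released code), the snippet below applies a SAM-style sharpness-aware perturbation only to the source classification (ERM) loss, while the adversarial loss is evaluated at the unperturbed weights. The names `feature_extractor`, `classifier`, `domain_discriminator`, and the radius `rho` are hypothetical placeholders, and the discriminator is assumed to include a gradient reversal layer as in DANN.

```python
import torch
import torch.nn.functional as F

def sdat_step(feature_extractor, classifier, domain_discriminator,
              optimizer, x_s, y_s, x_t, rho=0.05):
    """One SDAT-style update (sketch): a sharpness-aware ascent step is taken
    w.r.t. the source classification loss only; the adversarial loss is not
    smoothed. Assumes domain_discriminator applies a gradient reversal layer."""
    task_params = [p for p in list(feature_extractor.parameters())
                   + list(classifier.parameters()) if p.requires_grad]

    optimizer.zero_grad()

    # Adversarial (domain) loss at the ORIGINAL weights -- left unsmoothed.
    d_s = domain_discriminator(feature_extractor(x_s))
    d_t = domain_discriminator(feature_extractor(x_t))
    adv_loss = (F.binary_cross_entropy_with_logits(d_s, torch.ones_like(d_s))
                + F.binary_cross_entropy_with_logits(d_t, torch.zeros_like(d_t)))
    adv_loss.backward()  # gradients accumulate in .grad

    # SAM-style ascent step computed from the classification loss only.
    cls_loss = F.cross_entropy(classifier(feature_extractor(x_s)), y_s)
    grads = torch.autograd.grad(cls_loss, task_params)
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads])) + 1e-12
    eps = [rho * g / grad_norm for g in grads]
    with torch.no_grad():
        for p, e in zip(task_params, eps):
            p.add_(e)

    # Classification loss at the perturbed weights; its gradient is the
    # "smoothed" ERM gradient and is added to the adversarial gradient.
    perturbed_cls_loss = F.cross_entropy(classifier(feature_extractor(x_s)), y_s)
    perturbed_cls_loss.backward()

    # Restore the original weights before the descent step.
    with torch.no_grad():
        for p, e in zip(task_params, eps):
            p.sub_(e)
    optimizer.step()
    return perturbed_cls_loss.item(), adv_loss.item()
```

The key design choice mirrored here is that the perturbation radius is applied to the task loss alone; applying the same perturbation to the adversarial term is what the paper identifies as harmful for target-domain generalization.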
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2206.08213/code)