Self-ensembling for visual domain adaptation

15 Feb 2018 (modified: 14 Oct 2024) · ICLR 2018 Conference Blind Submission
Abstract: This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen et al., 2017) of temporal ensembling (Laine et al., 2017), a technique that achieved state-of-the-art results in semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state-of-the-art results on a variety of benchmarks, including our winning entry in the VisDA-2017 visual domain adaptation challenge. On small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy close to that of a classifier trained in a supervised fashion.
TL;DR: Self-ensembling-based algorithm for visual domain adaptation; state-of-the-art results; won the VisDA-2017 image classification domain adaptation challenge.
Keywords: deep learning, neural networks, domain adaptation, images, visual, computer vision
Code: [Britefury/self-ensemble-visual-domain-adapt](https://github.com/Britefury/self-ensemble-visual-domain-adapt) + [2 community implementations](https://paperswithcode.com/paper/?openreview=rkpoTaxA-)
Data: [GTSRB](https://paperswithcode.com/dataset/gtsrb), [MNIST](https://paperswithcode.com/dataset/mnist), [SVHN](https://paperswithcode.com/dataset/svhn), [USPS](https://paperswithcode.com/dataset/usps), [VisDA-2017](https://paperswithcode.com/dataset/visda-2017)
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/self-ensembling-for-visual-domain-adaptation/code)
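
For readers who want the gist of the method, below is a minimal, hypothetical sketch of the mean-teacher-style update that self-ensembling builds on: a teacher network whose weights are an exponential moving average (EMA) of the student's, a supervised loss on labeled source-domain samples, and a consistency loss that pushes the student to match the teacher's predictions on differently augmented unlabeled target-domain samples, with confidence thresholding as one of the paper's modifications. All names (`make_teacher`, `ema_update`, `training_step`) and hyperparameter values here are illustrative assumptions, not the authors' implementation; see the linked repository for the real code.

```python
# Hypothetical sketch of a mean-teacher-style self-ensembling step
# (PyTorch). Names and hyperparameters are illustrative only.
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    # The teacher starts as a frozen copy of the student; it is never
    # updated by gradient descent, only by the EMA update below.
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

def ema_update(teacher, student, alpha=0.99):
    # Teacher weights are an exponential moving average of student weights.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.data.mul_(alpha).add_(s_p.data, alpha=1.0 - alpha)

def training_step(student, teacher, opt, src_x, src_y,
                  tgt_x_student, tgt_x_teacher,
                  conf_threshold=0.9, cons_weight=1.0):
    # Supervised cross-entropy on labeled source-domain samples.
    sup_loss = F.cross_entropy(student(src_x), src_y)

    # Consistency loss on unlabeled target-domain samples: the student
    # should match the teacher's prediction for a differently augmented
    # view of the same image.
    with torch.no_grad():
        teacher_prob = F.softmax(teacher(tgt_x_teacher), dim=1)
    student_prob = F.softmax(student(tgt_x_student), dim=1)

    # Confidence thresholding (one of the paper's modifications for the
    # domain-adaptation setting): keep only samples on which the teacher
    # is sufficiently confident.
    mask = (teacher_prob.max(dim=1).values > conf_threshold).float()
    per_sample = F.mse_loss(student_prob, teacher_prob,
                            reduction='none').mean(dim=1)
    cons_loss = (per_sample * mask).mean()

    loss = sup_loss + cons_weight * cons_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema_update(teacher, student)  # teacher follows the student
    return loss.item()
```

At test time, predictions would come from the teacher network, whose time-averaged weights tend to generalize better than any single student snapshot.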