Adversarial Dropout Regularization

15 Feb 2018 (modified: 11 Apr 2023) · ICLR 2018 Conference Blind Submission
Abstract: We present a domain adaptation method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by "fooling" a special domain classifier network. However, a drawback of this approach is that the domain classifier simply labels the generated features as in-domain or not, without considering the boundaries between classes. This means that ambiguous target features can be generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), which encourages the generator to output more discriminative features for the target domain. Our key idea is to replace the traditional domain critic with a critic that detects non-discriminative features by using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvements over the state of the art.
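The abstract's key idea — a critic that flags non-discriminative features by measuring how sensitive the classifier's prediction is to dropout — can be sketched in a few lines. The snippet below is a minimal, framework-free illustration, not the paper's implementation: it assumes a toy single-layer softmax classifier, and the names (`classifier`, `adr_sensitivity`) and the use of symmetric KL divergence between two dropout-perturbed predictions are illustrative choices. A feature near a class boundary yields predictions that change a lot under different dropout masks, so its sensitivity score is high; the generator would be trained to push target features toward low-sensitivity regions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dropout(x, p, rng):
    # Inverted dropout: zero units with prob p, rescale the rest.
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def classifier(feat, W, p, rng):
    # Toy classifier head: dropout on the feature, then a linear layer.
    return softmax(dropout(feat, p, rng) @ W)

def adr_sensitivity(feat, W, p, rng):
    """Symmetric KL divergence between two dropout-perturbed predictions.

    Large values indicate a feature whose prediction is unstable under
    dropout, i.e. one lying near a decision boundary (non-discriminative).
    """
    p1 = classifier(feat, W, p, rng)
    p2 = classifier(feat, W, p, rng)
    kl = lambda a, b: np.sum(a * (np.log(a + 1e-8) - np.log(b + 1e-8)), axis=-1)
    return 0.5 * (kl(p1, p2) + kl(p2, p1))

rng = np.random.default_rng(0)
feat = rng.normal(size=(2, 4))   # two feature vectors
W = rng.normal(size=(4, 3))      # 3-class linear head
scores = adr_sensitivity(feat, W, p=0.5, rng=rng)
```

In the adversarial game sketched here, the classifier would be updated to maximize this sensitivity on target samples (sharpening its role as a critic), while the feature generator minimizes it, which is the role reversal relative to a standard domain classifier that the abstract describes.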
TL;DR: We present a new adversarial method for adapting neural representations based on a critic that detects non-discriminative features.
Keywords: domain adaptation, computer vision, generative models
Data: GTA5, ImageNet, MNIST, SVHN, Syn2Real