Abstract: Deep convolutional neural networks often suffer significant performance degradation when deployed to an unknown domain. To tackle this problem, domain generalization (DG) aims to generalize a model learned from source domains to an unseen target domain. Prior work has mostly focused on obtaining robust cross-domain feature representations while neglecting the generalization ability of the classifier. In this paper, we propose a novel approach named Implicit Domain Augmentation (IDA) for classifier regularization. Our motivation is to expose the classifier to more diverse domains and thus make it more knowledgeable. Specifically, the styles of samples are transferred and re-applied to the original features. To obtain meaningful directions of style transfer, we model the feature statistics with a multivariate normal distribution and sample new styles from this distribution to simulate potential unknown domains. To implement IDA efficiently, we achieve domain augmentation implicitly by minimizing an upper bound of the expected cross-entropy loss on the augmented feature set, rather than generating new samples explicitly. As a plug-and-play technique, IDA can be easily applied to other DG methods to boost their performance while introducing negligible computational overhead. Experiments on several tasks demonstrate the effectiveness of our method.
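The style-sampling idea in the abstract can be illustrated by its explicit counterpart: treat each sample's channel-wise feature mean and standard deviation as its "style," fit a normal distribution over these statistics across the batch, sample new styles from it, and re-apply them to the normalized features. The sketch below is a minimal NumPy illustration under these assumptions; all names are hypothetical, and the paper's actual contribution is to achieve this augmentation implicitly through a loss upper bound rather than by sampling explicitly.

```python
import numpy as np

def augment_styles(features, rng=None):
    """Explicit style augmentation sketch (illustrative, not the IDA method).

    `features` has shape (N, C, H, W). The per-sample channel mean/std are
    taken as the "style"; new styles are drawn from a normal distribution
    fitted over the batch and injected back into the normalized features.
    """
    rng = np.random.default_rng() if rng is None else rng
    mu = features.mean(axis=(2, 3), keepdims=True)           # (N, C, 1, 1)
    sigma = features.std(axis=(2, 3), keepdims=True) + 1e-6  # (N, C, 1, 1)

    # Fit a normal distribution to the batch's style statistics.
    mu_mean, mu_std = mu.mean(axis=0), mu.std(axis=0)
    sg_mean, sg_std = sigma.mean(axis=0), sigma.std(axis=0)

    # Sample new styles to simulate potential unseen domains.
    new_mu = rng.normal(mu_mean, mu_std, size=mu.shape)
    new_sigma = np.abs(rng.normal(sg_mean, sg_std, size=sigma.shape)) + 1e-6

    # Normalize away the original style, then inject the sampled one.
    return new_sigma * (features - mu) / sigma + new_mu
```

Replacing this explicit sampling with an expectation over infinitely many sampled styles, bounded from above in closed form, is what makes the "implicit" variant essentially free at training time.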