Learning Robust Representations by Projecting Superficial Statistics Out

Haohan Wang, Zexue He, Zachary C. Lipton, Eric P. Xing

Sep 27, 2018 · ICLR 2019 Conference Blind Submission
  • Abstract: Despite impressive performance as evaluated on i.i.d. holdout data, deep neural networks depend heavily on superficial statistics of the training data and are liable to break under distribution shift. For example, subtle changes to the background or texture of an image can break a seemingly powerful classifier. Building on previous work on domain generalization, we hope to produce a classifier that will generalize to previously unseen domains, even when domain identifiers are not available during training. We refer to this setting as unguided domain generalization. This setting is challenging because the model may extract many distribution-specific (superficial) signals together with distribution-agnostic (semantic) signals. To overcome this challenge, we incorporate the gray-level co-occurrence matrix (GLCM) to extract patterns that our prior knowledge suggests are superficial. Then we introduce two techniques for improving our networks' out-of-sample performance. The first method builds on the reverse gradient method, tuning the model to be invariant to the GLCM representation. The second method builds on the independence introduced by projecting the model's representation onto the subspace orthogonal to that of the GLCM representation. We test our method on a battery of standard domain generalization data sets and achieve performance comparable to or better than that of other domain generalization methods that explicitly require the distribution identification information.
  • Keywords: domain generalization, robustness
  • TL;DR: Building on previous work on domain generalization, we hope to produce a classifier that will generalize to previously unseen domains, even when domain identifiers are not available during training.
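The second technique described in the abstract — removing the component of the model's representation that lies along the superficial (GLCM-based) representation — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the function name `project_out` and the toy vectors are illustrative, and `g` is assumed to be a nonzero feature vector.

```python
import numpy as np

def project_out(h, g):
    """Project h onto the subspace orthogonal to g.

    h: (d,) semantic representation (illustrative)
    g: (d,) superficial GLCM-based representation (illustrative, assumed nonzero)
    Returns the component of h orthogonal to g, so that np.dot(result, g) == 0.
    """
    coeff = np.dot(h, g) / np.dot(g, g)
    return h - coeff * g

h = np.array([3.0, 4.0, 0.0])
g = np.array([1.0, 0.0, 0.0])
h_perp = project_out(h, g)  # → [0., 4., 0.], orthogonal to g
```

In the paper's setting this projection is applied to learned network representations rather than toy vectors, which encourages the downstream classifier to rely on signals independent of the superficial statistics.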