Keywords: robustness, unforeseen data variations, out-of-distribution data, transfer, invariance, equivariance, domain translation
TL;DR: We propose a two-step algorithm, built on an equivariant domain translator, that learns unforeseen robustness from out-of-distribution data.
Abstract: Existing approaches to training robust models are typically tailored to scenarios where the data variations of interest are available in the training set. While effective at achieving robustness to these foreseen variations, such approaches fail to confer unforeseen robustness, i.e., robustness to data variations whose characterization is unknown or for which no training examples exist. In this work, we learn such unforeseen robustness by harnessing the variations present in abundant out-of-distribution data. Attributing the main challenge of using these data to the domain gap, we bridge the gap with a domain translator, which allows us to bound the otherwise intractable robustness on the target distribution. Guided by this analysis, we propose a two-step algorithm: first, train an equivariant domain translator that maps out-of-distribution data to the target distribution while preserving the variation; second, regularize the model’s output consistency on the domain-translated data to improve its robustness. We empirically demonstrate that our method improves both unforeseen and foreseen robustness over existing baselines. We also show that training the equivariant domain translator serves as an effective criterion for source data selection.
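To make the two steps concrete, below is a minimal PyTorch sketch of the corresponding training terms. This is an illustration under assumed design choices, not the paper's exact objectives: the equivariance term is written as a simple MSE commutation penalty, the consistency term as a KL divergence, and all module names, the flip transformation, and tensor shapes are hypothetical placeholders.

```python
# Hedged sketch of the two-step recipe described in the abstract.
# Assumptions (not from the paper): MSE for equivariance, KL for consistency,
# a horizontal flip standing in for the variation, and toy network shapes.
import torch
import torch.nn as nn
import torch.nn.functional as F

def equivariance_loss(translator, x_src, t):
    """Step 1: train the translator T so that T(t(x)) ~= t(T(x)), i.e.
    translating to the target domain commutes with the variation t,
    so the variation is preserved through translation."""
    return F.mse_loss(translator(t(x_src)), t(translator(x_src)))

def consistency_loss(model, translator, x_src, t):
    """Step 2: with T frozen, penalize disagreement between the model's
    predictions on translated data with and without the variation t."""
    with torch.no_grad():  # translator was trained in step 1; keep it fixed
        x_tr, x_tr_var = translator(x_src), translator(t(x_src))
    p_clean = F.softmax(model(x_tr), dim=-1)
    log_p_var = F.log_softmax(model(x_tr_var), dim=-1)
    return F.kl_div(log_p_var, p_clean, reduction="batchmean")

# Toy usage: a horizontal flip stands in for the unforeseen variation.
t = lambda x: torch.flip(x, dims=[-1])
translator = nn.Conv2d(3, 3, 3, padding=1)                  # placeholder T
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x_src = torch.randn(8, 3, 32, 32)       # out-of-distribution source batch
step1_term = equivariance_loss(translator, x_src, t)
step2_term = consistency_loss(model, translator, x_src, t)
```

In a full training loop, step 1 would presumably also need a domain-alignment objective so that translated samples actually land on the target distribution, and step 2 a supervised task loss on target data; the sketch isolates only the equivariance and consistency terms the abstract describes.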