OOD-Probe: A Neural Interpretation of Out-of-Domain Generalization

28 May 2022, 15:02 (modified: 21 Jul 2022, 01:30) · SCIS 2022 Poster
Keywords: OOD generalization, probing, interpretability
TL;DR: We propose a probing framework that reveals interesting patterns in OOD generalization algorithms.
Abstract: The ability to generalize out-of-domain (OOD) is an important goal in deep neural network development, and researchers have proposed many high-performing OOD generalization methods built on diverse foundations. While many OOD algorithms perform well in a range of scenarios, these systems are typically evaluated as "black boxes". Instead, we propose a flexible framework that evaluates OOD systems at finer granularity, using a probing module that predicts the originating domain from intermediate representations. We find that the representations always encode some information about the domain. While the layerwise encoding patterns remain largely stable across different OOD algorithms, they vary across datasets. For example, information about rotation (on RotatedMNIST) is most visible in the lower layers, while information about style (on VLCS and PACS) is most visible in the middle layers. In addition, high probing results correlate with domain generalization performance, suggesting further directions for developing OOD generalization systems.
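The probing idea described in the abstract can be sketched as a linear classifier trained on frozen intermediate representations to predict the originating domain; held-out probe accuracy then measures how much domain information a layer encodes. The sketch below uses synthetic "activations" in place of features extracted from a real network, and all names and data are illustrative, not the paper's actual implementation:

```python
# Hypothetical sketch of a domain probe (not the paper's code): train a
# linear classifier on frozen intermediate representations to predict
# which domain an input came from. The "activations" here are synthetic
# stand-ins for features extracted from one layer of a trained network.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Three domains, each with a small domain-specific shift in feature space,
# so the domain is partly (but not trivially) decodable from the features.
n_per_domain, dim = 200, 32
feats, domains = [], []
for d in range(3):
    mean = np.zeros(dim)
    mean[d] = 2.0  # illustrative domain-specific offset
    feats.append(rng.normal(mean, 1.0, size=(n_per_domain, dim)))
    domains.append(np.full(n_per_domain, d))
X = np.concatenate(feats)
y = np.concatenate(domains)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# The probe: higher held-out accuracy means this layer's representation
# encodes more information about the originating domain.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probe_acc = probe.score(X_te, y_te)
print(f"probing accuracy: {probe_acc:.2f}")
```

In the framework described above, one such probe would be trained per layer, and comparing accuracies across layers yields the layerwise encoding patterns the abstract reports (e.g., rotation most decodable in lower layers, style in middle layers).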