Relational Out-of-Distribution Generalization

Published: 21 Oct 2022, Last Modified: 05 May 2023 · NeurIPS 2022 Workshop DistShift Poster
Keywords: domain relation, out-of-distribution robustness, multi-head predictor ensemble, domain alignment
TL;DR: We propose READ, a method that utilizes domain relations to ensemble and align prediction heads, improving out-of-distribution robustness.
Abstract: Domain relations are an important factor in out-of-distribution (OOD) generalization: they provide a global view of how domains relate to one another functionally, e.g., protein domains in the binding-affinity task or geographical-location domains in the weather-forecasting task. Existing work largely neglects domain relations; in this work, we explore how to incorporate this rich information into solving the distribution-shift problem. We therefore propose READ, a general multi-head deep learning framework that harnesses domain relations to generalize to unseen domains in a structured learning and inference manner. In READ, all training domains share a common backbone, but each domain learns its own prediction head. Through a proposed explicit regularization, READ simulates the generalization process among heads: for each input, a weighted ensemble of predictions from the heads of domains other than the input's own is computed via the domain relations and aligned with the target. To improve the reliability of the domain relations, READ further leverages similarity metric learning to update the initial relations. Empirically, we evaluate READ on three domain generalization benchmarks. The results indicate that READ consistently improves upon existing state-of-the-art methods on datasets from various fields.
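The core inference step described above — a relation-weighted ensemble over the prediction heads of the other domains — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the dimensions, the random linear heads, the random symmetric relation matrix, and the softmax normalization of relation weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 training domains, 4-d backbone features, 2 classes.
n_domains, feat_dim, n_classes = 3, 4, 2

# Shared-backbone output for one input (in READ, a learned encoder).
z = rng.normal(size=feat_dim)

# One linear head per training domain (weights here are random placeholders).
heads = rng.normal(size=(n_domains, n_classes, feat_dim))

# Symmetric domain-relation matrix (in READ, refined via similarity
# metric learning; here, a random symmetric stand-in).
relation = np.abs(rng.normal(size=(n_domains, n_domains)))
relation = (relation + relation.T) / 2

def ensemble_prediction(z, domain, heads, relation):
    """Relation-weighted ensemble over the heads of the *other* domains,
    sketching READ's simulated generalization step."""
    others = [j for j in range(len(heads)) if j != domain]
    w = relation[domain, others]
    w = np.exp(w) / np.exp(w).sum()           # normalize relation weights
    logits = np.stack([heads[j] @ z for j in others])
    return (w[:, None] * logits).sum(axis=0)  # weighted average of logits

pred = ensemble_prediction(z, domain=0, heads=heads, relation=relation)
print(pred.shape)  # (2,): one logit per class
```

During training, this ensemble prediction would be aligned with the target label through the proposed regularization, so that held-out domains can be served by relation-weighted combinations of the trained heads.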