A fine-grained analysis of robustness to distribution shifts

Published: 02 Dec 2021 (Last Modified: 05 May 2023) · NeurIPS 2021 Workshop DistShift Poster
Keywords: robustness, distribution shifts
TL;DR: We introduce an experimental framework that evaluates the robustness of a variety of approaches under varying distribution shifts and amounts of shift.
Abstract: Robustness to distribution shifts is critical for deploying machine learning models in the real world. Despite this necessity, there has been little work on defining the underlying mechanisms that cause these shifts or on evaluating the robustness of algorithms across multiple, distinct distribution shifts. To this end, we introduce a framework that enables fine-grained analysis of various distribution shifts. We provide a holistic analysis of current state-of-the-art methods by evaluating 19 distinct methods, grouped into five categories, on both synthetic and real-world datasets. Overall, we train more than 85K models. Our experimental framework can be easily extended to include new methods, shifts, and datasets. Unlike previous work [Gulrajani and Lopez-Paz, 2021], we find that progress has been made over a standard ERM baseline; in particular, pre-training and augmentations (learned or heuristic) offer large gains in many cases. However, the best methods are not consistent across different datasets and shifts. A longer version of this paper is available at https://arxiv.org/abs/2110.11328.