Rectifying Group Irregularities in Explanations for Distribution Shift

Published: 27 Oct 2023, Last Modified: 23 Nov 2023, NeurIPS XAIA 2023
TL;DR: We find that explanations for distribution shift can arbitrarily degrade on subpopulations, so we mitigate this problem using a group robust learning approach.
Abstract: It is well-known that real-world changes constituting distribution shift adversely affect model performance. However, how to characterize those changes in an interpretable manner is poorly understood. Existing techniques take the form of shift explanations that elucidate how samples map from the original distribution toward the shifted one by reducing the disparity between the two distributions. However, these methods can introduce group irregularities, leading to explanations that are less feasible and robust. To address these issues, we propose Group-aware Shift Explanations (GSE), an explanation method that leverages worst-group optimization to rectify group irregularities. We demonstrate that GSE not only maintains group structures, but can improve feasibility and robustness over a variety of domains by up to 20% and 25% respectively.
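To make the worst-group idea concrete, the sketch below learns a per-feature translation (a simple shift explanation) that maps source samples toward the target distribution while minimizing the largest per-group discrepancy rather than the average one. This is a minimal illustration, not the paper's implementation: the mean-difference objective, the translation-only explanation, and the function name are assumptions standing in for whatever divergence and mapping GSE actually optimizes.

```python
# Minimal sketch (assumed setup, not the authors' code): learn a per-feature
# shift delta so that source samples mapped by x -> x + delta approach the
# target distribution, using a worst-group objective in the spirit of group
# robust learning. The per-group mean-difference loss is a crude stand-in
# for a proper distributional divergence.
import torch


def group_aware_shift_explanation(src, tgt, groups, n_steps=500, lr=0.05):
    """src, tgt: (n, d) tensors; groups: (n,) integer group labels for src."""
    delta = torch.zeros(src.shape[1], requires_grad=True)  # the explanation
    opt = torch.optim.Adam([delta], lr=lr)
    tgt_mean = tgt.mean(dim=0)
    group_ids = groups.unique()
    for _ in range(n_steps):
        opt.zero_grad()
        # Discrepancy of each mapped source group from the target distribution.
        losses = torch.stack([
            ((src[groups == g] + delta).mean(dim=0) - tgt_mean).pow(2).sum()
            for g in group_ids
        ])
        # Worst-group optimization: improve the group that is explained worst.
        losses.max().backward()
        opt.step()
    return delta.detach()


if __name__ == "__main__":
    # Toy data: two source groups centered at different locations, and a
    # target produced by a uniform shift of (1.0, -0.5) on every sample.
    torch.manual_seed(0)
    g0 = torch.randn(100, 2)
    g1 = torch.randn(100, 2) + torch.tensor([4.0, 0.0])
    src = torch.cat([g0, g1])
    groups = torch.cat([torch.zeros(100), torch.ones(100)]).long()
    tgt = src + torch.tensor([1.0, -0.5])
    print(group_aware_shift_explanation(src, tgt, groups))  # roughly (1.0, -0.5)
```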
Submission Track: Full Paper Track
Application Domain: None of the above / Not applicable
Clarify Domain: Both natural language processing and computer vision
Survey Question 1: Distribution shift, which occurs when the training and testing distributions for a machine learning model differ, causes problems for applying machine learning techniques across domains. Explaining how and why the distribution itself changes is increasingly important, but existing techniques break down when subgroups exist within a dataset. We propose a method that mitigates this problem in explanations using insights from group robust learning.
Survey Question 2: This problem is inherently tied to explainability since we want to understand what changed in a distribution rather than stopping at detecting whether a change occurred.
Survey Question 3: To achieve explainability, we directly learn explanations. One existing explainability approach we used was counterfactual explanation, via techniques such as DiCE; a standard usage sketch follows below.
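For reference, the snippet below shows standard usage of the DiCE library (dice-ml) for generating counterfactual explanations. The toy dataset, feature names, and classifier are hypothetical placeholders; they stand in for the paper's actual data and models and are not taken from the paper.

```python
# A minimal, self-contained DiCE example on synthetic data (assumed stand-in
# for the paper's setup). Requires `pip install dice-ml scikit-learn pandas`.
import dice_ml
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical toy dataset with four continuous features and a binary label.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
df = pd.DataFrame(X, columns=["f0", "f1", "f2", "f3"])
df["label"] = y

clf = RandomForestClassifier(random_state=0).fit(df[["f0", "f1", "f2", "f3"]], df["label"])

# Wrap the data and model for DiCE, then request counterfactuals that flip
# the model's prediction for one query instance.
data = dice_ml.Data(dataframe=df,
                    continuous_features=["f0", "f1", "f2", "f3"],
                    outcome_name="label")
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")

query = df.drop(columns="label").iloc[[0]]
cfs = explainer.generate_counterfactuals(query, total_CFs=3, desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)
```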
Submission Number: 75