Estimating and Explaining Model Performance When Both Covariates and Labels Shift

Published: 31 Oct 2022, Last Modified: 09 Jan 2023
NeurIPS 2022 Accept
Keywords: ML models, data distribution shift, model deployment and monitoring
Abstract: Deployed machine learning (ML) models often encounter new user data that differs from their training data. Therefore, estimating how well a given model might perform on the new data is an important step toward reliable ML applications. This is very challenging, however, as the data distribution can change in flexible ways, and we may not have any labels on the new data, which is often the case in monitoring settings. In this paper, we propose a new distribution shift model, Sparse Joint Shift (SJS), which considers the joint shift of both labels and a few features. This unifies and generalizes several existing shift models including label shift and sparse covariate shift, where only marginal feature or label distribution shifts are considered. We describe mathematical conditions under which SJS is identifiable. We further propose SEES, an algorithmic framework to characterize the distribution shift under SJS and to estimate a model’s performance on new data without any labels. We conduct extensive experiments on several real-world datasets with various ML models. Across different datasets and distribution shifts, SEES reduces shift estimation error by up to an order of magnitude compared with existing approaches.
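To make the abstract's central claim more concrete, the following is a minimal sketch, in our own notation rather than the paper's (the paper's exact formalization may differ), of the kind of assumption a sparse joint shift model imposes and how label shift and sparse covariate shift fall out as special cases.

```latex
% Sketch (our notation): p_s and p_t denote the source and target joint densities
% over features x and label y, and S indexes a small subset of the features.
% An SJS-style assumption lets the joint distribution of (x_S, y) shift arbitrarily
% while the conditional distribution of the remaining features is preserved:
\[
  p_t\left(x_{\bar S} \mid x_S, y\right) \;=\; p_s\left(x_{\bar S} \mid x_S, y\right).
\]
% Taking S to be empty recovers label shift, p_t(x \mid y) = p_s(x \mid y).
% A sparse covariate shift, where only the marginal of x_S changes while
% p(y \mid x) is fixed, also satisfies the condition above with the same S.
```

The abstract further states that SEES estimates a model's performance on unlabeled new data once the shift has been characterized. The snippet below is not the SEES algorithm; it is only an illustrative sketch of the generic importance-weighting step that such an approach can build on, assuming weights w(x_S, y) ≈ p_t(x_S, y) / p_s(x_S, y) have already been estimated by some shift-characterization procedure. The function name `reweighted_accuracy` and the toy arrays are hypothetical.

```python
# Illustrative sketch (not the SEES algorithm): if the shift is summarized by
# importance weights w(x_S, y) over a small feature subset S and the label,
# target-distribution performance can be estimated from labeled source data alone.
import numpy as np

def reweighted_accuracy(y_true_source, y_pred_source, weights):
    """Estimate target-distribution accuracy from labeled source predictions.

    weights[i] is assumed to approximate p_target(x_S_i, y_i) / p_source(x_S_i, y_i).
    """
    correct = (y_true_source == y_pred_source).astype(float)
    # Self-normalized importance-weighted estimate of E_target[1{model is correct}].
    return np.sum(weights * correct) / np.sum(weights)

# Hypothetical usage with stand-in arrays:
y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])
w = np.array([0.5, 2.0, 1.0, 0.5, 2.0])  # assumed pre-estimated importance weights
print(reweighted_accuracy(y_true, y_pred, w))
```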
TL;DR: We propose a method to estimate model performance on a new dataset without labels when both the label and features can shift compared to the training set.
Supplementary Material: pdf
