Diagnostic Tool for Out-of-Sample Model Evaluation

Published: 01 Jan 2022, Last Modified: 05 May 2023 (CoRR 2022)
Abstract: Assessment of model fitness is a key part of machine learning. The standard paradigm is to learn models by minimizing a chosen loss function averaged over training data, with the aim of achieving small losses on future data. In this paper, we consider the use of a finite calibration data set to characterize the future, out-of-sample losses of a model. We propose a simple model diagnostic tool that provides finite-sample guarantees under weak assumptions. The tool is simple to compute and to interpret. Several numerical experiments are presented to show how the proposed method quantifies the impact of distribution shifts, aids the analysis of regression, and enables model selection as well as hyper-parameter tuning.
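To make the idea concrete, below is a minimal sketch of one way such a calibration-based diagnostic can work. It is not the paper's exact procedure; it illustrates the general principle the abstract describes, namely that order statistics of losses on a held-out calibration set yield distribution-free, finite-sample bounds on future losses under an exchangeability assumption. The function name `loss_quantile_bound` and the synthetic loss data are illustrative assumptions.

```python
import numpy as np

def loss_quantile_bound(cal_losses, alpha=0.1):
    """Return a level that a future loss exceeds with probability <= alpha.

    Assumes calibration and future losses are exchangeable. Then for the
    k-th smallest of n calibration losses, P(future loss <= L_(k)) >= k/(n+1),
    a standard distribution-free order-statistic guarantee.
    """
    n = len(cal_losses)
    # Smallest k with k / (n + 1) >= 1 - alpha; only valid if k <= n.
    k = int(np.ceil((1 - alpha) * (n + 1)))
    if k > n:
        raise ValueError("Calibration set too small for the requested alpha.")
    return np.sort(cal_losses)[k - 1]

# Hypothetical usage: stand-in losses of a trained model on calibration data.
rng = np.random.default_rng(0)
cal_losses = rng.exponential(scale=1.0, size=500)
bound = loss_quantile_bound(cal_losses, alpha=0.1)
print(f"With probability >= 0.9, a future loss is at most {bound:.3f}")
```

Because the bound depends only on the ranks of the calibration losses, it holds without parametric assumptions on the loss distribution, which is in the spirit of the weak-assumption, finite-sample guarantees the abstract claims; comparing such bounds across models or hyper-parameter settings is one plausible route to the model selection and tuning uses mentioned above.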