Learning under Distribution Mismatch and Model Misspecification

Published: 01 Jan 2021, ISIT 2021
Abstract: We study learning algorithms under a mismatch between the training and test data distributions. We quantify the effect of this mismatch on the generalization error and on model misspecification. Moreover, we establish a connection between the generalization error and rate-distortion theory, which allows one to use bounds from rate-distortion theory to derive new bounds on the generalization error and vice versa. In particular, the rate-distortion-based bound strictly improves on the earlier bound of Xu and Raginsky, even when there is no mismatch. We also discuss how "auxiliary loss functions" can be used to obtain upper bounds on the generalization error. A full version of this paper is accessible at [1].
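For context, a minimal sketch of the quantities the abstract refers to, in notation of our own choosing (the paper's own definitions and its mismatched-case bounds may differ): the generalization error of an algorithm P_{W|S} trained on S ~ mu^n and tested on Z ~ mu', together with the matched-case mutual-information bound of Xu and Raginsky that the rate-distortion-based bound is said to improve on.

```latex
% Assumed notation (not taken from the paper):
%   S = (Z_1, ..., Z_n) ~ mu^n   training sample,   Z ~ mu'   test sample,
%   L_{mu'}(w) = E_{Z ~ mu'}[ ell(w, Z) ],   L_S(w) = (1/n) sum_i ell(w, Z_i).
% Generalization error under train/test mismatch:
\mathrm{gen}(\mu, \mu') \;=\; \mathbb{E}\big[ L_{\mu'}(W) \big] \;-\; \mathbb{E}\big[ L_S(W) \big]

% Xu--Raginsky bound (matched case mu' = mu, with ell(w, Z)
% sigma-sub-Gaussian under mu for every w):
\big| \mathrm{gen}(\mu, \mu) \big| \;\le\; \sqrt{ \frac{2 \sigma^2 \, I(S; W)}{n} }
```

The abstract's rate-distortion-based bound strictly tightens the right-hand side above even when mu' = mu; its exact form is given in the full paper [1].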