Optimal Learners for Realizable Regression: PAC Learning and Online Learning

Published: 21 Sept 2023, Last Modified: 02 Nov 2023 · NeurIPS 2023 (oral)
Keywords: Learning Theory, Regression, PAC Learning, Online Learning
TL;DR: We provide (almost) optimal learners, in terms of their sample complexity, for realizable regression in both the PAC learning and online learning settings.
Abstract: In this work, we aim to characterize the statistical complexity of realizable regression both in the PAC learning setting and the online learning setting. Previous work had established the sufficiency of finiteness of the fat shattering dimension for PAC learnability and the necessity of finiteness of the scaled Natarajan dimension, but little progress had been made towards a more complete characterization since the work of Simon (SICOMP '97). To this end, we first introduce a minimax instance optimal learner for realizable regression and propose a novel dimension that both qualitatively and quantitatively characterizes which classes of real-valued predictors are learnable. We then identify a combinatorial dimension related to the graph dimension that characterizes ERM learnability in the realizable setting. Finally, we establish a necessary condition for learnability based on a combinatorial dimension related to the DS dimension, and conjecture that it may also be sufficient in this context. Additionally, in the context of online learning we provide a dimension that characterizes the minimax instance optimal cumulative loss up to a constant factor and design an optimal online learner for realizable regression, thus resolving an open question raised by Daskalakis and Golowich in STOC '22.
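Background note (not part of the submission): the fat shattering dimension referenced in the abstract is the standard scale-sensitive dimension of Kearns and Schapire; a minimal statement of its usual definition, in LaTeX:

A set $\{x_1, \dots, x_d\} \subseteq \mathcal{X}$ is $\gamma$-shattered by a class $\mathcal{F} \subseteq \mathbb{R}^{\mathcal{X}}$ if there exist witnesses $r_1, \dots, r_d \in \mathbb{R}$ such that for every $b \in \{0,1\}^d$ some $f \in \mathcal{F}$ satisfies
$$
f(x_i) \geq r_i + \gamma \ \text{ if } b_i = 1, \qquad f(x_i) \leq r_i - \gamma \ \text{ if } b_i = 0 .
$$
The fat shattering dimension $\mathrm{fat}_{\gamma}(\mathcal{F})$ is the largest $d$ such that some set of size $d$ is $\gamma$-shattered by $\mathcal{F}$. The paper's contribution is a new dimension beyond this one; the definition above is only the classical notion it builds on.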
Supplementary Material: pdf
Submission Number: 8990