How many samples are needed to leverage smoothness?

Published: 21 Sept 2023, Last Modified: 14 Jan 2024 · NeurIPS 2023 poster
Keywords: Statistical learning, breaking the curse of dimensionality, smoothness priors, kernel methods
TL;DR: Investigating transitory regimes in methods that break the curse of dimensionality thanks to smoothness priors
Abstract: A core principle in statistical learning is that smoothness of the target function makes it possible to break the curse of dimensionality. However, learning a smooth function seems to require enough samples close to one another to obtain meaningful estimates of high-order derivatives, which would be hard in machine learning problems where the ratio between the number of samples and the input dimension is relatively small. By deriving new lower bounds on the generalization error, this paper formalizes this intuition, before investigating the role of constants and transitory regimes, which are usually left out of classical learning theory statements although they play a dominant role in practice.
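
To make the "samples close to one another" intuition concrete, here is a minimal numerical sketch (not from the paper; the sample size n and the dimensions tested are arbitrary illustrative choices): for a fixed number of samples, the typical distance to the nearest neighbor grows rapidly with the input dimension, so local, derivative-based estimation degrades when the ratio of samples to dimension is small.

    # Illustrative sketch (not from the paper): for a fixed sample size n,
    # the typical nearest-neighbor distance grows quickly with the input
    # dimension d, so local estimates of high-order derivatives become
    # unreliable when n / d is small.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    n = 1000  # number of samples (hypothetical choice)

    for d in (1, 2, 5, 10, 50, 100):
        X = rng.uniform(size=(n, d))  # n points in the unit cube [0, 1]^d
        tree = cKDTree(X)
        # k=2 because the closest neighbor of each point is the point itself
        dist, _ = tree.query(X, k=2)
        print(f"d={d:3d}  median nearest-neighbor distance: {np.median(dist[:, 1]):.3f}")

Running this, the median nearest-neighbor distance grows by several orders of magnitude between d = 1 and d = 100 at fixed n, in line with the abstract's point that estimating high-order derivatives requires samples close to one another.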
Supplementary Material: zip
Submission Number: 1257