Keywords: statistical learning theory, nonuniform learning, PAC learning, universal learning, VC dimension
TL;DR: We study a variant of nonuniform PAC learning in which the constants in the learning rate may depend on the marginal distribution, and establish a trichotomy of the possible rates.
Abstract: We revisit the classical model of nonuniform PAC learning, introduced by Benedek and Itai [1994], where generalization guarantees may depend on the target concept (but not on the marginal distribution). In this work, we propose and study a complementary variant, which we call *marginal-nonuniform learning*. In this setting, guarantees may depend on the marginal distribution over the domain, but must hold uniformly over all concepts. This captures the intuition that some data distributions are inherently easier to learn from than others, allowing for a flexible, distribution-sensitive view of learnability. Our main result is a complete characterization of the achievable learning rates in this model, revealing a trichotomy: exponential rates of the form $e^{-n}$ arise precisely when the hypothesis class is finite; linear rates of the form $d/n$ are achievable when a recently introduced combinatorial parameter, the VC-eluder dimension $d$, is finite; and arbitrarily slow rates may occur when $d = \infty$. Additionally, in the original (concept-)nonuniform model, we show that linear rates are achievable for all learnable classes. We conclude by situating marginal-nonuniform learning within the landscape of universal learning, and by discussing its relationship to other distribution-dependent learning paradigms.
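In symbols, the trichotomy can be sketched as follows; the rate function $\varepsilon_P(n)$ and the constant $c(P)$ are our notation (not taken from the paper) for the optimal marginal-nonuniform error after $n$ samples under marginal $P$, with constants allowed to depend on $P$ but not on the target concept:

```latex
% Schematic summary of the trichotomy; \varepsilon_P(n) and c(P) are assumed
% notation for the optimal marginal-nonuniform rate and a P-dependent constant,
% and d denotes the VC-eluder dimension of the hypothesis class \mathcal{H}.
\[
\varepsilon_P(n) \;=\;
\begin{cases}
  e^{-c(P)\,n} & \text{if } |\mathcal{H}| < \infty, \\
  \Theta\!\left(d/n\right) & \text{if } |\mathcal{H}| = \infty \text{ and } d < \infty, \\
  \text{arbitrarily slow} & \text{if } d = \infty.
\end{cases}
\]
```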
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 22900