Do PAC-Learners Learn the Marginal Distribution?

Published: 18 Dec 2024 · Last Modified: 14 Feb 2025 · ALT 2025 · CC BY 4.0
Abstract: The Fundamental Theorem of PAC Learning asserts that learnability of a concept class $\mathcal{H}$ is equivalent to the *uniform convergence* of empirical error in $\mathcal{H}$ to its mean, and equivalently to *density estimation*: learnability of the underlying marginal distribution with respect to events in $\mathcal{H}$. This seminal equivalence relies strongly on PAC learning's "distribution-free" assumption, namely that the adversary may choose any marginal distribution over data. Unfortunately, the distribution-free model is known to be overly adversarial in practice, failing to predict the success of modern machine learning algorithms; yet without the Fundamental Theorem, our theoretical understanding of learning under distributional constraints remains highly limited. In this work, we revisit the connection between PAC learning, uniform convergence, and density estimation beyond the distribution-free setting, where the adversary is restricted to choosing a marginal distribution from a known family $\mathscr{P}$. We prove that while the traditional Fundamental Theorem fails, a finer-grained connection between the three fundamental notions continues to hold:

1. PAC learning is strictly sandwiched between two relaxed models of density estimation, differing only in whether the learner knows the set of well-estimated events in $\mathcal{H}$.
2. Under reasonable assumptions on $\mathcal{H}$ and $\mathscr{P}$, density estimation is equivalent to *uniform estimation*, a weakening of uniform convergence that allows non-empirical estimators.

Together, our results give a clearer picture of how the Fundamental Theorem extends beyond the distribution-free setting and shed new light on the classically challenging problem of learning under arbitrary distributional assumptions.
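For reference, the *uniform convergence* property invoked above is the standard one; a minimal statement (a sketch, identifying each concept $h \in \mathcal{H}$ with the event it defines over the domain): for every $\varepsilon, \delta > 0$ there exists a sample size $n(\varepsilon, \delta)$ such that for all $n \ge n(\varepsilon, \delta)$ and every marginal distribution $D$,

$$\Pr_{S = (x_1, \dots, x_n) \sim D^n}\left[\, \sup_{h \in \mathcal{H}} \left| \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}[x_i \in h] \;-\; D(h) \right| \le \varepsilon \,\right] \;\ge\; 1 - \delta.$$

The distribution-constrained setting studied in the paper restricts "every marginal distribution $D$" to "every $D \in \mathscr{P}$".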
Submission Number: 53