The Power of Random Features and the Limits of Distribution-Free Gradient Descent

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We prove that if a function class is learnable in a distribution-free manner by gradient descent, then most functions in the class must have a relatively simple random feature representation.
Abstract: We study the relationship between gradient-based optimization of parametric models (e.g., neural networks) and optimization of linear combinations of random features. Our main result shows that if a parametric model can be learned using mini-batch stochastic gradient descent (bSGD) without making assumptions about the data distribution, then with high probability, the target function can also be approximated using a polynomial-sized combination of random features. The size of this combination depends on the number of gradient steps and numerical precision used in the bSGD process. This finding reveals fundamental limitations of distribution-free learning in neural networks trained by gradient descent, highlighting why making assumptions about data distributions is often crucial in practice. Along the way, we also introduce a new theoretical framework called average probabilistic dimension complexity (adc), which extends the probabilistic dimension complexity developed by Kamath et al. (2020). We prove that adc has a polynomial relationship with statistical query dimension, and use this relationship to demonstrate an infinite separation between adc and standard dimension complexity.
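To make the abstract's central object concrete, here is a minimal, hypothetical sketch (not taken from the paper) of what "approximating a target function by a polynomial-sized linear combination of random features" looks like: a frozen random feature map is drawn once, and only the linear coefficients on top of it are fit. The ReLU feature map, the target function, and all sizes below are illustrative choices, not the paper's construction.

```python
# Illustrative sketch: fitting a linear combination of random features.
# All choices (feature map, target, dimensions) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

d = 10             # input dimension (hypothetical)
n_features = 500   # size of the random-feature combination (hypothetical)
n_samples = 2000   # training samples (hypothetical)

# Random ReLU features: phi_i(x) = max(0, w_i . x + b_i) with random w_i, b_i.
W = rng.normal(size=(n_features, d))
b = rng.normal(size=n_features)

def random_features(X):
    """Map inputs to the fixed (frozen) random-feature representation."""
    return np.maximum(0.0, X @ W.T + b)

def target(X):
    """A hypothetical target function to approximate."""
    return np.sin(X[:, 0]) + 0.5 * np.cos(X[:, 1] * X[:, 2])

X_train = rng.normal(size=(n_samples, d))
y_train = target(X_train)

# Learn only the linear coefficients on top of the random features.
Phi = random_features(X_train)
coef, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

# Evaluate the approximation on fresh data.
X_test = rng.normal(size=(500, d))
mse = np.mean((random_features(X_test) @ coef - target(X_test)) ** 2)
print(f"test MSE of random-feature approximation: {mse:.4f}")
```

The paper's result can be read as saying that, for function classes learnable by distribution-free bSGD, most target functions admit an approximation of this form whose size is polynomial in the number of gradient steps and the numerical precision used.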
Lay Summary: This paper reveals a fundamental limitation of training neural networks without making assumptions about the data. When a learning algorithm is required to work on any possible distribution of data (so-called "distribution-free" learning), it essentially becomes no more powerful than much simpler methods that just combine randomly chosen features. In other words, if you want your neural network to work well on absolutely any kind of data, without knowing anything about what that data looks like, you severely limit what the network can actually learn. This helps explain why successful machine learning in practice almost always involves making smart assumptions about the type of data you're working with, rather than trying to build completely general-purpose systems. The finding suggests that the capabilities seen in modern AI may come not just from powerful algorithms, but from carefully tailoring those algorithms to the specific characteristics of the data they will encounter.
Primary Area: Theory->Learning Theory
Keywords: gradient descent, distribution-free, random features, statistical queries
Submission Number: 10768