On kernel-based statistical learning theory in the mean field limit

Published: 21 Sept 2023, Last Modified: 02 Nov 2023 · NeurIPS 2023 poster
Keywords: Reproducing Kernel Hilbert Spaces, Kernel Methods, Mean Field Limit, Interacting Particle Systems, Support Vector Machines, Statistical Learning Theory
TL;DR: Mean field convergence results for kernel-based statistical learning methods (convergence of kernels, of empirical and infinite-sample risks, and of SVM solutions), in particular for machine learning in the context of large multiagent systems.
Abstract: In many applications of machine learning, a large number of variables are considered. Motivated by machine learning of interacting particle systems, we consider the situation when the number of input variables goes to infinity. First, we continue the recent investigation of the mean field limit of kernels and their reproducing kernel Hilbert spaces, completing the existing theory. Next, we provide results relevant for approximation with such kernels in the mean field limit, including a representer theorem. Finally, we use these kernels in the context of statistical learning in the mean field limit, focusing on Support Vector Machines. In particular, we show mean field convergence of empirical and infinite-sample solutions as well as the convergence of the corresponding risks. On the one hand, our results establish rigorous mean field limits in the context of kernel methods, providing new theoretical tools and insights for large-scale problems. On the other hand, our setting corresponds to a new form of limit of learning problems, which seems not to have been investigated yet in the statistical learning theory literature.
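To make the setting concrete, the following is a minimal, hypothetical sketch (not the paper's construction) of a permutation-invariant kernel on particle configurations: averaging a fixed base kernel over all particle pairs yields a kernel that depends on a configuration only through its empirical measure, so it remains comparable across particle counts, in the spirit of the mean field limit described above. The names `base_kernel`, `mean_field_kernel`, and the parameter `gamma` are illustrative assumptions.

```python
import numpy as np

def base_kernel(s, t, gamma=1.0):
    """Gaussian kernel on individual particle states in R^d (illustrative choice)."""
    diff = np.asarray(s) - np.asarray(t)
    return np.exp(-gamma * np.dot(diff, diff))

def mean_field_kernel(x, y, gamma=1.0):
    """Permutation-invariant 'double sum' kernel on particle configurations.

    k(x, y) = (1 / (M * N)) * sum_{i,j} base_kernel(x_i, y_j),
    where x has M particles and y has N particles. The value depends on x and y
    only through their empirical measures (1/M) sum_i delta_{x_i} and
    (1/N) sum_j delta_{y_j}, so configurations of different sizes are comparable.
    """
    x, y = np.atleast_2d(x), np.atleast_2d(y)
    total = 0.0
    for xi in x:          # loop over particles of the first configuration
        for yj in y:      # loop over particles of the second configuration
            total += base_kernel(xi, yj, gamma)
    return total / (len(x) * len(y))

# Two configurations with different particle counts, each particle in R^2.
x = np.random.randn(50, 2)
y = np.random.randn(200, 2)
print(mean_field_kernel(x, y))
```

Because the kernel factors through empirical measures, letting the particle count grow corresponds to evaluating a kernel on probability measures, which is one natural way to picture the mean field limits of kernels that the paper studies rigorously.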
Supplementary Material: pdf
Submission Number: 7636