The Bounds on the Rate of Uniform Convergence for Learning Machine

Published: 01 Jan 2005, Last Modified: 25 Jan 2025 · ISNN (1) 2005 · CC BY-SA 4.0
Abstract: Generalization performance is an important property of learning machines. A desirable learning machine should be stable with respect to the training samples. We consider empirical risk minimization over function sets from which noise has been eliminated. By applying Kutin's inequality, we establish bounds on the rate of uniform convergence of the empirical risks to their expected risks for learning machines, and we compare these bounds with known results.
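To make the notion of uniform convergence concrete, the following is a standard formulation (not the paper's specific result, which relies on Kutin's extension of McDiarmid's inequality): for a function class $\mathcal{F}$, a sample $S = \{z_1, \dots, z_n\}$ drawn i.i.d. from a distribution $P$, and a bounded loss $\ell$, one bounds the probability that the empirical risk deviates from the expected risk uniformly over $\mathcal{F}$:

```latex
% Expected and empirical risks for f in a function class F:
%   R(f)      = E_{z ~ P} [ \ell(f, z) ]
%   R_emp(f)  = (1/n) \sum_{i=1}^{n} \ell(f, z_i)
%
% A uniform-convergence bound asserts that, for every eps > 0,
%
%   P ( \sup_{f \in F} | R_emp(f) - R(f) | > eps )  \le  \delta(n, eps, F),
%
% where \delta -> 0 as n -> \infty. The rate at which \delta decays
% (e.g., exponentially in n for bounded losses) is the "rate of
% uniform convergence" that bounds of this kind quantify.
\[
  \Pr\!\left\{ \sup_{f \in \mathcal{F}}
    \bigl| R_{\mathrm{emp}}(f) - R(f) \bigr| > \varepsilon \right\}
  \;\le\; \delta(n, \varepsilon, \mathcal{F}),
  \qquad \delta(n, \varepsilon, \mathcal{F}) \xrightarrow[n \to \infty]{} 0 .
\]
```

The abstract's contribution is a sharper form of $\delta(n, \varepsilon, \mathcal{F})$ obtained via Kutin's concentration inequality for the restricted (noise-eliminated) function sets.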