Abstract: Generalization performance is a key property of learning machines, and a desirable learning machine should be stable with respect to its training samples. We consider empirical risk minimization over function sets from which noise has been eliminated. By applying Kutin's inequality, we establish bounds on the rate of uniform convergence of the empirical risks to their expected risks for learning machines, and we compare these bounds with known results.