Keywords: Generalization, stochastic algorithms, concentration inequalities, PAC-Bayes
TL;DR: The paper gives generalization bounds for stochastic learning algorithms, including the Gibbs algorithm
Abstract: A method to prove generalization results for a class of stochastic learning algorithms is presented. It applies whenever the algorithm generates a distribution that is absolutely continuous relative to some a-priori measure and the logarithm of its density is exponentially concentrated about its mean. Applications include bounds for the Gibbs algorithm, randomizations of stable deterministic algorithms, combinations thereof, and PAC-Bayesian bounds with data-dependent priors.
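To illustrate the setting the abstract describes, here is a minimal sketch in LaTeX; the notation (prior \pi, output distribution \rho_S, inverse temperature \beta, empirical loss \hat{L}_S) is assumed for illustration and need not match the paper's.

% Sketch only: symbols \pi, \rho_S, \beta, \hat{L}_S are illustrative assumptions.
% The algorithm maps a sample S to a distribution \rho_S \ll \pi, and the
% condition in the abstract asks that \ln \frac{d\rho_S}{d\pi}(h) be
% exponentially concentrated about its mean. For the Gibbs algorithm the
% density relative to the prior is
\[
  \frac{d\rho_S}{d\pi}(h)
  \;=\;
  \frac{e^{-\beta \hat{L}_S(h)}}
       {\mathbb{E}_{h' \sim \pi}\!\left[e^{-\beta \hat{L}_S(h')}\right]},
  \qquad\text{so}\qquad
  \ln\frac{d\rho_S}{d\pi}(h)
  \;=\;
  -\beta \hat{L}_S(h) - \ln \mathbb{E}_{h' \sim \pi}\!\left[e^{-\beta \hat{L}_S(h')}\right].
\]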
Primary Area: Learning theory
Submission Number: 15404