Keywords: Generalization bounds, Geometric functional analysis
Abstract: Understanding how a neural network behaves across multiple domains is key to improving its explainability, generalizability, and robustness. In this paper, we prove a novel generalization bound using fundamental concepts from geometric functional analysis. Specifically, by leveraging the covering number of the training dataset and applying certain geometric inequalities, we show that a sharp bound can be obtained. To the best of our knowledge, this is the first approach that uses covering numbers to estimate such generalization bounds.
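The paper's proofs are not reproduced here, but the central quantity it relies on, the covering number N(S, ε) of a dataset S, can be illustrated with a minimal sketch. The greedy ε-net construction below is a standard way to upper-bound a covering number; the function name, the toy circle dataset, and the choice of Euclidean distance are illustrative assumptions, not details from the paper.

```python
import math

def covering_number(points, eps):
    """Greedy upper bound on the covering number N(S, eps):
    the number of Euclidean eps-balls needed to cover the set S.

    Each point farther than eps from all current centers becomes
    a new center, so every point ends up within eps of some center."""
    centers = []
    for p in points:
        if all(math.dist(p, c) > eps for c in centers):
            centers.append(p)
    return len(centers)

# Toy dataset: 100 points on the unit circle.
pts = [(math.cos(2 * math.pi * i / 100), math.sin(2 * math.pi * i / 100))
       for i in range(100)]

coarse = covering_number(pts, 0.5)   # few balls needed at a coarse scale
fine = covering_number(pts, 0.05)    # finer scale requires more balls
```

The growth of N(S, ε) as ε shrinks is exactly the kind of complexity measure that covering-number-based generalization bounds trade off against the sample size.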
One-sentence Summary: We provide improved generalization bounds for deep neural networks using sophisticated tools from geometric functional analysis.
Supplementary Material: zip