Abstract: The question of why deep learning algorithms generalize so well has attracted increasing
research interest. However, most of the well-established approaches,
such as hypothesis capacity, stability or sparseness, have not provided complete
explanations (Zhang et al., 2016; Kawaguchi et al., 2017). In this work, we focus
on the robustness approach (Xu & Mannor, 2012), i.e., if the error of a hypothesis
does not change much under perturbations of its training examples, then it
will also generalize well. As most deep learning algorithms are stochastic (e.g.,
Stochastic Gradient Descent, Dropout, and Bayes-by-backprop), we revisit the robustness
arguments of Xu & Mannor and introduce a new approach, ensemble robustness, which concerns the robustness of a population of hypotheses. Through
the lens of ensemble robustness, we reveal that a stochastic learning algorithm can
generalize well as long as its sensitivity to adversarial perturbations is bounded
on average over the training examples. Moreover, an algorithm may be sensitive to
some adversarial examples (Goodfellow et al., 2015) but still generalize well. To
support our claims, we provide extensive simulations for different deep learning
algorithms and network architectures, exhibiting a strong correlation between
ensemble robustness and the ability to generalize.
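For intuition, the following is a minimal sketch, not the paper's exact protocol, of how ensemble robustness might be estimated empirically: train several independent stochastic runs of the same algorithm, perturb each training example with a one-step FGSM-style move (Goodfellow et al., 2015), and average the resulting loss deviation over examples and over runs. The toy data, model, and hyperparameters below are all illustrative assumptions.

```python
# Illustrative sketch (assumptions, not the paper's exact protocol):
# approximate ensemble robustness by the loss deviation under a one-step
# FGSM-style perturbation, averaged over training examples and over
# several independent stochastic training runs.
import torch
import torch.nn as nn

def train_one_run(X, y, epochs=50, seed=0):
    """Train one stochastic hypothesis (one member of the ensemble)."""
    torch.manual_seed(seed)
    model = nn.Sequential(nn.Linear(X.shape[1], 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return model

def loss_deviation(model, X, y, eps=0.1):
    """Mean |loss(x + delta) - loss(x)| under a one-step adversarial move."""
    loss_fn = nn.CrossEntropyLoss(reduction="none")
    X_adv = X.clone().requires_grad_(True)
    loss = loss_fn(model(X_adv), y)
    loss.sum().backward()  # populates X_adv.grad
    with torch.no_grad():
        X_pert = X_adv + eps * X_adv.grad.sign()  # FGSM-style perturbation
        dev = (loss_fn(model(X_pert), y) - loss).abs()
    return dev.mean().item()

# Toy data: two Gaussian blobs (illustrative only).
torch.manual_seed(42)
X = torch.cat([torch.randn(100, 5) + 1.0, torch.randn(100, 5) - 1.0])
y = torch.cat([torch.zeros(100, dtype=torch.long), torch.ones(100, dtype=torch.long)])

# Ensemble robustness proxy: mean deviation over independent stochastic runs.
devs = [loss_deviation(train_one_run(X, y, seed=s), X, y) for s in range(5)]
print(f"estimated ensemble robustness (avg loss deviation): {sum(devs)/len(devs):.4f}")
```

Note that the quantity averaged here is a deviation over the whole ensemble of runs: a low average across hypotheses, rather than a low worst case for any single run, is what the ensemble robustness argument ties to generalization.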
TL;DR: Explaining the generalization of stochastic deep learning algorithms, theoretically and empirically, via ensemble robustness
Keywords: Robustness, Generalization, Deep Learning, Adversarial Learning