Abstract: As deep neural networks (DNNs) achieve tremendous success across many application domains, researchers have explored many aspects of why they generalize well. In this paper, we provide a novel perspective on this question using the gradient signal-to-noise ratio (GSNR) of parameters during the training process of DNNs. The GSNR of a parameter is defined as the ratio between its gradient's squared mean and variance over the data distribution. Based on several approximations, we establish a quantitative relationship between model parameters' GSNR and the generalization gap. This relationship indicates that a larger GSNR during training leads to better generalization performance. Moreover, we show that, unlike shallow models (e.g. logistic regression, support vector machines), the gradient descent optimization dynamics of DNNs naturally produce large GSNR during training, which is probably the key to DNNs' remarkable generalization ability.
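To make the definition concrete, below is a minimal sketch of how per-parameter GSNR could be estimated empirically, following the definition in the abstract: the squared mean of each parameter's gradient over the data, divided by its variance. The names `model`, `loss_fn`, and `data_loader` are hypothetical placeholders, and the loader is assumed to yield individual examples (batch size 1) so that each gradient is a per-sample gradient.

```python
# Sketch: empirical per-parameter GSNR = E[g]^2 / Var[g] over the data.
# `model`, `loss_fn`, `data_loader` are assumed/hypothetical, not from the paper's code.
import torch

def estimate_gsnr(model, loss_fn, data_loader, eps=1e-12):
    sums, sq_sums, n = {}, {}, 0
    for x, y in data_loader:                # each item treated as one draw from the data distribution
        model.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        for name, p in model.named_parameters():
            if p.grad is None:
                continue
            g = p.grad.detach()
            sums[name] = sums.get(name, 0.0) + g
            sq_sums[name] = sq_sums.get(name, 0.0) + g ** 2
        n += 1
    gsnr = {}
    for name in sums:
        mean = sums[name] / n
        var = sq_sums[name] / n - mean ** 2  # Var[g] = E[g^2] - E[g]^2
        gsnr[name] = mean ** 2 / (var + eps)  # elementwise GSNR for this parameter tensor
    return gsnr
```

The small `eps` term only guards against division by zero for parameters whose gradient variance vanishes; it is an implementation convenience, not part of the definition.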