Relating Regularization and Generalization through the Intrinsic Dimension of Activations

Published: 20 Oct 2022, Last Modified: 10 Nov 2024, HITY Workshop NeurIPS 2022
Keywords: regularization, generalization, intrinsic dimension, grokking
TL;DR: We examine how the intrinsic dimension of activations in deep neural networks is affected by regularization, correlates with improved validation performance, and is coupled with the effects of sudden generalization.
Abstract: Given a pair of models with similar training-set performance, it is natural to assume that the model with simpler internal representations would exhibit better generalization. In this work, we provide empirical evidence for this intuition through an analysis of the intrinsic dimension (ID) of model activations, which can be thought of as the minimal number of factors of variation in the model's representation of the data. First, we show that common regularization techniques uniformly decrease the last-layer ID (LLID) of validation-set activations in image classification models, and we show how this strongly affects generalization performance. We also investigate how excessive regularization decreases a model's ability to extract features from data in earlier layers, leading to a negative effect on validation accuracy even while LLID continues to decrease and training accuracy remains near-perfect. Finally, we examine the LLID over the course of training of models that exhibit grokking. We observe that well after training accuracy saturates, when models "grok" and validation accuracy suddenly improves from random to perfect, there is a co-occurring sudden drop in LLID, thus providing more insight into the dynamics of sudden generalization.
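
For readers unfamiliar with how the intrinsic dimension of activations is measured in practice, below is a minimal sketch of the TwoNN estimator (Facco et al., 2017), a standard nearest-neighbor ID estimator in this literature. This is an illustration under assumptions, not the authors' code; the `twonn_id` helper and the `acts` variable are hypothetical names.

```python
# Minimal sketch of the TwoNN intrinsic-dimension estimator
# (Facco et al., 2017). Not the authors' implementation.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_id(activations: np.ndarray) -> float:
    """Estimate intrinsic dimension of an (n_samples, n_features) array."""
    # Distances to the two nearest neighbors; column 0 is the point itself.
    nn = NearestNeighbors(n_neighbors=3).fit(activations)
    dists, _ = nn.kneighbors(activations)
    r1, r2 = dists[:, 1], dists[:, 2]
    # Keep points whose nearest neighbor is distinct (avoids division by zero).
    mask = r1 > 0
    mu = r2[mask] / r1[mask]
    # Under the TwoNN model, mu follows a Pareto(d) law, so the
    # maximum-likelihood estimate is d_hat = n / sum(log(mu)).
    return mu.size / np.log(mu).sum()

# Hypothetical usage: `acts` would be a matrix of last-layer activations
# collected on the validation set, e.g. acts = model.features(x_val).numpy()
# llid = twonn_id(acts)
```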
Supplementary Material: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/relating-regularization-and-generalization/code)