Understanding Nonlinear Implicit Bias via Region Counts in Input Space

Published: 16 Jun 2024, Last Modified: 05 Jul 2024, HiLD at ICML 2024 Poster, CC BY 4.0
Keywords: implicit bias, generalization gap, region counts
Abstract: One explanation for the strong generalization ability of neural networks is implicit bias. Yet, the definition and understanding of implicit bias in nonlinear settings remain elusive. In this work, we propose to characterize implicit bias by the count of connected regions in the input space that share the same predicted label. Compared with parameter-dependent metrics (e.g., norm or normalized margin), region count is better suited to nonlinear, overparameterized models, because it is determined by the function mapping and is invariant to reparametrization. Empirically, we find that small region counts align with geometrically simple decision boundaries and correlate well with good generalization performance. We also observe that good hyperparameter choices, such as larger learning rates and smaller batch sizes, can induce small region counts. We further establish a theoretical connection between region count and generalization bounds, and explain how a larger learning rate can induce small region counts in neural networks.
Student Paper: Yes
Submission Number: 8
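As a rough illustration of the abstract's central quantity, the sketch below estimates the number of connected same-label regions on a 2D slice of the input space by classifying a grid of points and counting connected components of the resulting label map. This is a minimal sketch, not the paper's method: the names `model`, `anchor`, `dir1`, and `dir2` are assumptions (a trained PyTorch classifier in eval mode and three input-shaped tensors defining the slice), and the finite-resolution, 4-connectivity count only approximates the true region count.

```python
import numpy as np
import torch
from scipy import ndimage

def estimate_region_count(model, anchor, dir1, dir2, span=1.0, resolution=100):
    """Estimate the number of connected same-label regions on the 2D slice
    {anchor + a*dir1 + b*dir2 : a, b in [-span, span]} of the input space.

    `model`, `anchor`, `dir1`, `dir2` are illustrative assumptions: a trained
    classifier returning logits, and input-shaped tensors defining the slice.
    """
    ts = np.linspace(-span, span, resolution)
    # Build the grid of inputs covering the slice.
    grid = torch.stack([anchor + a * dir1 + b * dir2 for a in ts for b in ts])
    with torch.no_grad():
        preds = model(grid).argmax(dim=1).cpu().numpy()
    label_map = preds.reshape(resolution, resolution)

    # Count connected components separately for each predicted class and sum.
    # 4-connectivity on a finite grid is only an approximation of connected
    # regions in the continuous input space.
    total = 0
    for c in np.unique(label_map):
        _, num_components = ndimage.label(label_map == c)
        total += num_components
    return total
```

On a slice through two training points, a small return value would indicate a geometrically simple decision boundary in the sense described above; increasing `resolution` (at the cost of more forward passes) tightens the approximation.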