Implicit Regularization via Neural Feature Alignment

Published: 07 Nov 2020, Last Modified: 05 May 2023
Venue: NeurIPSW 2020: DL-IG Poster
Keywords: deep learning, learning dynamics, implicit regularization
TL;DR: We highlight an alignment effect of neural features during training and claim that this implicitly regularizes the model.
Abstract: We approach the problem of implicit regularization in deep learning from a geometrical viewpoint. We highlight a regularization effect induced by a dynamical alignment of the neural tangent features introduced by Jacot et al. (2018) along a small number of task-relevant directions. This can be interpreted as a combined feature-selection and compression mechanism. By extrapolating a new analysis of Rademacher complexity bounds for linear models, we propose and study a new heuristic measure of complexity that captures this phenomenon, in terms of sequences of tangent kernel classes along the learning trajectories.
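For intuition, below is a minimal sketch (not the authors' code) of the central object the abstract refers to: the empirical tangent kernel K(x, x') = <grad_theta f(x), grad_theta f(x')> built from the tangent features of Jacot et al. (2018), together with one plausible way to quantify its alignment with the labels via a centered kernel-alignment score. The toy MLP, random data, and the specific alignment measure are illustrative assumptions, not the paper's exact definitions.

```python
# Sketch: empirical tangent kernel of a small MLP and its alignment
# with the label direction yy^T. Model, data, and the centered
# kernel-alignment score are illustrative assumptions.
import jax
import jax.numpy as jnp

def mlp(params, x):
    (W1, b1), (W2, b2) = params
    h = jnp.tanh(x @ W1 + b1)
    return (h @ W2 + b2).squeeze(-1)  # scalar output per example

def tangent_features(params, X):
    # Jacobian of outputs w.r.t. all parameters, one flattened row per example.
    jac = jax.jacrev(mlp)(params, X)  # pytree of per-example gradients
    leaves = [j.reshape(X.shape[0], -1) for j in jax.tree_util.tree_leaves(jac)]
    return jnp.concatenate(leaves, axis=1)

def kernel_alignment(K, y):
    # Centered alignment between the tangent kernel K and the label kernel yy^T.
    n = K.shape[0]
    H = jnp.eye(n) - jnp.ones((n, n)) / n
    Kc = H @ K @ H
    Yc = H @ jnp.outer(y, y) @ H
    return jnp.sum(Kc * Yc) / (jnp.linalg.norm(Kc) * jnp.linalg.norm(Yc))

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
X = jax.random.normal(k1, (64, 5))
y = jnp.sign(X[:, 0])  # toy binary labels
params = [(jax.random.normal(k2, (5, 32)) / jnp.sqrt(5.0), jnp.zeros(32)),
          (jax.random.normal(k3, (32, 1)) / jnp.sqrt(32.0), jnp.zeros(1))]

Phi = tangent_features(params, X)  # tangent features at initialization
K = Phi @ Phi.T                    # empirical tangent kernel
print("alignment at init:", kernel_alignment(K, y))
```

Tracking such a score along training steps is the kind of trajectory-level measurement the abstract points to: the claimed alignment effect corresponds to the tangent kernel concentrating on a small number of task-relevant directions as training proceeds.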