The Tunnel Effect: Building Data Representations in Deep Neural Networks

Published: 21 Sept 2023 · Last Modified: 02 Jan 2024 · NeurIPS 2023 poster
Keywords: representation learning, continual learning, training dynamics
Abstract: Deep neural networks are widely known for their remarkable effectiveness across various tasks, with the consensus that deeper networks implicitly learn more complex data representations. This paper shows that sufficiently deep networks trained for supervised image classification split into two distinct parts that contribute differently to the resulting data representations. The initial layers create linearly-separable representations, while the subsequent layers, which we refer to as "the tunnel", compress these representations and have a minimal impact on the overall performance. We explore the tunnel's behavior through comprehensive empirical studies, highlighting that it emerges early in the training process. Its depth depends on the relation between the network's capacity and task complexity. Furthermore, we show that the tunnel degrades out-of-distribution generalization and discuss its implications for continual learning.
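A standard way to measure the layer-wise linear separability the abstract refers to is linear probing: fitting a linear classifier on each layer's activations and recording its accuracy. The sketch below illustrates the idea on synthetic stand-ins for two "layers" (it is not the paper's actual experimental code; the data, probe, and hyperparameters are illustrative assumptions):

```python
# Hypothetical sketch of linear probing: fit a logistic-regression probe on a
# set of representations and report its training accuracy. Representations
# whose classes are linearly separable yield high probe accuracy.
import numpy as np

rng = np.random.default_rng(0)

def linear_probe_accuracy(X, y, epochs=200, lr=0.1):
    """Train a logistic-regression probe on X (n, d) with binary labels y."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / n           # gradient step on weights
        b -= lr * np.mean(p - y)                # gradient step on bias
    preds = (X @ w + b) > 0
    return float(np.mean(preds == y))

# Two synthetic "layers": raw unstructured input, and a layer whose class
# means are pushed apart (mimicking a linearly separable representation).
n = 200
y = rng.integers(0, 2, n).astype(float)
raw = rng.normal(size=(n, 10))                        # no class structure
early = raw + 3.0 * np.outer(2 * y - 1, np.ones(10))  # class-shifted features

acc_raw = linear_probe_accuracy(raw, y)
acc_early = linear_probe_accuracy(early, y)
print(acc_raw, acc_early)
```

In this setup the probe on the class-separated features reaches near-perfect accuracy while the probe on unstructured input stays close to chance, which is the kind of per-layer comparison used to locate where representations become linearly separable.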
Supplementary Material: pdf
Submission Number: 7404