On the Origins of the Block Structure Phenomenon in Neural Network Representations

Published: 18 Aug 2022, 17:59 (modified: 27 Jan 2023, 19:42) · Accepted by TMLR
Abstract: Recent work by Nguyen et al. (2021) has uncovered a striking phenomenon in large-capacity neural networks: they contain blocks of contiguous hidden layers with highly similar representations. This block structure has two seemingly contradictory properties: on the one hand, its constituent layers exhibit highly similar dominant first principal components (PCs), but on the other hand, their representations, and their common first PC, are highly dissimilar across different random seeds. Our work seeks to reconcile these discrepant properties by investigating the origin of the block structure in relation to the data and training methods. By analyzing properties of the dominant PCs, we find that the block structure arises from dominant datapoints — a small group of examples that share similar image statistics (e.g. background color). However, the set of dominant datapoints, and the precise shared image statistic, can vary across random seeds. Thus, the block structure reflects meaningful dataset statistics, but is simultaneously unique to each model. Through studying hidden layer activations and creating synthetic datapoints, we demonstrate that these simple image statistics dominate the representational geometry of the layers inside the block structure. We explore how the phenomenon evolves through training, finding that the block structure takes shape early in training, but the underlying representations and the corresponding dominant datapoints continue to change substantially. Finally, we study the interplay between the block structure and different training mechanisms, introducing a targeted intervention to eliminate the block structure, as well as examining the effects of pre-training and Shake-Shake regularization.
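The abstract's central measurement, comparing the dominant first principal components (PCs) of hidden-layer activations across layers, can be illustrated with a small sketch. The following is a minimal, hypothetical example (not the paper's actual code): it builds toy activation matrices in which a small group of "dominant datapoints" shares one strong direction, then shows that this direction becomes the first PC of both layers, so the layers' first PCs are highly similar.

```python
import numpy as np

def first_pc(acts):
    """Dominant (first) principal component of an activations
    matrix of shape (n_examples, n_features)."""
    centered = acts - acts.mean(axis=0, keepdims=True)
    # Rows of vt are the (unit-norm) principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def pc_similarity(acts_a, acts_b):
    """Absolute cosine similarity between two layers' first PCs
    (the sign of a PC is arbitrary, so take the absolute value)."""
    return abs(float(first_pc(acts_a) @ first_pc(acts_b)))

rng = np.random.default_rng(0)

# Toy analogue of the block structure: a shared direction driven
# by a small set of examples with a common statistic (e.g. a
# similar background color), added on top of per-layer noise.
direction = rng.normal(size=64)
direction /= np.linalg.norm(direction)

strength = np.zeros((200, 1))
strength[:10] = 20.0  # 10 "dominant datapoints"

layer_a = rng.normal(size=(200, 64)) + strength * direction
layer_b = rng.normal(size=(200, 64)) + strength * direction

# The shared dominant direction makes the two layers' first PCs
# nearly identical, even though the rest of each layer differs.
print(pc_similarity(layer_a, layer_b))
```

A different random seed would produce a different `direction` and a different set of dominant datapoints, mirroring the paper's observation that the block structure reflects real dataset statistics yet is unique to each trained model.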
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
- Reframed the motivation of our investigation in Section 1
- Clarified in our list of contributions (Section 1), as well as in Section 6, that our experimental findings suggest that regularizing the block structure has a minor impact on generalization
- Moved Figure 4, Figure 10, Figure 12, and Table 1 from the Appendix
- Added Table 2 to report the performance of Shake-Shake regularization and transfer learning, compared to standard training
Assigned Action Editor: ~Yingnian_Wu1
Submission Number: 372