Representation Alignment in Neural Networks

Published: 26 Sept 2022, Last Modified: 20 Sept 2023
Accepted by TMLR
Event Certifications: lifelong-ml.cc/CoLLAs/2023/Journal_Track
Abstract: It is now standard practice to train neural network representations on large, publicly available datasets and reuse them for new problems. The reasons why neural network representations have been so successful for transfer, however, are still not fully understood. In this paper we show that, after training, neural network representations align their top singular vectors to the targets. We investigate this representation alignment phenomenon in a variety of neural network architectures and find that (a) alignment emerges across a variety of different architectures and optimizers, with more alignment arising from depth, (b) alignment increases for layers closer to the output, and (c) existing high-performance deep CNNs exhibit high levels of alignment. We then highlight why alignment between the top singular vectors and the targets can speed up learning, and show in a classic synthetic transfer problem that representation alignment correlates with positive transfer to similar tasks and negative transfer to dissimilar tasks.
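The notion of alignment in the abstract can be illustrated with a small numerical sketch. The snippet below is a minimal, hypothetical illustration rather than the paper's exact metric: it measures the fraction of the target vector's energy captured by the top-k left singular vectors of a representation matrix, so a value near 1 indicates that the targets lie mostly in the span of the top singular directions. The function name `alignment_fraction` and the choice of k are assumptions for demonstration.

```python
import numpy as np

def alignment_fraction(features, targets, k=10):
    """Fraction of the target vector's energy lying in the span of the
    top-k left singular vectors of the representation matrix.

    features: (n, d) matrix of hidden representations for n examples.
    targets:  (n,) target vector (e.g., regression targets or one
              column of a one-hot label matrix).
    """
    # Thin SVD of the representation matrix.
    u, s, vt = np.linalg.svd(features, full_matrices=False)
    y = targets / np.linalg.norm(targets)
    # Squared projections of the normalized targets onto each left
    # singular vector; large values at small indices indicate alignment
    # with the top singular directions.
    proj = (u.T @ y) ** 2
    return proj[:k].sum()

# Example: random features show low alignment with random targets,
# whereas trained representations would be expected to score higher.
rng = np.random.default_rng(0)
phi = rng.normal(size=(1000, 128))
y = rng.normal(size=1000)
print(alignment_fraction(phi, y, k=10))
```

Under this sketch, comparing the score of a layer's features before and after training would show whether training moved the targets into the top singular directions, which is the phenomenon the paper studies.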
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission:
Revision 1:
- Added an experiment on negative transfer with the Office-31 dataset (Section 7.3)
- Added an experiment with a Vision Transformer architecture (Section 7.3)
- Cited related work on generalization (Section 8)
- Removed statements that implied a causal relationship
- Clarified the description of Eq. (1)
- Added an explanation for the role of depth (Section 5)
Revision 2:
- Extended Proposition 1 to convergence in loss
- Added Proposition 2
Code: https://github.com/EhsanEI/rep-align-demo
Assigned Action Editor: ~Mingsheng_Long2
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 84