- Abstract: We present a versatile quantitative framework for comparing representations in deep neural networks, based on Canonical Correlation Analysis, and use it to analyze the dynamics of representation learning during the training process of a deep network. We find that layers converge to their final representation from the bottom up, but that the representations themselves migrate downwards in the network over the course of learning.
- TL;DR: Use CCA to study the representation learning dynamics of neural networks; we find bottom-up convergence and top-down representation crawling.
- Keywords: Theory, Deep learning
- Conflicts: google.com, cs.cornell.edu, uber.com
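As an illustration of the kind of comparison the abstract describes, the sketch below computes a mean-canonical-correlation similarity between two activation matrices using plain CCA via SVD. This is a minimal assumed reconstruction, not the paper's implementation; the function name `cca_similarity` and the whitening-by-SVD approach are choices made here for clarity.

```python
import numpy as np

def cca_similarity(X, Y):
    """Mean canonical correlation between two activation matrices.

    X: (n_examples, d1), Y: (n_examples, d2) -- activations of two
    layers recorded on the same inputs. Returns the mean of the
    canonical correlations, a scalar in [0, 1].
    """
    # Center each representation.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Orthonormal bases for the column spaces via SVD (this whitens
    # each set of neurons, as CCA requires).
    Ux, _, _ = np.linalg.svd(X, full_matrices=False)
    Uy, _, _ = np.linalg.svd(Y, full_matrices=False)
    # The canonical correlations are the singular values of Ux^T Uy.
    rho = np.linalg.svd(Ux.T @ Uy, compute_uv=False)
    return float(np.clip(rho, 0.0, 1.0).mean())

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 20))
print(cca_similarity(A, A))  # identical representations -> approx. 1.0
```

Comparing a layer's activations at an intermediate checkpoint against its own final activations with a measure like this, over the course of training, is how one can observe per-layer convergence dynamics.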