Supplementary Material: pdf
Keywords: feature visualization, learning dynamics, feature learning, self-supervised learning, simclr
TL;DR: We visualized all features of CNNs throughout self-supervised training. Color-jitter augmentation led to continually diversifying high-level features, which benefits classification. No jittering led to slightly more diverse low-level features.
Abstract: How does feature learning happen during the training of a neural network? We developed an accelerated pipeline to synthesize maximally activating images ("prototypes") for hidden units in a parallel fashion. Through this, we were able to perform feature visualization at scale, and to track the emergence and development of visual features across the training of neural networks.
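As a rough illustration of the idea, here is a minimal PyTorch sketch of batched activation maximization: one image per hidden unit, all optimized in parallel within a single batch. The function name, hyperparameters, and sigmoid pixel parameterization are illustrative assumptions, not the authors' actual pipeline.

```python
import torch
import torchvision.models as models

def synthesize_prototypes(model, layer, unit_ids, steps=256, lr=0.05, size=128):
    """Batched activation maximization: image i is optimized to maximally
    activate channel unit_ids[i] of `layer`, all units in one parallel batch."""
    device = next(model.parameters()).device
    units = torch.tensor(unit_ids, device=device)
    # Optimize unconstrained logits; sigmoid keeps pixel values in [0, 1].
    imgs = torch.randn(len(unit_ids), 3, size, size, device=device, requires_grad=True)
    opt = torch.optim.Adam([imgs], lr=lr)

    acts = {}
    hook = layer.register_forward_hook(lambda mod, inp, out: acts.update(out=out))
    model.eval()
    for _ in range(steps):
        opt.zero_grad()
        model(torch.sigmoid(imgs))
        fmap = acts["out"]  # (B, C, H, W) feature map of the hooked layer
        # Objective for image i: mean spatial activation of its assigned channel.
        obj = fmap[torch.arange(len(units), device=device), units].mean(dim=(1, 2))
        (-obj.sum()).backward()  # gradient ascent on the activations
        opt.step()
    hook.remove()
    return torch.sigmoid(imgs).detach()

# Example: prototypes for the first 32 channels of a ResNet-50 stage.
net = models.resnet50()
prototypes = synthesize_prototypes(net, net.layer3, list(range(32)))
```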
Using this technique, we studied the "developmental" process of features in a convolutional neural network trained from scratch using SimCLR, with or without color-jittering augmentation. After creating over one million prototypes with our method, we tracked and compared these visual signatures, finding that color-jittering augmentation led to continually diversifying high-level features during training, whereas training without color jittering yielded more diverse low-level features but less development of high-level features.
These results illustrate how feature visualization can be used to understand training dynamics under different training objectives and data distributions.
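For reference, the two training conditions compared above differ only in the augmentation pipeline. A minimal torchvision sketch is below; the jitter strengths follow the standard SimCLR recipe and are assumptions, since the abstract does not specify them.

```python
import torchvision.transforms as T

def simclr_transform(size=224, color_jitter=True):
    """SimCLR-style augmentation; color_jitter=False gives the no-jitter
    condition. Strengths assumed from the standard SimCLR recipe, not
    stated in this abstract."""
    ops = [T.RandomResizedCrop(size), T.RandomHorizontalFlip()]
    if color_jitter:
        ops += [
            T.RandomApply([T.ColorJitter(0.8, 0.8, 0.8, 0.2)], p=0.8),
            T.RandomGrayscale(p=0.2),  # grayscale counts as a color transform
        ]
    ops += [
        T.GaussianBlur(kernel_size=size // 20 * 2 + 1),  # ~10% of image size, odd
        T.ToTensor(),
    ]
    return T.Compose(ops)

# Each SimCLR view is an independent draw from the same pipeline:
augment = simclr_transform(color_jitter=True)
# view1, view2 = augment(img), augment(img)
```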
Track: Extended Abstract Track
Submission Number: 91