On the universality of neural encodings in CNNs

Published: 02 Mar 2024, Last Modified: 02 Mar 2024. ICLR 2024 Workshop Re-Align Poster. License: CC BY 4.0
Track: long paper (up to 9 pages)
Keywords: representation alignment, universality, transfer learning, weight covariances
TL;DR: We compare learned weights in CNNs trained on various datasets and provide evidence for the existence of a universal neural code for natural images.
Abstract: We explore the universality of neural encodings in convolutional neural networks (CNNs) trained on image classification tasks. We develop a procedure to directly compare the learned weights rather than their representations. It is based on a factorization of spatial and channel dimensions and measures the similarity of aligned weight covariances. We show that, for a range of layers of VGG-type networks, the learned eigenvectors appear to be universal across different natural image datasets. Our results suggest the existence of a universal neural encoding for natural images. They explain, at a more fundamental level, the success of transfer learning. Our approach shows that, instead of aiming to maximize performance, one can also attempt to maximize the universality of the learned encoding, as a step towards a foundation model.
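
The abstract's procedure (separating channel from spatial dimensions of convolutional weights, then comparing the eigenvectors of aligned weight covariances across datasets) can be illustrated with a minimal sketch. The exact factorization and alignment used in the paper is not reproduced here; the reshaping choice, the top-k subspace-overlap measure, and all function names below are assumptions made for illustration only.

```python
# Illustrative sketch only, not the paper's exact procedure.
import numpy as np

def channel_covariance(weights: np.ndarray) -> np.ndarray:
    """Channel-channel covariance of a conv layer's weights.

    `weights` has shape (out_ch, in_ch, k, k); each (output filter,
    spatial position) pair is treated as one observation of an
    in_ch-dimensional vector, one simple way to factor out the
    spatial dimensions before comparing channel statistics.
    """
    out_ch, in_ch, kh, kw = weights.shape
    samples = weights.transpose(0, 2, 3, 1).reshape(-1, in_ch)
    samples = samples - samples.mean(axis=0, keepdims=True)
    return samples.T @ samples / samples.shape[0]

def eigenvector_overlap(cov_a: np.ndarray, cov_b: np.ndarray, k: int = 10) -> float:
    """Overlap between the top-k eigenspaces of two covariances.

    Returns a value in [0, 1]; 1 means the leading eigenspaces coincide.
    This is one common alignment measure, not necessarily the paper's.
    """
    _, vecs_a = np.linalg.eigh(cov_a)  # eigenvalues sorted ascending
    _, vecs_b = np.linalg.eigh(cov_b)
    top_a = vecs_a[:, -k:]
    top_b = vecs_b[:, -k:]
    # Squared Frobenius norm of the cross-projection, normalized by k.
    return float(np.linalg.norm(top_a.T @ top_b) ** 2 / k)

# Hypothetical usage: weights of the same VGG layer trained on two datasets
# (random arrays stand in for actual learned weights here).
w_dataset1 = np.random.randn(128, 64, 3, 3)
w_dataset2 = np.random.randn(128, 64, 3, 3)
overlap = eigenvector_overlap(channel_covariance(w_dataset1),
                              channel_covariance(w_dataset2))
print(f"top-10 eigenspace overlap: {overlap:.3f}")
```

With random weights the overlap stays close to k divided by the channel dimension; universality of the learned encoding would show up as a substantially larger overlap between networks trained on different natural image datasets.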
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 54