Model Stitching: Looking For Functional Similarity Between Representations

Published: 06 Dec 2022, Last Modified: 05 May 2023 (ICBINB poster)
Keywords: Representational Similarity, Functional Similarity, Deep Learning, Computer Vision, CIFAR, ResNet, Black Box, Experimental, Empirical, Methodical, Modular, Representation-Learning, Generalizable, Machine Learning, Artificial Intelligence
TL;DR: We defend a new paradigm in representation similarity measurement, despite seeing perplexing results from our augmentations to a clever existing technique.
Abstract: Model stitching (Lenc & Vedaldi 2015) is a compelling methodology for comparing neural network representations, because it measures the degree to which they can be interchanged. We expand on previous work by Bansal, Nakkiran & Barak, who used model stitching to compare representations of the same shape learned by differently seeded and/or differently trained neural networks of the same architecture. Our contribution enables comparison of representations learned by layers with different shapes, from neural networks with different architectures. We subsequently reveal unexpected behavior of model stitching: for small ResNets, convolution-based stitches can reach high accuracy when the stitched layer comes later in the first (sender) network than in the second (receiver) network, even when those layers are far apart. This leads us to hypothesize that stitches do not in fact learn to match the representations expected by receiver layers, but instead find different representations that nonetheless yield similar results. We therefore suggest that model stitching, naively implemented, may not always be an accurate measure of representational similarity.
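To make the setup concrete, the abstract's convolution-based stitch can be sketched as a trainable 1x1 convolution mapping a sender layer's activations into the channel count the receiver layer expects. The shapes, names, and numpy implementation below are illustrative assumptions, not the paper's code; a 1x1 convolution is equivalent to a per-pixel linear map across channels, which is what this sketch computes.

```python
import numpy as np

def stitch_1x1(activations, weight, bias):
    """Apply a 1x1 convolutional stitch layer.

    activations: sender output, shape (N, C_in, H, W)
    weight:      stitch kernel, shape (C_out, C_in) -- a 1x1 conv
                 is just a linear map applied at every spatial position
    bias:        shape (C_out,)
    Returns activations reshaped to the receiver's expected channel
    count, shape (N, C_out, H, W).
    """
    out = np.einsum('nchw,oc->nohw', activations, weight)
    return out + bias[None, :, None, None]

# Hypothetical example: the sender layer emits 16 channels while the
# receiver expects 32 (shapes chosen for illustration only).
rng = np.random.default_rng(0)
sender_acts = rng.standard_normal((2, 16, 8, 8))
W = rng.standard_normal((32, 16)) * 0.1  # would be trained in practice
b = np.zeros(32)

stitched = stitch_1x1(sender_acts, W, b)
print(stitched.shape)  # (2, 32, 8, 8)
```

In the actual method, `W` and `b` are the only trainable parameters: both networks are frozen, and the stitch is trained on the task loss, so high stitched accuracy is read as evidence the two representations are interchangeable.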