Keywords: Functional Similarity, Representation Learning, Model Stitching
TL;DR: Invariance-aware functional latent alignment yields a more reliable functional similarity metric.
Abstract: In deep learning, functional similarity evaluation quantifies the extent to which independently trained models learn similar input-output relationships. A related concept, representation compatibility, is investigated via model stitching, where an affine transformation aligns the representations of two models so that the combined network solves a task. However, recent studies highlight a critical limitation: models trained on different information cues can still produce compatible representations, making them appear functionally similar \cite{smithfunctional}. To address this, we pose two requirements for similarity under model stitching, probing both forward and backward compatibility. To satisfy these requirements, we introduce invariance-aware Functional Latent Alignment (I-FuLA), a novel model stitching setting. Experiments across convolutional and transformer architectures demonstrate that invariance-aware stitching settings provide a more meaningful measure of functional similarity, with the combination of invariance-aware stitching and FuLA (i.e., I-FuLA) emerging as the optimal setting for convolution-based models.
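The abstract's notion of model stitching, an affine map aligning one model's representations with another's, can be illustrated with a toy sketch. Everything below (the linear "encoders", dimensions, and the least-squares fit of the stitching layer) is a hypothetical stand-in for illustration, not the paper's actual I-FuLA procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for two independently trained models, each split into an
# encoder (front half) and, for model B, a task head (back half).
d_in, d_a, d_b, d_out = 8, 16, 16, 4
enc_a = rng.normal(size=(d_in, d_a))    # encoder of model A
enc_b = rng.normal(size=(d_in, d_b))    # encoder of model B
head_b = rng.normal(size=(d_b, d_out))  # task head of model B

X = rng.normal(size=(256, d_in))
z_a = X @ enc_a  # representations produced by A's encoder
z_b = X @ enc_b  # representations produced by B's encoder

# Stitching layer: an affine map from A's representation space into B's,
# fit here by least squares (appending a bias column gives the affine part).
z_a_aug = np.hstack([z_a, np.ones((len(z_a), 1))])
W, *_ = np.linalg.lstsq(z_a_aug, z_b, rcond=None)

# A's stitched features can now be fed through B's head; a small relative
# error means the two representation spaces are "compatible" in this sense.
stitched = z_a_aug @ W
err = np.linalg.norm(stitched @ head_b - z_b @ head_b) / np.linalg.norm(z_b @ head_b)
```

In practice the stitching layer is trained with the downstream task loss rather than fit in closed form, and the paper's contribution concerns making this evaluation invariance-aware; this sketch only shows the basic forward-stitching setup.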
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 12340