Infusing invariances in neural representations

Published: 18 Jun 2023, Last Modified: 27 Jun 2023. TAGML 2023 Poster.
Keywords: invariance, latent space, latent communication, relative representation, zero-shot stitching, representation learning
TL;DR: Independently trained models produce different latent representations, but there exists an underlying manifold M on which the representations are the same.
Abstract: It has been observed that the inner representations learned by different neural networks conceal structural similarities when the networks are trained under similar inductive biases. Exploring the geometric structure of latent spaces within these networks offers insights into the underlying similarity among different neural models and facilitates reasoning about the transformations that connect them. Identifying and estimating these transformations is challenging, but it holds significant potential for various downstream tasks, including merging and stitching different neural architectures for model reuse. In this study, drawing on the geometrical structure of latent spaces, we show how to define representations that incorporate invariances to the targeted transformations within a single framework. We experimentally analyze how inducing different invariances in the representations affects downstream performance on classification and reconstruction tasks, suggesting that the classes of transformations relating independent latent spaces depend on the task at hand. We analyze models in a variety of settings, including different initializations, architectural changes, and training on multiple modalities (e.g., text, images), testing our framework on 8 different benchmarks.
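One concrete way to build such invariant representations, consistent with the "relative representation" keyword above, is to re-encode each latent vector by its cosine similarities to a fixed set of anchor latents; those codes are unchanged under rotations, reflections, and rescalings of the latent space. Below is a minimal PyTorch sketch of this idea (the function name, shapes, and the orthogonal-map check are illustrative assumptions, not necessarily the paper's exact construction):

```python
import torch
import torch.nn.functional as F

def relative_representation(z: torch.Tensor, anchors: torch.Tensor) -> torch.Tensor:
    """Re-encode latents z (N, d) by cosine similarity to anchors (K, d).

    The resulting (N, K) codes are invariant to rotations, reflections,
    and isotropic rescalings of the latent space, so two encoders whose
    spaces differ by such a transformation yield the same relative codes.
    """
    z = F.normalize(z, dim=-1)        # unit-norm latents -> scale invariance
    a = F.normalize(anchors, dim=-1)  # unit-norm anchors
    return z @ a.T                    # cosine similarity to each anchor

# Illustration: two latent spaces related by a random orthogonal map Q
d, k, n = 64, 10, 5
Q, _ = torch.linalg.qr(torch.randn(d, d))         # random rotation/reflection
z, anchors = torch.randn(n, d), torch.randn(k, d)
r1 = relative_representation(z, anchors)
r2 = relative_representation(z @ Q, anchors @ Q)  # same data, transformed space
assert torch.allclose(r1, r2, atol=1e-5)          # identical relative codes
```

Other invariance classes mentioned in the abstract (e.g., translations or more general affine maps) would be targeted by swapping cosine similarity for a different anchor-relative score.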
Supplementary Materials: zip
Type Of Submission: Extended Abstract (4 pages, non-archival)
Submission Number: 90