Track: Extended Abstract Track
Keywords: Zero-shot learning, Topological autoencoders, Latent space alignment
TL;DR: This paper proposes a method that incorporates topological regularization to align the latent spaces of independently trained autoencoder models, enabling zero-shot stitching and achieving near-optimal performance compared to end-to-end models.
Abstract: Developing schemes to enable zero-shot stitching between different neural networks with minimal or no information exchange has become increasingly important in the era of large and powerful pre-trained models.
Consider, for example, an autoencoder-based data compression framework: the ability to select architectures and train the encoder and decoder completely independently, while still ensuring interoperability between them, could fundamentally change how these models are developed, deployed, and maintained.
In this work, we propose a novel approach that uses topological regularization to align the latent spaces of two different autoencoder models trained independently, without coordination.
Our solution introduces two distinct training schemes: Data2Latent and Latent2Latent.
The Data2Latent scheme focuses on preserving the topological structure of the input data, while the Latent2Latent scheme preserves the latent space of a pre-trained, unconstrained model.
Through numerical experiments on reconstruction tasks, we demonstrate that our approach yields a near-optimal solution, closely approximating the performance of an end-to-end model.
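
The sketch below illustrates, at a high level, how the two training schemes described in the abstract might be expressed as loss functions. It is a minimal, hypothetical reading of the abstract only: the function names (`data2latent_loss`, `latent2latent_loss`, `topo_alignment_loss`) and the use of a pairwise-distance discrepancy as a stand-in for the paper's topological regularizer are assumptions, not the authors' actual implementation, which may rely on persistent-homology-based terms.

```python
import torch
import torch.nn.functional as F

def pairwise_distances(x):
    # Flatten each sample and compute the Euclidean distance matrix of the batch.
    x = x.flatten(start_dim=1)
    return torch.cdist(x, x)

def topo_alignment_loss(source, latent):
    # Simplified stand-in for a topological regularizer: penalize discrepancy
    # between the pairwise-distance structure of `source` and of `latent`.
    # The paper's actual term may instead compare persistent-homology features.
    return F.mse_loss(pairwise_distances(source), pairwise_distances(latent))

def data2latent_loss(x, z, x_hat, lam=1.0):
    # Data2Latent (assumed form): reconstruct x while aligning the latent
    # codes z to the topological structure of the input data x.
    return F.mse_loss(x_hat, x) + lam * topo_alignment_loss(x, z)

def latent2latent_loss(x, z, x_hat, z_ref, lam=1.0):
    # Latent2Latent (assumed form): reconstruct x while aligning z to the
    # latent codes z_ref of a pre-trained, unconstrained reference model.
    return F.mse_loss(x_hat, x) + lam * topo_alignment_loss(z_ref, z)
```

Because both schemes constrain only the geometry of the latent space rather than individual model weights, an encoder and a decoder trained separately under the same scheme can, in principle, be stitched together at inference time.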
Submission Number: 45