Keywords: relative representations, zero-shot transfer, anchor learning, representation alignment, encoder stitching, parameterized anchors, whitened similarity, cross-encoder generalization
TL;DR: Learned anchors and a geometry-aware similarity make relative representations more robust, enabling zero-shot transfer that can outperform classifiers trained on absolute embeddings.
Abstract: Relative Representations (RR) enable zero-shot stitching of neural components by mapping encoder outputs to a shared anchor-based space. This work improves the robustness and applicability of RR through two key contributions: (i) anchors are learned as convex mixtures of data points using a differentiable parameterization (PARAM), and (ii) a Whitened Inner Product (WIP) similarity is introduced to account for local geometry and preserve magnitude information. These components jointly strengthen alignment across encoders and produce more stable relative features. In zero-shot classification experiments, the proposed method significantly outperforms prior RR baselines and, for the first time, surpasses the performance of a same-architecture classifier trained on absolute embeddings. The results highlight the potential of RR for efficient model reuse and decoupling encoder complexity from inference.
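The two contributions can be illustrated with a minimal sketch. All names, shapes, and the exact whitening recipe below are illustrative assumptions, not the paper's implementation: anchors are formed as convex (softmax-weighted) mixtures of a candidate pool, and similarities are computed after whitening with the anchors' covariance so that local geometry and magnitude are preserved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a batch of absolute embeddings and a pool of data
# points eligible to form anchors (shapes are illustrative).
X = rng.normal(size=(32, 8))      # absolute embeddings (n_samples x dim)
pool = rng.normal(size=(100, 8))  # candidate points for anchor mixtures

# PARAM-style anchors: each anchor is a convex mixture of pool points.
# A softmax over learnable logits keeps the weights non-negative and
# summing to one, so the parameterization stays differentiable.
logits = rng.normal(size=(10, 100))  # 10 anchors, weights over 100 points
weights = np.exp(logits - logits.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
anchors = weights @ pool             # (10, 8) learned anchors

# Whitened inner product: decorrelate dimensions with the anchors'
# covariance before taking inner products, i.e. sim(x, a) = x^T C^{-1} a.
cov = np.cov(anchors, rowvar=False) + 1e-6 * np.eye(anchors.shape[1])
W = np.linalg.cholesky(np.linalg.inv(cov))  # whitening factor, W W^T = C^{-1}
rel = (X @ W) @ (anchors @ W).T             # relative representation (32, 10)

print(rel.shape)
```

Unlike cosine similarity, which discards vector norms, this similarity retains magnitude information while the whitening step adapts it to the anchors' local geometry.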
Submission Number: 46