Learning Along the Arrow of Time: Hyperbolic Geometry for Backward-Compatible Representation Learning

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We propose a backward compatible representation learning method using hyperbolic geometry
Abstract: Backward compatible representation learning enables updated models to integrate seamlessly with existing ones, avoiding the need to reprocess stored data. Despite recent advances, existing compatibility approaches in Euclidean space neglect the uncertainty in the old embedding model and force the new model to replicate outdated representations regardless of their quality, thereby hindering the learning process. In this paper, we switch perspectives to hyperbolic geometry, where we treat time as a natural axis for capturing a model's confidence and evolution. By lifting embeddings into hyperbolic space and constraining updated embeddings to lie within the entailment cone of the old ones, we maintain generational consistency across models while accounting for uncertainty in the representations. To further enhance compatibility, we introduce a robust contrastive alignment loss that dynamically adjusts alignment weights based on the uncertainty of the old embeddings. Experiments validate the superiority of the proposed method in achieving compatibility, paving the way for more resilient and adaptable machine learning systems.
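To make the two ingredients in the abstract concrete, here is a minimal PyTorch-style sketch, not the authors' code: it lifts Euclidean embeddings onto the Poincaré ball via the exponential map at the origin, penalizes new embeddings that fall outside the entailment cone of the old ones (cone formulas follow Ganea et al., 2018, a standard construction the paper's wording suggests), and adds a contrastive alignment term whose per-sample weights shrink for uncertain old embeddings. All names, the curvature `c`, the cone constant `K`, and the use of the old embedding's hyperbolic norm as a confidence proxy are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

EPS = 1e-6

def exp_map0(v: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    """Lift Euclidean vectors onto the Poincare ball via the exponential map at the origin."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(EPS)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def half_aperture(x: torch.Tensor, K: float = 0.1) -> torch.Tensor:
    """Cone half-aperture psi(x); points nearer the origin (lower confidence) get wider cones."""
    x_norm = x.norm(dim=-1).clamp(EPS, 1 - EPS)
    return torch.asin((K * (1 - x_norm ** 2) / x_norm).clamp(-1 + EPS, 1 - EPS))

def exterior_angle(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Angle Xi(x, y) between the cone axis at x and the geodesic from x to y (Ganea et al., 2018)."""
    xy = (x * y).sum(dim=-1)
    x2, y2 = (x * x).sum(dim=-1), (y * y).sum(dim=-1)
    num = xy * (1 + x2) - x2 * (1 + y2)
    den = (x2.sqrt() * (x - y).norm(dim=-1)
           * (1 + x2 * y2 - 2 * xy).clamp_min(EPS).sqrt()).clamp_min(EPS)
    return torch.acos((num / den).clamp(-1 + EPS, 1 - EPS))

def compatibility_loss(old_e: torch.Tensor, new_e: torch.Tensor,
                       c: float = 1.0, tau: float = 0.1, lam: float = 1.0) -> torch.Tensor:
    """Entailment-cone constraint plus uncertainty-weighted contrastive alignment (sketch)."""
    old_h, new_h = exp_map0(old_e, c), exp_map0(new_e, c)
    # keep each new embedding inside the entailment cone of its old counterpart
    cone = F.relu(exterior_angle(old_h, new_h) - half_aperture(old_h)).mean()
    # contrastive alignment of new to old embeddings over the batch
    logits = F.normalize(new_e, dim=-1) @ F.normalize(old_e, dim=-1).t() / tau
    targets = torch.arange(len(new_e), device=new_e.device)
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    # hyperbolic norm of the old embedding as a (hypothetical) confidence proxy:
    # low-confidence old embeddings contribute less to the alignment term
    w = old_h.norm(dim=-1)
    align = (w / w.sum().clamp_min(EPS) * per_sample).sum()
    return align + lam * cone
```

One design point this sketch tries to capture: because the cone at an old embedding widens as that embedding sits closer to the origin, uncertain old representations constrain the new model only loosely, which is exactly the behavior the abstract contrasts with rigid Euclidean replication.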
Lay Summary: As machine learning models evolve, their internal "representations" — the way they understand and organize information — often change. But this causes a problem: older data that was processed using earlier versions of these systems might no longer work with the updated versions. Reprocessing all that data can be expensive, time-consuming, and risky, especially when privacy is involved. Our research tackles this issue by designing a way for new models to stay compatible with old ones — a concept known as *backward compatibility*. Instead of forcing new models to exactly mimic outdated behavior, we treat learning as an evolving process over time. To do this, we turn to a different kind of geometry — hyperbolic space — which is better suited for modeling the evolution and uncertainty of the embedding model. By placing both old and new representations into this space, and guiding the new ones to stay consistent with the old without being limited by them, we enable models to evolve and stay compatible with previous versions. This makes machine learning systems more adaptable, future-proof, and safe to update without reprocessing all previous data.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Primary Area: General Machine Learning->Everything Else
Keywords: backward compatible training, hyperbolic representation learning, alignment
Submission Number: 9528