Intriguing Properties of Hyperbolic Embeddings in Vision-Language Models

Published: 09 Jul 2024, Last Modified: 09 Jul 2024. Accepted by TMLR.
Abstract: Vision-language models have quickly become established as powerful networks, demonstrating strong performance on a wide range of downstream tasks. A key factor behind their success is the learning of a joint embedding space in which pairs of images and textual descriptions are contrastively aligned. Recent work has explored the geometry of this joint embedding space, finding that hyperbolic embeddings provide a compelling alternative to the commonly used Euclidean embeddings. Specifically, hyperbolic embeddings yield improved zero-shot generalization, better visual recognition, and more consistent semantic interpretations. In this paper, we conduct a deeper study of hyperbolic embeddings and find that they open new doors for vision-language models. In particular, we find that hyperbolic vision-language models provide spatial awareness that Euclidean vision-language models lack, are better able to deal with ambiguity, and effectively discriminate between distributions. Our findings shed light on the greater potential of hyperbolic embeddings in large-scale settings, reaching beyond conventional downstream tasks. Our code is available at https://github.com/saibr/hypvl
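The abstract does not spell out the exact hyperbolic formulation used. As a rough illustration of the general idea (contrastive alignment where similarity is measured by hyperbolic rather than Euclidean geometry), the sketch below uses the Lorentz model, one common choice for hyperbolic vision-language models; it is not necessarily the paper's method, and all function and variable names here are hypothetical.

```python
import torch

def lorentz_lift(v, curv=1.0):
    """Lift Euclidean encoder outputs onto the hyperboloid of curvature -curv.

    Treats `v` as the spatial component and solves the hyperboloid constraint
    <x, x>_L = -1/curv for the time component (one common parameterization;
    an exponential map at the origin is another option).
    """
    time = torch.sqrt(1.0 / curv + (v * v).sum(dim=-1, keepdim=True))
    return torch.cat([time, v], dim=-1)

def lorentz_distance(x, y, curv=1.0, eps=1e-6):
    """Geodesic distance on the hyperboloid between points x and y."""
    # Lorentzian inner product: spatial dot product minus product of time parts.
    inner = (x[..., 1:] * y[..., 1:]).sum(dim=-1) - x[..., 0] * y[..., 0]
    # Clamp for numerical stability: the arccosh argument must be >= 1.
    z = torch.clamp(-curv * inner, min=1.0 + eps)
    return torch.acosh(z) / curv ** 0.5

# Toy usage: negative hyperbolic distances serve as logits for a contrastive loss.
img_feat = torch.randn(4, 512)   # hypothetical image encoder outputs
txt_feat = torch.randn(4, 512)   # hypothetical text encoder outputs
img_hyp = lorentz_lift(img_feat)
txt_hyp = lorentz_lift(txt_feat)
logits = -lorentz_distance(img_hyp.unsqueeze(1), txt_hyp.unsqueeze(0))  # (4, 4)
loss = torch.nn.functional.cross_entropy(logits, torch.arange(4))
```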
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Camera-ready version
Code: https://github.com/saibr/hypvl
Assigned Action Editor: ~Dumitru_Erhan1
Submission Number: 2113