$\textit{All the World's a Sphere}$: Learning Expressive Hierarchical Representations with Isotropic Hyperspherical Embeddings

ICLR 2026 Conference Submission 16497 Authors

19 Sept 2025 (modified: 08 Oct 2025)
Keywords: hierarchical representation, hyperspherical embeddings, geometrical optimization
TL;DR: Hyperspherical embeddings for hierarchical expressiveness
Abstract: Most existing embedding frameworks rely on Euclidean geometry, which, while effective for modeling symmetric similarity, struggles to represent richer relational structures such as asymmetry, hierarchy, and transitivity. Although alternatives like hypercubes and ellipsoids introduce containment-based semantics, they often suffer from axis-aligned rigidity, anisotropic bias, and high parameter overhead. To address these limitations, we propose SpheREx ($\textbf{Sphe}$rical $\textbf{R}$epresentations for Hierarchical $\textbf{Ex}$pressiveness), a geometric embedding framework that utilizes isotropic hyperspheres to represent hierarchical and asymmetric relations. By representing entities as hyperspheres, SpheREx naturally models containment, intersection, and mutual exclusion while maintaining rotational invariance and closed-form inclusion criteria. We formally characterize the geometric and probabilistic properties of hyperspherical interactions and show that they capture desirable logical structures. To ensure stable optimization and prevent uncontrolled radius growth, we introduce a volume clipping and radius regularization strategy tailored for asymmetric tasks. We conduct extensive evaluations across four diverse real-world benchmarks, spanning both text and vision modalities. SpheREx consistently outperforms twelve competitive baselines, achieving statistically significant improvements across key evaluation measures. Ablations supported by qualitative analysis across benchmarks demonstrate the efficacy of hyperspheres over state-of-the-art geometric baselines.
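The "closed-form inclusion criteria" mentioned in the abstract follow from elementary geometry: for two hyperspheres, containment, intersection, and mutual exclusion can each be decided from the center distance and the two radii alone. The sketch below is an illustration of those standard criteria, not the paper's actual implementation; the function name and interface are hypothetical.

```python
import numpy as np

def sphere_relation(c_a, r_a, c_b, r_b):
    """Classify the relation between hyperspheres A = (c_a, r_a) and B = (c_b, r_b).

    Standard closed-form criteria on the center distance d = ||c_a - c_b||:
      - A contains B  iff  d + r_b <= r_a
      - B contains A  iff  d + r_a <= r_b
      - disjoint      iff  d >= r_a + r_b
      - otherwise the spheres partially intersect
    These tests are rotation-invariant: they depend only on d and the radii.
    """
    d = float(np.linalg.norm(np.asarray(c_a) - np.asarray(c_b)))
    if d + r_b <= r_a:
        return "A contains B"
    if d + r_a <= r_b:
        return "B contains A"
    if d >= r_a + r_b:
        return "disjoint"
    return "intersect"
```

For example, a parent concept embedded as a large sphere at the origin contains a child sphere nested inside it, while two far-apart spheres are mutually exclusive.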
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 16497