Native Logical and Hierarchical Representations with Subspace Embeddings

ICLR 2026 Conference Submission 19584 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: subspaces, embeddings, representation learning, structured learning, hierarchical representations, entailment
TL;DR: We introduce subspace embeddings, a representation whose geometry yields emergent logical operations and a data-driven rank that reflects specificity, while remaining compatible with inner product-based retrieval.
Abstract: Traditional embeddings represent datapoints as vectors, which makes similarity easy to compute but limits how well they capture hierarchy, asymmetry, and compositional reasoning. We propose a fundamentally different approach: representing concepts as learnable linear subspaces. By spanning multiple dimensions, subspaces can model broader concepts with higher-dimensional regions and nest more specific concepts within them. This geometry naturally captures generality through dimension and hierarchy through inclusion, and it provides an emergent structure for logical composition in which conjunction, disjunction, and negation map to linear operations. To make this paradigm trainable, we introduce a differentiable parameterization via soft projection matrices, allowing the effective dimension of each subspace to be learned end-to-end. We validate our approach on hierarchical and natural language inference benchmarks. Our method not only achieves state-of-the-art performance but also provides a more interpretable, geometrically grounded model of entailment. Remarkably, the ability to perform logical composition with the learned concepts arises naturally from standard training objectives, without any direct supervision.
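To make the abstract's parameterization concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' code: the names (SoftSubspace, gate_logits) and the specific conjunction/disjunction surrogates are illustrative assumptions. It builds a soft projection P = U diag(sigmoid(g)) Uᵀ from a learnable basis, so the gates control the subspace's effective dimension end-to-end, and it shows how negation, conjunction, and disjunction can be expressed as linear operations on projection matrices.

```python
# Hypothetical sketch (not the paper's implementation) of a learnable subspace
# represented by a soft projection matrix.
import torch
import torch.nn as nn

class SoftSubspace(nn.Module):
    def __init__(self, ambient_dim: int, max_rank: int):
        super().__init__()
        # Learnable (non-orthonormal) basis and per-direction gate logits.
        self.basis = nn.Parameter(torch.randn(ambient_dim, max_rank) * 0.1)
        self.gate_logits = nn.Parameter(torch.zeros(max_rank))

    def projection(self) -> torch.Tensor:
        # Orthonormalize the basis, then softly weight each direction;
        # the sum of gates acts as a differentiable "effective dimension".
        q, _ = torch.linalg.qr(self.basis)        # (d, k), orthonormal columns
        gates = torch.sigmoid(self.gate_logits)   # (k,), values in (0, 1)
        return q @ torch.diag(gates) @ q.T        # soft projection, (d, d)

def negation(p: torch.Tensor) -> torch.Tensor:
    # Orthogonal complement of the (soft) subspace.
    return torch.eye(p.shape[0]) - p

def conjunction(p_a: torch.Tensor, p_b: torch.Tensor) -> torch.Tensor:
    # Crude intersection surrogate: compose the two projections.
    return p_a @ p_b

def disjunction(p_a: torch.Tensor, p_b: torch.Tensor) -> torch.Tensor:
    # Crude span/union surrogate via inclusion-exclusion on projections.
    return p_a + p_b - p_a @ p_b

def containment(p_specific: torch.Tensor, p_general: torch.Tensor) -> torch.Tensor:
    # Entailment proxy: the specific subspace is (nearly) unchanged when
    # projected onto the general one iff it is (nearly) contained in it.
    return torch.linalg.norm(p_general @ p_specific - p_specific)
```

In this sketch, hierarchy through inclusion would correspond to a small containment residual (e.g., a "cat" subspace nested inside an "animal" subspace), and generality corresponds to a larger total gate mass; the paper's actual training objective and composition operators may differ.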
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 19584