The Hidden Lattice Geometry of LLMs

ICLR 2026 Conference Submission 14878 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Interpretability, formal concept analysis, language models, ontology
TL;DR: We uncover the hidden lattice geometry of LLMs, showing that linear attribute directions induce concept lattices that support symbolic reasoning via meet and join operations
Abstract: We uncover the hidden lattice geometry of large language models (LLMs): a symbolic backbone that grounds conceptual hierarchies and logical operations in embedding space. Our framework unifies the Linear Representation Hypothesis with Formal Concept Analysis (FCA), showing that linear attribute directions with separating thresholds induce a concept lattice via half-space intersections. This geometry enables symbolic reasoning through geometric meet (intersection) and join (union) operations, and admits a canonical form when attribute directions are linearly independent. Experiments on WordNet sub-hierarchies provide empirical evidence that LLM embeddings encode concept lattices and their logical structure, revealing a principled bridge between continuous geometry and symbolic abstraction.
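The abstract's core construction can be sketched concretely. In the following minimal Python example (all names, dimensions, and data are illustrative assumptions, not taken from the paper), each attribute is a linear direction with a separating threshold; an object "has" an attribute when its embedding lies in the corresponding half-space. The resulting boolean formal context induces a concept lattice in the standard FCA sense, with meet realized by intersecting extents and join by intersecting intents:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: object embeddings, linear attribute directions,
# and separating thresholds (values are illustrative only).
n_objects, dim, n_attrs = 8, 4, 3
E = rng.normal(size=(n_objects, dim))   # object embeddings
A = rng.normal(size=(n_attrs, dim))     # attribute directions
t = np.zeros(n_attrs)                   # separating thresholds

# Formal context: object i has attribute j iff <a_j, e_i> > t_j,
# i.e. the embedding lies in the half-space defined by direction j.
context = (E @ A.T) > t                 # boolean (n_objects, n_attrs)

def extent(attrs):
    """Objects possessing every attribute in `attrs` (half-space intersection)."""
    mask = np.ones(n_objects, dtype=bool)
    for j in attrs:
        mask &= context[:, j]
    return frozenset(np.flatnonzero(mask))

def intent(objs):
    """Attributes shared by every object in `objs`."""
    mask = np.ones(n_attrs, dtype=bool)
    for i in objs:
        mask &= context[i]
    return frozenset(np.flatnonzero(mask))

def concept(attrs):
    """Close an attribute set into a formal concept (extent, intent)."""
    ext = extent(attrs)
    return ext, intent(ext)

def meet(c1, c2):
    """Lattice meet: intersect extents, then re-derive the shared intent."""
    ext = c1[0] & c2[0]
    return ext, intent(ext)

def join(c1, c2):
    """Lattice join: intersect intents, then re-derive the common extent."""
    itt = c1[1] & c2[1]
    ext = extent(itt)
    return ext, intent(ext)

c_a, c_b = concept({0}), concept({1})
m, j = meet(c_a, c_b), join(c_a, c_b)
print("meet extent inside both:", m[0] <= c_a[0] and m[0] <= c_b[0])
print("join extent contains both:", j[0] >= c_a[0] and j[0] >= c_b[0])
```

This uses random embeddings purely to exercise the lattice operations; the paper's claim is that actual LLM embeddings, paired with learned attribute directions and thresholds, yield contexts of this form whose meet and join respect the conceptual hierarchy (e.g. WordNet sub-hierarchies).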
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 14878