LOGOS: Neural Language Modeling via a Graph-Based Symbolic Lexical Knowledge Base

Authors: Anonymous (ACL ARR 2026 January Submission 6643)

05 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · License: CC BY 4.0
Keywords: Masked Language Modeling, Neuro-Symbolic model, GNN
Abstract: Addressing the interpretability and scaling bottlenecks of modern LLMs, we introduce LOGOS, a neuro-symbolic framework that replaces linear sequence modeling with a global Lemma-Merged Dependency Graph. By collapsing texts into a unified symbolic graph, LOGOS encodes semantic relationships as explicit topological edges rather than implicit probabilities. LOGOS features: (1) topological compression, which exploits graph connectivity to circumvent the quadratic cost of sequence-level attention; and (2) Stochastic Multi-Mask Supervision, a training protocol that compels the model to reconstruct multi-hop relational dependencies. Evaluations on PTB, WikiText-2, and WikiText-103 demonstrate that LOGOS achieves competitive intrinsic performance with significantly fewer parameters than autoregressive baselines. Beyond efficiency, this explicit structural grounding provides a verifiable substrate for future research into aligned, hallucination-resistant AI systems.
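
The abstract names two mechanisms without detail: lemma-merged graph construction and Stochastic Multi-Mask Supervision. The sketch below shows one plausible reading of each; it is not the authors' implementation. It assumes spaCy for dependency parsing and lemmatization and networkx for the graph, and every function name (build_lemma_merged_graph, stochastic_multi_mask) and parameter (mask_rate) is hypothetical.

```python
# Minimal sketch (not the authors' code) of the two ideas the abstract names.
# Requires: pip install spacy networkx, plus the en_core_web_sm model.
import random

import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with a dependency parser


def build_lemma_merged_graph(texts):
    """Collapse a corpus into one graph whose nodes are lemmas and whose
    edges are dependency relations observed anywhere in the corpus."""
    graph = nx.DiGraph()
    for doc in nlp.pipe(texts):
        for token in doc:
            if token.is_punct or token.is_space or token.dep_ == "ROOT":
                continue
            head, child = token.head.lemma_.lower(), token.lemma_.lower()
            # Merging on lemmas maps every occurrence of a word form to one
            # node, so the graph grows with vocabulary, not sequence length.
            graph.add_edge(head, child, dep=token.dep_)
    return graph


def stochastic_multi_mask(graph, mask_rate=0.15, seed=0):
    """Sample several nodes to mask simultaneously; predicting them from
    their remaining neighbors forces multi-hop reconstruction."""
    rng = random.Random(seed)
    nodes = list(graph.nodes)
    k = max(1, int(mask_rate * len(nodes)))
    return set(rng.sample(nodes, k))


if __name__ == "__main__":
    g = build_lemma_merged_graph([
        "The cat chased the mouse.",
        "A mouse was chased across the room.",
    ])
    print(g.number_of_nodes(), "lemma nodes;", g.number_of_edges(), "dependency edges")
    print("masked nodes:", stochastic_multi_mask(g))
```

Under this reading, merging on lemmas is what yields the claimed topological compression: the node count is bounded by vocabulary size rather than corpus length, so message passing over the graph sidesteps quadratic sequence-level attention.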
Paper Type: Long
Research Area: Language Models
Research Area Keywords: Language Modeling; Machine Learning for NLP: graph-based methods
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 6643