PathHD: Efficient Large Language Model Reasoning over Knowledge Graphs via Hyperdimensional Retrieval
Keywords: Large Language Models, Efficient Reasoning, Knowledge Graphs, Hyperdimensional Computing
TL;DR: We present PathHD, a lightweight hyperdimensional computing framework for efficient and interpretable large language model reasoning over knowledge graphs.
Abstract: Recent advances in large language models (LLMs) have enabled strong reasoning
over structured and unstructured knowledge. When grounded in knowledge graphs
(KGs), however, prevailing pipelines rely on neural encoders to embed and score
symbolic paths, incurring heavy computation, high latency, and opaque decisions,
limitations that hinder faithful, scalable deployment. We propose a
lightweight, economical, and transparent KG reasoning framework, PathHD, that
replaces neural path scoring with hyperdimensional computing (HDC). PathHD
encodes relation paths into block-diagonal GHRR hypervectors, retrieves candidates
via fast cosine similarity with Top-K pruning, and performs a single LLM call to
produce the final answer with cited supporting paths. Technically, PathHD provides
an order-aware, invertible binding operator for path composition, a calibrated
similarity for robust retrieval, and a one-shot adjudication step that preserves
interpretability while eliminating per-path LLM scoring. Extensive experiments on
WebQSP, CWQ, and the GrailQA split show that PathHD (i) achieves Hits@1
comparable to or better than strong neural baselines while using one LLM call
per query; (ii) reduces end-to-end latency by 40–60% and GPU memory usage by 3–5× thanks
to encoder-free retrieval; and (iii) delivers faithful, path-grounded rationales that
improve error diagnosis and controllability. These results demonstrate that HDC
is a practical substrate for efficient KG–LLM reasoning, offering a favorable
accuracy–efficiency–interpretability trade-off.
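To make the abstract's pipeline concrete, the sketch below illustrates the two retrieval-side ideas: an order-aware, invertible binding of relation hypervectors and cosine Top-K pruning over encoded paths. This is not the authors' implementation; the block count, block size, use of random unitary blocks as a stand-in for GHRR's block-diagonal structure, and all names (random_relation, bind, encode_path, topk_cosine) are illustrative assumptions, and the calibration of the similarity described in the paper is omitted.

```python
# Minimal sketch (assumed, not the authors' code) of GHRR-style path encoding
# and Top-K cosine retrieval. Hypervectors are stacks of small random unitary
# blocks, so binding (blockwise matmul) is non-commutative and invertible.
import numpy as np

rng = np.random.default_rng(0)
N_BLOCKS, BLOCK_DIM = 256, 4  # illustrative sizes, not the paper's settings

def random_relation():
    """Sample one relation hypervector: a stack of random unitary blocks."""
    a = rng.standard_normal((N_BLOCKS, BLOCK_DIM, BLOCK_DIM)) \
        + 1j * rng.standard_normal((N_BLOCKS, BLOCK_DIM, BLOCK_DIM))
    q, _ = np.linalg.qr(a)  # batched QR; each q-block is unitary
    return q

def bind(x, y):
    """Order-aware binding: blockwise matrix product (x @ y != y @ x)."""
    return x @ y

def unbind(x, xy):
    """Invert binding on the left: a unitary block's inverse is its
    conjugate transpose, so unbind(x, bind(x, y)) recovers y."""
    return np.conj(np.swapaxes(x, -1, -2)) @ xy

def encode_path(relations):
    """Compose a relation path r1 -> r2 -> ... by sequential binding."""
    hv = relations[0]
    for r in relations[1:]:
        hv = bind(hv, r)
    return hv

def flatten(hv):
    """Real-valued feature vector for cosine similarity."""
    v = hv.ravel()
    return np.concatenate([v.real, v.imag])

def topk_cosine(query_hv, candidate_hvs, k=3):
    """Rank candidate path hypervectors by cosine similarity; keep Top-K."""
    q = flatten(query_hv)
    q /= np.linalg.norm(q)
    C = np.stack([flatten(c) for c in candidate_hvs])
    C /= np.linalg.norm(C, axis=1, keepdims=True)
    sims = C @ q
    idx = np.argsort(-sims)[:k]
    return idx, sims[idx]

# Toy usage: order matters, and binding is invertible.
r1, r2, r3 = random_relation(), random_relation(), random_relation()
assert np.allclose(unbind(r1, bind(r1, r2)), r2)  # invertibility
paths = [encode_path([r1, r2]), encode_path([r2, r1]), encode_path([r1, r3])]
idx, sims = topk_cosine(encode_path([r1, r2]), paths, k=2)
print(idx, sims)  # the exact [r1, r2] path ranks first; [r2, r1] does not tie it
```

Because retrieval reduces to one normalized matrix-vector product over precomputed candidate vectors, no neural encoder is needed at query time, which is the mechanism behind the latency and memory savings the abstract reports; the surviving Top-K paths would then be passed to the single adjudicating LLM call.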
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 25183