Propagating Knowledge in LLMs with Hypernetworks

18 Sept 2025 (modified: 30 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Knowledge Editing, Knowledge Propagation, Entity, Large Language Model
Abstract: Knowledge editing techniques for large language models (LLMs) can inject knowledge that is later reproducible verbatim, but they fall short on propagating that knowledge; that is, LLMs cannot answer questions that require reasoning about it. In this paper, we study hypernetwork-based knowledge editing techniques (i.e., MEND (Mitchell et al., 2022)) for knowledge propagation. We find that vanilla hypernetwork-based editing methods do not propagate knowledge effectively. We propose a simple fix to optimize hypernetworks for knowledge propagation: explicitly include propagation questions in the objective during hypernetwork training. This yields a substantial performance gain on the RippleEdit dataset, almost 2× accuracy on challenging multi-hop questions whose answer strings do not appear in the injected fact. We further introduce a new synthetic dataset, Controlled RippleEdit, which isolates a confounding factor in knowledge propagation evaluation and further supports evaluating the generalization of knowledge propagation. Our approach outperforms all other approaches to knowledge propagation, including more computationally intensive methods such as continued pre-training (CPT) on synthetic data. Hypernetworks demonstrate some scaling to multi-edit settings (up to 20 edits), achieving performance on par with or higher than CPT-based approaches. However, we observe significant limitations in out-of-domain propagation performance, suggesting future work on propagating knowledge to a wider range of relations.
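The core idea in the abstract, training the editor's update rule against propagation questions in addition to the injected fact, can be sketched in a toy setting. This is not the paper's implementation: the "model" is a frozen linear map over one-hot entity vectors, the "hypernetwork" is a linear map from an edit descriptor to a weight delta, and all data (`x_fact`, `x_prop`, etc.) are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 6
W = rng.normal(scale=0.1, size=(D, D))          # frozen base "model"

# Hypothetical toy data: one-hot "entities". The injected fact maps
# entity 0 -> entity 1; a propagation question maps entity 2 to an
# answer that depends on the edited fact.
e = np.eye(D)
x_fact, y_fact = e[0], e[1]
x_prop, y_prop = e[2], e[1]

z = np.concatenate([x_fact, y_fact])            # edit descriptor fed to the editor
H = np.zeros((D * D, 2 * D))                    # linear "hypernetwork"

def edited(H):
    # edited model = base weights + hypernetwork-predicted delta
    return W + (H @ z).reshape(D, D)

def err(Wn, x, y):
    return Wn @ x - y

lr = 0.05
for _ in range(500):
    Wn = edited(H)
    # Propagation-aware objective: squared error on the injected fact
    # PLUS squared error on the propagation question (the proposed fix;
    # a vanilla editor would use only the first term).
    dDelta = 2 * np.outer(err(Wn, x_fact, y_fact), x_fact) \
           + 2 * np.outer(err(Wn, x_prop, y_prop), x_prop)
    H -= lr * np.outer(dDelta.reshape(-1), z)   # chain rule through the reshape

Wn = edited(H)
```

After training, the single predicted delta answers both the edited fact and the propagation question, which is the behavior the modified objective optimizes for; the vanilla single-term objective would leave the propagation query untouched in this toy.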
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 10238