SKILL: Structural Knowledge Injection into Large Language Models for Inductive Knowledge Graph Reasoning

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · License: CC BY 4.0
Keywords: Knowledge Graph, Inductive Reasoning, Large Language Models
Abstract: Knowledge Graph Reasoning (KGR) aims to infer missing (head, relation, tail) triples from the facts already present in a knowledge graph. Recent methods embed entities and relations into vectors or model multi-hop paths, but they rely predominantly on statistical co-occurrence patterns, which yields logically inconsistent or semantically implausible paths that degrade prediction quality. We introduce SKILL, a framework that injects structural knowledge into large language models (LLMs) for inductive KGR, combining explicit symbolic structure with the semantic understanding of LLMs. A rule-miner module extracts symbolic reasoning rules from closed paths and validates them semantically with LLM-based one-shot prompting, filtering out invalid patterns. The validated rules are then injected during fine-tuning, giving the LLM explicit symbolic guidance and the grasp of KG structure needed for downstream reasoning. Extensive experiments on three standard inductive benchmarks show that SKILL surpasses competing baselines by up to 5 absolute Hit@1 points, establishing a new state of the art for inductive knowledge graph reasoning.
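To make the rule-mining step concrete, the following is a minimal illustrative sketch (not the authors' implementation, whose details are not given on this page) of extracting candidate rules of the form r1(x, y) ∧ r2(y, z) → r3(x, z) from closed length-2 paths, i.e., paths whose endpoints are also linked by a direct edge. The function name `mine_rules` and the support threshold are assumptions for illustration; in SKILL, mined candidates would additionally be validated semantically by an LLM.

```python
from collections import defaultdict

def mine_rules(triples, min_support=1):
    """Mine candidate rules (r1, r2, r3) meaning r1(x,y) & r2(y,z) -> r3(x,z)
    from closed length-2 paths in a list of (head, relation, tail) triples.
    Returns {rule: support}, keeping rules with support >= min_support."""
    out_edges = defaultdict(list)    # head -> [(relation, tail), ...]
    rel_between = defaultdict(set)   # (head, tail) -> {relations linking them}
    for h, r, t in triples:
        out_edges[h].append((r, t))
        rel_between[(h, t)].add(r)

    support = defaultdict(int)
    # For every 2-hop path x --r1--> y --r2--> z, check whether a direct
    # edge x --r3--> z closes the path; if so, count it as rule evidence.
    for x, r1, y in triples:
        for r2, z in out_edges[y]:
            for r3 in rel_between[(x, z)]:
                support[(r1, r2, r3)] += 1
    return {rule: s for rule, s in support.items() if s >= min_support}
```

For example, a graph containing (a, born_in, cityA), (cityA, located_in, countryA), and (a, nationality, countryA) yields the candidate rule born_in(x, y) ∧ located_in(y, z) → nationality(x, z). Statistical support alone admits spurious co-occurrence rules, which is exactly the gap the paper's LLM-based one-shot validation is meant to close.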
Supplementary Material: zip
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 8910