Synergizing Large Language Models and Knowledge Graphs in Science: A Survey

Published: 24 Sept 2025, Last Modified: 18 Oct 2025, NeurIPS 2025 AI4Science Poster, CC BY 4.0
Track: Track 1: Original Research/Position/Education/Attention Track
Keywords: Knowledge graphs, large language models, AI for science
TL;DR: A survey exploring the synergy between large language models and knowledge graphs in science.
Abstract: The integration of large language models (LLMs) and scientific knowledge graphs (SciKGs) is emerging as a powerful paradigm in AI for science. This survey examines their bidirectional synergy: LLMs accelerate SciKG construction via automated extraction, completion, and maintenance, while SciKGs make LLMs more factual and explainable and strengthen scientific reasoning and comprehension. We organize the survey around these two directions and adopt a task-centered framework that aligns technical methods with scientific objectives. Building on this framework, we (i) chart techniques that automate and sustain SciKG construction with LLMs, (ii) systematize how SciKGs ground and guide LLMs to improve factuality, explainability, and reasoning, (iii) synthesize representative applications in biomedicine, chemistry, and materials, and (iv) outline open problems and research directions around knowledge consistency and conflict handling, temporal modeling and updating, scalable retrieval and inference, and rigorous evaluation. This work's insights recast LLMs and SciKGs as complementary components of a dynamic, self-improving knowledge infrastructure for scientific discovery, providing a clear foundation for building grounded, transparent, and knowledge-driven models in high-stakes scientific domains.
Submission Number: 368