A Graph Perspective to Probe Structural Patterns of Knowledge in Large Language Models

ACL ARR 2025 May Submission 3853 Authors

19 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) have been extensively studied as neural knowledge bases, with a focus on knowledge access, editability, reasoning, and explainability. However, few works have examined the structural patterns of the knowledge they encode. Motivated by this gap, we investigate these structural patterns from a graph perspective. We introduce knowledgeability at the triplet and entity levels to quantify how much an LLM knows, and analyze its relation to graph structural properties such as node degree and neighborhood context. Our analysis uncovers a relation between an entity's knowledgeability and its degree, as well as knowledge homophily: topologically close entities/nodes exhibit similar knowledgeability. Furthermore, we develop a graph machine learning model that estimates each entity's knowledgeability from its local neighborhood context. This model further enables more valuable knowledge checking by prioritizing triplets that are less likely to be known to the LLM. Empirical results show that fine-tuning on the selected triplets leads to superior performance.
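The abstract describes the pipeline only at a high level; the paper itself would define knowledgeability and the graph model precisely. The minimal Python sketch below illustrates one plausible reading under stated assumptions: probe each triplet, aggregate hit rates per entity, and estimate an entity's score from the average of its neighbors (exploiting knowledge homophily). All names here (llm_knows_triplet, entity_knowledgeability, neighborhood_estimate) and the toy triplets are hypothetical, and one-hop mean aggregation stands in for the paper's actual LLM probing and trained graph machine learning model.

```python
import random
from collections import defaultdict

def llm_knows_triplet(head, relation, tail):
    # Hypothetical stand-in for probing an LLM: returns True if the model
    # correctly completes the tail of (head, relation, ?). The paper would
    # query an actual LLM; here we stub it with a pseudo-random coin flip
    # seeded per triplet so each triplet gets a fixed answer within a run.
    rng = random.Random(hash((head, relation, tail)) % (2**32))
    return rng.random() < 0.5

def entity_knowledgeability(triplets):
    # Aggregate triplet-level knowledgeability to the entity level:
    # the fraction of an entity's incident triplets the LLM gets right.
    hits, totals = defaultdict(int), defaultdict(int)
    for h, r, t in triplets:
        known = llm_knows_triplet(h, r, t)
        for e in (h, t):
            totals[e] += 1
            hits[e] += known
    return {e: hits[e] / totals[e] for e in totals}

def neighborhood_estimate(triplets, scores):
    # One round of mean aggregation over graph neighbors -- a simplified
    # stand-in for the GNN regressor: predict an entity's knowledgeability
    # as the average knowledgeability of its neighbors (knowledge homophily).
    nbrs = defaultdict(set)
    for h, _, t in triplets:
        nbrs[h].add(t)
        nbrs[t].add(h)
    return {e: sum(scores[n] for n in ns) / len(ns) for e, ns in nbrs.items()}

# Toy knowledge graph (hypothetical data, for illustration only).
triplets = [("Paris", "capital_of", "France"),
            ("France", "borders", "Spain"),
            ("Paris", "located_in", "Europe")]
scores = entity_knowledgeability(triplets)
estimates = neighborhood_estimate(triplets, scores)
# Entities whose triplets look least known are the most valuable to check
# or to select for fine-tuning, per the abstract's selection strategy.
least_known = sorted(scores, key=scores.get)[:2]
print(scores, estimates, least_known)
```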
Paper Type: Short
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Large Language Models, Graph Theory, Knowledge Homophily, Graph Neural Network
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 3853