SLiNT: Structure-aware Language Model with Injection and Contrastive Training for Knowledge Graph Completion
Abstract: Link prediction in knowledge graphs (KGs) requires integrating structural information and semantic context to infer missing entities. While large language models (LLMs) offer strong generative reasoning capabilities, they often struggle with \emph{structural sparsity} and \emph{semantic ambiguity}, and make only limited use of structural signals, especially under incomplete or zero-shot settings. To address these challenges, we propose \textbf{SLiNT} (\textbf{S}tructure-aware \textbf{L}anguage model with \textbf{I}njection and co\textbf{N}trastive \textbf{T}raining), a modular framework that injects KG-derived structural context into frozen LLMs for robust link prediction. Specifically, \textbf{Structure-Guided Neighborhood Enhancement (SGNE)} retrieves pseudo-neighbors to enrich sparse entities and mitigate missing context; \textbf{Dynamic Hard Contrastive Learning (DHCL)} introduces fine-grained supervision by interpolating hard positives and negatives to resolve entity-level ambiguity; and \textbf{Gradient-Decoupled Dual Injection (GDDI)} performs token-level structure-aware intervention without altering the LLM backbone. Experiments on WN18RR and FB15k-237 show that SLiNT outperforms both embedding-based and generation-based baselines, demonstrating the effectiveness of structure-aware representation learning for scalable knowledge graph completion.
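To make the DHCL idea concrete, below is a minimal, hypothetical sketch of an interpolation-based hard contrastive loss. It is not the paper's exact formulation: the function name `dhcl_loss`, the mixing coefficient `alpha`, the temperature `tau`, and the choice to interpolate the gold target toward the negatives (harder positive) and the negatives toward the gold target (harder negatives) are all illustrative assumptions layered on a standard InfoNCE objective.

```python
import torch
import torch.nn.functional as F

def dhcl_loss(anchor, positive, negatives, alpha=0.5, tau=0.07):
    """Illustrative interpolation-based hard contrastive loss (not the paper's exact recipe).

    anchor:    (d,)   structure-aware embedding of the query (head entity + relation)
    positive:  (d,)   embedding of the gold target entity
    negatives: (k, d) embeddings of sampled non-target entities
    """
    # Synthesize harder examples by interpolation (one common scheme; assumption):
    # - pull the positive toward the negatives so it is less trivially separable,
    # - pull each negative toward the positive so it sits closer to the boundary.
    hard_pos = alpha * positive + (1 - alpha) * negatives.mean(dim=0)
    hard_neg = alpha * negatives + (1 - alpha) * positive.unsqueeze(0)

    # InfoNCE-style objective over temperature-scaled cosine similarities;
    # the correct class (index 0) is the interpolated positive.
    pos_sim = F.cosine_similarity(anchor, hard_pos, dim=0) / tau
    neg_sim = F.cosine_similarity(anchor.unsqueeze(0), hard_neg, dim=1) / tau
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim]).unsqueeze(0)   # (1, k+1)
    target = torch.zeros(1, dtype=torch.long)
    return F.cross_entropy(logits, target)
```

In SLiNT this supervision would sit alongside the structural injection components (SGNE, GDDI); the sketch only shows how interpolated hard positives and negatives can sharpen entity-level discrimination under a frozen LLM backbone.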
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: knowledge graphs, contrastive learning, entity linking, generation-based reasoning, structural augmentation
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Keywords: knowledge graph completion, large language models, structure-aware learning, contrastive learning, link prediction
Submission Number: 691