Abstract: Inferring relationships between entities in a knowledge graph (KG) is vital for numerous downstream applications such as semantic search, ontology construction, and personalized learning. However, many domains lack sufficient labeled data to train robust relationship inference models. In this paper, we present an Expert-in-the-Loop Few-Shot Prompting approach (EFP-KGRI) to perform LLM-assisted relationship inference in KGs. Our framework leverages a large language model (LLM) to generate pseudo-labeled entity pairs, creating an initial set of positive (relationship present) and negative (no relationship) examples even in the absence of ground truth. We then refine and calibrate these initial labels using embedding-based similarity scores and an active learning loop in which expert feedback resolves uncertain cases.
Experiments on both general-purpose encyclopedic KGs and specialized educational KGs demonstrate that EFP-KGRI significantly outperforms unsupervised baselines and naive LLM classification. By combining few-shot prompts, LLM self-consistency checks, and expert validation, we achieve more accurate and scalable relationship inference, effectively addressing the cold-start problem in knowledge graph completion.
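The calibration step described in the abstract can be sketched as follows. This is an illustrative toy example, not the paper's implementation: the blending weight, the confidence thresholds, and the tiny hand-written embeddings are all hypothetical, and a real system would obtain embeddings from a trained encoder and pseudo-label confidences from the LLM.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def calibrate(pairs, low=0.35, high=0.65):
    """Blend LLM pseudo-label confidence with embedding similarity.

    Each item in `pairs` is a dict with:
      'id'             - pair identifier
      'emb_a', 'emb_b' - toy entity embeddings (a real encoder would supply these)
      'llm_conf'       - LLM pseudo-label confidence in [0, 1]
    Returns (auto_positive, auto_negative, needs_expert): high-score pairs are
    accepted, low-score pairs rejected, and the uncertain middle band is routed
    to an expert, mirroring the active learning loop.
    """
    positive, negative, uncertain = [], [], []
    for p in pairs:
        sim = cosine(p["emb_a"], p["emb_b"])
        # Equal-weight blend; cosine is rescaled from [-1, 1] to [0, 1].
        score = 0.5 * p["llm_conf"] + 0.5 * (sim + 1) / 2
        if score >= high:
            positive.append(p["id"])
        elif score <= low:
            negative.append(p["id"])
        else:
            uncertain.append(p["id"])  # queued for expert review
    return positive, negative, uncertain

pairs = [
    {"id": "A-B", "emb_a": [1.0, 0.0], "emb_b": [0.9, 0.1], "llm_conf": 0.9},
    {"id": "C-D", "emb_a": [1.0, 0.0], "emb_b": [0.0, 1.0], "llm_conf": 0.1},
    {"id": "E-F", "emb_a": [1.0, 0.0], "emb_b": [0.5, 0.5], "llm_conf": 0.3},
]
pos, neg, ask = calibrate(pairs)
```

Here the similar, high-confidence pair is auto-accepted, the dissimilar, low-confidence pair auto-rejected, and the ambiguous pair deferred to the expert; only the deferred band consumes expert effort, which is what makes the loop scalable.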
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: NLP Applications, Knowledge Graphs, Relationship Inference
Languages Studied: English, Chinese
Submission Number: 4520