RECIPE-TKG: From Sparse History to Structured Reasoning for LLM-based Temporal Knowledge Graph Completion
Abstract: Temporal Knowledge Graphs (TKGs) represent dynamic facts as timestamped relations between entities. TKG completion involves forecasting missing or future links, requiring models to reason over time-evolving structure. While LLMs show promise for this task, existing approaches often over-rely on supervised fine-tuning and struggle when historical evidence is limited or missing. We introduce RECIPE-TKG, a lightweight and data-efficient framework designed to improve accuracy and generalization in settings with sparse historical context. It combines (1) rule-based multi-hop retrieval for structurally diverse history, (2) contrastive fine-tuning of lightweight adapters to encode relational semantics, and (3) test-time semantic filtering to iteratively refine generations based on embedding similarity. Experiments on four TKG benchmarks show that RECIPE-TKG outperforms previous LLM-based approaches, achieving up to a 22.4% relative improvement in Hits@10. Moreover, our framework produces more semantically coherent predictions, even for samples with limited historical context.
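To make component (3) concrete, here is a minimal sketch of embedding-similarity filtering at test time, assuming generic `embed` and `generate_candidates` helpers; those callables, the 0.35 threshold, and the candidate budget are illustrative placeholders, not the paper's implementation.

```python
# Illustrative sketch of test-time semantic filtering (not the paper's code):
# keep only LLM generations whose embedding lies close to the retrieved
# historical context, resampling until enough candidates survive.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def semantic_filter(query: str,
                    history_entities: list[str],
                    embed,                # assumed: str -> np.ndarray
                    generate_candidates,  # assumed: (str, int) -> list[str]
                    threshold: float = 0.35,   # illustrative value
                    max_rounds: int = 3,
                    budget: int = 10) -> list[str]:
    # Summarize the retrieved history as the mean entity embedding.
    context_vec = np.mean([embed(e) for e in history_entities], axis=0)
    accepted: list[str] = []
    for _ in range(max_rounds):
        # Iteratively resample and keep candidates close to the context.
        for cand in generate_candidates(query, budget):
            if cand not in accepted and cosine(embed(cand), context_vec) >= threshold:
                accepted.append(cand)
        if len(accepted) >= budget:
            break
    # Rank survivors by similarity to the historical context.
    accepted.sort(key=lambda c: cosine(embed(c), context_vec), reverse=True)
    return accepted[:budget]
```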
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: Machine Learning for NLP, NLP Applications, Efficient/Low-Resource Methods for NLP, Information Retrieval and Text Mining, Language Modeling
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Approaches to low-compute settings - efficiency, Publicly available software and/or pre-trained models
Languages Studied: English
Previous URL: https://openreview.net/forum?id=s7sq3KdQnG
Explanation Of Revisions PDF: pdf
Reassignment Request Area Chair: Yes, I want a different area chair for our submission
Reassignment Request Reviewers: Yes, I want a different set of reviewers
Justification For Not Keeping Action Editor Or Reviewers: We respectfully request new reviewers for this submission based on several concerns with the previous review process. Despite a comprehensive rebuttal and multiple follow-up inquiries, we received no engagement from the reviewers. Several reviewer questions addressed information already present in our appendices, suggesting an incomplete reading of our materials. Some citation requests appeared inconsistent with standard practice in the temporal knowledge graph completion literature, as the papers in question are not commonly referenced in recent ACL/EMNLP publications in this domain. Additionally, several reviews arrived after the deadline and contained questions readily answerable from the main text and appendix, raising concerns about review thoroughness. We have substantially revised the paper to address all concerns and believe a fresh evaluation by reviewers with expertise in LLM-based knowledge graph reasoning would best serve the assessment of our work.
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: N/A
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: References
B2 Discuss The License For Artifacts: Yes
B2 Elaboration: License and Ethics
B3 Artifact Use Consistent With Intended Use: N/A
B4 Data Contains Personally Identifying Info Or Offensive Content: N/A
B5 Documentation Of Artifacts: N/A
B6 Statistics For Data: N/A
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: Appendix B.4
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: Section 5.1 & Appendix B.4
C3 Descriptive Statistics: Yes
C3 Elaboration: Section 5 and Appendix
C4 Parameters For Packages: N/A
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: Yes
E1 Information About Use Of Ai Assistants: Yes
E1 Elaboration: Appendix J
Author Submission Checklist: Yes
Submission Number: 1196