A Closer Look at In-Context Learning for Temporal Knowledge Graph Forecasting

ACL ARR 2025 February Submission 6514 Authors (anonymous)

16 Feb 2025 (modified: 09 May 2025) · License: CC BY 4.0
Abstract: Temporal knowledge graph forecasting (TKGF) approaches have traditionally relied on complex graph neural network architectures, but recent advances in large language models, specifically in-context learning (ICL), offer promising out-of-the-box alternatives. Although previous work has demonstrated the potential of ICL, its limitations and generalization capabilities for TKGF remain underexplored. In this study, we conduct a comparative analysis of complexity (e.g., number of hops) and sparsity (e.g., relation frequency) confounders between ICL and supervised models on two annotated TKGF benchmarks. Our experimental results show that while ICL performs on par with or outperforms supervised models in lower-complexity scenarios, its effectiveness diminishes in more complex settings (e.g., multi-step forecasting, a larger number of hops), where supervised models are superior.
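
To make the ICL setup concrete, the sketch below illustrates one common way a TKGF query can be cast as an in-context prompt: historical quadruples are serialized in temporal order, and the query's object is left blank for the LLM to complete. The template, function name, and toy facts are illustrative assumptions, not the paper's exact prompt format.

```python
# A minimal sketch of ICL prompting for TKGF, loosely following the
# history-as-demonstrations style common in prior work. The template
# below is an assumption for illustration, not the paper's prompt.

from typing import List, Tuple

Quadruple = Tuple[str, str, str, int]  # (subject, relation, object, timestep)


def build_icl_prompt(history: List[Quadruple], query: Tuple[str, str, int]) -> str:
    """Serialize historical facts as demonstrations, then append the
    query with the object left blank for the LLM to complete."""
    lines = [f"{t}: [{s}, {r}, {o}]" for (s, r, o, t) in history]
    s, r, t = query
    lines.append(f"{t}: [{s}, {r},")  # model fills in the missing object
    return "\n".join(lines)


# Hypothetical usage with toy ICEWS-style facts:
history = [
    ("Barack Obama", "Make_a_visit", "Japan", 301),
    ("Barack Obama", "Make_a_visit", "Australia", 305),
]
print(build_icl_prompt(history, ("Barack Obama", "Make_a_visit", 309)))
```

Under this framing, prompt complexity grows with the depth and density of the serialized history, which is one intuition for why ICL may degrade in the multi-hop and multi-step settings the abstract describes.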
Paper Type: Short
Research Area: NLP Applications
Research Area Keywords: knowledge graphs
Contribution Types: Model analysis & interpretability, Data analysis
Languages Studied: English
Submission Number: 6514