Are Large Language Models All You Need for Temporal Knowledge Graph Forecasting?

ACL ARR 2024 June Submission 2816 Authors

15 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: While temporal knowledge graph forecasting (TKGF) approaches have traditionally relied heavily on complex graph neural network architectures, recent advances in large language models (LLMs) and in-context learning (ICL) offer promising out-of-the-box alternatives. However, little is known about the limitations and generalization capabilities of LLMs for TKGF. In this study, we conduct a comparative analysis of complexity (e.g., higher hop counts) and sparsity (e.g., relation frequency) confounders between LLMs and supervised models on two weakly annotated TKGF benchmarks. Our experimental results show that while LLMs perform on par with or outperform supervised models in low-complexity scenarios, their effectiveness diminishes in more complex settings (e.g., multi-step forecasting, higher hop counts), where supervised models maintain superior performance.
Paper Type: Short
Research Area: NLP Applications
Research Area Keywords: knowledge graphs
Contribution Types: Model analysis & interpretability
Languages Studied: N/A
Submission Number: 2816