TKG-LM: Temporal Knowledge Graph Extrapolation Enhanced by Language Models

23 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Knowledge Graph Reasoning, Temporal Knowledge Graph, Large Language Model
Abstract: Temporal Knowledge Graph (TKG) extrapolation aims to predict future missing facts based on historical information. While graph embedding methods based on TKG topology have achieved satisfactory performance, the semantic textual information of entities and relations has yet to be fully exploited. As large language models (LMs) such as ChatGPT sweep the field of natural language processing, a considerable body of work on KGs augments LMs with structured representations of world knowledge. In this paper, we propose a method called TKG-LM to fill the gap in the effective integration of TKGs and LMs, comprising historical event pruning, sampling-prompt construction, and layer-wise modality fusion. Specifically, we adopt a pruning strategy to extract valuable events from the numerous historical facts and reduce the search space for answers. Then, LMs and time-weighted functions are used to score the semantic similarity of each neighboring tuple, and a history-sampling prompt is constructed as the input to the LM. We integrate the encoded representations of LMs and graph neural networks (GNNs) in a multi-layer framework to enable bidirectional information flow between the two modalities. This facilitates the incorporation of structured topology knowledge into the language context representation while leveraging linguistic nuances to enhance the graph representation of knowledge. Our TKG-LM outperforms state-of-the-art (SOTA) TKG methods on five standard TKG datasets and surpasses existing LLM and LM+KG models. Further ablation experiments demonstrate the role of our module designs and the benefits of integrating LM and GNN representations.
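To make the abstract's pipeline concrete, below is a minimal, illustrative sketch (not the authors' released code) of two of the described steps: scoring historical neighbor tuples by combining an LM semantic-similarity term with a time-weighted decay, and assembling the top-scored tuples into a history-sampling prompt. All names and specifics here (score_tuple, build_prompt, the exponential decay form, the cosine-similarity choice, the toy embeddings) are assumptions for exposition only.

```python
# Illustrative sketch only; function names, the decay form, and the toy
# embeddings are hypothetical and not taken from the paper.
import math
from typing import List, Tuple

Quadruple = Tuple[str, str, str, int]  # (subject, relation, object, timestamp)

def cosine(u: List[float], v: List[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

def score_tuple(query_emb, tuple_emb, t_query: int, t_hist: int, decay: float = 0.1) -> float:
    """Hypothetical score: LM semantic similarity damped by an exponential time weight."""
    time_weight = math.exp(-decay * (t_query - t_hist))
    return time_weight * cosine(query_emb, tuple_emb)

def build_prompt(query: Quadruple, history: List[Quadruple], scores: List[float], k: int = 3) -> str:
    """Keep the k highest-scoring historical facts and verbalize them as an LM prompt."""
    ranked = sorted(zip(scores, history), key=lambda x: -x[0])[:k]
    lines = [f"{s} {r} {o} at time {t}." for _, (s, r, o, t) in ranked]
    s, r, _, t = query
    lines.append(f"At time {t}, {s} {r} [MASK]?")
    return "\n".join(lines)

if __name__ == "__main__":
    # Toy embeddings stand in for LM encodings of each verbalized tuple.
    query = ("Germany", "negotiate_with", "?", 305)
    history = [
        ("Germany", "negotiate_with", "France", 290),
        ("Germany", "sign_agreement_with", "Poland", 250),
        ("Germany", "criticize", "Russia", 300),
    ]
    q_emb = [0.9, 0.1, 0.2]
    h_embs = [[0.8, 0.2, 0.1], [0.3, 0.7, 0.2], [0.5, 0.4, 0.6]]
    scores = [score_tuple(q_emb, e, query[3], h[3]) for e, h in zip(h_embs, history)]
    print(build_prompt(query, history, scores, k=2))
```

Under these assumptions, the pruning and scoring steps shrink the candidate history before prompt construction, so the LM only sees the most relevant, recency-weighted facts; the layer-wise fusion of the resulting LM encoding with GNN representations is a separate component not sketched here.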
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7002