Abstract: Temporal knowledge graph completion (TKGC) aims to predict missing facts at different timestamps. A promising approach to this task is learning temporal knowledge graph representations in vector space, with a focus on modeling the important relation patterns that arise from temporality. However, implementing complex spatial transformations, such as embeddings in complicated spaces, may sacrifice computational efficiency. Additionally, relying on a single operation alone can limit representational ability, thereby hindering performance in temporal link prediction.
To address these challenges, this study introduces a \textbf{T}emporal knowledge graph \textbf{E}mbedding model via \textbf{R}odrigues’ \textbf{R}otation \textbf{F}ormula (TERRF) for TKGC. TERRF regards link prediction as a rigid-body transformation in three-dimensional (3D) space, consisting of two operations: a Normalized Scaling operation and an Efficient Rotation operation. The Normalized Scaling operation sets an initial position for entities, providing a more flexible range of rotations, while the Efficient Rotation operation implements rotations using Rodrigues’ Rotation Formula, requiring only an axis–angle representation.
Experimental results show that our proposed TERRF model significantly outperforms competitive baseline models and achieves state-of-the-art results on three popular benchmark datasets.
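The axis–angle rotation the abstract refers to can be illustrated independently of the paper. Below is a minimal sketch of Rodrigues' rotation formula, v' = v cos θ + (k × v) sin θ + k (k · v)(1 − cos θ), which rotates a 3D vector v about a unit axis k by angle θ; the function name `rodrigues_rotate` is illustrative and not taken from the paper, and this is not the TERRF model itself, only the underlying formula.

```python
import numpy as np

def rodrigues_rotate(v, axis, angle):
    """Rotate 3D vector v about `axis` by `angle` (radians) using
    Rodrigues' rotation formula:
        v' = v cos(t) + (k x v) sin(t) + k (k . v)(1 - cos(t)),
    where k is the normalized rotation axis."""
    k = axis / np.linalg.norm(axis)  # ensure a unit axis
    return (v * np.cos(angle)
            + np.cross(k, v) * np.sin(angle)
            + k * np.dot(k, v) * (1.0 - np.cos(angle)))

# Rotating the x-axis by 90 degrees about the z-axis yields the y-axis.
v = np.array([1.0, 0.0, 0.0])
rotated = rodrigues_rotate(v, np.array([0.0, 0.0, 1.0]), np.pi / 2)
```

Because the formula needs only three axis components and one angle per rotation, it avoids constructing a full 3×3 rotation matrix, which is the efficiency advantage the abstract alludes to.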
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: knowledge graph; knowledge graph embedding; temporal knowledge graph embedding
Languages Studied: English
Submission Number: 7173