SIT: Selective Incremental Training for Dynamic Knowledge Graph Embedding

Published: 2025, Last Modified: 22 Jan 2026 · ICDE 2025 · CC BY-SA 4.0
Abstract: In recent years, dynamic knowledge graph embedding (DKGE) has been widely studied to handle large-scale dynamic knowledge graphs (DKGs). The core idea is to encode the dynamic information within a DKG into embedding vectors and decode them for various downstream tasks. Many contributions have been made in this field: full-retraining DKGE models additionally encode temporal information for higher accuracy, while neighborhood-retraining models treat temporal data as dynamic changes in graph topology for better efficiency. However, existing approaches in these categories suffer from either insufficient effectiveness or insufficient efficiency. Recent work in the graph learning area proposes selectively retraining models by choosing training data according to certain criteria, but most selective retraining models are designed for homogeneous graphs; heterogeneous graph information and large graph sizes make it difficult to transfer these methods across scenarios. In this paper, we propose an efficient selective incremental training framework for DKGE, named SIT. Given a restriction on training data size, we select a set of important triples, instead of all triples in the DKG, to improve training efficiency. Specifically, we design a novel importance criterion that considers DKGE model parameters, historical embeddings, and graph topology. Extensive experiments on open-source datasets demonstrate the effectiveness and efficiency of the SIT framework with different DKGE models.
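The selection step described above can be sketched as follows. This is a minimal, hypothetical illustration of budget-constrained triple selection, not the paper's actual formulation: the three signals (gradient magnitude over model parameters, drift from historical embeddings, and a graph-topology term) and their weights are assumptions made for the sake of the example.

```python
import numpy as np

def importance_scores(grad_norm, embed_drift, degree, w=(0.4, 0.4, 0.2)):
    """Combine three min-max-normalized signals into one importance score.

    grad_norm   -- per-triple gradient magnitude w.r.t. model parameters (assumed signal)
    embed_drift -- per-triple distance from historical embeddings (assumed signal)
    degree      -- a simple topology proxy, e.g. entity degree (assumed signal)
    w           -- illustrative mixing weights, not from the paper
    """
    def norm(x):
        x = np.asarray(x, dtype=float)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    return w[0] * norm(grad_norm) + w[1] * norm(embed_drift) + w[2] * norm(degree)

def select_triples(triples, scores, budget):
    """Keep only the `budget` highest-scoring triples for incremental retraining."""
    top = np.argsort(scores)[::-1][:budget]
    return [triples[i] for i in top]

# Toy usage: four changed triples, retrain on the top 2 under the budget.
triples = [("a", "r1", "b"), ("b", "r2", "c"), ("c", "r1", "d"), ("a", "r3", "d")]
scores = importance_scores(
    grad_norm=[0.9, 0.1, 0.5, 0.3],
    embed_drift=[0.2, 0.05, 0.8, 0.1],
    degree=[3, 1, 2, 2],
)
subset = select_triples(triples, scores, budget=2)
```

The incremental step would then retrain the DKGE model only on `subset` rather than on the full triple set, which is where the efficiency gain comes from.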