MATA: Memory-Augmented Temporal Anchors for Sparse-Time Dynamic Knowledge Graph Embedding

20 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Dynamic Knowledge Graphs, Temporal Embeddings, Memory Networks, Contrastive Learning, Link Prediction, Concept Drift, Sparse Temporal Data, Version Alignment
TL;DR: MATA is a new framework for dynamic knowledge graphs that uses memory-augmented temporal anchors to handle sparse data, concept drift, and evolving environments.
Abstract: Dynamic knowledge graphs (KGs) face major hurdles in temporal link prediction, especially concept drift, sparse supervision over time, and the heavy computational cost of updating embeddings in evolving environments. To tackle these challenges, we introduce MATA (Memory-Augmented Temporal Anchors), a new framework that leverages learnable temporal anchors stored in a differentiable memory module. MATA provides three main capabilities: (1) temporal interpolation for any timestamp through attention-based synthesis over memory anchors, (2) contrastive learning that maintains temporal consistency while allowing semantic evolution, and (3) cross-version alignment to ensure coherent embeddings across stages of KG evolution. Extensive experiments on four benchmark datasets, ICEWS14 (+12.4%), GDELT (+15.7%), WikiKG (+8.3%), and YAGO-T (+11.2%), show that MATA delivers state-of-the-art results, achieving an 8.3–15.7% improvement in mean reciprocal rank (MRR) over existing methods, including recurrent models (T-GCN (Temporal Graph Convolutional Network), TANGO, CyGNet, Know-Evolve, T-GAT (Temporal Graph Attention Network)) and memory-based approaches (MemN2N). The framework is especially effective in sparse-temporal settings, with a 27.3% MRR gain under 70% missing timestamps, while maintaining strong performance across varying levels of temporal sparsity. Ablation studies confirm the importance of each component: removing temporal anchors reduces performance by 12.4%, removing attention mechanisms by 8.7%, and removing contrastive learning by 6.2%. Overall, MATA introduces a new paradigm for addressing temporal sparsity and concept drift in dynamic knowledge representation learning, with applications in time-aware retrieval, reasoning, and evolving knowledge systems.
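To make the first capability concrete, the following is a minimal illustrative sketch (not the authors' implementation) of attention-based temporal interpolation: an embedding for an arbitrary query timestamp is synthesized as an attention-weighted combination of learnable anchor embeddings, where attention scores here are assumed to come from temporal proximity (the scoring function, the temperature `tau`, and the function name `interpolate_embedding` are all hypothetical choices for exposition).

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def interpolate_embedding(t, anchor_times, anchor_embs, tau=1.0):
    """Hypothetical sketch of attention-based synthesis over memory anchors.

    t            : query timestamp (scalar)
    anchor_times : (K,) timestamps of the K learnable anchors
    anchor_embs  : (K, d) anchor embedding matrix
    tau          : temperature controlling how sharply attention
                   focuses on temporally close anchors (assumption)
    """
    # Closer anchors receive higher (less negative) scores.
    scores = -np.abs(anchor_times - t) / tau
    weights = softmax(scores)
    # Convex combination of anchor embeddings -> embedding at time t.
    return weights @ anchor_embs
```

For example, with anchors at times 0, 10, and 20, querying t = 10 yields an embedding dominated by the middle anchor, while t = 5 blends the first two; in the full model the anchors and attention parameters would be trained end-to-end rather than fixed by this distance heuristic.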
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 24654