LoRE: Robust and Adaptive Graph Embeddings via Local Self-Reconstruction Mechanisms

19 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Graph Learning, Graph Embeddings, Graph Neural Networks, Knowledge Graphs, Self-Supervised Learning
TL;DR: This work presents LoRE, a generalized graph embedding framework that uses attention-driven self-reconstruction to improve robustness, preserve local semantics during training, and adapt efficiently to evolving graphs without costly retraining.
Abstract: Graph embeddings aim to project nodes into numeric vector spaces that capture structural and semantic regularities, enabling their use as general-purpose representations for a broad range of downstream applications. However, existing embedding methods distort local geometry through negative sampling, fail to enforce semantic consistency, and require expensive retraining when graphs evolve. Therefore, we introduce LoRE, a versatile graph embedding framework based on attention-driven self-reconstruction mechanisms and a perspective-preserving training procedure. Built on a generalized formulation, LoRE can be applied to a wide range of graph types, from undirected graphs to relational knowledge graphs and even attributed node sets without inherent topologies. It enforces identical embeddings for structurally equivalent nodes, respects local context during training, and reduces the likelihood of violations of the open-world assumption. Unlike traditional methods, LoRE supports efficient on-the-fly adaptation: embeddings can be updated in real time as graphs change, without full retraining. Its reconstruction mechanism acts as a self-supervised training signal that improves embedding robustness, yielding improved performance compared to existing approaches. Extensive experiments demonstrate that LoRE consistently matches or outperforms baseline results while maintaining stability under dynamic conditions. Qualitative analyses further show that LoRE produces more separable and compact clusters in embedding spaces. Together, the results underscore its enhanced generalizability and practical value as a global, task-agnostic embedding method.
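The abstract does not spell out the exact formulation, but its core idea, reconstructing a node's embedding from attention-weighted neighbor embeddings and using the reconstruction error as a self-supervised signal, can be sketched minimally. All function names and the dot-product scoring are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def reconstruct(node_emb, neighbor_embs):
    """Rebuild a node embedding as an attention-weighted combination
    of its neighbors' embeddings (illustrative sketch)."""
    scores = neighbor_embs @ node_emb      # dot-product attention scores
    weights = softmax(scores)              # attention distribution over neighbors
    return weights @ neighbor_embs         # convex combination of neighbor embeddings

def reconstruction_loss(node_emb, neighbor_embs):
    """Self-supervised signal: squared error between a node's embedding
    and its attention-based reconstruction from local context."""
    recon = reconstruct(node_emb, neighbor_embs)
    return float(np.sum((node_emb - recon) ** 2))

# Toy usage: one node with five neighbors in an 8-dimensional space.
rng = np.random.default_rng(0)
v = rng.normal(size=8)
N = rng.normal(size=(5, 8))
loss = reconstruction_loss(v, N)
```

Minimizing such a loss during training would encourage embeddings consistent with their local neighborhood, and the same reconstruction step could update a new or changed node's embedding from its current neighbors without retraining, matching the on-the-fly adaptation the abstract describes.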
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 21516