Graph Theory-Based Deep Graph Similarity Learning: A Unified Survey of Pipeline, Techniques, and Challenges
Abstract: Graph similarity computation, which measures the resemblance between graphs, is a crucial operation in fields such as graph search. Recent advances in graph neural networks have enabled the embedding of graphs into low-dimensional vector spaces, where the similarity or distance between graphs can be quantified efficiently. However, these methods are often tailored to specific tasks and operate as black boxes, limiting both generalization and interpretability. To address these challenges, there is growing interest in incorporating domain-agnostic and interpretable concepts from graph theory, such as subgraph isomorphism, maximum common subgraph, and graph edit distance, into graph similarity learning as training objectives. This survey presents a comprehensive review of recent advances in deep graph similarity learning, focusing on models that integrate these graph-theoretic concepts. Although these approaches differ in their training objectives, they share significant commonalities in training pipeline, techniques, and challenges, and we analyze them through a unified lens referred to as graph theory-based deep graph similarity learning (GTDGSL). We systematically compare existing GTDGSL methods along their common training pipeline, highlight technique trends, and discuss key challenges, applications, and future research directions in this domain.
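To make one of the graph-theoretic objectives named above concrete, the sketch below brute-forces subgraph isomorphism on tiny graphs. The helper `has_subgraph_isomorphism` is illustrative only (not from the survey); it shows the exact, exponential-time quantity that GTDGSL models are trained to approximate with learned embeddings.

```python
from itertools import permutations

def has_subgraph_isomorphism(pattern_edges, target_edges, target_n):
    """Check whether the pattern graph occurs as a subgraph of the target.

    Brute-force: try every injective mapping of pattern nodes onto target
    nodes and test that each pattern edge maps to an existing target edge.
    Exponential in graph size, so only viable for tiny graphs; this is why
    learned approximations of such objectives are attractive.
    """
    pattern_nodes = sorted({v for edge in pattern_edges for v in edge})
    target_edge_set = {frozenset(edge) for edge in target_edges}
    for image in permutations(range(target_n), len(pattern_nodes)):
        mapping = dict(zip(pattern_nodes, image))
        if all(frozenset((mapping[u], mapping[v])) in target_edge_set
               for u, v in pattern_edges):
            return True
    return False

# A triangle occurs inside the 4-clique but not inside a 4-node path.
triangle = [(0, 1), (1, 2), (0, 2)]
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
path4 = [(0, 1), (1, 2), (2, 3)]
print(has_subgraph_isomorphism(triangle, k4, 4))     # True
print(has_subgraph_isomorphism(triangle, path4, 4))  # False
```

The same brute-force pattern extends to maximum common subgraph and graph edit distance, both of which are likewise NP-hard in general.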
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission:
Revised Section 2.4 and added a summary table to clarify the differences between graph theory-based deep graph similarity learning and general deep similarity learning.
Assigned Action Editor: Di He
Submission Number: 3724