LaCore: Laplacian Cohesive Subgraphs for Graph Representation Learning

ICLR 2026 Conference Submission 21981 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Graph representation learning, Graph embeddings, Graph neural networks
Abstract: Dense, cohesive subgraphs are valuable anchors for pooling and interpretation in graph representation learning (GRL), yet exact cliques are too strict and average-density heuristics are hub-biased and unstable. We introduce \textsc{LaCore}, a fast two-phase \emph{Laplacian-smoothed reverse peeling} method that rebuilds the graph in a fixed importance order and scores each \emph{connected} component with a smooth ratio that penalizes within-component degree variation. A simple one-step growth test yields a natural \emph{first-peak} stopping rule, and a degree-concentration certificate links low Laplacian energy to near-uniform internal support, making the selected subgraphs cohesive and interpretable. \textsc{LaCore} preserves the scalability of greedy peeling, running in $O((|V|{+}|E|)\log|V| + |E|k)$, and is parameter-free when used as a pooling operator. On synthetic planted-subgraph recovery and graph classification benchmarks, \textsc{LaCore} consistently improves downstream GRL metrics. The result is a practical, stable alternative to density-only heuristics that plugs directly into modern GRL pipelines.
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 21981
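
The abstract describes a rebuild-score-stop loop: peel vertices in a fixed importance order, reinsert them in reverse, score the connected component containing each reinserted vertex with a smoothed ratio, and stop at the first peak. The following is a minimal illustrative sketch of that loop, not the authors' implementation: it assumes a degeneracy (repeated min-degree) peeling order as the importance order and uses a placeholder degree-variance-penalized density in place of the paper's Laplacian-smoothed ratio; the function names `peeling_order`, `smooth_ratio`, and `lacore_sketch` are hypothetical.

```python
# Illustrative sketch only: the exact importance order, smoothed component score,
# and stopping test of LaCore are not reproduced here; min-degree peeling and the
# placeholder `smooth_ratio` below merely stand in for them.
import networkx as nx


def peeling_order(G):
    """Greedy min-degree (degeneracy-style) peeling order, used here as the
    fixed importance order; vertices removed first are treated as least important."""
    H = G.copy()
    order = []
    while H.number_of_nodes() > 0:
        v = min(H.nodes, key=H.degree)  # repeatedly remove a minimum-degree vertex
        order.append(v)
        H.remove_node(v)
    return order


def smooth_ratio(G, nodes):
    """Placeholder cohesiveness score: average internal degree discounted by the
    within-component degree variance (a crude proxy for Laplacian smoothness)."""
    sub = G.subgraph(nodes)
    n, m = sub.number_of_nodes(), sub.number_of_edges()
    if n < 2:
        return 0.0
    degs = [d for _, d in sub.degree()]
    mean = sum(degs) / n
    var = sum((d - mean) ** 2 for d in degs) / n
    return (2.0 * m / n) / (1.0 + var)


def lacore_sketch(G):
    """Reverse peeling: reinsert vertices in reverse importance order, rescore the
    component containing each new vertex, and stop at the first score peak."""
    order = peeling_order(G)
    active = set()
    best_nodes, best_score, prev_score = set(), 0.0, 0.0
    for v in reversed(order):  # rebuild the graph from the most important vertex
        active.add(v)
        comp = nx.node_connected_component(G.subgraph(active), v)
        score = smooth_ratio(G, comp)
        if score > best_score:
            best_nodes, best_score = set(comp), score
        if prev_score > 0 and score < prev_score and best_score > 0:
            break  # one-step growth test: the score dropped, so a first peak was reached
        prev_score = score
    return best_nodes, best_score


if __name__ == "__main__":
    # Toy example: a 5-clique planted inside a sparse random background graph.
    G = nx.gnm_random_graph(40, 60, seed=0)
    G.add_edges_from((i, j) for i in range(5) for j in range(i + 1, 5))
    nodes, score = lacore_sketch(G)
    print(sorted(nodes), round(score, 3))
```

In the full method, the fixed importance order, the Laplacian-smoothed ratio, and the degree-concentration certificate replace these placeholders; only the overall rebuild, rescore, and first-peak stopping structure is meant to carry over.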