Keywords: network representation learning, node classification, linear modularity, label propagation
TL;DR: When node attributes are unavailable, node embeddings for classification can be generated efficiently by including a modularity term in the objective function.
Abstract: Graph-based learning is a cornerstone for analyzing structured data, with node classification as a central task. However, in many real-world graphs, nodes lack informative feature vectors, leaving only neighborhood connectivity and class labels as available signals. In such cases, effective classification hinges on learning node embeddings that capture structural roles and topological context. We introduce a fast semi-supervised embedding framework that jointly optimizes three complementary objectives: (i) unsupervised structure preservation via a scalable modularity approximation, (ii) supervised regularization that minimizes intra-class variance among labeled nodes, and (iii) semi-supervised propagation that refines unlabeled nodes through random-walk-based label spreading with attention-weighted similarity. These components are unified into a single iterative optimization scheme, yielding high-quality node embeddings. On standard benchmarks, our method consistently achieves classification accuracy on par with or superior to state-of-the-art approaches, at significantly lower computational cost.
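The third component, random-walk-based label spreading, can be illustrated with a minimal sketch. The version below omits the attention-weighted similarity and the joint embedding objective described in the abstract, and simply iterates a row-normalized transition matrix over one-hot seed labels; the function name, dense NumPy adjacency, and parameters (`alpha`, `iters`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def label_propagation(adj, labels, mask, alpha=0.9, iters=50):
    """Spread class labels over a graph via random-walk smoothing.

    adj:    (n, n) symmetric adjacency matrix
    labels: (n,) integer class ids (ignored where mask is False)
    mask:   (n,) boolean, True for labeled seed nodes
    alpha:  propagation weight; (1 - alpha) re-injects the seeds each step
    """
    n = adj.shape[0]
    classes = np.unique(labels[mask])
    # One-hot seed matrix; rows of unlabeled nodes stay zero.
    Y = np.zeros((n, classes.size))
    for j, c in enumerate(classes):
        Y[mask & (labels == c), j] = 1.0
    # Row-normalized transition matrix P = D^{-1} A of the random walk.
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(deg, 1e-12)
    F = Y.copy()
    for _ in range(iters):
        # Diffuse one random-walk step, then clamp toward the seeds.
        F = alpha * (P @ F) + (1 - alpha) * Y
    return classes[F.argmax(axis=1)]
```

For example, on two triangles joined by a single bridge edge, labeling one node in each triangle is enough for the diffusion to assign every node to its own cluster's class.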
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 4749