Benchmarking and Analyzing Unsupervised Network Representation Learning and the Illusion of Progress
Abstract: A number of methods have been developed for unsupervised network representation learning -- ranging from classical methods based on graph spectra to recent random walk based methods, and from deep learning based methods to matrix factorization based methods. Each new study inevitably seeks to establish the relative superiority of the proposed method over others. The lack of a standard assessment protocol and benchmark suite often leaves practitioners wondering whether a new idea represents a significant scientific advance. In this work, we articulate a clear and pressing need to systematically and rigorously benchmark such methods. Our overall assessment -- the result of a careful benchmarking of 15 methods for unsupervised network representation learning on 16 non-attributed graphs (several with different characteristics) -- is that many recently proposed improvements are somewhat of an illusion when assessed through the lens of downstream tasks such as link prediction and node classification. Specifically, we find that several proposed improvements are marginal at best and that aspects of many of these datasets often render such small differences insignificant, especially when viewed through a rigorous statistical lens. A more detailed analysis of our results yields several new insights: first, we find that classical methods, often dismissed or not considered by recent efforts, can compete on certain types of datasets when tuned appropriately; second, we find that, from a qualitative standpoint, a couple of methods based on matrix factorization offer a small but not always consistent advantage over alternative methods; third, no single method completely outperforms the other embedding methods on both node classification and link prediction tasks. Finally, we also present several analyses that reveal settings under which certain algorithms perform well (e.g., the role of neighborhood context and the dataset properties that impact performance). An important outcome of this study is the benchmark and evaluation protocol, which practitioners may find useful for future research in this area.
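To make the evaluation protocol concrete, below is a minimal sketch of the kind of downstream node-classification assessment the abstract describes: pre-trained node embeddings are fed to a simple classifier and scored with cross-validated micro-F1. This is an illustrative assumption, not the benchmark's actual code; the file names (`embeddings.npy`, `labels.npy`) are hypothetical placeholders, and the authors' protocol (see the linked repository) may differ in classifier, splits, and metrics.

```python
# Hedged sketch of a downstream node-classification evaluation for
# pre-trained node embeddings. File names are hypothetical placeholders;
# this is not the NRL_Benchmark repository's actual API.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score

X = np.load("embeddings.npy")  # shape: (num_nodes, dim), from any embedding method
y = np.load("labels.npy")      # shape: (num_nodes,), integer class labels

# 5-fold stratified cross-validation, reporting mean and std of micro-F1,
# so that small differences between methods can be judged statistically.
micro_f1s = []
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    micro_f1s.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="micro"))

print(f"micro-F1: {np.mean(micro_f1s):.4f} +/- {np.std(micro_f1s):.4f}")
```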
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Updated for camera-ready version.
Code: https://github.com/PriyeshV/NRL_Benchmark
Assigned Action Editor: ~Karthik_R_Narasimhan1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 42