Keywords: Graph embedding, Force-directed, representation learning, Spring model, Reduced complexity
TL;DR: We extend previous work on force-directed graph embedding by reducing its complexity: force calculations are restricted to a limited set of node pairs instead of all pairs.
Abstract: Graph embedding is a critical pre-processing step that maps elements of a graph network, such as its nodes or edges, to coordinates in a $d$-dimensional space. The primary goal of the embedding process is to capture and preserve various features of the graph network, including its topology and node attributes, in the generated embedding. Maintaining these graph features in the embedding can significantly enhance the performance of downstream machine learning tasks. In this work, we introduce a novel family of graph embedding methods that leverage kinematics principles within a spring model and $n$-body simulation framework to generate the graph embedding. The proposed method differs substantially from state-of-the-art (SOTA) methods, as it does not attempt to fit a model (such as a neural network) and eliminates the need for mechanisms such as message passing or back-propagation. Instead, it aims to position the nodes in the embedding space such that the total net force of the system falls below a minimal threshold, at which point the system reaches an equilibrium state. The spring model is designed as a linear summation of non-linear force functions, with the shortest-path distance serving as the adjusting parameter for the force factor between each node pair, thereby inducing the graph topology in the force functions. In this work, we reduce the complexity of the original algorithm from $O(n^2)$ to $O(n\log n)$ while maintaining the performance metrics at a competitive level.
The proposed method is intuitive, parallelizable, and highly scalable. While the primary focus of this work is on the feasibility of the Force-Directed approach, the results in unsupervised graph embeddings are comparable to or better than SOTA methods, demonstrating its potential for practical applications.
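The core idea can be sketched in a few lines. The sketch below is a hypothetical minimal implementation, not the authors' code: each node pair in a *limited* sampled set (neighbors plus a few random reachable nodes, rather than all $n^2$ pairs) exerts a spring force whose rest length is the shortest-path (hop) distance, and positions are updated until the total net force drops below a threshold. The function names, sampling scheme, and parameters are illustrative assumptions.

```python
import numpy as np
from collections import deque

def bfs_hops(adj, src, n):
    """Hop (shortest-path) distances from src via BFS."""
    dist = np.full(n, np.inf)
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == np.inf:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def force_directed_embed(edges, n, d=2, pairs_per_node=4, iters=200,
                         lr=0.05, tol=1e-3, seed=0):
    """Hypothetical sketch: spring-model embedding over a sampled pair set.

    Each sampled pair (i, j) is connected by a spring whose rest length is
    the hop distance h_ij; iteration stops once the total net force is
    below `tol` (equilibrium)."""
    rng = np.random.default_rng(seed)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # All-pairs hop distances via per-node BFS (fine for small graphs;
    # a landmark/pivot scheme would be used at scale).
    hops = np.vstack([bfs_hops(adj, i, n) for i in range(n)])
    # Limited pair set: direct neighbors plus a few random reachable nodes
    # per node -- this is the complexity-reduction step.
    pairs = set()
    for u in range(n):
        for v in adj[u]:
            pairs.add((min(u, v), max(u, v)))
        reachable = np.flatnonzero(np.isfinite(hops[u]) & (np.arange(n) != u))
        sample = rng.choice(reachable, size=min(pairs_per_node, len(reachable)),
                            replace=False)
        for v in sample:
            pairs.add((min(u, int(v)), max(u, int(v))))
    pairs = np.array(sorted(pairs))
    X = rng.normal(scale=0.1, size=(n, d))
    for _ in range(iters):
        F = np.zeros_like(X)
        delta = X[pairs[:, 0]] - X[pairs[:, 1]]
        dist = np.linalg.norm(delta, axis=1) + 1e-9
        target = hops[pairs[:, 0], pairs[:, 1]]
        # Spring force: attractive when the pair is farther apart than its
        # hop distance, repulsive when closer.
        f = -((dist - target) / dist)[:, None] * delta
        np.add.at(F, pairs[:, 0], f)
        np.add.at(F, pairs[:, 1], -f)
        X += lr * F
        if np.linalg.norm(F) < tol:  # equilibrium reached
            break
    return X
```

The pair-sampling step is where the quadratic cost is avoided: with $O(\log n)$ sampled pairs per node, each iteration touches $O(n\log n)$ springs rather than all $O(n^2)$ pairs.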
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8480