Keywords: network embedding, node representation learning, quantum mechanics, tensorized embedding
Abstract: Node representation learning, a.k.a. network embedding (NE), is an essential technique for network analysis that represents nodes as vectors, which can also serve downstream tasks or act as initial input for GNN models. Most existing NE algorithms require space linear in the product of the number of nodes and the embedding dimension to store embeddings. This conventional embedding paradigm has two defects: i) it makes deploying NE algorithms for large-scale networks difficult on devices with limited memory/storage; ii) model expressiveness is constrained by the limited embedding dimension. Inspired by the large Hilbert space of quantum systems, we propose a brand new NE algorithm, \emph{node2ket}, that imitates the behavior of quantum systems. Theoretically, we analyze how it unifies existing embedding methods, both conventional and tensorized, and explore the ultimate compressive power of the embedding model in terms of space complexity compared with conventional methods. Experiments are conducted on five public real-world networks, where methods are evaluated on network reconstruction and link prediction. On BlogCatalog, our method outperforms all baselines with 1/32 of the training parameters and 1/16 of the running time on the same machine. On DBLP, the reconstruction precision of node2ket is 3 times higher than that of the best baseline, i.e., LouvainNE. Source code will be made publicly available.
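To make the tensorized-embedding idea behind the abstract concrete, below is a minimal illustrative sketch (our own, not the authors' node2ket implementation) of a quantum-inspired "ket" embedding: each node stores C small sub-embeddings of dimension d, and its full representation is their Kronecker (tensor) product, so the effective dimension is d^C while only C·d parameters per node are stored. All names (`num_subspaces`, `sub_dim`, etc.) are hypothetical.

```python
import numpy as np

# Hypothetical sketch of a tensor-product ("ket") embedding.
# Each node stores C sub-embeddings of dimension d (C*d parameters),
# but its full representation lives in a d**C-dimensional space.

rng = np.random.default_rng(0)
num_nodes = 1000          # N
num_subspaces = 4         # C: number of tensor factors
sub_dim = 8               # d: dimension of each factor

# Stored parameters: N x C x d (vs. N x d**C for a conventional embedding
# of the same effective dimension).
sub_embeddings = rng.normal(size=(num_nodes, num_subspaces, sub_dim))
# Normalize each factor to unit norm, like a quantum state vector.
sub_embeddings /= np.linalg.norm(sub_embeddings, axis=-1, keepdims=True)

def full_embedding(node):
    """Kronecker product of the node's factors: a d**C-dim vector."""
    vec = sub_embeddings[node, 0]
    for c in range(1, num_subspaces):
        vec = np.kron(vec, sub_embeddings[node, c])
    return vec

def similarity(u, v):
    """Inner product in the tensor space factorizes into a product of
    per-subspace inner products, so the d**C-dim vectors never need
    to be materialized."""
    dots = np.einsum('cd,cd->c', sub_embeddings[u], sub_embeddings[v])
    return dots.prod()

u, v = 3, 7
assert np.isclose(similarity(u, v), full_embedding(u) @ full_embedding(v))
print(similarity(u, v))
```

The factorized inner product is the key design point of such sketches: training and scoring only ever touch the C·d stored parameters, which is what allows an exponentially large effective dimension at a fraction of the memory cost of conventional embeddings.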
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning