Bi-GCL: Efficient Search on Networks

24 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Graph Contrastive Learning, Similarity Search, Binary Coding, Efficiency
TL;DR: A new graph contrastive learning approach to enhance network node similarity search with binary node embeddings.
Abstract: Recent research shows the promising potential of continuous node embedding methods for Top-K network node similarity search, which typically amounts to finding nearest neighbors measured by similarity in a continuous embedding space. However, these methods scale poorly to large networks, since their embeddings demand significant storage and entail tremendous computation costs. In this paper, we introduce a graph contrastive learning framework for compressing continuous node embeddings into binary codes with a customizable number of bits per dimension, striking a balance between retrieval accuracy, speed, and storage. Specifically, we present a recurrent binarization architecture with GNNs, which consists of two components: a GNN encoder that learns continuous node representations, and a residual multilayer perceptron module that encodes these representations into binary codes. The whole architecture is trained end-to-end by jointly optimizing three losses: a contrastive loss that pulls the representations of positive pairs together, an information bottleneck loss that minimizes superfluous information, and a representation distillation loss that aligns binary codes with their continuous counterparts. Extensive experiments demonstrate that our method achieves approximately 6x-19x faster retrieval and 16x-32x space reduction compared to traditional continuous embedding methods. Moreover, it significantly outperforms state-of-the-art continuous- and hash-based network embedding methods on several real-world networks.
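To make the described pipeline concrete, below is a minimal sketch of the encoder-plus-binarizer design and the joint objective, assuming PyTorch, a simple dense-adjacency GCN encoder, sign binarization with a straight-through estimator, and InfoNCE as the contrastive term. The class name `BiGCLSketch`, the specific forms of the information bottleneck and distillation terms, and the loss weights are all illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of a Bi-GCL-style model: GNN encoder -> residual MLP -> binary codes,
# trained with contrastive + information bottleneck + distillation losses (forms assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One dense-adjacency graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        return F.relu(a_hat @ self.lin(x))

class BiGCLSketch(nn.Module):
    def __init__(self, in_dim, hid_dim, code_bits):
        super().__init__()
        self.enc1 = GCNLayer(in_dim, hid_dim)
        self.enc2 = GCNLayer(hid_dim, hid_dim)
        # Residual MLP that maps continuous embeddings to pre-binarization logits.
        self.mlp = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, code_bits))
        self.skip = nn.Linear(hid_dim, code_bits)

    def forward(self, x, a_hat):
        h = self.enc2(self.enc1(x, a_hat), a_hat)        # continuous node embeddings
        logits = self.mlp(h) + self.skip(h)              # residual connection
        b_hard = torch.sign(logits)                      # binary codes in {-1, +1}
        b = logits + (b_hard - logits).detach()          # straight-through estimator
        return h, b

def info_nce(z1, z2, tau=0.5):
    """Contrastive term: two augmented views of the same node are positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, labels)

def joint_loss(model, x1, x2, a_hat, lam_ib=0.1, lam_kd=0.1):
    """Assumed joint objective: contrastive + bottleneck + code/embedding distillation."""
    h1, b1 = model(x1, a_hat)
    h2, b2 = model(x2, a_hat)
    l_con = info_nce(h1, h2)
    # Information bottleneck surrogate (assumption): keep the two views' embeddings close.
    l_ib = F.mse_loss(h1, h2)
    # Distillation (assumption): align binary codes with their continuous counterparts.
    l_kd = (1 - F.cosine_similarity(b1, F.normalize(h1, dim=1) @ torch.eye(
        b1.size(1), h1.size(1), device=h1.device).t(), dim=1)).mean() \
        if b1.size(1) != h1.size(1) else (1 - F.cosine_similarity(b1, h1, dim=1)).mean()
    return l_con + lam_ib * l_ib + lam_kd * l_kd
```

At search time, the learned binary codes would be compared with Hamming distance (e.g., XOR plus popcount), which is where the reported retrieval speedup and 16x-32x storage reduction over float embeddings would come from; the exact indexing scheme is not specified in the abstract.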
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9386