Node Number Awareness Representation for Graph Similarity Learning

Published: 01 Feb 2023, Last Modified: 13 Feb 2023
Submitted to ICLR 2023
Readers: Everyone
Keywords: graph representation learning, graph similarity learning, graph matching
Abstract: This work addresses two important issues in graph similarity computation: the Node Number Awareness Issue (N$^2$AI), and how to accelerate the inference speed of graph similarity computation in downstream tasks. We find that existing Graph Neural Network based graph similarity models incur large errors when predicting the similarity scores of two graphs with a similar number of nodes. Our analysis shows that this is because the global pooling function in graph neural networks maps graphs with a similar number of nodes to similar embedding distributions, reducing the separability of their embeddings; we refer to this as the N$^2$AI. Our motivation is to enhance the difference between the two embeddings to improve their separability, so we leverage our proposed Different Attention (DiffAtt) to construct the Node Number Awareness Graph Similarity Model (N$^2$AGim). In addition, we propose Graph Similarity Learning with Landmarks (GSL$^2$) to accelerate similarity computation. GSL$^2$ uses the trained N$^2$AGim to generate an individual embedding for each graph without any additional learning, and this individual embedding effectively improves GSL$^2$'s inference speed. Experiments demonstrate that our N$^2$AGim outperforms the second-best approach on Mean Squared Error by 24.3\% (1.170 vs. 1.546), 43.1\% (0.066 vs. 0.116), and 44.3\% (0.308 vs. 0.553) on the AIDS700nef, LINUX, and IMDBMulti datasets, respectively. Our GSL$^2$ is up to 47.7 times faster than N$^2$AGim and 1.36 times faster than the second-fastest model. Our code is publicly available at https://github.com/iclr231312/N2AGim.
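
The abstract names two mechanisms, DiffAtt and landmark-based inference, without implementation detail. The PyTorch sketch below illustrates one plausible reading of each idea: an attention gate driven by the element-wise difference of two globally pooled graph embeddings, and reducing a graph to its similarities against a fixed set of landmark graphs so that pairwise scoring becomes a cheap vector comparison instead of a full model forward pass per pair. The class name DiffAttSketch, the sigmoid gating layer, and the cosine landmark signature are assumptions of ours for illustration, not the authors' actual design; the linked repository holds the real implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiffAttSketch(nn.Module):
    """Hypothetical sketch of a 'difference attention' over two pooled
    graph embeddings: the element-wise difference gates both embeddings,
    amplifying the dimensions in which the two graphs disagree and so
    improving the separability of graphs with similar node counts."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, dim)  # assumed gating layer, not from the paper

    def forward(self, h1: torch.Tensor, h2: torch.Tensor) -> torch.Tensor:
        # h1, h2: (batch, dim) globally pooled graph embeddings
        att = torch.sigmoid(self.gate(torch.abs(h1 - h2)))  # difference-driven weights
        return att * h1 + att * h2  # re-weighted joint embedding, (batch, dim)


def landmark_signature(query_emb: torch.Tensor,
                       landmark_embs: torch.Tensor) -> torch.Tensor:
    """Sketch of landmark-based inference: a graph is summarized by its
    similarities to k fixed landmark graphs, computed once per graph,
    so later pairwise comparisons need no pairwise model forward pass."""
    # query_emb: (dim,), landmark_embs: (k, dim)
    return F.cosine_similarity(query_emb.unsqueeze(0), landmark_embs, dim=-1)  # (k,)


if __name__ == "__main__":
    # Usage sketch with random embeddings standing in for N2AGim outputs.
    diff_att = DiffAttSketch(dim=64)
    h1, h2 = torch.randn(8, 64), torch.randn(8, 64)
    joint = diff_att(h1, h2)                      # (8, 64) joint embedding
    sig = landmark_signature(h1[0], torch.randn(16, 64))  # (16,) landmark signature
    print(joint.shape, sig.shape)
```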
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning