Universal Self-Attention Network for Graph Classification

12 Sept 2020 · OpenReview Archive Direct Upload
Abstract: We address a limitation of graph neural networks (GNNs) for graph classification: the lack of a mechanism to exploit dependencies among nodes, often caused by inefficient aggregation over nodes' neighbors. To this end, we present U2GNN -- a novel embedding model leveraging the transformer self-attention network -- to learn plausible node and graph embeddings. In particular, U2GNN induces a powerful aggregation function, using a self-attention mechanism followed by a recurrent transition, to update the vector representation of each node from its neighbors. As a consequence, U2GNN effectively infers the potential dependencies among nodes, leading to better modeling of graph structures. Experimental results show that the proposed U2GNN achieves state-of-the-art accuracies on benchmark datasets for the graph classification task. Our code is available at: \url{https://github.com/daiquocnguyen/Graph-Transformer}.
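The aggregation step described in the abstract (self-attention over a node's neighbors followed by a recurrent transition) can be sketched as follows. This is a hypothetical PyTorch illustration of the general idea, not the authors' implementation; all class and parameter names here are made up for clarity, and the authors' actual code is at the repository linked above.

```python
import torch
import torch.nn as nn

class SelfAttentionAggregator(nn.Module):
    """Illustrative sketch of a U2GNN-style aggregation function:
    attend over a node together with its (sampled) neighbors, then
    apply a recurrent transition to update the node's representation.
    Names and hyper-parameters are assumptions, not the paper's code."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.transition = nn.GRUCell(dim, dim)  # recurrent transition

    def forward(self, node_vec: torch.Tensor, neighbor_vecs: torch.Tensor) -> torch.Tensor:
        # node_vec: (batch, dim); neighbor_vecs: (batch, k, dim)
        query = node_vec.unsqueeze(1)                       # (batch, 1, dim)
        context = torch.cat([query, neighbor_vecs], dim=1)  # self + neighbors
        attended, _ = self.attn(query, context, context)    # (batch, 1, dim)
        # The attended neighborhood summary drives a GRU update of the node state.
        return self.transition(attended.squeeze(1), node_vec)

agg = SelfAttentionAggregator(dim=16)
updated = agg(torch.randn(2, 16), torch.randn(2, 5, 16))  # 2 nodes, 5 neighbors each
```

Stacking several such layers would let information propagate beyond immediate neighbors, with a final readout (e.g. summing node vectors) producing the graph embedding for classification.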