Accurate Learning of Graph Representations with Graph Multiset Pooling

Published: 12 Jan 2021 · Last Modified: 03 Apr 2024 · ICLR 2021 Poster
Keywords: Graph representation learning, Graph pooling
Abstract: Graph neural networks have been widely used to model graph data, achieving impressive results on node classification and link prediction tasks. Yet, obtaining an accurate representation of an entire graph further requires a pooling function that maps a set of node representations into a compact form. A simple sum or average over all node representations treats every node equally, without considering its task relevance or the structural dependencies among nodes. Recently proposed hierarchical graph pooling methods, on the other hand, may yield the same representation for two different graphs that are distinguished by the Weisfeiler-Lehman test, as they suboptimally preserve information from the node features. To tackle these limitations of existing graph pooling methods, we first formulate the graph pooling problem as a multiset encoding problem with auxiliary information about the graph structure, and propose a Graph Multiset Transformer (GMT), a multi-head attention based global pooling layer that captures the interactions between nodes according to their structural dependencies. We show that GMT satisfies both injectiveness and permutation invariance, such that it is at most as powerful as the Weisfeiler-Lehman graph isomorphism test. Moreover, our method can be easily extended to previous node clustering approaches for hierarchical graph pooling. Our experimental results show that GMT significantly outperforms state-of-the-art graph pooling methods on graph classification benchmarks with high memory and time efficiency, and obtains even larger performance gains on graph reconstruction and generation tasks.
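To make the core idea concrete, below is a minimal PyTorch sketch of attention-based multiset pooling in the spirit of GMT: a learnable seed query attends over all node embeddings, so nodes contribute by learned relevance rather than uniformly as in sum/mean pooling. The class name `AttentionPool` and the single-seed setup are illustrative assumptions, not the paper's exact layer; the released code at JinheonBaek/GMT is the reference implementation.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Hypothetical sketch: multi-head attention pooling over a multiset
    of node embeddings. A learnable seed vector queries all node
    representations, yielding a permutation-invariant graph-level vector.
    GMT itself stacks such blocks and injects graph structure via GNNs."""

    def __init__(self, dim: int, num_heads: int = 4, num_seeds: int = 1):
        super().__init__()
        # One learnable query ("seed") per output vector of the pooled graph.
        self.seed = nn.Parameter(torch.randn(1, num_seeds, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x, mask=None):
        # x: (batch, num_nodes, dim) node embeddings from a GNN encoder
        # mask: (batch, num_nodes), True at padded (invalid) node positions
        query = self.seed.expand(x.size(0), -1, -1)
        pooled, _ = self.attn(query, x, x, key_padding_mask=mask)
        return pooled  # (batch, num_seeds, dim) graph representation

# Usage: pool 7 node embeddings of one graph into a single 64-d vector.
pool = AttentionPool(dim=64)
nodes = torch.randn(1, 7, 64)
graph_repr = pool(nodes)  # shape (1, 1, 64)
```

Because the attention weights are computed over the node set rather than a fixed node ordering, permuting the rows of `nodes` leaves `graph_repr` unchanged, which is the permutation-invariance property the abstract refers to.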
One-sentence Summary: A novel graph pooling method for graph representation learning that encodes a graph's nodes as a multiset using attention-based operations.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Code: [JinheonBaek/GMT](https://github.com/JinheonBaek/GMT)
Data: [BBBP (Blood-Brain Barrier Penetration)](https://paperswithcode.com/dataset/bbbp-scaffold), [CLUSTER](https://paperswithcode.com/dataset/cluster), [COLLAB](https://paperswithcode.com/dataset/collab), [HIV (Human Immunodeficiency Virus)](https://paperswithcode.com/dataset/qm9-charge-densities-and-energies-calculated), [IMDB-BINARY](https://paperswithcode.com/dataset/imdb-binary), [IMDB-MULTI](https://paperswithcode.com/dataset/imdb-multi), [MUTAG](https://paperswithcode.com/dataset/mutag), [OGB](https://paperswithcode.com/dataset/ogb), [PROTEINS](https://paperswithcode.com/dataset/proteins), [Tox21](https://paperswithcode.com/dataset/tox21-1), [ToxCast (Toxicity Forecaster)](https://paperswithcode.com/dataset/toxcast-scaffold), [ZINC](https://paperswithcode.com/dataset/zinc)
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/arxiv:2102.11533/code)