NEURAL HAMILTONIAN FLOWS IN GRAPH NEURAL NETWORKS

Published: 01 Feb 2023, Last Modified: 13 Feb 2023 · Submitted to ICLR 2023 · Readers: Everyone
Abstract: Graph neural networks (GNNs) suffer from oversmoothing and oversquashing when node features are updated over too many layers. Moreover, the appropriate embedding space can vary significantly across data types, leading to the need for different GNN model types. In this paper, we model the embedding of a node feature as a Hamiltonian flow over time. Just as a Hamiltonian flow in physics conserves energy over time, the GNNs it induces enjoy a more stable feature-updating mechanism. Furthermore, since Hamiltonian flows are defined on a general symplectic manifold, this approach allows us to learn the underlying manifold of the graph during training, in contrast to most of the existing literature, which assumes a fixed graph embedding manifold. We test Hamiltonian flows of several forms and demonstrate empirically that our approach achieves better node classification accuracy than popular state-of-the-art GNNs.
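To make the mechanism described in the abstract concrete, below is a minimal PyTorch sketch of a Hamiltonian-flow feature update; it is an illustrative assumption, not the authors' implementation. Each node feature is split into a "position" half q and a "momentum" half p, a small MLP stands in for the learned scalar Hamiltonian H(q, p), and the features are evolved by numerically integrating Hamilton's equations dq/dt = ∂H/∂p, dp/dt = -∂H/∂q with a semi-implicit Euler step (exactly symplectic for separable Hamiltonians), so the learned energy stays approximately conserved across layers. The class name, the feature split, and the integrator choice are all hypothetical, and graph structure (message passing over edges) is omitted for brevity.

```python
import torch
import torch.nn as nn


class HamiltonianFlowLayer(nn.Module):
    """Hypothetical sketch: evolve node features (q, p) under a learned
    Hamiltonian H(q, p) by integrating Hamilton's equations
        dq/dt = dH/dp,    dp/dt = -dH/dq
    with a semi-implicit (symplectic-style) Euler scheme."""

    def __init__(self, dim: int, hidden: int = 64, steps: int = 4, dt: float = 0.1):
        super().__init__()
        # A small MLP plays the role of the learned scalar energy H(q, p).
        self.H = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.steps, self.dt = steps, dt

    def _grads(self, q: torch.Tensor, p: torch.Tensor):
        # Gradients of the total energy w.r.t. positions and momenta.
        with torch.enable_grad():
            energy = self.H(torch.cat([q, p], dim=-1)).sum()
            return torch.autograd.grad(energy, (q, p), create_graph=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, 2 * dim); first half is q, second half is p.
        if not x.requires_grad:  # autograd.grad needs differentiable inputs
            x = x.clone().requires_grad_(True)
        q, p = x.chunk(2, dim=-1)
        for _ in range(self.steps):
            dHdq, _ = self._grads(q, p)
            p = p - self.dt * dHdq   # kick: update momenta first
            _, dHdp = self._grads(q, p)
            q = q + self.dt * dHdp   # drift: then update positions
        return torch.cat([q, p], dim=-1)


# Usage: 5 nodes, feature width 8 (q and p of width 4 each).
layer = HamiltonianFlowLayer(dim=4)
out = layer(torch.randn(5, 8))
print(out.shape)  # torch.Size([5, 8])
```

The semi-implicit update (momenta before positions) is what makes the discrete flow approximately energy-conserving, which is the stability property the abstract attributes to Hamiltonian-flow GNNs.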
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: General Machine Learning (i.e., none of the above)
Supplementary Material: zip