Keywords: graph neural networks, entropy, channel capacity, model dimensionality estimation
Abstract: Existing message-passing graph neural networks often rely on carefully designed information propagation schemes to perform well on graph-related mining tasks. However, this raises the question of whether the dimensions of the learnable matrices and the depths of the networks are properly estimated. Although this challenge has been studied before, it remains an open problem. Using the principle of maximum entropy and Shannon's theorem, we demonstrate that message-passing graph neural networks behave like noisy communication channels. A graph neural network reaches its optimal information transmission state when Shannon's theorem is satisfied, which is determined by its entropy and channel capacity. In addition, we show that the widths of the trainable matrices should be sufficiently large to avoid shrinking the model's channel capacity, and that the gain in channel capacity diminishes as the depth of the network increases. The proposed approach is empirically verified through extensive experiments on five public semi-supervised node classification datasets.
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2981