A Systematic Approach to Universal Random Features in Graph Neural Networks

Published: 18 Aug 2023, Last Modified: 29 Nov 2024, Accepted by TMLR
Event Certifications: logconference.org/LoG/2024/Journal_Track
Abstract: Universal random features (URF) are the state of the art among practical graph neural networks that are provably universal. Existing URF techniques vary widely in terminology, methodology, benchmarks, and evaluation metrics. Not only does this make it increasingly difficult for practitioners to decide which technique to apply to a given problem, it also stands in the way of systematic improvements. We propose a new comprehensive framework that captures all previous URF techniques. On the theoretical side, among other results, we formally prove that under natural conditions all instantiations of our framework are universal; the framework thus provides a simple new technique for proving universality results. On the practical side, we develop a method to systematically and automatically train URF. This in turn enables us to impartially and objectively compare all existing URF. New URF emerge naturally from our approach, and our experiments demonstrate that they improve the state of the art.
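As a rough illustration of the random-feature idea underlying URF (a generic sketch, not the paper's specific framework or the code in the linked repository), the following Python snippet appends i.i.d. random features to each node before a toy sum-aggregation message-passing pass; the function name, feature dimensions, and readout are hypothetical choices made for this example.

```python
import numpy as np

def random_feature_gnn(adj, x, k=8, layers=2, rng=None):
    """Toy message passing with appended random node features.

    adj: (n, n) adjacency matrix; x: (n, d) node features.
    The k random features per node break symmetries between otherwise
    indistinguishable nodes -- the intuition behind random-feature
    approaches to universality (details differ from the paper's URF).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = adj.shape[0]
    r = rng.standard_normal((n, k))       # i.i.d. random features per node
    h = np.concatenate([x, r], axis=1)    # augmented node representations
    for _ in range(layers):
        h = np.tanh(adj @ h + h)          # sum over neighbours plus self-loop
    return h.sum(axis=0)                  # permutation-invariant graph readout
```

Because the random features are resampled per forward pass, predictions are random variables; in practice such models are evaluated by averaging over several samples.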
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: For this revision, we addressed all changes requested by reviewers q6bL, kbus, and SXN8. This involved various minor changes throughout the paper as well as the following larger changes:
- the addition of the "Individualization refinement in a nutshell" paragraph on page 2, addressing the concern that "IR trees are backtracking trees" was not clearly understandable and that IR was not properly introduced before the "Why individualization refinement" paragraph;
- the addition of a proof of Lemma 3 on page 9;
- an extension of the proof of Theorem 4 on page 10;
- the addition of Appendix F (Trainability), as requested by reviewer SXN8, which reports the interesting finding that trainability inversely correlates with the amount of randomness;
- the addition of implementation details in Appendix E.
Notable minor changes include:
- a further clarification on URF in the related work on page 4;
- most of the minor changes mentioned by reviewer kbus (some were questions that did not involve changes).
Further, we implemented all suggestions made by reviewer kbus, specifically including clarifications to Lemma 3 and Theorem 4. Lastly, we implemented the changes promised to reviewer Urhj for the camera-ready version.
Code: https://github.com/bjfranks/IRNI
Supplementary Material: zip
Assigned Action Editor: ~Rémi_Flamary1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1167