Subgraph Permutation Equivariant Networks

Published: 31 Aug 2023, Last Modified: 08 Sept 2023. Accepted by TMLR.
Abstract: In this work we develop a new method, named Sub-graph Permutation Equivariant Networks (SPEN), a framework for building graph neural networks that operate on sub-graphs using a permutation equivariant base update function, and that are equivariant to a novel choice of automorphism group. Message passing neural networks have been shown to be limited in their expressive power, and recent approaches to overcome this either lack scalability or require structural information to be encoded into the feature space. The general framework presented here overcomes the scalability issues associated with global permutation equivariance by operating more locally, on sub-graphs. Operating on sub-graphs also improves the expressive power of higher-dimensional global permutation equivariant networks, since two non-distinguishable graphs often contain distinguishable sub-graphs. Furthermore, the proposed framework only requires a choice of $k$-hops for creating ego-network sub-graphs and a choice of representation space for each layer, which makes the method easily applicable across a range of graph-based domains. We experimentally validate the method on a range of graph benchmark classification tasks, demonstrating statistically indistinguishable results from the state of the art on six out of seven benchmarks. Further, we demonstrate that the use of local update functions offers a significant reduction in GPU memory over global methods.
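The abstract notes that the framework only requires a choice of $k$-hops for creating ego-network sub-graphs. The sketch below illustrates that sub-graph selection step with a plain breadth-first search over an adjacency list; it is an illustrative assumption about the extraction procedure, not the authors' implementation, and the function names (`k_hop_ego_network`, `ego_subgraphs`) are hypothetical.

```python
from collections import deque

def k_hop_ego_network(adj, root, k):
    """Return the set of nodes within k hops of `root` via BFS.

    `adj` is an adjacency list: {node: [neighbours, ...]}.
    Illustrative sketch only; not the SPEN authors' code.
    """
    visited = {root}
    frontier = deque([(root, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # do not expand beyond k hops
        for nbr in adj.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append((nbr, depth + 1))
    return visited

def ego_subgraphs(adj, k):
    """One k-hop ego-network per node, forming the bag of sub-graphs."""
    return {v: k_hop_ego_network(adj, v, k) for v in adj}
```

For example, on the path graph 0-1-2-3, the 1-hop ego-network of node 1 contains nodes {0, 1, 2}; a permutation equivariant update function would then be applied within each such sub-graph.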
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url:
Changes Since Last Submission: We resubmit the earlier paper, which reviewers had found acceptable for TMLR. During the final editorial process, the editors raised reasonable concerns that the version originally submitted to TMLR had significant presentational overlap with the ESAN paper, which had not been sufficiently cited throughout our paper until the final stages of the review process; we realised the need for improvement during discussion with the ESAN authors in late September 2022. We had been attempting to align our work with the terminology and analysis used in already published papers, and used the ESAN paper as a model, and cited it, but we agree we did not make sufficient attribution throughout until the final version. We apologise to all concerned for the extra work this has caused. Because our updates to the presentation happened towards the end of the review process, the TMLR editors requested that we check that the reviewers are satisfied there is enough of a difference between the core method in the ESAN paper and our paper for the decision to publish to stand. To ensure that the relevant stakeholders were consulted, we discussed this with the authors of the ESAN paper and enhanced the text with relevant citations throughout (as in the final version of the previous round of reviews); the ESAN authors told us in person at NeurIPS that they have no concerns with the current form of the paper. We also added a footnote to the paper documenting the history of earlier submitted versions and the parallel development of the concepts. We do not believe the changes made since the last review round affect the novelty of the paper; their focus was to ensure credit is correctly attributed for the terminology and analysis techniques used in the paper.
The core novelty of our paper remains that it presents a subgraph approach producing a natural and novel choice of automorphism group that is expressive, scalable, and performs well on benchmark graph classification tasks, and this still stands. Our approach differs clearly from the ESAN paper in that it considers a single subgraph selection policy, chosen specifically because it leads to a natural automorphism equivariance constraint that requires less parameterisation than previous works. We explicitly describe the difference between our method and ESAN on p. 20 of the paper. The changes made since our final version are:
- Chapter 1: minor wording changes.
- Chapter 3: added a missing citation in the definitions and in the introduction of a bag of subgraphs.
- Appendix A.2: added missing citations to ESAN.
- Appendix A.3.2: reworded the differences between ESAN and SPEN to remove repetition.
We look forward to your decision.
Assigned Action Editor: ~Guillaume_Rabusseau1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 862