Relating graph auto-encoders to linear models

Published: 27 Sept 2023, Last Modified: 17 Sept 2024 · Accepted by TMLR
Abstract: Graph auto-encoders are widely used to construct graph representations in Euclidean vector spaces. However, it has been pointed out empirically that linear models can outperform graph auto-encoders on many tasks. In our work, we prove that the solution space induced by graph auto-encoders is a subset of the solution space of a linear map. This demonstrates that linear embedding models have at least the representational power of graph auto-encoders based on graph convolutional networks. So why are we still using nonlinear graph auto-encoders? One reason could be that actively restricting the linear solution space might introduce an inductive bias that helps improve learning and generalization. While many researchers believe that the nonlinearity of the encoder is the critical ingredient towards this end, we instead identify the node features of the graph as a more powerful inductive bias. We give theoretical insights by introducing a corresponding bias in a linear model and analyzing the change in the solution space. Our experiments align with other empirical work on this question and show that the linear encoder can outperform the nonlinear encoder when using feature information.
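For readers unfamiliar with the two model families the abstract contrasts, below is a minimal sketch of a GCN-based graph auto-encoder next to a linear encoder, both feeding the standard inner-product decoder. The function names, the two-layer depth, and all shapes are illustrative assumptions for exposition, not the paper's exact setup or the code in the linked repository.

```python
# Minimal sketch contrasting a nonlinear GCN encoder with a linear encoder.
# Assumes the symmetrically normalized adjacency and inner-product decoder;
# all names and hyperparameters here are illustrative.
import numpy as np

def normalized_adjacency(A):
    """Normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_encoder(A, X, W1, W2):
    """Two-layer GCN encoder: Z = A_norm @ relu(A_norm @ X @ W1) @ W2."""
    A_norm = normalized_adjacency(A)
    H = np.maximum(A_norm @ X @ W1, 0.0)  # ReLU nonlinearity
    return A_norm @ H @ W2

def linear_encoder(A, X, W):
    """Linear encoder: same two-hop receptive field, single linear map."""
    A_norm = normalized_adjacency(A)
    return A_norm @ A_norm @ X @ W

def inner_product_decoder(Z):
    """Reconstruct edge probabilities from embeddings: sigmoid(Z @ Z.T)."""
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

# Example: random symmetric graph with 5 nodes, 4 features, 2-d embeddings.
rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(5, 5))
A = np.triu(A, 1); A = A + A.T  # symmetric, no self-loops
X = rng.normal(size=(5, 4))
Z_gcn = gcn_encoder(A, X, rng.normal(size=(4, 8)), rng.normal(size=(8, 2)))
Z_lin = linear_encoder(A, X, rng.normal(size=(4, 2)))
```

In this sketch, dropping the ReLU collapses the GCN encoder into a single linear map of the propagated features, which is the intuition behind the stated containment of the solution spaces.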
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=7M4F3r66UV
Changes Since Last Submission: Revised manuscript submission: Relating graph auto-encoders to linear models (previous submission: 724).

We are resubmitting our manuscript entitled "Relating graph auto-encoders to linear models" to TMLR following the valuable feedback received from the reviewers. We greatly appreciate the time and effort all reviewers invested in evaluating our work and providing constructive comments; their feedback enhanced the quality of our paper. Below we summarize the changes made and highlight the major revisions in the manuscript.

The key change is the adaptation of Theorem 1 so that it holds under even weaker assumptions. In the old version of the paper, the full-rank assumption was the primary concern for all reviewers. On closer inspection, we found that with slight modifications Theorem 1 still holds without the full-rank assumption, and the new version no longer relies on it. This significantly strengthens the manuscript by making the result applicable in a wider range of settings.

We already addressed all reviewer comments during the authors' response, but summarize the points raised and how we have incorporated the feedback:
- In the previous version, the main concern was the (strong) full-rank assumption on the graph adjacency matrix in Theorem 1. Theorem 1 in the new version does not rely on this assumption.
- Another concern was that the experiments in the previous version were not very supportive of the theory.
- There was one minor mistake in the description of Figure 3 in the old version. We fixed the description error; the interpretation is still correct and aligned with our intuition and the remaining results.
- In the old version, we did not include the misalignment and the training loss for the real-world experiments. We added the normalized misalignment for all considered datasets. Especially for the real-world graphs, this gives new insight into why the features sometimes harm the node-prediction task. We also added the training performance for the real-world data, which supports our theory.
- Minor concerns were raised about the generality of the results, as the scope was not clear enough in the previous version. We now emphasize the specific architecture in the paper and discuss the implications of our more general result.

All other concerns were based on misunderstandings, and we have put effort into communicating these points more clearly. We are confident that these revisions have significantly improved the manuscript and believe it is now suitable for publication in TMLR. Thank you once again for your time and consideration.
Code: https://github.com/tml-tuebingen/linear-gae
Assigned Action Editor: ~Francisco_J._R._Ruiz1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1335