Jointly Training Task-Specific Encoders and Downstream Models on Heterogeneous Multiplex Graphs

ICLR 2025 Conference Submission 13691 Authors

28 Sept 2024 (modified: 28 Nov 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Multiplex, Heterogeneous, GNN, GraphSAGE
TL;DR: Jointly optimizing layer-wise embeddings and downstream machine learning models on heterogeneous multiplex graphs increases predictive performance.
Abstract: Learning representations on Heterogeneous Multiplex Graphs (HMGs) is an active field of study, driven by the need to generate expressive, low-dimensional embeddings that support downstream machine learning tasks. A key component of this process is the design of the graph processing pipeline, which directly impacts the quality of the learned representations. Information fusion techniques, which aggregate information across layers of a multiplex graph, have been shown to improve the performance of Graph Neural Network (GNN)-based architectures on various tasks, including node classification, edge prediction, and graph-level classification. Recent research has explored fusion strategies at different stages of the processing pipeline, leading to graph-, GNN-, embedding-, and prediction-level approaches. In this work, we propose a model extending the $\texttt{GraphSAGE}$ architecture, which simultaneously refines the layer-wise embeddings produced by the encoder while training downstream models. We evaluate the model's effectiveness on real-world and benchmark HMG datasets, comparing it to models that use either graph-level or prediction-level fusion without jointly optimizing their vector embeddings. We demonstrate that our approach enhances performance on downstream tasks, particularly node classification.
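To make the described setup concrete, below is a minimal sketch (not the authors' implementation) of joint training on a multiplex graph: one GraphSAGE-style encoder per graph layer, embedding-level fusion by concatenation, and a downstream node classifier optimized end-to-end so the layer-wise embeddings are refined by the task signal. The mean aggregator over dense adjacency matrices, the concatenation fusion, and all class/variable names (`SAGELayer`, `MultiplexSAGE`, etc.) are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SAGELayer(nn.Module):
    """Mean-aggregator GraphSAGE-style layer over a dense adjacency matrix (assumed simplification)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = adj @ x / deg                      # mean of neighbour features
        return F.relu(self.lin(torch.cat([x, neigh], dim=1)))


class MultiplexSAGE(nn.Module):
    """One encoder per multiplex layer; concatenated (fused) embeddings feed a shared classifier."""

    def __init__(self, in_dim, hid_dim, n_layers, n_classes):
        super().__init__()
        self.encoders = nn.ModuleList(SAGELayer(in_dim, hid_dim) for _ in range(n_layers))
        self.classifier = nn.Linear(n_layers * hid_dim, n_classes)

    def forward(self, x, adjs):
        # adjs: list of dense adjacency matrices, one per graph layer (relation type)
        z = torch.cat([enc(x, a) for enc, a in zip(self.encoders, adjs)], dim=1)
        return self.classifier(z), z               # task logits and fused embedding


# Joint optimization: encoders and the downstream classifier share one loss and
# one optimizer, so layer-wise embeddings are updated by the task gradient.
n, d, layers, classes = 100, 16, 3, 4
x = torch.randn(n, d)                              # toy node features
adjs = [(torch.rand(n, n) < 0.05).float() for _ in range(layers)]
y = torch.randint(0, classes, (n,))                # toy node labels

model = MultiplexSAGE(d, 32, layers, classes)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):
    opt.zero_grad()
    logits, _ = model(x, adjs)
    F.cross_entropy(logits, y).backward()
    opt.step()
```

In a graph-level or prediction-level fusion baseline, by contrast, the per-layer embeddings would typically be produced (or frozen) before the downstream model is trained, rather than refined jointly as in the loop above.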
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13691