Pareto Automatic Multi-Task Graph Representation Learning

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission
Keywords: Graph Representation Learning, Multi-Objective Optimization, Multi-Task Learning, Neural Architecture Search
TL;DR: From a multi-objective perspective, this paper is the first to automatically search for a general-purpose multi-task graph neural network architecture that matches various user-desired task preferences.
Abstract: Graph representation learning models such as graph neural networks (GNNs) can produce highly task-specific embeddings in an end-to-end manner. However, because learned embeddings transfer poorly and handcrafted models have limited representational capacity, existing efforts cannot efficiently handle multiple downstream tasks simultaneously, especially in resource-constrained scenarios. This paper takes a first step toward automatically searching for multi-task GNN architectures, improving the generalization performance of GNNs through knowledge sharing across tasks. Because of possible task (objective) conflicts and the complex dependencies between architectures and weights, multi-task GNN architecture search is a bi-level multi-objective optimization problem (BL-MOP): it seeks a set of Pareto architectures and their Pareto weights, representing different trade-offs across tasks at the upper and lower levels (UL and LL), respectively. The Pareto optimality of the sub-problems means that each Pareto architecture corresponds to a set of Pareto weights, which is particularly challenging in deep learning, where training is costly. For the first time, we propose a simple but effective differentiable multi-task GNN architecture search framework (DMTGAS) with convergence guarantees. By introducing consistent task preferences at the UL and LL, DMTGAS alternately optimizes only a single architecture and its weights via a gradient-based multi-objective optimizer, which neatly sidesteps the above optimization difficulties. Experimental results on several tasks across three real-world graph datasets demonstrate the superiority of the GNNs obtained by our proposal over existing handcrafted ones.
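The core idea in the abstract, using one shared task-preference vector to scalarize both the upper-level (architecture) and lower-level (weights) objectives and then alternating gradient updates, can be sketched on a toy problem. Everything below is illustrative: the quadratic task losses, the parameter names `a` (architecture) and `w` (weights), and the update rule are assumptions for demonstration, not the paper's actual DMTGAS objectives or algorithm.

```python
import numpy as np

def task_losses(a, w):
    # Two conflicting toy task losses sharing an "architecture" scalar a
    # and a "weight" scalar w: task 1 prefers (0, 0), task 2 prefers (1, 1).
    l1 = a**2 + w**2
    l2 = (a - 1.0)**2 + (w - 1.0)**2
    return np.array([l1, l2])

def grads(a, w):
    # Analytic gradients of each task loss w.r.t. (a, w).
    g1 = np.array([2 * a, 2 * w])
    g2 = np.array([2 * (a - 1.0), 2 * (w - 1.0)])
    return g1, g2

def alternate_optimize(prefs, steps=500, lr=0.1):
    """Alternately update weights (LL) and architecture (UL), both driven
    by the SAME preference-weighted combination of task gradients."""
    a, w = 0.5, -0.5
    for _ in range(steps):
        g1, g2 = grads(a, w)
        g = prefs[0] * g1 + prefs[1] * g2
        w -= lr * g[1]            # lower level: weight update
        g1, g2 = grads(a, w)
        g = prefs[0] * g1 + prefs[1] * g2
        a -= lr * g[0]            # upper level: architecture update
    return a, w

# A preference of (0.25, 0.75) biases the trade-off toward task 2.
a, w = alternate_optimize(np.array([0.25, 0.75]))
print(round(a, 3), round(w, 3))  # converges near (0.75, 0.75)
```

Because both levels share one preference vector, the alternation converges to a single trade-off point rather than having to maintain a set of Pareto weights per architecture, which is the optimization difficulty the abstract says DMTGAS avoids.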
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Optimization (e.g., convex and non-convex optimization)
21 Replies