Keywords: Graph-based Learning, Deep Learning, Representation Learning
TL;DR: Contrastive learning that enables a single model to represent graph topologies from multiple domains.
Abstract: Graph neural networks (GNNs) have revolutionised the field of graph representation learning and play a critical role in graph-based research. Recent work has explored pre-training and fine-tuning GNNs, where a model is trained on a large dataset and its learnt representations are then transferred to a smaller dataset. However, current work only explores pre-training within a single domain; for example, a model pre-trained on molecular graphs is fine-tuned only on other molecular graphs. This leads to poor generalisability of pre-trained models to novel domains and tasks.
In this work, we curate a multi-graph-domain dataset and apply state-of-the-art Graph Adversarial Contrastive Learning (GACL) methods. We present a pre-trained graph model with the potential to act as a foundational graph model. We will evaluate the efficacy of its learnt representations on various downstream tasks against baseline models pre-trained on single domains. In addition, we aim to compare our model against untrained and non-transferred models, and to show that our foundational model can match or exceed the performance of task-specific methods.
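As a rough illustration of the contrastive pre-training objective underlying this line of work, the sketch below implements an NT-Xent loss over two augmented views of each graph, assuming graph-level embeddings have already been produced by a shared GNN encoder. All names here are illustrative and not taken from the paper; the GACL methods referenced above additionally involve adversarial augmentations, which are omitted.

```python
# Minimal sketch of a graph contrastive pre-training objective (NT-Xent loss).
# Assumes z1 and z2 are graph-level embeddings of two augmented views of the
# same batch of graphs, produced by a shared GNN encoder (not shown).
import torch
import torch.nn.functional as F


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """Pull the two views of each graph together; treat all other graphs
    in the batch as negatives.

    z1, z2: (batch_size, dim) embeddings of two views of the same graphs.
    """
    batch_size = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, dim), unit norm
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # The positive for row i is its counterpart in the other view.
    targets = torch.cat([torch.arange(batch_size) + batch_size,
                         torch.arange(batch_size)])
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    # Toy usage: random tensors standing in for GNN outputs on two views.
    z1, z2 = torch.randn(8, 64), torch.randn(8, 64)
    print(nt_xent_loss(z1, z2).item())
```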
Submission Number: 8