Unsupervised Graph Transformer With Augmentation-Free Contrastive Learning

Published: 01 Jan 2024 · Last Modified: 13 Nov 2024 · IEEE Trans. Knowl. Data Eng. 2024 · CC BY-SA 4.0
Abstract: Transformers, with their superior ability to capture both adjacent and long-range dependencies, have been applied to graph representation learning. Existing methods are predominantly established in the supervised setting, relying on abundant high-quality labels to optimize graph Transformers effectively. However, such labels are difficult to obtain in real-world applications, and unsupervised representation learning, which is essential for graph Transformers to be practical, remains largely unexplored. This article proposes an unsupervised graph Transformer and makes several technical contributions. 1) We first study various typical augmentations on graph contrastive Transformers and conclude that such augmentations can lead to model degradation due to their domain-agnostic nature. On this basis, we propose an Augmentation-free Graph Contrastive Transformer optimized through nearest neighbors to avoid model degradation. 2) Different similarity measures are designed for positive samples (mutual information) and negative samples (cosine similarity) to improve contrastive effectiveness. 3) We derive a novel way to precisely maximize mutual information, capturing more discriminative information with an additional entropy maximization term. Finally, by performing augmentation-free graph contrastive learning on representations at different scales, our graph Transformer learns discriminative representations without supervision. Extensive experiments on various datasets demonstrate the superiority of our method.
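
The abstract gives no implementation details, so the following is only a minimal sketch of the augmentation-free contrastive idea, assuming a PyTorch setting, an InfoNCE-style objective, and embedding-space k-nearest neighbors as positives. The function name, hyperparameters, and the use of cosine similarity for all pairs are illustrative assumptions, not the paper's exact formulation (which scores positive samples via mutual information):

import torch
import torch.nn.functional as F

def augmentation_free_contrastive_loss(z, k=5, temperature=0.5):
    """Illustrative sketch: nearest-neighbor positives, no augmentation.

    z holds one embedding per node. Positives for each node are its k
    nearest neighbors in the embedding space; all other nodes act as
    negatives under an InfoNCE-style softmax over cosine similarities.
    (Hypothetical sketch; the paper's positive scoring uses mutual
    information, which the abstract does not specify in detail.)
    """
    z = F.normalize(z, dim=1)                     # unit-norm embeddings
    sim = (z @ z.t()) / temperature               # pairwise cosine similarity
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))     # exclude self-pairs

    # k nearest neighbors in embedding space serve as positive samples,
    # so no domain-agnostic graph augmentation is needed
    _, knn_idx = sim.topk(k, dim=1)

    # Pull each node toward its neighbors, push it away from the rest
    log_prob = F.log_softmax(sim, dim=1)
    return -log_prob.gather(1, knn_idx).mean()

# Toy usage: 8 nodes with 16-dimensional representations
z = torch.randn(8, 16, requires_grad=True)
loss = augmentation_free_contrastive_loss(z, k=3)
loss.backward()

Applying such a loss to representations taken at different scales of the Transformer, as the abstract describes, would amount to summing this objective over the per-layer (or per-scale) embeddings.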