Abstract: Graph contrastive learning has achieved great success in pre-training graph neural networks without ground-truth labels.
Leading graph contrastive learning methods follow the classical contrastive scheme, forcing the model to identify essential information from augmented views.
However, augmented views are generally produced via random corruption or learning, which inevitably alters the semantics of the graph.
Although domain-knowledge-guided augmentations alleviate this issue, the generated views are domain-specific and undermine generalization.
In this work, motivated by the strong representation ability of sparse models obtained by pruning, we reformulate graph contrastive learning as contrasting different model versions rather than augmented views.
We first theoretically reveal the superiority of model pruning over data augmentation.
In practice, we take the original graph as input and dynamically generate a perturbed graph encoder to contrast with the original encoder by pruning its transformation weights.
Furthermore, since node embeddings remain intact in our method, we are able to develop a local contrastive loss to handle the hard negative samples that disturb model training.
We extensively validate our method on various graph classification benchmarks under unsupervised and transfer learning settings, where it consistently outperforms state-of-the-art (SOTA) methods.
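To make the core idea concrete, the following is a minimal, hypothetical sketch (PyTorch-style Python) of contrasting an original encoder with a magnitude-pruned copy of itself on the same unaugmented graph. The helper names (`prune_weights`, `contrastive_loss`, `training_step`), the magnitude-pruning criterion, the pruning ratio, the NT-Xent-style loss, and treating the pruned branch as a fixed target are our own illustrative assumptions, not the authors' released implementation.

```python
import copy
import torch
import torch.nn.functional as F

def prune_weights(encoder, ratio=0.2):
    """Return a copy of the encoder whose smallest-magnitude
    transformation weights are zeroed (unstructured magnitude pruning).
    The pruning criterion and ratio are illustrative assumptions."""
    pruned = copy.deepcopy(encoder)
    with torch.no_grad():
        for name, param in pruned.named_parameters():
            if param.dim() < 2:              # skip biases / norm parameters
                continue
            k = int(param.numel() * ratio)
            if k == 0:
                continue
            threshold = param.abs().flatten().kthvalue(k).values
            param.mul_((param.abs() > threshold).float())
    return pruned

def contrastive_loss(z_orig, z_pert, temperature=0.5):
    """NT-Xent-style loss between embeddings from the two encoders."""
    z_orig = F.normalize(z_orig, dim=-1)
    z_pert = F.normalize(z_pert, dim=-1)
    logits = z_orig @ z_pert.t() / temperature
    targets = torch.arange(z_orig.size(0), device=z_orig.device)
    return F.cross_entropy(logits, targets)

def training_step(encoder, graph, optimizer, prune_ratio=0.2):
    """One step: contrast the original encoder with a freshly pruned copy
    on the SAME (unaugmented) input graph. We assume `encoder(graph)`
    returns a [N, d] matrix of node or graph embeddings."""
    perturbed = prune_weights(encoder, prune_ratio)   # regenerated each step
    z_orig = encoder(graph)
    with torch.no_grad():                             # pruned branch as fixed target (assumption)
        z_pert = perturbed(graph)
    loss = contrastive_loss(z_orig, z_pert)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```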
Primary Subject Area: [Experience] Multimedia Applications
Secondary Subject Area: [Generation] Multimedia Foundation Models
Relevance To Conference: We employ model pruning techniques to process media-related information in the form of graphs, which can pave the way for novel approaches to interpreting or creating multimedia content. Our proposed method aims to advance the understanding of model compression in graph contrastive learning.
Supplementary Material: zip
Submission Number: 4006