Abstract: Graph neural networks (GNNs), as a powerful deep learning framework for modeling graph-structured data, have attracted considerable attention recently. However, most existing GNNs require large amounts of labeled data, and constructing generalizable and robust representations from unlabeled graph data remains a challenge. Existing graph contrastive learning (GCL) methods either drop edges uniformly at random or remove nodes and edges deemed unimportant, which relies heavily on the specific structure of the data. Moreover, the vanilla graph convolutional network uses only a low-pass filter (the adjacency matrix), ignoring the middle- and high-frequency information in graph-structured data. To tackle these challenges, we propose a general GCL framework based on noise perturbation and flexible filters. Specifically, we first add various types of noise to the nodes and edges, and then design flexible filters that combine low-, middle-, and high-pass filters. We systematically examine the impact of noise and filters, and provide an initial theoretical analysis linking both to the triplet loss, shedding light on their roles. Extensive experiments on node classification show that our proposed approach surpasses existing state-of-the-art baselines. Surprisingly, we find that moderate levels of noise effectively alleviate the over-smoothing problem encountered in GNNs, while the use of flexible filters notably enhances model performance.
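The pipeline described in the abstract (perturb nodes and edges with noise, then propagate features through a combination of low-, middle-, and high-pass filters) can be sketched as follows. This is a minimal illustration only: the specific filter polynomials, the noise types, and the mixing weights `alpha`, `beta`, `gamma` are assumptions for exposition, since the abstract does not specify the paper's actual choices.

```python
import numpy as np

def normalized_laplacian(adj):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    a_norm = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.eye(adj.shape[0]) - a_norm

def flexible_filter(adj, alpha=0.5, beta=0.3, gamma=0.2):
    """Weighted combination of low-, middle-, and high-pass graph filters.

    With Laplacian eigenvalues lam in [0, 2], the frequency responses are:
      low-pass:    1 - lam/2     (emphasizes smooth, low-frequency signals)
      middle-pass: lam*(2 - lam) (peaks at lam = 1)
      high-pass:   lam/2         (emphasizes high-frequency signals)
    alpha, beta, gamma are illustrative mixing weights, not the paper's values.
    """
    n = adj.shape[0]
    L = normalized_laplacian(adj)
    I = np.eye(n)
    f_low, f_mid, f_high = I - L / 2, L @ (2 * I - L), L / 2
    return alpha * f_low + beta * f_mid + gamma * f_high

def perturb(x, adj, feat_noise=0.1, edge_flip_prob=0.05, seed=0):
    """Add Gaussian noise to node features and randomly flip edges."""
    rng = np.random.default_rng(seed)
    x_noisy = x + feat_noise * rng.standard_normal(x.shape)
    flips = np.triu(rng.random(adj.shape) < edge_flip_prob, k=1)
    flips = flips | flips.T                       # keep the graph undirected
    adj_noisy = np.where(flips, 1.0 - adj, adj)   # flip the selected entries
    return x_noisy, adj_noisy

# One augmented view: perturb the graph, then propagate node features
# through the flexible filter.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.random.default_rng(1).standard_normal((4, 8))
x_v, adj_v = perturb(x, adj)
view = flexible_filter(adj_v) @ x_v  # filtered node representations
```

In a contrastive setup, two such views would be generated with independent noise draws and pulled together by the objective (the abstract relates this to a triplet loss); the sketch stops at view construction.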