With the rapid development of graph representation learning, self-supervised graph contrastive learning (GCL) has become one of its central techniques. In GCL, positive and negative samples are generated by data augmentation. However, most existing methods rely on empirical, rule-based graph augmentations, which may fail to capture useful graph patterns. To address this issue, we propose a novel model-based adversarial contrastive graph augmentation (ACGA) method that automatically generates positive samples carrying minimal sufficient information together with hard negative samples. We also provide a theoretical framework for analyzing the positive and negative augmentation process in self-supervised GCL. We evaluate ACGA through extensive experiments on five benchmark datasets; the results show that ACGA outperforms state-of-the-art baselines.
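To make the contrastive setup concrete, the sketch below shows a standard InfoNCE-style loss over one anchor embedding, a positive view, and a set of negative views. This is a generic illustration of the contrastive objective GCL methods build on, not the paper's actual ACGA loss; the embeddings here are random placeholders, and the function name and temperature value are assumptions.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.5):
    """InfoNCE contrastive loss for a single anchor embedding.

    anchor, positive: 1-D embedding vectors (two views of the same graph);
    negatives: 2-D array with one negative embedding per row.
    Generic placeholders, not ACGA's actual augmented samples.
    """
    def cos(a, b):
        # Cosine similarity between two vectors.
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    pos = np.exp(cos(anchor, positive) / temperature)
    negs = sum(np.exp(cos(anchor, n) / temperature) for n in negatives)
    # Loss is low when the anchor is close to its positive view
    # and far from the negatives.
    return -np.log(pos / (pos + negs))

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.1 * rng.normal(size=8)  # slightly perturbed view
negatives = rng.normal(size=(4, 8))           # unrelated embeddings
loss = info_nce_loss(anchor, positive, negatives)
print(float(loss))
```

In this framing, the quality of the augmentation decides how informative `positive` and `negatives` are: rule-based perturbations may produce trivially easy negatives, whereas adversarially generated hard negatives keep the denominator competitive and force the encoder to learn discriminative patterns.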