Neighbour Contrastive Learning with Heterogeneous Graph Attention Networks on Short Text Classification

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: Graph neural networks (GNNs) have attracted extensive research interest in text classification tasks due to their superiority in representation learning. However, most existing studies adopt the same semi-supervised learning setting as the vanilla Graph Convolutional Network (GCN), which requires a large amount of labelled data during training and is thus less robust when dealing with large-scale graph data with few labels. Additionally, graph structure information is normally captured by direct information aggregation via the network schema, and missing adjacency knowledge may hinder performance. To address these problems, this paper proposes NC-HGAT, a novel method that learns graph structure by applying simple neighbour contrastive learning to an existing self-supervised heterogeneous graph neural network model. It captures graph structure information from heterogeneous graphs with multi-layer perceptrons (MLPs) and delivers consistent results despite corrupted neighbouring connections. Extensive experiments on four benchmark short-text datasets demonstrate that our proposed model NC-HGAT outperforms the state-of-the-art methods on three datasets and achieves a competitive result on the remaining dataset.
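The abstract only names the neighbour contrastive objective without giving its form. As a rough illustration of the general idea, the sketch below implements a generic InfoNCE-style contrastive loss in which a node's graph neighbours serve as positives and all other nodes as negatives. All names (`neighbour_contrastive_loss`, the temperature `tau`) and the exact formulation are assumptions for illustration, not the paper's actual loss.

```python
import numpy as np

def neighbour_contrastive_loss(z, adj, tau=0.5):
    """Generic neighbour-contrastive (InfoNCE-style) loss sketch.

    z:   (n, d) array of node embeddings.
    adj: (n, n) binary adjacency matrix (no self-loops);
         a node's neighbours are treated as its positive pairs.
    tau: temperature hyperparameter (assumed value).
    """
    # Normalise rows so the dot product is cosine similarity.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = np.exp(z @ z.T / tau)
    np.fill_diagonal(sim, 0.0)          # exclude self-similarity
    pos = (sim * adj).sum(axis=1)       # similarity mass on neighbours
    denom = sim.sum(axis=1)             # similarity mass on all other nodes
    mask = adj.sum(axis=1) > 0          # skip isolated nodes
    return float(-np.mean(np.log(pos[mask] / denom[mask])))

# Tiny usage example: nodes 0 and 1 are connected and have similar
# embeddings, node 2 is isolated and dissimilar.
z = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
adj = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
loss = neighbour_contrastive_loss(z, adj)
```

In this setup the loss is minimised when each node's embedding is closer to its neighbours than to the rest of the graph, which matches the abstract's goal of keeping representations consistent even when some neighbouring connections are corrupted.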