Contrastive Learning with Heterogeneous Graph Attention Networks on Short Text Classification

Published: 01 Jan 2022 · Last Modified: 14 Nov 2024 · IJCNN 2022 · License: CC BY-SA 4.0
Abstract: Graph neural networks (GNNs) have attracted extensive interest in text classification tasks due to their expected superior performance in representation learning. However, most existing studies adopt the same semi-supervised learning setting as the vanilla Graph Convolutional Network (GCN), which requires a large amount of labelled data during training and is therefore less robust when dealing with large-scale graph data with few labels. Additionally, graph structure information is normally captured by direct information aggregation via the network schema and is highly dependent on correct adjacency information, so any missing adjacency knowledge may hinder performance. To address these problems, this paper proposes a novel graph-structure learning method, NC-HGAT, which extends a state-of-the-art self-supervised heterogeneous graph neural network model (HGAT) with simple neighbour contrastive learning. NC-HGAT captures the graph structure information of heterogeneous graphs with multilayer perceptrons (MLPs) and delivers consistent results despite corrupted neighbouring connections. Extensive experiments have been conducted on four benchmark short-text datasets. The results demonstrate that our proposed model NC-HGAT significantly outperforms state-of-the-art methods on three datasets and achieves competitive performance on the remaining dataset.
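To make the neighbour contrastive idea concrete, below is a minimal PyTorch sketch of an InfoNCE-style loss in which a node's neighbours act as positives and all other nodes as negatives, with an MLP encoder producing the embeddings. This is an illustration of the general technique named in the abstract, not the authors' implementation: the class and function names, the two-layer MLP, the temperature value, and the toy ring-graph data are all assumptions for the sake of a runnable example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPEncoder(nn.Module):
    """Two-layer MLP mapping node features to an embedding space
    (a stand-in for the MLP component described in the abstract)."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim),
            nn.ReLU(),
            nn.Linear(hid_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def neighbour_contrastive_loss(z, adj, tau=0.5):
    """InfoNCE-style loss: for each node, its graph neighbours are
    positives and every other node is a negative.

    z   : (N, d) node embeddings
    adj : (N, N) binary adjacency matrix (1 where an edge exists)
    tau : temperature (hypothetical default)
    """
    z = F.normalize(z, dim=1)
    sim = torch.exp(z @ z.t() / tau)            # pairwise similarity scores
    sim = sim - torch.diag(torch.diag(sim))     # drop self-similarity
    pos = (sim * adj).sum(dim=1)                # similarity mass on neighbours
    denom = sim.sum(dim=1)                      # mass on all other nodes
    mask = adj.sum(dim=1) > 0                   # skip isolated nodes
    return -torch.log(pos[mask] / denom[mask]).mean()

# Toy usage: 6 nodes with 8 input features on a small ring graph.
x = torch.randn(6, 8)
adj = torch.zeros(6, 6)
for i in range(6):
    adj[i, (i + 1) % 6] = adj[(i + 1) % 6, i] = 1.0

encoder = MLPEncoder(8, 16, 4)
loss = neighbour_contrastive_loss(encoder(x), adj)
loss.backward()
```

Because the positives are drawn from adjacency rather than from augmented views, a model trained with such a loss can remain stable when some neighbour connections are corrupted, which is the robustness property the abstract emphasises.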