Abstract: The key to text classification is language representation and the extraction of salient information, and both have been studied extensively. In recent years, research on graph neural networks (GNNs) for text classification has gained traction and shown clear advantages, but existing models mostly feed words directly into the GNN as graph nodes, ignoring the different levels of semantic structure within a sample. To address this, we propose a new hierarchical graph neural network (HieGNN) that extracts information at the word level, sentence level (sen-level), and document level (doc-level). The doc-level processes a sample from a global perspective, while the sen-level and word-level process the sentences and words themselves. We evaluate the model on five datasets against both pure GNN-based models and hybrid GNN and BERT models: it achieves better classification results on two datasets and comparable results on the other three, demonstrating that our model extracts more information useful for classification from the samples.
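The abstract describes a three-level aggregation scheme (word → sentence → document). As a rough illustration only, not the authors' implementation, the sketch below uses NumPy with random toy embeddings, a fully connected word graph per sentence, a single unweighted propagation step in place of a learned GNN layer, and mean pooling between levels; all function names and shapes are assumptions.

```python
import numpy as np

def normalize_adj(adj):
    """Symmetrically normalize an adjacency matrix with added self-loops."""
    adj = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gnn_layer(adj, features):
    """One propagation step, A_hat @ X with ReLU (learned weights omitted)."""
    return np.maximum(normalize_adj(adj) @ features, 0.0)

def hierarchical_doc_vector(sentences, word_dim=8, seed=0):
    """Word-level GNN per sentence -> sen-level pooling -> doc-level pooling."""
    rng = np.random.default_rng(seed)
    sent_vecs = []
    for words in sentences:
        n = len(words)
        x = rng.normal(size=(n, word_dim))   # toy word embeddings (assumption)
        adj = np.ones((n, n))                # fully connected word graph (assumption)
        h = gnn_layer(adj, x)                # word-level propagation
        sent_vecs.append(h.mean(axis=0))     # sen-level pooling
    return np.mean(sent_vecs, axis=0)        # doc-level pooling

doc = [["graph", "neural", "network"], ["text", "classification"]]
vec = hierarchical_doc_vector(doc)
print(vec.shape)  # (8,)
```

In the actual model each level would use learned GNN layers and task-specific graphs; this sketch only shows how information can flow upward through the three levels.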
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/a-semantic-hierarchical-graph-neural-network/code)