Deep Attention Pooling Graph Neural Network for Text Classification

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Desk Rejected Submission · Readers: Everyone
Keywords: GNN, Attention, Pooling, Adjacency matrix, Text Classification
TL;DR: A new GNN-based model with a dual adjacency matrix and attention pooling for text classification.
Abstract: Graph Neural Networks (GNNs) are a classical method that has been applied to document classification as a compelling message-passing framework within and between documents. However, graph-based models are transductive when they represent documents as nodes in a single graph (inter-document), and they demand substantial memory and time when a GNN is applied to each document after padding all documents to the length of the longest one (intra-document). This paper proposes a novel method named Deep Attention Pooling Graph Neural Networks (DAPG) that uses the structure of each document for inductive document classification. The attention pooling layer (APL) in DAPG adaptively selects nodes to form smaller graphs based on their scalar attention values, alleviating resource consumption. Additionally, to handle structural variation, a new dual adjacency matrix for each individual graph, built from word co-occurrence and word distance, is introduced to overcome sparsity and preserve stability after pooling. Experiments conducted on five standard text classification datasets show that our method is competitive with the state of the art. Ablation studies reveal further insights into the impact of the different components on performance.
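
A minimal sketch of the two components the abstract describes, not the authors' released implementation. It assumes the dual adjacency matrix combines a sliding-window co-occurrence count with an inverse word-distance weight, and that the attention pooling layer (APL) scores nodes with a learned vector and keeps the top-k highest-scoring nodes. Names such as `window_size`, `keep_ratio`, and `AttentionPooling` are illustrative, not from the paper.

```python
import torch
import torch.nn as nn


def dual_adjacency(token_ids, window_size=3):
    """Build a per-document adjacency from word co-occurrence and word distance (assumed fusion by summation)."""
    n = len(token_ids)
    cooc = torch.zeros(n, n)
    dist = torch.zeros(n, n)
    for i in range(n):
        for j in range(max(0, i - window_size), min(n, i + window_size + 1)):
            if i != j:
                cooc[i, j] += 1.0              # co-occurrence within the sliding window
                dist[i, j] = 1.0 / abs(i - j)  # closer words get larger weight
    adj = cooc + dist                          # combine the two views
    adj = adj + torch.eye(n)                   # self-loops for message passing
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
    return adj / deg                           # row-normalize


class AttentionPooling(nn.Module):
    """Score nodes with a learned vector and keep the top-k to form a smaller graph."""

    def __init__(self, hidden_dim, keep_ratio=0.5):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)
        self.keep_ratio = keep_ratio

    def forward(self, x, adj):
        # x: (n, hidden_dim) node features; adj: (n, n) normalized adjacency
        attn = torch.sigmoid(self.score(x)).squeeze(-1)  # scalar attention per node
        k = max(1, int(self.keep_ratio * x.size(0)))
        idx = torch.topk(attn, k).indices
        x_pooled = x[idx] * attn[idx].unsqueeze(-1)      # gate retained features by attention
        adj_pooled = adj[idx][:, idx]                    # induced sub-graph over the kept nodes
        return x_pooled, adj_pooled


# Toy usage: one document as a token-id sequence with random embeddings.
tokens = [5, 12, 7, 3, 12, 9]
adj = dual_adjacency(tokens)
x = torch.randn(len(tokens), 16)
pool = AttentionPooling(hidden_dim=16)
x_small, adj_small = pool(x, adj)
```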
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (e.g., speech processing, computer vision, NLP)