Joint text classification on multiple levels with multiple labels

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Withdrawn Submission
Keywords: multi-head attention, zero-shot learning, multi-task learning, text classification, sequence labeling
Abstract: Natural language uses words in an associative way to construct sentences: it is not words in isolation, but the appropriate use of hierarchical structures that makes communication successful. We propose a deep learning framework for explicitly tying together the representations between single words and full sentences, resulting in a fluid transfer of knowledge between these two levels of granularity. We construct a multi-head attention mechanism for sentence classification, where the individual attention heads simultaneously learn to perform multi-class sequence labeling. Supervision on individual tokens explicitly teaches the classifier which areas it needs to focus on in each sentence, while the sentence-level objective regularizes the token-level predictions and even enables sequence labeling without token-level training data. Our experiments show that the proposed architecture systematically outperforms its single-task counterparts and exhibits strong transfer capabilities, while also achieving reasonable performance as a zero-shot sequence labeler.
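The joint mechanism the abstract describes lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch rendering (not the authors' implementation): each attention head corresponds to one class, its per-token scores double as sequence-labeling logits, and the same scores, normalized over the sentence, pool the encoder states for sentence classification. The class name, the BiLSTM encoder, and all module names (`head_scores`, `sentence_out`) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class JointTokenSentenceClassifier(nn.Module):
    """Sketch: one attention head per class label; the head's per-token
    scores serve both as token-level predictions and as attention weights
    for sentence-level classification."""

    def __init__(self, vocab_size, emb_dim, hidden_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               bidirectional=True, batch_first=True)
        # One scoring vector per class/head: a score for every token and label.
        self.head_scores = nn.Linear(2 * hidden_dim, num_classes)
        self.sentence_out = nn.Linear(2 * hidden_dim * num_classes, num_classes)

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))              # (B, T, 2H)
        token_logits = self.head_scores(h)                   # (B, T, C): token-level labels
        attn = torch.softmax(token_logits, dim=1)            # normalize over tokens per head
        # Each head pools the sentence with its own attention distribution.
        context = torch.einsum('btc,bth->bch', attn, h)      # (B, C, 2H)
        sent_logits = self.sentence_out(context.flatten(1))  # (B, C): sentence-level labels
        return token_logits, sent_logits

# Usage with toy dimensions:
model = JointTokenSentenceClassifier(vocab_size=10000, emb_dim=100,
                                     hidden_dim=128, num_classes=5)
token_logits, sent_logits = model(torch.randint(0, 10000, (2, 12)))
```

Under this reading, the two granularities share parameters by construction: a token-level cross-entropy loss on `token_logits` supervises where the heads attend, while the sentence-level loss regularizes those same scores; training with the sentence objective alone would still yield per-token scores, which is presumably how zero-shot sequence labeling falls out of the architecture.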