Towards Better Representations for Multi-Label Text Classification with Multi-granularity Information

Published: 07 Oct 2023, Last Modified: 01 Dec 2023, EMNLP 2023 Findings
Submission Type: Regular Long Paper
Submission Track: NLP Applications
Submission Track 2: Semantics: Lexical, Sentence level, Document Level, Textual Inference, etc.
Keywords: Multi-label text classification, Text representation, Contrastive learning, Multi-granularity information
TL;DR: We present a multi-granularity information enhancement framework that improves text representations for multi-label text classification and alleviates the anisotropy problem of pre-trained language models.
Abstract: Multi-label text classification (MLTC) aims to assign multiple labels to a given text. Previous works have focused on text representation learning and label-correlation modeling using pre-trained language models (PLMs). However, studies have shown that PLMs generate word-frequency-oriented text representations, causing texts with different labels to be distributed closely in a narrow region, which makes them difficult to classify. To address this, we present a novel framework $\textbf{CL}$($\underline{C}$ontrastive $\underline{L}$earning)-$\textbf{MIL}$ ($\underline{M}$ulti-granularity $\underline{I}$nformation $\underline{L}$earning) to refine text representations for the MLTC task. We first use contrastive learning to generate uniform initial text representations while implicitly incorporating label frequency. Then, we design a multi-task learning module that integrates multi-granularity information (diverse text-label correlations, label-label relations, and label frequency) into text representations, enhancing their discriminative ability. Experimental results demonstrate the complementarity of the modules in CL-MIL, which improves the quality of text representations and yields stable and competitive gains for MLTC.
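To make the contrastive-learning step concrete, below is a minimal sketch of an InfoNCE-style contrastive loss over paired text embeddings, of the kind commonly used to make PLM representations more uniform. This is an illustrative assumption, not the paper's exact objective: the function name `info_nce_loss`, the pairing scheme (each row of the two batches is a positive pair, all other in-batch rows are negatives), and the temperature value are hypothetical.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(emb_a: torch.Tensor, emb_b: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE contrastive loss over paired text embeddings.

    emb_a, emb_b: (batch, dim) tensors; row i of each forms a positive
    pair (e.g. two augmented encodings of the same text), while the other
    rows in the batch serve as negatives.
    """
    a = F.normalize(emb_a, dim=-1)
    b = F.normalize(emb_b, dim=-1)
    # Pairwise cosine similarities, scaled by temperature.
    logits = a @ b.T / temperature
    # Row i should score highest against its own positive (column i).
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)
```

Minimizing this loss pulls each positive pair together and pushes apart the other texts in the batch, which spreads representations over the embedding space instead of leaving them clustered in a narrow region.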
Submission Number: 878