A Decomposition Framework for Class Incremental Text Classification with Dual Prompt Tuning

Published: 01 Jan 2024, Last Modified: 17 May 2025 · IJCNN 2024 · CC BY-SA 4.0
Abstract: Text classification models need to be capable of learning new categories continually as new topics emerge over time. Class incremental text classification provides a solution that enables models to sequentially learn new classes while retaining previously acquired knowledge. However, existing methods have limitations such as high computational overhead, substantial storage requirements, and privacy concerns. To address these issues and leverage the powerful capabilities of prompt-based continual learning methods, we propose a decomposition framework for class incremental text classification with two key subtasks: task ID identification and continual text classification. For task identification, we generate synthetic samples using large language models and then train a sentence encoder with supervised contrastive learning. This enables task retrieval with minimal replay data and no privacy concerns. For classification, we introduce a novel dual prompt tuning approach that employs a unified prompt decoupling strategy to capture both task-general and task-specific knowledge. We also propose a task-aware prompt initialization method that exploits relationships between tasks. Experiments on benchmark datasets demonstrate state-of-the-art performance. Our proposed method reduces reliance on replayed data and makes effective use of knowledge transfer across tasks.
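The task-identification step described above trains a sentence encoder with supervised contrastive learning, which pulls embeddings of same-class samples together and pushes different-class samples apart. A minimal NumPy sketch of a supervised contrastive loss (in the style of Khosla et al.; the function name, temperature value, and inputs here are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of embeddings.

    embeddings: (n, d) array of encoder outputs (e.g. synthetic-sample encodings).
    labels: length-n array of class/task labels.
    """
    # L2-normalize so the dot product is cosine similarity.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature

    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    # Exclude each anchor's similarity with itself from the softmax.
    sim = np.where(self_mask, -np.inf, sim)

    # Numerically stable row-wise log-softmax over all other samples.
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))

    labels = np.asarray(labels)
    # Positives: other samples in the batch that share the anchor's label.
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    n_pos = pos.sum(axis=1)
    valid = n_pos > 0  # anchors with at least one positive

    # Average the negative log-probability over each anchor's positives.
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1)[valid] / n_pos[valid]
    return per_anchor.mean()
```

With correct labels on tightly clustered same-class embeddings, this loss is lower than with mismatched labels, which is what drives the encoder toward class-separable representations usable for task retrieval.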