Plug-and-Play Representation Learning of Documents for Pre-trained Models

Anonymous

03 Sept 2022 (modified: 05 May 2023), ACL ARR 2022 September Blind Submission
Abstract: Recently, inserting task-specific plugins such as adapters and prompts into a unified pre-trained model (PTM) to handle multiple tasks has become an efficient paradigm for NLP. In this paper, we explore extending this paradigm from task adaptation to document representation. Specifically, we introduce plug-and-play representation learning of documents (named PlugD), which aims to represent each document as a unified, task-agnostic plugin. By inserting document plugins together with task plugins into the PTM, we can encode each document once and reuse the encoding across different downstream tasks, which is more efficient than conventional methods that learn task-specific encoders to represent documents. Extensive experiments on 7 datasets covering 5 typical NLP tasks show that PlugD enables models to encode documents once and for all with a unified PTM as the basis, yielding a 3.2x speedup in tuning and inference while achieving comparable or even better performance. We also find that plugins serve as an effective way to inject external knowledge into task-specific models, improving model performance without any additional model training. Our code and plugins will be released to advance future work.
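The following is a minimal, hypothetical PyTorch sketch of the general idea described in the abstract: a frozen PTM layer into which a task-agnostic document plugin and a task-specific plugin are inserted, so that one cached document plugin can be reused across tasks. The class names (Adapter, FrozenBackboneLayer), the bottleneck-adapter form, and all hyperparameters are illustrative assumptions, not the paper's actual architecture.

```python
# Conceptual sketch only, NOT the authors' implementation.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: a generic plugin module inserted into a frozen backbone."""
    def __init__(self, hidden: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, x):
        # Residual plugin update around the frozen backbone activation.
        return x + self.up(torch.relu(self.down(x)))

class FrozenBackboneLayer(nn.Module):
    """One layer of a pre-trained model (frozen) with slots for plugins."""
    def __init__(self, hidden: int):
        super().__init__()
        self.ffn = nn.Linear(hidden, hidden)
        for p in self.parameters():
            p.requires_grad = False  # the unified PTM stays fixed

    def forward(self, x, doc_plugin: Adapter, task_plugin: Adapter):
        h = torch.relu(self.ffn(x))
        h = doc_plugin(h)   # task-agnostic document plugin (computed once per document)
        h = task_plugin(h)  # task-specific plugin (shared across documents)
        return h

hidden = 128
layer = FrozenBackboneLayer(hidden)

# "Encode once": a document's plugin is produced a single time and cached...
doc_plugin = Adapter(hidden)  # stands in for the cached document representation
# ...then reused with different task plugins for different downstream tasks.
qa_plugin, cls_plugin = Adapter(hidden), Adapter(hidden)

x = torch.randn(2, 16, hidden)           # (batch, seq_len, hidden) query tokens
out_qa = layer(x, doc_plugin, qa_plugin)
out_cls = layer(x, doc_plugin, cls_plugin)
print(out_qa.shape, out_cls.shape)       # both torch.Size([2, 16, 128])
```

The design choice illustrated here is that the expensive document encoding is amortized: only small plugin modules vary per document and per task, while the PTM parameters are shared and frozen.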
Paper Type: long