Plug-Tagger: A Pluggable Sequence Labeling Framework with Pre-trained Language Models

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: Fine-tuning pre-trained language models (PLMs) on downstream tasks is the de-facto paradigm in NLP. Despite its superior performance on sequence labeling, fine-tuning requires a full set of model parameters and a time-consuming deployment for each task, which limits its application in real-world scenarios. To alleviate these problems, we propose a pluggable sequence labeling framework, plug-tagger. By switching the task-specific plugin on the input, plug-tagger allows a frozen PLM to perform different sequence labeling tasks without redeployment. Specifically, the plugin on the input consists of a few continuous vectors that manipulate the PLM without modifying its parameters, so each task only needs to store these lightweight vectors rather than a full copy of the PLM. To avoid redeployment, we propose a label word mechanism, which reuses the language model head so that no task-specific classifier has to modify the model structure. Experimental results on three sequence labeling tasks show that the proposed method achieves performance comparable to fine-tuning while using task-specific parameters amounting to only 0.1% of the model. Experiments also show that our method is faster than other lightweight methods under limited computational resources.
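To make the described setup concrete, the following is a minimal sketch, not the authors' released code: it assumes a prefix-style "plugin" of a few trainable vectors prepended to the input embeddings of a frozen PLM, with the frozen LM head scoring a small set of label words (one vocabulary word per tag) at each token position. The backbone name, label words, and plugin length below are illustrative assumptions.

```python
# Hedged sketch of a plugin-style tagger: only `plugin` is trainable per task,
# the PLM and its LM head stay frozen, and label words replace a task classifier.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-cased"              # assumed backbone, not specified in the abstract
LABEL_WORDS = ["person", "place", "none"]   # hypothetical label words for a toy tag set
NUM_PLUGIN_VECTORS = 8                      # hypothetical plugin length

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
plm = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
for p in plm.parameters():                  # freeze the PLM, including its LM head
    p.requires_grad_(False)

hidden_size = plm.config.hidden_size
# The only task-specific parameters: a handful of continuous plugin vectors.
plugin = nn.Parameter(torch.randn(NUM_PLUGIN_VECTORS, hidden_size) * 0.02)

# Map each tag to the vocabulary id of its label word, so the frozen LM head
# can serve as the classifier and no task-specific head is added.
label_word_ids = torch.tensor(tokenizer.convert_tokens_to_ids(LABEL_WORDS))

def tag_logits(sentence: str) -> torch.Tensor:
    enc = tokenizer(sentence, return_tensors="pt")
    tok_embeds = plm.get_input_embeddings()(enc["input_ids"])     # [1, T, H]
    plug = plugin.unsqueeze(0)                                    # [1, P, H]
    inputs_embeds = torch.cat([plug, tok_embeds], dim=1)          # prepend the plugin
    attn = torch.cat(
        [torch.ones(1, NUM_PLUGIN_VECTORS, dtype=enc["attention_mask"].dtype),
         enc["attention_mask"]], dim=1)
    out = plm(inputs_embeds=inputs_embeds, attention_mask=attn)
    # Drop plugin positions, then keep only the label-word columns of the LM head.
    token_logits = out.logits[:, NUM_PLUGIN_VECTORS:, :]          # [1, T, vocab]
    return token_logits[..., label_word_ids]                      # [1, T, num_tags]

print(tag_logits("Alice visited Paris").shape)
```

Switching tasks in this sketch amounts to swapping the `plugin` tensor and the `LABEL_WORDS` list while the deployed PLM stays untouched, which mirrors the no-redeployment claim in the abstract.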