Plug-and-Play Knowledge Injection for Pre-trained Language Models

Anonymous

05 Jun 2022 (modified: 05 May 2023) · ACL ARR 2022 June Blind Submission
Abstract: Injecting external knowledge can improve the performance of pre-trained language models (PLMs) on various downstream NLP tasks. However, current knowledge injection methods usually require knowledge-aware pre-training or fine-tuning, which couples the knowledge-enhanced models tightly to specific knowledge bases. Toward flexible knowledge injection, we explore a new paradigm, plug-and-play knowledge injection, which decouples models from knowledge bases. Correspondingly, we propose a plug-and-play injection method, \textit{map-tuning}, which trains a mapping network for knowledge embeddings to enrich model inputs with the mapped embeddings while keeping PLMs frozen. Experimental results on two typical knowledge-driven NLP tasks show that map-tuning effectively improves the performance of PLMs at little computational cost. Specifically, one mapping network can be plugged into models for various downstream tasks without any additional training, and one downstream model can work with multiple mapping networks of different knowledge bases to adapt to different domains. We will release all the code and models of this paper.
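For readers skimming the abstract, the sketch below illustrates the general idea of map-tuning as described above: a small trainable mapping projects pre-trained knowledge embeddings into the input-embedding space of a frozen PLM, and the mapped embeddings are fed to the model alongside ordinary token embeddings. All class names, dimensions, and the simple concatenation of token and entity embeddings are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of plug-and-play map-tuning (assumptions, not the paper's code).
import torch
import torch.nn as nn
from transformers import BertModel


class MappingNetwork(nn.Module):
    """Maps frozen KB entity embeddings into the PLM's input-embedding space."""

    def __init__(self, kb_dim: int, plm_dim: int):
        super().__init__()
        self.proj = nn.Linear(kb_dim, plm_dim)  # the only trainable module

    def forward(self, entity_embs: torch.Tensor) -> torch.Tensor:
        return self.proj(entity_embs)


plm = BertModel.from_pretrained("bert-base-uncased")
for p in plm.parameters():  # keep the PLM frozen; only the mapping is trained
    p.requires_grad = False

mapper = MappingNetwork(kb_dim=100, plm_dim=plm.config.hidden_size)

# Dummy inputs: `token_embs` stands in for a sentence's word embeddings and
# `entity_embs` for pre-trained knowledge embeddings of entities it mentions.
# The paper's exact input format may differ; this sketch simply concatenates.
token_embs = torch.randn(1, 16, plm.config.hidden_size)
entity_embs = torch.randn(1, 4, 100)

inputs_embeds = torch.cat([token_embs, mapper(entity_embs)], dim=1)
outputs = plm(inputs_embeds=inputs_embeds)
print(outputs.last_hidden_state.shape)  # (1, 20, hidden_size)
```

Because the PLM stays frozen, swapping in a different mapping network (trained on a different knowledge base) requires no further training of the downstream model, which is the plug-and-play property the abstract refers to.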
Paper Type: long
Editor Reassignment: yes
Reviewer Reassignment: yes
