BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models

ICLR 2022 (modified: 16 Nov 2022)
Abstract: Pre-trained Natural Language Processing (NLP) models, which can be adapted to a variety of downstream language tasks via fine-tuning, greatly accelerate the learning process of NLP models. However,...
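To make the fine-tuning workflow the abstract refers to concrete, here is a minimal sketch (not the paper's code; the checkpoint name, two-class task, and toy data are illustrative assumptions) of adapting a pre-trained model to a downstream classification task with Hugging Face transformers:

```python
# Minimal fine-tuning sketch: adapt a pre-trained checkpoint to a
# downstream binary classification task. All data here is a toy example.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # classification head is newly initialized
)

texts = ["a great movie", "a terrible movie"]   # toy downstream examples
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)  # loss computed internally
outputs.loss.backward()
optimizer.step()
```

The key point for the threat model studied here is that the downstream user only controls this fine-tuning step; whatever behavior is embedded in the pre-trained weights is inherited by every task adapted from them.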