Parameter-efficient fine-tuning of large-scale pre-trained language models

Published: 01 Jan 2023, Last Modified: 15 May 2023. Nat. Mach. Intell. 2023
Abstract: Training a deep neural network can be costly, but training time is reduced when a pre-trained network can be adapted to different use cases. Ideally, only a small number of parameters need to change during this fine-tuning, so that the resulting updates can be distributed more easily. In this Analysis, fine-tuning methods that modify only a small number of parameters are compared across a large set of natural language processing tasks.
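To make the idea concrete, here is a minimal sketch of one family of methods in this space: a low-rank (LoRA-style) adapter, where the pre-trained weight matrix stays frozen and only a small low-rank correction is trained. All names and dimensions below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): a frozen pre-trained
# weight W is augmented with a trainable low-rank update B @ A, so only the
# small matrices A and B need to be trained and distributed.

rng = np.random.default_rng(0)

d_out, d_in, r = 512, 512, 8                # r << d_in: the low-rank bottleneck
W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero init,
                                            # so the adapted layer starts equal
                                            # to the pre-trained one)

def forward(x):
    # Adapted layer: frozen path plus low-rank trainable path.
    return W @ x + B @ (A @ x)

frozen = W.size
trainable = A.size + B.size
print(f"trainable fraction: {trainable / (frozen + trainable):.4f}")
# → trainable fraction: 0.0303
```

With the zero-initialized `B`, `forward` initially reproduces the frozen layer exactly, and only about 3% of the parameters are trainable; shipping a fine-tuned model then amounts to shipping `A` and `B` rather than the full weight matrix.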