Less-forgetting Multi-lingual Fine-tuning

Published: 31 Oct 2022, Last Modified: 11 Oct 2022
NeurIPS 2022 Accept
Readers: Everyone
Keywords: Multi-lingual Language Models, Multi-lingual Fine-tuning, Less-forgetting
TL;DR: This paper conducts both theoretical and experimental analysis on the multi-lingual fine-tuning and proposes a novel multi-lingual fine-tuning method.
Abstract: Multi-lingual fine-tuning (MLF), which fine-tunes a multi-lingual language model (MLLM) on multiple source languages, aims to achieve good zero-shot performance on target languages. In MLF, the fine-tuned model tends to fit the source languages while forgetting the cross-lingual knowledge obtained during pre-training. This forgetting phenomenon degrades the zero-shot performance of MLF, yet it remains under-explored. To fill this gap, this paper proposes a multi-lingual fine-tuning method, dubbed Less-forgetting Multi-lingual Fine-tuning (LF-MLF). In LF-MLF, we cast multi-lingual fine-tuning as a constrained optimization problem, where the objective is to minimize forgetting and the constraints require the fine-tuning loss to decrease. The proposed method achieves superior zero-shot performance; furthermore, it attains Pareto stationarity. Extensive experiments on Named Entity Recognition, Question Answering and Natural Language Inference support our theoretical analysis and validate the superiority of our proposal.
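A minimal sketch of the constrained formulation described in the abstract, in illustrative notation that is assumed here rather than taken from the paper:

% Illustrative only: theta, L_forget, L_k, theta_cur and K are assumed symbols.
\[
  \min_{\theta} \; \mathcal{L}_{\mathrm{forget}}(\theta)
  \quad \text{s.t.} \quad
  \mathcal{L}_{k}(\theta) \le \mathcal{L}_{k}(\theta_{\mathrm{cur}}),
  \qquad k = 1, \dots, K,
\]
% L_forget measures loss of pre-trained cross-lingual knowledge,
% L_k is the fine-tuning loss on the k-th source language, and
% theta_cur is the current iterate, so each constraint asks the
% fine-tuning loss not to increase while forgetting is minimized.

Under this reading, Pareto stationarity roughly corresponds to a point where no update direction can further reduce the forgetting objective without violating one of the fine-tuning descent constraints.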
Supplementary Material: zip