Cross-Lingual Transfer with Large Language Models via Adaptive Adapter Merging

24 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Cross-Lingual Transfer, Model Merging, Large Language Models
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: As an effective alternative to direct fine-tuning on target tasks in specific languages, cross-lingual transfer addresses the challenge of limited training data by aligning representations across languages or by explicitly translating target languages into source languages. However, these methods have notable limitations and fail to fully exploit the potential of Large Language Models (LLMs). In this paper, we regard the ability of LLMs to handle a particular task in a particular language as a combination of "task ability" and "language ability". In the context of parameter-efficient fine-tuning and cross-lingual transfer, task ability is represented by adapters fine-tuned on the target task in the source language, while language ability refers to the ability to solve problems in the specific target language. We propose a novel adaptive adapter merging method for cross-lingual transfer, termed $\texttt{AdaMergeX}$. Since language ability is not tied to any specific task, we introduce an easily accessible reference task from which language ability is obtained by adapter merging. By further merging it with the adapter tuned on the target task in the source language, we achieve effective cross-lingual transfer. Furthermore, unlike existing model merging methods that rely on arithmetic addition, we propose a structure-adaptive merging method that adapts the merging operation to the structure of the adapters. Our empirical results demonstrate that our approach achieves effective cross-lingual transfer, outperforming existing methods across all settings.
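To make the composition described in the abstract concrete, the following is a minimal worked sketch of how the merging could be instantiated; the abstract does not spell out the exact merging rule, so the notation ($\Delta W$ for adapter weights, "ref" for the reference task, "src"/"tgt" for source/target languages) is illustrative rather than the authors' definitive formulation. For adapters that combine additively with the base weights (e.g., LoRA-style), language ability could be estimated as the difference between reference-task adapters trained in the two languages and then added to the task adapter trained in the source language:

$\Delta W_{\text{tgt task, tgt lang}} \approx \Delta W_{\text{tgt task, src lang}} + \left(\Delta W_{\text{ref task, tgt lang}} - \Delta W_{\text{ref task, src lang}}\right)$

A structure-adaptive rule would replace this arithmetic addition when the adapter acts multiplicatively on the base weights (e.g., $(IA)^3$-style scaling vectors $\ell$), for instance by using element-wise ratios in place of differences:

$\ell_{\text{tgt task, tgt lang}} \approx \ell_{\text{tgt task, src lang}} \odot \left(\ell_{\text{ref task, tgt lang}} \oslash \ell_{\text{ref task, src lang}}\right)$

where $\odot$ and $\oslash$ denote element-wise multiplication and division.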
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9136