Mitigating Training Imbalance in LLM Fine-Tuning via Selective Parameter Merging

ACL ARR 2024 June Submission 4279 Authors

16 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Supervised fine-tuning (SFT) is crucial for adapting Large Language Models (LLMs) to specific tasks. In this work, we demonstrate that the order of training data can introduce significant training imbalance, potentially resulting in performance degradation. To address this, we propose to mitigate the imbalance by merging SFT models fine-tuned with different data orders, thereby enhancing the overall effectiveness of SFT. Additionally, we introduce a novel technique, "parameter-selection merging," which outperforms traditional weighted-average methods on five datasets. Finally, through analysis and ablation studies, we validate the effectiveness of our method and identify the sources of its performance improvements.
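
The abstract names "parameter-selection merging" but this page carries no code. Below is a minimal sketch of what such a method might look like, assuming it selects each parameter element-wise from one of the fine-tuned models (here uniformly at random) instead of averaging them; the function names, the selection rule, and the baseline are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of parameter-selection merging vs. weighted-average merging.
# Assumption: each scalar parameter of the merged model is copied from one
# randomly chosen fine-tuned model rather than averaged across all of them.
import torch


def weighted_average_merge(state_dicts, weights=None):
    """Baseline: per-parameter weighted average across K models."""
    n = len(state_dicts)
    weights = weights or [1.0 / n] * n
    return {
        name: sum(w * sd[name] for w, sd in zip(weights, state_dicts))
        for name in state_dicts[0]
    }


def parameter_selection_merge(state_dicts, seed=0):
    """Assumed selection rule: for every element of every parameter tensor,
    copy the value from one of the K models, chosen uniformly at random."""
    gen = torch.Generator().manual_seed(seed)
    merged = {}
    for name in state_dicts[0]:
        # Stack the K copies of this parameter: shape (K, *param_shape).
        stacked = torch.stack([sd[name] for sd in state_dicts])
        # Draw a model index in [0, K) independently for each element.
        idx = torch.randint(len(state_dicts), stacked.shape[1:], generator=gen)
        merged[name] = torch.gather(stacked, 0, idx.unsqueeze(0)).squeeze(0)
    return merged


# Toy usage: merge two "models" fine-tuned with different data orders.
sd_a = {"w": torch.randn(4, 4), "b": torch.randn(4)}
sd_b = {"w": torch.randn(4, 4), "b": torch.randn(4)}
merged = parameter_selection_merge([sd_a, sd_b])
print(merged["w"].shape)  # torch.Size([4, 4])
```

In this reading, the key difference from weighted averaging is that each merged parameter retains an exact value from one source model rather than an interpolated one; the uniform-random choice is only one plausible instantiation of the paper's selection idea.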
Paper Type: Short
Research Area: Language Modeling
Research Area Keywords: applications, fine-tuning
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 4279