I Learn Better If You Speak My Language: Enhancing Large Language Model Fine-Tuning with Style-Aligned Response Adjustments

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Fine-tuning large language models (LLMs) on a small dataset for a particular task is a common yet challenging scenario. Overfitting on a limited number of examples can degrade the model's ability to generalize and erode its original skills. Our research explores how the style of ground-truth responses affects the fine-tuning process. We find that matching the ground-truth response style to the LLM's inherent style yields better learning outcomes. Building on this insight, we develop a method that minimally alters the LLM's pre-existing responses to correct errors and uses these adjusted responses as training targets. This technique enables precise corrections in the model's native response style, safeguarding its core capabilities and thus avoiding overfitting. Our findings show that this approach not only improves the LLM's task-specific accuracy but also crucially preserves its original competencies and effectiveness.
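The core idea, constructing a training target by minimally editing the model's own response rather than substituting an independently written gold answer, can be illustrated with a toy sketch. This is not the paper's implementation; the function name, the token-substitution scheme, and the example correction are all hypothetical, and a real pipeline would edit at a finer granularity than whitespace tokens.

```python
import difflib

def style_aligned_target(model_response: str, corrections: dict) -> str:
    """Apply minimal token-level substitutions to the model's own response,
    keeping its phrasing and style intact while fixing factual errors.
    (Hypothetical helper; the paper's actual adjustment procedure may differ.)"""
    tokens = model_response.split()
    return " ".join(corrections.get(tok, tok) for tok in tokens)

# Hypothetical example: the model's draft answer contains one factual error.
draft = "The capital of Australia is Sydney ."
target = style_aligned_target(draft, {"Sydney": "Canberra"})
print(target)  # The capital of Australia is Canberra .

# The adjusted target differs from the draft by a single token, so
# fine-tuning on it corrects the error without shifting the model's style.
edits = [op for op in difflib.ndiff(draft.split(), target.split()) if op[0] != " "]
print(edits)  # ['- Sydney', '+ Canberra']
```

The contrast with standard fine-tuning is that the loss is computed against a target nearly identical to what the model would already produce, so gradient updates concentrate on the erroneous span rather than on stylistic differences.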
Paper Type: long
Research Area: Efficient/Low-Resource Methods for NLP
Contribution Types: Approaches to low-resource settings
Languages Studied: English, Python
