Efficiently Selecting Response Generation Strategy by Self-Aligned Perplexity for Fine-Tuning LLMs

ACL ARR 2025 May Submission6322 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Fine-tuning large language models (LLMs) typically relies on producing large sets of input-output pairs. Yet for a given question, there can be many valid outputs. In practice, these outputs are often derived by distilling knowledge from teacher models, and they can vary depending on the specific teacher model or prompting strategy employed. Recent findings show that \emph{how} these training outputs are generated can significantly affect the performance of the fine-tuned model, raising an important question: how do we pick the best \emph{data generation method} from among numerous possibilities? Rather than exhaustively training and evaluating on each candidate, this paper proposes a scalable approximate method that assesses a \emph{small} subset of generated data to estimate its suitability for a specific target LLM. Our central idea is that effective outputs should be \emph{familiar} to the target LLM. While previous work measures familiarity with perplexity, our theoretical analysis and practical observations indicate that plain perplexity can be a suboptimal characterization of ``familiarity''. To address this, we introduce \emph{self-aligned perplexity}, a novel metric capturing how closely candidate outputs adhere to the target LLM's own style and reasoning patterns. In this way, we can identify the most effective generation strategy on a small sample, then apply it to produce the complete training set. We demonstrate that training on data generated by the chosen method yields significant improvements across diverse reasoning-focused benchmarks, particularly in cases where different candidate methods lead to highly divergent training outcomes.
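The sketch below illustrates only the baseline "familiarity via perplexity" scoring that the abstract contrasts against: a small probe set of (prompt, candidate output) pairs from one generation strategy is scored by the output tokens' perplexity under the target LLM, and strategies with lower average perplexity would be preferred. The model name, prompt formatting, and aggregation are illustrative assumptions, not the paper's exact setup; the proposed self-aligned perplexity additionally accounts for the target model's own generation style, and its precise definition is given in the paper itself.

```python
# Minimal sketch (assumed setup): score candidate training outputs by their
# perplexity under the target LLM. This is the baseline familiarity measure
# the abstract refers to, not the paper's self-aligned variant.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.2-1B"  # hypothetical target LLM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

@torch.no_grad()
def output_perplexity(prompt: str, output: str) -> float:
    """Perplexity of `output` conditioned on `prompt` under the target LLM."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + output, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # mask prompt tokens; score only the output
    loss = model(full_ids, labels=labels).loss  # mean negative log-likelihood over output tokens
    return math.exp(loss.item())

def score_generation_strategy(samples: list[tuple[str, str]]) -> float:
    """Average perplexity over a small probe set of (prompt, candidate output) pairs."""
    return sum(output_perplexity(p, o) for p, o in samples) / len(samples)
```

In this framing, each candidate data generation method (different teacher model or prompting strategy) is scored on the same small probe set, and the full training set is then produced only with the method judged most suitable for the target LLM.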
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: distillation, fine-tuning, data-efficient training, data augmentation, NLP in resource-constrained settings
Contribution Types: Approaches to low-resource settings
Languages Studied: English
Submission Number: 6322