Keywords: data efficacy, data organization, data ordering, language model
Abstract: Large Language Models (LLMs) have revolutionized various fields, yet their training efficiency is heavily reliant on effective data curation.
While data selection has been widely studied, strategic data organization for enhanced training remains underexplored, particularly since current LLMs are often trained for only one or a few epochs.
This paper systematically explores the influence of data organization on LLM training by reusing pre-computed sample-level scores originally generated for data efficiency, thereby incurring minimal additional computational overhead.
We identify and formalize four key guidelines for optimizing data organization: Boundary Sharpening, Cyclic Scheduling, Curriculum Continuity, and Local Diversity.
Building on these guidelines, we introduce two novel data ordering methods, STR and SAW.
Extensive experiments across model scales and data sizes, covering both the pre-training and SFT stages, validate the effectiveness of the four guidelines.
They also demonstrate that the proposed data ordering methods robustly improve the stability and performance of LLM training.
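The abstract does not specify how STR and SAW operate, so the following is only a minimal, hypothetical sketch of the general idea it describes: reusing precomputed sample-level scores to order training data. The function name `order_by_score_curriculum`, the bucket scheme, and the toy scores are all illustrative assumptions, not the paper's method; the sketch merely combines a score-sorted curriculum (Curriculum Continuity) with within-bucket shuffling (Local Diversity).

```python
import random

def order_by_score_curriculum(samples, scores, num_buckets=10, seed=0):
    """Hypothetical sketch, not the paper's STR/SAW: order samples
    easy-to-hard by precomputed sample-level scores, then shuffle
    within contiguous buckets so nearby batches stay diverse."""
    rng = random.Random(seed)
    # Rank sample indices by ascending score (e.g., difficulty/quality).
    ranked = sorted(range(len(samples)), key=lambda i: scores[i])
    bucket_size = max(1, len(ranked) // num_buckets)
    ordered = []
    for start in range(0, len(ranked), bucket_size):
        bucket = ranked[start:start + bucket_size]
        rng.shuffle(bucket)  # local shuffle preserves diversity per bucket
        ordered.extend(bucket)
    return [samples[i] for i in ordered]

# Toy usage: ten documents with synthetic scores.
samples = [f"doc_{i}" for i in range(10)]
scores = [0.9, 0.1, 0.5, 0.3, 0.8, 0.2, 0.7, 0.4, 0.6, 0.0]
print(order_by_score_curriculum(samples, scores, num_buckets=3))
```

Because the scores are assumed to be precomputed for data selection anyway, ordering of this kind adds only a sort and a shuffle, consistent with the abstract's claim of minimal additional overhead.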
Paper Type: Long
Research Area: LLM Efficiency
Research Area Keywords: data-effective training, data organization, LLM efficacy
Languages Studied: English
Submission Number: 824