EDCO: Dynamic Curriculum Orchestration for Domain-specific Large Language Model Fine-tuning

ICLR 2026 Conference Submission 553 Authors

01 Sept 2025 (modified: 23 Dec 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Large language models, domain-specific agents, supervised fine-tuning, reinforcement learning
Abstract: Domain-specific large language models (LLMs), typically developed by fine-tuning a pre-trained general-purpose LLM on specialized datasets, represent a significant advancement in applied AI. A common strategy in LLM fine-tuning is curriculum learning, which pre-orders training samples by metrics such as difficulty to improve learning efficiency over random sampling. However, most existing methods rely on a static curriculum designed before training, which cannot adapt to the model's evolving needs during fine-tuning. To address this, we propose EDCO, a novel framework built on two key ideas: inference entropy and dynamic curriculum orchestration. Inspired by recent findings that maintaining high answer entropy benefits long-term reasoning gains, EDCO prioritizes samples with high inference entropy in a continuously adapted curriculum. EDCO integrates three core components: an efficient entropy estimator that uses prefix tokens to approximate full-sequence entropy, an entropy-based curriculum generator that selects the data points with the highest inference entropy, and an LLM trainer that optimizes the model on the selected curriculum. Comprehensive experiments in the wireless and data communications domains show that EDCO outperforms common curriculum strategies for fine-tuning Qwen3-1.7B/4B models under both supervised fine-tuning and reinforcement learning settings. Furthermore, our efficient entropy estimation reduces computation time by 83.5% while maintaining high accuracy.
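The abstract describes two of EDCO's components in enough detail to illustrate them concretely: a prefix-token entropy estimator and an entropy-ranked curriculum selector. The following is a minimal sketch (not the authors' implementation) of how these ideas could look with a Hugging Face causal LM; the model name, `prefix_len`, and the function names `estimate_prefix_entropy` and `select_curriculum` are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: approximate full-sequence inference entropy from a short
# generated prefix, then keep the highest-entropy samples for the next
# curriculum round. Assumes a Hugging Face causal LM; not the authors' code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-1.7B"  # assumption: any causal LM checkpoint works here
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
model.eval()

@torch.no_grad()
def estimate_prefix_entropy(prompt: str, prefix_len: int = 32) -> float:
    """Average per-step entropy of the next-token distribution over the first
    `prefix_len` generated tokens, used as a cheap proxy for full-sequence entropy."""
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    out = model.generate(
        **inputs,
        max_new_tokens=prefix_len,
        do_sample=True,
        return_dict_in_generate=True,
        output_scores=True,  # keep the logits produced at each generation step
    )
    entropies = []
    for step_logits in out.scores:  # one logits tensor per generated token
        probs = torch.softmax(step_logits.float(), dim=-1)
        step_entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)
        entropies.append(step_entropy.item())
    return sum(entropies) / max(len(entropies), 1)

def select_curriculum(prompts: list[str], batch_size: int) -> list[str]:
    """Rank candidate samples by estimated inference entropy and keep the
    highest-entropy ones for the next fine-tuning round."""
    scored = [(estimate_prefix_entropy(p), p) for p in prompts]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [p for _, p in scored[:batch_size]]
```

In this reading, scoring only a short prefix rather than a full generation is what yields the reported reduction in entropy-estimation cost; the selector would be re-run periodically during training so the curriculum tracks the model's current entropy profile.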
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 553