Keywords: LLM reinforcement learning, curriculum learning, sample efficiency, Hamiltonian path
TL;DR: Large Language Model Curriculum Learning Based on Minimum Semantic Similarity Hamiltonian Path
Abstract: Recent curriculum reinforcement learning for large language models (LLMs) typically relies on difficulty-based annotations for data filtering and ordering. However, such methods suffer from local optimization, where continual training on simple samples in the early steps causes the policy to lose its capacity for exploration. We propose a novel scheme, namely *Hamiltonian curiosity AugMented large language ModEl Reinforcement (HAMMER)*, that transfers diversity metrics, commonly used in dataset evaluation, into the dynamic reinforcement learning procedure, where training samples are ordered via a minimum-semantic-similarity Hamiltonian path so that initial training retains more exploration. From a theoretical perspective of generalization bounds, diversity-driven ordering facilitates stable convergence. Empirical evaluations indicate that *HAMMER* stimulates model "curiosity" and consistently achieves a 3% to 4% average accuracy gain across diverse inference benchmarks.
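The ordering step described in the abstract can be sketched as follows. Since finding an exact minimum-weight Hamiltonian path is NP-hard, the sketch below uses a greedy least-similar-neighbor heuristic over precomputed sample embeddings as a stand-in; the function name, the greedy heuristic, and the use of cosine similarity are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def min_similarity_hamiltonian_order(embeddings: np.ndarray, start: int = 0) -> list[int]:
    """Greedy approximation of a minimum-semantic-similarity Hamiltonian path.

    At each step, jump to the unvisited sample whose cosine similarity to the
    current sample is lowest, so consecutive training samples stay maximally
    diverse. (An exact minimum Hamiltonian path is NP-hard; this heuristic is
    a hypothetical stand-in for the paper's construction.)
    """
    # Normalize rows so that dot products are cosine similarities.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = unit @ unit.T

    n = len(embeddings)
    order = [start]
    visited = np.zeros(n, dtype=bool)
    visited[start] = True
    for _ in range(n - 1):
        row = sim[order[-1]].copy()
        row[visited] = np.inf        # exclude already-ordered samples
        nxt = int(np.argmin(row))    # least-similar remaining sample
        order.append(nxt)
        visited[nxt] = True
    return order

# Example: order five random "sample embeddings" for curriculum scheduling.
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 16))
print(min_similarity_hamiltonian_order(emb))
```

Under this reading, the resulting index order is used as the sample schedule for reinforcement learning, so adjacent training samples are semantically far apart early on.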
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 10535