Keywords: LLM, Reinforcement Learning, Auto-Curricula, Reasoning, Quality-Diversity
TL;DR: We evolve training data tailored to a model’s current skill level using LLM-guided mutations, enabling more efficient RL-based training of reasoning models.
Abstract: Recent advances in reasoning models have yielded impressive results in mathematics and coding. However, most approaches rely on static datasets, which encourage memorisation and limit generalisation. We introduce DéjàQ, a framework that departs from this paradigm by jointly evolving a diverse set of synthetic mathematical problems alongside model training. This evolutionary process optimises the dataset’s learnability, adapting to the model’s abilities throughout training. We propose two LLM-driven mutation strategies in which the model itself mutates the training data, either by altering contextual details or by directly modifying problem structure. We find that the model can generate novel and meaningful problems, and that these LLM-driven mutations improve training outcomes compared to both standard RL and a mutator that selects examples from a static dataset based on learnability. We analyse key aspects of DéjàQ, including the validity of generated problems and computational overhead. Our results underscore the potential of dynamically evolving training data to enhance mathematical reasoning and point to broader applicability; we will open-source our code to support this.
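To make the described loop concrete, below is a minimal sketch of a learnability-driven evolution step of the kind the abstract outlines: score a pool of problems by how learnable they are for the current model, keep the best, and generate offspring via LLM-driven mutations of context or structure. All names here (StubModel, StubLLM, evolve_step) and the p·(1 − p) learnability heuristic are hypothetical illustrations under assumed interfaces, not DéjàQ's actual implementation.

```python
import random

class StubModel:
    """Placeholder for the reasoning model being trained (assumption)."""
    def attempt(self, problem: str) -> bool:
        # Stand-in for sampling a solution and verifying it.
        return random.random() < 0.5

class StubLLM:
    """Placeholder for the LLM that mutates training problems (assumption)."""
    def rewrite(self, problem: str, strategy: str) -> str:
        # Real mutations would alter contextual details or problem structure.
        return f"{problem} [mutated: {strategy}]"

def solve_rate(model: StubModel, problem: str, attempts: int = 8) -> float:
    """Empirical fraction of sampled attempts the model solves."""
    return sum(model.attempt(problem) for _ in range(attempts)) / attempts

def learnability(p: float) -> float:
    """Assumed heuristic: peaks for problems solved about half the time."""
    return p * (1.0 - p)

def evolve_step(model: StubModel, llm: StubLLM, population: list, keep: int = 4) -> list:
    """Keep the most learnable problems, then add LLM-mutated offspring."""
    scored = sorted(population,
                    key=lambda q: learnability(solve_rate(model, q)),
                    reverse=True)
    survivors = scored[:keep]
    offspring = [llm.rewrite(q, random.choice(["context", "structure"]))
                 for q in survivors]
    return survivors + offspring

if __name__ == "__main__":
    pool = [f"problem-{i}" for i in range(8)]
    print(evolve_step(StubModel(), StubLLM(), pool))
```

In the full framework, RL training updates on the current population would interleave with such evolution steps, so the dataset tracks the model's skill level throughout training.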
Primary Area: foundation or frontier models, including LLMs
Submission Number: 16869