Keywords: Large Language Model, LLMs, Minimal Criterion Coevolution, Evolutionary Model Merging, Synthetic Data, Quality-Diversity, Open-endedness
TL;DR: Open-ended coevolution of LLMs and synthetic data (without explicit optimization) discovers a population of LLMs superior to baselines.
Abstract: Frontier model developers aim to continually train models that possess emergent, diverse capabilities.
To extend capabilities, the current pre-training and post-training paradigm requires manually launching new training runs with static datasets or reward functions each time.
Addressing this limitation, our work pursues the insight that open-endedness (via the coevolution of models and tasks) can discover models with increasingly novel skills in a single run.
We introduce open-ended \textit{Assessment Coevolving with Diverse Capabilities} (AC/DC), a new model development framework that extends coevolution to large language model (LLM) discovery.
AC/DC evolves both LLMs via model merging and natural language tasks via synthetic data generation.
AC/DC discovers growing archives of LLMs that surpass the capabilities of larger LLMs while taking up less GPU memory.
In particular, our LLM populations achieve a broader Coverage of expertise than other curated models or baselines on downstream benchmarks, without \textit{any} explicit benchmark optimization.
Furthermore, AC/DC improves Coverage over time, continually innovates on tasks and models, and improves performance in multi-agent best-of-N selection.
Our findings highlight the potential of coevolution as a means of discovering broader sets of capabilities from base LLMs.
Overall, AC/DC brings us one step closer to a profoundly new paradigm of LLM development, where continual improvements to the diversity of model capabilities can be accelerated by leveraging existing models as stepping stones to increasingly powerful models.
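To make the coevolution loop concrete, below is a minimal sketch of one generation, assuming the model merging step reduces to weighted parameter interpolation of two parent checkpoints and that tasks are accepted or rejected via a minimal criterion; the abstract does not specify the exact merge recipe or criterion, so `merge_models`, `passes_minimal_criterion`, and the archive structures are hypothetical illustrations, not the authors' implementation.

```python
# Hedged sketch of one AC/DC-style coevolution step (assumed details):
# merge two archived parent models by linear weight interpolation, evaluate
# the child on an archived task, and keep it only if it meets a minimal
# criterion. All names here are illustrative placeholders.
import copy
import random

import torch


def merge_models(parent_a: torch.nn.Module, parent_b: torch.nn.Module,
                 alpha: float = 0.5) -> torch.nn.Module:
    """Linearly interpolate the parameters of two parent models."""
    child = copy.deepcopy(parent_a)
    with torch.no_grad():
        for p_child, p_b in zip(child.parameters(), parent_b.parameters()):
            # child = alpha * parent_a + (1 - alpha) * parent_b
            p_child.mul_(alpha).add_(p_b, alpha=1.0 - alpha)
    return child


def coevolution_step(model_archive: list, task_archive: list,
                     passes_minimal_criterion) -> list:
    """One generation: produce a merged child and archive it if it passes
    the minimal criterion on a sampled task."""
    parent_a, parent_b = random.sample(model_archive, 2)
    child = merge_models(parent_a, parent_b, alpha=random.random())
    task = random.choice(task_archive)
    if passes_minimal_criterion(child, task):
        model_archive.append(child)
    return model_archive
```

In this reading, the archive grows over generations without any explicit benchmark objective; selection pressure comes only from the coevolving tasks, which is consistent with the "no explicit optimization" framing above.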
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 10386