Keywords: Large Language Models, Skill Acquisition, Quality Diversity
Abstract: Training Large Language Models (LLMs) to acquire various skills remains a challenging endeavor. Conventional training approaches often struggle with data distribution imbalances and inadequacies in objective functions that do not align well with task-specific performance. To address these challenges, we introduce CycleQD, a novel approach that leverages the Quality Diversity (QD) framework through a cyclic adaptation of the MAP-Elites algorithm. In this framework, each task's performance metric is alternated as the quality measure while the others serve as the behavioral characteristics. This cyclic focus on individual tasks allows for concentrated effort on one task at a time, eliminating the need for data ratio tuning and simplifying the design of the objective function. Empirical results indicate that applying CycleQD to 8-billion parameter models not only enables them to surpass traditional fine-tuning methods in coding, operating systems, and database tasks, but also achieves performance on par with GPT-3.5-TURBO across these domains. Our code is available at \url{https://github.com/SakanaAI/CycleQD}.
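The cyclic mechanism described above — rotating which task's metric serves as the quality measure while the remaining metrics act as behavioral characteristics — can be illustrated with a toy sketch. This is not the authors' implementation: the genome here is a short list of floats rather than model parameters, the three task scores are synthetic stand-ins for the coding, OS, and database benchmarks, and all function names are hypothetical.

```python
import random

# Toy stand-ins for the three task benchmarks in the paper.
TASKS = ["coding", "os", "db"]

def evaluate(genome):
    # Hypothetical per-task scores in [0, 1]; real CycleQD would run
    # actual benchmarks on a language model.
    a, b, c = genome
    return {
        "coding": max(0.0, min(1.0, a)),
        "os": max(0.0, min(1.0, b)),
        "db": max(0.0, min(1.0, c)),
    }

def bc_key(scores, quality_task, bins=5):
    # Behavioral characteristics = the OTHER tasks' scores, discretized
    # into archive cells, as in MAP-Elites.
    return tuple(
        min(bins - 1, int(scores[t] * bins))
        for t in TASKS if t != quality_task
    )

def mutate(genome, sigma=0.1):
    # Simple Gaussian mutation; the paper would use model merging/mutation.
    return [g + random.gauss(0.0, sigma) for g in genome]

def cycle_qd(generations=300, seed=0):
    random.seed(seed)
    archive = {}  # (quality_task, bc_cell) -> (quality_score, genome)
    population = [[random.random() for _ in range(3)]]
    for gen in range(generations):
        # Cyclic adaptation: the quality metric rotates each generation,
        # so each task gets concentrated optimization effort in turn.
        quality_task = TASKS[gen % len(TASKS)]
        parent = random.choice(
            [g for (_, g) in archive.values()] or population
        )
        child = mutate(parent)
        scores = evaluate(child)
        key = (quality_task, bc_key(scores, quality_task))
        # Standard MAP-Elites replacement: keep the cell's best performer.
        if key not in archive or scores[quality_task] > archive[key][0]:
            archive[key] = (scores[quality_task], child)
    return archive

archive = cycle_qd()
```

Note that no data-ratio tuning appears anywhere in the loop: each generation optimizes a single task's quality measure, which is the simplification the abstract highlights.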
Submission Number: 24