Beyond Model Collapse: Scaling Up with Synthesized Data Requires Reinforcement

Published: 18 Jun 2024 · Last Modified: 19 Jul 2024 · TF2M 2024 Poster · CC BY 4.0
Keywords: Learning with Synthetic Data, Data Curation, Avoiding Model Collapse
TL;DR: We theoretically and empirically demonstrate that leveraging reinforcement from humans or models to select synthesized data can prevent model collapse.
Abstract: Synthesized data from generative models is increasingly considered an alternative to human-annotated data for fine-tuning Large Language Models. This raises concerns about model collapse: a drop in the performance of models fine-tuned on generated data. Observing that it is easier for both humans and machines to distinguish good examples from bad ones than to generate high-quality samples, we investigate the use of feedback on synthesized data to prevent model collapse. We derive theoretical conditions under which a Gaussian mixture classification model can achieve asymptotically optimal performance when trained on feedback-augmented synthesized data, and provide supporting simulations for finite regimes. We illustrate our theoretical predictions on news summarization with large language models. We show that training on feedback-augmented synthesized data, either by pruning incorrect predictions or by selecting the best of several guesses, can prevent model collapse, validating popular approaches like RLHF.
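To make the two feedback mechanisms in the abstract concrete, here is a minimal sketch of feedback-augmented data synthesis. The `generate` and `reward` functions are hypothetical stand-ins for a generative model and a human/model feedback signal; they are illustrative assumptions, not the paper's implementation.

```python
import random

def generate(prompt: str, rng: random.Random) -> str:
    """Hypothetical generator: returns a noisy candidate answer."""
    return f"{prompt}-candidate-{rng.randint(0, 9)}"

def reward(prompt: str, candidate: str) -> float:
    """Hypothetical feedback signal (e.g., a reward model or human label).
    Here it simply prefers candidates whose trailing digit is higher."""
    return float(candidate[-1])

def best_of_n(prompt: str, n: int, rng: random.Random) -> str:
    """Best-of-n selection: draw n candidates, keep the one the
    feedback signal ranks highest."""
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda c: reward(prompt, c))

def prune(dataset, threshold: float):
    """Pruning: keep only synthesized pairs the verifier scores
    at or above a cutoff, discarding likely-incorrect predictions."""
    return [(p, c) for p, c in dataset if reward(p, c) >= threshold]

if __name__ == "__main__":
    rng = random.Random(0)
    prompts = [f"prompt{i}" for i in range(5)]
    # Build a feedback-augmented synthetic dataset via best-of-n,
    # then optionally prune low-scoring pairs before fine-tuning.
    selected = [(p, best_of_n(p, n=8, rng=rng)) for p in prompts]
    print(prune(selected, threshold=5.0))
```

In either variant, the fine-tuning set contains only candidates that pass the feedback filter, which is the ingredient the paper argues is needed to avoid collapse when scaling up with synthesized data.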
Submission Number: 33