Large Language Models Suffer From Their Own Output: An Analysis of the Self-Consuming Training Loop

TMLR Paper 5464 Authors

24 Jul 2025 (modified: 29 Nov 2025) · Rejected by TMLR · CC BY 4.0
Abstract: Large Language Models (LLMs) are already widely used to generate content for a variety of online platforms. Since LLM-generated content cannot be reliably distinguished from human-produced content, LLM-generated text inevitably ends up in the training data of the next generation of LLMs, giving rise to a self-consuming training loop. In the image generation domain, such a self-consuming training loop is known to reduce both the quality and the diversity of generated images, eventually leading to model collapse. It is unclear, however, whether this alarming effect also occurs for LLMs. We therefore present the first study of the self-consuming training loop for LLMs. In addition, we propose a novel method based on logic expressions that allows us to unambiguously verify the correctness of LLM-generated content, which is difficult for natural-language text. Our experimental results for LLMs with up to 49.2M parameters indicate that the self-consuming training loop can still produce correct outputs if its parameters are chosen appropriately, but that output diversity declines at a rate depending on both the proportion of generated data used and the diversity of the initial dataset. In our experimental setting, fresh data can slow this decline, but not stop it. We observe similar results on a real natural-language dataset. Given these concerning results, we encourage researchers to study methods to counteract this process.
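The self-consuming training loop described in the abstract can be illustrated with a minimal toy sketch (not the paper's method — the function names, the "empirical distribution" stand-in for an LLM, and the unique-token diversity measure are all illustrative assumptions). Each generation, a "model" is fit to the current dataset, then used to generate the next generation's training data; when all data is generated (`frac_generated=1.0`), the model's support can only shrink, mirroring the diversity decline the paper reports.

```python
import random
from collections import Counter

def train(samples):
    # Toy "model": the empirical token distribution of the training set.
    counts = Counter(samples)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(model, n, rng):
    # Sample n tokens from the model's distribution.
    tokens = list(model)
    weights = [model[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=n)

def self_consuming_loop(initial_data, generations, frac_generated, rng):
    """Retrain each generation on a mix of model-generated data and
    fresh draws from the original data; record the number of distinct
    tokens (a crude diversity proxy) per generation."""
    data = list(initial_data)
    n = len(data)
    diversity = []
    for _ in range(generations):
        model = train(data)
        diversity.append(len(model))        # distinct tokens the model knows
        n_gen = int(frac_generated * n)
        generated = generate(model, n_gen, rng)
        fresh = rng.choices(initial_data, k=n - n_gen)
        data = generated + fresh
    return diversity

rng = random.Random(0)
# 20 distinct tokens, 2 copies each: small enough for drift to bite.
initial = [chr(ord('a') + i) for i in range(20)] * 2
div = self_consuming_loop(initial, generations=30, frac_generated=1.0, rng=rng)
```

With `frac_generated=1.0` the diversity sequence is non-increasing by construction: a token that drops out of one generation's data can never be generated again. Lowering `frac_generated` mixes fresh data back in, which (as in the paper's findings) slows the decline without reversing it.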
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Yu_Meng1
Submission Number: 5464