Compute Requirements for Algorithmic Innovation in Frontier AI Models

Published: 05 Jun 2025, Last Modified: 15 Jul 2025, ICML 2025 Workshop TAIG Poster, CC BY 4.0
Keywords: AI governance, algorithmic progress, compute governance
TL;DR: We estimate the compute used to develop algorithmic innovations in frontier AI models, investigate trends in these requirements, and assess the potential impact of compute caps.
Abstract: Algorithmic innovation in the pretraining of large language models has driven a massive reduction in the total compute required to reach a given level of capability. In this paper, we empirically investigate the compute requirements for developing such innovations. We catalog 36 pre-training algorithmic innovations used in Llama 3 and DeepSeek-V3 and, for each, estimate both the total FLOP used in development and the FLOP/s of the hardware utilized. Among innovations developed with significant resources, these requirements double each year. We then use this dataset to investigate the effect of compute caps on innovation. Our analysis suggests that compute caps alone are unlikely to dramatically slow AI algorithmic progress: even stringent caps, such as limiting total operations to the compute used to train GPT-2 or limiting hardware capacity to 8 H100 GPUs, would still have allowed half of the cataloged innovations.
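To make the two cap styles concrete, below is a minimal Python sketch of the kind of check the analysis implies: an innovation passes a total-FLOP cap or a hardware-capacity cap if its estimated development compute falls under the threshold. The GPT-2 training-compute figure and the per-GPU peak-throughput numbers are rough public estimates supplied here for illustration; they are assumptions, not values taken from the paper.

```python
# Sketch of the two cap checks described in the abstract.
# Assumed figures (NOT from the paper):
#   - GPT-2's training compute is commonly estimated at roughly 1.5e21 FLOP.
#   - An H100's dense BF16 peak throughput is roughly 1e15 FLOP/s.

GPT2_TRAIN_FLOP = 1.5e21                   # assumed total-operations cap
H100_PEAK_FLOPS = 1.0e15                   # assumed per-GPU FLOP/s
HARDWARE_CAP_FLOPS = 8 * H100_PEAK_FLOPS   # "8 H100 GPUs" capacity cap

def under_caps(dev_flop: float, hw_flops: float) -> dict:
    """Check whether an innovation's estimated development compute (total FLOP)
    and hardware capacity (FLOP/s) each fall under the corresponding cap."""
    return {
        "under_total_flop_cap": dev_flop <= GPT2_TRAIN_FLOP,
        "under_hardware_cap": hw_flops <= HARDWARE_CAP_FLOPS,
    }

# Example: a hypothetical innovation developed with 5e20 FLOP on 4 A100s
# (roughly 3.12e14 FLOP/s each, dense BF16) would clear both caps.
print(under_caps(dev_flop=5e20, hw_flops=4 * 3.12e14))
```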
Submission Number: 16