Trends in Frontier AI Model Count: A Forecast to 2028

Published: 05 Jun 2025 · Last Modified: 15 Jul 2025 · ICML 2025 Workshop TAIG Poster · CC BY 4.0
Keywords: training compute thresholds, frontier AI, forecasting, model proliferation, AI governance
TL;DR: We forecast the number of models exceeding training compute thresholds, such as the 10^25 FLOP threshold in the EU AI Act, for the years 2025-2028.
Abstract: Training compute thresholds are increasingly used as a tool to regulate AI model development and deployment. We therefore forecast the number of models exceeding training compute thresholds, such as the $10^{25}$ FLOP threshold in the EU AI Act and the $10^{26}$ FLOP threshold in the US AI Diffusion Framework, over the coming years (2025-28). We estimate that by the end of 2028, there will be between 103 and 306 foundation models exceeding the $10^{25}$ FLOP threshold and between 45 and 148 models exceeding the $10^{26}$ FLOP threshold (90\% CIs), with median predictions of 165 and 81 models, respectively. We also find that the number of models exceeding these thresholds grows superlinearly but subexponentially. Compute thresholds defined relative to the largest training run to date (for example, capturing all models within one order of magnitude of that run) show a more stable trend, with a median forecast of 14-16 models captured annually from 2025 to 2028.
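The two threshold definitions in the abstract can be illustrated with a minimal sketch. The training-compute values below are purely hypothetical placeholders, not data from the paper; the sketch only shows how an absolute threshold (e.g., $10^{25}$ FLOP) and a relative threshold (within one order of magnitude of the largest run to date) would be counted.

```python
import numpy as np

# Hypothetical training-compute values (FLOP) for a set of models in a given
# year. These numbers are illustrative only, not data from the paper.
training_compute = np.array([3e24, 8e24, 2e25, 6e25, 1e26, 4e26])

# Absolute threshold, as in the EU AI Act (10^25 FLOP).
absolute_threshold = 1e25
n_absolute = int(np.sum(training_compute >= absolute_threshold))

# Relative threshold: models within one order of magnitude of the largest
# training run to date.
largest_run = training_compute.max()
relative_threshold = largest_run / 10
n_relative = int(np.sum(training_compute >= relative_threshold))

print(f"Models above 10^25 FLOP: {n_absolute}")
print(f"Models within one OOM of the largest run ({largest_run:.0e} FLOP): {n_relative}")
```

Because the relative threshold moves up with the frontier, the count it captures tends to stay flatter over time than the count above a fixed absolute threshold, which is the contrast the abstract draws.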
Submission Number: 40