Carbon Literacy for Generative AI: Visualizing Training Emissions Through Human-Scale Equivalents

Published: 24 Nov 2025, Last Modified: 24 Nov 2025. 5th Muslims in ML Workshop, co-located with NeurIPS 2025. License: CC BY 4.0
Keywords: carbon emissions, pre-training
Abstract: Training large language models (LLMs) requires substantial energy and produces significant carbon emissions that are rarely visible to creators and users, owing to a lack of transparent data. We compile reported and estimated training emissions (kg CO2) for 13 state-of-the-art models (2018–2024) to convey the environmental severity of these emissions. Through our interactive demo, these values are translated into human-friendly equivalences, such as the number of trees required for absorption and average per-capita footprints, as well as scaled comparisons across household, commercial, and industrial contexts. Our key takeaways highlight a lack of transparency in reported training emissions. Moreover, the emissions from training alone are alarming, causing harm that the environment cannot absorb quickly enough. We position this work as a socio-technical contribution that bridges quantitative emissions analysis with human-centred interpretation to advance sustainable and transparent AI practice. By offering an accessible lens on sustainability, it promotes more responsible engagement with generative AI in creative communities. Our interactive demo is available at: https://neurips-c02-viz.vercel.app/.
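The abstract describes translating raw kg-CO2 figures into human-scale equivalents (tree-years of absorption, per-capita footprint years). A minimal sketch of such a conversion is shown below; the conversion factors are illustrative assumptions commonly cited in emissions-equivalence calculators, not the paper's exact values, and `human_scale_equivalents` is a hypothetical helper name:

```python
# Illustrative conversion of training emissions (kg CO2) into
# human-scale equivalents. Both factors are assumptions for
# illustration, not the values used by the paper's demo:
#   - a mature tree absorbs roughly 21 kg CO2 per year (assumed)
#   - global average per-capita footprint ~4,700 kg CO2/year (assumed)
TREE_ABSORPTION_KG_PER_YEAR = 21.0
PER_CAPITA_FOOTPRINT_KG_PER_YEAR = 4_700.0


def human_scale_equivalents(emissions_kg: float) -> dict:
    """Translate a raw kg-CO2 figure into relatable equivalents."""
    return {
        # tree-years needed to absorb the emissions
        "tree_years": emissions_kg / TREE_ABSORPTION_KG_PER_YEAR,
        # equivalent years of one person's average footprint
        "per_capita_years": emissions_kg / PER_CAPITA_FOOTPRINT_KG_PER_YEAR,
    }


# Example: a model whose training emitted 500 tonnes (500,000 kg) of CO2
eq = human_scale_equivalents(500_000)
print(f"{eq['tree_years']:.0f} tree-years, "
      f"{eq['per_capita_years']:.0f} person-years of footprint")
```

This kind of per-factor division is what lets a single emissions figure be re-expressed across household, commercial, and industrial contexts by swapping in the relevant reference footprint.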
Track: Track 2: ML by Muslim Authors
Submission Number: 22