Position: Enough of Scaling LLMs! Let's Focus on Downscaling

Published: 01 May 2025 | Last Modified: 18 Jun 2025 | ICML 2025 Position Paper Track (poster) | License: CC BY 4.0
TL;DR: We challenge the trend of scaling up LLMs and propose downscaling laws: strategies to build smaller, eco-friendly models that retain performance through smart data selection, pruning, and ensembling, making AI more sustainable and accessible.
Abstract: We challenge the dominant focus on neural scaling laws and advocate for a paradigm shift toward downscaling in the development of large language models (LLMs). While scaling laws have provided critical insights into performance improvements through increasing model and dataset size, we emphasize the significant limitations of this approach, particularly in terms of computational inefficiency, environmental impact, and deployment constraints. To address these challenges, we propose a holistic framework for downscaling LLMs that seeks to maintain performance while drastically reducing resource demands. This paper outlines practical strategies for transitioning away from traditional scaling paradigms, advocating for a more sustainable, efficient, and accessible approach to LLM development.
Lay Summary: Artificial intelligence (AI) tools like ChatGPT, Gemini, and Perplexity AI work by training large language models (LLMs) on massive amounts of data using powerful computers. While making these models bigger has led to major improvements, it also comes with high costs, huge energy consumption, large carbon footprints, and limited access for smaller companies and researchers. In this paper, we argue that instead of always building bigger models, we should focus on making smaller, smarter, and more efficient ones. We introduce “downscaling laws,” which are principles to guide the design of compact AI models that still perform well. Our approach includes selecting better-quality training data, trimming unnecessary parts of the models, and combining smaller models to achieve strong performance. This shift can make AI more affordable, reduce its environmental impact, and help ensure that more people can benefit from it.
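As a rough illustration of what "trimming unnecessary parts of the models" can look like in practice, the minimal sketch below applies generic magnitude pruning to a toy feed-forward block using PyTorch's torch.nn.utils.prune utilities. This is a hedged example under assumed settings (the layer sizes and the 30% pruning ratio are hypothetical), not the downscaling method proposed in the paper; the authors' own code is in the linked repository.

# Illustrative sketch only: generic magnitude pruning, not the paper's method.
# Layer sizes and the 30% pruning ratio below are hypothetical choices.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy stand-in for one transformer feed-forward block.
model = nn.Sequential(
    nn.Linear(512, 2048),
    nn.GELU(),
    nn.Linear(2048, 512),
)

# Zero out the 30% of weights with the smallest L1 magnitude in each linear layer,
# then make the pruning permanent by removing the reparameterization.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")

# Report the resulting global sparsity across the linear layers.
linears = [m for m in model.modules() if isinstance(m, nn.Linear)]
zeros = sum((m.weight == 0).sum().item() for m in linears)
total = sum(m.weight.numel() for m in linears)
print(f"Global sparsity: {zeros / total:.1%}")

Note that unstructured pruning like this only zeroes individual weights; structured variants that remove whole neurons, layers, or attention heads are what typically translate into real memory and latency savings on standard hardware.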
Link To Code: https://github.com/LCS2-IIITD/Downscaling
Primary Area: Research Priorities, Methodology, and Evaluation
Keywords: Large Language Models, Downscaling LLMs, Efficient LLMs
Submission Number: 278