Position Paper: Uncover Scaling Laws for Large Language Models via Inverse Problems

ACL ARR 2025 May Submission2875 Authors

19 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) are large-scale pretrained models that have achieved remarkable success across diverse domains. These successes have been driven by unprecedented complexity and scale in both data and computation. However, because training such models is extremely costly, brute-force trial-and-error approaches to improving LLMs are infeasible. Inspired by the success of inverse problems in uncovering fundamental scientific laws, this position paper advocates that inverse problems can likewise be used to efficiently uncover scaling laws that guide the building of LLMs, achieving desirable performance with significantly better cost-effectiveness.
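To make the abstract's claim concrete, the following is a minimal, hypothetical sketch of the "inverse problem" view of scaling laws: given observed losses from a handful of small pilot runs, infer the parameters of a parametric scaling law and extrapolate to a larger budget. The Chinchilla-style form L(N, D) = E + A/N^alpha + B/D^beta, the pilot measurements, and the optimizer choice are all illustrative assumptions, not details taken from this submission.

```python
# Hypothetical sketch: estimate scaling-law parameters from a few small runs
# (an inverse problem), then extrapolate to a much larger training budget.
import numpy as np
from scipy.optimize import least_squares

def scaling_law(params, N, D):
    """Assumed parametric loss as a function of model size N and data size D."""
    E, A, B, alpha, beta = params
    return E + A * N ** (-alpha) + B * D ** (-beta)

def residuals(params, N, D, observed_loss):
    """Forward model minus observations: the quantity the inverse solver minimizes."""
    return scaling_law(params, N, D) - observed_loss

# Hypothetical measurements from small-scale pilot runs (parameters, tokens, loss).
N_obs = np.array([1e7, 3e7, 1e8, 3e8, 1e9])
D_obs = np.array([2e8, 6e8, 2e9, 6e9, 2e10])
loss_obs = np.array([4.10, 3.62, 3.21, 2.88, 2.63])

# Solve the inverse problem: estimate (E, A, B, alpha, beta) from observations.
init = np.array([1.5, 400.0, 400.0, 0.3, 0.3])
fit = least_squares(residuals, init, args=(N_obs, D_obs, loss_obs),
                    bounds=([0, 0, 0, 0, 0], [10, 1e6, 1e6, 1, 1]))
E, A, B, alpha, beta = fit.x

# Extrapolate: predict the loss of a much larger (hypothetical) training run.
predicted = scaling_law(fit.x, 7e10, 1.4e12)
print(f"fitted exponents: alpha={alpha:.3f}, beta={beta:.3f}")
print(f"predicted loss at 70B params / 1.4T tokens: {predicted:.3f}")
```

The point of the sketch is only the workflow: cheap observations constrain the law's parameters, and the fitted law then guides how compute should be spent, rather than discovering it by trial and error at full scale.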
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: Large Language Models, Scaling Laws, Inverse Problems, Cost-Effectiveness
Contribution Types: Position papers
Languages Studied: English
Submission Number: 2875