Workshop on Scaling Post-training for LLMs (SPOT)

Published: 24 Dec 2025, Last Modified: 24 Dec 2025 · ICLR 2026 Workshop Proposals · CC BY 4.0
Keywords: science of scaling, post-training, SFT, RL
TL;DR: A workshop to establish rigorous and scalable methodologies, design choices, and approaches for post-training.
Abstract: Post-training, encompassing techniques such as Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL), is no longer a mere final step for task-specific adaptation. It is evolving into a compute-intensive phase in its own right, crucial for unlocking the full potential of foundation models and optimizing for critical downstream behaviors. Yet the science of post-training at scale remains in its infancy. This workshop is motivated by the urgent need to **establish rigorous and scalable methodologies, design choices, and approaches for post-training**. While today's design choices in pre-training are made with a core focus on their ability to scale, a similar **scaling laws** mindset for post-training is largely absent. Our goal is to catalyze a systematic understanding of how post-training scales—across algorithms, data regimes, infrastructure, and objectives—and to identify the open questions that must be addressed to turn post-training into a science of its own. This workshop aims to bring together academic and industrial researchers and practitioners, to share practical experiences, and to outline a clear research direction toward building a principled science of post-training at scale.
Submission Number: 99