Climbing the Ladder of Reasoning: What LLMs Can—and Still Can’t—Solve after SFT?

Published: 17 Oct 2025, Last Modified: 21 Nov 2025 · MATH-AI 2025 Poster · CC BY 4.0
Keywords: Mathematical reasoning, supervised fine-tuning, LLM reasoning
Abstract: Recent supervised fine-tuning (SFT) approaches have significantly improved language models' performance on mathematical reasoning tasks, even when models are trained at a small scale. However, the specific capabilities enhanced through such fine-tuning remain poorly understood. In this paper, we conduct a detailed analysis of model performance on the AIME24 dataset to understand how reasoning capabilities evolve. We discover a ladder-like structure in problem difficulty, categorize questions into four tiers (Easy, Medium, Hard, and Extremely Hard (Exh)), and identify the specific requirements for advancing between tiers. We find that progression from the Easy to the Medium tier requires adopting an R1 reasoning style with minimal SFT (500-1K instances), while Hard-level questions suffer from frequent model errors at each step of the reasoning chain, with accuracy plateauing at ~65\% despite logarithmic scaling. Exh-level questions present a fundamentally different challenge, requiring unconventional problem-solving skills. Additional findings reveal that carefully curated small-scale datasets offer limited advantage, as scaling dataset size proves far more effective. Our analysis provides a clearer roadmap for advancing LLMs' capabilities in mathematical reasoning.
Submission Number: 171