ThinkSLM: Towards Reasoning in Small Language Models

ACL ARR 2025 May Submission4479 Authors

20 May 2025 (modified: 29 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Reasoning has long been viewed as an emergent property of large language models (LLMs). However, recent studies challenge this assumption, showing that small language models (SLMs) can also achieve competitive reasoning performance. This paper introduces $\textbf{ThinkSLM}$, the first extensive benchmark to systematically evaluate and study the reasoning abilities of SLMs trained from scratch or derived from LLMs through quantization, pruning, and distillation. We first establish a reliable evaluation criterion by comparing available methods and LLM judges against our human evaluations. We then present a study evaluating $\textbf{72}$ diverse SLMs from $\textbf{six}$ major model families across $\textbf{17 reasoning benchmarks}$. We repeat all experiments $\textbf{three}$ times to ensure a robust assessment. Our findings show that: $\textbf{\textit{1)}}$ reasoning ability in SLMs is strongly influenced by training methods and data quality rather than by model scale alone; $\textbf{\textit{2)}}$ quantization preserves reasoning capability, while pruning significantly disrupts it; $\textbf{\textit{3)}}$ larger models consistently exhibit greater robustness to adversarial perturbations and stronger intermediate reasoning, but certain smaller models closely match or exceed their performance. Our findings challenge the assumption that scaling is the only way to achieve strong reasoning. Instead, we foresee a future in which SLMs with strong reasoning capabilities can be developed through structured training or post-training compression.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking, evaluation
Contribution Types: Model analysis & interpretability, Data analysis, Surveys
Languages Studied: English
Submission Number: 4479
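
The abstract above refers to SLMs derived from LLMs through post-training quantization and to repeating each evaluation three times. As a rough illustration only, not the authors' actual pipeline, the sketch below loads a 4-bit quantized small model with Hugging Face Transformers and bitsandbytes and runs a GSM8K-style question three times; the model ID, prompt, and decoding settings are assumptions made for this example.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): evaluate a 4-bit
# quantized small model on one GSM8K-style question, repeated three times.
# Requires a CUDA GPU for bitsandbytes 4-bit loading.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "Qwen/Qwen2.5-1.5B-Instruct"  # illustrative SLM choice, not from the paper

quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, quantization_config=quant_config, device_map="auto"
)

question = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did she sell altogether?"
)
prompt = f"Solve step by step, then give the final answer.\n\nQuestion: {question}\nAnswer:"

# Repeat the run three times; with greedy decoding the repeats mainly surface
# runtime nondeterminism rather than sampling noise.
for run in range(3):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    completion = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    print(f"--- run {run + 1} ---\n{completion}\n")
```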