Thinking with Nothinking Calibration: A New In-Context Learning Paradigm in Reasoning Large Language Models
Keywords: In-context Learning, Reasoning, Prompting
Abstract: Reasoning large language models (RLLMs) have recently demonstrated remarkable capabilities through structured, multi-step reasoning. While prior research has primarily focused on improving their training and inference strategies, their potential for in-context learning (ICL) remains largely underexplored. To fill this gap, we propose Thinking with Nothinking Calibration (JointThinking), a new ICL paradigm that prompts the model to generate two answers in parallel: one in Thinking mode and the other in Nothinking mode. A second round of Thinking is triggered only when the two initial responses are inconsistent, using a single prompt that presents both divergent answers. Extensive experiments across multiple reasoning benchmarks demonstrate that JointThinking significantly outperforms pure Thinking, few-shot chain-of-thought (CoT), and even self-consistency. Moreover, it generalizes better than training-based methods. We further conduct a systematic analysis of the calibration mechanism, showing the importance of structural thinking diversity and consistency checking. In addition, we observe that second-round Thinking performance improves with larger model sizes, and we explore effective scaling of multi-sample calibration strategies. Finally, we discuss current limitations and outline promising directions for future ICL research in RLLMs.
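The calibration loop described in the abstract can be sketched as a short control-flow routine. The snippet below is a hypothetical illustration only, not the authors' implementation: `query_model` is an assumed stand-in for an RLLM API call that accepts a reasoning `mode` and optional extra `context`.

```python
def query_model(question, mode, context=None):
    """Placeholder for an RLLM call in the given reasoning mode
    ("thinking" or "nothinking"); a real implementation would invoke
    the model API here. Hypothetical helper, not from the paper."""
    raise NotImplementedError

def joint_thinking(question, ask=query_model):
    # Round 1: obtain one answer per mode (run in parallel in practice).
    a_think = ask(question, mode="thinking")
    a_nothink = ask(question, mode="nothinking")
    if a_think == a_nothink:
        # Consistent answers: accept immediately, no extra compute.
        return a_think
    # Inconsistent answers: trigger a second Thinking round with a single
    # prompt that presents both divergent candidates for calibration.
    context = f"Two candidate answers disagree: {a_think} vs. {a_nothink}"
    return ask(question, mode="thinking", context=context)
```

Under this reading, the second Thinking round is conditional, so the expected cost overhead depends on how often the two modes disagree.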
Paper Type: Long
Research Area: Language Models
Research Area Keywords: prompting, chain-of-thought
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 7064