Beyond Fast and Slow: Cognitive-Inspired Elastic Reasoning for Large Language Models

ICLR 2026 Conference Submission 705 Authors

02 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Large language models; reinforcement learning; Markov decision process
TL;DR: We propose the Cognitive-Inspired Elastic Reasoning framework for efficient scaling of language model reasoning, which dynamically selects the most appropriate processing mode for each query.
Abstract: Large language models (LLMs) have demonstrated impressive performance across a wide range of language tasks. However, existing LLM reasoning strategies rely mainly on a single fast or slow mode (e.g., o1-style thinking) and therefore struggle to balance reasoning efficiency and accuracy across queries of varying difficulty. In this paper, we propose Cognitive-Inspired Elastic Reasoning (CogER), a framework inspired by human hierarchical reasoning that dynamically selects the most suitable reasoning strategy for each query. Specifically, CogER first assesses the complexity of each incoming query and assigns it to one of several predefined levels, each corresponding to a tailored processing strategy, thereby addressing the challenge of unobservable query difficulty. To automate strategy selection, we model the process as a Markov Decision Process and train a CogER-Agent with reinforcement learning. The agent is guided by a reward function that balances solution quality against computational cost, ensuring resource-efficient reasoning. Moreover, for queries requiring external tools, we introduce Cognitive Tool-Assisted Reasoning, which enables the LLM to autonomously invoke external tools within its chain of thought. Extensive experiments demonstrate that CogER outperforms state-of-the-art test-time scaling methods, achieving at least a 13% relative improvement in average exact match on in-domain tasks and an 8% relative gain on out-of-domain tasks.
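To make the abstract's reward trade-off concrete, the following is a minimal illustrative sketch (not the paper's actual formulation): it assumes a reward of the form "solution quality minus a weighted compute penalty", where the weight lam, the token-based cost measure, and the function name coger_reward are hypothetical choices introduced here for illustration only.

# Illustrative sketch only: the abstract says the CogER-Agent's reward balances
# solution quality and computational cost; the exact form is not given here.
# `lam` (trade-off weight) and token count as the cost proxy are assumptions.
def coger_reward(is_correct: bool, tokens_used: int, lam: float = 1e-4) -> float:
    quality = 1.0 if is_correct else 0.0   # e.g., exact-match correctness of the answer
    cost = lam * tokens_used               # penalty for compute spent on this query
    return quality - cost

# Example: a correct answer that consumed 2,000 tokens scores 1.0 - 0.2 = 0.8,
# so cheaper strategies are preferred whenever they still solve the query.
print(coger_reward(True, 2000))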
Primary Area: foundation or frontier models, including LLMs
Submission Number: 705