Meta Thinker: Thinking What AI Thinks

Published: 17 Oct 2025 · Last Modified: 21 Nov 2025 · MATH-AI 2025 Poster · CC BY 4.0
Keywords: Large Language Models, LLM Reasoning, Meta Learning, Instructions, Thinking Styles
TL;DR: This paper proposes a meta-thinking framework that enables large language models to adaptively select or synthesize reasoning styles, improving accuracy and efficiency across diverse reasoning tasks.
Abstract: Large Language Models (LLMs) have shown a remarkable ability to tackle complex reasoning problems across diverse domains. Yet their performance often hinges on adopting a reasoning strategy suited to the problem's structure. Existing paradigms such as Chain-of-Thought (CoT) and Tree-of-Thought (ToT) have substantially improved reasoning quality, but a fundamental question remains: how can we automatically determine the optimal reasoning style for a given task? This work addresses the challenge of adaptive reasoning strategy selection in LLMs. We investigate whether LLMs can autonomously identify and apply the most suitable thinking style across different problem types. Through comprehensive experiments spanning five representative reasoning paradigms, we analyze their relative performance on mathematical, logical, and commonsense reasoning benchmarks. Building on these insights, we propose a meta-thinking prompt algorithm that enables LLMs to dynamically select—or even synthesize—reasoning styles based on contextual inputs. Our approach improves both accuracy and token efficiency, demonstrating the potential for self-adaptive reasoning in LLMs. This study represents a step toward more flexible, context-aware reasoning systems. The implementation is publicly available at https://github.com/JamesJunyuGuo/meta_thinker_LLM.
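The abstract's core idea — prompting the model to first choose or blend a reasoning style before solving — can be sketched as a simple prompt constructor. This is a minimal illustrative sketch, not the paper's actual prompts: the style names, descriptions, and wording below are assumptions; the real algorithm and its five paradigms are in the linked repository.

```python
# Hypothetical sketch of a meta-thinking prompt: present the LLM with a menu
# of reasoning styles and ask it to select (or synthesize) one before solving.
# All style names and prompt wording here are illustrative assumptions.

REASONING_STYLES = {
    "chain_of_thought": "Reason step by step, writing out each intermediate step.",
    "tree_of_thought": "Explore several candidate reasoning branches, then keep the best.",
    "direct": "Answer concisely without intermediate reasoning.",
}

def build_meta_prompt(question: str, styles: dict = REASONING_STYLES) -> str:
    """Build one prompt that asks the model to pick or blend a reasoning
    style based on the problem's structure, then answer in that style."""
    style_menu = "\n".join(f"- {name}: {desc}" for name, desc in styles.items())
    return (
        "You may use any of the following reasoning styles, or combine them:\n"
        f"{style_menu}\n\n"
        "First state which style (or blend) best fits this problem and why, "
        "then solve the problem in that style.\n\n"
        f"Problem: {question}"
    )
```

The resulting string would be sent as a single prompt, letting the model's own style judgment drive the downstream reasoning trace — the "meta" step the abstract describes.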
Submission Number: 5