Keywords: Large language models, reasoning and thinking, inverse thinking, cognitive ability
TL;DR: LLMs show a preliminary ability to understand the concept of "inverse thinking" in both theoretical and empirical contexts, but they struggle to consistently apply it in practical contexts to solve problems.
Abstract: Large language models (LLMs) have exhibited significant proficiency in various reasoning tasks, yet their capacity for "inverse thinking" remains underexplored. Inverse thinking, inspired by concepts from cognitive science and popularized by figures such as Charlie Munger, involves approaching problems from an opposite perspective, often simplifying complex issues and offering innovative solutions. This paper evaluates the ability of LLMs to comprehend and apply inverse thinking through a series of experiments designed to test theoretical understanding, contextual comprehension, and practical preference in problem-solving scenarios. Our findings indicate that while LLMs demonstrate a basic grasp of inverse thinking, they struggle to consistently apply it in practical contexts, highlighting a nuanced challenge in capturing this cognitive skill within language models. Finally, we discuss potential directions for future research and how this line of work can contribute to building LLMs with stronger cognitive abilities.
Submission Number: 64