Keywords: Language models, System 2 reasoning, language of thoughts
TL;DR: We demonstrate the gap between LLMs' language modeling and human thought in System 2 reasoning, and propose Language-of-Thoughts (LoT) prompting to alleviate it.
Abstract: System 2 reasoning is one of the defining characteristics of intelligence, requiring slow and logical thinking. Humans conduct System 2 reasoning via the language of thoughts, which organizes the reasoning process as *a causal sequence of mental language*, or thoughts. Recently, it has been observed that System 2 reasoning can be elicited from Large Language Models (LLMs) pre-trained on large-scale natural language. However, in this work, we show that there is a significant gap between the modeling of language and of thoughts. Because language is primarily a tool for humans to share knowledge and thinking, *modeling human language can easily absorb language biases* that are unrelated to thoughts. Furthermore, we show that these biases may mislead the elicitation of “thoughts” in LLMs to focus on only part of the premises. To alleviate this issue, we propose a new prompting technique termed **L**anguage-**o**f-**T**houghts (LoT). Instead of directly eliciting a chain of thoughts from partial information, LoT instructs LLMs to attend to and expand upon all the relevant information. We show that this simple strategy significantly reduces language modeling biases in LLMs and improves their performance across a variety of reasoning tasks.
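To make the contrast concrete, below is a minimal sketch of the prompting difference described in the abstract. The instruction strings are illustrative assumptions, since the abstract does not give the authors' exact LoT prompt wording.

```python
# Minimal sketch of the prompting contrast described in the abstract.
# NOTE: the instruction strings below are illustrative assumptions; the
# abstract does not specify the authors' exact prompt wording.

COT_INSTRUCTION = "Let's think step by step."  # standard chain-of-thought elicitation

# Hypothetical LoT-style instruction: surface and expand *all* relevant
# information before reasoning, rather than reasoning from partial premises.
LOT_INSTRUCTION = (
    "Before answering, list every piece of information in the premises that "
    "is relevant to the question, expand on what each piece implies, and "
    "only then reason step by step to the final answer."
)

def build_prompt(premises: str, question: str, instruction: str) -> str:
    """Compose a single prompt from premises, a question, and an instruction."""
    return f"{premises}\n\nQuestion: {question}\n\n{instruction}"

if __name__ == "__main__":
    premises = "All birds can fly. Penguins are birds. Penguins cannot fly."
    question = "Is the first premise consistent with the others?"
    print(build_prompt(premises, question, LOT_INSTRUCTION))
```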
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7877