Rationality of Thought Improves Reasoning in Large Language Models

19 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Large Language Model, Rational Thinking, Rationality of Thought, Cognitive Bias
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We introduce the "Rationality of Thought" (RoT) approach to improve the rational thinking ability of LLMs. With RoT, LLMs enhance their cognitive reasoning and achieve state-of-the-art results on arithmetic and commonsense reasoning tasks.
Abstract: While the capabilities of large language models (LLMs) have progressively advanced, their competence on intricate reasoning tasks remains inadequate, primarily due to insufficient cognitive capabilities. To explore the cognitive proficiency of models such as GPT-4, we turn to methodologies from cognitive psychology: cognitive abilities reflect rational thinking skills, and cognitive bias tasks are often used to assess rational thinking levels. In this paper, we develop a cognitive bias dataset to measure the rational thinking and cognitive levels of LLMs. Our observations indicate that GPT-4, akin to humans, exhibits limitations in its rational thinking ability. We propose a new method, “Rationality of Thought” (RoT), to prompt LLMs into a rational thinking process during task execution. This method significantly improves the accuracy of GPT-4 on the cognitive bias task, by 18.7%. Because cognitive capacity is also essential for tackling complex problems, we apply RoT across various reasoning tasks. Using only a zero-shot setting, RoT outperforms few-shot inference-enhancement techniques such as chain-of-thought (CoT) prompting on multiple arithmetic and commonsense reasoning benchmarks, including GSM8K (+0.4), AQUA-RAT (+4.8), and ARC-c (+0.7). Our empirical evaluation shows that RoT helps LLMs elevate their cognitive capabilities through rational thinking, thereby becoming more adept at navigating complex reasoning tasks.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1926
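
To make the zero-shot setup described in the abstract concrete, below is a minimal sketch of how an RoT-style prompt might be issued against a chat-completion API. This is an illustration only, assuming the `openai` Python client (v1+): the `ROT_INSTRUCTION` wording and the `answer_with_rot` helper are hypothetical stand-ins for exposition, not the paper's actual prompt or implementation.

```python
# Minimal sketch of zero-shot "rationality of thought"-style prompting,
# assuming the OpenAI Python client (v1+). The instruction text below is
# illustrative; the paper's actual RoT prompt is not reproduced here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical zero-shot instruction nudging the model toward a deliberate,
# bias-aware reasoning process before it commits to a final answer.
ROT_INSTRUCTION = (
    "Before answering, reason rationally: restate the question, identify "
    "the relevant facts, check for common cognitive biases (e.g., framing, "
    "anchoring), and only then state your final answer."
)

def answer_with_rot(question: str, model: str = "gpt-4") -> str:
    """Query the model with a zero-shot RoT-style system prompt."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": ROT_INSTRUCTION},
            {"role": "user", "content": question},
        ],
        temperature=0,  # deterministic decoding for reproducible evaluation
    )
    return response.choices[0].message.content

# Example: a GSM8K-style arithmetic word problem.
print(answer_with_rot(
    "A baker sells 24 muffins a day at $2 each. "
    "How much does she earn in a 5-day week?"
))
```

Note that, in keeping with the abstract's zero-shot claim, no worked exemplars are supplied here; a few-shot CoT baseline would instead prepend solved example problems to the query.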