Keywords: Structured Reasoning, Chain-of-Thought, Reliable LLM Reasoning, Condition-Aware Inference
TL;DR: We propose HoT, a structured reasoning framework that improves LLM accuracy and robustness.
Abstract: Large Language Models (LLMs) excel at language comprehension and generation but frequently struggle in scenarios demanding rigorous logical reasoning or strict adherence to problem conditions. In such settings, errors propagate through intermediate steps, hallucinated outputs violate key problem conditions, and complex problems are often handled in a simplistic, chain-like manner. We propose Holon-of-Thought (HoT), a structured reasoning framework. HoT explicitly extracts problem conditions and enforces adherence to them. It dynamically decomposes complex problems into verifiable subtasks and solves them through a four-stage pipeline: condition extraction, path exploration, adaptive decomposition, and aggregation. Experimental results show that HoT improves reasoning accuracy and enhances robustness, establishing a new paradigm for reliable LLM-based reasoning in mathematics and logic.
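The four-stage pipeline named in the abstract could be sketched as follows. This is a minimal illustration only; all function names and the toy semicolon-separated condition format are hypothetical, since the abstract does not specify the framework's actual interfaces.

```python
# Hypothetical sketch of a HoT-style four-stage pipeline.
# Names and data formats are illustrative assumptions, not the paper's API.

def extract_conditions(problem: str) -> list[str]:
    # Stage 1 (condition extraction): pull explicit constraints from the
    # problem text; here, conditions are toy semicolon-separated clauses.
    return [c.strip() for c in problem.split(";") if c.strip()]

def explore_paths(conditions: list[str]) -> list[list[str]]:
    # Stage 2 (path exploration): propose candidate reasoning paths;
    # trivially one single-step path per condition in this sketch.
    return [[c] for c in conditions]

def decompose(path: list[str]) -> list[str]:
    # Stage 3 (adaptive decomposition): split a path into verifiable subtasks.
    return [f"verify: {step}" for step in path]

def aggregate(subtask_results: list[str]) -> str:
    # Stage 4 (aggregation): combine verified subtask results into an answer.
    return " & ".join(subtask_results)

def hot_pipeline(problem: str) -> str:
    conditions = extract_conditions(problem)
    paths = explore_paths(conditions)
    subtasks = [s for p in paths for s in decompose(p)]
    return aggregate(subtasks)

print(hot_pipeline("x > 0; x + y = 3"))
# → verify: x > 0 & verify: x + y = 3
```

In a real implementation each stage would call an LLM, with condition adherence checked against the extracted conditions at every subtask; the sketch only fixes the control flow connecting the four stages.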
Supplementary Material: zip
Primary Area: generative models
Submission Number: 13494