Is Repeatedly Solving the Same Problem Necessary? Rethinking Task Reasoning through Action Prototype Learning
Abstract: Large language models (LLMs) demonstrate remarkable proficiency across a wide array of tasks but often struggle with specialized problem-solving and practical reasoning, especially in domains such as mathematics. Existing approaches frequently rely on solving the same problem multiple times (e.g., 20 iterations) to achieve high precision. We argue that such redundancy is unnecessary, as humans rarely solve problems this way. To address this, we propose action prototype learning for task reasoning, in which strategies are systematically organized into discrete action prototypes, each associated with a semantic key and prior knowledge, enabling efficient task alignment and reasoning. Additionally, we design a contradiction-based answer evaluation mechanism that identifies logical inconsistencies with the problem data, enhancing solution precision. We also develop an action-matching inference mechanism that retrieves relevant prior knowledge, significantly reducing token consumption while improving inference performance. By leveraging efficient reasoning strategies, our method requires only a single pass to achieve high-quality results, minimizing redundant computation. Extensive evaluations on two datasets show that our approach reduces token usage by approximately 68.5% compared to self-consistency (SC) methods while maintaining robust reasoning capabilities. This highlights the effectiveness of using prior knowledge to refine LLM reasoning, making it both efficient and practical.
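The abstract does not specify an implementation, but the action-matching step it describes (mapping a new problem to the closest action prototype via its semantic key and injecting that prototype's prior knowledge into a single-pass prompt) can be illustrated with a minimal sketch. Everything below is hypothetical: the class names (ActionPrototype), the embedding callable, and the prompt template are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of action-prototype matching for single-pass reasoning.
# Assumes an external `embed` function that maps text to a fixed-length vector;
# the prototype store, matching rule, and prompt template are illustrative only.
from dataclasses import dataclass
from typing import Callable, List, Sequence
import math


@dataclass
class ActionPrototype:
    semantic_key: str      # short description of the reasoning strategy
    prior_knowledge: str   # reusable guidance associated with that strategy


def cosine(u: Sequence[float], v: Sequence[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def match_prototype(problem: str,
                    prototypes: List[ActionPrototype],
                    embed: Callable[[str], Sequence[float]]) -> ActionPrototype:
    """Return the prototype whose semantic key is most similar to the problem."""
    query = embed(problem)
    return max(prototypes, key=lambda p: cosine(query, embed(p.semantic_key)))


def build_single_pass_prompt(problem: str, proto: ActionPrototype) -> str:
    # Prior knowledge is injected once, so the model reasons in a single pass
    # rather than sampling many candidate solutions as in self-consistency;
    # the final instruction mirrors the contradiction-based check the paper describes.
    return (f"Strategy: {proto.semantic_key}\n"
            f"Prior knowledge: {proto.prior_knowledge}\n"
            f"Problem: {problem}\n"
            "Solve step by step, then check the answer against the problem data "
            "for contradictions before finalizing.")
```

As a usage example, one could populate a small prototype list (e.g., "set up a linear equation", "work backwards from the target value"), call match_prototype on an incoming word problem, and send build_single_pass_prompt's output to the LLM once; how the actual system constructs, stores, or weights prototypes is not specified in the abstract.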
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: Efficient/Low-Resource Methods for NLP, Information Extraction, Information Retrieval and Text Mining
Languages Studied: English
Submission Number: 1804