Abstract: Large Language Models (LLMs) demonstrate impressive abilities on reasoning tasks. Humans naturally adjust their problem-solving approach to match task complexity, yet most methodologies that leverage LLMs adopt a uniform approach: the same model, prompting method, and degree of problem decomposition regardless of problem complexity. This inflexibility can incur unnecessary computational overhead or yield sub-optimal performance. To address this issue, we introduce an Adaptive-Solver (AS) framework that strategically adapts the solving approach to each problem. Given an initial solution, the framework operates with two primary modules. The evaluation module first assesses the adequacy of the current solution; if improvement is needed, the adaptation module comes into play, employing several types of adaptation strategies collaboratively. Through such dynamic, multi-faceted adaptation, the framework can reduce computational consumption or elevate performance. Experimental results on complex reasoning benchmarks show that methods instantiated from the AS framework can significantly reduce API costs (by up to 62%) while maintaining superior performance, or can enhance performance across all tasks.
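The evaluate-then-adapt loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the solver configurations, the `evaluate` adequacy check, and the stubbed `solve` call are all hypothetical stand-ins for the framework's evaluation and adaptation modules.

```python
def evaluate(solution):
    """Hypothetical adequacy check (the paper's evaluation module),
    e.g. consistency of sampled answers. Here it reads a stub flag."""
    return solution is not None and solution.get("confident", False)

def solve(problem, solver):
    """Hypothetical LLM call under a (model, prompting, decomposition)
    configuration. The capacity arithmetic below merely simulates that
    stronger, costlier configurations handle harder problems."""
    model, prompting, decomposition = solver
    capacity = {"small": 1, "large": 2}[model] + {"standard": 0, "cot": 1}[prompting]
    return {"answer": 42, "confident": capacity >= problem["hardness"]}

def adaptive_solve(problem, solvers):
    """Try configurations from cheapest to strongest, stopping at the
    first solution the evaluation module deems adequate."""
    solution = None
    for solver in solvers:          # adaptation module: escalate the strategy
        solution = solve(problem, solver)
        if evaluate(solution):      # evaluation module: stop early if adequate
            break
    return solution

# Hypothetical ladder of solver configurations, cheapest first.
solvers = [
    ("small", "standard", "none"),
    ("small", "cot", "two-step"),
    ("large", "cot", "fine-grained"),
]
easy = adaptive_solve({"hardness": 1}, solvers)   # stops at the cheap config
hard = adaptive_solve({"hardness": 3}, solvers)   # escalates to the strongest
```

Because easy problems exit the loop at the first adequate (cheap) configuration, the expensive configurations are only paid for on hard inputs, which is the cost-saving mechanism the abstract describes.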
Paper Type: long
Research Area: NLP Applications
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English