Keywords: large language models, task decomposition, constraint satisfaction, complexity measures, reasoning reliability, combinatorial reasoning, database querying
Abstract: Large Language Models (LLMs) suffer from reliability issues on complex tasks, as existing decomposition methods are heuristic and rely on agent-driven or manual decomposition. This work introduces a novel, systematic decomposition framework that we call CONstraint-Induced Complexity (ACONIC), which models a task as a constraint problem and leverages formal complexity measures to guide decomposition. On combinatorial reasoning (SATBench) and LLM database querying (Spider) tasks, we find that by decomposing tasks according to the complexity measure, agents perform considerably better.
Paper Type: Short
Research Area: LLM Efficiency
Research Area Keywords: parameter-efficient-training, LLM Efficiency, data augmentation
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches low compute settings-efficiency, Data analysis
Languages Studied: English
Submission Number: 7554