When Splitting Makes Stronger: A Theoretical and Empirical Analysis of Divide-and-Conquer Prompting in LLMs

Published: 08 Jul 2025 · Last Modified: 26 Aug 2025 · COLM 2025 · CC BY 4.0
Keywords: Program-guided Prompt, Divide-and-Conquer, Foundation Model, Large Language Models, Deceptive content
TL;DR: This paper examines when divide-and-conquer (DaC) prompting is beneficial for LLMs. Through theoretical and empirical analysis, we identify specific task types where breaking inputs into sub-parts improves performance.
Abstract: Foundation models, particularly Large Language Models (LLMs), have attracted broad interest due to their wide range of applications. Yet these models show notable weaknesses on tasks involving repetitive sub-problems or deliberately misleading content, such as large-number arithmetic and fake-news detection, where conventional instructional prompting frequently produces flawed outputs. While prior work has established that advanced techniques such as Chain-of-Thought and Least-to-Most prompting can substantially enhance LLM performance, recent evidence indicates that a simpler divide-and-conquer (DaC) approach, which partitions the input sequence into discrete components, can yield marked improvements for particular problem classes such as misinformation detection. We rigorously examine the efficacy of DaC prompting and delineate the task characteristics that benefit most from it. Our theoretical analysis establishes formal guarantees of performance improvement for the identified task categories. We validate this framework with empirical studies on large-integer multiplication and factual verification, where experimental results confirm our analytical predictions and demonstrate DaC's practical advantage in these challenging domains.
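The abstract describes DaC prompting as partitioning the input into discrete components, solving each one, and aggregating the results. The following is a minimal sketch of that split/solve/merge pattern, not the paper's actual implementation: the `solve` callable stands in for a per-chunk LLM query, and the sentence-level split, the toy truthfulness check, and the `all`-based merge are illustrative assumptions.

```python
from typing import Callable, List

def dac_prompt(text: str,
               split: Callable[[str], List[str]],
               solve: Callable[[str], bool],
               merge: Callable[[List[bool]], bool]) -> bool:
    """Divide-and-conquer prompting skeleton:
    split the input, query the model once per piece, merge the verdicts."""
    pieces = split(text)
    verdicts = [solve(piece) for piece in pieces]
    return merge(verdicts)

# Toy stand-in for an LLM judging each sub-claim (hypothetical):
# any sentence containing "moon is made of cheese" is flagged as false.
def fake_check(sentence: str) -> bool:
    return "moon is made of cheese" not in sentence

doc = "Water boils at 100 C. The moon is made of cheese. Paris is in France."
result = dac_prompt(
    doc,
    split=lambda t: [s.strip() for s in t.split(".") if s.strip()],
    solve=fake_check,
    merge=all,  # the document passes only if every sub-claim passes
)
print(result)  # False: one sub-claim fails the per-piece check
```

The intuition matching the paper's setting: judging a long document in one shot lets a single deceptive span hide among truthful content, whereas per-piece queries force the model to commit to a verdict on each fragment before aggregation.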
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 1121