Designing Algorithms Empowered by Language Models: An Analytical Framework, Case Studies, and Insights

TMLR Paper 5192 Authors

24 Jun 2025 (modified: 07 Sept 2025) · Decision pending for TMLR · CC BY 4.0
Abstract: This work presents an analytical framework for the design and analysis of LLM-based algorithms, i.e., algorithms that contain one or multiple calls to large language models (LLMs) as sub-routines and critically rely on the capabilities of LLMs. While such algorithms, ranging from basic LLM calls with prompt engineering to complex LLM-powered agentic workflows and compound AI systems, have achieved remarkable empirical success, their design and optimization often require extensive trial and error and case-by-case analysis. Our proposed framework aims to mitigate these difficulties, offering a formal and systematic approach for analyzing how the accuracy and efficiency of an LLM-based algorithm are affected by critical design choices, such as the pattern and granularity of task decomposition or the prompt for each LLM call. Through a wide range of case studies covering diverse algorithmic patterns (including parallel, hierarchical, and recursive task decomposition, as well as generic directed acyclic graphs), we demonstrate the proposed framework in action and derive insights that generalize across scenarios, accompanied by systematic empirical validation in synthetic settings.
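To make the notion of an "LLM-based algorithm" concrete, the following is a minimal sketch (not taken from the paper) of one such pattern mentioned in the abstract: parallel task decomposition followed by aggregation. The function `call_llm`, the helper `solve_with_parallel_decomposition`, and the `chunk_size` parameter are hypothetical names introduced only for illustration; any real implementation would substitute an actual LLM client.

```python
# Illustrative sketch (assumptions, not the paper's method): an LLM-based
# algorithm that splits a long task into chunks, answers each chunk with a
# separate LLM call (parallel decomposition), then aggregates the results.
from concurrent.futures import ThreadPoolExecutor


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; replace with a real client."""
    return f"<answer for: {prompt[:30]}...>"


def solve_with_parallel_decomposition(task: str, chunk_size: int = 200) -> str:
    # The granularity of decomposition (chunk_size here) is one example of a
    # design choice whose effect on accuracy and cost the framework analyzes.
    chunks = [task[i:i + chunk_size] for i in range(0, len(task), chunk_size)]
    prompts = [f"Solve this part of the task:\n{c}" for c in chunks]
    with ThreadPoolExecutor() as pool:
        partial_answers = list(pool.map(call_llm, prompts))
    # A final LLM call aggregates the partial answers into one result.
    return call_llm("Combine these partial answers:\n" + "\n".join(partial_answers))
```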
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Binhang_Yuan1
Submission Number: 5192