Track: Language Modeling
Keywords: large language models, genetic particle filtering, heuristic search, general problem solving, decision making, exploration vs. exploitation, efficiency, cost-quality trade-off
TL;DR: We introduce Fleet of Agents (FoA), where LLM agents solve complex problems via dynamic tree searches guided by genetic particle filtering. Tested on 3 benchmarks with 4 LLMs, FoA improves quality by ~5% while incurring only ~40% of the cost of SOTA baselines.
Abstract: While numerous frameworks have been developed to enhance the reasoning abilities of large language models (LLMs), few methods effectively balance the trade-off between cost and quality. In this paper, we introduce **Fleet of Agents (FoA)**, a novel, intuitive, yet principled framework that uses LLMs as agents to navigate dynamic tree searches via a genetic-type particle filtering approach. FoA spawns a multitude of agents, each exploring the search space autonomously; a subsequent selection phase resamples agents based on a heuristic value function, optimizing the balance between exploration and exploitation. This mechanism enables dynamic branching, adapting the exploration strategy to the solutions discovered so far. We conduct extensive experiments on three benchmark tasks, ``Game of 24``, ``Mini-Crosswords``, and ``WebShop``, using four different LLMs, ``GPT-3.5``, ``GPT-4``, ``LLaMA3.2-11B``, and ``LLaMA3.2-90B``. On average across all tasks and LLMs, FoA obtains a **quality improvement of ~5%** while **requiring only ~40% of the cost** of previous SOTA methods. Notably, our analyses reveal that (1) FoA achieves the best cost-quality trade-off among all benchmarked methods and (2) FoA + LLaMA3.2-11B surpasses the LLaMA3.2-90B model. FoA is publicly available at [this https URL](https://anonymous.4open.science/r/FoA-4D83).
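To make the mutation/selection mechanism described in the abstract concrete, here is a minimal Python sketch of a genetic-type particle filtering loop over LLM agents. All names (`fleet_of_agents`, `step_fn`, `value_fn`) and defaults are hypothetical placeholders, not the authors' API; see the linked repository for the actual implementation.

```python
# Minimal sketch of an FoA-style genetic particle filtering loop.
# `step_fn` and `value_fn` are assumed helpers: an LLM agent step and a
# heuristic value function, respectively (both hypothetical names).
import random

def fleet_of_agents(init_state, step_fn, value_fn, n_agents=8, n_steps=10):
    """Run a fleet of agents with resampling-based selection.

    step_fn(state)  -> next state proposed by an LLM agent
    value_fn(state) -> non-negative heuristic value, used as resampling weight
    """
    states = [init_state] * n_agents
    for _ in range(n_steps):
        # Mutation phase: each agent explores the search space autonomously.
        states = [step_fn(s) for s in states]
        # Selection phase: resample agents in proportion to heuristic value,
        # trading off exploration (keeping diverse states alive) against
        # exploitation (duplicating high-value states).
        weights = [value_fn(s) for s in states]
        if sum(weights) > 0:
            states = random.choices(states, weights=weights, k=n_agents)
    return max(states, key=value_fn)
```

Because resampling is done with replacement, promising states are duplicated while low-value ones die out; the duplicated agents then diverge again in the next mutation phase, which yields the dynamic branching behavior the abstract refers to.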
Serve As Reviewer: ~Nearchos_Potamitis1, ~Lars_Henning_Klein1
Submission Number: 79