Weak-for-Strong: Training Weak Meta-Agent to Harness Strong Executors

Published: 14 Jun 2025, Last Modified: 19 Jul 2025 · ICML 2025 Workshop PRAL · CC BY 4.0
Keywords: large language models, reinforcement learning, workflow generation
TL;DR: We represent agentic workflows as code and train a weak meta-agent, via reinforcement learning, to better leverage strong models based on environment feedback.
Track: Long Paper (up to 9 pages)
Abstract: Efficiently leveraging the capabilities of contemporary large language models (LLMs) is increasingly challenging, particularly when direct fine-tuning is expensive and often impractical. Existing training-free methods, including manually or automatically designed workflows, typically demand substantial human effort or yield suboptimal results. This paper proposes Weak-for-Strong Harnessing (W4S), a novel framework that customizes smaller, cost-efficient language models to design and optimize workflows for harnessing stronger models. W4S formulates workflow design as a multi-turn Markov decision process and introduces reinforcement learning for agentic workflow optimization (RLAO) to train a weak meta-agent. Through iterative interaction with the environment, the meta-agent learns to design increasingly effective workflows without manual intervention. Empirical results demonstrate the superiority of W4S: our 7B meta-agent, trained with just one GPU hour, outperforms the strongest baseline by 2.9% ~ 24.6% across eleven benchmarks, successfully elevating the performance of state-of-the-art models such as GPT-3.5-Turbo and GPT-4o. Notably, W4S exhibits strong generalization across both seen and unseen tasks, offering an efficient, high-performing alternative to directly fine-tuning strong models.
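The multi-turn MDP framing in the abstract can be sketched as a simple interaction loop: at each turn the meta-agent emits a candidate workflow, the environment executes it with a strong model and returns a reward, and that feedback conditions the next turn. The sketch below is purely illustrative under assumed placeholders — `propose_workflow`, `evaluate_workflow`, and the workflow names are hypothetical stand-ins, not the paper's actual RLAO implementation or reward model.

```python
import random

def propose_workflow(history, rng):
    """Stand-in for the weak meta-agent's policy: emits a workflow
    (the paper represents workflows as code; here, a label suffices)."""
    variants = ["direct_answer", "chain_of_thought", "self_refine"]
    return rng.choice(variants)

def evaluate_workflow(workflow, rng):
    """Stand-in for running the workflow with a strong executor and
    scoring it on a validation set (hypothetical base accuracies)."""
    base = {"direct_answer": 0.60, "chain_of_thought": 0.75, "self_refine": 0.80}
    return base[workflow] + rng.uniform(-0.05, 0.05)

def rollout(num_turns=5, seed=0):
    """One multi-turn episode: state = interaction history so far,
    action = next workflow, reward = environment feedback."""
    rng = random.Random(seed)
    history, trajectory = [], []
    for _ in range(num_turns):
        state = list(history)                 # history of (workflow, reward)
        action = propose_workflow(state, rng)
        reward = evaluate_workflow(action, rng)
        trajectory.append((state, action, reward))
        history.append((action, reward))      # feedback conditions next turn
    return trajectory

trajectory = rollout()
# Collected (state, action, reward) tuples would feed a policy update
# (e.g., reward-weighted fine-tuning of the meta-agent).
best = max(trajectory, key=lambda t: t[2])
```

The key design point this loop illustrates is that the expensive strong model only appears inside `evaluate_workflow`; the trainable component is the small meta-agent that maps interaction history to the next workflow.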
Format: The paper is in either ICML or NeurIPS format and the main paper does not exceed the page limit for our selected track.
Presenter: ~Fan_Nie1
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 15