How to Train Your LLM Web Agent: A Statistical Diagnosis

Published: 08 Jun 2025, Last Modified: 27 Jun 2025
Venue: WCUA 2025 Oral
License: CC BY 4.0
Submission Track: Paper Track (up to 8 pages)
Keywords: LLM agents, computer agents, MDP, reinforcement learning, reproducibility, compute allocation, generalization, statistical analysis, hyperparameters, curriculum learning.
TL;DR: We provide statistically rigorous guidelines for training interactive, multi-step LLM web agents, covering optimal compute allocation, generalization, and hyperparameter settings.
Abstract: Large language model (LLM) agents for web interfaces have advanced rapidly, yet open-source systems still lag behind proprietary agents. Bridging this gap is key to enabling customizable, efficient, and privacy-preserving agents. Two challenges hinder progress: reproducibility issues in RL and LLM agent training, where results often hinge on sensitive factors such as seeds and decoding parameters, and the focus of prior work on single-step tasks, which overlooks the complexities of web-based, multi-step decision-making. We address these gaps with a statistically grounded study of training LLM agents for web tasks. Our two-stage pipeline combines imitation learning from a Llama 3.3 70B teacher with on-policy fine-tuning via Group Relative Policy Optimization (GRPO) on a Llama 3.1 8B student. Through 240 configuration sweeps and rigorous bootstrapping, we chart the first compute-allocation curve for open-source LLM web agents. Our findings show that dedicating one-third of compute to teacher traces and the rest to RL improves MiniWoB++ success by 6 points and closes 60% of the gap to GPT-4o on WorkArena, while cutting GPU costs by 45%. We introduce a principled hyperparameter sensitivity analysis, offering actionable guidelines for robust and cost-effective agent training.
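The abstract leans on two quantitative ingredients: group-relative advantage normalization (the core of GRPO) and bootstrapped confidence intervals over configuration sweeps. Below is a minimal Python/NumPy sketch of both, for orientation only; the reward values, group size, and episode outcomes are hypothetical placeholders, not the paper's data or code.

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages as used in GRPO: each sampled trajectory's
    reward is normalized against the mean and std of its own group."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

def bootstrap_success_ci(outcomes, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a task success rate,
    given 0/1 episode outcomes for one agent configuration."""
    rng = np.random.default_rng(seed)
    x = np.asarray(outcomes, dtype=float)
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))  # resample episodes
    boot_means = x[idx].mean(axis=1)                      # one mean per resample
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return x.mean(), (lo, hi)

# Hypothetical numbers for illustration only.
print(grpo_advantages([0.0, 1.0, 1.0, 0.0]))  # one GRPO group of 4 rollouts
episodes = np.random.default_rng(1).binomial(1, 0.62, size=100)
mean, (lo, hi) = bootstrap_success_ci(episodes)
print(f"success = {mean:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

Reporting bootstrapped intervals rather than single-seed point estimates is what lets sweep results like those above be compared reliably across configurations.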
Submission Number: 24