Keywords: Large Language Model, Agent, Test-Time Adaptation
TL;DR: We propose grounded test-time adaptation, a framework for adapting LLM-based agents to novel and complex environments.
Abstract: Large language model (LLM)-based agents struggle to generalize to novel and complex environments, such as unseen websites or new sets of functions, due to a fundamental mismatch between their pre-training and test-time conditions.
This challenge stems from two distinct failure modes: a syntactic misunderstanding of environment-specific components like observation formats, and a semantic misunderstanding of state-transition dynamics; both are revealed only at test time.
To address these issues, we propose two distinct and complementary strategies for adapting LLM agents by leveraging environment-specific information available during deployment.
First, an online distributional adaptation method parameterizes environmental nuances by learning a lightweight adaptation vector that biases the model's output distribution, enabling rapid alignment with an environment's response format (see the first sketch below).
Second, a deployment-time dynamics grounding method employs a persona-driven exploration phase to systematically probe and learn the environment's causal dynamics before task execution, equipping the agent with a non-parametric world model (second sketch below).
We evaluate these strategies across diverse agentic benchmarks, including function calling and web navigation.
Our empirical results show that both strategies are effective across all benchmarks while incurring minimal computational cost.
We find that dynamics grounding is particularly effective in complex environments where unpredictable dynamics pose a major obstacle, demonstrating a robust path toward more generalizable and capable LLM-based agents.
For example, on the WebArena multi-site split, this method increases the agent's success rate from 2% to 23%.
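As a rough illustration of the first strategy, the sketch below biases a frozen model's next-token logits with a single learned vector that is updated online against tokens observed in environment responses. The class name, the plain Adam update, and the logit-level parameterization are our assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

class DistributionalAdapter:
    """Minimal sketch of online distributional adaptation: a single
    learned bias vector over the vocabulary, added to a frozen LM's
    output logits and updated online. Names and hyperparameters are
    illustrative assumptions, not the paper's API."""

    def __init__(self, vocab_size: int, lr: float = 1e-2):
        self.bias = torch.zeros(vocab_size, requires_grad=True)
        self.opt = torch.optim.Adam([self.bias], lr=lr)

    def adapt_logits(self, logits: torch.Tensor) -> torch.Tensor:
        # Shift the frozen model's next-token distribution toward
        # the environment's observed response format.
        return logits + self.bias

    def update(self, logits: torch.Tensor, target_ids: torch.Tensor) -> float:
        # Online step: logits is (batch, vocab) from the frozen model,
        # target_ids is (batch,) of tokens seen in the environment's
        # actual responses. Only the bias vector receives gradients.
        loss = F.cross_entropy(logits.detach() + self.bias, target_ids)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return loss.item()
```

Because only a vocabulary-sized vector is trained, each update is a single cheap gradient step, which is consistent with the abstract's claim of minimal computational cost.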
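The second strategy can likewise be sketched as a transition memory filled during a pre-task exploration phase. Here the `env` interface (`reset`/`step`), the hashable-state assumption, and the `policy` callable standing in for a persona-conditioned LLM are all illustrative placeholders.

```python
class NonParametricWorldModel:
    """Minimal sketch of deployment-time dynamics grounding:
    transitions gathered before task execution are stored verbatim
    and retrieved at planning time. The exploration policy and the
    environment interface are assumptions, not the paper's design."""

    def __init__(self):
        # Observed dynamics: (state, action) -> next_state.
        # States are assumed hashable, e.g. serialized observations.
        self.transitions = {}

    def explore(self, env, policy, num_steps: int = 50) -> None:
        # Systematically probe the environment before the task,
        # recording what each action actually does in each state.
        state = env.reset()
        for _ in range(num_steps):
            action = policy(state)  # e.g. a persona-conditioned LLM
            next_state = env.step(action)
            self.transitions[(state, action)] = next_state
            state = next_state

    def predict(self, state, action):
        # Ground planning in observed causal dynamics; returns None
        # for transitions never seen during exploration.
        return self.transitions.get((state, action))
```

Storing transitions verbatim rather than fitting parameters is what makes the world model non-parametric: the agent consults exactly what it observed, which is most useful in complex environments with unpredictable dynamics.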
Supplementary Material: zip
Primary Area: applications to robotics, autonomy, planning
Submission Number: 14892