Efficient Reinforcement Learning via Large Language Model-based Search

Published: 22 Oct 2024, Last Modified: 30 Oct 2024 · NeurIPS 2024 Workshop Open-World Agents Poster · CC BY 4.0
Keywords: Reinforcement Learning, Sparse Rewards, Large Language Models
TL;DR: This work proposes a framework to leverage Large Language Models for constructing a reward shaping function that can boost the sample efficiency of Reinforcement Learning agents.
Abstract: Reinforcement Learning (RL) suffers from sample inefficiency in sparse reward domains, and the problem is further pronounced when transitions are stochastic. To improve sample efficiency, reward shaping is a well-studied approach that introduces intrinsic rewards to help the RL agent converge to an optimal policy faster. However, designing a useful reward shaping function specific to each problem is challenging, even for domain experts: they would either have to rely on task-specific domain knowledge or provide an expert demonstration independently for each task. Given that Large Language Models (LLMs) have rapidly gained prominence across a multitude of natural language tasks, we aim to answer the following question: $\textit{Can we leverage LLMs to construct a reward shaping function that can boost the sample efficiency of an RL agent?}$ In this work, we leverage off-the-shelf LLMs to generate a guide policy by solving a simpler deterministic abstraction of the original problem, which is then used to construct the reward shaping function for the downstream RL agent. Given the ineffectiveness of directly prompting LLMs, we propose $\textbf{MEDIC}$: a framework that augments LLMs with a $\textbf{M}$odel-based fe$\textbf{ED}$back crit$\textbf{IC}$, which verifies LLM-generated outputs, to generate a possibly sub-optimal but $\textit{valid}$ plan for the abstract problem. Our experiments across domains from the BabyAI environment suite show 1) the effectiveness of augmenting LLMs with MEDIC and 2) a significant improvement in the sample complexity of PPO- and A2C-based RL agents when guided by our LLM-generated plan; together, these results 3) pave the way for further exploration of how these models can be used to augment existing RL pipelines.
Submission Number: 30
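
To make the described pipeline concrete, below is a minimal, hypothetical Python sketch of a MEDIC-style loop: an LLM proposes an action sequence for a deterministic abstraction of the task, a model-based critic verifies each step against the abstract transition model and feeds errors back to the LLM, and the resulting verified plan defines a potential-based shaping term for the downstream RL agent (potential-based shaping is known to preserve the optimal policy). The names here (`AbstractModel`, `medic_plan`, `shaped_reward`, the `llm` callable) are illustrative placeholders under assumed interfaces, not the paper's actual API.

```python
from typing import Callable, List, Optional


class AbstractModel:
    """Deterministic abstraction of the original task (illustrative stub)."""

    def __init__(self, start_state, goal_state, transitions):
        self.start_state = start_state
        self.goal_state = goal_state
        # transitions: dict mapping (state, action) -> next state
        self.transitions = transitions

    def step(self, state, action):
        # Returns None when the action is invalid in the given state.
        return self.transitions.get((state, action))


def medic_plan(llm: Callable[[str], List[str]],
               model: AbstractModel,
               max_rounds: int = 5) -> Optional[List[str]]:
    """Query the LLM for a plan and verify it with a model-based critic,
    re-prompting with feedback until a valid (possibly sub-optimal) plan
    reaching the goal is found or the round budget is exhausted."""
    feedback = ""
    for _ in range(max_rounds):
        plan = llm(f"Propose an action sequence to reach the goal. {feedback}")
        state, valid = model.start_state, True
        for i, action in enumerate(plan):
            nxt = model.step(state, action)
            if nxt is None:  # critic flags the first invalid action
                feedback = (f"Action '{action}' at step {i} is invalid "
                            f"from state {state}. Revise the plan.")
                valid = False
                break
            state = nxt
        if valid and state == model.goal_state:
            return plan  # verified valid plan for the abstract problem
        if valid:
            feedback = f"Plan ends at {state}, not the goal {model.goal_state}."
    return None


def shaped_reward(env_reward: float, prev_progress: int, progress: int,
                  plan_len: int, gamma: float = 0.99) -> float:
    """Potential-based shaping F = gamma * phi(s') - phi(s), where the
    potential phi is the fraction of the verified guide plan completed."""
    def phi(p: int) -> float:
        return p / plan_len

    return env_reward + gamma * phi(progress) - phi(prev_progress)
```

In this sketch the critic is just a lookup in the abstract transition model; one could equally plug in any simulator or validator of the abstract problem, and the shaped reward would be added to the environment reward inside a standard PPO or A2C training loop.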