REvolve: Reward Evolution with Large Language Models using Human Feedback

Published: 23 Jun 2025, Last Modified: 23 Jun 2025. Greeks in AI 2025 Oral. License: CC BY 4.0
Keywords: Other, Evolutionary Algorithms, Reward Design, Reinforcement Learning, Large Language Models
Abstract: Published at ICLR 2025. Full reference: Rishi Hazra*, Alkis Sygkounas*, Andreas Persson, Amy Loutfi, Pedro Zuidberg Dos Martires. "REvolve: Reward Evolution with Large Language Models for Autonomous Driving." Proceedings of the International Conference on Learning Representations (ICLR), 2025. Link: https://openreview.net/forum?id=cJPUpL8mOw

Designing effective reward functions is crucial to training reinforcement learning (RL) algorithms. However, this design is non-trivial, even for domain experts, because certain tasks are subjective and hard to quantify explicitly. Recent works have used large language models (LLMs) to generate rewards from natural language task descriptions, leveraging their extensive instruction tuning and commonsense understanding of human behavior. In this work, we hypothesize that LLMs, guided by human feedback, can be used to formulate reward functions that reflect implicit human knowledge. We study this in three challenging settings -- autonomous driving, humanoid locomotion, and dexterous manipulation -- wherein notions of "good" behavior are tacit and hard to quantify. To this end, we introduce REvolve, a truly evolutionary framework that uses LLMs for reward design in RL. REvolve generates and refines reward functions by using human feedback to guide the evolution process, effectively translating implicit human knowledge into explicit reward functions for training (deep) RL agents. Experimentally, we demonstrate that agents trained on REvolve-designed rewards outperform state-of-the-art baselines.
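To make the loop described in the abstract concrete, here is a minimal sketch of an evolutionary reward-design cycle in that spirit. It is not the authors' implementation: all helpers (llm_propose, llm_mutate, train_and_rate) and the population/elite sizes are hypothetical placeholders standing in for real LLM calls, RL training, and human preference ratings.

```python
import random

POP_SIZE = 8       # candidate reward functions per generation (assumed)
N_GENERATIONS = 5  # evolutionary iterations (assumed)
N_ELITE = 4        # fittest candidates kept as parents (assumed)

def llm_propose(task: str) -> str:
    # Placeholder: a real system would prompt an LLM with the task
    # description and return reward-function source code.
    return f"# reward for: {task} (variant {random.randint(0, 9999)})"

def llm_mutate(parent_code: str, fitness: float) -> str:
    # Placeholder: a real system would ask the LLM to refine the parent
    # reward code, conditioned on the human feedback it received.
    return parent_code + f" | refined (parent fitness={fitness:.2f})"

def train_and_rate(reward_code: str) -> float:
    # Placeholder: train an RL agent on the candidate reward, show its
    # behavior to human evaluators, and aggregate ratings into a fitness.
    return random.random()

def evolve(task: str) -> str:
    # Generation 0: the LLM drafts reward functions from the task description.
    population = [llm_propose(task) for _ in range(POP_SIZE)]
    for _ in range(N_GENERATIONS):
        # Fitness: human feedback on each trained agent's behavior.
        scored = sorted(
            ((train_and_rate(code), code) for code in population),
            reverse=True,
        )
        elites = scored[:N_ELITE]
        # Variation: the LLM mutates/refines elite parents to create offspring.
        offspring = []
        for _ in range(POP_SIZE - N_ELITE):
            fit, code = random.choice(elites)
            offspring.append(llm_mutate(code, fit))
        population = [code for _, code in elites] + offspring
    return max(scored)[1]  # best-rated reward function found

if __name__ == "__main__":
    print(evolve("autonomous driving"))
```

The sketch only illustrates the selection/variation structure (human-feedback fitness driving which candidates the LLM refines); the paper linked above gives the actual operators and experimental details.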
Submission Number: 23