Provable Zero-Shot Generalization in Offline Reinforcement Learning

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: In this work, we study offline reinforcement learning (RL) with the zero-shot generalization (ZSG) property, where the agent has access to an offline dataset of experiences from different environments, and its goal is to train a policy on the training environments that performs well on test environments without further interaction. Existing work has shown that classical offline RL fails to generalize to new, unseen environments. We propose pessimistic empirical risk minimization (PERM) and pessimistic proximal policy optimization (PPPO), which leverage pessimistic policy evaluation to guide policy learning and enhance generalization. We show that both PERM and PPPO are capable of finding a near-optimal policy with ZSG. Our result serves as a first step toward understanding the foundation of the generalization phenomenon in offline reinforcement learning.
Lay Summary: Imagine trying to teach a robot to cook using only videos of people working in many different kitchens—but once training ends, the robot must work flawlessly in a brand-new kitchen it has never seen. That, in a nutshell, is the problem we tackle. Today’s “offline” reinforcement-learning methods can study past experiences but often overfit to the training kitchens and stumble in new ones. We propose pessimistic empirical risk minimization (PERM) and pessimistic proximal policy optimization (PPPO), which leverage pessimistic policy evaluation to guide policy learning and enhance generalization. We show that both PERM and PPPO are capable of finding a near-optimal policy in this generalization setting. Our result serves as a first step toward understanding the foundation of the generalization phenomenon in offline reinforcement learning.
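To make the idea of "pessimistic policy evaluation" concrete, here is a minimal, hypothetical sketch in a tabular setting. It is not the paper's PERM or PPPO algorithm; it only illustrates the standard pessimism principle they build on: penalize value estimates at state–action pairs the offline dataset covers poorly, so the learned policy avoids regions where its estimates are unreliable.

```python
import numpy as np

def pessimistic_q(dataset, n_states, n_actions, gamma=0.9, c=1.0, iters=100):
    """Illustrative tabular pessimistic value iteration (not the paper's algorithm).

    dataset: list of (s, a, r, s_next) transitions collected offline.
    Empirical Bellman backups are penalized by a count-based bonus
    b(s, a) = c / sqrt(n(s, a)), which is large where data is scarce.
    """
    counts = np.zeros((n_states, n_actions))
    reward_sum = np.zeros((n_states, n_actions))
    trans = np.zeros((n_states, n_actions, n_states))
    for s, a, r, s_next in dataset:
        counts[s, a] += 1
        reward_sum[s, a] += r
        trans[s, a, s_next] += 1

    n = np.maximum(counts, 1)                 # avoid division by zero
    r_hat = reward_sum / n                    # empirical mean reward
    p_hat = trans / n[:, :, None]             # empirical transition model
    penalty = c / np.sqrt(n)                  # pessimism bonus: big when n(s,a) is small

    q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        v = q.max(axis=1)
        # Pessimistic Bellman backup: subtract the penalty, keep values nonnegative.
        q = np.clip(r_hat - penalty + gamma * (p_hat @ v), 0.0, None)
    return q
```

With two actions of identical empirical reward, the action backed by fewer samples receives a larger penalty and hence a lower pessimistic value, which is exactly the behavior that steers offline-trained policies toward well-supported actions.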
Primary Area: Theory->Reinforcement Learning and Planning
Keywords: offline reinforcement learning, generalization
Submission Number: 13001