Adversarial Model for Offline Reinforcement Learning

Published: 21 Sept 2023, Last Modified: 22 Dec 2023, NeurIPS 2023 poster
Keywords: model-based, offline, reinforcement learning, adversarial training
TL;DR: A model-based offline RL framework that uses adversarial training to robustly improve over an arbitrary reference policy regardless of data coverage; it has strong theoretical guarantees and strong empirical performance without ensembles for dynamics modelling.
Abstract: We propose a novel model-based offline Reinforcement Learning (RL) framework, called Adversarial Model for Offline Reinforcement Learning (ARMOR), which can robustly learn policies to improve upon an arbitrary reference policy regardless of data coverage. ARMOR is designed to optimize policies for the worst-case performance relative to the reference policy through adversarially training a Markov decision process model. In theory, we prove that ARMOR, with a well-tuned hyperparameter, can compete with the best policy within data coverage when the reference policy is supported by the data. At the same time, ARMOR is robust to hyperparameter choices: the policy learned by ARMOR, with any admissible hyperparameter, never degrades the performance of the reference policy, even when the reference policy is not covered by the dataset. To validate these properties in practice, we design a scalable implementation of ARMOR, which, by adversarial training, can optimize policies without using model ensembles, in contrast to typical model-based methods. We show that ARMOR achieves performance competitive with both state-of-the-art offline model-free and model-based RL algorithms, and can robustly improve the reference policy across various hyperparameter choices.
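
A minimal sketch, in LaTeX, of the relative-pessimism objective the abstract describes; the notation below (the policy class $\Pi$, the model version space $\mathcal{M}_{\alpha}$, the return $J_{M}(\pi)$, and the reference policy $\pi_{\mathrm{ref}}$) is assumed here for illustration and may differ from the paper's exact formulation:

% Sketch of the adversarial (relative-pessimism) objective suggested by the
% abstract: the learner maximizes its worst-case advantage over the reference
% policy, while the adversary picks the MDP model from a version space of
% models consistent with the offline data. Notation is assumed, not taken
% verbatim from the paper.
\[
  \hat{\pi} \;=\; \operatorname*{arg\,max}_{\pi \in \Pi}\;
                  \min_{M \in \mathcal{M}_{\alpha}}\;
                  \Big( J_{M}(\pi) \;-\; J_{M}(\pi_{\mathrm{ref}}) \Big),
\]
where $J_{M}(\pi)$ is the expected return of policy $\pi$ in model $M$, and $\mathcal{M}_{\alpha}$ is a set of models whose fitting error on the offline dataset is bounded by a hyperparameter $\alpha$. Under this reading, if $\pi_{\mathrm{ref}} \in \Pi$ then the optimal objective value is at least zero for any admissible $\alpha$, which is the sense in which the learned policy would not degrade the reference policy even without data coverage of it.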
Supplementary Material: zip
Submission Number: 3629