Bridging Imitation and Online Reinforcement Learning: An Optimistic Tale

Published: 24 Oct 2023, Last Modified: 05 Feb 2024. Accepted by TMLR.
Abstract: In this paper, we address the following problem: given an offline demonstration dataset from an imperfect expert, what is the best way to leverage it to bootstrap online learning performance in MDPs? We first propose an Informed Posterior Sampling-based RL (iPSRL) algorithm that uses both the offline dataset and information about the expert's behavioral policy used to generate it. Its cumulative Bayesian regret decays to zero exponentially fast in $N$, the offline dataset size, provided the expert is competent enough. Since this algorithm is computationally impractical, we then propose the iRLSVI algorithm, which can be seen as a combination of the RLSVI algorithm for online RL and imitation learning. Our empirical results show that the proposed iRLSVI algorithm achieves a significant reduction in regret compared to two baselines: no offline data, and offline data used without suitably modeling the generative policy. Our algorithm can thus be seen as bridging online RL and imitation learning.
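To make the combination described above concrete, the sketch below illustrates one way an RLSVI-style randomized value fit can be augmented with an imitation term on expert demonstrations. This is only a minimal tabular illustration under assumed design choices (a softmax expert model with inverse temperature `beta`, a squared-error fit to noise-perturbed Bellman targets, and invented names such as `irlsvi_sketch` and `imit_weight`); it is not the paper's exact iRLSVI algorithm, which is specified in the manuscript.

```python
import numpy as np

# Hypothetical sketch: RLSVI-style randomized value iteration plus a soft
# imitation (expert log-likelihood) term on offline demonstrations.
# The loss form, expert model, and hyperparameters are assumptions for
# exposition only; see the paper for the actual iRLSVI algorithm.

def irlsvi_sketch(S, A, H, online_buffer, offline_demos,
                  sigma=1.0, lam=1.0, beta=5.0, imit_weight=1.0,
                  iters=200, lr=0.1, seed=0):
    """Return a randomized Q table of shape (H+1, S, A).

    online_buffer : list of (h, s, a, r, s_next) transitions collected online
    offline_demos : list of (h, s, a) state-action pairs from the imperfect expert
    beta          : assumed inverse temperature of the expert's softmax policy
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((H + 1, S, A))  # Q[H] stays zero (terminal stage)

    for h in reversed(range(H)):
        # RLSVI-style perturbation: Gaussian noise on the Bellman targets
        # induces posterior-sampling-like exploration.
        targets = {}
        for (t, s, a, r, s_next) in online_buffer:
            if t != h:
                continue
            y = r + Q[h + 1, s_next].max() + rng.normal(0.0, sigma)
            targets.setdefault((s, a), []).append(y)

        q = np.zeros((S, A))
        n_data = max(1, len(online_buffer) + len(offline_demos))
        for _ in range(iters):
            grad = lam * q  # ridge regularization toward a zero prior mean
            # Squared-error fit to the perturbed Bellman targets.
            for (s, a), ys in targets.items():
                grad[s, a] += len(ys) * q[s, a] - sum(ys)
            # Imitation term: increase the log-likelihood of expert actions
            # under an assumed softmax (Boltzmann) expert model.
            for (t, s, a) in offline_demos:
                if t != h:
                    continue
                p = np.exp(beta * (q[s] - q[s].max()))
                p /= p.sum()
                grad[s] += imit_weight * beta * p    # softmax part of -d log pi / dq
                grad[s, a] -= imit_weight * beta     # chosen-action part
            q -= lr * grad / n_data
        Q[h] = q
    return Q
```

With `imit_weight = 0` this reduces to plain RLSVI-style value fitting on online data; with no online data it reduces to fitting a softmax imitation model of the expert, which is the sense in which the construction interpolates between online RL and imitation learning.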
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Thank you for the feedback. We have polished the manuscript. We have also attempted further experimental work on tabular environments, but it does not provide any more insight than what can already be gleaned from the presented results.
Comments: Please note that our objective in this paper is to present a learning-efficient way to leverage offline datasets for online learning. Our experimental results on DeepSea, while a simple tabular environment, are presented as a proof of concept. The algorithm can likely be extended to challenging continuous or very large state and action space problems such as those in D4RL, but this requires combining the ideas we present with suitable function approximation architectures in such a way that the function approximation error itself does not cause regret to grow linearly. This is currently an unsolved problem, and the holy grail of online learning research. We have done further experimental work on tabular environments: in addition to the DeepSea environment, we have added experimental results for the Maze environment from the D4RL suite. The results on the Maze environment are as expected and consistent with what we observed earlier for the DeepSea environment.
Assigned Action Editor: ~Tao_Qin1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 952