Robust On-Policy Sampling for Data-Efficient Policy Evaluation in Reinforcement Learning

Published: 31 Oct 2022, Last Modified: 06 Jan 2023, NeurIPS 2022 Accept
Readers: Everyone
Keywords: Reinforcement Learning, policy evaluation, on-policy, data collection
TL;DR: We use non-i.i.d., off-policy data collection to produce data that matches the expected distribution of on-policy data more closely than on-policy sampling does; this technique leads to more accurate policy evaluation.
Abstract: Reinforcement learning (RL) algorithms are often categorized as either on-policy or off-policy depending on whether they use data from a target policy of interest or from a different behavior policy. In this paper, we study a subtle distinction between on-policy data and on-policy sampling in the context of the RL sub-problem of policy evaluation. We observe that on-policy sampling may fail to match the expected distribution of on-policy data after observing only a finite number of trajectories, and this failure hinders data-efficient policy evaluation. Toward improved data efficiency, we show how non-i.i.d., off-policy sampling can produce data that more closely matches the expected on-policy data distribution and consequently increases the accuracy of the Monte Carlo estimator for policy evaluation. We introduce a method called Robust On-Policy Sampling and demonstrate theoretically and empirically that it produces data that converges faster to the expected on-policy distribution than on-policy sampling. Empirically, we show that this faster convergence yields policy value estimates with lower mean squared error.
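The sketch below is not the paper's exact Robust On-Policy Sampling algorithm; it only illustrates the abstract's core idea on a one-step, bandit-style problem with a known discrete target policy. The names (`pi`, `true_q`, `reward`, `robust_sampling`) and the specific correction rule (shifting sampling probability toward actions that are under-represented relative to the target policy) are illustrative assumptions, as is the choice to apply an ordinary, unweighted Monte Carlo estimator to the non-i.i.d. data.

```python
# Hedged sketch, not the paper's ROS algorithm: adapt the behavior distribution so
# the *empirical* action distribution tracks the target policy faster than i.i.d.
# on-policy sampling does, then apply an ordinary Monte Carlo value estimate.
import numpy as np

rng = np.random.default_rng(0)

n_actions = 4
pi = np.array([0.4, 0.3, 0.2, 0.1])        # target policy (discrete, assumed known)
true_q = np.array([1.0, 0.5, -0.2, 2.0])   # per-action expected rewards (assumed)

def reward(a):
    # Noisy reward with mean true_q[a].
    return true_q[a] + rng.normal(scale=0.5)

def on_policy_sampling(n):
    # Baseline: draw every action i.i.d. from the target policy.
    return rng.choice(n_actions, size=n, p=pi)

def robust_sampling(n, step=1.0):
    # Non-i.i.d. sampling: at each step, shift probability mass toward actions that
    # are under-represented relative to pi, so the empirical distribution converges
    # to pi faster than i.i.d. on-policy sampling would.
    counts = np.zeros(n_actions)
    actions = []
    for t in range(n):
        empirical = counts / max(t, 1)
        probs = pi - step * (empirical - pi)   # push against the sampling error
        probs = np.clip(probs, 1e-8, None)
        probs /= probs.sum()
        a = rng.choice(n_actions, p=probs)
        counts[a] += 1
        actions.append(a)
    return np.array(actions)

def mc_estimate(actions):
    # Unweighted Monte Carlo estimate: the data is treated as on-policy because its
    # empirical distribution is close to pi, even though collection was off-policy.
    return np.mean([reward(a) for a in actions])

true_value = float(pi @ true_q)
for sampler in (on_policy_sampling, robust_sampling):
    errs = [(mc_estimate(sampler(100)) - true_value) ** 2 for _ in range(500)]
    print(f"{sampler.__name__:>20}: MSE = {np.mean(errs):.4f}")
```

Under these toy assumptions, the adaptive sampler's empirical action frequencies sit closer to the target policy for any finite sample, which is why the unweighted Monte Carlo estimate tends to have lower mean squared error than the on-policy baseline.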
Supplementary Material: pdf