Managing Temporal Resolution in Continuous Value Estimation: A Fundamental Trade-off

Published: 21 Sept 2023 · Last Modified: 16 Jan 2024 · NeurIPS 2023 poster
Keywords: Reinforcement Learning, Policy Evaluation, Temporal Discretization, Continuous Time, LQR
TL;DR: To formalize the impact of temporal discretization in RL, we analyze Monte-Carlo policy evaluation in finite and infinite-horizon LQR systems and identify a trade-off between approximation and statistical error.
Abstract: A default assumption in reinforcement learning (RL) and optimal control is that observations arrive at discrete time points on a fixed clock cycle. Yet, many applications involve continuous-time systems where the time discretization, in principle, can be managed. The impact of time discretization on RL methods has not been fully characterized in existing theory, but a more detailed analysis of its effect could reveal opportunities for improving data efficiency. We address this gap by analyzing Monte-Carlo policy evaluation for LQR systems and uncover a fundamental trade-off between approximation and statistical error in value estimation. Importantly, these two errors respond differently to the time discretization, leading to an optimal choice of temporal resolution for a given data budget. These findings show that managing the temporal resolution can provably improve policy evaluation efficiency in LQR systems with finite data. Empirically, we demonstrate the trade-off in numerical simulations of LQR instances and standard RL benchmarks for non-linear continuous control.
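
As a rough illustration of the trade-off described in the abstract, the sketch below estimates the value of a fixed linear policy in a one-dimensional continuous-time LQR system via Monte-Carlo rollouts under Euler–Maruyama discretization with step size h, holding the total number of observed transitions fixed. The dynamics, cost weights, policy gain, horizon, and data-budget convention are all illustrative assumptions, not the paper's experimental setup: a coarser h permits more rollouts (lower statistical error) at the price of a cruder discretization (higher approximation error), and vice versa.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's code): Monte-Carlo value
# estimation for a 1-D continuous-time LQR system discretized with step h.
rng = np.random.default_rng(0)

# Assumed dynamics dx = (a*x + b*u) dt + sigma dW, running cost q*x^2 + r*u^2.
a, b, sigma = -0.5, 1.0, 0.1
q, r = 1.0, 0.1
k = 0.8          # hypothetical fixed linear policy u = -k * x
x0 = 1.0         # initial state whose value we estimate
horizon = 5.0    # finite horizon T

def mc_value_estimate(h, n_rollouts):
    """Monte-Carlo estimate (mean, standard error) of the cost-to-go from x0."""
    n_steps = int(horizon / h)
    returns = np.empty(n_rollouts)
    for i in range(n_rollouts):
        x, total = x0, 0.0
        for _ in range(n_steps):
            u = -k * x
            total += (q * x**2 + r * u**2) * h       # Riemann-sum cost
            noise = sigma * np.sqrt(h) * rng.standard_normal()
            x += (a * x + b * u) * h + noise         # Euler-Maruyama step
        returns[i] = total
    return returns.mean(), returns.std(ddof=1) / np.sqrt(n_rollouts)

# Fixed data budget: total observed transitions held constant, so a finer
# step size h leaves fewer independent rollouts for averaging.
budget = 20_000
for h in (0.5, 0.1, 0.02):
    n_rollouts = max(1, budget // int(horizon / h))
    est, se = mc_value_estimate(h, n_rollouts)
    print(f"h={h:5.2f}  rollouts={n_rollouts:5d}  value ~ {est:.3f} +/- {se:.3f}")
```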
Supplementary Material: zip
Submission Number: 2698