Information-Theoretic Generalization Bounds for Batch Reinforcement Learning

Published: 11 Oct 2024, Last Modified: 12 Nov 2024 · M3L Poster · CC BY 4.0
Keywords: Reinforcement Learning; Learning Theory; Generalization; Mutual Information
Abstract: We analyze the generalization properties of batch reinforcement learning (batch RL) with value function approximation from an information-theoretic perspective. We derive generalization bounds for batch RL using (conditional) mutual information. In addition, we demonstrate how to establish a connection between certain structural assumptions on the value function space and conditional mutual information. As a by-product, we derive a \textit{high-probability} generalization bound via conditional mutual information, which was left open in \cite{steinke2020reasoning} and may be of independent interest.
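For context, the classical input-output mutual information bound of Xu and Raginsky (2017) illustrates the shape of the results the abstract refers to; this is standard background, not the paper's own batch-RL bound. For a learning algorithm with output hypothesis $W$ trained on a sample $S = (Z_1, \dots, Z_n)$ of $n$ i.i.d. points, with a loss that is $\sigma$-subgaussian, the expected generalization gap satisfies:

```latex
% Classical mutual-information generalization bound (Xu & Raginsky, 2017),
% shown here as background for the information-theoretic approach:
\left| \mathbb{E}\left[ \mathrm{gen}(W, S) \right] \right|
\;\le\;
\sqrt{\frac{2\sigma^2 \, I(W; S)}{n}},
% where I(W; S) is the mutual information between the learned hypothesis
% and the training sample, and gen(W, S) is the gap between population
% and empirical risk.
```

Conditional mutual information (CMI) bounds, introduced by Steinke and Zakynthinou (2020) and cited in the abstract, replace $I(W;S)$ with a conditional quantity that remains finite even for deterministic algorithms; the paper extends this style of analysis to batch RL with value function approximation.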
Is NeurIPS Submission: No
Submission Number: 20