Risk-Aware Reinforcement Learning with Coherent Risk Measures and Non-linear Function Approximation

Published: 01 Feb 2023, Last Modified: 11 Apr 2023, ICLR 2023 poster
Keywords: Risk-Aware Reinforcement Learning, Coherent Risk Measures, Non-linear Function Approximation
Abstract: We study the risk-aware reinforcement learning (RL) problem in episodic finite-horizon Markov decision processes (MDPs) with unknown transition and reward functions. In contrast to the risk-neutral RL problem, we consider minimizing the risk of receiving low rewards, which arises from both the intrinsic randomness of the MDP and imperfect knowledge of the model. Our work provides a unified framework for analyzing the regret of risk-aware RL policies with coherent risk measures in conjunction with non-linear function approximation, which gives the first sub-linear regret bounds in this setting. Finally, we validate our theoretical results via empirical experiments on synthetic and real-world data.
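For illustration only (not taken from the submission): a minimal sketch of one widely used coherent risk measure, conditional value-at-risk (CVaR), estimated from sampled episodic returns. The risk level `alpha`, the Monte Carlo estimator, and the synthetic return distribution below are assumptions made for this example, not the paper's algorithm.

```python
import numpy as np

def cvar_of_returns(returns, alpha=0.1):
    """Estimate CVaR_alpha of episodic returns (risk of low rewards).

    CVaR_alpha is the expected return over the worst alpha-fraction of
    outcomes; it is a coherent risk measure. Lower values indicate a
    higher risk of receiving low rewards.
    """
    returns = np.sort(np.asarray(returns, dtype=float))  # ascending order
    k = max(1, int(np.ceil(alpha * len(returns))))       # worst alpha-fraction
    return returns[:k].mean()

# Hypothetical usage: returns obtained by rolling out a policy in an episodic MDP.
rng = np.random.default_rng(0)
sampled_returns = rng.normal(loc=1.0, scale=0.5, size=1000)
print("Mean return:", sampled_returns.mean())
print("CVaR_0.1   :", cvar_of_returns(sampled_returns, alpha=0.1))
```

A risk-aware policy in this sense would optimize a risk measure such as CVaR of the return rather than its expectation, trading some average performance for protection against low-reward outcomes.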
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Theory (eg, control theory, learning theory, algorithmic game theory)
TL;DR: We propose a unified framework to analyze the regret of a risk-aware RL policy that uses a coherent risk measure in conjunction with non-linear function approximation.
Supplementary Material: zip