Keywords: Reinforcement Learning, Risk-Sensitive Reinforcement Learning, Uncertainty Estimation
TL;DR: We propose a method for combining different kinds of risk in reinforcement learning and prove its advantages over existing ways of combining risks.
Abstract: In this paper, we consider risk-sensitive sequential decision-making in Reinforcement Learning (RL).
Our contributions are two-fold. First, we introduce a novel and \emph{coherent} quantification of risk, namely \emph{composite risk}, which quantifies the joint effect of aleatory and epistemic risk during the learning process.
Existing works consider aleatory and epistemic risk either individually or as an additive combination.
We prove that the additive formulation is a particular case of the composite risk when the epistemic risk measure is replaced with expectation.
Thus, the composite risk is more sensitive to both aleatory and epistemic uncertainty than the individual and additive formulations.
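A minimal formal sketch of this relationship, in notation of our own choosing (the symbols $\rho_{ep}$, $\rho_{al}$, $Z_\theta$, and the posterior $p(\theta \mid \mathcal{D})$ are illustrative assumptions, not taken verbatim from the paper):

```latex
% Sketch (our notation): \rho_{al} is an aleatory risk measure applied to
% the return distribution Z_\theta under a fixed model \theta; \rho_{ep}
% is an epistemic risk measure over the posterior p(\theta | D).
\[
  \text{composite risk:}\quad
  \rho_{ep}\Big[\, \rho_{al}\big[ Z_\theta \big] \,\Big],
  \qquad \theta \sim p(\theta \mid \mathcal{D}).
\]
% Replacing the epistemic risk measure by an expectation collapses the
% outer layer, recovering the additive-style estimate as a special case:
\[
  \rho_{ep} = \mathbb{E}
  \;\Longrightarrow\;
  \mathbb{E}_{\theta \sim p(\theta \mid \mathcal{D})}
  \Big[\, \rho_{al}\big[ Z_\theta \big] \,\Big].
\]
```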
We also propose an algorithm, SENTINEL-K, based on ensemble bootstrapping and distributional RL for representing epistemic and aleatory uncertainty, respectively. The ensemble of K learners uses Follow The Regularised Leader (FTRL) to aggregate the return distributions and obtain the composite risk.
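To make the aggregation concrete, here is a minimal NumPy sketch of how an FTRL-weighted ensemble of K distributional learners could produce a composite risk estimate. The entropic regulariser (yielding exponential weights), the choice of CVaR for both risk measures, and all names and hyperparameters are our assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: names, shapes, and hyperparameters are ours,
# not the authors' code.
import numpy as np

def weighted_cvar(values, weights, alpha):
    """CVaR_alpha (lower tail) of a discrete distribution: the
    probability-weighted mean of the worst alpha-mass of outcomes."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    prev = np.concatenate(([0.0], np.cumsum(w)[:-1]))  # mass before each atom
    tail = np.clip(alpha - prev, 0.0, w)               # mass inside the tail
    return float(tail @ v / alpha)

class FTRLAggregator:
    """Follow The Regularised Leader over K learners. With an entropic
    regulariser, the FTRL weights take the exponential-weights closed form."""
    def __init__(self, k, lr=0.1):
        self.cum_loss = np.zeros(k)  # cumulative per-learner loss
        self.lr = lr

    def update(self, losses):
        self.cum_loss += np.asarray(losses, dtype=float)
        w = np.exp(-self.lr * (self.cum_loss - self.cum_loss.min()))
        return w / w.sum()

def composite_risk(quantiles, weights, alpha_al=0.1, alpha_ep=0.1):
    """Composite risk at a (state, action): an epistemic CVaR, taken over
    the FTRL distribution on learners, of each learner's aleatory CVaR."""
    aleatory = np.array([
        weighted_cvar(q, np.full(len(q), 1.0 / len(q)), alpha_al)
        for q in quantiles
    ])
    return weighted_cvar(aleatory, weights, alpha_ep)

# Toy usage: K = 4 bootstrapped learners, each with N = 32 return quantiles.
rng = np.random.default_rng(0)
K, N = 4, 32
quantiles = rng.normal(loc=rng.uniform(0, 2, size=(K, 1)), size=(K, N))
agg = FTRLAggregator(K)
weights = agg.update(rng.uniform(size=K))  # stand-in for per-learner TD losses
print(composite_risk(quantiles, weights))
```

Note the design choice in this sketch: applying CVaR at both levels keeps the composite estimate sensitive to the worst-case tail of each learner's return distribution (aleatory) and to the worst-performing models in the posterior ensemble (epistemic), consistent with the composite formulation above.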
We experimentally verify that SENTINEL-K estimates the return distribution more accurately and, when used with composite risk estimates, achieves higher risk-sensitive performance than state-of-the-art risk-sensitive and distributional RL algorithms.
Supplementary Material: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/sentinel-taming-uncertainty-with-ensemble/code)