Keywords: Reinforcement learning
TL;DR: We introduce a new TECRL framework, which separately learns reward and entropy Q-functions and applies a trajectory entropy constraint to achieve higher returns and better stability.
Abstract: Maximum entropy has become a mainstream off-policy reinforcement learning (RL) framework for balancing exploitation and exploration. However, two bottlenecks still limit further performance improvement: \textit{(1) non-stationary Q-value estimation}, caused by jointly injecting entropy and updating its weighting parameter, i.e., the temperature; and \textit{(2) short-sighted local entropy tuning}, which adjusts the temperature only according to the current single-step entropy, without considering the effect of cumulative entropy over time. In this paper, we extend the maximum entropy framework by proposing a trajectory entropy-constrained reinforcement learning (TECRL) framework to address these two challenges. Within this framework, we first separately learn two Q-functions, one associated with reward and the other with entropy, ensuring clean and stable value targets unaffected by temperature updates. The dedicated entropy Q-function, which explicitly quantifies the expected cumulative entropy, then enables us to enforce a trajectory entropy constraint and consequently control the policy’s long-term stochasticity. Building on the TECRL framework, we develop a practical off-policy algorithm, DSAC-E, by extending the state-of-the-art distributional soft actor-critic with three refinements (DSAC-T). Empirical results on the OpenAI Gym benchmark demonstrate that DSAC-E achieves higher returns and better stability.
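As a minimal formal sketch of the constraint described in the abstract, assuming a discounted setting with discount factor $\gamma$ and writing $Q_r^{\pi}$ and $Q_{\mathcal{H}}^{\pi}$ for the reward and entropy Q-functions (the symbols $\gamma$ and $\mathcal{H}_{\mathrm{traj}}$ and the exact expectation structure are illustrative assumptions, not taken from the paper):
\[
Q_{\mathcal{H}}^{\pi}(s,a) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{t=1}^{\infty} \gamma^{t}\,\bigl(-\log \pi(a_t \mid s_t)\bigr) \,\middle|\, s_0 = s,\ a_0 = a\right],
\]
\[
\max_{\pi}\;\ \mathbb{E}\bigl[Q_{r}^{\pi}(s,a)\bigr] \quad \text{s.t.} \quad \mathbb{E}\bigl[Q_{\mathcal{H}}^{\pi}(s,a)\bigr] \;\ge\; \mathcal{H}_{\mathrm{traj}}.
\]
Under this reading, the temperature would play the role of the constraint's Lagrange multiplier, so that reward and entropy value targets are learned separately and the temperature update depends on cumulative rather than single-step entropy; this is a sketch consistent with the abstract, not the paper's exact derivation.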
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 13273