$S^2AC$: ENERGY-BASED REINFORCEMENT LEARNING WITH STEIN SOFT ACTOR CRITIC

Published: 07 Nov 2023, Last Modified: 05 Dec 2023
Venue: FMDM@NeurIPS 2023
Keywords: MaxEnt RL, Variational Inference, SVGD, EBM, Entropy
TL;DR: We propose $S^2AC$, an actor-critic algorithm that yields better solutions to the MaxEnt RL objective. $S^2AC$ achieves this by leveraging a new family of variational distributions characterized by SVGD dynamics.
Abstract: Learning expressive stochastic policies instead of deterministic ones has been proposed to achieve better stability, sample complexity, and robustness. Notably, in Maximum Entropy Reinforcement Learning (MaxEnt RL), the policy is modeled as an expressive Energy-Based Model (EBM) over the Q-values. However, this formulation requires the estimation of the entropy of such EBMs, which is an open problem. To address this, previous MaxEnt RL methods either implicitly estimate the entropy, resulting in high computational complexity and variance (SQL), or follow a variational inference procedure that fits simplified actor distributions (e.g., Gaussian) for tractability (SAC). We propose Stein Soft Actor-Critic ($S^2AC$), a MaxEnt RL algorithm that learns expressive policies without compromising efficiency. Specifically, $S^2AC$ uses parameterized Stein Variational Gradient Descent (SVGD) as the underlying policy. We derive a closed-form expression for the entropy of such policies. Our formula is computationally efficient and only depends on first-order derivatives and vector products. Empirical results show that $S^2AC$ yields solutions closer to the optimum of the MaxEnt objective than SQL and SAC in the multi-goal environment, and outperforms SAC and SQL on the MuJoCo benchmark. Our code is available at: \url{https://anonymous.4open.science/r/Stein-Soft-Actor-Critic/}
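For intuition, below is a minimal sketch of the SVGD dynamics that such a policy builds on: particles representing candidate actions are iteratively transported toward the energy-based target $\exp(Q(s,\cdot))$ by a kernelized attraction-plus-repulsion update. This is a generic illustration of SVGD (Liu & Wang, 2016), not the paper's parameterized $S^2AC$ sampler or its entropy formula; the names `toy_q`, `svgd_step`, and the toy quadratic critic are hypothetical.

```python
# Generic SVGD sampler for an energy-based policy pi(a|s) ∝ exp(Q(s, a)).
# Illustrative only: not the paper's S^2AC implementation.
import torch

def toy_q(actions: torch.Tensor) -> torch.Tensor:
    """Stand-in critic: Q(s, a) = -||a - a*||^2 for a fixed state."""
    target = torch.tensor([0.5, -0.5])
    return -((actions - target) ** 2).sum(dim=-1)

def svgd_step(particles: torch.Tensor, bandwidth: float = 0.5,
              step_size: float = 0.1) -> torch.Tensor:
    """One SVGD update pushing action particles toward exp(Q(s, .))."""
    particles = particles.detach().requires_grad_(True)

    # Score of the energy-based target: grad_a Q(s, a), since log pi(a|s) ∝ Q(s, a).
    grad_q = torch.autograd.grad(toy_q(particles).sum(), particles)[0]    # (n, d)

    # RBF kernel k(x_i, x_j) = exp(-||x_i - x_j||^2 / h) and its repulsive term
    # sum_j grad_{x_j} k(x_j, x_i) = (2/h) * sum_j (x_i - x_j) k(x_i, x_j).
    x = particles.detach()
    diff = x.unsqueeze(1) - x.unsqueeze(0)                                # (n, n, d)
    k = torch.exp(-(diff ** 2).sum(-1) / bandwidth)                       # (n, n)
    repulsion = (2.0 / bandwidth) * (k.unsqueeze(-1) * diff).sum(dim=1)   # (n, d)

    # SVGD direction: attraction toward high-Q regions plus inter-particle repulsion.
    phi = (k @ grad_q + repulsion) / particles.shape[0]
    return (particles + step_size * phi).detach()

# Usage: transport samples from an initial Gaussian toward exp(Q(s, .)).
actions = torch.randn(64, 2)
for _ in range(50):
    actions = svgd_step(actions)
```

The kernel repulsion term keeps the particles from collapsing onto a single mode, which is what lets an SVGD-based actor represent multimodal, EBM-like policies rather than a single Gaussian.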
Submission Number: 92