Diffusion-Based Maximum Entropy Reinforcement Learning

Published: 12 Jun 2025, Last Modified: 21 Jun 2025, EXAIT@ICML 2025 Poster, CC BY 4.0
Track: Robotics
Keywords: Diffusion, Maximum Entropy Reinforcement Learning, Diffusion Policy, Diffusion-Based Maximum Entropy Reinforcement Learning, Reinforcement Learning
TL;DR: We propose a tractable lower bound on the maximum entropy RL objective for training a diffusion-based policy, which lets us control the exploration of the diffusion model.
Abstract: Maximum entropy reinforcement learning (MaxEnt-RL) has become the standard approach to RL due to its beneficial exploration properties. Traditionally, policies are parameterized using Gaussian distributions, which significantly limits their representational capacity. Diffusion-based policies offer a more expressive alternative, yet integrating them into MaxEnt-RL poses challenges, primarily because their marginal entropy is intractable to compute. We propose Diffusion-Based Maximum Entropy RL (DIME). DIME leverages recent advances in approximate inference with diffusion models to derive a lower bound on the maximum entropy objective. Additionally, we propose a policy iteration scheme that provably converges to the optimal diffusion policy. Our method enables the use of expressive diffusion-based policies while retaining the principled exploration benefits of MaxEnt-RL, significantly outperforming other diffusion-based methods on challenging high-dimensional control benchmarks. It is also competitive with state-of-the-art non-diffusion-based RL methods while requiring fewer algorithmic design choices and smaller update-to-data ratios, reducing computational complexity.
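Note (editorial sketch for context; the notation and the exact bound are assumptions on our part, not taken from the submission page): the MaxEnt-RL objective referred to above is

J(\pi) = \mathbb{E}_{\pi}\Big[\textstyle\sum_{t} r(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big)\Big],

where the entropy term \mathcal{H}(\pi(\cdot \mid s)) = -\mathbb{E}_{a_0 \sim \pi}[\log \pi(a_0 \mid s)] is intractable for a diffusion policy, since evaluating \pi(a_0 \mid s) requires marginalizing over the denoising chain a_{1:K}. One standard construction in the spirit of what the abstract describes upper-bounds the log-marginal with an auxiliary noising process q,

\log \pi(a_0 \mid s) \le \mathbb{E}_{\pi(a_{1:K} \mid a_0, s)}\big[\log \pi(a_{0:K} \mid s) - \log q(a_{1:K} \mid a_0, s)\big],

which, taken in expectation over a_0 \sim \pi(\cdot \mid s), gives a tractable lower bound on the entropy, \mathcal{H}(\pi(\cdot \mid s)) \ge \mathbb{E}_{\pi(a_{0:K} \mid s)}\big[\log q(a_{1:K} \mid a_0, s) - \log \pi(a_{0:K} \mid s)\big], and hence on the MaxEnt objective; the specific bound used by DIME may differ.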
Serve As Reviewer: ~Onur_Celik1, ~Zechu_Li1
Submission Number: 41