Distributions as Actions: A Unified Framework for Diverse Action Spaces

ICLR 2026 Conference Submission 20834 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: deterministic policy gradient, actor-critic, continuous control, discrete control, hybrid control, action space, reinforcement learning
TL;DR: We introduce a reinforcement learning framework that treats distributions as actions, enabling a unified policy gradient method (DA-PG) and algorithm (DA-AC) that perform well across different action spaces.
Abstract: We introduce a novel reinforcement learning (RL) framework that treats parameterized action distributions as actions, redefining the boundary between agent and environment. This reparameterization makes the new action space continuous regardless of the original action type (discrete, continuous, hybrid, etc.). Under this parameterization, we develop a generalized deterministic policy gradient estimator, \emph{Distributions-as-Actions Policy Gradient} (DA-PG), which has lower variance than the gradient in the original action space. Learning the critic over distribution parameters poses new challenges, so we introduce \emph{interpolated critic learning} (ICL), a simple yet effective strategy that improves critic learning, supported by insights from bandit settings. Building on TD3, a strong baseline for continuous control, we propose a practical actor-critic algorithm, \emph{Distributions-as-Actions Actor-Critic} (DA-AC). Empirically, DA-AC achieves competitive performance across discrete, continuous, and hybrid control settings.
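To make the "distributions as actions" reparameterization concrete, below is a minimal, hypothetical sketch (not the authors' code) of how an environment with a discrete action space can be wrapped so that the agent's action is a continuous vector of distribution parameters (here, categorical logits), with the original action sampled inside the environment boundary. The wrapper name `DistributionActionWrapper` and the use of Gymnasium are illustrative assumptions.

```python
# Hypothetical sketch of the distributions-as-actions idea: the wrapped
# environment accepts a parameterized action distribution (logits over a
# discrete action set) as its action and samples the original action
# internally, so the agent sees a continuous action space regardless of
# the original action type. Names are illustrative, not from the paper.

import numpy as np
import gymnasium as gym


class DistributionActionWrapper(gym.Wrapper):
    """Environment whose action is a categorical distribution (logits)."""

    def __init__(self, env: gym.Env):
        super().__init__(env)
        assert isinstance(env.action_space, gym.spaces.Discrete)
        n = env.action_space.n
        # The new action space is continuous: a real-valued vector of logits.
        self.action_space = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(n,))

    def step(self, logits):
        # Convert logits to probabilities and sample the original discrete action.
        logits = np.asarray(logits, dtype=np.float64)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        original_action = int(np.random.choice(len(probs), p=probs))
        return self.env.step(original_action)
```

Under this view, a deterministic policy outputs the logits directly, so a TD3-style critic defined over (state, distribution parameters) and a deterministic policy gradient can in principle be applied even when the underlying control problem is discrete or hybrid, which is the setting DA-PG and DA-AC target.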
Primary Area: reinforcement learning
Submission Number: 20834