Reinforcement Learning under State and Outcome Uncertainty: A Foundational Distributional Perspective

Published: 01 Jul 2025, Last Modified: 21 Jul 2025 · Finding the Frame (RLC 2025) · CC BY 4.0
Keywords: Reinforcement Learning, Partial Observability, Distributional Reinforcement Learning, POMDP, Planning
TL;DR: We extend distributional RL to partially observable domains with new distributional operators, a finite ψ-vector representation, and a point-based algorithm (DPBVI) for robust decision-making.
Abstract: In many real-world planning tasks, agents must tackle uncertainty about the environment’s state and variability in the outcomes of any chosen policy. We address both forms of uncertainty as a first step toward safer algorithms in partially observable settings. Specifically, we extend Distributional Reinforcement Learning (DistRL)—which models the entire return distribution for fully observable domains—to Partially Observable Markov Decision Processes (POMDPs), allowing an agent to learn the distribution of returns for each conditional plan. Concretely, we introduce new distributional Bellman operators for partial observability and prove their convergence under the supremum $p$-Wasserstein metric. We also propose a finite representation of these return distributions via $\psi$-vectors, generalizing the classical $\alpha$-vectors in POMDP solvers. Building on this, we develop Distributional Point-Based Value Iteration (DPBVI), which integrates $\psi$-vectors into a standard point-based backup procedure—bridging DistRL and POMDP planning. By tracking return distributions, DPBVI lays the foundation for future risk-sensitive control in domains where rare, high-impact events must be carefully managed. We provide source code to foster further research in robust decision-making under partial observability.
Submission Number: 5
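
The abstract does not spell out a concrete data structure for the ψ-vectors, only that they form a finite representation of return distributions that generalizes the classical α-vectors. The sketch below is therefore an illustrative assumption, not the paper's implementation: it supposes each ψ-vector stores one categorical return distribution per state over a fixed support of return atoms (mirroring how an α-vector stores one expected value per state), and evaluates a belief point by belief-weighted mixing. The function names `evaluate_psi_vector` and `best_psi_at_belief`, and the optional `risk_measure` hook, are hypothetical.

```python
import numpy as np

# Illustrative sketch only. Assumed layout (not from the paper): a psi-vector
# is an (n_states, n_atoms) array of categorical probabilities over a fixed
# support of return atoms, the distributional analogue of an alpha-vector.

def evaluate_psi_vector(psi, belief):
    """Return distribution induced at a belief: belief-weighted mixture
    of the per-state return distributions stored in the psi-vector.

    psi    : (n_states, n_atoms) rows summing to 1
    belief : (n_states,) probability vector over states
    returns: (n_atoms,) categorical return distribution at this belief
    """
    return belief @ psi

def best_psi_at_belief(psi_set, belief, support, risk_measure=None):
    """Select the psi-vector whose induced distribution scores highest.

    With risk_measure=None this reduces to the classical alpha-vector rule
    (maximize expected return); substituting a risk functional such as CVaR
    would make the selection risk-sensitive, which is the kind of control
    the return distributions are meant to enable.
    """
    score = risk_measure or (lambda dist: dist @ support)  # expectation
    dists = [evaluate_psi_vector(psi, belief) for psi in psi_set]
    scores = [score(d) for d in dists]
    best = int(np.argmax(scores))
    return psi_set[best], dists[best]

# Tiny usage example: two states, three return atoms.
support = np.array([0.0, 5.0, 10.0])
psi_a = np.array([[0.8, 0.1, 0.1],   # mostly low return in state 0
                  [0.1, 0.1, 0.8]])  # mostly high return in state 1
psi_b = np.array([[0.1, 0.8, 0.1],   # moderate return in either state
                  [0.1, 0.8, 0.1]])
belief = np.array([0.5, 0.5])
best_psi, dist_at_belief = best_psi_at_belief([psi_a, psi_b], belief, support)
```

Under this assumed representation, a point-based backup in the style of DPBVI would update such arrays at a finite set of belief points rather than scalar α-vectors; the paper's actual operators and projection are defined in the full text and accompanying source code.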