Epistemic Robust Offline Reinforcement Learning

19 Sept 2025 (modified: 12 Feb 2026) · ICLR 2026 Conference Desk Rejected Submission · CC BY 4.0
Keywords: Offline Reinforcement Learning, Epistemic Robustness, Epistemic Neural Networks
TL;DR: We propose an ensemble-free offline RL method that models epistemic uncertainty in Q-value predictions, improving robustness and generalization from limited data.
Abstract: Offline Reinforcement Learning aims to learn policies from fixed datasets without further environment interaction. A key challenge in this setting is epistemic uncertainty, which arises from limited or biased data coverage, particularly when the behavior policy systematically avoids certain actions. This can lead to inaccurate value estimates and unreliable generalization. Ensemble-based methods such as SAC-N mitigate this by conservatively estimating Q-values using the ensemble minimum, but they require large ensembles and often conflate epistemic with aleatoric uncertainty. To address these limitations, we propose a unified and generalizable framework that replaces discrete ensembles with compact uncertainty sets over Q-values. We further introduce an Epinet-based model that directly shapes the uncertainty sets to optimize the cumulative reward under the robust Bellman objective without relying on ensembles. We also introduce a benchmark for evaluating offline RL algorithms under risk-sensitive behavior policies, and demonstrate that our method achieves improved robustness and generalization over ensemble-based baselines across both tabular and continuous-state domains.
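To make the contrast in the abstract concrete, here is a minimal sketch (not the authors' code; all function names and the toy epinet head are hypothetical) of the two pessimism mechanisms it compares: a SAC-N-style conservative target taken as the minimum over N ensemble members, versus an epinet-style lower bound obtained by sampling an epistemic index z and taking the worst case over the induced compact uncertainty set of Q-values.

```python
# Hedged sketch, assuming an additive epinet head q(s,a,z) = base_q(s,a) + g(z):
# contrasts ensemble-min pessimism (SAC-N) with an epistemic-index lower bound.
import numpy as np

rng = np.random.default_rng(0)

def ensemble_min_target(q_values):
    """SAC-N-style conservative target: elementwise minimum over N members.
    q_values has shape (N, num_actions)."""
    return np.min(q_values, axis=0)

def epinet_lower_bound(base_q, epinet_fn, num_indices=32):
    """Epinet-style pessimism: perturb the base Q-values with a z-dependent
    term for sampled epistemic indices z ~ N(0, 1); the set of perturbed
    Q-values plays the role of a compact uncertainty set, and its minimum
    is a pessimistic (robust) Q-estimate."""
    zs = rng.standard_normal(num_indices)
    perturbed = np.stack([base_q + epinet_fn(z) for z in zs])
    return perturbed.min(axis=0)

# Toy example: one state, three actions.
base_q = np.array([1.0, 2.0, 0.5])
weights = np.array([0.3, -0.2, 0.1])   # hypothetical epinet head parameters
epinet_fn = lambda z: weights * z      # z-dependent additive perturbation

# An "ensemble" generated from three fixed indices, for comparison.
ens = np.stack([base_q + epinet_fn(z) for z in (-1.0, 0.0, 1.0)])
print(ensemble_min_target(ens))            # conservative ensemble target
print(epinet_lower_bound(base_q, epinet_fn))  # robust uncertainty-set target
```

A single base network plus a small epinet head replaces the N independent critics: pessimism comes from the spread of the z-indexed uncertainty set rather than from maintaining a large ensemble.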
Primary Area: reinforcement learning
Submission Number: 17769