Robust Adversarial Policy Optimization Under Dynamics Uncertainty

09 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Reinforcement Learning, Robust Reinforcement Learning, Adversarial Reinforcement Learning
TL;DR: We propose RAPO, a dual-based robust RL framework that combines trajectory-level adversarial rollouts and model-level policy-sensitive sampling to close the theory–practice gap and improve generalization under dynamics uncertainty.
Abstract: Reinforcement learning (RL) policies often fail under dynamics that differ from those seen in training, a gap not fully addressed by domain randomization or existing adversarial RL methods. Distributionally robust RL offers a formal remedy but still relies on surrogate adversaries to approximate intractable primal problems, leaving blind spots that can cause instability and over-conservatism. We propose a dual formulation that directly exposes the robustness–performance trade-off. At the trajectory level, a temperature parameter arising from the dual is approximated with an adversarial network, yielding efficient and stable worst-case rollouts within a divergence bound. At the model level, we employ Boltzmann reweighting over dynamics ensembles, focusing on dynamics that are more adverse to the current policy rather than sampling uniformly. The two components act independently and complement each other: trajectory-level steering ensures robust rollouts, while model-level sampling provides policy-sensitive coverage of adverse dynamics. The resulting framework, robust adversarial policy optimization (RAPO), outperforms robust RL baselines, improving resilience to dynamics uncertainty and generalization to out-of-distribution dynamics while maintaining dual tractability.
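The submission's exact dual is not reproduced here, but for a KL divergence bound the worst-case trajectory return admits a standard dual that introduces precisely such a temperature. A sketch under that assumption, with P the nominal trajectory distribution, R(\tau) the return, \epsilon the divergence budget, and \beta the dual temperature:

\inf_{Q:\, D_{\mathrm{KL}}(Q \,\|\, P) \le \epsilon} \mathbb{E}_{Q}[R(\tau)] \;=\; \sup_{\beta \ge 0} \Big\{ -\beta \log \mathbb{E}_{P}\big[ e^{-R(\tau)/\beta} \big] \;-\; \beta \epsilon \Big\}.

Small \beta sharpens the adversary toward the worst trajectories, while large \beta approaches the nominal objective, which is one way such a dual makes the robustness–performance trade-off explicit.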
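For the model-level component, a minimal sketch of Boltzmann reweighting over a dynamics ensemble, assuming weights proportional to exp(-return / temperature); the names boltzmann_model_weights, policy_returns, and temperature are illustrative, not the paper's API:

import numpy as np

def boltzmann_model_weights(policy_returns, temperature):
    # Estimated return of the current policy under each ensemble model;
    # lower return means the model is more adverse to the policy.
    logits = -np.asarray(policy_returns, dtype=np.float64) / temperature
    logits -= logits.max()              # shift for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()      # sampling distribution over models

# Hypothetical usage: favor adverse models when picking rollout dynamics.
returns = np.array([120.0, 95.0, 60.0])   # per-model policy returns
probs = boltzmann_model_weights(returns, temperature=20.0)
model_idx = np.random.choice(len(returns), p=probs)

As the temperature grows this recovers uniform sampling over the ensemble; lowering it concentrates rollouts on the dynamics currently hardest for the policy.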
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 3334