Policy Regularization on Globally Accessible States in Cross-Dynamics Reinforcement Learning

Published: 01 May 2025, Last Modified: 18 Jun 2025. ICML 2025 Spotlight Poster. License: CC BY 4.0
Abstract: To learn from data collected in diverse dynamics, Imitation from Observation (IfO) methods leverage expert state trajectories based on the premise that recovering expert state distributions in other dynamics facilitates policy learning in the current one. However, Imitation Learning inherently imposes a performance upper bound on learned policies. Additionally, as the environment dynamics change, certain expert states may become inaccessible, rendering their distributions less valuable for imitation. To address this, we propose a novel framework that integrates reward maximization with IfO, employing F-distance-regularized policy optimization. This framework enforces constraints on globally accessible states—those with nonzero visitation frequency across all considered dynamics—mitigating the challenge posed by inaccessible states. By instantiating the F-distance in different ways, we derive two theoretical analyses and develop a practical algorithm called Accessible State Oriented Policy Regularization (ASOR). ASOR serves as a general-purpose module that can be incorporated into various RL approaches, including offline RL and off-policy RL. Extensive experiments across multiple benchmarks demonstrate ASOR's effectiveness in enhancing state-of-the-art cross-domain policy transfer algorithms, significantly improving their performance.
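
As a rough sketch of the objective the abstract describes (the symbols $\mathcal{S}_{\text{acc}}$, $d_\pi$, $d_E$, and $\lambda$ are notational assumptions for illustration, not the paper's exact formulation), the regularized problem can be written as

$$
\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t \ge 0} \gamma^{t}\, r(s_t, a_t)\right] \;-\; \lambda\, D_F\!\left(d_{\pi}\big|_{\mathcal{S}_{\text{acc}}} \,\Big\|\, d_{E}\big|_{\mathcal{S}_{\text{acc}}}\right),
$$

where $d_\pi$ and $d_E$ are the state visitation distributions of the learner and the expert, restricted to the set $\mathcal{S}_{\text{acc}}$ of globally accessible states, and $D_F$ denotes the F-distance used for regularization.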
Lay Summary: Modern reinforcement-learning (RL) agents often have to learn from data gathered in many different versions of the same world—think of a robot that is sometimes heavy, sometimes light, or a driving simulator where traffic density changes. Existing imitation-based methods assume that the expert visits roughly the same situations in every version, but this breaks down whenever some states become unreachable after the dynamics change. Our work pinpoints this mismatch as a key bottleneck and asks: what if we only imitate the parts of the world that stay reachable everywhere? We formalise these safe regions as globally accessible states—states that can still be visited no matter how the environment shifts. Building on this idea, we add a gentle steering force to any RL algorithm: we constrain the learner’s behaviour to stay close (in an information-theoretic F-distance) to the expert only on those accessible states while still maximising reward. The resulting plug-in module, ASOR (Accessible-State Oriented Regularisation), is implemented with a GAN-style discriminator that simply augments the reward signal—no new policy architecture required. We prove that this targeted regularisation guarantees smaller performance loss than previous alternatives, even with finite data. In practice, attaching ASOR to state-of-the-art baselines lifts scores across offline MuJoCo datasets, online robotic control, grid-world navigation and a chaotic Fall-Guys-style game, often by large margins. Because ASOR ignores misleading, now-inaccessible states, it makes agents more reliable and sample-efficient when reality refuses to stay still. In short, we turn a vulnerability—dynamics shift—into a strength, giving RL systems a principled way to focus on what they can actually reach.
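
Since the lay summary describes ASOR as a GAN-style discriminator that simply augments the reward signal on accessible states, the following is a minimal sketch of how such reward augmentation could look. The names (StateDiscriminator, augmented_reward, accessible_mask, lam) are illustrative assumptions, not identifiers from the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StateDiscriminator(nn.Module):
    """Classifies states as expert-like vs. learner-like (GAN-style)."""

    def __init__(self, state_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)  # raw logit per state


def augmented_reward(disc: StateDiscriminator,
                     state: torch.Tensor,
                     env_reward: torch.Tensor,
                     accessible_mask: torch.Tensor,
                     lam: float = 0.1) -> torch.Tensor:
    """Environment reward plus a discriminator bonus, applied only where
    accessible_mask is 1 (i.e., on states assumed globally accessible)."""
    with torch.no_grad():
        logit = disc(state).squeeze(-1)
        # log D(s) - log(1 - D(s)), computed stably from the logit
        bonus = F.logsigmoid(logit) - F.logsigmoid(-logit)
    return env_reward + lam * accessible_mask * bonus
```

Because the regularizer only reshapes the scalar reward, any off-policy or offline RL learner can consume the augmented signal without architectural changes, which is consistent with ASOR's plug-in framing.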
Primary Area: Reinforcement Learning
Keywords: Reinforcement Learning, Policy Transfer, Dynamics Shift, Policy Regularization
Flagged For Ethics Review: true
Submission Number: 8469