Keywords: Partially Observable Environment, Asymmetric Learning, Privileged Information, Privileged Critic, Convergence Analysis, Asymmetric Actor-Critic, Finite-Time Bound, Agent-State Policy, Finite-State Policy, Aliasing
TL;DR: We derive a finite-time bound for an asymmetric actor-critic algorithm and compare it with its symmetric counterpart, offering a justification for the effectiveness of asymmetric learning.
Abstract: In reinforcement learning for partially observable environments, many successful algorithms have been developed within the asymmetric learning paradigm. This paradigm leverages additional state information available at training time for faster learning. Although the proposed learning objectives are usually theoretically sound, these methods still lack a precise theoretical justification for their potential benefits. We propose such a justification for asymmetric actor-critic algorithms with linear function approximators by adapting a finite-time convergence analysis to this setting. The resulting finite-time bound reveals that the asymmetric critic eliminates error terms arising from aliasing in the agent state.
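For intuition, below is a minimal sketch of the kind of asymmetric actor-critic update the abstract refers to: a linear critic conditioned on the privileged environment state (available only during training) and a linear softmax actor conditioned on the agent state. The feature maps, dimensions, and step sizes are illustrative assumptions, not the paper's exact algorithm or bound.

```python
import numpy as np

# Sketch of an asymmetric actor-critic update with linear function approximation.
# Critic: V(s) = w @ phi(s), conditioned on the privileged state s.
# Actor: softmax policy over agent-state features psi(z).
# All shapes and feature maps below are assumptions for illustration.

d_state, d_agent, n_actions = 8, 6, 3      # assumed dimensions
w = np.zeros(d_state)                      # critic weights
theta = np.zeros((n_actions, d_agent))     # actor weights

def phi(state):        # privileged-state features (assumed identity)
    return state

def psi(agent_state):  # agent-state features (assumed identity)
    return agent_state

def policy(agent_state):
    logits = theta @ psi(agent_state)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def asymmetric_ac_update(state, agent_state, action, reward,
                         next_state, done,
                         gamma=0.99, alpha_critic=0.05, alpha_actor=0.01):
    global w, theta
    # TD error computed with the privileged critic V(s) = w @ phi(s)
    v = w @ phi(state)
    v_next = 0.0 if done else w @ phi(next_state)
    delta = reward + gamma * v_next - v

    # Critic: semi-gradient TD(0) step on privileged-state features
    w += alpha_critic * delta * phi(state)

    # Actor: policy-gradient step on agent-state features,
    # using the privileged TD error as the advantage estimate
    p = policy(agent_state)
    grad_log = -np.outer(p, psi(agent_state))
    grad_log[action] += psi(agent_state)
    theta += alpha_actor * delta * grad_log
```

In the symmetric variant, the critic would instead be a function of the agent state alone; the paper's finite-time analysis compares these two choices and shows that conditioning the critic on the privileged state removes the error terms caused by aliasing in the agent state.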
Confirmation: I understand that authors of each paper submitted to EWRL may be asked to review 2-3 other submissions to EWRL.
Serve As Reviewer: ~Gaspard_Lambrechts1
Track: Fast Track: published work
Publication Link: https://icml.cc/virtual/2025/poster/45909
Submission Number: 161